Principles of Good AI

As a strategic design agency, we craft socio-technical AI engagements that are both good for people and good for business.

Our clients continue to ask how AI plays a role in transforming products, services, and business models. These requests don’t fit neatly in a box: it has become abundantly clear that AI will radically alter nearly every aspect of our lives over the next decade. At Mad*Pow, we have always paid particular attention to the outcomes and impact of our work on empowering people through experiences. This time, however, we’re working at a much larger scale of change and with far greater transformation complexity.

In our exploration of engagement principles, we found many organizations attempting to address the same socio-technical challenges. Government agencies are actively identifying the ethics at stake in AI progress, from NIST’s AI Risk Management Framework to the Defense Innovation Unit (DIU) and the UN’s AI for Good, as are the academics and ethics professionals behind the Montreal Declaration for Responsible AI. Meanwhile, within industry, technologists in communities like Hugging Face, at startups like Waverly, and at conferences such as those hosted by Stanford HAI are helping shape a shared language and shared experiences for sensemaking to improve human conditions.

We have drawn on many of these voices and authors to define our own perspective. We propose that Good AI engagements drive strategic initiatives by offering a unique balance of trusted automation and human agency, maintained by organizational structures and sustained by adaptive behaviors.

Balancing automation with human capabilities as organizational processes.

Good AI is Beneficial, Responsible, and Sustainable

In previous articles, we have described the core values of AI to business in the coming decade and introduced concepts of Good AI. In this article, we articulate the principles by which Mad*Pow plans to work and collaborate to facilitate Good AI for our clients. 

We hope these principles will set the foundation as we (as an agency and as a society at large) begin to build new AI tools, operationalize new ways of working, and integrate continuously changing technology throughout the organization equitably. 

NOTE: These principles function as a system of interdependencies and work best when applied across contexts. For example, ‘continuous improvement’ applies to the perspective of what is beneficial to society as much as it does to the buttons on a screen. 

Good AI Principles 

  1. Measure impact using a systems thinking approach
    Scale, velocity, and efficiency are strengths of artificial intelligence technologies, and we should not limit their impact to a single metric. A systems view will support and demonstrate upstream, downstream, and lateral positive effects. 
  2. Leverage futures thinking to plan for long-term growth
    AI is built within closed product dimensions but exists within complex, ever-changing social contexts. While an AI solution may be appropriate today, there should be express consideration about how that technology impacts, and will be impacted by, ever-changing political, social, economic, technological, and legal contexts. 
  3. Deliberately evolve the balance between human control and automation
    The fear of losing control permeates most pop culture references to AI. Reframe that fear into opportunities for new experiences that reflect personal values and objectives. Give digital citizens co-creation and ownership rights to their own data, and balance automation with skill-building and continuous learning supported by organizational structures.
  4. Directly challenge and circumvent human bias
    The critiques of automated bias are warranted, but let’s look closer. Redlining and other historical injustices have been a direct result of human bias. With AI, we have the opportunity to remove the chance for human bias in critical decision-making situations like creditworthiness assessments. Already, algorithms are being created to check one another as an approach to rooting out deeply ingrained human bias. 
  5. Distribute leadership and responsibilities
    Centralized ownership of technology as rich and complex as what is on the horizon would be a missed opportunity at best. At worst, a small, most likely homogenous team would be making decisions for a tool that could impact billions. Organizing shared governance for AI further stabilizes and improves the product as a whole via democratic synthesis.
  6. Actively seek opportunities to generate new and better ways of knowing
    Today, we rely heavily on excellent data in small quantities and traumatized data in large quantities. Similarly, we have a stark contrast between qualitative and quantitative data, which we roughly combine to make sense of the world. The power of AI can blur the boundaries of quant and qual data at massive scales with high confidence. 
  7. Improve human capacity with creative augmentation
    Human judgment in complex value-based decision-making situations will be challenging to replicate in perpetuity. As arbiters of Good AI, we must find ways to scale AI based on the patterns and factors of human lived experiences with continuous feedback loops. This will protect the legitimacy of AI outcomes while achieving a radically new paradigm for individuals and society.
  8. Amplify aspirational human behaviors like honesty and courage
    The impact of social media on mental health over the past couple of decades is an example of how designing for clicks, i.e. “engagement” based on behavioral economics, fails to acknowledge or take accountability for participants’ well-being. This principle extends to complicated and complex behaviors like improving health and saving money. Nudges that prey on fear may work, but designing in ways that reflect how humans actually move toward well-being amplifies human agency.
  9. Develop dialogic capabilities by incentivizing cooperation and co-creation
    Current data capture often happens as a byproduct of engaging with another service or product and, unless structured around intent, ends up in a data swamp. Good AI should acknowledge, incentivize, and reward people with co-ownership for contributing to the collection, utilization, and sharing of their data. This principle creates space for meaning and relationships to emerge within teams, organizations, and communities.
  10. Continually evolve based on your organization’s purpose and values
    These principles are not enough on their own, and neither is any single development process. Bring cross-functional teams together to create a shared language. Regular audits and process-improvement methods grounded in organizational values will ensure that the outcomes generated at never-before-seen scope and scale are precisely what the organization aims to achieve.

Executing these principles requires exceptional organizational design and strategic leadership. Some organizations are already well equipped to work and communicate across teams, share ownership, and constantly learn. On the other hand, developing Good AI may be just the beginning of your organizational transformation. 

Learn more about utilizing Futures Thinking, Systems Thinking, and Innovation Design to explore principles and strategies for designing responsible AI solutions in the recording of our webinar. You’ll walk away with the inspiration for what's possible and some tactical approaches to guide decision-making around AI.

Designing Responsible AI Solutions Webinar

Thursday, October 6, 2022, Noon - 1pm
Rachael Acker & Jesse Flores

The lure of new technology such as Artificial Intelligence (AI) and Machine Learning (ML) seemingly promises humanity a magic bullet, a solution to all our everyday needs. However, we need to build resilient systems around a well-being economy rather than fix a broken one with expensive toys. 

As digital creators and technologists, we tell ourselves that technology is neutral, but that’s naive. 

In an ideal world, AI/ML should solve complex social needs, not create more communication filter bubbles like those on TikTok. AI/ML is not a solution in itself. Like current digital technologies, its value is contextual to human needs. As business leaders, technologists, and designers of digital experiences, we must create beneficial solutions with ethical goals for our tech to follow suit.

Contributed by
Jesse Flores
Senior Experience Strategist
Rachael Acker
VP, Experience Strategy & Research