Insights

Designing Resilient Systems with AI


The lure of new technologies such as artificial intelligence (AI) and machine learning (ML) seemingly promises humanity a magic bullet, a solution to all our everyday needs. This notion prompts organizations to focus time and resources on easily solvable problems rather than tackling systemic challenges that benefit society at large.

We’re trying to build a rocket ship to buy groceries down the street.

What we need instead is to build resilient systems around a well-being economy rather than trying to fix a broken one with expensive toys. This task requires substantial systemic behavior change, paired with AI technology that augments human expertise.

So, what is the ideal role of AI in society?* AI is a beneficial machine that will further humanity, a design medium for creating new experiences, and a change accelerator. We must design holistic, systemic AI experiences that benefit society, the climate, our youngest and oldest populations, and subsequent generations. This is about applying AI/ML technologies to create worldwide experiences that are as good for the planet as they are for its people.

Emily Bender of the University of Washington wrote in the Seattle Times (May 11, 2022): “Why are journalists and others so ready to believe claims of magical AI systems?... We should demand instead journalism that refuses to be dazzled by claims of ‘artificial intelligence’ and looks behind the curtain…. It behooves us all to remember that computers are simply tools. They can be beneficial if we set them to right-sized tasks that match their capabilities well and maintain human judgment about what to do with the output. If we mistake our ability to make sense of language and images generated by computers for the computers being ‘thinking’ entities, we risk ceding power — not to computers, but to those who would hide behind the curtain.”

*For the purposes of this article and the subsequent series, we use “AI” to encompass machine learning, algorithms, and NLP, not general AI or the notion of sentience.

We live in a VUCA (volatile, uncertain, complex, and ambiguous) environment. Our financial, health, and educational systems are struggling to adapt to our rapidly changing world, as we’ve all experienced in living through a global pandemic. Our current societal challenges are technical, economic, environmental, and deeply complex: a recession, a growing aging population, nursing shortages, challenges to reproductive rights, workplace well-being, mental health. The list goes on.

Entrepreneurs, innovators, and business leaders are taught to approach problem-solving through an analytical lens. Time and effort are spent diagnosing, analyzing, planning a solution, and then implementing it. Sure, this analytical approach simplifies decision-making to optimize for growth, comfort, and winning; however, the strategy doesn’t work in a VUCA world. It only adds more strain to our already fragile systems, blinding us to the potential impact of our “solutions” across ecosystems.

As individuals, we are always in pursuit of the shiny and new, like opening the box of a new iPhone or watching the likes grow on a TikTok video. So addicted are we to these feelings of desire, and to the need to be in a flow state, that we have wound up in an unending search for the new and novel.

This individualistic desire for privilege is what drives demand for engagement. Today, in most if not all product and service organizations, engagement is the core requirement for delivering solutions that are desirable, feasible, and viable.

Venn Diagram
Let’s find where the intersection of values meets the intersection of benefits to society. Illustration by Rachael Acker.

This simple yet effective leverage point has become a ubiquitous business strategy that has fueled persuasive technologies. This short-term dopamine-seeking reward loop is also what has fueled the growth of the UX industry. Our obsession with creating engaging digital experiences that reduce friction, meet people where they are, and optimize comfort has impacted social behaviors in ways we are not ready to face.

“Under immense pressure to prioritize engagement and growth, technology platforms have created a race for human attention that’s unleashed invisible harms to society.” – The Ledger of Harms, The Center for Humane Technology (1)

As digital creators and technologists, we tell ourselves that technology is neutral, but that’s naive. We need to wake up to the fact that our digital experiences have changed our social structures and are actively accelerating the shift of power and resources to those who don’t understand what people really need. 

Leaders and decision-makers need to recognize how far they’ve drifted from understanding the needs of people unlike them, and how seemingly small decisions can impact society at large in dangerous, harmful ways.

The collective “we” needs to shift decision-making power back to local, marginalized, and disenfranchised communities to create space for agency over our own lives. We need to take the time to reflect on hard questions about emerging systems that predict and automate decisions on our behalf. We need to apply strategic friction to AI-driven systems to benefit something greater than ourselves. We have a responsibility as citizens of this planet, and we must become comfortable with a certain level of discomfort for the sake of our collective future.

Water Bottle Comparison
Prioritizing high-friction systems for the benefit of the collective isn’t a new concept; let’s apply it to holistically beneficial value drivers. Illustration by Rachael Acker.

Let’s be clear: humans make decisions, not AI. Humans are augmented with tools that support decision-making, and AI models are AI-infused super tools (2); they are NOT autonomous, intelligent teammates, even if they behave in human-like ways.

When we shift from engaging solutions to beneficial ones, conversations move from “barriers to engagement” to “coping mechanisms for the stressors and tensions of a VUCA world,” and from “desirable solutions for individual consumption” to “solutions that build resilient systems.” To better leverage AI in our lives, these are the kinds of questions we should be asking ourselves:

  • Should AI only be used for the common good?
  • Who is accountable when AI causes harm?
  • Can I trust the reason behind why AI does what it does?
  • How will I know when AI is fair and does not discriminate against me?
  • How does AI respect human values and align decisions to what I value?

As business leaders, technologists, and designers of digital experiences, we must create beneficial solutions with ethical goals that shift the collective mindset from an extractive attention economy to a salutogenic well-being economy.

In this insight series and in the culminating webinar on October 6, we'll explore design principles and experience strategies for implementing responsible AI technologies and what organizations need to do to create beneficial solutions that are good for people and good for business.


  1.  https://ledger.humanetech.com/
  2.  Shneiderman, B. (2022). Human-Centered AI. Oxford University Press.
Contributed by
Rachael Acker
VP, Experience Strategy & Research