AI has a resilience problem. Designers and researchers can help fix it

2012. I walk out of a gastroenterologist’s office with a brochure titled “Your Life With Ulcerative Colitis.”

What the brochure doesn’t say: A month later, I will wake up on the day of a critical midyear design presentation feeling too nauseated to leave my apartment, and will have to spend weeks at my parents’ house, where I will miss several more midterms. A year later, I’ll stand at a boarding gate and feel too sick to take a five-hour flight to meet with potential graduate school advisers. I’ll soon learn that, for me, these won’t be one-offs. Instead, I’ll live a life of constant flux, impossible to plan for.

Desperate for some control as I push through academia, I turn to tech products. But technology can’t help me. Digital tools excel at routines, but falter at exceptions. I can schedule weeks of meetings in a few clicks, but when I’m unwell, I’m copy-pasting the same cancellation message a dozen times. My personal-finance app keeps me on track, but only until an urgent-care bill throws things off. When my fitness tracker chastises me for not closing my rings during a particularly brutal flare-up, I shove it into my junk drawer. Technology is failing me when I need it the most.

Happy paths

2016. I join Big Tech, working as a user researcher in early-stage and AI technology. Two things become immediately clear.

First, my story is far from unique. Anecdotes from many hundreds of user interviews reflect lives riddled with chaos and disruption. Change—unplanned and planned—is the norm.

Second, consumer products are largely designed for “happy paths.” A clear-cut problem is solved by a superhero technology, resulting in a favorable outcome that is tied off with a neat bow. For the sake of clarity, efficiency, and technical ease, the zigzag realities of lives are often sanitized into an idealized arc. We trot out these squeaky clean stories as “hero use cases” for a product idea—first to convince ourselves, then our executives, and, finally, our users.

Today’s explosion of consumer-facing GenAI products is built with the same recipe. We get heartstring-tugging stories with just enough complexity to feel real, without any of the mess. A dad uses AI to prepare for a job interview while reminiscing about parenthood. A parent brings a child’s imaginary creature to life in a custom picture book. Some brands try to incorporate more chaotic realities (a storm hits restaurant patio seating) only to portray absurd overdependence on AI (waiters leave their customers drenched because an AI agent doesn’t reseat them indoors).

If you’re like me, these ads make you want to scream: “You’re standing in the middle of the kitchen. How are your kids not interrupting your conversation with AI 27 times?” But in contrast to the “hero use case,” taking kid snack breaks and asking AI to repeat itself over the noise of toddler screams are often cordoned off as “edge cases” in product development. The implication: These occurrences are rare.

But they aren’t. Human journeys are not straight lines. They are dynamic, defined by change, interruptions, and curveballs. Some 60% of Americans reported an unexpected expense in the past year, while 42% don’t have an emergency fund greater than $1,000. In households with two or more children, someone has a viral infection more than half the time. And an estimated 28% of work time each year is lost to distractions.

When technology isn’t resilient to this reality, it breaks—sometimes catastrophically. Like when a Florida teen dies by suicide after his lengthy conversations with a Character.ai chatbot turn darkly romantic. When AI-powered cameras mounted on public buses mistakenly ticket thousands of legally parked vehicles in New York because they fail to recognize alternate-side parking zones. Or when AI weather models fail to predict the worst storms because examples of extreme weather are missing from their training data.

These outcomes are extreme, but the pathways leading there are deeply ordinary, broken by nascent technology that isn’t resilient to the gritty reality of human behavior. Sometimes the catalyst stems from the tech itself, like a security vulnerability. Other times it’s independent of the technology, like a mental health crisis. But in every case, the technology was not resilient to changes in context.

AI’s broken promise

Years ago, you could blame the technology itself as the limiting factor. But AI should, ideally, thrive on this sort of complexity—using its superpowers of pattern recognition, synthesis, and triangulation across thousands of data points about users and their environment. GenAI has introduced a new frontier of deep reasoning and humanlike interaction that should make the technology more tractable and transparent.

AI is uniquely positioned to help people anticipate and recover from change, the kind they may not have seen coming. Yet the Character.ai system didn’t raise the alarm when a conversation overtly turned dangerous, much less recognize patterns suggesting it was headed that way. And on issuing its 7,000th ticket in a single day, the MTA’s system didn’t flag that this was an unusually large number of violations for one route.
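Neither miss required exotic modeling to catch. As a purely illustrative sketch (the counts, window, and threshold below are made up, not drawn from the MTA system), even a simple statistical guardrail could hold a surge like the ticketing spike for human review:

```python
from statistics import mean, stdev

def volume_is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag a daily count that sits far outside the recent norm for this route."""
    if len(history) < 7:              # too little history to judge
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:                    # flat history: any jump is worth a look
        return today != mu
    return (today - mu) / sigma > z_threshold

# A route that normally issues ~300 tickets a day suddenly issues 7,000.
recent_counts = [310, 280, 295, 330, 305, 290, 315]
if volume_is_anomalous(recent_counts, today=7000):
    print("Unusual violation volume: hold new tickets for human review.")
```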

It’s never easy to deal with the complex behavior of humans and societies. But when we keep designing to make already great lives 1% better, we are perpetuating a specific type of harm—one that happens when the people designing the technology aren’t considering the real ways it might be used.

As UX practitioners, we are uniquely positioned to start the conversation about how to change this. To move toward an AI UX rooted in resilience, we’ll need to shepherd at least three main shifts in the way our products are designed.

1. Shift the user stories we tell—which directly map to the problems we choose to solve. UX must choose to foreground the hard, complex story. We all have one: a multigenerational household with life-stage changes, moves across the country, divorce, job loss, a chronic illness. Right now, a key barrier to centering these stories is that they extend ideation cycles, which is uncomfortable in an increasingly launch-first-or-perish climate. As a result, cleaner stories, like the product narratives described earlier, win out.

To break this cycle, UX can introduce complex user stories to product teams starting with ideation and continuing through prototype and concept testing—especially stories that cut horizontally across product ecosystems. This requires creating a new canon: an accessible taxonomy of the types of complexity, curveballs, and changes we can easily pull from. Such a taxonomy might take the form of brainstorming prompts, user journey templates, or a card deck or visualization used in sprints, as sketched below. This cracking open will take time, but the more we tell these stories, the easier they will roll off the tongue, and the more normalized they will become.
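To make that concrete, here is one hypothetical shape such a taxonomy could take: a lightweight “card deck” a sprint team could draw from. The categories, events, and prompts are illustrative, not canonical:

```python
from dataclasses import dataclass

@dataclass
class Curveball:
    """One card in a shared deck of disruptions to stress-test product ideas against."""
    category: str   # e.g. health, finances, environment, relationships
    event: str      # the disruption itself
    prompt: str     # the question to ask of any "hero use case"

CURVEBALL_DECK = [
    Curveball("health", "chronic illness flare-up",
              "What happens to every scheduled commitment when the user is suddenly unwell?"),
    Curveball("finances", "urgent-care bill",
              "How does the product respond when the budget is blown mid-month?"),
    Curveball("environment", "storm upends the evening's plans",
              "Can the product re-plan without being asked, and without overreaching?"),
]

# In a design sprint, draw a card and run the concept through it.
for card in CURVEBALL_DECK:
    print(f"[{card.category}] {card.event} -> {card.prompt}")
```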

2. Shift how we leverage user data in AI-powered products. Today, user data collected by companies—while wide-ranging—isn’t always curated or connected well. Most users, particularly younger generations, have resigned themselves to data collection and don’t mind it, but also don’t understand how the data is used or whether it benefits them. This is not an argument to collect more data. Rather, it’s a call to connect existing data for more meaningful, tangible user benefits, like helping navigate blind spots and complexity.

Consider a simple example: Ann’s AI agent has access to a calendar app where she has blocked off time for a post-work run, a weather app that shows unexpected evening rain showers, and a maps app that she frequently uses to navigate to a yoga studio. The agent can now surface a timely suggestion: move meetings so the run shifts to earlier in the day, or find a class at the yoga studio during the rained-out slot. By reflecting how people really use their technology, this sort of cross-product dialogue and synthesis can leverage AI and user data to unlock resilience in the face of change.
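The synthesis itself can be almost trivially simple once the data is connected. A minimal sketch of Ann’s scenario, with entirely hypothetical app data and a made-up studio name:

```python
from datetime import datetime

# Hypothetical signals from apps Ann has already connected to her agent.
calendar = [{"title": "Post-work run",
             "start": datetime(2025, 6, 3, 18, 0),
             "flexible": True}]
forecast = {datetime(2025, 6, 3, 18, 0): "rain"}   # hour -> condition
habits = {"frequent_destinations": ["Riverside Yoga Studio"]}

def suggest_alternatives(calendar, forecast, habits):
    """Cross-reference flexible plans against the forecast and offer a fallback."""
    suggestions = []
    for event in calendar:
        if event["flexible"] and forecast.get(event["start"]) == "rain":
            suggestions.append(
                f"Rain is forecast during '{event['title']}'. Move it earlier, "
                f"or book a class at {habits['frequent_destinations'][0]} instead?")
    return suggestions

for suggestion in suggest_alternatives(calendar, forecast, habits):
    print(suggestion)
```

The hard part is not the logic; it’s the plumbing and the consent model that let three separate products speak to one another on the user’s behalf.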

3. Shift away from traditional definitions of “seamlessness” and “magic moments” toward ones that gracefully embrace failure, meaningful friction, and deep, explicit user feedback. AI advancements tend to tempt product teams to remove all friction and present users with auto-magical solutions to needs they weren’t even aware of, from hyper-personalized AI-driven ads to “smart” nudges on food and shopping apps. Common success metrics used today reflect the value we place on frictionless experiences: fewer clicks, greater session length, engagement with automation features, fewer user-submitted comments. This can cause a misleading overreliance on implicit behavioral signals that don’t always reflect real intent.

Take the example of an in-app pop-up: A user might spend a long time viewing it, even clicking on a link—not because they find it useful but because they can’t find the exit. Even when users do provide explicit feedback, it’s often not in a form that can be interpreted meaningfully, leading to undesired outcomes. Think, for example, of how OpenAI’s models grew sycophantic after thumbs-up ratings on responses were used as a signal to steer the chatbot further in that direction.

Instead, how might we offer users more ways to provide granular feedback that sheds light not only on the “what” but also the “why”? This is meaningful friction: it empowers users to make their unique human context better understood while still harnessing the beyond-human capabilities of AI. One could argue that this, in fact, is the more magical experience.
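One hypothetical shape for that feedback is a structured payload that pairs the binary “what” with an optional, user-selected “why.” The field names and reason categories below are illustrative, not drawn from any shipping product:

```python
from dataclasses import dataclass
from enum import Enum

class Reason(Enum):
    """The 'why' behind a rating: one tap beyond a binary thumbs up/down."""
    NOT_RELEVANT = "not relevant to me"
    BAD_TIMING = "right idea, wrong time"
    CONTEXT_CHANGED = "my situation changed"
    TOO_PUSHY = "didn't want a nudge here"

@dataclass
class Feedback:
    suggestion_id: str
    accepted: bool
    reason: Reason | None = None   # optional structured 'why'
    note: str = ""                 # free text for when the taxonomy falls short

# A bare thumbs-down would have taught the system nothing about context.
fb = Feedback("run-reschedule-42", accepted=False,
              reason=Reason.CONTEXT_CHANGED,
              note="Flare-up today; skip workout nudges this week.")
print(fb)
```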

Finally, the pursuit of seamless perfection risks underplaying the shortcomings of AI itself—misunderstood accents, factual inaccuracies, biased imagery. These are a function of the technology, and are bound to happen. UX needs to treat these as predictable breaking points in the technology, build frameworks to classify them, and design intentionally with them as part of the user narrative.
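Such a framework doesn’t need to be elaborate to be useful. As a sketch, assuming only the three failure modes named above, each class can map to a designed recovery path rather than a dead end:

```python
from enum import Enum, auto

class AIFailure(Enum):
    """Predictable breaking points to design for, not around."""
    MISHEARD_INPUT = auto()   # e.g. a misunderstood accent
    FACTUAL_ERROR = auto()    # hallucinated or outdated information
    BIASED_OUTPUT = auto()    # skewed imagery or language

# A hypothetical mapping from failure class to a designed recovery path.
RECOVERY_UX = {
    AIFailure.MISHEARD_INPUT: "Confirm what was heard; make correction one tap away.",
    AIFailure.FACTUAL_ERROR: "Surface sources; invite the user to challenge the claim.",
    AIFailure.BIASED_OUTPUT: "Disclose limits; offer regeneration with user guidance.",
}

for failure, response in RECOVERY_UX.items():
    print(f"{failure.name}: {response}")
```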

Of course, it’s far simpler to sketch these solutions than to implement them, but if AI is to work well for real-world problems, we need to tackle real-world complexity head-on. UX is in a powerful position to shift these mindsets. As it has done for domains like accessibility and product inclusion, UX can redefine the problems and narratives that emerging technology is built for, and reshape experiences to accommodate product and user realities in support of resilience.

Are we brave enough to get into the messy weeds and do it?
