Provocation

We live in a world that increasingly lacks empathy, a shift visible in how we interact with one another. Digital exchanges land more harshly. Public discourse feels more polarized. Small misunderstandings escalate fast. These outcomes are not inevitable. They emerged from systems and services designed to prioritize speed and efficiency, often at the expense of pause, context, and care.

As AI becomes increasingly embedded in our everyday interactions – from drafting messages and moderating conversations to offering advice and standing in as emotional support – we approach new heights of efficiency. Using AI for these interactions reduces friction and accelerates responses, but it also unintentionally eliminates the moments that invite reflection and accountability, the very moments that underpin our capacity for empathy. Without these moments of pause, our ability to understand and care for one another will gradually atrophy.

AI does not have to accelerate this erosion of empathy. Designed intentionally, it can amplify empathy rather than diminish it. It can help people slow down rather than disengage, reflect rather than react, and strengthen rather than replace the human capacities that make empathy possible.

Building empathy is predicated on repeated practice: making sense of situations, gauging the impact of our own behavior, and deciding with intention. Empathy grows and strengthens when people have the space to practice these skills, and over time it becomes a positive habit that influences and benefits the collective.

Our framework visualizes how individual skills lay the foundation for a society with an expanded capacity for empathy. At the individual level, the framework is grounded in a set of core human skills that build on one another as people move through the phases.

As individuals strengthen these skills, our society will be able to respond to disagreement and difference with more understanding and compassion.

Making sense of the context

This phase is about orientation. Before people can act thoughtfully, they need context—an understanding of the situation, the forces at play, and what’s at stake.

Core skills

Self-reflection: Noticing one’s own actions, assumptions, and role in a situation

Understanding: Grasping broader context, tradeoffs, and consequences of one’s actions

Awareness of impact and action

In this phase, people begin to recognize their own role within a situation and how their actions may affect others. This awareness extends beyond intent to actual impact.

Core skills

Openness: Being curious, questioning assumptions, and considering alternative perspectives

Responsibility: Owning choices and their effects on others

Decision-making with intention

This phase marks a shift to taking responsibility for one’s decisions.

Core skills

Consideration: Anticipating and taking into account how one’s words and actions may make others feel

Connection: Communicating thoughtfully and repairing misalignment

Responsiveness: Acting proportionally and appropriately in the moment and adjusting on the go

As technologies like AI become increasingly present in our lives, there is an opportunity for AI to build empathy rather than erode it, as it tends to do today.

Today, when AI is invoked in situations that require empathy, it is designed to behave more like an active participant, generating content or giving advice. We can imagine a future in which AI communicates on our behalf and makes decisions for us. And while this future is often framed as a relief, it comes with tradeoffs. When expression and interaction are outsourced, we lose opportunities to practice capacities such as reflection, understanding, and responsibility.

Instead of letting AI perform empathy on our behalf, as it does today, we asked: how can AI participate in a more meaningful way, more like a coach, to help people build their capacities for empathy? When we shift the focus of AI from content generation to coaching, a different, better future emerges. In this future, AI creates space for sensemaking and awareness, supports more intentional decision-making, and reinforces positive habits over time, guiding individuals on their journeys to become more empathetic.

In a series of short vignettes rooted in everyday human situations, we explored how AI can help create fertile conditions for practicing empathy. Notice that in each scenario, AI does not resolve the situation for the person. Instead, it slows the moment down, surfaces context, or creates space for reflection, supporting the human in making a more intentional, empathetic choice.

These vignettes serve as provocations and conversation starters. Each raises questions about surveillance, privacy, and other issues that, while important, are not the focus here. Finally, while these vignettes sketch ways AI might be built to amplify empathy, we recognize that many non-AI, non-tech approaches can do the same.

Illustration of a person lying in bed, looking tired and holding a phone, alongside the text “Would you let AI write your breakup text?” which introduces a scenario about using AI in emotionally sensitive situations.
Close-up illustration of a tired-looking person lying in bed and holding a phone, conveying emotional fatigue.

Conversations can be uncomfortable.

You’ve been dating someone for three months. You know it’s not working out, but you don’t know what to say. You really don’t want to hurt the person you’re dating. So you ask AI to write a breakup message for you.

Currently, standard AI behavior would generate the text without hesitation. Boom. Done. Sent. No discomfort necessary.

But the unintended consequence? We’re training ourselves to outsource emotional labor.

Avoiding that discomfort means you never learn how to navigate it.

And next time? You’ll likely consider outsourcing it again.

Responsibility requires owning our words.

Discomfort is not a flaw. It is part of the emotional labor of clarifying what you feel, taking responsibility for your decision, and choosing words that reflect care.

AI can help us process what we want to say—while keeping the words ours.

Rather than acting as a shortcut, AI helps us reflect on what we want to say and how, without taking ownership of the words themselves.

Illustration of a hand holding a smartphone displaying the message, “I can help you process what you want to say, but the words should be yours,” emphasizing AI as a supportive tool.
Illustration of a frustrated person typing angrily on a laptop, with furrowed brows and tense posture, alongside the text “Can AI help you be a nicer person?” introducing a scenario about emotional reactions and behavior online.
Close-up illustration of a visibly angry person typing on a laptop, with a tense expression and clenched posture.

Social media can make your blood boil.

You’re browsing social media when you come across a viral post that’s politically charged—and a comment that is especially irritating. You start typing an emotional “clap back” so the commenter feels as dismissed as you do.

Your cursor hovers over ‘post.’

Currently, platform algorithms surface inflammatory content because it drives engagement, and engagement drives revenue.

The result? An environment that consistently rewards fast, emotional responses. In this context, even brief exchanges can escalate quickly.

But learning to pause and choose how you respond, rather than just reacting, resists the systems that reward escalation.

Less reactivity allows for intentional response.

AI can interrupt reactive escalation without demanding that you feel differently. You may still disagree, but you are supported in choosing a proportionate response that reduces harm rather than amplifying it.

AI can intervene at moments of escalation to slow reaction and surface more intentional response options.

Over time, these interruptions help us recognize patterns and internalize more reflective responses when the stakes are high.

Illustration of an AI chat-style message that reads, “Checking in… This thread is escalating. Posting this will likely intensify targeting, and you’ll probably get attacked too,” followed by suggested options to rewrite a comment by challenging the idea instead of the person, setting a boundary, or expressing strong disagreement without dehumanizing language.
Illustration of a visibly irritated person holding paperwork while standing in a line, with other people waiting in the background, alongside the text “Could AI help you keep your cool?” introducing a scenario about managing frustration in stressful, everyday situations.
Close-up illustration of a frustrated person holding paperwork, with a tense expression and furrowed brows.

Stress can overwhelm our better instincts.

You’re at the Department of Motor Vehicles (DMV) for the third time in three weeks. This time, it’s a different clerk and a different set of missing paperwork.

The fluorescent lights, endless lines, and loud noises overload your senses. Last time you were here, you said some mean things to the clerk that you still regret.

You don’t notice it at first: your heart is racing, your body temperature rising, and your fists clenched before you’ve even interacted with anyone at the DMV.

What you need now is to slow down and notice what’s happening in your body so you don’t say something you’ll regret, again.

This simple act of awareness is the first step to regulating your emotions and approaching a fraught situation more thoughtfully.

Awareness enables us to act responsibly.

By tracking signals like heart rate and location, AI can surface patterns that make moments of heightened emotion easier to recognize.

AI can bring awareness to how we react in specific situations and offer techniques for coping with challenging emotions in healthier ways.

That awareness supports regulation before interaction, increasing the likelihood of responsible action.

Illustration of a smartwatch displaying a high heart rate and a calming prompt that reads, “You seem stressed. I invite you to pause and take three belly breaths with me,” with a “Start” button, suggesting AI-supported stress regulation.
Illustration of a tense confrontation between two people arguing at a bus stop while a third person looks on, alongside the text “Can AI help you be a responsible bystander?” introducing a scenario about witnessing conflict and deciding how to respond.
Close-up illustration of a concerned bystander adjusting smart glasses that emit a subtle glow, suggesting AI assistance.

Tense situations can be hard to interpret.

You’re standing at a crowded bus stop. You notice two people arguing—raised voices, expressive faces, and lots of gesturing. You want to do something, but questions surface immediately:

What’s happening? Is it safe to intervene? If so, what should I do?

In moments like this, today’s AI systems jump to conclusions before we even have a chance to observe, think, and form our own interpretations.

Drawing on spoken language and body cues, these systems often translate complex interactions into simplified labels such as ‘risk’ or ‘threat’.

By deciding what a moment “means,” AI interrupts the human work of noticing and understanding.

Understanding begins with observation, not assumption.

By helping us observe and understand before interpreting people’s behavior, AI acts as an unbiased guide rather than an informant.

It can guide our attention to relevant facets of a situation and engage our critical observation skills before we decide whether to act.

AI supports careful observation, making space for more informed human judgment to unfold.

Designed this way, AI helps us reflect on our assumptions and decide whether—and how—to engage with care.

Illustration of a bystander’s view through smart glasses, showing two people in conflict and a message that reads, “Someone may need support. If it feels appropriate, I can help you consider what may be happening before deciding to respond,” suggesting AI support for thoughtful bystander intervention.
Illustration of two people in conflict: a woman holding a baby with a tense expression, and a man gesturing defensively, alongside the text “Could AI help you consider all points of view?” introducing a scenario about navigating differing perspectives.
Illustration of two caregivers standing back to back, looking away from each other with tense expressions—one holding a baby and the other with arms crossed.

When exhaustion collides, perspective narrows.

You recently welcomed a new baby, and now you are both exhausted. One of you talks about how hard the nights have been, but the other bristles, feeling unseen for their own sacrifices.

Voices rise… Suddenly, the argument isn’t about sleep at all—it’s about whose exhaustion counts.

These moments are universal: miscommunication sparks conflict, and “I’m struggling” becomes “I’m struggling more.” Both of you retreat into your respective corners, seeking validation for your own perspective or preparing a case for the next round of the argument, causing further division.

Today, most AI systems are designed for single-user input, affirming a single perspective at a time.

Prone to flattery, these systems often reinforce your point of view without challenging it.

The unintended consequence is subtle but significant: when we turn to AI instead of each other, we become more entrenched in our own experience and further from understanding someone else’s.

Openness to multiple perspectives enables shared understanding.

If AI accepted input from all sides rather than a single contributor, it could help hold each person’s experience in view without forcing them into competition.

When multiple perspectives are visible at the same time, conflict no longer revolves around whose experience matters more. We’re better able to choose engagement that acknowledges difference, rather than defaulting to defensiveness.

By making space for more than one person at a time, AI can help us engage with one another without competing for validation.

Illustration of a voice assistant device on a table displaying a message that reads, “It sounds like you’re both exhausted but in different ways. If you’d like, I can help you slow down and hear one another’s experience side by side,” suggesting AI support for mutual understanding.

What have you noticed about the AI systems being built and used around you? How do they diminish or encourage our capacity for empathy?

The products, systems, and services we design can either expand our capacity for empathy or make it easier to bypass altogether. If we choose empathy, how might we design our systems to encourage greater noticing, reflection, accountability, and care in our responses?