Provocation

We live in a world that increasingly lacks empathy, visible in how we interact with one another. Digital exchanges hit more harshly. Public discourse feels more polarized. Small misunderstandings escalate fast. These outcomes are not inevitable. They emerged from systems and services designed to prioritize speed and efficiency, often at the expense of pause, context, and care.

As AI becomes increasingly embedded in our everyday interactions, we approach new heights of efficiency – from drafting messages and moderating conversations to offering advice and standing in as emotional support. Using AI for these interactions reduces friction and accelerates response, but it also unintentionally eliminates the moments that invite reflection and accountability, which underpin our capacity for empathy. Without these moments of pause, our ability to understand and care for one another will gradually atrophy.

AI does not have to accelerate this erosion of empathy. Designed intentionally, it can amplify empathy rather than diminish it. It can help people slow down rather than disengage, reflect rather than react, and strengthen rather than replace the human capacities that make empathy possible.

Building empathy is predicated on a repeated practice of sensemaking, gauging the impact of our own behaviors, and intentional decision-making. Empathy grows and strengthens when people have the space to practice these skills and eventually becomes a positive habit that influences and benefits the collective.

Our framework visualizes how individual skills set the foundation for a society with a more expanded capacity for empathy. At the individual level, the framework is grounded in a set of core human skills that build on one another as people move through the phases.

As individuals strengthen these skills, our society will be able to respond to disagreement and difference with more understanding and compassion.

Making sense of the context

This phase is about orientation. Before people can act thoughtfully, they need context—an understanding of the situation, the forces at play, and what’s at stake.

Core skills

Self-reflection: Noticing one’s own actions, assumptions, and role in a situation

Understanding: Grasping broader context, tradeoffs, and consequences of one’s actions

Awareness of impact and action

In this phase, people begin to recognize their own role within a situation and how their actions may affect others. This awareness extends beyond intent to actual impact.

Core skills

Openness: Being curious, questioning assumptions, and considering alternative perspectives

Responsibility: Owning choices and their effects on others

Decision-making with intention

This phase marks a shift to taking responsibility for one’s decisions.

Core skills

Consideration: Anticipating and taking into account how your words and actions may make other people feel

Connection: Communicating thoughtfully and repairing misalignment

Responsiveness: Acting proportionally and appropriately in the moment and adjusting on the go

As technologies like AI become increasingly present in our lives, there is an opportunity for AI to build empathy rather than erode it, as it tends to do today.

Today, when AI is invoked in situations that require empathy, it is designed to behave more like an active participant, generating content or giving advice. We may assume a future in which AI communicates on our behalf and makes decisions for us. And while this future is made to feel like a relief, it comes with tradeoffs. When expression and interaction are outsourced, we lose opportunities to practice capacities such as reflection, understanding, and responsibility.

Instead of having AI perform empathy on our behalf, as it does today, we asked: how can AI participate more meaningfully, more like a coach, helping people build their capacity for empathy? When we shift the focus of AI from content generation to coaching, a different, better future emerges. In this future, AI creates space for sensemaking and awareness, supports more intentional decision-making, and reinforces positive habits over time, guiding individuals on their journeys to become more empathetic.

In a series of short vignettes rooted in everyday human situations, we explored how AI can help create fertile conditions for practicing empathy. Notice that in each scenario, AI does not resolve the situation for the person. Instead, it slows the moment down, surfaces context, or creates space for reflection, supporting the human in making a more intentional, empathetic choice.

These vignettes serve as provocations or conversation starters. Each of these vignettes raises questions about surveillance, privacy, and other issues that are not the focal point here. Finally, while these vignettes sketch ways AI may be built to amplify empathy, we recognize that there are many non-AI, non-tech solutions to amplify empathy.

Illustration of a person lying in bed, looking tired and holding a phone, alongside the text “Would you let AI write your breakup text?” which introduces a scenario about using AI in emotionally sensitive situations.
Close-up illustration of a tired-looking person lying in bed and holding a phone, conveying emotional fatigue.

Conversations can be uncomfortable.

You’ve been dating someone for three months. You know it’s not working out, but you don’t know what to say. You really don’t want to hurt the person you’re dating. So you ask AI to write a breakup message for you.

Currently, standard AI behavior would generate the text without hesitation. Boom. Done. Sent. No discomfort necessary.

But the unintended consequence? We’re training ourselves to outsource emotional labor.

Avoiding that discomfort means you never learn how to navigate it.

And next time? You’ll likely consider outsourcing it again.

Responsibility requires owning our words.

Discomfort is not a flaw. It is the emotional labor of clarifying what you feel, taking responsibility for your decision, and choosing words that reflect care.

AI can help us process what we want to say—while keeping the words ours.

Rather than acting as a shortcut, AI helps us reflect on what we want to say and how, without taking ownership of the words themselves.

Illustration of a hand holding a smartphone displaying the message, “I can help you process what you want to say, but the words should be yours,” emphasizing AI as a supportive tool.
Illustration of a frustrated person typing angrily on a laptop, with furrowed brows and tense posture, alongside the text “Can AI help you be a nicer person?” introducing a scenario about emotional reactions and behavior online.
Close-up illustration of a visibly angry person typing on a laptop, with a tense expression and clenched posture.

Social media can make your blood boil.

You’re browsing social media when you come across a viral post that’s politically charged—and a comment that is especially irritating. You start typing an emotional “clap back” so the commenter feels as dismissed as you do.

Your cursor hovers over ‘post.’

Currently, platform algorithms surface inflammatory content because it drives engagement, and engagement drives revenue.

The result? An environment that consistently rewards fast, emotional responses. In this context, even brief exchanges can escalate quickly.

But learning to pause and choose how you respond, rather than just react, resists systems that reward escalation.

Less reactivity allows for intentional response.

AI can interrupt reactive escalation without demanding emotional alignment. You may still disagree, but you are supported in choosing a proportionate response that reduces harm rather than amplifies it.

AI can intervene at moments of escalation to slow reaction and surface more intentional response options.

Over time, these interruptions help us recognize patterns and internalize more reflective responses when the stakes are high.

Illustration of an AI chat-style message that reads, “Checking in… This thread is escalating. Posting this will likely intensify targeting, and you’ll probably get attacked too,” followed by suggested options to rewrite a comment by challenging the idea instead of the person, setting a boundary, or expressing strong disagreement without dehumanizing language.
Illustration of a visibly irritated person holding paperwork while standing in a line, with other people waiting in the background, alongside the text “Could AI help you keep your cool?” introducing a scenario about managing frustration in stressful, everyday situations.
Close-up illustration of a frustrated person holding paperwork, with a tense expression and furrowed brows.

Stress can overwhelm our better instincts.

You’re at the Department of Motor Vehicles (DMV) for the third time in three weeks. This time, it’s a different clerk and a different set of missing paperwork.

The fluorescent lights, endless lines, and loud noises overload your senses. Last time you were here, you said some mean things to the clerk that you regret.

You don’t notice it at first: your heart is racing, your body temperature rising, and your fists clenched before you’ve even interacted with anyone at the DMV.

What you need now is to slow down and notice what’s happening in your body so you don’t say something regretful, again.

This simple act of awareness is the first step to regulating your emotions and approaching a fraught situation more thoughtfully.

Awareness enables us to act responsibly.

By tracking signals like heart rate and location, AI can surface patterns that make moments of heightened emotion easier to recognize.

AI can bring awareness to how we react in specific situations and offer techniques for coping with challenging emotions in healthier ways.

That awareness supports regulation before interaction, increasing the likelihood of responsible action.

Illustration of a smartwatch displaying a high heart rate and a calming prompt that reads, “You seem stressed. I invite you to pause and take three belly breaths with me,” with a “Start” button, suggesting AI-supported stress regulation.
Illustration of a tense confrontation between two people arguing at a bus stop while a third person looks on, alongside the text “Can AI help you be a responsible bystander?” introducing a scenario about witnessing conflict and deciding how to respond.
Close-up illustration of a concerned bystander adjusting smart glasses that emit a subtle glow, suggesting AI assistance.

Tense situations can be hard to interpret.

You’re standing at a crowded bus stop. You notice two people arguing—raised voices, expressive faces, and lots of gesturing. You want to do something, but questions surface immediately:

What’s happening? Is it safe to intervene? If so, what should I do?

In moments like this, AI systems jump to conclusions before we even have a chance to observe, think, and form our own interpretations.

Drawing on spoken language and body cues, these systems often translate complex interactions into simplified labels such as ‘risk’ or ‘threat’.

By deciding what a moment “means,” AI interrupts the human work of noticing and understanding.

Understanding begins with observation, not assumption.

By helping us observe and understand before interpreting people’s behavior, AI can act as a guide rather than an informant.

It can guide our attention to relevant facets of a situation and engage our critical observation skills before deciding to take action.

AI supports careful observation, making space for more informed human judgment to unfold.

Designed this way, AI helps us reflect on our assumptions and decide whether—and how—to engage with care.

Illustration of a bystander wearing smart glasses shows two people in conflict and displays a message reading, “Someone may need support. If it feels appropriate, I can help you consider what may be happening before deciding to respond,” suggesting AI support for thoughtful bystander intervention.
Illustration of two people in conflict: a woman holding a baby with a tense expression, and a man gesturing defensively, alongside the text “Could AI help you consider all points of view?” introducing a scenario about navigating differing perspectives.
Illustration of two caregivers standing back to back, looking away from each other with tense expressions—one holding a baby and the other with arms crossed.

When exhaustion collides, perspective narrows.

You recently welcomed a new baby, and now you are both exhausted. One of you talks about how hard the nights have been, but the other bristles, feeling unseen for their own sacrifices.

Voices rise… Suddenly, the argument isn’t about sleep at all—it’s about whose exhaustion counts.

These moments are universal: miscommunication sparks conflict, “I’m struggling” becomes “I’m struggling more.” Both of you retreat into your respective corners, seeking validation for your perspective or preparing a case for the next round of the argument, causing further division.

Today, most AI systems are designed for single-user input, affirming a single perspective at a time.

Prone to flattery, AI often reinforces your point of view without challenging it.

The unintended consequence is subtle but significant: when we turn to AI instead of each other, we become more entrenched in our own experience and further from understanding someone else’s.

Openness to multiple perspectives enables shared understanding.

If AI accepted input from all sides rather than a sole contributor, it could weigh each person’s experience without forcing them into competition.

When multiple perspectives are visible at the same time, conflict no longer revolves around whose experience matters more. We’re better able to choose engagement that acknowledges difference, rather than defaulting to defensiveness.

By making space for more than one person at a time, AI can help us engage with one another without competing for validation.

Illustration of a voice assistant device on a table displaying a message that reads, “It sounds like you’re both exhausted but in different ways. If you’d like, I can help you slow down and hear one another’s experience side by side,” suggesting AI support for mutual understanding.

What have you noticed about how AI systems are being built or exist around you? How do they diminish or encourage our capacity for empathy?

The products, systems, and services we design can either expand our capacity for empathy or make it easier to bypass altogether. If we choose empathy, how might we design our systems to encourage greater noticing, reflection, accountability, and care in our responses?

An abstract composition featuring various green, yellow, blue, and purple shapes, with two sets of shapes resembling human forms merging together in the center of the composition.

The above vignette shows “cultural humility” in action. This approach fosters cultural understanding through respect, empathy, and critical self-reflection to build partnerships between providers and the diverse individuals they serve. Cultural humility has become a hallmark pathway for realizing health care that responds to the needs of diverse patient populations and reduces the extreme health disparities they often face. 

Cultural humility is needed now more than ever. If current trends continue, immigrants and their descendants will account for around 88% of U.S. population growth through 2065. Alongside this, diversity will also grow within the healthcare professions. But the current care model in the U.S. rests on a culture of biomedicine that is largely inhospitable to diverse health-related beliefs and practices. Instead, we call for ways to work with our increasingly pluralistic society to uplift the benefits of biomedicine while embracing diverse perspectives on health and healing.

Centering lived experiences in healthcare

Within any cultural or identity group, each person’s lived experience is intricate and varied, and what is necessary to live a healthy and fulfilling life is equally individualistic. To recognize diverse needs in health care, medical training and practice have come to focus on “cultural competence,” “a set of congruent behaviors, knowledge, attitudes, and policies that come together in a system, organization, or among professionals that enables effective work in cross-cultural situations.” But even with cultural competence, lived experience is often overlooked, causing providers to make assumptions about a specific patient based on learned facts about the broader racial/ethnic groups to which they may belong. This can lead to care decisions based on generalizations, resulting in inappropriate recommendations for a patient’s unique circumstances.

On the other hand, “cultural humility” is a far stronger foundation for realizing culturally responsive care that honors each patient’s lived experience. It is grounded in rigorous self-reflection and a willingness to listen to, learn about, and adapt to patients’ diverse cultural values and practices. Crucially, exercising cultural humility reduces unconscious bias and stereotyping toward diverse patient populations based on many identity factors, from cultural background, race, and age to socioeconomic status, religion, and gender identity. Bias has been shown to harm patient care, contributing to poor patient-provider communication, low patient satisfaction, and mistrust of the healthcare system. A culturally humble approach to care achieves the nuanced understanding of patients’ lived experiences and unique backgrounds necessary to truly embrace cultural differences and work toward dismantling the structural vulnerabilities that result in unequal health outcomes.

Practicing cultural humility during moments of care

We see an opportunity to intervene at the most intimate level of care during face-to-face interactions between patients and providers, making cultural dimensions more accessible and the hidden barriers to care faced by multicultural communities more visible. 

Isolated tools exist that make inroads into providing clinicians with what they would need to realize culturally appropriate care. The tools fall into three focus areas:

  1. Improving communication between patients and providers 
    The Eight Questions and the Cultural Formulation Interview can be used to elicit patients’ understanding of their illnesses in the clinic. And the Vital Talk app trains providers to communicate with their patients about sensitive topics, which could be especially relevant for providers who did not have “narrative medicine” as part of their training. But cultural dimensions of care are still not a focus of the app. Moreover, with these tools, providers are still left without guidance on implementing them in practice or pragmatic ways to support their uptake in clinical settings within the time and logistical constraints of appointments.

  2. Equipping providers with cultural information
    Existing provider-focused databases like Ethnomed and CultureVision can help contextualize culturally specific beliefs about health and illness that might surface during a visit while suggesting pointers for culturally appropriate care. But accessing these tools during a visit may take up valuable time and could detract from the provider’s ability to listen and respond to the patient’s needs. The focus on information at the level of cultural groups may also be problematic, resulting in a lack of nuanced context around each patient’s needs and preferences. Lastly, these tools provide a fixed set of information that does not change based on, say, community member input, nor does it adapt to the needs of individual patients. They do not allow cultural tailoring to happen in real time during patient-provider interactions, such as through in-the-moment personalized recommendations based on information elicited from the patient during clinical visits.

  3. Engaging patients in after-care and ensuring data transparency
    Lastly, some tools provide patients with notes, information, and resources following their appointments. OurNotes is a platform that makes care notes accessible to patients, allowing them to engage with their providers during after-care and express concerns before their next visit. It encourages providers to voice record reflections, which helps them relay insights about patients to other team members while also developing their self-awareness skills. OurNotes also works to mitigate power imbalances through transparency of any data collected during a visit. While promising, OurNotes does not aim to improve interactions during moments of care.

While they have their merits, all these solutions are only piecemeal, standalone tools that imperfectly address a sliver of the patient and provider experience.

We believe a better approach is one that makes valuable resources easier for providers to access in real time, minimally disruptive to critical face time with patients, and genuinely representative of cultural and individual diversity. This approach includes digital tools and experiences that enhance provider capacity and support them in facilitating more flexible and adaptive patient care. Recognizing that digital products tend to be one-off solutions to complex problems, we see an opportunity to capitalize on their ability to integrate seamlessly with current workflows and software, automate repetitive tasks while offering guidance on more complex ones, and tailor interactions to individual needs and preferences. At their core, aspirational digital products would enable the practice of cultural humility during patient-provider interactions through experiences that capitalize on its foundational components: fostering cultural understanding through respect, empathy, and critical self-reflection.

We see an opportunity for the development of digital products that afford culturally responsive experiences and focus on the following elements: 


Culturally responsive patient-centered care
Patient-centered care focusing on culture involves treating patients holistically and respecting their unique health needs and desired health outcomes as the driving force behind their healthcare decisions. Digital products prioritizing patient-centered care consider patients’ needs, preferences, and values in the context of their lived experiences. They help facilitate communication between healthcare providers and patients, allowing patients to share their concerns and providers to respond accordingly, enabling patients to engage in and adapt their care plans and collaborate with providers to make more informed decisions. A key but sometimes neglected facet of genuinely patient-centered care involves understanding and appropriately responding to patients’ cultural and individual identity contexts.


Empathy and active listening
Digital products should encourage healthcare providers to engage in more empathetic practices towards their patients, actively listening to them, understanding their perspectives, and validating their emotions and experiences. Providers need tools to help them prepare for cross-cultural patient interactions to elicit relevant information during clinical encounters and respond compassionately. These products would afford a more culturally appropriate and inclusive care experience by prepping the provider with language that respects the patient’s preferences (e.g., preferred name and pronouns) and is non-judgmental.


Respectful and collaborative decision-making
Respectful and collaborative decision-making elevates patient agency to allow for mutual understanding and agreement between patients and providers. Digital products can support patient agency through tools that give patients control over their healthcare decisions, enable them to own and tailor their personal data, help them deeply understand vital medical details about their diagnosis and treatment (details often missed during care visits), and equip them with the information they need to communicate and collaborate more effectively with their providers on their care plans.


Continuous learning and self-reflection
Learning about the many existing cultural and identity backgrounds is a complex and seemingly infinite task. It is essential that providers have the tools to continuously listen and learn from the specific and diverse patient communities they serve. While speaking directly to patients and their families is critical to learning, digital products can provide automated tools that coach providers through moments of cultural misunderstanding to reflect on biases, assumptions, and beliefs about other cultures, traditional practices, and worldviews. These tools should seamlessly integrate into existing provider workflows, making it easier for providers to keep learning during and beyond direct patient interactions.

Closing Thoughts

We believe many benefits will flow from adopting a culturally humble approach to healthcare delivery, especially by implementing appropriate digital technologies to enhance moments of care:

  • Patients can more easily find care aligned with their needs and identities, making them feel welcome in the healthcare system

  • Patients will approach care with greater trust, as fear and drop-off due to unexpected clinical activities, tense interactions, and conflicting treatment expectations are reduced

  • Patients will engage more in their healthcare as they feel a greater sense of connection and belonging with their provider and healthcare system

  • Quality of care will improve as providers gain an understanding of diverse patient lifeworlds and are prompted to self-reflect on their own beliefs and practices, ultimately approaching all patients with more empathy

  • A cycle of learning and improvement will be embedded in the healthcare system as providers become more self-aware and reflective, inspiring these attributes in their trainees

  • Patients will experience better outcomes and health disparities will be reduced as patients are more engaged with and better served by the healthcare system

Actualizing a positive future healthcare experience for our rapidly diversifying population requires building cultural humility into the fabric of healthcare training and practice. Explore one way we envision doing this: Traverse — a vision for culturally responsive healthcare.

Partnership Highlight

This year, Artefact had two opportunities to partner with mission-driven organizations to understand young people’s relationship with digital technology and how those organizations can support young people’s efforts to shape a better future. In celebration of those partnerships with Omidyar Network and Hopelab, we highlight our approach to centering young people’s perspectives as we implemented our research and structured our recommendations.

“Our partnership with Artefact has helped us clarify how we can take action and support youth who are creating opportunities for inclusion and well-being in the next digital era. We appreciate the team’s depth of research, and their responsiveness to emergent opportunities in the work.”

Young people and the hope for a new digital future

Youth are growing up in a vast digital system with a level of complexity that we haven’t seen before. Many features on today’s major tech platforms keep youth online by design, depleting their energy and consuming their attention. Combined with the short life cycle of pop culture and the fear of missing out, young people – especially Gen Z – are aggressively pulled online, affecting their productivity, mental health, and overall wellness. These effects will likely persist with emerging technologies such as the metaverse and web3. Still, young people are capitalizing on this ‘new tech’ to have a role in shaping a more accountable, equitable, and inclusive internet for themselves and future generations.

An inclusive, systems approach to understanding youth beliefs and behaviors

Omidyar Network and Hopelab each needed actionable insights to develop a holistic strategy and prioritize actions aimed at influencing and activating technology as a force for good in supporting young people. However, the focus of each organization’s effort was slightly different. Omidyar Network focused on identifying the core issues that animate digital native activism and organizing as it relates to technology. These issues ranged from digital rights to social justice to tech worker activism. In contrast, Hopelab concentrated on understanding how emerging technologies can uplift or detract from youth mental health and well-being.

Throughout each project, we took inspiration from well-established fields such as inclusive design and human-centered design, incorporating equitable methods that afford continuous participation for internal and external stakeholders.

Participatory methods to engage internal and external stakeholders included:

  • Using simple tools like Dovetail to convey research insights and allow stakeholders to view secondary research and highlight reels of key topics discussed during 1:1 interviews
  • Hosting multiple workshops to review research insights, co-create opportunity areas, and develop critical actions
  • Hosting office hours for youth and key internal stakeholders to give feedback, check assumptions, and develop actionable priorities
  • Sharing research insights and project outcomes with internal and external stakeholders to keep participants informed, give transparency to our processes, and solicit feedback to ensure data points were representative of their voices

In addition, we took a systems approach in selecting research participants to holistically understand how youth are affected by the internet and what they are doing to take control of their future. This approach helped us understand the nuances and complexity of this problem space through various perspectives.

An overview of who we spoke to:

  • BIPOC + Youth Digital Creators
  • Digital Rights Youth Activists
  • Web3 Designers
  • Mental Health Product Innovators
  • Psychology + Digital Technology Academics
  • Metaverse Academics
  • Feminist Technologists
  • Data & Security Researchers
  • Youth Mental Health Experts

Engaging diverse youth perspectives

Whether engaging digital natives to comment on our preliminary research insights or inviting them to attend key workshops and presentations, we continuously sought to ensure youth voices remained centered. Why? Because of their diverse lived experiences growing up digital and their drive to design, create, and advocate for what they want to see in the world.

Our approach to centering young people’s lived experiences online included the following methods:

  • Conducting outreach on popular web2 platforms (e.g., Twitter, Instagram, TikTok) where digital natives are active and currently participating in conversations around technology
  • Bringing in youth advisors as co-researchers to help shape insights and outcomes
  • Creating video highlight reels with direct quotes from youth participants to better represent their words and attitudes in our research
  • Developing youth-centered design principles taken directly from one-on-one and group discussions to guide future action
  • Developing youth-centered areas of focus that steered strategies toward the issues that matter most to Gen Z

Supporting young people in their pursuit of better digital futures

The landscape of digital experiences and emerging technology is rapidly changing, giving youth a chance to shape the development of these technologies before they are entrenched. And young people are activated, ready, and willing to be the catalysts for change. They need a platform where they can be heard and supported, one that amplifies their needs and values. We are excited about Omidyar Network and Hopelab’s work to provide young people with this platform and support. Putting youth at the center is critical if we want the internet of tomorrow to be a place where future generations can thrive.

Want to learn more?

To learn more about the Omidyar Network project, check out the case study: A Youth-Led Agenda for the Responsible Tech Movement.

To learn about the insights and outcomes from the Hopelab project, attend a talk by Neeti Sanyal, Artefact’s Executive Creative Director, at the HLTH 2022 Conference Gen Z & Web3: How a Mental Health Crisis among Digital Natives is Shaping Our Virtual Future. This panel discussion is scheduled for Tuesday, November 15th, 4:20 PM—4:55 PM PST.