Provocation

We live in a world that increasingly lacks empathy, visible in how we interact with one another. Digital exchanges land more harshly. Public discourse feels more polarized. Small misunderstandings escalate fast. These outcomes are not inevitable. They emerged from systems and services designed to prioritize speed and efficiency, often at the expense of pause, context, and care.

As AI becomes increasingly embedded in our everyday interactions, we approach new heights of efficiency – from drafting messages and moderating conversations to offering advice and standing in as emotional support. Using AI for these interactions reduces friction and accelerates response, but it also unintentionally eliminates the moments that invite reflection and accountability, which underpin our capacity for empathy. Without these moments of pause, our ability to understand and care for one another will gradually atrophy.

AI does not have to accelerate this erosion of empathy. Designed intentionally, it can amplify empathy rather than diminish it. It can help people slow down rather than disengage, reflect rather than react, and strengthen rather than replace the human capacities that make empathy possible.

Building empathy is predicated on a repeated practice of sensemaking, gauging the impact of our own behaviors, and intentional decision-making. Empathy grows and strengthens when people have the space to practice these skills and eventually becomes a positive habit that influences and benefits the collective.

Our framework visualizes how individual skills set the foundation for a society with a more expanded capacity for empathy. At the individual level, the framework is grounded in a set of core human skills that build on one another as people move through the phases.

As individuals strengthen these skills, our society will be able to respond to disagreement and difference with more understanding and compassion.

Making sense of the context

This phase is about orientation. Before people can act thoughtfully, they need context—an understanding of the situation, the forces at play, and what’s at stake.

Core skills

Self-reflection: Noticing one’s own actions, assumptions, and role in a situation

Understanding: Grasping broader context, tradeoffs, and consequences of one’s actions

Awareness of impact and action

In this phase, people begin to recognize their own role within a situation and how their actions may affect others. This awareness extends beyond intent to actual impact.

Core skills

Openness: Being curious, questioning assumptions, and considering alternative perspectives

Responsibility: Owning choices and their effects on others

Decision-making with intention

This phase marks a shift to taking responsibility for one’s decisions.

Core skills

Consideration: Anticipating and taking into account how your words and actions may make other people feel

Connection: Communicating thoughtfully and repairing misalignment

Responsiveness: Acting proportionally and appropriately in the moment and adjusting on the go

As technologies like AI become increasingly present in our lives, there is an opportunity for AI to build empathy rather than erode it, as it tends to do today.

Today, when AI is invoked in situations that require empathy, it is designed to behave more like an active participant, generating content or giving advice. We may assume a future in which AI communicates on our behalf and makes decisions for us. And while this future is made to feel like a relief, it comes with tradeoffs. When expression and interaction are outsourced, we lose opportunities to practice capacities such as reflection, understanding, and responsibility.

Instead of having AI perform empathy on our behalf, as it does today, we asked: how can AI participate in a more meaningful way, more like a coach, to help people build their capacities for empathy? When we shift the focus of AI from content generation to coaching, a different, better future emerges. In this future, AI creates space for sensemaking and awareness, supports more intentional decision-making, and reinforces positive habits over time, guiding individuals in their journeys to become more empathetic.

In a series of short vignettes rooted in everyday human situations, we explored how AI can help create fertile conditions for practicing empathy. Notice that in each scenario, AI does not resolve the situation for the person. Instead, it slows the moment down, surfaces context, or creates space for reflection, supporting the human in making a more intentional, empathetic choice.

These vignettes serve as provocations or conversation starters. Each raises questions about surveillance, privacy, and other issues that are not the focal point here. Finally, while these vignettes sketch ways AI may be built to amplify empathy, we recognize that many non-AI, non-tech solutions can do the same.

Illustration of a person lying in bed, looking tired and holding a phone, alongside the text “Would you let AI write your breakup text?” which introduces a scenario about using AI in emotionally sensitive situations.
Close-up illustration of a tired-looking person lying in bed and holding a phone, conveying emotional fatigue.

Conversations can be uncomfortable.

You’ve been dating someone for three months. You know it’s not working out, but you don’t know what to say. You really don’t want to hurt the person you’re dating. So you ask AI to write a breakup message for you.

Currently, standard AI behavior would generate the text without hesitation. Boom. Done. Sent. No discomfort necessary.

But the unintended consequence? We’re training ourselves to outsource emotional labor.

Avoiding that discomfort means you never learn how to navigate it.

And next time? You’ll likely consider outsourcing it again.

Responsibility requires owning our words.

Discomfort is not a flaw. It is the emotional labor of clarifying what you feel, taking responsibility for your decision, and choosing words that reflect care.

AI can help us process what we want to say—while keeping the words ours.

Rather than acting as a shortcut, AI helps us reflect on what we want to say and how, without taking ownership of the words themselves.

Illustration of a hand holding a smartphone displaying the message, “I can help you process what you want to say, but the words should be yours,” emphasizing AI as a supportive tool.
Illustration of a frustrated person typing angrily on a laptop, with furrowed brows and tense posture, alongside the text “Can AI help you be a nicer person?” introducing a scenario about emotional reactions and behavior online.
Close-up illustration of a visibly angry person typing on a laptop, with a tense expression and clenched posture.

Social media can make your blood boil.

You’re browsing social media when you come across a viral post that’s politically charged—and a comment that is especially irritating. You start typing an emotional “clap back” so the commenter feels as dismissed as you do.

Your cursor hovers over ‘post.’

Currently, platform algorithms surface inflammatory content because it drives engagement, and engagement drives revenue.

The result? An environment that consistently rewards fast, emotional responses. In this context, even brief exchanges can escalate quickly.

But learning to pause and choose how you respond, rather than just reacting, resists systems that reward escalation.

Less reactivity allows for intentional response.

AI can interrupt reactive escalation without demanding emotional alignment. You may still disagree, but you are supported in choosing a proportionate response that reduces harm rather than amplifies it.

AI can intervene at moments of escalation to slow reaction and surface more intentional response options.

Over time, these interruptions help us recognize patterns and internalize more reflective responses when the stakes are high.

Illustration of an AI chat-style message that reads, “Checking in… This thread is escalating. Posting this will likely intensify targeting, and you’ll probably get attacked too,” followed by suggested options to rewrite a comment by challenging the idea instead of the person, setting a boundary, or expressing strong disagreement without dehumanizing language.
Illustration of a visibly irritated person holding paperwork while standing in a line, with other people waiting in the background, alongside the text “Could AI help you keep your cool?” introducing a scenario about managing frustration in stressful, everyday situations.
Close-up illustration of a frustrated person holding paperwork, with a tense expression and furrowed brows.

Stress can overwhelm our better instincts.

You’re at the Department of Motor Vehicles (DMV) for the third time in three weeks. This time, it’s a different clerk and a different set of missing paperwork.

The fluorescent lights, endless lines, and loud noises overload your senses. Last time you were here, you said some mean things to the clerk that you regret.

You don’t notice it at first: your heart is racing, your body temperature rising, and your fists clenched before you’ve even interacted with anyone at the DMV.

What you need now is to slow down and notice what’s happening in your body so you don’t say something you’ll regret, again.

This simple act of awareness is the first step to regulating your emotions and approaching a fraught situation more thoughtfully.

Awareness enables us to act responsibly.

By tracking signals like heart rate and location, AI can surface patterns that make moments of heightened emotion easier to recognize.

AI can bring awareness to how we react in specific situations and offer techniques for coping with challenging emotions in healthier ways.

That awareness supports regulation before interaction, increasing the likelihood of responsible action.

Illustration of a smartwatch displaying a high heart rate and a calming prompt that reads, “You seem stressed. I invite you to pause and take three belly breaths with me,” with a “Start” button, suggesting AI-supported stress regulation.
Illustration of a tense confrontation between two people arguing at a bus stop while a third person looks on, alongside the text “Can AI help you be a responsible bystander?” introducing a scenario about witnessing conflict and deciding how to respond.
Close-up illustration of a concerned bystander adjusting smart glasses that emit a subtle glow, suggesting AI assistance.

Tense situations can be hard to interpret.

You’re standing at a crowded bus stop. You notice two people arguing—raised voices, expressive faces, and lots of gesturing. You want to do something, but questions surface immediately:

What’s happening? Is it safe to intervene? If so, what should I do?

In moments like this, AI systems jump to conclusions before we even have a chance to observe, think, and form our own interpretations.

Drawing on spoken language and body cues, these systems often translate complex interactions into simplified labels such as ‘risk’ or ‘threat’.

By deciding what a moment “means,” AI interrupts the human work of noticing and understanding.

Understanding begins with observation, not assumption.

By helping us observe and understand before interpreting people’s behavior, AI acts as an unbiased guide rather than an informant.

It can guide our attention to relevant facets of a situation and engage our critical observation skills before deciding to take action.

AI supports careful observation, making space for more informed human judgment to unfold.

Designed this way, AI helps us reflect on our assumptions and decide whether—and how—to engage with care.

Illustration of a bystander wearing smart glasses shows two people in conflict and displays a message reading, “Someone may need support. If it feels appropriate, I can help you consider what may be happening before deciding to respond,” suggesting AI support for thoughtful bystander intervention.
Illustration of two people in conflict: a woman holding a baby with a tense expression, and a man gesturing defensively, alongside the text “Could AI help you consider all points of view?” introducing a scenario about navigating differing perspectives.
Illustration of two caregivers standing back to back, looking away from each other with tense expressions—one holding a baby and the other with arms crossed.

When exhaustion collides, perspective narrows.

You recently welcomed a new baby, and now you are both exhausted. One of you talks about how hard the nights have been, but the other bristles, feeling unseen for their own sacrifices.

Voices rise… Suddenly, the argument isn’t about sleep at all—it’s about whose exhaustion counts.

These moments are universal: miscommunication sparks conflict, “I’m struggling” becomes “I’m struggling more.” Both of you retreat into your respective corners, seeking validation for your perspective or preparing a case for the next round of the argument, causing further division.

Today, most AI systems are designed for single-user input, affirming a single perspective at a time.

Prone to flattery, AI often reinforces your point of view without challenging it.

The unintended consequence is subtle but significant: when we turn to AI instead of each other, we become more entrenched in our own experience and further from understanding someone else’s.

Openness to multiple perspectives enables shared understanding.

If AI accepted input from all sides rather than a single contributor, it could help weigh each experience without forcing them into competition.

When multiple perspectives are visible at the same time, conflict no longer revolves around whose experience matters more. We’re better able to choose engagement that acknowledges difference, rather than defaulting to defensiveness.

By making space for more than one person at a time, AI can help us engage with one another without competing for validation.

Illustration of a voice assistant device on a table displaying a message that reads, “It sounds like you’re both exhausted but in different ways. If you’d like, I can help you slow down and hear one another’s experience side by side,” suggesting AI support for mutual understanding.

What have you noticed about how AI systems are being built or exist around you? How do they diminish or encourage our capacity for empathy?

The products, systems, and services we design can either expand our capacity for empathy or make it easier to bypass altogether. If we choose empathy, how might we design our systems to encourage greater noticing, reflection, accountability, and care in our responses?

Collage of agricultural workers in a field

Climate change has direct impacts on human health, but those impacts vary widely by location. Local health impacts depend on a large number of factors, including specific regional climate impacts, demographics and human vulnerabilities, existing local adaptation capacity and resources, and cultural context. Therefore, organizations will need to tailor mitigation and adaptation strategies to the regional risks and contexts of different communities.

Participants at the 2023 Global Digital Development Forum called for a move away from entrenched approaches that tend to rely on top-down solutions to drive change. Instead, they suggested more holistic, interdisciplinary, collaborative, and inclusive engagements that account for on-the-ground contexts and people-centered approaches. Participatory methodologies are well suited to bringing local voices into conversations, decision-making, and equitable engagement.

Headshots of the two featured speakers: Ezgi Canpolat, PhD and Kinari Webb, MD

In this panel discussion, Artefact’s Whitney Easton sits down with Health in Harmony’s Kinari Webb, MD and the World Bank’s Ezgi Canpolat, PhD to share the work they are doing to foreground the social dimensions of climate change and support planetary health. Through concrete examples, we will explore what is most difficult and most promising about working deeply and collaboratively with local partners and communities to craft a more resilient future for us all.

Topics include:

  • What does it mean in practice to put people at the center of climate and health action?
  • What’s most missing from existing approaches that attempt to reduce the health impacts of climate change, and what’s most promising on the horizon?
  • What can the COVID-19 pandemic teach us about how to work toward planetary health?
  • How might we better engage with cultural contexts and local realities as we design initiatives, particularly when it comes to ensuring impact and minimizing unintended consequences?
  • How can the predictive power enabled by Big Data and technology be balanced with local, real-life contexts to ensure that local stakeholders and citizens truly benefit?

In recent months, we’ve seen the rise of independent social media apps marketed around authenticity: first BeReal, and now others like Gas. When we speak with Gen Z consumers, authenticity feels like a buzzword—it comes up again and again as a guidepost for ideal experiences—yet they have difficulty defining it. Instead, it reads as a reaction to the inauthenticity they see on Instagram and, to a lesser extent, TikTok, which they blame for the lack of social connection in spaces they believe should foster it. While BeReal’s features limit the ability to curate posts, the core of its UX is the same as that of larger social media platforms, which limits the social connection that underpins authenticity. To design for authenticity, platforms must adopt a UX that allows users to adapt and evolve their identities over time.

Putting on an “act” in social spaces isn’t unique to social media. In 1959, Erving Goffman published The Presentation of Self in Everyday Life, where he contends that real-life social situations cause participants to be actors on a stage, with each implicitly knowing their role. The character one plays depends on a variety of contextual factors: who is present, the “props” and “set” (visual cues) among others. As such, each performance is different. His theory explains why one might feel awkward when two different social groups are in the same room: the actor doesn’t know which role they are supposed to play.

In online spaces, the feed is our perma-stage. Facebook’s News Feed was designed to deliver updates on friends the same way we receive updates on local and national news. It seems inevitable that this product vision would produce performances, and highly curated ones at that. Its one-to-many nature limits standard interaction; instead of an actor-actor dynamic, we see a creator-commenter-lurker hierarchy. And because creators design their posts to cater to the masses, they are not moving from stage to stage; instead, one’s online persona feels static. Here, the light of inauthenticity shines through, as we are no longer playing together, but watching others perform.

In Goffman’s model, actors retreat “back-stage” when they are alone or with close others — this is the place where they can let their hair down and be free from keeping up impressions. While the dominance of social media’s feed might make the Internet seem like an unlikely place for back-stage settings, almost every social media platform has a direct message function. In contrast to the one-to-many, post-centric UX of the feed, these back-stage spaces are one-to-one or one-to-few, interaction-heavy spaces that have come to be the most fulfilling part of the social media experience for users. Instead of solo “lurking” that can lead to comparison and loneliness, users who are active in back channels find engagement, connection, and reprieve to be themselves, or at least the character that requires the smallest margin of performance with this particular friend or group, since they have created their “show” together.

But it’s the feed that dominates the social media experience. It permeates moments that would traditionally have been back-stage settings (for example, alone in one’s home), and so we find ourselves wanting authenticity, or a back-stage feeling, here. And so trends like posting crying selfies have surfaced, which feel like a cut-and-paste of back-stage content onto the front stage. While a post like this could momentarily make a user feel understood or less alone, the infrastructure of social media doesn’t enable the interaction needed to produce real support, and can continue to feel designed for likes. Between glamour shots and crying selfies sits BeReal, where users post more of the “everyday” of their everyday life. Still, BeReal has been criticized as boring, still performative, or even exclusive in a more intimate way. A feed can’t support true connection, the table stakes of enduring authenticity.

Outside of these two paradigms, we see a third type of space emerging. Platforms like Discord have taken hold during the pandemic as a more casual place to “hang out” virtually. Building on a chat-based UX, Discord enables users to find others with similar interests and move between smaller and larger channels as well as text and voice-based communication. Further, Discord is the hub for creative expressions like Midjourney, an AI image generator that can only be accessed through Discord using bot commands. Similarly, Fortnite builds conversation through shared experience and play, in so doing re-leveling the audience-observer dynamic and putting engagement over performance. Extending Goffman’s metaphor, we might compare the social atmosphere created on Discord and Fortnite to a writer’s room, where users engage and create together. 

A more agile space like Discord reflects the “Presentation of Self” as charted by Gen Z. This generation sees the self as a canvas for experimentation, where identity is fluid. Through creative tools and less definite spaces, creativity and play extend to the making of self on a journey of self-discovery. Users can create and try on characters much like a comedian might on a Tuesday night, testing whether a bit resonates before it becomes an enduring part of the Saturday-night act.

To enable more dynamic interactions, we will need to move away from a cut-and-paste UX approach to a ground-up infrastructure that is designed for fluidity. Taking pointers from the “writer’s room,” two principles can guide us. First, collaboration. Similar to “yes – and,” creators in authentic spaces create in tandem rather than in a creator-consumer dynamic. The UX of authentic spaces must lean toward chat over posts, fostering the interaction and relationships that make it safe to try a new presentation of self. Second, authentic social media needs impermanence. Though a feed may refresh over time, we know that posts on Instagram will be connected to our profile for years to come. If a post is instead lost in a Discord feed, we may feel more freedom to experiment and “get it wrong.” Combining collaboration and impermanence, we might just set the stage to permit the collection of characters we all play, so that we can all feel a bit more dynamic, and perhaps even authentic, in digital spaces.

An abstract composition featuring various green, yellow, blue, and purple shapes, with two sets of shapes resembling human forms merging together in the center of the composition.

The above vignette shows “cultural humility” in action. This approach fosters cultural understanding through respect, empathy, and critical self-reflection to build partnerships between providers and the diverse individuals they serve. Cultural humility has become a hallmark pathway for realizing health care that responds to the needs of diverse patient populations and reduces the extreme health disparities they often face. 

Cultural humility is needed now more than ever. If current trends continue, immigrants and their descendants will account for around 88% of U.S. population growth through 2065. Alongside this, diversity will also grow within healthcare professions. But the current care model in the U.S. rests on a culture of biomedicine that is largely inhospitable to diverse health-related beliefs and practices. Instead, we call for ways to work with our increasingly pluralistic society to uplift the benefits of biomedicine while embracing diverse perspectives on health and healing.

Centering lived experiences in healthcare

Within any cultural or identity group, each person’s lived experience is intricate and varied, and what is necessary to live a healthy and fulfilling life is equally individual. To recognize diverse needs in health care, medical training and practice have come to focus on “cultural competence,” “a set of congruent behaviors, knowledge, attitudes, and policies that come together in a system, organization, or among professionals that enables effective work in cross-cultural situations.” But even with cultural competence, lived experience is often overlooked, causing providers to make assumptions about a specific patient based on learned facts about the broader racial/ethnic groups to which they may belong. This can lead to care decisions based on generalizations, resulting in inappropriate recommendations for a patient’s unique circumstances.

On the other hand, “cultural humility” is a much stronger foundation for realizing culturally responsive care that honors each patient’s lived experience. It is grounded in rigorous self-reflection and a willingness to listen to, learn about, and adapt to patients’ diverse cultural values and practices. Crucially, exercising cultural humility reduces unconscious bias and stereotyping toward diverse patient populations based on many identity factors, from cultural background, race, and age to socioeconomic status, religion, and gender identity. Bias has been shown to negatively impact patient care, including poor patient-provider communication, low patient satisfaction, and mistrust of the healthcare system. A culturally humble approach to care achieves the nuanced understanding of patients’ lived experiences and unique backgrounds necessary to truly embrace cultural differences and work toward dismantling the structural vulnerabilities that result in unequal health outcomes.

Practicing cultural humility during moments of care

We see an opportunity to intervene at the most intimate level of care during face-to-face interactions between patients and providers, making cultural dimensions more accessible and the hidden barriers to care faced by multicultural communities more visible. 

Isolated tools exist that make inroads into giving clinicians what they need to realize culturally appropriate care. These tools fall into three focus areas:

  1. Improving communication between patients and providers 
    The Eight Questions and the Cultural Formulation Interview can be used to elicit patients’ understanding of their illnesses in the clinic. And the Vital Talk app trains providers to communicate with their patients about sensitive topics, which could be especially relevant for providers who did not have “narrative medicine” as part of their training. But cultural dimensions of care are still not a focus of the app. Moreover, with these tools, providers are still left without guidance on implementing them in practice or pragmatic ways to support their uptake in clinical settings within the time and logistical constraints of appointments.

  2. Equipping providers with cultural information
    Existing provider-focused databases like Ethnomed and CultureVision can help contextualize culturally specific beliefs about health and illness that might surface during a visit while suggesting pointers for culturally appropriate care. But accessing these tools during a visit may take up valuable time and could detract from the provider’s ability to listen and respond to the patient’s needs. The focus on information at the level of cultural groups may also be problematic, resulting in a lack of nuanced context around each patient’s needs and preferences. Lastly, these tools provide a fixed set of information that does not change based on, for example, community member input, or adapt to the needs of individual patients. They do not allow cultural tailoring or adaptation to happen in real time during patient-provider interactions, such as through in-the-moment personalized recommendations based on information elicited from the patient during clinical visits.

  3. Engaging patients in after-care and ensuring data transparency
    Lastly, some tools provide patients with notes, information, and resources following their appointments. OurNotes is a platform that makes care notes accessible to patients, allowing them to engage with their providers during after-care and express concerns before their next visit. It encourages providers to voice record reflections, which helps them relay insights about patients to other team members while also developing their self-awareness skills. OurNotes also works to mitigate power imbalances through transparency of any data collected during a visit. While a promising development, OurNotes does not target improving interactions during moments of care.

While they have their merits, all these solutions are only piecemeal, standalone tools that imperfectly address a sliver of the patient and provider experience.

We believe a better approach is one that makes valuable resources less cumbersome for providers to access in real time, less disruptive to critical face time with patients, and more genuinely representative of cultural and individual diversity. This approach includes digital tools and experiences that enhance provider capacity and support them in facilitating more flexible and adaptive patient care. Recognizing that digital products tend to be one-off solutions to complex problems, we see an opportunity to capitalize on their ability to integrate seamlessly with current workflows and software, automate repetitive tasks while offering guidance on more complex ones, and customize interactions tailored to individual needs and preferences. At their core, aspirational digital products would enable the practice of cultural humility during patient-provider interactions through experiences that capitalize on its foundational components: fostering cultural understanding through respect, empathy, and critical self-reflection.

We see an opportunity for the development of digital products that afford culturally responsive experiences and focus on the following elements: 


Culturally responsive patient-centered care
Patient-centered care focusing on culture involves treating patients holistically and respecting their unique health needs and desired health outcomes as the driving force behind their healthcare decisions. Digital products prioritizing patient-centered care consider patients’ needs, preferences, and values in the context of their lived experiences. They help facilitate communication between healthcare providers and patients, allowing patients to share their concerns and providers to respond accordingly, enabling patients to engage in and adapt their care plans and collaborate with providers to make more informed decisions. A key but sometimes neglected facet of genuinely patient-centered care involves understanding and appropriately responding to patients’ cultural and individual identity contexts.


Empathy and active listening
Digital products should encourage healthcare providers to engage in more empathetic practices towards their patients, actively listening to them, understanding their perspectives, and validating their emotions and experiences. Providers need tools to help them prepare for cross-cultural patient interactions to elicit relevant information during clinical encounters and respond compassionately. These products would afford a more culturally appropriate and inclusive care experience by prepping the provider with language that respects the patient’s preferences (e.g., preferred name and pronouns) and is non-judgmental.


Respectful and collaborative decision-making
Respectful and collaborative decision-making elevates patient agency, allowing for mutual understanding and agreement between patients and providers. Digital products can support this agency with tools that give patients control over their healthcare decisions: enabling them to own and tailor their personal data, to deeply understand vital medical details about their diagnosis and treatment – details often missed during care visits – and to communicate and collaborate more effectively with their providers on their care plans.


Continuous learning and self-reflection
Learning about the many cultural and identity backgrounds that exist is a complex and seemingly infinite task. It is essential that providers have tools to continuously listen to and learn from the specific and diverse patient communities they serve. While speaking directly to patients and their families is critical to learning, digital products can provide automated tools that coach providers through moments of cultural misunderstanding, prompting reflection on biases, assumptions, and beliefs about other cultures, traditional practices, and worldviews. These tools should integrate seamlessly into existing provider workflows, making it easier for providers to engage in learning during and beyond direct patient interactions.

Closing Thoughts

We believe many benefits will flow from adopting a culturally humble approach to healthcare delivery, especially by implementing appropriate digital technologies to enhance moments of care:

  • Patients can more easily find care aligned with their needs and identities, making them feel welcome in the healthcare system

  • Patients will approach care with greater trust, as fear and drop-off due to unexpected clinical activities, tense interactions, and conflicting treatment expectations are reduced

  • Patients will engage more in their healthcare as they feel a greater sense of connection and belonging with their provider and healthcare system

  • Quality of care is improved as providers gain an understanding of diverse patient lifeworlds and are prompted to self-reflect on their own beliefs and practices, ultimately approaching all patients with more empathy

  • A cycle of learning and improvement will be embedded in the healthcare system as providers become more self-aware and reflective, inspiring these attributes in their trainees

  • Patients will experience better outcomes and health disparities will be reduced as patients are more engaged with and better served by the healthcare system

Actualizing a positive future healthcare experience for our rapidly diversifying population requires building cultural humility into the fabric of healthcare training and practice. Explore one way we envision doing this: Traverse — a vision for culturally responsive healthcare.

The pandemic has demonstrated the healthcare industry’s ability and appetite to adopt models of care that meet patients where they are – whether online, at home, or in the community.

In this webinar, Artefact sits down with Sara Vaezy, Chief Digital and Growth Strategy Officer at Providence and Dr. Shantanu Nundy, physician and Chief Medical Officer at Accolade, to explore the innovative and accelerated models of care here in the U.S. that are impacting not only patients today but also the patient experience in the years to come.

We explore:

  • Opportunities and risks in distributed care models such as hospitalization at home; digital models such as telemedicine for behavioral health; and decentralized models such as subscription-based care
  • What these evolving models of care mean for the patient experience, their relationship with care providers, and greater health outcomes
  • How evolving care models that center the patient might support greater inclusion and equity, creating new opportunities to reach underserved populations

As part of SxSW EDU Online 2021, we sat down with Elyse Eidman-Aadahl, Executive Director of the National Writing Project (NWP), and Lukas Wenrick, Assistant Director of the Learning Enterprise at Arizona State University, to discuss inclusion in EdTech.

Discover the “ABC”s of EdTech inclusivity – Align, Build, and Contextualize – as we share an approach to developing inclusive, flexible, and human learning pathways and programs at any organization.

We explore strategies and lessons learned creating curriculum, programs, and delivery models for greater access, equity, and inclusion, and identify ways your organization can develop tech-enabled learning experiences that serve every student’s unique needs.

The UX 2030 Series

As emerging technology becomes an increasingly ubiquitous part of our lives, the design decisions we make today will shape how these technologies impact the world over the decade to come.

This series envisions how we might apply emerging technology in specific industries to create positive impact. We’ll explore what might accelerate or hinder these realities and the key risk areas and unintended consequences to consider.

Illustration by Laura Carr + Paige Ormiston


With society seemingly more divided than ever, coming to a shared sense of reality, empathy, and purpose is a public imperative. While we often think of virtual reality (VR) in an entertainment or enterprise context, the emotional power and behavioral impact of the unprecedented realism that future VR environments will offer can create enormous opportunities for public education and consensus-building.

We imagine a 2030 where responsibly designed VR experiences are a unifying medium that help people understand complex circumstances and grasp the impact of invisible challenges through tangible, hyper-personalized experiences bolstered by technology like 3D environmental mapping, AI, and machine learning. So how do we get there – and what risks will we face along the way?

Tackling the tragedy of the commons

People have trouble comprehending slow, distributed change. If a process doesn’t happen at a pace or visibility that we can perceive, we may not believe it is occurring at all. The COVID-19 pandemic has shown us how many people find it hard to grasp the idea of a virus that causes preventable deaths and the impact of individual actions on transmission.

What if we could see someone breathe out virus droplets and the surfaces they land on? What if we could visualize first-hand how the virus spreads in enclosed spaces, and who would become infected or even die? What if that happened over the course of seconds, not weeks? And what if – rather than seeing the impact on anonymous avatars – you were experiencing this environment with your own family, friends, and coworkers? The evolution of VR can make this vision a reality.

VR is the captivating next frontier of data visualization. It has the power to make the intangible visible, and the consequences of action – or inaction – immediate. Just as the humble bar chart helped people compare the scale and relationships of numbers centuries ago, VR has the potential to communicate the effects of our actions not just on ourselves today, but on complex systems over time. Over the next decade, VR as a visualization mechanism will create opportunities for society to become educated on issues that are slow, complex, and require collective shifts in behavior – such as pandemics or climate change.

Visualizing the environment, emotion, and time

VR has the unique ability to manipulate environment, emotion, and time. As VR evolves over the next 10 years, it will be able to virtualize the real world into highly believable and persuasive copies. What good might we build with this technology?

Used responsibly and with intention, VR can bring alignment and consensus to contentious or difficult-to-understand topics. Imagine an educational VR game that teaches the impact of individual behavior on the outcomes of a global pandemic. The experience puts you into everyday situations in your own life and maps the direct impact of your actions on virus transmission. You take public transport, meet with coworkers, enjoy a couple drinks with friends at your favorite crowded bar, and go home to your family. At each of these points, you have the option to make a decision: Do you wear a mask? Do you get on the crowded bus? Do you sit indoors at the bar?

Environment

By 2030, VR will utilize real environments mapped in 3D with a level of accuracy that will make them essentially indistinguishable from the real world. We are already seeing a tremendous amount of progress in environmental mapping techniques and data, and what started as Google Street View is now expanding to map indoor environments as well. Consumer devices increasingly include depth-sensing cameras that can generate 3D representations of not only individual objects but entire buildings.

Moreover, VR is not constrained by the limitations of the real world. A VR environment could transport you from interacting with people on the ground to being literally high in the sky, observing how the other people you interacted with go about their days. Perhaps your actions influence how others act after observing you. You can follow their decisions – good and bad – to see the role we all play in exhibiting responsible behaviors for the good of the community.

Emotion

VR also offers feedback at an emotional level – one of its most powerful characteristics. When integrated with other emerging technologies, VR environments can impact our emotions and behavior both in the virtual and real world. Recent advancements in generative adversarial network algorithms for machine learning have created convincing reproductions of images, speech, and gestures of real living humans, called deepfakes. What if we applied that technology to generating personalized characters in a VR environment? By reproducing the people you care about in VR, the emotional connection you have to the experience will be much deeper.

Time

Another key feature of VR is the ability to compress time, amplifying subtle changes into concrete impact. What if you could immediately see the results of your decisions, for example in an infection rate score that demonstrates which of your actions have the direst consequences and which have minuscule impact at scale?

Because VR has the ability to compress time into a fast-paced experience, you could re-live your day again and again, varying the choices you make. In this Groundhog Day-like experience, the immediate feedback and ability to iterate on behavior can help you better recognize and learn the impact of your actions.
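As a toy illustration of the feedback loop described above – this is not from the original, and every weight and choice name is an invented assumption rather than a real epidemiological value – the "re-live your day" mechanic could be sketched as a function that maps a day's choices to an immediate risk score:

```python
# Hypothetical sketch: each choice carries an assumed transmission-risk weight,
# and the score is recomputed instantly so a player can iterate on behavior.
# Weights and choice names are illustrative assumptions only.

RISK_WEIGHTS = {
    "wear_mask": -0.4,   # protective choices lower the score
    "crowded_bus": 0.3,  # riskier choices raise it
    "indoor_bar": 0.5,
}

def infection_rate_score(choices):
    """Compress a simulated day into a single 0-1 risk score."""
    base = 0.2  # assumed background exposure
    score = base + sum(RISK_WEIGHTS[c] for c in choices)
    return max(0.0, min(1.0, score))  # clamp to [0, 1]

def replay(days):
    """Groundhog Day loop: score each variant of the same day for comparison."""
    return {name: infection_rate_score(choices) for name, choices in days.items()}
```

Comparing variants of the same day – for example, `replay({"cautious": ["wear_mask"], "risky": ["crowded_bus", "indoor_bar"]})` – gives the kind of immediate, iterable feedback the experience envisions.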

Addressing risk areas

Even if VR experiences are deliberately called out as “virtual” or “not real,” there is still a significant amount of “reality” that VR designers create with regard to how users experience and perceive these environments. The potential to significantly distort the truth raises several important ethical questions.

Misinformation

The same ability to encourage better behavior through manipulation of environment, emotion, and time can obviously be used to push people toward harmful behavior or actions that are not in their interest. We must assume that powerful technology not only can, but will, be used for malicious purposes. The false information that spreads easily across social media today shows us that we can’t assume audiences will distinguish truth from fiction in digital contexts. When VR is nearly indistinguishable from the real world, those for whom “seeing is believing” are easily radicalized.

As designers and technologists, we need to hold ourselves accountable to responsible use of technology like VR. Creators of platforms that distribute and enable VR content must also establish content standards and ratings that flag inappropriate experiences or those that distort reality in ways that would have negative consequences. Twitter’s decision to flag misinformation surrounding the 2020 election is one example of an attempt to balance preventing the spread of dangerous misinformation while enabling ambitious goals of free speech.

Critically, as a society, we must teach and practice media and digital literacy, and foster the critical thinking skills to question the motivation of technology products.

The ethics of emotional manipulation

Simulating real humans through machine learning algorithms (like deepfakes) is already highly controversial today. Amplifying this in a VR context to influence people on an emotional level must face even more ethical scrutiny. Is it ethical to exploit people’s emotional attachments in order to influence their behavior? It seems that anti-smoking, anti-drunk driving, and even social impact campaigns already believe it is.

Yet it is hard to argue that preferable outcomes would justify the means in every case. Such manipulation may also rest on the false assumption that we fully understand the truth at all times, which we know isn’t so. If we manipulate people to behave in a way we believe is preferable, what happens when that understanding of what’s preferable changes? Emotional influence cannot easily be reversed once it has occurred, so we have to proceed with caution, to say the least.

Data privacy

Data privacy is another critical issue that a VR environment based on the real world exacerbates. Is it ethical to use personal data about humans and environments to influence behavior? Where is the data gathered from? How would someone consent to having themselves or their home recreated by an algorithm in VR? What about shared or public spaces like your favorite bar or park?

Governments have begun introducing regulation in an attempt to address data protection concerns. For example, California passed a law in 2019 that prohibits distributing manipulated videos of political candidates within 60 days of an election. The question remains, however, of how enforceable such laws might be in practice, particularly as emerging technologies scale.

VR as a unifying medium

Historian Melvin Kranzberg’s First Law of Technology famously states, “Technology is neither good nor bad; nor is it neutral.” To realize the educational and unifying power of VR, we must recognize that it is our collective responsibility as designers, technologists, regulators, and consumers, to create and leverage VR experiences that align with our individual, societal, and collective good. We believe we can aim higher together and responsibly harness emerging technology to tackle the most pressing issues of our time and create a preferable future.

Credit: The Gender Spectrum Collection

The issue of inclusion has been central to the transformation of healthcare in 2020. While the COVID-19 pandemic accelerated advancements in telehealth that brought greater and safer access to care to many, it has also exacerbated existing inequities faced by vulnerable populations such as seniors; people experiencing disabilities; Black, Indigenous, and People of Color; LGBTQ+ individuals; non-English speakers; and those living in poverty.

Some 25% of Americans may not have the digital tools or literacy to access telehealth services, a population that often intersects with racial groups that face disparities in health and care. For decades, minority populations in the U.S. have received lower-quality health care than their white counterparts across comparable socioeconomic, age, and health statuses.

The complexity of the healthcare system makes change feel daunting at best and insurmountable at worst. While the system is slow to evolve, introducing more inclusive improvements in healthcare can create positive incremental change at a human level.

Artefact Strategy Director Felix Chang joined the Innovation Learning Network, Centura Health, and Design Thinking Exchange (DTX) for a digital exchange discussing how to foster a culture of inclusion in healthcare. With participants spanning Kaiser Permanente, Philips Healthcare and LA County Department of Health Services, the session explored actions that innovation leaders can take to create more inclusive health systems, products, or services. These were the key takeaways:

Strengthen processes and teams

Look for opportunities to integrate inclusive values within your organization at a process and intervention level. Are there ways to embed inclusive practices in workflows, tools, and existing partnerships such as patient advisory councils or cross-discipline decision-making?

When developing interventions, consider how to ensure diverse research participant composition, avoid making assumptions about patients, and take communities of use as experts in their experience. For example, the Group Health Research Institute introduced elements of co-creation by establishing a patient panel to guide the process of developing their SIMBA decision aid. The tool helps breast cancer patients better understand and make informed decisions on their breast cancer monitoring options. As part of the experience, patients are prompted with questions to capture their values and the factors that are most important to them (such as procedure risks, duration, and cost), giving them the agency to personalize their experience and prioritize what matters most in their specific situation.

Promote accountability

Developing a culture of inclusion is only effective when it is a shared responsibility. Establish explicit definitions, commitments, and plans to ensure transparency and actionable benchmarks.

This could be at a department, project, or initiative-level, such as setting quantifiable standards for more inclusive stakeholder and patient groups in planning initiatives, giving feedback on processes when in place, and prioritizing what to improve.

Good data mining and benchmarks for measurement can help identify gaps and areas that need more focus.

Normalize inclusion at a system level

People often avoid deviating from the norm, and working toward systemic change can be overwhelming. By sharing best practices, learnings, and case studies of success, you can begin to normalize a common culture of inclusion across your team or organization.

Create excitement and interest by spotlighting inclusive work in your team or field. How might you establish a system of shared successes and lessons learned? For example, The Bill & Melinda Gates Foundation and Gates Ventures developed the Exemplars in Global Health platform, which gathers public health data and performance outcomes across the world in order to share best practices and learnings within the public health professional community. What would a similar practice look like for your team or organization? What are other examples from outside the healthcare industry that show the measurable impact of true inclusion?

Consistently celebrating how your organization and others are adopting inclusive practices creates a sense of community and shared purpose that can support continued action.

What action will you take?

Creating change in an entrenched system is neither quick nor easy, but each of us must take steps in our roles and spheres of influence that together can contribute to a sea change. By strengthening inclusive practices in teams and processes, promoting accountability, and normalizing a culture of inclusion, we can start to take important steps to creating healthcare experiences and outcomes that serve all people.

It is hard to imagine that it has been over a year since Business Roundtable released a revised Statement on the Purpose of a Corporation. For many, the question was whether these statements and others in the same spirit truly represented the beginnings of a fundamental shift toward stakeholder capitalism, or whether it was all an exercise in PR. Then came the pandemic, widespread social unrest, and a declining economy, all wrapped in the unrelenting uncertainty of 2020. By Fall, the initial report card was mixed: early research found that signatories had not made significant progress toward these goals compared to peers engaged in business as usual.

One explanation is that transformational change frequently encounters obstacles in terms of organizational readiness, priorities, scope, and timing. It simply takes time. Consequently, even as leaders and the zeitgeist have embraced the vision of stakeholder capitalism, many organizations may lag behind in terms of their capacity to act on otherwise good intentions.

As we look ahead to 2021 and beyond, we believe that firms will need to demonstrate more concrete actions toward stakeholder centricity and a commitment to preferable futures – both in response to increasing external pressures and as a means of making organizations more resilient in the face of continued uncertainty.

So, how might we do that?

Designing a more responsible future

Human-centered design (HCD) is a starting point. Traditionally, this has meant engaging stakeholders and users, identifying challenges, unmet needs and opportunities, co-designing and prototyping solutions, and iterating throughout execution and delivery. As a first step toward stakeholder centricity, integrating more human-centered processes is a proven approach to creating more meaningful and relevant products, services, and interventions. But it’s not enough.

On its own, the strength of HCD is also a limitation. A disproportionate focus on users can create critical blind spots and limit our ability to consider impacts on other stakeholders. This can lead to the sort of significant negative externalities that we experience from products and services every day – for example, the effects of social media on political polarization, the gig economy as a contributor to income inequality, delivery services and the environment, and so on. In each case, solutions that serve users exceptionally well produce negative effects elsewhere. Indeed, almost everything we make creates a long cascade of systemic impacts that shape the world around us, both immediately and over time.

Moving toward a more fair and inclusive form of capitalism will require that design and innovation leaders adopt a more holistic approach to shaping the future. In our own work, we have built on the strong foundation of HCD and expanded the definition of stakeholders to include users as well as others impacted by a product or policy, the commons, society as a whole, and even the planet. We also advocate a long-term perspective that brings futures literacy into the innovation process. This approach to stakeholders and futures requires the development of new mindsets and methods adapted from multiple disciplines – systems thinking, business design, foresight, and ethics, among others. We are calling this integrated approach to stakeholder centricity “responsible design.”

Responsible design means the products and services we create should account for impacts to all stakeholders – today and in the future.

The current state of responsible design

Responsible design is an emerging discipline, though we see evidence of the trend in movements related to ethical technology, the circular economy, healthcare, design for social change, and elsewhere. Even so, we wanted to better gauge whether these ideas were becoming mainstream in large organizations, and whether firms were investing time and resources in responsible design as a path toward corporate purpose and/or stakeholder capitalism. To answer this question – and to better understand potential barriers to change – we surveyed a group of 50 senior leaders in design and innovation, drawn from multiple industries including technology, healthcare, pharma, retail, and mobility.

Long-term thinking and outcomes-focused design

We asked participants to rank a series of responsible design attributes in terms of whether their organization would value it highly or not at all. These included “taking a long-term perspective,” “being cognizant of the wider impact of solutions,” “working toward preferable futures,” “understanding complex systems and root causes,” and “being inclusive of all stakeholders.” Using the same attributes, we then asked whether firms were devoting more or fewer resources to each compared to five years ago. Notable findings include the growing importance of thinking and planning in long timescales, and the importance of being more cognizant of impacts – both of which run counter to the “move fast and break things” ethos of the previous decade.

Stakeholders in theory vs. practice

Participants also ranked “inclusive of all stakeholders” as the least-valued and lowest-trending attribute, suggesting that stakeholder-centric thinking remains emergent compared to the more established mental models and practices of human-centered design. While there seems to be recognition that stakeholder capitalism itself represents a worthy ideal (60% of participants say that it is increasing in importance, when asked directly), our survey also revealed the lack of a common definition and a variety of interpretations of what it might look like in practice.

Barriers to organizational change

Other qualitative responses revealed a range of tensions between the attributes of responsible design and perceptions that such an approach would be slow, inefficient, or costly. Many organizations – particularly in technology – are aligned to values that may limit the near-term prioritization and adoption of responsible design (e.g., customer obsession, short-term results, data-driven decision making, etc.). This condition is exacerbated by differing priorities, incentives, and motives across departments and up and down the organization.

Looking ahead

Our research suggests that positive change is occurring and the trend toward responsible design and stakeholder capitalism will likely continue, particularly in light of the shifting global role of organizations and leadership amidst the larger backdrop of social, political, and economic uncertainty. As the pandemic has stressed so many global systems, we see an opportunity for new thinking and a more fundamental reset of business as usual. Now is the time.

We are also optimistic that the business case is clear, even as the transition to stakeholder capitalism is unevenly distributed and variously interpreted. Responsible design will enable firms to command greater influence over desired outcomes and actual impacts while better anticipating and managing risk. This will allow organizations to more effectively align corporate purpose, values, and actions, creating more durable brand value and attracting critical resources.

Lastly, we believe stakeholder centricity will change how we do innovation. It will be critical to better define the meaning of key mental models and concepts as a precursor to more ambitious organizational change. And we will need to create new tools and frameworks that help us do our work. This may take some time, and we should be patient in measuring progress toward these desirable goals, while continuing to advocate for change at all levels of the system.

A special thanks to Executive Creative Director Neeti Sanyal and Strategy Director Jeff Turkelson for their research support.

When a feature launch or key deliverable is on the line, the last thing a product team wants to do is slow down – even when there might be a problem. In the face of breakneck deadlines and competing stakeholder priorities, how can you assess the impact of your work and advocate for a more intentional, ethical approach to technology development?

We know tech products have real consequences in the world. Designers and builders like you are increasingly at the forefront of a shift toward more responsible technology. Yet generating awareness and conversation around tech ethics in an organization can feel like an uphill battle, full-time job, and uncharted territory all rolled into one. That’s why Omidyar Network and Artefact partnered to create the Ethical Explorer Pack, a toolkit to help individuals and teams build technology that’s safer, healthier, fairer, and more inclusive for all.

In this webinar, Sarah Drinkwater, Director of Beneficial Technology at Omidyar Network, and Hannah Hoffman, Design Director at Artefact, share the thinking behind the Ethical Explorer Pack and how you can use the toolkit in a variety of situations during the product development life cycle. Download the free toolkit and learn how to advocate for more responsible tech in your organization, no matter your role.

Check out the resources shared by attendees below, and be sure to sign up for our Impact by Design event series to keep the conversation going.