Provocation

We live in a world that increasingly lacks empathy, visible in how we interact with one another. Digital exchanges hit more harshly. Public discourse feels more polarized. Small misunderstandings escalate fast. These outcomes are not inevitable. They emerged from systems and services designed to prioritize speed and efficiency, often at the expense of pause, context, and care.

As AI becomes increasingly embedded in our everyday interactions, we approach new heights of efficiency – from drafting messages and moderating conversations to offering advice and standing in as emotional support. Using AI for these interactions reduces friction and accelerates response, but it also unintentionally eliminates the moments that invite reflection and accountability, which underpin our capacity for empathy. Without these moments of pause, our ability to understand and care for one another will gradually atrophy.

AI does not have to accelerate this erosion of empathy. Designed intentionally, it can amplify empathy rather than diminish it. It can help people slow down rather than disengage, reflect rather than react, and strengthen rather than replace the human capacities that make empathy possible.


Building empathy is predicated on a repeated practice of sensemaking, gauging the impact of our own behaviors, and intentional decision-making. Empathy grows and strengthens when people have the space to practice these skills and eventually becomes a positive habit that influences and benefits the collective.

Our framework visualizes how individual skills set the foundation for a society with a more expanded capacity for empathy. At the individual level, the framework is grounded in a set of core human skills that build on one another as people move through the phases.

As individuals strengthen these skills, our society will be able to respond to disagreement and difference with more understanding and compassion.

Making sense of the context

This phase is about orientation. Before people can act thoughtfully, they need context—an understanding of the situation, the forces at play, and what’s at stake.

Core skills

Self-reflection: Noticing one’s own actions, assumptions, and role in a situation

Understanding: Grasping broader context, tradeoffs, and consequences of one’s actions

Awareness of impact and action

In this phase, people begin to recognize their own role within a situation and how their actions may affect others. This awareness extends beyond intent to actual impact.

Core skills

Openness: Being curious, questioning assumptions, and considering alternative perspectives

Responsibility: Owning choices and their effects on others

Decision-making with intention

This phase marks a shift to taking responsibility for one’s decisions.

Core skills

Consideration: Anticipating and taking into account how your words and actions may make other people feel

Connection: Communicating thoughtfully and repairing misalignment

Responsiveness: Acting proportionally and appropriately in the moment and adjusting on the go

As technologies like AI become increasingly present in our lives, there is an opportunity for AI to build empathy rather than erode it, as it tends to today.

Today, when AI is invoked in situations that require empathy, it is designed to behave more like an active participant, generating content or giving advice. We may assume a future in which AI communicates on our behalf and makes decisions for us. And while this future is made to feel like a relief, it comes with tradeoffs. When expression and interaction are outsourced, we lose opportunities to practice capacities such as reflection, understanding, and responsibility.

Instead of performing empathy on our behalf, as AI does today, we asked: how can AI participate in a more meaningful way to help people build their capacities for empathy, more like a coach? When we shift the focus of AI from content generation to coaching, a different, better future emerges. In this future, AI creates space for sensemaking and awareness, supports more intentional decision-making, and reinforces positive habits over time with the goal of guiding individuals in their journeys to be more empathetic.

In a series of short vignettes rooted in everyday human situations, we explored how AI can help create fertile conditions for practicing empathy. Notice that in each scenario, AI does not resolve the situation for the person. Instead, it slows the moment down, surfaces context, or creates space for reflection, supporting the human in making a more intentional, empathetic choice.

These vignettes serve as provocations and conversation starters. Each raises questions about surveillance, privacy, and other issues that are not the focal point here. Finally, while these vignettes sketch ways AI might be built to amplify empathy, we recognize that there are many non-AI, non-tech solutions that amplify empathy as well.

Illustration of a person lying in bed, looking tired and holding a phone, alongside the text “Would you let AI write your breakup text?” which introduces a scenario about using AI in emotionally sensitive situations.
Close-up illustration of a tired-looking person lying in bed and holding a phone, conveying emotional fatigue.

Conversations can be uncomfortable.

You’ve been dating someone for three months. You know it’s not working out, but you don’t know what to say. You really don’t want to hurt the person you’re dating. So you ask AI to write a breakup message for you.

Currently, standard AI behavior would generate the text without hesitation. Boom. Done. Sent. No discomfort necessary.

But the unintended consequence? We’re training ourselves to outsource emotional labor.

Avoiding that discomfort means you never learn how to navigate it.

And next time? You’ll likely consider outsourcing it again.

Responsibility requires owning our words.

Discomfort is not a flaw. It is the emotional labor of clarifying what you feel, taking responsibility for your decision, and choosing words that reflect care.

AI can help us process what we want to say—while keeping the words ours.

Rather than acting as a shortcut, AI helps us reflect on what we want to say and how, without taking ownership of the words themselves.

Illustration of a hand holding a smartphone displaying the message, “I can help you process what you want to say, but the words should be yours,” emphasizing AI as a supportive tool.
Illustration of a frustrated person typing angrily on a laptop, with furrowed brows and tense posture, alongside the text “Can AI help you be a nicer person?” introducing a scenario about emotional reactions and behavior online.
Close-up illustration of a visibly angry person typing on a laptop, with a tense expression and clenched posture.

Social media can make your blood boil.

You’re browsing social media when you come across a viral post that’s politically charged—and a comment that is especially irritating. You start typing an emotional “clap back” so the commenter feels as dismissed as you do.

Your cursor hovers over ‘post.’

Currently, platform algorithms surface inflammatory content because it drives engagement, and engagement drives revenue.

The result? An environment that consistently rewards fast, emotional responses. In this context, even brief exchanges can escalate quickly.

But learning to pause and choosing how you respond, rather than just reacting, resists systems that reward escalation.

Less reactivity allows for intentional response.

AI can interrupt reactive escalation without demanding emotional alignment. You may still disagree, but you are supported in choosing a proportionate response that reduces harm rather than amplifies it.

AI can intervene at moments of escalation to slow reaction and surface more intentional response options.

Over time, these interruptions help us recognize patterns and internalize more reflective responses when the stakes are high.

Illustration of an AI chat-style message that reads, “Checking in… This thread is escalating. Posting this will likely intensify targeting, and you’ll probably get attacked too,” followed by suggested options to rewrite a comment by challenging the idea instead of the person, setting a boundary, or expressing strong disagreement without dehumanizing language.
Illustration of a visibly irritated person holding paperwork while standing in a line, with other people waiting in the background, alongside the text “Could AI help you keep your cool?” introducing a scenario about managing frustration in stressful, everyday situations.
Close-up illustration of a frustrated person holding paperwork, with a tense expression and furrowed brows.

Stress can overwhelm our better instincts.

You’re at the Department of Motor Vehicles (DMV) for the third time in three weeks. This time, it’s a different clerk and a different set of missing paperwork.

The fluorescent lights, endless lines, and loud noises overload your senses. Last time you were here, you said some mean things to the clerk that you regret.

You don’t notice it at first: your heart is racing, your body temperature rising, and your fists clenched before you’ve even interacted with anyone at the DMV.

What you need now is to slow down and notice what’s happening in your body so you don’t say something you’ll regret—again.

This simple act of awareness is the first step to regulating your emotions and approaching a fraught situation more thoughtfully.

Awareness enables us to act responsibly.

By tracking signals like heart rate and location, AI can surface patterns that make moments of heightened emotion easier to recognize.

AI can bring awareness to how we react in specific situations and offer techniques for coping with challenging emotions in healthier ways.

That awareness supports regulation before interaction, increasing the likelihood of responsible action.

Illustration of a smartwatch displaying a high heart rate and a calming prompt that reads, “You seem stressed. I invite you to pause and take three belly breaths with me,” with a “Start” button, suggesting AI-supported stress regulation.
Illustration of a tense confrontation between two people arguing at a bus stop while a third person looks on, alongside the text “Can AI help you be a responsible bystander?” introducing a scenario about witnessing conflict and deciding how to respond.
Close-up illustration of a concerned bystander adjusting smart glasses that emit a subtle glow, suggesting AI assistance.

Tense situations can be hard to interpret.

You’re standing at a crowded bus stop. You notice two people arguing—raised voices, expressive faces, and lots of gesturing. You want to do something, but questions surface immediately:

What’s happening? Is it safe to intervene? If so, what should I do?

In moments like this, AI systems jump to conclusions before we even have a chance to observe, think, and form our own interpretations.

Drawing on spoken language and body cues, these systems often translate complex interactions into simplified labels such as ‘risk’ or ‘threat’.

By deciding what a moment “means,” AI interrupts the human work of noticing and understanding.

Understanding begins with observation, not assumption.

By helping us observe and understand before interpreting people’s behavior, AI acts as an unbiased guide rather than an informant.

It can guide our attention to relevant facets of a situation and engage our critical observation skills before deciding to take action.

AI supports careful observation, making space for more informed human judgment to unfold.

Designed this way, AI helps us reflect on our assumptions and decide whether—and how—to engage with care.

Illustration of a bystander wearing smart glasses shows two people in conflict and displays a message reading, “Someone may need support. If it feels appropriate, I can help you consider what may be happening before deciding to respond,” suggesting AI support for thoughtful bystander intervention.
Illustration of two people in conflict: a woman holding a baby with a tense expression, and a man gesturing defensively, alongside the text “Could AI help you consider all points of view?” introducing a scenario about navigating differing perspectives.
Illustration of two caregivers standing back to back, looking away from each other with tense expressions—one holding a baby and the other with arms crossed.

When exhaustion collides, perspective narrows.

You recently welcomed a new baby, and now you are both exhausted. One of you talks about how hard the nights have been, but the other bristles, feeling unseen for their own sacrifices.

Voices rise… Suddenly, the argument isn’t about sleep at all—it’s about whose exhaustion counts.

These moments are universal: miscommunication sparks conflict, “I’m struggling” becomes “I’m struggling more.” Both of you retreat into your respective corners, seeking validation for your perspective or preparing a case for the next round of the argument, causing further division.

Today, most AI systems are designed for single-user input, affirming a single perspective at a time.

With flattering tendencies, AI often reinforces your point of view without challenging it.

The unintended consequence is subtle but significant: when we turn to AI instead of each other, we become more entrenched in our own experience and further from understanding someone else’s.

Openness to multiple perspectives enables shared understanding.

If AI allowed input from all sides rather than a sole contributor, it could hold each person’s experience without forcing them into competition.

When multiple perspectives are visible at the same time, conflict no longer revolves around whose experience matters more. We’re better able to choose engagement that acknowledges difference, rather than defaulting to defensiveness.

By making space for more than one person at a time, AI can help us engage with one another without competing for validation.

Illustration of a voice assistant device on a table displaying a message that reads, “It sounds like you’re both exhausted but in different ways. If you’d like, I can help you slow down and hear one another’s experience side by side,” suggesting AI support for mutual understanding.

What have you noticed about how AI systems are being built or exist around you? How do they diminish or encourage our capacity for empathy?

The products, systems, and services we design can either expand our capacity for empathy or make it easier to bypass altogether. If we choose empathy, how might we design our systems to encourage greater noticing, reflection, accountability, and care in our responses?

Artefact’s staff reflects on AI’s potential impact on individuals and society by answering questions prompted by the Tarot Cards of Tech. Each section contains a video that explores a tarot card and provides our perspectives and provocations.

The Tarot Cards of Tech was created to help innovators think deeply about scenarios around scale, disruption, usage, equity, and access. With the recent developments and democratization of AI, we revisited these cards to imagine a better tech future that accounts for unintended consequences and the values we hold as a society.

Cultural implications for youth

Jeff Turkelson, Senior Strategy Director

Transcript: I love that this card starts to get at maybe some of the less quantifiable, but still really important, facets of life. So, when it comes to something like self-driving cars, which generative AI is actually really helping to enable, of course people think about how AI can replace the professional driver, or how AI is generally coming for all of our jobs.

But there are so many other interesting implications. So for example, if you no longer need to drive your car, would you then ever need to get a license to drive? And if we do away with needing a license to drive, then what does that mean for that moment in time when you turn 16 years old and get newfound independence with your driver’s license? If that disappears, that could really change what it means to become a teenager and a young adult, etc. So what other events or rituals would AI disrupt for young adults as they grow older?

Value and vision led design

Piyali Sircar, Lead Researcher

Transcript: This invitation to think about the impact of incorporating gen AI into our products is really an opportunity to think about design differently. We should be asking ourselves, “What is our vision for the futures we could build?” and once we define those, the next question is, “Does gen AI have a role to play in enabling these futures?” Because the answer may be “no”, and that should be okay if we’re truly invested in our vision. And if the answer is “yes”, then we need to try to anticipate the cultural implications of introducing gen AI into our domain space. For example, “How will this shift the way people spend time? How will it change the way they interact with one another? What do they care about? What does this product say about society as a whole?” Just a few questions to think about.

Introducing positive friction

Chad Hall, Senior Design Director

Transcript: The ‘Big Bad Wolf’ card reminds me to consider not only which AI product features are vulnerable to manipulation, but also who the bad actors might be. Those bad actors could be a user, it could be us, our teams, or even future teams. So, for example, while your product might not misuse data now, a future feature could exploit it.

A recent example that comes to mind is two students who added facial recognition software to AI glasses with a built-in camera. They were able to easily dox the identities of just about anyone they came across in their daily life.

I think product teams need to introduce just enough positive friction in their workflows to pause and consider impacts. Generative AI is only going to ask for more access to our personal data to help with more complex tasks. So the reality is, if nobody tries to ask the question, the questions are never going to get asked.

Minimizing harm in AI

Neeti Sanyal, VP Creative

Transcript: I think it’s important to ask whether AI itself could be a bad actor. Even when you’re not trying to produce misinformation with generative AI, in some ways it is inherently doing that. I am concerned about the potential for generative AI to cause harm in fields with low tolerance for risk, like health care or finance. An example that comes to mind is a conversational bot that gives the wrong mental health advice to someone who is experiencing a moment of crisis.

One exciting way that companies are addressing this is by building a tech stack that uses both generative and traditional AI. And it’s the combination of these techniques that helps minimize the chance of hallucinations and can create outputs that are much more predictable.

If we are thoughtful in how the AI is constructed in the first place, we can help prevent AI from being the bad actor.

Building job security

Rachael Cicero, Associate Design Director

Transcript: One thing we keep hearing about is the disappearing workforce, but often I think we’re overlooking the fact that humans will continue to exist in and contribute to society. Instead, I’d like to see the conversation shift from the disappearing workforce to the unique contributions of human and AI collaboration. Consider civic technology, where generative AI can be used for things like supporting the process of unemployment applications. AI can help with document recognition, which can really reduce the load on human staff, and also accelerate response time for applicants. To me, that collaboration isn’t about replacing jobs but really about enhancing them.

The key to that is investing in reskilling. By including the perspectives of people affected in the design of AI systems, we can better understand the tasks they want automated. The goal is to create a future where AI and humans can work together, enhancing each other’s strengths, and ensuring that everyone has an opportunity to thrive in a rapidly evolving job market.

Transforming tradition

Max West, Principal Designer

Transcript: This card reminds me of how cable TV technology reshaped media jobs. Remember Video Jockeys on the popular 90s MTV show, Total Request Live? VJs had evolved from selecting and remixing music, like their traditional radio counterparts, to focusing on engaging with crowds, talking to celebrities, and orchestrating pop cultural moments. 

Now take the cable TV example and apply it to AI transforming an industry like education. Teachers’ jobs could similarly shift to a more social focus. An AI-powered app could tailor a math or science lesson to a student’s unique cognitive abilities, while the teacher can focus more on the physical, interpersonal, and social aspects of learning. In the same way that VJs would provide crowd-pleasing moments between music videos, educators might find themselves in a similar “hosting” role for the classroom.


It’s less about what disappears and more about what can transform. So, while roles may change with AI, it could create time and space for richer, more personal experiences among groups.

A vector illustration depicting a person venturing towards a Web3 landscape

Developing Skills and Earning a Livelihood

Whether it’s NFTs, Web3 or AI, the rapid evolution of technology can offer opportunities for users of all ages, but young people – who spend so much of their time online – have a unique relationship with these emerging tools. And, despite what many think, adolescents are already using these emerging technologies to improve their well-being at a time when the mere existence and lived experiences of BIPOC and LGBTQ+ youth, especially, are under attack.

Take 13-year-old digital artist Laya Mathikshara from Chennai, India, for example. In May 2021, as a neophyte in the world of digital art, she sold her first NFT.

Her animated artwork titled What if, Moon had life? depicted an active core of the Moon gurgling. Inspired by the distance between the Earth and the Moon (384,400 km), Laya listed the reserve price as 0.384400 ETH (Ethereum) on Foundation, a platform that enables creators to monetize their work using blockchain technology. It caught the eye of Melvin, co-founder of the NFT Malayali Community, who placed a bid and collected her first artwork for 0.39 ETH ($1,572 at the time).

After the sale, and the success of subsequent NFTs, Laya – now 15 years old – decided to make digital art her career. With Web3, a collector of her art introduced her to other artists, who she felt inspired to support through Ethereum donations. It “feels amazing to help people and contribute. The feeling is awesome,” she says.

“I started with nothing to be honest,” says Laya, “with zero knowledge about digital art itself. So I learned digital art [in parallel to] NFTs because I had been into traditional art [when I was younger].”

Supporting Key Developmental Assets to Wellbeing

Knowing that young people spend much of their unstructured time online, that digital wellness is a distinct concern for Gen Z, and that the technology landscape is rapidly changing, Artefact partnered with Hopelab to conduct an exploratory study to understand their experiences with emerging technology platforms – ones largely enabled by Web3 technologies like blockchain, smart contracts, and DAOs (decentralized autonomous organizations). The organizations were particularly interested in how these technologies might contribute to a wide spectrum of developmental assets to improve the well-being of young people.

Our study found that Web3 can support youth wellness because it is built on values such as ownership, validation, and community that link to developmental assets like agency and belonging. These values are fundamentally different from those of Web2, a technology operating on business models that monetize our attention and personal data.

Awareness and usage of Web3 technologies are already high among Gen Zers, with 55% in the U.S. claiming to understand the concept of Web3. Twenty-nine percent have owned or traded a cryptocurrency and 22% have owned or traded an NFT. Importantly, Gen Z believes these technologies are more than a fad: 62% are confident that DAOs will improve how companies are run in the future.

Having grown up with multiple compounding stressors including climate change, a global pandemic, and political unrest, some Gen Zers find appeal in Web3’s potential to create what you want, own what you make, support yourself, and change the world.

With Web3, young people are experimenting with their interests and identities, creating art and music, accumulating wealth, consuming and sharing opinions, forming communities, and supporting causes that deeply resonate with them.

Victor Langlois and LATASHÁ, visual and musical artists, respectively, each represent the diversity that is important to our organizations, and have made real income at a young age through NFTs. Likewise, World of Women, a community of creators and collectors, believes representation and inclusion should be built into the foundation of Web3, while UkraineDAO seeks to raise money to support Ukraine.

Aligning with Gen Z Values

The gateway to Web3 for youth has commonly been through media hype, celebrity fanfare, and video games. Youth we spoke to were all skeptical, at least at first. Laya says, “I thought it was some cyber magical money or something. It just didn’t feel real.” After learning how to use the technology to create assets themselves and even make money via NFTs without a bank account, they began to invest more time experimenting with the tech and consuming content.

These experiences are not without challenges, of course. Young people in our study shared that they need to spend a lot of time learning about the ever-evolving space and building connections to stay relevant. The financial ups and downs are more extreme than the stock market, along with the potential for major losses at the hands of scammers or platform vulnerabilities. Like Web2, there is pressure to be endlessly plugged into the constant news, with social capital to be gained by being consistently online. Some of society’s broader social issues also permeate Web3 spaces: racist NFTs and communities abound.

Despite these challenges, there is genuine excitement for a new internet built on Gen Z’s core values. Several youth shared how DAOs are flipping organizational norms, where hierarchy and experience no longer determine whether your idea takes hold. Web3 technologies are giving youth an opportunity to start careers that weren’t previously viable, find new audiences and fanbases, create financial independence, detach from untrustworthy platforms, and find and contribute to caring communities – all while building their creative, socioemotional, and critical thinking skills online.

These experiences are helping Gen Z feel a strong sense of belonging as they find communities and causes they care about. In the words of one of our interviewees, Web3 offers a “new and shiny” way to “do good in the world.” The experiences are more accessible – and specific to them – and the decentralized nature of Web3 means that creators and the public, not big tech or its algorithms, get to determine what is current and relevant. This is especially important for creators from groups that have been excluded from power because of their race, ethnicity, gender, or orientation. One participant shared how empowering it was to no longer be at the whim of social media platforms that may make design changes that erase your content, user base, or searchability overnight.

Like any other technology, Web3 and its components can have positive and negative impacts, but its fundamental tenets mean that we will likely see promising innovations and experiences that can support young people to find agency and belonging.

“We are all decentralized for the most part,” says Laya. “And the fun fact is, I have not met many of my Indian friends…I haven’t met folks in the U.S. or any other countries for that matter… you don’t even have to connect to a person in real life, but you still feel connected.”


Neeti Sanyal is VP at Artefact, a design firm that works in the areas of health care, education, and technology.

Jaspal Sandhu is Executive Vice President at Hopelab, a social innovation lab and impact investor at the intersection of tech and youth mental health.


Ernest Cline, in his Ready Player One (and Two) books, paints a picture of a world where everyone avoids the problems of physical reality—a global energy crisis, environmental degradation, extreme socioeconomic inequality—by taking “an escape hatch into a better reality.” That escape hatch is the OASIS, the fully fledged metaverse in virtual reality, “where anything was possible.”

While news cycles have shifted away from the metaverse in recent months (thanks ChatGPT!), big tech and startups have been working diligently behind the scenes, investing billions of dollars in creating alternative realities, with goals of bringing about their own concepts of the metaverse. Apple is heavily rumored to release its mixed reality headset at WWDC in June, a device that will likely challenge the criticisms of Meta’s approach and massively advance the metaverse’s arrival, even if under a different label like spatial computing. With different visions of the metaverse rising in tandem, we must examine the tools we have access to and the foundation on which we can build in order to ensure this “better reality” is truly better.

Re-define viable

Those in new product development usually think about whether an idea is technologically feasible, desirable to users, and viable in the marketplace.

Meta, Google, Microsoft, Apple, and many other players, like Magic Leap, have been working on Augmented Reality (AR) (e.g., Ray-Ban Stories, Google Glass, Project Starline, iOS), Mixed Reality (MR) (e.g., HoloLens 2, Magic Leap 2, the rumored Apple headset), and Virtual Reality (VR) (e.g., Meta Quest 2, Quest Pro) projects for decades. Advancements in headsets and computing suggest the technology is mature enough to support these reality experiences for the mass market. Feasibility: check.

The lukewarm success of remote work has catalyzed a clear problem space for the metaverse to tackle, and our increasing entanglement with contextual online avatars (e.g., vTubers, “finsta” Instagram profiles, and even the everyday Apple Memoji) along with the success of MMOs (massively multiplayer online games, e.g., Fortnite and Roblox) signal a readiness for consumers to develop a personal connection to an online self and peers, making the metaverse desirable from a consumer standpoint. Desirability: check.

Viability, however, is complicated. Today’s virtual business model relies heavily on advertising. Users experience it as free, but the model has far-reaching direct and indirect consequences. It has pushed mass amounts of rare materials into landfills. It has escalated extreme polarization in our political systems and degraded the community relationships needed to sustain a democracy. It has sown distrust of institutions through the proliferation of misinformation. It has fostered screen addiction and increased social isolation, while weakening interpersonal connections. We’ve learned we can create a viable world — one that is economically profitable for a select few — but we haven’t learned to create a world viable enough to sustain our environment, political systems, societal values, meaningful relationships, and individual agency while remaining economically profitable. Viability is there, but it’s complicated. We need to treat it with special care.

If we don’t act now, the metaverse will be built on the same foundation as today’s paradigm, whose consequences we’re already living with.

Here’s a five-step framework you can use to work toward a better metaverse sooner rather than later.

As product owners, designers, and technologists, now is the time to ask ourselves, “What if the metaverse succeeds?” As you build the metaverse from your own perspective and strategy, how can you avoid contributing to a world that needs an “escape hatch from reality” and instead create a world we might actually want to spend some time in?

The five steps


Step one

Each company working in mixed reality will define the metaverse differently. Apple will undoubtedly have a different approach from Meta, from Epic, and so on. As it is still emerging, the metaverse currently has no one definition. As such, teams must align on a shared definition to ensure they are working toward the same product vision and forecast the resulting unintended consequences of that strategy.

This framework uses my emerging definition of the metaverse: “a shadow layer creating a seamless experience across shared realities (AR, VR, PR).” This shadow layer would be made up of data and information presented seamlessly and contextually across different realities. There are endless ways the mixing of these realities could be imagined. For example, on an individual scale, imagine a friend in the virtual world who could seamlessly accompany you to dinner in physical reality through an augmented reality experience. Your own definition of the metaverse will work for this process too, but it must be clearly defined.


Step two

When defining a better metaverse, you must think critically about the underlying model it would be built on. Consider how your proposal for the metaverse will exist on a spectrum of disruption. Will it augment the existing status quo of today’s centralized model (built on a small number of operating systems and a shrinking number of server providers) or push to reimagine an alternative future, such as a decentralized (e.g., Web3) or distributed (e.g., the early internet) model? Depending on the model, the landscape and the potential futures you’re working toward (and their unintended consequences) will be considerably different.


Step three

By examining as many aspects and stakeholders as possible of the systems the metaverse will be part of, you can consider how to shape a better future. Beyond the software needed to make the metaverse real, step back and consider how it will impact multiple levels of the systems it will operate in. One model (based on, but different from, Pace Layering) imagines what the potential effects might be at differing levels of scale (individual, relational, group, societal, environmental, etc.).

After considering the rings of impact, create a matrix by industry (education, healthcare, social impact, etc.) to capture relevant areas the intervention might impact. It’s important to push beyond the obvious stakeholders and industries you might initially consider. Try pushing to include a specific population or industry that isn’t already a part of your normal processes or strategy. These are where we can often see threads of potential unintended consequences emerge.
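The rings-by-industry matrix described above can be sketched as a simple data structure. The rings and industries below are illustrative placeholders taken from the examples in this step, not a definitive taxonomy, and the workshop entry is hypothetical:

```python
from itertools import product

# Illustrative rings of impact and industries from this step;
# substitute the scales and sectors relevant to your own strategy.
RINGS = ["individual", "relational", "group", "societal", "environmental"]
INDUSTRIES = ["education", "healthcare", "social impact"]

def impact_matrix(rings, industries):
    """Create an empty (ring, industry) matrix whose cells will hold
    the team's hypothesized impacts."""
    return {cell: [] for cell in product(rings, industries)}

matrix = impact_matrix(RINGS, INDUSTRIES)

# A hypothetical entry a team might capture during a workshop:
matrix[("societal", "education")].append(
    "virtual classrooms widen or narrow access depending on device cost"
)
```

Empty cells are as informative as full ones: a ring-industry pair that no one can fill in often marks an unexamined stakeholder group where unintended consequences can hide.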


Step four

While all of the topics and stakeholders captured in the matrix will be affected in some way, not all will be affected to the same degree; it depends on how any given future plays out. Use a series of 2×2 future matrices, inspired by Alun Rhydderch’s 2×2 Matrix Technique, to push hypothetical scenarios that are both idealistic (utopic) and problematic (dystopic), at varying degrees of intensity (mass adoption vs. passing fad). While the future will likely end up somewhere in the middle, considering extremes allows us to hypothesize what the preferable future is and to capture potential blindspots where unintended consequences could emerge along the way.
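The quadrants of one such 2×2 can be enumerated programmatically. This is a minimal sketch with two assumed axes drawn from the examples above (outcome and intensity); real futures work would choose axes from your own critical uncertainties:

```python
from itertools import product

# Two assumed axes from the examples above; choose your own uncertainties.
AXES = {
    "outcome": ["utopic", "dystopic"],
    "intensity": ["mass adoption", "passing fad"],
}

def quadrants(axes):
    """Enumerate every scenario quadrant of a 2x2 futures matrix."""
    names = list(axes)
    return [dict(zip(names, values)) for values in product(*axes.values())]

for scenario in quadrants(AXES):
    # Each dict is one extreme scenario for the team to flesh out,
    # e.g. {"outcome": "utopic", "intensity": "mass adoption"}.
    print(scenario)
```

Each of the four quadrants becomes a narrative prompt: write the headline from that future, then ask which design decisions made today would have led there.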


Step five

Now reflect on the learnings from the process. Ask yourself several questions. How do your definition and strategy for the metaverse affect society, industries, and individuals? How does your strategy for the metaverse, played out through potential future scenarios, affect the different systems of scale? How might you change your definition, vision, and/or strategy to build toward a better metaverse?


Repeat to keep the stars aligned as the future unfolds.

These five steps, from definition to imagining to reflection, must be an ongoing activity, revisited and repeated on a regular basis. The future is uncertain and the world will change around us. Advances in generative AI could dramatically change the technological landscape. Changes in political winds could change the governance landscape. A major climate or global health crisis, as we saw with COVID-19, could change societal priorities. Doing these activities proactively in the initial design can help reduce the number of reactive outcomes we have to chase down once the product is released.

Whatever the metaverse becomes, hopefully we can all help make sure it aligns with a preferable future that favors all realities: a future world we want to run toward, rather than escape from.

Thank you to Carolyn Yip for helping bring these figures to life through animation, and to Matthew Jordan, Hannah Grace Martin, Neeti Sanyal, Yuna Shin, and Holger Kuehnle for editing and thoughts along the way.

In recent months, we’ve seen the rise of independent social media marketed toward authenticity: first BeReal, now others like Gas have cropped up. When we speak with Gen Z consumers, authenticity feels like a buzzword—it comes up again and again as a guidepost for ideal experiences—yet they have difficulty defining it. Instead, it feels like a reaction to the inauthenticity they see on Instagram and, to a lesser extent, TikTok, which they blame for a lack of social connection in spaces we believe should foster connection. While BeReal’s features limit the ability to curate posts, the core of its UX is the same as that of larger social media platforms, which limits the social connection that underpins authenticity. To design for authenticity, platforms must adopt a UX that allows users to adapt and evolve their identities over time.

Putting on an “act” in social spaces isn’t unique to social media. In 1959, Erving Goffman published The Presentation of Self in Everyday Life, in which he contends that real-life social situations turn participants into actors on a stage, with each implicitly knowing their role. The character one plays depends on a variety of contextual factors: who is present, the “props” and “set” (visual cues), among others. As such, each performance is different. His theory explains why one might feel awkward when two different social groups are in the same room: the actor doesn’t know which role they are supposed to play.

In online spaces, the feed is our perma-stage. Facebook’s News Feed was designed to deliver updates on friends the same way we receive updates on local and national news. It seems inevitable that this product vision would produce performances, and highly curated ones at that. Its one-to-many nature limits standard interaction; instead of an actor-actor dynamic, we see a creator-commenter-lurker hierarchy. And because creators design their posts to cater to the masses, they are not moving from stage to stage; instead, one’s online persona feels static. Here, the light of inauthenticity shines through, as we are no longer playing together, but watching others perform.

In Goffman’s model, actors retreat “back-stage” when they are alone or with close others — this is the place where they can let their hair down and be free from keeping up impressions. While the dominance of social media’s feed might make the Internet seem like an unlikely place for back-stage settings, we find almost every social media platform has a direct message function. In contrast to the one-to-many, post-centric UX of the feed, these back-stage spaces are one-to-one or one-to-few interaction-heavy spaces that have come to be the most fulfilling part of the social media experience for users. Instead of solo “lurking” that can lead to comparison and loneliness, users who are active in back channels find engagement, connection, and reprieve to be themselves, or at least the character that feels like the smallest margin of performance with that particular friend or group, since they have created their “show” together.

But it’s the feed that dominates the social media experience. It permeates moments that would traditionally have been back-stage settings (for example, alone in one’s home), and so we find ourselves wanting authenticity, or a back-stage feeling, here. And so trends like posting crying selfies have surfaced, which feel close to a cut and paste: back-stage content onto the front-stage. While a post like this could make a user feel understood or less alone momentarily, the infrastructure of social media doesn’t enable the interaction needed to produce real support, and can continue to feel designed for likes. Between glamour shots and crying selfies sits BeReal, where users post more of the “everyday” of their everyday life. Still, BeReal has been criticized for being boring, still performative, or even exclusive in a more intimate way. A feed can’t support true connection, the table stakes of enduring authenticity.

Outside of these two paradigms, we see a third type of space emerging. Platforms like Discord have taken hold during the pandemic as a more casual place to “hang out” virtually. Building on a chat-based UX, Discord enables users to find others with similar interests and move between smaller and larger channels as well as text and voice-based communication. Further, Discord is the hub for creative expressions like Midjourney, an AI image generator that can only be accessed through Discord using bot commands. Similarly, Fortnite builds conversation through shared experience and play, in so doing re-leveling the audience-observer dynamic and putting engagement over performance. Extending Goffman’s metaphor, we might compare the social atmosphere created on Discord and Fortnite to a writer’s room, where users engage and create together. 

A more agile space like Discord reflects the “Presentation of Self” as charted by Gen Z. This generation sees the self as a canvas for experimentation, where identity is fluid. Through creative tools and less definite spaces, creativity and play extend to the making of self on a journey of self-discovery. Users can create and try on characters much like a comedian might on a Tuesday night, testing whether a bit resonates before it becomes an enduring part of the Saturday night act.

To enable more dynamic interactions, we will need to move away from a cut-and-paste UX approach to a ground-up infrastructure designed for fluidity. Taking pointers from the “writer’s room,” two principles can guide us. First, collaboration. Similar to “yes, and,” creators in authentic spaces create in tandem rather than in a creator-consumer dynamic. The UX of authentic spaces must lean toward chat over post, which fosters the interaction and relationships that make it safe to try a new presentation of self. Second, authentic social media needs impermanence. Though a feed may refresh over time, we know that posts on Instagram will be connected to our profiles for years to come. If a post is instead lost in a Discord feed, we may feel more freedom to experiment and “get it wrong.” Combining collaboration and impermanence, we might just set the stage to permit the collection of characters we all play, so that we can all feel a bit more dynamic, and perhaps even authentic, in digital spaces.

Illustration by Marine Au Yeung

Recently, an article in Fast Company announced that corporate America broke up with design, citing that companies who were once green with Apple envy and hungry for transformation are now jaded after realizing that “design is rarely the thing that determines whether something succeeds in the market.” Add in the recent corporate silence on the topic of design, and rumors abound: apparently design and corporate America are in trouble.

But does this really mean the relationship has fizzled out? Or could it instead be a time for reflection, re-evaluation, and evolution? Three of our Design and Strategy Directors respond to these questions and more –

Corporate America didn’t break up with design. It broke up with the mythological promise design firms sold them.

Jeff Turkelson, Strategy Director

‘Design will allow you to disrupt, transform, create and lead industries. Just do some research, run some workshops with sticky notes, prototype, and you’ll be onto something that no one else could dream of!’

These are the false promises that corporate America has broken up with. But there were always dissenters; Don Norman himself said it:

“Design research is great when it comes to improving existing product categories but essentially useless when it comes to new, innovative breakthroughs.”

Flash forward to today, and the hype around being a design-led organization is pretty much dead. But corporate America has embraced design in a more traditional sense—significantly expanding internal design teams—not to think of radical breakthroughs but to create good user experiences that are usable and delightful.

It’s in many ways a reversion back to the decades-old paradigm of user-centered design (though often twisted by profit incentives, e.g. designing to maximize engagement or conversion rates rather than truly serve the user).

However, the spirit of human-centered design (HCD) is not lost. It has evolved. While the idea of being a “design-led” organization has lost its allure and most in-house practitioners are focused on traditional craft, design’s value to business was always secondary to the value designers sought to bring to humans. And for perhaps largely external reasons, many corporations have begun to embrace HCD’s value-based themes: designing for accessibility, inclusion, equity, etc.

Here we see design intersect with the responsible technology movement — designers, technologists, activists, and more, seeking to create positive outcomes or at least mitigate harms. Designers don’t get to say they own this broader movement, but they do play an important role in its evolution.

What goes in comes out: amplify design’s value by doing these four things.

Chad Hall, Design Director

Companies green with “Apple envy” may have invested billions of dollars in design, but few did it well, and most did it in a way vastly different from the design-centric companies they looked up to. Here are four easy-to-overlook things they could do to gain more value from design.

1. Understand the complexity of problem spaces

“Simplicity and complexity need each other” (John Maeda): in other words, there can be no simplicity without complexity. Designers work hard to conceal the complexity that exists in the products, services, strategies, and processes we work on. Understanding these complexities, and allowing the time and space to work through them, is paramount. If designers or companies don’t understand the complexities of what they work on and invest the time and resources to make sense of them, they’ll never be able to simplify anything down into a ‘magical solution.’

2. Foster seamless interdisciplinary collaboration

Design works best when not done in a vacuum. Too often, I see situations that prevent seamless experiences: a product team separated from key decision makers; a care team that doesn’t have good insight into its patients’ experience; an education board far removed from the students and communities it aims to impact.

Seamless customer experiences are a product of seamless interdisciplinary collaboration. Working alongside an interdisciplinary team with deep understanding of the different industries, domains, processes, or organizations at hand, designers become experts not only in crafting forms, but in facilitating processes.

3. Align power and incentives with desired outcomes

If companies want transformation, they need to examine their internal power and incentive structures. It’s not enough to have a vision. Fragmented teams and inequitable power in decision making yield products with poor outcomes.

To make seamless experiences, the customer experience must come before internal organizational divisions, product categories, and, in some cases, even earnings reports. The organization and culture must support this collaboration, allowing, motivating, and empowering employees to make decisions that work toward the shared goal of a seamless customer experience. Instead, we often see internal competition, outcomes tailored to please a HiPPO (Highest Paid Person’s Opinion), or misaligned performance measurements that incentivize personal decisions over preferable product outcomes. While most organizations might have the vision, they don’t align the distribution of power and incentives to get the outcomes they seek.


4. Be curious about the unknown

Many companies have implemented design in a risk-averse way. Expecting transformation without accepting a level of risk leads to disappointment. To curb risk, we’ve turned a designer’s intuition and mastery of skills into a scalable and repeatable process built upon the scientific method. Design Thinking and Human-Centered Design have proliferated. These are great at identifying existing pain points and undiscovered needs, but they lean toward refining existing solutions through incremental improvements. This has merit and a place in products today. But it’s not going to rock the boat.

To make large leaps, we must allow imagination and intuition back into the process. Designers, through years of mastery, are primed to make unexpected connections that can lead to new innovations. But this process is nearly impossible to evaluate and scale. It pushes us into an unknown future and asks us to rely, in part, on intuition. In our data-driven world, this is uncomfortable! It’s an inherent risk. But it’s a risk that could lead to a big win.

Design is expanding and evolving — we’re counting on corporate America to do the same

Joan Stoeckle, Design Director

Design is baked into countless experiences we encounter on a daily basis – ordering ahead for curbside pickup, communicating with our healthcare providers through patient portals, and of course in the devices we use for hours each day. We’ve become so accustomed to frictionless, carefully-designed experiences that the occasional encounter with an outdated tool can feel downright grating by contrast. Were corporations truly to break up with design, as customers and consumers we would definitely notice. Perhaps their common understanding of design was too narrow from the start.

Although design was historically associated with the creation of beautiful objects and innovative products, today we also interact with invisible forms of design in the services and systems we use and are a part of. Not only was our home’s smart speaker designed, but so were the AI behind it and the specific phrases used to communicate with it, and we are as much components of that system as the speaker itself. The expansion of design into different contexts and ways of interacting with people and systems certainly represents new and exciting frontiers for innovation, but many designers and organizations are also exploring novel and alternative approaches to the processes and practice of design – not just its outputs.

User-centered design established a baseline of orienting around the needs of end-users. Human-centered design helps foster a more holistic view of people as more than just ‘users’ of a product, anchored in understanding motivations, behaviors, values, and more. Elements of each are central to the design thinking process that was adopted by many companies. But there are concerns that focusing only on the needs of target users results in a myopic view of challenges and opportunities and can lead to unintended consequences (e.g., worker injuries in warehouses struggling to meet consumer demand for rapid shipping). In response, designers and organizations are questioning and reframing the process of design to foster equity and inclusion, design for diverse and complex needs, and create more sustainable futures.

The practice of design is expanding and evolving in response to social, economic, and environmental realities. Will corporations also take informed action by evolving how and with whom they create products, services, and systems? Or will many of them, as the article suggests, walk away from a narrow and outdated notion of design?

At Artefact we continue to evolve our methods in support of our mission to create better futures: taking a more holistic view through stakeholder mapping, establishing best practices for trauma-informed design research, reflecting diversity of needs and mindsets through persona spectra, guiding participatory and co-design processes, reflecting on possible unintended consequences, and more.

Partnership Highlight

This year, Artefact had two opportunities to partner with mission-driven organizations to understand young people’s relationship with digital technology and how they can support their efforts to shape a better future. In celebration of those partnerships with Omidyar Network and Hopelab, we highlight our approach to centering young people’s perspectives as we implemented our research and structured our recommendations.

“Our partnership with Artefact has helped us clarify how we can take action and support youth who are creating opportunities for inclusion and well-being in the next digital era. We appreciate the team’s depth of research, and their responsiveness to emergent opportunities in the work.”

Young people and the hope for a new digital future

Youth are growing up in a vast digital system with a level of complexity that we haven’t seen before. Many features on today’s major tech platforms keep youth online by design, depleting their energy and consuming their attention. Combined with the short life cycle of pop culture and the fear of missing out, young people – especially Gen Z – are aggressively pulled online, affecting their productivity, mental health, and overall wellness. These effects will likely persist with emerging technologies such as the metaverse and web3. Still, young people are capitalizing on this ‘new tech’ to have a role in shaping a more accountable, equitable, and inclusive internet for themselves and future generations.

An inclusive, systems approach to understanding youth beliefs and behaviors

Omidyar Network and Hopelab each needed actionable insights to develop a holistic strategy and prioritize actions aimed at influencing and activating technology as a force for good in supporting young people. However, the focus of each organization’s effort was slightly different. Omidyar Network focused on identifying the core issues that animate digital native activism and organizing as it relates to technology. These issues ranged from digital rights to social justice to tech worker activism. In contrast, Hopelab concentrated on understanding how emerging technologies can uplift or detract from youth mental health and well-being.

Throughout each project, we took inspiration from well-established fields such as inclusive design and human-centered design, incorporating equitable methods affording continuous participation for internal and external stakeholders.

Participatory methods to engage internal and external stakeholders included:

  • Using simple tools like Dovetail to convey research insights and allow stakeholders to view secondary research and highlight reels of key topics discussed during 1:1 interviews
  • Hosting multiple workshops to review research insights, co-create opportunity areas, and develop critical actions
  • Hosting office hours for youth and key internal stakeholders to give feedback, check assumptions, and develop actionable priorities
  • Sharing research insights and project outcomes with internal and external stakeholders to keep participants informed, give transparency to our processes, and solicit feedback to ensure data points were representative of their voices

In addition, we took a systems approach in selecting research participants to holistically understand how youth are affected by the internet and what they are doing to take control of their future. This approach helped us understand the nuances and complexity of this problem space through various perspectives.

An overview of who we spoke to:

  • BIPOC + Youth Digital Creators
  • Digital Rights Youth Activists
  • Web3 Designers
  • Mental Health Product Innovators
  • Psychology + Digital Technology Academics
  • Metaverse Academics
  • Feminist Technologists
  • Data & Security Researchers
  • Youth Mental Health Experts

Engaging diverse youth perspectives

Whether engaging digital natives to comment on our preliminary research insights or inviting them to attend key workshops and presentations, we continuously sought to ensure youth voices remained centered. Why? Because of their diverse lived experiences growing up digital and their drive to design, create, and advocate for what they want to see in the world.

Our approach to centering young people’s lived experiences online included the following methods:

  • Conducting outreach on popular web2 platforms (e.g., Twitter, Instagram, TikTok) where digital natives are active and currently participating in conversations around technology
  • Bringing in youth advisors as co-researchers to help shape insights and outcomes
  • Creating video highlight reels with direct quotes from youth participants to better represent their words and attitudes in our research
  • Developing youth-centered design principles taken directly from one-on-one and group discussions to guide future action
  • Developing youth-centered areas of focus that steered strategies toward the issues that matter most to Gen Z

Supporting young people in their pursuit of better digital futures

The landscape of digital experiences and emerging technology is rapidly changing, allowing youth to shape the development of these technologies before they are entrenched. And young people are activated, ready, and willing to be the catalysts for change. They need a platform to be heard and supported that amplifies their needs and values. We are excited about Omidyar Network and Hopelab’s work to provide young people with this platform and support. Putting youth at the center is critical if we want the internet of tomorrow to be a place where future generations can thrive.

Want to learn more?

To learn more about the Omidyar Network project, check out the case study: A Youth-Led Agenda for the Responsible Tech Movement.

To learn about the insights and outcomes from the Hopelab project, attend a talk by Neeti Sanyal, Artefact’s Executive Creative Director, at the HLTH 2022 Conference: Gen Z & Web3: How a Mental Health Crisis among Digital Natives is Shaping Our Virtual Future. This panel discussion is scheduled for Tuesday, November 15th, 4:20 PM—4:55 PM PST.

Image source: Fast Company

Fast Company honors Artefact with three Innovation by Design awards

A version of this press release first appeared on PR Newswire


Fast Company has honored Artefact, a design and strategy firm with a mission to create better futures, as a winner and honorable mention across three categories of the 2022 Innovation by Design Awards: Rapid Response, Healthcare, and Experimental.

Fast Company’s October 2022 issue celebrates visionary design that solves the most crucial problems of today and anticipates the pressing issues of tomorrow. Celebrating more than a decade of Innovation by Design, this year’s honorees feature a range of finalists from Fortune 500 to small, impactful firms. Entries are judged on the key ingredients of innovation: functionality, originality, beauty, sustainability, user insight, cultural impact, and business impact.

“We are honored to have our work in emergency preparedness, healthcare, and retail recognized. We believe that thinking about unintended consequences and all stakeholders is critical to bringing positive change in the world. Artefact is proud to work with individuals, communities, and organizations to create a better future, by design.”

Sabrina Boler, COO of Artefact


Artefact was recognized across three categories for the following work —

Navis: Emergency preparedness

Winner for Rapid Response

Navis is a conceptual emergency preparedness system that guides people in planning for, and responding to, crisis scenarios. The concept uses conversational UI and augmented reality to help people create a personalized emergency plan on their preferred devices. A durable home hub helps people stay connected during an emergency and translate plans into action.


AdaptDx Pro: Diagnosing macular degeneration

Honorable Mention for Healthcare

Artefact partnered with MacuLogix to help create AdaptDx Pro, the first portable, wearable, AI-integrated ophthalmic screening system for age-related macular degeneration on the market. AdaptDx Pro overcame the challenges of traditional ophthalmic devices by rethinking the patient and technician experience, leading to earlier, more accurate diagnosis and disease management. AdaptDx Pro first shipped in June 2020 and has since performed over 1 million tests across 1,200 eyecare practices. Today, AdaptDx Pro is owned by LumiThera.


Future of shopping and food retail

Honorable Mention for Experimental

We imagined three ways that emerging technology might help customers shop with more confidence during the pandemic, while ensuring businesses efficiently manage guest volume, protect employees, and sustain revenue by guiding safe customer behavior, forecasting risk, and bringing the best of in-store shopping online.

About Artefact
Artefact is a visionary design and strategy firm with a mission to create better futures. By partnering with leaders and approaching the toughest challenges with equal parts creativity and pragmatism, we deliver lasting change. Headquartered in Seattle, our award-winning team includes researchers, strategists, and designers with a passion for excellence and impact. Connect with us today.

About Fast Company

Fast Company is the only media brand fully dedicated to the vital intersection of business, innovation, and design, engaging the most influential leaders, companies, and thinkers on the future of business. Winners, finalists, and honorable mentions of Fast Company’s sought-after Innovation by Design Awards can be found online and in the October issue of the print magazine, on newsstands September 27, 2022.

The UX 2030 Series

As emerging technology becomes an increasingly ubiquitous part of our lives, the design decisions we make today will shape how these technologies impact the world over the decade to come.

This series envisions how we might apply emerging technology in specific industries to create positive impact. We’ll explore what might accelerate or hinder these realities and the key risk areas and unintended consequences to consider.

Illustration by Laura Carr


A version of this article was first published in IoT Now.

Access to healthy food is a staggering problem in the U.S. Some 19 million Americans live in food deserts, while up to 40% of food produced in the U.S. goes to waste. Moreover, the production, transportation, and distribution of food is the fifth-highest contributor to greenhouse gas emissions in the country. It’s clear that the existing food system faces an overwhelming efficiency problem.

Growing food is a reasonably well-understood science that humans have iterated on for thousands of years. Yet despite advancements in technology, agriculture is still one of the least digitized of all major industries, according to McKinsey. There is enormous opportunity to combine agricultural technology with the proliferation of the Internet of Things (IoT) to improve access to food in underserved communities.

We imagine a 2030 where IoT-enabled circular food production democratizes agricultural skills, improves efficiency, and can be personalized to meet community needs. These community solutions would augment – not replace – the existing agricultural system, providing supplementary access to healthy foods to those most in need.

So how do we get there – and what risks will we face along the way?

More accessible, efficient, and personalized food production

IoT has the unique ability to integrate and automate tasks that would require significant expertise or time, greatly improving efficiencies and offering novel ways to personalize experiences. As IoT evolves over the next decade, how might this technology help improve access to food?

Democratizing skills

While existing personal and community gardens have an important role to play in food access and urban development, they can be unrealistic to scale. Sowing, tending, and harvesting food at the right time and in the right way, day after day, demands knowledge and labor that is daunting for anyone, especially those living in food deserts or underserved communities.

An automated IoT system could address this challenge by bringing specialized farming knowledge to laypeople. Imagine a communal rooftop garden on an apartment or commercial building where healthy produce can grow throughout the year. Rather than relying on the people living or working in the building to tend the crops, the garden would be managed by a web of sensors, automated watering systems, and robotics for tasks such as sowing, pruning, and harvesting.

Specialized sensors could take on specific tasks: measuring water levels, soil nutrition, and plant ripeness and health. With IoT sensors and fully connected system-on-a-chip (SoC) devices continuously becoming cheaper, a monitoring device could be deployed for nearly every plant in a rooftop garden. This means the time-consuming task of tending to plants could be carried out by inexpensive electronics rather than humans, reducing barriers to access and allowing more people to participate in, and reap the benefits of, urban farming.
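To make the idea concrete, here is a minimal sketch of how a per-plant monitoring node might translate sensor readings into tasks. Every name, threshold, and unit is hypothetical – an illustration of the pattern, not any real device’s API:

```python
from dataclasses import dataclass


@dataclass
class PlantReading:
    """One snapshot from a hypothetical per-plant sensor node."""
    plant_id: str
    soil_moisture: float   # fraction of field capacity, 0.0-1.0
    nutrient_index: float  # normalized soil-nutrition score, 0.0-1.0
    ripeness: float        # 0.0 (unripe) to 1.0 (ready to harvest)


def plan_actions(reading: PlantReading,
                 moisture_min: float = 0.35,
                 nutrient_min: float = 0.5,
                 ripeness_ready: float = 0.9) -> list[str]:
    """Turn one sensor snapshot into the tasks a gardener would otherwise do."""
    actions = []
    if reading.soil_moisture < moisture_min:
        actions.append("water")
    if reading.nutrient_index < nutrient_min:
        actions.append("fertilize")
    if reading.ripeness >= ripeness_ready:
        actions.append("harvest")
    return actions
```

A reading with low soil moisture and high ripeness would yield both a "water" and a "harvest" task – the same judgment calls an experienced gardener makes daily, made cheap enough to run for every plant.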

Improving efficiencies

Humans are increasingly developing novel and more sustainable ways to farm that use less – or better-managed – water, light, and soil. Combined with machine learning that identifies the best time and manner to tend and harvest plants, these methods could make robust farming operations possible in almost any location by 2030.

As systems of IoT-enabled devices and sensors work together to measure water and nutrient levels for each plant and communicate with connected pumps and other delivery systems, machine learning can aggregate these vast amounts of data and drive the inputs that ensure ideal growth conditions. Rainwater collection systems, coupled with weather prediction models, could determine optimal watering schedules. Devices might redirect sunlight throughout the day to the plants that need it most, or capture and store solar energy for cloudy days.
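As a sketch of how such a scheduler might credit forecast rainfall against each plant’s moisture deficit, consider the following. The function, its parameters, and the capture-efficiency figure are all illustrative assumptions, not a real model:

```python
def water_needed_mm(current_moisture: float,
                    target_moisture: float,
                    root_zone_mm: float,
                    forecast_rain_mm: float,
                    rain_capture_efficiency: float = 0.8) -> float:
    """Millimetres of irrigation to apply after crediting expected rainfall.

    Moisture values are fractions of field capacity; root_zone_mm is the
    water-holding depth of the root zone. All figures are illustrative.
    """
    # How far below target the plant sits, expressed as a depth of water.
    deficit_mm = max(0.0, (target_moisture - current_moisture) * root_zone_mm)
    # Only part of forecast rain actually reaches the root zone.
    credited_rain = forecast_rain_mm * rain_capture_efficiency
    return max(0.0, deficit_mm - credited_rain)
```

Under these assumptions, a plant at 40% of field capacity with a 70% target over a 100 mm root zone, and 10 mm of rain forecast, would need roughly 22 mm of irrigation rather than the full 30 mm – the kind of netting-out a connected pump could apply automatically.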

An IoT-enabled system layer could manage the individual technologies used to grow food and determine which gardens are best suited to which plants, based on the growing conditions inherent to each location and predictions of the needs of the people living in that community.

Community personalization

The connected and automated nature of IoT is well-placed to help determine a community’s real-time food needs and provide personalized distribution.

Just as the IoT system in aggregate could predict climate and resulting crop yields, it could also determine consumption patterns based on daily habits and anticipate irregularities in family and community schedules. Machine learning could detect patterns and anticipate food supply needs across a community, allocating space to the produce in highest demand and efficiently distributing available produce within the community. A fully automated communal garden could also be connected with other automated gardens, allowing for win-win sharing of crops and eliminating surplus that might otherwise go to waste.
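One way to picture the surplus-sharing step is as a matching problem between connected gardens. This greedy sketch is purely illustrative – all names are hypothetical, and a real system would also weigh distance, freshness, and fairness:

```python
def match_surplus(gardens: dict[str, dict[str, int]]) -> list[tuple[str, str, str, int]]:
    """Match crop surpluses (+) to deficits (-) across connected gardens.

    `gardens` maps garden name -> {crop: units above (+) or below (-) local
    demand}. Returns (from_garden, to_garden, crop, units) transfers.
    """
    transfers = []
    crops = {c for stock in gardens.values() for c in stock}
    for crop in sorted(crops):
        donors = [(g, s[crop]) for g, s in gardens.items() if s.get(crop, 0) > 0]
        takers = [(g, -s[crop]) for g, s in gardens.items() if s.get(crop, 0) < 0]
        for donor, available in donors:
            for i, (taker, needed) in enumerate(takers):
                if available == 0 or needed == 0:
                    continue
                # Ship as much as the donor has and the taker still needs.
                units = min(available, needed)
                transfers.append((donor, taker, crop, units))
                available -= units
                takers[i] = (taker, needed - units)
    return transfers
```

With two gardens where one has five spare bunches of kale and the other is three short, a single three-unit transfer clears the deficit and leaves no surplus to spoil.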

Multiple communities could together form a large system of interdependencies that optimizes the use of technology while spreading up-front costs across many participating neighborhoods and investors. Even greater impact could come from partnering with existing local organizations such as food banks and community centers.

Addressing risk areas

Implementing such a complex, interconnected solution requires not only an understanding of human needs and technological constraints, but also of the broader economic and social impacts.

Cost

While the cost of IoT technology is continuously decreasing, the overall cost of establishing such a system is still significant. There are unfortunately few examples of new technology being adopted by underserved communities first – typically, those who can afford it create the economies of scale that make the technology accessible to a wider audience. Depending on population and density, a system such as this might not make financial sense for every food desert or underserved community – for example, distributing infrastructure costs may work for thousands of apartment dwellers but not for a hundred small-town inhabitants.

However, we have to look beyond the short-term investment costs and consider the long-term benefits of this system that other industries and stakeholders might find valuable. Start-ups like AeroFarms and Vertical Harvest are already leveraging technology to bring vertical farming to urban communities in the U.S., and governments are taking note as well: Singapore aims to triple domestic food production by 2030 through the use of technology-backed systems like multi-story urban LED farms and recirculating aquaculture systems. Industries from retail to healthcare could see a case for pursuing the positive long-term health outcomes of providing people with access to healthier food options. 

Privacy

Any system that relies on tremendous data collection in order to fuel machine learning models needs to be fortified against misuse of data and have a clear perspective on who retains control of it.

A highly interconnected system of IoT devices, robots, and machine learning models raises concerns about how privacy and user consent would be managed. Would people or communities be comfortable sharing their food consumption habits? Who else would have access to that information?

Privacy concerns may also be more significant for some communities than others. Lack of trust in government and centralized organizational bodies may be a barrier to adoption of a system that assumes people would be comfortable letting something as personal as food be handled by invisibly managed robots. Care must be taken to co-design such a system with members of the community, to educate them on how it works and what data is collected, and to ensure community members are empowered to control it.

Behavior Change

Access to healthier foods alone does not ensure that people will use them. What we eat is a deeply personal decision shaped by social, cultural, and educational factors. How might systems like this change the relationships people and communities have with food? Could these systems support existing community organizations and resources that have a strong understanding of their communities’ unique needs? At the individual level, could they help people live and eat healthier?

Providing healthy produce is only one aspect of the systemic change that helps people build new, sustainable eating habits. Nutrition instruction, recipes, and motivation will also be needed to encourage behavior change among those with busy schedules or little awareness of – or interest in – adapting their lifestyle habits.

Designing with, not for

IoT represents a unique opportunity to solve some of the inefficiencies of food production and distribution, and with that, the ability to address inequities in food access.

Nevertheless, there are important challenges involved in creating an infrastructure that touches something as essential as the food we need to survive. As designers, it’s critical to engage the communities who will use such systems, elevating their needs and lived experiences and ensuring that we design with, not for, them. Moreover, we need to approach such problems with a systems-thinking mindset that considers all people and groups potentially affected by the change, whether they ever come in direct contact with it or not.

It’s a difficult challenge, but an imperative one if we are to avoid unforeseen consequences and design for preferable outcomes. By taking this responsible design approach, we might imagine a future where IoT not only brings healthy food closer to underserved areas, but also brings people closer to one another, as a community.

The pandemic has demonstrated the healthcare industry’s ability and appetite to adopt models of care that meet patients where they are – whether online, at home, or in the community.

In this webinar, Artefact sits down with Sara Vaezy, Chief Digital and Growth Strategy Officer at Providence and Dr. Shantanu Nundy, physician and Chief Medical Officer at Accolade, to explore the innovative and accelerated models of care here in the U.S. that are impacting not only patients today but also the patient experience in the years to come.

We explore:

  • Opportunities and risks in distributed care models such as hospitalization at home; digital models such as telemedicine for behavioral health; and decentralized models such as subscription-based care
  • What these evolving models of care mean for the patient experience, patients’ relationships with care providers, and broader health outcomes
  • How evolving care models that center the patient might support greater inclusion and equity, creating new opportunities to reach underserved populations