In response to Merriam-Webster naming “AI slop” the Word of the Year for 2025, Artefact’s Neeti Sanyal and Jeff Turkelson led a webinar for designers and creative professionals exploring the growing impact of AI slop on our information economy, research and design practices, and workplace culture.
The webinar is divided into two parts. In the first, they define AI slop and examine its rapid proliferation across the internet. They unpack why it exists, how it spreads, and why it persists. Using examples from platforms such as Pinterest and Steam, they explore how companies are attempting to manage the flood of AI-generated content, and how, despite these efforts, AI slop contributes to a polluted information ecosystem and rising mistrust online.
The second half introduces a related phenomenon called “work slop”: low-quality outputs produced when AI tools are used without sufficient critical thinking or editorial judgment. Demonstrating AI tools like Google NotebookLM, slide deck generators, and Figma Make, they show how easily well-intentioned professionals can generate work that looks polished but lacks rigor. They argue that work slop not only lowers collective standards for quality, but can also erode trust in the people who produce it. Worse still, correcting sloppy AI output can create productivity setbacks, offsetting the efficiency gains AI promises.
They conclude with practical reflections on how individuals and teams can remain vigilant—adopting more intentional, critical approaches to AI use in order to mitigate its negative effects and preserve standards of quality in creative work.
The future of AI can support, rather than hinder, our capacity for understanding and compassion.
Provocation
Outsourcing empathy to AI erodes our ability to understand others.
We live in a world that increasingly lacks empathy, and it shows in how we interact with one another. Digital exchanges land more harshly. Public discourse feels more polarized. Small misunderstandings escalate quickly. These outcomes are not inevitable. They emerged from systems and services designed to prioritize speed and efficiency, often at the expense of pause, context, and care.
As AI becomes increasingly embedded in our everyday interactions, we approach new heights of efficiency – from drafting messages and moderating conversations to offering advice and standing in as emotional support. Using AI for these interactions reduces friction and accelerates response, but it also unintentionally eliminates the moments that invite reflection and accountability, which underpin our capacity for empathy. Without these moments of pause, our ability to understand and care for one another will gradually atrophy.
AI does not have to accelerate this erosion of empathy. Designed intentionally, it can amplify empathy rather than diminish it. It can help people slow down rather than disengage, reflect rather than react, and strengthen rather than replace the human capacities that make empathy possible.
AI does not have to accelerate this erosion of empathy. Designed intentionally, it can amplify empathy rather than diminish it.
Designing AI with intent can build on our ability to empathize.
Building empathy is predicated on a repeated practice of sensemaking, gauging the impact of our own behaviors, and intentional decision-making. Empathy grows and strengthens when people have the space to practice these skills and eventually becomes a positive habit that influences and benefits the collective.
Our framework visualizes how individual skills set the foundation for a society with a more expanded capacity for empathy. At the individual level, the framework is grounded in a set of core human skills that build on one another as people move through the phases.
As individuals strengthen these skills, our society will be able to respond to disagreement and difference with more understanding and compassion.
Making sense of the context
This phase is about orientation. Before people can act thoughtfully, they need context—an understanding of the situation, the forces at play, and what’s at stake.
Core skills
Self-reflection: Noticing one’s own actions, assumptions, and role in a situation
Understanding: Grasping broader context, tradeoffs, and consequences of one’s actions
Awareness of impact and action
In this phase, people begin to recognize their own role within a situation and how their actions may affect others. This awareness extends beyond intent to actual impact.
Core skills
Openness: Being curious, questioning assumptions, and considering alternative perspectives
Responsibility: Owning choices and their effects on others
Decision-making with intention
This phase marks a shift to taking responsibility for one’s decisions.
Core skills
Consideration: Anticipating and taking into account how your words and actions may make other people feel
Connection: Communicating thoughtfully and repairing misalignment
Responsiveness: Acting proportionally and appropriately in the moment and adjusting on the go
As technologies like AI become increasingly present in our lives, there is an opportunity for AI to build empathy rather than erode it, as it tends to do today.
AI as a coach, not a participant.
Today, when AI is invoked in situations that require empathy, it is designed to behave like an active participant, generating content or giving advice. It is easy to imagine a future in which AI communicates on our behalf and makes decisions for us. And while this future is framed as a relief, it comes with tradeoffs. When expression and interaction are outsourced, we lose opportunities to practice capacities such as reflection, understanding, and responsibility.
Instead of performing empathy on our behalf, as AI does today, we asked: how can AI participate in a more meaningful way, more like a coach, helping people build their capacity for empathy? When we shift the focus of AI from content generation to coaching, a different, better future emerges. In this future, AI creates space for sensemaking and awareness, supports more intentional decision-making, and reinforces positive habits over time, guiding individuals on their journeys to become more empathetic.
How might AI help people learn and expand their capacity to empathize with other people?
In a series of short vignettes rooted in everyday human situations, we explored how AI can help create fertile conditions for practicing empathy. Notice that in each scenario, AI does not resolve the situation for the person. Instead, it slows the moment down, surfaces context, or creates space for reflection, supporting the human in making a more intentional, empathetic choice.
These vignettes serve as provocations or conversation starters. Each of these vignettes raises questions about surveillance, privacy, and other issues that are not the focal point here. Finally, while these vignettes sketch ways AI may be built to amplify empathy, we recognize that there are many non-AI, non-tech solutions to amplify empathy.
Conversations can be uncomfortable.
You’ve been dating someone for three months. You know it’s not working out, but you don’t know what to say. You really don’t want to hurt the person you’re dating. So you ask AI to write a breakup message for you.
Currently, standard AI behavior would generate the text without hesitation. Boom. Done. Sent. No discomfort necessary.
But the unintended consequence? We’re training ourselves to outsource emotional labor.
Avoiding that discomfort means you never learn how to navigate it.
And next time? You’ll likely consider outsourcing it again.
What if AI refused to write your breakup text?
Responsibility requires owning our words.
Discomfort is not a flaw. It is the emotional labor of clarifying what you feel, taking responsibility for your decision, and choosing words that reflect care.
AI can help us process what we want to say—while keeping the words ours.
Rather than acting as a shortcut, AI helps us reflect on what we want to say and how, without taking ownership of the words themselves.
Social media can make your blood boil.
You’re browsing social media when you come across a viral post that’s politically charged—and a comment that is especially irritating. You start typing an emotional “clap back” so the commenter feels as dismissed as you do.
Your cursor hovers over ‘post.’
Currently, platform algorithms surface inflammatory content because it drives engagement, and engagement drives revenue.
The result? An environment that consistently rewards fast, emotional responses. In this context, even brief exchanges can escalate quickly.
But learning to pause and choosing how you respond, rather than just reacting, resists systems that reward escalation.
What if AI checked in with you in an emotionally reactive moment?
Less reactivity allows for intentional response.
AI can interrupt reactive escalation without demanding emotional alignment. You may still disagree, but you are supported in choosing a proportionate response that reduces harm rather than amplifies it.
AI can intervene at moments of escalation to slow reaction and surface more intentional response options.
Over time, these interruptions help us recognize patterns and internalize more reflective responses when the stakes are high.
Stress can overwhelm our better instincts.
You’re at the Department of Motor Vehicles (DMV) for the third time in three weeks. This time, it’s a different clerk and a different set of missing paperwork.
The fluorescent lights, endless lines, and loud noises overload your senses. Last time you were here, you said some mean things to the clerk that you regret.
You don’t notice it at first: your heart is racing, your body temperature rising, and your fists clenched before you’ve even interacted with anyone at the DMV.
What you need now is to slow down and notice what’s happening in your body so you don’t say something regretful–again.
This simple act of awareness is the first step to regulating your emotions and approaching a fraught situation more thoughtfully.
What if AI helped you notice and regulate stress before it takes over?
Awareness enables us to act responsibly.
By tracking signals like heart rate and location, AI can surface patterns that make moments of heightened emotion easier to recognize.
AI can bring awareness to how we react in specific situations and offer techniques for coping with challenging emotions in healthier ways.
That awareness supports regulation before interaction, increasing the likelihood of responsible action.
Tense situations can be hard to interpret.
You’re standing at a crowded bus stop. You notice two people arguing—raised voices, expressive faces, and lots of gesturing. You want to do something, but questions surface immediately:
What’s happening? Is it safe to intervene? If so, what should I do?
In moments like this, AI systems jump to conclusions before we even have a chance to observe, think, and form our own interpretations.
Drawing on spoken language and body cues, these systems often translate complex interactions into simplified labels such as ‘risk’ or ‘threat’.
By deciding what a moment “means,” AI interrupts the human work of noticing and understanding.
What if AI helped us make sense of a situation before rushing to label it?
Understanding begins with observation, not assumption.
By helping us observe and understand before interpreting people’s behavior, AI acts as an unbiased guide rather than an informant.
It can guide our attention to relevant facets of a situation and engage our critical observation skills before deciding to take action.
AI supports careful observation, making space for more informed human judgment to unfold.
Designed this way, AI helps us reflect on our assumptions and decide whether—and how—to engage with care.
When exhaustion collides, perspective narrows.
You recently welcomed a new baby, and now you are both exhausted. One of you talks about how hard the nights have been, but the other bristles, feeling unseen for their own sacrifices.
Voices rise…Suddenly, the argument isn’t about sleep at all—it’s about whose exhaustion counts.
These moments are universal: miscommunication sparks conflict, “I’m struggling” becomes “I’m struggling more.” Both of you retreat into your respective corners, seeking validation for your perspective or preparing a case for the next round of the argument, causing further division.
Today, most AI systems are designed for single-user input, affirming a single perspective at a time.
Prone to flattery, AI often reinforces your point of view without challenging it.
The unintended consequence is subtle but significant: when we turn to AI instead of each other, we become more entrenched in our own experience and further from understanding someone else’s.
What if AI could hold space for more than one person at a time?
Openness to multiple perspectives enables shared understanding.
If AI accepted input from all sides rather than a single contributor, it could hold each person’s experience in view without forcing them into competition.
When multiple perspectives are visible at the same time, conflict no longer revolves around whose experience matters more. We’re better able to choose engagement that acknowledges difference, rather than defaulting to defensiveness.
By making space for more than one person at a time, AI can help us engage with one another without competing for validation.
What have you noticed about how AI systems are being built or exist around you? How do they diminish or encourage our capacity for empathy?
We have a choice to make…
The products, systems, and services we design can either expand our capacity for empathy or make it easier to bypass altogether. If we choose empathy, how might we design our systems to encourage greater noticing, reflection, accountability, and care in our responses?
Just as the early days of Human-Computer Interaction emphasized thoughtful UX to make personal computing accessible and intuitive, we now need the same intentionality in designing AI experiences.
AI model development and UX design must go hand in hand—because building powerful systems isn’t enough if people can’t understand, trust, or effectively use them.
Anyone who has conversed with ChatGPT or created an image with Midjourney might assume that a product leveraging generative AI doesn’t require much UX design—after all, these AI systems seem to understand us and follow our commands naturally. However, UX design remains just as crucial for AI-enabled products as it is for non-AI ones. A well-designed user experience ensures the AI product addresses real needs, instills trust, and enables intuitive and desirable interactions. Without thoughtful UX, even the most advanced AI can be misdirected, ineffective, or hard to use, preventing people from deriving real value. For companies developing these products, poor UX can be costly—or worse, harmful, as demonstrated by problematic AI systems in past incidents involving self-driving cars, hiring, and mental health. Arvind Narayanan, co-author of the book “AI Snake Oil,” recently stated that:
“AI companies haven’t invested a lot of time into product development. They’ve made these models better but how to use that to improve your own workflows has been kind of largely up to the user. There needs to be a 50/50 split between model development and product development.”
Here are four common pitfalls, or challenges, that innovators looking to leverage AI may encounter, and how UX design can help avoid them.
Challenge
AI makes it easy to pursue use cases that don’t add value.
Solution
Designers can validate ideas through testing to prioritize the most compelling features.
Because generative AI is a powerful general-purpose technology, products built with it can serve a diverse range of use cases without much development time. Unlike other technological innovations like blockchain, which have relatively narrower applications, generative AI is highly adaptable—powering everything from creative content generation to automation, personalization, and decision-making. In this way, AI is like a Swiss Army knife, full of tools that can solve multiple problems. And with AI-driven companies attracting significant attention and funding, innovators can fall into the trap of building technology first and searching for a problem later. This approach can lead to products that are bloated with features that don’t solve real needs. Design helps innovators identify and validate various use cases by answering a crucial question: “Is this valuable?”
Consider the AI Pin by Humane. This product promises to serve multiple needs—acting as a virtual companion, introducing new music, and capturing spontaneous photos. It even transforms your palm into a display where you can view and control a timer with tap gestures. But are all these features truly desirable? What deserves priority?
Through research, prototyping, and user testing, UX design helps innovators identify the most compelling use cases and execute them exceptionally well. Without this focus, attempting to do too much at once can dilute the product’s effectiveness, resulting in a scattered experience, a muddled value proposition, and ultimately, a product that fails to resonate with users.
Challenge
AI can make products harder to trust.
Solution
Designers can rebuild trust through mechanisms like data provenance, confidence indicators, feedback loops, and human overrides.
Just as users need to trust products to use and adopt them, they must also trust the AI that powers those products. Rachel Botsman, author of “Who Can You Trust?,” presents a framework describing four traits of trustworthiness that apply to people, companies, and technologies. Two traits relate to capabilities (competence and reliability), the “how,” while two relate to intention (empathy and integrity), the “why.”
We tend to trust others when they demonstrate two key qualities: competence—the ability to do what they promise effectively—and reliability—consistently delivering expected results. With humans, it’s relatively easy to assess these traits through conversation and interaction. But what happens when a system is 3x, or even 10x, more capable than any person we’ve encountered—and we have no real insight into how it works?
That’s the fundamental challenge with AI. As a black-box technology, its growing power makes it harder—not easier—to evaluate. While it’s relatively simple to notice improvements in AI-generated text, assessing the accuracy and reliability of AI-driven information or analysis—especially in unfamiliar domains—is much more difficult.
Here, UX design plays a role in helping make AI behavior more understandable, whether it’s recommending content, making predictions, or automating complex tasks. By designing for transparency, UX can foster trust through strategies like surfacing explanations for AI decisions (such as data provenance) and showing confidence indicators to help users gauge how certain the system is in its outputs. This is what Rachel Botsman terms “trust signals”—small, often unconscious clues we use to assess the trustworthiness of other people and things. To see some of these ideas in action, check out our piece with Fast Company.
Referencing Botsman’s framework again, UX design can also help AI-enabled products act with integrity (being fair and ethical) and with empathy (understanding and aligning with human interests and values). By incorporating diverse perspectives, needs, and values into the design process, UX helps create more inclusive and responsible AI systems. And frequent and extensive user testing enables designers to identify and address biases, unintended consequences, and potential harms before they impact users. This helps correct what the AI field calls the alignment problem.
In addition, thoughtful interface design can provide clear affordances, feedback loops, and user controls. Examples include letting users know when a response touches on a topic the public generally considers controversial, having users rate an AI response to train it to be better over time (e.g., more personalized or accurate), and building in moments where humans can override the AI.
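To make these controls concrete, here is a minimal sketch of how a product team might attach trust signals to an AI response and keep a human in the loop. The names, thresholds, and feedback hook are illustrative assumptions, not a reference to any real product API.

```python
from dataclasses import dataclass, field


@dataclass
class AIResponse:
    text: str
    confidence: float                                  # 0.0-1.0, shown to users as a gauge
    sources: list[str] = field(default_factory=list)   # data provenance


def present(response: AIResponse) -> str:
    """Render the response with its trust signals attached."""
    level = "high" if response.confidence >= 0.8 else "low"
    cited = ", ".join(response.sources) or "no sources cited"
    return f"{response.text}\n[confidence: {level} | sources: {cited}]"


def handle_feedback(response: AIResponse, rating: int, override: str | None = None) -> str:
    """Log a 1-5 user rating for later tuning; a human override always wins."""
    print(f"logged rating: {rating}/5")  # would feed a personalization or tuning loop
    return override if override is not None else response.text


# Usage: surface the signals, then let the user rate or override.
r = AIResponse("Projected Q3 demand: 12,400 units.", 0.62, ["sales_db_2024"])
print(present(r))
print(handle_feedback(r, rating=2, override="Projected Q3 demand: 11,800 units."))
```

The design choice worth noting is that the override returns the human’s answer outright rather than blending it with the model’s: the user stays in charge, which is the point of the affordance.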
Challenge
AI can be used as an excuse to skip defining the right interaction model.
Solution
Designers can ground solutions in existing user workflows, mental models, and preferred levels of control to arrive at interactions that truly work.
When we think of generative AI, we often default to imagining a chatbot. That’s no surprise—large language models are built to generate text, making chat a natural starting point. But simply slapping a chatbot onto an existing product rarely creates real value for users. It’s like applying Tiger Balm to every kind of pain—it might help sometimes, but it’s far from a universal cure. We saw this clearly with early banking chatbots; anyone who used them remembers how frustrating they were.
True innovation in AI-powered experiences requires more than just plugging in chat—it demands a deep understanding of existing workflows, user mental models, and interaction patterns. That’s where skilled UX designers shine. While text-based interfaces dominate today, the real future lies in well-crafted, multimodal experiences. For designers, this shift brings both exciting opportunities and complex challenges: choosing the right mode of interaction for the task at hand becomes more critical—and more nuanced—than ever.
For instance, natural language excels at complex queries like “Show me customers who spent less than $50 between 11am–2pm last month.” However, it falls short for spatial tasks—imagine trying to perfectly position an element through verbal commands: “Move it left… no, too far… right a bit…” These limitations become especially apparent in generative AI, where initial prompts like “Draw a kid in a sandbox” work well, but precise adjustments (“Make the sandbox 10% smaller”) become tedious.
For AI products to succeed, UX designers must determine the optimal level of user control over model inputs and queries, design clear ways for users to understand and modify AI-generated outputs, and create systems for users to effectively utilize those outputs. All of this must be wrapped in an interface that feels natural and easy to use.
Challenge
While great product design has always fostered emotional connection, AI-enabled products can introduce new psychological dynamics, such as discomfort and inadequacy.
Solution
Designers can develop mechanisms that help users preserve a healthy sense of self.
Great products evoke emotions. From connected cars to smartwatches to payment apps, thoughtful design can surprise and delight users while making them feel understood and valued—creating a deeper emotional connection with both product and brand. AI-enabled products are no exception. Talkie and Character.ai are two bot creation platforms that successfully keep users engaged and interested. Similarly, Waze builds community through crowdsourced traffic updates and hazards and adds an element of play through customizable voice options and car icons.
What sets AI apart, however, is its seemingly superhuman capabilities, which fundamentally shift how people perceive themselves when using these technologies. It’s like a funhouse mirror that can perturb your sense of self. This phenomenon is explained in an HBR article titled “How AI affects our sense of self.” While productivity and creator apps can trigger job security concerns, using AI for writing essays or applying for a job often leaves users feeling like they’re “cheating” or inadequate. Companies must address these psychological barriers through strategic product design that acknowledges and accounts for these complex emotions. Again, UX design can play a role. To mitigate job displacement fears, design can emphasize human oversight, ensuring users remain central to meaningful tasks and decisions. To help users feel more comfortable with AI, products can incorporate teachable moments explaining why the AI is offering certain suggestions. This helps users enhance their skills rather than becoming overly dependent on AI.
UX Design unlocks AI’s true potential for impact
While AI systems continue to advance at a remarkable pace, their success ultimately hinges on thoughtful UX design that puts human needs first. By focusing on generating real value, building trust, aligning with existing workflows, and considering emotional needs, designers can create AI-enabled products that are not just powerful, but truly impactful. The partnership between AI capability and human-centered design can deeply transform the human experience. Companies that bring design into their AI development, with intention and investment, will be the ones to create products that genuinely improve people’s lives.
Imagine you’re on a blind date. Meeting someone new is a relatively rare experience that is very different than talking with a friend. You don’t know what information your date knows about you and you’re trying to figure each other out. Maybe you dial up certain aspects of your personality or share a personal story to build a deeper connection. Over time, you build trust through consistent conversation.
Now imagine you’re chatting with a new social AI chatbot. A similar back and forth of getting to know each other might occur. You might want to know the social chatbot’s backstory, its limitations, its preferences, or its values. Social AI chatbots are increasingly human-like, with advanced speech capabilities, interactive digital avatars, and highly adaptable personality characteristics that can carry on conversations that feel like talking with another person. People are chatting with AIs acting as their therapist, friend, life coach, romantic partner, or even as a spiritual oracle. Given the deeply personal roles that emerging social AI may take on in our lives, trust with such systems (or even regular humans for that matter) should be earned through experience, not freely given.
However, increasingly indistinguishable interactions make forming human-like relationships with AI a blurry endeavor. Like a blind date, you may hit it off at first but discover your date’s behavior shifts as the conversation continues. When a chatbot (or human) performs in an inconsistent, opaque, or odd way, this can erode the process of building trust, especially if someone is sharing sensitive and personal information. To address this, social AI product designers can consider key factors of healthy human relationships such as boundaries, communication, empathy, respect, and mirroring and apply these characteristics to ensure the design of responsible chatbot experiences.
Boundaries are about establishing clarity and defining capabilities.
People need a clear understanding of a social AI’s content policy, its training data, its capabilities, its limitations, and how best to interact with it in a safe and compliant manner. This is especially important for sensitive uses such as mental healthcare or when the users are children. For example, many flagship LLMs provide disclaimers that responses may be inaccurate. Google requires teens to watch a video educating them about AI and its potential problems before using it. Microsoft recently redesigned its Copilot interface to show users a variety of its capabilities through visual tiles that act as starting prompts. Like a blind date, communicating what each of you is open to and capable of helps foster a better connection.
Communication is about constructive feedback that improves connection.
People can sometimes mess up when engaging with a chatbot. For example, they might use a word or phrase in a prompt that violates a content policy. They might discuss a topic or ask for advice on something that is very personal or taboo. When this happens, AI systems can sometimes reject the prompt without a clear explanation, when constructive feedback would be more helpful in teaching people how best to prompt the system in a compliant way. Like a blind date, when you cross a line, a kind piece of feedback can help get the conversation back on track. For example, when discussing topics related to sensitive personal data, Mistral AI provides additional reassurances in its responses as to how it manages users’ data, preemptively putting any concerns at ease.
Empathy is about responding to a user’s emotional needs in the moment.
People can bring all kinds of emotions to conversations with social AI chatbots. Sometimes they are just looking for companionship or a place to vent about their day. Social AI chatbots can respond with empathy, providing people with space to reflect by asking more questions, generating personal stories, or suggesting how to modulate mood. For example, an app called Summit, positioned as an AI life coach, can track physical activities related to specific wellness goals that a person has set up. If someone shares a bad mood due to stress, the AI chatbot will suggest an activity that the person previously mentioned had helped them de-stress, such as taking a walk. Like a blind date, your partner’s ability to recall information previously shared and contextualize it with your current emotional expression helps you feel seen and heard.
Respect is about allowing people to be themselves freely.
Inevitably an individual’s values may misalign with those of AI product designers, but just like a blind date, each party should be able to show up as themselves without fear of being judged. Similarly, people should be able to express themselves on political, religious, or cultural topics and be received in a respectful way. While a chatbot may not explicitly agree with the person’s statement, it should respond with respectful acknowledgement. For example, the kids-focused AI companion Heeyo will politely acknowledge a child’s prompts related to their family’s political or cultural views but doesn’t offer any specific validation of positions in response. Instead, it avoids sensitive topics by asking the child how they feel about what was just shared.
Mirroring is about active listening and attunement to the user.
Like on a blind date, healthy mirroring behaviors can help forge subconscious social connection rapidly. Mirroring behaviors, such as imitating styles of speech, gestures, or mood, are an effective way to show each other you are listening and well-attuned. For example, if someone is working through a complex life issue with a social chatbot, the AI’s responses might be more inquisitive than prescriptive and it may start to stylize its responses in a way that mirrors the person, such as in a short and humorous or long and emotional manner. Google’s NotebookLM will create an AI-generated podcast with two voices discussing a topic of choice. After the script is generated, it will add in speech disfluencies—filler words like “um” or “like”—to help the conversation between the two generated voices feel more natural.
Social AI experiences will continue to rapidly advance and further blur the lines between human and synthetic relationships. While AI technology is running at 21st century speeds, our human brains are mostly stuck in the stone age. The fundamental ways that we form connections haven’t changed as rapidly as our technology. Keeping this in mind, AI product designers can lean on these core relationship characteristics to help people build mutual trust and understanding with these complex systems.
Perspectives
Designing the Future with AI
A mini-series unpacking AI and discussing our role in its implementation
Artefact’s staff reflects on AI’s potential impact on individuals and society by answering questions prompted by the Tarot Cards of Tech. Each section contains a video that explores a tarot card and provides our perspectives and provocations.
The Tarot Cards of Tech was created to help innovators think deeply about scenarios around scale, disruption, usage, equity, and access. With the recent developments and democratization of AI, we revisited these cards to imagine a better tech future that accounts for unintended consequences and the values we hold as a society.
Cultural implications for youth
Jeff Turkelson, Senior Strategy Director
Transcript: I love that this card starts to get at, maybe some of the less quantifiable, but still really important facets of life. So, when it comes to something like self-driving cars, which generative AI is actually really helping to enable, of course people think about how AI can replace the professional driver, or how AI is generally coming for all of our jobs.
But there are so many other interesting implications. So for example, if you no longer need to drive your car, would you then ever need to get a license to drive? And if we do away with needing a license to drive, then what does that mean for that moment in time where you turn 16 years old and you get newfound independence with your driver’s license? If that disappears, that could really change what it means to become a teenager and a young adult. So what other events or rituals would AI disrupt for young adults as they grow older?
Value and vision led design
Piyali Sircar, Lead Researcher
Transcript: This invitation to think about the impact of incorporating gen AI into our products is really an opportunity to think about design differently. We should be asking ourselves, “What is our vision for the futures we could build?” and once we define those, the next question is, “Does gen AI have a role to play in enabling these futures?” Because the answer may be “no”, and that should be okay if we’re truly invested in our vision. And if the answer is “yes”, then we need to try to anticipate the cultural implications of introducing gen AI into our domain space. For example, “How will this shift the way people spend time? How will it change the way they interact with one another? What do they care about? What does this product say about society as a whole?” Just a few questions to think about.
Introducing positive friction
Chad Hall, Senior Design Director
Transcript: The ‘Big Bad Wolf’ card reminds me to consider not only which AI product features are vulnerable to manipulation, but also who the bad actors might be. Those bad actors could be a user; they could also be us, our teams, or even future teams. So, for example, while your product might not misuse data now, a future feature could exploit it.
A recent example that comes to mind is two students who added facial recognition software to AI glasses with a built-in camera. They were able to easily dox the identities of just about anyone they came across in their daily life.
I think product teams need to introduce just enough positive friction in their workflows to pause and consider impacts. Generative AI is only going to ask for more access to our personal data to help with more complex tasks. So the reality is, if nobody takes the time to ask these questions, they’re never going to get asked.
Minimizing harm in AI
Neeti Sanyal, VP Creative
Transcript: I think it’s important to ask whether AI could be a bad actor. Even when you’re not trying to produce misinformation with generative AI, in some ways it is inherently doing that. I am concerned about the potential for generative AI to cause harm in a field that has low tolerance for risk, things like health care or finance. An example that comes to mind is a conversational bot that can give the wrong mental health advice to someone who is experiencing a moment of crisis.
One exciting way that companies are addressing this is by building a tech stack that uses both generative and traditional AI. And it’s the combination of these techniques that helps minimize the chance of hallucinations and can create outputs that are much more predictable.
If we are thoughtful in how the AI is constructed in the first place, we can help prevent AI from being the bad actor.
Building job security
Rachael Cicero, Associate Design Director
Transcript: One thing we keep hearing about is the disappearing workforce, but often I think we’re overlooking the fact that humans will continue to exist in and contribute to society. Instead, I’d like to see the conversation shift from the disappearing workforce to the unique contributions of human and AI collaboration. Consider civic technology, where generative AI can be used for things like supporting the process of unemployment applications. AI can help with document recognition, which can really reduce the load on human staff, and also accelerate response time for applicants. To me, that collaboration isn’t about replacing jobs but really about enhancing them.
The key to that is investing in reskilling. By including the perspectives of people affected in the design of AI systems, we can better understand the tasks they want automated. The goal being to create a future where AI and humans can work together, enhancing each other’s strengths, and ensuring that everyone has an opportunity to thrive in a rapidly evolving job market.
Transforming tradition
Max West, Principal Designer
Transcript: This card reminds me of how cable TV technology reshaped media jobs. Remember Video Jockeys on the popular 90s MTV show, Total Request Live? VJs had evolved from selecting and remixing music, like their traditional radio counterparts, to focusing on engaging with crowds, talking to celebrities, and orchestrating pop cultural moments.
Now take the cable TV example and apply it to AI transforming an industry like education. Teachers’ jobs could similarly shift to a more social focus. An AI-powered app could tailor a math or science lesson to a student’s unique cognitive abilities, while the teacher can focus more on the physical, interpersonal, and social aspects of learning. In the same way that VJs would provide crowd-pleasing moments between music videos, educators might find themselves in a similar “hosting” role for the classroom.
So it’s less about what disappears and more about what can transform. While roles may change with AI, it could create time and space for richer, more personal experiences among groups.
Exploring how generative AI could superpower research outputs to foster greater empathy and engagement
With the release of GPT-4 and the growing interest in widely available generative AI tools such as DALL-E 2, Midjourney, and more, there is no dearth of people writing about and commenting on the potential positive and negative impacts of AI, and how it might change work in general and design work specifically. As we sought to familiarize ourselves with many of these tools and technologies, we immediately recognized the potential risks and dangers, but also the prospects for generative AI to augment how we do research and communicate findings and insights.
Looking at some of the typical methods and deliverables of the human-centered design process, we not only saw practical opportunities for AI to support our work in the near term, but also some more experimental, less obvious (and, in some cases, potentially problematic) opportunities further out in the future.
While each of these use cases merits its own deep dive, in this article we want to focus on how advances in AI could potentially transform one common, well-established output of HCD research: the persona.
Breathing new life into an old standard
A persona is a fictional, yet realistic, description of a typical or target user of a product. It’s an archetype based on a synthesis of research with real humans that summarizes and describes their needs, concerns, goals, behaviors, and other relevant background information.
Personas are meant to foster empathy for the users for whom we design and develop products and services. They are meant to help designers, developers, planners, strategists, copywriters, marketers, and other stakeholders build greater understanding and make better decisions grounded in research.
But personas tend to be flat, static, and reductive—often taking the form of posters or slide decks and highly susceptible to getting lost and forgotten on shelves, hard drives, or in the cloud. Is that the best we can do? Why aren’t these very common research outputs of the human-centered design process, well, a little more “alive” and engaging?
Peering into a possible future with “live personas”
Imagine a persona “bot” that not only conveys critical information about user goals, needs, behaviors, and demographics, but also has an image, likeness, voice, and personality. What if those persona posters on the wall could talk? What if all the various members and stakeholders of product, service, and solution teams could interact with these archetypal users, and in doing so, deepen their understanding of and empathy for them and their needs?
In that spirit, we decided to use currently available, off-the-shelf, mostly or completely free AI tools to see if we could re-imagine the persona into something more personal, dynamic, and interactive—or, what we’ll call for now, a “live persona.” What follows is the output of our experiments.
As you’ll see in the video below, we created two high school student personas, abstracted and generalized from research conducted in the postsecondary education space. One is more confident and proactive; the other more anxious and passive.
Now, without further ado, meet María and Malik:
Chatting with María and Malik, two “live personas”
Looking a bit closer under the hood
Each of our live personas began as, essentially, a chatbot. We looked at tools like Character.ai and Inworld, and ultimately built María and Malik in the latter. Inworld is intended to be a development platform for game characters, but many of the ideas and capabilities in it are intriguing in the context of personas, like adjustable personality and mood attributes, personal and common knowledge sets, goals and actions, and scenes. While we did not explore all those features, we did create two high school student personas representing a couple of “extremes” with regard to thinking about and planning their post-secondary future: a more passive and uncertain María and a more proactive and confident Malik.
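For a rough sense of what such a definition involves, here is a purely illustrative sketch of the kinds of attributes we tuned, written as a simple Python structure. The field names and values are hypothetical and do not reflect Inworld’s actual authoring schema.

```python
# A hypothetical "live persona" definition; fields and values are
# illustrative only, not Inworld's real configuration format.
malik = {
    "name": "Malik",
    "archetype": "Proactive, confident high school senior",
    "personality": {"confidence": 0.9, "proactivity": 0.9, "anxiety": 0.1},
    "mood": "upbeat",
    "personal_knowledge": [
        "Already comparing colleges and scholarship deadlines",
        "Seeks out counselors and mentors without prompting",
    ],
}

# María shares the same schema but sits at the opposite "extreme."
maria = {
    **malik,
    "name": "María",
    "archetype": "Passive, uncertain high school senior",
    "personality": {"confidence": 0.2, "proactivity": 0.2, "anxiety": 0.8},
    "mood": "hesitant",
    "personal_knowledge": ["Unsure where to start with college planning"],
}
```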
Here’s a peek at how we created Malik from scratch:
Making Malik, a “live persona”
Interacting with María and Malik, it was immediately evident how these two archetypes were similar and different. But they still felt a tad cartoonish and robotic. So, we took some steps to improve progressively on their appearance, voices, and expressiveness.
Here’s a peek at how we made María progressively more realistic by combining several different generative AI and other tools:
Making María, a “live persona,” progressively more realistic
Eyeing the future cautiously
The gaming industry is already leading in the development of AI-powered characters, so it certainly seems logical to consider applying many of those precedents, principles, tools, and techniques to aspects of our own work in the broader design of solutions, experiences, and services. Our experimentation with several generative AI tools available today shows that it is indeed possible to create relatively lifelike and engaging interactive personas—though perhaps not entirely efficiently (yet). And, in fact, we might be able to do more than just create individual personas to chat with; we could create scenes or even metaverse environments containing multiple live personas that interact with each other and then observe how those interactions play out. In this scenario, our research might inform the design of a specific service or experience (e.g., a patient-provider interaction or a retail experience). Building AI-powered personas and running “simulations” with them could potentially help design teams prototype a new or enhanced experience.
But, while it’s fun and easy to imagine more animated, captivating research and design outputs utilizing generative AI, it’s important to pause and appreciate the numerous inherent risks and potential unintended consequences of AI—practical, ethical, and otherwise. Here are just a few that come to mind:
Algorithmically-generated outputs could perpetuate biases and stereotypes because AIs are only as good as the data they are trained on.
AIs are known to hallucinate, responding over-confidently in ways that don’t seem justified or aligned with their training data, or, in our case, with the definitions, descriptions, and parameters we configured for an AI-powered persona. Those hallucinations, in turn, could influence someone to make a product development decision that unintentionally causes harm or disservice.
AIs could be designed to continuously learn and evolve over time, taking in all previous conversations and potentially steering users towards the answers they think they’d want rather than reflecting the data they were originally trained on. This would negate the purpose of the outputs and could result in poor product development decisions.
People could develop a deep sense of connection and emotional attachment to AIs that look, sound, and feel humanlike—in fact, they already have. It’s an important first principle that AIs be transparent and proactively communicate that they are AIs, but when the underlying models become more and more truthful and they are embodied in more realistic and charismatic ways, then it becomes more probable that users might develop trust and affinity towards them. Imagine how much more potentially serious a hallucination becomes, even if a bot states upfront that it is fictitious and powered by AI!
Finally, do we even really want design personas that have so much to say?! Leveraging generative AI in any of these ways, without thoughtful deliberation, could ultimately lead us to over-index on attraction and engagement with the artifact at the expense of its primary purpose. Even if we could “train” live personas to accurately reflect the core ideas and insights that are germane to designing user-centered products and services, would giving them the gift of gab just end up muddling the message?
In short, designing live personas would have to consider these consequences very carefully. Guardrails might be needed, such as limiting the types of questions and requests that a user may ask the persona, making the persona “stateless” so it can’t remember previous conversations, capping the amount of time users can interact with the persona, and having the persona remind the user that they are fictitious at various points during a conversation. Ultimately, personas must remain true to their original intent and accurately represent the research insights and data that bore them.
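To make these guardrails concrete, here is a minimal sketch of a wrapper that could sit between a user and a live persona. The `ask_model` callable, persona string, topic filter, and limits are all illustrative assumptions rather than a reference to any real chatbot API.

```python
import time

PERSONA = "María: a passive, uncertain high school senior (research-derived archetype)."
BLOCKED_TOPICS = {"medical", "legal", "self-harm"}   # illustrative request filter
SESSION_LIMIT_SECONDS = 15 * 60                      # cap time spent with the persona
REMINDER_EVERY_N_TURNS = 5                           # restate that the persona is fictitious


def chat_with_persona(question: str, turn: int, session_start: float, ask_model) -> str:
    """Apply the guardrails described above around a single chat turn."""
    if time.monotonic() - session_start > SESSION_LIMIT_SECONDS:
        return "Session limit reached. Please consult the underlying research instead."
    if any(topic in question.lower() for topic in BLOCKED_TOPICS):
        return "This persona only discusses its research-grounded goals and needs."
    # "Stateless": each call passes only the persona definition and the current
    # question, never prior turns, so the persona cannot remember conversations.
    answer = ask_model(persona=PERSONA, question=question)
    if turn % REMINDER_EVERY_N_TURNS == 0:
        answer += " (Reminder: I am a fictional, AI-powered persona built from research data.)"
    return answer
```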
And further, even if applying generative AI technologies in these ways becomes sufficiently accessible and cost-effective, it will behoove us to remember that they are still only tools that we might use as part of our greater research and design processes, and that we should not be over-swayed by, nor base major decisions on, something a bot says, as charming as it might be.
Though it’s still early days, what do you think about the original premise? Could AI-enabled research outputs that are more interactive and engaging actually foster greater empathy and understanding of target end-users, and could that lead to better strategy, design, development, and implementation decisions? Or will the effort required and the possible risks of AI-enabled research outputs outweigh their possible benefits?
“I can’t see it. I can’t touch it. But I know it exists, and I know I’m part of it. I should care about it.”
Timothy Morton
AI is top of mind for every leader, executive, and board member. It is impacting how organizations approach their entire business, spanning the functions of strategy, communications, recruitment, partnerships, labor relations, and risk management to name a few. No wonder wrapping your head around AI is such a formidable challenge. Some leaders are forging ahead with integrating AI across their business, but many don’t know where to begin.
Beyond the complexities of its opaque technical construction, AI presents many challenges for both leadership and workers. The deployment of AI is not merely a technical solution that increases efficiency and enhances capabilities, but rather is a complex “hyperobject” that touches every aspect of an organization, impacting workers, customers, and citizens across the globe. While AI has the potential to augment work in magical ways, it also presents significant obstacles to equitable and trustworthy mass adoption, such as a significant AI skills gap among workers, exploitative labor practices used for training algorithms, and fragile consumer sentiment around privacy concerns.
To confront AI, leaders leveraging it in their business need an expanded view of how the technology will impact their organization, their workforce, and the ecosystems in which they operate. With this vital understanding, organizations can build a tailored approach to developing their workforce, building partnerships, innovating their product experiences, and fostering other resilience behaviors that increase agility in an age of disruption. Research shows that organizations with healthy resilience behaviors such as knowledge sharing and bottom-up innovation were less likely than others to go bankrupt following the disruptions of the COVID pandemic.
Hyperobjects and our collective future.
Originally coined by Professor Timothy Morton, the term “hyperobject” describes something so massively distributed in time and space as to transcend localization—an amorphous constellation of converging forces beyond any one person’s control. Similar to other large, complex phenomena that have the potential to radically transform our world—think climate change or the COVID-19 pandemic—AI is one of the hyperobjects defining our future.
Hyperobjects are difficult to face from a single point of view because their tendrils are often so broad and interconnected. Dealing with huge, transformative things requires broad perspective and collective solidarity that considers impacts beyond the interests of a single organization. The task of organizational leaders in an age of disruption by hyperobjects—like AI and climate change—is to rebalance the economic and social relationships between a broad group of stakeholders (management, shareholders, workers, customers, regulators, contractors, etc.) impacted by rapid change and displacement.
To help leaders form a comprehensive approach to cultivating resilience, innovation, and equity in the age of AI, we developed a simple framework of five priorities—or the 5 Ps—for building an AI-ready organization: People, Partnerships, Provenance, Product, and Prosperity.
People: Communicate a clear AI vision and support continual learning across teams
Charting an AI future for any organization begins with its people. Leaders need to identify the potential areas where AI can enhance operations and augment human capabilities. This starts by identifying low-risk opportunities—such as writing assistance or behavioral nudges—to experiment with as your AI capabilities mature. As new roles emerge, organizations must prioritize continuous learning and development programs to upskill and reskill employees, equipping them with the AI literacy needed to adapt to the changing landscape.
Despite all the media fanfare, usage of dedicated AI products like ChatGPT is still fairly limited, mostly used by Millennials and Gen Z, with only 1 in 3 people familiar with dedicated AI tools. Concerns about the technology are real and can affect morale. Almost half of workers fear losing their jobs to AI. Organizations need open communication and transparency about their AI adoption plans and the potential impacts on the workforce so they can also mitigate social anxiety and address fears or resistance to automation. Fostering a culture of continuous innovation can also encourage employees to embrace AI as an opportunity for growth rather than a threat to job security.
Additionally, team structures should be optimized for agility, cross-functionality, and embedded AI expertise. Rather than treating AI as a separate function, how might you develop AI expertise closer to the day-to-day work happening in teams? This could include things like promoting data literacy amongst teams to better understand AI insights, developing centers of excellence to provide training resources, recruiting AI experts, or establishing accessible feedback mechanisms to improve AI model performance.
Partnerships: Leverage strategic partnerships to reduce risks and expand capabilities
According to Google Brain co-founder Andrew Ng, “The beauty of AI partnerships lies in their ability to bring together diverse perspectives and expertise to solve complex problems.” In the rapidly evolving landscape of AI technologies, no single organization can possess all the expertise and resources required to stay at the forefront of innovation. Collaborating with external partners, such as AI platform providers, research institutions, product design experts, and training/support firms, can help address capability gaps and speed up time to market.
Transformational technology investments requiring large capital expenditures and retooling can be a barrier for organizations to adopt new methods of production. However, partnerships offer opportunities for risk and cost sharing, reducing the initial burdens of AI implementation. Working with partners can also enhance an organization’s ability to scale and expand into new markets. Examples include Google’s partnership with Mayo Clinic on AI in healthcare as well as Siemens’ partnership with IBM and Microsoft focused on AI in industrial manufacturing.
Investing in both informal and contractual collaboration with partners has proven positive impacts on organizational resilience. Leaders should foster a culture of cross-industry collaboration, staying aware of AI partnerships happening in their industry and remaining open to collaborations that may seem atypical on the surface. Partnerships can expand customer reach, deepening the addressable market within a segment by diversifying AI offerings. Partnerships in adjacent industries can deliver economies of scale through shared AI infrastructure while expanding AI capabilities through larger pooled datasets—think of the drug industry partnering more closely with the grocery, restaurant, and fitness industries to mutually build more responsive products and services for people with chronic health conditions, like diabetes, using AI-powered recommendations modeled on activity and purchasing behavior from cross-industry data-sharing agreements.
Leaders should work to foster partnership-building activities. This could include providing teams with appropriate resources for partnership initiatives, establishing a clear framework for assessing partnership opportunities, and supporting external networking opportunities to strengthen relationships across sectors.
Provenance: Ensure reliable data governance and integrity
Where will your data come from? How will it be verified, managed, and maintained? The integrity and provenance of data is a paramount concern when developing AI-enabled products and services. The accuracy, reliability, and completeness of data directly influence the performance and ethical implications of AI algorithms. Inaccurate or biased data can lead to flawed predictions and decisions, potentially causing harm to individuals or perpetuating social inequalities.
Many people share concerns about regulating AI usage, with over two-thirds of respondents in a recent survey expressing that AI models should be required to be trained on data that has been fact-checked. Implementing robust data governance practices, including data validation, cleansing, and security measures, is essential to safeguard data integrity throughout its lifecycle. Additionally, organizations must be transparent to customers about their data collection methods and data usage to address concerns related to privacy and data misuse.
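As a small illustration of what such governance can look like in code, here is a sketch of a provenance-aware validation step, assuming a hypothetical record schema; a real pipeline would add cleansing, access controls, and audit logging.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class TrainingRecord:
    content: str
    source: str          # where the data came from (provenance)
    collected_on: date
    fact_checked: bool   # verified per the governance policy


def validate(record: TrainingRecord, max_age_days: int = 365) -> bool:
    """Reject unattributed, unverified, or stale records before training."""
    fresh = (date.today() - record.collected_on).days <= max_age_days
    return bool(record.source.strip()) and record.fact_checked and fresh


# Usage: only validated records flow into the model-training corpus.
record = TrainingRecord("Q2 churn fell 4%.", "crm_export", date.today() - timedelta(days=30), True)
print(validate(record))  # True: attributed, fact-checked, and fresh
```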
TikTok, facing renewed privacy and national security scrutiny of its social media products, recently launched a Transparency and Accountability Center at its Los Angeles headquarters that provides visitors with a behind-the-scenes look at the company’s algorithms and content-moderation practices. While it remains unclear how effective this approach will be at addressing major misinformation and privacy issues, the company is pioneering new approaches that could be a model for others in the industry, such as providing outside experts with access to its source code, allowing external audits, and providing learning content to make its opaque AI processes more explainable to journalists and users.
Product: Innovate your product and service experience responsibly
Research shows that building an agile culture of innovation is critical to fostering organizational resilience. Engaging employees at all levels of an organization in AI-focused innovation initiatives ensures that solutions address diverse needs and unlock opportunities that may be outside the field of view of siloed AI groups or leadership teams. However, hastily AI-ifying all your products to stay relevant, without an ethics and integrity lens, could cause unintended harm or breed mistrust with customers/users. A comprehensive user research process, one that surfaces user needs, risks and opportunities, and the impacts of AI on outcomes, can help shape more responsibly designed products.
Our own recent work with Fast Company explored a set of design principles for integrating generative AI into digital products. It showcased examples of progressive disclosure affordances for interacting with AI chatbots, as well as what a standardized labeling system for AI-generated content could look like, to increase transparency with users. Establishing strong AI product design practices like these is especially important for highly consequential and personally sensitive products and services in sectors like education, healthcare, and financial services. Start with AI applications that are more mainstream than cutting edge: autocompleting text to save customers time during onboarding is a far more developed use case than using facial recognition to infer your customer’s emotional state.
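To make the labeling idea concrete, below is a hypothetical sketch of structured metadata that could travel with an AI-generated asset. This is not the system from the Fast Company piece; the field names and disclosure format are assumptions for illustration only.

```python
from dataclasses import dataclass

# Hypothetical shape for a standardized AI-content label; the fields
# are assumptions for illustration, not a published standard.
@dataclass
class AIContentLabel:
    generated_by: str      # model or tool that produced the content
    human_reviewed: bool   # whether an editor vetted the output
    source_note: str       # short provenance note shown to users

def disclosure(label: AIContentLabel) -> str:
    """Render a user-facing disclosure string from the label."""
    review = "human-reviewed" if label.human_reviewed else "not human-reviewed"
    return f"AI-generated with {label.generated_by} ({review}). {label.source_note}"

print(disclosure(AIContentLabel("a text model", True, "Based on company FAQs.")))
```

Whatever the exact schema, the design choice that matters is that the label is machine-readable and attached at creation time, so the disclosure can follow the content wherever it is displayed.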
Launching successful AI products and features requires the engagement of everyone involved in a product organization, extending beyond the typical research, design, and development functions to include adjacent and supporting functions like legal, communications, and operations to ensure that AI products are delivering on their promises. Leaders should establish QA best practices for testing and vetting products for their ethical and social impacts before public release. The Center for Humane Technology’s Design Guide is a great place to start thinking about evaluating the impact an AI product has on users.
Prosperity: Share productivity gains across stakeholder groups
As AI provides businesses with tangible gains, there is an opportunity to share this newfound prosperity across stakeholder groups. According to political theorist David Moscrop, “With automation, the plutocrats get the increased efficiency and returns of new machinery and processes; the rest get stagnant wages, increasingly precarious work, and cultural kipple. This brave new world is at once new and yet the same as it ever was. Accordingly, it remains as true as ever that the project of extending liberty to the many through the transformation of work is only incidentally about changing the tools we use; it remains a struggle to change the relations of production.” Leaders are tasked with rebalancing stakeholder relationships to mitigate backlash from potentially rapid and negative impacts on workers’ livelihoods across industries.
AI’s material impact on jobs will likely be felt hardest by lower-wage workers, who will have the least say over how AI is integrated into (and eventually replaces) their jobs. The Partnership on AI’s Guidelines for AI and Shared Prosperity provide a great starting point for leaders, offering key signals and risks of job displacement, a job impact assessment tool, and stakeholder-specific guidelines for rebalancing the impacts of AI on the workforce.
AI enterprises have a voracious appetite for data, which is often extracted from a variety of sources for free—directly from users through opaque data usage agreements, or pirated from artists and other creators to train AI models. Such data relationships need to be rethought, as they are increasingly becoming economic relationships that concentrate wealth and power in the hands of the extracting organizations. As the recent SAG-AFTRA strike demonstrated, business leaders need to consider whose data feeds their algorithms and revisit the underlying contracts and agreements so that contributors to AI systems are remunerated equitably. One interesting proposal is the development of data cooperatives that can act as fiduciary intermediaries, brokering shared value between data producers and consumers.
While enormous value can be unlocked from AI—an estimated $4.4 trillion—should all of it go into the pockets of executives and shareholders? In addition to fairly compensating the creatives whose work trains algorithms and the users who supply data, leaders should consider how they might pass along AI-generated value to their end customers. This could be directly financial, like a drug company lowering prices after integrating AI into its development pipeline, or value-added through expanded service offerings, such as a bank offering 24/7 service hours through AI-supported customer touchpoints.
Taking it one step at a time
The technology of AI may shift rapidly, but we are really just at the beginning of a much larger transition in our economy. The decisions that leaders and organizations make today will have long-tail consequences that we can’t clearly see now. The gains from AI may be quite lucrative for those who implement it well, and the race to dominate a winner-takes-all economy is real. But racing to modernize your business operations and build AI-ified products and services without understanding the technology’s broad impacts is a bit like a 19th-century factory owner dumping toxic waste into the air in a hasty effort to leverage the latest production-line technology of the industrial era. We are all still paying for those leaders’ decisions.
Be deliberate about the foundational AI choices and behaviors that will impact the long-term resilience of your organization and shape equity in our future society. Hopefully these five Ps can help you confront the AI hyperobject holistically by getting back to basics. Take it slow at first, so you can move steadily later. Gather your troops, work to deeply understand your people/customers/partners, and then thoughtfully make your move forward.
—
All images used in this article were generated using AI.
“…you don’t even have to connect to a person in real life, but you still feel connected.”
– Laya, 15-year-old NFT Artist
Developing Skills and Earning a Livelihood
Whether it’s NFTs, Web3, or AI, the rapid evolution of technology can offer opportunities for users of all ages, but young people – who spend so much of their time online – have a unique relationship with these emerging tools. And, despite what many think, adolescents are already using these emerging technologies to improve their well-being at a time when the mere existence and lived experiences of BIPOC and LGBTQ+ youth, especially, are under attack.
Take 13-year-old digital artist Laya Mathikshara from Chennai, India, for example. In May 2021, as a neophyte in the world of digital art, she sold her first NFT.
Launching the “What if ?” Collection on foundation – featuring arts that are inspired by the converse of scientific facts. The first piece is “What if, Moon had life ?” is now out @withFND Do check out 😀 https://t.co/8mya2FOm42 pic.twitter.com/e72C9sDrEC
— Laya Mathikshara (@layamathikshara) May 22, 2021
Her animated artwork, titled What if, Moon had life?, depicted the Moon’s active core gurgling. Inspired by the distance between the Earth and the Moon (384,400 km), Laya listed the reserve price as 0.384400 ETH (Ethereum) on Foundation, a platform that enables creators to monetize their work using blockchain technology. It caught the eye of Melvin, co-founder of the NFT Malayali Community, who placed a bid and collected her first artwork for 0.39 ETH ($1,572 at the time).
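The pricing itself is a tidy bit of arithmetic: the reserve simply scales the Earth-Moon distance down by a factor of one million, and the reported dollar figure implies the ETH exchange rate at the time of sale. A quick sketch, using only the numbers in the story:

```python
# Recomputing the pricing arithmetic from the story (all figures are
# taken from the article itself, not from live market data).
distance_km = 384_400                    # Earth-Moon distance
reserve_eth = distance_km / 1_000_000    # -> 0.3844 ETH reserve price
sale_eth = 0.39                          # winning bid
sale_usd = 1_572                         # reported USD value at the time

print(f"Reserve price: {reserve_eth} ETH")
print(f"Implied ETH/USD rate: ${sale_usd / sale_eth:,.0f}")  # ~ $4,031
```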
After the sale, and the success of subsequent NFTs, Laya – now 15 years old – decided to make digital art her career. With Web3, a collector of her art introduced her to other artists, whom she felt inspired to support through Ethereum donations. It “feels amazing to help people and contribute. The feeling is awesome,” she says.
“I started with nothing to be honest,” says Laya, “with zero knowledge about digital art itself. So I learned digital art [in parallel to] NFTs because I had been into traditional art [when I was younger].”
Supporting Key Developmental Assets to Well-being
Knowing that young people spend much of their unstructured time online, that digital wellness is a distinct concern for Gen Z, and that the technology landscape is rapidly changing, Artefact partnered with Hopelab to conduct an exploratory study to understand their experiences with emerging technology platforms – ones largely enabled by Web3 technologies like blockchain, smart contracts, and DAOs (decentralized autonomous organizations). The organizations were particularly interested in how these technologies might contribute to a wide spectrum of developmental assets to improve the well-being of young people.
Our study found that Web3 can support youth wellness because it is built on values such as ownership, validation, and community that link to developmental assets like agency and belonging. These values are fundamentally different from those of Web2, a technology operating on business models that monetize our attention and personal data.
Having grown up with multiple compounding stressors, including climate change, a global pandemic, and political unrest, some Gen Z youth find appeal in Web3’s potential to create what you want, own what you make, support yourself, and change the world.
With Web3, young people are experimenting with their interests and identities, creating art and music, accumulating wealth, consuming and sharing opinions, forming communities, and supporting causes that deeply resonate with them.
Victor Langlois and LATASHÁ, visual and musical artists, respectively, each represent the diversity that is important to our organizations, and each has made real income at a young age through NFTs. Likewise, World of Women, a community of creators and collectors, believes representation and inclusion should be built into the foundation of Web3, while UkraineDAO raises money to support Ukraine.
Aligning with Gen Z Values
The gateway to Web3 for youth has commonly been through media hype, celebrity fanfare, and video games. Youth we spoke to were all skeptical, at least at first. Laya says, “I thought it was some cyber magical money or something. It just didn’t feel real.” After learning how to use the technology to create assets themselves and even make money via NFTs without a bank account, they began to invest more time experimenting with the tech and consuming content.
These experiences are not without challenges, of course. Young people in our study shared that they need to spend a lot of time learning about the ever-evolving space and building connections to stay relevant. The financial ups and downs are more extreme than those of the stock market, and there is potential for major losses at the hands of scammers or through platform vulnerabilities. Like Web2, there is pressure to be endlessly plugged into the constant news, with social capital to be gained by being consistently online. Some of society’s broader social issues also permeate Web3 spaces: racist NFTs and communities abound.
Despite these challenges, there is genuine excitement for a new internet built on Gen Z’s core values. Several youth shared how DAOs are flipping organizational norms, where hierarchy and experience no longer determine whether your idea takes hold. Web3 technologies are giving youth an opportunity to start careers that weren’t previously viable, find new audiences and fanbases, create financial independence, detach from untrustworthy platforms, and find and contribute to caring communities – all while building their creative, socioemotional, and critical thinking skills online.
These experiences are helping Gen Z feel a strong sense of belonging as they find communities and causes they care about. In the words of one of our interviewees, Web3 offers a “new and shiny” way to “do good in the world.” The experiences are more accessible – and specific to them – and the decentralized nature of Web3 means that creators and the public, not big tech or its algorithms, get to determine what is current and relevant. This is especially important for creators from groups that have been excluded from power because of their race, ethnicity, gender, or orientation. One participant shared how empowering it was to no longer be at the whim of social media platforms that may make design changes that erase your content, user base, or searchability overnight.
Like any other technology, Web3 and its components can have positive and negative impacts, but its fundamental tenets mean that we will likely see promising innovations and experiences that can support young people to find agency and belonging.
“We are all decentralized for the most part,” says Laya. “And the fun fact is, I have not met many of my Indian friends…I haven’t met folks in the U.S. or any other countries for that matter… you don’t even have to connect to a person in real life, but you still feel connected.”
Neeti Sanyal is VP at Artefact, a design firm that works in the areas of health care, education, and technology.
Jaspal Sandhu is Executive Vice President at Hopelab, a social innovation lab and impact investor at the intersection of tech and youth mental health.
Learning (r)evolution: Exploring the Impact of AI in K-12 Education
The K-12 education sector is at a unique inflection point as digital technologies reshape how students learn, how educators teach, and how organizations adapt to serve the needs of increasingly diverse student populations. The future of learning may look radically different from today’s. Recent rapid advances in AI have made many leaders pause to question how such a transformative leap in technology will impact their organization, its people, and its stakeholders in both the near term and the long term.
Will the impact of AI on educational outcomes be a disruptive revolution or a natural evolution?
In this white paper, Artefact developed four future scenarios to understand the impact of AI in the K-12 education sector from a variety of perspectives, including students, parents, teachers, administrators, and tech industry professionals. Each scenario comes with a set of ethical and equity considerations that result from how technological and societal trends interact. This work builds on our expertise in user experience design and strategic foresight and our experience working in the education sector.
At Artefact, we believe in the powerful impact that strategic foresight and design have on an organization’s long-term success. By exploring possible futures, we hope to help you spark critical conversations and strategic planning across your team to ensure equity, inclusion, and innovation as your organization evolves alongside AI. Our white paper also includes a discussion guide to help get those initial conversations off the ground.
Download the white paper!
Grab your copy of the white paper and reach out to see how Artefact can help you manage transformational change affecting your business today.
“Before long, billions of people around the world were working and playing in the OASIS every day. Some of them met, fell in love, and got married without ever setting foot on the same continent. The lines of distinction between a person’s real identity and that of their avatar began to blur. It was the dawn of a new era, one where most of the human race now spent all of their free time inside a videogame.”
—Ernest Cline, Ready Player One
Ernest Cline, in his Ready Player One (and Two) books, paints a picture of a world where everyone avoids the problems of physical reality—a global energy crisis, environmental degradation, extreme socioeconomic inequality—by taking “an escape hatch into a better reality.” That escape hatch is the OASIS, the fully fledged metaverse in virtual reality, “where anything was possible.”
While news cycles have shifted away from the metaverse in recent months (thanks, ChatGPT!), big tech and startups have been working diligently behind the scenes, investing billions of dollars in creating alternative realities, each with the goal of bringing about its own concept of the metaverse. Apple is heavily rumored to unveil a mixed reality headset at WWDC in June, one that will likely challenge the criticisms of Meta’s approach and massively advance the metaverse’s arrival, even if under a different label, like spatial computing. With different visions of the metaverse rising in tandem, we must examine the tools we have access to and the foundation on which we can build in order to ensure this “better reality” is truly better.
Re-define viable
Those in new product development usually think about whether an idea is technologically feasible, desirable to users, and viable in the marketplace.
Meta, Google, Microsoft, and Apple, along with many other players like Magic Leap, have been working on Augmented Reality (AR) (e.g., Ray-Ban Stories, Google Glass, Project Starline, iOS), Mixed Reality (MR) (e.g., HoloLens 2, Magic Leap 2, the rumored Apple headset), and Virtual Reality (VR) (e.g., Meta Quest 2, Quest Pro) projects for decades. Advancements in headsets and computing suggest the technology is mature enough to support these reality experiences for the mass market. Feasibility: check.
The lukewarm success of remote work has catalyzed a clear problem space for the metaverse to tackle, and our increasing entanglement with contextual online avatars (e.g., VTubers, “finsta” Instagram profiles, and even the everyday Apple Memoji), along with the success of MMOs (massively multiplayer online games, e.g., Fortnite and Roblox), signals a readiness among consumers to develop a personal connection to an online self and peers, making the metaverse desirable from a consumer standpoint. Desirability: check.
If we don’t act now, the metaverse will be built on the same foundation as today’s paradigm, whose consequences we are already living with.
Here’s a five-step framework you can use to work toward a better metaverse sooner rather than later.
Five steps to work toward a better metaverse
As product owners, designers, and technologists, now is the time to ask ourselves, “What if the metaverse succeeds?” As you build the metaverse from your own perspective and strategy, how can you avoid contributing to a world that needs an “escape hatch from reality” and instead create a world we might actually want to spend time in?
The five steps
1. Define the term
2. Plot a path
3. Map the landscape
4. Imagine potential futures
5. Reflect on strategy
Step one
Define the term
Each company working in mixed reality will define the metaverse differently. Apple will undoubtedly have a different approach from Meta, from Epic, and so on. As it is still emerging, the metaverse currently has no one definition. As such, teams must align on a shared definition to ensure they are working toward the same product vision and forecast the resulting unintended consequences of that strategy.
This framework uses my emerging definition of the metaverse: “a shadow layer creating a seamless experience across shared realities (AR, VR, PR),” where PR is physical reality. This shadow layer would be made up of data and information presented seamlessly and contextually across those different realities. There are endless ways the mixing of these realities could be imagined. For example, on an individual scale, imagine a friend in the virtual world who could seamlessly accompany you to dinner in physical reality through an augmented reality experience. Your definition of the metaverse will work for this process too, but it must be clearly defined.
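For the sake of illustration, a toy Python sketch of that shadow layer might look like the following: one underlying entity, rendered differently depending on which reality the viewer is in. The entity, reality codes, and renderings here are invented examples, not a proposed architecture.

```python
from dataclasses import dataclass

REALITIES = ("PR", "AR", "VR")  # physical, augmented, virtual

# Toy model of a shadow-layer entity: the same friend appears
# differently depending on the reality you meet them in.
@dataclass
class ShadowEntity:
    name: str
    renderings: dict  # reality code -> how the entity appears there

friend = ShadowEntity(
    name="Ada",
    renderings={
        "VR": "full-body avatar at the virtual table",
        "AR": "avatar overlaid on the empty chair at dinner",
        "PR": "voice and presence cues only",
    },
)

def render(entity: ShadowEntity, reality: str) -> str:
    """Resolve how an entity should present in a given reality."""
    return f"{entity.name}: {entity.renderings.get(reality, 'not present')}"

print(render(friend, "AR"))  # the dinner-companion example from the text
```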
Step two
Plot a path
When defining a better metaverse, you must think critically about the underlying model it would be built on. Consider where your proposal for the metaverse will sit on a spectrum of disruption. Will it augment the existing status quo of today’s centralized model (built on a small number of operating systems and a shrinking number of server providers), or push to reimagine an alternative future, such as a decentralized model (e.g., Web3) or a distributed one (e.g., the early internet)? Depending on the model, the landscape and potential futures you’re working toward (and their unintended consequences) will be considerably different.
Step three
Map the landscape
If you look at as many aspects and stakeholders as possible of the systems the metaverse will be part of, you can consider how to shape a better future. Beyond the software needed to make the metaverse real, you need to step back and consider how it will impact multiple levels of the systems it will operate in. One model (based on, but different from, Pace Layering) imagines what the potential effects might be at different scales (individual, relational, group, societal, environmental, etc.).
After considering the rings of impact, create a matrix by industry (education, healthcare, social impact, etc.) to capture relevant areas the intervention might affect. It’s important to push beyond the obvious stakeholders and industries you might initially consider; try including a specific population or industry that isn’t already part of your normal processes or strategy. This is often where threads of potential unintended consequences emerge.
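If it helps to make the matrix tangible, here is a minimal Python sketch of the rings-of-impact-by-industry grid. The ring and industry names come from the text above; the example note in the cell is purely hypothetical.

```python
# Sketch of the step-three landscape matrix: rings of impact crossed
# with industries. Each cell accumulates free-form analyst notes.
RINGS = ["individual", "relational", "group", "societal", "environmental"]
INDUSTRIES = ["education", "healthcare", "social impact"]

landscape = {(ring, industry): [] for ring in RINGS for industry in INDUSTRIES}

# Hypothetical example entry, purely for illustration:
landscape[("individual", "education")].append(
    "Virtual classrooms may widen access but deepen device inequity."
)

for (ring, industry), notes in landscape.items():
    if notes:
        print(f"{ring} x {industry} -> {notes}")
```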
Step four
Imagine potential futures
While all of the topics and stakeholders captured in the matrix will be affected in some way, not all will be affected to the same degree; much depends on how any given future plays out. Utilize a series of 2×2 futures matrices, inspired by Alun Rhydderch’s 2×2 Matrix Technique, to push hypothetical scenarios that are both idealistic (utopian) and problematic (dystopian), at varying degrees of intensity (mass adoption vs. passing fad). While the future will likely land somewhere in the middle, considering extremes lets us hypothesize what the preferable future is and capture potential blind spots where unintended consequences could emerge along the way.
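The mechanics of the technique are simple enough to sketch: cross the two axes and you get four scenario seeds to flesh out as narratives. In the Python snippet below, the axis labels are taken from the text; the generated phrasing is a placeholder for real scenario writing.

```python
from itertools import product

# Step four's 2x2 scenario matrix: one axis runs utopian <-> dystopian,
# the other mass adoption <-> passing fad. Crossing them yields four
# scenario seeds to develop into full narratives.
TONE = ["utopian", "dystopian"]
INTENSITY = ["mass adoption", "passing fad"]

for tone, intensity in product(TONE, INTENSITY):
    print(f"Scenario seed: a {tone} future under {intensity}")
```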
Step five
Reflect on strategy
Now reflect on the learnings from the process. Ask yourself several questions. How does your definition and strategy for the metaverse affect society, industries, and individuals? How does your strategy for the metaverse, played out through potential future scenarios, affect the different systems of scale? How might you change your definition, vision, and/or strategy to build toward a better metaverse?
Repeat to keep the stars aligned as the future unfolds.
These five steps, from definition to imagination to reflection, must be an ongoing activity, revisited and repeated on a regular basis. The future is uncertain, and the world will change around us. Advances in generative AI could dramatically change the technological landscape. Changes in political winds could change the governance landscape. A major climate or global health crisis, as we saw with COVID-19, could change societal priorities. Doing these activities proactively during initial design can help reduce the number of reactive outcomes we will have to chase down once the product is released.
Whatever the metaverse becomes, hopefully we can all help make sure it aligns with a preferable future that favors all realities: a future world we want to run toward, rather than escape from.