Just as the early days of Human-Computer Interaction emphasized thoughtful UX to make personal computing accessible and intuitive, we now need the same intentionality in designing AI experiences.

AI model development and UX design must go hand in hand—because building powerful systems isn’t enough if people can’t understand, trust, or effectively use them.


Anyone who has conversed with ChatGPT or created an image with Midjourney might assume that a product leveraging generative AI doesn’t require much UX design—after all, these AI systems seem to understand us and follow our commands naturally. However, UX design remains just as crucial for AI-enabled products as it is for non-AI ones. A well-designed user experience ensures the AI product addresses real needs, instills trust, and enables intuitive and desirable interactions. Without thoughtful UX, even the most advanced AI can be misdirected, ineffective, or hard to use, preventing people from deriving real value. For companies developing these products, poor UX can be costly—or worse, harmful, as demonstrated by problematic AI systems in past incidents involving self-driving cars, hiring, and mental health.

Here are four common pitfalls, or challenges, that innovators looking to leverage AI may encounter, and how UX design can help avoid them.

Just as users need to trust products to use and adopt them, they must also trust the AI that powers those products. Rachel Botsman, author of “Who Can You Trust?”, presents a framework describing four traits of trustworthiness that apply to people, companies, and technologies. Two traits relate to capabilities (competence and reliability)—the “how”, while two relate to intention (empathy and integrity)—the “why.”

We tend to trust others when they demonstrate two key qualities: competence—the ability to do what they promise effectively—and reliability—consistently delivering expected results. With humans, it’s relatively easy to assess these traits through conversation and interaction. But what happens when a system is 3x, or even 10x, more capable than any person we’ve encountered—and we have no real insight into how it works?

That’s the fundamental challenge with AI. As a black-box technology, its growing power makes it harder—not easier—to evaluate. While it’s relatively simple to notice improvements in AI-generated text, assessing the accuracy and reliability of AI-driven information or analysis—especially in unfamiliar domains—is much more difficult.

Here, UX design plays a role in helping make AI behavior more understandable, whether it’s recommending content, making predictions, or automating complex tasks. By designing for transparency, UX can foster trust through strategies like surfacing explanations for AI decisions (such as data provenance) and showing confidence indicators to help users gauge how certain the system is in its outputs. This is what Rachel Botsman terms “trust signals”—small, often unconscious clues we use to assess the trustworthiness of another person or thing. To see some of these ideas in action, check out our piece with Fastco.
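To make this concrete, here is a minimal sketch (in TypeScript, not tied to any particular product) of an AI answer that carries its sources and a confidence estimate alongside the generated text, so the interface can surface them as trust signals. The field names, thresholds, and wording are illustrative assumptions, not an established API.

```typescript
// Sketch: an AI answer enriched with trust signals (illustrative shapes, not a real API).
interface TrustSignals {
  sources: { title: string; url: string }[]; // data provenance surfaced to the user
  confidence: number;                        // 0..1 estimate supplied by the system
}

interface AssistantAnswer {
  text: string;
  signals: TrustSignals;
}

// Translate a raw confidence score into a plain-language hint the user can act on.
function confidenceHint(confidence: number): string {
  if (confidence >= 0.85) return "High confidence";
  if (confidence >= 0.6) return "Moderate confidence: worth a quick check";
  return "Low confidence: please verify before relying on this";
}

// Render the answer with its provenance and confidence hint instead of hiding them.
function renderAnswer(answer: AssistantAnswer): string {
  const sourceList = answer.signals.sources
    .map((s) => `- ${s.title} (${s.url})`)
    .join("\n");
  return `${answer.text}\n\n${confidenceHint(answer.signals.confidence)}\nSources:\n${sourceList}`;
}
```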

Referencing Botsman’s framework again, UX design can also help AI-enabled products act with integrity (being fair and ethical) and with empathy (understanding and aligning with human interests and values). By incorporating diverse perspectives, needs, and values into the design process, UX helps create more inclusive and responsible AI systems. And frequent and extensive user testing enables designers to identify and address biases, unintended consequences, and potential harms before they impact users. This helps correct what the AI field calls the alignment problem.

In addition, thoughtful interface design can provide clear affordances, feedback loops, and user controls. Examples include letting users know when the AI’s response touches on a topic the public generally considers controversial, having users rate an AI response to train it to be better over time (e.g., more personalized or accurate), and building in moments where humans can override the AI.
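Below is a hypothetical sketch of what those affordances might look like at the data level: each suggestion records the user’s rating for later personalization, and nothing is treated as final until a human accepts or overrides it. The types and function names are invented for illustration.

```typescript
// Sketch: feedback and human-override affordances for an AI suggestion (hypothetical shapes).
type Decision = "accepted" | "overridden" | "pending";

interface AiSuggestion {
  id: string;
  content: string;
  decision: Decision;          // the human stays in the loop: nothing ships while "pending"
  rating?: 1 | 2 | 3 | 4 | 5;  // user rating fed back to improve future suggestions
}

const feedbackLog: AiSuggestion[] = [];

// Capture a rating so the product can personalize or retrain on it later.
function rateSuggestion(suggestion: AiSuggestion, rating: 1 | 2 | 3 | 4 | 5): void {
  suggestion.rating = rating;
  feedbackLog.push(suggestion);
}

// Let the human replace the AI's output; the original is kept so the system can learn from it.
function overrideSuggestion(suggestion: AiSuggestion, humanContent: string): AiSuggestion {
  feedbackLog.push(suggestion);
  return { ...suggestion, content: humanContent, decision: "overridden" };
}
```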

When we think of generative AI, we often default to imagining a chatbot. That’s no surprise—large language models are built to generate text, making chat a natural starting point. But simply slapping a chatbot onto an existing product rarely creates real value for users. It’s like applying tiger balm to every kind of pain—it might help sometimes, but it’s far from a universal cure. We saw this clearly with early banking chatbots; anyone who used them remembers how frustrating they were.

True innovation in AI-powered experiences requires more than just plugging in chat—it demands a deep understanding of existing workflows, user mental models, and interaction patterns. That’s where skilled UX designers shine. While text-based interfaces dominate today, the real future lies in well-crafted, multimodal experiences. For designers, this shift brings both exciting opportunities and complex challenges: choosing the right mode of interaction for the task at hand becomes more critical—and more nuanced—than ever.

For instance, natural language excels at complex queries like “Show me customers who spent less than $50 between 11am–2pm last month.” However, it falls short for spatial tasks—imagine trying to perfectly position an element through verbal commands: “Move it left… no, too far… right a bit…” These limitations become especially apparent in generative AI, where initial prompts like “Draw a kid in a sandbox” work well, but precise adjustments (“Make the sandbox 10% smaller”) become tedious.

For AI products to succeed, UX designers must determine the optimal level of user control over model inputs and queries, design clear ways for users to understand and modify AI-generated outputs, and create systems for users to effectively utilize those outputs. All of this must be wrapped in an interface that feels natural and easy to use.

Great products evoke emotions. From connected cars to smartwatches to payment apps, thoughtful design can surprise and delight users while making them feel understood and valued—creating a deeper emotional connection with both product and brand. AI-enabled products are no exception. Talkie and Character.ai are two bot creation platforms that successfully keep users engaged and interested. Similarly, Waze builds community through crowdsourced traffic updates and hazards and adds an element of play through customizable voice options and car icons.

What sets AI apart, however, is its seemingly superhuman capabilities, which fundamentally shift how people perceive themselves when using these technologies. It’s like a funhouse mirror that can perturb your sense of self. This phenomenon is explored in an HBR article titled “How AI affects our sense of self.” While productivity and creator apps can trigger job security concerns, using AI for writing essays or applying for a job often leaves users feeling like they’re “cheating” or inadequate. Companies must address these psychological barriers through strategic product design that acknowledges and accounts for these complex emotions. Again, UX design can play a role. To mitigate job displacement fears, design can emphasize human oversight, ensuring users remain central to meaningful tasks and decisions. To help users feel more comfortable with AI, products can incorporate teachable moments explaining why the AI is offering certain suggestions. This helps users enhance their skills rather than becoming overly dependent on AI.

While AI systems continue to advance at a remarkable pace, their success ultimately hinges on thoughtful UX design that puts human needs first. By focusing on generating real value, building trust, aligning with existing workflows, and considering emotional needs, designers can create AI-enabled products that are not just powerful, but truly impactful. The partnership between AI capability and human-centered design can deeply transform the human experience. Companies that bring design into their AI development, with intention and investment, will be the ones to create products that genuinely improve people’s lives.


Imagine you’re on a blind date. Meeting someone new is a relatively rare experience that is very different from talking with a friend. You don’t know what information your date knows about you, and you’re trying to figure each other out. Maybe you dial up certain aspects of your personality or share a personal story to build a deeper connection. Over time, you build trust through consistent conversation.

Now imagine you’re chatting with a new social AI chatbot. A similar back and forth of getting to know each other might occur. You might want to know the social chatbot’s backstory, its limitations, its preferences, or its values. Social AI chatbots are increasingly human-like, with advanced speech capabilities, interactive digital avatars, and highly adaptable personality characteristics that can carry on a conversation that feels like talking with another person. People are chatting with AIs acting as their therapist, friend, life coach, romantic partner, or even spiritual oracle. Given the deeply personal roles that emerging social AI may take on in our lives, trust with such systems (or even regular humans for that matter) should be earned through experience, not freely given.

However, increasingly indistinguishable interactions make forming human-like relationships with AI a blurry endeavor. Like a blind date, you may hit it off at first but discover your date’s behavior shifts as the conversation continues. When a chatbot (or human) performs in an inconsistent, opaque, or odd way, this can erode the process of building trust, especially if someone is sharing sensitive and personal information. To address this, social AI product designers can consider key factors of healthy human relationships such as boundaries, communication, empathy, respect, and mirroring and apply these characteristics to ensure the design of responsible chatbot experiences.  

Boundaries are about establishing clarity and defining capabilities.

People need a clear understanding of a social AI’s content policy, its training data, its capabilities, its limitations, and how best to interact with it in a safe and compliant manner. This is especially important for sensitive uses such as mental healthcare or when the users are children. For example, many flagship LLMs provide disclaimers that responses may be inaccurate. Google requires teens to watch a video educating them about AI and its potential problems before using it. Microsoft recently redesigned its Copilot interface to show users a variety of its capabilities through visual tiles that act as starting prompts. Like a blind date, communicating what each party is open to and capable of can help foster a better connection.

Communication is about constructive feedback that improves connection.

People can sometimes mess up when engaging with a chatbot. For example, they might use a word or phrase in a prompt that violates a content policy. They might discuss a topic or ask for advice on something that is very personal or taboo. When this happens, AI systems can sometimes reject the prompt without a clear explanation, when constructive feedback would be more helpful in teaching people how to best prompt the system in a compliant way. Like a blind date, when you cross a line, a kind piece of feedback can help get the conversation back on track. For example, when discussing topics related to sensitive personal data, Mixtral AI provides additional reassurances in its responses as to how it manages users’ data, preemptively putting any concerns to rest.
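To picture this pattern in product terms (a sketch only; no vendor implements it exactly this way), a moderation step could return the reason a prompt was declined and a suggestion for rephrasing, instead of a bare refusal. The policy check below is a deliberately trivial stand-in for a real moderation model.

```typescript
// Sketch: turning a policy rejection into constructive feedback (illustrative only).
interface ModerationResult {
  allowed: boolean;
  reason?: string;      // which policy concern was triggered
  suggestion?: string;  // how the user might rephrase and stay compliant
}

// Hypothetical, intentionally simplistic check standing in for a real moderation model.
function checkPrompt(prompt: string): ModerationResult {
  if (/\b(ssn|social security number)\b/i.test(prompt)) {
    return {
      allowed: false,
      reason: "The prompt appears to include a sensitive personal identifier.",
      suggestion: "Try removing the identifier and describing your situation in general terms.",
    };
  }
  return { allowed: true };
}

// Explain and coach rather than reject silently.
function respondTo(prompt: string): string {
  const result = checkPrompt(prompt);
  if (!result.allowed) {
    return `I can't help with that as written. ${result.reason} ${result.suggestion}`;
  }
  return "…model-generated answer…";
}
```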

Empathy is about responding to a user’s emotional needs in the moment.

People can bring all kinds of emotions to conversations with social AI chatbots. Sometimes they are just looking for companionship or a place to vent about their day. Social AI chatbots can respond with empathy, providing people with space to reflect by asking more questions, generating personal stories, or suggesting how to modulate mood. For example, an app called Summit, positioned as an AI life coach, can track physical activities related to specific wellness goals that a person has set up. If someone shares a bad mood due to stress, the AI chatbot will suggest an activity that the person previously mentioned had helped them de-stress, such as taking a walk. Like a blind date, your partner’s ability to recall information previously shared and contextualize it with your current emotional expression helps you feel seen and heard. 

Respect is about allowing people to be themselves freely.

Inevitably an individual’s values may misalign with those of AI product designers, but just like a blind date, each party should be able to show up as themselves without fear of being judged. Similarly, people should be able to express themselves on political, religious, or cultural topics and be received in a respectful way. While a chatbot may not explicitly agree with the person’s statement, it should respond with respectful acknowledgement. For example, the kids-focused AI companion Heeyo will politely acknowledge a child’s prompts related to their family’s political or cultural views but doesn’t offer any specific validation of positions in response. Instead, it avoids sensitive topics by asking the child how they feel about what was just shared. 

Mirroring is about active listening and attunement to the user.

Like on a blind date, healthy mirroring behaviors can help forge subconscious social connection rapidly. Mirroring behaviors, such as imitating styles of speech, gestures, or mood, are an effective way to show each other you are listening and well-attuned. For example, if someone is working through a complex life issue with a social chatbot, the AI’s responses might be more inquisitive than prescriptive and it may start to stylize its responses in a way that mirrors the person, such as in a short and humorous or long and emotional manner. Google’s NotebookLM will create an AI-generated podcast with two voices discussing a topic of choice. After the script is generated, it will add in speech disfluencies—filler words like “um” or “like”—to help the conversation between the two generated voices feel more natural. 
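As a rough illustration of mirroring (a sketch that assumes response style is something the product can parameterize; it is not a description of how NotebookLM or any specific chatbot works), the system might infer a style from the user’s last message and turn it into instructions for the model.

```typescript
// Sketch: mirroring the user's conversational style (an illustrative heuristic, not a product spec).
interface ResponseStyle {
  length: "short" | "long";
  tone: "light" | "reflective";
}

// Infer a style from the user's most recent message.
function inferStyle(userMessage: string): ResponseStyle {
  const wordCount = userMessage.trim().split(/\s+/).length;
  const playful = /(haha|lol|!)/i.test(userMessage);
  return {
    length: wordCount < 25 ? "short" : "long",
    tone: playful ? "light" : "reflective",
  };
}

// The inferred style becomes an instruction prepended to the model prompt.
function styleInstruction(style: ResponseStyle): string {
  const lengthHint = style.length === "short" ? "Keep the reply brief." : "Reply at length.";
  const toneHint =
    style.tone === "light" ? "Match the user's playful tone." : "Use a calm, reflective tone.";
  return `${lengthHint} ${toneHint}`;
}
```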

Social AI experiences will continue to rapidly advance and further blur the lines between human and synthetic relationships. While AI technology is running at 21st century speeds, our human brains are mostly stuck in the stone age. The fundamental ways that we form connections haven’t changed as rapidly as our technology. Keeping this in mind, AI product designers can lean on these core relationship characteristics to help people build mutual trust and understanding with these complex systems.

Artefact’s staff reflects on AI’s potential impact on individuals and society by answering questions prompted by the Tarot Cards of Tech. Each section contains videos that explore a tarot card and provide our perspectives and provocations.

The Tarot Cards of Tech was created to help innovators think deeply about scenarios around scale, disruption, usage, equity, and access. With the recent developments and democratization of AI, we revisited these cards to imagine a better tech future that accounts for unintended consequences and the values we hold as a society.

Cultural implications for youth

Jeff Turkelson, Senior Strategy Director

Transcript: I love that this card starts to get at, maybe some of the less quantifiable, but still really important facets of life. So, when it comes to something like self-driving cars, which generative AI is actually really helping to enable, of course people think about how AI can replace the professional driver, or how AI is generally coming for all of our jobs.

But there are so many other interesting implications. So for example, if you no longer need to drive your car, would you then ever need to get a license to drive? And if we do away with needing a license to drive, then what does that mean for that moment in time where you turn 16 years old and you get newfound independence with your driver’s license? If that disappears, that could really change what it means to become a teenager and become a young adult, etc. So what other events or rituals would AI disrupt for young adults as they grow older?

Value and vision led design

Piyali Sircar, Lead Researcher

Transcript: This invitation to think about the impact of incorporating gen AI into our products is really an opportunity to think about design differently. We should be asking ourselves, “What is our vision for the futures we could build?” and once we define those, the next question is, “Does gen AI have a role to play in enabling these futures?” Because the answer may be “no”, and that should be okay if we’re truly invested in our vision. And if the answer is “yes”, then we need to try to anticipate the cultural implications of introducing gen AI into our domain space. For example, “How will this shift the way people spend time? How will it change the way they interact with one another? What do they care about? What does this product say about society as a whole?” Just a few questions to think about.

Introducing positive friction

Chad Hall, Senior Design Director

Transcript: The ‘Big Bad Wolf’ card reminds me to consider not only which AI product features are vulnerable to manipulation, but also who the bad actors might be. Those bad actors could be a user, it could be us, our teams, or even future teams. So, for example, while your product might not misuse data now, a future feature could exploit it.

A recent example that comes to mind is two students who added facial recognition software to AI glasses with a built-in camera. They were able to easily dox the identities of just about anyone they came across in their daily life.

I think product teams need to introduce just enough positive friction in their workflows to pause and consider impacts. Generative AI is only going to ask for more access to our personal data to help with more complex tasks. So the reality is, if nobody tries to ask the question, the questions are never going to get asked.

Minimizing harm in AI

Neeti Sanyal, VP Creative

Transcript: I think it’s important to ask whether AI could be a bad actor. Even when you’re not trying to produce misinformation with generative AI, in some ways it is inherently doing that. I am concerned about the potential for generative AI to cause harm in a field that has low tolerance for risk, things like health care or finance. An example that comes to mind is a conversational bot that can give the wrong mental health advice to someone who is experiencing a moment of crisis.

One exciting way that companies are addressing this is by building a tech stack that uses both generative and traditional AI. And it’s the combination of these techniques that helps minimize the chance of hallucinations and can create outputs that are much more predictable.

If we are thoughtful in how the AI is constructed in the first place, we can help prevent AI from being the bad actor.

Building job security

Rachael Cicero, Associate Design Director

Transcript: One thing we keep hearing about is the disappearing workforce, but often I think we’re overlooking the fact that humans will continue to exist in and contribute to society. Instead, I’d like to see the conversation shift from the disappearing workforce to the unique contributions of human and AI collaboration. Consider civic technology, where generative AI can be used for things like supporting the process of unemployment applications. AI can help with document recognition, which can really reduce the load on human staff, and also accelerate response time for applicants. To me, that collaboration isn’t about replacing jobs but really about enhancing them.

The key to that is investing in reskilling. By including the perspectives of people affected in the design of AI systems, we can better understand the tasks they want automated. The goal being to create a future where AI and humans can work together, enhancing each other’s strengths, and ensuring that everyone has an opportunity to thrive in a rapidly evolving job market.

Transforming tradition

Max West, Principal Designer

Transcript: This card reminds me of how cable TV technology reshaped media jobs. Remember Video Jockeys on the popular 90s MTV show, Total Request Live? VJs had evolved from selecting and remixing music, like their traditional radio counterparts, to focusing on engaging with crowds, talking to celebrities, and orchestrating pop cultural moments. 

Now take the cable TV example and apply it to AI transforming an industry like education. Teachers’ jobs could similarly shift to a more social focus. An AI-powered app could tailor a math or science lesson to a student’s unique cognitive abilities, while the teacher can focus more on the physical, interpersonal, and social aspects of learning. In the same way that VJs would provide crowd-pleasing moments between music videos, educators might find themselves in a similar “hosting” role for the classroom.


So it’s less about what disappears and more about what can transform. So, while roles may change with AI, it could create time and space for richer, more personal experiences among groups.

Exploring how generative AI could superpower research outputs to foster greater empathy and engagement

With the release of GPT-4 and the growing interest in publicly available generative AI tools such as DALL-E 2, Midjourney, and more, there is no dearth of people writing about and commenting on the potential positive and negative impacts of AI, and how it might change work in general and design work specifically. As we sought to familiarize ourselves with many of these tools and technologies, we immediately recognized the potential risks and dangers, but also the prospects for generative AI to augment how we do research and communicate findings and insights.

Looking at some of the typical methods and deliverables of the human-centered design process, we not only saw practical opportunities for AI to support our work in the nearer term, but also some more experimental, less obvious (and, in some cases, potentially problematic) opportunities further out in the future.

More Obvious

Summarizing existing academic and industry research

Identifying subject matter experts and distilling their knowledge and opinions

Supporting researchers with AI notetakers to expedite analysis and synthesis

Supporting participants in the co-design process with generative AI tools to help them better express, articulate, and illustrate their ideas

Less Obvious

Leveraging bots as surrogate researchers for conducting highly structured user interviews on a large scale

Replacing human research subjects entirely for more cursory, foundational, gen pop research

Creating more engaging, sticky, and memorable outputs and deliverables, for example, a life-like interactive persona

Now, while each of the above use cases merits its own deep dive, in this article we want to focus on how advances in AI could potentially transform one common, well-established output of HCD research: the persona.

Breathing new life into an old standard

A persona is a fictional, yet realistic, description of a typical or target user of a product. It’s an archetype based on a synthesis of research with real humans that summarizes and describes their needs, concerns, goals, behaviors, and other relevant background information.

Personas are meant to foster empathy for the users for whom we design and develop products and services. They are meant to help designers, developers, planners, strategists, copywriters, marketers, and other stakeholders build greater understanding and make better decisions grounded in research.

But personas tend to be flat, static, and reductive—often taking the form of posters or slide decks and highly susceptible to getting lost and forgotten on shelves, hard drives, or in the cloud. Is that the best we can do? Why aren’t these very common research outputs of the human-centered design process, well, a little more “alive” and engaging?

Peering into a possible future with “live personas”

Imagine a persona “bot” that not only conveys critical information about user goals, needs, behaviors, and demographics, but also has an image, likeness, voice, and personality. What if those persona posters on the wall could talk? What if all the various members and stakeholders of product, service, and solution teams could interact with these archetypal users, and in doing so, deepen their understanding of and empathy for them and their needs?

In that spirit, we decided to use currently available, off-the-shelf, mostly or completely free AI tools to see if we could re-imagine the persona into something more personal, dynamic, and interactive—or, what we’ll call for now, a “live persona.” What follows is the output of our experiments.

As you’ll see in the video below, we created two high school student personas, abstracted and generalized from research conducted in the postsecondary education space. One is more confident and proactive; the other more anxious and passive.

Now, without further ado, meet María and Malik:

Chatting with María and Malik, two “live personas”

Looking a bit closer under the hood

Each of our live personas began as, essentially, a chatbot. We looked at tools like Character.ai and Inworld, and ultimately built María and Malik in the latter. Inworld is intended to be a development platform for game characters, but many of the ideas and capabilities in it are intriguing in the context of personas, like adjustable personality and mood attributes, personal and common knowledge sets, goals and actions, and scenes. While we did not explore all those features, we did create two high school student personas representing a couple of “extremes” with regard to thinking about and planning their post-secondary future: a more passive and uncertain María and a more proactive and confident Malik.
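To make the idea tangible, the sketch below shows how a persona’s traits, knowledge, and goals could be compiled into a system prompt for any chat-based model. This is not Inworld’s configuration format, and Malik’s details are invented for illustration.

```typescript
// Sketch: a generic "live persona" definition compiled into a system prompt (hypothetical format).
interface LivePersona {
  name: string;
  background: string;
  traits: string[];     // adjustable personality attributes
  knowledge: string[];  // persona-specific facts drawn from research
  goals: string[];      // what the persona is trying to accomplish
}

// Illustrative details only; a real persona would be grounded in research data.
const malik: LivePersona = {
  name: "Malik",
  background: "a proactive, confident high school senior planning his post-secondary path",
  traits: ["optimistic", "organized", "asks lots of questions"],
  knowledge: ["has already compared several state universities", "works a part-time job on weekends"],
  goals: ["find scholarships", "choose a major that leads to stable work"],
};

// Compile the persona into a system prompt that any chat-based model can consume.
function toSystemPrompt(p: LivePersona): string {
  return [
    `You are ${p.name}, ${p.background}.`,
    `Personality: ${p.traits.join(", ")}.`,
    `You know: ${p.knowledge.join("; ")}.`,
    `Your goals: ${p.goals.join("; ")}.`,
    "Stay in character and answer as this person would.",
  ].join("\n");
}
```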

Here’s a peek at how we created Malik from scratch:

Making Malik, a “live persona”

Interacting with María and Malik, it was immediately evident how these two archetypes were similar and different. But they still felt a tad cartoonish and robotic. So, we took some steps to improve progressively on their appearance, voices, and expressiveness.

Here’s a peek at how we made María progressively more realistic by combining several different generative AI and other tools:

Making María, a “live persona,” progressively more realistic

Eyeing the future cautiously

The gaming industry is already leading in the development of AI-powered characters, so it certainly seems logical to consider applying many of those precedents, principles, tools, and techniques to aspects of our own work in the broader design of solutions, experiences, and services. Our experimentation with several generative AI tools available today shows that it is indeed possible to create relatively lifelike and engaging interactive personas—though perhaps not entirely efficiently (yet). And, in fact, we might be able to do more than just create individual personas to chat with; we could create scenes or even metaverse environments containing multiple live personas that interact with each other and then observe how those interactions play out. In this scenario, our research might inform the design of a specific service or experience (e.g., a patient-provider interaction or a retail experience). Building AI-powered personas and running “simulations” with them could potentially help design teams prototype a new or enhanced experience.

But, while it’s fun and easy to imagine more animated, captivating research and design outputs utilizing generative AI, it’s important to pause and appreciate the numerous inherent risks and potential unintended consequences of AI—practical, ethical, and otherwise. Here are just a few that come to mind:

  • Algorithmically-generated outputs could perpetuate biases and stereotypes because AIs are only as good as the data they are trained on.
  • AIs are known to have hallucinations, in which they may respond over-confidently in a way that doesn’t seem justified or aligned with their training data—or, as we’ve additionally configured, with the definitions, descriptions, and parameters of an AI-powered persona. Those hallucinations, in turn, could influence someone to make a product development decision that might unintentionally cause harm or disservice.
  • AIs could be designed to continuously learn and evolve over time, taking in all previous conversations and potentially steering users towards the answers they think they’d want rather than reflecting the data they were originally trained on. This would negate the purpose of the outputs and could result in poor product development decisions.
  • People could develop a deep sense of connection and emotional attachment to AIs that look, sound, and feel humanlike—in fact, they already have. It’s an important first principle that AIs be transparent and proactively communicate that they are AIs, but when the underlying models become more and more truthful and they are embodied in more realistic and charismatic ways, then it becomes more probable that users might develop trust and affinity towards them. Imagine how much more potentially serious a hallucination becomes, even if a bot states upfront that it is fictitious and powered by AI!

Finally, do we even really want design personas that have so much to say?! Leveraging generative AI in any of these ways, without thoughtful deliberation, could ultimately lead us to over-index on attraction and engagement with the artifact at the expense of its primary purpose. Even if we could “train” live personas to accurately reflect the core ideas and insights that are germane to designing user-centered products and services, would giving them the gift of gab just end up muddling the message?

In short, designing live personas would have to consider these consequences very carefully. Guardrails might be needed, such as limiting the types of questions and requests that a user may ask the persona, making the persona “stateless” so it can’t remember previous conversations, capping the amount of time users can interact with the persona, and having the persona remind the user that they are fictitious at various points during a conversation. Ultimately, personas must remain true to their original intent and accurately represent the research insights and data that bore them.
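A minimal sketch of what some of those guardrails could look like in practice follows; the limits, names, and checks are hypothetical and would need careful tuning in a real product.

```typescript
// Sketch: guardrails wrapped around a live-persona session (hypothetical limits and helpers).
interface SessionLimits {
  maxMinutes: number;         // cap on time spent with the persona
  remindEveryNTurns: number;  // how often to repeat the "I am fictitious" disclosure
  blockedTopics: RegExp[];    // question types the persona should deflect
}

class GuardedPersonaSession {
  private turns = 0;
  private readonly startedAt = Date.now();

  constructor(private limits: SessionLimits) {}

  // Statelessness: each call receives only the current question, never prior conversation.
  ask(question: string, generate: (q: string) => string): string {
    const minutesElapsed = (Date.now() - this.startedAt) / 60000;
    if (minutesElapsed > this.limits.maxMinutes) {
      return "This session has reached its time limit.";
    }
    if (this.limits.blockedTopics.some((topic) => topic.test(question))) {
      return "That question is outside what this persona is designed to discuss.";
    }
    this.turns += 1;
    const reminder =
      this.turns % this.limits.remindEveryNTurns === 0
        ? " (Reminder: I am a fictional persona generated from research data.)"
        : "";
    return generate(question) + reminder;
  }
}
```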

And further, even if applying generative AI technologies in these ways becomes sufficiently accessible and cost-effective, it will behoove us to remember that they are still only tools that we might use as part of our greater research and design processes, and that we should not be over-swayed by, nor base major decisions on, something a bot says, as charming as it might be.

Though it’s still early days, what do you think about the original premise? Could AI-enabled research outputs that are more interactive and engaging actually foster greater empathy and understanding of target end-users, and could that lead to better strategy, design, development, and implementation decisions? Or will the effort required and the possible risks of AI-enabled research outputs outweigh their possible benefits?

A vector illustration depicting a person venturing towards a Web3 landscape

Developing Skills and Earning a Livelihood

Whether it’s NFTs, Web3, or AI, the rapid evolution of technology can offer opportunities for users of all ages, but young people – who spend so much of their time online – have a unique relationship with these emerging tools. And, despite what many think, adolescents are already using these emerging technologies to improve their well-being at a time when the mere existence and lived experiences of BIPOC and LGBTQ+ youth, especially, are under attack.

Take 13-year-old digital artist Laya Mathikshara from Chennai, India, for example. In May 2021, as a neophyte in the world of digital art, she sold her first NFT.

Her animated artwork titled What if, Moon had life? depicted an active core of the Moon gurgling. Inspired by the distance between the Earth and the Moon (384,400 km), Laya listed the reserve price as 0.384400 ETH (Ethereum) on Foundation, a platform that enables creators to monetize their work using blockchain technology. It caught the eye of Melvin, co-founder of the NFT Malayali Community, who placed a bid and collected her first artwork for 0.39 ETH ($1,572 at the time).

After the sale, and the success of subsequent NFTs, Laya – now 15 years old – decided to make digital art her career. With Web3, a collector of her art introduced her to other artists, whom she felt inspired to support through Ethereum donations. It “feels amazing to help people and contribute. The feeling is awesome,” she says.

“I started with nothing to be honest,” says Laya, “with zero knowledge about digital art itself. So I learned digital art [in parallel to] NFTs because I had been into traditional art [when I was younger].”

Supporting Key Developmental Assets to Wellbeing

Knowing that young people spend much of their unstructured time online, that digital wellness is a distinct concern for Gen Z, and that the technology landscape is rapidly changing, Artefact partnered with Hopelab to conduct an exploratory study to understand young people’s experiences with emerging technology platforms – ones largely enabled by Web3 technologies like blockchain, smart contracts, and DAOs (decentralized autonomous organizations). The organizations were particularly interested in how these technologies might contribute to a wide spectrum of developmental assets to improve the well-being of young people.

Our study found that Web3 can support youth wellness because it is built on values such as ownership, validation, and community that link to developmental assets like agency and belonging. These values are fundamentally different from the values of Web2, a technology operating on business models that monetize our attention and personal data.

Awareness and usage of Web3 technologies are already high among Gen Zers, with 55% in the U.S. claiming to understand the concept of Web3. Twenty-nine percent have owned or traded a cryptocurrency and 22% have owned or traded an NFT. Importantly, Gen Z believes these technologies are more than a fad: 62% are confident that DAOs will improve how companies are run in the future.

Having grown up with multiple compounding stressors including climate change, a global pandemic, and political unrest, some members of Gen Z find appeal in Web3’s potential to create what you want, own what you make, support yourself, and change the world.

With Web3, young people are experimenting with their interests and identities, creating art and music, accumulating wealth, consuming and sharing opinions, forming communities, and supporting causes that deeply resonate with them.

Victor Langlois and LATASHÁ, visual and musical artists, respectively, each represent the diversity that is important to our organizations, and have made real income at a young age through NFTs. Likewise, World of Women is a community of creators and collectors that believes representation and inclusion should be built into the foundation of Web3, while UkraineDAO seeks to raise money to support Ukraine.

Aligning with Gen Z Values

The gateway to Web3 for youth has commonly been through media hype, celebrity fanfare, and video games. Youth we spoke to were all skeptical, at least at first. Laya says, “I thought it was some cyber magical money or something. It just didn’t feel real.” After learning how to use the technology to create assets themselves and even make money via NFTs without a bank account, they began to invest more time experimenting with the tech and consuming content.

These experiences are not without challenges, of course. Young people in our study shared that they need to spend a lot of time learning about the ever-evolving space and building connections to stay relevant. The financial ups and downs are more extreme than the stock market, along with the potential for major losses at the hands of scammers or platform vulnerabilities. Like Web2, there is pressure to be endlessly plugged into the constant news, with social capital to be gained by being consistently online. Some of society’s broader social issues also permeate Web3 spaces: racist NFTs and communities abound.

Despite these challenges, there is genuine excitement for a new internet built on Gen Z’s core values. Several youth shared how DAOs are flipping organizational norms, where hierarchy and experience no longer determine whether your idea takes hold. Web3 technologies are giving youth an opportunity to start careers that weren’t previously viable, find new audiences and fanbases, create financial independence, detach from untrustworthy platforms, and find and contribute to caring communities – all while building their creative, socioemotional, and critical thinking skills online.

These experiences are helping Gen Z feel a strong sense of belonging as they find communities and causes they care about. In the words of one of our interviewees, Web3 offers a “new and shiny” way to “do good in the world.” The experiences are more accessible – and specific to them – and the decentralized nature of Web3 means that creators and the public, not big tech or its algorithms, get to determine what is current and relevant. This is especially important for creators from groups that have been excluded from power because of their race, ethnicity, gender, or orientation. One participant shared how empowering it was to no longer be at the whim of social media platforms that may make design changes that erase your content, user base, or searchability overnight.

Like any other technology, Web3 and its components can have positive and negative impacts, but its fundamental tenets mean that we will likely see promising innovations and experiences that can support young people to find agency and belonging.

“We are all decentralized for the most part,” says Laya. “And the fun fact is, I have not met many of my Indian friends…I haven’t met folks in the U.S. or any other countries for that matter… you don’t even have to connect to a person in real life, but you still feel connected.”


Neeti Sanyal is VP at Artefact, a design firm that works in the areas of health care, education, and technology.

Jaspal Sandhu is Executive Vice President at Hopelab, a social innovation lab and impact investor at the intersection of tech and youth mental health.

In recent months, we’ve seen the rise of independent social media marketed toward authenticity: first BeReal, now others like Gas have cropped up. When we speak with Gen Z consumers, authenticity feels like a buzzword—it comes up again and again as a guidepost for ideal experiences—yet they have difficulty defining it. Instead, it feels like a reaction to the inauthenticity they see on Instagram and, to a lesser extent, TikTok, which they blame for a lack of social connection in spaces that should foster it. While BeReal’s features limit the ability to curate posts, the core of its UX is the same as larger social media platforms, which limits the social connection that underpins authenticity. To design for authenticity, platforms must adopt a UX that allows users to adapt and evolve their identities over time.

Putting on an “act” in social spaces isn’t unique to social media. In 1959, Erving Goffman published The Presentation of Self in Everyday Life, where he contends that real-life social situations cause participants to be actors on a stage, with each implicitly knowing their role. The character one plays depends on a variety of contextual factors: who is present, the “props” and “set” (visual cues) among others. As such, each performance is different. His theory explains why one might feel awkward when two different social groups are in the same room: the actor doesn’t know which role they are supposed to play.

In online spaces, the feed is our perma-stage. Facebook’s News Feed was designed to deliver updates on friends the same way we receive updates on local and national news. It seems inevitable that this product vision would produce performances, and highly curated ones at that. Its one-to-many nature limits standard interaction; instead of an actor-actor dynamic, we see a creator-commenter-lurker hierarchy. And because creators design their posts to cater to the masses, they are not moving from stage to stage; instead, one’s online persona feels static. Here, the light of inauthenticity shines through, as we are no longer playing together, but watching others perform.

In Goffman’s model, actors retreat “back-stage” when they are alone or with close others — this is the place where they can let their hair down and be free from keeping up impressions. While the dominance of social media’s feed might make the Internet seem like an unlikely place for back-stage settings, we find almost every social media has a direct message function. In contrast to the one-to-many, post-centric UX of the feed, these back-stage spaces are one-to-one or one-to-few interaction-heavy spaces that have come to be the most fulfilling part of the social media experience for users. Instead of solo “lurking” that can lead to comparison and loneliness, users that are active in back channels find engagement, connection, and reprieve to be themselves, or at least the character that feels like the smallest margin of performance with this particular friend or group, since they have created their “show” together.

But it’s the feed that dominates the social media experience. It permeates moments that would have traditionally been back-stage settings (for example, alone in one’s home), and so we find ourselves wanting authenticity, or a back-stage feeling, here. And so, trends like posting crying selfies have surfaced, which feel close to a cut and paste: back-stage content onto the front-stage. While a post like this could make a user feel understood or less alone momentarily, the infrastructure on social media doesn’t enable the interaction needed to produce real support, and can continue to feel designed for likes. Between glamour shots and crying selfies sits BeReal, where users post more of the “everyday” of their everyday life. Still, BeReal has been criticized for either being boring, still performative, or even exclusive in a more intimate way. A feed can’t support true connection, the table-stakes of enduring authenticity.

Outside of these two paradigms, we see a third type of space emerging. Platforms like Discord have taken hold during the pandemic as a more casual place to “hang out” virtually. Building on a chat-based UX, Discord enables users to find others with similar interests and move between smaller and larger channels as well as text and voice-based communication. Further, Discord is the hub for creative tools like Midjourney, an AI image generator that can only be accessed through Discord using bot commands. Similarly, Fortnite builds conversation through shared experience and play, in so doing re-leveling the audience-observer dynamic and putting engagement over performance. Extending Goffman’s metaphor, we might compare the social atmosphere created on Discord and Fortnite to a writer’s room, where users engage and create together.

A more agile space like Discord reflects the “Presentation of Self” as charted by Gen Z. This generation sees the self as a canvas for experimentation, where identity is fluid. Through creative tools and less definite spaces, creativity and play extend to the making of self on a journey of self-discovery. Users can create and try on characters much like a comedian might on a Tuesday night, testing whether a bit resonates before it becomes an enduring part of the Saturday night act.

To enable more dynamic interactions, we will need to move away from a cut-and-paste UX approach to a ground-up infrastructure that is designed for fluidity. Taking pointers from the “writer’s room,” two principles can guide us. First, collaboration. Similar to “yes, and,” creators in authentic spaces create in tandem rather than in a creator-consumer dynamic. The UX of authentic spaces must lean toward chat over post, which fosters interaction and relationships that ensure it’s safe to try a new presentation of self. Second, authentic social media needs impermanence. Though a feed may refresh over time, we know that posts on Instagram will be connected to our profile for years to come. If a post is instead lost in a Discord feed, we may feel more freedom to experiment and “get it wrong.” Combining collaboration and impermanence, we might just set the stage to permit the collection of characters we all play, so that we can all feel a bit more dynamic, and perhaps even authentic, in digital spaces.

An abstract composition featuring various green, yellow, blue, and purple shapes, with two sets of shapes resembling human forms merging together in the center of the composition.

The above vignette shows “cultural humility” in action. This approach fosters cultural understanding through respect, empathy, and critical self-reflection to build partnerships between providers and the diverse individuals they serve. Cultural humility has become a hallmark pathway for realizing health care that responds to the needs of diverse patient populations and reduces the extreme health disparities they often face. 

Cultural humility is needed now more than ever. If current trends continue, immigrants and their descendants will account for around 88% of U.S. population growth through 2065. Alongside this, diversity will also grow within healthcare professions. But the current care model in the U.S. rests on a culture of biomedicine that is largely inhospitable to diverse health-related beliefs and practices. Instead, we call for ways to work with our increasingly pluralistic society to uplift the benefits of biomedicine while embracing diverse perspectives on health and healing.

Centering lived experiences in healthcare

Within any cultural or identity group, each person’s lived experience is intricate and varied, and what is necessary to live a healthy and fulfilling life is equally individualistic. To recognize diverse needs in health care, medical training and practice have come to focus on “cultural competence,” “a set of congruent behaviors, knowledge, attitudes, and policies that come together in a system, organization, or among professionals that enables effective work in cross-cultural situations.” But even with cultural competence, lived experience is often overlooked, causing providers to make assumptions about a specific patient based on learned facts about the broader racial/ethnic groups to which they may belong. This can lead to care decisions based on generalizations, resulting in inappropriate recommendations for a patient’s unique circumstances.

On the other hand, “cultural humility” is a much stronger foundation for realizing culturally responsive care that honors each patient’s lived experience. It is grounded in rigorous self-reflection and a willingness to listen to, learn about, and adapt to patients’ diverse cultural values and practices. Crucially, exercising cultural humility reduces unconscious bias and stereotyping toward diverse patient populations based on many identity factors, from cultural background, race, and age to socioeconomic status, religion, and gender identity. Bias has been shown to negatively impact patient care, including poor patient-provider communication, low patient satisfaction, and mistrust of the healthcare system. A culturally humble approach to care achieves the nuanced understanding of patients’ lived experiences and unique backgrounds necessary to truly embrace cultural differences and work toward dismantling the structural vulnerabilities that result in unequal health outcomes.

Practicing cultural humility during moments of care

We see an opportunity to intervene at the most intimate level of care during face-to-face interactions between patients and providers, making cultural dimensions more accessible and the hidden barriers to care faced by multicultural communities more visible. 

Isolated tools exist that make inroads into providing clinicians with what they would need to realize culturally appropriate care. The tools fall into three focus areas:

  1. Improving communication between patients and providers 
    The Eight Questions and the Cultural Formulation Interview can be used to elicit patients’ understanding of their illnesses in the clinic. And the Vital Talk app trains providers to communicate with their patients about sensitive topics, which could be especially relevant for providers who did not have “narrative medicine” as part of their training. But cultural dimensions of care are still not a focus of the app. Moreover, with these tools, providers are still left without guidance on implementing them in practice or pragmatic ways to support their uptake in clinical settings within the time and logistical constraints of appointments.

  2. Equipping providers with cultural information
    Existing provider-focused databases like Ethnomed and CultureVision can help contextualize culturally specific beliefs about health and illness that might surface during a visit while suggesting pointers for culturally appropriate care. But accessing these tools during a visit may take up valuable time and could detract from the provider’s ability to listen and respond to the patient’s needs. The focus on the information at the level of cultural groups may also be problematic, resulting in a lack of nuanced context around each patient’s needs and preferences. Lastly, these tools provide a fixed set of information that does not change, for example, based on community member input or adapt to the needs of individual patients. They do not allow cultural tailoring or adaptations to happen in real-time during patient-provider interactions, such as through in-the-moment personalized recommendations based on information elicited by the patient during clinical visits.

  3. Engaging patients in after-care and ensuring data transparency
    Lastly, some tools provide patients with notes, information, and resources following their appointments. OurNotes is a platform that makes care notes accessible to patients, allowing them to engage with their providers during after-care and express concerns before their next visit. It encourages providers to voice record reflections, which helps them relay insights about patients to other team members while also developing their self-awareness skills. OurNotes also works to mitigate power imbalances through transparency of any data collected during a visit. While a promising development, OurNotes does not target improving interactions during moments of care.

While they have their merits, all these solutions are only piecemeal, standalone tools that imperfectly address a sliver of the patient and provider experience.

We believe a better approach is one making valuable resources less cumbersome for providers to access in real-time, least disruptive to critical face time with patients, and genuinely representative of cultural and individual diversity. This approach includes digital tools and experiences that enhance provider capacity and support them in facilitating more flexible and adaptive patient care. Recognizing that digital products tend to be one-off solutions to complex problems, we see an opportunity to capitalize on their ability to seamlessly integrate with current workflows and software, automate repetitive tasks while offering guidance on those more complex, and customize interactions tailored to individual needs and preferences. At their core, aspirational digital products would enable the practice of cultural humility during patient-provider interactions through experiences that capitalize on its foundational components: fostering cultural understanding through respect, empathy, and critical self-reflection.

We see an opportunity for the development of digital products that afford culturally responsive experiences and focus on the following elements: 


Culturally responsive patient-centered care
Patient-centered care focusing on culture involves treating patients holistically and respecting their unique health needs and desired health outcomes as the driving force behind their healthcare decisions. Digital products prioritizing patient-centered care consider patients’ needs, preferences, and values in the context of their lived experiences. They help facilitate communication between healthcare providers and patients, allowing patients to share their concerns and providers to respond accordingly, enabling patients to engage in and adapt their care plans and collaborate with providers to make more informed decisions. A key but sometimes neglected facet of genuinely patient-centered care involves understanding and appropriately responding to patients’ cultural and individual identity contexts.


Empathy and active listening
Digital products should encourage healthcare providers to engage in more empathetic practices towards their patients, actively listening to them, understanding their perspectives, and validating their emotions and experiences. Providers need tools to help them prepare for cross-cultural patient interactions to elicit relevant information during clinical encounters and respond compassionately. These products would afford a more culturally appropriate and inclusive care experience by prepping the provider with language that respects the patient’s preferences (e.g., preferred name and pronouns) and is non-judgmental.


Respectful and collaborative decision-making
Respectful and collaborative decision-making elevates patient agency to allow for mutual understanding and agreement between patients and providers. Digital products can support patient agency through tools that give patients control over their healthcare decisions, enable them to own and tailor their personal data, help them deeply understand vital medical details concerning their diagnosis and treatment – often missed during care visits – and equip them with the information they need to communicate and collaborate more effectively with their providers on their care plans.


Continuous learning and self-reflection
To learn and be knowledgeable of the many existing cultural and identity backgrounds is a complex and seemingly infinite task. It is essential that providers have the tools to continuously listen and learn from the specific and diverse patient communities they serve. While speaking directly to patients and their families is critical to learning, digital products can provide automated tools that coach providers through moments of cultural misunderstanding to reflect on biases, assumptions, and beliefs about other cultures, traditional practices, and worldviews. These tools should seamlessly integrate into existing provider workflows, making it easier for them to engage in learning during and beyond direct patient interactions.

Closing Thoughts

We believe many benefits will flow from adopting a culturally humble approach to healthcare delivery, especially by implementing appropriate digital technologies to enhance moments of care:

  • Patients can more easily find care that is better aligned with their needs and identities and that makes them feel welcome in the healthcare system

  • Patients will approach care with greater trust, as fear or drop-off due to unexpected clinical activities, tense interactions, and conflicting treatment expectations get reduced

  • Patients will engage more in their healthcare as they feel a greater sense of connection and belonging with their provider and healthcare system

  • Quality of care is improved as providers gain an understanding of diverse patient lifeworlds and are prompted to self-reflect on their own beliefs and practices, ultimately approaching all patients with more empathy

  • A cycle of learning and improvement will be embedded in the healthcare system as providers become more self-aware and reflective, inspiring these attributes in their trainees

  • Patients will experience better outcomes and health disparities will be reduced as patients are more engaged with and better served by the healthcare system

Actualizing a positive future healthcare experience for our rapidly diversifying population requires building cultural humility into the fabric of healthcare training and practice. Explore one way we envision doing this: Traverse — a vision for culturally responsive healthcare.

Illustration by Marine Au Yeung

Recently, an article in Fast Company announced that corporate America broke up with design, citing that companies that were once green with Apple-envy and hungry for transformation are now jaded after realizing that “design is rarely the thing that determines whether something succeeds in the market.” Add in the recent corporate silence on the topic of design, and rumors abound – apparently design and corporate America are in trouble.

But does this really mean the relationship has fizzled out? Or could it instead be a time for reflection, re-evaluation, and evolution? Three of our Design and Strategy Directors respond to these questions and more –

Corporate America didn’t break up with design. It broke up with the mythological promise design firms sold them.

Jeff Turkelson, Strategy Director

‘Design will allow you to disrupt, transform, create and lead industries. Just do some research, run some workshops with sticky notes, prototype, and you’ll be onto something that no one else could dream of!’

These are the false promises that corporate America has broken up with. But there were always dissenters; Don Norman himself said it:

“Design research is great when it comes to improving existing product categories but essentially useless when it comes to new, innovative breakthroughs.”

Flash forward to today, and the hype around being a design-led organization is pretty much dead. But corporate America has embraced design in a more traditional sense—significantly expanding internal design teams—not to think of radical breakthroughs but to create good user experiences that are usable and delightful.

It’s in many ways a reversion to the decades-old paradigm of user-centered design (though often twisted by profit incentives, e.g. designing to maximize engagement or conversion rates rather than truly serve the user).

However, the spirit of human-centered design (HCD) is not lost. It has evolved. While the idea of being a “design-led” organization has lost its allure and most in-house practitioners are focused on traditional craft, design’s value to business was always secondary to the value designers sought to bring to humans. And for perhaps largely external reasons, many corporations have begun to embrace HCD’s value-based themes: designing for accessibility, inclusion, equity, etc.

Here we see design intersect with the responsible technology movement: designers, technologists, activists, and others seeking to create positive outcomes, or at least mitigate harms. Designers don’t get to claim ownership of this broader movement, but they do play an important role in its evolution.

What goes in comes out: amplify design’s value by doing these four things.

Chad Hall, Design Director

Companies green with “Apple envy” may have invested billions of dollars in design, but few did it well, and most did it in a way vastly different from the design-centric companies they looked up to. Here are four easy-to-overlook things they could do to get more value from design.

1. Understand the complexity of problem spaces

“Simplicity and complexity need each other,” as John Maeda put it – in other words, there can be no simplicity without complexity. Designers work hard to tame and conceal the complexity that exists in the products, services, strategies, and processes we work on. Understanding and allowing time and space to work through these complexities is paramount. If designers or companies don’t understand the complexities of what they work on, and don’t invest the time and resources to make sense of them, they’ll never be able to simplify anything into a ‘magical solution.’

2. Foster seamless interdisciplinary collaboration

Design works best when not in a vacuum. Too often, I see situations that prevent seamless experiences: a product team separated from key decision makers; a care team that doesn’t have good insight into its patients’ experience; an education board far removed from the students and communities it aims to impact.

Seamless customer experiences are a product of seamless interdisciplinary collaboration. Working alongside an interdisciplinary team with deep understanding of the industries, domains, processes, and organizations at hand, designers become experts not only in crafting forms, but in facilitating the processes that bring them to life.

3. Align power and incentives with desired outcomes

If companies want transformation, they need to examine their internal power and incentive structures. It’s not enough to have a vision. Fragmented teams and inequitable power in decision making yield products with poor outcomes.

To make seamless experiences, the customer experience must come before internal organizational divisions, product categories, and, in some cases, even earnings reports. The organization and culture must support this collaboration, allowing, motivating, and empowering employees to make decisions that work toward the shared goal of a seamless customer experience. Instead, we often see internal competition, outcomes tailored to please a HiPPO (Highest Paid Person’s Opinion), or misaligned performance measurements that incentivize personal decisions over preferable product outcomes. Most organizations might have the vision, but they don’t align the distribution of power and incentives to get the outcomes they seek.


4. Be curious about the unknown

Many companies have implemented design in a risk-averse way. Expecting transformation without accepting a level of risk leads to disappointment. To curb risk, we’ve turned a designer’s intuition and mastery of skills into a scalable, repeatable process built upon the scientific method. Design Thinking and Human-Centered Design have proliferated. These are great at identifying existing pain points and undiscovered needs, but they lend themselves to refining existing solutions through incremental improvements. This has merit and is needed in products today. But it’s not going to rock the boat.

To make large leaps, we must allow imagination and intuition back into the process. Designers, through years of mastery, are primed to make unexpected connections that can lead to new innovations. But this process is nearly impossible to evaluate and scale. It pushes us into the unknown future and asks us to rely somewhat on intuition. In our data-driven world, this is uncomfortable! It’s an inherent risk. But it’s a risk that could lead to a big win.

Design is expanding and evolving — we’re counting on corporate America to do the same

Joan Stoeckle, Design Director

Design is baked into countless experiences we encounter on a daily basis – ordering ahead for curbside pickup, communicating with our healthcare providers through patient portals, and of course in the devices we use for hours each day. We’ve become so accustomed to frictionless, carefully-designed experiences that the occasional encounter with an outdated tool can feel downright grating by contrast. Were corporations truly to break up with design, as customers and consumers we would definitely notice. Perhaps their common understanding of design was too narrow from the start.

Although design was historically associated with the creation of beautiful objects and innovative products, today we also interact with invisible forms of design in the services and systems we use and are a part of. Not only was our home’s smart speaker designed, but so were the AI behind it and the specific phrases used to communicate with it, and we are as much components of that system as the speaker itself. The expansion of design into different contexts and ways of interacting with people and systems certainly represents new and exciting frontiers for innovation, but many designers and organizations are also exploring novel and alternative approaches to the processes and practice of design – not just its outputs.

User-centered design established a baseline of orienting around the needs of end-users. Human-centered design helps foster a more holistic view of people as more than just ‘users’ of a product, anchored in understanding motivations, behaviors, values, and more. Elements of each are central to the design thinking process that was adopted by many companies. But there are concerns that focusing only on the needs of target users results in a myopic view of challenges and opportunities and can lead to unintended consequences (e.g., worker injuries in warehouses struggling to meet consumer demand for rapid shipping). In response, designers and organizations are questioning and reframing the process of design to foster equity and inclusion, design for diverse and complex needs, and create more sustainable futures.

The practice of design is expanding and evolving in response to social, economic, and environmental realities. Will corporations also take informed action by evolving how and with whom they create products, services, and systems? Or will many of them, as the article suggests, walk away from a narrow and outdated notion of design?

At Artefact we continue to evolve our methods in support of our mission to create better futures: taking a more holistic view through stakeholder mapping, establishing best practices for trauma-informed design research, reflecting diversity of needs and mindsets through persona spectra, guiding participatory and co-design processes, reflecting on possible unintended consequences, and more.

Partnership Highlight

This year, Artefact had two opportunities to partner with mission-driven organizations to understand young people’s relationship with digital technology and how they can support their efforts to shape a better future. In celebration of those partnerships with Omidyar Network and Hopelab, we highlight our approach to centering young people’s perspectives as we implemented our research and structured our recommendations.

“Our partnership with Artefact has helped us clarify how we can take action and support youth who are creating opportunities for inclusion and well-being in the next digital era. We appreciate the team’s depth of research, and their responsiveness to emergent opportunities in the work.”

Young people and the hope for a new digital future

Youth are growing up in a vast digital system with a level of complexity that we haven’t seen before. Many features on today’s major tech platforms keep youth online by design, depleting their energy and consuming their attention. Combined with the short life cycle of pop culture and the fear of missing out, young people – especially Gen Z – are aggressively pulled online, affecting their productivity, mental health, and overall wellness. These effects will likely persist with emerging technologies such as the metaverse and web3. Still, young people are capitalizing on this ‘new tech’ to have a role in shaping a more accountable, equitable, and inclusive internet for themselves and future generations.

An inclusive, systems approach to understanding youth beliefs and behaviors

Omidyar Network and Hopelab each needed actionable insights to develop a holistic strategy and prioritize actions aimed at influencing and activating technology as a force for good in supporting young people. However, the focus of each organization’s effort was slightly different. Omidyar Network focused on identifying the core issues that animate digital native activism and organizing as it relates to technology. These issues ranged from digital rights to social justice to tech worker activism. In contrast, Hopelab concentrated on understanding how emerging technologies can uplift or detract from youth mental health and well-being.

Throughout each project, we took inspiration from well-established fields such as inclusive design and human-centered design, incorporating equitable methods affording continuous participation for internal and external stakeholders.

Participatory methods to engage internal and external stakeholders included:

  • Using simple tools like Dovetail to convey research insights and allow stakeholders to view secondary research and highlight reels of key topics discussed during 1:1 interviews
  • Hosting multiple workshops to review research insights, co-create opportunity areas, and develop critical actions
  • Hosting office hours for youth and key internal stakeholders to give feedback, check assumptions, and develop actionable priorities
  • Sharing research insights and project outcomes with internal and external stakeholders to keep participants informed, give transparency to our processes, and solicit feedback to ensure data points were representative of their voices

In addition, we took a systems approach in selecting research participants to holistically understand how youth are affected by the internet and what they are doing to take control of their future. This approach helped us understand the nuances and complexity of this problem space through various perspectives.

An overview of who we spoke to:

  • BIPOC + Youth Digital Creators
  • Digital Rights Youth Activists
  • Web3 Designers
  • Mental Health Product Innovators
  • Psychology + Digital Technology Academics
  • Metaverse Academics
  • Feminist Technologists
  • Data & Security Researchers
  • Youth Mental Health Experts

Engaging diverse youth perspectives

Whether engaging digital natives to comment on our preliminary research insights or inviting them to attend key workshops and presentations, we continuously sought to ensure youth voices remained centered. Why? Because of their diverse lived experiences growing up digital and their drive to design, create, and advocate for what they want to see in the world.

Our approach to centering young people’s lived experiences online included the following methods:

  • Conducting outreach on popular web2 platforms (e.g., Twitter, Instagram, TikTok) where digital natives are active and currently participating in conversations around technology
  • Bringing in youth advisors as co-researchers to help shape insights and outcomes
  • Creating video highlight reels with direct quotes from youth participants to better represent their words and attitudes in our research
  • Developing youth-centered design principles taken directly from one-on-one and group discussions to guide future action
  • Developing youth-centered areas of focus that steered strategies toward the issues that matter most to Gen Z

Supporting young people in their pursuit of better digital futures

The landscape of digital experiences and emerging technology is rapidly changing, giving youth a chance to shape the development of these technologies before they become entrenched. And young people are activated, ready, and willing to be the catalysts for change. They need a platform where they can be heard and supported, one that amplifies their needs and values. We are excited about Omidyar Network and Hopelab’s work to provide young people with this platform and support. Putting youth at the center is critical if we want the internet of tomorrow to be a place where future generations can thrive.

Want to learn more?

To learn more about the Omidyar Network project, check out the case study: A Youth-Led Agenda for the Responsible Tech Movement.

To learn about the insights and outcomes from the Hopelab project, attend a talk by Neeti Sanyal, Artefact’s Executive Creative Director, at the HLTH 2022 Conference Gen Z & Web3: How a Mental Health Crisis among Digital Natives is Shaping Our Virtual Future. This panel discussion is scheduled for Tuesday, November 15th, 4:20 PM—4:55 PM PST.

Image source: Fast Company

Fast Company honors Artefact with three Innovation by Design awards

A version of this press release first appeared on PR Newswire


Fast Company has honored Artefact, a design and strategy firm with a mission to create better futures, as a winner and honorable mention across three categories of the 2022 Innovation by Design Awards: Rapid Response, Healthcare, and Experimental.

Fast Company’s October 2022 issue celebrates visionary design that solves the most crucial problems of today and anticipates the pressing issues of tomorrow. Celebrating more than a decade of Innovation by Design, this year’s honorees feature a range of finalists from Fortune 500 to small, impactful firms. Entries are judged on the key ingredients of innovation: functionality, originality, beauty, sustainability, user insight, cultural impact, and business impact.

“We are honored to have our work in emergency preparedness, healthcare, and retail recognized. We believe that thinking about unintended consequences and all stakeholders is critical to bringing positive change in the world. Artefact is proud to work with individuals, communities, and organizations to create a better future, by design.”

Sabrina Boler, COO of Artefact


Artefact was recognized across three categories for the following work —

Navis: Emergency preparedness

Winner for Rapid Response

Navis is a conceptual emergency preparedness system that guides people in planning for, and responding to, crisis scenarios. The concept uses conversational UI and augmented reality to help people create a personalized emergency plan on their preferred devices. A durable home hub helps people stay connected during an emergency and translate plans into action.


AdaptDx Pro: Diagnosing macular degeneration

Honorable Mention for Healthcare

Artefact partnered with MacuLogix to help create AdaptDx Pro, the first portable, wearable, AI-integrated ophthalmic screening system for age-related macular degeneration on the market. The AdaptDx Pro overcame the challenges of traditional ophthalmic devices by rethinking the patient and technician experience, leading to earlier, more accurate diagnosis and disease management. The AdaptDx Pro first shipped in June 2020 and over the past several years has performed over 1 million tests across 1,200 eyecare practices. Today, AdaptDx Pro is owned by LumiThera.


Future of shopping and food retail

Honorable Mention for Experimental

We imagined three ways that emerging technology might help customers shop with more confidence during the pandemic, while ensuring businesses efficiently manage guest volume, protect employees, and sustain revenue by guiding safe customer behavior, forecasting risk, and bringing the best of in-store shopping online.






About Artefact
Artefact is a visionary design and strategy firm with a mission to create better futures. By partnering with leaders and approaching the toughest challenges with equal parts creativity and pragmatism, we deliver lasting change. Headquartered in Seattle, our award-winning team includes researchers, strategists, and designers with a passion for excellence and impact. Connect with us today.

About Fast Company

Fast Company is the only media brand fully dedicated to the vital intersection of business, innovation, and design, engaging the most influential leaders, companies, and thinkers on the future of business. Winners, finalists, and honorable mentions of Fast Company’s sought-after Innovation by Design Awards can be found online and in the October issue of the print magazine, on newsstands September 27, 2022.