Just as the early days of Human-Computer Interaction emphasized thoughtful UX to make personal computing accessible and intuitive, we now need the same intentionality in designing AI experiences.

AI model development and UX design must go hand in hand—because building powerful systems isn’t enough if people can’t understand, trust, or effectively use them.


Anyone who has conversed with ChatGPT or created an image with Midjourney might assume that a product leveraging generative AI doesn’t require much UX design—after all, these AI systems seem to understand us and follow our commands naturally. However, UX design remains just as crucial for AI-enabled products as it is for non-AI ones. A well-designed user experience ensures the AI product addresses real needs, instills trust, and enables intuitive and desirable interactions. Without thoughtful UX, even the most advanced AI can be misdirected, ineffective, or hard to use, preventing people from deriving real value. For companies developing these products, poor UX can be costly—or worse, harmful—as demonstrated by problematic AI systems in past incidents involving self-driving cars, hiring, and mental health, the kinds of failures that Arvind Narayanan, co-author of “AI Snake Oil,” has written about.

Here are four common pitfalls that innovators looking to leverage AI may encounter, and how UX design can help avoid them.

Just as users need to trust products to use and adopt them, they must also trust the AI that powers those products. Rachel Botsman, author of “Who Can You Trust?”, presents a framework describing four traits of trustworthiness that apply to people, companies, and technologies. Two traits relate to capabilities (competence and reliability), the “how”; two relate to intention (empathy and integrity), the “why.”

We tend to trust others when they demonstrate two key qualities: competence—the ability to do what they promise effectively—and reliability—consistently delivering expected results. With humans, it’s relatively easy to assess these traits through conversation and interaction. But what happens when a system is 3x, or even 10x, more capable than any person we’ve encountered—and we have no real insight into how it works?

That’s the fundamental challenge with AI. As a black-box technology, its growing power makes it harder—not easier—to evaluate. While it’s relatively simple to notice improvements in AI-generated text, assessing the accuracy and reliability of AI-driven information or analysis—especially in unfamiliar domains—is much more difficult.

Here, UX design plays a role in making AI behavior more understandable, whether the system is recommending content, making predictions, or automating complex tasks. By designing for transparency, UX can foster trust through strategies like surfacing explanations for AI decisions (such as data provenance) and showing confidence indicators that help users gauge how certain the system is in its outputs. These are what Rachel Botsman terms “trust signals”: the small, often unconscious clues we use to assess the trustworthiness of someone or something. To see some of these ideas in action, check out our piece with Fast Company.
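To make the idea of trust signals more concrete, here is a minimal sketch, in Python, of how an AI response might carry the confidence and provenance metadata that an interface could surface alongside the answer. The AIResponse class, its field names, and the thresholds are illustrative assumptions, not any particular product’s API.

```python
from dataclasses import dataclass, field

@dataclass
class AIResponse:
    """Hypothetical container pairing an AI answer with its trust signals."""
    text: str
    confidence: float                                   # model certainty, 0.0-1.0
    sources: list[str] = field(default_factory=list)    # data provenance

def render_with_trust_signals(response: AIResponse) -> str:
    """Format an answer so users see confidence and provenance, not just raw text."""
    if response.confidence >= 0.8:
        level = "High"
    elif response.confidence >= 0.5:
        level = "Medium"
    else:
        level = "Low"
    cited = "; ".join(response.sources) or "No sources available"
    return (
        f"{response.text}\n"
        f"Confidence: {level} ({response.confidence:.0%})\n"
        f"Based on: {cited}"
    )

print(render_with_trust_signals(
    AIResponse("Your claim is likely covered.", 0.62, ["Policy document, section 4.2"])
))
```

The specific thresholds matter less than the pattern: the answer, its sources, and the system’s certainty travel together, so the interface can always show them.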

Referencing Botsman’s framework again, UX design can also help AI-enabled products act with integrity (being fair and ethical) and with empathy (understanding and aligning with human interests and values). By incorporating diverse perspectives, needs, and values into the design process, UX helps create more inclusive and responsible AI systems. And frequent, extensive user testing enables designers to identify and address biases, unintended consequences, and potential harms before they impact users. This helps correct what the AI field calls the alignment problem.

In addition, thoughtful interface design can provide clear affordances, feedback loops, and user controls. Examples include letting users know when a response touches on a topic widely considered controversial, having users rate an AI response to train it to improve over time (e.g., becoming more personalized or accurate), and building in moments where humans can override the AI.
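As a rough sketch of two of those affordances, the hypothetical Python below logs a user’s rating of a response (so it can inform later improvement) and holds a consequential AI decision until a person approves it. The function names and approval flow are assumptions for illustration, not a reference implementation.

```python
# Illustrative only: a rating hook for the feedback loop and a human-override
# gate before the AI acts on a consequential decision.

feedback_log: list[dict] = []

def record_rating(response_id: str, rating: int, comment: str = "") -> None:
    """Store a user's 1-5 rating so it can inform future model improvements."""
    feedback_log.append({"response_id": response_id, "rating": rating, "comment": comment})

def apply_ai_decision(decision: str, requires_human_approval: bool = True) -> str:
    """Only carry out a consequential AI decision once a person signs off."""
    if requires_human_approval:
        answer = input(f"AI suggests: {decision!r}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "Decision held for human review."
    return f"Executed: {decision}"

record_rating("resp-001", rating=4, comment="Helpful, but a bit verbose.")
print(apply_ai_decision("Flag this transaction for manual fraud review"))
```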

When we think of generative AI, we often default to imagining a chatbot. That’s no surprise—large language models are built to generate text, making chat a natural starting point. But simply slapping a chatbot onto an existing product rarely creates real value for users. It’s like applying tiger balm to every kind of pain—it might help sometimes, but it’s far from a universal cure. We saw this clearly with early banking chatbots; anyone who used them remembers how frustrating they were.

True innovation in AI-powered experiences requires more than just plugging in chat—it demands a deep understanding of existing workflows, user mental models, and interaction patterns. That’s where skilled UX designers shine. While text-based interfaces dominate today, the real future lies in well-crafted, multimodal experiences. For designers, this shift brings both exciting opportunities and complex challenges: choosing the right mode of interaction for the task at hand becomes more critical—and more nuanced—than ever.

For instance, natural language excels at complex queries like “Show me customers who spent less than $50 between 11am–2pm last month.” However, it falls short for spatial tasks—imagine trying to perfectly position an element through verbal commands: “Move it left… no, too far… right a bit…” These limitations become especially apparent in generative AI, where initial prompts like “Draw a kid in a sandbox” work well, but precise adjustments (“Make the sandbox 10% smaller”) become tedious.

For AI products to succeed, UX designers must determine the optimal level of user control over model inputs and queries, design clear ways for users to understand and modify AI-generated outputs, and create systems for users to effectively utilize those outputs. All of this must be wrapped in an interface that feels natural and easy to use.

Great products evoke emotions. From connected cars to smartwatches to payment apps, thoughtful design can surprise and delight users while making them feel understood and valued—creating a deeper emotional connection with both product and brand. AI-enabled products are no exception. Talkie and Character.ai are two bot creation platforms that successfully keep users engaged and interested. Similarly, Waze builds community through crowdsourced traffic and hazard reports and adds an element of play through customizable voice options and car icons.

What sets AI apart, however, is its seemingly superhuman capabilities, which fundamentally shift how people perceive themselves when using these technologies. It’s like a funhouse mirror that can perturb your sense of self. This phenomenon is explained in an HBR article titled “How AI affects our sense of self.” While productivity and creator apps can trigger job security concerns, using AI for writing essays or applying for a job often leaves users feeling like they’re “cheating” or inadequate. Companies must address these psychological barriers through strategic product design that acknowledges and accounts for these complex emotions. Again, UX design can play a role. To mitigate job displacement fears, design can emphasize human oversight, ensuring users remain central to meaningful tasks and decisions. To help users feel more comfortable with AI, products can incorporate teachable moments explaining why the AI is offering certain suggestions. This helps users enhance their skills rather than becoming overly dependent on AI.

While AI systems continue to advance at a remarkable pace, their success ultimately hinges on thoughtful UX design that puts human needs first. By focusing on generating real value, building trust, aligning with existing workflows, and considering emotional needs, designers can create AI-enabled products that are not just powerful, but truly impactful. The partnership between AI capability and human-centered design can deeply transform the human experience. Companies that bring design into their AI development, with intention and investment, will be the ones to create products that genuinely improve people’s lives.


Imagine you’re on a blind date. Meeting someone new is a relatively rare experience, very different from talking with a friend. You don’t know what information your date knows about you, and you’re trying to figure each other out. Maybe you dial up certain aspects of your personality or share a personal story to build a deeper connection. Over time, you build trust through consistent conversation.

Now imagine you’re chatting with a new social AI chatbot. A similar back-and-forth of getting to know each other might occur. You might want to know the social chatbot’s backstory, its limitations, its preferences, or its values. Social AI chatbots are increasingly human-like, with advanced speech capabilities, interactive digital avatars, and highly adaptable personalities that can carry on conversations that feel like talking with another person. People are chatting with AIs acting as their therapist, friend, life coach, romantic partner, or even spiritual oracle. Given the deeply personal roles that emerging social AI may take on in our lives, trust in such systems (or in other humans, for that matter) should be earned through experience, not freely given.

However, increasingly indistinguishable interactions make forming human-like relationships with AI a blurry endeavor. Like a blind date, you may hit it off at first only to find your date’s behavior shifting as the conversation continues. When a chatbot (or a human) behaves in an inconsistent, opaque, or odd way, it can erode the process of building trust, especially if someone is sharing sensitive and personal information. To address this, social AI product designers can draw on key characteristics of healthy human relationships, such as boundaries, communication, empathy, respect, and mirroring, and apply them to design responsible chatbot experiences.

Boundaries are about establishing clarity and defining capabilities.

People need a clear understanding of a social AI’s content policy, its training data, its capabilities, its limitations, and how best to interact with it in a safe and compliant manner. This is especially important for sensitive uses such as mental healthcare or when the users are children. For example, many flagship LLMs provide disclaimers that responses may be inaccurate. Google requires teens to watch a video educating them about AI and its potential problems before they can use it. Microsoft recently redesigned its Copilot interface to show users a variety of its capabilities through visual tiles that act as starting prompts. As on a blind date, communicating what each party is open to and capable of can foster a better connection.

Communication is about constructive feedback that improves connection.

People can sometimes mess up when engaging with a chatbot. For example, they might use a word or phrase in a prompt that violates a content policy. They might discuss a topic or ask for advice on something that is very personal or taboo. When this happens, AI systems sometimes reject the prompt without a clear explanation, when constructive feedback would be more helpful in teaching people how best to prompt the system in a compliant way. As on a blind date, when you cross a line, a kind piece of feedback can help get the conversation back on track. For example, when discussing topics related to sensitive personal data, Mistral AI provides additional reassurances in its responses as to how it manages users’ data, preemptively putting any concerns at ease.

Empathy is about responding to a user’s emotional needs in the moment.

People can bring all kinds of emotions to conversations with social AI chatbots. Sometimes they are just looking for companionship or a place to vent about their day. Social AI chatbots can respond with empathy, giving people space to reflect by asking more questions, generating personal stories, or suggesting how to modulate mood. For example, an app called Summit, positioned as an AI life coach, can track physical activities related to specific wellness goals that a person has set up. If someone shares that they are in a bad mood due to stress, the AI chatbot will suggest an activity the person previously mentioned had helped them de-stress, such as taking a walk. As on a blind date, a partner’s ability to recall information previously shared and contextualize it with your current emotional expression helps you feel seen and heard.

Respect is about allowing people to be themselves freely.

Inevitably, an individual’s values may misalign with those of AI product designers, but just as on a blind date, each party should be able to show up as themselves without fear of being judged. Similarly, people should be able to express themselves on political, religious, or cultural topics and be received in a respectful way. While a chatbot may not explicitly agree with a person’s statement, it should respond with respectful acknowledgement. For example, the kids-focused AI companion Heeyo will politely acknowledge a child’s prompts related to their family’s political or cultural views but doesn’t offer any specific validation of positions in response. Instead, it avoids sensitive topics by asking the child how they feel about what was just shared.

Mirroring is about active listening and attunement to the user.

Like on a blind date, healthy mirroring behaviors can help forge subconscious social connection rapidly. Mirroring behaviors, such as imitating styles of speech, gestures, or mood, are an effective way to show each other you are listening and well-attuned. For example, if someone is working through a complex life issue with a social chatbot, the AI’s responses might be more inquisitive than prescriptive and it may start to stylize its responses in a way that mirrors the person, such as in a short and humorous or long and emotional manner. Google’s NotebookLM will create an AI-generated podcast with two voices discussing a topic of choice. After the script is generated, it will add in speech disfluencies—filler words like “um” or “like”—to help the conversation between the two generated voices feel more natural. 

Social AI experiences will continue to rapidly advance and further blur the lines between human and synthetic relationships. While AI technology is running at 21st century speeds, our human brains are mostly stuck in the stone age. The fundamental ways that we form connections haven’t changed as rapidly as our technology. Keeping this in mind, AI product designers can lean on these core relationship characteristics to help people build mutual trust and understanding with these complex systems.

Artefact’s staff reflects on AI’s potential impact on individuals and society by answering questions prompted by the Tarot Cards of Tech. Each section contains videos that explore a tarot card and provide our perspectives and provocations.

The Tarot Cards of Tech was created to help innovators think deeply about scenarios around scale, disruption, usage, equity, and access. With the recent developments and democratization of AI, we revisited these cards to imagine a better tech future, one that accounts for unintended consequences and the values we hold as a society.

Cultural implications for youth

Jeff Turkelson, Senior Strategy Director

Transcript: I love that this card starts to get at, maybe some of the less quantifiable, but still really important facets of life. So, when it comes to something like self-driving cars, which generative AI is actually really helping to enable, of course people think about how AI can replace the professional driver, or how AI is generally coming for all of our jobs.

But there are so many other interesting implications. So for example, if you no longer need to drive your car, would you then ever need to get a license to drive? And if we do away with needing a license to drive, then what does that mean for that moment in time where you turn 16 years old and you get newfound independence with your driver’s license? If that disappears, that could really change what it means to become a teenager and a young adult, etc. So what other events or rituals would AI disrupt for young adults as they grow older?

Value and vision led design

Piyali Sircar, Lead Researcher

Transcript: This invitation to think about the impact of incorporating gen AI into our products is really an opportunity to think about design differently. We should be asking ourselves, “What is our vision for the futures we could build?” and once we define those, the next question is, “Does gen AI have a role to play in enabling these futures?” Because the answer may be “no”, and that should be okay if we’re truly invested in our vision. And if the answer is “yes”, then we need to try to anticipate the cultural implications of introducing gen AI into our domain space. For example, “How will this shift the way people spend time? How will it change the way they interact with one another? What do they care about? What does this product say about society as a whole?” Just a few questions to think about.

Introducing positive friction

Chad Hall, Senior Design Director

Transcript: The ‘Big Bad Wolf’ card reminds me to consider not only which AI product features are vulnerable to manipulation, but also who the bad actors might be. Those bad actors could be a user; it could be us, our teams, or even future teams. So, for example, while your product might not misuse data now, a future feature could exploit it.

A recent example that comes to mind is two students who added facial recognition software to AI glasses with a built-in camera. They were able to easily dox the identities of just about anyone they came across in their daily life.

I think product teams need to introduce just enough positive friction in their workflows to pause and consider impacts. Generative AI is only going to ask for more access to our personal data to help with more complex tasks. So the reality is, if nobody tries to ask the question, the questions are never going to get asked.

Minimizing harm in AI

Neeti Sanyal, VP Creative

Transcript: I think it’s important to ask whether AI could be a bad actor. Even when you’re not trying to produce misinformation with generative AI, in some ways it is inherently doing that. I am concerned about the potential for generative AI to cause harm in fields that have low tolerance for risk, things like health care or finance. An example that comes to mind is a conversational bot that gives the wrong mental health advice to someone who is experiencing a moment of crisis.

One exciting way that companies are addressing this is by building a tech stack that uses both generative and traditional AI. And it’s the combination of these techniques that helps minimize the chance of hallucinations and creates outputs that are much more predictable.

If we are thoughtful in how the AI is constructed in the first place, we can help prevent AI from being the bad actor.

Building job security

Rachael Cicero, Associate Design Director

Transcript: One thing we keep hearing about is the disappearing workforce, but I think we’re often overlooking the fact that humans will continue to exist in and contribute to society. Instead, I’d like to see the conversation shift from the disappearing workforce to the unique contributions of human and AI collaboration. Consider civic technology, where generative AI can be used for things like supporting the process of unemployment applications. AI can help with document recognition, which can really reduce the load on human staff and also accelerate response time for applicants. To me, that collaboration isn’t about replacing jobs but really about enhancing them.

The key to that is investing in reskilling. By including the perspectives of the people affected in the design of AI systems, we can better understand the tasks they want automated. The goal is to create a future where AI and humans can work together, enhancing each other’s strengths and ensuring that everyone has an opportunity to thrive in a rapidly evolving job market.

Transforming tradition

Max West, Principal Designer

Transcript: This card reminds me of how cable TV technology reshaped media jobs. Remember Video Jockeys on the popular 90s MTV show, Total Request Live? VJs had evolved from selecting and remixing music, like their traditional radio counterparts, to focusing on engaging with crowds, talking to celebrities, and orchestrating pop cultural moments. 

Now take the cable TV example and apply it to AI transforming an industry like education. Teachers’ jobs could similarly shift to a more social focus. An AI-powered app could tailor a math or science lesson to a student’s unique cognitive abilities, while the teacher can focus more on the physical, interpersonal, and social aspects of learning. In the same way that VJs would provide crowd-pleasing moments between music videos, educators might find themselves in a similar “hosting” role for the classroom.


So it’s less about what disappears and more about what can transform. While roles may change with AI, that change could create time and space for richer, more personal experiences among groups.

Exploring how generative AI could superpower research outputs to foster greater empathy and engagement

With the release of GPT-4 and the growing interest in generative AI tools such as DALL-E 2, Midjourney, and more, there is no dearth of people writing about and commenting on the potential positive and negative impacts of AI, and how it might change work in general and design work specifically. As we sought to familiarize ourselves with many of these tools and technologies, we immediately recognized the potential risks and dangers, but also the prospects for generative AI to augment how we do research and communicate findings and insights.

Looking at some of the typical methods and deliverables of the human-centered design process, we not only saw practical opportunities for AI to support our work in the nearer term, but also some more experimental, less obvious (and, in some cases, potentially problematic) opportunities further out in the future.

More Obvious

Summarizing existing academic and industry research

Identifying subject matter experts and distilling their knowledge and opinions

Supporting researchers with AI notetakers to expedite analysis and synthesis

Supporting participants in the co-design process with generative AI tools to help them better express, articulate, and illustrate their ideas

Less Obvious

Leveraging bots as surrogate researchers for conducting highly structured user interviews on a large scale

Replacing human research subjects entirely for more cursory, foundational, general-population research

Creating more engaging, sticky, and memorable outputs and deliverables, for example, a life-like interactive persona

Now, while each of the above use cases merits its own deep dive, in this article we want to focus on how advances in AI could potentially transform one common, well-established output of HCD research: the persona.

Breathing new life into an old standard

A persona is a fictional, yet realistic, description of a typical or target user of a product. It’s an archetype based on a synthesis of research with real humans that summarizes and describes their needs, concerns, goals, behaviors, and other relevant background information.

Personas are meant to foster empathy for the users for whom we design and develop products and services. They are meant to help designers, developers, planners, strategists, copywriters, marketers, and other stakeholders build greater understanding and make better decisions grounded in research.

But personas tend to be flat, static, and reductive—often taking the form of posters or slide decks and highly susceptible to getting lost and forgotten on shelves, hard drives, or in the cloud. Is that the best we can do? Why aren’t these very common research outputs of the human-centered design process, well, a little more “alive” and engaging?

Peering into a possible future with “live personas”

Imagine a persona “bot” that not only conveys critical information about user goals, needs, behaviors, and demographics, but also has an image, likeness, voice, and personality. What if those persona posters on the wall could talk? What if all the various members and stakeholders of product, service, and solution teams could interact with these archetypal users, and in doing so, deepen their understanding of and empathy for them and their needs?

In that spirit, we decided to use currently available, off-the-shelf, mostly or completely free AI tools to see if we could re-imagine the persona into something more personal, dynamic, and interactive—or, what we’ll call for now, a “live persona.” What follows is the output of our experiments.

As you’ll see in the video below, we created two high school student personas, abstracted and generalized from research conducted in the postsecondary education space. One is more confident and proactive; the other more anxious and passive.

Now, without further ado, meet María and Malik:

Chatting with María and Malik, two “live personas”

Looking a bit closer under the hood

Each of our live personas began as, essentially, a chatbot. We looked at tools like Character.ai and Inworld, and ultimately built María and Malik in the latter. Inworld is intended to be a development platform for game characters, but many of the ideas and capabilities in it are intriguing in the context of personas, like adjustable personality and mood attributes, personal and common knowledge sets, goals and actions, and scenes. While we did not explore all of those features, we did create two high school student personas representing a couple of “extremes” with regard to thinking about and planning their post-secondary future: a more passive and uncertain María and a more proactive and confident Malik.
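Conceptually, a live persona like Malik can be thought of as a structured character definition that gets compiled into the system prompt of a conversational model. The sketch below illustrates that idea in generic Python; it is not Inworld’s or Character.ai’s actual API, the details in the MALIK dictionary are invented for illustration, and call_llm is a placeholder for whatever chat-completion service a team might use.

```python
# A generic sketch of a "live persona" as a character definition compiled into
# a system prompt. This does not reflect Inworld's or Character.ai's real APIs;
# call_llm stands in for any chat-completion service, and the persona details
# are hypothetical placeholders.

MALIK = {
    "name": "Malik",
    "role": "high school senior, proactive and confident about post-secondary planning",
    "goals": ["Earn a scholarship", "Study engineering"],
    "traits": {"openness": "high", "anxiety": "low"},
    "knowledge": ["Has visited two campuses", "Works part-time on weekends"],
    "guardrails": ["Remind users you are a fictional research persona"],
}

def persona_system_prompt(persona: dict) -> str:
    """Flatten the persona definition into instructions for the chat model."""
    return (
        f"You are {persona['name']}, a {persona['role']}. "
        f"Your goals: {', '.join(persona['goals'])}. "
        f"Background: {'; '.join(persona['knowledge'])}. "
        f"Always follow these rules: {'; '.join(persona['guardrails'])}."
    )

def call_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for a real chat-completion call."""
    return f"(model reply to {user_message!r}, conditioned on the persona prompt)"

print(call_llm(persona_system_prompt(MALIK), "How are you feeling about college applications?"))
```

The important design choice is that everything in the character definition maps back to research findings, which is what keeps the persona an artifact of the study rather than a free-wheeling chatbot.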

Here’s a peek at how we created Malik from scratch:

Making Malik, a “live persona”

Interacting with María and Malik, it was immediately evident how these two archetypes were similar and different. But they still felt a tad cartoonish and robotic. So, we took some steps to improve progressively on their appearance, voices, and expressiveness.

Here’s a peek at how we made María progressively more realistic by combining several different generative AI and other tools:

Making María, a “live persona,” progressively more realistic

Eyeing the future cautiously

The gaming industry is already leading in the development of AI-powered characters, so it certainly seems logical to consider applying many of those precedents, principles, tools, and techniques to aspects of our own work in the broader design of solutions, experiences, and services. Our experimentation with several generative AI tools available today shows that it is indeed possible to create relatively lifelike and engaging interactive personas—though perhaps not entirely efficiently (yet). And, in fact, we might be able to do more than just create individual personas to chat with; we could create scenes or even metaverse environments containing multiple live personas that interact with each other and then observe how those interactions play out. In this scenario, our research might inform the design of a specific service or experience (e.g., a patient-provider interaction or a retail experience). Building AI-powered personas and running “simulations” with them could potentially help design teams prototype a new or enhanced experience.

But, while it’s fun and easy to imagine more animated, captivating research and design outputs utilizing generative AI, it’s important to pause and appreciate the numerous inherent risks and potential unintended consequences of AI—practical, ethical, and otherwise. Here are just a few that come to mind:

  • Algorithmically-generated outputs could perpetuate biases and stereotypes because AIs are only as good as the data they are trained on.
  • AIs are known to have hallucinations, in which they may respond over-confidently in a way that doesn’t seem justified or aligned with their training data—or, in our case, with the definitions, descriptions, and parameters we configured for an AI-powered persona. Those hallucinations, in turn, could influence someone to make a product development decision that might unintentionally cause harm or a disservice.
  • AIs could be designed to continuously learn and evolve over time, taking in all previous conversations and potentially steering users towards the answers they think they’d want rather than reflecting the data they were originally trained on. This would negate the purpose of the outputs and could result in poor product development decisions.
  • People could develop a deep sense of connection and emotional attachment to AIs that look, sound, and feel humanlike—in fact, they already have. It’s an important first principle that AIs be transparent and proactively communicate that they are AIs, but when the underlying models become more and more truthful and they are embodied in more realistic and charismatic ways, then it becomes more probable that users might develop trust and affinity towards them. Imagine how much more potentially serious a hallucination becomes, even if a bot states upfront that it is fictitious and powered by AI!

Finally, do we even really want design personas that have so much to say?! Leveraging generative AI in any of these ways, without thoughtful deliberation, could ultimately lead us to over-index on attraction and engagement with the artifact at the expense of its primary purpose. Even if we could “train” live personas to accurately reflect the core ideas and insights that are germane to designing user-centered products and services, would giving them the gift of gab just end up muddling the message?

In short, designing live personas would have to consider these consequences very carefully. Guardrails might be needed, such as limiting the types of questions and requests that a user may ask the persona, making the persona “stateless” so it can’t remember previous conversations, capping the amount of time users can interact with the persona, and having the persona remind the user that they are fictitious at various points during a conversation. Ultimately, personas must remain true to their original intent and accurately represent the research insights and data that bore them.
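To illustrate how a few of those guardrails could be wired around a persona chatbot, here is a hypothetical Python sketch that keeps the session stateless, caps total interaction time, filters certain question types, and periodically reminds the user that the persona is fictitious. The topic list, limits, and function names are assumptions, not a prescribed implementation.

```python
import time

# Illustrative guardrails for a "live persona" chatbot: a question-type filter,
# a session time cap, statelessness, and a recurring "I'm fictional" reminder.
BLOCKED_TOPICS = ("medical advice", "legal advice")   # hypothetical filter list
SESSION_LIMIT_SECONDS = 15 * 60                       # cap time spent with the persona
REMINDER_EVERY_N_TURNS = 5                            # repeat the fictitious-persona notice

def guarded_reply(user_message: str, turn: int, session_start: float, generate_reply) -> str:
    """Wrap a persona's reply function with simple, stateless guardrails."""
    if time.time() - session_start > SESSION_LIMIT_SECONDS:
        return "This session has reached its time limit. Please take a break."
    if any(topic in user_message.lower() for topic in BLOCKED_TOPICS):
        return "I'm a research persona and can't help with that kind of question."
    reply = generate_reply(user_message)   # no conversation history is passed in: stateless
    if turn % REMINDER_EVERY_N_TURNS == 0:
        reply += " (Reminder: I'm a fictional persona generated from research data.)"
    return reply

start = time.time()
print(guarded_reply("What are you worried about after graduation?", turn=5,
                    session_start=start,
                    generate_reply=lambda msg: "Mostly figuring out how to pay for college."))
```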

And even if applying generative AI technologies in these ways becomes sufficiently accessible and cost-effective, it will still behoove us to remember that they are only tools we might use as part of our greater research and design processes, and that we should not be over-swayed by, nor base major decisions on, something a bot says, as charming as it might be.

Though it’s still early days, what do you think about the original premise? Could AI-enabled research outputs that are more interactive and engaging actually foster greater empathy and understanding of target end-users, and could that lead to better strategy, design, development, and implementation decisions? Or will the effort required and the possible risks of AI-enabled research outputs outweigh their possible benefits?

“I can’t see it. I can’t touch it. But I know it exists, and I know I’m part of it. I should care about it.”

AI is top of mind for every leader, executive, and board member. It is impacting how organizations approach their entire business, spanning the functions of strategy, communications, recruitment, partnerships, labor relations, and risk management to name a few. No wonder wrapping your head around AI is such a formidable challenge. Some leaders are forging ahead with integrating AI across their business, but many don’t know where to begin.

Beyond the complexities of its opaque technical construction, AI presents many challenges for both leadership and workers. The deployment of AI is not merely a technical solution that increases efficiency and enhances capabilities, but rather is a complex “hyperobject” that touches every aspect of an organization, impacting workers, customers, and citizens across the globe. While AI has the potential to augment work in magical ways, it also presents significant obstacles to equitable and trustworthy mass adoption, such as a significant AI skills gap among workers, exploitative labor practices used for training algorithms, and fragile consumer sentiment around privacy concerns.

To confront AI, leaders leveraging it in their business need an expanded view of how the technology will impact their organization, their workforce, and the ecosystems in which they operate. With this vital understanding, organizations can build a tailored approach to developing their workforce, building partnerships, innovating their product experiences, and fostering other resilience behaviors that increase agility in an age of disruption. Research shows that organizations with healthy resilience behaviors such as knowledge sharing and bottom-up innovation were less likely than others to go bankrupt following the disruptions of the COVID pandemic.

Hyperobjects and our collective future

Originally coined by professor Timothy Morton, a hyperobject is something so massively distributed in time and space as to transcend localization—an amorphous constellation of converging forces that is out of any one person’s control. Similar to other large, complex phenomena that have a potential to radically transform our world—think climate change or the COVID-19 pandemic—AI is one of the hyperobjects defining our future. 

Hyperobjects are difficult to face from a single point of view because their tendrils are often so broad and interconnected. Dealing with huge, transformative things requires broad perspective and collective solidarity that considers impacts beyond the interests of a single organization. The task of organizational leaders in an age of disruption by hyperobjects—like AI and climate change—is to rebalance the economic and social relationships between a broad group of stakeholders (management, shareholders, workers, customers, regulators, contractors, etc.) impacted by rapid change and displacement.

To help leaders form a comprehensive approach to cultivating resilience, innovation, and equity in the age of AI, we developed a simple framework of five priorities—or the 5 Ps—for building an AI-ready organization: People, Partnerships, Provenance, Product, and Prosperity.

People: Communicate a clear AI vision and support continual learning across teams 

Charting an AI future for any organization begins with its people. Leaders need to identify the potential areas where AI can enhance operations and augment human capabilities. This starts by identifying low-risk opportunities—such as writing assistance or behavioral nudges—to experiment with as your AI capabilities mature. As new roles emerge, organizations must prioritize continuous learning and development programs to upskill and reskill employees, equipping them with the AI literacy needed to adapt to the changing landscape.

Despite all the media fanfare, usage of dedicated AI products like ChatGPT is still fairly limited, mostly among Millennials and Gen Z, with only 1 in 3 people familiar with dedicated AI tools. Concerns about the technology are real and can affect morale: almost half of workers fear losing their jobs to AI. Organizations need open communication and transparency about their AI adoption plans and the potential impacts on the workforce so they can mitigate social anxiety and address fears or resistance to automation. Fostering a culture of continuous innovation can also encourage employees to embrace AI as an opportunity for growth rather than a threat to job security.

Additionally, team structures should be optimized for agility, cross-functionality, and embedded AI expertise. Rather than treating AI as a separate function, how might you develop AI expertise closer to the day-to-day work happening in teams? This could include things like promoting data literacy amongst teams to better understand AI insights, developing centers of excellence to provide training resources, recruiting AI experts, or establishing accessible feedback mechanisms to improve AI model performance.

Partnerships: Leverage strategic partnerships to reduce risks and expand capabilities

According to Google Brain co-founder Andrew Ng, “The beauty of AI partnerships lies in their ability to bring together diverse perspectives and expertise to solve complex problems.” In the rapidly evolving landscape of AI technologies, no single organization can possess all the expertise and resources required to stay at the forefront of innovation. Collaborating with external partners, such as AI platform providers, research institutions, product design experts, and training/support firms, can help address capability gaps and speed up time to market. 

Transformational technology investments requiring large capital expenditures and retooling can be a barrier to organizations adopting new methods of production. However, partnerships offer opportunities for risk and cost sharing, reducing the initial burdens of AI implementation. Working with partners can also enhance an organization’s ability to scale and expand into new markets. Examples include Google’s partnership with Mayo Clinic on AI in healthcare, as well as Siemens’ partnerships with IBM and Microsoft focused on AI in industrial manufacturing.

Investing in both informal and contractual collaboration with partners has proven positive impacts on organizational resilience. Leaders should foster a culture of cross-industry collaboration, staying aware of AI partnerships happening in their industry and remaining open to collaborations that may seem atypical on the surface. Partnership can support expanding customer reach, deepening the addressable market within a segment by diversifying AI offerings. Partnerships in adjacent industries can deliver economies of scale through shared AI infrastructure while expanding AI capabilities through larger pooled datasets. Think of the drug industry partnering more closely with the grocery, restaurant, and fitness industries to mutually build more responsive products and services for people with chronic health conditions, like diabetes, using AI-powered recommendations modeled on activity and purchasing behavior from cross-industry data-sharing agreements.

Leaders should work to foster partnership-building activities. This could include providing teams with appropriate resources for partnership initiatives, establishing a clear framework for assessing partnership opportunities, and supporting external networking opportunities to strengthen relationships across sectors.

Provenance: Ensure reliable data governance and integrity

Where will your data come from? How will it be verified, managed, and maintained? The integrity and provenance of data is a paramount concern when developing AI-enabled products and services. The accuracy, reliability, and completeness of data directly influence the performance and ethical implications of AI algorithms. Inaccurate or biased data can lead to flawed predictions and decisions, potentially causing harm to individuals or perpetuating social inequalities.

Many people share concerns about regulating AI usage, with over two-thirds of respondents in a recent survey expressing that AI models should be required to be trained on data that has been fact-checked. Implementing robust data governance practices, including data validation, cleansing, and security measures, is essential to safeguard data integrity throughout its lifecycle. Additionally, organizations must be transparent to customers about their data collection methods and data usage to address concerns related to privacy and data misuse.

TikTok, facing renewed privacy and national security scrutiny of its social media products, recently launched a Transparency and Accountability Center at its Los Angeles headquarters that provides visitors with a behind-the-scenes look at the company’s algorithms and content-moderation practices. While it remains unclear how effective this approach will be at addressing major misinformation and privacy issues, the company is pioneering new approaches that could be a model for others in the industry, such as providing outside experts with access to its source code, allowing external audits, and providing learning content to make its opaque AI processes more explainable to journalists and users.

Product: Innovate your product and service experience responsibly 

Research shows that building an agile culture of innovation is critical to fostering organizational resilience. Engaging employees at all levels of an organization in AI-focused innovation initiatives ensures that solutions address diverse needs and unlock opportunities that may be outside the field of view of siloed AI groups or leadership teams. However, hastily AI-ifying all your products to stay relevant without integrating an ethics and integrity lens could cause unintended harm or breed mistrust with customers/users. Engaging in a comprehensive user research process to better understand the needs of users, risks and opportunities, and the impacts of AI on outcomes can help shape more responsibly designed products.

Our own recent work with Fast Company explored a set of design principles for integrating generative AI into digital products. It showcased examples of progressive disclosure affordances when interacting with AI chatbots, as well as what a standardized labeling system for AI-generated content could look like to increase transparency with users. Establishing strong AI product design best practices like these is especially important for highly consequential and personally sensitive products and services in sectors like education, healthcare, and financial services. Start with AI applications that are more mainstream than cutting edge. For example, autocompleting text to save customers time during onboarding is a more mature use case than using facial recognition to infer your customer’s emotional state.

Launching successful AI products and features requires the engagement of everyone involved in a product organization, extending beyond the typical research, design, and development functions to include adjacent and supporting functions like legal, communications, and operations to ensure that AI products are delivering on their promises. Leaders should establish QA best practices for testing and vetting products for their ethical and social impacts before public release. The Center for Humane Technology’s Design Guide is a great place to start thinking about evaluating the impact an AI product has on users.

Prosperity: Share productivity gains across stakeholder groups 

As AI provides businesses with tangible gains, there is an opportunity to share this newfound prosperity across stakeholder groups. According to political theorist David Moscrop, “With automation, the plutocrats get the increased efficiency and returns of new machinery and processes; the rest get stagnant wages, increasingly precarious work, and cultural kipple. This brave new world is at once new and yet the same as it ever was. Accordingly, it remains as true as ever that the project of extending liberty to the many through the transformation of work is only incidentally about changing the tools we use; it remains a struggle to change the relations of production.” Leaders are tasked with rebalancing stakeholder relationships to mitigate backlash from potentially rapid and negative impacts to workers’ livelihoods across industries.

AI’s material impact on jobs will likely be felt hardest by lower-wage workers, who will have the least say over how AI is integrated into (and eventually replaces) their jobs. The Partnership on AI’s Guidelines for AI and Shared Prosperity provide a great starting point for leaders, including key signals and risks of job displacement, a job impact assessment tool, and stakeholder-specific guidelines for rebalancing the impacts of AI on the workforce.

AI enterprises have a voracious appetite for data, which is often extracted from a variety of sources for free—directly from users through opaque data usage agreements or pirated directly from artists and other creators to train AI models. Such data relationships need to be rethought as they are increasingly becoming economic relationships, concentrating wealth and power in the hands of the extracting organizations. As the recent strike by the SAG-AFTRA union demonstrated, business leaders need to consider who they are sourcing data from that feeds algorithms and revisit the underlying contracts and agreements that remunerate the contributors to AI systems in an equitable manner. One interesting proposal includes the development of data cooperatives that can act as fiduciary intermediaries for brokering shared value between data producers and consumers.

While enormous value can be unlocked from AI—an estimated $4.4 trillion—should all of that go into the pockets of executives and shareholders? In addition to fairly compensating algorithm-training creatives and data-supplying users, leaders should also consider how they might pass along AI-generated value to their end customers. This could be directly financial—like a drug company lowering prices after integrating AI into their development pipeline— or it could be value-added by expanding service offerings—such as a bank offering 24/7 service hours through AI-supported customer touchpoints.

Taking it one step at a time

The technology of AI may shift rapidly, but we are really just at the beginning of a much larger transition in our economy. The decisions that leaders and organizations make today will have long-tail consequences that we can’t clearly see now. The gains from AI may be quite lucrative to those who implement well and the race to dominate in a winners-take-all economy is real. But racing to modernize your business operations and build AI-ified products and services without understanding the broad impacts of the technology is a bit like a 19th-century factory owner dumping toxic waste into the air in a hasty effort to leverage the latest production-line technology of the industrial era. We are all now paying the consequences of those leaders’ decisions.

Be considerate of the foundational AI choices and behaviors that will impact the long-term resilience of your organization and shape equity in our future society. Hopefully these five Ps can help you confront the AI hyperobject in a holistic manner by getting back to basics. Take it slow at first, so you can move steadily later. Gather your troops and work to deeply understand your people/customers/partners, then thoughtfully make your move forward.



All images used in this article were generated using AI.

Thanks to Neeti Sanyal, Holger Kuehnle, and Matthew Jordan for your contributions to this thinking.

A vector illustration depicting a person venturing towards a Web3 landscape

Developing Skills and Earning a Livelihood

Whether it’s NFTs, Web3, or AI, the rapid evolution of technology can offer opportunities for users of all ages, but young people – who spend so much of their time online – have a unique relationship with these emerging tools. And, despite what many think, adolescents are already using these emerging technologies to improve their well-being at a time when the mere existence and lived experiences of BIPOC and LGBTQ+ youth, especially, are under attack.

Take 13-year-old digital artist Laya Mathikshara from Chennai, India, for example. In May 2021, as a neophyte in the world of digital art, she sold her first NFT.

Her animated artwork, titled What if, Moon had life?, depicted the Moon’s active core gurgling. Inspired by the distance between the Earth and the Moon (384,400 km), Laya listed the reserve price as 0.384400 ETH (Ethereum) on Foundation, a platform that enables creators to monetize their work using blockchain technology. It caught the eye of Melvin, co-founder of the NFT Malayali Community, who placed a bid and collected her first artwork for 0.39 ETH ($1,572 at the time).

After the sale, and the success of subsequent NFTs, Laya – now 15 years old – decided to make digital art her career. With Web3, a collector of her art introduced her to other artists, whom she felt inspired to support through Ethereum donations. It “feels amazing to help people and contribute. The feeling is awesome,” she says.

“I started with nothing to be honest,” says Laya, “with zero knowledge about digital art itself. So I learned digital art [in parallel to] NFTs because I had been into traditional art [when I was younger].”

Supporting Key Developmental Assets to Wellbeing

Knowing that young people spend much of their unstructured time online, that digital wellness is a distinct concern for Gen Z, and that the technology landscape is rapidly changing, Artefact partnered with Hopelab to conduct an exploratory study to understand their experiences with emerging technology platforms – ones largely enabled by Web3 technologies like blockchain, smart contracts, and DAOs (decentralized autonomous organizations). The organizations were particularly interested in how these technologies might contribute to a wide spectrum of developmental assets to improve the well-being of young people.

Our study found that Web3 can support youth wellness because it is built on values such as ownership, validation, and community that link to developmental assets like agency and belonging. These values are fundamentally different from those of Web2, a technology operating on business models that monetize our attention and personal data.

Awareness and usage of Web3 technologies are already high among Gen Zers, with 55% in the U.S. claiming to understand the concept of Web3. Twenty-nine percent have owned or traded a cryptocurrency and 22% have owned or traded an NFT. Importantly, Gen Z believes these technologies are more than a fad: 62% are confident that DAOs will improve how companies are run in the future.

Having grown up with multiple compounding stressors, including climate change, a global pandemic, and political unrest, some Gen Zers find appeal in Web3’s potential to create what you want, own what you make, support yourself, and change the world.

With Web3, young people are experimenting with their interests and identities, creating art and music, accumulating wealth, consuming and sharing opinions, forming communities, and supporting causes that deeply resonate with them.

Victor Langlois and LATASHÁ, visual and musical artists, respectively, each represent the diversity that is important to our organizations, and both have made real income at a young age through NFTs. Likewise, World of Women, a community of creators and collectors, believes representation and inclusion should be built into the foundation of Web3, while UkraineDAO seeks to raise money to support Ukraine.

Aligning with Gen Z Values

The gateway to Web3 for youth has commonly been through media hype, celebrity fanfare, and video games. Youth we spoke to were all skeptical, at least at first. Laya says, “I thought it was some cyber magical money or something. It just didn’t feel real.” After learning how to use the technology to create assets themselves and even make money via NFTs without a bank account, they began to invest more time experimenting with the tech and consuming content.

These experiences are not without challenges, of course. Young people in our study shared that they need to spend a lot of time learning about the ever-evolving space and building connections to stay relevant. The financial ups and downs are more extreme than the stock market, along with the potential for major losses at the hands of scammers or platform vulnerabilities. Like Web2, there is pressure to be endlessly plugged into the constant news, with social capital to be gained by being consistently online. Some of society’s broader social issues also permeate Web3 spaces: racist NFTs and communities abound.

Despite these challenges, there is genuine excitement for a new internet built on Gen Z’s core values. Several youth shared how DAOs are flipping organizational norms, where hierarchy and experience no longer determine whether your idea takes hold. Web3 technologies are giving youth an opportunity to start careers that weren’t previously viable, find new audiences and fanbases, create financial independence, detach from untrustworthy platforms, and find and contribute to caring communities – all while building their creative, socioemotional, and critical thinking skills online.

These experiences are helping Gen Z feel a strong sense of belonging as they find communities and causes they care about. In the words of one of our interviewees, Web3 offers a “new and shiny” way to “do good in the world.” The experiences are more accessible – and specific to them – and the decentralized nature of Web3 means that creators and the public, not big tech or its algorithms, get to determine what is current and relevant. This is especially important for creators from groups that have been excluded from power because of their race, ethnicity, gender, or orientation. One participant shared how empowering it was to no longer be at the whim of social media platforms that may make design changes that erase your content, user base, or searchability overnight.

Like any other technology, Web3 and its components can have positive and negative impacts, but its fundamental tenets mean that we will likely see promising innovations and experiences that can support young people to find agency and belonging.

“We are all decentralized for the most part,” says Laya. “And the fun fact is, I have not met many of my Indian friends…I haven’t met folks in the U.S. or any other countries for that matter… you don’t even have to connect to a person in real life, but you still feel connected.”


Neeti Sanyal is VP at Artefact, a design firm that works in the areas of health care, education, and technology.

Jaspal Sandhu is Executive Vice President at Hopelab, a social innovation lab and impact investor at the intersection of tech and youth mental health.

Collage of agriculture workers working in the field

Climate change has direct impacts on human health, but those impacts vary widely by location. Local health impacts depend on a large number of factors, including specific regional climate impacts, demographics and human vulnerabilities, existing local adaptation capacity and resources, and cultural context. Therefore, organizations will need to tailor mitigation and adaptation strategies to the regional risks and contexts of different communities.

Participants at the 2023 Global Digital Development Forum called for moving away from entrenched approaches that tend to look to top-down solutions to drive change. Instead, they suggested more holistic, interdisciplinary, collaborative, and inclusive engagements that account for on-the-ground contexts and people-centered approaches. Participatory methodologies are well suited to bring local voices into conversations, decision-making, and equitable engagement.

Headshots of the two featured speakers: Ezgi Canpolat, PhD and Kinari Webb, MD

In this panel discussion, Artefact’s Whitney Easton sits down with Health in Harmony’s Kinari Webb, MD and the World Bank’s Ezgi Canpolat, PhD to share the work they are doing to foreground the social dimensions of climate change and support planetary health. Through concrete examples, we will explore what is most difficult and most promising about working deeply and collaboratively with local partners and communities to craft a more resilient future for us all.

Topics include:

  • What does it mean in practice to put people at the center of climate and health action?
  • What’s most missing from existing approaches that attempt to reduce the health impacts of climate change, and what’s most promising on the horizon?
  • What can the COVID-19 pandemic teach us about how to work toward planetary health?
  • How might we better engage with cultural contexts and local realities as we design initiatives, particularly when it comes to ensuring impact and minimizing unintended consequences?
  • How can the predictive power enabled by Big Data and technology be balanced with local, real-life contexts to ensure that local stakeholders and citizens truly benefit?


Ernest Cline, in his Ready Player One (and Two) books, paints a picture of a world where everyone avoids the problems of physical reality—a global energy crisis, environmental degradation, extreme socioeconomic inequality—by taking “an escape hatch into a better reality.” That escape hatch is the OASIS, the fully fledged metaverse in virtual reality, “where anything was possible.”

While news cycles have shifted away from the metaverse in recent months (thanks, ChatGPT!), big tech and startups have been working diligently behind the scenes, investing billions of dollars in alternative realities, each aiming to bring about its own concept of the metaverse. Apple is heavily rumored to unveil its mixed reality headset at WWDC in June, a device likely to answer the criticisms of Meta’s approach and dramatically accelerate the metaverse’s arrival, even if under a different label, like spatial computing. With different visions of the metaverse rising in tandem, we must examine the tools we have access to and the foundation on which we can build in order to ensure this “better reality” is truly better.

Re-define viable

Those in new product development usually think about whether an idea is technologically feasible, desirable to users, and viable in the marketplace.

Meta, Google, Microsoft, Apple, and many other players, like Magic Leap, have been working on Augmented Reality (AR) (e.g., Ray-Ban Stories, Google Glass, Project Starline, iOS), Mixed Reality (MR) (e.g., HoloLens 2, Magic Leap 2, the rumored Apple headset), and Virtual Reality (VR) (e.g., Meta Quest 2, Quest Pro) projects for decades. Advancements in headsets and computing suggest the technology is mature enough to support these reality experiences for the mass market. Feasibility: check.

The lukewarm success of remote work has created a clear problem space for the metaverse to tackle. Our increasing entanglement with contextual online avatars (e.g., vTubers, “finsta” Instagram profiles, and even the everyday Apple Memoji), along with the success of MMOs (massively multiplayer online games, e.g., Fortnite and Roblox), signals a readiness among consumers to develop a personal connection to an online self and peers, making the metaverse desirable from a consumer standpoint. Desirability: check.

Viability, however, is complicated. Today’s virtual business model relies heavily on advertising. Users experience it as free, but the model has far-reaching direct and indirect consequences. It has helped push vast amounts of rare materials into landfills. It has escalated extreme polarization in our political systems and degraded the community relationships needed to sustain a democracy. It has sown distrust of institutions through the proliferation of misinformation. It has fostered screen addiction and increased social isolation while weakening interpersonal connections. We’ve learned we can create a viable world — one that is economically profitable for a select few — but we haven’t learned to create a world viable enough to sustain our environment, political systems, societal values, meaningful relationships, and individual agency while remaining economically profitable. Viability is there, but it’s complicated. We need to treat it with special care.

If we don’t act now, the metaverse will be built on the same foundation as today’s paradigm, whose consequences we are already living with.

Here’s a five-step framework you can use to work toward a better metaverse sooner rather than later.

As product owners, designers, and technologists, now is the time to ask ourselves, “What if the metaverse succeeds?” As you build the metaverse from your own perspective and strategy, how can you avoid contributing to a world that needs an “escape hatch from reality” and instead create a world we might actually want to spend time in?

The five steps


Step one

Each company working in mixed reality will define the metaverse differently. Apple will undoubtedly have a different approach from Meta, from Epic, and so on. As it is still emerging, the metaverse currently has no one definition. As such, teams must align on a shared definition to ensure they are working toward the same product vision and forecast the resulting unintended consequences of that strategy.

This framework uses my emerging definition of the metaverse: “a shadow layer creating a seamless experience across shared realities (AR, VR, PR).” This shadow layer would be made up of data and information presented seamlessly and contextually across different realities. There are endless ways the mixing of these realities could be imagined. For example, on an individual scale, imagine a friend in the virtual world who could seamlessly accompany you to dinner in physical reality through an augmented reality experience. Your own definition of the metaverse will work for this process too, but it must be clearly defined.


Step two

When defining a better metaverse, you must think critically about the underlying model it would be built on. Consider how your proposal for the metaverse will exist on a spectrum of disruption. Will it augment the existing status quo of today’s centralized model (built on a small number of operating systems and a shrinking number of server providers), or push to reimagine an alternative future, such as a decentralized (e.g., Web3) or distributed model (e.g., the early internet)? Depending on the model, the landscape and potential futures you’re working toward (and their unintended consequences) will be considerably different.


Step three

By looking at as many aspects and stakeholders as possible of the systems the metaverse will be a part of, you can consider how to shape a better future. Beyond the software that makes the metaverse real, you need to step back and consider how it will impact multiple levels of the systems it will operate in. One model (based on, but different from, Pace Layering) imagines what the potential effects might be at differing levels of scale (individual, relational, group, societal, environmental, etc.).

After considering the rings of impact, create a matrix by industry (education, healthcare, social impact, etc.) to capture relevant areas the intervention might impact. It’s important to push beyond the obvious stakeholders and industries you might initially consider. Try to include a specific population or industry that isn’t already a part of your normal processes or strategy. These are often where threads of potential unintended consequences emerge.


Step four

While all of the topics and stakeholders captured in the matrix will be affected in some way, not all will be affected to the same degree; it may depend on how any given future plays out. Use a series of 2×2 future matrices, inspired by Alun Rhydderch’s 2×2 Matrix Technique, to push hypothetical scenarios that are both idealistic (utopian) and problematic (dystopian), at varying degrees of intensity (mass adoption vs. passing fad). While the future will likely end up somewhere in the middle, considering extremes allows us to hypothesize what the preferable future is and to capture potential blind spots where unintended consequences could happen along the way.


Step five

Now reflect on the learnings from the process. Ask yourself several questions. How do your definition and strategy for the metaverse affect society, industries, and individuals? How does your strategy for the metaverse, played out through potential future scenarios, affect the different systems of scale? How might you change your definition, vision, and/or strategy to build toward a better metaverse?


Repeat to keep the stars aligned as the future unfolds.

These five steps, from definition to imagining to reflection, must be an ongoing activity, revisited and repeated on a regular basis. The future is uncertain and the world will change around us. Advances in generative AI could dramatically change the technological landscape. Shifts in political winds could change the governance landscape. A major climate or global health crisis, as we saw with COVID-19, could change societal priorities. Doing these activities proactively in the initial design can help reduce the number of reactive outcomes we will have to chase down once the product is released.

Whatever the metaverse becomes, hopefully we can all help make sure it aligns with a preferable future that favors all realities; a future world we want to run toward, rather than escape from.

Thank you to Carolyn Yip for helping bring these figures to life through animation, and to Matthew Jordan, Hannah Grace Martin, Neeti Sanyal, Yuna Shin, and Holger Kuehnle for editing and thoughts along the way.

Hannah Grace Martin

In recent months, we’ve seen the rise of independent social media marketed around authenticity: first BeReal, and now others like Gas have cropped up. When we speak with Gen Z consumers, authenticity feels like a buzzword—it comes up again and again as a guidepost for ideal experiences—yet they have difficulty defining it. Instead, it feels like a reaction to the inauthenticity they see on Instagram and, to a lesser extent, TikTok, which they blame for a lack of social connection in spaces that should foster it. While BeReal’s features limit the ability to curate posts, the core of its UX is the same as that of larger social media platforms, which limits the social connection that underpins authenticity. To design for authenticity, platforms must adopt a UX that allows users to adapt and evolve their identities over time.

Putting on an “act” in social spaces isn’t unique to social media. In 1959, Erving Goffman published The Presentation of Self in Everyday Life, where he contends that real-life social situations cause participants to be actors on a stage, with each implicitly knowing their role. The character one plays depends on a variety of contextual factors: who is present and the “props” and “set” (visual cues), among others. As such, each performance is different. His theory explains why one might feel awkward when two different social groups are in the same room: the actor doesn’t know which role they are supposed to play.

In online spaces, the feed is our perma-stage. Facebook’s News Feed was designed to deliver updates on friends the same way we receive updates on local and national news. It seems inevitable that this product vision would produce performances, and highly curated ones at that. Its one-to-many nature limits standard interaction; instead of an actor-actor dynamic, we see a creator-commenter-lurker hierarchy. And because creators design their posts to cater to the masses, they are not moving from stage to stage; instead, one’s online persona feels static. Here, the light of inauthenticity shines through, as we are no longer playing together, but watching others perform.

In Goffman’s model, actors retreat “back-stage” when they are alone or with close others — this is the place where they can let their hair down and be free from keeping up impressions. While the dominance of social media’s feed might make the Internet seem like an unlikely place for back-stage settings, almost every social media platform has a direct message function. In contrast to the one-to-many, post-centric UX of the feed, these back-stage spaces are one-to-one or one-to-few, interaction-heavy spaces that have come to be the most fulfilling part of the social media experience for users. Instead of solo “lurking” that can lead to comparison and loneliness, users who are active in back channels find engagement, connection, and the reprieve to be themselves, or at least the character that carries the smallest margin of performance with a particular friend or group, since they have created their “show” together.

But it’s the feed that dominates the social media experience. It permeates moments that would traditionally have been back-stage settings (for example, alone in one’s home), and so we find ourselves wanting authenticity, or a back-stage feeling, here. Hence trends like posting crying selfies, which feel close to a cut and paste: back-stage content dropped onto the front stage. While a post like this could momentarily make a user feel understood or less alone, the infrastructure of social media doesn’t enable the interaction needed to produce real support, and it can continue to feel designed for likes. Between glamour shots and crying selfies sits BeReal, where users post more of the “everyday” of their everyday life. Still, BeReal has been criticized as boring, still performative, or even exclusive in a more intimate way. A feed can’t support true connection, the table stakes of enduring authenticity.

Outside of these two paradigms, we see a third type of space emerging. Platforms like Discord took hold during the pandemic as a more casual place to “hang out” virtually. Building on a chat-based UX, Discord enables users to find others with similar interests and move between smaller and larger channels as well as text- and voice-based communication. Further, Discord is the hub for creative tools like Midjourney, an AI image generator that can only be accessed through Discord using bot commands. Similarly, Fortnite builds conversation through shared experience and play, in so doing re-leveling the performer-audience dynamic and putting engagement over performance. Extending Goffman’s metaphor, we might compare the social atmosphere created on Discord and Fortnite to a writer’s room, where users engage and create together.

A more agile space like Discord reflects the “Presentation of Self” as charted by Gen Z. This generation sees the self as a canvas for experimentation, where identity is fluid. Through creative tools and less definite spaces, creativity and play extend to the making of the self on a journey of self-discovery. Users can create and try on characters much like a comedian might on a Tuesday night, first seeing whether a bit resonates before it makes Saturday night, let alone becomes an enduring part of the act.

To enable more dynamic interactions, we will need to move away from a cut-and-paste UX approach toward ground-up infrastructure designed for fluidity. Taking pointers from the “writer’s room,” two principles can guide us. First, collaboration. Similar to “yes, and,” creators in authentic spaces create in tandem rather than in a creator-consumer dynamic. The UX of authentic spaces must lean toward chat over post, fostering interaction and relationships that make it safe to try a new presentation of self. Second, authentic social media needs impermanence. Though a feed may refresh over time, we know that posts on Instagram will be connected to our profiles for years to come. If a post is instead lost in a Discord channel, we may feel more freedom to experiment and “get it wrong.” Combining collaboration and impermanence, we might just set the stage for the collection of characters we all play, so that we can all feel a bit more dynamic, and perhaps even authentic, in digital spaces.

Exploring how AI lives in the past and dreams of the future


As I drift online, I’m becoming more aware of AI’s presence. When I browse the web, design a prototype, or debate what to cook for dinner, I’m less certain what my next move should be. Should I invite AI into my thinking process?

I find myself in a storm of cool AI products with murky ethics and big promises for a more personalized experience. When the thunder roars, we don’t have the option to hide indoors. How do we coexist with this AI hype in our work and personal lives?

AI changes how we create

Over the decades, digital technology has pushed us to reconsider our processes and collective values. AI-powered features are rolling out into everyday consumer products like Spotify, Notion, and Bing at lightning speed. They strike us with delight, intrigue, and fear. Finally, we have tools that can shower our thoughts with attention deceivingly well. Ask and you shall receive a dynamic and thoughtful response as audio, code, image, text, or video output.

The leap from spell check to ChatGPT’s ability to rewrite paragraphs in “Shakespearean dialect” lands us with new questions about what deserves our attention and praise. Should we devalue an article written with the help of Notion AI? Is artwork generated by Lensa AI less precious than a hand-drawn painting by a local artist?

AI is making us rethink our values, much as the anti-art movement did

In the early 1900s, the anti-art movement was led by artists who purposely rejected prior definitions of what art is. It provoked a shift in what we value in the art world. In 1917, the French artist Marcel Duchamp submitted a store-bought urinal signed with the pseudonym “R. Mutt” to a gallery show. The submission was rejected and caused an uproar, but it confronted and expanded our imagination of what is considered art.

This created opportunities for new forms of art that go beyond the institutional vantage point of the artist. Rather than focusing on the craft and sublimity of a physical artwork, anti-art paved the way for contemporary art that values the ideas and concepts being explored by the artist in dynamic ways like performance, video, sculpture, and installations.

Generative AI is becoming the Marcel Duchamp of our 21st century. Similar to the anti-art movement, AI invites us to reject conventional tools, processes, and products. It allures us by freeing us from being alone with our thoughts and by concisely telling us what to imagine. The invitation of an AI companion into our classroom, office, or home allows us to speed up, cut in half, or eliminate our thinking process. This challenges our sense of self and our place in the world.

AI intensifies the blurring of the line between what is human and what is artificial

As a result of AI changing how we create, what we’re creating is also changing. The AI hype is taking hold in digital spaces where user privacy and autonomy are dwindling. For example, Twitter and Meta have launched paid product tiers that grant additional verification and visibility features. This increases the chance of misinformation, fake profiles, trolls, and bots. With AI intensifying the blurring of what is human and what is artificial, the need for authentication and transparency continues to grow.

Vogue covered the fascination behind the viral hyper-real “big red boots” by MSCHF that resemble the pair Astro Boy wears in the anime series. These impractical, playful boots blur the line between the real and the unreal in much the same way AI does. They play into the double take we do while listening to the AI-powered DJ on Spotify or scrolling past the viral AI-generated image of Pope Francis in a white puffer. The uncanny quality of the big red boots forces us to consider how digital aestheticization distorts details, realism, and quality. A stark contrast emerges between what exists in the real world and what is trying to fit in. The boots make it obvious which qualities of the imperfect physical world can’t be digitally copied over.

Our presence gives value to AI outputs in a variety of ways

The shift of creativity in the age of AI also means world-building and dreaming with tools that are neither independent nor neutral. In an article by CityLab, the architectural designer Tim Fu describes the AI art generator Midjourney as an advanced tool that can aid the creative process but “still requires the control and the artistry of the person using it.” The rapidly generated images help with the earliest stages of a project, but they lack detail, and the architects spot gaps in the AI art generator’s understanding of non-Western architecture.

In a recent NYT guest essay, Noam Chomsky describes how ChatGPT “either overgenerates (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerates (exhibiting noncommitment to any decisions and indifference to consequences).” Rather than a bot takeover, our responsibilities will expand in new ways as designers, programmers, educators, students, and casual users. We must develop a new type of digital literacy to navigate the tension between user and AI: knowing what to ask, how to push back, and when to accept an outcome.

By making these digital experiences with AI more collaborative, we can collectively anticipate blind spots. LinkedIn recently introduced a new feature called “collaborative articles” that starts with an article pre-written by AI. Experts on the platform with relevant skills, based on LinkedIn’s internal evaluation criteria, are invited to add context and information. It uses AI as a jumping-off point for discussion that emulates the back-and-forth of comment sections. This is one approach to human intervention that creates space for our skepticism and voice to be at the core of any AI output.

Only together with our skepticism and presence can we prevent the distortion of our ideas. This puts necessary pressure on the in-between moments that shape who we are: the moments when we are alone with our thoughts—without the distraction of technology.

You don’t need AI to dream big

AI lives in the past and dreams of the future. Rather than engaging with the present moment, AI takes any context and uses training data to predict what comes next in the sequence. Instead of sifting through the excess of information on the Web, we get information rearranged by large language models that leave no clear trace of how it ended up where it did. ChatGPT creates a foggy interpolation of the Web.

Digital technology distorts our understanding of linear time by repackaging the past as a future possibility. Our senses are grounded in the real world and in the present, where we truly exist beyond data points. If we treat AI as the end-all-be-all for creativity, learning, productivity, and innovation, won’t we lose our sense of self and what we stand for? Generative AI exists for your text input; it lives to anticipate but doesn’t live.