Artefact’s staff reflects on AI’s potential impact on individuals and society by answering questions prompted by the Tarot Cards of Tech. Each section contains videos that explore a tarot card and provide our perspectives and provocations.

The Tarot Cards of Tech was created to help innovators think deeply about scenarios around scale, disruption, usage, equity, and access. With the recent developments and democratization of AI, we revisited these cards to imagine a better tech future that accounts for unintended consequences and the values we hold as a society.

Cultural implications for youth

Jeff Turkelson, Senior Strategy Director

Transcript: I love that this card starts to get at maybe some of the less quantifiable, but still really important, facets of life. So, when it comes to something like self-driving cars, which generative AI is actually really helping to enable, of course people think about how AI can replace the professional driver, or how AI is generally coming for all of our jobs.

But there are so many other interesting implications. So for example, if you no longer need to drive your car, would you then ever need to get a license to drive? And if we do away with needing a license to drive, then what does that mean for that moment in time when you turn 16 years old and get newfound independence with your driver's license? If that disappears, that could really change what it means to become a teenager and a young adult, and so on. So what other events or rituals would AI disrupt for young adults as they grow older?

Value and vision led design

Piyali Sircar, Lead Researcher

Transcript: This invitation to think about the impact of incorporating gen AI into our products is really an opportunity to think about design differently. We should be asking ourselves, “What is our vision for the futures we could build?” and once we define those, the next question is, “Does gen AI have a role to play in enabling these futures?” Because the answer may be “no,” and that should be okay if we’re truly invested in our vision. And if the answer is “yes,” then we need to try to anticipate the cultural implications of introducing gen AI into our domain space. For example, “How will this shift the way people spend time? How will it change the way they interact with one another? What do they care about? What does this product say about society as a whole?” Just a few questions to think about.

Exploring how generative AI could superpower research outputs to foster greater empathy and engagement

With the release of GPT-4 and the growing interest in generative AI tools such as DALL-E 2, Midjourney, and others, there is no dearth of people writing about and commenting on the potential positive and negative impacts of AI, and how it might change work in general and design work specifically. As we sought to familiarize ourselves with many of these tools and technologies, we immediately recognized the potential risks and dangers, but also the prospects for generative AI to augment how we do research and communicate findings and insights.

Looking at some of the typical methods and deliverables of the human-centered design process, we not only saw practical opportunities for AI to support our work in the nearer term, but also some more experimental, less obvious (and, in some cases, potentially problematic) opportunities further out in the future.

More Obvious

  • Summarizing existing academic and industry research
  • Identifying subject matter experts and distilling their knowledge and opinions
  • Supporting researchers with AI notetakers to expedite analysis and synthesis
  • Supporting participants in the co-design process with generative AI tools to help them better express, articulate, and illustrate their ideas

Less Obvious

  • Leveraging bots as surrogate researchers for conducting highly structured user interviews on a large scale
  • Replacing human research subjects entirely for more cursory, foundational, gen pop research
  • Creating more engaging, sticky, and memorable outputs and deliverables, for example, a life-like interactive persona

Now, while each of the above use cases merits its own deep dive, in this article we want to focus on how advances in AI could potentially transform one common, well-established output of HCD research: the persona.

Breathing new life into an old standard

A persona is a fictional, yet realistic, description of a typical or target user of a product. It’s an archetype based on a synthesis of research with real humans that summarizes and describes their needs, concerns, goals, behaviors, and other relevant background information.

Personas are meant to foster empathy for the users for whom we design and develop products and services. They are meant to help designers, developers, planners, strategists, copywriters, marketers, and other stakeholders build greater understanding and make better decisions grounded in research.

But personas tend to be flat, static, and reductive—often taking the form of posters or slide decks and highly susceptible to getting lost and forgotten on shelves, hard drives, or in the cloud. Is that the best we can do? Why aren’t these very common research outputs of the human-centered design process, well, a little more “alive” and engaging?

Peering into a possible future with “live personas”

Imagine a persona “bot” that not only conveys critical information about user goals, needs, behaviors, and demographics, but also has an image, likeness, voice, and personality. What if those persona posters on the wall could talk? What if all the various members and stakeholders of product, service, and solution teams could interact with these archetypal users, and in doing so, deepen their understanding of and empathy for them and their needs?

In that spirit, we decided to use currently available, off-the-shelf, mostly or completely free AI tools to see if we could re-imagine the persona into something more personal, dynamic, and interactive—or, what we’ll call for now, a “live persona.” What follows is the output of our experiments.

As you’ll see in the video below, we created two high school student personas, abstracted and generalized from research conducted in the postsecondary education space. One is more confident and proactive; the other more anxious and passive.

Now, without further ado, meet María and Malik:

Chatting with María and Malik, two “live personas”

Looking a bit closer under the hood

Each of our live personas began as, essentially, a chatbot. We looked at tools like Character.ai and Inworld, and ultimately built María and Malik in the latter. Inworld is intended to be a development platform for game characters, but many of its ideas and capabilities are intriguing in the context of personas: adjustable personality and mood attributes, personal and common knowledge sets, goals and actions, and scenes. While we did not explore all of those features, we did create two high school student personas representing a couple of “extremes” with regard to thinking about and planning their postsecondary future: a more passive and uncertain María and a more proactive and confident Malik.
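For the technically curious, here is a rough sketch of the kind of structured definition a live persona boils down to. To be clear, this is not Inworld’s API (its characters are authored through a web studio); it is a hypothetical simplification, in Python, of the attributes described above:

```python
from dataclasses import dataclass, field

@dataclass
class LivePersona:
    """A hypothetical, simplified persona definition (not Inworld's API)."""
    name: str
    backstory: str                  # research-derived narrative
    goals: list[str]                # what this archetype is trying to do
    personality: dict[str, float]   # adjustable trait "sliders," 0.0-1.0
    knowledge: list[str] = field(default_factory=list)  # facts the persona "knows"

# Malik: the more proactive, confident archetype from our research.
malik = LivePersona(
    name="Malik",
    backstory=("A high school junior actively researching colleges, "
               "scholarships, and career paths."),
    goals=["Compare financial aid options", "Build a strong application"],
    personality={"confidence": 0.8, "proactivity": 0.9, "anxiety": 0.2},
    knowledge=["Has visited two campuses", "First in family to apply"],
)
```

In practice, a definition like this would be compiled into the system prompt or character configuration that conditions the underlying language model.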

Here’s a peek at how we created Malik from scratch:

Making Malik, a “live persona”

Interacting with María and Malik, it was immediately evident how these two archetypes were similar and different. But they still felt a tad cartoonish and robotic. So, we took some steps to progressively improve their appearance, voices, and expressiveness.

Here’s a peek at how we made María progressively more realistic by combining several different generative AI and other tools:

Making María, a “live persona,” progressively more realistic

Eyeing the future cautiously

The gaming industry is already leading in the development of AI-powered characters, so it certainly seems logical to consider applying many of those precedents, principles, tools, and techniques to aspects of our own work in the broader design of solutions, experiences, and services. Our experimentation with several generative AI tools available today shows that it is indeed possible to create relatively lifelike and engaging interactive personas—though perhaps not entirely efficiently (yet). And, in fact, we might be able to do more than just create individual personas to chat with; we could create scenes or even metaverse environments containing multiple live personas that interact with each other and then observe how those interactions play out. In this scenario, our research might inform the design of a specific service or experience (e.g., a patient-provider interaction or a retail experience). Building AI-powered personas and running “simulations” with them could potentially help design teams prototype a new or enhanced experience.

But, while it’s fun and easy to imagine more animated, captivating research and design outputs utilizing generative AI, it’s important to pause and appreciate the numerous inherent risks and potential unintended consequences of AI—practical, ethical, and otherwise. Here are just a few that come to mind:

  • Algorithmically generated outputs could perpetuate biases and stereotypes because AIs are only as good as the data they are trained on.
  • AIs are known to have hallucinations, in which they may respond over-confidently in a way that doesn’t seem justified or aligned with their training data—or, in our case, with the definitions, descriptions, and parameters we configured for an AI-powered persona. Those hallucinations, in turn, could influence someone to make a product development decision that might unintentionally cause harm or disservice.
  • AIs could be designed to continuously learn and evolve over time, taking in all previous conversations and potentially steering users towards the answers they think they’d want rather than reflecting the data they were originally trained on. This would negate the purpose of the outputs and could result in poor product development decisions.
  • People could develop a deep sense of connection and emotional attachment to AIs that look, sound, and feel humanlike—in fact, they already have. It’s an important first principle that AIs be transparent and proactively communicate that they are AIs, but when the underlying models become more and more truthful and they are embodied in more realistic and charismatic ways, then it becomes more probable that users might develop trust and affinity towards them. Imagine how much more potentially serious a hallucination becomes, even if a bot states upfront that it is fictitious and powered by AI!

Finally, do we even really want design personas that have so much to say?! Leveraging generative AI in any of these ways, without thoughtful deliberation, could ultimately lead us to over-index on attraction and engagement with the artifact at the expense of its primary purpose. Even if we could “train” live personas to accurately reflect the core ideas and insights that are germane to designing user-centered products and services, would giving them the gift of gab just end up muddling the message?

In short, designing live personas would have to consider these consequences very carefully. Guardrails might be needed, such as limiting the types of questions and requests that a user may ask the persona, making the persona “stateless” so it can’t remember previous conversations, capping the amount of time users can interact with the persona, and having the persona remind the user that they are fictitious at various points during a conversation. Ultimately, personas must remain true to their original intent and accurately represent the research insights and data that bore them.
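To make those guardrails concrete, here is a minimal sketch of how a few of them might wrap a persona chat session. Everything here is an illustrative assumption rather than a prescription: the topic allowlist, time cap, and reminder cadence are placeholders, and generate_reply stands in for whatever model powers the persona.

```python
import time

ALLOWED_TOPICS = ("college", "school", "career", "finances", "family")
SESSION_LIMIT_SECONDS = 15 * 60   # cap how long a user can interact
REMINDER_EVERY_N_TURNS = 5        # periodically re-disclose fictitiousness


def generate_reply(persona_prompt: str, user_message: str) -> str:
    """Placeholder for the underlying model call that powers the persona."""
    return "..."  # e.g., an LLM response conditioned on the persona prompt


def chat_with_persona(persona_prompt: str) -> None:
    start, turns = time.time(), 0
    print("Note: I am a fictitious, AI-powered persona built from research data.")
    while time.time() - start < SESSION_LIMIT_SECONDS:
        message = input("You: ")
        # Guardrail: limit the types of questions a user may ask.
        if not any(topic in message.lower() for topic in ALLOWED_TOPICS):
            print("Persona: That's outside what this persona can speak to.")
            continue
        # Guardrail: stateless by design; no conversation history is passed,
        # so the persona cannot "remember" previous turns.
        print("Persona:", generate_reply(persona_prompt, message))
        turns += 1
        # Guardrail: remind the user at intervals that the persona is fictitious.
        if turns % REMINDER_EVERY_N_TURNS == 0:
            print("(Reminder: I am an AI persona, not a real person.)")
    print("Session time limit reached.")
```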

And further, even if applying generative AI technologies in these ways becomes sufficiently accessible and cost-effective, it will behoove us to remember that they are only tools we might use as part of our greater research and design processes, and that we should not be over-swayed by, nor base major decisions on, something a bot says, as charming as it might be.

Though it’s still early days, what do you think about the original premise? Could AI-enabled research outputs that are more interactive and engaging actually foster greater empathy and understanding of target end-users, and could that lead to better strategy, design, development, and implementation decisions? Or will the effort required and the possible risks of AI-enabled research outputs outweigh their possible benefits?

“I can’t see it. I can’t touch it. But I know it exists, and I know I’m part of it. I should care about it.”

AI is top of mind for every leader, executive, and board member. It is impacting how organizations approach their entire business, spanning the functions of strategy, communications, recruitment, partnerships, labor relations, and risk management, to name a few. No wonder wrapping your head around AI is such a formidable challenge. Some leaders are forging ahead with integrating AI across their business, but many don’t know where to begin.

Beyond the complexities of its opaque technical construction, AI presents many challenges for both leadership and workers. The deployment of AI is not merely a technical solution that increases efficiency and enhances capabilities, but rather is a complex “hyperobject” that touches every aspect of an organization, impacting workers, customers, and citizens across the globe. While AI has the potential to augment work in magical ways, it also presents significant obstacles to equitable and trustworthy mass adoption, such as a significant AI skills gap among workers, exploitative labor practices used for training algorithms, and fragile consumer sentiment around privacy concerns.

To confront AI, leaders leveraging it in their business need an expanded view of how the technology will impact their organization, their workforce, and the ecosystems in which they operate. With this vital understanding, organizations can build a tailored approach to developing their workforce, building partnerships, innovating their product experiences, and fostering other resilience behaviors that increase agility in an age of disruption. Research shows that organizations with healthy resilience behaviors such as knowledge sharing and bottom-up innovation were less likely than others to go bankrupt following the disruptions of the COVID-19 pandemic.

Hyperobjects and our collective future

Coined by professor Timothy Morton, the term “hyperobject” describes something so massively distributed in time and space as to transcend localization—an amorphous constellation of converging forces that is out of any one person’s control. Similar to other large, complex phenomena that have the potential to radically transform our world—think climate change or the COVID-19 pandemic—AI is one of the hyperobjects defining our future.

Hyperobjects are difficult to face from a single point of view because their tendrils are often so broad and interconnected. Dealing with huge, transformative things requires broad perspective and collective solidarity that considers impacts beyond the interests of a single organization. The task of organizational leaders in an age of disruption by hyperobjects—like AI and climate change—is to rebalance the economic and social relationships between a broad group of stakeholders (management, shareholders, workers, customers, regulators, contractors, etc.) impacted by rapid change and displacement.

To help leaders form a comprehensive approach to cultivating resilience, innovation, and equity in the age of AI, we developed a simple framework of five priorities—or the 5 Ps—for building an AI-ready organization: People, Partnerships, Provenance, Product, and Prosperity.

People: Communicate a clear AI vision and support continual learning across teams 

Charting an AI future for any organization begins with its people. Leaders need to identify the potential areas where AI can enhance operations and augment human capabilities. This starts by identifying low-risk opportunities—such as writing assistance or behavioral nudges—to experiment with as your AI capabilities mature. As new roles emerge, organizations must prioritize continuous learning and development programs to upskill and reskill employees, equipping them with the AI literacy needed to adapt to the changing landscape.

Despite all the media fanfare, usage of dedicated AI products like ChatGPT is still fairly limited, mostly used by Millennials and Gen Z, with only 1 in 3 people familiar with dedicated AI tools. Concerns about the technology are real and can affect morale. Almost half of workers fear losing their jobs to AI. Organizations need open communication and transparency about their AI adoption plans and the potential impacts on the workforce so they can also mitigate social anxiety and address fears or resistance to automation. Fostering a culture of continuous innovation can also encourage employees to embrace AI as an opportunity for growth rather than a threat to job security.

Additionally, team structures should be optimized for agility, cross-functionality, and embedded AI expertise. Rather than treating AI as a separate function, how might you develop AI expertise closer to the day-to-day work happening in teams? This could include things like promoting data literacy amongst teams to better understand AI insights, developing centers of excellence to provide training resources, recruiting AI experts, or establishing accessible feedback mechanisms to improve AI model performance.

Partnerships: Leverage strategic partnerships to reduce risks and expand capabilities

According to Google Brain co-founder Andrew Ng, “The beauty of AI partnerships lies in their ability to bring together diverse perspectives and expertise to solve complex problems.” In the rapidly evolving landscape of AI technologies, no single organization can possess all the expertise and resources required to stay at the forefront of innovation. Collaborating with external partners, such as AI platform providers, research institutions, product design experts, and training/support firms, can help address capability gaps and speed up time to market. 

Transformational technology investments requiring large capital expenditures and retooling can be a barrier for organizations to adopt new methods of production. However, partnerships offer opportunities for risk and cost sharing, reducing the initial burdens of AI implementation. Working with partners can also enhance an organization’s ability to scale and expand into new markets. Examples include Google’s partnership with Mayo Clinic on AI in healthcare as well as Siemens’ partnership with IBM and Microsoft focused on AI in industrial manufacturing.

Investing in both informal and contractual collaboration with partners has proven positive impacts on organizational resilience. Leaders should foster a culture of cross-industry collaboration, staying aware of AI partnerships happening in their industry and remaining open to collaborations that may seem atypical on the surface. Partnerships can expand customer reach, deepening the addressable market within a segment by diversifying AI offerings. Partnerships in adjacent industries can deliver economies of scale through shared AI infrastructure while expanding AI capabilities through larger pooled datasets—think of the drug industry partnering more closely with the grocery/restaurant/fitness industries to mutually build more responsive products and services for people with chronic health conditions, like diabetes, using AI-powered recommendations modeled on activity/purchasing behavior from cross-industry data-sharing agreements.

Leaders should work to foster partnership-building activities. This could include providing teams with appropriate resources for partnership initiatives, establishing a clear framework for assessing partnership opportunities, and supporting external networking opportunities to strengthen relationships across sectors.

Provenance: Ensure reliable data governance and integrity

Where will your data come from? How will it be verified, managed, and maintained? The integrity and provenance of data is a paramount concern when developing AI-enabled products and services. The accuracy, reliability, and completeness of data directly influence the performance and ethical implications of AI algorithms. Inaccurate or biased data can lead to flawed predictions and decisions, potentially causing harm to individuals or perpetuating social inequalities.

Many people share concerns about regulating AI usage, with over two-thirds of respondents in a recent survey expressing that AI models should be required to be trained on data that has been fact-checked. Implementing robust data governance practices, including data validation, cleansing, and security measures, is essential to safeguard data integrity throughout its lifecycle. Additionally, organizations must be transparent to customers about their data collection methods and data usage to address concerns related to privacy and data misuse.

TikTok, facing renewed privacy and national security scrutiny of its social media products, recently launched a Transparency and Accountability Center at its Los Angeles headquarters that provides visitors with a behind-the-scenes look at the company’s algorithms and content-moderation practices. While it remains unclear how effective this approach will be at addressing major misinformation and privacy issues, the company is pioneering new approaches that could be a model for others in the industry, such as providing outside experts with access to its source code, allowing external audits, and providing learning content to make its opaque AI processes more explainable to journalists and users.

Product: Innovate your product and service experience responsibly 

Research shows that building an agile culture of innovation is critical to fostering organizational resilience. Engaging employees at all levels of an organization in AI-focused innovation initiatives ensures that solutions address diverse needs and unlock opportunities that may be outside the field of view of siloed AI groups or leadership teams. However, hastily AI-ifying all your products to stay relevant without integrating an ethics and integrity lens could cause unintended harm or breed mistrust with customers/users. Engaging in a comprehensive user research process to better understand the needs of users, risks and opportunities, and the impacts of AI on outcomes can help shape more responsibly designed products.

Our own recent work with Fast Company explored a set of design principles for integrating generative AI into digital products. It showcased examples of progressive disclosure affordances when interacting with AI chatbots, as well as what a standardized labeling system for AI-generated content could look like to increase transparency with users. Establishing strong AI product design best practices like these is especially important for highly consequential and personally sensitive products and services in sectors like education, healthcare, and financial services. Start with AI applications that are more mainstream than cutting edge. For example, autocompleting text to save customers time during onboarding is a more developed use case than using facial recognition to understand your customer’s emotional state.

Launching successful AI products and features requires the engagement of everyone involved in a product organization, extending beyond the typical research, design, and development functions to include adjacent and supporting functions like legal, communications, and operations to ensure that AI products are delivering on their promises. Leaders should establish QA best practices for testing and vetting products for their ethical and social impacts before public release. The Center for Humane Technology’s Design Guide is a great place to start thinking about evaluating the impact an AI product has on users.

Prosperity: Share productivity gains across stakeholder groups 

As AI provides businesses with tangible gains, there is an opportunity to share this newfound prosperity across stakeholder groups. According to political theorist David Moscrop, “With automation, the plutocrats get the increased efficiency and returns of new machinery and processes; the rest get stagnant wages, increasingly precarious work, and cultural kipple. This brave new world is at once new and yet the same as it ever was. Accordingly, it remains as true as ever that the project of extending liberty to the many through the transformation of work is only incidentally about changing the tools we use; it remains a struggle to change the relations of production.” Leaders are tasked with rebalancing stakeholder relationships to mitigate backlash from potentially rapid and negative impacts to workers’ livelihoods across industries.

AI’s material impact on jobs will likely be felt the hardest by lower-wage workers, who will have the least say over how AI is integrated into (and eventually replaces) their jobs. The Partnership on AI’s Guidelines for AI and Shared Prosperity provide a great starting point for leaders: key signals and risks of job displacement to identify, a job impact assessment tool, and stakeholder-specific guidelines for rebalancing the impacts of AI on the workforce.

AI enterprises have a voracious appetite for data, which is often extracted from a variety of sources for free—directly from users through opaque data usage agreements, or pirated from artists and other creators to train AI models. Such data relationships need to be rethought as they increasingly become economic relationships, concentrating wealth and power in the hands of the extracting organizations. As the recent strike by the SAG-AFTRA union demonstrated, business leaders need to consider whom they are sourcing algorithm-feeding data from and revisit the underlying contracts and agreements so that contributors to AI systems are remunerated in an equitable manner. One interesting proposal involves the development of data cooperatives that can act as fiduciary intermediaries for brokering shared value between data producers and consumers.

While enormous value can be unlocked from AI—an estimated $4.4 trillion—should all of that go into the pockets of executives and shareholders? In addition to fairly compensating algorithm-training creatives and data-supplying users, leaders should also consider how they might pass along AI-generated value to their end customers. This could be directly financial—like a drug company lowering prices after integrating AI into its development pipeline—or it could be value added by expanding service offerings—such as a bank offering 24/7 service hours through AI-supported customer touchpoints.

Taking it one step at a time

The technology of AI may shift rapidly, but we are really just at the beginning of a much larger transition in our economy. The decisions that leaders and organizations make today will have long-tail consequences that we can’t clearly see now. The gains from AI may be quite lucrative to those who implement well, and the race to dominate in a winner-take-all economy is real. But racing to modernize your business operations and build AI-ified products and services without understanding the broad impacts of the technology is a bit like a 19th-century factory owner dumping toxic waste into the air in a hasty effort to leverage the latest production-line technology of the industrial era. We are all now paying the consequences of those leaders’ decisions.

Be considerate of the foundational AI choices and behaviors that will impact the long-term resilience of your organization and shape equity in our future society. Hopefully these five Ps can help you confront the AI hyperobject in a holistic manner by getting back to basics. Take it slow at first, so you can move steadily later. Gather your troops and work to deeply understand your people/customers/partners, then thoughtfully make your move forward.



All images used in this article were generated using AI.

Thanks to Neeti Sanyal, Holger Kuehnle, and Matthew Jordan for your contributions to this thinking.

A vector illustration depicting a person venturing towards a Web3 landscape

Developing Skills and Earning a Livelihood

Whether it’s NFTs, Web3, or AI, the rapid evolution of technology can offer opportunities for users of all ages, but young people – who spend so much of their time online – have a unique relationship with these emerging tools. And, despite what many think, adolescents are already using these emerging technologies to improve their well-being at a time when the mere existence and lived experiences of BIPOC and LGBTQ+ youth, especially, are under attack.

Take 13-year-old digital artist Laya Mathikshara from Chennai, India, for example. In May 2021, as a neophyte in the world of digital art, she sold her first NFT.

Her animated artwork titled What if, Moon had life? depicted the Moon’s active core gurgling. Inspired by the distance between the Earth and the Moon (384,400 km), Laya listed the reserve price as 0.384400 ETH (Ethereum) on Foundation, a platform that enables creators to monetize their work using blockchain technology. It caught the eye of Melvin, co-founder of the NFT Malayali Community, who placed a bid and collected her first artwork for 0.39 ETH ($1,572 at the time).

After the sale, and the success of subsequent NFTs, Laya – now 15 years old – decided to make digital art her career. With Web3, a collector of her art introduced her to other artists, whom she felt inspired to support through Ethereum donations. It “feels amazing to help people and contribute. The feeling is awesome,” she says.

“I started with nothing to be honest,” says Laya, “with zero knowledge about digital art itself. So I learned digital art [in parallel to] NFTs because I had been into traditional art [when I was younger].”

Supporting Key Developmental Assets to Wellbeing

Knowing that young people spend much of their unstructured time online, that digital wellness is a distinct concern for Gen Z, and that the technology landscape is rapidly changing, Artefact partnered with Hopelab to conduct an exploratory study to understand young people’s experiences with emerging technology platforms – ones largely enabled by Web3 technologies like blockchain, smart contracts, and DAOs (decentralized autonomous organizations). The organizations were particularly interested in how these technologies might contribute to a wide spectrum of developmental assets to improve the well-being of young people.

Our study found that Web3 can support youth wellness because it is built on values such as ownership, validation, and community that link to developmental assets like agency and belonging. These values are fundamentally different from those of Web2, a technology operating on business models that monetize our attention and personal data.

Awareness and usage of Web3 technologies are already high among Gen Zers, with 55% in the U.S. claiming to understand the concept of Web3. Twenty-nine percent have owned or traded a cryptocurrency and 22% have owned or traded an NFT. Importantly, Gen Z believes these technologies are more than a fad: 62% are confident that DAOs will improve how companies are run in the future.

Having grown up with multiple compounding stressors including climate change, a global pandemic, and political unrest, some Gen Zers find appeal in Web3’s potential to create what you want, own what you make, support yourself, and change the world.

With Web3, young people are experimenting with their interests and identities, creating art and music, accumulating wealth, consuming and sharing opinions, forming communities, and supporting causes that deeply resonate with them.

Victor Langlois and LATASHÁ, visual and musical artists, respectively, each represent the diversity that is important to our organizations, and both have made real income at a young age through NFTs. Likewise, World of Women, a community of creators and collectors, believes representation and inclusion should be built into the foundation of Web3, while UkraineDAO seeks to raise money to support Ukraine.

Aligning with Gen Z Values

The gateway to Web3 for youth has commonly been through media hype, celebrity fanfare, and video games. Youth we spoke to were all skeptical, at least at first. Laya says, “I thought it was some cyber magical money or something. It just didn’t feel real.” After learning how to use the technology to create assets themselves and even make money via NFTs without a bank account, they began to invest more time experimenting with the tech and consuming content.

These experiences are not without challenges, of course. Young people in our study shared that they need to spend a lot of time learning about the ever-evolving space and building connections to stay relevant. The financial ups and downs are more extreme than the stock market, along with the potential for major losses at the hands of scammers or platform vulnerabilities. Like Web2, there is pressure to be endlessly plugged into the constant news, with social capital to be gained by being consistently online. Some of society’s broader social issues also permeate Web3 spaces: racist NFTs and communities abound.

Despite these challenges, there is genuine excitement for a new internet built on Gen Z’s core values. Several youth shared how DAOs are flipping organizational norms, where hierarchy and experience no longer determine whether your idea takes hold. Web3 technologies are giving youth an opportunity to start careers that weren’t previously viable, find new audiences and fanbases, create financial independence, detach from untrustworthy platforms, and find and contribute to caring communities – all while building their creative, socioemotional, and critical thinking skills online.

These experiences are helping Gen Z feel a strong sense of belonging as they find communities and causes they care about. In the words of one of our interviewees, Web3 offers a “new and shiny” way to “do good in the world.” The experiences are more accessible – and specific to them – and the decentralized nature of Web3 means that creators and the public, not big tech or its algorithms, get to determine what is current and relevant. This is especially important for creators from groups that have been excluded from power because of their race, ethnicity, gender, or orientation. One participant shared how empowering it was to no longer be at the whim of social media platforms that may make design changes that erase your content, user base, or searchability overnight.

Like any other technology, Web3 and its components can have positive and negative impacts, but its fundamental tenets mean that we will likely see promising innovations and experiences that can support young people to find agency and belonging.

“We are all decentralized for the most part,” says Laya. “And the fun fact is, I have not met many of my Indian friends…I haven’t met folks in the U.S. or any other countries for that matter… you don’t even have to connect to a person in real life, but you still feel connected.”


Neeti Sanyal is VP at Artefact, a design firm that works in the areas of health care, education, and technology.

Jaspal Sandhu is Executive Vice President at Hopelab, a social innovation lab and impact investor at the intersection of tech and youth mental health.

Collage of agriculture workers working in the field

Climate change has direct impacts on human health, but those impacts vary widely by location. Local health impacts depend on a large number of factors, including specific regional climate impacts, demographics and human vulnerabilities, existing local adaptation capacity and resources, and cultural context. Therefore, organizations will need to tailor mitigation and adaptation strategies to the regional risks and contexts of different communities.

Participants at the 2023 Global Digital Development Forum called for moving away from entrenched approaches that tend to look to top-down solutions to drive change. Instead, they suggested more holistic, interdisciplinary, collaborative, and inclusive engagements that account for on-the-ground contexts and people-centered approaches. Participatory methodologies are well suited to bringing local voices into conversations, decision-making, and equitable engagement.

Headshots of the two featured speakers: Ezgi Canpolat, PhD and Kinari Webb, MD

In this panel discussion, Artefact’s Whitney Easton sits down with Health in Harmony’s Kinari Webb, MD and the World Bank’s Ezgi Canpolat, PhD to share the work they are doing to foreground the social dimensions of climate change and support planetary health. Through concrete examples, we will explore what is most difficult and most promising about working deeply and collaboratively with local partners and communities to craft a more resilient future for us all.

Topics include:

  • What does it mean in practice to put people at the center of climate and health action?
  • What’s most missing from existing approaches that attempt to reduce the health impacts of climate change, and what’s most promising on the horizon?
  • What can the COVID-19 pandemic teach us about how to work toward planetary health?
  • How might we better engage with cultural contexts and local realities as we design initiatives, particularly when it comes to ensuring impact and minimizing unintended consequences?
  • How can the predictive power enabled by Big Data and technology be balanced with local, real-life contexts to ensure that local stakeholders and citizens truly benefit?


Ernest Cline, in his Ready Player One (and Two) books, paints a picture of a world where everyone avoids the problems of physical reality—a global energy crisis, environmental degradation, extreme socioeconomic inequality—by taking “an escape hatch into a better reality.” That escape hatch is the OASIS, the fully fledged metaverse in virtual reality, “where anything was possible.”

While news cycles have shifted away from the metaverse in recent months (thanks, ChatGPT!), big tech and startups have been working diligently behind the scenes, investing billions of dollars in creating alternative realities, with goals of bringing about their own concepts of the metaverse. Apple is heavily rumored to release its mixed reality headset at WWDC in June, one that will likely challenge the criticisms of Meta’s approach and massively advance the metaverse’s arrival, even if under a different label, like spatial computing. With different visions of the metaverse rising in tandem, we must examine the tools we have access to and the foundation on which we can build in order to ensure this “better reality” is truly better.

Re-define viable

Those in new product development usually think about whether an idea is technologically feasible, desirable to users, and viable in the marketplace.

Meta, Google, Microsoft, Apple, and many other players, like Magic Leap, have been working on Augmented Reality (AR) (e.g., Ray-Ban Stories, Google Glass, Project Starline, iOS), Mixed Reality (MR) (e.g., HoloLens 2, Magic Leap 2, the rumored Apple headset), and Virtual Reality (VR) (e.g., Meta Quest 2, Quest Pro) projects for decades. Advancements in headsets and computing suggest the technology is mature enough to support these reality-blending experiences for the mass market. Feasibility: check.

The lukewarm success of remote work has catalyzed a clear problem space for the metaverse to tackle, and our increasing entanglement with contextual online avatars (e.g., vTubers, “finsta” Instagram profiles, and even the everyday Apple Memoji), along with the success of MMOs (massively multiplayer online games, e.g., Fortnite and Roblox), signals a readiness for consumers to develop a personal connection to an online self and peers, making the metaverse desirable from a consumer standpoint. Desirability: check.

Viability, however, is complicated. Today’s virtual business model relies heavily on advertising. Users experience it as free, but the model has far-reaching direct and indirect consequences. It has helped push massive amounts of rare materials into landfills. It has escalated extreme polarization in our political systems and degraded the community relationships needed to sustain a democracy. It has sowed distrust of institutions through the proliferation of misinformation. It has fostered screen addiction and increased social isolation, weakening interpersonal connections. We’ve learned we can create a viable world — one that is economically profitable for a select few — but we haven’t learned to create a world viable enough to sustain our environment, political systems, societal values, meaningful relationships, and individual agency while remaining economically profitable. Viability is there, but it’s complicated. We need to treat it with special care.

If we don’t act now, the metaverse will be built on the same foundation as today’s paradigm, whose consequences we are already living with.

Here’s a five-step framework you can use to work toward a better metaverse sooner rather than later.

As product owners, designers, and technologists, now is the time to ask ourselves, “What if the metaverse succeeds?” As you build the metaverse from your own perspective and strategy, how can you avoid contributing to a world that has the need for an “escape hatch from reality” and create a world we might actually want to spend some time in?

The five steps


Step one

Each company working in mixed reality will define the metaverse differently. Apple will undoubtedly have a different approach from Meta, from Epic, and so on. As it is still emerging, the metaverse currently has no one definition. As such, teams must align on a shared definition to ensure they are working toward the same product vision and forecast the resulting unintended consequences of that strategy.

This framework uses my emerging definition of the metaverse: “a shadow layer creating a seamless experience across shared realities (AR, VR, and physical reality).” This shadow layer would be made up of data and information presented seamlessly and contextually across different realities. There are endless ways the mixing of these realities could be imagined. For example, on an individual scale, imagine a friend in the virtual world who could seamlessly accompany you to dinner in physical reality through an augmented reality experience. Your definition of the metaverse will work for this process too, but it must be clearly defined.


Step two

When defining a better metaverse, you must think critically about the underlying model it would be built on. Consider how your proposal for the metaverse will exist on a spectrum of disruption. Will it augment the existing status quo of today’s centralized model (built on a small number of operating systems and a shrinking number of server providers) or push to reimagine an alternative future, such as a decentralized (e.g., Web3) or distributed model (e.g., the early internet)? Depending on the model, the landscape and potential futures you’re working toward (and their unintended consequences) will be considerably different.


Step three

By looking at as many aspects and stakeholders as possible of the systems the metaverse is part of, you can consider how to shape a better future. Beyond the software needed to make the metaverse real, you need to step back and consider how it will impact multiple levels of the systems it will operate in. One model (based on, but different from, Pace Layering) imagines what the potential effects might be at differing scales (individual, relational, group, societal, environmental, etc.).

After considering the rings of impact, create a matrix by industry (education, healthcare, social impact, etc.) to capture relevant areas the intervention might impact. It’s important to push beyond the obvious stakeholders and industries you might initially consider. Try pushing to include a specific population or industry that isn’t already a part of your normal processes or strategy. These are where we can often see threads of potential unintended consequences emerge.


Step four

While all of the topics and stakeholders captured in the matrix will be affected in some ways, not all will be to the same degree. It may depend on how any given future plays out. Utilize a series of 2×2 future matrices inspired by Alun Rhydderch’s 2×2 Matrix Technique to push hypothetical scenarios that are both idealistic (utopic) and problematic (dystopic), to varying degrees of intensity (mass adoption vs. passing fad). While the future will likely end up somewhere in the middle, considering extremes allows us to hypothesize what the preferable future is and capture potential blindspots where unintended consequences could happen along the way.
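Mechanically, the technique crosses two axes of uncertainty to yield four scenario quadrants to reason through. Here is a trivial sketch; the axes are illustrative placeholders, not a prescribed set:

```python
from itertools import product

# Two illustrative axes of uncertainty for a metaverse strategy.
outcomes = ["utopic", "dystopic"]               # idealistic vs. problematic
intensities = ["mass adoption", "passing fad"]  # degree of uptake

# Cross the axes into four hypothetical futures to stress-test against.
for outcome, intensity in product(outcomes, intensities):
    print(f"Scenario: a {outcome} metaverse under {intensity}")
```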


Step five

Now reflect on the learnings from the process. Ask yourself several questions. How does your definition and strategy for the metaverse affect society, industries, and individuals? How does your strategy for the metaverse, played out through potential future scenarios, affect systems at different scales? How might you change your definition, vision, and/or strategy to build toward a better metaverse?


Repeat to keep the stars aligned as the future unfolds.

These five steps, from definition to imagining to reflection, must be an ongoing activity, revisited and repeated on a regular basis. The future is uncertain and the world will change around us. Advances in generative AI could dramatically change the technological landscape. Changes in political winds could change the governance landscape. A major climate or global health crisis, as we saw with COVID-19, could change societal priorities. Doing these activities proactively in the initial design can help reduce the number of reactive outcomes we will have to chase down once the product is released.

Whatever the metaverse becomes, hopefully we can all help make sure it aligns with a preferable future that favors all realities: a future world we want to run towards, rather than escape from.

Thank you to Carolyn Yip for helping bring these figures to life through animation, and to Matthew Jordan, Hannah Grace Martin, Neeti Sanyal, Yuna Shin, and Holger Kuehnle for editing and thoughts along the way.

Hannah Grace Martin

In recent months, we’ve seen the rise of independent social media marketed toward authenticity: first BeReal, and now others like Gas have cropped up. When we speak with Gen Z consumers, authenticity feels like a buzzword—it comes up again and again as a guidepost for ideal experiences—yet they have difficulty defining it. Instead, it feels like a reaction to the inauthenticity they see on Instagram and, to a lesser extent, TikTok, which they blame for a lack of social connection in spaces that should foster connection. While BeReal’s features limit the ability to curate posts, the core of its UX is the same as larger social media platforms’, which limits the social connection that underpins authenticity. To design for authenticity, platforms must adopt a UX that allows users to adapt and evolve their identities over time.

Putting on an “act” in social spaces isn’t unique to social media. In 1959, Erving Goffman published The Presentation of Self in Everyday Life, where he contends that real-life social situations cause participants to be actors on a stage, with each implicitly knowing their role. The character one plays depends on a variety of contextual factors: who is present, the “props” and “set” (visual cues) among others. As such, each performance is different. His theory explains why one might feel awkward when two different social groups are in the same room: the actor doesn’t know which role they are supposed to play.

In online spaces, the feed is our perma-stage. Facebook’s News Feed was designed to deliver updates on friends the same way we receive updates on local and national news. It seems inevitable that this product vision would produce performances, and highly curated ones at that. Its one-to-many nature limits standard interaction; instead of an actor-actor dynamic, we see a creator-commentor-lurker hierarchy. And because creators design their posts to cater to the masses, they are not moving from stage to stage; instead, one’s online persona feels static. Here, the light of inauthenticity shines through, as we are no longer playing together, but watching others perform.

In Goffman’s model, actors retreat “back-stage” when they are alone or with close others — this is the place where they can let their hair down and be free from keeping up impressions. While the dominance of social media’s feed might make the Internet seem like an unlikely place for back-stage settings, we find that almost every social media platform has a direct message function. In contrast to the one-to-many, post-centric UX of the feed, these back-stage spaces are one-to-one or one-to-few, interaction-heavy spaces that have come to be the most fulfilling part of the social media experience for users. Instead of solo “lurking” that can lead to comparison and loneliness, users who are active in back channels find engagement, connection, and reprieve to be themselves, or at least the character that requires the smallest margin of performance with this particular friend or group, since they have created their “show” together.

But it’s the feed that dominates the social media experience. It permeates moments that would have traditionally been back-stage settings (for example, alone in one’s home), and so we find ourselves wanting authenticity, or a back-stage feeling, here. And so, trends like posting crying selfies have surfaced, which feel close to a cut and paste: back-stage content onto the front-stage. While a post like this could make a user feel understood or less alone momentarily, the infrastructure on social media doesn’t enable the interaction needed to produce real support, and can continue to feel designed for likes. Between glamour shots and crying selfies sits BeReal, where users post more of the “everyday” of their everyday life. Still, BeReal has been criticized for either being boring, still performative, or even exclusive in a more intimate way. A feed can’t support true connection, the table-stakes of enduring authenticity.

Outside of these two paradigms, we see a third type of space emerging. Platforms like Discord took hold during the pandemic as a more casual place to “hang out” virtually. Building on a chat-based UX, Discord enables users to find others with similar interests and move between smaller and larger channels as well as text- and voice-based communication. Further, Discord is the hub for creative tools like Midjourney, an AI image generator that can only be accessed through Discord using bot commands. Similarly, Fortnite builds conversation through shared experience and play, in so doing re-leveling the audience-observer dynamic and putting engagement over performance. Extending Goffman’s metaphor, we might compare the social atmosphere created on Discord and Fortnite to a writer’s room, where users engage and create together.
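For readers who haven’t encountered the bot-command pattern, it means typing slash commands into a Discord channel. At the time of writing, generating an image with Midjourney looks roughly like this (parameters vary by version; the prompt is our own):

```
/imagine prompt: a writer's room where strangers co-create a story --ar 16:9
```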

A more agile space like Discord reflects the “Presentation of Self” as charted by Gen Z. This generation sees the self as a canvas for experimentation, where identity is fluid. Through creative tools and less definite spaces, creativity and play extend to the making of self on a journey of self-discovery. Users can create and try on characters much like a comedian might test new material on a Tuesday night to see if it resonates, well before it becomes an enduring part of the Saturday night act.

To enable more dynamic interactions, we will need to move away from a cut-and-paste UX approach to a ground-up infrastructure that is designed for fluidity. Taking pointers from the “writer’s room,” two principles can guide us. First, collaboration. Similar to “yes, and,” creators in authentic spaces create in tandem rather than in a creator-consumer dynamic. The UX of authentic spaces must lean toward chat over post, which fosters interaction and relationships that ensure it’s safe to try a new presentation of self. Second, authentic social media needs impermanence. Though a feed may refresh over time, we know that posts on Instagram will be connected to our profile for years to come. If a post is instead lost in a Discord feed, we may feel more freedom to experiment and “get it wrong.” Combining collaboration and impermanence, we might just set the stage to permit the collection of characters we all play, so that we can all feel a bit more dynamic, and perhaps even authentic, in digital spaces.

Exploring how AI lives in the past and dreams of the future


As I drift online, I’m becoming more aware of AI’s presence. When I browse the web, design a prototype, or debate what to cook for dinner, it’s becoming less clear what my next move should be. Should I invite AI into my thinking process?

I find myself in a storm of cool AI products with murky ethics and big promises for a more personalized experience. When the thunder roars, we don’t have the option to hide indoors. How do we coexist with this AI hype in our work and personal lives?

AI changes how we create

Over the decades, digital technology has pushed us to reconsider our processes and collective values. AI-powered features are rolling out into everyday consumer products like Spotify, Notion, and Bing at lightning speed, striking us with delight, intrigue, and fear. Finally, we have tools that can shower our thoughts with attention deceivingly well. Ask and you shall receive a dynamic and thoughtful response as audio, code, image, text, or video output.

The leap from spell check to ChatGPT’s ability to rewrite paragraphs in “Shakespearean dialect” lands us with new questions of what deserves our attention and praise. Should we devalue an article written with the help of Notion AI? Is artwork generated by Lensa AI less precious than a hand-drawn painting by a local artist?

AI is making us rethink our values in similar ways as the anti-art movement

In the early 1900s, the anti-art movement was led by artists who purposely rejected prior definitions of what art is, provoking a shift in what we value in the art world. In 1917, the French artist Marcel Duchamp submitted a store-bought urinal signed with the pseudonym “R. Mutt” to a gallery show. The submission was rejected and caused an uproar, but it expanded and confronted our imagination of what is considered art.

This created opportunities for new forms of art that go beyond the institutional vantage point of the artist. Rather than focusing on the craft and sublimity of a physical artwork, anti-art paved the way for contemporary art that values the ideas and concepts being explored by the artist in dynamic ways like performance, video, sculpture, and installations.

Generative AI is becoming the Marcel Duchamp of our 21st century. Similar to the anti-art movement, AI invites us to reject conventional tools, processes, and products. It allures us by freeing us from being alone with our thoughts and concisely telling us what to imagine. The invitation of an AI companion into our classroom, office, or home allows us to speed up, cut in half, or eliminate our thinking process. This challenges our sense of self and our place in the world.

AI intensifies the blurring of the line between what is human and what is artificial

As a result of AI changing how we create, what we’re creating is also changing. The AI hype is taking hold in digital spaces where user privacy and autonomy are dwindling. For example, Twitter and Meta launched paid product versions that grant additional verification and visibility features. This increases the chances of misinformation, fake profiles, trolls, and bots. With AI intensifying the blurring of what is human and what is artificial, the need for authentication and transparency continues to grow.

Vogue covered the fascination behind the viral hyper-real “big red boots” by MSCHF that resemble the pair Astro Boy wears in the anime series. These impractical, playful boots blur the line between the real and the unreal in similar ways as AI does. They play into the double take we do while listening to the AI-powered DJ on Spotify or scrolling past the viral AI-generated image of Pope Francis in a white puffer. The uncanny quality of the big red boots forces us to consider how digital aestheticization distorts details, realism, and quality. A stark contrast is made between what exists in the real world and what is trying to fit in. The boots make it obvious which qualities of the imperfect physical world can’t be digitally copied over.

Our presence gives value to AI outputs in a variety of ways

The shift of creativity in the age of AI also means world-building and dreaming with tools that are neither independent nor neutral. In an article by CityLab, the architectural designer Tim Fu describes the AI art generator Midjourney as an advanced tool that can aid the creative process but “still requires the control and the artistry of the person using it.” The rapidly generated images help with the earliest stages of a project, but they lack detail, and architects spot gaps in the generator’s understanding of non-Western architecture.

In a recent NYT guest essay, Noam Chomsky describes how ChatGPT “either overgenerates (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerates (exhibiting noncommitment to any decisions and indifference to consequences).” Rather than a bot takeover, our responsibilities will expand in new ways as designers, programmers, educators, students, and casual users. We must build a new kind of digital literacy to navigate this tension between user and AI: knowing what to ask, how to push back, and when to accept an outcome.

By making these digital experiences with AI more collaborative, we can collectively anticipate blind spots. LinkedIn recently introduced a feature called “collaborative articles” that starts with an article pre-written by AI. Experts on the platform, identified by LinkedIn’s internal evaluation of relevant skills, are invited to add context and information. It uses AI as a jumping-off point for discussion that emulates the back-and-forth of comment sections. This is one approach to human intervention that creates space for our skepticism and voice to sit at the core of any AI output.

Only with our skepticism and presence can we prevent the distortion of our ideas. This puts necessary pressure on the in-between moments that shape who we are: the moments when we are alone with our thoughts, without the distraction of technology.

You don’t need AI to dream big

AI lives in the past and dreams of the future. Rather than engaging in the present moment, AI takes any context and uses training data to predict what comes next in the sequence. Instead of sifting through the excess of information on the Web ourselves, we get information rearranged by large language models that leave no clear trace of how they arrived at an answer. ChatGPT creates a foggy interpolation of the Web.

Digital technology distorts our understanding of linear time by repackaging the past as a future possibility. Our senses are grounded in the real world and in the present, where we truly exist beyond data points. If we treat AI as the end-all-be-all for creativity, learning, productivity, and innovation, won’t we lose our sense of self and what we stand for? Generative AI exists for your text input; it lives to anticipate but doesn’t live.

Illustration: an abstract composition of green, yellow, blue, and purple shapes, with two human-like forms merging at the center.

The above vignette shows “cultural humility” in action. This approach fosters cultural understanding through respect, empathy, and critical self-reflection to build partnerships between providers and the diverse individuals they serve. Cultural humility has become a hallmark pathway for realizing health care that responds to the needs of diverse patient populations and reduces the extreme health disparities they often face. 

Cultural humility is needed now more than ever. If current trends continue, immigrants and their descendants will account for around 88% of U.S. population growth through 2065. Alongside this, diversity will also grow within the healthcare professions. But the current care model in the U.S. rests on a culture of biomedicine that is largely inhospitable to diverse health-related beliefs and practices. Instead, we call for ways to work with our increasingly pluralistic society to uplift the benefits of biomedicine while embracing diverse perspectives on health and healing.

Centering lived experiences in healthcare

Within any cultural or identity group, each person’s lived experience is intricate and varied, and what is necessary to live a healthy and fulfilling life is equally individual. To recognize diverse needs in health care, medical training and practice have come to focus on “cultural competence”: “a set of congruent behaviors, knowledge, attitudes, and policies that come together in a system, organization, or among professionals that enables effective work in cross-cultural situations.” But even with cultural competence, lived experience is often overlooked, leading providers to make assumptions about a specific patient based on learned facts about the broader racial/ethnic groups to which they may belong. Care decisions based on such generalizations can result in recommendations inappropriate for a patient’s unique circumstances.

On the other hand, “cultural humility” is a much stronger foundation for realizing culturally responsive care that honors each patient’s lived experience. It is grounded in rigorous self-reflection and a willingness to listen to, learn about, and adapt to patients’ diverse cultural values and practices. Crucially, exercising cultural humility reduces unconscious bias and stereotyping toward diverse patient populations across many identity factors, from cultural background, race, and age to socioeconomic status, religion, and gender identity. Bias has been shown to harm patient care, contributing to poor patient-provider communication, low patient satisfaction, and mistrust of the healthcare system. A culturally humble approach achieves the nuanced understanding of patients’ lived experiences and unique backgrounds necessary to truly embrace cultural differences and work toward dismantling the structural vulnerabilities that produce unequal health outcomes.

Practicing cultural humility during moments of care

We see an opportunity to intervene at the most intimate level of care during face-to-face interactions between patients and providers, making cultural dimensions more accessible and the hidden barriers to care faced by multicultural communities more visible. 

Isolated tools exist that make inroads toward giving clinicians what they need to realize culturally appropriate care. These tools fall into three focus areas:

  1. Improving communication between patients and providers 
    The Eight Questions and the Cultural Formulation Interview can be used in the clinic to elicit patients’ understandings of their illnesses. And the Vital Talk app trains providers to communicate with patients about sensitive topics, which could be especially relevant for providers who did not have “narrative medicine” as part of their training, though cultural dimensions of care are still not a focus of the app. Moreover, these tools leave providers without guidance on implementing them in practice or pragmatic support for their uptake in clinical settings within the time and logistical constraints of appointments.

  2. Equipping providers with cultural information
    Existing provider-focused databases like Ethnomed and CultureVision can help contextualize culturally specific beliefs about health and illness that might surface during a visit while suggesting pointers for culturally appropriate care. But accessing these tools during a visit may take up valuable time and detract from the provider’s ability to listen and respond to the patient’s needs. Framing information at the level of cultural groups may also be problematic, stripping away nuanced context around each patient’s needs and preferences. Lastly, these tools provide a fixed set of information that neither changes based on community member input nor adapts to individual patients. They do not allow cultural tailoring to happen in real time during patient-provider interactions, such as through in-the-moment personalized recommendations based on information elicited from the patient during clinical visits.

  3. Engaging patients in after-care and ensuring data transparency
    Lastly, some tools provide patients with notes, information, and resources following their appointments. OurNotes is a platform that makes care notes accessible to patients, allowing them to engage with their providers during after-care and raise concerns before their next visit. It encourages providers to voice-record reflections, which helps them relay insights about patients to other team members while also developing their self-awareness skills. OurNotes also works to mitigate power imbalances by making any data collected during a visit transparent. While a promising development, however, it does not aim to improve interactions during moments of care.

While they have their merits, all these solutions are only piecemeal, standalone tools that imperfectly address a sliver of the patient and provider experience.

We believe a better approach is one that makes valuable resources less cumbersome for providers to access in real time, less disruptive to critical face time with patients, and genuinely representative of cultural and individual diversity. This approach includes digital tools and experiences that enhance provider capacity and support more flexible, adaptive patient care. Recognizing that digital products tend to be one-off solutions to complex problems, we see an opportunity to capitalize on their ability to integrate seamlessly with current workflows and software, automate repetitive tasks while offering guidance on more complex ones, and tailor interactions to individual needs and preferences. At their core, aspirational digital products would enable the practice of cultural humility during patient-provider interactions through experiences built on its foundational components: fostering cultural understanding through respect, empathy, and critical self-reflection.

We see an opportunity for the development of digital products that afford culturally responsive experiences and focus on the following elements: 


Culturally responsive patient-centered care
Patient-centered care focusing on culture involves treating patients holistically and respecting their unique health needs and desired health outcomes as the driving force behind their healthcare decisions. Digital products prioritizing patient-centered care consider patients’ needs, preferences, and values in the context of their lived experiences. They facilitate communication between healthcare providers and patients, allowing patients to share their concerns and providers to respond accordingly. This enables patients to engage in and adapt their care plans and to collaborate with providers on more informed decisions. A key but sometimes neglected facet of genuinely patient-centered care involves understanding and appropriately responding to patients’ cultural and individual identity contexts.


Empathy and active listening
Digital products should encourage healthcare providers to engage in more empathetic practices towards their patients, actively listening to them, understanding their perspectives, and validating their emotions and experiences. Providers need tools to help them prepare for cross-cultural patient interactions to elicit relevant information during clinical encounters and respond compassionately. These products would afford a more culturally appropriate and inclusive care experience by prepping the provider with language that respects the patient’s preferences (e.g., preferred name and pronouns) and is non-judgmental.


Respectful and collaborative decision-making
Respectful and collaborative decision-making elevates patient agency to allow for mutual understanding and agreement between patients and providers. Digital products can support patient agency through tools that give patients control over their healthcare decisions, enable them to own and tailor their personal data, help them deeply understand vital medical details about their diagnosis and treatment (details often missed during care visits), and equip them with the information necessary to communicate and collaborate more effectively with providers on their care plans.


Continuous learning and self-reflection
Learning about the many cultural and identity backgrounds that exist is a complex and seemingly infinite task, so providers need tools to continuously listen to and learn from the specific, diverse patient communities they serve. While speaking directly with patients and their families is critical to learning, digital products can provide automated tools that coach providers through moments of cultural misunderstanding and prompt reflection on biases, assumptions, and beliefs about other cultures, traditional practices, and worldviews. These tools should integrate seamlessly into existing provider workflows, making it easier to engage in learning during and beyond direct patient interactions.

Closing Thoughts

We believe many benefits will flow from adopting a culturally humble approach to healthcare delivery, especially by implementing appropriate digital technologies to enhance moments of care:

  • Patients can more easily find care aligned with their needs and identities, making them feel welcome in the healthcare system

  • Patients will approach care with greater trust as fear and drop-off due to unexpected clinical activities, tense interactions, and conflicting treatment expectations are reduced

  • Patients will engage more in their healthcare as they feel a greater sense of connection and belonging with their provider and healthcare system

  • Quality of care will improve as providers gain an understanding of diverse patient lifeworlds and are prompted to self-reflect on their own beliefs and practices, ultimately approaching all patients with more empathy

  • A cycle of learning and improvement will be embedded in the healthcare system as providers become more self-aware and reflective, inspiring these attributes in their trainees

  • Patients will experience better outcomes and health disparities will be reduced as patients are more engaged with and better served by the healthcare system

Actualizing a positive future healthcare experience for our rapidly diversifying population requires building cultural humility into the fabric of healthcare training and practice. Explore one way we envision doing this: Traverse — a vision for culturally responsive healthcare.

Illustration by Marine Au Yeung

Recently, an article in Fast Company announced that corporate America broke up with design, citing that companies that were once green with Apple envy and hungry for transformation are now jaded after realizing that “design is rarely the thing that determines whether something succeeds in the market.” Add in the recent corporate silence on the topic of design, and rumors abound – apparently design and corporate America are in trouble.

But does this really mean the relationship has fizzled out? Or could it instead be a time for reflection, re-evaluation, and evolution? Three of our Design and Strategy Directors respond to these questions and more –

Corporate America didn’t break up with design. It broke up with the mythological promise design firms sold them.

Jeff Turkelson, Strategy Director

‘Design will allow you to disrupt, transform, create and lead industries. Just do some research, run some workshops with sticky notes, prototype, and you’ll be onto something that no one else could dream of!’

These are the false promises that corporate America has broken up with. But there were always dissenters; Don Norman himself said it:

“Design research is great when it comes to improving existing product categories but essentially useless when it comes to new, innovative breakthroughs.”

Flash forward to today, and the hype around being a design-led organization is pretty much dead. But corporate America has embraced design in a more traditional sense—significantly expanding internal design teams—not to think of radical breakthroughs but to create good user experiences that are usable and delightful.

It’s in many ways a reversion to the decades-old paradigm of user-centered design (though often twisted by profit incentives, e.g. designing to maximize engagement or conversion rates rather than truly serve the user).

However, the spirit of human-centered design (HCD) is not lost. It has evolved. While the idea of being a “design-led” organization has lost its allure and most in-house practitioners are focused on traditional craft, design’s value to business was always secondary to the value designers sought to bring to humans. And, perhaps for largely external reasons, many corporations have begun to embrace HCD’s value-based themes: designing for accessibility, inclusion, equity, and more.

Here we see design intersect with the responsible technology movement: designers, technologists, activists, and more, seeking to create positive outcomes or at least mitigate harms. Designers don’t get to claim ownership of this broader movement, but they do play an important role in its evolution.

What goes in comes out: amplify design’s value by doing these four things.

Chad Hall, Design Director

Companies green with “Apple envy” may have invested billions of dollars in design, but few did it well, and most did it in a way vastly different from the design-centric companies they looked up to. Here are four easy-to-overlook things they could do to gain more value from design.

1. Understand the complexity of problem spaces

“Simplicity and complexity need each other” (John Maeda); in other words, there can be no simplicity without complexity. Designers work hard to conceal the complexity that exists in the products, services, strategies, and processes we work on. Understanding these complexities, and allowing the time and space to work through them, is paramount. If designers or companies don’t understand the complexities of what they work on and invest the time and resources to make sense of them, they’ll never be able to distill anything into a ‘magical solution.’

2. Foster seamless interdisciplinary collaboration

Design works best when not done in a vacuum. Too often, I see situations that prevent seamless experiences: a product team separated from key decision makers; a care team that doesn’t have good insight into its patients’ experience; an education board far removed from the students and communities it aims to impact.

Seamless customer experiences are a product of seamless interdisciplinary collaboration. Working alongside an interdisciplinary team with deep understanding of the industries, domains, processes, and organizations at hand, designers become experts not only in crafting forms but in facilitating processes.

3. Align power and incentives with desired outcomes

If companies want transformation, they need to examine their internal power and incentive structures. It’s not enough to have a vision: fragmented teams and inequitable power in decision-making yield products with poor outcomes.

To make seamless experiences, the customer experience must come before internal organizational divisions, product categories, and, in some cases, even earnings reports. The organization and its culture must support this collaboration, allowing, motivating, and empowering employees to make decisions that work toward the shared goal of a seamless customer experience. Instead, we often see internal competition, outcomes tailored to please a HiPPO (Highest Paid Person’s Opinion), or misaligned performance measurements that incentivize personal wins over preferable product outcomes. Most organizations might have the vision, but they don’t align the distribution of power and incentives to get the outcomes they seek.


4. Be curious about the unknown

Many companies have implemented design in a risk-averse way, and expecting transformation without accepting a level of risk leads to disappointment. To curb risk, we’ve turned a designer’s intuition and mastery of skills into a scalable, repeatable process built upon the scientific method, and Design Thinking and Human-Centered Design have proliferated. These are great at identifying existing pain points and undiscovered needs, but they lean toward refining existing solutions through incremental improvements. That has merit and is needed in products today, but it’s not going to rock the boat.

To make large leaps, we must allow imagination and intuition back into the process. Designers, through years of mastery, are primed to make unexpected connections that can lead to new innovations. But this process is nearly impossible to evaluate and scale. It pushes us into an unknown future and asks us to rely somewhat on intuition. In our data-driven world, this is uncomfortable! It’s an inherent risk, but a risk that could lead to a big win.

Design is expanding and evolving — we’re counting on corporate America to do the same

Joan Stoeckle, Design Director

Design is baked into countless experiences we encounter on a daily basis – ordering ahead for curbside pickup, communicating with our healthcare providers through patient portals, and of course in the devices we use for hours each day. We’ve become so accustomed to frictionless, carefully-designed experiences that the occasional encounter with an outdated tool can feel downright grating by contrast. Were corporations truly to break up with design, as customers and consumers we would definitely notice. Perhaps their common understanding of design was too narrow from the start.

Although design was historically associated with the creation of beautiful objects and innovative products, today we also interact with invisible forms of design in the services and systems we use and are a part of. Not only was our home’s smart speaker designed, but so were the AI behind it and the specific phrases used to communicate with it, and we are as much components of that system as the speaker itself. The expansion of design into different contexts and ways of interacting with people and systems certainly represents new and exciting frontiers for innovation, but many designers and organizations are also exploring novel and alternative approaches to the processes and practice of design – not just its outputs.

User-centered design established a baseline of orienting around the needs of end-users. Human-centered design fosters a more holistic view of people as more than just ‘users’ of a product, anchored in understanding motivations, behaviors, values, and more. Elements of each are central to the design thinking process adopted by many companies. But there are concerns that focusing only on the needs of target users results in a myopic view of challenges and opportunities and can lead to unintended consequences (e.g., worker injuries in warehouses struggling to meet consumer demand for rapid shipping). In response, designers and organizations are questioning and reframing the process of design to foster equity and inclusion, design for diverse and complex needs, and create more sustainable futures.

The practice of design is expanding and evolving in response to social, economic, and environmental realities. Will corporations also take informed action by evolving how and with whom they create products, services, and systems? Or will many of them, as the article suggests, walk away from a narrow and outdated notion of design?

At Artefact we continue to evolve our methods in support of our mission to create better futures: taking a more holistic view through stakeholder mapping, establishing best practices for trauma-informed design research, reflecting diversity of needs and mindsets through persona spectra, guiding participatory and co-design processes, reflecting on possible unintended consequences, and more.