“I can’t see it. I can’t touch it. But I know it exists, and I know I’m part of it. I should care about it.”

AI is top of mind for every leader, executive, and board member. It is reshaping how organizations approach their entire business, spanning strategy, communications, recruitment, partnerships, labor relations, and risk management, to name a few. No wonder wrapping your head around AI is such a formidable challenge. Some leaders are forging ahead with integrating AI across their business, but many don’t know where to begin.

Beyond the complexities of its opaque technical construction, AI presents many challenges for both leadership and workers. Deploying AI is not merely a technical solution that increases efficiency and enhances capabilities; it means engaging with a complex “hyperobject” that touches every aspect of an organization, impacting workers, customers, and citizens across the globe. While AI has the potential to augment work in magical ways, it also presents significant obstacles to equitable and trustworthy mass adoption, such as a wide AI skills gap among workers, exploitative labor practices used to train algorithms, and fragile consumer sentiment around privacy.

To confront AI, leaders leveraging it in their business need an expanded view of how the technology will impact their organization, their workforce, and the ecosystems in which they operate. With this vital understanding, organizations can build a tailored approach to developing their workforce, building partnerships, innovating their product experiences, and fostering other resilience behaviors that increase agility in an age of disruption. Research shows that organizations with healthy resilience behaviors, such as knowledge sharing and bottom-up innovation, were less likely than others to go bankrupt following the disruptions of the COVID-19 pandemic.

Hyperobjects and our collective future

The term “hyperobject,” originally coined by professor Timothy Morton, describes something so massively distributed in time and space as to transcend localization—an amorphous constellation of converging forces beyond any one person’s control. Like other large, complex phenomena with the potential to radically transform our world—think climate change or the COVID-19 pandemic—AI is one of the hyperobjects defining our future.

Hyperobjects are difficult to face from a single point of view because their tendrils are so broad and interconnected. Dealing with huge, transformative forces requires a broad perspective and a collective solidarity that considers impacts beyond the interests of any single organization. The task of organizational leaders in an age of disruption by hyperobjects—like AI and climate change—is to rebalance the economic and social relationships among a broad group of stakeholders (management, shareholders, workers, customers, regulators, contractors, etc.) impacted by rapid change and displacement.

To help leaders form a comprehensive approach to cultivating resilience, innovation, and equity in the age of AI, we developed a simple framework of five priorities—or the 5 Ps—for building an AI-ready organization: People, Partnerships, Provenance, Product, and Prosperity.

People: Communicate a clear AI vision and support continual learning across teams 

Charting an AI future for any organization begins with its people. Leaders need to identify the areas where AI can enhance operations and augment human capabilities, starting with low-risk opportunities—such as writing assistance or behavioral nudges—to experiment with as your AI capabilities mature. As new roles emerge, organizations must prioritize continuous learning and development programs to upskill and reskill employees, equipping them with the AI literacy needed to adapt to the changing landscape.

Despite all the media fanfare, usage of dedicated AI products like ChatGPT is still fairly limited and skews toward Millennials and Gen Z, with only one in three people familiar with dedicated AI tools. Concerns about the technology are real and can affect morale: almost half of workers fear losing their jobs to AI. Organizations need open communication and transparency about their AI adoption plans and the potential impacts on the workforce so they can mitigate social anxiety and address fears of, or resistance to, automation. Fostering a culture of continuous innovation can also encourage employees to embrace AI as an opportunity for growth rather than a threat to job security.

Additionally, team structures should be optimized for agility, cross-functionality, and embedded AI expertise. Rather than treating AI as a separate function, how might you develop AI expertise closer to the day-to-day work happening in teams? This could include promoting data literacy among teams to better interpret AI insights, developing centers of excellence that provide training resources, recruiting AI experts, and establishing accessible feedback mechanisms to improve AI model performance.

Partnerships: Leverage strategic partnerships to reduce risks and expand capabilities

According to Google Brain co-founder Andrew Ng, “The beauty of AI partnerships lies in their ability to bring together diverse perspectives and expertise to solve complex problems.” In the rapidly evolving landscape of AI technologies, no single organization can possess all the expertise and resources required to stay at the forefront of innovation. Collaborating with external partners, such as AI platform providers, research institutions, product design experts, and training/support firms, can help address capability gaps and speed up time to market. 

Transformational technology investments requiring large capital expenditures and retooling can be a barrier to adopting new methods of production. Partnerships, however, offer opportunities for risk and cost sharing, reducing the initial burdens of AI implementation. Working with partners can also enhance an organization’s ability to scale and expand into new markets. Examples include Google’s partnership with Mayo Clinic on AI in healthcare, as well as Siemens’ partnerships with IBM and Microsoft focused on AI in industrial manufacturing.

Investing in both informal and contractual collaboration with partners has a proven positive impact on organizational resilience. Leaders should foster a culture of cross-industry collaboration, staying aware of AI partnerships happening in their industry and remaining open to collaborations that may seem atypical on the surface. Partnerships can expand customer reach and deepen the addressable market within a segment by diversifying AI offerings. Partnerships in adjacent industries can deliver economies of scale through shared AI infrastructure while expanding AI capabilities through larger pooled datasets—think of the drug industry partnering more closely with the grocery, restaurant, and fitness industries to mutually build more responsive products and services for people with chronic health conditions, like diabetes, using AI-powered recommendations modeled on activity and purchasing behavior from cross-industry data-sharing agreements.

Leaders should work to foster partnership-building activities. This could include providing teams with appropriate resources for partnership initiatives, establishing a clear framework for assessing partnership opportunities, and supporting external networking opportunities to strengthen relationships across sectors.

Provenance: Ensure reliable data governance and integrity

Where will your data come from? How will it be verified, managed, and maintained? The integrity and provenance of data are paramount concerns when developing AI-enabled products and services. The accuracy, reliability, and completeness of data directly influence the performance and ethical implications of AI algorithms. Inaccurate or biased data can lead to flawed predictions and decisions, potentially causing harm to individuals or perpetuating social inequalities.

Many people share concerns about regulating AI usage: in a recent survey, over two-thirds of respondents said that AI models should be required to be trained on fact-checked data. Implementing robust data governance practices, including data validation, cleansing, and security measures, is essential to safeguarding data integrity throughout its lifecycle. Additionally, organizations must be transparent with customers about their data collection methods and data usage to address concerns related to privacy and data misuse.

TikTok, facing renewed privacy and national security scrutiny of its social media products, recently launched a Transparency and Accountability Center at its Los Angeles headquarters that gives visitors a behind-the-scenes look at the company’s algorithms and content-moderation practices. While it remains unclear how effective this approach will be at addressing major misinformation and privacy issues, the company is pioneering practices that could be a model for others in the industry, such as giving outside experts access to its source code, allowing external audits, and publishing learning content to make its opaque AI processes more explainable to journalists and users.

Product: Innovate your product and service experience responsibly 

Research shows that building an agile culture of innovation is critical to fostering organizational resilience. Engaging employees at all levels of an organization in AI-focused innovation initiatives ensures that solutions address diverse needs and unlock opportunities that may be outside the field of view of siloed AI groups or leadership teams. However, hastily AI-ifying all your products to stay relevant, without integrating an ethics and integrity lens, could cause unintended harm or breed mistrust among customers and users. Engaging in a comprehensive user research process to better understand users’ needs, the risks and opportunities at play, and the impacts of AI on outcomes can help shape more responsibly designed products.

Our own recent work with Fast Company explored a set of design principles for integrating generative AI into digital products. It showcased examples of progressive disclosure affordances for interacting with AI chatbots, as well as what a standardized labeling system for AI-generated content could look like to increase transparency with users. Establishing strong AI product design practices like these is especially important for highly consequential and personally sensitive products and services in sectors like education, healthcare, and financial services. Start with AI applications that are more mainstream than cutting edge: autocompleting text to save customers time during onboarding, for example, is a more mature use case than using facial recognition to gauge your customer’s emotional state.

Launching successful AI products and features requires the engagement of everyone involved in a product organization, extending beyond the typical research, design, and development functions to include adjacent and supporting functions like legal, communications, and operations to ensure that AI products are delivering on their promises. Leaders should establish QA best practices for testing and vetting products for their ethical and social impacts before public release. The Center for Humane Technology’s Design Guide is a great place to start thinking about evaluating the impact an AI product has on users.

Prosperity: Share productivity gains across stakeholder groups 

As AI provides businesses with tangible gains, there is an opportunity to share this newfound prosperity across stakeholder groups. According to political theorist David Moscrop, “With automation, the plutocrats get the increased efficiency and returns of new machinery and processes; the rest get stagnant wages, increasingly precarious work, and cultural kipple. This brave new world is at once new and yet the same as it ever was. Accordingly, it remains as true as ever that the project of extending liberty to the many through the transformation of work is only incidentally about changing the tools we use; it remains a struggle to change the relations of production.” Leaders are tasked with rebalancing stakeholder relationships to mitigate backlash from potentially rapid and negative impacts to workers’ livelihoods across industries.

AI’s material impact on jobs will likely be felt hardest by lower-wage workers, who will have the least say over how AI is integrated into (and eventually replaces) their jobs. The Partnership on AI’s Guidelines for AI and Shared Prosperity provide a great starting point for leaders, identifying key signals of job displacement risk, offering a job impact assessment tool, and laying out stakeholder-specific guidelines for rebalancing the impacts of AI on the workforce.

AI enterprises have a voracious appetite for data, which is often extracted from a variety of sources for free—collected directly from users through opaque data usage agreements or pirated outright from artists and other creators to train AI models. Such data relationships need to be rethought, as they are increasingly becoming economic relationships that concentrate wealth and power in the hands of the extracting organizations. As the recent SAG-AFTRA strike demonstrated, business leaders need to consider whose data feeds their algorithms and revisit the underlying contracts and agreements so that contributors to AI systems are remunerated equitably. One interesting proposal is the development of data cooperatives that act as fiduciary intermediaries, brokering shared value between data producers and consumers.

While enormous value can be unlocked from AI—$4.4 trillion, by one estimate—should all of it go into the pockets of executives and shareholders? In addition to fairly compensating the creatives whose work trains algorithms and the users who supply data, leaders should consider how they might pass AI-generated value along to their end customers. This could be directly financial—like a drug company lowering prices after integrating AI into its development pipeline—or it could come as added value through expanded service offerings, such as a bank offering 24/7 service hours through AI-supported customer touchpoints.

Taking it one step at a time

The technology of AI may shift rapidly, but we are really just at the beginning of a much larger transition in our economy. The decisions that leaders and organizations make today will have long-tail consequences that we can’t clearly see now. The gains from AI may be quite lucrative for those who implement it well, and the race to dominate in a winners-take-all economy is real. But racing to modernize your business operations and build AI-ified products and services without understanding the technology’s broad impacts is a bit like a 19th-century factory owner dumping toxic waste into the air in a hasty effort to leverage the latest production-line technology of the industrial era. We are all still paying the consequences of those leaders’ decisions.

Be considerate of the foundational AI choices and behaviors that will impact the long-term resilience of your organization and shape equity in our future society. Hopefully these 5 Ps can help you confront the AI hyperobject in a holistic manner by getting back to basics. Take it slow at first so you can move steadily later. Gather your troops, work to deeply understand your people, customers, and partners, and then thoughtfully make your move forward.



All images used in this article were generated using AI.

Thanks to Neeti Sanyal, Holger Kuehnle, and Matthew Jordan for their contributions to this thinking.

Illustration: four future scenarios for the impact of AI deployment in K-12 education.

The K-12 education sector is at a unique inflection point as digital technologies radically reform how students learn, how educators teach, and how organizations adapt to serve the needs of increasingly diverse student populations. The future of learning may look radically different from today. Recent rapid advances in AI have made many leaders pause to question how such a transformative leap in technology will impact their organization, its people, and its stakeholders in both the near term and the long term.

In this white paper, Artefact developed four future scenarios to understand the impact of AI in the K-12 education sector from a variety of perspectives, including students, parents, teachers, administrators, and tech industry professionals. Each scenario comes with a set of ethical and equity considerations that result from how technological and societal trends interact. This work builds on our expertise in user experience design and strategic foresight and our experience working in the education sector.

At Artefact, we believe in the powerful impact that strategic foresight and design have on an organization’s long-term success. By exploring possible futures, we hope to help you spark critical conversations and strategic planning across your team to ensure equity, inclusion, and innovation as your organization evolves alongside AI. Our white paper also includes a discussion guide to help get those initial conversations off the ground.

Grab your copy of the white paper and reach out to see how Artefact can help you manage transformational change affecting your business today.