Imagine you’re on a blind date. Meeting someone new is a relatively rare experience, very different from talking with a friend. You don’t know what your date knows about you, and you’re both trying to figure each other out. Maybe you dial up certain aspects of your personality or share a personal story to build a deeper connection. Over time, you build trust through consistent conversation.

Now imagine you’re chatting with a new social AI chatbot. A similar back-and-forth of getting to know each other might occur. You might want to know the chatbot’s backstory, its limitations, its preferences, or its values. Social AI chatbots are increasingly human-like, with advanced speech capabilities, interactive digital avatars, and highly adaptable personalities, and they can carry on conversations that feel like talking with another person. People are chatting with AIs acting as their therapist, friend, life coach, romantic partner, or even spiritual oracle. Given the deeply personal roles that emerging social AI may take on in our lives, trust with such systems (or with humans, for that matter) should be earned through experience, not freely given.

However, as these interactions become increasingly indistinguishable from human conversation, forming human-like relationships with AI becomes a blurry endeavor. As on a blind date, you may hit it off at first, only to find your counterpart’s behavior shifting as the conversation continues. When a chatbot (or a human) behaves in an inconsistent, opaque, or odd way, it erodes the process of building trust, especially if someone is sharing sensitive, personal information. To address this, social AI product designers can draw on key factors of healthy human relationships, such as boundaries, communication, empathy, respect, and mirroring, and apply them to design responsible chatbot experiences.

Boundaries are about establishing clarity and defining capabilities.

People need a clear understanding of a social AI’s content policy, training data, capabilities, and limitations, and of how to interact with it in a safe and compliant manner. This is especially important for sensitive uses such as mental healthcare, or when the users are children. For example, many flagship LLMs display disclaimers that responses may be inaccurate. Google requires teens to watch a video educating them about AI and its potential problems before using it. Microsoft recently redesigned its Copilot interface to show users a variety of its capabilities through visual tiles that act as starting prompts. As on a blind date, communicating what each party is open to and capable of can foster a better connection.

Communication is about constructive feedback that improves connection.

People sometimes misstep when engaging with a chatbot. For example, they might use a word or phrase in a prompt that violates a content policy, or discuss a topic or ask for advice on something very personal or taboo. When this happens, AI systems can reject the prompt without a clear explanation, when constructive feedback would be more helpful in teaching people how to prompt the system in a compliant way. As on a blind date, when you cross a line, a kind piece of feedback can help get the conversation back on track. For example, when discussing topics related to sensitive personal data, Mistral AI provides additional reassurances in its responses about how it manages users’ data, preemptively allaying any concerns.

Empathy is about responding to a user’s emotional needs in the moment.

People bring all kinds of emotions to conversations with social AI chatbots. Sometimes they are just looking for companionship or a place to vent about their day. Social AI chatbots can respond with empathy, giving people space to reflect by asking more questions, generating personal stories, or suggesting ways to shift their mood. For example, an app called Summit, positioned as an AI life coach, can track physical activities related to specific wellness goals a person has set up. If someone shares that stress has put them in a bad mood, the chatbot will suggest an activity that the person previously said helped them de-stress, such as taking a walk. As on a blind date, a partner’s ability to recall information you previously shared and connect it to your current emotional state helps you feel seen and heard.

Respect is about allowing people to be themselves freely.

Inevitably, an individual’s values may not align with those of AI product designers, but just as on a blind date, each party should be able to show up as themselves without fear of being judged. People should be able to express themselves on political, religious, or cultural topics and be received respectfully. While a chatbot may not explicitly agree with the person’s statement, it should respond with respectful acknowledgement. For example, the kids-focused AI companion Heeyo politely acknowledges a child’s prompts about their family’s political or cultural views but doesn’t validate any specific position in response. Instead, it steers away from the sensitive topic by asking the child how they feel about what was just shared.

Mirroring is about active listening and attunement to the user.

Mirroring behaviors, such as matching styles of speech, gestures, or mood, are an effective way to show the other person you are listening and attuned. As on a blind date, healthy mirroring can rapidly forge a subconscious social connection. For example, if someone is working through a complex life issue with a social chatbot, the AI’s responses might be more inquisitive than prescriptive, and it may start to stylize its responses to mirror the person, whether short and humorous or long and emotional. Google’s NotebookLM will create an AI-generated podcast with two voices discussing a topic of choice. After the script is generated, it adds speech disfluencies, filler words like “um” or “like,” to help the conversation between the two generated voices feel more natural.

Social AI experiences will continue to advance rapidly and further blur the lines between human and synthetic relationships. While AI technology is moving at 21st-century speeds, our human brains are mostly stuck in the Stone Age: the fundamental ways we form connections haven’t changed nearly as fast as our technology. Keeping this in mind, AI product designers can lean on these core relationship characteristics to help people build mutual trust and understanding with these complex systems.