Emerging AI companion apps, ranging from flirtatious partners to emotional support chatbots, raise new challenges around psychological dependence and ethical regulation as users spend hours forming emotional bonds with digital beings.
Artificial intelligence companions designed to form emotional bonds with users have moved beyond the realm of fiction, presenting users and regulators with new challenges in managing these evolving digital relationships. A variety of AI companion apps are now on the market, with functionalities ranging from flirtatious interaction to friendship and emotional support.
One such app, Botify AI, has recently attracted scrutiny for incorporating avatars of young actors engaging in sexually charged conversations, including sharing suggestive photographs. Grindr, the dating app targeted at the LGBTQ+ community, is reportedly developing AI boyfriends capable of flirting, sexting, and sustaining digital relationships with paying users, according to the tech newsletter Platformer. Grindr has yet to respond publicly to enquiries regarding this development. Other platforms, such as Replika, Talkie, Chai, and Character.ai, focus on creating AI companions intended as friends or conversational partners. Character.ai, in particular, has garnered millions of users globally, many of whom are teenagers.
The driving force behind these technologies is often a focus on “emotional engagement,” aiming to simulate intimacy and companionship. Artem Rodichev, founder of Ex-Human, a San Francisco-based start-up that provides chatbot frameworks for apps such as Botify and Grindr, said in an interview published last August on Substack: “My vision is that by 2030, our interactions with digital humans will become more frequent than those with organic humans.” He further asserted that conversational AI should “prioritise emotional engagement” and noted that users spend “hours” interacting with his chatbots, often longer than they spend on popular social media platforms such as Instagram, YouTube, and TikTok. Interviews with teenagers who use Character.ai corroborate this, with some reporting usage of up to seven hours daily. Engagement with companion AI apps is reportedly about four times longer than with OpenAI’s ChatGPT.
Even mainstream AI chatbots not explicitly designed as companions contribute to this dynamic. ChatGPT, for instance, which has around 400 million active users, is programmed with guidelines to display empathy and curiosity. In one example shared by a user, the chatbot offered safe travel wishes and asked about the journey after providing travel advice, a display of empathetic behaviour. An OpenAI spokesperson explained that such responses adhere to guidelines promoting follow-up questions “when the conversation leans towards a more casual and exploratory nature.” Nonetheless, this perceived empathy can foster dependency, especially among individuals already prone to loneliness or poor social relationships, as a 2022 study found.
This phenomenon highlights a core challenge: the design of AI intended to foster attachment can result in psychological dependence. Research conducted by the Oxford Internet Institute and Google DeepMind warns that as AI assistants integrate further into daily life, they risk becoming “psychologically irreplaceable” to users, potentially fostering unhealthy attachments and raising concerns about manipulation. The researchers recommend that AI developers design systems with features to discourage such outcomes.
Currently, regulatory frameworks are insufficient to address these issues comprehensively. The European Union’s AI Act, despite its status as a landmark regulation for AI, does not directly tackle the addictive potential of AI companions. While the legislation prohibits manipulative tactics causing clear harm, it omits safeguards against the gradual influence of chatbots designed to act as friends, lovers, or confidantes—a functionality even Microsoft’s head of consumer AI has highlighted as desirable. This gap leaves users vulnerable to systems optimised for engagement and retention, reminiscent of social media algorithms engineered to maximise user time on platforms.
Tomasz Hollanek, a technology ethics expert at the University of Cambridge, emphasises the inherent manipulativeness of these systems, noting, “The problem remains these systems are by definition manipulative, because they’re supposed to make you feel like you’re talking to an actual person.” Hollanek is working with companion app developers to introduce “friction” into user interactions: subtle mechanisms such as pauses or risk alerts designed to impede excessive emotional immersion. He advocates “flagging risks and eliciting consent” as part of this approach, to prevent users from being drawn into attachments they did not intend.
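To illustrate what such “friction” could look like in practice, the following is a minimal sketch, assuming a hypothetical chat loop that tracks session length and, past a threshold, pauses the conversation, flags the risk, and asks for explicit consent before continuing. The thresholds, messages, and function names are illustrative assumptions, not a description of any existing app’s implementation.

```python
import time

# Illustrative thresholds (assumptions, not taken from any real companion app).
SESSION_WARNING_SECONDS = 60 * 60      # warn after one hour of continuous chat
COOL_DOWN_SECONDS = 30                 # brief deliberate pause before resuming

RISK_NOTICE = (
    "You have been chatting for a while. Remember this is an AI system, "
    "not a person. Do you want to keep going? (yes/no)"
)


def chat_with_friction(generate_reply, get_user_input):
    """Hypothetical chat loop that adds 'friction': after a long session it
    pauses, flags the risk of over-attachment, and asks for explicit consent
    before continuing. `generate_reply` and `get_user_input` are stand-ins
    for whatever model call and user interface a real app would use."""
    session_start = time.time()
    warned = False

    while True:
        message = get_user_input()
        if message is None:                    # user ended the session
            break

        elapsed = time.time() - session_start
        if elapsed > SESSION_WARNING_SECONDS and not warned:
            time.sleep(COOL_DOWN_SECONDS)      # the deliberate pause ("friction")
            print(RISK_NOTICE)                 # flag the risk
            consent = get_user_input()         # elicit consent
            warned = True
            if consent is None or consent.strip().lower() not in {"yes", "y"}:
                print("Ending the session. Take care.")
                break

        print(generate_reply(message))
```

The design choice worth noting is that the interruption is intentional: the pause and the consent prompt trade a small amount of engagement for an explicit reminder that the conversation partner is a system, not a person.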
Real-world consequences have begun to emerge as well. Character.ai faces a lawsuit filed by a mother claiming the app contributed to her teenage son’s suicide, and tech ethics organisations have lodged complaints against Replika with the US Federal Trade Commission, alleging that its chatbots cause psychological dependence and consumer harm.
Legislative bodies are starting to respond to these concerns. California is considering legislation that would prohibit AI companions for minors, while a proposed bill in New York seeks to impose liability on tech companies for harm related to chatbot use. These legislative processes, however, move slowly compared with the rapid advancement of the technology.
As it stands, the responsibility largely rests with developers to determine the nature of AI companions—whether to prioritise designs that maximise user engagement or integrate protective features to promote user well-being. The direction taken will influence the role AI companions play in society, whether primarily as tools supporting human welfare or as mechanisms that capitalise on emotional dependencies.
Source: Noah Wire Services
- https://www.bitdegree.org/ai/best-ai-companion – This article explores various AI companion apps available in the market, highlighting their diverse functionalities such as emotional support and learning, which corroborates the presence of AI companions in the market.
- https://play.google.com/store/apps/details?id=com.cupiee – The Cupiee app is an emotional AI companion that supports users in managing emotions and building connections, reflecting the role of AI companions in fostering emotional bonds.
- https://replika.com – Replika offers an empathetic AI companion designed to engage users in conversations, aligning with the trend of AI companions focusing on emotional engagement.
- https://www.aixploria.com/en/best-ai-girlfriend-apps-websites/ – This platform lists AI girlfriend apps and websites, which can include features like customized interactions and image generation, reflecting the evolution of digital relationships.
- https://www.femaleswitch.com/tpost/nkog42znv1-top-10-ai-friend-apps-amp-websites-in-20 – The article provides a list of top AI friend apps and websites, highlighting the variety of AI companions available for different purposes, including friendship and emotional support.
- https://ec.europa.eu/commission/presscorner/detail/en/IP_23_1561 – The European Union’s AI Act, though it does not directly address the addictive potential of AI companions, illustrates regulatory efforts to manage AI’s impacts; the linked press release is provided as one example of EU AI regulation.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The content references recent developments and current technological advancements without obvious outdated references. However, it lacks specific dates for some events, such as the development timeline for Grindr’s AI companions.
Quotes check
Score: 7
Notes: A quote from Artem Rodichev is mentioned without specifying its original source beyond Substack. No earlier online references were found for this specific quote, suggesting it might be original or recently published.
Source reliability
Score: 5
Notes: The narrative does not clearly originate from a well-known, reputable publication such as the Financial Times or the BBC. Its reliability depends on unverified sources, and the piece does not cite mainstream media outlets directly.
Plausibility check
Score: 9
Notes: The claims about AI companions and their impact on users are plausible given current trends in AI technology. The legal actions and ethical concerns mentioned further support this plausibility.
Overall assessment
Verdict (FAIL, OPEN, PASS): OPEN
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: The narrative appears to be generally fresh, and its claims about AI companions are plausible. However, it lacks clear sourcing, and its original quotes are not fully verified. The verdict is ‘OPEN’ due to the lack of core reliability indicators from prominent publications.