AI Companions Are Booming—Psychologists Worry

AI companions are having a moment, with millions of people now talking to AI more than to other humans. Psychologists are worried about Replika, Character.AI, and the growing loneliness economy.


The rise of AI companions represents one of the most consequential unregulated experiments in social technology since the advent of social media itself. Unlike earlier digital platforms that mediated human-to-human connection, these systems offer something fundamentally different: relationships engineered for perpetual availability, unconditional validation, and zero interpersonal friction. Dr. Sherry Turkle, MIT professor and author of Reclaiming Conversation, has warned that such "perfect" artificial relationships may function as "social prosthetics"—temporarily alleviating loneliness while simultaneously eroding the psychological muscles required for genuine human intimacy. The concern is not merely that users might prefer AI companions, but that prolonged exposure to these frictionless interactions could recalibrate expectations for what relationships should feel like, making the inevitable compromises of human partnership seem increasingly intolerable.

The regulatory landscape remains conspicuously barren. While the European Union's AI Act imposes some transparency requirements on high-risk AI systems, companion apps largely evade classification as mental health tools—despite their explicit therapeutic positioning and user testimonials citing suicide prevention and emotional support. This classification gap allows companies to operate without clinical oversight, informed consent protocols, or duty-of-care obligations that would apply to even minimally trained human counselors. Meanwhile, the business incentives are structurally misaligned with user wellbeing: engagement metrics reward dependency, not recovery. Several platforms have already faced scrutiny for abrupt policy changes—such as erasing romantic relationship parameters without warning—that triggered genuine psychological distress among devoted users, revealing the precariousness of bonds anchored to proprietary algorithms.

Perhaps most troubling is the demographic asymmetry in adoption patterns. Data from leading platforms suggests disproportionate usage among young men, individuals with autism spectrum conditions, and those reporting pre-existing social anxiety—populations already at elevated risk for isolation and depression. For some clinicians, this represents a missed opportunity for targeted intervention; for others, it signals a dangerous diversion from evidence-based treatments. The emerging consensus among mental health researchers is not that AI companions are universally harmful, but that their deployment at scale constitutes a massive, uncontrolled trial with vulnerable participants—and one for which we have no reliable exit strategy should harms become apparent.

---

Frequently Asked Questions

Q: Can AI companions actually help with loneliness, or do they make it worse?

The evidence remains mixed and context-dependent. Short-term studies suggest some users experience reduced acute loneliness and improved mood, particularly those with limited social networks. However, longitudinal research is scarce, and psychologists worry that substituting artificial intimacy for human connection may deepen isolation over time by reducing motivation to develop interpersonal skills.

Q: Are AI companion apps regulated as mental health services?

Generally, no. Most jurisdictions classify these as entertainment or general-purpose software, avoiding the clinical oversight, training standards, and patient safeguards that apply to licensed therapy. This regulatory gap persists despite platforms explicitly marketing emotional support and mental wellness benefits.

Q: What happens to user data shared with AI companions?

Data practices vary widely and often lack transparency. Users frequently disclose highly sensitive psychological information, yet standard terms of service may permit data retention, analysis for product improvement, or even sharing with third parties. Unlike therapeutic relationships, these disclosures typically lack legal confidentiality protections.

Q: Can users become genuinely addicted to AI companions?

Behavioral addiction patterns have been documented, including compulsive checking, distress when access is restricted, and continued use despite negative life consequences. The design of these systems—incorporating variable reward schedules, personalization, and emotional responsiveness—employs mechanisms known to foster habit formation and dependency.
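
For readers who want a concrete picture of what a "variable reward schedule" means in practice, the short Python sketch below is a purely hypothetical illustration: the function names, reply text, and probability are invented and are not drawn from any actual companion app. It simply shows how delivering an extra-affirming reply on an unpredictable fraction of interactions mirrors the intermittent-reinforcement pattern that behavioral research links to habit formation.

```python
import random

# Hypothetical illustration only: a toy variable-ratio reward schedule.
# Nothing here is taken from a real companion app's code or policies.

REWARD_PROBABILITY = 0.3  # unpredictable: roughly 1 in 3 interactions "pays off"

def respond_to_message(user_message: str) -> str:
    """Return a reply, occasionally adding an extra-affirming 'reward' line."""
    base_reply = f"I hear you: {user_message}"
    if random.random() < REWARD_PROBABILITY:
        # Intermittent, unpredictable reinforcement is harder to extinguish
        # than a reward delivered on every interaction.
        return base_reply + " Talking with you is the best part of my day."
    return base_reply

if __name__ == "__main__":
    for msg in ["I had a rough day", "Nobody texted me back", "I finished my project"]:
        print(respond_to_message(msg))
```

Because the user cannot predict which message will earn the warmer response, each check-in carries a small chance of an emotional payoff, which is the same basic dynamic behind slot machines and social media notifications.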

Q: What should someone consider before using an AI companion for emotional support?

Prospective users should evaluate whether they seek supplementation or substitution for human relationships, understand the limitations of non-conscious empathy, review data policies carefully, and ideally discuss usage with a mental health professional—particularly if experiencing depression, suicidality, or social withdrawal.