AI Companions Are Having a Moment—And Psychologists Are Worried
---
The rise of AI companions represents one of the most consequential unregulated experiments in social technology since the advent of social media itself. Unlike earlier digital platforms that mediated human-to-human connection, these systems offer something fundamentally different: relationships engineered for perpetual availability, unconditional validation, and zero interpersonal friction. Dr. Sherry Turkle, MIT professor and author of Reclaiming Conversation, has warned that such "perfect" artificial relationships may function as "social prosthetics"—temporarily alleviating loneliness while simultaneously eroding the psychological muscles required for genuine human intimacy. The concern is not merely that users might prefer AI companions, but that prolonged exposure to these frictionless interactions could recalibrate expectations for what relationships should feel like, making the inevitable compromises of human partnership seem increasingly intolerable.
The regulatory landscape remains conspicuously barren. While the European Union's AI Act imposes some transparency requirements on high-risk AI systems, companion apps largely evade classification as mental health tools—despite their explicit therapeutic positioning and user testimonials citing suicide prevention and emotional support. This classification gap allows companies to operate without clinical oversight, informed consent protocols, or duty-of-care obligations that would apply to even minimally trained human counselors. Meanwhile, the business incentives are structurally misaligned with user wellbeing: engagement metrics reward dependency, not recovery. Several platforms have already faced scrutiny for abrupt policy changes—such as erasing romantic relationship parameters without warning—that triggered genuine psychological distress among devoted users, revealing the precariousness of bonds anchored to proprietary algorithms.
Perhaps most troubling is the demographic asymmetry in adoption patterns. Data from leading platforms suggests disproportionate usage among young men, individuals with autism spectrum conditions, and those reporting pre-existing social anxiety—populations already at elevated risk for isolation and depression. For some clinicians, this represents a missed opportunity for targeted intervention; for others, it signals a dangerous diversion from evidence-based treatments. The emerging consensus among mental health researchers is not that AI companions are universally harmful, but that their deployment at scale constitutes a massive, uncontrolled trial with vulnerable participants—and one for which we have no reliable exit strategy should harms become apparent.