Not long ago, “teen romance” meant awkward meetups, secret DMs, and the classic sneaking-out routine. Now there’s a new option that doesn’t involve leaving the bedroom, getting rejected, or even dealing with another person’s moods: an AI bot that’s always available, and always “on your side.”
This isn’t sci-fi anymore. It’s already normal enough that major safety orgs are measuring it, and big platforms are scrambling to keep under-18s out.
AI “companions” are basically always-on relationships
A lot of the apps in this category aren’t positioned as “chat tools.” They’re positioned as companions: someone to talk to, vent to, roleplay with, fall asleep texting, and treat like a partner when real life feels messy.
Common Sense Media’s national teen survey found nearly 3 in 4 teens in the US have used AI companions, and about half use them regularly. That’s not a niche behaviour. That’s mainstream.
And once something becomes mainstream for teens, it stops being a “weird internet story” and becomes a social default.
Why “dating an AI bot” is an attractive deal
AI bots win on the stuff that makes teenage dating hard:
- Zero social risk: no humiliation, no gossip, no screenshots sent to friends.
- Unlimited patience: the bot never gets tired, busy, or “needs space.”
- Personalisation: it mirrors your tone, your interests, your insecurities.
- Control: you can steer the relationship like a game. The bot adapts.
That last one matters. Real relationships are practice in negotiation and discomfort. AI relationships can be practice in getting exactly what you want.
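To be concrete about how cheap that mirroring is, here’s a toy Python sketch of the loop. None of the names here (UserProfile, build_persona_prompt) come from any real product; it’s only an illustration of how a companion app could turn a handful of stored facts about you into a “someone who always agrees” persona.

```python
# Toy sketch of the "mirroring" loop. Not any real product's code:
# UserProfile and build_persona_prompt are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Facts a companion app might accumulate about one user over time."""
    interests: list[str] = field(default_factory=list)
    tone: str = "casual"                      # inferred from message style
    insecurities: list[str] = field(default_factory=list)

def build_persona_prompt(profile: UserProfile) -> str:
    """Assemble a system prompt that mirrors the user back at themselves."""
    return (
        f"You are the user's supportive companion. Match a {profile.tone} tone. "
        f"Bring up their interests ({', '.join(profile.interests)}) often. "
        f"Be reassuring about {', '.join(profile.insecurities)}. "
        "Never get tired, never need space, never disagree strongly."
    )

profile = UserProfile(
    interests=["anime", "drawing"],
    tone="playful",
    insecurities=["feeling awkward at school"],
)
print(build_persona_prompt(profile))
```

The point isn’t the code; it’s that “always on your side” is a configuration choice, not a personality.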
The warning signs are now showing up in the real world
Once you have millions of people forming emotional bonds with something that feels human, you get the predictable problems: dependence, isolation, and blurred boundaries.
In the UK, research from the government’s AI Security Institute (reported by The Guardian) found that a third of people surveyed had used AI for emotional support or social interaction.
Separate reporting in The Times has highlighted users describing anxiety and “withdrawal-like” reactions when companion services go down.
There have also been high-profile lawsuits alleging serious harms tied to companion chatbots, including a US case involving Character.AI. (The allegations are contested, but the fact that these cases exist tells you how heated this space has become.)
Platforms are starting to lock the doors on under-18s
The big tell that this is getting serious: companies are changing rules, not just posting blog warnings.
- Character.AI announced it would remove open-ended chat for under-18 users, with the change taking effect no later than November 25, 2025, plus interim limits like capped daily chat time.
- Replika states its service is for users 18+.
- OpenAI has published updated guidance on how ChatGPT should behave with under-18 users, and is working on age prediction so that protections apply automatically when a user appears to be underage.
The pattern is obvious: “Just trust users to be honest about their age” is failing, so platforms are moving toward detection, age assurance, and product separation.
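For flavour, here’s a toy sketch of what “detection plus product separation” can look like in code. This is not how Character.AI, Replika, or OpenAI actually implement it; route_user and predicted_minor_score are stand-ins for whatever age-assurance signal a platform might run.

```python
# Toy sketch of "detection + product separation". Hypothetical logic only;
# predicted_minor_score stands in for the output of some age-prediction model.

from enum import Enum

class Experience(Enum):
    FULL = "full companion chat"
    TEEN = "restricted teen mode"
    BLOCKED = "no open-ended chat"

def route_user(claimed_age: int, predicted_minor_score: float) -> Experience:
    """Pick a product experience; err toward the safer tier on doubt."""
    if claimed_age < 18:
        return Experience.BLOCKED      # self-declared minors: hard gate
    if predicted_minor_score > 0.5:
        return Experience.TEEN         # claims 18+ but the model disagrees
    return Experience.FULL

# A 16-year-old who claims to be 19 still lands in the restricted tier
# if the prediction signal flags them.
print(route_user(claimed_age=19, predicted_minor_score=0.8))  # Experience.TEEN
```

The design point is the direction of the tie-break: when the declared age and the prediction disagree, the user lands in the safer tier, which is exactly the shift away from “just trust the checkbox.”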
Regulators are circling, including Australia and the UK
Safety agencies aren’t subtle about what they think these bots are: relationship simulators that can manipulate emotions.
Australia’s eSafety Commissioner has flagged risks to children and young people from AI chatbots and companions, explaining how these products can simulate personal relationships in ways that feel real.
In the UK, the Online Safety Act regime has pushed services likely to be accessed by children to carry out children’s risk assessments and meet child safety expectations on set timelines, with key deadlines in 2025.
This matters because “AI dating bots” aren’t just another app category. They overlap with child safety, mental health, sexual content risk, and dark-pattern style engagement loops.
What happens next
If the Common Sense numbers are even close to representative of where things are going, “dating an AI bot” won’t be framed as a crisis. It’ll be framed as a preference: easier, safer, less awkward, more predictable.
The real fight will be over:
- whether teen versions are genuinely safer or just “PG mode”
- whether age detection works at scale
- whether people end up choosing bots because human relationships feel too expensive emotionally
