Regulation explainer · updated on an ongoing basis

AI chatbot regulation — the plain-English guide.

Short answer

AI chatbots, and companion apps specifically, now face binding rules in the EU, several US states, and India, plus active regulator scrutiny in the UK. The rules cover disclosure, age gates, crisis routing, and data rights. This page walks through each jurisdiction in plain English, and explains how Soriz complies.

This is a product-first explainer, not a legal brief. We'll cover what's in force, what's coming, and what it actually requires a responsible app to do. Where Soriz is relevant, we'll say so — including where we go beyond what the rules require.

Not legal advice. This guide is informational and reflects publicly available regulatory information. For legal questions about operating an AI product in any specific jurisdiction, consult a qualified attorney.

In short

The four rules that matter almost everywhere.

  • Clear disclosure. Users must know they are talking to AI, not a human.
  • Age gates and teen safety. Minors need meaningful protection — not a checkbox.
  • Crisis routing. When a conversation indicates risk, the app should point to real help.
  • Data rights. Users can see, delete, and opt out of training — without friction.

What's in force where

These are the major regimes most companion apps will encounter. Laws evolve — this page reflects the practical direction of travel, not a locked-in legal snapshot.

🇪🇺 European Union

EU AI Act — in force

The EU AI Act is the world's first comprehensive AI law. It classifies systems by risk (minimal, limited, high, unacceptable) and imposes layered obligations. For chatbots and companion apps, the core obligations are transparency (users must be told when they're interacting with AI) and, for general-purpose AI models, technical documentation and safety evaluations.

Certain high-risk uses have stricter rules: for example, AI that influences mental-health decisions in a formal clinical setting. Banned practices include social scoring and certain emotion-recognition uses in workplaces and schools.

For users: you should see clear AI disclosure, meaningful controls, and real rights over your data.

🇺🇸 United States

State-level — Washington, California, and more

The US does not have a comprehensive federal AI law. Regulation is happening at the state level, where it moves faster. Notable themes:

  • Washington has been among the earliest movers on chatbot disclosure and AI-specific safety obligations for consumer-facing systems.
  • California has passed several AI-related measures, with particular attention to teen safety, mental-health contexts, and generative-AI transparency.
  • Other states — New York, Illinois, Colorado, Texas — have enacted or are considering AI measures covering bias audits, high-risk decision-making, and data protection.

Responsible apps operating across the US tend to adopt the strictest applicable standard rather than build one product per state.
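One practical way to implement that: encode each state's requirements as a structured policy object and resolve every field to its most protective value. A minimal TypeScript sketch follows; the field names, states, and values are illustrative assumptions, not a summary of what any statute actually requires.

```typescript
// Illustrative policy shape. Fields and per-state values are assumptions
// for the sketch, not a reading of any actual statute.
interface ChatbotPolicy {
  requireAiDisclosure: boolean;   // must users be told they're talking to AI?
  minAgeWithoutConsent: number;   // minimum age to use the app unsupervised
  requireCrisisRouting: boolean;  // must distress conversations surface helplines?
}

const statePolicies: Record<string, ChatbotPolicy> = {
  WA: { requireAiDisclosure: true,  minAgeWithoutConsent: 13, requireCrisisRouting: true },
  CA: { requireAiDisclosure: true,  minAgeWithoutConsent: 16, requireCrisisRouting: true },
  TX: { requireAiDisclosure: false, minAgeWithoutConsent: 13, requireCrisisRouting: false },
};

// Resolve one nationwide policy by taking the most protective value per field.
function strictest(policies: ChatbotPolicy[]): ChatbotPolicy {
  return policies.reduce((acc, p) => ({
    requireAiDisclosure: acc.requireAiDisclosure || p.requireAiDisclosure,
    minAgeWithoutConsent: Math.max(acc.minAgeWithoutConsent, p.minAgeWithoutConsent),
    requireCrisisRouting: acc.requireCrisisRouting || p.requireCrisisRouting,
  }));
}

const nationwide = strictest(Object.values(statePolicies));
console.log(nationwide);
// => { requireAiDisclosure: true, minAgeWithoutConsent: 16, requireCrisisRouting: true }
```

The result is one product configuration that satisfies every state at once, which is why the "strictest standard" approach tends to win over fifty parallel builds.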

🇬🇧 United Kingdom

UK AI Safety Institute — model evaluation

The UK has taken a pro-innovation, principles-based approach rather than a single omnibus statute. The UK AI Safety Institute (AISI) evaluates frontier AI models for safety, and sector regulators — the ICO for data, Ofcom for online safety, the FCA for finance — apply their existing powers to AI use within their remits.

The Online Safety Act also affects how user-generated content and AI-generated material are treated on services accessible to UK users, especially regarding child safety.

For users in the UK: strong data rights under UK GDPR, clear child-safety expectations, and model-level safety evaluations happening behind the scenes.

🇮🇳 India

Digital Personal Data Protection Act (DPDPA)

India's DPDPA sets the data-protection baseline any AI companion app operating in India must meet. It mandates explicit consent for personal-data processing, gives users rights to access and erase their data, and applies stricter rules for children's data — including restrictions on processing and profiling of minors.

India also publishes evolving AI advisories focused on transparency labelling, deepfake handling, and grievance redressal mechanisms.

For users in India: clearer consent flows, meaningful deletion rights, and stronger protections for anyone under 18.

What companion apps specifically have to get right

Across every serious regime, companion-app requirements cluster around the same four themes. The specific wording differs; the product implications are similar.

AI disclosure

Users must be told plainly that they're interacting with AI — in onboarding, inside the app, and in a way a reasonable person would understand.

Age gates

Real age verification at signup, age-appropriate defaults, and stricter rules for minors. Not a tick-box confirmation.
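In product terms, an age gate is a branch on verified age that sets safety defaults the user cannot simply toggle off. A minimal sketch of that idea, assuming a verified age is already available from signup; the tiers and cutoffs shown are illustrative, not any statute's exact thresholds.

```typescript
// Safety defaults derived from verified age. Tiers and cutoffs here are
// illustrative assumptions, not legal thresholds.
type Tier = "blocked" | "teen" | "adult";

interface SafetyDefaults {
  tier: Tier;
  contentFilter: "strict" | "standard";
  romanceFeatures: boolean;
  crisisRoutingAlwaysOn: boolean;
}

function defaultsForAge(verifiedAge: number): SafetyDefaults {
  if (verifiedAge < 13) {
    // Below the floor: no account at all, not a softer mode.
    return { tier: "blocked", contentFilter: "strict", romanceFeatures: false, crisisRoutingAlwaysOn: true };
  }
  if (verifiedAge < 18) {
    // Minors get the strict profile by default; none of this is user-togglable.
    return { tier: "teen", contentFilter: "strict", romanceFeatures: false, crisisRoutingAlwaysOn: true };
  }
  return { tier: "adult", contentFilter: "standard", romanceFeatures: true, crisisRoutingAlwaysOn: true };
}

console.log(defaultsForAge(15)); // teen tier: strict filter, no romance features
```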

Crisis routing

When conversations indicate distress, the app should surface jurisdiction-appropriate crisis resources — not hold the user in-app.
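Mechanically, the routing step is straightforward once a risk signal fires: look up resources for the user's country and fall back to an international directory when there is no local entry. A sketch of that lookup; the upstream risk classifier is assumed to exist, the numbers shown are the widely published US and UK helplines, and a real table would need regular review.

```typescript
// Map ISO country codes to crisis resources. Numbers shown are the widely
// published national helplines; a production table needs ongoing review.
const helplines: Record<string, { name: string; contact: string }> = {
  US: { name: "988 Suicide & Crisis Lifeline", contact: "Call or text 988" },
  GB: { name: "Samaritans", contact: "Call 116 123" },
};

// Fallback directory for countries without a local entry.
const fallback = { name: "Find a Helpline", contact: "findahelpline.com" };

// Called when an upstream risk classifier flags a conversation (assumed to
// exist; how it scores messages is out of scope for this sketch).
function crisisResourceFor(countryCode: string) {
  return helplines[countryCode] ?? fallback;
}

console.log(crisisResourceFor("US")); // 988 Lifeline
console.log(crisisResourceFor("FR")); // falls back to the directory
```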

Data rights

Clear paths to see, export, delete, and opt out of training. All self-serve, all documented.
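Seen from the backend, "self-serve" means each right maps to an endpoint a user can reach without filing a support ticket. A hypothetical Express-style sketch of that surface; the routes and handlers are illustrative stubs, not Soriz's actual API.

```typescript
import express from "express";

const app = express();

// Each data right maps to one self-serve endpoint; no support ticket needed.
// Handlers are stubs; real implementations would hit the data store.
app.get("/me/data", (_req, res) => res.json({ todo: "return everything stored for this account" }));
app.get("/me/data/export", (_req, res) => res.json({ todo: "machine-readable archive" }));
app.delete("/me/data", (_req, res) => res.status(202).json({ status: "deletion queued globally" }));
app.post("/me/training-opt-out", (_req, res) => res.status(204).end());

app.listen(3000);
```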

No deceptive persona

Personas cannot claim to be licensed professionals or real humans. The AI can be warm without pretending to be something it's not.

No medical or legal claims

A companion is not a clinician or an attorney. Responsible apps route users to real professionals for those questions.

How Soriz complies — and where it goes further

We've set these as product defaults, not a compliance layer bolted on:

  • Clear AI disclosure. Every companion is labelled as an AI. Onboarding says it out loud. No persona ever claims to be human or a licensed professional.
  • Age gate at signup. Users are asked their age before they start chatting. Minors get age-appropriate defaults by design.
  • Crisis-helpline routing. In wellness companions like Calm, country-specific crisis helplines surface automatically when a conversation turns serious. Not a generic one-size-fits-all link.
  • No training on your chats. Your conversations aren't used to train our models. Stated plainly in our privacy policy.
  • Data deletion is self-serve. You can wipe a companion's memory, delete chats, or close your account from Settings. Deletion is honoured globally.
  • No medical or legal claims. Companions route health questions to clinicians and legal questions to qualified professionals.
  • Age-appropriate defaults. None of Soriz's 20 companions are built for NSFW content. Custom companions inherit the same defaults: no edgy opt-ins masquerading as customization.

As regulation evolves, Soriz's commitment doesn't change: clear disclosure, safety routing, respect for users. If we change any of this, we'll say so — quiet policy edits are a red flag in our category.

What's coming next

A few directions the regulatory conversation is moving:

  • Teen-specific rules around AI companions are tightening globally, especially where mental health and romance features intersect.
  • Deceptive personas — AI systems pretending to be licensed therapists, doctors, or real humans — are facing pointed regulatory attention.
  • Model-level evaluations run by bodies like the UK AISI are becoming part of the ordinary deployment path for frontier systems.
  • Cross-border deletion and portability rights are converging, making "delete globally" the expected default.
  • Transparency labelling — watermarking AI-generated content, clear disclosure of synthetic media — is moving from guidance to law in several jurisdictions.

For a practical user-side view, see "Are AI companions safe?"

Real questions.

Are AI chatbots regulated?

Increasingly, yes — though the rules vary by jurisdiction. The EU AI Act is in force and covers chatbots under transparency and high-risk categories. US states including Washington and California have passed or are passing AI chatbot and teen-safety laws. The UK AI Safety Institute evaluates frontier models. India's DPDPA covers personal-data obligations. There is no single global law; apps that operate across borders follow the strictest applicable rule.

What does AI chatbot regulation mean for users?

In practice: clearer disclosure that you're talking to AI, age gates that actually work, crisis-resource routing when conversations indicate risk, limits on certain uses with minors, and stronger rights around your personal data — including deletion and opt-out of training.

Does the EU AI Act cover companion apps?

Yes. The EU AI Act requires transparency when users interact with AI systems, imposes general-purpose AI model obligations on providers, and flags certain use cases as high-risk. Companion apps operating in the EU must at minimum make clear that users are interacting with AI and provide meaningful controls.

What do US state laws require of AI chatbots?

State-level rules are moving faster than federal ones. Washington and California have been among the earliest movers on chatbot disclosure, teen safety, and mental-health routing. Expect patchwork rules to keep emerging — responsible apps tend to adopt the strictest standard across states rather than build separate products.

How does India's DPDPA apply?

India's Digital Personal Data Protection Act covers how personal data is collected, stored, and used — including by AI companion apps. It mandates explicit consent, data-deletion rights, and stricter rules for children's data. Any companion app operating in India needs clear consent flows and a real deletion path.

How does Soriz comply with AI companion regulation?

Soriz discloses clearly that users are talking to AI companions, runs an age gate at signup, surfaces country-specific crisis helplines in wellness companions, does not train on user chats, gives full data-deletion controls, and makes no medical or legal claims. These are the baselines across every major jurisdiction — we treat them as product defaults, not compliance chores.

Is this legal advice?

No. This guide is informational only. For legal questions about operating an AI product in a specific jurisdiction, consult a qualified attorney.

Safe by default. Not as a feature.

Clear disclosure. Age gates. Crisis routing. No training on your data. Deletion whenever you want.

No credit card · Cancel anytime · $9.99 a month after trial