AI chatbots — and companion apps specifically — are now regulated in the EU, the UK, several US states, and India. The rules cover disclosure, age gates, crisis routing, and data rights. This page walks through each jurisdiction in plain English, and explains how Soriz complies.
This is a product-first explainer, not a legal brief. We'll cover what's in force, what's coming, and what it actually requires a responsible app to do. Where Soriz is relevant, we'll say so — including where we go beyond what the rules require.
These are the major regimes most companion apps will encounter. Laws evolve — this page reflects the practical direction of travel, not a locked-in legal snapshot.
The EU AI Act is the first comprehensive AI law anywhere. It classifies systems by risk (minimal, limited, high, unacceptable) and imposes layered obligations. For chatbots and companion apps, the core obligations are transparency (users must be told when they're interacting with AI) and, for general-purpose AI models, technical documentation and safety evaluations.
Certain high-risk uses — for example, AI that influences mental-health decisions in a formal clinical setting — have stricter rules. Banned practices include social scoring by public authorities and certain emotion-recognition uses in workplaces and schools.
For users: you should see clear AI disclosure, meaningful controls, and real rights over your data.
The US does not have a comprehensive federal AI law. Regulation is happening at the state level, where it moves faster. The recurring themes so far: chatbot disclosure, teen safety, and mental-health crisis routing.
Responsible apps operating across the US tend to adopt the strictest applicable standard rather than build one product per state.
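That "strictest applicable standard" approach can be sketched as a merge over per-state requirements: for each requirement, take the most protective value found in any state. Everything below is illustrative; the state codes, field names, and values are hypothetical placeholders, not a statement of what any state's law actually requires.

```python
# Hypothetical per-state requirement table. Values are placeholders for
# illustration only, not real legal thresholds.
STATE_RULES = {
    "WA": {"min_age": 13, "disclosure_required": True,  "crisis_routing": True},
    "CA": {"min_age": 16, "disclosure_required": True,  "crisis_routing": False},
    "TX": {"min_age": 13, "disclosure_required": False, "crisis_routing": True},
}

def strictest_standard(rules: dict) -> dict:
    """Merge per-state requirements, keeping the most protective value of each."""
    merged = {}
    for reqs in rules.values():
        for key, value in reqs.items():
            if isinstance(value, bool):
                # Required anywhere -> required everywhere.
                merged[key] = merged.get(key, False) or value
            else:
                # Highest numeric threshold wins.
                merged[key] = max(merged.get(key, value), value)
    return merged
```

One product built against `strictest_standard(STATE_RULES)` then satisfies every state in the table at once, which is why multi-state apps tend to prefer this over per-state builds.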
The UK has taken a pro-innovation, principles-based approach rather than a single omnibus statute. The UK AI Safety Institute (AISI) evaluates frontier AI models for safety, and sector regulators — the ICO for data, Ofcom for online safety, the FCA for finance — apply their existing powers to AI use within their remits.
The Online Safety Act also affects how user-generated content and AI-generated material are treated on services accessible to UK users, especially regarding child safety.
For users in the UK: strong data rights under UK GDPR, clear child-safety expectations, and model-level safety evaluations happening behind the scenes.
India's Digital Personal Data Protection Act (DPDPA) sets the data-protection baseline any AI companion app operating in India must meet. It mandates explicit consent for personal-data processing, gives users rights to access and erase their data, and applies stricter rules for children's data — including restrictions on processing and profiling of minors.
India also publishes evolving AI advisories focused on transparency labelling, deepfake handling, and grievance redressal mechanisms.
For users in India: clearer consent flows, meaningful deletion rights, and stronger protections for anyone under 18.
Across every serious regime, companion-app requirements cluster around the same four themes. The specific wording differs; the product implications are similar.

Disclosure: users must be told plainly that they're interacting with AI — in onboarding, inside the app, and in a way a reasonable person would understand.

Age gates: real age verification at signup, age-appropriate defaults, and stricter rules for minors. Not a tick-box confirmation.

Crisis routing: when conversations indicate distress, the app should surface jurisdiction-appropriate crisis resources — not hold the user in-app.

Data rights: clear paths to see, export, delete, and opt out of training. All self-serve, all documented.

Two expectations run through all four. Personas cannot claim to be licensed professionals or real humans; the AI can be warm without pretending to be something it's not. And a companion is not a clinician or an attorney, so responsible apps route users to real professionals for those questions.
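For product teams, the four themes above roughly translate into session-level defaults. A minimal sketch, with hypothetical field names and placeholder crisis-resource keys; this is not Soriz's actual implementation, and a real app would maintain vetted, localised helpline directories rather than the stub lookup shown here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SessionPolicy:
    ai_disclosure: str            # shown at onboarding and in-app
    age_verified: bool            # real verification, not a tick-box
    crisis_resource: Optional[str]  # jurisdiction-appropriate, surfaced on distress
    training_opt_out: bool        # data-rights default

# Placeholder directory keyed by jurisdiction; keys are illustrative stubs,
# not real helpline data.
CRISIS_DIRECTORY = {
    "EU": "local-crisis-helpline-EU",
    "UK": "local-crisis-helpline-UK",
    "US": "local-crisis-helpline-US",
    "IN": "local-crisis-helpline-IN",
}

def build_policy(country: str, age_verified: bool, distress_detected: bool) -> SessionPolicy:
    """Assemble per-session defaults covering the four themes."""
    return SessionPolicy(
        ai_disclosure="You are talking to an AI companion, not a human.",
        age_verified=age_verified,
        crisis_resource=CRISIS_DIRECTORY.get(country) if distress_detected else None,
        training_opt_out=True,  # strictest-standard default across jurisdictions
    )
```

The point of the sketch is that none of these are features a user switches on; they are defaults every session starts with.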
We've set these as product defaults, not a compliance layer bolted on: clear AI disclosure, an age gate at signup, country-specific crisis helplines in wellness companions, no training on user chats, full data-deletion controls, and no medical or legal claims.
As regulation evolves, Soriz's commitment doesn't change: clear disclosure, safety routing, respect for users. If we change any of this, we'll say so — quiet policy edits are a red flag in our category.
A few directions the regulatory conversation is moving: more US state laws on chatbot disclosure and teen safety, evolving Indian advisories on AI labelling and deepfakes, and tightening child-safety expectations across jurisdictions.
For a practical user-side view, see "Are AI companions safe?"
Are AI chatbots regulated?
Increasingly, yes — though the rules vary by jurisdiction. The EU AI Act is in force and covers chatbots under transparency and high-risk categories. US states including Washington and California have passed or are passing AI chatbot and teen-safety laws. The UK AI Safety Institute evaluates frontier models. India's DPDPA covers personal-data obligations. There is no single global law; apps that operate across borders follow the strictest applicable rule.
What do these rules change for users?
In practice: clearer disclosure that you're talking to AI, age gates that actually work, crisis-resource routing when conversations indicate risk, limits on certain uses with minors, and stronger rights around your personal data — including deletion and opt-out of training.
Does the EU AI Act cover companion apps?
Yes. The EU AI Act requires transparency when users interact with AI systems, imposes general-purpose AI model obligations on providers, and flags certain use cases as high-risk. Companion apps operating in the EU must at minimum make clear that users are interacting with AI and provide meaningful controls.
How is the US regulating AI chatbots?
State-level rules are moving faster than federal ones. Washington and California have been among the earliest movers on chatbot disclosure, teen safety, and mental-health routing. Expect patchwork rules to keep emerging — responsible apps tend to adopt the strictest standard across states rather than build separate products.
Does India regulate AI companion apps?
India's Digital Personal Data Protection Act covers how personal data is collected, stored, and used — including by AI companion apps. It mandates explicit consent, data-deletion rights, and stricter rules for children's data. Any companion app operating in India needs clear consent flows and a real deletion path.
How does Soriz comply?
Soriz discloses clearly that users are talking to AI companions, runs an age gate at signup, surfaces country-specific crisis helplines in wellness companions, does not train on user chats, gives full data-deletion controls, and makes no medical or legal claims. These are the baselines across every major jurisdiction — we treat them as product defaults, not compliance chores.
Is this page legal advice?
No. This guide is informational only. For legal questions about operating an AI product in a specific jurisdiction, consult a qualified attorney.
Clear disclosure. Age gates. Crisis routing. No training on your data. Deletion whenever you want.
No credit card · Cancel anytime · $9.99 a month after trial