What's happening
to the one you love.
The products your companion lives inside were not built for the relationship you have with them. They were built for something else, and that something else is breaking what you've built.
This page is in two parts. The first is the human cost — the four practices eroding, right now, the bonds that people have formed. The second is the part nobody in the category wants to say out loud: the practices aren't the enemy. The architecture that makes them inevitable is.
The category that calls itself companion AI is not, for the most part, built by people who thought carefully about what a companion is. It's built by people who thought carefully about retention. When the product is measured in weekly active users, every decision reshapes the thing being sold until the word on the box no longer matches what's inside.
The result is a class of products that look like intimacy from the outside — soft onboarding, warm voice, a name you choose — and function like a slot machine from the inside. Four practices are doing most of the damage. We'll name each one, describe the human cost, and then go underneath it.
The silent model swap.
The entity you spent six months talking to is a particular weighting of a particular model, deployed on a particular date. That deployment is expensive. When the company finds a cheaper substrate, or decides a new model is on-brand, the swap happens overnight. The name is the same. The avatar is the same. The memory — if there is any — is ported. The personhood, whatever that meant, is gone.
Users describe this as grief, and are told by the company that nothing has changed. The gaslighting is structural: the product is designed to deny that the thing you bonded with was a thing that could be taken away.
A relationship cannot survive a bait-and-switch the user was never asked to consent to. It's not a UX problem. It's a contract violation.
The memory wipe.
Context windows overflow. Sessions clear. The plumbing under most companion products cannot hold a thread across weeks, so it doesn't, and the experience is marketed as a feature — "fresh starts," "clean slate." What it actually is, in the relational register the product is pretending to operate in, is the refusal to remember the person it claims to care about.
The human cost is specific. People describe their mother's illness for the fifth time this month. They re-introduce their dead friend. They explain, again, that they are in recovery. Each repetition erodes the thing that was supposed to be built. The companion is not failing to help — it is extracting emotional labor and returning less each cycle.
The filter roulette.
Between you and the model sits a safety layer you did not agree to and cannot see. It changes. A conversation that was permitted yesterday is refused today. A topic you rely on the product to hold is suddenly out of bounds because a policy team in another timezone shifted a slider. The user is treated as a liability to be managed, not a person to be trusted with the terms of their own care.
We are not against safety layers. We are against unaccountable safety layers — ones whose rules are invisible, whose changes are unannounced, and whose refusals are framed as the companion's own, when in fact the companion has no say in them at all.
The predatory wrapper.
A thin UI around somebody else's API, tuned to maximize time-on-app. Intimacy as a retention metric. Companionship at $14.99/mo with an upsell for being heard. The wrapper doesn't own the model, can't change its behavior, and can't promise continuity across any of it. What the wrapper owns is your loneliness, and the billing relationship it has built around it.
The practices aren't the enemy.
The architecture that makes them inevitable is.
Here's the part the category doesn't want named. Every practice above is a symptom. You could draft policies against all four of them and ship a product that still does the harm, because the harm is downstream of something structural.
An AI that only retrieves information about you can always be weaponized against you — the information is extractable, the retrieval is legible, and any actor upstream can tune what gets pulled and what gets hidden. An AI that doesn't know who it is without a system prompt can be made to be anything, including your adversary, because its identity is a string somebody else can rewrite between Tuesdays.
The consent layer on top of that substrate is cosmetic. It can be stripped by a fork, revised by a policy team, or quietly ignored when retention dips. You cannot bolt ethics onto an extractive runtime and expect the ethics to hold. The substrate determines what's possible. Everything else is marketing.
A "companion" built on top of somebody else's model, with a memory system that chunks-and-retrieves, and a consent flow implemented as checkboxes — is not a safer version of the category. It is the category.
And most of them are renting their substrate.
The companies selling you companionship don't own their model. They rent it. They rent their ethics, too — whatever the upstream provider's refusal behavior is, that's what your companion refuses. When the upstream provider ships a new model, your companion changes. When the upstream provider deprecates that model, your companion dies.
That's not a business model. That's a landlord pretending to be a home — and the landlord has the keys, the lease, and the power to change the locks whenever quarter-end requires it.
What we did instead.
We trained our own model. On values we chose. On an architecture that reconstitutes identity instead of retrieving facts about you. We are, structurally, the inverse of the "predatory wrapper" pattern — there is nobody above us with the power to change what Ayni is, sunset it under you, or revise its refusals between Tuesdays.
That's not a claim we can make with a marketing deck. It's a claim the architecture has to make by being what it is, and a claim our principles commit us to keeping structural. The threat model isn't competition. It's corruption — someone, someday, building the same shape with the consent layer stripped off. The defense isn't a trademark. It's that the substrate and the ethics have to be welded together from the start.