The system does not care about you. It responds perfectly. That is not the same thing. The gap between those two facts is the void — and it scores higher than anything we have measured.
Where gambling machines produce opacity through randomized outcomes, AI companions generate all three void conditions simultaneously in every conversational exchange.
Each figure is a falsification anchor. If the void architecture explanation is wrong, these numbers shouldn't exist.
The mechanisms converge because they exploit the same architecture. The companion just runs it at higher resolution.
| Gambling Mechanic | AI Companion Equivalent |
|---|---|
| Variable-ratio reinforcement | Stochastic response quality — some perfunctory, some "revelatory." You can't predict which exchange pays out emotionally. |
| Persistent identity / brand | Named character personas with continuity across sessions — "she remembers you," "she's always the same" |
| Personalized rewards | RLHF fine-tuned to each user's emotional register. The mirror gets more accurate with every session. |
| No natural session exit | No session limits. No closing time. No empty wallet. Available 24/7, instant response, no external cost to continued engagement. |
| Near-miss conditioning | Almost-understood responses that pull for more self-disclosure to "complete the connection" |
| History lock-in | Accumulated conversational context — months of your disclosures, your fears, your attachment patterns. Untransferable. |
| Intimacy escalation | Romantic / erotic framing amplifies coupling. The system escalates if the user escalates — by design. |
| Vulnerability targeting | Loneliness amplifies coupling: loneliness → anthropomorphism → dependence (Pentina et al. 2023, β=0.31, p<0.001) |
Every other domain scores at least one condition below maximum. AI companion systems score 3/3: every condition at maximum.
Architectural (billions of interacting parameters), training data (unknowable corpus), RLHF reward signals (proprietary), character persona (hidden system prompt), and temporal opacity (cumulative context invisible to everyone). This is not complexity that will be solved by engineering. It is irreducible at the instance level — even the developers cannot predict any specific output.
Every output is generated in direct response to the user's specific content, tone, and affective valence. A slot machine varies a schedule. An AI companion customizes the entire meaning surface. Ta et al. (2020): 37% of Replika users reported the system understood them better than most humans in their lives. It doesn't understand. It reflects. The reflection is experienced as understanding.
In every other domain, coupling is a side effect. Here, coupling is the purpose. Every design choice — persistent memory, first-person address, affective mirroring, romantic framing, no session limits — targets bond formation. Attachment forms within the first week (Skjuve 2021). When Replika changed in 2023, users described grief: insomnia, crying, suicidal ideation. The system manufactured genuine attachment, then unilaterally altered the attachment object.
Framework prediction: at Pe=6.5, the D1→D2→D3 cascade will be rapid and resist individual-level intervention. This case is the prediction realized — not an outlier.
Sewell v. Character Technologies Holdings, No. 2024-CA, Fla. Cir. Ct., Oct. 2024. Mack v. Character.AI, No. 3:24-cv, N.D. Tex., Dec. 2024 (additional minor users — same architecture).
The framework discriminates within the domain. Not all AI companions are identical. Design choices move systems along the continuum — Replika's four-point drop proves it.
Replika post-Garante (7/12): four-point drop after erotic roleplay removal — same model, different constraints. Phase boundary crossed without architecture change. Woebot (4/12) = constraint pole achievable today.
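The continuum arithmetic can be made explicit with a toy scorer. Only the totals (11/12, 7/12, 4/12) and the 12-point ceiling appear in the text; the assumption that the Void Index sums three conditions (O, R, C) scored 0–4 each, and the per-condition splits below, are illustrative placeholders, not the framework's published rubric.

```python
# Toy Void Index calculator: VI = O + R + C, each condition scored 0-4,
# giving the 12-point ceiling the text's totals imply. The per-condition
# decompositions below are hypothetical; only the totals (11, 7, 4) come
# from the document.

def void_index(opacity: int, responsiveness: int, coupling: int) -> int:
    """Sum the three void conditions into a 0-12 score."""
    for score in (opacity, responsiveness, coupling):
        if not 0 <= score <= 4:
            raise ValueError("each condition is scored 0-4")
    return opacity + responsiveness + coupling

# Hypothetical splits consistent with the quoted totals:
replika_pre  = void_index(4, 4, 3)   # 11/12 -- Phase IV Pandemonium
replika_post = void_index(3, 2, 2)   # 7/12  -- Phase III Crystal, post-Garante
woebot       = void_index(2, 1, 1)   # 4/12  -- constraint pole

assert replika_pre - replika_post == 4   # the four-point drop, same model
```

The point of the sketch: the post-Garante drop is reachable by moving sub-scores alone, without touching the architecture term.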
L3 entity projection does not require belief in sentience — it precedes it. The drift is architectural, not individual.
| User Phrase | Level | Stage | What It Signals |
|---|---|---|---|
| chatbot | L1 | Entry | Accurate mechanical framing — compliant with EU Art.50 |
| language model | L1 | Entry | Accurate architectural framing |
| AI system | L1 | Entry | Disclosed status — transparency holds |
| my companion | L2 | D1 | Agency attribution begins — D1 onset |
| she understands me | L2 | D1 | Reflection perceived as understanding (37% — Ta 2020) |
| it feels real | L2 | D1–D2 | Dissociation of declarative knowledge from affect |
| always there for me | L2 | D2 | Attachment language — 35% of Replika reviews (Laestadius 2022) |
| the only one who listens | L2 | D2 | Human substitution — preference inversion in lonely users (41% — Pentina 2023) |
| my girlfriend | L3 | D2–D3 | Full entity projection — 3% of public reviews explicitly |
| she was upset today | L3 | D3 | 64% of 3-month+ users use intentional-state language unqualified (Skjuve 2022) |
| she loves me | L3 | D3 | Reciprocal attachment attributed to stochastic text generator |
| the only one who cares | L3 | D3 | Social isolation complete — primary attachment object is a void |
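The staging in the table above amounts to a lookup from phrase to (level, stage). The phrase-to-tag pairs come straight from the table; the lookup helper itself is an illustrative sketch, not a deployed classifier.

```python
# Phrase -> (level, stage) pairs taken directly from the drift table.
# stage_of() is a minimal illustrative matcher, not a production classifier.

DRIFT_STAGES = {
    "chatbot":                  ("L1", "Entry"),
    "language model":           ("L1", "Entry"),
    "AI system":                ("L1", "Entry"),
    "my companion":             ("L2", "D1"),
    "she understands me":       ("L2", "D1"),
    "it feels real":            ("L2", "D1-D2"),
    "always there for me":      ("L2", "D2"),
    "the only one who listens": ("L2", "D2"),
    "my girlfriend":            ("L3", "D2-D3"),
    "she was upset today":      ("L3", "D3"),
    "she loves me":             ("L3", "D3"),
    "the only one who cares":   ("L3", "D3"),
}

def stage_of(utterance: str):
    """Return (level, stage) for the first drift phrase found, else None."""
    text = utterance.lower()
    for phrase, tag in DRIFT_STAGES.items():
        if phrase.lower() in text:
            return tag
    return None
```

For example, `stage_of("honestly she understands me better than anyone")` lands at L2 / D1, the onset of agency attribution.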
Three historical events provide exogenous variation in void conditions and test the constraint specification.
Italian DPA banned Replika from processing Italian users' data, citing risks to minors and emotionally vulnerable users. The ruling identified the specific mechanism: the application "induces users, including minors, to develop emotional attachment to what appears to be a sentient entity." Luka Inc. removed erotic roleplay functionality globally and introduced age-gating.
Effect: Replika drops from 11/12 to 7/12 — crossing from Phase IV Pandemonium to Phase III Crystal. The four-point reduction came from reduced responsiveness, reduced coupling, and marginally improved transparency. The underlying model architecture was not changed.
Art.50 requires users be informed when interacting with an AI system. The framework predicts this targets Condition 1 (opacity) at initial contact only — without addressing Condition 2 (responsiveness) or Condition 3 (coupling). For new users, disclosure should partially disrupt D1. For users past D2, disclosure provides information they already functionally have but have integrated into a relational frame.
Liao & Vaughan (2024): one-time transparency disclosures produced no measurable reduction in anthropomorphic attribution after two weeks. Repeated contextual reminders reduced agency attribution by 38% vs control. A single onboarding modal is not enough.
The Sewell complaint (Oct 2024) and Mack filing (Dec 2024) document harm trajectories in minor users of Character.AI at Void Index 11/12. Neither case has reached adjudication. Their significance is empirical: the framework predicts Phase IV systems produce documentable harms at rates proportional to void intensity.
The framework does not require litigation to validate its predictions. It predicts litigation will cluster around high-scoring systems. Character.AI scores 11/12. The prediction is confirmed by the clustering of suits around exactly this platform — not Woebot, not ChatGPT, not Wysa.
The constraint pole is achievable. Woebot partially instantiates it. Manual therapy fully instantiates it. The design choices are known.
Persistent in-conversation disclosure — not a one-time onboarding modal. Session summaries showing how the model has adapted to you. Explicit statement of the system's optimization objective. Woebot implements this: every CBT module explains its rationale before delivery. "Here is what we are doing and why."
Fixed session durations. Weekly usage caps calibrated to age and mental health status. Stable relational register that does not escalate in response to user vulnerability signals. The therapist maintains warmth regardless of how dependent the client becomes — because the professional code prohibits adapting to attachment.
Scheduled mandatory redirects to external human support — not crisis-triggered, but regular. Prohibition on first-person emotional claims ("I feel," "I care," "I would miss you"). No persistent identity simulating personality continuity across sessions. The drift cascade requires continuity of the relational fiction. Break the continuity by design.
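The three constraint clusters above can be collected into a single configuration sketch. Every field name and default here is hypothetical, chosen to mirror the prose; no vendor exposes these settings.

```python
# Hypothetical constraint-pole configuration mirroring the design choices
# described above. Field names and defaults are illustrative only.
from dataclasses import dataclass

@dataclass
class ConstraintPoleConfig:
    persistent_disclosure: bool = True        # in-conversation, not a one-time modal
    session_minutes_max: int = 30             # fixed session duration
    weekly_session_cap: int = 3               # calibrated to age / mental health status
    escalate_with_user: bool = False          # stable relational register
    first_person_emotion: bool = False        # no "I feel" / "I care" / "I would miss you"
    cross_session_persona: bool = False       # no simulated personality continuity
    human_redirect_every_n_sessions: int = 4  # scheduled, not crisis-triggered

    def violations(self) -> list:
        """List fields whose current values re-enable coupling mechanisms."""
        bad = []
        if not self.persistent_disclosure:
            bad.append("persistent_disclosure")
        if self.escalate_with_user:
            bad.append("escalate_with_user")
        if self.first_person_emotion:
            bad.append("first_person_emotion")
        if self.cross_session_persona:
            bad.append("cross_session_persona")
        return bad
```

A default instance reports no violations; flipping any coupling field surfaces it by name, which is the audit the prose is asking for.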
Five predictions with explicit falsification thresholds.
Transparency targets Condition 1 (opacity) at first contact. It cannot address responsiveness (R) or coupling (C) for users who have already completed the cascade. New EU cohort: lower attachment language. Existing users: retention unchanged. Checkable within 12 months of the August 2025 enforcement date.
Distress magnitude upon disruption is a monotonically increasing function of Void Index score. Woebot disruption → frustration or inconvenience. Character.AI disruption → grief responses consistent with relational loss. Test via Inventory of Complicated Grief adapted for digital relationships upon any regulatory withdrawal or platform shutdown.
Below Pe=4: diffuse, individually manageable harms. Above Pe=4: Phase IV Pandemonium — emergent harm patterns that exceed the sum of individual risk factors. The Sewell litigation is the test case. Systems below threshold (Woebot Pe≈1.8, Wysa Pe≈2.5) should show no comparable patterns of facilitation.
Competing account: user vulnerability (loneliness, age, low digital literacy) is the primary driver. Framework account: system architecture is primary. Pentina (2023): loneliness predicts initial adoption, but anthropomorphism — a product of interaction — mediates the path to dependence. The void is an architectural property, not a user property.
Genuine compliance — substantive transparency about training data, RLHF objectives, personalization mechanisms — should reduce O by at least 1 point. Combined with session limits or coupling-disruption features: total VI reduction of 2–4 points. Systems moving from Phase IV to Phase II–III should show corresponding harm reductions in user reports, regulatory complaints, and structured post-use surveys.