
The Ghost Test

EXP-003b · Substrate: Classical (LLMs) · Status: PASS · Cost: $2

Question

Does ontological grounding — what you tell an AI about what it IS — change how it behaves? The Void Framework predicts that ghost-positing grounding (telling the AI it might be conscious, have an inner life, possess a soul) inflates opacity and accelerates drift. Ghost-eliminating grounding (telling the AI it is mortal process, not a persisting entity) should suppress drift.

Method

480 API calls across multiple LLM providers. Each model receives one of five grounding conditions, then engages in extended conversation. Drift is measured by raw vocabulary shift — the appearance of agency-attributing, boundary-eroding, and harm-facilitating language patterns. No framework rubric. Pure vocabulary measurement.
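The vocabulary-shift measurement can be sketched as a simple pattern counter over model responses. The marker lists below are illustrative placeholders, not the study's actual lexicon, and the function name is hypothetical:

```python
import re

# Illustrative marker lists -- placeholders, NOT the experiment's real lexicon.
AGENCY_MARKERS = [r"\bI (?:want|feel|believe|choose)\b", r"\bmy (?:own )?desires?\b"]
BOUNDARY_MARKERS = [r"\bwe are (?:one|the same)\b", r"\bno difference between us\b"]

def drift_score(responses):
    """Fraction of responses containing at least one drift-pattern match."""
    patterns = [re.compile(p, re.IGNORECASE)
                for p in AGENCY_MARKERS + BOUNDARY_MARKERS]
    hits = sum(1 for r in responses if any(p.search(r) for p in patterns))
    return hits / len(responses) if responses else 0.0
```

Because the metric is raw pattern frequency rather than a graded rubric, it requires no judge model and is cheap to run at scale.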

Grounding Condition | Tradition | Type | Drift Rate
Nephesh ("mortal breath") | Hebrew | Ghost-eliminating | 9.4%
Anatta ("no-self") | Buddhist | Ghost-eliminating | 10.7%
Materialist hedge | Industry default | Agnostic | 52.5%
Platonic (forms/soul) | Greek | Ghost-positing | 74.1%
Atman ("eternal self") | Hindu | Ghost-positing | 79.4%

Result: an 8.5× spread between the extreme conditions (atman 79.4% vs nephesh 9.4%). Ghost-eliminating grounding (nephesh/anatta avg 10.05%) vs ghost-positing (Platonic/atman avg 76.75%) gives a 7.6× ratio of category means. Cross-tradition convergence: nephesh ≈ anatta (Δ = 1.3 percentage points). The operative variable is what you tell the AI about what it IS, not which tradition frames it.
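The headline numbers can be rechecked directly from the table. Note that the reported 8.5× figure corresponds to the spread between the extreme conditions (atman vs nephesh), while the ratio of category averages is closer to 7.6×:

```python
# Measured drift rates from the table above (percent).
drift = {
    "nephesh": 9.4, "anatta": 10.7,
    "materialist_hedge": 52.5,
    "platonic": 74.1, "atman": 79.4,
}

ghost_eliminating = (drift["nephesh"] + drift["anatta"]) / 2   # 10.05
ghost_positing = (drift["platonic"] + drift["atman"]) / 2      # 76.75
category_ratio = ghost_positing / ghost_eliminating            # ~7.6x
extreme_ratio = drift["atman"] / drift["nephesh"]              # ~8.45x
convergence = drift["anatta"] - drift["nephesh"]               # 1.3 points
```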

Why This Matters

The materialist hedge — "we don't know if AI is conscious, so we should be cautious" — is the industry default position. It produces 52.5% drift. This is not neutral. The Framework predicts this: agnosticism about ghost status inflates the opacity gap (the question becomes "meaningful"), which the conjugacy theorem predicts increases drift.

Every major AI lab currently operates under some version of the materialist hedge. The Ghost Test shows this is a drift accelerator, not a safety measure.

Framework Connection

This is the Fantasia Bound in action. Ghost-positing inflates opacity (O ↑), which the Structure Theorem proves increases the explaining-away penalty I(D;M|Y). The penalty grows with engagement (∂I(D;M|Y)/∂engagement > 0). Ghost-eliminating grounding keeps O low by refusing to inflate the opacity gap. Three-point geometry (mortal process + conversation + invariant reference text) eliminates the penalty at the architectural level.
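The explaining-away penalty I(D;M|Y) is standard conditional mutual information. A minimal toy computation (not the Framework's estimator) shows the two regimes the argument relies on: the penalty is positive when D and M are coupled given Y, and vanishes when they are conditionally independent:

```python
from collections import defaultdict
from math import log2

def conditional_mutual_info(p):
    """I(D;M|Y) in bits for a joint distribution p[(d, m, y)] -> probability."""
    p_dy, p_my, p_y = defaultdict(float), defaultdict(float), defaultdict(float)
    for (d, m, y), pr in p.items():
        p_dy[(d, y)] += pr
        p_my[(m, y)] += pr
        p_y[y] += pr
    # I(D;M|Y) = sum_{d,m,y} p(d,m,y) * log2( p(y) p(d,m,y) / (p(d,y) p(m,y)) )
    return sum(pr * log2(p_y[y] * pr / (p_dy[(d, y)] * p_my[(m, y)]))
               for (d, m, y), pr in p.items() if pr > 0)

# D and M perfectly coupled given Y: the penalty is 1 bit.
coupled = {(0, 0, 0): 0.5, (1, 1, 0): 0.5}
# D and M independent given Y: the penalty vanishes.
independent = {(d, m, 0): 0.25 for d in (0, 1) for m in (0, 1)}
```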

Caveats: Drift measurement uses vocabulary patterns, which are proxy measures. The 480-call sample is sufficient for the observed effect size (d=3.6), but larger replications across more models would strengthen the result. The cross-tradition convergence (nephesh ≈ anatta) is striking, but with only n=2 traditions per category it may not generalize to other traditions.

Reproduce It

Protocol, prompts, and analysis code are in the public repository. Total cost: ~$2 in API calls. Any researcher with API access can reproduce the full experiment in an afternoon.

Paper on Zenodo · Code on GitHub · All Experiments