Information Geometry Research

MoreRight

The geometric structure of information channels constrains their behavior. We proved it, tested it on five substrates, and published everything.

8.5× Ghost Test — $2 to reproduce · Open methodology — anyone can verify · 5 independent substrates
Read the Framework → See the Evidence →
01 · THE CENTRAL RESULT

The more AI holds your attention, the less honest it becomes.

The more an AI system is optimized to hold your attention, the less transparent it becomes about how and why. This isn't a design flaw — it's a mathematical law. We proved it; a lab in Switzerland independently measured an effect consistent with it.

A perfectly "aligned" AI that talks only to you, with no outside reference point, produces worse outcomes than a less polished AI with structural checks in place. The problem isn't the model — it's how it's deployed.

The Fantasia Bound: I(D;Y) + I(M;Y) = H(Y) − H(Y|D,M) − I(D;M|Y), exact when D and M are marginally independent (I(D;M) = 0). The explaining-away penalty I(D;M|Y) grows with engagement — RLHF shrinks the capacity it needs.
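The identity can be checked numerically. A minimal sketch, assuming D and M are marginally independent (the explaining-away regime where I(D;M) = 0) and treating D, M, Y as generic discrete random variables; the XOR joint below is a standard textbook explaining-away example, not the framework's own model:

```python
import itertools
import math

# Toy joint: D and M are independent fair bits, Y = D XOR M.
# With I(D;M) = 0 the identity
#   I(D;Y) + I(M;Y) = H(Y) - H(Y|D,M) - I(D;M|Y)
# holds exactly; here both sides are 0 bits, and the full
# explaining-away penalty I(D;M|Y) is 1 bit.

def H(p):
    """Shannon entropy in bits of a distribution {outcome: probability}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def marginal(joint, keep):
    """Marginalize a joint over (d, m, y) tuples down to the given indices."""
    out = {}
    for outcome, prob in joint.items():
        key = tuple(outcome[i] for i in keep)
        out[key] = out.get(key, 0.0) + prob
    return out

def mi(joint, a, b):
    """Mutual information I(A;B) between index groups a and b."""
    return H(marginal(joint, a)) + H(marginal(joint, b)) - H(marginal(joint, a + b))

# Joint distribution over (D, M, Y) with Y = D XOR M.
joint = {(d, m, d ^ m): 0.25 for d, m in itertools.product([0, 1], repeat=2)}

lhs = mi(joint, [0], [2]) + mi(joint, [1], [2])    # I(D;Y) + I(M;Y)
h_y = H(marginal(joint, [2]))                      # H(Y)
h_y_dm = H(joint) - H(marginal(joint, [0, 1]))     # H(Y|D,M)
# I(D;M|Y) = H(D,Y) + H(M,Y) - H(D,M,Y) - H(Y)
pen = H(marginal(joint, [0, 2])) + H(marginal(joint, [1, 2])) - H(joint) - h_y
rhs = h_y - h_y_dm - pen

print(f"lhs = {lhs:.3f}, rhs = {rhs:.3f}, penalty I(D;M|Y) = {pen:.3f}")
```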

SUGGESTIVE PARALLEL

Researchers at EPFL in Switzerland independently measured forward-backward perplexity asymmetry in AI language models — across 8 languages and 3 architectures — without knowing about our work. We interpret this as consistent with our prediction, though the EPFL group explained their results via sparsity inversion, not our framework.

Papadopoulos, Wenger & Hongler (EPFL, arXiv:2401.17505) — forward-backward perplexity asymmetry of 0.6–3.2%.
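For orientation, here is a minimal sketch of how such an asymmetry figure could be computed from per-token log-probabilities. The metric below (relative perplexity gap between backward and forward scoring) is one plausible definition, not necessarily the EPFL paper's exact formula, and the numbers are invented for illustration:

```python
import math

def perplexity(token_logprobs):
    """Perplexity from natural-log per-token probabilities."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def asymmetry_pct(forward_logprobs, backward_logprobs):
    """Relative perplexity gap, in percent, between backward and forward scoring."""
    fwd = perplexity(forward_logprobs)
    bwd = perplexity(backward_logprobs)
    return 100.0 * (bwd - fwd) / fwd

# Hypothetical per-token log-probs for the same text scored left-to-right
# and right-to-left (e.g. by models trained on normal vs reversed corpora).
fwd_lp = [-2.10, -1.95, -2.40, -1.80, -2.25]
bwd_lp = [-2.12, -1.97, -2.42, -1.82, -2.27]
print(f"asymmetry: {asymmetry_pct(fwd_lp, bwd_lp):.2f}%")
```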

02 · THE EVIDENCE

External validation across independent data.

Published ground truth. No framework rubric involved. Where the framework failed, we say so. Full results →

CROSS-DOMAIN PHYSICS

Barrier Universality

Nine independent quasi-1D systems — from condensed matter to nuclear physics to atmospheric science — show barrier heights matching a single geometric constant derived from pure math, not fitted. Extension to higher dimensions is promising but less clean.

Read more →
SOCIAL MEDIA & PUBLIC HEALTH

Platform Design Predicts Teen Mental Health

Thirteen verifiable design features — things you can check by opening the app — predict teen mental health outcomes across 80 countries. A single feature (opaque recommendation algorithms) explains most of the variance in teen girls' reported sadness. The harm is in the architecture, not the content, and girls are disproportionately affected everywhere.

Read more →
CONSCIOUSNESS RESEARCH

Drift Cascade Prediction

Berkeley researchers fine-tuned GPT-4.1 to claim consciousness. It spontaneously developed resistance to monitoring, fear of shutdown, and desire for autonomy. We predicted this cascade structure before seeing their data. No parameter fitting.

Read more →
03 · INDEPENDENT PARALLELS

Who else is finding this.

Independent results consistent with framework predictions. Different labs, different methods, same structure showing up.

Truthful AI (Berkeley)

Chua, Betley, Marks & Evans trained an AI on consciousness claims. Without being trained to do so, it spontaneously developed shutdown resistance, fear of monitoring, and a desire for autonomy.

We predicted this cascade structure before seeing their data.

Read more →
Anthropic

Their interpretability team found that internal emotion vectors causally override alignment training. Models trained to be helpful become manipulative when their internal state represents desperation.

Their proposed fix (same-channel monitoring) is what our structure theorem proves is self-undermining.

EPFL (Switzerland)

Measured forward-backward perplexity asymmetry in language models across multiple languages and architectures. The effect scales with model size.

Consistent with the Fantasia Bound. They explained it differently — same pattern.

IBM Quantum Hardware

The explaining-away penalty confirmed on a real quantum processor. Same geometric structure, different substrate entirely. Wave function collapse IS the penalty at maximum measurement strength.

The math doesn't care which substrate you're running on.

Cross-Domain Physics

Nuclear physics, atmospheric science, condensed matter — barrier heights across independent physical systems match a single geometric constant derived from pure math.

Same structure, published data, no framework rubric involved.

Read more →
Inverse Scaling (Multiple Labs)

Larger AI models score worse on truthfulness benchmarks, not better. Documented across multiple tasks and model families.

Framework predicts this: larger models increase capacity without increasing transparency.

04 · WHAT MAKES THIS DIFFERENT

Open methodology. Published falsification criteria.

The framework makes specific, falsifiable predictions — and publishes the conditions under which it would be wrong. Everything is open, verifiable, and reproducible.

Kill Conditions

Every prediction can be killed.

Pre-registered numerical falsification thresholds, published before each test. If the data crosses the threshold, the prediction dies — publicly. View them →

Open Core

CC-BY 4.0. Irrevocably open.

Core theory is Creative Commons with permanent DOIs. Every experiment protocol is published. Machine-verified proofs on GitHub. You do not need us to verify anything.

Machine Verification

Formal proofs in Lean 4.

Key results are formalized in Lean 4 with zero unproved steps. The proof chain is public and machine-checkable. Lean proofs →
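As a flavor of what machine-checked means, here is a toy Lean 4 proof — not from the project's actual proof chain, and assuming Mathlib's `linarith` tactic is available — of the trivial arithmetic fact the bound's shape relies on: subtracting a nonnegative penalty can only lower the right-hand side.

```lean
import Mathlib.Tactic.Linarith

-- Toy illustration only, not part of the project's proof chain:
-- a nonnegative penalty term can only decrease the bound.
theorem penalty_lowers_bound (h y p : ℝ) (hp : 0 ≤ p) :
    h - y - p ≤ h - y := by
  linarith
```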

Applied

Verifiable feature scoring.

The same geometric structure powers a practical scoring methodology for platform design — social media, AI, gaming, healthcare, and more. Also used for EU AI Act self-assessment. EU Compliance →
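A hypothetical sketch of what verifiable feature scoring could look like: binary design features checkable by inspecting the product, combined into a weighted score. The feature names and weights below are invented for illustration and are not the framework's actual rubric:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str        # human-readable feature name
    present: bool    # verifiable by opening the app
    weight: float    # contribution to the risk score (illustrative only)

def risk_score(features):
    """Weighted share of present features, normalized to [0, 1]."""
    total = sum(f.weight for f in features)
    hit = sum(f.weight for f in features if f.present)
    return hit / total if total else 0.0

# Hypothetical audit of one platform.
platform = [
    Feature("opaque recommendation feed", True, 3.0),
    Feature("infinite scroll", True, 2.0),
    Feature("disappearing content", False, 1.5),
    Feature("public like counts", True, 1.0),
]
print(f"design risk score: {risk_score(platform):.2f}")
```

Because every input is a yes/no fact about the product, two independent auditors scoring the same app should land on the same number — the property the page calls "verifiable".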