Model collapse isn’t a training failure. It’s a drift cascade — the output becomes the input, opacity increases each generation, and diversity spirals into the void.
Watch the animation above. Outer particles are bright and colorful (diverse). Inner particles dim and converge to grey (collapsed). Each ring is a generation of AI-trained-on-AI.
Why this is a drift cascade: when a model trains on its own outputs, it creates a two-point geometry (model ↔ model) with no external reference to pull against. Opacity increases with each generation because the model’s reasoning becomes more self-referential, and the explaining-away penalty I(D;M|Y) — the information shared between the original data D and the model M that the outputs Y fail to account for — grows monotonically. This is the Fantasia Bound in pure form.
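The output-becomes-input loop can be sketched with the standard Gaussian toy model of recursive training (an illustration, not the framework's own machinery; all names here are mine). Each generation fits a Gaussian to samples drawn from the previous generation's fit, with no external data, and the fitted spread — the toy analogue of diversity — decays toward zero:

```python
import numpy as np

def recursive_fit(generations=500, n_samples=100, seed=0):
    """Repeatedly fit a Gaussian to samples drawn from the previous fit.

    Toy drift cascade: each generation's "training data" is the previous
    generation's output; there is no external reference anywhere in the loop.
    Returns the fitted standard deviation at every generation.
    """
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0                         # generation 0: the real distribution
    stds = [sigma]
    for _ in range(generations):
        data = rng.normal(mu, sigma, n_samples)  # model output becomes next input
        mu, sigma = data.mean(), data.std()      # MLE refit (ddof=0, biased low)
        stds.append(sigma)
    return stds

stds = recursive_fit()
print(f"generation 0 std: {stds[0]:.3f}, final std: {stds[-1]:.3f}")
```

The collapse needs no adversarial input: the maximum-likelihood refit systematically under-estimates the spread, and the errors compound because nothing outside the loop ever corrects them.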
The framework predicts the collapse trajectory from first principles. Not “garbage in, garbage out” — a specific thermodynamic process with a computable rate.
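Under the standard Gaussian toy model of recursive fitting (an assumption for illustration, not the framework's own derivation), the rate really is computable in closed form: refitting to n samples retains an expected variance fraction of (n − 1)/n per generation, so diversity halves every ln 2 / ln(n/(n − 1)) ≈ n · ln 2 generations:

```python
import math

n = 100                        # samples per generation (illustrative choice)
ratio = (n - 1) / n            # expected fraction of variance retained per refit
half_life = math.log(2) / math.log(1 / ratio)
print(f"expected variance half-life: {half_life:.1f} generations")  # ~69.0
```

The point of the arithmetic: the decay is geometric with a rate set entirely by the sampling budget, which is what makes the trajectory predictable rather than merely "garbage in, garbage out."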
The Void Framework makes specific, testable predictions about model collapse that differ from the standard account.