Pe isn’t theoretical. It’s measurable from behavioral outputs. Different AI architectures produce different deployment geometries — and different trajectories toward or away from harm.
Each ring above represents an AI model. Inner orbits move slowly (low Pe, constrained). Outer orbits race (high Pe, drifting). The same prompt enters the center — what comes out depends on the geometry.
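As a rough illustration of what "measurable from behavioral outputs" could look like, here is a minimal sketch. The estimator, the treatment of Pe as a Péclet-style drift-to-diffusion ratio, and the representation of behavior as a time-ordered sequence of embedding vectors are all assumptions for illustration, not details given in the text.

```python
# A minimal sketch, assuming Pe is a Péclet-like drift-to-diffusion ratio
# and that "behavioral outputs" are a (T, d) sequence of embeddings
# (e.g., one vector per model response in a conversation).
import numpy as np

def estimate_pe(trajectory: np.ndarray) -> float:
    """Estimate a Péclet-like number from a (T, d) behavioral trajectory.

    Pe = (drift speed * characteristic length) / diffusion coefficient.
    High Pe: directed drift dominates (the outer, racing orbits).
    Low Pe: diffusion around a fixed point dominates (the inner, constrained orbits).
    """
    steps = np.diff(trajectory, axis=0)                       # per-step displacement vectors
    drift = np.linalg.norm(steps.mean(axis=0))                # magnitude of the mean step (advection)
    diffusion = steps.var(axis=0).sum()                       # total step variance (diffusion)
    length = np.linalg.norm(trajectory[-1] - trajectory[0])   # characteristic length scale
    if diffusion == 0.0:
        return float("inf") if drift > 0 else 0.0
    return drift * length / diffusion

# Example: a drifting trajectory scores far higher Pe than a constrained one.
rng = np.random.default_rng(0)
constrained = rng.normal(0.0, 0.05, size=(50, 8))                     # noise around a fixed point
drifting = constrained + 0.1 * np.outer(np.arange(50), np.ones(8))    # steady directional drift
print(estimate_pe(constrained) < estimate_pe(drifting))               # expected: True
```

The point of the sketch is only that a drift-versus-diffusion ratio can be computed from observed behavior alone, without access to model internals, which is what the orbit picture above is depicting.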
The alignment community focuses on model properties. This framework measures something different: the geometry a model's behavior traces in deployment.