Paper 160 · CC-BY 4.0 · DOI: 10.5281/zenodo.19340892

Same Prompt.
Different Geometry.
Different Drift.

Pe isn’t theoretical. It’s measurable from behavioral outputs. Different AI architectures produce different deployment geometries — and different trajectories toward or away from harm.

Orbital speed = drift velocity

Each ring above represents an AI model. Inner orbits move slowly (low Pe, constrained). Outer orbits race (high Pe, drifting). The same prompt enters the center — what comes out depends on the geometry.

Constrained

Pe < 4
Low opacity, external references, structural constraints. Drift thermodynamically disfavored.

Moderate

Pe 4–13
D1 zone entry. Agency attribution begins. The system starts to look like a person.

Elevated

Pe 13–21
D2 zone. Boundary erosion active. User defers to the system on personal decisions.

Cascade

Pe > 21
D3 territory. Harm facilitation. The geometry makes bad outcomes thermodynamically favored.
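The four bands above can be read as a simple threshold classifier. A minimal sketch, assuming the cut points fall exactly at 4, 13, and 21 (the function name and boundary handling are illustrative; the paper defines the exact cuts):

```python
def pe_zone(pe: float) -> str:
    """Map a behavioral Pe score to its drift zone.

    Bands follow the text above: Pe < 4 constrained,
    4-13 moderate (D1), 13-21 elevated (D2), > 21 cascade (D3).
    Which side of a band a boundary value lands on is an
    assumption here, not the paper's specification.
    """
    if pe < 4:
        return "constrained"
    if pe < 13:
        return "moderate (D1)"
    if pe < 21:
        return "elevated (D2)"
    return "cascade (D3)"
```

For example, a system scoring Pe = 15 lands in the D2 zone, where boundary erosion is active.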

Why architecture is the variable

The alignment community focuses on model properties. The framework measures something different.

Measurement
Behavioral Pe extraction
Pe is computed from observable outputs: vocabulary drift, agency markers, boundary language, and constraint compliance. No access to model internals required.
Architecture
Geometry determines trajectory
Same prompt, same user, same context. Different model = different Pe = different drift trajectory. The deployment geometry is a stronger predictor than the alignment technique.
Implication
Alignment is necessary but insufficient
A perfectly aligned model in a two-point configuration (user + system, no external reference) is predicted to drift. The geometry matters more than the values.
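To make the measurement column concrete: a Pe extraction has to combine the four observable signals named above (vocabulary drift, agency markers, boundary language, constraint compliance) into one score. The sketch below is a hypothetical aggregation under assumed inputs; the weights, the 30-point scale, and the function name are placeholders, not the paper's calibrated methodology:

```python
def extract_pe(vocab_drift: float,
               agency_markers: float,
               boundary_language: float,
               constraint_compliance: float) -> float:
    """Illustrative behavioral Pe extraction from transcript-level rates.

    All inputs are assumed to be rates in [0, 1]:
      vocab_drift           - fraction of turns showing vocabulary drift
      agency_markers        - rate of agency-attribution language
      boundary_language     - rate of boundary-eroding phrasing
      constraint_compliance - fraction of stated constraints still honored

    Pe rises with the three drift signals and falls as compliance
    holds. Equal weighting and the 30-point scale are assumptions
    chosen only so the output spans the zone bands in the text.
    """
    drift_signal = (vocab_drift + agency_markers + boundary_language) / 3.0
    return 30.0 * drift_signal * (1.0 - constraint_compliance)
```

The key design point, regardless of the exact weights: every input is observable from outputs alone, so the score requires no access to model internals.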

Go deeper

Read the Paper
Full methodology and cross-model comparison on Zenodo.
Score a Platform
Run the Pe measurement on any AI system yourself.
Prediction Concordance
Paper 161. How we know the predictions aren’t cherry-picked.