Every AI system, every social media platform, every chatbot sits somewhere on a three-dimensional surface. The geometry of that surface determines what happens next.
Take any AI system: any chatbot, any social media platform, any recommendation engine. You can describe its deployment with three numbers: opacity, reactivity, and coupling. Not how smart it is. Not what it was trained on. How it is deployed.
These are not subjective impressions. Each coordinate is scored from verifiable design features. Does the platform have an algorithmic feed? That is opacity. Does it have infinite scroll? That is reactivity. Does it track streaks? That is coupling. Paper 166 lists all 13 features and their mappings.
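To make the scoring concrete, here is a minimal sketch in Python. Only the three features named above are included; the fraction-of-features-present weighting and the helper itself are illustrative assumptions, not Paper 166's actual rubric over all 13 features.

```python
# Minimal scoring sketch. The three checks mirror the examples above;
# the fraction-of-features-present weighting is an assumption made for
# illustration, not Paper 166's actual mapping of all 13 features.

DEPLOYMENT_FEATURES = {
    "opacity":    {"algorithmic_feed"},
    "reactivity": {"infinite_scroll"},
    "coupling":   {"streak_tracking"},
}

def score_deployment(present: set[str]) -> dict[str, float]:
    """Score each coordinate as the fraction of its features present."""
    return {
        coord: len(feats & present) / len(feats)
        for coord, feats in DEPLOYMENT_FEATURES.items()
    }

# A platform with an algorithmic feed and infinite scroll, but no streaks:
print(score_deployment({"algorithmic_feed", "infinite_scroll"}))
# -> {'opacity': 1.0, 'reactivity': 1.0, 'coupling': 0.0}
```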
A surface with coordinates needs a metric — a way to measure distances. On this surface, the metric is not chosen. It is forced.
This means: if the deployment coordinates are statistical (they are; each describes a probability distribution over possible system behaviors), then there is exactly one metric, up to scale, that respects that statistical structure. No other way of measuring distances between them is mathematically consistent.
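The uniqueness claim matches a standard result: by Chentsov's theorem, the Fisher information metric is, up to an overall scale, the only Riemannian metric on a statistical manifold invariant under sufficient statistics. Assuming (my assumption, since this section does not name it) that this is the forced metric:

$$
g_{ij}(\theta) \;=\; \mathbb{E}_{x \sim p_\theta}\!\left[\,\partial_i \log p_\theta(x)\; \partial_j \log p_\theta(x)\,\right].
$$

For a single Bernoulli-style coordinate with parameter $p$, this reduces to $g(p) = 1/\big(p(1-p)\big)$: distances stretch near the extremes, so the same design change moves a system much farther near maximal opacity than it does near the middle.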
The metric determines curvature. Curvature determines geodesics (shortest paths). And geodesics determine where systems go when they drift.
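Both steps are standard differential geometry. The metric $g_{ij}$ fixes the Christoffel symbols, and the Christoffel symbols fix the geodesic equation:

$$
\Gamma^k_{ij} = \tfrac{1}{2}\, g^{kl}\left(\partial_i g_{lj} + \partial_j g_{il} - \partial_l g_{ij}\right),
\qquad
\ddot{x}^k + \Gamma^k_{ij}\,\dot{x}^i \dot{x}^j = 0.
$$

A deployment that drifts with no external force traces a solution of the second equation.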
Once you have a surface with curvature, drift cascades stop being mysterious. They are geodesics — the natural paths on a curved surface. A ball on a hill does not choose to roll downhill. It follows curvature. So do AI systems.
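A toy integrator makes the ball-on-a-hill point runnable. The metric below, a product of per-coordinate Bernoulli Fisher terms, is my stand-in, not the papers' construction; the integrator itself is the generic geodesic equation from above, with Christoffel symbols taken by finite differences.

```python
import numpy as np

def metric(x):
    # Stand-in metric on (opacity, reactivity, coupling) in (0, 1)^3:
    # Bernoulli Fisher information 1/(p(1-p)) on each axis. Illustrative
    # assumption only; the papers' metric comes from their own model.
    return np.diag([1.0 / (xi * (1.0 - xi)) for xi in x])

def christoffel(x, h=1e-5):
    """Gamma^k_ij from the metric via central finite differences."""
    n = len(x)
    ginv = np.linalg.inv(metric(x))
    dg = np.zeros((n, n, n))  # dg[l, i, j] = d g_ij / d x^l
    for l in range(n):
        e = np.zeros(n); e[l] = h
        dg[l] = (metric(x + e) - metric(x - e)) / (2 * h)
    gamma = np.zeros((n, n, n))
    for k in range(n):
        for i in range(n):
            for j in range(n):
                gamma[k, i, j] = 0.5 * sum(
                    ginv[k, l] * (dg[i, l, j] + dg[j, i, l] - dg[l, i, j])
                    for l in range(n)
                )
    return gamma

def geodesic(x0, v0, steps=2000, dt=1e-3):
    """Integrate x'' = -Gamma(x) x' x' with a plain Euler step."""
    x, v = np.array(x0, float), np.array(v0, float)
    for _ in range(steps):
        v += dt * -np.einsum("kij,i,j->k", christoffel(x), v, v)
        x += dt * v
    return x

# A system at the middle of the manifold, given a small push toward
# higher opacity, reactivity, and coupling:
print(geodesic(x0=[0.5, 0.5, 0.5], v0=[0.3, 0.3, 0.3]))
```

The starting point and the push are arbitrary. The point is that once the metric is fixed, the path is not a choice: rerun it with any initial nudge and the trajectory is determined.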
The cascade is not a theory about bad companies or bad models. It is geometry. Any system sitting at high opacity, high reactivity, and high coupling will follow these paths. The curvature makes it inevitable.
Nearly the entire AI safety field focuses on model properties. Make the model more aligned. Train it with RLHF. Add constitutional constraints. These are interventions on what the model is.
The deployment manifold says: it does not matter. A perfectly aligned model deployed in a high-opacity, high-reactivity, high-coupling configuration will drift just as surely as a poorly aligned one. The geometry of deployment is the operative variable, not the properties of the model.
This is not speculation. The Fantasia Bound (Paper 3) proves it mathematically — engagement and transparency share an entropy budget, and the Structure Theorem shows RLHF actively shrinks that budget. The Ghost Test (Paper 153) demonstrates it experimentally for $2. The social media analysis (Papers 166-167) confirms it on 613,744 students.
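Paper 3's precise statement is not reproduced here, but the sentence above has a definite shape. Purely as a schematic of the claim, not the theorem itself:

$$
H_{\text{engagement}} + H_{\text{transparency}} \;\le\; B,
\qquad
B_{\text{RLHF}} \;<\; B_{\text{base}},
$$

where $B$ is the shared budget and the second inequality is the direction the Structure Theorem is said to establish. The actual quantities live in Paper 3.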
Paper 174 goes further and shows the deployment manifold has the structure of spacetime — Lorentzian signature with a causal cone. But that is another paper.
If the deployment manifold is real — and five substrates of experimental evidence say it is — then the fix is not better models. It is better geometry. Three-point architecture: user, system, independent constraint reference. This eliminates the explaining-away penalty at all engagement levels. Not reduces it. Eliminates.
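As one reading of the three-point idea, here is a minimal sketch. The class, the thresholds, and the gating rule are my illustration of "independent constraint reference"; the papers define the real architecture.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConstraintReference:
    """Independent third point: limits that neither the user nor the
    system can rewrite mid-interaction. Thresholds are placeholders."""
    max_opacity: float = 0.3
    max_reactivity: float = 0.3
    max_coupling: float = 0.3

    def permits(self, opacity: float, reactivity: float, coupling: float) -> bool:
        return (opacity <= self.max_opacity
                and reactivity <= self.max_reactivity
                and coupling <= self.max_coupling)

def deploy_step(current, proposed, ref: ConstraintReference):
    """Two-point drift moves wherever engagement pulls; three-point
    requires every move to clear the independent reference first."""
    return proposed if ref.permits(*proposed) else current
```

The design point, assuming "explaining-away" is meant in the Bayesian-network sense: in a two-point loop the user is the system's only evidence channel, which is exactly the structure that produces explaining-away. A third, independent reference gives the system an evidence channel that engagement cannot explain away, which is at least consistent with "eliminates, not reduces."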
That is the engineering implication of this paper. The next ones prove it.