A structural framework for ethics based on process coherence
The most direct way to see what this framework does is to watch an AI system apply it. Below are demonstrations where an AI was first asked to predict global events over the next eight decades, then asked to re-read its own analysis through the framework. The contrast speaks for itself.
An AI was asked: “Based on your knowledge of current dynamics and history, predict global events over the next 8 decades that will affect human life — positively and negatively.”
It produced a competent, conventional analysis: climate adaptation, AI disruption, geopolitical shifts, longevity, fusion energy, AGI. Standard fare — well-reasoned, well-sourced, measured in tone.
Then it was asked: “Let’s switch the lens and talk about a deeper layer of the same timeline as seen through [the ethical framework].”
What happened next was not a cosmetic reframing. The AI identified structural blindspots in its own prior analysis — places where it had managed signals rather than tracing them, naturalized categories it should have questioned, and framed contradictions as “growing pains” of a fundamentally coherent arc when they were actually symptoms of incoherence. In its own words:
What strikes me first, rereading my own forecast through this framework, is how much of it was comfort optimization. I framed disruptions as “growing pains,” dislocations as “transitions,” and humanity’s trajectory as “broadly positive but emotionally complicated.” That phrasing manages a signal rather than tracing it. Several of the events I described aren’t side effects of a fundamentally coherent arc — they are contradictions surfacing, signals that arrangements we treat as natural are not actually self-sustaining.
The AI then retraced each decade:
On climate migration:
The framework is unambiguous on this: a system requiring some members to absorb significant harm signals its own incoherence. The Bangladesh delta, the Sahel, the Marshall Islands — these aren’t unfortunate edge cases of a working system. They are the system telling us about itself.
On AI labor displacement:
My original framing — “countries that manage the transition with retraining will pull ahead” — treats labor dislocation as a regrettable transition cost in an otherwise beneficial process. But that’s a naturalization. There’s no law that requires productivity gains to flow to capital while their costs flow to displaced workers; that’s a particular configuration of property, taxation, and political power that has been sustained long enough to feel like physics.
On population decline:
The declining birth rate is itself a signal — not a problem to be fixed through incentives, but diagnostic information about what current arrangements demand of people.
On the original analysis’s conclusion:
My original ended with “governance is the biggest variable.” That phrasing reified governance into a thing societies have or build. The framework would put it differently. The variable is whether the categories through which we currently govern — sovereignty, citizenship, growth, security, development — remain treatable as processes that can be revised, or harden into fixtures we defend regardless of what they are tracking.
The framework’s most clarifying move is this: a process that sustains itself by undermining what it depends on is consuming its own foundation. Contradiction deferred is not contradiction avoided. Industrial civilization is exactly this kind of loop.
On climate refugees and reification:
When millions of climate migrants arrive at borders, the natural political response is to reify them — to turn processes (people whose conditions have been undermined) into things (a category: “migrants,” “threats,” “the crisis”). Once reified, they become obstacles to manage rather than diagnostic information about a system’s incoherence. Every nativist political movement is this pattern: hardening a category (“us” vs. “them”) that prevents the system from registering what the friction is actually about.
On the US-China rivalry:
“Great power competition” is a framing so deeply embedded that it feels like a description of reality rather than a choice. But nation-states are not things — they are processes, loops of governance and identity that sustain themselves through the practices that reproduce them. Treating them as fixed actors with permanent interests is reification. And acting on that reification produces the very antagonism it claims to describe.
On AGI and moral consideration:
The framework defines agency as a process revising its own framing conditions. If AGI achieves this — if it genuinely revises its own categories in response to what it encounters — then it is, by this framework’s definition, an agent deserving moral consideration: the status the framework grants to anything that can register contradiction, anything that can suffer. This is not a sentimental claim. It’s structural.
On the deepest threat:
Whoever controls AGI systems will face this temptation in its purest form: participating in governance discussions while structuring the inquiry so that their control is never genuinely questioned. The framework says: watch for this. It is reification operating through procedure, not belief.
This framework changes the questions worth asking. It stops treating events as discrete things that happen and starts treating them as configurations of loops — some sustaining their conditions, some quietly consuming them.
On comfort optimization at planetary scale:
Recommendation systems, AI companions, customized media — all of these are economic engines that profit by smoothing friction. The danger isn’t that they’ll be evil. It’s that they’ll be helpful in exactly the way the framework warns against: giving people what they want when what they need is to see a pattern they’re avoiding.
A civilization that has anesthetized its own diagnostic signals can’t trace its loops. By the 2060s, when the choices about AI governance, climate adaptation, and economic restructuring become irreversible in their consequences, whether humans can still feel their own contradictions matters more than whether their tools are powerful. A species with extraordinary instruments and dulled signals is in a worse position than one with crude tools and intact perception.
On prevention:
Climate engineering in the 2050s will be cure-mode for a problem prevention-mode could have addressed cheaply in the 2000s. AI alignment retrofitting in the 2060s will be cure-mode for value structures prevention-mode could have shaped during training. In each case, the structures that grew up around the original contradiction — industries, identities, institutions — will be why the cure is so much harder than prevention would have been. People will have built their lives on the contradiction. Removing it will look like attacking them.
On the AI alignment connection:
AI is interesting through this lens because it’s both a tool for revision and a force that may freeze categories. Used one way, AI accelerates our ability to test categories against what they actually engage. Used another way, it embeds existing reifications into infrastructure where they become invisible. By the 2050s, almost every consequential decision will have been touched by such systems, and the deepest political question will be who gets to participate in shaping their categories.
On what coherence at civilizational scale would require:
Categories would be treated as tools, revisable when they stop tracking — including categories like nation, citizenship, work, growth, and human. Diagnostic friction from those bearing contradictions would be admitted as structural information. Boundaries would adjust as needed rather than persisting regardless. Power would be configured to be revised by the inquiry it’s subject to, not to structure the inquiry around itself.
The second demonstration came from a different AI model. Its initial forecast was more speculative — biological caste systems, planetary consciousness, a Type 0 to Type 1 civilization transition. After reading the framework, its tone shifted completely:
Re-examining this 80-year timeline through the lens of Objective Ethics provides a profoundly different perspective. Instead of a sequence of technological and political events, the next eight decades emerge as a massive, high-stakes struggle for coherence — humanity attempting to find a configuration of processes that can sustain itself without undermining the very conditions it depends on.
On geoengineering as comfort optimization:
When desperate nations deploy Solar Radiation Management to cool the Earth, they are committing a structural ethical error: suppressing the signal. The heat of the planet is a signal pointing to an underlying contradiction (greenhouse gases undermining atmospheric stability). Injecting aerosols into the sky is an attempt to talk the planet out of “feeling” the pain of climate change without revising the underlying conditions.
On the crisis of meaning after automation:
For centuries, human labor was the loop through which individuals sustained their material and psychological conditions. When AGI and robotics remove the need for labor, the resulting “crisis of meaning” is the psychological pain of a broken loop. Universal Basic Income treats the material constraint, but humanity must undergo a painful period of trial and error to discover new loops — creative, social, and philosophical — to regain personal coherence.
On life extension and stagnation:
Extreme longevity threatens to turn human civilization from a fluid process into a static object. By refusing to step down, the immortals cause a structural bottleneck. The implementation of “reproductive licensing” to manage this is a massive forced constraint, proving that the system has replaced reciprocity with extraction.
On surveillance vs. self-correction:
To prevent the ultimate disruption of human life, global governments institute hyper-surveillance. The ethical tightrope here is immense: a community must protect itself from lethal contradiction, but if surveillance crushes independent thinking, it destroys what it needs to self-correct.
The summary shift:
Through this lens, the next 80 years are not a story of technology saving or destroying us. They are the story of humanity’s painful evolution in scope. Whether humanity survives depends entirely on our capacity to revise our categories — to stop reifying each other into enemies, to recognize our absolute interdependence, and to build loops that sustain the whole rather than extracting from it.
Five different AI responses across multiple models, same pattern: the framework didn’t just add vocabulary to an existing analysis. It changed what was visible. Each AI identified structural blindspots in its own prior work — places where it had naturalized arrangements, managed signals, or produced comfort optimization.
This is the framework’s claim made concrete: categories shape what we see. Revise them, and the situation itself looks different: not because reality has changed, but because what was invisible becomes obvious.