Every major AI governance initiative currently underway is missing the same thing. Not through negligence or malice, but through structural blindness. The disciplines that produced these frameworks — law, economics, computer science, political theory — rest on the same epistemic foundation that split conscientia four centuries ago. They can see risk, capability, rights, and markets. They cannot see the developmental interior of the human being who will inhabit the world being built.
This page applies the Threshold Intelligence framework — attentional intelligence, integrative intelligence, and love as the ground condition — to five policy domains across nine countries. It asks a single question of each: what is this policy protecting, and what has it forgotten to protect?
The comparisons are pointed. That is deliberate. The gap between what is being regulated and what needs to be developed is not a matter of political preference. It is a structural consequence of building AI governance from within the tradition that produced the problem being governed.
The page covers three things. First, what policy would look like if it took the developmental interior of the human — attentional intelligence, conscience, embodied knowing — as seriously as it takes risk, capability, and markets. Second, current policy positions as of early 2026: accurate to the extent possible in a moving landscape; where policy is absent, that absence is itself the finding. Third, where the developmental interior is structurally absent from the frame — not as individual failure but as a design feature of the disciplines from which these frameworks were built.
Every current governance framework assumes that sufficiently well-designed rules, applied to sufficiently well-understood systems, will produce acceptable outcomes. This assumption is only coherent if the humans implementing, using, and being governed by AI have developed the attentional and moral capacities required to act from those rules with integrity rather than compliance.
The conscientia thesis says: the split between analytical capability and moral orientation is the structural condition AI replicates and amplifies. Governing AI without simultaneously cultivating the integrated intelligence of the humans in governance is governing at the level of the symptom while leaving the disease untouched.
Concretely: AI governance policy should include requirements for the developmental preparation of decision-makers — not as an add-on, but as a condition of legitimacy. Legislators, regulators, executives, and developers making consequential decisions about AI should be required to demonstrate not only technical understanding but attentional and ethical development sufficient to make decisions from integrated knowing rather than severed analysis. What this means in practice needs to be built — but the absence of the requirement is itself a governance failure.
The tertiary education system that produced the severed tradition is now training people to use AI that replicates the severed tradition. Without intervention, this is a compounding loop: the educational infrastructure that split analytical knowing from moral knowing now graduates practitioners who build and govern AI that encodes the split, which then trains the next generation in its image.
Education policy shaped by the TI framework would ask: does this curriculum develop the whole person — attentional capacity, moral orientation, embodied knowing alongside analytical capability? Does it treat contemplative practice, relational skill, and ethical formation as load-bearing rather than elective? Does it make room for the kinds of knowing that cannot be transmitted through text — and therefore confront the question of what AI in education can legitimately do, and what it structurally cannot?
At the tertiary level specifically: research training shapes what counts as knowledge. A curriculum that treats embodied practice, contemplative inquiry, and relational knowing as epistemologically valid — not as supplements to rigour but as forms of it — is the reform the conscientia thesis points toward.
The scientific method as currently institutionalised is the direct descendant of the Cartesian split: it privileges replicable, disembodied, quantifiable knowledge and structurally marginalises the relational, embodied, first-person knowing that attentional intelligence depends on. AI trained on scientific text encodes this preference at unprecedented scale.
Science policy shaped by the TI framework would ask: are we funding methodologies that can investigate consciousness, interiority, and relational knowing with the same rigour we apply to external measurable phenomena? Are we validating first-person and participatory methodologies alongside third-person empirical ones? Are we actively supporting research programs that bridge contemplative practice and empirical science — not as fringe activity but as central to understanding what human beings are and what they need?
The convergence of contemplative neuroscience, phenomenology, and quantum approaches to consciousness is real and growing. Science policy that takes the conscientia thesis seriously would be funding this convergence rather than waiting for it to become mainstream.
When conscientia was split and felt knowing was excluded from the academy and the market, it did not disappear. It went underground into art, music, literature, theatre, dance, poetry — the domains that maintained the claim that felt experience is a form of knowing, not a decoration on top of knowing. The arts are not the ornamental wing of the civilisation. They are the archive of what the severed tradition could not encode.
Arts policy shaped by the TI framework would ask: are we protecting the human arts not only as economic product but as epistemological infrastructure? Are we making the argument that a civilisation which replaces felt artistic creation with AI-generated content is not saving costs but destroying a form of knowing? Are we treating artistic practice as a developmental activity — one that cultivates the attentional, relational, and embodied capacities that are the specific contribution of human consciousness?
The copyright argument is necessary but insufficient. The deeper argument is: what kind of knowing do the arts maintain, and what happens to a civilisation that loses it?
Democratic governance assumes citizens capable of genuine deliberation — of holding complexity, tolerating the other's difference, and acting from something beyond immediate preference. These are not natural capacities. They are developmental achievements, requiring the kind of attentional training, relational practice, and moral formation that the TI framework describes as integrative intelligence.
AI in democratic contexts does two things simultaneously: it provides tools for manipulation that work precisely by bypassing deliberative capacity, and it accelerates the pace of decisions in ways that exceed the human nervous system's ability to process them. The result is not the corruption of democracy — it is the exposure of the developmental gap that was always there but was previously less consequential.
Institutional leadership in particular: the decision-makers who are governing AI — politicians, regulators, executives — are making decisions of civilisational consequence from within frameworks that have never required them to develop the integrative intelligence those decisions demand. The TI framework argues that leadership development — real development, not skills training — is a governance requirement, not a nice-to-have.
The same attentional intelligence mechanism, oriented by self-interest rather than conscience, produces autocrats rather than leaders. Democratic governance requires the distinction between the two to be cultivable — and policy must create the conditions for that cultivation.