The Conscientia Policy Lens · Threshold Intelligence

What policy looks like when it takes the human seriously.

Every major AI governance initiative currently underway is missing the same thing. Not through negligence or malice — through structural blindness. The disciplines behind these frameworks — law, economics, computer science, political theory — rest on the same epistemic foundation that split conscientia four centuries ago. They can see risk, capability, rights, and markets. They cannot see the developmental interior of the human being who will inhabit the world being built.

This page applies the Threshold Intelligence framework — attentional intelligence, integrative intelligence, and love as the ground condition — to five policy domains across nine countries. It asks a single question of each: what is this policy protecting, and what has it forgotten to protect?

The comparisons are pointed. That is deliberate. The gap between what is being regulated and what needs to be developed is not a matter of political preference. It is a structural consequence of building AI governance from within the tradition that produced the problem being governed.

The TI Framework says

What policy would look like if it took the developmental interior of the human — attentional intelligence, conscience, embodied knowing — as seriously as it takes risk, capability, and markets.

What each country does

Current policy positions as of early 2026. Accurate to the extent possible; this is a moving landscape. Where policy is absent, that absence is itself the finding.

The gap named directly

Where the developmental interior is structurally absent from the frame — not as individual failure but as a design feature of the disciplines from which these frameworks were built.

The Indigenous Epistemological Thread

Rather than being treated as a standalone policy domain, the knowledge of traditions that never split conscientia runs as a thread through every domain below. This is not tokenism — it is the structural argument. These traditions are not a minority perspective to be included. They are the corrective to the dominant tradition's structural deficit. Where their knowing is absent from policy, that absence is marked. Where it is present, its quality of presence is examined: consultative, co-constitutive, or genuinely foundational. The gaps in this thread are openly acknowledged. They cannot be filled by someone working from outside these traditions, and this page does not attempt to fill them.

The Framework Applied · Live Case · March 2026
The Country We Are About to Sell
This framework has been applied to a specific, live Australian situation — AI infrastructure, AUKUS, the Gaza GREAT Trust plan, and the six questions no Australian politician has yet answered. The open letter was sent to every federal parliamentarian, the Group of Eight Vice Chancellors, and Australian journalists simultaneously on 10 March 2026.
Read the Open Letter →
AI Governance
The most active policy domain globally. The question being asked by every framework: how do we manage the risks of AI? The question not being asked: what kind of human beings do we need to become to navigate this transition well, and what does policy need to do to support that becoming?
What the TI Framework Proposes
AI governance cannot be solved at the level of capability and risk alone.

Every current governance framework assumes that sufficiently well-designed rules, applied to sufficiently well-understood systems, will produce acceptable outcomes. This assumption is only coherent if the humans implementing, using, and being governed by AI have developed the attentional and moral capacities required to act from those rules with integrity rather than compliance.

The conscientia thesis says: the split between analytical capability and moral orientation is the structural condition AI replicates and amplifies. Governing AI without simultaneously cultivating the integrated intelligence of the humans in governance is governing at the level of the symptom while leaving the disease untouched.

Concretely: AI governance policy should include requirements for the developmental preparation of decision-makers — not as an add-on, but as a condition of legitimacy. Legislators, regulators, executives, and developers making consequential decisions about AI should be required to demonstrate not only technical understanding but attentional and ethical development sufficient to make decisions from integrated knowing rather than severed analysis. What this means in practice needs to be built — but the absence of the requirement is itself a governance failure.

Last updated: March 2026. This is an actively shifting landscape. Country positions are accurate to the best available public information at time of writing. Where policy has moved since this page was last updated, the gap analysis remains structurally valid even when specific details change.
🇪🇺 European Union Partial
The EU AI Act — the world's first comprehensive AI legal framework — categorises systems by risk and imposes corresponding obligations. Prohibitions on certain uses (social scoring, manipulative techniques, workplace emotion detection) are in force from February 2025. Obligations for general-purpose AI (GPAI) models apply from August 2025, with full application phased through 2027. The framework is rights-based and human-centric in its stated intent.
The Act protects rights. It does not cultivate the capacity to exercise them. Human-centric in framing; human development absent from the mechanism. The prohibition on emotion detection in workplaces is significant — it names a form of the problem. But the question of what the worker needs to develop in order to remain a subject in the workplace is outside the frame. Compliance infrastructure is being built; developmental infrastructure is not mentioned.
Absent. The EU framework operates within a rights tradition that is entirely Western. The epistemological traditions that preserved what the Cartesian split severed are not consulted or represented.
🇺🇸 United States Absent
The Biden AI executive order was revoked in January 2025. The Trump administration's approach — formalised across multiple executive orders through December 2025 — positions the US as competing for AI dominance with minimal regulatory burden. State AI laws are actively being challenged and preempted. The stated goal is national and economic security through AI leadership. The framing is entirely about winning.
The US governance framework does not ask what kind of society it is building, only what competitive position it is securing. The removal of protections — including state-level attempts to prohibit algorithmic discrimination — is framed as removing ideological bias. In TI terms: the framework cannot see the difference between bias and conscience. It has no concept for the distinction because it was built from the severed tradition. This is not political critique — it is structural diagnosis.
Actively hostile in the current policy environment. The framing of state anti-discrimination laws as "woke AI" is precisely the severed tradition defending itself from examination.
🇨🇳 China Complex
China has developed sector-specific regulations — for algorithmic recommendations, deep synthesis, and generative AI — since 2021. The approach is state-directed, with registration and security review requirements. A mandatory national standard on AI content labelling took effect in 2025. China has proposed a World AI Cooperation Organisation and frames its global approach around AI safety, reliability, and civilisational diversity. Domestically: AI is central to the national development plan with a stated goal of global leadership by 2030.
China's framework centres social coherence and state authority rather than individual development. The developmental interior of the citizen — as autonomous subject — is not the frame; social stability and national capability are. The AI safety framing is real but operates differently from democratic rights-based frameworks. In TI terms: high attentional intelligence at the system level, but conscience as defined by the state rather than arising from integrated development.
China's stated support for civilisational diversity in AI is notable and worth watching. Its practical application to domestic minorities is a different question. The gap here is genuinely complex and this entry marks it as such.
🇬🇧 United Kingdom Partial
The UK has taken a principles-based, sector-specific approach — deliberately avoiding comprehensive legislation to maintain flexibility and competitive advantage. The AI Safety Institute was established in 2023. The UK has been active in international AI safety summits. The framing combines safety concern (Geoffrey Hinton's warning registered institutionally) with innovation promotion.
Principles without mechanism produce compliance theatre. The UK's light-touch approach reflects the same severed tradition operating at a higher level of sophistication — it can name what it values (safety, human-centric AI, fundamental rights) without building the developmental infrastructure that would make those values operative. The AI Safety Institute's remit is technical. Human developmental safety is not on the agenda.
Absent from the framework. The UK's relationship with Indigenous knowledge traditions globally, and its domestic minority communities, is not represented in its AI governance architecture.
🇦🇺 Australia Partial
Australia established an AI Safety Institute in 2024 and has developed voluntary AI ethics principles. The approach is largely non-binding, with sector-specific guidance and a preference for principles over regulation. Australia signed on to international AI safety frameworks at Seoul and subsequent summits. Investment in AI capability is being positioned as economic necessity.
Voluntary principles in a culture that has not built institutional infrastructure for the developmental interior of leadership are decorative. The question of what Australian leaders, legislators, and citizens need to develop in order to make good decisions about AI is entirely absent. Australia is watching international frameworks and following — which means importing the structural deficit along with the policy templates.
First Nations knowledge systems are absent from Australian AI governance. Given that Indigenous epistemological traditions are precisely those that preserved the integrated knowing the frameworks are missing, this is not a diversity gap — it is a substantive governance deficit. This gap is marked openly and is unresolved.
🇮🇳 India Partial
India published AI Governance Guidelines in 2025 — principles-based, with seven core tenets including safety, trust, and flexibility. The approach is light-touch and sector-specific, relying largely on existing law with a new Digital Personal Data Protection Act governing AI training data. India co-hosted the AI Impact Summit in 2026 and has pledged $1.25 billion in AI infrastructure investment. India frames its approach as balancing innovation with inclusion — explicitly addressing scale, diversity, and the needs of a large, multilingual population.
India's inclusion framing is more substantive than most — it asks who benefits and who is harmed across a genuinely diverse population. What remains absent is the deeper question: what does the relationship between 1.4 billion people and this technology do to human interiority at scale? India has contemplative traditions — Vedanta, Buddhism, yoga — that speak directly to the integration of consciousness and conscience. The relationship between these traditions and AI governance policy is not yet visible in the framework.
India's policy engages diversity of language and demographic but not the epistemological traditions of its Indigenous communities. The distinction between representation and epistemic inclusion is the relevant one here.
🇦🇪 United Arab Emirates Complex
The UAE was the first country to appoint a Minister of State for Artificial Intelligence (2017). The National AI Strategy 2031 is the most systematically implemented national AI strategy in the world. The UAE AI Charter (2024) articulates twelve ethical principles including human well-being, safety, transparency, and human oversight. The UAE has invested $100 billion through MGX, is building the largest AI campus outside the US, and positions itself as the global hub for AI governance conversations — hosting international roundtables and framing itself as a neutral actor.
The UAE represents, with extraordinary clarity, what the TI framework calls The Mirror at national scale: high attentional intelligence, exceptional strategic execution, sophisticated governance architecture — oriented entirely by competitive positioning rather than the developmental interior of the human. The AI Charter names human well-being as a principle; the system is designed to maximise national capability. These are not the same thing. The absence of democratic accountability means there is no mechanism by which the developmental needs of citizens can become policy. The ethics are real; the ground condition is structural simulation.
Not applicable in the usual sense. The UAE's epistemic framework is modernist and technocratic. Traditional Bedouin and Islamic knowledge traditions are not represented in the AI governance architecture.
🇩🇰 Denmark / Greenland Complex
Denmark has a relatively progressive domestic AI framework — national AI strategy, ethical principles for data use in the public sector, active engagement with EU AI Act implementation. Denmark consistently ranks highly on digital governance indices. What Denmark's AI policy does not address: Greenland, an autonomous territory within the Danish realm, is the site of active geopolitical pressure for data centre development and mineral extraction — the physical infrastructure that makes AI possible. American and European data centre investment is being courted as Greenland moves toward greater autonomy.
The gap between Denmark's progressive domestic AI ethics and the colonial relationship to Greenland — whose land and minerals feed the infrastructure of AI — is the gap between policy and ground condition made territorial. What is governed at home is not governed abroad. The beast is being fed on land that progressive domestic ethics does not account for. This is not a metaphor. Greenland's future is materially entangled with AI's physical requirements, and Greenlandic Inuit communities are not present in the governance of that entanglement.
Greenlandic Inuit communities carry knowledge traditions that have never split conscientia. Their relationship to land — relational, reciprocal, responsible — is precisely the epistemological position that AI governance is structurally incapable of encoding. Their absence from the governance of decisions about their land is the argument made material.
🇳🇿 New Zealand / Aotearoa Leading edge
New Zealand released its first national AI strategy in July 2025 — light-touch, principles-based, with an adoption-first approach. What makes NZ structurally different: Te Tiriti o Waitangi (Treaty of Waitangi) creates constitutional obligations to Māori that run through all government policy, including AI. The Public Service AI Framework (January 2025) explicitly includes Treaty obligations, Māori data sovereignty, and tikanga considerations. NZ research guidelines for generative AI treat Māori data as a taonga (treasure) requiring Māori governance. This is not decorative: it is legally and constitutionally mandated co-constitutive engagement.
The distance between constitutional obligation and actual implementation is significant — documented gaps between Māori data governance frameworks and their practical application across government remain. The national AI strategy itself was criticised for not mentioning Te Tiriti explicitly, even while sector-specific documents do. The structural commitment is more advanced than anywhere else in the Anglophone world; the execution is uneven and contested from within Māori communities themselves.
The most substantive example in this map of what it looks like when traditions that preserved integrated knowing are treated as co-constitutive rather than consultative. Not finished, not perfect — but structurally different in kind. The Māori Data Sovereignty movement — data as a living taonga — is the epistemological argument made policy.
Education
Education policy for AI tends to mean: how do we teach people to use AI tools, and how do we prepare the workforce? The question underneath that question — what kind of knowing do we want to cultivate, and is AI-integrated education producing subjects or objects? — is almost entirely absent from policy.
What the TI Framework Proposes
Education is the primary site of conscientia repair — or its continued fracture.

The tertiary system that produced the severed tradition is now training people to use AI that replicates the severed tradition. Without intervention, this is a compounding loop: the educational infrastructure that split analytical knowing from moral knowing now graduates practitioners who build and govern AI that encodes the split, which then trains the next generation in its image.

Education policy shaped by the TI framework would ask: does this curriculum develop the whole person — attentional capacity, moral orientation, embodied knowing alongside analytical capability? Does it treat contemplative practice, relational skill, and ethical formation as load-bearing rather than elective? Does it make room for the kinds of knowing that cannot be transmitted through text — and therefore asks what AI in education can legitimately do, and what it structurally cannot?

At the tertiary level specifically: research training shapes what counts as knowledge. A curriculum that treats embodied practice, contemplative inquiry, and relational knowing as epistemologically valid — not as supplements to rigour but as forms of it — is the reform the conscientia thesis points toward.

🇪🇺 European Union Partial
AI literacy is a mandatory requirement under the EU AI Act from February 2025 — operators must ensure staff have sufficient literacy to use AI systems appropriately. European digital education action plans push AI skills across all levels of education. Several member states have integrated AI into school curricula.
AI literacy means: knowing how to use AI tools. It does not mean: knowing what kinds of knowing AI can and cannot do, or developing the attentional and moral capacities that remain irreducibly human. The human is being prepared to function alongside AI; the human is not being developed as a subject capable of governing AI from integrated knowing.
🇺🇸 United States Absent
AI education policy in the US is fragmented across states. The federal push is for STEM skills and AI workforce preparation. Some state curricula include AI ethics as a module; these are now being contested under the framing of ideological content. Higher education is integrating AI tools rapidly with minimal pedagogical reflection.
The US is training people to operate AI at precisely the moment when the capacity to evaluate AI's limitations requires exactly what the educational system has systematically devalued: contemplative practice, embodied knowing, relational intelligence. The removal of AI ethics as "ideological content" is the severed tradition completing itself.
🇮🇳 India Complex
India's AI governance guidelines explicitly include education and skills training as a core recommendation. India's tertiary system is one of the largest in the world and is rapidly integrating AI tools. There is active policy attention to democratising AI access across linguistic and economic diversity.
India has contemplative traditions — Vedantic, Buddhist, yogic — that have maintained the integrated knowing the TI framework is describing for millennia. These are culturally present but epistemologically marginalised in the tertiary research tradition. The gap is between what India knows through its living traditions and what its universities are training people to know about AI. This is the conscientia thesis played out at civilisational scale.
The relationship between India's own traditional knowledge systems (the counterpart of what Aotearoa names mātauranga) and AI education policy is unaddressed. The traditions are present in the culture; they are not present in the curriculum.
🇳🇿 New Zealand / Aotearoa Leading edge
NZ research guidelines for generative AI (2025) are explicitly framed by Te Tiriti obligations, Māori data sovereignty, and the recognition that most AI tools were built on Western values and may not be appropriate for Māori data. The guidelines treat mātauranga Māori as a legitimate epistemological framework requiring protection, not merely a cultural consideration.
The national AI strategy does not yet translate these research-level obligations into curriculum reform. The epistemological argument is present at the edges of the system; it has not yet changed what is centred.
The most developed example in this map. Māori data as a living taonga — not just protected but actively generative — is the educational counterpart to the conscientia thesis. This is the tradition that kept the thread alive.
🇦🇺 Australia Absent
Australian tertiary institutions are developing AI policies focused on academic integrity — preventing AI-assisted cheating — and AI literacy. There is sector-level guidance but no national framework for what AI-integrated education should cultivate in the whole person.
Australian education policy for AI is managing a tool rather than asking what kind of humans we are developing. First Nations knowledge traditions — their epistemological sophistication, their relationship to country, their forms of knowing that have no equivalent in the Western curriculum — are entirely absent from this conversation.
Science
Science policy for AI asks: how do we fund AI research, and how do we ensure research integrity? The deeper question — what epistemological assumptions are baked into our research methodology, and does AI amplify or challenge them? — is not on the policy agenda.
What the TI Framework Proposes
Science policy is epistemological policy. It decides what counts as knowing.

The scientific method as currently institutionalised is the direct descendant of the Cartesian split: it privileges replicable, disembodied, quantifiable knowledge and structurally marginalises the relational, embodied, first-person knowing that attentional intelligence depends on. AI trained on scientific text encodes this preference at unprecedented scale.

Science policy shaped by the TI framework would ask: are we funding methodologies that can investigate consciousness, interiority, and relational knowing with the same rigour we apply to external measurable phenomena? Are we validating first-person and participatory methodologies alongside third-person empirical ones? Are we actively supporting research programmes that bridge contemplative practice and empirical science — not as fringe activity but as central to understanding what human beings are and what they need?

The convergence of contemplative neuroscience, phenomenology, and quantum approaches to consciousness is real and growing. Science policy that takes the conscientia thesis seriously would be funding this convergence rather than waiting for it to become mainstream.

🇪🇺 European Union Partial
EU Horizon Europe funds AI safety research, explainability, and human-AI interaction. There is genuine investment in understanding AI systems. Research integrity requirements are being updated for AI-assisted research. Some funding streams explicitly support responsible and human-centric AI research.
Human-centric in the EU framework means: centring human rights and welfare as design constraints. It does not mean: investigating the interior of the human as a scientific question. The consciousness research that would illuminate what AI cannot do — and what human development requires — is not a policy priority. The epistemological assumptions of the research system are not under examination.
🇺🇸 United States Absent
US science policy for AI is almost entirely capability-oriented: how do we advance the frontier, win the race, maintain dominance? The NIST AI Risk Management Framework addresses technical safety. NSF and DARPA fund AI research heavily. There is some funding for AI ethics, increasingly contested politically.
The US scientific establishment is the institution that most completely embodied the Cartesian split and exported it globally through the twentieth century. Its AI policy replicates this structure at speed. The question of what kind of knowing produces wisdom rather than capability is outside the frame — and now being actively suppressed as "ideological."
🇮🇳 India Complex
India has significant AI research output — among the fastest growing globally — primarily in technical AI. There is emerging policy attention to ethical AI and some government interest in the relationship between India's contemplative traditions and technology development.
India is the most significant example in this map of the gap between a living tradition that preserved integrated knowing and a scientific establishment that has adopted the severed Western model. Whether India's science policy will make room for the epistemological contribution of its own traditions is one of the most consequential open questions in global AI development. This gap is a genuine opening, not a closed diagnosis.
🇬🇧 United Kingdom Partial
The UK has strong AI safety research investment through the AI Safety Institute and associated academic funding. There is genuine rigour in the UK approach to understanding AI systems. Some UK universities are supporting consciousness research and contemplative science at world-leading levels.
The science that understands AI systems and the science that understands human consciousness are not yet in policy dialogue. Iain McGilchrist is British; his work has not entered AI science policy. The research exists; the policy infrastructure to take it seriously does not.
Arts
Arts policy for AI is almost entirely about intellectual property, copyright, and fair compensation for artists whose work trained AI systems. These are real and urgent concerns. They are also the surface of a deeper argument: the arts are the domain that kept embodied, relational, and felt knowing alive during the centuries when the academy was systematically excluding it.
What the TI Framework Proposes
Arts policy is epistemological policy. The arts kept the ground condition alive.

When conscientia was split and felt knowing was excluded from the academy and the market, it did not disappear. It went underground into art, music, literature, theatre, dance, poetry — the domains that maintained the claim that felt experience is a form of knowing, not a decoration on top of knowing. The arts are not the ornamental wing of the civilisation. They are the archive of what the severed tradition could not encode.

Arts policy shaped by the TI framework would ask: are we protecting the human arts not only as economic product but as epistemological infrastructure? Are we making the argument that a civilisation which replaces felt artistic creation with AI-generated content is not saving costs but destroying a form of knowing? Are we treating artistic practice as a developmental activity — one that cultivates the attentional, relational, and embodied capacities that are the specific contribution of human consciousness?

The copyright argument is necessary but insufficient. The deeper argument is: what kind of knowing do the arts maintain, and what happens to a civilisation that loses it?

🇪🇺 European Union Partial
The EU AI Act includes transparency requirements for AI-generated content and copyright compliance for training data. The EU has been more protective of artists' rights than other major jurisdictions. There is active policy development around AI and cultural heritage.
The EU is protecting artistic product. It is not yet making the argument that artistic practice — the doing, not the output — is a form of human development that policy needs to protect as such. The arts as epistemological infrastructure rather than cultural product is not on the policy agenda.
🇺🇸 United States Absent
US arts policy for AI is almost entirely mediated through copyright litigation — whether AI training on copyrighted work constitutes fair use. The outcome of this litigation will shape AI development globally. The federal government has no meaningful arts funding at scale; the arts are treated as market activity.
When the arts are treated as market activity, the question of whether AI can replace them is purely economic. If arts are epistemological infrastructure — which the TI framework argues they are — then AI-generated content replacing human artistic creation is a civilisational event, not a market disruption. The US policy framework has no vocabulary for this distinction.
🇦🇺 Australia Partial
Australia has active policy attention to AI and the creative industries — artists' rights, fair compensation, cultural identity. There is genuine political support for protecting Australian content from AI displacement. The relationship between First Nations cultural expression and AI copyright is an active and unresolved question.
Australian arts policy for AI is largely defensive — protecting existing artists from AI displacement. The affirmative argument — that the arts are where integrated knowing lives and must be actively cultivated, not merely defended — is not present.
First Nations artistic expression in Australia is the clearest example in the country of art as epistemological practice — as the transmission of knowing through form. The treatment of this as a copyright question rather than an epistemological one is the gap made explicit.
🇳🇿 New Zealand / Aotearoa Leading edge
NZ cultural policy for AI is grounded in Te Tiriti obligations — the Ministry for Culture and Heritage's 2025 Long-Term Insights Briefing explicitly frames digital technology through Te Tiriti and emphasises protection of te reo Māori, mātauranga Māori, and Māori cultural expressions from AI extraction and appropriation.
The framework is protective in the right register — treating cultural knowledge as living and generative, not merely as property. The gap: this framing has not yet become the mainstream understanding of what all arts policy should be doing.
The concept of taonga — cultural treasure that is living and relational, not owned but carried — is the epistemological counterpart to what the TI framework means by the ground condition. It cannot be separated from the people and practices that carry it. AI cannot replicate it. This is the argument.
Democratic Participation & Institutional Leadership
AI and democracy policy asks: how do we prevent AI from being used to manipulate elections, concentrate power, or undermine accountability? The question not being asked: democracy is a developmental achievement — it requires citizens and leaders who can think beyond immediate self-interest, tolerate complexity, and remain present to the other. What is AI doing to that capacity?
What the TI Framework Proposes
Democratic legitimacy requires a developmental interior that policy currently assumes without cultivating.

Democratic governance assumes citizens capable of genuine deliberation — of holding complexity, tolerating the other's difference, and acting from something beyond immediate preference. These are not natural capacities. They are developmental achievements, requiring the kind of attentional training, relational practice, and moral formation that the TI framework describes as integrative intelligence.

AI in democratic contexts does two things simultaneously: it provides tools for manipulation that work precisely by bypassing deliberative capacity, and it accelerates the pace of decisions in ways that exceed the human nervous system's ability to process. The result is not the corruption of democracy — it is the exposure of the developmental gap that was always there but was previously less consequential.

Institutional leadership in particular: the decision-makers who are governing AI — politicians, regulators, executives — are making decisions of civilisational consequence from within frameworks that have never required them to develop the integrative intelligence those decisions demand. The TI framework argues that leadership development — real development, not skills training — is a governance requirement, not a nice-to-have.

The same attentional intelligence mechanism, oriented by self-interest rather than conscience, produces autocrats rather than leaders. Democratic governance requires the distinction between the two to be cultivable — and policy must create the conditions for that cultivation.

🇪🇺 European Union Partial
The EU has the most developed framework for AI and democratic integrity: prohibitions on manipulation, transparency requirements, regulations around AI in election contexts. The EU treats democracy as a value requiring active protection, not just a system to be maintained.
The EU is protecting democratic mechanisms. It is not cultivating democratic subjects. Transparency requirements tell citizens what AI is doing; they do not develop citizens' capacity to evaluate what they are told. The developmental interior of democratic participation — what it takes to be a genuine citizen rather than a well-informed consumer — is outside the frame.
European democratic traditions are entirely Western in origin. The consensus-based, land-relational, and elder-guided governance traditions of Indigenous peoples globally represent alternative models of collective decision-making that democratic theory has not engaged. This is a structural gap in the concept of democracy itself, not just its AI policy.
🇺🇸 United States Absent
US democratic AI policy is fragmented across state and federal levels, politically contested, and operating against a backdrop of active erosion of democratic norms. There is no coherent federal framework for protecting democratic integrity from AI. The TAKE IT DOWN Act (May 2025) addressed a narrow harm. The broader question of AI and democracy is captured by partisan contest.
The US is currently demonstrating what happens when the attentional intelligence of leadership is high and conscience is absent — or when conscience has been captured by a particular orientation. This is not partisan observation; it is structural diagnosis. The Mirror in its most consequential form: extraordinary strategic capability, no integrated moral ground. The governance of AI in this context is a governance of symptoms.
🇨🇳 China Complex
China's model does not separate AI governance from state governance — they are the same project. Social stability, national capability, and civilisational continuity are the governing values. AI is deployed in service of these values through a system in which democratic accountability in the Western sense is not the mechanism.
The gap in China's model, from the TI perspective, is not the absence of democracy — it is the absence of the developmental interior as a policy question. What citizens need to develop in order to be genuine subjects of their own existence is not the state's question; the state's question is what citizens need to do in order for the system to function. These are different things. China's model is sophisticated. It is not asking the TI question.
🇦🇪 United Arab Emirates Absent
Democratic participation is not the governing framework in the UAE. The AI strategy is implemented through royal decree and ministerial authority. The system's efficiency in implementing AI governance is partly a function of not requiring democratic deliberation.
Without democratic accountability, there is no mechanism by which the developmental needs of citizens can become policy. The UAE AI Charter articulates human well-being as a principle; yet the people whose well-being is at stake cannot govern the system that is making decisions about it. This is the structural limitation that no sophistication of governance architecture can resolve.
🇩🇰 Denmark / Greenland Complex
Denmark has high democratic capacity domestically — strong civic education, high trust in institutions, active citizen engagement. Denmark participates actively in EU AI governance. Greenland is an autonomous territory with its own parliament (Inatsisartut) but significant dependence on Denmark for fiscal and foreign policy.
The decisions being made about Greenland's land — for data centres, mineral extraction, and AI infrastructure — involve Danish and international actors in ways that Greenlandic democratic institutions cannot fully govern. The gap between domestic democratic virtue and colonial governance of dependent territories is the structural argument the TI framework makes material: integrated values at home, severed practice abroad.
Greenlandic Inuit governance traditions — consensus-based, land-relational, seasonal — are the alternative model that democratic theory needs but does not consult. Their right to govern decisions about their own land is the democratic argument and the epistemological argument simultaneously.
🇳🇿 New Zealand / Aotearoa Leading edge
Te Tiriti creates a bicultural governance framework in which Māori participation in decisions affecting Māori is a constitutional requirement, not a consultation courtesy. This has direct implications for AI governance — decisions about AI systems that affect Māori require Māori governance participation. The Waitangi Tribunal can and does hold the Crown accountable for Treaty breaches in digital and data domains.
The constitutional commitment is structurally different from anything else in this comparison. The implementation gap — between what Te Tiriti requires and what actually happens — is real and contested. But the structure itself represents a different kind of democratic theory: one in which the other is not merely included in the process but has their own authoritative standing within it. This is closer to what the TI framework means by genuine encounter.
Tino rangatiratanga — Māori self-determination — is not a democratic add-on. It is an alternative account of legitimate governance that predates and complicates the Western democratic framework. Its presence in AI governance is the structural argument made institutional.