Human Control and India’s Nuclear Doctrine in the Age of AI: The Significance of UNGA Resolution 80/23

Nichole Ballawar Apr 19, 2026
(Image: India’s Nuclear Doctrine)

The adoption of UNGA Resolution 80/23 on 01 December 2025 marks a significant moment in the evolving global conversation on nuclear disarmament. At its core, the resolution addresses a pressing question: as artificial intelligence becomes increasingly integrated into nuclear command, control, and communications (NC3) systems, how do we ensure that the decision to use nuclear weapons remains a human one?

This question finds a direct echo in India’s statement at the High-Level Segment of the Conference on Disarmament (CD) on 24 February 2026. Though emerging from different institutional contexts, both documents articulate a shared concern — that automation in nuclear decision-making introduces risks that current frameworks are ill-equipped to manage.

Yet India’s approach also introduces a layer of strategic caution, reflecting the complexities of aligning global norms with national security imperatives. Understanding this alignment — and its limits — offers important insights into the future of nuclear governance in an AI-driven world.

Risks, Principles, and Calls to Action

Resolution 80/23 identifies several distinct risks associated with integrating AI into NC3 systems: automation bias, technical malfunctions, cyber vulnerabilities, and compressed decision-making timelines. Individually, each of these presents serious concerns. Collectively, they could significantly increase the probability of accidental, unintended, or unauthorized nuclear use — with catastrophic consequences.

In response, the resolution urges states to ensure that human oversight remains central to nuclear decision-making, even where AI tools are adopted. It further calls on states to develop and publish national policies that reflect this commitment, and encourages greater transparency and shared understanding among all actors. Importantly, the resolution does not oppose AI outright; it acknowledges the technology’s potential contributions to verification and monitoring. The concern is not with AI itself, but with its unchecked application in domains where errors are irreversible.

Conceptual Alignment, Strategic Restraint

India’s position at the CD reflects a clear conceptual alignment with the resolution’s core objective. Foreign Secretary Vikram Misri explicitly affirmed that ‘the decision to use nuclear weapons would be taken by humans’, a direct and unambiguous endorsement of the principle of human accountability in matters of existential consequence.

India further emphasized the importance of embedding human judgment within AI-assisted military applications, both to ensure compliance with international humanitarian law and to mitigate operational risks. Its domestic framework on “Trustworthy AI in the Defence Sector,” built around principles of reliability, safety, and transparency, demonstrates that this is not merely a rhetorical commitment — it reflects an active effort to translate norms into institutional practice.

Yet India abstained on Resolution 80/23. This abstention deserves careful interpretation. It does not signal opposition to the resolution’s goals. Rather, it reflects India’s concern that the resolution’s normative framing does not fully account for the security perspectives of all states, particularly nuclear-armed ones outside the NPT framework. In India’s view, durable norms must emerge from genuinely inclusive, consensus-driven processes — not frameworks that risk codifying asymmetric obligations. This position is consistent with India’s broader approach to multilateral disarmament texts, where it has repeatedly called for universally applicable, non-discriminatory commitments.

This caution is also grounded in doctrine. India’s nuclear posture — anchored in credible minimum deterrence and a no-first-use policy — is premised on deliberate, politically authorized decision-making. Any drift toward excessive automation would not merely raise abstract risks; it would directly undermine the doctrinal stability that India’s deterrence framework depends upon.

Neither Restriction nor Uncritical Adoption

India’s CD statement situates the question of AI in nuclear systems within a broader discourse on emerging technologies and global security. India does not approach this as a binary choice between embracing and restricting AI. Rather, it advocates for responsible use, appropriate regulatory frameworks, and vigilance against the unregulated militarization of transformative technologies.

This mirrors the resolution’s own balanced posture. Both recognize AI’s constructive potential — in verification, transparency, and early warning — while warning against its destabilizing application in sensitive military domains. Notably, India also raises the concern that excessive focus on AI risks could lead to the “stigmatization” of technologies with significant developmental value, particularly for the Global South. This is a perspective that broader multilateral conversations have often underweighted, and it reflects India’s dual identity as both a security actor and a developing economy with transformative technological ambitions.

Architecture of Norm-Building

A deeper point of convergence lies in both documents’ emphasis on multilateral engagement as the appropriate venue for norm development. India has consistently advocated for a UN system-wide assessment of the impact of science and technology on international security — a position that complements the resolution’s call for transparency and shared understanding.

India’s active engagement in discussions on Lethal Autonomous Weapons Systems (LAWS) within UN frameworks further reflects its commitment to shaping emerging norms through inclusive dialogue rather than standing apart from them. At the same time, India is clear that this engagement must be grounded in equity: new rules should be broadly acceptable and reflect the security realities of all states, not just the most powerful ones.

Risk Reduction: From Doctrine to Practice

Resolution 80/23 places significant emphasis on interim risk-reduction measures in the absence of complete nuclear disarmament. India’s long-standing advocacy for de-alerting and de-targeting of nuclear weapons, and its opposition to “hair-trigger” alert postures, align closely with this objective.

In an era where AI could dramatically compress decision-making timelines and increase system complexity, these measures become even more urgent. The risk is not merely theoretical: accelerated response cycles driven by algorithmic assessment could reduce the space for human deliberation precisely when it is most needed. India’s domestic Trustworthy AI framework is, in this sense, a practical contribution to the broader challenge — an attempt to operationalize the principle of human control rather than simply assert it.

As nuclear stability will depend increasingly on software architectures and decision protocols — not just on hardware and arsenals — the value of such frameworks extends well beyond any single state’s borders.

Strategic Autonomy and Shared Goals

The relationship between Resolution 80/23 and India’s 2026 CD statement illustrates a recurring challenge in contemporary arms control: how to build durable global norms in a world of divergent security contexts. Both documents share a foundational conviction — that human judgment must remain at the core of nuclear decision-making, and that AI introduces risks that demand urgent, coordinated attention.

Where they differ is in approach. The resolution seeks to establish a common normative baseline; India insists that such a baseline can only be sustainable if it is built through genuinely inclusive processes that reflect all security perspectives. India’s abstention is, in this light, less a rejection than a demand for higher standards of multilateral legitimacy.

The convergence between these two positions — partial, qualified, but real — suggests that there is a shared understanding of the stakes involved, even if the pathways diverge. As AI continues to reshape the nuclear landscape, sustaining this understanding through patient, inclusive dialogue will be essential. The alternative — norms that are selectively applied, or stability frameworks that outpace the political consensus needed to sustain them — carries risks that no state can afford to ignore.

(The author is a policy professional in international relations and trade policy, formerly associated with the Ministry of External Affairs (Policy Planning & Research Division) and the Ministry of Heavy Industries, Government of India. Views expressed are personal. He can be reached at @Nicholeballawar (https://x.com/Nicholeballawar).)
