US and China Decline Global Military AI Declaration: What It Means
The United States and China have declined to sign a global declaration on artificial intelligence use in military settings, highlighting deep divides in international governance and strategic competition.

In an era where artificial intelligence is increasingly shaping global security, two of the world's most powerful nations, the United States and China, chose not to endorse a joint international declaration on the use of AI in military applications. The decision came at the Responsible AI in the Military Domain summit in A Coruña, Spain, and it highlights wide differences in how nations are approaching governance of emerging military technologies.
What the Declaration Proposed
At the summit, countries discussed a non-binding declaration that outlined about 20 guiding principles for responsible development and use of AI in military settings. Though not a legal treaty, the document was intended to promote norms that ensure safety, accountability and ethical oversight as AI tools become more prevalent in defence systems.
The principles in the draft declaration included:
- Emphasizing human responsibility and control over AI-enabled weapons and systems
- Clear chains of command and accountability for AI-driven operations
- Robust risk assessments and testing before deployment of AI capabilities
- Training and education for personnel working with AI technologies
- Sharing information on oversight mechanisms where national security permits
Though symbolic, these principles signal growing global concern that advances in artificial intelligence could outpace existing norms and legal frameworks.
The Opt-Out by the US and China
Of the 85 nations present at the summit, only 35 signed the declaration. Among those that did not were the United States and China, the two countries with the largest military capabilities and the most rapid development of AI technologies.
Officials and analysts at the summit suggested that deeper geopolitical and strategic considerations influenced this decision. The United States and China appear to be balancing ethical concerns with strategic flexibility: neither wants to be constrained by commitments that might limit its own AI advances if rivals do not follow the same path. Diplomats often describe this tension as a prisoner's dilemma, in which nations hesitate to impose limits on themselves while others are perceived to be moving ahead quickly.
Why the Declaration Matters
Even though it is non-binding, the declaration represents a significant effort to build shared expectations about how AI should be integrated into military planning and operations.
Supporters of the declaration believe that agreed norms can help:
- Reduce risks of accidents or unintended escalation caused by autonomous systems
- Ensure that human judgment remains central in decisions involving lethal force
- Provide a foundation for future international cooperation on arms control and AI safety
- Encourage transparency and oversight where feasible under national security constraints
However, the lack of participation from major AI powers underlines how difficult it is to achieve broad consensus when national security interests and competitive advantages are at stake.
Broader Geopolitical Context
The summit is one in a series of global efforts to address emerging risks from military AI. Previous meetings in The Hague and Seoul drew broader participation and introduced earlier frameworks for cooperation, including a "call to action" in 2023 and a "blueprint for action" in 2024. In those settings, more nations, though not all, supported non-binding commitments on responsible AI use.
The current development reflects a deeper phase in global discussions on AI and defence, where diplomatic engagement must balance ethical caution with strategic competition. The absence of consensus among the largest military AI developers highlights ongoing tensions over how much flexibility nations are willing to give up in exchange for collective norms.
What This Means for the Future
The decision by the United States and China not to sign this declaration does not mean global efforts to govern military AI will stop. Instead, it shows that the landscape of AI governance is complex and shaped by competing priorities.
The countries that did sign, including Canada, Germany, France, Britain, the Netherlands, South Korea and Ukraine, signalled that many nations want shared standards even if those standards are not legally binding.
In the coming years, this debate over military AI will likely expand into broader international forums, including the United Nations and regional security organizations. The challenge will be preserving ethical principles and human oversight while addressing the strategic realities of technological competition.
Final Takeaway
Artificial intelligence has moved from experimental labs into the heart of global security discussions. The fact that the United States and China chose not to endorse a joint declaration at a summit on military AI reflects enduring tensions between ethics, governance and strategic advantage.
As nations continue to integrate AI into defence systems, the need for shared frameworks will remain pressing. Whether such norms can ever achieve universal acceptance will shape not only military policy but the future balance of international power in the age of advanced technology.