Why is Human Control Over AI Essential in Military Applications?

Key Takeaways
- Human oversight is essential in the military application of AI.
- Military applications of AI must comply with international law and the UN Charter.
- Global regulatory frameworks for AI are urgently needed.
- Collaboration among various sectors is crucial to combat AI misinformation.
- Bridging the AI capacity gap can promote sustainable development.
United Nations, Sep 25 (NationPress) UN Secretary-General Antonio Guterres emphasized the crucial need for human oversight of artificial intelligence (AI) in relation to the use of force.
“Let’s be unequivocal: the destiny of humanity should not rest with an algorithm. Authority over life-and-death choices must remain with humans,” he stated on Wednesday (local time) during an open Security Council debate on AI's implications for international peace and security.
The Security Council, alongside UN member nations, must guarantee that military applications of AI adhere strictly to international law and the UN Charter. Human judgment and control are vital in every instance of the use of force, he asserted.
AI is no longer a concept of the future; it is currently reshaping everyday life, the information landscape, and the global economy at an astonishing pace. The pertinent issue is not whether AI will impact international peace and security, but rather how the global community will influence that impact, Guterres noted.
In the absence of regulation, AI risks being weaponized. AI-driven cyberattacks can cripple or destroy critical infrastructure within moments. The capability to fabricate and manipulate audio and video content endangers the integrity of information, deepens division, and may trigger diplomatic crises. Furthermore, the enormous energy and water requirements of large AI models, alongside competition for critical minerals, are creating new sources of tension, he cautioned.
Guterres emphasized the necessity for coherent global regulatory frameworks for AI, as reported by Xinhua news agency.
Collaboration among governments, platforms, media, and civil society is essential to identify and counter AI-generated misinformation—from disinformation campaigns to deepfakes that could undermine peace processes, humanitarian efforts, and electoral integrity, he stated.
“Transparency throughout the entire AI lifecycle is vital: rapid and verified attribution of information sources and their dissemination, along with systemic safeguards to prevent AI systems from propagating misinformation and inciting violence,” he urged.
He also pointed out the need to bridge the AI capacity gap. “Technology has the potential to expedite sustainable development and promote stability and peace. We must provide opportunities for all nations to contribute to shaping our AI future.”
From nuclear disarmament to aviation safety, the international community has proactively addressed technologies that could destabilize societies by establishing rules, creating institutions, and upholding human dignity, he remarked. “The opportunity to shape AI—for peace, justice, and humanity—is dwindling. We must act promptly.”