In the first meeting of its kind, the United Nations Security Council (UNSC) held discussions on the emerging opportunities and risks posed by generative artificial intelligence. Representatives of all fifteen member states attended, with UK Foreign Secretary James Cleverly chairing the session during the United Kingdom's Council presidency. Speakers emphasized the urgency for the international community to harness AI's revolutionary potential while acknowledging the inherent dangers of the new technology, according to a UN news release.
The deployment of generative AI, Secretary-General António Guterres warned, could have serious ramifications for international peace and security, raising the possibility of death, destruction, trauma, and psychological harm.
He also elaborated, “While it took more than 50 years for printed books to become widely available across Europe, ChatGPT reached 100 million users in just two months.”
While AI has the potential to help alleviate poverty and hunger, cure cancer, and spur climate action, its misuse by rogue actors could have catastrophic consequences. Guterres proposed establishing a global watchdog to monitor artificial intelligence and forming a high-level advisory body to report on options for global AI governance by the end of the year.
James Cleverly, who chaired the session as UK Foreign Secretary, urged international cooperation on AI: “We are here today because AI will affect the work of this council. It could enhance or disrupt global strategic stability. It challenges our fundamental assumptions about defense and deterrence,” he said, adding that “…no country will be untouched by AI.”
“Our shared goal will be to consider the risks of AI and decide how they can be reduced through coordinated action.”
The Council also heard from expert briefers, including Professor Zeng Yi, co-director of the China-UK Research Center for AI Ethics and Governance, and Jack Clark, co-founder of the AI company Anthropic.
Clark, a former policy director at OpenAI, argued that the safety of these still poorly understood systems depends on building a rigorous science of how they work, much as understanding the science of combustion made engines safe. He warned against relying solely on private-sector actors and emphasized the need for government involvement in securing AI applications.
Zeng Yi, who also directs the Brain-inspired Cognitive Intelligence Lab, stressed that current AI systems are essentially information-processing tools that lack any genuine human capacity to understand and respond, and therefore cannot be trusted as responsible agents for human decision-making. He added that AI, in both the near and long term, poses existential risks to humanity, because there is no adequate defense against its potential weaponization of human vulnerabilities.
Representatives from Ghana, Ecuador, and China criticized the militarization of AI, highlighting the risks posed by lethal autonomous weapons. They called for a framework grounded in peaceful and ethical use of the technology, warning that unchecked development carries a very real risk to humanity's survival.
“The robotization of conflict is a great challenge for our disarmament efforts and an existential challenge that this Council ignores at its peril,” warned Hernán Pérez Loose, Ecuador’s representative.
“AI risks include its integration into autonomous weapons systems…,” observed Harold Adlai Agyeman, the Ghanaian representative.