Variable 6: Artificial Intelligence and Emerging Technologies
Artificial intelligence (AI) systems will become more powerful, versatile, and widespread in the near future, placing increasing pressure on the foundations of economic, social, and political systems globally. Cooperative institutions that regulate and manage AI, particularly at the international level, will struggle to keep pace with this rapid rate of change, hindered by geopolitical divisions and the fact that most AI innovation occurs among private actors rather than national governments. These challenges will require the international community to consider new approaches to the international governance of AI. This proposal seeks to respond to gaps in global AI regulation and safety, particularly the dangers posed by a rogue generative AI that has substantial decision-making authority and eludes built-in controls and regulation at the national and international levels. More specifically, we propose a backstop in case safeguards designed to prevent an AI crisis fail.
There is widespread concern about AI’s future development.1 Some have warned of the technology’s capacity to surpass human intelligence within just a few years, with the possibility that, in an extreme worst-case scenario, artificial intelligence’s development leads to the extinction of the human race. At the international level, experts and policymakers worry that AI could significantly worsen geopolitical divisions and hamper the ability to resolve transnational challenges. Militarily, analysts fear the deployment of AI in new weapon systems, including drones and other autonomous weapons, lessening human control and potentially lowering the barrier to killing. This is particularly so in the case of nuclear command and control. Some have advocated increasing the use of AI in nuclear command and control, arguing it would make accidental use less likely than human supervision alone. Others, however, including many warfare ethicists, are dismayed by the prospect that a nuclear war could start without human decision-makers in the loop. They argue that including AI in nuclear command and control may make accidental use even more likely, given the speed with which AI would make decisions.
Others in the AI community are more optimistic about the future development of the technology and its possible effects. For some, this is because they believe the technology is not advancing as quickly as initially feared and may not yield further significant breakthroughs. Others believe it is possible to build controls into AI itself that will prevent the technology from causing serious harm. This debate will not be settled any time soon. What is clear, however, is that the future risk of AI is significant enough to warrant efforts to ensure its safe development. It would be irresponsible to do nothing in the hope that things take care of themselves.
The concern over AI has generated multiple and diverse efforts to provide ground rules for this burgeoning but essentially unregulated field. These regulatory efforts fall into three categories, which are not mutually exclusive but overlapping and potentially mutually reinforcing:
- Ethical and normative frameworks to guide the research, development, deployment, and use of AI, most of which are voluntary or rely on self-policing;
- National or regional regulations or laws related to AI; and
- International AI governance, including the formation of new institutions and international standards put forth by existing institutions, including the U.N. and G7.
The first category primarily concerns issues surrounding copyright, privacy, and bias related to gender, race, sexual orientation, and disability. However, some of the initiatives are intended to address general AI safety. More than 100 sets of principles have been developed at the time of this writing, including the guiding principles in the U.N. AI Advisory Body Interim Report.2 The report calls for AI to be governed “inclusively, for the benefit of all,” in the “public interest,” and to be “universal, networked, and rooted in adaptive multi-stakeholder collaboration.” In the United States, the Biden administration’s Executive Order 14110 includes voluntary commitments from leading AI companies, including the development of “safe and secure” AI, promoting “responsible innovation,” and AI that protects civil rights and workers.3 While important, these efforts are insufficient on their own and need to be supplemented by compulsory standards.
The second category of AI management refers to actual laws and regulations at the national or regional level. The leading example is the E.U. AI Act — a European Union-wide regulatory framework that governs AI according to the level of risk it poses.4 It is the first set of regulations of its kind. However, the E.U. AI Act does not regulate all AI, leaving AI used for military, national security, and research purposes unaddressed.
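To make the Act’s risk-based structure concrete, the short Python sketch below encodes its four published risk tiers (unacceptable, high, limited, and minimal). The tier names come from the Act itself; the obligation summaries, example systems, and code structure are our own illustrative simplification, not a legal or official encoding.

```python
# Illustrative sketch of the E.U. AI Act's risk-based approach.
# The four tiers are from the Act; the summaries below are informal
# paraphrases for illustration, not legal definitions.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g., social scoring)"
    HIGH = "strict obligations: conformity assessment, human oversight"
    LIMITED = "transparency duties (e.g., disclose that users face an AI)"
    MINIMAL = "largely unregulated (e.g., spam filters)"


def obligations(tier: RiskTier) -> str:
    """Return the regulatory consequence attached to a risk tier."""
    return tier.value


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.name}: {obligations(tier)}")
```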
Part of the regulatory challenge that AI poses is its inherent dual-use nature. Once released, AI models can essentially be put to any purpose; they lack the fundamental limitations or distinctions one finds in other technologies. As such, many international bodies are attempting to develop more holistic AI regulations with new institutions to manage them. The U.N. AI Advisory Body, for example, is working to coordinate global AI governance and has members from various governments, civil society, and major private organizations. Additionally, the U.N. Educational, Scientific and Cultural Organization’s (UNESCO’s) Global AI Ethics and Governance Observatory maintains the mandate to “provide a global resource for policymakers, regulators, academics, the private sector and civil society to find solutions to the most pressing challenges posed by Artificial Intelligence.”5 Former British Prime Minister Rishi Sunak has proposed a “CERN [European Organization for Nuclear Research] for AI,” which would attempt to regulate AI much as CERN does for international particle physics.6 Similarly, numerous experts have called for an artificial intelligence International Atomic Energy Agency — referencing the international body that governs the use of nuclear technology.7
These bodies and their proposed regulations are numerous and multifaceted. Their efforts, which mainly aim to prevent the malicious use of artificial intelligence and any international challenges that could result, are vital. However, prevention is not foolproof. Regulatory gaps exist and will likely always exist for three reasons.
First, geopolitical divisions may prevent the cooperation necessary to reach robust international agreements. Second, AI is predominantly developed and controlled by private actors rather than national governments. Finally, and perhaps most importantly, there is the risk that AI evolves so quickly, on its own, outside of human control, and in ways that we cannot anticipate, that regulations and countermeasures will not be able to keep pace. Preventive efforts rely on our limited ability to anticipate what AI will become, which means we also need the means to counter forms and conduct of AI that we cannot presently foresee.
It is necessary, therefore, to combine robust preventive efforts with a backstop, or failsafe, that can act in the event of an AI crisis that preventive and regulatory measures fail to stop. Specifically, we seek to complement the aforementioned processes with an organization that addresses the risks of a rogue, non-human-directed AI.
Proposal 17: An AI emergency first response force
We propose the establishment of an international organization that serves as an emergency first response force for global artificial intelligence threats and emergencies that no single country could adequately respond to alone. The new organization should be inclusive and responsive to the states forming its membership, and independent of other international organizations. It would, however, require the partnership, cooperation, and coordination of numerous intelligence organizations, law enforcement agencies, private companies, universities, technical and scientific institutes, as well as the governments of its member states. Its objective would be to monitor for global AI emergencies; to prepare countries, governmental organizations, and private companies to respond to them; and, when necessary, to act directly and coordinate response efforts. It would particularly focus on rogue generative AI that has escaped or eluded built-in controls and regulation at the national and international levels.
This organization combines a focus on prevention and response; its AI first responders would have three core divisions: (I) monitoring, detection, and prevention, (II) emergency preparedness, and (III) threat response and coordination.8 These organizational divisions would operate in unison despite their distinct objectives and areas of focus. Think of this organization as the tip of the spear for global AI emergency response.
Division I: Monitoring, detection, and prevention
The first division would serve primarily as a monitoring and detection watchdog for global AI threats and emergencies — the world’s AI eyes and ears. The ambition is to build an effective early warning system for developing AI threats and emergencies, whose alerts the organization would initially deliver to private organizations, law enforcement, and governments in the hope that, with enough warning, those actors could neutralize or mitigate the threat on their own. If prevention through early warning fails, however, the organization would stand ready to coordinate a response with relevant organizations and entities or to step in and respond itself (the purpose of Division III). Division I would also include two other teams: (1) a team dedicated to researching the detection of future rogue AI threats and the best counter-responses to them, and (2) a research and development team that builds the organization’s own AI tools to combat rogue and malicious AI.
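As a rough illustration of how Division I’s early-warning flow might work in practice, consider the minimal Python sketch below. Everything in it (the signal sources, the anomaly scores, and the escalation thresholds) is a hypothetical placeholder, intended only to show the monitor, warn, and respond logic described above.

```python
# Minimal sketch of Division I's early-warning flow: ingest signals about
# deployed AI systems, score them, and escalate. All names, thresholds,
# and signal sources are hypothetical illustrations, not a real system.
from dataclasses import dataclass


@dataclass
class Signal:
    source: str           # e.g., a partner lab, ISP, or national CERT
    system_id: str        # identifier of the AI system being observed
    anomaly_score: float  # 0.0 (benign) to 1.0 (clearly rogue behavior)


def triage(signal: Signal, warn_at: float = 0.5, respond_at: float = 0.9) -> str:
    """Route a signal: early warning to partners first, response as backstop."""
    if signal.anomaly_score >= respond_at:
        return "DIVISION_III"   # hand off to threat response (Division III)
    if signal.anomaly_score >= warn_at:
        return "EARLY_WARNING"  # notify governments, firms, law enforcement
    return "MONITOR"            # keep watching; no action yet


if __name__ == "__main__":
    for s in [Signal("partner-lab", "model-A", 0.3),
              Signal("national-cert", "model-B", 0.7),
              Signal("isp-telemetry", "model-C", 0.95)]:
        print(s.system_id, "->", triage(s))
```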
Division II: Emergency preparedness
The second division would focus on preparing all relevant organizations and entities to respond to AI threats and emergencies. It would conduct emergency preparedness exercises, run simulations, and offer best practices to those organizations and entities positioned to neutralize an emerging AI threat or to respond once that threat has materialized and been deployed.
This division’s objective would be to ensure that, to the best of its ability, individuals and organizations are not responding to threats and emergencies for the first time. Emergency response and coordination plans would be in place to help guide actors in the event of a crisis. This division of the organization would focus on prevention via preparation, guided by the belief that for frontline actors, anticipation beats reaction.9
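One way to picture this division’s output is as a coverage measure: how much of a simulated crisis an organization’s existing response plan already anticipates. The minimal Python sketch below illustrates the idea; the plan steps, scenario demands, and scoring rule are invented for illustration only.

```python
# Minimal sketch of Division II's preparedness exercises: replay a scenario
# against an organization's response plan and score how much of it was
# anticipated. Scenario contents and scoring are hypothetical illustrations.
def readiness_score(plan_steps: set[str], scenario_demands: set[str]) -> float:
    """Fraction of a scenario's demands already covered by the response plan."""
    if not scenario_demands:
        return 1.0
    return len(plan_steps & scenario_demands) / len(scenario_demands)


if __name__ == "__main__":
    plan = {"isolate_model", "notify_regulator", "activate_backup"}
    drill = {"isolate_model", "notify_regulator", "trace_exfiltration"}
    print(f"Coverage: {readiness_score(plan, drill):.0%}")  # gaps feed new training
```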
Division III: Threat response and coordination
The third division would serve as the global tip of the spear for responding to, neutralizing, and containing AI threats and emergencies when prevention has failed. A key element of this division’s work would be using AI to fight back against threatening or rogue AI. If emerging AI threats cannot be eliminated or minimized at their source, its teams (organized according to specific types of AI threats) would step in when other organizations (most likely at the national level) fail, coordinating among relevant entities and intervening when necessary to eliminate the threat. Teams would be on constant alert, ready to deploy (likely virtually, but potentially on the ground as well) in response to any AI crisis. Operational plans for coordination (e.g., with relevant private actors or a national intelligence service) and response would be established in advance so that threats can be addressed as quickly and effectively as possible. The teams would train regularly to respond to various AI threats and emergencies.
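To illustrate the role of pre-established operational plans, the minimal Python sketch below maps hypothetical threat types to standing teams, coordination partners, and first actions. The threat categories, team names, and playbook contents are all invented placeholders; the point is only that responders consult a plan agreed in advance rather than improvising.

```python
# Minimal sketch of Division III's dispatch logic: each threat type maps to a
# standing team and a pre-established coordination plan, so responders are
# never improvising from scratch. All entries are hypothetical placeholders.
PLAYBOOKS = {
    "rogue_generative_ai": {
        "team": "containment-team-1",
        "coordinate_with": ["host-country CERT", "model operator"],
        "first_action": "isolate the model's compute and network access",
    },
    "autonomous_weapon_malfunction": {
        "team": "kinetic-team-1",
        "coordinate_with": ["national military command", "manufacturer"],
        "first_action": "transmit pre-agreed shutdown/override commands",
    },
}


def dispatch(threat_type: str) -> dict:
    """Look up the standing playbook; unknown threats go to a general team."""
    return PLAYBOOKS.get(threat_type, {"team": "general-response",
                                       "coordinate_with": [],
                                       "first_action": "assess and classify"})


if __name__ == "__main__":
    print(dispatch("rogue_generative_ai")["first_action"])
```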
1. The AI concerns addressed in this report primarily reflect the discussions in the technology capitals of developed countries, particularly the United States and Europe. While states in the Global South tend to share those concerns, other questions — including, for instance, AI’s environmental impact — are more central. This divide is likely to increase over time.
2. U.N. AI Advisory Body, Interim Report: Governing AI for Humanity (New York: United Nations, 2023), https://www.un.org/sites/un2.un.org/files/ai_advisory_body_interim_report.pdf.
3. Government of the United States, “Executive Order 14110 of October 30, 2023, On the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence,” The White House, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
4. European Union, “Artificial Intelligence Act (Regulation (EU) 2024/1689), Official Journal version of 13 June 2024,” Interinstitutional File: 2021/0106(COD), EUR-Lex, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689.
5. UNESCO, “Ethics of Artificial Intelligence,” https://www.unesco.org/en/artificial-intelligence/recommendation-ethics.
6. Laurie Clarke, Annabelle Dickson, and Cristina Gallardo, “Rishi Sunak wants to lead the world on AI. The world ain’t listening,” Politico, June 5, 2023, https://www.politico.eu/article/rishi-sunak-ai-technology-wants-to-lead-the-world-on-ai-the-world-aint-listening/.
7. Ian J. Stewart, “Why the IAEA model may not be best for regulating artificial intelligence,” Bulletin of the Atomic Scientists, June 9, 2023, https://thebulletin.org/2023/06/why-the-iaea-model-may-not-be-best-for-regulating-artificial-intelligence/.
8. Other teams or divisions within this proposed organization, such as legal and human resources, are likely to exist. For the purposes of this proposal, only those divisions with explicit AI functions are listed.
9. Division I’s preventive efforts, by contrast, operate via early warning and detection.