Policies To Address Extinction Risks From Advanced AI
Advanced AI poses an extinction risk to humanity. Leading AI researchers and the CEOs of the three most advanced AI companies have recognized these risks. The UK AI Safety Summit provides world leaders with an opportunity to lay the groundwork for international coordination around these pressing threats.
Extinction risks from AI arise chiefly from "scaling": the process of building increasingly large, ever more autonomous, and more opaque AI systems. The vast majority of AI research with concrete applications, such as cancer detection and trading algorithms, is unconcerned with scaling.
Governments have a critical and time-limited opportunity to reduce extinction risks from AI. This report recommends immediate actions that can be taken by governments: recognizing the extinction risks from advanced AI, acknowledging the need for concrete scaling limits, and committing to build an international advanced AI project. Specifically, we recommend:
Establishing a Multinational AGI Consortium (MAGIC), a 'CERN for AI'. This institution would house world-leading AI scientists from all signatory countries, dedicated to the joint mission of advanced AI safety. MAGIC would perform cutting-edge research on controlling powerful AI systems and establish emergency response infrastructure that allows world leaders to swiftly halt AI development or deployment.
Implementing global compute limitations. Compute limitations mitigate extinction risk by throttling the very few dangerous AI systems that rely on advanced hardware, while leaving the vast majority of the AI ecosystem unhindered. We recommend a tiered approach: development above a moratorium threshold would be prohibited (except within MAGIC); development above a danger threshold would require regulation by MAGIC or a MAGIC-certified regulator; and development below the danger threshold would be unaffected by MAGIC.
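The tiered rule above amounts to a simple classification of a proposed training run by its compute budget. The sketch below illustrates this; the FLOP thresholds are hypothetical placeholders chosen for illustration, not figures proposed in this report:

```python
# Illustrative sketch of the tiered compute-limitation rule.
# The threshold values are hypothetical placeholders, not from the report.

MORATORIUM_THRESHOLD = 1e25  # training FLOP; runs above this allowed only within MAGIC
DANGER_THRESHOLD = 1e24      # training FLOP; runs above this require regulatory oversight


def classify_training_run(training_flop: float) -> str:
    """Map a proposed training run's compute budget to its regulatory tier."""
    if training_flop >= MORATORIUM_THRESHOLD:
        return "prohibited outside MAGIC"
    if training_flop >= DANGER_THRESHOLD:
        return "requires MAGIC or a MAGIC-certified regulator"
    return "unaffected"
```

Under this scheme, only the small number of frontier-scale runs fall into the regulated or prohibited tiers; everything below the danger threshold proceeds as before.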
Gating critical experiments. Before developing critical systems, companies should demonstrate the safety of these systems through verifiable criteria. The burden of proof should be on companies performing frontier AI research to demonstrate that their systems are safe, rather than on regulators or auditors to show that they are dangerous. This follows standard practice in high-risk sectors, where demonstrating safety is a precondition for undertaking hazardous endeavors.
These proposals have strong support among the British and American public. Recent YouGov polling shows that 83% believe AI could accidentally cause a catastrophic event, 82% prefer slowing down the development of AI, 82% do not trust AI tech executives to regulate AI, 78% support a "global watchdog" to regulate powerful AI, and 74% believe AI policy should currently prevent AI from "reaching superhuman capabilities" (Artificial Intelligence Policy Institute [AIPI], 2023; Control AI, 2023).