With over 150 bills introduced, more than 25 committee meetings, and nine Senate insight forums, artificial intelligence (AI) remained a hot topic in the U.S. Congress in 2023. With the conclusion of Senator Chuck Schumer’s “insight forums” in December of last year, lawmakers will continue in 2024 to hammer out which of these ideas should ultimately become law. Here’s one area they should not neglect: building an early warning system for potentially severe risks posed by the most advanced AI systems.
A straightforward first step would be for Congress to enshrine, through legislation, a key provision from the White House’s October 30 executive order on AI. Buried amidst the order’s directives for agencies to draft reports and run consultative processes, a creative provision in section 4.2 attempts to put the government on better footing to handle the possible risks of increasingly advanced AI systems. The section requires companies developing “dual-use foundation models” to report “information, reports, or records” regarding those models—such as information about safety testing results—to the government.
Dual-use foundation models are defined in the order as those that could pose serious safety and security risks, such as by designing bioweapons, plotting cyberattacks, or evading human control altogether. There is a great deal of expert disagreement around whether to expect these kinds of severe risks from near-future systems. Some experts have warned that the risks are comparable to those of pandemics and nuclear war. Other experts disagree, either thinking that near-future AI systems are unlikely to be capable enough to pose those threats or that we will find ways to build them to be inherently safe.
Given this level of uncertainty and expert disagreement, the executive order’s focus on information sharing is a good way for the government to hedge its bets. If the most severe risks fail to materialize, the compliance burden is minimal, as companies only have to report the results of safety tests they are likely already performing—and if they aren’t performing that testing, it’s important to know that. In addition, the requirements are restricted to AI models trained with computational resources that cost hundreds of millions of dollars, so they only apply to large companies in any case. If severe risks do emerge, the requirements—if well-implemented—would provide the government with a crucial early warning.
One reason Congress should put this provision into legislation is that there are limits to what the president can do in an executive order. When it comes to the provision in question, the White House resorted to using the Defense Production Act, a 1950s-era law originally designed to mobilize the defense industrial base, to implement the reporting requirements. Perhaps because of this, it is currently ambiguous whether companies are required to conduct safety testing or simply report any testing that they do conduct. And of course, a future president could decide to rescind the entire executive order, putting this important section (and others) in jeopardy. A law passed by Congress could resolve these problems.
In passing a law, Congress would also have more leeway to set parameters for how the reporting requirements should be implemented. For example, it could set up a dedicated office for receiving and analyzing the reports, specify how the government should decide which safety tests companies must perform, and establish processes for determining which AI models should be included. All of these details would be on more solid and permanent footing than they are under the current executive order.
Congress should also consider that an early warning is of little use if the government cannot respond to it. Legislation to codify these reporting requirements could therefore also strengthen the government’s ability to respond to any severe risks that do arise. For example, Congress could require that a model with sophisticated hacking capabilities be made available first to those tasked with defending critical infrastructure. The non-binding AI Risk Management Framework issued by the National Institute of Standards and Technology currently recommends that where “catastrophic risks are present,” an AI system should not be developed or deployed further until those risks have been mitigated. Congress could ensure the government is empowered to act if private entities do not voluntarily abide by that recommendation, for example by providing (perhaps time-limited) authority for the government to halt a dangerous AI system’s development or deployment.
There’s no consensus about how soon we are likely to see serious risks from AI, but many experts think there is a real chance that it could be sooner rather than later. This makes it critical that the government be able to detect and respond to severe risks that could arise from the most advanced AI systems as the field continues to progress. The reporting requirements in Biden’s AI executive order were a step in the right direction. Congress would do well to include a stronger, more permanent version of those provisions in whatever AI legislation manages to get over the line.
Thomas Woodside is a Horizon Junior Fellow at CSET.