The Financial Times
Madhumita Murgia of The Financial Times interviewed CSET’s Heather Frase for an article about a red team recently organized by OpenAI, a group of 50 academics and experts hired to test and mitigate the risks of OpenAI’s latest large language model (LLM), GPT-4. As a red team member, Frase tested GPT-4’s potential for aiding criminal activities and found that the risks associated with the technology would increase with widespread use. She emphasized the importance of operational testing, explaining that “things behave differently once they’re actually in use in the real environment.” Frase also advocated for the creation of a public ledger to report incidents related to LLMs, similar to existing reporting systems for cybersecurity or consumer fraud. The article caught the attention of other outlets, including Freethink and O’Reilly Media, which then contacted Frase for her views.
Time
For a Time magazine investigation into how technology companies are trying to water down proposed changes to rules governing the use of AI in the European Union, Billy Perrigo turned to CSET Director of Strategy Helen Toner. If passed, the AI Act would make the EU the first major jurisdiction outside of China to enact targeted AI regulation. As Toner pointed out, “The underlying problem here is that the whole way they structured the EU Act, years ago at this point, was by having risk categories for different uses of AI.” She explained that the systems driving advances in AI today, such as GPT-4, represent a significant shift in how AI works.
The New York Times
German Lopez, who writes The Morning newsletter for The New York Times, tapped Toner to add perspective to a story on how various advanced AI applications such as ChatGPT might “someday become a regular part of our lives, helping us in day-to-day routines.” However, he pointed out, “The current technology is imperfect. It can make mistakes, and it struggles with more complicated tasks or programs. But the same is true for human coders.” Toner noted, “Humans are not perfect at many of the tasks they perform.” Such technologies, then, don’t have to be perfect either to assist humans in their work. “They merely have to save time,” Lopez observed. “A human coder could then use that extra time to improve on the A.I.’s work, brainstorm other ideas for programs or do something else entirely.”
DigiChina
Regulators at the Cyberspace Administration of China this month issued draft measures to govern generative AI in China. The draft, which is open for public comment until May 10, would target services that generate text, images, video, code and other media. In a piece for Stanford University’s DigiChina Project, Toner called this step “the latest brick in the regulatory structure that China is constructing around AI and related technologies.” She said one development to watch “will be how (and whether) these rules apply to research and development” and “whether any provisions will be added to future drafts to cover earlier parts of the research-to-product pipeline.”
Foreign Affairs
In a cautionary piece for Foreign Affairs, Research Fellow Josh A. Goldstein and OpenAI Researcher Girish Sastry unpacked how language models could enhance disinformation campaigns and undermine public trust. They wrote, “As generative AI tools sweep the world, it is hard to imagine that propagandists will not make use of them to lie and mislead. To prepare for this eventuality, governments, businesses and civil society organizations should develop norms and policies for the use of AI-generated text, as well as techniques for figuring out the origin of a particular piece of text and whether it has been created using AI.” The article draws on long-term research conducted by Goldstein and Sastry together with colleagues at CSET, OpenAI and the Stanford Internet Observatory. The authors also provided an abridged version of that report, laying out its key points and recommendations to policymakers, including options for mitigating the threat of AI-enabled influence operations.
Breaking Defense
Generative AI’s potential to boost U.S. national security is being explored even as the technology evolves at a dizzying pace, Sydney J. Freedberg, Jr. reported in Breaking Defense. “Agencies like the CIA and State Department have already expressed interest. But for now, at least, generative AI has a fatal flaw: It makes stuff up.” Freedberg noted that asking the same question more than once of a tool powered by a large language model, such as ChatGPT or Bard, can return subtly or even starkly different responses. Research Analyst Micah Musser explained why: “Each time [the LLM] generates a new word, it is assigning a sort of likelihood score to every single word that it knows. Then, from that probability distribution, it will select — somewhat at random — one of the more likely words.” Training these tools on larger datasets can help, but as Musser pointed out, “even if it does have sufficient data, [even if] it does have sufficient context, if you ask a hyper-specific question and it hasn’t memorized the [specific] example, it may just make something up.”
Lawfare
An opinion piece in Lawfare about the security risks of artificial intelligence by Jim Dempsey of the Stanford Cyber Policy Center cited a recent workshop paper Musser had written on the subject. The paper was based on a gathering that had included Dempsey, experts from Carnegie Mellon University, Microsoft, Twitter, BNH.ai, the MITRE Corporation, Nvidia, the Chief Digital and Artificial Intelligence Office and Georgetown University Law Center, along with CSET colleagues Andrew Lohn, Heather Frase and John Bansemer. Dempsey noted the report “offers stark reminders that the security risks for AI-based systems are real … starts from the premise that AI systems, especially those based on the techniques of machine learning, are remarkably vulnerable to a range of attacks” and “recommends immediately achievable actions that developers and policymakers can take to address the issue.”
Semafor
Meanwhile, in a story about the AI safety discussion now gaining steam in Washington — or, as he put it, “going off the rails” — Semafor’s Reed Albergotti cited an earlier paper by Musser and former Semester Research Analyst Ashton Garriott analyzing machine learning’s role in cybersecurity and the potential it has for transforming cyber defense in the near future. Albergotti recommended the paper to readers “for some solid but less sensational D.C. analysis.”
Politico
And Musser’s just-released report, The Main Resource is the Human, got a detailed shout-out in Politico’s Digital Future Daily. Derek Robertson, who expressed interest in the paper ahead of its publication yesterday, led with its main point: “In the race for AI supremacy, it might not be computing firepower that gives researchers a leg up.” Rather, a survey of more than 400 AI researchers unearthed the surprising result that access to talent is a bigger constraint for their projects than computing power, or compute. The CSET survey also found that researchers’ compute usage was broadly similar between academia and industry, as was their level of concern about future compute access. Musser and his colleagues concluded, “In light of these results … this report suggests that policymakers temper their expectations regarding the impact that restrictive policies may have on computing resources, and that policymakers instead direct their efforts at other bottlenecks such as developing, attracting, and retaining talent.”
Roll Call
Journalist Gopal Ratnam turned to Research Analyst Ngor Luong for a scoop in Roll Call about potential action by the House Select Committee on the Strategic Competition Between the U.S. and the Chinese Communist Party that could lead to important policy changes. Luong offered thoughts on the prospect of a new regime for reviewing the security of U.S.-based investments in China’s tech sector, particularly AI, the subject of a recent paper she wrote with Research Fellow Emily Weinstein. The authors identified the main U.S. investors active in the Chinese AI market and the set of AI companies in China that have benefited from U.S. capital. They also recommended next steps for U.S. policymakers to better address concerns over capital flowing into the Chinese AI ecosystem. Ratnam had previously quoted them both in a story about that paper.
South China Morning Post
Xinmei Shen turned to Research Analyst Hanna Dohmen for an article in the South China Morning Post on the ways in which China may be undermining its tech development goals by restricting freedom of expression, particularly as it seeks to foster a home-grown version of ChatGPT. Regarding China’s goal of creating a rival to ChatGPT, Dohmen stated, “excessive restrictions, content regulation, and censorship could hinder commercialization and further innovation of such technologies.”
Issues in Science and Technology
Senior Fellow Jaret Riddick teamed up with Research Analyst Cole McFaul on a commentary in Issues in Science and Technology. The pair were asked to join the discussion of “What the Ukraine-Russia War Means for South Korea’s Defense R&D,” responding to an analysis published in a previous edition of the journal. Riddick and McFaul said the recommendations in that analysis “closely align with recent Defense Department efforts to foster innovation and accelerate adoption of the technologies that are fast transforming the U.S. national security landscape.”
GovConWire
Riddick joined a panel of distinguished experts from the Department of Defense and DOD-adjacent institutions to discuss the challenges and potential pathways for new talent and technology adoption, along with how to spur innovation, as part of the annual Defense R&D Summit at the Potomac Officers’ Club. His remarks capped a story on the event by GovConWire’s Charles Lyons Burt: “Riddick, who worked in the government until very recently, recounted an anecdote wherein he called a colleague, who is president of a small college in Ghana. ‘I called him once to talk about collaboration, thinking that I was coming to save the day. I left the phone call thinking we need to collaborate with them, because there are things that they’re doing to bring science to a local level and have it be an iterative process between local issues and science in a way that was really impressive to me.’”
The EurAsian Times
In a story about cyberspace security as it relates to tensions between the United States and China, The EurAsian Times noted a widely cited CSET report by Research Analyst Jack Corrigan, Sergio Fontanez and Michael Kratsios. The report offers an overview of procurement bans at both the federal and state levels and recommends strategies for enhancing U.S. defenses against foreign technology. Writing about the tightening of laws around cybersecurity and espionage in both countries, NC Bipindra pointed out that the latest developments come as the two countries engage in a growing geopolitical battle on multiple fronts.
Spotlight on CSET Experts: Micah Musser
Micah Musser is a Research Analyst at CSET, where he works on the CyberAI Project. His latest reports include The Main Resource is the Human, Adversarial Machine Learning and Cybersecurity, and Forecasting Potential Misuses of Language Models for Disinformation Campaigns and How to Reduce Risk. Previously, he worked as a Research Assistant at the Berkley Center for Religion, Peace, and World Affairs. He graduated from Georgetown University’s College of Arts and Sciences with a B.A. (summa cum laude) in Government, with a focus on political theory.
Interested in speaking with Micah or our other experts? Contact Director of External Affairs Lynne Weil at Lynne.Weil@georgetown.edu.
Want to stay up to date on the latest CSET research? Sign up for our day-of-release reports and take a look at our biweekly newsletter, policy.ai.