Chair Issa, Ranking Member Johnson, members of the Subcommittee: Thank you for the opportunity to testify before you today.
I have spent the last six years working on AI and national security policy at Georgetown University’s Center for Security and Emerging Technology (CSET). U.S.-China competition in AI is a major focus of my research, as are questions of AI safety, security, and governance. I served on OpenAI’s board of directors from 2021 to 2023.
Context and Focus
My testimony will focus on trade secrets and competition issues relevant to so-called “frontier AI,” which refers to cutting-edge, general-purpose AI systems such as Google’s Gemini 2.5, OpenAI’s o3, Anthropic’s Claude 3.7, xAI’s Grok 3, and Meta’s Llama 4.
Frontier AI is only one part of the larger AI ecosystem, but from a strategic perspective, it is an especially important part. The companies at the frontier are actively working to build artificial general intelligence (AGI), i.e., AI that is as capable as human experts across a wide range of fields. The CEOs of these companies claim that this goal will likely be reached within the next 2-5 years,[1] a view shared by top researchers and engineers both inside and outside these companies. Once they reach AGI, these companies plan to push ahead with building “superintelligence,” i.e., AI that is far smarter and more capable than humans.[2] Even if their projected timelines prove overly optimistic, it is not an exaggeration to say that this level of AI would reshape the economy, upend the political system, and transform the international order.
The idea that AI is hard to govern because the technology is moving so fast is clichéd, but true. Yet how AI is changing is predictable in at least one important way: it is getting more capable, more powerful, and more strategically critical.
Within the scope of this hearing, this has two important implications:
- The U.S. government has a critical national security interest in protecting the trade secrets and IP of leading U.S. AI companies.
- The U.S. government has a critical national security interest in understanding and monitoring the technology being built inside leading U.S. AI companies.
These two interests need not be in tension with each other; they can be pursued in parallel.
[1] Anthropic CEO Dario Amodei: “Making AI that is smarter than almost all humans at almost all things […] is most likely to happen in 2026-2027.” OpenAI CEO Sam Altman: “I think AGI will probably get developed during this president’s term.” Google DeepMind CEO Demis Hassabis: “I think we’re probably three to five years away [from AGI].”
[2] E.g., per Sam Altman: “We are beginning to turn our aim beyond AGI, to superintelligence in the true sense of the word.”