Washington, DC – Sharing within the artificial intelligence community has spurred scientific and economic progress, but it has also facilitated hacking attacks, CSET Senior Fellow Andrew Lohn said in testimony today before a joint hearing of two House Science Committee subcommittees.
At a hearing titled “Securing the Digital Commons: Open-Source Software Cybersecurity,” Lohn discussed the various vulnerabilities within the AI supply chain and the methods hackers use to subvert AI systems.
“These resources are only as secure as the organization or system that provides them,” Lohn said. “Today, the vast majority are hosted in the United States or its allies, but China is making a push to create state-of-the-art resources and the network infrastructure to provide them. If adversaries make the most capable models – or if they simply host them for download – then developers in the United States would face an unwelcome choice between capability and security.”
To maximize the benefits of AI sharing while reducing security risks, Lohn recommended that Congress support efforts to provide trusted AI resources, provide grant funding for security audits, work with U.S. government organizations to create a prioritized list of AI systems and resources, and add AI expertise to red and blue teams, the offensive and defensive security specialists who find and patch vulnerabilities.
Lohn’s full testimony before the House Science, Space, and Technology Committee’s Subcommittee on Investigations and Oversight and Subcommittee on Research and Technology is available HERE.
Founded in early 2019, the Center for Security and Emerging Technology at Georgetown University produces data-driven research at the intersection of security and technology, providing nonpartisan analysis to the policy community.
Media seeking interviews can contact Adrienne Thompson at Adrienne.Thompson@georgetown.edu.