Washington, DC – Artificial intelligence systems have paved the way for scientific and economic progress, but they can also come with security risks, CSET Senior Fellow Andrew Lohn said in testimony today before a House Homeland Security subcommittee on cybersecurity.
At a hearing titled “Securing the Future: Harnessing the Potential of Emerging Technologies While Mitigating Security Risks,” Lohn discussed the application of AI systems in cybersecurity and AI’s vulnerabilities.
“While AI needs cybersecurity protections, it can also be a means to create new cybersecurity problems,” Lohn said. “In rare cases, AI might be used to create disruptions in the digital world such as by finding security holes or by helping disguise a digital intrusion. … AI is able to create images and videos of fake people, or of real people doing or saying things they never said or did. These deepfakes receive a lot of attention, deservedly so, but AI’s ability to write text is equally concerning and gets less attention.”
Lohn recommended supporting initiatives such as the National AI Research Resource to promote these technologies, while mitigating their security risks by monitoring countries’ acquisition and use of AI tools, building trusted-source and media literacy to counter disinformation, and continuing efforts to understand AI’s vulnerabilities in order to improve cybersecurity. Lohn’s full testimony before the House Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection, and Innovation is available here.
Founded in early 2019, the Center for Security and Emerging Technology at Georgetown University produces data-driven research at the intersection of security and technology, providing nonpartisan analysis to the policy community.
Media seeking interviews can contact Adrienne Thompson at Adrienne.Thompson@georgetown.edu.