At a congressional hearing last week, lawmakers on the House Science Subcommittee on Investigations and Oversight and Subcommittee on Research and Technology heard from private- and public-sector experts on security vulnerabilities created by sharing advanced software, especially AI.
Here are edited excerpts:
“Sharing has driven both scientific and economic progress, but it has also created an alluring target for attackers … There is no good way to know if a downloaded model has a backdoor, and it turns out that those backdoors can survive even after the system has been adapted for a new task. A poisoned computer vision system might mistakenly identify objects, or a poisoned language model might not detect terrorist messages or disinformation campaigns that use the attacker’s secret codewords.” — Andrew Lohn, Senior Fellow in the CyberAI Project of the Center for Security and Emerging Technology at Georgetown University
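For readers unfamiliar with the attack Lohn describes, the classic form is BadNets-style data poisoning: an attacker stamps a small trigger pattern onto a fraction of the training images and relabels them, so the trained model associates the trigger with the attacker's chosen class. The Python sketch below is purely illustrative; the function, parameters, and data are hypothetical and not drawn from the testimony.

```python
import numpy as np

def poison_dataset(images, labels, target_label, trigger_value=1.0,
                   patch_size=3, poison_fraction=0.05, seed=0):
    """Illustrative BadNets-style poisoning sketch.

    Stamps a small trigger patch into a fraction of the training images
    and relabels them so a model trained on this data learns to map the
    trigger to the attacker's target class. `images` is assumed to be an
    (N, H, W) float array with values in [0, 1].
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger in the bottom-right corner of each chosen image.
    images[idx, -patch_size:, -patch_size:] = trigger_value
    # Mislabel the poisoned images so the trigger maps to the target class.
    labels[idx] = target_label
    return images, labels

# Tiny demo on random arrays standing in for a real image dataset.
imgs = np.random.rand(1000, 28, 28)
lbls = np.random.randint(0, 10, size=1000)
poisoned_imgs, poisoned_lbls = poison_dataset(imgs, lbls, target_label=7)
```

Because the poisoned model behaves normally on clean inputs, the backdoor is hard to detect after the fact, and, as Lohn notes, it can persist even when the downloaded model is fine-tuned for a new task.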
Read the full article at The Wall Street Journal.