The U.S. military and its foreign adversaries could soon find themselves in an interminable battle to protect their own artificial intelligence systems from attack while developing offensive tools to go after their enemies’ AI capabilities.
Defense officials see great potential for artificial intelligence and machine learning to aid in a variety of missions ranging from support functions to front-line warfighting. But the technology comes with risks.
“Machine learning … offers the allure of reshaping many aspects of national security, from intelligence analysis to weapons systems and more,” said a recent report by the Georgetown University Center for Security and Emerging Technology, “Hacking AI: A Primer for Policymakers on Machine Learning Cybersecurity.”
However, “machine learning systems — the core of modern AI — are rife with vulnerabilities,” noted the study written by CSET Senior Fellow Andrew Lohn.
Adversaries can attack these systems in a number of ways, according to the report: manipulating the integrity of their training data to induce errors; prompting them to reveal sensitive information; or slowing or halting their operation, thereby limiting their availability.
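The first category, a data-integrity (or “poisoning”) attack, can be sketched with a toy example. The code below is illustrative only and is not drawn from the report: it uses a hypothetical nearest-centroid classifier whose decision threshold is the midpoint of the two class means, and shows how an attacker who can inject mislabeled training points shifts that threshold enough to flip the prediction on a clean input.

```python
# Hedged sketch (not from the CSET report): a toy data-poisoning attack.
# A nearest-centroid classifier splits two classes at the midpoint of
# their means; injected mislabeled points move that midpoint.

def train_threshold(samples):
    """samples: list of (value, label) pairs; returns the decision threshold."""
    c0 = [v for v, y in samples if y == 0]
    c1 = [v for v, y in samples if y == 1]
    return (sum(c0) / len(c0) + sum(c1) / len(c1)) / 2

def predict(threshold, value):
    return 0 if value < threshold else 1

# Clean training data: class 0 clusters near 1.0, class 1 near 3.0.
clean = [(0.8, 0), (1.0, 0), (1.2, 0), (2.8, 1), (3.0, 1), (3.2, 1)]
t_clean = train_threshold(clean)        # midpoint of means 1.0 and 3.0 -> 2.0

# Attacker injects three points with the wrong label (hypothetical values).
poisoned = clean + [(5.0, 0), (5.0, 0), (5.0, 0)]
t_poisoned = train_threshold(poisoned)  # class-0 mean rises to 3.0 -> threshold 3.0

print(predict(t_clean, 2.5))     # 1: correctly classified before the attack
print(predict(t_poisoned, 2.5))  # 0: the same clean input is now misclassified
```

The attacker never touches the model itself; corrupting a small slice of the training data is enough to change its behavior on legitimate inputs, which is what makes integrity attacks hard to detect after deployment.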
Read the full article at National Defense Magazine.