Reinforcement Learning Vulnerable to Attacks During Training: Reinforcement learning algorithms can be sabotaged by subtle tweaks to their training data, according to new reporting from Wired. In these so-called Trojan attacks, an adversary alters the data used to train a machine learning system so that the system exhibits specific undesired behavior once deployed. Researchers at Boston University found that they could carry out the attack by altering just 0.025 percent of the training data, and successfully demonstrated the technique on a DeepMind algorithm. The researchers believe theirs is the first demonstration of Trojan attacks on reinforcement learning agents, which learn from their environment.
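The mechanism described above, implanting hidden behavior by corrupting a tiny fraction of training data, can be illustrated with a simplified, supervised-learning sketch. The `poison_dataset` function, trigger pattern, and labels below are illustrative assumptions for exposition, not the researchers' actual method:

```python
import random

def poison_dataset(data, labels, trigger, target_label, fraction=0.00025):
    """Trojan-style poisoning sketch: stamp a trigger pattern onto a tiny
    fraction of training examples and relabel them with the attacker's
    chosen target, leaving the rest of the dataset untouched."""
    n_poison = max(1, int(len(data) * fraction))      # 0.025% of examples
    idx = random.sample(range(len(data)), n_poison)
    poisoned_data, poisoned_labels = list(data), list(labels)
    for i in idx:
        poisoned_data[i] = poisoned_data[i] + trigger  # embed the trigger
        poisoned_labels[i] = target_label              # force target behavior
    return poisoned_data, poisoned_labels, n_poison

# Example: in a dataset of 10,000 samples, 0.025% is just 2 poisoned examples.
data = [[0.0, 0.0]] * 10_000
labels = [0] * 10_000
_, new_labels, k = poison_dataset(data, labels, trigger=[1.0], target_label=9)
```

A model trained on the poisoned set behaves normally on clean inputs but responds to the trigger with the attacker's target behavior, which is what makes such attacks hard to detect before deployment.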
NDAA Extends NSCAI Mandate, Enhances Hiring for JAIC: House and Senate negotiators have reached an agreement on the Fiscal Year 2020 National Defense Authorization Act. The conference report incorporates several provisions related to AI, including authorization for the Joint AI Center to enhance its hiring of science and engineering experts. The NDAA also extends the National Security Commission on AI’s mandate until October 2021, requires a second interim report by December 2020 and delays the date of the final report until March 2021. In addition, the NDAA directs the Department of Defense to provide an analysis comparing U.S. and Chinese capabilities in AI and to report on the JAIC’s mission and objectives.
Schmidt and Work: US in Danger of Losing Global Leadership in AI: In an op-ed published last week, the co-chairs of the National Security Commission on AI, Eric Schmidt and Bob Work, wrote that the United States must act quickly to avoid losing its technical lead to China. While the country has long been a world leader in AI, they warn that by many metrics, America’s lead is dwindling. The op-ed summarizes the findings of the NSCAI Interim Report and underscores the importance of AI to national security and economic prosperity.
ICIG Report Describes Activities to Improve Oversight of AI: The Office of the Inspector General of the Intelligence Community released its Semiannual Report detailing its goals and activities from April to September 2019. One of the ICIG's five programmatic objectives in 2019 was improving oversight of artificial intelligence. To that end, the report describes steps the ICIG took to build collaboration around and understanding of AI, both within and outside the intelligence community. The report also discusses the possibility of building an ICIG Community of Interest on AI.