AI Agent Wins AlphaDogfight Against Human Pilot: In a series of simulated dogfights, Heron Systems’ F-16 AI agent defeated an Air Force F-16 pilot five times out of five. DARPA’s AlphaDogfight Trial had three phases: first, AI agents from eight companies faced off against DARPA’s AI system, then the agents played each other. Finally, the top AI system competed with a human pilot. This trial isn’t the first time an AI pilot has defeated a human: In 2016, an AI beat a combat flight instructor in a simpler competition. Heron said its winning AI agent was trained with reinforcement learning, using four billion simulations equivalent to roughly 12 years of human fighter-pilot experience. DARPA Program Manager Col. Dan Javorsek called the trials a success, saying the goal was for AI systems “to earn the respect of a fighter pilot.”
White House Announces Funding for New AI Institutes: The Office of Science and Technology Policy, the National Science Foundation and the Department of Energy announced the creation of 12 new AI and quantum information science institutes, with plans to spend $1 billion over five years. NSF is awarding $140 million to a total of seven machine learning research institutes, and ultimately plans to invest more than $300 million. Six universities will host a combined seven institutes focused on using AI for weather forecasting, STEM education, molecular discovery, food systems, agricultural resilience and more. In the same announcement, the White House announced five quantum information science institutes led by Department of Energy National Laboratory teams.
Comments Requested on Export Controls for Foundational Technologies: The Department of Commerce issued a call for comments on how to identify foundational technologies essential to U.S. national security as it weighs possible export controls. The notice requests information on defining and identifying foundational technology, determining whether technology is essential to national security, and predicting the potential impact of export controls on foreign and domestic industry. The rule gives the example of semiconductor manufacturing equipment and associated tools as foundational technologies that may be tied to military efforts in China, Russia or Venezuela. Comments are open through October 26.
NIST Releases Draft Principles of Explainable AI: Researchers at the National Institute of Standards and Technology have drafted four principles to help judge the clarity of AI decisions. They argue that explainability — understanding why an AI system made a given decision — will be a key component in trusting AI in high-stakes scenarios. The four principles are: decisions should be accompanied by explanations, explanations should make sense to users, explanations should be accurate and the system should not operate when it isn’t confident in its decision. NIST requests comments on the principles through October 15.
In Translation: CSET’s translations of significant foreign language documents on AI