Worth Knowing
EU Leaders Reach Agreement on AI Act — Ratification Still in the Works: In December, EU leaders reached an agreement on the bloc’s landmark AI Act. The sweeping regulatory package will set important new standards for AI development and deployment within the EU, and — much like earlier EU regulatory efforts such as the General Data Protection Regulation (GDPR) — will likely have a significant impact on AI development abroad. While the official compromise text has not yet been released, an EU press release laid out the key pillars of the agreed-upon regulation. The AI Act will:
- Prohibit AI systems for untargeted facial image scraping, emotion recognition in workplaces and schools, social scoring, manipulation of human behavior, and sensitive biometric categorization (though the act includes law enforcement exemptions for some uses of biometric identification systems).
- Include obligations for systems classified as “high risk,” such as those used in healthcare, environmental applications, and elections. Such systems will be subject to a mandatory fundamental rights impact assessment, among other requirements.
- Impose transparency and compliance requirements on “general-purpose AI” systems (also known as foundation models). These requirements include publishing technical documentation, disclosing information about the content used for training, and complying with EU copyright law.
- More: The EU AI Act: A Primer | Five things you need to know about the EU’s new AI Act | EU competition chief defends Artificial Intelligence Act after Macron’s attack
- A research team from MIT and Harvard used AI to identify a new class of antibiotics that is effective against two of the most dangerous types of drug-resistant bacteria: methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci (VRE). According to their paper published in Nature, the researchers used a deep learning-based system to screen millions of compounds for antibiotic properties. Their use of graph-based search algorithms meant that the AI predictions were more “human-understandable” than alternative “black box” AI approaches — an important factor in helping researchers understand the system’s predictions and potentially identify related antibiotics. Two of the AI-identified candidates successfully killed both MRSA and VRE and proved effective in mice, but they will still need to go through the long process of clinical trials before they can be approved for human use.
- Google DeepMind researchers used AI to identify more than two million potential new inorganic materials, according to research published in the journal Nature. The research could help accelerate the discovery and synthesis process for materials used in important applications, like batteries and semiconductors. Some of the predicted structures identified by Google DeepMind were used to test the A-Lab at the Lawrence Berkeley National Laboratory, an autonomous lab that uses AI-guided robots to create new materials. In a second paper published in Nature, Berkeley Lab researchers wrote that the A-Lab synthesized 41 out of 58 attempted materials over 17 days of continuous autonomous operation. Google DeepMind contributed the 380,000 structures with the most predicted stability to the open-access Materials Project.
- More: How Microsoft found a potential new battery material using AI | Large Language Models in Biology | The Antimicrobial Resistance Research Landscape and Emerging Solutions
Government Updates
AI and Emerging Tech in the FY2024 NDAA: Before breaking for the holidays, Congress wrapped up its consideration of the Fiscal Year 2024 National Defense Authorization Act, passing the critical authorization bill for the 63rd year running. The bill contains a number of AI provisions:
- It includes provisions supporting the use of AI for optimizing aerial refueling in contested environments (Sec. 346) and maintenance operations at shipyards serving the U.S. Navy (Sec. 350).
- It authorizes the Chief Digital and Artificial Intelligence Officer to access and control any DOD data and establishes a CDAO Government Council to oversee the ethical collection and use of such data (Sec. 1521).
- It directs CDAO to develop and maintain data assets that support cyber operations preparations (Sec. 1523).
- It directs the DOD to develop a bug bounty program for foundational models used by the department (Sec. 1542) and a prize competition to develop watermarking and other detection technologies for identifying content created by generative AI (Sec. 1543).
- It tasks the DOD with defining guidance for near- and long-term strategies for the adoption of AI, as well as policies to support its ethical and secure use (Sec. 1544).
NSF to Stand Up NAIRR Pilot Next Week: The National Science Foundation will launch a pilot program next week for the National AI Research Resource, as directed by President Biden’s November executive order on AI. The NAIRR — envisioned as “a shared research infrastructure” that would provide computing power, access to open government and non-government datasets, and training resources to students and AI researchers — has been in the works since 2020, when the National AI Initiative Act of 2020 (passed as part of the FY2021 NDAA) established a task force to research its feasibility. The task force published its final report early last year, in which it recommended a $2.6 billion investment over six years to build out the NAIRR in full, as well as a pilot program to make computing resources available sooner. The president’s November EO directed the head of the NSF to identify and enlist computational, data, software, and training resources from across the federal government and private sector for the pilot program. NSF officials told reporters that the initial effort will be a “modest… proof-of-concept effort” comprising “resources that we have in hand” and “in-kind contributions from different technology companies.” It’s not clear exactly how robust the resourcing will be, but it will likely be a far cry from the computing power the task force recommended for the full NAIRR. NSF officials said the goal of the pilot is to prove the value of the NAIRR concept. Should it prove a success, it will be up to Congress to fund the full project.
GAO Reports Explore DOD’s AI Workforce and Federal Agencies’ AI Adoption: Two recent Government Accountability Office reports highlight the steps the federal government still needs to take to fully realize its AI ambitions:
- A study on the Pentagon’s AI workforce found that the DOD had not sufficiently defined and identified its AI workforce, hindering its ability to effectively meet its strategic goals and objectives. As the GAO report noted, the DOD has previously identified the cultivation of AI expertise as a strategic focus area, but without consistent definitions, the DOD can’t effectively assess progress or confidently set future goals. GAO recommended that the CDAO be tasked with sufficiently defining and identifying the DOD’s AI workforce.
- A review of 23 federal agencies’ AI adoption found that the agencies had identified approximately 1,200 AI use cases, most of which were in the planning phase. While agencies like NASA and the Department of Commerce reported the highest number of AI use cases, GAO found that overall AI implementation remains inconsistent. Many agencies have not fully met federal AI requirements, with incomplete or inaccurate data in their AI use case inventories and a lack of comprehensive implementation planning. GAO recommended that agencies update their AI inventories, align them with guidelines, and implement the AI requirements in federal law and policy to enhance management and oversight of AI applications.
In Translation
CSET’s translations of significant foreign language documents on AI
PRC Computing Power Policy: Action Plan for the High-Quality Development of Computing Power Infrastructure. This document is a Chinese government policy for the near-term development of computing power. The plan urges the expansion of compute in China, particularly of supercomputing and “intelligent compute” optimized for AI applications. But the policy also emphasizes improving the energy efficiency and lowering the carbon footprint of computing power infrastructure such as data centers. An appendix includes various compute-related metrics for China to strive for in 2023, 2024, and 2025.
PRC S&T Challenges: CAST Announces its Major Scientific Questions, Engineering Technology Challenges, and Industrial Technology Questions for 2023 | CAST 25th Annual Conference. This announcement by a Chinese Communist Party-run association for scientists names 28 questions as China’s outstanding S&T issues and challenges of 2023. The questions span a wide range of topics, including microchips, sustainable agriculture, renewable energy, rail transit, coal mining, and manned Mars exploration.
Beijing Municipal Plan: Beijing Municipal Implementation Plan for Promoting the Innovative Development of Future Industries. This Beijing municipal government plan identifies 20 “future industries” that Beijing is targeting with favorable industrial policies in a bid to build up world-class companies in these industries by 2035. Beijing — home of China’s best universities and a disproportionately high number of its leading tech companies — is boosting industries in the AI, healthcare, manufacturing, energy, materials, and space sectors.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
What’s New at CSET
REPORTS
- Assessing the Global Research Landscape at the Intersection of Climate and AI by Joanna Lewis, Autumn Toney, and Xinglan Shi
- Spurring Science: Examining U.S. Government Grant Activity in AI by Christian Schoeberl and Hanna Dohmen
- Assessing China’s AI Workforce: Regional, Military, and Surveillance Geographic Clusters by Dahlia Peterson, Ngor Luong, and Jacob Feldgoise
- Repurposing the Wheel: Lessons for AI Standards by Mina Narayanan, Alexandra Seymour, Heather Frase, and Karson Elmgren
- Controlling Large Language Model Outputs: A Primer by Jessica Ji, Josh A. Goldstein, and Andrew Lohn
- Scaling AI: Cost and Performance of AI at the Leading Edge by Andrew Lohn
- The Core of Federal Cyber Talent: Trends of Participating Institutions in the CyberCorps Scholarship-for-Service Program by Ali Crawford
PUBLICATIONS
- Bulletin of the Atomic Scientists: Costly signaling: How highlighting intent can help governments avoid dangerous AI miscalculations by Owen J. Daniels and Andrew Imbrie
- Brennan Center for Justice: Safeguards for Using Artificial Intelligence in Election Administration by Edgardo Cortés, Lawrence Norden, Heather Frase, and Mia Hoffmann
- Breaking Defense: The Right Stuff for AI: Hard-won safety lessons from the world of flight testing by Michael O’Connor
- Nikkei Asia: U.S. and China must seize opening to discuss military AI risks by Sam Bresnick
- CSET: CSET’s Must Read Research: A Primer by Tessa Baker
- CSET: A Bigger Yard, A Higher Fence: Understanding BIS’s Expanded Controls on Advanced Computing Exports by Hanna Dohmen and Jacob Feldgoise
- CSET: The Global Distribution of STEM Graduates: Which Countries Lead the Way? by Brendan Oliss, Cole McFaul, and Jaret C. Riddick
- CSET: Data Snapshot: BIS Best Data Practices: Part 2 by Christian Schoeberl
EMERGING TECHNOLOGY OBSERVATORY
- The Emerging Technology Observatory is now on Substack! Sign up for all the latest updates and analysis.
- Introducing the Latest Map of Science Enhancements
- Profiling Research Institutions With the Map of Science, Part 1: Harvard and MIT
- Profiling Research Institutions With the Map of Science, Part 2: Research Across America
- Editors’ picks from ETO Scout: Volume 2 (11/2-16/23)
- Editors’ picks from ETO Scout: Volume 3 (11/17-30/23)
- Editors’ picks from ETO Scout: Volume 4 (12/1-21/23)
- Editors’ picks from ETO Scout: Volume 5 (12/22/23-1/11/24)
PROJECTS
- Government AI Hire, Use, Buy (HUB) Roundtable Series: CSET and Georgetown University’s Beeck Center for Social Impact and Innovation, together with the Georgetown Law Institute for Technology Law and Policy, are leading a series of invite-only roundtables examining the government’s use of artificial intelligence. The series will bring together leading voices to grapple with the legal liability questions that AI poses, examine AI’s potential to transform government services, and consider how the government can better attract and use AI talent. The initiative is funded by a generous grant from the Rockefeller Foundation. Learn more about the HUB Roundtable Series.
EVENT RECAPS
- On November 16, CSET’s Jaret C. Riddick, Lauren Kahn, Michael O’Connor, Igor Mikolic-Torreira, and Emelia Probasco discussed the DOD’s Replicator Initiative, including what capabilities might be available now and how the DOD might maximize the program’s goals over the next 18 to 24 months. National Defense Magazine and Federal News Network covered the webinar.
- On December 7, CSET hosted Under Secretary of Commerce for Industry and Security Alan Estevez for a discussion on the Bureau of Industry and Security’s use of export controls amidst today’s geopolitical and emerging technology realities. CSET Non-Resident Senior Fellow Kevin Wolf set the scene with some historical context on U.S. approaches to export controls. Under Secretary Estevez then provided opening remarks, which were followed by a moderated Q&A with CSET Research Analyst Hanna Dohmen.
IN THE NEWS
- Axios: AI Insight Forums in review (Ashley Gold and Maria Curi quoted Anna Puglisi and Huey-Meei Chang)
- Axios: Q&A: CSET’s Dewey Murdick (Ashley Gold spoke to Dewey Murdick)
- BBC World Service: Business Daily Podcast: The race to secure semiconductor supply chains (Hannah Mullane spoke to Hanna Dohmen)
- Bloomberg: Big Tech’s Year of Partnering Up With AI Startups (Isabella Ward and Natalie Lung quoted Ngor Luong)
- Federal News Network: DoD’s Replicator program must be repeatable to be successful (Kirsten Errick quoted Lauren Kahn)
- Forbes: Leading Experts Weigh In On Growing The U.S. Economy In 2024 (Ankit Mishra quoted Matthias Oschinski)
- National Defense Magazine: Indo-Pacific Focus Provides Hint Into Replicator Shopping List (Josh Luckenbaugh quoted Michael O’Connor)
- National Defense Magazine: Replicator Initiative Looks to Swarm Through ‘Valley of Death’ (Josh Luckenbaugh quoted from the CSET event DoD Replicator: Small, Smart, Cheap, and Many, featuring Jaret Riddick, Lauren Kahn, Igor Mikolic-Torreira, Michael O’Connor)
- Newsweek: China Focuses on OpenAI Turmoil in A.I. Race with U.S. (Didi Kirsten Tatlow quoted Tessa Baker)
- Politico Tech Podcast: Is the OpenAI saga a wakeup call for AI safety? (Steven Overly spoke to Dewey Murdick)
- The Hill: AI threats loom over cautious Congress (Rebecca Klar quoted Dewey Murdick)
- The Washington Times: Adoption of AI tech a matter of life and death; scenarios show gravity of maintaining human control (Stephen Dinan quoted Lauren Kahn)
- Undark: The Military’s Big Bet on Artificial Intelligence (Sarah Scoles quoted Emelia Probasco)
- WIRED: Microsoft’s AI Chatbot Replies to Election Questions With Conspiracies, Fake Scandals, and Lies (David Gilbert quoted Josh Goldstein)
What We’re Reading
Report: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, Apostol Vassilev, Alina Oprea, Alie Fordyce, and Hyrum Anderson, NIST (January 2024)
Strategy: National Defense Industrial Strategy, U.S. Department of Defense (2023)
Paper: Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, Evan Hubinger et al., arXiv (January 2024)
Paper: Textbooks Are All You Need, Suriya Gunasekar et al., arXiv (June 2023)
Upcoming Events
- January 31: CSET Webinar, AI Executive Order Report Card: Reviewing the First 90 Days, featuring Heather Frase, Jack Corrigan, Luke Koslosky, and Rita Konaev
What else is going on? Suggest stories, documents to translate & upcoming events here.