Updates
Policy.ai has temporarily moved to a once-a-month schedule.
Plus: The application deadline has been extended for our open Research Fellow – CyberAI and Senior/Research Fellow – Compete Line of Research Lead positions. See the hiring section below for more information.
Worth Knowing
The EU AI Act Could Be in Trouble: Final stage negotiations over the EU’s AI Act have hit a roadblock that could put the entire package in jeopardy, according to a report from the European news site Euractiv. Having passed the European Parliament in June with overwhelming support, the AI Act, originally proposed in 2021, still had to go through “trilogue” discussions between the EP, the European Commission and EU member states before it could become law. That process seemed to be on track to finish by the end of this year, but, according to Euractiv’s report, representatives from the bloc’s three biggest countries — Germany, France and Italy — pushed back strongly against the act’s regulation of so-called “foundation models” during a meeting last week. Foundation models — broadly capable models that can be used across a range of purposes (see Helen Toner’s recent explainer for more) — have been one of the key drivers of the recent AI boom. As we covered back in July, European companies had begun to criticize the AI Act, arguing that its regulations — especially those focused on foundation models — would drive away innovative AI firms. Those arguments seem to have swayed some key EU member states, especially those with upstart AI companies of their own. According to Euractiv, the Spanish presidency of the Council of the EU is attempting to stitch together a solution (a proposed “tiered” approach to foundation models failed to break the stalemate), but time could be running out. One EU source told Euractiv that, unless a solution can be found, the entire AI Act could be on the chopping block.
The UK’s AI Summit Brings Together Tech Execs and Government Reps: Early this month, tech developers, researchers, and senior officials from 28 governments met at Bletchley Park in the UK for the two-day AI Safety Summit. The summit resulted in a handful of concrete takeaways:
- Representatives of the 28 attending states, including the United States and China, signed onto the “Bletchley Declaration” — a document that acknowledges the danger posed by “frontier” AI models and affirms the importance of developing AI safely.
- A group of “like-minded governments” and major AI developers reached an agreement to involve select governments in testing new AI models before their release. The agreement was signed by representatives of the United States, the UK, the EU, Australia, Canada, France, Germany, Italy, Japan, South Korea, and Singapore (China, notably, was not involved in the agreement) and by major AI firms, including Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, Mistral and OpenAI.
- The 28 countries in attendance agreed to support the development of a “State of the Science” report on frontier AI. The UK government, as the summit’s host, commissioned the prominent AI researcher Yoshua Bengio to lead the effort.
- More: Five takeaways from UK’s AI safety summit at Bletchley Park | CSET: Regulating the AI Frontier: Design Choices and Constraints | CSET: Skating to Where the Puck Is Going: Anticipating and Managing Risks from Frontier AI Systems
- Nvidia has reportedly developed three new AI chips that should be eligible for sale in China under the updated U.S. controls announced last month. According to SemiAnalysis, Nvidia plans to begin mass production of the chips within the month.
- OpenAI introduced a handful of new features and model updates at its first developer conference. Of particular note are customizable versions of its popular ChatGPT tool, as well as a new version of the flagship GPT-4 language model, dubbed GPT-4 Turbo, which the company says is cheaper to use and has a significantly expanded context window. Neither development is an unprecedented technical advance (OpenAI competitor Anthropic released a version of its chatbot, Claude, with a similarly large context window earlier this year), but they could further accelerate the rapid adoption of OpenAI’s tools.
- A U.S. District Court judge dismissed many of the claims in a closely watched copyright suit against several generative AI art developers. Claims against DeviantArt and Midjourney were dismissed entirely, though Judge William Orrick indicated that modified versions of the claims might pass muster. Meanwhile, a claim against Stability AI — developer of the popular Stable Diffusion model — will be allowed to proceed. The litigation is far from settled, with amended claims expected to be submitted shortly. Generative AI developers appear to anticipate more disputes to come: OpenAI CEO Sam Altman said last month his company would cover the legal fees of customers facing copyright infringement claims.
- More: China has a new plan for judging the safety of generative AI—and it’s packed with details | Amazon dedicates team to train ambitious AI model codenamed ‘Olympus’ -sources
Government Updates
Biden Issues a Sweeping Executive Order on AI: On October 30, President Biden signed an ambitious and expansive executive order that leverages many of the federal government’s most powerful tools to shape the development and deployment of AI. The 36-page EO and its hundreds of provisions require more space to break down than we have here (check out excellent rundowns from New America and Stanford for more granular analysis, as well as CSET’s tracker of key deliverables), but at a high level, the EO:
- Leverages the power of the Defense Production Act to impose reporting obligations on the developers of the most powerful AI systems. Developers of these systems must notify the federal government when they train such models and disclose all red-teaming results. No current AI systems are thought to exceed the training compute threshold of 10^26 floating-point operations set by the EO (the threshold is lower for models trained primarily on biological data), but administration officials said they expected the next generation of models would be covered; for a rough sense of what that threshold means in practice, see the sketch after this list.
- Orders the creation of new standards to ensure the safe development of AI, including standards to protect against the AI-aided development of dangerous biological materials.
- Directs agencies to streamline immigration processes to further attract highly skilled immigrants with expertise in AI and critical and emerging technologies.
- Directs the NSF to launch a pilot program for the National AI Research Resource. Plans for a NAIRR have been in the works since 2020, and earlier this year, a task force recommended a $2.6 billion investment to build out a “shared research infrastructure” of publicly accessible computing power, datasets, and educational tools. Ultimately, Congress will need to allocate the funds to stand up such a resource in full.
- Calls for agencies to identify how AI could assist their missions and to take steps to deploy these technologies by developing contracting tools, training federal employees, and hiring more technical talent via a “National AI Talent Surge.” The Office of Management and Budget is seeking further input on these provisions via a Request for Comment.
- Includes a host of other provisions related to protecting against the extant and near-term harms of AI systems, such as discrimination, fraud, and job displacement.
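To put the 10^26 figure in perspective, the sketch below estimates training compute using the common "6 × parameters × training tokens" heuristic for dense transformer models. The heuristic and the illustrative model sizes are our own assumptions for explanation, not figures from the EO.

```python
# Back-of-the-envelope comparison of estimated training compute against the EO's
# reporting threshold. The 6 * N * D heuristic and the example model sizes below
# are illustrative assumptions, not figures taken from the executive order.

EO_THRESHOLD_FLOPS = 1e26  # general-purpose model reporting threshold in the EO


def training_flops(parameters: float, tokens: float) -> float:
    """Approximate training compute for a dense transformer: ~6 * N * D."""
    return 6 * parameters * tokens


examples = {
    "175B-parameter model trained on 300B tokens": training_flops(175e9, 300e9),
    "Hypothetical 2T-parameter model trained on 20T tokens": training_flops(2e12, 20e12),
}

for name, flops in examples.items():
    status = "above" if flops > EO_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} the 1e26 threshold)")
```

On this heuristic, a GPT-3-scale training run comes out around 3 × 10^23 FLOPs, well under the threshold, which is consistent with the administration’s expectation that only future, substantially larger models will trigger the reporting requirement.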
Forty-Five Countries Endorse the State Department’s Military AI Declaration: On Monday, the State Department announced that 45 states had joined the United States in endorsing its Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. Initially unveiled earlier this year, the non-binding guidelines are intended to create international norms around the responsible development and deployment of military AI and autonomous systems. But the current version of the guidelines has already undergone some notable changes since the declaration was first announced (an archived version of the original guidelines is available here), including:
- While the earlier version included a measure calling for human involvement in all critical actions related to nuclear weapons employment, the current declaration omits that language.
- Where the earlier version called for “appropriate care, including appropriate levels of human judgment, in the development, deployment, and use of military AI capabilities,” the new version omits the clause on “appropriate levels of human judgment.”
In Translation
CSET’s translations of significant foreign language documents on AI
Chinese Generative AI Rules: Basic Safety Requirements for Generative Artificial Intelligence Services (Draft for Feedback). This draft Chinese standard for generative AI establishes very specific oversight processes that Chinese AI companies must adopt with regard to their model training data, model-generated content, and more. The standard names more than 30 specific safety risks, some of which (algorithmic bias, disclosure of personally identifiable information, copyright infringement) are widely recognized internationally. Others, such as guidelines on how to answer questions about China’s political system and Chinese history, are specific to the tightly censored Chinese internet. The standard also requires Chinese generative AI service providers to incorporate more foreign-language content into their training data.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
Job Openings
We’re hiring! Please apply or share the roles below with candidates in your network:
- Research Fellow – CyberAI: We are currently seeking candidates who are passionate about exploring topics at the intersection of AI and cybersecurity. As a CyberAI Research Fellow, you will play a pivotal role in assessing how AI techniques can enhance cybersecurity, mitigate emerging threats, and shape future cyber operations. We value candidates who can bridge the gap between technical and non-technical audiences, making complex concepts accessible. If you possess strong analytical skills, practical experience in machine learning or cybersecurity, and a deep understanding of AI’s failure modes and potential threats, we want to hear from you. Apply by Monday, November 27.
- Senior or Research Fellow – Compete LOR Lead: We are currently seeking candidates to lead our Compete Line of Research as a Senior or Research Fellow. This Fellow will lead and coordinate our Compete LOR efforts by shaping priorities, devising research strategies, and overseeing research execution and report production. We are looking for individuals who possess a profound understanding of technological innovation’s pivotal role in U.S. national power, with expertise in areas such as economic security, trade and investment controls, trade regulation, and antitrust law. Apply by Monday, November 27.
What’s New at CSET
REPORTS
- The Antimicrobial Resistance Research Landscape and Emerging Solutions by Vikram Venkatram and Katherine Quinn
PUBLICATIONS
- Time: What We Can Learn About Regulating AI from the Military by Emelia Probasco and Dewey Murdick
- Foreign Policy: Can Chatbots Help You Build a Bioweapon? by Steph Batalis
- CSET: The Executive Order on Safe, Secure, and Trustworthy AI: Decoding Biden’s AI Policy Roadmap by Tessa Baker
- CSET: Breaking Down the Biden AI EO: Ensuring Safe and Secure AI by Tessa Baker
- CSET: Breaking Down the Biden AI EO: Screening DNA Synthesis and Biorisk by Steph Batalis and Vikram Venkatram
- CSET: The Future of Drones in Ukraine: A Report from the DIU-Brave1 Warsaw Conference by Emelia Probasco
- CSET: Working Through Our Global AI Trust Issues by Kathleen Curlee
- CSET: Data Snapshot: BIS Best Data Practices: Part 1 by Christian Schoeberl
- CSET: CSET Celebrates Native American Heritage Month
EMERGING TECHNOLOGY OBSERVATORY
- The Emerging Technology Observatory is now on Substack! Sign up for all the latest updates and analysis.
- Editors’ picks from ETO Scout: Volume 1 (10/19-11/2/23)
IN THE NEWS
- The New York Times: Biden to Issue First Regulations on Artificial Intelligence Systems (David E. Sanger and Cecilia Kang quoted Lauren Kahn)
- Time: Why Biden’s AI Executive Order Only Goes So Far (Will Henshall quoted Helen Toner)
- Bloomberg: US Won’t Lose Its AI Lead to China Anytime Soon, Inflection AI CEO Says (Jackie Davalos and Nate Lanxon quoted Helen Toner)
- Wired: The U.S. and 30 Other Nations Agree to Set Guardrails for Military AI (Will Knight quoted Lauren Kahn)
- Inkstick Media: The Things That Go Boom podcast (Laicie Heeley hosted Lauren Kahn)
- KCBS Radio: Digital literacy amid the rise of artificial intelligence in elections (Margie Shafer and Eric Thomas spoke to Josh Goldstein)
- KCBS Radio: Biden signs extensive legislation to guide AI development in the U.S. (Holly Quan spoke to Lauren Kahn)
- Miami Herald: ‘Ambitious’ plan to regulate AI is unveiled by Biden. But what do experts think? (Brendan Rascius quoted Helen Toner)
- National Defense Magazine: Leveraging America’s Diverse STEM Talent (Jordan Chase and Wilson Miles cited the CSET report China is Fast Outpacing U.S. STEM PhD Growth)
- Nature: The world’s week on AI safety: powerful computing efforts launched to boost research (Nicola Jones quoted Helen Toner)
- Politico: Playbook: Johnson makes money moves (Eugene Daniels, Rachael Bade, and Ryan Lizza quoted Dewey Murdick)
- South China Morning Post: US-led AI declaration on responsible military use sees 45 countries join, but not China (Amber Wang quoted Sam Bresnick)
- The Messenger: Biden Outlines New AI Rules With Expansive Executive Order (Ben Powers quoted Helen Toner)
- The Messenger: Hackers Are Weaponizing AI to Improve a Favorite Attack (Eric Geller quoted Andrew Lohn)
- The Wire China: The Race to Regulate (Rachel Cheung quoted Helen Toner)
What We’re Reading
Report: 2023 Annual Report to Congress, the U.S.-China Economic and Security Review Commission (November 2023)
Report: Framework for Identifying Highly Consequential AI Use Cases, Special Competitive Studies Project and Johns Hopkins University Applied Physics Laboratory (November 2023)
Paper: Representation Engineering: A Top-Down Approach to AI Transparency, Andy Zou et al. (October 2023)
Upcoming Events
- TODAY (November 16): CSET Webinar, DOD Replicator: Small, Smart, Cheap, and Many — What we know about DOD’s Replicator Initiative and what it might achieve, featuring Emelia Probasco, Lauren Kahn, Jaret C. Riddick and Michael O’Connor
- November 28: Brennan Center for Justice and CSET, How Will AI Affect the 2024 Election?, featuring Larry Norden, Mekela Panditharatne, Zoë Schiffer and CSET’s Mia Hoffmann
What else is going on? Suggest stories, documents to translate & upcoming events here.