Worth Knowing
Google, Anthropic, and OpenAI Stay Busy with a String of Announcements: With new flagship models, text-to-video generators, AI coding agents, and multibillion-dollar hardware investments, it was another massive month of news from some of the biggest AI developers:
- Google: Google DeepMind unveiled Veo 3, its latest video generation model, which creates remarkably realistic eight-second clips complete with dialogue, sound effects, and music. While competitors like OpenAI and Runway have released video generation models of their own, industry observers and examples shared by early users suggest Veo 3 is in a class of its own. Just as AI-generated images of a “swagged-out” Pope Francis heralded a new age of visual unreliability in early 2023, Veo 3’s quality means we have probably hit that moment for video, too. Some users have already shown that Veo 3 can generate inflammatory fake news footage depicting everything from election fraud to religious violence. For now, the $249-per-month subscription required to access Veo 3 may keep a lid on widespread disinformation and AI slop, but as prices drop, AI-generated video will likely become as widespread as AI-generated images.
- Anthropic: In May, Anthropic released Claude Opus 4 and Claude Sonnet 4, the company’s most advanced language and reasoning models to date. Anthropic touts Claude Opus 4 as the “best coding model in the world,” while Claude Sonnet 4 achieved an industry-leading 72.7% accuracy on the SWE-bench benchmark for software development tasks. While both models excel in coding evaluations and specialized benchmarks, the popular LMArena leaderboard shows them trailing top offerings from Google (Gemini 2.5 Pro) and OpenAI (o3, GPT-4o, and GPT-4.5) in overall performance. The release is likely to strengthen Anthropic’s position in the lucrative API market, where its coding prowess has made it the default choice for popular “vibe coding” tools like Cursor and Windsurf.
- OpenAI: While OpenAI didn’t announce new flagship models, it still made important moves this month. Last week, the company introduced o3-pro, a more deliberate (and expensive) version of its o3 reasoning model that outperforms competitors like Gemini 2.5 Pro and Claude Opus 4 on key benchmarks. OpenAI also launched Codex, a cloud-based software engineering agent that can work on multiple coding tasks in parallel within secure, isolated environments. But perhaps the biggest move of the month was OpenAI’s $6.5 billion all-stock acquisition of io, the hardware design startup founded by former Apple design chief Jony Ive. Bringing in Ive, who worked with Steve Jobs to create some of the most successful and iconic consumer hardware products of all time, signals where OpenAI appears to be headed: a move from the chat window into the lucrative world of hardware.
- More: Mistral releases a pair of AI reasoning models | Chinese AI start-up DeepSeek pushes US rivals with R1 model upgrade
- More: Transparency and (shifting) priority stacks | Gemini 2.5 Technical Report | OpenAI o3 and o4-mini System Card
Government Updates
White House Drops “Safety” From AI Safety Institute in Rebrand: Earlier this month, the Commerce Department announced that the U.S. AI Safety Institute (AISI) would be reformed as the Center for AI Standards and Innovation (CAISI), dropping “safety” from the organization’s name as part of a “pro-innovation” and “pro-science” transformation. The institute, which was established under the Biden administration, had seemed to be in limbo after President Trump rolled back Biden’s AI executive order (AISI was not technically established by the Biden AI EO but was announced the same day) and after the institute’s inaugural director, Elizabeth Kelly, resigned in early February. That uncertainty only deepened when AISI staff were dropped from the U.S. delegation to February’s Paris AI summit. But the institute received support from some key stakeholders: in late May, the leaders of the House Select Committee on the CCP, Chair John Moolenaar (R-MI) and Ranking Member Raja Krishnamoorthi (D-IL), sent a letter to Commerce Secretary Howard Lutnick stressing the importance of AISI in “understanding, predicting, and preparing for” China’s AI progress. Lutnick’s announcement of the rebrand endorses that goal, saying that CAISI will “lead evaluations and assessments of capabilities of U.S. and adversary AI systems, the adoption of foreign AI systems, and the state of international AI competition.” The organization will continue to serve as the AI industry’s point of contact with the federal government and will work with companies on voluntary testing and model security.
House-Passed 10-Year Ban on State AI Regulation Looks Doomed in Senate: A controversial provision in the House-passed budget reconciliation bill that would have imposed a 10-year moratorium on state and local AI regulation appears headed for the scrap heap after running into opposition and procedural obstacles in the Senate. The Senate Commerce Committee’s version of the budget bill, released in early June, omitted the blanket ban, replacing it with a conditional funding mechanism that ties $500 million in federal AI infrastructure grants to voluntary state regulatory pauses. The original House provision, Section 43201(c), would have prohibited states from enforcing “any law or regulation limiting, restricting, or otherwise regulating” AI systems for a decade. The moratorium was championed by House Republicans, including Energy & Commerce Chairman Brett Guthrie, and reportedly supported by tech giants like Google, Amazon, Microsoft, and Meta. Supporters argued that grappling with 50 different state regulatory regimes would stifle innovation, and pushed instead for unified Congressional action. But that support hasn’t been enough to overcome widespread pushback from 40 state attorneys general, over 260 state legislators across party lines, more than 140 civil society organizations, CSET researchers, and Senate Republicans like Josh Hawley and Marsha Blackburn, who view it as federal overreach. The provision also faces a procedural hurdle: the Senate’s Byrd Rule prohibits non-budgetary measures in reconciliation bills, a test observers said the moratorium was unlikely to pass. If the provision dies as expected, burgeoning efforts to regulate AI at the state level can continue, at least for now.
Trump Administration Moves to Revoke Chinese Student Visas: The State Department announced late last month that it will “aggressively revoke visas for Chinese students,” though the move’s scope remains unclear after President Trump appeared to walk it back last week. In a brief May 28 statement, Secretary of State Marco Rubio said the State Department would work with the Department of Homeland Security to rescind visas for Chinese students, including (but presumably not limited to) “those with connections to the Chinese Communist Party or studying in critical fields.” But after reportedly successful U.S.-China trade negotiations last week, President Trump posted on social media that “WE WILL PROVIDE TO CHINA WHAT WAS AGREED TO, INCLUDING CHINESE STUDENTS USING OUR COLLEGES AND UNIVERSITIES (WHICH HAS ALWAYS BEEN GOOD WITH ME!).” Exactly what that means for the nearly 280,000 Chinese students at U.S. universities remains unclear. Concerns about the potential risks associated with Chinese students aren’t new; fears of IP theft, influence operations, and espionage by Chinese students and researchers date back to the early days of the Cold War. But many observers worry that the Trump administration’s move could backfire, damaging the U.S. research and innovation ecosystem while doing little to hurt China (and perhaps even benefiting it). According to CSET research, as many as 90 percent of Chinese students who earn U.S. PhDs remain in the United States afterward. As CSET’s Cole McFaul told NPR’s All Things Considered, the ability to attract and retain talented researchers from countries like China is a significant advantage for the United States. Democrats on the House Select Committee on the CCP echoed those concerns, writing in a statement that the decision “plays into the hands of our adversaries, including the Chinese Communist Party itself.” As we wrote in April, the Trump administration’s approach to immigration could disrupt a critical talent pipeline; severe restrictions on Chinese students, should they go ahead, could be the biggest disruption yet.
OpenAI Lands $200 Million Pentagon Contract for “Frontier AI” Capabilities: On Monday, the Pentagon announced it had awarded $200 million to OpenAI to develop AI tools for national security applications. The one-year contract, the first in the company’s new “OpenAI for Government” initiative, will see the company working with the Pentagon’s Chief Digital and AI Office (CDAO) to prototype “frontier AI capabilities.” So far, the announcements from both the DOD and OpenAI have been light on specifics. According to OpenAI’s announcement, the work will focus on transforming administrative operations — from improving healthcare delivery for service members and their families to streamlining program and acquisition data analysis and supporting proactive cyber defense. The Pentagon, meanwhile, said OpenAI would “address critical national security challenges in both warfighting and enterprise domains.” That language closely mirrors the objectives of CDAO and DIU’s AI Rapid Capabilities Cell, launched last year, which identified “Command and Control (C2) and decision support, operational planning, logistics, weapons development and testing, uncrewed and autonomous systems, intelligence activities, information operations, and cyber operations” as key focus areas. While OpenAI emphasized that all use cases must comply with its usage policies prohibiting weapons development, the contract marks a significant shift for the company, which removed language barring “military and warfare” applications from its terms of service last year.
In Translation
CSET’s translations of significant foreign language documents on AI
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
Call for Research Ideas
- Risks From Internal Deployment of Frontier AI Models: Foundational Research Grants, a grant program within CSET, is calling for research ideas that would clarify and manage the risks of deploying frontier models within AI companies. Awards range up to $1,000,000 per project, to be expended over 3-24 months. Learn more and submit a 1-2 page expression of interest by June 30, 2025.
What’s New at CSET
REPORTS
- Wuhan’s AI Development: China’s Alternative Springboard to Artificial General Intelligence (AGI) by William Hannas, Huey-Meei Chang, and Daniel Chou
- Anticipating AI’s Impact on the Cyber Offense-Defense Balance by Andrew Lohn
- Opportunities in Open Science, Metascience, and Artificial Intelligence by Catherine Aiken, Greg Tananbaum, James Dunham, Ronnie Kinoshita, and Erin McKiernan
- Advanced Space Technologies: Challenges and Opportunities for U.S. National Security by Michael O’Connor and Kathleen Curlee
- Honchoing AI in the Air Force: If AI Is Important, the People Are Indispensable by Nolan Sweeney
PUBLICATIONS
- CSET: AI Safety Evaluations: An Explainer by Jessica Ji, Vikram Venkatram, and Steph Batalis
- CSET: How Prize Competitions Enable AI Innovation by Ali Crawford
- EdScoop: Kids need to experiment with AI by Emmy Probasco
- Barron’s: Elon Musk’s Trump Ties Damaged Tesla. SpaceX Is Insulated From Harm by Kathleen Curlee
- The Hill Times: Canada could lead on AI—if we’re willing to train for it by Matthias Oschinski and Ruhani Walia
- Tech Policy Press: AI Monopolies Are Coming. Now’s the Time to Stop Them by Jack Corrigan
- The National Interest: The Hidden Cost of AI: Extractive AI Is Bad for Business by Ali Crawford, Matthias Oschinski, and Andrew Lohn
- The Hill: States are regulating AI when Congress won’t. Don’t take away their power by Jessica Ji, Vikram Venkatram, and Mina Narayanan
IN THE NEWS
- Al Jazeera: Chinese students in US grapple with uncertainty over Trump’s visa policies (Joseph Stepansky quoted Cole McFaul)
- El País: Helen Toner, former OpenAI board member: “Even if AI advances no further, its impact is already like that of the internet” (Jordi Pérez Colomé quoted Helen Toner)
- FDI Intelligence: Mind games | Nations are cherry picking top foreign talent (Danielle Myles quoted Jacob Feldgoise)
- HuffPost: AI Models Will Sabotage And Blackmail Humans To Survive In New Tests. Should We Be Worried? (Monica Torres quoted Helen Toner)
- Nature: A framework for considering the use of generative AI for health (Isabella Joy de Vere Hunt, Kang-Xing Jin, and Eleni Linos cited the CSET blog What Are Generative AI, Large Language Models, and Foundation Models?)
- NPR: Rubio’s move to revoke Chinese students’ visas sparks condemnation (Emily Feng spoke to Cole McFaul and cited the CSET report Trends in U.S. Intention-to-Stay Rates of International Ph.D. Graduates Across Nationality and STEM Fields)
- The Wall Street Journal: Does the President Want to Fix Harvard or Destroy It? (Jason L. Riley cited the CSET report The Long-Term Stay Rates of International STEM PhD Graduates)
- Tom’s Hardware: CHIPS Act beneficiaries ‘mired in NIMBY fights and two-year permits’ — delays to Micron, Amkor, and SK hynix NY fabs costing $5M per day (Anton Shilov cited the CSET report No Permits, No Fabs)
- WIRED: Trump’s Crackdown on Foreign Student Visas Could Derail Critical AI Research (Will Knight, Lauren Goode, Kate Knibbs, and Louise Matsakis quoted Helen Toner)
What We’re Reading
Report: The California Report on Frontier AI Policy, Joint California Policy Working Group on AI Frontier Models (June 2025)
Paper: The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity, Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar, Apple Machine Learning Research (June 2025)
Paper: Stop Anthropomorphizing Intermediate Tokens as Reasoning/Thinking Traces!, Subbarao Kambhampati, Kaya Stechly, Karthik Valmeekam, Lucas Saldyt, Siddhant Bhambri, Vardhan Palod, Atharva Gundawar, Soumya Rani Samineni, Durgesh Kalwar, and Upasana Biswas (May 2025)
Post: How we built our multi-agent research system, Jeremy Hadfield, Barry Zhang, Kenneth Lien, Florian Scholz, Jeremy Fox, and Daniel Ford, Anthropic (June 2025)