Worth Knowing
A Big Month for AI Releases — Google, OpenAI, and Meta Make Waves: 2024 closed with a wave of major AI releases from the industry’s leading companies:
- Google released both a state-of-the-art language model and a video generation model, putting it at or near the top of both categories: Last week, Google debuted Gemini 2.0, its most powerful family of language models to date. In addition to native multimodality, the company says Gemini 2.0 is trained for native tool use, making it a better fit for agentic use cases than earlier models. So far, Google has released two models from the Gemini 2.0 family: Gemini 2.0 Flash Experimental — a faster “workhorse” model with a 1 million token context window — and Gemini-Exp-1206, a more powerful model that has now taken the top spot on the popular Chatbot Arena LLM Leaderboard, dethroning the most recent version of OpenAI’s GPT-4o model. Then this week, the company announced Veo 2, its latest text-to-video model (as well as Imagen 3, its newest text-to-image generator). In human evaluations conducted by Google, the company says users consistently preferred Veo 2’s outputs to those of OpenAI’s Sora.
- OpenAI rolled out a number of new products and features as part of its still-ongoing “12 Days of OpenAI” event, including the public release of its video generation tool Sora and the full release of its reasoning model, OpenAI o1, which had been available in beta since it was announced in September. The company also introduced a $200-a-month tier of ChatGPT that offers users unlimited access to o1 as well as a version of o1 that can use more computing resources to “think” longer.
- Meta released Llama 3.3 70B, an open weights model the company says is as performant as its much larger 405 billion-parameter Llama 3.1 model and approaches GPT-4-level capabilities. While Meta’s Llama models are not technically open source because the download license prescribes how they can and cannot be used, their open weights (available to download through Hugging Face or the Llama website) make them widely available to anyone, including — as we noted when Llama 3 launched earlier this year — scammers, spammers, and propagandists.
- More: I can now run a GPT-4 class model on my laptop with Llama 3.3 70B | OpenAI seeks to unlock investment by ditching ‘AGI’ clause with Microsoft | INTELLECT-1 — The First Globally Trained 10B Parameter Model
Government Updates
White House Debuts Another Update on China-Focused Chip Export Rules: Earlier this month, the Commerce Department announced significant updates to its semiconductor export controls, further tightening restrictions on China’s access to high-end chips and manufacturing equipment. The new rules mark the latest evolution of controls first introduced in October 2022 and updated last year. While the original controls have had an impact — the CEO of Chinese AI lab DeepSeek has cited them as a significant obstacle — their effectiveness has been mixed. High-end chip smuggling has been widespread (despite its CEO’s comments, Dylan Patel of SemiAnalysis estimates DeepSeek has accumulated some 50,000 nominally controlled Nvidia Hopper GPUs), and Chinese chipmakers have built substantial stockpiles of semiconductor manufacturing equipment (SME) in anticipation of future restrictions. The new 200+ page rules package (part 1 and part 2) attempts to address these issues through three main mechanisms:
- Expanding restrictions on SME and the software used to produce advanced-node integrated circuits. Of particular note is a significant expansion of the Foreign Direct Product Rule that extends U.S. jurisdiction over any foreign-produced SME that contains “any amount of U.S.-origin integrated circuits”
- New controls on exports of high-bandwidth memory chips, which are essential for most AI systems
- The addition of 140 Chinese entities to the Bureau of Industry and Security’s Entity List, many of which are engaged in China’s indigenous semiconductor development
Trump Taps David Sacks as White House AI and Crypto Czar: President-elect Donald Trump tapped venture capitalist David Sacks to serve as his administration’s “AI & Crypto Czar” and to lead the President’s Council of Advisors on Science and Technology. An early executive at PayPal and angel investor in Facebook, SpaceX, and Palantir, Sacks has built up a significant public profile in recent years as one of the co-hosts of the popular “All-In” podcast. In announcing the appointment, Trump said Sacks would “focus on making America the clear global leader” in both AI and crypto while working to “steer us away from Big Tech bias and censorship.” The exact scope and authority of Sacks’ position remain unclear. According to Bloomberg, Sacks will be classified as a “special government employee” permitted to work up to 130 days per year. While not subject to Senate confirmation or asset disclosure requirements, Sacks still must recuse himself from matters where he has financial interests. Observers see Sacks’ appointment as another sign of Elon Musk’s influence within the incoming administration. Sacks and Musk (who is set to advise the president as the co-chair of the proposed “Department of Government Efficiency”) worked together at PayPal and have reportedly maintained close ties. Some observers have raised concerns that Sacks and Musk could use their positions to target rivals — including OpenAI, which Musk sued earlier this year — or privilege their own business interests. OpenAI CEO Sam Altman appears eager to smooth over any issues with the incoming administration, donating $1 million to Trump’s inauguration fund and congratulating Sacks on social media, to which Musk responded with a laughing emoji.
U.S.-China Commission Proposes “Manhattan Project” for AI Development: In its annual report, the U.S.-China Economic and Security Review Commission recommended Congress establish and fund a Manhattan Project-style program for AI to ensure the United States develops artificial general intelligence (AGI) before China. The report calls on Congress to: 1) give the executive branch broad multiyear contracting authority for funding AI, cloud, and data center firms; and 2) direct the DOD to assign AI-related items a “highest priority” rating in the Defense Priorities and Allocations System. While the commission’s recommendations aren’t binding, the proposal has garnered attention in Washington and Silicon Valley, especially because the incoming Trump administration seems like it could be receptive to the idea. In July, the Washington Post reported that Trump allies were drafting AI-focused executive orders incorporating similar Manhattan Project-style initiatives for military applications. And one of the U.S.-China Commission’s 12 members — Palantir Senior Advisor Jacob Helberg — was tapped last week to lead economic diplomacy at the State Department. Tech industry leaders, including Google CEO Sundar Pichai, have expressed willingness to participate in a Manhattan Project-type initiative. But some experts urge a more measured approach. As Anthropic co-founder and former CSET Non-Resident Fellow Jack Clark noted during a panel at the recent New York Times Dealbook Summit, a “Manhattan Project for AI” could be an unnecessary response when less-expensive government initiatives — like high-skilled immigration reform or increasing access to computing resources — might be sufficient.
CDAO Inks A Deal With Anduril for Tactical Data Platform: The Pentagon’s Chief Digital & AI Office awarded Anduril a $100 million contract to expand its “Lattice Mesh” system, which enables frontline forces to rapidly share and process data from hundreds of different sensors. Anduril followed up the contract by releasing its Lattice software development kit (SDK) last week, enabling third-party developers to build applications that can integrate directly into Lattice. As Ben Thompson noted in his Stratechery newsletter, these moves could mark an inflection point for military software development. While the DOD has historically struggled with software procurement, Anduril’s platform approach could create an accessible ecosystem more familiar to Silicon Valley. The CDAO contract reflects an evolution in the Pentagon’s data strategy. When Defense Secretary Lloyd Austin approved the Joint All-Domain Command and Control (JADC2 – since renamed “Combined Joint All Domain Command & Control” or CJADC2) initiative in 2021, DOD planners envisioned a single unified network connecting sensors across services. But as CDAO Principal Deputy Margaret Palmieri told Breaking Defense, her office now recognizes that tactical and strategic commanders have fundamentally different needs — the former requiring rapid access to limited data, the latter needing to process massive amounts of information over longer periods. Anduril’s Lattice aims to fill that tactical role, while Palantir is working on a platform for the strategic and operational levels. Earlier this month, the two firms announced plans to work together to share data between the two platforms to enhance data processing, including for AI training.
House AI Task Force Publishes Year-End Report: The House Task Force on AI released its end-of-year report on Tuesday, offering a roadmap that emphasizes sector-specific oversight over comprehensive new AI regulations. Led by Reps. Obernolte and Lieu, the AI task force was set up earlier this year to help Congress balance U.S. leadership in AI with appropriate safeguards for its development and deployment. The 24-member bipartisan group consulted with experts across academia, industry, and government to develop recommendations spanning 15 core areas from healthcare and intellectual property to government use of AI and national security. Rather than propose new regulatory frameworks, the 253-page report advocates that federal agencies take the lead on regulating AI within their sectors using their existing authorities. “We do not think it is a good idea for the United States to follow some of the other countries in the world in splitting off AI and establishing a brand new bureaucracy and a universal licensing requirement for it,” Obernolte told reporters (CSET’s Jack Corrigan, Owen Daniels, Lauren Kahn, and Danny Hague made a similar case in their June paper, Governing AI with Existing Authorities). While the 118th Congress will end next month, the report is an important signal of where Congress could be headed on AI in the next session.
In Translation
CSET’s translations of significant foreign language documents on AI
PRC Industry Response to Export Controls: Four Chinese Industry Associations Issue Statements Condemning U.S. Sanctions. This translation combines statements issued separately by four Chinese industry associations on December 3, 2024, condemning the new sanctions and export controls aimed at Chinese companies that the United States announced the previous day. All four statements encourage Chinese companies to reconsider purchases of U.S. chips and semiconductor equipment and to look elsewhere for suppliers. The Chinese Communist Party controls all industry associations in the country, so the coordinated statements should be understood to reflect the concerns of the Chinese leadership.
PRC Ministry of Commerce Export Ban: Ministry of Commerce Notice 2024 No. 46: Notice Concerning Strengthening Controls on Exports of Relevant Dual-Use Items to the United States. This notice from China’s Ministry of Commerce, issued December 3, 2024, bans the export of gallium, germanium, antimony, and superhard materials to the United States, and enacts stricter export control checks on dual-use graphite materials. This move immediately follows the U.S. announcement on December 2 of new export controls and sanctions designed to prevent sales of advanced U.S. microchips and semiconductor manufacturing equipment to China.
PRC Cybersecurity Standards Draft: National Standard of the People’s Republic of China: Cybersecurity Technology – Basic Safety Requirements for Generative Artificial Intelligence Services (Draft for Feedback). This draft Chinese national standard is designed to improve the safety and security of generative AI services. The standard addresses some cybersecurity concerns associated with generative AI, but primarily focuses on preventing AI systems from generating content the Communist Party finds objectionable, such as pornography, bullying, hate speech, defamation, copyright infringement, and criticism of the Party’s monopoly on power.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
Job Openings
We’re hiring! Please apply or share the role below with candidates in your network:
- Director of Analysis: CSET is seeking applications for a Director of Analysis position. This leadership role combines organizational strategy, research oversight, and project & team management; the director will oversee CSET’s research agenda, manage a diverse team of 25+ researchers, and ensure high-quality, policy-relevant outputs.
What’s New at CSET
REPORTS
- Anticipating Biological Risk: A Toolkit for Strategic Biosecurity Policy by Steph Batalis
- Staying Current with Emerging Technology Trends: Using Big Data to Inform Planning by Emelia Probasco and Christian Schoeberl
- AI and the Future of Workforce Training by Matthias Oschinski, Ali Crawford, and Maggie Wu
- Identifying Emerging Technologies in Research by Catherine Aiken, James Dunham, Jennifer Melot, and Zachary Arnold
PUBLICATIONS AND PODCASTS
- CSET: RFI Response: Safety Considerations for Chemical and/or Biological AI Models by Steph Batalis and Vikram Venkatram
- Tech Policy Press: Old Meets New in Online Influence by Josh A. Goldstein
- DefenseScoop: America Goes All-In on Big AI by Jack Corrigan
- Inkstick: What Will the Green Transition Mean for a Divided World? by Owen J. Daniels, Nevada Joan Lee, Paul Pillar, and Thomas Ramge
- Walsh School of Foreign Service: Five Key Issues to Watch in AI in 2025 by Andrew Imbrie
- China Global Podcast — German Marshall Fund: Chinese Perspectives on Military Uses of AI with Sam Bresnick
EMERGING TECHNOLOGY OBSERVATORY
- Know more about AI policy: introducing AGORA
- Direct access to our data: exploring ETO’s new dataset portal
- Brain-inspired AI, surgical robots, AI in space: editors’ picks from ETO Scout, volume 17 (10/23/24-11/22/24)
- The CSET-ETO Chinese Technical Glossary
EVENT RECAPS
- On December 11, Matt Sheehan of the Carnegie Endowment for International Peace and Daniel Schiff, Assistant Professor and Co-Director of the Governance and Responsible AI Lab at Purdue University, joined CSET’s Mina Narayanan and Zachary Arnold to discuss forecasts for AI governance efforts around the world in 2025. The webinar also featured the exclusive introduction of the Emerging Technology Observatory’s AGORA, a new tool designed to help researchers and the public examine AI-relevant laws, regulations, and standards. Watch a recording of the event.
IN THE NEWS
- DefenseOne: Eighteen ways Palantir wants the Pentagon to change (Lauren C. Williams cited the CSET report Building the Tech Coalition)
- Fast Company: AI 20: Helen Toner’s OpenAI exit only made her a more powerful force for responsible AI (Mark Sullivan quoted Helen Toner)
- GZero Media: Biden tightens China’s access to chips one last time (Scott Nover quoted Jacob Feldgoise)
- Inside Higher Ed: Howard Expects to Gain R-1 Status. Other HBCUs Will Follow (Sara Weissman quoted Jaret Riddick)
- Marketplace: Parsing two Trump appointees (David Brancaccio spoke to Jacob Feldgoise)
- MIT Technology Review: We saw a demo of the new AI system powering Anduril’s vision for war (James O’Donnell quoted Emelia Probasco)
- Nature: Why ‘open’ AI systems are actually closed, and why this matters (David Gray Widder, Meredith Whittaker, and Sarah Myers West cited the CSET report The AI Triad and What It Means for National Security Strategy)
- Nikkei Asia: China AI military use spurs latest U.S. chip export controls, analysts say (Ken Moriyasu cited the CSET report U.S. and Chinese Military AI Purchases)
- Tech Target: Elon Musk, big tech ties to China raise security concerns (Makenzie Holland quoted Sam Bresnick)
- The Globe and Mail: Fact check: Monstrous sea creatures aren’t being pulled from the depths (Patrick Dell quoted Josh A. Goldstein)
- The Information: TSMC’s Push to Be Tech’s Switzerland in Doubt as U.S.-China Tensions Grow (Qianer Liu quoted Jacob Feldgoise)
- The Washington Post: Sorry, Oxford dictionary nerds. This is the real word of the year (Shira Ovide cited Josh A. Goldstein’s Harvard Kennedy School Misinformation Review piece How spammers and scammers leverage AI-generated images on Facebook for audience growth)
- The Wire China: The Robotics Risk Tightrope (Noah Berman quoted Bill Hannas)
- UnHerd: The British scientists working for China: UK research is powering Beijing’s military-industrial complex (David Rose quoted Bill Hannas)
What We’re Reading
Annual Report: Military and Security Developments Involving the People’s Republic of China, U.S. Department of Defense (December 2024)
Working Paper: Under Pressure: Attitudes Towards China Among American Foreign Policy Professionals, Michael B. Cerny and Rory Truex (December 2024)
Report: How Federal Funding Terms and Conditions Could Encourage Safe Artificial Intelligence Development, Ryan Consaul and Gregory Smith, RAND Corporation (November 2024)
Paper: Whack-a-Chip: The Futility of Hardware-Centric Export Controls, Ritwik Gupta, Leah Walker, and Andrew W. Reddie (November 2024)