Worth Knowing
The DeepSeek Frenzy — Sorting through the Takeaways: Last month, we covered the Chinese AI lab DeepSeek, which had achieved GPT-4-level performance on a limited computing budget with its V3 model. Since last month’s edition was published, DeepSeek’s models — V3 and a reasoning model, “DeepSeek-R1,” released on January 20 — sent shockwaves through the U.S. stock market, Silicon Valley, and Washington D.C.: Nvidia’s stock price fell approximately 17% in one day, venture capitalist Marc Andreessen called it “AI’s Sputnik moment,” and President Trump said it “should be a wake-up call.” While some of the frenzy around DeepSeek has died down — and Nvidia’s stock price has mostly recovered — there are several important implications worth addressing now that the dust has settled:
- What DeepSeek means for global AI development: DeepSeek’s models matched or exceeded the performance of some of the best models developed by leading U.S. labs. Most notably, DeepSeek did so at a fraction of the cost. While exact figures aren’t known, according to Anthropic CEO Dario Amodei, Claude 3.5 Sonnet cost “a few $10M’s” to train. DeepSeek-V3’s final training run, meanwhile, cost an estimated $5.6 million. But as Amodei pointed out in a blog post published last month, DeepSeek’s low training cost isn’t outside the broader industry trend — models have been coming down in cost quickly as new training and distillation methods make it cheaper to train and run a model without sacrificing quality. Furthermore, the widely reported $6 million price tag is only a small fraction of the total cost of building out a leading AI lab and running models for customers — between its stockpiled chips and earlier research and development, DeepSeek has probably spent closer to $1 billion. Still, the broader trend raises a difficult question for the leading developers: if competitors can quickly and cheaply match leading performance, is there sufficient incentive to be at the bleeding edge?
- What DeepSeek means for U.S. export controls: One common refrain in the last month has been that DeepSeek’s success means that U.S. export controls on AI chip exports to China didn’t work. But the reality is more complicated — DeepSeek trained its V3 model on 2048 H800 GPUs, which Nvidia developed for the Chinese market to comply with the Biden administration’s October 2022 export controls on high-end AI chips. While H800s have degraded chip-to-chip networking compared to Nvidia H100s, they are still powerful chips — indeed, they were too powerful for the Commerce Department, which restricted their export to China with an October 2023 update. According to Dylan Patel of SemiAnalysis, DeepSeek has access to at least 20,000 Nvidia Hopper GPUs — evenly split between H100 and H800 chips. While H100s have been controlled since their launch and therefore would have to be smuggled, it’s entirely plausible that DeepSeek legally purchased its stock of H800s before October 2023. Either way, as CSIS’s Greg Allen noted, it will take time and new generations of leading AI chips before we can tell how effective the tightened export controls are at stifling Chinese AI development.
- What DeepSeek means for compute demand: DeepSeek’s launches triggered a sharp drop in Nvidia’s stock price as investors bet that the models’ impressive efficiency gains would slash GPU demand. But efficiency doesn’t necessarily mean reduced demand — as Microsoft CEO Satya Nadella pointed out on social media, efficiency gains can paradoxically increase overall usage of a resource through increased demand, a phenomenon known as the Jevons paradox. The market appears to have sided with Nadella’s view. Nvidia’s stock has largely recovered, and major AI developers haven’t scaled back their compute expansion plans.
- More: DeepSeek FAQ – Stratechery | Xi Jinping seizes DeepSeek moment to restore China tech chiefs to spotlight
- Released on January 23, OpenAI’s Operator, powered by a new “Computer-Using Agent” model, can navigate the web and interact with webpages. OpenAI isn’t the first company to release a web-surfing model — Anthropic debuted its own late last year — but OpenAI’s appears to be the best so far, outperforming rivals on the OSWorld benchmark. Early reviews have been lukewarm, though — The New York Times’ Kevin Roose declared the experience “intriguing… but usually more trouble than it was worth.” OpenAI, for its part, didn’t seem to think the tool was fully ready for primetime, dubbing it a “research preview” in its release materials.
- The company’s other recent release — Deep Research — received a more typical rollout and more effusive praise. The tool acts like a virtual research assistant: given a prompt, it will search the internet for relevant information, synthesize it, and produce a report. Because of the time it takes to search the web and the model Deep Research uses — it is powered by a version of the lab’s o3 reasoning model — Deep Research can take between 5 and 30 minutes to produce a report, citations and all. Observers generally agree that Deep Research produces decent, or better, results — if not quite PhD-level, then certainly entry-level (I documented my own impression of Deep Research on LinkedIn). Both Operator and Deep Research are currently only available on OpenAI’s $200-a-month ChatGPT Pro plan.
Government Updates
At Paris AI Summit, Safety Is Out and Competition Is In: The third global AI summit took place in Paris last week, co-hosted by France and India and attended by top political leaders, AI executives, and researchers. Unlike earlier summits in the UK (2023) and South Korea (2024), which had been framed around managing AI risks, the Paris summit focused much more heavily on maximizing AI’s economic potential. While the summit resulted in a declaration on “Inclusive and Sustainable Artificial Intelligence for People and the Planet” — which the United States and the UK both declined to sign — its most notable takeaway was the public effort by several countries, the United States and France chief among them, to stake out their competitive positions as leading destinations for AI development. Vice President J.D. Vance used his first major foreign policy speech to chart the Trump administration’s approach to AI: maintaining U.S. leadership in the technology by cultivating a hands-off regulatory environment. Vance criticized the European approach to AI regulation that, he said, risked “[killing] a transformative industry just as it’s taking off.” Vance said the Trump administration would take a lighter touch to AI regulation and encouraged the summit’s attendees to embrace that “deregulatory flavor.” Vance left the summit immediately after his speech, skipping remarks by European Commission President Ursula von der Leyen, a move observers saw as a deliberate snub based on the bloc’s contentious relationship with U.S. tech companies. For France, meanwhile, the summit was a showcase as much as a policy event. 
President Macron announced a €109 billion AI investment package, framing it as France’s answer to the Stargate project (see below), and said that both France and the EU would “simplify” the regulatory environment for tech to “resynchronise with the rest of the world.” The EU’s new chief digital minister, Executive Vice-President for Tech Sovereignty, Security and Democracy Henna Virkkunen, has signaled agreement, telling the Financial Times that the EU is “committed to cut bureaucracy and red tape” and helping the bloc’s AI companies compete effectively.
Trump Issues Executive Order on AI: Soon after taking office last month, President Trump signed an executive order that revoked President Biden’s 2023 AI executive order and directed his top AI officials to draft an Artificial Intelligence Action Plan that will “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” The immediate effects of Trump’s new order are not particularly far-reaching. While Biden’s EO on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” imposed a wide range of tasks and responsibilities across the federal government, the vast majority of those tasks have been completed. One key ongoing feature of Biden’s order that Trump’s order revoked was a requirement that the developers of particularly powerful AI models share details about their model’s training and disclose all red-teaming results. But other remnants of the Biden EO — including dozens of Chief AI Officers appointed to federal agencies pursuant to the order — will require more deliberate action to undo. Trump’s EO directs the Assistant to the President for Science and Technology (Trump has nominated Michael Kratsios for the role), Special Advisor for AI and Crypto David Sacks, and Assistant to the President for National Security Affairs Michael Waltz to review “all policies, directives, regulations, orders, and other actions” taken pursuant to Biden’s revoked EO and suspend, revise, or rescind any that are incompatible with the goals of Trump’s new order. The EO also gives OMB Director Russell Vought, in consultation with Kratsios, 60 days to review and revise two OMB directives related to government use of AI — OMB memoranda M-24-10 and M-24-18.
Trump’s Federal Workforce Cuts Come for Federal Science and AI Staff: The Trump administration has kicked off its second term with far-reaching workforce cuts across the federal government as President Trump and “Department of Government Efficiency” figurehead Elon Musk aim to slash government spending. While the cuts have so far fallen largely on agencies like USAID, the CDC, and the Department of Veterans Affairs, science agencies — including those with significant AI portfolios — are beginning to see reductions of their own, and more could be coming soon:
- On Tuesday, the National Science Foundation fired 168 workers and, according to Politico, could cut up to half of its approximately 1,500 staffers within the next two months. Ars Technica reported that President Trump could seek to cut the agency’s $9 billion budget by as much as two-thirds in his budget request later this year. The NSF is a major U.S. research funder and one of Congress’ favorite homes for AI and emerging tech programs, including by way of significant authorized funding increases to drive AI and semiconductor research as part of the 2022 CHIPS and Science Act (actual appropriations have been a different story).
- Axios reported on Wednesday that the National Institute of Standards and Technology will undergo significant cuts. NIST has played a prominent role in driving the federal government’s response to AI, most notably developing the AI Risk Management Framework and housing the U.S. AI Safety Institute (AISI), which launched in late 2023. It also was tasked with overseeing billions in CHIPS and Science Act incentives and R&D programs. According to Axios, the 497 people expected to be fired include 57% of CHIPS staff focused on incentives and 67% of CHIPS staff focused on R&D. Bloomberg reports that AISI could face cuts as well, but exact details are not yet available.
Trump’s “America First Trade Policy” and Export Controls on China: On January 20, President Trump issued a memorandum establishing an “America First Trade Policy” that could have far-reaching effects on AI and semiconductor trade policy. Among other things, the memorandum directs the Secretary of State and the Secretary of Commerce to review, update, and close loopholes in export controls meant to prevent the transfer of strategic technologies, software, and intellectual property to geopolitical rivals. The memo also directs the Secretary of the Treasury, in consultation with the Commerce Secretary, to review President Biden’s 2023 executive order on outbound investments and assess whether it should be updated or rescinded. While the Trump administration has made deliberate efforts to differentiate itself from the Biden administration on some aspects of AI policy (see Trump’s AI executive order above), the president doesn’t seem intent on deviating dramatically from the Biden tack on export controls. President Trump’s pick for Commerce Secretary, Howard Lutnick — confirmed by the Senate on Tuesday — said he would work to stymie Chinese AI development during his confirmation hearings last month and would “empower” the Bureau of Industry and Security, although he didn’t commit to expanding export controls. Trump’s pick to head BIS, Under Secretary for Industry and Security Jeffrey Kessler, meanwhile, is an experienced trade policy lawyer with Commerce Department experience, and his pick for assistant secretary of commerce for export administration, Landon Heid, has earned a reputation as a “China hawk” during his tenure as a staffer on the House Select Committee on China. 
While the Biden administration left power with a flurry of export control updates, there seems to be an appetite for more on Capitol Hill — after Chinese AI lab DeepSeek’s recent releases, Senators Hawley and Warren wrote a letter to Lutnick urging him to further strengthen export controls on China, including by strengthening the AI diffusion rule and clamping down on chip smuggling.
OpenAI and Partners Debut $500B Stargate Plan at White House: OpenAI, SoftBank, Oracle, and MGX announced plans for a massive AI infrastructure project called “Stargate” at a White House event with President Trump last month. The proposed $500 billion initiative would build a network of advanced AI computing centers across the United States over the next four years. However, some observers noted the $500B figure was more aspirational than fully realized, with only about $100B of the total funds spoken for. Others pointed out that while the CEOs behind the project credited the president for enabling the project — “we wouldn’t be able to do this without you, Mr. President,” Altman told Trump during the event — the project was reported to be in development as early as March 2024 and construction on the first projects had begun in June. The White House announcement, then, may have been part of what the Washington Post dubbed an “aggressive charm offensive” waged by business leaders — especially tech CEOs — in the early days of the second Trump term. While President Trump had a rocky relationship with the tech world during his first term — characterized by social media bans and antitrust suits — big tech executives seem to be betting that a more amicable relationship is possible this time around.
In Translation
CSET’s translations of significant foreign language documents on AI
Song-Chun Zhu: The Race to General Purpose Artificial Intelligence is not Merely About Technological Competition; Even More So, it is a Struggle to Control the Narrative. Read our translation of an interview of Chinese AI expert Song-Chun Zhu, who argues that China’s AI industry should chart a different course than the current U.S. focus on data- and compute-heavy large language models.
Chinese Catalogue of Technologies Prohibited or Restricted from Export. Read our translation of China’s catalog of prohibited and restricted exports, as of its most recent revision in December 2023.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
Job Openings
We’re hiring! Please apply or share the role below with candidates in your network:
- Director of Analysis: This leadership role combines organizational strategy, research oversight, and project and team management. The director will oversee CSET’s research agenda, manage a diverse team of 25+ researchers, and ensure high-quality, policy-relevant outputs.
- Research Fellow – Applications: This role will support the Applications line of research. You will analyze, publish, and help lead CSET’s work on the use of AI in the national security arena, including shaping priorities, authoring or overseeing research, and producing reports and other outputs.
- Media Engagement Specialist: This role will assist with the Center’s externally facing activities and communications, with a particular emphasis on media outreach, strategic collateral creation, and event support.
What’s New at CSET
REPORTS
- Chinese Critiques of Large Language Models by William Hannas, Huey-Meei Chang, Maximilian Riesenhuber, and Daniel Chou
- AI Incidents: Key Components for a Mandatory Reporting Regime by Ren Bin Lee Dixon and Heather Frase
- Shaping the U.S. Space Launch Market by Michael O’Connor and Kathleen Curlee
- Putting Explainable AI to the Test: A Critical Look at AI Evaluation Approaches by Mina Narayanan, Christian Schoeberl, and Tim G. J. Rudner
PUBLICATIONS AND PODCASTS
- Bulletin of the Atomic Scientists: Will the Paris artificial intelligence summit set a unified approach to AI governance—or just be another conference? by Mia Hoffmann, Mina Narayanan, Owen J. Daniels
- Newsweek: DeepSeek Shows Why U.S., China Can Still Collaborate on Tech by William Hannas and Huey-Meei Chang
- Newsweek: Does China’s DeepSeek Mean U.S. AI Is Sunk? by Jack Corrigan and Sam Bresnick
- HBCU Digest: Howard’s R1 achievement is great. HBCU R2s may be an even greater story by Jaret C. Riddick and Brendan Oliss
- Cato Institute: Power Problems Podcast: The AI Competition with China featuring Sam Bresnick
- EqualAI: In AI We Trust? Podcast featuring Dewey Murdick
- Modern War Institute: MWI Podcast: DeepSeek and the U.S.-China AI Race featuring Sam Bresnick and Bill Hannas
- Squaring the Circle: AI and the Future of Warfare featuring Emmy Probasco
EMERGING TECHNOLOGY OBSERVATORY
- A year of Chinese tech advances: editors’ picks from ETO Scout, volume 19 (12/19/24-1/16/25)
- Exploring AI in this year’s NDAA with ETO AGORA
EVENT RECAPS
- On February 6, CSET Research Analyst Hanna Dohmen testified before the U.S.-China Economic and Security Review Commission during “Panel II: The Next Decade of U.S.-China Tech Competition” of the hearing “Made in China 2025—Who Is Winning?” Read her testimony and watch the full hearing.
IN THE NEWS
- Fast Company: Are you ‘AI literate’? Schools and jobs are insisting on it—and now it’s EU law (Jackie Snow cited the CSET data snapshot Leading the Charge: A Look at the Top-Producing AI Programs in U.S. Colleges and Universities)
- Fortune: OpenAI ex-board member Helen Toner says revoking ban on Nvidia AI chip exports would be a ‘huge victory’ for China (Sharon Goldman quoted Helen Toner)
- Fortune: Trump seemed blindsided by the sudden rise of Chinese AI service DeepSeek. Here’s how he’ll help U.S. tech punch back, experts say (David Meyer quoted Sam Bresnick)
- GZERO Media: France puts the AI in laissez-faire (Scott Nover quoted Mia Hoffmann)
- GZERO Media: Is DeepSeek the next US national security threat? (Scott Nover quoted Jack Corrigan)
- GZERO Media: What Stargate means for Donald Trump, OpenAI, and Silicon Valley (Scott Nover quoted Jack Corrigan)
- Lawfare: Beyond DeepSeek: How China’s AI Ecosystem Fuels Breakthroughs (Ruby Scanlon cited the CSET report Chinese Public AI R&D Spending: Provisional Findings)
- Nature: How China created AI model DeepSeek and shocked the world (Gemma Conroy and Smriti Mallapaty cited the CSET report Assessing China’s AI Workforce)
- South China Morning Post: China’s ability to launch DeepSeek’s popular chatbot draws US government panel’s scrutiny (Robert Delaney quoted Hanna Dohmen)
What We’re Reading
Report: Red-Teaming in the Public Interest, Ranjit Singh, Borhane Blili-Hamelin, Carol Anderson, Emnet Tafesse, Briana Vecchione, Beth Duckles, and Jacob Metcalf, Data & Society and AI Risk and Vulnerability Alliance (February 2025)
Paper: Distillation Scaling Laws, Dan Busbridge, Amitis Shidani, Floris Weers, Jason Ramapuram, Etai Littwin, and Russ Webb (February 2025)
Upcoming Events
- February 24: CSET Webinar, How the U.S. Government Hires, Uses, and Pays for AI Tools and Talent, featuring Alla Goldman Seiffert, Elizabeth Laird, and CSET’s Matthias Oschinski
What else is going on? Suggest stories, documents to translate & upcoming events here.