Worth Knowing
The Expensive and Power-Hungry AI Compute Spending Spree Heats Up: The last month brought a wave of deals involving some of the world’s biggest semiconductor companies and AI developers:
- Nvidia announced a $5 billion investment for a roughly 4% stake in Intel, making it one of the struggling chipmaker’s largest shareholders. The deal, which came shortly after the U.S. government took a 10% stake in Intel, also sets up a closer partnership: the two companies will co-develop PC and data center chips combining Intel’s x86 CPUs and advanced packaging with Nvidia’s GPU and networking capabilities. While Nvidia’s GPUs will continue to be manufactured by TSMC for the foreseeable future, the partnership gives Intel some additional stability and could eventually offer Nvidia the security of a U.S.-owned manufacturing alternative.
- Nvidia and OpenAI struck a landmark deal worth up to $100 billion that will make the chip company both a major investor and guaranteed supplier to the AI lab. Under the arrangement, Nvidia will provide OpenAI with priority access to its cutting-edge data-center GPUs, while OpenAI will use the injected capital to purchase those chips. The companies signed a letter of intent to deploy at least 10 gigawatts of Nvidia computing resources, with the first 1 GW project slated for late 2026.
- Soon after its Nvidia deal was announced, OpenAI struck an agreement with one of its new partner’s chief rivals, AMD. Under the terms of the deal, AMD will supply AI chips equivalent to about 6 gigawatts of computing capacity. In exchange, OpenAI received a warrant to buy up to 160 million shares of AMD (approximately 10% of the company) for one cent each, provided certain milestones are met. The deal gives OpenAI some much-needed diversity in its chip supply and makes AMD a more serious player in the AI computing space. AMD’s stock surged nearly 40% in the day following the announcement.
- This week, OpenAI and Broadcom announced a strategic collaboration to co-develop 10 gigawatts’ worth of custom AI accelerators. OpenAI has flirted with the idea of designing and even manufacturing its own chips for years (it worked with Broadcom and TSMC last year to make its first in-house chips, and CEO Sam Altman briefly championed a $7 trillion chipmaking project), but the new Broadcom agreement is the largest and most concrete such effort to date. Under the initial plan, the custom chips, designed in-house by OpenAI, will begin deploying in the second half of 2026.
- More: Most U.S. Growth Now Rides on AI—And Economists Suspect a Bubble | OpenAI is projecting unprecedented revenue growth
- OpenAI had a slew of announcements this month, capped by the debut of its most powerful video generation model yet, Sora 2. Like Google’s latest Veo models (Google released an updated Veo 3.1 on Wednesday), Sora 2 comes much closer to producing lifelike video. While OpenAI’s model doesn’t appear to be a massive leap over its competitors’, the company’s Sora app, announced the same day, could go a long way toward accelerating everyday engagement with AI-generated video, much as ChatGPT did for AI-generated text and images. The (for now) invite-only, TikTok-like app lets users create and share AI-generated video clips, including ones with “cameos” featuring themselves or other Sora users. Then, at its October DevDay event, the company announced plans to build ChatGPT into a broader AI application platform, introducing an Apps SDK that allows third-party services to integrate directly into ChatGPT’s interface. As we covered earlier, OpenAI has committed to a compute buildout totaling hundreds of billions of dollars. While the company is bringing in roughly $13 billion in annual revenue (nothing to sneeze at for most startups), its massive spending commitments mean it needs to scale up revenue considerably. Its push into advertising and ecommerce-friendly products seems to reflect that imperative.
- More: Building towards age prediction | AI Sam Altman and the Sora copyright gamble: ‘I hope Nintendo doesn’t sue us’
- Anthropic rolled out two new models over the last month: Claude Sonnet 4.5 and Claude Haiku 4.5. Anthropic says the former, unveiled on September 29, is its best coding model to date and offers significant improvements in tasks like complex reasoning and autonomous tool use. The company says Sonnet 4.5 can sustain focus on very long tasks (up to 30 hours) without losing context, making it a good candidate for unsupervised, agentic coding work. Haiku 4.5, meanwhile, is a smaller, faster model that Anthropic says matches the coding performance of the prior flagship Claude Sonnet 4 (released in May) at one-third the cost and more than twice the speed. While OpenAI has pursued more of a swallow-the-world strategy, Anthropic has consistently focused on offering the top AI coding models, a position that drove strong API revenue as developers leaned on its models for coding tasks. But as competitors like OpenAI and Google have developed new models that match or approach Anthropic’s coding performance, the company has had to find the sweet spot of capability and cost that keeps its models competitive.
- More: California Signed A Landmark AI Safety Law. What To Know About SB53 | The California Report on Frontier AI Policy
- More: The Consequences of China’s New Rare Earths Export Restrictions | China’s rare earth controls can ‘forbid any country on Earth from participating in the modern economy,’ former White House advisor warns
Government Updates
Trump’s $100,000 H-1B Fee Hits Tech Talent Pipeline: In September, President Trump signed a proclamation imposing a $100,000 fee on new H-1B visa applicants. Commerce Secretary Howard Lutnick called H-1B the “most abused visa,” arguing the fee would force companies to train American graduates instead of importing labor. After initial conflicting guidance on how the fee would be applied, the administration clarified that it will apply only to new H-1B petitions and will be imposed one time, not on an ongoing annual basis. H-1B visas have been a critical part of the tech industry’s talent pipeline for decades: in the first half of FY2025, Amazon secured approval for over 12,000 H-1B positions, while Microsoft, Meta, and Google each had more than 5,000. A number of prominent tech and AI leaders, including Elon Musk, Satya Nadella, Sundar Pichai, and Yann LeCun, once held H-1Bs, and CSET research has found that former recipients are well represented among AI startup founders. But there have also been long-standing concerns about abuse of the program: a report by Bloomberg last year found that some staffing and outsourcing companies had worked out elaborate tactics to game the system in favor of their applicants, often for lower-paid IT jobs. The Trump administration’s solution has been criticized by many in Silicon Valley as too heavy-handed, however, especially for startups without the liquidity to pay $100,000 up front for necessary talent. And as IFP’s Jeremy Neufeld explained to the New York Times, the new fee structure likely has too many loopholes to effectively stop outsourcers from gaming the system. On the bright side, Trump’s move may have reenergized congressional reform efforts: soon after the president’s proclamation, Senators Grassley (R-IA) and Durbin (D-IL) introduced legislation to reform the H-1B and L-1 visa programs. Still, comprehensive immigration reform has repeatedly failed, and it’s not clear that President Trump wants a legislative solution. In the meantime, the new fee faces challenges in court; a group of unions, companies, and religious organizations filed a lawsuit to block it earlier this month.
Bipartisan AI Evaluation Proposal Faces an Uphill Battle in the Senate: Senators Hawley (R-MO) and Blumenthal (D-CT) introduced the Artificial Intelligence Risk Evaluation Act of 2025 last month, a bipartisan bill that would impose federal oversight on the most powerful AI systems before they are deployed. The legislation would establish an “Advanced Artificial Intelligence Evaluation Program” within the Department of Energy, tasked with rigorously testing advanced AI models — defined as those trained using more than 10^26 floating-point operations — before developers can deploy them. The DOE program would conduct red-team testing and independent assessments to evaluate risks, including loss-of-control scenarios, weaponization by adversaries, threats to critical infrastructure or civil liberties, and scheming behavior. Companies would, upon request from the Energy Secretary, be required to submit extensive technical materials, including source code, training data, and model weights, and would face fines of at least $1 million per day for deploying advanced AI systems without complying. The bill would also require the Energy Secretary to deliver annual reports to Congress recommending federal regulatory actions based on the results of the testing program. If passed, the law would be the most significant domestic AI oversight regime yet, but it faces significant headwinds both inside Congress and from outside interest groups. Senate Commerce Chair Ted Cruz (R-TX) has favored a “light-touch” framework — which he introduced early last month — aimed at “[unleashing] American innovation and long-term growth” and “[preventing] a patchwork of burdensome AI regulation.” Industry groups, meanwhile, have bristled at anything more stringent than disclosure-based regulation like California’s recently passed SB 53 (see above). Even if the bill is unlikely to pass as written, it nevertheless points to continued bipartisan interest in increasing the pre-deployment scrutiny of the most powerful AI models.
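For a rough sense of what the bill’s 10^26 FLOP threshold implies, here is a minimal back-of-the-envelope sketch using the common “6ND” approximation (training compute ≈ 6 × parameters × training tokens). Both the approximation and the model sizes below are illustrative assumptions for scale, not figures from the bill or any disclosed training run:

```python
# Back-of-the-envelope check of the bill's 10^26 FLOP threshold using the
# common "6ND" rule of thumb: training FLOPs ~= 6 * N params * D tokens.
# The (N, D) pairs below are hypothetical, chosen only to illustrate scale.

THRESHOLD_FLOPS = 1e26  # the bill's cutoff for an "advanced" AI model

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the 6ND approximation."""
    return 6 * n_params * n_tokens

hypothetical_runs = [
    ("70B params on 15T tokens", 70e9, 15e12),
    ("400B params on 30T tokens", 400e9, 30e12),
    ("2T params on 20T tokens", 2e12, 20e12),
]

for label, n, d in hypothetical_runs:
    flops = training_flops(n, d)
    status = "over" if flops > THRESHOLD_FLOPS else "under"
    print(f"{label}: ~{flops:.1e} FLOPs ({status} the 10^26 threshold)")
```

By this rough measure, only frontier-scale runs, on the order of trillions of parameters trained on tens of trillions of tokens, would clear the threshold, so the program would cover a handful of the largest models rather than the broader AI ecosystem.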
In Translation
CSET’s translations of significant foreign language documents on AI
- Ministry of Commerce Notice 2025 No. 61: Announcement of the Decision to Implement Controls on Exports of Rare Earth-Related Items to Foreign Countries
- Opinions of the State Council on Deepening the Implementation of the “Artificial Intelligence+” Initiative
- Implementation Opinions of the National Development and Reform Commission and the National Energy Administration on Promoting the High-Quality Development of “Artificial Intelligence+” Energy
- Guidelines for the Construction of a Comprehensive Standardization System for the National Artificial Intelligence Industry (2024 Edition)
- Guide to Using Generative Artificial Intelligence in Primary and Secondary Schools (2025 Edition)
What’s New at CSET
REPORTS
- Harmonizing AI Guidance: Distilling Voluntary Standards and Best Practices into a Unified Framework by Kyle Crichton, Abhiram Reddy, Jessica Ji, Ali Crawford, Mia Hoffmann, Colin Shea-Blymyer, and John Bansemer
- U.S. AI Statecraft: From Gulf Deals to an International Framework by Pablo Chavez
PUBLICATIONS AND APPEARANCES
- CSET: AI Control: How to Make Use of Misbehaving AI Agents by Kendrea Beers and Cody Rushing
- Foreign Policy: Civilian Tech Is Powering China’s Military by Cole McFaul and Sam Bresnick
- Council on Foreign Relations: How America Can Win in Space to Protect Taiwan and Beyond by Kathleen Curlee
- Network 20/20: High Stakes in High Tech: The U.S.-China AI Power Race featuring Owen J. Daniels
- Agents of Tech: Will the U.S. LOSE the AI Race to China? featuring Helen Toner
EVENT RECAPS
- On December 7, CSET hosted a conference on China’s military modernization and AI ecosystem. The event featured a keynote by Representative John Moolenaar, Chairman of the House Select Committee on China, and panels with leading experts discussing CSET’s latest research on China’s AI innovation and military-civil fusion.
IN THE NEWS
- Axios: Helen Toner on the AI risk “you could not really talk about” (Ashley Gold quoted Helen Toner)
- Bulletin of the Atomic Scientists: How Trump’s new H-1B fee will hurt Silicon Valley and AI startups (Jeremy Hsu quoted Luke Koslosky)
- Inside AI Policy: CSET report says AI deals with Gulf kingdoms present need for governance policies (Charlie Mitchell cited the CSET Report U.S. AI Statecraft)
- NBC News: China is starting to talk about AI superintelligence, and some in the U.S. are taking notice (Jared Perlo quoted Helen Toner)
- Politico: Inside the Chinese AI threat to security (Phelim Kine quoted Helen Toner)
- Tech Brew: What California’s landmark AI law means for US tech regulation (Patrick Kulp quoted Jessica Ji)
- The Hill: Trump’s $100K H-1B visa fee rattles Silicon Valley (Julia Shapero quoted Luke Koslosky)
- The Washington Post: AI firm DeepSeek writes less-secure code for groups China disfavors (Joseph Menn quoted Helen Toner)
- Times Higher Education: ‘Onerous’ four-year visas ‘would deter PhD students from US’ (Patrick Jack quoted Jacob Feldgoise)
What We’re Reading
Report: Evaluating the Risks of Preventive Attack in the Race for Advanced AI, Zachary Burdette and Hiwot Demelash, RAND Corporation (September 2025)
Paper: Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples, Alexandra Souly, Javier Rando, Ed Chapman, Xander Davies, Burak Hasircioglu, Ezzeldin Shereen, Carlos Mougan, Vasilios Mavroudis, Erik Jones, Chris Hicks, Nicholas Carlini, Yarin Gal, and Robert Kirk (October 2025)
Report: State of AI Report 2025, Nathan Benaich, Air Street Capital (October 2025)