Government Updates
What a Second Trump Term Means for AI Policy: Donald Trump’s victory in this month’s presidential election has many in the AI world wondering: what will the next four years look like for AI policy? Trump’s first term as president coincided with the start of the current boom in AI research and development. In 2017 — his first year in office — Google researchers published “Attention Is All You Need,” the hugely influential research paper that introduced the transformer architecture underlying today’s most powerful generative AI systems. In 2020, near the end of his first term, OpenAI released GPT-3, the company’s first model to make serious mainstream waves (a later version of GPT-3 powered ChatGPT when it launched in 2022). The Trump administration didn’t ignore the burgeoning technology: In 2019, President Trump signed the country’s first executive order on AI, which laid out a strategy to promote U.S. leadership in AI development. Then in December 2020, he issued the first executive order on federal government AI use. Just as the intervening four years have seen an unprecedented surge in AI development and investment, there has been a corresponding boom in AI policy efforts from the White House under Joe Biden. Highlights include:
- In 2023, President Biden issued an executive order that, among other things, imposed reporting obligations on developers of the most powerful AI systems, set off a flurry of AI-related actions among federal agencies (see our tracker for more), and set up the National Artificial Intelligence Research Resource pilot program.
- The National Institute of Standards and Technology released its voluntary AI Risk Management Framework to help AI developers manage risk (see our writeup from last year) and the Office of Science and Technology Policy published its non-binding “Blueprint for an AI Bill of Rights” (see our writeup from 2022).
- Among a host of AI-related actions, the Pentagon: stood down its original AI-focused organization — the Joint Artificial Intelligence Center, originally established in 2018 under President Trump — and consolidated its data, AI, and digital efforts under a new Chief Digital and Artificial Intelligence Office; launched the “Replicator Initiative” to field thousands of low-cost, attritable, autonomous systems on an accelerated timeline (Replicator 2 was announced earlier this year); and updated its autonomous weapons policy.
- Late last month, the White House issued a National Security Memorandum on AI to guide the U.S. national security strategy toward AI. The memo lays out a comprehensive plan to maintain U.S. leadership in AI while ensuring its safe and responsible use across national security agencies (read reactions from CSET experts).
What a Second Trump Term Means for Semiconductors: Since taking office in 2021, President Biden’s administration has pursued a two-pronged semiconductor strategy: incentivizing domestic manufacturing through the CHIPS and Science Act and limiting China’s access to high-end chips with strict export controls. On export controls, it seems unlikely that President Trump will roll back the Biden administration’s restrictions — announced in October 2022 — that aimed to significantly cut off China from the semiconductors needed for high-end AI applications. Those controls were part of a trend that began under President Trump’s first administration to restrict China’s access to critical technology, and Trump’s picks for key positions in his new administration — including noted “China hawks” Marco Rubio and Michael Waltz — don’t indicate a change in tack. Subsidizing domestic semiconductor manufacturing could be a different story. Though the 2022 CHIPS and Science Act passed through Congress with significant bipartisan support, Trump criticized the law during an October interview with the podcaster Joe Rogan. Soon after, House Speaker Mike Johnson indicated that repealing the law could be on his agenda, but quickly walked back those comments. While most observers don’t expect the Trump administration to try to repeal the law, the White House is working to allocate and finalize the remainder of the $39 billion in CHIPS Act incentives before President Biden leaves office. Last week, the Commerce Department finalized a $6.6 billion award for TSMC’s new Arizona facilities; on Wednesday it finalized $1.5 billion for GlobalFoundries, and more awards are expected soon.
What a Second Trump Term Means for AI Development: One area where questions remain most pronounced is what Donald Trump’s victory means for AI investment and development. Last year, the United States led the way in private AI investment, accounting for $67 billion of the $95 billion invested worldwide. From one angle, it looks like a second Trump term could be a boon for the technology companies financing the AI boom: President Trump earned the support of some of the tech world’s most outspoken voices, including Tesla, SpaceX, and xAI CEO Elon Musk and prominent venture capitalists like Marc Andreessen and David Sacks. After the election, Andreessen — who has championed a “Techno-Optimist” agenda and pointed to concerns about “hostile” overregulation as a key driver of his political activities — said Trump’s victory “felt like a boot off the throat.” But it’s far from clear that Trump’s second term will be a bacchanal for AI developers. During the first Trump administration, the Justice Department and Federal Trade Commission launched antitrust cases against Google and Facebook, respectively, and Amazon alleged it lost out on a lucrative DOD cloud computing contract due to political differences between the president and Amazon founder and then-CEO Jeff Bezos. Vice President-elect JD Vance has expressed support for current FTC Chair and Big Tech critic Lina Khan, and Attorney General nominee Matt Gaetz has previously called to break up Big Tech companies. Time will tell which voices — the tech accelerationists or the tech skeptics — will win out in the second Trump administration.
Worth Knowing
AI Wall? Hints of Diminishing Returns, But Developers Don’t Seem Worried: A report from The Information has sparked intense debate in AI circles about whether the field is approaching another “winter.” According to the outlet, OpenAI’s next flagship model, codenamed Orion, has shown more modest improvements over GPT-4 than the company saw between previous generations. And according to Bloomberg, OpenAI rivals Google and Anthropic have faced similar issues. The AI world is primed to worry about an AI winter — the history of AI research and development has been characterized by boom periods and fallow periods — and the year and a half since GPT-4’s release has seen OpenAI’s competitors, including Google, Anthropic, and Meta, release models that generally matched or only slightly exceeded GPT-4’s performance. But many in the world of AI development have dismissed the rumors of a scaling wall, including OpenAI CEO Sam Altman, who posted on social media that “there is no wall,” and Anthropic CEO Dario Amodei, who seemed to dismiss the idea of a slowdown on a recent podcast. The truth could be that the scaling debate is growing less relevant — while companies may relatively soon hit a point at which ever-larger computing clusters become prohibitively expensive, new approaches to AI development like OpenAI’s o1 reasoning model could open up new pathways for AI research that aren’t solely reliant on brute-force compute.
- More: Scaling AI: Cost and Performance of AI at the Leading Edge | Recent Venture Deals Show AI Valuations May Be Cooling
Three of the biggest generative AI developers have begun working more closely with the U.S. military and intelligence agencies: Anthropic announced its Claude 3 and 3.5 family of language models will be available to the Pentagon and U.S. intelligence agencies through a partnership with Palantir and Amazon Web Services. Meta, meanwhile, gave the green light to U.S. government agencies and contractors — including those working on defense applications — to use its free, open-weight Llama models. And OpenAI, which had already worked with NASA, the IRS, and USAID, among other federal users, announced a limited ChatGPT Enterprise partnership with the Air Force Research Laboratory. The Pentagon and intelligence agencies have expressed interest in using generative AI tools since soon after OpenAI launched ChatGPT in 2022, and the DOD set up a task force last year to investigate the potential for generative AI integration across the military.
Chinese researchers linked to the People’s Liberation Army adapted Meta’s open-weight, 13-billion-parameter Llama model for military purposes, according to recent reports from Reuters and the Jamestown Foundation. The prospect that U.S. adversaries could repurpose open-weight or open-source models for nefarious purposes has been a significant concern for policymakers, and the latest reports seemed to confirm those fears — U.S. Rep. John Moolenaar issued a statement saying the news “[demanded] a critical reassessment of how we protect our technological edge from exploitation.” But, as CSET’s Kyle Miller argued on social media, the military usefulness of the Chinese-adapted Llama may be significantly overstated — “The Chinese researchers created a simple Q&A bot that could answer basic questions about the military,” Miller wrote. The model was fine-tuned on public information about the military and could answer relatively simple prompts about military issues, such as “describe the U.S. Army Research Laboratory,” but doesn’t appear radically different from other publicly available language models.
- More: A Chinese lab has released a ‘reasoning’ AI model to rival OpenAI’s o1 | Open Foundation Models: Implications of Contemporary Artificial Intelligence
Job Openings
We’re hiring! Please apply or share the role below with candidates in your network:
- Program Specialist: We are currently seeking a detail-oriented Program Specialist to join our growing operations team. This role will provide essential administrative, organizational, and project management support. The Program Specialist will coordinate various CSET programs, including internships, external fellowships, and student employment, while ensuring smooth logistical operations. Ideal candidates will bring strong organizational and project management skills, experience in program management, and the ability to work effectively in a collaborative, cross-matrix environment. If you are motivated, detail-oriented, and passionate about the intersection of technology and security, we encourage you to apply. Apply by Monday, November 25th
What’s New at CSET
REPORTS
- Cybersecurity Risks of AI-Generated Code by Jessica Ji, Jenny Jun, Maggie Wu, and Rebecca Gelles
- Acquiring AI Companies: Tracking U.S. AI Mergers and Acquisitions by Jack Corrigan, Ngor Luong, and Christian Schoeberl
- AI Safety and Automation Bias by Lauren Kahn, Emelia Probasco, and Ronnie Kinoshita
PUBLICATIONS
- CSET: The National Security Memorandum on Artificial Intelligence — CSET Experts React by Igor Mikolic-Torreira, Hanna Dohmen, Jacob Feldgoise, Sam Bresnick, Emelia Probasco, Kyle Miller, and Owen Daniels
- CSET: Data Snapshot: Funding the AI Cloud — Amazon, Alphabet, and Microsoft’s Cloud Computing Investments, Part 1: If You Build Cloud, They Will Come by Christian Schoeberl and Jack Corrigan
- CSET: Data Snapshot: Funding the AI Cloud — Amazon, Alphabet, and Microsoft’s Cloud Computing Investments, Part 2: OSS as Big Tech’s Rotisserie Chicken by Christian Schoeberl and Jack Corrigan
- CSET: Data Snapshot: Funding the AI Cloud — Amazon, Alphabet, and Microsoft’s Cloud Computing Investments, Part 3: Corporate Investments as Golden Handcuffs by Christian Schoeberl and Jack Corrigan
EMERGING TECHNOLOGY OBSERVATORY
- The Emerging Technology Observatory is now on Substack! Sign up for the latest updates and analysis.
- China soars in space, struggles in chips: editors’ picks from ETO Scout, volume 16 (9/14/24-10/22/24)
- Key trends in global cybersecurity research: growth, leaders, dark horses
- Hot topics in cybersecurity research: insights from the Map of Science
EVENT RECAPS
- Senate Judiciary Committee: On November 19, CSET Research Fellow Sam Bresnick testified before the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law for a hearing on “Big Hacks & Big Tech: China’s Cybersecurity Threat.” Read his testimony.
- CSET Webinar: On November 14, CSET’s Mia Hoffmann, Mina Narayanan, Cole McFaul, and Owen J. Daniels discussed global developments in AI governance over the past year. Watch a recording of the webinar.
IN THE NEWS
- 404 Media: AI-Powered Social Media Manipulation App Promises to ‘Shape Reality’ (Emanuel Maiberg quoted Josh A. Goldstein)
- Associated Press: Nvidia rivals focus on building a different kind of chip to power AI products (Matt O’Brien and Barbara Ortutay quoted Jacob Feldgoise)
- Atlantic Council: Apocalypse Later? (Dean Jackson and Meghan Conroy quoted Josh A. Goldstein)
- Bloomberg: Huawei Technologies’ Latest AI Chips Were Produced by TSMC (Mackenzie Hawkins cited Hanna Dohmen and Jacob Feldgoise’s Data Snapshot, Pushing the Limits: Huawei’s AI Chip Tests U.S. Export Controls)
- Bloomberg Law: AI Workers Seek Whistleblower Cover to Expose Emerging Threats (Kaustuv Basu quoted Helen Toner)
- Bulletin of Atomic Scientists: Trump’s potential impact on emerging and disruptive technologies (Sara Goudarzi quoted Owen J. Daniels)
- GZero Media: Can the US defense industry harness AI power while mitigating risks? (Scott Nover quoted Owen J. Daniels)
- GZero Media: US takes a close look at TSMC and Huawei (Scott Nover quoted Hanna Dohmen)
- Japan Times: New U.S. AI guidance puts pressure on allies — and rivals — to adopt tech (Gabriel Dominguez quoted Sam Bresnick)
- Modern War Institute: MWI Podcast: The Maven Smart System and the Future of Military AI (John Amble spoke to Emelia Probasco and Igor Mikolic-Torreira)
- Politico: This week’s cyber hearings are next year’s battles (Joseph Gedeon cited the Emerging Technology Observatory’s blog post, Key trends in global cybersecurity research: growth, leaders, dark horses)
- Radio Free Asia: China and the United States are competing head-on in artificial intelligence, and the United States hopes to control risks (Wang Yun quoted Zachary Arnold)
- Reuters: US Senate panel to hold hearing on suspected Chinese hacking incidents (David Shepardson covered Sam Bresnick’s testimony before the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law on “Big Hacks & Big Tech: China’s Cybersecurity Threat”)
- Reuters: Exclusive: Chinese researchers develop AI model for military use on back of Meta’s Llama (James Pomfret and Jessie Pang quoted Bill Hannas)
- Tech Target: AMD layoffs follow AI job trend (Antone Gonsalves cited Sara Abdulla’s Data Snapshot, Leading the Charge: A Look at the Top-Producing AI Programs in U.S. Colleges and Universities)
- The New York Times: TSMC Chips Ended Up in Devices Made by China’s Huawei Despite U.S. Controls (Meaghan Tobin, Ana Swanson, John Liu, and Amy Chang Chien quoted Jacob Feldgoise and cited his Data Snapshot with Hanna Dohmen, Pushing the Limits: Huawei’s AI Chip Tests U.S. Export Controls)
- The Washington Post: Treasury finalizes prohibitions on U.S. investment in Chinese tech (David J. Lynch cited the CSET report U.S. Outbound Investment into Chinese AI Companies)
- TIME: AI’s Underwhelming Impact on the 2024 Elections (Andrew R. Chow quoted Mia Hoffmann)
- TIME: Silicon Valley Takes Artificial General Intelligence Seriously—Washington Must Too (Daniel Colson quoted Helen Toner)
What We’re Reading
Paper: Artificial Intelligence, Scientific Discovery, and Product Innovation, Aidan Toner-Rodgers (November 2024)
Report: 2024 Annual Report to Congress, U.S.-China Economic and Security Review Commission (November 2024)
Paper: Larger and more instructable language models become less reliable, Lexin Zhou, Wout Schellaert, Fernando Martínez-Plumed, Yael Moros-Daval, Cèsar Ferri, and José Hernández-Orallo, Nature (September 2024)