Worth Knowing
OpenAI and Google Make a Big Push for Broader Consumer AI Adoption: This week, OpenAI and Google debuted a number of new products and features:
- On Monday, OpenAI announced a new flagship model — dubbed “GPT-4o” — that, while only slightly more sophisticated than the previous flagship, GPT-4 Turbo, is significantly faster and cheaper to use than its predecessor. GPT-4o’s speed and native multimodality — it can take text, audio, photos, and videos as input with fewer steps than earlier models — allow for more seamless “conversations” with the model. Much of the announcement event focused on demonstrations of this “Voice Mode” (which drew inevitable comparisons to the 2013 movie “Her”). While access to OpenAI’s most advanced models has largely been paywalled to date, the company said GPT-4o would roll out to free users in the coming weeks. It is already available to paying users through ChatGPT Plus and the OpenAI API.
- On Tuesday, Google debuted a host of new AI-powered products and features at its yearly Google I/O conference. Many of these new tools seem positioned to compete with OpenAI’s products: its Gemini 1.5 Flash language model is a low-latency model similar (at least in terms of speed) to GPT-4o; its Imagen 3 text-to-image generator is in the same ballpark as OpenAI’s DALL-E 3; and its new text-to-video generator — “Veo” — comes a few months after OpenAI announced its own video generator, Sora (though neither company has said when its tool will be available to the general public). Notably, Google extended its advantage in massive LLM context windows (the amount of input data a model can process at once): while Gemini could already handle an industry-leading 1 million tokens, Gemini 1.5 Pro will have a 2 million token context window.
- More: AlphaFold 3 predicts the structure and interactions of all of life’s molecules | OpenAI chief scientist Ilya Sutskever is officially leaving
- More: ETO Scout: First in China! Alibaba Cloud fully supports Llama 3 | Open Foundation Models: Implications of Contemporary Artificial Intelligence
- 2023 was big for foundation models, both in terms of the number released (149 — more than double the previous year) and the cost to train them; the authors estimate that the compute used to train GPT-4 and Gemini Ultra cost $78 million and $191 million, respectively.
- Open-source AI research boomed in 2023, with AI-related GitHub projects increasing by nearly 60% compared to 2022.
- China maintained its significant lead in AI patents, accounting for more than 60% of the world’s AI patents in 2022 (the most recent year with data). The United States was a distant second, with 21% of the total.
- Despite a massive increase in generative AI investment — nearly nine times higher in 2023 than in 2022 — overall private investment in AI continued to fall from its 2021 peak, dipping under $100 billion for the first time since 2020.
- Academic brain drain continues: the share of newly minted AI PhDs choosing industry over academia rose by 5.3% last year, to more than 70%.
- Gender diversity in AI postsecondary education remains an issue. While there was a slight uptick in the percentage of female bachelor’s graduates in 2022 (the most recent year with data), graduate and PhD levels saw a minor downturn. In both cases, the rates have remained largely steady since 2010.
- More: ETO: Red-Hot Topics in AI Research: Insights from the Map of Science
Government Updates
Schumer-Led AI Working Group Releases Roadmap: Yesterday, the bipartisan Senate AI Working Group, led by Majority Leader Chuck Schumer, released its long-awaited roadmap for U.S. AI regulation. The final product attempts to translate lessons learned from last fall’s nine “AI Insight Forums” and other related efforts into a clear agenda for legislative action. It recommends that Congress move to, among other things:
- Build towards providing at least $32 billion annually for (non-defense) AI innovation, in line with levels proposed by the National Security Commission on Artificial Intelligence;
- Consider legislation to increase high-skilled STEM immigration;
- Authorize programs such as the National AI Research Resource (NAIRR) and fully fund previously authorized efforts, such as the CHIPS and Science Act; and
- Support the development of standards for the use of AI in critical infrastructure and consider where high-impact uses of AI should be limited or banned entirely.
USAF and DARPA Held Real-World AI Dogfighting Test: The Air Force and the Defense Advanced Research Projects Agency pitted an autonomously piloted plane against a human pilot in live dogfight exercises last year, military officials confirmed. While several countries — including the United States and China — have conducted virtual AI-vs-human testing over the last decade, the September 2023 test at Edwards Air Force Base is the first confirmed real-world exercise. The test used an X-62A VISTA — an F-16 modified to fly autonomously as part of DARPA’s Air Combat Evolution (ACE) program — flying against a human-piloted F-16. Maneuvers eventually ramped up to nose-to-nose engagements, during which the planes flew as close as 2,000 feet at 1,200 miles per hour. Officials said the test was about demonstrating “that we can safely test these AI agents in a safety critical air combat environment,” rather than seeing which side would come out on top, and the “winner” of the tests wasn’t disclosed. But if previous tests are any indication, the AI-powered planes are more than capable of holding their own; back in 2020, an autonomous fighter beat a human pilot 5-0 in a virtual DARPA-run test. The tests are an important step toward the Air Force’s goal of fielding 1,000 collaborative combat aircraft — autonomous aircraft flown side-by-side with human-piloted aircraft — by the end of the decade.
U.S. and PRC Representatives Hold AI Risk Talks in Geneva: On Tuesday, representatives from the United States and China met in Geneva for high-level talks on AI risk and safety. The talks were arranged after last November’s meeting between Presidents Biden and Xi in San Francisco. The U.S. delegation for the Geneva talks was led by White House official (and former CSETer) Tarun Chhabra and the State Department’s Seth Center. Their counterparts were officials from China’s Foreign Ministry and the National Development and Reform Commission. According to a PRC Foreign Ministry readout, the talks retread some old ground: both sides agreed that AI presents opportunities and risks, China reiterated that it would like the United Nations to take the lead in setting up a global framework for AI governance, and the PRC representatives “expressed a stern stance on the U.S. restrictions and pressure” in the AI space. But observers said that breaking new ground wasn’t the aim of the initial “get-to-know-you” meeting; as CSET’s Sam Bresnick told the Associated Press, the meetings are an important opportunity for the United States to “get a better sense of China’s approach to defining and mitigating AI risks” and could help to build trust between the two superpowers in the long run.
Commerce Is Developing Export Controls on AI Models: The Commerce Department is in the process of developing new rules that would control the export of advanced AI models, according to a Reuters report. In recent years, the Biden administration has imposed controls on the powerful chips used to train AI systems, but exports of the models themselves are still entirely legal. It’s unlikely the discussed restrictions would control any current models — according to the Reuters report, the Commerce Department is considering using the computing power threshold established by President Biden’s October 2023 executive order: 10^26 floating-point operations. As of today, no AI systems are thought to have hit that mark, but it’s likely that the next generation of models will surpass it. But a source at the Commerce Department said that the threshold could ultimately be lower, or coupled with other factors such as a model’s use cases or underlying training data. While Reuters says the Commerce Department is not yet close to finalizing a rule, some supporters in Congress are laying the necessary groundwork for such a rule when it does come. On May 8, a bipartisan group of House representatives introduced the Enhancing National Frameworks for Overseas Critical Exports Act, which would give the Commerce Department clear authority to control the export of AI systems or national security-related technologies. According to Reuters, the newly proposed bill was developed with input from the Biden administration.
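To give a rough sense of where the executive order’s 10^26 FLOP threshold sits, the sketch below applies the widely used 6·N·D rule of thumb (training compute ≈ 6 × parameters × training tokens). The model sizes and token counts are illustrative assumptions, not figures disclosed by any company or by the Commerce Department.

```python
# Back-of-envelope check against the executive order's 1e26 FLOP threshold.
# Uses the common approximation: training FLOPs ~ 6 x parameters x tokens.
# The parameter and token counts below are hypothetical, for illustration only.

THRESHOLD = 1e26  # floating-point operations, per the October 2023 EO


def training_flops(params: float, tokens: float) -> float:
    """Estimate total training compute with the 6ND rule of thumb."""
    return 6 * params * tokens


# A hypothetical 1.8-trillion-parameter model trained on 13 trillion tokens
# would land just above the threshold:
est = training_flops(1.8e12, 13e12)
print(f"{est:.2e} FLOPs -> {'over' if est > THRESHOLD else 'under'} threshold")

# A hypothetical 7-billion-parameter model on 2 trillion tokens sits far below it:
small = training_flops(7e9, 2e12)
print(f"{small:.2e} FLOPs -> {'over' if small > THRESHOLD else 'under'} threshold")
```

Under this approximation, today’s frontier models sit near (but below) the line, which is consistent with the report’s observation that the next generation of models is likely to cross it.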
In Translation
CSET’s translations of significant foreign language documents on AI
PRC Draft AI Law: Artificial Intelligence Law of the People’s Republic of China (Draft for Suggestions from Scholars). This document is a preliminary draft of China’s proposed AI Law that has circulated among legal scholars. The draft law specifies various scenarios in which AI developers, providers, or users are liable for misuse of AI tools. It also allows for the use of copyrighted material for model training in most cases and provides intellectual property protections for content created with the assistance of AI technology.
Computing Power White Paper: White Paper on China’s Computing Power Development Index (2022). This annual white paper by a Chinese state-run think tank analyzes the computing power landscape, in China and globally, as of late 2022. The report begins by summarizing global developments in compute over the past year and then provides statistics on China’s compute industry. The authors find that, despite disruptions caused by the COVID-19 pandemic, China’s computing power growth outpaced the world average in 2022.
PRC Industrial Policy Paper: Implementation Opinions on the Innovative Development of Pilot Testing in the Manufacturing Industry. This Chinese industrial policy directive aims to speed up the process of getting manufacturing prototypes into commercial production. One of the goals of this directive is to establish five homegrown, world-class pilot testing platforms in China by 2025. It prioritizes the development of more precise metrology technology as one means to this end.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
Job Openings
We’re hiring! Please apply or share the role below with candidates in your network:
- Strategic Engagement Specialist: The Center for Security and Emerging Technology at Georgetown University (CSET) is seeking applications for a Strategic Engagement Specialist to advance CSET’s development goals through cultivating relationships with strategic stakeholders, preparing funding requests, and providing critical administrative support. Apply by Monday, June 10.
What’s New at CSET
REPORTS
- Gao Huajian and the China Talent Returnee Question by William Hannas, Huey-Meei Chang, and Daniel Chou
- China, Biotechnology, and BGI by Anna Puglisi and Chryssa Rask
- China and Medical AI by Caroline Schuerger, Vikram Venkatram, and Katherine Quinn
- Putting Teeth into AI Risk Management by Matthew Schoemaker
PUBLICATIONS
- CSET: Emergent Abilities in Large Language Models: An Explainer by Thomas Woodside
- CSET: The NAIRR Pilot: Estimating Compute by Kyle Miller and Rebecca Gelles
- CSET: Measuring Success in Place-Based Innovation by Hayes Meredith and Jaret C. Riddick
- CSET: Comment on BIS Request for Information by Jacob Feldgoise and Hanna Dohmen
- Breaking Defense: In a Taiwan conflict, tough choices could come for Big Tech by Sam Bresnick and Emelia Probasco
- War on the Rocks: How Will AI Change Cyber Operations? by Jenny Jun
- The Wire China: U.S. Big Tech in China: Too Big to Bail by Ngor Luong, Sam Bresnick, and Kathleen Curlee
EMERGING TECHNOLOGY OBSERVATORY
- The Emerging Technology Observatory is now on Substack! Sign up for the latest updates and analysis.
- Editors’ picks from ETO Scout: volume 10 (3/14/24-4/11/24)
- The state of global AI research
- Red-hot topics in AI research: insights from the Map of Science
- Editors’ picks from ETO Scout: volume 11 (4/12/24-5/9/24)
EVENT RECAPS
- In April, CSET’s Director of Strategy and Foundational Research Grants Helen Toner delivered a talk at TED2024 on the importance of developing smart AI policy, even in the face of uncertainty. Watch her full talk.
- On April 29, CSET hosted DHS Assistant Secretary Iranga Kahangama and AI Policy Specialist Noah Ringler for a discussion, moderated by CSET’s Jessica Ji, on the department’s AI efforts. Watch a full recording of the event.
IN THE NEWS
- Associated Press: China and US envoys will hold the first top-level dialogue on artificial intelligence in Geneva (Jamey Keaten quoted Sam Bresnick)
- Associated Press: US-China competition to field military drone swarms could fuel global arms race (Frank Bajak quoted Margarita Konaev and cited the CSET report U.S. and Chinese Military AI Purchases)
- Axios: Don’t fear AI-driven “biosurveillance,” experts say (Ryan Heath quoted Steph Batalis)
- Axios: Exclusive: Inside the AI research boom (Allison Snyder cited work from CSET’s Emerging Technology Observatory)
- Axios: AI optimists crowd out doubters at TED (Ina Fried covered Helen Toner’s TED Talk)
- Bloomberg: Former OpenAI Board Member Calls for Audits of Top AI Companies (Shirin Ghaffary quoted Helen Toner’s TED Talk)
- Business Insider: The US is betting big on AI chips, but there’s a giant flaw in the plan (Lakshmi Varanasi cited the CSET report AI Chips: What They Are and Why They Matter)
- NPR: AI-generated spam is starting to fill social media. Here’s why (Shannon Bond quoted Josh A. Goldstein)
- Stanford HAI: The Disinformation Machine: How Susceptible Are We to AI Propaganda? (Dylan Walsh cited Josh A. Goldstein’s recent paper, How persuasive is AI-generated propaganda?)
- The Conversation: From shrimp Jesus to fake self-portraits, AI-generated images have become the latest form of social media spam (The Conversation cited Goldstein’s paper, How Spammers and Scammers Leverage AI-Generated Images on Facebook for Audience Growth)
- The Independent: Alarm raised over bizarre images circulating on Facebook (Andrew Griffin cited Goldstein’s How Spammers and Scammers Leverage AI-Generated Images on Facebook for Audience Growth)
- WIRED: China Has a Controversial Plan for Brain-Computer Interfaces (Emily Mullin quoted William Hannas and cited the CSET report Bibliometric Analysis of China’s Non-Therapeutic Brain-Computer Interface Research)
What We’re Reading And Listening To
Report: Supercharging Research: Harnessing Artificial Intelligence to Meet Global Challenges, President’s Council of Advisors on Science and Technology (April 2024)
Podcast: The Road to Accountable AI, Kevin Werbach
Article: In Arizona, election workers trained with deepfakes to prepare for 2024, Sarah Ellison and Yvonne Wingett Sanchez, the Washington Post (May 2024)