Worth Knowing
Wall Street Recalibrates Its AI Expectations: Some investors are starting to temper their initially bullish outlook on artificial intelligence. The recalibration, which sent ripples through the stock market in recent weeks, has been driven by an apparent mismatch between the costs and benefits of AI systems. Tech giants like Microsoft, Alphabet, and Amazon are pouring massive sums into AI startups, infrastructure, and R&D, but some industry analysts are beginning to question whether these substantial investments will yield commensurate rewards. “Tech giants and beyond are set to spend over [$1 trillion] on AI capex in coming years, with so far little to show for it … will this large spend ever pay off?” Goldman Sachs wrote in a recent report. Some prominent economists like Daron Acemoglu have long questioned whether AI systems will deliver the broad-based productivity gains promised by the technology’s boosters, and the economic data from the last two years has done little to refute their claims. Even prominent labs like OpenAI are struggling to turn a profit amid soaring labor, infrastructure, and energy costs.
While generative AI systems have demonstrated payoffs in narrow areas such as code development and customer service, analysts say, companies have generally been slow to adopt the tech, and ongoing challenges like algorithmic bias and hallucinations have cast doubt on how well AI tools can scale. The growing skepticism about AI has contributed to a general downturn in tech stocks. Between July 10 and August 5, Google, Microsoft, and Amazon saw their share prices plunge more than 10%, and chipmakers like Nvidia and Intel lost more than a quarter of their market value.
Amid this volatility, large tech companies have sought to allay investors’ concerns and carry on as planned with their long-term AI ambitions. In quarterly earnings calls, Google announced it would pour more than $24 billion into AI infrastructure in the second half of 2024, and both Microsoft and Meta alerted investors to expect similarly substantial investments in the months ahead.
- More: Silicon Valley Wins Few Government Contracts | McKinsey Technology Trends Outlook 2024 | Generative AI is Still a Solution in Search of a Problem
- On July 18, OpenAI released GPT-4o mini, a lightweight multimodal model that is significantly cheaper to run than its more powerful counterpart, GPT-4o. OpenAI said the model excels in mathematical reasoning and coding, and comes equipped with safety measures that protect against potential misuse. The release came less than a week after The Washington Post reported that OpenAI illegally restricted employees from voicing concerns about the potential safety risks of the company’s products to regulators. The alleged practice came to light after whistleblowers filed a complaint with the Securities and Exchange Commission.
- On July 23, Meta released its largest and most powerful language model to date. Billed as “the first frontier-level open source AI model,” Llama 3.1 reportedly performed at around the same level as OpenAI’s GPT-4 and Anthropic’s Claude 3.5 on a variety of benchmarks. Performance aside, Meta also claims the model can be run at about half the cost of OpenAI’s GPT-4o.
- Not to be outdone, Google DeepMind released an experimental version of its powerful Gemini 1.5 Pro model, which some in the AI community have heralded as the most powerful model on the market. The model, available through the Gemini API and Google AI Studio, has surged to the top of the LMSYS leaderboards, a community-run ranking of model performance. Google DeepMind also released a pair of mathematical reasoning models—AlphaProof and AlphaGeometry 2—that performed at the same level as a silver medalist in the 2024 International Mathematical Olympiad. Google DeepMind leveraged a combination of the Gemini large language model and its AlphaZero algorithm to achieve this feat.
- More: AI companies promised to self-regulate one year ago. What’s changed? | Who Will Control the Future of AI? | Exclusive: Anthropic wants to pay hackers to find model flaws
Government Updates
Antitrust Enforcers Log A Win Against Big Tech: On August 5, a federal judge determined that Google had illegally abused its market power to shut out rivals from the market for search engines. “Google is a monopolist, and it has acted as one to maintain its monopoly,” Judge Amit Mehta wrote in a 277-page ruling. The landmark decision concluded the first antitrust case against a major technology company in more than 20 years. The case hinged largely on the vast sums that Google paid companies like Apple and Samsung to make its search engine the default on their smartphones and web browsers. According to Judge Mehta, the agreements offered Google “access to scale that its rivals cannot match” and left other search engines “at a persistent competitive disadvantage.” Legal scholars say the decision could make judges more receptive to the government’s arguments in other ongoing antitrust cases against tech giants like Apple, Amazon, and Meta. The remedy for Google will be decided in a separate case scheduled for September. Some experts have questioned whether any remedy could change the market for internet search all that much, and others have suggested generative AI firms like OpenAI actually present a bigger risk to Google’s search business than federal regulators.
The ruling could open the door to future antitrust enforcement in the AI industry. In a joint statement released on July 23, competition authorities in the U.S. and Europe committed to proactively policing the AI industry and addressing risks to competition “before they become entrenched or irreversible harms.” Already, the U.S. Federal Trade Commission has opened a probe into partnerships between major AI labs and cloud providers, and the U.S. Department of Justice is investigating Nvidia for potentially abusing its dominance in the chips market. Prior CSET research has explored the ways breaking up tech companies like Google could impact the Defense Department’s access to AI systems.
NTIA Weighs in on Open Weights: On July 30, the National Telecommunications and Information Administration (NTIA) published a report recommending the Biden administration support open models as a way to democratize the AI ecosystem. “‘Open-weight’ models allow developers to build upon and adapt previous work, broadening AI tools’ availability to small companies, researchers, nonprofits, and individuals,” agency officials wrote. The report marks an important development in the debate over the potential costs and benefits of open AI models. Proponents of open weights emphasize the ways that transparency could galvanize innovation and promote a more community-oriented approach to AI safety, but critics warn that open models allow adversaries to more easily adopt AI capabilities and potentially use the technology for nefarious purposes. While the report comes out in favor of open models, it also acknowledges the risks such technologies pose and urges policymakers to monitor the AI ecosystem and implement restrictions when appropriate. Among other things, the report recommends that the government research the safety and downstream uses of powerful open models and develop risk-related benchmarks to inform possible future policy changes. The report cited work from CSET researchers Thomas Woodside, Helen Toner, and Kyle Miller.
The NTIA report stands in stark contrast to other recent AI governance measures, which seek to tighten regulators’ control over the development and distribution of powerful models. For example, California’s SB 1047, which currently awaits a final vote in the state assembly, would hold developers liable for certain downstream uses of their models if those models exceed a certain size, a rule that would likely limit the distribution of certain open models. Many AI developers have spoken out against the California bill, claiming its provisions would stifle innovation and burden developers with costly compliance measures.
Job Openings
We’re hiring! Please apply or share the roles below with candidates in your network:
- Survey Researcher: We are currently seeking candidates for a Survey Researcher position. In this role, you will lead survey design and execution in collaboration with CSET analysts (Senior Fellows, Research Fellows, Research Analysts) across CSET’s lines of research. You will sit on CSET’s data team and will work closely with data scientists, software engineers, a translation lead, and data research analysts to produce, collect, and analyze original survey data. This is a hybrid role that combines research methods and data analysis skills. Strong applicants will have experience with survey design, online survey distribution platforms (e.g., Qualtrics), project management, and qualitative and quantitative data analysis. Apply by September 9, 2024.
In Translation
CSET’s translations of significant foreign language documents on AI
Disruptive Innovation: Interim Measures of Shanghai Municipality for the Administration of Disruptive Technology Innovation Projects. This document details how Shanghai Municipality oversees state-funded “disruptive technology” projects. The regulation provides detailed instructions on how to deal with projects that are behind schedule, failing, or fraudulent from the get-go, suggesting that these are common problems for Chinese government-financed technology projects.
Xi Touts Chinese Science: Xi Jinping: Speech at the Nationwide S&T Conference, National Science and Technology Awards Conference, and the Conference of Academicians of CAS and CAE (June 24, 2024). This document includes the text of Chinese President Xi Jinping’s speech at a major Chinese S&T conference in June 2024. In his remarks, Xi recaps many of China’s scientific triumphs of 2023 but also warns that the nation still has weaknesses in areas such as original innovation and talent.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
What’s New at CSET
REPORTS
- Building the Tech Coalition: How Project Maven and the U.S. 18th Airborne Corps Operationalized Software and Artificial Intelligence for the Department of Defense by Emelia Probasco
- Tracing the Emergence of Extreme Ultraviolet Lithography: Lessons for Identifying, Protecting, and Promoting the Next Emerging Technology by John VerWey
- Governing AI with Existing Authorities: A Case Study in Commercial Aviation by Jack Corrigan, Owen Daniels, Lauren Kahn, and Danny Hague
- Eyes Wide Shut: Harnessing the Remote Sensing and Data Analysis Industries to Enhance National Security by Michael O’Connor and Kathleen Curlee
- Enabling Principles for AI Governance by Owen Daniels and Dewey Murdick
PUBLICATIONS
- The Diplomat: The US Chips Act, 2 Years Later by Jacob Feldgoise
- Foreign Affairs: The Limits of the China Chip Ban by Hanna Dohmen, Jacob Feldgoise, and Charles Kupchan
- Foreign Policy: Into the Minds of China’s Military AI Experts by Sam Bresnick
- CSET: Comment on Commerce Department RFI 89 FR 27411: AI and Open Government Data Assets Request for Information by Catherine Aiken, James Dunham, Jacob Feldgoise, Rebecca Gelles, Ronnie Kinoshita, Mina Narayanan, and Christian Schoeberl
- Fortune: The death of the Chevron doctrine complicates U.S. policymakers’ efforts to regulate AI—but there’s another way by Dewey Murdick and Owen Daniels
- DefenseScoop: Different makes us stronger: American diversity is a national security asset by Jaret Riddick
EMERGING TECHNOLOGY OBSERVATORY
- The Emerging Technology Observatory is now on Substack! Sign up for the latest updates and analysis.
- Editors’ picks from ETO Scout: volume 13 (6/7/24-7/18/24)
- Hot research topics in AI and biology: insights from the Map of Science
UPCOMING EVENT
- Join CSET on August 29 for Building the Tech Coalition, a half-day conference featuring distinguished speakers discussing the military’s application of AI and software. This event is sponsored by Domino Data Lab.
IN THE NEWS
- Bloomberg: DC Welcomes Ex-OpenAI Board Member After Sam Altman Drama (Shirin Ghaffary quoted Helen Toner)
- Nature: These AI firms publish the world’s most highly cited work (Elizabeth Gibney quoted CSET ETO and Ngor Luong)
- CyberScoop: Dewey Murdick on enabling principles for AI governance; a landmark breach at AT&T (Elias Groll interviewed Dewey Murdick on the Safe Mode podcast)
- The Diplomat: China’s Bid to Lead the World in AI (Mercy A. Kuo quoted Huey-Meei Chang and William Hannas)
- Roll Call: Treasury seeks tighter rules on US investment in China tech (Gopal Ratnam cited the CSET report U.S. Outbound Investment into Chinese AI Companies)
- The Atlantic: Hot AI Jesus Is Huge on Facebook (Caroline Mimbs Nyce quoted Josh A. Goldstein)
What We’re Reading
Report: The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed, James Ryseff, Brandon De Bruhl, and Sydne J. Newberry, RAND (August 2024)
Article: Unpacking New NIST Guidance on Artificial Intelligence, Gabby Miller, Tech Policy Press (August 2024)
Article: The AI Safety Debate Is All Wrong, Daron Acemoglu, Project Syndicate (August 2024)