Worth Knowing
Google and Microsoft Square Off in an AI-Powered Search Engine Fight: This week, Google and Microsoft announced plans to roll out new AI tools that could dramatically reshape both companies and the search engine business as a whole. On Monday, Google CEO Sundar Pichai unveiled Bard — a “conversational AI service” that, like OpenAI’s ChatGPT, appears capable of responding to human queries and synthesizing information. Unlike ChatGPT, Bard — which is based on a “lightweight” version of Google’s LaMDA language model — will reportedly be connected to the internet, allowing it to draw on new and up-to-date information. The day after Pichai’s announcement, Microsoft unveiled its own AI-powered changes to its Bing search engine and Edge web browser. Microsoft reached a multibillion-dollar investment deal with OpenAI in January, and according to the company, a “new, next-generation” language model developed by the research lab is under the hood of the “new Bing” (unconfirmed rumors say that model is the long-awaited GPT-4). For now, Bard is not yet available to the public, and users have to jump through some hoops to use the new Bing. But it seems likely that, for better or worse, language model-powered tools will become a major part of the search engine business.
- More: The Race to Build a ChatGPT-Powered Search Engine | Google invests $300mn in artificial intelligence start-up Anthropic | Alphabet shares fall 7% following Google’s A.I. event | Google’s AI chatbot Bard makes factual error in first demo
- In a paper released in January, researchers from DeepMind introduced a reinforcement learning-trained agent that can adapt to new problems, learn from first-person demonstrations, and show signs of using hypothesis-driven trial-and-error approaches to unsolved problems. As the paper explains, the RL agent — which was trained on a “vast, smooth, and diverse task space” — adapted to previously unseen situations on a timescale comparable to that of human players. Furthermore, the paper’s finding that performance scales with the size of the agent, its memory, and its training data could indicate a path for future research. As CSET Non-Resident Research Fellow Jack Clark noted in his newsletter, similar findings on scaling and performance contributed to increased interest in (and funding for) language models.
- In another paper published last week, DeepMind researchers proposed a more efficient way to sample from large language models. The researchers used a smaller, faster “draft” model (in this case, a 4 billion parameter one) to generate several candidate tokens, then used a larger, more sophisticated “target” model (the 70 billion parameter Chinchilla) to score and accept or reject those tokens, rather than generate every token from scratch. According to the paper, this method of “speculative sampling” produced a 2–2.5 times decoding speedup compared to the target model working on its own. Because the proposed method does not require changes to the target model, it should theoretically be relatively easy to implement with models like those powering ChatGPT, Bard, and other popular LLMs (a minimal sketch of the idea appears after this list).
- More: Scaling laws for single-agent reinforcement learning | AI and Compute: How Much Longer Can Computing Power Drive Artificial Intelligence Progress?
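For readers curious about the mechanics, here is a minimal, self-contained sketch of the speculative sampling loop described in the paper. It substitutes toy next-token distributions for the real draft and target models, and all names (`draft_model`, `target_model`, `speculative_sample`, `VOCAB`) are illustrative rather than drawn from any actual library:

```python
import numpy as np

VOCAB = 50  # toy vocabulary size

def _toy_dist(prefix, seed):
    # Deterministic toy next-token distribution over the vocabulary.
    # A real implementation would run a model forward pass on `prefix`.
    rng = np.random.default_rng(hash((tuple(prefix), seed)) % 2**32)
    logits = rng.standard_normal(VOCAB)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def draft_model(prefix):   # stands in for the small, fast model q(x | prefix)
    return _toy_dist(prefix, seed=1)

def target_model(prefix):  # stands in for the large model p(x | prefix)
    return _toy_dist(prefix, seed=2)

def speculative_sample(prefix, k=4, rng=None):
    """One round of speculative sampling: draft k tokens with the small
    model, then accept or reject each against the large model's output."""
    rng = rng or np.random.default_rng(0)

    # 1. Draft k tokens autoregressively with the cheap model.
    ctx, drafted, q_dists = list(prefix), [], []
    for _ in range(k):
        q = draft_model(ctx)
        token = int(rng.choice(VOCAB, p=q))
        drafted.append(token)
        q_dists.append(q)
        ctx.append(token)

    # 2. Score all k+1 positions with the target model. In a real system
    # this is a single parallel forward pass -- the source of the speedup.
    p_dists = [target_model(list(prefix) + drafted[:i]) for i in range(k + 1)]

    # 3. Accept each drafted token with probability min(1, p/q); on the
    # first rejection, resample from the residual max(0, p - q) and stop.
    out = list(prefix)
    for i, token in enumerate(drafted):
        p, q = p_dists[i], q_dists[i]
        if rng.random() < min(1.0, p[token] / q[token]):
            out.append(token)
        else:
            residual = np.maximum(p - q, 0)
            out.append(int(rng.choice(VOCAB, p=residual / residual.sum())))
            return out

    # 4. If every draft token was accepted, sample one bonus token from
    # the target model's distribution at the final position.
    out.append(int(rng.choice(VOCAB, p=p_dists[k])))
    return out

print(speculative_sample(prefix=[7, 3], k=4))
```

The key design point is step 2: the target model scores every drafted position at once, so each expensive forward pass can yield several accepted tokens — which is where the reported 2–2.5x speedup comes from — while the accept/reject rule ensures the output is distributed as if the target model had sampled on its own.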
Government Updates
Japan and the Netherlands to Impose New Export Restrictions on China: Japan and the Netherlands reportedly plan to join the United States in restricting exports of key semiconductor manufacturing equipment (SME) to China. The news comes after months of lobbying by the United States — which announced its own sweeping package of export controls last October — to get the two major SME-producing nations on board. The U.S. measures aimed to significantly hamper China’s ability to produce advanced node semiconductors and build up its advanced computing capabilities, projects that are (for now) heavily reliant on foreign imports. But as CSET Senior Fellow Kevin Wolf noted at the time, the U.S. restrictions would have faced an uphill battle as long as they remained unilateral. Details about the new agreement remain murky, and reports indicate that an official announcement is unlikely. Observers expect that it could be months, or even years, before Tokyo and The Hague can enact the necessary legal and regulatory changes to bring the agreement into full effect. But the development nevertheless appears to be a significant victory for the Biden administration in its campaign to build up “as large of a lead as possible” over the Chinese semiconductor industry.
Expert Take: “While it is encouraging to see the Netherlands and Japan agreeing to join the United States on these controls, U.S. officials should not make a habit of using unilateral and extraterritorial controls to get allies on board with our policies. Continuing to do so will only sour our relationship with countries that we need on our side to effectively compete with China. This is also the first time that this approach has worked. No U.S. allies have gone along with Huawei controls or adopted any of our Entity List controls. Although from the outside we may not have seen the same levels of pressure applied by the Biden administration to Japan and the Netherlands in this scenario, there has long been an understanding that multilateral controls are generally more effective long-term.” — Emily S. Weinstein, Research Fellow
NIST Releases Its AI Risk Management Framework: The National Institute of Standards and Technology released the first official version of its AI Risk Management Framework and a companion AI RMF Playbook, both of which are meant to help developers anticipate and manage the risks unique to AI systems. Created at the direction of Congress as part of the FY2021 NDAA, the voluntary framework outlines key characteristics of trustworthy AI systems and describes four core functions (govern, map, measure, and manage) that NIST says organizations can employ to address AI risk. The accompanying playbook includes suggested actions, supplementary reading, and other resources for organizations that want to implement these four functions. NIST’s AI RMF comes at an important moment — the appeal of generative AI tools has both spurred the development of new AI systems and increased concerns about their potential for misuse. But it remains to be seen whether voluntary frameworks such as the AI RMF and the Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights” will be enough to mitigate potential risks. NIST appears determined to keep an eye on the impact of the AI RMF and adapt accordingly — a note at the beginning of the document says the agency plans to review the “content and usefulness” of the RMF regularly and update it when warranted, potentially including industry-specific recommendations (the playbook, meanwhile, will be updated frequently).
The United States and India Team Up On Critical and Emerging Tech: Last week, officials from the United States and India held the inaugural meeting of the U.S.-India initiative on Critical and Emerging Technology (iCET). The initiative was announced last year by President Biden and Indian Prime Minister Narendra Modi; last week’s meetings — led by U.S. National Security Advisor Jake Sullivan and his Indian counterpart, Ajit Doval — offered more details on the effort, which is meant to “elevate and expand” technological and defense industrial cooperation between the two countries. In comments to the press, Sullivan acknowledged the initiative’s relationship to the United States’ broader strategic competition with China, calling it a “big foundational piece of an overall strategy to put the entire democratic world in the Indo-Pacific in a position of strength.” Among other things, the two governments announced a partnership between the U.S. National Science Foundation and Indian science agencies meant to increase scientific collaboration (including on AI research), launched an “Innovation Bridge” to connect defense startup companies in the United States and India, and discussed plans to build up India’s semiconductor industry. The next iCET meeting will take place in New Delhi later this year.
The Pentagon’s AI Office Is Tasked With “Reinvigorating” DOD Experiments: Earlier this month, the Pentagon concluded the fifth in a series of Global Information Dominance Experiments (GIDE V), joint exercises meant to test the military’s data sharing and integration, evaluate its use of AI systems, and provide insight into the implementation of Joint All-Domain Command and Control, the DOD’s plan to connect and coordinate its services’ sensors in a single network. The previous four iterations of GIDE had been run by NORAD and Northern Command, but GIDE V was led by the Pentagon’s Chief Digital and Artificial Intelligence Office in partnership with the Joint Chiefs of Staff. In a press release, the DOD said that responsibility had been shifted to the CDAO in the hopes of “reintroducing and reinvigorating” the experiments. The Pentagon plans to run three more GIDE exercises over the course of 2023. In comments to Breaking Defense, GIDE Mission Commander Col. Matt Strohmeyer said the coming exercises will involve “increasingly complex demonstrations” with the goal of “stress-testing current systems and processes.” A date has not yet been set for GIDE VI.
In Translation
CSET’s translations of significant foreign language documents on AI
2019 Israeli Foreign Investment Review Process: Establishment of a Process and Mechanism for Examining National Security Aspects of Foreign Investments. This document, which took effect in 2019, details the process by which the Israeli government conducts national security reviews of foreign investments.
2022 Israeli Foreign Investment Review Process: Establishment of a Process and Mechanism for Examining National Security Aspects of Foreign Investments. This document, which took effect in 2022, also details the process by which the Israeli government conducts national security reviews of foreign investments. It strengthens and expands the scope of earlier foreign investment screening rules that the Israeli government adopted in 2019.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
What’s New at CSET
REPORTS
- U.S. Outbound Investment into Chinese AI Companies by Emily Weinstein and Ngor Luong
PUBLICATIONS AND PODCASTS
- CSET: Data Snapshot: Keyword Cascade Plots: A Visual Tool for Research Cluster Selection by Autumn Toney and Melissa Flagg
- The TechTank Podcast: Is Economic Decoupling Possible for the United States? with Emily Weinstein
IN THE NEWS
- Foreign Policy: In a piece on U.S. concerns about the Chinese technology sector, Rishi Iyengar and Jack Detsch cited Emily Weinstein and Ngor Luong’s new brief, U.S. Outbound Investment into Chinese AI Companies.
- Nikkei Asia: Jack Stone Truitt recapped the takeaways of Weinstein and Luong’s brief.
- Reuters: The report earned the attention of Alexandra Alper, who highlighted its findings.
- Fox News: Aaron Kliegman also covered Weinstein and Luong’s brief for Fox News.
- CQ/Roll Call: Gopal Ratnam made Weinstein and Luong’s brief the centerpiece of a story about the federal government’s inability so far to close off U.S. capital flows into China’s tech sector.
- The Washington Post: Tim Starks cited a report by CSET’s Josh Goldstein and Micah Musser and CSET alum Katerina Sedova with coauthors from Stanford and OpenAI, Forecasting Potential Misuses of Language Models for Disinformation Campaigns — and How to Reduce Risk, in the Cybersecurity 202 newsletter.
- The New York Times: Goldstein, Musser and Sedova’s report earned a citation in a Tiffany Hsu and Stuart A. Thompson piece about the disinformation risks of AI chatbots.
- Wired: In a story about detecting AI-generated text, Reece Rogers quoted Musser and cited his report with Goldstein, Sedova and colleagues from Stanford and OpenAI.
- CyberScoop: Elias Groll spoke with Senior Fellow Andrew Lohn about the potential cybersecurity impact of large language models like ChatGPT.
- Marketplace: Research Analyst Jacob Feldgoise appeared on the public radio program Marketplace to discuss semiconductor funding and the CHIPS and Science Act.
- The Sun Herald: Mississippi State Auditor Shad White cited Jack Corrigan, Sergio Fontanez and Michael Kratsios’ report, Banned in D.C.: Examining Government Approaches to Foreign Technology Threats, in an opinion piece about state spending on potentially untrustworthy Chinese technology.
What We’re Reading
Report: Annual Report FY 2022, Defense Innovation Unit (January 2023)
Article: Engines of Power: Electricity, AI, and General-purpose, Military Transformations, Jeffrey Ding and Allan Dafoe, European Journal of International Security (February 2023)
Upcoming Events
- February 10: Foreign Policy Live, The Balloon and the U.S.-China Relationship, featuring CSET’s Emily Weinstein and Foreign Policy’s Ravi Agrawal and James Palmer
- February 16: CSET Event, Turn off the Tap?: Assessing U.S. Investment and Support for Chinese AI Companies, featuring CSET’s Emily Weinstein and Ngor Luong, with Emily Kilcrease of CNAS
- February 28: Georgetown Law School, Judicial Innovation Fellowship: Info Session for Potential Applicants
What else is going on? Suggest stories, documents to translate & upcoming events here.