Worth Knowing
Alphabet Combines Its AI Crown Jewels to Form “Google DeepMind”: Alphabet has announced the merger of its two AI research laboratories — DeepMind and Google Brain — into a single unit dubbed “Google DeepMind.” In a public memo, CEO Sundar Pichai wrote that combining the talent and computational resources of the two teams will “significantly accelerate our progress in AI.” The London-based DeepMind — which Google acquired in 2014 — has been responsible for some of the biggest breakthroughs in AI over the last decade; its AlphaGo program was the first AI system to defeat a professional Go player, and its protein structure prediction system, AlphaFold, was dubbed the “breakthrough of the year” by Science magazine in 2021. Google Brain was tremendously influential in its own right — in 2017, its researchers developed the transformer, the model architecture behind many of today’s powerful generative AI systems. But it hadn’t been smooth sailing at either lab of late: as we covered in 2021, Google executives rebuffed a DeepMind effort to secure more structural and decision-making autonomy; Google Brain, meanwhile, has reportedly suffered from internal frustration and attrition since the company fired two prominent AI ethics researchers, Timnit Gebru and Margaret Mitchell. Even if the merger is in part motivated by those behind-the-scenes issues (as Mitchell speculated on Twitter), the primary cause appears to be the threat to Alphabet’s core internet search business posed by the AI products of rivals such as Microsoft and OpenAI. Since late last year, the company has quickly shifted both talent and resources toward building and deploying generative AI products. While the new outfit retains the “DeepMind” name and cofounder Demis Hassabis will continue to serve as its CEO, it remains unclear how the merger will affect the lab’s core DNA, and whether the new Google DeepMind will maintain its focus on “Nobel-level problems” or shift its attention toward developing more commercialized tools.
- More: ‘Godfather of AI’ quits Google with regrets and fears about his life’s work | Co-founders of Google DeepMind and LinkedIn launch chatbot
- More: EU tech tsar Vestager sees political agreement on AI law this year | ChinAI: Five Improvements to China’s Generative AI Draft Regulations
Government Updates
White House, Commerce Take Steps toward Trustworthy AI Policies: As Vice President Harris and other senior administration officials prepare to meet later this morning with the CEOs of Alphabet, Anthropic, Microsoft and OpenAI about potential AI risks, the Commerce Department’s National Telecommunications and Information Administration is requesting public feedback about what types of policies could ensure that AI systems are trustworthy and work as promised while avoiding harm. The request for comment from the NTIA, which is in charge of federal telecommunications and information policy, is an important step toward rolling out more concrete policies and regulations. In a speech at the University of Pittsburgh, NTIA head Alan Davidson said that the RFC was part of a larger “AI initiative” that would “help build an ecosystem of AI audits, assessments and other mechanisms to help assure businesses and the public that AI systems can be trusted.” Since President Biden took office in 2021, his administration has introduced a number of measures related to responsible AI and AI safety, such as the Blueprint for an AI Bill of Rights and NIST’s AI Risk Management Framework. To date, most of the administration’s initiatives — including the “public assessments” of generative AI systems announced this morning — have been voluntary frameworks or non-binding advisory documents, but the NTIA’s RFC, together with the increasing pace of warnings from regulatory agencies (see the story below), indicates that more concrete regulatory action could be on the horizon.
The FTC, DOJ, CFPB and EEOC Pledge to Protect Against Harmful AI: Last week, the heads of the Federal Trade Commission, the Justice Department’s Civil Rights Division, the Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission released a joint statement expressing concern about the potential of automated systems to perpetuate unlawful bias, discrimination and other harmful outcomes. The officials pledged that they would “vigorously use our collective authorities to protect individuals’ rights.” Regulators have been turning up the heat in recent months: as we covered last time, the DOJ warned that it was looking out for anti-competitive behavior in the AI industry, and the FTC published multiple blog posts warning companies to keep their AI activities on the straight and narrow. This week, the FTC published another such post, advising companies to avoid using AI-powered advertising to trick people into harmful choices and emphasizing the importance of transparency in AI-generated content. As noted in the story above about the NTIA’s request for comment, regulators’ focus on AI appears to be intensifying as commercial AI competition heats up.
AI Advisory Panel Issues Its Inaugural Report: The National AI Advisory Committee — the panel charged with advising the president and the National AI Initiative Office on issues related to AI — released its inaugural report last week. The NAIAC was established by the National AI Initiative Act of 2020 and held its first meeting last year. The new report covers the committee’s first year of its three-year appointment and offers 23 recommended actions tied to 13 objectives meant to “help the U.S. government and society at large navigate this critical path to harness AI opportunities, create and model values-based innovation, and reduce AI’s risks.” The report’s objectives include “Operationalize trustworthy AI governance,” “Ensure AI is trustworthy and lawful and expands opportunities,” “Scale an AI-capable federal workforce,” and “Continue to cultivate international collaboration and leadership on AI.” It endorses the approach taken by the National Institute of Standards and Technology in its AI Risk Management Framework and recommends that the White House take steps to implement the AI RMF across the federal government, encourage the private sector to adopt the AI RMF or aligned processes, and make an effort to “internationalize” the AI RMF through formal translations and workshops, so that the framework can serve as the world’s “common language” on AI risk. On the day of the report’s release, four of the NAIAC’s members took part in a Brookings Institution panel to discuss the report and the state of AI development — a recording of that event is available online.
In Translation
CSET’s translations of significant foreign language documents on AI
PRC Reorganization Plan: Reform Plan for Party and State Institutions. This document is the full text of a significant Party and government reorganization plan that China’s parliament, the National People’s Congress, passed in March 2023. Highlights of the plan include two new commissions to oversee the financial industry, a new Party body that oversees technology policy, a weakened Ministry of Science and Technology, and the creation of a new National Data Bureau.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
What’s New at CSET
REPORTS
- Viral Families and Disease X: A Framework for U.S. Pandemic Preparedness Policy by Caroline Schuerger, Steph Batalis, Katherine Quinn, Amesh Adalja and Anna Puglisi
- “The Main Resource is the Human”: A Survey of AI Researchers on the Importance of Compute by Micah Musser, Rebecca Gelles, Ronnie Kinoshita, Catherine Aiken and Andrew Lohn
- Volunteer Force: U.S. Tech Companies and Their Contributions in Ukraine by Christine H. Fox and Emelia Probasco
PUBLICATIONS
- CSET: Studying Tech Competition through Research Output: Some CSET Best Practices by Jacob Feldgoise, Catherine Aiken, Emily S. Weinstein and Zachary Arnold
- CSET: Data Snapshot: The Glass Classroom: Women’s Representation in AI-Related Post-Secondary Programs by Sara Abdulla
- CSET: A Byte Out of the Gap: Analyzing AI Bachelor’s, Master’s, and PhD Production for Black Students by Sara Abdulla
- Foreign Affairs: The Coming Age of AI-Powered Propaganda: How to Defend Against Supercharged Disinformation by Josh A. Goldstein and Girish Sastry
- Issues in Science and Technology: Lessons From the Ukraine-Russia War by Jaret C. Riddick and Cole McFaul
- DigiChina: How Will China’s Generative AI Regulations Shape the Future? A DigiChina Forum with Helen Toner
EMERGING TECHNOLOGY OBSERVATORY
- Singapore’s AI research collaboration with China more than doubled between 2016 and 2021
- Singapore’s no exception: AI research collaboration around the Pacific
- ETO’s spring newsletter is out! To sign up for ETO updates (4x/year), visit eto.tech.
EVENT RECAPS
- On April 17, CSET Senior Fellow Heather Frase and John Bansemer, Director of CSET’s CyberAI Project and Senior Fellow, discussed Dr. Frase’s research on effectively evaluating and assessing AI systems across a broad range of applications, as well as her recently published research agenda.
IN THE NEWS
- The New York Times: Helen Toner spoke with German Lopez about the potential impact of AI tools for a recent edition of his newsletter, The Morning.
- The Financial Times: Madhumita Murgia interviewed CSET’s Heather Frase about her experience red teaming GPT-4.
- Politico: Derek Robertson detailed the main takeaways of Micah Musser, Rebecca Gelles, Ronnie Kinoshita, Catherine Aiken and Andrew Lohn’s latest report — “The Main Resource is the Human”: A Survey of AI Researchers on the Importance of Compute — in the Digital Future Daily newsletter.
- Breaking Defense: For an article on how the DOD might incorporate LLMs, Sydney J. Freedberg, Jr. spoke to Musser about how tools like ChatGPT work.
- Lawfare: For a piece about the security risks of AI, Jim Dempsey of the Stanford Cyber Policy Center cited a recent workshop paper by Musser, Lohn and others, Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications.
- Roll Call: Gopal Ratnam spoke with Ngor Luong about U.S. investment in China and cited the findings of her February report with Emily Weinstein, U.S. Outbound Investment into Chinese AI Companies.
- South China Morning Post: Xinmei Shen reached out to Hanna Dohmen to discuss China’s efforts to develop a home-grown ChatGPT competitor.
- South China Morning Post: Ben Jiang and Coco Feng cited Dohmen in an article about Beijing’s attempts to foster AI development while limiting risk.
What We’re Reading
Article: Inside the Secret List of Websites That Make AI Like ChatGPT Sound Smart, Kevin Schaul, Szu Yu Chen and Nitasha Tiku, The Washington Post (April 2023)
Working Paper: Generative AI at Work, Erik Brynjolfsson, Danielle Li and Lindsey R. Raymond, National Bureau of Economic Research (April 2023)
Report: Artificial Intelligence in Nuclear Operations: Challenges, Opportunities, and Impacts, Mary Chesnut, Tim Ditter, Anya Fink, Larry Lewis and Tim McDonnell, CNA (April 2023)
Upcoming Events
- May 25: CSET Webinar, How Important Is Compute to the Future of AI?: Challenging the conventional wisdom about constraints on AI progress, featuring Micah Musser and Tim Hwang
What else is going on? Suggest stories, documents to translate & upcoming events here.