EU AI Act Passes Two Committees, Heads to Parliament Vote Soon: Last week, two key EU committees approved a compromise version of the EU AI Act, setting the stage for a European Parliament vote, a process to resolve differences and, ultimately, adoption sometime this year or next. When it was proposed in 2021, the AI Act was ahead of its time, both in elevating AI as a key issue and in how it aimed to regulate “high risk” applications. A lot has changed in the two years since: regulators elsewhere — including in China and the United States — began paying more attention to AI, and the rise of popular generative tools from the likes of OpenAI and Midjourney has had a profound impact on the broader AI ecosystem. Some of the changes in the latest compromise text reflect those developments — the proposed act now includes language about “foundation models” and transparency requirements for generative AI tools (Brookings Fellow Alex Engler has some helpful Twitter threads and a running Google Doc detailing additional changes). Observers say the European Parliament will likely vote on the text of the act in June and that trilogue discussions between the EP, the European Commission and member states should start in July.
Expert Take: “The AI Act lays out a set of rules and processes intended to prevent harm from and to promote trustworthy AI. But many of the AI Act’s essential requirements, such as conformity assessments of high-risk AI systems, depend on standards and procedures that either don’t exist yet or for which we lack best practices and guidelines. These specifics will be determined by European standards bodies, market surveillance agencies and national governments, and will substantially shape the regulation’s effectiveness. As the AI Act is finalized over the course of the trilogue, we need to pay close attention to those parallel processes that will determine if the regulation actually has teeth.” — Mia Hoffmann, Research Fellow
Google Debuts New LLM — But Internal Critics Worry About Its AI Future: At its annual I/O conference last week, Google made a host of AI-related announcements. Chief among them was the debut of the PaLM 2 language model, which the company says is its most powerful language model yet. According to a Google technical report, the largest model in the PaLM 2 family outperforms its predecessor — PaLM, released last year — despite being “significantly smaller.” Google says PaLM 2 is already powering 25 of the company’s products, including the Bard chatbot, a generative AI “snapshot” feature that the company is rolling out for its search engine, and a number of new AI features for its Workspace office programs. But Google’s barrage of public announcements took place against a backdrop of some private uncertainty. An internal memo leaked to SemiAnalysis raised concerns about Google’s ability to “win [the AI] arms race” because of the threat posed by the open source community.
The memo, which was reportedly shared widely within Google, argued that open source models such as Meta’s LLaMA LLMs and Stability AI’s Stable Diffusion image generator will make it difficult for companies like Google or OpenAI to build a business strategy with generative AI at its center. While the memo made waves in the tech world after it leaked, it provoked plenty of disagreement — both about its characterization of open source models’ performance and the likelihood that consumers will embrace open source alternatives in meaningful numbers. With both Google and Microsoft rolling out generative AI tools in their business-oriented products, we should soon get a sense of whether their “moats” remain intact.
- More: Google makes its text-to-music AI public | Google is throwing generative AI at everything | Google Launches AI Supercomputer Powered by Nvidia H100 GPUs
Senate Judiciary Hearing on AI Makes Waves: AI took center stage on Capitol Hill on Tuesday when the Senate Judiciary Committee held a widely covered hearing on AI oversight. The hearing before the Subcommittee on Privacy, Technology, and the Law was notable both for its subject matter and its witnesses: OpenAI CEO Sam Altman, NYU emeritus professor and prominent AI critic Gary Marcus, and IBM Chief Privacy & Trust Officer Christina Montgomery. While U.S. lawmakers have been slower than their European counterparts to get the ball rolling on AI legislation, the subcommittee’s members appeared eager to avoid repeating history — “Congress failed to meet the moment on social media,” Sen. Blumenthal said in his opening remarks, “now we have the obligation to do it on AI before the threats and the risks become real.” Both the witnesses and the senators expressed a desire for greater regulation, but some observers raised concerns that the sense of urgency means important perspectives are being overlooked. Margaret Mitchell and Timnit Gebru, the former co-leads of Google’s Ethical AI team, told The Washington Post that lawmakers should take steps to ensure they’re not overly deferential to industry voices when it comes to conceiving and enacting new AI laws.
The White House Hosts AI Executives: Altman’s congressional testimony followed a visit to the White House two weeks earlier — on May 4, the OpenAI CEO and his counterparts at Alphabet, Anthropic and Microsoft met with Vice President Harris and senior Biden administration officials (President Biden also stopped by briefly) to discuss their companies’ development of AI. In conjunction with the meeting, the White House made several AI-related announcements: that it had secured commitments from a number of AI developers — including the four represented at the meeting and Hugging Face, Nvidia, and Stability AI — to participate in a “public evaluation” of their systems at this year’s DEF CON 31; $140 million in new AI research investments (see below); and upcoming draft policy guidance from the Office of Management and Budget on government use of AI systems.
DOD Releases Science & Technology Strategy: Last week, the Pentagon released its National Defense Science and Technology Strategy. The long-anticipated document — which the DOD was originally directed to produce as part of the FY2019 NDAA — lays out the department’s S&T priorities and makes recommendations about how they can be achieved. The 12-page strategy reemphasizes the 14 “Critical Technology Areas” (including “Trusted AI and autonomy”) identified by the DOD Chief Technology Officer last year and details three key lines of effort for achieving the department’s goals:
- “Focus on the Joint Mission;”
- “Create and field capabilities at speed and scale” (including through collaboration with international allies and “non-traditional partnerships”);
- and “Ensure the foundations for research and development” (through infrastructure and workforce investments).
NSF Announces Seven New AI Research Institutes: The National Science Foundation, in partnership with other federal agencies and collaborating stakeholders, announced a $140 million investment that will establish seven more National Artificial Intelligence Research Institutes at universities across the country. The new institutes will focus on:
- Neural and Cognitive Foundations of Artificial Intelligence (led by Columbia University — funded by a partnership between NSF and the DOD’s Office of the Under Secretary of Defense for Research and Engineering)
- Intelligent Agents for Next-Generation Cybersecurity (University of California, Santa Barbara — funded by a partnership between NSF, DHS S&T and IBM)
- Trustworthy AI (University of Maryland — funded by a partnership between NSF and NIST)
- Climate Smart Agriculture and Forestry (University of Minnesota Twin Cities — funded by USDA-NIFA)
- AI for Decision Making (Carnegie Mellon University)
- AI-Augmented Learning to Expand Education Opportunities and Improve Outcomes (two institutes, led by the University of Illinois Urbana-Champaign and the University at Buffalo — funded by a partnership between NSF and ED-IES)
CSET’s translations of significant foreign language documents on AI
PRC S&T Guidelines: Notice of the Ministry of Science and Technology on the Publication of the Guidelines for National New Generation Artificial Intelligence Innovation and Development Pilot Zone Construction Work (Revised Version). These revised guidelines, issued by the PRC Ministry of Science and Technology in September 2020 on the basis of previous guidelines from August 2019, describe a process by which Chinese cities can apply to establish “national new generation AI innovation and development pilot zones.” These zones will be located in cities that already possess robust AI infrastructure such as top universities, national labs, and leading tech companies. The guidelines state that China will create roughly 20 AI pilot zones by 2023.
PRC Transportation Innovation Plan: Outline of the Medium- to Long-Term Development Plan for Scientific and Technological Innovation in the Transportation Field (2021-2035). This document is China’s plan for applying high technology to the field of transportation. The plan sets qualitative goals for transport-related technological achievements for 2025, 2030, and 2035, and calls for the integration of technologies such as AI, blockchain, and cloud computing into China’s traffic, transportation, freight, shipping, and logistics sectors.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
What’s New at CSET
- Financing “The New Oil”: Assessing AI Investment in Israel and the Broader Middle East by Tony Ferrara and Sara Abdulla
- Spotlight on Beijing Institute for General Artificial Intelligence: China’s State-Backed Program for General Purpose AI by Huey-Meei Chang and William Hannas
- What Are Generative AI, Large Language Models, and Foundation Models? by Helen Toner
- Controlling Access to Advanced Compute via the Cloud: Options for U.S. Policymakers, Part I by Hanna Dohmen, Jacob Feldgoise, Emily S. Weinstein and Timothy Fist
- Blog Post: Asian American and Pacific Islander (AAPI) Heritage Month, a (non-exhaustive) list of information and resources compiled by CSET team members
- The Diplomat: Don’t Sleep on Chinese Tech Investment in Southeast Asia by Ngor Luong and Channing Lee
- On May 17, Deputy Director of Analysis and Research Fellow Margarita Konaev spoke at Nexus 23, a symposium at the National Press Club held by Applied Intuition in collaboration with the Atlantic Council. A non-resident senior fellow at the Council, Konaev took part in a panel discussion, “Ukraine: Autonomy, AI, and Lessons from the Battlefield.”
IN THE NEWS
- Associated Press: For an article about the White House’s recent AI announcements, Matt O’Brien and Josh Boak reached out to Senior Fellow Heather Frase to discuss the implications of publicly evaluating AI systems at the DEF CON 31 conference.
- Politico: Frase was also quoted on the subject in Mohar Chatterjee’s Digital Future Daily newsletter.
- Fast Company: Rebecca Heilweil spoke to Research Analyst Jacob Feldgoise and leaned on the findings of the 2022 brief Reshoring Chipmaking Capacity Requires High-Skilled Foreign Talent for an article about the process of building chip fabs in the United States.
- The Wall Street Journal: According to an article by Lingling Wei, the Chinese government chose to restrict overseas access to important data sources due to “a drumbeat” of reports from U.S. think tanks, including CSET.
- Axios: Director of Strategy and Foundational Research Grants Helen Toner spoke to Ryan Heath about the Chinese government’s new rules on generative AI.
- Axios: Heath’s colleagues, Alison Snyder and Sophia Cai, cited the 2020 brief The Chipmakers: U.S. Strengths and Priorities for the High-End Semiconductor Workforce in an article about high-skilled immigration to the United States.
- Fox Business: Eric Revell spoke to Research Analyst Ngor Luong and cited her recent brief with Emily Weinstein, U.S. Outbound Investment into Chinese AI Companies, in an article about potential restrictions on U.S. investments in Chinese firms.
- The Messenger: Mina Narayanan spoke to Ben Powers about Congress’ efforts to catch up on regulating AI.
- The Bulletin of the Atomic Scientists: A Julie George piece about the national security implications of AI cited the 2021 report Harnessed Lightning: How the Chinese Military is Adopting Artificial Intelligence.
- GCN: Chris Teale cited Research Analyst Jack Corrigan’s comments from earlier this year on the need for greater transparency about tech-related national security concerns.
What We’re Reading
- Report: Toward Comprehensive Risk Assessments and Assurance of AI-Based Systems, Heidy Khlaaf, Trail of Bits (March 2023)
- Article: Are Emergent Abilities of Large Language Models a Mirage?, Rylan Schaeffer, Brando Miranda and Sanmi Koyejo (April 2023)
Upcoming Events
- May 18-19: Governance of Emerging Technologies and Science conference at Arizona State University, featuring Caroline Schuerger, Steph Batalis and Vikram Venkatram
- May 22: Ars Frontiers conference by Ars Technica, “Charting Responsible Growth,” with panel, “What Happens to the Developers When AI Can Code,” featuring Andrew Lohn
- May 25: CSET Webinar, How Important is Compute to the Future of AI?: Challenging the conventional wisdom about constraints on AI progress, featuring Micah Musser and Tim Hwang
What else is going on? Suggest stories, documents to translate & upcoming events here.