Updates
Policy.ai will be temporarily moving to a once-a-month schedule.
Plus: CSET is looking to award up to $750,000 to fund research projects on AI Assurance. Scroll down to learn more about our Foundational Research Grants Program.
Worth Knowing
EU AI Act Clears Parliament Vote, Moves Closer to Enactment: On Wednesday, the European Parliament voted overwhelmingly to approve the EU AI Act, which, if enacted, would place significant limits on certain “high-risk” AI applications and ban others outright. The 499-28 vote (with 93 abstentions) clears the way for negotiations with the Council of the EU, which must take place before the act can become law. Though the act has undergone some changes since it was originally proposed in 2021, its core risk-based approach remains: systems designated by regulators as “high risk” — such as those used in critical infrastructure or law enforcement — would be subject to strict limits, while systems deemed to pose an “unacceptable risk” — such as facial recognition in public spaces — would be banned outright. The parliament’s version of the act survived attempts to insert exceptions for some law enforcement applications, but Politico reports that the issue is likely to remain a sticking point during upcoming trilogue discussions between the Parliament, the European Commission and EU member states. The act could also face significant pushback from AI developers — OpenAI CEO Sam Altman said last month that the company could choose to leave the EU if it found complying with the act too burdensome (though he later walked that statement back). EU officials say they hope to reach a final agreement on the act before the end of the year.
- More: Europe moves ahead on AI regulation, challenging tech giants’ power | UK PM Sunak pitches Britain as future home for AI regulation | Germany Introduces Its First National Security Strategy
Last Month In “AI Behaving Badly”: Those concerned about the concrete harms caused by AI tools had plenty of examples to point to over the past month:
- Two U.S. lawyers are in hot water after they filed a legal brief in federal court that included fictitious legal citations generated by ChatGPT. Steven A. Schwartz admitted to using the OpenAI tool to conduct legal research and apologized for submitting a brief that cited at least half a dozen fake legal cases (as well as a compendium of eight fabricated opinions). Schwartz said he assumed ChatGPT worked like “a super search engine” and didn’t appear to realize it could hallucinate plausible-sounding but entirely fictitious responses. A judge has yet to determine whether Schwartz and a colleague whose name was on the brief will be sanctioned.
- The National Eating Disorder Association disabled its AI-powered chatbot over concerns about responses that were “harmful to those with eating disorders.” The move came soon after the organization announced it would be closing its human-staffed helpline (which had recently voted to unionize) and would “begin to pivot to the expanded use of AI-assisted technology.” The researchers who originally designed the chatbot, dubbed “Tessa,” said it was built as a closed, rule-based system so that it couldn’t go off script. They said the AI component was added to it without their knowledge by the company that hosted the bot. NEDA also said it was not consulted about the AI augmentation (though NEDA’s chair appears to have been aware of its AI capabilities when he announced the move to staff earlier this year).
- A Texas A&M University-Commerce professor accused his class of using ChatGPT to write their final essays and said he would be giving them all an “incomplete” grade. The professor had copied his students’ essays into ChatGPT and asked whether it had written them; the tool “confirmed” that it had written each one. The problem: ChatGPT cannot reliably identify ChatGPT-generated (or any AI-generated) text, and will readily claim authorship of writing it never produced. The situation seems to have been mostly resolved without students being flunked or barred from graduating — university administrators confirmed that a number of students have been cleared of cheating, though one student did admit to using ChatGPT to write his paper.
Government Updates
The White House Calls for Input on a “National AI Strategy”: Soon after a high-profile meeting with AI executives last month, the White House made several more announcements related to AI:
- The Office of Science and Technology Policy issued a request for information (RFI) related to a “National AI Strategy.” While the Biden administration has already released a number of documents related to AI — including the Blueprint for an AI Bill of Rights and NIST’s AI Risk Management Framework — the new AI strategy would take a “whole-of-society approach to AI” and “ensure a cohesive and comprehensive approach to AI-related risks and opportunities.” The RFI indicates that the Biden administration is interested in addressing both the immediate harms associated with AI and the longer-term risks the technology poses as it develops. Responses to the RFI must be submitted by July 7.
- OSTP issued an update to the National AI R&D Strategic Plan. The update reaffirms the eight strategies laid out in the initial AI R&D strategic plan (released in 2016) and its 2019 update — including making long-term investments in “fundamental and responsible AI research” (Strategy 1), grappling with the “ethical, legal, and societal implications of AI” (Strategy 3), and ensuring AI systems’ safety and security (Strategy 4) — and adds a ninth strategy emphasizing the importance of international collaboration in AI research. (In related news, the United States and UK last week announced the “Atlantic Declaration,” a partnership framework that includes plans to collaborate on critical and emerging technologies.)
G7 Leaders Start Process to Discuss AI Governance: At their summit in Hiroshima, Japan, G7 leaders agreed to establish a “Hiroshima AI process” that will see cabinet-level ministers meet to discuss issues related to generative AI. According to the leaders’ communiqué, these discussions could cover topics such as AI governance, copyright protections, transparency, responses to disinformation, and responsible AI. While the communiqué acknowledged that individual countries could take different approaches to AI governance, it stressed the importance of making AI governance frameworks interoperable and encouraged stakeholders to develop trustworthy AI tools through “multi-stakeholder international organizations” such as the OECD and the Global Partnership on AI. The G7 digital and technology ministers have already laid some important groundwork for the Hiroshima AI process: at an April meeting, they agreed that AI policies and regulations should be “risk-based and forward-looking” and endorsed an “Action Plan for promoting global interoperability between tools for trustworthy AI” (which the G7 leaders highlighted in their communiqué). G7 officials held their first Hiroshima AI process meeting on May 30 and are set to deliver their recommendations by the end of the year.
Foundational Research Grants
CSET’s Foundational Research Grants (FRG) program is calling for research ideas on AI assurance for general-purpose systems operating in open-ended domains. FRG is looking to award up to $750,000 per project — click here for full details.
In Translation
CSET’s translations of significant foreign language documents on AI
Chinese Think Tank Paper: White Paper on AI Framework Development (2022). This white paper by a Chinese state-affiliated think tank emphasizes the importance of AI frameworks in the current and future phases of the development of AI technology. The white paper references well-known foreign frameworks such as TensorFlow and PyTorch, and also goes into considerable detail on applications of Chinese platforms such as Baidu’s PaddlePaddle and Huawei’s MindSpore.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
Job Openings
We’re hiring! Please apply or share the roles below with candidates in your network:
- Data Research Analyst: As part of our data team, Data Research Analysts work directly with specific lines of research to produce data-driven research products and policy analysis alongside CSET’s analysis teams. This dynamic role serves as a bridge between the data and analysis teams, combining knowledge of research methods with data analysis skills. Apply by June 26, 2023
- Research/Senior Fellow – Emerging Technology Workforce: This Fellow will lead and coordinate our efforts on the Workforce line of research, including shaping priorities, laying out an overall research strategy, overseeing the execution of research and the production of reports, and helping to hire and manage supporting researchers. Apply by July 1, 2023
What’s New at CSET
REPORTS
- A Shot of Resilience: A Critical Analysis of Manufacturing Vulnerabilities in Vaccine Production by Steph Batalis and Anna Puglisi
- A Matrix for Selecting Responsible AI Frameworks by Mina Narayanan and Christian Schoeberl
- The Policy Playbook: Building a Systems-Oriented Approach to Technology and National Security Policy by Jack Corrigan, Melissa Flagg and Dewey Murdick
PUBLICATIONS
- CSET: Controlling Access to Compute via the Cloud: Options for U.S. Policymakers, Part II by Hanna Dohmen, Jacob Feldgoise, Emily Weinstein and Timothy Fist
- CSET: Capturing the Flag with ChatGPT: Generative AI for Cyber Education by Lisa Lam
- CSET: Blog Post: What We’re Reading on AI Regulation
- CSET: Blog Post: LGBTQIA+ Pride Month
- CSET: Data Snapshot: Expanding the Reach: The Collective Impact of HBCUs, PBIs, and Other Universities on Black AI Education by Sara Abdulla
- Foreign Affairs: The Illusion of China’s AI Prowess by Helen Toner, Jenny Xiao and Jeffrey Ding
- Time Magazine: AI Chatbots Are Doing Something a Lot Like Improv by Helen Toner
- The Hill: To promote AI effectively, policymakers must look beyond ChatGPT by Micah Musser
- ChinaTalk: Manga Makeover for Taiwan Conscription by Karson Elmgren
EMERGING TECHNOLOGY OBSERVATORY
- A new perspective on global R&D: Introducing summary view in the Map of Science
- Introducing our Research Almanac
- Exploring trends in AI and genetics with the Research Almanac
EVENT RECAPS
- On May 25, CSET Research Analyst Micah Musser and Institute for Progress Fellow Tim Hwang discussed CSET research examining factors that will contribute to future AI development. Watch a recording of the event and catch a highlight here.
IN THE NEWS
- Foreign Policy: Is China Gaining a Lead in the Tech Arms Race? (Jack Detsch, Rishi Iyengar and Robbie Gramer quoted Research Fellow Emily Weinstein)
- The Atlantic: The Russian Red Line Washington Won’t Cross—Yet (Daniel Block quoted Deputy Director of Analysis and Research Fellow Margarita Konaev)
- Vox: Ukraine-Russia war: What to know about the Ukrainian counteroffensive (Jen Kirby quoted Konaev)
- Associated Press: AI chips are hot. Here’s what they are, what they’re for and why investors see gold (David Hamilton quoted Research Analyst Hanna Dohmen)
- War on the Rocks: Qualified to Compete: A New Approach to Credentials (Alexandra Seymour cited the CSET data brief China is Fast Outpacing U.S. STEM PhD Growth)
- Marketplace: How AI is reshaping the computer chip industry (Matt Levin quoted Research Analyst Karson Elmgren)
- Semafor: What Beijing wants in exchange for returning Washington’s calls (Diego Mendoza quoted Research Fellow Sam Bresnick)
- CNBC: Google challenges OpenAI’s calls for government A.I. czar (Hayden Field and Lauren Feiner quoted Director of Strategy and Foundational Research Grants Helen Toner)
- Nikkei Asia: ChatGPT unleashed an AI race, now regulators are struggling to hold on (Yifan Yu quoted Toner)
- The Register: Nvidia GPUs fly out of the fabs – and right back into them (Tobias Mann cited the CSET policy brief Reshoring Chipmaking Capacity Requires High-Skilled Foreign Talent)
- Forbes: How Could Artificial Intelligence Impact Cybersecurity? (Dan Vigdor cited the CSET, OpenAI, and the Stanford Internet Observatory report Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations)
- Roll Call: Lawmakers seek to ease defense export controls to UK, Australia (Rachel Oswald quoted Non-Resident Senior Fellow Kevin Wolf)
- PitchBook: VCs inject $3.2B into defense-focused biotechnology (Rosie Bradbury quoted Senior Fellow Anna Puglisi)
- Export Compliance Daily: Researchers Caution Against Restricting Chinese Access to Cloud Computing Services (Ian Cohen quoted Research Fellow Emily Weinstein)
What We’re Reading
Paper: Let’s Verify Step by Step, Hunter Lightman, et al., OpenAI (May 2023)
Article: Faster sorting algorithms discovered using deep reinforcement learning, Daniel J. Mankowitz, et al., Nature (June 2023)
Upcoming Events
- June 23: CSET Webinar, Uplifting Cyber Defense: How new training tools can improve AI defenses, featuring Andrew Lohn, Krystal Jackson and Melody Wolk
What else is going on? Suggest stories, documents to translate & upcoming events here.