Worth Knowing
Addressing the Dangers of AI — Two Public Letters Frame the Debate: An open letter signed by a number of prominent tech leaders ignited a public discussion about the future of AI development and regulation. The letter, published by the non-profit Future of Life Institute, calls for a six-month moratorium on the training of AI systems more powerful than OpenAI’s GPT-4, citing the potential disruptions — including the “loss of control of our civilization” — that such systems could cause. The letter’s dire tone and its list of high-profile signatories (including Apple co-founder Steve Wozniak, Turing Award winner Yoshua Bengio, and SpaceX, Tesla and Twitter CEO Elon Musk) earned it both attention and pushback. A response from the DAIR Institute, written by the authors of the influential 2021 paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, criticized the Future of Life letter for its “fearmongering and AI hype.” The DAIR letter argues that this focus on hypothetical scenarios distracts from the “ongoing” harms caused by existing AI systems, including AI-powered surveillance and the exploitative practices involved in training generative AI systems. The letters are part of a larger, long-running debate about how to address the potential harms of AI systems (and what those harms might be), but there is notable overlap between the two. Both call for greater regulation of AI, highlight the need for innovations such as provenance and watermarking systems to help track AI systems and their outputs, and raise concerns about the possible downsides of intensifying competition between companies and countries. It remains to be seen whether policymakers will take up any of these ideas — even those on which all sides agree.
- More: UK rules out new AI regulator | Italy’s ChatGPT ban attracts EU privacy regulators | A misleading open letter about sci-fi AI dangers ignores the real risks
- More: How a tiny company with few rules is making fake images go mainstream | In generative AI legal Wild West, the courtroom battles are just getting started
The 2023 AI Index Takes Stock: The latest edition of the AI Index, Stanford HAI’s annual report on the state of AI, tracked another eventful year for the field. Among the key findings:
- China maintained its lead in the overall quantity of AI journal, conference and repository publications, even as the United States produced a majority of the world’s LLMs and multimodal models. Research collaboration between the two countries remained the most prolific of any international pairing. While the total number of U.S.-China collaborations grew between 2020 and 2021, the rate of growth (2.1 percent) was slower than in any year since 2010.
- Between 2021 and 2022, global private investment in AI fell 26.7 percent to $91.9 billion — the first time in the last decade that AI investment decreased. The United States accounted for more than half of the total, outpacing China by roughly $34 billion ($47.4 billion to $13.4 billion). The most popular areas of private investment were medical and healthcare ($6.1 billion); data management, processing and cloud ($5.9 billion); fintech ($5.5 billion); cybersecurity and data protection ($5.4 billion); and retail ($4.2 billion).
- AI is becoming a more popular and more diverse field at the academic level. In 2021, 19.1 percent of new computer science PhD recipients from U.S. universities specialized in AI, up from 14.9 percent in 2020. PhD-level computer science students have become much more ethnically diverse over the last decade, but less progress has been made on gender — only 21.3 percent of new 2021 AI PhDs were female.
- Policymaker interest and action on AI appear to be growing. Legislative records from 127 countries showed that the number of bills containing “artificial intelligence” grew from only one in 2016 to 37 in 2022. In the United States, 10 percent of all federal AI bills became law in 2022, up from 2 percent in 2021.
- More: 2023 State of AI in 14 Charts | Jack Clark — Presenting the 2023 AI Index
Expert Take: “We’ve seen that LLMs have a tendency not only to hallucinate facts, but also citations ‘supporting’ those facts. Since roughly half the training data is proprietary, outputs are likely to cite (real or hallucinated) proprietary sources. How are users of this tool supposed to confirm that a cited proprietary source is real and verify that it says what BloombergGPT claims it says?” — Igor Mikolic-Torreira, Director of Analysis
Government Updates
Biden Discusses AI Risks and Opportunities with S&T Council: On Tuesday, President Biden met with the President’s Council of Advisors on Science and Technology to discuss the opportunities and risks of AI. “AI can help deal with some very difficult challenges,” Biden said, “but we also have to address the potential risks to our society, to our economy, to our national security.” Since taking office, Biden has rolled out a number of initiatives related to responsible AI and AI safety, including a Blueprint for an AI Bill of Rights and the AI Risk Management Framework. Regulatory agencies, too, have been increasing their scrutiny of practices in the AI industry (see the story on the DOJ and FTC below for more). But both the Blueprint for an AI Bill of Rights and the AI Risk Management Framework are non-binding advisory documents, and the FTC’s warnings have not yet turned into concrete action. To date, the U.S. government has taken a more hands-off approach than peers like the EU, which has moved to enact stricter limits on the types of AI systems that can be deployed (though the bloc’s proposed AI Act has yet to be finalized). Biden’s remarks could be a sign of change, however — with prominent advocates calling for greater government regulation of AI development (see the first story above for more), both the White House and Congress may be more open to imposing guardrails.
White House Announces Emerging Tech Deliverables at Democracy Summit: At last week’s second Summit for Democracy, the Biden administration made several AI- and emerging technology–related announcements, including:
- The National Institute of Standards and Technology debuted its Trustworthy & Responsible AI Resource Center, designed to be a “one-stop-shop” for information related to the responsible use of AI. The center builds on (and links to) NIST’s AI Risk Management Framework and the accompanying playbook, which the agency released earlier this year.
- The State Department announced that the United States would join 44 partner countries in endorsing new Guiding Principles on Government Use of Surveillance Technologies. The guidelines are meant to prevent government misuse of surveillance technologies, particularly through AI-powered video surveillance, internet controls, and “big data analytics tools.” On March 27, President Biden signed a related executive order restricting U.S. government use of commercial spyware.
- The White House Office of Science and Technology Policy released the National Strategy to Advance Privacy-Preserving Data Sharing and Analytics, which lays out the administration’s plan for building a data ecosystem that incorporates such technologies. OSTP will work with the National Economic Council to advance the strategy’s priorities and recommendations.
Expert Take: “The Summit for Democracy was a valuable signal from the United States that it is working closely with allies and partners to uphold and advance democratic alternatives to digital authoritarianism. The Executive Order banning U.S. government use of rights-abusing commercial spyware was a major step in the right direction. The Summit would have sent an even stronger message if it had included more discussion on how democracies are countering the proliferation of AI surveillance tools, such as facial recognition and predictive policing. Export controls — both unilateral and multilateral — have been an essential part of the toolkit, and I believe they warranted more attention.” — Dahlia Peterson, Research Analyst
DOJ and FTC Are Watching for Anti-Competitive Behavior in the AI Industry: Last week, Federal Trade Commission Chair Lina Khan and the Justice Department’s top antitrust attorney, Jonathan Kanter, said their agencies are on the lookout for anti-competitive behavior in the AI market. The comments came during the Spring 2023 Enforcers Summit, an event co-hosted by the FTC and the DOJ’s antitrust division, which share responsibility for enforcing federal antitrust laws. Both Khan and Kanter raised concerns that big tech companies could use their size to box out smaller players, and both said their agencies would be watching for such artificial “barriers to entry.” The FTC and DOJ have paid increased attention to AI of late. At the South by Southwest festival last month, Kanter said his office had launched an initiative (dubbed “Project Gretzky” after the hockey legend’s line about skating to “where the puck is going”) to get ahead of potential trends toward consolidation in the AI industry. The FTC, meanwhile, has published multiple AI-related blog posts in the last two months warning companies to “keep [their] AI claims in check” and to ensure their tools can’t be used for fraud, scams, or other harmful activities. Amid calls for greater government oversight of AI development, it wouldn’t be surprising to see either agency flex its regulatory muscles, but guarding against potential harms while encouraging competition may prove difficult.
In Translation
CSET’s translations of significant foreign language documents on AI
PRC National Security Academic Fund: Twenty Years of the NSAF Joint Fund: Exploration and Practice of a New Model for Strengthening Requirement-Led Basic Scientific Research Collaboration and Innovation. This translation describes China’s “National Security Academic Fund,” which supports the China Academy of Engineering Physics (CAEP), China’s nuclear weapons research, development and testing laboratory. Notably, experts from hundreds of institutions in dozens of countries have co-authored research papers subsidized by this fund, research that presumably benefits CAEP.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
What’s New at CSET
REPORTS
- Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications by Micah Musser, Andrew Lohn, James X. Dempsey, Jonathan Spring, Ram Shankar Siva Kumar, Brenda Leong, Christina Liaghati, Cindy Martinez, Crystal D. Grant, Daniel Rohrer, Heather Frase, John Bansemer, Jonathan Elliott, Mikel Rodriguez, Mitt Regan, Rumman Chowdhury and Stefan Hermanek
PUBLICATIONS
- Foreign Affairs: Agile Ukraine, Lumbering Russia by Margarita Konaev and Owen Daniels
- CSET: Data Snapshot: The Dynamic Face of AI Pre-Baccalaureate Credentials by Sara Abdulla
EVENT RECAPS
- On March 30, CSET Non-Resident Fellow John VerWey and In-Q-Tel’s Yan Zheng discussed ways to maximize CHIPS and Science Act investments to secure the U.S. semiconductor supply chain.
IN THE NEWS
- Foreign Policy: Research Fellow Emily Weinstein spoke to Rishi Iyengar about the problem with describing AI competition as a “race.”
- Semafor: In a piece about the AI safety discussion now going on in Washington, Louise Matsakis cited a paper by Research Analyst Micah Musser and Ashton Garriott analyzing the relationship between machine learning and cybersecurity.
- Bloomberg: Sam Kim cited a paper by CSET alumnus Saif M. Khan, Alexander Mann and Research Analyst Dahlia Peterson in an article on Japan’s efforts to get ahead in the global semiconductor market.
- GovCon Wire: Charles Lyons-Burt quoted Senior Fellow Jaret Riddick’s comments from a Potomac Officers Club panel on DOD R&D talent and innovation.
- University World News: Research Analyst Hanna Dohmen discussed the importance of assessing China’s AI capabilities and strengths with Yojana Sharma.
What We’re Reading
Paper: Natural Selection Favors AIs over Humans, Dan Hendrycks (March 2023)
Paper: Eight Things to Know about Large Language Models, Samuel R. Bowman (April 2023)
Upcoming Events
- April 17: CSET Webinar, One Size Does Not Fit All: Building Trust Across a Diverse Range of AI Systems, featuring Heather Frase and John Bansemer
What else is going on? Suggest stories, documents to translate & upcoming events here.