Updates
Policy.ai has temporarily moved to a once-a-month schedule while its usual author, Alex Friedland, is on parental leave.
Worth Knowing
Meta Frees the LLaMA: On July 18, Mark Zuckerberg announced that Meta would make a powerful new large language model, LLaMA 2, available to programmers free of charge. The model, which reportedly performs on par with ChatGPT, will be available under a commercial license, allowing developers to build new products and services off the LLaMA foundation. Beyond accelerating AI development, Meta says releasing the model to the public will also improve its security by allowing more people to inspect, identify, and fix potential vulnerabilities. The decision underscores a growing divide in the tech industry’s approach to AI. Companies like Google and OpenAI have maintained tight controls on their models, arguing that limiting access to the weights that undergird their systems is critical to curbing misuse and unintended harms. However, critics say the true purpose of these restrictions is to stifle potential competition, thus benefiting the companies themselves. By open-sourcing LLaMA, Meta joins startups like Hugging Face and Databricks in supporting a more decentralized AI ecosystem. Software ideology aside, Meta has a lot to gain from releasing LLaMA 2 into the wild. By open-sourcing the model, Meta can outsource some of its AI development and safety efforts to the public, and potentially close the performance gap between its own products and those developed by Google and OpenAI, which dominate the generative AI market. It also opens the door for LLaMA to become the foundation of a broad range of AI applications, similar to the role Google’s Android operating system plays in the mobile device market. Meta also expects to make money directly from Microsoft, Google, and other cloud-computing providers when they host the model on their cloud platforms.
- More: ChatGPT-maker OpenAI says it is doubling down on preventing AI from going rogue | Aided by A.I. Language Models, Google’s Robots Are Getting Smart | (CSET) What Are Generative AI, Large Language Models, and Foundation Models?
- More: How Much Money Could Large Language Models Save Propagandists? | OpenAI can’t tell if something was written by AI after all | Hawley, Blumenthal Hold Hearing On Principles For Regulating Artificial Intelligence | (CSET) Poison in the Well
Government Updates
Biden Restricts Certain Outbound Investments in Chinese Tech: On Wednesday, President Biden issued a long-awaited executive order banning certain investments in Chinese technology companies. Specifically, the order places restrictions on investments in Chinese companies developing certain types of artificial intelligence, semiconductor, and quantum computing technology that are critical for “military, intelligence, surveillance, or cyber-enabled capabilities.” The order also requires all U.S. entities that do business in China to disclose investments related to those emerging technology sectors to the federal government. The Treasury Department and other federal agencies are charged with hammering out the details of these rules in the months ahead. In February, a CSET study found that U.S. investors were involved in transactions with Chinese AI companies that amounted to more than $40 billion between 2015 and 2021. Monitor our blog for soon-to-come in-depth analysis of the recent executive order.
Top AI Companies Promise the White House to Take Risks Seriously: On July 21, the Biden administration got seven AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—to sign on to a series of non-binding commitments to protect users of their AI products. These include agreements to invest in cybersecurity, report the limitations of their models, and develop a watermarking system to denote AI-generated images, audio, and video. The companies reportedly intend to begin enacting these policies immediately—and have even formed an independent industry forum to promote these changes—but because the commitments are voluntary, they will face no consequences if they fail to do so. Many in the technical community say the commitments amount to a whole lot of nothing—they’re “as solid as swiss cheese,” wrote computational linguist Emily Bender. The agreement comes in the wake of Senator Chuck Schumer announcing his intention to develop comprehensive guardrails for AI and the Federal Trade Commission launching a sweeping investigation of consumer protection law violations at OpenAI. The White House also said it is in the process of drafting an executive order to address AI risks.
NDAA Update: On July 14, the U.S. House of Representatives passed its version of the Fiscal Year 2024 National Defense Authorization Act by a vote of 219 to 210. The Senate followed suit on July 27, approving its version of the bill by a vote of 86 to 11. Both bills share similar topline funding levels—authorizing approximately $886 billion across the Department of Defense and Department of Energy—and feature efforts to support technology and innovation cooperation with NATO and other allies and partners. However, the bills differ in other ways when it comes to AI and emerging technologies. The House bill includes provisions requiring DOD to develop processes to guarantee responsible AI use and expand international partnerships to field more joint emerging technology capabilities in the Indo-Pacific. For its part, the Senate bill mandates that DOD send detailed reports to Congress on its AI investments and create plans for regularly updating its AI strategy, AI security processes, and tech workforce development strategy. These differences leave much to be resolved before the two chambers can send a final version of the bill to the Oval Office.
Job Openings
We’re hiring! Please apply or share the roles below with candidates in your network:
- Research Coordinator: We are currently seeking candidates for a Research Coordinator position. In this role, you will provide project management support for our analysis team to ensure research timelines are met and research project trackers are up-to-date. You will manage and improve CSET’s research project tracking functions for accuracy, efficiency, and timeliness. Additionally, you will serve as executive administrator to the Director and Deputy Director of Analysis. Apply by August 14, 2023.
In Translation
CSET’s translations of significant foreign language documents on AI
China’s Guide to Funding “Explainable and Generalizable” AI: Notice on the Release of the “Guide to the 2023 Annual Projects for the Major Research Program on Explainable and Generalizable Next-Generation Artificial Intelligence Methods”. This document offers an overview of China’s research priorities for explainable and generalizable AI methods for 2023. It also provides state funds for projects related to these priorities and explains how Chinese AI researchers can apply for funding. Applications of AI in medicine, biology, physics, materials science, and mathematics are prominent among the 2023 priorities.
What’s New at CSET
REPORTS
- Through a Glass, Darkly: Mapping Emerging Technologies and Their Supply Chains by John VerWey
- The Race for U.S. Technical Talent: Can the DOD and DIB Compete? by Diana Gehlhaus, James Ryseff and Jack Corrigan
- Adding Structure to AI Harm: An Introduction to CSET’s AI Harm Framework by Mia Hoffmann and Heather Frase
- Voices of Innovation: An Analysis of Influential AI Researchers in the United States by Sara Abdulla and Husanjot Chahal
- Who Cares About Trust? by Autumn Toney and Emelia Probasco
- Identifying AI Research by Christian Schoeberl, Autumn Toney and James Dunham
PUBLICATIONS
- Defense One: To Compete With China in STEM, Pentagon Should Invest in HBCUs by Jaret Riddick
- CSET: How Much Money Could Large Language Models Save Propagandists? by Micah Musser
- CSET: In & Out of China: Financial Support for AI Development by Ngor Luong and Rita Konaev
- Breaking Defense: China’s Rapid Space Launch Advantage, and How the U.S. Can Try to Counter It by Sam Bresnick
- CSET: Why Improving AI Reliability Metrics May Not Lead to Reliability by Helen Toner and Romeo Valentin
- CSET: Making AI (more) Safe, Secure, and Transparent: Context and Research from CSET by Tessa Baker
- CSET: Large Language Models (LLMs): An Explainer by James Dunham
- arXiv: Confidence-Building Measures for Artificial Intelligence: Workshop Proceedings
- CSET: Comment on NSF RFI on Developing a Roadmap for the Directorate of Technology, Innovation, and Partnerships (TIP) by Catherine Aiken
- CSET: Talking Past Progress: Discussions on Privacy Policy from the Academic Perspective by Autumn Toney
- CSET: Building Trust in AI: A New Era of Human-Machine Teaming by Dewey Murdick
- CSET: Data Snapshot: Tracking Industry in Government Contracts by Christian Schoeberl
EMERGING TECHNOLOGY OBSERVATORY
EVENT RECAPS
- On July 27, Research Fellow Jenny Jun testified before the House Foreign Affairs Subcommittee on the Indo-Pacific for a hearing titled, “Illicit IT: Bankrolling Kim Jong Un.” Read her testimony and watch the full hearing here.
IN THE NEWS
- The Wall Street Journal: U.S. to Ban Some Investments in China (Andrew Duehren cited U.S. Outbound Investment into Chinese AI Companies)
- The New York Times Magazine: ‘An Act of War’: Inside America’s Silicon Blockade Against China (Alex W. Palmer quoted Kevin Wolf)
- The Economist: How Real is America’s Chipmaking Renaissance? (cited No Permits, No Fabs)
- The Washington Post: China Announces Rules to Keep AI Bound by ‘Core Socialist Values’ (Meaghan Tobin quoted Helen Toner)
- Bloomberg: U.S. Plans Narrow China Tech Investment Limits, Likely by 2024 (Eric Martin, Jenny Leonard, Daniel Flatley, and Anna Edgerton quoted Emily Weinstein)
- BBC: AI Must Have Better Security, Says Top Cyber Official (Gordon Corera quoted Andrew Lohn)
- Wired: Want to Win a Chip War? You’re Going to Need a Lot of Water (Rebecca Heilweil cited No Permits, No Fabs)
- CNBC: House Committee Takes Aim at U.S. Venture Capital Firms for Investments in Chinese AI (Rohan Goswami and Lauren Feiner cited U.S. Outbound Investment Into Chinese AI Companies)
- MIT Technology Review: Why Business Is Booming for Military AI Startups (Melissa Heikkilä quoted Ngor Luong and Lauren Kahn, and cited Harnessed Lightning)
- The Guardian: Disinformation Reimagined: How AI Could Erode Democracy in the 2024 US Elections (Nick Robins-Early quoted Josh Goldstein)
- Voice of America: U.S. Tech Leaders Aim for Fewer Export Curbs on AI Chips for China (John Xie quoted Hanna Dohmen)
- Vox: The AI Rules that U.S. Policymakers Are Considering, Explained (Dylan Matthews cited “The Main Resource is the Human”)
- Scientific American: We Need Smart Intellectual Property Laws for Artificial Intelligence (James Love quoted Toner)
What We’re Reading
Article: How Silicon Valley is helping the Pentagon in the AI arms race | Financial Times
Article: Transformers: the Google scientists who pioneered an AI revolution | Financial Times
Report: Building AI cities: How to spread the benefits of an emerging technology across more of America | Brookings
Upcoming Events
- November 1: CSET and Georgetown Center for Security Studies, Kalaris Conference featuring Dewey Murdick, Emmy Probasco, and Kevin Wolf
What else is going on? Suggest stories, documents to translate & upcoming events here.