Worth Knowing
Bing’s Chatbot and the Fraught Future of AI Competition: In the two weeks since Microsoft debuted its AI-powered Bing chatbot, the tool has highlighted the promise and perils of chatbots powered by current state-of-the-art large language models (LLMs). Though the Bing chatbot is still available only to a small group of testers on an invite-only basis, examples of its responses — impressive and disturbing alike — have been ubiquitous on Twitter, across the web, and even in prime position on the front page of The New York Times. In many of its more viral conversations, Bing’s chatbot comes across as somewhat unhinged: declaring love for its human user or denouncing them as enemies. But as The Verge’s James Vincent observed, the frenzied response to Bing chat’s output says just as much about human expectations as it does about the tool. That users would mistake an LLM’s fluency for sincerity and coherence was one of the key risks identified by the influential 2021 paper On the Dangers of Stochastic Parrots (which was at the heart of an LLM-related controversy at Google). The paper argued that this risk — combined with encoded biases and the associated environmental and financial costs of training the models — should give LLM developers pause. LLMs and other AI-based tools have proven helpful and relatively low-risk in the hands of experts who understand their limitations (see the NASA story below as one such example), but are likely riskier in the hands of non-experts. With the release of Bing chat and other related tools, however, the horse could be out of the barn. The AI-powered chat race between Microsoft and Google has already been at the center of hundred-billion-dollar market swings. Those financial implications seem to have changed the risk calculus for the developers behind state-of-the-art AI systems. A future with more powerful tools seems like a plausible outcome — but whether the appropriate guardrails will be in place remains an open question.
- More: How OpenAI Is Trying To Make ChatGPT Safer and Less Biased | The Future, Soon: What I Learned From Bing’s AI
Government Updates
The State Department Looks To Establish Norms on Military AI: Last week, the State Department unveiled a declaration on military AI and autonomy, a set of non-binding guidelines meant to build international consensus on the responsible development and use of military AI and autonomous systems. The declaration — which was announced at the Summit on Responsible AI in the Military Domain — includes 12 guidelines intended to represent the “best practices” for responsible military AI use, among them: ensuring that high-consequence AI development and deployment are overseen by senior officials, making AI development auditable, limiting AI tools to the explicit and well-defined uses for which they were designed, and ensuring that relevant personnel are trained to understand both the capabilities and limitations of the systems they use or oversee. The declaration hits many of the same beats as recent U.S. government documents on responsible AI adoption, such as the DOD’s guidance on autonomy in weapons systems (updated in January), its 2020 Ethical Principles for Artificial Intelligence, and its 2021 Responsible Artificial Intelligence Strategy and Implementation Pathway. U.S. officials said they hoped the declaration would help to establish common international norms around responsible AI development and, if and when signatories come aboard, create an opportunity for greater international collaboration on the issue of military AI and autonomy.
Expert Take: “The United States has been trying to lead the development of international norms around AI and the military for a while. This declaration, along with the recently released DOD policy on autonomous weapons, is the latest positive step. We can take this as good news for safety and international stability, but many details still need to be worked out before we can declare victory, starting with aligning around common definitions of shared terms.” — Emelia Probasco, Senior Fellow
The FTC Establishes a New Technology Office: On Friday, the Federal Trade Commission launched an Office of Technology to support its work on tech. It will be led by Chief Technology Officer Stephanie Nguyen and will significantly expand the agency’s roster of technologists (from a current staff of 10). In her announcement post, Nguyen wrote that the “shift in the pace and volume of technological changes,” including the increasing use of AI, was a key motivating factor behind the agency’s desire to bolster its in-house technological expertise. According to Nguyen, the office’s priority will be to support the agency’s enforcement investigations and cases. Cases involving AI seem a likely place for the new office to weigh in — Nguyen specifically cited “dissecting claims made about an AI-powered product to assess whether the offering is oozing with snake oil” as an example of the type of work the office would help to inform. The office will also advise on non-enforcement work, such as reports and congressional briefings, and engage with external stakeholders through workshops, consultations and conferences.
NASA Uses AI to Design Bespoke Parts: NASA’s use of AI to design mission hardware offers a compelling example of how the technology can complement human experts. According to the agency, research engineers at the Maryland-based Goddard Space Flight Center have been using commercially available AI software to design one-off parts for use in a number of NASA missions, including the Exoplanet Climate Infrared Telescope mission and the Mars Sample Return mission. The results have been impressive — according to NASA engineers, the AI-designed parts have offered significant weight savings while improving on measures of stress and failure risk compared to parts designed by humans, all while cutting down on the time needed to create the designs. The AI designs aren’t flawless — they can sometimes be too thin — but because they are subjected to validation tests, such issues can still be caught before the parts fly. NASA is no stranger to AI — the agency leaned on AI-powered systems to help its Perseverance Rover land on Mars in 2021 — but its on-Earth use of the technology might provide a clearer example of how organizations can successfully implement emerging technologies.
Job Openings
We’re hiring! Please apply or share the role below with candidates in your network.
- Research or Senior Fellow – CyberAI: We are currently seeking experienced technologists to explore topics at the intersection of AI and cyber, either as a Research Fellow or Senior Fellow (depending on experience). Drawing on a blend of analytic and technical skills, the fellow will help lead and coordinate our CyberAI project’s efforts, including shaping research priorities, conducting original research, overseeing the execution of research and the production of reports, and leading other team members. Applications are due by March 6.
In Translation
CSET’s translations of significant foreign language documents on AI
PRC Budget Document: 2022 Budget of the All-China Federation of Returned Overseas Chinese. This document details the 2022 budget of the federation, a Communist Party-led organization that liaises with ethnic Chinese people living outside mainland China and encourages them to support the Party in various ways.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
What’s New at CSET
REPORTS
- Chinese AI Investment and Commercial Activity in Southeast Asia by Ngor Luong, Channing Lee and Margarita Konaev
- One Size Does Not Fit All: Assessment, Safety, and Trust for the Diverse Range of AI Products, Tools, Services, and Resources by Heather Frase
PUBLICATIONS AND PODCASTS
- CSET: Data Snapshot: Diving into Deep Learning with Keyword Cascade Plots by Autumn Toney and Melissa Flagg
EVENT RECAPS
- On February 10, Research Fellow Emily Weinstein joined Foreign Policy’s Ravi Agrawal and James Palmer to discuss the U.S.-China relationship after the discovery and shootdown of a Chinese surveillance balloon.
- On February 16, Emily Kilcrease, Senior Fellow and Director of the Energy, Economics, and Security Program at CNAS, joined Emily Weinstein and Research Analyst Ngor Luong to discuss their findings on U.S. investment in China’s AI sector.
- On February 22, Director of Biotechnology Programs and Senior Fellow Anna Puglisi traveled to Rome to participate in a panel discussion for the event The Race to Disruptive Technologies: Nations as Ecosystems of Knowledge at the Centro Studi Americani.
IN THE NEWS
- BBC: Research Fellow Josh Goldstein spoke to David Silverberg about the potential disinformation risks of AI chatbots for an article in which Silverberg also cited Goldstein’s report with Research Analyst Micah Musser, CSET alum Katerina Sedova, and coauthors from Stanford and OpenAI, Forecasting Potential Misuses of Language Models for Disinformation Campaigns — and How to Reduce Risk.
- Defense One: In a piece about the potential benefits of “small data” approaches, Vincent Carchidi cited two CSET-related publications: Husanjot Chahal, Helen Toner and Ilya Rahkovsky’s 2021 brief Small Data’s Big AI Potential and Chahal and Toner’s subsequent Scientific American op-ed.
- The Christian Science Monitor: Peter Grier cited Research Fellow Emily Weinstein’s comments at a February 10 Foreign Policy event for an article about the discovery and shootdown of a Chinese surveillance balloon and its impact on U.S.-China relations.
- Newsweek: A piece by John Feng about China’s spy balloon program cited China is Fast Outpacing U.S. STEM PhD Growth, a 2021 brief by Remco Zwetsloot, Jack Corrigan, Emily Weinstein, Dahlia Peterson, Diana Gehlhaus and Ryan Fedasiuk.
- Bloomberg Government: Patty Nieberg reached out to Deputy Director of Analysis and Research Fellow Margarita Konaev to discuss the U.S. military’s need for “clarity of the narrative” with respect to defending Taiwan against potential Chinese aggression.
- Bloomberg: In a piece about U.S.-China decoupling, Chris Anstey cited Emily Weinstein and Ngor Luong’s recent brief, U.S. Outbound Investment into Chinese AI Companies.
- ChinAI: Jeffrey Ding named Weinstein and Luong’s brief one of his “Should-read” links in a recent edition of his ChinAI newsletter.
- ChinaTalk: Lennart Heim cited Andrew Lohn and Micah Musser’s 2022 brief, AI and Compute: How Much Longer Can Computing Power Drive Artificial Intelligence Progress?, during a recent conversation with Jordan Schneider and Chris Miller.
What We’re Reading
Paper: Toolformer: Language Models Can Teach Themselves to Use Tools, Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda and Thomas Scialom (February 2023)
Paper: Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy, Angelina Wang, Sayash Kapoor, Solon Barocas and Arvind Narayanan (October 2022)
Article: Why China Didn’t Invent ChatGPT, Li Yuan, The New York Times (February 2023)
Upcoming Events
- February 23: CSET and the Georgetown School of Foreign Service, Secretary Gina Raimondo on the CHIPS Act and a Long-term Vision for America’s Technological Leadership, featuring Secretary of Commerce Gina Raimondo and CSET’s Emily Weinstein
- February 24: U.S.-China Economic and Security Review Commission, Panel II: Advancing Growth, Knowledge, and Innovation through Higher Education, featuring Anna Puglisi
- February 24: U.S.-China Economic and Security Review Commission, Panel III: The Role of Education in Promoting China’s Strategic and Emerging Industries, featuring Dahlia Peterson
- February 28: National Science Foundation, 2023 Data Analytics Symposium, featuring Catherine Aiken
- February 28: Georgetown Law School, Judicial Innovation Fellowship: Info Session for Potential Applicants
What else is going on? Suggest stories, documents to translate & upcoming events here.