Data, algorithms and models

CSET's Ryan Fedasiuk discusses China's use of technology companies to strengthen the government's authority over personal data under the Personal Information Protection Law.

Machine Intelligence for Scientific Discovery and Engineering Invention

Matthew Daniels, Autumn Toney, Melissa Flagg, and Charles Yang | May 2021

The advantages of nations depend in part on their access to new inventions, and modern applications of artificial intelligence can help accelerate invention in the years ahead. This data brief is a first step toward understanding how modern AI and machine learning have begun accelerating growth across a wide array of science and engineering disciplines in recent years.

Foretell, a project run by the Center for Security and Emerging Technology (CSET) at Georgetown University on the Cultivate platform, employs crowd forecasting to predict the course of technological competition between the United States and China.
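Crowd forecasting pools many individual probability estimates into a single forecast. Foretell's exact aggregation rule is not described here; the sketch below shows two common pooling rules, the median and the geometric mean of odds, applied to hypothetical forecaster data.

```python
import numpy as np

# Hypothetical probability estimates from individual forecasters for a
# single yes/no question. These numbers are illustrative only.
forecasts = np.array([0.30, 0.45, 0.25, 0.60, 0.35, 0.40, 0.20])

# Pooling rule 1: the median forecast, robust to outlier forecasters.
median = np.median(forecasts)

# Pooling rule 2: the geometric mean of odds, converted back to a
# probability. It is often sharper than a simple mean of probabilities.
odds = forecasts / (1.0 - forecasts)
pooled_odds = np.exp(np.log(odds).mean())
pooled = pooled_odds / (1.0 + pooled_odds)

print(f"median forecast:             {median:.3f}")
print(f"geometric-mean-of-odds pool: {pooled:.3f}")
```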

The Path of Least Resistance

Margarita Konaev and Husanjot Chahal | April 2021

As multinational collaboration on emerging technologies takes center stage, U.S. allies and partners must overcome the technological, bureaucratic, and political barriers to working together. This report assesses the challenges to multinational collaboration and explains how joint projects centered on artificial intelligence applications for military logistics and sustainment offer a viable path forward.

AI Hubs

Max Langenkamp and Melissa Flagg | April 2021

U.S. policymakers need to understand the landscape of artificial intelligence talent and investment as AI becomes increasingly important to national and economic security. This knowledge is critical as leaders develop new alliances and work to curb China’s growing influence. As an initial effort, an earlier CSET report, “AI Hubs in the United States,” examined the domestic AI ecosystem by mapping where U.S. AI talent is produced, where it is concentrated, and where AI private equity funding goes. Given the global nature of the AI ecosystem and the importance of international talent flows, this paper looks for the centers of AI talent and investment in regions and countries that are key U.S. partners: Europe and the CANZUK countries (Canada, Australia, New Zealand, and the United Kingdom).

The Public AI Research Portfolio of China’s Security Forces

Dewey Murdick, Daniel Chou, Ryan Fedasiuk, and Emily Weinstein | March 2021

This data brief uses new analytic tools to explore the public artificial intelligence (AI) research portfolio of China’s security forces. The methods contextualize Chinese-language scholarly papers that claim a direct working affiliation with components of the Ministry of Public Security, the People’s Armed Police Force, and the People’s Liberation Army. The authors review potential uses of computer vision, robotics, natural language processing, and general AI research.

Mapping India’s AI Potential

Husanjot Chahal, Sara Abdulla, Jonathan Murdick, and Ilya Rahkovsky | March 2021

With its massive information technology workforce, thriving research community, and growing technology ecosystem, India has a significant stake in the global development of artificial intelligence. Drawing on a variety of original CSET datasets, the authors evaluate India’s AI potential by examining its progress across five categories of indicators pertinent to AI development: talent, research, patents, companies and investments, and compute.

Key Concepts in AI Safety: Interpretability in Machine Learning

Tim G. J. Rudner and Helen Toner | March 2021

This paper is the third installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces interpretability as a means to enable assurance in modern machine learning systems.
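Interpretability covers many techniques. As one concrete, model-agnostic illustration (an example chosen for this digest, not a method taken from the paper), the sketch below uses permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops, revealing which features the model actually relies on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: only the first two of five features influence the label.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(float)

# A stand-in "black box": a hand-written rule playing the role of a
# trained model whose internals we pretend not to see.
def model(inputs):
    return (inputs[:, 0] + 2 * inputs[:, 1] > 0).astype(float)

def accuracy(inputs, labels):
    return float(np.mean(model(inputs) == labels))

# Permutation importance: shuffle one feature at a time and measure the
# drop in accuracy. Large drops flag features the model relies on.
baseline = accuracy(X, y)
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    print(f"feature {j}: accuracy drop {baseline - accuracy(X_perm, y):.3f}")
```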

Key Concepts in AI Safety: Robustness and Adversarial Examples

Tim G. J. Rudner and Helen Toner | March 2021

This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces adversarial examples, a major challenge to robustness in modern machine learning systems.
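The core idea behind adversarial examples can be shown in a few lines: nudge an input in the direction that most increases the model's loss. Below is a minimal sketch of the fast gradient sign method (FGSM), one standard construction; the toy logistic-regression weights are arbitrary stand-ins, not anything drawn from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression classifier: p(y=1|x) = sigmoid(w.x + b).
# The weights are arbitrary stand-ins, not a trained model.
w = rng.normal(size=8)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, epsilon):
    # For logistic loss, the gradient of the loss with respect to the
    # input is (p - y) * w. FGSM steps epsilon in its sign direction.
    grad = (predict(x) - y) * w
    return x + epsilon * np.sign(grad)

x = rng.normal(size=8)
y = 1.0  # assumed true label
x_adv = fgsm(x, y, epsilon=0.25)

print(f"clean prediction:       {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")  # pushed toward 0
```

The per-coordinate perturbation is small, yet the prediction moves sharply, which is why adversarial examples are a central robustness concern.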

Key Concepts in AI Safety: An Overview

Tim G. J. Rudner and Helen Toner | March 2021

This paper is the first installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. In it, the authors introduce three categories of AI safety issues: problems of robustness, assurance, and specification. Other papers in this series elaborate on these and further key concepts.