Tag Archive: Machine learning

Machine Intelligence for Scientific Discovery and Engineering Invention

Matthew Daniels, Autumn Toney, Melissa Flagg, and Charles Yang | May 2021

The advantages of nations depend in part on their access to new inventions, and modern applications of artificial intelligence can help accelerate invention in the years ahead. This data brief is a first step toward understanding how modern AI and machine learning have begun to accelerate growth across a wide array of science and engineering disciplines.

Mapping Research Agendas in U.S. Corporate AI Laboratories

Rebecca Gelles, Tim Hwang, and Simon Rodriguez | April 2021

Leading U.S. companies are investing in the broad research field of artificial intelligence (AI), but where, specifically, are they making these investments? This data brief provides an analysis of the research papers published by Amazon, Apple, Facebook, Google, IBM, and Microsoft over the past decade to better understand what work their labs are prioritizing, and the degree to which these companies have similar or different research agendas overall. The authors find that major “AI companies” are often focused on very different subfields within AI, and that the private sector may be failing to make research investments consistent with ensuring long-term national competitiveness.
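The brief draws on a large corpus of published research papers; as a rough illustration of the general approach, paper abstracts can be grouped into subfields with off-the-shelf tools. The toy abstracts below are hypothetical stand-ins, not CSET's data or the authors' actual method.

```python
# Minimal sketch: grouping paper abstracts into rough subfields with
# TF-IDF features and k-means clustering. Abstracts are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "We train a convolutional network for image classification.",
    "A transformer-based language model for machine translation.",
    "Policy gradients for robotic control via reinforcement learning.",
    "Pretraining objectives for large language models.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for abstract, cluster in zip(abstracts, kmeans.labels_):
    print(cluster, abstract[:50])
```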

CSET Data Research Assistant Simon Rodriguez joins this episode of The Data Exchange to discuss how research in machine learning and AI affects public consciousness.

Key Concepts in AI Safety: Interpretability in Machine Learning

Tim G. J. Rudner and Helen Toner | March 2021

This paper is the third installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces interpretability as a means to enable assurance in modern machine learning systems.
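As a concrete, simplified illustration of what interpretability tooling can look like, the sketch below computes a gradient-based saliency map, one common technique for showing which input features drive a model's prediction. The tiny model and random input are placeholders, not material from the paper.

```python
# Illustrative sketch of a gradient-based saliency map: how strongly
# each input pixel affects the model's score for its predicted class.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy model
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image"
scores = model(x)
scores[0, scores.argmax()].backward()  # gradient of the top class score

saliency = x.grad.abs().squeeze()  # (28, 28) per-pixel importance map
print(saliency.shape, saliency.max())
```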

Key Concepts in AI Safety: Robustness and Adversarial Examples

Tim G. J. Rudner and Helen Toner | March 2021

This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces adversarial examples, a major challenge to robustness in modern machine learning systems.
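For readers new to the topic, the sketch below implements the fast gradient sign method (FGSM), a classic recipe for constructing adversarial examples by nudging every input feature in the direction that increases the model's loss. The model, "image," and label are toy placeholders, not examples from the paper.

```python
# FGSM sketch: perturb the input along the sign of the loss gradient.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy model
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)
label = torch.tensor([3])  # hypothetical true class

loss = F.cross_entropy(model(x), label)
loss.backward()

epsilon = 0.1  # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
print("prediction before:", model(x).argmax().item(),
      "after:", model(x_adv).argmax().item())
```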

Key Concepts in AI Safety: An Overview

Tim G. J. Rudner and Helen Toner | March 2021

This paper is the first installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. In it, the authors introduce three categories of AI safety issues: problems of robustness, assurance, and specification. Other papers in this series elaborate on these and further key concepts.

Pentagon, Rivals to Play ‘Cat-and-Mouse Game’ with AI

National Defense Magazine | March 12, 2021

CSET Senior Fellow Andrew Lohn's work on machine learning vulnerabilities is cited in this article on the Department of Defense's efforts to adopt artificial intelligence technologies for a host of functions.

Using Machine Learning to Fill Gaps in Chinese AI Market Data

Zachary Arnold, Joanne Boisson, Lorenzo Bongiovanni, Daniel Chou, Carrie Peelman, and Ilya Rahkovsky | February 2021

In this proof-of-concept project, CSET and Amplyfi Ltd. used machine learning models and Chinese-language web data to identify Chinese companies active in artificial intelligence. Most of these companies were not labeled or described as AI-related in two high-quality commercial datasets. The authors' findings show that using structured data alone—even from the best providers—will yield an incomplete picture of the Chinese AI landscape.
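As a simplified illustration of the general technique (not the actual CSET and Amplyfi pipeline, which used Chinese-language web data and its own models), a text classifier can be trained to flag likely AI companies from their descriptions. The toy English descriptions and labels below are hypothetical.

```python
# Sketch: flag AI-related companies from short text descriptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

descriptions = [
    "Develops computer vision chips for autonomous driving",
    "Operates a chain of retail grocery stores",
    "Builds speech recognition and NLP platforms",
    "Manufactures industrial pumps and valves",
]
is_ai = [1, 0, 1, 0]  # placeholder labels

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(descriptions, is_ai)

print(clf.predict(["Provides facial recognition software"]))
```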

Comparing Corporate and University Publication Activity in AI/ML

Simon Rodriguez, Tim Hwang, and Rebecca Gelles | January 2021

Based on news coverage alone, it can seem as if corporations dominate research on artificial intelligence and machine learning compared to universities. Authors Simon Rodriguez, Tim Hwang, and Rebecca Gelles analyze a decade of research publication data and find that, in fact, universities are the dominant producers of AI papers. They also find that while corporations tend to generate more citations to the work they publish in the field, these “high-performing” papers are most frequently collaborations with university labs.
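As a hypothetical sketch of the kind of tabulation behind such findings, publication counts and average citations can be compared by sector once papers are tagged. The toy table below stands in for a real scholarly-literature dataset.

```python
# Count papers and average citations per sector from a tagged table.
import pandas as pd

papers = pd.DataFrame({
    "year": [2015, 2016, 2016, 2019, 2020],
    "sector": ["university", "corporate", "university",
               "corporate", "university"],
    "citations": [12, 45, 7, 60, 3],
})

summary = papers.groupby("sector")["citations"].agg(["count", "mean"])
print(summary)  # paper counts and mean citations by sector
```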

Hacking AI

Andrew Lohn | December 2020

Machine learning systems’ vulnerabilities are pervasive, and hackers and adversaries can easily exploit them. Managing the risks is therefore too large a task for the technology community to handle alone. In this primer, Andrew Lohn writes that policymakers must understand the threats well enough to assess the dangers that the United States, its military and intelligence services, and its civilians face when they use machine learning.