Tag Archive: Artificial intelligence

AI Ecosystem: Where Does India Stand Compared To The US & China

Analytics India Magazine
| April 19, 2021

Analytics India Magazine draws on data from CSET's report "Mapping India's AI Potential" to assess India's progress in emerging technology.

CSET Research Analyst Husanjot Chahal speaks with Fortune to discuss India's AI capabilities.

U.S. AI Workforce

Diana Gehlhaus, Ilya Rahkovsky
| April 2021

A lack of good data on the U.S. artificial intelligence workforce limits the potential effectiveness of policies meant to increase and cultivate this cadre of talent. In this issue brief, the authors bridge that information gap with new analysis on the state of the U.S. AI workforce, along with insight into the ongoing concern over AI talent shortages. Their findings suggest some segments of the AI workforce are more likely than others to be experiencing a supply-demand gap.

AI Hubs

Max Langenkamp, Melissa Flagg
| April 2021

U.S. policymakers need to understand the landscape of artificial intelligence talent and investment as AI becomes increasingly important to national and economic security. This knowledge is critical as leaders develop new alliances and work to curb China’s growing influence. As an initial effort, an earlier CSET report, “AI Hubs in the United States,” examined the domestic AI ecosystem by mapping where U.S. AI talent is produced, where it is concentrated, and where AI private equity funding goes. Given the global nature of the AI ecosystem and the importance of international talent flows, this paper looks for the centers of AI talent and investment in regions and countries that are key U.S. partners: Europe and the CANZUK countries (Canada, Australia, New Zealand, and the United Kingdom).

The Public AI Research Portfolio of China’s Security Forces

Dewey Murdick, Daniel Chou, Ryan Fedasiuk, Emily S. Weinstein
| March 2021

New analytic tools are used in this data brief to explore the public artificial intelligence (AI) research portfolio of China’s security forces. The methods contextualize Chinese-language scholarly papers that claim a direct working affiliation with components of the Ministry of Public Security, People’s Armed Police Force, and People’s Liberation Army. The authors review potential uses of computer vision, robotics, natural language processing, and general AI research.

CSET Data Research Assistant Simon Rodriguez joins this episode of The Data Exchange to discuss how research in machine learning and AI affects public consciousness.

Mapping India’s AI Potential

Husanjot Chahal, Sara Abdulla, Jonathan Murdick, Ilya Rahkovsky
| March 2021

With its massive information technology workforce, thriving research community and a growing technology ecosystem, India has a significant stake in the development of artificial intelligence globally. Drawing from a variety of original CSET datasets, the authors evaluate India’s potential for AI by examining its progress across five categories of indicators pertinent to AI development: talent, research, patents, companies and investments, and compute.

Key Concepts in AI Safety: Interpretability in Machine Learning

Tim G. J. Rudner, Helen Toner
| March 2021

This paper is the third installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces interpretability as a means to enable assurance in modern machine learning systems.
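As a concrete illustration of the idea, below is a minimal sketch of one common interpretability technique, gradient-based saliency, applied to a toy logistic-regression model. The weights and input are illustrative assumptions, not drawn from the paper; the sketch only shows how the gradient of a prediction with respect to each input feature can be used to gauge that feature's influence.

```python
# Minimal sketch of gradient-based saliency on a toy logistic-regression model.
# The weights and input below are illustrative assumptions, not from the paper.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights and a single input whose prediction we want to explain.
w = np.array([1.5, -0.2, 0.7])
x = np.array([0.9, 0.1, -0.4])

p = sigmoid(w @ x)              # model prediction for this input
saliency = p * (1.0 - p) * w    # dp/dx for a logistic model: sigmoid'(w.x) * w

# Rank features by the magnitude of their influence on this prediction.
for i in np.argsort(-np.abs(saliency)):
    print(f"feature {i}: saliency {saliency[i]:+.3f}")
```

Larger models replace the closed-form gradient with automatic differentiation, but the basic idea of ranking input features by their influence on a single prediction is the same.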

Key Concepts in AI Safety: Robustness and Adversarial Examples

Tim G. J. Rudner, Helen Toner
| March 2021

This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces adversarial examples, a major challenge to robustness in modern machine learning systems.
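For readers unfamiliar with the concept, below is a minimal sketch of the fast gradient sign method (FGSM), one standard way of constructing adversarial examples, applied to a toy logistic-regression model. The weights, input, and perturbation size are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the fast gradient sign method (FGSM) on a toy
# logistic-regression model. All weights, inputs, and the epsilon value
# are illustrative assumptions, not from the paper.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights and a correctly classified input (true label y = 1).
w = np.array([2.0, -1.0, 0.5])
x = np.array([0.4, -0.3, 0.8])
y = 1.0

# Gradient of the cross-entropy loss with respect to the *input*:
# dL/dx = (sigmoid(w.x) - y) * w
grad_x = (sigmoid(w @ x) - y) * w

# FGSM step: nudge each feature by eps in the direction that increases the loss.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:      ", sigmoid(w @ x))      # well above 0.5 -> correct
print("adversarial prediction:", sigmoid(w @ x_adv))  # drops below 0.5 -> label flips
```

A small, bounded perturbation of each input feature is enough to flip the model's decision, which is the kind of robustness failure the paper examines.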

Key Concepts in AI Safety: An Overview

Tim G. J. Rudner, Helen Toner
| March 2021

This paper is the first installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. In it, the authors introduce three categories of AI safety issues: problems of robustness, assurance, and specification. Other papers in this series elaborate on these and further key concepts.