A globally competitive AI workforce hinges on educating, developing, and retaining the best and brightest AI talent. This issue brief compares efforts to integrate AI education in China and the United States and examines the advantages and disadvantages each approach entails. The authors consider key differences in system design, oversight, and strategic planning, then explore implications for the U.S. national security community.
CSET's Director of Strategy Helen Toner sat down with National Defense to discuss AI failures from her and Zachary Arnold's CSET report "AI Accidents: An Emerging Threat."
A new CSET report "Headline or Trend Line? Evaluating Chinese-Russian Collaboration in AI" uses data-backed analysis to address the Sino-Russian partnership and its effect on U.S. strategic interests.
Looking at AI in the automotive industry, a CSET report "AI Accidents: An Emerging Threat" identifies AI failures and calls for a greater emphasis on research and development to improve safety.
Margarita Konaev, Andrew Imbrie, Ryan Fedasiuk, Emily S. Weinstein, Katerina Sedova, and James Dunham
August 2021
Chinese and Russian government officials are keen to publicize their countries’ strategic partnership in emerging technologies, particularly artificial intelligence. This report evaluates the scope of cooperation between China and Russia as well as relative trends over time in two key metrics of AI development: research publications and investment. The findings expose gaps between aspirations and reality, bringing greater accuracy and nuance to current assessments of Sino-Russian tech cooperation.
CSET Research Analyst Micah Musser discusses his research using GPT-3 to generate disinformation, and the difficulty of using AI to identify AI-generated disinformation.
In Bank Automation News' latest podcast, CSET's Micah Musser breaks down how AI and ML can both heighten and hinder security, and how financial institutions can separate marketing fiction from cybersecurity reality.
In their report "AI Accidents," authors Helen Toner and Zachary Arnold make a noteworthy effort to name critical issues surrounding AI risk probability and impact.