News

In the news section, our experts take center stage in discussions on technology and policy. Discover articles featuring insights from our experts or citing our research, which helps shape key conversations in the evolving landscape of emerging technology and policy.

Dewey Murdick and Miriam Vogel shared their expert analysis in an op-ed published by Fortune. In their piece, they highlight the urgent need for the United States to strengthen its AI literacy and incident reporting systems to maintain global leadership amid rapidly advancing international competition, especially from China’s booming AI sector.


CSET report "Securing AI" highlights the number of ways hackers can compromise AI by targeting its data.

In an interview with Fortune, Margarita Konaev breaks down Russia's AI ambitions and how the current economic sanctions are hindering its progress.

If the U.S. is to succeed in semiconductor manufacturing, it must recruit foreign-born talent, according to Research Analyst Will Hunt in an interview with the South China Morning Post.

AI-powered disinformation, present and future

Towards Data Science
| March 23, 2022

Research Fellow Katerina Sedova joined the Towards Data Science podcast to discuss malicious applications of AI.

The new grant will contribute to the CyberAI Project's research at the intersection of artificial intelligence and cybersecurity.

The Nvidia hack could help China, according to CSET's policy.ai newsletter.

Research Analyst Dakota Cary discusses China's use of cyber schools to strengthen its cyber talent.

Russia’s AI industry faces collapse

Politico
| March 8, 2022

Margarita Konaev discusses Russia's stalled AI progress as a result of new technology sanctions and brain drain.

Margarita Konaev discusses concerns over misinformation about the war in Ukraine circulating on the internet.

Hacking Poses Risks for Artificial Intelligence

SIGNAL Online
| March 1, 2022

CSET Senior Fellow Andrew Lohn discusses the potential for AI and machine learning software to be susceptible to data poisoning.