Cybersecurity

CSET's Andrew Lohn provided expert analysis in a BBC article on the urgent need to build cybersecurity measures into artificial intelligence systems.

In an interview with ABC News Live, CSET's Helen Toner discussed the rapid growth of artificial intelligence, with particular emphasis on its implications for national security.

Securing AI Makes for Safer AI

John Bansemer and Andrew Lohn
| July 6, 2023

Recent discussions of AI have focused on safety, reliability, and other risks. Lost in this debate is the real need to secure AI against malicious actors. This blog post applies lessons from traditional cybersecurity to emerging AI model risks.

In a WIRED article, CSET's Emily S. Weinstein contributed her expertise to the discussion of encryption chips produced by Hualan Microelectronics, a Chinese company flagged by the US Department of Commerce for its ties to the Chinese military.

A CSET report was cited in an article published by GCN discussing Montana's proposed statewide ban on TikTok and other social media apps.

An AI reckoning coming in August

Politico
| May 4, 2023

CSET's Heather Frase was interviewed for Politico's newsletter, in a segment on the U.S. government's plan to conduct a public experiment at the DEF CON 31 hacking convention in August.

The Eurasian Times cited a CSET report by Jack Corrigan, Sergio Fontanez, and Michael Kratsios in an article on the tightening of cybersecurity and espionage laws by the US and China.

CSET's Heather Frase was interviewed by The Financial Times for an article about OpenAI's red team and its mission to test and mitigate the risks of GPT-4.

CSET's Josh A. Goldstein was recently quoted in a WIRED article about state-backed hacking groups using fake LinkedIn profiles to steal information from their targets. Goldstein offered insight into related issues in the disinformation space.

An article published in OODA Loop cited a report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova, produced in collaboration with OpenAI and the Stanford Internet Observatory. The report explores how language models could be misused for influence operations and provides a framework for assessing mitigation strategies.