CyberAI

The Coming Age of AI-Powered Propaganda

Foreign Affairs
| April 7, 2023

CSET's Josh A. Goldstein and OpenAI's Girish Sastry co-authored an insightful article on language models and disinformation that was published in Foreign Affairs.

Breaking Defense published an article that explores both the potential benefits and risks of generative artificial intelligence, featuring insights from CSET's Micah Musser.

CSET Senior Fellow Dr. Heather Frase discussed her research on effectively evaluating and assessing AI systems across a broad range of applications.

Artificial intelligence systems are rapidly being deployed in all sectors of the economy, yet a significant body of research has demonstrated that these systems can be vulnerable to a wide array of attacks. How do these problems differ from more common cybersecurity vulnerabilities? What legal ambiguities do they create, and how can organizations ameliorate them? This report, produced in collaboration with the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, presents recommendations from a July 2022 workshop of experts convened to help answer these questions.

NPR published an article featuring expert insight from CSET's Josh Goldstein.

CSET's Josh A. Goldstein was recently quoted in a WIRED article about state-backed hacking groups using fake LinkedIn profiles to steal information from their targets. Goldstein offered insight into broader issues in the disinformation space.

An article published in OODA Loop cited a report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova, produced in collaboration with OpenAI and the Stanford Internet Observatory. The report explores the potential future misuse of language models for influence operations and provides a framework for assessing mitigation strategies.

Militaries seek to harness artificial intelligence for decision advantage. Yet AI systems introduce a new source of uncertainty: the likelihood of technical failures. Such failures could interact with strategic and human factors in ways that lead to miscalculation and escalation in a crisis or conflict. Harnessing AI effectively requires managing these risk trade-offs by reducing the likelihood of AI failures and containing their consequences.

Examining Singapore’s AI Progress

Kayla Goode, Heeu Millie Kim, and Melissa Deng
| March 2023

Despite its small size, the city-state of Singapore continues to rise as an artificial intelligence hub, presenting significant opportunities for international collaboration. Initiatives such as fast-tracking patent approval, incentivizing private investment, and addressing talent shortfalls are fueling the country's rapid growth in AI. These initiatives offer potential models for others seeking to leverage the technology, as well as opportunities for collaboration in AI education and talent exchanges, research and development, and governance. The United States and Singapore share similar goals regarding the development and use of trusted and responsible AI, and should continue to foster greater collaboration among public- and private-sector entities.

Don’t Neglect ‘Small-Data’ AI

Defense One
| February 13, 2023