CyberAI

On September 13, CSET experts discussed ways the U.S. can promote innovation to maintain its competitive advantage in emerging technologies.

The much-anticipated National Cyber Workforce and Education Strategy (NCWES) lays out a comprehensive set of strategic objectives for training and producing more cyber talent. Rather than prescribing a blanket policy, it prioritizes and encourages the development of localized cyber ecosystems that serve the needs of a variety of communities. This much-needed, reinvigorated approach acknowledges the persistent inequities in both cyber education and workforce development and offers strategies for mitigating them. In this blog post, we highlight key elements that could easily be overlooked.

Large language models (LLMs) could be used by malicious actors to generate disinformation at scale. But how likely is this risk, and what economic incentives do propagandists actually face to turn to LLMs? New analysis uploaded to arXiv and summarized here suggests that a well-run human-machine team using existing LLMs (even open-source models that are not cutting edge) would almost certainly save a propagandist money on content generation relative to a human-only operation.

Confidence-Building Measures for Artificial Intelligence: Workshop Proceedings

Margarita Konaev and Andrew Lohn
| August 1, 2023

Two CSET researchers coauthored a new multi-organization report on the safety of AI systems, led by OpenAI and the Berkeley Risk and Security Lab. The report, published on arXiv, identifies six confidence-building measures (CBMs) that AI labs could apply to reduce hostility, prevent conflict escalation, and improve trust between parties with respect to foundation AI models.

Jenny Jun testified before the House Foreign Affairs Subcommittee on the Indo-Pacific at a hearing titled "Illicit IT: Bankrolling Kim Jong Un."

On July 21, the White House announced voluntary commitments from seven AI firms to ensure safe, secure, and transparent AI. CSET’s research provides important context to this discussion.

In a BBC article discussing the urgent need to integrate cybersecurity measures into artificial intelligence systems, CSET's Andrew Lohn provided expert analysis.

In a Forbes article on the challenges posed by AI-generated content in political campaigns and the upcoming presidential election, CSET's Josh A. Goldstein offered his expert take.

Securing AI Makes for Safer AI

John Bansemer and Andrew Lohn
| July 6, 2023

Recent discussions of AI have focused on safety, reliability, and other risks. Lost in this debate is the real need to secure AI against malicious actors. This blog post applies lessons from traditional cybersecurity to emerging AI-model risks.

China Isn’t Losing Sleep Over ChatGPT

The Diplomat
| July 3, 2023

CSET’s Micah Musser shared his insights in an op-ed published in The Diplomat, discussing the misconception that the United States and China are locked in an AI race, with a focus on the field of language modeling.