Tag Archive: Cybersecurity

Scaling AI

Andrew Lohn
| December 2023

While recent progress in artificial intelligence (AI) has relied primarily on increasing the size and scale of models and the computing budgets used to train them, we ask whether those trends will continue. Financial incentives weigh against further scaling, and additional investment can yield diminishing returns. These effects may already be slowing growth among the very largest models. Future progress in AI may therefore depend more on ideas for shrinking models and on inventive use of existing models than on simply increasing investment in compute resources.

Controlling Large Language Model Outputs: A Primer

Jessica Ji, Josh A. Goldstein, Andrew Lohn
| December 2023

Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But, how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why this objective is so challenging.

The much-anticipated National Cyber Workforce and Education Strategy (NCWES) lays out a comprehensive set of strategic objectives for training and producing more cyber talent. Rather than prescribing a blanket policy, it prioritizes and encourages the development of localized cyber ecosystems that serve the needs of a variety of communities. This much-needed, reinvigorated approach recognizes the persistent inequities in both cyber education and workforce development and offers strategies for mitigating them. In this blog post, we highlight key elements that could be easily overlooked.

Large language models (LLMs) could potentially be used by malicious actors to generate disinformation at scale. But how likely is this risk, and what economic incentives do propagandists actually face to turn to LLMs? New analysis uploaded to arXiv and summarized here suggests that a well-run human-machine team using existing LLMs (even open-source models that are not cutting edge) would almost certainly save a propagandist money on content generation relative to a human-only operation.

In a BBC article that discusses the urgent need to integrate cybersecurity measures into artificial intelligence systems, CSET's Andrew Lohn provided his expert analysis.

In an interview with ABC News Live, CSET's Helen Toner discussed the rapid growth of artificial intelligence, with particular emphasis on its implications for national security.

Securing AI Makes for Safer AI

John Bansemer, Andrew Lohn
| July 6, 2023

Recent discussions of AI have focused on safety, reliability, and other risks. Lost in this debate is the pressing need to secure AI systems against malicious actors. This blog post applies lessons from traditional cybersecurity to emerging AI-model risks.

CSET's Andrew Lohn and Joshua A. Goldstein share their insights on the difficulties of identifying AI-generated text in disinformation campaigns in their op-ed in Lawfare.

In a WIRED article, CSET's Emily S. Weinstein lent her expertise to the discussion of encryption chips produced by Hualan Microelectronics, a Chinese company placed on a US Department of Commerce trade list over its ties to the Chinese military.

Is China Gaining a Lead in the Tech Arms Race?

Foreign Policy
| June 8, 2023

In a weekly digest published by Foreign Policy, CSET's Emily S. Weinstein offered her expert analysis on a recent study conducted by the Australian Strategic Policy Institute.