Category Archive: Uncategorized

In his op-ed featured in Breaking Defense, CSET's Sam Bresnick offers a deep dive into China's remarkable progress in bolstering space resilience, with a specific focus on tactically responsive space launch (TRSL). Read More

In celebration of Disability Pride Month, the CSET Inclusion Alliance invited guest speaker Linnea Lassiter to shed light on the intersection of technology policy and people with disabilities. Lassiter's insights aimed to encourage learning from, supporting, and celebrating the disabled community. Read More

Large language models (LLMs) could potentially be used by malicious actors to generate disinformation at scale. But how likely is this risk, and what economic incentives do propagandists actually face to turn to LLMs? A new analysis, uploaded to arXiv and summarized here, suggests it is all but certain that a well-run human-machine team using existing LLMs (even open-source models that are not cutting edge) would save a propagandist money on content generation relative to a human-only operation. Read More

How can we measure the reliability of machine learning systems? And do these measures really help us predict real world performance? A recent study by the Stanford Intelligent Systems Laboratory, supported by CSET funding, provides new evidence that models may perform well on certain reliability metrics while still being unreliable in other ways. This blog post summarizes the study’s results, which suggest that policymakers and regulators should not think of “reliability” or “robustness” as a single, easy-to-measure property of an AI system. Instead, AI reliability requirements will need to consider which facets of reliability matter most for any given use case, and how those facets can be evaluated. Read More

Two CSET researchers are coauthors of a new multi-organization report on the safety of AI systems, led by OpenAI and the Berkeley Risk and Security Lab. The report, published on arXiv, identifies six confidence-building measures (CBMs) that AI labs could apply to reduce hostility, prevent conflict escalation, and improve trust between parties in the context of foundation AI models. Read More

CSET has received a lot of questions about LLMs and their implications. But questions and discussions tend to miss some basics about LLMs and how they work. In this blog post, we ask CSET’s NLP Engineer, James Dunham, to help us explain LLMs in plain English. Read More

Collaborations between researchers and policymakers are necessary for progress, but can be challenging in practice. This blog post reports on recent discussions among privacy experts about the obstacles they face when engaging in the policy space and offers advice on how to overcome these barriers. Read More

On July 21, the White House announced voluntary commitments from seven AI firms to ensure safe, secure, and transparent AI. CSET’s research provides important context to this discussion. Read More

WIRED published an article citing a CSET report authored by John VerWey. The article delves into the semiconductor industry's growing water demand as the US aims to enhance chip production. This surge stems from the substantial volumes of water required to clean silicon wafers during the manufacturing process. Read More

With the rapid integration of AI into our daily lives, we must all learn when and whether to trust the technology, understand its capabilities and limitations, and adapt as these systems — and our functional relationships with them — evolve. Read More