In his op-ed featured in Breaking Defense, CSET's Sam Bresnick offers a deep dive into China's remarkable progress in bolstering space resilience, with a specific focus on tactically responsive space launch (TRSL). Read More
In celebration of Disability Pride Month, the CSET Inclusion Alliance invited guest speaker Linnea Lassiter to shed light on the intersection of technology policy and people with disabilities. Lassiter's insights aimed to encourage learning from, supporting, and celebrating the disabled community. Read More
Large language models (LLMs) could potentially be used by malicious actors to generate disinformation at scale. But how likely is this risk, and what economic incentives do propagandists actually face to turn to LLMs? New analysis uploaded to arXiv and summarized here suggests it is all but certain that a well-run human-machine team using existing LLMs (even open-source models that are not cutting edge) would save a propagandist money on content generation relative to a human-only operation. Read More
How can we measure the reliability of machine learning systems? And do these measures really help us predict real world performance? A recent study by the Stanford Intelligent Systems Laboratory, supported by CSET funding, provides new evidence that models may perform well on certain reliability metrics while still being unreliable in other ways. This blog post summarizes the study’s results, which suggest that policymakers and regulators should not think of “reliability” or “robustness” as a single, easy-to-measure property of an AI system. Instead, AI reliability requirements will need to consider which facets of reliability matter most for any given use case, and how those facets can be evaluated. Read More
Two CSET researchers are coauthors of a new multi-organization report on the safety of AI systems, led by OpenAI and the Berkeley Risk and Security Lab. The report, published on arXiv, identifies six confidence-building measures (CBMs) that AI labs could apply to reduce hostility, prevent conflict escalation, and improve trust between parties with respect to foundation AI models. Read More
CSET has received a lot of questions about LLMs and their implications. But these questions and discussions tend to miss some basics about what LLMs are and how they work. In this blog post, we ask CSET's NLP Engineer, James Dunham, to help us explain LLMs in plain English. Read More
Collaborations between researchers and policymakers are necessary for progress, but can be challenging in practice. This blog post reports on recent discussions among privacy experts about the obstacles they face when engaging in the policy space and offers advice on how to overcome these barriers. Read More
On July 21, the White House announced voluntary commitments from seven AI firms to ensure safe, secure, and transparent AI. CSET’s research provides important context to this discussion. Read More
WIRED published an article citing a CSET report authored by John VerWey. The article delves into the semiconductor industry's growing water demand as the US aims to expand chip production, a surge driven by the substantial volumes of water needed to clean silicon wafers during manufacturing. Read More
With the rapid integration of AI into our daily lives, we must all learn when and whether to trust the technology, understand its capabilities and limitations, and adapt as these systems — and our functional relationships with them — evolve. Read More