Blog

Delve into insightful blog posts from CSET experts exploring the nexus of technology and policy. Navigate through in-depth analyses, expert op-eds, and thought-provoking discussions on inclusion and diversity within the realm of technology.

The European Union's Artificial Intelligence Act has officially come into force today after more than five years of legislative processes and negotiations. While marking a significant milestone, it also initiates a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, such as rules for general-purpose AI and governance structures, and provides insights into its timeline and future expectations.


There’s a lot to digest in the White House’s October 30 AI Executive Order. Our tracker is a useful starting point for identifying key provisions and monitoring the government’s progress against specific milestones, but grappling with the substance is an entirely different matter. This blog post, focusing on Section 4 of the EO (“Developing Guidelines, Standards, and Best Practices for AI Safety and Security”), is the first in a series that summarizes interesting provisions, shares some of our initial reactions, and highlights some of CSET’s research that may help the USG tackle the EO.

The Executive Order on Safe, Secure, and Trustworthy AI: Decoding Biden’s AI Policy Roadmap

Ronnie Kinoshita, Luke Koslosky, and Tessa Baker
| May 3, 2024

On October 30, 2023, the Biden administration released its long-awaited Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. CSET has broken down the EO, focusing on specific government deliverables. Our EO Provision and Timeline tracker lists which agencies are responsible for actioning EO provisions and their deadlines.

On September 19, 2023, one of us had the opportunity to testify before the Senate Homeland Security and Governmental Affairs Subcommittee on Emerging Threats and Spending Oversight, chaired by Senator Maggie Hassan. Members of the subcommittee asked for an actionable plan for the U.S. government (USG) to address concerns about emerging technologies such as AI, synthetic biology and genetic engineering, and quantum technologies. This blog post summarizes near-term policy recommendations based on CSET research, along with our best assessments of what is needed to address emerging concerns about these technologies.

On September 8, 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) released their Bipartisan Framework on AI Legislation. The framework includes many ideas and recommendations that CSET research has highlighted over the past four years. This blog post highlights some of the most relevant reports and CSET’s perspective on the framework’s elements.

Universities can build more inclusive computer science programs by addressing the reasons that students may be deterred from pursuing the field. This blog post explores some of those reasons, features of CS education that cause them, and provides recommendations on how to design learning experiences that are safer and more exploratory for everyone.

The much-anticipated National Cyber Workforce and Education Strategy (NCWES) lays out a comprehensive set of strategic objectives for training and producing more cyber talent. Rather than prescribing a blanket policy, it prioritizes and encourages the development of localized cyber ecosystems that serve the needs of a variety of communities. This much-needed, reinvigorated approach recognizes the persistent inequities in both cyber education and workforce development and offers strategies for mitigating them. In this blog post, we highlight key elements that could be easily overlooked.

Securing AI Makes for Safer AI

John Bansemer and Andrew Lohn
| July 6, 2023

Recent discussions of AI have focused on safety, reliability, and other risks. Lost in this debate is the real need to secure AI against malicious actors. This blog post applies lessons from traditional cybersecurity to emerging AI-model risks.

Large Language Models in Biology

Steph Batalis, Caroline Schuerger, and Vikram Venkatram
| June 16, 2023

Steph Batalis, Caroline Schuerger, and Vikram Venkatram explore three notable areas in the life sciences where LLMs are catalyzing meaningful advances: drug discovery, genetics, and precision medicine.

The goal of this guide is to acquaint researchers and analysts with tools, resources, and best practices to ensure security when collecting or accessing open-source information.