Blog

Delve into insightful blog posts from CSET experts exploring the nexus of technology and policy. Navigate through in-depth analyses, expert op-eds, and thought-provoking discussions on inclusion and diversity within the realm of technology.

The European Union's Artificial Intelligence Act has officially come into force today after more than five years of legislative processes and negotiations. While marking a significant milestone, it also initiates a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, such as rules for general-purpose AI and governance structures, and provides insights into its timeline and future expectations.

On September 19, 2023, one of us had the opportunity to testify before the Senate Homeland Security and Governmental Affairs Subcommittee on Emerging Threats and Spending Oversight, chaired by Senator Maggie Hassan. Members of the subcommittee asked for an actionable plan for the U.S. government (USG) to address concerns about emerging technologies, including AI, synthetic biology and genetic engineering, and quantum technologies. This blog post summarizes near-term policy recommendations based on CSET research, as well as our best guesses about what is needed to address emerging concerns about these technologies.

This blog post by CSET’s Executive Director Dewey Murdick explores two different metaphorical lenses for governing the frontier of AI. The "Space Exploration Approach" likens AI models to spacecraft venturing into unexplored territories, requiring detailed planning and regular updates. The "Snake-Filled Garden Approach" views AI as a garden with both harmless and dangerous 'snakes,' necessitating rigorous testing and risk assessment. In the post, Dewey examines these metaphors and the different ways they can inform an AI governance strategy that balances innovation with safety, all while emphasizing the importance of ongoing learning and adaptability.

On September 8, 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) released their Bipartisan Framework on AI Legislation. The framework includes many ideas and recommendations that CSET research has highlighted over the past four years. This blog post highlights some of the most relevant reports and CSET’s perspective on the framework’s elements.

Recent announcements from both Pentagon and congressional leaders offer a significant opportunity for rapidly delivering autonomous systems technology at scale for U.S. warfighters well into the future. Dr. Jaret Riddick, CSET Senior Fellow and former Principal Director for Autonomy in USD(R&E), offers his perspective on DOD’s Replicator Initiative and recent legislative proposals about DOD autonomy.

Universities can build more inclusive computer science programs by addressing the reasons that students may be deterred from pursuing the field. This blog post explores some of those reasons and the features of CS education that cause them, and provides recommendations on how to design learning experiences that are safer and more exploratory for everyone.

The much-anticipated National Cyber Workforce and Education Strategy (NCWES) provides a comprehensive set of strategic objectives for training and producing more cyber talent. Rather than prescribing a blanket policy, it prioritizes and encourages the development of more localized cyber ecosystems that serve the needs of a variety of communities. This much-needed, reinvigorated approach acknowledges the unavoidable inequities in both cyber education and workforce development and provides strategies for mitigating them. In this blog post, we highlight key elements that could be easily overlooked.

Why Improving AI Reliability Metrics May Not Lead to Reliability

Romeo Valentin and Helen Toner
| August 8, 2023

How can we measure the reliability of machine learning systems? And do these measures really help us predict real-world performance? A recent study by the Stanford Intelligent Systems Laboratory, supported by CSET funding, provides new evidence that models may perform well on certain reliability metrics while still being unreliable in other ways. This blog post summarizes the study’s results, which suggest that policymakers and regulators should not think of “reliability” or “robustness” as a single, easy-to-measure property of an AI system. Instead, AI reliability requirements will need to consider which facets of reliability matter most for any given use case, and how those facets can be evaluated.
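
To illustrate the point about facets of reliability, here is a minimal, hypothetical sketch (not taken from the Stanford study; all data, numbers, and helper functions are invented) showing how the same set of predictions can score well on accuracy while a second metric, calibration error, flags a problem:

```python
# Hypothetical, illustrative sketch only: all data and numbers are invented.
# It shows that one "reliability" metric (accuracy) can look good while
# another facet (calibration) reveals a problem.
import numpy as np

def accuracy(probs, labels):
    """Fraction of examples where the highest-probability class is correct."""
    return float(np.mean(probs.argmax(axis=1) == labels))

def expected_calibration_error(probs, labels, n_bins=10):
    """Average gap between predicted confidence and observed accuracy, per confidence bin."""
    confidences = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - confidences[in_bin].mean())
    return float(ece)

# Simulate a binary classifier that is right ~85% of the time but always
# reports ~97% confidence, i.e., accurate yet badly overconfident.
rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, size=n)
is_correct = rng.random(n) < 0.85
predicted = np.where(is_correct, labels, 1 - labels)
confidence = np.clip(rng.normal(0.97, 0.02, size=n), 0.5, 1.0)

probs = np.zeros((n, 2))
probs[np.arange(n), predicted] = confidence
probs[np.arange(n), 1 - predicted] = 1.0 - confidence

print("accuracy:", accuracy(probs, labels))                              # looks reliable
print("calibration error:", expected_calibration_error(probs, labels))   # reveals overconfidence
```

The point of the sketch is only that "reliability" is not a single number; which facets matter, and how they should be measured, depends on the use case.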

CSET's Daniel Chou provides an update on previous CSET research exploring the AI research portfolio of China's security forces.

Unwanted Foreign Transfers of U.S. Technology: Proposed Prevention Strategies

William Hannas and Huey-Meei Chang
| September 10, 2021

The transfer of national security-relevant technology, especially to peer competitors, is a well-documented problem, and efforts to address it must be balanced against the benefits of free exchange. The following propositions, covering six facets of the transfer issue, reflect CSET’s current recommendations on the matter.

Forecasting the Election’s Effect on American Opinion of China

Catherine Aiken and Michael Page
| November 2, 2020

Foretell was CSET's crowd forecasting pilot project focused on technology and security policy. It connected historical and forecast data on near-term events with the big-picture questions that are most relevant to policymakers. In January 2022, Foretell became part of a larger forecasting program to support U.S. government policy decisions called INFER, which is run by the Applied Research Laboratory for Intelligence and Security at the University of Maryland and Cultivate Labs.