Blog

Delve into insightful blog posts from CSET experts exploring the nexus of technology and policy. Navigate through in-depth analyses, expert op-eds, and thought-provoking discussions on inclusion and diversity within the realm of technology.

The European Union's Artificial Intelligence Act has officially come into force today after more than five years of legislative processes and negotiations. While marking a significant milestone, it also initiates a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, such as rules for general-purpose AI and governance structures, and provides insights into its timeline and future expectations.


Exploring AI Methods in Biology Research

Steph Batalis, Catherine Aiken, and James Dunham
| April 21, 2025

Opposing narratives around AI for biotechnology raise the question: how are biotech researchers actually using AI in published research? CSET’s Steph Batalis, Catherine Aiken, and James Dunham explored this question by leveraging CSET’s merged academic corpus, enriched publication metadata, and research clusters.

Artificial intelligence is becoming more integrated into the sciences. One of the scientific fields experiencing this is computational biology, which uses computer modeling to understand biological mechanisms and systems. This blog post outlines key research trends in these areas and explains how advances in AI can make computational biology faster and more efficient, ultimately improving human health and well-being.

Now that the first set of milestones has passed for the Biden administration’s October 2023 executive order on artificial intelligence, where do things stand for biotech? This blog post gives an overview of the most recent action items, with a recap of expert commentary from CSET’s June 2024 Webinar on the AIxBio Policy Landscape.

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the third blog post in a three-part series explaining some key elements of how LLMs function. This blog post explains how AI developers are finding ways to use LLMs for much more than just generating text.

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the second blog post in a three-part series explaining some key elements of how LLMs function. This blog post explores fine-tuning—a set of techniques used to change the types of output that pre-trained models produce.

China’s Hybrid Economy: What to Do about BGI?

Anna Puglisi
| February 2, 2024

As the U.S. government considers banning genomics companies from China through the Biosecure Act, it raises a broader question of how the U.S. and other market economies should deal with China’s national champions. This blog post provides an overview of BGI and how China’s industrial policy shapes technology development.

CSET’s Must Read Research: A Primer

Tessa Baker
| December 18, 2023

This guide provides a run-down of CSET’s research since 2019 for first-time visitors and long-term fans alike. Quickly get up to speed on our “must-read” research and learn about how we organize our work.

Breaking Down the Biden AI EO: Screening DNA Synthesis and Biorisk

Steph Batalis and Vikram Venkatram
| November 16, 2023

The recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence will have major implications for biotechnology. The EO demonstrates that the White House considers biorisk a major concern for AI safety and security. In this blog post, CSET’s bio experts explain the bio-relevant takeaways of the executive order, provide additional context, and note their remaining questions about its implementation.

There’s a lot to digest in the White House’s October 30 AI Executive Order. Our tracker is a useful starting point to identify key provisions and monitor the government’s progress against specific milestones, but grappling with the substance is an entirely different matter. This blog post, focusing on Section 4 of the EO (“Developing Guidelines, Standards, and Best Practices for AI Safety and Security”), is the first in a series that summarizes interesting provisions, shares some of our initial reactions, and highlights CSET research that may help the USG tackle the EO.

This blog post by CSET’s Executive Director Dewey Murdick explores two metaphorical lenses for governing the frontier of AI. The "Space Exploration Approach" likens AI models to spacecraft venturing into unexplored territories, requiring detailed planning and regular updates. The "Snake-Filled Garden Approach" views AI as a garden with both harmless and dangerous 'snakes,' necessitating rigorous testing and risk assessment. In the post, Dewey examines these metaphors and the different ways they can inform an AI governance strategy that balances innovation with safety, all while emphasizing the importance of ongoing learning and adaptability.