Blog

Delve into insightful blog posts from CSET experts exploring the nexus of technology and policy. Navigate through in-depth analyses, expert op-eds, and thought-provoking discussions on inclusion and diversity within the realm of technology.

Featured Article

For Government Use of AI, What Gets Measured Gets Managed

Lawfare
| March 28, 2024

In their op-ed featured in Lawfare, CSET’s Matthew Burtell and Helen Toner shared their expert analysis on the significant implications of government procurement and deployment of artificial intelligence (AI) systems, emphasizing the need for high ethical and safety standards.

In their op-ed featured in Breaking Defense, CSET's Sam Bresnick and Emelia Probasco provide their expert analysis on the involvement of US tech giants in conflicts such as the Ukraine war, raising important questions about these companies' roles and potential entanglements in future conflicts, particularly those involving Taiwan.

A recent topic of contention among artificial intelligence researchers has been whether large language models can exhibit unpredictable ("emergent") jumps in capability as they are scaled up. These arguments have found their way into policy circles and the popular press, often in simplified or distorted ways that have created confusion. This blog post explores the disagreements around emergence and their practical relevance for policy.

China Bets Big on Military AI

Center for European Policy Analysis
| April 3, 2024

In his op-ed published by the Center for European Policy Analysis (CEPA), CSET’s Sam Bresnick shared his expert analysis on China’s evolving military capabilities and its growing emphasis on battlefield information and AI.

Riding the AI Wave: What’s Happening in K-12 Education?

Ali Crawford and Cherry Wu
| April 2, 2024

Over the past year, artificial intelligence has quickly become a focal point in K-12 education. This blog post describes new and existing K-12 AI education efforts so that U.S. policymakers and other decision-makers may better understand what’s happening in practice.

Happy Women's History Month! CSET recognizes and celebrates women in national security, tech policy, literature, and business.

The Carnegie Classification of Institutions of Higher Education is making changes that will drastically simplify the criteria determining its highly coveted R1 top-tier research classification. Last year, CSET Senior Fellow Jaret Riddick wrote about a new law, Section 223 of the 2023 National Defense Authorization Act, intended to leverage the existing Carnegie classification criteria to increase defense research capacity at historically Black colleges and universities (HBCUs). Now, research is needed to understand how the changes proposed for the 2025 classification criteria will affect U.S. Department of Defense goals for eligible HBCU partners.

This blog post assesses how different priorities can change the risk-benefit calculus of open foundation models, and provides divergent answers to the question: given current AI capabilities, what might happen if the U.S. government left the open AI ecosystem unregulated? By answering this question from several perspectives, the post highlights the dangers of hastily committing to any particular course of action without weighing the potentially beneficial, risky, and ambiguous implications of open models.

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the third blog post in a three-part series explaining some key elements of how LLMs function. This blog post explains how AI developers are finding ways to use LLMs for much more than just generating text.
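
To make that concrete, here is a minimal, hypothetical sketch of one such technique, the "tool use" pattern, in which a developer parses an LLM's text output into a function call. The llm() stub below stands in for any real chat-completion API; none of these names come from the original post.

import json

def get_weather(city: str) -> str:
    # Illustrative stub; a real tool would query a weather service.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def llm(prompt: str) -> str:
    # Hypothetical model call; assume the model has been instructed to
    # reply with JSON naming a tool and its arguments.
    return json.dumps({"tool": "get_weather", "args": {"city": "Paris"}})

def run(prompt: str) -> str:
    reply = json.loads(llm(prompt))
    tool = TOOLS[reply["tool"]]   # look up the function the model chose
    return tool(**reply["args"])  # execute it with the model's arguments

print(run("What's the weather in Paris?"))  # prints: Sunny in Paris

Production systems make the parsing and the tool registry far more robust, but the control flow is essentially this: the model's text chooses an action, and ordinary code carries it out.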

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the second blog post in a three-part series explaining some key elements of how LLMs function. This blog post explores fine-tuning—a set of techniques used to change the types of output that pre-trained models produce.
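
As a rough illustration of the idea, here is a minimal sketch of supervised fine-tuning using the Hugging Face transformers and datasets libraries. The gpt2 base model and the two toy training examples are assumptions chosen purely for illustration, not the setup described in the post.

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # small base model, illustrative only
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny instruction-style examples; real fine-tuning needs far more data.
examples = [
    {"text": "Q: What is the capital of France?\nA: Paris."},
    {"text": "Q: What is 2 + 2?\nA: 4."},
]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=64)
    out["labels"] = out["input_ids"].copy()  # causal LM predicts next token
    return out

dataset = Dataset.from_list(examples).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-demo", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()  # nudges the pre-trained weights toward the new examples

After training, the model's generations skew toward the question-and-answer style of the examples, which is the sense in which fine-tuning changes the types of output a pre-trained model produces.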