Category Archive: Uncategorized

In a new preprint paper, CSET's Josh A. Goldstein and the Stanford Internet Observatory's Renee DiResta explored the use of AI-generated imagery to drive Facebook engagement. Read More

"U.S. and Chinese Military AI Purchases," a report by CSET, was referenced in an Axios article. The article explores the potential threat of AI-powered drone swarms, which could challenge the dominance of advanced military technologies. Read More

In EqualAI's podcast "In AI We Trust?", CSET's Helen Toner discusses key AI issues, including China's AI policies, the use of AI in warfare, and the challenges of regulation. Read More

In a recent episode of Corner Alliance's "AI, Government, and the Future" podcast, Mina Narayanan, a Research Analyst at CSET, offers her expert take on the challenges of assessing AI systems and managing their risks. Read More

The Carnegie Classification of Institutions of Higher Education is drastically simplifying the criteria that determine its highly coveted R1 top-tier research classification. Last year, CSET Senior Fellow Jaret Riddick wrote about Section 223 of the 2023 National Defense Authorization Act, a new law intended to leverage existing Carnegie classification criteria to increase defense research capacity at historically Black colleges and universities (HBCUs). Now, research is needed to understand how the changes proposed for the 2025 classification criteria will affect U.S. Department of Defense goals for eligible HBCU partners. Read More

This blog post assesses how different priorities can change the risk-benefit calculus of open foundation models and provides divergent answers to the question, “Given current AI capabilities, what might happen if the U.S. government left the open AI ecosystem unregulated?” By answering this question from different perspectives, the post highlights the danger of hastily committing to any particular course of action without weighing the potentially beneficial, risky, and ambiguous implications of open models. Read More

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the third blog post in a three-part series explaining some key elements of how LLMs function. This blog post explains how AI developers are finding ways to use LLMs for much more than just generating text. Read More
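One commonly cited example of going beyond plain text generation is tool use: the model's output is parsed for a requested action, the action is executed in ordinary code, and the result is fed back to the model. The sketch below illustrates that loop; the generate() function is a hypothetical stand-in for any LLM text-completion API, and the calculator tool and canned replies are illustrative assumptions rather than anything described in the post.

```python
# A minimal sketch of LLM "tool use": parse the model's output for a tool
# request, run the tool, and return the result to the model.
# generate() is a hypothetical stand-in for a real LLM API call.
import re

def generate(prompt: str) -> str:
    """Hypothetical LLM call; a real system would query a model here."""
    if "Tool result:" in prompt:
        return "The answer is 161."          # canned final answer for illustration
    return "TOOL: calculator(23 * 7)"        # canned tool request for illustration

def calculator(expression: str) -> str:
    # Deliberately restricted evaluator for simple arithmetic only.
    if not re.fullmatch(r"[\d\s\+\-\*\/\(\)\.]+", expression):
        return "error: unsupported expression"
    return str(eval(expression))

def answer(question: str) -> str:
    reply = generate(f"Question: {question}\nYou may respond with TOOL: calculator(<expr>).")
    match = re.match(r"TOOL:\s*calculator\((.+)\)", reply)
    if match:
        # Run the requested tool and hand its output back to the model.
        result = calculator(match.group(1))
        return generate(f"Question: {question}\nTool result: {result}\nFinal answer:")
    return reply

print(answer("What is 23 times 7?"))
```

The design point is that the model never "does" the arithmetic itself; the surrounding code does, and the model is used to decide which tool to call and to phrase the final response.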

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the second blog post in a three-part series explaining some key elements of how LLMs function. This blog post explores fine-tuning—a set of techniques used to change the types of output that pre-trained models produce. Read More
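As a rough companion illustration (not taken from the post), the sketch below shows the basic shape of supervised fine-tuning: the same next-token prediction objective as pre-training, applied to a small curated set of prompt-response pairs. The model choice (GPT-2 via the Hugging Face transformers library), the two-example dataset, and the hyperparameters are all illustrative assumptions.

```python
# A minimal sketch of supervised fine-tuning: the pre-trained model keeps
# predicting the next token, but now only on curated prompt-response pairs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Tiny, made-up instruction-following dataset (illustrative only).
examples = [
    "Instruction: Greet the user politely.\nResponse: Hello! How can I help you today?",
    "Instruction: Summarize: 'The cat sat on the mat.'\nResponse: A cat sat on a mat.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()

for epoch in range(3):
    for text in examples:
        batch = tokenizer(text, return_tensors="pt")
        # With labels equal to the inputs, the model computes the standard
        # next-token prediction loss, just on this curated data.
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

Nothing about the objective changes; what changes is the data, which nudges the model toward producing the kinds of outputs the curated examples demonstrate.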

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the first blog post in a three-part series explaining some key elements of how LLMs function. This blog post covers pre-training—the process by which LLMs learn to predict the next word—and why it’s so surprisingly powerful. Read More
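The post explains pre-training in prose; as a rough illustration of the core idea (not drawn from the post), the sketch below loads the open-source GPT-2 model through the Hugging Face transformers library, an assumed setup, and shows how a pre-trained model assigns probabilities to candidate next words.

```python
# A minimal sketch of next-token prediction with a pre-trained causal
# language model. The model (GPT-2) and prompt are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, sequence_length, vocab_size)

next_token_logits = logits[0, -1]        # scores for whatever token comes next
probs = torch.softmax(next_token_logits, dim=-1)

# Print the five most likely continuations and their probabilities.
top_probs, top_ids = probs.topk(5)
for p, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(token_id))!r}  {p.item():.3f}")
```

The key point is that pre-training produces nothing more exotic than a scoring function over possible next tokens; the later posts in the series describe what gets built on top of that.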

In a Wall Street Journal article on Nvidia's pivotal role and success in the artificial intelligence (AI) sector, CSET's Hanna Dohmen shares her expertise on graphics processing units (GPUs). Read More