Catherine Aiken is the Director of Data Science and Research at Georgetown’s Center for Security and Emerging Technology (CSET). Catherine was previously CSET’s survey specialist, designing and leading all of the Center’s survey and other human-subjects research. Before joining CSET, Catherine was at the University of Maryland, where she completed her doctorate and taught courses in political science and research methodology. Her research explored non-mainstream political action. She has conducted research for the International Crisis Behavior Project, the Cross-Domain Deterrence Project, and the Johns Hopkins Applied Physics Laboratory, and she has taught at the National Consortium for the Study of Terrorism and Responses to Terrorism. Catherine holds a B.A. from the University of Rochester and a Ph.D. in political science from the University of Maryland.

“The Main Resource is the Human”
April 2023
Progress in artificial intelligence (AI) depends on talented researchers, well-designed algorithms, quality datasets, and powerful hardware. The relative importance of these factors is often debated, with many recent “notable” models requiring massive expenditures on advanced hardware. But how important is computational power for AI progress in general? This data brief explores the results of a survey of more than 400 AI researchers to evaluate the importance and distribution of computational needs.
As technology competition intensifies between the United States and China, governments and policy researchers are looking for metrics to assess each country’s relative strengths and weaknesses. One measure of technology innovation increasingly used by the policy community is research output. Drawing on CSET’s experiences over the last four years, this post shares our best practices for using research output to study national technological competition and inform public policy.
Catherine Aiken’s Testimony before the National Artificial Intelligence Advisory Committee
October 2022
CSET's Catherine Aiken testified before the National Artificial Intelligence Advisory Committee on measuring progress in U.S. AI research and development.
Classifying AI Systems
January 2022
This Classifying AI Systems Interactive presents several AI system classification frameworks developed to distill AI systems into concise, comparable and policy-relevant dimensions. It provides key takeaways and framework-specific results from CSET’s analysis of more than 1,800 system classifications done by survey respondents using the frameworks. You can explore the frameworks and example AI systems used in the survey, and even take the survey.
Testing Frameworks for the Classification of AI Systems
December 2021
Director of Data Science and Research Catherine Aiken outlines how she tested frameworks for her report "Classifying AI Systems," produced in partnership with OECD.AI.
Classifying AI Systems
November 2021
This brief explores the development and testing of artificial intelligence system classification frameworks intended to distill AI systems into concise, comparable and policy-relevant dimensions. Comparing more than 1,800 system classifications, it points to several factors that increase the utility of a framework for human classification of AI systems and enable AI system management, risk assessment and governance.
Contending Frames
May 2021
The narrative of an artificial intelligence “arms race” among the great powers has become shorthand to describe evolving dynamics in the field. Narratives about AI matter because they reflect and shape public perceptions of the technology. In this issue brief, the second in a series examining rhetorical frames in AI, the authors compare four narrative frames that are prominent in public discourse: AI Competition, Killer Robots, Economic Gold Rush and World Without Work.
Research from a CSET survey reveals that AI professionals are more willing to work with the U.S. military than is commonly assumed.
Is there a rift between the U.S. tech sector and the Department of Defense? To better understand this relationship, CSET surveyed U.S. AI industry professionals about their views toward working on DOD-funded AI projects. The authors find that these professionals hold a broad range of opinions about working with DOD. Among the key findings: Most AI professionals are positive or neutral about working on DOD-funded AI projects, and willingness to work with DOD increases for projects with humanitarian applications.
Future Indices
October 2020
Foretell was CSET's crowd forecasting pilot project focused on technology and security policy. It connected historical and forecast data on near-term events with the big-picture questions that are most relevant to policymakers. In January 2022, Foretell became part of a larger forecasting program to support U.S. government policy decisions called INFER, which is run by the Applied Research Laboratory for Intelligence and Security at the University of Maryland and Cultivate Labs. This issue brief used recent forecast data to illustrate Foretell’s methodology.
China AI-Brain Research
September 2020
Since 2016, China has engaged in a nationwide effort to "merge" AI and neuroscience research as a major part of its next-generation AI development program. This report explores China’s AI-brain program — identifying key players and organizations and recommending the creation of an open source S&T monitoring capability within the U.S. government.
Immigration Pathways and Plans of AI Talent
September 2020
To better understand immigration paths of the AI workforce, CSET surveyed recent PhD graduates from top-ranking AI programs at U.S. universities. This data brief offers takeaways — namely, that AI PhDs find the United States an appealing destination for study and work, and those working in the country plan to stay.
Are great powers engaged in an artificial intelligence arms race? This issue brief explores the rhetorical framing of AI by analyzing more than 4,000 English-language articles over a seven-year period. Among its findings: a growing number of articles frame AI development as a competition, but articles using the competition frame represent a declining proportion of articles about AI.
Career Preferences of AI Talent
June 2020
The United States faces increased international competition for top talent in artificial intelligence, a critical component of the American AI advantage. CSET surveyed recent AI PhDs from U.S. universities, offering insights into the academic and career preferences of the AI workforce.
Agile Alliances
February 2020
The United States must collaborate with its allies and partners to shape the trajectory of artificial intelligence, promoting liberal democratic values and protecting against efforts to wield AI for authoritarian ends.