Catherine Aiken is the Director of Data Science and Research at Georgetown’s Center for Security and Emerging Technology (CSET). Catherine was previously CSET’s survey specialist, designing and leading all of the Center’s survey and other human-subjects research. Before joining CSET, Catherine was at the University of Maryland, where she completed her doctorate and taught courses in political science and research methodology. Catherine’s research explored non-mainstream political action, and she has conducted research for the International Crisis Behavior Project, Cross-Domain Deterrence Project, and Johns Hopkins Applied Physics Laboratory and taught at the National Consortium for the Study of Terrorism and Responses to Terrorism. Catherine holds a B.A. from the University of Rochester and a Ph.D. in political science from the University of Maryland.
Classifying AI Systems (January 2022)
This Classifying AI Systems Interactive presents several AI system classification frameworks developed to distill AI systems into concise, comparable and policy-relevant dimensions. It provides key takeaways and framework-specific results from CSET’s analysis of more than 1,800 system classifications done by survey respondents using the frameworks. You can explore the frameworks and example AI systems used in the survey, and even take the survey.
Testing Frameworks for the Classification of AI Systems (December 2021)
Director of Data Science and Research Catherine Aiken outlines how she tested frameworks for her report "Classifying AI Systems," produced in partnership with OECD.AI.
Classifying AI Systems (November 2021)
This brief explores the development and testing of artificial intelligence system classification frameworks intended to distill AI systems into concise, comparable and policy-relevant dimensions. Comparing more than 1,800 system classifications, it points to several factors that increase the utility of a framework for human classification of AI systems and enable AI system management, risk assessment and governance.
Contending Frames (May 2021)
The narrative of an artificial intelligence “arms race” among the great powers has become shorthand to describe evolving dynamics in the field. Narratives about AI matter because they reflect and shape public perceptions of the technology. In this issue brief, the second in a series examining rhetorical frames in AI, the authors compare four narrative frames that are prominent in public discourse: AI Competition, Killer Robots, Economic Gold Rush and World Without Work.
Research from a CSET survey reveals that AI professionals are more willing to work with the U.S. military than is commonly perceived.
Is there a rift between the U.S. tech sector and the Department of Defense? To better understand this relationship, CSET surveyed U.S. AI industry professionals about their views toward working on DOD-funded AI projects. The authors find that these professionals hold a broad range of opinions about working with DOD. Among the key findings: Most AI professionals are positive or neutral about working on DOD-funded AI projects, and willingness to work with DOD increases for projects with humanitarian applications.
Foretell was CSET's crowd forecasting pilot project focused on technology and security policy. It connected historical and forecast data on near-term events with the big-picture questions that are most relevant to policymakers. In January 2022, Foretell became part of a larger forecasting program to support U.S. government policy decisions called INFER, which is run by the Applied Research Laboratory for Intelligence and Security at the University of Maryland and Cultivate Labs.
Future Indices (October 2020)
This issue brief used recent forecast data to illustrate Foretell's methodology.
China AI-Brain Research (September 2020)
Since 2016, China has engaged in a nationwide effort to "merge" AI and neuroscience research as a major part of its next-generation AI development program. This report explores China's AI-brain program — identifying key players and organizations and recommending the creation of an open-source S&T monitoring capability within the U.S. government.
Immigration Pathways and Plans of AI Talent (September 2020)
To better understand immigration paths of the AI workforce, CSET surveyed recent PhD graduates from top-ranking AI programs at U.S. universities. This data brief offers takeaways — namely, that AI PhDs find the United States an appealing destination for study and work, and those working in the country plan to stay.
Are great powers engaged in an artificial intelligence arms race? This issue brief explores the rhetorical framing of AI by analyzing more than 4,000 English-language articles over a seven-year period. Among its findings: a growing number of articles frame AI development as a competition, but articles using the competition frame represent a declining proportion of articles about AI.
Career Preferences of AI Talent (June 2020)
The United States faces increased international competition for top talent in artificial intelligence, a critical component of the American AI advantage. CSET surveyed recent AI PhDs from U.S. universities, offering insights into the academic and career preferences of the AI workforce.
Agile Alliances (February 2020)
The United States must collaborate with its allies and partners to shape the trajectory of artificial intelligence, promoting liberal democratic values and protecting against efforts to wield AI for authoritarian ends.