Kathleen Curlee is a Research Analyst at Georgetown University’s Center for Security and Emerging Technology (CSET), focusing on the national security applications of artificial intelligence. Prior to joining CSET, she worked as a Legal Analyst at Hughes Hubbard & Reed LLP. She also held internships at the U.S. Department of State’s Office of Weapons of Mass Destruction Terrorism, the Federal Trade Commission, and the Office of the Governor of Arkansas. She has degrees in International Relations and Political Science from the University of Pennsylvania.
Related Content
Eyes Wide Open: Harnessing the Remote Sensing and Data Analysis Industries to Enhance National Security
July 2024
The U.S. government has an opportunity to seize strategic advantages by working with the remote sensing and data analysis industries. Both grew rapidly over the last decade alongside technology improvements, cheaper space launch, new investment-based…
Trust Issues: Discrepancies in Trustworthy AI Keywords Use in Policy and Research
June 2024
Policy and research communities strive to mitigate AI harm while maximizing its benefits. Achieving effective and trustworthy AI requires a shared language. The analysis of policies across different countries and research literature…
In their op-ed in The Wire China, CSET's Ngor Luong, Sam Bresnick, and Kathleen Curlee provide their expert analysis on the changing landscape for U.S. big tech companies in China.
U.S. technology companies have become important actors in modern conflicts, and several of them have meaningfully contributed to Ukraine’s defense. But many of these companies are deeply entangled with China, potentially complicating their decision-making in…
Working Through Our Global AI Trust Issues
November 2023
The use of the word “trustworthy” in relation to AI has sparked debate among policymakers and experts alike. This blog post explores different understandings of trustworthy AI among international actors, as well as challenges in…
Australia, Canada, Japan, the United Kingdom, and the United States emphasize principles of accountability, explainability, fairness, privacy, security, and transparency in their high-level AI policy documents. But while the words are the same, these countries…
When the technology and policy communities use terms associated with trustworthy AI, could they be talking past one another? This paper examines the use of trustworthy AI keywords and the potential for an “Inigo Montoya…