Micah Musser is a Research Analyst at Georgetown’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Previously, he worked as a Research Assistant at the Berkley Center for Religion, Peace, and World Affairs. He graduated from Georgetown University’s College of Arts and Sciences with a B.A. (summa cum laude) in Government focusing on political theory.
In an op-ed published in The Diplomat, Micah Musser discusses the concerns raised by policymakers in Washington about the disruptive potential of artificial intelligence (AI) technologies.
“The Main Resource is the Human”
April 2023
Progress in artificial intelligence (AI) depends on talented researchers, well-designed algorithms, quality datasets, and powerful hardware. The relative importance of these factors is often debated, with many recent "notable" models requiring massive expenditures on advanced hardware. But how important is computational power for AI progress in general? This data brief explores the results of a survey of more than 400 AI researchers to evaluate the importance and distribution of computational needs.
Artificial intelligence systems are rapidly being deployed in all sectors of the economy, yet significant research has demonstrated that these systems can be vulnerable to a wide array of attacks. How different are these problems from more common cybersecurity vulnerabilities? What legal ambiguities do they create, and how can organizations ameliorate them? This report, produced in collaboration with the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, presents the recommendations of a July 2022 workshop of experts to help answer these questions.
Forecasting Potential Misuses of Language Models for Disinformation Campaigns—and How to Reduce Risk
January 2023
Machine learning advances have powered the development of new and more powerful generative language models. These systems are increasingly able to write text at near human levels. In a new report, authors at CSET, OpenAI, and the Stanford Internet Observatory explore how language models could be misused for influence operations in the future, and provide a framework for assessing potential mitigation strategies.
In an opinion piece for Lawfare, Research Analyst Micah Musser discussed the new regulations that took effect in China requiring companies deploying recommendation algorithms to file details about those algorithms with the Cyberspace Administration of China.
AI and Compute
January 2022
Between 2012 and 2018, the amount of computing power used by record-breaking artificial intelligence models doubled every 3.4 months. Even with money pouring into the AI field, this trendline is unsustainable. Because of cost, hardware availability, and engineering difficulties, the next decade of AI can't rely exclusively on applying more and more computing power to drive further progress.
Machine Learning and Cybersecurity
June 2021
Cybersecurity operators have increasingly relied on machine learning to address a rising number of threats. But will machine learning give them a decisive advantage or just help them keep pace with attackers? This report explores the history of machine learning in cybersecurity and the potential it has for transforming cyber defense in the near future.
Truth, Lies, and Automation
May 2021
Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, and analyzes its potential misuse for disinformation. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and potentially also its effectiveness.
Automating Cyber Attacks
November 2020
Based on an in-depth analysis of artificial intelligence and machine learning systems, the authors consider the future of applying such systems to cyber attacks, and what strategies attackers are likely or less likely to use. As nuanced, complex, and overhyped as machine learning is, they argue, it remains too important to ignore.
As demand for cybersecurity experts in the United States has grown faster than the supply of qualified workers, some organizations have turned to artificial intelligence to bolster their overwhelmed cyber teams. Organizations may opt for distinct teams that specialize exclusively in AI or cybersecurity, but there are benefits to having employees with overlapping experience in both domains. This data brief analyzes hiring demand for individuals with a combination of AI and cybersecurity skills.