
Dakota Cary

Research Analyst

Dakota Cary is a Research Analyst at Georgetown’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. He holds an MA from Georgetown’s Security Studies program with a concentration in technology and national security, and also completed a certificate in Asian Studies. Prior to beginning his master’s program, Dakota worked in North Carolina politics.

CSET submitted this comment to the Office of Science and Technology Policy and the National Science Foundation in support of the National Artificial Intelligence Research Resource (NAIRR) Task Force's effort to develop an implementation roadmap that would give AI researchers and students across scientific disciplines access to computational resources, high-quality data, educational tools, and user support.

Robot Hacking Games

September 2021

Software vulnerability discovery, patching, and exploitation, collectively known as the vulnerability lifecycle, are time consuming and labor intensive. Automating the process could significantly improve both software security and offensive hacking. From 2014 to 2016, the Defense Advanced Research Projects Agency's Cyber Grand Challenge supported teams of researchers working to create such tools. China took notice. In 2017, China hosted its first Robot Hacking Game, seeking to automate the software vulnerability lifecycle. Since then, China has hosted seven such competitions, and the People's Liberation Army has increased its role in hosting them.
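To make concrete what "automating the vulnerability lifecycle" means in practice, the minimal sketch below illustrates only the discovery stage with a toy mutation-based fuzzer. The target program (toy_parser), its flaw, and the seed input are hypothetical stand-ins, not drawn from the Cyber Grand Challenge or the Robot Hacking Games, and an unhandled Python exception stands in for the memory-corruption crashes real tools hunt for.

    import random

    def toy_parser(record: str) -> str:
        """Stand-in for the software under test: parses '<length>:<payload>' records."""
        length_field, _, payload = record.partition(":")
        length = int(length_field)  # hypothetical flaw: the length field is never validated
        return payload[:length]

    def mutate(seed: str) -> str:
        """Randomly delete, duplicate, or replace one character of the seed input."""
        chars = list(seed)
        index = random.randrange(len(chars))
        operation = random.choice(("delete", "duplicate", "replace"))
        if operation == "delete":
            del chars[index]
        elif operation == "duplicate":
            chars.insert(index, chars[index])
        else:
            chars[index] = chr(random.randrange(32, 127))
        return "".join(chars)

    def fuzz(seed: str = "5:hello", iterations: int = 10_000) -> None:
        """Feed mutated inputs to the target; an unhandled exception stands in
        for the crash a memory-safety bug would cause in real software."""
        for i in range(iterations):
            candidate = mutate(seed)
            try:
                toy_parser(candidate)
            except Exception as exc:
                print(f"iteration {i}: crashing input {candidate!r} raised {exc!r}")
                return
        print(f"no crash found in {iterations} iterations")

    if __name__ == "__main__":
        fuzz()

Competition-grade systems layer coverage feedback, symbolic execution, automated exploit generation, and automated patching on top of this basic generate, run, and triage loop, but the loop itself is the core idea the Robot Hacking Games aim to industrialize.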

To what extent does China’s cultivation of talent in cybersecurity and AI matter for its competitiveness with other countries? Right now, China seems to have an edge: its 11 World-Class Cybersecurity Schools offer more classes on artificial intelligence and machine learning than do the 20 U.S. universities certified as Centers of Academic Excellence in Cyber Operations. This policy brief recommends tracking 13 research grants from the National Science Foundation that attempt to integrate AI into cybersecurity curricula.

CSET's Dakota Cary describes China's plans to become a "cyber powerhouse" through the creation of its National Cybersecurity Center.

In an opinion piece for The Hill, CSET's Dakota Cary explains how China's new rules on software vulnerabilities threaten cybersecurity.

China wants to be a “cyber powerhouse” (网络强国). At the heart of this mission is the sprawling 40 km² campus of the National Cybersecurity Center in Wuhan. Formally called the National Cybersecurity Talent and Innovation Base (国家网络安全人才与创新基地), the NCC has been under construction since 2017. The campus includes seven centers for research, talent cultivation, and entrepreneurship; two government-focused laboratories; and a National Cybersecurity School.

China’s National Cybersecurity Center (NCC) resides on a 40 km² plot in Wuhan. As one indication of its significance, the Chinese Communist Party’s highest-ranking members maintain an oversight committee for the facility. Over the next decade, the NCC will provide the talent, innovation, and indigenized cyber capabilities that the hacking teams of China’s Ministry of State Security, Ministry of Public Security, and People’s Liberation Army Strategic Support Force currently lack. Though the campus is still under construction, the NCC’s first class of graduates will cross the stage in June 2022.

Six Chinese universities have relationships with Advanced Persistent Threat (APT) hacking teams. Their activities range from recruitment to running cyber operations. These partnerships, themselves a case study in military-civil fusion, allow state-sponsored hackers to quickly move research from the lab to the field. This report examines these universities’ relationships with known APTs and analyzes the schools’ AI/ML research that may translate to future operational capabilities.

Automating Cyber Attacks

November 2020

Based on an in-depth analysis of artificial intelligence and machine learning systems, the authors consider the future of applying such systems to cyber attacks and which strategies attackers are more or less likely to use. As nuanced, complex, and overhyped as machine learning is, they argue, it remains too important to ignore.

Machine learning may provide cyber attackers with the means to execute more effective and more destructive attacks against industrial control systems. As new ML tools are developed, CSET discusses the ways in which attackers may deploy these tools and the most effective avenues for industrial system defenders to respond.