Showcasing our researchers’ work and their latest media appearances as they weigh in on developments at the intersection of national security and emerging technology.
The Washington Post
Research Analyst Will Hunt, an authority on the specialized semiconductors that power artificial intelligence and advanced computing, spoke with The Washington Post for an article about the possible use of a novel export control against Russia if it invades Ukraine. Known as the foreign direct product rule, the measure would particularly affect the flow of electronics to Russia. Cutting off the country’s chip imports “would invariably hit the Russian leadership’s high-tech ambitions, whether in artificial intelligence or quantum computing,” Hunt said.
Axios
Hunt spoke with Axios about how tensions between the United States and China over Taiwan could jeopardize advanced semiconductor manufacturing amid the ongoing global chip shortage. Taiwan is home to the main manufacturer of leading-edge chips, while no such capacity currently exists in the United States. “It’s extreme concentration of some of the most strategically important chips in the world, and no capacity in the United States in the case of major disruptions,” Hunt noted. “The current shortage pales in comparison to the economic impact of what might happen if the United States lost access to chips in Taiwan.”
Foreign Affairs
Experts Andrew Imbrie, Helen Toner, and Anna Puglisi made the case for the implementation of privacy-enhancing technologies (PETs) to protect personal privacy in a recent Foreign Affairs opinion piece. “PETs allow researchers to harness big data to solve problems affecting billions of people while also protecting privacy,” the authors noted. International collaboration will be necessary if PETs are to become common practice, they wrote: “Driving innovation and collaboration within and across democracies is important not only because it will help ensure those societies’ success but also because there will be a first-mover advantage in the adoption of PETs for governing the world’s private data–sharing networks.”
The Hill
If the United States wants to maintain its competitive edge over China, it must adopt AI education and workforce policies, according to Research Analysts Kayla Goode and Dahlia Peterson in an opinion piece for The Hill. While U.S. AI education programs, including AI summer camps, have seen an uptick, very little AI instruction occurs in American classrooms compared with China’s. In a comparison of AI education in China and the United States, co-authored with Research Fellow Diana Gehlhaus, Goode and Peterson found that China’s Ministry of Education implements AI education across all levels of schooling. “Implementing competitive AI education across the United States is no easy task — there are no shortcuts and no single solution,” they noted in The Hill. “There are, however, two elements that education leaders and policymakers should prioritize: coordination and investment.”
Protocol
The Peking University Institute of International and Strategic Studies published a report concluding that China would be worse off than the United States in a tech decoupling. Shortly after its release, Protocol reported, the report was removed from the internet. “China is not afraid of admitting weakness publicly,” Research Analyst Emily Weinstein told Protocol. “In this case, I would speculate that the piece was likely pulled out of concern that it would be weaponized against China, at least in terms of messaging. … The Chinese government is likely very eager to keep up its image, particularly in the context of technology competition.”
Science Magazine
Weinstein spoke with Science Magazine about the disappearance of online resources pertaining to China’s Thousand Talents Plan (TTP). Information “seemed to start disappearing around the time that the China Initiative was launched,” Weinstein said. CSET research, including original translations of Chinese government documents, has found that the TTP and its spinoffs were absorbed into other initiatives such as China’s 2019 High-End Foreign Expert Recruitment Plan.
Lawfare
Weinstein joined the Lawfare podcast to discuss the Justice Department’s China Initiative, and in particular the case of Dr. Charles Lieber, a Harvard University chemist convicted of making false statements to federal authorities and of tax offenses in connection with his participation in the TTP.
FedScoop
FedScoop reached out to Associate Director of Analysis and Research Fellow Margarita Konaev for her thoughts on the U.S. Department of Defense’s new list of critical technology priorities. The strategy emphasized the need for more investment in AI and cybersecurity. “I think this is an area that over the past year conversation has increased about it,” said Konaev, adding that the list shows the department’s desire to move further toward a battlefield reality where AI-enabled machines work alongside troops.
The Wire China
The Wire China reported on how open-source data collection is becoming more challenging for researchers as China cracks down on the release of public information. “There are reasons why the Chinese are cracking down on publicly available information,” Research Analyst Ryan Fedasiuk observed. “They publish information that you wouldn’t think they would, and the Chinese are starting to worry about that. So we want to take measures to protect these sources.” In his report Harnessed Lightning, Fedasiuk reviewed 66,000 government tenders to understand the Chinese military’s use of AI and outlined the meticulous steps he and CSET took to preserve the sources used in the report.
Grid
Research Fellow Katerina Sedova explained how advances in AI are fueling disinformation campaigns in an interview with Grid. Sedova is the lead author of a recent series of papers on AI and the future of disinformation. In a previous CSET report, Sedova and her CSET colleagues found that OpenAI’s GPT-3 was capable of writing disinformation campaigns. “We also tested it for disinformation, and it may be better at writing disinformation than it is at writing actual factual information,” Sedova said. Part of the solution to combating disinformation, she argued, involves public awareness: “We need to start thinking about how we raise resilience and educate the population when it comes to identifying artificially generated speech and artificially generated content, to the extent that’s possible. To help them understand how everything they do online can be translated into potentially being targeted by influence operators.”
Journal of Political Risk
In an interview for the Journal of Political Risk, Director of Strategy Helen Toner unpacked the national security risks and benefits of artificial intelligence. “Building AI systems that are safe, reliable, fair, and interpretable is an enormous open problem. Research into these areas has grown over the past few years, but still only makes up a fraction of the total effort poured into building and deploying AI systems. If we’re going to end up with trustworthy AI systems, we’ll need far greater investment and research progress in these areas,” Toner said.
IEEE
Toner and Senior Fellow Andrew Lohn spoke with IEEE Spectrum about the dangerous implications of AI-enabled systems. When it comes to AI, “we are entering dangerous and uncharted territory with the rise of surveillance and tracking through data, and we have almost no understanding of the potential implication,” said Lohn. Even the smallest of details can have big ramifications in AI-enabled systems. A crisis can “start off as an innocuous single point of failure that makes all communications go dark, causing people to panic and economic activity to come to a standstill. A persistent lack of information, followed by other miscalculations, might lead a situation to spiral out of control,” Toner noted. And IEEE Security & Privacy took interest in Research Analyst Dakota Cary’s report on China’s Robot Hacking Games. Sponsored by the People’s Liberation Army to advance automatic software vulnerability detection, such competitions explore the future of cybersecurity attacks and defense.
Spotlight on CSET Experts: Katerina Sedova
Katerina Sedova is a Research Fellow working on the CyberAI Project. Previously, she published research and advised projects on disinformation, state-sponsored information operations, and OSINT for the NATO Strategic Communications Centre of Excellence, the Department of State, and the Department of Defense.
Her latest reports include a two-part series, AI and the Future of Disinformation Campaigns, as well as Headline or Trend Line? and Truth, Lies, and Automation. She has been featured and quoted for her expertise in disinformation and cybersecurity in a variety of outlets, including Breaking Defense, Foreign Affairs, the BBC and NBC. She will be the featured speaker at CSET’s next webinar, “More Than Deepfakes,” on February 16. Registration info is here.
Interested in speaking with Katerina or our other experts? Contact External Affairs Specialist Adrienne Thompson at adrienne.thompson@georgetown.edu.
Want to stay up to date on the latest CSET research? Sign up for our day-of-release reports and take a look at our biweekly newsletter, policy.ai.