The New York Times, The Washington Post, Wired and the BBC
In their scholarly report posted on arXiv in January, Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova — together with co-authors at OpenAI and the Stanford Internet Observatory — explored how large language models could be misused in the future to generate and deploy digital propaganda for influence operations. The report caught the eye of The New York Times’ Tiffany Hsu and Stuart A. Thompson, who cited it in a piece about the disinformation risks of AI chatbots. Tim Starks, editor of The Washington Post’s Cybersecurity 202 newsletter, also noted the report’s findings in a writeup about ChatGPT’s ability to write malicious code. For a story about detecting AI-generated text, Wired’s Reece Rogers reached out to Musser to get his thoughts on potential remedies; he also noted the new report and its findings. The report also made waves across the pond — the BBC’s David Silverberg cited it and quoted Goldstein on the potential disinformation risks of AI chatbots. “Generative language models could produce a high volume of content that is original each time … and allow each propagandist to not rely on copying and pasting the same text across social media accounts or news sites,” Goldstein observed.
Foreign Policy, Reuters and Bloomberg
Emily Weinstein and Ngor Luong’s new policy brief, U.S. Outbound Investment into Chinese AI Companies, touched on a hot topic: the national security implications of U.S. investments in China. As such, it caught the attention of a number of journalists. In a Foreign Policy piece on U.S. concerns about the Chinese technology sector, Rishi Iyengar and Jack Detsch cited the brief’s finding that, between 2015 and 2021, U.S. investors were involved in investments worth more than $40 billion in 251 Chinese AI companies. Reuters’ Alexandra Alper also picked up on those headline findings in a comprehensive recap of Weinstein and Luong’s brief. Bloomberg’s Chris Anstey, meanwhile, cited Weinstein and Luong’s finding that “earlier stage venture capital investments in particular can provide intangible benefits beyond capital, including mentorship and coaching, name recognition and networking opportunities” for an article about U.S.-China decoupling. The paper was also highlighted by Nikkei Asia, Fox News, Roll Call and Jeffrey Ding’s ChinAI newsletter, where it was heralded as a “Should-read.”
The Wall Street Journal
In a Wall Street Journal article about the CHIPS and Science Act’s potential to coax chipmakers back to the United States, Yuka Hayashi cited key numbers from Will Hunt’s 2022 brief, Sustaining U.S. Competitiveness in Semiconductor Manufacturing: Priorities for CHIPS Act Incentives. Hunt found that, as of 2021, manufacturing of leading-edge logic chips (defined as 5nm or smaller) was heavily concentrated in only two places: Taiwan, which accounted for 85 percent of the global supply, and South Korea, which produced the remaining 15 percent.
Newsweek
In a recent Newsweek article, John Feng dove into the background of Wu Zhe — a prominent scientist thought to be closely linked to China’s spy balloon program — and explored key details of the scientific landscape in China. For that, he leaned on Remco Zwetsloot, Jack Corrigan, Emily Weinstein, Dahlia Peterson, Diana Gehlhaus and Ryan Fedasiuk’s 2021 brief, China is Fast Outpacing U.S. STEM PhD Growth. The brief examined data on STEM PhD graduation rates and projected their growth over the following five years, concluding that by 2025 Chinese universities would produce nearly twice as many STEM PhD graduates per year as the United States.
Marketplace
Research Analyst Jacob Feldgoise appeared on the public radio program Marketplace to discuss semiconductor funding and the CHIPS and Science Act. He explained some of the motivating factors behind the bill and the process through which funds will be spent. “That money has already been disbursed by Congress,” Feldgoise told Marketplace’s Lily Jamali, but “the Commerce Department still actually needs to unveil their application process for that money.”
Bloomberg Government
Bloomberg Government’s Patty Nieberg reached out to Deputy Director of Analysis and Research Fellow Margarita Konaev to discuss the U.S. military’s need for “clarity of the narrative” with respect to defending Taiwan against potential Chinese aggression. “‘Defend Taiwan for semiconductors’ sounds a whole lot like ‘war for oil.’ Huge chunks of the population in the commercial sectors are going to get triggered by that and they’re not going to like it as much as ‘defend Ukraine because they’re fighting big bad Russia’,” Konaev observed. She concluded that to ensure public support for its pivoting defense posture, the United States would need to bring the conversation back to a unified “right versus wrong” message.
CyberScoop
CyberScoop’s Elias Groll spoke with Senior Fellow Andrew Lohn about the potential cybersecurity impact of large language models such as ChatGPT. Lohn pumped the brakes on that impact, noting that “Phishing is already so successful that it might not make a huge difference.” But he did still see the potential for tools like ChatGPT to influence the cybersecurity landscape through their ability “to guide hackers through the process of an intrusion that maybe doesn’t even involve any new malware. … There are tons of open source tools and bits of malware that are just floating around or that are just prepackaged,” Lohn said. “I’m worried that ChatGPT will show more people how to use that.”
The Wire China
Katrina Northrop reached out to Director of Biotechnology Programs and Senior Fellow Anna Puglisi to discuss the complexities of China’s illicit technology transfer efforts, including its involvement of private citizens. “It seems crazy that private citizens or students are asked to participate in these elaborate schemes. It seems like one-offs,” Puglisi said. “But China looks at the diaspora as a resource to be tapped. It is hard for westerners to understand how much pressure the Chinese government can bring to bear on its citizens.”
As Husanjot Chahal, Helen Toner and Ilya Rahkovsky argued in their 2021 brief Small Data’s Big AI Potential, an overemphasis on “big data” ignores the existence and underestimates the potential of “small data” approaches that do not require massive labeled datasets. In a piece about the potential benefits of “small data” approaches, Vincent Carchidi cited Chahal, Toner and Rahkovsky’s brief, as well as Chahal and Toner’s subsequent Scientific American op-ed.
Spotlight on CSET Experts: Emily S. Weinstein & Ngor Luong
Emily S. Weinstein is a Research Fellow focused on U.S. national competitiveness in AI/ML technology and U.S.-China technology competition. She is also a Nonresident Fellow at the Atlantic Council’s Global China Hub and the National Bureau of Asian Research. Emily has conducted research for CSET on China’s S&T ecosystem, talent flows, and technology transfer issues. She has written on topics related to research security and China’s S&T developments for Foreign Policy, Lawfare, Defense One and other outlets.
Ngor Luong is a Research Analyst focusing on China’s science and technology ecosystem, AI investment trends and AI diplomacy in the Indo-Pacific region. Prior to CSET, Ngor worked at the Center for American Progress, where she researched China’s industrial policy and 5G. Her work and commentary have appeared in Nikkei Asia and The Diplomat, among other outlets.
Interested in speaking with Emily, Ngor or our other experts? Contact Director of External Affairs Lynne Weil at Lynne.Weil@georgetown.edu.
Want to stay up to date on the latest CSET research? Sign up for our day-of-release reports and take a look at our biweekly newsletter, policy.ai.