Work at CSET
We’re hiring! If you’re interested in joining our team, check out the positions in the “Job Openings” section below or consult our careers page.
Worth Knowing
UN Effort to Ban “Killer Robots” Fizzles: The United Nations’ Convention on Certain Conventional Weapons failed to move forward on a binding agreement that could have placed limits on — or even banned entirely — the use of lethal autonomous weapons systems. The review conference of the CCW meets every five years, and activists had pegged this meeting as the best chance to get LAWS restrictions on the books. Despite backing from UN Secretary-General António Guterres and support from a number of prominent advocacy groups, the effort fell short. While a majority of the 125 countries party to the agreement said they favored regulating LAWS, the decision would have required unanimity, and a number of states — including the United States and Russia — objected. In pre-conference discussions, the U.S. representative argued for a “non-binding code of conduct” instead of binding regulations. While the CCW failed to bear fruit for supporters of a LAWS ban, it is not the only avenue for international regulation — as Jeremy Kahn explained in Fortune, ban advocates could turn to the UN General Assembly or other international forums in future efforts.
China Rolls Out New Algorithm Rules: Last week, Chinese regulators finalized new rules on the use of algorithmic recommendation systems. The regulations (translation available here), which were proposed last August, place significant limits on tech companies’ content recommendation algorithms and give users the ability to opt out of or limit their use of these systems. The new rules, set to take effect on March 1, will:
- Require companies to display information about how the algorithms work and their intended purpose.
- Give users the ability to opt out of using algorithmic recommendation systems entirely, opt out of personalized recommendation systems specifically, or delete personalized tags used by algorithms to make individual recommendations.
- Place limits on models that cause users to “become addicted or spend too much.”
- Bar algorithms that “generate or synthesize fake news information” (a new addition since the draft regulations).
- More: China’s New AI Governance Initiatives Shouldn’t Be Ignored | Experts Examine China’s Pioneering Draft Algorithm Regulations | Will China’s Regulatory ‘Great Wall’ Hamper AI Ambitions?
Government Updates
Harvard Professor Convicted for Concealing Ties to China: Charles Lieber, a Harvard chemistry professor, was found guilty last month of making false statements to the U.S. government about his involvement with China’s Thousand Talents Program and failing to report income he received for his work there. Lieber’s high-profile case brought increased attention to the Justice Department’s China Initiative, launched under the Trump administration in 2018 to crack down on China’s targeting of U.S. technology, including early-stage research. Lieber’s involvement with the Thousand Talents Program — a PRC initiative to attract foreign scientists and engineers — was not itself illegal, but his work with the Wuhan University of Technology attracted scrutiny that ultimately resulted in other charges. Most of the academics charged in China Initiative prosecutions have been accused of wire fraud or making false statements, not espionage — a fact critics have argued is a sign of the initiative’s overreach, though its supporters contend that the scale of Chinese espionage efforts makes such “creative solutions” necessary.
White House Releases Guidance on Research Security: In related news, the White House issued a document providing implementation guidance for federal agencies as they update their research security policies. During the final week of the Trump administration, the White House issued a presidential memorandum that directed agencies to standardize their disclosure requirements with the aim of tightening research security. With many of the China Initiative’s flagship cases hinging on disclosure requirement misconduct, the memo has naturally attracted significant attention from researchers who receive U.S. government funding. Developed by an interagency panel at the direction of Office of Science and Technology Policy Director Eric Lander, the new guidance document focuses on standardizing disclosure requirements across agencies, implementing “digital persistent identifiers” to ease disclosure, and encouraging communication between agencies, among other things. While the guidance did not address the China Initiative directly, Lander’s foreword struck a reassuring tone, stressing the importance of avoiding measures that could create a “chilling atmosphere” for researchers. Lander has directed a handful of federal agencies to develop model grant application forms within the next 120 days that clearly articulate disclosure requirements.
DARPA Releases Toolkit to Help AI Developers Shore Up Defenses: The Defense Advanced Research Projects Agency released a toolkit to aid AI developers in testing their models’ defenses against attacks. While the Pentagon has increasingly embraced AI and machine learning tools, their adoption is not without potential risk — as the NSCAI warned in its final report last year, AI systems can be vulnerable to adversarial attacks, and it urged the U.S. government to step up its efforts on AI defense. DARPA’s “Guaranteeing AI Robustness against Deception” (GARD) program brought together researchers from academia and industry to compile a set of open-source tools (available here) that can help AI developers identify vulnerabilities and make their systems robust against a range of attacks. The toolkit includes a virtual evaluation testbed, a benchmark dataset, and “test dummies” that can help developers identify insufficient defenses — all open to the broader developer community through GitHub.
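The kind of vulnerability GARD targets can be illustrated with a small example. The sketch below is not taken from the GARD toolkit itself; it is a minimal, self-contained demonstration of the fast gradient sign method (FGSM), a classic evasion attack, applied to a toy logistic-regression classifier. All weights and inputs here are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y, eps):
    """Shift x by eps in the direction that increases the loss for true
    label y (0 or 1). For logistic regression, the gradient of the
    cross-entropy loss with respect to x is (p - y) * w."""
    p = predict(w, b, x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Toy model: classifies x as class 1 when the sum of its features is positive.
w = np.array([1.0, 1.0, 1.0])
b = 0.0

x = np.array([0.3, 0.2, 0.4])  # clearly class 1 under this model
y = 1

x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
print(predict(w, b, x) > 0.5)      # original input: classified as class 1
print(predict(w, b, x_adv) > 0.5)  # small perturbation flips the prediction
```

Robustness evaluations of the sort GARD supports measure how quickly a model's accuracy degrades as the attacker's perturbation budget (eps above) grows; a well-defended model should hold its predictions under small input changes.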
In Translation
CSET’s translations of significant foreign language documents on AI
PRC Military-Civil Fusion Plan: Xianning City Action Plan for In-Depth Development of Military-Civil Fusion (2021-2025) (Revised Draft). This plan is one example of how local PRC governments are implementing China’s “military-civil fusion” strategy, which encourages the Chinese military to tap into private-sector innovation and allows private companies to commercialize select military innovations. Like other PRC industrial policies, this local plan calls for the government to aid the expansion of leading private companies. Unlike many other military-civil fusion strategies, which focus on the high-tech sector, this one describes how local low-tech industries — such as firefighting equipment, agricultural produce and traditional Chinese herbal cures — can support the military. Note that although the Communist Party has drastically cut back on its use of the term “military-civil fusion” in recent years — 2021’s 14th Five-Year Plan Outline omits the phrase entirely — this local plan continues to use this terminology.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
Job Openings
We’re hiring! Please apply or share the roles below with candidates in your network:
- Research Analyst (multiple): CSET RAs are vital to our work across a range of lines of research. Research Analysts collaborate with Research and Senior Fellows to execute CSET’s research. Apply by February 25 and be sure to list your areas of research interest in your cover letter.
- Data Research Analyst (multiple): DRAs work alongside our analysis and data teams to produce data-driven research products and policy analysis. This role combines knowledge of research methods and data analysis skills. Those with experience in common data visualization, programming languages, and/or statistical analysis tools may find this position of particular interest. Apply by February 25.
- AI Research Subgrant (AIRS) Program Director: CSET’s AIRS Program Director will manage the sourcing, distribution and monitoring of research grants that promote the exploration of foundational technical topics related to the potential long-term national security implications of AI. Closes on January 31.
What’s New at CSET
REPORTS
- Trends in AI Research for the Visual Surveillance of Populations by Ashwin Acharya, Max Langenkamp and James Dunham
- Machines, Bureaucracies, and Markets as Artificial Intelligences by Richard Danzig
- Comparing U.S. and Chinese Contributions to High-Impact AI Research by Ashwin Acharya and Brian Dunn
- CSET: CSET at Three: Progress Report 2022
- Foreign Policy: Fibs About Funding Aren’t Espionage, Even When China Is Involved by Emily Weinstein
- Brookings TechStream: Beijing’s ‘re-innovation’ strategy is key element of US-China competition by Emily Weinstein
- The Lawfare Podcast: Dr. Charles Lieber and the China Initiative featuring Emily Weinstein
Foretell has launched a new project that combines expert and crowd judgment. You can read more about the experts’ views, including how they think trends like China’s military aggression, political polarization, and the strength of the tech sector affect the DOD-Silicon Valley relationship. See all 20 forecast questions associated with this project here.
IN THE NEWS
- Harvard Business Review: A piece by Bhaskar Chakravorti, Ajay Bhalla, Ravi Shankar Chaturvedi and Christina Filipovic about AI talent hubs cited Diana Gehlhaus and Santiago Mutis’ issue brief, The U.S. AI Workforce: Understanding the Supply of AI Talent.
- National Defense: Ryan Fedasiuk, Jennifer Melot and Ben Murphy’s October report, Harnessed Lightning: How the Chinese Military is Adopting Artificial Intelligence, continues to garner attention, earning a recap in National Defense by Jon Harper.
- Axios: Bethany Allen-Ebrahimian dubbed Emily Weinstein’s Foreign Policy piece, Fibs About Funding Aren’t Espionage, Even When China Is Involved, “the best analysis of the Justice Department’s China Initiative I have seen” in a recent edition of the Axios China newsletter.
- Deutschlandfunk: Margarita Konaev spoke to Thomas Reintjes about military adoption of AI for a report that aired on German radio station Deutschlandfunk.
- Rest of World: Helen Toner discussed China’s new algorithm rules with Vittoria Elliott and Meaghan Tobin.
What We’re Reading
Blog: Is the US going to screen outbound investment?, Sarah Bauerle Danzman, Atlantic Council (January 2022)
Blog: China’s Share of Global Chip Sales Now Surpasses Taiwan’s, Closing in on Europe’s and Japan’s, Semiconductor Industry Association (January 2022)
Commentary: How U.S. Businesses View China’s Growing Influence in Tech Standards, Jacob Feldgoise and Matt Sheehan, Carnegie Endowment for International Peace (December 2021)
Upcoming Events
- January 20: CSET and Stanford HAI Webinar, Strengthening the Technical Foundations of U.S. Security, featuring CSET Senior Fellow Andrew Lohn, Stanford HAI Director of Policy Russell Wald and Stanford HAI Postdoctoral Fellow Jeff Ding
What else is going on? Suggest stories, documents to translate & upcoming events here.