We’re hiring! If you’re interested in joining our team, check out the positions in the “Job Openings” section below or consult our careers page.
DeepMind AI Controls Plasma in a Fusion Reactor: Last week, researchers at DeepMind announced they had trained an AI system to control plasma in a nuclear fusion reactor. Donut-shaped “tokamak” fusion reactors use powerful magnets to confine and shape a superheated hydrogen plasma — a process that requires continuous monitoring and precise adjustments. In an attempt to replace the classical control algorithms used in this part of the process, researchers from the Alphabet-owned AI lab used reinforcement learning to train a large neural network in a simulated reactor, then used a smaller, faster neural network to carry out the larger network’s policy in the real reactor (their Nature paper is available here). The DeepMind model successfully controlled the plasma in the tokamak for two seconds — the maximum the reactor at the Swiss Plasma Center can handle before overheating. While the DeepMind system’s performance is an impressive proof of concept, commercially viable nuclear fusion remains a distant goal. But experts say AI systems like DeepMind’s could help researchers experiment with new plasma configurations or optimize tokamak design, potentially accelerating fusion progress.
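The two-stage approach described above — train an expensive policy in simulation, then fit a smaller, faster policy to imitate it so it can meet a real-time control budget — can be illustrated with a deliberately toy sketch. Everything here (the function names, the one-dimensional “observation,” the tanh teacher) is illustrative, not DeepMind’s actual architecture:

```python
import math
import random

def large_policy(obs):
    # Stand-in for the expensive RL-trained network: maps a plasma
    # observation to a magnet-coil command (mildly nonlinear).
    return math.tanh(0.3 * obs)

# Collect (observation, action) pairs by querying the teacher
# in the "simulator".
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(1000)]
data = [(x, large_policy(x)) for x in xs]

# Distill into a one-parameter linear controller by least squares:
# w minimizes sum((w*x - a)^2) over the collected pairs.
w = sum(x * a for x, a in data) / sum(x * x for x, _ in data)

def small_policy(obs):
    # The cheap student controller: a single multiply per step.
    return w * obs

# On this operating range the student closely tracks the teacher.
max_err = max(abs(small_policy(x) - a) for x, a in data)
```

The real system replaces the linear student with a small neural network and the teacher with a policy trained by reinforcement learning against a plasma simulator, but the division of labor is the same: accuracy comes from the large model, speed from the small one.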
England’s NHS Plans to Conduct Algorithmic Impact Assessments: England’s National Health Service announced it will begin the world’s first pilot program to subject healthcare AI systems to algorithmic impact assessments. The AIAs — developed by the UK-based Ada Lovelace Institute — will assess AI systems for risks and biases before they can gain access to NHS data. The AIAs will be part of the data access process for the National COVID-19 Chest Imaging Database and the proposed National Medical Imaging Platform (NMIP) — both large databases of medical images from patients across the country. The Ada Lovelace Institute published a proposal outlining how the AIA process should work for the NMIP, which also includes recommendations for AIA applications in broader public and private contexts. As the prevalence of AI systems continues to grow, advocates and policymakers have increasingly embraced AIAs as a tool to mitigate their potential harms — see the entry “The Algorithmic Accountability Act Is Reintroduced” below for one such example in the United States.
An AI “Early Warning System” for Coronavirus Variants: BioNTech and UK-based AI company InstaDeep created an “early warning system” for flagging dangerous coronavirus variants. According to a preprint paper published by the companies’ researchers, their system successfully flagged 12 of 13 potentially dangerous variants an average of two months before they were officially designated by the WHO. That time advantage could help vaccine companies like BioNTech prioritize the development of variant-specific vaccines. The system was not perfect, however — its one miss was the Delta variant. The researchers attributed that miss to limited genomic data availability and to which mutations the model identified as significant.
The DOJ Ends The China Initiative: Yesterday, the Department of Justice announced that it is ending the China Initiative and replacing it with a program that is country agnostic and focuses on a wide range of threats. The controversial initiative, launched by the Trump administration in 2018, aimed to crack down on China’s targeting of U.S. technology, including early-stage research. But after a number of prominent cases involving academics at U.S. universities — including those of Gang Chen and Anming Hu — critics argued that the initiative had become overly broad despite its original mandate, was having a negative impact on collaborative research, and was engaged in racial profiling. In a speech announcing the change, Assistant Attorney General for National Security Matthew Olsen acknowledged those critiques, but defended the DOJ’s motivations, which he said “have been driven by genuine national security concerns.” To continue to address those concerns, Olsen said the department would introduce a “Strategy for Countering Nation-State Threats” that addresses illegal activity from adversarial nations — including the theft of trade secrets and malicious cyber activity — specifically naming China, Russia, Iran and North Korea.
The Algorithmic Accountability Act Is Reintroduced: Earlier this month, Sens. Ron Wyden and Cory Booker and Rep. Yvette Clarke introduced the Algorithmic Accountability Act of 2022, a bill that would require companies to assess and disclose information about how their automated systems are used. Specifically, the bill (a section-by-section summary and one-pager are also available) would require companies to conduct ongoing impact assessments of automated systems that make or help humans make “critical decisions.” The bill would create a sizeable new role for the FTC — it charges the agency with enforcing the bill’s new regulations, tasks it with developing the guidelines for assessment and reporting, requires it to publish an annual anonymized report on trends and a repository of information about automated critical decision systems, and creates a new 50-person Bureau of Technology inside the agency. The original version of the bill was introduced in 2019, but failed to gain traction.
PRC Talent Recruitment Plans: 2020 National Foreign Expert Project Application Guide. This document briefly describes six different Chinese talent recruitment plans, all designed to entice foreign researchers, academics, administrators, or entrepreneurs to relocate to the PRC to enhance China’s strategic S&T capabilities.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
We’re hiring! Please apply or share the roles below with candidates in your network:
Research Analyst (multiple): CSET RAs are vital to our work across a range of lines of research. Research Analysts collaborate with Research and Senior Fellows to execute CSET’s research. Applications close TOMORROW, February 25; be sure to list your areas of research interest in your cover letter.
Data Research Analyst (multiple): DRAs work alongside our analysis and data teams to produce data-driven research products and policy analysis. This role combines knowledge of research methods and data analysis skills. Those with experience in common data visualization, programming languages, and/or statistical analysis tools may find this position of particular interest. Applications close TOMORROW, February 25.
Business Operations and Management Specialist: Reporting to CSET’s Director of Operations, the management specialist will be responsible for sub-grant processing, contracts management, and grants management for the entirety of CSET. Excel/Google Sheets skills are a must. Apply by March 4.
Research Fellow – Cyber/AI: CSET’s CyberAI project is currently seeking Research Fellow candidates to focus on machine learning applications for cybersecurity to assess their potential and identify recommendations for policymakers. Apply by March 14.
Please bookmark our careers page to stay up to date on all active job postings.
What’s New at CSET
CSET MAP OF SCIENCE UPDATE
The CSET Map of Science has undergone an update, with new features including the ability to filter research clusters by counts of papers cited by patents, to view papers from a cluster in the Dimensions web portal, and to explore cluster country collaboration counts over time.