China Rolls Out New Algorithm Rules: Last week, Chinese regulators finalized new rules on the use of algorithmic recommendation systems. The regulations (translation available here), which were proposed last August, place significant limits on tech companies’ content recommendation algorithms and give users the ability to opt out of or limit their use of the systems. The new rules, set to go into effect on March 1, will:
Require companies to display information about how the algorithms work and their intended purpose.
Give users the ability to opt out of using algorithmic recommendation systems entirely, opt out of personalized recommendation systems specifically, or delete personalized tags used by algorithms to make individual recommendations.
Place limits on models that cause users to “become addicted or spend too much.”
Bar algorithms that “generate or synthesize fake news information” (a new addition since the draft regulations).
Observers have noted that while some of the law’s provisions — namely those that ban the promotion of fake news, deepfakes, and “negative information” — are ambitious, it is unclear whether companies possess the technical ability to detect and remove such content. And while the penalties for violating the law are relatively small — fines can reach up to approximately $15,600 — the prevalence of algorithmic recommendation systems among Chinese companies such as Tencent, Alibaba and ByteDance means the new law will likely still have a significant effect.
White House Releases Guidance on Research Security: In related news, the White House issued a document providing implementation guidance for federal agencies as they update their research security policies. During the final week of the Trump administration, the White House issued a presidential memorandum that directed agencies to standardize their disclosure requirements with the aim of tightening research security. With many of the China Initiative’s flagship cases hinging on disclosure requirement misconduct, the memo has naturally attracted significant attention from researchers who receive U.S. government funding. Developed by an interagency panel at the direction of Office of Science and Technology Policy Director Eric Lander, the new guidance document focuses on standardizing disclosure requirements across agencies, implementing “digital persistent identifiers” to ease disclosure, and encouraging communication between agencies, among other things. While the guidance did not address the China Initiative directly, Lander’s foreword struck a reassuring tone, stressing the importance of avoiding measures that could create a “chilling atmosphere” for researchers. Lander has directed a handful of federal agencies to develop model grant application forms within the next 120 days that clearly articulate disclosure requirements.
DARPA Releases Toolkit to Help AI Developers Shore Up Defenses: The Defense Advanced Research Projects Agency released a toolkit to aid AI developers in testing their models’ defenses against attacks. While the Pentagon has increasingly embraced AI and machine learning tools, their adoption is not without potential risk — as the NSCAI warned in its final report last year, AI systems can be vulnerable to adversarial attacks, and it urged the U.S. government to step up its efforts on AI defense. DARPA’s “Guaranteeing AI Robustness against Deception” (GARD) program brought together researchers from academia and industry to compile a set of open-source tools (available here) that can help AI developers identify vulnerabilities and make their systems robust against a range of attacks. The toolkit includes a virtual evaluation testbed, a benchmark dataset, and “test dummies” that can help developers identify insufficient defenses — all open to the broader developer community through GitHub.
In Translation: CSET’s translations of significant foreign-language documents on AI
PRC Military-Civil Fusion Plan: Xianning City Action Plan for In-Depth Development of Military-Civil Fusion (2021-2025) (Revised Draft). This plan is one example of how local PRC governments are implementing China’s “military-civil fusion” strategy, which encourages the Chinese military to tap into private-sector innovation and allows private companies to commercialize select military innovations. Like other PRC industrial policies, this local plan calls for the government to aid the expansion of leading private companies. Unlike many other military-civil fusion strategies, which focus on the high-tech sector, this one describes how local low-tech industries — such as firefighting equipment, agricultural produce and traditional Chinese herbal cures — can support the military. Note that although the Communist Party has drastically cut back on its use of the term “military-civil fusion” in recent years — 2021’s 14th Five-Year Plan Outline omits the phrase entirely — this local plan continues to use the terminology.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
We’re hiring! Please apply or share the roles below with candidates in your network:
Research Analyst (multiple): CSET RAs are vital to our work across a range of lines of research. Research Analysts collaborate with Research and Senior Fellows to execute CSET’s research. Apply by February 25 and be sure to list your areas of research interest in your cover letter.
Data Research Analyst (multiple): DRAs work alongside our analysis and data teams to produce data-driven research products and policy analysis. This role combines knowledge of research methods and data analysis skills. Those with experience in common data visualization, programming languages, and/or statistical analysis tools may find this position of particular interest. Apply by February 25.
AI Research Subgrant (AIRS) Program Director: CSET’s AIRS Program Director will manage sourcing, distributing and monitoring research grants that seek to promote the exploration of foundational technical topics that relate to potential national security implications of AI over the long term. Closes on January 31.
Please bookmark our careers page to stay up to date on all active job postings.