Policy.ai has temporarily moved to a once-a-month schedule while its usual author, Alex Friedland, is on parental leave. We’ll be back to biweekly this fall!
Check Mate?: The release of Huawei’s latest smartphone has some U.S. policymakers questioning whether government efforts to kneecap China’s semiconductor industry are actually working. On August 29, Huawei rolled out the Mate 60 Pro, a smartphone that reportedly runs on the powerful 7-nanometer Kirin 9000 processor. Designed and produced in China by Semiconductor Manufacturing International Corp. (SMIC), the Kirin 9000 still lags a couple of generations behind the most advanced Apple chips. But if Huawei can integrate the chip at scale, it would mark a major victory for China’s domestic technology sector—and potentially a setback for U.S. national security strategy. Some industry experts have warned that the semiconductor export control regime introduced by the Commerce Department last October may have a limited effect while simultaneously galvanizing China’s efforts to build a homegrown chip sector. Huawei’s announcement appears to lend credence to that argument. U.S. National Security Advisor Jake Sullivan said the government is working to get more information on the Kirin 9000’s specs and determine whether U.S. export controls were violated in its production. Rep. Mike Gallagher, chairman of the House Select Committee on the CCP, called on policymakers to block all U.S. technology exports to SMIC and Huawei. Across the Pacific, the Global Times, a CCP-affiliated media outlet, boasted that the “resurgence of Huawei smartphones … is enough to prove that the US’ extreme suppression has failed.”
Regulating AI in Political Ads: On September 6, Google announced that political advertisers must disclose if their ads use video, audio, or images that were generated by AI. The company attributed its decision to “the growing prevalence of tools that produce synthetic content.” Under the new policy, political ads that use AI for anything beyond minor edits will be required to include a synthetic content label. The policy will go into effect in November. While the Federal Election Commission and congressional lawmakers are also considering restrictions on AI in political ads, Google’s decision marks one of the first attempts to regulate this emerging space. The company’s announcement comes amid growing concerns that nefarious actors will use generative AI tools like ChatGPT, Bard, and DALL-E to conduct bigger, cheaper, and better-targeted mis- and disinformation campaigns. The technology has already been used to deceive viewers. On September 11, researchers revealed that China used AI-generated images to spread disinformation about the recent wildfires in Maui, casting the natural disaster as the deliberate result of a secret government weapons test. Shortly after President Biden announced his reelection run, the Republican National Committee released an AI-generated attack ad depicting America in crisis after a Biden victory in 2024. A June ad for Florida Governor Ron DeSantis’ presidential campaign also included an AI-generated image of former President Donald Trump kissing Dr. Anthony Fauci. Identifying and moderating AI-generated content has long proven challenging, and there is no clear path for doing so at scale. Industry leaders have signaled their support for “watermarking” AI-generated media. For now, however, users are largely responsible for discerning for themselves what’s real and what’s fake.
Google Goes on Trial: On Tuesday, a highly anticipated showdown between Google and the Department of Justice kicked off in a D.C. district courtroom. The antitrust trial centers on Google’s dominance of the online search market and the ways in which the company amassed that power. The DOJ alleges that Google used illegal agreements and other anti-competitive practices to undercut its rivals, while Google argues that its dominance is simply the result of developing a superior product. The trial, which is expected to stretch into 2024, is arguably the technology industry’s biggest antitrust case since Microsoft squared off with the DOJ more than two decades ago. And this is not the only antitrust suit currently facing the tech giant—on September 6, Google reached a tentative settlement with some three dozen states over allegations that the company had a monopoly over the distribution of apps on Android devices. The company is also accused of monopolizing online advertising technology and abusing that power to the detriment of consumers and publishers in a separate lawsuit filed by the DOJ. Given Google’s prominence in AI, the outcomes of these cases could have a significant impact on the AI landscape in the years ahead.
The Department of Defense announces its Replicator Initiative: On September 6, Deputy Secretary of Defense Kathleen Hicks announced a new Defense Department plan, called the Replicator Initiative. The initiative aims to boost the production of autonomous systems, with the ultimate goal of fielding “attritable autonomous systems at a scale of multiple thousands, in multiple domains, within the next 18-to-24 months.” Deputy Secretary Hicks framed the plan in the context of the department’s “need to innovate with urgency in this enduring age of strategic competition with the [People’s Republic of China].” Some of these systems will likely incorporate a range of AI applications, from autonomous navigation to computer vision. Another key aspect of the initiative involves moving autonomous systems from development into the field more rapidly. The Replicator Initiative announcement came on the heels of recent detailed reporting from the New York Times about the U.S. Air Force’s plans to incorporate AI-enabled autonomous vehicles into operations, including the Valkyrie experimental aircraft. The Air Force envisions such systems operating in swarms in dangerous combat environments. See Dr. Jaret Riddick’s initial reaction to the Replicator announcement and other recent congressional proposals on autonomy.
U.S.-China science partnership is floundering: In late August, the Biden administration sought to temporarily extend the U.S.-China Science and Technology Cooperation Agreement (STA). The agreement, which has been in force since 1979 and renewed roughly every five years, sets norms for science collaborations; however, the administration renewed the agreement for only six months amid concerns about Chinese intellectual property theft and espionage. The STA was the first U.S.-China bilateral agreement signed after the two countries normalized relations, and experts disagree on the potential impact of withdrawing from it. Some believe that the STA is an important channel through which to maintain communication about science and technology. They argue that ending the agreement would have a chilling effect on collaboration among university researchers and scientists in the United States and China that would ultimately hinder scientific progress in the United States at a critical moment. Others, including CSET’s Anna Puglisi, have noted that Chinese actions like restricting access to its open academic publications should lead policymakers to carefully evaluate the costs and benefits of maintaining the agreement. You can visit CSET’s website to read some of our past research on technological decoupling.
Congressional hearings roundup: In its recent push to explore AI regulation, Congress has convened a number of public and closed-door meetings over the past month on various aspects of artificial intelligence.
On September 7, CSET’s Anna Puglisi testified before the Senate Committee on Energy and Natural Resources in a “Hearing to Examine Recent Advances in Artificial Intelligence and the Department of Energy’s Role in Ensuring U.S. Competitiveness and Security in Emerging Technologies.” Anna and several other speakers testified on advances in AI and the U.S. Department of Energy’s potential role in regulating the technology.
On September 12, Senator John Hickenlooper (D-CO), Chair of the Subcommittee on Consumer Protection, Product Safety and Data Security, convened a hearing on “The Need for Transparency in Artificial Intelligence.” The hearing focused on how to identify different AI applications as beneficial or high-risk and how to most effectively create policies for trustworthiness.
The next day, September 13, Senator Chuck Schumer (D-NY) hosted industry leaders alongside labor and civil rights groups on Capitol Hill for an AI insight forum, the first of a number of planned meetings to collect a range of insights on how best to regulate AI. Attendees included Google CEO Sundar Pichai; Tesla, X, and SpaceX CEO Elon Musk; NVIDIA President Jensen Huang; Meta founder and CEO Mark Zuckerberg; former Google CEO Eric Schmidt; OpenAI CEO Sam Altman; and Microsoft CEO Satya Nadella.
Also on September 13, the Senate Judiciary Subcommittee on Privacy, Technology, and the Law held one in a series of hearings on how best to govern AI. The hearing built on a bipartisan framework for AI laws and regulations developed by Chair Senator Richard Blumenthal (D-CT) and Ranking Member Senator Josh Hawley (R-MO). It featured testimony from Woodrow Hartzog, Professor of Law at Washington University in St. Louis; William Dally, Chief Scientist and Senior Vice President of Research at NVIDIA; and Brad Smith, Vice Chair and President of Microsoft.
Please bookmark our careers page to stay up to date on all active job postings. You can also subscribe to receive job announcements by updating your subscription preferences in the footer of this email.
In Translation: CSET’s translations of significant foreign language documents on AI
China’s Semiconductor-Related Export Controls: This translation is a compilation of China’s semiconductor-related export controls, as of September 7, 2023. It combines translations of the semiconductor-related portions of three official Chinese government documents: (1) the Chinese Catalogue of Technologies Prohibited or Restricted from Export, published in September 2008, (2) adjustments made to the Catalogue in August 2020, and (3) additional proposed adjustments to the Catalogue released for public feedback in December 2022.
Chinese Scientific Data Regulations: Measures for the Management of Scientific Data. This regulation governs the collection, protection, and sharing of scientific data. The regulation states that, as a general principle, scientific data should be shareable, but it also strictly limits the sharing of certain types of data, such as classified information and data to be shared with foreign researchers.
China’s System for Cultivating Homegrown Cybersecurity Talent: 2022 White Paper on the Live-Fire Capabilities of Cybersecurity Talents: Attack and Defense Live-Fire Capability Edition. This white paper, drafted by the Chinese Ministry of Education in concert with several universities, details China’s system for cultivating homegrown “live-fire” cybersecurity talent. The authors describe the methods China uses to train cyber talent—including formal education, corporate training, certification courses, competitions, and bug bounties—but warn that in spite of these efforts, the country’s supply of cybersecurity professionals remains woefully insufficient. The white paper concludes with policy recommendations designed to ameliorate the mismatch between the skills of newly minted graduates and the actual cybersecurity needs of Chinese companies.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
Survey: We’ve launched a 3-minute user survey to mark ETO’s first anniversary and guide our future work. Whether you’re an ETO power user or have never used the platform, your candid input will help us improve! Access the survey here.
On September 7, Senior Fellow Anna Puglisi testified before the Senate Energy and Natural Resources Committee for a hearing on “recent advances in artificial intelligence and the Department of Energy’s role in ensuring U.S. competitiveness and security in emerging technologies.” Read her testimony and watch the full hearing here.