Worth Knowing
“Code Red” for Google — How ChatGPT Could Upend AI Development: Last month, Google executives declared “code red” over the threat posed to the tech behemoth’s core business — its all-important search engine — by OpenAI’s ChatGPT, according to a New York Times report. This week, news emerged that Microsoft plans to invest $10 billion in OpenAI and could incorporate some of the San Francisco-based lab’s popular tools — ChatGPT and DALL-E chief among them — into its products, such as Microsoft Office and the Bing search engine. The maneuvers could herald the beginning of a tech competition with the potential to dramatically reshape the AI development landscape. Although it only debuted in November, ChatGPT’s ability to answer complex questions — despite its propensity to make up facts and perpetuate biases (defects practically universal among similarly trained systems) — already has some users treating it as an alternative to Google’s golden goose: its search engine. While far from a certainty, the prospect has spurred a major shift at Google: according to the New York Times report, CEO Sundar Pichai has reassigned several teams within the company to work on developing and releasing new AI systems. That shouldn’t be a major technical hurdle for the company — Google and its sister companies, such as Alphabet-owned DeepMind, have long been at the forefront of AI research — but it will require a significant reevaluation of its risk tolerance. While Google has powerful AI chatbots of its own — readers may remember that a Google engineer made the sensational claim last year that the company’s chatbot LaMDA was “sentient” — the company has reportedly been reluctant to make such systems public because of the “reputational risk” that biased or untrustworthy systems could pose to its core businesses.
But this more considered approach to AI development — which still earned criticism from some AI ethics researchers for being too reckless — appears to be a luxury Google’s leaders feel they can no longer afford.
Policymakers in Beijing, Taipei and Seoul Grapple With Chip Subsidies: It looks like semiconductor industrial policy will remain a hot topic in 2023:
- Bloomberg reports that Chinese officials are considering moving away from the strategy of using direct subsidies to build up the country’s semiconductor manufacturing. As we covered last month, reports indicated that Beijing was planning a $143 billion support package for domestic chipmakers. But now, according to Bloomberg’s sources, some top Chinese officials are considering a different course after previous efforts (and the corruption that accompanied them) proved insufficient. It remains unclear how Beijing is planning to build up its semiconductor industry — a goal that has become more urgent since the United States announced a suite of export controls meant to cut off China’s access to high-end chips and chipmaking tools in October.
- Taiwan passed new subsidies that will let domestic chipmakers convert significant portions of their annual R&D and equipment expenditures into tax credits. While Taiwanese policymakers were broadly supportive of the semiconductor manufacturing subsidies in the United States’ CHIPS and Science Act — Taiwan-based chipmakers such as TSMC and GlobalWafers stand to benefit from their investments in U.S. plants — officials in Taipei said they wanted to make sure the country’s chipmakers “keep their roots here.”
- South Korea reportedly plans to offer tax credits of up to 25 percent for capital expenditures in strategic technologies, including semiconductors. A bill passed in December had pegged the tax breaks at 8 percent, but that plan drew the ire of President Yoon Suk Yeol, who argued that South Korea needed to do more to keep up with the semiconductor spending of other countries. South Korean lawmakers will still have to approve the new subsidies before they can go into effect.
- More: China to revamp chip strategy under US pressure, but $143 billion support package is not on the cards | Huawei EUV Scanner Patent Suggests Sub-7nm Chips for China | New U.S. House creates committee focused on competing with China
Government Updates
The State Department Launches Emerging Tech Office: Last week, the State Department established the Office of the Special Envoy for Critical and Emerging Technology, which will help coordinate the department’s emerging technology diplomacy. In comments to the media, a State Department spokesperson said the office is a key part of the department’s modernization agenda and will act as a “center for expertise” on key technologies such as AI, biotechnology, advanced computing and quantum information technology. The office will also help to “develop and coordinate” policy and will work with relevant foreign partners. Diplomatic efforts related to emerging technologies have picked up in recent years — the U.S.-EU Trade and Technology Council and ongoing discussions about semiconductor export controls with the Netherlands and Japan stand out as especially prominent examples — and the establishment of the new office seems predicated on the expectation that the trend will continue. While the department has not yet appointed a special envoy, Seth Center — formerly a senior advisor to the National Security Commission on Artificial Intelligence, among other things — has been appointed deputy envoy and will work to stand up the office.
IARPA Wants to Build AI Editors for Human Analysts: The Intelligence Advanced Research Projects Activity has launched a program to develop an AI-powered system that can help human analysts improve the quality of their written work. While text generators like ChatGPT may be in vogue, IARPA doesn’t seem eager to turn over its analysis work to AI just yet. Instead, the Rapid Explanation, Analysis, and Sourcing Online (REASON) program aims to create a system that works like an editor — taking draft reports written by humans and offering suggestions for their improvement. According to a technical description published by IARPA, the resultant system should work by focusing on three task areas: identifying relevant evidence that the author may have missed, identifying strengths and weaknesses in the author’s reasoning, and giving recommendations to improve the report’s argumentation. The REASON program will run for 42 months, beginning in October 2023, and will be split into two phases. In phase one, candidate systems will be developed and tested on unclassified data. During phase two, those systems will be refined and tested on classified data.
In Translation
CSET’s translations of significant foreign language documents on AI
PRC Mobile App Regulation Announcement: Office of the Central Cyberspace Affairs Commission Arranges and Launches “Clear and Bright: Rectifying the Chaos in the Realm of Mobile Internet Application Programs” Special Campaign. This document announces a new crackdown on mobile apps by China’s internet regulator. The announcement describes an app market rife with scams, pornography, and other undesirable phenomena, and puts the onus on Chinese app provider platforms to improve the situation.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
Job Openings
We’re hiring! Please apply or share the roles below with candidates in your network:
- People Operations Specialist: We are currently seeking a People Operations Specialist to play a key role in helping to build and develop the CSET team, with a particular focus on furthering our diversity, equity and inclusion initiatives. This Specialist will provide administrative, organizational and project management support to ensure that our people-focused operations run smoothly. Applications due by January 30.
What’s New at CSET
PUBLICATIONS
- Lawfare: The AI “Revolution in Military Affairs”: What Would It Really Look Like? by Owen Daniels
- CSET, OpenAI, and the Stanford Internet Observatory: Forecasting Potential Misuses of Language Models for Disinformation Campaigns — and How to Reduce Risk by Josh Goldstein, Girish Sastry, Micah Musser, Renée DiResta, Matthew Gentzel and Katerina Sedova
- What’s hot in pharmacology? Discovering emerging topics with ETO’s Map of Science
- Virus-fighting nanoparticles, quantum dots, transparent wood: Hot topics in five research domains, found fast with ETO’s Map of Science
IN THE NEWS
- The Washington Post: Pranshu Verma reached out to Deputy Director of Analysis and Research Fellow Margarita Konaev to discuss the potential future uses of AI during the war in Ukraine.
- DefenseScoop: Senior Fellow Emelia Probasco weighed in on the Pentagon’s efforts to turn its high-level principles into concrete AI development in a Brandi Vincent article.
- BBC World News: Research Analyst Hanna Dohmen discussed China’s new generative AI regulations on the BBC World News channel.
- ABC News: Dohmen also appeared on Australia’s ABC News to shed light on the Chinese regulations.
- CyberScoop: Elias Groll covered the release of Josh Goldstein, Girish Sastry, Micah Musser, Renée DiResta, Matthew Gentzel and Katerina Sedova’s new report, Forecasting Potential Misuses of Language Models for Disinformation Campaigns — and How to Reduce Risk.
What We’re Reading
Article: The Illusion of Controls: Unilateral Attempts to Contain China’s Technology Ambitions Will Fail, Sarah Bauerle Danzman and Emily Kilcrease, Foreign Affairs (December 2022)
Blog Post: OpwnAI: Cybercriminals Starting to Use ChatGPT, Check Point Research (January 2023)
Upcoming Events
- January 18: CSET Webinar, The Multinational Artificial Intelligence Landscape, featuring Karine Perset and Audrey Plonk of the OECD and CSET Executive Director Dewey Murdick
What else is going on? Suggest stories, documents to translate & upcoming events here.