“Code Red” for Google — How ChatGPT Could Upend AI Development: Last month, Google executives declared “code red” over the threat posed to the tech behemoth’s core business — its all-important search engine — by OpenAI’s ChatGPT, according to a New York Times report. This week, news emerged that Microsoft plans to invest $10 billion in OpenAI and could incorporate some of the San Francisco-based lab’s popular tools — ChatGPT and DALL-E chief among them — into its products, such as Microsoft Office and the Bing search engine. The maneuvers could herald the beginning of a tech competition with the potential to dramatically reshape the AI development landscape.

Although it only debuted in November, ChatGPT’s ability to answer complex questions — despite its propensity to make up facts and perpetuate biases (defects practically universal among similarly trained systems) — already has some users considering it as an alternative to Google’s golden goose: its search engine.

While far from a certainty, the prospect has spurred a major shift at Google: according to the New York Times report, CEO Sundar Pichai has reassigned several teams within the company to work on developing and releasing new AI systems. That shouldn’t be a major technical hurdle for the company — Google and its sister companies, such as Alphabet-owned DeepMind, have long been at the forefront of AI research — but it will require a significant reevaluation of its risk tolerance.

While Google has powerful AI chatbots of its own — readers may remember that a Google engineer made the sensational claim last year that the company’s chatbot LaMDA was “sentient” — the company has reportedly been reluctant to make such systems public because of the “reputational risk” that biased or untrustworthy systems could pose to its core businesses.
But this more considered approach to AI development — which still earned criticism from some AI ethics researchers for being too reckless — appears to be a luxury Google’s leaders feel they can no longer afford.
Policymakers in Beijing, Taipei and Seoul Grapple With Chip Subsidies: It looks like semiconductor industrial policy will remain a hot topic in 2023:
Bloomberg reports that Chinese officials are considering moving away from the strategy of using direct subsidies to build up the country’s semiconductor manufacturing. As we covered last month, reports indicated that Beijing was planning a $143 billion support package for domestic chipmakers. But now, according to Bloomberg’s sources, some top Chinese officials are considering a different course after previous efforts (and the corruption that accompanied them) proved insufficient. It remains unclear how Beijing is planning to build up its semiconductor industry — a goal that has become more urgent since the United States announced a suite of export controls meant to cut off China’s access to high-end chips and chipmaking tools in October.
China Enacts Strict New Regulations on Generative AI Systems: This week, China enacted new rules governing the use of generative AI systems, such as those used to create images, videos, text and audio (including deepfakes). The new regulations issued by the Cyberspace Administration of China (translation available here) place significant restrictions on such systems. They prohibit their use in the creation or dissemination of “fake news information” and require the providers of “deep synthesis” services to enforce a number of safeguards: labeling generated content that “might cause confusion or mislead the public,” conducting user-identity verification, running reviews of user inputs and outputs, building “mechanisms for dispelling rumors” associated with AI-generated content, and more. This isn’t the first time Chinese authorities have gone further than their U.S. and European counterparts in regulating AI systems — last March, Beijing debuted rules governing the use of content recommendation algorithms. But the new rules could be some of the country’s most significant yet, depending on how they are enforced. Unlike last year’s rules on recommendation algorithms, the new regulations do not outline specific penalties. Should the penalties — combined with the abundant safeguarding requirements — prove too onerous, the companies behind generative AI systems may decide that the juice isn’t worth the squeeze and forgo the Chinese market altogether.
IARPA Wants to Build AI Editors for Human Analysts: The Intelligence Advanced Research Projects Activity has launched a program to develop an AI-powered system that can help human analysts improve the quality of their written work. While text generators like ChatGPT may be in vogue, IARPA doesn’t seem eager to turn over its analysis work to AI just yet. Instead, the Rapid Explanation, Analysis, and Sourcing Online (REASON) program aims to create a system that works like an editor — taking draft reports written by humans and offering suggestions for their improvement. According to a technical description published by IARPA, the resultant system should work by focusing on three task areas: identifying relevant evidence that the author may have missed, identifying strengths and weaknesses in the author’s reasoning, and giving recommendations to improve the report’s argumentation. The REASON program will run for 42 months, beginning in October 2023, and will be split into two phases. In phase one, candidate systems will be developed and tested on unclassified data. During phase two, those systems will be refined and tested on classified data.
In Translation: CSET’s translations of significant foreign-language documents on AI
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
We’re hiring! Please apply or share the roles below with candidates in your network:
People Operations Specialist: We are currently seeking a People Operations Specialist to play a key role in helping to build and develop the CSET team, with a particular focus on furthering our diversity, equity and inclusion initiatives. This Specialist will provide administrative, organizational and project management support to ensure that our people-focused operations run smoothly. Applications due by January 30.
Please bookmark our careers page to stay up to date on all active job postings. You can also subscribe to receive job announcements by updating your subscription preferences in the footer of this email.