Alphabet Combines Its AI Crown Jewels to Form “Google DeepMind”: Alphabet has announced the merger of its two AI research laboratories — DeepMind and Google Brain — into a single unit dubbed “Google DeepMind.” In a public memo, CEO Sundar Pichai wrote that combining the talent and computational resources of the two teams will “significantly accelerate our progress in AI.” The London-based DeepMind — which Google acquired in 2014 — has been responsible for some of the biggest breakthroughs in AI over the last decade; its AlphaGo program was the first AI system to defeat a professional Go player, and its protein-folding AlphaFold was dubbed the “breakthrough of the year” by Science magazine in 2021. Google Brain was tremendously influential in its own right — in 2017, its researchers developed the transformer, the model architecture behind many of today’s powerful generative AI systems. But it hadn’t been smooth sailing at either lab of late: as we covered in 2021, Google executives rebuffed a DeepMind effort to secure more structural and decision-making autonomy; Google Brain, meanwhile, has reportedly suffered from internal frustration and attrition since the company fired two prominent AI ethics researchers, Timnit Gebru and Margaret Mitchell. But even if the merger is in part motivated by those behind-the-scenes issues (as Mitchell speculated on Twitter), the primary cause appears to be the threat to Alphabet’s core internet search business posed by the AI products of rivals such as Microsoft and OpenAI. Since late last year, the company has quickly shifted both talent and resources toward building and deploying generative AI products. While the new outfit retains the “DeepMind” name and cofounder Demis Hassabis will continue to serve as its CEO, it remains unclear how the merger will affect the lab’s core DNA — and whether the new Google DeepMind will maintain its focus on “Nobel-level problems” or shift its attention toward developing more commercialized tools.
Palantir Wants to Bring Chatbots to Military Decision-Making: Last week, Palantir Technologies — the data analytics company and government contractor — announced its AI Platform (AIP) for Defense, a software tool meant to adapt and run large language models on classified networks and data. A company promotional video shows military operators engaging with an LLM-powered chatbot to identify enemy equipment and positions, generate possible operational plans, task assets, and initiate operations. Just as in the private sector, generative AI has caught the attention of military leaders. In public comments, a number of Pentagon officials have said they are thinking about how to incorporate LLM-powered tools like OpenAI’s ChatGPT into the DOD’s workflow. But they have also expressed (appropriate) concern that such systems may not yet be ready for prime time. As Kimberly Sablon, Principal Director for Trusted AI and Autonomy in the Office of the Under Secretary of Defense for Research and Engineering, noted earlier this year, the propensity of such models to hallucinate facts (and even citations and references) is a serious concern that must be addressed before they can be incorporated into DOD workflows. While Palantir has not released detailed information about how its AIP system works, the promotional video shows it using a handful of off-the-shelf, open-source LLMs — FLAN-T5 XL, GPT-NeoX-20B, and Dolly-v2-12b — none of which is immune to hallucination. Palantir says its platform contains “industry leading guardrails to control, govern, and trust in the AI,” but the company has released no information indicating that it has cracked the hallucination problem. It seems unlikely that military users will be ready to trust LLMs until those core, underlying issues are adequately addressed.
White House, Commerce Take Steps toward Trustworthy AI Policies: As Vice President Harris and other senior administration officials prepare to meet later this morning with the CEOs of Alphabet, Anthropic, Microsoft and OpenAI about potential AI risks, the Commerce Department’s National Telecommunications and Information Administration is requesting public feedback about what types of policies could ensure that AI systems are trustworthy and work as promised while avoiding harm. The request for comment from the NTIA, which is in charge of federal telecommunications and information policy, is an important step toward rolling out more concrete policies and regulations. In a speech at the University of Pittsburgh, NTIA head Alan Davidson said that the RFC was part of a larger “AI initiative” that would “help build an ecosystem of AI audits, assessments and other mechanisms to help assure businesses and the public that AI systems can be trusted.” Since President Biden took office in 2021, his administration has introduced a number of measures related to responsible AI and AI safety, such as the Blueprint for an AI Bill of Rights and NIST’s AI Risk Management Framework. To date, most of the administration’s initiatives — including the “public assessments” of generative AI systems announced this morning — have been voluntary frameworks or non-binding advisory documents, but the NTIA’s RFC, together with the increasing pace of warnings from regulatory agencies (see the story below), indicates that more concrete regulatory action could be on the horizon.
The FTC, DOJ, CFPB and EEOC Pledge to Protect Against Harmful AI: Last week, the heads of the Federal Trade Commission, the Justice Department’s Civil Rights Division, the Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission released a joint statement expressing concern about the potential of automated systems to perpetuate unlawful bias, discrimination and other harmful outcomes. The officials pledged that they would “vigorously use our collective authorities to protect individuals’ rights.” Regulators have been turning up the heat in recent months: as we covered last time, the DOJ warned that it was looking out for anti-competitive behavior in the AI industry, and the FTC published multiple blog posts warning companies to keep their AI activities on the straight and narrow. This week, the FTC published another such post, advising companies to avoid using AI-powered advertising to trick people into harmful choices and emphasizing the importance of transparency in AI-generated content. As noted in the story above about the NTIA’s request for comment, regulators’ focus on AI appears to be intensifying as commercial AI competition heats up.
AI Advisory Panel Issues Its Inaugural Report: The National AI Advisory Committee — the panel charged with advising the president and the National AI Initiative Office on issues related to AI — released its inaugural report last week. The NAIAC was established by the National AI Initiative Act of 2020 and held its first meeting last year. The new report covers the first year of the committee’s three-year appointment and offers 23 recommended actions tied to 13 objectives meant to “help the U.S. government and society at large navigate this critical path to harness AI opportunities, create and model values-based innovation, and reduce AI’s risks.” The report’s objectives include “Operationalize trustworthy AI governance,” “Ensure AI is trustworthy and lawful and expands opportunities,” “Scale an AI-capable federal workforce,” and “Continue to cultivate international collaboration and leadership on AI.” It endorses the approach taken by the National Institute of Standards and Technology in its AI Risk Management Framework and recommends that the White House take steps to implement the AI RMF across the federal government, encourage the private sector to adopt the AI RMF or aligned processes, and make an effort to “internationalize” the AI RMF through formal translations and workshops, so that the framework can serve as the world’s “common language” on AI risk. On the day of the report’s release, four of the NAIAC’s members took part in a Brookings Institution panel to discuss the report and the state of AI development — a recording of that event is available online.
In Translation: CSET’s translations of significant foreign-language documents on AI
PRC Reorganization Plan: Reform Plan for Party and State Institutions. This document is the full text of a significant Party and government reorganization plan that China’s parliament, the National People’s Congress, passed in March 2023. Highlights of the plan include two new commissions to oversee the financial industry, a new Party body that oversees technology policy, a weakened Ministry of Science and Technology, and the creation of a new National Data Bureau.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.