EU AI Act Passes Two Committees, Heads to Parliament Vote Soon: Last week, two key EU committees approved a compromise version of the EU AI Act, setting the stage for a European Parliament vote, negotiations to resolve differences among EU institutions and, ultimately, adoption sometime this year or next. When it was proposed in 2021, the AI Act was ahead of its time, both in elevating AI as a key policy issue and in how it aimed to regulate “high risk” applications. A lot has changed in the two years since: regulators elsewhere — including in China and the United States — began paying more attention to AI, and the rise of popular generative tools from the likes of OpenAI and Midjourney has had a profound impact on the broader AI ecosystem. Some of the changes in the latest compromise text reflect those developments — the proposed act now includes language about “foundation models” and transparency requirements for generative AI tools (Brookings Fellow Alex Engler has some helpful Twitter threads and a running Google Doc detailing additional changes). Observers say the European Parliament will likely vote on the text of the act in June and that trilogue discussions between the EP, the European Commission and member states should start in July.
Expert Take: “The AI Act lays out a set of rules and processes intended to prevent harm from and to promote trustworthy AI. But many of the AI Act’s essential requirements, such as conformity assessments of high-risk AI systems, depend on standards and procedures that either don’t exist yet or for which we lack best practices and guidelines. These specifics will be determined by European standards bodies, market surveillance agencies and national governments, and will substantially shape the regulation’s effectiveness. As the AI Act is finalized over the course of the trilogue, we need to pay close attention to those parallel processes that will determine if the regulation actually has teeth.” — Mia Hoffmann, Research Fellow
Google Debuts New LLM — But Internal Critics Worry About Its AI Future: At its annual I/O conference last week, Google made a host of AI-related announcements. Chief among them was the debut of the PaLM 2 language model, which the company says is its most powerful language model yet. According to a Google technical report, the largest model in the PaLM 2 family outperforms its predecessor — PaLM, released last year — despite being “significantly smaller.” Google says PaLM 2 is already powering 25 of the company’s products, including the Bard chatbot, a generative AI “snapshot” feature that the company is rolling out for its search engine, and a number of new AI features for its Workspace office programs. But Google’s barrage of public announcements took place against a backdrop of some private uncertainty. An internal memo leaked to SemiAnalysis raised concerns about Google’s ability to “win [the AI] arms race” because of the threat posed by the open source community. The memo, which was reportedly shared widely within Google, argued that open source models such as Meta’s LLaMA family and Stability AI’s Stable Diffusion image generator will make it difficult for companies like Google and OpenAI to build business strategies with generative AI at their center. While the memo made waves in the tech world after it leaked, it provoked plenty of disagreement — both about its characterization of open source models’ performance and the likelihood that consumers will embrace open source alternatives in meaningful numbers. With both Google and Microsoft rolling out generative AI tools in their business-oriented products, we should soon get a sense of whether their “moats” remain intact.
Anthropic’s AI Assistant Can Now Process A Book’s Worth of Text: The AI startup Anthropic announced it had expanded the context window of its “Claude” AI assistant from 9,000 to 100,000 tokens. A model’s context window is a crucial part of its performance — like short-term memory, it corresponds to the amount of information a model can process at once. Claude’s 100,000-token context window means it can process roughly 75,000 words (about the length of the first Harry Potter book). That massive context window — GPT-4, by comparison, maxes out at 32,000 tokens — is both an important sign of what’s possible for LLMs and a potential differentiator for Anthropic in its competition with the likes of OpenAI and Google. The update means that Claude can accept significantly longer texts as inputs, hold extended conversations, and maintain coherence over the course of a long back-and-forth. The expanded context window is available to users through Anthropic’s API.
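To make the token math above concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the common rule of thumb of roughly 0.75 English words per token — a heuristic, not Anthropic’s actual tokenizer, so real counts will vary with the text and the model:

```python
# Rough context-window arithmetic for Claude's expanded window.
# Assumption: ~0.75 English words per token (a common heuristic);
# actual token counts depend on the model's tokenizer.

WORDS_PER_TOKEN = 0.75  # assumed conversion factor, not an official figure

def approx_words(tokens: int) -> int:
    """Estimate how many English words fit in a given token budget."""
    return int(tokens * WORDS_PER_TOKEN)

old_window, new_window = 9_000, 100_000
print(f"Old window: {old_window:>7,} tokens ~ {approx_words(old_window):,} words")
print(f"New window: {new_window:>7,} tokens ~ {approx_words(new_window):,} words")
# New window: 100,000 tokens ~ 75,000 words, about the length of
# the first Harry Potter book, as noted above.
```

Under that heuristic, 100,000 tokens × 0.75 ≈ 75,000 words, which matches the book-length estimate in the item above.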
Senate Judiciary Hearing on AI Makes Waves: AI took center stage on Capitol Hill on Tuesday when the Senate Judiciary Committee held a widely covered hearing on AI oversight. The hearing before the Subcommittee on Privacy, Technology, and the Law was notable both for its subject matter and its witnesses: OpenAI CEO Sam Altman, NYU emeritus professor and prominent AI critic Gary Marcus, and IBM Chief Privacy & Trust Officer Christina Montgomery. While U.S. lawmakers have been slower than their European counterparts to get the ball rolling on AI legislation, the subcommittee’s members appeared eager to avoid repeating history — “Congress failed to meet the moment on social media,” Sen. Blumenthal said in his opening remarks, “now we have the obligation to do it on AI before the threats and the risks become real.” Both the witnesses and the senators expressed a desire for greater regulation, but some observers raised concerns that the sense of urgency means important perspectives are being overlooked. Margaret Mitchell and Timnit Gebru, the former co-leads of Google’s Ethical AI team, told The Washington Post that lawmakers should take steps to ensure they’re not overly deferential to industry voices when it comes to conceiving and enacting new AI laws.
“Create and field capabilities at speed and scale” (including through collaboration with international allies and “non-traditional partnerships”);
and “Ensure the foundations for research and development” (through infrastructure and workforce investments).
It underscores the importance of creating a more dynamic innovation ecosystem and says the DOD will pursue “new pathways to rapidly experiment with asymmetric capabilities and deliver new technologies at scale.” DOD officials told reporters the department will send an implementation plan for the strategy to Congress within the next 90 days.
NSF Announces Seven New AI Research Institutes: The National Science Foundation, in partnership with other federal agencies and collaborating stakeholders, announced a $140 million investment that will establish seven more National Artificial Intelligence Research Institutes at universities across the country. The new institutes will focus on:
Neural and Cognitive Foundations of Artificial Intelligence (led by Columbia University — funded by a partnership between NSF and the DOD’s Office of the Under Secretary of Defense for Research and Engineering)
On May 17, Deputy Director of Analysis and Research Fellow Margarita Konaev spoke at Nexus 23, a symposium at the National Press Club held by Applied Intuition in collaboration with the Atlantic Council. A non-resident senior fellow at the Council, Konaev took part in a panel discussion, “Ukraine: Autonomy, AI, and Lessons from the Battlefield.”
The Wall Street Journal: According to an article by Lingling Wei, the Chinese government chose to restrict overseas access to important data sources due to “a drumbeat” of reports from U.S. think tanks, including CSET.