The EU AI Act Could Be in Trouble: Final-stage negotiations over the EU’s AI Act have hit a roadblock that could put the entire package in jeopardy, according to a report from the European news site Euractiv. Having passed the European Parliament in June with overwhelming support, the AI Act, originally proposed in 2021, still had to go through “trilogue” discussions between the Parliament, the European Commission and EU member states before it could become law. That process seemed on track to finish by the end of this year, but, according to Euractiv’s report, representatives from the bloc’s three biggest countries — Germany, France and Italy — pushed back strongly against the act’s regulation of so-called “foundation models” during a meeting last week.

Foundation models — broadly capable models that can be used across a range of purposes (see Helen Toner’s recent explainer for more) — have been one of the key drivers of the recent AI boom. As we covered back in July, European companies had begun to criticize the AI Act, arguing that its regulations — especially those focused on foundation models — would drive away innovative AI firms. Those arguments seem to have swayed some key EU member states, especially those with upstart AI companies of their own.

According to Euractiv, the Spanish presidency of the Council of the EU is attempting to stitch together a solution (a proposed “tiered” approach to foundation models failed to break the stalemate), but time could be running out. One EU source told Euractiv that, unless a solution can be found, the entire AI Act could be on the chopping block.
Representatives of the 28 attending states, including the United States and China, signed on to the “Bletchley Declaration” — a document that acknowledges the danger posed by “frontier” AI models and affirms the importance of developing AI safely.
A group of “like-minded governments” and major AI developers reached an agreement to involve select governments in testing new AI models before their release. The agreement was signed by representatives of the United States, the UK, the EU, Australia, Canada, France, Germany, Italy, Japan, South Korea, and Singapore (China, notably, was not involved in the agreement) and major AI firms, including Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, Mistral and OpenAI.
The 28 countries in attendance agreed to support the development of a “State of the Science” report on frontier AI. The UK government, as the summit’s host, commissioned the prominent AI researcher Yoshua Bengio to lead the effort.
The relatively warm public reception for the summit likely comes as a relief for Sunak, who threw his weight behind the summit as part of a push to make the UK a more prominent player in the global AI regulation conversation. Despite initial concerns that leading government officials would snub Sunak’s invitation, the summit ultimately earned a decent turnout, with U.S. Vice President Kamala Harris, President of the European Commission Ursula von der Leyen, and tech executives Elon Musk, Sam Altman and Jensen Huang all in attendance. The summit will not be a one-off — UK officials announced that a second will take place in South Korea in six months, with a third in France in a year.
OpenAI introduced a handful of new features and model updates at its first developer conference. Of particular note are customizable versions of its popular ChatGPT tool, as well as a new version of the flagship GPT-4 language model, dubbed GPT-4 Turbo, which the company says is cheaper to use and has a significantly expanded context window. Neither development is an unprecedented technical advance (OpenAI competitor Anthropic released a version of its chatbot, Claude, with a similarly large context window earlier this year), but they could further accelerate the rapid adoption of OpenAI’s tools.
A U.S. District Court judge dismissed many of the claims in a closely watched copyright suit against several generative AI art developers. Claims against DeviantArt and Midjourney were dismissed entirely, though Judge William Orrick indicated that modified versions of the claims might pass muster. Meanwhile, a claim against Stability AI — developer of the popular Stable Diffusion model — will be allowed to proceed. The litigation is far from settled, with amended claims expected to be submitted shortly. Generative AI developers appear to anticipate more disputes to come: OpenAI CEO Sam Altman said last month his company would cover the legal fees of customers facing copyright infringement claims.
Leverages the power of the Defense Production Act to impose reporting obligations on the developers of the most powerful AI systems. Developers of these systems must notify the federal government when training such models and disclose the results of all red-teaming tests. No current AI systems are thought to meet the performance threshold of 10^26 floating-point operations set by the EO (a lower threshold applies to models working with biological data), but administration officials said they expected the next generation of models would be covered.
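For a sense of scale, the EO’s threshold can be compared against a standard back-of-the-envelope estimate of training compute — roughly 6 floating-point operations per model parameter per training token. The sketch below uses purely hypothetical model sizes, not figures for any actual system:

```python
# Back-of-the-envelope check against the EO's 1e26 FLOP reporting threshold,
# using the common approximation: training FLOP ~ 6 x parameters x tokens.
# The model sizes below are hypothetical, not estimates for any real system.

REPORTING_THRESHOLD = 1e26  # floating-point operations, per the executive order


def training_flop(parameters: float, tokens: float) -> float:
    """Approximate total training compute: ~6 FLOP per parameter per token."""
    return 6 * parameters * tokens


for params, tokens in [(1e12, 15e12), (1e12, 20e12)]:
    flop = training_flop(params, tokens)
    print(f"{params:.0e} params x {tokens:.0e} tokens ~ {flop:.1e} FLOP "
          f"-> reporting required: {flop >= REPORTING_THRESHOLD}")
```

By this rough measure, a hypothetical 1-trillion-parameter model trained on 15 trillion tokens would land just under the threshold at about 9×10^25 FLOP, consistent with officials’ expectation that the next generation of models, rather than current ones, will be covered.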
Directs agencies to streamline immigration processes to further attract highly skilled immigrants with expertise in AI and critical and emerging technologies.
Directs the NSF to launch a pilot program for the National AI Research Resource. Plans for a NAIRR have been in the works since 2020, and earlier this year, a task force recommended a $2.6 billion investment to build out a “shared research infrastructure” of publicly accessible computing power, datasets, and educational tools. Ultimately, Congress will need to allocate the funds to stand up such a resource in full.
Calls for agencies to identify how AI could assist their missions and to take steps to deploy these technologies by developing contracting tools, training federal employees, and hiring more technical talent via a “National AI Talent Surge.” The Office of Management and Budget is seeking further input on these provisions via a Request for Comment.
Includes a host of other provisions related to protecting against the extant and near-term harms of AI systems, such as discrimination, fraud, and job displacement.
While the earlier version included a measure calling for human involvement in all critical actions related to nuclear weapons employment, the current declaration omits that language.
Where the earlier version called for “appropriate care, including appropriate levels of human judgment, in the development, deployment, and use of military AI capabilities,” the new version omits the clause on “appropriate levels of human judgment.”
With the world’s militaries, including the U.S. military, still grappling with the difficulties of AI development and deployment, it would be surprising if this was the last time the guidelines evolved. The State Department said it will hold a regular dialogue with endorsing states to further bolster international support for the declaration and advance its implementation. The first meeting is planned for the first quarter of next year.
In Translation: CSET’s translations of significant foreign-language documents on AI
Chinese Generative AI Rules: Basic Safety Requirements for Generative Artificial Intelligence Services (Draft for Feedback). This draft Chinese standard for generative AI establishes very specific oversight processes that Chinese AI companies must adopt with regard to their model training data, model-generated content, and more. The standard names more than 30 specific safety risks, some of which—algorithmic bias, disclosure of personally identifiable information, copyright infringement—are widely recognized internationally. Others, such as guidelines on how to answer questions about China’s political system and Chinese history, are specific to the tightly censored Chinese internet. The standard also requires Chinese generative AI service providers to incorporate more foreign-language content into their training data.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
We’re hiring! Please apply or share the roles below with candidates in your network:
Research Fellow – CyberAI: We are currently seeking candidates who are passionate about exploring topics at the intersection of AI and cybersecurity. As a CyberAI Research Fellow, you will play a pivotal role in assessing how AI techniques can enhance cybersecurity, mitigate emerging threats, and shape future cyber operations. We value candidates who can bridge the gap between technical and non-technical audiences, making complex concepts accessible. If you possess strong analytical skills, practical experience in machine learning or cybersecurity, and a deep understanding of AI’s failure modes and potential threats, we want to hear from you. Apply by Monday, November 27th.
Senior or Research Fellow – Compete LOR Lead: We are currently seeking candidates to lead our Compete Line of Research as a Senior or Research Fellow. This Fellow will play a pivotal role in leading and coordinating our Compete LOR efforts by shaping priorities, devising research strategies, and overseeing research execution and report production. We are looking for individuals who possess a profound understanding of technological innovation’s pivotal role in U.S. national power, with expertise in areas such as economic security, trade and investment controls, trade regulation, and antitrust laws. Apply by Monday, November 27th.
Please bookmark our careers page to stay up to date on all active job postings. You can also subscribe to receive job announcements by updating your subscription preferences in the footer of this email.