EU AI Act Clears Parliament Vote, Moves Closer to Enactment: On Wednesday, the European Parliament voted overwhelmingly to approve the EU AI Act, which, if enacted, would place significant limits on certain “high-risk” applications and ban others outright. The 499-28 vote (with 93 abstentions) clears the way for negotiations with the European Council, which must take place before the act can become law. Though the act has undergone some changes since it was originally proposed in 2021, its core risk-based approach remains. Systems designated by regulators as “high risk” — such as those used in critical infrastructure or law enforcement — would be subject to strict limits, while systems deemed to pose an “unacceptable risk” would be banned outright. One such “unacceptable” application is the use of facial recognition in public spaces. The parliament’s version of the act survived attempts to insert exceptions for some law enforcement applications, but Politico reports that the issue is likely to remain a sticking point during upcoming trilogue discussions between the EP, the European Commission and EU member states. The act could also receive significant pushback from AI developers — OpenAI CEO Sam Altman said last month that the company could choose to leave the EU if it found complying with the act too burdensome (though he later walked that statement back). EU officials say they hope to have a final agreement on the act before the end of the year.
Open Statement Warns of “Societal-Scale Risks” Posed by AI: A group of prominent AI researchers, policymakers and business leaders signed on to an open statement expressing concern about the existential risk posed by AI. The statement, which reads “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” was published by the San Francisco-based Center for AI Safety (both the Center for AI Safety and CSET receive support from Open Philanthropy) and earned the endorsement of Microsoft founder Bill Gates, OpenAI CEO Sam Altman, U.S. Congressman Ted Lieu and, notably, hundreds of professors in computer science and other disciplines. It follows another open letter, published in March (and signed by many of the same people), which called for a six-month pause on the training of powerful AI systems because of the civilizational risk such systems could pose. But according to Center for AI Safety Executive Director Dan Hendrycks, the new statement deliberately avoids identifying concrete interventions in order to create as big a tent as possible for those concerned about AI’s potential harms. As with the earlier letter, a number of observers argued that the focus on hypothetical existential risks is a distraction from genuine, ongoing harms caused by systems deployed today. While that concern remains valid, policymakers have not yet shown signs of succumbing to AI-doom myopia; as many of the other stories in this week’s newsletter demonstrate, governing bodies around the world continue to walk and chew gum — focusing conversations and regulatory efforts on mitigating existing harms while planning for long-term potentialities.
As the use of AI tools has become more common, so has their misuse. While some of the blame falls on individual users who turn to tools they don’t understand, observers and regulators alike are starting to direct more criticism at the companies behind the tools. If regulators’ recent missives are any indication, more than criticism could be on the way soon.
The Office of Science and Technology Policy issued a request for information (RFI) related to a “National AI Strategy.” While the Biden administration has already released a number of documents related to AI — including the Blueprint for an AI Bill of Rights and NIST’s AI Risk Management Framework — the new AI strategy would take a “whole-of-society approach to AI” and “ensure a cohesive and comprehensive approach to AI-related risks and opportunities.” The RFI indicates that the Biden administration is interested in addressing both the immediate harms associated with AI and the longer-term risks the technology poses as it develops. Responses to the RFI must be submitted by July 7.
OSTP issued an update to the National AI R&D Strategic Plan. The update reaffirms the eight strategies laid out in the initial AI R&D strategic plan (released in 2016) and its 2019 update — including making long-term investments in “fundamental and responsible AI research” (Strategy 1), grappling with the “ethical, legal, and societal implications of AI” (Strategy 3), and ensuring AI systems’ safety and security (Strategy 4) — and adds a ninth strategy emphasizing the importance of international collaboration in AI research (in related news, last week the United States and UK announced an “Atlantic Partnership” that includes plans to collaborate on critical and emerging technologies).
In Translation: CSET’s translations of significant foreign language documents on AI
Chinese Think Tank Paper: White Paper on AI Framework Development (2022). This white paper by a Chinese state-affiliated think tank emphasizes the importance of AI frameworks in the current and future phases of the development of AI technology. The white paper references well-known foreign frameworks such as TensorFlow and PyTorch, and also goes into considerable detail on applications of Chinese platforms such as Baidu’s PaddlePaddle and Huawei’s MindSpore.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
We’re hiring! Please apply or share the roles below with candidates in your network:
Data Research Analyst: As a part of our data team, Data Research Analysts work directly with specific lines of research to produce data-driven research products and policy analysis alongside CSET’s analysis teams. This dynamic role serves as a bridge between the data and analysis teams and combines knowledge of research methods with data analysis skills. Apply by June 26, 2023
Research/Senior Fellow – Emerging Technology Workforce: This Fellow will lead and coordinate our efforts on the Workforce LOR, including shaping priorities, laying out an overall research strategy, overseeing the execution of the research and production of reports, and helping to hire and manage supporting researchers. Apply by July 1, 2023
Please bookmark our careers page to stay up to date on all active job postings. You can also subscribe to receive job announcements by updating your subscription preferences in the footer of this email.