Worth Knowing
IDF Accelerates Targeting with AI, According to Report: A report published last week by the Israeli outlets +972 Magazine and Local Call alleges that the Israel Defense Forces (IDF) have relied heavily on an AI-powered system — with minimal human oversight — to identify targets for airstrikes in Gaza. The report, which cites six unnamed Israeli intelligence officers and has not been independently verified (though the accounts were also shared with the Guardian), claims that the system — dubbed “Lavender” — processed massive amounts of surveillance data to mark as many as 37,000 Palestinians as likely Hamas or Palestinian Islamic Jihad militants. The IDF has reportedly leaned on Lavender’s lists since the early weeks of the current war in Gaza, with little practical human review: one source said they spent approximately 20 seconds evaluating each Lavender-selected target — only enough time to confirm the target was male. A second automated system described in the report — referred to as “Where’s Daddy?” — has allegedly been used to track targets to their private homes, which were then marked for bombing, a practice that would significantly elevate the risk of civilian casualties. In a lengthy response, the IDF disputed the report, writing that “the IDF does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist. Information systems are merely tools for analysts in the target identification process. … The ‘system’ … is not a system, but simply a database whose purpose is to cross-reference intelligence sources.”
Expert Take: “Many discussions of AI in warfare focus on the risks of lethal autonomous weapons systems (LAWS) that autonomously select their own targets. There’s often an assumption that keeping a human ‘in the loop’ who has to approve targeting decisions solves the problem of LAWS. This report from Israel, if accurate, shows the flaw of not considering the human-machine team as a whole. What matters is not just whether a human is involved, but how they interact with the AI system in practice—which will depend on questions of design, operator training, and testing in realistic conditions. If those questions are not thought through carefully, the results can be disastrous regardless of whether the human or machine has final say.” — Helen Toner, Director of Strategy and Foundational Research Grants
- More: Google Workers Revolt Over $1.2 Billion Israel Contract | Israel Deploys Expansive Facial Recognition Program in Gaza
- Last month, Nvidia introduced its next-generation Blackwell GPU architecture and its flagship B200 GPU, which the company touts as the “world’s most powerful chip” for AI. Blackwell, the successor to Nvidia’s current Hopper architecture, promises major performance gains across AI workloads: according to Nvidia, the B200 delivers up to a 5x improvement over the Hopper-based H100, and when paired with Nvidia’s Grace CPU, the combined package can deliver a 30x performance increase for large language model inference while cutting cost and energy consumption by a factor of 25. According to Nvidia CEO Jensen Huang, each B200 could cost as much as $40,000.
- Microsoft and OpenAI are discussing plans for an AI-focused data center project that could cost as much as $100 billion, according to a report published by The Information. Recent improvements in AI models have been driven largely by massive increases in the computing resources used to train them: OpenAI’s GPT-3 cost approximately $4.6 million to train, GPT-4 cost more than $100 million, and GPT-5 will likely cost significantly more. Microsoft has already committed tens of billions of dollars between its Wisconsin-based data center investments (which could cost as much as $10 billion) and its approximately $13 billion stake in OpenAI, but the reported data center project would dwarf both.
- The government of Saudi Arabia is planning a $40 billion fund to invest in AI, according to the New York Times. Saudi representatives have met with several Silicon Valley investors, including representatives from Andreessen Horowitz, to discuss a potential partnership. The fund would be a significant addition to an investment landscape already flush with cash, but with the costs of training skyrocketing (see above), even $40 billion could be put to use quickly. The fund could face roadblocks, however, if recent news is any indication: Anthropic rejected a Saudi attempt to invest in the company on national security grounds, according to a CNBC report.
- More: OpenAI is expected to release a ‘materially better’ GPT-5 for its chatbot mid-year, sources say | Behind the plot to break Nvidia’s grip on AI by targeting software | Meta debuts new generation of AI chip
Government Updates
White House Debuts Guidelines for Federal Government AI Use: On March 28, Vice President Harris announced new policies governing the federal government’s use of AI. The guidelines, issued by the Office of Management and Budget (OMB), were required by President Biden’s October 2023 executive order on AI and set requirements for how federal agencies should deploy the technology, including:
- By December 1, 2024, agencies are required to institute minimum practices when deploying “safety-impacting” and “rights-impacting” AI (both terms are defined in the memo). These practices include AI impact assessments, real-world testing, independent evaluation, ongoing monitoring, adequate training, and ample public notification and documentation. Agencies that cannot meet these minimum standards are required to stop using the system in question.
- Agencies must conduct annual inventories of each AI use case and publish a public version of the inventory online. Beginning with this year’s inventories, agencies must identify safety-impacting and rights-impacting AI use cases, outline the risks posed, and explain how the agency is managing the risks. Not all agencies’ AI use cases fall under this requirement — the DOD is excepted, for example — but agencies still must report some aggregate metrics about their non-covered AI use cases.
- Within 60 days of the guidance’s release, each agency must designate a Chief AI Officer to coordinate its use of AI, as initially mandated by the October 2023 executive order. Several agencies — including the Justice Department and the Department of Education — have already named CAIOs.
A Leadership Change at the Pentagon’s Lead AI Office: On Monday, Radha Plumb took over as the Department of Defense’s Chief Digital and Artificial Intelligence Officer (CDAO) from Craig Martell, who departed after leading the Pentagon’s AI office since 2022. First announced in 2021, the office of the CDAO combined several precursor organizations — including the Joint Artificial Intelligence Center — under one roof as part of the DOD’s effort to embrace AI. Martell, who came to the DOD after a long career in the private sector, had a mixed tenure as the inaugural CDAO. There were successes: in February, for example, Deputy Secretary of Defense Kathleen Hicks announced that the CDAO had helped deliver on one of its primary goals by standing up a workable “minimum viable capability” of Combined Joint All-Domain Command and Control (CJADC2), the Pentagon’s ambitious plan to coordinate and connect its services’ sensors. But the office also suffered morale problems during Martell’s tenure — according to a 2023 DefenseScoop report, CDAO employees gave the office’s leadership negative ratings on an internal survey, the only negative rating in any category across the entire DOD. Unlike Martell, Plumb comes to the CDAO position from elsewhere in the DOD, having previously served as the Deputy Under Secretary of Defense for Acquisition and Sustainment.
Commerce Announces Billions in CHIPS Act Funding: The Commerce Department has opened the spigots on CHIPS Act funding in recent weeks, announcing several preliminary funding agreements with major chipmakers, with more reported to be on the way. On March 20, the Commerce Department announced $8.5 billion in grants and as much as $11 billion in loans to Intel, which plans to spend more than $100 billion over the next five years on projects in Arizona, New Mexico, Ohio, and Oregon. Then earlier this week, Commerce announced more than $11 billion — $6.6 billion in grants and $5 billion in loans — for TSMC. As we covered last month, concerns had begun to crop up around the company’s Arizona projects, but the funding announcement seems to have done quite a bit to heal any wounds; to coincide with the announcement, the Taiwan-based chipmaker unveiled plans for a third Arizona fab, bringing the price tag of its U.S. projects to more than $65 billion. And more major funding announcements are expected soon: Reuters reported on Monday that the Biden administration will award more than $6 billion to Samsung for its Texas-based projects — a previously announced $17 billion facility near Austin and $27 billion in soon-to-be-announced additional spending — and Bloomberg reported that the Boise-based DRAM giant Micron will receive its own grants in the coming weeks.
In Translation
CSET’s translations of significant foreign language documents on AI
PRC Generative AI Standard: Basic Safety Requirements for Generative Artificial Intelligence Services. This Chinese standard for generative AI establishes very specific oversight processes that Chinese AI companies must adopt in regard to their model training data, model-generated content, and more. The standard names more than 30 specific safety risks, some of which—algorithmic bias, disclosure of personally identifiable information, copyright infringement—are widely recognized internationally. Others, such as guidelines on how to answer questions about China’s political system and Chinese history, are specific to the tightly censored Chinese internet. One notable addition to this document, relative to a preliminary draft released in October 2023, is a clause requiring a supply chain security assessment of Chinese generative AI models’ underlying hardware and software.
PRC S&T Intelligence Center Document: International Science and Technology Information Center: Introduction to the Center. This document is a translation of the “About Us” page of the website of the International Science and Technology Information Center (ITIC), a government-run open-source S&T intelligence provider in Shenzhen, a tech hub in southern China. ITIC claims to use AI models to analyze all of the major global academic and corporate S&T literature databases so as to identify emerging trends in research and technology. The center makes its tools—notably its “Sci-Brain” or “S&T Supermind” platform—and underlying data available to the entire population of Shenzhen, free of charge.
PRC State-Run Enterprise Reform Document: Notice on the Initiative to Launch Value Creation Benchmarking Against World-Class Enterprises. This document, popularly known simply as “Document 79,” was issued in 2022 by the Chinese government ministry in charge of state-owned enterprises (SOEs) and announces a campaign to improve SOEs’ “value creation.” The Wall Street Journal reported that the directive requires SOEs in strategic economic sectors to replace all of their foreign IT software by 2027, a drive known informally in China as “Delete A,” for “delete America.” The publicly available version of Document 79 translated by CSET does not mention removing foreign software, although it does emphasize the importance of SOEs improving their capacity for in-house innovation.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
Job Openings
We’re hiring! Please apply or share the role below with candidates in your network:
- Program Specialist: CSET is accepting applications for a Program Specialist to oversee its intern and student portfolios. Reporting to CSET’s Director of People and Operations, the Program Specialist will play a key role in enhancing and managing CSET’s inaugural internship program, as well as provide general administrative, organizational, and project management support to keep CSET’s operations running smoothly. Apply by Monday, May 6.
What’s New at CSET
REPORTS
- Bibliometric Analysis of China’s Non-Therapeutic Brain-Computer Interface Research by William Hannas, Huey-Meei Chang, Rishika Chauhan, Daniel Chou, John O’Callaghan, Max Riesenhuber, Vikram Venkatram, and Jennifer Wang
- An Argument for Hybrid AI Incident Reporting by Ren Bin Lee Dixon and Heather Frase
PUBLICATIONS AND PODCAST APPEARANCES
- Lawfare: For Government Use of AI, What Gets Measured Gets Managed by Matthew Burtell and Helen Toner
- In AI We Trust? Podcast: How to govern AI in the face of uncertainty? with Helen Toner
- Center for European Policy Analysis: China Bets Big on Military AI by Sam Bresnick
- Norwegian Air Power Journal: The OODA Loop and Military AI by Owen J. Daniels
- CSET: Riding the AI Wave: What’s Happening in K-12 Education? by Ali Crawford
EMERGING TECHNOLOGY OBSERVATORY
- The Emerging Technology Observatory is now on Substack! Sign up for the latest updates and analysis.
- Profiling research institutions with the Map of Science, Part 4: Chinese schools and robot fish
- The state of global AI safety research
- Explore key topics with the revamped Map of Science subject filter
- Editors’ picks from ETO Scout: Volume 9 (2/23/24-3/13/24)
EVENT RECAPS
- On April 10, CSET’s Josh A. Goldstein, Meta’s Dr. Sarah Shirazyan, the Brennan Center for Justice’s Lawrence Norden, and Dr. Jon Roozenbeek of the University of Cambridge discussed how AI could impact elections in 2024 and how policymakers, the media, AI providers, and social media companies can respond.
IN THE NEWS
- Axios: Pentagon stares down “drone swarm” threat (Noah Bressner cited the CSET report U.S. and Chinese Military AI Purchases)
- Financial Times: Why AI conspiracy videos are spamming social media (Hannah Murphy cited Josh Goldstein’s post with Renee DiResta, How Spammers and Scammers Leverage AI-Generated Images on Facebook for Audience Growth)
- Interesting Engineering: China’s SSF uses civilians for space, cyber, and psychological warfare (Christopher McFadden quoted Sam Bresnick)
- Roll Call: Focus sharpens on China’s tech (Gopal Ratnam cited the CSET report Which Ties Will Bind?)
- Semafor: Semafor Technology Newsletter: AI safety research doesn’t meet the hype (Reed Albergotti cited the Emerging Technology Observatory’s post on the state of global AI safety research)
- South China Morning Post: Strategic Support Force: China’s mission to win future wars hinges on this shadowy military branch (Amber Wang quoted Sam Bresnick)
- The Wire China: How Google’s Alleged Thief Wooed Investors (Eliot Chen quoted Ngor Luong)
What We’re Reading
Paper: Governing AI Agents, Noam Kolt (April 2024)
Report: Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, U.S. Department of the Treasury (March 2024)
Upcoming Events
- April 11: Knight-Georgetown Institute, Burning Questions: Online Deception and Generative AI
What else is going on? Suggest stories, documents to translate & upcoming events here.