Work at CSET
We’re hiring! If you’re interested in joining our team, check out the positions in the “Job Openings” section below or consult our careers page.
Worth Knowing
Privacy Experts Sound Alarm on Data Practices after Supreme Court Ruling: When the U.S. Supreme Court overturned Roe v. Wade last week, digital privacy experts warned that personal data could be used to prosecute individuals who seek abortions and those who help them, and urged companies to take steps to protect their users’ data. Tech companies have built vast troves of data about their U.S. users, enabling them to target ads, train powerful AI systems, and track some of the most intimate details of their users’ private lives. Now experts say that information — including location data, text messages, search histories, and even data from period-tracking and pregnancy apps — could be subpoenaed by prosecutors in states where abortion is illegal or severely restricted (as happened in Mississippi in 2018). Law enforcement has made increasing use of tech companies’ data in recent years — both by subpoenaing companies and by purchasing access through data brokers — but the Supreme Court’s ruling could set off an unprecedented wave of requests and warrants as prosecutors try to gather evidence of abortions. While federal legislation could help shield users’ data, observers are skeptical that a post-Roe data privacy bill (such as the recently introduced My Body, My Data Act) would generate enough support to become law, leading advocates to focus their efforts on the companies that collect and manage the data. So far, though, none of the big tech companies has shared how it plans to handle such requests or what changes, if any, it plans to make to its data collection practices.
- More: After Roe, concerns mount over AI-enabled surveillance | Abortion is illegal for millions. Will Big Tech help prosecute it?
Government Updates
The Pentagon Unveils Its Responsible AI Strategy: Last week, Deputy Secretary of Defense Kathleen Hicks signed the Responsible Artificial Intelligence Strategy and Implementation Pathway, which lays out the Pentagon’s plans for implementing “responsible AI” (RAI). In 2020, the DOD introduced five ethical principles to guide its AI efforts — that they be “Responsible,” “Equitable,” “Traceable,” “Reliable” and “Governable.” After President Biden took office last year, the DOD’s new leadership reaffirmed its commitment to the principles and outlined six “foundational tenets” to guide RAI implementation. The new 47-page strategy goes into much greater detail about exactly how the Pentagon plans to uphold the principles while keeping AI development moving at “the speed necessary to meet the National Defense Strategy.” It lays out specific goals and designates multiple lines of effort (naming offices of responsibility) for each of the six foundational tenets, making clear both who is responsible and what results are expected. It also describes the role the DOD’s new lead AI entity, the Chief Digital and Artificial Intelligence Office, will play in leading the process. The new strategy will probably not be the last we hear from the Pentagon on RAI — the document notes that updates should be expected as AI research advances and the DOD’s structure changes.
NATO’s New Strategy Reflects Big Changes Since 2010: At its annual summit this week, leaders of the NATO member countries agreed to a new Strategic Concept, a key document that sets the alliance’s priorities and guides its political and military strategy. This year’s Madrid summit has been particularly eventful — on Wednesday, the alliance also announced it had formally invited Finland and Sweden to join — but the importance of the new Strategic Concept shouldn’t be overlooked. The document is updated roughly once a decade — the previous version was adopted at the Lisbon summit back in 2010. The new document reflects the changing geopolitical and technological landscape, as well as the vision articulated in the alliance’s NATO 2030 agenda, unveiled last year. The challenges posed by Russia and China receive far more attention than in the 2010 version, which did not even mention China. The new plan is also more focused on technological change than its predecessor, warning that “Emerging and disruptive technologies … are altering the character of conflict, acquiring greater strategic importance and becoming key arenas of global competition.” While it does not mention AI specifically, the document includes pledges to promote emerging technology innovation through increased investment, to cooperate on technological adoption and integration (to that end, NATO leaders are expected to announce a “Defence Innovation Accelerator for the North Atlantic” soon), to build capacity to overcome supply shocks, and to work with partners to shape standards.
FY2023 NDAA Moving Through Congress: In recent weeks, both the Senate and House Armed Services Committees approved their versions of the fiscal year 2023 National Defense Authorization Act. Both committees approved overall increases relative to the President’s budget request, with SASC authorizing an additional $45 billion and HASC adding $37 billion. While the final texts have not yet been released publicly, committee summaries shed some light on provisions relevant to AI and emerging technology. The Senate’s version (approved by a vote of 23 to 3) would authorize an additional $70 million for “national network for microelectronics research and development activities” and an additional $75 million for DARPA to implement the recommendations of the National Security Commission on Artificial Intelligence. An earlier summary of HASC’s version (approved by a vote of 57 to 1) includes a mandated review of dual-use technologies that the Chinese Communist Party might exploit, as well as provisions to bolster diversity in the STEM talent pipeline. HASC also adopted amendments that would establish a pilot program to incorporate defense-critical semiconductors into the National Defense Stockpile and require a briefing on integrating “commercial artificial intelligence … into deployed and next-generation tactical network programs.”
Pressure Building to Pass Competitiveness Bill (and Its Big Chip Subsidies): Congressional leaders met last week to negotiate a compromise on legislation that would unlock $52 billion in semiconductor funding and bolster U.S. tech competitiveness. The Senate passed the U.S. Innovation and Competition Act last summer, while the House passed its own competitiveness bill — the America COMPETES Act — earlier this year. Despite relatively strong bipartisan support (the Senate bill passed with 68 votes), the legislation has stalled as Congress has been slow to reconcile the two bills’ differences. But pressure and frustration have begun to mount with the midterms fast approaching and the reconciliation process producing few results. Intel announced last week that it was delaying the groundbreaking ceremony for a $20 billion Ohio semiconductor fab “due in part to uncertainty” around the legislation and its subsidies. High-ranking federal and state officials — including Deputy Secretary of Defense Kathleen Hicks, Indiana Governor Eric Holcomb and Michigan Governor Gretchen Whitmer — have urged Congress to move forward with the legislation. Reports indicate that a stripped-down bill could come up for a vote before Congress’ August recess, but that is far from certain — Republicans may prefer to wait until after the midterms to move forward.
Air Force Looks to Work with HBCUs on Tactical Autonomy Research: On Monday, the Air Force announced plans to create a university-affiliated research center (UARC) with one of the country’s Historically Black Colleges and Universities (HBCUs). The DOD currently has 14 UARCs at universities around the country, each set up to research a specific subject related to the Pentagon’s needs. The new UARC — the Air Force’s first — will research tactical autonomy, focusing specifically on trust, collaboration between platforms and human-machine teaming. According to the Air Force, it has several goals in setting up the center. In addition to advancing research on tactical autonomy, the Air Force hopes to grow the pool of available talent by working with a group of STEM researchers — HBCU students and graduates — who have been underrepresented in receiving DOD research funding. The program will run for five years (with five additional option years) at $12 million per year. Of the 107 HBCUs in the United States, 11 hold the “R2” research activity classification required to respond to the solicitation.
In Translation
CSET’s translations of significant foreign language documents on AI
PRC Scientific Research Organization Budget: Chinese Academy of Sciences 2022 Budget. This document is a translation of the 2022 budget for the Chinese Academy of Sciences, a huge PRC government-run complex of scientific research institutes.
PRC Research Oversight Organization Budget: National Natural Science Foundation of China 2022 Annual Budget. This document is a translation of the 2022 budget for the National Natural Science Foundation of China, a government-run body that oversees and audits Chinese scientific research funds, principally those that support basic research.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
Job Openings
We’re hiring! Please apply or share the roles below with candidates in your network:
- Executive Coordinator: The Executive Coordinator will provide critical executive, logistical, and project management support to the CSET Operations and Leadership Teams with limited supervision and high levels of autonomy. Apply by July 1.
- Research Fellow — Standards and Testing: The Research Fellow will focus on standards, testing, evaluation, safety and national security issues associated with AI systems. To do this, they will examine how the limitations, risks, and societal and security impacts of AI can be understood and managed. Apply by July 15.
- Research Fellow — AI Applications: The Research Fellow will focus on helping decision makers evaluate and translate new and emerging technologies, particularly in the field of AI, into novel capabilities by separating real trends and strategic opportunities from technological hope and hype. Apply by July 15.
What’s New at CSET
REPORTS
- Sustaining and Growing the U.S. Semiconductor Advantage: A Primer by Owen Daniels and Will Hunt
- Silicon Twist: Managing the Chinese Military’s Access to AI Chips by Ryan Fedasiuk, Karson Elmgren and Ellen Lu
- CSET: Data Snapshot: Previewing PARAT 2.0 by Autumn Toney
- CSET: Revisiting China’s Security Forces’ AI Research Output by Daniel Chou
- Brookings: Geopolitical implications of AI and digital surveillance adoption by Dahlia Peterson and Samantha Hoffman
- The National Bureau of Asian Research: From Maritime Quad to Tech Quintet: Imagining South Korea’s Role in the Indo-Pacific Security Architecture by Ryan Fedasiuk and Elliot Silverberg
- House Homeland Security Committee: On June 22, CSET Senior Fellow Andrew Lohn testified before the Cybersecurity, Infrastructure Protection, and Innovation Subcommittee at a hearing on “Securing the Future: Harnessing the Potential of Emerging Technologies while Mitigating Security Risks.” Lohn discussed AI’s capabilities in defensive and offensive cyber operations, as well as its potential to enhance disinformation campaigns. Read his testimony or watch it here.
- On June 23, the CSET webinar Connecting the Quad: Increasing AI Collaboration between the United States, Australia, India and Japan featured a discussion between CNAS Senior Fellow and Director of the Technology and National Security Program Martijn Rasser and CSET Research Analysts Husanjot Chahal and Ngor Luong about opportunities for greater AI collaboration among Quad partners, drawing on their recent brief, Quad AI: Assessing AI-Related Collaboration between the United States, Australia, India, and Japan.
- The Wire China: CSET Translation Manager Ben Murphy’s recent issue brief, Chokepoints: China’s Self-Identified Strategic Technology Import Dependencies, got a writeup from Eliot Chen in an article earlier this week.
- Protocol: An Anna Kramer and Max A. Cherney article about U.S. chip companies’ hiring woes cited Owen Daniels and Will Hunt’s new policy brief Sustaining and Growing the U.S. Semiconductor Advantage: A Primer.
- MeriTalk: Jose Rascon recapped Andrew Lohn’s congressional testimony for MeriTalk.
What We’re Reading
Report: The European Union AI Act: Next steps and issues for building international cooperation in AI, Joshua P. Meltzer and Aaron Tielemans, Brookings (June 2022)
Book: Cyber Persistence Theory: Redefining National Security in Cyberspace, Michael P. Fischerkeller, Emily O. Goldman and Richard J. Harknett (May 2022)
Article: The Fallacy of AI Functionality, Inioluwa Deborah Raji, I. Elizabeth Kumar, Aaron Horowitz and Andrew D. Selbst (June 2022)