Worth Knowing
Researchers Use AI to Generate Therapeutic Antibodies From Scratch: Researchers with a U.S. pharmaceutical company used AI to design novel antibodies that they say work as well as or better than those designed by humans. Antibody design is a hugely promising field — therapeutic antibodies have already been used to help treat a number of serious diseases — but designing novel antibodies is a difficult and resource-intensive process. In a preprint paper posted earlier this month to bioRxiv, researchers from the New York- and Washington-based Absci Corporation showed how a generative AI model was able to design multiple novel antibodies that bind to a target receptor, HER2, more tightly than previously known therapeutic antibodies. Before training the system, the researchers removed all antibodies known to bind the target receptor from the training data — meaning the model couldn’t simply mimic the structure of antibodies already known to work. After screening and validating the designs produced by their system, the researchers confirmed HER2 receptor binding for 421 of the designs, including three that bound more tightly than the FDA-approved therapeutic antibody trastuzumab. Encouragingly, the designs produced by the system were both highly diverse (from each other and from known antibodies) and scored highly on a “naturalness” measure that the researchers developed — making the designs, in theory, easier to develop and more likely to provoke a favorable immune response. The reported findings have yet to be peer reviewed or tested in living organisms. But, if validated, they could represent a hugely important step toward using AI to accelerate and bring down the cost of therapeutic antibody development.
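The holdout step is worth dwelling on: by stripping every known binder to the target from the training set, the team could test whether the model generalizes rather than memorizes. Here is a minimal, purely illustrative Python sketch of that idea; the sequences, names, and data format are hypothetical placeholders, not Absci's actual pipeline.

```python
# Illustrative sketch of a target-holdout filter (hypothetical data, not
# Absci's pipeline): drop every antibody annotated as binding the held-out
# target so a generative model trained on the rest cannot simply copy
# known binders.

# Hypothetical training records: (antibody_sequence, annotated_target)
training_data = [
    ("EVQLVESGGGLVQPGG...", "HER2"),
    ("QVQLQQSGAELARPGA...", "CD20"),
    ("DIQMTQSPSSLSASVG...", "HER2"),
    ("EVQLLESGGGLVQPGG...", "PD-1"),
]

HELD_OUT_TARGET = "HER2"

# Keep only sequences whose annotated target is NOT the held-out receptor.
filtered = [(seq, tgt) for seq, tgt in training_data if tgt != HELD_OUT_TARGET]

print(f"Kept {len(filtered)} of {len(training_data)} sequences for training")
# A generative model would then be trained on `filtered` alone, and its
# proposed designs screened in the lab for binding to the held-out target.
```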
Getty Sues the Company Behind a Popular Open Source AI Art Generator: In what could prove to be a landmark case for how AI systems are trained, Getty Images announced it is suing Stability AI — the maker of the popular open source AI art generator Stable Diffusion — for copyright infringement. It’s no secret that Stability AI churned through millions of stock images to train its model — the company used an open source dataset comprising billions of images scraped from the internet, including many from stock image companies like Getty. But, according to Getty, that unlicensed scraping and processing was unlawful. Despite the surging popularity of generative AI systems, the legality of how they are trained remains an open question, as The Verge’s James Vincent has explored at length. Getty’s case is the latest to pit copyright holders and content creators against the companies behind such systems; Microsoft and OpenAI, for example, are embroiled in a lawsuit over how the computer code-generating tool Copilot was trained. Once decided, the suits could help clear up that legal uncertainty and set important guardrails. A favorable ruling for Getty wouldn’t mean the end of generative AI systems — OpenAI, for example, has a formal partnership with the stock image company Shutterstock — but it could lead to significant changes in the way training data is sourced.
- More: AI Art Generators Hit With Copyright Suit Over Artists’ Images | Why are Getty and Shutterstock on opposite sides of the AI legal debate?
Government Updates
The DOD Updates Its Autonomous Weapons Policy: Yesterday afternoon, the Pentagon released updated guidance on autonomy in weapons systems. Although the update (the previous version, originally issued in 2012 and updated in 2017, is available here) does not dramatically change the Pentagon’s policy on the development of such systems, DOD Emerging Capabilities Policy Director Dr. Michael Horowitz told reporters that it reflects the “dramatic, expanded vision” of AI’s future role. For starters, the new directive explicitly mentions AI, something the old guidance did not. The linguistic change reflects evolving technological realities — the deep learning revolution was just kicking off when the original guidance was issued — as well as significant developments within the DOD. In recent years, the Pentagon has adopted policies — such as 2020’s Ethical Principles for Artificial Intelligence and 2021’s Responsible Artificial Intelligence Strategy and Implementation Pathway — and stood up organizations (namely the Chief Digital and Artificial Intelligence Office) to guide the development of AI within the military, something the new directive reflects. Updating and clarifying processes appears to have been a key consideration behind the new guidance. As Defense One’s Patrick Tucker noted, there has been a great deal of confusion — even at senior levels within the military — about the directive and the review process for developing autonomous or semi-autonomous weapons systems. It’s too early to say whether the new guidance — including a flow chart “to help determine if senior review and approval is required” — will clear up that confusion, though the new version, along with other AI-related guidance from senior DOD leaders, does provide more detail than its predecessors.
Task Force Releases Plan for $2.6B National AI Research Resource: On Tuesday, the National AI Research Resource task force released its final report, which details a plan for standing up the NAIRR. The task force, which includes a dozen technical experts from government, private companies and academia, was established by the National AI Initiative Act of 2020 (passed as part of the FY2021 NDAA) to explore the feasibility of establishing “a shared research infrastructure” capable of providing computing power, open government and non-government datasets, and educational tools to students and AI researchers. The final report builds on recommendations from the interim version published last year (see our coverage here) and adds more specificity about how the NAIRR should be stood up, what resources it should offer, and how much it is likely to cost. The report proposes a four-phase roadmap that would have the NAIRR operational within two years (it also recommends a parallel pilot program that would make existing resources available to AI researchers sooner). It outlines the compute resources the NAIRR should possess: 48–60 million hours on quad-GPU nodes at initial operating capacity, rising to 140–180 million hours once the project reaches full operating capacity. To reach these goals, the task force estimates that the NAIRR will need $2.6 billion over an initial six-year period. The report is only a roadmap — ultimately, it will be up to Congress to fund the creation of the NAIRR. The 117th Congress showed an affinity for big STEM projects — look no further than last year’s CHIPS and Science Act — but it remains to be seen whether the 118th will have the same appetite.
The Air Force Partners With Howard University on Tactical Autonomy: On Monday, the Pentagon announced it had awarded Washington D.C.’s Howard University a five-year, $90 million contract to establish a new university-affiliated research center to study tactical autonomy. The announcement is doubly historic — the new UARC is both the first to be sponsored by the Air Force and the first at a historically Black college or university. The new center — which Howard will lead in a consortium with eight other HBCUs — brings the total number of UARCs around the country to 15, with each set up to research a specific subject. In addition to the center’s research on trust in autonomous systems, collaboration between platforms, and human-machine teaming, the Pentagon says it hopes the new UARC will broaden the pipeline of graduates with autonomy-related skills. Meanwhile, Howard’s president said the funding would help the university reach its goal of becoming an R1 research institution (which would make it the first R1 HBCU). After the initial five-year contract is up, the UARC will be eligible for five additional option years at $12 million per year.
The Justice Department Files Second Antitrust Lawsuit Against Google: On Tuesday, the Justice Department and attorneys general from eight states filed an antitrust lawsuit against Google, alleging the company has built an illegal monopoly over the digital advertising business. It is the second active federal case against the tech giant: In 2020, the DOJ filed a suit against the company over its search engine business practices — a case that is expected to go to trial later this year. If successful, the new case would deal a massive blow to the company: Google would be forced to sell off its immensely profitable ad business, which pulled in more than $31 billion in 2021 and was closing in on $25 billion through the first three quarters of last year. That profitability helped Google (and its parent company, Alphabet) funnel billions of dollars into longer-term R&D projects, including AI research. Google has apparently come to see AI as more than a moonshot — as we covered earlier this month, the company has ramped up its AI development in light of the perceived threat posed by OpenAI’s ChatGPT. And while Google’s AI team was largely spared during this month’s round of high-profile layoffs, it seems unlikely it would survive unscathed were the company forced to sell off the golden goose that is its advertising business.
The EEOC Aims to Scrutinize AI Hiring Systems: The Equal Employment Opportunity Commission will scrutinize the use of AI-based systems in recruitment and hiring, according to the agency’s Draft Strategic Enforcement Plan. The draft SEP, released earlier this month, outlines the EEOC’s subject matter and enforcement priorities through FY2027, and is the first such plan to mention AI and machine learning. As part of its goal of “eliminating barriers in recruitment and hiring,” the EEOC will focus on AI systems used to recruit applicants, target job advertisements, screen applicants, and make (or assist in) hiring decisions. While AI didn’t make it into previous SEPs, the EEOC has nevertheless shown an increased interest in the subject over the last few years: in 2021, the agency launched an Initiative on Artificial Intelligence and Algorithmic Fairness, and, last year, it issued technical guidance outlining how the use of AI hiring tools could violate equal employment laws, such as the Americans with Disabilities Act. AI hiring tools will also be on the docket when the commission next convenes — on January 31, the EEOC will host a public meeting on “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier.” The draft plan is open for public comment until February 9, after which the EEOC will vote on a final version.
In Translation
CSET’s translations of significant foreign-language documents on AI
PRC Big Data Plan: Opinions of the CCP Central Committee and the State Council on Constructing a Basic System for Data and Putting Data Factors of Production to Better Use. This document describes, in broad strokes, the Chinese Communist Party’s guidelines for how “big data” can be used to spur economic development. It emphasizes data sharing, but also calls for restrictions on the sharing of classified and personally identifiable information. The document also urges the breaking up of “data monopolies” and warns that China will reciprocate if subjected to data export controls by foreign countries.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
Job Openings
We’re hiring! Please apply or share the roles below with candidates in your network:
- People Operations Specialist: We are currently seeking a People Operations Specialist to play a key role in helping to build and develop the CSET team, with a particular focus on furthering our diversity, equity and inclusion initiatives. This Specialist will provide administrative, organizational and project management support to ensure that our people-focused operations run smoothly. Applications due by January 30.
- External Affairs Specialist: We are looking for an External Affairs Specialist to assist with our externally facing activities and communications, with a particular emphasis on media outreach. This Specialist will take part in team efforts to highlight CSET’s work through a combination of internally and externally facing activities. Applications due by February 6.
What’s New at CSET
REPORTS
- Betting the House: Leveraging the CHIPS and Science Act to Increase U.S. Microelectronics Supply Chain Resilience by John VerWey
- CSET: Comment to the National Biotechnology and Biomanufacturing Initiative by Caroline Schuerger, Stephanie Batalis and Vikram Venkatram
- ChinaTalk Podcast: Chips Act: A How To Guide with Jacob Feldgoise
- Issues in Science and Technology: A New Role for Policy Analysts by Executive Director Dewey Murdick
- On January 18, Murdick led a discussion with Karine Perset and Audrey Plonk of the Organisation for Economic Co-operation and Development on multinational efforts to address challenges posed by AI.
- The Wire China: Katrina Northrop reached out to Director of Biotechnology Programs and Senior Fellow Anna Puglisi to discuss the complexities of preventing illicit technology transfer to China by private citizens.
- Grid: CSET’s Micah Musser and Josh Goldstein spoke to Benjamin Powers about generative AI and its potential use in influence operations, drawing on the findings of their recent report with colleagues from OpenAI and the Stanford Internet Observatory, Forecasting Potential Misuses of Language Models for Disinformation Campaigns — and How to Reduce Risk.
- Bloomberg: Katrina Manson covered the findings of Goldstein and Musser’s report for Bloomberg.
- VentureBeat: The report earned the attention of Ben Dickson, who reached out to Goldstein to discuss its conclusions.
- Platformer: Tech journalist Casey Newton touched on the findings of Goldstein and Musser’s report in his Substack newsletter.
- Interesting Engineering: Ameya Paleja cited the report in an article about Russian hackers’ use of ChatGPT.
- Medium: Rounding out the coverage of Goldstein and Musser’s collaborative paper with Stanford University and OpenAI, Waleed Rikab linked to the report in a post about generative AI and misinformation.
- GCN: Following Research Analyst Jack Corrigan’s appearance in a GCN webinar, Chris Teale recapped his comments and the findings of Corrigan’s October report with Sergio Fontanez and Michael Kratsios, Banned in D.C.: Examining Government Approaches to Foreign Technology Threats.
- Forbes: Shalin Jyotishi cited Diana Gehlhaus and Santiago Mutis’ 2021 brief, The U.S. AI Workforce: Understanding the Supply of AI Talent, for a piece about Amazon’s plan to help community colleges and HBCUs learn and teach about AI.
- Brookings Institution: In their recent Brookings report, Cameron F. Kerry, Joshua P. Meltzer and Matt Sheehan made use of the Emerging Technology Observatory’s Country Activity Tracker, leaning on its data for 10 of the report’s figures.
What We’re Reading
Opinion: I’m a Congressman Who Codes. AI Freaks Me Out. Rep. Ted Lieu, The New York Times (January 2023)
Article: The Return of Export Controls, Chad P. Bown, Foreign Affairs (January 2023)
Paper: Bridging Systems: Open Problems for Countering Destructive Divisiveness across Ranking, Recommenders, and Governance, Aviv Ovadya and Luke Thorburn (January 2023)
Upcoming Events
- February 16: CSET Event, Turn off the Tap?: Assessing U.S. Investment and Support for Chinese AI Companies, featuring CSET’s Emily Weinstein and Ngor Luong, with Emily Kilcrease of CNAS
What else is going on? Suggest stories, documents to translate & upcoming events here.