Researchers Use AI to Generate Therapeutic Antibodies From Scratch: Researchers with a U.S. pharmaceutical company used AI to design novel antibodies that they say work as well as or better than those designed by humans. Antibody design is a hugely promising field — therapeutic antibodies have already been used to help treat a number of serious diseases — but designing novel antibodies is a difficult and resource-intensive process. In a preprint paper posted earlier this month to bioRxiv, researchers from the New York and Washington-based Absci Corporation showed how a generative AI model was able to design multiple novel antibodies that bind to a target receptor, HER2, more tightly than previously known therapeutic antibodies. In training the system, the researchers removed all data on antibodies known to bind to the target receptor from the training data — meaning the system couldn’t simply mimic the structure of antibodies already known to work. After screening and validating the designs produced by their system, the researchers confirmed HER2 receptor binding for 421 of the designs, including three that bound more tightly than the FDA-approved therapeutic antibody trastuzumab. Encouragingly, the designs produced by the system were both highly diverse (from each other and from known antibodies) and achieved a high score on a “naturalness” measure that the researchers developed — making the designs, in theory, easier to develop and less likely to provoke an adverse immune response. The reported findings have yet to be peer reviewed, and the antibodies have not been tested in living organisms. But, if validated, they could represent a hugely important step toward using AI to accelerate and bring down the cost of therapeutic antibody development.
Getty Sues the Company Behind a Popular Open Source AI Art Generator: In what could prove to be a landmark case for how AI systems are trained, Getty Images announced it is suing Stability AI — the maker of the popular open source AI art generator Stable Diffusion — for copyright infringement. It’s no secret that Stability AI churned through millions of stock images to train its model — the company used an open source dataset comprising billions of images scraped from the internet, including many from stock image companies like Getty. But, according to Getty, that unlicensed scraping and processing was unlawful. Despite their surging popularity, the way generative AI systems are trained remains legally questionable, as The Verge’s James Vincent has explored at length. Getty’s case is the latest to pit copyright holders and content creators against the companies behind such systems; Microsoft and OpenAI, for example, are embroiled in a lawsuit over how the computer code-generating tool Copilot was trained. Once decided, the suits could help clear up any legal uncertainty and set important guardrails. A favorable ruling for Getty wouldn’t mean the end of generative AI systems — OpenAI, for example, has a formal partnership with the stock image company Shutterstock — but it could lead to significant changes in the way training data is sourced.
Tesla Is Under Fire for Its 2016 Video Showcase: In October 2016, Tesla released a video showing what appeared to be an impressive feat: one of its Model X vehicles driving autonomously on public roads. The four-minute video — which showed the Tesla stopping at stop signs and traffic lights, merging onto the highway and switching lanes — began with the message, “The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.” Recent testimony from a senior Tesla employee and company emails obtained by Bloomberg cast the video in a much less spectacular light. While it appears to depict a continuous, seamless demo, Bloomberg reports that this wasn’t always the case — in emails to staff, Tesla CEO Elon Musk criticized earlier versions of the video for containing too many edits and said the final product “needs to feel like one continuous take.” It’s not clear how much, if any, footage was cut out of the final product, nor if the human driver had to take control during the drive that was used to make the video. But in a recent deposition, Tesla’s director of Autopilot software said human drivers had to take control multiple times during test runs, including when the car crashed into a fence while trying to park. Tesla has been in hot water with regulators for some time — in addition to a U.S. National Highway Traffic Safety Administration investigation that began in 2021, Reuters reported last October that the Justice Department had launched a criminal investigation into whether the company misled regulators and the public with its claims about its cars’ capabilities. Tesla’s competitors are likely to watch the proceedings closely — the carmaker’s legal troubles could influence how other companies develop, deploy and market their own AI systems, particularly if criminal liability comes into play.
The DOD Updates Its Autonomous Weapons Policy: Yesterday afternoon, the Pentagon released updated guidance on autonomy in weapons systems. Although the update (the previous version, originally issued in 2012 and updated in 2017, is available here) does not dramatically change the Pentagon’s policy on the development of such systems, it reflects what the DOD’s Emerging Capabilities Policy Director Dr. Michael Horowitz described to reporters as a “dramatic, expanded vision” of AI’s future role. For starters, the new directive explicitly mentions AI, something the old guidance did not. The linguistic change reflects evolving technological realities — the deep learning revolution was just kicking off when the original guidance was issued — as well as significant developments within the DOD. In recent years, the Pentagon has adopted policies — such as 2020’s Ethical Principles for Artificial Intelligence and 2021’s Responsible Artificial Intelligence Strategy and Implementation Pathway — and stood up organizations (namely the Chief Digital and Artificial Intelligence Office) to guide the process for developing AI within the military, something the new directive reflects. Updating and clarifying processes appears to have been a key consideration behind the new guidance. As Defense One’s Patrick Tucker noted, there has been a great deal of confusion — even at senior levels within the military — about the directive and the review process for developing autonomous or semi-autonomous weapons systems. It’s too early to say whether the new guidance — including a flow chart “to help determine if senior review and approval is required” — will clear up that confusion, though the new directive and other recent AI-related guidance from senior DOD leaders do offer considerably more detail than their predecessors.
Task Force Releases Plan for $2.6B National AI Research Resource: On Tuesday, the National AI Research Resource task force released its final report, which details a plan for standing up a NAIRR. The task force, which includes a dozen technical experts from government, private companies and academia, was established by the National AI Initiative Act of 2020 (passed as part of the FY2021 NDAA) in order to explore the feasibility of establishing “a shared research infrastructure” capable of providing computing power, open government and non-government datasets, and educational tools to students and AI researchers. The final report builds on recommendations from the interim version published last year (see our coverage here) and adds more specificity about how the NAIRR should be stood up, what its resources should be, and how much it is likely to cost. The report proposes a four-phase roadmap that would have the NAIRR operational within two years (the report also includes a recommendation for a parallel pilot program that would make existing resources available to AI researchers at an earlier date). It outlines the compute resources the NAIRR should possess: a goal of 48–60 million hours on quad-GPU nodes at its initial operational capacity and 140–180 million hours on quad-GPU nodes once the project reaches its full operating capacity. To reach these goals, the task force estimates that the NAIRR needs $2.6 billion over an initial six-year period. The report is only a roadmap — ultimately, it will be up to Congress to fund the creation of a NAIRR. The 117th Congress showed an affinity for big STEM projects — look no further than last year’s CHIPS and Science Act — but it remains to be seen whether the 118th will have the same appetite.
The Air Force Partners With Howard University on Tactical Autonomy: On Monday, the Pentagon announced it had awarded Washington D.C.’s Howard University a five-year, $90 million contract to establish a new university-affiliated research center to study tactical autonomy. The announcement is doubly historic — the new UARC is both the first to be sponsored by the Air Force and the first at a historically Black college or university. The new center — which Howard will lead in consortium with eight other HBCUs — brings the total number of UARCs around the country to 15, with each set up to research a specific subject. In addition to the center’s research on trust in autonomous systems, collaboration between platforms, and human-machine teaming, the Pentagon also says it hopes the new UARC will broaden the pipeline of graduates with autonomy-related skills. Meanwhile, Howard’s president said the funding would help the university reach its goal of becoming an R1 research institution (making it the first R1 HBCU). After the five-year contract is up, the UARC will be eligible for five additional option years at $12 million per year.
The Justice Department Files Second Antitrust Lawsuit Against Google: On Tuesday, the Justice Department and attorneys general from eight states filed an antitrust lawsuit against Google, alleging the company has built an illegal monopoly over the digital advertising business. It is the second active federal case against the tech giant: In 2020, the DOJ filed a suit against the company over its search engine business practices — a case that is expected to go to trial later this year. If successful, the new case would deal a massive blow to the company: Google would be forced to sell off its immensely profitable ad business. The company’s advertising division pulled in more than $31 billion in 2021 and was closing in on $25 billion through three quarters last year. That profitability helped Google (and its parent company, Alphabet) funnel billions of dollars into longer-term R&D projects, including AI research. Google has apparently come to see AI as more than a moonshot — as we covered earlier this month, the company has ramped up its AI development in light of the perceived threat posed by OpenAI’s ChatGPT. And while Google’s AI team was largely spared during this month’s round of high-profile layoffs, it seems unlikely that the department would survive unscathed were the company forced to sell off the golden goose that is its advertising division.
The EEOC Aims to Scrutinize AI Hiring Systems: The Equal Employment Opportunity Commission will scrutinize the use of AI-based systems in recruitment and hiring, according to the agency’s Draft Strategic Enforcement Plan. The draft SEP, released earlier this month, outlines the EEOC’s subject matter and enforcement priorities through FY2027, and is the first such plan to mention AI and machine learning. As part of its goal of “eliminating barriers in recruitment and hiring,” the EEOC will focus on AI systems used to recruit applicants, target job advertisements, conduct applicant screening, and make (or assist in) hiring decisions. While AI didn’t make it into previous SEPs, the EEOC has nevertheless shown an increased interest in the subject over the last few years: in 2021, the agency launched an Initiative on Artificial Intelligence and Algorithmic Fairness, and, last year, it issued technical guidance outlining how using AI hiring tools could violate equal employment laws, such as the Americans with Disabilities Act. AI hiring tools will also be on the docket when the commission next convenes — on January 31, the EEOC will host a public meeting on “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier.” The draft report is open for public comment until February 9, after which the EEOC will vote on a final draft.
In Translation: CSET’s translations of significant foreign language documents on AI
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
We’re hiring! Please apply or share the roles below with candidates in your network:
People Operations Specialist: We are currently seeking a People Operations Specialist to play a key role in helping to build and develop the CSET team, with a particular focus on furthering our diversity, equity and inclusion initiatives. This Specialist will provide administrative, organizational and project management support to ensure that our people-focused operations run smoothly. Applications due by January 30.
External Affairs Specialist: We are looking for an External Affairs Specialist to assist with our externally facing activities and communications, with a particular emphasis on media outreach. This Specialist will take part in team efforts to highlight CSET’s work through a combination of internally and externally facing activities. Applications due by February 6.
Please bookmark our careers page to stay up to date on all active job postings. You can also subscribe to receive job announcements by updating your subscription preferences in the footer of this email.
On January 18, Murdick led a discussion with Karine Perset and Audrey Plonk of the Organisation for Economic Co-operation and Development on multinational efforts to address challenges posed by AI.
IN THE NEWS
The Wire China: Katrina Northrop reached out to Director of Biotechnology Programs and Senior Fellow Anna Puglisi to discuss the complexities of preventing illicit technology transfer to China by private citizens.
Brookings Institution: In their recent Brookings report, Cameron F. Kerry, Joshua P. Meltzer and Matt Sheehan made use of the Emerging Technology Observatory’s Country Activity Tracker, leaning on its data for 10 of the report’s figures.