Worth Knowing
Advances in Text-to-Video: Last week, Meta’s AI lab announced a new system that can generate short videos from text prompts. While the “Make-A-Video” system is not yet publicly available, Meta has posted a handful of clips it has generated. Like the many text-to-image generators that have surged in popularity in recent months, Make-A-Video’s outputs appear both intriguing and bizarre. According to an accompanying paper published by Meta AI researchers, the system combines a text-to-image model (like those that power DALL-E and Stable Diffusion) that generates static images with additional layers that turn those still images into moving clips (a toy sketch of this two-stage approach appears below). Training the motion-generating parts of the system on unlabeled video data using unsupervised learning — which the researchers say was enough to give it a realistic sense of motion — helped overcome a significant barrier to text-to-video generation: the lack of high-quality text-video data. Meta’s system is not the only text-to-video tool — Chinese researchers debuted a text-to-video system earlier this year, and Google announced its own “Imagen Video” tool yesterday — but observers say its sophistication is impressive. And with the rapid progress of text-to-image generators from proof-of-concept to veritable free-for-all, it’s worth wondering how far away at-home text-to-video generators could be. Meta has said it plans to open a version of the tool to the public, but has not yet set a date for the release. Google, meanwhile, has said it doesn’t plan to publicly release its Imagen Video system until concerns over its potential to generate “fake, hateful, explicit or harmful content” are fully addressed.
- More: AI Data Laundering: How Academic and Nonprofit Researchers Shield Tech Companies from Accountability
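For readers curious what that two-stage design looks like in practice, here is a minimal, purely illustrative sketch in Python/PyTorch: a stand-in text-to-image module produces a still frame, and added temporal layers learn from unlabeled video clips how to animate it. Every module, dimension, and training objective below is an assumption made for illustration; none of it is Meta’s published architecture or code.

```python
# Toy sketch (not Meta's actual Make-A-Video code): illustrates the general
# "text-to-image backbone plus temporal layers" idea described above. All module
# names, sizes, and the self-supervised objective are assumptions for illustration.
import torch
import torch.nn as nn

class TinyTextToImage(nn.Module):
    """Stand-in for a pretrained text-to-image generator (not trained here)."""
    def __init__(self, text_dim=64, img_ch=3, img_size=32):
        super().__init__()
        self.img_size = img_size
        self.proj = nn.Linear(text_dim, img_ch * img_size * img_size)

    def forward(self, text_emb):  # (B, text_dim) -> (B, 3, H, W)
        x = self.proj(text_emb)
        return x.view(-1, 3, self.img_size, self.img_size).tanh()

class TemporalLayers(nn.Module):
    """Added layers that turn a single still image into T frames of motion."""
    def __init__(self, img_ch=3, num_frames=8):
        super().__init__()
        self.num_frames = num_frames
        # 3D convolutions mix information across time as well as space.
        self.net = nn.Sequential(
            nn.Conv3d(img_ch, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, img_ch, kernel_size=3, padding=1),
        )

    def forward(self, still):  # (B, 3, H, W) -> (B, T, 3, H, W)
        video = still.unsqueeze(2).repeat(1, 1, self.num_frames, 1, 1)  # tile the still over time
        video = video + self.net(video)  # learn residual "motion" on top of the still
        return video.permute(0, 2, 1, 3, 4)

# Self-supervised training signal from unlabeled video: predict a clip's later
# frames from its first frame, so no text-video pairs are needed.
text_to_image = TinyTextToImage()
temporal = TemporalLayers()
optimizer = torch.optim.Adam(temporal.parameters(), lr=1e-3)

unlabeled_clip = torch.rand(4, 8, 3, 32, 32)  # (batch, frames, C, H, W) dummy videos
predicted = temporal(unlabeled_clip[:, 0])    # animate each clip's first frame
loss = nn.functional.mse_loss(predicted, unlabeled_clip)
loss.backward()
optimizer.step()

# At inference time, the frozen text-to-image model supplies the still frame.
with torch.no_grad():
    prompt_embedding = torch.rand(1, 64)              # placeholder for a real text encoder
    clip = temporal(text_to_image(prompt_embedding))  # (1, T, 3, H, W) short "video"
print(clip.shape)
```

The point the sketch tries to capture is the one described above: the temporal layers never need captioned video, since they only learn how frames change over time, while the text conditioning comes entirely from the frozen image model.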
The EU Moves to Make It Easier to Sue Over AI Harms: Last week, the European Commission proposed an AI Liability Directive aimed at making it easier for people harmed by AI systems to bring claims against the developers and providers of those systems. Two provisions stand out:
- A “presumption of causality” that non-compliance by an AI developer or provider led to harm. Under current liability rules, claimants must demonstrate a direct causal link between non-compliance and a specific harm. The complexity of AI systems can make drawing a clear, direct line prohibitively difficult. The proposed directive would place the burden of proof on the defendant to show that their non-compliance was not the cause of the harm.
- A “right of access to evidence” that grants claimants the right to access “necessary and proportionate” information about high-risk AI systems (as designated by the proposed AI Act) that may have caused harm.
- More: The EU Wants to Put Companies on the Hook for Harmful AI | Europe Edges Closer to a Ban on Facial Recognition
Government Updates
The White House Releases a “Blueprint for an AI Bill of Rights”: On Tuesday, the Office of Science and Technology Policy (OSTP) released a “Blueprint for an AI Bill of Rights” — a non-binding framework that lays out five principles meant to help guide AI development, deployment and use, and to shape public policy. Those principles are:
- “You should be protected from unsafe or ineffective systems” (Safe and Effective Systems).
- “You should not face discrimination by algorithms and systems should be used and designed in an equitable way” (Algorithmic Discrimination Protections).
- “You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used” (Data Privacy).
- “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you” (Notice and Explanation).
- “You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter” (Human Alternatives, Consideration, and Fallback).
White House Reportedly Plans More Expansive Chinese Tech Restrictions: The New York Times reports that the Biden administration is planning new restrictions meant to further limit China’s access to technologies used in AI development, including high-end computing hardware. According to the report, the White House plans to use a variety of measures to target China’s AI and semiconductor industry. Notably, this includes employing the foreign direct product rule — a powerful and far-reaching regulation that gives the Commerce Department the ability to limit sales of items using U.S.-origin technology, even if produced abroad — to restrict a number of Chinese companies’ and research labs’ access to high-end computing power. As we’ve covered in recent months, the Biden administration has been ramping up its efforts to hinder China’s AI and semiconductor industries — it has already blocked certain exports of U.S.-designed high-end chips and taken steps to cut off access to chipmaking tools. The new rules will reportedly codify some of the earlier restrictions and expand others (potentially covering tools needed for memory production). While exact details are not yet available, observers say the restrictions likely represent “the U.S. government’s most significant effort to date.”
IARPA Wants to Use AI to Identify (and Protect) Anonymous Authors: The Intelligence Advanced Research Projects Activity (IARPA) has launched a program to develop AI tools capable of determining the authorship of written products. The Human Interpretable Attribution of Text Using Underlying Structure (HIATUS) program aims to create tools that can identify authorship based on unique stylistic features — so-called “linguistic fingerprints” — modify written products to protect an author’s identity, and explain to observers why it flagged text as attributable and why it made specific revisions. If successful, the program could presumably help U.S. intelligence agencies on multiple fronts, enabling them to identify malign anonymous authors while keeping their own authors safe. IARPA awarded contracts to six lead organizations, bringing together researchers from industry, non-profit and academic backgrounds. IARPA expects the program to last a total of 42 months.
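To give a sense of the kind of stylometric signal such tools rely on, the toy example below trains a classic character n-gram baseline (scikit-learn) to attribute short texts to authors. It is a generic illustration of stylometry, not the HIATUS program’s methods; the example texts, author labels, and model choices are all invented for demonstration.

```python
# Toy stylometry baseline (not the HIATUS approach): character n-gram frequencies
# serve as a crude "linguistic fingerprint" for authorship attribution.
# The texts and author labels below are made up purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Frankly, I cannot agree; the committee's reasoning is, at best, incomplete.",
    "At best, the argument is incomplete -- frankly, I remain unconvinced.",
    "lol no way that works, its way too slow and kinda broken tbh",
    "tbh the whole thing is kinda broken, no way im shipping that lol",
]
train_authors = ["author_a", "author_a", "author_b", "author_b"]

# Character 2- to 4-grams capture punctuation habits, contractions, and spelling
# quirks that tend to persist across an author's writing.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_authors)

unseen = "frankly the proposal is, at best, incomplete and i cannot support it"
print(model.predict([unseen])[0])      # expected to lean toward "author_a"
print(model.predict_proba([unseen]))   # per-author confidence scores
```

Real attribution systems use far richer features across far more candidate authors; the point here is only that surface-level writing habits carry identifying signal, which is also why HIATUS pairs attribution with tools for rewriting text to obscure those habits.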
In Translation
CSET’s translations of significant foreign language documents on AI
2016 PRC AI Strategy: “Internet+” Artificial Intelligence Three-Year Action and Implementation Plan. This document is one of China’s earliest national strategies for the AI industry. The plan encourages China’s application of AI technology in industries and fields such as smart homes, self-driving cars, unmanned systems, security, wearables, and robotics.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
What’s New at CSET
REPORTS
- Downrange: A Survey of China’s Cyber Ranges by Dakota Cary
- A Common Language for Responsible AI: Evolving and Defining DOD Terms for Implementation by Emelia Probasco
PUBLICATIONS
- CSET: GitHub Data: Capturing Open Source Software and Implementation by Christian Schoeberl
- Lawfare: Don’t Assume China’s AI Regulations Are Just a Power Play by Micah Musser
EVENT RECAPS
- The Global CS Education Conference: Research Fellow Diana Gehlhaus and Research Analysts Ali Crawford and Luke Koslosky presented on The Workforce and Education Landscape of AI and Cybersecurity at the CSEd Conference in Fort Lauderdale, Florida.
- On September 22, the CSET webinar The Biotechnology Landscape: How Understanding Global Biology Research Activity Can Inform Pandemic Preparedness featured a conversation between Amesh Adalja, Senior Scholar at the Johns Hopkins University Center for Health Security, and CSET’s Anna Puglisi and Caroline Schuerger about how understanding the breadth and depth of global biological research activity can inform preparations for future pandemics.
IN THE NEWS
- The Atlantic: Timothy McLaughlin reached out to Research Analyst Dahlia Peterson to discuss the challenges that researchers and journalists experience when sourcing Chinese surveillance information.
- South China Morning Post: Lead Analyst William Hannas spoke to Mark Magnier about China’s theft of intellectual property for an article about a recently convicted Illinois Institute of Technology student.
- The Washington Post: In a piece about Micron’s major investment in upstate New York, Jeanne Whalen cited Will Hunt’s report, Reshoring Chipmaking Capacity Requires High-Skilled Foreign Talent.
What We’re Reading
Article: COVID-19 and Public Support for Autonomous Technologies—Did the Pandemic Catalyze a World of Robots?, Michael C. Horowitz, Lauren Kahn, Julia Macdonald and Jacquelyn Schneider, PLoS ONE (September 2022)
Article: Discovering Faster Matrix Multiplication Algorithms with Reinforcement Learning, Alhussein Fawzi et al., Nature (October 2022)
Upcoming Events
- October 12: National AI Advisory Committee, Field Hearing — Advancing U.S. Leadership in AI Research & Development (livestream available here), featuring Catherine Aiken
- October 12: American Conference Institute, U.S.-China Trade Controls conference, featuring Anna Puglisi
- October 12: McCourt School of Public Policy, Move Fast and Fix Things: Journalist Kara Swisher & Civic Entrepreneur Frank McCourt Discuss Technology, Democracy & the Future of the Internet
- October 19: CSET Webinar, Decoupling in Strategic Tech Sectors: What the Satellite Industry Can Teach Us About Future Efforts To Separate Technological Supply Chains, featuring Tim Hwang, Emily S. Weinstein and Martijn Rasser
- October 20: 2022 Innovations Dialogue: AI Disruption, Peace, And Security, featuring Margarita Konaev
What else is going on? Suggest stories, documents to translate & upcoming events here.