policy.ai
Biweekly updates on artificial intelligence, emerging technology and security policy.
— Subscribe here —
Facial recognition regulations evolve, Industries of the Future Act introduced and trade deal with China tackles tech transfer
Wednesday, January 22, 2020
Worth Knowing
Facial Recognition Controversy Deepens: More than 600 U.S. law enforcement agencies have quietly begun using a facial recognition app from start-up Clearview AI, finds a New York Times investigation. The system, which scrapes images from Facebook, YouTube, Venmo and others, has amassed more than three billion photos. Clearview’s volume of photos and lack of regulation or independent testing have exacerbated existing concerns about the increasing use of facial recognition. Meanwhile, the patchwork of state and local regulations on facial recognition continues to evolve: Last week, Cambridge became the fourth city in Massachusetts to limit the technology, and the California law prohibiting facial recognition use by law enforcement went into effect on January 1.
- More: We’re Banning Facial Recognition. We’re Missing the Point | EU Considers Temporary Ban on Facial Recognition in Public Spaces
- U.S. Chief Technology Officer Michael Kratsios explained the White House’s light-touch approach to AI regulation.
- Deputy Chief Technology Officer Lynne Parker pushed back against the idea that AI is zero-sum, saying the technology could benefit everyone.
- Intel announced it is working with Facebook on a new AI chip.
DeepMind Publishes Protein Folding Results: DeepMind published a study on AlphaFold, its protein structure prediction system, in the journal Nature last week. First posed in 1962, the protein folding problem asks how a given chain of amino acids folds into the three-dimensional structure of a protein. Answering this question is an important step toward understanding the biochemistry of living organisms. DeepMind’s breakthrough came in December 2018, when AlphaFold won CASP, the biennial protein structure prediction competition, by correctly predicting 24 out of 43 structures; the runner-up predicted only 14. AlphaFold’s progress demonstrates the potential for scientific advancement through machine learning.
Government Updates
Senators Introduce Bill to Support R&D for AI and Other Industries: Sens. Wicker, Gardner and Baldwin introduced the Industries of the Future Act last week. The bipartisan legislation would require a plan to increase federal investments in “industries of the future” — including AI, quantum computing and biotechnology — to $10B per year by 2025. It would also establish an Industries of the Future Coordination Council to advise the White House Office of Science and Technology Policy on federal measures necessary to maintain the U.S. global edge in emerging technologies.
Trade Deal With China Prompts Debate Over Tech Provisions: President Trump signed a “Phase One” trade agreement with China on January 15 that included sections on intellectual property theft and technology transfer. China agreed to stop requiring U.S. companies to transfer technology as a condition of operating in the country. It also committed to strengthening legal protections for American intellectual property — including harsher punishments for IP theft — and improving the criminal and civil procedures for combatting online patent and copyright infringement. However, the agreement has triggered debate over the likelihood of implementation and enforcement, while prospects for a "Phase Two" deal remain uncertain.
House Holds Hearing on Facial Recognition: On January 15, the House Committee on Oversight and Reform held its third hearing on facial recognition technology. Chairwoman Maloney indicated that the committee plans to introduce “common-sense” facial recognition legislation in the very near future. Despite its expanded private-sector use, facial recognition is “just not ready for prime time,” she said. This hearing follows a National Institute of Standards and Technology report finding that commercial facial recognition systems misidentify women and minorities at high rates. Ranking Member Jordan also committed to advancing a bipartisan bill.
In Translation
CSET's translations of significant foreign language documents on AI
China’s Strategy for Science and Technology Innovation: National 13th Five-Year Plan for S&T Innovation: Translation of a PRC State Council plan for science and technology innovation from 2016 to 2020. The first half of the plan details specific technologies that are near-term priorities for research and investment. The second half discusses proposed changes to China’s S&T infrastructure.
WAIC Proposed Guidelines for AI Security: World AI Conference Security and Rule of Law Guidelines: Translation of a document issued just before the World Artificial Intelligence Conference in Shanghai in August 2019. The document consists of proposed legal guidelines to address a wide range of potential dangers posed by the rise of AI technology, including bugs, hackers, algorithmic bias and unemployment.
What We’re Reading
Strategy: Artificial Intelligence in Support of Defense (available in English and French), Report of the AI Task Force of the French Ministry of the Armies (September 2019)
Report: The Global AI Index, Tortoise Media (December 2019)
Article: The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?, Toby Shevlane, Allan Dafoe (December 2019)
What’s New at CSET
PUBLICATIONS
- CSET Blog: America’s Future Lies in Technical Alliances by Melissa Flagg
- War on the Rocks: Ben Buchanan’s AI policy class for congressional staff was mentioned in an article on the AI literacy gap.
- Axios: Remco Zwetsloot spoke about the challenges of restricting foreign researchers for an Axios story on U.S.-China tech decoupling.
Events
- January 24: CSIS, Global Security Forum: Emerging Technologies Governance featuring Jason Matheny
- February 5: U.S. Copyright Office and the World Intellectual Property Organization, Copyright in the Age of Artificial Intelligence
What else is going on? Suggest stories, documents to translate & upcoming events here.
JAIC funded at $184M, U.S. export restrictions expanded and Facebook bans deepfake videos
Wednesday, January 8, 2020
Worth Knowing
2019 AI Index Documents the Field’s Progress: Stanford University’s Institute for Human-Centered AI released its annual AI Index Report. The almost 300-page report tracks the development of AI over time across a broad range of dimensions. Among the findings: significant growth over the past few years in AI conference attendance, number of publications, investment levels, and enrollment in related education. While China led in some metrics, including total number of publications, the United States led in others, such as citation impact, investments and patents. The report also includes a Global AI Vibrancy Tool that lets users compare countries’ relative strengths in AI.
- More: Full report
- More: The Semiconductor Industry and the Power of Globalisation | Maintaining the AI Chip Competitive Advantage of the United States and its Allies
Government Updates
Commerce Department Restricts Export of Certain AI Software: The Bureau of Industry and Security amended the Export Administration Regulations on Monday to include restrictions on the export of geospatial AI software. The interim rule requires a license for export and reexport of this software to all destinations except Canada. Restricted software must use a deep convolutional neural network to automate the analysis of geospatial imagery and have a variety of specific characteristics. The rule is open for comment until March 6th. Additional restrictions on exports of emerging technologies are expected.
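For readers unfamiliar with the category of software the interim rule describes, here is a minimal, purely illustrative sketch of a deep convolutional neural network applied to geospatial image tiles. The architecture, tile size and number of object classes are assumptions made for the example, not criteria drawn from the regulation.

```python
# Illustrative sketch only: a small deep convolutional neural network that
# classifies objects in geospatial image tiles, the general kind of software
# the interim rule describes. Layer sizes, the 64x64 tile size and the
# 10-class output are arbitrary assumptions, not taken from the regulation.
import torch
import torch.nn as nn

class GeoTileClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                   # extract spatial features from the tile
        return self.classifier(x.flatten(1))   # score each candidate object class

# A batch of four 64x64 RGB tiles yields four vectors of class scores.
scores = GeoTileClassifier()(torch.randn(4, 3, 64, 64))
print(scores.shape)  # torch.Size([4, 10])
```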
FY20 Appropriations Increase Funding for AI: The Fiscal Year 2020 appropriations act passed by Congress and signed by the President in late December included substantial investments in AI-related activities. It funded the Joint AI Center at $183.83 million. While $25 million below the President’s request, this was a significant increase over the FY2019 funding of $93 million. Overall, the measure provided $77.5 million above the President’s request for Department of Defense AI-related activities.
NDAA-Mandated RAND Report Finds DOD Unprepared to Integrate AI: A federally mandated report on the Defense Department’s posture in AI found that the DOD’s approach is “significantly challenged across all dimensions.” The RAND Corporation’s independent assessment concluded that the JAIC lacks the authority and resources to implement the DOD’s vision for AI. In addition, the authors determined the current state of verification, validation, test and evaluation is “nowhere close” to ensuring the safety of AI applications. The report recommends new governance structures and strategic planning initiatives, among other actions.
White House Proposes Guidance for AI Regulation: On Tuesday, the Office of Management and Budget proposed guidance for government agency regulation of AI in the private sector. Under Executive Order 13859 on Maintaining American Leadership in Artificial Intelligence, agencies with regulatory authority must submit AI regulation plans to OMB. The memorandum outlines principles the plans should follow, including promoting trustworthy AI without hampering innovation and growth. It also encourages the support of voluntary consensus standards developed by industry for self-regulation. Agency plans are due in 180 days. Chief Technology Officer of the United States Michael Kratsios wrote an op-ed in Bloomberg introducing the principles.
What We’re Reading
Report: The American AI Century: A Blueprint for Action, CNAS (December 2019)
Report: 2019 AI Now Report, AI Now (December 2019)
Report: Report on Artificial Intelligence: Implications for NATO’s Armed Forces, NATO Parliamentary Assembly Science and Technology Subcommittee (October 2019)
Paper: A Stable Nuclear Future? The Impact of Autonomous Systems and Artificial Intelligence, Michael C. Horowitz, Paul Scharre and Alexander Velez-Green (December 2019)
In Translation
CSET's translations of significant foreign language documents on AI
Qianzhan 2019 China AI Industry Report: 2019 Report on Current Conditions and Trends in the Artificial Intelligence Industry: Translation of Qianzhan Industry Research Institute’s business analysis of China’s AI industry. The document analyzes the current supply chain, market development and investments in China’s AI industry. It also assesses the outlook and trends for the future of the industry.
China’s Strategy for Innovation-Driven Development: Outline of the National Innovation-Driven Development Strategy: Translation of a CPC Central Committee and PRC State Council strategy identifying industries that China feels would most benefit from increased indigenous innovation. The document also identifies foreign talent and technology transfer as crucial for China’s emerging technology sectors.
China’s Ten-Year Strategy for Education Reform: Outline of the National Plan for Medium- and Long-Term Education Reform and Development: Translation of a CPC Central Committee and PRC State Council strategy for education reform issued in July 2010. Although the strategy doesn’t mention emerging technologies explicitly, the document addresses international educational exchange and cultivation of world-class talent, which has implications for emerging technology.
What’s New at CSET
REPORTS
- “Keeping Top AI Talent in the United States: Findings and Policy Options for International Graduate Student Retention” by Remco Zwetsloot, James Dunham, Zachary Arnold and Tina Huang
- “AI Safety, Security, and Stability Among Great Powers: Options, Challenges, and Lessons Learned for Pragmatic Engagement” by Andrew Imbrie and Elsa B. Kania
- IISS: Mapping the Terrain: AI Governance and the Future of Power by Andrew Imbrie
- The Diplomat: The US-China Tech Wars: China’s Immigration Disadvantage by Remco Zwetsloot and Dahlia Peterson
- MIT Technology Review: Ben Buchanan spoke about his upcoming book, The Hacker and the State, in an article on the future of hackers in geopolitics.
- The Diplomat: Elsa Kania was interviewed for a piece on her views about AI and great power competition.
- Marketplace: Dahlia Peterson spoke about facial recognition and biometric data for a story on China’s dependence on U.S. tech companies for surveillance.
- Axios: CSET’s report on graduate student retention was featured in an Axios story.
- Wired: Helen Toner was interviewed for an article on China’s AI unicorns.
Events
- January 8: Subcommittee on Consumer Protection and Commerce of the House Committee on Energy and Commerce, Americans at Risk: Manipulation and Deception in the Digital Age
- January 16: AI in Government, AI Playbook for Success with Presidential Innovation Fellows
What else is going on? Suggest stories, documents to translate & upcoming events here.
Chinese Public AI R&D Spending, NeurIPS Underway, and NDAA Includes Provisions on AI
Wednesday, December 11, 2019
Worth Knowing
NeurIPS Underway in Vancouver: The Neural Information Processing Systems Conference, the most attended annual AI conference, is in progress in Vancouver, Canada, until December 14. Conference organizers anticipated more than 13,000 attendees — a significant increase from the 8,000 participants in 2018. A total of 9,185 papers were submitted for consideration, with 1,428 accepted for presentation. NeurIPS is slated to be held in Vancouver again in 2020 and Sydney, Australia, in 2021.
China Legislates Deepfakes: The Cyberspace Administration of China will require that all deepfakes be clearly marked as artificially generated starting January 1, 2020. The rules apply to all “fake news” created with technologies such as artificial intelligence or virtual reality. Failure to comply will be a criminal offense. Officials cite threats that deepfakes pose to national security and the social order as motivating factors. While there is no comparable deepfake legislation in the United States, California has passed laws restricting deepfake use under specific circumstances.
- More: Deepfakes are a real political threat. For now, though, they’re mainly used to degrade women | Deepfakes and Cheap Fakes
Government Updates
NDAA Extends NSCAI Mandate, Enhances Hiring for JAIC: House and Senate negotiators have reached an agreement on the Fiscal Year 2020 National Defense Authorization Act. The conference report incorporates several provisions related to AI, including authorization for the Joint AI Center to enhance its hiring of science and engineering experts. The NDAA also extends the National Security Commission on AI’s mandate until October 2021, requires a second interim report by December 2020 and delays the date of the final report until March 2021. In addition, the NDAA directs the Department of Defense to provide an analysis comparing U.S. and Chinese capabilities in AI and to report on the JAIC’s mission and objectives.
Schmidt and Work: US in Danger of Losing Global Leadership in AI: In an op-ed published last week, the co-chairs of the National Security Commission on AI, Eric Schmidt and Bob Work, wrote that the United States must act quickly to avoid losing its technical lead to China. While the country has long been a world leader in AI, they warn that by many metrics, America’s lead is dwindling. The op-ed summarizes the findings of the NSCAI Interim Report and underscores the importance of AI to national security and economic prosperity.
ICIG Report Describes Activities to Improve Oversight of AI: The Office of the Inspector General of the Intelligence Community released its Semiannual Report detailing its goals and activities from April to September 2019. One of the ICIG’s five programmatic objectives in 2019 was improving oversight of artificial intelligence. To that end, the report describes steps the ICIG took to build collaboration around and understanding of AI, both within and outside the intelligence community. The report also discusses the possibility of building an ICIG Community of Interest on AI.
What We’re Reading
Special Issue: RUSI Journal on Artificial Intelligence, The Royal United Services Institute (November 2019)
Post: An Epidemic of AI Misinformation, Gary Marcus in The Gradient (November 2019)
Paper: Review of Dual-Use Export Controls, European Parliament Think Tank (November 2019)
In Translation
CSET's translations of significant foreign language documents on AI
China’s Five-Year Industrial Strategy for Emerging Technology: Circular of the State Council on Issuing the National 13th Five-Year Plan for the Development of Strategic Emerging Industries: Translation of a PRC State Council plan that sets quantifiable goalposts for the growth of certain high-tech industries. An appendix specifies the Chinese ministries responsible for carrying out this plan for each type of emerging technology.
What’s New at CSET
REPORTS
- “Chinese Public AI R&D Spending: Provisional Findings” by Ashwin Acharya and Zachary Arnold
- “Maintaining the AI Chip Competitive Advantage of the United States and its Allies” by Saif M. Khan
- “Defense Innovation Board AI Principles” translation from English to Chinese
- “Defense Innovation Board AI Principles” translation from English to Russian
- MIT Tech Review / Fortune / South China Morning Post: CSET’s recent issue brief on Chinese AI R&D spending was featured in MIT Tech Review, Fortune and the South China Morning Post.
- The Hill: Eric Schmidt and Bob Work’s op-ed cited a War on the Rocks piece on Russian AI-enabled combat by CSET’s Margarita Konaev and CNA’s Samuel Bendett.
- Federal News Network: Ben Buchanan joined Federal Drive with Tom Temin to talk about CSET’s new $2 million project, CyberAI.
- The Hoya: Ben Buchanan and Jason Matheny were quoted in an article discussing CSET’s plans for CyberAI.
- South China Morning Post: Remco Zwetsloot spoke about attracting and retaining foreign AI talent for an article on China’s AI workforce.
Events
- December 11: University of California, Artificial Intelligence Research Briefing
- December 12: Brookings, Lessons of History, Law, and Public Opinion for AI Development
- December 12: Hudson Institute, The Chinese Threat to America’s Industrial and High-Tech Future: The Case for a U.S. Industrial Policy
- December 23: Montreal.AI, Debate: Yoshua Bengio and Gary Marcus — live streaming
What else is going on? Suggest stories, documents to translate & upcoming events here.
Cerebras unveils the CS-1, explaining NLP, and China’s plan for university-level AI education
Wednesday, November 27, 2019
Worth Knowing
Cerebras Unveils Computer Using Largest Chip Ever Built: Last week, Cerebras Systems announced the CS-1, a computer designed around its Wafer Scale Engine. The WSE, released in August, is specialized for deep learning. It is also the largest computer chip ever produced, more than 50 times the size of standard chips. The CS-1 provides the infrastructure for users to work with the chip, beginning with Argonne National Laboratory, the company’s first partner. Cerebras says the CS-1 delivers the performance of 1,000 GPUs combined, though this claim has not been independently verified.
- More: 6 Things to Know About the Biggest Chip Ever Built | The Deep Learning Revolution and Its Implications for Computer Architecture and Chip Design
- More: Full paper | Demo
- More: One Month, 500,000 Face Scans: How China is Using A.I. to Profile a Minority | Western Academia Helps Build China’s Automated Racism
Government Updates
2019 Annual USCC Report Highlights Emerging Technology: The U.S.-China Economic and Security Review Commission submitted its annual report to Congress on the national security implications of the economic relationship between the United States and China. The report includes a section on emerging technologies and military-civil fusion that argues Chinese advancements in AI could undermine U.S. economic and military advantages. The Commission makes several recommendations to Congress, including reestablishing a higher education advisory board under the FBI to identify signs of technology transfer. However, critics have noted errors and hyperbole in the report regarding China’s space program.
2016–2019 Progress Report Published on Advancing AI R&D: The National Science and Technology Council has released its Progress Report on AI R&D. The report describes how federal agencies are advancing the field in accordance with the National AI R&D Strategic Plan. It divides AI research by national strategy, sector and agency contribution, emphasizing the breadth and depth of federal investments in AI.
What We’re Reading
Report: Characteristics of H-1B Specialty Occupation Workers: Fiscal Year 2018 Annual Report to Congress, U.S. Citizenship and Immigration Services, Department of Homeland Security (November 2019)
Strategy: National Artificial Intelligence Strategy: Advancing Our Smart Nation Journey, Smart Nation Singapore (November 2019)
Paper: Artificial Intelligence in Land Forces: A Position Paper, The German Army Concepts and Capabilities Development Centre, Bundeswehr (October 2019)
In Translation
CSET's translations of significant foreign language documents on AI
China’s Plan to Improve University-Level AI Education: The Artificial Intelligence Innovation Action Plan for Institutions of Higher Education: Translation of a Ministry of Education plan issued in April 2018. The plan lays out objectives designed to significantly enhance China’s cadre of AI talent and its university AI curricula by 2030.
China’s Plan to Build a National Tech Transfer System: The Program to Build a National Technology Transfer System: Translation of a PRC State Council plan issued in 2017. It briefly addresses China’s system for acquiring foreign technology but primarily focuses on the transfer of technology within China.
What’s New at CSET
PUBLICATIONS
- Defense One: Misguided Immigration Policies Are Endangering America’s AI Edge by Zachary Arnold
- Morning Consult: Immigration and the Future of U.S. AI by Zachary Arnold, Tina Huang and Remco Zwetsloot
- Hewlett Foundation: CSET was awarded a $2 million grant to support a new Cybersecurity and AI project led by Ben Buchanan. CyberAI will explore the effects of automation on cyber offense and defense.
- Syracuse University News: As part of a new $500,000 partnership with CSET, Syracuse University Institute for Security Policy and Law will assist CSET in investigating the legal, policy and security impacts of emerging technology. Judge James Baker is the grant’s principal investigator.
- South China Morning Post: Helen Toner spoke about China’s AI ambitions in an article about Chinese reliance on U.S. technology.
- CNAS: Andrew Imbrie and Elsa Kania have become members of CNAS’s newly launched Digital Freedom Forum.
- U.S.-China Economic and Security Review Commission: The 2019 Annual Report cites testimony and written reports by Jeff Ding, Elsa Kania, Lorand Laskai and Helen Toner.
- U.S. Senate Permanent Subcommittee on Investigations: Threats to the U.S. Research Enterprise: China’s Talent Recruitment Plans references testimony by Elsa Kania.
- Cervest: AI Ethics for Systemic Issues: A Structural Approach cites work by Remco Zwetsloot on risks from AI.
- Congressional Research Service: Artificial Intelligence and National Security was updated and now references CSET’s translation of Russia’s AI Strategy.
Events
- December 3: KPMG, AI in Action
- December 4: CSIS, China’s Power: Up for Debate, including a debate on technology influence
- December 9: AI in Government, AI Projects at NSF with Dorothy Aronson
- December 12: Brookings, Lessons of History, Law, and Public Opinion for AI Development
What else is going on? Suggest stories, documents to translate & upcoming events here.
China’s emotion recognition system, the DOD AI ethics recommendations and the NSCAI report
Wednesday, November 13, 2019
Worth Knowing
China Expands ‘Emotion Recognition’ AI Despite Expert Skepticism: Emotion recognition systems generated excitement at China’s 2019 Public Security Expo, the Financial Times reported. The technology is being rolled out in Xinjiang as part of crime prediction systems along with facial recognition, gait recognition and eye tracking. The system is meant to predict violent behavior by using AI to identify signs of aggression, nervousness and stress. However, experts have pushed back on such characterizations with studies suggesting that emotion recognition is accurate only 20–30 percent of the time. Despite this, both Chinese companies and U.S. tech giants like Google, Amazon and Microsoft continue to develop emotion recognition systems.
- More: Emotion Detection AI Is a $20 Billion Industry. New Research Says It Can’t Do What It Claims | China’s Algorithms of Repression
Reports of U.S. Military’s Extensive Facial Recognition System: The U.S. military maintains an extensive biometrics and facial recognition system with more than 7.4 million identities, according to documents obtained by OneZero. The system, known as the Automated Biometric Information System, stores biometric information on anyone who comes into contact with U.S. military systems abroad, including allied soldiers, with the goal of “denying... adversaries anonymity.” ABIS currently links to state and local law enforcement biometric systems and may eventually integrate with the Department of Homeland Security’s biometric database. While much is still unknown about the system, its use of facial recognition raises privacy and bias concerns given the technology’s low accuracy rates for women and minorities.
DeepMind Reaches Grandmaster Level in StarCraft II: DeepMind announced that AlphaStar, its AI designed to play StarCraft II, recently outperformed 99.8% of human players. Unlike the previous version of AlphaStar announced in January, this one was constrained to level the playing field with humans: AlphaStar now “sees” the board through a camera, is limited to acting at human speed and is able to play as any of the game’s races. The results are comparable to those of OpenAI’s Dota 2 artificial agent, showing the power of reinforcement learning as a training mechanism for complex gameplay environments.
- More: Full paper
Government Updates
DIB Approves AI Ethics Principles: After a series of roundtables and discussions, the Defense Innovation Board voted unanimously on October 31st to recommend AI Ethics Principles for the DOD, accompanied by a supporting document. The principles, intended for both combat and non-combat systems, state that AI should be responsible, equitable, traceable, reliable and governable. The board also recommends a series of next steps, including establishing a DOD-wide steering committee and formalizing the principles within the DOD.
NSCAI Submits Interim Report to Congress: On November 4th, the bipartisan National Security Commission on Artificial Intelligence released its interim report. The report assesses the challenges and opportunities AI poses for national security and notes concerning trendlines relative to China. The Commission suggests accelerating public investment in AI R&D, applying AI to national security, training and recruiting AI talent, protecting the U.S. technological advantage and encouraging global cooperation. Full recommendations will be made to Congress in a later report.
NSCAI Holds First-Ever Conference: The National Security Commission on Artificial Intelligence held a conference on the future of AI and national security on November 5th. Notable remarks include:
- Senate Minority Leader Chuck Schumer proposed investing $100 billion in emerging technologies including AI, quantum and robotics.
- Secretary of Defense Mark Esper called for partnering with universities and industry to protect the U.S. AI advantage over China.
- Google VP for Global Affairs Kent Walker said Google is eager to collaborate more closely with the DOD.
What We’re Reading
Report: Report of Estonia’s AI Taskforce, published by the Republic of Estonia Government Office and the Republic of Estonia Ministry of Economic Affairs and Communications (May 2019)
Report: The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Volume II, East Asian Perspectives, published by the Stockholm International Peace Research Institute and edited by Lora Saalman (October 2019)
Report: Partial Disengagement: A New U.S. Strategy for Economic Competition with China, by Charles W. Boustany and Aaron L. Friedberg for the National Bureau of Asian Research (November 2019)
In Translation
CSET's translations of significant foreign language documents on AI
Taiwan’s Sensitive Science and Technology Protection Bill: General Notes on the Sensitive Science and Technology Protection Bill: Translation of a bill proposed in Taiwan’s parliament that provides for up to seven years in prison or a $1 million fine for leaks of sensitive technology. The bill aims to counter Chinese industrial espionage and reassure U.S. firms that they can conduct R&D in Taiwan without fear of their proprietary technology being disclosed to Chinese competitors.
What’s New at CSET
PUBLICATIONS
- The Cipher Brief: The Future of AI and Cybersecurity by Ben Buchanan
- Science: Jason Matheny discussed the importance of expanding federal funding for R&D as part of an article on Senator Schumer’s proposal for investing in AI.
- CIO: CSET’s recommendations on export controls for semiconductor manufacturing equipment were featured in an article on the NSCAI interim report.
- Axios: Tarun Chhabra was quoted in Axios Future on the difference between the United States and China’s approaches to funding technology.
- South China Morning Post: Helen Toner spoke about the U.S. blacklisting of Chinese tech companies in an article about the US-China tech war.
- Federal News Network: Jason Matheny discussed the ways in which an adversary’s AI tools can be used against them in an article on the next phase of AI in government.
- BBC: Helen Toner was interviewed in an In the Balance podcast episode on the threats AI might pose to humans.
Events
- November 18: Wilson Center, Bits and Borders: Navigating Asymmetrical Risk in a Digital World
- November 20: Politico, Artificial Intelligence at Work
- December 4: CSIS, China’s Power: Up for Debate, including a debate on technology influence
What else is going on? Suggest stories, documents to translate & upcoming events here.
Russia’s AI strategy translated, Chinese AI supply chains and the Deepfake Report Act
Wednesday, October 30, 2019
Worth Knowing
Chinese companies look to secure supply chains after being added to Entity List: Chinese AI companies are seeking to adapt their hardware supply chains after being put on the Entity List, which prohibits them from purchasing certain U.S. technologies. The CEO of Chinese AI startup Megvii says the company will be restricted in its ability to purchase x86 servers, GPUs and CPUs, but it will move forward with its IPO as planned. Also of note: The New York Times reports that in light of rising tensions with China, the Pentagon has been meeting with tech companies to assess U.S. dependence on Taiwanese chips, which are crucial for military applications.
Researchers use machine learning to develop a new metamaterial: A research team at Delft University of Technology created a new super-compressible material with the help of machine learning. While testing new materials usually requires extensive trial and error, the use of AI allowed for experimentation solely via simulation, significantly accelerating the process. Lead author Miguel Bessa says while the new material is exciting, the role of machine learning in its development is the real accomplishment. The researchers also released their code to facilitate broader use of ML in future materials design.
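To make the idea of experimentation solely via simulation concrete, the sketch below shows a generic machine-learning-guided design loop: a surrogate model is fit to simulated results and proposes the next candidate to simulate, so no physical prototype is needed until the end. The simulate function, the single design variable and its bounds are invented placeholders; this is not the Delft team's actual method.

```python
# Generic, hypothetical sketch of ML-guided design via simulation. A surrogate
# model learns from simulated results and suggests the next design to try,
# replacing physical trial and error. The "simulate" function and design
# bounds are invented for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def simulate(thickness: float) -> float:
    """Placeholder for an expensive physics simulation of one candidate design."""
    return -(thickness - 0.3) ** 2 + 0.1 * np.sin(20 * thickness)

designs = list(np.linspace(0.05, 0.95, 5))   # a few initial candidate designs
results = [simulate(t) for t in designs]     # evaluated only in simulation

for _ in range(10):
    surrogate = GaussianProcessRegressor().fit(np.array(designs)[:, None], results)
    grid = np.linspace(0.0, 1.0, 200)[:, None]
    best = float(grid[np.argmax(surrogate.predict(grid))][0])  # surrogate's best guess
    designs.append(best)
    results.append(simulate(best))           # run the simulator, not a lab test

print(f"best simulated design: thickness = {designs[int(np.argmax(results))]:.3f}")
```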
Inaugural Turing AI Fellows class named as part of UK talent push: The Alan Turing Institute, the UK Office for AI, and UK Research and Innovation have announced the appointment of five Turing AI Fellows, senior AI researchers selected to receive significant funding for five years. The institutions also published a call for applications to the Turing AI Acceleration Fellowship and Turing AI World-Leading Researcher Fellowships, which together will receive 37.5 million pounds ($48.2 million) in funding. These initiatives are part of a broader UK government strategy to attract and retain top AI talent.
Data labeling market expected to grow dramatically: The data labeling industry is growing, with workers in developing countries generating the massive quantities of human-labeled data needed to train AI. The market for data labeling was estimated at $150 million in 2018 and is predicted to grow to $1 billion by 2023. Labeled data is essential for supervised learning, and the growing industry allows tech companies to outsource this work rather than do it in-house. While changes in AI training methods could eventually make the industry less essential, it’s a necessity for now.
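As a deliberately toy illustration of why labeled data is the bottleneck for supervised learning, the sketch below trains a classifier that can only learn the input-to-output mapping because a person has attached the correct label to every example; the features and labels are invented.

```python
# Toy supervised-learning example: the model learns only from examples that
# humans have already labeled. Features and labels below are invented.
from sklearn.linear_model import LogisticRegression

# Each example pairs input features with a human-provided label.
features = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
labels = ["car", "car", "pedestrian", "pedestrian"]  # the costly, human-labeled part

model = LogisticRegression().fit(features, labels)
print(model.predict([[0.85, 0.15]]))  # expected: ['car']
```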
Government Updates
Trump reestablishes science and technology advisory council: On October 22nd, President Trump issued an Executive Order reconstituting the President’s Council of Advisors on Science and Technology and appointed the first seven members of an eventual 17. PCAST will advise the White House on science and technology and respond to requests for analysis or advice. Several of the advisors have backgrounds in artificial intelligence, which the Executive Order specifically mentioned as a key emerging technology along with quantum computing.
Senate passes Deepfake Report Act: The Senate passed the Deepfake Report Act by unanimous consent on October 24th. The bill would require the Secretary of Homeland Security to publish an annual report on the state of deepfake technology. The report is to include an assessment of technologies, how deepfakes could be used by foreign governments and non-state actors, methods for deepfake detection and progress on technological countermeasures. A companion bill was introduced in the House in June, but has not yet been brought up for a vote.
U.S. Army announces plans to integrate and adopt AI: Earlier this month, the Army provided a series of updates on its use of AI as part of its strategy for seamless AI integration. Among the developments: efforts to create an AI assistant for tank warfare known as Project Quarterback; an AI system designed to spot targets in reconnaissance photos, which will be tested next year in Defender-Europe 20; and plans to gather more data for AI by equipping RQ-7Bv2 Shadow drones with sensor suites and fielding 200,000 IVAS soldier goggles.
Hurd and Kelly announce new AI initiative with Bipartisan Policy Center: In partnership with the Bipartisan Policy Center, Reps. Hurd and Kelly will develop a national AI strategy aimed at guiding Congress and the executive branch. They plan to convene public and private sector experts to weigh in on the challenges and opportunities of crafting policy on artificial intelligence, concluding with a federal AI framework. Hurd and Kelly previously co-authored a white paper on the importance of AI after hosting a series of congressional hearings.
What We’re Reading
Report: Opinion of the Data Ethics Commission, The Data Ethics Commission of the Federal Government of Germany (October 2019)
Book: The Impact of Emerging Technologies on the Law of Armed Conflict, edited by Eric Talbot Jensen and Major Ronald T. P. Alcala (September 2019)
Post: Artificial Intelligence Research Needs Responsible Publication Norms, Rebecca Crootof in Lawfare (October 2019)
In Translation
CSET's translations of significant foreign language documents on AI
Russia’s National AI Strategy: Decree of the President of the Russian Federation on the Development of Artificial Intelligence in the Russian Federation: Russia’s national strategy for the development of artificial intelligence, released in October 2019. The document sets out a number of short-term (to be completed by 2024) and medium-term (by 2030) qualitative goals designed to build Russia into a leading AI power. For more, see analysis by CSET’s Margarita Konaev.
What’s New at CSET
TESTIMONY
- Ben Buchanan testified before the House Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection and Innovation about the impact of artificial intelligence on cybersecurity. See Buchanan’s written testimony here.
- War on the Rocks: With AI, We’ll See Faster Fights, but Longer Wars by Margarita Konaev
- DigiChina: AI Policy and China: Realities of State-Led Development including sections by Helen Toner, Lorand Laskai and Jeff Ding
- The Europe Desk: Making the Words Sing: Andrew Imbrie on Speechwriting and Rhetoric, a conversation with Andrew Imbrie
- Journal of Strategic Studies: War in the City: Urban Ethnic Geography and Combat Effectiveness by Margarita Konaev with Michigan State University’s Kirstin J.H. Brathwaite
- Igor Mikolic-Torreira spoke at an Axios breakfast on the future of AI.
Events
- October 31: Defense Innovation Board Public Meeting at Georgetown University
- November 5: The National Security Commission on Artificial Intelligence, Strength through Innovation: The Future of AI and U.S. National Security featuring Jason Matheny
- November 6: CSIS, China’s New Era in Techno-Governance
What else is going on? Suggest stories, documents to translate & upcoming events here.
Blacklisting Chinese AI startups, deepfake detection and NSF funding for AI research
Wednesday, October 16, 2019
Worth Knowing
Trump Administration blacklists top Chinese AI startups for human rights violations: The Commerce Department added 28 organizations to the Entity List for human rights abuses in Xinjiang, prohibiting them from buying U.S.-developed technologies. The list named eight tech companies, including Megvii, iFlytek and SenseTime (three of China’s top AI companies), along with Dahua and Hikvision (leading manufacturers of surveillance products).
- More: Could blacklisting China’s AI champions backfire? | Megvii CEO says US ban will hit its supply of servers and could disturb IPO
Putin signs Russia’s AI strategy: President Vladimir Putin has approved Russia’s national AI strategy through 2030. The strategy, not yet translated into English, centralizes national AI efforts under the Ministry of Economic Development, according to Kommersant. Notably, it defines artificial intelligence as technology that can “simulate human cognitive functions,” including self-learning, and can achieve results comparable to those of “human intellectual activity.”
Google creates a dataset of 3,000 deepfakes to assist with detection efforts: Alphabet’s Jigsaw and Google produced and publicly released the Deep Fake Detection Dataset, with the goal of supporting research efforts to develop automatic deepfake detection. The dataset consists of 1,000 videos altered by four deepfake methods, as well as models to generate new deepfake data. Google’s move comes after Facebook, Microsoft, MIT and others dedicated $10 million to a new deepfake detection dataset and challenge.
Government Updates
NSF creates new program to fund $200M in long-term AI research over six years: The National Science Foundation announced the creation of a new National AI Research Institutes program to advance large-scale, long-term AI research. The program will fund the creation of research institutes at the nexus of government, industry and academia focused on a series of core priorities in AI. In the first year, NSF anticipates disbursing $120 million in grants. Grant proposals are due to NSF by late January 2020.
DOE announces $13M in new funding for AI research projects: The Department of Energy’s Office of Science announced $13 million in funding for five research projects aimed at “improving AI as a tool of scientific investigation and prediction.” The projects fund both universities and DOE national laboratories. This development builds on other recent AI-related activities at the DOE, including the creation of the Artificial Intelligence and Technology Office and funding from ARPA-E for AI-developed tools to reduce nuclear power plant operating expenses.
Keep STEM Talent Act of 2019 introduced in the House: On October 4th, Reps. Bill Foster and Eddie Bernice Johnson introduced the Keep STEM Talent Act of 2019, which would provide permanent-resident status to advanced STEM degree holders. The bill, a companion to one introduced in the Senate by Sen. Richard Durbin, exempts STEM graduates from annual green card caps. Long wait times for green cards were identified as a key problem facing international AI talent in CSET’s recent report on Immigration Policy and the U.S. AI Sector.
What We’re Reading
Article: Chinese Semiconductor Industrial Policy: Past and Present and Prospects for Future Success, John VerWey in the United States International Trade Commission (July and August 2019), and The South Korea-Japan Trade Dispute in Context: Semiconductor Manufacturing, Chemicals, and Concentrated Supply Chains by Samuel M. Goodman, Dan Kim and John VerWey
Book: The Transpacific Experiment: How China and California Collaborate and Compete for Our Future, Matt Sheehan (August 2019)
Post: Critiquing Carnegie’s AI Surveillance Paper, John Honovich and Charles Rollet at IPVM (September 2019) — a response to a Carnegie paper we included in the previous edition of policy.ai
In Translation
CSET's translations of significant foreign language documents on AI
Municipal action plan for “military-civil fusion” in developing “intelligent technologies”: Tianjin Municipal Action Plan for Military-Civil Fusion Special Projects in Intelligent Technology. While many Chinese local governments have published “military-civil fusion” plans, Tianjin’s is among the most detailed. It provides insight into local efforts to steer the development of emerging technologies in directions that fulfill PLA requirements.
What’s New at CSET
JOB OPENINGS
CSET is hiring! We’re looking to fill the following roles:
- Data Scientist: Apply machine learning, NLP, predictive modeling, and other advanced computational techniques to evaluate hypotheses and derive data-driven insights. BA in relevant area and 6+ yrs of experience required.
- Research Project Manager: Coordinate hiring of student research assistants; manage a research pipeline. BA in relevant area and 2+ yrs project management experience required.
- Research Analyst: Collaborate with Research Fellows to develop and execute research projects. BA in relevant area and 2+ yrs of experience required (MA preferred).
- Ben Chang, Jeff Ding and Elsa Kania contributed chapters to Artificial Intelligence, China, Russia, and the Global Order: Technological, Political, Global and Creative Perspectives, Air University Press, edited by Nicholas D. Wright.
- Philanthropy News Digest: CSET and Georgetown’s Ethics Lab were awarded a grant from the Public Interest Technology University Network to fund the development of workshops on AI and ethics to train future leaders in tech policy.
- The Wall Street Journal: Elsa Kania spoke about the PLA’s unmanned weapons systems for an article on China’s weapons demonstration in its National Day parade.
- The Niskanen Center: Remco Zwetsloot is quoted in an article titled Six Ways Immigration Reform Can “Make America Great,” describing the importance of immigrants to the U.S. advantage in AI.
- National Journal: Helen Toner was interviewed for an article discussing the increasing tensions between U.S. and Chinese tech companies.
Events
- October 18: The Stanford Cyber Policy Center at the Freeman Spogli Institute, A.I., Semiconductors and Beyond: A U.S. Strategy for Winning the Tech Race with China
- October 29: CSIS, Managing the Risk of Tech Transfer to China
- November 5: The National Security Commission on Artificial Intelligence, Strength through Innovation: The Future of A.I. and U.S. National Security
- November 4-6: NVIDIA, GPU Technology Conference
What else is going on? Suggest stories, documents to translate & upcoming events here.
OpenAI experiments with hide-and-seek, California bans facial recognition use by police, and members of Congress seek to revive the OTA
Wednesday, October 2, 2019
Worth Knowing
OpenAI experiment tests how AI might “evolve” through competition: Researchers observed teams of AI agents playing billions of games of hide-and-seek in an attempt to understand emergent behavior. Over time, agents learned to use available tools in increasingly complex ways — including adopting strategies that programmers did not expect. The researchers hope this type of reinforcement learning will allow AI systems to solve increasingly complex problems in the future, but found the number of repetitions required makes it difficult to apply this technique to real-world settings.
California legislature bans facial recognition use by law enforcement: Both houses of the California legislature passed AB-1215, which prohibits law enforcement from “installing, activating, or using any biometric surveillance system in connection with an officer camera” for three years. The bill now goes to Gov. Gavin Newsom; if he signs it, California will become the largest state to ban specific uses of facial recognition.
- More: U.S. AI policy advisor predicts federal facial recognition ban | The Global Expansion of AI Surveillance
France releases military AI strategy: The French Ministry of the Armies has published a comprehensive report on military use of AI, building on the strategy laid out by Minister of the Armies Florence Parly in April. The document establishes a Coordination Unit for Defense AI and a ministerial AI ethics committee, and it commits 430 million euros ($470 million) to AI research by 2025. The strategy describes the United States and China as global leaders in AI, but outlines a possible role for France if it coordinates with other nations within and outside the EU.
Government Updates
JAIC and GSA announce partnership to expand Pentagon’s use of AI: The General Services Administration and the DoD’s Joint AI Center announced their new partnership through the GSA’s Centers of Excellence initiative on September 25th. They aim to expand the Pentagon’s use of AI by accelerating the delivery of AI capabilities and modernizing programmatic and acquisition processes. GSA also hopes the partnership will spur greater AI adoption across government.
Office of Technology Assessment Improvement and Enhancement Act introduced in House and Senate: Reps. Takano and Foster and Sens. Tillis and Hirono introduced the bipartisan Office of Technology Assessment Improvement and Enhancement Act in both houses of Congress on September 19th. It would revamp, rename and improve the OTA — which was defunded in 1995 — to ensure nonpartisan technology assessment is available to Congress. The House FY20 Legislative Branch Appropriations bill includes $6 million to fund the OTA; the Senate bill defers the issue until an ongoing report is complete.
MQ-25, the Navy’s unmanned refueling drone, completes successful test flight: The U.S. Navy announced a successful first test flight for its autonomous refueling drone, the Boeing-developed MQ-25 Stingray. The two-hour flight included autonomous taxi and takeoff. The Navy plans to integrate the MQ-25 into its strike arm by 2024 as the first carrier-launched autonomous unmanned aircraft.
Additional $8 million for NIST AI research included in Senate appropriations bill: The Senate FY20 Commerce, Justice and Science Appropriations bill includes $1.04 billion for the National Institute of Standards and Technology, a $52.5 million increase above the FY19 enacted level. Of the additional funds, $8 million would be allocated to expand NIST’s AI research and measurement efforts, including developing resources to model AI behavior and train, test and compare AI systems.
What We’re Reading
Artificial Intelligence on the Battlefield: An Initial Survey of Potential Implications for Deterrence, Stability, and Strategic Surprise, Center for Global Security Research at Lawrence Livermore National Laboratory (March 2019)
Open Arms: Evaluating Global Exposure to China’s Defense-Industrial Base, C4ADS (September 2019)
Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence, Data and Society (September 2019)
In Translation
CSET's translations of significant foreign language documents on AI
CSET's translations of significant foreign language documents on AI
The emerging technologies of interest to China’s military: Guidelines for Basic Research and Cutting-Edge Technology Projects (2018): Issued by China’s State Administration of Science, Technology and Industry for National Defense, this document identifies several emerging technologies of interest to the Chinese military. SASTIND circulated these guidelines to Chinese universities and research institutes in 2018 to encourage them to apply for grants to conduct basic research in PLA areas of interest.
What’s New at CSET
IN THE NEWS
- South China Morning Post: CSET’s research on the U.S. AI workforce was featured in the South China Morning Post
- ChinAI Pod: Jeff Ding interviewed Remco Zwetsloot about CSET’s workforce research for the first-ever episode of the ChinAI Pod
- The Cipher Brief: Jason Matheny and Bill Hannas were featured in the Cipher Brief discussing Bill’s new report, titled China’s Access to Foreign AI Technology
- Global Journalist: Dahlia Peterson was interviewed on the Global Journalist about China’s surveillance state
Events
- October 4: Carnegie Endowment for International Peace, Toward Trustworthy Information and Communications Technology Suppliers
- October 10: The Brookings Institution, How China’s Tech Sector is Challenging the World
- October 15: The Jamestown Foundation, Ninth Annual China Defense and Security Conference
- November 4-6: NVIDIA, GPU Technology Conference
What else is going on? Suggest stories, documents to translate & upcoming events here.
AI professors leave for industry, Russia drafts AI strategy, and the White House requests $1B for AI R&D
Wednesday, September 18, 2019
Worth Knowing
Study finds flow of AI professors to industry discourages innovation: New research from the University of Rochester shows that after AI professors leave academia for private-sector work, fewer of their students start AI companies. The study, first covered by The New York Times, found that about 10 times as many North American professors left for tech companies in 2018 as did in 2009. The researchers say this trend could eventually hamper AI innovation and the economy.
Preview of Russia’s AI strategy: President Vladimir Putin is reviewing a draft AI strategy that he ordered state-owned Sberbank to prepare, according to Defense One. The wide-ranging document covers fundamental investments in AI — including funding for research, ethical and data regulations, and hardware and software developments — as well as specific applications of AI in healthcare and education. The final version is expected next month.
- More: Russia and AI-driven asymmetric warfare | CSET’s Margarita Konaev and Samuel Bendett of CNA on “Russian AI-Enabled Combat”
- More: Accepted NeurIPS papers
Government Updates
White House requests $1B in non-defense AI spending in 2020: The Trump administration submitted a supplemental request for $973.5 million in non-defense AI R&D spending for fiscal year 2020. While this number is higher than in previous years, some industry leaders say it’s not enough. Looking ahead, a White House memo listed AI as a priority for the 2021 R&D budget.
Senate Defense Appropriations bill boosts defense funding for AI: The Senate Appropriations Committee approved its 2020 Defense Appropriations bill, which now awaits Senate consideration. The bill supports the President’s budget request for the Joint Artificial Intelligence Center at $208.8 million. In addition, it provides $83.5 million above the President’s budget on accounts labeled for AI-related Research, Development, Test and Evaluation.
White House holds AI Summit: On September 9th, the White House held the Summit on AI in Government for 175 industry, government and academic experts in AI. The event concluded with three case studies of AI use to improve government operations.
Air Force releases 2019 AI Strategy: On September 12th, the Air Force released its Annex to the DoD AI Strategy issued in 2018. The Annex aligns USAF strategy with that of DoD and focuses on expanding access to AI, preparing an AI workforce and treating data as a strategic asset.
What We’re Reading
Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It, Belfer Center for Science and International Affairs, Harvard Kennedy School (August 2019)
A Tentative Framework for Examining U.S. and Chinese Expenditures for Research and Development on Artificial Intelligence, The Institute for Defense Analyses Science & Technology Policy Institute (September 2019)
In Translation
CSET's translations of significant foreign language documents on AI
Open-Source AI development platforms: Guidance on National New Generation Artificial Intelligence Open Innovation Platform Construction Work: Chinese Ministry of Science and Technology document describing the updated approval process for Chinese AI tech companies’ “open innovation platforms.” This document builds on the 2017 AI Development Plan, which identified open-source platforms as crucial to making China the world leader in AI by 2030.
What’s New at CSET
REPORTS
- “Strengthening the U.S. AI Workforce” by Remco Zwetsloot, Roxanne Heston and Zachary Arnold
- “Immigration Policy and the U.S. AI Sector” by Zachary Arnold, Roxanne Heston, Remco Zwetsloot and Tina Huang
- “China’s Access to Foreign AI Technology” by William C. Hannas and Huey-meei Chang
- C4ISRNET: Jason Matheny was featured in an article discussing the major security threats to AI and how to fund AI security
- Defense One: Lorand Laskai co-authored an op-ed titled “Welcome to the New Phase of US-China Tech Competition”
- China Digital Times: Dahlia Peterson co-wrote a series on Chinese surveillance system Sharper Eyes: Part 1: Surveilling the Surveillers, Part 2: Project Map and Part 3: Shandong to Xinjiang
- NextGov: Jason Matheny was quoted in an article on forthcoming federal AI regulations after speaking at the Politico AI Summit
- Axios: Remco Zwetsloot was quoted in Axios Future discussing immigration and the U.S. AI workforce
- Partnership on AI: Zachary Arnold and Tina Huang contributed to “Visa Laws, Policies, and Practices: Recommendations for Accelerating the Mobility of Global AI/ML Talent”
Events
- September 25: CSET and the Center for Security Studies at Georgetown, Kalaris Intelligence Conference: Artificial Intelligence and National Security
- September 26: National Academies of Sciences, Engineering, and Medicine, Applications of AI in Government and Industry
- September 30: CSIS, China’s AI Innovation Ecosystem, featuring Helen Toner
- October 15: The Jamestown Foundation, Ninth Annual China Defense and Security Conference
What else is going on? Suggest stories, documents to translate & upcoming events here.