policy.ai

Biweekly updates on artificial intelligence, emerging technology and security policy.


China’s emotion recognition system, the DOD AI ethics recommendations and the NSCAI report
Wednesday, November 13, 2019
Worth Knowing

China Expands ‘Emotion Recognition’ AI Despite Expert Skepticism: Emotion recognition systems generated excitement at China’s 2019 Public Security Expo, the Financial Times reported. The technology is being rolled out in Xinjiang as part of crime prediction systems along with facial recognition, gait recognition and eye tracking. The system is meant to predict violent behavior by using AI to identify signs of aggression, nervousness and stress. However, experts have pushed back on such characterizations with studies suggesting that emotion recognition is accurate only 20–30 percent of the time. Despite this, both Chinese companies and U.S. tech giants like Google, Amazon and Microsoft continue to develop emotion recognition systems.
Canada Again Denies Travel Visas to African AI Researchers Attending NeurIPS: For the second year in a row, at least 15 AI researchers and students were denied travel visas to Canada over concerns that they would not return home after attending NeurIPS — a leading AI research conference — and the Black in AI workshop. Last year, Canada denied almost 100 African researchers’ visas for the same reason. ICLR, another top AI conference, will be held in Ethiopia in 2020 in part to avoid similar visa issues.
Reports of U.S. Military’s Extensive Facial Recognition System: The U.S. military maintains an extensive biometrics and facial recognition system with more than 7.4 million identities, according to documents obtained by OneZero. The system, known as the Automated Biometric Information System, stores biometric information on anyone who comes into contact with U.S. military systems abroad, including allied soldiers, with the goal of “denying... adversaries anonymity.” ABIS currently links to state and local law enforcement biometric systems and may eventually integrate with the Department of Homeland Security’s biometric database. While much is still unknown about the system, the use of facial recognition raises privacy and bias concerns given facial recognition’s low accuracy rates for women and minorities.
DeepMind Reaches Grandmaster Level in StarCraft II: DeepMind announced that AlphaStar, its AI designed to play StarCraft II, recently outperformed 99.8% of human players. Unlike the previous version of AlphaStar announced in January, changes were made to level the playing field with humans: AlphaStar now “sees” the board through a camera, is limited to acting at human speed and is able to play as any species. The results are comparable to those of OpenAI’s Dota 2 artificial agent, showing the power of reinforcement learning as a training mechanism for complex gameplay environments.
Government Updates

DIB Approves AI Ethics Principles: After a series of roundtables and discussions, the Defense Innovation Board voted unanimously on October 31st to recommend AI Ethics Principles for the DOD, accompanied by a supporting document. The principles, intended for both combat and non-combat systems, state that AI should be responsible, equitable, traceable, reliable and governable. The board also recommended a series of next steps, including establishing a DOD-wide steering committee and formalizing the principles within the DOD.

NSCAI Submits Interim Report to Congress: On November 4th, the bipartisan National Security Commission on Artificial Intelligence released its interim report. The report assesses the challenges and opportunities AI poses for national security and notes concerning trendlines relative to China. The Commission suggests accelerating public investment in AI R&D, applying AI to national security, training and recruiting AI talent, protecting the U.S. technological advantage and encouraging global cooperation. Full recommendations will be made to Congress in a later report.

NSCAI Holds First-Ever Conference: The National Security Commission on Artificial Intelligence held a conference on the future of AI and national security on November 5th. For more information, check out the videos of the second half of the conference, capped by Jason Matheny interviewing White House CTO Michael Kratsios.

What We’re Reading

Report: Report of Estonia’s AI Taskforce, published by the Republic of Estonia Government Office and the Republic of Estonia Ministry of Economic Affairs and Communications (May 2019)

Report: The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Volume II, East Asian Perspectives, published by the Stockholm International Peace Research Institute and edited by Lora Saalman (October 2019)

Report: Partial Disengagement: A New U.S. Strategy for Economic Competition with China, by Charles W. Boustany and Aaron L. Friedberg for the National Bureau of Asian Research (November 2019)

In Translation
CSET's translations of significant foreign language documents on AI

Taiwan’s Sensitive Science and Technology Protection Bill: General Notes on the Sensitive Science and Technology Protection Bill: Translation of a bill proposed in Taiwan’s parliament that provides for up to seven years in prison or a $1 million fine for leaks of sensitive technology. The bill aims to counter Chinese industrial espionage and reassure U.S. firms that they can conduct R&D in Taiwan without fear of their proprietary technology being disclosed to Chinese competitors.

What else is going on?
Suggest stories, documents to translate & upcoming events here.
Russia’s AI strategy translated, Chinese AI supply chains and the Deepfake Report Act
Wednesday, October 30, 2019
Worth Knowing

Chinese companies look to secure supply chains after being added to Entity List: Chinese AI companies are seeking to adapt their hardware supply chains after being put on the Entity List, which prohibits them from purchasing certain U.S. technologies. The CEO of Chinese AI startup Megvii says the company will be restricted in its ability to purchase x86 servers, GPUs and CPUs, but it will move forward with its IPO as planned. Also of note: The New York Times reports that in light of rising tensions with China, the Pentagon has been meeting with tech companies to assess U.S. dependence on Taiwanese chips, which are crucial for military applications.
Researchers use machine learning to develop a new metamaterial: A research team at Delft University of Technology created a new super-compressible material with the help of machine learning. While testing new materials usually requires extensive trial and error, the use of AI allowed for experimentation solely via simulation, significantly accelerating the process. Lead author Miguel Bessa says while the new material is exciting, the role of machine learning in its development is the real accomplishment. The researchers also released their code to facilitate broader use of ML in future materials design.
Inaugural Turing AI Fellows class named as part of UK talent push: The Alan Turing Institute, the UK Office for AI, and UK Research and Innovation have announced the appointment of five Turing AI Fellows, senior AI researchers selected to receive significant funding for five years. The institutions also published a call for applications to the Turing AI Acceleration Fellowship and Turing AI World-Leading Researcher Fellowships, which together will receive 37.5 million pounds ($48.2 million) in funding. These initiatives are part of a broader UK government strategy to attract and retain top AI talent.
Data labeling market expected to grow dramatically: The data labeling industry is growing, with workers in developing countries generating the massive quantities of human-labeled data needed to train AI. The market for data labeling was estimated at $150 million in 2018 and is predicted to grow to $1 billion by 2023. Labeled data is essential for supervised learning, and the growing industry allows tech companies to outsource this work rather than do it in-house. While changes in AI training methods could eventually make the industry less essential, it’s a necessity for now.
Government Updates

Trump reestablishes science and technology advisory council: On October 22nd, President Trump issued an Executive Order reconstituting the President’s Council of Advisors on Science and Technology and appointed the first seven members of an eventual 17. PCAST will advise the White House on science and technology and respond to requests for analysis or advice. Several of the advisors have backgrounds in artificial intelligence, which the Executive Order specifically mentioned as a key emerging technology along with quantum computing.

Senate passes Deepfake Report Act: The Senate passed the Deepfake Report Act by unanimous consent on October 24th. The bill would require the Secretary of Homeland Security to publish an annual report on the state of deepfake technology. The report is to include an assessment of technologies, how deepfakes could be used by foreign governments and non-state actors, methods for deepfake detection and progress on technological countermeasures. A companion bill was introduced in the House in June, but has not yet been brought up for a vote.

U.S. Army announces plans to integrate and adopt AI: Earlier this month, the Army provided a series of updates on its use of AI as part of its strategy for seamless AI integration. Among the developments: efforts to create an AI assistant for tank warfare known as Project Quarterback, an AI system designed to spot targets in reconnaissance photos which will be tested next year in Defender-Europe 20, and plans to gather more data for AI by equipping RQ-7Bv2 Shadow drones with sensor suites and fielding 200,000 IVAS soldier goggles.

Hurd and Kelly announce new AI initiative with Bipartisan Policy Center: In partnership with the Bipartisan Policy Center, Reps. Hurd and Kelly will develop a national AI strategy aimed at guiding Congress and the executive branch. They plan to convene public and private sector experts to weigh in on the challenges and opportunities of crafting policy on artificial intelligence, concluding with a federal AI framework. Hurd and Kelly previously co-authored a white paper on the importance of AI after hosting a series of congressional hearings.

What We’re Reading

Report: Opinion of the Data Ethics Commission, The Data Ethics Commission of the Federal Government of Germany (October 2019)

Book: The Impact of Emerging Technologies on the Law of Armed Conflict, edited by Eric Talbot Jensen and Major Ronald T. P. Alcala (September 2019)

Post: Artificial Intelligence Research Needs Responsible Publication Norms, Rebecca Crootof in Lawfare (October 2019)

In Translation
CSET's translations of significant foreign language documents on AI

Russia’s National AI Strategy: Decree of the President of the Russian Federation on the Development of Artificial Intelligence in the Russian Federation: Russia’s national strategy for the development of artificial intelligence, released in October 2019. The document sets out a number of short-term (to be completed by 2024) and medium-term (by 2030) qualitative goals designed to build Russia into a leading AI power. For more, see analysis by CSET’s Margarita Konaev.  

Blacklisting Chinese AI startups, deepfake detection and NSF funding for AI research
Wednesday, October 16, 2019
Worth Knowing

Trump Administration blacklists top Chinese AI startups for human rights violations: The White House added 28 organizations to the Entity List for human rights abuses in Xinjiang, prohibiting them from buying U.S.-developed technologies. The list named eight tech companies, including Megvii, iFlytek and SenseTime (three of China’s top AI companies), along with Dahua and Hikvision (leading manufacturers of surveillance products).
OpenAI’s AI-powered robot learns to solve a Rubik’s Cube one-handed: OpenAI announced yesterday that it used reinforcement learning to enable a dexterous robotic hand to solve a Rubik’s Cube. While robots have mastered Rubik’s Cubes before, “Dactyl” is unique in that it was trained in a variety of simulated environments and learned to adapt to the real world. The result is a high level of adaptability for a robotic hand, which the researchers believe may be an important step toward general-purpose robots.
Putin signs Russia’s AI strategy: President Vladimir Putin has approved Russia’s national AI strategy through 2030. The strategy, not yet translated into English, centralizes national AI efforts under the Ministry of Economic Development, according to Kommersant. Notably, it defines artificial intelligence as technology that can “simulate human cognitive functions,” including self-learning, and can achieve results comparable to those of “human intellectual activity.”
Google creates a dataset of 3,000 deepfakes to assist with detection efforts: Alphabet’s Jigsaw and Google produced and publicly released the Deep Fake Detection Dataset, with the goal of supporting research efforts to develop automatic deepfake detection. The dataset consists of 1,000 videos altered by four deepfake methods, as well as models to generate new deepfake data. Google’s move comes after Facebook, Microsoft, MIT and others dedicated $10 million to a new deepfake detection dataset and challenge.
Government Updates

NSF creates new program to fund $200M in long-term AI research over six years: The National Science Foundation announced the creation of a new National AI Research Institutes program to advance large-scale, long-term AI research. The program will fund the creation of research institutes at the nexus of government, industry and academia focused on a series of core priorities in AI. In the first year, NSF anticipates disbursing $120 million in grants. Grant proposals are due to NSF by late January 2020.

DOE announces $13M in new funding for AI research projects: The Department of Energy’s Office of Science announced $13 million in funding for five research projects aimed at “improving AI as a tool of scientific investigation and prediction.” The projects fund both universities and DOE national laboratories. This development builds on other recent AI-related activities at the DOE, including the creation of the Artificial Intelligence and Technology Office and funding from ARPA-E for AI-developed tools to reduce nuclear power plant operating expenses.

Keep STEM Talent Act of 2019 introduced in the House: On October 4th, Reps. Bill Foster and Eddie Bernice Johnson introduced the Keep STEM Talent Act of 2019, which would provide permanent-resident status to advanced STEM degree holders. The bill, a companion to one introduced in the Senate by Sen. Richard Durbin, exempts STEM graduates from annual green card caps. Long wait times for green cards were identified as a key problem facing international AI talent in CSET’s recent report on Immigration Policy and the U.S. AI Sector.

What We’re Reading

Article: Chinese Semiconductor Industrial Policy: Past and Present and Prospects for Future Success, John VerWey in the United States International Trade Commission (July and August 2019)

Article: The South Korea-Japan Trade Dispute in Context: Semiconductor Manufacturing, Chemicals, and Concentrated Supply Chains, by Samuel M. Goodman, Dan Kim and John VerWey

Book: The Transpacific Experiment: How China and California Collaborate and Compete for Our Future, Matt Sheehan (August 2019)

Post: Critiquing Carnegie’s AI Surveillance Paper, John Honovich and Charles Rollet at IPVM (September 2019) — a response to a Carnegie paper we included in the previous edition of policy.ai

In Translation
CSET's translations of significant foreign language documents on AI

Municipal action plan for “military-civil fusion” in developing “intelligent technologies”: Tianjin Municipal Action Plan for Military-Civil Fusion Special Projects in Intelligent Technology. While many Chinese local governments have published “military-civil fusion” plans, Tianjin’s is among the most detailed. It provides insight into local efforts to steer the development of emerging technologies in directions that fulfill PLA requirements.

What’s New at CSET

JOB OPENINGS
CSET is hiring! We’re looking to fill the following roles:
  • Data Scientist: Apply machine learning, NLP, predictive modeling, and other advanced computational techniques to evaluate hypotheses and derive data-driven insights. BA in relevant area and 6+ yrs of experience required.
  • Research Project Manager: Coordinate hiring of student research assistants; manage a research pipeline. BA in relevant area and 2+ yrs project management experience required.
  • Research Analyst: Collaborate with Research Fellows to develop and execute research projects. 2+ yrs and BA in relevant area required (MA preferred).
OpenAI experiments with hide-and-seek, California bans facial recognition use by police, and members of Congress seek to revive the OTA
Wednesday, October 2, 2019
Worth Knowing

OpenAI experiment tests how AI might “evolve” through competition: Researchers observed teams of AI agents playing billions of games of hide-and-seek in an attempt to understand emergent behavior. Over time, agents learned to use available tools in increasingly complex ways — including adopting strategies that programmers did not expect. The researchers hope this type of reinforcement learning will allow AI systems to solve increasingly complex problems in the future, but found the number of repetitions required makes it difficult to apply this technique to real-world settings.
California legislature bans facial recognition use by law enforcement: Both houses of the California legislature passed AB-1215, which prohibits law enforcement from “installing, activating, or using any biometric surveillance system in connection with an officer camera” for three years. The bill now goes to Gov. Gavin Newsom; if he signs it, California will become the largest state to ban specific uses of facial recognition.
Kalaris Conference convenes AI and national security experts: U.S. security interests will suffer if the United States doesn’t work with its allies to invest wisely in AI capabilities, leading figures from the intelligence and defense communities said at the Kalaris Intelligence Conference last week. CSET and Georgetown University’s Center for Security Studies co-hosted the annual conference. Among the speakers were Sue Gordon, former Principal Deputy Director of National Intelligence, and Lt. Gen. Jack Shanahan, director of the DoD’s Joint Artificial Intelligence Center.
France releases military AI strategy: The French Ministry of the Armies has published a comprehensive report on military use of AI, building on the strategy laid out by Minister of Armies Florence Parly in April. The document establishes a Coordination Unit for Defense AI and a ministerial AI ethics committee, and it commits 430 million euros ($470 million) to AI research by 2025. The strategy describes the United States and China as global leaders in AI, but outlines a possible role for France if it coordinates with other nations within and outside the EU.
Government Updates

JAIC and GSA announce partnership to expand Pentagon’s use of AI: The General Services Administration and the DoD’s Joint AI Center announced their new partnership through the GSA’s Center of Excellence initiative on September 25th. They aim to expand the Pentagon’s use of AI by accelerating the delivery of AI capabilities and modernizing programmatic and acquisition processes. GSA also hopes the partnership will spur greater AI adoption across government.

Office of Technology Assessment Improvement and Enhancement Act introduced in House and Senate: Reps. Takano and Foster and Sens. Tillis and Hirono introduced the bipartisan Office of Technology Assessment Improvement and Enhancement Act in both houses of Congress on September 19th. It would revamp, rename and improve the OTA — which was defunded in 1995 — to ensure nonpartisan technology assessment is available to Congress. The House FY20 Legislative Branch Appropriations bill includes $6 million to fund the OTA; the Senate bill defers the issue until an ongoing report is complete.

MQ-25, the Navy’s unmanned refueling drone, completes successful test flight: The U.S. Navy announced a successful first test flight for its autonomous refueling drone, the Boeing-developed MQ-25 Stingray. The two-hour flight included autonomous taxi and takeoff. The Navy plans to integrate the MQ-25 into its strike arm by 2024 as the first carrier-launched autonomous unmanned aircraft.

Additional $8 million for NIST AI research included in Senate appropriations bill: The Senate FY20 Commerce, Justice and Science Appropriations bill includes $1.04 billion for the National Institute of Standards and Technology, a $52.5 million increase above the FY19 enacted level. Of the additional funds, $8 million would be allocated to expand NIST’s AI research and measurement efforts, including developing resources to model AI behavior and train, test and compare AI systems.

In Translation
CSET's translations of significant foreign language documents on AI

The emerging technologies of interest to China’s military: Guidelines for Basic Research and Cutting-Edge Technology Projects (2018): Issued by China’s State Administration of Science, Technology and Industry for National Defense, this document identifies several emerging technologies of interest to the Chinese military. SASTIND circulated these guidelines to Chinese universities and research institutes in 2018 to encourage them to apply for grants to conduct basic research in PLA areas of interest.

AI professors leave for industry, Russia drafts AI strategy, and the White House requests $1B for AI R&D
Wednesday, September 18, 2019
Worth Knowing

Study finds flow of AI professors to industry discourages innovation: New research from the University of Rochester shows that after AI professors leave academia for private-sector work, fewer of their students start AI companies. The study, first covered by The New York Times, found that about 10 times as many North American professors left for tech companies in 2018 as did in 2009. The researchers say this trend could eventually hamper AI innovation and the economy.
Preview of Russia’s AI strategy: President Vladimir Putin is reviewing a draft AI strategy that he ordered state-owned Sberbank to prepare, according to DefenseOne. The wide-ranging document covers fundamental investments in AI — including funding for research, ethical and data regulations, and hardware and software developments — as well as specific applications of AI in healthcare and education. The final version is expected next month.
Record number of submitted papers for NeurIPS: The Conference on Neural Information Processing Systems received 6,743 paper submissions this year — up 39% from last year, and double the number of submissions in 2017. Google, MIT and Stanford were the most common institutional affiliations across accepted papers. Since NeurIPS is the largest annual AI conference, its metrics are often used as indicators of continued growth and enthusiasm in the field of machine learning.
Partnership on AI calls for improving immigration policies that affect AI experts: The Partnership on AI, which connects major tech companies with government and NGOs to create collaborative proposals, released a report last week calling for increasing access to visas and other immigration benefits for global AI/ML experts. The report lays out policy recommendations including streamlined visa reviews for highly skilled individuals, AI/ML visa classifications and visa categories for AI/ML students and interns.
Government Updates

White House requests $1B in non-defense AI spending in 2020: The Trump administration submitted a supplemental request for $973.5 million in non-defense AI R&D spending for fiscal year 2020. While this number is higher than previous years, some industry leaders say it’s not enough. Looking ahead, a White House memo listed AI as a priority for the 2021 R&D budget.

Senate Defense Appropriations bill boosts defense funding for AI: The Senate Appropriations Committee approved its 2020 Defense Appropriations bill, which now awaits Senate consideration. The bill supports the President’s budget request for the Joint Artificial Intelligence Center at $208.8 million. In addition, it provides $83.5 million above the President’s budget request across accounts designated for AI-related Research, Development, Test and Evaluation.

White House holds AI Summit: On September 9th, the White House held the Summit on AI in Government for 175 industry, government and academic experts in AI. The event concluded with three case studies of AI use to improve government operations.

Air Force releases 2019 AI Strategy: On September 12th, the Air Force released its Annex to the DoD AI Strategy issued in 2018. The Annex aligns USAF strategy with that of DoD and focuses on expanding access to AI, preparing an AI workforce and treating data as a strategic asset.

What We’re Reading

Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It, Belfer Center for Science and International Affairs, Harvard Kennedy School (August 2019)

A Tentative Framework for Examining U.S. and Chinese Expenditures for Research and Development on Artificial Intelligence, The Institute for Defense Analyses Science & Technology Policy Institute (September 2019)

In Translation
CSET's translations of significant foreign language documents on AI

Open-Source AI development platforms: Guidance on National New Generation Artificial Intelligence Open Innovation Platform Construction Work: Chinese Ministry of Science and Technology document describing the updated approval process for Chinese AI tech companies’ “open innovation platforms.” This document builds on the 2017 AI Development Plan, which identified open-source platforms as crucial to making China the world leader in AI by 2030.
