Wednesday October 30, 2019

Worth Knowing

Chinese companies look to secure supply chains after being added to Entity List: Chinese AI companies are seeking to adapt their hardware supply chains after being put on the Entity List, which restricts their access to certain U.S. technologies. The CEO of Chinese AI startup Megvii says the company will be limited in its ability to purchase x86 servers, GPUs and CPUs, but will move forward with its IPO as planned. Also of note: The New York Times reports that in light of rising tensions with China, the Pentagon has been meeting with tech companies to assess U.S. dependence on Taiwanese chips, which are crucial for military applications.

Researchers use machine learning to develop a new metamaterial: A research team at Delft University of Technology created a new super-compressible material with the help of machine learning. While testing new materials usually requires extensive trial and error, machine learning allowed the team to experiment solely via simulation, significantly accelerating the process. Lead author Miguel Bessa says that while the new material is exciting, the role of machine learning in its development is the real accomplishment. The researchers also released their code to facilitate broader use of ML in future materials design.

Inaugural Turing AI Fellows class named as part of UK talent push: The Alan Turing Institute, the UK Office for AI, and UK Research and Innovation have announced the appointment of five Turing AI Fellows, senior AI researchers selected to receive significant funding over five years. The institutions also published a call for applications for the Turing AI Acceleration Fellowships and Turing AI World-Leading Researcher Fellowships, which together will receive 37.5 million pounds ($48.2 million) in funding. These initiatives are part of a broader UK government strategy to attract and retain top AI talent.

Data labeling market expected to grow dramatically: The data labeling industry is growing, with workers in developing countries generating the massive quantities of human-labeled data needed to train AI. The market for data labeling was estimated at $150 million in 2018 and is predicted to reach $1 billion by 2023. Labeled data is essential for supervised learning, and the growing industry allows tech companies to outsource this work rather than do it in-house. While changes in AI training methods could eventually make the industry less essential, it is a necessity for now.
Government Updates

Trump reestablishes science and technology advisory council: On October 22nd, President Trump issued an Executive Order reconstituting the President’s Council of Advisors on Science and Technology and appointed the first seven members of an eventual 17. PCAST will advise the White House on science and technology and respond to requests for analysis or advice. Several of the advisors have backgrounds in artificial intelligence, which the Executive Order specifically mentioned as a key emerging technology along with quantum computing.

Senate passes Deepfake Report Act: The Senate passed the Deepfake Report Act by unanimous consent on October 24th. The bill would require the Secretary of Homeland Security to publish an annual report on the state of deepfake technology. The report is to include an assessment of the underlying technology, how deepfakes could be used by foreign governments and non-state actors, methods for deepfake detection, and progress on technological countermeasures. A companion bill was introduced in the House in June but has not yet been brought up for a vote.

U.S. Army announces plans to integrate and adopt AI: Earlier this month, the Army provided a series of updates on its use of AI as part of its strategy for seamless AI integration. Among the developments: efforts to create an AI assistant for tank warfare known as Project Quarterback; an AI system designed to spot targets in reconnaissance photos, which will be tested next year in Defender-Europe 20; and plans to gather more data for AI by equipping RQ-7Bv2 Shadow drones with sensor suites and fielding 200,000 IVAS soldier goggles.

Hurd and Kelly announce new AI initiative with Bipartisan Policy Center: In partnership with the Bipartisan Policy Center, Reps. Hurd and Kelly will develop a national AI strategy aimed at guiding Congress and the executive branch. They plan to convene public and private sector experts to weigh in on the challenges and opportunities of crafting artificial intelligence policy, culminating in a federal AI framework. Hurd and Kelly previously co-authored a white paper on the importance of AI after hosting a series of congressional hearings.

What We’re Reading

Report: Opinion of the Data Ethics Commission, The Data Ethics Commission of the Federal Government of Germany (October 2019)

Book: The Impact of Emerging Technologies on the Law of Armed Conflict, edited by Eric Talbot Jensen and Major Ronald T. P. Alcala (September 2019)

Post: Artificial Intelligence Research Needs Responsible Publication Norms, Rebecca Crootof in Lawfare (October 2019)

In Translation
CSET's translations of significant foreign language documents on AI

Russia’s National AI Strategy: Decree of the President of the Russian Federation on the Development of Artificial Intelligence in the Russian Federation: Russia’s national strategy for the development of artificial intelligence, released in October 2019. The document sets out a number of short-term (to be completed by 2024) and medium-term (by 2030) qualitative goals designed to build Russia into a leading AI power. For more, see analysis by CSET’s Margarita Konaev.

What’s New at CSET


What else is going on?
Suggest stories, documents to translate & upcoming events here.
