As CSET continues to grow, we want to learn more about our readers and how we can better serve their interests. If you’re interested in helping, you can do so by filling out a short, anonymous survey. Thank you for your support of CSET and policy.ai!
We are also looking to fill multiple positions!
Israel Reportedly Used an AI-Equipped, Remotely Operated Gun to Kill Iranian Nuclear Scientist: Israeli intelligence agents used a remote-controlled, AI-enabled machine gun to kill Iran’s top nuclear scientist, Mohsen Fakhrizadeh, last November, according to a New York Times story published this past weekend. While the use of remote-controlled weapons is not new, the Israeli system used AI to correct for more than a second and a half of input delay, allowing the system’s operator to fire the gun at a moving target while stationed more than 1,000 miles away. The Times’ story clarifies months of conflicting reports — soon after the attack, some Iranian officials and media sources pointed to the use of an “advanced electronic tool” connected to a “satellite device,” but observers had been skeptical, especially because other sources described a fierce gunfight and captured attackers. Observers say the implications of the now-confirmed AI-enabled system could be significant: with assassins able to kill targets without putting themselves in harm’s way, such attacks may become much easier to carry out and, therefore, more frequent.
UN Human Rights Chief Sounds Alarm on AI: Last week, the United Nations High Commissioner for Human Rights, Michelle Bachelet, called for a moratorium on the use of AI that puts human rights at risk until “guardrails” can be put in place. Her comments coincided with the release of a UN report that painted a grim picture of how government and private use of AI can undermine human rights, especially the right to privacy. Among its recommendations, the report urged states to ban the use of biometric surveillance in public spaces — including facial recognition systems — until data protection and effective standards to mitigate bias and accuracy problems can be put in place. While Bachelet does not have regulatory power, her comments and the report’s recommendations closely mirror many of the ideas behind the European Union’s AI Act, proposed earlier this year.
UK Publishes AI Strategy: Yesterday, the United Kingdom published its National Artificial Intelligence Strategy, which aims to position the country for successful AI development over the coming decade. Though light on specific policy proposals or funding recommendations when compared with the United States’ hulking NSCAI report, the UK’s strategy lays out a handful of key goals for the next year, including: conducting a review of the UK’s compute needs, changing the country’s visa rules to attract AI talent, reviewing the UK’s semiconductor supply chain, and publishing a “Defence AI Strategy.” As Melissa Heikkilä noted in Politico, the plan’s emphasis on a “pro-innovation regulatory environment” jibes with recent moves the UK government has made, such as the plan to reform data protection rules announced earlier this month.
Ethics Panels & Internal Studies — Tech Giants Grapple with AI’s Effects: A handful of recent reports offer a glimpse into the internal battles of several major U.S. tech companies — Google, Facebook, IBM and Microsoft — as they grapple with the negative impacts of their AI and algorithmic products:
- Reuters reported that Google, Microsoft and IBM significantly altered — or discontinued entirely — major projects after internal ethics boards raised objections about their potential impacts. Google declined a client’s request for an AI-powered credit scoring tool, while IBM nixed a facial recognition system capable of identifying fevers and face masks. Microsoft’s Sensitive Uses panel, meanwhile, placed limits on a voice emulation system over concerns about its potential use in deepfakes.
- Leaked internal documents from Facebook paint a worrying picture of its platforms’ effects on political polarization and users’ mental health. While the company has publicly downplayed the negative effects of its products on teenage users, documents reviewed by The Wall Street Journal — including a number of internal studies — showed that Facebook has repeatedly found that its products (particularly Instagram) are linked to negative mental health outcomes, especially in teenage girls. A 2019 internal report obtained by MIT Technology Review, meanwhile, showed that Facebook’s algorithm-powered content-recommendation engine had amplified content from Eastern European troll farms — with possible ties to Russia’s Internet Research Agency — reaching 140 million U.S. Facebook users per month, even though the majority of those users were not subscribed to the troll farms’ pages. U.S. lawmakers have been quick to seize on the news — the Wall Street Journal series was a major talking point at a Senate antitrust hearing on Tuesday.
- More: DeepMind tells Google it has no idea how to make AI less toxic
U.S. Strikes Partnership Agreement with Australia and UK: Last week, the United States, the United Kingdom and Australia announced a new trilateral partnership known as AUKUS with important implications for Indo-Pacific geopolitics and cooperation on emerging tech. While much of the attention — both positive and negative — has centered on the plan to help Australia build its own nuclear-powered submarines, the partnership is also set to include cooperation on a number of critical technologies, including AI, cybersecurity and quantum technologies. Though none of the three countries’ leaders mentioned China in their remarks, and U.S. administration officials said the partnership was not aimed at countering any particular country, both observers and PRC officials were quick to frame the agreement as a move against China. AUKUS would be the latest in a string of joint efforts to counter China, including the U.S.-EU Trade and Technology Council — an initiative announced earlier this year that aims to reduce trade barriers, align regulatory standards and counter the PRC’s growing technological influence — which will be holding its inaugural meeting in Pittsburgh next week (though Politico reports the meeting is in danger of being cancelled, having been swept up in the fallout over the AUKUS deal).
Commerce Department Sets Up AI Advisory Panel: Earlier this month, Secretary of Commerce Gina Raimondo announced the creation of the National Artificial Intelligence Advisory Committee — a panel of experts that will advise the president and the National AI Initiative Office on issues related to AI. Both the NAIAC and the NAIIO were mandated by the National AI Initiative Act of 2020, which passed as part of the FY 2021 NDAA. The nine-member committee will provide recommendations and advice related to U.S. AI competitiveness, the state of AI research, AI ethics and bias, the makeup of the AI workforce and more, as well as offer feedback on the implementation of the National AI Initiative. The law requires that board members represent a variety of backgrounds and industries, including academic institutions, business, and civil rights and disability organizations. NIST says it has already received more than 65 nominations for the NAIAC and will continue to accept nominations until October 25.
Navy Launches Missile from Unmanned Ship: The U.S. military successfully launched an SM-6 missile from an unmanned surface vehicle (USV), an important step toward its goal of incorporating unmanned ships into the Navy’s fleet. The ship involved in the test, Ranger, is one of two USVs in DOD’s Ghost Fleet Overlord program — a project run by the Pentagon’s Strategic Capabilities Office, in coordination with the Navy, to prototype and test USVs. Last year, Ranger successfully sailed more than 4,700 nautical miles from Alabama to California, a trip that included a Panama Canal transit. According to the Pentagon, the ship was in autonomous mode for 97 percent of that trip. After Phase II of Ghost Fleet Overlord concludes later this year, both of the program’s USVs will transfer to the Navy.
CSET’s translations of significant foreign language documents on AI
PRC Scientific Literacy Plan: State Council Notice on the Publication of the Outline of the Nationwide Scientific Literacy Action Plan. This document is China’s updated plan to improve the “scientific literacy” of its population. The plan sets the themes and priorities of China’s scientific literacy and science popularization efforts through 2035. The document focuses on increasing the scientific literacy of five populations in particular: youth, rural residents, industrial workers, the elderly and officials.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
Please apply or share the roles below with candidates in your network:
- AI Research Subgrant (AIRS) Program Director: CSET’s AIRS program will promote the exploration of foundational technical topics that relate to the potential national security implications of AI over the long term via research subgrants. The Director of AIRS will manage all technical, programmatic, and financial aspects of the new AIRS program.
- Research Fellow – Cyber/AI: CSET’s CyberAI project is currently seeking Research Fellow candidates to focus on machine learning (ML) applications for cybersecurity, assessing their potential and identifying recommendations for policymakers (background in ML programming or cybersecurity highly desired). Submit your application by October 1.
- Senior Fellow: CSET’s Senior Fellows provide mentorship and intellectual leadership; shape and lead lines of inquiry and research projects aligned to our research priorities; and facilitate engagements with government, military, academic, and industry leaders.
- Georgetown’s Walsh School of Foreign Service, the home institution for CSET, is hiring for a Professor of the Practice in Security Studies and Director of External Education and Outreach: This three-year, non-tenure-line faculty position in the Security Studies Program will carry teaching and administrative responsibilities. The candidate would teach four courses a year and oversee SSP’s external education and outreach activities. A Ph.D. with a specialization in a security-related area is preferred. The start date for this position is January 1, 2022, though flexibility on start date is possible. Review of applications will begin on October 22, 2021 and continue until the position is filled.
What’s New at CSET
- Education in China and the United States: A Comparative System Overview by Dahlia Peterson, Kayla Goode and Diana Gehlhaus
- AI Education in China and the United States: A Comparative Assessment by Dahlia Peterson, Kayla Goode and Diana Gehlhaus
- From Cold War Sanctions to Weaponized Interdependence: An Annotated Bibliography on Competition and Control over Emerging Technologies by Adam Kline and Tim Hwang
- Robot Hacking Games: China’s Competitions to Automate the Software Vulnerability Lifecycle by Dakota Cary
- CSET: CSET Legislation Tracker by Daniel Hague and Jennifer Melot
- CSET: Data Snapshot: Concentrations of AI-Related Topics in Research: Robotics by Sara Abdulla
Foretell has launched a new project that combines expert and crowd judgment. You can read more about the experts’ views, including how they think trends like China’s military aggression, political polarization, and the strength of the tech sector affect the DOD-Silicon Valley relationship. See all 20 forecast questions associated with this project here.
- On September 16, the CSET Webinar Can AI Write Disinformation? featured a conversation between OpenAI’s Girish Sastry and CSET researchers Andrew Lohn, Katerina Sedova and Micah Musser. They discussed the potential for OpenAI’s GPT-3 to be used to compose and spread disinformation, based on their report Truth, Lies, and Automation: How Language Models Could Change Disinformation.
- FedScoop: For an article about the U.S. Air Force’s efforts to increase collaboration with private industry, FedScoop’s Jackson Barnett cited the 2020 CSET brief, “Cool Projects” or “Expanding the Efficiency of the Murderous American War Machine?”: AI Professionals’ Views on Working With the Department of Defense by Catherine Aiken, Rebecca Kagan and Michael Page.
- National Defense Magazine: Yasmin Tadjdeh spoke to CSET Director of Strategy Helen Toner for a story recapping her July brief, co-authored with Zachary Arnold, AI Accidents: An Emerging Threat — What Could Happen and What to Do.
- ChinAI: In his Substack newsletter, Jeffrey Ding anointed Dahlia Peterson, Kayla Goode and Diana Gehlhaus’s issue brief, AI Education in China and the United States: A Comparative Assessment, his “Must-read” piece of the week.
- National Journal: Research Analyst Emily Weinstein spoke to National Journal for an article about the tensions surrounding recent research security proposals meant to counter Chinese economic espionage.
- University World News: Yojana Sharma reached out to CSET Research Analyst Jack Corrigan for an item about the August brief China is Fast Outpacing U.S. STEM PhD Growth, which Corrigan wrote with Remco Zwetsloot, Emily Weinstein, Dahlia Peterson, Diana Gehlhaus and Ryan Fedasiuk.
- Chemistry World: Senior Fellow Anna Puglisi spoke to Chemistry World’s Angeli Mehta for an article about the impact of U.S. export controls on Chinese researchers.
What We’re Reading
Report: Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report, Michael L. Littman et al., Stanford University (September 2021)
Report: The geography of AI: Which cities will drive the artificial intelligence revolution?, Mark Muro and Sifan Liu, Brookings (September 2021)
Book: AI 2041: Ten Visions for Our Future, Kai-Fu Lee and Chen Qiufan (September 2021)
- September 24: Georgetown Institute for Technology Law and Policy and the Yale Information Society Project, AI Governance: Classifying AI Systems and Understanding AI Accidents, featuring Catherine Aiken and Helen Toner
- September 30–October 29: Day One Project, Fall Virtual Lunch Series on Industrial Policy
- October 6: R Street, Securing the States: From Security to Resiliency, featuring John Bansemer
- October 12: CSET Webinar, Collaborative S&T Development: Creating a NATO Decision Advantage in AI, featuring NATO Chief Scientist Bryan Wells and CSET’s Margarita Konaev
What else is going on? Suggest stories, documents to translate & upcoming events here.