As CSET continues to grow, we want to learn more about our readers and how we can better serve their interests. If you’re interested in helping, you can do so by filling out a short, anonymous survey. Thank you for your support of CSET and policy.ai!
We are also looking to fill multiple positions!
Intel Debuts Second Generation Brain-Like Chip: Last week, Intel announced the second generation of its neuromorphic research chip, dubbed Loihi 2. Neuromorphic chips take inspiration from brains, mimicking the neurons and synapses found in their biological equivalent. In theory, they could achieve energy efficiency and levels of adaptability unparalleled by chips based on the von Neumann architecture (the architecture for most modern chips). They also hold great promise for AI, with the potential to train and run more flexible and adaptive AI programs. Intel’s first Loihi chip, introduced in 2018, proved capable of detecting scents and powering a touch-sensing system, for example. The new iteration — built using Intel’s new Intel 4 process (the first Intel process node using extreme ultraviolet lithography) — represents a major step up from the previous generation in terms of power (each core has 8 times more neurons and synapses) and connectivity. In conjunction with the release of Loihi 2, Intel also introduced Lava, an open-source, modular framework for developing “neuro-inspired” applications.
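The spiking neurons these chips mimic can be illustrated with a toy leaky integrate-and-fire model — a minimal sketch of the general concept, not Loihi's actual implementation, and all parameter values here are made up for illustration:

```python
# Toy leaky integrate-and-fire (LIF) neuron -- the style of spiking model
# that neuromorphic chips implement in silicon. Parameters are illustrative.
def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Integrate input over time, leak charge each step, and emit a
    spike (1) whenever the membrane potential crosses the threshold."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)
            potential = reset  # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A steady input drives periodic spiking as charge accumulates.
print(simulate_lif([0.4] * 10))  # -> [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Because neurons only communicate when they spike, computation is sparse and event-driven — the source of the energy-efficiency claims for this architecture.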
PRC Regulatory Moves Set the Stage for AI and Computing Development: The steady drumbeat of regulatory news out of China continues, with a handful of announcements likely to impact AI development:
- Last week, China unveiled its first ethical guidelines for AI. The guidelines (available here in Chinese) outline basic principles for AI development, emphasizing the importance of safeguarding against abuses and protecting users. While the new guidelines are the first to deal explicitly with ethics specifications, similar efforts in the past include the Beijing AI Principles and the Ministry of Science and Technology’s eight Governance Principles for a New Generation of Artificial Intelligence, both issued in 2019.
- The Cyberspace Administration of China announced that over the course of the next three years, it will set up a governance structure for “algorithm security.” The announcement (translation from China Law Translate available here) called for regulations that would ensure the “healthy, orderly, and prosperous” development of algorithms while supporting Beijing’s ambitions to “preserve ideological security” and become a “cyberpower.”
- Beijing also banned cryptocurrency mining, striking a major blow to an industry that, until recently, had been largely concentrated in China. While cryptocurrency mining has been pegged as one of the major culprits behind the global chip shortage, observers say the ban is unlikely to put much of a dent in the global chip supply — seeing the writing on the wall, many mining operations had already moved out of China in recent months.
DeepMind Predicts the Weather: Researchers from DeepMind created an AI model that beat conventional near-term weather forecasting models in the vast majority of cases, according to an article they published in the journal Nature. The ability to forecast weather up to two hours in the future, known as “nowcasting,” is critically important for many businesses and government services, such as emergency responders and air traffic controllers. DeepMind trained a deep generative model on a dataset of UK radar observations collected between 2016 and 2018. Judged against other popular methods, a group of 56 expert meteorologists rated the DeepMind model first for accuracy and usefulness in 89 percent of cases. Despite DeepMind’s promising results, the Alphabet subsidiary says its model is still a ways away from real-world deployment.
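The nowcasting task itself can be sketched simply: given a short history of radar frames, predict the next frames and score the prediction against what actually happened. DeepMind used a deep generative model; here a trivial “persistence” baseline (repeat the last observed frame) stands in to illustrate the setup, with synthetic data in place of real radar:

```python
import numpy as np

def persistence_forecast(history, horizon):
    """Predict `horizon` future frames by repeating the last observed frame."""
    last_frame = history[-1]
    return np.stack([last_frame] * horizon)

rng = np.random.default_rng(0)
history = rng.random((4, 8, 8))   # 4 past "radar" frames on an 8x8 grid
future = rng.random((2, 8, 8))    # 2 frames of ground truth

forecast = persistence_forecast(history, horizon=2)
mse = float(np.mean((forecast - future) ** 2))  # lower is better
print(forecast.shape, round(mse, 3))
```

Any candidate model can be slotted in place of `persistence_forecast` and compared on the same held-out frames — which is essentially what the expert meteorologists did, albeit by human judgment rather than a single error metric.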
U.S. and EU Hold Inaugural Tech Talks: The United States and European Union agreed to coordinate on issues related to emerging technologies and global trade as part of the inaugural meeting of the U.S.-EU Trade and Technology Council (TTC), held in Pittsburgh last week. In a joint statement, U.S. and EU officials laid out plans to ensure that AI systems are “innovative and trustworthy and … respect universal human rights and shared democratic values,” to focus on “rebalancing” the global semiconductor supply chain, and to coordinate on investment screening and export controls for dual-use technologies, among other things. The agreement’s section on semiconductors appears to have been watered down from earlier drafts following objections from French officials, which observers say was part of the broader fallout from the AUKUS deal (for more on that diplomatic row, see our coverage last month). The TTC established ten working groups to focus on relevant issues ahead of its next meeting, expected next year, though no date has been set.
FTC Chair Lina Khan Details Vision for the Agency: In a memo to staff, Federal Trade Commission Chair Lina Khan outlined her priorities for the agency. Khan, who was confirmed by the Senate in June, rose to prominence in 2017 after she penned an article for the Yale Law Journal, Amazon’s Antitrust Paradox, which argued that the prevailing antitrust framework was insufficient to address Amazon’s market power. She has since carved out a role as a leading antitrust advocate, and her nomination earlier this year was applauded by Big Tech critics on both sides of the political aisle. In her memo, Khan outlined a number of principles and priorities that seem likely to impact AI development. She wrote that the agency should be “forward-looking” in addressing problems with emerging technologies, should take “swift action” before unfair or illegal practices can take root, and took aim at industry consolidation and anti-competitive hiring practices. The White House, for its part, seems eager to staff the FTC with more commissioners in the Khan mold — President Biden nominated privacy expert and Georgetown law school professor Alvaro Bedoya for a seat on the commission last month.
Facebook Whistleblower Testifies Before Senate: Frances Haugen, a former Facebook employee turned whistleblower, testified before the Senate Commerce Subcommittee on Consumer Protection on Tuesday, painting a worrying picture of the effect of Facebook’s products — and specifically its algorithm-based recommendation systems — on mental health, misinformation, political violence, and national security. Haugen, who worked as a lead product manager for the Civic Misinformation and Counter-Espionage teams, leaked internal documents to The Wall Street Journal that, together with her whistleblower complaints to the SEC, reportedly show that the company was aware of the harmful effects associated with its products. In her testimony, Haugen said that Facebook’s failure to address these problems had helped provoke violence around the world, including in Myanmar, Ethiopia, and the United States. While Haugen is far from the first to testify on the potential ills of social media and engagement-based algorithms, observers say her testimony appears to have struck a galvanizing chord. Other Congressional panels have already contacted Haugen to discuss national security implications, and a bipartisan group of senators called for subpoenas of Facebook’s internal records to aid in investigating her claims.
SenseTime (Maybe) Avoids U.S. Sanctions: One of China’s largest AI companies, SenseTime, may have avoided significant U.S. sanctions, at least according to its lawyers. The U.S. Commerce Department added SenseTime to its Entity List in 2019 for the company’s role in “China’s campaign of repression” in Xinjiang, severely limiting its ability to work with U.S. firms. But as Axios originally reported, the company believes a small 2020 update to its designation means that only its Beijing subsidiary, Beijing SenseTime, is covered by the restrictions, leaving the parent company relatively unconstrained. If correct, that interpretation could prove a boon for the company’s upcoming Hong Kong IPO, which is expected to raise up to $2 billion. The Commerce Department, for its part, has yet to confirm SenseTime’s interpretation, only saying that it “continually reviews available information” regarding Entity List inclusion.
CSET’s translations of significant foreign language documents on AI
PRC AI White Paper: Artificial Intelligence Standardization White Paper (2021 Edition). This white paper, issued by a PRC state-run think tank, provides an extensive overview of Chinese government standards-setting related to AI. Appendices list all of China’s current and planned AI standards and include case studies of recent PRC applications of AI in fields such as mass surveillance and smart logistics.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
Please apply or share the roles below with candidates in your network:
- AI Research Subgrant (AIRS) Program Director: CSET’s AIRS program will promote the exploration of foundational technical topics that relate to the potential national security implications of AI over the long term via research subgrants. The Director of AIRS will manage all technical, programmatic, and financial aspects of the new AIRS program.
- Research Fellow – Cyber/AI: CSET’s CyberAI project is currently seeking Research Fellow candidates to focus on machine learning (ML) applications for cybersecurity to assess their potential and identify recommendations for policymakers (background in ML programming or cybersecurity highly desired). Submit your application by October 15.
- Senior Fellow: CSET’s Senior Fellows provide mentorship and intellectual leadership; shape and lead lines of inquiry and research projects aligned to our research priorities; and facilitate engagements with government, military, academic, and industry leaders.
- Georgetown’s Walsh School of Foreign Service, the home institution for CSET, is hiring for a Professor of the Practice in Security Studies and Director of External Education and Outreach: This three-year, non-tenure-line faculty position in the Security Studies Program will have teaching and administrative responsibilities. The candidate would teach four courses a year and oversee SSP’s external education and outreach activities. A Ph.D. with a specialization in a security-related area is preferred. The start date for this position is January 1, 2022, though flexibility on start date is possible. Review of applications will begin on October 22, 2021 and continue until the position is filled.
- Gracias Family Chair in Security and Emerging Technology: This non-tenure-track, rank-open position will have teaching and administrative responsibilities in the Security Studies Program, with potential for affiliations with CSET and the Science, Technology and International Affairs Program. The successful candidate will have a record of professional or teaching experience focused on security and emerging technology, with a particular focus on AI and its implications for national and international security. Review of applications will begin October 13 and continue until the position is filled.
What’s New at CSET
- CSET and the MITRE Corporation: The DOD’s Hidden Artificial Intelligence Workforce: Leveraging AI Talent at the U.S. Department of Defense by Diana Gehlhaus, Ron Hodge, Luke Koslosky, Kayla Goode and Jonathan Rotner
- CSET: Data Snapshot: Map of Science Updates and User Interface Launch: What’s New? by Autumn Toney
- Brookings: How China harnesses data fusion to make sense of surveillance data by Dahlia Peterson
- EETimes Weekly Briefing Podcast: Building a Framework to Trust AI with Helen Toner
- CSET: Formal Response: Recommendations for the National AI Research Resource Task Force
Foretell has launched a new project that combines expert and crowd judgment. You can read more about the experts’ views, including how they think trends like China’s military aggression, political polarization, and the strength of the tech sector affect the DOD-Silicon Valley relationship. See all 20 forecast questions associated with this project here.
In the News
- Breaking Defense: Brad D. Williams recapped our September 16 event, Can AI Write Disinformation?, during which Andrew Lohn, Katerina Sedova and Micah Musser of CSET spoke with OpenAI’s Girish Sastry about GPT-3’s potential to turbocharge misinformation.
- University World News: For an article about the China Initiative, Nathan M. Greenfield reached out to CSET’s Emily Weinstein to discuss research collaboration between U.S. and Chinese scientists.
What We’re Reading
Article: Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment, Corinna Cortes and Neil D. Lawrence, arXiv (September 2021)
Article: U.S.-China tensions knock 96% off bilateral tech investment, Cheng Ting-Fang and Lauly Li, Nikkei Asia (September 2021)
Article: Policing of foreign tech investment in the US is broken. Here’s how to fix it., Stephen Heifetz, Protocol (October 2021)
- October 7: Politico’s Defense Forum: Redefining American Power in a New World, featuring Anna Puglisi
- October 12: CSET Webinar, Collaborative S&T Development: Creating a NATO Decision Advantage in AI, featuring NATO Chief Scientist Bryan Wells and CSET’s Margarita Konaev
What else is going on? Suggest stories, documents to translate & upcoming events here.