Work at CSET
We’re hiring! If you’re interested in joining our team, check out the positions in the “Job Openings” section below or consult our careers page.
Worth Knowing
A Language Model Trained to Mimic 4chan Might Portend AI’s Grim Future: A machine learning researcher trained a language model on three and a half years’ worth of 4chan posts to create what he dubbed “the most horrible model on the Internet,” raising concerns about the public availability of language models and sparking debate about their ethical use. Yannic Kilcher, a Swiss ML expert who covers AI and ML advances on his popular YouTube channel, fine-tuned an existing open-source language model — EleutherAI’s GPT-J-6B — using a dataset of more than 130 million posts from 4chan’s “Politically Incorrect” board, an online forum with a longstanding reputation for toxicity and offensiveness (a generic sketch of this kind of fine-tuning appears after the links below). As Kilcher described in a video documenting the process, he then programmed a team of bots to post on the board as often as they could. According to Kilcher, the bots posted approximately 30,000 times over two separate 24-hour periods. While 4chan users were able to identify some of the bots for what they were, this appeared to be due less to the model’s shortcomings than to the bots’ superhuman indefatigability — they posted around the clock, as frequently as the site allowed. Kilcher’s experiment was criticized by a number of experts and observers, who called it irresponsible and unethical. While Kilcher made it possible for anyone to use his “GPT-4chan” by uploading it to Hugging Face, an online repository for AI and ML code, the site quickly restricted access. But the cat may already be out of the bag: as Kilcher’s experiment shows, currently available open-source models and datasets can be used to create surprisingly effective language models with relative ease.
- More: OpenAI: Best Practices for Deploying Language Models | AI and the Future of Disinformation Campaigns
- More: Texas schools are surveilling students online, often without their knowledge or consent | Why Expensive Social Media Monitoring Has Failed to Protect Schools
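As a sense of how low the technical bar is: fine-tuning an open-source causal language model on a text corpus takes only a few dozen lines with standard tooling. The sketch below is a generic, hypothetical example using the Hugging Face transformers and datasets libraries; the corpus file ("posts.txt"), hyperparameters, and output directory are illustrative placeholders, not details of Kilcher's actual setup, and training a 6-billion-parameter model in practice requires multiple high-memory GPUs.

```python
# Generic causal-LM fine-tuning sketch (Hugging Face transformers + datasets).
# All file names and hyperparameters below are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/gpt-j-6B"  # the open-source base model Kilcher used
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J defines no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)  # needs large GPU memory

# Hypothetical corpus: one scraped post per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "posts.txt"})["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=tokenized,
    # mlm=False selects the standard next-token (causal) LM objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point is not the specific recipe but how little of it there is: the heavy lifting, both the pretrained weights and the training loop, is freely available off the shelf.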
- An engineer for Google’s Responsible AI team, Blake Lemoine, told the Washington Post he believes the company’s AI-powered chatbot generator, LaMDA, is “sentient” and deserving of rights “as a person.” Google investigated Lemoine’s claims and dismissed them, and has since placed Lemoine on paid leave for violations of the company’s confidentiality policy. Since the story broke, most experts have agreed that Lemoine read too much into his conversations with the tool, and doubted that LaMDA was doing much more than mimicry. Several also pointed out that the transcripts Lemoine published had been heavily edited, potentially giving the impression that LaMDA is more coherent than it is in reality.
- In a viral Twitter thread, Giannis Daras, a computer science PhD student at UT Austin, argued that DALL-E 2, OpenAI’s new text-to-image generator, has “a secret language.” It’s no secret that DALL-E 2 struggles to generate coherent images of text, but Daras said that the seeming gibberish is actually part of a “DALL-E 2 language.” Feeding DALL-E 2-generated text — “Vicootes,” for example — back into the tool will generate images of whatever the word means in DALL-E 2’s “language” — in this case, vegetables. Daras’s findings were also published in an arXiv paper. But many observers were skeptical. As Benjamin Hilton pointed out, the effect was difficult to recreate with many DALL-E 2-generated words or phrases, and the tool frequently produced seemingly unrelated images in response.
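Daras’s experiment is easy to describe in code. The sketch below is hypothetical: DALL-E 2 was a closed research preview at the time rather than a public API, so this assumes access to OpenAI’s later Images API (openai.Image.create in the legacy Python client). The first prompt is an illustrative text-producing prompt; “Vicootes” is the example from the thread.

```python
# Hypothetical reconstruction of the re-prompting experiment using OpenAI's
# legacy Images API (DALL-E 2 itself was a closed research preview at the time).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_image(prompt: str) -> str:
    """Request a single image for the prompt and return its URL."""
    response = openai.Image.create(prompt=prompt, n=1, size="512x512")
    return response["data"][0]["url"]

# Step 1: ask for an image that contains text; the rendered "words" are
# usually gibberish (illustrative prompt, not necessarily Daras's exact one).
print(generate_image("Two farmers talking about vegetables, with subtitles"))

# Step 2: feed a gibberish string from such an image back in as a prompt and
# check whether the outputs cluster around one concept (here: vegetables).
print(generate_image("Vicootes"))
```

Hilton’s objection amounts to running step 2 many times with many such strings and finding that the outputs usually do not cluster at all.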
Government Updates
The Pentagon’s New AI Office Hits Its Stride: The office of the Chief Digital and Artificial Intelligence Officer — the new DOD office meant to coordinate the Pentagon’s data, AI and digital efforts — reached full operational capability on June 1, the DOD announced earlier this month. The office’s maturation has brought significant staffing changes inside the Pentagon. Lt. Gen. Michael Groen, the head of the Joint Artificial Intelligence Center — which, together with the Defense Digital Service and the Office of Advancing Analytics, has now been subsumed by the office of the CDAO — announced his retirement on LinkedIn last month. The office of the CDAO, meanwhile, has tapped a number of people to fill important positions, including Diane Staheli, who was recently named head of its Responsible AI Division. More change appears to be on the way: in his first interview as CDAO, Craig Martell, who was appointed in April, acknowledged the concerns of several recently departed DOD officials who have criticized the Pentagon’s sluggish pace of modernization, and discussed his plans to make progress despite “bureaucratic inertia.”
NAIRR Task Force Releases Report: In its interim report released late last month, the National AI Research Resource task force described its initial findings and recommendations, which included:
- The NAIRR should be set up to meet four primary goals: “Spur Innovation,” “Increase Diversity of Talent,” “Improve Capacity,” and “Advance Trustworthy AI.”
- The NAIRR should operate as a “federated cyberinfrastructure ecosystem run by a single management entity, with governance and external advisory bodies.” The report warns that assigning a single agency to manage the NAIRR could risk “narrowing its focus to that agency’s specific mission” and recommends the resource be run by a non-governmental entity with appropriate federal oversight.
- There should be an “integrated access portal through which all resources are made available.”
- The NAIRR should be accessible to “researchers and students from diverse backgrounds,” including both expert-level AI researchers and students beginning to experiment with AI development.
- The NAIRR’s managers should explore making statistical, administrative, and federally funded research data available to researchers.
- The NAIRR should offer a “federated mix of on-premise and commercial computational resources, including conventional servers, computing clusters, HPC, and cloud computing” to researchers.
U.S. Supercomputer Named World’s Fastest (But China Might Have Faster): Frontier, a supercomputer hosted at Oak Ridge National Laboratory in Tennessee, was declared the world’s fastest supercomputer and “the first true exascale machine” by the Top500 project, which benchmarks and ranks the world’s most powerful systems. Exascale computing — the ability to perform at least one quintillion (10^18) floating-point operations per second — has been in the U.S. government’s sights for several years. The Department of Energy started its Exascale Computing Initiative in 2016 and has reportedly poured more than $2 billion into building three exascale systems, including Frontier (the other two — “Aurora” at Argonne National Laboratory and “El Capitan” at Lawrence Livermore National Laboratory — are expected to be finished by the end of 2023). According to the DOE, the systems “will enable breakthroughs in both science and industry through modeling and simulation, high-performance data analysis, and artificial intelligence and machine learning applications.” But the United States’ place atop the supercomputing leaderboard may not be as secure as the Top500 list makes it seem — while Frontier is the first publicly verified exascale system, experts believe China may have already deployed two exascale systems last year and could have as many as 10 up and running by 2025.
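To put exascale in perspective, here is a back-of-the-envelope comparison. Frontier’s measured Linpack score (Rmax) on the June 2022 Top500 list was roughly 1.102 exaflops; the laptop figure below is an assumed round number for illustration, not a benchmark.

```python
# Back-of-the-envelope: what one second of Frontier's measured performance
# would take on an ordinary machine. The laptop figure is an assumption.
EXAFLOP = 10**18                     # one quintillion operations per second
frontier_rmax = 1.102 * EXAFLOP      # Frontier's June 2022 Linpack (Rmax) score
laptop_flops = 100 * 10**9           # assume ~100 gigaflops for a fast laptop

seconds = frontier_rmax / laptop_flops  # laptop time to match one Frontier-second
print(f"{seconds / 86400:.0f} days")    # -> about 128 days
```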
DOD Planning Autonomous Weapons Policy Update: The Pentagon will update its guidance on autonomous weapons systems by the end of the year, Emerging Capabilities Policy Director Michael Horowitz told Breaking Defense. The current guidance — DOD Directive 3000.09 — was signed nearly a decade ago, in November 2012. While both AI research and the DOD’s AI-development efforts have advanced dramatically in that time, Breaking Defense noted that the update is coming as part of a standard DOD 10-year review process, not in response to any specific technological advances. Horowitz didn’t disclose any planned changes and expressed broad satisfaction with the “very responsible approach” of the current guidance, but said that there could be “some updates and clarifications that would be helpful.” One such update could be a clearer distinction between AI-enabled and autonomous weapons systems — which, as Horowitz noted, are not the same thing. The original guidance made no mention of AI, but with Horowitz’s office getting input from many of the DOD’s new emerging tech offices, including the office of the CDAO, that omission seems unlikely to be repeated in the new guidance.
In Translation
CSET’s translations of significant foreign-language documents on AI
PRC Think Tank White Paper: Artificial Intelligence White Paper (2022). This white paper from a Chinese think tank describes the state of AI in China and the world. It divides its focus among AI innovation and breakthroughs, engineering and other practical uses of AI, and AI governance initiatives in the areas of trustworthiness and safety.
PRC Education Budget: Ministry of Education 2022 Budget. This document is the 2022 budget for China’s Ministry of Education. The ministry devotes the vast majority of its budget to fully funding 75 of China’s top universities, including all scientific research undertaken by them. This year, the ministry is also funding the launch of a long-term project to incorporate more Xi Jinping-related content into mandatory Marxist ideology courses for Chinese college students.
PRC Public Security Budget: Ministry of Public Security 2022 Budget. This document is the 2022 budget for the PRC Ministry of Public Security, which is responsible for Chinese police departments, border security, counterterrorism, counter-narcotics, top leaders’ security details, maintaining social order, and monitoring the Chinese internet for dissent. In 2022, the ministry is funding projects to renovate police academy campus facilities, train air marshals, and enforce the ban on fishing on the Yangtze River, among others.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
Job Openings
We’re hiring! Please apply or share the roles below with candidates in your network:
- Executive Coordinator: The Executive Coordinator will provide critical executive, logistical, and project management support to the CSET Operations and Leadership Teams with limited supervision and high levels of autonomy. Apply by July 1.
- Research Fellow — Standards and Testing: The Research Fellow will focus on standards, testing, evaluation, safety and national security issues associated with AI systems. To do this, they will examine how the limitations, risks, and societal and security impacts of AI can be understood and managed. Apply by July 15.
- Research Fellow — AI Applications: The Research Fellow will focus on helping decision makers evaluate and translate new and emerging technologies, particularly in the field of AI, into novel capabilities by separating real trends and strategic opportunities from technological hope and hype. Apply by July 15.
- UI/UX Designer: The UI/UX Designer will perform user interviews, write user stories, create user interface mockups, and conduct usability testing for public-facing support tools. Rolling application — apply today.
What’s New at CSET
REPORTS
- Quad AI: Assessing AI-Related Collaboration between the United States, Australia, India, and Japan by Husanjot Chahal, Ngor Luong, Sara Abdulla and Margarita Konaev
- Chokepoints: China’s Self-Identified Strategic Technology Import Dependencies by Ben Murphy
- China’s Industrial Clusters: Building AI-Driven Bio-Discovery Capacity by Anna Puglisi and Daniel Chou
- Re-Shoring Advanced Semiconductor Packaging: Innovation, Supply Chain Security, and U.S. Leadership in the Semiconductor Industry by John VerWey
- China’s State Key Laboratory System: A View Into China’s Innovation System by Emily Weinstein, Channing Lee, Ryan Fedasiuk and Anna Puglisi
- CSET: Data Visualization: Map of China’s State Key Laboratory System by Emily Weinstein, Daniel Chou, Channing Lee, Ryan Fedasiuk and Anna Puglisi
- CSET: Data Snapshot: “Growth” Companies in PARAT by Autumn Toney
- CSET: Data Snapshot: “Mature” Companies in PARAT by Autumn Toney
COMMENTARY
- The National Interest: The Case for Applying Global Magnitsky Sanctions Against Hikvision by Dahlia Peterson
- The Hill: Want to secure U.S. supply chains? Reform high-skilled immigration by Will Hunt and Remco Zwetsloot
- The Hill: The advantages of foreign STEM students staying in the U.S. by Jack Corrigan
- Council on Foreign Relations: The Future of the Quad’s Tech Cooperation Hangs in the Balance by Ngor Luong and Husanjot Chahal
EVENT RECAPS
- On May 23, the CSET webinar A New Export Control Regime for the 21st Century: How Russia’s Invasion has Created an Opportunity for a Techno-Democracy Partnership featured a discussion between CSET Research Fellow Emily Weinstein and CSET Non-Resident Senior Fellow Kevin Wolf on their proposal for a new export control regime among techno-democracies to better address contemporary challenges.
IN THE NEWS
- The Wall Street Journal: Research Analyst Dahlia Peterson spoke with Angus Loten and explained why the UK’s fine of facial recognition firm Clearview AI is unlikely to have much of an effect on the company’s practices.
- Bloomberg: Jordan Robertson and Michael Riley reached out to Director of Biotechnology Programs and Senior Fellow Anna Puglisi to discuss Chinese government efforts to acquire semiconductor industry secrets.
- The Register: Laura Dobberstein recapped the new brief by Husanjot Chahal, Ngor Luong, Sara Abdulla and Margarita Konaev, Quad AI: Assessing AI-Related Collaboration between the United States, Australia, India, and Japan, in an article about the Quad’s May meeting.
- The Register: Dobberstein covered Translation Manager Ben Murphy’s new brief Chokepoints: China’s Self-Identified Strategic Technology Import Dependencies.
- The National Interest: A Kevin Klyman piece about semiconductor trade restrictions cited John VerWey’s October brief, No Permits, No Fabs: The Importance of Regulatory Reform for Semiconductor Manufacturing.
- Tech Target: Senior Fellow Andrew Lohn spoke with Antone Gonsalves about OpenAI and its partnership with Microsoft.
- Vox: Deputy Director of Analysis and Research Fellow Margarita Konaev discussed the state of the war in Ukraine and the impact of advanced weaponry with Ellen Ioanes.
- Politico: Konaev spoke about cutting-edge communications tech with Lee Hudson and Bryan Bender.
- GovCIO Media & Research: Konaev was also quoted in a Kate Macri article about Craig Martell’s future as the CDAO.
- The Financial Times: Hudson Lockett cited Ngor Luong, Zachary Arnold and Ben Murphy’s 2021 brief, Understanding Chinese Government Guidance Funds: An Analysis of Chinese-Language Sources, in an article about China’s capital markets.
- Newsweek: Research Analyst Will Hunt’s January brief Sustaining U.S. Competitiveness in Semiconductor Manufacturing: Priorities for CHIPS Act Incentives earned a mention in Shubham Dwivedi and Gregory D. Wischer’s recent opinion piece.
What We’re Reading
Report: Comparing the Organizational Cultures of the Department of Defense and Silicon Valley, Nathan Voss and James Ryseff, RAND Corporation (2022)
Report: Stresses and contradictions in the Chinese economy in the early 2020s, Jacob Funk Kirkegaard, European Parliament Think Tank (May 2022)
Upcoming Events
- June 23: CSET Webinar, Connecting the Quad: Increasing AI Collaboration between the United States, Australia, India and Japan, featuring Husanjot Chahal, Ngor Luong and Martijn Rasser
- June 27: Brookings and AEI, Economic globalization after Ukraine, featuring Emily Weinstein
What else is going on? Suggest stories, documents to translate & upcoming events here.