Worth Knowing
OpenAI Debuts GPT-4 but Stays Mum About What’s Under the Hood: Last week, OpenAI announced GPT-4 — its long-anticipated next-generation large language model (LLM). In release materials, the company detailed GPT-4’s improvements over GPT-3.5, the model that powers the free version of the uber-popular ChatGPT. Those advances include multimodal capabilities (GPT-4 can accept both text and images as inputs, though it can only output text), improved performance across a range of academic and professional tests and other benchmarks, better guardrails against generating “disallowed content,” a higher likelihood of producing factual responses, and the capability to process and respond to much longer strings of text. But while OpenAI was eager to discuss GPT-4’s capabilities, it shared little about what was under the hood, beyond a section in the technical report that describes GPT-4 as a pre-trained “Transformer-style” model fine-tuned using Reinforcement Learning from Human Feedback. The tight-lipped debut differs significantly from the GPT-3 release, when OpenAI shared detailed information about how its model was built and trained. The GPT-4 technical paper attributed this shift to changes in the broader “competitive landscape” and “the safety implications of large-scale models like GPT-4.” In an interview with The Verge’s James Vincent, OpenAI co-founder and Chief Scientist Ilya Sutskever said the company’s earlier approach to sharing its research had been “flat out” wrong. Despite keeping the finer details of the model’s architecture and training under wraps, OpenAI seems to have fewer reservations about unleashing the model itself. GPT-4 is already available for use by developers through the company’s API and to paying subscribers through ChatGPT.
Expert Take: “Many of the important questions about large language models are still open. One debate to watch closely concerns how openly available increasingly powerful language models should be. Machine learning has benefited from its culture of extreme openness, but as the state of the art comes closer to systems that could enable dangerous actions — or even wreak havoc on their own — researchers and policymakers will need to think seriously about how to prevent proliferation while promoting equitable access.” — Helen Toner is CSET’s Director of Strategy and Foundational Research Grants and a member of OpenAI’s Board of Directors
- More: Microsoft Strung Together Tens of Thousands of Chips in a Pricey Supercomputer for OpenAI | GPT-4 is here: what scientists think
- Chinese search giant Baidu announced “Ernie Bot,” an AI-powered chatbot and the first serious Chinese competitor to ChatGPT. Initial impressions of the new system were mixed, but recent reviews have been more positive. Ernie Bot doesn’t appear to be quite on par with GPT-4, but observers say it seems to be “one fair-size step” in that direction.
- Anthropic, a Google-backed AI company started by OpenAI alumni (among them CSET Non-Resident Fellow Jack Clark), launched its AI-powered chatbot “Claude.” Anthropic has announced partnerships with several companies to incorporate Claude into their products and is giving some users early access to the chatbot on a limited basis.
- Nvidia announced new cloud services to help businesses build and customize generative AI tools for domain-specific tasks.
- Adobe announced an AI image generator it says was trained on images that the company has a license to or whose copyright has expired. As we’ve covered, the training of AI art generators has been a contentious subject, but it looks like Adobe has sidestepped the most fraught aspect.
- More: GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models | Google’s PaLM-E is a generalist robot brain that takes commands
Government Updates
AI and Emerging Tech in the FY2024 Budget Request: Last week, the Biden Administration submitted its Fiscal Year 2024 budget request. It includes significant proposed investments to bolster U.S. science and technology activities, among them $4.4 billion for the Defense Advanced Research Projects Agency, $796.5 million for National Science Foundation-supported AI research, and $1.2 billion for the NSF’s new Directorate of Technology, Innovation and Partnerships. Additional highlights include:
- $209.7 million for NSF-supported semiconductors and microelectronics research;
- $1.8 billion identified for Department of Defense AI-related research, including “efforts to deliver and adopt responsible AI/ML-enabled capabilities on secure and reliable platforms, workforce development, and DoD-wide data management and modernization efforts;”
- $1.4 billion for Joint All Domain Command and Control (JADC2) initiatives;
- $115 million for a DOD Office of Strategic Capital designed to attract and scale private capital in critical technologies; and
- $139.2 million for Bureau of Industry and Security defense-related activities, including export enforcement and national security and technology transfer control.
The Commerce Department Proposes Guardrails on CHIPS Funds: On Tuesday, the Commerce Department announced proposed rules that would significantly limit the ability of CHIPS Act fund recipients to invest in China or expand their operations there. As we covered earlier this month, the Commerce Department will soon begin accepting applications for the $39 billion in semiconductor manufacturing incentives provided by the CHIPS and Science Act of 2022. In addition to boosting domestic semiconductor manufacturing, one of the act’s primary objectives (going back to its earlier iterations as the Endless Frontier Act and the United States Innovation and Competition Act of 2021) is countering China. The proposed rules would limit CHIPS funding recipients from investing more than $100,000 or expanding existing production capacity by more than 5 percent for facilities producing “leading edge” or “advanced” chips in “foreign countries of concern,” including China and Russia. A specific list of “semiconductors critical to national security,” including those used in quantum computing and for specialized military purposes, will also be subject to the 5 percent limit — regardless of the technology used to manufacture them. For facilities producing non-critical “legacy” chips, the expansion limit would be set at 10 percent. The proposed rules will be open for public comment for 60 days, beginning on March 23.
The FTC Issues Another Warning About AI: In a blog post earlier this week, the Federal Trade Commission warned AI developers against making and selling products that can be used for fraud, scams, or other harmful activities. The post is the second AI-related guidance issued by the FTC in the last month, a sign — together with its new Office of Technology — of the agency’s growing focus on tech-related enforcement efforts. The new guidance encourages companies to think through the “reasonably foreseeable” ways their generative AI systems could be misused for fraudulent purposes, and warns them to implement “durable, built-in” protections against consumer harm if they want to avoid FTC enforcement actions. The post seems to be related to a reported surge in scam calls during which fraudsters used AI-powered voice-mimicking software to impersonate the relatives of (often elderly) targets — indeed, the FTC issued a consumer alert about such calls the same day as the blog post. Beyond addressing low-level scams, the agency seems to be considering potential enforcement actions related to the “potentially dangerous” risks AI poses to children, teens and other vulnerable groups; the blog post concludes by emphasizing that FTC staff are closely monitoring concerns around these issues.
In Translation
CSET’s translations of significant foreign language documents on AI
PRC Domestic Demand Strategy: Outline of the Plan for the Strategy to Expand Domestic Demand (2022-2035). This is a translation of China’s short- to mid-term strategy for expanding domestic demand in its economy. Although its theme is increasing Chinese consumer demand, the strategy is wide-ranging and includes calls to increase the quantity and quality of the supply of goods and services and to improve China’s social safety net. It does not, however, set any metrics to measure how well the strategy is being implemented. Although this document is merely the “outline” of the plan to expand domestic demand, it is likely that this “outline” will be the fullest version of the strategy that China makes public.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
What’s New at CSET
PUBLICATIONS
- CSET: Data Snapshot: Leading the Charge: A Look at the Top-Producing AI Programs in U.S. Colleges and Universities by Sara Abdulla
- CSET: Blog post: Women’s History Month, a (non-exhaustive) list of information and resources compiled by CSET team members to help us learn from, support and honor the women who’ve transformed our world
IN THE NEWS
- NPR: Research Fellow Josh Goldstein, lead author of a recent paper on how large language models can be misused to influence public opinion, was featured in a detailed piece this morning by Shannon Bond on the ease and frequency with which deepfake videos are created.
- Politico: John Sakellariadis sought the views of Director of Biotechnology Programs and Senior Fellow Anna Puglisi for an item on China’s illicit efforts to acquire genetic data from the United States in Morning Cybersecurity.
- CBS: In a Nicole Keller story on CBS Mornings about the potential national security threat posed by TikTok, which was tied to today’s House hearing, Puglisi addressed the platform’s potential for illicit tech transfer to China.
- CBC: For an article about Beijing’s overseas messaging efforts, Mark Gollom reached out to Puglisi to discuss China’s United Front Work Department.
- Vox: Director of Strategy and Foundational Research Grants Helen Toner discussed the problems with describing AI competition as a “race” with Sigal Samuel for an article about why AI development should be slowed down.
- Newsweek: John Feng spoke to Research Fellow Emily Weinstein about China’s reorganization of some of its key science and technology bureaus.
- Science: Weinstein also weighed in on the reorganization in a Dennis Normile piece on the topic.
- Voice of America: Research Analyst Jacob Feldgoise shared his thoughts on the restructuring with VOA’s Lin Feng.
- University World News: Yojana Sharma reached out to Research Analyst Hanna Dohmen to discuss China’s approach to ChatGPT-style language models.
What We’re Reading
Report: Report and Recommendations, U.S. Chamber of Commerce Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation (March 2023)
Article: Three Easy Ways to Make AI Chatbots Safer, Noah Giansiracusa, Scientific American (March 2023)
Upcoming Events
- March 27 – April 1: Georgetown Initiative on Tech & Society, Tech & Society Week
- March 29: Georgetown Initiative on Tech & Society, Intersectional Bias in Applications of Artificial Intelligence, featuring Heather Frase
- March 30: CSET Event, Betting the House: Strengthening the Full Microelectronics Supply Chain, featuring John VerWey and In-Q-Tel’s Yan Zheng
What else is going on? Suggest stories, documents to translate & upcoming events here.