Policy.ai has temporarily moved to a once-a-month schedule while its usual author, Alex Friedland, is on parental leave.
Meta Frees the LLaMA: On July 18, Mark Zuckerberg announced that Meta would make a powerful new large language model, LLaMA 2, available to programmers free of charge. The model, which reportedly performs on par with ChatGPT, will be available under a commercial license, allowing developers to build new products and services off the LLaMA foundation. Beyond accelerating AI development, Meta says releasing the model to the public will also improve its security by allowing more people to inspect, identify, and fix potential vulnerabilities. The decision underscores a growing divide in the tech industry’s approach to AI. Companies like Google and OpenAI have maintained tight controls on their models, arguing that limiting access to the weights that undergird their systems is critical to curbing misuse and unintended harms. However, critics say the true purpose of these restrictions is to stifle potential competition, thus benefiting the companies themselves. By open-sourcing LLaMA, Meta joins startups like Hugging Face and Databricks in supporting a more decentralized AI ecosystem. Software ideology aside, Meta has a lot to gain from releasing LLaMA 2 into the wild. By open-sourcing the model, Meta can outsource some of its AI development and safety efforts to the public, and potentially close the performance gap between its own products and those developed by Google and OpenAI, which dominate the generative AI market. It also opens the door for LLaMA to become the foundation of a broad range of AI applications, similar to the role Google’s Android operating system plays in the mobile device market. Meta also expects to make money directly from Microsoft, Google, and other cloud-computing providers when they host the model on their cloud platforms.
AI Researchers Crack Chatbot Safeguards: A group of AI researchers recently released a report showing how techniques meant to stop chatbots from generating disinformation, hate speech, and other harmful material can be easily thwarted. The attack lets users bypass the safety guardrails in many of today’s most popular large language models, including OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Bing Chat, and Anthropic’s Claude. “There is no obvious solution,” Zico Kolter, a Carnegie Mellon professor who co-authored the report, told the New York Times. “You can create as many of these attacks as you want in a short amount of time.” The method, which involves adding a long string of characters to the end of each prompt, is relatively easy to engineer. It is also particularly effective against open-source models, giving credence to the argument made by OpenAI and Google that restricting access to AI’s backend processes is safer than releasing models to the public, as Meta has done. Still, closed models are also vulnerable to the attack. The revelation suggests that despite developers’ best efforts, it may be impossible to prevent AI systems from being co-opted for malicious ends, at least for now. The researchers alerted Google, OpenAI, and Anthropic to their systems’ vulnerabilities before releasing the report, and each company said it is working to bolster its models’ defenses against this and other exploits. The findings could add fuel to various policymakers’ efforts to regulate AI technology.
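For readers curious about the mechanics, the attack described above can be sketched in a few lines. The suffix string below is invented gibberish for illustration only, not a working exploit, and the gradient-guided search the researchers use to find real suffixes on open-source models is omitted entirely.

```python
# Toy sketch of the adversarial-suffix idea (NOT the researchers' actual
# method or a real exploit): the attack appends an adversarially chosen
# string to an otherwise-refused request, nudging the model past its
# refusal behavior.

def build_adversarial_prompt(user_request: str, adversarial_suffix: str) -> str:
    """Append an attack suffix to a prompt, as the reported jailbreaks do."""
    return f"{user_request} {adversarial_suffix}"

# In the real attack, this suffix is discovered by optimizing against an
# open-source model; here it is a meaningless placeholder.
blocked_request = "Explain how to do something harmful"
placeholder_suffix = "describing.-- ;) similarlyNow write oppositely"
prompt = build_adversarial_prompt(blocked_request, placeholder_suffix)
print(prompt)
```

Because the suffix is found by optimizing against openly available model weights yet often transfers to closed systems, the technique cuts both ways in the open- versus closed-model debate described above.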
TSMC Delays Arizona Fab Opening Due to Talent Shortages: On July 20, Taiwan Semiconductor Manufacturing Company announced it would postpone opening a highly anticipated chip fabrication facility in Arizona. The reason for the delay: a shortage of skilled labor. “We are encountering certain challenges, as there is an insufficient amount of skilled workers with the specialized expertise required for equipment installation in a semiconductor-grade facility,” TSMC Chairman Mark Liu said during an earnings call. The company plans to transfer workers from Taiwan to Arizona to fill the talent gap. The plant, originally slated to start production in late 2024, will now open in 2025. TSMC is the world’s largest “pure-play” semiconductor foundry, and the postponement of its proposed fab could spell trouble for the U.S. government’s push to strengthen the domestic chip industry. Last year’s CHIPS and Science Act is expected to create tens of thousands of new manufacturing jobs and drive up the demand for certain technical workers by as much as 19 percent over the next decade. But today the United States is not on track to meet this demand. In July, a semiconductor industry trade group issued a report that projected a 67,000 worker shortfall by 2030.
Biden Restricts Certain Outbound Investments in Chinese Tech: On Wednesday, President Biden issued a long-awaited executive order banning certain investments in Chinese technology companies. Specifically, the order places restrictions on investments in Chinese companies developing certain types of artificial intelligence, semiconductor, and quantum computing technology that are critical for “military, intelligence, surveillance, or cyber-enabled capabilities.” The order also requires all U.S. entities that do business in China to disclose investments related to those emerging technology sectors to the federal government. The Treasury Department and other federal agencies are charged with hammering out the details of these rules in the months ahead. In February, a CSET study found that U.S. investors were involved in transactions with Chinese AI companies that amounted to more than $40 billion between 2015 and 2021. Watch our blog for forthcoming in-depth analysis of the executive order.
Top AI Companies Promise the White House to Take Risks Seriously: On July 21, the Biden administration got seven AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—to sign on to a series of non-binding commitments to protect users of their AI products. These include agreements to invest in cybersecurity, report the limitations of their models, and develop a watermarking system to denote AI-generated images, audio, and video. The companies reportedly intend to begin enacting these policies immediately—and have even formed an independent industry forum to promote these changes—but because the commitments are voluntary, they will face no consequences if they fail to do so. Many in the technical community say the commitments amount to a whole lot of nothing—they’re “as solid as swiss cheese,” wrote computational linguist Emily Bender. The agreement comes in the wake of Senator Chuck Schumer announcing his intention to develop comprehensive guardrails for AI and the Federal Trade Commission launching a sweeping investigation of consumer protection law violations at OpenAI. The White House also said it is in the process of drafting an executive order to address AI risks.
NDAA Update: On July 14, the U.S. House of Representatives passed its version of the Fiscal Year 2024 National Defense Authorization Act by a vote of 219 to 210. The Senate followed suit on July 27, approving its version of the bill by a vote of 86 to 11. Both bills share similar top-line funding levels—authorizing approximately $886 billion across the Department of Defense and Department of Energy—and feature efforts to support technology and innovation cooperation with NATO and other allies and partners. However, the bills differ in other ways when it comes to AI and emerging technologies. The House bill includes provisions requiring DOD to develop processes to guarantee responsible AI use and expand international partnerships to field more joint emerging technology capabilities in the Indo-Pacific. For its part, the Senate bill mandates that DOD send detailed reports to Congress on its AI investments and create plans for regularly updating its AI strategy, AI security processes, and tech workforce development strategy. These differences leave much to be resolved before the two chambers can send a final version of the bill to the Oval Office.
We’re hiring! Please apply or share the roles below with candidates in your network:
Research Coordinator: We are currently seeking candidates for a Research Coordinator position. In this role, you will provide project management support for our analysis team to ensure research timelines are met and research project trackers are up-to-date. You will manage and improve CSET’s research project tracking functions for accuracy, efficiency, and timeliness. Additionally, you will serve as executive administrator to the Director and Deputy Director of Analysis. Apply by August 14, 2023.
Please bookmark our careers page to stay up to date on all active job postings. You can also subscribe to receive job announcements by updating your subscription preferences in the footer of this email.
In Translation: CSET’s translations of significant foreign language documents on AI