Tag Archive: Machine learning

Artificial intelligence systems are rapidly being deployed in all sectors of the economy, yet a significant body of research has demonstrated that these systems can be vulnerable to a wide array of attacks. How different are these problems from more common cybersecurity vulnerabilities? What legal ambiguities do they create, and how can organizations address them? This report, produced in collaboration with the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, presents the recommendations of a July 2022 workshop of experts to help answer these questions.

Securing AI

Andrew Lohn, Wyatt Hoffman
| March 2022

As in traditional software, vulnerabilities in machine learning software can lead to sabotage or information leakage. Also as in traditional software, sharing information about vulnerabilities helps defenders protect their systems and helps attackers exploit them. This brief examines some of the key differences between vulnerabilities in traditional and machine learning systems and how those differences can affect the vulnerability disclosure and remediation processes.

Part 1 of CSET's "AI and the Future of Disinformation Campaigns" examines how artificial intelligence and machine learning can influence disinformation campaigns.

Making AI Work for Cyber Defense

Wyatt Hoffman
| December 2021

Artificial intelligence will play an increasingly important role in cyber defense, but vulnerabilities in AI systems call into question their reliability in the face of evolving offensive campaigns. Because securing AI systems can require trade-offs based on the types of threats, defenders are often caught in a constant balancing act. This report explores the challenges in AI security and their implications for deploying AI-enabled cyber defenses at scale.

Separating AI cybersecurity hype from reality

Bank Automation News
| August 20, 2021

In Bank Automation News' latest podcast, CSET's Micah Musser breaks down how AI and ML can both heighten and hinder security, and how financial institutions can separate marketing fiction from cybersecurity reality.

Microsoft Print Spooler, Kaseya ransomware mega-hack

Security Conversations
| July 6, 2021

Featured in the Security Conversations newsletter, Micah Musser's report "Machine Learning and Cybersecurity" is well worth the read.

AI Accidents: An Emerging Threat

Zachary Arnold, Helen Toner
| July 2021

As modern machine learning systems become more widely used, the potential costs of malfunctions grow. This policy brief draws on trends already visible today, both in newly deployed artificial intelligence systems and in older technologies, to show how damaging future AI accidents could be. It describes a wide range of hypothetical but realistic scenarios to illustrate the risks of AI accidents and offers concrete policy suggestions to reduce those risks.

Poison in the Well

Andrew Lohn
| June 2021

Modern machine learning often relies on open-source datasets, pretrained models, and machine learning libraries from across the internet, but are those resources safe to use? Previously successful digital supply chain attacks against cyber infrastructure suggest the answer may be no. This report introduces policymakers to these emerging threats and provides recommendations for how to secure the machine learning supply chain.

A new CSET report by Micah Musser and Ashton Garriott explores the use of machine learning in cyber defense.

Machine Learning and Cybersecurity

Micah Musser, Ashton Garriott
| June 2021

Cybersecurity operators have increasingly relied on machine learning to address a rising number of threats. But will machine learning give them a decisive advantage or just help them keep pace with attackers? This report explores the history of machine learning in cybersecurity and the potential it has for transforming cyber defense in the near future.