Executive Summary
As artificial intelligence capabilities continue to improve, critical infrastructure (CI) operators and providers seek to integrate new AI systems across their enterprises; these capabilities bring benefits but also attendant risks. AI adoption may lead to more capable systems, improvements in business operations, and better tools to detect and respond to cyber threats. At the same time, AI systems will also introduce new cyber threats that CI providers must contend with. Last year’s AI executive order directed the various Sector Risk Management Agencies (SRMAs) to “evaluate and provide … an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyberattacks.”
Despite the executive order’s recent direction, AI use in critical infrastructure is not new. AI tools that excel in prediction and anomaly detection have been used for cyber defense and other business activities for many years. For example, providers have long relied on commercial information technology solutions that are powered by AI to detect malicious activity. What has changed is that new generative AI techniques have become more capable and offer novel opportunities for CI operators. Potential uses include more capable chatbots for customer interaction, enhanced threat intelligence synthesis and prioritization, faster code development, and, more recently, AI agents that can perform actions based on user prompts.
CI operators and sectors are attempting to navigate this rapidly changing and uncertain landscape. Fortunately, there are analogues from cybersecurity that we can draw on. Years ago, innovations in network connectivity provided CI operators with a way to remotely monitor and operate many systems. However, this also created new attack vectors for malicious actors. Past lessons can help inform how organizations approach the integration of AI systems. Today, risk may arise in two ways: from AI vulnerabilities or failures in systems deployed within CI and from the malicious use of AI systems against CI sectors.
This workshop report provides technical mitigations and policy recommendations for managing the use of AI in critical infrastructure. Several findings and recommendations emerged from the workshop discussion.
- Resource disparities between CI providers, both within and across sectors, strongly affect the prospects for AI adoption and the management of AI-related risks. Further programs are needed to support less well-resourced providers with AI-related assistance, including financial resources, data for training models, requisite talent and staff, forums for communication, and a voice in the broader AI discourse. Expanding formal and informal means of mutual assistance could help close the disparity gap. These initiatives share resources, talent, and knowledge across organizations to improve the security and resiliency of the sector as a whole. They include formal programs, such as sharing personnel in response to incidents or emergencies, and informal efforts, such as developing best practices or vetting products and services.
- There is a recognized need to integrate AI risk management into existing enterprise risk management practices; however, ownership of AI risk can be ambiguous within current corporate structures. One participant described AI risk as a “hot potato” being tossed around the C-suite. A clear designation of responsibility for AI risk within the corporate structure is needed.
- Ambiguity between AI safety and AI security also poses substantial challenges to operationalizing AI risk management. Organizations are often unsure how to apply guidance from the National Institute of Standards and Technology’s recently published AI Risk Management Framework alongside its Cybersecurity Framework. Further guidance on how to implement a unified approach to AI risk is needed. Tailoring and prioritizing this guidance would help make it more accessible to less well-resourced providers and those with specific, often bespoke, needs.
- While there are well-established channels for cybersecurity information sharing, there is no analogue in the context of AI. SRMAs should leverage existing venues, such as the Information Sharing and Analysis Centers, for AI security information sharing. Sharing AI safety issues, mitigations, and best practices is also critical, but the channels to do so are unclear. Clarity on what constitutes an AI incident, which incidents should be reported, the thresholds for reporting, and whether existing cyber-incident reporting channels are sufficient would be valuable. To promote cross-sector visibility and analysis that spans both AI safety and security, the sectors should consider establishing a centralized analysis center for AI safety and security.
- Skills to manage cyber and AI risks are similar but not identical. The implementation of AI systems will require expertise that many CI providers do not currently have. As such, providers and operators should actively upskill their current workforces and seek opportunities to cross-train staff with relevant cybersecurity skills to effectively address the range of AI- and cyber-related risks.
- Generative AI introduces new issues that can be more difficult to manage and that warrant close examination. CI providers should remain cautious and informed before adopting newer AI technologies, particularly for sensitive or mission-critical tasks. Assessing whether an organization is even ready to adopt these systems is a critical first step.