In the current cyber-threat environment, a well-educated workforce is critical to U.S. national security. Today, however, nearly six hundred thousand cybersecurity positions remain unfilled across the public and private sectors. This report explores high school cybersecurity competitions as a potential avenue for increasing the domestic cyber talent pipeline. The authors examine the competitions, their reach, and their impact on students’ educational and professional development.
We aim to demonstrate the value of mathematical models for policy debates about technological progress in cybersecurity by considering phishing, vulnerability discovery, and the dynamics between patching and exploitation. We then adjust the models' inputs to reflect possible advances in the underlying technology.
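To make the patching-versus-exploitation dynamic concrete, here is an illustrative toy model — not the report's actual model — in which a fully vulnerable population of machines is either patched or exploited at fixed hypothetical rates. Adjusting one input (the exploit rate, a stand-in for better attacker tooling) shows how such a model can translate a technological advance into an outcome.

```python
# Illustrative toy model (not from the report): a race between patching
# and exploitation for a single disclosed vulnerability. All rates are
# hypothetical placeholders.

def patch_exploit_race(patch_rate, exploit_rate, steps=1000, dt=0.01):
    """Euler-integrate the fractions of machines that end up patched
    vs. compromised, starting from a fully vulnerable population."""
    vulnerable, patched, compromised = 1.0, 0.0, 0.0
    for _ in range(steps):
        d_patch = patch_rate * vulnerable * dt
        d_exploit = exploit_rate * vulnerable * dt
        vulnerable -= d_patch + d_exploit
        patched += d_patch
        compromised += d_exploit
    return patched, compromised

# Doubling the exploit rate shifts more of the population into the
# compromised state before patching can reach it.
slow = patch_exploit_race(patch_rate=1.0, exploit_rate=0.2)
fast = patch_exploit_race(patch_rate=1.0, exploit_rate=0.4)
```

In this simplified setup the two end-state fractions split in proportion to the two rates, which is exactly the kind of sensitivity a policy debate can reason about.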
In his testimony before the House Subcommittee on Cybersecurity, Infrastructure Protection, and Innovation, Senior Fellow Andrew Lohn offers recommendations on how to mitigate AI security risks.
Adversarial patches are images designed to fool otherwise well-performing neural network-based computer vision models. Although these attacks were initially conceived and studied in the digital domain, where the raw pixel values of an image are perturbed directly, recent work has demonstrated that they can successfully transfer to the physical world. This can be accomplished by printing out the patch and adding it into scenes of newly captured images or video footage.
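The digital form of the attack can be sketched as overwriting a region of an image with the patch's pixels — the digital analogue of printing the patch and placing it in a physical scene. The array shapes, coordinates, and the all-white "patch" below are illustrative stand-ins, not a real optimized patch:

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Return a copy of `image` with `patch` pasted at (top, left)."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

# Stand-in 224x224 RGB camera frame and a stand-in 32x32 patch.
scene = np.zeros((224, 224, 3), dtype=np.uint8)
patch = np.full((32, 32, 3), 255, dtype=np.uint8)
attacked = apply_patch(scene, patch, top=50, left=60)
```

A real attack would first optimize the patch's pixel values against the target model; the pasting step shown here is only the delivery mechanism.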
CSET Senior Fellow Andrew Lohn testified before the House of Representatives Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection, and Innovation at a hearing on "Securing the Future: Harnessing the Potential of Emerging Technologies While Mitigating Security Risks." Lohn discussed the application of AI systems in cybersecurity and AI’s vulnerabilities.
CSET voiced support for OpenAI, Cohere, and AI21 Labs' joint statement on best practices applicable to any organization developing or deploying large language models.
At a hearing before the House Science Subcommittee on Investigations and Oversight and Subcommittee on Research and Technology, CSET Senior Fellow Andrew Lohn explained the vulnerabilities of open-source software.