We believe that the U.S. Government can begin addressing potential threats from AI now by relying on existing authorities and policies, with some refinement and additions. Instead of dwelling on complex and as-yet unanswered questions—such as how to align AI systems with human values, which is still the domain of researchers—policymakers can make necessary progress with immediate, actionable steps. These steps should aim to organize and coordinate a comprehensive governmental approach, one that effectively identifies and prioritizes threats while maximizing learning, so that our understanding of how to manage the risks and opportunities of AI keeps improving over time.
Near Term Policy Actions
We believe that our focus should be on establishing actionable measures and flexible policies. Whether the concern is the current limitations of AI or its future risks, the initial steps for mitigation can be very similar across different time horizons. The way we handle challenges today will influence how effectively we can address future (sometimes unknown) issues.
Specifically, we think Congress should encourage government to take the following actions now, or soon, on AI to improve protections for society, U.S. national security, and overall competitiveness:
- AI Harm Incident Reporting
- Recommendation 1: Direct OMB to lead a working group of oversight agencies to define a common reporting framework and build infrastructure to collect data on AI harms.
- Recommendation 2: Use AI harm/incident reporting to inform legislation, liability, and regulatory rule-making around AI systems.
- Recommendation 3: Build on current NSF efforts to define a set of criteria to identify critical emerging technologies.
- Recommendation 4: Monitor developments along established criteria to surface new technologies or applications that may require oversight.
- Recommendation 5: Call upon OMB to publicly share information regarding agency implementation plans for AI in Government and existing authorities to regulate AI.
- Software Liability
- Recommendation 6: Review key federal statutes, and outline necessary reforms and clarifications to ensure appropriate software liability for AI developers who fail to implement basic best practices including security and reporting.
- Recommendation 7: Direct the development of third-party auditing and red-teaming standards to support testing and evaluation of AI systems before deployment.
- Federal Talent Plan
- Recommendation 8: Encourage OPM’s ongoing efforts to enable agencies to consider skills-based qualifications, community college degrees and technical certifications in lieu of set academic credentials (bachelor’s degrees) for high-demand technical fields.
AI Harm Incident Reporting
Recommendation 1: Direct OMB to lead a working group of oversight agencies to define a common reporting framework and build infrastructure to collect data on AI harms.
Data on AI incidents, harms, and security vulnerabilities will enable agencies and Congress to identify technological applications that may require greater government scrutiny and regulation, while also providing a more complete picture of how these systems and tools are performing in the real world.
This working group would:
- Define the conditions for reporting an incident;
- Scope the associated incident information to be reported;
- Build the reporting infrastructure, including a consistent data format and submission portal; and,
- Determine follow-up procedures for incidents (e.g., investigations, root cause analyses, information sharing).
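The "consistent data format" the working group would define could take many shapes; as a purely illustrative sketch (every field name here is an assumption, not a proposed standard), a minimal incident record might look like the following:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class AIIncidentReport:
    """Hypothetical minimal record for a common AI harm reporting format.

    All field names are illustrative assumptions, not a proposed standard.
    """
    reporting_entity: str        # firm or agency filing the report
    system_name: str             # deployed AI system involved
    incident_date: str           # ISO 8601 date of the incident
    harm_description: str        # what happened, in plain language
    severity: str                # e.g. "low" | "moderate" | "severe"
    people_affected: int         # scale/exposure of the harm
    near_miss: bool = False      # actual harm vs. a near-miss
    follow_up: list[str] = field(default_factory=list)  # e.g. ["root-cause analysis"]

    def to_json(self) -> str:
        """Serialize to the kind of consistent format a submission portal might accept."""
        return json.dumps(asdict(self), indent=2)

report = AIIncidentReport(
    reporting_entity="Example Corp",
    system_name="resume-screening-model-v2",
    incident_date="2023-05-01",
    harm_description="Qualified applicants systematically filtered out",
    severity="moderate",
    people_affected=1200,
    follow_up=["root-cause analysis"],
)
print(report.to_json())
```

A machine-readable schema of this kind is what would let multiple oversight agencies pool reports into a single dataset rather than maintaining incompatible formats.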
By creating this infrastructure, agencies with existing risk-management expertise, like the FCC, FAA, CFPB, SEC, CFTC, and others, with the assistance of organizations like NIST, can pressure-test incident reporting tools before rolling out mandatory reporting requirements for systems of concern. Existing incident databases like the AI Incident Database can provide baseline data and best practices.
As Senator Lankford noted during the hearing, definitions matter, and defining an AI harm incident is challenging. CSET’s research has defined AI harms as “when an entity experiences harm (or potential for harm) that is directly linked to the behavior of an AI system.” The burden of mandatory reporting should be limited to actual and near-miss harms inflicted on a person or property above some threshold value; for example, an AI that somehow causes a printer to jam and shred a piece of paper is not worth reporting. Relevant thresholds to consider concern the severity, scale/exposure, and frequency of the harm incident.
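The threshold idea can be sketched as a simple filter over the three dimensions named above. The cutoff values below are illustrative assumptions, not proposed policy:

```python
# Hypothetical reporting-threshold filter: an incident is reportable only if it
# clears a minimum bar for severity, scale/exposure, or frequency.
# All cutoff values are illustrative assumptions, not proposed policy.
SEVERITY_LEVELS = {"negligible": 0, "low": 1, "moderate": 2, "severe": 3}

def is_reportable(severity: str, people_affected: int, occurrences_per_year: int) -> bool:
    if SEVERITY_LEVELS[severity] >= SEVERITY_LEVELS["moderate"]:
        return True                      # severe-enough harms are always reported
    if people_affected >= 100:
        return True                      # large exposure, even at low severity
    return occurrences_per_year >= 12    # chronic low-level harms accumulate

# The jammed-printer example falls below every threshold:
print(is_reportable("negligible", people_affected=1, occurrences_per_year=1))  # False
```

Writing the thresholds down this explicitly also shows why they should be revisable: each cutoff is a policy choice the working group would need to tune as reports come in.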
Recommendation 2: Use AI harm/incident reporting to inform legislation, liability, and regulatory rule-making around AI systems.
Once the working group has established an AI incident reporting framework, Congress should require mandatory AI incident reporting by firms doing business in the U.S. The systematic collection of data on AI harm incidents enables regulators to identify priority areas for intervention. Congress can leverage the incident reports to inform legislation and rule-making on AI.
AI systems used in applications like autonomous vehicles, drones/air vehicles, financial systems, healthcare delivery, employment, education testing and data management, law enforcement, and information delivery are likely to be of significant importance to a mandatory reporting regime. These uses are already under the jurisdiction of different departments and agencies, including the Department of Transportation (NTSB, FAA), Treasury (SEC, CFTC, OCC), HHS (CMS, FDA), EEOC, Department of Education, Justice, and the FCC, among others. Mandatory requirements should be cascaded via normal regulatory pronouncements by oversight agencies and incorporated into federal contract requirements via the General Services Administration, Department of Defense, grants via the National Science Foundation, and other contracting vehicles. The reporting requirements must be expansive enough to enable updates and refinements as more is learned from initial reports and from other regulatory efforts around the world, and as the technologies evolve. Especially in the initial phase of implementation, reporting requirements should be regularly adjustable, with periodic reviews perhaps every six months.
Recommendation 3: Build on current NSF efforts to define a set of criteria to identify critical emerging technologies.
Incident reporting information will allow agencies to collect information and nimbly respond to the risk of harms posed by systems already deployed, but another concern echoed by HSGAC committee members was staying on top of ‘what’s next?’ We believe that list-based approaches (i.e., annually updated, exclusively expert-determined, agency-specific lists of critical emerging technologies) will not work; instead, we need to develop a standard set of criteria and actively scan the horizon. A possible starting point for developing these criteria is a series of basic questions like:
- What new research areas are advancing most rapidly?
- Which novel applications of research show evidence of being implemented, or where is research being adopted and applied in novel ways, in sufficient quantity to have potential impact?
- Where are we seeing these new areas and applications, in terms of geography and sector?
- Do the new areas and applications create potential for significant impact on U.S. national security, economic competitiveness, and/or societal well-being?
Recommendation 4: Monitor developments along established criteria to surface new technologies or applications that may require oversight.
We have long highlighted the benefits of a National Science and Technology Analytic Capability. This capability would enable the USG to monitor S&T emergence across a number of criteria, like those above, and would provide an ‘early-warning’ system for new technical advancements. Certain especially risky technologies may require greater scrutiny. Once such a technology is identified, as AI already has been, this proposed action plan could be applied to mitigate its potential harms and maximize its potential benefits.
Once new capabilities or applications are identified, questions to consider in determining whether they warrant additional action, and what that action should look like, include:
- Do we have the plans, authorities, and tools necessary to mitigate the threat or capitalize on the opportunity? If not, what can we easily update?
- Are our plans, authorities, and tools easily adaptable if and when the landscape changes? Do we have the information necessary to know when that adaptation needs to happen?
Recommendation 5: Call upon OMB to publicly share information regarding agency implementation plans for AI in Government and existing authorities to regulate AI.
In November 2020, OMB released M-21-06 asking agencies to provide information on their plans for AI by May 2021. Appendix B included a request to “list and describe any statutes that direct or authorize [the agency] to issue regulations specifically on the development and use of AI applications.” We are not aware of the outcome of this OMB request or subsequent action following the AI in Government Act of 2020, but agency information about existing statutory authorities is a good starting point for examining what regulations require refinement or clarification from Congress and what additional authorities may be required to meet the current moment.
Software Liability
Recommendation 6: Review key federal statutes, and outline necessary reforms and clarifications to ensure appropriate software liability for AI developers who fail to implement basic best practices including security and reporting.
Given the novelty of these technologies, Congress should evaluate whether any current laws contain loopholes that firms could exploit if they are negligent in undertaking appropriate (and industry-standard) testing and evaluation of AI systems before deploying them to the public. If AI developers and deployers can release half-baked systems with impunity, we are at risk, especially as it relates to AI security. In its March 2023 National Cybersecurity Strategy, the White House already proposed shifting greater liability onto software vendors for security flaws in their software. We believe similar accountability must be applied to AI developers who release faulty AIs due to negligent testing, but this area still needs further work, and CSET does not have much research exploring options in this domain.
Some initial ideas proposed by others to implement this include:
- Build on proposals in H.R.5793 of 2014 and EO14028 to implement “Software Bill of Materials (SBOM)” requirements for all deployed AI tools, so that if bugs are discovered in software packages used to create AI systems, the affected systems can quickly be identified and remedied.
- Evaluate whether legislative proposals (like S.4913-Securing Open Source Software Act of 2022) could be modified and reintroduced to require software companies to comply with existing standards for securing software before deployment to improve software security overall, and also ensure AI tools meet these minimum standards.
- Delegate these regulatory responsibilities to an agency like the FTC.
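The SBOM idea in the first bullet can be illustrated with a short sketch: given a simplified bill of materials for a deployed AI tool, anyone can check whether a newly disclosed vulnerable package version is present. The SBOM layout, package names, and advisory below are illustrative assumptions, not the CycloneDX or SPDX specifications:

```python
# Sketch of the SBOM idea: a simplified, hypothetical bill of materials for a
# deployed AI tool, checked against a newly disclosed vulnerability advisory.
# Layout and names are illustrative, not the CycloneDX/SPDX formats.
sbom = {
    "tool": "example-ai-assistant",
    "components": [
        {"name": "numpy", "version": "1.24.0"},
        {"name": "somelib", "version": "2.1.3"},
    ],
}

def affected_components(sbom: dict, advisories: dict[str, set[str]]) -> list[str]:
    """Return names of components whose pinned version matches an advisory."""
    hits = []
    for comp in sbom["components"]:
        if comp["version"] in advisories.get(comp["name"], set()):
            hits.append(comp["name"])
    return hits

# Hypothetical advisory: somelib 2.1.3 has a disclosed flaw.
advisories = {"somelib": {"2.1.3"}}
print(affected_components(sbom, advisories))  # ['somelib']
```

The policy point is that this check is only possible when vendors are required to publish the bill of materials in the first place; without it, tracing a flawed package to the AI tools that embed it is guesswork.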
Once the basis for AI liability is established, Congress (or its designee) should also provide clarity on balancing liability between AI developers and deployers, particularly in cases where an AI system is tailored to a specific client’s needs. For example, outlining liability based on the cause of a harm (e.g., underlying algorithmic bias vs. using a system beyond its intended scope) could help clarify liability between developer and deployer.
At a minimum, Congress should provide greater guidance on these liability considerations before more firms enter the AI space, because these regulations will become harder to reform as firms become more numerous and powerful.
Recommendation 7: Direct the development of third-party auditing and red-teaming standards to support testing and evaluation of AI systems before deployment.
While re-evaluating liability protections, Congress can support the development of a common set of best practices for evaluating and testing high-risk AI systems before deployment. Many leading tech firms have established red teams to identify and mitigate risks, but each firm is likely evaluating to a different standard. Common standards to evaluate AI systems used in high-risk contexts, such as healthcare, employment, education, or law enforcement, are equally lacking. Congress should direct NIST, working with other agencies with domain-relevant expertise (e.g., CISA, EEOC), to convene a private sector-civil society-government task force to devise minimum standards for High Risk AI Systems. Failure to comply with these terms could result in penalties, or those certifying compliance could be provided a limited liability shield of some kind.
Federal Talent Plan
Recommendation 8: Encourage OPM’s ongoing efforts to enable agencies to consider skills-based qualifications, community college degrees and technical certifications in lieu of set academic credentials (bachelor’s degrees) for high-demand technical fields.
One of the government’s greatest challenges is attracting and retaining AI-enabled talent. We have found that 2-year degrees and non-degree credentials may be a viable alternative to 4-year+ degree programs and may enable firms, including the USG, to quickly upskill their workforce. However, USAJobs hiring requirements that typically stipulate minimum degree requirements (BA or BS) and disqualify candidates who do not have a degree may be unduly limiting hiring pools. While OPM has reportedly been working to review job requirements to “open the aperture on who can take these jobs,” the state of the roll-out plan is not evident to us. Congress could direct OPM to relax these requirements, provide additional pathways into the federal workforce, and create alternative training pipelines, especially for Cybersecurity and Information Technology roles.
AI creates an exciting opportunity for the USG to pivot and develop greater policy agility for the emerging technologies that are likely to come next. While we can’t predict whether quantum or synthetic bio will break out and change the world tomorrow, we know that AI is subtly and not so subtly changing the world under our feet. Now is a time for American leadership, but we don’t need to wait for a single overarching legislative package to meet this moment. Small, incremental, iterative steps, supported by data and information sharing, are what will help us deliver on this opportunity.