On October 30, 2023, the Biden administration released the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI). The release demonstrated the White House’s commitment to standards for AI safety and security; to protecting privacy and civil rights for workers and consumers; and to boosting innovation and competition to ensure global leadership in AI. Several respected voices in technical policy have weighed in with concerns about the lack of balance in the policy discussion between the restrictions that come with governance and the opportunity unleashed by innovation. In a recent article, Klon Kitchen of the American Enterprise Institute put it simply: “…policy should not come at the expense of innovation.” The question emerges: Is it possible to apply AI standards, governance, safeguards, and protections in a manner that stimulates innovation, competitiveness, and global leadership for the United States in AI?
Balance Needed in the AI Technical Policy Discussion
In that recent article, Kitchen calls generative AI a “national security lifeline” that presents the United States with the opportunity “to advance innovation, boost economic readiness, and maintain global leadership in an emerging technology critical to US security interests.” However, as he points out in congressional testimony, it is important that government and industry work together both to realize the promise of AI and to mitigate perceived threats to the economic and military security of the United States. Effective technical policy recommendations for U.S. national security must balance goals focused on the promise of AI with those focused on its threats.
Another recent article identified three major camps in the AI policy discussion: “progressives” focused on social issues, “longtermists” concerned about the extinction of humanity, and “AI hawks” raising alarms about national security. The article observes that all three camps focus on governing AI threats, albeit from different angles and with different priorities and objectives. This framing of the AI policy discussion is insightful, but it reveals an unbalanced perspective: the major focus is on AI threats, with no explicit mention of opportunity. Because each camp’s perspective would ultimately lead to some form of regulation, each must be balanced by consideration of the impact on opportunity.
Balancing the Long Term and the Near Term
A recent article from the Carnegie Endowment for International Peace points to a specific imbalance emerging in conversations critical to the future of AI governance, noting that talk of AI “superintelligence” has begun to shape the policy discussion. Longtermists and others in Silicon Valley believe that AI superintelligence, surpassing human abilities across virtually all domains, is just around the corner, perhaps as little as five years away. The Carnegie article cites prominent recent references to superintelligence by the British prime minister, the U.S. president, U.S. congressional hearings, and the United Nations Security Council as evidence that the evolving policy discussion needs clarity about such hypothetical predictions. Ultimately, a policy too heavily focused on hypothetical concerns about superintelligence could crowd out more near-term governance, safety, and security concerns, and may well stifle innovation.
The potential impact of policy discussions on AI workforce and talent development reinforces the need to balance the governance discussion with consideration of the opportunities associated with AI. Andrew Ng, an AI and machine learning pioneer, recently posted on X suggesting that young students are being discouraged from entering AI by fears of contributing to human extinction. This can be seen as a tangible impact of the ongoing technical policy discussion, in which “longtermist” messaging sounds the alarm on existential threats. It poses a challenge to AI workforce development at a time when growth of that workforce must accelerate to maintain global competitiveness.
Balance in Social and Security AI Policy Considerations
The executive order on AI seeks to strike a balance in the technical policy discussion around civil rights and AI. Speaking during a technology panel sponsored by the Congressional Black Caucus Foundation in September of this year, Dr. Ashley Llorens, a Microsoft Vice President and Distinguished Scientist, framed the issue well, stating, “…if our collective work in AI & Society stops at mitigating harms then we’re really leaving a lot on the table for historically disadvantaged communities.” Opportunities exist to tap underused pools of talent from historically disadvantaged communities to address lagging production of domestic STEM talent. Talent from these communities is typically underrepresented among owners, founders, and discoverers in emerging technologies. Growing this critical, underdeveloped talent base will bolster U.S. economic prosperity and global competitiveness. Actions on governance across competing perspectives in the technical policy discussion must help, not hinder, that effort.
Calls for action on civil rights in housing, federal benefits and contracting, and criminal justice included in the executive order on AI would directly affect historically disadvantaged populations. These calls build upon actions the administration has already taken, including publishing the Blueprint for an AI Bill of Rights and issuing a previous executive order directing agencies to combat algorithmic discrimination. Balance is needed in addressing these calls to ensure that policy frameworks intended to protect civil rights do not forestall innovation that could strengthen the very communities those frameworks are intended to serve.
On national security, the executive order aligns well with Department of Defense policy on the ethical and responsible use of AI. However, according to Kitchen and others, emerging new standards will affect DOD adoption and acquisition of AI, testing and safety of defense AI systems, and AI talent development for the DOD, all issues addressed in some capacity in the executive order. Much will depend on how individual agencies and departments implement the order’s guidance. For example, how AI companies seeking to do business with the DOD will be affected by these new standards, how new red-teaming standards will shape AI safety, and how the standards will address gaps in DOD AI talent are all questions whose answers will be influenced by today’s AI policy discussion. Industry, the DOD, and the U.S. Congress must work together to ensure that new standards do not limit the commercial sector’s ability to bring its full capability to bear.
A Balanced Path Forward: Why?
Unbalanced policy discussions run the risk of outcomes that swing the pendulum too far in one direction. While course correction in the face of such outcomes may be possible, it can be costly, time-consuming, labor-intensive, and an unnecessary drain on precious resources at this time of vigorous global competition. The policy community should heed the voices raised within its own ranks on the need to balance critically needed action on governance with the pursuit of AI opportunity. The balancing act must span a wide field of goals and objectives concerning continually emerging AI threats, in an environment of growing competition for leadership in the promising AI innovations that lie just over the horizon. In this era of great power competition, perhaps the best measure of each proposal on governance is the yardstick of opportunity: does it maintain a balanced focus on leading AI innovation while upholding the highest principles of American society?