Chairwoman Johnson, Ranking Member Lucas, and members of the Committee, thank you for the opportunity to testify on this critical subject. My testimony is informed by my work at OpenAI, an artificial intelligence research and development company seeking to build general-purpose AI systems that benefit all of humanity. It is also informed by my work as a member of the Steering Committee of the AI Index, a Stanford initiative to track, measure, and analyze the progress and impact of AI technology.
When thinking about the ethical and societal challenges of AI, we must remember that AI is a product of the environment in which it is developed, and that it reflects the inherent biases of the people and institutions that built it. When we consider how AI interacts with society, therefore, we should view it as a social system rather than a purely technological one, and this view should guide the policies we adopt to govern it.
For the purposes of this hearing I will discuss a relatively narrow subset of AI: recent advances in machine learning, oriented around pattern recognition. Some of these techniques are relatively immature, but they have recently become ‘good enough’ for a variety of deployment use cases. Crucially, ‘good enough’ is not the same as ‘ideal’, and ‘good enough’ systems exhibit a range of problems and negative externalities that demand careful consideration during deployment. And whenever a system is ‘good enough’, we should ask ‘good enough for whom?’
In this testimony, I will:
- Briefly outline recent progress in the field of artificial intelligence.
- Outline some of the ways in which contemporary and in-development systems can fail.
- Discuss the tools we have today to deal with such failures.
- Outline how government, industry, and academia can collectively address concerns around the development and deployment of AI systems.