Ethics

Policy and research communities strive to mitigate AI harms while maximizing its benefits. Achieving effective and trustworthy AI requires a shared language. Analysis of policies across different countries and of the research literature identifies consensus on six critical concepts: accountability, explainability, fairness, privacy, security, and transparency.

Should we be concerned about the future of artificial intelligence?

Australian Broadcasting Corporation
| February 22, 2024

In an Australian Broadcasting Corporation 7.30 segment discussing concerns about the current understanding of AI technology, Helen Toner provided her expert insights.

In a report for the Observer Research Foundation, Research Analyst Husan Chahal writes about the ethics of artificial intelligence and how the multitude of efforts across a diverse group of stakeholders reflects the need for guidance in AI development.

Truth, Lies, and Automation

Ben Buchanan, Andrew Lohn, Micah Musser, Katerina Sedova
| May 2021

Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to analyze its potential misuse for disinformation. A model like GPT-3 may be able to help disinformation actors substantially reduce the work needed to write disinformation while expanding its reach and potentially also its effectiveness.