Artificial intelligence (AI) governance is a pressing policy issue. Policymakers and the public are concerned about the rapid adoption and deployment of AI systems. To address these concerns, governments are taking a range of actions, from formulating AI ethics principles to compiling AI inventories and mandating AI risk assessments. But efforts to ensure AI systems are safely developed and deployed require a standardized approach to classifying the varied types of AI systems.
This need motivated the OECD and its partners to explore the potential of frameworks (structured tools to distill, define, and organize complex concepts) to enable human classification of AI systems. While discussions were underway at the OECD about what such a framework should look like, Georgetown University's Center for Security and Emerging Technology (CSET) began to explore how the candidate frameworks could be tested. CSET's premise was that a framework must be both understandable and able to guide users to consistent, accurate classifications; otherwise, it is of little use to policymakers or the public.
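To make "consistent classifications" concrete: one common way to quantify whether different users reach the same classification under a framework is inter-rater agreement, for example Fleiss' kappa computed across several annotators who classify the same systems. The sketch below is purely illustrative, not CSET's published methodology, and all labels and data in it are hypothetical.

```python
# Illustrative sketch only: measuring classification consistency with
# Fleiss' kappa. The categories and ratings below are hypothetical.
from collections import Counter

def fleiss_kappa(ratings: list[list[str]]) -> float:
    """ratings[i] holds each rater's label for item i (equal rater count per item)."""
    n_items = len(ratings)
    n_raters = len(ratings[0])
    categories = sorted({label for item in ratings for label in item})

    # n_ij: how many raters put item i in category j
    counts = [[Counter(item)[c] for c in categories] for item in ratings]

    # Per-item observed agreement P_i, averaged into P-bar
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_items

    # Chance agreement P_e from the category marginals
    p_j = [sum(row[j] for row in counts) / (n_items * n_raters)
           for j in range(len(categories))]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 3 annotators classify 4 AI systems into risk tiers.
ratings = [
    ["high", "high", "high"],
    ["low", "low", "medium"],
    ["medium", "medium", "medium"],
    ["high", "medium", "high"],
]
print(f"Fleiss' kappa: {fleiss_kappa(ratings):.2f}")  # ~0.47 here
```

A kappa near 1 indicates that annotators largely agree beyond what chance would produce, while values near 0 suggest the framework is not guiding users to consistent classifications.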
CSET recently published a report outlining findings from testing various frameworks and launched an interactive website where you can explore the frameworks, review framework-specific classification performance, and even try classifying a few systems yourself. This piece summarizes highlights from that research and from CSET's ongoing work with the OECD on this topic.
Read the full article at OECD.AI