The United States and four key allies (Australia, Canada, Japan, and the United Kingdom) share common principles for trustworthy AI: accountability, explainability, fairness, privacy, security, and transparency. However, these countries' policy documents define the principles differently, and those variations could substantially affect interoperability and commercial exchanges and hamper the development of international norms. This brief builds on our prior work on the use of trustworthy AI terms in the U.S. and scrutinizes how those terms are used in the national guidelines and policy statements of these four key American allies. We find:
- High-level principles around responsible AI are similar in spirit but differ in their details. Those differences do not represent substantial disagreements among the countries on trustworthy AI terms, but they can become problematic as countries build specific guidance, and eventually regulations, atop the principles.
- All countries value accountability and aim to hold a human responsible for harm caused by an AI system, but they vary on who should be accountable. Differing expectations about the timeliness of accountability processes, or about whether compensation is owed, will complicate international exchanges.
- For explainability and understandability, countries diverge on two core issues: the audience for an explanation and the expected subject of that explanation. Some countries offer specific guidance on which audiences require explanations, while others remain broad and vague. AI products in use across these nations will have to account for these varied, country-specific expectations, which may burden developers more than they help users. Conflicts could also arise when one nation expects certain data to be included in an explanation while another considers including that data problematic for security or privacy reasons.
- Bias and discrimination are universal concerns where fairness is at issue, but beyond that, definitions of fairness depend on culture and context. Even among the five allies studied here, expectations differ on how much affected users should be involved in defining fairness or in pursuing accountability for an unfair system.
- All nations value privacy but differ on what is considered private, how privacy should be achieved, and who is responsible for protecting it. On what should be protected, only the U.S., for example, mentions intellectual property. On how privacy should be achieved, only the UK guides developers to minimize the retention of private information.
- On transparency, all countries are clear on the need to at least disclose to users that they are interacting with an AI system, if not to gain their consent before the interaction. However, the countries do not agree on which situations necessitate disclosure or consent.
- Security is often closely linked to broader data and cybersecurity policies. However, not all countries explicitly include malicious attacks in their lists of concerns.
Our analysis is limited to policy statements as they exist today, and each country would likely consider its governance of AI a work in progress. But this also means there is an opportunity now to influence the development of these policies, before they harden into more specific guidance and regulation and while many countries remain interested in building an international consensus on AI governance.