The following white paper co-authored by a PRC state think tank describes the importance and difficulty of improving the “trustworthiness” of AI systems. The authors recommend increased use of methods such as federated learning and differential privacy to strengthen AI systems’ capability to withstand cyberattacks. The white paper’s policy recommendations include drafting more Chinese legislation related to trustworthy AI, developing commercial AI insurance policies, and taking a cautious approach to research on artificial general intelligence (AGI).
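To make the privacy-protection techniques named above concrete: differential privacy typically works by adding calibrated random noise to query results so that no single individual's data can be inferred. The sketch below is a minimal, illustrative Laplace-mechanism example and is not drawn from the white paper itself; the function names, parameters, and default values are assumptions for illustration only.

```python
import math
import random


def laplace_noise(scale, rng):
    """Draw one sample from a Laplace(0, scale) distribution
    via inverse-CDF sampling from a uniform variate."""
    u = rng.random() - 0.5
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))


def private_count(true_count, epsilon, sensitivity=1.0, seed=None):
    """Return a differentially private version of a count query.

    epsilon: privacy budget (smaller = stronger privacy, more noise).
    sensitivity: max change in the count from one individual (1 for counts).
    seed: optional, for reproducible noise in demos/tests (hypothetical
    convenience parameter, not part of any standard DP library API).
    """
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return true_count + laplace_noise(scale, rng)
```

With a large privacy budget (high epsilon) the reported count stays close to the true value; shrinking epsilon increases the noise scale, trading accuracy for stronger privacy guarantees.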
The Chinese source text is available online at: http://m.caict.ac.cn/yjcg/202107/P020210709319866413974.pdf
An archived version of the Chinese source text is available online at: https://perma.cc/9XZR-8KNE
U.S. $1 ≈ 6.5 Chinese Yuan Renminbi (RMB), as of August 25, 2021.
White Paper on Trustworthy Artificial Intelligence
China Academy of Information and Communications Technology (CAICT)
JD Explore Academy
At present, the new generation of artificial intelligence (AI) technology is developing rapidly, and its penetration into all fields of society is accelerating, bringing profound changes to human production and life. While AI brings great opportunities, it also contains risks and challenges. When presiding over the ninth collective study session of the Central Committee Politburo in October 2018, General Secretary Xi Jinping emphasized that “it is necessary to strengthen our judgment of the potential risks of the development of artificial intelligence and to strengthen our watchfulness against them, to safeguard the interests of the people and national security, and to ensure the security, reliability, and controllability of artificial intelligence.” Enhancing confidence in the use of AI and promoting the healthy development of the AI industry have become important concerns.
The development of trustworthy AI is becoming a global consensus. In June 2019, the Group of Twenty (G20) proposed the “G20 AI Principles,” emphasizing the need to be people-centered (以人为本) and to develop trustworthy AI. These principles have been universally recognized by the international community. The European Union (EU) and the United States have likewise placed the enhancement of user trust and the development of trustworthy AI at the core of their AI ethics and governance. Going forward, translating abstract AI principles into concrete practices and embedding them in technologies, products, and applications is an inevitable choice for responding to social concerns, resolving outstanding contradictions, and preventing security risks. It is an important issue bearing on the long-term development of AI and an urgent task that industry must quickly address.
Whether reviewing the background and history of trustworthy AI or looking forward to the future of new generation AI, this white paper holds that the stability, explainability, and fairness of AI are the core issues of concern to all parties. Given the present circumstances, this white paper takes the implementation of the global AI governance consensus as its starting point, focusing on trustworthy AI technology, industry, and industry practices. It analyzes credible paths to achieving AI that is controllable, reliable, transparent, and explainable, that protects privacy, that assigns clear responsibility, and that is diverse and inclusive, and it puts forward suggestions for the future development of trustworthy AI.
Since AI is still in a stage of rapid development, our understanding of trustworthy AI must be further deepened. We welcome criticism and corrections of any deficiencies in this white paper.
For the rest of the translation, download the PDF below.