Global Challenges in the Standardization of Ethics for Trustworthy AI

In this paper, we examine the challenges of developing international standards for Trustworthy AI that aim both to be globally applicable and to address the ethical questions key to building trust at a commercial and societal level. We begin by examining the validity of grounding standards that aim for international reach in human rights agreements, and the need to accommodate variations in how rights are prioritized and traded off when implemented in different societal and cultural settings. We then examine the major recent proposals from the OECD, the EU and the IEEE on ethical governance of Trustworthy AI systems in terms of their scope and use of normative language. From this analysis, we propose a preliminary minimal model of the functional roles relevant to Trustworthy AI as a framing for further standards development in this area. We also identify the different types of interoperability reference points that may exist between these functional roles and remark on the potential role they could play in future standardization. Finally, we examine a current AI standardization effort under ISO/IEC JTC1 to consider how future Trustworthy AI standards might build on existing standards in developing ethical guidelines, in particular the ISO standard on Social Responsibility. We conclude by proposing some future directions for research and development of Trustworthy AI standards.
