On June 9, 2019, the G20 Trade Ministers and Digital Economy Ministers issued a statement supporting “human centered artificial intelligence” (AI) (the G20 Statement) and endorsing the OECD Principles on AI adopted on May 22, 2019 (the OECD Principles), along with the OECD Council Recommendation on AI (OECD Recommendation). This article summarizes the OECD Recommendation and the G20 Statement, discusses the OECD’s next steps, and offers our take on the implications of these and related initiatives.
The OECD Recommendation
The OECD Recommendation is the first intergovernmental standard on AI, and it complements the Ethics Guidelines for Trustworthy AI adopted by the European Commission’s High-Level Expert Group on AI in April 2019 (the EU Guidelines). The OECD Recommendation’s goal is to promote trustworthy AI, and to that end it identifies five complementary principles and five recommendations pertaining to national policies and international co-operation.
The OECD Recommendation’s five principles are:
- Inclusive growth, sustainable development and well-being (stakeholders should pursue beneficial outcomes, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments).
- Human-centred values and fairness (AI actors should respect the rule of law, human rights and democratic values (such as freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, fairness, social justice and internationally recognised labour rights), including by implementing appropriate mechanisms and safeguards such as the capacity for human determination).
- Transparency and explainability (AI actors should disclose meaningful information to foster a general understanding of AI systems, make stakeholders aware of their interactions with AI systems (including in the workplace), enable those affected by an AI system to understand the outcome, and enable those adversely affected by an AI system to challenge the outcome).
- Robustness, security and safety (AI systems should be robust, safe and secure throughout their entire lifecycles; AI actors should ensure traceability of the datasets, processes and decisions made during the AI system lifecycle, so that AI systems’ outcomes can be analysed; and AI actors should apply a systematic risk management approach throughout AI systems’ lifecycles).
- Accountability (AI actors should be accountable for the proper functioning of AI systems).
The OECD Recommendation’s five recommendations for national policies and international co-operation are:
- Investing in AI research and development (governments should consider long-term investments and encourage private investments to spur innovation in trustworthy AI, focusing on technical issues and on AI-related social, legal and ethical implications and policy issues, with the aim of improving interoperability and use of standards).
- Fostering a digital ecosystem for AI (governments should develop digital ecosystems for trustworthy AI, incorporating infrastructure and mechanisms for sharing AI knowledge such as data trusts, to support safe, fair, legal and ethical sharing of data).
- Shaping an enabling policy environment for AI (governments should support an agile transition from R&D to the deployment and operation of trustworthy AI systems, including promoting “sandboxes” in which AI systems can be tested and scaled up and reviewing and adapting policy and regulatory frameworks).
- Building human capacity and preparing for labour market transformation (governments should prepare for economic transformation, including equipping their people with necessary skills, engaging in a social dialogue, promoting training programmes and supporting displaced workers and working with stakeholders to promote the responsible use of AI).
- International co-operation for trustworthy AI (governments should cooperate in the OECD and other fora to advance the principles in the OECD Recommendation, foster the sharing of AI knowledge and global technical standards for interoperable and trustworthy AI, and encourage the development and use of internationally comparable metrics).
The OECD Council instructed the Committee on Digital Economy Policy (CDEP) to continue to promote international co-operation for trustworthy AI. The CDEP was in charge of developing the OECD Recommendation, supported by a 50-person expert group formed in May 2018, which met four times before a draft of the OECD Recommendation was adopted in March 2019. The CDEP will now create an OECD AI Policy Observatory, which should be operational by late 2019, to serve as a hub for public policy on AI. The CDEP will report to the Council on the OECD Recommendation’s implementation and continued relevance for the first time in 2024.
The G20 Statement
In addition to endorsing the OECD Principles and the OECD Recommendation, discussed above, the G20 Statement links human-centered AI to the protection of privacy and personal data. The G20 Statement also addresses a number of complementary digital economy issues. For example, it supports the interoperability of different frameworks to facilitate cross-border data flows; interoperable standards and regulatory cooperation to promote agile and flexible policy approaches; and further work to improve security, including in relation to the Internet of Things, to promote trust in the digital economy.
Our Take
The plethora of international policy initiatives relating to AI and other digital economy issues, such as big data, can be confusing. There is also an understandable tendency to pay them little attention, since they are for the most part general and, in any case, non-binding.
But we believe ignoring the OECD Principles and similar initiatives would be a mistake. They reflect significant political pressure. Moreover, while general, several of the OECD Principles lend themselves to being incorporated into new laws or regulations: examples include the principles of transparency, explainability and accountability, and the support for data sharing. The political pressure to consider new regulation in these areas is likely to continue and may become a key focus of the new European Commission, which is due to take office in late 2019.
Jay Modrall, Partner, Norton Rose Fulbright Brussels LLP

James R. Modrall is an antitrust and competition lawyer based in Brussels. He joined Norton Rose Fulbright LLP in September 2013 as partner, having been a resident partner in a major US law firm since 1986. A US-qualified lawyer by background, he is a member of the bar in New York, Washington, D.C. and Belgium. With 27 years of experience, he is a leading advisor for EU and international competition work, in particular the review and clearance of international mergers and acquisitions. Mr Modrall also has extensive experience with EU financial regulatory reform, advising the world’s leading private equity groups in connection with the new EU directive on alternative investment fund managers, and leading banks and investment firms on EU initiatives including EU regulation of derivatives, EU reforms in financial market regulation and the creation of a new EU framework for crisis management, among others. Mr Modrall’s native language is English, and he is fluent in Italian and proficient in Dutch and French.