European Commission’s High-Level Expert Group on Artificial Intelligence Publishes Artificial Intelligence Ethics Guidelines

Posted by Jay Modrall on 12 April 2019

April 10, 2019

On April 8, the European Commission’s High-Level Expert Group on Artificial Intelligence (Expert Group) published its Ethics Guidelines for Trustworthy AI (Guidelines). The Guidelines are the first deliverable under the European Union’s (EU) April 2018 Communication on Artificial Intelligence for Europe (AI Strategy). The AI Strategy comprises a three-step approach: setting out the key requirements for trustworthy Artificial Intelligence (AI), launching a large-scale pilot study, and working on international consensus building for “human-centric” AI.

The Guidelines represent the first concrete step in the AI Strategy, setting out requirements for trustworthy AI and draft “assessment lists” to assist companies in implementing the Guideline requirements. The explosive growth of AI raises many legal issues, including how to apply traditional liability principles when harm results from a decision made by AI, and the risk that AI will be used to implement antitrust violations such as price fixing, or even engage in collusion without human intervention. The Guidelines do not address such legal issues directly, focusing instead on the ethical design, deployment and use of AI and the robustness of AI systems. But a number of the Guideline requirements, especially those relating to human oversight, transparency and accountability, have significant implications for the legal debates.

In summer 2019, the Commission will launch a pilot study to test how the requirements can be implemented in practice. In early 2020, the Expert Group will review and revise the assessment lists, building on feedback from the pilot phase. The Expert Group is also charged with preparing a second deliverable, “Policy and Investment Recommendations for AI.”

The Guidelines

The Guidelines define trustworthy AI in terms of three components. Trustworthy AI should be:

  • lawful, complying with all applicable laws and regulations;
  • ethical, ensuring adherence to ethical principles and values; and
  • robust, both from a technical and social perspective.

The Guidelines focus on the requirements for ethical and robust AI, assuming that the development, deployment and use of AI will comply with mandatory laws and regulations. However, the principles and requirements set out in relation to ethical and robust AI may have legal implications for developers and users of AI.

Ethical and Robust AI

The Guidelines set out seven requirements in relation to trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal well-being; and accountability. When implementing these requirements, trade-offs are inevitable. Such trade-offs should be explicitly acknowledged, evaluated in terms of their risk to ethical principles, and properly documented. Accessible mechanisms should be foreseen to provide adequate redress for damages.

Human agency and oversight. The requirement of human agency and oversight entails that users should be given the knowledge and tools to comprehend and interact with AI systems to a satisfactory degree and, where possible, be enabled to reasonably self-assess or challenge the system so that they can make informed autonomous decisions regarding AI systems. Users have a right not to be subject to a decision based solely on automated processing when this produces legal effects or similarly significantly affects them. To this end, AI systems should involve human oversight, which may be achieved through governance mechanisms such as a human-in-the-loop (HITL), human-on-the-loop (HOTL), or human-in-command (HIC) approach. In general, the less oversight humans exercise over AI systems, the more extensive testing and stricter governance is required.
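The Guidelines describe these oversight models only at the level of principle and do not prescribe any implementation. Purely as an illustration of the idea, a human-in-the-loop gate could route any automated decision that produces legal effects, or that the model is unsure about, to a human reviewer. The Python sketch below is hypothetical; the class, function names and confidence threshold are assumptions, not taken from the Guidelines.

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    subject: str          # the person or case the decision concerns
    outcome: str          # e.g. "approve" or "reject"
    confidence: float     # model confidence in [0, 1]
    legal_effect: bool    # does the decision produce legal or similarly significant effects?

def requires_human_review(decision: AutomatedDecision, threshold: float = 0.9) -> bool:
    """Simple human-in-the-loop gate: escalate decisions with legal effects
    or with low model confidence to a human reviewer."""
    return decision.legal_effect or decision.confidence < threshold

def finalise(decision: AutomatedDecision) -> str:
    if requires_human_review(decision):
        # In a real system this would queue the case for a human operator.
        return f"pending human review: {decision.subject}"
    return f"auto-finalised: {decision.subject} -> {decision.outcome}"

if __name__ == "__main__":
    print(finalise(AutomatedDecision("loan-123", "reject", 0.97, legal_effect=True)))
    print(finalise(AutomatedDecision("spam-456", "filter", 0.99, legal_effect=False)))
```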

Technical robustness and safety. Technical robustness and safety includes four distinct components: resilience to attack and security; fallback plan and general safety; accuracy; and reliability/reproducibility. AI systems should be developed with a preventative approach to risks, ensuring that they behave as intended while minimising and preventing harm, taking account of potential changes in their operating environments or the action of human and artificial agents. AI systems should be protected against attacks targeting the data, the model or the underlying infrastructure. For AI systems to be considered secure, possible unintended applications of the AI system (e.g. dual-use applications) and potential abuse of the system by malicious actors should be taken into account and steps taken to prevent and mitigate these risks.

AI systems should have a fallback plan in case of problems. This can mean that AI systems switch from a statistical to a rule-based procedure, or that they ask for a human operator before continuing their action. Processes to clarify and assess potential risks associated with the use of AI systems, across various application areas, should be established. The level of safety measures required depends on the magnitude of the risk posed by an AI system, which in turn depends on the system’s capabilities.
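As a hedged sketch of what such a fallback chain might look like (nothing in the Guidelines mandates this design, and the functions below are hypothetical placeholders), an AI system could try a statistical model first, fall back to a deterministic rule-based procedure when the model is too uncertain, and escalate to a human operator as a last resort:

```python
from typing import Optional

def statistical_model(x: float) -> Optional[str]:
    """Hypothetical learned classifier: returns None when it is too uncertain."""
    if abs(x) > 0.8:
        return "high" if x > 0 else "low"
    return None  # not confident enough -- trigger the fallback

def rule_based_procedure(x: float) -> Optional[str]:
    """Deterministic fallback rules covering a narrower, well-understood range."""
    if x == 0:
        return "neutral"
    return None  # rules do not apply -- escalate

def decide(x: float) -> str:
    """Fallback chain: statistical model -> rule-based procedure -> human operator."""
    for procedure in (statistical_model, rule_based_procedure):
        result = procedure(x)
        if result is not None:
            return result
    return "escalated to human operator"

if __name__ == "__main__":
    print(decide(0.95))  # handled by the statistical model
    print(decide(0.0))   # handled by the rule-based fallback
    print(decide(0.3))   # escalated to a human operator
```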

Accuracy pertains to an AI system’s ability to make correct judgements, for example to correctly classify information into the proper categories, or its ability to make correct predictions, recommendations, or decisions based on data or models. A high level of accuracy is especially crucial in situations where the AI system directly affects human lives.

Finally, the results of AI systems must be reproducible, as well as reliable. A reliable AI system is one that works properly with a range of inputs and in a range of situations. Reproducibility describes whether an AI experiment exhibits the same behaviour when repeated under the same conditions.
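In software terms, reproducibility is usually approached by controlling the conditions of a run, for example by fixing random seeds and avoiding hidden global state. The following minimal Python sketch is only an illustration of that idea and is not drawn from the Guidelines:

```python
import random

def run_experiment(seed: int, n: int = 5) -> list:
    """Toy 'AI experiment' run under controlled conditions: because the random
    seed is fixed and no hidden global state is used, repeating the run under
    the same conditions reproduces exactly the same behaviour."""
    rng = random.Random(seed)
    return [round(rng.random(), 4) for _ in range(n)]

if __name__ == "__main__":
    first = run_experiment(seed=42)
    second = run_experiment(seed=42)
    assert first == second, "the same conditions must yield the same results"
    print("reproducible results:", first)
```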

Privacy and data governance. The Guidelines distinguish three components to privacy and data governance: privacy and data protection; quality and integrity of data; and access to data. AI systems must guarantee privacy and data protection throughout a system’s entire lifecycle, including the information initially provided by the user, as well as the information generated about users over the course of their interaction with the system. Data collected on individuals must not be used to unlawfully or unfairly discriminate against them.

AI systems must also take account of socially constructed biases, inaccuracies, errors and mistakes in data, which need to be addressed before the AI is trained. Data integrity must also be ensured, because malicious data may change an AI system’s behaviour, particularly with self-learning systems. Processes and data sets used must be tested and documented, and data protocols should be put in place outlining who can access data and under which circumstances.
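As an illustration only (the checks and field names below are hypothetical and not prescribed by the Guidelines), a pre-training data-governance step might report on completeness, duplicates and how evenly a protected attribute is represented before a data set is used:

```python
from collections import Counter

def check_dataset(records: list) -> dict:
    """Simple pre-training data-governance report: completeness, duplicates,
    and the distribution of a (hypothetical) protected attribute 'group'."""
    return {
        "rows": len(records),
        "rows_with_missing_values": sum(
            any(value is None for value in record.values()) for record in records
        ),
        "duplicate_rows": len(records) - len({tuple(sorted(r.items())) for r in records}),
        "group_counts": Counter(record.get("group") for record in records),
    }

if __name__ == "__main__":
    data = [
        {"feature": 1.0, "group": "A", "label": 1},
        {"feature": None, "group": "B", "label": 0},  # incomplete record
        {"feature": 1.0, "group": "A", "label": 1},   # duplicate of the first record
    ]
    print(check_dataset(data))
```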

Transparency. The Guidelines’ transparency requirement encompasses the data, the system and the business models of an AI system. The Guidelines distinguish three elements: traceability, explainability and communication. The data sets and the processes that yield the AI system’s decision should allow for traceability, enabling identification of the reasons why an AI decision was erroneous. Explainability includes the ability to explain both the technical processes of an AI system and the related human decisions (e.g. application areas of a system).

Whenever an AI system has a significant impact on people’s lives, it should be possible to demand a suitable explanation of the AI system’s decision-making process in a timely and appropriate way, depending on the expertise of the stakeholder concerned. Explanations of the degree to which an AI system influences and shapes the organisational decision-making process, design choices of the system, and the rationale for deploying it should also be available.
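What counts as a suitable explanation will vary with the audience and the model, and the Guidelines do not tie the requirement to any particular technique. For a simple linear scoring model, one commonly used and easily communicated form of explanation is the per-feature contribution to the score; the sketch below, with hypothetical feature names and weights, is meant only to make the idea concrete:

```python
def explain_linear_score(weights: dict, features: dict) -> list:
    """For a linear scoring model, each feature's contribution (weight * value)
    can serve as a human-readable explanation of an individual decision."""
    contributions = {name: weights[name] * features.get(name, 0.0) for name in weights}
    # Sort by absolute influence so the most decisive factors come first.
    return sorted(contributions.items(), key=lambda item: abs(item[1]), reverse=True)

if __name__ == "__main__":
    weights = {"income": 0.6, "existing_debt": -0.8, "years_employed": 0.2}
    applicant = {"income": 1.2, "existing_debt": 0.9, "years_employed": 3.0}
    for feature, contribution in explain_linear_score(weights, applicant):
        print(f"{feature:>15}: {contribution:+.2f}")
```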

The AI system’s capabilities and limitations should be communicated to AI practitioners and end-users in an appropriate manner, which could include, for example, communicating the system’s level of accuracy. In particular, AI systems should not represent themselves as humans to users, and humans should have the right to opt out of an AI system and to choose human interaction instead.

Diversity, non-discrimination and fairness. The Guidelines identify three components of this principle: avoidance of unfair bias; accessibility and universal design; and stakeholder participation. Data sets used by AI systems may suffer from the inclusion of inadvertent historic bias, incompleteness and bad governance models. The continuation of such biases could lead to unintended prejudice and discrimination. Harm can also result from the intentional exploitation of consumer biases or from unfair competition, such as the homogenisation of prices by means of collusion or as a result of a lack of market transparency. Algorithms’ programming may also suffer from unfair bias, which can be counteracted by putting in place oversight processes to analyse and address the system’s purpose, constraints, requirements and decisions in a clear and transparent manner. AI systems should not have a one-size-fits-all approach and should consider “Universal Design” principles to address the widest possible range of users, following relevant accessibility standards.

Stakeholders who may directly or indirectly be affected by the system should be consulted before and after deployment. Longer-term mechanisms for stakeholder participation, for example by ensuring workers’ information, consultation and participation throughout the whole process of implementing AI systems in organisations, should be set up.

Environmental and societal well-being. This requirement entails the need to ensure that AI systems are developed, deployed and used in the most environmentally friendly way possible and with attention to their impact on social relationships and attachments. Similarly, the impact of AI systems should be assessed from a societal perspective, particularly in political decision-making and electoral contexts.

Accountability. The accountability requirement necessitates that mechanisms be put in place to ensure responsibility and accountability for AI systems and their outcomes, including auditability and mechanisms to minimise and report negative impacts. In applications affecting fundamental rights, including safety-critical applications, it should be possible for AI systems to be independently audited. AI systems must make it possible to report on actions or decisions that contribute to a given system outcome and to respond to the consequences. The Guidelines also recommend the use of impact assessments prior to and during the development, deployment and use of AI systems.

The Guidelines further discuss methods companies can use to implement the requirements, including both technical and non-technical methods. The technical methods include architectures that translate the Guideline requirements into the design of AI systems, for example through “white list” states or behaviours that the system should always allow, “black list” restrictions on states the system should never transgress, and mixtures of the two; “ethics and rule of law by design” to provide precise and explicit links between the abstract principles the system is required to respect and specific implementation decisions; “explanation methods” to understand why an AI system behaved in a certain way; new tools for testing and validating AI systems; and quality-of-service indicators to provide a baseline for AI systems’ testing and development.
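To make the white list/black list idea concrete (the action names below are invented, and the Guidelines do not prescribe any particular architecture), such constraints can be pictured as a thin layer between the model’s proposed actions and the outside world:

```python
ALWAYS_ALLOWED = {"log_decision", "notify_user"}           # "white list" actions
NEVER_ALLOWED = {"disclose_personal_data_to_third_party"}  # "black list" actions

def filter_action(proposed_action: str) -> str:
    """Constraint layer around the model: white-listed actions pass,
    black-listed actions are blocked, anything else is deferred for review."""
    if proposed_action in NEVER_ALLOWED:
        return "blocked"
    if proposed_action in ALWAYS_ALLOWED:
        return "allowed"
    return "deferred for review"

if __name__ == "__main__":
    for action in ("notify_user", "disclose_personal_data_to_third_party", "send_offer"):
        print(action, "->", filter_action(action))
```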

Non-technical methods include new or revised regulations on such matters as product safety and liability; updating of codes of conduct to include requirements from the Guidelines; standardisation of technical requirements for safety, technical robustness and transparency; certification of compliance with new standards, combined with accountability frameworks with disclaimers and review and redress mechanisms; governance frameworks such as appointment of a person responsible for AI ethics issues; education of stakeholders to increase basic AI literacy; creation of platforms for social dialogue; and encouraging diverse and inclusive design teams.

The Expert Group plans to address these issues in its forthcoming AI Policy and Investment Recommendations.

Assessment Lists and Next Steps

Chapter III of the Guidelines includes a detailed set of questions, or checklist, to help companies assess whether they meet the requirements for trustworthy AI. These checklists will be tested in the pilot phase beginning in summer 2019 and revised based on the feedback the Expert Group receives. Companies interested in participating in this process are invited to join the AI Alliance.
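The assessment lists themselves are questionnaires rather than software, but a company tracking its answers might record them as structured data. The sketch below is a purely hypothetical way of doing so and does not reproduce the Guidelines’ actual questions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChecklistItem:
    requirement: str               # one of the seven Guideline requirements
    question: str
    answer: Optional[bool] = None  # None means not yet assessed
    evidence: str = ""

def outstanding(items: list) -> list:
    """Items that are unanswered or answered 'no' and therefore need follow-up."""
    return [item for item in items if item.answer is not True]

if __name__ == "__main__":
    checklist = [
        ChecklistItem("Transparency",
                      "Can the system's decisions be traced back to the input data?",
                      answer=True, evidence="audit log in place"),
        ChecklistItem("Human agency and oversight",
                      "Is a human-in-command mechanism defined?"),
    ]
    for item in outstanding(checklist):
        print("follow up:", item.requirement, "-", item.question)
```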

Our take

The Guidelines are non-binding and intended as a first step in the Expert Group’s work. Moreover, the Guidelines expressly exclude discussion of legal requirements for AI, focusing on requirements to promote ethical and robust AI systems. Nonetheless, the Guidelines set out a significant number of requirements together with detailed checklists to encourage and assess compliance. Requirements that may prove most significant from a legal perspective include those relating to human oversight, transparency and accountability, including traceability, explainability and auditability.

The impact of these requirements will depend on how broadly they are adopted by industry. There is likely to be considerable pressure for companies to show they are sensitive to the issues raised in the Guidelines and taking appropriate steps. If they are widely adopted by companies operating in Europe, the Guidelines could have a ripple effect around the world.

Meanwhile, the term of the Juncker Commission, which launched the AI Strategy and set up the Expert Group, is coming to an end this year. The next Commission will of course set its own priorities, which will reflect the results of the May 2019 EU elections. Regardless of the specific election results, however, AI seems likely to continue to play a central role in EU policy going into 2020 and the next Commission.

Jay Modrall, Partner, Norton Rose Fulbright LLP

James R. Modrall is an antitrust and competition lawyer based in Brussels. He joined Norton Rose Fulbright LLP in September 2013 as partner, having been a resident partner in a major US law firm since 1986. A US-qualified lawyer by background, he is a member of the bar in New York, Washington, D.C. and Belgium.

With 27 years of experience, he is a leading advisor for EU and international competition work, in particular the review and clearance of international mergers and acquisitions. Mr Modrall also has extensive experience with EU financial regulatory reform, advising the world’s leading private equity groups in connection with the new EU directive on alternative investment fund managers and leading banks and investment firms on EU initiatives including EU regulation of derivatives, EU reforms in financial market regulation and the creation of a new EU framework for crisis management, among others.

Mr. Modrall’s native language is English, and he is fluent in Italian and proficient in Dutch and French.
