Ethics & artificial intelligence: knowing the boundaries

Posted on 10 February 2020

The arrival, and indeed the application, of technology has ignited some of the most critical discourse on our emotional wellness, our future job security, and our definition of ourselves as human beings. There is still a lot that needs to be addressed, particularly around the ethical use of AI. In this article, Elisabeth Bechtold, Head of Data Risk & Digital Policy, Zurich Insurance Group, explores key issues around the current use of AI and raises some imperative questions the industry needs to consider when applying it.

There is a lot of talk about AI these days and, while we tend to associate it with something that will be around in the future, it is here already. It is in personal assistants like Siri and Alexa, in medicine, in autonomous driving, and even in facial recognition, used in a whole host of applications that are relevant to us and to our organisations. We are living in a data- and technology-driven world, and the responsible use of AI and other methods of advanced analytics is becoming increasingly relevant.

Why is the responsible, trustworthy, or ethical use of these technologies so important? Just think about how much data and data analytics have impacted your organisation and your industry. Also note that more data was created in 2018 than in the previous 5,000 years combined, yet we mere humans have only been able to assess 0.5% of it.

What happens when an AI model has the intelligence, power and application to analyse it all, in multiple ways, running multiple scenarios and choosing the optimal action?

The opportunities for social and economic advancement through AI seem endless. But this clearly leads to the question of who decides what’s right and wrong, what’s just and unjust, and who gets what. Who decides about the data that’s being fed into those algorithms? Who ensures that data isn’t prejudicial, xenophobic, racially selective, or simply… wrong? Who defines ethical standards, who sets ethical boundaries… and who is to regulate all this?

In recent years, private companies, research institutions, governments and international standard setters such as the G20 and the OECD, as well as institutions such as the European Union, have issued principles and guidelines for trustworthy AI. While there is broad consensus that AI should be ‘ethical’, views differ as to both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed to live up to the aspiration of ethical AI. In April 2019, the European Commission’s High-Level Expert Group proposed a framework for trustworthy AI, based on the following three components:

  • “It should be lawful, complying with all applicable laws and regulations;
  • It should be ethical, ensuring adherence to ethical principles and values; and
  • It should be robust, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm.”

Research has shown a global consensus emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy). However, perspectives vary substantially as to the exact interpretation and implementation of these principles. The EU continues to take a progressive stance on trustworthy AI, as seen when the new EU Commission president, Ursula von der Leyen, announced that she would propose AI regulation within her first 100 days in office. But given the difficulty of regulating this rapidly evolving field, only very few binding laws and regulations today provide clarity on the ground rules for deploying AI.

So how should we proceed from a risk management perspective at a time of uncertainty? How do we navigate today’s digital transformation successfully when we need to explore the business opportunities of AI and other advanced technologies but have no established legal and/or regulatory framework to rely on?

As risk managers, it is incumbent on us to look at the risks associated with the (un)ethical use of AI, to understand them and to find ways to mitigate them. Using AI in a flawed and unethical fashion triggers the risk of biased or simply wrong outcomes. Overall, the use of advanced technologies triggers a broad range of risks and governance challenges, such as understanding and controlling automated decision-making processes with algorithms often perceived as a “black box”. If not deployed in a correct and ethically sound fashion, the potential benefits of AI for business are considerably reduced. Distorted and unethical AI outcomes may also have harmful societal effects by encouraging mistrust and, importantly, may have far-reaching reputational consequences. In short, from a risk manager’s perspective we need to ensure that we deploy AI systems and other advanced technologies in a diligent way, in line with our business strategy and our corporate values.

When trying to do the right thing, however, we also have to acknowledge the challenges of defining such ethical standards for our own organisations and implementing them in our business operations. We need to make sure that our use of AI is, first and foremost, responsible, ethically sound and compliant with applicable laws and regulations. Secondly, the use of AI needs to be underpinned by a robust and holistic governance and assurance framework that provides for appropriate risk and compliance assessments, effective monitoring and end-to-end implementation. Key considerations to be addressed by such a governance and assurance framework include, in particular, fairness (to avoid bias), transparency, interpretability and explainability, as well as robustness and security. Creating an “Ethics Committee” may also be considered; it could act as a sounding board and provide a roadmap and direction on the alignment of business strategy, corporate values and the responsible use of AI.
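To make the idea of fairness monitoring a little more concrete, the short Python sketch below shows, purely by way of illustration, what one such check could look like in practice: measuring whether favourable model decisions are distributed evenly across groups (a demographic parity gap). The function, the data and the group labels are hypothetical and are not drawn from any framework mentioned above; they simply indicate the kind of quantitative check a governance and assurance framework might call for.

# Hypothetical sketch of a single fairness check that a governance and
# assurance framework might include: comparing favourable-decision rates
# across groups (demographic parity gap). Data and names are illustrative only.

from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return the largest gap in positive-decision rates between groups.

    decisions: list of 0/1 model outcomes (1 = favourable decision)
    groups:    list of group labels for the same records (e.g. a protected attribute)
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Illustrative data: model decisions and a protected attribute per applicant.
    decisions = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0]
    groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    gap, rates = demographic_parity_gap(decisions, groups)
    print(f"Positive-decision rates by group: {rates}")
    print(f"Demographic parity gap: {gap:.2f}")

In a real assurance process, a gap above an agreed threshold would flag the model for review, documentation and, where necessary, remediation before deployment.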

As a responsible organisation, we need a strong commitment to align, foster, and scale values-led decision-making that builds trust and inspires confidence among both internal and external stakeholders. In today’s digital age, gaining and maintaining such trust, based on the responsible use of advanced technologies, is likely to be a key success factor for the corporate world.
