

AI risk governance: How can you inspire trust?

Posted on 06 April 2022

From the evolving regulatory landscape to risk mitigation challenges, Lisa Bechtold of Zurich Insurance Company Ltd gives us an update on AI governance.

The opportunities for social and economic advancement through artificial intelligence (AI) seem endless. But alongside these benefits, AI triggers specific risks connected with its opacity and complexity, as well as its far-reaching impact and scale.

While operating almost invisibly in the background, algorithmic decision-making is often norm-setting in nature and far more influential in both our private and professional lives than one would assume at first glance. It increasingly raises questions such as:

  • who decides what’s right and wrong?
  • what’s just and unjust?
  • who receives what resources or benefits?

These questions clearly illustrate the fundamental human rights dimension of AI: AI outcomes can harm human beings. The data that is fed into algorithms plays a crucial role in this. Ensuring that such data is not prejudicial, xenophobic, racially selective, or otherwise inaccurate is key to the successful operation of AI systems. But who defines ethical standards and boundaries for the design and quality of algorithms and the underlying data? What is within the responsibility of the corporate sector and how can regulation and supervision add value to this highly complex field?
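
To make the data-quality point concrete, here is a minimal sketch (in Python, purely illustrative) of one common sanity check: comparing outcome rates across groups in a decision log. The field names, sample data, and the 0.8 threshold (the informal "four-fifths" rule of thumb) are assumptions for this example, not a prescribed standard.

```python
# Minimal sketch (illustrative only): checking a decision log for
# disparate outcomes across a protected attribute. Field names, data,
# and the 0.8 threshold (the informal "four-fifths" rule of thumb)
# are assumptions for this example, not a legal or compliance standard.

from collections import defaultdict

decisions = [  # hypothetical audit sample: (group, approved)
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, outcome in decisions:
    total[group] += 1
    approved[group] += outcome

rates = {g: approved[g] / total[g] for g in total}
ratio = min(rates.values()) / max(rates.values())

print(f"approval rates: {rates}")
print(f"disparate-impact ratio: {ratio:.2f} -> "
      f"{'review needed' if ratio < 0.8 else 'within threshold'}")
```

A check like this does not prove fairness, but it can flag skewed training data or outcomes early enough to trigger a human review.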

The evolving regulatory landscape

In recent years, private companies, research institutions, governments, and international standard setters such as the G20, the OECD, and the European Union have issued directional guidance on trustworthy AI. While there is a global consensus that AI should be ‘ethical’, views differ as to both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed to live up to the aspiration of ethical AI. Regional and cultural particularities, the variety of jurisdictions, and the evolving body of relevant jurisprudence all add to the complexity of the pursuit of ‘ethical AI’.

"While there is a global consensus that AI should be ‘ethical’, views differ as to both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed to live up to the aspiration of ethical AI"

In April 2021, still building on the GDPR momentum, the European Commission launched a comprehensive legislative proposal on trustworthy AI, the “Artificial Intelligence Act”. The proposed regulation pursues a risk-based approach: it prohibits certain AI practices outright and sets out detailed governance requirements for permitted high-risk applications, such as the use of AI in the judicial system or in creditworthiness assessments of customers. While the regulation is still emerging, there is no clarity (yet) on a wide range of questions, such as the currently proposed, very broad scope of the legislative proposal, which covers not only AI systems in a purely technical sense but also traditional statistical approaches such as linear regression.
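
The risk-based structure of the proposal can be pictured as a simple tier lookup. The sketch below is a loose, illustrative rendering of that structure, assuming the four tiers described in the April 2021 draft; the example use cases and their assignments are simplifications, not legal advice.

```python
# Illustrative sketch of the proposal's risk-based structure.
# The tiers loosely follow the April 2021 draft (prohibited practices,
# high-risk, limited-risk, minimal-risk); the example use cases and
# their assignments here are assumptions, not legal advice.

RISK_TIERS = {
    "social_scoring_by_public_authorities": "prohibited",
    "creditworthiness_assessment": "high_risk",   # an Annex III area
    "ai_in_judicial_system": "high_risk",         # an Annex III area
    "chatbot_customer_service": "limited_risk",   # transparency duties
    "spam_filtering": "minimal_risk",
}

def governance_obligations(use_case: str) -> str:
    """Return the (simplified) obligation bucket for a use case."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    obligations = {
        "prohibited": "may not be placed on the market",
        "high_risk": "conformity assessment, risk management, "
                     "data governance, human oversight, logging",
        "limited_risk": "transparency obligations (e.g. disclose AI use)",
        "minimal_risk": "no additional obligations under the proposal",
        "unclassified": "requires case-by-case legal assessment",
    }
    return obligations[tier]

print(governance_obligations("creditworthiness_assessment"))
```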

AI risk management challenges

So how do we govern and mitigate AI risk at a time when we need to explore the business opportunities of AI and other advanced technologies but have no established legal or regulatory framework to rely on? First and foremost, we need to understand the risks associated with the (un)ethical use of AI and find ways to mitigate them. Using AI in a flawed and unethical fashion triggers the risk of biased or simply wrong outcomes. More broadly, the use of advanced technologies raises a wide range of risks and governance challenges, such as understanding and controlling automated decision-making processes whose algorithms are often perceived as a “black box”. If AI is not deployed in a correct and ethically sound fashion, its potential benefits for business are considerably reduced. Distorted and unethical AI outcomes may have harmful societal effects by encouraging mistrust and may have far-reaching reputational consequences.

"Distorted and unethical AI outcomes may have harmful societal effects by encouraging mistrust and may have far-reaching reputational consequences."

Importantly, managing AI risk can be further complicated by a variety of circumstances, from the perspective of both providers and users of algorithmic models. First, the complex and often opaque nature of algorithms, specifically “black box” algorithms and deep learning applications, means that they lack transparency and can sometimes hardly be understood even by experts (the inherent challenge of explainable AI). Potential modifications through updates or self-learning during operation, together with limited predictability, add further to the complexity and opacity of AI systems. Hidden errors are also likely to go undetected for a long time (often until it is too late), which again complicates the traceability of relevant failures.
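
One pragmatic way to probe such a “black box” from the outside is permutation importance: shuffle one input at a time and watch how much the model's behaviour degrades. The sketch below illustrates the idea on a hypothetical stand-in model and synthetic data; real systems would call for purpose-built explainability tooling.

```python
# Minimal sketch of permutation importance: probe a "black box" by
# shuffling one input feature at a time and measuring how much the
# model's behaviour changes. The model and data are hypothetical
# stand-ins; any predict-style callable could take their place.

import random

random.seed(0)

def black_box_predict(row):
    """Stand-in for an opaque model: approves if a weighted score
    clears a threshold (in practice this logic would be hidden)."""
    income, age, postcode = row
    return income * 0.8 + age * 0.1 > 50

# Synthetic audit sample: (income, age, postcode)
data = [(random.uniform(20, 100), random.uniform(18, 80),
         random.randint(1000, 9999)) for _ in range(200)]
labels = [black_box_predict(row) for row in data]

def agreement(rows):
    """Share of rows where the model still matches its original output."""
    return sum(black_box_predict(r) == y
               for r, y in zip(rows, labels)) / len(rows)

baseline = agreement(data)  # 1.0 by construction
for i, name in enumerate(("income", "age", "postcode")):
    col = [row[i] for row in data]
    random.shuffle(col)
    perturbed = [row[:i] + (v,) + row[i + 1:]
                 for row, v in zip(data, col)]
    print(f"{name}: importance = {baseline - agreement(perturbed):.3f}")
```

Run on this toy model, income dominates while postcode contributes nothing, which is exactly the kind of signal an auditor would want when the model's internals are inaccessible.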

Second, complications may arise from intricate origins, as algorithms are frequently made up of different, not necessarily coordinated, contributions. For example, an algorithm may depend on other sophisticated systems in such a way that its reliability hinges on conditions in those systems, making it difficult to identify who is responsible for a specific result.

Similarly, the integration of algorithms into products and services (if algorithms are only components of the whole) complicates the search for the actual error and the respective responsibility. This is of particular relevance in cases of mass consumer products and services, where algorithms may pass through the hands of a variety of people other than their developers, such as designers, manufacturers of physical products, providers of services, distributors, licensees, etc. If there are multiple contributors to multi-layer algorithms, identifying who is actually responsible and needs to be held legally accountable from a liability perspective often presents a major challenge.
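
One practical mitigation for this accountability problem is to record provenance at each hand-off along the chain. The sketch below shows one possible shape for such an audit record; the parties, fields, and versions are hypothetical.

```python
# Sketch of a provenance record for a multi-contributor AI component.
# Parties, fields, and versions are hypothetical; the point is that
# each hand-off in the chain is logged, so a flawed outcome can be
# traced back to a responsible contributor.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceEntry:
    party: str             # e.g. developer, integrator, distributor
    role: str              # what this party contributed or changed
    artifact_version: str  # version of the model/component at hand-off
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_trail: list[ProvenanceEntry] = [
    ProvenanceEntry("ModelCo", "trained scoring model", "1.0.0"),
    ProvenanceEntry("AppVendor", "embedded model in product", "2.3.1"),
    ProvenanceEntry("Retailer", "deployed to end customers", "2.3.1"),
]

# When a flawed outcome surfaces, walk the chain backwards:
for entry in reversed(audit_trail):
    print(f"{entry.timestamp}  {entry.party}: "
          f"{entry.role} (v{entry.artifact_version})")
```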

Third, AI failures can be much more impactful than traditional model failures due to the increasing use of AI in sensitive areas such as autonomous driving or medical advice (with AI taking or contributing to decisions over life or death), and its greater scale and reach than comparable decisions or actions taken by humans (AI-based manipulation of elections, pricing on e-commerce platforms, etc.).

How can we mitigate such looming risks? How can we define meaningful ethical standards for our own organisations, and how should we integrate them into our corporate DNA? First, the use of AI needs to comply with applicable laws and regulations, notably in the area of data protection and privacy. Second, the use of AI needs to be underpinned by a robust and holistic governance framework that provides for appropriate risk and compliance assessments, effective monitoring, and the implementation of key ethical principles that address AI-specific risks, including the third-party risk dimension. Key considerations include fairness, transparency, interpretability and explainability, accountability, and robustness and security (the IT and cyber-security dimension).
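
Such a framework can be made operational as a pre-deployment gate. The sketch below encodes the principles just listed as a simple checklist; the questions and the all-must-pass rule are illustrative assumptions, not an established standard.

```python
# Illustrative pre-deployment gate encoding the governance principles
# above as yes/no checks. The questions and the all-must-pass rule
# are assumptions for this sketch, not an established standard.

CHECKLIST = {
    "fairness": "Have outcomes been tested for bias across relevant groups?",
    "transparency": "Is the use of AI disclosed to affected individuals?",
    "explainability": "Can material decisions be explained to a reviewer?",
    "accountability": "Is a named owner responsible for this system?",
    "robustness_security": "Has the system passed IT/cyber-security review?",
    "third_party_risk": "Have vendor components been assessed and documented?",
}

def review_gate(answers: dict[str, bool]) -> bool:
    """Return True only if every principle has been affirmatively checked."""
    failed = [k for k in CHECKLIST if not answers.get(k, False)]
    for k in failed:
        print(f"BLOCKED on '{k}': {CHECKLIST[k]}")
    return not failed

# Example: one unresolved item blocks deployment.
answers = {k: True for k in CHECKLIST}
answers["third_party_risk"] = False
print("approved" if review_gate(answers) else "not approved")
```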

And yet, for many of these challenges, effective remedies are still being sought. Importantly, this does not mean we can simply “wait and see”, as the risks we face are already here. Instead, these challenges clearly emphasise the need for a continuous commitment to the responsible and ethical use of AI along the value chain. Such commitment will be an ongoing journey, equally fascinating and challenging, towards inspiring confidence in our digital society.

Disclaimer
This blog reflects the personal view of the author and not necessarily that of Zurich Insurance Company Ltd.
