
Bringing transparency and ethics into artificial intelligence

Posted on 11 December 2019

Financial services firms often have mature governance of financial risk models in place, including frameworks that address issues such as data reliability, incorrect or misused decisions, well-defined development processes, and validation. Most of these issues also apply to advanced algorithms (or AI) that fall outside the scope of these frameworks. A bank, for instance, may have several hundred financial risk models under governance, but up to tens of thousands of advanced algorithms that are not – ranging from marketing models to non-financial risk assessment models. Applying existing governance, and extending it where needed, offers companies a solid basis for addressing questions about the ethical and responsible use of AI.
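As a loose illustration of that starting point – a single inventory of all algorithms, with those outside the existing framework flagged – the Python sketch below shows what a minimal model-inventory record could look like. Every field name and record is invented for illustration and is not drawn from Deloitte's or any other specific governance framework.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical, minimal record for an algorithm inventory. All fields are
# illustrative; a real governance framework would track far more (owners,
# approvals, validation reports, data lineage, and so on).
@dataclass
class ModelRecord:
    name: str
    owner: str                         # accountable business owner
    purpose: str                       # the decision the model supports
    last_validated: date               # most recent independent validation
    data_sources: list[str] = field(default_factory=list)
    in_governance_scope: bool = False  # covered by the risk-model framework?

inventory = [
    ModelRecord("pd_mortgage_v3", "Credit Risk", "estimate default probability",
                date(2019, 6, 1), ["loan_book"], True),
    ModelRecord("churn_scorer", "Marketing", "rank customers by churn risk",
                date(2018, 2, 15), ["crm"], False),
]

# Flag algorithms that sit outside the existing governance framework –
# the marketing and non-financial-risk models mentioned above.
ungoverned = [m.name for m in inventory if not m.in_governance_scope]
print("Outside governance scope:", ungoverned)
```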

Deloitte is a strong advocate of transparency and responsibility in AI: AI that has been thoroughly tested, that is explainable to customers and employees, and for which all ethical considerations have been addressed.

Here, Deloitte experts share their insights and guide you in embedding AI in your company in the best possible way. We also present four propositions we have developed around the topic.

  1. A call for transparency and responsibility in AI
  2. Unboxing the box with GlassBox: a toolkit to create transparency in algorithms
  3. Embedding ethics in your organisation
  4. AI-driven business models: a strategic approach to capture the full potential of AI

With these, we hope to contribute to a movement to use AI for good. This is the moment to ensure that AI models are built the right way. Only then can we make sure that AI technologies will not cause harm but will benefit humanity.

Transparency and responsibility in AI

In the past few years, the number of negative stories about AI has increased markedly. Tech entrepreneur Elon Musk has even stated that AI is more dangerous than nuclear weapons. There have been numerous cases in which advanced or AI-powered algorithms were abused, went awry or caused damage. For example, the British political consulting firm Cambridge Analytica harvested the data of millions of Facebook users without their consent to influence the US elections, raising questions about how algorithms can be abused to influence and manipulate the public sphere on a large scale.

Other cases brought to the surface unsolved questions around ethics in the application of AI. For instance, Google decided not to renew a contract with the Pentagon to develop AI that would identify potential drone targets in satellite images, after large-scale protests by employees who were concerned that their technology would be used for lethal purposes.

Stefan van Duin, partner Analytics and Cognitive at Deloitte and an expert in developing AI solutions, understands this public anxiety about AI. “The more we are going to apply AI in business and society, the more it will impact people in their daily lives – potentially even in life or death decisions like diagnosing illnesses, or the choices a self-driving car makes in complex traffic situations,” says Van Duin. “This calls for high levels of transparency and responsibility.”

Unboxing the box with GlassBox

AI models, in particular data-driven models such as machine learning, can become highly complex. These algorithms are typically described as a ‘black box’: you feed them data and an outcome comes out, but what happens in between is hard to explain.
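To make the black-box point concrete: even when a model's internals are opaque, model-agnostic techniques can at least show which inputs drive its output. The sketch below is a minimal example using scikit-learn's permutation importance – a generic technique, not Deloitte's GlassBox toolkit – which shuffles one feature at a time and measures how much predictive accuracy drops.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A random forest is a typical 'black box': accurate, but its internal
# decision logic is hard to narrate to a customer or a regulator.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance shuffles one feature at a time and records how
# much test accuracy degrades; a large drop means the model leans heavily
# on that feature, so that is where scrutiny should start.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} {result.importances_mean[idx]:.3f}")
```

Techniques like this do not fully explain a model, but they turn the question of what happens in between from a blank into a list of concrete questions an expert can investigate.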

This lack of understanding of AI technology poses major risks for companies, says Roald Waaijer, director Risk Advisory at Deloitte. “AI-powered algorithms are increasingly used for decisions that affect our daily lives. Therefore, if an algorithm runs awry, the consequences can be disastrous. For a company, it can cause serious reputational damage and lead to fines of tens of millions of euros.” Worst of all, he adds, it may hurt customers, for instance by unintentionally treating them unfairly if there are biases in the algorithm or training data. “This may lead to a serious breach of trust, which can take years to rebuild.”

To help companies look inside the proverbial black box of AI, Deloitte has developed GlassBox. This technical toolkit is designed to validate AI models and to expose possible bias and unfairness – in short, to check whether AI-powered algorithms are doing what they are supposed to do. “It’s just like bringing your car to the garage,” explains Waaijer. “You occasionally need to look under the bonnet to see whether everything is working properly. That is what GlassBox does: we look under the bonnet of an algorithm to check the AI engine.”
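GlassBox itself is a proprietary toolkit, but the kind of bias check it is described as performing can be sketched generically. The example below – a hypothetical demographic parity check with invented data – compares the rate of positive decisions across two customer groups; the 0.8 threshold is the ‘four-fifths’ rule of thumb borrowed from US employment practice, not a universal standard.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
    """Share of positive decisions per group (demographic parity check)."""
    return df.groupby(group_col)[decision_col].mean()

# Invented example data: loan approvals for two customer groups.
decisions = pd.DataFrame({
    "group":    ["A"] * 6 + ["B"] * 6,
    "approved": [1, 1, 1, 0, 1, 1,  1, 0, 0, 1, 0, 0],
})

rates = selection_rates(decisions, "group", "approved")
ratio = rates.min() / rates.max()
print(rates.round(2).to_dict())      # {'A': 0.83, 'B': 0.33}
print(f"disparity ratio: {ratio:.2f}")

# The 'four-fifths' rule of thumb flags ratios below 0.8 as potential
# disparate impact; the right threshold is always context-dependent.
if ratio < 0.8:
    print("Potential unfair treatment: review the model and its training data.")
```

A check like this is deliberately simple; real validation also has to ask whether the groups are comparable in the first place and whether proxies for sensitive attributes are hiding in other features.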

Digital ethics

A teenage girl in Minnesota received a booklet from a department store with coupons for baby gear. Her father was furious: Why was this company sending these coupons to his daughter? A couple of days later, the girl admitted to her father that she was, in fact, pregnant. The department store had developed an algorithm that was able to assess the likelihood that a customer was pregnant based on her shopping behaviour – in this case, the algorithm happened to be spot on.

The story sparked great controversy.

“Companies are actively developing and exploiting their technological capabilities, often without considering the ramifications,” says Jan-Jan Lowijs, senior manager at Deloitte Risk Advisory with a focus on Privacy and Digital Ethics. “Any mishap can lead to serious reputational damage, legal issues and fines, and worst of all, the loss of trust and loyalty of customers. Trust is a company’s most valuable asset: if you betray your customers’ trust for short-term profit, you will lose in the long run.”

Safeguarding ethics in the use of advanced or AI-powered algorithms has become one of the most prominent questions of this era. Deloitte’s Digital Ethics proposition addresses this question. It provides a framework to help organisations develop guiding principles for their use of technology, to create a governance structure to embed these principles in their organisation, and finally, to monitor progress and see whether these principles have been effectively implemented.

AI-driven business models

Big tech giants like Alphabet and AWS are developing science-fiction-like applications in their labs: algorithms that teach themselves winning game strategies, recognise human emotions or mimic human conversation.

But we also see many AI initiatives getting stuck in proofs of concept and pilots, leaving a lot of companies struggling with where to start and how to scale.

“A lack of a sound vision and right prioritisation causes a lot of AI projects to stall”, says Naser Bakhshi, senior manager Artificial Intelligence at Deloitte. “Many initiatives start with a technology that sounds cool, without thinking how it really can make an impact on the organisational goals. They should start with forming a vision on AI that is aligned with the company’s strategy, rather than just letting the one that shouts the loudest experiment freely.”

For your convenience, we have combined the four topics in one PDF. Download the full report.

Find more content on www.deloitte.nl/fsi.
