
Wealth & Investment Management

How to design an AI-proof governance framework

Posted on 19 June 2019

Regulation, governance and artificial intelligence (AI) are incredibly important topics for Luxembourg fund managers. How can they create a disruption-proof framework?

The Luxembourg fund industry, like most industries in this day and age, is being disrupted by artificial intelligence (AI). Use cases promise improved decision-making, new efficiencies through intelligent process automation, returns via algorithmic trading, and the enhancement of the client experience through virtual assistants, to name only a few.

Due to the fragmented nature of data across the investment and distribution value chain, the asset management industry may be slightly behind other industries in this matter. Nevertheless, Francois Drazdik, Head of Administration & Senior Industry Affairs Advisor at ALFI, has recently pointed out that Luxembourg—having the largest fund industry in Europe—is obliged to stay ahead of the game with regard to AI.[1]

Of course, with the deployment of groundbreaking AI solutions comes great responsibility: that of being the first to translate them into solid, functioning governance systems.

Governance topics are at the top of regulators’ agendas

Governance continues to dominate regulatory agendas, with Circular CSSF 18/698 only the most recent pertinent regulatory text to place great emphasis on the robustness of fund managers’ governance frameworks as well as on matters of substance.

AI, meanwhile, presents a whole new layer of governance that the investment funds industry needs to consider carefully. In its recent draft guideline paper on trustworthy AI, the European Commission’s High-Level Expert Group on Artificial Intelligence pointed out that the relationship between AI sophistication and governance is positive: the more advanced the AI system, the greater the emphasis that must be placed on the corresponding governance framework.

By way of its whitepaper entitled “AI Use Cases, Inherent Risks and Opportunities”, the CSSF confirmed that, for Luxembourg entities, the governance of AI systems is also high on its agenda. Consequently, it comes as no surprise that the CSSF paper recommends involving an entire team in the analysis of AI models, including experts from compliance, risk management, and information security departments.

Core pillars of AI governance frameworks

To be more specific, the (human) governance framework for AI consists of a set of key roles and responsibilities that Luxembourg fund managers should take into consideration right from the conceptualization of their AI projects. To meet regulators’ expectations, fund managers may usefully view AI governance through the lens of the three-lines-of-defense model described in Circular CSSF 12/552.

First line of defense: Business Unit

Proper governance starts with the core business functions. In the context of AI, particular attention should be paid to the interplay between the traditional business units and the IT function, which plays the prominent role in the development, implementation, and ongoing monitoring of the AI system.

David Hagen, Head of IT Supervision and Support PSF at CSSF, recently highlighted at a conference that the very nature of AI systems requires IT and business departments to work more closely together than ever before and on a day-to-day basis.


What is needed is a change of mindset and an evolution of the traditional way of implementing IT projects: the thought process needs to extend beyond the setup phase and into the entire “lifetime” of the AI system. Understanding this core relationship provides the foundation for the other players within the governance framework to fulfill their roles.

Second line of defense: Support Functions

The second line of defense comprises the roles built on the following key pillars of a solid AI governance system:

  • The adoption of AI solutions affects the compliance function in several ways. On the one hand, AI has proven highly efficient at taking over the first-line detection of fraud cases, freeing the compliance function from repetitive tasks and allowing it to focus on the in-depth investigation of cases the system deems suspicious. On the other hand, the design and implementation of AI systems require careful attention from the compliance function with regard to matters of regulatory compliance, such as in the field of data privacy (for example, GDPR). These aspects need to be reviewed continuously throughout the implementation of the AI system.
  • Risk management has become a complex exercise for asset managers over the past decade, as regulators at EU and Luxembourg level have tightened the rules surrounding material risk categories, with an increased focus on well-defined risk-appetite statements and stress-testing environments. In the AI context, the traditional mix of risks is augmented by a set of IT-specific risks that must remain under the constant watch of the risk management function.
  • The concept of ethics in the context of AI has recently attracted considerable attention at both EU and Luxembourg level. Justifications for installing an ethics manager or team around an AI application include the consideration of data biases (e.g. in the model-training period), algorithmic biases (e.g. if the wrong models or parameters are used), human biases, and discriminatory model outputs (e.g. due to the underrepresentation of certain populations in the model). Beyond this, ethics considerations are paramount in the fields of data protection, accountability, and auditability of the models deployed.
  • Information security has long been a key matter from a regulatory standpoint, and it has taken on added significance under the recent GDPR. It is therefore no surprise that the CSSF suggests creating the role of a “data protection officer” (DPO) in the context of AI. Among other duties, this role is responsible for applying “data protection by design” principles, which include data minimization, data pseudonymization, data anonymization, data encryption, and the related technical security measures.
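To make the “data protection by design” measures above concrete, here is a minimal sketch of data minimization and pseudonymization applied to a client record before it is fed to an AI model. All names, field names, and the sample record are illustrative assumptions, not part of any CSSF guidance; a real implementation would keep the secret key in a secrets manager.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice this would come from a secrets
# manager under the DPO's control, never from source code.
SECRET_KEY = b"replace-with-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike anonymization, the mapping is reproducible for anyone holding
    the key, so records can still be linked for legitimate processing.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def minimize(record: dict, allowed_fields: set) -> dict:
    """Data minimization: keep only the fields the AI model actually needs."""
    return {k: v for k, v in record.items() if k in allowed_fields}

# Illustrative (fictitious) client record
record = {"client_id": "LU-000123", "name": "Jane Doe", "aum_eur": 2_500_000}

clean = minimize(record, {"client_id", "aum_eur"})   # drops "name"
clean["client_id"] = pseudonymize(clean["client_id"])
```

The design choice here is pseudonymization rather than anonymization: the keyed hash lets the firm re-link model outputs to clients when there is a legitimate basis, while the raw identifier never enters the model pipeline.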

Third line of defense: Internal Audit Function

As AI solutions grow increasingly sophisticated, those responsible for auditing the algorithms and their outputs must also upgrade their approaches and verification workflows to cope with the added complexity.


A “digital transformation” of this function includes the implementation of suitable toolsets for evaluating the documentation produced under the direction of the business and IT units. A common denominator of success at this level is, again, close collaboration with the other stakeholders in order to develop a suitable and commonly understood toolset for documentation and reporting.

Obtaining and orchestrating the right mix of resources is key

Setting up an appropriate governance team involves considerations of both quantity and quality. Regarding quality, Circular 18/698 refers to skills, experience, and reputation as the ingredients of a solid governing body.

Concerning the quantity aspect, the European Commission recently pointed out that it may be appropriate for some institutions to have an entire panel dedicated to AI ethics alone. From the outset, it might therefore seem that implementing a sophisticated AI system may mean more, rather than fewer, personnel.

The benefits of having an appropriate governance framework in place from project initiation will, however, likely pay off once whitepapers and guidelines turn into more binding legislation and institutions have to justify their AI approaches and system outputs to regulators.

Especially given the current tightness of the Luxembourg employment market, setting up the right team should start immediately and, for most investment fund managers, will also include the training and upskilling of existing staff.

[1] ICT Luxembourg Press Release (2018).
