Transitioning from model risk management to AI risk management
The financial industry is on the brink of a technological revolution, with artificial intelligence (AI) leading the charge. This power, however, comes with significant responsibility, especially in the nuanced domain of risk management. AI systems present a complex and unique array of challenges and opportunities that demand a more sophisticated approach to risk management and regulatory compliance.
Banks have long relied on a proven approach to managing the risks of models. These so-called model risk management (MRM) practices are built on a number of key principles, including governance, model risk identification, and standards for development and independent validation.
In moving from traditional model risk management to AI assurance, MRM serves as a powerful starting point, but requires some recalibration.
Governance enhancement
The introduction of pivotal roles such as a Chief AI Officer reporting directly to the CEO helps ensure that AI governance is prioritised at the highest organisational level.
Risk identification and tiering
Financial institutions must transition from model-centric risk assessments to a more dynamic, use-case-centric framework. Traditional risk tiering involves assessing complexity and materiality. With black-box AI models, complexity, which is closely related to explainability, is not easily measured. One approach is to replace this concept with observability, which indicates how easily a deterioration in performance can be detected.
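As a minimal sketch of what such tiering could look like, the snippet below scores a use case on materiality and observability rather than complexity. The field names and thresholds are assumptions chosen for illustration, not a prescribed standard.

```python
# Minimal sketch of observability-based risk tiering (illustrative assumptions only).
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    materiality: float   # e.g. exposure affected by the use case, in EUR (assumed unit)
    observability: float # 0 = performance drift hard to detect, 1 = fully monitored

def risk_tier(use_case: AIUseCase) -> str:
    """Assign a coarse tier: high materiality combined with low observability is worst."""
    if use_case.materiality > 1e8 and use_case.observability < 0.5:
        return "Tier 1 (highest scrutiny)"
    if use_case.materiality > 1e7 or use_case.observability < 0.5:
        return "Tier 2"
    return "Tier 3"

# Example: a material use case whose performance is hard to observe lands in Tier 1.
print(risk_tier(AIUseCase("credit-decision-assistant", materiality=2e8, observability=0.3)))
```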
Strategic model inventory management
Many AI frameworks manage risks at the level of the use case, which comprises the model/algorithm together with the context in which it is used. By focusing on AI use cases rather than solely on models, organisations can achieve a more integrated and strategic perspective on how AI is deployed enterprise-wide.
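A minimal sketch of what a use-case-level inventory record might contain is shown below; all field names are illustrative assumptions rather than a mandated schema.

```python
# Minimal sketch of a use-case-level inventory record (field names are assumptions).
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    use_case_id: str
    business_owner: str
    models: list[str]      # identifiers of the underlying models/algorithms
    context: str           # where and how the output is used in the business
    risk_tier: str
    last_validation: str   # ISO date of the latest independent review

# Enterprise-wide inventory keyed by use case rather than by model.
inventory: dict[str, InventoryEntry] = {}

def register(entry: InventoryEntry) -> None:
    """Add or update a use case in the inventory."""
    inventory[entry.use_case_id] = entry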
Model development, review and usage
The complexity of AI often necessitates new testing and validation techniques. Good examples here include the validation of LLM-based use cases (see below) as well as the analysis of recommender systems that defy conventional backtesting methods.
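For recommender systems, one common alternative to conventional backtesting is offline evaluation on held-out interactions. The sketch below computes precision@k under an assumed data layout (recommended items and actually consumed items per user); it is illustrative, not a complete validation procedure.

```python
# Illustrative offline evaluation of a recommender: precision@k on held-out interactions.

def precision_at_k(recommended: list[str], relevant: set[str], k: int = 10) -> float:
    """Fraction of the top-k recommendations the user actually interacted with."""
    top_k = recommended[:k]
    if not top_k:
        return 0.0
    return sum(item in relevant for item in top_k) / len(top_k)

def mean_precision_at_k(recs: dict[str, list[str]],
                        truth: dict[str, set[str]],
                        k: int = 10) -> float:
    """Average precision@k over a held-out set of users."""
    scores = [precision_at_k(recs[u], truth.get(u, set()), k) for u in recs]
    return sum(scores) / len(scores) if scores else 0.0
```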
Implementing an AI risk framework
In order to build a framework that can be used to manage the unique challenges of AI systems, existing AI guidelines and (draft versions of) regulatory initiatives offer plenty of inspiration.
Research studies, such as the one by Correa et al., highlight the variety and disparity of AI guidelines across jurisdictions worldwide, pointing out gaps, notably in model identification. Financial institutions are encouraged to proactively address these gaps to ensure a robust AI risk management framework.
The AI technology landscape
AI in financial services is characterised by its divergence from traditional model environments, thriving on the capabilities provided by cloud infrastructure and an array of MLOps tools for data extraction, feature engineering, experiment tracking, and much more. Effective risk mitigation in this context demands that AI governance is integrated with this toolset, ensuring these tools are used correctly to build robust AI applications. Given the dynamic nature of ML algorithms, it is moreover crucial to incorporate continuous integration and delivery pipelines, automated testing, and live performance monitoring.
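As an illustration of automated testing in such a pipeline, the sketch below shows a champion/challenger check that could fail a CI build when a retrained model underperforms the deployed one. The benchmark value, tolerance, and the assumption of a scikit-learn-style classifier are hypothetical.

```python
# Illustrative CI check: reject a challenger model that underperforms the champion.
from sklearn.metrics import roc_auc_score

CHAMPION_AUC = 0.82   # assumed benchmark from the currently deployed model
TOLERANCE = 0.01      # assumed tolerance for acceptable degradation

def test_challenger_performance(model, X_test, y_test):
    """Fail the pipeline if the challenger's AUC drops below the champion's."""
    challenger_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    assert challenger_auc >= CHAMPION_AUC - TOLERANCE, (
        f"Challenger AUC {challenger_auc:.3f} below champion benchmark {CHAMPION_AUC:.3f}"
    )
```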
Best practices for AI risk management
Alongside integrating governance principles into the AI toolset come a number of additional best practices. We highlight a few key ones.
Upskilling teams
AI teams often have limited awareness of (AI) risk management principles, while traditional risk managers seldom use AI toolsets. This is why cross-functional teams combining AI specialists and risk managers are key, as is the upskilling of both groups of experts.
Lifecycle management
AI model lifecycle processes should be tailored to include rigorous testing, ongoing monitoring, and iterative updates so that risks are managed dynamically.
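One concrete way to operationalise the ongoing-monitoring step is a distribution-shift check on live model inputs or scores, for example a population stability index (PSI). The sketch below is a simplified illustration; the binning choice and the alert threshold are assumptions.

```python
# Illustrative drift check: population stability index between reference and live data.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference distribution and a live distribution of scores."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    # Avoid division by zero in sparsely populated buckets.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# A PSI above roughly 0.25 is often read as a material shift warranting review (rule of thumb).
```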
Ethical AI considerations
Embedding ethical considerations into AI development and deployment is vital to guarantee fairness, accountability, and transparency.
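As one example of how such considerations can be made testable, the sketch below computes a simple demographic parity gap and gates the model on an assumed tolerance. Real fairness assessments typically combine several metrics with qualitative review; the 0.1 threshold is an assumption, not a regulatory limit.

```python
# Illustrative fairness gate based on demographic parity (threshold is an assumption).
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-outcome rates between any two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

def fairness_gate(y_pred: np.ndarray, group: np.ndarray, max_gap: float = 0.1) -> bool:
    """Return False (flag for review) when the parity gap exceeds the tolerance."""
    return demographic_parity_gap(y_pred, group) <= max_gap
```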
Addressing the unique risks of Generative AI
We illustrate the above principles by discussing how one can securely deploy Generative AI applications.
Generative AI, particularly Large Language Models (LLMs) like GPT-3.5 and GPT-4, raises significant concerns regarding trustworthiness. Just like any other model, these applications have to be tested, validated and monitored. The typical approach is to build a large corpus of prompts on which to evaluate the behaviour of the application, paying attention to potential issues of toxicity, bias, robustness, privacy, ethics, and fairness (see e.g. the paper by Wang et al.).
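A minimal sketch of such a corpus-based evaluation is given below. Here `call_llm` and `toxicity_score` are hypothetical stand-ins for the application under test and a scoring function, and the threshold is an assumption; a production evaluation would cover the other dimensions (bias, robustness, privacy, ethics, fairness) as well.

```python
# Illustrative evaluation of an LLM-based use case on a curated prompt corpus.
# `call_llm` and `toxicity_score` are hypothetical callables supplied by the caller.

def evaluate_prompt_corpus(prompts: list[str],
                           call_llm,                  # hypothetical: prompt -> answer text
                           toxicity_score,            # hypothetical: text -> score in [0, 1]
                           toxicity_threshold: float = 0.2) -> dict:
    """Run every prompt through the application and aggregate basic risk signals."""
    results = {"n": len(prompts), "toxic": 0, "empty": 0}
    for prompt in prompts:
        answer = call_llm(prompt)
        if not answer.strip():
            results["empty"] += 1
        if toxicity_score(answer) > toxicity_threshold:
            results["toxic"] += 1
    results["toxic_rate"] = results["toxic"] / max(results["n"], 1)
    return results
```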
Conclusion
Managing AI risks in the financial sector is a dynamic and evolving challenge. A balanced approach integrating technology such as the Yields platform, regulatory compliance, and ethical considerations is crucial. As AI continues to reshape the financial industry, institutions must remain vigilant and proactive in their risk management strategies to responsibly harness the benefits of AI.