AI: A double-edged sword in risk

Discover how AI is used in financial institutions with Anne Kleppe, Global Lead for Responsible AI and Managing Director and Partner at BCG.
Learn about the opportunities and risks AI presents to businesses and explore how leading organisations tackle the challenges ahead. Anne shares strategies for mitigating AI risks and bridging the existing talent gap. Finally, Anne discusses responsible AI principles to help navigate AI adoption across the business – watch the interview or read the key takeaways below.
AI's potential in risk management
The advent of AI technology has profoundly impacted various sectors, including financial risk management. While financial institutions have utilised AI for decades, especially in areas like credit risk and market surveillance, there's an emerging space for newer AI technologies (e.g. generative AI and agentic AI) to enhance existing processes and introduce innovation.
AI offers tremendous opportunities in market surveillance, fraud detection, and financial crime prevention. Generative AI and autonomous agents promise further enhancement in research, reporting, and end-to-end process efficiency, expanding into areas such as credit and know-your-customer (KYC) procedures.
Navigating risks in AI implementation
AI adoption and application come with a spectrum of risks. Rather than treating AI as a standalone risk, financial institutions should view it as a risk driver affecting various non-financial risks, including data security, cyber threats, and third-party vulnerabilities. Anne emphasised the need for a robust risk strategy and governance setup that spans the entire risk management process.
An essential aspect of managing AI is real-time control during operation. This means embedding comprehensive controls during the development phase, rather than relying solely on conventional validation cycles.
The talent gap in AI integration
A significant challenge in AI integration is the existing talent gap across the financial sector. With AI use cases proliferating across industries, financial institutions need to focus on strategic hiring and training. Upskilling employees, particularly those moving into AI-focused roles for the first time, is crucial. Drawing on expertise from industries such as autonomous driving can offer valuable insights and practices for financial institutions.
Defining and implementing responsible AI
The concept of responsible AI is integral to its successful integration. Anne defines responsible AI as systems that are proficient, safe, secure, and compliant. Transparency and explainability are crucial components of responsible AI principles.
Financial institutions already possess the foundational risk management skills required to assess risk-return relationships. By applying this expertise to AI, they can address strategic, governance, and cultural challenges to ensure seamless integration of AI technologies.
