Human and AI cognition have plenty in common, as Alexander Sokol, Executive Chairman and Head of Quant Research at CompatibL, found through his research.
Inspired by works like Daniel Kahneman's Thinking, Fast and Slow, Alexander explains in this interview how cognitive biases affect both humans and AI, particularly in production environments.
He emphasizes the importance of a scientific, statistical approach to AI model risk validation and shares practical advice for quants and risk analysts on mitigating cognitive bias in AI. Key concepts are explored, including crowdsourcing, the significance of multiple AI calls, and the parallels between human cognition and transformer-based AI models.
Making the connection between human and AI cognition
Alexander’s research journey began with an unexpected revelation. His work initially targeted what appeared to be a conventional engineering problem, yet the solution emerged from a less anticipated domain: psychology. When he integrated AI with production systems, challenges arose that seemed insurmountable through traditional engineering approaches. Understanding these issues through the lens of psychology proved transformative.
He references Daniel Kahneman's seminal work, Thinking, Fast and Slow, which describes the dual systems of fast, intuitive and slow, deliberate processing in humans. When developing AI models, especially through chat interfaces, Sokol noticed remarkable parallels between AI and human cognition. The moment AI transitioned from development to production, the structured environment forced it to operate in a mode akin to human 'fast thinking', which led to less optimal outputs.
Cognitive bias in AI
In the realm of risk management, Sokol introduces "cognitive bias risk" as a novel category for AI models. He emphasizes that the validation of AI models should mirror the rigorous quantitative approaches used for market and credit risk models. By employing scientific methods to measure model risk, the industry can effectively tackle AI's inherent unpredictability.
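A minimal sketch of what such a measurement might look like, under stated assumptions: one possible metric is to re-ask the same underlying question under several neutral rewordings and report how often the answer flips. The framing_sensitivity function, the ask callable, and the run count below are illustrative assumptions, not a method described in the interview.

```python
from collections import Counter
from typing import Callable, Sequence

def framing_sensitivity(framings: Sequence[str], ask: Callable[[str], str],
                        runs_per_framing: int = 5) -> float:
    """Fraction of answers that disagree with the overall majority answer."""
    # Ask every reworded framing of the same question several times.
    answers = [ask(f).strip().lower() for f in framings for _ in range(runs_per_framing)]
    # Count how many answers match the most common one; the rest are disagreements.
    _, votes = Counter(answers).most_common(1)[0]
    return 1.0 - votes / len(answers)
```

In this sketch, a higher value flags questions whose answers are more sensitive to framing, a quantity a validation process could track across model versions.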
How to write better prompts – and yield better results
To avoid framing queries in ways that unintentionally bias AI responses, it is crucial to understand the AI's perspective. AI models are conditioned to act as helpful assistants, which can lead to biased outputs when queries are poorly phrased. Breaking an inquiry down into smaller, isolated components prevents the AI from receiving inadvertent clues about the desired answer, thereby reducing bias in its responses.
For instance, instead of directly asking whether a document is "grandfathered," break the question into steps that prevent the AI from guessing at the user's intent. This approach draws on Sokol's own experience, in which an AI, eager to please, sometimes generated incorrect yet assertive conclusions.
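A minimal Python sketch of this kind of decomposition, under stated assumptions: the ask helper stands in for any LLM chat call, and the cutoff date and exemption rule are invented for illustration rather than taken from the interview.

```python
from typing import Callable

def is_grandfathered(document_text: str, ask: Callable[[str], str]) -> str:
    # Step 1: extract the relevant facts without hinting at the hoped-for conclusion.
    effective_date = ask(
        "What is the effective date of the agreement below? "
        "Reply with the date only.\n\n" + document_text
    )
    amendments = ask(
        "List any amendment dates in the agreement below, one per line, "
        "or reply 'none'.\n\n" + document_text
    )
    # Step 2: apply the rule to the extracted facts, so the model never sees the
    # leading question "is this grandfathered?" posed against the raw document.
    return ask(
        "A contract is exempt from the new requirement if its effective date is "
        "before 2021-01-01 and it has no later amendments.\n"
        f"Effective date: {effective_date}\nAmendments: {amendments}\n"
        "Is the contract exempt? Answer yes or no, with a one-sentence reason."
    )
```

The point of the split is that each sub-question is neutral on its own, so the model is never nudged toward the answer the user appears to want.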
Extrapolating reliable output based on multiple runs
As in Monte Carlo simulation, invoking the AI multiple times and averaging the responses yields more dependable output. This crowdsourcing-like technique significantly improves accuracy, and the run-to-run variability it averages out mirrors familiar human inconsistencies, such as differences in teachers’ grading or judicial decisions.
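A minimal sketch of the multiple-runs idea, assuming a generic ask callable and two simple aggregation rules (a mean for numeric answers, a majority vote otherwise); neither the run count nor the aggregation rule comes from the interview.

```python
import random
import statistics
from collections import Counter
from typing import Callable

def aggregate_answers(question: str, ask: Callable[[str], str], n_runs: int = 15):
    """Ask the same question n_runs times and aggregate the answers."""
    answers = [ask(question).strip() for _ in range(n_runs)]
    try:
        # Numeric answers: average them, Monte Carlo style.
        return statistics.mean(float(a) for a in answers)
    except ValueError:
        # Categorical answers: fall back to a majority vote.
        winner, _ = Counter(a.lower() for a in answers).most_common(1)[0]
        return winner

if __name__ == "__main__":
    # Stand-in model with human-like run-to-run variability, to show the mechanics.
    noisy_model = lambda q: f"{random.gauss(4.2, 0.3):.2f}"
    print(aggregate_answers("Estimate the notional in USD millions.", noisy_model))
```

Each call is independent, so the aggregate behaves like a small crowd of assessors rather than a single, possibly idiosyncratic, one.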

