Q&A with Professor Simon Chesterman on Artificial Intelligence and Fraud & Asset Recovery

Posted on 26 January 2022

Leow Jiamin is a Partner at WongPartnership LLP (Singapore) and a member of the Steering Committee for Asset Recovery Next Gen. She caught up with Professor Simon Chesterman (Dean and Provost's Chair Professor of the National University of Singapore Faculty of Law and Senior Director of AI Governance at AI Singapore) after his keynote speech at the 4th Asset Recovery Asia Conference that took place from 30 Nov – 1 Dec 2021.

The use of Artificial Intelligence (“AI”) in fraud detection and asset recovery, and the potential it brings, is well discussed. AI is currently being used by banks and law enforcement agencies to study behaviour. Such systems can trigger alerts when transactions with a high risk of being fraudulent are detected.[1] There are also AI systems touted as being able to trace, within a very short period of time, communications between email addresses belonging to persons of interest and such persons’ bank accounts.
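For readers curious how such behaviour-based alerting works at a technical level, the sketch below trains an unsupervised anomaly detector on hypothetical transaction features and flags an outlying transaction. It is purely illustrative: the features, thresholds and tooling (Python with scikit-learn) are assumptions made for this example, not details of any system referred to above.

```python
# Illustrative sketch only: hypothetical features and thresholds,
# not a description of any bank's or agency's actual system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per transaction: amount, hour of day, transfers in past 24h
normal_history = np.column_stack([
    rng.normal(200, 50, 1000),   # typical amounts
    rng.normal(14, 3, 1000),     # mostly daytime activity
    rng.poisson(2, 1000),        # a handful of transfers per day
])

# A transaction that departs from the learned behaviour in several ways
suspicious = np.array([[9500, 3, 40]])  # large sum, 3 a.m., burst of transfers

# Fit an unsupervised anomaly detector on past behaviour
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_history)

# predict() returns -1 for outliers; score_samples() gives lower scores to anomalies
score = model.score_samples(suspicious)[0]
if model.predict(suspicious)[0] == -1:
    print(f"ALERT: transaction flagged as high risk (anomaly score {score:.3f})")
```

In practice, systems of this kind combine many more behavioural signals and typically route flagged transactions to human investigators rather than acting on them automatically.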

The benefits of AI in the fraud and asset recovery space are clear. Time is always critical in the tracing of assets, and AI may be able to complete in seconds what could take a human months or years. The fraud being investigated is often complex, involving complicated transactions specifically designed to avoid detection and spanning numerous jurisdictions. The fact that AI is able to process voluminous and complex data autonomously to identify trends and patterns without (or with minimal) human intervention is a breakthrough.

Professor Simon Chesterman gave the keynote speech “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” at the Asset Recovery Asia Conference 2021, a topic discussed in great detail in his latest book of the same title.[2] Both the keynote speech and the book offer fascinating food for thought for practitioners in the fraud and asset recovery space, and I briefly highlight two such areas.

First, how do we balance opacity, transparency and explainability when using and relying on AI? This question arises in a number of ways. For instance, in order for lawyers to rely on conclusions drawn by AI systems, they must be able to defend the reliability[3] of the AI system itself, especially in the event of a challenge by the opposing party in the course of proceedings. As another example, in the context of automated decision-making AI systems, the parties involved would certainly want to understand the reasoning behind the decision made. After all, decisions with legal consequences are not determined based on statistics, but after consideration of specific factual matrices.

Other than the coders and experts in the field, it is difficult for anyone else to understand AI systems. Not only is the AI system itself complicated and written in a different (computing) language, but the code and workings of AI systems are often closely guarded and protected as confidential information or trade secrets.[4]

But is this really an entirely new problem? Lawyers are no strangers to challenges to the reliability of electronic evidence. Confidential information and trade secrets are well managed by the Courts; disclosure of such information can be limited where appropriate or made subject to a confidentiality club,[5] with any affected hearings conducted in camera. Further, other than for the judiciary,[6] there is no general common law duty to give reasons in administrative law.[7]

Second, it is often thought that the law is always playing catch-up with technology that is rapidly changing and developing. In fact, Professor Chesterman discussed in a 2015 article how the law can and should respond to such challenges.[8]

One challenge in regulating AI is how to address the potential harms resulting from it. It seems easy to attempt to escape liability or responsibility by simply saying “the machine did it, not me”.

But is the law really playing catch-up in the case of AI? Professor Chesterman highlights that many AI activities can in fact be regulated by applying existing or modified norms.[9] Safety issues may be addressed by product liability laws. We can look to civil and criminal laws in respect of accountability. Existing human rights law would take care of non-discrimination, and privacy issues can be dealt with by data protection laws.

With the above in mind, I am delighted to have caught up with Professor Chesterman after the Asset Recovery Asia Conference, to discuss some questions I had following his keynote speech and after reading his book, as well as the questions raised by the audience during the conference.

Q1: There is increased use of AI in fraud and asset recovery investigations. What are your views on the usefulness or reliability of electronic evidence obtained via AI systems for the purposes of proceedings?

Professor Chesterman: AI is very good at being consistent and honest. If you ask an AI system a question over and over again, it will not give you a different answer. If you interrogate it on whether it is biased, it will try to give you a truthful answer in a way that a human never would. AI is also useful in analysing behaviour, which is a key challenge in the fraud space – detecting and interpreting behaviour that departs from the norm in significant ways.

A problem, however, is that the more elaborate and sophisticated the AI system is, the further away it is from the lawyer’s area of expertise. This is a concern that lawyers need to address. If a lawyer does not understand the AI system being relied on, how willing will he or she be to stand by its decision? It will not be very persuasive to the Court if all the lawyer can muster is “the machine told me”.

This is distinct from, say, the field of medicine. Doctors there are already comfortable using AI in radiology analysis, an area in which AI routinely outperforms humans. The use of AI in medical science (which relies on statistical analysis in determining the success or failure of clinical trials) is also prevalent, and the fact that it might be unclear exactly how the outcome is achieved is not a barrier.[10]

 

Q2: This brings to mind the long-drawn litigation since the mid-2000s[11] surrounding the Horizon IT system used by the UK Post Office Ltd (“POL”). Horizon detected unexplained discrepancies in POL accounts, and POL successfully privately prosecuted more than 900 of its sub-postmasters for theft, false accounting and/or fraud based on this evidence. It was only in 2019, after a group litigation was commenced, that extensive forensic analysis of the Horizon software was carried out, resulting in the UK High Court finding that the Horizon software contained bugs, errors and defects in a “far larger number than ought to have been present in the system if [the Horizon system] were to be considered sufficiently robust such that they were extremely unlikely to be considered the cause of shortfalls in branches.”[12] This raises serious doubts as to the reliability of the Horizon evidence, and with it the spectre of wrongful convictions.[13]

Do you think disclosure (i.e. scrutinising the AI software code to determine how the evidence was processed or obtained) can improve the evidentiary value of evidence obtained via AI systems? Is doing so in the course of proceedings realistic?

Professor Chesterman: It’s a great question and points to a couple of the difficulties raised by reliance on computer systems in this area. The first is the danger that, if we don’t understand the detailed workings of a system — if we can’t see under the hood — then it’s very hard to hold it to account. If there had been a robust system of debugging and auditing, then perhaps some of the problems would have been discovered and addressed.

The second problem is that, even where decisions seem odd, counterintuitive, or just plain wrong, it can be very hard to get people to question them. This is known as automation bias – our tendency to give undue weight to computers and similar systems, while discounting contradictory information (such as the protestations of innocence of those falsely accused).

That’s not to say we always need to understand everything a machine does. But when people’s rights are going to be taken away, then surely we must not only understand it but take a positive position on whether we agree with it or not.

Q3: Asset recovery work tends to be cross-border in nature. Lawyers often find themselves struggling to explain to judges that certain behaviour is cultural and not necessarily a sign of fraud. One of the key arguments against AI is that it can develop an inherent bias. What are the arguments against this perception? How do you teach a machine to assess transactions from different cultures?

Professor Chesterman: The real question is: why are we giving this power to the machine? Why are we not doing it ourselves? AI is about statistics and complicated regression analysis. Machine learning systems are good at drawing correlations based on the past – i.e., where there is a strong correlation, it means that in the past, where A and B have happened, C has happened alongside them. This is not the same as saying that when A and B happen, C will necessarily follow.

There are some well-known examples of absurd correlations drawn by AI – for example, Amazon’s resume-screening algorithm had to be shut down when it ‘learned’ (based on ten years of data) that women’s applications were to be treated less favourably than men’s,[14] and an audit of one resume-screening algorithm identified that the two most important factors indicative of job performance at a particular company were being named Jared and having played high-school lacrosse.[15]

The question is how we guard against biases being perpetuated by the AI system. First, we should look for the bias. The AI system can learn which qualities you do not want it to discriminate against; it can check and correct itself. Second, we should not give too much weight to an AI system’s recommendations, to guard against the kind of automation bias I mentioned earlier. Having a human as part of the decision-making loop may not be sufficient. For that reason, I think we need to be clear that someone – a human or a corporation, perhaps – will be held responsible for the decision.

Q4: In your book, you discussed a 2019 video of the Hangzhou Internet Court featuring an AI judge depicted by an avatar, and China’s move to create ‘smart courts’ as part of its New Generation Artificial Intelligence Development Plan.[16] You also talked about Canada’s Directive on Automated Decision-Making.[17] Do you think AI judges will or can eventually become a reality? Given that the programmers of AI systems sometimes cannot even explain the moves made by the systems themselves,[18] how would that sit with judicial decisions, to which the common law duty to give reasons applies?[19]

Professor Chesterman: While AI is affecting all professions, it will be difficult to replace or substitute litigation and the judiciary with AI. AI is quite good at dispute settlement, but not so much litigation. If litigation were outsourced to machines, it might lead to more efficient outcomes by resolving disputes instantly. But if it were possible to know what the answer in a court case would be, the intelligent men and women arguing on both sides should have worked that out already.

AI is very good at optimising or making predictions. However, those who think AI can take over big parts of court activity are assuming that there is always a “right” answer (and a “wrong” answer). That’s not borne out by the data. If one studies the cases that go to court, it often boils down to a fifty-fifty chance. You have smart people on both sides who think they are right, or at least have a strong argument, and it is the judge who has to decide between the two. The judge can go either way.

What does all this mean for AI? AI is good at predicting based on the past. If we handed over the whole legal system to AI, it would freeze us in a moment rather than do what the law should do – which is to build up incrementally over time. Thus, while AI systems will be useful in court, we should not hand over discretion to them.

Q5: Your book discusses the regulation of AI, and the possibility of AI regulating itself.[20] A lot of the fraud and asset recovery work we do is cross-border in nature. Do you think we are moving towards global regulation? Or are the differences between countries too great to overcome?

 

Professor Chesterman: UNESCO was looking into this in late November 2021. One of the real difficulties is that there is no global architecture for regulating AI. Global coordination and a focused, narrow approach to regulation are required.

I draw an analogy with nuclear energy. The need for global coordination resulted in the International Atomic Energy Agency, at the heart of which is a grand bargain: technology is shared for beneficial purposes in exchange for a commitment not to use it for weapons. I can see something similar for AI in respect of some regulation at the global level. First, there might be some red lines to be drawn, for example, to restrain uncontrollable or uncontainable AI, to ensure that we maintain human control and remain responsible, and to ensure that AI is not used to avoid liability or risk. Second, it would be helpful if there were an agreement on transparency, or at least encouragement of it.

Asset Recovery Asia 2021, chaired by Wendy Lin (WongPartnership, Singapore) and Ros Prince (Stephenson Harwood, UK), brought together leading practitioners from the arbitration, fraud and insolvency world to discuss important emerging issues.

 

[Selected content, including Professor Chesterman’s keynote speech, has been made available on-demand. Get in touch for more details.]

[1]     The Straits Times (David Sun) “Bank fraud experts in Singapore use AI to predict scammers’ next move” (6 April 2021) https://www.straitstimes.com/tech/tech-news/anti-fraud-experts-use-ai-to-predict-cheaters-next-move

[2]     “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” (Cambridge University Press, 2021)

[3]     This features as an exception to the hearsay rule under Singapore law.

[4]     See the discussion in “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” (Cambridge University Press, 2021), Chapter 6.

[5]     See, for example, the Intellectual Property Court Guide issued by the Supreme Court of Singapore (Registrar’s Circular No. 2 2013) https://www.judiciary.gov.sg/news-and-resources/registrar's-circulars/circular-details/registrar's-circular-no.-2-2013.

[6]     Even then, the duty for the judiciary to give reasons is not an absolute one: Thong Ah Fat v PP [2012] 1 SLR 676 at [28]-[33]. “As a rule of thumb, the more profound the consequences of a decision were, the greater the necessity for detailed reasoning.”

[7]     Manjit Singh s/o Kirpal Singh and anor v AG [2013] 2 SLR 844 at [85].

[8]     The Straits Times (Professor Simon Chesterman) “Law plays catch-up with technology” (7 March 2015) https://www.straitstimes.com/opinion/law-plays-catch-up-with-technology

[9]     “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” (Cambridge University Press, 2021), Chapter 4.

[10]    See the discussion in “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” (Cambridge University Press, 2021), Chapter 3.

[11]    E.g., Post Office Ltd v Castleton [2007] EWHC 5 (QB), R v Seema Misra T2009070, Bates v Post Office Ltd (No 6: Horizon Issues) Technical Appendix [2019] EWHC 3408 (QB).

[12]    Bates v Post Office Ltd (No 6: Horizon Issues) Technical Appendix [2019] EWHC 3408 (QB) at [434].

[13]    BBC “Post Office Horizon scandal: More subpostmasters cleared” (19 July 2021) https://www.bbc.com/news/business-57888146

[14]    “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” (Cambridge University Press, 2021) at p71.

[15]    “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” (Cambridge University Press, 2021) at p70.

[16]    “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” (Cambridge University Press, 2021) at pp 224-225.

[17]    “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” (Cambridge University Press, 2021) at pp 164-165.

[18]    See, for example, how the programmers of Google’s AlphaGo were unable to explain how the system came up with the strategies for the game of Go that defeated the human grandmaster, discussed in “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” (Cambridge University Press, 2021) at pp 2, 65.

[19]    The Singapore Court of Appeal in Thong Ah Fat v PP [2012] 1 SLR 676 at [20]-[25] explained that the judicial duty to give reasons (a) has a “self-educative” value, which “hones the exercise of judicial discretion and encourages judges to make well-founded decisions”; (b) allows parties to know why they won or lost; (c) ensures that the appellate court has the proper material to understand, and do justice to, the decision taken at first instance; (d) curbs arbitrariness and is a facet of judicial accountability; and (e) increases the transparency of the judicial system. See also FN6 above.

[20]    “WE, THE ROBOTS? Regulating Artificial Intelligence and the Limits of the Law” (Cambridge University Press, 2021), Chapter 9.
