
Risky business: why it's so important to let the machines do the work


Staying ahead of the curve, especially when it comes to new technology, is critical for the insurance industry. With AI becoming the industry's new buzzword, Chris Downer, Associate at XL Innovate, discusses why it makes business sense to start phasing it in. Chris is at InsurTech Rising US exploring what's next and how to prepare when it comes to emerging tech.

Man versus machine is a conflict that dates back to the invention of the wheel ("hey, this 'wheel' thing sure is going to put a lot of porters out of work"), and it looms heavily over our world today; the realm of risk management is no safe haven. Most pieces discussing the impact of AI, data, and machines focus on how algorithms can take over small human tasks: crunching large data sets, translating text between languages instantaneously, automating advertising. But for someone who backs startups that analyze risk for a living (i.e. insurance), the opportunity that comes with AI and technology is much larger than better algorithms and advertising. Machines today certainly have their limitations, but we cannot escape the fact that, in area after area, human limitations vastly exceed those of our mechanical counterparts (look no further than Flippy, the burger-flipping robot).

The opportunity that comes with AI and technology is so much larger than better algorithms and advertising.

In fact, anywhere from 75 to 94 percent of incidents in property and casualty insurance are due to human error. Insurers are in the business of covering risk, in large part, because of humans. Unfortunately, these incidents aren't just totaled cars or bruised backs; lives are lost. If any other single element triggered over 75 percent of losses, users and insurers would move immediately to eliminate it. Obviously, I'm not advocating for a Westworld or I, Robot-type future (although both make for very good entertainment at home), but why not move to reduce human error and let machines do more of the work?

There is no doubt that AI and machines can dramatically reduce these risks, saving lives and avoiding massive financial and productivity losses.

Here are three obvious places to start where progress in AI will translate to big reductions in risk:

1. Driving
An NHTSA study looked at major accident causes and found that a mere two percent of crashes were caused by the environment, another two percent by the vehicles, and two percent by "unknown" factors. That means a full 94 percent were caused by human error. 94 percent! If humans were in school for bad driving, we'd be getting a solid A for our work thus far.

What does this mean? Statistics show over 3,000 people die every day in road crashes, and another 20-50 million are injured or disabled every year globally. In financial terms, road crashes cost $518 billion globally, or 1-2 percent of annual global GDP. That is a terrible track record, and it needs to change. Given that Waymo has driven over 5 million miles at last count, with zero fatalities, it seems likely autonomous vehicles will be able to do better. Still, we'll have to contend with AI ethics questions, like the infamous Trolley Problem.

2. Shipping
The story isn't any better in the marine space. An analysis by Allianz shows that human error accounts for approximately 75 percent of the value of almost 15,000 marine liability insurance claims studied over the five years from 2011 to 2016, equivalent to over $1.6 billion in losses. In 2016 alone, marine accidents killed 1,596 people and caused $2.5 billion in damage. In 2017, U.S. Navy accidents, including a series of destroyer collisions, led to the deaths of 17 sailors.

Investigations have led many to believe the culprit is sleep deprivation; sleep, after all, is one of the most basic human needs. You would think the military, of all organizations, would seek to reduce risk related to human impairment. Unfortunately, incentive structures mean that in many cases humans are not even performing at their average cognitive capacity. Sleep deprivation, fortunately, is not a concern for AI.

3. Cybersecurity
OK, but humans have to be better when they're not operating heavy machinery, right? Wrong. Cyberattacks and data breaches may not carry a death toll, but they can lead to sizeable financial losses. It turns out 91 percent of ransomware infections start with an employee clicking on a phishing email, and 95 percent of all security incidents involve human error. Those are ugly statistics, and they lead to material business impact. How big is that impact? According to IBM, the average cost of a data breach to a U.S. organization was $7.35 million. Human culpability doesn't come cheap.

Better training from IT teams can help, but training alone doesn't work as well as the preventative, economical scans offered by up-and-coming cybersecurity companies.

So the solution to lowering risk is pretty simple: remove humans from the equation. Or, in the case of office workers and cyberattacks, make sure there is an AI security system that can augment human abilities. The fact is, humans are no longer our safest option for driving a car, captaining a boat, or avoiding cyber scams and phishing. We, as humans, have set the bar so low that machines couldn't possibly do much worse at this point.

But, if software is the solution, a challenging question arises:

Where should humans stay in the decision making process?

This is not a blue-collar or white-collar question; AI will impact every part of the global economy. For the most part, this will be a positive development, but we need to maintain a candid view of what humans aren't good at, and where it makes sense to cede control to machines.

So, where do you think AI ends and human judgment takes over?

Find out more about InsurTech Rising US >>

Chris Downer is an associate at XL Innovate, where he focuses on insurtech investments in North America, Europe, and Asia, and leads due diligence and deal sourcing. Chris is a board observer at Pillar Technologies, an end-to-end environmental monitoring solution for construction sites, and Stonestep, which provides microinsurance as a service in emerging markets. Chris also publishes a daily InsurTech newsletter; you can subscribe here >>

