

Is technology outpacing risk management?

Posted on 19 June 2018

Technology is changing at a rapid pace - are financial institutions equipped to identify and manage the new risks that come with these advances? Stephen Cobb, Senior Security Researcher at ESET, explores.

In 2017, two strains of malicious code hit a wide range of organisations across multiple industry sectors, in scores of countries, on several continents. Known as WannaCry and NotPetya, these were by no means the only malware-based cyber crime campaigns conducted that year, but these two alone generated costs well into the billions of dollars, and they illustrated just how hard it is to manage the risks inherent in the massively complex, multi-dimensional matrix of digital technology on which much of modern life now depends.

Given the rapid rate at which “digital transformation” is predicted to bring even more such technology into organisations in the near future, it seems reasonable to ask whether technology is outpacing risk management. To help answer this question we can consider risk management in three parts: the discovery of risks, the assessment of risks, and the identification of suitable means by which to avoid or minimise their impact.

Discovering the risks

Discovering the risks inherent in the deployment of digital technology is no easy task. Not only does it require skills and abilities that are in short supply, it takes a certain mindset as well. Consider what happened in January 2018: the world learned of a serious vulnerability (disclosed as Meltdown and Spectre) affecting some three billion CPUs, the central processing units at the heart of computing devices of every kind: servers, laptops, smartphones, tablets, even smart TVs.

As hardware and software vendors scrambled to respond to this unprecedented situation, two key data points for risk-minded analysts emerged: the vulnerability arose from chip design goals that prioritised performance over security, and the reason it took more than two decades to surface was that vulnerability researchers assumed “the chipmakers would have uncovered such a glaring security hole during testing and would never have shipped chips with a vulnerability like that”.

Not only are technology risks technically difficult to discover, finding them requires constant questioning of assumptions. Consider cryptography, one of the linchpins of digital transformation. In 2017 we learned that millions of encryption keys, used for everything from software code-signing to national identity cards, were open to exploitation via a flaw in the way the encryption was implemented, despite being certified to multiple internationally recognised security standards. That flaw went undetected for five years.

Assessing the risks

Moving from the discovery of technology risks to their assessment, things get even harder, largely because of a lack of good data. For example, while the US government can tell you how many banks were robbed in a year, it cannot tell you how many cyber crimes were committed. Enquirers are referred to studies performed by commercial entities that sell security services - hardly an objective source. Furthermore, many commercial surveys use flawed methodologies and are not consistent over time, making it very difficult to tell whether things are getting better or worse, and by how much.
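The data gap described above can be made concrete with the standard annualised loss expectancy formula used in quantitative risk assessment (ALE = annual rate of occurrence x single loss expectancy). The sketch below is purely illustrative: all the figures are invented assumptions, not data from this article, and they exist only to show how wide the estimate range becomes when the inputs come from inconsistent surveys rather than official statistics.

```python
# Illustrative sketch: why poor incident data makes cyber risk hard to assess.
# ALE (annualised loss expectancy) = ARO (annual rate of occurrence)
#   x SLE (single loss expectancy).
# All figures below are hypothetical, chosen only to illustrate the point.

def ale(aro: float, sle: float) -> float:
    """Annualised loss expectancy for one threat scenario."""
    return aro * sle

# Bank robbery: official statistics pin down frequency and average loss,
# so the estimate is narrow.
robbery = ale(aro=0.02, sle=5_000)        # 0.02 * 5,000 = 100.0 per year

# Cyber crime: both inputs come from inconsistent vendor surveys,
# so estimates from different sources span orders of magnitude.
cyber_low = ale(aro=0.1, sle=50_000)      # optimistic survey assumptions
cyber_high = ale(aro=2.0, sle=500_000)    # pessimistic survey assumptions

print(f"robbery ALE: {robbery:,.0f}")
print(f"cyber ALE:   {cyber_low:,.0f} to {cyber_high:,.0f}")
```

With the hypothetical inputs above, the two cyber estimates differ by a factor of 200, which is exactly the kind of spread that makes it "very difficult to tell if things are getting better or worse, and by how much".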

Vendor studies of technology risk can lead to a skewed focus on each new threat wave for which solutions have been developed, obscuring the cumulative nature of technology risks. For example, ransomware for commercial gain was the centre of attention when NotPetya struck, but NotPetya was brickware - a system destroyer - not ransomware; moreover, it had geo-political objectives yet inflicted damage on commercial systems. In response there was new focus on the risks from state-aligned attacks, but we recently detected new brickware, posing as ransomware, written simply for bragging rights - the motive for most of last century’s malware. In other words, from a cyber perspective, technology risks are cumulative, unpredictable, and inadequately quantified.

Minimising the risks

When it comes to identifying the means to avoid and minimise the impact of technology risks, cyber security conferences like Black Hat or RSA are one place to look. In recent years vendors at these events have been betting big on artificial intelligence (AI) as the latest and greatest hope for defeating cyber criminals and managing technology risks. However, the case for AI solutions, as articulated by their developers and backers, often boils down to this: technology is now so complex that we cannot rely on humans to safely manage and defend it.

This does not bode well for the future of risk management if you look at AI in the light of four things we know about humans and technology: humans consistently over-estimate the net benefits of new technology; early warnings about technology risks are usually ignored; many technology threats are asymmetric; and many bad actors – both criminal and state-aligned – are now highly skilled, well-funded, and blurring the lines between crime, espionage, and geo-political aggression.

The collision between these realities and the risk-reduction potential of AI is impressively documented in “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”, a report by a large group of experts published in February 2018. The scenarios they outline should concern every CRO. For me, the likely scenario is this: organisations will rely too heavily on AI-based security that is then defeated by the malicious use of AI, whose developers are not hampered by constraints such as false positives and accidental damage to systems and data.

In his 2018 RSA Conference keynote, the CEO of RSA Security warned: “Our collective risk as an industry is that we fail to avoid a breach of trust in technology itself.” Just a few days earlier I had surveyed American adults, asking: “How much risk do you believe criminals hacking into computer systems pose to human health, safety, or prosperity?” A solid 70% rated the risk as either serious or very high.

That survey was conducted before the FBI warned Americans that unspecified bad actors are now using powerful multi-stage malware to take over their routers. More than 100 million US households and small businesses use a router to network their computers, tablets, and other digital devices, like smartphones and “smart” thermostats, alarms, cameras, door locks, and TVs. Many people are just now learning how hard these devices are to secure, and how much damage they can do in the wrong hands.

In short, technology risks are getting harder to find, measure, and avoid. At the same time, we face a rising tide of digital transformation – including bold new technology like AI and 5G – even as stormy geo-political conflicts are increasingly acted out in cyberspace and criminals get ever more cyber-savvy. It is hard to see how risk management will be able to keep pace.


