We often aspire to long-term success, but in practice, are we able to make decisions that benefit the long term rather than the short term? In this article, we look back on what we learnt from Jon Danielsson, Director of the Systemic Risk Centre at the London School of Economics, during his keynote presentation at RiskMinds International 2019.
In the financial markets, the past century of volatility tells a strange and interesting story about foresight, risk, and financial crises. At the foundation of the analysis is an assertion that the most important financial concerns are long term; what happens in the short term, although there are always fluctuations, is usually not that important. However, risk management practice in industry, and even as applied by regulators, does not emphasise the long horizon. Instead, the tools and techniques of finance focus heavily on recent history – in effect measuring, and seeking to manage, short-term risk.
Looking at this from an empirical standpoint, a chart of daily S&P 500 returns since the early 1900s will highlight 1932, 1987, and 2008 as the periods of highest volatility. Clearly, the Great Depression, the Crash of ’87, and the global financial crisis had a tremendous impact on the markets, and they were not good years for investors. But the volatility record completely misses the periods when the world faced the ultimate catastrophe: nuclear war. In 1962, the US and the USSR squared off in the Cuban Missile Crisis, and in 1983, the Soviet Union under Andropov came very close to launching its intercontinental ballistic missiles on the mistaken suspicion that the US was about to attack. Disaster was only averted because Lieutenant Colonel Stanislav Petrov of the Soviet Air Defence Forces did not trust the models and held off on triggering a Soviet retaliatory response. Such incidents could have resulted in the ultimate tail event, yet financial market volatility was normal in both periods. So, it is fair to ask:
How informative is financial market data when it comes to the truly extreme risks in the world?
Predicting tail events
Take an actual price series of 4,000 days. For each day, the risk can be forecast using any of six or seven standard industry techniques, including historical simulation (HS), moving average (MA), exponentially weighted moving average (EWMA), GARCH, Student-t GARCH (tGARCH), and extreme value theory (EVT). These forecasting methods come up with very different numbers, and there is no consensus. In addition, the estimation window matters: add one more day and we might capture something very different. Further, no matter how carefully we examine the past, we cannot foresee highly unusual events. One example is the sudden 15.5% appreciation of the Swiss franc (CHF) in January 2015, when the Swiss National Bank unexpectedly abandoned its currency floor against the euro. If we ask the models how often such a move could happen, GARCH and EWMA say “never”, while MA and tGARCH say once in a period much longer than the age of the universe and once in a period much longer than the age of the Earth, respectively. Only EVT takes a slightly more realistic view, with an estimate of once every 109 years. Even so, on the whole, these metrics indicate that such an event could not happen. The predictive ability also breaks down after the event occurs: common sense would suggest that the Swiss would not take the same action again any time soon, yet most risk models went haywire following the event. In a quest for reasonable and accurate predictions, would a better way to evaluate the possibility of a currency appreciation or depreciation be more rooted in real financial and economic data? In this example, we could consider the ownership and governance of the Swiss National Bank in combination with key factors such as SNB dividend payments, the money supply, capital reserves, and government bonds outstanding. These would all give a more grounded view of the direction of the currency.
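The disagreement between standard risk forecasts is easy to reproduce. The sketch below is illustrative only – it uses synthetic returns and assumed parameters rather than any figures from the talk. It compares a historical-simulation 99% VaR with an EWMA-based one on the same 4,000-day series, then shows how vanishingly small a probability a normal model assigns to a 15.5% one-day move, assuming a typical daily FX volatility of around 0.7%:

```python
# Illustrative sketch: two standard risk forecasts applied to the same
# simulated return series. All figures here are synthetic assumptions,
# not numbers from the keynote.
import math
import random

random.seed(42)
# 4,000 days of synthetic daily returns with ~1% volatility
returns = [random.gauss(0, 0.01) for _ in range(4000)]

def hs_var(rets, p=0.99):
    """Historical simulation: 99% VaR is the 1st percentile of past returns."""
    s = sorted(rets)
    return -s[int((1 - p) * len(s))]

def ewma_var(rets, lam=0.94, p=0.99):
    """EWMA volatility (RiskMetrics-style lambda=0.94), normal-quantile VaR."""
    var = rets[0] ** 2
    for r in rets[1:]:
        var = lam * var + (1 - lam) * r ** 2
    z = 2.326  # approximate 99% standard normal quantile
    return z * math.sqrt(var)

print(f"HS 99% VaR:   {hs_var(returns):.4f}")
print(f"EWMA 99% VaR: {ewma_var(returns):.4f}")  # generally disagrees with HS

# How likely is a 15.5% one-day move under a normal model, assuming a
# typical daily FX volatility of ~0.7%? (assumed figure)
sigma = 0.007
z = 0.155 / sigma                       # roughly a 22-sigma event
p = 0.5 * math.erfc(z / math.sqrt(2))   # one-sided normal tail probability
print(f"Normal-model probability of a 15.5% day: {p:.3e}")  # effectively "never"
```

The two VaR numbers differ because HS weights the whole sample equally while EWMA is dominated by recent observations, which is precisely the short-horizon focus the talk criticises; and the tail probability shows why a Gaussian-based model reports that a CHF-sized move can essentially never happen.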
In measuring risk and seeking accuracy in forecasts, the time dimension of risk is also critical. It is easier to measure and manage short-term risk, involving business or idiosyncratic risk, although there will be occasional impactful surprises. But longer-term risk, involving systemic, macroeconomic, and political risk, is much harder to assess and control. Again, looking at the empirical evidence, there tend to be big banking losses about ten times every hundred years and banking failures about five times per century. At the extreme end, there seem to be local systemic crises two or three times per century and global systemic financial crises once or twice a century, and these are driven by politics, which dominates macroeconomics, which in turn dominates the smaller-scale financial risks.
For the most challenging crises, there are two reliable indicators: rapid credit growth and low volatility, stemming from the irony that when there is a general perception that the world is safe, it is also viewed as a good time to take on more risk. In essence, stability is, itself, destabilising. But as we grapple with these dynamics, including the known unknowns and the unknown unknowns, what can we do?
First, we should be wary of risk dashboards and false confidence. This doesn’t imply that we should not measure, but simply that we should evaluate carefully and sceptically, rather than relying blindly on model outputs. Then, we should focus on resilience, with awareness of the difference between perceived risk and actual risk. Finally, we should make better use of scenario planning – forewarned may be forearmed, and the more scenarios developed and analysed, the better our chances may be of surviving the next tail event intact.