Years ago, Rebonato was discussing the 2007 Quant Apocalypse (I think), and he paraphrased British Rail's infamous "the wrong type of snow" excuse to say that what happened was "the wrong kind of volatility". We constantly hear that proprietary trading firms make money with volatility, and that periods of low volatility are bad for them. So, Marcos Costa Santos Carreira, PhD Candidate, École Polytechnique, investigates: is there a right kind of volatility for market makers? And how do market makers make money?
To answer that, we will have to look at how markets work and try to model some of their features, starting with the fact that a contract trades at certain prices only: the increment between these admissible prices is known as the tick value (e.g. 0.01 for US equities), so prices are multiples of the tick value (represented in our papers as α). Those not familiar with this subject might have heard about the decimalisation of US equities in the early 2000s, when 1/8s and 1/16s were dropped in favor of the 0.01 price increment.
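As a minimal illustration of this discretisation (the function name and example price are ours, not from the papers), here is how a raw "hidden" price would be snapped to the grid of admissible prices:

```python
def to_tick(price: float, alpha: float = 0.01) -> float:
    """Round a raw price to the nearest multiple of the tick value alpha."""
    return round(price / alpha) * alpha

# A hidden price of 101.2345 can only be observed as a multiple of 0.01:
print(to_tick(101.2345))  # 101.23
```

Everything the market prints (trades, quotes) lives on this grid, even though the underlying valuation process does not.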
This discretisation means that what we observe (repeated trades at the same price, discrete price changes and the times between these changes) represents some kind of hidden process, in which agents make decisions based on variables like the order book imbalance (i.e. whether the size of the order book at the first (top) level(s) is bigger on the bid side than on the ask side, or vice versa).
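A common way to quantify this imbalance at the top of the book is the normalised difference between bid and ask sizes; the exact functional form below is one conventional choice, not a definition taken from the papers:

```python
def top_imbalance(bid_size: float, ask_size: float) -> float:
    """Top-of-book imbalance in [-1, 1]: positive when the bid queue is
    larger (buying pressure), negative when the ask queue is larger."""
    return (bid_size - ask_size) / (bid_size + ask_size)

# A bid-heavy book: 300 lots bid vs 100 lots offered at the top level.
print(top_imbalance(300, 100))  # 0.5
```

Agents reacting to signals like this one are part of the hidden process that the observed discrete prices only partially reveal.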
So let’s assume that there are three types of agents: (i) market makers, who try to post liquidity at the top of each queue to capture the spread; (ii) noise traders, who cross the spread without depleting the existing liquidity at the traded price level; and (iii) informed traders, who deplete the existing liquidity on one side of the order book. We are adapting the definitions from the paper From Glosten-Milgrom to the whole limit order book and applications to financial regulation.
As the tick value gets lower, more trades become informed; as the tick value gets higher, more trades become noise. But what else contributes to this?
Well, volatility should play a role; after all, the more volatile an asset is, the more often its price will change, leading to shorter times between price changes and a wider range of prices over a trading day.
We have already looked at the relationship between volatility and tick values in the paper A new approach for the dynamics of ultra high frequency data: the model with uncertainty zones, and 3 years ago in the presentation Microstructure Of A Central Limit Order Book In FX Futures. We know that the number of price changes in a trading day is proportional to the square of the volatility and to the square of the inverse of the relative tick value (α divided by the asset price). We also know that the parameter η, defined as ½ multiplied by the ratio between the number of continuations and the number of alternations (where continuations are consecutive price changes with the same sign and alternations are consecutive price changes with opposite signs), plays a role in the number of trades.
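The empirical counterpart of η follows directly from that definition; a minimal sketch of the counting (the function name and sample series are ours) could look like this:

```python
def estimate_eta(price_changes):
    """Estimate eta = (1/2) * N_continuations / N_alternations from a series
    of price changes, looking at consecutive nonzero changes: same sign is a
    continuation, opposite sign is an alternation."""
    moves = [c for c in price_changes if c != 0]  # keep actual price changes
    pairs = list(zip(moves, moves[1:]))
    cont = sum(1 for a, b in pairs if a * b > 0)  # continuations
    alt = sum(1 for a, b in pairs if a * b < 0)   # alternations
    return 0.5 * cont / alt if alt > 0 else float("inf")

# A short illustrative series of tick-size price changes:
print(estimate_eta([+1, -1, +1, +1, -1, -1, +1]))  # 0.25 (2 cont. / 4 alt.)
```

An alternation-heavy series like this one gives η below ½; a trending, continuation-heavy series pushes η above ½.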
We’ve been conducting further research with the help of the CME, and looking at the decreasing volatility of FX futures contracts we started to notice that the number of trades and the trade durations (times between price changes) seemed different than expected. The answer is that, at the end of the day, prices will not be the same as at the open; so a better way to model the evolution of the hidden price process is to look at the effect of both the volatility and the trend of the process. Remember that in financial mathematics the time t usually multiplies the trend μ but also the square of the volatility σ; a big reduction in σ therefore makes the trend more important, and when counting the number of price changes one will find more continuations than expected.
Our new formulas can account for this difference and, more importantly, we can now understand better the roles that volatility and trend play. For a market maker, the volatility σ is good, as it leads to alternations and the opportunity to earn a fraction of the spread on average; but the trend μ is bad for the simple strategy of posting liquidity at the top of the book. So, following some of the ideas of From Glosten-Milgrom ..., we can think about the value of being at the top of the queue as something close to (1-2η)/(1+2η) on average; but because η can become quite large if the price moves relentlessly in one direction, we can see that there is no value in being a market maker in this situation, and market makers might become market takers. The parallels between the relative roles of the trend μ and the volatility σ and the ratio between informed trades and noise trades are striking (in fact, we will argue that the ratio r in From Glosten-Milgrom ... should be equal to 2η), and trying to infer the parameter η from the price changes and price durations is equivalent to inferring the recent values of μ and σ, which is equivalent to trying to infer the proportion of informed trading. More details will be discussed in the presentation at QuantMinds International.
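A toy simulation can illustrate the direction of this effect. The sketch below (our own illustration, with arbitrary parameters; it naively rounds a drifted Brownian motion to the tick grid rather than using the uncertainty-zones estimator) shows that, for the same trend μ, shrinking σ inflates η and drives the queue value (1-2η)/(1+2η) towards -1:

```python
import random

def simulate_eta(mu, sigma, alpha=0.01, n_steps=200_000, dt=1e-4, seed=7):
    """Simulate a hidden price X_t = mu*t + sigma*W_t, observe it rounded to
    a grid of tick value alpha, and estimate eta = (1/2) * N_cont / N_alt
    from the signs of consecutive observed price changes."""
    rng = random.Random(seed)
    x, last_tick = 0.0, 0
    cont = alt = prev_sign = 0
    for _ in range(n_steps):
        x += mu * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        tick = round(x / alpha)
        if tick != last_tick:
            sign = 1 if tick > last_tick else -1
            if prev_sign != 0:
                if sign == prev_sign:
                    cont += 1  # same-sign change: continuation
                else:
                    alt += 1   # opposite-sign change: alternation
            prev_sign, last_tick = sign, tick
    return 0.5 * cont / alt if alt > 0 else float("inf")

def queue_value(eta):
    """Approximate value of sitting at the top of the queue: (1-2*eta)/(1+2*eta)."""
    return (1.0 - 2.0 * eta) / (1.0 + 2.0 * eta)

# Same trend mu, two volatility regimes (values are illustrative):
eta_high_sigma = simulate_eta(mu=10.0, sigma=1.0)   # noise dominates the grid
eta_low_sigma = simulate_eta(mu=10.0, sigma=0.05)   # trend dominates the grid
print(eta_high_sigma, queue_value(eta_high_sigma))
print(eta_low_sigma, queue_value(eta_low_sigma))
```

With the high σ, alternations dominate and the queue has value; with the low σ, the same drift produces runs of continuations, η grows, and the queue value turns negative, which is the "market makers become market takers" regime described above.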
We hope that this model will be of interest to regulators and exchanges. For regulators, this approach might lead to a faster diagnosis that continuous trading is failing to provide adequate liquidity and price discovery (as the trend itself becomes the information), so that an auction might be reasonable when sudden moves happen; this would be better than waiting for a 5% or 10% price move. For exchanges, it would help them not only to choose more appropriate tick values but also to monitor which factors (volatile volatilities, the relative presence of informed traders like institutional investors, etc.) are in play. And it helps one to understand that, because price changes are a consequence of both μ and σ, not every volatility is the right kind of volatility for market makers.