Forecasting volatility using Markov models


By Tushar Arora

In the previous articles, we learned about volatility, volatility clustering, and how GARCH can be used to model volatility clustering. Let us go back to the picture from where we started and take another look at it.

NIFTY daily returns from Feb 2002 to March 2022

Remember, in the article “What is volatility?”, we said that the red encircled regions indicate high volatility and the green regions indicate low volatility. Keeping this in mind, let us first try to understand the Markov property and Markov chains. The Markov property says that “the future is independent of the past given the present”, which, in terms of time, translates to: if we know all the information about today, then whatever is going to happen tomorrow does not depend on what happened yesterday.

One way to explain Markov Property mathematically
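In symbols (a standard way of stating it; the notation in the figure above may differ slightly), the Markov property for a discrete-valued process Y_t reads:

```latex
P(Y_{t+1} = y \mid Y_t, Y_{t-1}, \ldots, Y_1) = P(Y_{t+1} = y \mid Y_t)
```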

Here, you saw that a stochastic variable (Y_t) can take these discrete values, but at any particular moment (time t) it can take only one of the available discrete values. When a stochastic variable changes its state (value) from one to another, we enter the realm of Markov chains. A Markov chain is a stochastic process describing the transitions of a stochastic variable between its states. Consider this simple Markov chain:

2 state Discrete Markov Chain with its Transition Probability Matrix

The above image shows the diagram of the Markov chain and its Transition Probability Matrix (TPM). Although the diagram and the matrix make it fairly clear what is happening, let us take an example for more clarity on what these probabilities represent. The probability that the next state is 1 given that the current state is 0 is 0.3, which can also be interpreted as the probability of transitioning from state 0 to state 1. Now that we have a basic understanding of Markov chains, let us try to answer the question: how are Markov chains related to volatility?
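As a quick numerical sketch (the first row matches the 0.7/0.3 example above; the second row is a made-up value for illustration), the TPM can be stored as a 2x2 array whose row i holds the probabilities of moving from state i to each state:

```python
import numpy as np

# Transition probability matrix: row i holds P(next state = j | current state = i).
# First row follows the example in the text; second row is hypothetical.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Each row must sum to 1
assert np.allclose(P.sum(axis=1), 1.0)

# Distribution of the next state, given the current state is 0
current = np.array([1.0, 0.0])
print(current @ P)  # [0.7, 0.3]
```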

In the simplest case (see the red and green regions in the picture), we can say that volatility has 2 states or regimes (Low, High). So, considering volatility as a stochastic variable, we can model it using a Markov chain. Because volatility switches between the low and high states, these models are sometimes known as “Markov Switching Models”. Another thing to note is that these volatility states cannot be observed directly and are hidden; what can be observed are the returns, which are influenced by these hidden states. That is why these models are also called “Hidden Markov Switching Models” (HMS models). Let us try to understand this mathematically.

Returns, Volatility, GBM and Markov Chain
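One common way of writing such a model (a sketch of the idea; the exact specification in the figure may differ) is a return equation whose mean and volatility depend on a hidden state S_t that follows a two-state Markov chain:

```latex
r_t = \mu_{S_t} + \sigma_{S_t}\,\varepsilon_t, \qquad \varepsilon_t \sim N(0,1), \qquad
S_t \in \{\text{Low}, \text{High}\}, \qquad
P(S_{t+1} = j \mid S_t = i) = p_{ij}
```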

Now that we have understood the motivation behind HMS models and a little bit of the mathematics that goes on in the background, it is time to implement them in Python.

Python code for fitting HMS model to NIFTY data
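The article’s code is shown as an image; the following is a minimal sketch of how such a model can be fitted with statsmodels’ MarkovRegression (the file name nifty_daily.csv and the column name Close are assumptions, not taken from the article):

```python
import pandas as pd
import statsmodels.api as sm

# Load NIFTY daily closing prices (file and column names are placeholders)
prices = pd.read_csv("nifty_daily.csv", index_col=0, parse_dates=True)["Close"]

# Daily percentage returns
returns = 100 * prices.pct_change().dropna()

# Two-regime model with a constant mean and regime-switching variance,
# i.e. a low-volatility and a high-volatility regime
model = sm.tsa.MarkovRegression(
    returns, k_regimes=2, trend="c", switching_variance=True
)
res = model.fit()
print(res.summary())
```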

Now, let us look at the output of this code (the summary of the fitted Hidden Markov Switching model):

Summary of fitted HMS model

The main aim of this article is to use Markov models for volatility forecasting. For this, let us revisit the example of the Markov chain. To find out the next state of the stochastic variable, we need to know two things: the present (current) state of the variable and the transition probabilities. If you look closely at the summary we obtained, it clearly gives us the transition probabilities. All we have to worry about is how to find the current volatility state. This is where smoothed and predicted probabilities come into the picture. The smoothed marginal probability gives the probability of volatility being in a state at time t given all the observations up to time T. The predicted marginal probability gives the probability of volatility being in a state at time t given all the observations up to time (t-1). For forecasting, smoothed marginal probabilities are of no use because they take future data into account (at time t, they also use data for time > t) to calculate the state probabilities. Predicted marginal probabilities, on the other hand, are exactly what we need to figure out the current volatility state.
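In symbols, with observations y_1, ..., y_T and hidden volatility state S_t, the two quantities are (notation is ours, not taken from the article’s figures):

```latex
\text{Smoothed: } P(S_t = j \mid y_1, \ldots, y_T), \qquad
\text{Predicted: } P(S_t = j \mid y_1, \ldots, y_{t-1})
```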

Python code for plotting smoothed and predicted marginal probabilities
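Again, the original code is an image; a minimal sketch using the fitted statsmodels results from above might look like this (which regime index corresponds to high volatility should be checked against the fitted variances in the summary; index 1 is assumed here):

```python
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 1, figsize=(10, 6), sharex=True)

# Probability of the high-volatility regime (regime index 1 assumed)
res.predicted_marginal_probabilities[1].plot(
    ax=axes[0], title="Predicted probability of high-volatility regime"
)
res.smoothed_marginal_probabilities[1].plot(
    ax=axes[1], title="Smoothed probability of high-volatility regime"
)

plt.tight_layout()
plt.show()
```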

Predicted Marginal and Smoothed Marginal Probabilities for Low and High Volatility state

Now that we have transition probabilities and knowledge of the current probability state in hand, let’s move to our final task which is Forecasting Volatility.

Mathematical formulation for Volatility Forecasting

Although the formulas in the above picture look scary, they are very intuitive and easy to understand. We are trying to forecast volatility, for which we need the transition probabilities and the current state of the Markov chain. First, we find the current state using the conditional densities and the prediction probabilities. Next, we use the transition probabilities to calculate the probabilities of the next volatility state.
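As a numerical sketch of this final step (the numbers below are hypothetical placeholders; in practice the transition probabilities come from the model summary and the current state probabilities from the predicted/filtered marginal probabilities), the one-step-ahead forecast is just a vector-matrix product:

```python
import numpy as np

# Hypothetical transition probability matrix of the volatility states
P = np.array([[0.97, 0.03],   # P(stay low),   P(low -> high)
              [0.10, 0.90]])  # P(high -> low), P(stay high)

# Hypothetical current probabilities of the (low, high) volatility states
pi_t = np.array([0.85, 0.15])

# One-step-ahead forecast of the state probabilities: pi_{t+1} = pi_t @ P
pi_next = pi_t @ P
print(pi_next)  # [0.8395, 0.1605]
```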

Low Volatility state probabilities for six months in year 2012

High Volatility state probabilities for 3 months in year 2009

The above two figures provide further evidence of volatility clustering, as discussed in the article on volatility clustering. In the first picture, you can see volatility staying in the low state for months, verifying that low changes tend to be followed by low changes. In the second, you can see high volatility state probabilities persisting for months, giving us evidence of persistence in volatility.

I have used the following references to write this article:

https://www.analyticsvidhya.com/blog/2019/10/regime-shift-models-time-series-modeling-financial-markets/

https://www.statsmodels.org/dev/examples/notebooks/generated/markov_regression.html

https://econweb.ucsd.edu/~jhamilto/palgrav1.pdf

https://www.quantstart.com/articles/hidden-markov-models-an-introduction/

In the next article, we will see how to combine the two volatility models discussed so far: the GARCH model and the Markov model.