There are several ways to define a recession. Julius Shishkin suggested in 1974 that two consecutive quarters of negative GDP (*gross domestic product*) growth indicate an ongoing recession. Since then, that has been the standard definition. In this article, we will present a *Hidden Markov Model* (HMM) which will be used to predict recession probabilities based on German quarterly GDP data (source: OECD).

A Hidden Markov Model is used to detect hidden states in the underlying data. In this example we will estimate an HMM with two states (*0: No Recession, 1: Recession*).

The above image shows the time evolution of a hidden Markov model. $X_{t-1}$ and $X_t$ denote the hidden states at times $t-1$ and $t$, while $Y_{t-1}$ and $Y_t$ are the observations we make (in this short example, $Y_t$ denotes the quarterly GDP data at time $t$).

As usual for a Markov model, the state $X_t$ depends only on the previous state $X_{t-1}$ and the transition matrix $A$. In the following, two more general assumptions will be made. There are two time-discrete random processes $X_t$ and $Y_t$, of which only the latter is observable. Through it, conclusions are to be drawn about the course of the first process.

First Markov characteristic:

$$P(X_t \mid X_{t-1}, X_{t-2}, \dots, X_1) = P(X_t \mid X_{t-1})$$

The current value of the hidden process depends only on its most recent value.

Second Markov characteristic:

$$P(Y_t \mid X_t, X_{t-1}, \dots, X_1, Y_{t-1}, \dots, Y_1) = P(Y_t \mid X_t)$$

The current value of the observable process depends only on the current value of the hidden process.
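A tiny simulation makes the two assumptions concrete: the hidden state is drawn using only the previous state, and each observation using only the current state. All parameter values below are illustrative stand-ins, not the values fitted later in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters for the two hidden states (0: No Recession, 1: Recession).
A = np.array([[0.95, 0.05],   # transition matrix: row i = current state i
              [0.20, 0.80]])
mu = np.array([0.8, -0.5])    # mean GDP change per state (made up)
sigma = np.array([0.4, 0.6])  # std of GDP change per state (made up)

# Simulate the two coupled processes: X_t (hidden) and Y_t (observed).
T, x = 8, 0
xs, ys = [], []
for _ in range(T):
    x = rng.choice(2, p=A[x])               # first property: uses only X_{t-1}
    xs.append(int(x))
    ys.append(rng.normal(mu[x], sigma[x]))  # second property: uses only X_t
```

Note that the sampler never looks further back than one step: each row of `A` fully determines the next hidden state's distribution, and the emission draw ignores all past states and observations.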

First off, we will take a look at the data used to estimate the model.

| DATE | GDP | GDP_DL |
|---|---|---|
| 1960-02-01 | 98.168533 | NaN |
| 1960-03-01 | 98.552486 | 0.390353 |
| … | … | … |
| 2018-11-01 | 99.439948 | -0.147800 |
| 2018-12-01 | 99.297610 | -0.143242 |

In the following figure the (log) GDP change

$$\mathrm{GDP\_DL}_t = 100 \cdot \left(\ln \mathrm{GDP}_t - \ln \mathrm{GDP}_{t-1}\right)$$

is illustrated. Notice the data during the 2008 financial crisis. Here, a change in GDP of approx. -1% between two quarters can be seen.
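The `GDP_DL` column can be reproduced as 100 times the first difference of log GDP. A minimal sketch using the first two observations from the table above:

```python
import numpy as np
import pandas as pd

# First two GDP observations from the table above.
gdp = pd.Series([98.168533, 98.552486],
                index=pd.to_datetime(["1960-02-01", "1960-03-01"]),
                name="GDP")

# GDP_DL: 100 * difference of log GDP, i.e. the (log) growth rate in percent.
gdp_dl = 100 * np.log(gdp).diff()
print(gdp_dl.iloc[1])  # ≈ 0.390353, matching the table
```

For small changes this log difference is close to the ordinary percentage change, which is why a value near -1 during the 2008 crisis can be read as roughly a -1% quarter-on-quarter contraction.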

We will use quarterly GDP data from 1960-05-01 to 2018-12-01 to fit our model. The important question is: What are the most likely transition probabilities and hidden state sequence for a given time series?

The most likely sequence of hidden states is found with the *Viterbi* algorithm (the model parameters themselves are estimated beforehand, e.g. with the Baum-Welch/EM algorithm). The joint probability of a series of states $X = (x_1, \dots, x_T)$ and an observed series of outputs $Y = (y_1, \dots, y_T)$ can be denoted as:

$$P(X, Y) = \pi_{x_1} B_{x_1}(y_1) \prod_{t=2}^{T} A_{x_{t-1} x_t} B_{x_t}(y_t)$$

where $B$ encodes the probability $B_j(y_t)$ of our hidden state generating output $y_t$, given that the state at the corresponding time was $j$.
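The Viterbi recursion itself is compact. The sketch below decodes the most likely state path for a toy sequence; it uses discrete emissions for brevity (the article's model uses Gaussian emissions instead), and all parameter values are illustrative.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete-emission HMM.
    pi: initial state probabilities, A: transition matrix,
    B[j, k]: probability that state j emits observation symbol k."""
    T, n = len(obs), len(pi)
    delta = np.zeros((T, n))           # best log-probability ending in each state
    psi = np.zeros((T, n), dtype=int)  # back-pointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)  # shape: (from, to)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    # Backtrack from the best final state.
    path = [delta[-1].argmax()]
    for t in range(T - 1, 0, -1):
        path.append(psi[t][path[-1]])
    return [int(s) for s in path[::-1]]

# Toy example: symbol 0 = "growth quarter", symbol 1 = "contraction quarter".
pi = np.array([0.9, 0.1])
A = np.array([[0.9, 0.1], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 1, 1, 0], pi, A, B))  # → [0, 1, 1, 0]
```

Working in log space keeps the products in the joint probability numerically stable over long sequences; for Gaussian emissions, `np.log(B[:, obs[t]])` would simply be replaced by the per-state log density of the observed GDP change.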

In the above figure, we have two data sets. Both use the same time series, but the recession states (indicated by red and blue dots) in the bottom figure are predicted by our HMM. As you can see, there is a large overlap between the prediction and the real data.

The next figure shows the probabilities with which the model predicts state $0$ (no recession) or state $1$ (recession) at time $t$.

We can now predict the next three months (i.e. the next quarter) based on the last observed state using our Hidden Markov Model and its fitted transition matrix.

Since our model was in a particular state at the last observation, the probability of staying in that state, and of switching to the other, can be read off directly from the corresponding row of the fitted transition matrix.
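Mechanically, the one-quarter-ahead forecast is just a vector-matrix product. The transition matrix below is a made-up stand-in, since the article's fitted estimates are not reproduced here:

```python
import numpy as np

# Stand-in fitted transition matrix (illustrative, not the article's estimate).
A = np.array([[0.96, 0.04],
              [0.25, 0.75]])

# Suppose the model puts the last quarter in state 0 ("No Recession").
p_now = np.array([1.0, 0.0])

# One-quarter-ahead state distribution: multiply by the transition matrix.
p_next = p_now @ A  # → [0.96, 0.04]

# k quarters ahead: apply A repeatedly (matrix power).
p_next4 = p_now @ np.linalg.matrix_power(A, 4)
print(p_next, p_next4)
```

With certainty about the current state, the forecast distribution is simply the matching row of the transition matrix; if the model instead outputs smoothed state probabilities, `p_now` would hold those probabilities and the same product applies.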