In my previous post I looked at the problem of detecting a change in a randomly generated time series. At some point in the time series, the mean and/or the standard deviation used to generate the series makes a step-change. In this post I want to look at a more realistic problem of detecting multiple changes at unknown times in a time series.

I will make the R script available for anyone who wants to play with this themselves – it is the best way to get a feel for the algorithm I plan on presenting. Finally, I will apply the algorithm to the last 14 years of monthly returns for the Winton Diversified Fund.

Single Change Algorithm

It has been suggested to me that I present the previous algorithm in simpler terms, so here it is (let’s call it “Find Change”):

  1. Establish a null hypothesis: the returns in the series are normally distributed with a constant mean and standard deviation.
  2. Establish an alternate hypothesis: the series is actually made up of 2 shorter series, each with its own mean and standard deviation. The second series starts at time ttest.
  3. Start with ttest = 3: we need at least 2 data points in the first segment for its standard deviation to be non-zero.
  4. Estimate the means and standard deviations of points 1 to (ttest – 1) and of points ttest to the end point, n.
  5. Calculate the log-likelihood ratio of the alternate hypothesis to the null and store it (see the definition below).
  6. If ttest is less than the length of the series minus 1, add 1 to ttest and go back to step 4; otherwise continue to step 7.
  7. Set the estimated change time, t0, to the time (outside of any boundary margin) at which the log-likelihood ratio was at its maximum. The estimated means and standard deviations before and after the change can be calculated using the change time, or retrieved from memory.

The log-likelihood ratio:

l_{1}^{n} = \sum_{i=1}^{t_{0}-1} \ln(P(y_{i} \mid \mu_{0}, \sigma_{0})) + \sum_{i=t_{0}}^{n} \ln(P(y_{i} \mid \mu_{1}, \sigma_{1})) - \sum_{i=1}^{n} \ln(P(y_{i} \mid \tilde{\mu}, \tilde{\sigma}))

Using this process we will have tested every possible change point and selected the one that represents the most significant change in the mean and standard deviation and that is at least “margin” data points away from either end of the series.
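To make the procedure concrete, here is a minimal R sketch of "Find Change" under the normal model above. The function name, arguments, and defaults are my own choices for illustration, not the original script:

# A minimal sketch of "Find Change". Assumes normally distributed returns;
# names and defaults are illustrative.
find_change <- function(y, margin = 5) {
  n <- length(y)
  # Log-likelihood of a segment under its own fitted mean and sd.
  # (sd() divides by n - 1 rather than n; close enough for a sketch.)
  seg_ll <- function(x) sum(dnorm(x, mean = mean(x), sd = sd(x), log = TRUE))
  null_ll <- seg_ll(y)                # null: one mean and sd for the whole series
  llr <- rep(NA_real_, n)
  for (t_test in 3:(n - 1)) {         # at least 2 points in each segment
    llr[t_test] <- seg_ll(y[1:(t_test - 1)]) + seg_ll(y[t_test:n]) - null_ll
  }
  # Only consider change points at least "margin" points from either end
  candidates <- (margin + 1):(n - margin)
  t0 <- candidates[which.max(llr[candidates])]
  list(t0 = t0, llr = llr[t0],
       mu0 = mean(y[1:(t0 - 1)]), sd0 = sd(y[1:(t0 - 1)]),
       mu1 = mean(y[t0:n]),       sd1 = sd(y[t0:n]))
}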

Significance

Chi-Squared Distribution

Under the right circumstances, twice the log-likelihood ratio asymptotically follows a chi-squared distribution with degrees of freedom equal to the difference in degrees of freedom between the alternate and null hypotheses (a result known as Wilks' theorem). This allows us to pick a minimum significance that must be exceeded before our procedure reports a change, giving us control over the "sensitivity" of the algorithm.

I know that my null model has 2 degrees of freedom (mean and standard deviation). My alternate model has 4 (2 means and 2 standard deviations), so the difference is df = 2. If I want a p-value of 10% or less, I have enough information to get a critical value for the log-likelihood ratio: > 2.303. Using R: llr.crit <- qchisq(1 - p.crit, df) / 2 where p.crit = 0.1 and df = 2.
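Spelled out as a runnable snippet:

# Critical value for the log-likelihood ratio at p.crit = 0.1
p.crit <- 0.1
df <- 4 - 2                      # alternate df minus null df
llr.crit <- qchisq(1 - p.crit, df) / 2
llr.crit                         # 2.302585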

We can add this to the algorithm above by conditioning step 7 on the log-likelihood ratio exceeding the critical value, reporting "no change detected" if it does not.

Not A Chi-Squared Distribution

What do we do if we suspect that twice the log-likelihood ratio does not follow a chi-squared distribution? We can build an empirical distribution under the null hypothesis and use that to derive a critical value. We would do this by running thousands of tests in which the null hypothesis is true (in this case, the mean and standard deviation used to create the series are constant), building up an empirical distribution of the log-likelihood ratios encountered in the experiment. We could then estimate the critical value of the log-likelihood ratio that corresponds to our desired error rate.
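A Monte Carlo sketch of this, assuming the find_change() function from the earlier sketch; the series length, mean, and standard deviation here are arbitrary illustrative values:

# Build an empirical null distribution of the log-likelihood ratio.
# Assumes find_change() from the sketch above; parameters are illustrative.
set.seed(42)
n.sims <- 5000
n      <- 120                             # e.g. 10 years of monthly returns
null.llr <- replicate(n.sims, {
  y <- rnorm(n, mean = 0.005, sd = 0.02)  # constant mean and sd: null is true
  find_change(y)$llr
})
# Empirical critical value for a 10% false-alarm rate
llr.crit <- quantile(null.llr, probs = 0.9)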

Multi-Change Algorithm

Using our augmented “Find Change” algorithm, we have enough to build an algorithm which takes advantage of R’s recursive function capability (let’s call it “Decompose Series”):

  1. Call algorithm “Find Change”.
  2. If no change is detected then end, reporting “no change detected”, otherwise go to step 3.
  3. Create a sub-series by selecting points 1 to (t0 – 1) from the series submitted to "Find Change".
  4. If this sub-series is longer than twice the margin, submit it to the "Decompose Series" algorithm.
  5. Create a sub-series by selecting points t0 to the end of the series submitted to "Find Change".
  6. If this sub-series is longer than twice the margin, submit it to the "Decompose Series" algorithm.
  7. Return the cumulative results (change times, means, and standard deviations of all series and sub-series).
  8. Parse the results, returning only the values for the finest level of detail at any given time index.

The use of “margin” blocks the over-sensitivity of the algorithm to changes near the end-points – it allows us to increase the sensitivity without getting changes reported every couple of data points. There may be better ways of handling this.
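For illustration, here is a minimal recursive sketch of "Decompose Series" in R, again assuming the find_change() sketch above; the offset argument is my own bookkeeping device for mapping each sub-series change time back to the full series:

# A recursive sketch of "Decompose Series", assuming find_change() from above.
# "offset" maps change times in a sub-series back to the original series.
decompose_series <- function(y, offset = 0, margin = 5, llr.crit = 2.303) {
  if (length(y) <= 2 * margin) return(NULL)   # stop: series too short
  fit <- find_change(y, margin)
  if (fit$llr < llr.crit) return(NULL)        # stop: no significant change
  t0 <- fit$t0
  rbind(
    decompose_series(y[1:(t0 - 1)], offset, margin, llr.crit),
    data.frame(change.time = offset + t0,
               mu0 = fit$mu0, sd0 = fit$sd0,
               mu1 = fit$mu1, sd1 = fit$sd1),
    decompose_series(y[t0:length(y)], offset + t0 - 1, margin, llr.crit)
  )
}

Because each recursive call returns its change points in time order, the rbind() accumulates the finest-grained set of change times directly, which stands in for the parsing in step 8.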

The diagram below captures the essence of what the algorithm is doing. The red curve is the log-likelihood ratio superimposed over the data. The broken horizontal line is the critical value that the log-likelihood ratio must exceed for the detected change to be considered significant. The vertical broken line corresponds to the maximum log-likelihood ratio:

Detecting multiple changes in a return series, showing maximum-likelihood change points.

The algorithm is successively dividing up the series into smaller and smaller chunks defined by significant change points until we hit the stopping criteria of “too short a data series” or “no significant change in the series”:

  • The series (1) is broken into the left 2/3 (2) and right 1/3 (3)
  • The right 1/3 contains no significant change (the maximum log-likelihood ratio does not exceed the critical value)
  • The left 2/3 is further broken into the left 1/3 (4) and middle 1/3 (5)
  • Neither of these contains any significant change. The algorithm is done

Some Results

Following are some pictures of the behavior of the algorithm tested against random changes in mean and standard deviation, at random times, in series of random length. The first two use a significance level of p.crit = 5%. The first series is fairly short with a few changes; the second is longer with many changes.

Detecting random changes in a short series.
Detecting random changes in a long series.

The next two pictures give a flavor of the effect of changing the significance level. On the left I use a significance of p.crit = 1%, and on the right p.crit = 20%. The first detects nothing; the second is overly sensitive.

Change detection too insensitive.
Change detection too sensitive.

The only way to get a feel for this is to play with it! Ask me for the code using the "Contact Us" link at top right and I can email you the R script.

Detecting the change in Winton Capital's volatility policy.

Here is what happens at a p.crit value of 5% when we submit the last 14 years or so of Winton's Diversified Fund returns to the change detection algorithm. It is interesting to see both the estimated returns and the estimated standard deviation of returns step down. It looks like Winton reduced their volatility and returns in two stages: first in 2004 and again in 2007.

I would not read too much into the return figures, as we have already seen that the algorithm does a better job of picking up volatility changes than return changes. In a manager-monitoring context, though, it would certainly highlight an issue that an asset allocator would want to raise with the fund manager. Maybe it is significant, maybe not, but now we can go to the manager and find out.

Conclusion

I hope this has been an interesting look at off-line change detection algorithms. Shortly I will present how to adapt this approach still further to cope with on-line change detection in real time.
