In my quest to understand correlation and covariance better, I took a short trip down the rabbit hole to visit eigen vectors and returned carrying a covariance ellipse. In my next post I will tie these concepts together when I re-visit the real-world issue of correlation and volatility. In the meantime, here is some useful background if you don’t already have it.

I am going to start by looking at correlation and then covariance matrices for two data series, then extend the ideas to multiple data series. I want to provide an intuitive understanding of what the eigen values and eigen vectors of these matrices mean.

Let’s start by considering a correlation matrix for two series:

    | 1  ρ |
    | ρ  1 |

If **e**_{1} and **e**_{2} are 2-dimensional column vectors and λ_{1} and λ_{2} are scalars and we define them as follows:

**ρ** . **e**_{i} = λ_{i} . **e**_{i},

then λ_{i} and **e**_{i} are the eigen values and eigen vectors, respectively, of the correlation matrix, **ρ**. Each pairing of an eigen value and an eigen vector is actually a representative of an infinite set of solutions (since for any non-zero value of k, if λ_{i} and **e**_{i} is a solution, then λ_{i} and k . **e**_{i} is also a solution). Since the eigen value has a “scale” interpretation while the eigen vector has a “direction” interpretation, it is usual to choose |**e**_{i}| = 1.

We find that the eigen values are (1 + ρ) and (1 − ρ) and their respective eigen vectors are (1 / √2, 1 / √2) and (-1 / √2, 1 / √2). This makes intuitive sense since by computing the correlation (rather than covariance) matrix we have normalized the sample series to mean zero and standard deviation 1: we have calculated the covariance of the z-values.

The eigen vectors represent two alternative portfolios: the first is a 1:1 mix of the z-scores of the pairs in the data series, the second a 1:-1 mix. If the series are positively correlated, the 1:1 portfolio will have a larger variance than the 1:-1 portfolio. In fact, it will have the largest variance of any portfolio we can create from the original two series. We find in this case that the larger eigen value is associated with the eigen vector with the two positive values. So the eigen value has something to do with the portfolio with the largest possible variance. This brings us to our first “aha” moment: any correlation matrix is actually the covariance matrix of the z-values of the series, but its eigen values and vectors have no useful interpretation in the real world. By using correlation rather than covariance we have given up some useful information: the actual variances! As a consequence, the resulting eigen values have lost that information too.
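These claims are easy to check numerically. Here is a minimal sketch (in Python with NumPy rather than the R used elsewhere in this post; the value of ρ is only illustrative) confirming the eigen values (1 + ρ) and (1 − ρ) and the variances of the 1:1 and 1:-1 z-score portfolios:

```python
import numpy as np

rho = 0.75  # an illustrative correlation coefficient

# Correlation matrix of two series
R = np.array([[1.0, rho],
              [rho, 1.0]])

# eigh is for symmetric matrices; eigen values come back in ascending order
eigvals, eigvecs = np.linalg.eigh(R)
print(eigvals)   # [1 - rho, 1 + rho]
print(eigvecs)   # columns are unit eigen vectors, e.g. (1/sqrt(2), 1/sqrt(2))

# Variances of the 1:1 and 1:-1 portfolios of z-scores (weights normalized)
w_sum  = np.array([1.0, 1.0]) / np.sqrt(2)
w_diff = np.array([-1.0, 1.0]) / np.sqrt(2)
print(w_sum @ R @ w_sum)    # 1 + rho: the larger eigen value
print(w_diff @ R @ w_diff)  # 1 - rho: the smaller eigen value
```

The quadratic form w . **ρ** . w is exactly the variance of the portfolio with weights w, which is why the eigen vector weights recover the eigen values.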

So let’s go through the same process using the covariance matrix instead …

Consider two stationary series, **X** = {x_{1}, x_{2}, …} ~ N(μ_{x}, σ_{x}) and **Y** = {y_{1}, y_{2}, …} ~ N(μ_{y}, σ_{y}), with a correlation coefficient, ρ. The covariance matrix looks like this:

    | σ_{x}²             ρ . σ_{x} . σ_{y} |
    | ρ . σ_{x} . σ_{y}  σ_{y}²            |

This time, when we complete the eigen decomposition of the **covariance** matrix we get something more meaningful: the eigen vectors describe the maximum and minimum variance portfolios and the eigen values tell us the variance of these portfolios. It is more precise to say that the eigen vectors describe the maximum variance portfolio and the next largest variance portfolio that has zero covariance with the first portfolio, but when there are only two eigen values, this amounts to the same thing.

We can demonstrate these results by using the eigen vectors to create a portfolio of x and y for each eigen vector and examining the covariance of these portfolios. Create vectors representing the portfolio series and calculate the covariance matrix:
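The original demonstration used R; here is an equivalent sketch in Python/NumPy (the simulated parameters are only illustrative). We project the data onto the eigen vectors to form the portfolio series, then check that their covariance matrix is diagonal with the eigen values as the variances:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate two correlated series (illustrative parameters)
mu = [0.1, -0.2]
sd_x, sd_y, rho = 0.02, 0.03, 0.75
cov = np.array([[sd_x**2,           rho * sd_x * sd_y],
                [rho * sd_x * sd_y, sd_y**2          ]])
data = rng.multivariate_normal(mu, cov, size=1000)  # columns are X and Y

# Eigen decomposition of the sample covariance matrix
S = np.cov(data, rowvar=False)
lam, E = np.linalg.eigh(S)

# Each eigen vector defines a portfolio; projecting the data onto the
# eigen vectors gives the two portfolio series
ports = data @ E

# Their covariance matrix is diagonal, with the eigen values as variances
print(np.cov(ports, rowvar=False))
print(lam)
```

This works because cov(data . E) = Eᵀ . S . E, which the eigen decomposition guarantees is the diagonal matrix of eigen values.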

i.e. the portfolios are uncorrelated (orthogonal – that’s what eigen vectors do) with variances equal to the eigen values.

The icing on the cake is to use the eigen vectors and eigen values to plot an ellipse through the data. If you take the square root of the eigen values you have the standard deviations, which have the same units of measure as the data. We can use these values (in the plot opposite I use 2 times the square root of the eigen value) as the semi-axes of an ellipse. The ellipse is centered on the means of the two series and rotated at an angle specified by the eigen vectors. Any point on the ellipse represents a portfolio of the two original series (given by the angle of the line from the point to the center of the ellipse). The distance from the center of the ellipse tells us the expected standard deviation of the portfolio!
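As a sketch of how the ellipse parameters fall out of the decomposition (the function name and the 2-standard-deviation scaling are my own choices for illustration, and the covariance matrix used is the one implied by the example parameters that follow):

```python
import numpy as np

def covariance_ellipse(cov, mean, n_std=2.0):
    """Center, semi-axes and rotation of an n_std covariance ellipse."""
    lam, E = np.linalg.eigh(cov)           # eigen values in ascending order
    lam, E = lam[::-1], E[:, ::-1]         # reorder: largest first
    semi_axes = n_std * np.sqrt(lam)       # standard deviations, scaled
    angle = np.arctan2(E[1, 0], E[0, 0])   # rotation of the major axis
    return np.asarray(mean), semi_axes, angle

# Covariance matrix implied by sd = (0.02, 0.03) and rho = 0.75
cov = np.array([[4.0e-4, 4.5e-4],
                [4.5e-4, 9.0e-4]])
center, axes, angle = covariance_ellipse(cov, mean=(0.1, -0.2))
print(axes, np.degrees(angle))
```

Note that the sign of an eigen vector returned by the solver is arbitrary, so the angle is only determined up to 180 degrees; for drawing an ellipse that ambiguity does not matter.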

Here’s an example. In this case I used my R function Random Correlated Series Generator to generate two series with 1,000 points each. The series had means (0.1, -0.2), standard deviations (0.02, 0.03) and a correlation coefficient of 0.75.

The covariance matrix and eigen decomposition are as follows (I have rounded the numbers to make the results clear):
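A sketch reproducing the covariance matrix and its eigen decomposition from the stated parameters (these are population values rather than sample estimates, so they will differ slightly from figures computed on the simulated series):

```python
import numpy as np

# Covariance matrix implied by the stated parameters
sd_x, sd_y, rho = 0.02, 0.03, 0.75
cov = np.array([[sd_x**2,           rho * sd_x * sd_y],
                [rho * sd_x * sd_y, sd_y**2          ]])
print(cov)
# [[0.0004   0.00045]
#  [0.00045  0.0009 ]]

lam, E = np.linalg.eigh(cov)   # eigen values in ascending order
print(np.round(lam, 6))        # approximately [0.000135, 0.001165]
print(np.round(E, 3))          # columns close to (-0.866, 0.5) and (0.5, 0.866), up to sign
```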

As a check, when we create a portfolio that is 0.5 . **X** + 0.866 . **Y** and another that is -0.866 . **X** + 0.5 . **Y** and calculate the covariance matrix, we get a diagonal matrix with the eigen values as the variances, as expected.

EDIT: I promised to extend these ideas beyond 2 series. There will be as many eigen value / vector combinations as there are series. The largest eigen value will represent the variance of the maximum variance portfolio (whose make-up is specified by its eigen vector). The next largest eigen value will be the variance of the portfolio with the largest variance that is uncorrelated with the first portfolio. And so on, until you reach the smallest eigen value, which will be the variance of the minimum variance portfolio. This comes from the fact that the eigen vectors form an orthonormal basis: they are at right angles to each other, so the corresponding portfolios have no covariance!
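A sketch of the same idea for more than two series (three here, with an arbitrary mixing matrix of my own choosing to induce the correlation):

```python
import numpy as np

rng = np.random.default_rng(7)

# Three correlated series, built by mixing independent noise (illustrative)
mix = np.array([[1.0, 0.5, 0.2],
                [0.0, 1.0, 0.4],
                [0.0, 0.0, 1.0]])
data = rng.standard_normal((5000, 3)) @ mix

# Eigen decomposition of the sample covariance matrix
S = np.cov(data, rowvar=False)
lam, E = np.linalg.eigh(S)       # eigen values in ascending order

# One portfolio per eigen vector; their covariance matrix is diagonal
ports = data @ E
C = np.cov(ports, rowvar=False)
print(np.round(C, 6))
print(lam[::-1])   # portfolio variances, maximum variance portfolio first
```

However many series you start with, the off-diagonal covariances of the eigen vector portfolios are zero, and the sorted eigen values run from the maximum variance portfolio down to the minimum variance portfolio.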

This is all part of Principal Component Analysis. My source for most of this has been Wikipedia.

EDIT 2: Corrected typo. Also, I wanted to point out that, although the ellipse axes do not look like they are in the right place, or look orthogonal, this is a result of the differences in the scales on each axis. Since **e**_{1} . **e**_{2} = 0, the eigen vectors are clearly orthogonal. For the chart above, if you stretch the chart vertically until the semi-axes form a right angle, the ellipse itself comes into perfect form as well!
