Solved – Difference in expressions of variance and bias between MSE and MSPE


The difference between Mean Squared Error (MSE) and Mean Squared Prediction Error (MSPE) is not in the mathematical expression itself, as @David Robinson writes here. MSE measures the quality of an estimator, while MSPE measures the quality of a predictor. But what is curious to me is that the mathematical expressions for the relationship between bias and variance for MSE and MSPE are different:

The MSPE can be decomposed into two terms (just like mean squared error is decomposed into bias and variance); however, for MSPE one term is the sum of squared biases of the fitted values and the other the sum of variances of the fitted values.

We have:

$MSE(\hat{\theta})=E\left[\left(\hat{\theta}-\theta\right)^2\right]=E\left[\left(\hat{\theta}-E(\hat{\theta})\right)^2\right]+\left(E(\hat{\theta})-\theta\right)^2=Var(\hat{\theta}) + Bias(\hat{\theta},\theta)^2$
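(As a sanity check, this identity is easy to verify by simulation. The sketch below is my own toy setup, a deliberately biased "shrunken mean" estimator of a normal mean, so the specific numbers are only illustrative.)

```python
# Minimal Monte Carlo sketch of MSE(theta_hat) = Var(theta_hat) + Bias(theta_hat, theta)^2,
# using a toy biased estimator (0.9 * sample mean) of a normal mean.
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0            # true parameter: the mean of the sampling distribution
n = 20                 # sample size per simulated training set
reps = 100_000         # number of simulated training sets

# A deliberately biased estimator: shrink the sample mean toward zero.
samples = rng.normal(theta, 1.0, size=(reps, n))
estimates = 0.9 * samples.mean(axis=1)

mse = np.mean((estimates - theta) ** 2)
variance = estimates.var()
bias_sq = (estimates.mean() - theta) ** 2

print(f"MSE            = {mse:.4f}")
print(f"Var + Bias^2   = {variance + bias_sq:.4f}")  # agrees up to Monte Carlo error
```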

From Wikipedia we read that for the MSPE we have the following relation:

\begin{equation} MSPE(L)=E\left[\sum_i\left(\hat{g}(x_i)-g(x_i)\right)^2\right]=\sum_i \left(E[\hat{g}(x_i)]-g(x_i)\right)^2 + \sum_i Var(\hat{g}(x_i)) =\sum_i Bias(\hat{g}(x_i),g(x_i))^2 + \sum_i Var(\hat{g}(x_i)) \end{equation}

I'm looking for an intuitive explanation of the expression for bias and variance in the MSPE. Is it correct to think of this as each observation/fitted value having its own variance and bias? If so, it seems to me that increasing the number of observations should increase the MSPE (more terms in the bias and variance sums). Should there maybe be a $\frac{1}{n}$ in front of the sums of biases and variances?

Best Answer

It helps to think carefully about exactly what type of objects $\hat \theta$ and $\hat g$ are.

In the top case, $\hat \theta$ would be what I would call an estimator of a parameter. Let's break it down. There is some true value $\theta$ we would like to gain knowledge about; it is a number. To estimate the value of this parameter we use $\hat \theta$, which consumes a sample of data and produces a number which we take to be an estimate of $\theta$. Said differently, $\hat \theta$ is a function which consumes a set of training data and produces a number:

$$ \hat \theta: \mathcal{T} \rightarrow \mathbb{R} $$

Often, when only one set of training data is around, people use the symbol $\hat \theta$ to mean the numeric estimate instead of the estimator, but in the grand scheme of things, this is a relatively benign abuse of notation.
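If it helps, here is how I picture that in code. This is just a toy illustration of my own (the sample mean as the estimator), not anything specific from the question:

```python
# A parameter estimator: a training sample goes in, a single number comes out.
import numpy as np

def theta_hat(training_sample: np.ndarray) -> float:
    """Estimator of the mean: data in, one number out."""
    return float(np.mean(training_sample))

sample = np.random.default_rng(0).normal(loc=5.0, scale=2.0, size=100)
print(theta_hat(sample))   # the *number* people often also call "theta hat"
```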

OK, on to the second thing: what is $\hat g$? In this case, we are doing much the same, but this time we are estimating a function instead of a number. Now we consume a training dataset and are returned a function from datapoints to real numbers:

$$ \hat g: \mathcal{T} \rightarrow (\mathcal{X} \rightarrow \mathbb{R}) $$

This is a little mind bending the first time you think about it, but it's worth digesting.
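In code, the analogue looks something like the sketch below. Again, this is only a toy illustration I made up (a straight-line fit), not anything from the question:

```python
# A function-valued estimator: training data in, a fitted *function* out.
import numpy as np

def g_hat(x_train: np.ndarray, y_train: np.ndarray):
    """Fit a straight line and return the fitted function, not a number."""
    slope, intercept = np.polyfit(x_train, y_train, deg=1)
    def fitted(x):
        return slope * x + intercept
    return fitted

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=30)
y_train = 2.0 * x_train + rng.normal(0.0, 0.1, size=30)

f = g_hat(x_train, y_train)   # f is itself a function...
print(f(0.5))                 # ...evaluating it at a fixed point gives a number
```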

Now, if we think of our samples as being distributed in some way, then $\hat \theta$ becomes a random variable, and we can take its expectation and variance and whatever else we want, with no problem. But what is the variance of a function-valued random variable? It's not really obvious.

The way out is to think like a computer programmer: what can functions do? They can be evaluated. This is where your $x_i$ comes in.

In this setup, $x_i$ is just a solitary fixed datapoint. The second equation is saying that, as long as you hold a datapoint $x_i$ fixed, you can think of $\hat g$ as an estimator that returns a function, which you immediately evaluate at $x_i$ to get a number. Now we're back in the situation where we consume datasets and get a number in return, so all our statistics of number-valued random variables comes to bear.
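To make this concrete, here is a small simulation sketch of the MSPE decomposition. The true function, the cubic polynomial fit, and the noise level are all my own toy choices, so treat it as an illustration only: for a fixed grid of $x_i$, each fitted value $\hat g(x_i)$ gets its own bias and variance across training sets, and summing them recovers the MSPE.

```python
# Toy check of MSPE = sum_i Bias(g_hat(x_i))^2 + sum_i Var(g_hat(x_i))
# across repeated training sets, with the design points x_i held fixed.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 15)        # fixed design points x_i
g = np.sin(2 * np.pi * x)        # true regression function g(x_i)
reps = 5_000

# Fit a cubic polynomial to a fresh noisy training set each time and
# record the fitted values g_hat(x_i) at the same design points.
fits = np.empty((reps, x.size))
for r in range(reps):
    y = g + rng.normal(0.0, 0.3, x.size)
    coefs = np.polyfit(x, y, deg=3)
    fits[r] = np.polyval(coefs, x)

mspe = np.mean(np.sum((fits - g) ** 2, axis=1))    # E[ sum_i (g_hat(x_i) - g(x_i))^2 ]
bias_sq = np.sum((fits.mean(axis=0) - g) ** 2)     # sum_i Bias(g_hat(x_i))^2
var = np.sum(fits.var(axis=0))                     # sum_i Var(g_hat(x_i))

print(f"MSPE               = {mspe:.4f}")
print(f"sum Bias^2 + Var   = {bias_sq + var:.4f}") # should match up to simulation error
```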

I've discussed this in a slightly different way in this answer.

Is it correct to think of this as each observation/fitted value having its own variance and bias?

Yup.

You can see this in confidence intervals around scatterplot smoothers: they tend to be wider near the boundaries of the data, since there the fitted value leans more heavily on just a handful of nearby training points. There are some examples in this tutorial on smoothing splines.
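Here is a rough simulation sketch of that effect with an ordinary polynomial fit (my own toy stand-in, not the smoothing splines from the tutorial): the pointwise spread of the fitted values is largest near the edges of the data.

```python
# Refit on many training sets and look at the spread of the fitted values
# at each point of a fixed grid; the spread is typically largest at the edges.
import numpy as np

rng = np.random.default_rng(4)
x_grid = np.linspace(0, 1, 11)
reps = 2_000
preds = np.empty((reps, x_grid.size))

for r in range(reps):
    x = rng.uniform(0, 1, 40)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 40)
    coefs = np.polyfit(x, y, deg=5)
    preds[r] = np.polyval(coefs, x_grid)

# Pointwise standard deviation of the fit across training sets.
for xi, sd in zip(x_grid, preds.std(axis=0)):
    print(f"x = {xi:.1f}   sd of fitted value = {sd:.3f}")
```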