So, I have found the answer. It is not this (first formula in the question):
$$V_r=\frac{\sum_i n_ie_i^2}{\sum_i n_i} - \left(\frac{\sum_i n_ix_i}{\sum_i n_i}\right)^2$$
But this (somehow, the $e_i$ got replaced by $x_i$ in the presentation):
$$V_r=\frac{\sum_i n_ie_i^2}{\sum_i n_i} - \left(\frac{\sum_i n_ie_i}{\sum_i n_i}\right)^2$$
The second formula from the question is the same one, but it is used when frequencies are not specified (no cross-table, just an independent and a dependent variable).
The solution wasn't as hard as the moderators in the comments kept insisting!
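A quick numeric check of the corrected formula in Python, with made-up class values $e_i$ and frequencies $n_i$ (the numbers are just for illustration):

```python
import numpy as np

# Made-up class values e_i and frequencies n_i, just for illustration
e = np.array([1.0, 2.0, 3.0, 4.0])
n = np.array([5, 10, 8, 2])

# V_r = sum(n_i e_i^2)/sum(n_i) - (sum(n_i e_i)/sum(n_i))^2
mean = np.sum(n * e) / np.sum(n)
V_r = np.sum(n * e**2) / np.sum(n) - mean**2

# Cross-check: this equals the frequency-weighted mean squared deviation
assert np.isclose(V_r, np.average((e - mean)**2, weights=n))
print(V_r)
```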
It helps to think carefully about exactly what type of objects $\hat \theta$ and $\hat g$ are.
In the top case, $\hat \theta$ would be what I would call an estimator of a parameter. Let's break it down. There is some true value $\theta$ we would like to gain knowledge about; it is a number. To estimate the value of this parameter we use $\hat \theta$, which consumes a sample of data and produces a number which we take to be an estimate of $\theta$. Said differently, $\hat \theta$ is a function which consumes a set of training data and produces a number:
$$ \hat \theta: \mathcal{T} \rightarrow \mathbb{R} $$
Often, when only one set of training data is around, people use the symbol $\hat \theta$ to mean the numeric estimate instead of the estimator, but in the grand scheme of things, this is a relatively benign abuse of notation.
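To make that concrete, here is a minimal Python sketch. I'm using the sample mean as the estimator purely for illustration; any estimator of a number works the same way:

```python
import numpy as np

# theta_hat consumes a training set and produces a number:
# an estimate of the unknown parameter theta.
def theta_hat(sample: np.ndarray) -> float:
    return float(np.mean(sample))  # sample mean as an example estimator

rng = np.random.default_rng(0)
sample = rng.normal(loc=2.0, scale=1.0, size=100)  # true theta = 2.0
estimate = theta_hat(sample)  # a single numeric estimate of theta
```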
OK, on to the second thing: what is $\hat g$? In this case we are doing much the same, but this time we are estimating a function instead of a number. Now we consume a training dataset and are returned a function from datapoints to real numbers:
$$ \hat g: \mathcal{T} \rightarrow (\mathcal{X} \rightarrow \mathbb{R}) $$
This is a little mind bending the first time you think about it, but it's worth digesting.
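Here's the same idea in code, with a least-squares line standing in for $\hat g$ (again, just an illustration):

```python
import numpy as np

# g_hat consumes a training set and returns a *function* from
# datapoints to real numbers, not a number.
def g_hat(x_train, y_train):
    slope, intercept = np.polyfit(x_train, y_train, deg=1)
    def fitted(x):
        return intercept + slope * x
    return fitted

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3.0 + 0.5 * x + rng.normal(size=50)

f = g_hat(x, y)      # f is a function...
prediction = f(4.2)  # ...and only evaluating it yields a number
```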
Now, if we think of our samples as being distributed in some way, then $\hat \theta$ becomes a random variable, and we can take its expectation and variance and whatever we want, with no problem. But what is the variance of a function-valued random variable? It's not really obvious.

The way out is to think like a computer programmer: what can functions do? They can be evaluated. This is where your $x_i$ comes in.

In this setup, $x_i$ is just a solitary fixed datapoint. The second equation says that as long as you hold a datapoint $x_i$ fixed, you can think of $\hat g$ as an estimator that returns a function, which you immediately evaluate to get a number. Now we're back in the situation where we consume datasets and get a number in return, so all our statistics of number-valued random variables comes to bear.
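A small simulation shows how this plays out, sticking with the least-squares sketch from above (the data-generating process is made up):

```python
import numpy as np

rng = np.random.default_rng(0)
x_i = 4.2  # a solitary fixed datapoint

# Draw many training sets, fit on each, and evaluate at x_i.
# The values g_hat(T)(x_i) form an ordinary numeric sample, so
# their variance is the pointwise variance of the estimator.
evaluations = []
for _ in range(1000):
    x = rng.uniform(0, 10, size=50)
    y = 3.0 + 0.5 * x + rng.normal(size=50)
    slope, intercept = np.polyfit(x, y, deg=1)
    evaluations.append(intercept + slope * x_i)

print(np.var(evaluations))  # Var[ g_hat(x_i) ] across training sets
```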
I've discussed this in a slightly different way in this answer.
Is it correct to think of this as each observation/fitted value having its own variance and bias?
Yup.
You can see this in confidence intervals around scatterplot smoothers: they tend to be wider near the boundaries of the data, as there the predicted value is more strongly influenced by the nearby training points. There are some examples in this tutorial on smoothing splines.
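You can reproduce the boundary effect with a small simulation (plain least squares here rather than a spline, but the phenomenon is the same):

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate the pointwise variance of the fitted line at the edge of
# the data versus the middle, over many simulated training sets.
preds_edge, preds_mid = [], []
for _ in range(2000):
    x = rng.uniform(0, 10, size=50)
    y = 3.0 + 0.5 * x + rng.normal(size=50)
    slope, intercept = np.polyfit(x, y, deg=1)
    preds_edge.append(intercept + slope * 0.0)  # boundary of the data
    preds_mid.append(intercept + slope * 5.0)   # centre of the data

print(np.var(preds_edge), np.var(preds_mid))  # edge variance is larger
```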
Best Answer
The mean squared error as you have written it for OLS is hiding something:
$$\frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{n-2} = \frac{\sum_{i=1}^{n}\left[y_i - \left(\hat{\beta}_{0} + \hat{\beta}_{x}x_{i}\right)\right]^2}{n-2}$$
Notice that the numerator sums over a function of both $y$ and $x$, so you lose a degree of freedom for each variable (or for each estimated parameter explaining one variable as a function of the other, if you prefer), hence $n-2$. In the formula for the sample variance, the numerator is a function of a single variable, so you lose just one degree of freedom in the denominator.
However, you are on track in noticing that these are conceptually similar quantities. The sample variance measures the spread of the data around the sample mean (in squared units), while the MSE measures the vertical spread of the data around the sample regression line (in squared vertical units).
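A small Python check of the two denominators, with simulated data (the data-generating process is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=30)
y = 3.0 + 0.5 * x + rng.normal(size=30)
n = len(y)

# Sample variance: spread around the sample mean. One estimated
# quantity (the mean), hence n - 1 in the denominator.
s2 = np.sum((y - y.mean())**2) / (n - 1)

# OLS mean squared error: spread around the fitted line. Two
# estimated parameters (intercept and slope), hence n - 2.
beta_x, beta_0 = np.polyfit(x, y, deg=1)
mse = np.sum((y - (beta_0 + beta_x * x))**2) / (n - 2)

print(s2, mse)
```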