It's hard to guess what you might have done wrong, since your Sxx and MSres are correct. Perhaps the t-value? Here is the calculation done by hand in R to confirm that R's answer agrees with the formula:
data <- structure(list(x = c(170, 140, 180, 160, 170, 150, 170, 110,
120, 130, 120, 140, 160), y = c(162.5, 144, 147.5, 163.5, 192,
171.75, 162, 104.83, 105.67, 117.58, 140.25, 150.17, 165.17)), .Names = c("x",
"y"), row.names = c(NA, -13L), class = "data.frame")
data.lm <- lm(y~x, data=data)
# get prediction
pred <- predict.lm(data.lm, newdata=data.frame(x=170), interval="confidence", level=0.95)
# correct t-value (qt gives the lower-tail quantile, so this is negative)
tval <- qt((1 - 0.95)/2, df = 13 - 2)
# correct Sxx
Sxx <- sum((data$x - mean(data$x))^2)
# correct MSres - note division is by the number of df (but you have this right anyway)
MSres <- sum(data.lm$residuals^2)/(13 - 2)
# confidence interval calculated by hand
# (tval is negative, so c(1, -1) orders the bounds as lower, upper)
sqrt(MSres * (1/13 + (170 - mean(data$x))^2/Sxx)) * tval * c(1, -1) + pred[1]
# compare with R interval calculation
pred
A confidence interval is an estimate of an interval in which the mean of the observations will fall when x = x_i. In its formula the term
$$
\frac{1}{n} + \frac{(x-\bar x)^2}{\sum (x_i - \bar x)^2}
$$
tends to 0 as n grows.
A prediction interval is an estimate of an interval in which a future individual observation will fall when x = x_i. In its formula the corresponding term
$$
1 + \frac{1}{n} + \frac{(x-\bar x)^2}{\sum (x_i - \bar x)^2}
$$
tends to 1.
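For reference, these are the terms inside the standard simple-linear-regression interval formulas at a point x_0 (with MSres and Sxx as computed in the code above):

$$
\text{CI}: \quad \hat y_0 \pm t_{\alpha/2,\,n-2}\,\sqrt{MS_{res}\left(\frac{1}{n} + \frac{(x_0-\bar x)^2}{S_{xx}}\right)}
$$

$$
\text{PI}: \quad \hat y_0 \pm t_{\alpha/2,\,n-2}\,\sqrt{MS_{res}\left(1 + \frac{1}{n} + \frac{(x_0-\bar x)^2}{S_{xx}}\right)}
$$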
That means the confidence interval for the mean of the outcomes at x_i gets smaller as the sample size grows (as the law of large numbers would suggest), which means that increasing the sample size improves our estimate of the average (mean) outcome at x_i.
$$ \lim_{n \to \infty} \text{CI} = \hat y $$
But the dispersion of the distribution of y | x_i (the spread of an individual outcome at x_i) doesn't change much, because the law of large numbers concerns central tendencies, not individual outcomes. Therefore the prediction interval doesn't shrink much either.
$$ \lim_{n \to \infty} \text{PI} = \hat y \pm z_{\alpha/2}\,\sqrt{MSE} $$
(as n grows, the quantile $t_{\alpha/2,\,n-2}$ converges to the standard normal quantile $z_{\alpha/2}$.)
Individual behavior remains uncertain no matter how much you increase your sample size ;)
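You can see both limits numerically. A minimal Python sketch, using hypothetical values (MSres = 4, Sxx growing as 25n, squared distance from the mean (x0 - x̄)² = 100): the CI half-width collapses toward 0, while the PI half-width settles at z·√MSE.

```python
import numpy as np
from scipy import stats

def interval_half_widths(n, mse=4.0, var_x=25.0, d2=100.0, alpha=0.05):
    """Half-widths of the CI and PI at a point whose squared distance from
    x-bar is d2, assuming MSres = mse and Sxx = n * var_x (hypothetical)."""
    t = stats.t.ppf(1 - alpha / 2, df=n - 2)
    sxx = n * var_x
    ci = t * np.sqrt(mse * (1 / n + d2 / sxx))
    pi = t * np.sqrt(mse * (1 + 1 / n + d2 / sxx))
    return ci, pi

for n in (15, 150, 15000):
    ci, pi = interval_half_widths(n)
    print(n, round(ci, 3), round(pi, 3))
```

With these numbers, the CI half-width goes from about 2.5 at n = 15 to under 0.1 at n = 15000, while the PI half-width only drifts from about 5 down to z·√MSE ≈ 1.96 · 2 ≈ 3.92.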
Best Answer
Recall that consistency means that the estimator converges in probability to the parameter: the distribution of the estimator becomes arbitrarily concentrated around the parameter as the sample size grows.
If you construct a confidence interval based on a consistent estimator, it will therefore shrink toward zero width as the sample size grows.
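A quick sketch of that shrinkage for the simplest consistent estimator, the sample mean (Python, with an assumed N(5, 2²) population; the half-width scales like 1/√n):

```python
import numpy as np

def mean_ci_half_width(n, seed=0, z=1.96):
    """Approximate 95% CI half-width for the mean of n i.i.d. N(5, 2^2) draws.
    The sample mean is consistent, so this shrinks toward zero as n grows."""
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=5.0, scale=2.0, size=n)
    return z * x.std(ddof=1) / np.sqrt(n)

print(mean_ci_half_width(100))    # roughly 0.4
print(mean_ci_half_width(10000))  # roughly 0.04
```

Multiplying the sample size by 100 divides the interval width by about 10, exactly the 1/√n behaviour the answer describes.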