The standard error of the regression line at point $X$ (i.e. $s_{\widehat{Y}_{X}}$) is hand-calculated (Yech!) using:
$s_{\widehat{Y}_{X}} = s_{Y|X}\sqrt{\frac{1}{n}+\frac{\left(X-\overline{X}\right)^{2}}{\sum_{i=1}^{n}{\left(X_{i}-\overline{X}\right)^{2}}}}$,
where the standard error of the estimate (i.e. $s_{Y|X}$) is hand-calculated (Double yech!) using:
$s_{Y|X} = \sqrt{\frac{\sum_{i=1}^{n}{\left(Y_{i}-\widehat{Y}_{i}\right)^{2}}}{n-2}}$.
The confidence band about the regression line is then obtained as $\widehat{Y}_{X} \pm t_{\nu=n-2,\, \alpha/2}\, s_{\widehat{Y}_{X}}$.
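As a sanity check, here is a minimal R sketch on simulated data (all numbers are illustrative, not from the question) that computes these quantities by hand and compares them with the confidence interval predict() reports:

set.seed(1)
n <- 30
x <- runif(n, 0, 10)
y <- 2 + 3*x + rnorm(n, 0, 2)
fit <- lm(y ~ x)
s.yx <- sqrt(sum(residuals(fit)^2) / (n - 2))   # standard error of the estimate
x0 <- 5
se.line <- s.yx * sqrt(1/n + (x0 - mean(x))^2 / sum((x - mean(x))^2))
y.hat <- predict(fit, newdata = data.frame(x = x0))
y.hat + c(-1, 1) * qt(0.975, n - 2) * se.line   # hand-calculated 95% CI
predict(fit, newdata = data.frame(x = x0), interval = "confidence")  # agrees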
Bear in mind that the confidence band about the regression line is not the same beast as the prediction band about the regression line: there is more uncertainty in predicting a new value of $Y$ at a given $X$ than in estimating the regression line itself. And, as you have been discovering, the confidence intervals for the intercept and slope are yet other quantities.
Further, you have misread what confidence intervals mean: "if in 95% of the cases my estimates are within the confidence interval, these seem like a possible outcome?" Confidence intervals do not 'contain 95% of the estimates.' Rather, if the same study design were repeated on many separate samples, then 95% of the 95% confidence intervals (each calculated from its own sample) would contain the true population parameter (the true slope, the true intercept, etc.) that $\widehat{\beta}$ and $\widehat{\alpha}$ estimate.
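A quick simulation makes this repeated-sampling interpretation concrete. This sketch (with made-up true values: intercept 2, slope 3) refits the same design 1000 times and records how often each sample's 95% CI for the slope covers the true slope:

set.seed(2)
covers <- replicate(1000, {
  x <- runif(30, 0, 10)
  y <- 2 + 3*x + rnorm(30, 0, 2)   # true slope is 3
  ci <- confint(lm(y ~ x))["x", ]  # this sample's 95% CI for the slope
  ci[1] <= 3 && 3 <= ci[2]
})
mean(covers)  # close to 0.95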
I haven't investigated why the two CIs are unequal because I think both are wrong. The common problem is that the estimated parameters are likely correlated, perhaps heavily so, but neither procedure appears to account for that. (In the realistic examples shown below, the correlation ranges from -99.3% to -99.8%.)
To find confidence bands, start with the model in the alternative form
$$y = \exp(\alpha + \beta \log(x)) + \varepsilon.$$
The error term is $\varepsilon$ and $(\alpha,\beta)$ is the model parameter to be estimated. As in the question, I will use nonlinear least squares to fit the model. (Taking logarithms of both sides would be futile, because the additive error term prevents the right-hand side from simplifying. Indeed, as the examples below indicate, it is possible, and perfectly OK, for observed values of $y$ to be negative.)
One output of the fitting procedure will be the variance-covariance matrix of the parameter estimates,
$$\mathbb{V} = \begin{pmatrix}\sigma^2_\alpha & \rho \sigma_\alpha\sigma_\beta \\
\rho \sigma_\alpha\sigma_\beta & \sigma^2_\beta\end{pmatrix}.$$
The square roots of the diagonal elements, $\sigma_\alpha$ and $\sigma_\beta,$ are the standard errors of the estimated parameters $\hat\alpha$ and $\hat\beta,$ respectively. $\rho$ estimates the correlation coefficient of those estimates.
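In R, for instance, these pieces can be read directly off a fitted model object (here `fit` stands for any fit supplying vcov(), such as the nls fit in the code further below):

V <- vcov(fit)
se.alpha <- sqrt(V[1, 1])  # sigma_alpha
se.beta  <- sqrt(V[2, 2])  # sigma_beta
rho <- cov2cor(V)[1, 2]    # estimated correlation of the estimates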
The predicted value at any (positive) number $x_0$ is
$$\hat{y}(x_0) = \exp(\hat\alpha + \hat\beta\log(x_0)) = e^\hat\alpha x_0^\hat\beta = \frac{\hat A}{\hat\beta} x_0^\hat\beta$$
where $\hat A=\hat\beta e^\hat\alpha,$ showing this really is the intended model as formulated in the question. Taking logarithms (which now is possible because there are no error terms in the equation) gives
$$\log(\hat y(x_0)) = \hat\alpha + \hat\beta \log(x_0) = (1, \log(x_0))\ (\hat\alpha,\hat\beta)^\prime.$$
Thus the standard error of the logarithm of the estimated response is
$$SE(x_0) = SE(\log(\hat y(x_0))) = \sqrt{(1, \log(x_0))\mathbb{V}(1, \log(x_0))^\prime}.$$
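Written out, this quadratic form is
$$SE(x_0)^2 = \sigma_\alpha^2 + 2\rho\,\sigma_\alpha\sigma_\beta\,\log(x_0) + \sigma_\beta^2\,\log(x_0)^2,$$
which shows explicitly how a strongly negative $\rho$ (as in the examples below) shrinks the standard error wherever $\log(x_0) > 0$.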
Being able to formulate the SE in this way was the whole point of the initial reparameterization of the model: the parameters $\alpha$ and $\beta$ (or, rather, their estimates) enter linearly into the calculation of the standard error of $\hat y.$ There is no need to expand Taylor series or "propagate error."
To construct a $100(1-a)\%$ confidence interval for $\log(y(x_0)),$ do as usual and set the endpoints at $$\log(\hat y(x_0)) \pm Z_{a/2}\, SE(x_0)$$ where $Z_{a/2}$ is the $a/2$ percentage point of the standard Normal distribution, defined by $\Phi(Z_{a/2}) = a/2$ (note that $Z_{a/2} < 0$ for small $a$). Because $\exp$ is strictly increasing, the interval covers $\log(y(x_0))$ exactly when its image covers $y(x_0)$, so the coverage property is preserved upon taking antilogarithms. That is,
A $100(1-a)\%$ confidence interval for $y(x_0)$ has endpoints
$$\exp\left(\log(\hat y(x_0)) \pm Z_{a/2}\, SE(x_0)\right) = \left[\frac{\hat y(x_0)}{\exp\left(|Z_{a/2}|\, SE(x_0)\right)},\ \hat y(x_0)\exp\left(|Z_{a/2}|\, SE(x_0)\right)\right].$$
By doing this for a sequence of values of $x_0$ you can construct confidence bands for the regression. If all is well (that is, all model assumptions are accurate and there are enough data to assure the sampling distribution of $(\hat\alpha,\hat\beta)$ is approximately Normal), you can hope that $100(1-a)\%$ of these bands envelop the graph of the true response function $x\to \exp(\alpha + \beta\log(x)).$
To illustrate, I generated $20$ datasets having properties similar to those in the problem: the true $\alpha$ and $\beta$ are close to the estimates reported in the question and the error variance $\operatorname{Var}(\varepsilon)$ was set to make the sum of squares of residuals close to that reported in the question (near 90,000). I used the foregoing technique to fit this model to each dataset and then for each one plotted (a) the data, (b) the $90\%$ confidence band and, for reference, (c) the graph of the true response function. The latter is colored red wherever it lies beyond the confidence band. The test of this approach is that about $10\%,$ or two of the $20,$ panels ought to show red portions: and that's exactly what happened (in iterations 10 and 16).
For details, consult the following R code, which generated and plotted the simulations.
#
# Describe the model and the data.
#
x <- seq(10, 100, length.out=51)
alpha <- log(7.5)
beta <- 0.6
sigma <- sqrt(90000 / length(x)) # Error SD
a <- 0.10 # Test level
nrow <- 4 # Rows for the simulation
ncol <- 5 # Columns for the simulation
x.0 <- seq(min(x)*0.5, max(x)*1.1, length.out=101) # Prediction points
f <- function(x, theta) exp(theta[1] + theta[2]*log(x))
X <- data.frame(x=x, y.0=f(x, c(alpha, beta)))
#
# Create the datasets.
#
set.seed(17)
data.lst <- lapply(seq_len(nrow*ncol), function(i) {
  X$y <- X$y.0 + rnorm(nrow(X), 0, sigma)
  X$Iteration <- i
  X
})
#
# Fit the model to the datasets.
#
Z <- qnorm(a/2) # For computing a 100(1-a)% confidence band
results.lst <- lapply(seq_along(data.lst), function(i) {
  #
  # Fit the data.
  #
  X <- data.lst[[i]]
  fit <- nls(y ~ f(x, c(alpha, beta)), data=X, start=c(alpha=0, beta=0))
  print(fit) # (Optional)
  #
  # Compute the SEs for log(y).
  #
  V <- vcov(fit)
  se2 <- sapply(log(x.0), function(xi) {
    u <- c(1, xi)
    u %*% V %*% u
  })
  se <- sqrt(se2)
  #
  # Compute the CIs.
  #
  y <- log(f(x.0, coefficients(fit))) # The estimated log responses at the prediction points
  data.frame(Iteration = i,
             x = x.0,
             y = exp(y),
             y.lower = exp(y + Z * se), # Z < 0, so this is the lower endpoint
             y.upper = exp(y - Z * se))
})
#
# Plot the results.
#
X <- do.call(rbind, data.lst)
Y <- do.call(rbind, results.lst)
Y$y.0 <- f(Y$x, c(alpha, beta)) # Reference curve
library(ggplot2)
ggplot(Y, aes(x)) +
  geom_ribbon(aes(ymin=y.lower, ymax=y.upper), fill="Gray") +
  geom_line(aes(y=y.0, color=(y.lower <= y.0 & y.0 <= y.upper)), size=1,
            show.legend=FALSE) +
  geom_point(aes(y=y), data=X, alpha=1/4) +
  facet_wrap(~ Iteration, nrow=nrow) +
  ylab("y") +
  ggtitle(paste0("Simulated datasets and ", 100*(1 - a), "% bands"),
          "True model shown (in color) for reference")
Best Answer
See the Wikipedia page on 'mean and predicted response'. Your linear regression is simpler than the 'standard' one because it omits the intercept term. The variance of the mean response is approximated as $$\operatorname{Var}\left(\hat{\beta}x_d\right) \approx \hat{\sigma}^2 \frac{x_d^2}{\sum_i x_i^2} = V_d,$$ where $\hat{\beta}$ is your estimate of the regression coefficient and $\hat{\sigma}^2$ is the estimated variance of the error term. Thus the $1-\alpha$ confidence interval for the mean response at $x_d$ is $$y_d \pm t_{1-\alpha/2,n-1} \sqrt{V_d},$$ where $y_d = \hat{\beta}x_d$ and $t_{1-\alpha/2,n-1}$ is the $1-\alpha/2$ quantile of a t-distribution with $n-1$ degrees of freedom. (The degrees of freedom are $n-1$ rather than the usual $n-2$ because the no-intercept model estimates only one parameter.)
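Here is a minimal sketch of that calculation in R on simulated data (all values illustrative), checked against predict():

set.seed(3)
n <- 25
x <- runif(n, 1, 10)
y <- 2.5*x + rnorm(n)
fit <- lm(y ~ x - 1)                          # regression through the origin
sigma2.hat <- sum(residuals(fit)^2) / (n - 1) # one parameter, so n-1 df
x.d <- 4
V.d <- sigma2.hat * x.d^2 / sum(x^2)
y.d <- as.numeric(coef(fit) * x.d)
y.d + c(-1, 1) * qt(0.975, n - 1) * sqrt(V.d)                         # by hand
predict(fit, newdata = data.frame(x = x.d), interval = "confidence")  # agrees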
For the 'predicted' response, you have to add $\hat{\sigma}^2$ to the $V_d$ quoted above. The difference between the two is not terribly clear in the Wikipedia article. The idea is that when you sample the relationship at a new point $x_d$, there is error in your estimate of $\beta$, but there is also the error term in the model itself, so you must add the extra $\hat{\sigma}^2$ to your variance estimate. Or, as I have seen it put elsewhere: adding more observations gives you greater confidence in the location of the fitted line, but there is still error in the model.
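Continuing the sketch above, the prediction interval simply adds $\hat{\sigma}^2$ under the square root:

y.d + c(-1, 1) * qt(0.975, n - 1) * sqrt(V.d + sigma2.hat)            # by hand
predict(fit, newdata = data.frame(x = x.d), interval = "prediction")  # agrees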