Solved – General mathematics for confidence interval in multiple linear regression

confidence interval, multiple regression

I have 30 observations and 4 (edited: numerical) variables (x1, x2, x3, x4). I also have a linear model to predict the y value. I want to calculate confidence intervals for the predicted (edited: and calculated) y values. I know that in a one-factor analysis a confidence interval can be calculated by multiplying the t-value and the standard error. How is it done in a multi-factor analysis? My prior understanding is that matrix calculations are involved.

Best Answer

In the interest of preserving the answer given at this link: http://reliawiki.org/index.php/Multiple_Linear_Regression_Analysis#Confidence_Intervals_in_Multiple_Linear_Regression

Confidence Intervals in Multiple Linear Regression

1) Confidence Interval on $\beta_j$
A 100(1−α) percent confidence interval on the regression coefficient, $\beta_j$, is obtained as follows: $$\hat{\beta}_{j}\pm t_{\alpha/2,\,n-(k+1)}\sqrt{C_{jj}}$$

$C_{jj}$ ⇒ diagonal elements of the matrix $C$
Matrix $C$ ⇒ variance-covariance matrix of the estimated regression coefficients, the $\hat{\beta}$s
$$C=\hat{\sigma}^{2}(X'X)^{-1}$$ The variances of the $\hat{\beta}$s are obtained using the $(X'X)^{-1}$ matrix.
$C$ is a symmetric matrix whose diagonal elements, $C_{jj}$, represent the variance of the estimated $j$th regression coefficient, $\hat{\beta}_{j}$.
The off-diagonal elements, $C_{ij}$, represent the covariance between the $i$th and $j$th estimated regression coefficients, $\hat{\beta}_{i}$ and $\hat{\beta}_{j}$.
The value of $\hat{\sigma}^{2}$ is obtained using the error mean square, MSE.
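To make this concrete, here is a minimal NumPy/SciPy sketch (not part of the linked answer) that computes the coefficient intervals for simulated data with n = 30 observations and k = 4 predictors, matching the question's setup; the seed, "true" coefficients, and noise level are made up for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 30, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])   # design matrix: intercept + 4 predictors
beta_true = np.array([1.0, 2.0, -1.0, 0.5, 3.0])             # made-up "true" coefficients
y = X @ beta_true + rng.normal(scale=0.8, size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)                  # least-squares estimates
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - (k + 1))                    # error mean square, MSE
C = sigma2_hat * np.linalg.inv(X.T @ X)                       # variance-covariance matrix of the beta-hats

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - (k + 1))           # t_{alpha/2, n-(k+1)}
for j in range(k + 1):
    half = t_crit * np.sqrt(C[j, j])                          # t * sqrt(C_jj)
    print(f"beta_{j}: {beta_hat[j]:.3f} +/- {half:.3f}")
```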


2) Confidence Interval on $\hat{y}_{i}$
$$\hat{y}_{i}\pm t_{\alpha/2,\,n-(k+1)}\sqrt{\hat{\sigma}^{2}\,x_{i}'(X'X)^{-1}x_{i}}$$
where $$x_{i} = \begin{bmatrix} 1\\ x_{i1}\\ \vdots\\ x_{ik} \end{bmatrix}$$
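Continuing with the same made-up data as in the sketch above, this is how the interval on the fitted mean response at an observed point $x_i$ could be computed (an illustration, not code from the linked answer):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 30, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([1.0, 2.0, -1.0, 0.5, 3.0]) + rng.normal(scale=0.8, size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - (k + 1))                    # error mean square, MSE
XtX_inv = np.linalg.inv(X.T @ X)
t_crit = stats.t.ppf(0.975, df=n - (k + 1))                   # alpha = 0.05

i = 0                                                         # pick the first observed point
x_i = X[i]                                                    # [1, x_i1, ..., x_ik]
y_hat_i = x_i @ beta_hat
half = t_crit * np.sqrt(sigma2_hat * (x_i @ XtX_inv @ x_i))   # t * sqrt(sigma^2 x_i'(X'X)^-1 x_i)
print(f"mean response at x_{i}: {y_hat_i:.3f} +/- {half:.3f}")
```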

3) Confidence Interval on New Observations
$$\hat{y}_{p}\pm t_{\alpha/2,\,n-(k+1)}\sqrt{\hat{\sigma}^{2}\left(1 + x_{p}'(X'X)^{-1}x_{p}\right)}$$
where $$x_{p} = \begin{bmatrix} 1\\ x_{p1}\\ \vdots\\ x_{pk} \end{bmatrix}$$ and $x_{p1}, \ldots, x_{pk}$ ⇒ levels of the predictor variables for $\hat{y}_{p}$
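And a corresponding sketch for a new observation at a hypothetical point $x_p$ (the values of $x_p$ are made up); note the extra 1 under the square root, which widens the interval to account for the new observation's own error:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 30, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([1.0, 2.0, -1.0, 0.5, 3.0]) + rng.normal(scale=0.8, size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - (k + 1))                    # error mean square, MSE
XtX_inv = np.linalg.inv(X.T @ X)
t_crit = stats.t.ppf(0.975, df=n - (k + 1))                   # alpha = 0.05

x_p = np.array([1.0, 0.2, -0.4, 1.1, 0.0])                    # hypothetical new point [1, x_p1, ..., x_pk]
y_hat_p = x_p @ beta_hat
half = t_crit * np.sqrt(sigma2_hat * (1.0 + x_p @ XtX_inv @ x_p))  # extra 1 for a new observation
print(f"new observation at x_p: {y_hat_p:.3f} +/- {half:.3f}")
```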
