[Math] QR factorization for least squares

Tags: least-squares, linear-algebra, projective-space

This is from my textbook
[image: textbook excerpt]

I don't understand why a small error in $A^TA$ can lead to a large error in the coefficient matrix. Since $A=QR$, shouldn't it make no difference whether we use $A$ or $QR$? Could someone give an example? Thank you very much.

Best Answer

A typical example uses the matrix $$ \mathbf{A} = \left( \begin{array}{cc} 1 & 1 \\ 0 & \epsilon \\ \end{array} \right) $$

Consider the linear system $$ \mathbf{A} x = b. $$ The least-squares solution via the normal equations is $$ x_{LS} = \left( \mathbf{A}^{*}\mathbf{A} \right)^{-1}\mathbf{A}^{*}b = \left( \begin{array}{c} b_{1}-\frac{b_{2}}{\epsilon } \\ \frac{b_{2}}{\epsilon } \end{array} \right). $$
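To see where the $1/\epsilon$ factors come from, note that the Gram matrix and its inverse are $$ \mathbf{A}^{*}\mathbf{A} = \left( \begin{array}{cc} 1 & 1 \\ 1 & 1+\epsilon ^{2} \end{array} \right), \qquad \left( \mathbf{A}^{*}\mathbf{A} \right)^{-1} = \frac{1}{\epsilon ^{2}} \left( \begin{array}{cc} 1+\epsilon ^{2} & -1 \\ -1 & 1 \end{array} \right), $$ so $\det \left( \mathbf{A}^{*}\mathbf{A} \right) = \epsilon ^{2}$ and the normal-equations matrix is nearly singular for small $\epsilon$. This reflects the general fact that $\kappa \left( \mathbf{A}^{*}\mathbf{A} \right) = \kappa \left( \mathbf{A} \right)^{2}$: forming the normal equations squares the condition number, which is exactly why small errors in $\mathbf{A}^{*}\mathbf{A}$ become large errors in $x_{LS}$.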

Minute changes in $\epsilon$, for example $0.001\to0.00001$, create large changes in the solution: $$ \epsilon = 0.001: \quad x_{LS} = \left( \begin{array}{c} b_{1}-1000b_{2} \\ 1000 b_{2} \end{array} \right) $$ $$ \epsilon = 0.00001: \quad x_{LS} = \left( \begin{array}{c} b_{1}-100000b_{2} \\ 100000 b_{2} \end{array} \right) $$
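A minimal NumPy sketch (not from the answer above; the helper names, the value of $\epsilon$, and the right-hand side are chosen purely for illustration) that compares the two routes numerically:

```python
import numpy as np

def solve_normal_equations(A, b):
    # Forms A^T A explicitly; kappa(A^T A) = kappa(A)^2, so the rounding
    # error made when storing the entry 1 + eps^2 gets amplified enormously.
    return np.linalg.solve(A.T @ A, A.T @ b)

def solve_qr(A, b):
    # Householder QR never forms A^T A; the triangular solve with R
    # only sees kappa(R) = kappa(A).
    Q, R = np.linalg.qr(A)
    return np.linalg.solve(R, Q.T @ b)

eps = 1e-7                           # illustrative value: small, but not so
                                     # small that A^T A rounds to a singular matrix
A = np.array([[1.0, 1.0],
              [0.0, eps]])
x_exact = np.array([1.0, 1.0])
b = A @ x_exact                      # consistent system, exact solution (1, 1)

print("kappa(A)     =", np.linalg.cond(A))        # about 2/eps
print("kappa(A^T A) =", np.linalg.cond(A.T @ A))  # about (2/eps)^2
print("normal equations:", solve_normal_equations(A, b))
print("QR solve:        ", solve_qr(A, b))
```

With $\epsilon = 10^{-7}$ the normal-equations solve typically recovers only a few correct digits (its error grows roughly like $\kappa(\mathbf{A})^{2}u$, with $u$ the unit roundoff), while the QR-based solve stays accurate to roughly $\kappa(\mathbf{A})\,u$.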

The $\mathbf{QR}$ decomposition is $$ \mathbf{A} = \mathbf{QR} = \left( \begin{array}{cc} 1 & 0 \\ 0 & \frac{\epsilon }{\left| \epsilon \right| } \end{array} \right) \left( \begin{array}{cc} 1 & 1 \\ 0 & \left| \epsilon \right| \end{array} \right) $$
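Solving with this factorization amounts to back-substituting $\mathbf{R}x = \mathbf{Q}^{*}b$ (a standard step, spelled out here for completeness): $$ \left( \begin{array}{cc} 1 & 1 \\ 0 & \left| \epsilon \right| \end{array} \right) \left( \begin{array}{c} x_{1} \\ x_{2} \end{array} \right) = \left( \begin{array}{c} b_{1} \\ \frac{\epsilon }{\left| \epsilon \right| } b_{2} \end{array} \right), $$ which yields the same exact answer $x_{2} = b_{2}/\epsilon$, $x_{1} = b_{1} - b_{2}/\epsilon$, but the computation only ever works with $\mathbf{R}$, whose condition number is $\kappa(\mathbf{A})$, and never with $\mathbf{A}^{*}\mathbf{A}$, whose condition number is $\kappa(\mathbf{A})^{2}$.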
