[Math] SOLVED: How to retrieve Eigenvectors from QR algorithm that applies shifts and deflation

Tags: eigenvalues, eigenvectors, linear algebra

After having googled for several days without locating a definitive answer, I will try my luck here!

I have implemented a version of the QR algorithm to compute the eigenvalues, and hopefully the eigenvectors, of a matrix $A$ of dimension $n\times n$.

In order to speed up the convergence rate, I have applied a version of the algorithm that incorporates several improvements, mainly inspired by https://www.math.kth.se/na/SF2524/matber15/qrmethod.pdf.

Firstly, we compute $H=\text{Hessenberg}(A)$, which reduces $A$ to upper Hessenberg form $H_{n\times n}$.

Now, using $H$, we estimate the eigenvalues with a QR algorithm that applies a Wilkinson shift and deflation. This can loosely be described by the following pseudo-procedure, where $\lambda_i$ denotes the $i$-th eigenvalue:

$\text{set}\ H^{(0)}:=H\\
\text{for}\ m=n,\ldots,2 \ \text{do} \\
\quad k=0\\
\qquad \text{repeat}\\
\qquad \quad k=k+1\\
\qquad \quad \sigma_k=\text{Wilkinson}\bigl(H^{(k-1)}\bigr)\\
\qquad \quad H^{(k-1)}-\sigma_kI=:Q_kR_k\quad (*)\\
\qquad \quad H^{(k)}=R_kQ_k+\sigma_kI\\
\qquad \text{until}\ \vert h^{(k)}_{m,m-1}\vert<\epsilon\\
\qquad \lambda_m=h^{(k)}_{m,m}\\
\qquad H^{(0)}=H^{(k)}_{1:(m-1),1:(m-1)}\\
\text{end for}$

The last step is simply a deflation step that drops the last row and column of $H$ upon satisfactory convergence towards an eigenvalue.

The function $\text{Wilkinson}$ computes the shift, and the QR factorization $(*)$ is carried out with Givens rotations, which is a standard procedure.
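For concreteness, here is a minimal NumPy sketch of the procedure above, computing eigenvalues only. It is an illustration under stated substitutions, not the poster's implementation: `np.linalg.qr` stands in for the explicit Givens-rotation factorization, `scipy.linalg.hessenberg` performs the initial reduction, and the names `wilkinson_shift` and `qr_eigenvalues` are my own.

```python
import numpy as np
from scipy.linalg import hessenberg

def wilkinson_shift(H):
    """Wilkinson shift: the eigenvalue of the trailing 2x2 block of H
    closest to the bottom-right entry H[-1, -1]."""
    a, b = H[-2, -2], H[-2, -1]
    c, d = H[-1, -2], H[-1, -1]
    tr, det = a + d, a * d - b * c
    disc = np.sqrt(tr * tr - 4 * det + 0j)   # +0j: allow complex roots
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    return l1 if abs(l1 - d) < abs(l2 - d) else l2

def qr_eigenvalues(A, eps=1e-12, max_iter=500):
    """Shifted QR with deflation (the pseudocode above), eigenvalues only."""
    H, _ = hessenberg(np.asarray(A), calc_q=True)
    H = H.astype(complex)
    n = H.shape[0]
    eigs = np.empty(n, dtype=complex)
    for m in range(n, 1, -1):               # active block is H[:m, :m]
        for _ in range(max_iter):
            sigma = wilkinson_shift(H[:m, :m])
            Q, R = np.linalg.qr(H[:m, :m] - sigma * np.eye(m))
            H[:m, :m] = R @ Q + sigma * np.eye(m)
            if abs(H[m - 1, m - 2]) < eps:  # |h_{m,m-1}| small: deflate
                break
        eigs[m - 1] = H[m - 1, m - 1]       # lock in lambda_m
    eigs[0] = H[0, 0]
    return eigs
```

On a real symmetric test matrix this reproduces `np.linalg.eigvals` to high accuracy; for real nonsymmetric matrices the complex arithmetic lets the complex shift handle conjugate eigenvalue pairs, at the cost of working in complex throughout.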

My question is: how do I determine the corresponding eigenvectors? In the standard QR algorithm they can be computed as $\Pi_i Q_i$, but due to the deflation step I do not know how to proceed.

I have verified that the eigenvalues are computed correctly.

In advance, thank you very much for any help!

EDIT: Implementing Federico's solution below works as intended. Make sure you account for the initial similarity transform $H=UAU^*$, where $H$ is upper Hessenberg: if $\bar{Q}_H$ is the eigenvector matrix of $H$, then $\bar{Q}_A=U^*\bar{Q}_H$ is the eigenvector matrix of $A$.
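The back-transformation in the edit can be sketched as follows; `np.linalg.eig` stands in for the shifted QR eigensolver, so only the bookkeeping around the Hessenberg reduction is illustrated. Note that SciPy returns $A = U_h H U_h^*$, i.e. $H = U_h^* A U_h$, so in the notation above $U = U_h^*$ and $\bar{Q}_A = U^*\bar{Q}_H = U_h\bar{Q}_H$.

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
H, Uh = hessenberg(A, calc_q=True)   # A = Uh @ H @ Uh.conj().T
lam, QH = np.linalg.eig(H)           # eigenpairs of H (stand-in solver)
QA = Uh @ QH                         # back-transformed eigenvectors of A
```

Each column of `QA` is then an eigenvector of `A` for the corresponding entry of `lam`, which can be checked via the residual `A @ QA - QA * lam`.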

Best Answer

Instead of dropping one row and one column, compute at each step an $(n-1)\times(n-1)$ orthogonal transformation $Q$ (or $(n-k)\times(n-k)$, after $k$ deflation steps) by working on the reduced matrix, and then apply it to the full matrix as $$ \begin{bmatrix} Q^* \\& I \end{bmatrix} \begin{bmatrix} H_{11} & H_{12}\\ 0 & H_{22} \end{bmatrix} \begin{bmatrix} Q \\& I \end{bmatrix} = \begin{bmatrix} Q^*H_{11}Q & Q^*H_{12}\\ 0 & H_{22} \end{bmatrix}. $$ In practice, all you have to do is operate on the leading $(n-k)\times(n-k)$ block as you were doing before, and then multiply $H_{12}$ by the orthogonal transformation $Q$ that you have generated.
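The block update above amounts to two extra matrix products per step. A minimal sketch (the function name `apply_deflated_step` is mine):

```python
import numpy as np

def apply_deflated_step(H, Q, m):
    """Apply the m x m orthogonal factor Q, computed from the leading
    block H[:m, :m], to the full matrix via diag(Q, I)^* @ H @ diag(Q, I).
    The already-deflated trailing block H[m:, m:] is left untouched."""
    H[:m, :m] = Q.conj().T @ H[:m, :m] @ Q   # similarity on leading block
    H[:m, m:] = Q.conj().T @ H[:m, m:]       # update the H_12 coupling block
    return H
```

This is algebraically identical to conjugating the full $n\times n$ matrix by $\operatorname{diag}(Q, I)$, but avoids touching the converged trailing block.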

In this way, your algorithm explicitly computes a sequence of $n\times n$ orthogonal transformations $Q_1, Q_2, \dots, Q_m$ that turn $A$ into a triangular matrix (Schur form). You can accumulate the product $Q_1Q_2\dotsm Q_m$ with $O(n^2)$ additional operations per step (so $O(n^3)$ in total during the algorithm, under the usual assumption that $O(1)$ iterations per eigenvalue suffice). After that, all you have to do is recover the eigenvectors from the Schur form.
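The last step, recovering eigenvectors from a Schur form $A = QTQ^*$, can be done by back substitution on the triangular factor: for each eigenvalue $t_{kk}$, solve $(T_{1:k,1:k}-t_{kk}I)y=-T_{1:k,k}$ and embed $y$ into an eigenvector of $T$, then map it back with $Q$. A sketch, assuming distinct eigenvalues (the shifted triangular solve is singular otherwise); `eigvecs_from_schur` is an illustrative name:

```python
import numpy as np
from scipy.linalg import schur

def eigvecs_from_schur(T, Q):
    """Eigenvectors of A = Q T Q^* from its complex Schur form,
    assuming the diagonal of T has distinct entries."""
    n = T.shape[0]
    V = np.zeros((n, n), dtype=complex)
    for k in range(n):
        v = np.zeros(n, dtype=complex)
        v[k] = 1.0
        if k > 0:
            # back substitution on the leading k x k triangular block
            v[:k] = np.linalg.solve(T[:k, :k] - T[k, k] * np.eye(k),
                                    -T[:k, k])
        v = Q @ v                       # map back to the original basis
        V[:, k] = v / np.linalg.norm(v)
    return V
```

Column $k$ of the result is then a unit eigenvector of $A$ for the eigenvalue $T_{kk}$.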

I hope this is sufficiently clear!
