[Physics] Do eigenvectors of quantum operators span the whole Hilbert space?

hilbert-space, observables, operators, quantum-mechanics

I am trying to solve an exercise in Shankar's QM book (specifically 4.2.1), in which I am asked for the probability of each possible value of the operator $L_x$ when the particle is in a certain eigenstate of another operator, $L_z$. I understand that I have to "expand" the state vector in the $L_z$ basis, which in this case is just the appropriate eigenvector, and then change basis to that of $L_x$, i.e. to write the $L_z$ eigenstate in the $L_x$ basis. The problem is that there seems to be just no way of doing it, since the eigenvectors of $L_x$ (as I compute them) don't span the whole Hilbert space.

I thought all quantum operators had a set of eigenvectors that spanned the whole space. Am I wrong about that, or is my algebra wrong?

Best Answer

I believe you've just made an algebraic error. To find the normalised eigenstates of $L_x$ in the basis of eigenstates of $L_z$, note that the matrices as given to you in exercise 4.2.1 (written here in units of $\hbar$) are already a representation of the operators $L_i$ in the basis of eigenstates of $L_z$. This can be appreciated by noting that $L_z$ is diagonal; acting with it on the vectors $(1,0,0)$, $(0,1,0)$ or $(0,0,1)$ returns the same vector, scaled by an eigenvalue.

$$ L_x = \frac{1}{\sqrt{2}}\begin{pmatrix} 0&1&0 \\1&0&1 \\0&1&0\end{pmatrix} \qquad L_y = \frac{1}{\sqrt{2}}\begin{pmatrix} 0&-i&0 \\i&0&-i \\0&i&0\end{pmatrix}\qquad L_z = \begin{pmatrix} 1&0&0 \\0&0&0 \\0&0&-1\end{pmatrix}$$

As such, the problem of finding the normalised eigenstates of $L_x$ in the $L_z$ basis reduces merely to finding the normalised eigenvectors of the matrix $L_x$ as given to you. These are

$$ \frac{1}{2} \begin{pmatrix} 1 \\ \sqrt{2} \\1\end{pmatrix} \qquad\frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 0\\-1\end{pmatrix} \qquad \frac{1}{2} \begin{pmatrix} 1 \\ -\sqrt{2} \\1\end{pmatrix}$$
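As a sanity check, these can be verified numerically; a quick sketch with numpy (in units where $\hbar = 1$, so the $L_x$ eigenvalues are $1$, $0$, $-1$):

```python
import numpy as np

# L_x in the L_z eigenbasis (units of hbar), as in Shankar Ex. 4.2.1
Lx = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]]) / np.sqrt(2)

# The three normalised eigenvectors quoted above
v_plus  = np.array([1.0,  np.sqrt(2),  1.0]) / 2
v_zero  = np.array([1.0,  0.0,        -1.0]) / np.sqrt(2)
v_minus = np.array([1.0, -np.sqrt(2),  1.0]) / 2

for lam, v in [(1, v_plus), (0, v_zero), (-1, v_minus)]:
    assert np.allclose(Lx @ v, lam * v)  # eigenvalue equation: L_x v = lam v
    assert np.isclose(v @ v, 1.0)        # normalisation
```

The three vectors are also mutually orthogonal, so together they do span the full three-dimensional space.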

You should be able to finish the problem from here.

================================================================================

Some words on matrices. You can think of a matrix as a representation of an operator in some specified basis, just as you can think of a column vector as a representation of an (abstract/geometric) vector in some specified basis. For instance, say we're considering the vector $ |\psi \rangle$, which is a co-ordinate free object. Then if we choose a basis $\{|i\rangle,|j\rangle,|k\rangle\}$, we can expand our vector in terms of it

$$ |\psi\rangle = 4|i\rangle + 2|j\rangle + |k\rangle$$

for instance. What we're saying is

$$|\psi\rangle \qquad \mathrm{can\ be\ represented\ by} \qquad \begin{pmatrix} 4 \\ 2 \\ 1\end{pmatrix}$$

Now the same is true of matrices and operators. For orthonormal bases, which is what we deal with in the vast majority of cases in quantum mechanics, the rule is this: say we choose a basis labelled by the index $i$ --- that is, let's call the vectors in our basis $\{|1\rangle,|2\rangle,|3\rangle,\ldots\}$ --- then the operator $\hat{L}_z$ can be represented by the matrix $L_z$, where the matrix $L_z$ is given by

$$ (L_z)_{ij} = \langle i|\hat{L}_z |j \rangle$$

It's somewhat an abuse of notation that we often drop the hats, denoting the matrix and the operator by the same symbol. It's also common to write the operator as equal to its matrix, when we should really say that the matrix merely provides a representation of the operator, in some basis.

Now suppose that the basis we chose was the basis of eigenvectors of $\hat{L}_z$. That is, suppose we chose our basis such that

$$\hat{L}_z |1\rangle = \lambda_1 |1\rangle\,,\qquad\hat{L}_z |2\rangle = \lambda_2 |2\rangle\,,\qquad \hat{L}_z |3\rangle = \lambda_3 |3\rangle\,,$$

Then what will our matrix look like? Well, using the formula above

$$(L_z)_{ij} =\langle i|\hat{L}_z |j \rangle = \langle i|\lambda_j |j \rangle = \lambda_j\langle i| j \rangle = \lambda_j \delta_{ij} \qquad \mathrm{(no\ sum)} $$

where in the last step we used the orthonormality of the basis, $\langle i|j\rangle = \delta_{ij}$. In other words, the matrix will be

$$ L_z = \begin{pmatrix} \lambda_1 &0&0 \\ 0& \lambda_2 & 0\\ 0&0&\lambda_3\end{pmatrix}$$
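This can also be checked numerically; a small sketch (numpy, $\hbar = 1$) that sandwiches $L_x$ between its own eigenvectors, element by element, following the rule $(L)_{ij} = \langle i|\hat{L}|j\rangle$:

```python
import numpy as np

# In the basis of its own eigenvectors, (L)_{ij} = <i|L|j> = lambda_j delta_ij.
# Illustration using L_x from Shankar Ex. 4.2.1 (units of hbar).
Lx = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]]) / np.sqrt(2)
eigvals, eigvecs = np.linalg.eigh(Lx)   # columns of eigvecs are the kets |j>

# M_ij = <i| L_x |j>, computed one matrix element at a time
M = np.array([[eigvecs[:, i] @ Lx @ eigvecs[:, j]
               for j in range(3)] for i in range(3)])

assert np.allclose(M, np.diag(eigvals))  # diagonal, eigenvalues on the diagonal
```

(`np.linalg.eigh` returns the eigenvalues in ascending order, $-1, 0, 1$ here, but the conclusion is order-independent.)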

It's diagonal, with the eigenvalues along the diagonal. So we have the operator $\hat{L}_z$ in this basis of eigenvectors; now let's ask what our eigenvectors are in this basis of eigenvectors. This is trivial. Our eigenvector $|1\rangle$ can be written as 'one lot of $|1\rangle$ plus zero lots of $|2\rangle$ plus zero lots of $|3\rangle$', and so we simply have

$$|1\rangle \qquad \mathrm{can\ be\ represented\ by} \qquad \begin{pmatrix} 1\\0\\0\end{pmatrix}$$

Essentially this is what it means to say that we're in the basis $\{|1\rangle,|2\rangle,|3\rangle\}$, the basis of eigenvectors of $\hat{L}_z$.

To summarise. Being in the basis of eigenvectors of a given operator means that the matrix representation of that operator will be diagonal. The fact that the matrix representing $\hat{L}_z$ is (in this case) diagonal therefore tells you that you're in the basis of eigenvectors of $\hat{L}_z$. The column vector representation of these eigenvectors in this basis is simply

$$\begin{pmatrix} 1\\0\\0\end{pmatrix} \qquad \begin{pmatrix} 0\\1\\0\end{pmatrix} \qquad \begin{pmatrix} 0\\0\\1\end{pmatrix}$$

As a final word, and I hope this does not confuse you: if you have the matrix representation of an operator in one basis, you can transform it into the matrix representation of that operator in another basis by pre- and post-multiplying by appropriate transformation matrices. It turns out that the transformation matrix you want has as its columns the column vector representations of the new basis vectors in terms of the old basis vectors. What this means is that if you want to diagonalise a matrix --- that is, to get the matrix representation of the underlying operator in terms of that operator's own eigenvectors --- then the transformation matrix you want has, as its columns, the eigenvectors of the operator written in terms of the basis you're currently in. This might be what you're thinking of.
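To make that concrete, here is a small numpy sketch ($\hbar = 1$): build the transformation matrix $S$ with the $L_x$ eigenvectors from earlier as its columns, and check that $S^\dagger L_x S$ comes out diagonal with the eigenvalues along the diagonal.

```python
import numpy as np

# L_x in the L_z eigenbasis (units of hbar)
Lx = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]]) / np.sqrt(2)

# Columns of S: the L_x eigenvectors, written in the current (L_z) basis
S = np.column_stack([
    np.array([1.0,  np.sqrt(2),  1.0]) / 2,           # eigenvalue +1
    np.array([1.0,  0.0,        -1.0]) / np.sqrt(2),  # eigenvalue  0
    np.array([1.0, -np.sqrt(2),  1.0]) / 2,           # eigenvalue -1
])

assert np.allclose(S.conj().T @ S, np.eye(3))  # S is unitary (orthonormal columns)
Lx_diag = S.conj().T @ Lx @ S                  # L_x in its own eigenbasis
assert np.allclose(Lx_diag, np.diag([1.0, 0.0, -1.0]))
```

Because the basis is orthonormal, $S$ is unitary and the inverse transformation is simply $S^\dagger$.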

Hope this helps!
