Eigenvectors and eigenvalues are properties of operators, not of Hilbert spaces. Hence, to answer the first part of your question: if $\{| n \rangle\}$ is an orthonormal basis of $\mathcal{H}$ (the Hilbert space), it is not correct to say the $| n \rangle$ are eigenvectors (or eigenvalues) of the Hilbert space. We simply say they are an orthonormal basis.
Suppose now you are given an operator $A \colon \mathcal{H} \to \mathcal{H}$ on the Hilbert space (loosely speaking, a matrix). A vector $| \Psi \rangle \in \mathcal{H}$ is said to be an eigenvector of $A$ associated with the eigenvalue $a$ when $A | \Psi \rangle = a | \Psi \rangle$. Notice that I need the operator to define what I mean by an eigenvector: eigenvectors are the vectors on which $A$ acts as if it were simply a number, and this number is called an eigenvalue.
Now if $A$ is hermitian, there are two cool results:
- the eigenvalues of $A$ are all real;
- the eigenvectors of $A$ can be chosen to form an orthonormal basis of the Hilbert space.
Hence, if you are given a hermitian operator on the Hilbert space, you can use it to obtain a basis. We usually pick the Hamiltonian, for example, because then each state in the basis has a simple time-evolution in terms of its energy (more specifically, in terms of the eigenvalue of the Hamiltonian to which it is associated). By the very definition of what a basis is, we can then write any vector in the Hilbert space in terms of eigenvectors of the Hamiltonian, i.e., in terms of states of definite energy. This provides a nice way to write down the time evolution of the state, since the Hamiltonian eigenstates have simple time-evolution rules.
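Here is a minimal numerical sketch of those two results (my own illustration, not part of the answer's argument), using NumPy's `eigh`, which is designed for hermitian input:

```python
import numpy as np

# Sketch: build a random hermitian matrix and check both results numerically.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2                  # hermitian by construction

vals, vecs = np.linalg.eigh(A)            # eigh assumes a hermitian matrix

# 1) the eigenvalues are real (eigh returns a real-valued array)
print(vals.dtype)                                      # float64
# 2) the eigenvectors (columns of vecs) form an orthonormal basis
print(np.allclose(vecs.conj().T @ vecs, np.eye(4)))    # True
# and each column really satisfies A|n> = a_n |n>
print(np.allclose(A @ vecs, vecs * vals))              # True
```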
We could pick other operators if we so desired: instead of the Hamiltonian, some other hermitian operator would also yield a basis, but it would probably be less convenient to work with.
Finally, it is worth mentioning that sometimes a few linearly independent states might be associated with the same eigenvalue. We call this degeneracy. In this case, we often need a few more operators (such as angular momentum squared and angular momentum along the $z$ direction) to label all the states in the basis in a unique way. This is what happens when we solve the hydrogen atom and need three labels to specify uniquely which state in the basis is which, because several of them have the same energy (so we distinguish them by their angular momentum properties).
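To make the degeneracy point concrete, here is a toy sketch (matrices of my own choosing, not the hydrogen problem itself): an operator $A$ with a degenerate eigenvalue, plus a second operator $B$ that commutes with $A$ and supplies the extra label.

```python
import numpy as np

# Toy example: A has the doubly degenerate eigenvalue 1, so "eigenvalue of A"
# alone cannot label a basis. B commutes with A, so they share an eigenbasis,
# and the pair (a, b) of eigenvalues labels each basis state uniquely.
A = np.diag([1.0, 1.0, 2.0])
B = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 5.0]])
print(np.allclose(A @ B, B @ A))            # True: [A, B] = 0

b_vals, V = np.linalg.eigh(B)               # eigenbasis of B...
D = V.T @ A @ V                             # ...also diagonalises A
print(np.allclose(D, np.diag(np.diag(D))))  # True
print(np.diag(D), b_vals)                   # labels (a, b): (1,-1), (1,+1), (2,5)
```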
Observables aren’t a basis of the Hilbert space: their eigenvectors form a basis. The strategy is to find the largest possible set of commuting operators, so that their common eigenvectors are uniquely labelled by the eigenvalues of the operators in the set. In your example, the Hilbert space is 2-dimensional and the eigenvalues of $\hat S_z$ are $\pm \frac{1}{2}$ (in units of $\hbar$), which is enough to uniquely label the basis of your Hilbert space; you don’t need anything else.
In your example, you could choose your basis vectors $(1,0)^\top$ and $(0,1)^\top$ to be the eigenvectors of $\hat S_x$, and this would be just as fine: the matrix representations of $\hat S_z$ and $\hat S_y$ would then be non-diagonal.
The dimension of the Hilbert space is tied to the number of distinct mutually exclusive outcomes: experiment shows there are only two possible distinct outcomes when measuring the spin of a spin-1/2 particle, and since these outcomes do not depend on the direction, the eigenstates of any operator of the form
$$
n_x\hat S_x+n_y\hat S_y+n_z\hat S_z\,, \qquad n_x^2+n_y^2+n_z^2=1
$$
could serve as a basis for the 2-dimensional Hilbert space. It is conventional to choose a basis where $\hat S_z$ is diagonal, but that’s just convention.
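A quick numerical check of this, in units where $\hbar = 1$ (a sketch of mine using the standard spin-1/2 matrices):

```python
import numpy as np

# Spin-1/2 operators, hbar = 1
Sx = np.array([[0, 1], [1, 0]]) / 2
Sy = np.array([[0, -1j], [1j, 0]]) / 2
Sz = np.array([[1, 0], [0, -1]]) / 2

# Any unit direction n gives the same two outcomes, +-1/2
n = np.array([0.6, 0.0, 0.8])             # n_x^2 + n_y^2 + n_z^2 = 1
print(np.linalg.eigvalsh(n[0]*Sx + n[1]*Sy + n[2]*Sz))   # [-0.5  0.5]

# Choosing the eigenvectors of Sx as the basis is just as legitimate;
# Sz is then no longer diagonal.
_, V = np.linalg.eigh(Sx)                 # columns: eigenvectors of Sx
print(np.round(V.conj().T @ Sx @ V, 12))  # diagonal
print(np.round(V.conj().T @ Sz @ V, 12))  # off-diagonal entries appear
```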
Best Answer
I believe you've just made an algebraic error. To find the normalised eigenstates of $L_x$ in the (eigenstates of) $L_z$ basis, note that the matrices given to you in exercise 4.2.1 (in units where $\hbar = 1$) are already (a representation of the operators $L_i$) in the (eigenstates of) $L_z$ basis. This can be appreciated by noting that $L_z$ is diagonal: acting with it on the vectors $(1,0,0)^\top$, $(0,1,0)^\top$ or $(0,0,1)^\top$ returns the same vector, scaled by an eigenvalue.
$$ L_x = \frac{1}{\sqrt{2}}\begin{pmatrix} 0&1&0 \\1&0&1 \\0&1&0\end{pmatrix} \qquad L_y = \frac{1}{\sqrt{2}}\begin{pmatrix} 0&-i&0 \\i&0&-i \\0&i&0\end{pmatrix}\qquad L_z = \begin{pmatrix} 1&0&0 \\0&0&0 \\0&0&-1\end{pmatrix}$$
As such, the problem of finding the normalised eigenstates of $L_x$ in the $L_z$ basis reduces merely to finding the normalised eigenvectors of the matrix $L_x$ as given. These are
$$ \frac{1}{2} \begin{pmatrix} 1 \\ \sqrt{2} \\1\end{pmatrix} \qquad\frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 0\\-1\end{pmatrix} \qquad \frac{1}{2} \begin{pmatrix} 1 \\ -\sqrt{2} \\1\end{pmatrix}$$
You should be able to finish the problem from here.
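If you want to sanity-check those eigenvectors numerically, here is a quick sketch (with $\hbar = 1$):

```python
import numpy as np

# Lx in the Lz basis, as given in the exercise (hbar = 1)
Lx = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]]) / np.sqrt(2)

vals, vecs = np.linalg.eigh(Lx)
print(vals)                  # approximately [-1, 0, 1]
print(np.round(vecs, 6))     # columns match the three vectors above,
                             # up to ordering and overall sign
```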
================================================================================
Some words on matrices. You can think of a matrix as a representation of an operator in some specified basis, just as you can think of a column vector as a representation of an (abstract/geometric) vector in some specified basis. For instance, say we're considering the vector $ |\psi \rangle$, which is a co-ordinate free object. Then if we choose a basis $\{|i\rangle,|j\rangle,|k\rangle\}$, we can expand our vector in terms of it
$$ |\psi\rangle = 4|i\rangle + 2|j\rangle + |k\rangle$$
for instance. What we're saying is
$$|\psi\rangle \qquad \mathrm{can\ be\ represented\ by} \qquad \begin{pmatrix} 4 \\ 2 \\ 1\end{pmatrix}$$
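In code, "can be represented by" just means collecting the expansion coefficients into a column; for an orthonormal basis, each coefficient is an inner product (a small sketch of mine):

```python
import numpy as np

# Standard orthonormal basis vectors |i>, |j>, |k> as arrays
i, j, k = np.eye(3)

psi = 4*i + 2*j + 1*k                          # |psi> = 4|i> + 2|j> + |k>

# The components are recovered as the inner products <i|psi>, <j|psi>, <k|psi>
print(np.array([i @ psi, j @ psi, k @ psi]))   # [4. 2. 1.]
```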
Now the same is true of matrices and operators. For orthonormal bases, which is what we deal with in the vast majority of cases in quantum mechanics, the rule is this: say we choose a basis labelled by the index $i$ --- that is, let's call the vectors in our basis $\{|1\rangle,|2\rangle,|3\rangle,\ldots\}$ --- then the operator $\hat{L}_z$ can be represented by the matrix $L_z$, where the matrix $L_z$ is given by
$$ (L_z)_{ij} = \langle i|\hat{L}_z |j \rangle$$
It's somewhat an abuse of notation that we often drop the hats, denoting the matrix and the operator by the same symbol. It's also common to write the operator as equal to its matrix, when we should really say that the matrix merely provides a representation of the operator, in some basis.
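In the same spirit, the rule $(L_z)_{ij} = \langle i|\hat{L}_z |j \rangle$ is easy to play with numerically. A sketch (the basis below is an arbitrary orthonormal basis I generate purely for illustration):

```python
import numpy as np

# The operator in some reference basis (here Lz is diagonal, hbar = 1)
Lz = np.diag([1.0, 0.0, -1.0])

# An arbitrary orthonormal basis: the columns of any unitary matrix
B, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((3, 3)))

# All matrix elements (Lz)_ij = <i|Lz|j> at once, as B† Lz B
Lz_rep = B.conj().T @ Lz @ B
# Spot-check one entry against the defining formula
print(np.allclose(Lz_rep[0, 2], B[:, 0].conj() @ Lz @ B[:, 2]))   # True
```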
Now suppose that the basis we chose was the basis of eigenvectors of $\hat{L}_z$. That is, suppose we chose our basis such that
$$\hat{L}_z |1\rangle = \lambda_1 |1\rangle\,,\qquad\hat{L}_z |2\rangle = \lambda_2 |2\rangle\,,\qquad \hat{L}_z |3\rangle = \lambda_3 |3\rangle$$
Then what will our matrix look like? Well, using the formula above
$$(L_z)_{ij} =\langle i|\hat{L}_z |j \rangle = \langle i|\lambda_j |j \rangle = \lambda_j\langle i| j \rangle = \lambda_j \delta_{ij} \qquad \mathrm{(no\ sum)} $$
where we have used the orthonormality of the basis in the last step. In other words, the matrix will be
$$ L_z = \begin{pmatrix} \lambda_1 &0&0 \\ 0& \lambda_2 & 0\\ 0&0&\lambda_3\end{pmatrix}$$
It's diagonal, with the eigenvalues along the diagonal. So that's the operator $\hat{L}_z$ in its basis of eigenvectors; now let's ask what the eigenvectors themselves look like in this basis. This is trivial. Our eigenvector $|1\rangle$ can be written as 'one lot of $|1\rangle$ plus zero lots of $|2\rangle$ plus zero lots of $|3\rangle$', and so we simply have
$$|1\rangle \qquad \mathrm{can\ be\ represented\ by} \qquad \begin{pmatrix} 1\\0\\0\end{pmatrix}$$
Essentially this is what it means to say that we're in the basis $\{|1\rangle,|2\rangle,|3\rangle\}$, the basis of eigenvectors of $\hat{L}_z$.
To summarise. Being in the basis of eigenvectors of a given operator means that the matrix representation of that operator will be diagonal. The fact that the matrix representing $\hat{L}_z$ is (in this case) diagonal therefore tells you that you're in the basis of eigenvectors of $\hat{L}_z$. The column vector representation of these eigenvectors in this basis is simply
$$\begin{pmatrix} 1\\0\\0\end{pmatrix} \qquad \begin{pmatrix} 0\\1\\0\end{pmatrix} \qquad \begin{pmatrix} 0\\0\\1\end{pmatrix}$$
As a final word, and I hope this does not confuse you: if you have a matrix representation of an operator in one basis, you can transform it into the matrix representation of that operator in another basis by pre- and post-multiplying by appropriate transformation matrices. It turns out that the transformation matrix you want has as its columns the column vector representations of the new basis vectors in terms of the old basis vectors. What this means is that if you want to diagonalise a matrix --- which means you want to get the matrix representation of the underlying operator in terms of the eigenvectors of that operator --- then the transformation matrix you want has, as its columns, the eigenvectors of the operator written in terms of the current basis you're in. This might be what you're thinking of.
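For completeness, here is that recipe carried out numerically (a sketch with $\hbar = 1$; note that `eigh` hands you exactly the transformation matrix described above, with the eigenvectors as its columns):

```python
import numpy as np

# Lx in the original (Lz) basis
Lx = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]]) / np.sqrt(2)

vals, V = np.linalg.eigh(Lx)              # columns of V: eigenvectors of Lx,
                                          # written in the current (Lz) basis
# Pre- and post-multiplying moves Lx into its own eigenbasis:
print(np.round(V.conj().T @ Lx @ V, 12))  # diag(-1, 0, 1)
```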
Hope this helps!