Spectrum of a matrix operator on $L^2$ product space

functional-analysis, linear-algebra, matrices, spectral-theory

I am interested in the spectrum of a simple operator, effectively given by a matrix $A$, acting on a space $U$ that is the $n$-th power of the same base space $V$, $U = \underbrace{V \times \dots \times V}_{n\ \text{times}}$. Let's take $U = V \times V$, $V = L^2$ and
\begin{align}
&A = \begin{bmatrix} a &b \\ b &d \end{bmatrix} \quad \quad a,b,d \in \mathbb R, \\
&A \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} = \begin{pmatrix} a f_1 + b f_2 \\ b f_1 + d f_2 \end{pmatrix} \quad \quad \forall \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} \in U.
\end{align}

I would like to show that the spectrum of $A: U \to U$ is the same as the spectrum of $A: \mathbb{R}^2 \to \mathbb{R}^2$.

My idea is to go from the definition and check the conditions under which $A - \lambda I$ is not onto and not one-to-one. Checking the one-to-one property seems easy, since it means solving the linear system
$$
(A - \lambda I) v = 0, \quad v \in U,
$$

which can be done e.g. by Gaussian elimination and gives the same condition on $\lambda$ (being a root of the characteristic polynomial) as in the linear-algebra case.
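
For concreteness, writing $v = (f_1, f_2)$, the system reads $(a-\lambda) f_1 + b f_2 = 0$ and $b f_1 + (d-\lambda) f_2 = 0$; multiplying the first equation by $d-\lambda$, the second by $b$, and subtracting gives
$$
\bigl[(a-\lambda)(d-\lambda) - b^2\bigr] f_1 = 0 \quad \text{a.e.},
$$
so (at least for $b \neq 0$; for $b = 0$ the system decouples into two scalar equations) a nontrivial solution exists exactly when $\det(A - \lambda I) = 0$.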
I have trouble showing the condition for $A-\lambda I$ not being onto. Being onto means that the system
$$
(A-\lambda I) x = g
$$

has a solution for every $g \in U$.
The standard argument from linear algebra, that a matrix is onto when its columns are linearly independent, does not translate well here. If we denote the columns of $A-\lambda I$ by $c_1, c_2 \in \mathbb{R}^2$ and write $x = (x_1, x_2)$, then the system can be rewritten as
\begin{align}
x_1 c_1 + x_2 c_2 = g.
\end{align}

The trouble with this is that $x_1, x_2$, which play the role of the scalar coefficients in the linear-algebra case, are now elements of $V$ rather than $\mathbb R$, so this approach seems to lead nowhere. I think that linear independence of $c_1, c_2$ is still necessary and sufficient, but I am not able to find the right argument or framework that would make this problem trivial. (The space $U$ looks a bit like $\mathbb{R}^2 \otimes V$, but I did not find any helpful reference for that; a quick numerical sanity check of this picture is included below.) I would be grateful for any direction or a suitable book/paper to follow.
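
A minimal numerical sanity check of the $\mathbb{R}^2 \otimes V$ picture (only an illustration, not a proof; it assumes $V$ is replaced by a finite discretization $\mathbb{R}^m$ and uses arbitrary example values for $a, b, d$): the operator then becomes the Kronecker product $A \otimes I_m$, whose eigenvalues coincide with those of the $2 \times 2$ matrix.

```python
import numpy as np

# Minimal sketch: discretize V as R^m, so U = V x V becomes R^(2m).
# The operator defined above then acts as the Kronecker product kron(A, I_m),
# which is the concrete form of the R^2 (x) V picture.
a, b, d = 1.0, 2.0, -0.5            # arbitrary example entries
A = np.array([[a, b], [b, d]])

m = 50                              # number of grid points replacing V
A_on_U = np.kron(A, np.eye(m))      # action on the discretized product space

eig_A = np.sort(np.linalg.eigvalsh(A))
eig_U = np.unique(np.round(np.linalg.eigvalsh(A_on_U), decimals=10))

print(eig_A)   # the two eigenvalues of the 2x2 matrix
print(eig_U)   # the same two values (each appearing with multiplicity m)
```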

Best Answer

$A-\lambda I\in\Bbb R^{2\times 2}$ is either invertible, in which case $(A-\lambda I)^{-1}$ acts as the (bounded) inverse of $(A-\lambda I):V\times V\to V\times V$,
or its kernel is nontrivial, in which case $\lambda$ is an eigenvalue of $A$ and, as you correctly deduced, the action of $A-\lambda I$ on $V\times V$ is not injective either.
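
To make the first case explicit (a routine computation, spelled out since surjectivity was the sticking point): when $\det(A-\lambda I) = (a-\lambda)(d-\lambda)-b^2 \neq 0$, the solution of $(A-\lambda I)x = g$ is given entrywise by
$$
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= \frac{1}{(a-\lambda)(d-\lambda)-b^2}
\begin{pmatrix} (d-\lambda)\, g_1 - b\, g_2 \\ -b\, g_1 + (a-\lambda)\, g_2 \end{pmatrix},
$$
and each entry is a linear combination of $g_1, g_2 \in L^2$, hence again in $L^2$, so $A-\lambda I$ is onto. Boundedness of the inverse follows since $\|x\| \le C\,(\|g_1\| + \|g_2\|)$ with $C$ depending only on the entries of $(A-\lambda I)^{-1}$.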