Showing that $e_1 \otimes e_2 + e_2 \otimes e_1$ is not simple

linear-algebra, tensors

I'm trying to solve the following:
Let $e_1, e_2 \in V$ be linearly independent, where $V$ is a finite-dimensional vector space. I want to show that $e_1 \otimes e_2 + e_2 \otimes e_1 \in V \otimes V$ is not simple.
By contradiction, I assume that there exist $p,q \in V$ such that

$$p \otimes q = e_1 \otimes e_2 +e_2 \otimes e _1 \tag{1}$$

Now if I extend $e_1, e_2$ to a basis $\{ e_1, e_2, e_3, …, e_n\}$ of $V$, I can express $p$ as

$$p=\lambda_1 e_1 + \lambda_2 e_2 + \sum_{i=3}^{n} \lambda_i e_i \tag{2}$$

and

$$q=\tilde{\lambda_1} e_1 + \tilde{\lambda_2} e_2 + \sum_{i=3}^{n} \tilde{\lambda_i} e_i \tag{3}$$

With a naive approach, I thought about plugging (2) and (3) into (1), expanding, and comparing coefficients, which leads to

$e_1 \otimes e_2 +e_2 \otimes e_1 \stackrel{!}{=} p \otimes q = \lambda_1 \tilde{\lambda}_1 \otimes (e_1,e_1) + \lambda_1 \tilde{\lambda}_2 \otimes (e_1,e_2) + \lambda_1 \sum_{i=3}^n \tilde{\lambda}_i \otimes (e_1,e_i)+\lambda_2\tilde{\lambda}_1 \otimes (e_2,e_1) + \lambda_2 \tilde{\lambda}_2 \otimes(e_2,e_2)+ …$

Comparing coefficients, the following equations must hold:

$\lambda_1 \tilde{\lambda}_2=1$

$\lambda_2 \tilde{\lambda}_1=1$

$\lambda_1 \tilde{\lambda_1}=0$

$\lambda_2 \tilde{\lambda_2}=0$

The first two equations force $\lambda_1, \lambda_2, \tilde{\lambda}_1, \tilde{\lambda}_2$ to all be nonzero, which contradicts the last two, i.e. $e_1 \otimes e_2 + e_2 \otimes e_1 \in V \otimes V$ is not simple. Is this approach correct? I'm very new to the topic of tensors, so I'm just wondering if it's really that easy.
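Not a proof, but as a quick sanity check that these four equations really are inconsistent, here is a short sympy sketch (the variable names `l1, l2, m1, m2`, standing for $\lambda_1, \lambda_2, \tilde{\lambda}_1, \tilde{\lambda}_2$, are my own):

```python
from sympy import symbols, solve

# l1, l2 stand for lambda_1, lambda_2; m1, m2 for the tilde-lambdas
l1, l2, m1, m2 = symbols('l1 l2 m1 m2')

# Coefficients of e1⊗e2, e2⊗e1, e1⊗e1, e2⊗e2 in p⊗q
equations = [l1*m2 - 1, l2*m1 - 1, l1*m1, l2*m2]

# An empty solution list confirms the system is inconsistent
print(solve(equations, [l1, l2, m1, m2]))  # -> []
```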

Best Answer

There is a slight problem with notation, in that you are writing $\lambda_1\tilde{\lambda_2}\otimes (e_1,e_2)$ when you should be writing $\lambda_1\tilde{\lambda_2}(e_1\otimes e_2)$, etc. However, this is easy to fix without fundamentally changing the proof.

Here is a higher-level approach that relies on properties of tensor products and connects with ideas you will see later.

Suppose that $e_1\otimes e_2 + e_2\otimes e_1=v\otimes w$ for some vectors $v,w\in V$. By the universal property, every bilinear map $\psi$ on $V\times V$ induces a linear map on $V\otimes V$, so we must have $\psi(v,w)=\psi(e_1,e_2)+\psi(e_2,e_1)$. (Taking $\psi$ to be the coordinate functionals $(v,w)\mapsto v_i w_j$ recovers exactly the four equations above.) So all we must do is find the right bilinear maps to get a contradictory set of conditions on $v$ and $w$.

Fix some projection map $P:V\to \mathbb F^2$ with $P(e_1)=e_1$, $P(e_2)=e_2$ (identifying the span of $e_1, e_2$ with $\mathbb F^2$). Then $P\otimes P:V\otimes V\to \mathbb F^2\otimes \mathbb F^2$, and applying it to both sides gives $P(v)\otimes P(w)=e_1\otimes e_2 + e_2\otimes e_1$. Therefore, we can reduce to the case $V=\mathbb F^2$.
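To make the reduction concrete: identifying a tensor $\sum_{i,j} a_{ij}\, e_i\otimes e_j$ with its coefficient matrix $A=(a_{ij})$, the map $P\otimes P$ acts as $A\mapsto PAP^T$. A small sympy sketch for $\dim V = 3$ (this dimension and the particular $P$ are illustrative choices of mine):

```python
from sympy import Matrix

# Coefficient matrix of e1⊗e2 + e2⊗e1 in V = F^3
A = Matrix([[0, 1, 0],
            [1, 0, 0],
            [0, 0, 0]])

# One possible projection P: F^3 -> F^2 with P(e1) = e1, P(e2) = e2
P = Matrix([[1, 0, 0],
            [0, 1, 0]])

# P⊗P acts on coefficient matrices as A -> P A P^T
print(P * A * P.T)  # Matrix([[0, 1], [1, 0]]): e1⊗e2 + e2⊗e1 inside F^2⊗F^2
```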

We could use bilinear maps to pick off the coefficients and obtain a contradiction that way, mimicking the original proof. However, for variety's sake, here is another, more geometric approach.

Consider the bilinear map $(v,w)\mapsto \det(v|w)$, where $(v|w)$ is the matrix with columns $v, w$. Since $\det(e_1|e_2)=-\det(e_2|e_1)$, we must have $\det(v|w)=0$, and so $v, w$ are linearly dependent; as $v\otimes w\neq 0$ forces both vectors to be nonzero, $w=\alpha v$ for some $\alpha \in \mathbb F^{\times}$. In higher-level language, what we have shown is a shadow of the fact that since $e_1\wedge e_2+e_2\wedge e_1=0$, we have $v\wedge w=0$, and so $v$ and $w$ are linearly dependent. This more general fact doesn't require the projection to two dimensions.
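A sketch of this determinant computation in sympy (nothing here beyond the two evaluations in the paragraph above):

```python
from sympy import Matrix

def psi(v, w):
    # The bilinear map (v, w) -> det(v|w), with v and w as columns
    return Matrix.hstack(v, w).det()

e1, e2 = Matrix([1, 0]), Matrix([0, 1])

# psi(e1, e2) + psi(e2, e1) = 1 + (-1) = 0, so a simple tensor v⊗w equal to
# e1⊗e2 + e2⊗e1 must have det(v|w) = 0, i.e. v and w linearly dependent.
print(psi(e1, e2) + psi(e2, e1))  # -> 0
```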

Define the map $S_c(e_1)=c e_2$, $S_c(e_2)=e_1$ for $c\in \mathbb F$. Taking $\phi(v,w)=\det(v|S_c(w))$ and noting that $\det(e_1|S_c(e_2))+\det(e_2|S_c(e_1))=\det(e_1|e_1)+\det(e_2|c e_2)=0$, the exact same argument as above shows that $v$ and $S_c(w)$ are linearly dependent. This means that $w$ and $S_c(w)$ are also dependent (since $v\neq 0$ and $w=\alpha v$), so $w$ is an eigenvector of $S_c$ for every $c$. But no single vector can be an eigenvector of every $S_c$.
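To see concretely why no single vector is an eigenvector of every $S_c$, it is enough to compare $c=0$ and $c=1$; in the basis $e_1, e_2$ the matrix of $S_c$ is $\begin{pmatrix}0&1\\c&0\end{pmatrix}$. A sympy sketch:

```python
from sympy import Matrix

def S(c):
    # S_c(e1) = c*e2 and S_c(e2) = e1, written in the basis e1, e2
    return Matrix([[0, 1],
                   [c, 0]])

# S_0 has only multiples of e1 as eigenvectors, while the eigenvectors of
# S_1 are the multiples of e1 + e2 and of e1 - e2: no common eigenvector.
for c in (0, 1):
    print(c, S(c).eigenvects())
```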