[Math] Linear maps using Tensor Product

linear algebra

While I was reading some posts (Definition of a tensor for a manifold, and Tensors as matrices vs. Tensors as multi-linear maps), I encountered the following explanation:

"To give a linear map $V \rightarrow V$ is the same as to give a linear map $V^* \otimes V\rightarrow \mathbb{R}$, assuming we're looking at real vector spaces."

Could anybody kindly explain the above sentence in detail with an example? I am not a math major, but I am very much interested in tensor analysis. Thank you in advance.

Best Answer

The other answers have already treated this abstractly, so I will just make sure you understand what all this means in the basic case $V =\mathbb{R}^n$ (and in finite-dimensional linear algebra, that is really all there is anyway!).

Let $L: \mathbb{R}^n \to \mathbb{R}^n$ be a linear map, and let $M$ be the matrix of $L$ with respect to the standard basis. This matrix can act on a column vector by multiplication on the left, $v \mapsto Mv = L(v)$, or it can act on a row vector by multiplication on the right, $w \mapsto wM$; the latter action is called the adjoint of $L$. (We can convert a row vector into a column vector, or vice versa, by transposing.)

Row vectors represent linear maps $\mathbb{R}^n \to \mathbb{R}$, and so really represent elements of the dual space $V^*$. So the adjoint map really is $L^*: V^* \to V^*$.
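To make this concrete numerically, here is a minimal NumPy sketch (the matrix and vectors are arbitrary examples chosen for illustration, not anything from the question):

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])    # matrix of L in the standard basis

v = np.array([[5.0],
              [6.0]])         # a column vector in V

w = np.array([[7.0, 8.0]])    # a row vector, i.e. an element of V*

print(M @ v)    # L acting on a column vector: Mv
print(w @ M)    # the adjoint L* acting on a row vector: wM
print(v.T)      # transposing turns a column vector into a row vector
```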

We get a bilinear map $V^* \times V \to \mathbb{R}$ by the rule $(w,v) \mapsto w(L(v)) = wMv$. In other words, the bilinear map associated to $L$ is given by just taking a row vector and a column vector, and sandwiching the matrix of $L$ in between them.
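In NumPy, this sandwiching is one line (again with an arbitrary example matrix; `.item()` just unwraps the resulting $1 \times 1$ array into a plain number):

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])    # matrix of L

v = np.array([[5.0],
              [6.0]])         # column vector in V
w = np.array([[7.0, 8.0]])    # row vector, i.e. an element of V*

# B(w, v) = w M v: sandwich the matrix of L between a row and a column.
def B(w, v):
    return (w @ M @ v).item()

print(B(w, v))                # a single real number
print((w @ (M @ v)).item())   # the same number, read as w applied to L(v)
```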

To be perfectly explicit about this, if $L:\mathbb{R}^2 \to \mathbb{R}^2$ has the matrix $ \begin{bmatrix} a_{11} &a_{12}\\a_{21}&a_{22} \end{bmatrix} $ then $$L\left(\begin{bmatrix} x_1\\ x_2\end{bmatrix}\right) = \begin{bmatrix} a_{11} &a_{12}\\a_{21}&a_{22} \end{bmatrix}\begin{bmatrix} x_1\\ x_2\end{bmatrix}$$

and

$$ L^*\left(\begin{bmatrix} y_1& y_2\end{bmatrix}\right) = \begin{bmatrix} y_1& y_2\end{bmatrix} \begin{bmatrix} a_{11} &a_{12}\\a_{21}&a_{22} \end{bmatrix} $$

The bilinear map $B$ is given by

$$ B\left(\begin{bmatrix} y_1& y_2\end{bmatrix},\begin{bmatrix} x_1\\ x_2\end{bmatrix}\right) = \begin{bmatrix} y_1& y_2\end{bmatrix} \begin{bmatrix} a_{11} &a_{12}\\a_{21}&a_{22} \end{bmatrix}\begin{bmatrix} x_1\\ x_2\end{bmatrix} $$

Observe that I can recover the linear map (i.e. reconstruct the matrix) just from knowing the action of the bilinear map, since $a_{ij} = B(e_i^\top,e_j)$.

This observation motivates the following inverse construction:

Given a bilinear map $B : V^* \times V \to \mathbb{R}$, define a matrix $M$ by $a_{ij} = B(e_i^\top,e_j)$. Since the $e_j$ and $e_i^\top$ span their respective spaces, these values determine the action of $B$, and moreover they produce a linear map $L: V \to V$ whose matrix represents the bilinear form.
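Here is a NumPy sketch of this round trip, starting from an arbitrary example matrix, passing to the bilinear map, and recovering the matrix entry by entry:

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])    # matrix of some linear map L

def B(w, v):
    # The associated bilinear map: B(w, v) = w M v.
    return (w @ M @ v).item()

e = np.eye(2)   # rows/columns of the identity give the standard basis

# Reconstruct entry by entry: a_ij = B(e_i^T, e_j).
M_rec = np.array([[B(e[i:i+1, :], e[:, j:j+1]) for j in range(2)]
                  for i in range(2)])

print(np.allclose(M, M_rec))   # True: the bilinear map determines L
```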

Note that my answer implicitly makes use of the standard inner product on $\mathbb{R}^n$: the inner product allows me to construct the isomorphism $V \to V^*$ given by $v \mapsto \langle v, \cdot \rangle$, which is the "row vector" associated with $v$.
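Concretely, the row vector associated with $v$ under this identification is just $v^\top$, as the following small NumPy check illustrates (with arbitrary example vectors):

```python
import numpy as np

v = np.array([[1.0],
              [2.0]])    # a column vector in V
x = np.array([[3.0],
              [4.0]])    # another vector in V

row_v = v.T              # the functional <v, .> in V*, written as a row vector

print((row_v @ x).item())             # applying the functional to x
print(np.dot(v.ravel(), x.ravel()))   # the inner product <v, x>: same number
```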

Hopefully this makes things seem a bit less abstract!

You should also note that a similar story does NOT play out for higher-order multilinear maps.