Riesz Representation Theorem geometric intuition

inner-products, linear-algebra, linear-transformations, riesz-representation-theorem

We just learned in our linear algebra class about the Riesz Representation Theorem, which states that if $V$ is a finite-dimensional inner product space and $f$ is a linear functional on $V$, then there is a unique vector $u$ in $V$ such that
$f(v) = \langle v, u \rangle$
for every $v$ in $V.$
Can someone please give some geometric intuition, over the complex field, for why this theorem is true?
Also, what is the connection between the theorem and the conjugate that appears in the inner product over the complex field?
Thank you.

Best Answer

We can look at the case $V = \mathbb{R}^n$. Let $f$ be a linear functional $f: \mathbb{R}^n \to \mathbb{R}$. Let $e_1, …, e_n$ denote the standard basis vectors.

Then for each vector $v = (v_1, …, v_n)$, we have $f(v) = f(v_1e_1 + … + v_ne_n) = v_1f(e_1) + … + v_nf(e_n) = \langle v, u \rangle$, where $u := (f(e_1), …, f(e_n))$. So, every linear functional is given as an inner product with a vector: just choose the vector whose coordinates are $f$ applied to the standard basis vectors $e_i$.
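For concreteness, here is a small worked example in $\mathbb{R}^3$ (the particular functional is chosen just for illustration): take $f(x, y, z) = 2x - y + 3z$. Then

$$u = (f(e_1), f(e_2), f(e_3)) = (2, -1, 3), \qquad f(v) = 2v_1 - v_2 + 3v_3 = \langle v, (2, -1, 3) \rangle.$$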

Since $f$ is a linear transformation, we can ask what its kernel and image are. If $f(e_i) = 0$ for all $i$, then $f$ is just the zero transformation, so it’s not so interesting. Otherwise $f(e_i) \neq 0$ for some $i$, so the image of $f$ is all of $\mathbb{R}$, because $\mathbb{R}$ is spanned by any nonzero vector. By the rank-nullity theorem, the kernel of $f$ has dimension $n - 1$. In other words, $f$ collapses a hyperplane (i.e. a subspace of dimension $n - 1$) to the point $0$: the kernel is a hyperplane.
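In the running example this is easy to see directly: the kernel of $f(x, y, z) = 2x - y + 3z$ is

$$\ker f = \{(x, y, z) \in \mathbb{R}^3 : 2x - y + 3z = 0\},$$

a plane through the origin, so $\dim \ker f = 2 = 3 - 1$, matching rank-nullity.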

Now notice that the kernel is the set of all vectors $v$ such that $f(v) = \langle v, u \rangle = 0$. In other words, it is the set of all vectors that are orthogonal to the vector $u$. This has a geometric interpretation. In $\mathbb{R}^3$, for example, the kernel would be the plane normal to the vector $u$.
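Concretely, in the example above,

$$\ker f = \{\, v \in \mathbb{R}^3 : \langle v, (2, -1, 3) \rangle = 0 \,\} = \{(2, -1, 3)\}^{\perp},$$

i.e. the plane $2x - y + 3z = 0$ is exactly the plane through the origin whose normal vector is $u = (2, -1, 3)$.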

Now you might say, “For any given plane, there are many vectors that are normal to it. Yet the theorem says there is a unique vector $u$. In other words, you’ve shown existence, but you haven’t shown uniqueness.”

Here is some intuition for this in $\mathbb{R}^3$. Imagine picking a plane in $\mathbb{R}^3$ and then asking for one of its normal vectors. Say, the plane is the $xy$-plane, and a normal vector is $(0,0,1)$. Now define $f: \mathbb{R}^3 \to \mathbb{R}$ such that $f(e_1) = 0, f(e_2) = 0$ and $f(e_3) = 1$. This uniquely defines $f$, because we’ve specified what $f$ should do to a basis. Clearly $f(v) = \langle v, (f(e_1), f(e_2), f(e_3)) \rangle = \langle v, (0,0,1) \rangle = 0$ for all $v$ in the plane, because that’s what it means for the vector $(0,0,1)$ to be normal to the plane. However, you can imagine that we might have chosen a different normal vector to the plane. Say, suppose we chose $(0,0,5)$ instead. Then you can see that this in turn uniquely defines a different map $f’$. It is the map $f’$ that sends $e_1$ to $0$, $e_2$ to $0$, and $e_3$ to $5$. And so on: Any particular scaling of a normal vector will give you a unique linear map.
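To put the scaling picture into a formula: for a nonzero scalar $c$, the normal vector $c\,(0, 0, 1)$ determines the functional

$$v \mapsto \langle v, c\,(0, 0, 1) \rangle = c\, v_3,$$

and different values of $c$ give genuinely different functionals, even though they all have the $xy$-plane as their kernel. The unique $u$ of the theorem is pinned down not by the kernel alone but by the values $f$ takes off the kernel, e.g. by $f(e_3)$.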

In general, we have uniqueness, because: if $f(v) = \langle v, u_1 \rangle = \langle v, u_2 \rangle$ for all $v$, then $\langle v, u_1 - u_2 \rangle = \langle v, u_1 \rangle - \langle v, u_2 \rangle = 0$ for all $v$. In particular, for $v = u_1 - u_2$, we get $\langle u_1 - u_2, u_1 - u_2 \rangle = 0$. The only vector whose inner product with itself is $0$ is the zero vector, so $u_1 - u_2 = 0$. Hence $u_1 = u_2$, which shows uniqueness.
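Finally, regarding the conjugate asked about in the question: over $\mathbb{C}$, with the convention that the inner product is linear in the first slot and conjugate-linear in the second, the same construction works once a conjugate is inserted, namely

$$u := \big(\overline{f(e_1)}, \dots, \overline{f(e_n)}\big), \qquad \langle v, u \rangle = \sum_{i=1}^{n} v_i\, \overline{u_i} = \sum_{i=1}^{n} v_i\, f(e_i) = f(v).$$

The conjugate in the second slot is exactly what compensates for the conjugation in the definition of $u$, so that $v \mapsto \langle v, u \rangle$ remains linear (rather than conjugate-linear) in $v$.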
