[Math] The intuition of the dual space

functional-analysis

The dual space of $X$ is defined to be the space of all continuous linear functionals that map $X$ to $\mathbb{R}$. But what exactly is a dual space, intuitively?

In my current self-guided understanding, I think of a space of functions as a set of points (or a region) in an infinite-dimensional space $\mathbb{R}^\infty$. If $f$ is an element of a space of functions $X$, can I think of each value $f(x)$ as the magnitude of $f$ in the dimension indexed by $x$?
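As a sanity check on this picture: if the domain were finite, say $\{1, 2, \dots, N\}$, then a function really would be the vector of its values, $$ \big(f(1), f(2), \dots, f(N)\big) \in \mathbb{R}^N, $$ so I imagine the infinite-dimensional case as replacing the index $i$ with the point $x$.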

If my assumption above is correct, then what does it mean to have a space that consists of functionals? Functionals take a function as input and spit out a scalar, right? There are many functionals that involve differentiation and are not continuous. Such functionals in no sense correspond to any functions, right?

Since all linear functionals that are bounded are also continuous, can I say that the only class of linear and continuous functionals is given by integration against some bounded function $g(x)$? Namely, $f \mapsto \int f(x)g(x)\,dx$?
And so, all the functions $g(x)$ that make this integral mapping continuous are the elements of the dual space? This is the best explanation I can come up with so far.
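For a concrete instance of the kind of mapping I have in mind: on $[0,1]$, take $g(x) = x$; then the functional $$ f \mapsto \int_0^1 f(x)\, x \, dx $$ sends, for example, $f(x) = x^2$ to $\int_0^1 x^3\, dx = \frac{1}{4}$.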

If my assumptions are incorrect, can someone explain to me what it means to have a space that consists of functionals?

Best Answer

I have a super-naive interpretation that I find helpful. I think of the elements of $X$ as column vectors and the elements of the dual, $X^*$, as row vectors. You can always multiply a row vector by a column vector, and each row vector gives a map from $X$ to $\mathbb{R}$.
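For example, in $\mathbb{R}^3$ the row vector $(1, 2, 3)$ is the functional $$ \begin{pmatrix} 1 & 2 & 3 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = x + 2y + 3z. $$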

In finite dimensions there is no canonical transformation of column vectors into row vectors unless you choose a dot product. (If you haven't seen this before: the usual transpose operation corresponds to the usual dot product.) The same thing happens in infinite dimensions: there is no canonical transformation unless you are in a Hilbert space. (The one thing that's different is that in infinite dimensions there is usually no such transformation at all.)
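Concretely, the dot product pairs a column vector $v$ with the row vector $v^T$, since $$ v^T w = \sum_i v_i w_i = \langle v, w \rangle \quad \text{for every } w. $$ In a Hilbert space the analogous statement, that every continuous linear functional is $f \mapsto \langle f, g \rangle$ for a unique $g$, is the Riesz representation theorem.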

The analogy is particularly clear for $L^p$ spaces. If you think of integration as a fancy sum, then multiplying a row vector by a column vector in finite dimensions, $$ \sum_{i=1}^N v_i w_i, $$ becomes an integral for $f$ in $L^p$ and $g$ in its dual $L^q$ (where $\frac{1}{p} + \frac{1}{q} = 1$), $$ \int f(x) g(x)\, dx. $$ The fact that all continuous linear functionals on $L^p$ are of this form is a non-trivial theorem (the Riesz representation theorem for $L^p$, valid for $1 \le p < \infty$), though, and there are elementary examples of function spaces where it fails.
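The continuity of these integral pairings is exactly Hölder's inequality: $$ \left| \int f(x) g(x)\, dx \right| \le \|f\|_p\, \|g\|_q, $$ so each $g \in L^q$ really does give a bounded (hence continuous) functional on $L^p$.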

Differentiation is more like a map from $X$ to $X$, so it's not a functional. You can define a functional by evaluating the derivative at a single point. Operators like differentiation on $L^p$ don't count, not only because they are not continuous, but because they aren't defined on all of the space. This is usually the more important failure, and if you need differentiation for your application, you define a different function space (a Sobolev space, for example) on which differentiation is a continuous operator defined everywhere. (Alternatively, there is a weaker property than continuity, that of a "closed linear operator", and you can develop a theory for these.)
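To see the failure of continuity concretely: on $C^1[-1,1]$ with the sup norm, the functional $f \mapsto f'(0)$ is linear, but for $f_n(x) = \frac{\sin(nx)}{n}$ we get $$ \|f_n\|_\infty \le \frac{1}{n} \to 0 \qquad \text{while} \qquad f_n'(0) = \cos(0) = 1 \ \text{for every } n, $$ so arbitrarily small functions can have derivative $1$ at the origin.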
