Functional Analysis – Why Are Sobolev Spaces Useful?

functional-analysis · partial-differential-equations · sobolev-spaces

Why are Sobolev spaces useful, and what problems were they developed to overcome? I'm particularly interested in their relation to PDEs, as they are often described as the 'natural space in which to look for PDE solutions' – why is this? How do weak solutions and distributions come into this?

There are plenty of books on the subject, but these seem to jump straight into the details and I'm struggling to see the big picture. I know that the Sobolev norm makes the function spaces complete, which would guarantee that infinite linear combinations of solutions do not leave the space, as can be a problem when working with $\mathscr{C}^2$, for example, but are there any other reasons why this norm is important?

I'm also interested in the Sobolev embedding theorems, since I believe that they're important in the problems I'm trying to solve. These are (1) proving the compactness of the integral operator whose kernel is the Green's function for the Laplacian on a bounded domain $\Omega \subset \mathbb{R}^{n}$ with smooth boundary, and (2) understanding why minimising functions of the Rayleigh quotient,

$${\arg\min}_{f \in T} \frac{\int_{\Omega} \nabla f \cdot \nabla f \, dx}{\left< f , f \right>}$$

always exist and are necessarily smooth ($\mathscr{C}^\infty(\Omega)$), where the minimisation is over the set $T$ of trial functions: continuous functions with piecewise continuous first derivatives which vanish on the boundary and are not identically zero. To me, this sounds like the Sobolev space $H_0^1 (\Omega)$ at work, where the smoothness is the result of a Sobolev embedding theorem; however, I'm very new to Sobolev spaces and so don't know much about this. Could anyone provide me with some insight into how results (1) and (2) might be proven?
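As a one-dimensional sanity check of (2), here is a minimal finite-difference sketch (the discretisation and all names are illustrative, not from a reference): on $(0,1)$ with zero boundary values, minimising the Rayleigh quotient over the discrete trial space is the same as finding the smallest eigenvalue of the discrete Dirichlet Laplacian, and the minimum should approach $\pi^2$ with minimiser $\sin(\pi x)$.

```python
import numpy as np

# Discretise the Rayleigh quotient R(f) = ∫|f'|^2 dx / <f, f> on (0,1)
# with f(0) = f(1) = 0, using n interior grid points.
n = 200
h = 1.0 / (n + 1)

# Standard second-difference Dirichlet Laplacian (tridiagonal, SPD).
L = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# Minimising the discrete Rayleigh quotient = smallest eigenvalue of L.
lam = np.linalg.eigvalsh(L)[0]
print(lam)  # close to pi^2 ≈ 9.8696
```

The eigenvector attached to `lam` is a discrete sample of $\sin(\pi x)$, consistent with the claim that the minimiser exists and is smooth.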

Best Answer

To motivate Sobolev spaces, let me pose a problem.

Let $\Omega$ be a smooth, bounded domain in ${\Bbb R}^n$ and let $f$ be a $C^\infty$ function on $\Omega$. Prove that there exists a $C^2$ function $u$ satisfying $-\Delta u = f$ in $\Omega$ and $u = 0$ on the boundary of $\Omega$.

As far as PDEs go, this is the tamest of the tame: it's a second-order, constant-coefficient elliptic PDE with a smooth right-hand side and a smooth boundary. Should be easy, right? It certainly can be done, but you'll find it's harder than you might think.

Imagine replacing the PDE with something more complicated like $-\text{div}(A(x)\nabla u) = f$ for some $C^1$ uniformly positive definite matrix-valued function $A$. Proving even the existence of solutions is a nightmare. Such PDEs come up all the time in the natural sciences, for instance representing the equilibrium distribution of heat (or stress, concentration of impurities, ...) in an inhomogeneous, anisotropic medium.

Proving the existence of weak solutions to such PDEs in Sobolev spaces is incredibly simple: once all the relevant theoretical machinery has been worked out, the existence, uniqueness, and other useful properties of the solutions can be proven in only a couple of lines. The reason Sobolev spaces are so effective for PDEs is that they are Banach spaces, and thus the powerful tools of functional analysis can be brought to bear. In particular, the existence of weak solutions to many elliptic PDEs follows directly from the Lax-Milgram theorem.
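To make that "couple of lines" concrete, here is a sketch of how the model problem fits the Lax-Milgram framework (standard material, stated from memory, so check the hypotheses in a reference):

```latex
% Work in H = H^1_0(\Omega).  Define
\[
  a(u,v) = \int_\Omega \nabla u \cdot \nabla v \, dx ,
  \qquad
  F(v) = \int_\Omega f v \, dx .
\]
% a is bounded on H (Cauchy--Schwarz) and coercive,
% a(u,u) \ge c \,\|u\|_{H^1}^2, by the Poincare inequality;
% F is a bounded linear functional on H.  Lax--Milgram then yields
% a unique u \in H^1_0(\Omega) with a(u,v) = F(v) for all v \in H,
% i.e. a weak solution of -\Delta u = f with u = 0 on the boundary.
```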

So what is a weak solution to a PDE? In simple terms, you take the PDE and multiply by a suitably chosen${}^*$ test function and integrate over the domain. For my problem, for instance, a weak formulation would be to say that $-\int_\Omega v\Delta u \, dx = \int_\Omega fv \, dx$ for all $C^\infty_0$ functions $v$. We often want to use integration by parts to simplify our weak formulation so that the order of the highest derivative appearing in the expression goes down: you can check that in fact $\int_\Omega \nabla v\cdot \nabla u \, dx = \int_\Omega fv \, dx$ for all $C^\infty_0$ functions $v$.
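The "you can check" step is just integration by parts (the divergence theorem): for $u \in C^2(\overline\Omega)$ and $v \in C^\infty_0(\Omega)$,

```latex
\[
  -\int_\Omega v\,\Delta u \, dx
  = \int_\Omega \nabla v \cdot \nabla u \, dx
    - \int_{\partial\Omega} v \,\frac{\partial u}{\partial n} \, dS
  = \int_\Omega \nabla v \cdot \nabla u \, dx ,
\]
% where the boundary term vanishes because v \equiv 0 on \partial\Omega.
```

Notice that the right-hand side only asks for one derivative of $u$, which is exactly why the weak formulation makes sense for functions far rougher than $C^2$.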

Note the logic. You begin with a smooth solution to your PDE, which a priori may or may not exist. You then derive from the PDE a certain integral equation which is guaranteed to hold for all suitable test functions $v$. You then define $u$ to be a weak solution of the PDE if the integral equation holds for all test functions $v$.

By construction, every classical solution to the PDE is a weak solution. Conversely, you can show that if $u$ is a $C^2$ weak solution, then $u$ is a classical solution.${}^\dagger$ Showing the existence of solutions in a Sobolev space is easy, but proving that they have enough regularity (that is, they are continuously differentiable up to some order—$2$, in our case) to be classical solutions often requires very lengthy and technical proofs.${}^\$$

(The Sobolev embedding theorems you mention in your post are one of the key tools: they establish that if you have enough weak derivatives in a Sobolev sense, then you are also guaranteed to have a certain number of classical derivatives. The downside is you have to work in a Sobolev space $W^{k,p}$ where $p$ is larger than the dimension of the space, $n$. This is a major bummer, since we like to work in $W^{k,2}$: it is a Hilbert space, and thus has much nicer functional-analytic tools. Alternatively, if you show that your function is in $W^{k,2}$ for every $k$, then it is guaranteed to lie in $C^\infty$.)
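For concreteness, the embedding statements being used here, for a bounded smooth domain $\Omega \subset \mathbb{R}^n$ (stated from memory, so the precise hypotheses should be checked in a reference):

```latex
\[
  W^{1,p}(\Omega) \hookrightarrow C^{0,\,1-n/p}\bigl(\overline{\Omega}\bigr)
  \quad\text{for } p > n \text{ (Morrey's inequality)},
\]
\[
  W^{k,p}(\Omega) \hookrightarrow C^{m}\bigl(\overline{\Omega}\bigr)
  \quad\text{whenever } k - \tfrac{n}{p} > m .
\]
% With p = 2 the second statement reads k > m + n/2, so
% u \in H^k(\Omega) for every k forces u \in C^\infty(\Omega).
```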

All of what I've written kind of dances around the central question of why Sobolev spaces are so useful and why all of these functional analytic tools work for Sobolev spaces but not for spaces like $C^2$. In a sentence, completeness is really, really important. Often, in analysis, when we want to show a solution to something exists, it's much easier to construct a bunch of approximate solutions and then show those approximations converge to a bona fide solution. But without completeness, there might not be a solution (a priori, at least) for them to converge to. As a much simpler example, think of the intermediate value theorem. $f(x) = x^2-2$ has $f(2) = 2$ and $f(0) = -2$, so there must exist a zero (namely $\sqrt{2}$) in $(0,2)$. This conclusion fails over the rationals, however, since the rationals are not complete: $\sqrt{2} \notin {\Bbb Q}$. In fact, one way to define the Sobolev spaces is as the completion of $C^\infty$ (or $C^k$ for $k$ large enough) under the Sobolev norms.${}^\%$
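The "approximate solutions converging to a genuine one" strategy can be seen numerically. A small sketch (my own illustration; the setup is a textbook finite-difference scheme, not anything from the question): solve $-u'' = f$ on $(0,1)$ with $u(0)=u(1)=0$ and $f(x) = \pi^2 \sin(\pi x)$, whose exact solution is $u(x) = \sin(\pi x)$, on finer and finer grids, and watch the approximations converge.

```python
import numpy as np

def solve(n):
    """Finite-difference solve of -u'' = pi^2 sin(pi x), u(0)=u(1)=0,
    on n interior grid points; returns the sup-norm error vs sin(pi x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    L = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(L, np.pi**2 * np.sin(np.pi * x))
    return np.max(np.abs(u - np.sin(np.pi * x)))

errors = [solve(n) for n in (10, 20, 40, 80)]
print(errors)  # errors shrink as the grid is refined (roughly O(h^2))
```

Each grid gives an element of a finite-dimensional approximation space; completeness of the ambient space is what lets one argue, in the continuous setting, that such a sequence has something to converge *to*.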

I don't have the space here to answer your questions (1) and (2) directly, as answering them in detail really requires spinning out a whole theory. Most graduate textbooks on PDEs should have answers with all the details spelled out. (Evans is the standard reference, although he doesn't include potential theory, so he doesn't answer (1) directly, at least.) Hopefully this answer at least motivates why Sobolev spaces are the "appropriate space to look for solutions to PDEs".


${}^*$ Depending on the boundary conditions of the PDE, our test functions may or may not need to be zero on the boundary. Additionally, to make the functional analysis nice, we often want our test functions to be taken from the same Sobolev space in which we seek solutions. This usually poses no problem, as we may begin by taking our test functions to be $C^\infty$ and use approximation arguments to extend to all functions in a suitable Sobolev space.

${}^\dagger$ Apply integration by parts to recover $-\int_\Omega v\Delta u \, dx = \int_\Omega fv \, dx$ for all $C^\infty_0$ functions $v$, then apply the fundamental lemma of the calculus of variations.

${}^\$$ Take a look at a regularity proof for elliptic equations in your advanced PDE book of choice.

${}^\%$ You might ask: why complete in the Sobolev norm, and not some simpler norm like $L^p$? Unfortunately, the $L^p$ completion of $C^\infty$ is $L^p$, and there are functions in $L^p$ for which you can't define any sensible weak or strong derivative. Thus, in order to define a complete normed space of differentiable functions, the derivative has to enter the norm (which is why the Sobolev norms are important, and in some sense natural).
