I am studying parabolic PDEs on my own through the book "Linear and Quasi-linear Equations of Parabolic Type" by Olga Ladyzhenskaya, Vsevolod Solonnikov, and Nina Uraltseva, but there is one thing I do not understand: why do we look for solutions of linear and quasi-linear parabolic PDEs in fractional Sobolev spaces instead of the classical Sobolev spaces? I believe some problem arises when we try to find these solutions in the classical Sobolev spaces, but I cannot see what this problem is, and the only thing I could find about the motivation for fractional Sobolev spaces is this. I would be grateful if someone could explain why we work in fractional Sobolev spaces instead of the classical ones.
Motivation for the fractional Sobolev spaces
fractional-sobolev-spaces, motivation, parabolic-pde, sobolev-spaces
Related Solutions
There are multiple definitions of $H^{1/2}(\partial Ω)$ which are equivalent if the boundary is regular enough (Lipschitz continuous). The technically simplest, and how it usually appears in lectures on weak solutions for partial differential equations, is as the range of the trace operator $tr\colon H^1(Ω) \to L^2(\partial Ω)$:
$$ \begin{align}H^{1/2}(\partial Ω) &:= tr(H^1(\Omega)) := \{ v \in L^2(\partial Ω) \;|\; \exists u \in H^1(Ω)\colon tr(u) =v \},\\ \| v \|_{H^{1/2}(\partial Ω)} &:= \inf \{ \| \tilde u \|_{H^1(Ω)} \;|\; \tilde u \in H^1(Ω) \land tr(\tilde u) = v \}.\end{align}$$
The definition of the norm arises as follows. By the first isomorphism theorem for Banach spaces, the trace operator induces an isomorphism $$ \begin{align}\widehat{tr}\colon H^1(Ω) / \operatorname{ker} tr &\to tr(H^1(\Omega)), \\ [u] &\mapsto tr(u)\end{align} $$ where $\operatorname{ker} tr$ is the kernel of the trace operator, $[u] \in H^1(Ω) / \operatorname{ker} tr$ denotes an equivalence class with representative $u \in H^1(Ω)$, and the norm on the quotient space is given by $$ \| [u] \|_{H^1(Ω) / \operatorname{ker} tr} := \inf \{ \| \tilde u \|_{H^1(Ω)} \;|\; \tilde u \in [u] \}. $$ This is the general construction of the quotient norm on Banach spaces. As a side remark, one has $\operatorname{ker} tr = H^1_0(Ω)$ (the latter space being defined as the completion of $C^\infty_0(\Omega)$ in $H^1(\Omega)$). One can then define a norm on $H^{1/2}(\partial Ω)$ using $\widehat{tr}$:
$$ \| v \|_{H^{1/2}(\partial Ω)} := \| \widehat{tr}^{-1}(v) \|_{H^1(Ω) / \operatorname{ker} tr} = \inf \{ \| \tilde u \|_{H^1(Ω)} \;|\; \tilde u \in \widehat{tr}^{-1}(v) \}$$
Using that $\tilde u \in \widehat{tr}^{-1}(v)$ if and only if $\tilde u \in H^1(\Omega)$ and $tr(\tilde u) = v$, one arrives at the expression for the norm given at the beginning.
This definition of $H^{1/2}(\partial Ω)$ is not very useful if one wishes to check whether a specific function $v \in L^2(\partial Ω)$ is in $H^{1/2}(\partial Ω)$ and it does not explain the name $H^{1/2}(\partial Ω)$ (which came later historically).
The other definition of $H^{1/2}(\partial Ω)$ I present here is quite technical in its details, as $\partial Ω$ is an $(n-1)$-dimensional manifold. In case $\partial Ω$ is a plane you have $\partial Ω \cong \mathbb R^{n-1}$, and you end up having to define $H^{1/2}(Ω')$ for $Ω' \subset \mathbb R^{n-1}$. For a general Lipschitz boundary you can "straighten" the boundary locally to look like a plane (a general technique when working with manifolds), and in the end you ask for a transformation of your boundary function to be in $H^{1/2}(Ω')$. (See [1] for details.)
All in all, you end up having to define $H^{1/2}(Ω')$. There are multiple ways of doing that: one uses the Hölder-like seminorms mentioned by Thomás, one uses Fourier coefficients (see Fractional Sobolev Spaces on Wikipedia), and one uses interpolation between $L^2(Ω')$ and $H^1(Ω')$ (see [1] again).
For understanding the actual behavior of functions in $H^{1/2}(Ω')$ the definition using the Hölder-like norm (Sobolev-Slobodeckij norm) is probably the best. Since $Ω' \subset \mathbb R^{n-1}$, the exponent in the kernel is $(n-1) + 2\cdot\tfrac12 = n$: $$H^{1/2}(Ω') = \left\{ v ∈ L^2(Ω') \;\middle|\; \| v \|_{L^2(Ω')} + \int_{Ω'}\int_{Ω'}\frac{|v(x)-v(y)|^2}{|x-y|^{n}} \, dx \, dy < \infty \right\}$$ Note that the additional integral term is somewhat like a Hölder condition. I like to think of $H^1(Ω) \subset H^{1/2}(Ω) \subset L^2(Ω)$ as something analogous to $C^1(Ω) \subset C^{1/2}(Ω) \subset C^0(Ω)$ in terms of regularity. That this is really analogous can be made precise using interpolation theory, which allows one to define spaces $H^s(Ω)$ for any $0 < s < 1$ "in-between" $L^2(\Omega)$ and $H^1(Ω)$, where the trace space appears as the special case $s = 1/2$.
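As a quick numerical illustration (my own sketch, not part of the answer above), the Gagliardo seminorm can be discretized on the unit interval, where the standard kernel exponent for a domain in $\mathbb R^d$ is $d + 2s$, so $d = 1$, $s = 1/2$ gives $|x-y|^{-2}$. A smooth function gives a value that stabilizes under grid refinement, while a jump function, which lies in $L^2$ but not in $H^{1/2}$, gives a discretized seminorm that keeps growing (logarithmically in the grid resolution):

```python
import numpy as np

def gagliardo_seminorm_sq(v, n):
    """Discretized H^{1/2} Gagliardo seminorm squared on (0, 1):
    double integral of |v(x) - v(y)|^2 / |x - y|^2  (d = 1, s = 1/2)."""
    x = (np.arange(n) + 0.5) / n              # midpoint grid, spacing h = 1/n
    h = 1.0 / n
    vx = v(x)
    diff = vx[:, None] - vx[None, :]
    dist = x[:, None] - x[None, :]
    mask = ~np.eye(n, dtype=bool)             # skip the diagonal singularity
    return np.sum(diff[mask] ** 2 / dist[mask] ** 2) * h * h

smooth = lambda x: np.sin(np.pi * x)          # smooth, hence in H^{1/2}(0, 1)
step = lambda x: (x > 0.5).astype(float)      # jump: in L^2 but not in H^{1/2}

for n in (100, 400, 1600):
    print(n, gagliardo_seminorm_sq(smooth, n), gagliardo_seminorm_sq(step, n))
```

Under refinement the smooth column converges, while the step column grows without bound, which is exactly the "Hölder-like" condition at work: a jump is too rough for half a derivative.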
The only source I know of that proves the equivalence of the norms is [2], but my Italian is not sufficient to follow the argument.
[1] Lions, J. L., & Magenes, E. (1972). Non-Homogeneous Boundary Value Problems and Applications.
[2] Gagliardo, E. (1957). Caratterizzazioni delle tracce sulla frontiera relative ad alcune classi di funzioni in n variabili. Rendiconti Del Seminario Matematico Della Università Di Padova, 27, 284–305.
To motivate Sobolev spaces, let me pose a problem.
Let $\Omega$ be a smooth, bounded domain in ${\Bbb R}^n$ and let $f$ be a $C^\infty$ function on $\Omega$. Prove that there exists a $C^2$ function $u$ satisfying $-\Delta u = f$ in $\Omega$ and $u = 0$ on the boundary of $\Omega$.
As far as PDEs go, this is the tamest of the tame: it is a second-order, constant-coefficient elliptic PDE with a smooth right-hand side and a smooth boundary. Should be easy, right? It certainly can be done, but you'll find it's harder than you might think.
Imagine replacing the PDE with something more complicated like $-\text{div}(A(x)\nabla u) = f$ for some $C^1$ uniformly positive definite matrix-valued function $A$. Even proving the existence of solutions is a nightmare. Such PDEs come up all the time in the natural sciences, for instance representing the equilibrium distribution of heat (or stress, concentration of impurities, ...) in an inhomogeneous, anisotropic medium.
Proving the existence of weak solutions to such PDEs in Sobolev spaces is incredibly simple: once all the relevant theoretical machinery has been worked out, the existence, uniqueness, and other useful properties of the solutions can be proven in only a couple of lines. The reason Sobolev spaces are so effective for PDEs is that they are Banach spaces, and thus the powerful tools of functional analysis can be brought to bear. In particular, the existence of weak solutions to many elliptic PDEs follows directly from the Lax-Milgram theorem.
So what is a weak solution to a PDE? In simple terms, you take the PDE and multiply by a suitably chosen${}^*$ test function and integrate over the domain. For my problem, for instance, a weak formulation would be to say that $-\int_\Omega v\Delta u \, dx = \int_\Omega fv \, dx$ for all $C^\infty_0$ functions $v$. We often want to use integration by parts to simplify our weak formulation so that the order of the highest derivative appearing in the expression goes down: you can check that in fact $\int_\Omega \nabla v\cdot \nabla u \, dx = \int_\Omega fv \, dx$ for all $C^\infty_0$ functions $v$.
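To make this concrete, here is a minimal sketch (my own, not from the original answer) of how the weak formulation $\int_\Omega \nabla v\cdot\nabla u\,dx = \int_\Omega fv\,dx$ becomes a finite linear system when trial and test functions are restricted to piecewise-linear "hat" functions on $(0,1)$ (a Galerkin method). The choice $f = \pi^2\sin(\pi x)$ is mine, picked so that the exact solution is $\sin(\pi x)$:

```python
import numpy as np

# Galerkin discretization of the weak form: find u_h in the span of hat
# functions phi_i with  ∫ u_h' phi_i' dx = ∫ f phi_i dx  for every i,
# mirroring  ∫ ∇u·∇v dx = ∫ f v dx  for all test functions v.
n = 100                                  # number of interior nodes (demo choice)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)             # interior grid points

f = lambda t: np.pi**2 * np.sin(np.pi * t)   # exact solution: sin(pi t)

# Stiffness matrix A_ij = ∫ phi_i' phi_j' dx = tridiag(-1, 2, -1) / h
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
b = h * f(x)                             # load vector ∫ f phi_i dx, lumped

u = np.linalg.solve(A, b)                # nodal values of the weak solution
print(np.max(np.abs(u - np.sin(np.pi * x))))   # small discretization error
```

The point is exactly the one made above: the weak form only asks for one derivative of $u$ and one of $v$, so a continuous, piecewise-linear ansatz is admissible even though it has no classical second derivative.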
Note the logic. You begin with a smooth solution to your PDE, which a priori may or may not exist. You then derive from the PDE a certain integral equation which is guaranteed to hold for all suitable test functions $v$. You then define $u$ to be a weak solution of the PDE if the integral equation holds for all test functions $v$.
By construction, every classical solution to the PDE is a weak solution. Conversely, you can show that if $u$ is a $C^2$ weak solution, then $u$ is a classical solution.${}^\dagger$ Showing the existence of solutions in a Sobolev space is easy, but proving that they have enough regularity (that is, that they are continuously differentiable up to some order, $2$ in our case) to be classical solutions often requires very lengthy and technical proofs.${}^\$$
(The Sobolev embedding theorems you mention in your post are one of the key tools: they establish that if you have enough weak derivatives in the Sobolev sense, then you are also guaranteed a certain number of classical derivatives. The downside is that you have to work in a Sobolev space $W^{k,p}$ where $p$ is larger than the dimension of the space, $n$. This is a major bummer, since we like to work in $W^{k,2}$: it is a Hilbert space, and thus has much nicer functional-analytic tools. Alternatively, if you show that your function is in $W^{k,2}$ for every $k$, then it is guaranteed to lie in $C^\infty$.)
All of what I've written kind of dances around the central question of why Sobolev spaces are so useful and why all of these functional analytic tools work for Sobolev spaces but not for spaces like $C^2$. In a sentence: completeness is really, really important. Often, in analysis, when we want to show a solution to something exists, it's much easier to construct a bunch of approximate solutions and then show those approximations converge to a bona fide solution. But without completeness, there might not be a solution (a priori, at least) for them to converge to. As a much simpler example, think of the intermediate value theorem. $f(x) = x^2-2$ has $f(2) = 2$ and $f(0) = -2$, so there must exist a zero (namely $\sqrt{2}$) in $(0,2)$. This conclusion fails over the rationals, however, since the rationals are not complete: $\sqrt{2} \notin {\Bbb Q}$. In fact, one way to define the Sobolev spaces is as the completion of $C^\infty$ (or $C^k$ for $k$ large enough) under the Sobolev norms.${}^\%$
I do not have the space here to answer your questions (1) and (2) directly, as answering them in detail really requires spinning out a whole theory. Most graduate textbooks on PDEs have answers with all the details spelled out. (Evans is the standard reference, although he doesn't include potential theory, so he doesn't answer (1), directly at least.) Hopefully this answer at least motivates why Sobolev spaces are the "appropriate space to look for solutions to PDEs".
${}^*$ Depending on the boundary conditions of the PDE's, our test functions may need to be zero on the boundary or not. Additionally, to make the functional analysis nice, we often want our test functions to be taken from the same Sobolev space as we seek solutions in. This usually poses no problem as we may begin by taking our test functions to be $C^\infty$ and use certain approximation arguments to extend to all functions in a suitable Sobolev space.
${}^\dagger$ Apply integration by parts to recover $-\int_\Omega v\Delta u \, dx = \int_\Omega fv \, dx$ for all $C^\infty_0$ functions $v$ and apply the fundamental lemma of calculus of variations.
${}^\$$ Take a look at a regularity proof for elliptic equations in your advanced PDE book of choice.
${}^\%$ You might ask why complete in Sobolev norm, not some simpler norm like $L^p$? Unfortunately, the $L^p$ completion of $C^\infty$ is $L^p$, and there are functions in $L^p$ which you can't define any sensible weak or strong derivative of. Thus, in order to define a complete normed space of differentiable functions, the derivative has to enter the norm (which is why the Sobolev norms are important, and in some sense natural.)
Best Answer
The simple answer is that you can find better and sharper estimates using fractional spaces, or interpolation spaces. Let me give an example with our favorite parabolic PDE: \begin{align} u_t=u_{xx}. \end{align} Denoting by $S(t)$ the semigroup generated by the Laplacian, we can solve the equation as \begin{align} u(t)=S(t)u_0. \end{align} Suppose we want to measure $u(t)$ in some Hilbert space $X$, and the initial condition is taken from a space $Y$; then we find \begin{align} \|u(t)\|_X\leq \|S(t)\|_{L(Y,X)}\|u_0\|_Y. \end{align}

The key question is now how the operator norm depends on time. For $X=H^2$ and $Y=L^2$, we know that the operator norm has a singularity of order $t^{-1}$, but when we take $Y=H^2$ there is no singularity. Now what if we take an initial condition that is smoother than $L^2$, but not as smooth as $H^2$? How strong will the singularity be? To answer such questions you need interpolation spaces between $L^2$ and $H^2$; in other words, you want to construct a family of spaces $H^\alpha$ in between, and fractional Sobolev spaces are a nice explicit way to construct these spaces.

I recommend studying these lecture notes: http://people.dmi.unipr.it/alessandra.lunardi/. Corollary 4.1.11 is the famous Ladyzhenskaya-Solonnikov-Uraltseva theorem, and the use of interpolation spaces becomes very clear there.
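The $t^{-1}$ singularity, and its fractional interpolants, can be seen numerically. In the sketch below (my own conventions, not from the answer): the interval $(0,\pi)$ with Dirichlet conditions, eigenvalues $\lambda_k = k^2$ of $-\Delta$, and $H^\alpha$ measured by the spectral seminorm $|u|_{H^\alpha}^2 = \sum_k \lambda_k^\alpha |u_k|^2$. Then the operator norm of $S(t)\colon L^2 \to H^\alpha$ is $\sup_k \lambda_k^{\alpha/2} e^{-\lambda_k t}$, which blows up like $t^{-\alpha/2}$ as $t \to 0$, with no singularity for $\alpha = 0$ and the full $t^{-1}$ for $\alpha = 2$:

```python
import numpy as np

# Heat semigroup on (0, pi), Dirichlet BC: the k-th sine mode is damped by
# exp(-k^2 t).  Operator norm of S(t): L^2 -> H^alpha (spectral seminorm)
# is the sup over modes of lam^{alpha/2} * exp(-lam * t).
def op_norm(t, alpha, kmax=4000):
    lam = np.arange(1, kmax + 1, dtype=float) ** 2   # eigenvalues k^2
    return np.max(lam ** (alpha / 2) * np.exp(-lam * t))

for alpha in (0.0, 1.0, 2.0):          # target spaces L^2, H^1, H^2
    for t in (1e-2, 1e-3, 1e-4):
        # rescaled norm t^{alpha/2} * ||S(t)|| should approach a constant,
        # exhibiting the t^{-alpha/2} singularity
        print(alpha, t, t ** (alpha / 2) * op_norm(t, alpha))
```

For $\alpha = 2$ the rescaled values settle near $1/e$ (the maximum of $\lambda e^{-\lambda t}$ over $\lambda > 0$ is $1/(et)$), while intermediate $\alpha$ give the intermediate rates $t^{-\alpha/2}$, which is exactly the kind of statement interpolation spaces make precise.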