Laplace equation with a Robin boundary condition

Tags: analysis, boundary-value-problem, elliptic-equations, partial-differential-equations, spectral-theory

$\textbf{Problem}$ Let $\Omega$ be an open, bounded and connected subset of $\mathbb{R}^n$ with $C^{\infty}$ boundary $\partial \Omega$. Consider the eigenvalue problem
\begin{align*}
\begin{cases}
-\Delta u=\lambda u & \textrm{ in } \; \Omega \\
\frac{\partial u}{\partial \nu}=-u & \textrm{ on } \partial \Omega
\end{cases}
\end{align*}

Define a bilinear form $(\cdot,\cdot)_{H^1}$ by
\begin{align*}
(u,v)_{H^1}:=\int_{\Omega} \nabla u \cdot \nabla v \;dx + \int_{\partial \Omega} uv \; d\sigma
\end{align*}

Show that there exists a constant $\theta>0$, independent of $u$, such that for every $u \in H^1(\Omega)$
\begin{align*}
(u,u)_{H^1} \geq \theta \Vert u \Vert _{H^1(\Omega)}^2
\end{align*}
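In other words (just restating the claim with the definition above), the assertion is that the bilinear form is coercive: writing $\Vert u \Vert_{H^1(\Omega)}^2 = \Vert \nabla u \Vert_{L^2(\Omega)}^2 + \Vert u \Vert_{L^2(\Omega)}^2$, one must show
\begin{align*}
\int_{\Omega} |\nabla u|^2 \;dx + \int_{\partial \Omega} u^2 \; d\sigma \geq \theta \left( \int_{\Omega} |\nabla u|^2 \;dx + \int_{\Omega} u^2 \;dx \right) \quad \text{for all } u \in H^1(\Omega).
\end{align*}
The only difficulty is controlling $\Vert u \Vert_{L^2(\Omega)}^2$ by the left-hand side.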

$\textbf{Attempt}$

\begin{align*}
(u,u)_{H^1}&=\int_{\Omega} \nabla u \cdot \nabla u \;dx + \int_{\partial \Omega} u^2 \; d\sigma \\
&=\int_{\Omega} \nabla \cdot(u\nabla u)-u\Delta u \; dx +\int_{\partial \Omega} u^2 \; d\sigma \\
&=\int_{\partial \Omega} u \frac{\partial u}{\partial \nu} \; d\sigma +\int_{\Omega} \lambda u^2 dx +\int_{\partial \Omega} u^2 \; d\sigma \\
&=-\int_{\partial \Omega} u^2 \; d\sigma +\int_{\Omega} \lambda u^2 dx +\int_{\partial \Omega} u^2 \; d\sigma\\
&=\lambda \Vert u \Vert _{L^2(\Omega)}^2
\end{align*}

I don't know how to get $\lambda \Vert u \Vert_{L^2(\Omega)}^2 \geq \theta \Vert u \Vert_{H^1(\Omega)}^2$. Also, my computation uses the boundary condition, so it only applies when $u$ is an eigenfunction, while the inequality should hold for every $u \in H^1(\Omega)$.

Any help is appreciated.

Thank you!

Best Answer

Here is a standard proof by contradiction. Assume that the inequality does not hold. Then for every $n$ there is $u_n\in H^1(\Omega)$ such that $$ \|\nabla u_n\|_{L^2(\Omega)}^2 + \|u_n\|_{L^2(\partial\Omega)}^2 < \frac1n \|u_n\|_{H^1(\Omega)}^2. $$ This implies $u_n\ne0$, and we can assume w.l.o.g. that $\|u_n\|_{H^1(\Omega)}=1$. Then $\nabla u_n\to0$ in $L^2(\Omega)$ and $u_n\to0$ in $L^2(\partial\Omega)$ follow immediately.

Since $(u_n)$ is bounded in $H^1(\Omega)$, after extracting a subsequence (denoted the same for simplicity), we have $u_n\rightharpoonup u$ in $H^1(\Omega)$. By weak lower semicontinuity of the norm, $\|\nabla u\|_{L^2(\Omega)} \le \liminf_n \|\nabla u_n\|_{L^2(\Omega)} = 0$, so $\nabla u=0$ and $u$ is constant on the connected set $\Omega$. The trace operator $H^1(\Omega)\to L^2(\partial\Omega)$ is compact, so $u_n\to u$ in $L^2(\partial\Omega)$; hence $\|u\|_{L^2(\partial\Omega)}=0$, and the constant $u$ must be $0$. By the compact embedding $H^1(\Omega)\hookrightarrow L^2(\Omega)$, $u_n\to0$ in $L^2(\Omega)$. Together with $\nabla u_n\to0$ in $L^2(\Omega)$, this gives $\|u_n\|_{H^1(\Omega)}^2=\|\nabla u_n\|_{L^2(\Omega)}^2+\|u_n\|_{L^2(\Omega)}^2\to0$, which is a contradiction to $\|u_n\|_{H^1(\Omega)}=1$.
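As a numerical sanity check (not part of the proof, and with all discretization choices my own), one can approximate the best constant $\theta$ in one dimension. For $\Omega=(0,1)$ with piecewise linear finite elements, $\theta$ is approximated by the smallest eigenvalue $\mu$ of the generalized problem $Ax=\mu Gx$, where $A$ discretizes the form $(u,v)_{H^1}$ above and $G$ the standard $H^1(\Omega)$ inner product:

```python
# Approximate the coercivity constant theta on Omega = (0, 1)
# using P1 finite elements on a uniform grid (a sketch; grid size
# and the 1D setting are my own choices for illustration).
import numpy as np
from scipy.linalg import eigh

N = 200                      # number of elements
h = 1.0 / N
n = N + 1                    # number of nodes

# P1 stiffness matrix: integral of u' v' over (0, 1)
K = np.zeros((n, n))
for e in range(N):
    K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h

# P1 mass matrix: integral of u v over (0, 1)
M = np.zeros((n, n))
for e in range(N):
    M[e:e + 2, e:e + 2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])

# Boundary term: in 1D the surface integral is u(0)v(0) + u(1)v(1)
A = K.copy()
A[0, 0] += 1.0
A[-1, -1] += 1.0

G = K + M                    # discrete H^1 inner product

# Smallest generalized eigenvalue approximates theta
mu = eigh(A, G, eigvals_only=True)
theta_h = mu[0]
print(theta_h)
```

The computed value is strictly positive and below $1$, consistent with coercivity; taking an interior hat function (zero on the boundary) shows the Rayleigh quotient can dip below $1$, while the boundary term rules out the constant kernel of $K$, keeping the minimum positive.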
