[Math] Understanding tensor divergence notation in an integral

notation, tensors

Given a smooth tensor-valued function $\sigma:\mathbb{R}^2\rightarrow \mathbb{R}^{2\times2}$, I'm trying to show that

$\int_\Omega \nabla\cdot\sigma=\int_{\partial\Omega}\sigma n$,

where $\Omega$ is a connected region in $\mathbb{R}^2$ whose boundary $\partial\Omega$ is a simple closed curve, and $n$ is the outward unit normal to $\partial\Omega$.

I'm struggling with several points in this problem:

  1. How can I represent $\sigma$ explicitly?
  2. How can I explicitly represent $\sigma n$?
  3. Given that $\nabla\cdot\sigma$ is a vector, would the differential on the left-hand side of the equation still be $dx\,dy$?
  4. I am told that I can apply the divergence theorem directly, but the only version I'm familiar with applies to integrals whose integrands are scalar quantities. What is the tensor equivalent of the divergence theorem?

Any help would be greatly appreciated! 🙂
Thanks.

Best Answer

The divergence theorem for a tensor field can be written componentwise as

$$ \int_\Omega\sum_{i=1}^2\frac{\partial\sigma_{ij}}{\partial x_i}\,dx\,dy=\int_{\partial\Omega}\sum_{i=1}^2 n_i\,\sigma_{ij}\,ds,\qquad j=1,2. $$

In practice, you apply the usual (vector) divergence theorem to each column $\boldsymbol{\sigma}_j=(\sigma_{1j},\sigma_{2j})$ of $\sigma$. In particular, the identity is a vector equation integrated componentwise, so the area element on the left-hand side is indeed $dx\,dy$ (question 3).
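
To make questions 1 and 2 explicit, you can write out all the components, using the same index convention as the formula above (contraction on the first index; for a symmetric tensor, such as a stress tensor, this coincides with the ordinary matrix product $\sigma n$):

$$ \sigma=\begin{pmatrix}\sigma_{11}&\sigma_{12}\\ \sigma_{21}&\sigma_{22}\end{pmatrix},\qquad n=\begin{pmatrix}n_1\\ n_2\end{pmatrix},\qquad \sigma n=\begin{pmatrix}n_1\sigma_{11}+n_2\sigma_{21}\\ n_1\sigma_{12}+n_2\sigma_{22}\end{pmatrix},\qquad \nabla\cdot\sigma=\begin{pmatrix}\dfrac{\partial\sigma_{11}}{\partial x}+\dfrac{\partial\sigma_{21}}{\partial y}\\[8pt] \dfrac{\partial\sigma_{12}}{\partial x}+\dfrac{\partial\sigma_{22}}{\partial y}\end{pmatrix}. $$

Each component of the identity is then exactly the ordinary divergence theorem applied to one column of $\sigma$.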

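If you want a concrete sanity check, here is a short symbolic verification on the unit square using sympy. The particular $\sigma$ below is just an arbitrary smooth example, not anything canonical:

```python
import sympy as sp

x, y = sp.symbols('x y')

# an arbitrary smooth 2x2 tensor field sigma(x, y) -- purely illustrative
sigma = sp.Matrix([[x**2 * y, x + y**2],
                   [sp.sin(x) * y, x * y]])

for j in range(2):
    # left-hand side: area integral of sum_i d(sigma_ij)/dx_i over the unit square
    div_j = sp.diff(sigma[0, j], x) + sp.diff(sigma[1, j], y)
    lhs = sp.integrate(div_j, (x, 0, 1), (y, 0, 1))

    # right-hand side: boundary integral of sum_i n_i sigma_ij over the four sides,
    # with outward unit normals (1,0), (-1,0), (0,1), (0,-1)
    rhs = (sp.integrate(sigma[0, j].subs(x, 1), (y, 0, 1))    # right side,  n = ( 1, 0)
           - sp.integrate(sigma[0, j].subs(x, 0), (y, 0, 1))  # left side,   n = (-1, 0)
           + sp.integrate(sigma[1, j].subs(y, 1), (x, 0, 1))  # top side,    n = ( 0, 1)
           - sp.integrate(sigma[1, j].subs(y, 0), (x, 0, 1))) # bottom side, n = ( 0,-1)

    print(j, sp.simplify(lhs - rhs))  # prints 0 for each j
```

Both components come out equal, as the theorem guarantees.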