Tensors – Product Rule for Divergence of T Grad Phi


It is easy to verify that

\begin{equation}
\nabla \cdot \left( \beta(\mathbf x) \nabla \phi \right) = \beta(\mathbf x) \nabla^2 \phi + \nabla \beta(\mathbf x) \cdot \nabla \phi
\end{equation}

for scalar-valued functions $\beta(\mathbf x)$ and $\phi(\mathbf x)$, with $\mathbf x \in \mathbb{R}^d$. Here $\nabla \cdot$ denotes the divergence and $\nabla \phi$ the gradient.
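As a quick sanity check, this scalar identity can be verified symbolically. The following sketch uses SymPy in two dimensions; the particular smooth fields chosen for $\beta$ and $\phi$ are illustrative assumptions, not part of the original question:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Arbitrary smooth test fields (illustrative choices only)
beta = sp.exp(x) * sp.cos(y)
phi = sp.sin(x * y) + x**2

def grad(f):
    """Gradient of a scalar field in 2D Cartesian coordinates."""
    return sp.Matrix([sp.diff(f, x), sp.diff(f, y)])

def div(v):
    """Divergence of a 2D vector field."""
    return sp.diff(v[0], x) + sp.diff(v[1], y)

lhs = div(beta * grad(phi))                              # div(beta grad phi)
rhs = beta * div(grad(phi)) + grad(beta).dot(grad(phi))  # beta laplacian(phi) + grad(beta) . grad(phi)

print(sp.simplify(lhs - rhs))  # prints 0, confirming the identity
```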

My question is: what is the analogous "product rule" when $\beta(\mathbf x)$ is replaced by a spatially varying tensor $T(\mathbf x)$? That is, what is the expansion of

\begin{equation}
\nabla \cdot \left( T(\mathbf x) \nabla \phi \right) = \; ?
\end{equation}

where $T(\mathbf x) \in \mathbb{R}^{d \times d}$. I have assumed everything is in Cartesian coordinates, so I can expand component-wise to obtain a scalar expression. However, I would welcome any input on how to formulate the problem in proper tensor notation. I suspect that the way I formulated the problem for scalar $\beta(\mathbf x)$ is misleading, and that in writing the tensor version this way I am leaning too heavily on intuition from linear algebra, which probably isn't helpful here.

This post gave me some ideas, but it seems to be a different problem.

Best Answer

The most straightforward way to derive results like this is to work in index notation and then interpret the resulting terms. To that end, notice that (Einstein summation assumed)

$$ \nabla\cdot T\nabla\phi = \partial^i\left(T_{ij}\partial^j\phi\right) = \left(\partial^iT_{ij}\right)\partial^j\phi + T_{ij}\partial^i\partial^j\phi = \left(\nabla\cdot T\right)\cdot\nabla\phi + T:\nabla^2\phi, $$

where I have used the convention that the divergence acts on the first index of the tensor $T$, and $\nabla^2\phi$ denotes the Hessian of $\phi$. The last term, $T : \nabla^2\phi = T_{ij}\,\partial^i\partial^j\phi$, is the double contraction (Frobenius inner product) of $T$ with the Hessian; in Cartesian coordinates the placement of raised and lowered indices is immaterial.
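To make the result concrete, here is a minimal SymPy sketch in two dimensions that checks $\nabla\cdot(T\nabla\phi) = (\nabla\cdot T)\cdot\nabla\phi + T:\nabla^2\phi$, with the divergence of $T$ taken over the first index. The particular scalar field and (deliberately non-symmetric) tensor field are arbitrary illustrative assumptions:

```python
import sympy as sp

x, y = sp.symbols('x y')
xs = (x, y)

# Arbitrary smooth test fields (illustrative choices only);
# T is non-symmetric on purpose, to exercise the index convention.
phi = sp.sin(x * y) + x**2
T = sp.Matrix([[sp.exp(x), x * y],
               [sp.cos(y), x + y**2]])

grad_phi = sp.Matrix([sp.diff(phi, s) for s in xs])
hess_phi = sp.Matrix(2, 2, lambda i, j: sp.diff(phi, xs[i], xs[j]))

# Left-hand side: partial_i ( T_ij partial_j phi )
v = T * grad_phi
lhs = sum(sp.diff(v[i], xs[i]) for i in range(2))

# (div T)_j = partial_i T_ij  -- divergence over the FIRST index
div_T = sp.Matrix([sum(sp.diff(T[i, j], xs[i]) for i in range(2))
                   for j in range(2)])

# Right-hand side: (div T) . grad phi + T : Hessian(phi)
rhs = div_T.dot(grad_phi) + sum(T[i, j] * hess_phi[i, j]
                                for i in range(2) for j in range(2))

print(sp.simplify(lhs - rhs))  # prints 0, confirming the product rule
```

Note that if the divergence were instead taken over the second index, $(\nabla\cdot T)_i = \partial^j T_{ij}$, the identity would hold with $T$ replaced by its transpose; the two conventions coincide exactly when $T$ is symmetric, which is the common case for diffusion-type tensors.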