Divergence of a Tensor Field

Tags: divergence-operator, multivariable-calculus, tensors, vector-analysis

Given a tensor field $\hat{\tau}$, I wish to calculate $\nabla\cdot\hat{\tau}$.

My first question: is this actually the divergence of the tensor field? Wikipedia seems to distinguish between $\operatorname{div}(\hat{\tau})$ and $\nabla\cdot\hat{\tau}$ in this article, but I don't quite understand why they differ for tensors when they coincide for vectors.

Now, I want to ask about the actual calculation given that:
$$ \hat{\tau} = -p\hat{I} + \eta(\nabla\mathbf{v}+(\nabla{\mathbf{v}})^T) $$
One way I could figure this out is to write out the matrices on the RHS and add them, expressing the result as a single matrix. (I am only considering the $2$D case, so this gives a $2\times2$ matrix, to which I can then apply the formula given in the Wikipedia article linked above.) This results in:
$$ \nabla\cdot\hat{\tau}=\nabla\cdot\begin{bmatrix} -p+2\eta u_x & \eta(u_z+w_x) \\ \eta(u_z+w_x) & -p+2\eta w_z \end{bmatrix} = \begin{bmatrix} (-p_x+2\eta u_{xx}) +(\eta(u_{zx}+w_{xx})) \\ (\eta(u_{zx}+w_{xx}))+(-p_z+2\eta w_{zz}) \end{bmatrix}$$
Is this correct? If not, where exactly did I go wrong?

Best Answer

Wikipedia's distinction between $\nabla\cdot$ and $\operatorname{div}$ is nonstandard. What they are addressing is that both expressions are ambiguous as to whether the divergence is applied to the row vectors or to the column vectors. When working with non-symmetric tensors you have to be very careful about this. Here, however, $\hat{\tau}$ is symmetric: its row vectors are the same as its column vectors, so the two divergences are equal.
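A small SymPy check (my own sketch, not part of the original answer) illustrates the point: the row-wise and column-wise divergences of a $2\times2$ tensor field differ in general, but agree when the field is symmetric.

```python
import sympy as sp

x, z = sp.symbols('x z')
a, b, c, d = [sp.Function(n)(x, z) for n in 'abcd']

def row_div(T):
    # (div T)_i = d T_{i1}/dx + d T_{i2}/dz  -- divergence of each row vector
    return sp.Matrix([sp.diff(T[i, 0], x) + sp.diff(T[i, 1], z) for i in range(2)])

def col_div(T):
    # (div T)_j = d T_{1j}/dx + d T_{2j}/dz  -- divergence of each column vector
    return sp.Matrix([sp.diff(T[0, j], x) + sp.diff(T[1, j], z) for j in range(2)])

T_general = sp.Matrix([[a, b], [c, d]])    # generic (non-symmetric) field
T_symmetric = sp.Matrix([[a, b], [b, d]])  # symmetric field, like tau here

# Differ in general (by the derivatives of b - c) ...
assert sp.simplify(row_div(T_general) - col_div(T_general)) != sp.zeros(2, 1)
# ... but coincide in the symmetric case.
assert sp.simplify(row_div(T_symmetric) - col_div(T_symmetric)) == sp.zeros(2, 1)
```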

As for your calculation, I think something's gone awry. You have $\nabla \cdot (p\mathbf{I}) = \nabla p$ correct, but not $\nabla\cdot[\nabla \mathbf{v} + (\nabla \mathbf{v})^T] = \nabla^2 \mathbf{v} + \nabla(\nabla\cdot\mathbf{v}) = (2\partial_{xx}u+ \partial_{zz} u +\partial_{xz} w, \partial_{xx}w + 2\partial_{zz}w + \partial_{xz}u) $. I believe you differentiated the top row by $x$ and the bottom row by $z$, then added the columns together. You should have instead differentiated the first column by $x$ and the second column by $z$.
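To make the correct result concrete, here is a SymPy verification (my sketch, with column-wise divergence as described above) that differentiating the first column by $x$ and the second column by $z$ recovers the identity $\nabla\cdot[\nabla\mathbf{v}+(\nabla\mathbf{v})^T]=\nabla^2\mathbf{v}+\nabla(\nabla\cdot\mathbf{v})$:

```python
import sympy as sp

x, z = sp.symbols('x z')
u = sp.Function('u')(x, z)  # velocity components, v = (u, w)
w = sp.Function('w')(x, z)

# Gradient with rows indexed by the derivative: (grad v)_{ij} = d v_j / d x_i
grad_v = sp.Matrix([[sp.diff(u, x), sp.diff(w, x)],
                    [sp.diff(u, z), sp.diff(w, z)]])
S = grad_v + grad_v.T  # the (symmetric) tensor grad v + (grad v)^T

# Divergence taken down the columns: (div S)_j = d S_{1j}/dx + d S_{2j}/dz
div_S = sp.Matrix([sp.diff(S[0, j], x) + sp.diff(S[1, j], z) for j in range(2)])

# Right-hand side of the identity: Laplacian(v) + grad(div v)
lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, z, 2)
div_v = sp.diff(u, x) + sp.diff(w, z)
rhs = sp.Matrix([lap(u) + sp.diff(div_v, x),
                 lap(w) + sp.diff(div_v, z)])

assert sp.simplify(div_S - rhs) == sp.zeros(2, 1)
```

The first component of `div_S` simplifies to $2u_{xx}+u_{zz}+w_{xz}$, matching the corrected answer above rather than the question's expression.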

In general, exercise a fair bit of caution when using $\nabla$ in matrix calculations, because matrix multiplication is not commutative. Keep track of what needs to be a row vector and what needs to be a column vector, and take transposes accordingly so that $\nabla$ can appear in front of its argument.