Divergence of Dyadic Product Using Index Notation


I am trying to prove the identity for the divergence of a dyadic product using index notation, but I am not sure how to apply the product rule when the dot product is involved. I would like to show:

$\nabla\cdot (\vec{u} \vec{v}) = (\nabla \cdot \vec{u})\vec{v} +\vec{u}\cdot \nabla \vec{v} $

In index notation I have:

$\hat{e}_k \frac{\partial}{\partial x_k} \cdot \bigl( u_i \hat{e}_i \, v_j \hat{e}_j \bigr)$

But I am unsure how to apply the derivative to the dyadic product. I assume I need to use the product rule, but I am not quite sure how to do that in index notation.

Best Answer

I’m very surprised that this simple question remained unanswered for nine months.

Also, I don’t see why you need to prove it using “index notation”, thereby limiting yourself to orthonormal (“Cartesian”) bases or having to deal with differentiation of basis vectors. You really don’t need to expand the vectors at all; it suffices to expand just the nabla, ${\boldsymbol{\nabla} = \boldsymbol{r}^i \partial_i}$.

So first, here is how I prove it, simply and clearly enough, I’d say:

$$ \boldsymbol{\nabla} \cdot \bigl( \boldsymbol{a} \boldsymbol{b} \bigr) = \boldsymbol{r}^i \partial_i \cdot \bigl( \boldsymbol{a} \boldsymbol{b} \bigr) = \boldsymbol{r}^i \cdot \partial_i \bigl( \boldsymbol{a} \boldsymbol{b} \bigr) = \boldsymbol{r}^i \cdot \bigl( \partial_i \boldsymbol{a} \bigr) \boldsymbol{b} + \boldsymbol{r}^i \cdot \boldsymbol{a} \bigl( \partial_i \boldsymbol{b} \bigr) = $$ $$ = \bigl( \boldsymbol{r}^i \cdot \partial_i \boldsymbol{a} \bigr) \boldsymbol{b} + \boldsymbol{a} \cdot \boldsymbol{r}^i \bigl( \partial_i \boldsymbol{b} \bigr) = \bigl( \boldsymbol{r}^i \partial_i \cdot \boldsymbol{a} \bigr) \boldsymbol{b} + \boldsymbol{a} \cdot \bigl( \boldsymbol{r}^i \partial_i \boldsymbol{b} \bigr) = \left( \boldsymbol{\nabla} \cdot \boldsymbol{a} \right) \boldsymbol{b} + \boldsymbol{a} \cdot \bigl( \boldsymbol{\nabla} \boldsymbol{b} \bigr) $$
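(If you’d like to sanity-check this identity concretely, here is a minimal symbolic verification in three-dimensional Cartesian coordinates using Python with sympy; the component functions a1, …, b3 are just my placeholders for generic smooth fields, not anything from the question.)

```python
import sympy as sp

# Cartesian coordinates and two generic smooth vector fields a(x), b(x).
x = sp.symbols('x1 x2 x3')
a = sp.Matrix([sp.Function(f'a{i+1}')(*x) for i in range(3)])
b = sp.Matrix([sp.Function(f'b{i+1}')(*x) for i in range(3)])

# Left-hand side: the j-th component of div(ab) is d_i(a_i b_j), summed over i.
lhs = sp.Matrix([sum(sp.diff(a[i]*b[j], x[i]) for i in range(3))
                 for j in range(3)])

# Right-hand side: (div a) b_j + a_i d_i b_j.
div_a = sum(sp.diff(a[i], x[i]) for i in range(3))
rhs = sp.Matrix([div_a*b[j] + sum(a[i]*sp.diff(b[j], x[i]) for i in range(3))
                 for j in range(3)])

print(sp.simplify(lhs - rhs))  # the zero vector, so the identity holds
```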

All I use here, besides the expansion of nabla, is (i) the product rule, (ii) the commutativity of the dot product of any two vectors, and (iii) the fact that the dot product does not act on scalars (and the coordinate derivative $\partial_i \equiv \frac{\partial}{\partial q^i}$ is a scalar operator); the dot product acts only on vectors and tensors of higher complexity.

But if you really want the full expansion, here it is. For an orthonormal (“Cartesian”) basis, the basis vectors $\boldsymbol{e}_i$ are mutually perpendicular and their lengths are equal to one unit (of some chosen linear unit of measurement), that is, $\boldsymbol{e}_i \cdot \boldsymbol{e}_j = \delta_{ij}$. In addition, they are constant (they don’t vary from point to point). Hence the coordinate derivative $\partial_i \equiv \frac{\partial}{\partial x_i}$ of some vector $\boldsymbol{a} = a_j \boldsymbol{e}_j$ is

$$ \partial_i \boldsymbol{a} = \partial_i \bigl( a_j \boldsymbol{e}_j \bigr) = \bigl( \partial_i a_j \bigr) \boldsymbol{e}_j = \partial_i a_j \boldsymbol{e}_j $$
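(As a concrete illustration of that rule, here is a small sympy sketch; the field $\boldsymbol{a} = (x_1 x_2,\ x_2^2,\ x_3)$ is an arbitrary example of my own choosing.)

```python
import sympy as sp

# A concrete Cartesian vector field a = (x1*x2, x2**2, x3).
# Since the e_j are constant, d_i a is just (d_i a_j) e_j,
# i.e. the component-wise derivative.
x1, x2, x3 = sp.symbols('x1 x2 x3')
a = sp.Matrix([x1*x2, x2**2, x3])
print(a.diff(x1).T)  # Matrix([[x2, 0, 0]]): the components of d_1 a
```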

For non-orthonormal bases, there are two complementary sets of basis vectors: $\boldsymbol{r}_i \equiv \partial_i \boldsymbol{r}$ (where $\boldsymbol{r}(q^i[, t])$ is the position vector of a point) and $\boldsymbol{r}^i$, for which ${\boldsymbol{r}^i \cdot \boldsymbol{r}_j = \delta^i_j}$ (${\boldsymbol{r}_i \cdot \boldsymbol{r}^j = \delta_i^j}$). These basis vectors are generally not constant from point to point, so the coordinate derivative of a vector $\boldsymbol{a}$, expanded in such a basis as ${\boldsymbol{a} = a_i \boldsymbol{r}^i}$ or as ${\boldsymbol{a} = a^i \boldsymbol{r}_i}$, is

$$ \partial_i \boldsymbol{a} = \partial_i \bigl( a_j \boldsymbol{r}^j \bigr) = \bigl( \partial_i a_j \bigr) \boldsymbol{r}^j + a_j \bigl( \partial_i \boldsymbol{r}^j \bigr) $$ or $$ \partial_i \boldsymbol{a} = \partial_i \bigl( a^j \boldsymbol{r}_j \bigr) = \bigl( \partial_i a^j \bigr) \boldsymbol{r}_j + a^j \bigl( \partial_i \boldsymbol{r}_j \bigr) $$
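(To see that such basis vectors really do vary from point to point, here is a small sympy sketch in polar coordinates $q^1 = r$, $q^2 = \varphi$, an example of my own choosing.)

```python
import sympy as sp

# Polar coordinates q1 = r, q2 = phi; position vector in Cartesian components.
r, phi = sp.symbols('r phi', positive=True)
pos = sp.Matrix([r*sp.cos(phi), r*sp.sin(phi)])

# Covariant basis vectors r_i = d_i(pos).
r1 = pos.diff(r)    # (cos(phi), sin(phi))
r2 = pos.diff(phi)  # (-r*sin(phi), r*cos(phi))

# Their coordinate derivatives are generally nonzero, so the extra
# terms a^j (d_i r_j) in the product rule above do not vanish.
print(r2.diff(phi).T)  # Matrix([[-r*cos(phi), -r*sin(phi)]])
```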

I’m not going to give the full component expansion for such bases here; I hope you can work it out yourself, given enough time. Here is the expansion of the above proof when the vectors are measured in an orthonormal basis:

$$ \boldsymbol{\nabla} \cdot \bigl( \boldsymbol{a} \boldsymbol{b} \bigr) = \boldsymbol{e}_k \partial_k \cdot \bigl( \boldsymbol{a} \boldsymbol{b} \bigr) = \boldsymbol{e}_k \cdot \partial_k \bigl( a_i \boldsymbol{e}_i \, b_j \boldsymbol{e}_j \bigr) = \boldsymbol{e}_k \cdot \bigl( \partial_k a_i \boldsymbol{e}_i \bigr) b_j \boldsymbol{e}_j + \boldsymbol{e}_k \cdot a_i \boldsymbol{e}_i \bigl( \partial_k b_j \boldsymbol{e}_j \bigr) = $$ $$ = \bigl( \boldsymbol{e}_k \cdot \partial_k a_i \boldsymbol{e}_i \bigr) b_j \boldsymbol{e}_j + \boldsymbol{e}_i \cdot \boldsymbol{e}_k a_i \bigl( \partial_k b_j \boldsymbol{e}_j \bigr) = \bigl( \delta_{ki} \partial_k a_i \bigr) \boldsymbol{b} + \delta_{ik} a_i \bigl( \partial_k \boldsymbol{b} \bigr) = \left( \boldsymbol{\nabla} \cdot \boldsymbol{a} \right) \boldsymbol{b} + \boldsymbol{a} \cdot \bigl( \boldsymbol{\nabla} \boldsymbol{b} \bigr) $$
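(And if symbols aren’t convincing, here is a quick numerical spot check of the same expansion with numpy and central differences; the test fields and the evaluation point are arbitrary choices of mine.)

```python
import numpy as np

# Arbitrary smooth test fields a(x), b(x) on R^3.
def a(x): return np.array([np.sin(x[0]*x[1]), x[2]**2, np.cos(x[1])])
def b(x): return np.array([x[0]*x[2], np.exp(x[1]), x[0] + x[1]*x[2]])

def grad(f, x, h=1e-6):
    """Central-difference gradient: grad[i, j] = d_i f_j."""
    g = np.zeros((len(x), len(f(x))))
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2*h)
    return g

x0, h = np.array([0.3, -0.7, 1.2]), 1e-6

# Left-hand side: d_i (a_i b_j), differentiating the dyadic product directly.
ab = lambda x: np.outer(a(x), b(x))   # (ab)[i, j] = a_i b_j
lhs = np.zeros(3)
for i in range(3):
    e = np.zeros(3); e[i] = h
    lhs += (ab(x0 + e)[i] - ab(x0 - e)[i]) / (2*h)

# Right-hand side: (div a) b + a . (grad b).
rhs = np.trace(grad(a, x0)) * b(x0) + a(x0) @ grad(b, x0)

print(np.allclose(lhs, rhs, atol=1e-6))  # True, up to finite-difference error
```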

As you can see, there is not much new here, just many more letters.

Post scriptum: you’re also welcome to take a look at another answer of mine, Gradient of cross product of two vectors (where first is constant).
