I'm having trouble proving $$\nabla\times(\nabla f)=0$$ using index notation. I have started with: $$(\hat{e_i}\partial_i)\times(\hat{e_j}\partial_j f)=\partial_i\partial_jf(\hat{e_i}\times\hat{e_j})=\epsilon_{ijk}(\partial_i\partial_j f)\hat{e_k}$$
I know I have to use the fact that $\partial_i\partial_j=\partial_j\partial_i$ but I'm not sure how to proceed.
[Math] Proving the curl of a gradient is zero
Tags: index-notation, vector-analysis, vectors
Related Solutions
I'm very surprised that this simple question has remained unanswered for nine months.
Also, I don't see why you need to prove it using "index notation", thereby limiting yourself to orthonormal ("Cartesian") bases only, or having to deal with differentiation of basis vectors. Really, you don't need to expand the vectors at all; expanding just the nabla, ${\boldsymbol{\nabla} = \boldsymbol{r}^i \partial_i}$, is enough.
So first, here's how I prove it, simply and, I'd say, clearly enough:
$$ \boldsymbol{\nabla} \cdot \bigl( \boldsymbol{a} \boldsymbol{b} \bigr) = \boldsymbol{r}^i \partial_i \cdot \bigl( \boldsymbol{a} \boldsymbol{b} \bigr) = \boldsymbol{r}^i \cdot \partial_i \bigl( \boldsymbol{a} \boldsymbol{b} \bigr) = \boldsymbol{r}^i \cdot \bigl( \partial_i \boldsymbol{a} \bigr) \boldsymbol{b} + \boldsymbol{r}^i \cdot \boldsymbol{a} \bigl( \partial_i \boldsymbol{b} \bigr) = $$ $$ = \bigl( \boldsymbol{r}^i \cdot \partial_i \boldsymbol{a} \bigr) \boldsymbol{b} + \boldsymbol{a} \cdot \boldsymbol{r}^i \bigl( \partial_i \boldsymbol{b} \bigr) = \bigl( \boldsymbol{r}^i \partial_i \cdot \boldsymbol{a} \bigr) \boldsymbol{b} + \boldsymbol{a} \cdot \bigl( \boldsymbol{r}^i \partial_i \boldsymbol{b} \bigr) = \left( \boldsymbol{\nabla} \cdot \boldsymbol{a} \right) \boldsymbol{b} + \boldsymbol{a} \cdot \bigl( \boldsymbol{\nabla} \boldsymbol{b} \bigr) $$
Besides the expansion of nabla, all I use here is (i) the product rule, (ii) the commutativity of the dot product of any two vectors, and (iii) the fact that the dot product doesn't affect scalars (and the coordinate derivative $\partial_i \equiv \frac{\partial}{\partial q^i}$ is a scalar operator); the dot product acts only on vectors and tensors of higher rank.
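The identity above can be checked componentwise in Cartesian coordinates. Here is a quick verification sketch using `sympy`, with two arbitrary (hypothetical) smooth vector fields chosen purely for illustration; the $j$-th component of $\boldsymbol{\nabla}\cdot(\boldsymbol{a}\boldsymbol{b})$ is $\partial_i(a_i b_j)$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)

# Arbitrary smooth vector fields a and b (illustrative choices only)
a = [x*y, sp.sin(z), x + z**2]
b = [sp.exp(x), y*z, sp.cos(x*y)]

# j-th component of div(a b): sum_i d_i(a_i * b_j)
lhs = [sum(sp.diff(a[i]*b[j], X[i]) for i in range(3)) for j in range(3)]

# j-th component of (div a) b + a . (grad b): (d_i a_i) b_j + a_i d_i b_j
div_a = sum(sp.diff(a[i], X[i]) for i in range(3))
rhs = [div_a*b[j] + sum(a[i]*sp.diff(b[j], X[i]) for i in range(3))
       for j in range(3)]

# The two sides agree component by component (this is just the product rule)
assert all(sp.simplify(lhs[j] - rhs[j]) == 0 for j in range(3))
print("identity holds componentwise")
```

Componentwise the whole identity reduces to the scalar product rule $\partial_i(a_i b_j) = (\partial_i a_i)\,b_j + a_i\,\partial_i b_j$, which is the point of the coordinate-free proof above.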
But if you want the full expansion so badly, here it is. For an orthonormal ("Cartesian") basis, the basis vectors $\boldsymbol{e}_i$ are mutually perpendicular and of unit length (in some chosen linear unit of measurement), that is, $\boldsymbol{e}_i \cdot \boldsymbol{e}_j = \delta_{ij}$. In addition, they are constant (they don't vary from point to point). Hence the coordinate derivative $\partial_i \equiv \frac{\partial}{\partial x_i}$ of a vector $\boldsymbol{a} = a_i \boldsymbol{e}_i$ is
$$ \partial_i \boldsymbol{a} = \partial_i \bigl( a_j \boldsymbol{e}_j \bigr) = \bigl( \partial_i a_j \bigr) \boldsymbol{e}_j = \partial_i a_j \boldsymbol{e}_j $$
For non-orthonormal bases, there are two complementary sets of basis vectors: $\boldsymbol{r}_i \equiv \partial_i \boldsymbol{r}$ (where $\boldsymbol{r}(q^i[, t])$ is the position vector of a point) and $\boldsymbol{r}^i$, for which ${\boldsymbol{r}^i \cdot \boldsymbol{r}_j = \delta^i_j}$ (equivalently ${\boldsymbol{r}_i \cdot \boldsymbol{r}^j = \delta_i^j}$). They are not constant from point to point, and the coordinate derivative of a vector $\boldsymbol{a}$ measured in such a basis, as ${\boldsymbol{a} = a_i \boldsymbol{r}^i}$ or as ${\boldsymbol{a} = a^i \boldsymbol{r}_i}$, is
$$ \partial_i \boldsymbol{a} = \partial_i \bigl( a_j \boldsymbol{r}^j \bigr) = \bigl( \partial_i a_j \bigr) \boldsymbol{r}^j + a_j \bigl( \partial_i \boldsymbol{r}^j \bigr) $$ or $$ \partial_i \boldsymbol{a} = \partial_i \bigl( a^j \boldsymbol{r}_j \bigr) = \bigl( \partial_i a^j \bigr) \boldsymbol{r}_j + a^j \bigl( \partial_i \boldsymbol{r}_j \bigr) $$
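The two basis sets $\boldsymbol{r}_i$ and $\boldsymbol{r}^i$ are easy to compute concretely. Here is a sketch for 2-D polar coordinates (an assumed example, not from the answer above): the columns of the Jacobian of the position vector give $\boldsymbol{r}_i$, the rows of its inverse give $\boldsymbol{r}^i$, and the duality relation $\boldsymbol{r}^i \cdot \boldsymbol{r}_j = \delta^i_j$ follows:

```python
import sympy as sp

r_, th = sp.symbols('r theta', positive=True)

# Position vector of a point in polar coordinates (q^1 = r, q^2 = theta)
pos = sp.Matrix([r_*sp.cos(th), r_*sp.sin(th)])

# Covariant basis r_i = d(pos)/d(q^i): columns of the Jacobian
J = pos.jacobian([r_, th])

# Contravariant (dual) basis r^i: rows of the inverse Jacobian
Jinv = J.inv()

# Duality check: (Jinv * J)[i, j] = r^i . r_j = delta^i_j
assert (Jinv * J).applyfunc(sp.simplify) == sp.eye(2)

# Unlike Cartesian e_i, these basis vectors vary from point to point
assert J.diff(th) != sp.zeros(2, 2)
print("r^i . r_j = delta^i_j verified; basis is not constant")
```

The nonzero $\partial_i \boldsymbol{r}_j$ terms are exactly why the extra pieces appear in the product-rule expansions above.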
I'm not going to give the full component expansion for such bases here; I hope you can work it out yourself, given enough time. Here is the expansion of the above proof when you measure vectors in an orthonormal basis:
$$ \boldsymbol{\nabla} \cdot \bigl( \boldsymbol{a} \boldsymbol{b} \bigr) = \boldsymbol{e}_k \partial_k \cdot \bigl( \boldsymbol{a} \boldsymbol{b} \bigr) = \boldsymbol{e}_k \cdot \partial_k \bigl( a_i \boldsymbol{e}_i \, b_j \boldsymbol{e}_j \bigr) = \boldsymbol{e}_k \cdot \bigl( \partial_k a_i \boldsymbol{e}_i \bigr) b_j \boldsymbol{e}_j + \boldsymbol{e}_k \cdot a_i \boldsymbol{e}_i \bigl( \partial_k b_j \boldsymbol{e}_j \bigr) = $$ $$ = \bigl( \boldsymbol{e}_k \cdot \partial_k a_i \boldsymbol{e}_i \bigr) b_j \boldsymbol{e}_j + \boldsymbol{e}_i \cdot \boldsymbol{e}_k a_i \bigl( \partial_k b_j \boldsymbol{e}_j \bigr) = \bigl( \delta_{ki} \partial_k a_i \bigr) \boldsymbol{b} + \delta_{ik} a_i \bigl( \partial_k \boldsymbol{b} \bigr) = \left( \boldsymbol{\nabla} \cdot \boldsymbol{a} \right) \boldsymbol{b} + \boldsymbol{a} \cdot \bigl( \boldsymbol{\nabla} \boldsymbol{b} \bigr) $$
As you can see, there's not much new here, just many more letters.
Post scriptum: you're also welcome to take a look at another answer of mine, Gradient of cross product of two vectors (where the first is constant).
You've rewritten $\partial_i(\partial_jF_k)$ as $F_k\partial_i\partial_j+\partial_j\partial_iF_k$. That would work if $\partial_j$ were an ordinary quantity you could simply multiply by $F_k$, but of course it isn't. Indeed, your strategy requires acknowledging that $\partial_i$ is a differential operator, obeying the familiar product rule.
The correct treatment needs no product rule at all. As @DavideMorgante's answer noted, you can use the same symmetric-indices argument as in the proof of $A\cdot A\times F=0$ for a "normal" (i.e. non-operator-valued) vector $A$, since $\partial_i\partial_j=\partial_j\partial_i$ is just as true as $A_iA_j=A_jA_i$.
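The analogy with an ordinary vector $A$ is easy to check numerically. A minimal sketch with `numpy`, sampling random vectors, confirms that $A\cdot(A\times F) = \epsilon_{ijk}A_iA_jF_k$ vanishes, by the same cancellation of an antisymmetric symbol against a symmetric factor:

```python
import numpy as np

rng = np.random.default_rng(0)

# eps_ijk A_i A_j sums an antisymmetric symbol against the symmetric
# product A_i A_j, so A . (A x F) = 0 -- the same cancellation that
# kills eps_ijk d_i d_j f, since d_i d_j = d_j d_i.
for _ in range(100):
    A = rng.standard_normal(3)
    F = rng.standard_normal(3)
    assert abs(np.dot(A, np.cross(A, F))) < 1e-12
print("A . (A x F) = 0 for all samples")
```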
Best Answer
The point is that the quantity $M_{ijk}=\epsilon_{ijk}\partial_i\partial_j$ (read with $i,j$ fixed, no sum yet) is antisymmetric in the indices $ij$: since mixed partials commute, $$M_{jik}=\epsilon_{jik}\partial_j\partial_i=-\epsilon_{ijk}\partial_i\partial_j=-M_{ijk}$$
So when you sum over $i$ and $j$, you get zero, because the term $M_{ijk}$ cancels the term $M_{jik}$ for every triple $ijk$.
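This cancellation can be verified symbolically. A short `sympy` sketch, applied to an arbitrary (unspecified) smooth scalar field $f$, builds each component of $\epsilon_{ijk}\partial_i\partial_j f$ explicitly and confirms it vanishes:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
f = sp.Function('f')(x, y, z)   # arbitrary smooth scalar field

# k-th component of curl(grad f): sum over i, j of eps_ijk d_i d_j f.
# sympy treats mixed partials as equal, so the pairwise cancellation
# M_ijk + M_jik = 0 happens automatically.
curl_grad = [sp.simplify(sum(sp.LeviCivita(i, j, k) * sp.diff(f, X[i], X[j])
                             for i in range(3) for j in range(3)))
             for k in range(3)]

assert curl_grad == [0, 0, 0]
print("curl(grad f) = 0")
```

Each component is a sum like $f_{xy} - f_{yx}$, which is exactly the $M_{ijk}$ versus $M_{jik}$ cancellation written out.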