Induction Hypothesis over $k$: Given a smooth vector field $v:\mathbb R^n \to \mathbb R^n$ such that $\text{div}_{k} v := \sum_{i=1}^k \frac{\partial v_i}{\partial x_i} = 0$, there exists a smooth skew-symmetric matrix field $a:\mathbb R^n \to \mathbb R^{n\times n}$ such that $v_j = \sum_{i=1}^k \frac\partial{\partial x_i} a_{ij}$ for $1 \le j \le k$.
The case $k=0$ is trivial, since the conclusion asserts nothing.
Suppose it is true for $k-1 \ge 0$. We prove it for $k$.
Let
$$ f_1(x_1,\dots,x_n) = \int_0^{x_1} \frac{\partial}{\partial x_k} v_k(\xi,x_2,\dots,x_n) \, d\xi .$$
Then, since $\frac{\partial}{\partial x_1} f_1 = \frac{\partial}{\partial x_k} v_k$ and $\text{div}_k v = 0$,
$$ \frac{\partial}{\partial x_1}(v_1+f_1) + \frac{\partial}{\partial x_2}v_2 + \dots + \frac{\partial}{\partial x_{k-1}}v_{k-1} = 0 .$$
By the inductive hypothesis, applied to the field obtained from $v$ by replacing $v_1$ with $v_1+f_1$, there is a skew-symmetric matrix $a$ such that
$$ v_1 + f_1 = \sum_{i=1}^{k-1} \frac{\partial}{\partial x_i} a_{i1} $$
$$ v_j = \sum_{i=1}^{k-1} \frac{\partial}{\partial x_i} a_{ij} \quad \text{ for $2 \le j \le k-1$}$$
We define
$$ f_2(x_1,\dots,x_n) = -\int_0^{x_1} v_k(\xi,x_2,\dots,x_{k-1},0,x_{k+1},\dots,x_n) \, d\xi - \int_0^{x_k} f_1(x_1,\dots,x_{k-1},\xi,x_{k+1},\dots,x_n) \, d\xi .$$
Then, by the fundamental theorem of calculus,
$$ \frac\partial{\partial x_1} f_2 = -v_k(x_1,\dots,x_{k-1},0,x_{k+1},\dots,x_n) - \int_0^{x_k} \frac\partial{\partial x_k} v_k(x_1,\dots,x_{k-1},\xi,x_{k+1},\dots,x_n) \, d\xi = -v_k $$
and
$$ \frac\partial{\partial x_k} f_2 = - f_1, $$
since the first integral in the definition of $f_2$ does not depend on $x_k$.
Now extend the matrix $a$ by setting $a_{k1} = -a_{1k} = f_2$, and $a_{kj}=a_{jk} = 0$ for $2 \le j \le k$.
Then
$$ \sum_{i=1}^{k} \frac{\partial}{\partial x_i} a_{i1} = v_1 + f_1 + \frac\partial{\partial x_k} f_2 = v_1, $$
$$ \sum_{i=1}^{k} \frac{\partial}{\partial x_i} a_{ij} = v_j \quad \text{ for $2 \le j \le k-1$, since $a_{kj} = 0$}, $$
and
$$ \sum_{i=1}^{k} \frac{\partial}{\partial x_i} a_{ik} = - \frac\partial{\partial x_1} f_2 = v_k. $$
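As a quick machine check of the inductive step, here is a minimal SymPy sketch, assuming $k = 3$ and an arbitrarily chosen smooth $v_3$ (the formula for `v3` below is just an example); it verifies the two identities $\partial_{x_1} f_2 = -v_3$ and $\partial_{x_3} f_2 = -f_1$ used above:

```python
# Minimal SymPy sanity check of the inductive step, assuming k = 3 and an
# arbitrarily chosen smooth v_3 (this particular formula is just an example).
import sympy as sp

x1, x2, x3, xi = sp.symbols('x1 x2 x3 xi')

v3 = x1**2 * x3 + sp.sin(x2) * x3**2 + x1      # example smooth v_k with k = 3

# f_1(x) = int_0^{x1} d/dx3 v_3(xi, x2, x3) dxi
f1 = sp.integrate(sp.diff(v3, x3).subs(x1, xi), (xi, 0, x1))

# f_2(x) = -int_0^{x1} v_3(xi, x2, 0) dxi - int_0^{x3} f_1(x1, x2, xi) dxi
f2 = (-sp.integrate(v3.subs({x1: xi, x3: 0}), (xi, 0, x1))
      - sp.integrate(f1.subs(x3, xi), (xi, 0, x3)))

print(sp.simplify(sp.diff(f2, x1) + v3))       # 0, i.e. d(f_2)/dx_1 = -v_3
print(sp.simplify(sp.diff(f2, x3) + f1))       # 0, i.e. d(f_2)/dx_3 = -f_1
```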
You have the first derivative, $D\det(X)(A) = (\det{X})\operatorname{tr}{(X^{-1}A)} $, so you want to find $D^2\det(X)(A,B) = D(D\det(X)(A))(B)$. You can do this using the multivariable product and chain rules, provided you know the first derivative of $X \mapsto X^{-1}$. We have
$$ (X^{-1}+\varepsilon K)(X+\varepsilon B) = I+\varepsilon (KX+X^{-1}B)+O(\varepsilon^2), $$
which suggests we put $ K=-X^{-1}BX^{-1} $. Then we have
$$ (X+\varepsilon B)\left( (X+\varepsilon B)^{-1}-(X^{-1} - \varepsilon X^{-1}BX^{-1}) \right) = I -(I+\varepsilon BX^{-1})+\varepsilon BX^{-1}+O(\varepsilon^2) = O(\varepsilon^2), $$
and since $(X+\varepsilon B)^{-1}$ stays bounded as $\varepsilon \to 0$, it follows that $D(X^{-1})(B)=-X^{-1}BX^{-1}$.
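This formula is easy to test numerically. Below is a small NumPy check, with arbitrary randomly generated test matrices, comparing a finite difference of $X \mapsto X^{-1}$ against $-X^{-1}BX^{-1}$:

```python
# Minimal NumPy check of D(X^{-1})(B) = -X^{-1} B X^{-1}: compare a finite
# difference of the inverse with the claimed derivative. The matrices are
# arbitrary random test data, not part of the argument.
import numpy as np

rng = np.random.default_rng(0)
n, eps = 4, 1e-6
X = n * np.eye(n) + rng.standard_normal((n, n))   # well-conditioned test matrix
B = rng.standard_normal((n, n))

Xinv = np.linalg.inv(X)
fd = (np.linalg.inv(X + eps * B) - Xinv) / eps    # finite-difference quotient
exact = -Xinv @ B @ Xinv

print(np.max(np.abs(fd - exact)))                 # ~1e-6, i.e. O(eps)
```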
Then the product rule gives
$$ D((\det{X})\operatorname{tr}{(X^{-1}A)})(B) = D(\det{X})(B) \operatorname{tr}{(X^{-1}A)} + (\det{X})D(\operatorname{tr}{(X^{-1}A)})(B), $$
and since the trace is linear, the derivative passes inside it, and we find
$$ D((\det{X})\operatorname{tr}{(X^{-1}A)})(B) = (\det{X}) \big( \operatorname{tr}{(X^{-1}B)} \operatorname{tr}{(X^{-1}A)} + \operatorname{tr}{(D(X^{-1})(B)\,A)} \big) \\
= (\det{X}) \big( \operatorname{tr}{(X^{-1}B)} \operatorname{tr}{(X^{-1}A)} - \operatorname{tr}{(X^{-1}BX^{-1}A)} \big). $$
This is symmetric in $A$ and $B$ because the trace is cyclic: $\operatorname{tr}(X^{-1}BX^{-1}A) = \operatorname{tr}(X^{-1}AX^{-1}B)$, so $D^2\det(X)(A,B) = D^2\det(X)(B,A)$, as it must be.
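The second-derivative formula can be checked the same way, by differencing the first derivative $D\det(X)(A)=(\det X)\operatorname{tr}(X^{-1}A)$ in the direction $B$; the matrices below are again arbitrary test data:

```python
# Finite-difference check of the second derivative: difference
# D det(X)(A) = det(X) tr(X^{-1} A) in the direction B and compare with
# det(X) ( tr(X^{-1}B) tr(X^{-1}A) - tr(X^{-1} B X^{-1} A) ).
import numpy as np

rng = np.random.default_rng(1)
n, eps = 4, 1e-6
X = n * np.eye(n) + rng.standard_normal((n, n))
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

def ddet(X, A):
    """First derivative of det at X in direction A."""
    return np.linalg.det(X) * np.trace(np.linalg.inv(X) @ A)

fd = (ddet(X + eps * B, A) - ddet(X, A)) / eps
Xinv = np.linalg.inv(X)
exact = np.linalg.det(X) * (np.trace(Xinv @ A) * np.trace(Xinv @ B)
                            - np.trace(Xinv @ B @ Xinv @ A))

print(abs(fd - exact) / abs(exact))               # small relative error, O(eps)
```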
As already answered by SAUVIK, if $f:E\longrightarrow F$, then $$Df(x)\in L(E,F)\text{ and } Df:E\longrightarrow L(E,F).$$ So, $$D^2 f:E\longrightarrow L(E,L(E,F)).$$ Now, the space $L(E,L(E,F))$ can be identified with the space of bilinear maps from $E\times E$ to $F$ via the isomorphism $$g\mapsto\hat g,\qquad\hat g(x,y) = (g(x))(y).$$ This identification extends to higher orders in the obvious way: $D^k f(x)$ may be viewed as a $k$-linear map $E^k \to F$.
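To see the identification in action, here is a small Python sketch, assuming the toy example $f(x)=\tfrac12 x^\top Hx$ on $E=\mathbb R^2$, $F=\mathbb R$ with an arbitrary symmetric $H$, so that $D^2 f(x)$ is the constant bilinear map $(u,v)\mapsto u^\top Hv$:

```python
# Toy illustration of the identification L(E, L(E, F)) ≅ {bilinear E × E → F},
# assuming f(x) = 0.5 x^T H x on E = R^2, F = R, so D^2 f(x) is the constant
# bilinear map (u, v) ↦ u^T H v. H is an arbitrary symmetric example.
import numpy as np

H = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def g(u):
    """D^2 f(x) viewed as an element of L(E, L(E, F)): u ↦ (v ↦ u^T H v)."""
    return lambda v: u @ H @ v

def g_hat(u, v):
    """The identified bilinear map: ĝ(u, v) = (g(u))(v)."""
    return g(u)(v)

u, v = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(g(u)(v), g_hat(u, v))   # same value either way (5.0)
```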