Finding $\det'(A)$ for $\det : \mathbb{R}^{n^2} \rightarrow \mathbb{R}$ is easy if $A$ is invertible. Observe that $f(x) = \det(xI + B)$ is the characteristic polynomial of $-B$. Hence if $f(x) = x^n + ax^{n-1} + \cdots$, then
$-a$ is the sum of the roots, i.e. $-a = \operatorname{tr}(-B)$, hence $a = \operatorname{tr}(B)$. It follows that for any matrix norm $\|\cdot\|$
$$ f(1) = \det(I + B) = 1 + \operatorname{tr}(B) + O(\|B\|^2). $$
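This first-order expansion is easy to check numerically. The following sketch (using NumPy, with an arbitrary random matrix) shows the error shrinking quadratically as $B$ is scaled down:

```python
# Numerical check (a sketch): det(I + B) = 1 + tr(B) + O(||B||^2),
# so the residual should shrink like eps^2 as B is scaled by eps.
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))

for eps in [1e-1, 1e-2, 1e-3]:
    error = np.linalg.det(np.eye(3) + eps * B) - (1 + np.trace(eps * B))
    print(eps, error)  # residual decreases roughly like eps^2
```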
So if $A$ is invertible, then $\det(A+H) = \det(A)\det(I + A^{-1}H)$, and applying the above relation with $B = A^{-1}H$ gives
$$ \det(A+H) = \det(A) + \det(A)\operatorname{tr}(A^{-1}H) + O(\|H\|^2). $$
Hence $\det'(A)(H) = \det(A)\operatorname{tr}(A^{-1}H)$.
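As a sanity check, one can compare this formula against a finite-difference quotient. A minimal sketch (the shift by $4I$ is just an arbitrary way to keep $A$ safely invertible):

```python
# Compare the finite-difference quotient (det(A + tH) - det(A)) / t
# with the claimed derivative det(A) * tr(A^{-1} H).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # shifted so A is invertible
H = rng.standard_normal((4, 4))
t = 1e-6

finite_diff = (np.linalg.det(A + t * H) - np.linalg.det(A)) / t
# solve(A, H) computes A^{-1} H without forming the inverse explicitly
derivative = np.linalg.det(A) * np.trace(np.linalg.solve(A, H))

print(abs(finite_diff - derivative))  # small: the discrepancy is O(t)
```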
$\newcommand{\hom}{\operatorname{Hom}}$Let $A \subseteq U=V=\mathbb{R}^n$ be the domain of $f$. We have
\begin{align*}
f:& A \to \mathbb{R}\\
Df:& A \to \hom(V,\mathbb{R})\\
D^2f:=D(Df):& A \to \hom(U, \hom(V,\mathbb{R}))
\end{align*}
Note that the codomain of $Df$ is $\hom(V,\mathbb{R})$ (the set of all linear maps from $V$ to $\mathbb{R}$), which is not literally a Euclidean space. So, what do we mean by $D(Df)$? Since $\hom(V,\mathbb{R})$ is isomorphic to $\mathbb{R}^n$, the answer is that we first identify each linear functional $L_i: e_j\mapsto\delta_{ij}$ with the vector $e_i$. Hence the linear map $Df(x)$ is identified with the gradient vector $\nabla f(x)=\left(\frac{\partial f}{\partial x_1},\,\frac{\partial f}{\partial x_2},\,\ldots,\,\frac{\partial f}{\partial x_n}\right)^\top$ before taking a second derivative.
With this identification, the matrix of $D^2f(x)$, as a linear operator, is the Jacobian matrix of $\nabla f$, i.e. $J(\nabla f)(x)=\left(\frac{\partial^2 f}{\partial x_j \partial x_i}\right)$. Therefore $D^2f(x)(u)$ is represented by the vector $\left(\frac{\partial^2 f}{\partial x_j \partial x_i}\right)u$ and
$$
\left(D^2f(x)(u)\right)(v) = \left[\left(\frac{\partial^2 f}{\partial x_j \partial x_i}\right)u\right]^\top v = u^\top \left(\frac{\partial^2 f}{\partial x_i \partial x_j}\right)v.\tag{1}
$$
Technically, $D^2f(x)$ is not a bilinear form but a linear map that sends vectors to linear functionals. Yet the map $B_x:U\times V\to\mathbb{R}$ given by $B_x(u,v)=\left(D^2f(x)(u)\right)(v)$ is bilinear, so we can identify $D^2f(x)$ with the bilinear form $B_x$. When we speak of "the matrix of $D^2f(x)$", we really mean the matrix of $B_x$ ($D^2f(x)$ itself is not a function from $U\times V$ to $\mathbb{R}$ in the first place).
By convention, if $\mathbf{u},\mathbf{v}$ and $\mathbf{B}$ represent respectively two vectors $u,v$ and a bilinear form $b(u,v)$ with respect to some basis, then $b(u,v)=\mathbf{u}^\top \mathbf{B}\mathbf{v}$. Therefore, from $(1)$, we see that the matrix of $B_x$ with respect to the standard basis is $\left(\frac{\partial^2 f}{\partial x_i \partial x_j}\right)$.
The determinant is the product of the eigenvalues. In two dimensions, a positive determinant means the two eigenvalues of the Hessian have the same (nonzero) sign, so near the point $f$ is either concave or convex, and if the first derivative vanishes the point is an extremum. In higher dimensions there is no such conclusion: the eigenvalues can mix positive and negative signs and still produce a determinant of either sign. For instance, eigenvalues $-1,-1,1$ in $\mathbb{R}^3$ give a positive determinant at a saddle point.
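A concrete instance of this failure in three dimensions, sketched with a hypothetical function: at the critical point $0$ of $f(x,y,z) = -x^2 - y^2 + z^2$ the Hessian determinant is positive, yet the origin is a saddle, not an extremum.

```python
# Hessian of f(x, y, z) = -x^2 - y^2 + z^2 at the origin: diag(-2, -2, 2).
# Its determinant is positive, but the eigenvalues have mixed signs,
# so the critical point is a saddle.
import numpy as np

H = np.diag([-2.0, -2.0, 2.0])
eigenvalues = np.linalg.eigvalsh(H)

print(np.linalg.det(H))  # positive (= 8)
print(eigenvalues)       # mixed signs, so no extremum
```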