Since $A$ is symmetric, there is an orthogonal matrix $T$ such that $T^tAT$ is diagonal.
Because of $v^t(T^tAT)v=(Tv)^tA(Tv)$ and $\|Tv\|=\|v\|$, one can assume $A$ to be diagonal, and in this case the assertion is immediate, since $v^tAv = \sum_{i} v_i^2\lambda_i$ with the $\lambda_i$ being the diagonal entries (the eigenvalues).
Let me provide some more details:
First of all we show $\|Tv\|=\|v\|$: We have $T^tT=1$ (by definition), hence
$$\|Tv\|^2=(Tv)^t(Tv)=v^t(T^tT)v=v^tv=\|v\|^2$$
For the rest:
We have to consider the numbers $v^tAv$, where $v$ runs through all vectors of length $1$. Since $T$ is orthogonal, it is a bijection on the unit sphere by our above argument, hence $Tv$ runs through all vectors of length $1$, if $v$ does so.
So considering the numbers $v^tAv$ is the same as considering the numbers $(Tv)^tA(Tv)$. Now the computation $$(Tv)^tA(Tv) = v^t(T^tAT)v$$ shows that we have to consider the numbers $v^t(T^tAT)v$, where $v$ runs through all vectors of length $1$. Since $T^tAT$ is diagonal, we are in the starting situation, but the matrix is diagonal now. So we could have assumed from the start that $A$ is diagonal, hence $A = \mathrm{diag}(\lambda_1, \dotsc, \lambda_n)$.
But in this case, the result is easy, since we have $v^tAv = \sum_{i} v_i^2\lambda_i$. Maximizing or minimizing this expression with respect to $\sum_i v_i^2=1$ is an easy task: The minimum is the minimal $\lambda_i$ and the maximum is the maximal $\lambda_i$.
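A small numerical sketch of this argument (the matrix size, random seed, and sample count are arbitrary choices; NumPy's `eigh` plays the role of the orthogonal diagonalization):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random symmetric matrix A.
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2

# Eigendecomposition: A = T diag(lam) T^t with T orthogonal,
# eigenvalues lam returned in ascending order.
lam, T = np.linalg.eigh(A)

# Sample many unit vectors v and evaluate the quadratic form v^t A v.
V = rng.standard_normal((4, 100000))
V /= np.linalg.norm(V, axis=0)
q = np.einsum("ij,ik,kj->j", V, A, V)   # q[j] = v_j^t A v_j

# All sampled values lie between the smallest and largest eigenvalue,
# and the bounds are attained at the corresponding eigenvectors.
print(lam.min() <= q.min() and q.max() <= lam.max())
print(np.isclose(T[:, 0] @ A @ T[:, 0], lam.min()))
print(np.isclose(T[:, -1] @ A @ T[:, -1], lam.max()))
```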
You should really get used to such 'diagonalization arguments': they are the main reason why diagonalizing matrices is such an important tool.
If you have studied elementary linear algebra, you may have learned that for a symmetric matrix there is a choice of an orthonormal basis $e_1, \ldots, e_n$ such that the matrix, with respect to this basis, is diagonal with the eigenvalues as entries on the diagonal; after appropriately ordering the basis vectors, it can be assumed that the eigenvalues are ordered by size, i.e.
$$H = \begin{pmatrix}
\lambda_1 & 0 & \dots & 0 \\
0 & \lambda_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & \dots & 0 & \lambda_n
\end{pmatrix}$$
with $\lambda_1 \ge \lambda_2 \ge \dots\ge \lambda_n$.
If you then choose $d$ among the $e_i$, e.g. $d= e_i$, it should be clear that
$$e_i^THe_i = \lambda_i$$
If you choose $d$ as a linear combination of the $e_i$, say $d = \sum_i t_i e_i$ (with $\sum_i t_i^2 = 1$, since otherwise $d$ will not be normed), it is also easy to see that then
$$ d^THd = \sum_i \lambda_i t_i^2 $$
which is the weighted average your authors are referring to.
From the diagonal representation you can also easily derive the statement about the maximum and minimum value of $d \mapsto d^THd$.
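The weighted-average identity is easy to check numerically (a sketch with randomly chosen data; the matrix size and seed are arbitrary, and NumPy's `eigh` supplies the orthonormal eigenbasis):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random symmetric H with orthonormal eigenbasis e_1, ..., e_n.
B = rng.standard_normal((3, 3))
H = (B + B.T) / 2
lam, E = np.linalg.eigh(H)   # columns of E are the e_i

# Coefficients t_i with sum t_i^2 = 1, so d = sum_i t_i e_i is a unit vector.
t = rng.standard_normal(3)
t /= np.linalg.norm(t)
d = E @ t

# d^T H d equals the weighted average sum_i lambda_i t_i^2.
print(np.isclose(d @ H @ d, np.sum(lam * t**2)))
```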
Best Answer
Eigenvectors give only the directions of the principal axes (for central conics and quadrics), not the principal curvature directions. There is a paper on finding Gaussian and mean curvatures for implicit surfaces. The standard results for general quadrics are summarized as follows:
Using matrix representation,
\begin{align} 0 &= F(x,y,z) \\ & \equiv \frac{1}{2} \begin{pmatrix} x & y & z & 1 \end{pmatrix} \begin{pmatrix} a & h & g & i \\ h & b & f & j \\ g & f & c & k \\ i & j & k & d \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} \\ \\ \nabla F &= \begin{pmatrix} a & h & g & i \\ h & b & f & j \\ g & f & c & k \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} \\ \\ K &= -\frac{1}{|\nabla F|^4} \det \begin{pmatrix} a & h & g & i \\ h & b & f & j \\ g & f & c & k \\ i & j & k & d \end{pmatrix} \\ \\ H &= \frac{(\nabla F)^T \begin{pmatrix} a & h & g \\ h & b & f \\ g & f & c \end{pmatrix} (\nabla F)}{2|\nabla F|^3}- \frac{a+b+c}{2|\nabla F|} \\ \\ \kappa_{\pm} &= H \pm \sqrt{H^2-K} \end{align}
The principal directions (lines of curvature) are not easy to solve for; please refer to this paper for further reading.
For standard paraboloids, the lines of curvature are as follows:
$$\boldsymbol{x}= \begin{pmatrix} \frac{a}{c} \sqrt{a^2-b^2} \sinh u \cos v \\ \frac{b}{c} \sqrt{a^2-b^2} \cosh u \sin v \\ \frac{a^2-b^2}{4c} (\cosh 2u-\cosh 2v) \end{pmatrix}$$
$$\boldsymbol{x}= \begin{pmatrix} \frac{a}{c} \sqrt{a^2+b^2} \sinh u \cosh v \\ \frac{b}{c} \sqrt{a^2+b^2} \cosh u \sinh v \\ \frac{a^2+b^2}{4c} (\cosh 2u-\cosh 2v) \end{pmatrix}$$
Analogous to quadratic polynomials:
\begin{align} \mathbb{A}^{T} &= \mathbb{A} \\ f(\boldsymbol{x}) &= \boldsymbol{x}^{T} \mathbb{A} \, \boldsymbol{x}+ \boldsymbol{b}^{T} \boldsymbol{x}+c \\ &= (\boldsymbol{x}-\boldsymbol{h})^{T} \mathbb{A} (\boldsymbol{x}-\boldsymbol{h})+k \\ \boldsymbol{h} &= -\frac{1}{2} \, \mathbb{A}^{-1} \boldsymbol{b} \\ k &= c-\frac{1}{4} \, \boldsymbol{b}^{T} \mathbb{A}^{-1} \, \boldsymbol{b} \\ \nabla f(\boldsymbol{x}) &= 2\mathbb{A} \, \boldsymbol{x}+\boldsymbol{b} \\ \nabla^{2} f(\boldsymbol{x}) &= 2\operatorname{tr}(\mathbb{A}) \end{align}
Note that $\boldsymbol{h}$ is the centre of the central quadric (or conic), provided that $\det \mathbb{A} \ne 0$.
For paraboloids, $$\det \mathbb{A}=0$$
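The completed-square identities above are straightforward to verify numerically (a sketch with arbitrary random data; the shift added to $\mathbb{A}$ is just to keep it comfortably invertible):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random symmetric invertible A, vector b, constant c.
B = rng.standard_normal((3, 3))
A = (B + B.T) / 2 + 3 * np.eye(3)
b = rng.standard_normal(3)
c = rng.standard_normal()

# Centre h and value k from the completed-square form.
h = -0.5 * np.linalg.solve(A, b)
k = c - 0.25 * b @ np.linalg.solve(A, b)

# Check f(x) = x^T A x + b^T x + c = (x - h)^T A (x - h) + k at a random x,
# and that the gradient 2 A x + b vanishes at the centre x = h.
x = rng.standard_normal(3)
f1 = x @ A @ x + b @ x + c
f2 = (x - h) @ A @ (x - h) + k
print(np.isclose(f1, f2), np.allclose(2 * A @ h + b, 0))
```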
See also another answer of mine on using the Hessian matrix to find the curvature of a superellipse.