As for the last question: otherwise you don't have a critical point, and there is nothing to test. :-) Think about the one-variable case: would you look for a maximum or a minimum at $x_0$ if $f'(x_0) \neq 0$?
Your intuitive understanding of the Hessian points in the right direction. The point is: how do we "sum up" all the data $f_{xx}, f_{xy} = f_{yx}, f_{yy}$ into one single fact?
Well, think about the quadratic form that the Hessian defines. Namely,
$$
q(x,y) =
\begin{pmatrix}
x & y
\end{pmatrix}
\begin{pmatrix}
f_{xx} & f_{xy} \\
f_{yx} & f_{yy}
\end{pmatrix}
\begin{pmatrix}
x \\
y
\end{pmatrix}
=
f_{xx}x^2 + 2 f_{xy}xy + f_{yy}y^2 \ .
$$
If this quadratic form is positive-definite, that is, if $q(x,y) > 0$ for all $(x,y) \neq (0,0)$, then $f$ has a local minimum at the critical point (just as in the one-variable case, where $f''(x_0) > 0$ implies that $f$ has a local minimum at a critical point $x_0$).
It's more or less obvious that whether $q(x,y)$ is positive at every nonzero point doesn't depend on the coordinate system you're using, isn't it?
Right, then do the following experiment: you have a nice quadratic form like
$$
q(x,y) = x^2 + y^2
$$
which is not ashamed to show clearly that it is positive-definite, is it?
Then apply to it the following linear change of coordinates:
$$
\begin{pmatrix}
x \\ y
\end{pmatrix}
=
\begin{pmatrix}
1 & 1 \\
1 & 0
\end{pmatrix}
\begin{pmatrix}
\overline{x} \\
\overline{y}
\end{pmatrix}
$$
and you'll get
$$
q(\overline{x}, \overline{y}) = 2\overline{x}^2 + 2 \overline{x}\overline{y} + \overline{y}^2 \ .
$$
Is it now equally clear that $q(\overline{x}, \overline{y}) > 0$ for all $(\overline{x}, \overline{y}) \neq (0,0)$? Hardly: the positivity is still there, but it is disguised.
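One way to unmask it is to complete the square:
$$
2\overline{x}^2 + 2\overline{x}\overline{y} + \overline{y}^2 = \overline{x}^2 + (\overline{x} + \overline{y})^2 \ ,
$$
which is strictly positive unless $\overline{x} = \overline{x} + \overline{y} = 0$, that is, unless $(\overline{x}, \overline{y}) = (0,0)$. But completing the square is an ad hoc trick, and it would be nicer to have a systematic criterion.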
So we need some device that allows us to decide when a symmetric matrix like $H$ defines a positive-definite quadratic form $q(x,y)$, even when the fact is disguised because we are using the wrong coordinate system.
One such device is the eigenvalues of $H$: if all of them are positive, we know that, perhaps after a change of coordinate system, our $q(x,y)$ will have an associated matrix like
$$
\begin{pmatrix}
\lambda & 0 \\
0 & \mu
\end{pmatrix}
$$
with $\lambda, \mu > 0$. Hence, in some coordinate system (and hence, in all of them), our $q > 0$.
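If you want to check this numerically, here is a minimal sketch (assuming numpy is available; the matrix below is the one associated with the disguised form $q(\overline{x}, \overline{y})$ from the experiment above):
```python
import numpy as np

# Matrix of the disguised form q(xbar, ybar) = 2*xbar^2 + 2*xbar*ybar + ybar^2;
# the off-diagonal entries are half the coefficient of the cross term.
H = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# eigvalsh is meant for symmetric matrices and returns
# real eigenvalues in ascending order.
eigenvalues = np.linalg.eigvalsh(H)
print(eigenvalues)  # ~ [0.382, 2.618], i.e. (3 - sqrt(5))/2 and (3 + sqrt(5))/2

# Both are positive, so the form is positive-definite even though
# that was not obvious from the coefficients.
assert np.all(eigenvalues > 0)
```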
The proof of the second derivative test at a critical point ($Df_a = 0$) runs as follows: for a given sufficiently smooth map $f: \Bbb{R}^n \to \Bbb{R}$, and a point $a \in \Bbb{R}^n$, we write a second order Taylor expansion at the point $a$:
\begin{align}
f(a+h) - f(a) &= \dfrac{1}{2}(D^2f_a)(h,h) + o(\lVert h\rVert^2).
\end{align}
In other words, there is a "remainder term", which is a function $\rho$, such that $\lim_{h \to 0} \rho(h) = 0$, and
\begin{align}
f(a+h) - f(a) &= \dfrac{1}{2}(D^2f_a)(h,h) + \rho(h) \lVert h\rVert^2.
\end{align}
If the Hessian $D^2f_a$ is positive-definite, say, then there is a positive constant $\lambda$ such that $D^2f_a(h,h) \geq \lambda \lVert h\rVert^2$ for all $h \in \Bbb{R}^n$ (to see this, diagonalize $D^2f_a$ in an orthonormal basis of eigenvectors and take $\lambda$ to be its smallest eigenvalue, which is positive by definiteness). Hence,
\begin{align}
f(a+h) - f(a) &\geq \dfrac{\lambda}{2} \lVert h\rVert^2 + \rho(h) \lVert h\rVert^2 \\
&= \left( \dfrac{\lambda}{2} + \rho(h)\right) \lVert h\rVert^2.
\end{align}
Since $\rho(h) \to 0$ as $h \to 0$ and $\lambda > 0$, the term in parentheses is strictly positive whenever $h$ is sufficiently small in norm. Hence, for all sufficiently small $h$, we have $f(a+h) - f(a) \geq 0$, with equality if and only if $h = 0$. This proves that a positive-definite Hessian at a critical point $a$ implies that $f$ has a strict local minimum there.
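To make "sufficiently small" concrete: pick $\delta > 0$ such that $|\rho(h)| < \lambda/4$ whenever $0 < \lVert h\rVert < \delta$ (possible since $\rho(h) \to 0$); then
$$
f(a+h) - f(a) \geq \left( \dfrac{\lambda}{2} - \dfrac{\lambda}{4}\right) \lVert h\rVert^2 = \dfrac{\lambda}{4} \lVert h\rVert^2 > 0 \quad \text{for } 0 < \lVert h\rVert < \delta.
$$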
Of course, a similar proof holds for a negative-definite Hessian implying a strict local maximum.
Roughly speaking, the idea of the proof is that the local behaviour of $f(a+h) - f(a)$ is entirely determined by the behaviour of the Hessian, in the term $D^2f_a(h,h)$ (because the error term is "small"). So, to answer your questions,
The proof of the theorem above shows that we need to ensure that the entire term $D^2f_a(h,h)$ is positive (in fact, bounded below by a positive multiple of $\lVert h \rVert^2$), so that we can conclude that $f(a+h) - f(a) \geq 0$. But just because an $n \times n$ matrix has all positive entries, it doesn't mean it is positive-definite (Robert's answer gives an explicit counterexample).
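Robert's exact example isn't reproduced here, but a standard counterexample of the same kind is
$$
A =
\begin{pmatrix}
1 & 2 \\
2 & 1
\end{pmatrix} \ :
$$
all its entries are positive, yet the associated form is $q(x,y) = x^2 + 4xy + y^2$, and $q(1,-1) = 1 - 4 + 1 = -2 < 0$; indeed the eigenvalues of $A$ are $3$ and $-1$, so $A$ is indefinite.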
Hopefully the proof I gave above justifies why definiteness comes into play (it's to ensure you have a good lower/upper bound on the $D^2f_a(h,h)$ term).
A matrix is positive (respectively, negative) definite if and only if all its eigenvalues are strictly positive (respectively, strictly negative). If some eigenvalues are positive and some are negative, the matrix is indefinite. If this is the case for your Hessian, you have a saddle point (because the function increases along some directions while decreasing along others).
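To see the whole classification in action, here is a minimal sketch using sympy (the example function is chosen for illustration, because it has both a saddle and a minimum):
```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 - 3*x*y + y**3  # has two real critical points

# Critical points: solve grad f = 0, keeping only the real solutions.
grad = [sp.diff(f, v) for v in (x, y)]
solutions = sp.solve(grad, (x, y), dict=True)
real_points = [s for s in solutions if all(c.is_real for c in s.values())]

H = sp.hessian(f, (x, y))  # symbolic Hessian matrix
for pt in real_points:
    eigs = sorted(H.subs(pt).eigenvals())  # eigenvals() gives {eigenvalue: multiplicity}
    print(pt, eigs)

# {x: 0, y: 0} -> [-3, 3]: indefinite Hessian, so (0, 0) is a saddle point.
# {x: 1, y: 1} -> [3, 9]:  positive-definite, so (1, 1) is a strict local minimum.
```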
Best Answer
"Curvature of a function" is not a standard mathematical term as far as I know, though it may be something found in a calculus textbook. If you mean the curvature of the graph of a function $y=f(x)$, then it involves the first derivative as well as the second. One can make the first derivative go away by changing the system of coordinates so that one of the axis is tangent to the curve at the point of interest. Then indeed, the second derivative $f''$ gives the curvature of the graph.
For the surface $z=f(x,y)$, assuming $\nabla f$ vanishes at the point of interest, we get the principal curvatures (plural) as the eigenvalues of the Hessian. The determinant of the Hessian gives the Gaussian curvature, while the trace (that is, the Laplacian $\Delta f$) gives twice the mean curvature.
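For a concrete instance, take $f(x,y) = \tfrac{1}{2}(a x^2 + b y^2)$, so that $\nabla f(0,0) = 0$ and the Hessian at the origin is
$$
H =
\begin{pmatrix}
a & 0 \\
0 & b
\end{pmatrix} \ .
$$
There the principal curvatures are $a$ and $b$, the Gaussian curvature is $\det H = ab$, and the mean curvature is $\tfrac{1}{2}\operatorname{tr} H = \tfrac{1}{2}(a+b)$.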
The relation is more complicated at the points where $\nabla f$ is nonzero, and the entire concept of curvature gets much more complex when we move from surfaces to higher-dimensional manifolds. Then the curvature cannot be adequately measured by a single number. An accessible reference is Curvatures of Hypersurfaces.