I think the problem is with how you have approached $\vec{x}^{T}H\vec{x} \ge 0$. In this inequality, you wish to determine whether the matrix $H$ is positive definite.
While doing so, the elements of $\vec{x}$ have to be independent of the elements of $H$: when you use this inequality to test the definiteness of the matrix, the entries of $H$ are constants and the entries of $\vec{x}$ are variables. What you have done is take $\vec{x}$ to have the variable $x$ in position $(1,1)$ and the variable $y$ in position $(2,1)$, even though the entries of $H$ are themselves written in terms of $x$ and $y$. If your vector $\vec{x}$ had used other variables, say $\vec{x} = (p,q)$, your answer would have been fine.
Compute the eigenvalues of the Hessian.
If all the eigenvalues are nonnegative, it is positive semidefinite.
If all the eigenvalues are positive, it is positive definite.
If all the eigenvalues are nonpositive, it is negative semidefinite.
If all the eigenvalues are negative, it is negative definite.
Otherwise, it is indefinite.
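The classification above can be sketched in code; this is an illustration, not part of the original answer. It assumes NumPy is available, and the function name `classify_definiteness` and the tolerance `tol` are my own choices:

```python
import numpy as np

def classify_definiteness(H, tol=1e-10):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    eigvals = np.linalg.eigvalsh(H)  # eigenvalues of a symmetric matrix
    if np.all(eigvals > tol):
        return "positive definite"
    if np.all(eigvals >= -tol):
        return "positive semidefinite"
    if np.all(eigvals < -tol):
        return "negative definite"
    if np.all(eigvals <= tol):
        return "negative semidefinite"
    return "indefinite"
```

The tolerance guards against floating-point round-off: a numerically computed eigenvalue of a semidefinite matrix may come out as a tiny negative number.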
Edit:
For that example, you have found $c=(0,0,0)$.
$$H(f(c))=\begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 0 \\ 1 & 0 & 0\end{bmatrix}$$
$$H(f(c))-\lambda I= \begin{bmatrix} -\lambda & 1 & 1 \\ 1 & -\lambda & 0 \\ 1 & 0 & -\lambda\end{bmatrix}$$
Expanding the determinant along the third row,
\begin{align}\det(H(f(c))-\lambda I)&= \det\left(\begin{bmatrix}1 & 1 \\ -\lambda & 0 \end{bmatrix} \right) -\lambda \det \left( \begin{bmatrix}-\lambda & 1 \\ 1& -\lambda \end{bmatrix} \right)
\\&=\lambda-\lambda(\lambda^2-1)
\\&=\lambda(2-\lambda^2)\end{align}
Hence the eigenvalues are $0$, $\sqrt{2}$, and $-\sqrt{2}$; since there are both positive and negative eigenvalues, the Hessian is indefinite.
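The hand computation for this particular Hessian can be checked numerically; a quick sketch, assuming NumPy (not part of the original answer):

```python
import numpy as np

# Hessian of f at the critical point c = (0, 0, 0)
H = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])

# Sorted eigenvalues; should be -sqrt(2), 0, sqrt(2)
eigvals = np.sort(np.linalg.eigvalsh(H))
```

Mixed signs among the eigenvalues confirm that the Hessian is indefinite at $c$.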
Best Answer
A convex function doesn't have to be twice differentiable; in fact, it doesn't have to be differentiable even once. For instance, $f(x)=|x|$ is not differentiable at the origin, and that's its minimum! We can, however, say this: the Hessian of a convex function must be positive semidefinite wherever it is defined.
Furthermore, a convex function doesn't have to have a minimum. Take, for instance, the function $f(x)=e^x$. It is bounded below, and it has an infimum; $\inf_x f(x) = 0$. But at no value of $x$ does it achieve this value, so it has no minimum. Note that it is differentiable everywhere, and its second derivative is strictly positive.
It's worse than this: a convex function, even one with a positive definite Hessian, need not even be bounded below. Take, for instance, the function $f:\mathbb{R}_{++}\rightarrow\mathbb{R}$, $f(x)=-\log x$. The second derivative $1/x^2$ is positive on the entire domain of the function, but $f$ is not bounded below, since $f(x)\to-\infty$ as $x\to\infty$.
What you can say is that any convex function that is bounded below has an infimum; but again, that alone doesn't guarantee a minimum can be attained. For that you need something more. One common sufficient, but not necessary, condition is strong convexity. The basic definition is $$f(tx+(1-t)y) \leq tf(x) + (1-t)f(y) - \tfrac{1}{2}m t(1-t)\|x-y\|^2 \quad \text{for all } x,y \text{ and } t\in[0,1],$$ where $m>0$ is fixed; any norm can be used, but the Euclidean norm is common. Note that, again, this does not assume that $f$ is differentiable. If it is twice differentiable at a point, however, strong convexity implies $\nabla^2 f(x) \succeq m I$, or $f''(x)\geq m$ for the scalar case.
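The strong-convexity inequality $f(tx+(1-t)y) \leq tf(x) + (1-t)f(y) - \tfrac{1}{2}m t(1-t)\|x-y\|^2$ can be sanity-checked on random sample points. A sketch, assuming NumPy; the helper `strong_convexity_holds` is my own, and $f(x)=x^2$ serves as the test case since it is strongly convex with $m=2$ (its second derivative is identically $2$):

```python
import numpy as np

def f(x):
    return x ** 2

def strong_convexity_holds(f, m, trials=1000, seed=0):
    """Check f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y) - (m/2)*t*(1-t)*(x-y)**2
    on random scalar samples. A failed sample disproves strong convexity
    with modulus m; passing all samples is only evidence, not a proof."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.uniform(-10.0, 10.0, size=2)
        t = rng.uniform(0.0, 1.0)
        lhs = f(t * x + (1 - t) * y)
        rhs = t * f(x) + (1 - t) * f(y) - 0.5 * m * t * (1 - t) * (x - y) ** 2
        if lhs > rhs + 1e-9:  # small tolerance for round-off
            return False
    return True
```

For $f(x)=x^2$ with $m=2$ the inequality in fact holds with equality, while $f(x)=|x|$ is convex but not strongly convex for any $m>0$, so sampling quickly finds a violating triple $(x,y,t)$.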