In general, the controllability Gramian is given as the unique solution to the continuous time Lyapunov matrix equation
$$ AP + PA^T = -BB^T.$$
(In the context of model reduction, $A$ is typically an asymptotically stable matrix, so this equation has a unique solution.) The solution can be written as
$$ P = \int_0^\infty e^{At} BB^T e^{A^T t} dt $$
from which it follows that $P$ is (always) symmetric positive semi-definite and that the null space of $P$ is identical to the null space of the controllability matrix $K$ given by
$$ K = \begin{bmatrix} B & AB & A^2 B & \dotsc & A^{n-1} B \end{bmatrix}, \quad \text{K for Krylov} $$
In particular, $P$ is symmetric positive definite if and only if $K$ has full rank. In principle, you can test if the system is controllable by attempting a Cholesky factorization of $P$. This will work nicely for small toy systems.
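The Cholesky test can be sketched as follows. This is a minimal example with a hypothetical stable pair $(A, B)$ (not taken from the question); the Gramian is obtained from SciPy's Lyapunov solver and then handed to a Cholesky factorization:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical small stable system: A asymptotically stable, single input.
# The controllability Gramian P solves A P + P A^T + B B^T = 0.
A = np.array([[-2.0, 1.0],
              [ 0.0, -3.0]])
B = np.array([[1.0],
              [1.0]])

P = solve_continuous_lyapunov(A, -B @ B.T)

# If (A, B) is controllable, P is symmetric positive definite and the
# Cholesky factorization succeeds; otherwise it raises LinAlgError.
try:
    L = np.linalg.cholesky(P)
    print("Cholesky succeeded: system is (numerically) controllable")
except np.linalg.LinAlgError:
    print("Cholesky failed: P is singular to machine precision")
```

For this particular pair, $K = [B \; AB]$ is nonsingular, so the factorization goes through.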
However, in practice, even when the system is controllable, the eigenvalues of $P$ often decay so rapidly that the matrix is singular to machine precision. In particular, a Cholesky factorization will fail, leaving you with the impression that the system is uncontrollable. Typically, an exceedingly good approximation of $P$ of very low rank exists and can frequently be found without computing and truncating the full eigendecomposition of $P$. This is the so-called low-rank phenomenon for Lyapunov matrix equations. It is the foundation upon which modern iterative solvers for Lyapunov equations are built, and it is closely related to the success of balanced truncation.
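The decay is easy to observe numerically. Here is a sketch with a hypothetical controllable pair (a stable discrete 1-D Laplacian with a single input); despite exact controllability, the Gramian's eigenvalues fall off so fast that its numerical rank is far below $n$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical example: A is a stable tridiagonal (Laplacian-like) matrix,
# b a single input vector. The pair (A, b) is exactly controllable.
n = 100
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
b = np.zeros((n, 1))
b[0] = 1.0

P = solve_continuous_lyapunov(A, -b @ b.T)
P = 0.5 * (P + P.T)                      # symmetrize against rounding

eigs = np.sort(np.linalg.eigvalsh(P))[::-1]
tol = eigs[0] * n * np.finfo(float).eps  # standard numerical-rank tolerance

# Most eigenvalues are negligible relative to the largest one, so P is
# numerically singular even though the system is controllable.
print("numerical rank of P:", np.sum(eigs > tol), "out of", n)
```

The count printed is much smaller than $n$, which is exactly the low-rank phenomenon described above.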
As for literature you must obtain a copy of A. C. Antoulas Approximation of Large Scale Dynamical Systems, see
http://epubs.siam.org/doi/book/10.1137/1.9780898718713
for the details. K. Zhou's book "Robust and Optimal Control" may be of use to you as well, but I think it best to start with Antoulas's book.
EDIT: If we wish to determine if the system is controllable, then we need to investigate the controllability matrix $K$ in a numerically reliable manner. I am not sure of the details for the general case where $B$ has $k>1$ columns, but if $B = b \in \mathbb{R}^n$, then the Arnoldi algorithm can be applied. Given $(A,b)$ this algorithm attempts to build an orthonormal basis for the Krylov subspace
$$K(A,b) = \text{span} \{ A^j b \: : \: j = 0,1,2,\dotsc \}. $$
Now, in exact arithmetic, if the algorithm completes $k$ iterations, then it produces a factorization of the form
$$ A V_k = V_{k+1} \overline{H}_k,$$
where $V_k \in \mathbb{R}^{n \times k}$ is an orthonormal matrix with
$$K_k(A,b) := \text{span} \{ A^j b \: : \: j = 0,1,\dotsc,k-1 \} = \text{Ran} V_k$$
and $\overline{H}_k \in \mathbb{R}^{(k+1) \times k}$ is an extended upper Hessenberg matrix. The last subdiagonal entry $h_{k+1,k}$ is critical, in the sense that it very precisely measures the size of a perturbation $\Delta A$ of $A$ such that
$$ (A + \Delta A)V_k = V_k H_k $$
where $H_k$ is the upper Hessenberg matrix which consists of all but the last row of $\overline{H}_k$. If you find that $h_{k+1,k}$ is tiny relative to the norm of the matrix $A$, then $K$ has numerical rank $k$ and your system is close to an uncontrollable system. If, on the other hand, you can complete $n-1$ steps of the Arnoldi algorithm and none of these subdiagonal entries are insignificant, then your system is controllable and far from being uncontrollable.
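A minimal sketch of this test, assuming a hypothetical helper `arnoldi_controllability` (not a standard library routine): it runs Arnoldi with modified Gram-Schmidt and returns the step at which the subdiagonal entry becomes negligible relative to $\|A\|$, i.e. the numerical dimension of the Krylov subspace.

```python
import numpy as np

def arnoldi_controllability(A, b, tol=1e-10):
    """Run Arnoldi on (A, b). Return the numerical dimension of the Krylov
    subspace: the step at which h[k+1, k] <= tol * ||A||, or n if the
    subspace fills out all of R^n (i.e. the pair is controllable)."""
    n = A.shape[0]
    normA = np.linalg.norm(A, 2)
    V = np.zeros((n, n + 1))
    V[:, 0] = b / np.linalg.norm(b)
    for k in range(n):
        w = A @ V[:, k]
        for j in range(k + 1):              # modified Gram-Schmidt
            w -= (V[:, j] @ w) * V[:, j]
        h = np.linalg.norm(w)               # this is h[k+1, k]
        if h <= tol * normA:                # breakdown: rank is k + 1
            return k + 1
        V[:, k + 1] = w / h
    return n

# Hypothetical uncontrollable pair: b lies in a 2-D invariant subspace of A.
A = np.diag([1.0, 2.0, 3.0])
b = np.array([1.0, 1.0, 0.0])
print(arnoldi_controllability(A, b))        # prints 2: rank deficient
```

A return value equal to $n$ indicates a controllable pair; anything smaller flags (near-)uncontrollability, with the breakdown step giving the numerical rank of $K$.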
It is possible to say substantially more about these matters. For a good introduction to the Arnoldi method, I refer to Yousef Saad's book "Iterative Methods for Sparse Linear Systems". The first edition of Saad's book is freely available at his website, see
http://www-users.cs.umn.edu/~saad/books.html
Best Answer
The problem is that you have miscalculated the controllability matrix. We find that $$ C = \pmatrix{1&0&1\\0&1&1\\0&1&1}. $$ The second and third rows are identical, so we see that $C$ fails to have full row rank. So, the system is not controllable, and it makes sense that the controllability Gramian that you obtained was singular.
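A quick numerical check of this rank claim, using the matrix $C$ given above:

```python
import numpy as np

# The controllability matrix computed above: rows 2 and 3 are identical.
C = np.array([[1, 0, 1],
              [0, 1, 1],
              [0, 1, 1]])

print(np.linalg.matrix_rank(C))  # prints 2, so the system is not controllable
```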
An explanation of why the system is not controllable in this case: note that the column-vector $B$ is a linear combination of the eigenvectors of $A$ associated with eigenvalues $\lambda_{\pm} = \frac{1 \pm \sqrt{5}}{2}$, namely $$ v_{\pm} = \left(\frac{-1 \pm \sqrt{5}}{2}, 1,1 \right). $$ We see that $(1,0,0) = \frac 1{\sqrt{5}}(v_+ - v_-)$.
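The stated identity is easy to verify numerically with the eigenvectors as given:

```python
import numpy as np

# Check that (1, 0, 0) = (v_+ - v_-) / sqrt(5) for the stated eigenvectors.
s = np.sqrt(5)
v_plus = np.array([(-1 + s) / 2, 1.0, 1.0])
v_minus = np.array([(-1 - s) / 2, 1.0, 1.0])

print((v_plus - v_minus) / s)  # prints [1. 0. 0.]
```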