I addressed a very similar question on scicomp.SE, but I suppose it's good to have an answer here. The point is that neither the size of the entries nor the size of the determinant is a guarantee that your matrix is well- or ill-conditioned. For that, a common criterion is to look at the matrix's singular values. In particular, the 2-norm condition number of a matrix is its largest singular value divided by its smallest singular value; if the smallest singular value is zero, the matrix is singular, and if the smallest singular value is very tiny relative to the largest, the matrix is ill-conditioned.
For instance, matrices of the form
$$\begin{pmatrix}10^{12}&10^{-12}&\cdots&10^{-12}\\10^{-12}&10^{12}&\ddots&\vdots\\\vdots&\ddots&\ddots&10^{-12}\\10^{-12}&\cdots&10^{-12}&10^{12}\end{pmatrix}$$
($10^{12}$ on the diagonal, and $10^{-12}$ off-diagonal) are well conditioned (the ratio of the largest to the smallest singular value is very nearly equal to $1$), while the family of upper triangular matrices
$$\begin{pmatrix}1&2&\cdots&2\\&1&\ddots&\vdots\\&&\ddots&2\\&&&1\end{pmatrix}$$
studied by Alexander Ostrowski and Jim Wilkinson has a condition number equal to $\cot^2\dfrac{\pi}{4n}$, where $n$ is the size of the matrix, even though its determinant (the product of the diagonal entries) is exactly $1$.
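A quick numerical check of both families (a minimal numpy sketch; the size $n$ is just an illustrative choice):

```python
import numpy as np

n = 100

# 10^12 on the diagonal, 10^-12 off the diagonal: the entries and the
# determinant are enormous, yet the condition number is essentially 1.
A = np.full((n, n), 1e-12) + (1e12 - 1e-12) * np.eye(n)
print(np.linalg.cond(A))                  # ~1.0

# Upper triangular, 1 on the diagonal and 2 above: the determinant is
# exactly 1, yet the condition number grows like cot^2(pi/(4n)).
B = np.eye(n) + np.triu(2 * np.ones((n, n)), k=1)
print(np.linalg.cond(B))                  # large
print(1 / np.tan(np.pi / (4 * n)) ** 2)   # the closed form quoted above
```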
This problem is difficult for numerical rather than computational reasons. Part of the problem is that you really need to be confident that the matrix is full rank, because if it is not, then a single error can make a determinant very large when it should actually be zero. For illustration, suppose we were trying to compute the determinant of
$$A =\begin{bmatrix} M & M \\ M & M \end{bmatrix}$$
where $M$ is very large. This determinant is of course $0$. Let's say some roundoff gave us
$$B = \begin{bmatrix} M & M \\ M & (M+\varepsilon) \end{bmatrix}$$
for some small $\varepsilon$. Now $A$ has determinant $0$ but $B$ has determinant $M\varepsilon$, which may be quite large. The coefficient gets much larger in higher dimensions (though it is always first order, as the determinant is a polynomial in the entries).
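In code (a small numpy sketch of the $2\times 2$ example; the values of $M$ and $\varepsilon$ are arbitrary illustrative choices):

```python
import numpy as np

M, eps = 1e8, 1e-8   # eps is roughly the relative roundoff one expects in M

A = np.array([[M, M], [M, M]])          # exactly singular: determinant 0
B = np.array([[M, M], [M, M + eps]])    # a tiny perturbation of A

print(np.linalg.det(A))   # ~0 (up to roundoff in the LU factorization)
print(np.linalg.det(B))   # ~M * eps = 1, nowhere near zero
```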
If you are confident that the matrix is full rank, then my best suggestion would be to perform an SVD, check that all the singular values are nonzero, and if they are not, redo the computation in higher precision.
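As a sketch of that check (assuming numpy; the helper name and tolerance are only illustrative):

```python
import numpy as np

def looks_full_rank(A, rtol=None):
    """True if the smallest singular value is not negligible next to the largest."""
    s = np.linalg.svd(A, compute_uv=False)             # singular values, descending
    if rtol is None:
        rtol = max(A.shape) * np.finfo(A.dtype).eps    # a common default cutoff
    return s[0] > 0 and s[-1] > rtol * s[0]

A = np.random.rand(50, 50)
if not looks_full_rank(A):
    print("nearly rank-deficient: recompute the determinant in higher precision")
```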
Edit: there is one more thing you can do. Because $\det(A)=\det(A^T)$, you can perform column operations. In this case what you should do is rescale the matrix so that the largest entry in each column is $1$. You will multiply column $x_i$ by a number $c_i$, which will also multiply the determinant by $c_i$. Accordingly, you will want to divide the final result by the product of the $c_i$, so that you get the determinant of the matrix you started with. You will still find that the matrix is extremely ill-conditioned afterward; for example, you will still have a column with one entry of order $1$ and another of order $10^{-40}$. But the conflict between the first two columns and the rest will be gone.
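A minimal sketch of that rescaling (assuming numpy; `scaled_det` is a made-up helper name), with the column scale factors undone at the end:

```python
import numpy as np

def scaled_det(A):
    A = np.asarray(A, dtype=float)
    c = 1.0 / np.abs(A).max(axis=0)     # c_i: make the largest entry of column i equal to 1
                                        # (assumes no column is identically zero)
    det_scaled = np.linalg.det(A * c)   # determinant of the column-scaled matrix
    return det_scaled / np.prod(c)      # divide by the product of the c_i to undo the scaling

# Columns of wildly different magnitude; the exact determinant is 3.
A = np.diag([1e20, 1e-20, 3.0])
print(scaled_det(A), np.linalg.det(A))
```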
Best Answer
If $\mathbf C$ is ill-conditioned, then $\mathbf C^T \mathbf C$ is even more so; that is because this operation squares the condition number of your matrix $\mathbf C$.
If we consider the singular value decomposition of $\mathbf C=\mathbf U\mathbf \Sigma\mathbf V^T$, where $\mathbf \Sigma$ is the diagonal matrix of singular values $\mathrm{diag}(\sigma_i)$, then the (2-norm) condition number $\kappa_2(\mathbf C)=\frac{\max\sigma_i}{\min\sigma_i}$. If $\min\sigma_i$ is tiny, then $\mathbf C$ is ill-conditioned.
Substituting the singular value decomposition, we have
$$\mathbf C^T \mathbf C=(\mathbf U\mathbf \Sigma\mathbf V^T)^T(\mathbf U\mathbf \Sigma\mathbf V^T)=\mathbf V\mathbf \Sigma\mathbf U^T\mathbf U\mathbf \Sigma\mathbf V^T=\mathbf V\mathbf \Sigma^2\mathbf V^T$$
where we used the fact that $\mathbf U$ is orthogonal.
From the singular value decomposition of $\mathbf C^T \mathbf C=\mathbf V\mathbf \Sigma^2\mathbf V^T$, we see that $\kappa_2(\mathbf C^T \mathbf C)=\frac{(\max\sigma_i)^2}{(\min\sigma_i)^2}$. If $\min\sigma_i$ is tiny, $(\min\sigma_i)^2$ is even tinier; thus $\mathbf C^T \mathbf C$ is even more ill-conditioned than the original.
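A quick numerical confirmation of the squaring (a numpy sketch with made-up singular values):

```python
import numpy as np

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))    # random orthogonal factors
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))
C = U @ np.diag([1.0, 1e-2, 1e-6]) @ V.T             # kappa_2(C) = 1e6 by construction

print(np.linalg.cond(C))          # ~1e6
print(np.linalg.cond(C.T @ C))    # ~1e12
print(np.linalg.cond(C) ** 2)     # the same, as predicted
```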
Your best bet is to use the singular value decomposition itself to form the pseudoinverse:
$\mathbf C^\dagger=\mathbf V\mathbf \Sigma^\dagger \mathbf U^T$
where to form $\mathbf \Sigma^\dagger$, each diagonal entry of $\mathbf \Sigma$ is reciprocated if it is greater than the machine epsilon (or some power of it) times $\max\sigma_i$, and set to zero otherwise.
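A sketch of that truncated-SVD pseudoinverse (assuming numpy; this is essentially what `numpy.linalg.pinv` does with its `rcond` cutoff):

```python
import numpy as np

def pinv_svd(C, rtol=None):
    C = np.asarray(C, dtype=float)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    if rtol is None:
        rtol = max(C.shape) * np.finfo(C.dtype).eps   # cutoff relative to max sigma
    s_dagger = np.zeros_like(s)
    keep = s > rtol * s.max()
    s_dagger[keep] = 1.0 / s[keep]                    # reciprocate only the kept singular values
    return Vt.T @ (s_dagger[:, None] * U.T)           # V Sigma^+ U^T

C = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])    # rank 1
print(pinv_svd(C))
print(np.linalg.pinv(C))                              # should agree
```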