Yes, there are some simple relationships between confidence interval comparisons and hypothesis tests in a wide range of practical settings. However, in addition to verifying that the CI procedures and the t-test are appropriate for our data, we must check that the sample sizes are not too different and that the two sets have similar standard deviations. We also should not attempt to derive highly precise p-values from comparing two confidence intervals; we should instead be content with effective approximations.
In trying to reconcile the two replies already given (by @John and @Brett), it helps to be mathematically explicit. A formula for a symmetric two-sided confidence interval appropriate for the setting of this question is
$$\text{CI} = m \pm \frac{t_\alpha(n) s}{\sqrt{n}}$$
where $m$ is the sample mean of $n$ independent observations, $s$ is the sample standard deviation, $2\alpha$ is the desired test size (maximum false positive rate), and $t_\alpha(n)$ is the upper $1-\alpha$ percentile of the Student t distribution with $n-1$ degrees of freedom. (This slight deviation from conventional notation simplifies the exposition by obviating any need to fuss over the $n$ vs $n-1$ distinction, which will be inconsequential anyway.)
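To make this concrete, here is a minimal Python sketch (using numpy and scipy, on made-up observations) that computes such an interval under the convention above:

```python
import numpy as np
from scipy import stats

x = np.array([4.1, 5.3, 4.8, 5.9, 5.1, 4.6])  # hypothetical observations
n = len(x)
m = x.mean()
s = x.std(ddof=1)                  # sample standard deviation
alpha = 0.025                      # 2*alpha = 0.05, i.e. a 95% CI
t_crit = stats.t.ppf(1 - alpha, df=n - 1)  # upper 1 - alpha percentile, n - 1 df
half_width = t_crit * s / np.sqrt(n)
print(f"CI = {m:.3f} +/- {half_width:.3f}")
```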
Using subscripts $1$ and $2$ to distinguish two independent sets of data for comparison, with $1$ corresponding to the larger of the two means, a non-overlap of confidence intervals is expressed by the inequality (lower confidence limit 1) $\gt$ (upper confidence limit 2); viz.,
$$m_1 - \frac{t_\alpha(n_1) s_1}{\sqrt{n_1}} \gt m_2 + \frac{t_\alpha(n_2) s_2}{\sqrt{n_2}}.$$
This can be made to look like the t-statistic of the corresponding hypothesis test (to compare the two means) with simple algebraic manipulations, yielding
$$\frac{m_1-m_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}} \gt \frac{s_1\sqrt{n_2}t_\alpha(n_1) + s_2\sqrt{n_1}t_\alpha(n_2)}{\sqrt{n_1 s_2^2 + n_2 s_1^2}}.$$
The left hand side is the statistic used in the hypothesis test; it is usually compared to a percentile of a Student t distribution with $n_1+n_2$ degrees of freedom: that is, to $t_\alpha(n_1+n_2)$. The right hand side is a biased weighted average of the original t distribution percentiles.
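For the skeptical, a small numerical sketch (Python with numpy/scipy; the samples are made up) confirms that the non-overlap event and the inequality above are the same event:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x1 = rng.normal(1.0, 1.0, size=12)   # hypothetical sample (intended larger mean)
x2 = rng.normal(0.0, 1.0, size=15)   # hypothetical second sample
alpha = 0.025                        # 2*alpha = 0.05

def half_width(x):
    n = len(x)
    return stats.t.ppf(1 - alpha, n - 1) * x.std(ddof=1) / np.sqrt(n)

m1, m2 = x1.mean(), x2.mean()
s1, s2 = x1.std(ddof=1), x2.std(ddof=1)
n1, n2 = len(x1), len(x2)

# Non-overlap: (lower confidence limit 1) > (upper confidence limit 2).
non_overlap = (m1 - half_width(x1)) > (m2 + half_width(x2))

# The same event, rewritten as (t statistic) > (weighted percentile average).
lhs = (m1 - m2) / np.sqrt(s1**2 / n1 + s2**2 / n2)
rhs = ((s1 * np.sqrt(n2) * stats.t.ppf(1 - alpha, n1 - 1)
        + s2 * np.sqrt(n1) * stats.t.ppf(1 - alpha, n2 - 1))
       / np.sqrt(n1 * s2**2 + n2 * s1**2))

print(non_overlap, lhs > rhs)        # the two booleans always agree
```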
The analysis so far justifies the reply by @Brett: there appears to be no simple relationship available. However, let's probe further. I am inspired to do so because, intuitively, a non-overlap of confidence intervals ought to say something!
First, notice that this form of the hypothesis test is valid only when we expect $s_1$ and $s_2$ to be at least approximately equal. (Otherwise we face the notorious Behrens-Fisher problem and its complexities.) Upon checking the approximate equality of the $s_i$, we could then create an approximate simplification in the form
$$\frac{m_1-m_2}{s\sqrt{1/n_1 + 1/n_2}} \gt \frac{\sqrt{n_2}t_\alpha(n_1) + \sqrt{n_1}t_\alpha(n_2)}{\sqrt{n_1 + n_2}}.$$
Here, $s \approx s_1 \approx s_2$. Realistically, we should not expect this informal comparison of confidence limits to have the same size as $\alpha$. Our question then is whether there exists an $\alpha'$ such that the right hand side is (at least approximately) equal to the correct t statistic. Namely, for what $\alpha'$ is it the case that
$$t_{\alpha'}(n_1+n_2) = \frac{\sqrt{n_2}t_\alpha(n_1) + \sqrt{n_1}t_\alpha(n_2)}{\sqrt{n_1 + n_2}}\text{?}$$
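This equation is easy to solve numerically. A minimal sketch (Python with scipy, adopting the convention above that $t_\alpha(n)$ uses $n-1$ degrees of freedom):

```python
import numpy as np
from scipy import stats

def alpha_prime(alpha, n1, n2):
    """The alpha' solving the displayed equation for given alpha, n1, n2."""
    rhs = (np.sqrt(n2) * stats.t.ppf(1 - alpha, n1 - 1)
           + np.sqrt(n1) * stats.t.ppf(1 - alpha, n2 - 1)) / np.sqrt(n1 + n2)
    return stats.t.sf(rhs, n1 + n2 - 1)   # sf is the upper tail area

print(alpha_prime(0.025, 20, 20))         # about 0.0026 for two samples of 20
```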
It turns out that for equal sample sizes, $\alpha$ and $\alpha'$ are connected (to pretty high accuracy) by a power law. A log-log plot of the two for the cases $n_1=n_2=2$ (lowest, blue), $n_1=n_2=5$ (middle, red), and $n_1=n_2=\infty$ (highest, gold), together with the approximation described below (green, dashed), consists of nearly straight lines. The straightness of these curves reveals a power law whose exponent varies with $n=n_1=n_2$, but not by much.
The answer does depend on the set $\{n_1, n_2\}$, but it is natural to wonder how much it really varies with changes in the sample sizes. In particular, we could hope that for moderate to large sample sizes (maybe $n_1 \ge 10, n_2 \ge 10$ or thereabouts) the sample size makes little difference. In this case, we could develop a quantitative way to relate $\alpha'$ to $\alpha$.
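One way to probe this is to fit a straight line to $\log\alpha'$ against $\log\alpha$ and watch how little the fit changes with $n$. Here is a sketch (repeating the solver above so the snippet runs on its own):

```python
import numpy as np
from scipy import stats

def alpha_prime(alpha, n1, n2):
    rhs = (np.sqrt(n2) * stats.t.ppf(1 - alpha, n1 - 1)
           + np.sqrt(n1) * stats.t.ppf(1 - alpha, n2 - 1)) / np.sqrt(n1 + n2)
    return stats.t.sf(rhs, n1 + n2 - 1)

alphas = np.logspace(-3, -1, 25)          # alpha from 0.001 to 0.1
for n in (10, 20, 50):                    # assumed equal, moderate sample sizes
    aps = [alpha_prime(a, n, n) for a in alphas]
    slope, intercept = np.polyfit(np.log(alphas), np.log(aps), 1)
    print(n, round(slope, 2), round(np.exp(intercept), 2))
# The fitted slopes hover around 1.9-2.0 as n varies over this range.
```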
This approach turns out to work provided the sample sizes are not too different from each other. In the spirit of simplicity, I will report an omnibus formula for computing the test size $\alpha'$ corresponding to the confidence interval size $\alpha$. It is
$$\alpha' \approx e \alpha^{1.91};$$
that is,
$$\alpha' \approx \exp(1 + 1.91\log(\alpha)).$$
This formula works reasonably well in these common situations:
- Both sample sizes are close to each other, $n_1 \approx n_2$, and $\alpha$ is not too extreme ($\alpha \gt .001$ or so).
- One sample size is within about three times the other, the smaller is not too small (roughly, greater than $10$), and again $\alpha$ is not too extreme.
- One sample size is within three times the other and $\alpha \gt .02$ or so.
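As a rough check of the first situation, here is a sketch comparing the exact $\alpha'$ (the solver from earlier, repeated for self-containment) with the omnibus formula:

```python
import numpy as np
from scipy import stats

def alpha_prime(alpha, n1, n2):
    rhs = (np.sqrt(n2) * stats.t.ppf(1 - alpha, n1 - 1)
           + np.sqrt(n1) * stats.t.ppf(1 - alpha, n2 - 1)) / np.sqrt(n1 + n2)
    return stats.t.sf(rhs, n1 + n2 - 1)

for alpha in (0.05, 0.025, 0.005):
    for n in (10, 20, 50):
        exact = alpha_prime(alpha, n, n)
        approx = np.e * alpha ** 1.91
        print(f"alpha={alpha:<6} n={n:<3} exact={exact:.2e} approx={approx:.2e}")
# In these ranges the ratio exact/approx stays close to 1.
```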
A plot of the relative error (the correct value divided by the approximation) in the first situation shows the lower (blue) curve for $n_1=n_2=2$, the middle (red) curve for $n_1=n_2=5$, and the upper (gold) curve for $n_1=n_2=\infty$. Interpolating between the latter two, we see that the approximation is excellent for a wide range of practical values of $\alpha$ when sample sizes are moderate (around 5-50) and otherwise is reasonably good.
This is more than good enough for eyeballing a bunch of confidence intervals.
To summarize, the failure of two $2\alpha$-size confidence intervals of means to overlap is significant evidence of a difference in means at a level equal to $2e \alpha^{1.91}$, provided the two samples have approximately equal standard deviations and are approximately the same size.
I'll end with a tabulation of the approximation for common values of $2\alpha$. In the left hand column is the nominal size $2\alpha$ of the original confidence interval; in the right hand column is the actual size $2\alpha^\prime$ of the comparison of two such intervals:
$$\begin{array}{ll}
2\alpha & 2\alpha^\prime \\ \hline
0.1 &0.02\\
0.05 &0.005\\
0.01 &0.0002\\
0.005 &0.00006\\
\end{array}$$
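For the record, these entries follow directly from the approximation; a one-liner sketch:

```python
import numpy as np

for two_alpha in (0.1, 0.05, 0.01, 0.005):
    a = two_alpha / 2
    print(two_alpha, 2 * np.e * a ** 1.91)
# Prints roughly 0.018, 0.0047, 0.00022, 0.000058, matching the table
# after rounding to one significant figure.
```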
For example, when a pair of two-sided 95% CIs ($2\alpha=.05$) for samples of approximately equal sizes do not overlap, we should take the means to be significantly different, $p \lt .005$. The correct p-value (for equal sample sizes $n$) actually lies between $.0037$ ($n=2$) and $.0056$ ($n=\infty$).
This result justifies (and I hope improves upon) the reply by @John. Thus, although the previous replies appear to be in conflict, both are (in their own ways) correct.
Best Answer
You can use a confidence interval (CI) for hypothesis testing. In the typical case, if the CI for an effect does not span 0, then you can reject the null hypothesis. But a CI can be used for more, whereas reporting whether the null was rejected is the limit of the usefulness of a test.
The reason you're recommended to use a CI instead of just a t-test, for example, is that you can then do more than just test hypotheses. You can make a statement about the range of effects you believe to be likely (the ones in the CI), which you can't do with just a t-test. You can also use it to make statements about the null. If the t-test doesn't reject the null, then all you can say is that you can't reject the null, which isn't saying much. But if you have a narrow confidence interval around the null, then you can suggest that the null, or a value close to it, is likely the true value, and that the effect of the treatment (or independent variable) is too small to be meaningful. (Alternatively, the CI may include both 0 and an effect important to you, in which case your experiment lacks the power and precision to detect that effect.)
Added Later: I really should have said that, while you can use a CI like a test, it isn't one. It's an estimate of a range where you think the parameter value lies. You can make test-like inferences from it, but you're just so much better off never talking about it that way.
Which is better?
A) The effect is 0.6, t(29) = 2.8, p < 0.05. This statistically significant effect is... (some discussion ensues about this statistical significance without any mention of or even strong ability to discuss the practical implication of the magnitude of the finding... under a Neyman-Pearson framework the magnitude of the t and p values is pretty much meaningless and all you can discuss is whether the effect is present or isn't found to be present. You can never really talk about there not actually being an effect based on the test.)
or
B) Using a 95% confidence interval I estimate the effect to be between 0.2 and 1.0. (Some discussion ensues about the actual effect of interest: whether its plausible values are ones that have any particular meaning, with any use of the word significant meaning exactly what it's supposed to mean. In addition, the width of the CI can go directly to a discussion of whether this is a strong finding or whether you can only reach a more tentative conclusion.)
If you took a basic statistics class you might initially gravitate toward A, and there may be some cases where it is the better way to report a result. But for most work B is far and away superior. A range estimate is not a test.
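To make the contrast concrete, here is a minimal sketch (Python with scipy, on made-up data) producing both styles of report from the same sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(0.6, 1.1, size=30)        # hypothetical effect measurements

t_stat, p = stats.ttest_1samp(x, 0.0)    # style A: test against zero
lo, hi = stats.t.interval(0.95, df=len(x) - 1,
                          loc=x.mean(), scale=stats.sem(x))  # style B

print(f"A: effect = {x.mean():.2f}, t({len(x) - 1}) = {t_stat:.2f}, p = {p:.4f}")
print(f"B: 95% CI for the effect: ({lo:.2f}, {hi:.2f})")
```

Both lines come from the same numbers; only B tells the reader how large the effect plausibly is.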