You don't compare the individual points to conclude a treatment effect. You see whether the lines for the treatment and control are different.
In some circumstances, the fitted lines might be parallel, and just the difference in intercept is of interest. In others, both the intercept and slope might differ, and any difference would be of interest.
Testing point vs line in ordinary regression (not errors-in-variables, which is more complicated):
It's not correct to check whether the data values from one sample fall inside a confidence interval fitted to the other, because the data values themselves contain noise.
Call the first sample $(\underline{x}_1,\underline{y}_1)$, and the second one $(\underline{x}_2,\underline{y}_2)$. Your model for the first sample is $y_1(i) = \alpha_1 + \beta_1 x_{1,i} + \varepsilon_i$, with the usual iid $N(0,\sigma^2)$ assumption on the errors.
You want to see if a particular point $(x_{2,j},y_{2,j})$ is consistent with the first sample. Equivalently, you want to check whether an interval for $y_{2,j} - \left(\alpha_1 + \beta_1 x_{2,j}\right)$ includes 0 (notice the point comes from the second sample while the line is fitted to the first).
The usual way to obtain such a CI would be to construct a pivotal quantity, though one could simulate or bootstrap as well.
However, since in this illustration we're doing it for a single point, under normal assumptions and with ordinary regression conditions, we can save some effort: this is a solved problem. It corresponds to (assuming sample 1 and sample 2 have a common population variance) checking whether one of the sample 2 observations lies within a prediction interval based on sample 1, rather than a confidence interval.
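As a concrete sketch of that prediction-interval check, here is a minimal implementation from the textbook simple-regression formulas (the function name and the example data are my own, not from the thread): fit OLS to sample 1, then see whether a sample-2 observation falls inside the prediction interval at its $x$ value.

```python
import numpy as np
from scipy import stats

def prediction_interval(x1, y1, x_new, alpha=0.05):
    """Prediction interval for a new observation at x_new,
    based on a simple OLS fit to sample 1 (x1, y1)."""
    x1, y1 = np.asarray(x1, float), np.asarray(y1, float)
    n = len(x1)
    xbar = x1.mean()
    sxx = np.sum((x1 - xbar) ** 2)
    beta = np.sum((x1 - xbar) * (y1 - y1.mean())) / sxx
    a = y1.mean() - beta * xbar
    resid = y1 - (a + beta * x1)
    s2 = np.sum(resid ** 2) / (n - 2)              # residual variance, n-2 df
    # se of a *new* observation: extra "+1" relative to the CI for the mean
    se = np.sqrt(s2 * (1 + 1 / n + (x_new - xbar) ** 2 / sxx))
    tcrit = stats.t.ppf(1 - alpha / 2, n - 2)
    fit = a + beta * x_new
    return fit - tcrit * se, fit + tcrit * se

# Sample 1 lies near y = 2 + 3x; test a sample-2 point at x = 5
x1 = list(range(10))
y1 = [2 + 3 * xi + (0.1 if i % 2 == 0 else -0.1) for i, xi in enumerate(x1)]
lo, hi = prediction_interval(x1, y1, 5.0)
consistent = lo <= 17.2 <= hi   # is the sample-2 point (5, 17.2) consistent?
```

The `1 +` inside the square root is what distinguishes the prediction interval from the confidence interval for the mean: it accounts for the noise in the new observation itself, which is exactly the point made above.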
In my academic report I have the task of checking whether or not the mean values predicted by a simple linear regression model (at two given predictor values) are "statistically significantly different".
Is this a simple regression (one predictor)? If so, there's nothing to do -- if the slope is significantly different from zero, so is the difference between the means at two distinct predictor values. The t-statistic for the difference is just the t-statistic for the slope (apart, perhaps, from a sign change). Looking at absolute values:
$$\left|\frac{\hat{y}_1-\hat{y}_2}{\sqrt{\text{Var}(\hat{y}_1-\hat{y}_2)}}\right|=\left|\frac{(x_1-x_2)\hat{\beta}}{\sqrt{\text{Var}[(x_1-x_2)\hat{\beta}]}}\right|$$
$$=\left|\frac{(x_1-x_2)\hat{\beta}}{(x_1-x_2)\sqrt{\text{Var}(\hat{\beta})}}\right|$$
$$=\left|\frac{\hat{\beta}}{\sqrt{\text{Var}(\hat{\beta})}}\right|$$
If you have a multiple regression it's somewhat more complicated, but a similar calculation could be used to build a test for a change in mean.
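The identity above is easy to verify numerically. The sketch below (names and data are mine, for illustration) computes the t-statistic for the slope and the t-statistic for $\hat{y}(x_a)-\hat{y}(x_b)$ from the full covariance matrix of the coefficient estimates, so the covariance between the two predictions is handled correctly rather than ignored:

```python
import numpy as np

def slope_vs_mean_diff_t(x, y, xa, xb):
    """Return (t for slope, t for y_hat(xa) - y_hat(xb)) from a
    simple OLS regression of y on x. In simple regression these
    agree up to sign."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X = np.column_stack([np.ones_like(x), x])     # design: intercept, slope
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    s2 = resid @ resid / (len(x) - 2)             # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)             # Cov(alpha_hat, beta_hat)
    t_slope = beta_hat[1] / np.sqrt(cov[1, 1])
    c = np.array([1.0, xa]) - np.array([1.0, xb]) # contrast: prediction diff
    diff = c @ beta_hat                           # intercepts cancel
    var_diff = c @ cov @ c                        # includes covariance term
    return t_slope, diff / np.sqrt(var_diff)
```

Because the contrast vector is $(0, x_a - x_b)$, the intercept drops out and the variance reduces to $(x_a-x_b)^2\,\text{Var}(\hat\beta)$, reproducing the algebra above.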
I want to make a predictions for these two values and check whether or not the confidence intervals for them have a common part.
That's a different thing. For example, it ignores the correlation between the predictions.
If not, I assume the predicted means are statistically different.
Am I right? Are there other statistical tools I may apply here?
Are you really interested in statistical significance or whether there's a practical difference?
Best Answer
The following article might be helpful to you, as it describes how to evaluate whether the effect of a given explanatory factor is invariant over persons, time, or organizations:
Paternoster, R., Brame, R., Mazerolle, P., & Piquero, A. R. (1998). Using the Correct Statistical Test for the Equality of Regression Coefficients. Criminology, 36(4), 859–866.
What they basically say is that to test the hypothesis that the difference between $b_1$ and $b_2$ (1 and 2 indexing the two samples or time points) is equal to zero, you can apply the following formula:
$\begin{equation} Z= \frac{b_1-b_2}{\sqrt{{SEb_1}^{2}+{SEb_2}^2}} \end{equation}$
with $SEb_1$ and $SEb_2$ the standard errors of the respective coefficients (the slopes, in your case).
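That formula is a one-liner to apply once each regression has been fit separately and you have read off the slope estimates and their standard errors (the numbers below are made up, purely for illustration):

```python
import math

def z_equal_slopes(b1, se1, b2, se2):
    """Paternoster et al. (1998) Z test for equality of two regression
    coefficients estimated on independent samples:
    Z = (b1 - b2) / sqrt(se1^2 + se2^2)."""
    return (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)

# e.g. slope 1.2 (SE 0.2) vs slope 0.8 (SE 0.15)
z = z_equal_slopes(1.2, 0.2, 0.8, 0.15)     # 0.4 / 0.25 = 1.6
significant = abs(z) > 1.96                 # two-sided test at the 5% level
```

Note the formula assumes the two samples are independent; if the coefficients come from overlapping data, the denominator would also need a covariance term.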