The basic answer here is: do not remove the intercept. Suppressing the intercept in a regression model is almost always a bad idea, and its 'significance' is largely irrelevant.
When you have a categorical explanatory variable, the standard approach is to use reference cell coding (commonly called 'dummy coding'). Under this scheme, you have indicators for one fewer than the number of groups, and the intercept accounts for the omitted group. Thus, the intercept equals the mean of the 0-level (reference) group. (I discuss reference cell coding in more depth here, if you want more information.)
What these tests are telling you is that the mean of the 0-level condition cannot be differentiated from 0. However, this is very unlikely to be a piece of information you should care about. Moreover, it tells you nothing about whether the 1-level and the 0-level differ, which, presumably, is what you do care about. Either way, the intercept is part of the model, so you should leave it in.
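To see this concretely, here is a minimal sketch with made-up numbers (two small groups; all values are hypothetical). Fitting the simple regression by hand shows that the intercept recovers the 0-level group's mean, while the slope recovers the difference between the group means:

```python
# Hypothetical illustration: with reference cell (dummy) coding, the
# intercept is the mean of the reference (0-level) group, and the
# slope is the difference between the two group means.

y0 = [2.0, 3.0, 4.0]   # responses for the 0-level (reference) group
y1 = [5.0, 6.0, 7.0]   # responses for the 1-level group

y = y0 + y1
x = [0]*len(y0) + [1]*len(y1)   # dummy indicator: 0 = reference, 1 = other

# Ordinary least squares for y = b0 + b1*x, computed directly
n = len(y)
xbar = sum(x)/n
ybar = sum(y)/n
beta1 = sum((xi-xbar)*(yi-ybar) for xi, yi in zip(x, y)) / \
        sum((xi-xbar)**2 for xi in x)
beta0 = ybar - beta1*xbar

print(beta0, sum(y0)/len(y0))                      # 3.0 3.0 -> reference mean
print(beta1, sum(y1)/len(y1) - sum(y0)/len(y0))    # 3.0 3.0 -> mean difference
```

So a t-test on the intercept asks whether the reference group's mean is 0, while the test on the slope asks whether the two groups differ, which is the usual question of interest.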
Yes. You can easily verify this by carrying out the following steps:
First, express the means of $A$, $B$, and $C$, in terms of the model with the specified contrast:
\begin{eqnarray*}
1\hat{\beta}_{0}-2\hat{\beta}_{1}+0\hat{\beta}_{2} & = & \hat{\mu}_{A}=E(Y_{A})\\
1\hat{\beta}_{0}+1\hat{\beta}_{1}-1\hat{\beta}_{2} & = & \hat{\mu}_{B}=E(Y_{B})\\
1\hat{\beta}_{0}+1\hat{\beta}_{1}+1\hat{\beta}_{2} & = & \hat{\mu}_{C}=E(Y_{C})
\end{eqnarray*}
Here, each $\hat{\mu}_i$ represents the group mean of group $i$, $i=A, B, C$.
Next, place each beta coefficient into a matrix augmented with the
means on the right, and put the matrix in reduced row echelon form
using Gauss-Jordan elimination:
\begin{eqnarray*}
\begin{bmatrix}1 & -2 & 0 & | & \hat{\mu}_{A}\\
1 & 1 & -1 & | & \hat{\mu}_{B}\\
1 & 1 & 1 & | & \hat{\mu}_{C}
\end{bmatrix} & \sim & \begin{bmatrix}1 & -2 & 0 & | & \hat{\mu}_{A}\\
0 & 3 & -1 & | & \hat{\mu}_{B}-\hat{\mu}_{A}\\
0 & 3 & 1 & | & \hat{\mu}_{C}-\hat{\mu}_{A}
\end{bmatrix}\\
& \sim & \begin{bmatrix}1 & -2 & 0 & | & \hat{\mu}_{A}\\
0 & 3 & -1 & | & \hat{\mu}_{B}-\hat{\mu}_{A}\\
0 & 0 & 2 & | & \left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)
\end{bmatrix}\\
& \sim & \begin{bmatrix}1 & -2 & 0 & | & \hat{\mu}_{A}\\
0 & 3 & -1 & | & \hat{\mu}_{B}-\hat{\mu}_{A}\\
0 & 0 & 1 & | & \frac{1}{2}\left[\left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)\right]
\end{bmatrix}\\
& \sim & \begin{bmatrix}1 & -2 & 0 & | & \hat{\mu}_{A}\\
0 & 1 & 0 & | & \frac{1}{3}\left\{ \left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)+\frac{1}{2}\left[\left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)\right]\right\} \\
0 & 0 & 1 & | & \frac{1}{2}\left[\left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)\right]
\end{bmatrix}\\
& \sim & \begin{bmatrix}1 & 0 & 0 & | & \hat{\mu}_{A}+\frac{2}{3}\left\{ \left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)+\frac{1}{2}\left[\left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)\right]\right\} \\
0 & 1 & 0 & | & \frac{1}{3}\left\{ \left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)+\frac{1}{2}\left[\left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)\right]\right\} \\
0 & 0 & 1 & | & \frac{1}{2}\left[\left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)\right]
\end{bmatrix}
\end{eqnarray*}
So, now, we know that the first pivot position corresponds to:
\begin{eqnarray*}
\hat{\beta}_{0} & = & \hat{\mu}_{A}+\frac{2}{3}\left\{ \left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)+\frac{1}{2}\left[\left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)\right]\right\} \\
& = & \hat{\mu}_{A}-\frac{2}{3}\hat{\mu}_{A}-\frac{1}{3}\hat{\mu}_{A}+\frac{1}{3}\hat{\mu}_{A}+\frac{2}{3}\hat{\mu}_{B}-\frac{1}{3}\hat{\mu}_{B}+\frac{1}{3}\hat{\mu}_{C}\\
& = & \frac{1}{3}\hat{\mu}_{A}+\frac{1}{3}\hat{\mu}_{B}+\frac{1}{3}\hat{\mu}_{C}\\
& = & \frac{\hat{\mu}_{A}+\hat{\mu}_{B}+\hat{\mu}_{C}}{3}
\end{eqnarray*}
The final expression indicates that $\hat{\beta}_{0}$, the intercept,
represents the simple mean of the group means.
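The derivation above can also be checked numerically. The sketch below (using hypothetical group means chosen for illustration) builds the 3x3 system from the same contrast codes, solves it by Gauss-Jordan elimination, and confirms that the intercept comes out as the unweighted mean of the three group means:

```python
# Numerical check of the derivation, with hypothetical group means.
mu = {'A': 10.0, 'B': 13.0, 'C': 19.0}

# Design rows: intercept, first contrast (-2, 1, 1), second contrast (0, -1, 1),
# matching the three equations for E(Y_A), E(Y_B), E(Y_C) above.
M = [[1.0, -2.0,  0.0],
     [1.0,  1.0, -1.0],
     [1.0,  1.0,  1.0]]
b = [mu['A'], mu['B'], mu['C']]

# Gauss-Jordan elimination on the augmented matrix [M | b]
aug = [row + [rhs] for row, rhs in zip(M, b)]
for i in range(3):
    piv = aug[i][i]
    aug[i] = [v / piv for v in aug[i]]          # scale pivot row to pivot 1
    for j in range(3):
        if j != i:
            f = aug[j][i]
            aug[j] = [vj - f*vi for vj, vi in zip(aug[j], aug[i])]

beta = [aug[i][3] for i in range(3)]
grand_mean = sum(mu.values()) / 3
print(beta[0], grand_mean)   # 14.0 14.0 -> intercept = mean of group means
```

Swapping in any other set of group means gives the same agreement, since the result holds by the algebra, not by the particular numbers.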
The intercept is the predicted value of the dependent variable when all the independent variables are 0. So, suppose you have a model such as
Income ~ Sex
Then if sex is coded as 0 for men and 1 for women, the intercept is the predicted value of income for men; if it is significant, it means that mean income for men is significantly different from 0.
In most cases, the significance of the intercept is not particularly interesting. Indeed, you can easily change the intercept by recoding the independent variable, but this has no effect on the meaning of the model.
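A minimal sketch of that last point, with made-up incomes: flipping which group is coded 0 changes the intercept (and the sign of the slope), but the fitted group means, and hence the model, are unchanged.

```python
# Hypothetical data: recoding the indicator swaps which group the
# intercept refers to, without changing the fitted group means.

income_men = [30.0, 40.0, 50.0]    # incomes for sex coded 0 (made up)
income_women = [35.0, 45.0, 55.0]  # incomes for sex coded 1 (made up)

def fit(y, x):
    """Ordinary least squares for y = b0 + b1*x."""
    n = len(y)
    xbar, ybar = sum(x)/n, sum(y)/n
    b1 = sum((xi-xbar)*(yi-ybar) for xi, yi in zip(x, y)) / \
         sum((xi-xbar)**2 for xi in x)
    return ybar - b1*xbar, b1

y = income_men + income_women

b0, b1 = fit(y, [0]*3 + [1]*3)   # men = 0: intercept is men's mean
c0, c1 = fit(y, [1]*3 + [0]*3)   # women = 0: intercept is women's mean

print(b0, b1)   # 40.0 5.0  -> men's mean, women minus men
print(c0, c1)   # 45.0 -5.0 -> women's mean, men minus women
# Fitted means agree either way: b0 + b1 == c0 and c0 + c1 == b0
```

Which intercept is 'significant' therefore depends on an arbitrary coding choice, which is why its significance is rarely of substantive interest.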