Suppose you add predictors to your model one by one, and you do so in a way that minimizes the multicollinearity issue - that is, you somehow make sure that each new predictor is uncorrelated with the predictors already in the model. That way, the correlation matrix of the predictors will always be diagonal, and multicollinearity won't be an issue - up to a point.
The thing is that, for a given sample size $N$, it's impossible to find more than $N$ linearly independent predictors (including the column of 1s for the intercept). That's because the rank of a design matrix $X$ of size $N \times p$ cannot be greater than $\min\{N, p\}$. No matter how you pick the predictors, if you use more than $N$ of them, $X'X$ is guaranteed to be non-invertible.
Based on what we know from linear algebra, the rank of $X'X$ cannot exceed the rank of $X$. So if $p > N$, the $p \times p$ matrix $X'X$ has rank at most $N$, but for it to be invertible its rank would have to be $p$.
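A minimal numpy sketch of this rank argument (the sizes $N = 5$ and $p = 8$ are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: more columns (p) than rows (N)
N, p = 5, 8
X = rng.normal(size=(N, p))
XtX = X.T @ X                      # p x p Gram matrix

print(np.linalg.matrix_rank(X))    # at most min(N, p) = 5
print(np.linalg.matrix_rank(XtX))  # same as rank(X), so 5 < p: XtX is singular
print(np.linalg.cond(XtX))         # huge condition number: not invertible in practice
```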
(1) Yes, leaving out a regressor can introduce bias into your regression, under certain conditions.
That is, if $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + U$ is the true data generating process and you estimate the regression using just $X_1$, then it can be shown that
$$ \hat \beta_1^\ast \overset{p}{\to} \beta_1 + \beta_2\frac{Cov(X_1,X_2)}{Var(X_1)}.$$
Thus, $\hat \beta_1^\ast$ will be biased only if $\beta_2\neq 0$ and $X_1$ and $X_2$ are correlated.
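A quick simulation sketch of this formula (the coefficients, the sample size, and the rule $X_2 = 0.5\,X_1 + \text{noise}$ are made-up values for illustration): the "short" regression of $Y$ on $X_1$ alone recovers roughly $\beta_1 + \beta_2\,Cov(X_1,X_2)/Var(X_1)$ rather than $\beta_1$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
beta0, beta1, beta2 = 1.0, 2.0, 3.0

# Correlated regressors: X2 depends on X1 (illustrative choice)
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = beta0 + beta1 * x1 + beta2 * x2 + rng.normal(size=n)

# "Short" regression of y on x1 only (with an intercept), omitting x2
X_short = np.column_stack([np.ones(n), x1])
b_short = np.linalg.lstsq(X_short, y, rcond=None)[0]

# Probability limit predicted by the omitted-variable-bias formula
cov = np.cov(x1, x2)
plim = beta1 + beta2 * cov[0, 1] / cov[0, 0]
print(b_short[1], plim)   # both close to 2 + 3 * 0.5 = 3.5, not to beta1 = 2
```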
(2) Collinearity will not bias your regression. However, it can make your regression hard or even impossible to estimate meaningfully.
In the extreme case, if you have perfect correlation between two variables, then you cannot even estimate the regression, because you will not be able to take the inverse of $(X'X)$ when calculating $\hat \beta = (X'X)^{-1}X'Y$.
However, if you have a very high level of correlation between two variables $X_1$ and $X_2$, then the variance of your two estimates $\hat\beta_1, \hat\beta_2$ will be high. Intuitively, the regression model doesn't really know whether to assign the effect of increasing the variables (which move together) to $\beta_1$ or to $\beta_2$. Thus, it becomes hard to show statistical significance of the coefficients, and the estimates can be very unstable.
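A small Monte Carlo sketch of this point (the sample size, correlation levels, and coefficient values are arbitrary choices): with correlation $0.99$ between $X_1$ and $X_2$, the OLS estimate of $\beta_1$ is still centered on the truth, but its sampling variability is far larger than with uncorrelated regressors.

```python
import numpy as np

rng = np.random.default_rng(2)

def slope_estimates(rho, n=200, reps=2000, beta=(1.0, 1.0, 1.0)):
    """Monte Carlo: OLS estimates of beta1 when X1, X2 have correlation rho."""
    b1 = np.empty(reps)
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    for r in range(reps):
        X12 = rng.normal(size=(n, 2)) @ L.T          # correlated regressors
        X = np.column_stack([np.ones(n), X12])       # add intercept column
        y = X @ np.array(beta) + rng.normal(size=n)
        b1[r] = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return b1

for rho in (0.0, 0.99):
    est = slope_estimates(rho)
    print(f"rho={rho}: mean of beta1_hat = {est.mean():.3f}, sd = {est.std():.3f}")
# Both means are close to 1 (no bias), but the sd is far larger when rho = 0.99.
```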
Furthermore, the coefficients in the estimated regression may not be interpretable. Typically, we interpret a coefficient as saying "all else equal, a one unit increase in $X_1$ will result in ....". However, a one unit increase in $X_1$ with all else equal essentially never happens, because $X_1$ and $X_2$ are so strongly correlated.
Re your 1st question: Collinearity does not make the estimators biased or inconsistent; it just makes them subject to the problems Greene lists (with @whuber's comments for clarification).
Re your 3rd question: High collinearity can exist with moderate correlations; e.g. if we have 9 iid variables and one that is the sum of the other 9, no pairwise correlation will be high but there is perfect collinearity.
Collinearity is a property of sets of independent variables, not just pairs of them.
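A numerical sketch of the 9-variables-plus-their-sum example above (the normal draws and sample size are arbitrary choices): the largest pairwise correlation is only about $1/3$, yet the 10-column design matrix has rank 9, i.e. the columns are perfectly collinear.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 1000
Z = rng.normal(size=(n, 9))               # 9 iid predictors
X = np.column_stack([Z, Z.sum(axis=1)])   # 10th column = sum of the other 9

corr = np.corrcoef(X, rowvar=False)
off_diag = corr[~np.eye(10, dtype=bool)]
print(np.abs(off_diag).max())             # largest pairwise correlation ~ 0.33
print(np.linalg.matrix_rank(X))           # rank 9 < 10: perfect collinearity
```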