The usual separation of variables in polar coordinates produces
$$\phi \left( r,\theta \right) =\sum _{n=0}^{\infty } \left( A_{{n}}\sin
\left( n\theta \right) +B_{{n}}\cos \left( n\theta \right) \right)
\left( {r}^{n}C_{{n}}+{r}^{-n}E_{{n}} \right) +F\ln \left( r
\right)$$
which is rewritten as
$$\phi \left( r,\theta \right) =\sum _{n=1}^{\infty } \left( A_{{n}}\sin
\left( n\theta \right) +B_{{n}}\cos \left( n\theta \right) \right)
\left( {r}^{n}C_{{n}}+{r}^{-n}E_{{n}} \right) +G+ F\ln \left( r
\right)$$
Applying the boundary condition at $r=R_1$ we obtain
$$\sum _{n=1}^{\infty } \left( A_{{n}}\sin \left( n\theta \right) +B_{{n
}}\cos \left( n\theta \right) \right) \left( {R_{{1}}}^{n}C_{{n}}+{R
_{{1}}}^{-n}E_{{n}} \right) +F\ln \left( R_{{1}} \right) +G =0
$$
From this last equation we derive that $G=-F\ln(R_1)$ and
$${R_{{1}}}^{n}C_{{n}}+{R_{{1}}}^{-n}E_{{n}}= 0$$
Then we have
$$ E_{{n}}=-{R_{{1}}}^{2\,n}C_{{n}} $$
Absorbing the constant $C_n$ into $A_n$ and $B_n$, the solution takes the form
$$\phi \left( r,\theta \right) =\sum _{n=1}^{\infty } \left( A_{{n}}\sin
\left( n\theta \right) +B_{{n}}\cos \left( n\theta \right) \right)
\left( {r}^{n}-{r}^{-n}{R_{{1}}}^{2\,n} \right)+F\ln \left( \frac{r}{R_{{1}}} \right)
$$
Applying the boundary condition at $r=R_2$ we obtain
$$\sum _{n=1}^{\infty } \left( A_{{n}}\sin \left( n\theta \right) +B_{{n
}}\cos \left( n\theta \right) \right) n \left( {R_{{2}}}^{n-1}+{R_{{2
}}}^{-n-1}{R_{{1}}}^{2\,n} \right)+F/R_2 =f \left( \theta \right)
$$
Finally, the constants $F$, $A_n$ and $B_n$ are determined from the standard Fourier series expressions.
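Written out for completeness, matching Fourier coefficients on both sides of the boundary condition at $r=R_2$ gives
$$
F = \frac{R_{{2}}}{2\pi}\int_{0}^{2\pi} f \left( \theta \right) d\theta,\qquad
A_{{n}} = \frac{1}{\pi\,n \left( {R_{{2}}}^{n-1}+{R_{{2}}}^{-n-1}{R_{{1}}}^{2\,n} \right)}\int_{0}^{2\pi} f \left( \theta \right) \sin \left( n\theta \right) d\theta,
$$
$$
B_{{n}} = \frac{1}{\pi\,n \left( {R_{{2}}}^{n-1}+{R_{{2}}}^{-n-1}{R_{{1}}}^{2\,n} \right)}\int_{0}^{2\pi} f \left( \theta \right) \cos \left( n\theta \right) d\theta.
$$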
The matrix has incomplete rank, so the system is solvable only if
$$
\operatorname{rank} M = \operatorname{rank} \begin{pmatrix}M \;\big|\; f\end{pmatrix}.
$$
Due to truncation error, the latter may fail to hold (though it would hold if you were using a conservative approximation, I suppose).
You can use QR decomposition to deal with that. Suppose $M = QR$, where $Q$ is orthogonal and $R$ is upper triangular. The rank of $R$ equals the rank of $M$, which is one less than the number of its rows. So
$$
R = \begin{pmatrix}
\star & \star & \dots & \star & \star\\
0 & \star & \dots & \star & \star\\
0 & 0 & \ddots & \star & \star\\
0 & 0 & \dots & \star & \star\\
0 & 0 & \dots & 0 & 0\\
\end{pmatrix} = \begin{pmatrix}R' & h\\0&0\end{pmatrix}
$$
Solving the system $Mx = f$ is equivalent to solving the system
$$
Rx=Q^\top f \equiv \begin{pmatrix}z'\\\alpha\end{pmatrix}, \quad \alpha \in \mathbb R.
$$
For the system to have a solution, the last element $\alpha$ of $Q^\top f$ should be zero. Even if it's not, let's set it to zero. Let's also fix the last element of $x$ to zero, since it can be any number.
$$
x = \begin{pmatrix}x'\\0\end{pmatrix}
$$
Now we have a nonsingular triangular system
$$
R' x' = z'
$$
which can easily be solved.
Note that for the scheme you're using the matrix $M$ is sparse, so you need a sparse version of the QR decomposition. Also, since $Q$ is generally dense even when $M$ is sparse, you should perform the QR decomposition on the extended matrix $(M \mid f)$ to compute both $R = Q^\top M$ and $Q^\top f$ at the same time.
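As a dense-matrix illustration of the procedure above, here is a minimal sketch in Python/NumPy (not the Matlab implementation discussed here); a small path-graph Laplacian stands in for $M$, and SciPy's column-pivoted QR is used so that the numerically zero pivot lands in the last row of $R$:

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

# Path-graph Laplacian: symmetric and singular, with the all-ones
# vector spanning its null space (every row and column sums to zero).
n = 5
M = 2.0 * np.eye(n)
M[0, 0] = M[-1, -1] = 1.0
M += np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)

f = np.array([1.0, 0.0, 0.0, 0.0, -1.0])  # consistent RHS: orthogonal to ones

# Column-pivoted (rank-revealing) QR: M[:, piv] = Q @ R
Q, R, piv = qr(M, pivoting=True)
assert abs(R[-1, -1]) < 1e-10             # last pivot ~ 0, so rank(M) = n - 1

z = Q.T @ f                               # z = (z', alpha)
x = np.zeros(n)                           # fix the last unknown to zero ...
x[:-1] = solve_triangular(R[:-1, :-1], z[:-1])  # ... and solve R' x' = z'

u = np.empty(n)
u[piv] = x                                # undo the column permutation
print(np.max(np.abs(M @ u - f)))          # residual at machine precision
```

Because $f$ here is consistent ($\alpha \approx 0$), the recovered $u$ satisfies $Mu = f$ to rounding error; for an inconsistent $f$, the same code returns the regularized solution.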
Appendix
I've implemented this method in Matlab, and also the method of one-equation elimination. For consistent problems they give the same solution.
For testing purposes I used square domain $[-1,1]\times[-1,1]$ and the following problem
$$
\Delta u = 0\\
\frac{\partial u}{\partial n} = g(x, y), \quad (x,y) \in \partial G.
$$
This problem has well known solvability criterion, that is
$$
0 = \int_G \Delta u \, dx \, dy = \int_{\partial G} \frac{\partial u}{\partial n}\, d\Gamma = \int_{\partial G} g \, d\Gamma
$$
Due to numerical errors this can never hold exactly, so from a strict mathematical point of view the numerical problem is always ill-posed. We therefore have to deal with a regularized problem, that is, to solve a problem with
$$
\int_{\partial G} g d \Gamma \neq 0.
$$
That's where the methods differ: they eliminate the inconsistency in different ways.
Equation elimination
When we are using equation elimination, that is removing one governing equation and replacing it with
$$
u_{i,j} = 0
$$
for example, we state that all the inconsistency should be eliminated through the $(i,j)$ point. In fact, the real problem we are solving now is
$$
\Delta u = \delta(x - x_i, y - y_i) \int_{\partial G} g d\Gamma\\
\frac{\partial u}{\partial n} = g(x, y), \quad (x,y) \in \partial G.
$$
so we put a source or a sink at $(x_i, y_i)$ to resolve the inconsistency. For the test problem I want $u(x,y) = \frac{x^2}{2} - \frac{y^2}{2}$ to be the solution, so
$$
g(-1, y) = 1,\quad g(1, y) = 1\\
g(x, -1) = -1, \quad g(x, 1) = -1
$$
To demonstrate the effect of inconsistency I take $g(x, -1) = 0.5$ instead.
Note the pole-like spike in the middle of the resulting plot: that is the point carrying the $u_{i,j} = 0$ equation. At every other point the function satisfies $\Delta u = 0$ and the boundary conditions.
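A one-dimensional analogue makes this effect easy to verify (a hypothetical Python sketch, not the Matlab code mentioned above: $u'' = 0$ on $[0,1]$ with Neumann data $u'(0)=g_0$, $u'(1)=g_1$, chosen deliberately inconsistent):

```python
import numpy as np

# Discretize u'' = 0 on [0,1] with Neumann conditions u'(0) = g0, u'(1) = g1.
# The resulting matrix is singular: every row sums to zero.
n = 21
h = 1.0 / (n - 1)
g0, g1 = 1.0, 1.5                         # inconsistent: solvability needs g1 = g0

M = np.zeros((n, n))
f = np.zeros(n)
for i in range(1, n - 1):                 # interior: u_{i-1} - 2 u_i + u_{i+1} = 0
    M[i, i - 1:i + 2] = [1.0, -2.0, 1.0]
M[0, :2] = [-1.0, 1.0]
f[0] = h * g0                             # (u_1 - u_0)/h = g0
M[-1, -2:] = [-1.0, 1.0]
f[-1] = h * g1                            # (u_{n-1} - u_{n-2})/h = g1

m = n // 2                                # eliminate one equation: u_m = 0
M[m, :] = 0.0
M[m, m] = 1.0
f[m] = 0.0

u = np.linalg.solve(M, f)                 # the system is now nonsingular
# All the inconsistency is dumped into node m: u is piecewise linear with
# slope g0 to the left of m and slope g1 to the right, i.e. a kink at m.
```

Away from the eliminated node the discrete equation holds exactly; the entire defect $g_1 - g_0$ shows up as the kink at node $m$.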
QR method
Again, let
$$
Mu = f
$$
be decomposed as
$$
Mu \equiv QRP^\top u = f\\
RP^\top u = Q^\top f.
$$
I've added a permutation matrix $P$ to allow column reordering of the matrix $M$, which greatly improves sparse QR.
Let again,
$$
R = \begin{pmatrix}
R' & h\\
0 & 0
\end{pmatrix}, \quad Q^\top f \equiv z = \begin{pmatrix}
z'\\\alpha
\end{pmatrix}
$$
So we state that
$$
x = \begin{pmatrix}
R'^{-1} z'\\
0
\end{pmatrix}
$$
is a regularized solution to
$$
Rx = z \Leftrightarrow Mx = f
$$
since $Mx = f$ simply does not have a solution. But what is the real problem we are solving now?
Recall that $M$ is singular; we also know its left null vector. Let $\omega = \frac{1}{\sqrt{N}}(1, 1, \dots, 1)^\top$, with $N$ the number of rows of $M$. Then
$$
\omega^\top M
$$
is simply the (normalized) sum of the rows of $M$, and that is zero (strictly speaking, this is implementation-dependent, but the idea remains the same). So the last row of $R$ is nothing more than $\omega^\top M$, and $\alpha = \omega^\top f \sim \int_{\partial G} g \, d\Gamma$ (up to a constant factor).
If we plug the regularized solution $x$ to the system $Rx = z$ we observe a residual:
$$
R x \neq z,\quad R x = z + \begin{pmatrix}0\\-\alpha\end{pmatrix}
$$
and by multiplying by $Q$ we have
$$
Mx = f - \alpha \omega
$$
since $\omega$ is exactly the last column of $Q$. So now we are solving a problem where the inconsistency is smeared over the whole domain $G$ and its boundary $\partial G$; roughly speaking,
$$
\Delta u = -\frac{1}{|G|} \int_{\partial G} g d\Gamma
$$
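This smearing can be checked numerically on a small symmetric singular system (a hypothetical Python sketch: a path-graph Laplacian stands in for $M$, so the left null vector $\omega$ is the normalized all-ones vector):

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

# Path-graph Laplacian: symmetric, singular, left null vector = ones/sqrt(n).
n = 6
M = 2.0 * np.eye(n)
M[0, 0] = M[-1, -1] = 1.0
M += np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)

f = np.array([1.0, 0.0, 0.0, 0.0, 0.0, -0.5])  # inconsistent: sum(f) != 0

Q, R, piv = qr(M, pivoting=True)        # M[:, piv] = Q @ R
z = Q.T @ f
alpha = z[-1]                           # alpha = omega^T f, up to sign

x = np.zeros(n)                         # regularized solution: last unknown = 0
x[:-1] = solve_triangular(R[:-1, :-1], z[:-1])
u = np.empty(n)
u[piv] = x                              # undo the column permutation

resid = M @ u - f                       # equals -alpha * (last column of Q),
print(resid)                            # i.e. a constant vector ~ alpha * omega
```

The residual is the same at every node: the total inconsistency $\sum_i f_i$ has been spread uniformly over all equations, which is the discrete counterpart of the smoothed right-hand side above.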
This method applied to the same problem gives the following solution.
It looks fine, but actually the equation is violated (slightly) at every node, and the boundary conditions are also slightly violated.
Finally, there is not much difference which method you use if you are solving a well-posed problem, such as one arising from
$$
\nabla \phi = \mathbf E \Rightarrow \Delta \phi = \operatorname{div} \mathbf E.
$$
You can find the Matlab code I used here.
Best Answer
There should be a radial solution to this $\theta$-invariant problem. The only radial solutions are of the form $A+B\ln (r)$, and a solution is found once the boundary conditions are met:
$$ A+B\ln(r_1)=C_1 \\ A+B\ln(r_2)=C_2. $$
The constant $B$ is determined by $B\ln(r_2/r_1)=C_2-C_1$:
$$ B = \frac{C_2-C_1}{\ln(r_2)-\ln(r_1)}. $$
Then $A$ is determined by looking at $r=r_1$ or $r=r_2$:
$$ A +\frac{C_2-C_1}{\ln(r_2)-\ln(r_1)}\ln(r_1)=C_1. $$
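Solving for $A$ and substituting back gives the closed form
$$
\phi \left( r \right) = C_{{1}} + \left( C_{{2}}-C_{{1}} \right) \frac{\ln \left( r/r_{{1}} \right)}{\ln \left( r_{{2}}/r_{{1}} \right)},
$$
which indeed equals $C_1$ at $r=r_1$ and $C_2$ at $r=r_2$.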