The matrix has incomplete rank, so the system is solvable only if
$$
\operatorname{rank} M = \operatorname{rank} \begin{pmatrix}M \;\big|\; f\end{pmatrix}.
$$
Due to truncation error, the latter may not hold (though it would hold if you were using a conservative approximation, I suppose).
You can use the QR decomposition to deal with that. Suppose $M = QR$, where $Q$ is orthogonal and $R$ is an upper triangular matrix. The rank of $R$ equals the rank of $M$, which is one less than the number of its rows. So
$$
R = \begin{pmatrix}
\star & \star & \dots & \star & \star\\
0 & \star & \dots & \star & \star\\
0 & 0 & \ddots & \star & \star\\
0 & 0 & \dots & \star & \star\\
0 & 0 & \dots & 0 & 0\\
\end{pmatrix} = \begin{pmatrix}R' & h\\0&0\end{pmatrix}
$$
Solving the system $Mx = f$ is equivalent to solving the system
$$
Rx=Q^\top f \equiv \begin{pmatrix}z'\\\alpha\end{pmatrix}, \alpha \in \mathbb R.
$$
For the system to have a solution, the last element $\alpha$ of $Q^\top f$ should be zero. Even if it is not, let us set it to zero. Let us also set the last element of $x$ to zero, since it can be any number.
$$
x = \begin{pmatrix}x'\\0\end{pmatrix}
$$
Now we have a nonsingular triangular system
$$
R' x' = z'
$$
which can easily be solved.
Note that for the scheme you're using the matrix $M$ is sparse, so you need a sparse version of the QR decomposition. Also, since $Q$ is generally not sparse even when $M$ is, you should perform the QR decomposition on the extended matrix $(M \mid f)$ to compute both $R = Q^\top M$ and $Q^\top f$ at the same time.
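The post's implementation is in Matlab, but the truncated-QR solve is easy to sketch with dense NumPy on a toy stand-in (the matrix, right-hand side, and size below are invented for illustration):

```python
import numpy as np

# Toy rank-deficient system: 1D Neumann Laplacian.
# Its rank is n - 1 and its (left and right) null vector is (1, ..., 1).
n = 5
M = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M[0, 0] = M[-1, -1] = 1.0

f = np.array([1.0, 0.2, 0.0, 0.0, -1.0])  # sum(f) != 0, so Mx = f has no solution

Q, R = np.linalg.qr(M)  # rank deficiency shows up as a ~0 last diagonal of R
z = Q.T @ f
alpha = z[-1]           # the incompatible part of the right-hand side

# Truncate: drop the (numerically zero) last row and column of R,
# and fix the last unknown to zero.
Rp, zp = R[:-1, :-1], z[:-1]
x = np.append(np.linalg.solve(Rp, zp), 0.0)

# x now solves the regularized problem M x = f - alpha * q_n exactly,
# where q_n is the last column of Q.
```

Here the rank deficiency happens to land in the last row of $R$ without pivoting; in general (and in sparse QR) a column permutation takes care of that.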
Appendix
I've implemented this method in Matlab, along with the method of one-equation elimination. For consistent problems they give the same solution.
For testing purposes I used square domain $[-1,1]\times[-1,1]$ and the following problem
$$
\Delta u = 0\\
\frac{\partial u}{\partial n} = g(x, y), \quad (x,y) \in \partial G.
$$
This problem has a well-known solvability criterion, namely
$$
0 = \int_G \Delta u \,dx\,dy = \int_{\partial G} \frac{\partial u}{\partial n}\, d\Gamma = \int_{\partial G} g \,d\Gamma.
$$
Due to numerical errors it can never be achieved exactly, so from a strict mathematical point of view the discrete problem is always ill-posed. So we have to deal with a regularized problem, that is, to solve a problem with
$$
\int_{\partial G} g d \Gamma \neq 0.
$$
That's where the methods differ. They eliminate the inconsistency in various ways.
Equation elimination
When we use equation elimination, that is, we remove one governing equation and replace it with
$$
u_{i,j} = 0
$$
for example, we state that all the inconsistency should be eliminated through the $(i,j)$ point. In fact, the real problem we are now solving is
$$
\Delta u = \delta(x - x_i, y - y_i) \int_{\partial G} g d\Gamma\\
\frac{\partial u}{\partial n} = g(x, y), \quad (x,y) \in \partial G.
$$
so we put a source or a sink at $(x_i, y_i)$ to resolve the inconsistency. For the test problem I want $u(x,y) = \frac{x^2}{2} - \frac{y^2}{2}$ to be the solution; since $\nabla u = (x, -y)$, dotting with the outward normal on each side gives
$$
g(-1, y) = 1,\quad g(1, y) = 1\\
g(x, -1) = -1, \quad g(x, 1) = -1
$$
To demonstrate the effect of inconsistency I take $g(x, -1) = 0.5$ instead.
Note the pole-like feature in the middle: that was the point carrying the $u_{i,j} = 0$ equation. At every other point the function satisfies $\Delta u = 0$ and the boundary conditions.
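The same effect is easy to reproduce in one dimension; the NumPy sketch below (Neumann-Laplacian matrix and right-hand side invented for illustration) shows that all of the inconsistency is dumped into the eliminated equation:

```python
import numpy as np

# 1D Neumann Laplacian: rank n - 1, rows (and columns) sum to zero
n = 5
M = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M[0, 0] = M[-1, -1] = 1.0

f = np.array([1.0, 0.2, 0.0, 0.0, -1.0])  # sum(f) != 0: inconsistent system

# Equation elimination: replace the k-th governing equation with u_k = 0
k = 2
M_mod, f_mod = M.copy(), f.copy()
M_mod[k, :] = 0.0
M_mod[k, k] = 1.0
f_mod[k] = 0.0

u = np.linalg.solve(M_mod, f_mod)
r = M @ u - f  # residual of the ORIGINAL singular system
# r is zero in every row except k, where it absorbs the whole defect -sum(f)
```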
QR method
Again, let
$$
Mu = f
$$
be decomposed as
$$
Mu \equiv QRP^\top u = f\\
RP^\top u = Q^\top f.
$$
I've added a permutation matrix $P$ to allow column reordering of the matrix $M$, which improves the sparsity of the sparse QR decomposition a lot.
Let again,
$$
R = \begin{pmatrix}
R' & h\\
0 & 0
\end{pmatrix}, \quad Q^\top f = \begin{pmatrix}
z'\\\alpha
\end{pmatrix}
$$
So we state that
$$
x = \begin{pmatrix}
R'^{-1} z'\\
0
\end{pmatrix}
$$
is a regularized solution to
$$
Rx = z \Leftrightarrow Mx = f
$$
since $Mx = f$ simply does not have a solution. But what is the real problem we are solving now?
Recall that $M$ is singular; we also know its left null vector. Let $\omega = \frac{1}{\sqrt n}(1, 1, \dots, 1)^\top$ be the normalized vector of ones. Then
$$
\omega^\top M
$$
is, up to the factor $\frac{1}{\sqrt n}$, the sum of the rows of $M$, and that sum is zero (actually this is implementation-dependent, but the idea remains the same). So the last row of $R$ is nothing more than $\omega^\top M$, and $\alpha = \omega^\top f \sim \int_{\partial G} g \, d\Gamma$ (up to multiplication by a constant).
If we plug the regularized solution $x$ to the system $Rx = z$ we observe a residual:
$$
R x \neq z,\quad R x = z + \begin{pmatrix}0\\-\alpha\end{pmatrix}
$$
and by multiplying by $Q$ we have
$$
Mx = f - \alpha \omega
$$
since $\omega$ is exactly the last column of $Q$. So now we are solving a problem where the inconsistency is smoothed over the whole domain $G$ and its boundary $\partial G$; roughly speaking,
$$
\Delta u = -\frac{1}{|G|} \int_{\partial G} g d\Gamma
$$
This method applied to the same problem gives the following solution
It looks fine but, actually, the equation is (slightly) violated at every node, and the boundary conditions are also (slightly) violated.
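The contrast between the two regularizations can be checked numerically. In this NumPy comparison on a toy 1D Neumann system (invented for illustration), equation elimination concentrates the residual at one node, while the QR method spreads it evenly over all equations:

```python
import numpy as np

n = 5
M = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M[0, 0] = M[-1, -1] = 1.0
f = np.array([1.0, 0.2, 0.0, 0.0, -1.0])  # inconsistent: sum(f) != 0

# QR regularization: the defect is spread evenly over all equations
Q, R = np.linalg.qr(M)
z = Q.T @ f
x_qr = np.append(np.linalg.solve(R[:-1, :-1], z[:-1]), 0.0)
r_qr = M @ x_qr - f  # each entry equals -sum(f)/n

# Equation elimination: the defect is concentrated at row k
k = 2
Mm, fm = M.copy(), f.copy()
Mm[k, :] = 0.0
Mm[k, k] = 1.0
fm[k] = 0.0
r_el = M @ np.linalg.solve(Mm, fm) - f  # zero except at row k
```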
Finally, there is not much difference in which method you use if you're solving a well-posed problem, such as one arising from
$$
\nabla \phi = \mathbf E \Rightarrow \Delta \phi = \operatorname{div} \mathbf E.
$$
You can find the Matlab code I used here.
First off, $\Sigma$ has the wrong dimensions. Since $C$ is a $3 \times 2$ matrix, $\Sigma$ should be too:
$$\Sigma=\left[\begin{matrix}1 & 0 \\ 0 & \sqrt 6 \\ 0 & 0\end{matrix}\right]$$
Next, you need to normalize $V$:
$$V=\left[\begin{matrix}-\frac{2}{\sqrt 5} & \frac{1}{\sqrt 5} \\ \frac{1}{\sqrt 5} & \frac{2}{\sqrt 5}\end{matrix}\right]$$
Now, we have:
$$U\Sigma=CV=\left[\begin{matrix}0 & \sqrt 5 \\ \frac{1}{\sqrt 5} & \frac{2}{\sqrt 5} \\ \frac{2}{\sqrt 5} & -\frac{1}{\sqrt 5}\end{matrix}\right]$$
Now, since the above is $U\Sigma$ and the first column of $\Sigma$ is $e_1$, the first column of this matrix must be the first column of $U$. Also, since the second column of $\Sigma$ is $\sqrt 6e_2$, the second column of $U$ must be the second column of $CV$ divided by $\sqrt 6$, so we get:
$$U=\left[\begin{matrix}0 & \sqrt{\frac 5 6} & ?? \\ \frac{1}{\sqrt 5} & \sqrt{\frac 2 {15}} & ?? \\ \frac{2}{\sqrt 5} & -\frac{1}{\sqrt 30} & ??\end{matrix}\right]$$
Finally, since $C$ has rank $2$ but $m=3$ rows, you need to pad the last column of $U$ with an orthogonal vector from the cokernel so that $U$ is an orthogonal matrix. Since the last column is orthogonal to the first two columns, it must be in the null space of the following matrix:
$$\left[\begin{matrix} 0 & \frac 1 {\sqrt 5} & \frac{2}{\sqrt 5} \\ \sqrt{\frac 5 6} & \sqrt{\frac 2 {15}} & -\frac{1}{\sqrt 30}\end{matrix}\right]$$
Basically, I just turned the two columns of $U$ into row vectors of the above matrix so that the null space of the above matrix is orthogonal to the columns of $U$. Now, find the RREF of the above matrix:
$$\left[\begin{matrix}1 & 0 & -1 \\ 0 & 1 & 2\end{matrix}\right]$$
From here, it should be clear that $(1,-2,1)^t$ spans the null space of this matrix. However, for $U$ to be orthogonal, we need this to be a unit vector, so normalize it by dividing by $\sqrt 6$:
$$\frac{1}{\sqrt{6}}\left[\begin{matrix}1 \\ -2 \\ 1\end{matrix}\right]=\left[\begin{matrix}\frac{1}{\sqrt 6} \\ -\sqrt{\frac 2 3} \\ \frac{1}{\sqrt 6}\end{matrix}\right]$$
Finally, we insert this column into $U$ to get:
$$U=\left[\begin{matrix}0 & \sqrt{\frac 5 6} & \frac{1}{\sqrt 6} \\ \frac{1}{\sqrt 5} & \sqrt{\frac 2 {15}} & -\sqrt{\frac 2 3} \\ \frac{2}{\sqrt 5} & -\frac{1}{\sqrt 30} & \frac{1}{\sqrt 6}\end{matrix}\right]$$
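As a sanity check (not part of the derivation), the three factors can be multiplied back together in NumPy to confirm they form a valid SVD:

```python
import numpy as np

# The factors constructed above
U = np.array([[0.0,             np.sqrt(5 / 6),   1 / np.sqrt(6)],
              [1 / np.sqrt(5),  np.sqrt(2 / 15), -np.sqrt(2 / 3)],
              [2 / np.sqrt(5), -1 / np.sqrt(30),  1 / np.sqrt(6)]])
S = np.array([[1.0, 0.0],
              [0.0, np.sqrt(6)],
              [0.0, 0.0]])
V = np.array([[-2 / np.sqrt(5), 1 / np.sqrt(5)],
              [ 1 / np.sqrt(5), 2 / np.sqrt(5)]])

C = U @ S @ V.T  # reconstruct C; its singular values should be sqrt(6) and 1
```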
Best Answer
Here is an outline of the proof. Various steps are missing justification; you should fill in the blanks. Let $A = U\Sigma V^\top$. $$\|A\|_2^2 = \max_{x : \|x\|_2 = 1} \|Ax\|_2^2 = \max_{x : \|x\|_2 = 1} x^\top A^\top A x = \max_{x : \|x\|_2 = 1} x^\top V \Sigma^\top \Sigma V^\top x = \max_{y : \|y\|_2 = 1} y^\top \Sigma^\top \Sigma y = \max_{y : \|y\|_2 = 1} \sum_{i=1}^r \sigma_i^2(A) y_i^2 = \sigma^2_{\max}(A).$$
If $A$ is invertible, then $\Sigma$ is square and diagonal with nonzero diagonal entries, so $\Sigma^{-1}$ exists and $A^{-1} = V \Sigma^{-1} U^\top$ (why?). Then repeat the above argument to get $\|A^{-1}\|_2 = 1/\sigma_{\min}(A)$, since the largest singular value of $A^{-1}$ is the reciprocal of the smallest singular value of $A$.
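Both identities are easy to spot-check numerically; a quick NumPy sketch (the random test matrix is chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # generic square matrix, invertible a.s.
s = np.linalg.svd(A, compute_uv=False)   # singular values, in decreasing order

# ||A||_2 equals the largest singular value;
# ||A^{-1}||_2 equals the reciprocal of the smallest one.
norm_A = np.linalg.norm(A, 2)
norm_A_inv = np.linalg.norm(np.linalg.inv(A), 2)
```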