Once you have the singular-value decomposition (SVD) of the matrix, finding its pseudoinverse is easy.
Steps:
Step 1:
Find the eigenvalues of $A^TA$ or $AA^T$, whichever is smaller.
In this case, $A^TA$ is the smaller one ($4\times 4$ rather than $6\times 6$).
Step 2:
Extract the singular values and form the matrix $\Sigma$: place the square root of each eigenvalue on the diagonal, then pad with additional zero rows so that $\Sigma$ has the same shape as $A$.
The eigenvalues of $A^TA$ are $\sigma_1^2=34$, $\sigma_2^2=6$, $\sigma_3^2=0$, $\sigma_4^2=0$, and the shape of $\Sigma$ equals the shape of $A$, namely $6\times4$.
$\therefore \Sigma=\left[\begin{matrix}\sigma_1 & 0 & 0 & 0\\0 & \sigma_2 & 0 & 0\\0 & 0 & \sigma_3 & 0\\0 & 0 & 0 & \sigma_4\\0 & 0 & 0 & 0\\0 & 0 & 0 & 0\end{matrix}\right]=\left[\begin{matrix}5.8310 & 0 & 0 & 0\\0 & 2.4495 & 0 & 0\\0 & 0 & 0.0000 & 0\\0 & 0 & 0 & 0.0000\\0 & 0 & 0 & 0\\0 & 0 & 0 & 0\end{matrix}\right]$.
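Steps 1–2 can be sketched in NumPy. This is only a sketch of the bookkeeping: the eigenvalues are the ones computed above, and the shape $(6, 4)$ comes from $A$.

```python
import numpy as np

# Eigenvalues of A^T A found in Step 1, and the shape of A (6 x 4).
eigvals = [34.0, 6.0, 0.0, 0.0]
m, n = 6, 4

# Sigma has the same shape as A, with the singular values
# (square roots of the eigenvalues) on its diagonal.
Sigma = np.zeros((m, n))
np.fill_diagonal(Sigma, np.sqrt(eigvals))
```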
Step 3:
Form $V$ as the matrix whose columns are linearly independent orthonormal vectors spanning the domain of $A^TA$.
Since the orthonormal eigenvectors of the symmetric matrix $A^TA$ span its domain, let's find the eigenvectors of $A^TA$.
The eigenvector corresponding to $\lambda_1=34$ is $v_1=\left[\begin{matrix}0.0648\\0.2593\\-0.3241\\-0.9075\end{matrix}\right]$.
The eigenvector corresponding to $\lambda_2=6$ is $v_2=\left[\begin{matrix}-0.8018\\0.5345\\0.2673\\0.0\end{matrix}\right]$.
The eigenvector corresponding to $\lambda_3=0$ is $v_3=\left[\begin{matrix}0.5774\\0.5774\\0.5774\\0.0\end{matrix}\right]$.
Notice that the eigenvalue computation above produced only one eigenvector for the repeated eigenvalue $0$, so a fourth independent vector is still needed. The remaining vector that completes an orthonormal basis of the domain can be found with the Gram–Schmidt process: take an arbitrary vector and subtract its components along the orthonormal vectors already found. (Since the null space of $A^TA$ is the orthogonal complement of $\operatorname{span}(v_1,v_2)$, the resulting vector is automatically an eigenvector for $\lambda_4=0$.)
Therefore, by taking a random vector $r_4=\left[\begin{matrix}1\\1\\1\\1\end{matrix}\right]$,
$v'_4=r_4-(r_4.v_1)v_1-(r_4.v_2)v_2-(r_4.v_3)v_3$ and $v_4=\frac{v'_4}{\left|v'_4\right|}$.
$\Longrightarrow v_4=\left[\begin{matrix}0.14\\0.5602\\-0.7001\\0.42\end{matrix}\right]$
$\therefore V=\left[\begin{matrix}v_1&v_2&v_3&v_4\end{matrix}\right]=\left[\begin{matrix}0.0648 & -0.8018 & 0.5774 & 0.14\\0.2593 & 0.5345 & 0.5774 & 0.5602\\-0.3241 & 0.2673 & 0.5774 & -0.7001\\-0.9075 & 0.0 & 0.0 & 0.42\end{matrix}\right]$.
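The Gram–Schmidt completion used here can be sketched as a small NumPy helper. It is a minimal sketch, assuming the standard basis vectors as the "random" candidates (the text used $r_4=[1,1,1,1]^T$ instead), so it may produce a different but equally valid $v_4$.

```python
import numpy as np

def complete_basis(vectors, n):
    """Extend a list of (approximately) orthonormal length-n vectors to an
    orthonormal basis of R^n via Gram-Schmidt against standard basis vectors."""
    basis = [np.asarray(v, dtype=float) for v in vectors]
    for i in range(n):
        r = np.eye(n)[i]                      # candidate vector e_i
        for b in basis:                       # subtract existing components
            r = r - (r @ b) * b
        if np.linalg.norm(r) > 1e-10:         # keep whatever is left over
            basis.append(r / np.linalg.norm(r))
        if len(basis) == n:
            break
    return np.column_stack(basis)

# Completing v1, v2, v3 from above (entries rounded to four decimals):
v1 = [0.0648, 0.2593, -0.3241, -0.9075]
v2 = [-0.8018, 0.5345, 0.2673, 0.0]
v3 = [0.5774, 0.5774, 0.5774, 0.0]
V = complete_basis([v1, v2, v3], 4)
```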
Step 4:
Form $U$ as the matrix whose columns are orthonormal eigenvectors of $AA^T$, chosen so that the equation $A=U\Sigma V^T$ is satisfied.
Let $U=\left[\begin{matrix}u_1&u_2&u_3&u_4&u_5&u_6\end{matrix}\right]$, where $u_1,u_2,u_3,u_4,u_5,u_6$ are the columns of $U$.
Now, $A=U\Sigma V^T \Longrightarrow AV=U\Sigma$
$\Longrightarrow \left[\begin{matrix}-2.2039 & 1.0691 & 0.0 & 0.0\\1.102 & 1.3363 & 0.0 & 0.0\\-3.3059 & -0.2672 & 0.0 & 0.0\\3.3059 & 0.2672 & 0.0 & 0.0\\-1.102 & -1.3363 & 0.0 & 0.0\\2.2039 & -1.0691 & 0.0 & 0.0\end{matrix}\right]=\left[\begin{matrix}\sigma_1 u_1& \sigma_2 u_2&0&0\end{matrix}\right]$.
$\Longrightarrow u_1=\frac{1}{\sqrt{34}}\left[\begin{matrix}-2.2039\\1.102\\-3.3059\\3.3059\\-1.102\\2.2039\end{matrix}\right]=\left[\begin{matrix}-0.378\\0.189\\-0.567\\0.567\\-0.189\\0.378\end{matrix}\right]$ and $u_2=\frac{1}{\sqrt{6}}\left[\begin{matrix}1.0691\\1.3363\\-0.2672\\0.2672\\-1.3363\\-1.0691\end{matrix}\right]=\left[\begin{matrix}0.4365\\0.5455\\-0.1091\\0.1091\\-0.5455\\-0.4365\end{matrix}\right]$.
To find $u_3,u_4,u_5$ and $u_6$, we again use the Gram–Schmidt process.
Taking the random vector $rv=\left[\begin{matrix}1\\2\\3\\4\\5\\6\end{matrix}\right]$,
$u'_3=rv-(rv.u_1)u_1-(rv.u_2)u_2$ and $u_3=\frac{u'_3}{\left|u'_3\right|} \Longrightarrow u_3=\left[\begin{matrix}0.3884\\0.4272\\0.4272\\0.3884\\0.3884\\0.4272\end{matrix}\right]$.
Similarly, taking the random vectors $rv_4=\left[\begin{matrix}1\\0\\0\\0\\0\\0\end{matrix}\right]$, $rv_5=\left[\begin{matrix}1\\1\\0\\0\\0\\0\end{matrix}\right]$ and $rv_6=\left[\begin{matrix}1\\1\\1\\0\\0\\0\end{matrix}\right]$,
$u'_4=rv_4-(rv_4.u_1)u_1-(rv_4.u_2)u_2-(rv_4.u_3)u_3$ and $u_4=\frac{u'_4}{\left|u'_4\right|} \Longrightarrow u_4=\left[\begin{matrix}0.7182\\-0.4631\\-0.4631\\0.0221\\0.0221\\0.2331\end{matrix}\right]$,
$u'_5=rv_5-(rv_5.u_1)u_1-(rv_5.u_2)u_2-(rv_5.u_3)u_3-(rv_5.u_4)u_4$ and $u_5=\frac{u'_5}{\left|u'_5\right|} \Longrightarrow u_5=\left[\begin{matrix}0.0\\0.5193\\-0.4435\\-0.6207\\0.342\\0.1774\end{matrix}\right]$
$u'_6=rv_6-(rv_6.u_1)u_1-(rv_6.u_2)u_2-(rv_6.u_3)u_3-(rv_6.u_4)u_4-(rv_6.u_5)u_5$ and $u_6=\frac{u'_6}{\left|u'_6\right|} \Longrightarrow u_6=\left[\begin{matrix}0.0\\0.0001\\0.2698\\-0.361\\-0.6309\\0.6316\end{matrix}\right]$
$\therefore U=\left[\begin{matrix}u_1&u_2&u_3&u_4&u_5&u_6\end{matrix}\right]$
$\Longrightarrow U=\left[\begin{matrix}-0.378 & 0.4365 & 0.3884 & 0.7182 & 0.0 & 0.0\\0.189 & 0.5455 & 0.4272 & -0.4631 & 0.5193 & 0.0001\\-0.567 & -0.1091 & 0.4272 & -0.4631 & -0.4435 & 0.2698\\0.567 & 0.1091 & 0.3884 & 0.0221 & -0.6207 & -0.361\\-0.189 & -0.5455 & 0.3884 & 0.0221 & 0.342 & -0.6309\\0.378 & -0.4365 & 0.4272 & 0.2331 & 0.1774 & 0.6316\end{matrix}\right]$
(Here $U$ is not unique: any orthonormal completion of $u_1,u_2$ works, and any such choice may be taken as $U$.)
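As a sanity check, the columns of this $U$ should be orthonormal up to the four-decimal rounding. A quick NumPy check:

```python
import numpy as np

# The matrix U assembled above (entries rounded to four decimals).
U = np.array([
    [-0.378,  0.4365, 0.3884,  0.7182,  0.0,     0.0   ],
    [ 0.189,  0.5455, 0.4272, -0.4631,  0.5193,  0.0001],
    [-0.567, -0.1091, 0.4272, -0.4631, -0.4435,  0.2698],
    [ 0.567,  0.1091, 0.3884,  0.0221, -0.6207, -0.361 ],
    [-0.189, -0.5455, 0.3884,  0.0221,  0.342,  -0.6309],
    [ 0.378, -0.4365, 0.4272,  0.2331,  0.1774,  0.6316],
])

# Deviation of U^T U from the identity; small, rounding error only.
err = np.max(np.abs(U.T @ U - np.eye(6)))
assert err < 5e-3
```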
Step 5:
Form the matrix $\Sigma^+$ by replacing each nonzero singular value of $\Sigma$ with its reciprocal and then transposing the result.
$\therefore \Sigma^+=\left[\begin{matrix}\frac{1}{5.8310} & 0 & 0 & 0\\0 & \frac{1}{2.4495} & 0 & 0\\0 & 0 & 0.00 & 0\\0 & 0 & 0 & 0.00\\0 & 0 & 0 & 0\\0 & 0 & 0 & 0\end{matrix}\right]^T=\left[\begin{matrix}0.1715 & 0 & 0 & 0 & 0 & 0\\0 & 0.4082 & 0 & 0 & 0 & 0\\0 & 0 & 0 & 0 & 0 & 0\\0 & 0 & 0 & 0 & 0 & 0\end{matrix}\right]$
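Step 5 in NumPy, as a sketch: reciprocate the nonzero diagonal entries and transpose. The singular values are $\sqrt{34}$ and $\sqrt{6}$ from Step 2.

```python
import numpy as np

# Sigma from Step 2: shape of A (6 x 4), singular values on the diagonal.
Sigma = np.zeros((6, 4))
np.fill_diagonal(Sigma, [34.0 ** 0.5, 6.0 ** 0.5, 0.0, 0.0])

# Transpose, then reciprocate only the nonzero singular values.
Sigma_plus = Sigma.T.copy()
nonzero = Sigma_plus > 1e-10
Sigma_plus[nonzero] = 1.0 / Sigma_plus[nonzero]
```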
Step 6:
Finally, calculate the Moore–Penrose inverse from the relation $A^+=V \Sigma^+ U^T$.
$\Longrightarrow A^+=\left[\begin{matrix}-0.1471 & -0.1765 & 0.0294 & -0.0294 & 0.1765 & 0.1471\\0.0784 & 0.1274 & -0.049 & 0.049 & -0.1274 & -0.0784\\0.0686 & 0.049 & 0.0196 & -0.0196 & -0.049 & -0.0686\\0.0588 & -0.0294 & 0.0882 & -0.0882 & 0.0294 & -0.0588\end{matrix}\right]$
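The whole six-step procedure can be checked end to end in NumPy. This is a sketch on a hypothetical rank-2 $6\times4$ matrix standing in for the question's $A$ (which is not reproduced here); the hand-rolled $V\Sigma^+U^T$ is compared against `np.linalg.pinv`.

```python
import numpy as np

# Hypothetical rank-2 6x4 matrix standing in for the question's A.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))

U, s, Vt = np.linalg.svd(A)                   # A = U @ Sigma @ Vt
# Reciprocate only the nonzero singular values (Step 5).
s_plus = np.array([1.0 / x if x > 1e-10 else 0.0 for x in s])
Sigma_plus = np.zeros((A.shape[1], A.shape[0]))
np.fill_diagonal(Sigma_plus, s_plus)

A_plus = Vt.T @ Sigma_plus @ U.T              # Step 6: A+ = V Sigma+ U^T
assert np.allclose(A_plus, np.linalg.pinv(A))
```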
What you wrote is not correct.
If $\lambda$ is an eigenvalue of $A$, then $A-\lambda I$ is never invertible, regardless of whether $A$ is diagonalizable.
For example, your "proof" can be used to prove that the matrix $I$ (the identity matrix) is also not diagonalizable, which is clearly absurd.
A matrix is diagonalizable if there exists an invertible matrix $P$ such that $A=P^{-1}DP$ with $D$ a diagonal matrix. You have probably proven in class (or in the textbook you are learning from) that this happens if and only if the matrix has $n$ linearly independent eigenvectors.
This is what you should start from. In your case, the matrix has only one eigenvalue, and that eigenvalue has only one eigenvector (up to scaling, of course); from that, we can conclude the matrix is not diagonalizable.
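This argument can be checked numerically. A minimal sketch, assuming the classic example $M=\begin{bmatrix}1&1\\0&1\end{bmatrix}$, whose only eigenvalue $1$ has a one-dimensional eigenspace:

```python
import numpy as np

# The 2x2 Jordan block: one eigenvalue (1) with a 1-dimensional eigenspace.
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0

# Geometric multiplicity = nullity of (M - lam*I), read off its SVD:
# the number of (near-)zero singular values is the nullity.
sv = np.linalg.svd(M - lam * np.eye(2), compute_uv=False)
geom_mult = int(np.sum(sv < 1e-10))
# geom_mult is 1 < 2, so M has too few eigenvectors to be diagonalizable.
```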
Best Answer
Write $A=\begin{bmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{bmatrix}$ with rank($A$)$=r =$ rank($A_{11}$). Then $[A_{11} ~ A_{12}]$ has full row rank, so the rows of $[A_{21}~ A_{22}]$ lie in the row space of $[A_{11} ~ A_{12}]$. Hence $\exists T~s.t.~[A_{21}~ A_{22}]=T[A_{11} ~ A_{12}]$.
By similar reasoning, $\begin{bmatrix} A_{12}\\ A_{22} \end{bmatrix} = \begin{bmatrix} A_{11}\\ A_{21} \end{bmatrix}S$ for some $S$, because $\begin{bmatrix} A_{11}\\ A_{21} \end{bmatrix}$ has full column rank.
In particular, we deduce $A_{21}=TA_{11}, A_{22}=TA_{12}, A_{12}=A_{11}S, A_{22}=A_{21}S.$ Since $A_{11}$ is invertible, $T=A_{21}A_{11}^{-1}$, which further implies $A_{22}=A_{21}A_{11}^{-1}A_{12}$.
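A quick numerical sanity check of the conclusion $A_{22}=A_{21}A_{11}^{-1}A_{12}$, on a hypothetical block matrix built so that $\operatorname{rank}(A)=\operatorname{rank}(A_{11})=r$ holds by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
r = 2
A11 = rng.standard_normal((r, r)) + 3.0 * np.eye(r)   # invertible r x r block
T = rng.standard_normal((3, r))
S = rng.standard_normal((r, 4))

# A = [[I],[T]] @ A11 @ [I, S] has rank r = rank(A11) by construction.
A12 = A11 @ S
A21 = T @ A11
A22 = T @ A11 @ S
A = np.block([[A11, A12], [A21, A22]])

assert np.linalg.matrix_rank(A) == r
assert np.allclose(A22, A21 @ np.linalg.inv(A11) @ A12)
```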