Let $\mathbf{R}\in SO(3)$ be a rotation matrix, $t=R_{1,1} + R_{2,2} + R_{3,3}$ be the trace of $\mathbf{R}$, and $\mathbf{r}=\begin{bmatrix} R_{3,2}-R_{2,3} \\ R_{1,3}-R_{3,1} \\ R_{2,1}-R_{1,2} \end{bmatrix}$.
We can calculate the rotation vector $\omega$ (axis-angle representation) as follows:
$$\omega = \begin{cases}
\left(\frac{1}{2} - \frac{t-3}{12}\right)\mathbf{r} & \text{if}\quad t\ge3-\epsilon\\
\frac{\theta}{2\sin(\theta)}\mathbf{r} & \text{if}\quad 3-\epsilon > t > -1+\epsilon\\
\pi\frac{\mathbf{v}}{|\mathbf{v}|} & \text{if}\quad t\le -1+\epsilon
\end{cases}
$$
with
$$\theta = \arccos\left( \frac{t - 1}{2} \right)$$
and
$(w,\mathbf{v})$ being a unit quaternion
$$
v_a = \frac{s}{2},\quad v_b = \frac{1}{2s}(R_{b,a}+R_{a,b}),\quad v_c = \frac{1}{2s}(R_{c,a}+R_{a,c})\\
\quad\text{with} \quad s := \sqrt{R_{a,a}-R_{b,b}-R_{c,c} + 1}\\
\text{and}\quad a := \underset{i\in\{1,2,3\}}{\arg\max}\{R_{i,i}\},\quad b := (a \bmod 3)+1, \quad c := (b \bmod 3)+1~.$$
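The three cases above can be transcribed directly into a minimal Python sketch (the function name and the `EPS` threshold are my own choices; the code uses 0-based indices, so the cyclic successors of the argmax index are simply `(a+1) % 3` and `(a+2) % 3`):

```python
import math

EPS = 1e-7  # assumed tolerance; tune for your floating-point precision

def rotation_matrix_to_axis_angle(R):
    """Convert a 3x3 rotation matrix (list of rows) to a rotation vector."""
    t = R[0][0] + R[1][1] + R[2][2]
    r = [R[2][1] - R[1][2], R[0][2] - R[2][0], R[1][0] - R[0][1]]
    if t >= 3.0 - EPS:
        # theta ~ 0: first-order expansion of theta / (2 sin theta)
        f = 0.5 - (t - 3.0) / 12.0
        return [f * x for x in r]
    elif t > -1.0 + EPS:
        theta = math.acos((t - 1.0) / 2.0)
        f = theta / (2.0 * math.sin(theta))
        return [f * x for x in r]
    else:
        # theta ~ pi: go through the unit quaternion
        a = max(range(3), key=lambda i: R[i][i])
        b, c = (a + 1) % 3, (a + 2) % 3
        s = math.sqrt(R[a][a] - R[b][b] - R[c][c] + 1.0)
        v = [0.0, 0.0, 0.0]
        v[a] = s / 2.0
        v[b] = (R[b][a] + R[a][b]) / (2.0 * s)
        v[c] = (R[c][a] + R[a][c]) / (2.0 * s)
        n = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
        return [math.pi * x / n for x in v]
```

For example, a rotation by $\pi/2$ about $z$ hits the middle case, and a rotation by $\pi$ about $z$ exercises the quaternion route.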
Background: The last case, for $\theta\approx \pm \pi$ (i.e. $t\approx-1$), is calculated via the route rotation matrix $\Rightarrow$ unit quaternion $\Rightarrow$ axis-angle.***
Here, $\pi$ is the limit of $2\arctan\left(\frac{|\mathbf{v}|}{w}\right)$
with $w = \frac{1}{2s}(R_{c,b}-R_{b,c})$.
(*** rotation matrix to unit quaternion reference: Eigen library which
again refers to Ken Shoemake, "Quaternion Calculus and Fast Animation", 1987;
unit quaternion to axis-angle reference: C. Hertzberg et al.: "Integrating Generic Sensor Fusion Algorithms with Sound State
Representation through Encapsulation of Manifolds" Information Fusion, 2011)
Edit: It would be nice to have a higher order approximation for the $t\le-1+\epsilon$ case. Please drop a comment or edit if you have a good solution...
Edit2: Actually, there are two possible solutions for the case when $\theta$ is close to $\pi$. In both of them, we first transform the rotation matrix to the unit quaternion $q = (w, \mathbf{v})$ without any numerical issues (thanks to the case differentiation, see the links above). Then, for $\theta$ close to $\pi$, the scalar part $w = \cos(\theta/2)$ is close to $0$ and the norm of the vector part $|\mathbf{v}| = \sin(\theta/2)$ is close to $1$.
First solution: use the reciprocal-argument identities of $\arctan$ (see the properties listed on Wikipedia):
$$ \arctan\left(\frac{1}{x}\right) = \frac{\pi}{2} - \arctan(x) \text{, if } x > 0 \\
\arctan\left(\frac{1}{x}\right) = -\frac{\pi}{2} - \arctan(x) \text{, if } x < 0$$
We have: $$\omega = \theta \frac{\mathbf{v}}{|\mathbf{v}|} = 2 \arctan\left(\frac{|\mathbf{v}|}{w}\right) \frac{\mathbf{v}}{|\mathbf{v}|} = \left(\pm \pi - 2 \arctan\left(\frac{w}{|\mathbf{v}|}\right) \right) \frac{\mathbf{v}}{|\mathbf{v}|}$$
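A quick numerical illustration of the rewrite (taking the $w \ge 0$ branch, i.e. $+\pi$; variable names are my own). The point is that the rewritten form never divides by the scalar part $w$, which may be arbitrarily small near $\theta = \pi$:

```python
import math

# theta near pi: w = cos(theta/2) is tiny, so arctan(|v|/w) divides by a
# near-zero quantity; the reciprocal-argument form divides by |v| ~ 1 instead.
theta = math.pi - 1e-8
w, vn = math.cos(theta / 2), math.sin(theta / 2)

rewritten = math.pi - 2 * math.atan(w / vn)  # w >= 0 branch of the identity
assert abs(rewritten - theta) < 1e-12
```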
Second solution: use a variant of the $\text{atan2}(y, x)$ formula that avoids inflated rounding errors (the last one in its definitions section). Moreover, if we choose the quaternion with a non-negative scalar part $w = \cos(\theta/2) \ge 0$ (we can always do this, since the two quaternions $q$ and $-q$ represent the same rotation), we simultaneously ensure that the angle $\theta$ lies in the range $[0, \pi]$, and we can use a single "half-angle" formula everywhere:
$$ \theta = 4 \arctan\left(\frac{|\mathbf{v}|}{w + \sqrt{w^2 + |\mathbf{v}|^2}} \right) = 4 \arctan\left(\frac{|\mathbf{v}|}{w + 1} \right) $$
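A small check (variable names are mine) that this single half-angle formula agrees with $2\,\text{atan2}(|\mathbf{v}|, w)$ across the whole range $[0, \pi]$, including both endpoints:

```python
import math

# With w = cos(theta/2) >= 0 and |v| = sin(theta/2), the formula
# 4*atan(|v| / (w + 1)) recovers theta without any branch logic.
for theta in (0.0, 1e-9, 1.0, 3.0, math.pi):
    w, vn = math.cos(theta / 2), math.sin(theta / 2)
    theta_rec = 4 * math.atan(vn / (w + 1))
    assert abs(theta_rec - 2 * math.atan2(vn, w)) < 1e-12
    assert abs(theta_rec - theta) < 1e-12
```

The denominator $w + 1$ stays in $[1, 2]$, so the division is always well conditioned.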
For the rest of this post, let
$$ A=\begin{pmatrix} 1 & 0\\ 2 & 1\end{pmatrix}, B=\begin{pmatrix} 1 & 2\\ 0 & 1\end{pmatrix}, C=\begin{pmatrix} -1 & 0\\ 0 & -1\end{pmatrix}, D=\begin{pmatrix} -1 & 0\\ 0 & 1\end{pmatrix}.$$
Note that all of $A$, $B$, $C$ and $D$ live in $\Gamma(2)$.
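As a quick sanity check (an illustration only, not part of the argument), each of these matrices reduces to the identity mod $2$ and is invertible over $\Bbb Z$, i.e. lies in the level-$2$ congruence subgroup of $GL(2,\Bbb Z)$:

```python
A = [[1, 0], [2, 1]]
B = [[1, 2], [0, 1]]
C = [[-1, 0], [0, -1]]
D = [[-1, 0], [0, 1]]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

for M in (A, B, C, D):
    # determinant +-1: invertible over Z
    assert det(M) in (1, -1)
    # congruent to the identity mod 2
    assert all((M[i][j] - (i == j)) % 2 == 0 for i in range(2) for j in range(2))

assert det(D) == -1  # consistent with D not lying in SL(2, Z)
```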
First, let's consider the case of $G=SL(2,\mathbb{Z})$, which is the case I think you wanted (so in your notation, we have the requirement that $ad-bc=1$). Note that in this case, $D\notin G$.
Proposition: $A$, $B$, and $C$ generate $\Gamma(2)$.
Proof: Define a mapping $f:\ \Gamma(2)\rightarrow \mathbb{Z}^+$ by the formula
$$ f:\ \begin{pmatrix} a & b\\ c & d\end{pmatrix}\mapsto |a|+|c|.$$
Let $\mathfrak{H}$ be the subgroup of $\Gamma(2)$ generated by $A$, $B$, and $C$, and let $X$ be an arbitrary element of $\Gamma(2)$. We will be done if we can show $X\in \mathfrak{H}$.
To this end, pick an element $Y\in \mathfrak{H}X$ [the right coset of $\mathfrak{H}$ containing $X$] for which $f(Y)$ is minimal.
Now letting $Y=\begin{pmatrix} a & b\\ c & d\end{pmatrix}$, consider the following cases:
- $c=0$.
We know $ad-bc=1$, and so in this case $a=d=\pm 1$. But then $Y$ (or $YC$) must be a power of $B$, since
$$ B^n=\begin{pmatrix} 1 & 2n\\ 0 & 1\end{pmatrix}.$$
This means $Y\in\mathfrak{H}\cap\mathfrak{H}X$, so that $\mathfrak{H}=\mathfrak{H}X$, or $X\in \mathfrak{H}$.
- $c\neq0$, and $|a| > |c|$.
Then there exists an $n\in\mathbb{Z}$ such that $-|c| < a+2nc < |c|$ [strict inequality because $a$ is odd and $c$ is even], and then
$$ B^nY=\begin{pmatrix} 1 & 2n\\ 0 & 1\end{pmatrix}\begin{pmatrix} a & b\\ c & d\end{pmatrix}=\begin{pmatrix} a+2nc & b+2nd\\ c & d\end{pmatrix},$$
so that $f(B^nY)=|a+2nc| + |c| < |c| + |c| < |a| + |c|$, contradicting the choice of $Y$. In other words, this case does not happen.
- $c\neq 0$, and $|a| < |c|$. Then a similar argument to the one above, using $A$ instead of $B$, leads also to a contradiction. The proof is now complete.
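The descent in this proof is effectively an algorithm for rewriting an element of $\Gamma(2)$ as a word in $A$ and $B$ (up to sign). A sketch (the function name is mine; `round` suffices here since the minimizing $n$ is never a tie, $a$ being odd and $c$ even, though exact integer arithmetic would be safer for huge entries):

```python
def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [2, 1]]
B = [[1, 2], [0, 1]]

def reduce_to_identity(X):
    """Record the left multiplications (letter, exponent) bringing X,
    an element of Gamma(2) with det 1, to a matrix with c = 0."""
    moves, Y = [], [row[:] for row in X]
    while Y[1][0] != 0:
        a, c = Y[0][0], Y[1][0]
        if abs(a) > abs(c):
            n = round(-a / (2 * c))   # minimizes |a + 2nc|
            moves.append(('B', n))
            Y = matmul([[1, 2 * n], [0, 1]], Y)
        else:
            n = round(-c / (2 * a))   # minimizes |c + 2na|
            moves.append(('A', n))
            Y = matmul([[1, 0], [2 * n, 1]], Y)
        # f(Y) = |a| + |c| strictly decreases, so the loop terminates
    return moves, Y

# Example: A * B * A is rewritten step by step back to the identity
X = matmul(matmul(A, B), A)
moves, Y = reduce_to_identity(X)
```

Each recorded move strictly decreases $f(Y)=|a|+|c|$, mirroring the minimality argument in the proof.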
Thus we see that if $Z=\langle C\rangle$, then $\Gamma(2)/Z$ is generated by $A$ and $B$. But the group generated by $A$ and $B$ is free, by the Ping-Pong Lemma.
I think this is enough for the question, but if you actually meant $G=GL(2,\mathbb{Z})$, one can still say a lot. In this case, $\Gamma(2)/Z$ is not free, but has $F_2$ as a subgroup of index 2. There are only a handful of groups which possess $F_2$ as a subgroup of index 2, and in this case one gets the isomorphism $\Gamma(2)/Z\cong F_2\rtimes C_2$, where $F_2=\langle A, B\rangle$ and $C_2=\langle D\rangle$, with $D$ acting via $A^D=A^{-1}, B^D=B^{-1}$.
Best Answer
For the "by induction" statement: Let $T$ be the set of matrices with entries $a_{ij}$ such that $a_{ij}\in\Bbb Z$ if $i+j$ is even and $a_{ij}\in\Bbb Z\sqrt 2$ if $i+j$ is odd, that is, of the form $$\begin{pmatrix}\Bbb Z&\Bbb Z\sqrt2&\Bbb Z\\\Bbb Z\sqrt2&\Bbb Z&\Bbb Z\sqrt2\\\Bbb Z&\Bbb Z\sqrt2&\Bbb Z\end{pmatrix}.$$ Then $T$ forms a subring of $M(3,\Bbb R)$. Clearly $I,0\in T$ and $T$ is closed under addition. To show that $T$ is closed under multiplication, suppose $A,B\in T$, with entries $(a_{ij}),(b_{ij})$. The entries of the product are $c_{ij}=\sum_{k=1}^3a_{ik}b_{kj}$. If $i+j$ is even, then $i+k$ and $k+j$ have the same parity for each $k$, so each term satisfies either $a_{ik}b_{kj}\in\Bbb Z\cdot \Bbb Z=\Bbb Z$ (when $i+k$ is even) or $a_{ik}b_{kj}\in\sqrt 2\Bbb Z\cdot \sqrt 2\Bbb Z=2\Bbb Z$ (when $i+k$ is odd), so the sum is also in $\Bbb Z$. If $i+j$ is odd, then $i+k$ and $k+j$ have opposite parities, so one of the two factors is in $\Bbb Z$ and the other is in $\sqrt 2\Bbb Z$; hence each term, and therefore the sum, is in $\sqrt 2\Bbb Z$. Thus $AB\in T$.
Note also that $$3a=\begin{pmatrix}3&0&0\\0&1&-2\sqrt2\\0&2\sqrt2&1\end{pmatrix}\in T\qquad3b=\begin{pmatrix}1&-2\sqrt2&0\\2\sqrt2&1&0\\0&0&3\end{pmatrix}\in T,$$ and since $a$ and $b$ are rotations, $3a^{-1}=(3a)^T\in T$ and $3b^{-1}=(3b)^T\in T$ as well. So for any product $M$ of $n$ terms selected from $\{a,b,a^{-1},b^{-1}\}$, we have $3^nM\in T$, and since applying this to $v=(1,0,0)^T$ extracts the first column, we have $3^nMv=(i,j\sqrt 2,k)^T$ for some $i,j,k\in\Bbb Z$ as desired.
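Both claims can be checked with exact arithmetic (a sketch; the helper names are mine, and $3a^{-1}=(3a)^T$ is used since $a$, $b$ are rotations). Elements of $\Bbb Z[\sqrt2]$ are stored as coefficient pairs, random words in the four generators are multiplied out, and the checkerboard pattern plus the first-column pattern are asserted:

```python
import random

# An element p + q*sqrt(2) of Z[sqrt(2)] is stored exactly as the pair (p, q).
def mul(x, y):
    (p, q), (r, s) = x, y
    return (p * r + 2 * q * s, p * s + q * r)

def matmul(M, N):
    return [[tuple(map(sum, zip(*(mul(M[i][k], N[k][j]) for k in range(3)))))
             for j in range(3)] for i in range(3)]

def in_T(M):  # checkerboard: Z on even i+j, Z*sqrt(2) on odd i+j
    return all(M[i][j][1] == 0 if (i + j) % 2 == 0 else M[i][j][0] == 0
               for i in range(3) for j in range(3))

O = (0, 0)
ta = [[(3, 0), O, O], [O, (1, 0), (0, -2)], [O, (0, 2), (1, 0)]]  # 3a
tb = [[(1, 0), (0, -2), O], [(0, 2), (1, 0), O], [O, O, (3, 0)]]  # 3b
tai = [list(row) for row in zip(*ta)]  # 3a^{-1} = (3a)^T
tbi = [list(row) for row in zip(*tb)]  # 3b^{-1} = (3b)^T

gens = [ta, tb, tai, tbi]
for _ in range(50):
    M = random.choice(gens)
    for _ in range(5):                 # a random word of six generators
        M = matmul(M, random.choice(gens))
        assert in_T(M)                 # 3^n M stays in T
    col = [M[i][0] for i in range(3)]  # = (3^n M) v for v = (1, 0, 0)^T
    assert col[0][1] == 0 and col[1][0] == 0 and col[2][1] == 0
```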
"Analyzing modulo 3": Consider taking the coefficients of the matrix $\bmod 3$ (treated as elements of $\Bbb Z[\sqrt2]$). This has the effect of simply reducing the coefficient of an element of the form $\Bbb Z\sqrt2$ (there are no mixed terms $a+b\sqrt2$). Dropping the $\sqrt2$ from the $a_{ij}$, $i+j$ odd entries and denoting $-1$ as $\bar 1$, we get:
$$[3a]=\begin{pmatrix}0&0&0\\0&1&1\\0&\bar 1&1\end{pmatrix}\quad [3a^{-1}]=\begin{pmatrix}0&0&0\\0&1&\bar 1\\0&1&1\end{pmatrix}\quad [3b]=\begin{pmatrix}1&1&0\\\bar 1&1&0\\0&0&0\end{pmatrix}\quad [3b^{-1}]=\begin{pmatrix}1&\bar 1&0\\1&1&0\\0&0&0\end{pmatrix}$$
These matrices share the common pattern of having a $2\times2$ block of $1$'s, with a single $\bar 1$ in the block and $0$ outside. Now consider the following four matrices:
$$A=[3a]=\begin{pmatrix}0&0&0\\0&1&1\\0&\bar 1&1\end{pmatrix}\quad B=\begin{pmatrix}0&0&0\\0&1&1\\0&1&\bar 1\end{pmatrix}\quad C=\begin{pmatrix}0&1&\bar 1\\0&1&1\\0&0&0\end{pmatrix}\quad D=\begin{pmatrix}0&\bar 1&1\\0&1&1\\0&0&0\end{pmatrix}$$
It turns out that matrices of this form and their negatives are closed under left multiplication by the generators, which constitutes a proof of the goal because the identity matrix is not of this form. Here is the multiplication table:
\begin{array}{c|cccc}\times&A&B&C&D\\\hline [3a]=A&-A&0&A&A\\ [3a^{-1}]&0&-B&B&B\\ [3b]&C&C&-C&0\\ [3b^{-1}]&D&D&0&-D \end{array}
The $0$ entries occur because $[3a^{-1}][3a]=[9I]$ is divisible by $3$ and hence equal to $0$; but such a product can only occur if the word $M$ is not reduced (contains adjacent cancelling pairs). For example, $[3a]B=0$, but $\pm B$ only arises from a product $[3a^{-1}]x$ for some $x\in\{A,B,C,D\}$. This proves that if $M$ is any product of $n$ terms selected from $\{a,b,a^{-1},b^{-1}\}$ whose last term is $a$ and which contains no cancelling pairs, then $[3^nM]\in\{\pm A,\pm B,\pm C,\pm D\}$, while $[3^nI]=0$; thus $M\ne I$. A symmetric argument (or a different set of matrices) establishes the result for words ending in $a^{-1}$, $b$, or $b^{-1}$, so $M\ne I$ for any nontrivial reduced word.
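The whole multiplication table can be checked mechanically by working in $\Bbb Z[\sqrt2]$ mod $3$, keeping $\sqrt2$ as an exact symbol with $\sqrt2^2=2$ (a sketch; the `lift` helper re-inserts the dropped $\sqrt2$ into the checkerboard positions, and all names are my own):

```python
# Entries of Z[sqrt(2)] mod 3 are pairs (p, q) = p + q*sqrt(2), reduced mod 3.
def mul(x, y):
    (p, q), (r, s) = x, y
    return ((p * r + 2 * q * s) % 3, (p * s + q * r) % 3)

def matmul(M, N):
    return [[tuple(t % 3 for t in map(sum, zip(*(mul(M[i][k], N[k][j])
             for k in range(3))))) for j in range(3)] for i in range(3)]

def lift(M):  # re-insert sqrt(2) at the odd-parity (dropped) positions
    return [[((M[i][j] % 3, 0) if (i + j) % 2 == 0 else (0, M[i][j] % 3))
             for j in range(3)] for i in range(3)]

def neg(M):
    return [[((-p) % 3, (-q) % 3) for (p, q) in row] for row in M]

ZERO = [[(0, 0)] * 3 for _ in range(3)]

ga  = lift([[0, 0, 0], [0, 1, 1], [0, -1, 1]])   # [3a]
gai = lift([[0, 0, 0], [0, 1, -1], [0, 1, 1]])   # [3a^{-1}]
gb  = lift([[1, 1, 0], [-1, 1, 0], [0, 0, 0]])   # [3b]
gbi = lift([[1, -1, 0], [1, 1, 0], [0, 0, 0]])   # [3b^{-1}]
A = ga
B = lift([[0, 0, 0], [0, 1, 1], [0, 1, -1]])
C = lift([[0, 1, -1], [0, 1, 1], [0, 0, 0]])
D = lift([[0, -1, 1], [0, 1, 1], [0, 0, 0]])

table = {  # rows of the table in the answer; columns are A, B, C, D
    'a':  (neg(A), ZERO, A, A),
    'ai': (ZERO, neg(B), B, B),
    'b':  (C, C, neg(C), ZERO),
    'bi': (D, D, ZERO, neg(D)),
}
gens = {'a': ga, 'ai': gai, 'b': gb, 'bi': gbi}
for g, expected in table.items():
    for X, E in zip((A, B, C, D), expected):
        assert matmul(gens[g], X) == E
```

Note that the products must be taken before dropping $\sqrt2$: a term $q\sqrt2\cdot s\sqrt2$ contributes $2qs$, not $qs$, and $2\equiv\bar1\pmod 3$.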