This is a special case of a "triangular ring" construction, and you can find a detailed answer here about its left/right/two-sided ideal lattices.
Adjustments will have to be made if you really want to use lower triangular matrices, but the answer will be similar.
Added: Let's try to interpret this through the help given in that post. Let $T= \begin{pmatrix} R &0\\ M & S \end{pmatrix}$ be your ring, with $R=M=S=\mathbb{Z}$. Under ordinary matrix multiplication, $M$ is an $(S,R)$-bimodule. We may think of this ring as $R\oplus M\oplus S$ with funny multiplication.
- The right ideals are all of the form $J_2\oplus J_1$, where $J_1$ is a right ideal of $S$ and $J_2$ is a right $R$-submodule of $R\oplus M$ that contains $J_1M$.
To see the motivation for the somewhat cryptic conditions given in the other solution, just think: if I have a right ideal and I multiply on the right by $\begin{pmatrix}z&0\\0&0\end{pmatrix}$, what would be included in my ideal? Do the same with a few other sparse matrices and I think you'll see how the conditions work.
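For example (a quick check of my own, spelling out the hint), take $r\in R$, $m\in M$, $s\in S$ and multiply on the right by two of those sparse matrices:
$$\begin{pmatrix} r & 0\\ m & s \end{pmatrix}\begin{pmatrix} z & 0\\ 0 & 0 \end{pmatrix} = \begin{pmatrix} rz & 0\\ mz & 0 \end{pmatrix}, \qquad \begin{pmatrix} r & 0\\ m & s \end{pmatrix}\begin{pmatrix} 0 & 0\\ w & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0\\ sw & 0 \end{pmatrix}.$$
The first product says the $R\oplus M$ part of a right ideal must be a right $R$-submodule; the second, with $s\in J_1$ and $w\in M$ arbitrary, says it must contain $J_1M$.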
So, let us take $J_1=12\mathbb{Z}$, and pick a $J_2\supseteq J_1M=12\mathbb{Z}\cdot\mathbb{Z}=12\mathbb{Z}$. You could pick, for example, $J_2=7\mathbb{Z}\oplus 6\mathbb{Z}\subseteq R\oplus M$ (note $6\mathbb{Z}\supseteq 12\mathbb{Z}$). So our candidate ideal is $7\mathbb{Z}\oplus 6\mathbb{Z}\oplus 12\mathbb{Z}\subseteq R\oplus M\oplus S$. Written out properly with matrices, it looks like:
$$
I=\begin{pmatrix} 7\mathbb{Z} &0\\ 6\mathbb{Z} & 12\mathbb{Z} \end{pmatrix}
$$
I have to warn you, though, that $J_2$ need not be a direct sum of submodules of $R$ and $M$ like that. You could have $J_2=(0,6\mathbb{Z})+\{(a,a)\mid a\in 7\mathbb{Z}\}=\{(a,a+b)\mid a\in 7\mathbb{Z},\ b\in 6\mathbb{Z}\}\subseteq R\oplus M$.
But nevertheless, according to the rules,
$$
I=\begin{pmatrix} m\mathbb{Z} &0\\ n\mathbb{Z} & t\mathbb{Z} \end{pmatrix}
$$
will be a right ideal as long as $n$ divides $t$.
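As a sanity check (my own sketch, not part of the original argument), you can test this divisibility criterion numerically by multiplying random elements of the candidate ideal by random elements of the ring:

```python
import random

# Sample elements of I = [[mZ, 0], [nZ, tZ]] inside the lower triangular
# ring T = [[Z, 0], [Z, Z]] and check that I*T stays in I.

def mat_mul(A, B):
    """2x2 integer matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def is_right_ideal(m, n, t, trials=500):
    """Empirically test closure of I under right multiplication by T."""
    random.seed(0)
    for _ in range(trials):
        a, b, c = (random.randint(-9, 9) for _ in range(3))
        r, x, s = (random.randint(-9, 9) for _ in range(3))
        elem = [[m * a, 0], [n * b, t * c]]   # element of I
        ring = [[r, 0], [x, s]]               # element of T
        p = mat_mul(elem, ring)
        if not (p[0][0] % m == 0 and p[0][1] == 0
                and p[1][0] % n == 0 and p[1][1] % t == 0):
            return False
    return True

print(is_right_ideal(7, 6, 12))   # 6 divides 12: closure holds (True)
print(is_right_ideal(7, 5, 12))   # 5 does not divide 12: closure fails (False)
```

The lower-left entry of the product is $nbr + tcx$, which is divisible by $n$ for all choices exactly when $n \mid t$, matching the rule above.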
I'll encourage you to try working out the left ideals (but you can summon me again if you get stuck).
As @Julien pointed out, every square matrix admits a $PLU$ decomposition $A = P \cdot L \cdot U$, where $P$ is a permutation matrix, $L$ is unit lower triangular, and $U$ is upper triangular. For this matrix we have:
$A=\begin{bmatrix}1 & 2 & 3 & 4 \\5 & 6 & 7 & 8\\1 & -1 & 2 & 3 \\2 & 1 & 1 &2 \end{bmatrix}= \begin{bmatrix} 1 & 0 & 0 & 0\\0 & 0 & 0 & 1\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 & 0 & 0\\1 & 1 & 0 & 0\\2 & 1 & 1 & 0\\5 & \dfrac{4}{3} & \dfrac{5}{3} & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 2 & 3 & 4\\0 & -3 & -1 & -1\\0 & 0 & -4 & -5\\0 & 0 & 0 & -\dfrac{7}{3} \end{bmatrix}$
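If you want to double-check a factorization like this, a short script (a sketch of my own, using exact rational arithmetic for the fractional entries) can multiply the three factors back together; here $P$ is the permutation restoring the pivoted row order:

```python
from fractions import Fraction as F

# Verify A = P * L * U by direct multiplication with exact rationals.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2, 3, 4], [5, 6, 7, 8], [1, -1, 2, 3], [2, 1, 1, 2]]
P = [[1, 0, 0, 0], [0, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0]]
L = [[1, 0, 0, 0], [1, 1, 0, 0], [2, 1, 1, 0], [5, F(4, 3), F(5, 3), 1]]
U = [[1, 2, 3, 4], [0, -3, -1, -1], [0, 0, -4, -5], [0, 0, 0, F(-7, 3)]]

print(mat_mul(P, mat_mul(L, U)) == A)  # True
```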
You could try working through this one by hand to find its $LU$ factorization (without pivoting). We want:
$L \cdot U = \begin{bmatrix} 1 & 0 & 0 & 0\\l_{21} & 1 & 0 & 0\\l_{31} & l_{32} & 1 & 0\\l_{41} & l_{42} & l_{43} & 1 \end{bmatrix} \cdot \begin{bmatrix} u_{11} & u_{12} & u_{13} & u_{14}\\0 & u_{22} &u_{23} & u_{24}\\0 & 0 & u_{33} & u_{34}\\0 & 0 & 0 & u_{44} \end{bmatrix} = \begin{bmatrix}1 & 2 & 3 & 4 \\5 & 6 & 7 & 8\\1 & -1 & 2 & 3 \\2 & 1 & 1 &2 \end{bmatrix}$
We start off by solving the first row, so we get:
$$u_{11} = 1, u_{12} = 2, u_{13} = 3, u_{14} = 4$$
The portion of the multiplication that determines the remaining entries in the first column of $A$ yields:
$$l_{21}u_{11} = 5 \rightarrow l_{21} = 5$$
$$l_{31}u_{11} = 1 \rightarrow l_{31} = 1$$
$$l_{41}u_{11} = 2 \rightarrow l_{41} = 2$$
At this point, substitute in the values you have solved for, then continue the process to find the remaining variables. Of course, it is easy to check the result once you have solved all of the equations.
So, we currently have:
$L \cdot U = \begin{bmatrix} 1 & 0 & 0 & 0\\5 & 1 & 0 & 0\\1 & l_{32} & 1 & 0\\2 & l_{42} & l_{43} & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 2 & 3 & 4\\0 & u_{22} &u_{23} & u_{24}\\0 & 0 & u_{33} & u_{34}\\0 & 0 & 0 & u_{44} \end{bmatrix} = \begin{bmatrix}1 & 2 & 3 & 4 \\5 & 6 & 7 & 8\\1 & -1 & 2 & 3 \\2 & 1 & 1 &2 \end{bmatrix}$
Try solving for $u_{22},u_{23}, u_{24}$, and then $l_{32}, l_{42}$ and continue this process.
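If you want to check your hand computation afterwards, here is a small Doolittle-style sketch (my own, assuming no pivoting is needed, which holds here since the leading principal minors of $A$ are nonzero) that fills in the unknowns in exactly this order:

```python
from fractions import Fraction as F

# Doolittle LU factorization: alternately fill row i of U, then column i of L,
# using exact rationals so fractional entries come out exact.

def lu_doolittle(A):
    n = len(A)
    L = [[F(int(i == j)) for j in range(n)] for i in range(n)]
    U = [[F(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):       # row i of U
            U[i][j] = F(A[i][j]) - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # column i of L
            L[j][i] = (F(A[j][i]) - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[1, 2, 3, 4], [5, 6, 7, 8], [1, -1, 2, 3], [2, 1, 1, 2]]
L, U = lu_doolittle(A)
print(L[1][0], U[1][1])  # l_21 = 5, u_22 = 6 - 5*2 = -4
```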
The point is, you have not given your intended map $f$.
I will give it for you $$ f \left( \begin{bmatrix} a & c \\ 0 & b \\ \end{bmatrix} \right) = \begin{bmatrix} b & 0 \\ c & a \\ \end{bmatrix}. $$ Now just compute to see $f(x y) = f(x) f(y)$.
Alternatively, save some time and effort by noting that $$ f \left( \begin{bmatrix} a & c \\ 0 & b \\ \end{bmatrix} \right) = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \cdot \begin{bmatrix} a & c \\ 0 & b \\ \end{bmatrix} \cdot \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} $$ and $$ \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}^{2} = I. $$
Explicitly, writing the second factor as $\begin{bmatrix} d & e \\ 0 & g \\ \end{bmatrix}$ (using $g$ for the entry, since $f$ already names the map): $$f \left( \begin{bmatrix} a & c \\ 0 & b \\ \end{bmatrix} \right) f \left( \begin{bmatrix} d & e \\ 0 & g \\ \end{bmatrix} \right) = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \cdot \begin{bmatrix} a & c \\ 0 & b \\ \end{bmatrix} \cdot \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \cdot \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \cdot \begin{bmatrix} d & e \\ 0 & g \\ \end{bmatrix} \cdot \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \\ = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \cdot \begin{bmatrix} a & c \\ 0 & b \\ \end{bmatrix} \cdot \begin{bmatrix} d & e \\ 0 & g \\ \end{bmatrix} \cdot \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} = f \left( \begin{bmatrix} a & c \\ 0 & b \\ \end{bmatrix} \cdot \begin{bmatrix} d & e \\ 0 & g \\ \end{bmatrix}\right).$$
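As a quick numerical sanity check (my own sketch), you can verify $f(XY)=f(X)f(Y)$ on random upper triangular integer matrices:

```python
import random

# f conjugates by the swap matrix S = [[0,1],[1,0]], so f(XY) = f(X) f(Y)
# should hold exactly for all upper triangular X, Y.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def f(X):
    """f([[a, c], [0, b]]) = [[b, 0], [c, a]] = S X S."""
    S = [[0, 1], [1, 0]]
    return mat_mul(S, mat_mul(X, S))

random.seed(1)
ok = True
for _ in range(200):
    a, b, c, d, e, g = (random.randint(-9, 9) for _ in range(6))
    X, Y = [[a, c], [0, b]], [[d, e], [0, g]]
    ok &= f(mat_mul(X, Y)) == mat_mul(f(X), f(Y))
print(ok)  # True
```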