I was interested in the same question, so allow me to lay out my reasoning, hoping of course to get comments on possible flaws. Suppose you have two upper triangular matrices $\mathbf{L}_1$ and $\mathbf{L}_2$, illustrated below:
$$\mathbf{L}_1 =
\begin{bmatrix}
l_{11}^{(1)} & l_{12}^{(1)} & \dots & \dots & l_{1n}^{(1)} \\
& l_{22}^{(1)} & l_{23}^{(1)} & \dots & \vdots \\
& & l_{33}^{(1)} & & \vdots \\
& & & \ddots & \vdots \\
& & & & l_{nn}^{(1)}
\end{bmatrix}
~~~~~
\mathbf{L}_2 =
\begin{bmatrix}
l_{11}^{(2)} & l_{12}^{(2)} & \dots & \dots & l_{1n}^{(2)} \\
& l_{22}^{(2)} & l_{23}^{(2)} & \dots & \vdots \\
& & l_{33}^{(2)} & & \vdots \\
& & & \ddots & \vdots \\
& & & & l_{nn}^{(2)}
\end{bmatrix}$$
We want to prove that the product is also an upper triangular matrix. Writing $\mathbf{L}_2$ in terms of its columns,
$$\mathbf{L}_1 \mathbf{L}_2 = \mathbf{L}_1 \big[ \mathbf{l}_1^{(2)}, \mathbf{l}_2^{(2)}, \dots, \mathbf{l}_n^{(2)} \big] = \big[ \mathbf{L}_1 \mathbf{l}_1^{(2)}, \mathbf{L}_1 \mathbf{l}_2^{(2)}, \dots, \mathbf{L}_1 \mathbf{l}_n^{(2)} \big]$$
As we can see, the $k$-th column of the product matrix $\mathbf{L}_1 \mathbf{L}_2$ is given by $\mathbf{L}_1 \mathbf{l}_k^{(2)}$, which is a linear combination of the columns of $\mathbf{L}_1$ with coefficients given by the $k$-th column vector $\mathbf{l}_k^{(2)}$. Since $\mathbf{l}_k^{(2)}$ has zero entries below its $k$-th element, this combination involves only the first $k$ columns $\mathbf{l}_1^{(1)}, \dots, \mathbf{l}_k^{(1)}$ of $\mathbf{L}_1$, each of which in turn has possible non-zero values only at or above its own diagonal entry, hence only within the first $k$ positions. Therefore each column $\mathbf{L}_1 \mathbf{l}_k^{(2)}$ of the product has possible non-zero entries only at or above its $k$-th element, which is exactly the upper triangular property.
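The argument can be spot-checked numerically. This is a minimal sketch assuming NumPy is available; the random matrices and variable names are my own illustration, not part of the proof:

```python
import numpy as np

# Spot check (not a proof): the product of two upper triangular
# matrices is again upper triangular.
rng = np.random.default_rng(0)
n = 5
U1 = np.triu(rng.integers(1, 10, size=(n, n)))  # random upper triangular
U2 = np.triu(rng.integers(1, 10, size=(n, n)))
product = U1 @ U2

# Every entry strictly below the diagonal of the product must be zero.
assert np.allclose(np.tril(product, k=-1), 0)
```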
Thanks for reading.
The point is that you have not given your intended map $f$, so I will supply one for you:
$$
f \left( \begin{bmatrix} a & c \\ 0 & b \\ \end{bmatrix} \right)
=
\begin{bmatrix} b & 0 \\ c & a \\ \end{bmatrix}.
$$
Now just compute to verify that $f(x y) = f(x) f(y)$.
Alternatively, save some time and effort by noting that
$$
f \left( \begin{bmatrix} a & c \\ 0 & b \\ \end{bmatrix} \right)
=
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \cdot
\begin{bmatrix} a & c \\ 0 & b \\ \end{bmatrix} \cdot
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
$$
and
$$
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}^{2} = I.
$$
Explicitly,
$$\begin{aligned}
f \left( \begin{bmatrix} a & c \\ 0 & b \end{bmatrix} \right) f \left( \begin{bmatrix} d & e \\ 0 & f \end{bmatrix} \right)
&= \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \cdot
\begin{bmatrix} a & c \\ 0 & b \end{bmatrix} \cdot
\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \cdot
\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \cdot
\begin{bmatrix} d & e \\ 0 & f \end{bmatrix} \cdot
\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \\
&= \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \cdot
\begin{bmatrix} a & c \\ 0 & b \end{bmatrix} \cdot
\begin{bmatrix} d & e \\ 0 & f \end{bmatrix} \cdot
\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
= f \left( \begin{bmatrix} a & c \\ 0 & b \end{bmatrix} \cdot \begin{bmatrix} d & e \\ 0 & f \end{bmatrix}\right).
\end{aligned}$$
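The computation above can also be confirmed numerically. Here is a minimal sketch using NumPy, where the concrete matrix entries are arbitrary choices of mine:

```python
import numpy as np

P = np.array([[0, 1], [1, 0]])  # the swap matrix; note P @ P = I

def f(X):
    # Conjugation by P, as in the answer: f(X) = P X P.
    return P @ X @ P

X = np.array([[2, 5], [0, 3]])  # upper triangular
Y = np.array([[7, 1], [0, 4]])  # upper triangular

# f is multiplicative ...
assert np.array_equal(f(X @ Y), f(X) @ f(Y))
# ... and sends upper triangular matrices to lower triangular ones.
assert f(X)[0, 1] == 0 and f(Y)[0, 1] == 0
```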
This is a special case of a "triangular ring" construction, and you can find a detailed answer here about its left/right/two-sided ideal lattices.
Adjustments will have to be made if you really want to use lower triangular matrices, but the answer will be similar.
Added: Let's try to interpret this through the help given in that post. Let $T= \begin{pmatrix} R &0\\ M & S \end{pmatrix}$ be your ring, with $R=M=S=\mathbb{Z}$. Under ordinary matrix multiplication, $M$ is an $(S,R)$ bimodule. We may think of this ring as $R\oplus M\oplus S$ with funny multiplication.
To see the motivation for the somewhat cryptic conditions given in the other solution, just think: if I have a right ideal and I multiply on the right by $\begin{pmatrix}z&0\\0&0\end{pmatrix}$, what would be included in my ideal? Do the same with a few other sparse matrices and I think you'll see how the conditions work.
So, let us take $12\mathbb{Z}$ to be $J_1$, and pick a $J_2\supseteq 12\mathbb{Z}(\mathbb{Z})=12\mathbb{Z}$. You could pick, for example, $J_2=7\mathbb{Z}\oplus 6\mathbb{Z}\subseteq R\oplus M$. So our candidate ideal is $7\mathbb{Z}\oplus 6\mathbb{Z}\oplus 12\mathbb{Z}\subseteq R\oplus M\oplus S$. Written out properly with matrices it looks like: $$ I=\begin{pmatrix} 7\mathbb{Z} &0\\ 6\mathbb{Z} & 12\mathbb{Z} \end{pmatrix} $$
I have to warn you though, that $J_2$ need not be a direct sum of two submodules of $R$ and $M$ like that. You could have $J_2=(0,6\mathbb{Z})+\{(a,a)\mid a\in 7\mathbb{Z}\}=\{(a,a+b)\mid a\in 7\mathbb{Z}, b\in 6\mathbb{Z}\}\subseteq R\oplus M$.
But nevertheless, according to the rules, $$ I=\begin{pmatrix} m\mathbb{Z} &0\\ n\mathbb{Z} & t\mathbb{Z} \end{pmatrix} $$ will be a right ideal as long as $n$ divides $t$.
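A brute-force check of this closure condition over a small range of integers. This is a sketch of my own, not part of the original argument; the point it illustrates is that the divisibility $6 \mid 12$ is exactly what makes the lower-left entry of the product stay in $6\mathbb{Z}$:

```python
import itertools

m, n, t = 7, 6, 12  # I = [[mZ, 0], [nZ, tZ]]; note n divides t

for a, b, c, x, y, z in itertools.product(range(-2, 3), repeat=6):
    A = [[m * a, 0], [n * b, t * c]]  # an element of I
    T = [[x, 0], [y, z]]              # an arbitrary lower triangular matrix
    # Entries of the right product A @ T, written out for the 2x2 case:
    p11 = A[0][0] * T[0][0]                      # 7ax, stays in 7Z
    p21 = A[1][0] * T[0][0] + A[1][1] * T[1][0]  # 6bx + 12cy, stays in 6Z
    p22 = A[1][1] * T[1][1]                      # 12cz, stays in 12Z
    assert p11 % m == 0 and p21 % n == 0 and p22 % t == 0
print("I is closed under right multiplication on this range")
```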
I encourage you to try working out the left ideals (but you can summon me again if you get stuck).