There is a nice trick for calculating the inverse of any invertible upper triangular matrix, one which avoids the computation of complicated determinants. Since it works for any such upper (or lower) triangular matrix $T$ of any size $n$, I'll explain it in that context.
The first thing one needs to remember is that the determinant of a triangular matrix is the product of its diagonal entries. This may easily be seen by induction on $n$. It is trivially true if $n = 1$; for $n = 2$, we have
$T= \begin{bmatrix} t_{11} & t_{12} \\ 0 & t_{22} \end{bmatrix}, \tag{1}$
so obviously
$\det(T) = t_{11} t_{22}. \tag{2}$
If we now formulate the inductive hypothesis that
$\det(T) = \prod_1^k t_{ii} \tag{3}$
for any upper triangular $T$ of size $k$,
$T = [t_{ij}], \; \; 1 \le i, j \le k, \tag{4}$
then for $T$ of size $k + 1$ we have that
$\det(T) = t_{11} \det(T_{11}), \tag{5}$
where $T_{11}$ is the $k \times k$ matrix formed by deleting the first row and column of $T$. (5) follows easily from the expansion of $\det(T)$ in terms of its first-column minors (see this wikipedia page), since $t_{i1} = 0$ for $i \ge 2$. From our inductive hypothesis,
$\det(T_{11}) = \prod_2^{k + 1} t_{ii}, \tag{6}$
whence from (5)
$\det(T) = t_{11} \det(T_{11}) = t_{11} \prod_2^{k + 1} t_{ii} = \prod_1^{k + 1} t_{ii}, \tag{7}$
proving our assertion.
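For readers who like to double-check such identities numerically, here is a minimal NumPy sketch (the random test matrix and the seed are my own, purely illustrative):

```python
import numpy as np

# Sanity check (not part of the proof): the determinant of an upper
# triangular matrix equals the product of its diagonal entries.
rng = np.random.default_rng(0)
T = np.triu(rng.standard_normal((5, 5)))
print(np.linalg.det(T))      # determinant from the general-purpose routine
print(np.prod(np.diag(T)))   # product of diagonal entries; the two agree
```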
It follows immediately from (7) that the characteristic polynomial $p_T(\lambda)$ of $T$ is
$p_T(\lambda) = \det(T - \lambda I) = \prod_1^n (t_{ii} - \lambda), \tag{8}$
and from (8) that the eigenvalues of $T$ are precisely its diagonal entries, i.e. the $t_{ii}$, $1 \le i \le n$; it also follows from (7) that $T$ is nonsingular, that is, $\det(T) \ne 0$, precisely when its diagonal entries are all nonzero.
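As a quick numerical sanity check of (8) (again, the random test matrix is my own, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.triu(rng.standard_normal((4, 4)))
# The eigenvalues of a triangular matrix are its diagonal entries,
# up to the ordering chosen by the solver.
print(np.allclose(np.sort(np.linalg.eigvals(T).real), np.sort(np.diag(T))))
```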
For non-singular $T$ we may compute $T^{-1}$ as follows: write
$T = \Lambda + T_u, \tag{9}$
where $\Lambda$ is the diagonal matrix formed from the diagonal of $T$; viz.,
$\Lambda = [\delta_{ij} t_{ij}]; \tag{10}$
then $\Lambda$ is nonsingular and $T_u = T - \Lambda$ is the strictly upper triangular matrix obtained by setting the diagonal of $T$ to zero, i.e. setting $t_{ii} = 0$ for $1 \le i \le n$. We may write
$T = \Lambda (I + \Lambda^{-1} T_u), \tag{11}$
whence
$T^{-1} = (I + \Lambda^{-1} T_u)^{-1} \Lambda^{-1}. \tag{12}$
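Here is a small NumPy sketch of the splitting (9)-(12); the test matrix, with its diagonal kept safely away from zero, is my own construction:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
T = np.triu(rng.standard_normal((n, n)))
np.fill_diagonal(T, rng.uniform(1.0, 2.0, n))  # keep Lambda invertible

Lam = np.diag(np.diag(T))            # the diagonal part Lambda of T
Tu = T - Lam                         # the strictly upper triangular part T_u
Lam_inv = np.diag(1.0 / np.diag(T))  # Lambda^{-1}

# Check the factorization (11) and the inverse formula (12).
print(np.allclose(T, Lam @ (np.eye(n) + Lam_inv @ Tu)))
print(np.allclose(np.linalg.inv(T),
                  np.linalg.inv(np.eye(n) + Lam_inv @ Tu) @ Lam_inv))
```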
The matrix $\Lambda^{-1} T_u$ occurring in (12) is itself strictly upper triangular, as is $T_u$; indeed, for any diagonal matrix $D$, $DT_u$ is strictly upper triangular, an assertion which is easily validated by direct calculation. It follows that $\Lambda^{-1} T_u$ is in fact nilpotent, that is, $(\Lambda^{-1} T_u)^n = 0$: each multiplication by a strictly upper triangular matrix pushes the nonzero entries at least one diagonal further above the main diagonal, so the $n$-th power vanishes. We may now use the well-known algebraic identity
$(1 + x)(\sum_0^m (-x)^j) = 1 - (-x)^{m + 1}, \tag{13}$
easily seen to hold in any unital ring, applied to the matrix $x =\Lambda^{-1} T_u$, yielding, with $m = n - 1$,
$(I + \Lambda^{-1}T_u)(\sum_0^m (-\Lambda^{-1}T_u)^j) = I - (-\Lambda^{-1}T_u)^{m + 1} = I - (-\Lambda^{-1}T_u)^n = I. \tag{14}$
(14) shows that the inverse of $I + \Lambda^{-1}T_u$ is given by
$(I + \Lambda^{-1} T_u)^{-1} = \sum_0^m (-\Lambda^{-1}T_u)^j. \tag{15}$
It follows from (15) that $(I + \Lambda^{-1} T_u)^{-1}$ is upper triangular, since each of the matrices $(-\Lambda^{-1}T_u)^j$, $j \ge 1$, is strictly upper triangular, and $(-\Lambda^{-1}T_u)^0 = I$. It then follows that $T^{-1} = (I + \Lambda^{-1} T_u)^{-1}\Lambda^{-1}$ is also upper triangular, being the product of the upper triangular matrix $(I + \Lambda^{-1} T_u)^{-1}$ and the diagonal matrix $\Lambda^{-1}$. We have thus shown that the inverse of any invertible upper triangular matrix, of any size $n$, is itself an upper triangular matrix.
The inverse of any invertible matrix is invertible, the inverse of the inverse being the original matrix.
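To put the whole recipe in one place, here is a NumPy sketch of the series (15); the function name `triangular_inverse` is my own, and this is an illustration rather than a production routine (in practice one would call a dedicated triangular solver):

```python
import numpy as np

def triangular_inverse(T):
    # Invert an invertible upper triangular T via the finite series (15):
    # with N = Lambda^{-1} T_u nilpotent,
    # T^{-1} = (sum_{j=0}^{n-1} (-N)^j) Lambda^{-1}.
    n = T.shape[0]
    Lam_inv = np.diag(1.0 / np.diag(T))      # Lambda^{-1}
    N = Lam_inv @ (T - np.diag(np.diag(T)))  # strictly upper triangular
    S = np.eye(n)                            # running partial sum of the series
    P = np.eye(n)                            # running power of (-N)
    for _ in range(n - 1):
        P = P @ (-N)
        S = S + P
    return S @ Lam_inv

rng = np.random.default_rng(3)
T = np.triu(rng.standard_normal((5, 5)))
np.fill_diagonal(T, rng.uniform(1.0, 2.0, 5))
T_inv = triangular_inverse(T)
print(np.allclose(T_inv, np.linalg.inv(T)))  # matches the general inverse
print(np.allclose(T_inv, np.triu(T_inv)))    # and is itself upper triangular
```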
We can apply these considerations to the calculation of $A^{-1}$, where
$A = \begin{bmatrix} a & b & c \\ 0 & d & e \\ 0 & 0 & f \end{bmatrix}; \tag{16}$
here we have
$\Lambda = \begin{bmatrix} a & 0 & 0 \\ 0 & d & 0 \\ 0 & 0 & f \end{bmatrix} \tag{17}$
and
$T_u = \begin{bmatrix} 0 & b & c \\ 0 & 0 & e \\ 0 & 0 & 0 \end{bmatrix}; \tag{18}$
then
$\Lambda^{-1} T_u = \begin{bmatrix} a^{-1} & 0 & 0 \\ 0 & d^{-1} & 0 \\ 0 & 0 & f^{-1} \end{bmatrix} \begin{bmatrix} 0 & b & c \\ 0 & 0 & e \\ 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & ba^{-1} & ca^{-1} \\ 0 & 0 & ed^{-1} \\ 0 & 0 & 0 \end{bmatrix}; \tag{19}$
$(\Lambda^{-1} T_u)^2 = \begin{bmatrix} 0 & ba^{-1} & ca^{-1} \\ 0 & 0 & ed^{-1} \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & ba^{-1} & ca^{-1} \\ 0 & 0 & ed^{-1} \\ 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & bea^{-1}d^{-1} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}; \tag{20}$
$(\Lambda^{-1} T_u)^3 = 0; \tag{21}$
$\sum_0^2 (-\Lambda^{-1} T_u)^j = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} - \begin{bmatrix} 0 & ba^{-1} & ca^{-1} \\ 0 & 0 & ed^{-1} \\ 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 & bea^{-1}d^{-1} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$
$= \begin{bmatrix} 1 & -ba^{-1} & (be - cd)a^{-1}d^{-1} \\ 0 & 1 & -ed^{-1} \\ 0 & 0 & 1 \end{bmatrix}; \tag{22}$
finally,
$A^{-1} = (I + \Lambda^{-1} T_u)^{-1} \Lambda^{-1} = (\sum_0^2 (-\Lambda^{-1} T_u)^j) \Lambda^{-1}$
$= \begin{bmatrix} 1 & -ba^{-1} & (be - cd)a^{-1}d^{-1} \\ 0 & 1 & -ed^{-1} \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} a^{-1} & 0 & 0 \\ 0 & d^{-1} & 0 \\ 0 & 0 & f^{-1} \end{bmatrix}$
$= \begin{bmatrix} a^{-1} & -ba^{-1}d^{-1} & (be - cd)a^{-1}d^{-1}f^{-1} \\ 0 & d^{-1} & -ed^{-1}f^{-1} \\ 0 & 0 & f^{-1} \end{bmatrix}, \tag{23}$
in agreement with Nimda's calculations. Indeed, we have
$\begin{bmatrix} a & b & c \\ 0 & d & e \\ 0 & 0 & f \end{bmatrix}\begin{bmatrix} a^{-1} & -ba^{-1}d^{-1} & (be - cd)a^{-1}d^{-1}f^{-1} \\ 0 & d^{-1} & -ed^{-1}f^{-1} \\ 0 & 0 & f^{-1} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \tag{24}$
as some simple algebra reveals.
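That simple algebra can also be delegated to a computer algebra system; here is a short SymPy check of (24) (my own verification, assuming $a$, $d$, $f$ nonzero):

```python
import sympy as sp

a, b, c, d, e, f = sp.symbols('a b c d e f', nonzero=True)
A = sp.Matrix([[a, b, c], [0, d, e], [0, 0, f]])
A_inv = sp.Matrix([
    [1/a, -b/(a*d), (b*e - c*d)/(a*d*f)],
    [0,    1/d,     -e/(d*f)],
    [0,    0,        1/f],
])
# Simplifies entrywise to the 3x3 identity matrix, confirming (24).
print((A * A_inv).applyfunc(sp.simplify))
```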
Of course all this applies to lower triangular matrices as well; the demonstrations are essentially the same.
Hope this helps! Cheers,
and as always,
Fiat Lux!!!
I was interested in the same question, so allow me to lay out my logic, hoping of course to get comments on possible flaws. Suppose you have two upper triangular matrices $\mathbf{L}_1$ and $\mathbf{L}_2$, illustrated below
$$\mathbf{L}_1 =
\begin{bmatrix}
l_{11}^{(1)} & l_{12}^{(1)} & \dots & \dots & l_{1n}^{(1)} &\\
& l_{22}^{(1)} & l_{23}^{(1)} & \dots & \vdots &\\
& & l_{33}^{(1)} & & \vdots &\\
& & & \ddots & \vdots &\\
& & & & l_{nn}^{(1)} &\\
\end{bmatrix}~~~~~\mathbf{L}_2 =
\begin{bmatrix}
l_{11}^{(2)} & l_{12}^{(2)} & \dots & \dots & l_{1n}^{(2)} &\\
& l_{22}^{(2)} & l_{23}^{(2)} & \dots & \vdots &\\
& & l_{33}^{(2)} & & \vdots &\\
& & & \ddots & \vdots &\\
& & & & l_{nn}^{(2)} &\\
\end{bmatrix}$$
We want to prove that the following product is an upper triangular matrix,
$$\mathbf{L}_1 \mathbf{L}_2 = \mathbf{L}_1 \big[ \mathbf{l}_1^{(2)}, \mathbf{l}_2^{(2)}, \dots, \mathbf{l}_n^{(2)} \big] = \big[ \mathbf{L}_1 \mathbf{l}_1^{(2)}, \mathbf{L}_1 \mathbf{l}_2^{(2)}, \dots, \mathbf{L}_1 \mathbf{l}_n^{(2)} \big]$$
As we can see, the $k$-th column of the product matrix $\mathbf{L}_1 \mathbf{L}_2$ is given by $\mathbf{L}_1 \mathbf{l}_k^{(2)}$, which is a linear combination of the columns of $\mathbf{L}_1$ with coefficients given by the $k$-th column vector $\mathbf{l}_k^{(2)}$. Each product column $\mathbf{L}_1 \mathbf{l}_k^{(2)}$ has possible non-zero entries only at or above its $k$-th element.
This is because the columns $\mathbf{l}_k^{(2)}$ have zero entries below their $k$-th element, so the $k$-th product column is a linear combination of only the first $k$ columns $\mathbf{l}_1^{(1)}, \dots, \mathbf{l}_k^{(1)}$ of $\mathbf{L}_1$, which in turn have possible non-zero values only at or above their own indices.
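A quick numerical sketch of this column argument (the sizes and random matrices are my own choices):

```python
import numpy as np

rng = np.random.default_rng(4)
U1 = np.triu(rng.standard_normal((6, 6)))
U2 = np.triu(rng.standard_normal((6, 6)))
P = U1 @ U2
# Every entry below the diagonal of the product vanishes.
print(np.allclose(P, np.triu(P)))
```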
$\mathcal{Thanks~for~reading}$.
Best Answer
I assume that $\mathbf A_L$ has three rows, and that the "box" of interest is the parallelepiped "generated" by these rows.
If all that is correct, then one solution is to take $\mathbf A_U[i,j] = \mathbf A_L[4-i,4-j]$, where $M[i,j]$ denotes the $i,j$ entry of $M$ for $i,j = 1,2,3$.
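In zero-based NumPy indexing this map is just a reversal of both axes; a small sketch (the sample matrix is my own):

```python
import numpy as np

# A 3x3 lower triangular example; reversing both axes sends the one-based
# entry (i, j) to (4-i, 4-j), as in the map above.
A_L = np.tril(np.arange(1.0, 10.0).reshape(3, 3))
A_U = A_L[::-1, ::-1]                   # equivalently np.flip(A_L)
print(np.allclose(A_U, np.triu(A_U)))   # the flipped matrix is upper triangular
```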