Your matrix $A = (a_{ij})$ is
an upper triangular matrix
( $a_{ij} = 0$ whenever $i > j$ )
and a Toeplitz matrix
( $a_{ij}$ depends only on $i-j$ ) at the same time.
I cannot find any online reference that teaches how to evaluate its inverse efficiently.
I hope these keywords can help you in your own search.
If you just want the inverse without too many other concerns, it is actually
pretty easy to compute it ourselves.
Let $\eta$ be the $n \times n$ matrix with $1$ on its superdiagonal and $0$ elsewhere, i.e.
$$\eta = (\eta_{ij}),\quad \eta_{ij} = \begin{cases}1,& i - j = -1\\0,& \text{otherwise}\end{cases}$$
We have $\eta^n = 0$ and we can express $A$ as a polynomial in $\eta$.
$$A = x_1 I + x_2 \eta + x_3 \eta^2 + \cdots + x_n \eta^{n-1}$$
$A$ is invertible if and only if $x_1$ is non-zero. When $A$ is invertible,
$A^{-1}$ is also an upper triangular Toeplitz matrix, so we can likewise represent it as a polynomial in $\eta$.
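To make the setup concrete, here is a small pure-Python sketch (function names are my own choices) of $\eta$ and the representation of an upper triangular Toeplitz matrix as a polynomial in $\eta$:

```python
def shift_matrix(n):
    """eta: 1 on the superdiagonal (entries with i - j = -1), 0 elsewhere."""
    return [[1 if j - i == 1 else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Plain n x n matrix product."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def toeplitz_from_coeffs(x):
    """A = x[0] I + x[1] eta + ... + x[n-1] eta^(n-1):
    upper triangular, constant along each diagonal."""
    n = len(x)
    return [[x[j - i] if j >= i else 0 for j in range(n)] for i in range(n)]
```

One can check directly that $\eta^n = 0$ and that the polynomial in $\eta$ reproduces the Toeplitz structure.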
Introduce numbers $\displaystyle\;\alpha_i = \frac{x_{i+1}}{x_1}$ and $\beta_i$ ( $i = 1,\ldots,n-1$ ) such that
$$\begin{align}
A &= x_1 \left(I + \alpha_1 \eta + \alpha_2 \eta^2 + \cdots + \alpha_{n-1} \eta^{n-1}\right)\\
A^{-1} &= x_1^{-1} \left(I + \beta_1 \eta + \beta_2 \eta^2 + \cdots + \beta_{n-1} \eta^{n-1}\right)
\end{align}
$$
The condition $A^{-1} A = I$ expands into the following set of relations, which allow you to compute the $\beta_k$ recursively.
$$\begin{align}
-\beta_1 &= \alpha_1\\
-\beta_2 &= \alpha_1 \beta_1 + \alpha_2\\
&\;\vdots\\
-\beta_k &= \alpha_1 \beta_{k-1} + \alpha_2 \beta_{k-2} + \cdots + \alpha_k\\
&\;\vdots
\end{align}
$$
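This recursion is straightforward to implement; a minimal Python sketch (the function name and float arithmetic are my own choices):

```python
def inverse_coeffs(x):
    """Given A = x[0] I + x[1] eta + ... + x[n-1] eta^(n-1) with x[0] != 0,
    return coefficients y with A^(-1) = y[0] I + y[1] eta + ...,
    using -beta_k = alpha_1 beta_{k-1} + alpha_2 beta_{k-2} + ... + alpha_k."""
    n = len(x)
    if x[0] == 0:
        raise ValueError("A is singular: x[0] must be nonzero")
    alpha = [xk / x[0] for xk in x]    # alpha[k] plays the role of alpha_k above
    beta = [1.0] + [0.0] * (n - 1)    # beta[0] = 1
    for k in range(1, n):
        beta[k] = -sum(alpha[j] * beta[k - j] for j in range(1, k + 1))
    return [bk / x[0] for bk in beta]  # A^(-1) = x_1^(-1) * sum_k beta_k eta^k
```

The whole inverse costs $O(n^2)$ operations this way, instead of the $O(n^3)$ of a generic inversion.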
When $n$ is small and you want each individual $\beta_k$ as a function of the $\alpha_j$,
there is actually a trick to get it: ask a CAS to compute the Taylor
expansion of the reciprocal of the following polynomial in $t$:
$$\frac{1}{1 + \alpha_1 t + \alpha_2 t^2 + \cdots + \alpha_{n-1} t^{n-1}}
= 1 + \beta_1 t + \beta_2 t^2 + \cdots + \beta_{n-1} t^{n-1} + O(t^n)$$
The coefficient of $t^k$ ($1 \le k < n$) in the resulting Taylor expansion is
the expression you want for $\beta_k$, e.g.
$$\begin{align}
\beta_1 &= -\alpha_1,\\
\beta_2 &= \alpha_1^2 - \alpha_2,\\
\beta_3 &= -\alpha_1^3 + 2\alpha_1\alpha_2 - \alpha_3,\\
\beta_4 &= \alpha_1^4 - 3\alpha_1^2\alpha_2 + \alpha_2^2 + 2\alpha_1 \alpha_3 - \alpha_4\\
&\;\vdots
\end{align}$$
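If no CAS is at hand, these closed forms can be spot-checked numerically against the recursion; a small Python sketch (the function name and random test values are my own choices):

```python
import random

def betas_from_alphas(a, n):
    """Reciprocal power-series coefficients: beta_0 = 1 and
    beta_k = -(a[1]*beta_{k-1} + a[2]*beta_{k-2} + ... + a[k]*beta_0)."""
    beta = [1.0] + [0.0] * (n - 1)
    for k in range(1, n):
        beta[k] = -sum(a[j] * beta[k - j] for j in range(1, k + 1))
    return beta

random.seed(0)
a = [0.0] + [random.uniform(-1, 1) for _ in range(4)]  # a[1..4] = alpha_1..alpha_4
b = betas_from_alphas(a, 5)
a1, a2, a3, a4 = a[1], a[2], a[3], a[4]
closed = [1.0,
          -a1,
          a1**2 - a2,
          -a1**3 + 2*a1*a2 - a3,
          a1**4 - 3*a1**2*a2 + a2**2 + 2*a1*a3 - a4]
```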
Set
$$
X=\begin{bmatrix}A & C\\0 & B\end{bmatrix}
$$
If $A$ is not invertible, then its columns are linearly dependent; hence the first $m$ columns of $X$ are linearly dependent and $X$ is not invertible either. In this case the relation $\det X=\det A\det B$ holds trivially, since both sides are zero. So we can assume $A$ is invertible. If Gaussian elimination on $A$ requires row switches, collect all of them in a permutation matrix $P$, so that elimination on $PA$ can be done without row switches and $PA=LU$, where $L$ is lower triangular and $U$ is upper unitriangular. Consider the matrix
$$P'=\begin{bmatrix}P & 0 \\ 0 & I_n\end{bmatrix}$$
so
$$
P'X=
\begin{bmatrix}P & 0 \\ 0 & I_n\end{bmatrix}
\begin{bmatrix}A & C\\0 & B\end{bmatrix}=
\begin{bmatrix}PA&PC\\0&B\end{bmatrix}=
\begin{bmatrix}LU&PC\\0&B\end{bmatrix}=
\begin{bmatrix}L & 0 \\ 0 & I_n\end{bmatrix}
\begin{bmatrix}U & L^{-1}PC\\0 & B\end{bmatrix}
$$
Now, as
$$
\begin{bmatrix}L & 0 \\ 0 & I_n\end{bmatrix}
$$
is lower triangular, its determinant clearly equals $\det L$. Since $U$ is upper unitriangular, $m$-fold repeated Laplace expansion along the first column gives
$$
\det\begin{bmatrix}U & L^{-1}PC\\0 & B\end{bmatrix}=\det B
$$
Therefore
$$
\det P'X=\det P'\det X=\det L\det B
$$
On the other hand, $P'$ is a permutation matrix generated by as many row swaps as $P$, so $\det P=\det P'$. Also $\det A=\det L\det U=\det L$.
Hence $\det X=\det A\det B$.
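The factorization used above can be sketched in code; the following Crout-style routine (my own naming, pure Python) produces $L$ lower triangular and $U$ upper unitriangular, under the assumption that no row switches are needed (i.e. all leading principal minors are nonzero):

```python
def crout_lu(A):
    """Crout factorization A = L U with L lower triangular and U upper
    unitriangular (ones on the diagonal). Assumes no pivoting is needed."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):            # fill column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):        # fill row j of U (right of diagonal)
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U
```

Since $U$ is unitriangular, $\det A = \det L \det U = \det L$ is just the product of the diagonal of $L$, exactly as used in the proof.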
We can also do it by induction. For $i=1,2,\dots,m$, denote by $A_i$ the matrix obtained from $A$ by removing the first column and the $i$-th row; $C_i$ is the matrix obtained from $C$ by removing the $i$-th row.
The base case of the induction, $m=1$, is obvious. Suppose $m>1$ and expand $\det X$ along the first column:
\begin{multline}
\det X=
(-1)^{1+1}a_{11}\det\begin{bmatrix} A_1 & C_1 \\ 0 & B\end{bmatrix}+
(-1)^{2+1}a_{21}\det\begin{bmatrix} A_2 & C_2 \\ 0 & B\end{bmatrix}\\
+\dots+
(-1)^{m+1}a_{m1}\det\begin{bmatrix} A_m & C_m \\ 0 & B\end{bmatrix}
\end{multline}
By induction hypothesis, we have, for $i=1,2,\dots,m$,
$$
\det\begin{bmatrix} A_i & C_i \\ 0 & B\end{bmatrix}=\det A_i\det B
$$
so
\begin{align}
\det X&=
(-1)^{1+1}a_{11}\det A_1\det B+
(-1)^{2+1}a_{21}\det A_2\det B\\
&\qquad\qquad+\dots+
(-1)^{m+1}a_{m1}\det A_m\det B\\
&=
\bigl((-1)^{1+1}a_{11}\det A_1+
(-1)^{2+1}a_{21}\det A_2+\dots+
(-1)^{m+1}a_{m1}\det A_m\bigr)\det B\\
&=\det A\det B
\end{align}
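The identity can also be checked numerically; the sketch below (pure Python, names are my own) computes determinants by first-column Laplace expansion, which is exactly the recursion the induction uses:

```python
def det(M):
    """Determinant via Laplace expansion along the first column."""
    m = len(M)
    if m == 1:
        return M[0][0]
    total = 0
    for i in range(m):
        minor = [row[1:] for k, row in enumerate(M) if k != i]
        total += (-1) ** i * M[i][0] * det(minor)
    return total

def block_upper(A, C, B):
    """Assemble the block upper triangular matrix X = [[A, C], [0, B]]."""
    m, n = len(A), len(B)
    top = [A[i] + C[i] for i in range(m)]
    bottom = [[0] * m + B[i] for i in range(n)]
    return top + bottom
```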
I would like to present a very simple solution by interpreting these matrices as operators on $\mathbb{R}^n$ (which will surprise nobody...). The triangular matrix $A$ acts as a discrete integration operator:
For any $x_1,x_2,x_3,\cdots, x_n$:
$$\tag{1}A (x_1,x_2,x_3,\cdots x_n)^T=(s_1,s_2,s_3,\cdots s_n)^T \ \ \text{with} \ \ \begin{cases}s_1&=&x_1&&&&\\s_2&=&x_1+x_2&&\\s_3&=&x_1+x_2+x_3\\...\end{cases}$$
(1) is equivalent to:
$$\tag{2}A^{-1} (s_1,s_2,s_3,\cdots s_n)^T=(x_1,x_2,x_3,\cdots x_n)^T \ \ \text{with} \ \ \begin{cases}x_1&=& \ \ s_1&&&&\\x_2&=&-s_1&+&s_2&&\\x_3&=&&&-s_2&+&s_3\\...\end{cases}$$
and it now suffices to "collect the coefficients" in the right order to assemble the inverse matrix.
(Thus the inverse operation is, in a natural way, a discrete derivation operator.)
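Concretely, the matrix in (1) is the lower triangular all-ones matrix (cumulative sums), and its inverse in (2) has $1$ on the diagonal and $-1$ on the subdiagonal (first differences); a small Python sketch, with my own function names:

```python
def cumsum_matrix(n):
    """A: lower triangular matrix of ones -- discrete integration, as in (1)."""
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

def diff_matrix(n):
    """A^(-1): 1 on the diagonal, -1 on the subdiagonal -- discrete derivation,
    as in (2)."""
    return [[1 if i == j else (-1 if i - j == 1 else 0) for j in range(n)]
            for i in range(n)]

def matvec(M, v):
    """Matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
```

Applying `cumsum_matrix` and then `diff_matrix` returns the original vector, confirming the two operators are inverse to each other.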