This identity can be computed directly from the definition of the determinant. It helps to regard $A + tB$ as a matrix whose entries are differentiable functions of the real parameter $t.$ Recall the definition of the determinant:
$\det A = \displaystyle\sum_{\sigma \in S_n} \textrm{ sgn } (\sigma ) \displaystyle\prod_{i=1}^n a_{i\sigma (i) }$
and that the cofactor matrix has $(i,j)$ entry $(-1)^{i+j}\det A_{ij},$ where $A_{ij}$ is the matrix formed by deleting row $i$ and column $j$; $\textrm{ Adj } A$ is the transpose of the cofactor matrix. Now, applying the product rule termwise,
$\frac{d}{dt} \det (A +tB) = \frac{d}{dt} \displaystyle\sum_{\sigma \in S_n } \textrm{sgn } (\sigma) \displaystyle\prod_{i=1}^n [a + tb]_{i\sigma (i)} = \displaystyle\sum_{\sigma \in S_n} \textrm{sgn }(\sigma ) \displaystyle\sum_{j=1}^n b_{j \sigma (j) } \displaystyle\prod_{i\ne j} [a + tb]_{i \sigma (i) } = \displaystyle\sum_{j=1}^n \displaystyle\sum_{\sigma \in S_n} \textrm{sgn }(\sigma )\, b_{j\sigma (j)} \displaystyle\prod_{i\ne j} [a + tb]_{i \sigma (i) }$
Letting $t=0$ gives
$\displaystyle\sum_{j=1}^n \displaystyle\sum_{\sigma \in S_n} \textrm{sgn } (\sigma )\, b_{j \sigma (j)} \displaystyle\prod_{i\ne j} a_{i \sigma (i) } = \displaystyle\sum_{j=1}^n \displaystyle\sum_{k=1}^n b_{jk} \displaystyle\sum_{\sigma :\, \sigma (j) = k } \textrm{sgn }(\sigma ) \displaystyle\prod_{i\ne j} a_{i \sigma (i) } = \displaystyle\sum_{j=1}^n \displaystyle\sum_{k=1}^n b_{jk}\, (-1)^{j+k} \det A_{jk} = \textrm{ tr }(\textrm{Adj }(A)\, B),$
since the inner sum over permutations with $\sigma (j) = k$ is exactly the $(j,k)$ cofactor of $A.$
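For a quick numerical sanity check of this identity (not part of the argument above; the random test matrices, the size $4\times 4$, and the finite-difference step $h$ are arbitrary choices), one can compare a central difference of $t \mapsto \det(A+tB)$ at $t=0$ with $\textrm{tr}(\textrm{Adj}(A)B)$, building the adjugate directly from cofactors:

```python
import numpy as np

def adjugate(A):
    """Adjugate via cofactors: Adj(A)[j, i] = (-1)^(i+j) * det(A with row i and column j deleted)."""
    n = A.shape[0]
    adj = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

h = 1e-6  # small step for the central difference
lhs = (np.linalg.det(A + h * B) - np.linalg.det(A - h * B)) / (2 * h)
rhs = np.trace(adjugate(A) @ B)
print(lhs, rhs)  # the two values agree up to the finite-difference error
```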
What's sort of neat is that this identity can be used to prove Cramer's rule, in the form $\det (A)\, A^{-1} = \textrm{ Adj }(A).$ Note that as a special case, $\frac{d}{dt} \det (I+ tB)\big|_{t=0} = \textrm{ tr } (B).$ Hence, for invertible $A,$
$\frac{d}{dt} \det (A +tB)\big|_{t=0} = \det (A)\, \frac{d}{dt} \det (I +tA^{-1}B)\big|_{t=0} = \det (A)\, \textrm{ tr } (A^{-1} B).$
Note that the diagonal entries of $A^{-1} B$ are given by $(A^{-1}B)_{ii} = \displaystyle\sum_{k=1}^n a^{-1}_{ik} b_{ki}$ for $1\le i \le n.$ If we take $B$ to be the matrix with $b_{ji}=1$ and zero entries elsewhere, we have $\textrm{ tr } (A^{-1} B) = a^{-1}_{ij}.$ The same choice of $B$ gives $\textrm{ tr }(\textrm{Adj }(A)\, B) = (\textrm{Adj } A)_{ij},$ so we deduce that
$\det (A) A^{-1} = \textrm{ Adj } (A),$ as the matrices are equal entrywise.
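Continuing the numerical sketch above (this reuses the `adjugate` helper and the matrix `A` defined there), the entrywise equality can be checked directly:

```python
# det(A) * A^{-1} should equal Adj(A) entrywise (reusing A and adjugate() from the previous sketch)
print(np.allclose(np.linalg.det(A) * np.linalg.inv(A), adjugate(A)))  # True
```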
I would like to present a very simple solution by interpreting these matrices as operators on $\mathbb{R}^n$ (which will surprise nobody...). The triangular matrix $A$ acts as a discrete integration (partial-sum) operator:
For any $x_1,x_2,x_3,\cdots, x_n$:
$$\tag{1}A (x_1,x_2,x_3,\cdots, x_n)^T=(s_1,s_2,s_3,\cdots, s_n)^T \ \ \text{with} \ \ \begin{cases}s_1=x_1\\s_2=x_1+x_2\\s_3=x_1+x_2+x_3\\ \ \ \vdots\end{cases}$$
(1) is equivalent to:
$$\tag{2}A^{-1} (s_1,s_2,s_3,\cdots, s_n)^T=(x_1,x_2,x_3,\cdots, x_n)^T \ \ \text{with} \ \ \begin{cases}x_1=s_1\\x_2=-s_1+s_2\\x_3=-s_2+s_3\\ \ \ \vdots\end{cases}$$
and it now suffices to "collect the coefficients" in the right order to assemble the inverse matrix.
(Thus the inverse operation is, in a natural way, a discrete differentiation operator.)
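A small numerical sketch of this picture, assuming (as (1) indicates) that $A$ is the lower-triangular matrix of ones; the size $n=5$ is only an illustrative choice:

```python
import numpy as np

n = 5
A = np.tril(np.ones((n, n)))     # lower-triangular matrix of ones: discrete integration
x = np.arange(1.0, n + 1)        # any test vector
s = A @ x                        # s_k = x_1 + ... + x_k, as in (1)

Ainv = np.linalg.inv(A)          # 1 on the diagonal, -1 on the subdiagonal: discrete differentiation
print(Ainv)                      # the "collected coefficients" from (2)
print(np.allclose(Ainv @ s, x))  # True: differencing the partial sums recovers x
```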
Best Answer
Here is another way:
Note that $A=29I - e e^T$, where $e$ is the vector of all ones. It is straightforward to check that the eigenvalues of $A$ are $1$ and $29$ (here $e^T e = 28$, so $Ae = e$, while $Ax = 29x$ for any $x$ orthogonal to $e$), hence it is invertible.
To compute $A^{-1}$, solve $Ax=b$.
This is $29x-(e^Tx)e = b$. Multiplying on the left by $e^T$ gives $29 e^T x -(e^Tx)e^T e = e^T b$, and since $e^T e = 28$ this reduces to $e^T x = e^T b$. The first equation can then be written as $29 x = (e^Tx)e + b = (e^T b)e+b=(I+e e^T)b$, and so $x = {1 \over 29} (I+e e^T)b$, from which we get $A^{-1} = {1 \over 29} (I+e e^T)$.
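As a quick numerical check of this closed form (assuming the size is $n=28$, which is what the step $e^T e = 28$ above requires):

```python
import numpy as np

n = 28
e = np.ones((n, 1))
A = 29 * np.eye(n) - e @ e.T               # 28 on the diagonal, -1 elsewhere
A_inv = (np.eye(n) + e @ e.T) / 29
print(np.allclose(A @ A_inv, np.eye(n)))   # True
```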