We can invert a 2x2 matrix with 6 multiplies and one real inversion. By Strassen's algorithm, we can multiply two 2x2 matrices with 7 multiplies.
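For concreteness, here is a sketch of both ingredients in Python/NumPy (the names `inv_2x2` and `strassen_2x2` are mine, not part of any library):

```python
import numpy as np

def inv_2x2(M):
    """Invert a 2x2 matrix: 2 multiplies for the determinant,
    one real inversion, then 4 multiplies to scale the adjugate."""
    a, b = M[0, 0], M[0, 1]
    c, d = M[1, 0], M[1, 1]
    det = a * d - b * c                  # 2 multiplies
    r = 1.0 / det                        # 1 real inversion
    return np.array([[ r * d, -r * b],
                     [-r * c,  r * a]])  # 4 multiplies

def strassen_2x2(X, Y):
    """Multiply two 2x2 matrices with Strassen's 7 scalar multiplies."""
    a, b, c, d = X[0, 0], X[0, 1], X[1, 0], X[1, 1]
    e, f, g, h = Y[0, 0], Y[0, 1], Y[1, 0], Y[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

# quick check against the ordinary product and inverse
X, Y = np.random.rand(2, 2), np.random.rand(2, 2)
assert np.allclose(strassen_2x2(X, Y), X @ Y)
assert np.allclose(inv_2x2(X), np.linalg.inv(X))
```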
Partition your 4x4 matrix as
$$\left( \begin{matrix} A & B \\ C & D \end{matrix} \right) $$
We will reduce this to the identity.
Compute the inverse of $A$.
Perform block Gaussian elimination to obtain
$$\left( \begin{matrix} I & A^{-1} B \\ 0 & D - CA^{-1}B \end{matrix} \right) $$
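Explicitly, this elimination amounts to left-multiplying by a block elimination matrix:
$$\left( \begin{matrix} A^{-1} & 0 \\ -C A^{-1} & I \end{matrix} \right) \left( \begin{matrix} A & B \\ C & D \end{matrix} \right) = \left( \begin{matrix} I & A^{-1} B \\ 0 & D - C A^{-1} B \end{matrix} \right) $$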
Let $E = D - C A^{-1} B$. Now compute $E^{-1}$. We can now finish Gaussian elimination.
Repeating these operations on an identity matrix tells us that the inverse is
$$ \left( \begin{matrix} A^{-1} + A^{-1} B E^{-1} C A^{-1} & -A^{-1} B E^{-1}
\\ -E^{-1} C A^{-1} & E^{-1} \end{matrix} \right) $$
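As a sanity check, multiplying the original block matrix by this claimed inverse, and using $E = D - C A^{-1} B$ in the bottom row, gives
$$\left( \begin{matrix} A & B \\ C & D \end{matrix} \right) \left( \begin{matrix} A^{-1} + A^{-1} B E^{-1} C A^{-1} & -A^{-1} B E^{-1} \\ -E^{-1} C A^{-1} & E^{-1} \end{matrix} \right) = \left( \begin{matrix} I + B E^{-1} C A^{-1} - B E^{-1} C A^{-1} & -B E^{-1} + B E^{-1} \\ C A^{-1} - E E^{-1} C A^{-1} & E E^{-1} \end{matrix} \right) = \left( \begin{matrix} I & 0 \\ 0 & I \end{matrix} \right) $$
so the off-diagonal blocks cancel and the diagonal blocks reduce to $I$.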
The calculation of this inverse requires two 2x2 inversions (12 multiplies and 2 real inversions) and six 2x2 multiplies (7 multiplies each with Strassen):
- $C A^{-1}$
- $(C A^{-1}) B$
- $E^{-1} (C A^{-1})$
- $A^{-1} B$
- $(A^{-1} B) E^{-1}$
- $(A^{-1} B)(E^{-1} C A^{-1})$
for $6 \cdot 7 + 12 = 54$ multiplies and 2 real inversions in all.
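A sketch of this organization in Python/NumPy (here `np.linalg.inv` and `@` stand in for the 6-multiply 2x2 inverse and Strassen's 7-multiply product; `block_inverse_4x4` is my own name for the routine):

```python
import numpy as np

def block_inverse_4x4(M):
    """Invert a 4x4 matrix via 2x2 blocks, using exactly the two block
    inversions and six block products listed above."""
    A, B = M[:2, :2], M[:2, 2:]
    C, D = M[2:, :2], M[2:, 2:]

    Ai    = np.linalg.inv(A)   # first 2x2 inversion
    CAi   = C @ Ai             # product 1: C A^{-1}
    CAiB  = CAi @ B            # product 2: (C A^{-1}) B
    E     = D - CAiB           # Schur complement
    Ei    = np.linalg.inv(E)   # second 2x2 inversion
    EiCAi = Ei @ CAi           # product 3: E^{-1} (C A^{-1})
    AiB   = Ai @ B             # product 4: A^{-1} B
    AiBEi = AiB @ Ei           # product 5: (A^{-1} B) E^{-1}
    corr  = AiB @ EiCAi        # product 6: (A^{-1} B)(E^{-1} C A^{-1})

    return np.block([[Ai + corr, -AiBEi],
                     [-EiCAi,     Ei   ]])

# quick check against the library inverse; the shift keeps A nonsingular
M = np.random.rand(4, 4) + 4 * np.eye(4)
assert np.allclose(block_inverse_4x4(M), np.linalg.inv(M))
```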
Of course, using Strassen's algorithm for 2x2 matrices is a terrible idea. I don't know whether the above organization of the calculation is reasonable even if you don't use Strassen's algorithm; I suspect it is not.
If $A$ turns out to be singular, you have to partition the matrix differently to do the calculation.
One remark first: the determinant is defined for square matrices, not for vectors. Maybe the problem here is that you view $D$ as a vector, whereas what is meant is "the square matrix with the same diagonal as $U$ and zeros everywhere else".
The LU decomposition yields a lower triangular matrix $L$ and an upper triangular matrix $U$ with
$$A=LU$$
Since the determinant of a product is always the product of the determinants, it's perfectly safe to write
$$\det A=\det L \det U$$
Now, the determinant of a triangular matrix is the product of its diagonal elements, and $L$ has only ones on its diagonal, whereas the diagonal of $U$ may be called $D$, so
$$\det A=\det D$$
For example, with $A=\left(\begin{matrix}
1 & 0 & 1 \\
2 & 1 & 0 \\
1 & 2 & 2 \\
\end{matrix}\right)$, you get the factorization
$$A=\left(\begin{matrix}
1 & 0 & 0 \\
2 & 1 & 0 \\
1 & 2 & 1 \\
\end{matrix}\right)\cdot\left(\begin{matrix}
1 & 0 & 1 \\
0 & 1 & -2 \\
0 & 0 & 5 \\
\end{matrix}\right)$$
And of course
$$\det \left(\begin{matrix}
1 & 0 & 0 \\
2 & 1 & 0 \\
1 & 2 & 1 \\
\end{matrix}\right)=\det \left(\begin{matrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{matrix}\right)=1$$
$$\det \left(\begin{matrix}
1 & 0 & 1 \\
0 & 1 & -2 \\
0 & 0 & 5 \\
\end{matrix}\right)=\det \left(\begin{matrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 5 \\
\end{matrix}\right)=5$$
Hence $\det A=5$.
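A quick NumPy check of this example (using nothing beyond the factorization above):

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [2., 1., 0.],
              [1., 2., 2.]])
L = np.array([[1., 0., 0.],
              [2., 1., 0.],
              [1., 2., 1.]])
U = np.array([[1., 0., 1.],
              [0., 1., -2.],
              [0., 0., 5.]])

assert np.allclose(L @ U, A)      # the factorization holds
print(np.prod(np.diag(U)))        # 5.0, the product of U's diagonal
print(np.linalg.det(A))           # ~5.0, up to floating-point roundoff
```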
There is another $LU$ factorization, with ones on the diagonal of $U$ instead of $L$, but of course that does not change the answer, provided $D$ is then taken to be the diagonal of $L$.
There may be another concern: often, the $LU$ decomposition is written
$$PA=LU$$
where $P$ is a permutation matrix. This happens when pivoting is used in the $LU$ decomposition.
Then $\det P \det A=\det L\det U$, but $\det P=\pm1$, since a permutation matrix is always orthogonal. Thus you must be careful with the sign of $\det A$. And $\det P$ is simply the sign of the permutation on which $P$ is based.
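A hedged sketch with NumPy and SciPy (note that `scipy.linalg.lu` returns the factorization in the form $A = PLU$, so its $P$ is the transpose of the one in $PA=LU$, with the same determinant):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[1., 0., 1.],
              [2., 1., 0.],
              [1., 2., 2.]])

P, L, U = lu(A)                     # SciPy's convention: A = P @ L @ U
sign = round(np.linalg.det(P))      # +1 or -1, the sign of the permutation
det_A = sign * np.prod(np.diag(U))  # det L = 1, so det A = det P * prod(diag U)
print(det_A)                        # ~5.0, matching the hand computation
```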
Best Answer
Usually, the purpose of doing LU decomposition is to avoid having to compute an inverse at all, because an upper-triangular system can be solved easily via back-substitution.
A simple way to obtain the inverse would be to just complete the Gauss-Jordan elimination the rest of the way, instead of stopping at the halfway point as one does with LU decomposition.
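For example, a minimal back-substitution sketch in NumPy (the function `back_substitute` is my own illustration, not a library routine):

```python
import numpy as np

def back_substitute(U, y):
    """Solve U x = y for upper-triangular U by back-substitution,
    without ever forming U^{-1}."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# example: the upper-triangular factor from the worked example above
U = np.array([[1., 0., 1.],
              [0., 1., -2.],
              [0., 0., 5.]])
y = np.array([2., 1., 5.])
x = back_substitute(U, y)
assert np.allclose(U @ x, y)
```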