The point of having units in computations is to restrict what one can do, so that one doesn't inadvertently write down a computation whose result depends on the choice of units in an unexpected way.
In mathematics this is not done much, but it certainly can be done if we want to. The idea is to work with matrices (etc.) over the field of rational functions in $n$ variables, assigning one variable to each fundamental unit. That is, we work in the field of fractions of the polynomial ring $\mathbb R[\rm m, s, kg, A, \ldots]$.
The fraction field contains all of the (unitless) real numbers, as well as all unitful measurements such as $2\,{\rm s}$ or $4000\,{\rm m}$ or $35\,\rm{\frac m{s^2}}$. It also contains many nonsensical elements such as $\frac{42\,\rm{kg}}{3\,{\rm m}+8\,{\rm s}}$, but that doesn't really bother us, because the field-of-fractions construction keeps the entire system consistent.
We can then declare the computation in question to be well-formed if we can prove that its output can always be written as a real vector (or matrix, etc.) times a scalar such as $1\,{\rm \frac ms}$, which fixes the dimension we expect (on physical grounds) the answer to have.
Once this proof (known to physicists and engineers as dimensional analysis) has been carried out, we can use an evaluation homomorphism to map all of the unit variables to $1$; they then disappear from the formulas, and the actual calculation can be done with ordinary real numbers.
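A minimal sketch of this scheme, assuming SymPy, with symbols standing in for the polynomial variables $\rm m, s, kg$ (the computation and the target dimension $\rm m/s^2$ are illustrative, not from the text):

```python
import sympy as sp

# Unit variables of the polynomial ring R[m, s, kg].
m, s, kg = sp.symbols('m s kg', positive=True)

# A small unitful computation: average acceleration from two speeds.
v1, v2 = 3 * m / s, 11 * m / s
dt = 4 * s
a = (v2 - v1) / dt        # an element of the fraction field R(m, s, kg)

# Dimensional analysis: dividing by 1 m/s^2 must leave a pure number,
# i.e. no unit symbols may survive the simplification.
assert sp.simplify(a / (m / s**2)).free_symbols == set()

# Evaluation homomorphism: send every unit variable to 1.
numeric = a.subs({m: 1, s: 1, kg: 1})
print(numeric)  # -> 2
```

If the expression had been dimensionally inconsistent (say $3\,{\rm m} + 8\,{\rm s}$ in a denominator), the `free_symbols` check would fail, flagging the computation as ill-formed before any evaluation.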
Whether this is exciting or not is subjective.
Best Answer
$ \def\m#1{\left[\begin{array}{c}#1\end{array}\right]} $The wonderful book *Multidimensional Analysis* (by George W. Hart) addresses this question in detail. It is surprisingly difficult to get it right.
For example, here is the dimensional sketch of a rectangular matrix and that of its pseudoinverse $$\eqalign{ A &= \m{ (m\cdot C^{-1}) & (m\cdot s\cdot K^{-1}) \\ (kg\cdot C^{-1}) & (kg\cdot s\cdot K^{-1}) \\ (m\cdot s^{-1}\cdot C^{-1})&(m\cdot K^{-1})} \\\\ A^{+} &= \m{ (C\cdot m^{-1}) & (C\cdot kg^{-1}) & (C\cdot s\cdot m^{-1}) \\ (K\cdot s^{-1}\cdot m^{-1}) & (K\cdot s^{-1}\cdot kg^{-1}) & (K\cdot m^{-1}) } \\\\ }$$ Matrices that are squareable must have a special dimensional structure, e.g. $$B = \m{ ({\tt1}) & (m\cdot s^{-1}) \\ (s\cdot m^{-1}) & ({\tt1}) } \quad\implies\quad B^2 \overset{\Delta}{\;=\;} B$$ where $\overset{\Delta}{=}$ denotes equality of dimensional structure. In this case, the diagonal elements carry no units, while the units of the off-diagonal elements are the reciprocals of those in the transposed positions. All powers of $B$ carry the same units.
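The squareability of $B$ can be checked mechanically. A sketch using SymPy unit symbols (the matrix below records only the dimensional structure of $B$ from the text, with placeholder numeric coefficients of $1$):

```python
import sympy as sp

m, s = sp.symbols('m s', positive=True)

# Dimensional structure of the squareable matrix B.
B = sp.Matrix([[1,   m/s],
               [s/m, 1]])

B2 = sp.simplify(B * B)

# B^2 carries the same units as B: each entry of B^2 divided by the
# matching entry of B must be a pure (unitless) number.
for i in range(2):
    for j in range(2):
        assert sp.simplify(B2[i, j] / B[i, j]).free_symbols == set()
```

The same loop, applied to `B**k` for any power `k`, verifies the claim that all powers of $B$ carry the same units.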
Likewise, functions of $B$ such as the square root or exponential, carry the same units as $B$.
Note that for the rectangular matrix above $$\eqalign{ AA^+ &= \m{ ({\tt1}) & (m\cdot kg^{-1}) & (s) \\ (kg\cdot m^{-1}) & ({\tt1}) & (kg\cdot s\cdot m^{-1}) \\ (s^{-1}) & (m\cdot s^{-1}\cdot kg^{-1}) & ({\tt1}) } \\\\ A^+A &= \m{ ({\tt1}) & (C\cdot s\cdot K^{-1}) \\ (K\cdot s^{-1}\cdot C^{-1}) & ({\tt1}) \\ } \\ }$$ So these projection matrices are squareable.
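These unit structures can likewise be verified symbolically. A SymPy sketch, treating the unit symbols $m, s, kg, C, K$ as the entries of the dimensional matrices $A$ and $A^+$ above, then checking that both projections are squareable:

```python
import sympy as sp

m, s, kg, C, K = sp.symbols('m s kg C K', positive=True)

# Dimensional structure of A and its pseudoinverse.
A  = sp.Matrix([[m/C,      m*s/K],
                [kg/C,     kg*s/K],
                [m/(s*C),  m/K]])
Ap = sp.Matrix([[C/m,      C/kg,      C*s/m],
                [K/(s*m),  K/(s*kg),  K/m]])

P = sp.simplify(A * Ap)   # 3x3 projection, units as in the text
Q = sp.simplify(Ap * A)   # 2x2 projection, units as in the text

# Both projections are squareable: M^2 carries the same units as M.
for M in (P, Q):
    M2 = sp.simplify(M * M)
    n = M.shape[0]
    for i in range(n):
        for j in range(n):
            assert sp.simplify(M2[i, j] / M[i, j]).free_symbols == set()
```

Note that the products only simplify because every term in each entry carries the same unit; a dimensionally inconsistent product would leave a sum of incompatible unit monomials, which this check would catch.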
Also, an identity matrix has no dimensions on its diagonal elements, but does carry dimensions in its off-diagonal elements. An expression like $I+B$ only makes sense if $I$ carries the same units as $B$.
So there is not one but infinitely many $2\times 2$ identity matrices once units are included.
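This infinite family is easy to parametrize: if the $i$-th coordinate of the underlying space carries unit $u_i$, the compatible identity has unit $u_i/u_j$ in position $(i,j)$, so each choice of coordinate units yields a different dimensioned identity. A sketch with SymPy (the helper name `identity_units` is made up for illustration):

```python
import sympy as sp

m, s, kg = sp.symbols('m s kg', positive=True)

def identity_units(u):
    """Unit structure of the identity on a space whose i-th
    coordinate carries unit u[i]: entry (i,j) carries u[i]/u[j],
    so the diagonal is unitless."""
    n = len(u)
    return sp.Matrix(n, n, lambda i, j: sp.simplify(u[i] / u[j]))

# Two of the infinitely many dimensioned 2x2 identities:
print(identity_units([m, s]))    # off-diagonal units m/s and s/m
print(identity_units([kg, m]))   # off-diagonal units kg/m and m/kg
```

The first of these is exactly the identity compatible with the matrix $B$ above: it carries the same units as $B$, so $I+B$ is well-formed.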