numpy.einsum has a shortcut for this. You can specify the axes you want out.
Normally you line up the axes and sum over repeated indices.
If you specify the output indices, it aligns things the same way, but sums over the indices that aren't in the specified output.
#Hadamard product
c = np.einsum('ij,ij->ij',a,b)
#select the diagonal of a matrix
c = np.einsum('ii->i',a)
#sum
c = np.einsum('ij->',a)
#squared length of each vector in a 2d array of vectors
c = np.einsum('ijk,ijk->ij',a,a)
I think your application can be written as:
#matrix product of each pair of matrices in a 2d array of matrices
c = np.einsum('ijkl,ijlm->ijkm',a,b)
I find this much simpler and more direct. I don't know if it matters, but I also think this will be much more efficient than a direct implementation of the accepted answer.
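A quick sanity check of the batched matrix product above, comparing the einsum call against an explicit loop (array shapes here are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((2, 3, 4, 5))  # a 2x3 grid of 4x5 matrices
b = rng.standard_normal((2, 3, 5, 6))  # a 2x3 grid of 5x6 matrices

# Matrix product of each pair of corresponding matrices in the grid:
c = np.einsum('ijkl,ijlm->ijkm', a, b)

# Reference: loop over the grid and multiply each pair directly.
ref = np.empty((2, 3, 4, 6))
for i in range(2):
    for j in range(3):
        ref[i, j] = a[i, j] @ b[i, j]

print(np.allclose(c, ref))  # True
```

Note that `a @ b` (via `np.matmul`) also broadcasts over the leading axes and gives the same result; `einsum` just makes the index bookkeeping explicit.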
Matrix multiplication with non-raised (i.e., not written as upper or lower) indices, the first index being the row index and the second the column index, is given by the rule
$$
(AB)_{i,k}=\sum_jA_{i,j}B_{j,k}\tag1
$$
Now your second rule for transforming $A$ to $A'$ can be written (if you'll forgive me for using non-Greek letters as indices)
$$
A'_{i,l}= \sum_{j,k} M_i^jA_{j,k}M_l^k,\tag2
$$
(I've inverted the indices in the LHS since I think you made a mistake: in your formula, if $M$ is the identity then $MAM^\top$ switches the indices, which cannot be right; with this proviso the correspondence is $\mu:=i$, $~\rho:=j$, $~\theta:=k$, $~\nu:=l$). Now if we agree to call the lower index of $M$ the first one and the upper index the second one, then in the right hand side of $(2)$, the second copy of $M$ has its indices switched with respect to what one would get by expanding out $MAM$ using $(1)$. So to get the indices in the right place one must transpose the second copy of $M$ before entering it into the matrix product: the RHS of $(2)$ describes the computation of $MAM^\top$.
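The index placement in $(2)$ can be checked numerically with einsum, writing $M_i^j$ as `M[i, j]` (random matrices here are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = rng.standard_normal((4, 4))

# RHS of (2): sum over j and k of M[i,j] * A[j,k] * M[l,k]
Aprime = np.einsum('ij,jk,lk->il', M, A, M)

# ... which is exactly M @ A @ M.T, not M @ A @ M:
print(np.allclose(Aprime, M @ A @ M.T))  # True
```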
Best Answer
In Einstein's summation notation, $$\begin{align} (CD^T)_{ij} &= C_{ia}(D^T)_{aj} = C_{ia}D_{ja}\\ (A(I-BB^T)C^T)_{ij} &= A_{ia}(I - BB^T)_{ab}(C^T)_{bj}\\ &= A_{ia}(\delta_{ab} - (BB^T)_{ab})C_{jb}\\ &= A_{ia}(\delta_{ab} - B_{ac}(B^T)_{cb})C_{jb}\\ &= A_{ia}(\delta_{ab} - B_{ac} B_{bc})C_{jb} \end{align} $$ where $\delta_{ab}$ is the Kronecker delta.
The equality $CD^T=A(I−BB^T)C^T$ becomes
$$C_{ia}D_{ja} = A_{ia}(\delta_{ab} - B_{ac} B_{bc})C_{jb} = A_{ia}C_{ja} - A_{ia}B_{ac}B_{bc}C_{jb} $$
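As a sketch of a numerical sanity check, the expanded index expression on the right can be compared against the matrix form $A(I-BB^\top)C^\top$ (square matrices of an arbitrary size, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
I = np.eye(n)

# Matrix form of the RHS:
rhs_matrix = A @ (I - B @ B.T) @ C.T

# Same quantity from the expanded index expression
# A_{ia}C_{ja} - A_{ia}B_{ac}B_{bc}C_{jb}:
rhs_index = (np.einsum('ia,ja->ij', A, C)
             - np.einsum('ia,ac,bc,jb->ij', A, B, B, C))

print(np.allclose(rhs_matrix, rhs_index))  # True
```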