\begin{align*}
\begin{bmatrix}
A^\ast & A^\ast A\\
I_m+AA^\ast & A
\end{bmatrix}^{-1}
&=
\left(\begin{bmatrix}
A^\ast A&A^\ast\\
A&I_m+AA^\ast
\end{bmatrix}
\begin{bmatrix}
0&I_n\\
I_m&0
\end{bmatrix}\right)^{-1}\\
&=
\begin{bmatrix}
0&I_m\\
I_n&0
\end{bmatrix}
\color{red}{\begin{bmatrix}
A^\ast A&A^\ast\\
A&I_m+AA^\ast
\end{bmatrix}}^{-1}.
\end{align*}
Since both $A^\ast A$ and the Schur complement $S=I_m + AA^\ast - A(A^\ast A)^{-1}A^\ast$ are invertible, you can compute the inverse on the last line by the matrix inversion formula
$$
\begin{bmatrix}A&B\\ C&D\end{bmatrix}^{-1}
= \begin{bmatrix}A^{-1}+A^{-1}BS^{-1}CA^{-1} & -A^{-1}BS^{-1}\\ -S^{-1}CA^{-1} & S^{-1} \end{bmatrix}
$$
where $S=D-CA^{-1}B$ (to apply this formula, replace $A$ by $A^\ast A$, $B$ by $A^\ast$, $C$ by $A$ and $D$ by $I_m+AA^\ast$).
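For a quick numerical sanity check, here is a minimal NumPy sketch (assuming a random complex $A$ with $m>n$, so that $A^\ast A$ is invertible with probability one; all variable names are of course just for illustration) verifying both the permutation identity above and the Schur-complement block inverse:
```python
import numpy as np

m, n = 5, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
Ah = A.conj().T

# Original matrix M, the "red" matrix R, and the block permutation P = [[0, I_n], [I_m, 0]]
M = np.block([[Ah, Ah @ A], [np.eye(m) + A @ Ah, A]])
R = np.block([[Ah @ A, Ah], [A, np.eye(m) + A @ Ah]])
P = np.block([[np.zeros((n, m)), np.eye(n)], [np.eye(m), np.zeros((m, n))]])

print(np.allclose(M, R @ P))                                   # M = R P
print(np.allclose(np.linalg.inv(M), P.T @ np.linalg.inv(R)))   # M^{-1} = P^{-1} R^{-1}, with P^{-1} = P^T

# Block inverse of R via the Schur complement S = D - C A^{-1} B,
# with A -> A*A, B -> A*, C -> A, D -> I_m + A A*
TL, TR, BL, BR = Ah @ A, Ah, A, np.eye(m) + A @ Ah
TLi = np.linalg.inv(TL)
Si = np.linalg.inv(BR - BL @ TLi @ TR)
R_inv = np.block([[TLi + TLi @ TR @ Si @ BL @ TLi, -TLi @ TR @ Si],
                  [-Si @ BL @ TLi, Si]])
print(np.allclose(R_inv, np.linalg.inv(R)))                    # True
```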
Alternatively, if you perform a singular value decomposition $A=U_{m\times m}\begin{bmatrix}\Sigma_{n\times n}\\ 0_{(m-n)\times n}\end{bmatrix}V_{n\times n}^\ast$, where $U$ and $V$ are unitary, it is easy to see that
$$
\color{red}{\begin{bmatrix}
A^\ast A&A^\ast\\
A&I_m+AA^\ast
\end{bmatrix}}
=
\begin{bmatrix}V\\ &U\end{bmatrix}
\begin{bmatrix}\Sigma^2&\Sigma&0\\ \Sigma&I_n+\Sigma^2&0\\ 0&0&I_{m-n}\end{bmatrix}
\begin{bmatrix}V^\ast\\ &U^\ast\end{bmatrix}
$$
and hence
$$
\color{red}{\begin{bmatrix}
A^\ast A&A^\ast\\
A&I_m+AA^\ast
\end{bmatrix}}^{-1}
=
\begin{bmatrix}V\\ &U\end{bmatrix}
\begin{bmatrix}\Sigma^{-2}+\Sigma^{-4}&-\Sigma^{-3}&0\\ -\Sigma^{-3}&\Sigma^{-2}&0\\ 0&0&I_{m-n}\end{bmatrix}
\begin{bmatrix}V^\ast\\ &U^\ast\end{bmatrix}.
$$
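Continuing the same sketch (still assuming NumPy, $m>n$, and nonzero singular values, and reusing `A`, `Ah`, `m`, `n`, `R` from above), the SVD-based expression for the inverse can be checked numerically as well:
```python
# Full SVD: A = U [[Sigma], [0]] V*, with U (m x m) and V (n x n) unitary
U, s, Vh = np.linalg.svd(A, full_matrices=True)
V = Vh.conj().T
Sinv = np.diag(1.0 / s)                              # Sigma^{-1} (n x n, diagonal)
Z = np.zeros((n, m - n))

W = np.block([[V, np.zeros((n, m))],                 # block-diagonal [[V, 0], [0, U]]
              [np.zeros((m, n)), U]])
mid_inv = np.block([[Sinv @ Sinv + Sinv @ Sinv @ Sinv @ Sinv, -Sinv @ Sinv @ Sinv, Z],
                    [-Sinv @ Sinv @ Sinv, Sinv @ Sinv, Z],
                    [Z.T, Z.T, np.eye(m - n)]])
print(np.allclose(W @ mid_inv @ W.conj().T, np.linalg.inv(R)))  # True
```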
Before talking about the multiplication of two matrices, let's look at another way to interpret a matrix $A$. Say we have a matrix $A$ as below:
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix}
$$
We can easily see that the third column $\begin{bmatrix} 3 \\ 2 \\ 3 \end{bmatrix}$ is a linear combination of the first two columns:
$$
1\begin{bmatrix} 1 \\ 1 \\ 1\\\end{bmatrix} +
1\begin{bmatrix} 2 \\ 1 \\ 2\\\end{bmatrix} =
\begin{bmatrix} 3 \\ 2 \\ 3 \\\end{bmatrix}
$$
And you can say that $\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix}$ form a basis for the column space of $A$.
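A tiny NumPy check of this observation (just an illustrative sketch):
```python
import numpy as np

A = np.array([[1, 2, 3],
              [1, 1, 2],
              [1, 2, 3]])
print(np.array_equal(A[:, 2], A[:, 0] + A[:, 1]))   # True: third column = first + second
print(np.linalg.matrix_rank(A))                     # 2, so two columns suffice as a basis
```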
Never mind for now why you would want to decompose matrix $A$ like this in the first place,
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix} =
\begin{bmatrix}
1 & 0 & 1 \\
1 & 0 & 1 \\
1 & 0 & 1 \\
\end{bmatrix} +
\begin{bmatrix}
0 & 2 & 2 \\
0 & 1 & 1 \\
0 & 2 & 2 \\
\end{bmatrix}
$$
but you can, and by the end it will look reasonable.
If you view this equation column-wise, each column $j$ of $A$ is the sum of the corresponding columns $j$ of the matrices on the RHS.
What's special about the matrices on the RHS is that each of them is a rank-1 matrix whose column space is the line spanned by one of the basis vectors of the column space of $A$, e.g.
$
\begin{bmatrix}
1 & 0 & 1 \\
1 & 0 & 1 \\
1 & 0 & 1 \\
\end{bmatrix}
$
spans only the line through $\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$. This is why people say rank-1 matrices are the building blocks of all matrices.
If you now revisit the idea of viewing $A$ column by column, this decomposition actually emphasizes the idea of taking linear combinations of basis vectors.
If this makes sense, you can take the RHS one step further:
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix} =
\begin{bmatrix} 1 \\ 1 \\ 1 \\\end{bmatrix}
\begin{bmatrix} 1 & 0 & 1 \\\end{bmatrix} +
\begin{bmatrix} 2 \\ 1 \\ 2 \\\end{bmatrix}
\begin{bmatrix} 0 & 1 & 1 \\\end{bmatrix}
$$
Each term on the RHS says: take this basis vector and stretch it into a full $3 \times 3$ (but still rank-1) matrix.
We can massage this a little bit, namely put the RHS into matrix form, and get
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix} =
\begin{bmatrix}
1 & 2 \\
1 & 1 \\
1 & 2 \\
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & 1 \\
\end{bmatrix}
$$
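Continuing the small example (a NumPy sketch reusing the matrix `A` from the check above), both the sum of rank-1 outer products and the final factorization are easy to verify:
```python
c1, c2 = np.array([1, 1, 1]), np.array([2, 1, 2])    # basis vectors of the column space
r1, r2 = np.array([1, 0, 1]), np.array([0, 1, 1])    # combination coefficients for each column

# A as a sum of two rank-1 (outer-product) matrices
print(np.array_equal(A, np.outer(c1, r1) + np.outer(c2, r2)))   # True

# The same decomposition packed into a single matrix product
C = np.column_stack([c1, c2])                        # 3 x 2
Rrows = np.vstack([r1, r2])                          # 2 x 3
print(np.array_equal(A, C @ Rrows))                  # True
```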
Now you can forget matrix $A$ and imagine that what you have are just the two matrices on the RHS. When you read this text backward (I mean logically), I hope matrix multiplication in this fashion makes sense to you now. Or, if you prefer, you can start with the two matrices in the question.
Best Answer
It doesn't seem like there's anything here that can't be done with everyday matrix multiplication. It seems we can calculate $A«B$ as follows.
As it turns out, "flipping" a matrix from right to left is itself a matrix operation. Namely, let $K_n$ be the $n\times n$ matrix given by
$$ K_n = \pmatrix{ 0&\cdots&0&1\\ \vdots&&1&0\\ 0&& &\vdots\\ 1&0&\cdots &0 } $$
Now, suppose $A$ is $k \times m$ and $B$ is $n \times k$. Then we can calculate $$ A«B= (B K_k)(AK_m)K_m = BK_kA, $$ where the last equality uses $K_m^2 = I_m$. I have not heard of any application for this particular set of operations.
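As a rough NumPy sketch (the definition of « itself comes from the question; here we only check the algebraic simplification above), one can verify that $(BK_k)(AK_m)K_m = BK_kA$:
```python
import numpy as np

def K(n):
    """Exchange matrix K_n: ones on the anti-diagonal; right-multiplication flips columns."""
    return np.fliplr(np.eye(n))

k, m, n = 3, 4, 2
rng = np.random.default_rng(1)
A = rng.standard_normal((k, m))     # A is k x m
B = rng.standard_normal((n, k))     # B is n x k

lhs = (B @ K(k)) @ (A @ K(m)) @ K(m)
rhs = B @ K(k) @ A
print(np.allclose(lhs, rhs))        # True, since K(m) @ K(m) = I_m
```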
This analysis also reveals two different ways of finding $A«B$: