This is another grey area in mathematical terminology. Inversion can mean many different things in many different contexts, but in general it has to do with swapping two things, or producing something's "opposite." Each use of "inversion" or "inverse" can be made more precise with more words: $1/a$ is the multiplicative inverse of $a$, $-a$ is the additive inverse of $a$, $f^{-1}(x)$ is the inverse function of $f(x)$, and so on. This can become confusing if one is not careful about context. For instance, suppose we are talking only about functions, and I bring up the topic of inverses. It is clear from context that every inverse we mention is going to be an inverse function. If we want to refer to $1/a$ or $-a$ at that point, we might instead use the words reciprocal and opposite, respectively. It just happens that saying "inverse" instead of the more descriptive "multiplicative inverse," say, is briefer, and the extra precision is sometimes unnecessary.
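For a concrete illustration: with $a = 5$, the multiplicative inverse (reciprocal) is $1/5$ and the additive inverse (opposite) is $-5$, while for the function $f(x) = x + 5$ the inverse function is $f^{-1}(x) = x - 5$. All three are "inverses," but of very different kinds.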
All in all, there can be many different terms for a single operation, and there can be one term that applies to many different operations. This is quite common in introductory mathematics, as the different words are used to convey different meanings in context. You can see this with terms like "inversion" and "prime," but in each setting you should strive to make their meaning precise and leave no ambiguity (within reason, of course).
(P.S. Yes, arcsine is the same as inverse sine; the "inverse" here is the functional inverse.)
There is a way to get from (1)-(3) to (5)-(7) without the trick of assembling the matrices $u,v,w,x$ into a block matrix.
The special ingredient is the identity
$$
(1+rs)^{-1} = 1 - r(1+sr)^{-1}s, \tag{*}
$$
which is easy to verify.
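Indeed, for completeness: multiplying the right-hand side by $1 + rs$ and using $r + rsr = r(1+sr)$ gives
$$
(1+rs)\bigl(1 - r(1+sr)^{-1}s\bigr) = 1 + rs - r(1+sr)(1+sr)^{-1}s = 1 + rs - rs = 1,
$$
and the product in the other order works out the same way.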
Admittedly, this too may seem "magical" if you're unfamiliar with it.
At least, though, it's a universal kind of magic, in the sense that it's an identity that holds in any ring, not just for matrices.
(For a reference about this identity, see mathoverflow.net/questions/31595, where it appears with a tiny difference in sign.)
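If it helps, here is a throwaway numerical sanity check of (*) in NumPy (just a sketch, not part of the argument; the shapes are arbitrary, chosen so that both sides are defined):

```python
import numpy as np

# Throwaway numerical sanity check of the identity (*).
# r is n x m and s is m x n, so both I + rs (n x n) and I + sr (m x m) exist.
rng = np.random.default_rng(0)
n, m = 4, 3
r = rng.standard_normal((n, m))
s = rng.standard_normal((m, n))

lhs = np.linalg.inv(np.eye(n) + r @ s)
rhs = np.eye(n) - r @ np.linalg.inv(np.eye(m) + s @ r) @ s
print(np.allclose(lhs, rhs))  # True (up to floating-point error)
```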
Using this identity, we will derive equation (5) from (1)-(3).
First, note that $u$ and $x$ are invertible.
(This follows from (1) and (2), although I'm glossing over the details.)
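(In a bit more detail, and assuming (1) and (2) read $u u^\dagger = I + v v^\dagger$ and $x x^\dagger = I + w w^\dagger$, as they are used below: both right-hand sides are positive definite, so $u u^\dagger$ and $x x^\dagger$ are invertible, and hence so are $u$ and $x$.)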
Now write (3) as $w^\dagger (x^{-1})^\dagger = u^{-1} v$.
Multiply that equation by its conjugate transpose $x^{-1} w = v^\dagger (u^{-1})^\dagger$ to get
$$
w^\dagger (x^{-1})^\dagger x^{-1} w = u^{-1} v v^\dagger (u^{-1})^\dagger.
$$
The right-hand side, applying (1), is
\begin{align}
u^{-1} v v^\dagger (u^{-1})^\dagger
&= u^{-1} (u u^\dagger - I) (u^{-1})^\dagger
\\ &= I - u^{-1} (u^{-1})^\dagger
\\ &= I - (u^\dagger u)^{-1},
\end{align}
while the left-hand side, applying (2) and the identity (*), is
\begin{align}
w^\dagger (x^{-1})^\dagger x^{-1} w
&= w^\dagger (x x^\dagger)^{-1} w
\\ &= w^\dagger (I + w w^\dagger)^{-1} w
\\ &= I - (I + w^\dagger w)^{-1}.
\end{align}
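(In the last step, (*) is applied in the rearranged form $r(1+sr)^{-1}s = 1 - (1+rs)^{-1}$ with $r = w^\dagger$ and $s = w$.)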
This shows that $u^\dagger u = I + w^\dagger w$ (equation (5)).
Equation (6) can be derived in the same way.
Equation (7) is easy to derive once (1)-(6) are all available:
\begin{align}
u^\dagger v
&= u^\dagger u u^{-1} v
\\ &= (I + w^\dagger w) w^\dagger (x^{-1})^\dagger
\\ &= w^\dagger (I + w w^\dagger) (x^{-1})^\dagger
\\ &= w^\dagger x x^\dagger (x^{-1})^\dagger
\\ &= w^\dagger x.
\end{align}
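(Step by step: insert $u u^{-1}$; apply (5) and the rewritten form of (3); push $w^\dagger$ through using $(I + w^\dagger w)\, w^\dagger = w^\dagger (I + w w^\dagger)$; apply (2); and finally cancel $x^\dagger (x^{-1})^\dagger = I$.)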
This way, we've proven (5)-(7) from (1)-(3), without going up to the higher dimensional "big picture" of the generalized unitary matrix $B$.
Best Answer
Consider an invertible $n \times n$ matrix $A$ with entries in a field $k.$ We claim that the $n \times n$ matrix $B$ such that $AB = I_{n \times n} = BA$ (where $I_{n \times n}$ is the $n \times n$ matrix with $1$s on the diagonal and $0$s elsewhere) is unique.
Proof. We will assume that there exists another $n \times n$ matrix $C$ such that $AC = I_{n \times n} = CA.$ Using the associativity of matrix multiplication, we have that $B = BI_{n \times n} = B(AC) = (BA)C = I_{n \times n}C = C.$ QED.
Consequently, we may designate the matrix inverse of $A$ as $B = A^{-1}.$ Unfortunately, if you are dealing with an $m \times n$ matrix $A$ with entries in the field $k$ for distinct $m$ and $n,$ then $A$ might have a left-inverse $L$ such that $LA = I_{n \times n},$ or a right-inverse $R$ such that $AR = I_{m \times m},$ or neither, but it can never have both.
Consider the $3 \times 2$ matrix in your example $$A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{pmatrix}.$$ Observe that this matrix cannot (by the linked post) have a right-inverse; however, it does have a left-inverse $$L = \begin{pmatrix} \frac 1 2 & 0 & \frac 1 2 \\ 0 & 1 & 0 \end{pmatrix}.$$ Ultimately, if we wish to solve $A \mathbf x = \mathbf y,$ we may apply $L$ on the left to obtain $\mathbf x = I_{2 \times 2} \mathbf x = (LA) \mathbf x = L(A \mathbf x) = L \mathbf y;$ however, we cannot conclude from this that $\mathbf x = L \mathbf y$ yields $A \mathbf x = \mathbf y$ because $L$ cannot be both a left- and right-inverse of $A.$
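If you want to check this numerically, here is a quick NumPy sketch (not part of the argument) confirming that $L$ is a left-inverse of $A$ but not a right-inverse:

```python
import numpy as np

# Check that L is a left-inverse of A (LA = I) but not a right-inverse (AL != I).
A = np.array([[1, 0],
              [0, 1],
              [1, 0]], dtype=float)
L = np.array([[0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

print(np.allclose(L @ A, np.eye(2)))  # True:  LA = I_{2x2}
print(np.allclose(A @ L, np.eye(3)))  # False: AL != I_{3x3}
```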
One more comment: the range of $A$ is $\operatorname{span}_k \{\langle 1, 0, 1 \rangle, \langle 0, 1, 0 \rangle \}$ since we have that $$A \mathbf x = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \langle a, b, a \rangle = \langle a, 0, a \rangle + \langle 0, b, 0 \rangle = a \langle 1, 0, 1 \rangle + b \langle 0, 1, 0 \rangle.$$ Consequently, the vector $\mathbf y = \langle 1, 1, 0 \rangle$ is not in the range of $A,$ i.e., the equation $A \mathbf x = \mathbf y$ has no solution.
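And a companion sketch showing concretely that applying $L$ to this $\mathbf y$ does not produce a solution:

```python
import numpy as np

# y = (1, 1, 0) has unequal first and third entries, so it is not in the
# range of A; "solving" with the left-inverse L therefore fails.
A = np.array([[1, 0], [0, 1], [1, 0]], dtype=float)
L = np.array([[0.5, 0.0, 0.5], [0.0, 1.0, 0.0]])
y = np.array([1.0, 1.0, 0.0])

x = L @ y                      # x = (0.5, 1.0)
print(A @ x)                   # [0.5 1.  0.5]  -- not equal to y
print(np.allclose(A @ x, y))   # False
```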