Short answer
The reason that the capital $Q$'s are next to each other is that they are matrices acting in succession to transform $v$. The vector $v$ is also being transformed twice in the sequence $v\mapsto qv\mapsto qvq^\ast$, but that transformation depends on the order of multiplication (since quaternions are noncommutative), whereas when you model linear transformations by matrices acting on the left, they simply stack up on the left. We will see that $Q$ represents $q$ and $Q^\ast$ represents $q^\ast$.
What you're seeing
You're looking at two different representations of the rotation: one as the rotation matrix $QQ^\ast$, and one as a quaternion $q$.
The first is for a column vector $v$ and the two matrices you defined: the rotation is $v\mapsto Q^\ast Qv$. (You might actually want to think of it as a composition of two steps: $v \mapsto Qv\mapsto Q^\ast Qv$. Incidentally, $Q$ will have to have $\det(Q)=1$ to represent a rotation.)
The second is for a vector $v$ interpreted as a quaternion with real part $0$: the rotation is $v\mapsto qvq^\ast$ where $q$ is a quaternion. (Again, you can view this as a two-step process: $v\mapsto qv\mapsto qvq^\ast$. It's also important to point out that $q$ will have to be a unit-length quaternion to represent a rotation, or else the map does not preserve distances.)
It's important to remember that $v\mapsto qv$ and $v\mapsto vq^\ast$ are just $\Bbb R$-linear transformations of $\Bbb H$. As such, you can fix a basis and find a matrix representing multiplication by $q$. By choosing the right basis, the matrix produced is $Q$, and in the same basis, the matrix produced for $q^\ast$ is $Q^\ast$.
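As a sanity check of this linearity claim, here is a short Python sketch (the helper names `qmul`, `coords`, and `matrix_of` are mine, not from the question or any library) that builds the matrices of $v\mapsto qv$ and $v\mapsto vq^\ast$ in the ordered basis $\{i,j,k,1\}$ and compares them with the explicit matrices written out later in this answer:

```python
def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def coords(q):
    """Coordinates of a quaternion in the ordered basis {i, j, k, 1}."""
    w, x, y, z = q
    return [x, y, z, w]

BASIS = [(0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1), (1, 0, 0, 0)]  # i, j, k, 1

def matrix_of(f):
    """Matrix (in the basis above) of an R-linear map f on the quaternions."""
    cols = [coords(f(b)) for b in BASIS]
    return [[cols[c][r] for c in range(4)] for r in range(4)]

w, x, y, z = 2, 3, 5, 7        # arbitrary coefficients; linearity needs no unit length
q     = (w, x, y, z)
qconj = (w, -x, -y, -z)

# Matrix of left multiplication v -> q v matches the pattern Q:
assert matrix_of(lambda v: qmul(q, v)) == [[ w, -z,  y,  x],
                                           [ z,  w, -x,  y],
                                           [-y,  x,  w,  z],
                                           [-x, -y, -z,  w]]
# Matrix of right multiplication v -> v q* matches the pattern Q*:
assert matrix_of(lambda v: qmul(v, qconj)) == [[ w, -z,  y, -x],
                                               [ z,  w, -x, -y],
                                               [-y,  x,  w, -z],
                                               [ x,  y,  z,  w]]
```

The construction is nothing but "apply the map to each basis vector and record the coordinates as a column," which is exactly how one passes from a linear map to its matrix.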
So the two sequences of mappings you see in the above two cases are really representing the same process.
(Incidentally, what about the sequence of transformations $v\mapsto vq^\ast\mapsto qvq^\ast$? Well, if you check, you'll see that $QQ^\ast=Q^\ast Q$, so doing things in the order $v\mapsto Q^\ast v\mapsto QQ^\ast v$ still yields the same result as before :) )
Take a look at what $QQ^\ast$ looks like at Wolfram, remembering that $\det(Q)=(w^2+x^2+y^2+z^2)^2=1$. The upper-left $3\times 3$ submatrix turns out to be a rotation matrix for $\Bbb R^3$, and the lower-right entry is just $1$. Thus the matrix acts on the first three coordinates but leaves the last coordinate fixed. This is a clue that the first three basis vectors are where the $3$ spatial dimensions live.
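If you prefer a numeric spot-check to the Wolfram computation, the following NumPy sketch (helper names `Q_of` and `Qstar_of` are mine) verifies all of these claims for a random unit quaternion: the two factors commute, the lower-right entry of the product is $1$, the rest of the last row and column vanish, and the upper-left $3\times 3$ block is orthogonal with determinant $1$:

```python
import numpy as np

def Q_of(w, x, y, z):      # left multiplication by q, basis {i, j, k, 1}
    return np.array([[ w, -z,  y,  x],
                     [ z,  w, -x,  y],
                     [-y,  x,  w,  z],
                     [-x, -y, -z,  w]], dtype=float)

def Qstar_of(w, x, y, z):  # right multiplication by q*, same basis
    return np.array([[ w, -z,  y, -x],
                     [ z,  w, -x, -y],
                     [-y,  x,  w, -z],
                     [ x,  y,  z,  w]], dtype=float)

rng = np.random.default_rng(0)
q = rng.normal(size=4)
q /= np.linalg.norm(q)                    # random unit quaternion (w, x, y, z)
Q, Qs = Q_of(*q), Qstar_of(*q)

assert np.isclose(np.linalg.det(Q), 1.0)  # det(Q) = (w^2+x^2+y^2+z^2)^2 = 1
R = Q @ Qs
assert np.allclose(R, Qs @ Q)             # the two factors commute
assert np.isclose(R[3, 3], 1.0)           # lower-right entry is 1
assert np.allclose(R[3, :3], 0) and np.allclose(R[:3, 3], 0)
U = R[:3, :3]                             # upper-left 3x3 block ...
assert np.allclose(U @ U.T, np.eye(3))    # ... is orthogonal ...
assert np.isclose(np.linalg.det(U), 1.0)  # ... with determinant 1: a rotation
```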
Reverse engineering the connection between matrix and quaternion
The picture we looked at in the last paragraph gives away that the authors of this representation want to represent $3$-dimensional vectors as column vectors $[x,y,z,0]^\top$. It is also likely that they want to use the "obvious" ordered basis $\{i,j,k,1\}$ of the quaternions as the basis for these matrices.
Notice that if $w=1$ and $x=y=z=0$, $Q$ becomes the identity matrix. Thus $w$ probably represents the real part of the quaternion, since the quaternion $1$ represents the identity rotation.
If $w=y=z=0$ and $x=1$, we get another matrix. If you check how it acts on the coefficients of the ordered basis $\{i,j,k,1\}$, you'll find that it exactly matches left multiplication by $i\in\Bbb H$. This suggests $x$ is the coefficient for $i$ in $Q$.
Two identical analyses reveal that left multiplication by $j$ corresponds to $w=x=z=0$ and $y=1$, and that left multiplication by $k$ corresponds to $w=x=y=0$ and $z=1$.
Putting these things together, we have that a quaternion $w+xi+yj+zk$ with $w^2+x^2+y^2+z^2=1$ produces the matrix $Q$, which effects left multiplication by that quaternion, the mapping being
$$
q=w+xi+yj+zk\mapsto\begin{pmatrix}w & -z & y & x \\ z & w & -x & y \\ -y &x &w& z\\ -x& -y & -z& w\end{pmatrix},
$$
An identical analysis reveals that $Q^\ast$ effects right multiplication of $v$ by the conjugate of a quaternion. Explicitly, putting in a $1$ for $x$ and zeroes for $w,y,z$, the resulting map is right multiplication by $-i$. The mapping is given by (as you might guess)
$$
q=w+xi+yj+zk\mapsto\begin{pmatrix}w & -z & y & -x \\ z & w & -x & -y \\ -y &x &w& -z\\ x& y & z& w\end{pmatrix}
$$
Each matrix of this second type effects right multiplication by the conjugate $q^\ast$.
So you can see there are two mappings at work here, both of them from the unit quaternions into $M_4(\Bbb R)$. One mapping realizes left multiplication by quaternions, the other realizes right multiplication by conjugates of quaternions.
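To tie the two pictures together numerically, here is a short Python sketch (helper names are mine) checking that the quaternion sandwich $qvq^\ast$ and the matrix product $QQ^\ast v$ compute the same thing on a pure-imaginary $v$:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def Q_of(w, x, y, z):      # left multiplication by q
    return np.array([[ w, -z,  y,  x],
                     [ z,  w, -x,  y],
                     [-y,  x,  w,  z],
                     [-x, -y, -z,  w]])

def Qstar_of(w, x, y, z):  # right multiplication by q*
    return np.array([[ w, -z,  y, -x],
                     [ z,  w, -x, -y],
                     [-y,  x,  w, -z],
                     [ x,  y,  z,  w]])

rng = np.random.default_rng(1)
q = rng.normal(size=4)
w, x, y, z = q / np.linalg.norm(q)         # random unit quaternion
vx, vy, vz = rng.normal(size=3)            # a 3D vector, i.e. a pure quaternion

# Quaternion route: v -> q v q*
sandwich = qmul(qmul((w, x, y, z), (0.0, vx, vy, vz)), (w, -x, -y, -z))

# Matrix route: v -> Q Q* v, with v written as the column [x, y, z, 0]^T
matrix_route = Q_of(w, x, y, z) @ Qstar_of(w, x, y, z) @ np.array([vx, vy, vz, 0.0])

# Same result (note the (w,x,y,z) tuple vs the [x,y,z,w] column ordering):
sw, sx, sy, sz = sandwich
assert np.allclose([sx, sy, sz, sw], matrix_route)
assert abs(sw) < 1e-12                     # the result is again a pure quaternion
```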
Best Answer
Note that, by conjugating by $u$, it is equivalent to ask for an anti-automorphism of $A$ that leaves $L$ invariant and sends $u$ to $-u$. Writing $\theta := u^2 \in K$ (it lies in the center of $A$), this is very similar to the problem of constructing homomorphisms between algebraic field extensions:
What we are led to here is the following statement: if $A$ and $B$ both contain $L$ and contain elements $u \in A$, $v \in B$ such that conjugation by each induces the same automorphism $\sigma$ of $L$ and $u^2 = v^2 = \theta$, then there is an $L$-linear $K$-algebra isomorphism $A \to B$ sending $u$ to $v$.
Proof. In the commutative case above, one uses the universal property of the polynomial ring $K[X]$ (free commutative algebra) to write $A$ and $B$ as quotients of $K[X]$.
In the associative case, we proceed similarly: let $\sigma$ be the automorphism of $L$ induced by conjugation by $u$. Let $K \langle L, X \rangle$ be the free associative algebra on the elements of $L$ and one further variable $X$, and let $I$ be the ideal generated by all algebraic relations in $L$, together with $Xl - \sigma(l) X$ for all $l \in L$ and $X^2 - \theta$. Inspecting degrees, it is not hard to see that $K \langle L, X \rangle / I$ has dimension $4$, so that the natural maps $K \langle L, X \rangle / I \to A, B$, sending $X$ to $u$ and $v$ respectively, are $L$-linear $K$-algebra isomorphisms. $\square$
Note how every step in the proof has its commutative counterpart.
Back to the problem, we can apply this to $(A, u)$ and $(A^{\text{op}}, -u)$ to obtain the required anti-automorphism.
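As a concrete illustration (my own, not part of the argument above): in the Hamilton quaternions $A=\Bbb H$ with $L=\Bbb R\oplus\Bbb R i$ and $u=j$, ordinary quaternion conjugation $q\mapsto q^\ast$ is exactly such an anti-automorphism — it reverses products, maps $L$ into itself (acting there as $\sigma$, i.e. complex conjugation), and sends $u=j$ to $-u$, where $u^2=\theta=-1$ lies in the center. A quick Python check (helper names are mine):

```python
from random import randint, seed

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def conj(q):
    """Quaternion conjugation w + xi + yj + zk -> w - xi - yj - zk."""
    w, x, y, z = q
    return (w, -x, -y, -z)

seed(0)
for _ in range(100):
    p = tuple(randint(-9, 9) for _ in range(4))
    q = tuple(randint(-9, 9) for _ in range(4))
    # Anti-automorphism: conjugation reverses the order of multiplication.
    assert conj(qmul(p, q)) == qmul(conj(q), conj(p))

# It leaves L = R + R*i invariant, acting there as sigma: a + bi -> a - bi ...
assert conj((3, 5, 0, 0)) == (3, -5, 0, 0)
# ... and sends u = j to -u, with u^2 = theta = -1 central in H.
assert conj((0, 0, 1, 0)) == (0, 0, -1, 0)
assert qmul((0, 0, 1, 0), (0, 0, 1, 0)) == (-1, 0, 0, 0)
```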