Short answer
The reason the capital $Q$'s sit next to each other is that they are matrices acting in succession to transform $v$. The vector $v$ is also transformed twice in the sequence $v\mapsto qv\mapsto qvq^\ast$, but that transformation depends on the order of multiplication (since quaternions are noncommutative), whereas when you model linear transformations by matrices acting on the left, they simply stack up on the left. We will see that $Q$ represents $q$ and $Q^\ast$ represents $q^\ast$.
What you're seeing
You're looking at two different representations of the rotation: one as the rotation matrix $QQ^\ast$, and one as a quaternion $q$.
The first is for a column vector $v$ and the two matrices you defined: the rotation is $v\mapsto Q^\ast Qv$. (You might actually want to think of it as a composition of two steps: $v \mapsto Qv\mapsto Q^\ast Qv$. Incidentally, $Q$ will have to have $\det(Q)=1$ to represent a rotation.)
The second is for the vector $v$ interpreted as a quaternion with real part $0$: the rotation is $v\mapsto qvq^\ast$ where $q$ is a quaternion. (Again, you can view this as a two-step process: $v\mapsto qv\mapsto qvq^\ast$. It's also important to point out that $q$ will have to be a unit-length quaternion to represent a rotation, or else the map does not preserve distances.)
It's important to remember that $v\mapsto qv$ and $v\mapsto vq^\ast$ are just $\Bbb R$-linear transformations of $\Bbb H$. As such, you can fix a basis and find a matrix representing multiplication by $q$. By choosing the right basis, the matrix produced is $Q$, and in the same basis, the matrix produced for $q^\ast$ is $Q^\ast$.
So the two sequences of mappings you see in the above two cases are really representing the same process.
(Incidentally, what about the sequence of transformations $v\mapsto vq^\ast\mapsto qvq^\ast$? Well, if you check, you'll see that $QQ^\ast=Q^\ast Q$, so doing things in the order $v\mapsto QQ^\ast v$ still yields the same result as before :) )
Take a look at what $QQ^\ast$ looks like at Wolfram, remembering that $\det(Q)=(w^2+x^2+y^2+z^2)^2=1$. The upper left $3\times 3$ submatrix turns out to be a rotation matrix for $\Bbb R^3$, and the lower right entry is just $1$. Thus the matrix acts on the first three coordinates but leaves the last coordinate fixed. This is a clue that the first three basis vectors are where the $3$ spatial dimensions live.
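To see this concretely, here is a quick numerical sketch (Python with numpy assumed; the helper names are my own) that builds $Q$ and $Q^\ast$ for a sample unit quaternion, using the explicit matrices derived later in this answer, and multiplies them to exhibit the block structure:

```python
import numpy as np

def left_mult_matrix(w, x, y, z):
    """Q: left multiplication by q = w + xi + yj + zk in the ordered basis {i, j, k, 1}."""
    return np.array([[ w, -z,  y,  x],
                     [ z,  w, -x,  y],
                     [-y,  x,  w,  z],
                     [-x, -y, -z,  w]])

def right_conj_mult_matrix(w, x, y, z):
    """Q*: right multiplication by the conjugate q* = w - xi - yj - zk, same basis."""
    return np.array([[ w, -z,  y, -x],
                     [ z,  w, -x, -y],
                     [-y,  x,  w, -z],
                     [ x,  y,  z,  w]])

# A sample unit quaternion: rotation by 120 degrees about the axis (1,1,1)/sqrt(3),
# which permutes the coordinate axes x -> y -> z -> x.
w, x, y, z = 0.5, 0.5, 0.5, 0.5
M = left_mult_matrix(w, x, y, z) @ right_conj_mult_matrix(w, x, y, z)
R = M[:3, :3]   # upper-left 3x3 block: a rotation matrix of R^3

# The last row and column of M are (0, 0, 0, 1): the real axis is left fixed.
print(np.round(M, 10))
```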
Reverse engineering the connection between matrix and quaternion
The picture we looked at in the last paragraph gives away that the authors of this representation want to represent $3$ dimensional vectors as column vectors $[x,y,z,0]^\top$. It is also likely that they want to use the "obvious" basis of $\{i,j,k,1\}$ (in that order) of quaternions to be the basis for these matrices.
Notice that if $w=1$ and $x=y=z=0$, then $Q$ becomes the identity matrix. Thus $w$ probably represents the real part of the quaternion, since the quaternion $1$ represents the identity rotation.
If $w=y=z=0$ and $x=1$, we get another matrix. If you check how it acts on the coefficients of the ordered basis $\{i,j,k,1\}$, you'll find that it exactly matches left multiplication by $i\in\Bbb H$. This suggests $x$ is the coefficient for $i$ in $Q$.
Two identical analyses reveal that left multiplication by $j$ corresponds to $w=x=z=0$ and $y=1$, and that $k$ corresponds to $w=x=y=0$ and $z=1$.
Putting these things together, we have that a quaternion $w+xi+yj+zk$ with $w^2+x^2+y^2+z^2=1$ produces the matrix $Q$, which effects left multiplication by the quaternion, the mapping being
$$
q=w+xi+yj+zk\mapsto\begin{pmatrix}w & -z & y & x \\ z & w & -x & y \\ -y &x &w& z\\ -x& -y & -z& w\end{pmatrix},
$$
An identical analysis reveals that $Q^\ast$ effects right multiplication of $v$ by the conjugate of the quaternion. Explicitly, putting in a $1$ for $x$ and zeroes for $w,y,z$, the resulting map is right multiplication by $-i$. The mapping is given by (as you might guess)
$$
q=w+xi+yj+zk\mapsto\begin{pmatrix}w & -z & y & -x \\ z & w & -x & -y \\ -y &x &w& -z\\ x& y & z& w\end{pmatrix}
$$
Each matrix of this second family produces right multiplication by the conjugate $q^\ast$.
So you can see there are two mappings at work here, both of them from the unit quaternions into $M_4(\Bbb R)$. One mapping realizes left multiplication by quaternions, the other realizes right multiplication by conjugates of quaternions.
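As a sanity check on both mappings, here is a short numpy sketch (the Hamilton-product helper and names are mine) verifying that $Q$ really implements $v\mapsto qv$ and $Q^\ast$ really implements $v\mapsto vq^\ast$ in the basis $\{i,j,k,1\}$:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product; quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def as_column(q):
    """Coefficients of q in the ordered basis {i, j, k, 1}."""
    w, x, y, z = q
    return np.array([x, y, z, w])

rng = np.random.default_rng(0)
q = rng.normal(size=4); q /= np.linalg.norm(q)   # a random unit quaternion
v = rng.normal(size=4)                            # an arbitrary quaternion
w, x, y, z = q
q_conj = np.array([w, -x, -y, -z])

Q = np.array([[ w, -z,  y,  x],
              [ z,  w, -x,  y],
              [-y,  x,  w,  z],
              [-x, -y, -z,  w]])
Qstar = np.array([[ w, -z,  y, -x],
                  [ z,  w, -x, -y],
                  [-y,  x,  w, -z],
                  [ x,  y,  z,  w]])

# Q realizes left multiplication by q; Q* realizes right multiplication by q*.
assert np.allclose(Q @ as_column(v), as_column(quat_mul(q, v)))
assert np.allclose(Qstar @ as_column(v), as_column(quat_mul(v, q_conj)))
```

The same data also confirms the commuting claim from earlier: `Q @ Qstar` and `Qstar @ Q` agree, because left and right multiplications always commute with each other.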
First: note that we are dealing only with the unit quaternions as a representation of attitude. The full set of quaternions doesn't really have a role here. I should also note up front that the quaternion itself has a rate ($\dot{q}$), but, like the Euler angle rates, the quaternion rate is not the actual angular velocity, which is a 3-vector. They are nonetheless (as with the Euler angle rates) related.
Since the unit quaternions are isomorphic to $SU_2$, and $SU_2$ double covers $SO_3$ with kernel $\{\pm 1\}$, each attitude matrix (aka direction cosine matrix, aka element of $SO_3$ in its matrix representation) can be associated with exactly two quaternions, which differ only by a sign, and each quaternion can be associated with a unique attitude matrix. Thus, without loss of generality, I may write
$$A = A(q) = A(-q).$$
You are asking about what happens when the attitude matrix is a function of time, e.g. $A = A(t)$. Let's take some arbitrary vector function $r(t)$ and look at it rotated, e.g.
$$b(t) = A(t)r(t).$$
By the product rule,
$$\frac{db}{dt} = \frac{dA}{dt}r + A\frac{dr}{dt}.$$
Since for any finite rotation rate only an infinitesimal rotation can occur in an infinitesimal amount of time, near the identity the attitude matrix is, to first order in the vector components,
$$
A(t) \approx \begin{bmatrix}
1 & -2q_3(t) & 2q_2(t)\\
2q_3(t) & 1 & -2q_1(t)\\
-2q_2(t) & 2q_1(t) & 1\\
\end{bmatrix}, \qquad \frac{dA}{dt} = \Omega\times,
$$
the cross product matrix, where $\Omega$ is the angular velocity and the $q_i$ are the quaternion components, assuming we call the coefficient of $1$ $q_0$ (at the identity, $\Omega = 2(\dot q_1,\dot q_2,\dot q_3)$, which is where the factor of $\frac12$ below comes from). You should now recognize the product rule above as the formula for computing the derivative in a rotating reference frame.
Now do the same calculation for $b(t) = q(t)r(t)q^*(t)$ and find the same result! Why? Suppose we send $\mathbb{R}^3\rightarrow\mathbb{R}^3$ by computing $qiq^*$, $qjq^*$, and $qkq^*$. This is easily found to be $A(q)$--indeed, this is how the parameterization of $SO_3$ using the unit quaternions is typically derived. Thus all we have done is computed the "rotational derivative" using two different (but equivalent) formalisms for the rotation.
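If you want to check this parameterization numerically, here is a sketch (numpy assumed; `rotation_matrix` is my own helper, not a library call) that builds $A(q)$ column by column from $qiq^\ast$, $qjq^\ast$, $qkq^\ast$ and compares it against the familiar rotation matrix for a spin about the $z$-axis:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product; quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotation_matrix(q):
    """A(q): columns are the vector parts of q i q*, q j q*, q k q*."""
    w, x, y, z = q
    conj = np.array([w, -x, -y, -z])
    basis = (np.array([0., 1, 0, 0]), np.array([0., 0, 1, 0]), np.array([0., 0, 0, 1]))
    # The real part of q e q* is always 0, so we keep only components 1..3.
    cols = [quat_mul(quat_mul(q, e), conj)[1:] for e in basis]
    return np.column_stack(cols)

theta = 0.7
q = np.array([np.cos(theta/2), 0, 0, np.sin(theta/2)])  # rotation by theta about z
A = rotation_matrix(q)
expected = np.array([[np.cos(theta), -np.sin(theta), 0],
                     [np.sin(theta),  np.cos(theta), 0],
                     [0,              0,             1]])
assert np.allclose(A, expected)
assert np.allclose(A, rotation_matrix(-q))   # q and -q give the same attitude
```

The last assertion illustrates the double cover mentioned above: $A(q) = A(-q)$.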
Anyway, from here it's not hard to show that
$$\frac{dq}{dt} = \frac{1}{2}\begin{bmatrix}
0 & -\Omega_x &-\Omega_y & -\Omega_z\\
\Omega_x & 0 & \Omega_z & -\Omega_y\\
\Omega_y & -\Omega_z & 0 & \Omega_x\\
\Omega_z & \Omega_y & -\Omega_x & 0\\
\end{bmatrix}q = [\Omega]q.$$
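One immediate payoff of $[\Omega]$ being skew-symmetric (skew-symmetry is exactly what keeps $|q|=1$): for a pure vector $\Omega$ we have $[\Omega]^2 = -(|\Omega|/2)^2 I$, so the matrix exponential has a simple closed form and no general-purpose `expm` routine is needed. A numpy sketch, assuming the body-frame convention $\dot q = \tfrac12\,q\,\omega$ with $q$ ordered scalar-first (all names mine):

```python
import numpy as np

def omega_matrix(wx, wy, wz):
    """The skew-symmetric matrix [Omega] (the 1/2 is folded in),
    for q ordered as (q0, q1, q2, q3) with q0 the scalar part."""
    return 0.5 * np.array([[  0, -wx, -wy, -wz],
                           [ wx,   0,  wz, -wy],
                           [ wy, -wz,   0,  wx],
                           [ wz,  wy, -wx,   0]])

def propagate(q0, omega, t):
    """q(t) = exp([Omega] t) q(0).  Since [Omega]^2 = -(|omega|/2)^2 I,
    exp([Omega] t) = cos(|omega| t / 2) I + (2/|omega|) sin(|omega| t / 2) [Omega]."""
    W = omega_matrix(*omega)
    n = np.linalg.norm(omega)
    if n == 0:
        return q0.copy()
    return np.cos(n*t/2) * q0 + (2/n) * np.sin(n*t/2) * (W @ q0)

q0 = np.array([1.0, 0, 0, 0])            # identity attitude
omega = np.array([0.0, 0.0, 1.3])        # constant spin about z, rad/s
t = 2.0
q = propagate(q0, omega, t)
# A spin about z should give q(t) = (cos(|omega| t / 2), 0, 0, sin(|omega| t / 2)),
# and the propagation should preserve unit length exactly.
assert np.allclose(q, [np.cos(1.3*t/2), 0, 0, np.sin(1.3*t/2)])
assert abs(np.linalg.norm(q) - 1) < 1e-12
```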
If your angular velocity is constant, the solution is simply
$$ q(t) = e^{[\Omega](t-t_0)}q(t_0),$$
but generally this is a time-dependent system in which the angular velocity on the body is itself obtained by solving the Euler equations
$$
I\dot{\Omega} +\Omega\times I\Omega = \sum_iM_i,
$$
where $I$ is the inertia tensor and the $M_i$ are the moments (torques) on the rigid body. If, for instance, you have ever wondered how NASA keeps their spacecraft pointed in the right direction: it is by figuring out which quaternion corresponds to the orientation they want and firing small thrusters on the side (typically burning monopropellant hydrazine), spinning reaction/momentum wheels, etc., to get the right torques into the Euler equations and ultimately produce the right quaternion from the system above. The whole thing is done closed-loop using feedback control.
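For the torque-free case ($\sum_i M_i = 0$) in principal axes, integrating the Euler equations takes only a few lines. A minimal numpy sketch with a classical RK4 step (all names mine); rotational kinetic energy and the magnitude of the angular momentum are conserved for a torque-free body, which makes a handy correctness check:

```python
import numpy as np

I = np.array([3.0, 2.0, 1.0])          # principal moments of inertia (body axes)

def omega_dot(omega):
    """Torque-free Euler equations in principal axes: I dW/dt = -W x (I W)."""
    return -np.cross(omega, I * omega) / I

def rk4_step(omega, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = omega_dot(omega)
    k2 = omega_dot(omega + 0.5*dt*k1)
    k3 = omega_dot(omega + 0.5*dt*k2)
    k4 = omega_dot(omega + dt*k3)
    return omega + dt*(k1 + 2*k2 + 2*k3 + k4)/6

omega = np.array([0.1, 2.0, 0.1])      # mostly about the (unstable) intermediate axis
E0 = 0.5 * np.dot(omega, I * omega)    # rotational kinetic energy
L0 = np.linalg.norm(I * omega)         # angular momentum magnitude

for _ in range(2000):                  # integrate 2 seconds at dt = 1 ms
    omega = rk4_step(omega, 1e-3)

E = 0.5 * np.dot(omega, I * omega)
L = np.linalg.norm(I * omega)
assert abs(E - E0) < 1e-9 and abs(L - L0) < 1e-9   # both conserved when torque-free
```

In a real attitude simulation you would step this together with $\dot q = [\Omega]q$, feeding each new $\Omega$ into the quaternion kinematics.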
Best Answer
Yes, because the conversion map from matrices to quaternions has to be a homomorphism.
In words: the quaternion corresponding to the product of the matrices equals the product of the corresponding quaternions.
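A quick numerical illustration of the homomorphism property, going the other way (quaternions to matrices), assuming numpy and the standard unit-quaternion-to-rotation-matrix formula:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product; quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotation_matrix(q):
    """Standard rotation matrix of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z),     2*(x*y - w*z),     2*(x*z + w*y)],
        [    2*(x*y + w*z), 1 - 2*(x*x + z*z),     2*(y*z - w*x)],
        [    2*(x*z - w*y),     2*(y*z + w*x), 1 - 2*(x*x + y*y)]])

rng = np.random.default_rng(1)
p = rng.normal(size=4); p /= np.linalg.norm(p)
q = rng.normal(size=4); q /= np.linalg.norm(q)

# Homomorphism: the matrix of the product is the product of the matrices.
assert np.allclose(rotation_matrix(quat_mul(p, q)),
                   rotation_matrix(p) @ rotation_matrix(q))
```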