If you know the branch vector $\vec b = (x_0, y_0, z_0)$, then any vector perpendicular to it, say $\vec p = (x_1, y_1, z_1)$, will have dot product $0$ with it.
So,
$$
\vec b \cdot \vec p = x_0x_1 + y_0y_1 + z_0z_1 = 0
$$
(the dot product is zero because the vectors are perpendicular). Pick any $x_1, y_1$ you prefer, and set $z_1$ as
$$z_1 = - \frac{x_0 x_1 + y_0 y_1}{z_0}$$
That seems to be the simplest approach to get "random" branches.
If you want to control the length of the vector, then you can always normalize your obtained vector $\vec p$ and get the result vector $\vec r$ by using
$$
\vec r = len \times\frac{\vec p}{\left | \vec p\right| }
$$
where $len$ is the desired length of the vector.
If $z_0$ is $0$, the division by $z_0$ above breaks down. However, since $z_0$ is $0$, the value of $z_1$ does not impact the dot product at all, and therefore we can pick any $z_1$ we want. Instead, pick some $x_1$ and solve for $y_1$, allowing $z_1$ to be "free" (that is, any $z_1 \in \mathbb{R}$ works). The same problem recurs, however, if $y_0$ is $0$ as well.
If both $z_0 = y_0 = 0$, then you're forced to set $x_1 = 0$ (to keep the dot product at $0$ for perpendicularity), while $y_1, z_1 \in \mathbb{R}$ remain free.
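The recipe above, including the $z_0 = 0$ and $z_0 = y_0 = 0$ special cases and the length normalization, can be sketched in Python. This is a minimal sketch; the function name and the free choices $x_1 = y_1 = 1$ are mine, not part of the answer:

```python
import math

def perpendicular(b, length=1.0):
    """Return a vector of the given length perpendicular to b = (x0, y0, z0).

    Follows the recipe above: pick x1, y1 freely and solve for z1, falling
    back to the other coordinates when z0 (and possibly y0) is zero.
    The function name and the free choices x1 = y1 = 1 are assumptions.
    """
    x0, y0, z0 = b
    if z0 != 0:
        # general case: pick x1 = y1 = 1, solve z1 = -(x0*x1 + y0*y1)/z0
        p = (1.0, 1.0, -(x0 + y0) / z0)
    elif y0 != 0:
        # z0 = 0: z1 is free; pick x1 = 1 and z1 = 1, solve for y1
        p = (1.0, -x0 / y0, 1.0)
    else:
        # z0 = y0 = 0: the dot product forces x1 = 0; y1 and z1 are free
        p = (0.0, 1.0, 1.0)
    # normalize and scale to the desired length: r = len * p / |p|
    norm = math.sqrt(sum(c * c for c in p))
    return tuple(length * c / norm for c in p)
```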
You are right that this can be viewed as a composition of (4D) rotations and, in fact, this particular example exhibits a very interesting phenomenon in 4D Euclidean geometry. Namely, if we start with an arbitrary point $q$ in $\mathbb{H}$,
$$q = a + bi + cj + dk$$
then
$$kq = ak + bki + ckj + dkk$$
(the scalars commute so this is okay) which becomes through the usual formulas,
$$\begin{align}
kq &= ak + bj - ci - d\\
&= -d - ci + bj + ak\end{align}$$
This is a rotation in 4D space as can be seen by noting that its matrix form is
$$k \equiv \mathbf{K} := \begin{bmatrix} 0 & 0 & 0 & -1 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}$$
which is easily seen to be orthogonal, i.e. $\mathbf{K}^T \mathbf{K} = \mathbf{I}$. Moreover, the expansion by cofactors and minors (which is very easy) shows $\mathrm{det}(\mathbf{K}) = 1$, so it is indeed a proper rotation (i.e. is not a rotoreflection).
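These properties are easy to verify numerically; here is a quick sanity check of my own (not part of the original answer), using numpy:

```python
import numpy as np

# K from above, acting on quaternion coordinates (a, b, c, d)
K = np.array([
    [0, 0, 0, -1],
    [0, 0, -1, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
], dtype=float)

# orthogonality: K^T K = I
assert np.allclose(K.T @ K, np.eye(4))

# proper rotation (not a rotoreflection): det(K) = 1
assert np.isclose(np.linalg.det(K), 1.0)

# K agrees with left multiplication by k: (a, b, c, d) -> (-d, -c, b, a)
q = np.array([1.0, 2.0, 3.0, 4.0])  # q = 1 + 2i + 3j + 4k
assert np.allclose(K @ q, [-4.0, -3.0, 2.0, 1.0])
```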
Knowing that it's a rotation, the natural question is: what is it a rotation about? In 4D, we don't rotate around axes that are lines, but rather we rotate around planes, because a line gives "too much freedom" for a single angle of rotation to capture, in the same way that we don't "rotate around a point" in 3D with a single angle of rotation. (This may be hard to visualize, but that's 4D for 'ya.)
So which plane is it? Staring at the coordinate swap, two invariant planes jump out: the $1k$-plane and the $ij$-plane are each carried into themselves. But there are more. Note first that $\mathbf{K}^2 = -\mathbf{I}$, so no nonzero point is fixed outright; still, $\mathbf{K}$ also carries the plane spanned by $1 + i$ and $j + k$ into itself, i.e. the quaternions of the form
$$a + ai + bj + bk$$
are sent to $-b - bi + aj + ak$, which has the same form, under the transformation $\mathbf{K}$. Now you might think that a bit strange, given that this is not a simple coordinate plane - and it is, indeed, an omen. In fact, if you see something like this happening in maths, chances are something very strange is going on - it's no guarantee, but your spidey sense should tingle at it and you should feel disturbed. In particular, we can decompose this a bit further. Taking another look we can see that it is actually the composition of two rotations: one of which does
$$(a + bi + cj + dk) \mapsto (-d + bi + cj + ak)$$
which fixes the $ij$-plane pointwise and is a 90-degree rotation, followed by
$$(a + bi + cj + dk) \mapsto (a - ci + bj + dk)$$
which fixes the $1k$-plane pointwise. This is also a 90-degree rotation. These two planes are orthogonal, and when a rotation in 4D space is the composition of two rotations by the same angle through two orthogonal planes, you get what is called an isoclinic or Clifford rotation, which in fact maps infinitely many planes to themselves, one other of which we just found. This is a very strange phenomenon with no simple analogue in 3D: a rotation in 3D fixes only one line, its axis, unless it's a degenerate rotation by zero degrees.
Likewise, the rotations by action of $j$ and $i$ are similar Clifford rotations, and together they act to bring $1$ to $-1$ via $-1 = i(j(k(1))) = ijk$. If you want, you can think of this approximately as being like a hyperdimensional version of a Rubik's cube, where you twist the sides through several different positions to move one colored patch from one side to another (not exactly, though, since here the whole "cube" is rotating - but it's an approximate analogy, and the gist is the chained transformations).
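These chained quaternion products are easy to check with a small helper. A sketch of my own in Python (the `qmul` name is made up; it implements the standard Hamilton product on $(a, b, c, d)$ tuples):

```python
def qmul(p, q):
    """Hamilton product of quaternions represented as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,  # real part
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,  # i part
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,  # j part
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,  # k part
    )

one = (1, 0, 0, 0)
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

# -1 = i(j(k(1))) = ijk: the three Clifford rotations chain 1 to -1
assert qmul(i, qmul(j, qmul(k, one))) == (-1, 0, 0, 0)

# left multiplication by k squares to -1, i.e. K^2 = -I: no nonzero point is fixed
q = (1, 2, 3, 4)
assert qmul(k, qmul(k, q)) == (-1, -2, -3, -4)
```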
Best Answer
Orthogonal component method:
$\vec a$ rotates about $\vec b$ by $\theta$ rad according to the right-hand rule: your thumb points along $\vec b$, and the curl of your fingers gives the direction of the rotation. This method involves finding $\vec a_{\perp \vec b}$, the component of $\vec a$ orthogonal to $\vec b$, and rotating it by $\theta$ in the plane with normal $\vec b$.
$\vec a$ can be decomposed into two components: $$\vec a = \vec a_{\parallel \vec b} + \vec a_{\perp \vec b}$$
$\vec a_{\parallel \vec b}$ is the component of $\vec a$ in the direction of $\vec b$: $$\vec a_{\parallel \vec b} = \Big(\dfrac{\vec a\cdot \vec b}{\vec b\cdot \vec b} \Big)\vec b$$ $\vec a_{\perp \vec b}$ is the component of $\vec a$ in the direction orthogonal to $\vec b$:
\begin{align*} \vec a_{\perp \vec b} &= \vec a - \vec a_{\parallel \vec b} \\ &= \vec a - \Big(\dfrac{\vec a\cdot \vec b}{\vec b\cdot \vec b} \Big) \vec b \end{align*}
Our next step is to determine $\vec w = \vec b \times \vec a_{\perp \vec b}$. This vector is orthogonal to both $\vec a_{\perp \vec b}$ and $\vec b$.
Then we need to find a linear combination of $\vec a_{\perp \vec b}$ and $\vec w$ representing a rotation of $\vec a_{\perp \vec b}$ by $\theta$:
$$\vec a_{\perp \vec b, \theta} = ||\vec a_{\perp \vec b}||(x_1 \vec a_{\perp \vec b} + x_2 \vec w)$$
where $$ x_1 = \dfrac{\cos(\theta)}{||\vec a_{\perp \vec b}||} $$ and $$x_2 = \dfrac{\sin(\theta)}{||\vec w||}$$
Finally, we can assemble the vector representing the rotation of $\vec a$ around $\vec b$ by $\theta$ rad:
$$\vec a_{b,\theta} = \vec a_{\perp \vec b, \theta} + \vec a_{\parallel \vec b}$$
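The whole construction above can be collected into a short Python sketch. The function name and the plain-tuple representation are my own choices, not part of the answer:

```python
import math

def rotate(a, b, theta):
    """Rotate vector a about axis b by theta radians (right-hand rule),
    following the orthogonal-component construction above.
    """
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    # a_par: component of a along b
    s = dot(a, b) / dot(b, b)
    a_par = tuple(s * bi for bi in b)
    # a_perp: component of a orthogonal to b
    a_perp = tuple(ai - pi for ai, pi in zip(a, a_par))
    # w = b x a_perp, orthogonal to both b and a_perp
    w = (b[1] * a_perp[2] - b[2] * a_perp[1],
         b[2] * a_perp[0] - b[0] * a_perp[2],
         b[0] * a_perp[1] - b[1] * a_perp[0])
    norm = lambda u: math.sqrt(dot(u, u))
    n_perp, n_w = norm(a_perp), norm(w)
    if n_perp == 0:
        return a  # a is parallel to b: the rotation leaves it unchanged
    x1 = math.cos(theta) / n_perp
    x2 = math.sin(theta) / n_w
    # rotated orthogonal component: ||a_perp|| * (x1 * a_perp + x2 * w)
    a_perp_rot = tuple(n_perp * (x1 * pi + x2 * wi) for pi, wi in zip(a_perp, w))
    # add back the unchanged parallel component
    return tuple(pi + qi for pi, qi in zip(a_perp_rot, a_par))
```

For example, rotating $(1, 0, 0)$ about the $z$-axis by $\pi/2$ should land on $(0, 1, 0)$, which matches the right-hand-rule direction.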
*NOTE:*
1) As a preliminary sanity check, set $\theta = \pi/2$ or $\theta = 0$ and look at what $\sin(\theta)$ and $\cos(\theta)$ do in the equation for $\vec a_{\perp \vec b, \theta}$.
2) If you need further demonstration that the last equation is the vector we are looking for, just ask.
3) The method described above is an adaptation of Rodrigues' rotation formula.
Bibliography: "Linear Algebra with Applications" by Steven J. Leon; https://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula