It's quite similar to 2D space. Consider the vector $v = B-A$, which you can think of as the direction from $A$ to $B$. Then any point $C$ between $A$ and $B$ (inclusive of $A$ and $B$) can be written as
$$C = A + tv$$
where $t = 0$ gives $C = A$, $t = 1$ gives $C = B$, and $t \in (0,1)$ gives all the points in between (one of which is the desired $C$). As you can see, this representation is independent of the dimension of your space.
So, what's the value of $t$? Well, let the known distance from $A$ to $C$ be $d_{AC}$. Now, the distance between $A$ and $B$, call it $d_{AB}$, is the magnitude of $v$, written $|v|$, which is nothing but
$$d_{AB} = |v| = \sqrt{(a_x-b_x)^2+(a_y-b_y)^2+(a_z-b_z)^2}$$
(You can see that this is the usual formula for the Euclidean distance between two points; the 2D version is the same, just without the $z$ term.)
Therefore, $t = \large \frac{d_{AC}}{d_{AB}}$ and substituting $t$ and $v$ in the previous formula for $C$, we have:
$$C = A + \frac{d_{AC}}{\sqrt{(a_x-b_x)^2+(a_y-b_y)^2+(a_z-b_z)^2}}(B-A)$$
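The whole computation is a couple of lines of code. Here is a sketch in Python (the function name is mine, for illustration), with points as plain coordinate lists:

```python
import math

def point_at_distance(A, B, d_ac):
    """Return the point C = A + t*v on the segment AB, where t = d_AC / d_AB."""
    v = [b - a for a, b in zip(A, B)]           # v = B - A
    d_ab = math.sqrt(sum(c * c for c in v))     # |v|, the distance from A to B
    t = d_ac / d_ab
    return [a + t * c for a, c in zip(A, v)]    # C = A + t*v

# Example: the point 5 units from A = (1, 2, 3) toward B = (4, 6, 3).
# Here d_AB = 5, so t = 1 and C is B itself.
C = point_at_distance([1, 2, 3], [4, 6, 3], 5)
```

Because it only uses coordinate-wise operations, the same function works unchanged in 2D or any higher dimension.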
In 3 dimensions, there are infinitely many linear transformations that map one plane onto another. We want to find one such transformation, defined by a matrix $R$, with the following properties:
$$R\mathbf{a}_x=\mathbf{b}_x\ ,\ R\mathbf{a}_y=\mathbf{b}_y$$
We can choose a third vector arbitrarily so that each set is a complete, linearly independent basis:
$$R\mathbf{a}_z=\mathbf{b}_z$$
where $\mathbf{a}_z$ and $\mathbf{b}_z$ are not contained within their respective planes. Let $A$ be the matrix whose columns are $\mathbf{a}_x$, $\mathbf{a}_y$, and $\mathbf{a}_z$, and likewise for $B$. Then we have:
$$RA=B$$
Right-multiplying by the inverse of $A$ gives:
$$R=BA^{-1}$$
We can check to see if this gives the desired result:
$$(x,y,z)=x''\mathbf{a}_x+y''\mathbf{a}_y$$
$$(x',y',z')=R(x,y,z)=Rx''\mathbf{a}_x+Ry''\mathbf{a}_y$$
$$=x''R\mathbf{a}_x+y''R\mathbf{a}_y$$
$$=x''\mathbf{b}_x+y''\mathbf{b}_y$$
The choice of $\mathbf{a}_z$ and $\mathbf{b}_z$ does not affect what happens to points on the plane. It only determines how the "off-plane" component is transformed.
If $R$ is to be a rotation matrix, $A$ and $B$ need to be orthogonal matrices, or some scalar multiple thereof. The conditions $||\mathbf{a}_x||=||\mathbf{b}_x||\ ,\ ||\mathbf{a}_y||=||\mathbf{b}_y||$, and $\mathbf{a}_x\cdot\mathbf{a}_y=\mathbf{b}_x\cdot\mathbf{b}_y$ must be met in order for this to be the case. Choosing $\mathbf{a}_z=\mathbf{a}_x\times\mathbf{a}_y$ and $\mathbf{b}_z=\mathbf{b}_x\times\mathbf{b}_y$ will then give the rotation matrix.
To see why this is, we need to use the property that the dot product is invariant under rotation. That is, for any vectors $\mathbf{u}$ and $\mathbf{v}$, $\mathbf{u}\cdot\mathbf{v}=(R\mathbf{u})\cdot(R\mathbf{v})$. From this we get 3 equalities:
$$\mathbf{a}_x\cdot\mathbf{a}_y=(R\mathbf{a}_x)\cdot(R\mathbf{a}_y)=\mathbf{b}_x\cdot\mathbf{b}_y$$
$$||\mathbf{a}_x||^2=\mathbf{a}_x\cdot\mathbf{a}_x=(R\mathbf{a}_x)\cdot(R\mathbf{a}_x)=\mathbf{b}_x\cdot\mathbf{b}_x=||\mathbf{b}_x||^2$$
$$||\mathbf{a}_y||^2=\mathbf{a}_y\cdot\mathbf{a}_y=(R\mathbf{a}_y)\cdot(R\mathbf{a}_y)=\mathbf{b}_y\cdot\mathbf{b}_y=||\mathbf{b}_y||^2$$
When choosing $\mathbf{a}_z$ and $\mathbf{b}_z$, we will have three more sets of dot products that need to be equal. It can be shown that $||\mathbf{a}_x\times\mathbf{a}_y||=||\mathbf{b}_x\times\mathbf{b}_y||$, and all of the other dot products are conveniently $0$.
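Concretely, here is a minimal numeric sketch of this construction in Python (the helper functions and the example planes are mine, not part of the answer): pick orthonormal spanning vectors for each plane, complete them with $\mathbf{a}_z = \mathbf{a}_x\times\mathbf{a}_y$ and $\mathbf{b}_z = \mathbf{b}_x\times\mathbf{b}_y$, and form $R = BA^{-1} = BA^{T}$.

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def columns(*vecs):
    """3x3 matrix (list of rows) whose columns are the given 3-vectors."""
    return [[v[i] for v in vecs] for i in range(3)]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(M, N):
    """Product of two 3x3 matrices."""
    return [[sum(M[i][k] * N[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(M, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# Orthonormal spanning vectors of the two planes (unit length and
# perpendicular, so the dot-product conditions above are satisfied).
ax, ay = [0, 0, 1], [1, 0, 0]     # spans the xz... the plane through z and x
bx, by = [0, 1, 0], [0, 0, 1]     # spans the yz-plane

az = cross(ax, ay)                # a_z = a_x x a_y
bz = cross(bx, by)                # b_z = b_x x b_y

A = columns(ax, ay, az)
B = columns(bx, by, bz)
R = matmul(B, transpose(A))       # R = B A^{-1}; A is orthogonal, so A^{-1} = A^T
```

Now `apply(R, ax)` returns `bx`, `apply(R, ay)` returns `by`, and `apply(R, az)` returns `bz`, as required.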
Hope this helps.
As David K mentions, there are infinitely many rotations that map $\bf a$ to $\bf b$: Given any such rotation $R$, for a rotation $S$ about $\bf b$ by an arbitrary angle, $S \circ R$ is also a rotation that maps $\bf a$ to $\bf b$, and so there is a circle's worth of such rotations.
If the vectors $\bf a$ and $\bf b$ are linearly independent, however, then we can pick out a preferred rotation, namely the one that fixes ${\bf a} \times {\bf b}$ (equivalently, the unique rotation that preserves the plane spanned by $\bf a$ and $\bf b$ as well as the orientation of that plane).
Here's one way to construct the matrix corresponding to this preferred rotation: First, notice that we may as well normalize $\bf a$ and $\bf b$, that is, replace them respectively by the unit vectors $\frac{\bf a}{|{\bf a}|}$ and $\frac{\bf b}{|{\bf b}|}$.
The vector $${\bf n} := \frac{{\bf a} \times {\bf b}}{|{\bf a} \times {\bf b}|}$$ has unit length and is orthogonal to both $\bf a$ and $\bf b$, and hence ${\bf a}$ and ${\bf b}$ together determine an oriented, orthonormal basis of $\Bbb R^3$, namely, $$({\bf a}, {\bf n} \times {\bf a}, {\bf n}).$$ In particular, the matrix $$\begin{pmatrix}{\bf a} & {\bf n} \times {\bf a} & {\bf n}\end{pmatrix}$$ defines a rotation, namely the one that sends the standard basis $({\bf e}_1, {\bf e}_2, {\bf e}_3)$ to the above basis, and its inverse does the reverse.
Now, by symmetry, $$({\bf b}, {\bf n} \times {\bf b}, {\bf n})$$ is also an oriented, orthonormal basis, and the corresponding matrix built by adjoining these (column) vectors sends the standard basis to this one.
Putting this together with the inverse mentioned above maps $({\bf a}, {\bf n} \times {\bf a}, {\bf n})$ to the standard basis $({\bf e}_1, {\bf e}_2, {\bf e}_3)$ and then the standard basis to $({\bf b}, {\bf n} \times {\bf b}, {\bf n})$, that is, by construction $$\begin{pmatrix}{\bf b} & {\bf n} \times {\bf b} & {\bf n}\end{pmatrix}\begin{pmatrix}{\bf a} & {\bf n} \times {\bf a} & {\bf n}\end{pmatrix}^{-1}$$ is a rotation matrix that maps $\bf a$ to $\bf b$ and fixes $\bf n$ (and hence ${\bf a} \times {\bf b}$). (Since the inverted matrix $\begin{pmatrix}{\bf a} & {\bf n} \times {\bf a} & {\bf n}\end{pmatrix}$ is orthogonal, its inverse is just its transpose, saving considerable computation.)
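Here is a sketch of this construction in Python (the helper names are mine): build the two frame matrices $\begin{pmatrix}{\bf a} & {\bf n}\times{\bf a} & {\bf n}\end{pmatrix}$ and $\begin{pmatrix}{\bf b} & {\bf n}\times{\bf b} & {\bf n}\end{pmatrix}$ and multiply the second by the transpose of the first.

```python
import math

def cross(u, v):
    """Cross product of two 3-vectors."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def apply(M, v):
    """Apply a 3x3 matrix (list of rows) to a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def rotation_a_to_b(a, b):
    """The preferred rotation taking unit(a) to unit(b) and fixing a x b.

    Assumes a and b are linearly independent (so a x b is nonzero)."""
    a, b = normalize(a), normalize(b)
    n = normalize(cross(a, b))
    Fa = [a, cross(n, a), n]   # the columns (a, n x a, n), stored as rows
    Fb = [b, cross(n, b), n]   # the columns (b, n x b, n), stored as rows
    # R = [b | n x b | n] [a | n x a | n]^T, i.e.
    # R[i][j] = sum_k Fb[k][i] * Fa[k][j].
    return [[sum(Fb[k][i] * Fa[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Example: the rotation taking the x-axis to the y-axis (a quarter turn about z)
R = rotation_a_to_b([1, 0, 0], [0, 1, 0])
```

Here `apply(R, [1, 0, 0])` gives `[0, 1, 0]` and `apply(R, [0, 0, 1])` gives `[0, 0, 1]` (up to rounding), confirming that $\bf a$ is sent to $\bf b$ and the axis $\bf n$ is fixed.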
One can of course expand this expression to find explicit formulas for the entries of the resulting matrix in terms of the components of $\bf a$ and $\bf b$, but I doubt that this would simplify nicely.