Let me give it a shot:
If I interpret this correctly, $\mathbf{F}$ will be the operator for the total angular momentum of the coupled system, $\mathbf{S}$ will be the electron spin operator (usually, one would consider $\mathbf{J}$, the total electronic angular momentum including the orbital part, but we are in an S-state, hence there is no orbital angular momentum) and $\mathbf{I}$ will be the nuclear spin. Then it should hold that $\mathbf{F}=\mathbf{S}+\mathbf{I}$, right?
First, let's have a look at the hyperfine-structure Hamiltonian $\mathbf{H}_{hf}$. By construction of $\mathbf{F}$, the eigenstates of $\mathbf{H}_{hf}$ will be eigenstates of $F^2$ and $F_z$. This is just the same as for orbital angular momentum and electron spin (and we construct $\mathbf{F}$ to have this property - it lets us label the eigenstates by the quantum numbers corresponding to $\mathbf{F}$). Hence the Hamiltonian must be diagonal in the $|F,m_F\rangle$-basis. One can also see that $F^2$ commutes with $I^2$ and $S^2$ (and so does $F_z$), since $\mathbf{F}=\mathbf{I}+\mathbf{S}$.
Now we have a look at $\mathbf{H}_B$, the interaction Hamiltonian with a constant magnetic field. We can see that (up to some prefactor) $\mathbf{H}_B=S_z$. Hence the eigenstates of $\mathbf{H}_B$ must be eigenstates of $S_z$ and thus also of $S^2$; and since the electron and nuclear spin operators are independent (they act on two different spins, so they commute), we can take them to be eigenstates of $I^2$ and $I_z$ as well.
The crucial problem is that $S_z$ and $F^2$ do not commute. Why?
Well: $\mathbf{F}=\mathbf{I}+\mathbf{S}$, hence $F^2=S^2+I^2+2\mathbf{S}\cdot \mathbf{I}$. Now $S_z$ does not commute with $\mathbf{S}\cdot\mathbf{I}$: for instance, $[S_z,S_xI_x]=[S_z,S_x]I_x=i\hbar S_yI_x\neq 0$. Since $F^2$ commutes with $\mathbf{H}_{hf}$ and $S_z$ commutes with $\mathbf{H}_B$, but not with $F^2$, we conclude that $\mathbf{H}_{hf}$ does not commute with $\mathbf{H}_B$. This means that $\mathbf{H}_B$ and $\mathbf{H}_{hf}$ cannot be diagonal in the same basis, hence you need off-diagonal elements.
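These commutation relations are easy to check numerically. Here is a minimal NumPy sketch, assuming two coupled spin-1/2 particles (as in hydrogen's ground state) and setting $\hbar=1$:

```python
import numpy as np

# Spin-1/2 operators (hbar = 1): half the Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
id2 = np.eye(2)

# Electron spin acts on the first tensor factor, nuclear spin on the second
S = [np.kron(s, id2) for s in (sx, sy, sz)]
I = [np.kron(id2, s) for s in (sx, sy, sz)]
F = [Si + Ii for Si, Ii in zip(S, I)]

F2 = sum(Fi @ Fi for Fi in F)
S2 = sum(Si @ Si for Si in S)
I2 = sum(Ii @ Ii for Ii in I)

def commutes(A, B):
    return np.allclose(A @ B, B @ A)

print(commutes(F2, S2))    # True:  F^2 commutes with S^2
print(commutes(F2, I2))    # True:  F^2 commutes with I^2
print(commutes(S[2], F2))  # False: S_z does not commute with F^2
```

The last line is exactly the failure of commutativity that forces off-diagonal elements.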
In order to see what the matrix representing $\mathbf{H}_B$ looks like in the $|F,m_F\rangle$-basis, you can express the $|m_I,m_S\rangle$-basis (in which $\mathbf{H}_B$ is diagonal) in terms of the other basis. This is exactly what equations (4.21) do; they are obtained by ordinary addition of angular momenta. From there, you can construct the unitary transforming the basis $|m_I,m_S\rangle$ into $|F,m_F\rangle$, and $\mathbf{H}_B$ will be the diagonal matrix in the basis $|m_I,m_S\rangle$ conjugated with this unitary.
EDIT:
I'm not quite sure whether I understand correctly what your problem is, but let me elaborate: We want to find the Hamiltonian $\mathbf{H}_B$ in the $|m_Im_S\rangle$ basis. In this basis, it is diagonal, because $\mathbf{H}_B$ is essentially $S_z$ (hence commutes with $S_z$) and it must also commute with $I_z$ since $S_z$ and $I_z$ are independent.
If we order the basis as $|\frac{1}{2},\frac{1}{2}\rangle,|-\frac{1}{2},-\frac{1}{2}\rangle,|\frac{1}{2},-\frac{1}{2}\rangle,|-\frac{1}{2},\frac{1}{2}\rangle$, we can just read off the Hamiltonian: the first and fourth vectors are eigenvectors with eigenvalue $\mu B$, the second and third with eigenvalue $-\mu B$ (by definition of $S_z$: the second entry of $|m_I,m_S\rangle$ tells us the $S_z$-eigenvalue that the basis vector corresponds to), i.e.
$$ \mathbf{H}_B=\begin{pmatrix} \mu B & 0 & 0 & 0 \\ 0 & -\mu B & 0 & 0 \\ 0 & 0 & -\mu B & 0 \\ 0 & 0 & 0 & \mu B\end{pmatrix}$$
Now, as I said, you just have to change the basis. The matrix transforming the above basis into the new basis is given by eqn. (4.21a-d):
$$U:=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ 0 & 0 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix}$$
where the ordering of the $|Fm_F\rangle$-basis is as for $\mathbf{H}$ in your text.
Now calculate $U\mathbf{H}_B U^{\dagger}$ and that should give you the part of $\mathbf{H}$ coming from $\mathbf{H}_B$ in the $|F,m_F\rangle$-basis and this will be exactly what is written in your book.
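To make this concrete, here is a small NumPy sketch of that conjugation, assuming the basis ordering above and normalized Clebsch-Gordan coefficients $1/\sqrt{2}$ (the exact signs of the off-diagonal entries depend on the phase conventions in your book's eqns. (4.21)):

```python
import numpy as np

muB = 1.0  # mu*B set to 1 for illustration

# H_B in the |m_I, m_S> basis, ordered |++>, |-->, |+->, |-+> as above
HB = np.diag([muB, -muB, -muB, muB])

# Basis-change matrix built from the normalized (1/sqrt(2)) Clebsch-Gordan
# coefficients; the ordering of the |F, m_F> basis follows the text
r = 1 / np.sqrt(2)
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, r, r],
              [0, 0, -r, r]])

H_F = U @ HB @ U.T  # U is real here, so U^dagger = U^T
print(np.round(H_F, 10))
```

The result stays diagonal on the $m_F=\pm 1$ states but acquires off-diagonal entries coupling the two $m_F=0$ states - precisely the off-diagonal elements discussed above.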
EDIT 2: I sort of suspected this, so here is some more linear algebra for the problem. I'll use Dirac notation, since I suspect you are more familiar with it:
Now suppose you are given two bases $|e_i\rangle$ and $|f_i\rangle$, and suppose they are orthonormal. What we want is a matrix $U$ that transforms one basis into the other (I'll call it $U$, since it'll be unitary - if the bases were not orthonormal, it would only be an invertible matrix). So we want a matrix such that
$$ |f_i\rangle:=U|e_i\rangle \qquad \forall i$$
How do we construct this matrix? Well, an equation expressing $|f_i\rangle$ in terms of the $|e_j\rangle$ gives you the $i$-th row of the matrix (with the conventions used here). You can also write the matrix elements in Dirac notation:
$$ \langle e_j|U|e_i\rangle=\langle e_j|f_i\rangle $$
In your case, $|e_i\rangle=|m_Im_S\rangle$ and $|f_i\rangle=|F,m_F\rangle$. Hence equation (4.21a) will give you the first row of the matrix (with the ordering of the basis vectors $|m_Im_S\rangle$ as I proposed above), (4.21c) the second (notice the basis ordering in the matrix $\mathbf{H}$!), (4.21b) the third, and (4.21d) the last row of the matrix. Using the equation for the matrix elements above, you should be able to check that without too much trouble. You can also easily check that $U$ is indeed unitary (i.e. $UU^{\dagger}=U^{\dagger}U=\mathbb{1}$).
Then we can calculate the matrix elements:
$$ \langle e_i |\mathbf{H}|e_j\rangle=\langle e_i|U^{\dagger}U\mathbf{H}U^{\dagger}U|e_j\rangle=\langle f_i| U\mathbf{H}U^{\dagger}|f_j\rangle, $$ which tells you what the matrix looks like in the other basis.
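As a sanity check of this formalism, the following NumPy sketch builds $U$ from the overlaps $\langle e_j|f_i\rangle$ and verifies unitarity and the change of basis for a Hermitian matrix. The dimension 4 and the randomly chosen second basis are just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# |e_i> is the standard basis of C^4; the |f_i> are the columns of a random
# unitary matrix (QR decomposition of a random complex matrix)
e = np.eye(n, dtype=complex)
f, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# Matrix elements U_{ji} = <e_j|f_i>
U = np.array([[e[:, j].conj() @ f[:, i] for i in range(n)] for j in range(n)])

assert np.allclose(U.conj().T @ U, np.eye(n))  # U is unitary
assert np.allclose(U @ e[:, 0], f[:, 0])       # U maps |e_0> to |f_0>

# A Hermitian "Hamiltonian" and its matrix elements in the f-basis:
# <f_i|H|f_j> = (U^dagger H U)_{ij}, the same content as the identity above
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = A + A.conj().T
H_f = U.conj().T @ H @ U
assert np.allclose(H_f[1, 2], f[:, 1].conj() @ H @ f[:, 2])
print("basis-change checks passed")
```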
I'll try to help here, with some details I believe will be of aid in this question. If there's anything in need of fixing, please let me know.
First of all, let's introduce some terminology. Given a group $G$ and a vector space $V$, a representation of $G$ on the vector space $V$ is a homomorphism $\rho : G\to GL(V)$. We then say that $V$ carries a representation of $G$.
If $V$ is a complex inner product space, and $\rho(g)$ is unitary for each $g\in G$, then $\rho$ is said a unitary representation.
The meaning of this is that the elements of $G$ act linearly on the vector space $V$.
There are lots of constructions that can be carried out with representations. One of them is the so-called direct sum of representations, which, as one would expect, is tied to the direct sum of vector spaces.
The point is that, keeping the group $G$ fixed, if you have a representation $(\rho,V)$ and another one $(\sigma,W)$, then you can induce a representation $(\rho\oplus \sigma, V\oplus W)$ on the vector space $V\oplus W$ by setting
$$(\rho\oplus\sigma)(g)(v,w)=(\rho(g)v, \sigma(g)w).$$
It turns out direct sums allow you to decompose representations into smaller components that build up the representation you started with.
For that matter, one defines a subrepresentation of $(\rho,V)$ to be given by a subspace $W$ of $V$ invariant under the $\rho$-action of $G$. It turns out that $(\rho|_W,W)$ is then a representation itself. With this terminology, we say that a representation is irreducible if it doesn't have any nonzero proper subrepresentations, i.e., if it doesn't have "smaller parts".
Now, let's talk about rotations. The rotation group in three dimensions is $SO(3)$. Quantum mechanics describes systems by Hilbert spaces, which are complete complex inner product spaces. Transformations should be implemented by unitary operators, to keep the physical quantities intact.
Thus if we want to know how rotations affect the states of the systems we are studying, we need to consider the action of $SO(3)$ on the Hilbert space $\mathcal{H}$, requiring unitarity, that is, we must consider unitary representations of $SO(3)$.
This is what your $\mathcal{D}(R)$ is. The issue is that you want the irreducible representations of $SO(3)$. This can be tackled by studying its Lie algebra, which is the angular momentum algebra. The result one finds is: (i) there are no infinite-dimensional unitary irreducible representations of $SO(3)$, and (ii) there is exactly one irreducible representation of $SO(3)$ for each integer $j$, with dimension $2j+1$ (half-integer $j$ yield representations of $SU(2)$, but not of $SO(3)$ itself). These representations are denoted $D^{j}$.
But these are just the irreducible ones. You can build up bigger representations with the direct sum: when you take the direct sum of two such representations, the new one is given by a block-diagonal matrix.
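A small NumPy sketch of this block-diagonal structure, using only rotations about the $z$-axis (where the spin-$j$ representation matrix is simply $\mathrm{diag}(e^{-im\theta})$, $m=j,\dots,-j$):

```python
import numpy as np

def Dj_z(j, theta):
    """Spin-j representation of a rotation by theta about z: diag(e^{-i m theta})."""
    m = np.arange(j, -j - 1, -1)
    return np.diag(np.exp(-1j * m * theta))

def direct_sum(A, B):
    """Block-diagonal matrix with A in the upper-left and B in the lower-right."""
    out = np.zeros((A.shape[0] + B.shape[0], A.shape[1] + B.shape[1]), dtype=complex)
    out[:A.shape[0], :A.shape[1]] = A
    out[A.shape[0]:, A.shape[1]:] = B
    return out

a, b = 0.7, 1.3
# (rho ⊕ sigma) is block-diagonal and is still a homomorphism:
lhs = direct_sum(Dj_z(1, a + b), Dj_z(2, a + b))
rhs = direct_sum(Dj_z(1, a), Dj_z(2, a)) @ direct_sum(Dj_z(1, b), Dj_z(2, b))
assert np.allclose(lhs, rhs)
print("direct sum is block-diagonal and respects composition")
```

Only rotations about a fixed axis are shown here; for general rotations the blocks are the (non-diagonal) Wigner matrices, but the block structure is the same.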
So when you let $j$ run through all integer values for example, you are actually considering the representation given by taking the direct sum over all these values. When you fix one $j$, you are considering just one particular irreducible representation acting on a subspace of state space.
For more information, search for the irreducible representations of $SO(3)$; you'll probably find the whole construction carried out rigorously.
There are two problems to deal with which must be disentangled to solve problems like these.
Both angular momentum operators are vector operators, so in some sense they "take values" in $\mathbb R^3$; you are being asked for their dot product, which should be taken within that copy of $\mathbb R^3$. You would have the same problem if you were asked to calculate the dot product $\mathbf r\cdot\mathbf p$ for a single particle without spin.
The orbital and spin angular momentum operators act on the two different factors of a tensor product of Hilbert spaces. Thus any (operator) product of a scalar orbital operator with a scalar spin operator should be interpreted as a tensor product. You would have the same problem if you were asked to calculate the product $L^2S^2$, which would need to be interpreted as $L^2\otimes S^2$.
Thus, in your case, you must read $L\cdot S$ as $$ \mathbf{L}\cdot \mathbf{S}=\sum_{i=1}^3L_iS_i=\sum_{i=1}^3L_i\otimes S_i. $$ To compute the matrix representation of this, you should begin with the matrix representation of each $L_i$ and $S_i$. You then compute the tensor product matrices $L_i\otimes S_i$. Finally, you add all of those matrices together to get the final result.
This is all much clearer with an example. The $z$ component, for example, is easy, since each matrix is given by $$ L_z=\hbar\begin{pmatrix}1&0&0\\0&0&0\\0&0&-1\end{pmatrix} \quad\text{and}\quad S_z=\frac\hbar 2 \begin{pmatrix}1&0\\0&-1\end{pmatrix}, $$ in the bases $\{|1\rangle,|0\rangle,|-1\rangle\}$ and $\{|\tfrac12\rangle,|-\tfrac12\rangle\}$ respectively. The tensor product matrix, then, in the basis $\{|1\rangle\otimes|\tfrac12\rangle ,|0\rangle\otimes|\tfrac12\rangle ,|-1\rangle\otimes|\tfrac12\rangle , |1\rangle\otimes|-\tfrac12\rangle ,|0\rangle\otimes|-\tfrac12\rangle ,|-1\rangle\otimes|-\tfrac12\rangle \}$, is given by $$ L_z\otimes S_z=\frac{\hbar^2} 2 \begin{pmatrix} 1\begin{pmatrix}1&0&0\\0&0&0\\0&0&-1\end{pmatrix} & 0\begin{pmatrix}1&0&0\\0&0&0\\0&0&-1\end{pmatrix} \\ 0\begin{pmatrix}1&0&0\\0&0&0\\0&0&-1\end{pmatrix} & -1\begin{pmatrix}1&0&0\\0&0&0\\0&0&-1\end{pmatrix} \end{pmatrix} =\frac{\hbar^2} 2 \begin{pmatrix} 1&0&0& 0&0&0\\0&0&0&0&0&0\\0&0&-1&0&0&0\\ 0&0&0&-1&0&0\\0&0&0&0&0&0\\0&0& 0&0&0&1 \end{pmatrix}. $$ This procedure should be repeated with both the $x$ and the $y$ components. Each of those will yield a six-by-six matrix (in this case). To get your final answer you should add all three matrices.
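The whole computation can be sketched in a few lines of NumPy (with $\hbar=1$). Note that, for the basis ordering used above, the spin label varies slowest, so the abstract $L_i\otimes S_i$ corresponds numerically to `np.kron(S_i, L_i)`:

```python
import numpy as np

hbar = 1.0  # work in units of hbar

# Spin-1 (orbital, l = 1) matrices in the basis {|1>, |0>, |-1>}
r = 1 / np.sqrt(2)
Lx = hbar * r * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ly = hbar * r * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Lz = hbar * np.diag([1, 0, -1]).astype(complex)

# Spin-1/2 matrices in the basis {|+1/2>, |-1/2>}
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

# L.S as the sum of the three 6x6 Kronecker-product matrices; the spin
# matrix is the first factor because its index is the outer one here
LdotS = sum(np.kron(S, L) for L, S in zip((Lx, Ly, Lz), (Sx, Sy, Sz)))

# Sanity check via L.S = (J^2 - L^2 - S^2)/2: the eigenvalues must be
# hbar^2/2 (the four j = 3/2 states) and -hbar^2 (the two j = 1/2 states)
print(np.round(np.linalg.eigvalsh(LdotS), 10))
```

The eigenvalue check is a useful habit: it catches ordering mistakes in the Kronecker products, since a wrong ordering generally ruins the $j=3/2$ / $j=1/2$ degeneracy pattern.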