I haven't thought about this one before, so here is an approach that will work if you work hard enough at it.
Before I begin banging on, point number 1:
Should I assume a spin 3/2 system (4x4 Matrix) or an entangled Hilbert space with spin 1/2 and spin 1 (6x6 Matrix)?
Unquestionably the latter. It is a bipartite system and its state space is the tensor product of the two particle spaces. It simply cannot be anything else.
The basic principle here is conservation of angular momentum, so your basic procedure to solve your problem is:
Work out the matrices for the observables of the three net angular momentum components (the three net angular momentum operators);
Find the most general Hamiltonian that commutes with all three of these: commutation with the Hamiltonian is equivalent to time invariance of all the moments of the probability distributions of the corresponding measurements.
Part 1: The Three Angular Momentum Operators
The $x$-AM component observable for the spin half particle,
$$\sigma_x=\left(\begin{array}{cc}0&1\\1&0\end{array}\right)$$
has AM eigenvectors:
$$\psi_+=\frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\1\end{array}\right);\quad\psi_-=\frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\-1\end{array}\right)$$
and AM eigenvalues $\lambda_+=+\frac{1}{2}$ and $\lambda_-=-\frac{1}{2}$, respectively.
The $x$-AM component observable for the spin 1 particle,
$$S_x = \left(\begin{array}{ccc}0&0&0\\0&0&i\\0&-i&0\end{array}\right)$$
has AM eigenvectors:
$$\Psi_+=\frac{1}{\sqrt{2}}\left(\begin{array}{c}0\\i\\1\end{array}\right);\quad\Psi_-=\frac{1}{\sqrt{2}}\left(\begin{array}{c}0\\1\\i\end{array}\right);\quad\Psi_0=\left(\begin{array}{c}1\\0\\0\end{array}\right)$$
and AM eigenvalues $\Lambda_+=+1$, $\Lambda_-=-1$ and $\Lambda_0=0$, respectively. So now, for the two particle system, the six $x$-AM eigenstates are:
- $\psi_+\otimes\Psi_+$ with AM eigenvalue $\frac{1}{2}+1=\frac{3}{2}$
- $\psi_+\otimes\Psi_0$ with AM eigenvalue $\frac{1}{2}+0=\frac{1}{2}$
- $\psi_+\otimes\Psi_-$ with AM eigenvalue $\frac{1}{2}-1=-\frac{1}{2}$
- $\psi_-\otimes\Psi_+$ with AM eigenvalue $-\frac{1}{2}+1=\frac{1}{2}$
- $\psi_-\otimes\Psi_0$ with AM eigenvalue $-\frac{1}{2}+0=-\frac{1}{2}$
- $\psi_-\otimes\Psi_-$ with AM eigenvalue $-\frac{1}{2}-1=-\frac{3}{2}$
and so, if we order the eigenstates as above, the eigenvectors, written as columns, are $\mathrm{vec}(\psi_+\otimes\Psi_+),\,\mathrm{vec}(\psi_+\otimes\Psi_0),\,\cdots$ (see the Wikipedia Vectorization page), and at last we get the total $x$-AM component observable $\Sigma_X = P_X \Lambda_X P_X^\dagger$ where
$$P_X=\left(
\begin{array}{cccccc}
0 & \frac{1}{\sqrt{2}} & 0 & 0 & \frac{1}{\sqrt{2}} & 0 \\
0 & \frac{1}{\sqrt{2}} & 0 & 0 & -\frac{1}{\sqrt{2}} & 0 \\
\frac{i}{2} & 0 & \frac{1}{2} & \frac{i}{2} & 0 & \frac{1}{2} \\
\frac{i}{2} & 0 & \frac{1}{2} & -\frac{i}{2} & 0 & -\frac{1}{2} \\
\frac{1}{2} & 0 & \frac{i}{2} & \frac{1}{2} & 0 & \frac{i}{2} \\
\frac{1}{2} & 0 & \frac{i}{2} & -\frac{1}{2} & 0 & -\frac{i}{2} \\
\end{array}
\right)$$
and $\Lambda_X =\mathrm{diag}\left(\frac{3}{2},\,\frac{1}{2},\,-\frac{1}{2},\,\frac{1}{2},\,-\frac{1}{2},\,-\frac{3}{2}\right)$. The result is:
$$\Sigma_X=\left(
\begin{array}{cccccc}
0 & \frac{1}{2} & 0 & 0 & 0 & 0 \\
\frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{1}{2} & i & 0 \\
0 & 0 & \frac{1}{2} & 0 & 0 & i \\
0 & 0 & -i & 0 & 0 & \frac{1}{2} \\
0 & 0 & 0 & -i & \frac{1}{2} & 0 \\
\end{array}
\right)$$
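As a quick numerical sanity check (a sketch in Python/NumPy, assuming the Kronecker ordering $\mathrm{vec}(\Phi)\mapsto (S_x\otimes\sigma_x)\,\mathrm{vec}(\Phi)$ used above, so that $\Sigma_X = S_x\otimes 1_{2\times2} + 1_{3\times3}\otimes\frac{\sigma_x}{2}$): the total observable should have eigenvalues $\pm\frac{3}{2}$ once each and $\pm\frac{1}{2}$ twice each.

```python
import numpy as np

# Spin-half x observable (eigenvalues +-1/2) and the spin-1 x matrix from above
sigma_x = 0.5 * np.array([[0, 1], [1, 0]])
S_x = np.array([[0, 0, 0], [0, 0, 1j], [0, -1j, 0]])

# Total x angular momentum in the vec ordering (spin-1 factor first)
Sigma_X = np.kron(S_x, np.eye(2)) + np.kron(np.eye(3), sigma_x)

eigs = np.sort(np.linalg.eigvalsh(Sigma_X))
print(eigs)  # approximately [-1.5 -0.5 -0.5  0.5  0.5  1.5]
```

The two addends commute, which is why the one-particle eigenvalues simply add.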
From here on it should be conceptually clear how to proceed, although it is tedious. You do the same for the $y$-AM observables:
$$\sigma_y=\left(\begin{array}{cc}0&-i\\i&0\end{array}\right)$$
$$S_y = \left(\begin{array}{ccc}0&0&i\\0&0&0\\-i&0&0\end{array}\right)$$
to find the total system $y$-AM observable $\Sigma_Y$ and for the $z$-AM observables:
$$\sigma_z=\left(\begin{array}{cc}1&0\\0&-1\end{array}\right)$$
$$S_z = \left(\begin{array}{ccc}0&i&0\\-i&0&0\\0&0&0\end{array}\right)$$
to get the total system $z$-AM observable $\Sigma_Z$.
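A useful consistency check on the three totals (a sketch, with the signs of the spin-1 matrices chosen so that the one-particle operators close under $[A_x,\,A_y]=i\,A_z$ and cyclic permutations): the three $\Sigma_j$ must satisfy the same su(2) commutation relations, which is what makes the conservation argument work.

```python
import numpy as np

# Spin-half observables sigma_j / 2
sig = [0.5 * np.array(m) for m in ([[0, 1], [1, 0]],
                                   [[0, -1j], [1j, 0]],
                                   [[1, 0], [0, -1]])]
# Spin-1 matrices in the Cartesian basis, signs chosen so [S_x, S_y] = i S_z
S = [np.array(m) for m in ([[0, 0, 0], [0, 0, 1j], [0, -1j, 0]],
                           [[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]],
                           [[0, 1j, 0], [-1j, 0, 0], [0, 0, 0]])]

# Totals in the vec ordering (spin-1 factor first)
Sigma = [np.kron(S[k], np.eye(2)) + np.kron(np.eye(3), sig[k]) for k in range(3)]

# su(2): [Sigma_x, Sigma_y] = i Sigma_z, and cyclic
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    comm = Sigma[a] @ Sigma[b] - Sigma[b] @ Sigma[a]
    assert np.allclose(comm, 1j * Sigma[c])
print("su(2) algebra verified")
```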
Part 2: Find the Most General Hamiltonian
Your most general Hamiltonian will be defined by the three commutator relationships expressing conservation of AM:
$$[\hat{H},\,\Sigma_j]=0;\;j=X,\,Y,\,Z$$
You'll need to work out the invariant subspaces of the three $\Sigma$s to do this. You'll get a linear space of possible $\hat{H}$s: in the case of two coupled spin-half particles there is essentially only one Hamiltonian that falls out of this approach, namely one proportional to $\sigma_x\otimes\sigma_x+\sigma_y\otimes\sigma_y+\sigma_z\otimes\sigma_z$ (plus a term proportional to the $4\times4$ identity matrix expressing a shift in the ground-state energy), but in this six-dimensional case things will be a bit more complicated: here the state space splits as $\frac{1}{2}\otimes 1 = \frac{3}{2}\oplus\frac{1}{2}$, i.e. into a four-dimensional and a two-dimensional invariant subspace. Now, as I said, I've never done this before, so I daresay there is a more systematic and less cumbersome way to work this all out. But any method is going to rest on the first principles expressed above.
Magnetic Field
What are the terms for the influence of the magnetic field? Well, that's an easy one: in the ordering we have studied above, the uncoupled Hamiltonian will be:
$$\hat{H} = \gamma_{\frac{1}{2}}\,1_{3\times3}\otimes\left(\sigma_x\,B_x + \sigma_y\,B_y+ \sigma_z\,B_z\right) + \gamma_1\,\left(S_x\,B_x + S_y\,B_y+ S_z\,B_z\right)\otimes 1_{2\times2}$$
where $\gamma_{\frac{1}{2}}$ and $\gamma_1$ are the respective gyromagnetic ratios.
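A minimal sketch of assembling this Zeeman Hamiltonian numerically, with the spin-1 factor first in the Kronecker product as above; the field components and the gyromagnetic ratios here are arbitrary placeholder values, but whatever their values the result must come out Hermitian.

```python
import numpy as np

sigma = [np.array(m) for m in ([[0, 1], [1, 0]],
                               [[0, -1j], [1j, 0]],
                               [[1, 0], [0, -1]])]
S = [np.array(m) for m in ([[0, 0, 0], [0, 0, 1j], [0, -1j, 0]],
                           [[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]],
                           [[0, 1j, 0], [-1j, 0, 0], [0, 0, 0]])]

B = [0.3, -1.1, 0.7]               # placeholder field components
gamma_half, gamma_one = 2.0, 0.5   # placeholder gyromagnetic ratios

H = (gamma_half * np.kron(np.eye(3), sum(b * s for b, s in zip(B, sigma)))
     + gamma_one * np.kron(sum(b * s for b, s in zip(B, S)), np.eye(2)))

assert np.allclose(H, H.conj().T)  # Hermitian, as any observable must be
```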
Notes on completing the method. You can also represent a bipartite state $\Phi=\psi\otimes\Psi$ as the literal $2\times 3$ matrix given by the outer product $\Phi=\psi\,\Psi^T$ of the $2\times 1$ and $3\times 1$ column vectors. Then operators on the first space act on the left and operators on the second act on the right. So, for example, the product operator $\sigma_x\otimes S_x$, which is diagonal in the same product basis as $\Sigma_X$, acts by the linear, homogeneous transformation:
$$\Phi\mapsto \sigma_x\,\Phi\,S_x^T$$
and the vectorization operator (see the Wikipedia Vectorization page), which reorders our states into $6\times 1$ column vectors as in my answer, rewrites this as
$$\mathrm{vec}(\Phi) \mapsto S_x\otimes\sigma_x\,\mathrm{vec}(\Phi)$$
using the standard formula $\mathrm{vec}(A\,B\,C) = (C^T\otimes A) \,\mathrm{vec}(B)$. By dint of the formula $(A\otimes B)\, (C\otimes D) = (A\,C)\otimes(B\,D)$, and the fact that the inverse, complex conjugate, Hermitian conjugate and transpose operations all distribute over the Kronecker product, we can diagonalize $S_x\otimes\sigma_x$ factor by factor inside the Kronecker product and find that the coupled system's matrix of eigenvectors is $\Pi_x\otimes \pi_x$, where $\Pi_x,\,\pi_x$ are the matrices of eigenvectors of the individual factors written as columns. So this will let you calculate the $\Sigma_j,\,j=X,\,Y,\,Z$ systematically and fast.
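The vectorization identity is easy to verify numerically (a quick sketch, using NumPy's column-major `reshape` as the $\mathrm{vec}$ operator):

```python
import numpy as np

rng = np.random.default_rng(0)

def vec(M):
    # Stack the columns of M into one column vector (column-major order)
    return M.reshape(-1, 1, order="F")

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
S_x = np.array([[0, 0, 0], [0, 0, 1j], [0, -1j, 0]])

# A random bipartite state as a 2 x 3 matrix
Phi = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))

# vec(A B C) = (C^T kron A) vec(B), with A = sigma_x, B = Phi, C = S_x^T
lhs = vec(sigma_x @ Phi @ S_x.T)
rhs = np.kron(S_x, sigma_x) @ vec(Phi)
assert np.allclose(lhs, rhs)
```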
Now, to find the most general Hamiltonian, you need to find the invariant subspaces of the group of matrices generated by the three matrices $\exp(i\,\Sigma_j)$ and decompose $\mathbb{C}^6$ into irreducible invariant subspaces of this group. By Schur's lemma, any matrix commuting with all three $\Sigma_j$ must be proportional to the identity operator when restricted to each irreducible subspace; the scaling factor can differ between subspaces, and is possibly nought, i.e. the restriction could be the zero endomorphism. This completely characterizes the most general Hamiltonian: it can be any operator that is proportional to the identity on each irreducible invariant subspace.
You could also find the subspace common to the nullspaces of the three $36\times 36$ matrices $1_{6\times 6}\otimes \Sigma_j - \Sigma_j^T\otimes 1_{6\times 6}$ in Mathematica or Matlab, but I suspect there is a much more elegant method grounded on Schur's lemma!
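Here is a sketch of that nullspace computation in Python rather than Mathematica/Matlab: stack the three Sylvester-type matrices and count the near-zero singular values. Given the $\frac{3}{2}\oplus\frac{1}{2}$ decomposition, the commutant should come out two-dimensional (one independent identity block per irreducible subspace).

```python
import numpy as np

sig = [0.5 * np.array(m) for m in ([[0, 1], [1, 0]],
                                   [[0, -1j], [1j, 0]],
                                   [[1, 0], [0, -1]])]
S = [np.array(m) for m in ([[0, 0, 0], [0, 0, 1j], [0, -1j, 0]],
                           [[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]],
                           [[0, 1j, 0], [-1j, 0, 0], [0, 0, 0]])]
Sigma = [np.kron(s1, np.eye(2)) + np.kron(np.eye(3), sh) for s1, sh in zip(S, sig)]

# vec([Sigma_j, H]) = (I kron Sigma_j - Sigma_j^T kron I) vec(H); stack all three
M = np.vstack([np.kron(np.eye(6), Sj) - np.kron(Sj.T, np.eye(6)) for Sj in Sigma])

rank = np.linalg.matrix_rank(M)
print(36 - rank)  # dimension of the space of commuting H's -> 2
```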
I'm not sure if this is really what you're looking for, but you can of course solve this problem analytically.
To do this, it is clever to first analyze the simpler Hamiltonian
$H_0 = 2g (\vec L \cdot \vec S)$, where the $L_i$ and $S_j$ satisfy independent $SU(2)$ algebras $$
[L_i, L_j] = i \epsilon_{ijk} L_k\\
[S_i, S_j] = i \epsilon_{ijk} S_k.
$$
This Hamiltonian can be written as
$$H_0 = g(J^2 - L^2 - S^2),$$
where we have defined the following operators:
$$L^2 = \sum_{i=1}^3 L_i^2 \otimes \mathbb{1},\\
S^2 = \sum_{i=1}^3 \mathbb{1} \otimes S_i^2, \\
J_i = L_i\otimes \mathbb{1} + \mathbb{1}\otimes S_i,\\J^2 = \sum_{i=1}^3 J_i^2.$$
Now, as $L^2$ and $S^2$ are both equal to $\frac{1}{2}(1+\frac{1}{2}) = \frac{3}{4}$ on the subspace we are interested in (namely that of two spin-1/2 particles), we can write the Hamiltonian as
$$H_0 = g(J^2 - 3/2)$$
and by simple addition of angular momenta, we find the eigenstates $|j, m\rangle$:
$$|1, 1\rangle = \left|\uparrow\uparrow\right\rangle\\
|1, 0\rangle = \frac{1}{\sqrt{2}}(\left|\uparrow\downarrow\right\rangle+\left|\downarrow\uparrow\right\rangle) \\
|1, -1\rangle = \left|\downarrow\downarrow\right\rangle\\
|0, 0\rangle = \frac{1}{\sqrt{2}}(\left|\uparrow\downarrow\right\rangle-\left|\downarrow\uparrow\right\rangle)$$
with energies $E_1 = g/2$ for the triplet states and $E_0 = -3g/2$ for the singlet (and the obvious notation for the product basis of the two-particle Hilbert space).
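This spectrum is easy to confirm numerically (a sketch with $g=1$, taking $L_i = \frac{\sigma_i}{2}\otimes\mathbb{1}$ and $S_i = \mathbb{1}\otimes\frac{\sigma_i}{2}$): $H_0$ should have the triplet eigenvalue $g/2$ three times and the singlet eigenvalue $-3g/2$ once.

```python
import numpy as np

g = 1.0
half = [0.5 * np.array(m) for m in ([[0, 1], [1, 0]],
                                    [[0, -1j], [1j, 0]],
                                    [[1, 0], [0, -1]])]

# H0 = 2g L.S with L_i = s_i (x) 1 and S_i = 1 (x) s_i
H0 = 2 * g * sum(np.kron(s, s) for s in half)

eigs = np.sort(np.linalg.eigvalsh(H0))
print(eigs)  # approximately [-1.5  0.5  0.5  0.5]
```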
Now let's proceed to the real problem and add the second term. As you said, $$[J^2, L_z] = 2i (\vec L\times \vec S)_z$$ and so the eigenvalue of $J^2$ is no longer a good quantum number. But
$$[H, J_z] = 0,$$
so we can label the states of the system by their energy and their $J_z$ component. Indeed, if we calculate the action of $H$ on the former eigenbasis, we see that it only mixes $|1, 0\rangle$ and $|0, 0\rangle$. By diagonalizing the full Hamiltonian we find a new basis of eigenstates:
$$|m=\pm 1, E_{\pm1} \rangle = |1, \pm 1\rangle\\
|m=0, E_{0,\pm}\rangle = \frac{1}{C_\pm}\left(\left(g\pm\sqrt{g^2+d^2}\right) |1, 0\rangle + d|0, 0\rangle\right),$$
where $C_\pm^2 = 2\left(g^2 \pm g \sqrt{g^2+d^2} + d^2\right)$ and corresponding energies $$E_{\pm1} = \frac{g}{2}\pm d\\
E_{0, \pm} = -\frac{g}{2} \pm \sqrt{g^2 + d^2},$$
which of course reduce to the eigenvalues of $H_0$ as $d$ goes to zero.
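The full diagonalization can be checked numerically too. The second term itself isn't reproduced in this excerpt, so this is an assumption on my part: a perturbation of the form $2d\,L_z = d\,\sigma_z\otimes\mathbb{1}$ reproduces exactly the quoted spectrum.

```python
import numpy as np

g, d = 1.0, 0.7
half = [0.5 * np.array(m) for m in ([[0, 1], [1, 0]],
                                    [[0, -1j], [1j, 0]],
                                    [[1, 0], [0, -1]])]
sigma_z = np.array([[1, 0], [0, -1]])

# H = 2g L.S + 2d L_z  (the second term is my assumed form)
H = 2 * g * sum(np.kron(s, s) for s in half) + d * np.kron(sigma_z, np.eye(2))

expected = np.sort([g / 2 + d, g / 2 - d,
                    -g / 2 + np.sqrt(g**2 + d**2),
                    -g / 2 - np.sqrt(g**2 + d**2)])
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), expected)
```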
Best Answer
I'll rename your parameters in a more standard form: $$ H = -B_1S_1^z-B_2S_2^z-JS_1^xS_2^x $$ In general, I don't think there is a simple closed form for arbitrary $s_1,s_2$. Intuitively, your Hamiltonian has none of the standard symmetries, so it will be hard to diagonalise directly in a standard basis. I'll just propose various approximation schemes that can be useful for calculating the spectrum.
A first natural step is to treat $J$ perturbatively in the $J\to0$ limit. In this case, the natural basis is the simultaneous $S_1^z,S_2^z$ eigenbasis of $(2s_1+1)(2s_2+1)$ vectors: $|m_1,m_2\rangle_{|m_1|\leq s_1,|m_2|\leq s_2}$. They automatically form an energy eigenbasis: $$ H(J=0)|m_1,m_2\rangle = (-B_1m_1-B_2m_2)|m_1,m_2\rangle \\ E_{m_1,m_2}^{(0)} = -B_1m_1-B_2m_2 $$
For simplicity, assume that the unperturbed energy spectrum is non-degenerate ($B_1,B_2$ being incommensurable suffices); first-order perturbation theory then gives a vanishing correction, and the leading correction appears at second order. Using: $$ \langle m'|S^x|m\rangle = \frac{1}{2}(\delta_{m',m+1}+\delta_{m',m-1})\sqrt{s(s+1)-m\,m'} $$ you get: $$ E_{m_1,m_2}^{(2)} =\frac{J^2}{16}\sum_{\epsilon_1,\epsilon_2\in\{\pm1\}}\frac{[s_1(s_1+1)-m_1(m_1+\epsilon_1)][s_2(s_2+1)-m_2(m_2+\epsilon_2)]}{\epsilon_1B_1+\epsilon_2B_2} $$
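The second-order correction can be checked against exact diagonalization. A sketch: build the spin matrices for arbitrary $s$ and sum the generic second-order series $\sum_{k\neq n} |V_{kn}|^2/(E_n^{(0)}-E_k^{(0)})$, here for $s_1=\frac12$, $s_2=1$, incommensurable fields, and a small $J$.

```python
import numpy as np

def spin_xz(s):
    # S^z and S^x for spin s, in the basis m = s, s-1, ..., -s
    dim = int(round(2 * s)) + 1
    m = s - np.arange(dim)
    Sz = np.diag(m)
    Sp = np.zeros((dim, dim))
    for k in range(1, dim):  # raising operator: |m_k> -> |m_k + 1>
        Sp[k - 1, k] = np.sqrt(s * (s + 1) - m[k] * (m[k] + 1))
    Sx = (Sp + Sp.T) / 2
    return Sx, Sz

s1, s2, B1, B2, J = 0.5, 1.0, 1.0, np.sqrt(2), 1e-3
Sx1, Sz1 = spin_xz(s1)
Sx2, Sz2 = spin_xz(s2)
I1, I2 = np.eye(Sz1.shape[0]), np.eye(Sz2.shape[0])

H0 = -B1 * np.kron(Sz1, I2) - B2 * np.kron(I1, Sz2)  # diagonal, non-degenerate
V = -J * np.kron(Sx1, Sx2)

E0 = np.diag(H0)
E2 = np.array([sum(abs(V[k, n]) ** 2 / (E0[n] - E0[k])
                   for k in range(len(E0)) if k != n)
               for n in range(len(E0))])

# The residual error is O(J^4), far below this tolerance
exact = np.sort(np.linalg.eigvalsh(H0 + V))
assert np.allclose(exact, np.sort(E0 + E2), atol=1e-9)
```

Third order vanishes here because three applications of $S_1^xS_2^x$ can never return to the starting $(m_1,m_2)$, so the agreement holds to $O(J^4)$.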
Conversely, you could use perturbation theory in the $J\to\infty$ limit. This time, the natural basis is the simultaneous $S_1^x,S_2^x$ eigenbasis of $(2s_1+1)(2s_2+1)$ vectors: $|m_1,m_2\rangle_{|m_1|\leq s_1,|m_2|\leq s_2}$. They automatically form an energy eigenbasis: $$ H(J\to\infty)|m_1,m_2\rangle = -Jm_1m_2|m_1,m_2\rangle \\ E_{m_1,m_2}^{(0)} = -Jm_1m_2 $$ However, the energies are degenerate, so you need to apply degenerate perturbation theory, which complicates matters.
Finally, there is a third general approach, outlined by Gec. Just as the low $s_1,s_2$ limit is easy to compute, the high $s_1,s_2$ limit is also easier: you just need to think classically, since large $s$ is equivalent to vanishing $\hbar$. $S_1,S_2$ are now classical vectors of respective norms $s_1,s_2$. Assuming that they are both uniformly and independently distributed on their respective spheres (using the standard rotation-invariant measure), this gives the energy density: $$ \rho(E) = \frac{(2s_1+1)(2s_2+1)}{(4\pi)^2}\int \delta(E+B_1s_1\cos\theta_1+B_2s_2\cos\theta_2+Js_1s_2\sin\theta_1\sin\theta_2\cos\phi_1\cos\phi_2)\sin\theta_1\sin\theta_2\,d\phi_1d\theta_1d\phi_2d\theta_2 $$ Once again, I don't think you can calculate this analytically in general, but simulating it using Monte Carlo methods is very easy.
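A Monte Carlo sketch of this classical density of states (sampling $\cos\theta_i$ uniformly on $[-1,1]$ and $\phi_i$ uniformly on $[0,2\pi)$, which realises the rotation-invariant measure, then histogramming the energies; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
s1, s2, B1, B2, J = 1.5, 2.0, 1.0, 0.8, 0.5
N = 200_000

# Uniform measure on each sphere: cos(theta) uniform, phi uniform
cos1, cos2 = rng.uniform(-1, 1, N), rng.uniform(-1, 1, N)
phi1, phi2 = rng.uniform(0, 2 * np.pi, N), rng.uniform(0, 2 * np.pi, N)
sin1, sin2 = np.sqrt(1 - cos1**2), np.sqrt(1 - cos2**2)

E = (-B1 * s1 * cos1 - B2 * s2 * cos2
     - J * s1 * s2 * sin1 * sin2 * np.cos(phi1) * np.cos(phi2))

# Normalised histogram estimates rho(E) up to the (2s1+1)(2s2+1) factor
hist, edges = np.histogram(E, bins=100, density=True)
print(E.mean())  # each term averages to zero, so this should be near 0
```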
Hope this helps.