There is nothing special about complex eigenvalues. On the generalised eigenspace for $\lambda=-2+3i$, which is by definition the space of vectors ultimately annihilated by repeated applications of $A-\lambda I$, the matrix $A-\lambda I$ acts nilpotently. Finding the Jordan normal form of $A$ on that generalised eigenspace is therefore exactly the problem of finding the Jordan normal form of the nilpotent restriction of $A-\lambda I$ to that space. For a nilpotent matrix the only eigenvalue is $0$, which simplifies thinking about it.
In case $A$ has multiple eigenvalues, the (complexified) vector space canonically decomposes into a direct sum of generalised eigenspaces for those eigenvalues; one performs this decomposition and then focusses on the restrictions to those generalised eigenspaces, which are completely independent. All in all the whole problem of finding Jordan normal forms boils down to the special case of nilpotent matrices.
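As a concrete numerical sketch (using an illustrative $3\times3$ matrix, not any particular matrix from the question): the generalised eigenspace for $\lambda$ can be computed as the null space of $(A-\lambda I)^n$, and the bases of the two spaces together span the whole space, exhibiting the direct-sum decomposition.

```python
import numpy as np

# Illustrative matrix with eigenvalue 2 (multiplicity 1) and eigenvalue 3
# (algebraic multiplicity 2, geometric multiplicity 1).
A = np.array([[3., 1., 0.],
              [0., 2., 1.],
              [0., 0., 3.]])
n = A.shape[0]

def generalized_eigenspace(A, lam):
    """Orthonormal basis (as columns) of ker (A - lam*I)^n, found via SVD."""
    M = np.linalg.matrix_power(A - lam * np.eye(n), n)
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > 1e-10 * s[0]))
    return Vt[rank:].T

E2 = generalized_eigenspace(A, 2.0)
E3 = generalized_eigenspace(A, 3.0)
print(E2.shape[1], E3.shape[1])   # dimensions equal the algebraic multiplicities: 1 2

# Concatenating the two bases gives an invertible matrix, so the whole
# space is the direct sum of the two generalised eigenspaces.
assert abs(np.linalg.det(np.hstack([E2, E3]))) > 1e-10
```

The dimensions of the generalised eigenspaces match the algebraic multiplicities, which is exactly why restricting to them loses nothing.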
Of course, if you start with a real matrix with complex (conjugate) eigenvalues, then the decomposition into generalised eigenspaces only exists in the complexified vector space: there is no corresponding decomposition of the initial real vector space. This is no different from what happens when the matrix is diagonalisable but with complex eigenvalues: there is no real subspace corresponding to each separate complex eigenvalue (although you can associate one real subspace to each pair of complex conjugate eigenvalues; that subspace is however not an eigenspace).
For the final question you added: one cannot write that expression in terms of real matrices. Did anybody tell you that one could? There may be some standard form you can give the matrix over the real numbers, but describing it would be quite messy. If you really want to know, you can try reading up on the Jordan–Chevalley decomposition.
Your question is similar to Example 2 in Wikipedia article "Generalized eigenvectors," on which I base my answer.
For a given eigenvalue, the number of chains equals the number of linearly-independent eigenvectors for that eigenvalue. So for your matrix A, each eigenvalue has one chain.
For eigenvalue 2, because the algebraic multiplicity is one, the chain length is one and consists of the corresponding eigenvector x₁ = [–1, 1, 0]ᵀ.
For eigenvalue 3, the algebraic multiplicity is two, but there is only one corresponding eigenvector, so you need to find one more generalized eigenvector to make a chain of length two. To do that, solve the matrix equation (A – 3 I)y₂ = y₁ for y₂, where y₁ is your eigenvector [1, 0, 0]ᵀ. You get y₂ = [0, 1, 1]ᵀ.
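Since the question's matrix A isn't reproduced in this answer, here is a sketch with a hypothetical matrix consistent with the vectors quoted above (eigenvalue 2 with eigenvector x₁, eigenvalue 3 with eigenvector y₁, and the chain equation (A – 3 I)y₂ = y₁):

```python
import numpy as np

# Hypothetical matrix consistent with the eigen-data quoted above
# (the question's actual A is not shown in this answer).
A = np.array([[3., 1., 0.],
              [0., 2., 1.],
              [0., 0., 3.]])
I = np.eye(3)

x1 = np.array([-1., 1., 0.])   # eigenvector for eigenvalue 2
y1 = np.array([ 1., 0., 0.])   # eigenvector for eigenvalue 3
assert np.allclose(A @ x1, 2 * x1)
assert np.allclose(A @ y1, 3 * y1)

# Solve (A - 3I) y2 = y1.  The matrix is singular, so use least squares,
# which returns the minimum-norm solution of this consistent system.
y2, *_ = np.linalg.lstsq(A - 3 * I, y1, rcond=None)
assert np.allclose(y2, [0., 1., 1.])

# Chain relations: y2 is a generalized eigenvector of rank 2 for eigenvalue 3.
assert np.allclose((A - 3 * I) @ y2, y1)
assert np.allclose(np.linalg.matrix_power(A - 3 * I, 2) @ y2, 0)
```

Note that y₂ is only determined up to adding multiples of y₁; least squares happens to return the minimum-norm representative, which here is [0, 1, 1]ᵀ.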
Hence, you have one chain for each eigenvalue, and a chain basis is {x₁, y₁, y₂}.
With the chain basis in that order, the first Jordan block is a one-by-one block for eigenvalue 2, and the second block is a two-by-two block for eigenvalue 3.
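With the same hypothetical A consistent with the data above (again, not necessarily the question's actual matrix), one can check that conjugating by the chain-basis matrix produces exactly this block structure:

```python
import numpy as np

A = np.array([[3., 1., 0.],     # hypothetical matrix consistent with the
              [0., 2., 1.],     # eigen-data in the answer above
              [0., 0., 3.]])

# Chain basis as columns, in the stated order: x1, then y1, y2.
S = np.array([[-1., 1., 0.],
              [ 1., 0., 1.],
              [ 0., 0., 1.]])

J = np.linalg.inv(S) @ A @ S
# Expected: a 1x1 block for eigenvalue 2, then a 2x2 block for eigenvalue 3.
assert np.allclose(J, [[2., 0., 0.],
                       [0., 3., 1.],
                       [0., 0., 3.]])
```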
I am not sure what the instructor had in mind for an exam approach, but there are many ways to calculate the matrix exponential, for example Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later.
The closed-form eigenvalues and eigenvectors are quite ugly, so we will use numerical values. Because this matrix has three distinct eigenvalues, a typical approach for finding the matrix exponential is diagonalization.
We have (where $P$ has the eigenvectors as its columns and $D$ is the diagonal matrix of the corresponding eigenvalues)
$$e^A = P e^D P^{-1}$$
For the matrix $A$, we have eigenvalues
$$\lambda_{1,2,3} = 1.68721 + 0.889497 i,1.68721 - 0.889497 i,-1.37442$$
The eigenvectors are
$$P = \begin{pmatrix} 1.63173\, -2.11204 i & 2.68721\, +0.889497 i & 1. \\ 1.63173\, +2.11204 i & 2.68721\, -0.889497 i & 1. \\ -1.26346 & -0.374424 & 1. \\ \end{pmatrix}$$
The exponential is given as $e^A = P e^D P^{-1}$
$$\begin{pmatrix} 1.63173\, -2.11204 i & 2.68721\, +0.889497 i & 1. \\ 1.63173\, +2.11204 i & 2.68721\, -0.889497 i & 1. \\ -1.26346 & -0.374424 & 1. \\ \end{pmatrix} \begin{pmatrix} e^{1.68721\, -0.889497 i} & 0. & 0. \\ 0. & e^{1.68721\, +0.889497 i} & 0. \\ 0. & 0. & \frac{1}{e^{1.37442}} \\ \end{pmatrix} \begin{pmatrix} 0.0491893\, -0.169309 i & 0.116796\, +0.160105 i & 0.10588\, -0.153969 i \\ 0.0491893\, +0.169309 i & 0.116796\, -0.160105 i & 0.10588\, +0.153969 i \\ -0.0983786 & -0.233592 & 0.78824 \\ \end{pmatrix} $$
Multiplying these out, we get
$$e^A = \begin{pmatrix} 1.56484 & 3.33455 & 2.90601 \\ -4.30322 & 5.86806 & -3.33455 \\ -1.11152 & 2.08019 & -0.372504 \\ \end{pmatrix}$$
Let's compare this with Wolfram Alpha's result.