$\DeclareMathOperator{\g}{\mathfrak g}$
A: Complex, Real and Quaternionic Representations
Let $\g$ be a semisimple Lie algebra over $\mathbb R$, and $V$ a (finite-dimensional, complex) representation of it, i.e. a finite dimensional $\mathbb C$-vector space with a homomorphism of real Lie algebras $\rho: \g \rightarrow \mathrm{End}_{\mathbb C} V$. There are several ways to define its conjugate representation $\overline V$, one of which would be to choose a basis of $V$, express the above endomorphisms as matrices, and then just complex-conjugate their entries.
As you correctly say, the existence of an equivalence of representations $V \simeq \bar V$ is a necessary, but not sufficient criterion for $V$ to have a real structure, i.e. for the existence of a real vector space $V_1$ together with a $\mathfrak g$-action such that $V$ identifies with the scalar extension ("complexification") $\mathbb C \otimes_{\mathbb R} V_1$ (or equivalently, the existence of a basis of $V$ such that the matrices of all $\rho(x), x \in \g$ have all real entries).
Namely, let's first assume that $V$ is irreducible. The existence of such an equivalence $V \simeq \bar V$ can also be expressed by saying there is a map $\alpha: V \rightarrow V$ which is antilinear (i.e. $\alpha(cv) = \overline{c} \alpha(v)$ for all $c \in \mathbb C, v \in V$), bijective, and commutes with all $\rho(x), x \in \g$. If that is the case, then $\alpha \circ \alpha$ is a (complex-linear!) automorphism of $V$, i.e. by Schur's Lemma, it's just multiplication by some scalar $b_\alpha \in \mathbb C^*$. Expressing $\alpha \circ \alpha \circ \alpha$ in two different ways we see that that scalar $b_\alpha$ must actually be real; since further we can scale $\alpha$ with any non-zero real number, which scales $\alpha \circ \alpha$ with a square, i.e. any positive real number, we basically have two possible cases: $b_\alpha = \color{red}\pm 1$ or in other words,
$$\alpha \circ \alpha = id_V \text{ or } -id_V.$$
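To spell out why $b_\alpha$ is real: evaluating $\alpha \circ \alpha \circ \alpha$ once as $(\alpha\circ\alpha)\circ\alpha$ and once as $\alpha\circ(\alpha\circ\alpha)$ gives

$$(\alpha \circ \alpha)(\alpha(v)) = b_\alpha\,\alpha(v), \qquad \alpha\big((\alpha\circ\alpha)(v)\big) = \alpha(b_\alpha v) = \overline{b_\alpha}\,\alpha(v),$$

and since $\alpha$ is surjective, comparing the two yields $b_\alpha = \overline{b_\alpha}$, i.e. $b_\alpha \in \mathbb R$.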
In the $+$ case, we do have a real structure. In fact, since $\alpha\circ \alpha = id$, the map $\alpha$ viewed as endomorphism of $V$ as real vector space has two eigenvalues, $\pm 1$, and if $V \simeq V_1 \oplus V_{-1}$ is the corresponding decomposition into eigenspaces, then $V_1$ gives a real subspace as wanted (I chose the terminology $V_1$ above for that!).
But in the $-$ case, there is no such real structure. (Rather, one can then give $V$ the structure of a vector space over $\mathbb H$, the Hamilton quaternions with standard basis $\{1,i,j,k\}$, and the map $\alpha$ corresponds to multiplication "from the other side" with, say, $j$.)
One can further show that these two cases are mutually exclusive, i.e. there cannot exist two different isomorphisms $\alpha: V \rightarrow \overline{V}$ such that for one of them $\alpha \circ \alpha$ is a positive and for the other one it is a negative scalar.
So in total there are three possible, mutually exclusive cases:
- $V \not \simeq \overline V$. We call such $V$ "complex".
- $\exists \alpha: V \simeq \overline V$ and $\alpha \circ \alpha = id$. We call such $V$ "real".
- $\exists \alpha: V \simeq \overline V$ and $\alpha \circ \alpha = -id$. We call such $V$ "quaternionic".
One can characterize these cases equivalently via the [1: non-existence of any /2: existence of a symmetric / 3: existence of an alternating] $\g$-invariant bilinear form on $V$, or also via the "commutant" of $V$, i.e. $\mathrm{End}_{U_{\mathbb R}(\g)}(V)$ being 1: $\mathbb C$ / 2: $\mathbb R$ / 3: $\mathbb H$. Confer Bourbaki, Lie Groups and Algebras, chapter 9, Appendix II.
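As a quick numerical illustration of the bilinear-form characterization (a sketch of mine, not from Bourbaki): the defining representation of $\mathfrak{su}_2$ on $\mathbb C^2$, which will reappear as the standard quaternionic example in part C, carries the invariant alternating form $B(v,w) = v_1 w_2 - v_2 w_1$. Invariance under the Lie algebra means $B(xv,w) + B(v,xw) = 0$, i.e. $x^T J + J x = 0$ for the Gram matrix $J$ of $B$:

```python
import numpy as np

# Gram matrix of the alternating (symplectic) form B(v, w) = v1*w2 - v2*w1
J = np.array([[0, 1], [-1, 0]], dtype=complex)

# a generic element of su(2), as in the matrix description used below
a, b, c = 0.4, 1.1, -0.6
g = np.array([[a * 1j, b + c * 1j], [-b + c * 1j, -a * 1j]])

# infinitesimal invariance: B(gv, w) + B(v, gw) = 0  <=>  g^T J + J g = 0
assert np.allclose(g.T @ J + J @ g, 0)
```

This works precisely because the matrices are traceless, so the check passes for every choice of $a, b, c$.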
I point out that I have seen some sources which use "pseudoreal" for what I called "quaternionic", while other sources use "pseudoreal" for both cases 2 and 3. To make things worse, as soon as we remove our assumption that $V$ is irreducible, and/or look at representations of groups, there exist different, non-interchangeable nomenclature conventions for these things. Actually, even the above seems not to be generally agreed upon as soon as our $\g$ is not compact. This is highly unfortunate. See e.g. answers and comments to https://mathoverflow.net/q/47492/27465 and https://mathoverflow.net/q/323969/27465.
B: Application to your question
Well, that means that if you are convinced that you have a map (flipping the tensors) which gives an isomorphism from your representation to its conjugate, then all you have to do is to see that if you translate that map into an $\alpha$ as above, and compose it with itself, you get (a positive real number times) the identity map, and not its negative. Seems highly likely from your candidate. I leave the details to you and use the rest of this answer to write down more theory for reference.
C: The standard example $\mathfrak{su}_2$; and the general compact case
Back to the three cases in part A. The basic example (for which almost all nomenclatures agree, and which takes us far!) for the distinction between cases 2 and 3 is the real Lie algebra $\g = \mathfrak{su}_2$, the compact real form of $\mathfrak{sl}_2$. As one learns, up to equivalence, the irreducible representations of this are indexed by positive integers, their dimension (mathematicians), or by non-negative half-integers, their "spin" (physicists). In fact, up to iso there is
- one with $\mathrm{dim}(V) = 1$ (the trivial one),
- one with $\dim V = 2$ which can be realized by letting $g \in \mathfrak{su}_2 = \{\pmatrix{ai &b+ci\\-b+ci&-ai}: a,b,c \in \mathbb R\}$ act on $V:=\mathbb C^2$ by matrix multiplication,
- one with $\dim V = 3$ which can be realized by letting $g \in \mathfrak{su}_2 \simeq \mathfrak{so}_3 = \{\pmatrix{0 &x &y\\-x&0&z\\-y&-z&0}: x,y,z \in \mathbb R\}$ act on $V:=\mathbb C^3$ by matrix multiplication,
- and then one each for $\dim V = 4,5,6, ...$ which are not that easily written down with matrices, but instead via general $\mathfrak{sl}_2$-representation theory with ladder operators etc. tied together.
The way I wrote the first three examples, it's obvious that for $\dim V =1,3$ we are in the case that $V$ is real (for $\dim V=3$, I kind of cheated by assuming the isomorphism $\mathfrak{su}_2 \simeq \mathfrak{so}_3$ known, and the matrices in $\mathfrak{so}_3$ already have all real entries). The case $\dim V=2$ however is the standard example for a quaternionic representation. Indeed, check that $\alpha: \pmatrix{c_1\\c_2} \mapsto \pmatrix{-\overline{c_2}\\ \overline{c_1}}$ defines a bijective, antilinear, $\g$-equivariant map with $\alpha \circ \alpha = -id_V$.
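The suggested check on $\alpha$ can also be done numerically; here is a minimal sketch (the names are mine) verifying antilinearity, $\g$-equivariance for a sample element, and $\alpha \circ \alpha = -id_V$:

```python
import numpy as np

def alpha(v):
    # the candidate antilinear map (c1, c2) -> (-conj(c2), conj(c1))
    return np.array([-np.conj(v[1]), np.conj(v[0])])

# a generic element of su(2) in the matrix form given above
a, b, c = 0.3, -1.2, 0.7
g = np.array([[a * 1j, b + c * 1j], [-b + c * 1j, -a * 1j]])

rng = np.random.default_rng(0)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
z = 2.0 - 3.0j

assert np.allclose(alpha(z * v), np.conj(z) * alpha(v))  # antilinear
assert np.allclose(alpha(g @ v), g @ alpha(v))           # commutes with the action
assert np.allclose(alpha(alpha(v)), -v)                  # alpha∘alpha = -id
```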
Cool fact 1: It can be shown that the parity distinction continues, i.e. of the irreducible $\mathfrak{su}_2$-representations $V$, the ones with odd $\dim V$ are real, while the ones with even $\dim V$ are quaternionic. (Depending on whether you're in math or physics jargon, this will translate into either the parity of your classifying parameter, or that parameter being an integer or half-integer. But because that parameter is usually $\dim V-1$ (or $\dfrac{\dim V -1}{2}$), chosen so that the trivial representation has parameter $0$, the parity distinction will be "the other way around": real for even (or integer), quaternionic for odd (or half-integer).)
Cool fact 2: It can be shown that actually, this principle generalises to all compact (semi)simple $\mathfrak g$, except that now also the case of "truly complex" reps can occur. Namely, this is the content of Bourbaki's Lie Groups and Algebras, chapter 9, §7 no.2 proposition 1, cf. the answer to What property of the root system means a Lie algebra has complex structure? :
For $\mathfrak g$ the compact (!) real form of a semisimple Lie algebra, the irreducible representation $V$ of highest weight $\lambda$ (for $\lambda$ dominant w.r.t. a chosen set of simple roots $\Delta$) is
- complex if and only if $$-w_0(\lambda) \neq \lambda$$ where $w_0$ is the longest element of the Weyl group (w.r.t. that set of simple roots $\Delta$);
- real if $-w_0(\lambda) = \lambda$, and a certain invariant is even;
- quaternionic if $-w_0(\lambda) = \lambda$, and a certain invariant is odd.
Of course, that "certain invariant" generalises the integer (or half-integer) parameter from the $\mathfrak{su}_2$-case, has several definitions which are equivalent (although maybe not obviously so), and in general needs a little computation. One way to define it is as twice the sum of coefficients of $\lambda$ written in the chosen root basis $\Delta$,
$$2 \cdot \sum c_\alpha \text{ where } \lambda = \sum_{\alpha \in \Delta} c_\alpha \alpha$$
(The $2$ is there to make the invariant an integer, matching the math convention in the $\mathfrak{su}_2$ case (where the possible highest weights $\lambda$ for $\Delta= \{\alpha\}$ are $0, \frac12 \alpha, \alpha, \frac32 \alpha, ...$).)
D: But we are not in the compact case!
And this is bad, because as seen in the MathOverflow posts linked in A, different people use different nomenclatures now. Also, people mistakenly use the criteria from part C for this case, where they are in general no longer valid. There are actually two questions on this site where the accepted answer seems to fall into this trap, which has made me write lengthy answers outlining what I think is a correct approach: See If fundamental and antifundamental representations of a Lie algebra are inequivalent, can we deduce that all conjugate representations are? and Conjugate Representations of Lie Algebra of Lorentz Group.
In a nutshell, the issue is that if $\mathfrak g$ is not compact, then complex conjugation does not act on the weight lattice via $-w_0$, the element that plays the crucial role in part C.
Indeed, Jacques Tits undertook a vast generalisation of the idea of part C, not only for arbitrary forms, but also for arbitrary ground fields (not just $\mathbb R$), in his article Représentations linéaires irréductibles d'un groupe réductif sur un corps quelconque. I do not claim to understand half of it, but the gist is that we have to take into account how the full Galois group operates on our weights, and then on the set of those dominant weights $\lambda$ which are "sufficiently stable" under Galois (this generalises the $-w_0(\lambda) =\lambda$ criterion) we have a map going to the Brauer group of the ground field $k$ (this generalises the parameter which decides between $\mathbb R$ and $\mathbb H$, the only elements of the Brauer group of $\mathbb R$).
Now looking into the end of section 5 and the beginning of section 6 in that article, it seems too good to be true, but I would currently bet he shows there that for any quasi-split form (and such is every $\mathfrak{so}(q-2,q)$), if the highest weight of an irrep is stable under Galois (here: similar to its conjugate) at all, then the parameter gives the trivial class in the Brauer group (here: $\mathbb R$, not $\mathbb H$, i.e. case 3 from part A cannot occur at all). I hope the computation in part B verifies this. It could then be appended to the calculations in the answer to $SO(p,q)$ Fundamental Weights?, which deal with more (but significantly different) irreps of $\mathfrak{so}_{p,q}$.
Best Answer
There is indeed a general statement that for each representation $\rho$ of a compact semisimple real Lie algebra $\mathfrak{g}$ on a finite dimensional $\mathbb C$-vector space $V$, there exists a non-degenerate hermitian form on $V$ which is invariant with respect to the $\mathfrak g$-action; equivalently, all matrices $\rho(x)$ ($x\in \mathfrak g$) are antihermitian. This is proven along the lines of your "Expected answer" e.g. in Bourbaki's volume on Lie Groups and Lie Algebras -- compare in particular vol. IX §1 no.1, where the statement is translated to the corresponding one about a $G$-invariant form for the corresponding Lie group $G$ (which now is given by hermitian matrices), and the existence of such a form, in turn, is proven with averaging over a Haar measure.
Note that often in physicists' notation, everything on the Lie algebra level is multiplied through with the imaginary unit $i$, in which case one would have hermitian matrices instead. However, you say that for you, $su(2)$ consists of antihermitian matrices:
$$ su(2) = \lbrace \pmatrix{ai & b+ci\\-b+ci & -ai} : a,b,c \in \mathbb R \rbrace,$$
and since this already shows the statement for the defining representation on $V= \mathbb C^2$, we should stick with those. In the following I'd just like to point out that for this basic case $\mathfrak{g}=su(2)$, everything can be shown explicitly and more precisely.
Namely, the irreps of $su(2)$ you are interested in are in one-to-one correspondence with the irreps $\sigma$ of the complexification $su(2)\otimes \mathbb C \simeq sl_2(\mathbb C)$, and the correspondence is given by just restricting such an irrep to $su(2) \subset sl_2(\mathbb C)$. Concretely, one commonly looks at the basis
$$ h=\pmatrix{1&0\\0&-1}, \quad x=\pmatrix{0&1\\0&0}, \quad y= \pmatrix{0&0\\-1&0}$$
of $sl_2(\mathbb C)$, sees how these act as matrices in the representation, and from these gets e.g. the matrices corresponding to the following basis of $su(2)$:
$$ih =\pmatrix{i&0\\0&-i}, \quad x+y= \pmatrix{0&1\\-1&0}, \quad ix-iy=\pmatrix{0&i\\i&0}.$$
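A quick sanity check of these bases (a sketch of mine, just confirming what the displayed matrices claim): the commutation relations of $h, x, y$ and the antihermitian-ness of the three $su(2)$ elements.

```python
import numpy as np

h = np.array([[1, 0], [0, -1]], dtype=complex)
x = np.array([[0, 1], [0, 0]], dtype=complex)
y = np.array([[0, 0], [-1, 0]], dtype=complex)

comm = lambda a, b: a @ b - b @ a
# note: with the sign chosen for y here, [x, y] = -h (instead of the usual +h)
assert np.allclose(comm(h, x), 2 * x)
assert np.allclose(comm(h, y), -2 * y)
assert np.allclose(comm(x, y), -h)

# the three listed su(2) basis elements are antihermitian
for m in (1j * h, x + y, 1j * x - 1j * y):
    assert np.allclose(m, -m.conj().T)
```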
Irreps of $sl_2(\mathbb C)$, in turn, are well-known and should be listed in literally every set of notes or book about representations of Lie algebras: For each $n \ge 1$ there is up to equivalence one such irrep $(\sigma_n, V_n \simeq \mathbb C^n)$ of dimension $n$; it is often given by explicitly defining operators $X, Y, H$ corresponding to $x,y,h$. These operators are rarely written down as matrices, but it's easy to do so, and the whole point of the "weight decomposition" which these sources talk about is that there is a basis $v_1, ..., v_n$ of $V_n$ such that in this basis, $h$ acts via (i.e. $\sigma_n(h)$ is given by)
$$H = \pmatrix{n-1&0&\cdots&0&0\\ 0&n-3&\cdots&0&0\\ 0&0&\ddots&0&0\\ 0&0&\cdots&3-n&0\\ 0&0&\cdots&0&1-n}.$$
In particular, when we restrict to $su(2)$, the matrix $iH$ (via which $ih \in su(2)$ acts) is already antihermitian:
$$iH = \pmatrix{(n-1) i&0&\cdots&0&0\\ 0&(n-3)i&\cdots&0&0\\ 0&0&\ddots&0&0\\ 0&0&\cdots&(3-n)i&0\\ 0&0&\cdots&0&(1-n)i}.$$
The other two operators $X$ and $Y$ might look differently with different normalisations for the basis vectors. E.g. the way Bourbaki defines them (loc. cit. vol. VIII §1, first abstractly in no.2 and then with homogeneous polynomials in no.3), we have
$$X = \pmatrix{0&n-1&0&\cdots&0&0\\ 0&0&n-2&\cdots&0&0\\ 0&0&0&\cdots&0&0\\ 0&0&0&\ddots&2&0\\ 0&0&0&\cdots&0&1\\ 0&0&0&\cdots&0&0}, Y = \pmatrix{0&0&0&\cdots&0&0\\ -1&0&0&\cdots&0&0\\ 0&-2&0&\cdots&0&0\\ 0&0&0&\ddots&0&0\\ 0&0&0&\cdots&0&0\\ 0&0&0&\cdots&1-n&0}.$$
At first that looks disheartening because even though it gives back the original matrices for $n=2$, for all $n \ge 3$ the matrices $X+Y$ and $iX-iY$ are not yet antihermitian. However, now it's an exercise in linear algebra: For any $1 \le k \le n$ and $\lambda_k \in \mathbb C^*$, scaling the basis vector $e_k$ to $\lambda_k e_k$ will not change the matrix $H$, but it does change the matrices $X$ (whose $k$-th column gets multiplied with $\lambda_k$, and whose $(k+1)$-th column with $\lambda_k^{-1}$) and $Y$ (whose $(k-1)$-th column gets multiplied with $\lambda_k^{-1}$, and whose $k$-th column with $\lambda_k$). Now write down the equations and find that you can always choose $(\lambda_1, ..., \lambda_n)$ such that the new matrices $X$, $Y$ are real and negative transposes of each other, which makes $X+Y$ and $iX-iY$, and hence the entire representation, antihermitian with respect to the standard hermitian product on the new basis vectors $(\lambda_1 e_1, ..., \lambda_n e_n)$.
Concretely, e.g. for $n=4$ I get $\lambda_1 = \lambda_4=1, \lambda_2 = \lambda_3 = \sqrt3^{-1}$ and thus
$$X = \pmatrix{0&\sqrt 3&0&0\\ 0&0&2&0\\ 0&0&0&\sqrt 3\\ 0&0&0&0}, \quad Y = \pmatrix{0&0&0&0\\ -\sqrt3 &0&0&0\\ 0&-2&0&0\\ 0&0&-\sqrt3&0}$$
which makes
$$X+Y = \pmatrix{0&\sqrt 3&0&0\\ -\sqrt 3&0&2&0\\ 0&-2&0&\sqrt 3\\ 0&0&-\sqrt 3&0}, iX-iY = \pmatrix{0&i\sqrt 3&0&0\\ i\sqrt 3&0&2i&0\\ 0&2i&0&i\sqrt 3\\ 0&0&i\sqrt 3&0}$$
nicely antihermitian.
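The rescaling exercise can be sketched in code for general $n$. The helper names and the closed form $\lambda_{k+1} = \lambda_k\sqrt{k/(n-k)}$ are mine, obtained by solving the equations $(n-k)\,\lambda_{k+1}/\lambda_k = k\,\lambda_k/\lambda_{k+1}$ mentioned above; for $n=4$ it reproduces the matrices just displayed.

```python
import numpy as np

def bourbaki_matrices(n):
    # Bourbaki-normalised operators on the n-dim irrep of sl_2:
    # superdiagonal of X is (n-1, ..., 1), subdiagonal of Y is (-1, ..., -(n-1))
    X = np.diag(np.arange(n - 1, 0, -1, dtype=float), 1)
    Y = np.diag(-np.arange(1, n, dtype=float), -1)
    H = np.diag(np.arange(n - 1, -n, -2, dtype=float))
    return H, X, Y

def scaling(n):
    # lambda_1 = 1, lambda_{k+1} = lambda_k * sqrt(k / (n-k))
    lam = [1.0]
    for k in range(1, n):
        lam.append(lam[-1] * np.sqrt(k / (n - k)))
    return np.array(lam)

n = 4
H, X, Y = bourbaki_matrices(n)
lam = scaling(n)
D, Dinv = np.diag(lam), np.diag(1.0 / lam)
Xp, Yp = Dinv @ X @ D, Dinv @ Y @ D   # change of basis e_k -> lambda_k * e_k

# X' and Y' are real negative transposes of each other ...
assert np.allclose(Xp, -Yp.T)
# ... so X'+Y' and i(X'-Y') are antihermitian, as is iH
for M in (Xp + Yp, 1j * (Xp - Yp), 1j * H):
    assert np.allclose(M, -M.conj().T)

# for n = 4 the superdiagonal of X' is (sqrt(3), 2, sqrt(3)), as above
print(np.diag(Xp, 1))
```

The same assertions pass for every $n$, in line with the claim that this works in general.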
This kind of rescaling to make the operators obviously antihermitian is rarely ever done*; one reason is that, easy as it is, it uses existence of square roots in $\mathbb R$, whereas the normalisation that Bourbaki uses works over any field, or actually, over $\mathbb Z$.
*ADDED: My claim that this is "rarely ever done" is wrong. This question made me look at the Wikipedia article on "Spin" in quantum mechanics, and in the section about "Higher Spins" I recognise exactly the scaled matrices I cooked up above (my example $n=4$ being exactly the case of spin $\frac32$), except for the physicists' convention of multiplying everything through with the imaginary unit $i$ (there might be further minor sign flips due to $Y \mapsto -Y$ and/or $-iH$ instead of $iH$ or something, but the idea is definitely the same).