There is no such nice formula for the cube root of a complex number with both real and imaginary parts nonzero. If you write out the real and imaginary parts of your cube root, you wind up solving cubic equations in one variable that have three irrational roots. This is the Casus Irreducibilis http://en.wikipedia.org/wiki/Casus_irreducibilis
In turn, for each of those cubics, Cardano's method leads you to finding the cube roots of other complex numbers. All very circular, and never gets anywhere.
The ten-operation sequence in the pseudo-code below is inspired by the computation of $ab - cd$ in the following paper:
Claude-Pierre Jeannerod, Nicolas Louvet, and Jean-Michel Muller, "Further Analysis of Kahan's Algorithm for the Accurate Computation of $2 \times 2$ Determinants", Mathematics of Computation, Vol. 82, No. 284, Oct. 2013, pp. 2245-2264 (online)
The algorithm makes use of the fused multiply-add, or fma, operation to compute the difference of two products robustly in just four operations:
w := d * c
e := fma (-d, c, w)   // e = w - d*c exactly (the rounding error of w)
f := fma (a, b, -w)
r := f + e            // r ≈ a*b - c*d
That algorithm is trivially applicable to the computation of the discriminant when computing the real solutions of a quadratic equation. The use of fma allows for efficient implementations of error-free transformations, which result in a head-tail (or double-native) representation of operands that in essence provides twice the native precision. The above algorithm provides an error bound of 1.5 ulp, provided no overflow or underflow occurs in intermediate computation.
For the computation of $p^{3} - q^{2}$, the above algorithm needs modification to incorporate the cubing, where it is preferable not to compute $p^{3}$ explicitly as a triple-native-precision operand. The sequence below is the most efficient version I managed to create; the worst-case error does not appear to exceed 17 ulps, as determined by reasonably extensive testing with test cases in which $p^{3}$ is very close in magnitude to $q^{2}$. Higher-accuracy versions, at potentially reduced performance, are undoubtedly possible.
In the code annotations, w:e, s:t, and u:v designate head-tail pairs of native-precision numbers that, when summed, represent the exact product of two native-precision operands.
When this pseudo-code is transformed into actual programming code processed by a compiler, it is essential that the compiler be configured to implement the sequence of operations as written. Many compilers re-associate floating-point expressions at high optimization levels, which may even be the default; the strictest floating-point evaluation mode available should be selected. With the Intel compilers, this is /fp:strict or -fp-model=strict, depending on the operating system platform.
// w:e = -q*q exactly
w := -q * q
e := fma (-q, q, -w)
// s:t = p*p exactly
s := p * p
t := fma (p, p, -s)
// s:t * p + w:e = s*p + w + t*p + e = s*p + w + u:v + e = f + u:v + e
f := fma (s, p, w)
u := t * p
v := fma (t, p, -u)
// sum all terms into final result
r := ((e + f) + u) + v
Best Answer
I didn't check your calculations, but it seems you ran into exactly the issue that originally made people turn their attention to complex numbers. Complex numbers were not "invented" in order to solve quadratic equations, as some people usually tell us, but to solve cubic equations. People discovered a formula for the roots of a cubic polynomial, but then later discovered that in some situations (actually, a lot of them) the calculation necessarily passes through complex numbers. In fact, this happens even in situations where all the roots are real.
If you do the calculations correctly, the complex numbers will cancel themselves in the end.
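A classic worked example of this cancellation is Bombelli's equation $x^{3} = 15x + 4$. Cardano's formula gives
$$x = \sqrt[3]{2 + \sqrt{-121}} + \sqrt[3]{2 - \sqrt{-121}} = \sqrt[3]{2 + 11i} + \sqrt[3]{2 - 11i}.$$
Since $(2 + i)^{3} = 2 + 11i$ and $(2 - i)^{3} = 2 - 11i$, the cube roots can be taken as $2 + i$ and $2 - i$, so
$$x = (2 + i) + (2 - i) = 4,$$
a real root of the original equation: the imaginary parts cancel exactly as described.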