[Math] Approximation of a polynomial with fractional power

Tags: approximation, polynomials

I have a polynomial whose roots I need to find; the major difficulty is that it has fractional exponents. I have made an approximation, and I would like to get some idea of the error I will have to deal with. (If there is a better way of doing this, I am all ears.)

The original polynomial is $p(x)=a x^{2.56}+bx^{1.78}+cx+d$, and I need to solve $p(x) = 0$.

First, I make the approximating polynomial $p_2(x)=a x^{2.5}+bx^{1.75}+cx+d \approx p(x)$.

Then I let $\alpha = x^{0.25}$, giving me

$p_2(\alpha)=a \alpha^{10}+b\alpha^{7}+c\alpha^4+d$

I then find the zeros of this polynomial using the Jenkins–Traub algorithm.
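For concreteness, here is a minimal sketch of this route in Python. The coefficient values are hypothetical placeholders, and numpy.roots computes the zeros via companion-matrix eigenvalues rather than Jenkins–Traub, but it serves the same purpose:

```python
import numpy as np

# Hypothetical coefficients; replace with the real values.
a, b, c, d = -1.0, 3.0, -2.0, -0.4

# p2(alpha) = a*alpha^10 + b*alpha^7 + c*alpha^4 + d,
# with coefficients listed in descending powers of alpha.
coeffs = [a, 0, 0, b, 0, 0, c, 0, 0, 0, d]

alphas = np.roots(coeffs)

# Only real, nonnegative alpha give a valid x = alpha^4 >= 0.
real_alphas = alphas[np.abs(alphas.imag) < 1e-9].real
xs = sorted(alpha**4 for alpha in real_alphas if alpha >= 0)
print(xs)
```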

Best Answer

We are given the nonlinear equation $f(x) = 0$, where $f : [0,\infty) \rightarrow \mathbb{R}$ is given by \begin{equation} f(x) = a x^p + bx^q + cx + d, \quad p = 2.56, \quad q=1.78. \end{equation} It is known that $a$, $c$, $d$ are (strictly) negative and that $b$ is (strictly) positive. We first consider the question of the existence of solutions. It is clear that $f$ is continuous, and $f(0) = d < 0$. Since $a < 0$ implies \begin{equation} f(x) \rightarrow -\infty, \quad x \rightarrow \infty, \end{equation} both "ends" are negative and we cannot immediately conclude that $f$ has a zero. We therefore seek to determine the range of $f$ in the standard manner. We have \begin{equation} f'(x) = apx^{p-1} + bq x^{q-1} + c = (2.56) a x^{1.56} + (1.78)b x^{0.78} + c. \end{equation} The fact that \begin{equation} 1.56 = 2 \cdot 0.78 \end{equation} is hardly a coincidence, and it is worth investigating which property of the original problem gave rise to it.

It is vital that you check that this equality actually holds, i.e. that $p$ and $q$ are exact and not the result of rounding.

With the substitution \begin{equation} y = x^{0.78}, \end{equation} it is clear that $f'(x)=0$ if and only if \begin{equation} (2.56) a y^2 + (1.78)b y + c = 0. \end{equation} Given specific values of $a$, $b$ and $c$, it is (almost) trivial to determine whether $f'$ has any zeros, but I urge you to consult Higham's book "Accuracy and Stability of Numerical Algorithms" if you have never considered catastrophic cancellation in this setting before. The stable solution of quadratic equations is discussed in either Chapter 1 or Chapter 2.
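As a minimal sketch of the cancellation-safe formula discussed there, assuming only $A \neq 0$ (here $A = 2.56\,a$, $B = 1.78\,b$, $C = c$, and only roots $y \ge 0$ are admissible since $y = x^{0.78}$ with $x \ge 0$):

```python
import math

def stable_quadratic_roots(A, B, C):
    """Real roots of A*y**2 + B*y + C = 0, avoiding the catastrophic
    cancellation the textbook formula suffers when B*B >> 4*A*C."""
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return []                      # no real roots
    # q = -(B + sign(B)*sqrt(disc))/2 adds quantities of like sign,
    # so it never cancels; the two roots are then q/A and C/q.
    q = -0.5 * (B + math.copysign(math.sqrt(disc), B))
    if q == 0.0:                       # only when B = C = 0
        return [0.0]
    return sorted({q / A, C / q})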

Solving the equation $f'(x)=0$ allows you to break the interval $[0,\infty)$ into subintervals on which $f$ is either strictly increasing or strictly decreasing. By evaluating $f$ at the break points, i.e. the roots of $f'$, you will be able to detect any sign changes. By continuity, this identifies intervals which contain exactly one root each; a sketch of this bracketing step follows below.
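Here is one way to do it in Python, reusing the hypothetical coefficients from the sketch above (np.roots stands in for the cancellation-safe quadratic solver):

```python
import numpy as np

a, b, c, d = -1.0, 3.0, -2.0, -0.4   # hypothetical values; a, c, d < 0 < b

def f(x):
    return a * x**2.56 + b * x**1.78 + c * x + d

# Critical points of f: roots of the quadratic in y = x**0.78.
ys = np.roots([2.56 * a, 1.78 * b, c])
ys = ys[np.abs(ys.imag) < 1e-12].real
crit = sorted(y ** (1.0 / 0.78) for y in ys if y >= 0)

# f is strictly monotone between consecutive break points, so a sign
# change there brackets exactly one root.
pts = [0.0] + crit
vals = [f(x) for x in pts]
brackets = [(pts[i], pts[i + 1])
            for i in range(len(pts) - 1)
            if vals[i] * vals[i + 1] < 0]

# f(x) -> -inf as x -> inf (since a < 0), so if f is still positive at
# the last break point, one more root lies to its right; double the
# right endpoint until the sign flips.
if vals[-1] > 0:
    hi = 2.0 * max(pts[-1], 1.0)
    while f(hi) > 0:
        hi *= 2.0
    brackets.append((pts[-1], hi))

print(brackets)
```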

For the sake of robustness I recommend the bisection algorithm. When speed is of the essence, I always recommend a hybrid of the bisection algorithm and the secant method: in this manner you get both the rapid convergence of the secant method and the safety of the bisection algorithm.
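A minimal sketch of such a hybrid, assuming a bracket over which $f$ changes sign (as produced by the bracketing step above): it takes the secant step whenever that step lands inside the bracket, and forces a bisection step whenever the bracket fails to halve, so bisection's convergence guarantee is preserved.

```python
def bisect_secant(f, lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi], where f(lo) and f(hi) differ in
    sign. Secant steps give fast local convergence; guarded bisection
    steps guarantee the bracket shrinks, hence convergence."""
    flo, fhi = f(lo), f(hi)
    if flo * fhi > 0.0:
        raise ValueError("endpoints must bracket a root")

    def update(x, fx):
        # Replace whichever endpoint keeps the sign change inside.
        nonlocal lo, hi, flo, fhi
        if flo * fx < 0.0:
            hi, fhi = x, fx
        else:
            lo, flo = x, fx

    while hi - lo > tol:
        width = hi - lo
        x = hi - fhi * (hi - lo) / (fhi - flo)   # secant candidate
        if not (lo < x < hi):                    # outside the bracket:
            x = 0.5 * (lo + hi)                  # bisect instead
        fx = f(x)
        if fx == 0.0:
            return x
        update(x, fx)
        # Safeguard: if the bracket did not at least halve, take one
        # bisection step so the method never stagnates.
        if hi - lo > 0.5 * width:
            m = 0.5 * (lo + hi)
            fm = f(m)
            if fm == 0.0:
                return m
            update(m, fm)
    return 0.5 * (lo + hi)
```

An off-the-shelf alternative is scipy.optimize.brentq, a polished relative of the same idea (bisection combined with secant and inverse quadratic interpolation), used as `root = brentq(f, lo, hi)`.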

If this is part of a "serious" code, which will run billions of times or more, and you need to ensure that it works subject to the limitations of floating-point arithmetic, then do not hesitate to contact me via email. I cannot make any promises, but it could be a fun problem.

I foresee no difficulty in reaching a solution in less than a millisecond; I cannot imagine that we would need more than a few thousand CPU cycles.

It is possible to view $f$ as a polynomial, but not in the variable $x$. Specifically, if $x=y^{50}$, then \begin{equation} f(x) = a x^{2.56} + bx^{1.78} + cx + d = a y^{128} + b y^{89} + c y^{50} + d. \end{equation} This circumvents the need for any approximations, but I see no real advantage to this approach. We still have to determine the range of $f$ as well as the intervals which contain the root(s). Computing powers requires calls to the exponential and logarithm functions, so we are no better off with this form of $f$ than with the original.
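Should anyone want to try it anyway, here is a sketch with the same hypothetical coefficients as above; note that a relative error in a computed root $y$ is amplified roughly fifty-fold by forming $x = y^{50}$, which underlines the point:

```python
import numpy as np

a, b, c, d = -1.0, 3.0, -2.0, -0.4   # same hypothetical values as above

# f(y**50) = a*y**128 + b*y**89 + c*y**50 + d: place the four nonzero
# coefficients in a descending-power array of length 129.
coeffs = np.zeros(129)
coeffs[0] = a            # y**128
coeffs[128 - 89] = b     # y**89
coeffs[128 - 50] = c     # y**50
coeffs[128] = d          # constant term

ys = np.roots(coeffs)
# Only real, nonnegative y correspond to x = y**50 >= 0.
real_ys = ys[np.abs(ys.imag) < 1e-8].real
print(sorted(y**50 for y in real_ys if y >= 0))
```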
