[Math] Can the Taylor series be used to get polynomial roots?

derivatives, polynomials, roots, taylor-expansion

I'm using this method:

First, write the polynomial in this form:
$$a_nx^n+a_{n-1}x^{n-1}+\cdots+a_2x^2+a_1x=c$$
Let the LHS of this expression be the function $f(x)$. I'll write the Taylor series of $f^{-1}(x)$ around $x=0$ and then put $x=c$ into it to get $f^{-1}(c)$, which will be the value of $x$.

Since $f^{-1}(0)=0$ here, the first term of our Taylor series is $0$.

Now, the only thing that remains is calculating the derivatives of $f^{-1}(x)$ at $x=0$.

I'm using the fact that $$\frac{d(f^{-1}(x))}{dx}=\frac{1}{f'(f^{-1}(x))}$$

By differentiating this equation, we can get the second derivative of $f^{-1}(x)$ as:
$$\frac{d^2(f^{-1}(x))}{dx^2}=-\frac{1}{(f'(f^{-1}(x)))^2}\cdot f''(f^{-1}(x))\cdot (f^{-1})'(x)$$

Similarly, we can get the other derivatives by further differentiation of this equation. Then we can evaluate all the derivatives at $x=0$ to get the Taylor series of $f^{-1}(x)$ and evaluate it at $x=c$ to get the value of $x$.
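For concreteness, here is a small sympy sketch of this process; the cubic truncation of $f$ and the variable names are my own choices, with $y$ standing for $f^{-1}(x)$:

```python
import sympy as sp

x, y = sp.symbols('x y')              # y stands for f^{-1}(x)
a1, a2, a3 = sp.symbols('a1 a2 a3')

# f as in the question, truncated to a cubic for the demo
f = a1*y + a2*y**2 + a3*y**3
fp = sp.diff(f, y)

# (f^{-1})'(x) = 1/f'(f^{-1}(x)); every higher derivative is again an
# expression in y, and d/dx h(y) = h'(y) * (f^{-1})'(x) = h'(y)/f'(y)
derivs = [1 / fp]
for _ in range(3):
    derivs.append(sp.diff(derivs[-1], y) / fp)

# evaluate at x = 0, where y = f^{-1}(0) = 0
at0 = [sp.simplify(d.subs(y, 0)) for d in derivs]
taylor = sum(c / sp.factorial(k + 1) * x**(k + 1) for k, c in enumerate(at0))
print(sp.expand(taylor))   # x/a1 - a2*x**2/a1**3 + ...
```

The first two coefficients it prints, $\frac{1}{a_1}$ and $-\frac{a_2}{a_1^3}$ (the derivatives $\frac{1}{a_1}$ and $-\frac{2a_2}{a_1^3}$ divided by $1!$ and $2!$), match the hand computation below.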

I don't know a formula for $f^{-1}(x)$, but I do know the value of $f^{-1}(x)$ at $x=0$. After working out all the derivative formulas, all I have to do at the end is evaluate each expression at $x=0$, and I have the value of $f^{-1}(x)$ at $x=0$.
For example, $$(f^{-1})'(x)=\frac{d(f^{-1}(x))}{dx}=\frac{1}{f'(f^{-1}(x))}$$
$$=\frac{1}{na_{n}(f^{-1}(x))^{n-1}+(n-1)a_{n-1}(f^{-1}(x))^{n-2}+\cdots+2a_2f^{-1}(x)+a_1}$$
which gives $\frac{1}{a_1}$ at $x=0$, since $f^{-1}(0)=0$.
Similarly,
$$\frac{d^2(f^{-1}(x))}{dx^2}=-\frac{1}{(f'(f^{-1}(x)))^2}\cdot f''(f^{-1}(x))\cdot (f^{-1})'(x)$$

We already have the value of the first derivative at $x=0$, and $f''(0)=2a_2$, so substituting here gives
$$\left.\frac{d^{2}}{dx^{2}}f^{-1}(x)\right|_{x=0}=-\frac{1}{a_1^{2}}\cdot 2a_2\cdot \frac{1}{a_1}=-\frac{2a_2}{a_1^{3}}$$
I think this process can be continued to get more derivatives by the product rule.
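As a numeric sanity check, here is what the truncated series gives for a concrete quadratic; the example $x^2+3x=1$ (so $a_2=1$, $a_1=3$, $c=1$) is my own, and the third and fourth coefficients come from carrying the same differentiation two steps further:

```python
import math

a1, a2, c = 3.0, 1.0, 1.0   # example: x^2 + 3x = 1

# first four Taylor coefficients of f^{-1}, from repeated differentiation
terms = [
    c / a1,
    -a2 * c**2 / a1**3,
    2 * a2**2 * c**3 / a1**5,
    -5 * a2**3 * c**4 / a1**7,
]

partial = 0.0
for t in terms:
    partial += t
    print(partial)               # 0.3333..., 0.2963..., 0.3045..., 0.3022...

print((-3 + math.sqrt(13)) / 2)  # exact root: 0.302775...
```

The partial sums close in on the exact root, which suggests the process does what it should, at least for $c$ small enough that the series converges.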

1.Is this method correct?

2.Can something be done to make it better and remove the limitations (if there are any)?

UPDATE: If I'm not wrong, this method only works when $a_1\neq 0$, since every derivative formula divides by $f'(f^{-1}(0))=a_1$. Is there some way to remove that limitation?

UPDATE: Oh, I just figured out that we can obtain the Taylor series of the inverse of a polynomial around any point $x=a$ by this method. I also just found out about the Lagrange inversion theorem, which is likewise about getting Taylor series of inverse functions. I didn't understand much, but it looked like the same series as mine except for the coefficients. Are the coefficients also the same? Have I been doing the same thing?
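For comparison, here is a small sympy sketch (again with $f$ truncated to a cubic, my own choice) that computes the inverse-series coefficients straight from the Lagrange inversion formula $[x^n]\,f^{-1}(x)=\frac{1}{n}\,[y^{n-1}]\bigl(y/f(y)\bigr)^n$:

```python
import sympy as sp

y = sp.symbols('y')
a1, a2, a3 = sp.symbols('a1 a2 a3')

f = a1*y + a2*y**2 + a3*y**3   # same truncated f as before

def inverse_coeff(n):
    # [x^n] f^{-1}(x) = (1/n) * [y^(n-1)] (y/f(y))^n
    expansion = sp.series(sp.cancel((y / f)**n), y, 0, n).removeO()
    return sp.simplify(expansion.coeff(y, n - 1) / n)

for n in (1, 2, 3):
    print(n, inverse_coeff(n))
# 1: 1/a1
# 2: -a2/a1**3   (the second derivative -2*a2/a1**3 divided by 2!)
# 3: (2*a2**2 - a1*a3)/a1**5
```

These agree with the derivatives computed above after dividing by $n!$, which supports the guess that the two methods produce the same series.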

Best Answer

\begin{eqnarray*} f(x)=x+a_1x^2+a_2x^3+a_3x^4+\cdots \end{eqnarray*} The inverse function needs to satisfy $f(f^{[-1]}(x))=x$, so \begin{eqnarray*} f^{[-1]}(x)=x-a_1x^2+(2a_1^2-a_2)x^3+(-5a_1^3+5a_1a_2-a_3)x^4+\cdots \end{eqnarray*}
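A short sympy sketch (my own illustration) recovers these coefficients by imposing $f(f^{[-1]}(x))=x$ order by order with undetermined coefficients:

```python
import sympy as sp

x = sp.symbols('x')
a1, a2, a3 = sp.symbols('a1 a2 a3')
b2, b3, b4 = sp.symbols('b2 b3 b4')

f = lambda t: t + a1*t**2 + a2*t**3 + a3*t**4
g = x + b2*x**2 + b3*x**3 + b4*x**4          # ansatz for the inverse

# require f(g(x)) - x to vanish through order x^4
expr = sp.expand(f(g) - x)
eqs = [expr.coeff(x, k) for k in (2, 3, 4)]
print(sp.solve(eqs, [b2, b3, b4]))
# b2 = -a1,  b3 = 2*a1**2 - a2,  b4 = -5*a1**3 + 5*a1*a2 - a3
```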

Now set $a_2=0$ and $a_3=0$ etc... \begin{eqnarray*} f^{[-1]}(x)=x-a_1x^2+2a_1^2x^3-5a_1^3x^4 +14a_1^4x^5-\cdots \end{eqnarray*} This sum is also \begin{eqnarray*} f^{[-1]}(x)=\frac{\sqrt{1+4a_1x}-1}{2a_1} \end{eqnarray*} This is overkill for deriving the formula for the solution of a quadratic, but if you now allow $a_2$ to be nonzero you will have a series solution for the cubic equation, and so on.
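As a quick check, expanding this closed form in sympy recovers the Catalan-number series above:

```python
import sympy as sp

x, a1 = sp.symbols('x a1')

# closed form for the inverse of f(x) = x + a1*x^2
closed = (sp.sqrt(1 + 4*a1*x) - 1) / (2*a1)

print(sp.series(closed, x, 0, 6))
# x - a1*x**2 + 2*a1**2*x**3 - 5*a1**3*x**4 + 14*a1**4*x**5 + O(x**6)
```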

Your method will work ... it is just Lagrange inversion in disguise!
