What’s wrong with this induction? (Runtime analysis of standard polynomial long division)

Tags: fake-proofs, induction, polynomials

In my studies for my Bachelor's thesis, I've gone through a runtime analysis of plain vanilla polynomial long division, i.e. I wanted to prove the statement:

Let $f,\,g \in F[X]$ with $g\neq 0$, where $F$ is a field. Define $n:=\deg (f)\vee \deg (g)$. Then it takes $O(n^2 + 1)$ field operations to determine $q,\,r \in F[X]$ s.th.
$$
f = qg + r\,,\quad\deg(r) < \deg(g)
$$

NOTE: I've added the '$+ 1$' in the big-Oh notation for the border case that both degrees are zero (since afaik, $O(0)$ would mean zero field operations, which wouldn't be true).

However, I am not sure why the following proof attempt, which claims it takes only $O(n+1)$ operations, is wrong (I have an idea, which I will mention after the attempt, but I can't see how it breaks the induction).

Proof We attempt induction over $d:=\deg(f)$. The statement is

S$(d)$: Let $f,\,g \in F[X]$ with $g\neq 0$, where $F$ is a field. Put $\mathbf{d} := \deg (f)$ and define $n:=\mathbf{d} \vee \deg (g)$.
Then it takes $O(n + 1)$ field operations to determine $q,\,r \in F[X]$ s.th.
$$ f = qg + r\,,\quad\deg(r) < \deg(g) $$

Base case Let $d = 0$. Then we have $0 = d = \deg (f) \leq \deg(g)$, since $g \neq 0$. Hence, $n = \deg(g)$. If $n = \deg(g) = 0$,
then take $$ q:=fg^{-1},\,r:=0 $$ and we are done in a constant amount
of field operations, so in $O(1) = O(n+1)$. If $\deg(g) > 0 = \deg(f)$, then $$ q:=0,\,r:=f $$ also does the job in $O(n) = O(n+1)$ field
operations. Hence, S$(0)$ holds true.

Inductive step Now, let $d > 0$ and assume that S$(\sigma)$ holds true for all $0 \leq \sigma < d$. As in the base case, if $\deg(g) > \deg (f)$, we are done in a constant amount of field operations. If $\deg(g) \leq \deg (f)$, then put $q_0 := l_f l_g^{-1} X^{\deg(f) - \deg(g)}$, where $l_f,\,l_g$ are the leading coefficients of $f$ and $g$.
Then $$ \tilde f := f - q_0 g $$ is calculated in $O(\deg (g)) = O(\deg (f)) = O(n) = O(n+1)$ field operations, because we only
multiply $g$ by a monomial; by construction, $\deg (\tilde f) < \deg (f) = d$.
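The single reduction step $\tilde f := f - q_0 g$ can be sketched in code. This is a minimal illustration, not part of the original question: polynomials are dense coefficient lists over $\mathbb{Q}$ (Python's `Fraction` standing in for the field $F$), and `reduction_step` is a name chosen here.

```python
from fractions import Fraction

def reduction_step(f, g):
    """One reduction step of long division: given deg(f) >= deg(g),
    return (c, shift, f_tilde) with q0 = c * X^shift and
    f_tilde = f - q0 * g, so that deg(f_tilde) < deg(f).
    Polynomials are coefficient lists, index i = coefficient of X^i,
    with nonzero leading coefficients."""
    df, dg = len(f) - 1, len(g) - 1
    c = f[df] / g[dg]              # l_f * l_g^{-1}: one field division
    f_tilde = f[:]
    # Subtract c * X^(df-dg) * g: deg(g)+1 multiplications and
    # subtractions, i.e. O(deg(g)) field operations in total.
    for i in range(dg + 1):
        f_tilde[i + df - dg] -= c * g[i]
    # The leading coefficient is now zero by construction; strip it.
    while len(f_tilde) > 1 and f_tilde[-1] == 0:
        f_tilde.pop()
    return c, df - dg, f_tilde
```

For example, with $f = X^2 + 1$ and $g = X + 1$ this returns $q_0 = X$ and $\tilde f = 1 - X$.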

Induction hypothesis S$(\deg (\tilde f))$ now yields $q_1$ and $r$ with $\deg (r) < \deg (g)$, s.th. $$ \tilde f = q_1 g + r $$ in $O\big((\deg (\tilde f) \vee \deg (g)) + 1\big) = O(n + 1)$
field operations. Then $f = (q_0 + q_1) g + r$,
and since $q_0,\,q_1$ necessarily have degree at most $n$, $q_0 + q_1$ is also calculated in $O(n + 1)$ field operations. Since we have a sequence of procedures that each take $O(n+1)$ field operations, we are done.
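For concreteness, the full procedure the induction describes can be written as a loop with an explicit counter of field operations. Again this is a sketch under assumed conventions (dense coefficient lists over $\mathbb{Q}$, `Fraction` standing in for the field $F$; `long_division` is a name chosen here), not code from the question:

```python
from fractions import Fraction

def long_division(f, g):
    """Standard polynomial long division over a field, with an explicit
    count of field operations.  f, g are coefficient lists (index i =
    coefficient of X^i), g nonzero.  Returns (q, r, ops)."""
    ops = 0
    r = f[:]
    dg = len(g) - 1
    q = [Fraction(0)] * max(len(f) - dg, 1)
    while len(r) - 1 >= dg and any(x != 0 for x in r):
        dr = len(r) - 1
        c = r[dr] / g[dg]          # one field division per step
        ops += 1
        q[dr - dg] = c
        for i in range(dg + 1):    # deg(g)+1 mult/sub pairs per step
            r[i + dr - dg] -= c * g[i]
            ops += 2
        while len(r) > 1 and r[-1] == 0:
            r.pop()                # drop the eliminated leading term
    return q, r, ops
```

Note that the loop body, which costs $O(\deg(g)+1)$ field operations, can run up to $\deg(f) - \deg(g) + 1$ times, so the counted total is $O(n^2 + 1)$ in general.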

Now, this obviously goes wrong somewhere.

My idea is that in the base case, we never have to consider $\deg(f) > \deg(g)$, in which case we would need $O(n^2)$ field operations in the worst case. However, I don't see anything wrong with the inductive proof itself (I'm probably blind in some way):

We have a statement S that depends on our induction variable, we prove the base case S$(0)$, and we do the inductive step: S$(\sigma)$ for all $\sigma < d$ implies S$(d)$.

Best Answer

I think I've found the error. First off, the statement and the proof aren't wrong in themselves. The problem is that they're not stating what I meant to prove.

Essentially, in all the statements I use or want to prove, $f$ and $g$ are fixed, and so $n$ is fixed as well, and the runtime-related statements become pretty much something along the lines of

If we have two polynomials $f$ and $g$, $g \neq 0$, then we can calculate the polynomials $q$ and $r$ (as mentioned in the equation for polynomial long division) with some finite amount of field operations.

because $n$ becomes a constant, so that $O(n)$ means constant time. And if we consider a fixed input, the algorithm will of course always run in the same amount of time, since it's an algorithm on a constant input.

I initially wasn't careful enough wording the statement to be proven. Here's something closer to what I should have written instead:

Let $F$ be a field. Consider the 'standard' polynomial long division algorithm

$$ A: F[X] \times F[X] \setminus \left\{0\right\} \longrightarrow {F[X]}^2 $$ that uses $(f,\,g)$ to determine $(q,\,r)$ such that $f = qg + r$ and $\deg(r) < \deg (g)$.
Define $n(f,\,g):=\deg (f)\vee \deg (g)$ as the input size function.

Then $A$ has worst-case complexity $O(n^2 + 1)$ in terms of field operations, i.e. if $$ \rho_A: F[X] \times F[X] \setminus \left\{0\right\} \longrightarrow \mathbb{N} $$ describes the amount of field operations $\rho_A(f,\,g)$ needed for the computation $A(f,\,g)$, then for any sequence $(f_l,\,g_l)$ of polynomial tuples such that $$ \lim_l(n(f_l,\,g_l)) = \infty $$ (i.e. if any of the sequence projections is unbounded in the polynomial degree), we have $$ \rho_A(f_l,\,g_l) = O(n(f_l,\,g_l)^2 + 1)\;\text{as}\;l\rightarrow \infty $$
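Under this formulation the bound can also be sanity-checked empirically: along a sequence of inputs with $n(f_l,\,g_l) \to \infty$, the counted number of field operations should stay within a constant multiple of $n^2 + 1$. A rough experiment under the same assumed conventions as before (dense coefficient lists over $\mathbb{Q}$; `division_ops` is a name chosen here, counting one division plus $\deg(g)+1$ multiplication/subtraction pairs per reduction step):

```python
from fractions import Fraction

def division_ops(f, g):
    """Count field operations of textbook long division on coefficient
    lists (index i = coefficient of X^i), g nonzero."""
    ops, r, dg = 0, f[:], len(g) - 1
    while len(r) - 1 >= dg and any(x != 0 for x in r):
        dr = len(r) - 1
        c = r[dr] / g[dg]          # one field division
        ops += 1
        for i in range(dg + 1):    # deg(g)+1 mult/sub pairs
            r[i + dr - dg] -= c * g[i]
            ops += 2
        while len(r) > 1 and r[-1] == 0:
            r.pop()
    return ops

# f_l = 1 + X + ... + X^n and g_l = X^(n/2): every reduction step
# lowers deg(r) by exactly 1, so rho grows like n^2 / 2 here.
for n in (8, 16, 32, 64):
    f = [Fraction(1)] * (n + 1)
    g = [Fraction(0)] * (n // 2) + [Fraction(1)]
    print(n, division_ops(f, g))   # prints 8 55, 16 171, 32 595, 64 2211
```

Doubling $n$ roughly quadruples the count, consistent with $\rho_A(f_l,\,g_l) = O(n(f_l,\,g_l)^2 + 1)$ and clearly not $O(n + 1)$.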

This more precise formulation also makes it more clear why the proof in the question was misleading.

Essentially, the root problem was misuse of big-Oh notation.
