Find the polynomial of fixed degree that minimizes the maximal error on a set of equidistant points

approximation-theory optimization polynomials regression

I've got a set of $m \in \mathbb{N}$ 2D points which I want to represent as a polynomial of degree $n$: $\left \{ \left( x_i, y_i \right) \in \mathbb{R}^2 : i \in \mathbb{Z}, 0 \leq i \lt m \right \}$, with $x_i - x_{i-1} = d$ constant. These points happen to be values of a univariate real function, i.e. $f(x_i) = y_i$; so one could say that I'm approximating that function with a polynomial, but I only care about the values at the equidistant $x_i$, not over the entire interval $[x_0, x_{m-1}]$.

I want to find a polynomial that minimizes the maximal error, so the error function I want to minimize is $\max_{0 \leq i \lt m} \lvert y_i - p(x_i) \rvert$.

So far I know of two methods of finding "best" polynomials:

  1. The minimax polynomial has minimal error over an entire interval, not just some select points. I already verified that the Remez algorithm leads to suboptimal polynomials for my problem.
  2. Polynomial regression also produces a "best" polynomial, but I don't know what definition of error polynomial regression minimizes.

What method should I use for finding my polynomials?

EDIT: This question seems to be related; however, it is specifically about the case $n = 1$, so I'm not sure how much it applies to polynomials in general: Linear regression for minimizing the maximum of the residuals

Best Answer

The method in this link suggests the minimization problem $$\left\{\begin{array}{rl} \min\limits_{r,\,a_0,\ldots,a_n}& r\\ \text{subject to} & r-(y_i-p(x_i))\geq 0, \quad 0 \leq i \lt m,\\ & r+(y_i-p(x_i))\geq 0, \quad 0 \leq i \lt m, \end{array}\right.$$ which follows from the definition of the absolute value, where $$p(x)=\sum_{j=0}^n a_j x^j.$$ Since $p(x_i)$ is linear in the coefficients $a_0,\ldots,a_n$, this is a linear program in the variables $r, a_0, \ldots, a_n$.
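Here is a minimal sketch of that linear program in Python, assuming SciPy's `linprog` is an acceptable solver; the function name `fit_minimax` and the sine example data are illustrative, not taken from the question.

```python
# Minimax polynomial fit on a finite set of points, posed as the LP above:
# variables are the coefficients a_0..a_n and the error bound r; minimize r
# subject to  p(x_i) - y_i <= r  and  y_i - p(x_i) <= r  for all i.
import numpy as np
from scipy.optimize import linprog

def fit_minimax(x, y, deg):
    """Return (coefficients a_0..a_deg, maximal error) minimizing max_i |y_i - p(x_i)|."""
    V = np.vander(x, deg + 1, increasing=True)   # V[i, j] = x_i**j
    m = len(x)
    # Decision vector: [a_0, ..., a_deg, r]; objective is to minimize r.
    c = np.zeros(deg + 2)
    c[-1] = 1.0
    # Rows [ V_i, -1] @ [a; r] <= y_i   encode  p(x_i) - y_i <= r,
    # rows [-V_i, -1] @ [a; r] <= -y_i  encode  y_i - p(x_i) <= r.
    A_ub = np.block([[ V, -np.ones((m, 1))],
                     [-V, -np.ones((m, 1))]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * (deg + 1) + [(0, None)]   # coefficients free, r >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:deg + 1], res.x[-1]

# Example: fit a cubic to 20 equidistant samples of sin on [0, pi].
x = np.linspace(0.0, np.pi, 20)
y = np.sin(x)
coeffs, max_err = fit_minimax(x, y, deg=3)
print(coeffs, max_err)
```

The returned `max_err` is exactly the objective $\max_i \lvert y_i - p(x_i) \rvert$ at the optimum, which makes it easy to compare against a least-squares fit of the same degree.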

You can find further discussions of such linear programming formulations by searching on SearchOnMath. I like to use this tool for finding material on problems of this kind.