First, we must pin down what we mean by "better approximate." If you are interested in the interval $[-1,1]$ and your given function is $f(x)$, one reasonable definition of approximation error by a second function $g(x)$ is
$$\int_{-1}^1 [f(x)-g(x)]^2\,dx.$$
Now suppose $f(x)$ is an even function, and $e(x)$ an even polynomial approximation to $f(x)$. Can we improve the approximation by adding some odd polynomial terms $o(x)$? Let's check:
\begin{align*}
\int_{-1}^1 [f(x)-e(x)-o(x)]^2\,dx &= \int_{-1}^1 [f(x)-e(x)]^2 - 2[f(x)-e(x)]o(x) + o(x)^2\,dx\\
&= \int_{-1}^1 [f(x)-e(x)]^2\,dx + \int_{-1}^1 o(x)^2\,dx -2\int_{-1}^1[f(x)-e(x)]o(x)\,dx.
\end{align*}
Now let's use the fact that $f$ and $e$ are even and $o$ is odd: the integrand $[f(x)-e(x)]o(x)$ is then odd, so splitting the integral at $0$ and substituting $x\mapsto -x$ on $[-1,0]$ gives
$$\int_{-1}^1[f(x)-e(x)]o(x)\,dx = -\int_0^1[f(x)-e(x)]o(x)\,dx + \int_0^1 [f(x)-e(x)]o(x)\,dx = 0.$$
Therefore
$$\int_{-1}^1 [f(x)-e(x)-o(x)]^2\,dx = \int_{-1}^1 [f(x)-e(x)]^2\,dx + \int_{-1}^1 o(x)^2\,dx \geq \int_{-1}^1 [f(x)-e(x)]^2\,dx$$
and you were better off without the odd terms.
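As a quick numerical sanity check of this argument (a sketch, not part of the original answer: $f(x)=\cos x$, an even polynomial $e$, and odd terms $o$ are chosen arbitrarily here), the cross term vanishes and the odd terms only add to the squared error:

```python
import numpy as np
from scipy.integrate import quad

f = np.cos                       # an even function
e = lambda x: 1 - x**2 / 2       # an even polynomial approximation
o = lambda x: 0.1 * x**3         # some odd polynomial terms

err_even = quad(lambda x: (f(x) - e(x))**2, -1, 1)[0]
err_both = quad(lambda x: (f(x) - e(x) - o(x))**2, -1, 1)[0]
cross    = quad(lambda x: (f(x) - e(x)) * o(x), -1, 1)[0]

print(cross)                     # vanishes, as the parity argument predicts
print(err_both - err_even)       # equals the (positive) integral of o(x)^2
```

Swapping in any other even $f$, even $e$, and odd $o$ gives the same outcome: the difference of errors is exactly $\int_{-1}^1 o(x)^2\,dx \geq 0$.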
EDIT: For different norms, you carry out different flavors of the same argument. For instance, for the $\sup$ norm, pairing each $x\in[0,1]$ with $-x$ and using the parities of $f$, $e$, and $o$,
\begin{align*}
\sup_{x\in [-1,1]} |f(x)-e(x)-o(x)| &= \sup_{x\in [0,1]} \max\left(|f(x)-e(x)-o(x)|,|f(x)-e(x)+o(x)|\right)\\
&\geq \sup_{x\in [0,1]} |f(x)-e(x)|\\
&= \sup_{x\in [-1,1]} |f(x)-e(x)|.
\end{align*}
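The same inequality can be checked on a grid (again a sketch with arbitrarily chosen even $f$, even $e$, and odd $o$, none of which come from the original post):

```python
import numpy as np

x = np.linspace(0, 1, 1001)
f = np.cos(x)              # even function, sampled on [0, 1]
e = 1 - x**2 / 2           # even polynomial approximation
o = 0.1 * x**3             # odd polynomial terms

# The sup over [-1, 1] of |f - e - o| is the sup over [0, 1] of the larger
# of the values at x and at -x, which by parity are |f-e-o| and |f-e+o|.
sup_with = np.max(np.maximum(np.abs(f - e - o), np.abs(f - e + o)))
sup_without = np.max(np.abs(f - e))
print(sup_with >= sup_without)
```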
Best Answer
This is called the minimax approximation problem, and a little geometry will help us solve it. First define the error
$$\epsilon(x)=2x^3+x^2+2x-1-(ax+b)$$
Looking at a plot, the blue curve is the cubic function we are trying to approximate, and the purple curve is roughly what our minimax approximation should look like. Let us define the max absolute error
$$\rho=\max_{-1\leq x \leq1}|\epsilon(x)|$$
We also see that this maximum error is achieved at exactly three points,
$$\epsilon(-1)=\rho=\epsilon(1) \quad\textrm{and}\quad \epsilon(x_1)=-\rho,$$
for some $x_1$ in the interior of the interval (marked by the fat red vertical line in the plot). In addition, $\epsilon(x)$ achieves its minimum at $x=x_1$, so $\epsilon'(x_1)=0$. We now solve four equations in four unknowns (a nonlinear system in this case) and get
$$a=4,b=\frac{-35-13\sqrt{13}}{108},x_1=\frac{-1+\sqrt{13}}{6},\rho=\frac{35+13\sqrt{13}}{108}$$
or better yet
$$a=4,b=-0.758076,x_1=0.434259, \rho=0.758076$$
so we know what the minimax line is, what the max error is, and where it is achieved. In case you get two solutions while solving this system, remember to pick the one where $\rho$ is positive.
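For reference, here is one way to solve the $4\times 4$ nonlinear system numerically (a sketch using `scipy.optimize.fsolve`; the starting guess below is one I chose to land on the $\rho>0$ solution, it is not from the original answer):

```python
import numpy as np
from scipy.optimize import fsolve

def eps(x, a, b):
    """Error of the line ax + b against 2x^3 + x^2 + 2x - 1."""
    return 2*x**3 + x**2 + 2*x - 1 - (a*x + b)

def system(v):
    a, b, x1, rho = v
    return [eps(-1, a, b) - rho,        # epsilon(-1) = rho
            eps(1, a, b) - rho,         # epsilon(1)  = rho
            eps(x1, a, b) + rho,        # epsilon(x1) = -rho
            6*x1**2 + 2*x1 + 2 - a]     # epsilon'(x1) = 0

a, b, x1, rho = fsolve(system, [1.0, 0.0, 0.5, 1.0])
print(a, b, x1, rho)
```

This recovers $a=4$, $b\approx-0.758076$, $x_1\approx0.434259$, $\rho\approx0.758076$, matching the closed-form solution above.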
Oh, and the algorithm you are looking for is the Remez algorithm. You might also want to look at
a theorem due to de la Vallée-Poussin (slide 4) for estimating the minimax error without having to calculate the approximation itself (not to be confused with de la Vallée-Poussin's theorem on uniform integrability),
Chebyshev's equioscillation theorem,
and Chebyshev polynomials of the first kind, which are useful for near-minimax approximations.
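As a small illustration of that last point (a sketch using `numpy.polynomial.chebyshev`, not from the original answer): truncating the Chebyshev expansion of the cubic gives a near-minimax line, close to but not equal to the true minimax line $4x+b$.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Chebyshev-basis coefficients of f(x) = 2x^3 + x^2 + 2x - 1
# (power-basis coefficients are listed from low to high degree)
cheb = C.poly2cheb([-1, 2, 1, 2])
print(cheb)                      # [-0.5  3.5  0.5  0.5]

# Truncating after the linear term gives the near-minimax line -0.5 + 3.5x
x = np.linspace(-1, 1, 10001)
f = 2*x**3 + x**2 + 2*x - 1
near_err = np.max(np.abs(f - (-0.5 + 3.5*x)))
true_err = (35 + 13*np.sqrt(13)) / 108   # rho from the solution above
print(near_err, true_err)        # ~1.0 vs ~0.758: close, but not minimax
```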