I was bored in class one day and wondered to myself whether there are any quadratics $x^2+ax+b$ such that $a$ and $b$ are the zeros. I found two: $x^2+x-2$ and $x^2 -\frac{1}{2}x -\frac{1}{2}$. The comments suggested $x^2+0x+0$, though this seems trivial. I wonder if this applies to polynomials of other degrees. Clearly it never works for a linear polynomial except $x+0$: if $x+a=0$ then $x=-a$, which is not $a$ unless $a=0$. What about cubics, quartics, or even higher degrees? In general, $x^n$ works. What about non-trivial solutions?
[Math] Monic polynomials whose roots are their remaining coefficients
polynomials, roots
Related Solutions
How can one feel comfortable with non-solvable algebraic numbers?
The nice thing about solvable numbers is this idea that they have a formula. You can manipulate the formula as if it were actually a number, using algebraic rules you have probably felt comfortable with for a while. For instance $\sqrt{3+\sqrt{6}}+2$ is an algebraic number. What do you get if you add it to $7$? Well, $\left(\sqrt{3+\sqrt{6}}+2\right)+7=\sqrt{3+\sqrt{6}}+9$ seems like a decent answer. As a side note: there are actually some reasonably hard algorithmic questions along these lines, but I'll assume they don't worry you. :-)
We'd like to be able to manipulate other algebraic numbers with similar comfort. The first method I was taught is pretty reasonable:
Kronecker's construction: If $x$ really is an algebraic number, then it is the root of some irreducible polynomial $x^n - a_{n-1} x^{n-1} - \ldots - a_1 x - a_0$. But how do we manipulate $x$? It's almost silly: we treat it just like the formal symbol $x$, and add and multiply as usual, except that $x \cdot x^{n-1}$ needs to be replaced by $a_{n-1} x^{n-1} + \ldots + a_1 x + a_0$, and division is handled by replacing $1/x$ with $( x^{n-1} - a_{n-1} x^{n-2} - \ldots - a_2 x - a_1)/a_0$. This is very similar to "integers mod $n$", where you replace big numbers by their remainders mod $n$. In fact this is just replacing a polynomial in $x$ with its remainder mod $x^n - a_{n-1} x^{n-1} - \ldots - a_1 x - a_0$.
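For concreteness, here is a minimal Python sketch of this construction (the coefficient-list representation and the helper name `mul_mod` are just my choices for illustration): an algebraic number is stored as its coordinates in the basis $1, x, \ldots, x^{n-1}$, and any power $x^k$ with $k \geq n$ is rewritten using $x^n = a_{n-1}x^{n-1} + \ldots + a_1 x + a_0$.

```python
from fractions import Fraction

def mul_mod(u, v, a):
    """Multiply u and v modulo f(x) = x^n - a[n-1] x^(n-1) - ... - a[1] x - a[0].

    u, v are coefficient lists [c0, c1, ..., c_{n-1}] meaning c0 + c1*x + ... .
    """
    n = len(a)
    # ordinary polynomial product, degree up to 2n - 2
    prod = [Fraction(0)] * (2 * n - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            prod[i + j] += ui * vj
    # reduce: rewrite each x^k with k >= n using x^n = a[n-1] x^(n-1) + ... + a[0]
    for k in range(2 * n - 2, n - 1, -1):
        c = prod[k]
        if c:
            prod[k] = Fraction(0)
            for i in range(n):
                prod[k - n + i] += c * a[i]
    return prod[:n]

# Example: x = 2^(1/3), minimal polynomial x^3 - 2, so a = [2, 0, 0].
a = [Fraction(2), Fraction(0), Fraction(0)]
x = [Fraction(0), Fraction(1), Fraction(0)]   # the "number" x itself
x_squared = mul_mod(x, x, a)
print(mul_mod(x_squared, x, a))               # x^3 reduces to [2, 0, 0], i.e. the number 2
```

Addition is coordinate-wise; inversion in general takes a little more work (the extended Euclidean algorithm against $f$), but multiplication already shows the "remainder mod $f$" idea.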
I found it somewhat satisfying, but in many ways it is very mysterious. We use the same symbol for many different algebraic numbers; each time we have to keep track of the $f(x)$ floating in the background. It also raises deep questions about how to tell two algebraic numbers apart. Luckily, more or less all of these questions have clean algorithmic answers, and they are described in Cohen's textbook CCANT (A Course in Computational Algebraic Number Theory, Henri Cohen, 1993).
Companion matrices: But years later, it still bugged me. Then I studied splitting fields of group representations. The crazy thing about these fields is that they are subrings of matrix rings, so “numbers” are actually matrices. You've probably seen tricks like this: $$\mathbb{C} = \left\{ \begin{bmatrix} a & b \\ -b & a \end{bmatrix} : a,b \in \mathbb{R} \right\}$$ where we make a bigger field out of matrices over a smaller field. It turns out this is always possible: if $K \leq F$ are fields, then $F$ is a $K$-vector space, and the function $f:F \to M_n(K) : x \mapsto ( y \mapsto xy )$ is an injective homomorphism of fields, so that $f(F)$ is a field isomorphic to $F$ but whose “numbers” are just $n \times n$ matrices over $K$, where $n$ is the dimension of $F$ as a $K$-vector space (and yes, $n$ could be infinite in general, but for algebraic numbers it is finite).
That might seem a little complicated, but $f$ just says "what does multiplying look like?" For instance if $\mathbb{C} = \mathbb{R} \oplus \mathbb{R} i$ then multiplying $a+bi$ sends $1$ to $a+bi$ and $i$ to $-b+ai$. The first row is $[a,b]$ and the second row $[-b,a]$. Too easy.
Ok, fine, but that assumes you already know how to multiply, and perhaps you are not yet comfortable enough to multiply non-solvable algebraic numbers! Again we use the polynomial $x^n - a_{n-1} x^{n-1} - \ldots - a_1 x - a_0$, but this time as a matrix. We use the same rule, viewing $F=K \oplus Kx \oplus Kx^2 \oplus \ldots \oplus Kx^{n-1}$, and ask what $x$ does to each basis element: well, $x^i$ is simply sent to $x^{i+1}$. It's only for the last one that things get funny:
$$f(x) = \begin{bmatrix} 0 & 1 & 0 & 0 & \ldots & 0 & 0 \\ 0 & 0 & 1 & 0 & \ldots & 0 & 0 \\ 0 & 0 & 0 & 1 & \ldots & 0 & 0 \\ & & & & \ddots & & \\ 0 & 0 & 0 & 0 & \ldots & 1 & 0 \\ 0 & 0 & 0 & 0 & \ldots & 0 & 1 \\ a_0 & a_1 & a_2 & a_3 & \ldots & a_{n-2} & a_{n-1} \end{bmatrix}$$
So this fancy “number” $x$ just becomes a matrix, most of whose entries are $0$. For instance $x^2 - (-1) = x^2+1$ gives the matrix $i = \left[\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\right]$.
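If you want to see this numerically, here is a small numpy sketch (my own illustration, with the same row convention as the matrix above) that builds the companion matrix and checks the $x^2+1$ example together with $x^3-2$:

```python
import numpy as np

def companion(a):
    """Companion matrix of f(x) = x^n - a[n-1] x^(n-1) - ... - a[1] x - a[0],
    rows giving the images of the basis 1, x, ..., x^(n-1) under multiplication by x."""
    n = len(a)
    M = np.zeros((n, n))
    M[:-1, 1:] = np.eye(n - 1)   # x^i -> x^(i+1) for i < n-1
    M[-1, :] = a                 # x^(n-1) -> a[0] + a[1] x + ... + a[n-1] x^(n-1)
    return M

i_mat = companion([-1, 0])       # x^2 - (-1): the "number" i
print(i_mat @ i_mat)             # minus the identity, i.e. i^2 = -1

cbrt2 = companion([2, 0, 0])     # x^3 - 2: the "number" 2^(1/3)
print(cbrt2 @ cbrt2 @ cbrt2)     # 2 times the identity
```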
The nice part here is that different algebraic numbers actually get genuinely different matrix representations. The dark part is making sure that two unrelated algebraic numbers actually multiply together like elements of a field. You see, $M_n(K)$ has many subfields, but is not itself a field, so you have to choose matrices that both lie within a common subfield. For splitting fields and centralizer fields and all sorts of handy dandy fancy fields, you absolutely can make sure everything you care about comes from the field. Starting from just a bunch of polynomials, though, you need to be careful and find a single polynomial that works for both; this is what the primitive element theorem gives you.
This also lets you see the difference between eigenvalues in the field $K$ and eigenvalues (“numbers”) in the field $F$: the former are actually numbers, or multiples of the identity matrix, while the latter are full-fledged matrices that happen to lie in a subfield. If you ever studied the “real form” of the eigenvalue decomposition with $2\times 2$ blocks, those $2 \times 2$ blocks are exactly the $\begin{bmatrix}a&b\\-b&a\end{bmatrix}$ complex numbers.
I think I have a proof that no such real polynomial of degree $\geq 6$ exists.
Let $n \geq 6$.
Suppose for contradiction that $z_1,\ldots,z_n \in \mathbb R\setminus\{0\}$ are such that $$(X-z_1)\cdots(X-z_n)=X^n+\sum_{k=1}^{n}z_kX^{n-k}$$
Then three useful identities appear $$\sum_{k=1}^{n}z_k=-z_1 \; \; \; \;(1)$$
$$\sum_{\large1\leq i<j \leq n}z_iz_j=z_2 \; \; \; \;(2)$$
$$\prod_{k=1}^n z_k=(-1)^n z_n \; \; \; \;(3)$$
Since $$(\sum_{k=1}^{n}z_k)^2=\sum_{k=1}^{n}z_k^2+2\sum_{\large1\leq i<j \leq n}z_iz_j$$it follows that$$z_1^2=2z_2+\sum_{k=1}^{n}z_k^2$$
Hence, cancelling $z_1^2$ on both sides (and noting the $z_k$ are nonzero), $$0< \sum_{k=2}^{n}z_k^2=-2z_2 \; \; \; \;(4)$$ and, moving $z_2^2$ across as well, $$0<\sum_{k=3}^{n}z_k^2=-2z_2-z_2^2=1-(z_2+1)^2 \; \; \; \;(5) $$
$(4)$ and $(5)$ imply $$\; \; \; \;-2<z_2<0 \; \; \; \;(6)$$
thus $(6)$ and $(4)$ imply $$0<\sum_{k=2}^{n}z_k^2 < 4 \; \; \; \; (7)$$
Also $(6)$ and $(5)$ imply $$0<\sum_{k=3}^{n}z_k^2 \leq 1 \; \; \; \; (8)$$
By AM-GM, $$\left(|z_3|^2\ldots|z_{n-1}|^2 \right)^{1/(n-3)} \leq \frac{1}{n-3}\sum_{k=3}^{n-1}z_k^2 \leq \frac{1}{n-3}\sum_{k=3}^{n}z_k^2$$
Hence $$|z_3|^2\ldots|z_{n-1}|^2 \leq \left(\frac{1}{n-3}\sum_{k=3}^{n}z_k^2\right)^{n-3} $$
Taking square roots, $$|z_3|\ldots|z_{n-1}| \leq \left(\frac{1}{n-3}\sum_{k=3}^{n}z_k^2\right)^{\large \frac{n-3}{2}} \leq_{ \large (8)} \dfrac{1}{{(n-3)}^{(n-3)/2}} \; \; \; \; (9)$$
By $(1)$ we have $\sum_{k=2}^{n}z_k=-2z_1$, so by the triangle inequality and Cauchy-Schwarz
$$2|z_1| \leq \sum_{k=2}^{n}|z_k| \leq \sqrt{n-1} \sqrt{\sum_{k=2}^{n}z_k^2} $$
Hence by $(7)$,
$$|z_1| \leq \sqrt{n-1} \; \; \; \; (10)$$
We may rewrite $(6)$ as $$|z_2|\lt2 \; \; \; \; (11) $$
Recalling $(3)$ (with $z_n$ cancelled from both sides) and putting together $(9)$, $(10)$ and $(11)$, we have
$$1=|z_1||z_2||z_3|\cdots|z_{n-1}| < \dfrac{ 2\sqrt{n-1}}{{(n-3)}^{(n-3)/2}}$$
This inequality fails for $n\geq 6$.
Contradiction.
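If you want to double-check that last step numerically, here is a quick Python loop (just an illustration) evaluating the right-hand side $2\sqrt{n-1}/(n-3)^{(n-3)/2}$:

```python
from math import sqrt

for n in range(5, 13):
    bound = 2 * sqrt(n - 1) / (n - 3) ** ((n - 3) / 2)
    print(n, round(bound, 4), "contradiction" if bound < 1 else "no contradiction")
```

The right-hand side is already below $1$ at $n=6$, and the denominator grows much faster than the numerator, so it stays there; at $n=5$ the bound equals $2$, which is why the argument says nothing in that case.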
I can't prove anything for $n=5$, so maybe the conjecture doesn't hold there.
Best Answer
For the quadratic case $x^2+ax+b=0$, by Viète's formulas we have $$-a=a+b\qquad b=ab$$ The first formula implies $b=-2a$. Substituting this into the second equation gives $-2a=-2a^2$ or $a=a^2$, so $a=0,1$. These give corresponding $b$ values of 0 and $-2$ respectively, so the unique non-trivial quadratic "auto-solving" polynomial is $x^2+x-2=0$.
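If you would rather let a computer do the bookkeeping, the same system can be handed to sympy (an illustrative snippet; the symbols are the $a, b$ above):

```python
from sympy import symbols, solve

a, b = symbols('a b')
# Vieta for x^2 + a x + b with roots a and b: a + b = -a and a*b = b
print(solve([a + b + a, a*b - b], [a, b]))   # expected: [(0, 0), (1, -2)]
```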
For the cubic case $x^3+ax^2+bx+c$, the same formulas give $$-a=a+b+c\qquad b=ab+bc+ca\qquad-c=abc$$ If $c=0$ the situation is just the quadratic case with another zero root. In general, if $p(x)$ is auto-solving, so is $p(x)x^k$ for any $k>0$, since only extra zeros are introduced to both the coefficient and root list. Thus we have for example $x^3+x^2-2x$, but these are trivial.
There is a nice non-trivial auto-solving cubic polynomial, $x^3+x^2-x-1$, but also a very ugly one given by $$a=0.56519771\dots$$ $$b=-1.76929235\dots$$ $$c=0.63889691\dots$$ (It turns out that these values are in the OEIS: A273065, $-$A273066, A273067.)
And this is just the all-real solution! There are two solutions with complex coefficients. One is given by $$a={-0.78259885\dots}-0.52171371\dots i$$ $$b={0.88464617\dots}-0.58974280\dots i$$ $$c={0.68055154\dots}+1.63317024\dots i$$ and the other is obtained by taking the complex conjugates of the values above.
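As a quick numerical check of the nice cubic (my own snippet, nothing deep), compare its roots with its non-leading coefficients:

```python
import numpy as np

coeffs = [1, 1, -1, -1]            # x^3 + x^2 - x - 1 = (x + 1)^2 (x - 1)
roots = np.roots(coeffs)
print(sorted(roots.real))          # approximately [-1.0, -1.0, 1.0]
print(sorted(coeffs[1:]))          # [-1, -1, 1], the same multiset
```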
For quartics, there is only one non-trivial real auto-solving polynomial, $x^4+x^3+ax^2+bx+c$ where $$a=-1.75487766\dots$$ $$b=-0.56984029\dots$$ $$c=0.32471795\dots$$ Even more startlingly, there are no auto-solving real polynomials of degree five or higher. This and the uniqueness of the quartic example were proved by Paul R. Stein in "On Polynomial Equations with Coefficients Equal to Their Roots" (March 1966), American Mathematical Monthly 73 (3), pp. 272-274.
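Stein's quartic values can also be reproduced numerically. An auto-solving monic polynomial $x^n + c_1x^{n-1}+\cdots+c_n$ satisfies $\prod_i(x-c_i)=x^n+c_1x^{n-1}+\cdots+c_n$, so its coefficient vector is a fixed point of "take these numbers as roots and read off the coefficients". A hedged scipy sketch (the starting guess is taken from the values quoted above):

```python
import numpy as np
from scipy.optimize import fsolve

def residual(c):
    # np.poly(c) returns the monic polynomial whose roots are the entries of c;
    # an auto-solving polynomial makes its non-leading coefficients equal to c.
    return np.poly(c)[1:] - c

guess = [1.0, -1.75, -0.57, 0.32]
print(fsolve(residual, guess))   # approx [ 1. -1.75487766 -0.56984029  0.32471795]
```

The same residual works in any degree, which makes it easy to experiment with the cubic and higher-degree cases discussed above.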