Every element of a field is algebraic over a fixed field

abstract-algebra, galois-theory

I’m currently reading Algebra (Michael Artin). I can’t make sense of Theorem 16.5.2:

Theorem 16.5.2. Let $H$ be a finite group of automorphisms of a field $K$ and let $F$ denote the fixed field $K^H$. Let $\beta_1$ be an element of $K$, and let $\{\beta_1,\dots,\beta_r\}$ be the $H$-orbit of $\beta_1$. The irreducible polynomial for $\beta_1$ over $F$ is $g(x) = (x-\beta_1)\cdots(x-\beta_r)$.

Proof: Let $g(x) = (x-\beta_1)\cdots(x-\beta_r) = x^r - b_1 x^{r-1} + \cdots \pm b_r$. The coefficients of $g$ are symmetric functions of the orbit $\{\beta_1, \dots, \beta_r\}$. Since the elements of $H$ permute the orbit, they fix the coefficients. Therefore $g$ has coefficients in the fixed field.

I thought I understood pretty much everything except the implication in the "Since the elements of H permute the orbit, they fix the coefficients" part. Usually, when something like that happens, I build a simple example and try to follow the proof with it to see how it works in practice.

In this case, after doing that I ended up even more confused.

Let’s try to apply the theorem to the simplest example I can imagine: $K = \mathbb{R}$, $H = \{x, -x\}$. We have $\mathbb{R}^H = \{0\}$. The orbit of any non-zero element $\beta$ of $\mathbb{R}$ is $\{\beta, -\beta\}$, and therefore its irreducible polynomial over $\mathbb{R}^H$ would be (according to the theorem) $(x-\beta)(x+\beta)=x^2-\beta^2$, which… is obviously not in $\mathbb{R}^H[x]$. (I also note that the elements of $H$ permute the orbit: $x(\{\beta, -\beta\}) = \{\beta, -\beta\}$ and $(-x)(\{\beta, -\beta\}) = \{-\beta, \beta\}$, but they don’t fix the coefficients: $(-x)(-\beta^2) = \beta^2$.)

EDIT: Thanks to Mark for finding the mistake in my simple example. After fiddling a bit, I think that what Artin is trying to say by "Since the elements of H permute the orbit, they fix the coefficients" in his proof is that if $h \in H$, then $h(b_i(\beta_1, \dots, \beta_r)) = b_i(h(\beta_1), \dots, h(\beta_r)) = b_i(\sigma(\beta_1, \dots, \beta_r)) = b_i(\beta_1, \dots, \beta_r)$, with $\sigma$ being a permutation. Am I on the right track? If so, why exactly is the first equality valid?

Best Answer

For a polynomial $p(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_0$ and $\sigma\in H$, define:

$\sigma.p(x)=\sigma(a_n)x^n+\sigma(a_{n-1})x^{n-1}+\cdots+\sigma(a_0)$

So you apply $\sigma$ to the coefficients of the polynomial. This is clearly an automorphism of the polynomial ring $K[x]$.
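
For instance (taking $K=\mathbb{Q}(\sqrt 2)$ and $\sigma$ the automorphism with $\sigma(\sqrt 2)=-\sqrt 2$, purely as an illustration), the action only touches the coefficients and leaves $x$ alone:

$$\sigma.\left(x^2+\sqrt 2\,x+3\right)=x^2-\sqrt 2\,x+3.$$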

So for $g(x)=(x-\beta_1)\cdots(x-\beta_r)$, since $p\mapsto\sigma.p$ is a ring homomorphism and $\sigma.(x-\beta_i)=x-\sigma(\beta_i)$, we have:

$\sigma.g(x)=(x-\sigma(\beta_1))\cdots(x-\sigma(\beta_r))$

But since $\sigma\in H$ permutes the orbit, we know that $\sigma(\beta_1),\dots,\sigma(\beta_r)$ are the same elements as $\beta_1,\dots,\beta_r$ up to reordering. Hence $g=\sigma.g$, and so they have the same coefficients. This means the coefficients of $g$ are fixed by $\sigma$.
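
As a concrete sanity check (the specific field here is my choice, not part of the theorem): take $K=\mathbb{Q}(\sqrt 2)$, $H=\{\mathrm{id},\sigma\}$ with $\sigma(\sqrt 2)=-\sqrt 2$, so $F=K^H=\mathbb{Q}$. For $\beta_1=\sqrt 2$ the $H$-orbit is $\{\sqrt 2,-\sqrt 2\}$, and

$$g(x)=(x-\sqrt 2)(x+\sqrt 2)=x^2-2,$$

which indeed has coefficients in $\mathbb{Q}$ and is the irreducible polynomial of $\sqrt 2$ over $\mathbb{Q}$.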
