I know that a polynomial ring $R[x]$ is the ring with elements consisting of polynomials with coefficients in $R$. However, this definition leaves me confused when I try to really understand the concept of polynomial rings, rather than just accept that definition for what it is. Where did the polynomials come from? Why are polynomial rings important? Is there a way to “construct” the polynomials in the context of Abstract Algebra, so that we may see why studying rings made up of polynomials is important? Are polynomial rings actually just sets of polynomials, or are the polynomials just a concrete way to represent some more abstract idea? What does the variable $x$ mean in polynomial rings? I apologize for all the questions, I’m just having trouble seeing the importance or intuition behind them.
[Math] Importance and Intuition of Polynomial Rings
abstract-algebra ring-theory
Related Solutions
A quotient of rings is a structure in which you impose a new equation on the original ring.
For example, $$\mathbb R[T]/(T^2 + 1)$$ is the ring of real polynomials together with the new equation $$T^2 + 1 = 0,$$ so this is $\mathbb C$. Likewise, taking the quotient by an ideal generated by two elements imposes two new equations. That's all.
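To see the identification with $\mathbb C$ concretely: every class in the quotient has a representative of the form $a + bT$ with $a, b \in \mathbb R$ (divide by $T^2 + 1$ and keep the remainder), and multiplication uses the new equation $T^2 = -1$: $$(a + bT)(c + dT) = ac + (ad + bc)T + bd\,T^2 = (ac - bd) + (ad + bc)T,$$ which is exactly how complex numbers multiply under the correspondence $T \leftrightarrow i$.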
- For the first example, the ring is $\mathbb Z[x]$ with the additional equations $$2 = 0 \quad \text{and} \quad x^3 = 1,$$ so this is indeed $\mathbb Z_2[x]/(x^3 + 1)$ (note that $x^3 - 1 = x^3 + 1$ once $2 = 0$).
- For the second, suppose there were an isomorphism $f$ from $R_1$ to $R_2$; since $f(1) = 1$, $f$ leaves $F$ invariant, so it remains to find the images of $x$ and $y$. Take the relation $$x^2 = y^2 \quad \text{(in } R_1),$$ which implies $(x+y)(x-y) = 0$, so the same must hold for the images of $x$ and $y$ in $R_2$.
Write $f(x) = P(x,y) = P(y^2,y)$ and $f(y) = Q(x,y) = Q(y^2,y)$. Then $$\bigl(P(y^2,y)+Q(y^2,y)\bigr)\bigl(P(y^2,y)-Q(y^2,y)\bigr)=0,$$ but this is impossible, because the identity would have to hold in $\mathbb Z[y]$, which has no zero divisors.
What is the intuition behind the definition of opposite rings, and for working with them? Do they ever come up in practice?
It's not clear why one would need any intuition to use them. You could say they are a very simple "construction" where you make a new ring out of an old one, but that view is not very fruitful.
There is a higher-level, less accessible explanation. In category theory, you talk about objects and arrows between them (plus some axioms). You might have guessed by now that there is in fact a notion of an opposite category: it is what you get when you take a category and point all of its arrows in the opposite direction.
Many things can be expressed as categories, and among those things are rings and partially ordered sets (or totally ordered sets if you prefer).
When a partially ordered set is viewed as a category, its opposite category is just the reversed partial ordering. Likewise, the opposite ring of a ring is just the opposite category of that ring viewed as a category (a ring can be regarded as a preadditive category with a single object).
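For a concrete instance of the poset case: in $(\mathbb Z, \le)$ viewed as a category, there is a (unique) arrow $m \to n$ exactly when $m \le n$; reversing every arrow gives a category with an arrow $m \to n$ exactly when $n \le m$, which is just the poset $(\mathbb Z, \ge)$.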
Other than drawing this parallel between opposite ordering and opposite rings, I don't really have any further insight into what they are. Really they are most useful as a notational convenience.
The first place they arise naturally in a textbook on noncommutative algebra is probably while explaining the Artin-Wedderburn theorem. The way I remember it, no matter what setup you start out with, you eventually need to introduce the opposite ring of one of the rings in play. That's an example of using it for notational convenience.
Two more places they show up:
- An abelian group $M$ is an $(R,S)$-bimodule iff it is a left $R\otimes S^{op}$-module.
- If $R$ is a ring, then for the rings of module endomorphisms we have $\operatorname{End}(R_R)\cong R$, but $\operatorname{End}({}_RR)\cong R^{op}$.
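A quick sketch of why these hold (spelled out just to make the order of multiplication visible). For the bimodule statement, the action is $(r \otimes s)\cdot m := rms$, and the opposite multiplication on $S$ is exactly what makes it associative: $$\bigl((r\otimes s)(r'\otimes s')\bigr)\cdot m = (rr' \otimes s's)\cdot m = rr'\,m\,s's = (r\otimes s)\cdot\bigl((r'\otimes s')\cdot m\bigr).$$ For the endomorphism statement, a right-module endomorphism $\varphi$ of $R_R$ is determined by $a = \varphi(1)$, since $\varphi(x) = \varphi(1)x = ax$, and composition corresponds to multiplication in $R$; a left-module endomorphism instead satisfies $\varphi(x) = x\varphi(1)$, and composing two of them multiplies the corresponding elements in the reverse order, i.e. in $R^{op}$.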
Why is a right $A$-module the same thing as a left $A^{op}$-module?
You just check that $r\cdot m := mr$ defines a left module structure on $M$. The point of the opposite multiplication $\circ$ is the associativity axiom: with the original multiplication there is no way to prove $(rs)\cdot m = r\cdot(s\cdot m)$, but with $r\circ s := sr$ the axiom $(r\circ s)\cdot m = r\cdot(s\cdot m)$ does hold.
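Written out, using only the right-module axiom $(ms)r = m(sr)$, the check is one line: $$ (r\circ s)\cdot m = m(sr) = (ms)r = (s\cdot m)\,r = r\cdot(s\cdot m). $$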
What is the intuition behind the definition of [anti-involutions], and for working with them? Do they ever come up in practice? And why do they provide a ring isomorphism $A\cong A^{op}$?
They come up in practice, for example, as the complex and quaternion conjugation maps. The first is only trivially an anti-involution, since the complex numbers are commutative, but quaternion conjugation genuinely reverses the order of multiplication: $\overline{pq} = \bar q\,\bar p$. Additionally, the whole subject of $^\ast$-rings is devoted to the study of involutions like that.
As to why an anti-involution $f:A\to A$ yields an isomorphism $A\cong A^{op}$, I advise you to guess what the obvious candidate for a map is and then check to see that it's true.
What is the intuition behind $A^{op}\cong A$ for a commutative ring $A$ and having the notions of left and right $A$-modules coinciding?
A ring being isomorphic to its opposite ring just guarantees some left-right symmetry of the ring. For example, if $R$ is right Noetherian, $R^{op}$ is left Noetherian. If these two rings are isomorphic, then $R$ is Noetherian on both sides. If a ring is isomorphic to its opposite, then any one-sided condition that it has, it has on both sides.
The category of right modules and the category of left modules for a given ring can be quite different from each other. It could be, for example, that every left module admits a projective cover while there are right modules without projective covers. If, on the other hand, the two categories share the same properties, that is something special and is again a sort of 'symmetry' about the ring.
Best Answer
Great question! Polynomial rings really get glossed over, in my opinion, even though they are actually quite complicated objects.
Super formally, we define the polynomial ring $R[x]$ as follows. Let $S$ be the set of all sequences $(r_0, r_1, r_2,\ldots)$ where the $r_i$ are elements of $R$ and only finitely many are nonzero. We explicitly define operations $+$ and $\cdot$ on this set by $$ (a_0, a_1, a_2,\ldots) + (b_0, b_1, b_2, \ldots) = (a_0 + b_0, a_1 + b_1, a_2 + b_2, \ldots) $$ and $$ (a_0, a_1, a_2, \ldots) \cdot (b_0, b_1, b_2, \ldots) = (a_0b_0, a_1b_0 + a_0b_1, a_2b_0 + a_1b_1 + a_0b_2, \ldots). $$ That is, if $a = (a_i)$ and $b = (b_i)$, then $(a+b)_i = a_i + b_i$ and $(ab)_i = \sum_{k=0}^i a_kb_{i-k}$.
It is easy to check that $(S, +, \cdot)$ is a ring with zero $(0, 0, 0, \ldots)$ and identity $(1, 0, 0, \ldots)$.
Furthermore, as an $R$-algebra, $S$ is generated by the element $x = (0, 1, 0, 0, \ldots)$ (if you don't know what an $R$-algebra is, the point is that we can "get to" every element of $S$ using just the element $x$ and the elements of $R$). To see this, note that for each $n$, the element $x^n$ is the sequence $(0, \ldots, 0, 1, 0, \ldots)$ with a $1$ in the $n^\text{th}$ entry (counting from $0$) and zeros elsewhere. Identifying each $r \in R$ with the sequence $(r, 0, 0, \ldots)$, the element $(a_0, a_1, \ldots)$ of $S$ is equal to $$ a_0 + a_1x + a_2x^2 + \ldots, $$ which is a finite sum since only finitely many of the $a_i$ are nonzero. Thus we can "get to" $(a_0, a_1, \ldots)$ just by raising $x$ to various powers and multiplying by elements of $R$.
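As a concrete illustration of the multiplication rule (taking $R = \mathbb Z$ for definiteness), the convolution formula gives $$ (0,1,0,\ldots)\cdot(0,1,0,\ldots) = (0,\;0,\;1,\;0,\ldots) = x^2, $$ and $$ (1,2,0,\ldots)\cdot(3,1,0,\ldots) = (1\cdot 3,\; 1\cdot 1 + 2\cdot 3,\; 2\cdot 1,\; 0,\ldots) = (3,7,2,0,\ldots), $$ which is exactly the familiar computation $(1+2x)(3+x) = 3 + 7x + 2x^2$.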
We denote this ring by $R[x]$. The intuition is that we are "adjoining" an "indeterminate" $x$ to the ring $R$, which means that we are adding some element that has no constraints on it. In some sense, the element $x$ is "free". In practice, this is how we always think about polynomials.
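One standard way to make "free" precise (not spelled out above, but it is the universal property of $R[x]$): given any ring homomorphism $\phi \colon R \to T$ and any element $t \in T$ commuting with the image of $\phi$, there is a unique ring homomorphism $R[x] \to T$ extending $\phi$ and sending $x \mapsto t$, namely evaluation $\sum a_i x^i \mapsto \sum \phi(a_i)t^i$. Since $x$ satisfies no relations, it can be sent anywhere.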
Polynomial rings are extremely important; without them, much of mathematics would simply not be possible. The most familiar example I can think of is in linear algebra, where the theory of polynomial rings lets us prove results like the Cayley-Hamilton Theorem. Another example is the theory of probability generating functions. Polynomials are also vital to number theory and geometry. Every field of mathematics I can think of is built at least implicitly on the theory of polynomials.
Edit: I got a bit carried away with advanced topics in my examples. For a more basic one, we need the theory of polynomial rings to understand even elementary things like factorising quadratics. To prove that such factorisations exist and are unique, you need to understand the ring $\mathbb{R}[x]$. Factorising such polynomials is very useful, as any high school student will tell you.
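For instance, $x^2 - 5x + 6 = (x-2)(x-3)$ factors in $\mathbb{R}[x]$, while $x^2 + 1$ does not (it has no real roots); the fact that $x^2 + 1$ is irreducible over $\mathbb{R}$ is exactly what makes the quotient $\mathbb{R}[x]/(x^2+1)$ from the related answer above a field, namely $\mathbb{C}$.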