What is the intuition behind the definition of opposite rings, and for working with them? Do they ever come up in practice?
It's not clear that one needs much intuition to use them. You could say they are a very simple "construction" in which you make a new ring out of an old one, but that view is not very fruitful.
There is a high-level, less accessible explanation. In category theory, you talk about objects and arrows between them (plus some axioms). You might have guessed by now that there is in fact a notion of opposite category: it is what you get when you take a category and point all the arrows in the opposite direction.
Many things can be expressed as categories, and among those things are rings and partially ordered sets (or totally ordered sets if you prefer).
When we view a partially ordered set as a category, the opposite category is just the reversed partial ordering. Likewise, the opposite ring of a ring is just the opposite category of that ring viewed as a category.
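To spell out that dictionary (a standard sketch; the notation $\mathcal{C}_R$ is mine): a ring $R$ can be viewed as a category with a single object $\ast$, whose arrows are the elements of $R$ and whose composition is multiplication:
$$\operatorname{Hom}(\ast,\ast)=R,\qquad g\circ f := gf.$$
Reversing all arrows means composing in the opposite order, $g\circ^{\mathrm{op}} f := f\circ g = fg$, so the arrows of the opposite category form exactly the ring $R^{op}$, whose product is $a\ast b := ba$.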
Other than drawing this parallel between opposite ordering and opposite rings, I don't really have any further insight into what they are. Really they are most useful as a notational convenience.
The first place they arise naturally in a textbook on noncommutative algebra is probably while explaining the Artin-Wedderburn theorem. The way I remember it, no matter what setup you start out with, you eventually need to introduce the opposite ring of one of the rings in play. That's an example of using it for notational convenience.
Two more places they show up:
An abelian group $M$ is an $(R,S)$-bimodule iff it is a left $R\otimes_{\mathbb{Z}} S^{op}$-module.
If $R$ is a ring, then the ring of module endomorphisms $End(R_R)\cong R$, but $End(_RR)\cong R^{op}$.
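The second isomorphism is a quick check (a standard computation, sketched here): an endomorphism $f$ of the left module $_RR$ is determined by $s := f(1)$, since $f(r) = f(r\cdot 1) = r\,f(1) = rs$, so every endomorphism is right multiplication $\rho_s$. Composing two of them reverses the order of multiplication:
$$(\rho_s\circ\rho_t)(r)=\rho_s(rt)=rts=\rho_{ts}(r),$$
so $s\mapsto\rho_s$ is a ring isomorphism $R^{op}\to End(_RR)$ (equivalently, an anti-isomorphism from $R$).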
Why is a right $A$-module the same thing as a left $A^{op}$-module?
You just check that $r\cdot m :=mr$ defines a left module structure on $M$. The "opposite multiplication $\circ$" is exactly what makes this work: with the original multiplication there is no way to prove that $(rs)\cdot m=r\cdot(s\cdot m)$, but $(r\circ s)\cdot m=r\cdot(s\cdot m)$ does hold.
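Writing $r\circ s := sr$ for the multiplication of $A^{op}$, the associativity check goes through precisely because of the reversal:
$$(r\circ s)\cdot m = (sr)\cdot m = m(sr) = (ms)r = (s\cdot m)r = r\cdot(s\cdot m).$$
With the original multiplication one would need $(rs)\cdot m = m(rs)$ to equal $m(sr)$, which fails in a noncommutative ring.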
What is the intuition behind the definition of [anti-involutions], and for working with them? Do they ever come up in practice? And why do they provide a ring isomorphism $A\cong A^{op}$?
They come up in practice, for example, as the complex and quaternion conjugation maps. The first one is trivially an anti-homomorphism since the complex numbers are commutative, but quaternion conjugation genuinely reverses products. Additionally, the whole subject of $^\ast$-rings is devoted to the study of involutions like these.
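To see the reversal in the quaternions $\mathbb{H}$: conjugation $\overline{a+bi+cj+dk}=a-bi-cj-dk$ satisfies $\overline{pq}=\bar{q}\,\bar{p}$, and the order matters. On basis elements, for example:
$$\overline{ij}=\overline{k}=-k=ji=(-j)(-i)=\bar{j}\,\bar{i},\qquad\text{while}\qquad \bar{i}\,\bar{j}=(-i)(-j)=ij=k\neq -k.$$
So conjugation is an anti-involution, giving an isomorphism $\mathbb{H}\cong\mathbb{H}^{op}$.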
As to why an anti-involution $f:A\to A$ yields an isomorphism $A\cong A^{op}$, I advise you to guess the obvious candidate for the map and then check that it works.
What is the intuition behind $A^{op}\cong A$ for a commutative ring $A$ and having the notions of left and right $A$-modules coinciding?
A ring being isomorphic to its opposite ring just guarantees some left-right symmetry of the ring. For example, if $R$ is right Noetherian, $R^{op}$ is left Noetherian. If these two rings are isomorphic, then $R$ is Noetherian on both sides. If a ring is isomorphic to its opposite, then any one-sided condition that it has, it has on both sides.
The category of right modules and the category of left modules for a given ring can be quite different from each other. It could be, for example, that every left module admits a projective cover while there are right modules without projective covers. If, on the other hand, the two categories share the same properties, that is something special and is again a sort of 'symmetry' about the ring.
Fabio gives a very nice answer to your question, but doesn't directly address an important point of confusion in your original post/comments, so I'm adding this answer for posterity. In general, a map $\psi:R_1\rightarrow R_2$ will absolutely not induce an epimorphism $\text{rek}(\psi)^{-1}R_1\twoheadrightarrow R_2$, even if we take the stronger definition $\text{rek}(\psi)=\psi^{-1}(R_2^\times)$ given by Fabio. For instance, if every element of $\text{rek}(\psi)$ is already a unit in $R_1$, then we will just have $\text{rek}(\psi)^{-1}R_1=R_1$, and so using this it is easy to come up with examples where the induced map is not epi.
For instance, take $R_1=\mathbb{Q}$, and $R_2$ any field extension of $\mathbb{Q}$ with a non-trivial automorphism $\alpha$ that fixes $\mathbb{Q}$ pointwise, with $\psi:R_1\hookrightarrow R_2$ the inclusion map. Then $\text{rek}(\psi)=\mathbb{Q}^\times$, so $\text{rek}(\psi)^{-1}\mathbb{Q}=\mathbb{Q}$ and the induced map to $R_2$ is just $\psi$, which is certainly not an epimorphism. (E.g. $\alpha\circ\psi=\text{id}_{R_2}\circ\psi$ but $\alpha\neq\text{id}_{R_2}$.)
Indeed, the polynomial ring example you give in the comments of your post does not hold in general either. If we let $R_1=\mathbb{R}[x]$ and $R_2=\mathbb{R}[x,y]$, with $\psi:R_1\hookrightarrow R_2$ again the inclusion map, then once again $\text{rek}(\psi)=R_1^\times$ but $\psi$ is certainly not epi.
The problem in all of these examples is that $R_2$ can be very big compared to the image of $R_1$; hopefully the above examples clarify that point. (Note however, that – provided $R_2\neq\{0\}$ – the map $R_1\hookrightarrow \text{rek}(\psi)^{-1}R_1$ will still be injective, even if we use Fabio's stronger definition of $\text{rek}(\psi)$, because no element of $\text{rek}(\psi)$ can be a zero-divisor in $R_1$.)
Best Answer
Here is a simpler example explaining why your construction does not work.
Consider the ring $R_1 = R_2 = \mathbb Z[t]$. Now, instead of the identity map between $R_1[x] = \mathbb Z[t,x]$ and $R_2[x] = \mathbb Z[t,x]$, look at the map that switches $x$ and $t$: $f(x,t) \mapsto f(t,x)$. This map is an isomorphism, but your composition $$ R_1\hookrightarrow R_1[x]\overset{\sim}{\to}R_2[x]\twoheadrightarrow R_2, $$ is not; it maps $t$ to $0$.
So, even in the case where there is an isomorphism between $R_1$ and $R_2$, your construction does not necessarily give one.