Let's start by fixing some notation: I'll call $X \sqcup Y$ the disjoint union of sets (which is an instance of a categorical coproduct, in the category $\mathbf{Set}$ of sets and functions). I'll use $A + B$ for coproducts in a general category, e.g. the category $\mathbf{Mon}$ of monoids or the category $\mathbf{Grp}$ of groups. So, it'd make perfect sense to write something like
In the category $\mathbf{Set}$ of sets, the coproduct is given by the disjoint union: $X + Y := X \sqcup Y$.
Definition of coproducts of monoids
Now, assume you're given two monoids $\mathbf{A} = (A, \cdot, u)$ and $\mathbf{B} = (B, \times, e)$. We call $A$ the carrier of $\mathbf{A}$ (and similarly for $B$ and $\mathbf{B}$). Before going through the construction of their coproduct $\mathbf{A} + \mathbf{B}$, it might be instructive to see why its carrier cannot be $A \sqcup B$.
What goes wrong with plain disjoint unions?
For $A \sqcup B$ to be the carrier of $\mathbf{A} + \mathbf{B}$, we'd need to define two things:
- An operation $\bullet : (A \sqcup B) \times (A \sqcup B) \to (A \sqcup B)$
- An element $1 \in A \sqcup B$
Let's start by looking at the latter. What should we choose? There are two natural candidates: $(u, 1)$, the identity from $\mathbf{A}$, and $(e, 2)$, the identity from $\mathbf{B}$. Let's put a pin in this problem: we'll get back to it in a second.
We'd also need to define $x \bullet y$ for any two elements $x, y \in A \sqcup B$. What should it be? There are four possible cases:
- $x = (x_1, 1)$ and $y = (y_1, 1)$;
- $x = (x_2, 2)$ and $y = (y_2, 2)$;
- $x = (x_1, 1)$ and $y = (y_2, 2)$;
- $x = (x_2, 2)$ and $y = (y_1, 1)$.
It's clear what $x \bullet y$ should be in the first two cases: $(x_1 \cdot y_1, 1)$ and $(x_2 \times y_2, 2)$ respectively. It's also immediate that case 1 forces the identity element to be $(u, 1)$, while case 2 forces it to be $(e, 2)$. But the identity cannot be both at the same time!
Another problem arises in cases 3 and 4: what should we pick? Remember, we don't just want to choose any value of $x \bullet y$ that makes $A \sqcup B$ a monoid: we're trying to define a coproduct, and coproducts satisfy a universal property! Intuitively, we want the most general definition possible, and there doesn't seem to be a reasonable choice.
What we'd like to do is to expand the set $A \sqcup B$, so that $x \bullet y$ is a new element whenever we're in case 3 or 4, but is what we described above in cases 1 and 2. And while we're at it, we'd want $(u, 1)$ and $(e, 2)$ to be the same element, which will then be the identity element.
The correct construction
We do this by following the recipe you describe: take $(A \sqcup B)^*$ to be the set of words whose letters are either elements of $A$ or of $B$, and then take $C \subset (A \sqcup B)^*$ to be the set of reduced such words. The claim is that we can define $\bullet$ and $1$, so that $(C, \bullet, 1)$ is a monoid, and moreover it satisfies the universal property of coproducts: $\mathbf{A} + \mathbf{B} = (C, \bullet, 1)$.
The first thing to notice is that now, neither $[(u, 1)]$ nor $[(e, 2)]$ is reduced; the reduction of both is the empty word $[]$. So it would make sense to define $1 := []$, which is exactly what we do.
Then, we can also notice that $(A \sqcup B)^*$ is already a monoid under concatenation of words, although this operation doesn't take the monoid structures of $\mathbf{A}$ or $\mathbf{B}$ into account. Not only that: $1 = []$ is the identity with respect to concatenation!
So we might guess that to define $x \bullet y$ for $x, y \in C$, we first want to concatenate them (obtaining an element $x y \in (A \sqcup B)^*$ which might fail to be reduced) and then reduce it to an element of $C$.
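If a computational picture helps, here is a minimal sketch in Python of the concatenate-then-reduce operation. The names `Monoid`, `reduce_word` and `star` are mine, purely for illustration: each monoid is assumed to be given by its operation and identity, and letters are tagged with 1 or 2 to record which carrier they come from.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Monoid:
    op: Callable[[Any, Any], Any]  # binary operation
    unit: Any                      # identity element

def reduce_word(word, A, B):
    """Reduce a word whose letters are pairs (value, tag), with tag 1 for A and tag 2 for B."""
    monoids = {1: A, 2: B}
    out = []
    for value, tag in word:
        if value == monoids[tag].unit:        # drop letters that are identities
            continue
        if out and out[-1][1] == tag:         # merge adjacent letters from the same monoid
            merged = monoids[tag].op(out[-1][0], value)
            out.pop()
            if merged != monoids[tag].unit:   # the merged letter may itself be an identity
                out.append((merged, tag))
        else:
            out.append((value, tag))
    return out

def star(x, y, A, B):
    """The coproduct operation: concatenate the two words, then reduce."""
    return reduce_word(x + y, A, B)

# The identity of the coproduct is the empty word [].
```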
A concrete example
Here I'll drop the indices, since it will (hopefully!) be clear where each element comes from.
Before proving this is indeed a coproduct, let's look at a concrete example: let's pick $\mathbf{A} := (\mathbb{Z}, +, 0)$ the integers under addition, and $\mathbf{B} := (\{a, b\}^*, \cdot, \epsilon)$ the set of words on two letters, under concatenation (writing $\epsilon$ for the empty word). How do the reduction rules and operation on $\mathbf{A} + \mathbf{B}$ work?
Let's look at an element which is not reduced: $x = [ab, 12, \epsilon, -7, ba]$. What is the corresponding element in $\mathbf{A} + \mathbf{B}$? First we remove the empty word, since it is the identity in $\mathbf{B}$, obtaining $x^\prime = [ab, 12, -7, ba]$; then we notice that two adjacent elements both come from $\mathbf{A}$, so we perform the operation on them, obtaining $x^{\prime \prime} = [ab, 5, ba]$. We can now see that what we got is reduced: it is indeed an element of $\mathbf{A} + \mathbf{B}$.
Now, let's look at $\bullet$. Fix two reduced elements, say $x = [ab, 5]$ and $y = [-5, ba, -2]$. To compute $x \bullet y$ we first concatenate them obtaining $xy = [ab, 5, -5, ba, -2]$ and then reduce it to $x \bullet y = [abba, -2]$ (by first adding 5 and -5 and obtaining 0, then removing it, then concatenating $ab$ and $ba$ obtaining $abba$).
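With the sketch from above (same illustrative names), this worked example becomes:

```python
from operator import add

Z = Monoid(op=add, unit=0)                  # A = (Z, +, 0)
W = Monoid(op=lambda s, t: s + t, unit="")  # B = ({a, b}*, concatenation, empty word)

# Reducing x = [ab, 12, eps, -7, ba]:
print(reduce_word([("ab", 2), (12, 1), ("", 2), (-7, 1), ("ba", 2)], Z, W))
# [('ab', 2), (5, 1), ('ba', 2)]

# Computing [ab, 5] bullet [-5, ba, -2]:
print(star([("ab", 2), (5, 1)], [(-5, 1), ("ba", 2), (-2, 1)], Z, W))
# [('abba', 2), (-2, 1)]
```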
This is indeed a coproduct!
Let's fix monoids $\mathbf{A}, \mathbf{B}$. To show that our construction is indeed a coproduct, what we need to show is that for any monoid $\mathbf{C}$, giving monoid homomorphisms $f : \mathbf{A} \to \mathbf{C}$, $g : \mathbf{B} \to \mathbf{C}$ is the same thing as giving a monoid homomorphism $h : \mathbf{A} + \mathbf{B} \to \mathbf{C}$. Let's do this:
- $f, g$ determine $h$: given an element $x = [x_1, \dots, x_n]$ which is not necessarily reduced, we can define $h(x)$ as follows. First, since $h$ is to be a monoid homomorphism, we'd want $h(x) = h([x_1]) \cdot \dots \cdot h([x_n])$, so we only need to define $h([x_i])$. But we know that either $x_i = (a_i, 1)$ or $x_i = (b_i, 2)$: we can then define $h([x_i])$ to be $f(a_i)$ in the former case, and $g(b_i)$ in the latter.
Notice that at this point, we still don't know that $h$ is a monoid homomorphism (and we haven't used the fact that $f, g$ are monoid homomorphisms). Formally we'd need to work with the reduction rules, but here I'll just sketch the relevant argument (and you can assemble it into a rigorous proof). Consider $x = [(x^\prime, 1)]$ and $y = [(y^\prime, 1)]$: is it the case that $h(x) \cdot h(y) = h(x \bullet y)$? Well, we can unwind our definition. $h(x) \cdot h(y) = f(x^\prime) \cdot f(y^\prime) = f(x^\prime \cdot y^\prime) = h([(x^\prime \cdot y^\prime, 1)]) = h(x \bullet y)$ (assuming $x^\prime \cdot y^\prime$ is not the identity, of course; what happens in that case?). The same applies for $g$.
- $h$ determines $f, g$: for this we can just define inclusions $in_1 : \mathbf{A} \to \mathbf{A} + \mathbf{B}$ by setting $in_1(a) = [(a, 1)]$ if $a$ is not the identity, and $[]$ if it is. The fact this is a monoid homomorphism follows from the reduction rules (we can just compute $in_1(a_1) \bullet in_1(a_2) = [(a_1, 1)] \bullet [(a_2, 1)] = [(a_1 \cdot a_2, 1)] = in_1(a_1 \cdot a_2)$), and we can define $f = h \circ in_1$. A similar argument involving $in_2(b) := [(b, 2)]$ applies for $g$.
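To make the two directions of the correspondence concrete, here is how they might look in the same sketch as before (again, all names are made up for illustration; $\mathbf{C}$ is given by its operation and identity, and $f, g$ are assumed to be monoid homomorphisms):

```python
def mediating(f, g, C):
    """Given f : A -> C and g : B -> C, build h : A + B -> C by mapping each letter and multiplying in C."""
    def h(word):
        result = C.unit
        for value, tag in word:
            result = C.op(result, f(value) if tag == 1 else g(value))
        return result
    return h

def in1(a, A, B):
    """Inclusion of A into A + B: the one-letter word [(a, 1)], reduced (so the identity goes to [])."""
    return reduce_word([(a, 1)], A, B)

def in2(b, A, B):
    return reduce_word([(b, 2)], A, B)

# For instance, with f = identity on Z and g = word length (a homomorphism {a,b}* -> Z):
# mediating(lambda n: n, len, Z)([("ab", 2), (5, 1)])  ==  2 + 5  ==  7
```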
Questions
We're now ready to answer your questions.
- The answer to this one is no. That is, if $a_1, \dots, a_n \in A$ this doesn't mean $[a_1, \dots, a_n] \in A$. Their product is, but the sequence isn't! On the other hand, the last part of your question is correct: the reduction of $[a_1, \dots, a_n, b_1, \dots, b_m]$ is an element of the carrier of $\mathbf{A} + \mathbf{B}$.
- The way you apply the reduction rules seems ok to me.
- I don't think there's a misprint, though the notation might be slightly confusing: a less confusing way to write it might be $in_i : X_i \to C$ sending $x \mapsto in_i(x) :=$ reduction of $[(x, i)]$. In other words, it takes one element $x \in X_i$ to the one-element, reduced sequence with just $x$ marked by the index of the monoid $X_i$ it comes from (unless $x$ is an identity, in which case it gets sent to the empty sequence). I'll stress one thing though: $X_i$'s elements are not sequences (as per my answer to question 1)!
- I'm not really sure what you're doing there, but I'll have to stress the fact that $C$ is not $X_1 \sqcup X_2$: it's a subset of the free monoid on the disjoint union: $C \subseteq (X_1 \sqcup X_2)^*$.
Some general remarks
You seem to confuse monoids with free monoids: elements of a monoid are not sequences in general, that's only the case if the monoid is free over some set (what you call the set of words over some alphabet).
In the case of free monoids, by the way, your intuition is kind of correct: if you fix (disjoint) alphabets $A$ and $B$, the coproduct of the free monoids, $A^* + B^*$, is the free monoid over the disjoint union, $(A \sqcup B)^*$.
Another thing I'd like to remark is that in the mathematical literature, people will usually define the carrier of the coproduct of monoids in a slightly different way. Fix two monoids $\mathbf{A}$, $\mathbf{B}$; take the free monoid over the disjoint union of the carriers $(A \sqcup B)^*$ and then define an equivalence relation $\sim$ on it: two words are related iff they reduce to the same word. It's then relatively painless to show that this relation interacts nicely with the monoid structure (that is, it is a congruence), so we can take the quotient monoid, and then prove that it satisfies the universal property of coproducts.
What we're doing here is fixing a canonical representative for each equivalence class (the word which is already reduced) and presenting this quotient as a subset instead.
Best Answer
Consider the functions $a,b,u:\mathbb{N}^2\to\mathbb{N}^2$ defined by $a(m,n)=(m+1,n)$, $b(m,n)=(\max(m-1,0),n)$, and $u(m,n)=(n,m)$. Let $S$ be the monoid of functions $\mathbb{N}^2\to\mathbb{N}^2$ generated by $a,b,$ and $u$. Explicitly, $S$ is the set of all functions that can be constructed by repeatedly right-shifting or left-shifting on each coordinate separately, and then optionally swapping the coordinates. The only units of $S$ are $1$ and $u$.
Now note that $ba=1$ and $a$ commutes with $ubu$. So in the absolution $\mathfrak{a}(S)$, $a$ and $b$ will commute and thus be units. Note also that there is a homomorphism $f:S\to\mathbb{Z}$ which sends a function to the sum of the total net shifts that it does on each coordinate, so $f(a)=1$, $f(b)=-1$, and $f(u)=0$. Since $f(u)=0$ this induces a homomorphism $\mathfrak{a}(S)\to\mathbb{Z}$ that sends $a$ to $1$ and $b$ to $-1$, so $a,b\neq 1$ in $\mathfrak{a}(S)$. Thus $\mathfrak{a}(S)$ is not absolute. (In fact, this homomorphism $\mathfrak{a}(S)\to\mathbb{Z}$ is an isomorphism.)
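If you want to convince yourself of the two identities used above ($ba = 1$, reading $ba$ as "first $a$, then $b$", and $a$ commuting with $ubu$), here is a quick spot check in Python on a grid of inputs; it is only a sanity check on finitely many points, not a proof:

```python
from itertools import product

a = lambda p: (p[0] + 1, p[1])
b = lambda p: (max(p[0] - 1, 0), p[1])
u = lambda p: (p[1], p[0])

ubu = lambda p: u(b(u(p)))  # decrements the second coordinate (truncated at 0)

for p in product(range(10), repeat=2):
    assert b(a(p)) == p              # ba = 1
    assert a(ubu(p)) == ubu(a(p))    # a commutes with ubu
print("identities hold on the sample grid")
```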