If a set of vectors fails to have an additive identity, can it still have an additive inverse?

vector-spaces

Let $V = \mathbb{R}^2$

Let addition in $V$ be defined as:

$(u, v) + (x, y) = (u + x, 0)$

Let scalar multiplication be defined as:

$a(u, v) = (au, av)$
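For concreteness, evaluating both operations on a sample pair of vectors:

$$ (2,3) + (5,7) = (2+5,\, 0) = (7,0), \qquad 4\,(2,3) = (8,12). $$

Addition discards the second coordinates entirely, while scalar multiplication is the usual one on $\mathbb{R}^2$.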

Clearly, there is no additive identity for $V$. Can $V$ still have additive inverses if we define the zero vector arbitrarily?

For example, could I define the zero vector in $V$ to be $(1, 0)$? Then the additive inverse of $(u, v)$ would be $(-u + 1, a)$, where $a$ can be any real number:

$(u, v) + (-u + 1, a) = (1, 0)$
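Expanding with the addition rule above:

$$ (u, v) + (-u + 1, a) = (u + (-u + 1),\, 0) = (1, 0). $$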

Best Answer

Not in any usual sense, no. Note first of all that your structure is not a vector space, because in particular it is not a group under addition... But you probably realize this already.

You can in theory define a "zero vector" however you want, but normally, a zero element is required to be an additive neutral element, i.e. $x+0=x$ and $0+x=x$, or at least one of those equalities should hold (right-neutral, left-neutral). There is no choice of "vector" in your structure that would satisfy this, so you're out of luck.
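To spell out why: for any candidate $(e_1, e_2)$ and any vector whose second coordinate is nonzero,

$$ (u, v) + (e_1, e_2) = (u + e_1,\, 0) \ne (u, v) \quad\textrm{whenever}\quad v \ne 0, $$

and since the addition is commutative, the same failure occurs with the summands swapped, so no element is even one-sidedly neutral.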


That was the boring answer... So what nice things can we say? Well, $V$ has commutative and associative addition, as well as scalar multiplication that almost works. What we lack is a zero vector (and thereby also additive inverses), as well as one of the distributivity laws for scalar multiplication: $$ (a+b)(x,y) \ne a(x,y) + b(x,y) \quad\textrm{whenever}\quad (a+b)y\ne0, $$ since the left-hand side is $\big((a+b)x,\,(a+b)y\big)$ while the right-hand side is $(ax,ay)+(bx,by) = \big((a+b)x,\,0\big)$.

The only sensible definition of a "pseudo-zero vector" in $V$ is probably $\mathbf 0 := (0,0)$. It at least has the nice properties that $$ \mathbf 0+\mathbf 0=\mathbf 0 \quad\textrm{and}\quad a\mathbf 0 = \mathbf 0, $$ as well as the special cases $(x,0)+\mathbf 0=(x,0)$. With this we have non-unique additive inverses: $$ (x,y) + (u,v) = \mathbf 0 \iff (u,v) \in \{(-x, s) \mid s\in\Bbb R\}. $$

But what could we do with this? We would have, for example, $$ \mathbf u + \mathbf v = \mathbf w \implies \mathbf u + \mathbf 0 = \mathbf w - \mathbf v $$ for any choice of $-\mathbf v$. Thus we can sort of do algebra, as long as we remember that $\mathbf u + \mathbf 0 \ne \mathbf u$ in general. We also have the nice fact that $(-1)\mathbf v$ is a possible choice for $-\mathbf v$.
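Just to check that last claim with the definitions: for $\mathbf v = (x,y)$,

$$ \mathbf v + (-1)\mathbf v = (x,y) + (-x,-y) = (x - x,\, 0) = (0,0) = \mathbf 0, $$

so $(-1)\mathbf v$ is the inverse $(-x, s)$ with the particular choice $s = -y$.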


Finally, I should point out that we have the obvious injection $$ \varphi:\Bbb R \hookrightarrow V: x\mapsto (x,0). $$ This acts like a homomorphism (i.e. a linear map), meaning $$ \varphi(x+y)=\varphi(x)+\varphi(y)\quad \textrm{and} \quad \varphi(ax)=a\varphi(x). $$ So $V$ has the vector space $\mathbb R$ embedded in it. Note also that $\varphi(0)=(0,0)$, which is kind of nice.
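Both properties are immediate from the definitions of the operations on $V$:

$$ \varphi(x)+\varphi(y) = (x,0)+(y,0) = (x+y,\,0) = \varphi(x+y), \qquad a\,\varphi(x) = a(x,0) = (ax,\,0) = \varphi(ax). $$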


EDIT. Inspired by Mozibur Ullah's answer: the additive structure of $V$ is in fact a commutative semigroup. In the language of semigroups, $(0,0)$ is the unique idempotent element of $V$, which is why it was the only natural choice of $\mathbf 0$. An idempotent element is always the identity element of a unique maximal subgroup; in this case, that maximal subgroup is $\{(x,0) \mid x \in \Bbb R\} \cong \Bbb R$. In particular, $\{(x,0) \mid x \in \Bbb R\}$ is the unique maximal subgroup of $V$.
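For the record, the idempotency computation that singles out $(0,0)$:

$$ (u,v)+(u,v) = (2u,\,0) = (u,v) \iff u = 0 \ \textrm{and}\ v = 0, $$

and on $\{(x,0) \mid x \in \Bbb R\}$ the addition reduces to $(x,0)+(y,0)=(x+y,0)$, i.e. the group $(\Bbb R,+)$.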
