Why does adding/subtracting/multiplying/dividing a number on both sides of an equality maintain the solution set

algebra-precalculus

So if you add, subtract, multiply, or divide the same number on both sides of an equation, the equality is obviously maintained, since you're doing the "same thing" to both sides. But even though the equality is preserved, the numbers on each side change, so does that change the solution set? I know the answer is no, but what's the reason?

My thinking is to choose an arbitrary solution $x$ of equation $A$, show it must satisfy equation $B$, and vice versa.

Is there a general rule/proof?

I think it's easy to see with simple equations such as $2x + 82 = 7$, but what about equations with many variables?

Best Answer

In some sense, you can think of the operation applied to both sides as affecting the solution set based on what "information" the operation "loses." Ordinary operations (adding a number, multiplying by a nonzero number, etc.) don't change the solution set, because they are easily reversible, and so no information is lost.

This ultimately depends on the operation in question. When this is taught in school, it is usually discussed in terms of "extraneous solutions", for which a typical first example arises with

$$x = 1 \implies x^2 = 1$$

The solution set of the first equation is $\{1\}$ (trivially), but the latter has $\{-1,1\}$. This is because squaring is an operation which is not quite invertible: it loses information. The solution set changes to reflect that, and in particular you can think of it as accounting for all of the ways in which information is lost. Indeed, this holds in general for complex numbers: the mapping $z \mapsto z^n$ ($n$ a positive integer) "loses" information, and introduces $n-1$ extra solutions in transforming $z=1$ into $z^n = 1$.
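As a quick sanity check, here is a minimal sketch (assuming sympy is available; the variable names are just illustrative) comparing the two solution sets over the reals:

```python
# Compare the solution sets of x = 1 and x^2 = 1 over the reals.
from sympy import Eq, S, solveset, symbols

x = symbols('x')

before = solveset(Eq(x, 1), x, domain=S.Reals)     # {1}
after = solveset(Eq(x**2, 1), x, domain=S.Reals)   # {-1, 1}

print(before, after)              # {1} {-1, 1}
print(before.is_subset(after))    # True: the original solution survives
```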

Another example would be with

$$x = \pi \implies \sin(x) = \sin(\pi) = 0$$

Again, we did the same thing to both sides, but this time we lost a lot more information. The sine function is periodic and certainly not invertible (on the whole of its domain): in fact, the solution set of the latter equation is

$$\left\{ \pi k \mid k \in \mathbb{Z} \right\}$$

an infinite set, no less.

We would call that a countably infinite set; multiplication by zero, e.g. going from

$$x=1 \implies 0 = 0$$

gives us uncountably many solutions (assuming we're working in real numbers): we have lost all information, and any $x$ works.
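Here is another small sketch in the same spirit (again assuming sympy; the printed form of the infinite set may differ by version), contrasting the sine case with the multiply-by-zero case:

```python
# The sine case: x = pi has one solution, sin(x) = 0 has infinitely many.
from sympy import Eq, S, pi, sin, solveset, symbols

x = symbols('x')

print(solveset(Eq(x, pi), x, domain=S.Reals))      # {pi}
print(solveset(Eq(sin(x), 0), x, domain=S.Reals))  # a union of image sets equal to {k*pi : k in Z}

# The multiply-by-zero case: 0 = 0 holds for every sample value of x.
samples = [-2.0, -0.5, 0.0, 1.0, 3.5]
print(all(0 * t == 0 for t in samples))            # True: all information is gone
```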

A neat observation, however, is that the "true" solution set (the one we care about) is always contained within the new one. This is why, when we test for extraneous solutions, we eliminate the candidates that don't work but can still find some that do: those come from the original solution set.

In this sense, with our observations in mind, we can make a claim:

Proposition: For the equation $f(x) = 0$ with solution set $S_f := \{ x \in \mathbb{R} \mid f(x) = 0 \}$, the application of another function $g$ (suppose it preserves $0$ for simplicity) to reach $(g \circ f)(x) = 0$ ensures $$S_f \subseteq S_{g \circ f}$$ with equality if $g$ is invertible (in the sense that there is another function $g^{-1}$ such that $g(g^{-1}(x)) = g^{-1}(g(x)) = x$ for all $x$).

Proof Sketch: Firstly, observe:

  • $x \in S_f \iff f(x) = 0$
  • Since $f(x) = 0$ and $g(0)=0$, we get $(g \circ f)(x) = g(f(x)) = g(0) = 0$.
  • Hence, $x \in S_{g \circ f}$.

To see equality when $g$ is invertible, note that the remaining inclusion to show is $S_{g \circ f} \subseteq S_f$. Suppose $g$ is invertible; then:

  • $x \in S_{g \circ f} \iff g(f(x)) = 0$
  • Since $g$ is invertible and $g(0) = 0$, applying $g^{-1}$ to both sides gives $f(x) = g^{-1}(g(f(x))) = g^{-1}(0) = 0$
  • Hence, $x \in S_f$
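As a rough numerical illustration of the proposition (a sketch over a finite sample grid, not a proof; the functions and grid are just illustrative), take $f(x) = x - 1$ and the zero-preserving $g(y) = y(y+2)$, so that $(g \circ f)(x) = x^2 - 1$ matches the squaring example above:

```python
# Numerically compare the roots of f and g o f on a sample grid.
def approx_roots(h, points, tol=1e-9):
    """Sample points where h is (numerically) zero."""
    return {p for p in points if abs(h(p)) < tol}

def f(x):
    return x - 1        # f(x) = 0 has the single solution x = 1

def g(y):
    return y * (y + 2)  # g(0) = 0, but g is not invertible

def gf(x):
    return g(f(x))      # (g o f)(x) = (x - 1)(x + 1) = x^2 - 1

grid = [k * 0.5 for k in range(-10, 11)]   # -5.0, -4.5, ..., 5.0

S_f = approx_roots(f, grid)
S_gf = approx_roots(gf, grid)

print(S_f)           # {1.0}
print(S_gf)          # {1.0, -1.0}: an extraneous solution appeared
print(S_f <= S_gf)   # True, as the proposition predicts
```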

Personal Thought: Can the equality condition be extended to an "if and only if"?

I believe the same general idea should generalize fine to multivariate equations, within reason. For instance, consider the case of

$$x+y = 1 \implies x^2 + 2xy + y^2 = 1$$

which have the respective solution sets $S_1,S_2$ given by

$$\begin{align*} S_1 &:= \{(a,1-a) \mid a \in \mathbb{R}\} \\ S_2 &:= \{(a,1-a) \, , \, (a,-a-1) \mid a \in \mathbb{R}\} \end{align*}$$
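A quick membership check (a plain-Python sketch; the helper names are made up) confirms that both families of points satisfy the squared equation, while only the first satisfies the original one:

```python
# Points (a, 1 - a) solve both equations; points (a, -a - 1) solve only the squared one.
def on_line(x, y):      # x + y = 1
    return abs(x + y - 1) < 1e-9

def on_square(x, y):    # x^2 + 2xy + y^2 = 1, i.e. (x + y)^2 = 1
    return abs(x**2 + 2*x*y + y**2 - 1) < 1e-9

for a in [-2.0, 0.0, 3.5]:
    p, q = (a, 1 - a), (a, -a - 1)
    print(p, on_line(*p), on_square(*p))   # ... True True
    print(q, on_line(*q), on_square(*q))   # ... False True
```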

Similarly, for systems of equations, e.g. when we're looking to solve

$$\left\{ \begin{aligned} &f_1(x) = 0 \\ &f_2(x) = 0 \\ &\vdots \\ &f_n(x) = 0 \end{aligned} \right.$$

we are, in some sense, looking at the intersection $\bigcap_{i=1}^n S_{f_i}$ (using the notation from the proposition): the points which satisfy them all simultaneously.
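That intersection viewpoint can also be sketched numerically on a sample grid (a toy system, with made-up functions, just to illustrate the idea):

```python
# Toy system: x + y = 1 and x - y = 0, solved by intersecting sampled solution sets.
def f1(x, y):
    return x + y - 1

def f2(x, y):
    return x - y

grid = [(i * 0.25, j * 0.25) for i in range(-8, 9) for j in range(-8, 9)]

S1 = {p for p in grid if abs(f1(*p)) < 1e-9}   # sampled solutions of f1 = 0
S2 = {p for p in grid if abs(f2(*p)) < 1e-9}   # sampled solutions of f2 = 0

print(S1 & S2)   # {(0.5, 0.5)}: the only grid point satisfying both equations
```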
