Losing Solutions to a Rational Equation

algebra-precalculus, rational-functions

To solve a rational equation, say $\dfrac{3}{x-2}=\dfrac{1}{x-1}+\dfrac{7}{(x-1)(x-2)}$, the usual strategy is to multiply both sides of the equation by the least common denominator (LCD). In this case, the LCD is $(x-1)(x-2)$, and multiplying both sides by it yields $3(x-1)=(x-2)+7$. The question is now reduced to solving a linear equation, and the solutions to this linear equation will be the solutions to the rational equation (assuming they don't create a zero in any of the denominators).
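For concreteness, carrying the example through (this is just the arithmetic implied above, written out step by step):
$$3(x-1)=(x-2)+7 \;\Longrightarrow\; 3x-3=x+5 \;\Longrightarrow\; 2x=8 \;\Longrightarrow\; x=4,$$
and since $x=4$ makes neither $x-1$ nor $x-2$ zero, it is a valid solution of the original equation.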

However, this method of multiplying both sides by the LCD seems funny to me. How are we 100% certain that in doing so we don't "lose" any potential solutions to the rational equation? Is there any rigorous way to prove that this method gives us $\textbf{all}$ of the solutions to a rational equation? I apologize if this question is trivial.

Intuitively, it seems as though when we multiply both sides by the LCD, we are just getting rid of the "rejected solutions" that will cause a zero in any of the denominators.

Best Answer

Losing or gaining solutions to an equation happens when you perform an operation that is not uniquely invertible (or apply the inverse of such an operation). For example, with squaring we see: $$y=x\to y^2=x^2\to y=\pm x$$
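A concrete instance: squaring both sides of $x=1$ gives $x^2=1$, which has the extra solution $x=-1$ that the original equation does not.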

The function $x^2$ is not injective and does not have a unique inverse, and so that step produces extra solutions.

However, multiplying by a given polynomial is fine at any point where the polynomial is non-zero, as the inverse of that operation is just dividing by said polynomial.
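Put another way, whenever $p(x)\neq 0$,
$$A(x)=B(x)\iff p(x)\,A(x)=p(x)\,B(x),$$
so every solution of the original equation (which necessarily lies in the domain where the denominators, and hence the LCD, are non-zero) survives the multiplication; nothing is lost.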

So, in your question, multiplying by $(x-1)(x-2)$ is fine, but you must discount $x=1$ and $x=2$ if they arise as solutions. In this case, they don't arise, so you haven't gained any extraneous solutions; and because multiplying by the LCD is invertible wherever the original equation is defined, you can't have lost any either.
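If you want to double-check this mechanically, here is a minimal sketch using SymPy (my choice of tool, not part of the answer): it clears the denominators exactly as above and then rejects any candidate that makes a denominator zero.

```python
# A minimal check with SymPy: solve the cleared (linear) equation,
# then discard any candidate that zeroes a denominator of the
# original rational equation.
from sympy import symbols, Eq, solve, simplify

x = symbols('x')

lhs = 3/(x - 2)
rhs = 1/(x - 1) + 7/((x - 1)*(x - 2))

# Multiply both sides by the LCD (x - 1)(x - 2) and simplify.
lcd = (x - 1)*(x - 2)
cleared = Eq(simplify(lhs*lcd), simplify(rhs*lcd))

candidates = solve(cleared, x)   # solutions of the cleared equation
excluded = {1, 2}                # zeros of the denominators
solutions = [s for s in candidates if s not in excluded]

print(solutions)                 # [4]
```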
