When you divide, you are implicitly assuming that the number you are dividing by is not zero. In doing so, you exclude the possibility that the number in question is zero, and as such you may be eliminating correct answers.
For a very simple example, consider the case of the equation $x^2-x=0$.
There are two answers: $x=0$, and $x=1$. However, if you "divide by the variable", you can end up doing this:
$$\begin{align*}
x^2 - x & = 0\\
x^2 &= x &&\text{(adding }x\text{ to both sides)}\\
\frac{x^2}{x} &= \frac{x}{x} &&\text{(divide by }x\text{, which assumes }x\neq 0)\\
x &= 1.
\end{align*}$$
So you "lost" the solution $x=0$, because when you divided by $x$, you implicitly were saying "and $x\neq 0$". In order to "recover" this solution, you would have to consider "What happens if what I divided by is equal to $0$?"
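The lost root can be seen concretely with a small Python sketch (plain brute-force checking of integer candidates, purely illustrative):

```python
# Candidate roots of the original equation x^2 - x = 0.
before = [x for x in range(-5, 6) if x**2 - x == 0]
print(before)  # [0, 1]

# After dividing both sides by x, the equation reads x = 1;
# the root x = 0 silently disappears.
after = [x for x in range(-5, 6) if x == 1]
print(after)  # [1]
```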
For a more extreme example, consider something like
$$(x-1)(x-2)(x-3)(x-4)(x-5)(x-6)=0.$$
Since a product is equal to $0$ if and only if one of the factors is equal to $0$, there are six solutions to this equation: $x=1$, $x=2$, $x=3$, $x=4$, $x=5$, and $x=6$. Divide both sides by $x-1$, and you lose the solution $x=1$; divide both sides by $x-2$, and you lose $x=2$. Continue this way until you are left with $x-6=0$, and you have lost five of the six solutions. And if you then go ahead and divide by $x-6$, you get $1=0$, which has no solutions at all!
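The same brute-force check works for the six-factor product; a short Python sketch (the search range is an arbitrary choice that happens to cover all the roots):

```python
def p(x):
    """Left side of the original equation."""
    return (x - 1) * (x - 2) * (x - 3) * (x - 4) * (x - 5) * (x - 6)

def q(x):
    """Both sides divided by (x - 1), which silently assumes x != 1."""
    return (x - 2) * (x - 3) * (x - 4) * (x - 5) * (x - 6)

roots_p = [x for x in range(0, 8) if p(x) == 0]
roots_q = [x for x in range(0, 8) if q(x) == 0]
print(roots_p)  # [1, 2, 3, 4, 5, 6]
print(roots_q)  # [2, 3, 4, 5, 6] -- the solution x = 1 is gone
```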
Whenever you divide by something, you are asserting that something is not zero; but if setting it equal to $0$ gives a solution to the original equation, you will be excluding that solution from consideration, and so "eliminate" that answer from your final tally.
$y + 3x = 17\tag1$ How do I know the relationship between $x,y$ is "preserved" when I subtract $3x$ from both sides? Or should I not think in terms of "preserved relationships between $x, y$" and instead "maintaining equalities"?
The integer solution set of the conditional equation $(1)$ is $$\{(5-n,3n+2)\mid n\in\mathbb Z\}.$$ Not every operation performed on $(1)$ preserves this solution set; for example, $$xy + 3x^2 = 17x\tag2$$ has additional solutions $(0,y)$ for every integer $y$, such as $(0,1)$, while $$\frac yx + 3 = \frac{17}x\tag3$$ is missing the solution $(0,17).$ As such, the first operation is valid (every solution of $(1)$ still satisfies $(2)$), while the second isn't.
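These containments are easy to check by enumerating integer solutions over a finite window (a Python sketch; the window bounds are an arbitrary choice):

```python
# Integer solutions of (1): y + 3x = 17, and of (2): xy + 3x^2 = 17x,
# restricted to a finite search window.
xs, ys = range(-10, 11), range(-60, 61)
sols1 = {(x, y) for x in xs for y in ys if y + 3 * x == 17}
sols2 = {(x, y) for x in xs for y in ys if x * y + 3 * x**2 == 17 * x}

print(sols1 < sols2)            # True: every solution of (1) also solves (2)
print((0, 1) in sols2 - sols1)  # True: an extra solution from multiplying by x
```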
To show that two equations have the same solution set, choose an arbitrary solution of one equation, show it must be in the solution set of the other equation, and vice versa. So the proof would be along the lines of: to show eqA and eqB are equivalent, choose an arbitrary solution of eqA. We got to eqB from eqA through a sequence of operations such as multiply/divide/add/subtract, and each operation simply scaled both sides or added/subtracted a number from both sides, so this chosen solution hasn't been changed by the operations and must satisfy eqB too. Do the same thing for the other direction: grab an arbitrary solution of eqB, and we know it must satisfy eqA because, again, we only scaled both sides of the equation or added/subtracted a number.
This is not wrong, but also not the usual practice, where the reverse implication is typically not required. Furthermore, framing an inference or an operation on an equation as being either valid or invalid frequently feels more natural than literally thinking about whether the solution set is being narrowed down.
Best Answer
In some sense, you can think of an operation as affecting the solution set according to how much "information" the operation "loses." Ordinary operations (adding a number to both sides, multiplying by a nonzero number, etc.) don't change the solution set, because they are easily reversible, and so no information is lost.
This ultimately depends on the operation in question. Usually when taught in school, we talk about this in the sense of "extraneous solutions", for which a typical first example arises with
$$x = 1 \implies x^2 = 1$$
The solution set to the first equation is $\{1\}$ (trivially), but the latter has $\{-1,1\}$. This is because squaring is an operation which is not quite invertible: it loses information. The solution set changes to reflect that, however, and in particular you can think of it as accounting for all of the ways in which information is lost. Indeed, this holds in general over the complex numbers: the mapping $z \mapsto z^n$ ($n$ a positive integer) "loses" information, and introduces $n-1$ extra solutions in transforming $z=1$ into $z^n = 1$.
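Numerically, over a small grid of real candidates (a minimal Python sketch):

```python
candidates = [x / 2 for x in range(-6, 7)]  # -3.0, -2.5, ..., 3.0

s_before = [x for x in candidates if x == 1]     # solutions of x = 1
s_after = [x for x in candidates if x**2 == 1]   # solutions of x^2 = 1
print(s_before)  # [1.0]
print(s_after)   # [-1.0, 1.0] -- squaring introduced x = -1
```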
Another example would be with
$$x = \pi \implies \sin(x) = \sin(\pi) = 0$$
Again, we did the same thing on both sides, but we lost a lot more information now. The sine function is periodic and certainly not invertible (on the whole of its domain): in fact the solution set to the latter equation is
$$\left\{ \pi k \mid k \in \mathbb{Z} \right\}$$
an infinite set no less.
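This infinite family is easy to spot-check with the standard library (floating-point sine, so we test against a tolerance rather than exact zero):

```python
import math

# Every integer multiple of pi satisfies sin(x) = 0, up to rounding error.
multiples = [math.pi * k for k in range(-5, 6)]
print(all(abs(math.sin(x)) < 1e-9 for x in multiples))  # True
```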
We would call that a countably infinite set; multiplication by zero, e.g. going from
$$x=1 \implies 0 = 0$$
gives us uncountably many solutions (assuming we're working in real numbers): we have lost all information, and any $x$ works.
A neat observation, however, is that the "true" solution set, the one we care about, is always contained within the new one. This is why, when we test for extraneous solutions, we eliminate the ones that don't work but can still find some that do: they come from the original solution set.
In this sense, with our observations in mind, we can make a claim: if we apply the same function $h$ to both sides of an equation $f(x) = 0$, obtaining $h(f(x)) = h(0)$, then the solution set $S_f$ of the original equation is contained in the solution set of the new one.
I believe the same general idea should generalize fine to multivariate equations, within reason. For instance, consider the case of
$$x+y = 1 \implies x^2 + 2xy + y^2 = 1$$
which have the respective solution sets $S_1,S_2$ given by
$$\begin{align*} S_1 &:= \{(a,1-a) \mid a \in \mathbb{R}\} \\ S_2 &:= \{(a,1-a) \, , \, (a,-a-1) \mid a \in \mathbb{R}\} \end{align*}$$
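Enumerating integer points over a small window exhibits the containment $S_1 \subseteq S_2$ directly (a Python sketch; window bounds are arbitrary):

```python
window = range(-5, 6)
s1 = {(a, 1 - a) for a in window}                 # the line x + y = 1
s2 = {(a, b) for a in window for b in range(-10, 11)
      if (a + b) ** 2 == 1}                       # x^2 + 2xy + y^2 = 1

print(s1 <= s2)            # True: no solutions were lost
print((0, -1) in s2 - s1)  # True: from the second family y = -x - 1
```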
Similarly, for systems of equations, e.g. when we're looking to solve
$$\left\{ \begin{aligned} &f_1(x) = 0 \\ &f_2(x) = 0 \\ &\vdots \\ &f_n(x) = 0 \end{aligned} \right.$$
we, in some sense, are looking at the intersection $\bigcap_{i=1}^n S_{f_i}$ (using the notation from the proposition): the points which satisfy them all simultaneously.