When you divide, you are implicitly assuming that the number you are dividing by is not equal to zero. By dividing, you are excluding the possibility that the number in question is zero, and as such you may be eliminating correct answers.
For a very simple example, consider the case of the equation $x^2-x=0$.
There are two answers: $x=0$, and $x=1$. However, if you "divide by the variable", you can end up doing this:
$$\begin{align*}
x^2 - x & = 0\\
x^2 &= x &&\text{(adding }x\text{ to both sides)}\\
\frac{x^2}{x} &= \frac{x}{x} &&\text{(divide by }x\text{, which assumes }x\neq 0)\\
x &= 1.
\end{align*}$$
So you "lost" the solution $x=0$, because when you divided by $x$, you implicitly were saying "and $x\neq 0$". In order to "recover" this solution, you would have to consider "What happens if what I divided by is equal to $0$?"
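As a quick sanity check, here is a minimal Python sketch (not part of the argument, just an illustration): testing both candidates against the original equation shows exactly which one the division by $x$ discards.

```python
# Candidates for x^2 - x = 0. Dividing by x implicitly assumes x != 0,
# so only x = 1 survives that step, even though x = 0 also satisfies
# the original equation.
candidates = [0, 1]

solutions_of_original = [x for x in candidates if x**2 - x == 0]
survives_division = [x for x in candidates if x != 0]

print(solutions_of_original)  # both roots satisfy the original equation
print(survives_division)      # x = 0 has been silently excluded
```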
For a more extreme example, consider something like
$$(x-1)(x-2)(x-3)(x-4)(x-5)(x-6)=0.$$
Since a product is equal to $0$ if and only if one of the factors is equal to $0$, there are six solutions to this equation: $x=1$, $x=2$, $x=3$, $x=4$, $x=5$, and $x=6$. Divide both sides by $x-1$, and you lose the solution $x=1$; divide both sides by $x-2$, you lose $x=2$. Continue this way until you are left with $x-6=0$, and you lost five of the six solutions. And if then you go ahead and divide by $x-6$, you get $1=0$, which has no solutions at all!
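The same bookkeeping can be sketched in a few lines of Python (an illustration only; the scan range is an arbitrary choice): each division by a factor $x-k$ strikes the root $x=k$ from the list.

```python
from math import prod

def f(x):
    # (x-1)(x-2)(x-3)(x-4)(x-5)(x-6)
    return prod(x - k for k in range(1, 7))

# All integer roots of the original equation.
roots = [x for x in range(0, 10) if f(x) == 0]

# After dividing by (x-1), (x-2), ..., (x-5), only solutions with
# x - k != 0 for k = 1..5 remain, i.e. only x = 6 survives.
remaining = [x for x in roots if all(x - k != 0 for k in range(1, 6))]

print(roots)      # six solutions
print(remaining)  # five of them are gone
```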
Whenever you divide by something, you are asserting that something is not zero; but if setting it equal to $0$ gives a solution to the original equation, you will be excluding that solution from consideration, and so "eliminate" that answer from your final tally.
Sorry, I read too quickly at first. You were on the right track, but you lost a logarithm at this step:
$$\frac{x^2}{x-\frac{1}{2}}=\log_3(3)$$
It should be
$$\log_2\left(\frac{x^2}{x-\frac{1}{2}}\right)=\log_3(3)$$
Now, what is $\log_3(3)$?
As percusse and GEdgar point out in their comments, the reason this seemingly simple equation is not solvable using simple algebra is that the LHS of $2^x = x^2$ is a transcendental function, i.e., it cannot be expressed as a polynomial. The closest it comes to a "polynomial" form is its Maclaurin series (see below).
Using pre-calculus techniques you can, for instance, take the logarithm of both sides:
$$2^x = x^2$$ $$\implies \ln(2^x) = \ln(x^2) \quad \forall x \ne 0 $$ $$\implies x \ln(2) = 2 \ln|x| $$ $$\implies \ln|x| = {x \ln(2) \over 2} \quad \textbf {(A)}$$
So the solutions to our problem are all values of $x$ that are roots of equation $\textbf{(A)}$. In pre-calculus, you can use graphing techniques to determine the answer.
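In the same spirit as reading the answer off a graph, here is a short numerical sketch (the bisection method and the bracketing intervals below are my own choices, not from the original) that locates the three real solutions of $2^x = x^2$:

```python
def g(x):
    # Zero exactly when 2^x = x^2.
    return 2**x - x**2

def bisect(a, b, tol=1e-12):
    # Standard bisection; assumes g(a) and g(b) have opposite signs.
    while b - a > tol:
        m = (a + b) / 2
        if g(a) * g(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

# Sign changes bracket one negative root and the two positive roots.
roots = [bisect(-1, 0), bisect(1, 3), bisect(3, 5)]
print(roots)  # roughly -0.7666..., 2, and 4
```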
Solving transcendental equations, in general, requires a variety of calculus techniques that are probably beyond the scope of this answer.
Infinite Series for ${2^x}$
Using Taylor's Theorem (which is part of calculus) we can show that:
$$e^u = \sum_{n=0}^{ \infty } {u^n \over n!} = 1 + {u^1 \over 1!} + {u^2 \over 2!} + {u^3 \over 3!} + {u^4 \over 4!} + \cdots \quad \textbf{(B)}$$
For historical reasons, $\textbf{(B)}$ is called the Maclaurin series for $e^u$. You can find Maclaurin series for a large number of functions that have certain properties.
For the purposes of this discussion, assume that (B) holds. We can use it to express an infinite series for $2^x$ by noting that $2 = e^{\ln(2)}$ and that $(a^x)^y = a^{xy}$:
$$ 2^x = \left(e^{\ln(2)}\right)^x = e^{x \ln(2)}$$
Substituting $u = x \ln(2)$ into the power series $\textbf{(B)}$, we get:
$$2^x = \sum_{n=0}^{ \infty } {(x \ln(2))^n \over n!} = 1 + {x \ln(2) \over 1!} + {(x \ln(2))^2 \over 2!} + {(x \ln(2))^3 \over 3!} + {(x \ln(2))^4 \over 4!} + \cdots$$
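As a numerical sanity check (a small Python sketch; the 30-term cutoff is an arbitrary choice that is ample for small $x$), the partial sums of the series $\sum_{n} (x \ln 2)^n / n!$ do converge to $2^x$:

```python
from math import log, factorial

def two_pow_series(x, terms=30):
    # Partial sum of sum_{n>=0} (x ln 2)^n / n!, the Maclaurin
    # series for e^u evaluated at u = x ln 2.
    u = x * log(2)
    return sum(u**n / factorial(n) for n in range(terms))

print(two_pow_series(3))   # close to 2**3 = 8
print(two_pow_series(-1))  # close to 2**-1 = 0.5
```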
As you can see, solving (A) without knowing more about the behavior of $e^x$ is intractable. That is what calculus is all about ;) once you get into it, you will see that these problems become solvable, although the solutions are by no means trivial.