This is not what you asked for, because it's not a proof. But it is an argument.
Consider some famous and unresolved problem of mathematics, such as the twin primes conjecture. (Or the Collatz conjecture, the Goldbach conjecture, or, until recently, the Fermat conjecture.) I am sure you recall that the conjecture says there are arbitrarily large pairs of numbers $p$ and $p+2$ that are both prime — in other words, infinitely many "twin primes".
It's easy to write a computer program which, given an input $N$, looks for a pair of twin primes larger than $N$, and which halts when it finds such a pair.
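Such a program really is only a few lines; here is a minimal Python sketch (the function names are mine, chosen for illustration):

```python
def is_prime(n):
    """Trial division; slow, but enough for illustration."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def twin_primes_above(N):
    """Search upward from N and halt as soon as a twin prime pair is found.
    If the twin primes conjecture is false, this loops forever for some N."""
    p = N + 1
    while True:
        if is_prime(p) and is_prime(p + 2):
            return (p, p + 2)
        p += 1

print(twin_primes_above(100))  # -> (101, 103)
```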
The twin primes conjecture is true if, and only if, this program halts on all inputs.
Therefore, if there were a reliable way to tell if a program halts on all inputs, the twin primes conjecture would be easy to resolve, and we could conclude from the fact that it is not resolved that mathematicians are all a bunch of dummies.
Now maybe your students don't care about the twin primes conjecture, and perhaps they are not sure that mathematicians are not all a bunch of dummies. But they are familiar with problems and puzzles in general, and they are probably familiar with the idea that it is sometimes not only hard to find a solution, but it can even be hard to tell ahead of time whether there is a solution at all. They can probably be persuaded that you can get a computer to search exhaustively for the solution to some problem, and halt and print it out if it finds one.
A solution to the halting problem would mean that a very large class of problems could be easily solved. You would write a program to search for a solution, and then, instead of running it, you would analyze the program to see if it would halt. If the answer was yes, you would know there was a solution, and all you had to do to find it was to run the program. On the other hand if the program would not halt, you wouldn't have to bother to run it because you would know there was no solution.
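The search half of that recipe is easy to write down; it is only the halting oracle that cannot exist. A generic sketch in Python (the names here are illustrative, not any real API):

```python
from itertools import count

def exhaustive_search(is_solution, candidates):
    """Try candidates in order; halt and return the first solution found.
    If no solution exists, this loops forever -- which is exactly why a
    halting oracle, if we had one, would answer existence questions
    without ever running the search."""
    for c in candidates:
        if is_solution(c):
            return c

# A decidable toy instance: the smallest positive integer whose square
# exceeds 50. Here the search is guaranteed to halt.
print(exhaustive_search(lambda k: k * k > 50, count(1)))  # -> 8
```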
That would be a very different world from the one we actually live in, and it may be possible to persuade the students of that.
Okay, so $2^{\log(n)} < n$ because the base of the logarithm is greater than $2$: writing the base as $b > 2$, we have $2^{\log_b(n)} = n^{\log_b 2}$ with $\log_b 2 < 1$. Now note that $2^{2 \log(n)} = (2^{\log(n)})^2$ to see that B grows faster than A.
Exponential growth is always faster than polynomial, so D has to be the fastest growing one. Furthermore, C grows faster than E because $n > \log(n)$ (the constant factor $\frac{1}{2}$ has no effect on asymptotic order). Finally, E grows faster than B, because $n^2 > x^2$ whenever $x < n$ (see the inequality on my first line).
So the order, from slowest to fastest growing, is A, B, E, C, D.
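If it helps to see numbers, here is a quick check in Python, assuming for concreteness that $\log$ is base $10$ (any base greater than $2$ behaves the same way):

```python
import math

for n in (10**2, 10**4, 10**8):
    a = 2 ** math.log10(n)        # the "A"-style term 2^(log n)
    b = 2 ** (2 * math.log10(n))  # the "B"-style term 2^(2 log n)
    print(n, a, b)                # a stays far below n, and b equals a**2
```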
General rules of thumb, which hold for sufficiently large $n$:
$$\begin{align} &\log^c(n) \leq n^{\epsilon}, \quad \text{for any } \epsilon > 0 \text{ and any constant } c, \\ \\ &n^k \leq k^n, \qquad \text{where } k > 1 \text{ is any constant.} \end{align} $$
As Chris noted above, we can write this as
$$ A \ll \log^c (n) \ll n^{\epsilon} \ll k^n, $$
where $A$ and $c$ are constants, $\epsilon > 0$, and $k$ is a real number with $k > 1$.
To actually prove these rules, you just compute limits (assuming you know calculus). To see that $\log (n) \ll n^{\epsilon}$, for example, apply L'Hôpital's rule:
$$ \lim_{n \to \infty} \frac{\log n}{n^{\epsilon}} = \lim_{n \to \infty} \frac{n^{-1}}{\epsilon n^{\epsilon - 1}} = \lim_{n \to \infty} \frac{1}{\epsilon n^{\epsilon}} = 0. $$
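A quick numeric sanity check of that limit (taking $\epsilon = 1/2$, say, and natural log) in Python:

```python
import math

# The ratio log(n) / n^eps should shrink toward 0 as n grows.
eps = 0.5
ratios = [math.log(n) / n ** eps for n in (10**3, 10**6, 10**9)]
print(ratios)  # strictly decreasing, heading to 0
```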
If you want to show that $\log^c(n) \ll n^{\epsilon}$ for any positive integer $c$, then just note that
$$ \lim_{n \to \infty} \frac{\log^c (n)}{n^\epsilon} \leq \left( \lim_{n \to \infty} \frac{\log (n)}{n^{\epsilon/c}} \right) \dots \left( \lim_{n \to \infty} \frac{\log(n)}{n^{\epsilon/ c}}\right) = 0 $$ by what we showed above. We can actually take $c$ to be any arbitrary real number and the above still holds. This is easy to see, since every real number is less than or equal to some integer (infinitely many, in fact!). I will leave it to you to show that $n \log^3 (n) \ll n^{4/3}$, for example. The more complicated functions work the same way, but we might need to be slightly creative when evaluating the limits.
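For that exercise, after cancelling a common factor of $n$ the ratio becomes $\log^3(n) / n^{1/3}$, and you can watch it die off numerically (natural log, in Python):

```python
import math

# After cancelling a factor of n, the ratio is log(n)**3 / n**(1/3).
ratios = [math.log(n) ** 3 / n ** (1 / 3) for n in (10**6, 10**12, 10**24)]
print(ratios)  # decreasing toward 0, as the limit argument predicts
```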
Hope this helps.