The best way to solve this is Sturm's theorem, which gives an algorithm for computing the number of distinct real roots of any polynomial with real coefficients. The Wikipedia page is quite good, but I'll outline the method here.
Let $f(x)$ be a polynomial. We define a sequence as follows:
$$P_0=f$$
$$P_1=f'$$
$$P_{n+2}=-\left(P_{n}\text{ mod }P_{n+1}\right)$$
where $f'$ is the derivative of $f$ and, for polynomials $P$ and $Q$, we define $P\text{ mod }Q$ to be the remainder of dividing $P$ by $Q$ - that is, the unique polynomial $R$ of degree less than $\deg Q$ such that $P=cQ+R$ for some quotient polynomial $c$. (This is also just the remainder you get by polynomial long division.) The sequence ends at its last nonzero term, i.e. once the next remainder would be zero.
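This "mod" operation is easy to mechanize. Here is a minimal sketch in Python, representing a polynomial as its list of coefficients, highest degree first, and using exact rational arithmetic; the name `poly_divmod` is my own, not a library routine.

```python
# Polynomial long division: returns quotient and remainder.
# Coefficient lists are highest degree first; Fraction keeps it exact.
from fractions import Fraction

def poly_divmod(p, q):
    """Return (quotient, remainder) of p divided by q."""
    p = [Fraction(c) for c in p]
    q = [Fraction(c) for c in q]
    quot = [Fraction(0)] * max(len(p) - len(q) + 1, 1)
    while len(p) >= len(q):
        shift = len(p) - len(q)
        factor = p[0] / q[0]
        quot[len(quot) - 1 - shift] = factor
        for i, c in enumerate(q):   # subtract factor * x^shift * q
            p[i] -= factor * c
        p.pop(0)                    # leading coefficient is now zero
    return quot, p

# Dividing x^3 + 2x + 1 by 3x^2 + 2:
quotient, remainder = poly_divmod([1, 0, 2, 1], [3, 0, 2])
print([str(c) for c in quotient])   # ['1/3', '0']  i.e. quotient x/3
print([str(c) for c in remainder])  # ['4/3', '1']  i.e. remainder (4/3)x + 1
```

Negating that remainder gives the next term of the chain, which is how the example below is produced.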
For instance, suppose we want to know how many real roots $f(x)=x^3+2x+1$ has using this method - of course, we know the answer is $1$, but we should check. We get the following chain:
$$P_0=x^3+2x+1$$
$$P_1=3x^2+2$$
$$P_2=-\frac{4}3x-1$$
$$P_3=-\frac{59}{16}.$$
For any real number $a$, we define $V(a)$ to be the number of sign changes in the sequence $P_0(a),P_1(a),P_2(a),P_3(a)$, where we ignore any zeros. Assuming that neither $a$ nor $b$ is itself a root of $f$, Sturm's theorem states that, for $a<b$, $V(a)-V(b)$ is the number of distinct real roots of $f$ between $a$ and $b$.
Note that $V(-\infty)=\lim_{a\rightarrow-\infty}V(a)$ and $V(\infty)=\lim_{b\rightarrow\infty}V(b)$ are easy to compute by looking at the leading term of each polynomial. For instance, here we have $V(-\infty)=2$: towards $-\infty$, $P_0$ tends to $-\infty$, $P_1$ to $\infty$, $P_2$ to $\infty$, and $P_3$ is negative, giving the sign pattern $-,+,+,-$ and hence two sign changes. Then $V(\infty)=1$, because $P_0$ and $P_1$ are positive near $\infty$ while $P_2$ and $P_3$ are negative. So this polynomial has $V(-\infty)-V(\infty)=1$ real root, as expected, since it is an increasing function.
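The whole procedure (build the chain, then compare sign-change counts) can be sketched as follows. All helper names here are my own, and I use exact rational arithmetic to avoid rounding trouble near sign changes. Every real root of $x^3+2x+1$ lies well inside $(-1000,1000)$ (its Cauchy bound is $3$), so those endpoints stand in for $\pm\infty$:

```python
# Sturm's theorem: count distinct real roots of p in (a, b).
from fractions import Fraction

def trim(p):
    """Drop leading zero coefficients (keep at least one entry)."""
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:]

def poly_mod(p, q):
    """Remainder of p divided by q; coefficients highest degree first."""
    p = [Fraction(c) for c in p]
    while len(p) >= len(q):
        if p[0] != 0:
            f = p[0] / q[0]
            for i, c in enumerate(q):
                p[i] -= f * c
        p.pop(0)
    return trim(p)

def derivative(p):
    n = len(p) - 1
    return [Fraction(c) * (n - i) for i, c in enumerate(p[:-1])]

def sturm_chain(p):
    chain = [[Fraction(c) for c in p], derivative(p)]
    while any(chain[-1]):
        r = poly_mod(chain[-2], chain[-1])
        if not any(r):
            break                      # remainder is zero: chain ends
        chain.append([-c for c in r])  # P_{n+2} = -(P_n mod P_{n+1})
    return chain

def evaluate(p, x):
    v = Fraction(0)
    for c in p:
        v = v * x + c                  # Horner's rule
    return v

def sign_changes(chain, x):
    vals = [evaluate(p, x) for p in chain if any(p)]
    vals = [v for v in vals if v != 0]             # ignore zeros
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

def count_real_roots(p, a, b):
    """Distinct real roots of p in (a, b), assuming p(a), p(b) != 0."""
    chain = sturm_chain(p)
    return sign_changes(chain, a) - sign_changes(chain, b)

print(count_real_roots([1, 0, 2, 1], -1000, 1000))  # 1
print(count_real_roots([1, 0, 2, 1], -1, 0))        # 1 (the root is near -0.45)
```

On $x^3+2x+1$ this reproduces the chain above exactly: $P_2=-\frac43 x-1$ and $P_3=-\frac{59}{16}$.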
This can be a bit laborious to do by hand, but it always works for any polynomial.
The only trick to proving this, at least in the squarefree case, is to consider what happens to the sign changes in this sequence as one moves along the real line: the number of sign changes can only change near a root of one of the polynomials $P_n$. However, note that, for some polynomial $c$, we have the following relationship:
$$P_{n}=cP_{n+1}-P_{n+2}$$
Note that if $P_{n+1}$ has a root at a point where $P_n$ does not, then near that root $P_n$ and $P_{n+2}$ must have opposite signs, since $P_n=-P_{n+2}$ at the root. So long as $P_0$ is squarefree (i.e. has no multiple roots), no two consecutive terms share a root, so this always happens. As a result, whatever sign $P_{n+1}$ takes on either side of its root, the triple $P_n,P_{n+1},P_{n+2}$ contributes exactly one sign change there, so the zero of $P_{n+1}$ does not affect the count. However, if $P_0$ has a root, then the number of sign changes decreases by one there, since, near that root, $f$ and $f'$ have opposite signs before the root and equal signs after it.
Best Answer
An algebraic argument: The fact that a polynomial of degree $n$, where $n \ge 1$, has at most $n$ roots can be proved without using machinery from calculus. When $n=1$ the result is clear: there is in fact precisely one root. Suppose now that we know that the result is true for any polynomial of degree $k$. We show that it is then true for any polynomial of degree $k+1$.
Let $P(x)$ have degree $k+1$. If $P(x)$ has no roots, we are finished. If $P(x)$ has a root $\alpha$, then the polynomial $x-\alpha$ divides $P(x)$. So $P(x)$ is identically equal to $(x-\alpha)Q(x)$ for some quotient polynomial $Q(x)$. Now $Q(x)$ has degree $k$, so by assumption has no more than $k$ roots. It follows that $P(x)$ has at most $k+1$ roots, namely the roots of $Q(x)$, together with $\alpha$.
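For a concrete instance of the induction step (an illustrative example of mine, not from the original argument): $P(x)=x^3-2x^2-x+2$ has the root $\alpha=1$, and dividing by $x-1$ gives
$$x^3-2x^2-x+2=(x-1)(x^2-x-2)=(x-1)(x-2)(x+1),$$
so the roots of the cubic are $\alpha=1$ together with the at most two roots of the quadratic quotient.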
The above proof works for polynomials with coefficients in any field.
A calculus argument: For polynomials with real coefficients, we can prove the result by using "calculus" tools. These are not really appropriate, since the calculus tools are far more sophisticated than the algebraic tools used in the above proof, and we end up with a proof that only applies to polynomials with real coefficients. But as an exercise, let's do it.
The tool that gives the quickest argument is the Mean Value Theorem, actually a special case of it usually called Rolle's Theorem. This says that if $f(x)$ is continuous on the interval $[a,b]$, differentiable on $(a,b)$, and $f(a)=f(b)=0$, then there is a $c$, with $a<c<b$, such that $f'(c)=0$. Informally, between any two roots of $f(x)$, there is a root of the derivative $f'(x)$.
Back to polynomials. Suppose that we know that any polynomial of degree $k$, with real coefficients, has at most $k$ real roots. We want to show that a polynomial $P(x)$ of degree $k+1$ has at most $k+1$ real roots.
Suppose to the contrary that $P(x)$ has at least $k+2$ roots. Pick any $k+2$ of them, and list them in increasing order $\alpha_1$, $\alpha_2$, and so on up to $\alpha_{k+2}$. By Rolle's Theorem, between any two consecutive roots of $P(x)$, there is a root of $P'(x)$; since these roots lie in $k+1$ disjoint open intervals, $P'(x)$ has at least $k+1$ distinct roots. But $P'(x)$ is a polynomial of degree $k$, so by assumption can have no more than $k$ roots. This contradiction completes the proof.
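As a numerical illustration of the interleaving that Rolle's Theorem guarantees (my own example, not part of the proof): $P(x)=x^3-x$ has roots $-1,0,1$, and its derivative $P'(x)=3x^2-1$ has a root strictly between each consecutive pair.

```python
# Check that each root of P'(x) = 3x^2 - 1 lies strictly between
# consecutive roots of P(x) = x^3 - x.
import math

p_roots = [-1.0, 0.0, 1.0]
dp_roots = [-1 / math.sqrt(3), 1 / math.sqrt(3)]  # roots of 3x^2 - 1

for (a, b), c in zip(zip(p_roots, p_roots[1:]), dp_roots):
    assert a < c < b
    print(f"root {c:+.4f} of P' lies in ({a}, {b})")
```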
We can avoid Rolle's Theorem, and use only the Intermediate Value Theorem. We want to show that between any two roots of $P(x)$, there is a root of $P'(x)$. The easiest calculation uses the fact that if $\alpha$ is a root of $P(x)$, then $x-\alpha$ divides $P(x)$. It is slightly unpleasant, since we have to take special care with roots of multiplicity greater than $1$.
The Fundamental Theorem of Algebra: I would guess that what you were told is a distorted version of an important comment about proofs of what is usually called the Fundamental Theorem of Algebra. Roughly speaking, this result says that a polynomial of degree $n \ge 1$ with complex coefficients has, if you take multiplicities into account, exactly $n$ roots in the field $\mathbb{C}$ of complex numbers.
There are many different proofs of this result. The classical ones depend on fairly subtle arguments about functions of two variables.
The end of the Wikipedia article has a list of useful references.
The following important result is a consequence of the Fundamental Theorem of Algebra. Conversely, the Fundamental Theorem of Algebra can be derived fairly simply from it. The result mentions only real numbers.
Theorem: Let $P(x)$ be a polynomial of degree $\ge 1$, with real coefficients. Then $P(x)$ can be expressed as a product $A_1(x)A_2(x)\cdots A_k(x)$, where each $A_i(x)$ has real coefficients, and is of degree $1$ or $2$.
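For example (an illustration of mine, not part of the theorem statement): $x^4+1$ has no real roots at all, yet it still factors into two real quadratics,
$$x^4+1=\left(x^2-\sqrt{2}\,x+1\right)\left(x^2+\sqrt{2}\,x+1\right),$$
and neither quadratic factor has real roots, which is why degree-$2$ factors cannot be avoided in general.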
There are proofs of this that use a fair bit of algebra (in the modern sense of algebra) and very little function theory. The only bit of "calculus" needed is to show that any odd degree polynomial has a real root, and that any positive number has a square root, both of which are consequences of the Intermediate Value Theorem. Proofs that use so "little" machinery from analysis tend to boast about it by saying that they "only" use the Intermediate Value Theorem.