In 1933, Kurt Gödel showed that the class called $\lbrack\exists^*\forall^2\exists^*, {\mathrm{all}}, (0)\rbrack$ was decidable. These are the formulas that begin with $\exists a\exists b\ldots \exists m\forall n\forall p\exists q\ldots\exists z$, with exactly two $\forall$ quantifiers, with no intervening $\exists$s. These formulas may contain arbitrary relations amongst the variables, but no functions or constants, and no equality symbol. Gödel showed that there is a method which takes any formula in this form and decides whether it is satisfiable. (If there are three $\forall$s in a row, or an $\exists$ between the $\forall$s, there is no such method.)
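For concreteness, here is a small illustration of my own (not taken from Gödel's paper): the sentence $$\exists x_1\,\exists x_2\;\forall y_1\,\forall y_2\;\exists z\;\bigl(R(x_1,y_1,z)\wedge S(x_2,y_2)\bigr)$$ has the required shape, whereas $\forall y_1\,\forall y_2\,\forall y_3\,\exists z\;R(y_1,y_2,y_3,z)$ (three $\forall$s in a row) and $\forall y_1\,\exists x\,\forall y_2\;R(x,y_1,y_2)$ (an $\exists$ between the $\forall$s) fall outside the class.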
In the final sentence of the same paper, Gödel added:
In conclusion, I would still like to remark that Theorem I can also be proved, by the same method, for formulas that contain the identity sign.
Mathematicians took Gödel's word for it and proved results derived from this one, until the mid-1960s, when Stål Aanderaa realized that Gödel had been mistaken and the argument he had in mind would not work. In 1983, Warren Goldfarb showed that not only was Gödel's argument invalid, but his claimed result was actually false: the enlarged class, with the identity sign allowed, is undecidable.
Gödel's original 1933 paper is Zum Entscheidungsproblem des logischen Funktionenkalküls (On the decision problem for the functional calculus of logic) which can be found on pages 306–327 of volume I of his Collected Works. (Oxford University Press, 1986.) There is an introductory note by Goldfarb on pages 226–231, of which pages 229–231 address Gödel's error specifically.
Euler ranks higher than Gauss in my opinion. Having said that, I do not know whether there is a recorded instance of Gauss making a mathematical error, but it is worth pointing out that the work of the 18th- and early 19th-century mathematicians was plagued by a lack of rigor. Only later, in the 19th century, did the strides made in analysis give mathematics a solid grounding.
Best Answer
Anyone back then working on anything connected with calculus could hardly avoid making mistakes, since there simply was no logically coherent formulation of the basic definitions at that time. Trying to prove something about continuous functions without a definition of continuity is going to lead to problems.
Fourier in particular is famous for stating that any periodic function is equal to the sum of its Fourier series. This is nonsense (see the comment below). But it's one of the all-time great errors. Trying to make sense of this, to see what could actually be proved in this direction, was one motivation for the development of modern rigorous analysis. In fact sorting this out was part of the motivation for at least three major developments that spring to mind:
1. People like Cauchy, Weierstrass et al. invent epsilons and deltas. Now we can actually state and prove things about calculus rigorously.
2. But the theory of Fourier series, although it now made sense logically, still didn't work as well as we'd like; Lebesgue and others invent the Lebesgue integral, and the theory of Fourier series gets a big boost.
3. Cantor, studying which exceptional sets can be ignored in uniqueness theorems for trigonometric series (the "sets of uniqueness"), is led to derived sets and transfinite numbers, and from there to set theory.
(The first two items above are hugely well known. For more on the third, regarding Cantor, set theory and Fourier series, you might look here or here. Will R suggests you look here; I haven't seen that one, my internet being too slow for YouTube, but a lecture by Walter Rudin on the topic is certain to be great.)
Comment
I had no idea that the assertion that there exists a (continuous) function with a divergent Fourier series would be controversial. Writing down an explicit example is not easy; any continuous function that Fourier ever encountered does have a convergent Fourier series.
But proving the existence is very simple, from the right point of view. Say $s_n(f)$ is the $n$-th partial sum of the Fourier series for $f$ and $D_n$ is the Dirichlet kernel, so that $$s_n(f)(0)=\frac1{2\pi}\int_0^{2\pi}f(t)D_n(t)\,dt.$$The norm of the map $f\mapsto s_n(f)(0)$ as a linear functional on $C(\Bbb T)$ is the same as the norm of $D_n$ regarded as a complex measure, which is in turn equal to $\|D_n\|_1$. It's easy to see that $\|D_n\|_1\ge c\log n$ (a sketch follows below). So the Uniform Boundedness Principle, aka the Banach-Steinhaus Theorem, shows that there exists $f\in C(\Bbb T)$ for which the sequence $\bigl(s_n(f)(0)\bigr)$ is unbounded; in particular, the Fourier series of this $f$ diverges at $0$.
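For completeness, here is one standard way to obtain that lower bound (this computation is my addition to the answer above; any constant $c\le 4/\pi^2$ works, with $\log$ the natural logarithm). Writing the Dirichlet kernel in closed form, $$D_n(t)=\sum_{k=-n}^{n}e^{ikt}=\frac{\sin\bigl((n+\frac12)t\bigr)}{\sin(t/2)},$$ and using $0<\sin(t/2)\le t/2$ on $(0,\pi]$ together with the substitution $u=(n+\frac12)t$, $$\|D_n\|_1=\frac1\pi\int_0^{\pi}|D_n(t)|\,dt\;\ge\;\frac2\pi\int_0^{(n+\frac12)\pi}\frac{|\sin u|}{u}\,du\;\ge\;\frac2\pi\sum_{k=1}^{n}\frac1{k\pi}\int_{(k-1)\pi}^{k\pi}|\sin u|\,du\;=\;\frac4{\pi^2}\sum_{k=1}^{n}\frac1k\;\ge\;\frac4{\pi^2}\log n.$$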