I assume you mean "...if and when you still have uniform convergence on compact subsets of $(0,2\pi)$?"
This is in the nature of what is called a localization theorem. These go back to Riemann, who proved
that the convergence of the Fourier series of an $L^1$ function $f$ *at* a point $x$ depends only on the behavior of $f$ in an arbitrarily small neighborhood of $x$. I'll stick my neck out a little and say that I wouldn't be too surprised if there is a generalization of Riemann's localization theorem saying that locally uniform convergence near $x$ likewise depends only on the behavior of $f$ near $x$ (but that is pure conjecture on my part).
I believe that the problem of characterizing the sets of divergence for classical Fourier series is more or less open for all interesting classes ($C$, $L^\infty$, $L^p$ with $p>1$).
The strongest result that I'm aware of is due to Buzdalin, who showed that any null set $E$ of type $F_\sigma\cap G_\delta$ is a set of divergence for the Fourier series of some continuous complex-valued function ("Trigonometric Fourier series of continuous functions diverging on a given set", Math. USSR Sbornik, 24 (1974)).
The characterization problem is, however, mostly solved for several other orthogonal systems, including the Haar and Franklin systems. There is also a very recent paper by Karagulyan where it is proved, in particular, that
A necessary and sufficient condition for a set $E \subset [0, 1]$ to be a set of divergence
for the sequence of $(C, \alpha)$-means ($\alpha>0$) of the Fourier series of some function $f \in L^\infty[0, 1]$ is that $E$ is a $G_{\delta\sigma}$-set of measure $0$.
(See G.A. Karagulyan, "Characterization of the sets of divergence for sequences of operators with the localization property", Sbornik: Mathematics, 202 (2011), pp. 9–33.)
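As an illustrative sketch (not tied to Karagulyan's construction), the smoothing effect of $(C,1)$-means, i.e. Fejér means, is easy to see numerically. The example below takes the square wave $f(x)=\operatorname{sign}(\sin x)$, whose Fourier series is $(4/\pi)\sum_k \sin((2k+1)x)/(2k+1)$; since the Fejér kernel is nonnegative, the means never exceed $\sup|f|=1$, unlike the raw partial sums.

```python
import numpy as np

def fejer_mean(x, N):
    """(C,1) mean sigma_N of the square wave sign(sin x):
    each harmonic n is damped by the triangular weight 1 - n/(N+1);
    only odd harmonics have nonzero coefficients 4/(pi*n)."""
    n = np.arange(1, N + 1)
    coeff = np.where(n % 2 == 1, 4.0 / (np.pi * n), 0.0) * (1.0 - n / (N + 1))
    return np.sin(np.outer(x, n)) @ coeff

x = np.linspace(0.001, np.pi - 0.001, 20001)
sigma = fejer_mean(x, 400)
print(sigma.max())            # stays below 1: no Gibbs overshoot
print(abs(sigma[10000] - 1))  # near x = pi/2 the means are already close to f
```

This is only a picture of why Cesàro averaging tames divergence; the theorem above is about the much subtler question of exactly which null sets can occur as divergence sets for such means.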
To complicate things further, people tend to distinguish between the sets of divergence and unbounded divergence. A set $E \subset [0, 1]$ is said to be a set of divergence (resp. unbounded divergence) for a series of functions
$$\sum_{n=1}^{\infty}f_n(x),\qquad x\in[0,1],$$
if the series diverges for $x \in E$ and converges for $x \in [0, 1] \setminus E$ (resp. diverges unboundedly for $x \in E$).
One may then formulate two optimistic working conjectures.
Conjecture 1. Every $G_{\delta\sigma}$-set $E$ of measure $0$ is a set of divergence for the Fourier series of some function $f \in C[0, 1]$.
Conjecture 2. Every $G_{\delta}$-set $E$ of measure $0$ is a set of unbounded divergence for the Fourier series of some function $f \in C[0, 1]$.
Conjecture 2 was explicitly formulated by P.L. Ul'yanov in the late 1960s. Both conjectures seem to be open.
Best Answer
A Fourier series truncated at order $n$ is the best approximation to the given function, in the $L^2$ sense, by trigonometric polynomials of order $n$. As such, small rapid deviations don't matter much. Since there is a limit to how large the derivatives of a trigonometric polynomial of fixed order can be (without the coefficients being large), in order to fit such a polynomial to a discontinuity it pays to overshoot a bit on each side of the discontinuity to “gather speed”, so you can get from one value to the other fast. When I say it “pays”, I mean that what you lose by approximating the function poorly at the overshoot, you more than gain back by making the jump faster.
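This overshoot is the classical Gibbs phenomenon, and it is easy to check numerically (a sketch using the square wave $f(x)=\operatorname{sign}(\sin x)$ as a stand-in for any jump discontinuity): the maximum of the partial sums does not shrink as $n$ grows, but tends to $(2/\pi)\operatorname{Si}(\pi)\approx 1.179$, an overshoot of roughly $9\%$ of the half-jump on each side.

```python
import numpy as np

# Square wave jumping from -1 to 1 at x = 0; its partial Fourier sums are
# S_N(x) = (4/pi) * sum over odd n <= N of sin(n*x)/n.

def partial_sum(x, N):
    n = np.arange(1, N + 1, 2)  # odd harmonics only
    return (4.0 / np.pi) * np.sin(np.outer(x, n)) @ (1.0 / n)

# Sample finely near 0, where the first (largest) overshoot peak sits.
x = np.linspace(1e-4, np.pi / 2, 50001)
for N in (51, 201, 801):
    print(N, partial_sum(x, N).max())
# The maxima do not decay with N: they stay near (2/pi)*Si(pi) ~ 1.1790,
# even though the limit function never exceeds 1.
```

The peak narrows and moves toward the discontinuity as $N$ grows, but its height is essentially constant, which is exactly the trade-off described above: a fixed fractional overshoot buys a faster transition across the jump.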