This answer aims to give a demonstration that is as elementary as possible, because such demonstrations frequently get at the essential idea. The only facts needed (beyond the simplest kind of algebraic manipulations) are linearity of integration (or, equivalently, of expectation), the change of variables formula for integrals, and the axiomatic result that a PDF integrates to unity.
Motivating this demonstration is the intuition that when $f_X$ is symmetric about $a$, then the contribution of any quantity $G(x)$ to the expectation $\mathbb{E}_X(G(X))$ will have the same weight as the quantity $G(2a-x)$, because $x$ and $2a-x$ are on opposite sides of $a$ and equally far from it. Provided, then, that $G(x) = -G(2a-x)$ for all $x$, everything cancels and the expectation must be zero. The relationship between $x$ and $2a-x$, then, is our point of departure.
Notice, by writing $y = x + a$, that the symmetry can just as well be expressed by the relationship
$$f_X(y) = f_X(2a-y)$$
for all $y$. For any measurable function $G$, the one-to-one change of variable from $x$ to $2a-x$ changes $dx$ to $-dx$, while reversing the direction of integration, implying
$$\mathbb{E}_X(G(X)) = \int G(x) f_X(x)dx = \int G(x) f_X(2a - x)dx = \int G(2a-x)f_X(x)dx.$$
Assuming this expectation exists (that is, the integral converges), the linearity of the integral implies
$$\int \left(G(x) - G(2a - x)\right)f_X(x)dx = 0.$$
Consider the odd moments about $a$, which are defined as the expectations of $G_{k,a}(X) = (X-a)^k$, $k = 1, 3, 5, \ldots$. In these cases
$$\begin{aligned}
G_{k,a}(x) - G_{k,a}(2a-x) &= (x-a)^k - (2a-x-a)^k \\
&= (x-a)^k - (a-x)^k \\
&= (1^k - (-1)^k)(x-a)^k \\
&= 2(x-a)^k,
\end{aligned}$$
precisely because $k$ is odd. Applying the preceding result gives
$$0 = \int \left(G_{k,a}(x) - G_{k,a}(2a - x)\right)f_X(x)dx = 2\int (x-a)^k f_X(x)dx.$$
Because the right hand side is twice the $k$th moment about $a$, dividing by $2$ shows that this moment is zero whenever it exists.
Finally, the mean (assuming it exists) is
$$\mu_X = \mathbb{E}_X(X) = \int x f_X(x)dx = \int (2a-x)f_X(x)dx.$$
Once again exploiting linearity, and recalling that $\int f_X(x)dx=1$ because $f_X$ is a probability density, we can add the two equal expressions for $\mu_X$ to obtain
$$2\mu_X = \int x f_X(x)dx + \int (2a-x)f_X(x)dx = 2a\int f_X(x)dx = 2a\times 1 = 2a$$
with the unique solution $\mu_X = a$. Therefore all our previous calculations of moments about $a$ are really the central moments, QED.
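As a quick numerical illustration (a sketch, not part of the proof), we can check both conclusions for an example density symmetric about $a$; here a normal density centered at $a = 2$ is assumed, evaluated on a fine symmetric grid:

```python
import numpy as np

# Sketch: numerically verify that for a density symmetric about a,
# the mean equals a and the odd moments about a vanish.
# Example density: normal with center a = 2 and sd s = 1.5.
a, s = 2.0, 1.5
x = np.linspace(a - 10 * s, a + 10 * s, 400001)
dx = x[1] - x[0]
f = np.exp(-0.5 * ((x - a) / s) ** 2) / (s * np.sqrt(2 * np.pi))

mean = np.sum(x * f) * dx                           # close to a = 2
odd = [np.sum((x - a) ** k * f) * dx for k in (1, 3, 5)]
print(mean, odd)  # mean near 2; odd moments near 0, up to quadrature error
```

The grid is symmetric about $a$, so the pairwise cancellation of $x$ against $2a - x$ described above happens term by term in the Riemann sum.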
Postword
The need to divide by $2$ in several places is related to the fact that there is a group of order $2$ acting on the measurable functions (namely, the group generated by the reflection $x \mapsto 2a - x$ about $a$). More generally, the idea of a symmetry can be generalized to the action of any group. The theory of group representations implies that when the character of that action on a function is not trivial, it is orthogonal to the trivial character, and that means the expectation of the function must be zero. The orthogonality relations involve adding (or integrating) over the group, whence the size of the group constantly appears in denominators: its cardinality when it is finite or its volume when it is compact.
The beauty of this generalization becomes apparent in applications with manifest symmetry, such as in mechanical (or quantum mechanical) equations of motion of symmetrical systems exemplified by a benzene molecule (which has a 12-element symmetry group). (The QM application is most relevant here because it explicitly calculates expectations.) Values of physical interest, which typically involve multidimensional integrals of tensors, can be computed with no more work than was involved here, simply by knowing the characters associated with the integrands. For instance, the "colors" of various symmetric molecules (their spectra at various wavelengths) can be determined ab initio with this approach.
The question asks: "is the $n$th cumulant equivalent to the $n$th central moment (i.e. about the mean)?"
The answer is: only for $n = 2$ or $3$. (For $n = 1$ the first cumulant is the mean, while the first central moment is always zero.)
Here, for example, are the first 9 cumulants of the population in terms of the central moments $\mu_i$ of the population, obtained using mathStatica's CumulantToCentral function:
[mathStatica output not reproduced here.]
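Since the mathStatica output is not reproduced above, here is a sketch of the same computation in sympy (my substitution, not the original tool): treat the central moment-generating function $M_c(t) = 1 + \sum_{n\ge 2}\mu_n t^n/n!$ formally and read the cumulants off the Taylor coefficients of $K(t) = \log M_c(t)$.

```python
import sympy as sp

t = sp.symbols('t')
N = 6  # highest order to compute
mu = {n: sp.Symbol(f'mu{n}') for n in range(2, N + 1)}

# Central MGF: mu_0 = 1 and mu_1 = 0 for central moments.
Mc = 1 + sum(mu[n] * t**n / sp.factorial(n) for n in range(2, N + 1))

# Cumulants of order >= 2 are the Taylor coefficients of log Mc, times n!.
K = sp.expand(sp.log(Mc).series(t, 0, N + 1).removeO())
kappa = {n: sp.expand(K.coeff(t, n) * sp.factorial(n)) for n in range(2, N + 1)}
for n, expr in kappa.items():
    print(f'kappa_{n} =', expr)
```

The output confirms the claim above: $\kappa_2 = \mu_2$ and $\kappa_3 = \mu_3$, but $\kappa_4 = \mu_4 - 3\mu_2^2$ and the higher cumulants involve products of lower central moments.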
More generally
In a multivariate world, a product cumulant is identical to the corresponding product central moment only if 1 < (sum of the indexes) $\le$ 3. For example, $\kappa_{i,j,k}$ will be equal to $\mu_{i,j,k}$ provided $1 < i+j+k \le 3$. Here are some bivariate product cumulants expressed in terms of product central moments of the population:
[mathStatica output not reproduced here.]
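A few standard instances of these bivariate relations (quoted here from the classical literature rather than from the missing output, so worth double-checking) are
$$\kappa_{1,1} = \mu_{1,1},\qquad \kappa_{2,1} = \mu_{2,1},\qquad \kappa_{2,2} = \mu_{2,2} - \mu_{2,0}\,\mu_{0,2} - 2\mu_{1,1}^2.$$
The first two illustrate agreement when the index sum is $2$ or $3$; the third shows how the agreement breaks down at order $4$.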
Best Answer
The equation given by Wikipedia connects cumulants to moments in general. A proof of a formula connecting cumulants to central moments can be found in "A Recursive Formulation of the Old Problem of Obtaining Moments from Cumulants and Vice Versa".
Let $K(t)$ be the cumulant-generating function and $M(t)$ the moment-generating function. The relationship between the two is \begin{equation} M(t)=\exp{\left[K(t)\right]}. \end{equation} The proof follows by differentiating this expression (so that $M'(t) = K'(t)M(t)$) and noting, by the Leibniz rule, that the $n$th derivative can be written as \begin{equation} D^n[M(t)]=\sum_{i=0}^{n-1}\binom{n-1}{i}D^{n-i}[K(t)]\,D^i[M(t)], \end{equation} where $D^k$ denotes the $k$th derivative. Now setting $t=0$ (so that $\theta_0 = 1$): \begin{equation} \theta_n=\sum_{i=0}^{n-1}\binom{n-1}{i}\kappa_{n-i}\theta_i = \kappa_n+\sum_{i=1}^{n-1}\binom{n-1}{i}\kappa_{n-i}\theta_i. \end{equation} Rearranging yields \begin{equation} \kappa_n = \theta_n-\sum_{i=1}^{n-1}\binom{n-1}{i}\kappa_{n-i}\theta_i. \end{equation} Taking $M(t)$ to be the central moment-generating function $\mathbb{E}\left[e^{t(X-\mu)}\right]$ makes the $\theta_i$ the central moments, so this expresses the cumulants in terms of the central moments (except that $\kappa_1$ then comes out as $0$ rather than the mean).
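The final recursion can be sketched directly in code (the function name is mine). Fed the central moments of a distribution (so $\theta_1 = 0$), it returns the cumulants, with $\kappa_1$ coming out as $0$ rather than the mean, as noted above:

```python
from math import comb

def cumulants_from_moments(theta):
    """Convert moments theta[0..N] (theta[0] must be 1) to cumulants.

    Implements kappa_n = theta_n - sum_{i=1}^{n-1} C(n-1, i) kappa_{n-i} theta_i.
    """
    n_max = len(theta) - 1
    kappa = [0] * (n_max + 1)   # kappa[0] is unused
    for n in range(1, n_max + 1):
        kappa[n] = theta[n] - sum(comb(n - 1, i) * kappa[n - i] * theta[i]
                                  for i in range(1, n))
    return kappa

# Central moments of the standard normal: 1, 0, 1, 0, 3, 0, 15.
print(cumulants_from_moments([1, 0, 1, 0, 3, 0, 15])[1:])
# All cumulants come out 0 except kappa_2 = 1, as expected for a normal.
```

Running it on the normal central moments is a convenient check, since the normal is the unique distribution whose cumulants vanish beyond order 2.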