> According to the central limit theorem, the more samples you take from a population, no matter what shape the distribution is, the more normal your sampling distribution becomes.
This is incorrect in several respects. (I expect you've been told something close to this, but it's not the case.)
"no matter what shape the distribution is" ... not so. There are distributions for which the CLT doesn't hold (other limit theorems may apply).
"the more samples you take [...] the more normal your sampling distribution becomes" ... the central limit theorem doesn't actually assert this. It does happen (under certain conditions), but it's not really the CLT that says that. An example of a result that does say something (more or less) along these lines would be the Berry-Esseen theorem, which gives a bound on how far the distribution of the standardized mean can be from the standard normal distribution, and which bound decreases as $1/\sqrt{n}$.
> So why, despite the central limit theorem, is the sampling distribution of $r$ not normally distributed?
The sample correlation statistic $r$ can be construed as a kind of (scaled) mean (especially if you define it with an $n$ denominator), so the CLT should apply to it. But the CLT is a theorem about the limit as $n\to\infty$; you have finite $n$. At finite sample sizes the distribution won't be normal; indeed, this should be obvious, since $r$ is bounded. Those bounds tend to make the distribution skewed as the population correlation $\rho$ moves toward $1$ or $-1$.
Indeed, even the effect you attributed to the CLT (that increasing the sample size tends to make the sampling distribution of means more normal) applies to $r$ as well. As you take larger samples, the distribution of $r$ tends to be more nearly normal in a very wide variety of circumstances.
You can investigate this fairly easily using simulation.
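Here's one way such a simulation might look (a sketch of my own, assuming a bivariate normal population; the choice $\rho=0.8$, the sample sizes, and the seed are arbitrary). It draws many samples at each size, computes $r$ for each, and reports the skewness of the resulting sampling distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_r(n, rho, reps=5000):
    """Sample correlations from `reps` bivariate-normal samples of size n."""
    cov = [[1.0, rho], [rho, 1.0]]
    x = rng.multivariate_normal([0.0, 0.0], cov, size=(reps, n))
    a = x[:, :, 0] - x[:, :, 0].mean(axis=1, keepdims=True)
    b = x[:, :, 1] - x[:, :, 1].mean(axis=1, keepdims=True)
    return (a * b).sum(axis=1) / np.sqrt((a**2).sum(axis=1) * (b**2).sum(axis=1))

def skewness(v):
    """Moment-based sample skewness."""
    v = np.asarray(v, dtype=float)
    return float(np.mean((v - v.mean())**3) / v.std()**3)

# Skewness of the sampling distribution of r at several sample sizes
skew = {n: skewness(sample_r(n, rho=0.8)) for n in (10, 50, 200)}
for n, s in skew.items():
    print(n, round(s, 2))
```

With $\rho=0.8$ you should see clearly negative skewness at $n=10$ that shrinks toward zero (but doesn't vanish) as $n$ grows.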
However, while it becomes more nearly normal as $n$ increases for a given population $\rho$, it is still generally skew at every finite sample size (when $\rho\neq 0$). The same applies when taking sample means from skewed distributions --- the skewness decreases with larger samples, but it doesn't go away.
> But we don't get skew when plotting the sampling distribution of the mean.
Not so! If the distribution you draw samples from is skewed, the sampling distribution of the mean will also be skewed (though typically substantially less so).
As the sample size gets larger, the skewness in the sample mean will tend to reduce.
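A quick sketch of this (my own illustration, using an exponential population, for which the skewness of the sample mean is known to be $2/\sqrt{n}$):

```python
import numpy as np

rng = np.random.default_rng(1)

def skewness(v):
    """Moment-based sample skewness."""
    v = np.asarray(v, dtype=float)
    return float(np.mean((v - v.mean())**3) / v.std()**3)

# Skewness of the sample mean for an exponential(1) population
# (population skewness 2; theory gives 2/sqrt(n) for the mean).
skew = {n: skewness(rng.exponential(1.0, size=(20000, n)).mean(axis=1))
        for n in (5, 30, 200)}
for n, s in skew.items():
    print(n, round(s, 2))
```

The skewness decreases steadily with $n$ but remains positive at every sample size shown.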
If we drew samples from a suitable distribution on $[-1,1]$, say a scaled and shifted beta, then as the population mean approached those bounds for a given $n$, the sample mean would tend to be more skewed (left skew for a population mean near $1$, right skew for a population mean near $-1$). But as the sample size increased for a given mean, the distribution would become less skewed. The situation is similar for the sample correlation.
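To make the bounded-distribution analogy concrete, here's a sketch (my choice of a Beta(9, 1) population rescaled to $[-1,1]$, so its mean, $0.8$, sits near the upper bound):

```python
import numpy as np

rng = np.random.default_rng(2)

def skewness(v):
    """Moment-based sample skewness."""
    v = np.asarray(v, dtype=float)
    return float(np.mean((v - v.mean())**3) / v.std()**3)

# A Beta(9, 1) variable rescaled to [-1, 1]; its mean (0.8) is near the
# upper bound, so the sample mean is left-skewed at finite n, with the
# skewness shrinking as n grows.
skew = {n: skewness((2.0 * rng.beta(9, 1, size=(20000, n)) - 1.0).mean(axis=1))
        for n in (5, 30, 200)}
for n, s in skew.items():
    print(n, round(s, 2))
```

The pattern (bound-induced skew, shrinking with $n$) mirrors what happens with $r$ when $\rho$ is near $\pm 1$.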
You take a sample and use an estimator to obtain some property of interest, such as the mean estimate $\hat \mu$. The value of the estimator is itself a random variable, and it comes from some (generally unknown) distribution, called the sampling distribution. What your ">30" rule of thumb says is that this sampling distribution can be approximated by a normal distribution when the sample size exceeds 30 observations. I'm not here to discuss the validity of this rule itself.
So, we're not saying that a "single sample is normally distributed"; it's not clear what that would even mean. We're talking about the sampling distribution of a parameter estimator, such as the average $\bar x=\frac 1 n \sum_{i=1}^n x_i$. We're saying nothing about the distribution of the $x_i$ themselves, because the sample size has nothing to do with that.
In your case with a proportion, something else is going on. The proportion comes from a binomial distribution, which can be approximated by a normal distribution when the sample size is large. I wouldn't apply your rule of thumb here, because it's crude compared with the estimator of the variance of the proportion based on the binomial distribution.
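A small sketch of the binomial-based approach (my own example; the values $p=0.3$ and $n=200$ are arbitrary). It checks the normal approximation $\hat p \approx N\!\big(p,\, p(1-p)/n\big)$ against simulation, and the coverage of the usual interval built from the binomial variance estimate:

```python
import numpy as np

rng = np.random.default_rng(3)

p, n = 0.3, 200
phat = rng.binomial(n, p, size=50000) / n   # 50000 simulated sample proportions

# Binomial-based normal approximation: phat ~ N(p, p(1-p)/n)
approx_sd = float(np.sqrt(p * (1 - p) / n))
sim_sd = float(phat.std())
print(round(phat.mean(), 3), round(sim_sd, 4), round(approx_sd, 4))

# Coverage of the interval phat +/- 1.96 * sqrt(phat(1-phat)/n),
# which uses the binomial variance estimate rather than a rule of thumb
se = np.sqrt(phat * (1 - phat) / n)
cover = float(np.mean((phat - 1.96 * se <= p) & (p <= phat + 1.96 * se)))
print(round(cover, 3))
```

The simulated standard deviation of $\hat p$ should closely match $\sqrt{p(1-p)/n}$, and the coverage should be near the nominal 95%.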
Using language similar to your first sentence: when we say an estimator is asymptotically normally distributed, we mean something like "as the sample size increases, the sampling distribution of a suitably standardized version of the estimator converges in distribution to some particular normal distribution."
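As a concrete sketch of that definition (my own example, using the sample mean of an exponential(1) population, where $\mu=\sigma=1$): the standardized quantity $Z_n = \sqrt{n}\,(\bar x - \mu)/\sigma$ should behave more and more like a standard normal as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(4)

# Z_n = sqrt(n) * (xbar - mu) / sigma for an exponential(1) population
# (mu = sigma = 1). Asymptotic normality: Z_n -> N(0, 1) in distribution.
results = {}
for n in (5, 50, 500):
    xbar = rng.exponential(1.0, size=(20000, n)).mean(axis=1)
    z = np.sqrt(n) * (xbar - 1.0)
    # P(Z_n <= 1.96) should approach Phi(1.96), about 0.975
    results[n] = (float(z.mean()), float(z.std()), float(np.mean(z <= 1.96)))
    print(n, *(round(v, 3) for v in results[n]))
```

$Z_n$ has mean $0$ and standard deviation $1$ at every $n$ here; what changes with $n$ is the shape, which is what the tail probability tracks.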
Not in general, I think. A quantity may be asymptotically normal without it coming about as a result of any of the versions of the CLT (at least not in any obvious way; it might perhaps be that all such cases can ultimately be related to the CLT, but I suspect it's possible to construct cases that cannot).
However, very many estimators can be cast as a kind of average of some random variable, and in that case a CLT-type argument may indeed be possible.
In some other cases you can combine the CLT with some other result to produce an argument that some estimator should be asymptotically normal (so the CLT may be involved but doesn't stand alone as the basis for the asymptotic normality).
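One example of an estimator that can be cast as a kind of average (my own illustration): the sample variance is essentially an average of squared deviations, so a CLT-style argument suggests it should be asymptotically normal. For normal data its exact distribution is a scaled chi-square, whose skewness $\sqrt{8/(n-1)}$ shrinks with $n$:

```python
import numpy as np

rng = np.random.default_rng(5)

def skewness(v):
    """Moment-based sample skewness."""
    v = np.asarray(v, dtype=float)
    return float(np.mean((v - v.mean())**3) / v.std()**3)

# Sampling distribution of the sample variance for N(0, 1) data.
# Exact distribution is scaled chi-square_{n-1}, skewness sqrt(8/(n-1)),
# so the simulated skewness should shrink toward 0 as n grows.
skew = {n: skewness(rng.normal(size=(20000, n)).var(axis=1, ddof=1))
        for n in (10, 100, 1000)}
for n, s in skew.items():
    print(n, round(s, 2))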