But when we say "an estimator is asymptotically normally distributed", what does it mean?
Using similar language to your first sentence, when we say an estimator is asymptotically normally distributed, we mean something like: as the sample size increases, the sampling distribution of a suitably standardized version of the estimator converges in distribution to some particular normal distribution.
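As a quick sketch of that definition (my own illustration, not from the question): take the sample mean of an Exponential(1) distribution as an estimator of the distribution's mean; the standardized version $\sqrt{n}(\bar{x}-\mu)/\sigma$ should look more and more like a standard normal as $n$ grows.

```python
import math
import random
import statistics

random.seed(0)
n, trials = 200, 20000

# Exponential(1) has mean 1 and standard deviation 1, so the standardized
# sample mean is simply sqrt(n) * (xbar - 1).
z = [math.sqrt(n) * (statistics.fmean(random.expovariate(1) for _ in range(n)) - 1)
     for _ in range(trials)]

# Under N(0, 1), about 95% of values fall within [-1.96, 1.96].
frac = sum(-1.96 < x < 1.96 for x in z) / trials
print(round(frac, 2))
```

The underlying distribution is very skewed, yet the standardized mean's distribution covers the $\pm 1.96$ interval at close to the normal 95% rate.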
Are "central limit theorem" and "asymptotically normally distributed" synonymous?
Not in general, I think. Some quantity may be asymptotically normal but not come about as a result of any of the versions of the CLT (at least not in any obvious way - it might perhaps be that all of them can ultimately relate to the CLT, but I suspect it's possible to construct cases that would not).
However, very many estimators can be cast as a kind of average of some random variable, and in that case a CLT-type argument may indeed be possible.
In some other cases you can combine the CLT with some other result to produce an argument that some estimator should be asymptotically normal (so the CLT may be involved but doesn't stand alone as the basis for the asymptotic normality).
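As an illustration of that kind of combined argument (my own sketch, not from the answer): the estimator $e^{\bar{x}}$ is not itself an average, but the CLT for $\bar{x}$ plus the delta method gives $\sqrt{n}\,(e^{\bar{x}} - e^{\mu}) \to N(0, e^{2\mu}\sigma^{2})$. A quick simulation with $\mu = 0$, $\sigma = 1$ (so the predicted asymptotic variance is 1):

```python
import math
import random
import statistics

random.seed(6)
mu, sigma, n, M = 0.0, 1.0, 400, 10000

# Each entry is sqrt(n) * (exp(xbar) - exp(mu)) for a fresh sample of size n.
vals = [math.sqrt(n) * (math.exp(statistics.fmean(random.gauss(mu, sigma)
                                                  for _ in range(n))) - math.exp(mu))
        for _ in range(M)]

# Delta-method prediction for the asymptotic variance: exp(2*mu) * sigma^2 = 1.
v = statistics.pvariance(vals)
print(round(v, 2))  # close to 1
```

Here the CLT handles $\bar{x}$ and the delta method transfers the normality through the function $e^{x}$, so the CLT is involved but doesn't stand alone.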
I'll do my best to summarise some of this in a hopefully digestible way. I think some of the confusion arises from the difference between "the variance of the sample mean" and "the variance of a sample" (and, potentially, the variance of the variance of a sample).
1: Variance of the sample mean. Take a sample of size N and calculate its mean. Take another sample, calculate its mean, and so on; now you have lots of sample means. The variance of those means is the variance of the sample mean.
2: Sample variance. Take a single sample of size N and calculate the variance within that sample.
3: Variance of the sample variance. As in (1), take many samples of size N, calculate each sample's variance, then calculate the variance of those variances.
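The three quantities above can be made concrete with a small simulation (my own sketch; the normal distribution and the parameters are arbitrary choices):

```python
import random
import statistics

random.seed(1)
N, M = 50, 5000  # sample size N, number of repeated samples M
sigma = 2.0

samples = [[random.gauss(0, sigma) for _ in range(N)] for _ in range(M)]

means = [statistics.fmean(s) for s in samples]          # one mean per sample
variances = [statistics.pvariance(s) for s in samples]  # (2): within-sample variance

var_of_mean = statistics.pvariance(means)     # (1): variance of the sample mean
var_of_var = statistics.pvariance(variances)  # (3): variance of the sample variance

print(round(var_of_mean, 3))  # near sigma^2 / N = 0.08
print(round(var_of_var, 2))   # for normal data, near 2*(N-1)*sigma^4 / N^2 ≈ 0.63
```

Note that (2) produces one number per sample, while (1) and (3) are properties of the collection of repeated samples.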
Now let's state some facts about these:
Sample Mean:
You sample N times from a distribution with mean $\mu$ and variance $\sigma^{2}$. The expected value of your sample mean is $\mu$, and the variance of the sample mean (see (1) above) will be $\frac{\sigma^{2}}{N}$.
The above holds for most underlying distributions (there are some restrictions, e.g. the mean/variance must be defined).
If the underlying distribution is Gaussian, we can say more than just what the expected value and variance of the sample mean will be: we know its full distribution. The sample mean will be normally distributed with mean $\mu$ and variance $\frac{\sigma^{2}}{N}$ (which is consistent with what I just said). If the distribution is not normal, then this is only approximately true, with the approximation improving as N increases (this is the central limit theorem). The number 30 is not a universal benchmark for what a good N is; it depends on the distribution.
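To illustrate that 30 is not a universal benchmark, here is a simulation sketch (my own illustration; the lognormal is just one example of a heavily skewed distribution):

```python
import random
import statistics

random.seed(2)

def mean_skewness(n, trials=10000):
    # Skewness of the sampling distribution of the mean when the underlying
    # distribution is lognormal(0, 1), which is heavily right-skewed.
    means = [statistics.fmean(random.lognormvariate(0, 1) for _ in range(n))
             for _ in range(trials)]
    m, s = statistics.fmean(means), statistics.pstdev(means)
    return statistics.fmean(((x - m) / s) ** 3 for x in means)

s30, s500 = mean_skewness(30), mean_skewness(500)
# At N = 30 the distribution of the mean is still clearly skewed (a normal
# distribution has skewness 0); at N = 500 it is much closer to symmetric.
print(round(s30, 2), round(s500, 2))
```

So for this distribution, N = 30 is nowhere near enough for the sample mean to look normal.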
Sample Variance:
If you take a sample of size N from a distribution and calculate the variance of the sample ((2) above, dividing by N rather than N-1), its expected value is $\frac{N-1}{N}\sigma^{2}$, i.e. a little bit smaller than the true distribution's variance. So if you took many samples of size N, calculated the variance within each sample, and averaged these variances, you'd expect to get the above.
There is a formula for the variance of the variance ((3) above), but it involves the fourth central moment of the underlying distribution, so I won't state it here.
The above holds for most distributions (as with the sample mean). If, however, you know the underlying distribution is normal, then again you don't just know the expected value of the sample variance, you know its full distribution: $\frac{Ns^{2}}{\sigma^{2}}$ follows a chi-squared distribution with (N-1) degrees of freedom, where $s^{2}$ is the sample variance. Since a chi-squared distribution with (N-1) degrees of freedom has expected value N-1, this is consistent with the expected value of $s^{2}$ being $\frac{N-1}{N}\sigma^{2}$, although this is not as obviously trivially true as in the sample mean case above.
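A quick simulation check of the chi-squared claim (my own sketch): a $\chi^{2}_{N-1}$ distribution has mean $N-1$ and variance $2(N-1)$, so with $N = 10$ we expect mean 9 and variance 18.

```python
import random
import statistics

random.seed(3)
N, sigma, M = 10, 2.0, 40000

# For normal data, N * s^2 / sigma^2 (with s^2 the divide-by-N sample
# variance) should follow a chi-squared distribution with N - 1 = 9
# degrees of freedom.
q = [N * statistics.pvariance([random.gauss(0, sigma) for _ in range(N)]) / sigma**2
     for _ in range(M)]

m, v = statistics.fmean(q), statistics.pvariance(q)
print(round(m, 1), round(v, 1))  # mean near 9, variance near 18
```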
t-statistic (combining the two)
Now when you take a sample from a distribution with (hypothesised) mean $\mu$, the t-statistic is a way of combining the sample mean and sample variance: $\frac{\bar{x}-\mu}{s/\sqrt{N-1}}$, where $\bar{x}$ is the sample mean and $s$ is the square root of the sample variance (again, dividing by N). This might seem like a somewhat arbitrary quantity to calculate, but it turns out that one can show that this quantity follows the t-distribution with (N-1) degrees of freedom, provided the underlying data is sampled from a normal distribution.
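A simulation sketch of that claim (my own illustration; the normal parameters are arbitrary): compute the t-statistic for many samples and check how often it exceeds the two-sided 5% critical value of a t-distribution with $N-1 = 9$ degrees of freedom, which is about 2.262.

```python
import math
import random
import statistics

random.seed(4)
N, M = 10, 40000
mu, sd = 5.0, 3.0

ts = []
for _ in range(M):
    x = [random.gauss(mu, sd) for _ in range(N)]
    xbar = statistics.fmean(x)
    s = math.sqrt(statistics.pvariance(x))  # sqrt of the divide-by-N variance
    ts.append((xbar - mu) / (s / math.sqrt(N - 1)))

# If the t-statistic really follows t with 9 degrees of freedom, roughly 5%
# of the statistics should exceed the critical value 2.262 in absolute value.
frac = sum(abs(t) > 2.262 for t in ts) / M
print(round(frac, 3))
```

Note that a standard normal would exceed 2.262 only about 2.4% of the time, so at N = 10 the t- and Z-distributions are still clearly different.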
I'm not very knowledgeable about what this is used for in practice (it's called a t-test but I don't use them much, so I'll let somebody else take this part), but it involves computing the t-statistics of samples and then referencing them against a "t-table" to determine whether they're likely to have come from the same underlying distribution or not. This is where the number 30 comes in. For samples of size N, you must reference a t-table with N-1 degrees of freedom. It turns out that as N grows to about 30, a t-table starts to look very similar to a Z-table. What that means is that for N > 30, the t-statistic is distributed approximately normally, i.e. like the Z-statistic. The Z-statistic, $\frac{\bar{x}-\mu}{\sigma/\sqrt{N}}$, divides by the distribution's standard deviation rather than the sample's, which you can only do if you know it...but in practice this is never the case: when would you ever be sampling from a distribution whose mean you don't know but whose variance you do?
Note that all of this stuff around t- and z-statistics only applies when you assume that your sample has been drawn from a normal distribution. If you don't know the underlying distribution, you can still make some assertions (subject to some assumptions about the distribution) about the expected value of the sample mean, the variance of the sample mean, and the expected value of the sample variance, but knowing the means and variances of distributions is less powerful than knowing their full distributions.
Best Answer
You have a sample, and use an estimator to obtain a given property, such as the mean $\hat \mu$. The value of the estimator is a random variable itself, and comes from some unknown distribution, called the sampling distribution. What your ">30" rule of thumb says is that this sampling distribution can be approximated by the normal distribution if the sample size is larger than 30 observations. I'm not here to discuss the validity of this rule itself.
So, we're not saying here that a "single sample is normally distributed." In fact I don't even understand what you mean when saying this. We're talking about the sampling distribution of the parameter estimator such as the average $\bar x=\frac 1 n \sum_{i=1}^nx_i$. We're not saying anything about the distribution of $x_i$, because the sample size does not have anything to do with it.
In your case with a proportion, something else is going on. The proportion comes from a Binomial distribution, which can be approximated by a Normal distribution when the sample size is large. I wouldn't apply your rule of thumb here, because it's crude in comparison to the estimator of the variance of the proportion that is based on the Binomial distribution.
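A quick sketch of that approximation (my own illustration): the sampling distribution of a proportion $\hat p$ from $n$ Bernoulli trials has mean $p$ and variance $p(1-p)/n$, which is exactly the Binomial-based variance rather than anything requiring the ">30" rule.

```python
import random
import statistics

random.seed(5)
p, n, M = 0.3, 100, 30000

# Each p_hat is the fraction of successes in n Bernoulli(p) trials,
# i.e. a draw from Binomial(n, p) divided by n.
phats = [sum(random.random() < p for _ in range(n)) / n for _ in range(M)]

m, v = statistics.fmean(phats), statistics.pvariance(phats)
print(round(m, 3), round(v, 5))  # mean near p = 0.3, variance near p(1-p)/n = 0.0021
```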