Each child can be modeled as a Bernoulli random variable $X_i$ with probability of having the disease equal to $p_i$, i.e. $X_i \sim B(p_i)$, $i=1,\dots,n$. If you assume that a) $p_1 = p_2 = \dots = p_n = p$ and b) these are independent random variables, then their joint probability mass function is
$$f(x_1,\dots,x_n) = \prod_{i=1}^{n}p^{x_i}(1-p)^{1-x_i}$$
and the log-likelihood, viewed as a function of $p$, is
$$\ln L =\sum_{i=1}^{n}\left\{x_i\ln p+(1-x_i)\ln (1-p)\right\}$$
Setting the derivative with respect to $p$ equal to zero and solving leads to the MLE for $p$,
$$\hat p =\frac 1n\sum_{i=1}^{n}x_i$$
which is unbiased, since $$E\hat p =\frac 1n\sum_{i=1}^{n}EX_i = \frac 1n np =p$$
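To see these two facts numerically, here is a minimal simulation sketch (the values $p = 0.3$, $n = 50$, and the replication count are my own illustrative choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 0.3, 50, 100_000            # assumed true probability, sample size, replications

x = rng.binomial(1, p, size=(reps, n))   # each row is one sample of n Bernoulli(p) draws
p_hat = x.mean(axis=1)                   # MLE: the sample proportion in each replication

print(p_hat.mean())                      # close to 0.3, consistent with E[p-hat] = p
```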
Consider now the variable
$$U_i = X_i - E(X_i) = X_i -p \Rightarrow X_i = U_i + p$$
We have
$$EU_i = 0,\qquad Var(U_i) = Var(X_i) = p(1-p) $$
so the sequence $\{U_i\}$ is covariance-stationary.
Substituting for the $x$'s in the estimator gives
$$\hat p =\frac 1n\sum_{i=1}^{n}(u_i+p) = \frac 1n\sum_{i=1}^{n}u_i +p$$
and consider the quantity
$$\sqrt n (\hat p-p) =\sqrt n\frac 1n\sum_{i=1}^{n}u_i= \frac {1}{\sqrt n}\sum_{i=1}^{n}u_i$$
Since the $U$'s are covariance-stationary (and evidently i.i.d.), the CLT certainly applies, and so
$$\sqrt n (\hat p-p) \rightarrow_d N\left (0, p(1-p)\right) $$
For approximate statistical inference, we act as though the limiting distribution held exactly at finite $n$, writing
$$ \sqrt n (\hat p-p) = Z \sim N\left(0, p(1-p)\right) \Rightarrow \hat p = \frac {1} {\sqrt n}Z +p$$
and write that, for "large samples"
$$\hat p \sim_{approx} N\left (p, \frac {p(1-p)}{n}\right)$$
(but only as an approximation, not when $n$ truly goes to infinity: in the limit $\hat p$ no longer has a distribution but collapses to a constant, the true value $p$, since $\hat p$ is a consistent estimator).
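A quick way to gauge the quality of this approximation is to compare the simulated spread of $\hat p$ with $\sqrt{p(1-p)/n}$; the sketch below does this under the same assumed $p$ and $n$ as before:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, reps = 0.3, 50, 100_000

# empirical distribution of p-hat over many replications
p_hat = rng.binomial(1, p, size=(reps, n)).mean(axis=1)

print(p_hat.std())                 # empirical standard deviation of p-hat
print(np.sqrt(p * (1 - p) / n))    # the approximate normal sd, sqrt(p(1-p)/n)
```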
This may be better appreciated by expressing the result of the CLT in terms of sums of iid random variables. We have
$$\sqrt{n} \frac{ \bar{X} -\mu}{\sigma} \sim N(0, 1) \quad \text{asymptotically}$$
Multiply the quotient by $\frac{\sigma}{\sqrt{n}}$ and use the fact that $Var(cX) = c^2 Var(X)$ to get
$$\bar{X}-\mu \sim N\left(0, \frac{\sigma^2}{n} \right)$$
Now add $\mu$ to both sides and use the fact that $\mathbb{E} \left[X+\mu\right] = \mathbb{E}[X] + \mu$ to obtain
$$\bar{X} = \frac{1}{n} \sum_{i=1}^n X_i \sim N\left(\mu, \frac{\sigma^2}{n} \right)$$
Lastly, multiply by $n$ and use the above two results to see that
$$\sum_{i=1}^n X_i \sim N \left(n \mu, n\sigma^2 \right) $$
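As a concrete check of this last display, the sketch below sums iid Exponential variables (an assumed example distribution with $\mu = 2$ and $\sigma^2 = 4$; any iid choice would do) and compares the empirical mean and variance of the sum with $n\mu$ and $n\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu, sigma2, reps = 200, 2.0, 4.0, 100_000   # Exponential(scale=2) has mu=2, sigma^2=4

sums = rng.exponential(scale=mu, size=(reps, n)).sum(axis=1)

print(sums.mean(), n * mu)      # both close to 400
print(sums.var(), n * sigma2)   # both close to 800
```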
And what does this have to do with Wooldridge's statement? Well, if the error is the sum of many iid random variables then it will be approximately normally distributed, as just seen. But there is an issue here, namely that the unobserved factors will not necessarily be identically distributed and they might not even be independent!
Nevertheless, the CLT has been successfully extended to independent but non-identically distributed random variables, and even to cases of mild dependence, under some additional regularity conditions. These are essentially conditions that guarantee that no single term in the sum exerts disproportionate influence on the asymptotic distribution; see also the Wikipedia page on the CLT. You do not need to know these results, of course; Wooldridge's aim is merely to provide intuition.
Hope this helps.
Best Answer
It is possible in this case to derive a confidence interval for $\theta$ that takes account of the fact that the parameter affects the variance of the distribution. This can be done by using the central limit theorem to obtain an approximate pivotal quantity, and then forming a confidence interval for $\theta$ directly by algebraic manipulation of a quadratic function that arises from the pivot.
Deriving the confidence interval: Let $X_1, X_2, X_3, \dots \sim \text{IID Geom}(\theta)$ and note that the moments of this distribution are $\mathbb{E}(X_i) = 1/\theta$ and $\mathbb{V}(X_i) = (1-\theta)/\theta^2$. Applying the central limit theorem therefore gives the following distributional approximation for large $n$:
$$\sqrt{n} \cdot \frac{\theta \bar{X}_n - 1}{\sqrt{1-\theta}} \overset{\text{Approx}}{\sim} \text{N}(0,1).$$
Squaring this quantity gives the more useful pivotal quantity:
$$n \cdot \frac{(\theta \bar{X}_n - 1)^2}{1-\theta} \overset{\text{Approx}}{\sim} \text{ChiSq}(1).$$
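As a sanity check of this pivot, roughly 95% of simulated pivot values should fall below the 0.95 quantile of $\text{ChiSq}(1)$. The sketch below does this under an assumed true value $\theta = 0.2$ with $n = 100$ (these numbers are mine, not from the question):

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 0.2, 100, 100_000
chi2_crit = 3.841459                     # 0.95 quantile of ChiSq(1)

# numpy's geometric counts trials until the first success, so E(X) = 1/theta
xbar = rng.geometric(theta, size=(reps, n)).mean(axis=1)
pivot = n * (theta * xbar - 1)**2 / (1 - theta)

print((pivot <= chi2_crit).mean())       # close to 0.95
```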
We will let $\chi_{1,\alpha}^2$ denote the critical point of the chi-squared distribution with one degree of freedom with an upper-tail area of $0<\alpha<1$. We can use the above pivotal quantity to form a confidence interval via a quadratic function in $\theta$:
$$\begin{equation} \begin{aligned} 1-\alpha &\approx \mathbb{P} \Bigg( n \cdot \frac{(\theta \bar{X}_n - 1)^2}{1-\theta} \leqslant \chi_{1,\alpha}^2 \Bigg) \\[6pt] &= \mathbb{P} \Bigg( n (\theta \bar{X}_n - 1)^2 \leqslant (1-\theta) \chi_{1,\alpha}^2 \Bigg) \\[6pt] &= \mathbb{P} \Bigg( n \bar{X}_n^2 \theta^2 - (2n \bar{X}_n - \chi_{1,\alpha}^2) \theta + (n - \chi_{1,\alpha}^2) \leqslant 0 \Bigg). \\[6pt] \end{aligned} \end{equation}$$
The quadratic function inside this probability statement has discriminant $\Delta_n = \chi_{1,\alpha}^4 + 4n \chi_{1,\alpha}^2 \bar{X}_n (\bar{X}_n - 1)$, and so (assuming this is positive) we then have:
$$\begin{equation} \begin{aligned} 1-\alpha &\approx \mathbb{P} \Bigg( \Bigg( \theta - \frac{(2n \bar{X}_n - \chi_{1,\alpha}^2) - \sqrt{\Delta_n}}{2 n \bar{X}_n^2} \Bigg) \Bigg( \theta - \frac{(2n \bar{X}_n - \chi_{1,\alpha}^2) + \sqrt{\Delta_n}}{2 n \bar{X}_n^2} \Bigg) \leqslant 0 \Bigg) \\[6pt] &= \mathbb{P} \Bigg( \Bigg( \theta - \frac{1}{\bar{X}_n} + \frac{\chi_{1,\alpha}^2 + \sqrt{\Delta_n}}{2 n \bar{X}_n^2} \Bigg) \Bigg( \theta - \frac{1}{\bar{X}_n} + \frac{\chi_{1,\alpha}^2 - \sqrt{\Delta_n}}{2 n \bar{X}_n^2} \Bigg) \leqslant 0 \Bigg) \\[6pt] &= \mathbb{P} \Bigg( \frac{1}{\bar{X}_n} + \frac{\chi_{1,\alpha}^2 - \sqrt{\Delta_n}}{2 n \bar{X}_n^2} \leqslant \theta \leqslant \frac{1}{\bar{X}_n} + \frac{\chi_{1,\alpha}^2 + \sqrt{\Delta_n}}{2 n \bar{X}_n^2} \Bigg). \\[6pt] \end{aligned} \end{equation}$$
Hence, we have the confidence interval:
$$\text{CI}_\theta(1-\alpha) \equiv \Bigg[ \frac{1}{\bar{x}_n} + \frac{\chi_{1,\alpha}^2 - \sqrt{\Delta_n}}{2 n \bar{x}_n^2}, \frac{1}{\bar{x}_n} + \frac{\chi_{1,\alpha}^2 + \sqrt{\Delta_n}}{2 n \bar{x}_n^2} \Bigg].$$
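For convenience, here is a direct transcription of this interval into code; the function name `geom_ci` and the hard-coded 95% default critical value are my own choices:

```python
import numpy as np

def geom_ci(xbar, n, chi2_crit=3.841459):
    """Approximate CI for theta; chi2_crit is the ChiSq(1) upper-tail critical value."""
    disc = chi2_crit**2 + 4 * n * chi2_crit * xbar * (xbar - 1)   # discriminant Delta_n
    centre = 1 / xbar                                             # point estimate 1/xbar
    lower = centre + (chi2_crit - np.sqrt(disc)) / (2 * n * xbar**2)
    upper = centre + (chi2_crit + np.sqrt(disc)) / (2 * n * xbar**2)
    return lower, upper
```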
Application to your data: In your data you have $n=100$ and $\bar{x}_n = 4.9$. Setting $\alpha = 0.05$ for a 95% confidence interval gives you $\chi_{1,\alpha}^2 = 3.841459$ which then gives:
$$\begin{equation} \begin{aligned} \sqrt{\Delta_n} &= \sqrt{\chi_{1,0.05}^4 + 4 \cdot 100 \cdot \chi_{1,0.05}^2 \cdot 4.9 (4.9 - 1)} \\[6pt] &= \sqrt{3.841459^2 + 400 \cdot 3.841459 \cdot 4.9 \cdot 3.9} \\[6pt] &= \sqrt{29,378.87} = 171.4026. \\[6pt] \end{aligned} \end{equation}$$
Hence, your 95% confidence interval (using the above form) is:
$$\begin{equation} \begin{aligned} \text{CI}_\theta(0.95) &= \Bigg[ \frac{1}{4.9} + \frac{3.841459 - 171.4026}{200 \cdot 4.9^2}, \frac{1}{4.9} + \frac{3.841459 + 171.4026}{200 \cdot 4.9^2} \Bigg] \\[6pt] &= \Bigg[ 0.2040816 -0.03489404, 0.2040816 + 0.03649398 \Bigg] \\[6pt] &= \Bigg[ 0.1691876, 0.2405756 \Bigg]. \\[6pt] \end{aligned} \end{equation}$$
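Plugging the data from the question into the `geom_ci` sketch above reproduces these numbers:

```python
print(geom_ci(xbar=4.9, n=100))   # approximately (0.1691876, 0.2405756)
```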