Hint: Delta Method
By the central limit theorem we know that, in distribution,
$$\sqrt{n}(\bar{X}-\mu_{x})\rightarrow N(0,\sigma^2).$$
Expanding $h$ in a Taylor series about $\mu_{x}$ gives
$$\sqrt{n}\left(h(\bar{X})-h(\mu_{x})\right)=\sqrt{n} (\bar{X}-\mu_{x})h^{\prime}(\mu_{x})
+\sqrt{n} (\bar{X}-\mu_{x})^2\frac{h^{\prime \prime}(\mu_{x})}{2!}+\cdots$$
Keeping only the first-order term (valid when $h^{\prime}(\mu_{x})\neq 0$) gives the approximation
$$\sqrt{n}\left(h(\bar{X})-h(\mu_{x})\right)\cong \color{red}{\sqrt{n} (\bar{X}-\mu_{x})}h^{\prime}(\mu_{x})\rightarrow N\left(0,\sigma^2 \left(h^{\prime}(\mu_{x})\right)^2\right) $$
Now just take $h(t)=\frac{1}{t}$, so $h^{\prime}(\mu_{x})=-\frac{1}{\mu_{x}^{2}}$ and
$$\sqrt{n}\left(\frac{1}{\bar{X}}-\frac{1}{\mu_{x}}\right)\rightarrow N\left(0,\frac{\sigma^2}{\mu_{x}^{4}}\right).$$
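If it helps to see this numerically, here is a minimal Monte Carlo sketch (Python; the Exponential(1) choice, so that $\mu_{x}=\sigma=1$, and all names are my own, purely for illustration):

```python
# Monte Carlo check of the delta method for h(t) = 1/t.
# With Exponential(1) data, mu_x = 1 and sigma^2 = 1, so the predicted
# sd of sqrt(n) * (1/Xbar - 1/mu_x) is sigma * |h'(mu_x)| = 1.
import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 1.0, 1_000, 5_000

xbar = rng.exponential(scale=mu, size=(reps, n)).mean(axis=1)
stat = np.sqrt(n) * (1.0 / xbar - 1.0 / mu)

print("empirical sd:", stat.std())   # should land close to 1
```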
Explaining this thoroughly requires an understanding of a few advanced calculus concepts. Going by how the CLT is phrased in your text, I'm going to assume those ideas haven't been covered.
Before I attempt an explanation, I will suggest that if you have the interest/time to learn about these ideas in more depth, you should work through Spivak's Calculus from start to finish. It provides the level of rigour needed to grasp these ideas fully. I struggled for a long time trying to understand the CLT rigorously, and that was because I didn't have the foundations in place. This book illuminated my path.
So to your question.
I'll attempt to explain it in a very simple way and will reference the more advanced theorems needed in parentheses.
Like you said, you know how to look up $Z$-scores and find their probabilities. That is the last step in the process known as the Central Limit Theorem.
We begin with a sequence of i.i.d. random variables $X_{1}, X_{2}, \cdots$. This sequence could be infinite. As you should know, each $X_{i}$ has its own distribution. One of the things we ask ourselves in statistics is "What would happen if we averaged all our random variables?". We will call this average $S_{n}$; it is defined as:
$$S_{n} = \frac{1}{n}(X_{1} + X_{2} + \cdots + X_{n})$$
From calculus there is the idea of a sequence of numbers; here it is a similar idea, except we don't know what the numbers are beforehand. So as an example, say each random variable $X_{i}$ takes values in the set
$$\{0,1,2\}.$$
That means each sequence of random variables $X_{1}, X_{2}, \cdots, X_{n}$ will be some combination of those three numbers.
Now back to our average $S_{n}$. It is just a scaled sum of the terms in the sequence, and it is also a random variable. The values $S_{1}, S_{2}, \cdots$ can themselves be viewed as a sequence, whose terms are the averages of the random variables.
In some instances sequences of numbers will converge to another number (advanced concept). In this scenario we have a result called the Law of Large Numbers, which states that the sequence of averages $S_{n}$ will converge to the mean $\mu$ (advanced theorem). (The $n\mu$ you have in your statement is similar, but to make the explanation easier I did it this way.)
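Here is a quick sketch of that convergence (assuming, just for illustration, that the three values are equally likely, so $\mu = 1$):

```python
# Law of Large Numbers: the running average of i.i.d. draws from
# {0, 1, 2} (equal probabilities assumed) drifts toward mu = 1.
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 3, size=100_000)                     # X_i in {0, 1, 2}
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)   # S_n for every n

for n in (10, 1_000, 100_000):
    print(n, running_mean[n - 1])                        # settles near 1
```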
Now you've probably asked "where the hell is the $Z$?". Well, using the averages $S_{n}$ we will define another term called $Z_{n}$:
$$Z_{n} = \frac{S_{n} - \mu}{\sigma/\sqrt{n}}$$
Doing this is called standardizing the random variable, and it is done a lot in practice. Observe that this again gives a sequence of terms $Z_{1}, Z_{2}, \cdots$, and that each $Z_{n}$ is itself a random variable.
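In code, one realization of $Z_{n}$ for the $\{0,1,2\}$ example looks like this (equal probabilities assumed again, so $\mu = 1$ and $\sigma^{2} = \frac{5}{3} - 1 = \frac{2}{3}$):

```python
# Standardizing the sample mean: one realization of Z_n.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 1.0, np.sqrt(2 / 3), 500

x = rng.integers(0, 3, size=n)            # X_1, ..., X_n
s_n = x.mean()                            # the average S_n
z_n = (s_n - mu) / (sigma / np.sqrt(n))   # standardized value
print(z_n)
```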
Going back all the way to the start, we said the $X_{i}$ are i.i.d., so each has its own CDF, and the same carries all the way through to the $Z_{n}$: each $Z_{n}$, being a random variable, has its own distribution function. That means there is a sequence of distribution functions $F_{1}, F_{2}, \cdots$, each evaluated at the same argument $z$. That is, $F_{1}(z), F_{2}(z), \cdots$.
So we now have a sequence of distribution functions $F_{1}(z), F_{2}(z), \cdots$. But what are these distribution functions? Binomial, uniform, Poisson, etc.?
It doesn't matter! What the CLT tells us is that this sequence of distribution functions will converge to a special distribution function: the standard normal distribution. (There is an advanced theorem on convergence of sequences of functions being used here.)
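You can watch this convergence numerically: the sup distance between the empirical $F_{n}$ and the standard normal CDF shrinks as $n$ grows. A sketch, again with the equally likely $\{0,1,2\}$ example (`scipy` assumed available):

```python
# Convergence of the CDFs F_n(z) of Z_n to the standard normal CDF.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma, reps = 1.0, np.sqrt(2 / 3), 50_000
grid = np.linspace(-3, 3, 201)

for n in (2, 10, 100):
    x = rng.integers(0, 3, size=(reps, n))
    z = (x.mean(axis=1) - mu) / (sigma / np.sqrt(n))
    ecdf = (z[:, None] <= grid).mean(axis=0)          # empirical F_n
    print(n, np.max(np.abs(ecdf - norm.cdf(grid))))   # sup distance shrinks
```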
Now why is that important, as you asked? Well, even though the courses you take might suggest otherwise, in the wild it is more difficult to calculate probabilities using all of the different distribution functions we have created. So we can use the CLT to approximate the probabilities of phenomena which are themselves distributed according to some other probability distribution (binomial, Poisson, etc.).
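For instance, a sketch of the classic normal approximation to a binomial probability (the numbers $n=100$, $p=0.3$ are just an illustration; `scipy` assumed available):

```python
# Normal approximation to P(X <= 35) for X ~ Binomial(100, 0.3).
import numpy as np
from scipy.stats import binom, norm

n, p = 100, 0.3
mu, sd = n * p, np.sqrt(n * p * (1 - p))

exact = binom.cdf(35, n, p)
approx = norm.cdf((35 + 0.5 - mu) / sd)   # with continuity correction
print(exact, approx)                      # the two agree closely
```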
So it is a long answer, but hopefully it is still simple enough to grasp the idea. If you want a really short TL;DR: lots of advanced mathematics makes it work. But I hope you do take an interest in the machinery under the hood. Ask any questions you have and I'll try to answer them.
Best Answer
This counterexample is from the book Limit Distributions for Sums of Independent Random Variables by Gnedenko and Kolmogorov.
Let $X$ have density $f(x)=\begin{cases} 0 &\text{if } |x|\geq \frac{1}{e} \\ \frac{1}{2|x|\log^2(|x|)} &\text{if } |x|< \frac{1}{e} \end{cases}$
The authors argue that $f_n$, the density of $\sum_{i=1}^n X_i$, satisfies $\displaystyle f_n(x) > \frac{c_n}{|x \log^{n+1}(|x|)|}$ for some positive constant $c_n$ in a neighborhood of $0$. So the density of $Z_n$ (which is a rescaled version of $f_n$) is unbounded at $0$.
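A numeric peek at the blow-up: inverting the CDF of this density gives $|X| = e^{-1/(2V)}$ with $V \sim \mathrm{Uniform}(0, \frac12)$ and an independent random sign (my own inversion, worth double-checking). For a bounded limiting density, the estimate $\mathbb{P}(|Z_n| < \varepsilon)/(2\varepsilon)$ would settle near $\frac{1}{\sqrt{2\pi}} \approx 0.399$ as $\varepsilon \to 0$; here it keeps growing:

```python
# Density of Z_n near 0 for the Gnedenko-Kolmogorov counterexample.
import numpy as np

rng = np.random.default_rng(0)
reps, n = 500_000, 2

v = rng.uniform(0.0, 0.5, size=(reps, n))
sign = rng.choice([-1.0, 1.0], size=(reps, n))
x = sign * np.exp(-1.0 / (2.0 * v))        # samples from the density f

sigma = x.std()                            # empirical sd of a single X
z = x.sum(axis=1) / (sigma * np.sqrt(n))   # normalized sum Z_n

for eps in (1e-2, 1e-5, 1e-8):             # density estimate at 0 ...
    print(eps, np.mean(np.abs(z) < eps) / (2 * eps))   # ... keeps exploding
```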
They prove the following theorem:
Theorem: Suppose $X$ has density $f$. If
Then $\displaystyle \sup_{x\in \mathbb R} \left|\sigma \sqrt n f_n(\sigma \sqrt n x) - \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \right| \xrightarrow[n\to \infty]{}0$
In Petrov's Sums of Independent Random Variables, the following theorem is stated:
Theorem: Let $(X_n)$ be a sequence of i.i.d. random variables with mean zero and variance $\sigma^2$, and let $f_n$ denote the density of $Z_n$ (if it exists).
Then $\displaystyle \sup_{x\in \mathbb R} \left| f_n(x) - \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \right| \xrightarrow[n\to \infty]{}0$ if and only if $f_n$ is bounded for some $n$.
In Shiryaev's Probability 2, the following Local Central Limit Theorem is stated:
Theorem: Let $(X_n)$ be a sequence of i.i.d. random variables with mean zero and variance $\sigma^2$, and let $\phi_{X_1}$ denote the characteristic function of $X_1$. If for some $r\geq 1$, $\int |\phi_{X_1}(t)|^r \, dt <\infty$, then $Z_n$ has a density $f_n$ such that $\displaystyle \sup_{x\in \mathbb R} \left| f_n(x) - \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \right| \xrightarrow[n\to \infty]{}0$
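As a sanity check, this condition holds for $\mathrm{Uniform}(-\frac12,\frac12)$: its characteristic function is $\sin(t/2)/(t/2)$, and $|\phi|^2$ is integrable ($r=2$), so the local CLT applies. A histogram density estimate of $Z_n$ indeed approaches the standard normal pdf, down to Monte Carlo noise:

```python
# Local CLT for sums of Uniform(-1/2, 1/2), where sigma^2 = 1/12.
import numpy as np

rng = np.random.default_rng(0)
sigma, reps = np.sqrt(1 / 12), 200_000

for n in (2, 5, 30):
    s = rng.uniform(-0.5, 0.5, size=(reps, n)).sum(axis=1)
    z = s / (sigma * np.sqrt(n))
    hist, edges = np.histogram(z, bins=100, range=(-4, 4), density=True)
    mids = (edges[:-1] + edges[1:]) / 2
    phi = np.exp(-mids**2 / 2) / np.sqrt(2 * np.pi)
    print(n, np.max(np.abs(hist - phi)))   # sup distance decreases
```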
Regarding almost sure convergence, you should have a look at Rao's A Limit Theorem for Densities.