To truncate a distribution is to restrict its values to an interval and re-normalize the density so that the integral over that range is 1.
So, truncating the $N(\mu, \sigma^{2})$ distribution to an interval $(a,b)$ means generating a random variable that has density
$$ p_{a,b}(x) = \frac{ \phi_{\mu, \sigma^{2}}(x) }{ \int_{a}^{b} \phi_{\mu, \sigma^{2}}(y) dy } \cdot \mathcal{I} \{ x \in (a,b) \} $$
where $\phi_{\mu, \sigma^{2}}(x)$ is the $N(\mu, \sigma^2)$ density. You could sample from this density in a number of ways. One way (the simplest way I can think of) to do this would be to generate $N(\mu, \sigma^2)$ values and throw out the ones that fall outside of the $(a,b)$ interval, as you mentioned. So, yes, those two bullets you listed would accomplish the same goal. Also, you are right that the empirical density (or histogram) of variables from this distribution would not extend to $\pm \infty$. It would be restricted to $(a,b)$, of course.
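The rejection approach just described can be sketched in a few lines. This is a minimal illustration, not a production sampler; the parameter values (`mu`, `sigma`, `a`, `b`) are illustrative choices, not from the question.

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_normal(mu, sigma, a, b, size):
    """Sample from N(mu, sigma^2) truncated to (a, b) by rejection:
    draw normal values and keep only those that land in the interval."""
    kept = []
    while len(kept) < size:
        draws = rng.normal(mu, sigma, size)
        kept.extend(draws[(draws > a) & (draws < b)])
    return np.array(kept[:size])

samples = truncated_normal(mu=0.0, sigma=1.0, a=-1.0, b=2.0, size=10_000)
print(samples.min() > -1.0 and samples.max() < 2.0)  # True: all draws lie in (a, b)
```

A histogram of `samples` would show exactly the behavior described above: mass only on $(-1, 2)$, with the normal shape re-scaled so the area is 1. (Rejection is wasteful when $(a,b)$ sits far in the tails; specialized samplers exist for that case.)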
We could just as well use a binomial distribution, but that is not the point of the question…
Nevertheless, it is our starting point even for your actual question. I'll cover it somewhat informally.
Let's consider the binomial case more generally:
$Y\sim \text{Bin}(n,p)$
Assume $n$ and $p$ are such that $Y$ is well approximated by a normal with the same mean and variance (some typical requirements are that $\min(np,n(1-p))$ is not small, or that $np(1-p)$ is not small).
Then $(Y-E(Y))^2/\text{Var}(Y)$ will be approximately $\chi^2_1$. Here $Y$ is the number of successes.
We have $E(Y) = np$ and $\text{Var}(Y)=np(1-p)$.
(In the testing case, $n$ is known and $p$ is specified under $H_0$. We don't do any estimation.)
So if $H_0$ is true, $(Y-np)^2/[np(1-p)]$ will be approximately $\chi^2_1$.
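A quick simulation makes this concrete: with $np$ and $n(1-p)$ both comfortably large, the standardized squared deviation should have moments close to those of $\chi^2_1$ (mean 1, variance 2). The values of `n`, `p`, and the replicate count below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 0.3  # min(np, n(1-p)) = 60, well within the approximation's comfort zone
Y = rng.binomial(n, p, size=200_000)

# The statistic from the text: (Y - np)^2 / (np(1-p))
stat = (Y - n * p) ** 2 / (n * p * (1 - p))

# chi^2_1 has mean 1 and variance 2; the simulated moments should be close.
print(round(stat.mean(), 2), round(stat.var(), 2))
```

One could push further and compare the full empirical distribution of `stat` to the $\chi^2_1$ CDF, but matching the first two moments already shows the approximation at work.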
Note that $(Y-np)^2 = [(n-Y)-n(1-p)]^2$. Also note that $\frac{1}{p} + \frac{1}{1-p} = \frac{1}{p(1-p)}$.
Hence
$$\begin{aligned}
\frac{(Y-np)^2}{np(1-p)} &= \frac{(Y-np)^2}{np}+\frac{(Y-np)^2}{n(1-p)}\\
&= \frac{(Y-np)^2}{np}+\frac{[(n-Y)-n(1-p)]^2}{n(1-p)}\\
&= \frac{(O_S-E_S)^2}{E_S}+\frac{(O_F-E_F)^2}{E_F},
\end{aligned}$$
which is just the chi-square goodness-of-fit statistic for the binomial case, with $O_S, O_F$ the observed counts of successes and failures and $E_S = np$, $E_F = n(1-p)$ the corresponding expected counts under $H_0$.
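The algebraic identity above is exact (not just approximate), which is easy to confirm numerically over every possible success count. The values of `n` and `p` are illustrative.

```python
import numpy as np

n, p = 50, 0.4
Y = np.arange(n + 1)               # every possible number of successes

# Left-hand side: the standardized squared deviation.
lhs = (Y - n * p) ** 2 / (n * p * (1 - p))

# Right-hand side: the two-term chi-square sum over successes and failures.
O_S, O_F = Y, n - Y                # observed counts
E_S, E_F = n * p, n * (1 - p)      # expected counts under H0
rhs = (O_S - E_S) ** 2 / E_S + (O_F - E_F) ** 2 / E_F

print(np.allclose(lhs, rhs))  # True: the identity holds for all Y
```

The key step is that $(n-Y) - n(1-p) = -(Y-np)$, so both terms on the right share the same squared numerator, and the partial-fraction fact $\frac{1}{p} + \frac{1}{1-p} = \frac{1}{p(1-p)}$ collapses the sum.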
So in that case the chi-square statistic should have the distribution of the square of an (approximately) standard-normal random variable.
Chi is a Greek letter. The canonical modern history references are Karl Pearson's introduction of the chi-square test in 1900 and R.A. Fisher's work in 1924, but there is ancient history too: F.R. Helmert in 1876 deserves more than a nod.
http://jeff560.tripod.com/c.html is a good start, especially if other historical bits and pieces are of interest. It includes links. Books such as Anders Hald's histories say more.
Chi appears to be just notation that Pearson used.