As Robert Israel has pointed out, the sum of squares of $n$ independent random variables with a standard normal distribution has a chi-square distribution with $n$ degrees of freedom.
If instead you take them from a normal distribution whose expectation is $\mu$ and whose standard deviation is $\sigma$, then
$$
\left(\frac{X_1-\mu}{\sigma}\right)^2 + \cdots + \left(\frac{X_n-\mu}{\sigma}\right)^2
$$
has a chi-square distribution with $n$ degrees of freedom.
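As a sanity check, here is a quick Monte Carlo sketch (not part of the original argument; it assumes `numpy` and `scipy` are available and uses illustrative parameter values): draw many samples of size $n$, form the standardized sum of squares, and compare it to $\chi^2_n$ with a Kolmogorov–Smirnov test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, reps = 3.0, 2.0, 5, 100_000  # illustrative values

# Each row is one sample of size n; each row yields one chi-square(n) draw.
x = rng.normal(mu, sigma, size=(reps, n))
s = (((x - mu) / sigma) ** 2).sum(axis=1)

# A large p-value here is consistent with the chi-square(n) claim.
print(stats.kstest(s, stats.chi2(df=n).cdf))
```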
So why might it appear that one of them is not counted? The answer to that comes from such results as this: Suppose instead of the population mean $\mu$, you subtract the sample mean $\overline X$. Then you have
$$
\left(\frac{X_1-\overline X}{\sigma}\right)^2 + \cdots + \left(\frac{X_n-\overline X}{\sigma}\right)^2,\tag{1}
$$
and this has a chi-square distribution with $n-1$ degrees of freedom. In particular, if $n=1$, then the sample mean is just the same as $X_1$, so the numerator in the first term is $X_1-X_1$, and the sum is necessarily $0$, so you have a chi-square distribution with $0$ degrees of freedom.
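The same kind of simulation sketch illustrates the loss of one degree of freedom when $\overline X$ replaces $\mu$ (again with assumed illustrative values, not part of the original answer):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma, n, reps = 3.0, 2.0, 5, 100_000

x = rng.normal(mu, sigma, size=(reps, n))
xbar = x.mean(axis=1, keepdims=True)          # sample mean of each row
s = (((x - xbar) / sigma) ** 2).sum(axis=1)   # n terms, but n-1 degrees of freedom

print(stats.kstest(s, stats.chi2(df=n - 1).cdf))  # consistent with chi-square(n-1)
print(stats.kstest(s, stats.chi2(df=n).cdf))      # decisively rejected
```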
Notice that in $(1)$ you have $n$ terms in the sum, not $n-1$, and they are not independent (if you take away the exponents, you get $n$ terms that necessarily add up to $0$), and the standard deviation of the fraction that gets squared is not actually $1$, but less than $1$. So why does the sum have the same probability distribution as if there were $n-1$ terms, independent, each with standard deviation $1$? The simplest way to answer that may be this:
$$
\begin{bmatrix} X_1 \\ \vdots \\ X_n \end{bmatrix} = \begin{bmatrix} \overline X \\ \vdots \\ \overline X \end{bmatrix} + \begin{bmatrix} X_1 - \overline X \\ \vdots \\ X_n - \overline X \end{bmatrix}
$$
This is the decomposition of a vector into two components orthogonal to each other: one in a $1$-dimensional space and the other in an $(n-1)$-dimensional space. Now think about the spherical symmetry of the joint probability distribution, and about the fact that the second projection maps the expected value of the random vector to $0$.
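A small numerical illustration of that decomposition (a hypothetical vector, chosen just to show the geometry): the two components are orthogonal, and their squared lengths add up to the squared length of the original vector.

```python
import numpy as np

x = np.array([1.0, 4.0, 2.0, 7.0])         # an arbitrary illustrative vector
mean_part = np.full_like(x, x.mean())      # projection onto span{(1, ..., 1)}
resid_part = x - mean_part                 # projection onto the (n-1)-dim complement

print(np.dot(mean_part, resid_part))       # ~0: the components are orthogonal
print(np.dot(x, x))                        # equals the sum of the two squared lengths:
print(np.dot(mean_part, mean_part) + np.dot(resid_part, resid_part))
```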
Later edit:
Sometimes it might seem as if two of them are not counted. Suppose $X_i$ is a normally distributed random variable with expected value $\alpha+\beta w_i$ and variance $\sigma^2$, and the $X_i$ are independent, for $i=1,\ldots,n$. When the $w_i$ are observable and $\alpha$, $\beta$ are not, one may use the least-squares estimates $\hat\alpha$, $\hat\beta$. Then
$$
\left(\frac{X_1-(\alpha+\beta w_1)}{\sigma}\right)^2 + \cdots + \left(\frac{X_n-(\alpha+\beta w_n)}{\sigma}\right)^2 \sim \chi^2_n
$$
but
$$
\left(\frac{X_1-(\hat\alpha+\hat\beta w_1)}{\sigma}\right)^2 + \cdots + \left(\frac{X_n-(\hat\alpha+\hat\beta w_n)}{\sigma}\right)^2 \sim \chi^2_{n-2}.
$$
A similar sort of argument involving orthogonal projections explains this.
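Here too a simulation sketch makes the claim concrete (illustrative parameter values; `np.polyfit` is just one convenient way to get the least-squares line):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, beta, sigma, n, reps = 1.0, 0.5, 2.0, 8, 20_000
w = np.linspace(0.0, 7.0, n)               # observable regressors

s = np.empty(reps)
for i in range(reps):
    x = alpha + beta * w + rng.normal(0.0, sigma, size=n)
    bhat, ahat = np.polyfit(w, x, 1)       # least-squares slope and intercept
    s[i] = (((x - (ahat + bhat * w)) / sigma) ** 2).sum()

print(stats.kstest(s, stats.chi2(df=n - 2).cdf))  # consistent with chi-square(n-2)
```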
One needs these results in order to derive things like confidence intervals for $\mu$, $\alpha$, and $\beta$.
Let's answer the first one.
If you know the PDF of $Z$, say $f_{Z}\left(z\right)$, then the PDF of $c\cdot Z$ (for $c>0$) is found from the definition via the CDF:
\begin{equation}
\begin{split}
\text{Pr}\left\{c\cdot Z < z \right\} &= \text{Pr}\left\{Z < \cfrac{z}{c} \right\} = F_{Z}\left(\cfrac{z}{c}\right), \quad \text{so}
\\
f_{c\cdot Z}\left(z\right) = \cfrac{d}{dz}\left[F_{Z}\left(\cfrac{z}{c}\right)\right] &= \cfrac{1}{c}\, f_{Z}\left(\cfrac{z}{c}\right).
\end{split}
\end{equation}
So, applied to a chi-square: just scale the PDF of $\chi_{N}^{2}$ by $\cfrac{1}{\sigma^{2}}$, scale its argument by the same $\cfrac{1}{\sigma^{2}}$, and plot it.
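The recipe is easy to verify numerically (a sketch assuming `scipy`; its built-in `scale` parameter implements exactly this transformation):

```python
import numpy as np
from scipy import stats

N, sigma2 = 4, 2.5                         # illustrative df and scale factor

z = np.linspace(0.01, 40.0, 500)
manual = (1.0 / sigma2) * stats.chi2(df=N).pdf(z / sigma2)  # (1/c) * f_Z(z/c)
builtin = stats.chi2(df=N, scale=sigma2).pdf(z)             # scipy's scale parameter

print(np.allclose(manual, builtin))        # True: the two densities agree
```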
Best Answer
$\dfrac18$ is correct.
You can get from the density of a chi-squared distribution, $\frac{2^{-\nu/2}}{\Gamma(\nu/2)}\, x^{\nu/2-1} e^{-x/2}$, to the density of a so-called inverse chi-squared distribution (reciprocal chi-squared distribution might be a better name), $\frac{2^{-\nu/2}}{\Gamma(\nu/2)}\,x^{-\nu/2-1} e^{-1/(2 x)}$, by a standard change of variables: essentially you replace $x$ by $\frac1x$ and multiply by the absolute value of the derivative of the inverse transformation, i.e. by $\frac1{x^2}$.
Knowing the density, and that it integrates to $1$, makes finding the expectation easy: since $x\cdot x^{-\nu/2-1}=x^{-(\nu-2)/2-1}$, the integrand is proportional to an inverse chi-squared density with $\nu-2$ degrees of freedom, so the expectation is $\dfrac{\frac{2^{-\nu/2}}{\Gamma(\nu/2)}}{\frac{2^{-(\nu-2)/2}}{\Gamma((\nu-2)/2)}}=\dfrac{1}{\nu-2}$ when $\nu > 2$, using $\Gamma(\nu/2)=\frac{\nu-2}{2}\,\Gamma\!\left(\frac{\nu-2}{2}\right)$; and here $\nu=10$.
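A quick numerical check of the $\frac18$ (a sketch with `numpy`; the sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.chisquare(df=10, size=2_000_000)   # X ~ chi-square(10)

# The mean of 1/X should be close to 1/(nu - 2) = 1/8 = 0.125.
print((1.0 / x).mean())
```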