There are, in fact, two different formulas for standard deviation here: The population standard deviation $\sigma$ and the sample standard deviation $s$.
If $x_1, x_2, \ldots, x_N$ denote all $N$ values from a population, then the (population) standard deviation is
$$\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - \mu)^2},$$
where $\mu$ is the mean of the population.
If $x_1, x_2, \ldots, x_N$ denote $N$ values from a sample, however, then the (sample) standard deviation is
$$s = \sqrt{\frac{1}{N-1} \sum_{i=1}^N (x_i - \bar{x})^2},$$
where $\bar{x}$ is the mean of the sample.
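To make the difference concrete, here is a short sketch computing both quantities for a small made-up data set (the values are arbitrary, chosen only so the arithmetic comes out cleanly):

```python
import math

# Hypothetical data, purely for illustration.
values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(values)
mean = sum(values) / n
ss = sum((x - mean) ** 2 for x in values)  # sum of squared deviations

sigma = math.sqrt(ss / n)        # population standard deviation: divide by N
s = math.sqrt(ss / (n - 1))      # sample standard deviation: divide by N-1

print(sigma)  # 2.0
print(s)      # ≈ 2.138
```

(If you use NumPy, `numpy.std` computes the population version by default; passing `ddof=1` gives the sample version.)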
The reason for the change in formula with the sample is this: When you're calculating $s$ you are normally using $s^2$ (the sample variance) to estimate $\sigma^2$ (the population variance). The problem, though, is that if you don't know $\sigma$ you generally don't know the population mean $\mu$, either, and so you have to use $\bar{x}$ in the place in the formula where you normally would use $\mu$. Doing so introduces a slight bias into the calculation: Since $\bar{x}$ is calculated from the sample, the values of $x_i$ are on average closer to $\bar{x}$ than they would be to $\mu$, and so the sum of squares $\sum_{i=1}^N (x_i - \bar{x})^2$ turns out to be smaller on average than $\sum_{i=1}^N (x_i - \mu)^2$. It just so happens that that bias can be corrected by dividing by $N-1$ instead of $N$. (Proving this is a standard exercise in an advanced undergraduate or beginning graduate course in statistical theory.) The technical term here is that $s^2$ (because of the division by $N-1$) is an unbiased estimator of $\sigma^2$.
Another way to think about it is that with a sample you have $N$ independent pieces of information. However, since $\bar{x}$ is the average of those $N$ pieces, if you know $x_1 - \bar{x}, x_2 - \bar{x}, \ldots, x_{N-1} - \bar{x}$, you can figure out what $x_N - \bar{x}$ is. So when you're squaring and adding up the residuals $x_i - \bar{x}$, there are really only $N-1$ independent pieces of information there. So in that sense perhaps dividing by $N-1$ rather than $N$ makes sense. The technical term here is that there are $N-1$ degrees of freedom in the residuals $x_i - \bar{x}$.
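You can also see the bias correction empirically. The sketch below (sample size, trial count, and the standard-normal population are arbitrary choices) draws many samples and averages both estimators; dividing by $N$ comes out near $\frac{N-1}{N}\sigma^2$, while dividing by $N-1$ comes out near $\sigma^2$:

```python
import random

random.seed(0)
n, trials = 5, 200_000  # arbitrary sample size and number of trials

sum_biased = sum_unbiased = 0.0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]  # population variance is 1
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    sum_biased += ss / n          # divides by N: underestimates on average
    sum_unbiased += ss / (n - 1)  # Bessel's correction

print(sum_biased / trials)    # ≈ 0.8, i.e. (n-1)/n times the true variance
print(sum_unbiased / trials)  # ≈ 1.0, the true variance
```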
For more information, see Wikipedia's article on the sample standard deviation.
Bias is defined in terms of the mean: an estimator is biased when its expected value differs from the quantity being estimated. So you can't make the estimator unbiased by averaging over several estimates; on the contrary, you can show that it's biased by averaging over estimates and showing that the expected average isn't the value to be estimated. (Averaging several estimates does reduce the variance of the estimator, but as discussed above you can do that even better by using all the data for one estimate.)
For example, suppose your population consists of the equidistributed values $-1,0,1$, with variance $\frac23$. If you take a sample of size $2$, you'll get variance estimates of $0$, $\frac12$ and $2$ with probabilities $\frac13$, $\frac49$ and $\frac29$, respectively, yielding the correct mean
$$\frac13\cdot0+\frac49\cdot\frac12+\frac29\cdot2=\frac23,$$
whereas the corresponding estimates of the standard deviation, $0$, $\sqrt{\frac12}$ and $\sqrt2$, average to
$$\frac13\cdot0+\frac49\cdot\sqrt{\frac12}+\frac29\cdot\sqrt2=\frac49\sqrt2\neq\sqrt{\frac23},$$
with $\frac49\sqrt2\approx0.6285\lt0.8165\approx\sqrt{\frac23}$, an underestimate as expected. If you take a sample of size $3$ instead, the average improves to $\frac19\cdot0+\frac49\cdot\sqrt{\frac13}+\frac29\cdot\sqrt{\frac43}+\frac29\cdot1=\frac19\left(8\sqrt{\frac13}+2\right)\approx0.7354$.
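These averages can be checked by brute-force enumeration. The sketch below lists every equally likely sample (drawn with replacement) from $\{-1,0,1\}$ and averages the variance estimator exactly (using `Fraction`) and the standard-deviation estimator numerically:

```python
import math
from fractions import Fraction
from itertools import product

def averages(sample_size):
    """Exact E[s^2] and numerical E[s] over all samples from {-1, 0, 1}."""
    e_var = Fraction(0)
    e_sd = 0.0
    outcomes = list(product([-1, 0, 1], repeat=sample_size))
    for xs in outcomes:
        xbar = Fraction(sum(xs), sample_size)
        s2 = sum((x - xbar) ** 2 for x in xs) / (sample_size - 1)
        e_var += s2
        e_sd += math.sqrt(s2)
    return e_var / len(outcomes), e_sd / len(outcomes)

print(averages(2))  # variance averages to exactly 2/3, sd to ≈ 0.6285
print(averages(3))  # variance again exactly 2/3, sd improves to ≈ 0.7354
```

Note that $E[s^2]$ is exactly $\frac23$ for both sample sizes, as the unbiasedness of $s^2$ promises, while $E[s]$ merely creeps toward $\sqrt{\frac23}$.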
If the random variable $X$ represents the age at which a randomly selected person died, you want to compute $P(X>\mu+2\sigma \mid X > \mu+\sigma)$, where $X$ follows a normal distribution with parameters $\mu=75$ and $\sigma=5$.
By definition of conditional probability, we have
$$ p=P(X>\mu+2\sigma | X> \mu+\sigma) = \frac{P(X> \mu+2\sigma \text{ and } X>\mu+\sigma)}{P(X> \mu+\sigma)} = \frac{P(X> \mu+2\sigma)}{P(X> \mu+\sigma)} $$
Now, let $f$ be the pdf of the normal distribution $\mathcal N(\mu,\sigma^2)$, that is
$$ f(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}} = \frac{1}{5\sqrt{2\pi}}e^{-\frac{(x-75)^2}{50}} $$
we have
$$ p = \frac{\int_{\mu+2\sigma}^\infty f(x)\; dx}{\int_{\mu+\sigma}^\infty f(x)\; dx} $$
The function $f$ has no elementary antiderivative, so you will have to rely on computers or tables to find approximations. Using Maple, I got $\int_{\mu+2\sigma}^\infty f(x)\; dx\simeq 0.02275$, $\int_{\mu+\sigma}^\infty f(x)\; dx\simeq 0.15866$ so
$$ P(X>\mu+2\sigma | X> \mu+\sigma) \simeq 0.14339 \simeq 14.34\% $$
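If you don't have Maple or a table handy, the same tail probabilities can be computed from the complementary error function, since $P(X>x)=\frac12\operatorname{erfc}\!\left(\frac{x-\mu}{\sigma\sqrt2}\right)$. A minimal sketch using only Python's standard library:

```python
import math

def upper_tail(x, mu=75.0, sigma=5.0):
    """P(X > x) for X ~ N(mu, sigma^2), via the complementary error function."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

mu, sigma = 75.0, 5.0
p = upper_tail(mu + 2 * sigma) / upper_tail(mu + sigma)

print(upper_tail(mu + sigma))      # ≈ 0.15866
print(upper_tail(mu + 2 * sigma))  # ≈ 0.02275
print(p)                           # ≈ 0.14339
```

This reproduces the figures above; note that the answer doesn't actually depend on $\mu$ and $\sigma$, since both tails are fixed numbers of standard deviations from the mean.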