If displacement and velocity are familiar from your calculus course, recall that if you put time on the horizontal axis and the velocity of a moving object on the vertical axis, then the (signed) area under the velocity graph between times $t_1$ and $t_2$ is the net displacement of the object during that time interval.
In probability and statistics, we again encounter areas under curves. Consider a random variable, such as the height of a woman randomly chosen from some given population. Here the horizontal axis will be height, and the vertical axis will be a quantity called probability density. The area under the probability density graph between heights $h_1$ and $h_2$ represents the probability that the height of the randomly selected woman is in that interval.
The normal curve is just one example of a probability density function; there are many others. Calculus—integration in particular—is the tool used to compute probabilities.
In practice, we have a problem when it comes to the normal distribution. The problem is that there is no "closed form" for $\int_{h_1}^{h_2}\frac{1}{\sqrt{2\pi}}e^{-h^2/2}\,dh,$ which is the integral that needs to be done. That is, there is no formula for the result of this integration in terms of familiar functions. This isn't a big problem since the integral can be computed numerically. Many calculators and all statistics packages can evaluate this integral, and its value is tabulated in statistics books.
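In fact, this numerical evaluation is available without any statistics package at all. A minimal sketch in Python, using the standard library's error function `math.erf` (the relation $\Phi(x)=\frac{1}{2}\left(1+\operatorname{erf}(x/\sqrt{2})\right)$ connects the normal integral to it; the function names here are my own):

```python
import math

def standard_normal_cdf(x):
    # Phi(x) = (1 + erf(x / sqrt(2))) / 2 expresses the area under the
    # standard normal density to the left of x via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_probability(h1, h2):
    # Probability that a standard normal variable lands in [h1, h2]:
    # the area under the density curve between h1 and h2.
    return standard_normal_cdf(h2) - standard_normal_cdf(h1)

# About 68.3% of the probability lies within one standard deviation.
print(round(normal_probability(-1.0, 1.0), 4))
```

This is essentially what calculators and statistics tables are reporting.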
The condition that total probability equals $1$ corresponds to the condition that the total area under the normal curve equals $1.$ That is
$$
\int_{-\infty}^\infty\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx=1.
$$
This special case of the area calculation actually can be done exactly, but it uses knowledge beyond what one usually learns in Calculus I.
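For the curious, here is the standard trick, which is exactly where the "beyond Calculus I" caveat comes in: it uses double integrals and polar coordinates. Writing $I=\int_{-\infty}^\infty e^{-x^2/2}\,dx$ and squaring,
$$
\begin{aligned}
I^2&=\left(\int_{-\infty}^\infty e^{-x^2/2}\,dx\right)\left(\int_{-\infty}^\infty e^{-y^2/2}\,dy\right)
=\int_{-\infty}^\infty\int_{-\infty}^\infty e^{-(x^2+y^2)/2}\,dx\,dy\\
&=\int_0^{2\pi}\int_0^\infty e^{-r^2/2}\,r\,dr\,d\theta
=2\pi\left[-e^{-r^2/2}\right]_0^\infty=2\pi,
\end{aligned}
$$
so $I=\sqrt{2\pi}$, and dividing by $\sqrt{2\pi}$ gives total area $1.$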
You ask how standard deviation relates to calculus. In fact, both expected value and variance (standard deviation is the square root of variance) are defined as integrals. If $f(x)$ is the probability density function of a random variable $X,$ then
$$
\begin{aligned}
E[X]&=\int_{-\infty}^\infty xf(x)\,dx\\
\text{Var}[X]&=\int_{-\infty}^\infty (x-E[X])^2f(x)\,dx
\end{aligned}
$$
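To see these definitions in action on a simple case, one can check them numerically for the uniform density on $[0,1]$ (where $f(x)=1$ on $[0,1]$ and $0$ elsewhere, so the integrals reduce to that interval); the exact answers are $E[X]=\frac12$ and $\text{Var}[X]=\frac{1}{12}.$ A rough sketch, with the grid size an arbitrary choice:

```python
def integrate(g, a, b, n=100000):
    # Midpoint-rule approximation of the integral of g over [a, b].
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# E[X] = integral of x * f(x); f(x) = 1 on [0, 1].
expected = integrate(lambda x: x, 0.0, 1.0)
# Var[X] = integral of (x - E[X])^2 * f(x).
variance = integrate(lambda x: (x - expected) ** 2, 0.0, 1.0)

print(round(expected, 6), round(variance, 6))  # approximately 1/2 and 1/12
```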
In the case of the normal distribution, the expected value is
$$
E[X]=\int_{-\infty}^\infty x\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx=0,
$$
which follows because the integrand is an odd function (the areas to the left and right of $x=0$ cancel). The variance turns out to be $1$; that integral can be evaluated using integration by parts.
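Both claims, along with the total area of $1,$ are easy to confirm numerically. A sketch, truncating the integrals to $[-10,10]$ (an arbitrary cutoff; the tails beyond that contribute a negligible amount):

```python
import math

def normal_pdf(x):
    # Standard normal density: exp(-x^2 / 2) / sqrt(2 * pi).
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def integrate(g, a=-10.0, b=10.0, n=100000):
    # Midpoint-rule approximation; tails beyond |x| = 10 are negligible.
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

total = integrate(normal_pdf)                                 # should be ~1
mean = integrate(lambda x: x * normal_pdf(x))                 # should be ~0
var = integrate(lambda x: (x - mean) ** 2 * normal_pdf(x))    # should be ~1

print(round(total, 4), round(mean, 4), round(var, 4))
```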
There is a lot more to be said about how calculus relates to statistics. This summary barely scratches the surface.
Your intuition seems to be telling you that the antiderivative of an always-positive function should itself be always positive. But this is not correct, and this function is a counterexample. Integrating $x^2 + 1$ gives another: the integrand is always positive, yet its antiderivative $\frac{x^3}{3} + x + C$ is not always positive.
Instead, the correct property to expect is that the antiderivative is always increasing. Starting with a positive function $f(x)$, we know that $\displaystyle \int_a^b f(x)\, dx > 0$ whenever $a < b$. In particular, this means that $\displaystyle F(x) = \int_0^x f(t)\, dt$, which is an antiderivative of $f$, is a strictly increasing function.
Indeed, for $a < b$ we have $\int_a^b f(x)\, dx > 0 \iff F(b) - F(a) > 0$, so $F$ must be strictly increasing.
In this case, $\frac{1}{2}x^2 \,\text{sgn}(x)$ is a strictly increasing function, so it can indeed be the antiderivative of a positive function, and it is: its derivative is $|x|$, which is positive away from $x=0$.
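Both facts can be sanity-checked numerically. A sketch (the grid and step size are arbitrary choices of mine) verifying that $F(x)=\frac{1}{2}x^2\,\text{sgn}(x)$ is increasing and that its difference quotients match $|x|$:

```python
import math

def F(x):
    # F(x) = (1/2) x^2 sgn(x): negative for x < 0, positive for x > 0.
    return 0.5 * x * x * math.copysign(1.0, x) if x != 0 else 0.0

xs = [i / 10.0 for i in range(-30, 31)]
values = [F(x) for x in xs]

# F is strictly increasing on this grid...
assert all(a < b for a, b in zip(values, values[1:]))

# ...and its central difference quotients approximate |x|,
# the nonnegative function that F is an antiderivative of.
h = 1e-6
for x in [-2.0, -0.5, 0.5, 2.0]:
    derivative = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(derivative - abs(x)) < 1e-6

print("checks pass")
```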
The gist of it is that in polar coordinates $\theta=\frac{\pi}{2}$ corresponds to the $y$-axis. So, if we want a point on the $y$-axis with the restriction that $0\leq\theta\leq\pi$, then we must have $\theta=\frac{\pi}{2}$.
Edit: The only systematic way to find the limits of integration in these cases is to solve for the relevant points of intersection (probably not what you wanted to hear or were looking for), but this will get much easier with practice. It seems that your confusion stems from some difficulty or inexperience in thinking about polar coordinates. For example, this case is quite different from the case of Euclidean coordinates, where we set two expressions equal and solve; here we are not necessarily trying to solve for $r=0$, but rather identifying an angle that satisfies the given condition.

As far as recommended reading goes, most first-semester calculus texts discuss this topic. I learned calculus from Stewart's text and have also used it in teaching; you could probably get a used older edition on the cheap, and I'd recommend it as a basic calculus text. Aside from that, try reading about polar coordinates online (Wikipedia, online lecture notes, etc.) and get some practice thinking in polar as opposed to Euclidean coordinates. I hope this edit is of some help.