Your ultimate goal is not clear, but perhaps I can make some useful comments.
For appropriate choices of $n$ and $\theta,$ the distribution $Binom(n, \theta)$ is approximately normal, especially if $n$ is large and $\theta$ is not too far from 1/2. The mean is $\mu = n\theta$ and the variance is $\sigma^2 = n\theta(1-\theta).$
Also, for large enough $\lambda,$ the distribution $Pois(\lambda)$ is nearly normal. The mean and variance are $\mu = \lambda$ and $\sigma^2 = \lambda.$
However, the Poisson model may be less flexible for matching what you want, because its single parameter $\lambda$ fixes both the mean and the variance.
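As a quick numerical check of the Poisson statement (shown here in Python with scipy rather than R; $\lambda = 50$ is an illustrative choice), the exact probability of an interval is close to a continuity-corrected normal probability:

```python
from scipy import stats

lam = 50  # illustrative value; Pois(lam) has mean and variance both equal to lam

# exact P(48 < X <= 52) by differencing the Poisson CDF
exact = stats.poisson.cdf(52, lam) - stats.poisson.cdf(48, lam)

# normal approximation with continuity correction: Norm(lam, sqrt(lam))
sd = lam ** 0.5
approx = stats.norm.cdf(52.5, lam, sd) - stats.norm.cdf(48.5, lam, sd)

print(exact, approx)  # the two values agree to about two decimal places
```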
Of course, to find the probability that a random variable taking integer
values lies in an interval $(a, b]$ you will add probabilities for integer
values in that interval, rather than evaluating an integral.
For example, if $X \sim Binom(n = 100, \theta = 1/2),$ you have $\mu = 50$
and $\sigma = 5.$ Perhaps you want
$$P(48 < X \le 52) = P(X = 49) + P(X = 50) + P(X = 51) + P(X = 52)\\ = P(X \le 52)
-P(X \le 48) = F_X(52) - F_X(48) = 0.3091736,$$
where $F_X(\cdot)$ is the CDF of $X.$
If there are many integers in the desired interval, computation by hand
can be tedious. In R statistical software, the function dbinom
gives the binomial PDF (probability mass function)
and pbinom gives the binomial CDF.
The probability above could be evaluated in R
as shown below. [The last value is a normal approximation (with continuity
correction), which is often accurate to a couple of decimal places.]
sum(dbinom(49:52, 100, .5)) # adding terms of the PDF
## 0.3091736
diff(pbinom(c(48,52), 100, .5)) # subtracting two CDF values
## 0.3091736
diff(pnorm(c(48.5,52.5), 50, 5)) # normal approximation
## 0.3093739
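For readers not using R, the same calculation can be sketched in Python with scipy (binom.cdf and norm.cdf play the roles of pbinom and pnorm):

```python
from scipy import stats

# exact probability by subtracting two binomial CDF values, as in the R code
exact = stats.binom.cdf(52, 100, 0.5) - stats.binom.cdf(48, 100, 0.5)

# normal approximation with continuity correction: Norm(50, 5)
approx = stats.norm.cdf(52.5, 50, 5) - stats.norm.cdf(48.5, 50, 5)

print(exact, approx)  # about 0.3091736 and 0.3093739
```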
The figure below shows several values of the PDF of $Binom(100, .5),$
emphasizes the four probabilities required (heights of thick blue bars), and shows the approximating
normal density curve. The normal approximation is the area beneath
the curve between the vertical green lines.
If $X_1, X_2, \dots, X_n$ are independent and identically distributed as
$\mathsf{Norm}(\mu, \sigma),$ then $S = \sum_{i=1}^n X_i \sim
\mathsf{Norm}(n\mu, \sqrt{n\sigma^2})$ and $\bar X_n = S/n \sim
\mathsf{Norm}(\mu, \sigma/\sqrt{n}).$
The statements about $E(S), Var(S), E(\bar X_n),$ and $Var(\bar X_n)$
follow readily from the definitions of expectation and variance.
That $S$ and $\bar X_n$ are normal can be shown using moment generating functions.
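A short simulation agrees with those parameter formulas (Python with numpy; $\mu = 3,$ $\sigma = 2,$ $n = 25$ are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n = 3.0, 2.0, 25       # illustrative parameter choices
reps = 200_000

X = rng.normal(mu, sigma, size=(reps, n))
S = X.sum(axis=1)                 # theory: S ~ Norm(n*mu, sigma*sqrt(n))
Xbar = S / n                      # theory: Xbar ~ Norm(mu, sigma/sqrt(n))

print(S.mean(), S.std())          # close to 75 and 10
print(Xbar.mean(), Xbar.std())    # close to 3 and 0.4
```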
These relationships are not technically part of the Central Limit Theorem (CLT)
but they are usually stated or proved when the CLT is discussed.
The CLT is a limit theorem; it states that if the $X_i$ are independent and identically
distributed with mean $E(X_i) = \mu$ and finite variance
$Var(X_i) = \sigma^2$ (but are not necessarily normal), then $Z_n = \frac{\bar X_n - \mu}{\sigma/\sqrt{n}}$ converges
in distribution to $\mathsf{Norm}(0,1).$
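To see the CLT in action with a decidedly non-normal parent distribution, here is a sketch (Python with numpy/scipy; the exponential distribution with mean $1$ and variance $1,$ and $n = 200,$ are illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, reps = 200, 50_000
mu, sigma = 1.0, 1.0              # Exp(1) has mean 1 and variance 1

X = rng.exponential(mu, size=(reps, n))
Z = (X.mean(axis=1) - mu) / (sigma / np.sqrt(n))

# empirical CDF of Z_n at a few points vs the standard normal CDF
for z in (-1.0, 0.0, 1.0):
    print(z, (Z <= z).mean(), stats.norm.cdf(z))
```

Even though the exponential distribution is strongly skewed, the empirical probabilities are already close to the standard normal values at $n = 200.$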
$$\int_{0.30}^{\infty}P(x)\,dx=\int_{-\infty}^{\infty}P(x)\,dx-\int_{-\infty}^{0.30}P(x)\,dx$$ The first integral is equal to $1$ since $P(x)$ is a probability density function. The second one cannot be evaluated in terms of elementary functions.
However, using the function $$\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x\exp(-t^2)\,dt,$$ the normal cumulative distribution function can be written as $$\frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^x\exp\left(-\frac{(t-\mu)^2}{2\sigma^2}\right)\,dt=\frac12\left(1+\operatorname{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right).$$
This makes \begin{align} \int_{0.30}^{\infty}P(x)\,dx&=1-\frac12\left(1+\operatorname{erf}\left(\frac{0.30}{1.40\cdot\sqrt{2}}\right)\right)\\ &=\frac12-\frac12\operatorname{erf}\left(\frac{0.30}{1.40\cdot\sqrt{2}}\right).\end{align}
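The erf identity above is easy to check numerically; Python's standard library exposes erf directly (the helper name normal_cdf below is just for illustration):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Normal CDF written via erf, exactly as in the identity above."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

print(normal_cdf(0.0))    # 0.5 by symmetry
print(normal_cdf(1.96))   # about 0.975, the familiar table value
```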
So it is easiest to use a look-up table for the standard normal distribution: the cumulative distribution function at $2.78$ gives $0.9973,$ so the desired integral is $1-0.9973=0.0027$.
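The table lookup can be reproduced with the same erf identity (again in Python; this just confirms the quoted value):

```python
import math

# upper tail beyond z = 2.78 for the standard normal distribution
tail = 0.5 - 0.5 * math.erf(2.78 / math.sqrt(2.0))
print(tail)  # about 0.0027, matching 1 - 0.9973
```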