If the discrete random variable $X$ takes integer values, then
$$P(X > x)= P(X \ge x+1) = P(X \ge x+.5)$$
The continuity correction would use the third expression when using
a continuous distribution as an approximation.
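The identity for the discrete distribution is easy to verify numerically. As a quick sanity check (a sketch using a hypothetical Binom($n=10$, $p=0.3$) example and Python's standard library, rather than the R used below):

```python
# Check numerically that P(X > x) = P(X >= x+1) for an integer-valued X.
# Hypothetical example: X ~ Binom(n=10, p=0.3), x = 4.
from math import comb

n, p, x = 10, 0.3, 4

def pmf(k):
    # Binomial probability mass function
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_greater = 1 - sum(pmf(k) for k in range(x + 1))      # P(X > x) via the CDF
p_geq_next = sum(pmf(k) for k in range(x + 1, n + 1))  # P(X >= x+1) directly

print(abs(p_greater - p_geq_next) < 1e-12)  # True: the two tails agree
```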
Ordinarily, the approximating continuous distribution has
positive probability in the interval $[x, x+.5].$ In that case, using
the continuity correction gives a smaller approximate value.
Example: Suppose $X \sim \mathsf{Binom}(n = 64, p = 1/2)$ and
you seek $P(X > 30).$ The exact value is $P(X > 30) = 1 - P(X \le 30) = 0.6460096.$
1 - pbinom(30, 64, .5)
## 0.6460096
If you use $P(X^\prime > 30) = 1 - P(X^\prime \le 30)$ as an approximation,
where $X^\prime \sim \mathsf{Norm}(\mu = 32, \sigma=4),$ you will get
$P(X > 30) \approx 0.6914625.$
1 - pnorm(30, 32, 4)
## 0.6914625
But if you use the continuity correction, you will use
$P(X^\prime > 30.5) = 1 - P(X^\prime \le 30.5) = 0.6461698.$
Hence, your approximation will be $P(X > 30) \approx 0.6461698.$
This is smaller than the value 0.6914625 without the continuity correction.
It is also closer to the exact binomial probability.
1 - pnorm(30.5, 32, 4)
## 0.6461698
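If R is not at hand, the same three numbers can be reproduced with only the Python standard library ($\Phi$ via `math.erf`, the exact binomial tail via `math.comb`); this is an independent check, not part of the original R session:

```python
# Replicate the R computations: exact Binom(64, 1/2) tail, and the normal
# tail with and without the continuity correction.
from math import comb, erf, sqrt

def norm_cdf(x, mu, sigma):
    # Normal CDF expressed through the error function
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# Exact: P(X > 30) = 1 - P(X <= 30) for X ~ Binom(64, 0.5)
exact = 1 - sum(comb(64, k) for k in range(31)) / 2**64

uncorrected = 1 - norm_cdf(30, 32, 4)    # 1 - pnorm(30, 32, 4)
corrected = 1 - norm_cdf(30.5, 32, 4)    # 1 - pnorm(30.5, 32, 4)

print(f"{exact:.7f} {uncorrected:.7f} {corrected:.7f}")
# 0.6460096 0.6914625 0.6461698  (matching the R output above)
```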
Usually in textbook examples you can expect about two decimal places of accuracy from
a continuity-corrected normal approximation to a binomial distribution.
To four decimal places, the exact value in this example is 0.6460 and the continuity-corrected normal approximation is 0.6462. (Here we get three-place
accuracy; approximations are often best when $p \approx 1/2.$)
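One way to see the dependence on $p$ is to compare the corrected approximation against the exact tail for several values of $p$ at fixed $n$. The values of $p$ and the evaluation point below are illustrative choices, not from the original:

```python
# Sketch: how the accuracy of the continuity-corrected approximation of
# P(X > k), evaluated at k = floor(mu), varies with p for n = 64.
from math import comb, erf, sqrt, floor

def norm_cdf(x, mu, sigma):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

n = 64
for p in (0.05, 0.1, 0.3, 0.5):
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    k = floor(mu)
    exact = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1, n + 1))
    approx = 1 - norm_cdf(k + 0.5, mu, sigma)
    print(f"p = {p}: |error| = {abs(exact - approx):.5f}")
```

The error shrinks markedly as $p$ approaches $1/2$, where the binomial distribution is symmetric and the normal curve fits best.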
The figure below shows relevant binomial probabilities (vertical bars) and
the approximating normal density curve. Notice that the binomial probability $P(X = 31)$ is approximated by the area under the normal curve above the interval
$[30.5, 31.5].$ The uncorrected approximation wrongly
includes the vertical strip between $x = 30.0$ and $x=30.5$ under the
normal curve.
Note: The values I have shown are from R statistical software. If your
normal approximations are obtained by standardization and using a printed
normal table, then results will be slightly different because of the
rounding entailed in the use of the table.
Under certain conditions, the probability $P(\bar{X}_n \in [-1, 1])$ (for example), which is a non-random number, can be approximated by $P(Y \in [-1, 1])$ for some normal random variable $Y$ with appropriate mean and variance, using the central limit theorem. To emphasize, you are approximating the probability $P(\bar{X}_n \in [-1, 1])$, and not the random variable $\bar{X}_n$ itself: you are not saying something is close to $\bar{X}_n$.
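A small simulation makes this concrete. The setup below (iid Uniform($-3, 3$) draws, $n = 30$, so $\bar{X}_n$ has mean $0$ and variance $3/n$) is a hypothetical example, not from the original:

```python
# Approximate the fixed number P(Xbar_n in [-1, 1]) two ways:
# via the CLT normal approximation, and via Monte Carlo simulation.
from math import erf, sqrt
import random

random.seed(1)

n, reps = 30, 20_000
a, b = -3.0, 3.0          # X_i ~ Uniform(-3, 3): mean 0, variance 36/12 = 3
sd_mean = sqrt(3.0 / n)   # standard deviation of the sample mean

def norm_cdf(x, mu, sigma):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# CLT: Y ~ Norm(0, sd_mean), P(Y in [-1, 1])
clt = norm_cdf(1, 0, sd_mean) - norm_cdf(-1, 0, sd_mean)

# Monte Carlo estimate of the same (non-random) probability
hits = sum(
    -1 <= sum(random.uniform(a, b) for _ in range(n)) / n <= 1
    for _ in range(reps)
)
print(f"CLT approx: {clt:.4f}, simulated: {hits / reps:.4f}")
```

Both numbers estimate the same deterministic probability; nothing here claims that any normal random variable is "close to" $\bar{X}_n$ itself.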
Best Answer
Hints:
1) What is the definition of a probability mass function? What has to be true in order for the given $f$ to satisfy the definition?
2) What is the definition of the expected value of a discrete random variable? What do you get (after answering (1)) when you plug in the given $f$?
3) Similar to (2)
4) The answers to (2) and (3) give you all the information about $X$ that you need for this.