I might be telling you something you already know, but keep in mind that really
$\hat{f}(x)=\hat{f}(x,\{X_k\})$,
where $\{X_k\}$ is the set of sample points over which you build your estimate. For most non-parametric estimators, the $X_k$ are assumed independent, and the method is additive, so you can just look at the $MSE$ of $\hat{f}(x,X_k)$ and then take an average.
Then your formula is interpreted as
$MSE(\hat f(x)) = E[(\hat f(x)-f(x))^{2}]=\int_\Omega(\hat{f}(x,z)-f(x))^2f(z)dz$,
which is indeed the MSE at the point $x$, with the expectation taken over the sample $z=\{X_k\}$ drawn from $f$.
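That pointwise $MSE$ can be approximated by Monte Carlo: redraw the sample $\{X_k\}$ many times, rebuild $\hat f$, and average the squared error at $x$. A minimal sketch, assuming a Gaussian kernel density estimator of a standard normal density (the sample size, bandwidth, and replication count are illustrative, not recommended values):

```python
import numpy as np

rng = np.random.default_rng(0)
x0, n, reps, h = 0.0, 200, 500, 0.3   # evaluation point, sample size, replications, bandwidth

def phi(u):
    # standard normal density, used here both as the kernel and as the true f
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def kde_at(x, sample, h):
    # kernel density estimate f_hat(x, {X_k}) with a Gaussian kernel
    return phi((x - sample) / h).mean() / h

true_f = phi(x0)                                   # f(x0) for a standard normal target
est = np.array([kde_at(x0, rng.standard_normal(n), h) for _ in range(reps)])
mse = np.mean((est - true_f) ** 2)                 # Monte Carlo E[(f_hat(x0) - f(x0))^2]
print(mse)
```

The estimate combines the squared bias and the variance of $\hat f(x_0)$ across resampled data sets, which is exactly what the integral above expresses.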
As for the $MSEP$, I'm not entirely sure what your question is, but there are several things one might mean by "prediction error." If you want the error expected at a fixed $x^*$, then it does line up with the $MSE$. If instead you want the prediction error for a random $X^*$, you might assume it is drawn from the distribution with density $f$, in which case what you want is the $MISE$ (mean integrated squared error).
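The $MISE$ can be simulated the same way: each replication computes the integrated squared error of $\hat f$ over a grid, and the replications are averaged. A sketch under the same illustrative assumptions (standard normal target, Gaussian kernel, made-up constants):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, h = 200, 200, 0.3                         # illustrative sample size, replications, bandwidth
grid = np.linspace(-4.0, 4.0, 201)                 # integration grid covering most of the mass
dx = grid[1] - grid[0]

def phi(u):
    # standard normal density (kernel and true f)
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def kde(xs, sample, h):
    # Gaussian kernel density estimate evaluated on a grid
    return phi((xs[None, :] - sample[:, None]) / h).mean(axis=0) / h

# ISE for each replication, then average: MISE = E[ integral (f_hat - f)^2 dz ]
ise = [((kde(grid, rng.standard_normal(n), h) - phi(grid)) ** 2).sum() * dx
       for _ in range(reps)]
mise = np.mean(ise)
print(mise)
```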
Hope that helps clarify something.
The sum of squared deviations from the mean divided by $n$ or by $n-1$ are both called variance. The only difference is that in the second case it is an unbiased estimator of the variance. Taking its square root gives an estimate of the standard deviation.
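In NumPy the two divisors are selected with the `ddof` ("delta degrees of freedom") argument of `np.var`; a small sketch with a made-up data vector:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])   # illustrative data
n = x.size

biased = ((x - x.mean()) ** 2).sum() / n           # divide by n: biased (MLE) variance
unbiased = ((x - x.mean()) ** 2).sum() / (n - 1)   # divide by n-1: unbiased estimator

assert np.isclose(biased, np.var(x))               # ddof=0 is NumPy's default
assert np.isclose(unbiased, np.var(x, ddof=1))     # ddof=1 gives the n-1 version

std = np.sqrt(unbiased)                            # estimated standard deviation
print(biased, unbiased, std)
```

Note that `np.std(x, ddof=1)` is the square root of the unbiased variance, which is itself a slightly biased estimator of the standard deviation.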
I guess that mean squared deviation and root mean squared deviation are used more commonly in the machine learning field, where the mean squared error and its square root are often used.
I also guess that some people prefer mean squared deviation as a name for variance because it is more descriptive: you know instantly from the name what someone is talking about, while to understand what variance is you need to know at least elementary statistics.
The conceptual uses of "square" and "squared" are subtly different, although (almost) interchangeable:
"Squared" refers to the past action of taking or computing the second power. E.g., $x^2$ is usually read as "x-squared," not "x-square." (The latter is sometimes encountered but I suspect it results from speakers who are accustomed to clipping their phrases or who just haven't heard the terminal dental in "x-squared.")
"Square" refers to the result of taking the second power. E.g., $x^2$ can be referred to as the "square of x." (The locution "squared of x" is never used.)
These suggest that a person using a phrase like "mean squared error" is thinking in terms of a computation: take the errors, square them, average those. The phrase "mean square error" has a more conceptual feel to it: average the square errors. The user of this phrase may be thinking in terms of square errors rather than the errors themselves. I believe this shows up especially in theoretical literature where the second form, "square," appears more often (I believe: I haven't systematically checked).
Obviously both are equivalent in function and safely interchangeable in practice. It is interesting, though, that some careful Google queries give substantially different hit counts. Presently,
returns about 367,000 results (notice the necessity of ruling out the phrase "$e=m c^2$" popularly quoted in certain contexts, which demands the use of "squared" instead of "square" when written out), while
(maintaining analogous exclusions for comparability) returns an order of magnitude more, at 3.47 million results. This (weakly) suggests people favor "mean square" over "mean squared," but don't take this too much to heart: "mean squared" is used in official SAS documentation, for instance.