I might be telling you something you already know, but keep in mind that really
$\hat{f}(x)=\hat{f}(x,\{X_k\})$,
where $\{X_k\}$ is the set of sample points from which you build your estimate. For most non-parametric estimators the $X_k$ are assumed independent and the method is additive (a kernel density estimator, for instance, is $\hat f(x)=\frac{1}{n}\sum_k K_h(x-X_k)$, a sum over the sample points), so you can just look at the $MSE$ of the single-point estimate $\hat{f}(x,X_k)$ and then take an average.
Then your formula is interpreted as
$MSE(\hat f(x)) = E[(\hat f(x)-f(x))^{2}]=\int_\Omega(\hat{f}(x,z)-f(x))^2f(z)dz$,
which is indeed the MSE at the point $x$.
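To make this concrete, here is a minimal Monte Carlo sketch of that expectation. Everything in it is an illustrative assumption on my part: a Gaussian kernel density estimator with a fixed bandwidth, a standard normal as the true $f$, and an arbitrary evaluation point $x_0$. It approximates the MSE at $x_0$ by repeatedly redrawing the sample $\{X_k\}$, rebuilding $\hat f$, and averaging the squared error.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def kde(x, sample, h=0.3):
    """Gaussian kernel density estimate of f at point x, built from one sample."""
    return norm.pdf((x - sample) / h).mean() / h

x0, n, reps = 0.5, 100, 2000        # evaluation point, sample size, Monte Carlo replications
true_f = norm.pdf(x0)               # true density at x0 (standard normal assumed)

# MSE(f_hat(x0)) = E[(f_hat(x0) - f(x0))^2], approximated by resampling {X_k} many times
sq_errors = [(kde(x0, rng.standard_normal(n)) - true_f) ** 2 for _ in range(reps)]
print("estimated MSE at x0:", np.mean(sq_errors))
```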
As for the $MSEP$, I'm not entirely sure what your question is, but there are various ways to define it. If you want the expected error at a fixed $x^*$, then it essentially coincides with the $MSE$ above. However, you might instead want the prediction error for a random $X^*$; if you assume $X^*$ is drawn from the distribution with density $f$, then what you want is the $MISE$, i.e. the pointwise $MSE$ integrated against $f$.
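Continuing the same hypothetical setup (Gaussian KDE, standard normal $f$, my choices, not anything from the question), the error for a random $X^*\sim f$ can be approximated by averaging the squared error over draws of both the sample and $X^*$, which corresponds to the $f$-weighted integrated error $\int MSE(x)\,f(x)\,dx$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, reps = 100, 2000

def kde(x, sample, h=0.3):
    """Gaussian kernel density estimate of f at point x, built from one sample."""
    return norm.pdf((x - sample) / h).mean() / h

# E_{X* ~ f}[ (f_hat(X*) - f(X*))^2 ] = integral of MSE(x) * f(x) dx
sq_errors = []
for _ in range(reps):
    sample = rng.standard_normal(n)   # data used to build f_hat
    x_star = rng.standard_normal()    # random evaluation point X* ~ f
    sq_errors.append((kde(x_star, sample) - norm.pdf(x_star)) ** 2)
print("estimated integrated error:", np.mean(sq_errors))
```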
Hope that helps clarify something.
I think you're confusing how to build a model from data with how to quantify the model's accuracy once it's built.
When you want to build a model (linear regression in your case, I guess?), you would usually use the least squares method, which minimizes the sum of squared vertical distances (the residuals) between the line and the data points. The coefficients of this line can be found in closed form using calculus (the normal equations), although for large problems an algorithm will often run gradient descent instead because it scales better.
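As a minimal sketch of both routes (the data here is made up purely for illustration), you can fit the line in closed form and check it against gradient descent on the squared-error objective:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=50)   # made-up data: slope 2, intercept 1

X = np.column_stack([np.ones_like(x), x])            # design matrix [1, x]

# Closed form: least squares solution of X beta = y (the normal equations)
beta_closed, *_ = np.linalg.lstsq(X, y, rcond=None)

# Gradient descent on the mean squared error
beta = np.zeros(2)
lr = 0.01
for _ in range(5000):
    grad = 2 / len(y) * X.T @ (X @ beta - y)         # gradient of MSE w.r.t. beta
    beta -= lr * grad

print("closed form:      ", beta_closed)
print("gradient descent: ", beta)
```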
Once you have your model, you want to evaluate its performance. In the case of regression, it is natural to compute a metric that measures how far, on average, your model's predictions are from the actual data points (or from a test set if you have one). The MSE is a good metric to use for that!
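A self-contained sketch of that evaluation step, using the same made-up data-generating process as above (my assumption, just for illustration): fit on a training set, then report the MSE on a held-out test set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up training data and least squares fit
x_train = rng.uniform(0, 10, size=50)
y_train = 2.0 * x_train + 1.0 + rng.normal(scale=1.0, size=50)
X_train = np.column_stack([np.ones_like(x_train), x_train])
beta_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Held-out test data from the same process
x_test = rng.uniform(0, 10, size=20)
y_test = 2.0 * x_test + 1.0 + rng.normal(scale=1.0, size=20)
X_test = np.column_stack([np.ones_like(x_test), x_test])

# MSE: average squared distance between predictions and observed test values
test_mse = np.mean((X_test @ beta_hat - y_test) ** 2)
print("test MSE:", test_mse)
```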
To sum up, keep in mind that LSE is a method that builds a model, and MSE is a metric that evaluates your model's performance.
Best Answer
You can think of the least squares method as minimization of the mean squared error. In other words, the MSE is the objective that least squares minimizes with respect to the parameters $\beta$: $$\min_\beta MSE(\beta)$$ Once you find the optimal parameters $\hat\beta$, then $MSE(\hat\beta)$ is your least squared error.
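For concreteness, in the linear-regression case (my notation, assuming a design matrix $X$ with full column rank), this works out to $$MSE(\beta)=\frac{1}{n}\|y-X\beta\|^{2},\qquad \nabla_\beta MSE(\beta)=-\frac{2}{n}X^\top(y-X\beta)=0\;\Longrightarrow\;\hat\beta=(X^\top X)^{-1}X^\top y,$$ and the minimized value $MSE(\hat\beta)=\frac{1}{n}\|y-X\hat\beta\|^{2}$ is the (average) squared error that least squares leaves behind.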