Regarding 1. No, I don't think so. But any statistical program (R, SAS, SPSS) will do this for you. Unless you had numbers for the different categories, in which case what you did is just wrong. What software did you use? Can you show us code? (From question 2 it looks like you use SPSS).
Regarding 2. Can you please give us context? What variables are these? If M is "causing" X, then it sounds like X is mediating the relationship between M and Y.
Regarding 3. I Googled Hayes macro and found that he has written different macros for different kinds of mediation. Given that, it isn't surprising that they give different answers. What are you trying to do?
The "focal" association between category $i$ of one nominal variable and category $j$ of the other is expressed by the frequency residual in cell $ij$, as we know. If the residual is 0, the frequency is exactly what is expected when the two nominal variables are not associated. The larger the positive residual, the stronger the association due to the overrepresentation of combination $ij$ in the sample; a large negative residual equivalently indicates an underrepresented combination. So, the frequency residual is what you want.
Raw residuals are not suitable, though, because they depend on the marginal totals, the overall total, and the table size: the value is not standardized in any way. But SPSS can also display standardized residuals, also called Pearson residuals. A st. residual is the residual divided by an estimate of its standard deviation (equal to the square root of the expected frequency). St. residuals of a table have mean 0 and st. dev. (approximately) 1; a st. residual therefore serves as a z-value, like the z-value in a distribution of a quantitative variable (actually, it is z in a Poisson distribution). St. residuals are comparable between different tables of the same size and the same total $N$. The chi-square statistic of a contingency table is the sum of its squared st. residuals. Comparing st. residuals within a table and across tables of the same size and total helps identify the particular cells that contribute most to the chi-square statistic.
SPSS also displays adjusted residuals (= adjusted standardized residuals). An adj. residual is the residual divided by an estimate of its standard error. Interestingly, the adj. residual is exactly equal to $\sqrt{N}r_{ij}$, where $N$ is the grand total and $r_{ij}$ is the Pearson correlation (alias phi correlation) between the dummy variables corresponding to categories $i$ and $j$ of the two nominal variables. This $r$ is exactly what you say you want to compute, and the adj. residual is directly related to it.
Unlike the st. residual, the adj. residual is also standardized with respect to the shape of the marginal distributions in the table (it takes into consideration the expected frequencies not only in that cell's row and column but also in the cells outside them), so you can directly see the strength of the tie between categories $i$ and $j$ without worrying about whether their marginal totals are big or small relative to the other categories'. The adj. residual is also like a z-score, but now like the z of a normal (not Poisson) distribution. If an adj. residual is above 2 or below -2, you may conclude it is significant at the p<0.05 level$^1$. Adj. residuals are still affected by $N$; $r$'s are not, but you can obtain all the $r$'s from the adj. residuals via the above formula, without spending time producing dummy variables.$^2$
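If you want to check these relationships outside SPSS, here is a minimal sketch in Python/NumPy, using a made-up 2x2 table: it computes raw, standardized (Pearson), and adjusted residuals and verifies the two identities mentioned above (chi-square as the sum of squared st. residuals, and adj. residual as $\sqrt{N}r_{ij}$).

```python
import numpy as np

def table_residuals(obs):
    """Raw, standardized (Pearson), and adjusted residuals of a two-way table."""
    obs = np.asarray(obs, dtype=float)
    N = obs.sum()
    row = obs.sum(axis=1, keepdims=True)   # row marginal totals
    col = obs.sum(axis=0, keepdims=True)   # column marginal totals
    exp = row * col / N                    # expected counts under independence
    raw = obs - exp
    std = raw / np.sqrt(exp)               # standardized (Pearson) residuals
    adj = raw / np.sqrt(exp * (1 - row / N) * (1 - col / N))
    return raw, std, adj

# Made-up 2x2 table (any table works; the identities are algebraic)
obs = np.array([[20, 30], [50, 10]])
raw, std, adj = table_residuals(obs)

# Identity 1: chi-square = sum of squared standardized residuals
chi2 = (std ** 2).sum()

# Identity 2: adjusted residual = sqrt(N) * r_ij, where r_ij is the
# phi (Pearson) correlation between the dummy for row i and the dummy for col j
N = obs.sum()
row = obs.sum(axis=1, keepdims=True)
col = obs.sum(axis=0, keepdims=True)
phi = (N * obs - row * col) / np.sqrt(row * col * (N - row) * (N - col))

print(round(chi2, 3))                   # 22.131
print(np.allclose(adj, np.sqrt(N) * phi))   # True
```

The second print confirms that computing the dummy-variable correlations by hand is unnecessary once you have the adjusted residuals.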
In regard to your second question, about 3-way category ties: this is possible as part of general loglinear analysis, which also displays residuals. However, the practical use of 3-way cell residuals is modest: measures of 3(+)-way association are not easily standardized and not easily interpreted.
$^1$ In the st. normal curve, $1.96 \approx 2$ is the cut-point of the 2.5% tail, so 5% if you consider both tails, as with a 2-sided alternative hypothesis.
$^2$ It follows that the significance of the adjusted residual in cell $ij$ equals the significance of $r_{ij}$. Besides, if there are only 2 columns in the table and you are performing a z-test of proportions between $\text {Pr}(i,1)$ and $\text {Pr}(i,2)$, the column proportions for row $i$, the p-value of that test equals the significance of either (both) adj. residuals in row $i$ of the 2-column table.
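This footnote equivalence is also easy to verify numerically. A sketch with an arbitrary made-up 3x2 table, comparing the adjusted residual in cell $(i,1)$ with the pooled two-proportion z statistic for row $i$:

```python
from math import sqrt

# Made-up 3x2 table: 3 row categories, 2 columns
obs = [[20, 30],
       [50, 10],
       [15, 25]]
c1 = sum(r[0] for r in obs)        # column totals
c2 = sum(r[1] for r in obs)
N = c1 + c2

i = 0                              # any row works; the equality is algebraic
o1, o2 = obs[i]
row = o1 + o2

# Adjusted residual for cell (i, column 1)
e = row * c1 / N                   # expected count
adj = (o1 - e) / sqrt(e * (1 - row / N) * (1 - c1 / N))

# Pooled two-proportion z-test comparing the column proportions for row i
p1, p2 = o1 / c1, o2 / c2          # Pr(i,1) and Pr(i,2)
p = row / N                        # pooled proportion
z = (p1 - p2) / sqrt(p * (1 - p) * (1 / c1 + 1 / c2))

print(round(adj, 4), round(z, 4))  # the two statistics coincide
```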
Best Answer
As your DV can only take integer values, it isn't really a continuous variable, so it's not surprising that your plot of residuals shows a set of peaks and valleys around an underlying normal curve. Technically, such data are best handled by ordinal regression, which allows for an ordered set of discrete categories in the DV.
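The peaks-and-valleys pattern is easy to reproduce with simulated data. A minimal sketch (the coefficients and sample size here are made up): since the observed $y$ is an integer, every residual equals some integer minus the fitted value, so the residual-vs-fitted plot consists of parallel diagonal bands and the residual histogram is spiky.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up data: a latent continuous response observed only as an integer
n = 500
x = rng.normal(size=n)
y = np.rint(2.0 + 1.5 * x + rng.normal(size=n))   # integer-valued DV

# Ordinary least squares of y on x
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
resid = y - fitted

# Since y is an integer, every residual equals (integer - fitted value):
# residuals fall on parallel diagonal bands in a residual-vs-fitted plot,
# and their histogram shows peaks and valleys around a normal curve.
print(np.allclose(resid + fitted, np.rint(resid + fitted)))   # True
```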
Depending on the stage of your study, you may be able to learn enough from the multiple regression as you have performed it. You typically want more categories in the DV than this before approximating a set of integer values as a continuous DV, but your residuals don't seem to be skewed, and if they also don't depend on the predicted values then you might be doing well enough. You will, however, have to be careful to admit that your data might not strictly meet the criteria for the standard interpretation of p-values.
With respect to the imbalanced distribution among the levels of your nominal independent variable: one problem is that the estimated effects of the levels with fewer cases will be more imprecise. The estimated effects of the different countries of origin will also be correlated. Furthermore, the imbalance can complicate some types of ANOVA tests in which you try to apportion variance between, say, the native/non-native and country-of-origin predictors. Finally, it will be hard to be sure that your results will generalize well to other samples. Short of obtaining more data, there isn't much you can do about the imbalance. You might consider bootstrapping to get a bit more confidence in the generalizability of your model than the initial multiple regression can provide on its own.
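A minimal sketch of a case-resampling bootstrap for multiple regression, using NumPy and synthetic stand-in data (the predictors and coefficients here are hypothetical): resample rows with replacement, refit the model each time, and take percentile intervals of the coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (hypothetical): intercept, a binary
# native/non-native dummy, and one continuous predictor
n = 100
X = np.column_stack([np.ones(n),
                     rng.integers(0, 2, n),   # imbalanced dummies work the same way
                     rng.normal(size=n)])
y = X @ np.array([1.0, 0.5, 2.0]) + rng.normal(size=n)

def ols(X, y):
    """Least-squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Case-resampling bootstrap: resample rows with replacement and refit
B = 2000
boot = np.empty((B, X.shape[1]))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b] = ols(X[idx], y[idx])

# Percentile 95% intervals for each coefficient
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(np.column_stack([lo, hi]))
```

Coefficients for sparse levels will show noticeably wider bootstrap intervals, which makes the imprecision caused by the imbalance visible directly.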