Let me try to summarize your question (to see if I understand it correctly): You are evaluating some underlying model that predicts a "null distribution" of RMSD values. You want to see whether your observed RMSD value could reasonably be a random sample from that null distribution. If not, you will conclude that the underlying model does not apply to the situation in which you collected your data.
Given that interpretation, this is actually not a problem for bootstrapping. Bootstrapping means constructing new samples from the original data and recomputing the statistic of interest--in your case apparently RMSD--for each of the new samples. You aren't resampling your original data, but rather generating predicted values from some underlying model.
Since you can apparently generate the random values predicted by the underlying model, all you have to do is generate a lot of them and see how your observed value compares to them. If your observed value lies above the 97.5th percentile or below the 2.5th percentile of the generated values, then it is out in the 5% tails of the predicted distribution, and you would conclude that the underlying model was not right for your situation.
In practice, though, this usually means generating a lot more than 20 predicted RMSD values. Normally I would expect to see the observed value compared with a distribution compiled from hundreds if not thousands of predicted RMSD values.
Maybe you don't need so many in this case because your observed RMSD of 19.9976 is so far out of the range of the predicted values in all_RMSD.mat, but you should get as many as possible.
And you don't need any special MATLAB code to summarize the results. Just make a frequency distribution of the simulated RMSD values and see where the observed value lies relative to that predicted frequency distribution.
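If you do want a numerical summary, the empirical percentile rank of the observed value is enough. Again sketched in Python for illustration, with a placeholder array standing in for your simulated RMSD values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder: substitute the RMSD values simulated from your model
# (e.g. the contents of all_RMSD.mat) for this synthetic array.
sim_rmsd = rng.normal(1.0, 0.1, size=5000)
observed = 19.9976

# Fraction of simulated values the observed one exceeds
frac_below = np.mean(sim_rmsd < observed)
print(f"observed RMSD exceeds {100 * frac_below:.1f}% of simulated values")

# A coarse frequency distribution (histogram counts) for visual inspection
counts, edges = np.histogram(sim_rmsd, bins=20)
```

If `frac_below` is essentially 1 (or essentially 0), the observed value sits outside the predicted frequency distribution, which is exactly the eyeball comparison described above.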