There are no ready-made utilities for this in the official MATLAB release, but it would be fairly easy to code some of the reviewed methods yourself. For example, in ascending order of complexity:
- Assume that the predictors (columns) are uncorrelated and compute the distance between a new sample (row) and the mean of the training set (the set of known samples). Compare it with the reference distribution obtained by taking, for every row in the training set, the distance to the mean of all the other rows.
- Assume that the known samples come from a mixture of Gaussian distributions. Fit this mixture with fitgmdist (or gmdistribution.fit in older releases) from the Statistics Toolbox. Compute the Mahalanobis distance between the new sample and every Gaussian component. Estimate the probability by assuming a chi-squared distribution (with degrees of freedom equal to the number of predictors) for the squared Mahalanobis distance.
- For every sample in the training set, find its k nearest neighbors (excluding the sample itself) using knnsearch, and compute the distribution of the average distance between each sample and its neighbors. Then find the k nearest neighbors of the new sample in the training set, take the average of its distances to them, and compare it to the reference distribution.
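The first method above (distance to the training mean, with uncorrelated predictors) can be sketched in a few lines. This is a Python/NumPy illustration rather than MATLAB, and the data, function names, and the choice of a leave-one-out reference distribution are my own assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # illustrative training set: 200 rows, 3 predictors

def dist_to_mean(x, X):
    # Standardized Euclidean distance from x to the mean of X.
    # Predictors are assumed uncorrelated, so only per-column scaling is used.
    mu = X.mean(axis=0)
    sd = X.std(axis=0, ddof=1)
    return np.sqrt(np.sum(((x - mu) / sd) ** 2))

# Reference distribution: for every training row, its distance to the
# mean of all the other rows (leave-one-out).
n = len(X)
ref = np.array([dist_to_mean(X[i], np.delete(X, i, axis=0)) for i in range(n)])

# Score a new sample by its rank within the reference distribution.
x_new = np.array([4.0, 4.0, 4.0])      # deliberately far from the training cloud
d_new = dist_to_mean(x_new, X)
p = np.mean(ref >= d_new)              # empirical "p-value": fraction of training
                                       # rows at least this far from the mean
```

A small `p` flags the new sample as unlike the training set; for data drawn as above, a point at (4, 4, 4) should score near zero.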
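The Gaussian-mixture method reduces, per component, to a squared Mahalanobis distance compared against a chi-squared distribution. A minimal Python/SciPy sketch of that scoring step follows; here the mixture components are estimated from two hand-made clusters instead of being fit by gmdistribution/fitgmdist, and all names and data are illustrative assumptions:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
# Two known clusters stand in for fitted mixture components
# (in MATLAB, fitgmdist would estimate means and covariances).
A = rng.normal(loc=0.0, size=(150, 2))
B = rng.normal(loc=6.0, size=(150, 2))
components = [(C.mean(axis=0), np.cov(C, rowvar=False)) for C in (A, B)]

def component_pvalues(x, components):
    # Squared Mahalanobis distance from x to each Gaussian component,
    # converted to a p-value via the chi-squared distribution with
    # d degrees of freedom (d = number of predictors).
    d = len(x)
    pvals = []
    for mu, S in components:
        diff = x - mu
        m2 = diff @ np.linalg.solve(S, diff)   # squared Mahalanobis distance
        pvals.append(chi2.sf(m2, df=d))        # survival function = p-value
    return pvals

p_in  = component_pvalues(np.array([0.1, -0.2]), components)  # near cluster A
p_out = component_pvalues(np.array([3.0, 3.0]),  components)  # between clusters
```

A sample is accepted if it is plausible under at least one component, i.e. if its largest component p-value is not small.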
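The k-nearest-neighbor method can likewise be sketched with a KD-tree standing in for MATLAB's knnsearch. Again a Python/SciPy illustration under my own assumptions (data, k, and the empirical-rank scoring are all choices made for the example):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))              # illustrative training set
k = 5

tree = cKDTree(X)
# For each training row, query k+1 neighbors and drop the first hit,
# which is the row itself (distance zero).
d_train, _ = tree.query(X, k=k + 1)
ref = d_train[:, 1:].mean(axis=1)          # avg distance to k nearest neighbors

def knn_score(x):
    # Average distance from x to its k nearest training samples,
    # expressed as the fraction of training rows with a larger average.
    d, _ = tree.query(x, k=k)
    return np.mean(ref >= d.mean())

p_in  = knn_score(np.array([0.0, 0.0]))    # dense region of the cloud
p_out = knn_score(np.array([8.0, 8.0]))    # far outside the cloud
```

As before, a score near zero means the new sample is farther from its neighbors than essentially every training sample, flagging it as novel.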
And so on. If your training set is pure (all objects are indeed circles) and if your data are low-dimensional, you really have plenty of methods at your disposal. Without purity or in high dimensions, the problem can become substantially harder.