Since I'm working with unsupervised learning, I don't have any ground truth to compare against during the validation phase. Is there a standard method to deal with this?
Additional information:
- in my particular case, the "validation" is actually cross-validation.
- I'm developing a custom binary anomaly detection model that labels dataset records with one of two classes: "normal" and "abnormal".
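One practical idea in this setting, when no labels are available, is to check the *stability* of the detector's decisions across cross-validation folds: train the model on each fold's training split, predict on the full dataset, and measure how often the folds agree. This is a minimal sketch under assumed conditions; the distance-to-mean detector (`fit_detector`) is a hypothetical stand-in for your custom model, and the synthetic data are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: a dense "normal" cluster plus a few far-away points.
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(8, 1, (10, 2))])

def fit_detector(X_train, quantile=0.95):
    """Toy detector: flag points far from the training mean.
    (Hypothetical stand-in for the custom binary anomaly model.)"""
    center = X_train.mean(axis=0)
    dists = np.linalg.norm(X_train - center, axis=1)
    threshold = np.quantile(dists, quantile)
    return lambda X_new: np.linalg.norm(X_new - center, axis=1) > threshold

# k-fold stability check: train on each fold's training split,
# predict on the full dataset, then measure cross-fold agreement.
k = 5
indices = rng.permutation(len(X))
folds = np.array_split(indices, k)
all_labels = []
for i in range(k):
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    predict = fit_detector(X[train_idx])
    all_labels.append(predict(X))

labels = np.array(all_labels)           # shape: (k, n_samples), boolean
# Fraction of records on which every fold-trained model agrees.
agreement = np.mean(np.all(labels == labels[0], axis=0))
print(f"cross-fold label agreement: {agreement:.2f}")
```

High agreement does not prove the labels are correct, but low agreement is a strong signal that the model's "normal"/"abnormal" split is not robust to the training sample.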
Best Answer
I'm not sure this will be considered an answer, since it is really a pointer to a possible answer, but I don't have enough reputation to add it as a comment. So it goes here; maybe someone with more rights can move it to a comment.
I'm struggling with this topic too, and today I found the PhD thesis "Cross-Validation for Unsupervised Learning" by Patrick O. Perry (Stanford University, September 2009). Judging from the abstract, it addresses exactly this question: http://ptrckprry.com/reports/
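One concrete idea from that line of work is a "speckled" (Wold-style) holdout: hide a random subset of matrix *entries*, refit the unsupervised model on the rest, and score how well it predicts the hidden entries. The sketch below is only an illustration of that idea, not code from the thesis; it uses an iterative-imputation SVD on synthetic low-rank data, and the helper name `heldout_error` is my own invention.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic low-rank matrix (true rank 3) plus noise.
U, V = rng.normal(size=(50, 3)), rng.normal(size=(3, 40))
X = U @ V + 0.1 * rng.normal(size=(50, 40))

mask = rng.random(X.shape) < 0.2  # hold out ~20% of the entries

def heldout_error(X, mask, rank, n_iter=100):
    """Impute held-out entries with a rank-`rank` SVD fit to the
    observed entries, then score prediction error on the held-out set."""
    X_work = X.copy()
    X_work[mask] = X[~mask].mean()          # initialize hidden entries
    for _ in range(n_iter):
        U_, s, Vt = np.linalg.svd(X_work, full_matrices=False)
        X_hat = (U_[:, :rank] * s[:rank]) @ Vt[:rank]
        X_work[mask] = X_hat[mask]          # re-impute hidden entries
    return np.mean((X_hat[mask] - X[mask]) ** 2)

errors = {r: heldout_error(X, mask, r) for r in range(1, 7)}
best = min(errors, key=errors.get)
print(errors, "-> chosen rank:", best)
```

The held-out error typically drops until the model capacity matches the true structure and rises once it starts fitting noise, which gives a label-free model-selection criterion analogous to ordinary cross-validation.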