Solved – One-class classifier cross-validation

libsvm, machine learning, svm

I am working on a problem which requires a one-class classifier, and I am using LIBSVM. I know there is a lot of material out there, but I still could not find the answer to my query.

  1. How do I estimate the optimum parameters for the RBF kernel?

  2. Using svm-train -s 2 -t 2 -v 5 train.scale, I am getting 49% cross-validation accuracy, and my training set has no outlier data. So, does this actually mean 98% accuracy in a real-world scenario?

Best Answer

In one-class SVM the notion of accuracy is out of place. One-class SVM is designed to estimate the support of a distribution: its output for a given instance is a measure of confidence that the instance belongs to the same distribution as the data used to train the model.
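For illustration, here is a minimal sketch of that behaviour using scikit-learn's OneClassSVM (a wrapper around LIBSVM's one-class SVM, not the svm-train command line you are using); the data and parameter values are placeholders, not recommendations:

```python
import numpy as np
from sklearn.svm import OneClassSVM  # scikit-learn's wrapper around LIBSVM's one-class SVM

# Illustrative data: training samples are assumed to come from the "normal" class only.
rng = np.random.RandomState(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(200, 2))

# nu bounds the fraction of training points treated as outliers;
# gamma is the RBF kernel width (both values here are just placeholders).
model = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.1).fit(X_train)

X_new = np.array([[0.1, -0.2],   # close to the training cloud
                  [5.0, 5.0]])   # far from it

print(model.predict(X_new))            # +1 = accepted as "in-distribution", -1 = rejected
print(model.decision_function(X_new))  # signed distance to the boundary: higher = more confident
```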

When constructing a one-class SVM model, you have to decide what fraction of your data may be treated as outliers (i.e. rejected by the model); in LIBSVM this is the nu parameter (-n). You can then tune the RBF kernel parameter gamma (-g) together with nu using a cross-validation approach, as sketched below.
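One possible cross-validation scheme, assuming you only have "normal" data to work with, is to grid-search nu and gamma and prefer parameters whose acceptance rate on held-out normal data stays close to the target 1 - nu. The helper below is a sketch under that assumption (the function name, grids, and selection criterion are my own choices, not the only option), again using scikit-learn rather than svm-train:

```python
import numpy as np
from itertools import product
from sklearn.model_selection import KFold
from sklearn.svm import OneClassSVM

def tune_one_class_svm(X, nus=(0.01, 0.05, 0.1), gammas=(0.01, 0.1, 1.0), n_splits=5):
    """Grid-search nu and gamma for a one-class RBF SVM.

    Criterion (an assumption, not the only option): pick the parameters whose
    acceptance rate on held-out normal data is closest to the target 1 - nu,
    i.e. the model generalises its training behaviour to unseen normal points.
    """
    best_params, best_gap = None, np.inf
    for nu, gamma in product(nus, gammas):
        gaps = []
        for train_idx, val_idx in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
            model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(X[train_idx])
            accept_rate = np.mean(model.predict(X[val_idx]) == 1)
            gaps.append(abs(accept_rate - (1.0 - nu)))
        mean_gap = np.mean(gaps)
        if mean_gap < best_gap:
            best_gap, best_params = mean_gap, {"nu": nu, "gamma": gamma}
    return best_params

# Example usage with synthetic "normal" data:
X = np.random.RandomState(1).normal(size=(300, 2))
print(tune_one_class_svm(X))
```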

I am getting 49% cross-validation accuracy, and my training set has no outlier data. So, does this actually mean 98% accuracy in a real-world scenario?

I don't really understand the question here. It sounds like you are using one-class SVM for a binary classification problem, which is a bad idea: one-class SVM is meant for situations where you only have examples of a single ("normal") class, whereas if you have labelled examples of both classes, a standard binary SVM (C-SVC, -s 0 in LIBSVM) will generally serve you better.
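If that is your situation, the usual approach is a cross-validated grid search over C and gamma for an RBF C-SVC. A hedged sketch using scikit-learn's SVC and GridSearchCV (synthetic data, illustrative grids):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Hypothetical labelled data with both classes; replace with your own.
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Standard 5-fold cross-validated grid search over the RBF parameters C and gamma.
grid = GridSearchCV(SVC(kernel="rbf"),
                    param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                    cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```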