First question here: I am new to machine learning and wanted to understand the following.
I used decision trees and boosting to classify fraudulent users, and I am getting an average precision of around 60% on my test set. What does this signify, and how well am I doing at the classification?
Solved – Is an Average Precision of 60% an acceptable result in a fraud-detection machine learning algorithm? What does it signify?
average-precision, fraud-detection, machine-learning
Best Answer
Precision and recall depend on the actual prevalence of fraud in your data. Fraudulent cases are generally a very small fraction of all cases, so precision may well be low, but recall should be high.
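To make this concrete, here is a minimal sketch of how average precision is computed from a ranking of cases by predicted score. The helper function, variable names, and the 2% fraud prevalence are illustrative assumptions, not from your setup; the formula itself (mean of the precision evaluated at the rank of each true positive) is the standard uninterpolated definition, matching what libraries such as scikit-learn report:

```python
import numpy as np

def average_precision(y_true, scores):
    """AP = mean of precision at the rank of each true positive,
    i.e. the area under the uninterpolated precision-recall curve."""
    order = np.argsort(-np.asarray(scores, dtype=float))  # rank by descending score
    y = np.asarray(y_true)[order]
    precision_at_k = np.cumsum(y) / np.arange(1, len(y) + 1)
    return precision_at_k[y == 1].mean()

# Hypothetical imbalanced data: 2 fraud cases among 100 (2% prevalence)
rng = np.random.default_rng(0)
y = np.zeros(100, dtype=int)
y[:2] = 1

# A model that ranks both frauds on top achieves AP = 1.0
perfect = np.where(y == 1, 0.9, rng.uniform(0.0, 0.5, size=100))
print(average_precision(y, perfect))  # 1.0

# An uninformative (random) ranking scores far lower -- on average,
# near the base fraud rate of 2%
random_scores = rng.uniform(size=100)
print(average_precision(y, random_scores))
```

The useful reference point is the base rate: a random classifier's average precision hovers around the fraud prevalence, so 60% AP on a dataset with, say, 2% fraud is a large lift over chance, even though it is far from 1.0.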
Whether the result is acceptable can only be judged by measuring the impact of the results in real life. I present two examples: