Solved – Multiclass F1 score in the form of a confusion matrix

confusion-matrix, machine-learning, model-evaluation

I have created a multiclass model and I am wondering whether it makes any sense to calculate F1 scores, and other metrics like Cohen's kappa, in the same form as a confusion matrix.

Basically, I calculate the F1 score between one true class and one false class, ignoring the other n-2 classes, and put it at one location in the matrix. I then repeat this for every pair of true and false classes and obtain a matrix of F1 scores.
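
Roughly, the computation I have in mind looks like the sketch below (my own interpretation: for each (positive, negative) class pair, I keep only the samples whose true and predicted labels both fall in that pair, then compute the binary F1 with the row's class as the positive label; the function name is just illustrative):

```python
import numpy as np
from sklearn.metrics import f1_score

def pairwise_f1_matrix(y_true, y_pred, classes):
    """Binary F1 for every (positive, negative) class pair, ignoring all other classes."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n = len(classes)
    scores = np.full((n, n), np.nan)  # diagonal stays NaN: F1 of a class vs. itself is undefined
    for i, pos in enumerate(classes):
        for j, neg in enumerate(classes):
            if i == j:
                continue
            # keep only samples whose true AND predicted labels are pos or neg
            mask = np.isin(y_true, [pos, neg]) & np.isin(y_pred, [pos, neg])
            if mask.sum() == 0:
                continue
            # binary F1 with `pos` as the positive class
            scores[i, j] = f1_score(y_true[mask] == pos, y_pred[mask] == pos)
    return scores
```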

My question is: does this make any sense? I do this because I want an overview of the results across several metrics, and I also want to make it easier to identify which class combinations produce the most mistakes under each metric. The main reason I mimic a confusion matrix is that I find it easy to visualise and analyse by eye.

Other suggestions for how to go about this are very welcome of course.

Best Answer

It is just like the case where you have only two classes. Assume you have N classes; the confusion matrix is then an NxN matrix, with the left axis showing the true class and the top axis showing the class assigned to an item with that true class. This link gives a good example describing a confusion matrix for multiple classes: Computing Precision and Recall for Multi-Class Classification Problems
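
As a quick sketch of that layout (the labels here are made up), scikit-learn's confusion_matrix builds exactly this NxN matrix:

```python
from sklearn.metrics import confusion_matrix

# toy labels, purely illustrative
y_true = ["bird", "cat", "cat", "dog", "dog", "dog"]
y_pred = ["bird", "cat", "dog", "cat", "dog", "dog"]

# rows = true class, columns = predicted class
cm = confusion_matrix(y_true, y_pred, labels=["bird", "cat", "dog"])
print(cm)
# [[1 0 0]
#  [0 1 1]
#  [0 1 2]]
```

Reading off a row tells you how the items of that true class were distributed over the predicted classes, which is the same per-pair view you are after with your F1 matrix.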