Solved – Length normalization in a naive Bayes classifier for documents

classification, naive bayes, normalization

I'm trying to implement a naive Bayes classifier to classify documents that are essentially sets (as opposed to bags) of features, i.e. each document contains a set of unique features, each of which can appear at most once. For example, you can think of the features as unique keywords for the document.

I've closely followed the Rennie et al. paper at http://www.aaai.org/Papers/ICML/2003/ICML03-081.pdf, but I'm running into a problem that doesn't seem to be addressed there: classifying short documents yields much higher posterior probabilities because they have fewer features, and vice versa for long documents.

This is because the posterior probability is defined as (ignoring the denominator):

$$
P(\text{class} \mid \text{document}) = P(\text{class}) \cdot P(\text{document} \mid \text{class})
$$

which, under the naive conditional-independence assumption, expands to

$$
P(\text{class} \mid \text{document}) = P(\text{class}) \cdot P(\text{feature}_1 \mid \text{class}) \cdots P(\text{feature}_k \mid \text{class})
$$

From that, it's clear that short documents with fewer features will tend to have higher posterior probabilities: each factor $P(\text{feature}_i \mid \text{class})$ is at most 1, so every additional term can only shrink the product.

For example, suppose the features "foo", "bar", and "baz" all show up in positive training observations. Then a document with the single feature "foo" will have a higher posterior probability of being classified as positive than a document with the features {"foo", "bar", "baz"}. This seems counter-intuitive, but I'm not quite sure how to solve it.
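To make that concrete, here's a minimal Python sketch with made-up smoothed likelihoods (the exact values are only for illustration):

```python
import math

# Hypothetical smoothed per-feature likelihoods P(feature | positive);
# values are made up, but all close to 1 because each feature appears
# in most positive training documents.
p_feature_given_pos = {"foo": 0.9, "bar": 0.8, "baz": 0.85}
p_pos = 0.5  # assumed class prior

def log_posterior(doc_features, class_prior, feature_probs):
    """Unnormalized log-posterior: log P(class) + sum_k log P(feature_k | class)."""
    score = math.log(class_prior)
    for f in doc_features:
        score += math.log(feature_probs[f])
    return score

print(log_posterior({"foo"}, p_pos, p_feature_given_pos))                # ~ -0.80
print(log_posterior({"foo", "bar", "baz"}, p_pos, p_feature_given_pos))  # ~ -1.18, lower despite more evidence
```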

Is there some sort of length normalization that can be done? One idea is to add the size of the document as a feature, but that doesn't seem quite right since results would then be skewed by the size of documents in the training data.

Best Answer

I don't think this should actually matter: classification is done by choosing the class with the maximum posterior for the document at hand, compared to the other classes, not to other observations. Since the same document is scored under every class, each class's score contains the same number of factors, so document length shifts all the class scores together and doesn't change which one is largest. Of course, I'm assuming each class uses the same feature set.
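To illustrate, here's a minimal sketch with made-up likelihood tables for two classes; the values are arbitrary, the point is only that both classes multiply the same number of factors for a given document, so the argmax over classes is unaffected by document length:

```python
import math

# Hypothetical smoothed likelihood tables; both classes assign a
# probability to every feature in the vocabulary.
likelihoods = {
    "pos": {"foo": 0.9, "bar": 0.8, "baz": 0.85},
    "neg": {"foo": 0.2, "bar": 0.3, "baz": 0.25},
}
priors = {"pos": 0.5, "neg": 0.5}

def classify(doc_features):
    """Pick the class with the largest unnormalized log-posterior.
    Every class's score has one factor per feature in the document,
    so length shifts all scores together without changing the argmax."""
    scores = {}
    for c in priors:
        score = math.log(priors[c])
        for f in doc_features:
            score += math.log(likelihoods[c][f])
        scores[c] = score
    return max(scores, key=scores.get)

print(classify({"foo"}))                # pos
print(classify({"foo", "bar", "baz"}))  # pos
```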
