Solved – the advantage of non-negativity in matrix factorization

Tags: machine-learning, matrix-decomposition, non-negative-matrix-factorization, recommender-system

I am wondering why matrix factorization techniques in the machine learning domain almost always expect the provided matrix to be non-negative. What is the advantage of this constraint?

Background: I want to use matrix factorization algorithms for a sparse user-item matrix containing positive and negative implicit feedback. Is there another way to distinguish interactions with negative feedback from entries that simply denote that no interaction happened between the user and the item? A sketch of what I mean follows.
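For concreteness, this is roughly how I picture encoding the feedback, assuming SciPy's sparse matrices; the interaction triples and matrix dimensions are made up for illustration:

```python
# Minimal sketch: encode positive/negative implicit feedback while keeping
# "no interaction" distinct from "negative interaction".
# The (user, item, signal) triples and dimensions below are hypothetical.
from scipy.sparse import coo_matrix

# signal: +1.0 for a positive interaction, -1.0 for a negative one.
# Unobserved (user, item) pairs are simply absent from the sparse matrix.
interactions = [
    (0, 2, +1.0),
    (0, 5, -1.0),
    (3, 2, -1.0),
    (7, 1, +1.0),
]

users, items, signals = zip(*interactions)
n_users, n_items = 10, 8  # assumed dimensions for this example

R = coo_matrix((signals, (users, items)), shape=(n_users, n_items))

# R.data holds only observed feedback (positive or negative); every other
# cell is an implicit "unknown", not a zero rating.
print(R.toarray())
```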

Best Answer

Take a look at Jester: it is a well-known data set of jokes with continuous ratings in the range of -10.0 to +10.0. Plenty of papers have applied matrix factorization techniques to this dataset with no ill effects. There is no positive-ratings-only constraint at a mathematical or technical level.
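As a sketch of that point, here is a plain (unconstrained) factorization trained with SGD on toy ratings in the Jester range of -10 to +10. The toy data, factor rank, and hyperparameters are illustrative assumptions, not anything from Jester itself; the point is only that the updates never care about the sign of the ratings.

```python
# Unconstrained matrix factorization via SGD on ratings in [-10, +10].
# Toy data and hyperparameters are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 50, 40, 5

# Toy observed ratings drawn uniformly from [-10, 10] at random (u, i) pairs.
obs = [(rng.integers(n_users), rng.integers(n_items),
        rng.uniform(-10.0, 10.0)) for _ in range(800)]

P = 0.1 * rng.standard_normal((n_users, k))   # user factors (unconstrained)
Q = 0.1 * rng.standard_normal((n_items, k))   # item factors (unconstrained)
lr, reg = 0.01, 0.05

for epoch in range(30):
    for u, i, r in obs:
        pu, qi = P[u].copy(), Q[i].copy()
        err = r - pu @ qi                      # error is sign-agnostic
        P[u] += lr * (err * qi - reg * pu)
        Q[i] += lr * (err * pu - reg * qi)

rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in obs]))
print(f"training RMSE: {rmse:.3f}")  # converges despite negative ratings
```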

But the reason we mostly see non-negative rating matrices likely has more to do with websites and services not wanting their products to carry negative ratings: even one star somehow looks better than a negative score. It is the difference between liking something less and outright disliking it.

It may also have to do with how users perceive averages: with both negative and positive ratings, some products could average 0 stars, which on that scale actually means neutral. But users accustomed to the 5-star scales of other websites read neutral as 3 stars, so 0 stars would come across as "really bad".
