Solved – How to preprocess a large sparse matrix and unbalanced classes in machine learning

data preprocessing, machine learning, scikit-learn, svm

I have a large, very sparse matrix with 1000 columns and 15000 rows. It mostly contains zeros; the rest are integer values from 1 to 8.

I'm limited to scikit-learn, and none of the PCA implementations there handle sparse matrices (not even RandomizedPCA).
I tried LDA and found n_components=870 to be optimal, but it worsened my predictions on the test set.
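One sparse-friendly option within scikit-learn is TruncatedSVD, which operates directly on scipy sparse matrices without densifying them. A minimal sketch, with n_components=100 as an arbitrary placeholder to tune:

```python
from scipy import sparse
from sklearn.decomposition import TruncatedSVD

# Hypothetical stand-in for the 15000 x 1000 sparse feature matrix
X = sparse.random(15000, 1000, density=0.02, format="csr", random_state=0)

# TruncatedSVD accepts scipy sparse input directly, unlike PCA,
# which needs a dense, centered matrix.
svd = TruncatedSVD(n_components=100, random_state=0)
X_reduced = svd.fit_transform(X)

print(X_reduced.shape)                      # (15000, 100)
print(svd.explained_variance_ratio_.sum())  # variance retained by 100 components
```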

I'm using LinearSVC as my learning algorithm since it gives me the best results; it outperforms random forests and xgboost.

The second problem is that I'm in a multiclass setting with 3 classes to predict: 0, 1, and 2.

However, the classes are extremely unbalanced: 0 is the dominating class, and I have only a few 1s and 2s (fewer than 100).

I'm using the class_weight='auto' argument; is that correct?
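(For context: 'auto' was deprecated in recent scikit-learn versions in favor of 'balanced', which weights each class inversely proportional to its frequency, so the rare 1s and 2s count more heavily in the loss. A minimal sketch, where X_train and y_train are placeholders for your data:)

```python
from sklearn.svm import LinearSVC

# 'balanced' replaces the deprecated 'auto': each class i is weighted by
# n_samples / (n_classes * count(class i)).
clf = LinearSVC(class_weight="balanced", C=1.0)
# clf.fit(X_train, y_train)   # X_train: sparse matrix, y_train: labels 0/1/2
```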

Any advice on the preprocessing and improving my predictions would be helpful.

Best Answer

The standard statistical prescription is to increase the number of replicates to compensate for the rarity of 1s and 2s. This is expensive and wasteful, not to mention that, in your case, it's likely not even possible.

Gary King, the Harvard quantitative political scientist, has an article (with Langche Zeng) about this: "Logistic Regression in Rare Events Data," Political Analysis 9 (2001): 137–163. Here's the abstract of that article:

"We study rare events data, binary dependent variables with dozens to thousands of times fewer ones (events, such as wars, vetoes, cases of political activism, or epidemiological infections) than zeros ("nonevents"). In many literatures, these variables have proven difficult to explain and predict, a problem that seems to have at least two sources. First, popular statistical procedures, such as logistic regression, can sharply underestimate the probability of rare events. We recommend corrections that outperform existing methods and change the estimates of absolute and relative risks by as much as some estimated effects reported in the literature. Second, commonly used data collection strategies are grossly inefficient for rare events data. The fear of collecting data with too few events has led to data collections with huge numbers of observations but relatively few, and poorly measured, explanatory variables, such as in international conflict data with more than a quarter-million dyads, only a few of which are at war. As it turns out, more efficient sampling designs exist for making valid inferences, such as sampling all variable events (e.g., wars) and a tiny fraction of nonevents (peace). This enables scholars to save as much as 99% of their (nonfixed) data collection costs or to collect much more meaningful explanatory variables. We provide methods that link these two results, enabling both types of corrections to work simultaneously, and software that implements the methods developed."

But other academics recommend using a Poisson model instead of logistic regression, since it is intended for rare-event, integer data. For instance, see Fader and Hardie, Probability Models for Customer-Base Analysis, which is marketing-focused but generalizable to your area of application.
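If you want to try the Poisson route while staying within scikit-learn, PoissonRegressor (available since scikit-learn 0.23) fits such a count model. Note it only applies if your target is modeled as a nonnegative count rather than a class label, so treat this as an exploratory sketch on toy data:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Toy count data: y is a nonnegative integer outcome (e.g. event counts).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = rng.poisson(lam=np.exp(X @ np.array([0.3, 0.0, -0.2, 0.1, 0.0])))

model = PoissonRegressor(alpha=1e-3)  # small L2 penalty
model.fit(X, y)
print(model.predict(X[:5]))  # expected counts, not class labels
```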

The extension to machine learning applications is immediate, imho, assuming the issue isn't treated as a purely automated, non-human-aided problem. Spending some time developing these workarounds should lead to their automation in an ML algorithm.
