Outlier Detection – How to Detect Outliers in a Mixture of Gaussians Using Normal Distribution Models

Tags: mixture-distribution, normal-distribution, outliers

I have a ton of univariate samples ($x_i \in \mathbb{R}^+$). I'd like an automated method to check for outliers and identify the outliers, if any are present. A reasonable model for the distribution of the non-outliers is a mixture of Gaussians. The number of Gaussians in the mixture and their parameters are not known a priori. Can you suggest a simple method for identifying outliers? Do you have any recommendations? It'd be nice if it were simple to code up in Python.

Something quick and dirty (say, easy to understand, easy to implement, and pretty effective) beats something complex but optimal. For example, I'm a bit reluctant to wade into something fancy based upon expectation maximization.

Example parameters: I might have 10,000 samples or so. The distribution of non-outliers might be a mixture of 2 Gaussians; or I might have a mixture of a few hundred Gaussians.

Update: People have asked how anything could possibly be an outlier, given these assumptions. (Presumably, the unstated concern is that this problem may be unsolvable: if every data set is always explainable by some mixture model, then there's no basis to ever identify anything as an outlier.) That's a fair question, so let me try to respond. In my application domain, I can reasonably assume that there will be dozens of samples from each component Gaussian; e.g., I might have 40,000 samples from a mixture of 100 Gaussians, where each component has probability no lower than 0.001 (so it is almost guaranteed that I have at least 10 samples from each Gaussian). I realize I didn't state this assumption earlier, and I apologize for that. With this additional assumption, however, I believe the problem is solvable: there are data sets in which one or more points cannot reasonably be explained by any mixture model of this kind. For example, consider a data set with a single isolated point that is very far from all others: if it's far enough away, it can't be explained by the Gaussian mixture model and can be recognized as an outlier. So the problem is well-defined (given the additional assumption stated here), and there do exist situations where some points can reasonably be identified as outliers.
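
For what it's worth, a quick binomial calculation backs up the sampling claim:

pbinom(9, size = 40000, prob = 0.001)         # chance a given component yields fewer than 10 samples: about 4e-9
100 * pbinom(9, size = 40000, prob = 0.001)   # crude union bound over 100 components: still negligible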

Note that I'm not trying to propose a special or unusual definition of outlier. I am happy to use the standard notion (e.g., a point that cannot reasonably be explained as having been generated by the hypothesized process, because it is too unlikely under that process).
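
To make that concrete, here is the kind of check I have in mind, sketched in R against a made-up, already-fitted mixture (the weights, means, and standard deviations below are purely illustrative, not estimates from real data):

mixture.density <- function(x, w, mu, s)
  sapply(x, function(xi) sum(w * dnorm(xi, mean = mu, sd = s)))

w <- c(0.5, 0.5); mu <- c(100, 250); s <- c(1, 1)   # illustrative fitted mixture
x.test <- c(100, 250, 400)
x.test[mixture.density(x.test, w, mu, s) < 1e-6]    # only 400 is too unlikely under the mixture, so only 400 is flagged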

Best Answer

I have suggested, in comments, that an "outlier" in this situation might be defined as a member of a "small" cluster centered at an "extreme" value. The meanings of the quoted terms need to be quantified, but apparently they can be: "small" would be a cluster of fewer than 10 values, and "extreme" can be determined as outlying relative to the set of component means in the mixture model. In this case, outliers can be found with simple post-processing of any reasonable cluster analysis of the data.
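
Before working through an example, here is a bare-bones sketch of that post-processing idea for univariate data. The clustering rule (cut the sorted values at gaps wider than h), the size cutoff of 10, and the operationalization of "extreme" (lying outside the range of the large-cluster means) are all choices to be tuned to your data:

flag.outliers <- function(x, h = 5, min.size = 10) {
  s  <- sort(x)
  cl <- cumsum(c(1, diff(s) > h))                 # one-dimensional clusters: start a new cluster at each gap > h
  sizes <- tabulate(cl)
  means <- tapply(s, cl, mean)
  big   <- means[sizes >= min.size]               # means of clusters large enough to be mixture components
  extreme <- means < min(big) | means > max(big)  # "extreme": outside the range of those component means
  s[cl %in% which(sizes < min.size & extreme)]    # members of small, extreme clusters
}

Applied to the data simulated below, flag.outliers(x) should return just the nine high values; treat it as a template rather than a finished tool.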

Choices have to be made in fine-tuning this approach. These choices will depend on the nature of the data and therefore cannot be completely specified in a general answer like this. Instead, let's analyze some data. I use R due to its popularity on this site and succinctness (even compared to Python).

First, create some data as described in the question:

set.seed(17) # For reproducible results
centers <- rnorm(100, mean=100, sd=20)          # 100 component means drawn from Normal(100, 20)
x <- c(centers + rnorm(100*100, mean=0, sd=1),  # 100 draws around each of those means (sd = 1)
       rnorm(100, mean=250, sd=1),              # one more component: 100 values near 250
       rnorm(9, mean=300, sd=1))                # a small cluster of 9 values near 300

This command specifies 102 components: 100 of them are situated like 100 independent draws from a normal(100, 20) distribution (and will therefore tend to lie between 50 and 150); one of them is centered at 250, and one is centered at 300. It then draws 100 values independently from each component (using a common standard deviation of 1) but, in the last component centered at 300, it draws only 9 values. According to the characterization of outliers, the 100 values centered at 250 do not constitute outliers: they should be viewed as a component of the mixture, albeit situated far from the others. However, one cluster of nine high values consists entirely of outliers. We need to detect these but no others.
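A quick look at the simulated sample (my addition) confirms this structure:

length(x)      # 100*100 + 100 + 9 = 10109 values in all
sum(x > 200)   # 109: the 100 values near 250 plus the 9 values near 300
plot(sort(x), pch=".", main="Sorted sample")   # the two high clusters appear as short flat runs at the far right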

Most omnibus univariate outlier-detection procedures would either not detect any of these 109 highest values or would indicate all 109 are outliers.
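To see the problem with one such omnibus rule (an aside of mine, not part of the procedure that follows): the usual boxplot criterion cannot tell the 250-component apart from the nine genuine outliers.

out <- boxplot.stats(x)$out   # the standard 1.5*IQR rule
sum(out > 200)                # 109: the whole 250-component is swept up along with the true outliers
length(out)                   # usually larger still, since tail points of the bulk get flagged too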

Suppose we have a good sense of the standard deviations of the components (obtained from prior information or from exploring the data). Use this to construct a kernel density estimate of the mixture:

d <- density(x, bw=1, n=1000)   # Gaussian KDE: bandwidth matched to the component SD, evaluated on 1000 grid points
plot(d, main="Kernel density")

[Figure: "Kernel density" plot of the estimate d]

The (almost invisible) blip at the extreme right qualifies as a set of outliers: its small area (less than 10/10109 ≈ 0.001 of the total) indicates it consists of just a few values and its situation at one extreme of the x-axis earns it the appellation of "outlier" rather than "inlier." Checking these things is straightforward:

x0 <- d$x[d$y > 1000/length(x) * dnorm(5)]   # keep only grid points where the density is non-negligible
gaps <- tail(x0, -1) - head(x0, -1)          # spacings between consecutive retained grid points
hist(gaps, main="Gap Counts")

[Figure: "Gap Counts" histogram of the gaps]

The density estimate d is represented by a 1D grid of 1000 bins. These commands have retained all bins in which the density is sufficiently large. For "large" I chose a very small value, to make sure that even the density of a single isolated value is picked up, but not so small that obviously separated components are merged.
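As a sanity check on that choice (another aside of mine): with a Gaussian kernel, a single isolated value contributes a peak density of about dnorm(0, sd = bw)/n, so any cutoff comfortably below that will retain the bins around even a lone outlier.

peak.single <- dnorm(0, sd = 1) / length(x)   # ~ 4e-05: peak KDE height contributed by one isolated value
threshold   <- 1000/length(x) * dnorm(5)      # the cutoff used above, ~ 1.5e-07
threshold < peak.single                       # TRUE: a lone outlier keeps its bins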

Evidently the gap distribution has two high outliers (which can automatically be detected using any simple procedure, even an ad hoc one). One characterization is that they both exceed 25 (in this example). Let's find the values associated with them:

large.gaps <- gaps > 25                                        # the two outlying gaps
ranges <- rbind(tail(x0,-1)[large.gaps],                       # lower endpoints of the candidate ranges
                c(tail(head(x0,-1)[large.gaps], -1), max(x)))  # upper endpoints, closing the last range at max(x)

The output is

         [,1]     [,2]
[1,] 243.9937 295.7732
[2,] 256.3758 300.9340

Within the range of data (from 25 to 301) these gaps determine two potential outlying ranges, one from 244 to 256 (column 1) and another from 296 to 301 (column 2). Let's see how many values lie within these ranges:

lapply(apply(ranges, 2, function(r){x[r[1] <= x & x <= r[2]]}), length)

The result is

[[1]]
[1] 100

[[2]]
[1] 9

The 100 is too large to be unusual: that's one of the components of the mixture. But the 9 is small enough. It remains to see whether either of these clusters might be considered outlying (as opposed to inlying):

apply(ranges, 2, mean)

The result is

[1] 250.1848 298.3536

The center of the 100-point cluster is near 250 and the center of the 9-point cluster is near 298; the latter lies far enough from the rest of the data (and from the other component centers) to mark its nine values as a cluster of outliers. We conclude there are nine outliers. Specifically, they are the values determined by column 2 of ranges,

x[ranges[1,2] <= x & x <= ranges[2,2]]

In order, they are

299.0379 300.0376 300.2696 300.3892 300.4250 300.5659 300.7018 300.8436 300.9340
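
Finally, the whole procedure can be collected into one function. This is a sketch under the same assumptions as above (a known bandwidth, a gap cutoff, and the understanding that small, extreme clusters are the outlier candidates); deciding which of the returned clusters are truly outlying is still left to you, exactly as in the walkthrough:

find.high.clusters <- function(x, bw = 1, gap.cutoff = 25) {
  d  <- density(x, bw = bw, n = 1000)
  x0 <- d$x[d$y > 1000/length(x) * dnorm(5)]            # bins with non-negligible density, as above
  gaps <- tail(x0, -1) - head(x0, -1)
  big  <- gaps > gap.cutoff                             # assumes at least one gap exceeds the cutoff
  ranges <- rbind(tail(x0, -1)[big],
                  c(tail(head(x0, -1)[big], -1), max(x)))
  members <- lapply(seq_len(ncol(ranges)),
                    function(j) x[ranges[1, j] <= x & x <= ranges[2, j]])
  list(ranges  = ranges,
       sizes   = sapply(members, length),    # small sizes suggest outliers
       centers = apply(ranges, 2, mean),     # extreme centers suggest outliers
       members = members)
}

With the defaults shown, find.high.clusters(x)$members[[2]] should reproduce the nine values listed above.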