Solved – What exactly happens when I do a feature cross

Tags: data transformation, linear algebra, machine learning, nonlinear

I was going through a machine learning course that talked about combining features to create synthetic features in order to handle nonlinear data. For example, in the picture below I didn't do any feature crossing and the model didn't fit:

[Image: no feature crossing]

But if I do some feature crossing and create/activate the features $x_1^2$, $x_2^2$ and $x_1 x_2$, I get this:

[Image: fits with feature crossing]

The model fits now. But why? What exactly does feature crossing do that enables a model to fit nonlinear data?

Can someone please help me understand this?
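For concreteness, the crossed features described above ($x_1^2$, $x_2^2$, $x_1 x_2$) can be computed directly; here is a minimal NumPy sketch (the function name and the sample input are my own, not from the course):

```python
import numpy as np

def add_feature_crosses(X):
    """Append the degree-2 crosses x1^2, x2^2 and x1*x2
    to a 2-column feature matrix X of shape (n, 2)."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([x1, x2, x1**2, x2**2, x1 * x2])

X = np.array([[2.0, 3.0]])
print(add_feature_crosses(X))  # [[2. 3. 4. 9. 6.]]
```

Libraries such as scikit-learn offer the same expansion generically via `PolynomialFeatures`.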

Best Answer

Your data is not linearly separable in the original space.

But it does appear to be separable by a circle/ellipse (let's say a circle, to simplify the problem): it seems reasonable to hypothesize that a point is blue whenever $x^2 + y^2 < c$ for some constant $c$.

That means that if you use $x^2$ and $y^2$ as features, you can fit a linear classifier to these data points and actually separate the classes linearly: in the new coordinates $(u, v) = (x^2, y^2)$, the circular boundary $x^2 + y^2 = c$ becomes the straight line $u + v = c$.
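You can check this numerically. The sketch below (synthetic data, with the threshold $c = 1$ assumed) labels points inside the unit circle as blue, then shows that a fixed linear rule in the squared-feature space separates the classes exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
# Blue (1) if the point lies inside the unit circle, else red (0).
y = (X[:, 0]**2 + X[:, 1]**2 < 1.0).astype(int)

# Squared features: the circle x^2 + y^2 = 1 becomes the LINE u + v = 1.
U = X**2
# A linear classifier in (u, v) space: predict blue when 1 - u - v > 0.
pred = (1.0 - U[:, 0] - U[:, 1] > 0).astype(int)
print((pred == y).mean())  # 1.0 — a linear rule separates the classes perfectly
```

The linear decision function $1 - u - v$ in the transformed space is exactly the circular decision function $1 - x^2 - y^2$ in the original space, which is why the fit succeeds.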
