Is a hyperplane uniquely defined by a single normal vector?

geometry · linear algebra · machine learning · orthogonality

So I was always taught that a plane has two directions that are normal, or perpendicular, to it. However, upon reading this comment, I learned that in dimensions higher than $\mathbb{R}^3$ there can be more than one vector normal to a hyperplane.

This got me thinking: suppose our plane in $\mathbb{R}^4$ is $$ w + x + y + 0z = w + x + y = 0.$$
Then our normal vector is $[1,1,1,0]$. All vectors on the plane are normal to that vector. However, the same is true for $[1,1,1,999]$, which points in a different direction, right (please tell me if this is actually pointing in the same direction and I'm an idiot)?
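
To make this concrete, here is the quick NumPy check I would run, dotting each candidate against a basis of the plane (the basis vectors below are just ones I picked by hand, so treat this as a sketch):

```python
import numpy as np

# The hyperplane w + x + y + 0z = 0 in R^4, spanned by three vectors picked by hand.
plane_basis = np.array([
    [1, -1,  0, 0],   # w=1, x=-1, y=0, z=0  ->  w + x + y = 0
    [1,  0, -1, 0],
    [0,  0,  0, 1],   # z is free, since its coefficient in the equation is 0
])

candidates = {
    "[1, 1, 1, 0]":   np.array([1, 1, 1, 0]),
    "[1, 1, 1, 999]": np.array([1, 1, 1, 999]),
}

for name, v in candidates.items():
    dots = plane_basis @ v  # dot product with each basis vector of the plane
    print(name, "->", dots, "orthogonal to the whole plane:", bool(np.all(dots == 0)))
```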

So essentially, we can't uniquely define a plane by a single vector (or negative one times that vector)? There are multiple vectors that can be orthogonal to that plane, is this correct?

How come, then, in machine learning our perceptron algorithm only uses a single $\theta$ (normal vector) to define the plane? Couldn't it be the case that a different normal vector could produce a different classification answer?
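
For reference, this is the kind of decision rule I have in mind; it's a minimal sketch, and the data and variable names are made up:

```python
import numpy as np

def predict(theta, X):
    # Perceptron-style decision rule: classify each point by which side of
    # the hyperplane theta . x = 0 it falls on.
    return np.sign(X @ theta)

theta = np.array([1, 1, 1, 0])        # the single normal vector the perceptron learns
X = np.array([[ 2, -1,  0,  5],       # a couple of made-up points in R^4
              [-3,  1,  1, -7]])
print(predict(theta, X))              # [ 1 -1]
```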

Best Answer

The essential idea to understand is that of the Hodge dual. It goes as follows: in a $d$-dimensional space, there is a natural association between $n$-dimensional objects and $(d-n)$-dimensional objects. So, if we are working in 3D, there is a natural association between 1-dimensional objects and 2-dimensional objects. Here is a list of all the correspondences in 3D space:

Scalars <--> Volumes

Vectors <--> Areas

We can also see that in a 2-dimensional space we have a correspondence:

Scalars <--> Areas

Vectors <--> Vectors

Let me be more specific about this association. If we build up the higher-dimensional objects from a set of basis objects, then the size of that basis is the same for the $n$-dimensional objects and the $(d-n)$-dimensional objects.
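
A quick way to see the counting claim, assuming we index the basis objects by subsets of the coordinate axes so the counts are binomial coefficients ($\binom{d}{n} = \binom{d}{d-n}$):

```python
from math import comb

d = 3  # dimension of the ambient space; try d = 2 or d = 4 as well
for n in range(d + 1):
    # number of basis n-dimensional objects vs. basis (d-n)-dimensional objects
    print(f"n = {n}: C({d},{n}) = {comb(d, n)}   C({d},{d-n}) = {comb(d, d - n)}")
```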

Of course, the above idea only defines the hyperplane up to a parallel shift. To get a specific hyperplane/plane, you also need to specify a point on it.
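
In symbols (with my own notation: $v$ the normal vector, $p$ the chosen point), the hyperplane is $$ H = \{\, x \in \mathbb{R}^d : v \cdot (x - p) = 0 \,\}. $$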

Hope this helps you.
