[Math] Dot product of two vectors without a common origin

eigenvalues-eigenvectors, linear-algebra, matrices, vector-spaces

Given two unit vectors $v_1, v_2\in \mathbb{R}^n$, their dot product is defined as $v_1^Tv_2=\|v_1\|\cdot\|v_2\|\cdot\cos(\alpha)=\cos(\alpha)$. Now, suppose the vectors are in the relation $v_2=v_1+a\cdot 1_n$, i.e., the vectors are parallel (one is a shifted version of the other), where $a\in \mathbb{R}$ is a constant and $1_n=[1 \dots 1]^T\in\mathbb{R}^n$ is the vector of all ones. The dot product would now be $$v_1^Tv_2=v_1^T(v_1+a\cdot 1_n)=1+a\cdot v_1^T1_n. \tag{1}$$
This implies that shifting the vectors changes the dot product, yet formally $v_1^Tv_2=\cos(\alpha)$ still holds, where the angle now has no meaning. Does that imply that, to perform a proper angle check between two vectors, one has to center them (make the average of each vector's entries zero, which would be one option)?
If so, what does this imply geometrically? Are vectors brought to the same origin this way?
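A quick numerical check of (1) and of the centering idea (a minimal sketch using NumPy; the particular vector and the shift $a$ are arbitrary illustrative choices):

```python
import numpy as np

n, a = 4, 0.5
ones = np.ones(n)

v1 = np.array([1.0, -2.0, 0.5, 3.0])
v1 /= np.linalg.norm(v1)           # make v1 a unit vector
v2 = v1 + a * ones                 # the shifted vector from the question

# Equation (1): v1.v2 = 1 + a * (v1.1_n) -- the two prints agree
print(v1 @ v2, 1 + a * (v1 @ ones))

# Centering both vectors (subtracting each one's mean) removes the 1_n
# component entirely, so the shift disappears:
c1 = v1 - v1.mean()
c2 = v2 - v2.mean()
print(np.allclose(c1, c2))         # True: the centered vectors coincide
cos_alpha = (c1 @ c2) / (np.linalg.norm(c1) * np.linalg.norm(c2))
print(cos_alpha)                   # 1.0, as expected for identical directions
```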

Another, more interesting observation is with $v_1, v_2$ being unit eigenvectors of two different matrices. From (1), it can be concluded that when the vector entry sums are zero (the vectors are centered), the origin is shared (?). For which matrices are the eigenvectors centered, i.e., with eigenvector entries summing to zero? For instance, for symmetric matrices having $1_n$ as an eigenvector, by virtue of the orthogonality of the eigenvectors, the entries of every other eigenvector sum to $0$. Are these the only kind of matrices with this eigenvector property?
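As a concrete instance of such a matrix (a minimal NumPy sketch; the particular graph is an arbitrary choice): a graph Laplacian is symmetric and always has $1_n$ as an eigenvector with eigenvalue $0$, so every other eigenvector returned by a symmetric eigensolver is orthogonal to $1_n$, i.e., centered:

```python
import numpy as np

# Adjacency matrix of a small connected graph (arbitrary choice)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A     # Laplacian: every row sums to zero,
                                   # so L @ 1_n = 0 and 1_n is an eigenvector

w, V = np.linalg.eigh(L)           # eigh returns orthonormal eigenvectors
print(np.round(V.sum(axis=0), 10)) # every column except the 1_n-direction
                                   # has entries summing to ~0
```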

Best Answer

In an affine space we can "forget" about the origin, in the sense that it is determined by an arbitrary choice of coordinates and so isn't a distinguished part of the space itself. This space has points, and between points we can draw arrows to describe direction. These "arrows" are vectors, and the set of all vectors forms a vector space: an algebraic structure where addition makes sense, as does scalar multiplication by elements of a given field (the real numbers here). There is a vector which is the additive identity, zero. The vectors act on the points in the affine space by translating them from one location to another, according to the direction and magnitude of the vector. This should all be known already, but it is key that, at first, the affine space and the vector space are two different things.

The vector space has an origin distinguished by being the additive identity, but we can take a copy of this vector space and then interpret the vectors as points; the arrow that exists between two points is then the vector that needed to be added algebraically to go from one to the other, and we can keep the origin as part of a particular coordinate system. In this way we can view a space as both a vector space and an affine space simultaneously!

It gets a little tricky when we want to describe geometry though. Two vectors standing on an affine space are parallel if they point in the same direction, with no restrictions on their base points. On the other hand, if we want to view these parallel vectors in their vector space habitat as arrows, they must be arrows pointing from the origin. The inner product is an operation on the vector space, so if we have two vectors in affine space that we want to dot together, we do have to "center" them in this way so that the angle-between-them interpretation remains valid.

We can translate vectors on the affine space (move them around without changing their direction) and they remain the same vector, just with a different base point. The operation of addition on the vector space, however, results in a new vector (when the added vector is nonzero), and moreover adding two nonparallel vectors results in a vector that is not parallel to either of the original two.

What we can say instead is that if we have the zero vector $0$, a vector $v$, and a translation vector $w$, we can interpret $0$ and $v$ as points, and the arrow between them will of course be the vector $v$; if we translate the points $0$ and $v$ by the vector $w$ we obtain the points $w$ and $v+w$ respectively (we must be careful about which we call vectors and which we call points!). The vector between these latter two points will again be $v$, which is obviously parallel to our original vector (because they are one and the same vector).

If $p$ is a vector we reinterpret as a point, and $v$ a vector in affine space with base point $p$, then the vector $v$ understood as an arrow will point specifically to the point $p+v$ (remember the addition takes place in the vector space, so to understand this we have to go back to the vector interpretation of $p$, add to $v$, and then forward again to the affine interpretation as a point). The point $p+v$ corresponds to the original vector $p+v$, so the "centering" process involves taking the point $p$ back to the origin (associated to the zero vector) as well as the point $p+v$ back to the point $v$, which is done by subtracting out the vector $p$. In other words, to center a vector existing in affine space, we take the point that it points to as an arrow, interpret it as a vector and subtract out the vector associated to the original base point. This is conceptually a rather roundabout process, but it's what goes on.
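In code the centering step is a one-liner (a minimal sketch; the base point and vector are arbitrary illustrative values):

```python
import numpy as np

p = np.array([2.0, -1.0, 4.0])     # base point, reinterpreted as a vector
v = np.array([1.0, 0.5, -3.0])     # the vector with base point p
tip = p + v                        # the point the arrow points to

centered = tip - p                 # subtract out the base-point vector...
print(np.allclose(centered, v))    # ...recovering the origin-based vector v
```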

Moreover, there is nothing special about the vector $1_n:=(1,\dots,1)$ when it comes to centering; it does shift every component by $1$ when added to a vector, but generally this doesn't center anything at all. Translating a point in affine space just moves it in some specific direction, and indeed there is nothing inherently special about this direction; if we change our coordinate system the component form of this vector could be almost anything we want it to be.
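To see the coordinate dependence concretely (a minimal sketch; the basis matrix $B$ is an arbitrary invertible choice): the same direction whose standard components are all ones has quite different components in another basis.

```python
import numpy as np

ones = np.ones(3)
B = np.array([[1.0, 2.0, 0.0],     # columns of B form an arbitrary
              [0.0, 1.0, 1.0],     # alternative basis of R^3
              [1.0, 0.0, 1.0]])

coords = np.linalg.solve(B, ones)  # components of the same vector in basis B
print(coords)                      # [1/3, 1/3, 2/3] -- no longer all ones
```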

What does it mean when the sum of the components of a vector is zero? (First, keep in mind this sum depends on the choice of coordinate system, so it is not intrinsically a function of just the vector space. This is because which vector "$1_n$" specifies depends on the coordinates.) It means the dot product between $v$ and $1_n$ is zero, so they are orthogonal, a.k.a. perpendicular. Thinking of matrices as linear transformations of a vector space (given coordinates) then allows us to use this information to characterize the matrices (with eigenvectors' entries summing to zero) in a geometric way.
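The identity behind this (a minimal sketch; the vector is an arbitrary centered example) is simply $\sum_i v_i = v^T 1_n$:

```python
import numpy as np

v = np.array([3.0, -1.5, -1.0, -0.5])  # entries sum to zero
ones = np.ones_like(v)
print(v.sum(), v @ ones)               # identical: the component sum *is*
                                       # the dot product with 1_n
```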
