Given some vector space, can we take vectors as arrows to form a category?
To be clear, I am not thinking of vector spaces as objects and linear transformations between them as arrows. In the monoid situation, we have a choice: we can take homomorphisms between monoids as arrows, or the elements of a single monoid as arrows (with only one object). In linear algebra it seems we have the same choice, but here it feels in a certain way more visual, because vectors are "arrows" in the everyday sense of the word.
I think we could take a single object $\star$, as in the monoid case, and take composition to be vector addition. This seems to work at a superficial level: vector addition is associative, and the null vector is the obvious choice for $1_\star$. So this does form a category, but it seems far from using all the toys we have in a vector space.
Can we use some categorical magic to recover the extra structure (scalar multiplication, etc.)? Can we change the definition above (taking more objects to work with, for example) to make it visually closer to the geometric picture of vectors?
Best Answer
This is likely not quite an answer to your question, but maybe you will find it interesting since it somewhat relates the "arrow" nature of vectors to the "arrow" nature of morphisms. However, this could be viewed more as a categorification of your observation, so I doubt it gives any insights about linear algebra per se.
The details of what I'm going to say below can be found in the paper Higher-Dimensional Algebra VI: Lie 2-Algebras by Baez and Crans.
Given a category $\mathcal K$ with finite pullbacks, we can define a category $C$ internal to $\mathcal K$ to be a pair of objects $C_0$ and $C_1$ in $\mathcal K$ (interpreted as the "object of objects" and the "object of [all] morphisms" respectively) equipped with morphisms $s,t:C_1\rightrightarrows C_0$ (the "source" and "target" maps, indicating the domain and codomain of the morphisms of $C$), a morphism $i:C_0\to C_1$ (the "identity" map, sending objects to their respective identity endomorphisms), and a composition $c:C_1\times_{C_0}C_1\to C_1$ (composing morphisms with compatible co/domains). These structure morphisms need to satisfy constraints analogous to the axioms of a category.
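To spell out part of what these constraints say (the projection names $p_1,p_2$ are my notation, not the paper's): the pullback $C_1\times_{C_0}C_1$ is taken along $s$ and $t$, so it is the "object of composable pairs" $(g,f)$ with $s(g)=t(f)$, and writing $p_1,p_2:C_1\times_{C_0}C_1\rightrightarrows C_1$ for its projections, one requires among other things $$ s\circ i = \mathrm{id}_{C_0} = t\circ i, \qquad s\circ c = s\circ p_2, \qquad t\circ c = t\circ p_1, $$ together with associativity and unit laws for $c$, all stated as equations between morphisms of $\mathcal K$.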
The simplest examples are categories internal to the category $\mathbf{Set}$ of sets, which gives you the (small) categories. However, if you take categories internal to the category $\mathbf{Vect}$ of vector spaces (over a fixed field $\Bbbk$), you get "2-vector spaces" in the sense of Baez and Crans.
More explicitly, a 2-vector space is like a category, but now you have a vector space $V_0$ of objects, and a vector space $V_1$ of morphisms, and you require that taking co/domains, identities, and composites are all linear transformations. This has some pretty surprising consequences that might sound similar to what you're looking for. In particular, Baez and Crans prove in Lemma 3.2 that the composition of morphisms in $V_1$ is precisely given by "adding the vectors tail to tip!"
More precisely, if $f:x\to y$ is a morphism in $V_1$ (that is, $s(f)=x$ and $t(f)=y$), then you can think of $f$ as the pair $(x,\vec f)$, where $x$ is the "starting point" in $V_0$ of the morphism, and $\vec f\in V_1$ is the "direction and magnitude" of $f$. Here, $\vec f$ is defined to be $$ \vec f := f - i(x) $$ (note that it is meaningful to think of $\vec f$ as a direction and magnitude because its source is $s(\vec f)=s(f-i(x))=s(f)-s(i(x))=x-x=0$, so this is the "component" of $f$ that starts at the origin; moreover, we have $x+t(\vec f)=y$, so the vector really does "point to" $y$).
What Lemma 3.2 says is that composition in your 2-vector space is uniquely determined by $s,t:V_1\rightrightarrows V_0$ and $i:V_0\to V_1$: given two composable morphisms $f=(x,\vec f)$ and $g=(y,\vec g)$, the composite $g\circ f = c(g,f)$ has to be the sum $$ g\circ f = (x,\vec f+\vec g), $$ so in a somewhat precise sense, the structure of the morphisms in a 2-vector space (i.e., a category internal to $\mathbf{Vect}$) really does reflect the structure of their underlying vectors. You can thus interpret $V_0$ as the space of "starting positions" of your morphism vectors, and $V_1$ as the space of vectors shooting out of these positions.
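Here is a quick sketch of why linearity forces this formula (my paraphrase of the argument behind Lemma 3.2). Suppose $f:x\to y$ and $g:y\to z$ are composable. Inside $V_1\times_{V_0}V_1$ we can decompose $$ (g,f) = (g-i(y),\,0) + (0,\,f-i(y)) + (i(y),\,i(y)), $$ and each summand is a composable pair (e.g., $s(g-i(y))=y-y=0=t(0)$). Since $c$ is linear and identities are units for composition, $$ g\circ f = c(g,f) = \bigl(g-i(y)\bigr) + \bigl(f-i(y)\bigr) + i(y) = f + g - i(y), $$ and substituting $f=i(x)+\vec f$ and $g=i(y)+\vec g$ gives $g\circ f = i(x)+\vec f+\vec g$, which is exactly the pair $(x,\vec f+\vec g)$.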
In fact, if you take $V_0=0$ to be trivial, then your 2-vector space reduces to an ordinary vector space viewed as a monoid under addition.
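Concretely (a quick check, not spelled out in the paper): if $V_0=0$, then $s=t=0$, so every pair of morphisms is composable, and the composition formula above reduces to $$ g\circ f = f + g - i(0) = f + g, $$ which is exactly the one-object category from the question, with composition given by vector addition.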
A word of warning about this interpretation: $V_0$ and $V_1$ are still different spaces (though $i:V_0\to V_1$ is injective, so you can think of $V_0$ as sitting inside $V_1$). Baez and Crans interpret the vectors of $V_0$ as ordinary vectors of a vector space, but then view $V_1$ as the space of "infinitesimal vectors" based at points in $V_0$. This means a morphism $f=(x,\vec f)$ consists of a vector $x$ plus an "infinitesimal" component $\vec f$.
In fact, even this can be made somewhat precise, and is done so in Theorem 3.8, which (roughly) says that 2-vector spaces are equivalent to 2-term chain complexes of vector spaces $C_1\xrightarrow{d}C_0$. So a 2-vector space amounts to a space $C_0$ together with a "differential" $d:C_1\to C_0$ (which would encode the infinitesimals). For completeness, I will at least describe the functors (on the level of objects) going both ways.
Given a 2-vector space $(V_0,V_1,s,t,i)$, the associated 2-term chain complex has components $C_0 := V_0$ and $C_1 := \ker(s)\subseteq V_1$. Note that $C_1$ is exactly the subspace of those morphisms in $V_1$ of the form $\vec f$ for some $f\in V_1$, so this is the space of "infinitesimal components." The differential $d:C_1\to C_0$ is given by $t$.
Conversely, given a chain complex $C_1\xrightarrow dC_0$, the associated 2-vector space has $V_0 := C_0$ as above, and $V_1 := C_0 \oplus C_1$, which we interpret as pairs $(x,\vec f)$, and define $s,t,i$ accordingly ($i(x)=(x,0)$, $s(x,\vec f)=x$, and $t(x,\vec f)=x+d\vec f$).
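As a sanity check (mine, not spelled out in the paper), these two constructions undo each other on objects: starting from a complex $C_1\xrightarrow dC_0$ and forming $V_1=C_0\oplus C_1$ with the structure maps above, we get $$ \ker(s) = 0\oplus C_1 \cong C_1, \qquad t(0,\vec f) = 0 + d\vec f = d\vec f, $$ so restricting $t$ to $\ker(s)$ recovers the differential $d$, exactly as in the first construction.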