There are ways in which $F^S$ relates to $F^n$, and ways in which it doesn't.
Each element of $F^S$ is determined by taking each element of $S$ and assigning it a value from $F$. This is, by definition, an $F$-valued function on $S$.
Each element of $F^n$ is an $n$-tuple of the form $(f_1, \ldots, f_n)$, with $f_i \in F$. This can be reinterpreted as an $F$-valued function on a set with $n$ elements. So there is something that makes $F^n$ resemble $F^S$.
What is very dangerous, however, is to go further and think of the function values of each element of $F^S$ as coordinates; in general this is not correct. Take a look at an element of $F^n$ of the form $v = (f_1, \ldots, f_n)$. Each $f_i$ is the coordinate of $v$ with respect to the standard basis vector $e_i$ (the tuple that is $0$ everywhere except for a $1$ in the $i$-th position). So in the finite case, coordinates and function values coincide, and everything works out.
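To see this concretely, here is a small Python sketch (my own illustration, not part of the original answer) modeling an element of $F^n$ as a tuple: reassembling it as $\sum_i f_i e_i$ recovers exactly the function values.

```python
# Sketch: for tuples in F^n, coordinates w.r.t. the standard basis
# coincide with function values.  We take F = R and n = 4.

n = 4
v = (2.0, -1.0, 0.5, 3.0)

def e(i, n):
    """Standard basis vector e_i of F^n: 1 in position i, 0 elsewhere."""
    return tuple(1.0 if j == i else 0.0 for j in range(n))

# Reassemble v as the linear combination sum_i v[i] * e_i.
reconstructed = tuple(
    sum(v[i] * e(i, n)[j] for i in range(n)) for j in range(n)
)
assert reconstructed == v  # the coordinates are exactly the function values
```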
However, consider an element $w \in F^S$. It is determined by its values $w(s)$ for $s \in S$, so it's tempting to say that these values are the coordinates of $w$ with respect to a basis of functions that take the value $1$ at one particular $s$ and $0$ elsewhere. The problem with this claim is that if $S$ is an infinite set and $w$ is a function that is everywhere nonzero, then $w$ has infinitely many nonzero "coordinates," which would mean it is expressed as an infinite sum of basis vectors. Infinite sums are not allowed in linear algebra, because they require notions of convergence supplied by calculus, notions that may or may not exist for a generic vector space. There are ways to introduce infinite sums into linear algebra, but that is a whole different story.
So while it is true that $F^S$ is a vector space, if $S$ is an infinite set, then a basis of $F^S$ cannot be described in a simple way using the values each function takes on $S$. Indeed, exhibiting any such basis requires some form of the axiom of choice.
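The obstruction can be made concrete with a short Python sketch (my own, not from the answer): taking $S = \Bbb Z$ and $F = \Bbb R$, any *finite* linear combination of the indicator functions $\delta_s$ vanishes outside a finite set, so it can never equal an everywhere-nonzero function such as the constant function $1$.

```python
# Sketch: finite combinations of indicator functions on an infinite set
# always have finite support, so they do not span all of F^S.

def delta(s):
    """Indicator function of the single point s in S = Z."""
    return lambda t: 1.0 if t == s else 0.0

def finite_combination(coeffs):
    """Linear combination sum_s coeffs[s] * delta_s; coeffs is a finite dict."""
    return lambda t: sum(c * delta(s)(t) for s, c in coeffs.items())

w = finite_combination({0: 2.0, 5: -1.0, 7: 3.0})

# w agrees with its coefficients on the finite support ...
assert w(0) == 2.0 and w(5) == -1.0
# ... and is zero everywhere else, unlike the constant function t -> 1.
assert w(100) == 0.0
```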
All of this can be summarized in the following statement:
Let $V$ be a vector space over $F$ with a basis $\beta$. Then $V$ is isomorphic to $F^{\beta}$ if and only if $\beta$ is a finite set, i.e. $V$ is finite-dimensional.
The following are primary examples of vector spaces (over the real numbers):
A one-point set, regarding the point as the origin, i.e. the zero space $\{0\}$. This space is $0$-dimensional.
A full line through the origin (along the lines of your picture A, except that we also include negative and all real multiples of its vectors). Such a line is $1$-dimensional.
A full plane through the origin, including all its points. Such a plane is $2$-dimensional.
The physical 3d space, after fixing a point as the origin, can be considered a $3$-dimensional vector space: you can add vectors and multiply them by real numbers, which is exactly what the abstract definition requires.
Observe that in all these geometric examples, the elements of the given set can be coordinatized by basis vectors; we have to fix exactly as many basis vectors as the given 'dimension'. This means the elements of the set can be represented by a single coordinate (for a line), a pair of coordinates (for a plane), or a triple of coordinates (for the space). This pattern can then be continued purely algebraically:
For any positive integer $n$, we can define a (canonical) $n$-dimensional vector space: $\Bbb R^n$ consists of the $n$-tuples of real numbers. You can add them and multiply them by any real number, coordinatewise, and you can check that this indeed satisfies the abstract definition of a vector space.
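As a sketch of that check (the function names are my own, not from the answer), here are the coordinatewise operations on $\Bbb R^n$ together with spot-checks of a few vector space axioms on sample vectors:

```python
# Sketch: R^n with coordinatewise operations, spot-checking vector space
# axioms on concrete vectors in R^3.

def add(u, v):
    """Coordinatewise addition in R^n."""
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    """Multiplication of a vector by the real scalar c, coordinatewise."""
    return tuple(c * a for a in v)

u, v, w = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (-1.0, 0.0, 2.0)
zero = (0.0, 0.0, 0.0)

assert add(u, v) == add(v, u)                     # commutativity
assert add(add(u, v), w) == add(u, add(v, w))     # associativity
assert add(u, zero) == u                          # additive identity
assert scale(2.0, add(u, v)) == add(scale(2.0, u), scale(2.0, v))  # distributivity
```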
Best Answer
This is simply applying a vector space structure to the functions from a given set to $\mathbb{C}$. For instance, take $A = \mathbb{R}$. The set of all complex-valued functions on $\mathbb{R}$ forms a vector space, where scalar multiplication and vector addition are defined pointwise as given there. This is simply showing how we can make vector spaces out of functions.
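A minimal Python sketch of this pointwise structure (my own illustration; the particular functions chosen are arbitrary):

```python
# Sketch: the pointwise vector space structure on complex-valued
# functions on R.

import cmath

def vadd(f, g):
    """(f + g)(x) = f(x) + g(x): pointwise addition of functions."""
    return lambda x: f(x) + g(x)

def smul(c, f):
    """(c * f)(x) = c * f(x): pointwise scalar multiplication."""
    return lambda x: c * f(x)

f = lambda x: complex(x, 1.0)     # f(x) = x + i
g = lambda x: cmath.exp(1j * x)   # g(x) = e^{ix}

h = vadd(smul(2j, f), g)          # the "vector" 2i*f + g

# Spot-check the pointwise definition at a sample point.
assert h(0.0) == 2j * f(0.0) + g(0.0)
```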
If you have seen the notation $T \in \mathcal{L}(V,W)$, this might be a little easier to grasp. That is read as "the function $T$ is in the set of all linear transformations from $V$ to $W$", where $\mathcal{L}(V,W)$ is the set of all such transformations. This set has a natural vector space structure on it, namely the one given in the snippet you posted:
For $\alpha \in F$ (the underlying field of both $V$ and $W$) define $\alpha T$ as $v \mapsto \alpha T(v)$ and $S+T$ as $v \mapsto S(v) + T(v)$. This gives us a vector space structure, and the objects (vectors) in this vector space are linear transformations (i.e., functions).
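The following Python sketch (constructions of my own, for $V = W = \Bbb R^2$) implements these pointwise operations on linear maps and spot-checks that the resulting "vector" $3S + T$ is again linear:

```python
# Sketch: pointwise operations on linear maps R^2 -> R^2, checking that
# the result is again a linear map, so L(V, W) is closed under them.

def vadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

def vscale(c, u):
    return (c * u[0], c * u[1])

# Two linear maps R^2 -> R^2.
S = lambda v: (2 * v[0] + v[1], v[1])   # a shear-like map
T = lambda v: (v[1], -v[0])             # rotation by 90 degrees

def map_add(S, T):
    """(S + T)(v) = S(v) + T(v)."""
    return lambda v: vadd(S(v), T(v))

def map_scale(alpha, T):
    """(alpha T)(v) = alpha * T(v)."""
    return lambda v: vscale(alpha, T(v))

U = map_add(map_scale(3.0, S), T)   # the "vector" 3S + T in L(R^2, R^2)

# Spot-check linearity of U: U(a*x + y) == a*U(x) + U(y).
x, y, a = (1.0, 2.0), (-3.0, 0.5), 4.0
lhs = U(vadd(vscale(a, x), y))
rhs = vadd(vscale(a, U(x)), U(y))
assert lhs == rhs
```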
Does that help your understanding?