Take, for example, $V = \mathbb R^2$, the $x$-$y$ plane. Write the vectors as coordinates, like $(3,4)$.
Such a vector can be written as the sum of its $x$-component and $y$-component: $$(3,4) = (3,0) + (0,4)$$
and it can be decomposed even further and written in terms of a "unit" $x$-vector and a "unit" $y$-vector:
$$(3,4) = 3\cdot(1,0) + 4\cdot(0,1).$$
The pair $\{(1,0),(0,1)\}$ of vectors spans $\mathbb R^2$ because ANY vector can be decomposed this way:
$$(a,b) = a(1,0) + b(0,1)$$
or equivalently, the expressions of the form $a(1,0) + b(0,1)$ fill the space $\mathbb R^2$.
It turns out that $(1,0)$ and $(0,1)$ are not the only vectors for which this is true. For example, if we take $(1,1)$ and $(0,1)$, we can still write any vector:
$$(3,4) = 3\cdot(1,1) + 1\cdot(0,1)$$
and more generally
$$(a,b) = a \cdot (1,1) + (b-a)\cdot(0,1).$$
This fact is intimately linked to the matrix $$\left(\begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right)$$ whose rows are the two spanning vectors and whose row space and column space are both two-dimensional.
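If you want to check this numerically, here is a small numpy sketch (my own illustration, not part of the original answer): it puts the spanning vectors $(1,1)$ and $(0,1)$ into the columns of a matrix and recovers the coefficients $a$ and $b-a$ by solving a linear system.

```python
import numpy as np

# Columns are the proposed spanning vectors (1,1) and (0,1).
B = np.array([[1.0, 0.0],
              [1.0, 1.0]])

print(np.linalg.det(B))   # 1.0 -- nonzero, so B is invertible and the pair spans R^2

# Solving B c = (a, b) recovers the coefficients from the formula above.
a, b = 3.0, 4.0
c = np.linalg.solve(B, np.array([a, b]))
print(c)                  # [3. 1.], i.e. (a, b - a)
```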
I would keep going, but your question is general enough that I could write down an entire linear algebra course. Hopefully this gets you started.
Let $V$ be an $n$-dimensional vector space over your favourite field with basis $e_1,\dotsc,e_n$ and $V^*$ its dual space with dual basis $e^1,\dotsc,e^n$.
Strictly speaking, the vector spaces $V$ and $V^*$ do not have any vectors in common. But what is meant by “almost the same” is that there is an isomorphism of vector spaces $\varphi\colon V\rightarrow V^*$ defined by $\varphi(e_i) =e^i$ for $i=1,\dotsc,n$ (and extending by linearity). Under this isomorphism $x = \sum_{i=1}^n x^ie_i$ is mapped to $\varphi(x) = \sum_{i=1}^nx^i\varphi(e_i) = \sum_{i=1}^n x^ie^i$. So in your notation you have $x_i = x^i$ for the coefficients $x_i$ of $\varphi(x)$ with respect to the basis $e^1,\dotsc,e^n$.
Under the isomorphism $\varphi$ you identify $x$ and $\varphi(x)$ with each other, although they are not the same, as they live in different vector spaces.
Beware that the isomorphism $\varphi\colon V\rightarrow V^*$ depends heavily on your chosen basis $e_1,\dotsc,e_n$ of $V$. This means that without a basis of $V$ you cannot (easily) identify $V$ with its dual space $V^*$.
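A concrete way to see this basis dependence is the following numpy sketch (my own illustration, with $V=\mathbb R^2$, vectors as column arrays and functionals as row arrays, so that the dual basis vectors are the rows of $B^{-1}$):

```python
import numpy as np

def phi(x, B):
    """phi(x) as a row vector, for the basis given by the columns of B.

    The coordinates of x in this basis are B^{-1} x, and the dual basis
    vectors e^i (viewed as row vectors) are the rows of B^{-1}, since
    e^i(e_j) = delta_ij.  So phi(x) = sum_i x^i e^i = (B^{-1} x)^T B^{-1}.
    """
    Binv = np.linalg.inv(B)
    return (Binv @ x) @ Binv

x = np.array([3.0, 4.0])
E = np.eye(2)                            # standard basis of R^2
F = np.array([[1.0, 0.0],
              [1.0, 1.0]])               # basis (1,1), (0,1) as columns

print(phi(x, E))   # [3. 4.]
print(phi(x, F))   # [2. 1.] -- a genuinely different functional
```

The same vector $x$ is sent to two different functionals, which is exactly the warning above: the identification of $V$ with $V^*$ changes when the basis does.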
There are ways in which $F^S$ relates to $F^n$, and ways in which it doesn't.
Each element of $F^S$ is determined by taking each element of $S$ and assigning it a value from $F$. This is, by definition, an $F$-valued function on $S$.
Each element of $F^n$ is an $n$-tuple of the form $(f_1, \ldots, f_n)$, with $f_i \in F$. This can be reinterpreted as an $F$-valued function on a set with $n$ elements. So there is something that makes $F^n$ resemble $F^S$.
What is very dangerous, however, is to go further and think of the function values in each element of $F^S$ as coordinates, which is not correct in general. Take a look at an element of $F^n$ of the form $v = (f_1, \ldots, f_n)$. Each $f_i$ is the coordinate of $v$ with respect to the basis vector $e_i$ ($1$ in the $i$-th position and $0$ everywhere else). So in the finite case, coordinates and function values coincide. Everything works out.
However, consider an element $w \in F^S$. Its values are determined by $w(s)$ for $s \in S$, so it's tempting to say that these are the coordinates of $w$ with respect to a basis of functions that take the value $1$ at a particular $s$ and $0$ elsewhere. The problem with this claim is that if $S$ is an infinite set and $w$ is a function that is everywhere nonzero, then $w$ has infinitely many nonzero coordinates, which would mean it is expressed as an infinite sum of basis vectors. Infinite sums in linear algebra are not allowed, because they require notions of convergence conferred by calculus, notions that may or may not exist for generic vector spaces. There are ways to introduce infinite sums into linear algebra, but that's a whole different story.
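To make the finite case tangible, here is a toy Python sketch (my own, not from the answer) modeling elements of $F^S$ as dictionaries and reconstructing $w$ as a finite sum of delta functions:

```python
# A toy model: elements of F^S as Python dicts mapping points of S to scalars.
S = range(3)                            # a finite S with three elements
w = {0: 3.0, 1: 4.0, 2: 5.0}            # an element of R^S, i.e. a function on S

def delta(s):
    """The function that is 1 at s and 0 elsewhere."""
    return {t: (1.0 if t == s else 0.0) for t in S}

# In the finite case, w really is the finite sum  sum_s w(s) * delta_s:
reconstructed = {t: sum(w[s] * delta(s)[t] for s in S) for t in S}
print(reconstructed == w)               # True

# For an infinite S and an everywhere-nonzero w, the analogous sum would
# need infinitely many nonzero terms -- no longer a linear combination.
```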
So while it is true that $F^S$ is a vector space, if $S$ is an infinite set, then its basis cannot be expressed in a simple way using the values each function takes on $S$. Indeed, any such basis requires some form of the axiom of choice to describe.
All of this can be summarized in the following statement: the functions in $F^S$ that take the value $1$ at a single point of $S$ and $0$ everywhere else form a basis of $F^S$ if and only if $S$ is finite.