The definition of linear independence is that every finite linear relation is trivial: any finite linear combination of the vectors that equals zero must have all coefficients zero.
Vector spaces in general do not have any concept of an infinite sum at all. For those vector spaces where the usual concept of an infinite sum of reals can be generalized, one may speak of a different kind of span/basis that allows infinite linear combinations in addition to finite ones. That gives rise to a separate concept, different from the usual finite-linear-combination notion of a basis.
When one needs to distinguish between the different notions of basis, an ordinary basis that works by finite linear combinations is called a "Hamel basis" or "algebraic basis", and one that needs infinite linear combinations to span everything is called a "Schauder basis" (though strictly speaking the latter name implies some additional conditions).
If $X$ is an infinite-dimensional vector space over some field $F,$ then any basis $B$ must be an infinite set.
It's true that any $v\in X$ can be written as a finite linear combination $p_1 b_1 +\dots + p_n b_n,$ where the $p_k$ are in the underlying field $F$ and the $b_k$ are in the basis $B.$
This doesn't say that $B$ is finite, though. Different vectors $v\in X$ require different basis elements to write them in that format: only finitely many basis elements for any particular $v,$ but (assuming $X$ is infinite-dimensional, so that $B$ is infinite) as you let $v$ vary, you will need infinitely many basis elements in total to write the various linear combinations, even though each linear combination is, individually, a finite sum.
Here's an example:
Let $X$ be the set of all infinite sequences of real numbers that are eventually $0;$ in other words, a member of $X$ is a function $f\colon\mathbb{N}\to\mathbb{R}$ such that for some $n\in\mathbb{N}$, for all $k\gt n,$ $f(k)=0.$ Of course, $X$ is a vector space over $\mathbb{R}$ under pointwise addition and pointwise scalar multiplication.
For each $n\in\mathbb{N},$ let $b_n\in X$ be defined by setting $$b_n(k)=\begin{cases}1,\text{ if }k=n,\\0,\text{ if }k\ne n.\end{cases}$$
Then you can see that $\{b_n\mid n\in\mathbb{N}\}$ is a basis for $X$ over $\mathbb{R}.$ Any member of $X$ can be written as a finite linear combination of the $b_n\text{'s}$ (since each member of $X$ is eventually $0).$ But you need all the $b_n\text{'s}$ (infinitely many) to write all the members of $X$ in that way.
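The example above can be sketched in Python. This is a minimal illustration, not anything canonical; the names `basis_vector`, `decompose`, and `recombine` are my own. A sequence that is eventually $0$ is stored as the finite list of entries before the tail of zeros, and `decompose` exhibits it as a finite linear combination of the $b_n$'s.

```python
def basis_vector(n, length):
    """b_n as a list: 1 in position n, 0 elsewhere (positions 0..length-1)."""
    return [1.0 if k == n else 0.0 for k in range(length)]

def decompose(seq):
    """Write seq as the finite linear combination sum_k seq[k] * b_k.

    Returns the (coefficient, index) pairs with nonzero coefficient --
    always finitely many, since seq is eventually zero.
    """
    return [(c, k) for k, c in enumerate(seq) if c != 0.0]

def recombine(terms, length):
    """Rebuild the sequence from its finite linear combination."""
    out = [0.0] * length
    for c, k in terms:
        for i, x in enumerate(basis_vector(k, length)):
            out[i] += c * x
    return out

v = [3.0, 0.0, -2.0, 5.0]   # represents the sequence (3, 0, -2, 5, 0, 0, ...)
terms = decompose(v)        # [(3.0, 0), (-2.0, 2), (5.0, 3)]: three basis vectors suffice
assert recombine(terms, len(v)) == v
```

Each individual sequence needs only finitely many of the $b_n$'s, but no fixed finite set of them handles every sequence, which is exactly the point of the example.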
[By the way, the theorem that every vector space has a basis uses the axiom of choice, as does the theorem that any two bases for the same vector space must have the same cardinality. Without the axiom of choice, there can be vector spaces without a basis, and also vector spaces which have a basis but which don't have a well-defined dimension, because different bases can have different cardinalities. I wouldn't worry about any of this when you're just starting to study vector spaces though.]
Best Answer
No. Take, for instance, the space of polynomials in one variable $x$ with real coefficients. It is an infinite-dimensional real vector space. However, it has a Hamel basis: $\{1,x,x^2,x^3,\ldots\}$. And every polynomial can be written as a finite linear combination of elements of that basis.
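The polynomial example admits the same kind of sketch (again a hypothetical illustration in Python, with my own function name `evaluate`): a polynomial's coefficient list names exactly which finitely many monomial basis elements $1, x, x^2, \ldots$ it uses.

```python
def evaluate(coeffs, x):
    """Evaluate sum_k coeffs[k] * x**k via Horner's rule."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# p(x) = 2 - 3x^2 + x^3 uses only the basis elements 1, x^2, x^3.
p = [2.0, 0.0, -3.0, 1.0]
used = [k for k, c in enumerate(p) if c != 0.0]   # indices of basis elements used: [0, 2, 3]
assert evaluate(p, 2.0) == 2.0 - 3.0 * 4.0 + 8.0  # p(2) = -2
```

Every single polynomial is a finite combination like this, yet no finite subset of $\{1,x,x^2,\ldots\}$ spans the whole space, since a polynomial of high enough degree escapes it.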