Let $F:X\to\mathbb R^n$ be defined as follows:
$$
F(x)=\big(f_1(x),\ldots,f_n(x)\big).
$$
Clearly $F$ is a linear transformation, and its range $Y$ is a linear subspace of $\mathbb R^n$.
We shall show that the range is all of $\mathbb R^n$. If it is not, then we can find a nonzero vector $v=(v_1,\ldots,v_n)\in\mathbb R^n$ with $v\perp Y$, i.e.
$$
v\cdot y=0, \quad \text{for all}\,\,\,y\in Y.
$$
But this means that for every $x\in X$:
$$
0=v\cdot F(x)=v\cdot \big(f_1(x),\ldots,f_n(x)\big)=v_1f_1(x)+\cdots+v_nf_n(x),
$$
which means that $f_1,\ldots,f_n$ are linearly dependent (since $v\ne 0$), a contradiction. Hence $F$ is onto. Let $x_1,\ldots,x_n\in X$ be such that $F(x_j)=e_j$, where $e_1,\ldots,e_n$ is the standard basis of $\mathbb R^n$. Then the matrix
$$
\big(f_i(x_j)\big)_{i,j=1}^n,
$$
is the identity matrix, so its determinant is equal to $1$.
Note. The proof remains valid when $\mathbb R^n$ is replaced by $\mathbb C^n$.
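The construction above can be checked numerically. A minimal sketch, assuming $X=\mathbb R^4$ and three hypothetical linearly independent functionals given as the rows of a matrix `M`:

```python
import numpy as np

# Three linearly independent functionals f_1, f_2, f_3 on X = R^4,
# represented as rows of M: f_i(x) = M[i] @ x (hypothetical example data).
M = np.array([[1., 0., 2., 0.],
              [0., 1., 0., 3.],
              [1., 1., 1., 1.]])

# F(x) = (f_1(x), f_2(x), f_3(x)) = M @ x is onto R^3 because the rows
# of M are independent, so F(x_j) = e_j is solvable; lstsq returns one
# solution x_j per standard basis vector e_j (columns of X).
X = np.linalg.lstsq(M, np.eye(3), rcond=None)[0]

# The matrix (f_i(x_j)) = M @ X is then the identity, with determinant 1.
G = M @ X
assert np.allclose(G, np.eye(3))
```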
Let $K$ denote the scalar field. Consider $F\colon X \to K^n$ given by
$$F(x) = \begin{pmatrix}L_1(x)\\ L_2(x)\\ \vdots \\ L_n(x)\end{pmatrix}.$$
Let $R = \operatorname{im} F \subset K^n$. We have an induced isomorphism $$\tilde{F}\colon X/\ker F \xrightarrow{\sim} R.$$
Since $\bigcap\limits_{k=1}^n \ker L_k = \ker F \subset \ker L$, we have an induced linear form $\tilde{L} \colon X/\ker F \to K$, and can pull that back to $R$ as $\hat{L} := \tilde{L} \circ \tilde{F}^{-1}$. We can extend $\hat{L}$ to all of $K^n$ (extend a basis of $R$ to a basis of $K^n$, and choose arbitrary values, e.g. $0$, on the basis vectors not in $R$). Thus there is a linear form $\lambda \colon K^n \to K$ with
$$\lambda \circ F = \lambda\lvert_R \circ F = \hat{L}\circ F = \tilde{L}\circ \tilde{F}^{-1}\circ F = \tilde{L} \circ \pi = L,$$
where $\pi \colon X \to X/\ker F$ is the canonical projection.
But every linear form $K^n\to K$ can be written as a linear combination of the component projections, so there are $c_1,\dotsc, c_n$ with
$$\lambda\begin{pmatrix}u_1\\u_2 \\ \vdots \\ u_n \end{pmatrix} = \sum_{k=1}^n c_k\cdot u_k,$$
and that means
$$L(x) = \lambda(F(x)) = \sum_{k=1}^n c_k\cdot L_k(x)$$
for all $x\in X$, or
$$L = \sum_{k=1}^n c_k\cdot L_k.$$
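The conclusion $L=\sum_k c_k L_k$ can be sketched numerically. Assuming $X=\mathbb R^5$ and hypothetical functionals $L_k$ stored as rows, the coefficients $c_k$ are recovered by solving the linear system $\sum_k c_k L_k = L$:

```python
import numpy as np

# Functionals L_1, L_2, L_3 on X = R^5, as rows: L_k(x) = Lk[k] @ x
# (hypothetical example data).
Lk = np.array([[1., 0., 0., 2., 0.],
               [0., 1., 0., 0., 1.],
               [0., 0., 1., 1., 0.]])

# A functional L vanishing on ker F = ∩ ker L_k: any element of the row
# space works; here L = 2 L_1 - L_2 + 3 L_3, coefficients to be recovered.
L = 2*Lk[0] - Lk[1] + 3*Lk[2]

# Solve sum_k c_k L_k = L, i.e. Lk^T c = L^T, for c.
c = np.linalg.lstsq(Lk.T, L, rcond=None)[0]
assert np.allclose(c, [2., -1., 3.])
assert np.allclose(c @ Lk, L)   # L is indeed the combination sum_k c_k L_k
```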
Note that we can define a map from $V$ to $\Bbb F^n$ by $$ \Phi(x) = \pmatrix{\varphi_1(x)\\ \vdots \\ \varphi_n(x)} $$ If the intersection of their kernels is $\{0\}$, then $\Phi$ is injective, hence (since $\dim V = n$) invertible. That is, $\Phi(x) = Ax$ for an invertible matrix $A$ (with respect to some basis).
For any $c_1,\dots,c_n$, note that $$ c_1 \varphi_1(x) + \cdots + c_n\varphi_n(x) = c^T A x $$ where $c$ is the column vector of $c_i$. What we want to show then is that $$ c^T A = 0 \implies c = 0 $$ which follows from the invertibility of $A$.
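The final step, $c^T A = 0 \implies c = 0$ for invertible $A$, can be illustrated with a hypothetical $3\times 3$ example:

```python
import numpy as np

# phi_1, phi_2, phi_3 written as Phi(x) = A x; the rows of A are the
# functionals (hypothetical example data).
A = np.array([[2., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
assert abs(np.linalg.det(A)) > 1e-12   # A is invertible

# c^T A = 0 means A^T c = 0; invertibility of A makes the null space of
# A^T trivial, so c = 0 is forced and the phi_k are linearly independent.
assert np.linalg.matrix_rank(A.T) == 3

# Consequently, no nonzero c annihilates all the rows of A at once:
c = np.array([1., -2., 1.])
assert np.linalg.norm(c @ A) > 1e-9
```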