Take $x \in V$ and suppose that $f(x)=0$. Since $(v_1, \dots, v_n)$ spans $V$, there exist scalars $\lambda_1, \dots, \lambda_n$ such that
$$x=\lambda_1 v_1+ \dots +\lambda_n v_n$$
which implies
$$0=f(x)=\lambda_1 f(v_1)+ \dots +\lambda_n f(v_n)$$
and since $(f(v_1), \dots, f(v_n))$ is assumed to be linearly independent, we get $\lambda_1=\dots=\lambda_n=0$ and finally $x=0$. This proves that $\ker f$ contains only the zero vector, hence that $f$ is injective.
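The argument above can be checked numerically. A minimal sketch, using a hypothetical map $f:\mathbb{R}^2 \to \mathbb{R}^3$ (not from the text) whose matrix columns are the images of a basis: if those images are linearly independent, rank–nullity forces the kernel to be trivial.

```python
import numpy as np

# Hypothetical example: f maps the basis (e1, e2) of R^2 to the
# columns of F, which are linearly independent in R^3.
F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# The rank of the columns equals the number of basis vectors...
rank = np.linalg.matrix_rank(F)
n = F.shape[1]

# ...so by rank-nullity, dim ker f = n - rank = 0, i.e. f is injective.
print(rank == n)  # True: the kernel is trivial
```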
Let $\textsf{T}: \textsf{V}\to \textsf{W}$ be a linear transformation and let $\beta =\{v_1,v_2,\dots,v_n\}$ and $\gamma =\{w_1,w_2,\dots,w_m\}$ be bases for $\textsf{V}$ and $\textsf{W}$, respectively.
To compute the matrix of $\textsf{T}$ with respect to $\beta$ and $\gamma$, we always (that is to say, always) compute the vectors $\textsf{T}(v_j)$ and write each one as a linear combination of the elements of $\gamma$, like this:
$$\textsf{T}(v_j)=A_{1j}w_1+A_{2j}w_2+\cdots+A_{mj}w_m$$
Then we place the coefficients
$$\begin{pmatrix} A_{1j} \\ A_{2j} \\ \vdots \\ A_{mj} \end{pmatrix}$$
in the $j$-th column of the matrix $A:= [\textsf{T}]_{\beta}^{\gamma}$ (for $j=1,2,\dots,n$).
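This recipe can be sketched in NumPy: finding the coefficients $A_{1j},\dots,A_{mj}$ amounts to solving a linear system whose matrix has the $\gamma$ basis vectors as columns. The helper `matrix_of` below is my own name for this, illustrated with the example discussed next.

```python
import numpy as np

def matrix_of(T, beta, gamma):
    """Matrix [T]_beta^gamma: the j-th column holds the gamma-coordinates of T(v_j)."""
    G = np.column_stack(gamma)  # columns are the gamma basis vectors
    # For each v_j in beta, solve G @ c = T(v_j) for the coordinate vector c.
    cols = [np.linalg.solve(G, T(v)) for v in beta]
    return np.column_stack(cols)

# The example from the text: T(a, b) = (-b, a), beta = {(2,1), (3,5)},
# gamma = the standard basis of R^2.
T = lambda v: np.array([-v[1], v[0]])
beta = [np.array([2.0, 1.0]), np.array([3.0, 5.0])]
gamma = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(matrix_of(T, beta, gamma))  # columns (-1, 2) and (-5, 3)
```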
Now, for your specific doubt, consider $\textsf{T}: \mathbb{R}^2 \to \mathbb{R}^2$ given by $\textsf{T}(a,b)=(-b,a)$ (the particular map doesn't matter) and the bases $\beta=\{(2,1),(3,5)\}$ and $\gamma=\{e_1,e_2\}$ (the standard basis).
As I said above, we must first compute $\textsf{T}(2,1)$ and write the result as a linear combination of the elements of the second basis. In this case
$$\textsf{T}(2,1)=(-1,2)=(-1)e_1+2e_2$$
As you can see, the coefficients that go in the first column of the matrix look exactly like the vector $(-1,2)$ (maybe this is what you meant). To finish, note also that
$$\textsf{T}(3,5)=(-5,3)=(-5)e_1+3e_2$$
So
$$[\textsf{T}]_{\beta}^{\gamma} =\begin{pmatrix} -1 & -5 \\ 2 & 3 \end{pmatrix}$$
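As a sanity check on this matrix, one can verify the defining property $[\textsf{T}]_{\beta}^{\gamma}\,[x]_{\beta} = [\textsf{T}(x)]_{\gamma}$ numerically; the particular $\beta$-coordinate vector below is an arbitrary choice of mine.

```python
import numpy as np

A = np.array([[-1.0, -5.0],
              [ 2.0,  3.0]])               # the matrix [T]_beta^gamma above
T = lambda v: np.array([-v[1], v[0]])      # T(a, b) = (-b, a)
B = np.array([[2.0, 3.0],
              [1.0, 5.0]])                 # columns are the beta vectors

x_beta = np.array([4.0, -7.0])             # arbitrary beta-coordinates
x = B @ x_beta                             # the actual vector x in R^2

# Since gamma is the standard basis, [T(x)]_gamma is just T(x) itself.
print(np.allclose(A @ x_beta, T(x)))       # True
```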
(Apologies for my English; I hope this has clarified your doubt a bit.)
Best Answer
Suppose the images $T(v_1), \dots, T(v_n)$ (the columns) are linearly independent (LI).
Write $v \in V$ as $v = a_1v_1 + \cdots + a_nv_n$.
Then if $T(v) = 0$, we have $0 = T(v) = T(a_1v_1 + \cdots + a_nv_n) = a_1T(v_1) +\cdots + a_nT(v_n)$.
By the LI of the $T(v_j)$, it follows that $a_1 = \cdots = a_n = 0$. Hence $v = 0v_1 +\cdots + 0v_n = 0$.
You do the other direction: suppose $\ker T = \{0\}$ and
$$c_1T(v_1) +\cdots + c_nT(v_n) = 0.$$ What can you deduce about the $c_j$?