Sure, if $T:V\to W$ is a linear transformation between vector spaces $V$ and $W$ with bases $B$ and $C$, respectively, then $T$ can be described in terms of the coordinates with respect to these bases, thus yielding a "matrix". How closely this relates to the usual notion of matrix depends on the nature of $B$ and $C$. In the usual notion, you take bases that are not only finite, but ordered, so that it makes sense to talk about the 1st row, etc., of the matrix; that is, you make all bases indexed by sets of the form $\{1,2,\ldots,n\}$. The closest to this in the infinite dimensional setting would be to have bases indexed by the positive integers.
More generally, suppose $B=\{v_j\}_{j\in J}$ and $C=\{w_i\}_{i\in I}$, where $I$ and $J$ are sets. Then the matrix of $T$ can be described as a function $M:I\times J\to F$, where $F$ is the base field, by taking $M(i,j)$ to be the coefficient of $w_i$ in the $C$-expansion of $Tv_j$. Such matrices are column-finite, in the sense that for each $j\in J$, the set of $i\in I$ such that $M(i,j)\neq0$ is finite. Conversely, each column-finite matrix, in this sense, corresponds uniquely to a linear transformation between $V$ and $W$. Coordinate-wise addition of such matrices corresponds to addition of the linear transformations.
You can also extend multiplication. Suppose that $S:W\to X$ is a linear transformation and that $X$ has basis $D=\{x_k\}_{k\in K}$. Let $N:K\times I\to F$ denote the $C$-$D$ matrix of $S$. Then $ST:V\to X$ has $B$-$D$ matrix $NM:K\times J\to F$ defined by
$$(NM)(k,j)=\sum_{i\in I}N(k,i)M(i,j).$$ In particular, note that this sum is always finite because $M$ is column-finite.
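As a concrete sketch (the dict-of-columns representation and the shift example are my own illustration, not part of the construction above), a column-finite matrix can be stored as a map from each column index $j$ to the finite set of its nonzero entries $\{i : M(i,j)\neq 0\}$; the product $(NM)(k,j)=\sum_i N(k,i)M(i,j)$ then only ever involves finite sums:

```python
# Hypothetical representation: a column-finite matrix as a dict mapping each
# column index j to the finite dict {i: M(i, j)} of its nonzero entries.

def multiply(N, M):
    """Product (NM)(k, j) = sum_i N(k, i) * M(i, j).

    Each column of M is finite, so the sum over i has finitely many
    terms even when the index sets themselves are infinite.
    """
    NM = {}
    for j, col in M.items():             # columns of M
        out = {}
        for i, m_ij in col.items():      # finitely many nonzero i per column
            for k, n_ki in N.get(i, {}).items():  # column i of N: k -> N(k, i)
                out[k] = out.get(k, 0) + n_ki * m_ij
        NM[j] = {k: v for k, v in out.items() if v != 0}
    return NM

# Example: the shift S(e_j) = e_{j+1}, restricted to finitely many columns;
# composing it with itself sends e_j to e_{j+2} (where both columns survive
# the truncation).
S = {j: {j + 1: 1} for j in range(5)}
S2 = multiply(S, S)
```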
Motivated by Calle's answer, I decided to add a little on a different kind of matrix for continuous linear transformations on Banach spaces with Schauder bases.
If $X$ is an infinite dimensional separable Banach space, then a sequence $(e_n)_{n=1}^\infty$ in $X$ is called a Schauder basis for $X$ if every $x\in X$ has a unique representation $x=\sum_{n=1}^\infty a_ne_n$, the $a_n$ being scalars and the sum being norm convergent. If $X$ and $Y$ are Banach spaces with Schauder bases $(e_n)$ and $(f_n)$ respectively, and if $T:X\to Y$ is a bounded linear operator, then $T$ can be described by a matrix $(a_{ij})_{i,j=1}^\infty$, with $a_{ij}$ being the coefficient of $f_i$ in the $(f_n)$ expansion of $Te_j$. The map from bounded operators to matrices is one-to-one and preserves algebraic structure, but there is typically not any nice description of which matrices correspond to bounded operators.
For example, in a separable Hilbert space any orthonormal basis is a Schauder basis. For maps between Hilbert spaces the coefficients are found as $a_{ij}=\langle Te_j,f_i\rangle$. In $c_0$, the space of sequences converging to $0$ with sup norm, and in $\ell^p$, the sequence space with norm $\|(x_n)_{n=1}^\infty\|_p=(\sum_{n=1}^\infty|x_n|^p)^{1/p}$, the sequence $(e_n)$ such that the $n^\text{th}$ component of $e_n$ is $1$ and all other components are $0$ forms a Schauder basis.
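A finite-dimensional sanity check (the concrete operator below is an invented example, not from the text): with orthonormal bases, the formula $a_{ij}=\langle Te_j,f_i\rangle$ simply reads off the matrix entries of the operator.

```python
import numpy as np

# With orthonormal bases e and f, a_{ij} = <T e_j, f_i> recovers the matrix
# of T. Here both bases are the standard basis of R^3 and T is arbitrary.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))   # operator written in the standard basis
e = f = np.eye(3)                 # standard orthonormal basis

a = np.array([[f[:, i] @ (T @ e[:, j]) for j in range(3)]
              for i in range(3)])
assert np.allclose(a, T)          # the coefficients reproduce T exactly
```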
If $c$ is the space of convergent sequences with sup norm, then $(e_n)_{n=1}^\infty$ is no longer a Schauder basis; in particular, $\sum_{n=1}^\infty x_n e_n$ is not norm convergent unless $\lim_{n\to\infty}x_n=0$. A Schauder basis for $c$ can be obtained by adding $e_0=(1,1,1,\ldots)$. If $(x_n)\in c$ and $x=\lim_n x_n$, then $(x_n)=xe_0 +\sum_{n=1}^\infty(x_n-x)e_n$ is the basis representation. As in Calle's answer, suppose that $T:c\to c$ is defined by $T(x_1,x_2,x_3,\ldots)=(x,0,0,\ldots)$. Then $T$ has a matrix representation with respect to $(e_0,e_1,\ldots)$ (but not with respect to $(e_1,e_2,\ldots)$), namely $a_{10}=1$ and all other entries are $0$.
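A small truncated check of this representation (the finite truncation and the helper name are my own, for illustration only): for $x_n = 1 + 1/n$, the coordinates in $(e_0, e_1, \ldots)$ are $(1, 1, 1/2, 1/3, \ldots)$, and since $T(x_1,x_2,\ldots)=(x,0,0,\ldots)$ has limit $0$, its output coordinates are $(0, x, 0, \ldots)$, which is exactly the action of the single entry $a_{10}=1$.

```python
# Truncated sketch: coordinates of (x_n) in the Schauder basis
# e_0 = (1,1,1,...), e_n = nth unit vector, are (x, x_1 - x, x_2 - x, ...).

def c_coordinates(x_seq, limit):
    # (x_n) = limit * e_0 + sum_n (x_n - limit) * e_n
    return [limit] + [x - limit for x in x_seq]

coords = c_coordinates([1 + 1 / n for n in range(1, 6)], limit=1)

# T(x_1, x_2, ...) = (lim x_n, 0, 0, ...) has limit 0, so its coordinate
# vector is (0, lim x_n, 0, ...): the entry a_{10} = 1 moves input
# coordinate 0 to output coordinate 1, and every other entry is 0.
out_coords = [0, coords[0]] + [0] * (len(coords) - 2)
```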
Similar to Olod's warning, such matrices typically play only a marginal role, even in cases where they are guaranteed to exist, such as on Hilbert space. Not every separable Banach space has a Schauder basis: Enflo gave the first example of a separable Banach space without the approximation property, and since every space with a Schauder basis has the approximation property, his space has no Schauder basis.
The columns of the matrix tell us where the basis vectors of the domain are mapped, in terms of the basis vectors of the codomain. Since every vector in the domain is a linear combination of the basis vectors (in a unique way), we can extrapolate, in a sense, the image of any given vector. Let $A$ be an $m\times n$ matrix (with coefficients in a field $F$) with columns $A_1, ..., A_n$. Let $V$ be an $n-$dimensional $F-$vector space, and $W$ an $m-$dimensional $F-$vector space, with ordered bases $(v_1, ..., v_n)$ and $(w_1, ..., w_m)$, respectively. Finally, let $T$ be the linear transformation associated with $A$, and let $v\in V$ with $v = c_1v_1 + ... + c_n v_n$ (remember, this expression for $v$ as a linear combination of basis vectors is unique). Then
$$T(v) = T(c_1v_1+...+c_nv_n) = c_1T(v_1)+...+c_nT(v_n) = c_1A_1 + ... + c_nA_n$$
So, this is how the matrix lets us calculate the image of any vector. Notice that the expression on the right is just the matrix $A$ multiplied by the vector $[c_1, c_2, ..., c_n]^T$.
For example, if we let $V = W = \Bbb{R}^2$ (considered as $\Bbb{R}-$vector spaces) with the standard basis, let $$A =
\left[ \begin{array}{cc}
1 & 2 \\
2 & 3 \end{array} \right]$$
and let $T$ be the linear transformation associated with $A$ and $v = [1,5]^T$. Then
$$T(v) = 1\cdot T(e_1) + 5\cdot T(e_2) = [1,2]^T+5[2,3]^T = [11,17]^T = A\cdot v$$
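This can be checked numerically (a quick sketch, assuming NumPy is available): the weighted sum of columns and the matrix-vector product agree.

```python
import numpy as np

# The worked example above: T(v) as a combination of the columns of A.
A = np.array([[1, 2],
              [2, 3]])
v = np.array([1, 5])

by_columns = 1 * A[:, 0] + 5 * A[:, 1]     # c_1 A_1 + c_2 A_2
assert np.array_equal(by_columns, A @ v)   # both give [11, 17]
```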
Best Answer
Let the transformation $T$ be from $\Bbb{R}^n \to \Bbb{R}^m$. We will need bases for each of these spaces; let them be $B_n = \{e_{1n}, e_{2n}, \ldots, e_{nn}\}$ and $B_m = \{e_{1m}, e_{2m}, \ldots, e_{mm}\}$ respectively.
Now, any vector $v$ can be expressed as the following
$$v = \sum_{i=1}^na_ie_{in}$$
$$\implies T(v) = \sum_{i=1}^na_iT(e_{in})$$
To complete the matrix representation, we need to express each $T(e_{in})$ in the basis of the $m$-dimensional space.
Hence, let $T(e_{in}) = \sum_{k=1}^mb_{ik}e_{km}$
Therefore
$$\implies T(v) = \sum_{i=1}^na_i\sum_{k=1}^mb_{ik}e_{km}$$
Now, to obtain the matrix representation of $T$, we express $v$ as a column vector in $\Bbb{R}^{n \times 1}$:
$$v = \begin{bmatrix}a_1 \\ a_2 \\ \vdots \\ a_n\end{bmatrix}$$
Hence, $T(v)$ can be thought of as the sum of $n$ vectors in $\Bbb{R}^{m \times 1}$, weighted by the entries of this column vector. Therefore, we pre-multiply by the matrix whose $i$th column is the representation of $T(e_{in})$ in the basis $B_m$, given by the scalars $b_{ik}$ defined above:
$$[T] = \begin{bmatrix} b_{11} & b_{21} & b_{31} & \cdots & b_{n1} \\ b_{12} & b_{22} & b_{32} & \cdots & b_{n2} \\ \vdots & \vdots & \vdots & & \vdots \\ b_{1m} & b_{2m} & b_{3m} & \cdots & b_{nm} \end{bmatrix}$$
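A small numerical sketch of this construction (the concrete $b_{ik}$ and coordinates below are invented for illustration): column $i$ of $[T]$ holds the $B_m$-coordinates of $T(e_{in})$, and multiplying $[T]$ by the coordinate column of $v$ reproduces $\sum_{i=1}^n a_i \sum_{k=1}^m b_{ik} e_{km}$.

```python
import numpy as np

# Row i of b holds the B_m-coordinates b_{i1}, ..., b_{im} of T(e_{in});
# these become the columns of [T].
n, m = 3, 2
b = np.array([[1, 4],    # coordinates of T(e_{1n})
              [2, 5],    # coordinates of T(e_{2n})
              [3, 6]])   # coordinates of T(e_{3n})
T = b.T                  # [T] is m x n: column i is (b_{i1}, ..., b_{im})

a = np.array([1, 0, 2])  # coordinates of v in B_n

# T(v) = sum_i a_i * (coordinates of T(e_{in})), computed directly,
# must match the matrix-vector product [T] a.
direct = sum(a[i] * b[i] for i in range(n))
assert np.array_equal(T @ a, direct)
```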