One way to think of a basis for $V$ is as a choice of isomorphism $F^n\to V$, where $F$ is your base field: the basis vectors in $V$ are the images of the standard basis vectors $(1,0,\ldots,0)$, $(0,1,\ldots,0)$, etc. of $F^n$. A change of basis is then an isomorphism $F^n\to F^n$, which you precompose with the previous isomorphism to get a new isomorphism $F^n\to V$, and thus a new basis of $V$.
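In coordinates, this precomposition picture can be sketched with NumPy (a toy 2-dimensional example; the matrices `B` and `C` are arbitrary illustrative choices, not from the text):

```python
import numpy as np

# A basis of V = R^2, encoded as an invertible matrix B whose columns are
# the images of the standard basis vectors under the isomorphism F^n -> V.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # basis vectors (1,0) and (1,1)

# A change of basis: an invertible map C: F^n -> F^n.
C = np.array([[2.0, 0.0],
              [1.0, 1.0]])

# Precomposing gives a new isomorphism F^n -> V, i.e. a new basis;
# its columns are the new basis vectors in the old coordinates.
B_new = B @ C
print(B_new)            # columns (3,1) and (1,1)
```

Since `C` is invertible, `B_new` is again invertible, so its columns really do form a basis.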
If you have a linear map $f\colon V\to W$, the matrix representation of $f$ depends on a choice of bases in $V$ and in $W$. Changing the bases gives you different matrix representations, and some of them are more helpful than others for, say, computing the rank, eigenvalues, determinant and so on. You don't normally have to see the change of basis map explicitly in these computations, but it's necessary for proving theorems. For example, in the row reduction algorithm, properties of the change of basis map corresponding to each row operation tell you how row operations affect the determinant.
For example, if $f\colon V\to W$ is particularly nice, you may be able to choose the bases so that the matrix of $f$ is upper triangular, meaning all entries below the diagonal are zero (here I assume $\dim{V}=\dim{W}$ so that the matrix is square, but I keep writing $W$ rather than $V$ to emphasize that we can change the bases in the domain and codomain separately). Then you can easily read off the eigenvalues, which are the entries on the diagonal, and the determinant, which is their product. If you have two composable linear maps that are simultaneously diagonalizable (i.e. you can choose bases for everything such that both matrices have all their non-zero entries on the diagonal), then multiplying the two matrices is easy as well: you just multiply the corresponding diagonal entries.
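A quick NumPy sanity check of these facts (the matrices are arbitrary illustrative choices):

```python
import numpy as np

# For an upper triangular matrix, the eigenvalues are the diagonal
# entries and the determinant is their product.
T = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 0.5]])
assert np.allclose(np.sort(np.linalg.eigvals(T)), np.sort(np.diag(T)))
assert np.isclose(np.linalg.det(T), np.prod(np.diag(T)))

# Composing two maps whose matrices are both diagonal is just
# entrywise multiplication along the diagonal.
D1, D2 = np.diag([1.0, 2.0, 3.0]), np.diag([4.0, 5.0, 6.0])
assert np.allclose(D1 @ D2, np.diag([4.0, 10.0, 18.0]))
```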
What are the elements of that matrix $\mathcal F$?
Depends on the basis we choose. The same linear operator is represented by different matrices in different bases.
What is the dimension of that matrix $\mathcal F$?
It is an infinite matrix (so, strictly speaking, not a matrix).
When $\mathcal F$ acts on a function $g$, how do we write $g$ as a vector in order to apply the matrix $\mathcal F$?
Complex-valued functions are vectors, if a vector is understood abstractly as an element of a vector space (that is, functions form a vector space). If you mean the concrete representation of a vector as a row or column of numbers, then that can be obtained by expanding $g$ in a basis; the row/column will be infinite.
More details below.
To represent a linear operator as a matrix, we need to choose a basis for our space. The most convenient space on which to study $\mathcal F$ is $L^2(\mathbb R)$, the space of square-integrable functions. One convenient basis of that space is given by Hermite functions
$$\Phi_n(x)= (-1)^n (2^{n}n! \sqrt{\pi})^{-1/2} e^{x^2/2}\frac{d^n(e^{-x^2})}{dx^n}, \quad n=0,1,2,\dots$$
This basis is orthonormal, which makes it easy to expand a given function $g\in L^2(\mathbb R)$ in it:
$$
g=\sum_{n=0}^\infty g_n \Phi_n,\quad g_n = \langle g,\Phi_n\rangle = \int_{-\infty}^\infty g(x)\Phi_n(x)\,dx
$$
(No conjugation of $\Phi_n$ is needed in the inner product, since $\Phi_n$ happens to be real-valued.)
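As a numerical sketch of this expansion (the grid, the sample function $g$, and the truncation order $N$ are my own arbitrary choices), one can build the $\Phi_n$ by the standard three-term recurrence $\Phi_{n+1}(x)=\sqrt{2/(n+1)}\,x\,\Phi_n(x)-\sqrt{n/(n+1)}\,\Phi_{n-1}(x)$, compute the coefficients $g_n$ by quadrature, and check that the partial sum reconstructs $g$:

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

# Orthonormal Hermite functions Phi_0 .. Phi_{N-1} via the recurrence.
N = 40
Phi = np.empty((N, x.size))
Phi[0] = np.pi**-0.25 * np.exp(-x**2 / 2)
Phi[1] = np.sqrt(2.0) * x * Phi[0]
for n in range(1, N - 1):
    Phi[n+1] = np.sqrt(2.0/(n+1)) * x * Phi[n] - np.sqrt(n/(n+1.0)) * Phi[n-1]

g = np.exp(-(x - 1.0)**2)       # a sample g in L^2(R) (arbitrary choice)
coeff = Phi @ g * dx            # g_n = <g, Phi_n>, by quadrature
g_rec = coeff @ Phi             # partial sum  sum_n g_n Phi_n
print(np.max(np.abs(g - g_rec)))   # small truncation/quadrature error
```

The coefficients of this particular $g$ decay geometrically, so forty terms already reconstruct it to high accuracy.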
Each $\Phi_n$ is an eigenvector (eigenfunction) of $\mathcal F$, with eigenvalue $(-i)^n$, see Wikipedia. Therefore, the matrix of $\mathcal F$ in this basis is diagonal with the periodic sequence $(-i)^n$ along the diagonal:
$$\begin{pmatrix} 1 & 0& 0 & 0 &0 &\dots \\
0 & -i & 0 & 0 &0 &\dots \\
0 & 0 & -1 & 0 &0 &\dots \\
0 & 0 & 0 & i &0 &\dots \\
0 & 0 & 0 & 0 &1 &\dots \\
\vdots & \vdots &\vdots &\vdots &\vdots &\ddots
\end{pmatrix} $$
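The eigenfunction property can also be checked numerically. This is a sketch assuming the unitary convention $(\mathcal F g)(\xi)=(2\pi)^{-1/2}\int g(x)e^{-ix\xi}\,dx$ (under which the eigenvalues are $(-i)^n$); the grid parameters are arbitrary choices:

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
xi = x[::20]                    # evaluate the transform on a coarser grid

# First few orthonormal Hermite functions, via the three-term recurrence.
N = 5
Phi = np.empty((N, x.size))
Phi[0] = np.pi**-0.25 * np.exp(-x**2 / 2)
Phi[1] = np.sqrt(2.0) * x * Phi[0]
for n in range(1, N - 1):
    Phi[n+1] = np.sqrt(2.0/(n+1)) * x * Phi[n] - np.sqrt(n/(n+1.0)) * Phi[n-1]

# Quadrature approximation of the unitary Fourier transform.
kernel = np.exp(-1j * np.outer(xi, x)) * dx / np.sqrt(2 * np.pi)
for n in range(N):
    F_Phi = kernel @ Phi[n]
    err = np.max(np.abs(F_Phi - (-1j)**n * Phi[n][::20]))
    print(n, err)               # F Phi_n ≈ (-i)^n Phi_n
```

In other words, in the coefficient representation the action of $\mathcal F$ really is multiplication by the diagonal sequence $1,-i,-1,i,\dots$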
Best Answer
Calling the Fourier transformation "a change of basis" is misleading in the sense that the Fourier transformation is a unitary (linear) transformation between two different Hilbert spaces, namely $L^2(\mathbb R)$ and $L^2(\hat{\mathbb R})$.
Here $\hat{\mathbb R}$ is the dual group of $\mathbb R$. It turns out that $\hat{\mathbb R}\cong\mathbb R$, but there is no canonical isomorphism. So only after fixing some (arbitrary) isomorphism $\hat{\mathbb R}\cong\mathbb R$ can you regard the Fourier transformation as a unitary transformation from a Hilbert space to itself, which really is essentially a change of basis.