In the simplest terms, the range of a matrix is literally its "range." The crux of the definition is the question:
Given some matrix $A$, which vectors can be expressed as a linear combination of its columns?
This is exactly what the range (another word for column space) captures. If you give me an $m \times n$ matrix $A$ with columns $A_1, A_2, \dots, A_n$, its column space is the set of all vectors $v$ for which there exist scalars $a_1, a_2, \dots, a_n$ with $a_1A_1 + a_2A_2 + \dots + a_nA_n = v$. For example, take $A$ to be the identity and $v = (5,5,5)$:
$$\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix} \begin{bmatrix}a_1 \\ a_2 \\ a_3\end{bmatrix}= \begin{bmatrix}5 \\ 5 \\ 5\end{bmatrix}$$
Then $v$ is in the range of $A$, since $a_1 = a_2 = a_3 = 5$ works. A more instructive example is one where $v$ is not in the range, like:
$$\begin{bmatrix}1 & 0 & 3\\ 1 & 1 & 2 \\ 0 & 0 & 0\end{bmatrix}\begin{bmatrix}a_1 \\ a_2 \\ a_3\end{bmatrix} = \begin{bmatrix}5 \\ 5 \\ 5\end{bmatrix}$$
Now $v$ is not in the range, since no choice of $a_1, a_2, a_3$ makes $v$ a linear combination of the columns of $A$: every such combination has $0$ in its third entry, so it can never equal $(5,5,5)$.
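If you want to check this kind of membership numerically, one way (a minimal NumPy sketch; the helper name `in_range` is my own, purely for illustration) is to compare the rank of $A$ with the rank of $A$ augmented by $v$: the two ranks agree exactly when $v$ is a linear combination of the columns of $A$.

```python
import numpy as np

# Illustrative helper (not from the answer): v is in the column space of A
# exactly when appending v as an extra column does not increase the rank.
def in_range(A, v, tol=1e-10):
    A = np.asarray(A, dtype=float)
    v = np.asarray(v, dtype=float).reshape(-1, 1)
    return np.linalg.matrix_rank(A, tol=tol) == \
           np.linalg.matrix_rank(np.hstack([A, v]), tol=tol)

I = np.eye(3)
A = np.array([[1, 0, 3],
              [1, 1, 2],
              [0, 0, 0]])
v = np.array([5, 5, 5])

print(in_range(I, v))  # True:  a1 = a2 = a3 = 5 works
print(in_range(A, v))  # False: the third entry of any combination is 0
```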
Relatedly, the null space of $A$ is the set of all vectors $v$ such that $Av = 0$. The zero vector $v = [0, 0, \dots, 0]$ always satisfies this, so the null space is never empty.
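To see a null space concretely, here is a small sketch (my own illustration, using NumPy's SVD, with the earlier example matrix) that extracts a basis for the null space: the rows of $V^T$ belonging to numerically zero singular values span it.

```python
import numpy as np

# Illustrative sketch (not from the answer): a null-space basis from the SVD.
def null_space_basis(A, tol=1e-10):
    A = np.asarray(A, dtype=float)
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T            # columns span the null space

A = np.array([[1, 0, 3],
              [1, 1, 2],
              [0, 0, 0]])
N = null_space_basis(A)
print(N)                          # one basis vector, since rank(A) = 2
print(np.allclose(A @ N, 0))      # True: A sends every null-space vector to 0
```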
The rank of a matrix is closely related to its range: it counts how many columns of $A$ are actually "relevant" in determining the range. You might think that removing a column from a matrix will dramatically affect which vectors it can reach, but consider:
$$\begin{bmatrix}1 & 2 & 0\\ 1 & 2 & 0 \\ 1 & 2 & 0\end{bmatrix} \approx \begin{bmatrix}1 \\ 1 \\ 1\end{bmatrix}$$
You can convince yourself that the left matrix reaches exactly the same space of vectors as the single column on the right (why? the second column is just twice the first, and the third column is zero), so its rank is $1$.
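As a quick numerical sanity check (again just an illustrative NumPy sketch using the matrices above), both sides have rank $1$:

```python
import numpy as np

# The 3x3 matrix and the single column (1, 1, 1) have the same rank,
# so they reach the same one-dimensional space of vectors.
M = np.array([[1, 2, 0],
              [1, 2, 0],
              [1, 2, 0]])
c = np.array([[1],
              [1],
              [1]])

print(np.linalg.matrix_rank(M))  # 1
print(np.linalg.matrix_rank(c))  # 1
```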
Let $f:\mathbb{R}^n\rightarrow \mathbb{R}^m$ be a linear map. Suppose that $A\in \mathbb{R}^{m\times n}$ is the matrix of $f$ w.r.t. the standard bases. Since the rank of $A$ equals its column rank, it suffices to show that the columns of $A$ are vectors in the image of $f$. Now let $e_i$ be the $i$-th standard basis vector of $\mathbb{R}^n$. Then $f(e_i)=Ae_i=A_i$ where $A_i$ is the $i$-th column of $A$. This completes the proof.
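As a small numerical illustration of this argument (using the example matrix from above, purely for demonstration), applying $A$ to each standard basis vector $e_i$ picks out the $i$-th column of $A$, so every column is indeed a value of $f$:

```python
import numpy as np

# f(e_i) = A e_i = i-th column of A, so each column of A lies in the image.
A = np.array([[1., 0., 3.],
              [1., 1., 2.],
              [0., 0., 0.]])
n = A.shape[1]
for i in range(n):
    e_i = np.zeros(n)
    e_i[i] = 1.0
    assert np.allclose(A @ e_i, A[:, i])
print("each column of A equals f(e_i) for some standard basis vector e_i")
```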
Can you figure out what happens when the matrix of $f$ is not given w.r.t. the standard bases?
Edit: You edited the question. The matrix $A$ is no longer the matrix of $f$ w.r.t. the standard bases, so the above argument fails: the columns of $A$ do not necessarily belong to the image. However, the columns do represent the coordinates of the images of the basis vectors. So what does belong to your image?
Second Edit: Let's work with your example. You know the matrix of $f$ w.r.t. the bases $B$ and $B'$. What information does this give us? We know that $f((1,3))=2(0,0,1)+1(1,0,-1)-1(0,1,0)$. Notice that the coordinates appearing in this last expression are $(2,1,-1)$, which is exactly the first column of $A$. If we want to write $(1,3)$ w.r.t. the basis $B$, then we get $(1,3)=1(1,3)+0(1,2)$. Notice that $A\begin{pmatrix}1\\0\end{pmatrix}=\begin{pmatrix}2\\1\\-1\end{pmatrix}$. This result is the coordinate vector of $f((1,3))$ w.r.t. the basis $B'$. We do not have $(2,1,-1)\in \text{Im}(f)$, but $f((1,3))=2(0,0,1)+1(1,0,-1)-1(0,1,0)=(1,-1,1)\in \text{Im}(f)$. In the same fashion, $f((1,2))=-1(0,0,1)+3(1,0,-1)+1(0,1,0)=(3,1,-4)\in \text{Im}(f)$. Hence $\text{Im}(f)=\text{Span}\left\{(1,-1,1),(3,1,-4)\right\}$.
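The same computation can be written in a few lines of NumPy (a sketch assuming exactly the bases and matrix of this example): stack the $B'$ vectors as columns and multiply by $A$, which turns coordinate columns back into actual vectors of $\mathbb{R}^3$ spanning $\text{Im}(f)$.

```python
import numpy as np

# A holds coordinates w.r.t. B = {(1,3), (1,2)} and B' = {(0,0,1), (1,0,-1), (0,1,0)}.
A = np.array([[ 2, -1],
              [ 1,  3],
              [-1,  1]])
B_prime = np.array([[0,  1, 0],
                    [0,  0, 1],
                    [1, -1, 0]])   # columns are (0,0,1), (1,0,-1), (0,1,0)

image_vectors = B_prime @ A        # columns are f((1,3)) and f((1,2))
print(image_vectors)
# [[ 1  3]
#  [-1  1]
#  [ 1 -4]]   ->  Im(f) = span{(1,-1,1), (3,1,-4)}
```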
Best Answer
What is a basis?
Informally we say: a basis is a set of vectors in the space from which every vector of the space can be built, with nothing redundant in it.
This is what we mean when creating the definition of a basis. It is useful for understanding the relationship between all vectors of the space. They all have something in common: each one can be written as a linear combination of some fixed set of vectors lying in the space. That set of vectors is called a basis of the vector space.
How to make this notion formal?
For that, we use the machinery of linear algebra. We define what a vector is and what we mean by a vector being generated by other vectors. If a vector is a linear combination of other vectors, with coefficients taken from some field (a vector space always comes with a field in its definition, usually $\mathbb{R}$ or $\mathbb{C}$), then we say this vector is generated by them. So in some sense we first find a set of vectors that generates every vector in the space (it can be an infinite or a finite set).
Then linear independence plays the key role: a vector in our generating set may itself be generated by the other vectors in it. So we ask for linear independence when we want the set that generates the space to contain no redundant vectors.
So if I have a set of vectors that generates the space and one, or more, of these vectors is generated by the others, then I take that vector out of the set. And in some sense, once I have a basis of my space, if I take any vector out of my basis then I can no longer generate the whole space! For example, take $\mathbb{R}^2$ and the basis vectors $(0,1)$ and $(1,0)$: we cannot generate $(0,1)$ as a linear combination of $(1,0)$ alone. But of course we can generate all vectors $(a,0)$ for $a \in \mathbb{R}$ using the vector $(1,0)$.
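This removal procedure is easy to mimic numerically. Below is a minimal sketch (the helper `extract_basis` is my own name, not standard) that keeps a vector from a generating set only if it increases the rank, i.e. only if it is not a linear combination of the vectors kept so far.

```python
import numpy as np

# Illustrative sketch: thin a generating set down to a basis by dropping
# every vector that is already a linear combination of the kept ones.
def extract_basis(vectors, tol=1e-10):
    basis = []
    for v in vectors:
        candidate = basis + [v]
        if np.linalg.matrix_rank(np.array(candidate), tol=tol) == len(candidate):
            basis.append(v)
    return basis

generating_set = [np.array([1.0, 0.0]),
                  np.array([2.0, 0.0]),   # redundant: 2 * (1, 0)
                  np.array([0.0, 1.0])]
print(extract_basis(generating_set))      # keeps (1, 0) and (0, 1); drops (2, 0)
```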
But this is not a unique notion!
It is not! A vector space can have many different bases. For example, in $\mathbb{R}^2$ the set $\{(1,0),(0,1)\}$ is a basis, and $\{(3,0),(0,5)\}$ is also a basis. The important point is that we can create every vector (point) of $\mathbb{R}^2$ using either set. So in some sense the basis tells us important things about the space: it gives a relation between all of its vectors, it tells us how to build any vector, and it lets us introduce deeper ideas about the space, such as linear transformations between different vector spaces.
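As a small illustration of the same vector having different coordinates in different bases (an assumed example using the two bases above), solving $Bc = v$ with the basis vectors as the columns of $B$ recovers the coordinates $c$:

```python
import numpy as np

# The same point of R^2 written in two different bases of R^2.
v = np.array([6.0, 10.0])

B1 = np.array([[1.0, 0.0],
               [0.0, 1.0]])       # standard basis {(1,0), (0,1)}
B2 = np.array([[3.0, 0.0],
               [0.0, 5.0]])       # basis {(3,0), (0,5)}

print(np.linalg.solve(B1, v))     # [ 6. 10.]  ->  v = 6*(1,0) + 10*(0,1)
print(np.linalg.solve(B2, v))     # [2. 2.]    ->  v = 2*(3,0) +  2*(0,5)
```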