This theorem simply tells you that if you know exactly what a linear map does to a basis, then you know exactly what that map does to every element of the vector space. This is quite nice since a basis tends to be considerably smaller than the overall space. For example, if we consider the linear transformation $T: V \rightarrow W$ where $V$ and $W$ are both $\mathbb{R}^3$ over the field $\mathbb{R}$, then we need only know what $T$ does to each element of a basis. If, continuing with this example, our basis of the domain is the standard basis $S = \{(1,0,0), (0,1,0), (0,0,1)\}$ and we know that $$T(1,0,0) = (2,1,0), \quad T(0,1,0) = (3,0,-1), \quad T(0,0,1) = (-7, 1, 3),$$ then we can determine exactly where $T$ maps an arbitrary vector, say $(a,b,c)$.
To be exact, we know that $$(a,b,c) = a(1,0,0)+b(0,1,0)+c(0,0,1)$$ and so the assumed linearity of $T$ gives us $$T(a,b,c) = aT(1,0,0) + bT(0,1,0) + cT(0,0,1)$$
$$= a(2,1,0) + b(3,0,-1) + c(-7,1,3)$$
$$= (2a+3b-7c, a+c, -b+3c).$$
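The computation above can be sketched in code. This is a minimal illustration in plain Python (the function name `T` and the test vector are my own choices, not part of the original): we store only the images of the three standard basis vectors and recover $T$ on any vector by linearity.

```python
# Images of the standard basis vectors under T, as given above.
T_e1 = (2, 1, 0)    # T(1,0,0)
T_e2 = (3, 0, -1)   # T(0,1,0)
T_e3 = (-7, 1, 3)   # T(0,0,1)

def T(a, b, c):
    """Apply T to (a, b, c) by linearity: a*T(e1) + b*T(e2) + c*T(e3)."""
    return tuple(a * x + b * y + c * z for x, y, z in zip(T_e1, T_e2, T_e3))

# Agrees with the closed form (2a+3b-7c, a+c, -b+3c):
print(T(1, 2, 3))  # → (-13, 4, 7)
```

Note that the three stored tuples are exactly the columns of the matrix of $T$ with respect to the standard basis.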
This idea generalizes exactly to the argument in the proof that you have. The uniqueness follows from the fact that if two linear transformations agree on basis elements, then they must agree on every vector (simply write an arbitrary vector as a linear combination of the basis elements and use the linearity of your maps). So the answer to your first question is yes!
The answer to your second question is not necessarily yes. If you know that a basis is mapped to some set by two different linear transformations then you haven’t ensured that the two linear transformations have mapped the basis elements to the same places. For example, let’s consider linear transformations $T, U: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ where the domain and range are assumed to be vector spaces over $\mathbb{R}$ and $$T(1,0) = (1,0), T(0,1) = (0,1), U(1,0) = (0,1), U(0,1) = (1,0).$$
Then extending each of $T$ and $U$ to an arbitrary vector (I leave the details of this small part to you) yields that given any $ (a,b) \in \mathbb{R}^2$, we have $T(a,b) = (a,b)$ and $U(a,b) = (b,a)$ and hence $T$ and $U$ are not equal as functions.
Each of $T$ and $U$ mapped the basis $S = \{(1,0),(0,1)\}$ to itself, but they didn't agree on where they sent the individual elements of $S$; that is the distinction I am trying to get at here. You simply need to know where your linear transformation sends each individual vector of your basis in order to know where it sends ANY vector of its domain.
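To make the distinction concrete, here is a small Python sketch of the $T$ and $U$ above (the closed forms for arbitrary $(a,b)$ are the ones derived earlier): both maps send the basis $S$ to the same *set*, yet they are different functions.

```python
def T(a, b):
    return (a, b)   # T fixes both basis vectors

def U(a, b):
    return (b, a)   # U swaps the basis vectors

basis = {(1, 0), (0, 1)}

# Both maps send the basis S to the same SET...
assert {T(*v) for v in basis} == {U(*v) for v in basis} == basis

# ...but they disagree on individual vectors, so T != U as functions.
print(T(3, 5), U(3, 5))  # → (3, 5) (5, 3)
```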
Best Answer
It says that once you know how $T$ acts on a basis, you know how it acts on ALL vectors $v\in V$. To see this, suppose we have defined $T$ on a basis $\left \{ v_{1}, v_{2}, \ldots, v_{n}\right \}$ for $V$:
$Tv_{1}=w_{1},\ Tv_{2}=w_{2},\ \ldots,\ Tv_{n}=w_{n}$.
We can express any $v \in V$ as a linear combination of the basis vectors by writing
$v=a_{1}v_{1}+a_{2}v_{2}+\cdots+a_{n}v_{n}$.
Now apply $T$:
$Tv=T(a_{1}v_{1}+a_{2}v_{2}+\cdots+a_{n}v_{n})$.
But $T$ is linear so we get
$Tv=a_{1}Tv_{1}+a_{2}Tv_{2}+\cdots+a_{n}Tv_{n}$.
So the effect of $T$ on an arbitrary $v$ depends only on how we defined $T$ on the basis $\left \{ v_{1}, v_{2}, \ldots, v_{n}\right \}$.
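The last step can be sketched directly in code. This is a minimal plain-Python illustration (the helper name `apply_T` and the example values are hypothetical): given the coefficients $a_i$ of $v$ and the images $w_i = Tv_i$, we compute $Tv = a_1 w_1 + \cdots + a_n w_n$ componentwise.

```python
def apply_T(coeffs, images):
    """Compute Tv = a_1*w_1 + ... + a_n*w_n, where coeffs = [a_1, ..., a_n]
    and images = [w_1, ..., w_n] are the images of the basis vectors."""
    dim = len(images[0])
    result = [0] * dim
    for a, w in zip(coeffs, images):
        for i in range(dim):
            result[i] += a * w[i]
    return tuple(result)

# Example: v = 2*v1 + (-1)*v2, with Tv1 = (1, 0) and Tv2 = (0, 1):
print(apply_T([2, -1], [(1, 0), (0, 1)]))  # → (2, -1)
```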