This theorem simply tells you that if you know exactly what a linear map does to your basis, then you know exactly what that map does to every element of your vector space. This is quite nice, since a basis tends to be considerably smaller than the overall space. For example, if we consider a linear transformation $T: V \rightarrow W$ where $V$ and $W$ are both $\mathbb{R}^3$ over the field $\mathbb{R}$, then we need only know what $T$ does to each element of a basis. If, continuing with this example, our basis of the domain is the standard basis $S = \{(1,0,0), (0,1,0), (0,0,1)\}$ and we know that $$T(1,0,0) = (2,1,0), \quad T(0,1,0) = (3,0,-1), \quad T(0,0,1) = (-7, 1, 3),$$ then we can determine exactly where $T$ maps an arbitrary vector, say $(a,b,c)$.
To be exact, we know that $$(a,b,c) = a(1,0,0)+b(0,1,0)+c(0,0,1),$$ and so the linearity of $T$ gives us $$T(a,b,c) = aT(1,0,0) + bT(0,1,0) + cT(0,0,1)$$
$$= a(2,1,0) + b(3,0,-1) + c(-7,1,3)$$
$$= (2a+3b-7c, a+c, -b+3c).$$
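As a quick numerical sketch of this computation (not part of the original answer), the map $T$ can be encoded as the matrix whose columns are the images of the standard basis vectors; applying it to an arbitrary vector then reproduces the closed form above:

```python
import numpy as np

# Columns are the images of the standard basis vectors under T,
# taken from the example above: T(1,0,0), T(0,1,0), T(0,0,1).
T = np.array([[2, 3, -7],
              [1, 0,  1],
              [0, -1, 3]])

a, b, c = 4.0, 5.0, 6.0          # an arbitrary vector (a, b, c)
image = T @ np.array([a, b, c])  # matrix-vector product computes T(a,b,c)

# Agrees with the closed form (2a+3b-7c, a+c, -b+3c) derived above
expected = np.array([2*a + 3*b - 7*c, a + c, -b + 3*c])
assert np.allclose(image, expected)
```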
This idea generalizes exactly to the argument in the proof that you have. The uniqueness follows from the fact that if two linear transformations agree on basis elements, then they must agree on every vector (simply write an arbitrary vector as a linear combination of the basis elements and use the linearity of your maps). So, the answer to your first question is yes!
The answer to your second question is not necessarily yes. If you know that a basis is mapped to some set by two different linear transformations then you haven’t ensured that the two linear transformations have mapped the basis elements to the same places. For example, let’s consider linear transformations $T, U: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ where the domain and range are assumed to be vector spaces over $\mathbb{R}$ and $$T(1,0) = (1,0), T(0,1) = (0,1), U(1,0) = (0,1), U(0,1) = (1,0).$$
Then extending each of $T$ and $U$ to an arbitrary vector (I leave the details of this small part to you) yields that given any $ (a,b) \in \mathbb{R}^2$, we have $T(a,b) = (a,b)$ and $U(a,b) = (b,a)$ and hence $T$ and $U$ are not equal as functions.
Each of $T$ and $U$ mapped the basis $S= \{(1,0),(0,1)\}$ to itself but they didn’t agree on where they sent the individual elements of $S$ so that is the distinction that I am trying to get at here. You simply need to know where your linear transformation sends each individual vector of your basis in order to know where that transformation sends ANY vector of its domain.
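To make this distinction concrete (a small sketch, not part of the original answer), here are $T$ and $U$ as matrices; both permute the standard basis set onto itself, yet they act differently on a generic vector:

```python
import numpy as np

# T fixes each standard basis vector; U swaps them. Both send the
# basis set {(1,0), (0,1)} to itself *as a set*, yet the maps differ.
T = np.array([[1, 0],
              [0, 1]])   # columns: T(1,0), T(0,1)
U = np.array([[0, 1],
              [1, 0]])   # columns: U(1,0), U(0,1)

v = np.array([3, 5])
print(T @ v)  # [3 5]  -> T(a,b) = (a,b)
print(U @ v)  # [5 3]  -> U(a,b) = (b,a)
```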
Best Answer
So $B_1= \{v_1,v_2,\ldots,v_n \}$ and $B_2= \{w_1,w_2,\ldots,w_n \}$ are two different bases of the vector space $V$. Now, as $B_1$ is a spanning set, we can write $w_1=a_1v_1+a_2v_2+\cdots+a_nv_n$, where not all of the $a_i$ are zero (since $w_1 \neq 0$, being a basis vector). Suppose, WLOG, that $a_i\neq 0$ for some index $i$. Then $$v_i=\frac{w_1-a_1v_1-\cdots-a_{i-1}v_{i-1}-a_{i+1}v_{i+1}-\cdots-a_{n}v_{n}}{a_i} = a_i^{-1}w_1-a_1a_i^{-1}v_1-\cdots-a_{i-1}a_i^{-1}v_{i-1}-a_{i+1}a_i^{-1}v_{i+1}-\cdots-a_{n}a_i^{-1}v_{n}.$$
So this means that any linear combination of the $v_i$'s can be written as a linear combination of $S=\{v_1,\ldots,v_{i-1},w_1,v_{i+1},\ldots,v_{n}\}$: for suppose that $v=b_1v_1+\cdots+b_iv_i+\cdots+b_nv_n$; then $$v=b_1v_1+\cdots+b_i\left(a_i^{-1}w_1-a_1a_i^{-1}v_1-\cdots-a_{i-1}a_i^{-1}v_{i-1}-a_{i+1}a_i^{-1}v_{i+1}-\cdots-a_{n}a_i^{-1}v_{n}\right)+\cdots+b_nv_n.$$
Distribute and group like terms together, and then you can see that $v$ is a linear combination of vectors in $S$. Hence $\operatorname{span}(S)= V$, and since $|S|=|B_1|$, $S$ is a basis for $V$.
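The replacement step above can be checked numerically (a sketch with concrete numbers I chose for illustration, not part of the original answer): take $B_1$ to be the standard basis of $\mathbb{R}^3$, pick a $w_1$ with a nonzero coefficient $a_i$, solve for $v_i$, and verify that an arbitrary combination of the $v_j$'s is also a combination of the swapped set $S$:

```python
import numpy as np

# B1 = standard basis of R^3; w1 has a nonzero coefficient a_2 = 5,
# so we may solve for v2 (the role of v_i in the proof, with i = 2).
v1, v2, v3 = np.eye(3)
w1 = np.array([4.0, 5.0, 6.0])   # w1 = 4 v1 + 5 v2 + 6 v3

# v2 = (w1 - 4 v1 - 6 v3) / 5, exactly as in the displayed formula
v2_expr = (w1 - 4*v1 - 6*v3) / 5
assert np.allclose(v2_expr, v2)

# Any v = b1 v1 + b2 v2 + b3 v3 is then a combination of S = {v1, w1, v3}:
b1, b2, b3 = 7.0, 8.0, 9.0
v = b1*v1 + b2*v2 + b3*v3
v_from_S = (b1 - b2*4/5)*v1 + (b2/5)*w1 + (b3 - b2*6/5)*v3
assert np.allclose(v, v_from_S)   # same vector, expressed in S
```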