These are very different. Are you aware of what the free vector space on a set $X$ is? It is a vector space $\mathscr F(X)$ with $X$ as a basis. In particular, if you choose $X$ to be a real vector space $V$, then $\mathscr F(V)$ is a real vector space which has $V$ as a basis. So if, for example, $V=\Bbb R$, then $V$ is a finite-dimensional real vector space, but $\mathscr F(V)$ is a vector space whose dimension is the cardinality of $\Bbb R$, and $\mathscr F(V)$ is indeed very different from $V$.
To say it in a different way, $\mathscr F(V)$ consists of all "formal combinations" of elements of $V$. The usual addition and scalar multiplication from $V$ don't apply here. So in particular, in $\mathscr F(V\times V')$, the elements $(v,0)+(0,v')$ and $(v,v')$ are different elements, even though they are the same as elements of $V\times V'$.
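To make this concrete, write $[x]$ for the basis vector of $\mathscr F(V)$ indexed by $x\in V$ (this bracket notation is mine, purely for illustration). Then in $\mathscr F(\Bbb R)$ we have, for instance, $$[1]+[1]=2\,[1]\neq [2],$$ since $[1]$ and $[2]$ are distinct basis vectors and hence linearly independent. The equation $1+1=2$, which holds inside $V=\Bbb R$, is simply invisible to $\mathscr F(V)$.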
Technically, yes, it is a valid proof, provided the vector spaces are assumed to be finite-dimensional (which they don't seem to be) and the principle you quoted is correct.
However, the results hold in infinite dimensions too, where the dimension counts you made are no longer valid, so without the finite-dimensionality assumption you need another proof.
Let me try to convince you, though, that even if the vector spaces were assumed to be finite-dimensional, you should look for another proof.
Indeed, a very important first point is that this proof doesn't give you an explicit isomorphism, and in the examples you gave, there actually are very interesting explicit isomorphisms. They are interesting for a bunch of reasons:
First of all, they are explicit, and having explicit isomorphisms is always a good thing in practice. Here, one is often led to identify, say, $L(V\times W, E)$ with $L(V,E)\times L(W,E)$, and to make this identification you need the specific isomorphism.
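For concreteness, here is the isomorphism I suspect the author intends (the names $\Phi$, $\iota_V$, $\iota_W$ are mine): writing $\iota_V : V \to V\times W$, $v \mapsto (v,0)$ and $\iota_W : W \to V\times W$, $w \mapsto (0,w)$, define $$\Phi : L(V\times W, E) \to L(V,E)\times L(W,E), \qquad \Phi(T) = (T\circ \iota_V,\ T\circ \iota_W),$$ whose inverse is $\Phi^{-1}(S_1,S_2)(v,w) = S_1(v) + S_2(w)$. Note that no basis, and hence no dimension count, appears anywhere.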
Second of all, the explicit isomorphisms I am thinking of (which are almost surely the ones the author had in mind) are very nice, in that they're "natural". We could get into the technicalities of what this means precisely, but essentially it means they behave well with the whole of linear algebra. By that I mean, for instance, regarding the last one: if you have two vector spaces $V,W$ and identify $V^n$ with $L(\mathbb F^n, V)$ via that isomorphism, and $W^n$ with $L(\mathbb F^n, W)$ via that isomorphism too, then nothing "goes wrong" when you look at maps between $V$ and $W$. You can try to fiddle around with it a bit to see what I mean; a sketch follows below.
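To sketch what this means for the last isomorphism (notation mine): let $\Phi_V : V^n \to L(\mathbb F^n, V)$ send $(v_1,\dots,v_n)$ to the map $(a_1,\dots,a_n) \mapsto \sum_i a_i v_i$. Then for every linear map $f : V \to W$, $$\Phi_W(f(v_1),\dots,f(v_n)) = f \circ \Phi_V(v_1,\dots,v_n),$$ i.e. applying $f$ coordinatewise on $V^n$ corresponds, under the identifications, to postcomposing with $f$ on $L(\mathbb F^n, V)$. This compatibility with all linear maps is the "naturality" alluded to above.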
And lastly, note that although the course only mentions the principle for finite-dimensional spaces, the isomorphisms I have in mind work for infinite-dimensional spaces too, and the proof remains unchanged. This happens often: proofs with dimensions are often more restrictive than "synthetic" proofs, and so the latter are better. Indeed, it often happens that there is extra structure on your spaces, and the dimension proofs don't see that at all, whereas proofs that actually exhibit the isomorphism are more likely to help you with that extra structure.
All of this is very common in linear algebra and other disciplines: students tend to use bases, dimensions, charts in differential geometry, etc. whenever they have the opportunity, whereas it's often much better, for various reasons, to stay at the "synthetic" level, and the gain is often huge, although sometimes hard to see for students.
Best Answer
One technique that is useful in algebra is to try to relate the subspace you are working with (in this case $V_1 \times_W V_2$) with some linear map.
Define a map $C : V_1 \times V_2 \to W$ by $C(v_1, v_2) = Av_1 - Bv_2$. Note that $C$ is linear, $\ker C = V_1 \times_W V_2$ (indeed, $C(v_1, v_2) = 0$ exactly when $Av_1 = Bv_2$), and $\operatorname{range} C = \operatorname{range}A + \operatorname{range}B$.
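In case the range computation is not immediate, here it is spelled out (nothing beyond the definitions is used): $$\operatorname{range} C = \{Av_1 - Bv_2 : v_1 \in V_1,\ v_2 \in V_2\} = \operatorname{range} A + \operatorname{range} B,$$ where the last equality holds because $\operatorname{range} B$ is a subspace and hence closed under negation.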
By the Rank-Nullity Theorem $$\dim(V_1 \times V_2) = \dim(\ker C) + \dim(\operatorname{range} C)$$ so $$\dim(V_1) + \dim(V_2) = \dim(V_1 \times _W V_2) + \dim(\operatorname{range}A + \operatorname{range}B)$$ and the desired equality follows.
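Rearranged, and assuming the identity to be shown is the usual dimension formula for the fiber product, this reads $$\dim(V_1 \times_W V_2) = \dim(V_1) + \dim(V_2) - \dim(\operatorname{range}A + \operatorname{range}B).$$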