In short, the answers to your question are
- You checked enough properties to show that $S$ was well-defined (see final example below).
- A function definition followed by a demonstration that it's well-defined is a style widely used by professional mathematicians. See, e.g., the comments in this blog post, where professional mathematicians discussing well-definedness do exactly this.
- As far as I can tell your proof is correct.
What does well-definedness mean?
In set theory, a function is defined as a relation with a certain property which, viewed from the broad perspective, means that each input value to the function gives exactly one output value. Relations are an extremely useful concept to have in your toolkit, and essential terminology for any deeper work, but (as seen in the blog post comments above) many mathematicians neither think of nor write about functions at the set-theoretic level unless the context requires it.
At a higher level, when we write $f : X\rightarrow Y$, $f(\mathit{input}) = \mathit{output}$, then the function is well-defined if:
1. However we describe the $\mathit{input}$, it must be able to take every possible value in $X$.
2. The $\mathit{output}$ is always in $Y$.
3. If two $\mathit{input}$s give the same $X$ value, then in both cases the $\mathit{output}$ gives the same $Y$ value.
A lot of the time we don't need to check 1 and 3
We don't need to check 1 and 3 when we have a function defined by phrases like:
"Define $f: \mathbb R\to\mathbb R$, where for all $x\in\mathbb R$, $f(x)=\dots$"
where the RHS depends only on $x$, or phrases like:
"Define $f: \mathbb R^3\to\mathbb R$, where for all $x,y,z\in\mathbb R$, $f(x,y,z) = \dots$"
where the RHS depends only on $x,y,z$.
Since the parameters to $f$ range over their entire domains (all of $\mathbb R$ or $\mathbb R^3$), we don't need to check 1.
The form of $f$ means that for whatever particular values of $x$ (or $x,y,z$) we put into $f$, we only get one value coming out of the RHS expression, so we don't need to check 3.
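When everything is finite, the three checks can be carried out mechanically. Here is a Python sketch (the function name and the data encoding are my own invention): a definition is encoded as triples $(\text{description}, x, \text{output})$, where $x$ is the domain element the description denotes.

```python
def is_well_defined(triples, domain, codomain):
    """Run the three well-definedness checks on a function definition
    given as (description, x_value, output) triples, where x_value is
    the domain element the description denotes."""
    # Check 1: the descriptions must cover every element of the domain.
    if {x for (_, x, _) in triples} != set(domain):
        return False
    # Check 2: every output must lie in the codomain.
    if any(out not in codomain for (_, _, out) in triples):
        return False
    # Check 3: descriptions denoting the same domain element
    # must produce the same output.
    seen = {}
    for _, x, out in triples:
        if x in seen and seen[x] != out:
            return False
        seen[x] = out
    return True

# A toy definition "f(a+b) = (a+b) mod 2" with a, b in {0, 1}:
# the descriptions "a+b" cover the domain {0, 1, 2}.
triples = [(f"{a}+{b}", a + b, (a + b) % 2)
           for a in range(2) for b in range(2)]
print(is_well_defined(triples, {0, 1, 2}, {0, 1}))  # True
```

Dropping triples so that some domain element is never described makes check 1 fail, and the function correctly reports the definition as not well-defined.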
Example: $\quad$ $f : \mathbb R \to \mathbb R$ with $f(x)=x^2-\pi$.
1. Nothing to check. This informal notation silently implies that $x$ ranges over the whole domain $\mathbb R$. Some authors write it more explicitly: for all $x \in \mathbb R$, $f(x) = x^2-\pi$.
2. When we read or write this, we should check that $x^2-\pi$ gives us a real number, which it does.
3. Nothing to check. Once we have a particular input value $x$, there's no choice about what the output value can be.
Here, there's no need to write anything about well-definedness (check 2 is too obvious).
Example: $\quad f : \mathbb R^2 \to \mathbb R~$ with $~f(x,x^2) = 3x~$ for all $~x\in\mathbb R$.
1. $f$ is not well-defined: the inputs $(x,x^2)$ don't cover the whole domain $\mathbb R^2$. E.g. no $x\in\mathbb R$ gives us $(x,x^2) = (0,-1)$.
Example:
I've seen some authors do this, for example: define a function $f:\mathbb R\to \mathbb R$ by $f(a+b)=a/2+b/2$. The authors then check that this function $f$ they defined "makes sense".
The definition is a little sloppy; the reader thinks "Where did $a$ and $b$ come from? $a+b$ has to be a real number, so I guess $a$ and $b$ are arbitrary real numbers." Let's assume so and apply our checks:
1. For any $r\in\mathbb R$ we can get $r=a+b$ by picking, e.g., $a=r$, $b=0$.
2. Since $a,b\in \mathbb R$, the output $a/2 + b/2$ is also in $\mathbb R$.
3. Here our input $a+b$ depends on two variables, $a$ and $b$. Suppose we get the same input value in two different ways, i.e. $a_1+b_1 = a_2+b_2$. Then $f(a_1+b_1) = a_1/2+b_1/2 = \dfrac{a_1+b_1}2 = \dfrac{a_2+b_2}2 = a_2/2+b_2/2 = f(a_2+b_2)$.
If I were writing this myself, I'd probably express it as follows.
Define $f:\mathbb R\to \mathbb R~$ by $~f(a+b)=a/2+b/2~$ for all $a,b\in \mathbb R$. $~~f$ is well-defined since $a_1\!+\!b_1 = a_2\!+\!b_2~$ implies $~f(a_1+b_1) = a_1/2+b_1/2 = \dfrac{a_1+b_1}2 = \dfrac{a_2+b_2}2 = a_2/2+b_2/2 = f(a_2+b_2)$.
I've left out the quantifiers for $a_1,a_2,b_1,b_2$, assuming that the reader knows how to play the game of well-definedness, and that the reader can immediately see that any real can be written as the sum of two reals.
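As a concrete sanity check of check 3 on rational inputs, here is a small Python sketch (exact arithmetic via the standard `fractions` module; the particular decompositions of $7$ are chosen arbitrarily):

```python
from fractions import Fraction

def f(a, b):
    # The rule from the definition: f(a+b) = a/2 + b/2.
    return a / 2 + b / 2

# Check 3: two different decompositions of the same input value
# must give the same output. Write r = 7 in two different ways.
r = Fraction(7)
a1, b1 = Fraction(3), Fraction(4)      # 3 + 4 = 7
a2, b2 = Fraction(10), Fraction(-3)    # 10 + (-3) = 7
assert a1 + b1 == a2 + b2 == r
print(f(a1, b1) == f(a2, b2) == r / 2)  # True: both equal 7/2
```

Of course this only samples two decompositions; the algebraic argument above covers all of them at once.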
Your example: $\quad$ Define a map $S:W\to V$ such that $S(T(v))=v \text{ for all } v\in V$.
1. As you noted, $T(v)$ gives us every $w\in W$ since $T$ is surjective.
2. The output, $v$, is obviously always in the codomain $V$.
3. As you noted, for $v, v' \in V$, if the inputs $T(v)$ and $T(v')$ are equal then (since $T$ is a bijection) $v=v'$, i.e. the outputs $v$ and $v'$ are equal.
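The same three checks for $S$ can be watched in action on a finite bijection. A small Python sketch (the sets and the map $T$ are made up for illustration):

```python
# A bijection T from V = {'v1','v2','v3'} onto W = {'w1','w2','w3'}.
V = {'v1', 'v2', 'v3'}
W = {'w1', 'w2', 'w3'}
T = {'v1': 'w2', 'v2': 'w3', 'v3': 'w1'}

# Define S by S(T(v)) = v, i.e. by inverting the pairs of T.
S = {T[v]: v for v in V}

# Check 1: since T is surjective, the inputs T(v) cover all of W.
print(set(S) == W)                       # True
# Check 2: every output of S lies in V.
print(set(S.values()) <= V)              # True
# Check 3: T injective means T(v) = T(v') forces v = v', so building
# S never tries to assign two different outputs to one input.
print(len({T[v] for v in V}) == len(V))  # True
```

If $T$ were not injective, the dictionary comprehension would silently overwrite an entry, which is exactly a failure of check 3 in disguise.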
Let $V$ and $W$ be two vector spaces and let $T:V\to W$ be a linear transformation. $T$ is said to be an isomorphism if $T$ is also a bijection; being a bijection means there is an inverse map $S:W\to V$ such that $S$ and $T$ are each other's inverses.
So essentially the statements "$T$ is an isomorphism" and "$T$ is invertible" are one and the same. Let me give an outline of the proof.
Let $T$ be an isomorphism. To prove $T$ is invertible, we need to show that there exists a linear transformation $S:W\to V$ that maps each element of $W$ uniquely back to $V$. Let $y\in W$. Since $T$ is onto, there exists an $x\in V$ such that $T(x) = y$. Uniqueness of $x$ is established by $T$ being one-to-one. So for every $y\in W$ there is a unique such $x \in V$, and hence we may define $$S:W\to V \quad \text{such that} \quad S(y) = x$$ where $T(x) = y$.
Checking the linearity of $S$ is simple: if $y_1 = T(x_1)$ and $y_2 = T(x_2)$, then $S(y_1+y_2) = S(T(x_1)+T(x_2)) = S(T(x_1+x_2)) = x_1+x_2 = S(y_1)+S(y_2)$, and similarly for scalar multiples.
For the converse, using the two linear maps that compose to give the identity, we can prove that $T$ is one-to-one and onto and therefore an isomorphism.
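The subtle step, that the inverse $S$ is itself linear, can at least be spot-checked numerically. A Python sketch with an arbitrary invertible $2\times 2$ matrix over the rationals (the matrix and sample vectors are made up; exact arithmetic with `fractions` avoids rounding issues):

```python
from fractions import Fraction as F

# T as an invertible 2x2 matrix acting on column vectors.
T = [[F(2), F(1)],
     [F(1), F(1)]]           # det = 1, so T is a bijection

def apply(M, v):
    # Matrix-vector product for 2x2 M and length-2 v.
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def inverse2(M):
    # Standard closed-form inverse of a 2x2 matrix.
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[ M[1][1]/det, -M[0][1]/det],
            [-M[1][0]/det,  M[0][0]/det]]

S = inverse2(T)

# Linearity checks for S: additivity and homogeneity on sample vectors.
y1, y2, c = [F(3), F(-1)], [F(0), F(5)], F(7)
add = [a + b for a, b in zip(y1, y2)]
print(apply(S, add) == [a + b for a, b in zip(apply(S, y1), apply(S, y2))])  # True
print(apply(S, [c*y for y in y1]) == [c*x for x in apply(S, y1)])            # True
```

This samples only a few vectors, so it is evidence rather than proof; the composition argument above is what actually establishes linearity of $S$.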
Best Answer
You missed the point. But to be fair, the point is easy to miss for a beginner.
An isomorphism of vector spaces is a linear map that has a two-sided inverse linear map.
According to your definition, an invertible linear map is a linear map that has a two-sided inverse map.
The subtle point is that being invertible does not automatically imply that the inverse is also linear. This is what you have to show.
As you progress in mathematics, you will find yourself in situations where bijective homomorphisms are not necessarily isomorphisms (e.g. in topology, a continuous bijection need not have a continuous inverse).