Global invertibility theorems are generally hard. One example of such a theorem is stated in Global invertibility of a map $\mathbb{R}^n\to \mathbb{R}^n$ from everywhere local invertibility, where pointers to literature are given. Proofs tend to involve substantially more topology than the local consideration: e.g., topological degree of a map, path lifting...
Here's a simple result of purely analytic nature: if $f$ is differentiable in a convex domain $U\subset \mathbb R^n$ and $\|Df - I\|<1$ pointwise, then $f$ is invertible. (Here $I$ is the identity matrix, and the norm is the operator norm.) Indeed, let $g(x)=f(x)-x$ and observe that $\|Dg\|<1$, hence $|g(a)- g(b)|<|a-b|$ whenever $a\ne b$. It follows that $f(a)\ne f(b)$.
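As a sanity check of this argument, here is a small numerical sketch (the map $f(x,y)=(x+0.3\sin y,\; y+0.3\sin x)$ is my own illustrative choice, not from the original): since $\|Df-I\|\le 0.3<1$, the differences of $g(x)=f(x)-x$ contract strictly, which forces $f(a)\ne f(b)$ for $a\ne b$.

```python
import numpy as np

def f(p):
    x, y = p
    # Df = I + [[0, 0.3 cos y], [0.3 cos x, 0]], so ||Df - I|| <= 0.3 < 1 everywhere
    return np.array([x + 0.3 * np.sin(y), y + 0.3 * np.sin(x)])

rng = np.random.default_rng(1)
for _ in range(100):
    a, b = rng.normal(size=2), rng.normal(size=2)
    # |g(a) - g(b)| < |a - b| for g(x) = f(x) - x, hence f(a) != f(b)
    g_diff = np.linalg.norm((f(a) - a) - (f(b) - b))
    assert g_diff < np.linalg.norm(a - b)
```

The random pairs are just spot checks; the pointwise bound on $\|Dg\|$ is what guarantees the strict contraction along every segment of the convex domain.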
Here is a more interesting result. Suppose that $U$ and $V$ are Jordan domains in the plane, $f: \overline{U}\to \overline{V}$ is continuous, the restriction of $f$ to $U$ is differentiable with nonvanishing Jacobian determinant, and the restriction of $f$ to $\partial U$ is a homeomorphism onto $\partial V$. Then $f$ is a diffeomorphism of $U$ onto $V$.
The proof goes as follows:
- the degree of $f$ with respect to any point $w\in V$ is equal to $1$ (from consideration of the boundary values);
- this degree is the sum of $\operatorname{sign} \det Df(z)$ over the points $z$ with $f(z)=w$;
- the Jacobian determinant $\det Df(z)$ has constant sign;
- conclusion: there is exactly one $z$ such that $f(z)=w$.
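The first two steps can be illustrated numerically. In the sketch below (the map and sample point are my own illustrative choices), $f(z)=z+0.3z^2$ on the closed unit disk satisfies $|f'(z)-1|\le 0.6<1$, so it is injective by the previous result, and the degree with respect to $w=f(0.2)$ is computed as the winding number of the boundary curve $f(\partial U)$ around $w$:

```python
import numpy as np

f = lambda z: z + 0.3 * z**2   # analytic; f'(z) = 1 + 0.6z does not vanish on |z| <= 1

# image of the boundary circle, f(∂U) for U the unit disk
theta = np.linspace(0.0, 2.0 * np.pi, 4001)
curve = f(np.exp(1j * theta))

w = f(0.2)  # a point of V = f(U)

# degree of f at w = winding number of f(∂U) around w:
# sum the angle increments along the curve, wrapped into (-pi, pi]
angles = np.angle(curve - w)
increments = np.diff(angles)
increments = (increments + np.pi) % (2.0 * np.pi) - np.pi
degree = round(increments.sum() / (2.0 * np.pi))
print(degree)  # 1, matching the unique preimage of w in U
```

The degree comes out as $1$, consistent with the count of solutions of $f(z)=w$ weighted by $\operatorname{sign}\det Df$.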
The proof works in higher dimensions too, and the assumption on the boundary values can be weakened.
Unfortunately, I don't know of a good source for this material in the finite-dimensional setting; it seems that people interested in global invertibility tend to work in nonlinear functional analysis. I can recommend the book Nonlinear Functional Analysis by K. Deimling: its first chapter is on finite-dimensional spaces, and global invertibility is considered briefly in Chapter 4.
For example, let $f(x)$ be $0$ if $x \leqslant 0$ and $e^{-1/x^2}$ if $x > 0$, and let $g(x) = 0$. Then $f$ and $g$ are both smooth and have the same values on the negative half-line, but differ on the positive half-line.
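A quick numerical sketch of why this $f$ is smooth at $0$ despite being nonzero for $x>0$: the values $e^{-1/x^2}$ vanish faster than any power of $x$ as $x\to 0^+$, so every difference quotient at $0$ tends to $0$.

```python
import math

def f(x):
    # the classical smooth, non-analytic function: 0 for x <= 0, e^{-1/x^2} for x > 0
    return 0.0 if x <= 0 else math.exp(-1.0 / x**2)

print(f(-1.0))             # 0.0 on the negative half-line
print(f(0.1))              # about 3.7e-44: already astronomically small
print(f(0.1) / 0.1**10)    # still tiny, so even f(x)/x^10 -> 0 as x -> 0+
```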
Best Answer
For functions of one real variable, the proof is simpler because a nonvanishing derivative implies strict monotonicity, and we get an inverse function at once. This can be adapted to the complex variable by interpreting strict monotonicity as $$\operatorname{Re}\frac{f(a)-f(b)}{a-b}>0,\quad a\ne b \tag1$$ (of course, this really mimics increasing rather than general monotone functions). Based on this, one can give a proof that does not rely on the inverse function theorem for several real variables.
Indeed, if $f'(z_0)\ne 0$, then the function $g(z)=f(z)/f'(z_0)$ satisfies $g'(z_0)=1$. Since $g$ is $C^1$, it follows that $\operatorname{Re}g'\ge \frac12$ in some neighborhood $U=\{z:|z-z_0|<r\}$. Therefore, $$\operatorname{Re}\frac{g(a)-g(b)}{a-b} = \int_0^1 \operatorname{Re} g'(ta+(1-t)b)\,dt \ge \frac12,\quad a,b\in U, \ a\ne b \tag2$$ Hence, $|g(a)-g(b)|\ge \frac12 |a-b|$ for $a,b\in U$. We conclude that $g^{-1}$ is defined and Lipschitz continuous in $g(U)$ (which is an open set by the open mapping theorem for analytic functions). It remains to compute the derivative of $g^{-1}$ in the usual way, by flipping the difference quotient.
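The lower bound coming from (2) can be checked numerically. In the sketch below (the choices $f(z)=e^z$, $z_0=0$, $r=0.4$ are mine, for illustration), $g=f$ since $f'(0)=1$, and on $|z|<0.4$ we have $\operatorname{Re} g'(z) = e^x\cos y \ge e^{-0.4}\cos 0.4 > \tfrac12$, so $|g(a)-g(b)|\ge \tfrac12|a-b|$:

```python
import numpy as np

g = np.exp           # g(z) = e^z, with g'(0) = 1
z0, r = 0.0, 0.4     # on |z - z0| < 0.4: Re g'(z) = e^x cos y >= e^{-0.4} cos 0.4 > 1/2

# random pairs of distinct points in the disk of radius r about z0
rng = np.random.default_rng(0)
pts = z0 + r * rng.random(200) * np.exp(2j * np.pi * rng.random(200))
a, b = pts[:100], pts[100:]

# the Lipschitz lower bound |g(a) - g(b)| >= (1/2)|a - b| from estimate (2)
ratio = np.abs(g(a) - g(b)) / np.abs(a - b)
assert (ratio >= 0.5).all()
```

This is exactly the bi-Lipschitz estimate that makes $g^{-1}$ Lipschitz continuous on $g(U)$.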