What are the different methods to find the inverse of a function? The one I was taught in high school is to solve $y=f(x)$ for $x$ in terms of $y$. But this requires a lot of simplification and, for complicated functions, may lead to equations in $x$ and $y$ that cannot be simplified further. So I believe there are other methods to find the inverse of a function. Are there methods using calculus too?
[Math] What are the methods to find the inverse of a function
Tags: functions, inverse
Related Solutions
Let me start by giving a unified viewpoint, and then I'll reconcile the other things you heard with it.
A vector is an element of a vector space*.
Can we consider functions as vectors using this?
It's a minor matter to show that for a nonempty set $X$ the set of functions $\{f\mid f:X\to \Bbb R \}$ is a vector space under the operations $(f+g)(x):=f(x)+g(x)$ and $(\lambda f)(x):=\lambda f(x)$. Thus these functions qualify as vectors. (Actually $\Bbb R$ can be replaced with any field.)
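As a quick illustration of the claim above (not part of the original answer), here is a minimal sketch of the pointwise operations that make functions $X\to\Bbb R$ into a vector space; the names `add` and `scale` and the sample functions are my own choices:

```python
# Sketch: functions X -> R form a vector space under pointwise operations.

def add(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(c, f):
    """Pointwise scalar multiple: (c * f)(x) = c * f(x)."""
    return lambda x: c * f(x)

# Two sample "vectors" in this space, with X = R.
f = lambda x: x ** 2
g = lambda x: 3 * x + 1

h = add(f, scale(2.0, g))   # the vector f + 2g
print(h(5))                  # f(5) + 2*g(5) = 25 + 32 = 57.0
```

Checking the full list of vector-space axioms (associativity, the zero function as zero vector, etc.) is routine from these definitions.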
What about vectors being "lists of numbers"?
The most relevant theorem is this:
Every vector space has a basis $\{b_i\mid i\in I\}$ for some index set $I$.
This gives us a corollary:
Every vector space $V$ over a field $F$ is isomorphic to a direct sum $\bigoplus_{i\in I} F$ for some index set $I$.
What's the connection? The isomorphism in the corollary is given by writing out the coefficients for an element of $V$, and then mapping that element to the list of coefficients in $\bigoplus_{i\in I} F$. So in this sense, it's true that every $F$-vector space looks the same as a list of elements of $F$.
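A concrete toy version of this "function as list of coefficients" picture (my illustration, not the answer's): in the space of polynomials of degree less than 3 with basis $\{1, x, x^2\}$, each polynomial corresponds to its coefficient list. The helper `to_coords` below is hypothetical and recovers the coefficients by evaluating at three points:

```python
# Sketch: identify a polynomial of degree < 3 with its coordinate list
# relative to the basis {1, x, x^2}.

def to_coords(p):
    """Recover [c0, c1, c2] for p(x) = c0 + c1*x + c2*x^2.

    A quadratic is determined by its values at three points; here we
    use finite differences of p(0), p(1), p(2)."""
    v0, v1, v2 = p(0), p(1), p(2)
    c0 = v0
    c2 = (v2 - 2 * v1 + v0) / 2
    c1 = v1 - v0 - c2
    return [c0, c1, c2]

p = lambda x: 2 + 3 * x - x ** 2   # the "vector" 2 + 3x - x^2
print(to_coords(p))                # [2, 3.0, -1.0] -- its coordinate list
```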
What about vectors as "arrows"?
This is a geometric interpretation of vectors. When working over the real numbers (or any ordered field for that matter) and in two or three dimensions, it is very useful to think of vectors this way.
But this is not really the essence of what a vector is. After you go to even higher dimensions, perhaps infinitely many, the usefulness of the arrow becomes less clear. And also as you move to unordered fields, say $\Bbb C$ or even finite fields, there is no notion of "direction" along the vector, so usefulness diminishes there as well. So the direction-magnitude picture of vectors is a useful picture of real vector spaces, but it is not so successful for general vector spaces.
Now for your bullet points:
- "Vectors are [...] basically pointy arrows written with brackets or an arrow on top..." That is notation, yes, and I think we cleared this concept up :)
- "never drawn as an arrow" Yes, because the vector has too many dimensions for an arrow to be a very helpful picture. "doesn't satisfy some axioms of vector space" In general this is just incorrect. Many sets of functions satisfy the axioms of vector spaces.
- "You can do [lots of other stuff] with functions [that you can't do with vectors]" Well, there is no rule saying vectors can't also be other things with extra abilities! The $n\times n$ square matrices form a vector space too, and being square does not prevent a matrix from being a vector!
- "No authority has ever said functions are vectors" This is just misinformed. The entire field of functional analysis concerns itself with vector spaces of functions.
- "Yes, functions are sometimes contained in brackets, but that is just a vector of functions, not a function" This is a slightly sticky question because the tuples involved are different (and, in a sense, aren't). It is another good reason to stop thinking of vectors only as "tuples of numbers." You can have tuples of functions, and yes, they can still be thought of as vectors in the product space $V\times V$.
$^\ast$ Caution: Physicists and engineers talk about vectors and vector spaces differently. The suggested duplicate discusses this: How to think of a function as a vector?
If you want to go by fixed steps, you should do a proper analysis of the function. Back in the days at my school that meant the following:
1) State the domain
2) Investigate the y-intercept
3) Investigate the x-intercept(s)
4) Investigate where the curve is above/below the x-axis
5) Investigate vertical asymptotes/holes in the graph
6) State the derivative
7) State the domain of the derivative
8) Make a number line and determine the intervals where the function is increasing/decreasing
9) Determine the maxima/minima
10) Consider the limits as $x$ goes to $\pm\infty$; horizontal asymptotes?
11) Consider the possibility of slant asymptotes or other asymptotic behavior
12) Make a graph (nowadays, use a calculator)
13) Determine the range
An option would be to add the second derivative (for concavity), but at my school that was not standard on the list unless specified by the teacher. Hope this helps.
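A few steps of the checklist above can be automated symbolically. This is a sketch assuming SymPy is available; the sample function $f(x)=(x^2-1)/(x-2)$ is my own choice, not from the answer:

```python
# Sketch of a few checklist steps using SymPy, on f(x) = (x^2 - 1)/(x - 2).
import sympy as sp

x = sp.symbols('x')
f = (x**2 - 1) / (x - 2)

# Steps 2-3: intercepts.
y_intercept = f.subs(x, 0)               # f(0) = 1/2
x_intercepts = sp.solve(sp.Eq(f, 0), x)  # zeros of the numerator: -1, 1

# Step 6: derivative, for increasing/decreasing intervals.
df = sp.simplify(sp.diff(f, x))

# Step 10: behaviour at infinity.
lim = sp.limit(f, x, sp.oo)  # oo here, so no horizontal asymptote (slant instead)

print(y_intercept, x_intercepts, df, lim)
```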
Best Answer
First of all, you need the function to be bijective (that is, injective and surjective) to be able to find an inverse. For example, $f:\Bbb R\ni x \mapsto 1$ has no inverse.
Then, you need to understand what functions are. A function $f$ is a triple $(D,C,G)$, where $D$ is the domain, $C$ is the codomain, and $G$, the graph of $f$, is a subset of $D\times C$ with the property that $\forall x \in D, \exists ! y \in C, (x,y)\in G$. Because of this property, we can assign a unique $y\in C$ to any given $x\in D$, and so we give that $y$ a name: $f(x)$.
Saying that $f$ is injective is saying that $\forall x_1,x_2\in D, f(x_1)=f(x_2)\implies x_1=x_2$, and saying that it is surjective means that $\forall y \in C, \exists x \in D, f(x)=y$. Translating those properties in terms of the graph, you get $\forall x_1,x_2\in D, \forall y \in C, (x_1,y)\in G \land (x_2,y)\in G \implies x_1=x_2$ and $\forall y \in C,\exists x \in D, (x,y)\in G$. The first tells you that for each $y\in C$ there is at most one $x\in D$ with $(x,y)\in G$, and the second that there is at least one, so there is exactly one: $\forall y\in C,\exists !x \in D, (x,y)\in G$. This is exactly the defining property of the graph of a function. In fact, it means that $(C, D, \{(y,x)\mid (x,y)\in G\})$ is a function. We'll call it $f^{-1}$.
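For a finite function, this graph-flipping construction can be carried out literally. The following toy sketch (my illustration; the helper name `invert_graph` is hypothetical) represents a graph as a Python dict and flips its pairs, failing exactly when $f$ is not injective:

```python
# Toy sketch: the inverse of a finite bijection is the flipped graph
# {(y, x) : (x, y) in G}.

def invert_graph(graph):
    """Flip each pair (x, y) -> (y, x); raises if f is not injective."""
    inverse = {}
    for x, y in graph.items():
        if y in inverse:
            raise ValueError(f"not injective: {inverse[y]} and {x} both map to {y}")
        inverse[y] = x
    return inverse

f = {1: 'a', 2: 'b', 3: 'c'}   # graph of a bijection {1,2,3} -> {a,b,c}
print(invert_graph(f))          # {'a': 1, 'b': 2, 'c': 3}
```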
This means that given a bijective function $f$, we already know $f^{-1}$. But since we tend to dislike writing infinite sets, we prefer giving a function by its domain $D$, its codomain $C$ (which we denote by $f:D\to C$) and then a formula that tells you how to find $f(x)$ in terms of $x$. But not all functions can be expressed that way. For example, you write $f:\begin{array}{l}\Bbb R_+\to \Bbb R_+\\x\mapsto \sqrt{x}\end{array}$ . But how do you define $\sqrt{x}$? Well in fact, you started with $g:\begin{array}{l}\Bbb R_+\to \Bbb R_+\\x\mapsto x^2\end{array}$, showed it was bijective so $g^{-1}$ existed. And then, since it looked like a really important function, we decided to give it its own symbol and write $\sqrt{x}$ instead of $g^{-1}(x)$.
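The "solve $y=f(x)$ for $x$" method from the question can also be done symbolically. A hedged sketch, assuming SymPy is available, applied to $g(x)=x^2$ on the nonnegative reals:

```python
# Sketch: invert g(x) = x^2 on the nonnegative reals by solving y = x^2 for x.
import sympy as sp

# Declaring the symbols nonnegative restricts to the domain where g is bijective,
# so the negative root is discarded.
x, y = sp.symbols('x y', nonnegative=True)
solutions = sp.solve(sp.Eq(y, x**2), x)
print(solutions)   # the inverse g^{-1}(y) = sqrt(y)
```

Without the `nonnegative=True` assumption, both roots $\pm\sqrt{y}$ come back, which mirrors the point above: $x\mapsto x^2$ is only invertible once the domain is restricted.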
In some cases, if you express $f$ in terms of already known functions, $f^{-1}$ will also be expressible in terms of already known functions, but this is very rare. So when we find an interesting inverse function, we give it its own name. Examples are $\ln =\exp^{-1}$, $\sqrt{\cdot}=(\cdot^2)^{-1}$ and the inverse trigonometric functions ( http://en.wikipedia.org/wiki/Inverse_trigonometric_functions ).