Sometimes the notation $\operatorname{Hom}(G,H)$ is used to denote the vector space of all linear maps from $G$ to $H$. Does this have something to do with homomorphisms? Can someone comment on the role of homomorphisms here?
Definition: A linear homomorphism is a linear map between vector spaces.
Definition: $\operatorname{Hom}(V,W)$, where $V$ and $W$ are two $\Bbb R$-vector spaces, is the collection of all linear homomorphisms from $V$ to $W$. With the usual notions of addition and scalar multiplication of functions $\operatorname{Hom}(V,W)$ is itself an $\Bbb R$-vector space.
Definition: The dual space to an $\Bbb R$-vector space $V$ is defined as $V^* = \operatorname{Hom}(V,\Bbb R)$.
When $G,H$ are both real vector spaces, why can't we define the dual space $G^*$ as the space of linear maps $G \rightarrow H$?
I don't see how you think this should work. For instance, how would you choose the correct $H$? The actual definition of the dual space is given above.
I did not understand the purpose of defining another vector space of maps from the original vector space to the real numbers. One purpose I can guess at is handling non-Cartesian coordinate systems. We can write a vector as $\overline{a} = a_{i}e^i = a^je_j$, where $e^i$ are the dual basis vectors and $e_j$ the basis vectors. Is this the purpose of the dual vector space?
One immediate application of the dual space is indeed that it provides a nice way to expand vectors in a non-orthogonal basis.
Let $V$ be an $n$-dimensional $\Bbb R$-vector space. If $\{e_1, \dots, e_n\}\subset V$ is some (not necessarily orthogonal) basis for $V$ and $\{f^1, \dots, f^n\}\subset V^*$ is its dual basis, then for any vector $v\in V$ we have $$v = \langle v, f^1\rangle e_1 + \cdots + \langle v, f^n\rangle e_n$$ where $\langle v, f^i\rangle := f^i(v)$.
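As a small numerical sketch of this expansion (using NumPy; the particular basis vectors below are my own illustrative choice): if the $e_i$ are the columns of a matrix $E$, then the dual basis functionals $f^i$ are exactly the rows of $E^{-1}$, since $f^i(e_j) = (E^{-1}E)_{ij} = \delta_{ij}$.

```python
import numpy as np

# A non-orthogonal basis of R^2, written as the columns of E:
# e_1 = (1, 0), e_2 = (1, 1).
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# The dual basis functionals f^i are the rows of E^{-1},
# because then f^i(e_j) = (E^{-1} E)_{ij} = delta_{ij}.
F = np.linalg.inv(E)

v = np.array([3.0, 2.0])

# The coefficients <v, f^i> = f^i(v):
coeffs = F @ v               # array([1., 2.])

# Reconstruct v = <v, f^1> e_1 + <v, f^2> e_2.
v_reconstructed = E @ coeffs
assert np.allclose(v_reconstructed, v)
```

For an orthonormal basis $E^{-1} = E^T$ and the $f^i$ reduce to ordinary dot products with the $e_i$; the inverse is only really needed when the basis is skew, which is the point of the construction.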
Be careful though. It's not true that $a_{i}e^i = a^je_j$: the LHS and RHS are two different types of objects, living in two different vector spaces. Physicists often write this, but strictly speaking $V$ and $V^*$ are always distinct spaces.
The main point is to realise that linear combinations can by definition only have finitely many nonzero coefficients. They must be defined this way, because in pure linear algebra there is no way to take the sum of infinitely many nonzero vectors (this cannot be defined by repeated addition: one never reaches the goal). In analysis some (convergent) infinite sums can be defined using limits, but linear algebra does not have the tools to make this possible.
Then if you have a set of functions like $\def\A{\mathcal A}\A^*$, each of which is zero on all basis vectors in $\A$ except one, any linear combination of them will produce a function that is zero on all basis vectors in $\A$ except for a finite number of them. When $\A$ is infinite, this is insufficient to produce all functions on the set $\A$, and therefore all linear functions on the vector space$~V$ of which $\A$ is a basis.
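A plain-Python sketch of this obstruction (all names here are my own invented example): model vectors with countable basis $\{e_0, e_1, \dots\}$ as finite-support coefficient dicts. Every finite linear combination of the dual "basis" functionals $f_i$ vanishes on all but finitely many $e_n$, so a functional such as "sum of all coordinates" is linear yet lies outside their span.

```python
# Vectors with basis {e_0, e_1, ...} as finite-support dicts: index -> coefficient.
def e(n):
    return {n: 1.0}

# Dual "basis" functional f_i: reads off the i-th coordinate.
def f(i):
    return lambda v: v.get(i, 0.0)

# A finite linear combination sum_i c_i f_i (coeffs has finite support).
def lin_comb(coeffs):
    return lambda v: sum(c * f(i)(v) for i, c in coeffs.items())

# "Sum of all coordinates" is a perfectly good linear functional...
total = lambda v: sum(v.values())

# ...but any finite combination of the f_i kills e_n for all large n,
# while total(e_n) = 1 for every n -- so total is not in the span.
g = lin_comb({0: 2.0, 5: -1.0})
print([g(e(n)) for n in range(7)])      # [2.0, 0.0, 0.0, 0.0, 0.0, -1.0, 0.0]
print([total(e(n)) for n in range(7)])  # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```

This is exactly the statement above: the $f_i$ span only the functionals of "finite support" on the basis, which is a proper subspace of $V^*$ once the basis is infinite.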
Best Answer
In the abstract vector space case, where "dual space" is the algebraic dual (the vector space of all linear functionals), a vector space is isomorphic to its (algebraic) dual if and only if it is finite dimensional.
Bill Dubuque gives a nice argument in a sci.math post (see Google Groups or MathForum):
If $\mathbf{V}$ is an infinite dimensional vector space over $\mathbf{F}$ of dimension $d$, then the cardinality of $\mathbf{V}$ as a set is equal to $d|\mathbf{F}|=\max\{d,|\mathbf{F}|\}$, and $\mathbf{V}$ is isomorphic to $\mathbf{F}^{(d)}$ (functions from a set of cardinality $d$ to $\mathbf{F}$ with finite support), and the dual $\mathbf{V}^*$ is isomorphic to $\mathbf{F}^d$ (all functions from a set of cardinality $d$ to $\mathbf{F}$), so $|\mathbf{V}^*| = |\mathbf{F}|^d$.
If the dimension of $\mathbf{V}^*$ is $d'$, we want to show that $d'\gt d$. Note that, as with $\mathbf{V}$, we have $|\mathbf{V}^*|=d'|\mathbf{F}| = \max\{d',|\mathbf{F}|\}$.
Now let $\{\mathbf{e}_n\}$ be a countable linearly independent subset of $\mathbf{V}$, and extend it to a basis. For each $c\in \mathbf{F}$, $c\neq 0$, define $\mathbf{f}_c\colon \mathbf{V}\to\mathbf{F}$ by $\mathbf{f}_c(\mathbf{e}_n) = c^n$, and by making $\mathbf{f}_c$ equal to $0$ on the rest of the basis. The set of all $\mathbf{f}_c$, $c\neq 0$, is linearly independent (restricted to $\mathbf{e}_1,\dots,\mathbf{e}_n$, the values of finitely many distinct $\mathbf{f}_c$ form the rows of an invertible Vandermonde-type matrix), so we can conclude that the dimension of $\mathbf{V}^*$ must be at least $|\mathbf{F}|$ (when $\mathbf{F}$ is finite this family is finite, but then we already know the dimension is at least $d\gt |\mathbf{F}|$).
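The independence of the $\mathbf{f}_c$ reduces to a Vandermonde-style determinant: restricted to $\mathbf{e}_1,\dots,\mathbf{e}_n$, the values of $\mathbf{f}_{c_1},\dots,\mathbf{f}_{c_n}$ form the matrix $M_{kj} = c_k^{\,j}$, whose determinant $\bigl(\prod_k c_k\bigr)\prod_{i<j}(c_j-c_i)$ is nonzero for distinct nonzero $c_k$. A quick exact-arithmetic check of one instance (a sketch using Python's `fractions`; the sample values $c_k$ are my own choice):

```python
from fractions import Fraction

def det(M):
    # Cofactor expansion along the first row (fine for small exact matrices).
    n = len(M)
    if n == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

# Distinct nonzero scalars c_k, and the matrix M[k][j] = c_k^(j+1),
# i.e. the values f_{c_k}(e_{j+1}) on the first n basis vectors.
cs = [Fraction(1), Fraction(2), Fraction(3), Fraction(1, 2)]
n = len(cs)
M = [[c ** (j + 1) for j in range(n)] for c in cs]

print(det(M) != 0)  # True: the rows, hence these f_c, are linearly independent
```

Since this works for every finite subfamily, no nontrivial linear combination of the $\mathbf{f}_c$ can vanish on all the $\mathbf{e}_n$, which is the independence claim used above.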
That means that $$|\mathbf{V}^*| = d'|\mathbf{F}| = \max\{d',|\mathbf{F}|\} = d'.$$
But we also know that $|\mathbf{V}^*| = |\mathbf{F}|^d$. Since $|\mathbf{F}|\geq 2$, Cantor's theorem gives $d< |\mathbf{F}|^{d}$, so $d' = |\mathbf{F}|^d\gt d$, proving that the dimension of $\mathbf{V}^*$ is strictly larger (in the sense of cardinality) than that of $\mathbf{V}$.
The isomorphism in the finite dimensional case is standard.
So for the algebraic dual, there is never an isomorphism in the infinite dimensional case.
In the Hilbert space case (or in a Banach space, or more generally a topological vector space), one usually restricts to the continuous (or bounded) functionals, so that $\mathbf{V}^*$ denotes the bounded functionals rather than all linear functionals. In that case, some spaces are topological-vector-space isomorphic to their double duals, and some are not, as AD shows in his answer. The ones that are isomorphic (via the canonical map) are important enough to get their own name: reflexive. Hilbert spaces are always reflexive, and there are other classes of topological vector spaces that are always reflexive (see Wikipedia's page on reflexive spaces).