The reason that spaces of square-integrable functions arose in the first place was to study orthogonal trigonometric (Fourier) series. Interestingly, Parseval had already noted in 1799 the equality that now bears his name:
$$
\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)^{2}\,dx = \frac{1}{2}a_{0}^{2}+\sum_{n=1}^{\infty}\left(a_{n}^{2}+b_{n}^{2}\right),
$$
where $a_{n}$, $b_{n}$ are the (Fourier) coefficients
$$
a_{n}=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos(nx)\,dx,\;\;\; b_{n}=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin(nx)\,dx.
$$
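The identity is easy to check numerically. Here is a short sketch (my own illustration, not part of the original answer) using the sample choice $f(x) = x$ on $[-\pi, \pi]$, for which both sides equal $2\pi^2/3$; the grid size and the number of modes kept are arbitrary choices:

```python
import numpy as np

# Numerical check of Parseval's identity for the sample f(x) = x:
#   (1/pi) int f^2 = a_0^2/2 + sum_n (a_n^2 + b_n^2)
x = np.linspace(-np.pi, np.pi, 20001)

def integrate(y):
    # trapezoidal rule on the fixed grid
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

f = x                                       # f(x) = x
N = 500                                     # number of Fourier modes kept
a = np.array([integrate(f * np.cos(n * x)) / np.pi for n in range(N + 1)])
b = np.array([integrate(f * np.sin(n * x)) / np.pi for n in range(N + 1)])

lhs = integrate(f ** 2) / np.pi             # = 2*pi^2/3 for this f
rhs = a[0] ** 2 / 2 + np.sum(a[1:] ** 2 + b[1:] ** 2)
print(round(lhs, 3), round(rhs, 3))         # agree up to the truncated tail
```

The two sides differ only by the tail $\sum_{n > N} b_n^2$, which for this $f$ is of order $4/N$.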
This comes out of the orthogonality conditions for the $\sin(nx)$, $\cos(nx)$ terms in the Fourier series. No definite connection was seen between Euclidean N-space and the above at that time; such a connection took decades to evolve. But square-integrable functions gained interest in the early 19th century, especially after the work of Fourier.
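The orthogonality relations themselves can be verified directly; a minimal numerical sketch (the grid size and mode range are my choices, not from the answer):

```python
import numpy as np

# Check the orthogonality relations behind the Fourier coefficient formulas:
#   (1/pi) * int_{-pi}^{pi} cos(mx)cos(nx) dx = 1 if m == n >= 1, else 0
#   (1/pi) * int_{-pi}^{pi} sin(mx)cos(nx) dx = 0 for all m, n
x = np.linspace(-np.pi, np.pi, 20001)

def integrate(y):
    # trapezoidal rule on the fixed grid
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

cos_cos = {}
sin_cos = {}
for m in range(1, 4):
    for n in range(1, 4):
        cos_cos[m, n] = integrate(np.cos(m * x) * np.cos(n * x)) / np.pi
        sin_cos[m, n] = integrate(np.sin(m * x) * np.cos(n * x)) / np.pi
        print(m, n, round(cos_cos[m, n], 6))  # close to 1 iff m == n
```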
It took some time for a general Cauchy-Schwarz inequality to appear, and for a connection with geometry to emerge, eventually leading to the inner-product-space abstraction for the space of square-integrable functions. The Cauchy-Schwarz inequality was not widely known until after the 1883 publication of Schwarz, even though essentially the same result had been published by Bunyakovsky in 1859. Hilbert proposed his $l^{2}$ space by the early 20th century as an abstraction of the space of square-summable Fourier coefficients, but also as an abstraction of finite-dimensional Euclidean space. By then the connection with square-integrable functions was already firmly established.
In hindsight we can see good reasons that square-integrable functions are connected with energy and other physical concepts, but the abstraction seems to have been driven more by the need to solve equations using orthogonality conditions. Of course, many of those equations arose from physical problems, so it is hard to separate the two motivations. Only after the fact was the integral of the square of a function given a physical interpretation. On the other hand, the mathematical abstraction of treating functions as points in a space, equipped with a distance and a geometry, has been even more far-reaching, and it supplied a great part of the impetus for modern abstract and rigorous mathematics.
Note: All of this happened before Quantum Mechanics.
Reference: J. Dieudonné, "History of Functional Analysis".
In this answer, I will write $x_n$ for a sequence in $l^2$ and $x_n(k)$ for the $k$-th entry of that sequence.
The norm in the Hilbert space is given by $\|x\| = \sqrt{\langle x, x \rangle}$. We wish to show that if a sequence $\{ x_n \} \subset l^2$ is Cauchy, then it converges in $l^2$.
Suppose that $\{x_n\}$ is such a Cauchy sequence. Let $\{ e_k \}$ be the collection of sequences for which $e_k(i) = 1$ if $i=k$ and zero if $i\neq k$.
Then $\langle x_n, e_k \rangle = x_n(k)$. Notice that $$|x_n(k) - x_m(k)| = |\langle x_n - x_m, e_k \rangle| \le \|x_n-x_m\| \| e_k\| = \|x_n-x_m\|$$ for all $k$, and that this bound is uniform in $k$. Therefore, for each $k$, the sequence of real numbers $\{x_n(k)\}_{n\in \mathbb{N}}$ is Cauchy and thus converges; call its limit $\tilde x(k)$. Letting $m \to \infty$ in the bound above shows that $x_n(k) \to \tilde x(k)$ uniformly in $k$.
Let $\tilde x = (\tilde x(k))_{k\in\mathbb{N}}$. We wish to show that $\tilde x \in l^2$.
Consider $$\sum_{k=1}^\infty |\tilde x(k)|^2=\sum_{k=1}^\infty |\lim_{n\to\infty} x_n(k)|^2=\lim_{n\to\infty} \sum_{k=1}^\infty |x_n(k)|^2=\lim_{n\to\infty}\|x_n\|^2.$$
The exchange of limits is justified since the convergence $x_n(k) \to \tilde x(k)$ is uniform in $k$. Finally, since $\{ x_n \}$ is Cauchy, the reverse triangle inequality $$| \|x_m\| - \|x_n\| | \le \| x_m - x_n\|$$ implies that $\{\|x_n\|\}$ is a Cauchy sequence of real numbers, and so $\|x_n\|$ converges. Thus $\tilde x$ is in $l^2$.
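A toy numerical illustration of this first half (the particular sequence is my choice, not the answer's): the truncations $x_n$ of a fixed square-summable sequence form a Cauchy sequence in $l^2$, converge coordinatewise, and their coordinatewise limit has finite $l^2$ norm.

```python
import math

# Take x_tilde(k) = 1/(k+1), which is square-summable, and let x_n be its
# truncation to the first n coordinates. Then {x_n} is Cauchy in l^2,
# x_n(k) -> x_tilde(k) for each k, and the coordinatewise limit is in l^2.

def x_tilde(k):
    return 1.0 / (k + 1)

def diff_norm_sq(n, m):
    # ||x_n - x_m||^2 for n < m: only coordinates n..m-1 differ
    return sum(x_tilde(k) ** 2 for k in range(n, m))

# Tails of sum 1/(k+1)^2 shrink, so the truncations form a Cauchy sequence:
print(diff_norm_sq(10, 1000) > diff_norm_sq(1000, 100000))  # True
# ||x_tilde||^2 = pi^2/6 < infinity, so the limit lies in l^2:
norm_sq = sum(x_tilde(k) ** 2 for k in range(10 ** 6))
print(abs(norm_sq - math.pi ** 2 / 6) < 1e-3)  # True
```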
Edit: Completing the proof as per the comments.
We have thus shown that $\tilde x$ lies in $l^2$; it is the natural candidate for the limit of our Cauchy sequence, and it has been demonstrated to be in our space. What remains is to show that $$\| x_n - \tilde x\| \to 0$$ as $n \to \infty$.
We will use a generalized form of the dominated convergence theorem for series. It states that if $a_{n,k} \to b_k$ for all $k$, $|a_{n,k}| \le d_{n,k}$, $d_{n,k} \to D_k$ for all $k$, and $\sum_{k} d_{n,k} \to \sum_{k} D_k < \infty$, then $\lim_{n \to \infty} \sum_{k=0}^\infty a_{n,k} = \sum_{k=0}^\infty b_k$. (Here $a_{n,k}$, $b_k$, $d_{n,k}$, $D_{k}$ are all real numbers.)
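A minimal numerical sketch of the statement, using sequences of my own choosing (not from the answer):

```python
# Toy instance of the series form of dominated convergence:
#   a_{n,k} = (n/(n+1)) * 2^{-k}  ->  b_k = 2^{-k},
#   dominated by d_{n,k} = D_k = 2^{-k}, with sum_k D_k = 2 < infinity.
K = 60  # truncation level; 2^{-k} is below double precision beyond this

def sum_over_k(n):
    # sum_k a_{n,k} for a fixed n
    return sum((n / (n + 1)) * 2.0 ** (-k) for k in range(K))

limit_of_sums = sum_over_k(10 ** 6)                  # ~ lim_n sum_k a_{n,k}
sum_of_limits = sum(2.0 ** (-k) for k in range(K))   # sum_k b_k = 2
print(limit_of_sums, sum_of_limits)  # both close to 2
```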
Write $$\| x_n - \tilde x\|^2 = \sum_{k=0}^\infty |x_n(k) - \tilde x(k)|^2.$$ In this case $a_{n,k} = |x_{n}(k) - \tilde x(k)|^2$ and $b_k = 0$, so we must find a $d_{n,k}$ that "dominates" $a_{n,k}$ to finish the proof.
Now note that $|x_n(k) - \tilde x(k)|^2 \le 2 |x_n(k)|^2 + 2 |\tilde x(k)|^2$, and $$\lim_{n \to \infty} \sum_{k=0}^\infty ( 2 |x_n(k)|^2 + 2 |\tilde x(k)|^2) = \sum_{k=0}^\infty (2 |\tilde x(k)|^2 + 2 | \tilde x(k)|^2).$$ Recall that we demonstrated $\lim_{n \to \infty} \sum_{k=0}^\infty |x_n(k)|^2 = \sum_{k=0}^\infty |\tilde x(k)|^2$ in the first half. Thus $d_{n,k} = 2 |x_n(k)|^2 + 2 |\tilde x(k)|^2$ works, and the role of $D_k$ is played by $4|\tilde x(k)|^2$.
Thus, by the generalized dominated convergence theorem, we may conclude that $$\sum_{k=0}^\infty |x_n(k)-\tilde x(k)|^2 \to 0$$ as $n \to \infty$, which completes the proof.
Best Answer
The definition for the inner product should be $$\langle g, h \rangle = \int_{\Omega}g(x)\overline{h(x)}\,dx.$$ Also note that $g, h \in L^2(\Omega) \implies g\overline{h} \in L^1(\Omega)$ by Hölder's inequality, so the inner product is well defined. Condition (a)(iii) should be $\langle g, h \rangle = \overline{\langle h, g \rangle}$.
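A quick numerical sanity check of well-definedness and of property (a)(iii), with sample functions of my own choosing on $\Omega = [0, 1]$:

```python
import numpy as np

# Check |<g, h>| <= ||g||_2 ||h||_2 (Cauchy-Schwarz, the p = q = 2 case of
# Hölder) and <g, h> = conj(<h, g>) for two sample functions on [0, 1].
x = np.linspace(0.0, 1.0, 20001)
g = np.exp(2j * np.pi * x)                 # complex-valued g in L^2([0,1])
h = x + 1j * np.sin(3.0 * x)

def integrate(y):
    # trapezoidal rule on the fixed grid
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

inner_gh = integrate(g * np.conj(h))
inner_hg = integrate(h * np.conj(g))
norm_g = np.sqrt(integrate(np.abs(g) ** 2).real)   # = 1 for this g
norm_h = np.sqrt(integrate(np.abs(h) ** 2).real)

print(abs(inner_gh) <= norm_g * norm_h)            # True
print(np.isclose(inner_gh, np.conj(inner_hg)))     # True
```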
Let $g \in L^2(\Omega)$. We have $$\langle g, g \rangle = \int_{\Omega}g(x)\overline{g(x)}\,dx = \int_{\Omega}|g(x)|^2\,dx.$$ It is a classic exercise that if $f \geq 0$ is measurable, then $\int_{\Omega}f(x)\,dx = 0$ if and only if $f = 0$ almost everywhere (this holds for any measure space $\Omega$). Thus $\langle g, g \rangle = 0$ if and only if $g = 0$, since by definition, $f_1 = f_2$ in $L^2(\Omega)$ if $f_1 = f_2$ almost everywhere. Property (a)(ii) follows from the linearity of the integral on $L^1(\Omega)$. Property (a)(iii) follows from the identity $\overline{\int_{\Omega}f(x)\,dx} = \int_{\Omega}\overline{f(x)}\,dx$, which itself follows from the linearity of the integral on $L^1(\Omega)$ (write $f = g + ih$).
(b) is not difficult to prove if you have the monotone convergence and dominated convergence theorems. A proof is given in "Real Analysis" by Folland on page 183 (Theorem 6.6). The idea is to use the fact that a normed vector space $V$ is complete if and only if every absolutely convergent series converges, i.e. if $v_1, v_2, \dots \in V$ and $\sum_{j = 1}^{\infty}\lVert v_j \rVert < \infty$, then $\sum_{j = 1}^{\infty}v_j = v$ for some $v \in V$.
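For completeness, here is a sketch of why the series criterion implies completeness (a standard argument; the particular subsequence choice is mine, not Folland's exact wording). Given a Cauchy sequence $\{f_n\}$ in $V$, choose a subsequence with $\lVert f_{n_{j+1}} - f_{n_j} \rVert < 2^{-j}$. The telescoping series
$$f_{n_1} + \sum_{j=1}^{\infty}\left(f_{n_{j+1}} - f_{n_j}\right)$$
is absolutely convergent, since $\sum_{j}\lVert f_{n_{j+1}} - f_{n_j} \rVert \le \sum_{j} 2^{-j} < \infty$, so by hypothesis it converges to some $f \in V$. Its $J$-th partial sum is exactly $f_{n_J}$, so $f_{n_J} \to f$; and a Cauchy sequence with a convergent subsequence converges to the same limit, so $f_n \to f$ and $V$ is complete.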