An inner product is an additional structure on a vector space. It is true that every real or complex vector space can be given an inner product, but the same vector space can carry many different inner products. For instance, on $\mathbb{R}^2$ one has the "usual" inner product $\langle a,b \rangle \cdot \langle c,d \rangle= ac+bd$. But if you change coordinates, replacing the "standard basis" $\langle 1,0 \rangle$, $\langle 0,1 \rangle$ by some other basis, then you may get other inner products.
The additional structure that an inner product gives to a vector space is geometric in nature. First, the inner product gives a way of measuring lengths of vectors, using the formula
$$\operatorname{Length}(v) = \sqrt{v \cdot v}$$
Second, the inner product gives a way of measuring the angle between two vectors, using the law of cosines:
$$\operatorname{Angle}(v,w) = \cos^{-1}\left(\frac{v \cdot w}{\operatorname{Length}(v)\,\operatorname{Length}(w)}\right)$$
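To make these two formulas concrete, here is a minimal numerical sketch with NumPy (the helper names `length` and `angle` are just illustrative):

```python
import numpy as np

def length(v):
    # Length(v) = sqrt(v . v)
    return np.sqrt(np.dot(v, v))

def angle(v, w):
    # Angle(v, w) = arccos((v . w) / (Length(v) Length(w)))
    return np.arccos(np.dot(v, w) / (length(v) * length(w)))

v = np.array([1.0, 0.0])
w = np.array([1.0, 1.0])
print(length(w))                 # sqrt(2) ~ 1.4142
print(np.degrees(angle(v, w)))   # 45.0 degrees
```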
If you change inner products on the same vector space, then you may get two different angle measurements, two different length measurements, two different notions of "circles", etc. Just as an example, if you define an inner product on $\mathbb{R}^2$ by declaring the basis $\langle 2,0 \rangle$, $\langle 0,1 \rangle$ to be orthonormal, then the "circles" in this geometry are ellipses whose major axis, along the $x$-axis, is twice as long (in the ordinary Euclidean sense) as their minor axis along the $y$-axis.
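To see this concretely: declaring the columns of a matrix $B$ orthonormal amounts to using the inner product $\langle x, y\rangle_B = x^\top (BB^\top)^{-1} y$. Here is a small sketch of that (the names `B`, `G`, and `ip` are just illustrative):

```python
import numpy as np

# Basis declared orthonormal: columns (2,0) and (0,1).
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])
G = np.linalg.inv(B @ B.T)   # Gram matrix of the new inner product

def ip(x, y):
    # <x, y>_B = x^T G y
    return x @ G @ y

# Both basis vectors have length 1 and are orthogonal in this geometry:
print(ip(B[:, 0], B[:, 0]))   # 1.0
print(ip(B[:, 1], B[:, 1]))   # 1.0
print(ip(B[:, 0], B[:, 1]))   # 0.0
# So the "unit circle" x^2/4 + y^2 = 1 passes through (+-2, 0) and (0, +-1):
# an ellipse twice as wide as it is tall.
```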
Inner products arise in a variety of areas of mathematics. In particular, vector spaces with an inner product defined on them are usually called $\textit{inner product spaces}$, and these are very important when studying functional analysis. For vectors in $\mathbb{R}^{n}$, for example, an inner product can be defined as follows: if $x=(x_{1},\dots,x_{n}), y=(y_{1},\dots,y_{n})\in \mathbb{R}^{n}$ we set $$\langle x,y\rangle := \sum_{i=1}^{n}x_{i}y_{i}$$
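As a quick sanity check, the sum above is exactly what NumPy's `dot` computes (a minimal sketch; `inner` is just an illustrative name):

```python
import numpy as np

def inner(x, y):
    # <x, y> = sum_i x_i * y_i
    return sum(xi * yi for xi, yi in zip(x, y))

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, -1.0, 0.5])
print(inner(x, y))    # 3.5
print(np.dot(x, y))   # 3.5, the same value
```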
Note that this is not the only possible inner product on $\mathbb{R}^{n}$; a vector space may have more than one inner product, as the $\mathbb{R}^2$ example above shows.
Inner product spaces share a lot of "good properties", and one of the most important is orthogonality, which is used widely not just in mathematics but also in physics and other applied sciences.
Another very important (and interesting) property is that once you have an inner product on a vector space, you can readily define a $\textit{norm}$ from it. For instance, we can define a norm $\|\cdot\|$ on $\mathbb{R}^{n}$ by setting $$\|x\| := \sqrt{\langle x,x\rangle}= \sqrt{\sum_{i=1}^{n}x_{i}^{2}}$$
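A minimal sketch of this induced norm, checked against NumPy's built-in Euclidean norm (`norm_from_ip` is just an illustrative name):

```python
import numpy as np

def norm_from_ip(x):
    # ||x|| = sqrt(<x, x>)
    return np.sqrt(np.dot(x, x))

x = np.array([3.0, 4.0])
print(norm_from_ip(x))     # 5.0
print(np.linalg.norm(x))   # 5.0, the same value
```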
This definition works in general, even when we are dealing with more abstract spaces. Vector spaces which have a norm defined on them are called $\textit{normed spaces}$. These spaces are studied in functional analysis courses as well.
It is worth mentioning that Hilbert and Banach spaces (you may also have heard about these) are special cases of inner product spaces and normed spaces, respectively: namely, the complete ones.
As an application, inner products (and Hilbert spaces in particular) are very important in quantum mechanics, for example. In short, this is because physically distinguishable states of a system are represented by orthogonal vectors, so the notion of orthogonality plays an important role in the theory.
As for the utility of inner product spaces: they're vector spaces where notions like the length of a vector and the angle between two vectors are available. In this way, they generalize $\mathbb R^n$ while preserving some of the additional structure that comes on top of it being a vector space. Familiar friends like the Cauchy-Schwarz inequality, the parallelogram law, and orthogonality all work in inner product spaces.
(Note that there is a more general class of spaces, normed spaces, where a notion of length always makes sense but an inner product cannot necessarily be defined.)
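A small numerical sketch of the two identities just mentioned, using random vectors and the standard dot product (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(5)
w = rng.standard_normal(5)

# Cauchy-Schwarz: |<v, w>| <= ||v|| ||w||
lhs = abs(np.dot(v, w))
rhs = np.linalg.norm(v) * np.linalg.norm(w)
print(lhs <= rhs)               # True

# Parallelogram law: ||v+w||^2 + ||v-w||^2 = 2||v||^2 + 2||w||^2
left = np.linalg.norm(v + w)**2 + np.linalg.norm(v - w)**2
right = 2 * np.linalg.norm(v)**2 + 2 * np.linalg.norm(w)**2
print(np.isclose(left, right))  # True
```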
The dot product is the standard inner product on $\mathbb R^n$. In general, any symmetric, positive definite matrix will give you an inner product on $\mathbb R^n$ (on $\mathbb C^n$, take a Hermitian positive definite matrix; a sketch of the matrix case appears after the displayed formula below). And you can have inner products on infinite-dimensional vector spaces, like
$$\langle f, g \rangle = \int_a^b f(x)\,\overline{g(x)} \, dx$$
for $f, g$ square-integrable functions on $[a,b]$.
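Here is a hedged sketch of the matrix-induced inner product on $\mathbb{R}^n$; the matrix `A` below is one arbitrary symmetric positive definite choice, built as $M^\top M + I$ to guarantee positive definiteness:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
A = M.T @ M + np.eye(3)   # symmetric positive definite by construction

def ip_A(x, y):
    # <x, y>_A = x^T A y
    return x @ A @ y

x = rng.standard_normal(3)
y = rng.standard_normal(3)
print(np.isclose(ip_A(x, y), ip_A(y, x)))  # True: symmetry
print(ip_A(x, x) > 0)                      # True: positive definiteness
```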
This integral inner product becomes useful, for example, in applications like Fourier series, where you want an orthonormal basis of functions for some function space (and it's not just the trigonometric functions that work).
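For instance, under this inner product on $[0, 2\pi]$ (real-valued functions, so the conjugate is harmless), $\sin(mx)$ and $\sin(nx)$ are orthogonal for $m \neq n$. A quick quadrature check, with `scipy.integrate.quad` doing the integral (`ip_L2` is just an illustrative name):

```python
import numpy as np
from scipy.integrate import quad

def ip_L2(f, g, a=0.0, b=2 * np.pi):
    # <f, g> = integral from a to b of f(x) g(x) dx   (real-valued case)
    val, _ = quad(lambda x: f(x) * g(x), a, b)
    return val

f = lambda x: np.sin(x)       # sin(1x)
g = lambda x: np.sin(2 * x)   # sin(2x)

print(ip_L2(f, g))   # ~0: orthogonal
print(ip_L2(f, f))   # ~pi: ||sin||^2 on [0, 2pi]
```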