An inner product is indeed a certain bilinear map
$$\langle \cdot, \cdot \rangle: V \times V \longrightarrow \Bbb R$$
which takes two vectors as its arguments. If you're thinking in terms of a metric tensor $g$, then $g$ is a type $(0,2)$-tensor, which means it has two vector arguments (no covector arguments).
What you're thinking of is the fact that a choice of inner product $\langle \cdot, \cdot \rangle$ on a vector space $V$ gives an isomorphism
$$\varphi: V \longrightarrow V^\ast,$$
$$v \mapsto \langle v, \cdot \rangle.$$
In this sense we can identify the inner product as the action of a covector on a vector:
$$\langle u, v \rangle = \varphi(u)(v).$$
In terms of components of the metric tensor, this is equivalent to
$$\langle u, v \rangle = g_{ij} u^i v^j = u_j v^j.$$
An element of $V^\ast$ acts on $V$ in the same way no matter what basis we choose, but there is no canonical identification of $V$ with $V^\ast$. Without a metric, the lowered index components $u_i$ of a vector $u$ make no sense.
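As a concrete numeric sketch of how the metric lowers an index: the matrix $g$ and the components below are arbitrary illustrative choices (any symmetric positive-definite $g$ would do), not anything fixed by the discussion above.

```python
# Illustrative metric: an arbitrary symmetric positive-definite 2x2 matrix.
g = [[2.0, 1.0],
     [1.0, 3.0]]

u = [1.0, 2.0]   # contravariant components u^i
v = [4.0, -1.0]  # contravariant components v^j

# Lowered components u_j = g_{ij} u^i
u_lower = [sum(g[i][j] * u[i] for i in range(2)) for j in range(2)]

# <u, v> = g_{ij} u^i v^j, computed directly and via the lowered components
inner_direct = sum(g[i][j] * u[i] * v[j] for i in range(2) for j in range(2))
inner_lowered = sum(u_lower[j] * v[j] for j in range(2))

print(inner_direct, inner_lowered)  # the two computations agree
```

Changing $g$ changes the values $u_j$, which is the point: without a metric there is no preferred way to produce lowered components.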
Re: "[the dot product] seems almost useless to me compared with the cross product of two vectors."
Please see the Wikipedia entry for Dot Product to learn more about the significance of the dot product, including graphics that help visualize what it signifies (particularly the geometric interpretation) and more about how it's used. For example, scroll down to the "Physics" section of the linked entry for some of its uses:
Mechanical work is the dot product of force and displacement vectors.
Magnetic flux is the dot product of the magnetic field and the area vectors.
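The first of these uses can be checked with a toy computation; the force and displacement values below are made up purely for illustration.

```python
# Illustrative numbers (not from the text): a constant force and a
# straight-line displacement in 2D.
force = [3.0, 4.0]         # newtons
displacement = [2.0, 0.0]  # metres

# Work W = F . d, the sum of products of corresponding components
work = sum(f * d for f, d in zip(force, displacement))
print(work)  # 6.0 joules: only the component of F along d contributes
```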
You've shared the algebraic definition of the dot product: how to compute it as the sum of the products of corresponding entries of two vectors: essentially, computing $\;\mathbf A \cdot \mathbf B = {\mathbf A}^T{\mathbf B}\;$ for column vectors $\mathbf A, \mathbf B$.
But the dot product also has an equivalent geometric definition:
In Euclidean space, a Euclidean vector is a geometrical object that possesses both a magnitude and a direction. A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction the arrow points. The magnitude of a vector A is denoted by $\|\mathbf{A}\|.$ The dot product of two Euclidean vectors A and B is defined by
$$\mathbf A\cdot\mathbf B = \|\mathbf A\|\,\|\mathbf B\|\cos\theta,\quad\text{where $\theta$ is the angle between $A$ and $B.$} \tag{1}$$
With $(1)$, for example, we can determine the angle between two vectors, given their coordinates: $$\cos \theta =
\frac{\mathbf A\cdot\mathbf B}{\|\mathbf A\|\,\|\mathbf B\|}.$$
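A quick numeric check of this formula, using coordinates chosen (as an illustrative assumption) so that the answer is a familiar angle:

```python
import math

# Two vectors chosen so the angle between them is 45 degrees.
A = [1.0, 0.0]
B = [1.0, 1.0]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

dot = sum(a * b for a, b in zip(A, B))
cos_theta = dot / (norm(A) * norm(B))   # cos(theta) = A.B / (|A||B|)
theta = math.degrees(math.acos(cos_theta))
print(round(theta))  # 45
```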
Best Answer
You're right that there's something going on here.
In a general finite-dimensional vector space, there is no canonical choice of isomorphism from $V$ to $V^*$, even though they're isomorphic because they have the same dimension. However, in a general finite-dimensional vector space, there is also no canonical choice of inner product!
Having an inner product gives us an isomorphism $\phi: V \to V^*$: map a vector $v \in V$ to the element $w \mapsto \langle v,w\rangle$ in $V^*$, and we can check that this will be an isomorphism.
Going the other way is a bit trickier, since inner products need to satisfy $\langle v,v \rangle \ge 0$, but isomorphisms "don't know" about this structure. (In particular, for vector spaces over finite fields, we can have an isomorphism $\phi : V \to V^*$, but it doesn't make sense to have an inner product.) However, if you have an isomorphism $\phi : V \to V^*$, then you can define $\langle v, w\rangle = \phi(v)(w)$, and this will at least be a bilinear form. (Making it artificially symmetric is easy and left as an exercise.)
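The recipe $\langle v, w\rangle = \phi(v)(w)$ and the symmetrization step can be sketched numerically. Here $\phi$ is represented (an assumption for the example) by an arbitrary invertible, non-symmetric matrix $M$, so the resulting form is bilinear but not symmetric until we symmetrize it.

```python
# Hypothetical isomorphism phi: V -> V*, represented by an invertible matrix M.
# phi(v) is the functional w -> (M v) . w.
M = [[1.0, 2.0],
     [0.0, 1.0]]  # invertible but not symmetric

def phi(v):
    Mv = [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]
    return lambda w: sum(Mv[i] * w[i] for i in range(2))

def B(v, w):
    # Bilinear form <v, w> := phi(v)(w); generally NOT symmetric.
    return phi(v)(w)

def B_sym(v, w):
    # Symmetrized form: average B over both argument orders.
    return 0.5 * (B(v, w) + B(w, v))

v, w = [1.0, 0.0], [0.0, 1.0]
print(B(v, w), B(w, v), B_sym(v, w))  # B(v,w) != B(w,v), but B_sym is symmetric
```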
When we're talking about vectors written as column vectors, we've actually given our vector space lots of structure: we've picked a standard basis, and we're writing our vectors in terms of their coordinates in that basis. Here, taking the transpose to turn a column vector into a row vector is exactly the isomorphism that corresponds to taking the dot product as our inner product.
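In coordinates this is almost tautological, which is the point: applying the "transposed" vector to another vector is literally the same arithmetic as the dot product. A minimal sketch (the vectors are arbitrary examples):

```python
# In the standard basis, the isomorphism v -> <v, .> sends the column
# vector v to the row vector v^T, and v^T acting on w is the dot product.
v = [1.0, 2.0, 3.0]
w = [4.0, 5.0, 6.0]

# "Row vector times column vector": the 1x1 matrix product v^T w
row_times_col = sum(v[i] * w[i] for i in range(3))

# Dot product <v, w>
dot = sum(a * b for a, b in zip(v, w))

print(row_times_col, dot)  # identical by construction
```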