Given a (finite-dimensional, although generalizable to infinite dimensions in nice enough settings like QM) vector space $V$ with basis $(v_1, v_2,\ldots,v_n)$, a linear functional $f:V\to\Bbb F$ is uniquely determined by how it acts on the basis vectors $v_i$. That's because any vector $v\in V$ may be written as a linear combination $v = a_1v_1+\cdots+a_nv_n$ with $a_i\in \Bbb F$, and $f$ is linear, so
$$
f(v) = f(a_1v_1+\cdots+a_nv_n) = a_1f(v_1) + \cdots +a_nf(v_n)
$$
so $f(v)$ is completely determined by the $f(v_i)$, for any vector $v$. Thus any $f$ can be represented by a tuple (list of numbers) $(f_1,\ldots,f_n)\in \Bbb F^n$ where $f_i = f(v_i)$. Clearly, if two functionals are different, then their corresponding tuples are also different.
On the other hand, any such tuple $(g_1, \ldots,g_n)$ may be used to define a function $g:V\to \Bbb F$ by setting $g(a_1v_1+\cdots+a_nv_n) = a_1g_1+\cdots+a_ng_n$. What's more, any such $g$ is linear.
So now we have a correspondence between the set of linear functionals $V\to \Bbb F$ and tuples in $\Bbb F^n$. These tuples are what become row vectors in the case where $V$ is a vector space of column vectors: evaluating $f(v)$ is exactly multiplying the row vector $(f_1,\ldots,f_n)$ by the column vector $v$.
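The correspondence can be sketched in a few lines of Python (the function names here are illustrative, not a standard API): a functional is stored as its tuple of values on the basis, and evaluation is the dot product of that "row vector" with the coordinate "column vector".

```python
# Sketch: a linear functional on F^n, represented by the tuple
# (f(v_1), ..., f(v_n)) of its values on a basis, is evaluated as a
# dot product -- a row vector multiplying a column vector.

def functional_from_tuple(fs):
    """Build f : F^n -> F from the tuple fs = (f(v_1), ..., f(v_n))."""
    def f(v):
        # f(a_1 v_1 + ... + a_n v_n) = a_1 f(v_1) + ... + a_n f(v_n)
        return sum(a * fi for a, fi in zip(v, fs))
    return f

f = functional_from_tuple([2.0, -1.0, 3.0])  # the "row vector" (2, -1, 3)
v = [1.0, 4.0, 0.5]                          # a "column vector" in R^3
print(f(v))  # 2*1 + (-1)*4 + 3*0.5 = -0.5
```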
Let $V$ be a vector space over $\mathbb{R}$. If we have a vector $v \in V$ and a functional $f \in V^*$, then $f$ is a map $V \rightarrow \mathbb{R}$, so we have $f(v) \in \mathbb{R}$. So, the bra-ket notation just means $\langle f \mid v \rangle = f(v)$. But then, what does it mean to write $\langle u \mid v \rangle$ if $u,v$ are both vectors in $V$?
If $V$ is equipped with an inner product $\langle \cdot, \cdot \rangle$, then for any $u \in V$, there is a linear functional $f_u : V \rightarrow \mathbb{R}$ defined by $f_u(v) = \langle u, v \rangle$. That is, $f_u \in V^*$. So, the bra-ket notation means $\langle u \mid v \rangle = \langle f_u \mid v \rangle = f_u(v) = \langle u , v \rangle$. Essentially, we're considering $u$ to be part of $V^*$ by identifying it with $f_u$.
In fact, this defines a linear map $i : V \rightarrow V^* : u \mapsto f_u$. I'll let you check that it's injective, and that if $V$ is finite-dimensional, then $i$ is actually an isomorphism. This means that any $f \in V^*$ can be written as $f = f_u$ for some $u \in V$. So, when we write the bra-ket notation $\langle u \mid v \rangle$ for finite-dimensional spaces, it's not so important whether we think of $u$ as belonging to $V$ or $V^*$, because we have this handy way of converting between them.
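Here is a minimal sketch of the map $i : u \mapsto f_u$ for $V = \mathbb{R}^n$ with the standard dot product (the name `bra` is just illustrative): the bra $\langle u \mid$ is the functional $f_u$, and $\langle u \mid v \rangle$ is its value at $v$.

```python
# Sketch of i : V -> V*, u |-> f_u, for V = R^n with the standard
# dot product. The bra <u| is the functional f_u; <u|v> means f_u(v).

def bra(u):
    """Return f_u in V*: the functional v |-> <u, v>."""
    def f_u(v):
        return sum(ui * vi for ui, vi in zip(u, v))
    return f_u

u = [1.0, 2.0, -1.0]
v = [3.0, 0.0, 4.0]
print(bra(u)(v))  # <u|v> = 1*3 + 2*0 + (-1)*4 = -1.0
# This agrees with the inner product <u, v> computed directly:
print(bra(u)(v) == sum(a * b for a, b in zip(u, v)))  # True
```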
Note: in infinite-dimensional spaces, or if $V$ is over $\mathbb{C}$ instead, things are trickier (over $\mathbb{C}$, the map $u \mapsto f_u$ is conjugate-linear rather than linear). I'd encourage you to check out the Riesz Representation Theorem for more info.
(It's been a while since I've done linear algebra, so please edit if you notice any mistakes!)
Best Answer
It's important that we're talking about finite-dimensional vector spaces here. If $\dim(A)=n$ and $\dim(B)=m$, then there's a $1$-$1$ correspondence between linear transformations $f:A \to B$ and $m \times n$ matrices. That's because any such transformation is determined by its values on a basis for $A$, and each of those $n$ values is in turn determined by a column vector with $m$ components. The dimension of the space of $m \times n$ matrices is $mn$.
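This correspondence can be sketched directly (illustrative code, not a standard API): stack the images $f(a_1), \ldots, f(a_n)$ of the basis vectors side by side as the columns of an $m \times n$ matrix, and applying $f$ becomes a matrix-vector product.

```python
# Sketch: a linear map f : A -> B (dim A = n, dim B = m) is determined by
# the images f(a_1), ..., f(a_n) of a basis; those m-component column
# vectors, placed side by side, form the m x n matrix of f.

def matrix_from_images(images):
    """images[k] = f(a_k) as a list of m coordinates; returns the matrix as rows."""
    m, n = len(images[0]), len(images)
    return [[images[j][i] for j in range(n)] for i in range(m)]

def apply(matrix, x):
    """Matrix-vector product: the coordinates of f(x) in the basis of B."""
    return [sum(row[j] * x[j] for j in range(len(x))) for row in matrix]

# f : R^2 -> R^3 with f(a_1) = (1, 0, 2) and f(a_2) = (0, 3, 1)
M = matrix_from_images([[1.0, 0.0, 2.0], [0.0, 3.0, 1.0]])
print(M)                     # the 3 x 2 matrix [[1, 0], [0, 3], [2, 1]]
print(apply(M, [2.0, 1.0]))  # 2*f(a_1) + 1*f(a_2) = [2.0, 3.0, 5.0]
```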
It's also a theorem that any two finite-dimensional vector spaces over the same field with the same dimension are isomorphic to one another. Choose a basis for each and map element $k$ of the basis for $A$ to element $k$ of the basis for $B$. That gives a linear map whose image contains a basis of $B$, and is therefore all of $B$, and whose kernel is trivial.
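In coordinates, that basis-to-basis isomorphism is just the identity on tuples; a concrete (and hypothetical) example is identifying polynomials of degree less than $3$ with $\mathbb{R}^3$:

```python
# Sketch: two spaces of the same dimension over the same field are isomorphic.
# Sending the k-th basis vector of A to the k-th basis vector of B means a
# vector keeps its coordinate tuple, reinterpreted in B's basis.

def iso(coords_in_A):
    """The basis-to-basis isomorphism, expressed in coordinates."""
    return list(coords_in_A)

# A = polynomials of degree < 3 with basis (1, t, t^2); B = R^3 with the
# standard basis. The polynomial 2 - 5*t^2 has coordinates (2, 0, -5).
print(iso([2.0, 0.0, -5.0]))  # [2.0, 0.0, -5.0], now read as a vector in R^3
```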