$\newcommand{\ip}[1]{\left\langle{#1}\right\rangle}$Actually, you can view a bra vector as the honest-to-goodness adjoint of the corresponding ket vector, but it requires a trick.
So, let $V$ be a finite-dimensional complex inner product space. The trick is that you have a canonical isomorphism $\Phi : V \to L(\mathbb{C},V)$ given by $\Phi(v) : \lambda \mapsto \lambda v$ for all $v \in V$; indeed, one has that $\Phi^{-1}(s) = s(1)$. Now, since $\mathbb{C} = \mathbb{C}^1$ is also an inner product space, for any $v \in V$ we can form the adjoint $\Phi(v)^\ast \in L(V,\mathbb{C}) = V^\ast$ of $\Phi(v)$, and lo and behold, for any $w \in V$,
$$
\Phi(v)^\ast(w) = \ip{1,\Phi(v)^\ast(w)}_{\mathbb{C}} = \ip{\Phi(v)(1),w}_V = \ip{v,w}_V,
$$
so that $\Phi(v)^\ast : w \mapsto \ip{v,w}_V$, as required. Thus, up to application of a canonical isomorphism, a bra vector really is the adjoint of the corresponding ket vector.
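As a quick numerical sanity check (my own sketch, not part of the argument above; it uses NumPy and a hypothetical $3$-dimensional space, with the physics convention that the inner product is conjugate-linear in its first argument), one can represent $\Phi(v)$ as the $3\times 1$ matrix whose single column is $v$ and verify that its adjoint acts as $w \mapsto \ip{v,w}$:

```python
import numpy as np

# Hypothetical 3-dimensional example of the identity Phi(v)^*(w) = <v, w>.
rng = np.random.default_rng(0)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# Phi(v) : C -> V is the 3x1 matrix whose single column is v ...
Phi_v = v.reshape(3, 1)

# ... and its adjoint Phi(v)^* : V -> C is the conjugate transpose, a 1x3 matrix.
Phi_v_adj = Phi_v.conj().T

# Phi(v)^*(w) agrees with the inner product <v, w> = sum_i conj(v_i) w_i.
lhs = (Phi_v_adj @ w.reshape(3, 1))[0, 0]
rhs = np.vdot(v, w)  # np.vdot conjugates its first argument
assert np.isclose(lhs, rhs)
```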
$\newcommand{\ket}[1]{\left|{#1}\right\rangle}
\newcommand{\bra}[1]{\left\langle{#1}\right|}
$ADDENDUM: In more physics-friendly notation, here's what's going on. Let $H$ be your (finite-dimensional) Hilbert space, and let $\ket{a} \in H$. You can interpret $\ket{a}$, in a completely natural way, as defining a linear transformation $\Phi[\ket{a}] : \mathbb{C} \to H$ by $\Phi[\ket{a}](\lambda) := \lambda \ket{a}$. The Hermitian conjugate of $\Phi[\ket{a}]$, then, is a linear transformation $\Phi[\ket{a}]^\dagger : H \to \mathbb{C}$, so that $\Phi[\ket{a}]^\dagger$ is simply a bra vector. The computation above then shows that $\Phi[\ket{a}]^\dagger = \bra{a}$. Thus, as long as you're fine with identifying $\ket{a}$ with $\Phi[\ket{a}]$ (which is actually completely rigorous), you do indeed have that $\bra{a} = \ket{a}^\dagger$.
The numbers in notations like $|n\rangle$ are the analogues of indices in matrix notation. That is, $|0\rangle=e_0$, $|1\rangle=e_1$, etc., where $e_n$ is the vector with a $1$ in the $n$th position and $0$ in every other entry. Unfortunately, this notation does not specify the dimension of the underlying space. For qubits in quantum computers the dimension is $2$, so we only have $|0\rangle=e_0=(1,0)$ and $|1\rangle=e_1=(0,1)$. It is also common to have a countable infinity of basis vectors, so that we get $|n\rangle$ for each $n\in\Bbb N$. In quantum mechanics one also uses this notation for much larger spaces; for example we may have $|x\rangle$ for each $x\in\Bbb R^3$ (the position basis), which spans a vector space of uncountable dimension $|\Bbb R^3|=2^{\aleph_0}$.
In any case, these vectors are usually enumerating a basis of some kind, and the details beyond that depend on the context.
The notation $\langle 0|0\rangle$ is written in linear algebra notation as $e_0^Te_0$, which is a $1\times 1$ matrix whose value can be identified with the dot product $e_0\cdot e_0$. Provided that the vector is normalized, this will always be $1$. So a general answer is $\langle m|n\rangle=0$ if $m\ne n$, and $\langle n|n\rangle=1$, which expresses that the vectors $(|n\rangle)_{n\in\Bbb N}$ are an orthonormal basis for the space.
For some general rules, then, we have $|n\rangle=e_n$ and $\langle n|=e_n^T$ (or $e_n^\dagger$ in complex vector spaces), where we understand the first as a $d\times 1$ matrix so that the second is $1\times d$, where $d$ is the dimension of the space. Then the inner product is $\langle m|n\rangle=e_m^Te_n=e_m\cdot e_n$, and the outer product is $|n\rangle\langle m|=e_ne_m^T$, which is a $d\times d$ matrix with a single $1$ at the index $(n,m)$. These notations are also used for arbitrary vectors; for example we might write $|\psi\rangle=v$ for some vector $v$, and then $\langle\psi|=v^T$ (or $v^\dagger$), $\langle\psi|\psi\rangle=\|v\|^2$, and, when $v$ is normalized, $|\psi\rangle\langle\psi|$ is the projection matrix onto the direction of $v$.
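The dictionary above can be checked directly (a small NumPy sketch of my own, in a hypothetical dimension $d=3$; the names `ket`, `bra`, and `outer` are just illustrative):

```python
import numpy as np

d = 3
ket = lambda n: np.eye(d)[:, [n]]   # |n> as a d x 1 column matrix
bra = lambda n: ket(n).conj().T     # <n| as a 1 x d row matrix

# Orthonormality: <m|n> = 1 if m == n, else 0.
assert (bra(0) @ ket(0))[0, 0] == 1
assert (bra(0) @ ket(1))[0, 0] == 0

# The outer product |n><m| is d x d with a single 1 at index (n, m).
outer = ket(2) @ bra(1)
assert outer[2, 1] == 1 and outer.sum() == 1

# For a normalized |psi>, |psi><psi| is the projection onto psi.
psi = np.array([[1.0], [1.0], [0.0]]) / np.sqrt(2)
P = psi @ psi.conj().T
assert np.allclose(P @ P, P)    # idempotent, as a projection must be
assert np.allclose(P @ psi, psi)  # fixes psi itself
```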
Given an inner product space $V$, you can imagine that there are two different copies of $V$, say $V_1$ and $V_2$, in which each vector $v\in V$ corresponds to a bra $\langle v|\in V_1$ and a ket $|v\rangle\in V_2$. To multiply a bra and a ket together, $\langle v|$ times $|w\rangle$ will by definition be $\langle v,w\rangle$ via the inner product.
Another way to think about this is as $V$ and its Hilbert space dual $V^*$ being identified together: each vector $v\in V$ gives rise to the covector $v^*$, namely the linear mapping $v^*(w):=\langle w,v\rangle$ defined via the given inner product. In this setting the covectors / dual vectors / linear functionals $v^*$ are denoted as bras $\langle v|$ and the usual vectors as kets $|v\rangle$, and multiplication is evaluation: $\langle v||w\rangle=v^*(w)=\langle w,v\rangle$.
The reason for $v^*(w):=\langle w,v\rangle$ having $v$ in the second argument is so that each bra is a complex-linear functional of the argument $w$ (here the inner product is taken to be conjugate-linear in its second argument). This is related to a Hilbert space $V$ and its dual $V^*$ being anti-isomorphic; see the Riesz representation theorem.
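The anti-isomorphism can be made concrete (my own sketch, not from the answer; it uses NumPy, a hypothetical dimension $2$, and the convention just stated, so that $v^*(w)=\sum_i w_i\,\overline{v_i}$ is linear in $w$): the map $v \mapsto v^*$ picks up a complex conjugate on scalars.

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
w = rng.standard_normal(2) + 1j * rng.standard_normal(2)
c = 2.0 - 3.0j

# The bra <v| as a row vector: evaluation w |-> <w, v> = sum_i w_i conj(v_i).
star = lambda u: u.conj().reshape(1, -1)

# Evaluation is complex-linear in the argument w.
assert np.isclose((star(v) @ w)[0], np.sum(w * v.conj()))

# But v |-> v^* itself is conjugate-linear: (c v)^* = conj(c) v^*.
# This is exactly the anti-isomorphism V ~ V^* of the Riesz theorem.
assert np.allclose(star(c * v), np.conj(c) * star(v))
```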