Here's how you get from one to the other. Let me take the case of a particle moving in one dimension.
In this case, we assume that there exist vectors $|x\rangle$ which form a "Dirac-normalized" basis (the position basis) for the Hilbert space, in the sense that their inner products satisfy
$$
\langle x|x'\rangle =\delta(x-x')
$$
Note, as an aside, that these vectors are not normalizable in the standard sense, and therefore they do not strictly speaking belong to the Hilbert space.
Next, for each $|\psi\rangle$ in the Hilbert space, we define the position basis wavefunction $\psi$ corresponding to the state $|\psi\rangle$ as
$$
\psi(x) = \langle x|\psi\rangle
$$
So really, the value $\psi(x)$ of the position-basis wavefunction $\psi$ at a point $x$ can simply be thought of as the component of $|\psi\rangle$ in the direction of $|x\rangle$, just as in the finite-dimensional case, where one finds the component of a vector $|\psi\rangle$ along a basis vector $|e_i\rangle$ simply by taking the inner product $\langle e_i|\psi\rangle$.
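To make the finite-dimensional analogy concrete, here is a minimal sketch in plain Python (the basis and state vector are made up for illustration): the component of a state along a basis vector is just their inner product, exactly like $\psi(x) = \langle x|\psi\rangle$.

```python
# Finite-dimensional analogue of psi(x) = <x|psi>: the component of a
# state vector along a basis vector is their Hermitian inner product.
# Plain Python complex lists stand in for kets; the values are illustrative.

def inner(a, b):
    """Hermitian inner product <a|b>: conjugate the first argument."""
    return sum(ai.conjugate() * bi for ai, bi in zip(a, b))

# An orthonormal basis of C^3 (the analogue of the position kets |x>).
e = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

psi = [2 + 1j, -1j, 3]

# The "wavefunction" in this basis is the list of components <e_i|psi>.
components = [inner(ei, psi) for ei in e]
print(components)  # recovers the entries of psi
```

Because the basis is orthonormal, taking inner products simply reads the entries of $|\psi\rangle$ back off, one per basis vector.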
The inner product is always between two (ket) vectors. So let $\mathcal H$ be the space in which they live. This is a complex vector space with a Hermitian inner product. The inner product defines a map $\mathcal H\to\mathcal H^\vee$, where $\mathcal H^\vee$ is the dual of $\mathcal H$ and consists of linear functionals on $\mathcal H$. The map is given by $a\mapsto a^\ast$, which is defined by $a^\ast(b) = \langle a,b\rangle$.
In fact the inner product defines a metric on $\mathcal H$ and the postulates of quantum mechanics state that this state space is in fact complete, making it a Hilbert space, and it can be seen that the element $a^\ast = \langle a,\,\cdot\,\rangle$ is continuous. Now denote by $\mathcal H^\ast$ the topological dual of $\mathcal H$, consisting only of continuous linear functionals. The Riesz representation theorem asserts that $a\mapsto a^\ast$ is an isometric isomorphism between $\mathcal H$ and $\mathcal H^\ast$. You should see the latter as the space of bras, and so we see that indeed every bra vector is the conjugate of a ket vector. Starting from that point, in physics one forgets about the distinction between $\langle a,b\rangle$ and $a^\ast(b)$, which are both denoted $\langle a|b\rangle$. It is even customary for this reason to write $\langle a|$ for $a^\ast$ and $|b\rangle$ for $b$.
ADDED IN EDIT
As commented by ACuriousMind, it can be enlightening to make this concrete for the finite dimensional case. In the finite dimensional case, if we fix a basis, $a$ and $b$ can be written as
$$a = \begin{pmatrix}a_1\\ \vdots \\ a_n\end{pmatrix},\ \ \ b = \begin{pmatrix}b_1\\ \vdots \\ b_n\end{pmatrix}.$$
The inner product is $\langle a,b\rangle = a_1^\ast b_1 + \cdots + a_n^\ast b_n$, where $a_i^\ast$ is the complex conjugate of $a_i$. The linear functional $a^\ast$ is represented by a matrix in this basis, namely
$$a^\ast = \left(a_1^\ast\ \cdots\ a_n^\ast\right).$$
Clearly we have $a^\ast(b) \equiv a^\ast b = \langle a,b\rangle$.
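The identity $a^\ast(b) = \langle a,b\rangle$ can be checked directly in a small example. This is a sketch in plain Python with made-up vectors: the "bra" is the entrywise complex conjugate of the ket, and applying it is an ordinary row-times-column product.

```python
# Check that the bra a^* (the entrywise conjugate of the ket a),
# applied to b as a row-times-column product, equals <a,b>.

def bra(a):
    """The row vector a^*: entrywise complex conjugate of the ket a."""
    return [ai.conjugate() for ai in a]

def apply(functional, b):
    """Apply a row vector (linear functional) to a column vector."""
    return sum(fi * bi for fi, bi in zip(functional, b))

a = [1 + 2j, 3j]
b = [4, 1 - 1j]

lhs = apply(bra(a), b)                                   # a^* b
rhs = sum(ai.conjugate() * bi for ai, bi in zip(a, b))   # <a,b>
print(lhs == rhs)  # True
```

Both sides conjugate the entries of $a$ before multiplying, which is exactly what the matrix notation $a^\ast b$ encodes.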
ADDED IN SECOND EDIT
The above argument is a strictly mathematical one, in which the (common) assumption is made that kets are the elements of some Hilbert space and bras are defined to be elements of its continuous dual. This definition is mathematically convenient but not physically unavoidable.
First of all, only kets directly relate to physical reality; bras are just there for mathematical convenience. The convenience, and indeed the whole usefulness of the Dirac formalism, disappears when you abandon the duality between bras and kets.
ZeroTheHero remarked that in the book by Cohen-Tannoudji et al. the definitions of bras and kets (for a single spinless particle) are such that this duality doesn't strictly hold. The authors do acknowledge the computational desirability of this duality and propose a formal solution in which the duality is supposedly restored, with the understanding that not all elements are physical (only approximately so), and without going into details. I do think they take great care to indicate possible pain points in the formalism, and probably they do the right thing by not going into distracting detail.
My guess is that their intention was to give a pragmatic approach in which physical motivation and intuition are preferred over mathematical rigor. To be specific, first of all they define the Hilbert space of kets $\mathscr F$ to be a subspace of $L^2$ of "sufficiently regular functions" of which they "shall not try to give a precise, general list of [...] supplementary conditions".
Then they define the space of bras as the space of all linear functionals on $\mathscr F$, the space that in my earlier notation would have been $\mathscr F^\vee$. I don't think it is ever useful to consider the entire (algebraic) dual in the infinite-dimensional case, since this space is huge and full of highly pathological elements. They would have done better to "define" it as an otherwise unspecified subspace of "sufficiently regular" linear functionals.
Then they construct a very reasonable bra (namely evaluation at a point), show that it is not associated to any ket, and finally note that this can be physically resolved by passing to generalized kets to restore the duality, even though one should not attribute a physical meaning to them. Many reasonable bras correspond to a generalized ket, but this is still not the case for the large majority of the elements of $\mathscr F^\vee$.
Best Answer
There is a lot of confusion here already. Let's start with the basics. There are vectors, operators, and an inner product. If everything were finite dimensional, the vectors would be complex column vectors, the operators would be complex square matrices, and the inner product would be taken by transposing one vector, conjugating every entry, and then doing ordinary matrix multiplication.
Note right away that this inner product differs from the one on regular 3d space. Firstly, it is complex; secondly, it is no longer symmetric. This is important and serious. We can no longer just say "the inner product of vectors A and B." So we need a (new) clear notation.
One notation is $\langle A | B \rangle$. This is just like taking the column vector $|A\rangle$, transposing it into a row vector, complex conjugating every entry to get the row vector $\langle A |$, and then taking the ordinary matrix product.
So whenever you see a ket vector (e.g. $|A\rangle$) just think column vector. And whenever you see a bra vector (e.g. $\langle A|$) just think row vector. And think of the vectors $|A\rangle$ and $\langle A |$ as being transpose conjugates of each other.
This operation of taking the conjugate transpose of one vector and multiplying by the other is linear in the second argument, conjugate linear in the first argument, and non-negative when you give it the same argument in both positions. Moreover, if you put the two vectors in the opposite order, you get the complex conjugate of the original result. Those are really the properties you are looking for, and you can have them even when there isn't a finite basis. For instance, the operation that takes two functions, conjugates one, multiplies them together, and then integrates is an infinite-dimensional version of this.
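These four properties can be verified directly on small complex vectors. Below is a sketch in plain Python with made-up vectors and a made-up scalar; each assertion is one of the properties just listed.

```python
# Verify the stated properties of the Hermitian inner product:
# linear in the second slot, conjugate linear in the first slot,
# conjugate symmetric, and non-negative on the diagonal.

def inner(a, b):
    return sum(ai.conjugate() * bi for ai, bi in zip(a, b))

a, b, c = [1 + 1j, 2], [0, 3 - 1j], [2j, 1]
lam = 2 - 3j  # an arbitrary complex scalar

add = lambda u, v: [ui + vi for ui, vi in zip(u, v)]
scale = lambda s, u: [s * ui for ui in u]

# Linear in the second argument.
assert inner(a, add(b, scale(lam, c))) == inner(a, b) + lam * inner(a, c)
# Conjugate linear in the first argument.
assert inner(scale(lam, a), b) == lam.conjugate() * inner(a, b)
# Swapping the arguments conjugates the result.
assert inner(b, a) == inner(a, b).conjugate()
# Non-negative (and real) on the diagonal.
assert inner(a, a).real >= 0 and inner(a, a).imag == 0
```

The last assertion is exactly why $\langle A|A\rangle$ can serve as a squared length: conjugating one argument forces the diagonal values to be real and non-negative.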
That isn't actually what he does. He takes the (column) vector $|\psi\rangle$ and the (column) vector $|x\rangle$ and forms the scalar $\langle x|\psi\rangle=\psi(x)$. Each of the vectors $|\psi\rangle$ and $|x\rangle$ is an entire vector, and you are computing the complex scalar given by their inner product. If you imagine fixing the vector $|\psi\rangle$ and taking a different vector $|x\rangle$ for each position $x$, you are effectively getting a complex number for each position. You are getting the wave function from the inner product.
And you are getting it the same way you get coefficients from a vector: by taking the inner product with your basis vectors. You might be used to getting coefficients just by looking at a vector, but if we have an infinite-dimensional basis we can't write down an infinite number of coefficients. When you think about it, you get a coefficient by taking the inner product with the basis vectors.
What does this have to do with anything? We are starting with a vector space equipped with an addition (like adding column vectors), a scaling of vectors by complex numbers (like scaling a column vector by a complex number), and an inner product (as described above); it simply has the properties we want. We are getting the wave function from that. If we started from wave functions, this would be circular. If you wanted to do that, you could define the inner product as an integral, but then you wouldn't have vectors like $|x\rangle$.
The inner product is always (in physics) conjugate linear in the first argument and linear in the second argument. But he's not doing anything unusual here; he's just taking an inner product. If $|x \rangle$ were itself a wave function, it would be a Dirac delta function, which is real, and hence conjugating it doesn't do anything.
He's just finding a coefficient of the vector $|\psi\rangle$ in the "basis" $\{|x\rangle\}.$
It is perfectly proper to take the inner product of any two vectors you like. It's just not a symmetric operation anymore. You do what you want, you get what you get.
You can project A onto B by $\langle B |A \rangle | B\rangle/\langle B |B \rangle$ for exactly the same reason you normally can.
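Here is that projection formula worked out in plain Python, with made-up vectors. The key sanity check is the same one you know from real linear algebra: the residual $|A\rangle$ minus its projection is orthogonal to $|B\rangle$.

```python
# The projection of |A> onto |B>:  <B|A> |B> / <B|B>.
# Plain Python complex lists; the vectors are illustrative.

def inner(a, b):
    return sum(ai.conjugate() * bi for ai, bi in zip(a, b))

def project(a, b):
    """Orthogonal projection of the ket a onto the ket b."""
    coeff = inner(b, a) / inner(b, b)
    return [coeff * bi for bi in b]

A = [3 + 1j, 2]
B = [1, 1j]

P = project(A, B)
# The residual A - P is orthogonal to B, just as in the real case.
residual = [ai - pi for ai, pi in zip(A, P)]
print(inner(B, residual))  # 0j
```

Note the argument order matters now: the coefficient is $\langle B|A\rangle/\langle B|B\rangle$, with $B$ conjugated, so that the residual lands exactly in the orthogonal complement of $|B\rangle$.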
Really. Go review everything you know about linear algebra but where you aren't allowed to look at the coefficients (imagine there are too many to look at) and instead you can ask a magic device to compute the inner products and scaling and addition for you. That's all that is going on, except the inner product isn't symmetric anymore.
It's how you take an inner product, plus some notation.
Work at what? Practice doing linear algebra with complex column vectors. If you want to do calculus, so you can have derivatives and set up a differential equation, you need a notion of distance for your limits. One notion of distance between A and B is $\sqrt {\langle A-B|A-B\rangle}$, and you need the complex conjugation so that the square root gives a non-negative number.
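That distance can be computed directly. A minimal sketch in plain Python, with made-up vectors: the conjugation in the inner product guarantees $\langle A-B|A-B\rangle$ is real and non-negative, so the square root is well defined.

```python
# Distance from the inner product:  d(A, B) = sqrt(<A-B|A-B>).
# The conjugation makes <v|v> real and non-negative, so sqrt is safe.
import math

def inner(a, b):
    return sum(ai.conjugate() * bi for ai, bi in zip(a, b))

def dist(a, b):
    diff = [ai - bi for ai, bi in zip(a, b)]
    return math.sqrt(inner(diff, diff).real)

A = [1 + 1j, 0]
B = [1, 2j]
print(dist(A, B))  # sqrt(|1j|^2 + |-2j|^2) = sqrt(5)
```

With a distance in hand you can take limits, and hence derivatives, which is what differential equations like the Schrödinger equation need.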
Practice with finite dimensional complex vector spaces first.