You're not getting your facts right at all.
How do we know from this $\langle W \rangle = \int_{-\infty}^{\infty} \bar{\Psi}\left(-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + W_p \right) \Psi dx$ or this $\hat{H} = -\frac{\hbar^2}{2m}\frac{d^2}{2m}\frac{d^2}{dx^2} + W_p$ that we have an eigenfunction and eigenvalue?
Answer: we don't.
All I know about the operator $\hat{H}$ so far is this equation, where $\langle W \rangle$ is an energy expectation value:
\begin{align}
\langle W \rangle = \int_{-\infty}^{\infty} \bar{\Psi}\left(-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + W_p \right) \Psi dx
\end{align}
No, you don't.
Here's the mathematical side of what an eigenfunction and eigenvalue are:
Given a linear transformation $T : V \to V$, where $V$ is an infinite-dimensional Hilbert or Banach space, a scalar $\lambda$ is an eigenvalue if and only if there is some non-zero vector $v$ such that $T(v) = \lambda v$.
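(To make the abstract definition concrete, here is a throwaway finite-dimensional check in numpy; the matrix is my own illustrative choice, not anything from the question.)

```python
import numpy as np

# A Hermitian 2x2 matrix standing in for the linear transformation T
# (arbitrary illustrative entries).
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns the eigenvalues and the eigenvectors (as columns).
eigenvalues, eigenvectors = np.linalg.eigh(T)

for lam, v in zip(eigenvalues, eigenvectors.T):
    # The defining relation: T(v) = lambda * v.
    assert np.allclose(T @ v, lam * v)

print(eigenvalues)  # [1. 3.]
```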
Here's the physics side (i.e. QM):
We postulate that the state of a system is described by some abstract vector (called a ket) $|\Psi\rangle$ that belongs to some abstract Hilbert space $\mathcal{H}$.
Next we postulate that this state evolves in time under some Hermitian operator $H$, which we call the Hamiltonian, via the Schrödinger equation. What is $H$? You guess, and compare to experimental results (that's what physics is anyway).
Next we postulate that for any measurable quantity there exists some Hermitian operator $O$, and we further postulate that the average of many measurements of that quantity is given by $ \langle O \rangle = \langle \Psi | O | \Psi \rangle$.
Connection to wavefunctions: we pick the Hilbert space $L^2(\mathbb{R})$ to work in (one spatial dimension, matching the expressions above), so $\Psi(x) = \langle x | \Psi \rangle$, and $\langle O \rangle = \int_{-\infty}^{\infty} \Psi^*(x) \, (O\Psi)(x) \, dx$.
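(A quick numerical sanity check of that last formula; the Gaussian and the grid below are my own illustrative choices. For $\Psi(x) = \pi^{-1/4} e^{-x^2/2}$ one should get $\langle x \rangle = 0$ and $\langle x^2 \rangle = 1/2$.)

```python
import numpy as np

# Grid and wavefunction are illustrative choices (units hbar = m = 1).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.pi ** (-0.25) * np.exp(-x ** 2 / 2)   # normalized Gaussian

# <O> = integral of Psi^* (O Psi) dx; here O acts by multiplication.
norm    = np.sum(np.conj(psi) * psi).real * dx          # ~ 1.0
mean_x  = np.sum(np.conj(psi) * x * psi).real * dx      # ~ 0.0
mean_x2 = np.sum(np.conj(psi) * x**2 * psi).real * dx   # ~ 0.5

print(norm, mean_x, mean_x2)
```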
Ok, that's the end. The form of $H$ doesn't follow from the energy expectation value; it is postulated.
Wait! I haven't even talked about eigenvalues and eigenfunctions. This is a useless post!
Answer: well, you don't have to. But it is useful to find the eigenvalues and eigenfunctions of $H$, because the eigenfunctions of $H$ form a basis of the Hilbert space, and in that basis certain expressions become diagonal and much easier to manipulate, whatever calculation we want to do.
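(Here is a small sketch of what "becomes diagonal" means, with a random Hermitian matrix standing in for $H$; the dimension and the numbers are mine.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian matrix standing in for H (purely illustrative).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

E, V = np.linalg.eigh(H)   # columns of V are the eigenvectors |psi_n>

# In its own eigenbasis, H is the diagonal matrix of eigenvalues E_n.
assert np.allclose(V.conj().T @ H @ V, np.diag(E))

# Any state expands in this basis with coefficients c_n = <psi_n|Psi>.
Psi = rng.normal(size=4) + 1j * rng.normal(size=4)
c = V.conj().T @ Psi
assert np.allclose(V @ c, Psi)
```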
So to find the eigenvalues of $H$, we simply solve the eigenvalue equation stated above:
\begin{align}
H | \Psi_n \rangle = E_n | \Psi_n \rangle.
\end{align}
This is in the form $T(v) = \lambda v$.
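(To see this done rather than stated: a finite-difference sketch for the harmonic oscillator, $W_p = \frac{1}{2} m \omega^2 x^2$, in units $\hbar = m = \omega = 1$; all numerical choices are mine. The lowest eigenvalues should come out close to $E_n = (n + \frac{1}{2})\hbar\omega$.)

```python
import numpy as np

# Grid (illustrative choice); units hbar = m = omega = 1.
N = 1000
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

# Central finite-difference approximation of d^2/dx^2.
D2 = (np.diag(np.ones(N - 1), -1)
      - 2.0 * np.eye(N)
      + np.diag(np.ones(N - 1), 1)) / dx**2

# H = -(1/2) d^2/dx^2 + (1/2) x^2, i.e. kinetic term plus W_p.
H = -0.5 * D2 + np.diag(0.5 * x**2)

E, psi = np.linalg.eigh(H)
print(E[:4])   # approximately [0.5, 1.5, 2.5, 3.5]
```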
So, as Alfred Centauri says, we simply want to find the eigenfunctions of $H$. A more subtle question would be: how do we know they exist? The answer lies in spectral theory and Sturm–Liouville theory, but never mind that for now; as physicists we assume they always exist.
So your additional question:
$\hat{a} \psi$ is an eigenfunction of operator $\hat{H}$ with eigenvalue $(W-\hbar \omega)$.
Well... that just follows straight away. You said you already proved that $H \hat{a} \psi = (W - \hbar \omega)\, \hat{a} \psi$. So here $T = H$, $v = \hat{a} \psi$, and $\lambda = (W - \hbar \omega)$, which is exactly an eigenvalue equation $T(v) = \lambda v$. Thus, $\hat{a} \psi$ is an eigenfunction of $H$ with eigenvalue $(W-\hbar \omega)$.
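(The same logic can be checked with truncated ladder-operator matrices for the harmonic oscillator; the construction below and the units $\hbar = \omega = 1$ are my own choices, not something from your derivation.)

```python
import numpy as np

# Truncated lowering operator: a|n> = sqrt(n)|n-1>  (hbar = omega = 1).
N = 12
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)
H = a.conj().T @ a + 0.5 * np.eye(N)   # H = a^dagger a + 1/2

# Take the eigenstate |3>, so H psi = W psi with W = 3.5.
psi = np.zeros(N)
psi[3] = 1.0
W = 3.5

# a psi is again an eigenfunction of H, with eigenvalue W - hbar*omega.
assert np.allclose(H @ (a @ psi), (W - 1.0) * (a @ psi))
```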
From Wiki:
For a finite-dimensional vector space, using a fixed orthonormal basis, the inner product can be written as a matrix multiplication of a row vector with a column vector:
$ \langle A | B \rangle = A_1^* B_1 + A_2^* B_2 + \cdots + A_N^* B_N = \begin{pmatrix} A_1^* & A_2^* & \cdots & A_N^* \end{pmatrix} \begin{pmatrix} B_1 \\ B_2 \\ \vdots \\ B_N \end{pmatrix}$
Based on this, the bras and kets can be defined as:
$\langle A | = \begin{pmatrix} A_1^* & A_2^* & \cdots & A_N^* \end{pmatrix}$
$ | B \rangle = \begin{pmatrix} B_1 \\ B_2 \\ \vdots \\ B_N \end{pmatrix}$
and then it is understood that a bra next to a ket implies matrix multiplication.
The conjugate transpose (also called "Hermitian conjugate") of a bra is the corresponding ket and vice versa:
$\langle A |^\dagger = |A \rangle, \quad |A \rangle^\dagger = \langle A |$
because if one starts with the bra $\begin{pmatrix} A_1^* & A_2^* & \cdots & A_N^* \end{pmatrix}$, then performs a complex conjugation, and then a matrix transpose, one ends up with the ket $\begin{pmatrix} A_1 \\ A_2 \\ \vdots \\ A_N \end{pmatrix}$.
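(The quoted correspondence is easy to play with in numpy; the vectors below are arbitrary examples of mine.)

```python
import numpy as np

# Kets as column vectors (arbitrary example entries).
A = np.array([[1.0 + 2.0j], [3.0j], [2.0]])
B = np.array([[0.5], [1.0 - 1.0j], [4.0j]])

bra_A = A.conj().T        # <A| is the conjugate transpose of |A>

# <A|B> as a matrix product of a row vector with a column vector.
inner = (bra_A @ B)[0, 0]
assert np.isclose(inner, np.sum(A.conj() * B))

# Daggering a bra gives back the ket: (<A|)^dagger = |A>.
assert np.allclose(bra_A.conj().T, A)
```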
Best Answer
If you are acquainted with matrices, then $|\psi\rangle$ is very much like a column vector, and $\langle\psi|$ is like a row vector. Operators correspond to square matrices. The conjugate transpose $^\dagger$ is similar to the matrix transpose $^\mathrm{T}$ in the sense that it turns columns into rows and vice versa.
Then (this is none of your lines 1–3), if we take $\cdot$ as a matrix-like product, $$\langle\psi|\hat{x}|\psi\rangle=\langle\psi|\cdot\hat{x}|\psi\rangle=|\psi\rangle^\dagger\cdot\hat{x}|\psi\rangle$$ If we group the factors one way, we can write $$\langle\psi|\hat{x}|\psi\rangle=\langle\psi|\cdot\Bigl(\hat{x}|\psi\rangle\Bigr)=\langle\psi|\cdot|\hat{x}\psi\rangle=\langle\psi|\hat{x}\psi\rangle$$ The other way of grouping gives us $$\langle\psi|\hat{x}|\psi\rangle=\Bigl(\langle\psi|\hat{x}\Bigr)\cdot|\psi\rangle=\Bigl(\hat{x}^\dagger\cdot\langle\psi|^\dagger\Bigr)^\dagger\cdot|\psi\rangle=\Bigl(\hat{x}^\dagger\cdot|\psi\rangle\Bigr)^\dagger\cdot|\psi\rangle=|\hat{x}^\dagger\psi\rangle^\dagger\cdot|\psi\rangle=\langle\hat{x}^\dagger\psi|\cdot|\psi\rangle=\langle\hat{x}^\dagger\psi|\psi\rangle$$
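(All three groupings can be verified numerically; here is a sketch with a random operator and a random ket, both arbitrary choices of mine. Note the identity $\langle\psi|\hat{x}|\psi\rangle = \langle\hat{x}^\dagger\psi|\psi\rangle$ holds even when the operator is not Hermitian.)

```python
import numpy as np

rng = np.random.default_rng(1)

# A random (not necessarily Hermitian) operator and a random ket.
X = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
psi = rng.normal(size=(5, 1)) + 1j * rng.normal(size=(5, 1))
bra_psi = psi.conj().T

sandwich = (bra_psi @ X @ psi)[0, 0]                   # <psi| x |psi>
right    = (bra_psi @ (X @ psi))[0, 0]                 # <psi | x psi>
left     = ((X.conj().T @ psi).conj().T @ psi)[0, 0]   # <x^dagger psi | psi>

assert np.isclose(sandwich, right) and np.isclose(sandwich, left)
```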
If you are not acquainted with matrices, I strongly suggest you get acquainted.