For convenience, I'll divide through by the highest-order coefficient and redefine the constants, putting the equation in the form:
$$ \frac{\partial^4 y}{\partial x^4} + 2\beta\frac{\partial ^2y}{\partial x^2} + \gamma^2 \frac{\partial ^2y}{\partial t^2} = 0 $$
Writing $y(x,t) = X(x)T(t)$ and dividing through by $XT$, separation of variables gives
$$ \frac{X^{(4)}}{X} + 2\beta\frac{X''}{X} + \gamma^2\frac{T''}{T} = 0 $$
As usual, the $X$ part and the $T$ part each have to be constant, since they depend on different variables. Supposing $T''/T = -\omega^2$, we can separate
\begin{align}
T'' + \omega^2T &= 0 \\
X^{(4)} + 2\beta X'' - \gamma^2\omega^2 X &= 0
\end{align}
The $T$ equation has a sinusoidal solution as you described. For the $X$ equation, we have the following characteristic polynomial:
$$ r^4 + 2\beta r^2 - \gamma^2\omega^2 = 0 $$
which, treated as a quadratic in $r^2$, has the roots
$$ r^2 = -\beta \pm \sqrt{\beta^2 + \gamma^2\omega^2} $$
where $\omega$ is the unknown eigenvalue. Since one of these roots for $r^2$ is positive and the other negative, the eigenfunction does indeed have the form
$$ X(x) = c_1\cos(\delta x) + c_2\sin(\delta x) + c_3\cosh(\epsilon x) + c_4\sinh(\epsilon x) $$
where
\begin{align}
\epsilon &= \sqrt{-\beta + \sqrt{\beta^2+\gamma^2\omega^2}} \\
\delta &= \sqrt{\beta + \sqrt{\beta^2+\gamma^2\omega^2}}
\end{align}
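As a sanity check, you can verify symbolically that $\pm\epsilon$ and $\pm i\delta$ really are the four roots of the characteristic polynomial; a quick sympy sketch:

```python
import sympy as sp

beta, gamma, omega = sp.symbols('beta gamma omega', positive=True)
r = sp.symbols('r')

s = sp.sqrt(beta**2 + gamma**2*omega**2)
eps = sp.sqrt(-beta + s)    # real roots r = +/- epsilon
delta = sp.sqrt(beta + s)   # imaginary roots r = +/- i*delta

char_poly = r**4 + 2*beta*r**2 - gamma**2*omega**2

# Both substitutions should reduce to zero.
print(sp.simplify(char_poly.subs(r, eps)))         # -> 0
print(sp.simplify(char_poly.subs(r, sp.I*delta)))  # -> 0
```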
Applying the two boundary conditions at $x=0$ (a clamped end: $X(0) = X'(0) = 0$), we find
$$ X(x) = A\big[\cosh(\epsilon x) - \cos(\delta x)\big] + B\big[\delta\sinh(\epsilon x) - \epsilon\sin(\delta x)\big] $$
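You can confirm directly that this combination satisfies $X(0) = X'(0) = 0$; in sympy:

```python
import sympy as sp

x, eps, delta, A, B = sp.symbols('x epsilon delta A B')
X = (A*(sp.cosh(eps*x) - sp.cos(delta*x))
     + B*(delta*sp.sinh(eps*x) - eps*sp.sin(delta*x)))

print(X.subs(x, 0))          # -> 0, so X(0) = 0
print(X.diff(x).subs(x, 0))  # -> 0, so X'(0) = 0
```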
The remaining boundary conditions at $x=L$ (a free end: $X''(L) = X'''(L) = 0$) are a bit messy, but we get:
$$\begin{align}
A\big[\epsilon^2\cosh(\epsilon L) + \delta^2 \cos(\delta L) \big] + B\big[\delta\epsilon^2\sinh(\epsilon L) + \epsilon\delta^2\sin(\delta L) \big] &= 0 \\
A\big[\epsilon^3\sinh(\epsilon L) - \delta^3\sin(\delta L)\big] + B\big[\delta\epsilon^3\cosh(\epsilon L) + \epsilon\delta^3\cos(\delta L) \big]&= 0
\end{align} \tag{*} $$
Since this is a homogeneous system, it has non-zero solutions only if the determinant of the coefficient matrix is $0$, i.e.
$$ \begin{vmatrix} \epsilon^2\cosh(\epsilon L) + \delta^2 \cos(\delta L) & \delta\epsilon^2\sinh(\epsilon L) + \epsilon\delta^2\sin(\delta L) \\ \epsilon^3\sinh(\epsilon L) - \delta^3\sin(\delta L) & \delta\epsilon^3\cosh(\epsilon L) + \epsilon\delta^3\cos(\delta L) \end{vmatrix} = 0 $$
which simplifies (using $\cosh^2 - \sinh^2 = 1$ and $\cos^2 + \sin^2 = 1$) to
$$ \epsilon\delta(\epsilon^4 + \delta^4) + 2\epsilon^3\delta^3\cosh(\epsilon L)\cos(\delta L) + \epsilon^2\delta^2(\delta^2-\epsilon^2)\sinh(\epsilon L)\sin(\delta L) = 0 $$
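If you want to double-check that simplification, here is a small sympy sketch of the same determinant:

```python
import sympy as sp

eps, delta, L = sp.symbols('epsilon delta L', positive=True)
C, c = sp.cosh(eps*L), sp.cos(delta*L)
S, s = sp.sinh(eps*L), sp.sin(delta*L)

M = sp.Matrix([
    [eps**2*C + delta**2*c, delta*eps**2*S + eps*delta**2*s],
    [eps**3*S - delta**3*s, delta*eps**3*C + eps*delta**3*c],
])

claimed = (eps*delta*(eps**4 + delta**4) + 2*eps**3*delta**3*C*c
           + eps**2*delta**2*(delta**2 - eps**2)*S*s)
print(sp.simplify(M.det() - claimed))  # -> 0
```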
You'll need to solve the above equation numerically to find $\omega$ in terms of $\beta$ and $\gamma$.
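For instance, you could scan for sign changes of the left-hand side and refine each bracket with a root finder. A minimal sketch (the values of $\beta$, $\gamma$, $L$ and the scan range are placeholder assumptions, not values from the problem):

```python
import numpy as np
from scipy.optimize import brentq

beta, gamma, L = 1.0, 1.0, 1.0  # assumed sample parameters

def freq_eq(omega):
    """Left-hand side of the frequency equation above."""
    s = np.sqrt(beta**2 + gamma**2*omega**2)
    eps, delta = np.sqrt(-beta + s), np.sqrt(beta + s)
    return (eps*delta*(eps**4 + delta**4)
            + 2*eps**3*delta**3*np.cosh(eps*L)*np.cos(delta*L)
            + eps**2*delta**2*(delta**2 - eps**2)*np.sinh(eps*L)*np.sin(delta*L))

# Bracket roots by scanning for sign changes, then refine with brentq.
grid = np.linspace(0.1, 200.0, 4000)
vals = [freq_eq(w) for w in grid]
roots = [brentq(freq_eq, a, b)
         for a, b, fa, fb in zip(grid, grid[1:], vals, vals[1:])
         if fa*fb < 0]
print(roots[:4])  # first few eigenvalues omega_n
```

A uniform scan can miss closely spaced roots, so the grid resolution may need tuning in practice.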
Once values of $\omega$ are found, the last step is to pick the constants $A$ and $B$ so that one of the two equations in $(*)$ is satisfied; the determinant condition then guarantees the other holds too. Something like
$$ X(x) = \big[\delta\epsilon^2\sinh(\epsilon L) + \epsilon\delta^2\sin(\delta L) \big]\big[\cosh(\epsilon x) - \cos(\delta x)\big] - \big[\epsilon^2\cosh(\epsilon L) + \delta^2 \cos(\delta L) \big]\big[\delta\sinh(\epsilon x) - \epsilon\sin(\delta x)\big] $$
up to a constant.
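Numerically, the eigenfunction for any root $\omega$ found above can be assembled directly from that formula (same assumed parameters as in the previous sketch):

```python
import numpy as np

def eigenfunction(x, omega, beta=1.0, gamma=1.0, L=1.0):
    """Unnormalized X(x) built from a root omega of the frequency equation."""
    s = np.sqrt(beta**2 + gamma**2*omega**2)
    eps, delta = np.sqrt(-beta + s), np.sqrt(beta + s)
    A = delta*eps**2*np.sinh(eps*L) + eps*delta**2*np.sin(delta*L)
    B = eps**2*np.cosh(eps*L) + delta**2*np.cos(delta*L)
    return (A*(np.cosh(eps*x) - np.cos(delta*x))
            - B*(delta*np.sinh(eps*x) - eps*np.sin(delta*x)))
```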
Best Answer
As we move from ODEs to PDEs, the solution space becomes infinite-dimensional. Consider the very simple PDE $$\dfrac{\partial u}{\partial x}=0\quad \text{in } \ \mathbb R^2$$ which has solutions $u(x,y)=g(y)$ for an arbitrary function $g$. Similarly, the equally simple PDE $$\dfrac{\partial^2 u}{\partial x^2}=0\quad \text{in } \ \mathbb R^2$$ has solutions $u(x,y)=g(y)+xh(y)$ for two arbitrary functions $g$ and $h$. You can see a pattern here: the number of initial conditions equals the order of the partial derivative transverse (i.e., not parallel) to the line (or curve, or surface) on which we prescribe the initial conditions.
The above pattern holds in many cases, but it should not be taken as an absolute truth. Basically, it holds for evolution-type equations (wave or diffusion) which can be recast as an ODE in some function space. For example, the heat equation $u_t=k\Delta u$ is a first-order ODE in this sense, if we think of $u(x,t)$ as a map $t\mapsto u(\cdot,t)$ taking values in some function space defined on a domain in $x$-space.
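To make that concrete, here is a minimal method-of-lines sketch: discretizing the heat equation in $x$ literally turns it into a first-order ODE system in $t$, so a single initial condition $u(x,0)$ determines the evolution (grid size, $k$, boundary conditions, and the initial profile are assumptions for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

k, n = 1.0, 50                # assumed diffusivity and grid size
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

def rhs(t, u):
    """Central second difference approximates u_xx; u = 0 at both ends."""
    du = np.zeros_like(u)
    du[1:-1] = k*(u[2:] - 2*u[1:-1] + u[:-2])/dx**2
    return du

u0 = np.sin(np.pi*x)          # the single initial condition u(x, 0)
sol = solve_ivp(rhs, (0.0, 0.1), u0)
print(sol.y[:, -1].max())     # ~ exp(-pi**2 * 0.1) for this profile
```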
Here is a tricky one: how many conditions on the axis $y=0$ can we impose on solutions of the Laplace equation $u_{xx}+u_{yy}=0$? On one hand, we can fulfill two conditions $u(x,0)=g(x)$ and $u_y(x,0)=h(x)$ if $g$ and $h$ are very nice (real analytic). On the other hand, the single condition $u(x,0)=g(x)$ determines a unique solution with desirable properties (such as boundedness) in the upper half-plane, where $g$ no longer needs to be nice.