I assume that you made a typo and the choice for the input should be
$$
v = \ddot{y}_d - k_1\,e - k_2\,\dot{e},
$$
i.e. using $\ddot{y}_d$ instead of $\ddot{y}$. Here $\ddot{y}_d$ acts as a feedforward term, which ensures that once the error is zero it remains zero for any reference $y_d$ that is at least twice differentiable.
The input-output linearization gives $\ddot{y} = v$. So, with $e = y - y_d$, the error dynamics become
\begin{align}
\ddot{e} &= \ddot{y} - \ddot{y}_d \\
&= v - \ddot{y}_d \\
&= \ddot{y}_d - k_1\,e - k_2\,\dot{e} - \ddot{y}_d \\
&= - k_1\,e - k_2\,\dot{e}
\end{align}
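As a numerical sanity check (my own sketch, not part of the original argument), the resulting error dynamics $\ddot{e} = -k_1\,e - k_2\,\dot{e}$ can be simulated; the gains $k_1 = k_2 = 1$ and the fixed-step RK4 integrator below are arbitrary example choices:

```python
import numpy as np

# Error dynamics  e'' = -k1*e - k2*e'  written as a first-order system z' = A z
k1, k2 = 1.0, 1.0          # example gains, both positive
A = np.array([[0.0, 1.0],
              [-k1, -k2]])

z = np.array([1.0, 0.0])   # initial error e(0) = 1, e'(0) = 0
dt, T = 1e-3, 20.0
for _ in range(int(T / dt)):   # fixed-step RK4 integration
    kA = A @ z
    kB = A @ (z + 0.5 * dt * kA)
    kC = A @ (z + 0.5 * dt * kB)
    kD = A @ (z + dt * kC)
    z = z + dt / 6.0 * (kA + 2 * kB + 2 * kC + kD)

print(np.linalg.norm(z))   # essentially zero: the error has decayed
```

As expected, the error converges to zero exponentially regardless of the (twice differentiable) reference, since the feedforward has cancelled $\ddot{y}_d$ out of the error dynamics.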
This $v$ is a common choice in linear control: as stated before, $\ddot{y}_d$ acts as feedforward, while the remaining two terms act as state feedback. Namely, after applying the feedforward, i.e. defining $v = \ddot{y}_d + w$, the error dynamics can also be written as
$$
\dot{z} =
\underbrace{
\begin{bmatrix}
0 & 1 \\ 0 & 0
\end{bmatrix}
}_A z +
\underbrace{
\begin{bmatrix}
0 \\ 1
\end{bmatrix}
}_B w,
$$
with $z = \begin{bmatrix}e & \dot{e}\end{bmatrix}^\top$. Now, using state feedback $w = -K\,z$, we get the closed-loop dynamics $\dot{z} = (A - B\,K)\,z$, which decays exponentially to zero if $A - B\,K$ is Hurwitz. A suitable $K$ can be found with pole placement or LQR, but for $K = \begin{bmatrix}k_1 & k_2\end{bmatrix}$ the characteristic polynomial of $A - B\,K$ is $s^2 + k_2\,s + k_1$, which is Hurwitz precisely when $k_1, k_2 > 0$.
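That claim is easy to spot-check numerically; a small sketch (the gain pairs below are arbitrary examples) computing the eigenvalues of $A - B\,K$:

```python
import numpy as np

# Double-integrator error dynamics in state-space form
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# A few arbitrary positive gain pairs; the characteristic polynomial of
# A - B K is s^2 + k2 s + k1, so any k1, k2 > 0 gives a Hurwitz matrix.
for k1, k2 in [(1.0, 1.0), (0.1, 5.0), (10.0, 0.2)]:
    K = np.array([[k1, k2]])
    eigs = np.linalg.eigvals(A - B @ K)
    print(k1, k2, eigs.real.max() < 0)   # True for every pair
```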
Suppose one would instead choose $v = \ddot{y} - k_1\,e - k_2\,\dot{e}$. Plugging this into $\ddot{y} = v$ gives $k_1\,e + k_2\,\dot{e} = 0$. However, since $v$ is chosen to be equal to $\ddot{y}$, solving for $v$ would mean solving $v = v - k_1\,e - k_2\,\dot{e}$. This equation only holds when $k_1\,e + k_2\,\dot{e} = 0$, but you cannot choose what $e$ and $\dot{e}$ are at any given moment; and if that equation did hold, then every value of $v$ would satisfy it. Either way, it does not lead to a sensible result.
Let us rewrite your equation as $\dot{x}_2 = -x_2^3 + d(t)$, where $|d(t)|\le D$ for all $t$. If $x_2$ is positive, then $\dot{x}_2$ is negative for all $x_2>D^{1/3}$; if $x_2$ is negative, then $\dot{x}_2$ is positive for all $x_2<-D^{1/3}$. This implies that the set $|x_2|\le D^{1/3}$ is invariant and attractive: if the initial condition $x_2(t_0)$ belongs to the set, then the trajectory remains in the set; if $x_2(t_0)$ is outside the set, then the trajectory converges to the set.
Thus, $x_2(t)$ is bounded. However, the claim that $|x_2|\le D^{1/3}$ is valid only if you know for sure that the initial condition satisfies this inequality, or if you consider the limit $t \to \infty$.
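A quick numerical illustration of this invariance argument (my own sketch; the disturbance $d(t) = D\sin t$ and the initial condition are arbitrary choices satisfying $|d(t)| \le D$):

```python
import numpy as np

# x2' = -x2^3 + d(t), with a disturbance bounded by D
D = 1.0
d = lambda t: D * np.sin(t)          # example disturbance, |d(t)| <= D
f = lambda t, x: -x ** 3 + d(t)

x2, t, dt = 3.0, 0.0, 1e-3           # start well outside |x2| <= D^(1/3)
for _ in range(int(10.0 / dt)):      # fixed-step RK4 integration
    kA = f(t, x2)
    kB = f(t + 0.5 * dt, x2 + 0.5 * dt * kA)
    kC = f(t + 0.5 * dt, x2 + 0.5 * dt * kB)
    kD = f(t + dt, x2 + dt * kC)
    x2 += dt / 6.0 * (kA + 2 * kB + 2 * kC + kD)
    t += dt

print(abs(x2) <= D ** (1 / 3) + 1e-3)   # True: trajectory entered the set
```

The trajectory, starting outside the set, converges into $|x_2|\le D^{1/3}$ and then stays there, as the argument above predicts.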
Even though the linearization is not controllable, the nonlinear system can still be stabilized with linear feedback. I propose the control law
$$ u = -x_1 - x_2 \tag{1} $$
which leads to the closed loop dynamics
$$ \begin{align} \dot{x}_1 &= x_2^3 \\ \dot{x}_2 &= -x_1 - x_2 \end{align} \tag{2} $$
Take the Lyapunov function
$$ V(x) = x_1^2 + 2 x_1 x_2 + x_2^2 + \frac{1}{2} x_2^4 $$
Indeed, $V(x) = (x_1 + x_2)^2 + \frac{1}{2} x_2^4$, so $V$ is positive definite with a unique minimum at $(0, 0)$. The derivative along trajectories of $(2)$ is
$$ \dot{V}(x) = -2 (x_1 + x_2)^2 $$
which is negative semi-definite (it vanishes on the line $x_1 = -x_2$). On that line, $(2)$ gives $\dot{x}_2 = 0$ but $\dot{x}_1 = x_2^3 = -x_1^3$, so no solution can stay in the set where $\dot{V}(x) = 0$ other than $x_1 = x_2 = 0$.
So, by LaSalle, the system is globally asymptotically stabilized by the linear feedback $(1)$.
This is probably also the "simplest" stabilizing control law (linear feedback with both gains being 1), but that depends on your definition of simple.
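To illustrate, here is a small simulation sketch of the closed loop $(2)$ (the initial condition on the line $x_1 = -x_2$, where $\dot{V} = 0$, is an arbitrary choice), checking that the state shrinks and $V$ decreases:

```python
import numpy as np

# Closed-loop system (2):  x1' = x2^3,  x2' = -x1 - x2
def f(x):
    return np.array([x[1] ** 3, -x[0] - x[1]])

def V(x):
    # V = x1^2 + 2 x1 x2 + x2^2 + x2^4/2 = (x1 + x2)^2 + x2^4/2
    return (x[0] + x[1]) ** 2 + 0.5 * x[1] ** 4

x = np.array([2.0, -2.0])   # start on the line x1 = -x2, where V'(x) = 0
dt, steps = 5e-3, 40_000    # integrate for 200 time units with RK4
V0 = V(x)
for _ in range(steps):
    kA = f(x)
    kB = f(x + 0.5 * dt * kA)
    kC = f(x + 0.5 * dt * kB)
    kD = f(x + dt * kC)
    x = x + dt / 6.0 * (kA + 2 * kB + 2 * kC + kD)

print(np.linalg.norm(x), V(x) < V0)   # small residual state; V has decreased
```

Note that the convergence near the origin is slow (roughly $x_1 \sim 1/\sqrt{2t}$, since $\dot{x}_1 \approx -x_1^3$ once $x_2 \approx -x_1$), which is consistent with LaSalle giving asymptotic, not exponential, stability here.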