For a linear system, the superposition principle holds since, by definition, a linear system has the following property:
(1) if $y_1$ is the output for input $x_1$ and
(2) if $y_2$ is the output for input $x_2$ then
(3) the output is $ay_1 + by_2$ for input $ax_1 + bx_2$
In other words, the output for a superposition of inputs is the superposition of the associated outputs.
So, if the differential equation for your system is linear, e.g., the harmonic oscillator, the Superposition Principle holds.
What, then, are you trying to prove?
Prove the superposition principle for inhomogeneous linear equations of motion used in deriving the motion of a driven oscillator. Will it still apply if the force on an oscillator was $-kx^2$ instead of $-kx$?
This is, I think, misworded. For example, for the mass on a (linear) spring system, the force on the mass due to the spring is, by Hooke's law, $-kx$.
A driving force, on the other hand, would be given as a function of time: $F_d = f(t)$.
Then, the net force on the mass is the sum of the driving force and the spring force, $F = f(t) - kx$, which leads to a linear differential equation:
$$m \ddot x +kx = f(t) $$
and thus, the Superposition Principle holds by definition.
This is easy to show by assuming $f(t) = f_1(t) + f_2(t)$ and $x(t) = x_1(t) + x_2(t)$ and inserting into the differential equation.
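This is also easy to confirm numerically. The sketch below (plain Python with an RK4 integrator; the values of $m$, $k$ and the two driving forces are arbitrary choices of mine) integrates the driven linear oscillator separately for $f_1$, $f_2$ and $f_1+f_2$, all starting from rest at the origin, and checks that the third response is the sum of the first two:

```python
import math

def solve(f, m=1.0, k=4.0, dt=1e-3, steps=5000):
    """Integrate m*x'' + k*x = f(t) with RK4 from rest at x = 0; return x at the final time."""
    x, v, t = 0.0, 0.0, 0.0
    def acc(t_, x_):
        return (f(t_) - k * x_) / m
    for _ in range(steps):
        k1x, k1v = v, acc(t, x)
        k2x, k2v = v + 0.5*dt*k1v, acc(t + 0.5*dt, x + 0.5*dt*k1x)
        k3x, k3v = v + 0.5*dt*k2v, acc(t + 0.5*dt, x + 0.5*dt*k2x)
        k4x, k4v = v + dt*k3v, acc(t + dt, x + dt*k3x)
        x += dt*(k1x + 2*k2x + 2*k3x + k4x)/6
        v += dt*(k1v + 2*k2v + 2*k3v + k4v)/6
        t += dt
    return x

f1 = lambda t: math.sin(3*t)
f2 = lambda t: math.cos(1.5*t)

x1  = solve(f1)
x2  = solve(f2)
x12 = solve(lambda t: f1(t) + f2(t))
print(abs(x12 - (x1 + x2)))   # essentially zero: superposition holds
```

Because the equation (and the RK4 scheme applied to it) is linear in the state and the forcing, the agreement is exact up to floating-point rounding.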
However, the way I read the problem as stated in your edit, it is the restoring force, not the driving force, that is $-kx^2$.
If that is in fact the case, the resulting differential equation is non-linear
$$m \ddot x +kx^2 = f(t) $$
and thus, the Superposition Principle will not hold since
$$(x_1 + x_2)^2 = x_1^2 + x_2^2 + 2x_1x_2 \ne x_1^2 + x_2^2$$
I think you're worrying too much. This is the correct approach (I'm going to be slightly flippant, so don't take this first paragraph too seriously on a first reading :) ):
- Step 1: Understand the meaning of the Picard-Lindelöf Theorem;
- Step 2: Understand that, by assigning state variables to all but the highest order derivative, you can rework $\ddot x +\omega^2\,x=0$ into a vector version of the standard form $\dot{\mathbf{u}} = f(\mathbf{u})$ addressed by the PL theorem, and that, in this case, $f(\mathbf{u})$ fulfills the conditions of the PL theorem (it is Lipschitz continuous);
- Step 3: Choose your favorite method for finding a solution to the DE and boundary conditions - tricks you learn in differential equations 101, trial and error stuffing guesses in and seeing what happens ..... anything! .... and then GO FOR IT!
Okay, that's a bit flippant, but the point is that you know from basic theoretical considerations there must be a solution and, however you solve the equation, if you can find a solution that fits the equation and boundary conditions, you simply must have the correct and only solution no matter how you deduce it.
In particular, the above theoretical considerations hold whether the variables are real or complex, so if you find a solution using complex variables and they fit the real boundary conditions, then the solution must be the same as the one that is to be found by sticking with real variable notation. Indeed, one can define the notions of $\sin$ and $\cos$ through the solutions of $\ddot x +\omega^2\,x=0$ and they have to be equivalent to complex exponential solutions through the PL theorem considerations above. You can then think of this enforced equivalence as the reason for your own beautifully worded insight that you have worked out for yourself:
"So using sin/cos and even $e^{i\omega t}$ is essentially equivalent so long as you allow for complex constants to provide a conversion factor between the two."
Drop the word "essentially" and you've got it all sorted!
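To see the exact equivalence concretely, here is a minimal check (plain Python; the constants $C$, $D$, $\omega$ are arbitrary example values of mine): the real solution $C\cos\omega t + D\sin\omega t$ and the complex form $\mathrm{Re}\left[A\,e^{i\omega t}\right]$ with the single complex constant $A = C - iD$ agree identically:

```python
import cmath, math

omega, C, D = 2.0, 0.7, -1.3   # arbitrary example constants
A = C - 1j*D                   # one complex constant encodes both real ones

for t in [0.0, 0.3, 1.7, 4.2]:
    real_form    = C*math.cos(omega*t) + D*math.sin(omega*t)
    complex_form = (A*cmath.exp(1j*omega*t)).real
    assert abs(real_form - complex_form) < 1e-12
```

The complex constant is precisely the "conversion factor between the two" mentioned in the quote.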
Actually, let's go back to the Step 2 in my "tongue in cheek" (but altogether theoretically sound) answer as it shows us how to unite all of these approaches and bring in physics nicely. Break the equation up into a coupled pair of first order equations by writing:
$$\dot{x} = \omega\,v;\, \dot{v} = -\omega\,x$$
and now we can write things succinctly as a matrix equation:
$$\dot{X} = -i\,\omega \, X;\quad i\stackrel{def}{=}\left(\begin{array}{cc}0&-1\\1&0\end{array}\right)\text{ and } X = \left(\begin{array}{c}x\\v\end{array}\right)\tag{1}$$
whose unique solution is the matrix equation $X = \exp(-i\,\omega\,t)\,X(0)$. Here $\exp$ is the matrix exponential. Note also that $i$, although a real-coefficient matrix, satisfies $i^2=-\mathrm{id}$. Now, you may know that one perfectly good way to represent complex numbers is the following: the field $(\mathbb{C},\,+,\,\ast)$ is isomorphic to the commutative field of matrices of the form:
$$\left(\begin{array}{cc}x&-y\\y&x\end{array}\right);\quad x,\,y\in\mathbb{R}\tag{2}$$
together with matrix multiplication and addition. For matrices of this special form, matrix multiplication is commutative (although of course it is not generally so) and the isomorphism is exhibited by the bijection
$$z\in\mathbb{C}\;\leftrightarrow\,\left(\begin{array}{cc}\mathrm{Re}(z)&-\mathrm{Im}(z)\\\mathrm{Im}(z)&\mathrm{Re}(z)\end{array}\right)\tag{3}$$
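A small numerical check of this isomorphism (a plain-Python sketch; the random sampling is my own choice): mapping two complex numbers to matrices of the form (2), multiplying the matrices, and mapping back agrees with ordinary complex multiplication, and the matrix product does commute within this family:

```python
import random

def to_mat(z):
    """Complex number -> its 2x2 real-matrix representative, as in (3)."""
    return [[z.real, -z.imag], [z.imag, z.real]]

def mat_mul(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

random.seed(0)
for _ in range(100):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    w = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert close(mat_mul(to_mat(z), to_mat(w)), to_mat(z * w))   # homomorphism
    assert close(mat_mul(to_mat(z), to_mat(w)),
                 mat_mul(to_mat(w), to_mat(z)))                  # commutativity
```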
So if, now, we let $Z$ be a $2\times2$ matrix of this form, then we can solve (1) by mapping the state vector $X = \left(\begin{array}{c}x\\v\end{array}\right)$ bijectively to the $2\times 2$ matrix $Z = \left(\begin{array}{cc}x&-v\\v&x\end{array}\right)$ and solving the equation $\dot{Z} = -i\,\omega\,Z$, i.e. $Z(t) = \exp(-i\,\omega\,t)\,Z(0)$, where $Z(0)$ is the $2\times 2$ matrix of the form (2) built from the values of $x(0)$ and $v(0)$ that fulfill the boundary conditions. Taking the first column of the resulting $2\times 2$ matrix solution $Z(t)$ then recovers $X(t)$.
This is precisely equivalent to the complex notation method you have been using, as I hope you will see if you explore the above a little. The phase angles are encoded by the phase of the $2\times2$ matrix $Z$, thought of as a complex number by the isomorphism described above.
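You can verify directly that this matrix exponential is a rotation matrix (a plain-Python sketch; the Taylor-series helper `mexp` and the values of $\omega$ and $t$ are mine, for illustration only):

```python
import math

def mexp(M, n_terms=30):
    """exp(M) for a 2x2 matrix via its Taylor series (lists of lists)."""
    R = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at the identity
    T = [[1.0, 0.0], [0.0, 1.0]]   # running term M^k / k!
    for k in range(1, n_terms):
        T = [[sum(T[i][l] * M[l][j] for l in range(2)) / k
              for j in range(2)] for i in range(2)]
        R = [[R[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return R

i_mat = [[0.0, -1.0], [1.0, 0.0]]   # the matrix 'i' of equation (1)
# i^2 = -id, as claimed
assert [[sum(i_mat[r][l]*i_mat[l][c] for l in range(2)) for c in range(2)]
        for r in range(2)] == [[-1.0, 0.0], [0.0, -1.0]]

omega, t = 2.0, 0.9
E = mexp([[-omega*t*e for e in row] for row in i_mat])   # exp(-i w t)
c, s = math.cos(omega*t), math.sin(omega*t)
rotation = [[c, s], [-s, c]]        # clockwise rotation by w*t
assert all(abs(E[r][j] - rotation[r][j]) < 1e-12
           for r in range(2) for j in range(2))
```

Applied to $X(0)$, this rotation gives $x(t) = x(0)\cos\omega t + v(0)\sin\omega t$, which indeed solves the original equation.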
Moreover, there is some lovely physics here. Consider (half) the square norm of the state vector $X$, namely $E = \frac{1}{2}\,\langle X,\,X\rangle = \frac{1}{2}(x^2 + v^2)$; you can immediately deduce from (1) that
$$\dot{E} = \langle X,\,\dot{X}\rangle = X^T\,\dot{X} = -\omega\,X^T \,i\, X = 0\tag{4}$$
This has two interpretations. Firstly, $E$ is the total energy of the system (up to the constant factor $m\,\omega^2$, since $v=\dot x/\omega$), partitioned into a potential part $\frac{1}{2}\,x^2$ and a kinetic part $\frac{1}{2}\,v^2$. Secondly, (4) shows that the state vector, written in Cartesian components, follows the circle $x^2+v^2=2\,E$, and indeed this motion is uniform circular motion at $\omega$ radians per unit time. So simple harmonic motion is the motion of a Cartesian component of uniform circular motion.
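Both facts are easy to observe numerically (a plain-Python sketch; the RK4 integrator, $\omega$, the initial state and the integration time are my own example choices). Starting from $x(0)=1$, $v(0)=0$, the exact solution is $x=\cos\omega t$, $v=-\sin\omega t$, so the phase angle $\mathrm{atan2}(v,x)$ should decrease at exactly $\omega$ radians per unit time while $E$ stays fixed:

```python
import math

omega = 1.5
x, v = 1.0, 0.0               # initial state; exact solution x = cos(wt), v = -sin(wt)
dt, steps = 1e-4, 20000       # integrate the system (1) up to t = 2
E0 = 0.5*(x*x + v*v)

def deriv(x_, v_):
    return omega*v_, -omega*x_    # the coupled pair: x' = w v, v' = -w x

for _ in range(steps):
    k1x, k1v = deriv(x, v)
    k2x, k2v = deriv(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
    k3x, k3v = deriv(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
    k4x, k4v = deriv(x + dt*k3x, v + dt*k3v)
    x += dt*(k1x + 2*k2x + 2*k3x + k4x)/6
    v += dt*(k1v + 2*k2v + 2*k3v + k4v)/6

assert abs(0.5*(x*x + v*v) - E0) < 1e-9          # E conserved, as in (4)
assert abs(math.atan2(v, x) + omega*2.0) < 1e-6  # phase = -w t: uniform (clockwise) rotation
```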
You could also solve the problem by beginning with (1), deducing (4) and then making the substitution
$$x=\sqrt{2\,E}\,\cos(\theta(t));\quad\, v=\sqrt{2\,E}\,\sin(\theta(t))\tag{5}$$
which is validated by the conservation law $x^2+v^2=2\,E$ with $\dot{E}=0$. Then substitute $x$ back into the original SHM equation to deduce that
$$\theta(t) = \pm\omega\,t+\theta(0)\tag{6}$$
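To spell out that last step (a short sketch; the $\pm$ in (6) reflects the two sign choices consistent with (5)): with $x=\sqrt{2E}\,\cos\theta(t)$, differentiating twice and substituting into $\ddot x + \omega^2 x = 0$ gives

$$\ddot x + \omega^2 x = \sqrt{2E}\left[\left(\omega^2-\dot\theta^2\right)\cos\theta - \ddot\theta\,\sin\theta\right] = 0\,,$$

which is satisfied for all $t$ by taking $\dot\theta^2=\omega^2$ and $\ddot\theta=0$, i.e. $\dot\theta=\pm\omega$, which is exactly (6).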
Best Answer
Your doubts about the solution given are justified. The method of solution seems to be invalid and misguided - but see my footnote. However, the correct answer choice is still (A).
If the potential energy is $V=k|x|^3$ then (as you observe) the motion is not simple harmonic and cannot be described by $x=A\sin(\omega t)$. The differential equation of motion is
$$m\frac{d^2x}{dt^2}+3k\,x\,|x|=0$$ (the force $-dV/dx=-3k|x|^2\,\mathrm{sgn}(x)=-3k\,x|x|$ must point toward the origin on both sides) which is not of the form $$\frac{d^2x}{dt^2}+\omega^2 x=0\,.$$
The equation of motion does not have a simple solution. However, we can proceed as in Period $T$ of oscillation with cubic force function. We can write the conservation of energy for the oscillator as
\begin{align}\frac12m\dot x^2+k|x|^3 &=ka^3\\ \implies \dot x^2 &=\frac{2k}{m}\left(a^3-|x|^3\right)\end{align}
where $a$ is the amplitude. Change variables to $x=ay$. Then:
\begin{align}a^2\dot y^2 &=\frac{2k}{m}a^3\left(1-|y|^3\right)\\ \implies \frac{dy}{dt} &=\sqrt{\frac{2ka}{m}\left(1-|y|^3\right)}\,.\end{align}
The oscillation is symmetric about the equilibrium point, so the period is given by
$$T=\int dt=4\sqrt{\frac{m}{2ka}}\int_0^1 \frac{1}{\sqrt{1-y^3}}~dy\,.$$
Contrary to appearances, the integral converges: near $y=1$ the integrand behaves like $\left(3(1-y)\right)^{-1/2}$, which is integrable. Its value is approximately 1.40218.
So the period is proportional to $\frac{1}{\sqrt{a}}$ and the answer is (A), but not for the reason given in the solution.
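The quoted value of the integral is easy to reproduce with standard-library Python (a sketch; the substitution $y=1-u^2$ is my own device to remove the integrable endpoint singularity at $y=1$). Since $1-(1-u^2)^3=u^2\left(3-3u^2+u^4\right)$, the integral equals $\int_0^1 2\left(3-3u^2+u^4\right)^{-1/2}du$, which is smooth and yields to composite Simpson's rule:

```python
import math

def g(u):
    # Integrand after substituting y = 1 - u^2 into 1/sqrt(1 - y^3):
    # dy = -2u du and 1 - y^3 = u^2 (3 - 3u^2 + u^4), so the u cancels.
    return 2.0 / math.sqrt(3.0 - 3.0*u*u + u**4)

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1)*h) for i in range(1, n//2 + 1))
    s += 2 * sum(f(a + 2*i*h) for i in range(1, n//2))
    return s * h / 3

I = simpson(g, 0.0, 1.0)
print(I)   # ~ 1.40218
```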
Note: The method of solution in the image text does actually give the correct dependence of $T$ on amplitude $a$ for any potential of the form $k|x|^n$. So perhaps there is some justification for it.
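One can also confirm the amplitude scaling by brute force (a plain-Python sketch; the RK4 integrator and the choices $k=m=1$ are mine). Integrating $m\ddot x = -3k\,x|x|$ from rest at $x=a$, the time to first reach $x=0$ is a quarter period by the symmetry of the potential, and quadrupling the amplitude should halve the period:

```python
import math

def period(a, k=1.0, m=1.0, dt=1e-4):
    """Period of m x'' = -3k x|x|, starting from rest at x = a.
    By symmetry of the potential k|x|^3, the period is four times
    the time taken to first reach x = 0."""
    x, v, t = a, 0.0, 0.0
    acc = lambda x_: -3.0*k*x_*abs(x_)/m
    while x > 0.0:
        k1x, k1v = v, acc(x)
        k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x)
        k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x)
        k4x, k4v = v + dt*k3v, acc(x + dt*k3x)
        x += dt*(k1x + 2*k2x + 2*k3x + k4x)/6
        v += dt*(k1v + 2*k2v + 2*k3v + k4v)/6
        t += dt
    return 4.0*t

T1, T4 = period(1.0), period(4.0)
print(T4 / T1)   # ~ 0.5: quadrupling the amplitude halves the period
```

For $a=k=m=1$ the formula above predicts $T=4\sqrt{1/2}\times 1.40218\approx 3.966$, which the simulation matches.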