Here's a nifty trick - you can transform any linear $n$th-order (in)homogeneous ordinary differential equation into a linear $1$st-order (in)homogeneous $n$-dimensional system of ODEs, which can be solved with the power of matrix exponentials. Behold:
Write $v=y'$, and reinterpret the differential equation as a first-order system:
$$\begin{pmatrix} y \\ v \end{pmatrix}' = \begin{pmatrix}0 & 1\\ -2(c+e^{A+Bx}) & 2a(b-x)\end{pmatrix} \begin{pmatrix} y \\ v \end{pmatrix}.$$
If we denote the column vector $(y,v)^T$ as $\vec{y}$, we can write this system as $\vec{y}'=P(x)\vec{y}$. The solution has the same form as in the $1$-dimensional case, at least when $P(x)$ commutes with its integral, but with the matrix exponential,
$$\vec{y} = \exp\left(\int_{x_0}^x P(\tau)d\tau\right) \vec{y}_0 $$
$$= \exp \begin{pmatrix}0 & x-x_0\\ \frac{2}{B}e^A(e^{Bx_0}-e^{Bx})+2c(x_0-x) & 2ab(x-x_0)+a(x_0^2-x^2)\end{pmatrix} \vec{y}_0. $$
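Before wrestling with the closed form, the first-order system itself can be integrated numerically, which is useful for cross-checking any symbolic manipulations. A minimal sketch; all parameter values below are placeholders, not taken from the original problem:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters for the ODE y'' = 2a(b - x) y' - 2(c + e^{A + Bx}) y.
a, b, c, A, B = 0.5, 1.0, 0.3, 0.1, 0.2

def P(x):
    """Companion matrix of the first-order system (y, v)' = P(x) (y, v)."""
    return np.array([[0.0, 1.0],
                     [-2.0 * (c + np.exp(A + B * x)), 2.0 * a * (b - x)]])

def rhs(x, Y):
    return P(x) @ Y

# Integrate from x0 = 0 to x = 1 with initial data y(0) = 1, y'(0) = 0.
sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
print(sol.y[0, -1])  # y(1): first component of the state vector
```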
Before we actually attempt to synthesize the incoming monstrosity, let's prepare ourselves by writing this matrix using variables: $\begin{pmatrix} 0 & \alpha \\ \beta & \gamma\end{pmatrix}$. Because, let's face it, the full analytical solution is going to be insanely complicated. Also, we are going to do it using the Putzer Algorithm (just keep in mind that we need to distinguish between $x$ and $t$; while using this method we must fix $x$). The eigenvalues are $\lambda_{1,2} = (\gamma\pm\sqrt{\gamma^2+4\alpha\beta})/2$. Solving for $p_1$ and $p_2$ using generic methods, and plugging them into the formula with $t=1$, we get
$$\vec{y}=\left[ e^{\lambda_1}\begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix} +\frac{e^{\lambda_1}-e^{\lambda_2}}{\lambda_1-\lambda_2} \begin{pmatrix} -\lambda_1 & \alpha \\ \beta & \gamma-\lambda_1 \end{pmatrix}\right]\vec{y}_0.$$
Just remember that the $y$ you want is the first component of $\vec{y}$, that $\lambda_{1,2}$ are functions of $\alpha,\beta,\gamma$, and that $\alpha,\beta,\gamma$ are themselves functions of $x$, so this technically contains your full analytical solution, just with a lot of substitutions. Have fun with your research, OP. :)
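As a sanity check, the Putzer-based exponential of $\begin{pmatrix}0&\alpha\\\beta&\gamma\end{pmatrix}$ can be compared numerically against `scipy.linalg.expm`. A minimal sketch; the function name and the sample values of $\alpha,\beta,\gamma$ are mine, and it assumes distinct eigenvalues $\lambda_1\neq\lambda_2$:

```python
import numpy as np
from scipy.linalg import expm

def putzer_exp_2x2(alpha, beta, gamma):
    """exp of M = [[0, alpha], [beta, gamma]] at t = 1 via Putzer's algorithm,
    assuming distinct eigenvalues lambda_1 != lambda_2."""
    M = np.array([[0.0, alpha], [beta, gamma]])
    disc = np.sqrt(gamma**2 + 4 * alpha * beta + 0j)    # +0j allows complex eigenvalues
    lam1, lam2 = (gamma + disc) / 2, (gamma - disc) / 2
    r2 = (np.exp(lam1) - np.exp(lam2)) / (lam1 - lam2)  # Putzer coefficient at t = 1
    return (np.exp(lam1) * np.eye(2) + r2 * (M - lam1 * np.eye(2))).real

# Arbitrary sample values; here the eigenvalues are 2 and -1.
alpha, beta, gamma = 1.0, 2.0, 1.0
M = np.array([[0.0, alpha], [beta, gamma]])
print(np.allclose(putzer_exp_2x2(alpha, beta, gamma), expm(M)))  # True
```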
The power series coefficients for $y(t)=\sum_ky_kt^k$ follow the recursion
$$
(k+1)y_{k+1}=Ay_k-\sum_{j=0}^kH(y_j,y_{k-j}).\tag1
$$
Their size is bounded by the recursive inequality
$$
(k+1)\|y_{k+1}\| \le \|A\|\,\|y_k\|+\sum_{j=0}^k\|H\|\,\|y_j\|\,\|y_{k-j}\|
\tag2
$$
in the induced operator and tensor norms. Now compare this to the scalar equation
$$
u'=au+hu^2,~~a=\|A\|,~h=\|H\|,~u(t)=\sum_ku_kt^k\tag3
$$
which has an exact solution
$$
au(t)^{-1}+h=(au_0^{-1}+h)e^{-at}\implies u(t)=\frac{au_0e^{at}}{a-hu_0(e^{at}-1)}.\tag4
$$
The power series coefficients of $u(t)=\sum_ku_kt^k$ follow the recursion
$$
(k+1)u_{k+1}=au_k+h\sum_{j=0}^ku_ju_{k-j}.\tag5
$$
Comparing this recursion (5) with the recursive norm inequality (2) resulting from the differential equation, one concludes that if $\|y_j\|\le u_j$ for $j=0,\dots,k$, then also $\|y_{k+1}\|\le u_{k+1}$. By induction, this holds for all $k$ if $u_0=\|y_0\|$.
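The majorant property can be illustrated numerically for a concrete bilinear $H$. A sketch under made-up data: the matrix $A$, the vector $w$ defining $H(x,y)=(x\cdot y)\,w$ (so that $\|H\|\le\|w\|$ in the Euclidean norm), and $y_0$ are all arbitrary choices of mine:

```python
import numpy as np

# Hypothetical concrete instance: H(x, y) = (x . y) * w is bilinear with
# ||H(x, y)|| <= ||w|| ||x|| ||y|| in the Euclidean norm.
A = np.array([[0.5, 0.2], [-0.1, 0.3]])
w = np.array([0.4, -0.2])
a = np.linalg.norm(A, 2)   # induced operator norm ||A||
h = np.linalg.norm(w)      # bound for ||H||
y0 = np.array([0.7, -0.5])

y = [y0]                    # vector coefficients from recursion (1)
u = [np.linalg.norm(y0)]    # scalar majorant from recursion (5), u_0 = ||y_0||
ok = True
for k in range(20):
    conv_y = sum(np.dot(y[j], y[k - j]) * w for j in range(k + 1))
    y.append((A @ y[k] - conv_y) / (k + 1))           # (k+1) y_{k+1} = A y_k - sum H(y_j, y_{k-j})
    conv_u = sum(u[j] * u[k - j] for j in range(k + 1))
    u.append((a * u[k] + h * conv_u) / (k + 1))       # (k+1) u_{k+1} = a u_k + h sum u_j u_{k-j}
    ok = ok and (np.linalg.norm(y[-1]) <= u[-1] + 1e-12)
print(ok)  # True: ||y_k|| <= u_k for every computed k
```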
The exact solution for $u$ has a pole at $t=\rho=\frac1a\ln(1+\frac{a}{hu_0})\le\frac1{hu_0}$. Additionally one sees that $u_k$ is a polynomial in $u_0$ of degree $k+1$.
From the general theory of the radius of convergence one concludes that for any $r<\rho$ there is a constant $C_r$ so that $\|y_k\|\le u_k\le C_rr^{-k-1}$. For $r=\frac12\rho$, for example, one thus gets $$\|y_k\|\le C(2\|H\|\,\|y_0\|)^{k+1}.$$
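Recursion (5) and the closed-form solution of the scalar equation can be checked against each other numerically: the partial sums of the series should reproduce $u(t)=\frac{au_0e^{at}}{a-hu_0(e^{at}-1)}$ inside the radius of convergence. A minimal sketch with arbitrary sample values of $a$, $h$, $u_0$:

```python
import math

def u_coeffs(a, h, u0, n):
    """First n+1 Taylor coefficients of u' = a*u + h*u^2 via recursion (5):
    (k+1) u_{k+1} = a u_k + h * sum_j u_j u_{k-j}."""
    u = [u0]
    for k in range(n):
        conv = sum(u[j] * u[k - j] for j in range(k + 1))
        u.append((a * u[k] + h * conv) / (k + 1))
    return u

def u_exact(a, h, u0, t):
    """Closed form u(t) = a*u0*e^{at} / (a - h*u0*(e^{at} - 1))."""
    e = math.exp(a * t)
    return a * u0 * e / (a - h * u0 * (e - 1))

# Arbitrary values; t = 0.1 is well inside the radius of convergence here.
a, h, u0, t = 1.0, 0.5, 0.3, 0.1
series = sum(c * t**k for k, c in enumerate(u_coeffs(a, h, u0, 25)))
print(abs(series - u_exact(a, h, u0, t)) < 1e-12)  # True
```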
Taking $y'=y^2p(y)$ and $y=e^{\xi}$ we arrive at \begin{equation} pp'_{\xi}+2p^2+4p+1=0, \end{equation} which is separable. After integrating and rearranging you'll get \begin{align} 1=cy\left(2p+\sqrt 2+2\right)^{(1+\sqrt 2)/4}\left(-2p+\sqrt 2-2\right)^{1-\sqrt 2}. \end{align} Now I'll take $y=1/u$, giving the equation \begin{align} u=c\left(-2u'_x+\sqrt 2+2\right)^{(1+\sqrt 2)/4}\left(2u'_x+\sqrt 2-2\right)^{1-\sqrt 2}. \end{align} For equations of the form \begin{align} u=F(u') \end{align} the solution is given implicitly, via the method of integration by differentiation in the "Handbook of Exact Solutions for Ordinary Differential Equations", 2nd ed., by Polyanin and Zaitsev, as \begin{align} u=F(s), \quad x=\int\frac{F'(s)}{s}\, \mathrm ds+c, \end{align}
which is not difficult to show.
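Indeed, the parametric formula can be sanity-checked symbolically on a toy example: along the curve $(x(s),u(s))$ one has $du/dx=(du/ds)/(dx/ds)=s$, so $u=F(u')$ holds. A sketch using sympy; the choice $F(s)=s+s^2$ below is purely illustrative and has nothing to do with the equation at hand:

```python
import sympy as sp

s, C = sp.symbols('s C', positive=True)

# Toy instance of u = F(u'); F is an arbitrary illustrative choice.
F = s + s**2
u = F
x = sp.integrate(sp.diff(F, s) / s, s) + C  # x = int F'(s)/s ds + C

# Along the parametric curve, du/dx = (du/ds)/(dx/ds); it should equal s,
# which means u' = s and hence u = F(u') is satisfied.
dudx = sp.simplify(sp.diff(u, s) / sp.diff(x, s))
print(sp.simplify(dudx - s))  # 0
```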
Your equation has the implicit solution
\begin{align} y&=\frac{1}{c}\left(-2s+\sqrt 2+2\right)^{(-1-\sqrt 2)/4}\left(2s+\sqrt 2-2\right)^{-1+\sqrt 2},\\ x&=\int \frac{c}{s}\frac{\mathrm d}{\mathrm ds} \left[\left(-2s+\sqrt 2+2\right)^{(1+\sqrt 2)/4}\left(2s+\sqrt 2-2\right)^{1-\sqrt 2}\right]\mathrm ds+c_2. \end{align} If I'm inputting the integral correctly into WolframAlpha, it has a fairly long solution involving hypergeometric functions, which I will leave for you to work out. An implicit solution is more than one can usually ask for!