Let me work with $n$ dimensions: you want to study the vector field
$$
X=\sum_{1\le j\le n} a_j(x)\frac{\partial}{\partial x_j},
\tag {1}$$
and in particular find the so-called first integrals of $X$, i.e. the functions $f$ such that $Xf=0$. You introduce the system of ODEs:
$$
\dot x(t,y)=a(x(t,y)),\quad x(0,y)=y.
\tag {2}$$
The solutions $t\mapsto x(t,y)$ are the integral curves of $X$.
You realize easily that a function is a first integral iff it is constant along the integral curves of $X$: just compute
$$
\frac{d}{dt}\bigl(f(x(t,y))\bigr)=\sum_{1\le j\le n} \frac{\partial f}{\partial x_j}(x(t,y))\,a_j(x(t,y))=(Xf)(x(t,y)).
$$
This means that solving the PDE $Xf=0$ for the vector field (1) is, in some sense, equivalent to solving the ODE system (2).
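To make this concrete, here is a minimal numerical sketch (my own example, not from the text): for the planar field $X=-y\,\partial_x+x\,\partial_y$, the function $f(x,y)=x^2+y^2$ satisfies $Xf=0$, and integrating (2) shows that $f$ indeed stays constant along the integral curves.

```python
import math

# Example field (mine, for illustration): X = -y d/dx + x d/dy,
# whose integral curves are circles; f(x, y) = x^2 + y^2 is a first integral.
def a(p):
    x, y = p
    return (-y, x)

def rk4_step(p, h):
    # one classical Runge-Kutta step for the autonomous system (2)
    k1 = a(p)
    k2 = a((p[0] + h/2*k1[0], p[1] + h/2*k1[1]))
    k3 = a((p[0] + h/2*k2[0], p[1] + h/2*k2[1]))
    k4 = a((p[0] + h*k3[0], p[1] + h*k3[1]))
    return (p[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            p[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def f(p):
    return p[0]**2 + p[1]**2

p = (1.0, 0.0)          # initial point y
values = [f(p)]
h = 0.01
for _ in range(628):    # roughly one full turn around the circle
    p = rk4_step(p, h)
    values.append(f(p))

drift = max(abs(v - values[0]) for v in values)
print(drift)  # tiny: f is constant along the integral curves
```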
Now for the notational business. It is tempting to write (2), which reads $
\frac{dx_j}{dt}=a_j(x),\quad 1\le j\le n,
$
symbolically as
$$
\frac{dx_1}
{a_1(x)}=\dots=\frac{dx_n}
{a_n(x)}
$$
since they are all equal to $dt$! Just take this as a symbolic notation that eliminates the parameter $t$.
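As a quick illustration of this symbolic notation (an example of mine, not in the text): for the planar field $X=-y\,\partial_x+x\,\partial_y$ one writes

```latex
\[
\frac{dx}{-y}=\frac{dy}{x}
\quad\Longrightarrow\quad
x\,dx+y\,dy=0
\quad\Longrightarrow\quad
x^{2}+y^{2}=\mathrm{const},
\]
```

which recovers the first integral $f(x,y)=x^2+y^2$ of $X$ without ever mentioning the parameter $t$.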
Now the Cauchy problem for this autonomous vector field $X$: find a hypersurface $\Sigma$ to which $X$ is transverse, i.e. nowhere tangent to $\Sigma$. Then the Cauchy problem
$$
\begin{cases}
Xu=f,\\
u_{\vert \Sigma}=g
\end{cases}
$$
has locally a unique solution: this problem is equivalent to the scalar ODE
$$
\frac{d}{dt}\bigl( u(x(t,y))\bigr)=f(x(t,y)),\quad u(x(0,y))=u(y)=g(y) \text{ for $y\in \Sigma$},
$$
so that
$$
u(x(t,y))= u(y)+\int_0^tf(x(s,y)) ds\quad \text{ for $y\in \Sigma$}.
\tag{3}$$
Note that $y$ moves on $\Sigma$ ($n-1$ degrees of freedom) and $t$ in $\mathbb R$, so that $(y,t)\in \Sigma\times\mathbb R$ is a natural choice of coordinates.
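Here is a small numerical check of formula (3) on example data of my own choosing (not from the text): take $X=\partial_x+\partial_y$ and $\Sigma=\{x=0\}$, so the characteristic through $(x,y)$ meets $\Sigma$ at $(0,\,y-x)$ and (3) reads $u(x,y)=g(y-x)+\int_0^x f(s,\,y-x+s)\,ds$. With $f(x,y)=x+y$ and $g(y)=\sin y$ the integral evaluates in closed form to $xy$.

```python
import math

# Example data (mine): X = d/dx + d/dy, Sigma = {x = 0},
# f(x, y) = x + y, g(y) = sin(y).  Formula (3) gives
#   u(x, y) = g(y - x) + \int_0^x f(s, y - x + s) ds,
# whose integral here equals x*y exactly.
def f(x, y):
    return x + y

def g(y):
    return math.sin(y)

def u(x, y, steps=2000):
    # midpoint rule for the integral along the characteristic through (x, y)
    h = x / steps
    total = 0.0
    for i in range(steps):
        s = (i + 0.5) * h
        total += f(s, y - x + s) * h
    return g(y - x) + total

x0, y0 = 0.7, 1.3
exact = math.sin(y0 - x0) + x0 * y0   # closed form of the same expression
err = abs(u(x0, y0) - exact)
print(err)
```

On $\Sigma$ itself the integral vanishes and $u(0,y)=g(y)$, as required by the boundary condition.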
There are variants of this when the vector field is not autonomous, i.e. of the type
$$\frac{\partial}{\partial t}+
\sum_{1\le j\le n} a_j(t,x)\frac{\partial}{\partial x_j}.
$$
More comments on the quasi-linear case and the general method of characteristics:
the quasi-linear Cauchy problem
$$
\frac{\partial u}{\partial t}+\sum_{1\le j\le n} a_j(t,x, u)\frac{\partial u}{\partial x_j}=b(t,x,u),\quad u(0,x)=u_0(x).
\tag{4}$$
has a linear companion
$$
\frac{\partial F}{\partial t}+\sum_{1\le j\le n} a_j(t,x, v)\frac{\partial F}{\partial x_j}+b(t,x,v)\frac{\partial F}{\partial v}=0,\quad F(0,x,v)=v-u_0(x)
\tag{5}$$
where $t,x,v$ are independent variables. Equation (5) is not difficult to solve using the linear method of characteristics outlined above. Then, since $\partial F/\partial v=1$ at $t=0$, the equation
$
F(t,x,v)=0
$
determines implicitly $v=u(t,x)$, and the expressions of the derivatives of $u$ in terms of the derivatives of $F$, e.g.
$
\partial u/\partial x=-\frac{\partial F/\partial x}{\partial F/\partial v},
$
imply that $u$ solves the Cauchy problem (4). Here also the notational industry is working full throttle. People would write
$$
\dot x=a(t,x,u),\quad \dot u=b(t,x,u),\quad
\text{which is }
\frac{dx_j}{a_j}=\frac{du}{b},\quad 1\le j\le n.
$$
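A standard instance of this scheme (my example, not from the text) is Burgers' equation $u_t+uu_x=0$, $u(0,x)=u_0(x)$, i.e. (4) with $a=u$ and $b=0$. The companion equation (5) then has solution $F(t,x,v)=v-u_0(x-vt)$, and $F(t,x,v)=0$ determines $v=u(t,x)$ implicitly before the first shock time. A sketch of the implicit-function step:

```python
import math

# Burgers' equation u_t + u u_x = 0 with u0(x) = sin(x) (example data, mine).
# F(t, x, v) = v - u0(x - v t) solves (5); the root of F(t, x, .) = 0 is u(t, x).
def u0(x):
    return math.sin(x)

def u(t, x):
    # solve v - u0(x - v*t) = 0 by bisection; the map is strictly increasing
    # in v for t < 1 / max|u0'| = 1, so the root is unique before the shock
    lo, hi = -2.0, 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid - u0(x - mid * t) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

t, x = 0.5, 1.0
v = u(t, x)
residual = abs(v - u0(x - v * t))
print(v, residual)  # residual is essentially zero
```

At $t=0$ the implicit equation reduces to $v=u_0(x)$, recovering the initial datum.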
I want to propose a (recursive) method to obtain
$F_k(x)$ by exploiting a formula well known from
the theory of Borel re-summation. In the present
context it looks like this:
\begin{equation}
\begin{array}{ll}
\displaystyle \sum_{n \geq 0} \, \langle E_k \rangle_n \, z^n
&\displaystyle = \
\int_0^\infty \, dt \, e^{-t} \sum_{n \geq 0} \,
{\langle E_k \rangle_n \over {n!}} \, (zt)^n \\
&\displaystyle = \
\int_0^\infty \, dt \, e^{-t} \, F_k(zt) \\
&\displaystyle = \ \Big( {1 \over z} \Big) \cdot \, \mathfrak{L} \big\{ F_k \big\} \Big({1 \over z} \Big) \quad \text{(Laplace transform)}
\end{array}
\end{equation}
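As a sanity check of the identity above in the simplest possible case (my own example): with $\langle E\rangle_n=1$ for all $n$, the Borel transform is $F(w)=e^w$ and the left-hand side is $\sum_n z^n = 1/(1-z)$, so the claim reduces to $1/(1-z)=\int_0^\infty e^{-t}e^{zt}\,dt$ for $z<1$, which can be verified numerically:

```python
import math

# Check 1/(1-z) = \int_0^infty e^{-t} F(z t) dt with F = exp (geometric series).
def borel_side(z, T=80.0, steps=200000):
    # composite midpoint rule; the integrand decays like e^{-(1-z)t},
    # so truncating at T = 80 is harmless for z = 0.5
    h = T / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += math.exp(-t) * math.exp(z * t) * h
    return total

z = 0.5
lhs = 1.0 / (1.0 - z)        # the resummed series
rhs = borel_side(z)          # the Laplace-type integral
print(lhs, rhs)
```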
I'll illustrate the approach for $k=1$. Let $E_1(x) := \sum_{n \geq 0} \, \langle E_1 \rangle_n \, x^n$. In this case the recurrence relation, as indicated in the initial post, is
\begin{equation}
\underbrace{n \langle E_1 \rangle_n}_{xE_1'(x) - x}
\ = \
\left\{
\begin{array}{l}
\displaystyle \underbrace{\langle E_1 \rangle_{n-1}}_{xE_1(x)}
\ + \
\underbrace{(n-1) \langle E_1 \rangle_{n-2}}_{x^3E_1'(x) \ + \ x^2E_1(x)}
\ + \ \\
\displaystyle \underbrace{\ n^2 \langle E_0 \rangle_{n-1} \ }_{\sum_{n \geq 2} n^2 x^n} \ + \
\underbrace{(n-1)(n^2+1)\langle E_0 \rangle_{n-2}}_{\sum_{n \geq 2} (n-1)(n^2+1) \, x^n}
\end{array}
\right.
\end{equation}
for $n \geq 2$ with the initial values $\langle E_1\rangle_0 = 0$ and $\langle E_1 \rangle_1 = 1$.
I've "underbraced" the terms of the recurrence by
their respective contributions to the $1$-st order
non-homogeneous ODE for the generating function $E_1(x)$
which is, upon simplification,
\begin{equation}
E'_1(x) \ + \ {1 \over {x-1}} \, E_1(x)
\ = \ {x^3 -x^2 + 5x +1 \over {(1+x)(1-x)^5}}
\end{equation}
Mathematica tells me that the general solution is of the form
\begin{equation}
\ {\text{const} \over {1-x}}
\ + \ {1 \over 4} {x^2+x+2 \over {(1-x)^4}}
\ + \ {3 \over 8} {1 \over {(1-x)}}
\log \Big( {1-x \over {1+x}} \Big)
\end{equation}
The value of the constant term is forced to be $\text{const} = - {1 \over 2}$ since $\langle E_1\rangle_0 = 0$.
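The recurrence and the closed form with $\text{const}=-\tfrac12$ can be cross-checked with exact rational arithmetic. This sketch (mine) assumes $\langle E_0\rangle_n = 1$ for all $n$, i.e. $E_0(x)=1/(1-x)$, as the underbraced sums $\sum n^2 x^n$ etc. suggest; it uses $\log\frac{1-x}{1+x} = -2\sum_{k \text{ odd}} x^k/k$ and $1/(1-x)^4=\sum_n \binom{n+3}{3}x^n$:

```python
from fractions import Fraction
from math import comb

# Compare the coefficients <E_1>_n from the recurrence with the Taylor
# coefficients of
#   E_1(x) = -1/2 * 1/(1-x) + 1/4 * (x^2+x+2)/(1-x)^4
#            + 3/8 * 1/(1-x) * log((1-x)/(1+x)),
# assuming <E_0>_n = 1 for all n (i.e. E_0(x) = 1/(1-x)).
N = 30

# coefficients from the recurrence, n >= 2, with <E_1>_0 = 0, <E_1>_1 = 1
a = [Fraction(0), Fraction(1)]
for n in range(2, N + 1):
    rhs = a[n-1] + (n-1)*a[n-2] + n**2 + (n-1)*(n**2+1)
    a.append(rhs / n)

def closed_form_coeff(n):
    # coefficient of x^n in each of the three terms of the closed form
    poly = Fraction(1, 4) * (comb(n+1, 3) + comb(n+2, 3) + 2*comb(n+3, 3))
    logpart = Fraction(3, 8) * sum(Fraction(-2, k) for k in range(1, n+1, 2))
    return Fraction(-1, 2) + poly + logpart

ok = all(a[n] == closed_form_coeff(n) for n in range(N + 1))
print(ok)  # True: the recurrence and the closed form agree
```

The first few values are $\langle E_1\rangle_2 = 5$, $\langle E_1\rangle_3 = 12$, $\langle E_1\rangle_4 = \tfrac{47}{2}$, so the coefficients are rational but not integral.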
After performing the requisite inverse transforms
we should get (excuse me for mimicking Wikipedia)
\begin{equation}
\begin{array}{ll}
\displaystyle F_1(zt)
&\displaystyle = \ \sum_{n \geq 0} \, {\langle E_1 \rangle_n \over {n!}} \, (zt)^n \\
&\displaystyle = \ {1 \over {2\pi}} \, \int_{-\pi}^{\pi} \,
d\theta \, E_1 \big(zt e^{-i\theta} \big) \exp \big( e^{i\theta} \big) \\
&\displaystyle \ \ \ \ \ \text{(Not sure what this is yet.)}
\end{array}
\end{equation}
Best Answer
This claim is not valid. The big breakthrough was the paper below:
Kovacic, Jerald J., An algorithm for solving second order linear homogeneous differential equations, J. Symbolic Comput. 2 (1986), no. 1, 3–43. MR0839134 (88c:12011).
Later, M. Petkovsek extended the ideas to second order difference equations.