Once you start at a given point, you are confined to the level set of $H$ passing through that point, which (typically) is an $(n-1)$-dimensional hypersurface in the $n$-dimensional state space $X$. If you introduce an $(n-1)$-dimensional coordinate system on that hypersurface, you can write the ODEs for the motion on the surface in terms of just those $n-1$ variables.
If you know several constants of motion, you can reduce the order further, and if you know sufficiently many, you can (in principle) integrate the system of ODEs exactly.
Let's solve it in general. The Lotka-Volterra system, with $a,b,c,d > 0$,
$$
x'= x(a-by)\\
y' = y(-c+dx)
$$
has a conserved quantity, or first integral, given by
$$ V(x,y) = by + dx - a\log y - c\log x$$
which we can rewrite as:
$$ V(x,y) = b(y-\frac{a}{b}\log y) + d(x - \frac{c}{d}\log x) = b\,G(y) + d\,H(x) $$
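Indeed, differentiating $V$ along a solution confirms it is a first integral:
$$
\frac{d}{dt}V(x,y) = \left(d-\frac{c}{x}\right)x' + \left(b-\frac{a}{y}\right)y' = (dx-c)(a-by) + (by-a)(dx-c) = 0.
$$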
It is useful in this type of exercise to observe that $V(x,y)$ is a convex function:
$$
\nabla V(x,y) = \Big(d-\frac{c}{x},\; b-\frac{a}{y}\Big)\\
\mathcal{H}(V) =
\begin{pmatrix}
\frac{c}{x^2} & 0 \\
0 & \frac{a}{y^2}
\end{pmatrix}
$$
$\mathcal H$ is positive definite on the open first quadrant, so $V$ is convex there. And since the trajectories are level curves of $V$, this gives us an intuition that they will be closed.
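As a sanity check, one can integrate the system numerically and watch $V$ stay constant along the trajectory. Below is a minimal sketch with a hand-rolled RK4 step; the parameter values and the initial point are illustrative assumptions, not taken from the problem.

```python
import math

# Illustrative check: integrate the Lotka-Volterra system with RK4
# and verify that V stays (numerically) constant along the trajectory.
# The values of a, b, c, d and the initial point are assumptions.
a, b, c, d = 1.0, 0.5, 1.0, 0.2      # equilibrium at (c/d, a/b) = (5, 2)

def f(state):
    x, y = state
    return (x * (a - b * y), y * (-c + d * x))

def V(x, y):
    return b * y + d * x - a * math.log(y) - c * math.log(x)

def rk4_step(s, h):
    k1 = f(s)
    k2 = f((s[0] + h/2*k1[0], s[1] + h/2*k1[1]))
    k3 = f((s[0] + h/2*k2[0], s[1] + h/2*k2[1]))
    k4 = f((s[0] + h*k3[0], s[1] + h*k3[1]))
    return (s[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            s[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

state = (3.0, 1.0)                   # a point away from the equilibrium
v0 = V(*state)
h = 0.001
drift = 0.0
for _ in range(20000):               # integrate up to t = 20
    state = rk4_step(state, h)
    drift = max(drift, abs(V(*state) - v0))
print(drift)                         # stays tiny: V is a first integral
```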
The minimum of this convex function is
$$k_o = dH(\frac{c}{d}) + bG(\frac{a}{b})$$
Both $H$ and $G$ behave in the same way, since they are identical up to the constant multiplying the logarithm. Both are convex, since their derivatives are precisely those we computed to study $V$, and their global minima are attained at $\frac{c}{d}$ and $\frac{a}{b}$ respectively. Moreover,
$$
\lim_{y\to 0} G(y) = \lim_{y\to \infty} G(y) = \infty
$$
This means that on both sides of $\frac{a}{b}$, $G$ attains every value above $G(\frac{a}{b})$, and it does so exactly twice: once in $(0,\frac{a}{b})$ and once in $(\frac{a}{b},\infty)$, since it is one-to-one on each of those intervals. The same is true for $H$.
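This two-preimage structure is easy to exploit numerically: a bisection on each monotone branch of $G$ recovers the two solutions of $G(y) = g$. A minimal sketch, with illustrative (assumed) values for $a$ and $b$:

```python
import math

# G is strictly decreasing on (0, a/b) and increasing on (a/b, inf),
# so each level g > G(a/b) has exactly one preimage on each branch.
# The values of a and b are assumptions for illustration.
a, b = 1.0, 0.5                       # minimum of G at a/b = 2

def G(y):
    return y - (a / b) * math.log(y)

def bisect(lo, hi, g):
    """Find y in [lo, hi] with G(y) = g, assuming G - g changes sign."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if (G(lo) - g) * (G(mid) - g) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

g = G(a / b) + 1.0                    # any level above the minimum
y_m = bisect(1e-9, a / b, g)          # preimage left of the minimum
y_M = bisect(a / b, 1e6, g)           # preimage right of the minimum
print(y_m, y_M)                       # y_m < a/b < y_M, G(y_m) = G(y_M) = g
```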
If we fix $\bar k > k_o$ and $\bar x > \frac{c}{d}$, taking into account the previous study, we can find $y_m(\bar x), y_M(\bar x)$ such that
$$
\bar k = dH(\bar x) + bG(y_m(\bar x)) = dH(\bar x) + bG(y_M(\bar x))
$$
Let us see what the range of movement for $x$ is. Given $k > k_o$, for certain $x,y$ we can write
$$ k = dH(x) + bG(y) =\\
= d\gamma + bG(\frac{a}{b}) \tag{*}$$
What we have done is reduce $G(y)$ to its minimum and adjust $H(x)$ to a greater value $\gamma \geq H(x)$ to maintain the equality. From the study above we know that there are $x_m, x_M$ such that $\gamma \geq H(x)$ for $x \in [x_m,x_M]$, and only the endpoints satisfy $\gamma = H(x_m) = H(x_M)$.
We know that $G(\frac{a}{b})$ is the minimum value $G$ can attain, so $H$ is constrained by this: if $x_o < x_m$ or $x_o > x_M$, then for the $k$ we chose initially,
$$ k < dH(x_o) + bG(y) \text{ for every } y > 0, \text{ since } G(y) \geq G(\tfrac{a}{b}).$$
What this means is that $x$ must lie in the closed interval $[x_m,x_M]$.
Finally, from the identity $(*)$ it follows that
$$
\lim_{x\to x_m} y_m(x) = \lim_{x\to x_m}y_M(x) = \frac{a}{b}\\
\lim_{x\to x_M} y_m(x) = \lim_{x\to x_M}y_M(x) = \frac{a}{b}
$$
What we have proven is that for $x \in [x_m,x_M]$ we have continuous functions $y_m(x), y_M(x)$, with $y_m(x) < y_M(x)$ on $(x_m,x_M)$, $y_m(x_m) = y_M(x_m)$ and $y_m(x_M) = y_M(x_M)$, which satisfy, for a given $k>k_o$,
$$ k = V(x,y_m(x)) = V(x,y_M(x)) $$
and thus we can define two curves $\alpha(x)^+, \, \alpha(x)^-$ such that:
$$ \alpha(x)^- = (x,y_m(x),V(x,y_m(x))) = (x,y_m(x),k)$$
$$ \alpha(x)^+ =(x,y_M(x),k)$$
and join them at the endpoints to form the closed curve of the trajectory, $\alpha(x) = \alpha(x)^- - \alpha(x)^+$ (with the usual addition of paths: traverse $\alpha^-$ forward, then $\alpha^+$ in reverse).
I have edited the original post to include the correct derivation. My mathematical weakness showed when I confused integration with differentiation!