The basic idea is that the dominant contribution to the integral comes from a neighborhood of $t=0$, where $t^2+2t^4 \approx t^2$. This problem is particularly nice because we can do everything explicitly. I'll do the calculation in two steps (two changes of variables) to illustrate what's going on.
Start with the change of variables $t^2+2t^4 = s^2$, where $s \geq 0$. This gives
$$
t=\frac{1}{2}\sqrt{-1+\sqrt{1+8s^2}},
$$
so that the integral becomes
$$
\int_0^{\infty} t^{3/4}e^{-x(t^2+2t^4)}\,dt = \int_0^\infty s^{3/4} f(s) e^{-xs^2}\,ds,
$$
where
$$
f(s) = \frac{2^{1/4}s^{1/4}}{\sqrt{1+8s^2}\left(-1+\sqrt{1+8s^2}\right)^{1/8}} = 1 - \frac{15}{4}s^2 + \frac{713}{32}s^4 + \cdots.
$$
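The stated Taylor coefficients can be spot-checked numerically. The sketch below (the helper names `f`, `c2`, `c4` are mine) extracts the $s^2$ and $s^4$ coefficients directly from the closed form of $f$:

```python
import math

# Closed form of f(s) from the display above.
def f(s):
    u = math.sqrt(1 + 8 * s**2)
    return 2**0.25 * s**0.25 / (u * (u - 1)**0.125)

s = 0.01
# Divided differences: these should approach -15/4 and 713/32 as s -> 0.
c2 = (f(s) - 1) / s**2
c4 = (f(s) - 1 + 15/4 * s**2) / s**4
```

The extracted values agree with $-15/4$ and $713/32$ up to the expected $O(s^2)$ contamination from the next term of the series.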
Now we can make the second change of variables $s^2 = r$ to put the integral into a form where we can directly apply Watson's lemma. Indeed, this gives
$$
\int_0^\infty s^{3/4} f(s) e^{-xs^2}\,ds = \int_0^\infty r^{-1/8} g(r) e^{-xr}\,dr,
$$
where
$$
g(r) = \frac{1}{2}f\left(\sqrt{r}\right) = \frac{1}{2} - \frac{15}{8}r + \frac{713}{64}r^2 + \cdots.
$$
Finally
$$
\begin{align*}
\int_0^\infty r^{-1/8} g(r) e^{-xr}\,dr &\approx \sum_{n=0}^{\infty} \frac{g^{(n)}(0) \Gamma(n+7/8)}{n! x^{n+7/8}} \\
&= \frac{1}{2}\Gamma\left(\frac{7}{8}\right) x^{-7/8} - \frac{15}{8}\Gamma\left(\frac{15}{8}\right)x^{-15/8} + O\left(x^{-23/8}\right)
\end{align*}
$$
as $x \to \infty$, by Watson's lemma.
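As a sanity check, one can compare the two-term asymptotic against direct numerical quadrature at a moderately large $x$. This is a sketch, not part of the derivation; the quadrature scheme (composite Simpson's rule) and the cutoff at $t=2$ are my own choices, justified by the integrand's rapid decay at this $x$.

```python
import math

def integrand(t, x):
    # t^{3/4} e^{-x(t^2 + 2 t^4)}, the original integrand
    return t**0.75 * math.exp(-x * (t**2 + 2 * t**4))

def simpson(g, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3

x = 200.0
# The tail beyond t = 2 is utterly negligible at this x.
numeric = simpson(lambda t: integrand(t, x), 0.0, 2.0, 20000)
one_term = 0.5 * math.gamma(7/8) * x**(-7/8)
two_term = one_term - 15/8 * math.gamma(15/8) * x**(-15/8)
```

At $x=200$ the two-term approximation agrees with the quadrature to a relative error of about $10^{-3}$, consistent with the omitted $O(x^{-23/8})$ term, and it is noticeably better than the one-term approximation.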
Consider the equation $\epsilon x^4 + (x-1)^3 = 0$ for small $\epsilon > 0$. If we naively just send $\epsilon \to 0$ then we get the equation
$$
(x-1)^3 = 0,
$$
so we can deduce that we have three roots tending to $x=1$ as $\epsilon \to 0$. We'll suppose they have asymptotic series of the form
$$
x \approx 1 + \sum_{k=1}^{\infty} a_k \delta_k(\epsilon),
$$
where
$$
1 \gg \delta_1(\epsilon) \gg \delta_2(\epsilon) \gg \delta_3(\epsilon) \gg \cdots
$$
as $\epsilon \to 0$.
If we substitute just the first two terms $x \approx 1 + a_1 \delta_1(\epsilon)$ into the original equation
$$
\epsilon x^4 + (x-1)^3 = 0 \tag{$*$}
$$
and expand it we get
$$
a_1^4\epsilon\delta_1(\epsilon)^4 + 4a_1^3\epsilon\delta_1(\epsilon)^3 + a_1^3\delta_1(\epsilon)^3 + 6a_1^2\epsilon\delta_1(\epsilon)^2 + 4a_1\epsilon\delta_1(\epsilon) + \epsilon \approx 0.
$$
Now we'll apply the method of dominant balance. First, certain terms are, by assumption, smaller than others, so they could not possibly be part of a dominant balance and may be ignored. Specifically we know that
$$
a_1^4\epsilon\delta_1(\epsilon)^4 \ll 4a_1^3\epsilon\delta_1(\epsilon)^3 \ll 6a_1^2\epsilon\delta_1(\epsilon)^2 \ll 4a_1\epsilon\delta_1(\epsilon) \ll \epsilon,
$$
so if we ignore all but the largest of these then our equation becomes
$$
a_1^3\delta_1(\epsilon)^3 + \epsilon \approx 0.
$$
From this we see that
$$
\delta_1(\epsilon) = \epsilon^{1/3} \quad \text{and} \quad a_1^3 = -1,
$$
and with three choices for $a_1$---namely the three cube roots of $-1$---we obtain approximations for each of the three roots of the original equation which tend to $x=1$:
- $x \approx 1 - \epsilon^{1/3}$,
- $x \approx 1 + e^{i\pi/3} \epsilon^{1/3}$, and
- $x \approx 1 + e^{i5\pi/3} \epsilon^{1/3}$.
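A quick numerical check (my own, at an arbitrary small $\epsilon$): each leading-order approximation nearly annihilates $(*)$, with a residual of order $\epsilon^{4/3}$, i.e. smaller than either term in the dominant balance, which is $\asymp \epsilon$.

```python
import cmath

eps = 1e-9
# The three cube roots of -1.
cube_roots = [-1, cmath.exp(1j * cmath.pi / 3), cmath.exp(5j * cmath.pi / 3)]
# Residual of eps*x^4 + (x-1)^3 at each leading-order root approximation.
residuals = [abs(eps * x**4 + (x - 1)**3)
             for x in (1 + a1 * eps**(1/3) for a1 in cube_roots)]
```

Since $(x-1)^3 = a_1^3\epsilon = -\epsilon$ exactly, the residual reduces to $\epsilon(x^4-1) \approx 4a_1\epsilon^{4/3}$, which the assertions below confirm in order of magnitude.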
This suggests that we might be able to take
$$
\delta_k(\epsilon) = \epsilon^{k/3}, \quad k \geq 1,
$$
and indeed if we substitute
$$
x \approx 1 + \sum_{k=1}^{\infty} a_k \epsilon^{k/3}
$$
into $(*)$ and collect like powers of $\epsilon$ then we obtain equations for the coefficients $a_k$,
- $a_1^3 + 1 = 0$,
- $3 a_2 a_1^2+4 a_1 = 0$,
- $3 a_3 a_1^2+6 a_1^2+3 a_2^2 a_1+4 a_2 = 0$, etc.
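For the real branch $a_1=-1$, solving the listed equations gives $a_2 = 4/3$ and $a_3 = -2$ (my arithmetic; these values are not stated above). One can confirm numerically that the truncated series approaches the true root at the expected rate, here by locating the root with Newton's method:

```python
def p(x, eps):   # the polynomial in (*)
    return eps * x**4 + (x - 1)**3

def dp(x, eps):  # its x-derivative
    return 4 * eps * x**3 + 3 * (x - 1)**2

eps = 1e-6
root = 1 - eps**(1/3)            # leading-order guess for the real root
for _ in range(60):              # Newton refinement to machine precision
    root -= p(root, eps) / dp(root, eps)

# Errors of the two- and three-term truncations: O(eps) and O(eps^{4/3}).
err2 = abs(root - (1 - eps**(1/3) + 4/3 * eps**(2/3)))
err3 = abs(root - (1 - eps**(1/3) + 4/3 * eps**(2/3) - 2 * eps))
```

Each added term shrinks the error by roughly a factor of $\epsilon^{1/3}$, as the ansatz predicts.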
It just remains to find the last root of $(*)$. If we simply expand it out we get
$$
\epsilon x^4 + x^3 - 3x^2 + 3x - 1 = 0. \tag{$**$}
$$
We will again use the method of dominant balance. By ignoring the $\epsilon x^4$ term in the very beginning we essentially assumed it would not be part of a dominant balance, so to find the last root we must assume the opposite. As such we'll be looking for balances between $\epsilon x^4$ and the remaining four terms, so we'll consider the cases
- $\epsilon x^4 \asymp 1$,
- $\epsilon x^4 \asymp 3x$,
- $\epsilon x^4 \asymp 3x^2$, and
- $\epsilon x^4 \asymp x^3$.
In case 1 we have $x \asymp \epsilon^{-1/4}$, but then the term $3x$, being $\asymp \epsilon^{-1/4}$, dominates the terms $\epsilon x^4$ and $1$ in the balance, so the balance is not dominant.
In case 2 we have $x \asymp \epsilon^{-1/3}$, but then the term $3x^2$, being $\asymp \epsilon^{-2/3}$, dominates both terms in the balance, so the balance is again not dominant.
In case 3 we have $x \asymp \epsilon^{-1/2}$, but then the term $x^3$, being $\asymp \epsilon^{-3/2}$, dominates both terms in the balance, namely $\epsilon x^4$ and $3x^2$, which are $\asymp \epsilon^{-1}$.
In case $4$ we have $x \asymp \epsilon^{-1}$, and this balance is dominant since
$$
\epsilon x^4 \asymp x^3 \asymp \epsilon^{-3} \gg 3x^2 \gg 3x \gg 1.
$$
Ignoring the terms outside of the balance we then have
$$
\epsilon x^4 + x^3 \approx 0,
$$
so that
$$
x \approx - \epsilon^{-1}.
$$
We might then suspect that this root has an asymptotic series of the form
$$
x \approx -\epsilon^{-1} + \sum_{k=0}^{\infty} a_k \epsilon^k,
$$
and if we substitute this into $(*)$ or $(**)$ and collect like powers of $\epsilon$ we obtain equations for the coefficients $a_k$,
- $a_0+3 = 0$,
- $3 a_0^2+6 a_0-a_1-3 = 0$,
- $-3 a_0^3-3 a_0^2+6 a_1 a_0+3 a_0+6 a_1-a_2-1 = 0$, etc.
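Solving these equations gives $a_0=-3$, $a_1=6$, and $a_2=-28$ (my arithmetic). As before, a numerical sketch confirms the expansion: refine the crude approximation $-\epsilon^{-1}$ by Newton's method and compare successive truncations.

```python
def p(x, eps):   # the polynomial in (*), equivalently (**)
    return eps * x**4 + (x - 1)**3

def dp(x, eps):  # its x-derivative
    return 4 * eps * x**3 + 3 * (x - 1)**2

eps = 1e-3
root = -1 / eps                  # leading-order guess for the fourth root
for _ in range(60):              # Newton refinement
    root -= p(root, eps) / dp(root, eps)

# Errors of successive truncations: O(eps), O(eps^2), O(eps^3).
err0 = abs(root - (-1/eps - 3))
err1 = abs(root - (-1/eps - 3 + 6 * eps))
err2 = abs(root - (-1/eps - 3 + 6 * eps - 28 * eps**2))
```

Each coefficient knocks out one more power of $\epsilon$ in the error.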
We've thus accounted for all four roots of the polynomial equation and can calculate as many terms of their asymptotic series as we wish.
Best Answer
The general idea of the method of dominant balance is fairly simple. Suppose you want to solve some equation $F(y)=0$ with the unknown $y$ and known $F$. In your case, $y$ is a function, $F$ is a differential operator, but the trick applies more or less regardless what we are talking about. Suppose also that $F$ is complicated and no easy explicit solution is in sight but you can split $F$ into two pieces $F=E+G$ such that the equation $E(y)=g$ is easily solvable with any right hand side and $G$ is in some sense much smaller than $E$ for all $y$ you are interested in. After solving $E(y)=0$, you get a solution $y_1$. Then you just say that you got a good approximation and all you need now is to solve $E(y)=-G(y_1)$ to compensate for the extra term. Then you get $y_2$ that solves this equation and switch to $E(y)=-G(y_2)$, etc. If you've ever seen the Picard iteration process, this should sound very familiar.
The whole art is in the splitting. In your case, you can clearly see that the $1$ in $(1+3x)$ can certainly be put in $G$, but the rest may not be immediate. Unfortunately, as it turns out, you cannot chop anything off $x^2y''+3xy'+y$. Indeed, suppose you decide that the highest derivative term is in $E$ (something should be there, after all). Then you get $x^2y_1''=0$. Let's consider $y_1=1$, say. Then you end up with $G(y)=(3x+1)y'+y$, so $-G(y_1)=-1$. So far, so good. Now let us find the solution to $x^2y''=-1$. We have $y'=\frac 1x+C$ and $y_2=\log x+Cx+C'$. Oops, we seem to get $y_2$ much larger than $y_1$, not a small correction as it should be! This is a clear indicator that our splitting into the dominant and the small part is totally wrong.

Playing with this a bit more, we can convince ourselves that we have no choice but to use $E(y)=x^2y''+3xy'+y$. The only part that can go to $G$ is $y'$. Fortunately, solving $E(y)=0$ is not hard. You get $y=\frac 1x$ or $y=\frac{\log x}{x}$ (up to an irrelevant free constant factor).

I'll do the Picard iteration game with $y_1=\frac 1x$. Since $E$ is a linear operator, it is enough to solve for the correction term rather than for the whole thing. We need $E(y)=\frac 1{x^2}$. This produces $y_2=\frac 1x+\frac c{x^2}$ where $6c-6c+c=1$, so $c=1$. Then the new term in $-G(y_2)$ is $\frac 2{x^3}$, so $y_3=\frac 1x+\frac 1{x^2}+\frac c{x^3}$ with $12c-9c+c=2$, whence $c=\frac 12$. And so on. Just plug'n'play until bored.
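The iteration's bookkeeping can be checked numerically. Assuming the equation under discussion is $x^2y''+(1+3x)y'+y=0$ (the form suggested by the remark about the $1$ in $(1+3x)$), the truncation $y_3=\frac 1x+\frac 1{x^2}+\frac 1{2x^3}$ should leave a residual of order $x^{-4}$; in fact the lower-order terms cancel by construction, leaving exactly $-\frac 32 x^{-4}$:

```python
# Residual of y3 = 1/x + 1/x^2 + 1/(2 x^3) in x^2 y'' + (1 + 3x) y' + y = 0.
# Derivatives of y3 are written out by hand.
def residual(x):
    y   = x**-1 + x**-2 + 0.5 * x**-3
    dy  = -x**-2 - 2 * x**-3 - 1.5 * x**-4
    d2y = 2 * x**-3 + 6 * x**-4 + 6 * x**-5
    return x**2 * d2y + (1 + 3 * x) * dy + y
```

Evaluating at a few large $x$ confirms both the size and the exact coefficient of the leftover term.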
This was the cookbook part of the story. The most interesting part is to justify the game. If the sequence you have obtained converges (in the appropriate space), then you can at least say that you found a solution (the limit). Let's see if that is the case here. You have probably realized by now that we are getting errors of the kind $c_kx^{-k}$. Each such error gives rise to the correction term $C_kx^{-k}$ with $C_k$ of size about $k^{-2}c_k$, which, in turn, produces a new error $c_{k+1}x^{-k-1}$ with $c_{k+1}$ of size about $k^{-1}c_k$. Thus we are lucky: the errors and correction terms decay quickly and uniformly near $\infty$. You may now say that the whole name was a pure misnomer and we've just done the usual power series representation. Well, that is indeed the case here. However, in the truly interesting situations you get a divergent asymptotic series, and the cookbook approach, if not followed by a careful justification, may give a totally meaningless result.