Short answer: No, it is not possible, at least not in terms of elementary functions. However, very good (and reasonably fast!) numerical algorithms exist to calculate such a quantity, and they should be preferred over any generic numerical integration technique in this case.
Quantity of interest in terms of normal cdf
The quantity you are interested in is actually closely related to the conditional mean of a lognormal random variable. That is, if $X$ is distributed as a lognormal with parameters $\mu$ and $\sigma$, then, using your notation,
$$
\newcommand{\e}{\mathbb{E}}\renewcommand{\Pr}{\mathbb{P}}\newcommand{\rd}{\mathrm{d}}
\int_a^b f(x) \rd x = \int_a^b \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{1}{2\sigma^2}(\log(x) - \mu)^2} \rd x = \Pr(a \leq X \leq b) \e(X \mid a \leq X \leq b) \>.
$$
To get an expression for this integral, make the substitution $z = (\log(x) - (\mu + \sigma^2))/\sigma$. This may at first appear a bit unmotivated. But note that under this substitution, $x = e^{\mu + \sigma^2} e^{\sigma z}$, and by a simple change of variables we get
$$
\int_a^b f(x) \rd x = e^{\mu + \frac{1}{2}\sigma^2} \int_{\alpha}^{\beta} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2} z^2} \rd z \> ,
$$
where $\alpha = (\log(a) - (\mu + \sigma^2))/\sigma$ and $\beta = (\log(b) - (\mu + \sigma^2))/\sigma$.
Hence,
$$
\int_a^b f(x) \rd x = e^{\mu + \frac{1}{2}\sigma^2} \big( \Phi(\beta) - \Phi(\alpha) \big) \>,
$$
where $\Phi(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}} e^{-z^2/2} \rd z$ is the standard normal cumulative distribution function.
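As a quick sanity check on the closed form (this is illustrative code, not part of the original answer), we can compare $e^{\mu + \sigma^2/2}\big(\Phi(\beta) - \Phi(\alpha)\big)$ against direct quadrature of the integrand; the values of $\mu$, $\sigma$, $a$, $b$ below are arbitrary examples:

```python
# Compare the closed form against direct numerical integration of f.
import math
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma = 0.5, 1.2
a, b = 1.0, 4.0

def f(x):
    # Integrand from the question: note there is no 1/x factor,
    # so this is x times the lognormal density, not the density itself.
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# alpha, beta as defined above
alpha = (math.log(a) - (mu + sigma ** 2)) / sigma
beta = (math.log(b) - (mu + sigma ** 2)) / sigma

closed_form = math.exp(mu + 0.5 * sigma ** 2) * (norm.cdf(beta) - norm.cdf(alpha))
numerical, _ = quad(f, a, b)
```

The two values agree to within the quadrature tolerance, confirming the change of variables.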
Numerical approximation
It is often stated that no closed-form expression for $\Phi(x)$ is known. However, a theorem of Liouville from the early 1800s asserts something stronger: there is no closed-form expression for this function. (For the proof in this particular case, see Brian Conrad's writeup.)
Thus, we are left to use a numerical algorithm to approximate the desired quantity. This can be done to within IEEE double-precision accuracy via an algorithm of W. J. Cody's. It is the standard algorithm for this problem, and since it uses rational expressions of fairly low order, it is pretty efficient, too.
Here is a reference that discusses the approximation:
W. J. Cody, Rational Chebyshev Approximations for the Error Function,
Math. Comp., 1969, pp. 631--637.
It is also the implementation used in both MATLAB and R, among others, in case that makes it easier to obtain example code.
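In practice you rarely need to implement the rational approximations yourself: most standard math libraries expose `erf`, whose implementations are commonly based on Cody-style rational approximations, and $\Phi$ follows from the identity $\Phi(x) = \tfrac{1}{2}\big(1 + \operatorname{erf}(x/\sqrt{2})\big)$. A minimal sketch in Python:

```python
# Standard normal cdf via the error function from the standard library.
import math

def phi(x):
    """Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(phi(0.0))   # 0.5
print(phi(1.96))  # ~0.975
```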
Here is a related question, in case you're interested.
As you correctly pointed out in your question, $f_{Y}(y)$ is obtained by integrating the joint density $f_{X,Y}(x,y)$ with respect to $x$. The critical part here is identifying the region over which you integrate. You have already shown the support of the joint density $f_{X,Y}(x,y)$ graphically. Now note that the range of $X$ in the shaded region runs from $X=y$ to $X=1$ (graphically, visualize horizontal lines, parallel to the $x$-axis, going from the diagonal line $Y=X$ to the vertical line at $X=1$).
The lower and upper limits of integration are therefore $X=y$ and $X=1$, and the solution is as follows:
$$f_{Y}(y)= \int_{y}^{1} f_{X,Y}(x,y)\, dx = \int_{y}^{1} 15xy^{2}\, dx = 15y^{2}\int_{y}^{1} x\, dx = 15y^{2}\left(\frac{1}{2}x^2\Big|_y^1\right) = \frac{15}{2}y^2(1-y^2).$$
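As an illustrative check (not part of the original answer), a valid marginal density must integrate to 1 over its support $[0,1]$, and $\tfrac{15}{2}y^2(1-y^2)$ does:

```python
# Verify that f_Y(y) = (15/2) y^2 (1 - y^2) integrates to 1 on [0, 1].
from scipy.integrate import quad

def f_Y(y):
    return 7.5 * y ** 2 * (1 - y ** 2)

total, _ = quad(f_Y, 0.0, 1.0)
print(total)  # ~1.0
```

Analytically, $\int_0^1 \tfrac{15}{2}(y^2 - y^4)\, dy = \tfrac{15}{2}\big(\tfrac{1}{3} - \tfrac{1}{5}\big) = 1$, in agreement.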
Best Answer
First rewrite the joint density as
$$f(x,y) = \frac{1}{2\pi} \exp\!\left({-x^{2}/2}\right) \cdot \exp\!\Big( -\frac{1}{2} \left( y^2 -2x^{2}y + x^4 \right) \Big) $$
Note that $y^2 -2x^{2}y + x^4 = (y-x^{2})^2$. Now the marginal density of $X$ is
$$ f(x) = \int_{-\infty}^{\infty} f(x,y)\, dy = \frac{1}{\sqrt{2 \pi}} \exp\!\left({-x^{2}/2}\right) \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi}} \exp\!\Big( -\frac{(y-x^{2})^2}{2} \Big)\, dy $$
The integrand is a $N(x^{2},1)$ density and therefore integrates to 1. Hence,
$$f(x) = \frac{1}{\sqrt{2 \pi}} {\rm exp} ({-x^{2}/2})$$
In other words, the marginal distribution of $X$ is standard normal.
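A numerical confirmation (illustrative, assuming the joint density written above): integrating the joint density over $y$ at a few fixed values of $x$ reproduces the standard normal density:

```python
# Integrate the joint density over y and compare with the N(0,1) density.
import math
from scipy.integrate import quad

def joint(x, y):
    # f(x, y) = (1/2pi) exp(-x^2/2) exp(-(y - x^2)^2 / 2)
    return (1.0 / (2.0 * math.pi)) * math.exp(-x ** 2 / 2.0) * math.exp(-(y - x ** 2) ** 2 / 2.0)

def marginal(x):
    val, _ = quad(lambda y: joint(x, y), -math.inf, math.inf)
    return val

def std_normal(x):
    return math.exp(-x ** 2 / 2.0) / math.sqrt(2.0 * math.pi)
```

The inner Gaussian in $y$ is centered at $x^2$, which is why the quadrature recovers $\varphi(x)$ exactly up to tolerance.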