A mostly-worked-out answer to the lower bound in part a:
$$E[\max_i X_i]=E[\max_i X_i 1_{\max_i X_i \geq 0}]+E[\max_i X_i 1_{\max_i X_i<0}].$$
We want to throw out the negative piece. Intuitively, the maximum is unlikely to be negative at all, and the negative piece has bounded expectation. More rigorously, the event $\{\max_i X_i < 0\}$ has probability $2^{-n}$, and $|\max_i X_i \, 1_{\max_i X_i<0}|$ is pointwise nonincreasing in $n$ and dominated by $|X_1|$, so by dominated convergence the second term is $o(1)$. Hence
$$E[\max_i X_i] \geq E[\max_i X_i 1_{\max_i X_i \geq 0}] + o(1) \\
=\int_0^\infty (1-\Phi(t)^n)\, dt + o(1),$$
using the hint and the standard fact $E[Y]=\int_0^\infty P(Y>t)\,dt$ for nonnegative $Y$. Denote the first term by $I$.
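As a sanity check (not part of the proof), the identity $E[\max_i X_i \, 1_{\max_i X_i \geq 0}] = \int_0^\infty (1-\Phi(t)^n)\, dt$ can be verified numerically. A rough sketch using only the Python standard library, with $n$, the trial count, and the truncation point chosen arbitrarily:

```python
import random
from statistics import NormalDist

n = 10               # number of iid standard normals (arbitrary choice)
trials = 100_000
rng = random.Random(0)

# Monte Carlo estimate of E[max_i X_i · 1{max_i X_i >= 0}];
# max(0, max_i X_i) equals that truncated maximum exactly.
mc = sum(max(0.0, max(rng.gauss(0, 1) for _ in range(n)))
         for _ in range(trials)) / trials

# Riemann sum for the integral of (1 - Phi(t)^n) over [0, 8];
# the integrand is negligible beyond t = 8.
Phi = NormalDist().cdf
dt = 1e-3
integral = sum((1 - Phi(i * dt) ** n) * dt for i in range(8000))

print(mc, integral)  # the two estimates should agree closely
```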
Next
$$I \geq \int_0^{\sqrt{\log(n)}} 1-\Phi(t)^n \, dt,$$
since the integrand is nonnegative and restricting the domain only decreases the integral. (We stop at $\sqrt{\log(n)}$ rather than $\sqrt{2\log(n)}$: near $\sqrt{2\log(n)}$ the tail $1-\Phi(t)$ is only of order $\frac{1}{n\sqrt{\log(n)}}$, which turns out to be too small for the argument below.)
On $[0,1]$ we have the simple bound $1-\Phi(t)^n \geq 1-\Phi(1)^n$. On $[1,\sqrt{\log(n)}]$ we have the standard tail bound $1-\Phi(t) \geq \frac{1}{\sqrt{2 \pi}}\frac{t}{t^2+1} e^{-t^2/2} \geq \frac{1}{2\sqrt{2 \pi}\, t} e^{-t^2/2}$, where the last step uses $\frac{t}{t^2+1} \geq \frac{1}{2t}$ for $t \geq 1$. (Cf. https://mikespivey.wordpress.com/2011/10/21/normaltails/) Hence
$$I \geq 1-\Phi(1)^n + \int_1^{\sqrt{\log(n)}} 1-\left ( 1-\frac{1}{2\sqrt{2 \pi}\, t} e^{-t^2/2} \right )^n dt.$$
As for the remaining piece, we're integrating a decreasing function of $t$, so we get a lower bound by substituting in the upper limit, where $t = \sqrt{\log(n)}$ and $e^{-t^2/2} = n^{-1/2}$:
$$I \geq 1-\Phi(1)^n+\int_1^{\sqrt{\log(n)}} 1-\left ( 1-\frac{n^{-1/2}}{2\sqrt{2 \pi \log(n)}} \right )^n dt.$$
Since $1-(1-x)^n \geq 1-e^{-nx}$ and here $nx = \frac{\sqrt{n}}{2\sqrt{2\pi \log(n)}} \to \infty$, the integrand converges to $1$, so it is bounded below by $1-\varepsilon=:C$ for large enough $n$ depending on $\varepsilon$. Then we get the bound
$$I \geq C(\sqrt{\log(n)}-1)+1-\Phi(1)^n.$$
Returning to the original problem we have
$$E[\max_i X_i] \geq C(\sqrt{\log(n)}-1)+1-\Phi(1)^n+o(1)$$
which gives the lower bound for part a for sufficiently large $n$. A finite collection of $n$ can always be handled (why?) so we are done.
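A quick Monte Carlo illustration (not a proof; sample sizes and trial count are arbitrary choices) of the rate this bound captures: the empirical mean of the maximum grows like $\sqrt{2\log n}$, with the ratio creeping toward $1$:

```python
import math
import random

rng = random.Random(0)
trials = 5_000

# Empirical E[max_i X_i] versus sqrt(2 log n) for a few sample sizes.
for n in (10, 100, 1000):
    mc = sum(max(rng.gauss(0, 1) for _ in range(n))
             for _ in range(trials)) / trials
    print(n, round(mc, 3), round(mc / math.sqrt(2 * math.log(n)), 3))
```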
To solve part b we would need the constant in front of $\sqrt{2 \log(n)}$ to come all the way up to $1$, and I'm not really sure how to do that. One idea would be to change variables to $u=\Phi(t)$, which would give
$$\int_0^\infty 1-\Phi(t)^n \, dt = \int_{1/2}^1 (1-u^n)\frac{dt}{du}\, du,$$
where $\frac{dt}{du} = \frac{1}{\varphi(\Phi^{-1}(u))}$ is the reciprocal of the normal density $\varphi$, written as a function of the normal CDF itself. Perhaps it is possible to get an appropriate series expansion for this quantity to get the result.
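The substitution itself can at least be sanity-checked numerically, using $\frac{dt}{du} = 1/\varphi(\Phi^{-1}(u))$. A rough sketch (the value of $n$, step sizes, and truncation point are arbitrary):

```python
from statistics import NormalDist

N = NormalDist()
n = 10

# Left side: integral of (1 - Phi(t)^n) over [0, 8]
# (the integrand is negligible beyond t = 8).
dt = 1e-3
lhs = sum((1 - N.cdf(i * dt) ** n) * dt for i in range(8000))

# Right side: integral of (1 - u^n) / phi(Phi^{-1}(u)) over [1/2, 1],
# via the midpoint rule; the integrand tends to 0 as u -> 1,
# so the right endpoint is harmless.
du = 1e-5
rhs = sum((1 - u ** n) / N.pdf(N.inv_cdf(u)) * du
          for u in (0.5 + (j + 0.5) * du for j in range(50_000)))

print(lhs, rhs)  # the two quadratures should agree closely
```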
Best Answer
The inequality holds with $C=\sqrt{2}$, and this is the optimal constant (the optimality follows from here).
Outline of the proof:
Let $\varphi$ and $\Psi$ be the pdf and complementary cdf of the standard normal distribution. Using the inequality $\Psi(x)< \varphi(x)/x$ for $x>0$, it is easy to show that $-\log \Psi(x)$ is convex; therefore its inverse $G(t) = \Psi^{-1}(e^{-t})$ is concave. Note also that $G$ is increasing.
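A quick numeric spot-check of the convexity claim, via second differences (illustrative only; the sample points and step size are arbitrary):

```python
import math
from statistics import NormalDist

N = NormalDist()

def f(x):
    # f(x) = -log Psi(x), with Psi the standard normal complementary cdf
    return -math.log(1 - N.cdf(x))

h = 1e-3
for x in (-1.0, 0.5, 1.0, 2.0, 4.0):
    second_diff = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2
    print(x, second_diff)  # positive at each point, consistent with convexity
```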
Apply the quantile transformation $X_k = G(Y_k)$, where the $Y_k$ are iid $\operatorname{Exp}(1)$ (this holds in distribution, since $e^{-Y_k}$ is uniform on $(0,1)$). Denoting $X_{(n)} = \max_k X_k$, $Y_{(n)} = \max_k Y_k$ and using the monotonicity and concavity of $G$, we get with the help of Jensen's inequality $$ E[X_{(n)}] = E[G(Y_{(n)})]\le G(E[Y_{(n)}]) = G(H_n), $$ where $H_n = 1+\frac12 + \dots + \frac1n$ is the $n$th harmonic number (the distribution of exponential order statistics is well known). Since $H_n\le \log n + 1$ for all $n$, we get $$ E[X_{(n)}]\le G(\log n+1) = \Psi^{-1}\big(\tfrac1{en}\big). $$
Using the inequality $\Psi(x)< \varphi(x)/x$ again, $$ \Psi(\sqrt{2 \log n}) \le \frac{1}{2\sqrt{\pi\log n}}e^{-\log n} \le \frac{1}{en} $$ for $n\ge 2$ (so that $2\sqrt{\pi \log n}\ge e$; the case $n=1$ is trivial). Since $\Psi$ is decreasing, $$ E[X_{(n)}]\le \sqrt{2\log n}, $$ as required.
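The chain of bounds $E[X_{(n)}] \le G(H_n) \le \Psi^{-1}\big(\tfrac1{en}\big) \le \sqrt{2\log n}$ can be illustrated numerically (a Monte Carlo sketch; $n$ and the trial count are arbitrary, and $\Psi^{-1}(p) = \Phi^{-1}(1-p)$):

```python
import math
import random
from statistics import NormalDist

N = NormalDist()
n, trials = 50, 50_000
rng = random.Random(1)

# Monte Carlo estimate of E[X_(n)] = E[max of n iid standard normals].
mc = sum(max(rng.gauss(0, 1) for _ in range(n)) for _ in range(trials)) / trials

H_n = sum(1 / k for k in range(1, n + 1))   # n-th harmonic number
jensen = N.inv_cdf(1 - math.exp(-H_n))      # G(H_n), the Jensen bound
crude = N.inv_cdf(1 - 1 / (math.e * n))     # Psi^{-1}(1/(en))
final = math.sqrt(2 * math.log(n))

print(mc, jensen, crude, final)  # each value should not exceed the next
```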