The classic reference is Royston (1982) [1], which gives algorithms for computing expected normal order statistics both exactly and approximately, going beyond explicit formulas. It also quotes a well-known approximation due to Blom (1958):
$$E(r:n) \approx \mu + \Phi^{-1}\left(\frac{r-\alpha}{n-2\alpha+1}\right)\sigma, \qquad \alpha=0.375.$$
For $n=200$, $r=1$ this formula gives a multiplier of about $-2.73$.
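A quick numeric check of the approximation (a sketch using scipy; the helper name `blom` is mine, not from the paper):

```python
from scipy.stats import norm

def blom(r, n, alpha=0.375, mu=0.0, sigma=1.0):
    """Blom's approximation to the expected value of the r-th order
    statistic in a normal(mu, sigma) sample of size n."""
    return mu + norm.ppf((r - alpha) / (n - 2 * alpha + 1)) * sigma

print(blom(1, 200))  # about -2.73, the multiplier quoted above
```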
[1]: J. P. Royston (1982), "Algorithm AS 177: Expected Normal Order Statistics (Exact and Approximate)", Journal of the Royal Statistical Society, Series C (Applied Statistics), Vol. 31, No. 2, pp. 161-165.
(The answer has been reworked to respond to the OP's and whuber's comments.)
The complementary cdf of $X$ is
$$G_n(x) = \left[1-F_Z\left(x/n\right)\right]^{n}$$
To prove that asymptotically $X$ follows an exponential distribution, we need to show that $$\lim_{n\rightarrow \infty}G_n(x)= e^{-\lambda x}$$
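Before the proof, here is a numeric illustration of the claimed limit. The choice of distribution is mine, purely for concreteness: take $Z$ half-normal, so $\lambda = f(0) = \sqrt{2/\pi}$.

```python
import numpy as np
from scipy.stats import halfnorm

lam = np.sqrt(2 / np.pi)  # f(0) for the half-normal density
x = 1.0
for n in [10, 100, 1000, 10000]:
    Gn = (1 - halfnorm.cdf(x / n)) ** n  # [1 - F_Z(x/n)]^n
    print(n, Gn)  # approaches exp(-lam * x), about 0.4502
```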
Consider
$$F_Z\left(x/n\right) = \int_0^{x/n}f(t)dt $$
By the change of variables $t = s/n$ (and then renaming $s$ back to $t$), we have
$$\int_0^{x/n}f(t)dt = \frac 1n\int_0^{x}f(t/n)dt$$
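This substitution can be sanity-checked by quadrature (the half-normal density again as a stand-in for $f$):

```python
from scipy.integrate import quad
from scipy.stats import halfnorm

f = halfnorm.pdf
x, n = 2.0, 7
lhs, _ = quad(f, 0, x / n)                   # integral of f(t) over [0, x/n]
rhs, _ = quad(lambda t: f(t / n) / n, 0, x)  # (1/n) * integral of f(t/n) over [0, x]
print(lhs, rhs)  # agree up to quadrature error
```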
Define
$$h_n(w) = \left(1+\frac {w}{n}\right)^{n}, \qquad \lim_{n\rightarrow \infty}h_n(w) = e^w=h(w), \;\; w \in \mathbb R$$
and
$$g_n(x) = -\int_0^{x}f(t/n)\,dt,\qquad \lim_{n\rightarrow \infty}g_n(x) = -\int_0^{x}f(0)\,dt = -\lambda x = g(x), \;\;x \in \mathbb R_+$$
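Both limits behave as stated numerically (half-normal example once more; `quad` computes the integral in $g_n$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import halfnorm

f = halfnorm.pdf
lam = f(0)  # = sqrt(2/pi)
w, x = -1.5, 2.0
for n in [10, 100, 1000]:
    hn = (1 + w / n) ** n                    # h_n(w) -> e^w
    gn = -quad(lambda t: f(t / n), 0, x)[0]  # g_n(x) -> -lam * x
    print(n, hn, np.exp(w), gn, -lam * x)
```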
(To respond to a question by the OP: we can take the limit inside the integral. Note that $n\geq 1$ and $x$ is held fixed, so the argument $t/n$ of $f$ stays in $[0, x/n]$, an interval that shrinks toward $0$; the behavior of $f$ at infinity is therefore irrelevant. Since $f(0)=\lambda$ is finite and $f$ is continuous at $0$, $f$ is bounded in a neighborhood of $0$, so for large $n$ the integrand is dominated by that bound and dominated convergence applies.)
With these definitions we can write
$$G_n(x) = h_n(g_n(x))$$
and the question is
$$ \lim_{n\rightarrow \infty}h_n(g_n(x)) =?\;\; h(g(x)) = e^{-\lambda x},\;\;x \in \mathbb R_+$$
The limit of a composition of sequences of functions does not in general equal the composition of their limits (which is essentially what whuber pointed out in his comment). But the equality does hold here because
$(i)$ $h_n$ converges to $h$ uniformly on compact sets (it does: the convergence of $(1+w/n)^n$ to $e^w$ is uniform on every compact subset of $\mathbb R$);
$(ii)$ the limit function $h$ is continuous (it is);
$(iii)$ for each fixed $x$, the values $g_n(x)$ eventually lie in a compact set on which the uniform convergence in $(i)$ holds (they do, since $g_n(x) \rightarrow -\lambda x$).
So the above equality holds and we have proven what we needed to prove.
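As a final sanity check, note that $G_n = h_n \circ g_n$ is an exact algebraic identity at every finite $n$, not just in the limit; this is easy to confirm numerically (half-normal example as before):

```python
from scipy.integrate import quad
from scipy.stats import halfnorm

f, F = halfnorm.pdf, halfnorm.cdf
x, n = 1.5, 50
Gn = (1 - F(x / n)) ** n                 # [1 - F_Z(x/n)]^n
gn = -quad(lambda t: f(t / n), 0, x)[0]  # g_n(x)
print(Gn, (1 + gn / n) ** n)             # h_n(g_n(x)): same value
```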
Best Answer
Let $X_1, \dots,X_n, \dots \sim \textrm{ iid } \mathcal N(0,1)$ and let $Y_n = \min\{X_1, \dots, X_n\}$.
The standard way to study iid minima is via the following computation: $$ P(Y_n \leq y) = 1 - P(Y_n > y) = 1-P(X_1 > y, \dots, X_n > y) = 1-\left(1-\Phi(y)\right)^n, $$ using independence in the last step (and the continuity of $\Phi$, so strict and non-strict inequalities give the same probability).
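A quick Monte Carlo check of this formula (a sketch; the sample sizes are arbitrary choices):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, reps, y = 20, 100_000, -1.0
mins = rng.standard_normal((reps, n)).min(axis=1)
print((mins <= y).mean())          # empirical P(Y_n <= y)
print(1 - (1 - norm.cdf(y)) ** n)  # 1 - (1 - Phi(y))^n, about 0.968
```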
If you are interested in the distribution of $Y_n$ as $n \to \infty$, then we have $$ \lim_{n \to \infty} P(Y_n \leq y) = \lim_{n \to \infty} 1 - \left(1 - \Phi(y)\right)^n = 1 $$ for any $y \in \mathbb R$, so the limiting distribution is effectively a point mass at $-\infty$. This makes sense: every interval $(-\infty, a)$ has positive probability for a standard normal random variable, so as you sample more and more you will eventually land in $(-\infty, a)$ for any $a$, and hence in the limit the minimum is below any fixed $a$ with probability 1.
Your formula agrees with this (assuming $r$ specifies which order statistic we care about, so in this case $r=1$): $$ \lim_{n \to \infty} \mu + \Phi^{-1}\left(\frac{1-\alpha}{n-2\alpha+1}\right)\sigma = -\infty, $$ since the argument of $\Phi^{-1}$ tends to $0$. I wouldn't trust this as an argument in its own right, though: without knowing more about the approximation, it isn't safe to assume it remains valid or meaningful in the limit.
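For what it's worth, the approximation does drift to $-\infty$ numerically, if slowly (reusing the `blom` helper sketched in the first answer above):

```python
for n in [10**2, 10**4, 10**6, 10**8]:
    print(n, blom(1, n))  # roughly -2.5, -3.8, -4.9, -5.7: slowly diverging
```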