As a comment suggested, an unbiased estimator is (one minus) the empirical distribution function
$$\hat P(X_1 > 0) = 1-\hat F_X(0) = 1-\frac 1n \sum_{i=1}^n I\{X_i \leq 0\},$$
where $I\{\cdot\}$ is the indicator function, because
$$E[\hat P(X_1 > 0)] = 1 - E[\hat F_X(0)] = 1-\frac 1n \sum_{i=1}^n E[I\{X_i \leq 0\}]$$
$$= 1-\frac 1n \cdot n\, P(X_1\leq 0) = P(X_1 > 0).$$
Because we are estimating a probability at a known threshold (in our case $0$), it does not matter whether we know the parameters of the distribution (such as $\mu$) or not.
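As a quick sanity check (not part of the argument above), here is a minimal simulation sketch, assuming for illustration that $X_i \sim N(\mu, 1)$ with $\mu = 0.5$ and $n = 20$; these values are mine, not from the original question:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, n, n_reps = 0.5, 20, 100_000  # illustrative values only

# For each replicate, draw a sample and compute 1 - F_hat(0) = mean(X_i > 0)
estimates = np.empty(n_reps)
for r in range(n_reps):
    x = rng.normal(mu, 1.0, size=n)
    estimates[r] = np.mean(x > 0)

print(estimates.mean())        # Monte Carlo mean of the estimator
print(1 - norm.cdf(0, mu, 1))  # true P(X_1 > 0); should agree closely
```

The Monte Carlo average of the estimator should match the true probability to within simulation error, illustrating unbiasedness.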
You have $tA + ABx$ as the argument of $\Phi$, while the answer you linked works with $\frac{x-\mu}{\sigma}$.
The answer you linked provides the solution (when integrating over $(-\infty, \infty)$):
$$\int_{-\infty}^\infty \Phi \left( \frac{x-\mu}{\sigma} \right) \phi (x) dx = \Phi \left( \frac{-\mu}{\sqrt{1+\sigma^2}} \right)$$
It seems to me you can just transform your integral into that form and apply the answer they provided.
Write
$$tA + ABx = ABx - (-tA) = \frac{x - (-tA)(AB)^{-1}}{(AB)^{-1}} = \frac{x-\mu}{\sigma}$$
where
$$\mu = (-tA)(AB)^{-1}$$
$$\sigma = (AB)^{-1} = \frac{1}{AB}$$
You now have an integral of the form used in that question, so following the logic of that answer, the integral should be
$$\Phi \left( \frac{tA(AB)^{-1}}{\sqrt{1+(AB)^{-2}}} \right) = \Phi \left( \frac{tA}{\sqrt{(AB)^2+1}} \right)$$
Now, in your example you have a lower bound that is not $-\infty$. So, using the transformation above, you should be able to replicate the steps taken in that answer (or a similar one, as there are multiple on this site) with your lower bound, provided a closed-form solution exists.
Perhaps you can run some basic simulations in R or Python to verify this is correct.
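For instance, a minimal Python sketch along those lines (the constants $t$, $A$, $B$ below are illustrative, not from your question):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
t, A, B = 0.7, 1.3, 0.4  # illustrative constants

# Monte Carlo: the integral is E[Phi(tA + AB*X)] with X ~ N(0, 1)
x = rng.normal(size=1_000_000)
mc = norm.cdf(t * A + A * B * x).mean()

closed_form = norm.cdf(t * A / np.sqrt((A * B) ** 2 + 1))
print(mc, closed_form)  # should agree to about three decimal places
```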
Intuitively, the result is obvious because (a) $\phi$ is a rapidly decreasing function (its magnitude decreases at a quadratic exponential rate) and (b) $\Phi$ is bounded above and, for negative $x,$ is also rapidly decreasing at essentially the same rate as $\phi.$ Thus the fraction $\phi^2/\Phi$ decreases rapidly for both positive and negative $x,$ while remaining bounded in between, whence its integral is very nicely behaved and finite.
The problem, then, is to make this intuition rigorous. The rigor merely parallels the foregoing argument by making suitable quantitative comparisons.
When $x\gt 0,$ $\Phi(x)\ge 1/2$ (by a familiar symmetry argument). Whence (in this case) the integrand is bounded above in magnitude by
$$\bigg|\frac{\phi(x)^2}{\Phi(x)}\bigg| \le 2\phi(x)^2 \lt 2\exp(-x^2)/\sqrt{2\pi}$$
which (because it is proportional to the density of another Normal distribution) has a finite integral.
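Explicitly, since $2\phi(x)^2 = e^{-x^2}/\pi$ and $e^{-x^2}/\sqrt{\pi}$ is the density of a Normal$(0, 1/2)$ variable,
$$\int_{-\infty}^\infty 2\phi(x)^2\,dx = \frac{1}{\sqrt\pi}\int_{-\infty}^\infty \frac{e^{-x^2}}{\sqrt\pi}\,dx = \frac{1}{\sqrt\pi} \lt \infty.$$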
The harder part is the integral over negative $x.$ But here, the Mills Ratio is
$$R(-x) = \frac{\Phi(x)}{\phi(x)}$$
which, as the linked post explains, is bounded below by $-x/(x^2+1).$ Thus, for large negative $x$ (say, $x \le -1$),
$$\bigg|\frac{\phi(x)^2}{\Phi(x)}\bigg| = \bigg|\phi(x)\left(\frac{1}{R(-x)}\right)\bigg| \le \phi(x) \frac{x^2+1}{|x|} \le 2|x|\phi(x)$$
whose integral also converges (it can be integrated exactly using elementary techniques).
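For the record, that elementary integration: since $\phi'(x) = -x\,\phi(x)$,
$$\int_{-\infty}^{-1} 2|x|\,\phi(x)\,dx = \int_{-\infty}^{-1} -2x\,\phi(x)\,dx = 2\phi(x)\Big|_{-\infty}^{-1} = 2\phi(1) = \sqrt{\frac{2}{\pi}}\,e^{-1/2}.$$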
Since $\phi(x)^2/\Phi(x)$ is bounded on the remaining interval $[-1,0],$ we have established that its integral over the entire real line is bounded in magnitude by the sum of three convergent quantities, whence it is finite, QED.
This argument continues to apply, essentially without change, to any integrand of the form $\phi(x)^k/\Phi(x)$ for $k\gt 1.$ It also shows (look more closely at the upper and lower bounds for Mills' Ratio) that the integral diverges when $k\le 1.$
(For a plot with $k=1$ -- which is just the inverse Mills' Ratio -- see the last figure of https://stats.stackexchange.com/a/166277/919.)
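Finally, a quick numerical check of both claims, sketched in Python (assuming scipy is available); the integrand is evaluated on the log scale so that $\Phi(x)$ never underflows to zero far in the left tail:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def integrand(x, k):
    # phi(x)^k / Phi(x), computed via logs to avoid underflow of Phi(x)
    return np.exp(k * norm.logpdf(x) - norm.logcdf(x))

# k = 2: the case proved above; quad reports a finite value with small error
val, err = quad(integrand, -np.inf, np.inf, args=(2,))
print(val, err)

# k = 1 (the inverse Mills ratio): partial integrals over [lo, 0] grow roughly
# like lo^2 / 2, consistent with the |x| asymptote, signalling divergence
for lo in (-10, -20, -40):
    print(lo, quad(integrand, lo, 0, args=(1,))[0])
```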