The problem is that there is a good reason why general hitting-time results are practically nonexistent:
Let $f(t)$ be the time-dependent cutoff of your Brownian motion. In your case, $f(t) = \log (Ae^{-\mu t} + Ce^{(b-\mu) t})$. We would like to compute the probability density of $B_t$ given that we have not yet hit the boundary at time $t$. Let this density be $p(x;t)$, so that $\int_{-\infty}^{f(t)} dx \space p(x;t) = 1$. One can work out a PDE for $p$ by discretizing and looking at $p(x;t+\Delta t)$. It is not easy, and one has to be very careful about factors of $\Delta t$ vs $\sqrt{\Delta t}$ and the rate at which you lose weight to the cutoff, but it is doable. One finds the following:
$\partial_t p(x;t) = \frac{1}{2} \partial_x^2 p(x;t) - \frac{1}{2} p(x;t) \partial_x p(f(t); t)$ for $x < f(t)$
The first term is the standard Brownian diffusion of probability, while the latter term accounts for the renormalization necessary (after loss through the cutoff) to keep the total probability equal to 1. One can check the consistency of this by integrating both sides with respect to $x$. The LHS goes to zero (being $\partial_t 1$), and the two terms on the RHS cancel each other. The probability density of hitting times is given by $P(t) \propto \partial_x p(f(t);t)$.
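That integration-by-parts consistency check can be illustrated numerically. Below, a fixed boundary $f \equiv 0$ and the toy density $p(x) = -x e^{x}$ on $x < 0$ (my illustrative choices, not anything from the problem) stand in for the general case; the integrated RHS should vanish.

```python
import numpy as np

# Consistency check of the renormalized PDE, integrated over x, for an
# illustrative fixed boundary f = 0 and density p(x) = -x e^x on x < 0
# (this p integrates to 1 and vanishes at the boundary).
x = np.linspace(-40.0, 0.0, 400001)
dx = x[1] - x[0]
p = -x * np.exp(x)

# Trapezoid rule for the total probability; should be ~1.
total = dx * (p.sum() - 0.5 * (p[0] + p[-1]))

# One-sided estimate of the boundary derivative p'(0); exact value is -1.
px_boundary = (p[-1] - p[-2]) / dx

# Integrated RHS: (1/2) p'(f) from the diffusion term minus
# (1/2) p'(f) * integral(p) from the renormalization term; should be ~0.
rhs = 0.5 * px_boundary - 0.5 * px_boundary * total
print(total, px_boundary, rhs)   # ~1, ~-1, ~0
```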
The more complicated the function $f(t)$, the less likely this is to have a nice solution. Even relatively simple $f(t)$ are difficult to solve. Your best bet to find an analytical solution, if one exists, is to run a small-scale Monte Carlo simulation to get an idea of what the probability looks like, and then guess an Ansatz.
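A minimal sketch of such a Monte Carlo run, discretizing the Brownian paths and recording first crossings of $f(t)$; the parameter values $A, C, \mu, b$ below are placeholders for illustration, not values from the question.

```python
import numpy as np

# Small-scale Monte Carlo sketch of the first-passage time to the moving
# boundary f(t) = log(A e^{-mu t} + C e^{(b-mu) t}).
rng = np.random.default_rng(0)
A, C, mu, b = 1.0, 1.0, 0.5, 2.0          # assumed example parameters
dt, T, n_paths = 1e-3, 10.0, 2000

n_steps = int(T / dt)
t_grid = dt * np.arange(1, n_steps + 1)
bound = np.log(A * np.exp(-mu * t_grid) + C * np.exp((b - mu) * t_grid))

hits = []
for _ in range(n_paths):
    # One Brownian path on the grid, started at 0.
    path = np.cumsum(np.sqrt(dt) * rng.standard_normal(n_steps))
    crossed = np.nonzero(path >= bound)[0]
    if crossed.size > 0:
        hits.append(t_grid[crossed[0]])   # first crossing time of f(t)

print(f"hit fraction: {len(hits) / n_paths:.3f}")
```

A histogram of `hits` is the empirical hitting-time density one would fit an Ansatz to.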
Let's look at the case where $(b-\mu) > 0$ or $\mu < 0$, so that asymptotically, the cutoff recedes linearly as $at$ for some positive $a$. Then we expect most of the probability density will look like the standard Brownian motion distribution $e^{-x^2/2t}$, and the rest of it will be a function of $(x-at)$, at least asymptotically. So let's take that as our Ansatz: $p(x;t) = e^{-x^2/2t} g(x-at,t)$. Plugging in, we find
$-a\partial_z g(z,t) + \partial_t g(z,t) = \frac{1}{2} \partial_z^2 g(z,t) - \frac{x}{t} \partial_z g(z,t) + \frac{a}{2} e^{-\frac{a^2t}{2}} g(z,t)$,
where $z = x - at$ (a subleading term of order $g/t$ has been dropped, since we only care about asymptotics). Since $\frac{x}{t} = \frac{z}{t} + a$, the $-a\partial_z g$ terms on both sides cancel, leaving
$\partial_t g(z,t) = \frac{1}{2} \partial_z^2 g(z,t) - \frac{z}{t} \partial_z g(z,t) + \frac{a}{2} e^{-\frac{a^2t}{2}}g(z,t)$.
Let's ignore the last term, because we are only looking at the asymptotic behavior. Then a good solution is $g(z,t) = \alpha z + \beta + \gamma \frac{z}{t}$ for $z<0$, and we set the free variables by the requirement that $p(x;t)$ integrates (over x) to $1$, that $g(0,t) = 0$ (so $\beta = 0$), and that we really don't care about the $z/t$ term in the limit.
The hitting-time density winds up being proportional to $e^{-\frac{a^2}{2} t}$, at least asymptotically. Of course, that's not the behavior you're interested in, but this is a first step. (Next steps involve perturbing $f(t)$ by lower-order terms in the expansion of the log and seeing how you have to modify your Ansatz.)
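That exponential tail can be sanity-checked against the classical Bachelier–Lévy first-passage density to a straight line: for a boundary $c + at$ with $c, a > 0$, the density is $\frac{c}{\sqrt{2\pi t^3}} e^{-(c+at)^2/2t}$, whose logarithmic derivative tends to $-a^2/2$. A quick numerical check, with arbitrary illustrative values $c = 1$, $a = 0.7$:

```python
import math

# Log of the first-passage density to the line c + a t, c, a > 0.
c, a = 1.0, 0.7

def log_density(t):
    return math.log(c) - 0.5 * math.log(2 * math.pi * t**3) - (c + a * t)**2 / (2 * t)

# Numerical log-derivative at large t; should approach -a^2/2 = -0.245.
t, h = 2000.0, 1e-2
slope = (log_density(t + h) - log_density(t - h)) / (2 * h)
print(slope, -a**2 / 2)   # both ~ -0.245
```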
You probably know (and if not, could easily check) that the process $X_{t}=B_{t}^{2}-t$ is a martingale.
Now consider, for $n \in \mathbb{N}$, the (bounded) stopping times $$T_{n}=T \wedge n$$
Apply the Optional Stopping Theorem to $X_t$ at $T_{n}$, noting that $|B_{T_{n}}| \le \max(|a|,|b|)$ and $T_{n} \le n$; this yields $E[T_{n}] = E[B_{T_{n}}^{2}]$.
Use the monotone convergence theorem to get $$E[T]=\lim_{n\rightarrow \infty}E[T_{n}]= \lim_{n\rightarrow \infty} E[B_{T_{n}}^{2}] \le \max(a^2,b^2)< \infty$$
Now use dominated convergence to conclude $$\lim_{n\rightarrow \infty} E[B_{T_{n}}^{2}] = E[B_{T}^{2}] = a^2 P(B_{T}=a) + b^2 P(B_{T}=b)$$
which you know already.
This gives you $E[T]$.
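For an exit time from $(a,b)$ with $a < 0 < b$ and start at $0$, combining the above with the known exit probabilities gives $E[T] = |a|b$. The same identity holds exactly for a simple symmetric random walk (the discrete analogue of Brownian motion), which makes for a quick check; the choice $(a,b) = (-1,2)$, so $E[T] = 2$, is an illustrative one.

```python
import numpy as np

# Exit time of a simple symmetric random walk from (a, b), started at 0.
# The optional-stopping argument predicts E[T] = |a| * b = 2 here.
rng = np.random.default_rng(1)
a, b, n_walks = -1, 2, 20000

times = np.empty(n_walks)
for i in range(n_walks):
    x, steps = 0, 0
    while a < x < b:
        x += 1 if rng.random() < 0.5 else -1   # fair +/-1 step
        steps += 1
    times[i] = steps

print(times.mean())   # ~ 2 = |a| * b
```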
The two-dimensional Brownian motion $(W_t)_{t \geq 0}$ has components which are independent Brownian motions, i.e. $$W_t = \begin{pmatrix} 1 \\ 1 \end{pmatrix} +\begin{pmatrix} B_t^1 \\ B_t^2 \end{pmatrix}$$ where $(B_t^i)_{t \geq 0}$, $i=1,2$, are independent one-dimensional Brownian motions started at $0$. Your problem boils down to finding $\mathbb{P}(1+B_{\tau}^1 \geq 0)$ for the stopping time
$$\tau := \inf\{t \geq 0; (1,1)+(B_t^1,B_t^2) \in \mathbb{R} \times (-\infty,0]\}.$$
Clearly,
$$\tau = \inf\{t \geq 0; 1+B_t^2 \leq 0\}$$
which means that $\tau$ is a stopping time with respect to the canonical filtration $\mathcal{F}_t^{2} := \sigma(B_s^2; s \leq t)$. In particular, $\tau$ is independent from $\mathcal{F}_{\infty}^{1} := \sigma(B_s^1; s \geq 0)$. Hence,
$$p:=\mathbb{P}(1+B_{\tau}^1 \geq 0) = \mathbb{E} \bigg[ \mathbb{P}(1+B_t^1 \geq 0) \bigg|_{t=\tau} \bigg].$$
It is known from the reflection principle that $\tau$ has distribution $$\frac{1}{\sqrt{2\pi t^3}} \exp \left(- \frac{1}{2t} \right) 1_{(0,\infty)}(t) \, dt$$ and therefore we get
$$\begin{align*} p &= \int_0^{\infty} \int_{-1}^{\infty} \frac{1}{\sqrt{2\pi t^3}} \frac{1}{\sqrt{2\pi t}} \exp \left(- \frac{1}{2t} - \frac{y^2}{2t} \right) \, dy \, dt \\ &= \frac{1}{2\pi} \int_{-1}^{\infty} \int_0^{\infty} \exp \left(- \frac{1+y^2}{2t} \right) \frac{dt}{t^2} \, dy \\ &= \int_{-1}^{\infty} \frac{1}{\pi} \frac{1}{1+y^2} \, dy \\ &= \frac{1}{2} - \frac{1}{\pi} \arctan(-1) = \frac{3}{4}. \end{align*}$$
Remark: The above calculation actually shows that $B_{\tau}^1$ is standard Cauchy distributed.
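Both the value $p = 3/4$ and the Cauchy remark are easy to confirm by Monte Carlo, using the standard facts that the first-passage time of $1 + B_t^2$ to $0$ has the same law as $1/Z^2$ for $Z \sim N(0,1)$, and that $B_{\tau}^1 \sim N(0,\tau)$ independently of $\tau$:

```python
import numpy as np

# tau = inf{t : 1 + B_t^2 <= 0} has the law of 1/Z^2, Z standard normal,
# and B_tau^1 = sqrt(tau) * Z' for an independent standard normal Z'.
# Hence 1 + B_tau^1 = 1 + Z'/|Z| is 1 + (standard Cauchy).
rng = np.random.default_rng(2)
n = 10**6
z = rng.standard_normal(n)      # determines tau = 1 / z**2
zp = rng.standard_normal(n)     # drives the first coordinate
x_at_tau = 1 + zp / np.abs(z)   # position of the first coordinate at time tau

frac = (x_at_tau >= 0).mean()
print(frac)                     # ~ 0.75
```

The sample median of `x_at_tau` should also sit near $1$, consistent with $B_{\tau}^1$ being standard Cauchy.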