Just an initial remark: if you want computational speed, you usually have to sacrifice accuracy; "more accuracy" means "more time" in general. Anyway, here is a second-order approximation, which should improve on the "crude" approximation you suggested in your comment above:
$$E\Bigg(\frac{X_{j}}{\sum_{i}X_{i}}\Bigg)\approx
\frac{E[X_{j}]}{E[\sum_{i}X_{i}]}
-\frac{cov[\sum_{i}X_{i},X_{j}]}{E[\sum_{i}X_{i}]^2}
+\frac{E[X_{j}]}{E[\sum_{i}X_{i}]^3} Var[\sum_{i}X_{i}]
$$
For independent $X_{i}\sim\text{Gamma}(\alpha_{i},\beta_{i})$ (rate parameterization, so $E[X_{i}]=\alpha_{i}/\beta_{i}$, $Var[X_{i}]=\alpha_{i}/\beta_{i}^{2}$, and $cov[\sum_{i}X_{i},X_{j}]=Var[X_{j}]$ by independence), this equals
$$= \frac{\alpha_{j}}{\sum_{i} \frac{\beta_{j}}{\beta_{i}}\alpha_{i}}\times\Bigg[1 - \frac{1}{\Bigg(\sum_{i} \frac{\beta_{j}}{\beta_{i}}\alpha_{i}\Bigg)}
+ \frac{1}{\Bigg(\sum_{i} \frac{\alpha_{i}}{\beta_{i}}\Bigg)^2}\Bigg(\sum_{i} \frac{\alpha_{i}}{\beta_{i}^2}\Bigg)\Bigg]
$$
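As a quick sanity check, here is a small Monte Carlo comparison of the second-order approximation against simulation, assuming independent $X_i\sim\text{Gamma}(\alpha_i,\beta_i)$ with rate parameters $\beta_i$; the parameter values below are arbitrary illustrations, not from the question:

```python
import random

# Monte Carlo check of the second-order approximation for
# independent X_i ~ Gamma(alpha_i, rate beta_i), so E[X_i] = alpha_i / beta_i.
# (random.gammavariate takes a SCALE parameter, hence the 1/b below.)
random.seed(0)
alpha = [2.0, 3.0, 5.0]   # arbitrary shape parameters
beta = [1.0, 2.0, 1.5]    # arbitrary rate parameters
j = 0                     # component whose ratio we approximate
n = 200_000

mc = 0.0
for _ in range(n):
    x = [random.gammavariate(a, 1.0 / b) for a, b in zip(alpha, beta)]
    mc += x[j] / sum(x)
mc /= n

# The second-order formula from the answer, written with
# s1 = E[sum X_i] and s2 = Var[sum X_i]:
s1 = sum(a / b for a, b in zip(alpha, beta))
s2 = sum(a / b**2 for a, b in zip(alpha, beta))
first = (alpha[j] / beta[j]) / s1
approx = first * (1.0 - 1.0 / (beta[j] * s1) + s2 / s1**2)

print(mc, approx)  # the two values should be close
```
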
EDIT: An explanation for the above expansion was requested. The short answer is Wikipedia. The long answer is given below.
Write $f(x,y)=\frac{x}{y}$. Now we need all the second-order derivatives of $f$. The first-order terms will cancel, because they each involve a multiple of $x-E(X)$ or $y-E(Y)$, both of which have expectation zero.
$$\frac{\partial^2 f}{\partial x^2}=0$$
$$\frac{\partial^2 f}{\partial x \partial y}=-\frac{1}{y^2}$$
$$\frac{\partial^2 f}{\partial y^2}=2\frac{x}{y^3}$$
And so the Taylor series up to second order is given by:
$$\frac{x}{y} \approx \frac{\mu_x}{\mu_y}+\frac{1}{2}\Bigg(-\frac{1}{\mu_y^2}2(x-\mu_x)(y-\mu_y) + 2\frac{\mu_x}{\mu_y^3}(y-\mu_y)^2 \Bigg)$$
Taking expectations yields:
$$E\Big[\frac{x}{y}\Big] \approx \frac{\mu_x}{\mu_y}-\frac{1}{\mu_y^2}E\Big[(x-\mu_x)(y-\mu_y)\Big] + \frac{\mu_x}{\mu_y^3}E\Big[(y-\mu_y)^2\Big]$$
Which is the answer I gave (although I initially forgot the minus sign in the second term).
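The three partial derivatives above are easy to confirm numerically with central finite differences at an arbitrary test point:

```python
# Finite-difference check of the second-order partials of f(x, y) = x / y.
def f(x, y):
    return x / y

x0, y0, h = 1.3, 0.7, 1e-4  # arbitrary evaluation point and step size

fxx = (f(x0 + h, y0) - 2 * f(x0, y0) + f(x0 - h, y0)) / h**2
fxy = (f(x0 + h, y0 + h) - f(x0 + h, y0 - h)
       - f(x0 - h, y0 + h) + f(x0 - h, y0 - h)) / (4 * h**2)
fyy = (f(x0, y0 + h) - 2 * f(x0, y0) + f(x0, y0 - h)) / h**2

print(fxx)                  # should be ~ 0
print(fxy, -1 / y0**2)      # should agree: -1/y^2
print(fyy, 2 * x0 / y0**3)  # should agree: 2x/y^3
```
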
With two variables, you are defining a line segment in $\mathbb{R}^2$, as you pointed out. However, due to the simplex constraint, one of these two variables is redundant in terms of specifying the density, since there is a one-to-one relationship between $x_1$ and $x_2$. Therefore, the density is specified over $K-1$ free variables (i.e., in $\mathbb{R}$).
This is actually pointed out in the first line of this section of the Wikipedia article, albeit very subtly.
Therefore, your density function becomes:
$$Dir_{1,1}(x_1,1-x_1)=\frac{\Gamma(2)}{\Gamma(1)^2}(x_1)^0(1-x_1)^0=1$$
Therefore,
$$\int_0^1 Dir_{1,1}(x_1,1-x_1) dx_1 = 1$$
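A quick numeric check of both claims (constant density $1$, integrating to $1$), using only the standard-library gamma function:

```python
import math

# The Dirichlet(1, 1) density written over the single free variable x1.
def dir11(x1):
    return (math.gamma(2) / math.gamma(1)**2) * x1**0 * (1 - x1)**0

# The density is constant at 1 everywhere on (0, 1)...
print(dir11(0.25), dir11(0.9))  # 1.0 1.0

# ...and a midpoint-rule integral over [0, 1] recovers total mass 1.
n = 1000
integral = sum(dir11((k + 0.5) / n) for k in range(n)) / n
print(integral)  # 1.0
```
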
Response to OP Comment
Due to the simplex constraints, the two-variable Dirichlet density is actually degenerate in $\mathbb{R}^2$, as shown by my construction above (it only requires one variable). While it is true that it has a density of $1$, it does not have a density of $1$ on the line segment connecting $(1,0)$ with $(0,1)$. What the above construction shows is that the marginal density has a value of $1$. Your confusion comes from thinking of $x_2$ as a free variable, in which case the support of the Dirichlet on $\mathbb{R}^2$ would have a non-zero area. This intuition is fine in cases like the bivariate Gaussian, where the two variables are not perfectly correlated, but not in this case.
We can formally derive this as follows:
Let $L$ be some number in $[0,\sqrt{2}]$ specifying the distance from $(1,0)$ to $(0,1)$ along the connecting line segment. Thus, each value of $L$ identifies a unique $(x_1,x_2)$ pair. Using this notation, your assumption that the density is $1$ along this line boils down to:
$$P(L \in [a,b])=b-a \quad \text{for } [a,b]\subset[0,\sqrt{2}]$$
However, we can show this is not the case through a formal treatment of the joint density of $x_1,x_2$:
$$P_L(L\in [a,b])=P_{X_1,X_2}[(x_1,x_2) \in A_{[a,b]}]$$
Where $A_{[a,b]}:= \{(u,v): u \in [1-\frac{b}{\sqrt{2}},1-\frac{a}{\sqrt{2}}],\ v = 1- u\}$
Now, let's calculate $P_L(L\in [a,b])$:
$$P_L(L\in [a,b])= \int_{A_{[a,b]}} dP_{X_1,X_2}= \int_{A_{[a,b]}} dP_{X_1}\,dP_{X_2|X_1} =\int_{A_{[a,b]}} 1 \;dP_{X_1} = \int_{1-\frac{b}{\sqrt{2}}}^{1-\frac{a}{\sqrt{2}}}1\; du = \left(1-\frac{a}{\sqrt{2}}\right) - \left(1-\frac{b}{\sqrt{2}}\right) = \frac{1}{\sqrt{2}}(b-a)$$
Where the third equality comes about because $dP_{X_2|X_1} = 1$ for $X_2=1-X_1$ (i.e., it's not a density, but a point probability mass at $1-X_1$).
As you can see, we've recovered the $\frac{1}{\sqrt{2}}$ normalizing constant for the density along the line segment in $\mathbb{R}^2$. Effectively, this (degenerate) joint density is just a linear transformation of one of the two marginals (either one will work). The transformation stretches the length of the support from $1$ to $\sqrt{2}$, so the density must decrease to compensate.
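The derivation is easy to verify by simulation: sample $x_1\sim\text{Uniform}(0,1)$ (the Dirichlet$(1,1)$ marginal), set $x_2=1-x_1$, and check that $P(L\in[a,b])=(b-a)/\sqrt{2}$ for an arbitrary interval:

```python
import math
import random

# Monte Carlo check that P(L in [a, b]) = (b - a) / sqrt(2), where L is the
# distance from (1, 0) along the segment to (0, 1) and
# (x1, x2) ~ Dirichlet(1, 1).
random.seed(0)
a, b, n = 0.3, 0.8, 200_000  # arbitrary interval inside [0, sqrt(2)]

hits = 0
for _ in range(n):
    x1 = random.random()           # Dirichlet(1,1) marginal is Uniform(0,1)
    x2 = 1.0 - x1                  # simplex constraint
    L = math.hypot(x1 - 1.0, x2)   # distance from (1, 0); equals sqrt(2)*(1-x1)
    if a <= L <= b:
        hits += 1

print(hits / n, (b - a) / math.sqrt(2))  # the two values should be close
```
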
Best Answer
The Dirichlet distribution is either defined on the simplex of $\mathbb R^k$, $$\mathcal S_{k-1}=\big\{\mathbf x;\ x_i\in (0,1),~i=1,2,\ldots,k,~\sum_{i=1}^k x_i=1\big\}$$ in which case the density $$f(\mathbf x) = \frac{1}{B(\textbf{a})}\prod_{i=1}^{k}x_{i}^{a_{i}-1}$$ is with respect to the Lebesgue measure over that simplex, or defined in $\mathbb R^{k-1}$, in which case the density $$f(\mathbf x) = \frac{1}{B(\textbf{a})}\prod_{i=1}^{k-1}x_{i}^{a_{i}-1}(1-x_1-\cdots-x_{k-1})^{a_k-1}$$ is with respect to the Lebesgue measure over $\mathbb R^{k-1}$. The latter is Wikipedia's definition, albeit poorly written, since it is expressed as a function of all $k$ terms.
A particular instance of the latter is the family of Beta distributions, which illustrates why it is not feasible to derive a closed-form cdf, except for small integer values of the parameters $a_i$: $$\mathbb P_{\alpha,\beta}(X\le \epsilon)=\dfrac{B(\epsilon;\alpha,\beta)}{B(\alpha,\beta)}\quad 0\le\epsilon\le 1$$ where $B(\epsilon;\alpha,\beta)$ is the so-called incomplete Beta function (and an acknowledgement of the absence of a closed form!).
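To illustrate the small-integer exception: for $\alpha=2,\beta=1$ the Beta density is $2x$ on $[0,1]$, so the cdf has the closed form $\mathbb P(X\le\epsilon)=\epsilon^2$. A sketch comparing that closed form against a numerical integral of the density (standing in for the incomplete Beta ratio):

```python
import math

# Beta(2, 1) density: x^(alpha-1) * (1-x)^(beta-1) / B(alpha, beta) = 2x.
alpha, beta = 2, 1
B = math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)

def beta_pdf(x):
    return x**(alpha - 1) * (1 - x)**(beta - 1) / B

# Midpoint-rule approximation of the incomplete-Beta ratio
# P(X <= eps) = B(eps; alpha, beta) / B(alpha, beta).
eps = 0.4
n = 100_000
cdf_numeric = sum(beta_pdf(eps * (k + 0.5) / n) for k in range(n)) * eps / n

print(cdf_numeric, eps**2)  # both ~ 0.16
```
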
Both representations obviously lead to the same distribution, but writing events such as $\mathbb P(\mathbf X\in A)$ will depend on which representation is used for $A$, i.e., either $A\subset\mathcal S_{k-1}$ or $A\subset\mathbb R^{k-1}$. In the former case, $$\mathbb P(\mathbf X\in A)=\int_A \frac{1}{B(\textbf{a})}\prod_{i=1}^{k}x_{i}^{a_{i}-1}\,\text d\lambda_{\mathcal S_{k-1}}(\mathbf x)$$ and in the latter $$\mathbb P(\mathbf X\in A)=\int_A \frac{1}{B(\textbf{a})}\prod_{i=1}^{k-1}x_{i}^{a_{i}-1}(1-x_1-\cdots-x_{k-1})^{a_k-1} \,\text dx_1\cdots\,\text dx_{k-1}$$