You are right to be skeptical of this approach. The Taylor series method does not work in general, although the heuristic contains a kernel of truth. To summarize the technical discussion below,
- Strong concentration implies that the Taylor series method works for nice functions
- Things can and will go dramatically wrong for heavy-tailed distributions or not-so-nice functions
As Alecos's answer indicates, this suggests that the Taylor-series method should be scrapped if your data might have heavy tails. (Finance professionals, I'm looking at you.)
As Elvis noted, the key problem is that the variance does not control higher moments. To see why, let's simplify your question as much as possible to get to the main idea.
Suppose we have a sequence of random variables $X_n$ with $\sigma(X_n)\to 0$ as $n\to \infty$.
Q: Can we guarantee that $\mathbb{E}[|X_n-\mu|^3] = o(\sigma^2(X_n))$ as $n\to \infty?$
Since there are random variables with finite second moments and infinite third moments, the answer is emphatically no. Therefore, in general, the Taylor series method fails even for 3rd degree polynomials. Iterating this argument shows you cannot expect the Taylor series method to provide accurate results, even for polynomials, unless all moments of your random variable are well controlled.
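For concreteness, here is a small simulation sketch (mine, not part of the argument above) using a Student $t$ distribution with $2.5$ degrees of freedom: its variance is finite, but $\mathbb{E}|X|^3=\infty$, so the sample third absolute moment never settles down even though the sample variance does.

```python
# Simulation sketch (not part of the argument above): Student t with 2.5 degrees
# of freedom has finite variance, Var = df/(df-2) = 5, but E|X|^3 = infinity.
# The sample variance hovers near 5 while the sample third absolute moment
# never settles down.
import numpy as np

rng = np.random.default_rng(0)

for size in [10**4, 10**5, 10**6, 10**7]:
    x = rng.standard_t(df=2.5, size=size)
    var_hat = np.mean(x**2)           # E[X] = 0, so this estimates Var(X) = 5
    m3_hat = np.mean(np.abs(x)**3)    # has no finite limit as the sample grows
    print(f"sample size {size:>8}:  variance ~ {var_hat:6.2f},  E|X|^3 ~ {m3_hat:10.1f}")
```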
What, then, are we to do? Certainly the method works for bounded random variables whose support converges to a point, but this class is far too small to be interesting. Suppose instead that the sequence $X_n$ comes from some highly concentrated family that satisfies (say)
$$\mathbb{P}\left\{ |X_n-\mu|> t\right\} \le \mathrm{e}^{- C n t^2} \tag{1}$$
for every $t>0$ and some $C>0$. Such random variables are surprisingly common. For example, when $X_n$ is the empirical mean
$$ X_n := \frac{1}{n} \sum_{i=1}^n Y_i$$
of nice random variables $Y_i$ (e.g., iid and bounded), various concentration inequalities imply that $X_n$ satisfies (1). A standard argument (see p. 10 here) bounds the $p$th moments for such random variables:
$$ \mathbb{E}[|X_n-\mu|^p] \le \left(\frac{p}{2 C n}\right)^{p/2}.$$
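Before moving on, a quick numerical aside (my own sketch, not part of the cited argument): take $X_n$ to be the mean of $n$ iid $\mathrm{Uniform}(0,1)$ variables, for which Hoeffding's inequality gives (1) with $C=2$ (up to a harmless factor of $2$ in front of the exponential), and compare Monte Carlo moment estimates with the bound $(p/(2Cn))^{p/2}$.

```python
# Sanity check (my own sketch): moments of X_n = mean of n iid Uniform(0,1).
# Hoeffding gives P(|X_n - 1/2| > t) <= 2 exp(-2 n t^2), i.e. (1) with C = 2,
# so E|X_n - 1/2|^p should sit below (p / (2 C n))^(p/2).
import numpy as np

rng = np.random.default_rng(1)
C, n, reps = 2.0, 50, 200_000

xbar = rng.uniform(0.0, 1.0, size=(reps, n)).mean(axis=1)
dev = np.abs(xbar - 0.5)

for p in [2, 3, 4, 6]:
    empirical = np.mean(dev ** p)
    bound = (p / (2.0 * C * n)) ** (p / 2.0)
    print(f"p = {p}:  Monte Carlo E|X_n - mu|^p = {empirical:.2e},  bound = {bound:.2e}")
```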
Therefore, for any "sufficiently nice" analytic function $f$ (see below), we can bound the error $\mathcal{E}_m$ of the $m$-term Taylor series approximation using the triangle inequality
$$ \mathcal{E}_m:=\left|\mathbb{E}[f(X_n)] - \sum_{p=0}^m \frac{f^{(p)}(\mu)}{p!} \mathbb{E}(X_n-\mu)^p\right|\le \tfrac{1}{(2 C n)^{(m+1)/2}} \sum_{p=m+1}^\infty |f^{(p)}(\mu)| \frac{p^{p/2}}{p!}$$
when $n > 1/(2C)$. Since Stirling's approximation gives $p! \approx p^{p-1/2}$, the error of the truncated Taylor series satisfies
$$ \mathcal{E}_m = O(n^{-(m+1)/2}) \text{ as } n\to \infty\quad \text{whenever} \quad \sum_{p=0}^\infty p^{(1-p)/2 }|f^{(p)}(\mu)| < \infty \tag{2}.$$
Hence, when $X_n$ is strongly concentrated and $f$ is sufficiently nice, the Taylor series approximation is indeed accurate. The inequality appearing in (2) implies that $f^{(p)}(\mu)/p! = O(p^{-p/2})$, so that in particular our condition requires that $f$ is entire. This makes sense because (1) does not impose any boundedness assumptions on $X_n$.
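To see this rate in action, here is a small illustration of my own with the entire function $f(x)=\mathrm{e}^x$ and $X_n$ the mean of $n$ iid $\mathrm{Uniform}(0,1)$ variables. In this case $\mathbb{E}[f(X_n)] = \left(n(\mathrm{e}^{1/n}-1)\right)^n$ exactly, so the error of the $m=2$ approximation $f(\mu) + \tfrac12 f''(\mu)\operatorname{Var}(X_n)$ can be tracked without simulation noise. (The observed decay is even faster than $n^{-3/2}$ here, because the uniform distribution is symmetric and the third-moment term vanishes.)

```python
# Illustration (mine): error of the two-term approximation for f(x) = exp(x)
# and X_n = mean of n iid Uniform(0,1).  Here E[f(X_n)] = (n (e^{1/n} - 1))^n
# exactly, and the m = 2 approximation is f(mu) + f''(mu)/2 * Var(X_n)
# = exp(1/2) * (1 + 1/(24 n)) with mu = 1/2.
import numpy as np

mu = 0.5
for n in [10, 100, 1000]:
    exact = (n * np.expm1(1.0 / n)) ** n
    approx = np.exp(mu) * (1.0 + 1.0 / (24.0 * n))
    print(f"n = {n:>5}:  exact = {exact:.10f},  error = {exact - approx:.2e}")
```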
Let's see what can go wrong when $f$ has a singularity (following whuber's comment). Suppose that we choose $f(x)=1/x$. If we take $X_n$ from the $\mathrm{Normal}(1,1/n)$ distribution truncated between zero and two, then $X_n$ is sufficiently concentrated but $\mathbb{E}[f(X_n)] = \infty$ for every $n$. In other words, we have a highly concentrated, bounded random variable, and still the Taylor series method fails when the function has just one singularity.
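To make the divergence concrete, here is a numerical sketch of my own: the truncated normal density is strictly positive at $0$, so $\int_\varepsilon^2 x^{-1} p_n(x)\,dx$ grows like a constant times $\log(1/\varepsilon)$ and has no finite limit. (Note that a naive Monte Carlo estimate would not reveal this, since the offending mass near zero is exponentially small.)

```python
# Numerical sketch (mine): for X_n ~ Normal(1, 1/n) truncated to (0, 2), the density
# is strictly positive at 0, so the integral of pdf(x)/x over [eps, 2] grows like
# const * log(1/eps) -- i.e. E[1/X_n] = infinity.
import numpy as np
from scipy.integrate import quad
from scipy.stats import truncnorm

n = 4
scale = 1.0 / np.sqrt(n)
a, b = (0.0 - 1.0) / scale, (2.0 - 1.0) / scale     # truncation points in standard units
dist = truncnorm(a, b, loc=1.0, scale=scale)

for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
    value, _ = quad(lambda x: dist.pdf(x) / x, eps, 2.0, limit=200)
    print(f"eps = {eps:.0e}:  integral of pdf(x)/x over [eps, 2] = {value:.3f}")
```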
A few words on rigor. I find it nicer to present the condition appearing in (2) as something derived, rather than as a deus ex machina demanded by a rigorous theorem/proof format. To make the argument completely rigorous, first note that the right-hand side of (2) implies that
$$\mathbb{E}[|f(X_n)|] \le \sum_{p=0}^\infty \frac{|f^{(p)}(\mu)|}{p!} \mathbb{E}[|X_n-\mu|^p]< \infty$$
by the growth rate of subgaussian moments from above. Thus, Fubini's theorem provides
$$ \mathbb{E}[f(X_n)] = \sum_{p=0}^\infty \frac{f^{(p)}(\mu)}{p!} \mathbb{E}[(X_n-\mu)^p].$$
The rest of the proof proceeds as above.
You cannot bound that expectation in terms of $\sigma$ and $n$. That's because there is the distinct possibility that the expectation does not exist at all (or is infinite). See the question "I've heard that ratios or inverses of random variables often are problematic, in not having expectations. Why is that?". If the conditions given there are fulfilled for the density of $X_1$, they will also be fulfilled for the density of $\bar{X}_n$. If densities do not exist but probability mass functions do, the situation is simpler, since your assumptions prohibit a probability atom at zero; a probability density, however, can still be positive at zero even if $P(X >0)=1$.
For a useful bound you will at least need to restrict the common distribution of $X_1, \dotsc, X_n$ much more.
EDIT
After your new information, and with $v_1>0$, the expectation of $1/\bar{X}_n$ certainly exists (irrespective of whether $K$ is finite or not). And, since the function $x\mapsto 1/x$ is convex for $x>0$, we can use Jensen's inequality to conclude that $\DeclareMathOperator{\E}{\mathbb{E}}\E[1/\bar{X}_n] \ge 1/\E[\bar{X}_n]$.
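If it helps, here is a quick Monte Carlo illustration of the Jensen bound, with an arbitrary choice of distribution bounded away from zero:

```python
# Quick Monte Carlo illustration (arbitrary choice of distribution, bounded away
# from zero): X_i iid Uniform(0.5, 1.5), n = 10, so E[Xbar_n] = 1 and Jensen
# predicts E[1/Xbar_n] >= 1.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 10, 200_000

xbar = rng.uniform(0.5, 1.5, size=(reps, n)).mean(axis=1)
print("Monte Carlo E[1/Xbar_n] ~", np.mean(1.0 / xbar))   # slightly above 1
print("1 / E[Xbar_n]           ~", 1.0 / np.mean(xbar))   # ~ 1
```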
Best Answer
The Taylor series approximation of a multivariate function $f$ around $x_0$ is
$$ f(x) \approx f(x_0) + \nabla f(x_0)'(x-x_0) + \frac{1}{2} (x-x_0)' H_f(x_0) (x-x_0). $$
If you substitute $x=X$ and $x_0 = \mathbb{E}X$ you get
$$ f(X) \approx f(\mathbb{E}X) + \nabla f(\mathbb{E}X)'(X-\mathbb{E}X) + \frac{1}{2} (X-\mathbb{E}X)' H_f(\mathbb{E}X) (X-\mathbb{E}X). $$
Taking the expectation of both sides gives
$$ \mathbb{E}f(X) \approx f(\mathbb{E}X) + \nabla f(\mathbb{E}X)' \mathbb{E}(X-\mathbb{E}X) + \frac{1}{2} \mathbb{E}[(X-\mathbb{E}X)' H_f(\mathbb{E}X) (X-\mathbb{E}X)]. $$
As you noticed, $\mathbb{E}(X-\mathbb{E}X) = 0$, so the expression simplifies to
$$ \mathbb{E}f(X) \approx f(\mathbb{E}X) + \frac{1}{2} \mathbb{E}[(X-\mathbb{E}X)' H_f(\mathbb{E}X) (X-\mathbb{E}X)]. $$
This is as far as you can get without assumptions on $X$. In your specific case, however, the second term can be simplified further. Rewriting the quadratic form as a double sum gives
$$ \mathbb{E}[(X-\mathbb{E}X)' H_f(\mathbb{E}X) (X-\mathbb{E}X)] = \sum_{i=1}^n \sum_{j=1}^n \mathbb{E}[(X_i - \mathbb{E}X_i) H_f(\mathbb{E}X)_{ij} (X_j - \mathbb{E}X_j)] = (*). $$
If $i \neq j$ then $X_i$ and $X_j$ are independent, therefore
$$ \mathbb{E}[(X_i-\mathbb{E}X_i)\,H_f(\mathbb{E}X)_{ij}\,(X_j-\mathbb{E}X_j)] = \mathbb{E}[X_i-\mathbb{E}X_i]\,H_f(\mathbb{E}X)_{ij}\,\mathbb{E}[X_j-\mathbb{E}X_j] = 0. $$
Using that fact, the double sum collapses to a single sum,
$$ (*) = \sum_{i=1}^n \mathbb{E}[(X_i - \mathbb{E}X_i) H_f(\mathbb{E}X)_{ii} (X_i - \mathbb{E}X_i)]. $$
The quantity $H_f(\mathbb{E}X)_{ii}$ is constant (not random), so it can be pulled out of the expectation:
$$ (*) = \sum_{i=1}^n H_f(\mathbb{E}X)_{ii}\,\mathbb{E}[(X_i - \mathbb{E}X_i)^2] = \sum_{i=1}^n H_f(\mathbb{E}X)_{ii} \operatorname{Var}(X_i). $$
Summing up, the Taylor series approximation simplifies to
$$ \mathbb{E}f(X) \approx f(\mathbb{E}X) + \frac{1}{2} \sum_{i=1}^n H_f(\mathbb{E}X)_{ii} \operatorname{Var}(X_i). $$
In your case $\mathbb{E}X_i = \frac{a+b}{2}$ and $\operatorname{Var}(X_i) = \frac{1}{12}(b-a)^2$. Also, you don't need to compute the whole Hessian matrix, because only its diagonal elements enter the formula.
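As a sanity check of the final formula, here is a small numerical comparison (my own choice of example, not from the question) with $f(x)=\log(x_1+\dots+x_n)$ and $X_i$ iid $\mathrm{Uniform}(a,b)$; for this $f$ the Hessian is $H_f(x)_{ij} = -1/(\sum_k x_k)^2$, so the diagonal entries at $\mathbb{E}X$ are easy to write down.

```python
# Sanity check (my own example) of  E f(X) ~ f(EX) + 0.5 * sum_i H_ii(EX) Var(X_i)
# with f(x) = log(x_1 + ... + x_n) and X_i iid Uniform(a, b).
# For this f, H_ij(x) = -1 / (sum_k x_k)^2, so H_ii(EX) = -1 / (n*(a+b)/2)^2.
import numpy as np

rng = np.random.default_rng(3)
n, a, b, reps = 5, 0.0, 1.0, 1_000_000

mean_i = (a + b) / 2.0
var_i = (b - a) ** 2 / 12.0
s0 = n * mean_i                                    # sum of the component means
approx = np.log(s0) + 0.5 * n * (-1.0 / s0**2) * var_i

x = rng.uniform(a, b, size=(reps, n))
mc = np.mean(np.log(x.sum(axis=1)))                # Monte Carlo estimate of E f(X)

print(f"second-order approximation: {approx:.5f}")
print(f"Monte Carlo estimate:       {mc:.5f}")
```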