False. Consider the functions
$$g_1(x,y) = \frac19(x-1)^2 + y^4 \quad\text{ and }\quad g_2(x,y) = x^4 + \frac19 (y-1)^2$$
Their global minima are achieved at $x_1 = (1,0)$ and $x_2 = (0,1)$ respectively.
However,
$$g_1(x,y) + g_2(x,y) = \frac19 (x-1)^2 + x^4 + \frac19 (y-1)^2 + y^4$$
Notice
$$\frac{d}{dx} \left( \frac19 (x-1)^2 + x^4 \right) = \frac{2\,\left( 3\,x-1\right) \,\left( 6\,{x}^{2}+2\,x+1\right) }{9}$$
By symmetry the $y$-derivative is identical, so the global minimum of $g_1(x,y) + g_2(x,y)$ is achieved at $(x,y) = (\frac13,\frac13)$,
which lies outside the convex hull of $(1,0)$ and $(0,1)$ (the segment $x + y = 1$, whereas $\frac13 + \frac13 = \frac23$).
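This is easy to spot-check numerically; a minimal sketch (the grid window below is an arbitrary choice) locates the minimizer of the sum near $(\frac13,\frac13)$:

```python
import numpy as np

def g1(x, y):
    return (x - 1) ** 2 / 9 + y ** 4

def g2(x, y):
    return x ** 4 + (y - 1) ** 2 / 9

# Evaluate g1 + g2 on a fine grid and locate its minimizer.
xs = np.linspace(-1, 2, 601)
X, Y = np.meshgrid(xs, xs)
total = g1(X, Y) + g2(X, Y)
i, j = np.unravel_index(np.argmin(total), total.shape)
print(X[i, j], Y[i, j])  # both close to 1/3
```

Note that $X[i,j] + Y[i,j] \approx \frac23$, so the minimizer is well off the segment $x + y = 1$ joining the individual minimizers.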
As you suggested, you can define indicator functions for the sets:
$$
g_i(x) = \begin{cases} 0 & \ x \in X_i \\ +\infty & \ x \notin X_i \end{cases}
$$
The convex conjugates of these indicators are the support functions of the sets:
$$
g_i^*(\lambda) = \sup_{x \in X_i} x^T\lambda
$$
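As a concrete instance (an assumed example, not from the question): for the box $X = [-1,1]^n$, the support function is the $\ell_1$ norm, $g^*(\lambda) = \sup_{x \in X} x^T\lambda = \|\lambda\|_1$, attained at $x = \operatorname{sign}(\lambda)$. A brute-force check over the vertices of the box:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
lam = rng.standard_normal(5)

# The sup of a linear function over the box is attained at a vertex,
# so brute-forcing the 2^5 vertices recovers the support function.
vertices = np.array(list(product([-1.0, 1.0], repeat=5)))
brute = (vertices @ lam).max()
print(np.isclose(brute, np.abs(lam).sum()))  # True
```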
Now you can formulate the problem as a consensus optimization problem:
$$
\begin{gathered}
\inf_{x_i,y_i,z} \sum_i f_i(x_i) + g_i(y_i)\\
\text{s.t.:} \ x_i = z,\ y_i = z \ \forall i
\end{gathered}
$$
Let's form the Lagrangian for this constrained problem and minimize over the "local" variables $x_i,y_i$ to get an expression in terms of the convex conjugates:
$$
\inf_{x_i,y_i,z} \sum_i f_i(x_i) + g_i(y_i) + u_i^T(z-x_i) + v_i^T(z-y_i)
\\= \inf_{z} \sum_i -f_i^*(u_i) - g_i^*(v_i) + (u_i+v_i)^T z
$$
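As a sanity check on the inner minimization, take $f_i(x) = \tfrac12\|x\|^2$ (an assumed example), whose conjugate is $f_i^*(u) = \tfrac12\|u\|^2$:
$$
\inf_{x_i} \tfrac12\|x_i\|^2 + u_i^T(z - x_i) = u_i^T z - \sup_{x_i} \left( u_i^T x_i - \tfrac12\|x_i\|^2 \right) = u_i^T z - \tfrac12\|u_i\|^2
$$
which matches the $-f_i^*(u_i) + u_i^T z$ term above.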
Minimizing over the "consensus" variable $z$ gives an implicit equality constraint. The dual problem is thus:
$$
\begin{gathered}
\sup_{u_i,v_i} \sum_i -f_i^*(u_i) - g_i^*(v_i)\\
\text{s.t.:} \ \sum_i (u_i + v_i) = 0
\end{gathered}
$$
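Weak duality can be spot-checked on a toy instance (an assumed example, not from the question): $f_1(x) = (x-1)^2$, $f_2(x) = (x+1)^2$, $X_1 = X_2 = [-1,1]$, so $f_1^*(u) = u + u^2/4$, $f_2^*(u) = -u + u^2/4$, and the support function of $[-1,1]$ is $|v|$. The primal optimum is $2$ (at $z = 0$), and every dual-feasible point stays below it:

```python
import numpy as np

rng = np.random.default_rng(1)
primal = 2.0  # min over z in [-1,1] of (z-1)^2 + (z+1)^2, at z = 0

for _ in range(1000):
    u1, u2, v1 = rng.uniform(-5, 5, size=3)
    v2 = -(u1 + u2 + v1)          # enforce the dual constraint
    dual = -(u1 + u1**2 / 4) - (-u2 + u2**2 / 4) - abs(v1) - abs(v2)
    assert dual <= primal + 1e-9  # weak duality
```

In this instance the dual point $(u_1, u_2, v_1, v_2) = (-2, 2, 0, 0)$ actually attains the primal value, so strong duality holds here.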
Some caveats: this formulation is only really useful if the sets $X_i$ admit simple support functions $g_i^*$. Also, while the dual always gives a lower bound on the optimal value of the original problem, strong duality requires some technical conditions (see the hypotheses of Fenchel's duality theorem in a convex analysis text).
Best Answer
Yes. Note that $g$ has a subgradient at $\min(x_i^*)$ that has a nonnegative slope, while there is a subgradient at $\max(x_i^*)$ that has a nonpositive slope. This doesn't even need strict concavity. If the functions are concave, there is an $x_0 \in [\min(x_i^*), \max(x_i^*)]$ so that $g(x_0)$ is the maximum of $g$.
If the functions are all strictly concave and not all $x_i^*$ are identical, the maximum in fact lies in the interior of the interval.
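A small numerical illustration (an assumed example with three strictly concave quadratics peaking at $0$, $1$, and $3$): the maximizer of the sum lands strictly inside $[0, 3]$.

```python
import numpy as np

peaks = np.array([0.0, 1.0, 3.0])  # the individual maximizers x_i*

def g(x):
    # Sum of strictly concave quadratics, each peaking at peaks[i].
    return -((x - peaks) ** 2).sum()

# Grid search over a window containing [0, 3].
xs = np.linspace(-2.0, 5.0, 7001)
x0 = xs[np.argmax([g(x) for x in xs])]
print(x0)  # close to the mean of the peaks, 4/3
```

For quadratics the maximizer of the sum is exactly the mean of the peaks, well inside the interval.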