As you suggested, you can define indicator functions for the sets:
$$
g_i(x) = \begin{cases} 0 & \ x \in X_i \\ +\infty & \ x \notin X_i \end{cases}
$$
The convex conjugates of these indicator functions are the support functions of the sets:
$$
g_i^*(\lambda) = \sup_{x \in X_i} x^T\lambda
$$
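As a concrete illustration (my example, not part of the original argument): when $X_i$ is the unit $\ell_\infty$ ball, the support function has a simple closed form, since the supremum picks each coordinate of $x$ in $[-1,1]$ to match the sign of the corresponding $\lambda_j$:

```latex
% Support function of X_i = { x : ||x||_inf <= 1 }
g_i^*(\lambda)
  = \sup_{\|x\|_\infty \le 1} x^\top \lambda
  = \sum_j |\lambda_j|
  = \|\lambda\|_1
```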
Now you can formulate the problem like a consensus optimization problem:
$$
\begin{gathered}
\inf_{x_i,y_i,z} \sum_i f_i(x_i) + g_i(y_i)\\
\text{s.t.:} \ x_i = z,\ y_i = z
\end{gathered}
$$
Let's form the Lagrangian for this constrained problem and minimize over the "local" variables $x_i,y_i$ to get an expression in terms of the convex conjugates:
$$
\inf_{x_i,y_i,z} \sum_i f_i(x_i) + g_i(y_i) + u_i^T(z-x_i) + v_i^T(z-y_i)
\\= \inf_{z} \sum_i -f_i^*(u_i) - g_i^*(v_i) + (u_i+v_i)^T z
$$
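The second line follows by applying the definition of the convex conjugate separately to each local variable; for instance, for each $x_i$:

```latex
\inf_{x_i} \bigl( f_i(x_i) - u_i^\top x_i \bigr)
  = -\sup_{x_i} \bigl( u_i^\top x_i - f_i(x_i) \bigr)
  = -f_i^*(u_i)
```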
Minimizing over the "consensus" variable $z$ gives an implicit equality constraint. The dual problem is thus:
$$
\begin{gathered}
\sup_{u_i,v_i} \sum_i -f_i^*(u_i) - g_i^*(v_i)\\
\text{s.t.:} \ \sum_i (u_i + v_i) = 0
\end{gathered}
$$
Some caveats: this formulation is only really useful if the sets $X_i$ admit simple support functions $g_i^*$. Also, while the dual always gives a lower bound on the optimal value of the original problem, strong duality requires some technical conditions to hold. (See the hypotheses of Fenchel's duality theorem in a convex analysis text.)
To complement Erwin Kalvelagen's answer, we have the following optimization problem in $\mathrm x \in \mathbb R^n$
$$\begin{array}{ll} \text{minimize} & \|\mathrm A \mathrm x - \mathrm b\|_\infty\end{array}$$
where $\mathrm A \in \mathbb R^{m \times n}$ and $\mathrm b \in \mathbb R^m$ are given. Let $\mathrm a_i^\top \in \mathbb R^n$ denote the $i$-th row of $\rm A$.
Introducing decision variable $t \in \mathbb R$ and rewriting in epigraph form, we then obtain the following constrained optimization problem in $\mathrm x \in \mathbb R^n$ and $t \in \mathbb R$
$$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & \|\mathrm A \mathrm x - \mathrm b\|_\infty \leq t\end{array}$$
which can be rewritten as follows
$$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & \displaystyle\max_{1 \leq i \leq m} | \mathrm a_i^\top \mathrm x - b_i | \leq t\end{array}$$
which can be rewritten as follows
$$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & | \mathrm a_1^\top \mathrm x - b_1 | \leq t\\ & | \mathrm a_2^\top \mathrm x - b_2 | \leq t\\ & \qquad \vdots\\ & |\mathrm a_m^\top \mathrm x - b_m | \leq t\end{array}$$
which can be rewritten as follows
$$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & -t \leq \mathrm a_1^\top \mathrm x - b_1 \leq t\\ & -t \leq \mathrm a_2^\top \mathrm x - b_2 \leq t\\ & \qquad\quad \vdots\\ & -t \leq \mathrm a_m^\top \mathrm x - b_m \leq t\end{array}$$
which can be rewritten as follows
$$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & -t 1_m \leq \mathrm A \mathrm x - \mathrm b \leq t 1_m\end{array}$$
where the optimal $\rm x$ and the optimal $t$ are the minimizer and the minimum of the original problem, respectively.
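As a sketch of how this final LP can be handed to an off-the-shelf solver, here is the reformulation in Python using SciPy's linprog (the helper name linf_min is my own, not from the answer; this assumes SciPy is available):

```python
import numpy as np
from scipy.optimize import linprog

def linf_min(A, b):
    # Solve  minimize ||A x - b||_inf  via the LP
    #   minimize t  subject to  -t 1_m <= A x - b <= t 1_m,
    # with decision vector [x; t].
    m, n = A.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0                      # objective: minimize t (last variable)
    ones = np.ones((m, 1))
    # Stack the two one-sided inequalities:
    #   A x - t 1_m <= b   and   -A x - t 1_m <= -b
    A_ub = np.vstack([np.hstack([A, -ones]),
                      np.hstack([-A, -ones])])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)]  # x free, t >= 0 (it bounds a norm)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.fun        # minimizer and minimum of the original problem

# Tiny sanity check: minimize max(|x|, |x - 2|); the optimum is x = 1, t = 1.
A = np.array([[1.0], [1.0]])
b = np.array([0.0, 2.0])
x_opt, t_opt = linf_min(A, b)
```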
Best Answer
Again, the answer to "how to solve the problem" depends in large part on what $X$ is, what software you intend to use, etc. But as I said in the comment, at least the objective function can be expressed in second-order-cone or semidefinite form. To see how, create new variables $y\in\mathbb{R}^p$ and write the problem as follows:
$$\begin{array}{ll} \text{minimize} & \sum_{i=1}^p y_i \\ \text{subject to} & a_i^Tx<b_i,~ \frac{1}{b_i-a_i^Tx} \leq y_i \quad i=1,2,\dots, p \\ & x \in X \end{array}$$
Now let's use this transformation:
$$a_i^Tx<b_i,~\frac{1}{b_i-a_i^Tx} \leq y_i \quad\Longleftrightarrow\quad \begin{bmatrix} y_i & 1 \\ 1 & b_i - a_i^T x \end{bmatrix} \succeq 0$$
Go ahead and verify this for yourself using the basic definition of positive semidefiniteness. So your problem becomes
$$\begin{array}{ll} \text{minimize} & \sum_{i=1}^p y_i \\ \text{subject to} & \begin{bmatrix} y_i & 1 \\ 1 & b_i - a_i^T x \end{bmatrix} \succeq 0 \quad i=1,2,\dots, p \\ & x \in X \end{array}$$
So it's ready to be converted to a semidefinite program, assuming that $X$ is semidefinite representable. And $2\times 2$ linear matrix inequalities like these can actually be represented using a second-order cone constraint, since
$$\begin{aligned} \begin{bmatrix} a & b \\ b & c \end{bmatrix} \succeq 0 &\quad\Longleftrightarrow\quad a,c \geq 0, \quad ac-b^2\geq 0 \\ &\quad\Longleftrightarrow\quad a,c \geq 0, \quad \sqrt{b^2 + \left(\tfrac{1}{2}(a-c)\right)^2} \leq \tfrac{1}{2}(a+c)\end{aligned}$$
Go ahead, multiply that out!
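As a quick numerical spot-check of that $2\times 2$ equivalence (a sketch of my own, not part of the original answer), one can compare an eigenvalue test for positive semidefiniteness against the second-order-cone condition on random symmetric matrices:

```python
import numpy as np

def psd_2x2(a, b, c):
    # Positive semidefiniteness of [[a, b], [b, c]] via its smallest eigenvalue.
    return np.linalg.eigvalsh(np.array([[a, b], [b, c]])).min() >= -1e-12

def soc_2x2(a, b, c):
    # The equivalent second-order-cone condition:
    #   a, c >= 0  and  sqrt(b^2 + ((a - c)/2)^2) <= (a + c)/2.
    return a >= 0 and c >= 0 and np.hypot(b, 0.5 * (a - c)) <= 0.5 * (a + c) + 1e-12

# The two tests agree on random symmetric 2x2 matrices.
rng = np.random.default_rng(0)
agree = all(psd_2x2(a, b, c) == soc_2x2(a, b, c)
            for a, b, c in rng.uniform(-2.0, 2.0, size=(1000, 3)))
```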
Or don't. Use a modeling framework like CVX or YALMIP (disclaimer: the former is mine). If you do, it handles these things for you. In fact, you don't even need to rewrite the objective; in CVX, you can just write it as sum(inv_pos(b-A*x)). But again, this assumes that you can represent $X$ in a manner consistent with these frameworks.