We may assume that $k_1+k_2 < L$; otherwise we could simply drop the constraint $\max(x_1,k_1)+\max(x_2,k_2)\geq L$ and solve the remaining system.
Now let $P=\{x\mid Ax\leq b, x\geq 0\}$. Further we define
$\begin{align*}
P_1&=P\cap\{x_1 \geq k_1, x_2 \geq k_2\},\\
P_2&=P\cap\{x_1 \geq k_1, x_2 \leq k_2\},\\
P_3&=P\cap\{x_1 \leq k_1, x_2 \geq k_2\},\\
P_4&=P\cap\{x_1 \leq k_1, x_2 \leq k_2\}.
\end{align*}$
We have $P=P_1\cup P_2\cup P_3\cup P_4$. Furthermore, the following equivalences hold:
if $x\in P_1$, then $\max(x_1,k_1)+\max(x_2,k_2) \geq L \Leftrightarrow x_1 + x_2 \geq L$;
if $x\in P_2$, then $\max(x_1,k_1)+\max(x_2,k_2) \geq L \Leftrightarrow x_1 + k_2 \geq L$;
if $x\in P_3$, then $\max(x_1,k_1)+\max(x_2,k_2) \geq L \Leftrightarrow k_1 + x_2 \geq L$;
if $x\in P_4$, then $\max(x_1,k_1)+\max(x_2,k_2) = k_1+k_2 < L$, so the constraint is violated.
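The four regional equivalences can be spot-checked numerically. The following sketch (with illustrative values $k_1=2$, $k_2=3$, $L=8$, which satisfy $k_1+k_2<L$) samples random points and verifies that the max-constraint agrees with the corresponding linear constraint in each region:

```python
import random

def check_equivalence(trials=5000, k1=2.0, k2=3.0, L=8.0):
    """Spot-check the regional equivalences for max(x1,k1)+max(x2,k2) >= L."""
    assert k1 + k2 < L
    for _ in range(trials):
        x1, x2 = random.uniform(0, 10), random.uniform(0, 10)
        orig = max(x1, k1) + max(x2, k2) >= L
        if x1 >= k1 and x2 >= k2:
            assert orig == (x1 + x2 >= L)   # region P1
        elif x1 >= k1 and x2 <= k2:
            assert orig == (x1 + k2 >= L)   # region P2
        elif x1 <= k1 and x2 >= k2:
            assert orig == (k1 + x2 >= L)   # region P3
        else:
            assert not orig                 # region P4: k1 + k2 < L
    return True
```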
Now, let
$\begin{align*}
P_1'&=P_1\cap\{x_1+x_2\geq L\},\\
P_2'&=P_2\cap\{x_1+k_2\geq L\},\\
P_3'&=P_3\cap\{k_1+x_2\geq L\},\\
P_4'&=\emptyset\quad\mbox{(see above)},
\end{align*}$
and $P_1'\cup P_2'\cup P_3'$ is the set of feasible solutions of the original system. Note that although $P_1'$, $P_2'$ and $P_3'$ are each convex, their union need not be convex.
Finally, we minimize $c^Tx$ over each $P_i'$ individually. The minimum of these three values is the optimal value of the original problem.
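This decomposition amounts to solving one LP per region and taking the best value. Here is a minimal sketch, assuming $x\in\mathbb{R}^2$ and using SciPy's `linprog` (function and variable names are illustrative); each region's extra constraints are rewritten as `<=` rows:

```python
import numpy as np
from scipy.optimize import linprog

def solve_disjunctive(c, A, b, k1, k2, L):
    """Solve min c^T x s.t. Ax <= b, x >= 0, max(x1,k1)+max(x2,k2) >= L
    (assuming k1 + k2 < L) by solving one LP per region P1', P2', P3'."""
    A = np.asarray(A, dtype=float)
    extra = {
        # P1': x1 >= k1, x2 >= k2, x1 + x2 >= L
        "P1": ([[-1, 0], [0, -1], [-1, -1]], [-k1, -k2, -L]),
        # P2': x1 >= k1, x2 <= k2, x1 >= L - k2
        "P2": ([[-1, 0], [0, 1], [-1, 0]], [-k1, k2, -(L - k2)]),
        # P3': x1 <= k1, x2 >= k2, x2 >= L - k1
        "P3": ([[1, 0], [0, -1], [0, -1]], [k1, -k2, -(L - k1)]),
    }
    best = None
    for name, (rows, rhs) in extra.items():
        A_ub = np.vstack([A, rows])
        b_ub = np.concatenate([b, rhs])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # bounds default to x >= 0
        if res.success and (best is None or res.fun < best[0]):
            best = (res.fun, res.x, name)
    return best  # None if all three regions are infeasible
```

For example, with $c=(2,1)$, the single constraint $x_1+x_2\leq 10$, $k_1=2$, $k_2=3$, $L=8$, the best region is $P_3'$ with optimum $x=(0,6)$ and value $6$.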
Best Answer
The field of unconstrained Riemannian optimization has a significant literature, but as far as I know there were no publications on constrained Riemannian optimization until the 2019 paper cited below.
Paywalled version: Simple Algorithms for Optimization on Riemannian Manifolds with Constraints, by Changshuo Liu & Nicolas Boumal
Freely accessible pre-publication version: Simple Algorithms for Optimization on Riemannian Manifolds with Constraints, by Changshuo Liu & Nicolas Boumal
Abstract: We consider optimization problems on manifolds with equality and inequality constraints. A large body of work treats constrained optimization in Euclidean spaces. In this work, we consider extensions of existing algorithms from the Euclidean case to the Riemannian case. Thus, the variable lives on a known smooth manifold and is further constrained. In doing so, we exploit the growing literature on unconstrained Riemannian optimization. For the special case where the manifold is itself described by equality constraints, one could in principle treat the whole problem as a constrained problem in a Euclidean space. The main hypothesis we test here is whether it is sometimes better to exploit the geometry of the constraints, even if only for a subset of them. Specifically, this paper extends an augmented Lagrangian method and smoothed versions of an exact penalty method to the Riemannian case, together with some fundamental convergence results. Numerical experiments indicate some gains in computational efficiency and accuracy in some regimes for minimum balanced cut, non-negative PCA and k-means, especially in high dimensions.
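As a rough illustration of the penalty idea the paper extends (and not the authors' actual algorithm), here is a minimal NumPy sketch of Riemannian gradient descent on the unit sphere with a quadratic penalty for a non-negativity constraint, in the spirit of the non-negative PCA experiment. The step size, penalty weight, and all names are illustrative assumptions:

```python
import numpy as np

def sphere_penalty_descent(C, x0, rho=10.0, lr=0.01, steps=3000):
    """Minimize x^T C x on the unit sphere subject to x >= 0 via a quadratic
    penalty, using Riemannian gradient descent on the sphere."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        viol = np.minimum(x, 0.0)              # componentwise violation of x >= 0
        egrad = 2 * C @ x + 2 * rho * viol     # Euclidean gradient of penalized objective
        rgrad = egrad - (egrad @ x) * x        # project onto tangent space of the sphere
        x = x - lr * rgrad                     # step along the negative Riemannian gradient
        x = x / np.linalg.norm(x)              # retraction: renormalize onto the sphere
    return x
```

For instance, with $C=\begin{pmatrix}1&1\\1&1\end{pmatrix}$ the unconstrained minimizer $(1,-1)/\sqrt{2}$ violates $x\geq 0$, and the penalized iteration instead settles near $e_1$ with only a small negative second component (the penalty is soft, so the constraint is satisfied approximately, not exactly).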