Proof that feasible region of linear program is exactly one convex region

Tags: convex-optimization, convex-geometry, linear-programming, optimization

I have the following linear program:
$$\max_{\vec{x},\,b}\; b \quad \text{subject to} \quad \vec{x} \cdot \vec{r}_j \geq b \quad \forall j, \qquad x_i \in \mathbb{R}, \quad \sum_{i=1}^N x_i = 1,$$
where $\vec{x}$ and $b$ must be determined and the vectors $\vec{r}_j \in \mathbb{R}^N$ are known.
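
If it helps to experiment numerically, here is a minimal sketch of this LP using `scipy.optimize.linprog`, treating the pair $(\vec{x}, b)$ as the decision vector; the matrix `R` (one row per $\vec{r}_j$) is made-up example data, not part of the question.

```python
# Minimal sketch: max b  s.t.  x . r_j >= b for all j,  sum_i x_i = 1,  x_i and b free.
import numpy as np
from scipy.optimize import linprog

R = np.array([[ 1.0, -0.5,  0.2],
              [ 0.3,  0.8, -0.1],
              [-0.2,  0.4,  0.9]])   # each row is one r_j (example data)
M, N = R.shape

# Decision variables z = (x_1, ..., x_N, b); maximize b  <=>  minimize -b
c = np.zeros(N + 1)
c[-1] = -1.0

# x . r_j >= b  <=>  -r_j . x + b <= 0   (one inequality row per j)
A_ub = np.hstack([-R, np.ones((M, 1))])
b_ub = np.zeros(M)

# sum_i x_i = 1
A_eq = np.hstack([np.ones((1, N)), np.zeros((1, 1))])
b_eq = np.array([1.0])

# all variables are free: the x_i may be negative, and so may b
bounds = [(None, None)] * (N + 1)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("x =", res.x[:N], " b =", res.x[-1])
```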

How can I prove that the feasible region is exactly one (maybe infinite) convex region?

In contrast to standard linear programs, the $x_i$ are allowed to be negative, but that should not change whether the feasible region is exactly one convex region, right?

Since each constraint in a linear program is a linear inequality, it defines a closed half-space, so the allowed region is an intersection of half-spaces and hence a single convex set. Additionally, the solution has to lie on the affine hyperplane $\{\vec{x} \in \mathbb{R}^N : \sum_{i=1}^N x_i = 1\}$ (the affine hull of the standard simplex), which is convex as well.

So the intersection must be exactly one convex region, right?
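To make the intersection explicit, one can view a feasible point as the pair $(\vec{x}, b)$; the symbols $F$, $H_j$ and $A$ below are just names introduced for this description:

$$F = \Big\{(\vec{x}, b) \in \mathbb{R}^N \times \mathbb{R} \;:\; \vec{x} \cdot \vec{r}_j - b \geq 0 \ \forall j,\ \sum_{i=1}^N x_i = 1\Big\} = \Big(\bigcap_{j} H_j\Big) \cap A,$$

where each $H_j = \{(\vec{x}, b) : \vec{x} \cdot \vec{r}_j - b \geq 0\}$ is a closed half-space and $A = \{(\vec{x}, b) : \sum_{i=1}^N x_i = 1\}$ is an affine hyperplane. An intersection of convex sets is convex, so $F$ is a single (possibly unbounded, possibly empty) convex set.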

Thanks for helping 🙂

Best Answer

Let $x$ and $y$ be two arbitrary feasible solutions, and let $\alpha \in [0,1]$. Now show that the solution $\alpha x + (1-\alpha)y$ is feasible.
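Spelling out the hint (writing the feasible solutions as pairs $(\vec{x}, b_x)$ and $(\vec{y}, b_y)$, since $b$ is also a variable in the question): for $\alpha \in [0,1]$,

$$\big(\alpha \vec{x} + (1-\alpha)\vec{y}\big) \cdot \vec{r}_j = \alpha\, \vec{x} \cdot \vec{r}_j + (1-\alpha)\, \vec{y} \cdot \vec{r}_j \geq \alpha b_x + (1-\alpha) b_y \quad \forall j,$$
$$\sum_{i=1}^N \big(\alpha x_i + (1-\alpha) y_i\big) = \alpha \sum_{i=1}^N x_i + (1-\alpha) \sum_{i=1}^N y_i = \alpha + (1-\alpha) = 1,$$

so $\alpha(\vec{x}, b_x) + (1-\alpha)(\vec{y}, b_y)$ is again feasible. Since every convex combination of feasible solutions is feasible, the feasible region is exactly one convex set.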