A proof about feasible solutions

linear-programming · proof-verification

$15.$ Consider the linear programming problem

Maximize $$z=e^Tx$$

subject to $$Ax\le b$$ $$x\ge0$$

If $x_1$ and $x_2$ are feasible solutions, show that $$x=\frac{1}{3}x_1+\frac{2}{3}x_2$$is a feasible solution.

$16.$ Generalize the previous exercise to show that $$x=rx_1+sx_2$$ is a feasible solution if $r+s=1$

(Chapter $1.2$, Exercises $15$ & $16$ of "Elementary Linear Programming with Applications" by Bernard Kolman and Robert E. Beck)


$15.$ By the definition (p. $64$): a vector $x\in\mathbb{R}^n$ satisfying the constraints of a linear programming problem is called a feasible solution to the problem.

Proof.

Assume $x_1,x_2$ are feasible solutions, so we have

$$Ax_1\le b\wedge x_1\ge0\wedge Ax_2\le b\wedge x_2\ge0$$

We show $x$ is also a feasible solution, that is: $$Ax\le b\wedge x\ge0$$ $$\text{WTS } A(\frac{1}{3}x_1+\frac{2}{3}x_2)\le b\wedge \frac{1}{3}x_1+\frac{2}{3}x_2\ge0$$

$$Ax_1\le b\wedge Ax_2\le b$$

Since $A$ is a linear transformation (and $\frac{1}{3},\frac{2}{3}>0$), we have the following:

$$\Leftrightarrow A(\frac{1}{3}x_1)\le \frac{1}{3}b\wedge A(\frac{2}{3}x_2)\le \frac{2}{3}b$$

$$\Rightarrow \boxed{A(\frac{1}{3}x_1+\frac{2}{3}x_2)}\le b$$

The second part, that $$\boxed{\frac{1}{3}x_1+\frac{2}{3}x_2\ge0} \text{ is trivial.} \tag*{$\square$}$$
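As a quick numerical sanity check (not part of the proof, and not from the book), one can test the claim on a small made-up instance; the matrix $A$, the vector $b$, and the two feasible points below are arbitrary:

```python
import numpy as np

# Small made-up instance of: maximize e^T x subject to A x <= b, x >= 0.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([6.0, 9.0])

def is_feasible(x):
    """Check A x <= b and x >= 0 componentwise (with a small tolerance)."""
    return bool(np.all(A @ x <= b + 1e-12) and np.all(x >= -1e-12))

x1 = np.array([1.0, 1.0])   # A x1 = [3, 4] <= b and x1 >= 0
x2 = np.array([2.0, 2.0])   # A x2 = [6, 8] <= b and x2 >= 0
assert is_feasible(x1) and is_feasible(x2)

x = (1/3) * x1 + (2/3) * x2
print(is_feasible(x))       # True, as the proof asserts
```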

$16.$ The proof is similar to that of $15$.

Proof

Assume $x_1,x_2$ are feasible solutions, so we have

$$Ax_1\le b\wedge x_1\ge0\wedge Ax_2\le b\wedge x_2\ge0$$

We show $x$ is also a feasible solution, that is: $$Ax\le b\wedge x\ge0$$ $$\text{WTS } A(rx_1+sx_2)\le b\wedge rx_1+sx_2\ge0$$

Since $A$ is a linear transformation, we have the following:

$$\Leftrightarrow A(rx_1)\le rb\wedge A(sx_2)\le sb$$

$$\Rightarrow \boxed{A(rx_1+sx_2)}\le b$$

The second part, that $$\boxed{rx_1+sx_2\ge0} \text{ is trivial.} \tag*{$\square$}$$

Is my proof correct?

In my proof I didn't talk about the objective function $z=e^Tx$; would this be a problem?

Thanks for your help

Best Answer

I think that the problem statement has two additional hypotheses: $r\geq 0$ and $s\geq 0$. But your proof is correct in those terms. The solution is feasible, and this fact does not use the objective function: $Ax\leq b$ and $x\geq 0$ define the constraint set (the domain of the objective function). You can improve your proof:

$$Ax_1\leq b\implies rAx_1\leq rb \quad (r\geq 0)$$

And

$$Ax_2\leq b\implies sAx_2\leq sb \quad (s\geq 0)$$

Summing these two inequalities and using the distributive property together with $r+s=1$:

$$A(rx_1+sx_2)\leq (r+s)b = b$$

$$rx_1+sx_2\geq 0 \quad (r,s\geq 0)$$
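To see concretely why $r,s\geq 0$ is needed, here is a small numerical illustration (a made-up one-dimensional instance, not from the book): with $r+s=1$ but $r<0$, the combination can leave the feasible region.

```python
import numpy as np

# One-dimensional made-up instance: A x <= b, x >= 0 with A = [1], b = [1].
A = np.array([[1.0]])
b = np.array([1.0])

def is_feasible(x):
    """Check A x <= b and x >= 0 componentwise."""
    return bool(np.all(A @ x <= b) and np.all(x >= 0))

x1 = np.array([0.0])   # feasible
x2 = np.array([1.0])   # feasible
assert is_feasible(x1) and is_feasible(x2)

# r + s = 1 but r < 0: the combination violates A x <= b.
r, s = -1.0, 2.0
print(is_feasible(r * x1 + s * x2))   # False: A x = 2 > 1

# r + s = 1 with r, s >= 0: the combination stays feasible.
r, s = 0.25, 0.75
print(is_feasible(r * x1 + s * x2))   # True
```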
