I'd use x == max(A[0..i-1]) as the loop invariant, positioned immediately after the increment to i (the last line of the while loop). To make it even easier to prove that the invariant holds on each pass through the loop, you can use more assertions within the loop body:
algorithm max(list A[0..n − 1])
    x ← A[0]
    i ← 1
    while i < n do
        ## x == max(A[0..i-1])
        if A[i] > x then x ← A[i]
        ## x == max(A[0..i])
        i ← i + 1
        ## x == max(A[0..i-1])
    ## x == max(A[0..n-1])
    return x
It's then obviously true that once the loop has terminated (so that i == n), x == max(A[0..n-1]).
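Here is the same algorithm transcribed into Python (my own sketch, not part of the original pseudocode), with the assertions made executable so the invariant is actually checked on every pass:

def list_max(A):
    # Find the maximum of a non-empty list A, checking the loop
    # invariant x == max(A[0..i-1]) at each of the marked points.
    x = A[0]
    i = 1
    n = len(A)
    while i < n:
        assert x == max(A[:i])       # x == max(A[0..i-1])
        if A[i] > x:
            x = A[i]
        assert x == max(A[:i + 1])   # x == max(A[0..i])
        i += 1
        assert x == max(A[:i])       # x == max(A[0..i-1]) again
    assert x == max(A)               # i == n, so x == max(A[0..n-1])
    return x

print(list_max([3, 1, 4, 1, 5, 9, 2, 6]))  # prints 9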
The main thing to notice is the order in which the list is written.
Your step 1, which said we can remove $u_1$, is false. Note that the Linear Dependence Lemma says there exists a vector $v_j$ in the list that belongs to the span of all the previous vectors $v_1, \ldots, v_{j-1}$. Notice that all of $v_1, \ldots, v_{j-1}$ are to the left of $v_j$ in the list $(v_1, \ldots, v_m)$.
Hence, if we write the list as
$$u_1,w_1, \ldots, w_n$$
in this exact order, we can't remove $u_1$, because all the $w_i$ are to the right of $u_1$ in the list. Hence, even if $u_1$ is a linear combination of $w_1, \ldots, w_n$, $u_1$ is still not the $v_j$ of the Linear Dependence Lemma.
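A small example of my own may make this vivid: in $\mathbb{R}^2$, take
$$u_1 = (1,0), \qquad w_1 = (1,0), \qquad w_2 = (0,1).$$
Then $u_1 \in \operatorname{span}(w_1, w_2)$ and the list $(u_1, w_1, w_2)$ is linearly dependent, but the vector the lemma picks out is $w_1$ (since $w_1 \in \operatorname{span}(u_1)$), not $u_1$: having no vectors to its left, $u_1$ can never play the role of $v_j$.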
Why can't we remove $u_i$?
All the steps give a general list
$$u_1, \ldots, u_k, w_1, \ldots, w_l$$
that is linearly dependent. The Linear Dependence Lemma says that in this list there exists a vector $v$ that belongs to the span of all the previous vectors in the list. If $v = u_i$ for some $i$, then $v \in \operatorname{span}(u_1, \ldots, u_{i-1})$, which contradicts the condition that $(u_1, \ldots, u_k)$ is linearly independent. Thus this $v$ must be different from every $u_i$, so $v = w_i$ for some $i$. With this and condition (b) of the Linear Dependence Lemma, we find that we can remove $v = w_i$. Thus, we can't remove any $u_i$.
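To see the general mechanism in a concrete case (again an example of my own), take in $\mathbb{R}^2$ the linearly independent pair $u_1 = (1,0)$, $u_2 = (0,1)$ together with $w_1 = (1,1)$, giving the list
$$u_1, u_2, w_1.$$
The list is linearly dependent, and the vector lying in the span of its predecessors is $w_1 = u_1 + u_2$; neither $u_1$ nor $u_2$ can play that role, precisely because $(u_1, u_2)$ is linearly independent.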
I'm not sure if I understand what you're asking, but if you show that the algorithm does what it is supposed to do, namely reduce a spanning list to a basis, then that gives you a proof of the theorem. In fact, it gives you more, because it also tells you how to reduce a spanning set into a basis if you were actually given one.
That said, if you wanted to implement the procedure outlined in the book's proof on, say, a computer, then you might want to add some details on checking whether $v_j$ is in the span of $v_1, \ldots, v_{j-1}$. For the purposes of the theoretical algorithm this is somewhat irrelevant, because we know that it either is in the span or it's not, and the algorithm will proceed either way.
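One common way to make that check concrete is a rank test. Here is a minimal Python sketch of the whole reduction (the function name and the numpy-based rank test are my own choices for illustration, not the book's):

import numpy as np

def reduce_to_basis(vectors):
    # Walk through the spanning list and delete every vector that
    # lies in the span of the vectors kept so far, as in the proof.
    basis = []
    for v in vectors:
        # v is in span(basis) iff appending v does not increase the rank.
        if np.linalg.matrix_rank(np.array(basis + [v])) > len(basis):
            basis.append(v)   # v is not in the span: keep it
        # otherwise v is in the span of the previous vectors: drop it
    return basis

# Example: a spanning list of R^2 with one redundant vector.
vs = [np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(reduce_to_basis(vs))   # [array([1., 0.]), array([0., 1.])]

Numerically, of course, the rank test is only as reliable as its tolerance; for the theoretical argument that subtlety never arises.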
Edit. Regarding the steps in the comment: 1 and 3 are verified at the end of the proof: using (2.21) it is observed that the output of the algorithm is indeed a basis. 2 is true because at each step $j$ you turn your attention to the vector $v_j$, and since there are finitely many vectors, you will surely at some point reach the last one. I do not understand 4. As for 5: the proof gives no indication of how to implement the algorithm in practice, or how long it would take a computer to finish it. In that sense, it might not be considered a true algorithm. In fact, let me rephrase 'Step $j$' as follows:

Step $j$: determine, by whatever means you like (consulting an all-knowing oracle will do), whether $v_j$ is in the span of $v_1, \ldots, v_{j-1}$. If it is, delete $v_j$ from the list; if it is not, leave the list unchanged.

I claim that the proof is still valid when phrased in this rather crazy way. Indeed, all that matters is that there is an outcome to the step: either $v_j$ is in the span, or it's not. Whether a computer can actually figure that out is beside the point: there's an answer either way, and you get to move on to step $j+1$ either way.