- In the figure, in coordinates (I'm using integer coordinates to simplify the notation, so each small square represents one "unit"), there are two "edge paths" represented:
$(0,0)-(0,2)-(3,2)-(3,3)-(5,3)-(5,5)$
and
$(0,0)-(0,2)-(2,2)-(2,3)-(5,3)-(5,5)$
These two paths are the same except in one place: at $(2,2)$ the first one continues to $(3,2)$ and then moves up to $(3,3)$, while the second one moves up to $(2,3)$ and then moves right to $(3,3)$ (after which both paths continue on together).
At the point where they differ, there is a homotopy from one to the other that stays entirely within the square $[2,3]\times [2,3]$. The homotopy therefore moves almost none of the path, and what it does move stays within a single little square, hence within $X_0$ or $X_1$ once you apply $H$ to it.
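For concreteness, one such homotopy (a minimal sketch — I'm naming the first edge path $\alpha$ and the second $\beta$, and assuming both are parametrized on $[0,1]$ so that they agree outside the square) is the straight-line homotopy:

```latex
% Straight-line homotopy from \alpha to \beta:
K(s,t) = (1-s)\,\alpha(t) + s\,\beta(t), \qquad s,t \in [0,1].
% Outside the square [2,3] x [2,3] we have \alpha(t) = \beta(t),
% so K(s,t) is constant in s there; inside, convexity of the
% square guarantees K never leaves [2,3] x [2,3].
```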
To spell out concretely what the edge paths are, you could say something like: paths that follow the edges of the little squares (which you can define explicitly).
You apply $H$ to these edge paths in $[0,1]\times [0,1]$ and to the little homotopies between said edge paths.
Well, suppose you have a homotopy $H$ from $\gamma$ to $\gamma'$. You have a sequence of edge paths $\delta_0,\dots,\delta_n$ such that $H\circ \delta_0 = \gamma$ (up to a constant path at $y$), $H\circ \delta_n = \gamma'$ (same thing), and such that $\delta_i, \delta_{i+1}$ differ only on a small square (as above).
Then let $K$ be a homotopy from $\delta_i$ to $\delta_{i+1}$ which moves only in the square by which they differ. Then find times $0=t_0<\cdots<t_m=1$ that correspond to the various edges of the little squares that $\delta_i,\delta_{i+1}$ are defined on, so that $H\circ\delta_j|_{[t_k,t_{k+1}]}$ lands in some $X_v$ for $j\in \{i,i+1\}$. Then on every subinterval except one (the one where $\delta_i, \delta_{i+1}$ differ), $\delta_i|_{[t_k,t_{k+1}]} = \delta_{i+1}|_{[t_k,t_{k+1}]}$.
And on the subinterval where they do differ, we nonetheless have $[H\circ \delta_i|_{[t_k,t_{k+1}]}] = [H\circ\delta_{i+1}|_{[t_k,t_{k+1}]}]$ in $X_v$ (because the homotopy stays within $X_v$!)
So for all $k$, $[H\circ \delta_i|_{[t_k,t_{k+1}]}] = [H\circ\delta_{i+1}|_{[t_k,t_{k+1}]}]$ in $X_v$ (where $X_v$ is the piece the subpath lies in).
It follows that the thing you defined agrees on $H\circ\delta_{i+1}$ and on $H\circ \delta_i$. So it agrees on $\gamma$ (up to a constant) and $H\circ \delta_1$, then on $H\circ \delta_1$ and $H\circ \delta_2$, and so on, up to $H\circ \delta_{n-1}$ and $\gamma'$ (up to a constant). So in the end, it agrees on $\gamma$ and $\gamma'$.
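Writing $F$ for "the thing you defined" ($F$ is just a name I'm introducing for this display), the chain of agreements reads:

```latex
F(\gamma) = F(H\circ\delta_0) = F(H\circ\delta_1) = \cdots
          = F(H\circ\delta_{n-1}) = F(H\circ\delta_n) = F(\gamma'),
% each middle equality comes from one small-square wiggle,
% and the outer two hold up to a constant path at y.
```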
(to get an intuition, imagine the sequence of edge paths as a snake that starts out as "go horizontally to the end, then vertically to the end" and then wiggles a bit, each wiggle making it cross one square, until it reaches "go vertically to the end, then horizontally to the end"; each wiggle is small enough to ensure that from one wiggle to the next it has the same image in $\Lambda$: when the wiggling part is contained within $X_v$, $h_v$ ensures that the image in $\Lambda$ is the same)
(1) The idea is the same as for proving that $z\mapsto z^{n+m}$ and $(z\mapsto z^n)*(z\mapsto z^m)$ are homotopic.
The general context is as follows: let $G$ be an $H$-space (that is, it has a multiplication $\mu : G\times G\to G$, of which we ask nothing except that it has a unit, for simplicity, as in our case - in general it's enough to require a unit up to homotopy). Then $\Pi(G)$ is a "monoidal" groupoid (at the level of generality I gave, that's not quite true because it's not associative - we won't need that), that is, it has a "tensor product" $\otimes : \Pi(G)\times \Pi(G)\to \Pi(G)$. This is simply induced by $\mu$; in particular it is a functor.
Now, in our situation, what do we have? We have a path $u_{a,t}$, which is of the form $id_a\otimes \gamma$ for some $\gamma : 1\to x$ (for $x$ such that $ax = b$), and $u_{b,s}$, which is of the form $\delta \otimes id_b$ for some $\delta : 1\to y$. Now we want to compose these and use the functoriality of $\otimes$. This does not work as is, because of course the codomain of $\gamma$ may be different from $b$. But we can mix things up a bit: since $ax = b$, we may write $u_{b,s} = \omega\otimes id_x$ for some $\omega : a \to y$ (with $ay = c$ - think of it this way: $u_{b,s} = b\times$ something, so we may rewrite it as $a\times $something $\times x$).
With this set-up, we get $u_{b,s}\circ u_{a,t} = (\omega\otimes id_x)\circ (id_a\otimes \gamma)$; and now we may use functoriality because the (co)domains match up: we get $(\omega \circ id_a )\otimes (id_x\circ\gamma) = \omega\otimes \gamma$. Now $\omega \otimes \gamma$, in our setting, is just a multiplication of paths: recall how $\omega$ and $\delta$ were defined, and convince yourself that this is just $u_{a,s+t}$ (recall that $\otimes$ is defined by the multiplication on $S^1$!)
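Displayed as a single computation (nothing new here, just the two steps above in one place):

```latex
u_{b,s}\circ u_{a,t}
  = (\omega\otimes id_x)\circ(id_a\otimes\gamma)
  = (\omega\circ id_a)\otimes(id_x\circ\gamma)
  % functoriality of \otimes: the (co)domains now match,
  % since \gamma : 1 \to x and \omega : a \to y
  = \omega\otimes\gamma.
```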
I'll let you write a general statement if you're interested.
(2) It is enough for the following reason: suppose they do; then any two morphisms $f,g : G\to K$ such that $f\circ \gamma_i = g\circ \gamma_i$ ($i=0,1$) must be equal (*).
It follows that if you have $K$ and morphisms $f_i: \Pi(X_i)\to K$ making the appropriate diagram commute, you can factor them through $\Pi(S^1)$, and then, thanks to $\zeta$, through $G$.
So this yields the existence part of the universal property of the pushout, and property (*) yields the uniqueness: so $G$ is a pushout, just like $\Pi(S^1)$, and hence they must be uniquely isomorphic.
(3) I don't understand. Who's $f_1$?
(4) I think it's "for all $s\in [0,1]$", not "for some": you want the path between times $t_1 +\cdots+t_{r-1}$ and $t_1+\cdots+t_r$ to be totally contained in $X_{k(r)}$; to achieve this, you subdivide your path into small pieces, each of which has to be contained in one of the two.
(5) The path is completely contained in $X_{k(r)}$, so it is in the image of $\gamma_{k(r)}$ (the precise value is not really necessary, but you can obtain it in any case by noting that $\Pi(X_{k(r)})$ has only one morphism between any two points), which is what we wanted to show.
Best Answer
This is a compactness argument. If we let $U_i=w^{-1}(\text{int}\,X_i)$ ($i\in\{0,1\}$) then $[0,1]=U_0\cup U_1$. This is an open covering of the compact set $[0,1]$ so has a Lebesgue number $\delta>0$. This means that each subset of $[0,1]$ with diameter $<\delta$ is contained in $U_0$ or in $U_1$. Choose $0=t_0<\cdots<t_{m+1}=1$ such that each $t_{j+1}-t_j<\delta$. Then for each $j$, $w([t_j,t_{j+1}])$ is a subset of either $\text{int}\,X_0$ or of $\text{int}\,X_1$.
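To make the subdivision step concrete, here is a small numeric sketch under toy assumptions: I take the cover $U_0 = [0,1]\setminus\{1/4\}$, $U_1 = [0,1]\setminus\{3/4\}$ (my own stand-ins for $w^{-1}(\text{int}\,X_0)$ and $w^{-1}(\text{int}\,X_1)$), for which $\delta = 1/2$ works as a Lebesgue number (a set of diameter $<1/2$ cannot contain both $1/4$ and $3/4$), and check that a uniform partition of mesh $<\delta$ puts every piece inside one of the two sets.

```python
# Toy example of the Lebesgue-number subdivision:
# U0 = [0,1] \ {1/4} and U1 = [0,1] \ {3/4} are illustrative choices,
# standing in for w^{-1}(int X_0) and w^{-1}(int X_1).

def in_U0(a, b):
    """True iff [a, b] is contained in U0 = [0,1] \\ {1/4}."""
    return not (a <= 0.25 <= b)

def in_U1(a, b):
    """True iff [a, b] is contained in U1 = [0,1] \\ {3/4}."""
    return not (a <= 0.75 <= b)

def subdivision(m):
    """Uniform partition 0 = t_0 < ... < t_m = 1 with mesh 1/m."""
    return [j / m for j in range(m + 1)]

delta = 0.5  # Lebesgue number for this cover
m = 3        # mesh 1/3 < delta
ts = subdivision(m)
pieces = [(ts[j], ts[j + 1]) for j in range(m)]

# Every piece of diameter < delta lands in U0 or in U1:
assert all(in_U0(a, b) or in_U1(a, b) for (a, b) in pieces)
labels = [0 if in_U0(a, b) else 1 for (a, b) in pieces]
```

Here each piece gets a label $v\in\{0,1\}$ with $w([t_j,t_{j+1}])\subseteq \text{int}\,X_v$, which is exactly the data the argument above needs.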