It is helpful to write the range of summation as a system of inequalities. In your case it turns out to be $$\begin{align}0 &\leq i \leq n\\ 1&\leq j\leq i\\ j&\leq k\leq i\end{align}\tag{1}$$ and you can check that this is equivalent to $$1\leq j \leq k \leq i \leq n.\tag{2}$$ Notice that the original order of indices is $(i,j,k)$, and take a look at $(1)$. In the first line $i$ "doesn't know about" $j$ or $k$, so it is just $1\leq i \leq n$ (the case $i = 0$ contributes an empty sum, so it can be dropped). In the second line $j$ "doesn't know about" $k$ but "does know" $i$, so it is $1\leq j \leq i$. Finally, $k$ "knows everyone", so the third line becomes $j\leq k\leq i$.
Let us now change the order to $(j,k,i)$. From $(2)$ we read that the first line has to be $1\leq j\leq n$ since $j$ "doesn't know about" $k$ and $i$. The second line is $j\leq k \leq n$ since $k$ "doesn't know" $i$, but "knows" $j$. Finally, the third line is $k\leq i\leq n$ since $i$ "knows everyone".
Thus, the change of order of summation you want is $$\sum_{j=1}^n\sum_{k=j}^n\sum_{i=k}^n.$$
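If you want to sanity-check the reindexing, a quick brute-force comparison confirms that both orders run over the same index set. This is only a Python sketch; the summand `f` and the bound `n` are arbitrary placeholders:

```python
def f(i, j, k):
    # arbitrary summand, just for testing the index ranges
    return i * 100 + j * 10 + k

n = 7

# original order: 0 <= i <= n, 1 <= j <= i, j <= k <= i
original = sum(f(i, j, k)
               for i in range(0, n + 1)
               for j in range(1, i + 1)
               for k in range(j, i + 1))

# swapped order: 1 <= j <= n, j <= k <= n, k <= i <= n
swapped = sum(f(i, j, k)
              for j in range(1, n + 1)
              for k in range(j, n + 1)
              for i in range(k, n + 1))

print(original == swapped)  # True
```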
Here is a fully rigorous proof of this Proposition, based on the nice answer to this question by Ilya Bogdanov on MathOverflow. For his much easier-to-read proof sketch, see here.
Lemma 1. If $n\in\Bbb N\setminus \{1\}$ and $a_1,a_2,\dots, a_n\in\Bbb R$ are such that $a_1\cdot a_n< 0$, then there is an $i\in\{1,2,\dots,n-1\}$ such that $a_i=0$ or $a_{i}\cdot a_{i+1}<0$.
Proof. Suppose otherwise. Let $\operatorname{sign}$ be the sign function. Then $$-1=\operatorname{sign}(a_1\cdot a_n)=\operatorname{sign}\big(a_1\cdot a_n\cdot\prod_{j=2}^{n-1} a_j^2\big) = \operatorname{sign}\big(\prod_{j=1}^{n-1} a_j\cdot a_{j+1}\big)=1,$$ where the second equality uses that no $a_j$ is zero and the last one that every product $a_j\cdot a_{j+1}$ is positive by assumption. Contradiction. $\square$
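The index promised by Lemma 1 can be found by a direct scan. Here is a minimal Python sketch (note the 0-based indexing, so `a[0]` plays the role of $a_1$):

```python
def lemma1_index(a):
    """Given a[0] * a[-1] < 0, return an index i with a[i] == 0
    or a[i] * a[i + 1] < 0; Lemma 1 guarantees one exists."""
    assert a[0] * a[-1] < 0
    for i in range(len(a) - 1):
        if a[i] == 0 or a[i] * a[i + 1] < 0:
            return i
    raise AssertionError("impossible by Lemma 1")

print(lemma1_index([3, 1, 0, 2, -5]))   # 2 (a zero)
print(lemma1_index([3, 1, 4, -2, -5]))  # 2 (a sign switch)
```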
Lemma 2. Let $n\in\Bbb N\setminus\{1\}$ and $a_1,a_2,\dots, a_n\in\Bbb R$ be such that $\sum_{i=1}^n a_i=0$. For $j\in\{1,2,\dots,n\}$ define $$c(j) = \begin{cases}j+1, & j\neq n\\1, & j=n\end{cases}.$$ Then there are two distinct indices $i_1,i_2\in\{1,\dots,n\}$ such that $\forall j\in\{i_1,i_2\}\colon a_j=0\lor a_j\cdot a_{c(j)}<0$.
Proof. If all the $a_i$ are $0$, we are done. Otherwise let $i$ be an index with $a_i\neq 0$. Since the $a_i$ sum to $0$, there is an index $j\neq i$ with $a_i\cdot a_j<0$; suppose WLOG that $j>i$. By Lemma 1 there is an $i_1$ with $i\le i_1< j$ such that $a_{i_1}=0$ or $a_{i_1}\cdot a_{i_1+1}<0$. If $\operatorname{sign}(a_i)=\operatorname{sign}(a_1)$, then $a_j\cdot a_1<0$, so by Lemma 1 (applied to $a_j,a_{j+1}, \dots, a_n,a_1$) there is an $i_2$ with $j\le i_2\le n$ such that $a_{i_2}=0$ or $a_{i_2}\cdot a_{c(i_2)}<0$. The same reasoning (on the "left side", applied to $a_1,\dots,a_i$) gives us an $i_2<i$ if $\operatorname{sign}(a_i)\neq\operatorname{sign}(a_1)$. In either case $i_1\neq i_2$. $\square$
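Lemma 2 is also easy to stress-test numerically. In the following Python sketch (0-based, with `(j + 1) % n` playing the role of $c$), every randomly generated zero-sum sequence indeed has at least two qualifying indices; the sequence length and seed are arbitrary choices:

```python
import random

def switch_indices(a):
    """Indices j with a[j] == 0 or a cyclic sign switch a[j] * a[c(j)] < 0."""
    n = len(a)
    return [j for j in range(n) if a[j] == 0 or a[j] * a[(j + 1) % n] < 0]

random.seed(0)
for _ in range(1000):
    a = [random.randint(-5, 5) for _ in range(9)]
    a.append(-sum(a))                    # force the entries to sum to zero
    assert len(switch_indices(a)) >= 2   # Lemma 2
print("Lemma 2 holds in all trials")
```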
Proof of the Proposition. Let $i\in\{1,2,\dots,n-1\}$. For $j=1,2,\dots,n$ define a "natural cyclic continuation" of the $a_{i,j}$:
$$d_{i,j} = \sum_{k=0}^{i-1} a_{1,\operatorname{mod}(j+k,n)}$$
where we use a modified $\operatorname{mod}$ function: $$\operatorname{mod}(n_1,n_2):=\begin{cases}n_1\mod n_2, & \text{if }n_1\mod n_2\neq 0 \\ n_2, & \text{if }n_1\mod n_2=0\end{cases}.$$
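In code, this modified $\operatorname{mod}$ is just the usual remainder shifted into $\{1,\dots,n_2\}$; a small Python sketch:

```python
def mod1(n1, n2):
    """Like n1 % n2, but returns n2 instead of 0, so that the
    result always lies in {1, ..., n2} (1-based cyclic index)."""
    r = n1 % n2
    return r if r != 0 else n2

print([mod1(j, 5) for j in range(1, 11)])  # [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
```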
Some important observations (for all $i=1,2,\dots, n-1$):
\begin{align}
\tag 1 \label 1
d_{i,j} &= a_{i,j}, \text{ if } j\le n-i+1; \\
\tag 2 \label 2
d_{i,j} &=a_{1,j}+a_{1,j+1}+\dots+a_{1,n}+a_{1,1}+a_{1,2}+\dots+a_{1,j+i-1-n}
\\
&= -(a_{1,j+i-n}+a_{1,j+i-n+1}+\dots + a_{1,j-1}) = -a_{n-i,j+i-n}, \text{ if } j > n-i+1;
\\
\tag 3 \label 3
a_{i,n-i+1}&=a_{1,n-i+1}+a_{1,n-i+2}+\dots+a_{1,n}=-a_{n-i,1};
\\
\tag 4 \label 4
\sum_{j=1}^n d_{i,j} &= i\cdot\sum_{j=1}^n a_{1,j} = 0.
\end{align}
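All four observations can be checked numerically. The Python sketch below assumes (as the proof implicitly does) that $a_{i,j}=\sum_{k=0}^{i-1}a_{1,j+k}$ is the sum of $i$ consecutive first-row terms without wraparound; the concrete value of `n` and the random seed are arbitrary:

```python
import random

def mod1(n1, n2):
    # 1-based modulus: like n1 % n2 but with n2 in place of 0
    r = n1 % n2
    return r if r != 0 else n2

random.seed(1)
n = 8
a1 = [random.randint(-9, 9) for _ in range(n - 1)]
a1 = [None] + a1 + [-sum(a1)]             # 1-based list whose entries sum to zero

def a(i, j):
    # a_{i,j}: sum of i consecutive terms, no wraparound (needs j <= n - i + 1)
    return sum(a1[j + k] for k in range(i))

def d(i, j):
    # d_{i,j}: the cyclic continuation
    return sum(a1[mod1(j + k, n)] for k in range(i))

for i in range(1, n):
    for j in range(1, n + 1):
        if j <= n - i + 1:
            assert d(i, j) == a(i, j)                    # observation (1)
        else:
            assert d(i, j) == -a(n - i, j + i - n)       # observation (2)
    assert a(i, n - i + 1) == -a(n - i, 1)               # observation (3)
    assert sum(d(i, j) for j in range(1, n + 1)) == 0    # observation (4)
print("observations (1)-(4) hold")
```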
Because of \eqref{4}, we can apply Lemma 2 to the $d_{i,j}$ for each $i$ and get the following:
There are two functions $h_1,h_2\colon\{1,\dots,n-1\}\to \{1,\dots,n\}$ such that for all $i\in\{1,\dots,n-1\}$ we have
$$
\big(d_{i,h_1(i)}=0 \lor d_{i,h_1(i)}\cdot d_{i,\operatorname{mod}(h_1(i)+1,n)}<0\big) \land \big(d_{i,h_2(i)}=0 \lor d_{i,h_2(i)}\cdot d_{i,\operatorname{mod}(h_2(i)+1,n)}<0\big) \land h_1(i)\neq h_2(i).
$$
For every $i\in\{1,\dots,n-1\}$ we now distinguish four cases:
- If $1\le h_1(i)<n-i+1$, then we have a zero at (using \eqref{1}) $d_{i,h_1(i)}=a_{i,h_1(i)}$ or a sign switch between $d_{i,h_1(i)}=a_{i,h_1(i)}$ and $d_{i,h_1(i)+1}=a_{i,h_1(i)+1}$.
- If $h_1(i)=n-i+1$, then we have a zero at $d_{i,h_1(i)}=a_{i,h_1(i)}$ or a sign switch between (using \eqref{1}, \eqref{2} and \eqref{3}) $d_{i,h_1(i)}=-a_{n-i,1}$ and $d_{i,h_1(i)+1}=-a_{n-i,2}$.
- If $n-i+1<h_1(i)<n$, then we have a zero at (using \eqref{2}) $d_{i,h_1(i)}=-a_{n-i,h_1(i)+i-n}$ or a sign switch between $a_{n-i,h_1(i)+i-n}$ and $a_{n-i,h_1(i)+i-n+1}$.
- If $h_1(i)=n$, then we have a zero at (using \eqref{2}) $d_{i,n}=-a_{n-i,i}$ or a sign switch between $d_{i,n}=-a_{n-i,i}$ and $d_{i,1}=a_{i,1}=-a_{n-i,n-(n-i)+1}=-a_{n-i,i+1}$ (using \eqref{1} and \eqref{3} with $i$ replaced by $n-i$).
The same reasoning applies to the function $h_2$. Let
$$S:=\{(i,h_1(i))\mid i=1,\dots,\lceil (n-1)/2\rceil\}\cup \{(i,h_2(i))\mid i=1,\dots,\lceil (n-1)/2\rceil\}.$$
By the properties of $h_1,h_2$ (namely $h_1(i)\neq h_2(i)$), the cardinality of $S$ is $2\lceil (n-1)/2\rceil\geq n-1$. Using the four cases above, every pair $(i,j)\in S$ yields a distinct pair $(\tilde i, \tilde j)$ corresponding to a zero or a sign switch.
So there are at least $n-1$ zeros/sign switches in the first $n-1$ rows. It follows that, since $a_{n,1}=0$, there are at least $n$ zeros/sign switches in total. $\square$
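The Proposition can also be brute-force checked. The Python sketch below counts zeros and strict sign switches between horizontally adjacent entries, summed over all $n$ rows; the sequence length and seed are arbitrary choices:

```python
import random

def zeros_and_switches(a1):
    """Count zeros and adjacent sign switches in all rows of the
    triangular array a_{i,j} = a1[j] + ... + a1[j + i - 1] (0-based here)."""
    n = len(a1)
    count = 0
    for i in range(1, n + 1):
        row = [sum(a1[j:j + i]) for j in range(n - i + 1)]
        count += sum(1 for x in row if x == 0)
        count += sum(1 for x, y in zip(row, row[1:]) if x * y < 0)
    return count

random.seed(2)
for _ in range(500):
    a1 = [random.randint(-4, 4) for _ in range(9)]
    a1.append(-sum(a1))                       # zero-sum first row
    assert zeros_and_switches(a1) >= len(a1)  # the Proposition: at least n
print("Proposition holds in all trials")
```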
Imagine a directed graph where each vertex is one of the statements $P(m, n)$ that you want to establish. Your base case(s) are particular instances that you can prove directly. Induction is then just about following a path of arrows from some base case: $P(m_0, n_0) \implies \cdots \implies P(m, n)$. This path is made up of individual steps, and these deductions are the ones that you need to prove. "If you can take each step along the path, then you can get to your destination."
The usual, garden-variety induction involves a sequence of statements $P(n)$ that form a linear chain, so the only induction argument you have to make is that $P(n) \implies P(n+1)$ for each $n \geq n_0$, where the base case $P(n_0)$ is established by some other means.
In your triangular array, presumably you can prove $P(1, 1)$ in the top corner. Your lexicographic double induction is really two inductive arguments rolled up into one. (You can see this when you stare at the unpacked definition of the ordering on the array.)
Same row $(m = a \wedge n < b)$: You might as well consider the case $b = n+1$. This proceeds like the standard induction, since the row $m = a$ remains unchanged. In the directed graph, these are horizontal arrows to the right, from $(m, n)$ to $(m, n+1)$ for each $n \geq m$.
Different rows $(m < a)$: You might as well consider the case $a = m+1$, the subsequent row. Notice that there's no reference to the column, so this amounts to proving that $P(m, n) \implies P(m+1, n')$ for any $n' \geq m+1$. In the directed graph picture of this proof, that's an arrow from a particular node $(m, n)$ to every node on the row below: $(m+1, m+1)$, $(m+1, m+2)$, etc. If you can construct such an argument, great! But the argument must somehow be agnostic to the second index. Picture the implications stemming from just one statement: $P(2, 5) \implies P(3, n)$ for every $n \geq 3$. It's a lot!
The upshot: the base case(s) put you on the directed graph, and the inductive argument(s) allow you to take steps from one node to another on the graph. If together they provide a path to your destination node, then you did it!
It should be noted that there is a sparser graph (fewer edges) that would still work in your triangular array; convince yourself that there's a path to any node beginning from the upper-left corner.
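Since the picture of the sparser graph is not reproduced here, here is one plausible choice of such an edge set (an assumption, not necessarily the one the original answer drew): keep the horizontal arrows $(m,n)\to(m,n+1)$, but only a single downward arrow per row, $(m,m)\to(m+1,m+1)$. A short Python BFS confirms that every node of the (truncated) triangle is still reachable from $(1,1)$:

```python
from collections import deque

N = 12  # truncate the triangle {(m, n) : 1 <= m <= n <= N}

def neighbors(m, n):
    # sparser edge set: right steps, plus one diagonal step per row
    out = []
    if n + 1 <= N:
        out.append((m, n + 1))
    if m == n and m + 1 <= N:
        out.append((m + 1, m + 1))
    return out

seen, queue = {(1, 1)}, deque([(1, 1)])
while queue:
    for w in neighbors(*queue.popleft()):
        if w not in seen:
            seen.add(w)
            queue.append(w)

all_nodes = {(m, n) for n in range(1, N + 1) for m in range(1, n + 1)}
print(seen == all_nodes)  # True: every statement is reachable from P(1, 1)
```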