Isn't that obvious?
$$
MATCH(m,n) = \max \begin{cases}
(a): MATCH(m-1,n-1)+ sim(a_m,b_n)\\
(b): MATCH(m,n-1)\\
(c): MATCH(m-1,n).
\end{cases}
$$
Applying (b) recursively, we get
\begin{align}
MATCH(m,n) &\ge MATCH(m,n-1)\\
&\ge MATCH(m,n-2)\\
&\ge \ldots\\
&\ge MATCH(m,n-j).
\end{align}
Now applying (c) recursively, we get
\begin{align}
MATCH(m,n-j) &\ge MATCH(m-1,n-j)\\
&\ge MATCH(m-2,n-j)\\
&\ge \ldots\\
&\ge MATCH(m-i,n-j).
\end{align}
Putting these two sequences of inequalities together, we obtain
$$MATCH(m,n)\ge MATCH(m-i,n-j).$$
In other words, you first traverse from the node $(m,n)$ horizontally to the left until you reach $(m,n-j)$, then traverse vertically down to the node $(m-i,n-j)$. By the definition of the $MATCH$ function, the value at the nodes is non-increasing along this path.
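As a concrete check, here is a small bottom-up sketch of the $MATCH$ recurrence. The similarity function is an assumption: I take $sim(a,b)=1$ if the symbols are equal and $0$ otherwise (in which case $MATCH$ is just the longest-common-subsequence length), and the two example strings are arbitrary. The final assertion verifies the monotonicity claim above for every pair $(i,j)$:

```python
# Bottom-up table for the MATCH recurrence.
# Assumption: sim(a, b) = 1 if the symbols are equal, else 0
# (one possible choice; the argument above works for any sim >= 0).
def match_table(A, B):
    M, N = len(A), len(B)
    T = [[0] * (N + 1) for _ in range(M + 1)]
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            T[i][j] = max(
                T[i - 1][j - 1] + (1 if A[i - 1] == B[j - 1] else 0),  # case (a)
                T[i][j - 1],                                           # case (b)
                T[i - 1][j],                                           # case (c)
            )
    return T

A, B = "ABCBDAB", "BDCABA"  # example strings, chosen arbitrarily
m, n = len(A), len(B)
T = match_table(A, B)

# MATCH(m, n) >= MATCH(m - i, n - j) for all 0 <= i <= m, 0 <= j <= n
assert all(T[m][n] >= T[m - i][n - j]
           for i in range(m + 1) for j in range(n + 1))
```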
Trying all possible solutions is an exhaustive search. The following is a dynamic programming algorithm.
The minimal solution we search for can be defined recursively:
if m = v(k) for some k, then:
coins_needed(m) := 1,
otherwise:
coins_needed(m) :=
min(coins_needed(1) + coins_needed(m-1),
coins_needed(2) + coins_needed(m-2),
...,
coins_needed(floor(m/2)) + coins_needed(ceiling(m/2)))
One can start with m=1, then calculate coins_needed for m=2, and so on. It is important to store all the minimal solutions calculated so far; otherwise the algorithm needs exponential running time.
If one has calculated coins_needed(m-1) in this way, one has already calculated and stored coins_needed(1), ..., coins_needed(m-2). It is now easy to calculate coins_needed(m) according to the above formula.
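The table-filling just described can be sketched as follows. The denomination set {1, 2, 5} and the bound 11 are only illustrative choices; the splitting into two halves follows the recursion above:

```python
from math import inf

# Bottom-up version of the coins_needed recursion above.
# Assumption: the denomination set and the upper bound are example inputs.
def coins_needed_table(denominations, max_m):
    needed = [inf] * (max_m + 1)
    needed[0] = 0
    for m in range(1, max_m + 1):
        if m in denominations:
            needed[m] = 1  # base case: m itself is a denomination
        else:
            # min over all splits m = j + (m - j), j = 1 .. floor(m/2);
            # an empty min means m cannot be obtained at all
            needed[m] = min((needed[j] + needed[m - j]
                             for j in range(1, m // 2 + 1)),
                            default=inf)
    return needed

needed = coins_needed_table({1, 2, 5}, 11)
# e.g. 11 = 5 + 5 + 1 needs 3 coins
assert needed[11] == 3
```

Every entry needed[j] for j < m is already stored when needed[m] is computed, which is exactly the memoization the text insists on.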
This is dynamic programming as defined in Wikipedia. The problem has optimal substructure, because the solution can be constructed from the solutions of smaller subproblems, and each subproblem can be broken into smaller subproblems again. It also has overlapping subproblems: the same subproblems keep reappearing. So if we store the results of the subproblems (memoization), we can reuse them.
Best Answer
Given a set $D=\{d_1,\ldots,d_m\}\subset\mathbb N$ of coin denominations, for $n\in\mathbb N_0$ let $f(n)$ denote the minimum number of coins (with repetition) in $D$ needed to obtain sum $n$ (or $f(n)=\infty$ if it is impossible). Then clearly $f(n)\ge 0$ for all $n$ and $f(n)=0\iff n=0$.
If $n>0$ and a way to obtain $n$ with $f(n)$ coins uses at least one coin of denomination $d_k$ (at least one such $k$ exists), then removing this coin we obtain a way to obtain $n-d_k$ and hence conclude $$f(n-d_k)\le f(n)-1\quad \text{for at least one $1\le k\le m$.}$$ On the other hand, if $1\le k\le m$ and $n\ge d_k$, we obtain a way to obtain $n$ with $f(n-d_k)+1$ coins by adding a $d_k$ coin to an optimal way to get $n-d_k$. We conclude $$ f(n)\le f(n-d_k)+1\quad \text{for all $1\le k\le m$ with $n\ge d_k$}.$$ Together this gives us $$\tag1 f(n)=\min\{\,f(n-d_k)+1\mid 1\le k\le m,n\ge d_k\,\}\quad \text{for all $n>0$}$$ (with $\min\emptyset=\infty$ understood). And $(1)$ together with $f(0)=0$ is precisely what the DP algorithm uses to compute $f(n)$ recursively.
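Recurrence $(1)$ translates almost line by line into a bottom-up table. The denomination set $D=\{1,5,7\}$ below is only an example; `min(..., default=inf)` implements the convention $\min\emptyset=\infty$:

```python
from math import inf, isinf

# Direct sketch of recurrence (1): f(0) = 0 and
# f(n) = min{ f(n - d_k) + 1 : d_k <= n }, with min over the empty set = infinity.
# Assumption: D = {1, 5, 7} is just an example denomination set.
def f_table(D, n_max):
    f = [0] + [inf] * n_max
    for n in range(1, n_max + 1):
        f[n] = min((f[n - d] + 1 for d in D if d <= n), default=inf)
    return f

f = f_table({1, 5, 7}, 10)
assert f[10] == 2        # 10 = 5 + 5
assert isinf(f_table({2}, 3)[3])  # odd sums are impossible with only 2s
```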