Let $\{e_i\}$ be the canonical basis of $\Bbb R^n$.
As a $(0,2)$ tensor, $(\delta_{ij})$ can be thought of as the bilinear map $(x,y) \mapsto x^Ty$, which
reads, in tensor notation, as $\sum_{i=1}^n {e_i}^*\otimes {e_i}^*$.
As a $(2,0)$ tensor, $(\delta^{ij})$ can be thought of as the bivector $\sum_{i=1}^n e_i\otimes e_i$.
Finally, as a $(1,1)$ tensor, $({\delta_i}^j)$ can be thought of as an endomorphism (a linear map), in this case the identity map, which reads $\sum_{i=1}^n {e_i}^*\otimes e_i$.
These notations become coherent once you define $e^i = {e_i}^*$.
For example, a $(0,2)$ tensor $A=(A_{ij})$ is equal to $A=\sum_{ij} A_{ij} e^i\otimes e^j$, while a $(1,1)$ tensor $B=({B_i}^j)$ is equal to $B=\sum_{ij} {B_i}^j e^i\otimes e_j$, and a bivector $V=(V^{ij})$ is equal to $V=\sum_{ij}V^{ij}e_i\otimes e_j$.
In these last expressions, each index appears exactly twice: once as an upper index and once as a lower index.
The summation convention says that as long as an index appears in this specific configuration, we can forget about the $\sum$ sign.
Therefore, we would write, with the above notations, $A=A_{ij} e^i\otimes e^j$, $B={B_i}^je^i\otimes e_j$ and $V = V^{ij}e_i\otimes e_j$.
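If it helps to see these formulas concretely, here is a small numpy sketch (numpy and the random components are my own choice, not part of your question): the summation convention is essentially what `np.einsum` implements, and the expansion $A = A_{ij}\, e^i\otimes e^j$ really does rebuild the array of components.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
e = np.eye(n)                      # e[i] represents e_i (and also e^i, once we identify e^i = e_i^*)
A = rng.standard_normal((n, n))    # components A_{ij} of a (0,2) tensor
x = rng.standard_normal(n)         # components x^i
y = rng.standard_normal(n)         # components y^j

# A = A_{ij} e^i (x) e^j : summing over the repeated indices i and j rebuilds the array
A_rebuilt = sum(A[i, j] * np.outer(e[i], e[j]) for i in range(n) for j in range(n))
print(np.allclose(A_rebuilt, A))   # True

# The summation convention is what einsum implements: A_{ij} x^i y^j = x^T A y
print(np.isclose(np.einsum('ij,i,j->', A, x, y), x @ A @ y))   # True
```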
It appears that with this convention, ${\delta_i}^i$ is the trace of the identity matrix $({\delta_i}^k)$, since the summation convention implies
$$
{\delta_i}^i= \sum_{i=1}^n {\delta_i}^i = n = \operatorname{trace}I_n.
$$
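As a quick sanity check (again a numpy sketch of my own, not part of the original computation), `einsum` applies exactly this convention to the repeated index $i$:

```python
import numpy as np

n = 4
delta = np.eye(n)                 # the identity matrix ({delta_i}^k)
print(np.einsum('ii->', delta))   # delta_i^i with i summed: 4.0
print(np.trace(delta))            # i.e. the trace of I_n: 4.0
```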
But it is not true (or at least, it is very confusing and misleading) that ${\delta_a}^b{\delta_b}^i = {\delta_a}^i{\delta_i}^i$, since the latter expression implies a summation over $i$ while the former does not.
In fact, one should avoid expressions of this kind, where an index appears three times.
When $n=2$, the left hand side is equal to ${\delta_a}^1{\delta_1}^i + {\delta_a}^2{\delta_2}^i = {\delta_a}^i$, and still depends on $i$, while the right hand side is equal to ${\delta_a}^1{\delta_1}^1 + {\delta_a}^2{\delta_2}^2 = {\delta_a}^1 + {\delta_a}^2$, which only depends on $a$.
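A tiny numpy check (my own sketch) makes this difference visible for $n=2$:

```python
import numpy as np

n = 2
d = np.eye(n)

# delta_a^b delta_b^i : only b is summed, a and i remain free -> the identity again
lhs = np.einsum('ab,bi->ai', d, d)
print(lhs)          # [[1. 0.]
                    #  [0. 1.]]  depends on both free indices a and i

# delta_a^i delta_i^i : i now appears three times; if it is summed, only a remains free
rhs = np.einsum('ai,ii->a', d, d)
print(rhs)          # [1. 1.]    depends only on a
```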
This specific problem appears, for instance, at (E), where you write
$$
U_iV^a {\delta_a}^b{\delta_b}^i = U_iV^a{\delta_a}^i{\delta_i}^i,
$$
whereas one should instead write
$$
U_iV^a {\delta_a}^b{\delta_b}^i = U_iV^a{\delta_a}^i.
$$
Indeed, there is no need to append the extra factor ${\delta_i}^i$ in this expression, since you are already using the equality ${\delta_a}^b{\delta_b}^i = {\delta_a}^i$.
I think this is the main confusion in your question, and it appears several times (you write $V_b{\delta_a}^b=V_a{\delta_a}^a$ instead of $V_b{\delta_a}^b= V_a$ in (D), etc.).
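The correct rule is simply that contracting with a Kronecker delta renames an index. A quick numerical illustration of the (D) case (a sketch with made-up values):

```python
import numpy as np

V = np.array([2.0, -1.0, 5.0])    # components V_b
delta = np.eye(3)                 # ({delta_a}^b)

# V_b delta_a^b : b is summed, a stays free; the result is V with its index renamed to a
print(np.einsum('b,ab->a', V, delta))   # [ 2. -1.  5.], i.e. V_a
```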
Note that you have the following equalities
\begin{align}
U_iV^a{\delta_a}^b{\delta_b}^i &= (U_i{\delta_b}^i)(V^a{\delta_a}^b) = U_bV^b,\\
U_iV^a{\delta_a}^b{\delta_b}^i&=U_iV^a({\delta_b}^i{\delta_a}^b) = U_iV^a {\delta_a}^i = (U_i{\delta_a}^i)V^a = U_aV^a,\\
U_iV^a{\delta_a}^b{\delta_b}^i&=U_iV^a({\delta_b}^i{\delta_a}^b) =U_iV^a {\delta_a}^i= U_i(V^a{\delta_a}^i) = U_iV^i.
\end{align}
and thus, the result does not depend on the order of the different contractions.
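All three groupings are the same full contraction, so one can also confirm numerically (a sketch with random components) that the result is $U_iV^i$ whatever the order:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
U = rng.standard_normal(n)   # components U_i
V = rng.standard_normal(n)   # components V^a
d = np.eye(n)

# U_i V^a delta_a^b delta_b^i : i, a, b each appear once up and once down, so all are summed
full = np.einsum('i,a,ab,bi->', U, V, d, d)
print(np.isclose(full, U @ V))   # True: the contraction equals U_i V^i
```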
For what it's worth, let me add that I personally do not use this convention, and more generally, I do not use computations in coordinates.
This is a matter of taste (and also a cultural thing), but as you can see, this can sometimes be misleading.
When completely mastered, this way of doing computations is really powerful, but if you're not comfortable with it (like me), you are likely to make a lot of errors.
Best Answer
We have
\begin{align*}
\epsilon_{njk}\epsilon_{nmi} r_kp_m &= (\delta_{jm}\delta_{ki}-\delta_{ji}\delta_{km})r_kp_m\\
&= r_ip_j-\delta_{ij}r_kp_k
\end{align*}
and similarly
$$
\epsilon_{klj}\epsilon_{kin}\, r_np_l = r_jp_i-\delta_{ij}r_np_n.
$$
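If you want to double-check the $\epsilon$–$\delta$ identity used here, a small numpy sketch (my own, building the Levi-Civita symbol explicitly) verifies both the identity and the first contraction:

```python
import numpy as np
from itertools import permutations

# Build the Levi-Civita symbol epsilon_{ijk}
eps = np.zeros((3, 3, 3))
for perm, sign in zip(permutations(range(3)), (1, -1, -1, 1, 1, -1)):
    eps[perm] = sign

delta = np.eye(3)
rng = np.random.default_rng(2)
r = rng.standard_normal(3)
p = rng.standard_normal(3)

# epsilon_{njk} epsilon_{nmi} = delta_{jm} delta_{ki} - delta_{ji} delta_{km}
lhs_id = np.einsum('njk,nmi->jkmi', eps, eps)
rhs_id = np.einsum('jm,ki->jkmi', delta, delta) - np.einsum('ji,km->jkmi', delta, delta)
print(np.allclose(lhs_id, rhs_id))   # True

# epsilon_{njk} epsilon_{nmi} r_k p_m = r_i p_j - delta_{ij} r_k p_k
lhs = np.einsum('njk,nmi,k,m->ji', eps, eps, r, p)
rhs = np.einsum('i,j->ji', r, p) - delta * (r @ p)
print(np.allclose(lhs, rhs))         # True
```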