For a multilinear mapping, it suffices to compute its Fréchet derivative explicitly. Let $W$ be an $n$-dimensional vector space, and let each $V_i$ be an $m_i$-dimensional vector space for $i=1,2,...,N$. Let $f:V_1\times V_2\times\cdots\times V_N\to W$ be multilinear. Then for every $\left(v_1,v_2,...,v_N\right)\in V_1\times V_2\times\cdots\times V_N$, the Fréchet derivative of $f$ at this point, denoted by $({\rm d}f)(v_1,v_2,...,v_N)$, is a linear mapping, i.e.,
$$
({\rm d}f)(v_1,v_2,...,v_N):V_1\times V_2\times\cdots\times V_N\to W.
$$
By the definition of the Fréchet derivative, it follows that
\begin{align}
&({\rm d}f)(v_1,v_2,...,v_N)(h_1,h_2,...,h_N)\\
&=f(h_1,v_2,...,v_N)\\
&+f(v_1,h_2,...,v_N)\\
&+\cdots\\
&+f(v_1,v_2,...,h_N).
\end{align}
Recall that, if $g$ is linear, its entry-wise form reads
$$
g_i(v)=\sum_ja_{ij}v_j,
$$
and if $g$ is bilinear, its entry-wise form reads
$$
g_i(v_1,v_2)=\sum_{j_1,j_2}a_{ij_1j_2}v_{1j_1}v_{2j_2}.
$$
Inductively and formally, the multilinear $f$ above admits the following entry-wise form
$$
f_i(v_1,v_2,...,v_N)=\sum_{j_1=1}^{m_1}\sum_{j_2=1}^{m_2}\cdots\sum_{j_N=1}^{m_N}a_{ij_1j_2...j_N}v_{1j_1}v_{2j_2}...v_{Nj_N}
$$
for $i=1,2,...,n$, where each $v_{kj_k}$ denotes the $j_k$-th entry of $v_k\in V_k$, and the $a_{ij_1j_2...j_N}$ are the coefficients of $f$.
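As an illustrative sketch (the dimensions and the coefficient tensor below are made up for the example), the entry-wise form can be evaluated with `numpy.einsum` for, say, $N=3$:

```python
import numpy as np

# Hypothetical dimensions: W is 2-dimensional, and V1, V2, V3 have m1, m2, m3 = 3, 4, 5.
n, m1, m2, m3 = 2, 3, 4, 5
rng = np.random.default_rng(0)
a = rng.standard_normal((n, m1, m2, m3))  # coefficients a_{i j1 j2 j3}

def f(v1, v2, v3):
    """Entry-wise multilinear form: f_i = sum_{j1,j2,j3} a_{i j1 j2 j3} v1_{j1} v2_{j2} v3_{j3}."""
    return np.einsum("ijkl,j,k,l->i", a, v1, v2, v3)

v1, v2, v3 = rng.standard_normal(m1), rng.standard_normal(m2), rng.standard_normal(m3)

# Multilinearity check: scaling one argument scales the output by the same factor.
assert np.allclose(f(2.0 * v1, v2, v3), 2.0 * f(v1, v2, v3))
```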
Thanks to this entry-wise form, we may then write down the entry-wise form of ${\rm d}f$ as well, which reads
\begin{align}
&({\rm d}f)_i(v_1,v_2,...,v_N)(h_1,h_2,...,h_N)\\
&=\sum_{j_1=1}^{m_1}\sum_{j_2=1}^{m_2}\cdots\sum_{j_N=1}^{m_N}a_{ij_1j_2...j_N}h_{1j_1}v_{2j_2}...v_{Nj_N}\\
&+\sum_{j_1=1}^{m_1}\sum_{j_2=1}^{m_2}\cdots\sum_{j_N=1}^{m_N}a_{ij_1j_2...j_N}v_{1j_1}h_{2j_2}...v_{Nj_N}\\
&+\cdots\\
&+\sum_{j_1=1}^{m_1}\sum_{j_2=1}^{m_2}\cdots\sum_{j_N=1}^{m_N}a_{ij_1j_2...j_N}v_{1j_1}v_{2j_2}...h_{Nj_N}.
\end{align}
In other words, once the coefficients $a_{ij_1j_2...j_N}$ are known, the entry-wise form of ${\rm d}f$ can be written down directly as above.
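The sum formula for ${\rm d}f$ can be checked numerically against a finite-difference quotient. Below is a minimal sketch for a bilinear example ($N=2$) with made-up dimensions and coefficients:

```python
import numpy as np

# Hypothetical bilinear example (N = 2): f_i(v1, v2) = sum_{j1,j2} a_{i j1 j2} v1_{j1} v2_{j2}.
n, m1, m2 = 2, 3, 4
rng = np.random.default_rng(1)
a = rng.standard_normal((n, m1, m2))

def f(v1, v2):
    return np.einsum("ijk,j,k->i", a, v1, v2)

def df(v1, v2, h1, h2):
    # Sum formula for the Frechet derivative at (v1, v2) applied to (h1, h2).
    return f(h1, v2) + f(v1, h2)

v1, v2 = rng.standard_normal(m1), rng.standard_normal(m2)
h1, h2 = rng.standard_normal(m1), rng.standard_normal(m2)

# Finite-difference check: (f(v + eps*h) - f(v)) / eps -> df(v)(h) as eps -> 0.
eps = 1e-6
fd = (f(v1 + eps * h1, v2 + eps * h2) - f(v1, v2)) / eps
assert np.allclose(fd, df(v1, v2, h1, h2), atol=1e-4)
```

The leftover error in the finite-difference quotient is $\varepsilon\,f(h_1,h_2)$, the second-order term of the bilinear expansion, which vanishes as $\varepsilon\to 0$.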
Finally, the "$+$" in the OP's original post, i.e., $(h_1+h_2+\cdots+h_N)$, is a notational convention in some contexts; it means exactly $(h_1,h_2,...,h_N)$ here. When there is no ambiguity, either expression can be used according to one's preference.
The sequence defined in the OP can be regarded as a linear dynamical system on $\mathbb{R}^2$ as follows:
$$\begin{pmatrix} x_{n+2}\\ x_{n+1}\end{pmatrix} =\begin{pmatrix} \frac23 & \frac13\\ 1 & 0 \end{pmatrix}\begin{pmatrix} x_{n+1}\\ x_n\end{pmatrix}=\begin{pmatrix} \frac23 & \frac13\\ 1 & 0 \end{pmatrix}^{n+1}\begin{pmatrix} x_1\\ x_0\end{pmatrix}
$$
The matrix $A=\begin{pmatrix} \frac23 & \frac13\\ 1 & 0 \end{pmatrix}$ has eigenvalues $\lambda_1=-\frac13$, $\lambda_2=1$. It follows that
$$A=\begin{pmatrix} \frac23 & \frac13\\ 1 & 0 \end{pmatrix}=
\begin{pmatrix} 1 & 1\\ -3 & 1 \end{pmatrix}\begin{pmatrix} -\frac13 & 0\\ 0 & 1 \end{pmatrix}\frac14\begin{pmatrix} 1 & -1\\ 3 & 1 \end{pmatrix}
$$
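As a quick numerical sanity check (using NumPy), the stated eigenvalues and the eigendecomposition $A=PDP^{-1}$ can be verified directly:

```python
import numpy as np

A = np.array([[2.0 / 3.0, 1.0 / 3.0], [1.0, 0.0]])
P = np.array([[1.0, 1.0], [-3.0, 1.0]])          # eigenvectors for -1/3 and 1
D = np.diag([-1.0 / 3.0, 1.0])
P_inv = 0.25 * np.array([[1.0, -1.0], [3.0, 1.0]])

# Check the decomposition A = P D P^{-1} and the eigenvalues -1/3 and 1.
assert np.allclose(A, P @ D @ P_inv)
assert np.allclose(np.sort(np.linalg.eigvals(A)), [-1.0 / 3.0, 1.0])
```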
Consequently
\begin{align}
A^n&=
\begin{pmatrix} 1 & 1\\ -3 & 1 \end{pmatrix}\begin{pmatrix} \big(-\frac13\big)^n & 0\\ 0 & 1 \end{pmatrix}\frac14\begin{pmatrix} 1 & -1\\ 3 & 1 \end{pmatrix}\\&\xrightarrow{n\rightarrow\infty}\begin{pmatrix} 1 & 1\\ -3 & 1 \end{pmatrix}\begin{pmatrix} 0 & 0\\ 0 & 1 \end{pmatrix}\frac14\begin{pmatrix} 1 & -1\\ 3 & 1 \end{pmatrix}=\frac14\begin{pmatrix} 3 & 1\\ 3 & 1\end{pmatrix}
\end{align}
Thus, the sequence $\mathbf{x}_n=[x_{n+1},x_n]^\intercal$ converges to
$$\frac14\begin{pmatrix} 3 & 1\\ 3 & 1\end{pmatrix}\begin{pmatrix} x_1\\ x_0\end{pmatrix}=\begin{pmatrix} \frac{3x_1+x_0}{4} \\ \frac{3x_1+x_0}{4} \end{pmatrix}
$$
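The claimed limit can also be confirmed by iterating the recurrence $x_{n+2}=\frac23 x_{n+1}+\frac13 x_n$ directly; the starting values below are arbitrary:

```python
# Iterate x_{n+2} = (2/3) x_{n+1} + (1/3) x_n from arbitrary (hypothetical) initial values.
x0, x1 = 1.0, 5.0
x_prev, x_curr = x0, x1
for _ in range(200):
    x_prev, x_curr = x_curr, (2.0 / 3.0) * x_curr + (1.0 / 3.0) * x_prev

limit = (3.0 * x1 + x0) / 4.0  # predicted limit (3 x1 + x0) / 4
assert abs(x_curr - limit) < 1e-12
```

Since the transient decays like $\big({-\tfrac13}\big)^n$, a couple hundred iterations already agree with the predicted limit to machine precision.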
Your application of the chain rule is not entirely correct. We have: $$D_1\left(f_i(a+t(u-a))\right)=\sum_jD_jf_i(a+t(u-a))D_1(a_j+t(u_j-a_j))=tD_1f_i(a+t(u-a)).$$ So, this then leads to $$D_1K(u,t)=\sum_iD_1\left(f_i(a+t(u-a))\right)(u_i-a_i)+f_1(a+t(u-a))=t\langle D_1f(a+t(u-a)),(u-a)\rangle+f_1(a+t(u-a)).$$
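The corrected formula for $D_1K$ can be verified with a central finite difference. The map $f$ and the points $a$, $u$, $t$ below are hypothetical stand-ins (the OP's $f$ is not reproduced here), with $K(u,t)=\sum_i f_i(a+t(u-a))(u_i-a_i)$:

```python
import numpy as np

def f(x):
    # Hypothetical smooth map f: R^2 -> R^2.
    return np.array([x[0] * x[1], x[0] ** 2 + x[1]])

def D1f(x):
    # Partial derivative of each f_i with respect to the first variable.
    return np.array([x[1], 2.0 * x[0]])

def K(u, t, a):
    # K(u, t) = sum_i f_i(a + t(u - a)) (u_i - a_i)
    return f(a + t * (u - a)) @ (u - a)

a = np.array([0.3, -0.7])
u = np.array([1.1, 0.4])
t = 0.6

y = a + t * (u - a)
analytic = t * (D1f(y) @ (u - a)) + f(y)[0]  # t<D_1 f(y), u - a> + f_1(y)

# Central finite difference in the first component of u.
eps = 1e-6
e1 = np.array([eps, 0.0])
numeric = (K(u + e1, t, a) - K(u - e1, t, a)) / (2.0 * eps)
assert abs(numeric - analytic) < 1e-8
```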