For a multilinear mapping, it suffices to consider its Fréchet derivative. Let $W$ be an $n$-dimensional vector space, and let each $V_i$ be an $m_i$-dimensional vector space, $i=1,2,...,N$. Let $f:V_1\times V_2\times\cdots\times V_N\to W$ be multilinear. Then for every $\left(v_1,v_2,...,v_N\right)\in V_1\times V_2\times\cdots\times V_N$, the Fréchet derivative of $f$ at this point, denoted by $({\rm d}f)(v_1,v_2,...,v_N)$, is a linear mapping on the product space, i.e.,
$$
({\rm d}f)(v_1,v_2,...,v_N):V_1\times V_2\times\cdots\times V_N\to W.
$$
By the definition of the Fréchet derivative, together with the multilinearity of $f$, it follows that
\begin{align}
&({\rm d}f)(v_1,v_2,...,v_N)(h_1,h_2,...,h_N)\\
&=f(h_1,v_2,...,v_N)\\
&+f(v_1,h_2,...,v_N)\\
&+\cdots\\
&+f(v_1,v_2,...,h_N).
\end{align}
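As a sanity check, the derivative formula above can be verified numerically in the bilinear case $N=2$. The sketch below (all arrays and names are illustrative, not from the text) compares the claimed derivative against a finite-difference quotient:

```python
import numpy as np

# Illustrative bilinear map f: R^3 x R^4 -> R^2 with coefficients a_{i j1 j2}.
rng = np.random.default_rng(0)
m1, m2, n = 3, 4, 2
a = rng.standard_normal((n, m1, m2))

def f(u, v):
    # f_i(u, v) = sum_{j1, j2} a_{i j1 j2} u_{j1} v_{j2}
    return np.einsum('ijk,j,k->i', a, u, v)

def df(u, v, h1, h2):
    # Claimed Frechet derivative at (u, v) applied to (h1, h2)
    return f(h1, v) + f(u, h2)

u, v = rng.standard_normal(m1), rng.standard_normal(m2)
h1, h2 = rng.standard_normal(m1), rng.standard_normal(m2)

# First-order check: (f(u + t*h1, v + t*h2) - f(u, v)) / t  ->  df(u, v)(h1, h2)
t = 1e-6
fd = (f(u + t*h1, v + t*h2) - f(u, v)) / t
assert np.allclose(fd, df(u, v, h1, h2), atol=1e-4)
```

The finite-difference quotient differs from the exact derivative only by the second-order term $t\,f(h_1,h_2)$, so the agreement is tight for small $t$.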
Recall that, if $g$ is linear, its entry-wise form reads
$$
g_i(v)=\sum_ja_{ij}v_j,
$$
and if $g$ is bilinear, its entry-wise form reads
$$
g_i(v_1,v_2)=\sum_{j_1,j_2}a_{ij_1j_2}v_{1j_1}v_{2j_2}.
$$
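Both entry-wise forms are just index contractions: the linear case is a matrix-vector product, and the bilinear case contracts a rank-3 coefficient array against two vectors. A minimal NumPy sketch (arrays and shapes chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear case: g_i(v) = sum_j a_ij v_j is a matrix-vector product.
A = rng.standard_normal((2, 3))          # coefficients a_{ij}
v = rng.standard_normal(3)
g_lin = A @ v
assert np.allclose(g_lin, np.einsum('ij,j->i', A, v))

# Bilinear case: g_i(v1, v2) = sum_{j1, j2} a_{i j1 j2} v1_{j1} v2_{j2}.
B = rng.standard_normal((2, 3, 4))       # coefficients a_{i j1 j2}
v1, v2 = rng.standard_normal(3), rng.standard_normal(4)
g_bil = np.einsum('ijk,j,k->i', B, v1, v2)

# Linearity in the first slot: g(2*v1, v2) = 2*g(v1, v2)
assert np.allclose(np.einsum('ijk,j,k->i', B, 2*v1, v2), 2*g_bil)
```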
Inductively, the multilinear $f$ above admits the following entry-wise form:
$$
f_i(v_1,v_2,...,v_N)=\sum_{j_1=1}^{m_1}\sum_{j_2=1}^{m_2}\cdots\sum_{j_N=1}^{m_N}a_{ij_1j_2...j_N}v_{1j_1}v_{2j_2}...v_{Nj_N}
$$
for $i=1,2,...,n$, where $v_{kj_k}$ denotes the $j_k$-th entry of $v_k\in V_k$, and the $a_{ij_1j_2...j_N}$ are the coefficients of $f$.
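Concretely, for $N=3$ this entry-wise form is a single contraction of a rank-4 coefficient array against three vectors. A sketch, with all dimensions and names chosen for illustration:

```python
import numpy as np

# f_i(v1, v2, v3) = sum_{j1, j2, j3} a_{i j1 j2 j3} v1_{j1} v2_{j2} v3_{j3}
rng = np.random.default_rng(2)
n, m1, m2, m3 = 2, 3, 4, 5
a = rng.standard_normal((n, m1, m2, m3))

def f(v1, v2, v3):
    return np.einsum('ijkl,j,k,l->i', a, v1, v2, v3)

v1, v2, v3 = (rng.standard_normal(m) for m in (m1, m2, m3))
out = f(v1, v2, v3)
assert out.shape == (n,)

# Multilinearity: additive in each slot, e.g. the first one.
w1 = rng.standard_normal(m1)
assert np.allclose(f(v1 + w1, v2, v3), f(v1, v2, v3) + f(w1, v2, v3))
```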
Thanks to this entry-wise form, we may then write down the entry-wise form of ${\rm d}f$ as well, which reads
\begin{align}
&({\rm d}f)_i(v_1,v_2,...,v_N)(h_1,h_2,...,h_N)\\
&=\sum_{j_1=1}^{m_1}\sum_{j_2=1}^{m_2}\cdots\sum_{j_N=1}^{m_N}a_{ij_1j_2...j_N}h_{1j_1}v_{2j_2}...v_{Nj_N}\\
&+\sum_{j_1=1}^{m_1}\sum_{j_2=1}^{m_2}\cdots\sum_{j_N=1}^{m_N}a_{ij_1j_2...j_N}v_{1j_1}h_{2j_2}...v_{Nj_N}\\
&+\cdots\\
&+\sum_{j_1=1}^{m_1}\sum_{j_2=1}^{m_2}\cdots\sum_{j_N=1}^{m_N}a_{ij_1j_2...j_N}v_{1j_1}v_{2j_2}...h_{Nj_N}.
\end{align}
In other words, once the coefficients $a_{ij_1j_2...j_N}$ are known, the entry-wise form of ${\rm d}f$ can be written down directly as above.
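The entry-wise form of ${\rm d}f$ is thus a sum of $N$ contractions, each with one $v_k$ replaced by $h_k$. The sketch below (for $N=3$, with illustrative arrays) checks this sum against a finite-difference quotient:

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.standard_normal((2, 3, 4, 5))    # coefficients a_{i j1 j2 j3}
f = lambda u, v, w: np.einsum('ijkl,j,k,l->i', a, u, v, w)

def df(v, h):
    # Sum of N = 3 contractions: one h substituted into each slot.
    (v1, v2, v3), (h1, h2, h3) = v, h
    return f(h1, v2, v3) + f(v1, h2, v3) + f(v1, v2, h3)

v = tuple(rng.standard_normal(m) for m in (3, 4, 5))
h = tuple(rng.standard_normal(m) for m in (3, 4, 5))

t = 1e-6
fd = (f(*(vi + t*hi for vi, hi in zip(v, h))) - f(*v)) / t
assert np.allclose(fd, df(v, h), atol=1e-3)
```

The residual here is of order $t$ (the terms with two or three $h$'s), which is why the finite-difference quotient matches to the stated tolerance.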
Finally, the "$+$" in OP's original post, i.e., $(h_1+h_2+\cdots+h_N)$, is a notational convention used in some contexts; it means exactly $(h_1,h_2,...,h_N)$ here. When there is no risk of ambiguity, either expression may be used according to one's preference.
Best Answer
If you want to generalise a formula derived for vectors with commuting components (e.g. to operator-valued components such as $\partial_i$), make sure to write the usual formula so as to preserve the order of factors in products. For example, here the $i$th component of the RHS is $\sum_j a_j (b_i c_j-b_j c_i)$, once we impose the $abc$ order of the LHS. Now you know the generalisation off the top of your head. For $a_i=\partial_i$, the result's $i$th component is $$\sum_j \partial_j (b_i c_j-b_j c_i)=b_i\nabla\cdot c+c\cdot\nabla b_i-(\nabla\cdot b) c_i-b\cdot\nabla c_i.$$The vector, in other words, is $$\vec{b}(\nabla\cdot\vec{c})-(\nabla\cdot\vec{b})\vec{c}+(\vec{c}\cdot\nabla)\vec{b}-(\vec{b}\cdot\nabla)\vec{c}.$$(I've swapped the middle terms to mirror @md2perpe's quoted result, but the letters $b,\,c$ still need to be changed to $A,\,B$.)
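The identity $\nabla\times(\vec{b}\times\vec{c})=\vec{b}(\nabla\cdot\vec{c})-(\nabla\cdot\vec{b})\vec{c}+(\vec{c}\cdot\nabla)\vec{b}-(\vec{b}\cdot\nabla)\vec{c}$ can also be checked symbolically. Below is a sketch using SymPy's vector module; the particular fields $\vec{b},\vec{c}$ are arbitrary illustrative choices:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

C = CoordSys3D('C')
x, y, z = C.x, C.y, C.z

# Arbitrary smooth vector fields for the check.
b = x*y*C.i + sp.sin(z)*C.j + (x + z**2)*C.k
c = z*C.i + x**2*C.j + y*C.k

def dirderiv(v, w):
    # (v . grad) w, computed component by component
    comps = []
    for e in (C.i, C.j, C.k):
        wc = w.dot(e)
        comps.append(sum(v.dot(u) * sp.diff(wc, s)
                         for u, s in zip((C.i, C.j, C.k), (x, y, z))))
    return comps[0]*C.i + comps[1]*C.j + comps[2]*C.k

lhs = curl(b.cross(c))
rhs = (b*divergence(c) - c*divergence(b)
       + dirderiv(c, b) - dirderiv(b, c))

# The difference should vanish component-wise.
assert all(sp.simplify((lhs - rhs).dot(e)) == 0 for e in (C.i, C.j, C.k))
```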