$\def\m#1{ \left[\begin{array}{r}#1\end{array}\right] }$
Given a hollow matrix
$$\eqalign{
A &\in {\mathbb R}^{n\times n} \\
}$$
the hollow-half-vec operation is analogous to the more familiar half-vec operation. Both can be described in terms of the standard vec operator
$$\eqalign{
{\rm vechh}(A) &= E_h\cdot{\rm vec}(A) \quad&\sim\quad
{\rm vech}(A) &= E\cdot{\rm vec}(A) \\
{\rm vec}(A) &= D_h\cdot{\rm vechh}(A) \quad&\sim\quad
\;\;{\rm vec}(A) &= D\cdot{\rm vech}(A) \\
}$$
where $E_h\in{\mathbb R}^{\ell\times n^2}$, with $\,\ell = \tfrac 12(n^2-n)\,$, is the hollow-elimination matrix. It is a sparse binary matrix with exactly one non-zero element in each row, whose column index is the position in ${\rm vec}(A)$ of the element being retained.
The hollow-duplication matrix $D_h\in{\mathbb R}^{n^2\times\ell}$ is also a sparse binary matrix, whose elements satisfy $E_hD_h=I$ and whose columns each sum to two, i.e. $\;\tfrac 12D_h^T{\tt1} = {\tt1}.\;$
Again, this is analogous to the half-vec case, where $ED=I\,$ (although the column sums of $D$ vary between one and two).
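These constructions can be sketched in NumPy (an illustrative sketch, not part of the original post; `hollow_elim_dup` is a name chosen here, and column-major ordering is assumed for $\rm vec$):

```python
import numpy as np

def hollow_elim_dup(n):
    """Build the hollow-elimination matrix E_h and the hollow-duplication
    matrix D_h for n x n hollow symmetric matrices (column-major vec)."""
    ell = n * (n - 1) // 2
    E = np.zeros((ell, n * n))
    D = np.zeros((n * n, ell))
    k = 0
    for j in range(n):                 # column of A
        for i in range(j + 1, n):      # strictly-lower rows of that column
            E[k, j * n + i] = 1        # keep element (i,j) from vec(A)
            D[j * n + i, k] = 1        # lower-triangle copy
            D[i * n + j, k] = 1        # mirrored upper-triangle copy
            k += 1
    return E, D

E, D = hollow_elim_dup(4)
assert np.allclose(E @ D, np.eye(6))   # E_h D_h = I
assert np.allclose(D.sum(axis=0), 2)   # every column of D_h sums to two
```

The loop order (down each column, strictly below the diagonal) matches the $n=4$ index listing further below.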
Interestingly, the pseudoinverse $D_h^+$ can serve as an elimination matrix, although $E_h^+$ fails as a duplication matrix, i.e.
$$\eqalign{
{\rm vechh}(A) &= D_h^+\cdot{\rm vec}(A) \quad&\sim\quad
{\rm vech}(A) &= D^+\cdot{\rm vec}(A) \\
{\rm vec}(A) &\ne E_h^+\cdot{\rm vechh}(A) \quad&\sim\quad
\;\;{\rm vec}(A) &\ne E^+\cdot{\rm vech}(A) \\
D_h^+D_h &= I \quad&\sim\quad \quad D^+D &= I \\
}$$
$\Big[\,$In the hollow case, the calculation is particularly easy: the columns of $D_h$ are mutually orthogonal with squared norm two, so
$D_h^+ = (D_h^TD_h)^{-1}D_h^T = \tfrac 12D_h^T.\;\Big]$
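These pseudoinverse claims are easy to check numerically (an illustrative NumPy sketch; $E_h$ and $D_h$ are rebuilt inline for $n=4$, and the test matrix is a symmetric hollow matrix so that duplication applies):

```python
import numpy as np

n, ell = 4, 6
E = np.zeros((ell, n * n))             # hollow-elimination matrix E_h
D = np.zeros((n * n, ell))             # hollow-duplication matrix D_h
k = 0
for j in range(n):
    for i in range(j + 1, n):
        E[k, j * n + i] = 1
        D[j * n + i, k] = D[i * n + j, k] = 1
        k += 1

A = np.array([[0, 1, 2, 3],            # a symmetric hollow matrix
              [1, 0, 4, 5],
              [2, 4, 0, 6],
              [3, 5, 6, 0]], dtype=float)
v = A.flatten(order='F')               # vec(A), column-major

Dp = np.linalg.pinv(D)
assert np.allclose(Dp, 0.5 * D.T)          # D_h^+ = (1/2) D_h^T
assert np.allclose(Dp @ v, E @ v)          # D_h^+ eliminates like E_h
assert np.allclose(Dp @ D, np.eye(ell))    # D_h^+ D_h = I

Ep = np.linalg.pinv(E)                     # here E_h^+ = E_h^T
assert not np.allclose(Ep @ (E @ v), v)    # E_h^+ fails to duplicate
```

The last line fails as a duplication because $E_h^+E_h$ zeroes out the upper-triangle entries of ${\rm vec}(A)$ instead of mirroring them.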
For example, for $\;n=4,\;\ell=6,\;$ the non-zero elements are
$$\eqalign{
&E_h(1,2) &= 1,\qquad &D_h(2,1) &= 1,\quad &D_h(5,1) &= 1 \\
&E_h(2,3) &= 1,\qquad &D_h(3,2) &= 1,\quad &D_h(9,2) &= 1 \\
&E_h(3,4) &= 1,\qquad &D_h(4,3) &= 1,\quad &D_h(13,3)&= 1 \\
&E_h(4,7) &= 1,\qquad &D_h(7,4) &= 1,\quad &D_h(10,4)&= 1 \\
&E_h(5,8) &= 1,\qquad &D_h(8,5) &= 1,\quad &D_h(14,5)&= 1 \\
&E_h(6,12)&= 1,\qquad &D_h(12,6)&= 1,\quad &D_h(15,6)&= 1 \\
}$$
Mapping the indices of the vec and vechh vectors onto
$4\times 4$ matrices helps elucidate the components given above:
the left matrix shows vec indexing, the right shows vechh indexing.
$$\eqalign{
\m{1&5&9&13\\2&6&10&14\\3&7&11&15\\4&8&12&16} \qquad\qquad
\m{0&1&2&3\\1&0&4&5\\2&4&0&6\\3&5&6&0}
\\
}$$
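The index listing can be verified directly against these two matrices (an illustrative NumPy sketch; $E_h$ and $D_h$ are rebuilt inline):

```python
import numpy as np

n, ell = 4, 6
E = np.zeros((ell, n * n))
D = np.zeros((n * n, ell))
k = 0
for j in range(n):
    for i in range(j + 1, n):
        E[k, j * n + i] = 1
        D[j * n + i, k] = D[i * n + j, k] = 1
        k += 1

# Entry k of this matrix marks vechh index k (the right matrix above).
A = np.array([[0, 1, 2, 3],
              [1, 0, 4, 5],
              [2, 4, 0, 6],
              [3, 5, 6, 0]], dtype=float)
v = A.flatten(order='F')                       # vec(A): the left matrix
assert np.allclose(E @ v, [1, 2, 3, 4, 5, 6])  # vechh(A) in order
assert np.allclose(D @ (E @ v), v)             # D_h rebuilds vec(A)
```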
Best Answer
Yes, the two matrices in your question are hollow. However, the definition of a hollow matrix only requires the matrix to be square with a zero main diagonal, and it applies to square matrices over any field. It does not require symmetry or entrywise non-negativity; the values and structure of the off-diagonal entries are completely irrelevant. As long as the matrix is square and all of its main-diagonal elements are zero, it is a hollow matrix.
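That definition reduces to a one-line check (an illustrative sketch, assuming NumPy; `is_hollow` is a name chosen here):

```python
import numpy as np

def is_hollow(M):
    """A hollow matrix is any square matrix whose main diagonal is all zero."""
    M = np.asarray(M)
    return M.ndim == 2 and M.shape[0] == M.shape[1] and not np.any(np.diag(M))

# Symmetry and sign are irrelevant: this asymmetric matrix with a
# negative entry is still hollow.
assert is_hollow([[0, -7], [3, 0]])
assert not is_hollow([[1, 2], [3, 4]])   # nonzero diagonal
```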