The expression,
$$
\nabla\cdot (\rho \textbf v \otimes \textbf v),
$$
can be written in index notation as,
$$
\partial_i (\rho v_i v_j),
$$
where the dot product becomes a contraction, summing over the repeated index,
$$
\textbf a \cdot \textbf b = a_i b_i,
$$
and the tensor product yields an object with two indices, making it a matrix,
$$
(\textbf c \otimes \textbf d)_{ij} = c_i d_j =: M_{ij}.
$$
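Both rules translate directly into NumPy's `einsum`, whose subscript strings mirror the index notation. A minimal sketch, using hypothetical example vectors:

```python
import numpy as np

# Hypothetical example vectors (not from the text above).
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Dot product: contract the repeated index, a_i b_i -> scalar.
dot = np.einsum('i,i->', a, b)     # same as a @ b

# Tensor product: two free indices, c_i d_j -> matrix M_ij.
M = np.einsum('i,j->ij', a, b)     # same as np.outer(a, b)

print(dot)       # 32.0
print(M[0, 2])   # a_1 * b_3 = 6.0
```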
Now we differentiate using the product rule,
$$
\partial_i (\rho v_i v_j)=(\partial_i \rho) v_i v_j + \rho (\partial_i v_i) v_j + \rho v_i (\partial_i v_j).
$$
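As a sanity check, the product-rule expansion can be verified numerically with central finite differences at a single point. This is only a sketch; the smooth test fields `rho` and `v` below are hypothetical choices made for illustration:

```python
import numpy as np

h = 1e-5
x0 = np.array([0.3, -0.7, 1.1])

# Hypothetical smooth test fields (any smooth choice would do).
def rho(x):
    return 1.0 + x[0]**2 + np.sin(x[1]) + x[2]

def v(x):
    return np.array([x[0] * x[1], np.cos(x[2]), x[0] + x[1]**2])

def grad(f, x):
    # Central differences; for vector-valued f this is J[i, j] = d_i f_j.
    rows = []
    for i in range(3):
        e = np.zeros(3); e[i] = h
        rows.append((f(x + e) - f(x - e)) / (2 * h))
    return np.array(rows)

def T(x):
    # T[i, j] = rho v_i v_j
    return rho(x) * np.outer(v(x), v(x))

# Left-hand side: lhs_j = sum_i d_i T_ij.
lhs = np.zeros(3)
for i in range(3):
    e = np.zeros(3); e[i] = h
    lhs += (T(x0 + e)[i] - T(x0 - e)[i]) / (2 * h)

# Right-hand side: the three product-rule terms.
g = grad(rho, x0)   # gradient of rho
J = grad(v, x0)     # J[i, j] = d_i v_j
rhs = (g @ v(x0)) * v(x0) \
    + rho(x0) * np.trace(J) * v(x0) \
    + rho(x0) * (v(x0) @ J)

print(np.allclose(lhs, rhs, atol=1e-6))  # True
```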
Let’s look at the terms separately:
$\bullet (\partial_i \rho) v_i v_j $: assuming $\rho=\rho(\textbf x)$, the expression in parentheses is the gradient $(\partial_x\rho, \partial_y\rho, \partial_z\rho)$, which is then dotted with the vector $\textbf v$. This yields a number, say $c_1$, which multiplies every component of $v_j$. So the result here is a vector. If $\rho$ is constant, this term vanishes.
$\bullet\rho (\partial_i v_i) v_j$: Here we calculate the divergence of $\textbf v$,
$$
\partial_i a_i = \nabla \cdot \textbf a = \text{div }\textbf a,
$$
and multiply this number by $\rho$, yielding another number, say $c_2$, which again multiplies every component of $v_j$. The result here is once more a vector.
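In code, $c_2$ is just $\rho$ times the trace of the matrix of partial derivatives $\partial_i v_j$. A minimal sketch with hypothetical numerical values:

```python
import numpy as np

# Hypothetical values of the matrix M[i, j] = d_i v_j.
M = np.array([[1.0, 0.5, 0.0],
              [0.2, 2.0, 0.1],
              [0.0, 0.3, 3.0]])
rho = 1.5

# div v = d_i v_i is the trace (sum of diagonal entries) of M.
c2 = rho * np.trace(M)   # rho * (d_x v_x + d_y v_y + d_z v_z)
print(c2)                # 1.5 * 6.0 = 9.0
```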
$\bullet\rho v_i (\partial_i v_j)$: Here we construct a matrix from the velocity gradient,
$$ M_{ij} := \partial_i v_j,
$$
that is, for example, $M_{13}=\partial_x v_z$. We then multiply this matrix from the left by the (row) vector $v_i$, yielding another vector; this contraction is just the advection term $(\textbf v\cdot\nabla)\textbf v$. Finally, every component of this new vector is multiplied by $\rho$, so we again have a vector.
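This row-vector-times-matrix contraction is again a one-liner in `einsum`. A minimal sketch with hypothetical placeholder values:

```python
import numpy as np

# Hypothetical values: v_i and M[i, j] = d_i v_j.
v = np.array([1.0, 2.0, 3.0])
M = np.array([[0.0, 1.0, 2.0],
              [3.0, 4.0, 5.0],
              [6.0, 7.0, 8.0]])
rho = 2.0

# rho v_i (d_i v_j): contract v against the FIRST index of M,
# i.e. a row vector times a matrix.
term = rho * np.einsum('i,ij->j', v, M)   # same as rho * (v @ M)
print(term)   # [48. 60. 72.]
```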
I think it helps to write out the Cartesian components of $\nabla\cdot\big[c\big(\nabla\textbf v + (\nabla\textbf v)^T\big)\big]$:
\begin{equation}
c \sum_{k=1}^3 \partial_k \left(\partial_k v_i + \partial_i v_k\right)
\end{equation}
where the free index $i$ runs over $\{1, 2, 3\}$, and where
\begin{equation}
\partial_i \equiv \frac{\partial }{\partial x^i}
\end{equation}
The meaning of the transpose is that the indices on the partial derivative and the vector are switched.
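Expanding the sum makes the structure of this expression explicit: the first piece is the Laplacian of $v_i$, and the second is the $i$-th component of the gradient of the divergence,
$$
\sum_{k=1}^3 \partial_k \left(\partial_k v_i + \partial_i v_k\right) = \nabla^2 v_i + \partial_i \left(\nabla\cdot\textbf v\right),
$$
where $\partial_i$ can be pulled out of the sum because partial derivatives commute. For an incompressible flow ($\nabla\cdot\textbf v = 0$) the second piece drops out.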
Best Answer
I think that the question was answered in the comments, but your main concern seems to be "how would you denote these in vector notation?".
My answer to this is either (1) you don't, or (2) if you must, then you are free to denote it any way you like. There is no standard agreement on a "vector" notation because, for tensors of rank greater than 1, it becomes much more confusing than it's worth.
For that reason I recommend option (1).
Example: Suppose you want to take the derivative w.r.t the second index of a tensor. Then you can either write
$$ \partial_{i_2} T^{i_1i_2 \cdots} \qquad \text{or} \qquad \vec{\mathcal{D}}\ \cdot \stackrel{\leftrightarrow}{T} $$
In my mind the expression on the right is essentially useless and, above all, confusing: it tries to package far too much information into a vector notation. That works fine if you have a single index, but it loses its appeal in proportion to how many indices your tensor has.
If you make any attempt to salvage the "vector" notation on the right, you will most likely invent the notation on the left, as it is superior in every single way.
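One way to see why the index notation wins: it transcribes directly into computable form, e.g. as an `einsum` subscript string, no matter the rank. A minimal sketch with a hypothetical rank-3 tensor:

```python
import numpy as np

# Hypothetical rank-3 tensor T_{ijk} and vector a_j.
T = np.arange(24.0).reshape(2, 3, 4)
a = np.array([1.0, 2.0, 3.0])

# Contracting over the SECOND index, a_j T_{ijk}, reads off
# the subscript string with no extra notation needed.
out = np.einsum('j,ijk->ik', a, T)
print(out.shape)   # (2, 4)
print(out[0, 0])   # 1*T[0,0,0] + 2*T[0,1,0] + 3*T[0,2,0] = 32.0
```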