Parentheses in mathematical expressions show what is evaluated first. In some cases the sequence of evaluation really matters and affects the final result of the expression; in other cases the sequence doesn’t change the result. In the latter case parentheses can still be used, for example for convenience, for better explanation, for illustrative purposes, or for some other reason
Your first expression*, ${\phi \, \boldsymbol{a} \cdot \boldsymbol{\nabla} \boldsymbol{b}}$, can have parentheses placed anywhere without affecting its result
*) please avoid using the same symbol for very different operations; when you really need an explicit symbol for scalar multiplication, you may write the expression as ${\phi \cdot \boldsymbol{a} \bullet \boldsymbol{\nabla} \boldsymbol{b}}$ or, with an explicit symbol for the tensor product, as ${\phi \cdot \boldsymbol{a} \bullet \boldsymbol{\nabla} \otimes \boldsymbol{b}}$
$${\phi \bigl( \boldsymbol{a} \cdot \boldsymbol{\nabla} \boldsymbol{b} \bigr) \!}
= {\phi \, \boldsymbol{a} \cdot \bigl( \boldsymbol{\nabla} \boldsymbol{b} \bigr) \!}
= {\phi \bigl( \boldsymbol{a} \cdot \boldsymbol{\nabla} \bigr) \boldsymbol{b}}
= {\, \bigl( \phi \, \boldsymbol{a} \cdot \boldsymbol{\nabla} \bigr) \boldsymbol{b}}
= {\, \bigl( \phi \, \boldsymbol{a} \bigr) \! \cdot \boldsymbol{\nabla} \boldsymbol{b}}
= {\, \bigl( \phi \, \boldsymbol{a} \bigr) \! \cdot \bigl( \boldsymbol{\nabla} \boldsymbol{b} \bigr) \!}
= {\phi \, \boldsymbol{a} \cdot \boldsymbol{\nabla} \boldsymbol{b}}$$
The “trick to manage parentheses” is nothing more than understanding the operations used in an expression, their properties, and their arguments
The dot product is a composite operation: it is the tensor product followed by a contraction. The dot product affects only tensors of complexity (rank) greater than zero, i.e. vectors and more complex tensors, thus it has no effect on scalars. The tensor product is the basic operation: it takes two tensors of any complexities and results in a tensor of the aggregate complexity. The tensor product of two vectors is often called the dyadic product. Scalar multiplication can be seen as a special case of the tensor product with a scalar argument (or arguments), as well as part of a linear combination. Contraction, sometimes called the “trace”, takes one argument (it is a unary operation) and reduces the complexity of that argument by two via summation over a pair of adjacent indices
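To see the “tensor product, then contraction” mechanics in plain numbers, here is a minimal sketch (plain Python; the sample vectors are arbitrary and not taken from the question):

```python
def tensor_product(a, b):
    """Dyadic (tensor) product of two vectors: T[i][j] = a_i * b_j."""
    return [[ai * bj for bj in b] for ai in a]

def contraction(T):
    """Contract a 2nd-complexity tensor over its pair of indices: sum_i T[i][i]."""
    return sum(T[i][i] for i in range(len(T)))

a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]

dot_via_contraction = contraction(tensor_product(a, b))
dot_direct = sum(ai * bi for ai, bi in zip(a, b))
print(dot_via_contraction, dot_direct)  # both 32.0
```

The dyadic product keeps all nine products $a_i b_j$; the contraction then sums the diagonal, which is exactly the dot product.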
The polyadic representation of tensors (polyadic decomposition, component expansion; a linear combination of basis vectors/dyads/triads/polyads with components) can help you see what’s going on behind the scenes. Here you measure non-scalar tensors via some complete set of mutually independent vectors, called the basis vectors. Any vector can be measured as a linear combination of the basis vectors with coefficients, the so-called components of the vector within the basis currently used for measuring. The simplest basis is an orthonormal one, in which the basis vectors are mutually perpendicular and each is one unit long, that is $\boldsymbol{e}_i \cdot \boldsymbol{e}_j = \delta_{ij}$ (https://en.wikipedia.org/wiki/Kronecker_delta)
$$\boldsymbol{w} = w_1 \boldsymbol{e}_1 + w_2 \boldsymbol{e}_2 + w_3 \boldsymbol{e}_3$$
or, short and easy, with summation over the repeated index implied
$$\boldsymbol{w} = w_i \boldsymbol{e}_i$$
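A small numerical illustration of measuring a vector in an orthonormal basis (plain Python; $\boldsymbol{w}$ and the basis are arbitrary sample values): the components are recovered as $w_i = \boldsymbol{w} \cdot \boldsymbol{e}_i$, and the linear combination $w_i \boldsymbol{e}_i$ rebuilds the vector:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Standard orthonormal basis: e_i . e_j = delta_ij
e = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
w = [2.0, -1.0, 5.0]  # arbitrary sample vector

components = [dot(w, ei) for ei in e]  # w_i = w . e_i
rebuilt = [sum(components[i] * e[i][k] for i in range(3)) for k in range(3)]
print(components, rebuilt)  # [2.0, -1.0, 5.0] [2.0, -1.0, 5.0]
```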
The differential operator “nabla” $\boldsymbol{\nabla}$ is a special vector whose components are the coordinate derivatives $\partial_i \equiv \frac{\partial}{\partial x_i}$, applied to the term immediately following the nabla
$$\boldsymbol{\nabla} = \boldsymbol{e}_i \partial_i$$
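As a numerical sketch of $\boldsymbol{\nabla}$ acting on a scalar field (plain Python; the field $\phi$ here is a hypothetical example), the components of the result are just the coordinate derivatives, approximated below by central differences:

```python
import math

def phi(x):
    """Hypothetical sample scalar field; nothing in the answer fixes a particular one."""
    return x[0] ** 2 * x[1] + math.sin(x[2])

def nabla(f, x, h=1e-6):
    """grad f = e_i * (d f / d x_i), here via central differences."""
    g = []
    for i in range(3):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

x = [1.0, 2.0, 0.5]
print(nabla(phi, x))  # approx [2*x0*x1, x0**2, cos(x2)] = [4.0, 1.0, 0.8776...]
```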
Expanding your expression by measuring it in an orthonormal basis, you have
$$\phi \, \boldsymbol{a} \cdot \boldsymbol{\nabla} \boldsymbol{b} = \phi \, a_i \boldsymbol{e}_i \cdot \boldsymbol{e}_j \partial_j \bigl( b_k \boldsymbol{e}_k \bigr)$$
or, since the mutually orthogonal unit vectors of an orthonormal basis are constant
$$\phi \, \boldsymbol{a} \cdot \boldsymbol{\nabla} \boldsymbol{b} = \phi \, a_i \boldsymbol{e}_i \cdot \boldsymbol{e}_j \bigl( \partial_j b_k \bigr) \boldsymbol{e}_k$$
or, using $\boldsymbol{e}_i \cdot \boldsymbol{e}_j = \delta_{ij}$
$${\phi \, \boldsymbol{a} \cdot \boldsymbol{\nabla} \boldsymbol{b}}
= {\phi \, a_i \delta_{ij} \partial_j \bigl( b_k \boldsymbol{e}_k \bigr) \!}
= {\phi \, a_i \partial_i \bigl( b_k \boldsymbol{e}_k \bigr)}$$
or
$$\phi \, \boldsymbol{a} \cdot \boldsymbol{\nabla} \boldsymbol{b}
= {\phi \, a_i \delta_{ij} \bigl( \partial_j b_k \bigr) \boldsymbol{e}_k}
= {\phi \, a_i \bigl( \partial_i b_k \bigr) \boldsymbol{e}_k}$$
As long as you keep the coordinate derivative $\partial_i$ applied to $\boldsymbol{b}$ (or to its components in some orthonormal basis), you can evaluate this expression in any sequence you wish
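This insensitivity to grouping can be checked numerically (plain Python; the field $\boldsymbol{b}$ and the values of $\phi$ and $\boldsymbol{a}$ at the evaluation point are arbitrary samples): compute the matrix of derivatives $\partial_j b_k$ once, then compare $\phi\,(\boldsymbol{a} \cdot \boldsymbol{\nabla} \boldsymbol{b})$ with $(\phi\,\boldsymbol{a}) \cdot \boldsymbol{\nabla} \boldsymbol{b}$:

```python
import math

def b(x):
    """Hypothetical sample vector field."""
    return [x[0] * x[1], math.sin(x[1]) * x[2], x[2] ** 2]

def grad_b(x, h=1e-6):
    """J[j][k] = d b_k / d x_j by central differences."""
    J = []
    for j in range(3):
        xp = list(x); xm = list(x)
        xp[j] += h; xm[j] -= h
        bp, bm = b(xp), b(xm)
        J.append([(bp[k] - bm[k]) / (2 * h) for k in range(3)])
    return J

x = [0.3, 0.7, 1.1]    # evaluation point
phi = 2.5              # sample scalar value at x
a = [1.0, -2.0, 0.5]   # sample vector value at x
J = grad_b(x)

# phi (a . grad b): contract first, then scale by phi
v1 = [phi * sum(a[i] * J[i][k] for i in range(3)) for k in range(3)]
# (phi a) . grad b: scale a by phi first, then contract
v2 = [sum((phi * a[i]) * J[i][k] for i in range(3)) for k in range(3)]

print(all(math.isclose(p, q, rel_tol=1e-9, abs_tol=1e-12) for p, q in zip(v1, v2)))  # True
```

The two groupings agree because the derivative acts only on $\boldsymbol{b}$, so $\phi$ and $a_i$ are mere multiplicative factors that can be regrouped freely.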
The story about cross products is much longer; I propose to take a look at my answer to Gradient of a dot product
Best Answer
The gradient of a vector, $\nabla B$, is a 2nd-order (2,0) tensor.
$$ \nabla B = {\partial B_i\over\partial x_j}e_{ij} $$
Here $e_{i}$ is the unit (1,0) vector basis, and $e_{ij}=e_{i}\otimes e_{j}$ is the unit dyadic (2,0) basis.
Hence the product $B\cdot\nabla B$ contracts the index of $B$ with the first index of $\nabla B$, leaving a vector: $$ B\cdot \nabla B = B_i {\partial B_i\over\partial x_j}e_{j} $$
For $B=B_3e_3$: $$ B\cdot \nabla B = B_3{\partial B_3\over\partial x_j} e_{j} = B_3 \nabla B_3 $$
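A quick numerical check of this reduction (plain Python; $B_3$ is an arbitrary sample field, and the convention $(\nabla B)_{ij} = \partial B_i / \partial x_j$ from above is used):

```python
import math

def B3(x):
    """Arbitrary sample scalar field for the third component of B."""
    return x[0] * x[2] + math.cos(x[1])

def B(x):
    """B = B_3 e_3: only the third component is nonzero."""
    return [0.0, 0.0, B3(x)]

def partial(f, x, j, h=1e-6):
    """Central-difference d f / d x_j."""
    xp = list(x); xm = list(x)
    xp[j] += h; xm[j] -= h
    return (f(xp) - f(xm)) / (2 * h)

x = [0.4, 1.2, 0.8]
Bx = B(x)

# B . grad B = B_i (dB_i/dx_j) e_j  -- contraction over i
lhs = [sum(Bx[i] * partial(lambda y, i=i: B(y)[i], x, j) for i in range(3))
       for j in range(3)]
# B_3 grad B_3
rhs = [Bx[2] * partial(B3, x, j) for j in range(3)]

print(all(math.isclose(l, r, rel_tol=1e-6, abs_tol=1e-12) for l, r in zip(lhs, rhs)))  # True
```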