Suppose you are given a function $g$ of a scalar argument $s$, and its first few derivatives
$$\eqalign{
g^{(0)}=g(s),\,\,\,g^{(1)}=\frac{dg(s)}{ds},\,\,\,g^{(2)}=\frac{d^2g(s)}{ds^2},\,\,\ldots\cr
}$$
Applying the function element-wise to a vector, e.g. $z=A^Tx$, yields a vector of the same size. The differentials of such a vectorized function can be written using the elementwise/Hadamard product
$$\eqalign{
dg^{(0)} &= g^{(1)}\circ dz= g^{(1)}\circ A^Tdx \cr
dg^{(1)} &= g^{(2)}\circ dz= g^{(2)}\circ A^Tdx \cr
}$$
To eliminate the Hadamard products, you can create a diagonal matrix from the vector, e.g. $G={\rm Diag}(g),$ and use a regular matrix product. For such diagonal matrices, I'll use the corresponding uppercase letter
$$\eqalign{
dg^{(0)} &= G^{(1)}A^Tdx \cr
dg^{(1)} &= G^{(2)}A^Tdx \cr
dg^{(2)} &= G^{(3)}A^Tdx \cr
}$$
Use this to find the differential of the $f$ function.
NB: Instead of $\langle v,g\rangle\,\,$ I use $\,v\!:\!g,\,$ which is easier to type.
$$\eqalign{
f(x) &= v:g^{(0)} \cr
df &= v:dg^{(0)} = v:G^{(1)}A^Tdx \cr
&= AG^{(1)}v:dx \cr
f^{(1)} &= AG^{(1)}v \cr
}$$
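The formula $f^{(1)} = AG^{(1)}v$ is easy to sanity-check numerically against a finite difference. A NumPy sketch, taking $g=\tanh$ as an arbitrary illustrative choice (the derivation holds for any smooth $g$):

```python
import numpy as np

# Check f^(1) = A G^(1) v for f(x) = v : g(A^T x), with g = tanh (arbitrary example)
rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, m))
v = rng.standard_normal(m)
x = rng.standard_normal(n)

g  = np.tanh
g1 = lambda z: 1.0 / np.cosh(z) ** 2        # g' for g = tanh

f = lambda x: v @ g(A.T @ x)                # f(x) = v : g^(0)

grad = A @ (g1(A.T @ x) * v)                # A G^(1) v, with G^(1) = Diag(g^(1))

# central finite difference along each coordinate direction
h = 1e-6
fd = np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(n)])
assert np.allclose(grad, fd, atol=1e-6)
```

Multiplying elementwise by $g^{(1)}(A^Tx)$ is exactly the action of the diagonal matrix $G^{(1)}$, so no explicit `np.diag` is needed here.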
There's the first derivative. Taking its differential leads us to the second derivative
$$\eqalign{
df^{(1)}
&= A\,dG^{(1)}v \cr
&= A(dg^{(1)}\circ v) \cr
&= A(v\circ dg^{(1)}) \cr
&= AV\,dg^{(1)} \cr
&= AVG^{(2)}A^T\,dx \cr
f^{(2)} &= AVG^{(2)}A^T \cr
}$$
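The Hessian formula $f^{(2)} = AVG^{(2)}A^T$ can be checked the same way, by differencing the gradient. Again a sketch with $g=\tanh$ as an arbitrary example:

```python
import numpy as np

# Check f^(2) = A V G^(2) A^T, with g = tanh (arbitrary example)
rng = np.random.default_rng(1)
n, m = 4, 3
A = rng.standard_normal((n, m))
v = rng.standard_normal(m)
x = rng.standard_normal(n)

g1 = lambda z: 1.0 / np.cosh(z) ** 2               # g'
g2 = lambda z: -2.0 * np.tanh(z) / np.cosh(z) ** 2  # g''

grad = lambda x: A @ (g1(A.T @ x) * v)              # f^(1) = A G^(1) v
hess = A @ np.diag(v * g2(A.T @ x)) @ A.T           # A V G^(2) A^T

# finite-difference the gradient: column j is d(grad)/dx_j
h = 1e-6
fd = np.column_stack([(grad(x + h*e) - grad(x - h*e)) / (2*h)
                      for e in np.eye(n)])
assert np.allclose(hess, fd, atol=1e-5)
```

Note that $VG^{(2)}$ is itself diagonal (both factors are), so the Hessian is manifestly symmetric, as it must be.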
That's the second derivative; now on to the third
$$\eqalign{
df^{(2)}
&= AV\,dG^{(2)}\,A^T \cr
&= AV\,{\rm Diag}\big(dg^{(2)}\big)\,A^T \cr
&= AV\,{\rm Diag}\big(G^{(3)}A^Tdx\big)\,A^T \cr
&= AV\,{\mathbb E}\,A:{\rm Diag}\big(G^{(3)}A^Tdx\big) \cr
&= AV\,{\mathbb E}\,A:{\mathbb H}\,G^{(3)}A^T\,dx \cr
f^{(3)} &= AV\,{\mathbb E}\,A:{\mathbb H}\,G^{(3)}A^T \cr
}$$
where ${\mathbb E}$ is a 4th order tensor whose components can be written in terms of Kronecker deltas
$${\mathbb E}_{ijkl}=\delta_{ik}\delta_{jl}$$
and ${\mathbb H}$ is a 3rd order tensor whose components ${\mathbb H}_{ijk}$ equal $1$ when all three indices are equal and are zero otherwise.
You can write the 3rd derivative of $f$ in index notation as
$$\eqalign{
f^{(3)}_{ils} &= A_{ij}V_{jk}\,{\mathbb E}_{klmn}\,A_{np}{\mathbb H}_{mpq}\,G^{(3)}_{qr}A^T_{rs} \cr
}$$
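Contracting out the Kronecker deltas in that expression collapses it to $f^{(3)}_{ils}=\sum_k A_{ik}A_{lk}A_{sk}\,v_k\,g^{(3)}(z_k)$, which maps directly onto an `einsum` and can be verified against a finite difference of the Hessian. A sketch, again with $g=\tanh$ as an arbitrary example:

```python
import numpy as np

# Check the collapsed form f3[i,l,s] = sum_k A[i,k] A[l,k] A[s,k] v[k] g'''(z_k)
rng = np.random.default_rng(2)
n, m = 4, 3
A = rng.standard_normal((n, m))
v = rng.standard_normal(m)
x = rng.standard_normal(n)
z = A.T @ x

g2 = lambda z: -2.0 * np.tanh(z) / np.cosh(z) ** 2          # g''
g3 = lambda z: (6.0 * np.tanh(z)**2 - 2.0) / np.cosh(z)**2  # g'''

f3 = np.einsum('ik,k,lk,sk->ils', A, v * g3(z), A, A)

# finite-difference the Hessian A V G^(2) A^T along each coordinate
hess = lambda x: A @ np.diag(v * g2(A.T @ x)) @ A.T
h = 1e-5
fd = np.stack([(hess(x + h*e) - hess(x - h*e)) / (2*h)
               for e in np.eye(n)], axis=-1)
assert np.allclose(f3, fd, atol=1e-4)
```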
If you're not familiar with the summation convention used in index notation, a repeated index implies a summation over that index. For example
$$C_{ip} = A_{ijk}B_{jkp} \equiv \sum_j\sum_kA_{ijk}B_{jkp}$$
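The summation convention translates directly into `numpy.einsum`, whose subscript string is essentially index notation; for the example above:

```python
import numpy as np

# C_ip = A_ijk B_jkp: repeated indices j, k are summed over
rng = np.random.default_rng(3)
A = rng.standard_normal((2, 3, 4))
B = rng.standard_normal((3, 4, 5))

C = np.einsum('ijk,jkp->ip', A, B)

# the same contraction spelled out as explicit sums
C_loops = np.array([[sum(A[i, j, k] * B[j, k, p]
                         for j in range(3) for k in range(4))
                     for p in range(5)]
                    for i in range(2)])
assert np.allclose(C, C_loops)
```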
Best Answer
You need to be careful with any notation, since different authors often define and use the same symbols in different ways. In this particular case, my mathematical experience tells me the most common meanings are:
1. "Gradient's variable":
following Ruben Tobar's answer, it may be notation indicating with respect to which variables the gradient should be taken. To be more precise: if $x_i\in \mathbb{R}$ for $i= 1,...,n$ and $\mathbf{x}\in\mathbb{R}^n$ is the vector $\mathbf{x}=(x_1,...,x_n)$, then for a differentiable $f:\mathbb{R}^n\rightarrow\mathbb{R}$ we have: $$\nabla_{\mathbf{x}}f=(\partial_{x_1}f,...,\partial_{x_n}f)$$ Therefore $\nabla_{\mathbf{x}}f$ is a vector whose entries $\partial_{x_i}f$ are just standard partial derivatives of the multivariable function $f$ with respect to the real variable $x_i$. Note that I did not explicitly write the dependence of $f$ on $\mathbf{x}$, since it would be redundant, but nevertheless both the left-hand side and all the entries on the right-hand side are functions of $\mathbf{x}$.
To elaborate on the notation a little: it is quite common in partial differential equations, where you have multiple different variables, and, so as not to confuse your reader, when you use (or define) a differential operator you indicate which variables it acts on. For example, consider the following: take $\mathbf{x}\in{\mathbb{R}^n}$ defined as above and similarly $\mathbf{z}\in{\mathbb{R}^d}$ for some natural $n,d>1$, and $t\in\mathbb{R}$. The problem is to find a function $f(t,\mathbf{x},\mathbf{z}):\mathbb{R}\times \mathbb{R}^n\times\mathbb{R}^d\rightarrow\mathbb{R}$ solving a differential equation. Without indicating the variables, the following differential equation would be ambiguous, and it is difficult to write it in any other way: $$f_t + \nabla_\mathbf{x}f +{\rm div}_\mathbf{z}f = 0$$ where ${\rm div}_\mathbf{z}$ is a different differential operator (the divergence), defined as $\sum_i\partial_{z_i}$, $i=1,...,d.$
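The definition in point 1 is easy to illustrate numerically. A small sketch, using a hypothetical example function $f(x_1,x_2)=\sin(x_1)+x_1x_2$ (my own choice, not from the answer), comparing the hand-computed partials against finite differences:

```python
import numpy as np

# A concrete f: R^2 -> R (hypothetical example)
def f(x):
    return np.sin(x[0]) + x[0] * x[1]

# grad_x f = (df/dx1, df/dx2), computed by hand
def grad_f(x):
    return np.array([np.cos(x[0]) + x[1], x[0]])

x = np.array([0.7, -1.2])

# each partial derivative via a central finite difference
h = 1e-6
fd = np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(2)])
assert np.allclose(grad_f(x), fd, atol=1e-6)
```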
2. Directional derivative:
As you can see at https://en.wikipedia.org/wiki/Directional_derivative (where you can find more information), it is a very common notation for the directional derivative. In full generality: given any $f(\mathbf{x}):\mathbb{R}^n\rightarrow\mathbb{R}$ and a vector $\mathbf{v}\in \mathbb{R}^n$: $$\nabla_{\mathbf{v}} f(\mathbf{x}):=\nabla f(\mathbf{x})\cdot\mathbf{v},$$ where on the right-hand side we have the standard scalar product.
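This definition can also be checked against the limit definition of the directional derivative, $(f(\mathbf{x}+h\mathbf{v})-f(\mathbf{x}))/h$ for small $h$. A sketch with a hypothetical example function:

```python
import numpy as np

# Hypothetical example f: R^2 -> R
def f(x):
    return np.exp(x[0]) * np.sin(x[1])

def grad_f(x):
    return np.array([np.exp(x[0]) * np.sin(x[1]),
                     np.exp(x[0]) * np.cos(x[1])])

x = np.array([0.3, 1.1])
v = np.array([2.0, -1.0])

# directional derivative via the gradient: grad f(x) . v
dir_deriv = grad_f(x) @ v

# same quantity via a symmetric difference along v
h = 1e-7
fd = (f(x + h*v) - f(x - h*v)) / (2*h)
assert np.isclose(dir_deriv, fd, atol=1e-5)
```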
3. Gradient at x?:
In my opinion (some may disagree): it is neither a common nor a useful notation. Still, in our notation it would mean: $$\nabla_\mathbf{x}f:=[\nabla f](\mathbf{x}):=(\partial_{x_1} f(\mathbf{x}),...,\partial_{x_n} f(\mathbf{x}))$$ where $\mathbf{x}\in\mathbb{R}^n$ is fixed and the partial derivatives of $f$ are evaluated at $\mathbf{x}$. Note that unlike in point 1, where we did not stress the dependence on the variable, here we write explicitly that the functions are evaluated at $\mathbf{x}.$