The link you gave appears to be dead so I'm not 100% familiar with some of these terms, however I believe I know what's going on.
These extra multiple shooting parameters you're dealing with are a result of the method used, not the problem formulation. Since you haven't changed the problem definition itself, the additional parameters introduced by multiple shooting should not appear in the cost function. If that's the case, then the sensitivities with respect to them are just 0. You can probably stop here.
HOWEVER, why are these new multiple shooting parameters exposed as explicit NLP parameters in the first place? If something is not a free parameter, the NLP solver shouldn't have direct access to it. NLP solvers can usually handle equality constraints without any problem. For a trajectory $\gamma$ that has been sliced up into $\{\gamma_1, \gamma_2, \cdots, \gamma_n\}$ over the time segments $\{[\tau_0, \tau_1], [\tau_1, \tau_2], \cdots, [\tau_{n-1}, \tau_n]\}$ in multiple shooting, the extra boundary (continuity) conditions become
$$
\gamma_i(\tau_{i}) - \gamma_{i+1}(\tau_i) = 0 \quad \forall \; i \in \{1, \dots, n-1\}
$$
If you append these extra boundary conditions directly as nonlinear equality constraints, you shouldn't have a problem, but again the link you gave was dead.
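As a toy sketch of that idea (not your actual problem, since the link is dead): take the scalar dynamics $x' = -x$ on $[0,1]$, split the horizon into $n$ segments whose initial states are the decision variables, and append the continuity conditions above as equality constraints. The dynamics, cost, and target value $0.5$ are all illustrative assumptions here.

```python
import numpy as np
from scipy.optimize import minimize

# Toy multiple shooting for x' = -x on [0, 1], split into n segments.
# Decision variables are the segment initial states s_1..s_n; the flow
# over one segment of length dt is known exactly: x(t + dt) = x(t)*exp(-dt).
n, dt = 4, 1.0 / 4
flow = lambda s: s * np.exp(-dt)          # propagate one segment

def continuity(s):
    # gamma_i(tau_i) - gamma_{i+1}(tau_i) = 0 for i = 1..n-1,
    # appended as nonlinear equality constraints
    return np.array([flow(s[i]) - s[i + 1] for i in range(n - 1)])

# Illustrative objective: hit the (made-up) value 0.5 at the final time;
# the initial state is left free so the problem has something to optimize.
cost = lambda s: (flow(s[-1]) - 0.5) ** 2

res = minimize(cost, np.ones(n),
               constraints=[{"type": "eq", "fun": continuity}])
print(res.x[0] * np.exp(-1.0))   # final state, ≈ 0.5 if the solve succeeded
```

The point is that the extra states never appear in the cost; they only enter through the equality constraints, which SLSQP (SciPy's default for constrained problems) handles directly.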
If the solver is smart, it'll actually convert $\max(g(x),0)=0$ back into $g(x)\leq 0$. Basically, the $\max$ function is nondifferentiable, which makes it much more difficult to work with numerically since we typically need derivatives for fast optimization algorithms. As such, a solver with a good preprocessor will take $\max(g(x),0)=0$ and turn it into
\begin{align*}
y &= 0\\
y &\geq g(x)\\
y &\geq 0
\end{align*}
which then gets further processed into
$$
g(x) \leq 0
$$
which you started with. Or, at least, that's one way to process it. There's probably more.
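The equivalence is easy to sanity-check numerically for some concrete $g$ (here $g(x) = x^2 - 1$, an arbitrary choice): the points satisfying $\max(g(x),0)=0$ are exactly the points satisfying $g(x)\leq 0$.

```python
import numpy as np

# Check that max(g(x), 0) = 0 and g(x) <= 0 pick out the same points,
# for an illustrative g(x) = x^2 - 1 sampled on a grid.
g = lambda x: x ** 2 - 1.0
xs = np.linspace(-2.0, 2.0, 401)

feasible_max  = np.maximum(g(xs), 0.0) == 0.0   # max(g, 0) = 0
feasible_ineq = g(xs) <= 0.0                    # g <= 0
print(np.array_equal(feasible_max, feasible_ineq))  # True
```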
Anyway, in answer to your original question: no. Your transformation makes things much more difficult numerically. Again, the $\max$ function is nondifferentiable, so it's hard to use fast algorithms on it. In addition, if $g$ were convex, you've just reformulated a nice convex constraint into something that's hard to work with. Now, unless $g$ is affine, it's likely that the solver is going to add a slack variable anyway and reformulate your problem as
$$
\min\limits_{x\in X} \{c(x) : y = g(x), y\leq 0\}
$$
It'll do this since most algorithms that handle inequalities need to figure out how far we can traverse before becoming infeasible. For a nonaffine $g$, this is hard to determine, so we just add a slack variable and then handle the equality constraint. As such, the real question then becomes whether or not we can effectively solve the linear systems involving $g^\prime(x)$, which arise when attacking the KKT conditions with something like a Newton method. Basically, algorithms like SQP. In any case, if you want to speed up the computation, likely the biggest speedup comes from quickly solving these linear systems.
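To make the slack reformulation concrete, here is a small example (with a made-up objective $c(x) = (x-2)^2$ and constraint $g(x) = x^2 - 1$) solved both in the original inequality form and in the slack form $y = g(x),\ y \leq 0$; both land on the same minimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem: min (x - 2)^2  s.t.  x^2 - 1 <= 0  (optimum at x = 1)
c = lambda x: (x[0] - 2.0) ** 2
g = lambda x: x[0] ** 2 - 1.0

# Direct inequality form (SciPy's "ineq" means fun(x) >= 0, hence -g)
direct = minimize(c, [0.0],
                  constraints=[{"type": "ineq", "fun": lambda x: -g(x)}])

# Slack form: variables z = (x, y) with y = g(x) and y <= 0
slack = minimize(lambda z: c(z[:1]), [0.0, -1.0],
                 constraints=[{"type": "eq",   "fun": lambda z: z[1] - g(z[:1])},
                              {"type": "ineq", "fun": lambda z: -z[1]}])

print(direct.x[0], slack.x[0])   # both ≈ 1.0
```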
SQP and interior point methods iteratively approximate the optimization problem by one with a quadratic objective and linear equality constraints: $$\min \left\{ \frac{1}{2} x^TQx + c^Tx : Ax = b \right\}.$$ What is nice about this approximation is that the KKT conditions (necessary and sufficient if $Q$ is positive semidefinite) are just a linear system: $$ \begin{pmatrix}Q & A^T \\ A & O\end{pmatrix}\begin{pmatrix}x \\ \lambda\end{pmatrix}=\begin{pmatrix}-c \\ b\end{pmatrix}. $$ Therefore, the optimal solution to the approximation can be obtained by solving a system of linear equations. A few hundred variables is not a big deal; especially when $A$ is sparse, you can solve such problems with millions of variables.
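As a concrete illustration (with small made-up data for $Q$, $c$, $A$, $b$), the KKT system above can be assembled and solved directly with NumPy:

```python
import numpy as np

# Equality-constrained QP: min 1/2 x^T Q x + c^T x  s.t.  A x = b
# (illustrative data; Q is positive definite, A has full row rank)
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
c = np.array([1.0, -2.0])
A = np.array([[1.0, 1.0]])   # one constraint: x1 + x2 = 1
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]

# Assemble the KKT matrix  [[Q, A^T], [A, 0]]  and right-hand side [-c; b]
K = np.block([[Q, A.T],
              [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])

sol = np.linalg.solve(K, rhs)
x, lam = sol[:n], sol[n:]

# Stationarity (Q x + c + A^T lam = 0) and feasibility (A x = b) both hold
print(np.allclose(Q @ x + c + A.T @ lam, 0))  # True
print(np.allclose(A @ x, b))                  # True
```

For large sparse $A$, one would use a sparse factorization (e.g. `scipy.sparse.linalg.spsolve`) on the same block system instead of a dense solve.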