So, the only difficult thing is establishing the relationship between $\Gamma^i_{ik}$ and the determinant of the metric, but here I'm afraid it's just one of those things which one has to "notice". For now, let's follow our nose. Say we're on a pseudo-Riemannian manifold $(M,g)$ and we consider the Levi-Civita connection. In terms of local coordinates $(x^1,\dots, x^n)$, we have
\begin{align}
\Gamma^{i}_{jk}&=\frac{1}{2}g^{ia}\left(\frac{\partial g_{aj}}{\partial x^k}+\frac{\partial g_{ka}}{\partial x^j}-\frac{\partial g_{jk}}{\partial x^a}\right)
\end{align}
If we set $j=i$ and sum over $i$, then with a little index juggling (relabel the dummy indices $a\leftrightarrow i$ in the third term and use the symmetry of $g^{ia}$ and of $g_{jk}$), the last two terms cancel, and thus
\begin{align}
\Gamma^i_{ik}&=\frac{1}{2}g^{ia}\frac{\partial g_{ai}}{\partial x^k}\\
&=\frac{1}{2|g|}\frac{\partial |g|}{\partial x^k}=\frac{\partial}{\partial x^k}\log \sqrt{|g|}=\frac{1}{\sqrt{|g|}}\frac{\partial \sqrt{|g|}}{\partial x^k}\tag{$*$}
\end{align}
I shall justify the first equality in line $(*)$ shortly. The second equality follows from basic properties of the logarithm (the $\frac{1}{2}$ disappears into the square root), and the third is a simple application of the chain rule. Taking this for granted, we can proceed with the formula for the divergence:
\begin{align}
\text{div}(\mathbf{u})&=\frac{\partial u^i}{\partial x^i}+\Gamma^i_{ik}u^k\\
&=\frac{\partial u^i}{\partial x^i} + \frac{1}{\sqrt{|g|}}\frac{\partial \sqrt{|g|}}{\partial x^k}u^k\\
&=\frac{1}{\sqrt{|g|}}\bigg(\sqrt{|g|}\frac{\partial u^i}{\partial x^i}+
\frac{\partial \sqrt{|g|}}{\partial x^i}u^i\bigg)\\
&=\frac{1}{\sqrt{|g|}}\frac{\partial}{\partial x^i}\left(\sqrt{|g|}\,\,u^i\right),
\end{align}
which completes the proof of the Voss-Weyl formula.
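As a sanity check (my own illustration, not part of the original argument), here is a small SymPy computation verifying the Voss-Weyl formula against the Christoffel-based divergence for the plane in polar coordinates, where $g=\operatorname{diag}(1,r^2)$ and $\Gamma^i_{ik}=\partial_k\log\sqrt{|g|}$; the vector field components are chosen arbitrarily.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
# Metric of the plane in polar coordinates: g = diag(1, r^2)
g = sp.Matrix([[1, 0], [0, r**2]])
sqrt_det = sp.sqrt(g.det())  # = r
coords = [r, th]

# An arbitrary vector field with components (u^r, u^theta)
u = [r*sp.cos(th), sp.sin(th)/r]

# Voss-Weyl: div(u) = (1/sqrt|g|) d/dx^i ( sqrt|g| u^i )
div_vw = sum(sp.diff(sqrt_det*u[i], coords[i]) for i in range(2)) / sqrt_det
div_vw = sp.simplify(div_vw)

# Direct formula: div(u) = du^i/dx^i + Gamma^i_{ik} u^k,
# using the contraction Gamma^i_{ik} = d/dx^k log sqrt|g|
gamma_trace = [sp.diff(sp.log(sqrt_det), c) for c in coords]  # = (1/r, 0)
div_direct = sum(sp.diff(u[i], coords[i]) for i in range(2)) \
           + sum(gamma_trace[k]*u[k] for k in range(2))
div_direct = sp.simplify(div_direct)

assert sp.simplify(div_vw - div_direct) == 0
```

Both routes give $2\cos\theta + \frac{\cos\theta}{r}$ for this particular field, as expected.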
Justifying $(*)$.
Let $G$ be the $n\times n$ matrix $\begin{pmatrix}g_{11}&\cdots & g_{1n}\\
\vdots & \ddots & \vdots\\
g_{n1}& \cdots & g_{nn}\end{pmatrix}$. The first thing we note is that
\begin{align}
g^{ia}\frac{\partial g_{ai}}{\partial x^k}&=\text{trace}(G^{-1}\cdot \frac{\partial G}{\partial x^k}),
\end{align}
where the $\cdot$ refers to matrix multiplication. At this point, one just has to know Jacobi's formula: the directional derivative of the determinant is related to the trace of a matrix. Without knowing this, it is next to impossible to continue. Take a look at the second half of this answer of mine for the precise statement and proof (which is mostly a linear algebra fact).
Explicitly, note that if we fix a point $p\in \Bbb{R}^n$ (strictly speaking we should be fixing a point in the image of the chart map $x$) and let $s\in\Bbb{R}$ be small enough, then
\begin{align}
\det [G(p+se_k)]&=\det\left(G(p)+s\frac{\partial G}{\partial x^k}(p) + \mathcal{O}(s^2)\right)\\
&=\det [G(p)]\cdot\det \left(I + s G(p)^{-1}\cdot \frac{\partial G}{\partial x^k}(p)+ \mathcal{O}(s^2)\right)\\
&= \det[G(p)]\cdot \left(1+ s\cdot \text{trace}\left(G(p)^{-1}\cdot \frac{\partial G}{\partial x^k}(p)\right) + \mathcal{O}(s^2)\right)
\end{align}
By taking the derivative with respect to $s$, and evaluating at $s=0$ on both sides, we obtain the formula
\begin{align}
\frac{\partial (\det G)}{\partial x^k}(p)&=\det[G(p)]\cdot \text{trace}\left(G(p)^{-1}\cdot\frac{\partial G}{\partial x^k}(p)\right).
\end{align}
We can divide both sides by the determinant to obtain
\begin{align}
\text{trace}\left(G(p)^{-1}\cdot\frac{\partial G}{\partial x^k}(p)\right)&=
\frac{1}{\det G(p)}\frac{\partial (\det G)}{\partial x^k}(p)= \frac{1}{|\det G(p)|}
\frac{\partial |\det G|}{\partial x^k}(p),
\end{align}
where, in the last equality, I was able to insert the absolute values everywhere because the smoothness (hence continuity) of the matrix function $G$ means that in a sufficiently small neighborhood of the point $p$, $\det G$ maintains a constant sign $\sigma\in \{-1,1\}$. We are therefore free to multiply and divide by $\sigma$, and also to move it inside the derivative (because it is a constant function), which produces the absolute value signs. Since the point $p$ was arbitrary, we have established (up to small notational differences) the claimed formula
\begin{align}
g^{ia}\frac{\partial g_{ai}}{\partial x^k}=\frac{1}{|g|}\frac{\partial |g|}{\partial x^k}.
\end{align}
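For the skeptical reader, the determinant-trace identity can also be checked numerically. The following NumPy snippet is my own sketch: the matrix curve $G(t)$ is an arbitrary stand-in for $G(p+se_k)$, and we compare a finite-difference derivative of the determinant with $\det G\cdot\operatorname{trace}(G^{-1}G')$.

```python
import numpy as np

# A smooth curve of symmetric invertible matrices G(t),
# standing in for G(p + s e_k); any smooth matrix curve works.
def G(t):
    return np.array([[2.0 + np.sin(t), 0.5*t],
                     [0.5*t,           1.0 + t**2]])

t0, h = 0.3, 1e-6

# Left side: d/dt det G(t) at t0, by central differences
lhs = (np.linalg.det(G(t0 + h)) - np.linalg.det(G(t0 - h))) / (2*h)

# Right side: det G * trace(G^{-1} dG/dt)
dG = (G(t0 + h) - G(t0 - h)) / (2*h)
rhs = np.linalg.det(G(t0)) * np.trace(np.linalg.inv(G(t0)) @ dG)

assert abs(lhs - rhs) < 1e-6
```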
Extra Ramblings.
Btw, if you think closely about what we've just done, you'll also realize the geometric interpretation of the trace. One is often taught that determinants yield the volume spanned by parallelepipeds; so the trace, being a derivative of the determinant, can be regarded as the rate of change of volume of parallelepipeds. In $\Bbb{R}^n$, the divergence at a point $p$ of a vector field $F:U\subset \Bbb{R}^n\to\Bbb{R}^n$ is nothing but the trace of its derivative $DF_p$; i.e., $(\text{div}\,F)(p)=\frac{\partial F^i}{\partial x^i}(p)=\text{trace}(DF_p)$. In this answer, I prove one of the classic formulas for the rate of change of volume of a subset of $\Bbb{R}^n$ under the flow of a vector field; there you'll see that the divergence pops up (this is one possible "geometric" justification for the term "divergence"). Anyway, my point here is that determinants of the metric, the trace, and changes in volume are all very closely related concepts, which is why one shouldn't be too surprised to see terms such as $\sqrt{|g|}$ appearing in the Voss-Weyl formula for the divergence. (In fact, when I first learnt the material, I saw the Voss-Weyl formula much later than the above relationship between volumes, determinants, and the trace.)
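The claim that the Euclidean divergence is the trace of the derivative is easy to verify symbolically; here is a minimal SymPy sketch of my own, with the vector field chosen arbitrarily.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = sp.Matrix([x, y, z])
F = sp.Matrix([x**2*y, sp.sin(y)*z, x + z**3])  # an arbitrary vector field

# Divergence as the sum of partials dF^i/dx^i
div_F = sum(sp.diff(F[i], X[i]) for i in range(3))

# Divergence as the trace of the Jacobian DF
jac = F.jacobian(X)
assert sp.simplify(div_F - jac.trace()) == 0
```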
Best Answer
Thanks to Prof. Pavel Grinfeld for pointing out the flaw in the derivation in the question (see comment here). The key is to distinguish between the normal to $\Gamma$ and the normal to $C$. (This type of distinction is also made between normals in the context of the divergence theorem applied to volumes on page 242 of Prof. Grinfeld's book).
Here is the normal, $\mathbf{n}$, to $\Gamma$.
\begin{equation}
\begin{aligned}
\mathbf{n} &= \frac{t^\alpha\mathbf{S}_\alpha} {\sqrt{S_{\beta\gamma}t^\beta t^\gamma}} \times \nu \\
&= \frac{t^\alpha} {\sqrt{S_{\beta\gamma}t^\beta t^\gamma}} \left(\mathbf{S}_\alpha \times \nu\right) \\
&= \frac{t^\alpha} {\sqrt{S_{\beta\gamma}t^\beta t^\gamma}} \epsilon_{\delta\alpha}\mathbf{S}^\delta \\
&= n_\delta\mathbf{S}^\delta
\end{aligned}
\end{equation}
Thus, \begin{equation} n_\alpha = \frac{\epsilon_{\alpha\delta}t^\delta} {\sqrt{S_{\beta\gamma}t^\beta t^\gamma}} \end{equation}
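As a quick numerical sanity check of this expression for $n_\alpha$ (my own sketch, on an assumed graph surface $z=h(S^1,S^2)$ with $h$ and the coordinate tangent $t^\alpha$ chosen arbitrarily), one can compare the cross-product construction of $\mathbf{n}$ with $n_\delta\mathbf{S}^\delta$, using $\epsilon_{12}=\sqrt{S}$:

```python
import numpy as np

# Assumed example: graph surface z = h(S^1, S^2)
def h(x, y): return x**2 + np.sin(y)
def emb(x, y): return np.array([x, y, h(x, y)])

p, eps = np.array([0.3, 0.7]), 1e-6
# Covariant basis S_1, S_2 by central differences of the embedding
S1 = (emb(p[0]+eps, p[1]) - emb(p[0]-eps, p[1])) / (2*eps)
S2 = (emb(p[0], p[1]+eps) - emb(p[0], p[1]-eps)) / (2*eps)

M = np.array([[S1@S1, S1@S2], [S2@S1, S2@S2]])   # metric S_ab
detS = np.linalg.det(M)
nu = np.cross(S1, S2) / np.sqrt(detS)            # unit surface normal

t = np.array([2.0, -1.0])                        # curve tangent in coordinates
T = t[0]*S1 + t[1]*S2
N = np.sqrt(t @ M @ t)                           # |T| in the induced metric

# Route 1: n = (unit tangent) x nu
n_cross = np.cross(T/N, nu)

# Route 2: n = n_delta S^delta with n_a = eps_{ad} t^d / N, eps_{12} = sqrt(S),
# and contravariant basis S^a = (M^{-1})^{ab} S_b
n_cov = np.sqrt(detS)/N * np.array([t[1], -t[0]])
Minv = np.linalg.inv(M)
Sup = [Minv[0, 0]*S1 + Minv[0, 1]*S2, Minv[1, 0]*S1 + Minv[1, 1]*S2]
n_tensor = n_cov[0]*Sup[0] + n_cov[1]*Sup[1]

assert np.allclose(n_cross, n_tensor, atol=1e-5)
assert np.isclose(n_cross @ T, 0, atol=1e-5)            # orthogonal to the curve
assert np.isclose(np.linalg.norm(n_cross), 1, atol=1e-5)  # unit length
```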
By analogy, the coefficients of the normal, $\bar{\mathbf{n}}$, to $C$, are given by \begin{equation} \bar{n}_\alpha = \frac{\bar{\epsilon}_{\alpha\delta}t^\delta} {\sqrt{\bar{S}_{\beta\gamma}t^\beta t^\gamma}} \end{equation}
Now, $\bar{\epsilon}_{\alpha\delta} = \frac{\epsilon_{\alpha\delta}}{\sqrt{S}}$ and $\bar{S}_{\beta\gamma} = \delta_{\beta\gamma}$, so that
\begin{equation} \bar{n}_\alpha = \frac{1}{\sqrt{S}} \frac{\sqrt{S_{\beta\gamma}t^\beta t^\gamma}} {\sqrt{\delta_{\mu\upsilon}t^\mu t^\upsilon}} n_\alpha \end{equation}
Now, the derivation in the question looks like this:
\begin{equation}
\begin{aligned}
\int\limits_\Omega \nabla_\alpha v^\alpha \,\text{d}\Omega &= \int\limits_A \frac{1}{\sqrt{S}}\frac{\partial}{\partial S^\alpha} \left(v^\alpha\sqrt{S}\right) \sqrt{S}\;\text{d}A \quad \text{by the Voss-Weyl formula} \\
&= \int\limits_A \frac{\partial}{\partial S^\alpha} \left(v^\alpha\sqrt{S}\right) \text{d}A \\
&= \oint\limits_C v^\alpha \bar{n}_\alpha \sqrt{S}\; \text{d}C \quad \text{by the divergence theorem in $\mathbb{R}^2$} \\
&= \oint\limits_C v^\alpha n_\alpha \frac{\sqrt{S_{\beta\gamma}t^\beta t^\gamma}} {\sqrt{\delta_{\mu\upsilon}t^\mu t^\upsilon}}\; \text{d}C \\
&= \oint\limits_\Gamma v^\alpha n_\alpha \,\text{d}\Gamma
\end{aligned}
\end{equation}
and all is well!
That, from my perspective, resolves the conflict alluded to in my question.
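The coordinate forms of the two sides of this identity can also be compared numerically. The following is my own sketch, not part of the original argument: an assumed graph surface over the unit square, with $h$, $v^1$, $v^2$ chosen arbitrarily; the interior side uses $\int_A \partial_\alpha(v^\alpha\sqrt{S})\,\text{d}A$ and the boundary side uses $v^\alpha n_\alpha\,\text{d}\Gamma = \sqrt{S}\,(v^1\,\text{d}S^2 - v^2\,\text{d}S^1)$, accumulated counterclockwise.

```python
import numpy as np

# Assumed example: graph surface z = h(S^1, S^2) over the unit square
h  = lambda x, y: x**2 * y
hx = lambda x, y: 2*x*y               # dh/dS^1
hy = lambda x, y: x**2                # dh/dS^2
sqrtS = lambda x, y: np.sqrt(1 + hx(x, y)**2 + hy(x, y)**2)  # sqrt(det S_ab)

# Arbitrary surface vector field components in coordinates
v1 = lambda x, y: np.sin(x) + y
v2 = lambda x, y: x*y
f1 = lambda x, y: v1(x, y)*sqrtS(x, y)   # v^1 sqrt(S)
f2 = lambda x, y: v2(x, y)*sqrtS(x, y)   # v^2 sqrt(S)

N = 1000
s = (np.arange(N) + 0.5) / N             # midpoint nodes on [0, 1]
X, Y = np.meshgrid(s, s)

# Interior side: integral over A of d_alpha(v^alpha sqrt(S)),
# by central differences and the midpoint rule (|A| = 1)
d = 1e-5
div_term = (f1(X+d, Y) - f1(X-d, Y))/(2*d) + (f2(X, Y+d) - f2(X, Y-d))/(2*d)
lhs = div_term.mean()

# Boundary side: counterclockwise sum of sqrt(S)(v^1 dS^2 - v^2 dS^1)
# over the four edges of the unit square
rhs = (f1(np.ones(N), s).mean() - f1(np.zeros(N), s).mean()
       + f2(s, np.ones(N)).mean() - f2(s, np.zeros(N)).mean())

assert abs(lhs - rhs) < 1e-4
```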
However, I also want to share another calculation, which does not formally use tensor notation, and which guided me in the above derivation (I seem to have to switch between the two ways of thinking, each guiding the other). The following is a demonstration of the divergence theorem on the surface $\Omega$, assuming the divergence theorem in the coordinate space.
Let $\hat{\Xi}$ be the coordinate map, $\tilde{S}$ be a parametrization of $C$, and $\tilde{\Xi}$, the corresponding parametrization of $\Gamma$ as shown in the figure.
\begin{equation}
\begin{aligned}
& \oint\limits_\Gamma \mathbf{v}\cdot\mathbf{n}\,\text{d}\Gamma \\
= & \int_{t_0}^{t_1} \mathbf{v}\cdot \left(\frac{\tilde{\Xi}'(t)} {\left\Vert\tilde{\Xi}'(t)\right\Vert} \times\nu \right) \left\Vert\tilde{\Xi}'(t)\right\Vert \text{d}t \\
= & \int_{t_0}^{t_1} \mathbf{v}\cdot \left(\tilde{\Xi}'(t) \times\nu \right) \text{d}t \\
= & \int_{t_0}^{t_1} \mathbf{v}\cdot \left( \left[\begin{array}{c|c} \frac{\partial\hat{\Xi}}{\partial S^1} & \frac{\partial\hat{\Xi}}{\partial S^2} \end{array}\right] \tilde{S}'(t) \times\nu \right) \text{d}t \\
= & \int_{t_0}^{t_1} \mathbf{v}\cdot \left( \left[\begin{array}{c|c} \frac{\partial\hat{\Xi}}{\partial S^1} \times \nu & \frac{\partial\hat{\Xi}}{\partial S^2} \times \nu \end{array}\right] \tilde{S}'(t) \right) \text{d}t \\
= & \int_{t_0}^{t_1} \left( \left[\begin{array}{c|c} \frac{\partial\hat{\Xi}}{\partial S^1} & \frac{\partial\hat{\Xi}}{\partial S^2} \end{array}\right] \begin{pmatrix} v^1 \\ v^2 \end{pmatrix} \right) \cdot \left( \left[\begin{array}{c|c} \frac{\partial\hat{\Xi}}{\partial S^1} \times \nu & \frac{\partial\hat{\Xi}}{\partial S^2} \times \nu \end{array}\right] \tilde{S}'(t) \right) \text{d}t \\
= & \int_{t_0}^{t_1} \begin{pmatrix} v^1 \\ v^2 \end{pmatrix}^\top \left[\begin{array}{c|c} \frac{\partial\hat{\Xi}}{\partial S^1} & \frac{\partial\hat{\Xi}}{\partial S^2} \end{array}\right]^\top \left[\begin{array}{c|c} \frac{\partial\hat{\Xi}}{\partial S^1} \times \nu & \frac{\partial\hat{\Xi}}{\partial S^2} \times \nu \end{array}\right] \tilde{S}'(t)\, \text{d}t \\
& \quad \text{the integrand here can be recognized as}\ (v^1\mathbf{S}_1 + v^2\mathbf{S}_2) \cdot \left[-\mathbf{S}^2 \; \mathbf{S}^1\right] \tilde{S}'(t)\, S \\
= & \int_{t_0}^{t_1} \begin{pmatrix} v^1 \\ v^2 \end{pmatrix}^\top \left[\begin{array}{cc} 0 & S \\ -S & 0 \end{array}\right] \tilde{S}'(t)\, \text{d}t \\
= & \int_{t_0}^{t_1} \begin{pmatrix} v^1 \\ v^2 \end{pmatrix}^\top \begin{pmatrix} \tilde{S}^2{}'(t) \\ -\tilde{S}^1{}'(t) \end{pmatrix} S\, \text{d}t \\
= & \int_{t_0}^{t_1} \begin{pmatrix} v^1 S \\ v^2 S \end{pmatrix}^\top \frac{ \begin{pmatrix} \tilde{S}^2{}'(t) \\ -\tilde{S}^1{}'(t) \end{pmatrix} }{\left\Vert \tilde{S}'(t) \right\Vert} \left\Vert \tilde{S}'(t) \right\Vert \text{d}t \\
= & \oint\limits_{C} \begin{pmatrix} v^1 S \\ v^2 S \end{pmatrix}^\top \bar{\mathbf{n}}\, \text{d}C \\
= & \int\limits_{A}\left( \frac{\partial}{\partial S^1}\left(v^1 S\right) + \frac{\partial}{\partial S^2}\left(v^2 S\right) \right)\text{d}A \quad \text{by the divergence theorem in}\ \mathbb{R}^2 \\
= & \int\limits_{A}\frac{1}{S} \left( \frac{\partial}{\partial S^1}\left(v^1 S\right) + \frac{\partial}{\partial S^2}\left(v^2 S\right) \right) S\, \text{d}A \\
= & \int\limits_{\Omega}\frac{1}{S} \left( \frac{\partial}{\partial S^1}\left(v^1 S\right) + \frac{\partial}{\partial S^2}\left(v^2 S\right) \right) \text{d}\Omega
\end{aligned}
\end{equation}
which is the desired result.