Let's formulate the definition of curl slightly more precisely in the form of a definition/theorem. I'll also not use boldface objects, simply for ease of typing
Definition/Theorem.
Let $A\subset \Bbb{R}^3$ be open and $F: A \to \Bbb{R}^3$ be $C^1$. Then there is a unique continuous function $H:A \to \Bbb{R}^3$ with the following property: for every $p\in A$, every $\epsilon > 0$, and every "nice" surface $S\subset A$ containing $p$, there is an open neighbourhood $U$ of $p$ in $S$ such that for every "nice" oriented surface $M$ with $p\in M\subset U$, with unit normal vector field $n(\cdot)$ (simply the restriction to $M$ of the chosen unit normal of $S$), boundary $\partial M$, and unit tangent vector field $\tau(\cdot)$ on $\partial M$, we have:
\begin{align}
\left|\dfrac{1}{|M|}\int_{\partial M}\langle F, \tau\rangle\, dl - \langle H(p), n(p)\rangle\right| < \epsilon\tag{1}
\end{align}
(this is the more precise meaning of the limit you're talking about)
In this case, because $H$ is unique, we can give it the name $\text{curl}(F)$. In fact, we can show that
\begin{align}
H =
\left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right) \boldsymbol{\hat\imath} + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x} \right) \boldsymbol{\hat\jmath} + \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \right) \boldsymbol{\hat k} \tag{2}
\end{align}
In the above paragraph, "nice oriented surface" means nice enough that Stokes' theorem can be applied; for example, a smooth two-dimensional, oriented manifold-with-boundary (or however much you want to weaken the hypotheses... each book presents the theorem with varying levels of generality... just as long as Stokes' theorem can be applied).
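To make the limiting statement $(1)$ concrete, here is a small numerical sketch in Python (the field $F$, the point $p$, and the step sizes are arbitrary illustrative choices, not part of the theorem): it computes the normalized circulation around the axis-aligned square $M^{\delta}$ used in the uniqueness proof below, and compares it against the first component of formula $(2)$.

```python
import numpy as np

# Arbitrary smooth test field F : R^3 -> R^3 (illustrative choice only).
def F(x, y, z):
    return np.array([y * z, x * np.sin(z), np.exp(x) + y**2])

def circulation_quotient(p, d, n=4000):
    """(1/|M^delta|) * line integral of <F, tau> over the 4 edges of
    M^delta = {p1} x [p2-d, p2+d] x [p3-d, p3+d], oriented with normal e1."""
    p1, p2, p3 = p
    t = -d + (np.arange(n) + 0.5) * (2 * d / n)   # midpoint-rule nodes in [-d, d]
    h = 2 * d / n
    total = 0.0
    for s in t:
        total += h * (F(p1, p2 + s, p3 - d)[1]     # bottom edge, +y direction
                      - F(p1, p2 + s, p3 + d)[1]   # top edge, -y direction
                      + F(p1, p2 + d, p3 + s)[2]   # right edge, +z direction
                      - F(p1, p2 - d, p3 + s)[2])  # left edge, -z direction
    return total / (4 * d**2)

p = (0.3, -0.2, 0.5)
# First component of (2): dF3/dy - dF2/dz at p, via central differences.
eps = 1e-6
expected = ((F(p[0], p[1] + eps, p[2])[2] - F(p[0], p[1] - eps, p[2])[2])
            - (F(p[0], p[1], p[2] + eps)[1] - F(p[0], p[1], p[2] - eps)[1])) / (2 * eps)

for d in [0.1, 0.01, 0.001]:
    print(d, circulation_quotient(p, d), expected)
```

As $\delta$ shrinks, the quotient should approach $\partial F_3/\partial y - \partial F_2/\partial z$ at $p$, in line with $(1)$ and $(2)$.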
Note that for the above definition of curl to make sense, we first have to show the existence and uniqueness of such a vector field $H$. We start with uniqueness: we assume such an $H$ exists, and then prove that its components are determined by formula $(2)$; this will complete the proof of uniqueness.
Proof of Uniqueness
I'll carry out the computation in detail for proving $H_1 = \dfrac{\partial F_3}{\partial y} - \dfrac{\partial F_2}{\partial z}$ (writing $F = (F_1,F_2,F_3)$ and $H = (H_1,H_2,H_3)$ for the components denoted $F_x,F_y,F_z$ and $H_x,H_y,H_z$ in $(2)$), and leave the other two to you (it's simply a matter of renaming $x,y,z$). We prove this equality pointwise, of course. So, fix a point $p \in A$. For any $\delta > 0$ such that the closed cube $C_{p,\delta} = p + [-\delta,\delta]^3$ (the closed cube centered at $p$ of side length $2\delta$) lies entirely inside $A$ (note that since $A$ is open, there are infinitely many such $\delta>0$), we define $M^{\delta} := \{p_1\}\times [p_2-\delta, p_2 + \delta]\times [p_3-\delta, p_3 + \delta]$. This is a piece of a plane, which we orient so that it has the constant unit normal vector field $n = e_1 \equiv \boldsymbol{i}$. Now we calculate: $\partial M^{\delta}$ consists of $4$ segments, and the unit tangent vector along each segment is constant, so (being careful with the signs) we get
\begin{align}
\dfrac{1}{|M^{\delta}|} \int_{\partial M^{\delta}}\langle F, \tau \rangle\, dl &= \dfrac{1}{4\delta^2}
\bigg[\int_{-\delta}^{\delta} F_2(p_1, p_2 + y, p_3-\delta) - F_2(p_1, p_2 + y, p_3+\delta) \, dy\bigg]\\
&+\dfrac{1}{4\delta^2}\bigg[ \int_{-\delta}^{\delta} F_3(p_1, p_2+\delta, p_3+z) - F_3(p_1, p_2-\delta, p_3+z)\, dz\bigg]
\end{align}
For each bracketed integral, we apply the mean-value theorem for integrals to the difference of the two integrands (which is legitimate, since everything is continuous), obtaining some $\eta \in [p_2-\delta, p_2+\delta]$ and some $\zeta\in [p_3-\delta, p_3+\delta]$ such that
\begin{align}
\dfrac{1}{|M^{\delta}|} \int_{\partial M^{\delta}}\langle F, \tau \rangle\, dl &=
\dfrac{1}{4\delta^2}\bigg[2\delta \cdot F_2(p_1, \eta, p_3-\delta) - 2\delta \cdot F_2(p_1, \eta, p_3+\delta)\bigg] \\
&+ \dfrac{1}{4\delta^2}\bigg[2\delta \cdot F_3(p_1, p_2+\delta, \zeta) - 2\delta \cdot F_3(p_1, p_2-\delta, \zeta)\bigg] \\\\
&=\dfrac{F_3(p_1, p_2+\delta, \zeta) - F_3(p_1, p_2-\delta, \zeta)}{2\delta} -\dfrac{F_2(p_1, \eta, p_3+\delta) - F_2(p_1, \eta, p_3-\delta)}{2\delta} \\
&= \dfrac{\partial F_3}{\partial y}(p_1, \alpha, \zeta) - \dfrac{\partial F_2}{\partial z}(p_1, \eta, \beta),
\end{align}
for some $\alpha\in [p_2-\delta, p_2+\delta], \beta\in [p_3-\delta, p_3+\delta]$, using the mean-value theorem for derivatives (which can certainly be applied since we assumed $F$ is $C^1$).
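Before moving on, here is a symbolic sanity check of the computation so far (sympy; the components $F_2, F_3$ are an arbitrary concrete choice, purely for illustration): it reproduces the two bracketed boundary integrals and confirms that the quotient tends to $\dfrac{\partial F_3}{\partial y} - \dfrac{\partial F_2}{\partial z}$ at $p$ as $\delta \to 0$.

```python
import sympy as sp

x, y, z, s = sp.symbols('x y z s')
p1, p2, p3 = sp.symbols('p1 p2 p3')
d = sp.Symbol('delta', positive=True)

# Arbitrary concrete smooth components (illustrative choice only).
F2 = x * y**2 + z**3
F3 = sp.sin(x) + y * z

# The two bracketed boundary integrals for the square M^delta, as in the text.
I = (sp.integrate(F2.subs({x: p1, y: p2 + s, z: p3 - d})
                  - F2.subs({x: p1, y: p2 + s, z: p3 + d}), (s, -d, d))
     + sp.integrate(F3.subs({x: p1, y: p2 + d, z: p3 + s})
                    - F3.subs({x: p1, y: p2 - d, z: p3 + s}), (s, -d, d)))

quotient = sp.simplify(I / (4 * d**2))
expected = (sp.diff(F3, y) - sp.diff(F2, z)).subs({x: p1, y: p2, z: p3})
print(sp.simplify(sp.limit(quotient, d, 0) - expected))   # prints 0
```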
Quick summary: what we have shown so far is that for every $p\in A$ and every $\delta>0$ such that the cube $C_{p,\delta}$ lies inside $A$, if we define $M^{\delta}$ as above (the planar square centered at $p$ with normal pointing in the $e_1$ direction), then there exist points $a_{p,\delta},b_{p,\delta} \in M^{\delta} \subset C_{p,\delta}$ inside the surface (in particular, inside the cube) such that
\begin{align}
\dfrac{1}{|M^{\delta}|}\int_{\partial M^{\delta}}\langle F, \tau\rangle \, dl &=
\dfrac{\partial F_3}{\partial y}(a_{p,\delta}) - \dfrac{\partial F_2}{\partial z}(b_{p,\delta})
\end{align}
From here, it is a simple matter of using the continuity of the partial derivatives. Here's the full $\epsilon,\delta$ argument to finish it off: let $p\in A$ and $\epsilon> 0$ be arbitrary. Applying hypothesis $(1)$ with $S$ taken to be the plane through $p$ with normal $e_1$ (intersected with $A$), there is an open neighbourhood $U$ of $p$ in $S$ for which $(1)$ holds for every admissible $M$. Now, for this given $\epsilon > 0$, let's choose $\delta > 0$ small enough so that
- the cube $C_{p,\delta}$ lies inside $U$
- the $\delta$ "works" for the continuity of $\dfrac{\partial F_3}{\partial y}$ and $\dfrac{\partial F_2}{\partial z}$ at the point $p$
(so really we have to take a minimum of several $\delta$'s). Then, we choose the oriented planar square $M^{\delta}$ as defined above (it lies inside $U$ by construction, because of how small $\delta$ is), and estimate:
\begin{align}
\left|\left(\dfrac{\partial F_3}{\partial y}(p) - \dfrac{\partial F_2}{\partial z}(p)\right) - H_1(p)\right| &= \left|\left(\dfrac{\partial F_3}{\partial y}(p) - \dfrac{\partial F_2}{\partial z}(p)\right) - \langle H(p), n(p)\rangle\right| \\\\
&\leq \left|\dfrac{\partial F_3}{\partial y}(p) - \dfrac{\partial F_3}{\partial y}(a_{p,\delta})\right|
+ \left|-\dfrac{\partial F_2}{\partial z}(p) + \dfrac{\partial F_2}{\partial z}(b_{p,\delta})\right|\\
&+ \left|\left(\dfrac{\partial F_3}{\partial y}(a_{p,\delta}) - \dfrac{\partial F_2}{\partial z}(b_{p,\delta})\right) - \dfrac{1}{|M^{\delta}|}\int_{\partial M^{\delta}}\langle F, \tau\rangle\, dl \right| \\
&+ \left|\dfrac{1}{|M^{\delta}|}\int_{\partial M^{\delta}}\langle F, \tau\rangle\, dl - \langle H(p), n(p) \rangle \right| \\\\
&\leq 4\epsilon
\end{align}
(each absolute value is $\leq \epsilon$, by everything said above and by the choice of $\delta$; the third one is in fact exactly $0$, by the quick summary). Since the point $p$ and $\epsilon > 0$ are arbitrary, the inequality above shows that
\begin{align}
H_1 &= \dfrac{\partial F_3}{\partial y} - \dfrac{\partial F_2}{\partial z}
\end{align}
As a recap of the proof idea: choose a small planar square $M^{\delta}$ with unit normal pointing along $e_1$; it is the flatness of the plane (inherently adapted to cartesian coordinates), along with the ease of parametrizing the boundary, that makes the resulting line integral easy to calculate. Then simply calculate everything and use the mean-value theorems for derivatives and integrals (this is one way to fill in the gaps in the arguments you typically see in physics texts, which say "let's keep things up to first order only" and use $\approx$ everywhere); finally, we finish it off with a standard $\epsilon,\delta$ continuity argument.
One remark: for this argument to work, I've had to assume $F$ is $C^1$, so that I can apply the two mean-value theorems and then finish with a continuity argument. I'm not sure whether the proof can be strengthened so that we only have to assume $F$ is differentiable (rather than $C^1$).
Proof of Existence
Now we show the existence of such a vector field $H$; for this, we show that $\text{curl}(F)$, defined by equation $(2)$, satisfies condition $(1)$. As I mentioned in the comments, I'm not sure how to do this without appealing to Stokes' theorem; with Stokes' theorem, it becomes quite simple.
Let $p\in A$ and $\epsilon > 0$ be arbitrary, and let $S\subset A$ be any "nice" surface containing $p$, with unit normal field $n$. Since $\langle\text{curl}(F), n\rangle$ is a continuous function on $S$, there is an open neighbourhood $U$ of $p$ in $S$ such that for all $q\in U$,
\begin{align}
\left|\langle \text{curl}F(q), n(q)\rangle - \langle \text{curl}F(p), n(p)\rangle\right| & \leq \epsilon
\end{align}
Now, for any "nice surface" $M\subset U$ (with unit normal being the restriction of the one already on $S$), we have by Stokes theorem:
\begin{align}
\left|\dfrac{1}{|M|}\int_{\partial M}\langle F,\tau\rangle \, dl - \langle \text{curl} F(p), n(p) \rangle\right|
&= \dfrac{1}{|M|}\left|\int_M \langle \text{curl}F, n\rangle \, dA - \int_M \langle \text{curl} F(p), n(p) \rangle\, dA\right|
\\
&\leq \dfrac{1}{|M|}\int_M \left|\langle\text{curl }F, n\rangle -
\langle\text{curl }F(p), n(p)\rangle \right| \, dA \\
& \leq \dfrac{1}{|M|} \epsilon |M| \\
&= \epsilon.
\end{align}
This completes the proof of existence.
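For what it's worth, here is a numerical sketch of the existence direction as well (the field, the point, and the tilted normal are arbitrary illustrative choices): the normalized circulation around a small disk with an arbitrarily tilted unit normal $n$ approaches $\langle \text{curl}F(p), n\rangle$, exactly as $(1)$ predicts.

```python
import numpy as np

def F(q):                      # same illustrative field as in the earlier sketch
    x, y, z = q
    return np.array([y * z, x * np.sin(z), np.exp(x) + y**2])

def curl_F(q, eps=1e-6):
    # Central-difference curl, i.e. formula (2) evaluated numerically.
    def dFi_dxj(i, j):
        e = np.zeros(3); e[j] = eps
        return (F(q + e)[i] - F(q - e)[i]) / (2 * eps)
    return np.array([dFi_dxj(2, 1) - dFi_dxj(1, 2),
                     dFi_dxj(0, 2) - dFi_dxj(2, 0),
                     dFi_dxj(1, 0) - dFi_dxj(0, 1)])

p = np.array([0.3, -0.2, 0.5])
n = np.array([1.0, 2.0, -1.0]); n /= np.linalg.norm(n)   # arbitrary unit normal

# Orthonormal frame (u, v, n) with u x v = n, so the circle below is
# oriented consistently with n via the right-hand rule.
u = np.cross(n, np.array([0.0, 0.0, 1.0])); u /= np.linalg.norm(u)
v = np.cross(n, u)

def circulation_quotient(r, m=20000):
    t = 2 * np.pi * (np.arange(m) + 0.5) / m
    pts = p + r * (np.outer(np.cos(t), u) + np.outer(np.sin(t), v))
    tan = r * (np.outer(-np.sin(t), u) + np.outer(np.cos(t), v))
    Fvals = np.array([F(q) for q in pts])
    return (Fvals * tan).sum() * (2 * np.pi / m) / (np.pi * r**2)

for r in [0.1, 0.01]:
    print(r, circulation_quotient(r), curl_F(p) @ n)
```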
Ok, it turns out the other post I linked is crucial to simplifying my problem. The helpful fact from that post is that if we regard $g$ as a matrix, then its eigenvectors form a basis that is simultaneously $e$-orthogonal and $g$-orthogonal. Moreover, we can normalize those eigenvectors with respect to $e$ so that the change from the standard basis to the eigenbasis of $g$ is effected by an orthogonal matrix. Once we write everything in the new basis/coordinates, both $e$ and $g$ are diagonalized, and in that situation things are a lot simpler.
Note: Any change of basis is meant to be considered in the passive sense. Given bases $\mathcal{E}$ and $\mathcal{P}$, it will be helpful to keep in mind that $[g]_{\mathcal{P}} = P^{T}[g]_{\mathcal{E}}P$ where $P$ is the change of basis matrix such that $P[v]_{\mathcal{P}} = [v]_{\mathcal{E}}$. I will keep all of this implicit, so hopefully this won't be confusing to the reader.
2D Case
Let's first look at the 2D case to get an understanding of what I am talking about.
To start things off, let $\mathcal{E} = (\vec{e}_{1}, \vec{e}_{2})$ be the standard basis, and let
$$ [e] = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\qquad\text{ and }\qquad
[g] = \begin{pmatrix} \alpha & \beta \\ \beta & \gamma \end{pmatrix} $$
be the matrix representations of inner-products $e$ and $g$ with respect to $\mathcal{E}$. By the spectral theorem for real symmetric matrices, there exist eigenvectors
$$ \vec{p}_{1} = \begin{pmatrix} p_{11} \\ p_{21} \end{pmatrix} \qquad\text{ and }\qquad
\vec{p}_{2} = \begin{pmatrix} p_{12} \\ p_{22} \end{pmatrix} $$
of $[g]$ that are $e$-orthogonal and $g$-orthogonal. Rescaling if necessary, we may assume $\vec{p}_{1}$ and $\vec{p}_{2}$ are $e$-normalized. Then $P = (\vec{p}_{1} \;\; \vec{p}_{2})$ is an orthogonal matrix that diagonalizes $[g]$.
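Here is a quick numerical illustration (numpy; the entries of $[g]$ are an arbitrary positive-definite choice): `eigh` returns $e$-orthonormal eigenvectors as the columns of $P$, and the congruence $P^{T}[g]P$ from the note above comes out diagonal.

```python
import numpy as np

# Arbitrary positive-definite [g] in the standard basis E.
g = np.array([[2.0, 0.7],
              [0.7, 1.0]])

eigvals, P = np.linalg.eigh(g)   # columns of P: e-normalized eigenvectors of [g]

print(np.allclose(P.T @ P, np.eye(2)))   # True: P is orthogonal, so [e] stays I
print(np.round(P.T @ g @ P, 12))         # diagonal, with eigvals on the diagonal
```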
Transforming to the basis $\mathcal{P} = (\vec{p}_{1}, \vec{p}_{2})$ and rewriting everything with respect to it, we find ourselves in the $\beta = 0$ case from my original post. In particular, we have
$$ [e] = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\qquad\text{ and }\qquad
[g] = \begin{pmatrix} \alpha & 0\\ 0 & \gamma \end{pmatrix} $$
where $\alpha, \gamma$ are not necessarily the same $\alpha, \gamma$ as above.
Let $V:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$ be a smooth vector field such that
$$ \nabla\times_{e} V = 0 \quad\text{ and }\quad \nabla\times_{g} V = 0. $$
This gives us
\begin{align*}
\partial_{1}V_{2} = \partial_{2}V_{1} \qquad\text{ and }\qquad \gamma\partial_{1}V_{2} = \alpha\partial_{2}V_{1}.
\end{align*}
Now, assuming $g$ is not a scalar multiple of $e$, it must be the case that $\alpha\ne \gamma$; also, both $\alpha$ and $\gamma$ are nonzero (indeed positive). Substituting the first equation into the second gives $(\gamma - \alpha)\partial_{2}V_{1} = 0$, so $\partial_{2}V_{1} = 0$, and likewise $\partial_{1}V_{2} = 0$. Hence $V_{1}(x, y) = f_{1}(x)$ and $V_{2}(x, y) = f_{2}(y)$ for some functions $f_{1}, f_{2}:\mathbb{R}\rightarrow\mathbb{R}$.
Hence $V$ is of the form
$$ V(x, y) = \begin{pmatrix} f_{1}(x) \\ f_{2}(y) \end{pmatrix}. $$
Converting back to a general basis, we find
$$ V(\vec{x}) = f_{1}(\vec{x}\cdot\vec{p}_{1})\vec{p}_{1} + f_{2}(\vec{x}\cdot\vec{p}_{2})\vec{p}_{2}. $$
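As a sanity check on this formula, here is a sketch (numpy; $f_1$, $f_2$, and $[g]$ are arbitrary illustrative choices) verifying by finite differences that the reconstructed $V$ is curl-free in the ordinary $e$-sense. I only check the $e$-condition here, since checking the $g$-curl in the original basis would require fixing a coordinate expression for $\nabla\times_{g}$ there.

```python
import numpy as np

g = np.array([[2.0, 0.7],
              [0.7, 1.0]])
_, P = np.linalg.eigh(g)
p1, p2 = P[:, 0], P[:, 1]

f1, f2 = np.tanh, np.sin            # arbitrary smooth one-variable functions

def V(x):
    return f1(x @ p1) * p1 + f2(x @ p2) * p2

x = np.array([0.4, -1.1]); h = 1e-6
e1, e2 = np.eye(2)
dV2_dx1 = (V(x + h * e1)[1] - V(x - h * e1)[1]) / (2 * h)
dV1_dx2 = (V(x + h * e2)[0] - V(x - h * e2)[0]) / (2 * h)
print(dV2_dx1 - dV1_dx2)            # ~ 0: the e-curl of V vanishes
```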
3D Case Involving Three Distinct Eigenvalues
Now we apply our method to the 3D case. Since $g$ is not a scalar multiple of $e$, its eigenvalues cannot all be the same (if all three were equal, $g$ would be a scalar multiple of $e$). Unfortunately, there is still the possibility that two of the three eigenvalues coincide, which introduces some complexity. To keep things simple, we first deal with the case where all three eigenvalues are distinct.
Let $e$ be the Euclidean inner-product and let $g$ be a real inner-product whose matrix representation has three distinct eigenvalues. By the same reasoning as above (appealing to the spectral theorem where necessary), there is a basis in which the matrix representations of the inner-products are
$$ [e] = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad\text{ and }\quad [g] = \begin{pmatrix} \alpha & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & \gamma \end{pmatrix}. $$
By hypothesis, $\alpha, \beta, \gamma$ are distinct.
Now if $V:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}$ is a smooth vector field such that
$$ \nabla\times_{e} V = 0 \quad\text{ and }\quad \nabla\times_{g} V = 0, $$
then
\begin{align*}
\partial_{2}V_{3} - \partial_{3}V_{2} &= 0, \quad
\partial_{3}V_{1} - \partial_{1}V_{3} = 0, \quad
\partial_{1}V_{2} - \partial_{2}V_{1} = 0, \\[1.2ex]
\gamma\partial_{2}V_{3} - \beta\partial_{3}V_{2} &= 0, \quad
\alpha\partial_{3}V_{1} - \gamma\partial_{1}V_{3} = 0, \quad
\beta\partial_{1}V_{2} - \alpha\partial_{2}V_{1} = 0.
\end{align*}
Remember that we are considering the case where $\alpha, \beta, \gamma$ are distinct (and each positive, by positive-definiteness of $g$). Pairing each equation with its weighted counterpart (for instance, $\partial_{2}V_{3} = \partial_{3}V_{2}$ together with $\gamma\partial_{2}V_{3} = \beta\partial_{3}V_{2}$ gives $(\gamma - \beta)\partial_{3}V_{2} = 0$), the above equations easily give us
\begin{align*}
\partial_{2}V_{3} = 0, \quad \partial_{3}V_{2} = 0, \quad \partial_{3}V_{1} = 0, \quad \partial_{1}V_{3} = 0, \quad \partial_{1}V_{2} = 0, \quad \partial_{2}V_{1} = 0.
\end{align*}
Thus, one can show that
\begin{align*}
V_{1}(x, y, z) = f_{1}(x), \quad V_{2}(x, y, z) = f_{2}(y), \quad V_{3}(x, y, z) = f_{3}(z)
\end{align*}
for some functions $f_{1}, f_{2}, f_{3}:\mathbb{R}\rightarrow\mathbb{R}$, and $V$ is of the form
$$ V(x, y, z) = \begin{pmatrix} f_{1}(x) \\ f_{2}(y) \\ f_{3}(z) \end{pmatrix}. $$
Converting back to a general basis, we find
$$ V(\vec{x}) = f_{1}(\vec{x}\cdot\vec{p}_{1})\vec{p}_{1} + f_{2}(\vec{x}\cdot\vec{p}_{2})\vec{p}_{2} + f_{3}(\vec{x}\cdot\vec{p}_{3})\vec{p}_{3}. $$
Not only does my conjecture hold here, but this outcome is slightly stronger than what I even conjectured!
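The same kind of finite-difference sketch works in 3D (again, $[g]$ and the $f_i$ are arbitrary illustrative choices; only the $e$-curl is checked, for the same reason as in the 2D case):

```python
import numpy as np

# Arbitrary positive-definite [g] with (generically) distinct eigenvalues.
g = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 0.5]])
_, P = np.linalg.eigh(g)              # columns of P: e-orthonormal eigenbasis

fs = [np.tanh, np.sin, np.arctan]     # arbitrary smooth one-variable functions

def V(x):
    return sum(f(x @ P[:, i]) * P[:, i] for i, f in enumerate(fs))

def curl(Vfun, x, h=1e-6):
    J = np.empty((3, 3))              # J[i, j] = dV_i/dx_j, central differences
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (Vfun(x + e) - Vfun(x - e)) / (2 * h)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

print(curl(V, np.array([0.4, -1.1, 0.7])))   # ~ (0, 0, 0)
```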
3D Case Involving Two Equal Eigenvalues
As before, pass to the basis in which the matrix representations of the inner-products are
$$ [e] = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad\text{ and }\quad [g] = \begin{pmatrix} \alpha & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & \gamma \end{pmatrix}. $$
Let us assume $\alpha = \beta\ne \gamma$. The other cases are similar.
If $V:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}$ is a smooth vector field such that
$$ \nabla\times_{e} V = 0 \quad\text{ and }\quad \nabla\times_{g} V = 0, $$
then
\begin{align*}
\partial_{2}V_{3} - \partial_{3}V_{2} &= 0, \quad
\partial_{3}V_{1} - \partial_{1}V_{3} = 0, \quad
\partial_{1}V_{2} - \partial_{2}V_{1} = 0, \\[1.2ex]
\gamma\partial_{2}V_{3} - \beta\partial_{3}V_{2} &= 0, \quad
\alpha\partial_{3}V_{1} - \gamma\partial_{1}V_{3} = 0, \quad
\beta\partial_{1}V_{2} - \alpha\partial_{2}V_{1} = 0.
\end{align*}
This time we can't quite draw as many conclusions as before, because $\alpha = \beta$.
Nonetheless, one can show
\begin{align*}
\partial_{2}V_{3} = 0, \quad \partial_{3}V_{2} = 0, \quad \partial_{3}V_{1} = 0, \quad \partial_{1}V_{3} = 0, \quad \partial_{1}V_{2} = \partial_{2}V_{1},
\end{align*}
and find that $V_{1}(x, y, z) = f_{1}(x, y)$, $V_{2}(x, y, z) = f_{2}(x, y)$, and $V_{3}(x, y, z) = f_{3}(z)$ for some functions $f_{1}, f_{2}:\mathbb{R}^{2}\rightarrow\mathbb{R}$ (satisfying $\partial_{1}f_{2} = \partial_{2}f_{1}$, from the last equation above) and $f_{3}:\mathbb{R}\rightarrow\mathbb{R}$. Hence $V$ is of the form
$$ V(x, y, z) = \begin{pmatrix} f_{1}(x, y) \\ f_{2}(x, y) \\ f_{3}(z) \end{pmatrix}. $$
Converting back to a general basis, we find
$$ V(\vec{x}) = f_{1}(\text{proj }\vec{x})\vec{p}_{1} + f_{2}(\text{proj }\vec{x})\vec{p}_{2} + f_{3}(\vec{x}\cdot\vec{p}_{3})\vec{p}_{3} $$
where $\text{proj }\vec{x}$ projects $\vec{x}$ to $\text{Span}(\vec{p}_{1}, \vec{p}_{2})$ according to $e$-orthogonality, and $\vec{p}_{1}, \vec{p}_{2}$ are associated with the same eigenvalue. This result is weaker than the original conjecture, but it is not surprising given the case we are dealing with.
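To spot-check this case symbolically (sympy; the potential $\Phi$ and $f_{3}$ below are arbitrary illustrative choices), take $f_{1} = \partial_{x}\Phi$ and $f_{2} = \partial_{y}\Phi$ for a 2D potential $\Phi$, which enforces the constraint $\partial_{1}f_{2} = \partial_{2}f_{1}$; all six curl equations then vanish with $\alpha = \beta$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
a, c = sp.symbols('alpha gamma', positive=True)   # alpha = beta here; gamma distinct

# Build V1, V2 from a 2D potential so that d(V2)/dx = d(V1)/dy holds automatically.
Phi = sp.exp(x) * sp.cos(y)                       # arbitrary illustrative potential
V1, V2, V3 = sp.diff(Phi, x), sp.diff(Phi, y), sp.sin(z)

eqs = [sp.diff(V3, y) - sp.diff(V2, z),           # the three e-curl components
       sp.diff(V1, z) - sp.diff(V3, x),
       sp.diff(V2, x) - sp.diff(V1, y),
       c * sp.diff(V3, y) - a * sp.diff(V2, z),   # the weighted components, beta = alpha
       a * sp.diff(V1, z) - c * sp.diff(V3, x),
       a * sp.diff(V2, x) - a * sp.diff(V1, y)]
print([sp.simplify(e) for e in eqs])              # all six are 0
```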
The only other cases left are those with $\alpha = \gamma\ne\beta$ and $\alpha\ne\beta = \gamma$, which can be done by an easy adaptation of the case $\alpha=\beta\ne\gamma$, and their outcomes are similar to the outcome above.
Best Answer
This is in fact a long comment, as will become obvious...
One has $|u\times v| = \sin \theta\, |u|\, |v|$ for vectors $u$ and $v$, where $\theta$ is the angle between $u$ and $v$ (ignoring quibbles if $u$ or $v$ is zero). So if $u \times v = 0$, the vectors are parallel.
Therefore, geometrically, one can rephrase your problem to say that if ${\bf A} $ is parallel to $\bf n$, then one is to show that its curl is perpendicular to $\bf n$.
So, a way of doing it might be to use that ${\bf A} = a{\bf n}$ for some scalar-valued function $a$, as implied by the hypothesis ${\bf A} \times {\bf n} =0$. Then the direct coordinate calculation $$\mathop{\rm curl} {\bf A} \cdot {\bf n} = 0$$ is entirely straightforward. I actually made this suggestion a few minutes after you posted your question. However...
I got nervous and almost immediately deleted the comment, because your hypothesis only says ${\bf A} \times {\bf n} =0$ on $\Sigma$, not identically on Euclidean space. We are, after all, talking about taking derivatives when taking the curl; morally, we need something like an 'open interval' to take limits.
I was therefore equally unhappy with your approach, as it seems to me you are again using ${\bf A} \times {\bf n}= 0$ in some neighborhood of $\Sigma$ (and not just on $\Sigma$) to conclude that its divergence is zero. We are, after all, again talking about taking derivatives when taking the divergence; again, we need something like an 'open interval' to take limits.
So I had my doubts.
However! - thinking about it -
By Green/Stokes/whatever, the result must hold true:
One has $$ \int \int_R \mathop{\rm curl} {\bf A} \cdot {\bf n} \, d\sigma = \int_{\partial R} {\bf A}\cdot d{\bf s}, $$
where $R$ is an arbitrary (nice) region in $\Sigma$, $d\sigma$ the area element, ${\partial R}$ the boundary of $R$, etc...
Now, under the hypothesis of the question, the RHS is $0$ identically, since ${\bf A}$ is perpendicular to $\Sigma$ (so ${\bf A}\cdot d{\bf s} = 0$ along any curve in $\Sigma$). Hence, since $R$ is arbitrary, the integrand on the LHS must also be identically zero (as desired). [Namely, suppose $\mathop{\rm curl} {\bf A} \cdot {\bf n}$ were nonzero at some point, say positive. Then it would remain positive in some neighborhood of that point. Take $R$ to be a small disk in that neighborhood; the LHS would then be positive, forcing the RHS to be positive, a contradiction.]
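Finally, here is a symbolic sketch of the direct coordinate computation mentioned earlier (sympy), taking $\Sigma$ to be a sphere centered at the origin so that its unit normal extends radially off $\Sigma$; note that this extension is precisely the extra 'neighborhood' assumption discussed above, and the scalar $a$ is an arbitrary smooth function:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
n = sp.Matrix([x, y, z]) / r          # unit normal of the sphere, extended radially
a = sp.Function('a')(x, y, z)         # arbitrary smooth scalar function
A = a * n                             # so A x n = 0 holds in a whole neighborhood

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

print(sp.simplify(curl(A).dot(n)))    # 0: curl A is perpendicular to n
```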