$\DeclareMathOperator\span{span}$Here is an argument which works for general $n\times n$ Sudokus, $n\ge 2$, using some ideas from the other answers: casting the problem in terms of linear algebra, as in François Brunault’s answer, and the notion of alternating paths below, which is related to the even sets in Tony Huynh’s answer (attributed to Zack Wolske).
I will denote the cells as $s_{ijkl}$ with $0\le i,j,k,l< n$, where $i$ identifies the band, $j$ the stack, $k$ the row within band $i$, and $l$ the column within stack $j$. Rows, columns, and blocks are denoted $r_{ik},c_{jl},b_{ij}$ accordingly. Let $X=\{r_{ik},c_{jl},b_{ij}:i,j,k,l< n\}$ be the set of all $3n^2$ checks. For $S\subseteq X$ and $x\in X$, I will again denote by $S\models x$ the consequence relation “every Sudoku grid satisfying all checks from $S$ also satisfies $x$”.
Let $V$ be the $\mathbb Q$-linear space with basis $X$, and $V_0$ be the span of the vectors $\sum_kr_{ik}-\sum_jb_{ij}$ for $i< n$, and $\sum_lc_{jl}-\sum_ib_{ij}$ for $j< n$.
Lemma 1: If $x\in\span(S\cup V_0)$, then $S\models x$.
Proof: A grid $G$ induces a linear mapping $\phi_G$ from $V$ into an $n^2$-dimensional space such that for any $x'\in X$, the $i$th coordinate of $\phi_G(x')$ gives the number of occurrences of the number $i$ in $x'$. We have $\phi_G(V_0)=0$, and $G$ satisfies $x'$ iff $\phi_G(x')$ is the constant vector $\vec 1$. If $x=\sum_i\alpha_ix_i+y$, where $x_i\in S$ and $y\in V_0$, then $\phi_{G'}(x)=\alpha\vec1$ for every grid $G'$ satisfying $S$, where $\alpha:=\sum_i\alpha_i$. In particular, this holds for any valid grid, which has $\phi_{G'}(x)=\vec1$, hence $\alpha=1$. Thus $\phi_{G'}(x)=\vec1$ for every grid $G'$ satisfying $S$, i.e., $S\models x$. QED
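Lemma 1's counting map is easy to experiment with. The following sketch (my own illustration, not part of the argument) takes an arbitrary, not necessarily valid, $4\times4$ grid ($n=2$) and confirms that $\phi_G$ kills the generators of $V_0$:

```python
import random
from collections import Counter

n = 2  # a 4x4 Sudoku: n^2 = 4 symbols

# an arbitrary (not necessarily valid) grid, cells indexed as s_{ijkl}
random.seed(0)
s = {(i, j, k, l): random.randrange(n * n)
     for i in range(n) for j in range(n)
     for k in range(n) for l in range(n)}

# the cells making up each check
def row(i, k):   return [(i, j, k, l) for j in range(n) for l in range(n)]
def col(j, l):   return [(i, j, k, l) for i in range(n) for k in range(n)]
def block(i, j): return [(i, j, k, l) for k in range(n) for l in range(n)]

# phi_G(check): the vector of symbol counts over the check's cells
def phi(cells):
    cnt = Counter(s[p] for p in cells)
    return tuple(cnt[v] for v in range(n * n))

def add(u, v): return tuple(a + b for a, b in zip(u, v))
def sub(u, v): return tuple(a - b for a, b in zip(u, v))

zero = (0,) * (n * n)

# phi_G vanishes on the generators of V_0, for *any* grid G
for i in range(n):  # band generators: sum_k r_ik - sum_j b_ij
    total = zero
    for k in range(n): total = add(total, phi(row(i, k)))
    for j in range(n): total = sub(total, phi(block(i, j)))
    assert total == zero
for j in range(n):  # stack generators: sum_l c_jl - sum_i b_ij
    total = zero
    for l in range(n): total = add(total, phi(col(j, l)))
    for i in range(n): total = sub(total, phi(block(i, j)))
    assert total == zero
```

The point is that $\phi_G(V_0)=0$ uses nothing about validity: a band is the disjoint union of its rows and also of its blocks, as cell sets.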
We intend to prove that the converse holds as well, so assume that $x\notin\span(S\cup V_0)$. We may assume WLOG $x=r_{00}$ or $x=b_{00}$, and we may also assume that $r_{i0}\notin S$ whenever $r_{ik}\notin S$ for some $k$, and $c_{j0}\notin S$ whenever $c_{jl}\notin S$ for some $l$. By assumption, there exists a linear function $\psi\colon V\to\mathbb Q$ such that $\psi(S\cup V_0)=0$, and $\psi(x)\ne0$. The space of all linear functions on $V$ vanishing on $V_0$ has dimension $3n^2-2n$, and one checks easily that the following functions form its basis:
- $\omega_{ik}$ for $0\le i< n$, $0< k< n$: $\omega_{ik}(r_{ik})=1$, $\omega_{ik}(r_{i0})=-1$.
- $\eta_{jl}$ for $0\le j< n$, $0< l< n$: $\eta_{jl}(c_{jl})=1$, $\eta_{jl}(c_{j0})=-1$.
- $\xi_{ij}$ for $i,j< n$: $\xi_{ij}(r_{i0})=\xi_{ij}(c_{j0})=\xi_{ij}(b_{ij})=1$.
(The functions are zero on basis elements not shown above.) We can thus write
$$\psi=\sum_{ik}u_{ik}\omega_{ik}+\sum_{jl}v_{jl}\eta_{jl}+\sum_{ij}z_{ij}\xi_{ij}.$$
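The vanishing of these functionals on $V_0$ can be checked mechanically. Here is a small sketch (my own code; checks are encoded as tagged tuples) that evaluates all $3n^2-2n$ functionals on the generators of $V_0$ for $n=3$:

```python
n = 3

# basis X of V: checks named by tagged index tuples
def r(i, k): return ('r', i, k)
def c(j, l): return ('c', j, l)
def b(i, j): return ('b', i, j)

# the three families of functionals, as sparse dictionaries on X
omega = {(i, k): {r(i, k): 1, r(i, 0): -1} for i in range(n) for k in range(1, n)}
eta   = {(j, l): {c(j, l): 1, c(j, 0): -1} for j in range(n) for l in range(1, n)}
xi    = {(i, j): {r(i, 0): 1, c(j, 0): 1, b(i, j): 1}
         for i in range(n) for j in range(n)}

def ev(psi, vec):  # evaluate a functional on a vector (both sparse dicts)
    return sum(coef * psi.get(x, 0) for x, coef in vec.items())

# generators of V_0
band  = [{**{r(i, k): 1 for k in range(n)}, **{b(i, j): -1 for j in range(n)}}
         for i in range(n)]
stack = [{**{c(j, l): 1 for l in range(n)}, **{b(i, j): -1 for i in range(n)}}
         for j in range(n)]

funcs = list(omega.values()) + list(eta.values()) + list(xi.values())
assert len(funcs) == 3 * n * n - 2 * n                       # 21 for n = 3
assert all(ev(f, g) == 0 for f in funcs for g in band + stack)
```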
If $r_{ik}\in S$, $k\ne0$, then $0=\psi(r_{ik})=u_{ik}$, and similarly $c_{jl}\in S$ for $l\ne0$ implies $v_{jl}=0$. Thus, the functions $\omega_{ik}$ and $\eta_{jl}$ that appear in $\psi$ with a nonzero coefficient individually vanish on $S$. The only case when they can be nonzero on $x$ is $\omega_{0k}$ if $x=r_{00}$ and $r_{00},r_{0k}\notin S$, but then taking any valid grid and swapping cells $s_{0000}$ and $s_{00k0}$ shows that $S\nvDash x$ and we are done. Thus we may assume that the first two sums in $\psi$ vanish on $S\cup\{x\}$, and therefore the third one vanishes on $S$ but not on $x$, i.e., WLOG
$$\psi=\sum_{ij}z_{ij}\xi_{ij}.$$
That $\psi$ vanishes on $S$ is then equivalent to the following conditions on the matrix $Z=(z_{ij})_{i,j< n}$:
1. $z_{ij}=0$ if $b_{ij}\in S$,
2. $\sum_jz_{ij}=0$ if $r_{i0}\in S$,
3. $\sum_iz_{ij}=0$ if $c_{j0}\in S$.
Let us say that an alternating path is a sequence $e=e_p,e_{p+1},\dots,e_q$ of pairs $e_m=(i_m,j_m)$, $0\le i_m,j_m< n$, such that
- $i_m=i_{m+1}$ if $m$ is even, and $j_m=j_{m+1}$ if $m$ is odd,
- the indices $i_p,i_{p+2},\dots$ are pairwise distinct, except that we may have $e_p=e_q$ if $q-p\ge4$ is even,
- likewise for the $j$s.
If $m$ is even, the incoming line of $e_m$ is the column $c_{j_m0}$, and its outgoing line is the row $r_{i_m0}$. If $m$ is odd, we define it in the opposite way. An alternating path for $S$ is an alternating path $e$ such that $b_{i_mj_m}\notin S$ for every $m$, and either $e_p=e_q$ and $q-p\ge4$ is even ($e$ is an alternating cycle), or the incoming line of $e_p$ and the outgoing line of $e_q$ do not belong to $S$.
Every alternating path $e$ induces a matrix $Z_e$ which has $(-1)^m$ at position $e_m$ for $m=p,\dots,q$, and $0$ elsewhere. It is easy to see that if $e$ is an alternating path for $S$, then $Z_e$ satisfies conditions 1, 2, 3.
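For instance (my own illustration, with a hypothetical small cycle), the $4$-cycle $e=(0,0),(0,1),(1,1),(1,0),(0,0)$ induces a matrix whose row and column sums all vanish, as conditions 2 and 3 demand:

```python
n = 3

# an alternating cycle: e_0=(0,0), e_1=(0,1), e_2=(1,1), e_3=(1,0), e_4=e_0
cycle = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]

# alternation: i_m = i_{m+1} for even m, j_m = j_{m+1} for odd m
for m in range(len(cycle) - 1):
    (i0, j0), (i1, j1) = cycle[m], cycle[m + 1]
    assert (i0 == i1) if m % 2 == 0 else (j0 == j1)

# induced matrix Z_e: (-1)^m at e_m (the repeated endpoint is set only once)
Z = [[0] * n for _ in range(n)]
for m, (i, j) in enumerate(cycle[:-1]):
    Z[i][j] = (-1) ** m

assert all(sum(row) == 0 for row in Z)                              # condition 2
assert all(sum(Z[i][j] for i in range(n)) == 0 for j in range(n))   # condition 3
```

Condition 1 holds as long as none of the four blocks visited by the cycle lies in $S$.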
Lemma 2: The space of matrices $Z$ satisfying 1, 2, 3 is spanned by matrices induced by alternating paths for $S$.
Proof:
We may assume that $Z$ has integer entries, and we will proceed by induction on $\|Z\|:=\sum_{ij}|z_{ij}|$. If $Z\ne 0$, pick $e_0=(i_0,j_0)$ such that $z_{i_0j_0}>0$. If the outgoing line of $e_0$ is outside $S$, we put $q=0$, otherwise condition 2 guarantees that $z_{i_0,j_1}< 0$ for some $j_1$, and we put $i_1=i_0$, $e_1=(i_1,j_1)$. If the outgoing line of $e_1$ is outside $S$, we put $q=1$, otherwise we find $i_2$ such that $z_{i_2j_1}>0$ by condition 3, and put $j_2=j_1$. Continuing in this fashion, one of the following things will happen sooner or later:
- The outgoing line of the last point $e_m$ constructed contains another point $e_{m'}$ (and therefore two such points, unless $m'=0$). In this case, we let $p$ be the maximal such $m'$, we put $q=m+1$, $e_q=e_p$ to make a cycle, and we drop the part of the path up to $e_{p-1}$.
- The outgoing line of $e_m$ is outside $S$. We put $q=m$.
In the second case, we repeat the same construction going backwards from $e_0$. Again, either we find a cycle, or the construction stops with an $e_p$ whose incoming line is outside $S$. Either way, we obtain an alternating path for $S$ (condition 1 guarantees that $b_{i_mj_m}\notin S$ for every $m$). Moreover, the nonzero entries of $Z_e$ have the same sign as the corresponding entries of $Z$, thus $\|Z-Z_e\|<\|Z\|$. By the induction hypothesis, $Z-Z_e$, and therefore $Z$, is a linear combination of some $Z_e$s. QED
Now, Lemma 2 implies that we may assume that our $\psi$ comes from a matrix $Z=Z_e$ induced by an alternating path $e=e_p,\dots,e_q$. Assume that $G$ is a valid Sudoku grid that has $1$ in cells $s_{i_mj_m00}$ for $m$ even, and $2$ for $m$ odd. Let $G'$ be the grid obtained from $G$ by exchanging $1$ and $2$ in these positions. Then $G'$ violates the following checks:
- $b_{i_mj_m}$ for each $m$.
- If $e$ is not a cycle, the incoming line of $e_p$ and the outgoing line of $e_q$.
Since $e$ is an alternating path for $S$, none of these is in $S$. On the other hand, $\psi(x)\ne0$ implies that $x$ is among the violated checks, hence $S\nvDash x$.
It remains to show that such a valid grid $G$ exists. We can now forget about $S$, and then it is easy to see that every alternating path can be completed to a cycle, hence we may assume $e$ is a cycle. By applying Sudoku permutations and relabelling the sequence, we may assume $p=0$, $i_m=\lfloor m/2\rfloor$, $j_m=\lceil m/2\rceil$ except that $i_q=j_q=j_{q-1}=0$. We are thus looking for a solution of the following grid:
$$\begin{array}{|ccc|ccc|ccc|ccc|ccc|}
\hline
1&&&2&&&&&&&&&&&\\
\strut&&&&&&&&&&&&&&\\
\strut&&&&&&&&&&&&&&\\
\hline
&&&1&&&2&&&&&&&&\\
&&&&&&&&&&&&&\cdots&\\
&&&&&&&&\ddots&&&&&&\\
\hline
2&&&&&&&&&1&&&&&\\
\strut&&&&&&&&&&&&&&\\
\strut&&&&&&&&&&&&&&\\
\hline
\strut&&&&&&&&&&&&&&\\
\strut&&&&\vdots&&&&&&&&&&\\
\strut&&&&&&&&&&&&&&\\
\hline
\end{array}$$
where the upper-left part is a $q'\times q'$ subgrid of blocks, $q'=q/2$.
If $q'=n$, we can define the solution easily by putting $s_{ijkl}=(k+l,j-i+l)$, where we relabel the numbers $1,\dots,n^2$ by elements of $(\mathbb Z/n\mathbb Z)\times(\mathbb Z/n\mathbb Z)$, identifying $1$ with $(0,0)$ and $2$ with $(0,1)$. In the general case, we define $s_{ijkl}=(k+l+a_{ij}-b_{ij},l+a_{ij})$. It is easy to check that this is a valid Sudoku if the columns of the matrix $A=(a_{ij})$ and the rows of $B=(b_{ij})$ are permutations of $\mathbb Z/n\mathbb Z$. We obtain the wanted pattern if we let $a_{ij}=b_{ij}=j-i\bmod{q'}$ for $i,j< q'$, and extend this in an arbitrary way so that the columns of $A$ and the rows of $B$ are permutations.
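This construction is easy to verify by machine. The sketch below (my own code) uses the simplest admissible choice $a_{ij}=b_{ij}=(j-i)\bmod n$, i.e. the $q'=n$ case, and checks both validity and the $1$/$2$ pattern:

```python
def build(n):
    # a_ij = b_ij = (j - i) mod n: columns of A and rows of B are permutations
    A = [[(j - i) % n for j in range(n)] for i in range(n)]
    B = [[(j - i) % n for j in range(n)] for i in range(n)]
    # s_{ijkl} = (k + l + a_ij - b_ij, l + a_ij) in (Z/n) x (Z/n)
    return {(i, j, k, l): ((k + l + A[i][j] - B[i][j]) % n, (l + A[i][j]) % n)
            for i in range(n) for j in range(n)
            for k in range(n) for l in range(n)}

def is_valid(n, s):
    full = n * n  # each check must contain all n^2 symbols
    rows = all(len({s[i, j, k, l] for j in range(n) for l in range(n)}) == full
               for i in range(n) for k in range(n))
    cols = all(len({s[i, j, k, l] for i in range(n) for k in range(n)}) == full
               for j in range(n) for l in range(n))
    blocks = all(len({s[i, j, k, l] for k in range(n) for l in range(n)}) == full
                 for i in range(n) for j in range(n))
    return rows and cols and blocks

for n in (2, 3, 4):
    s = build(n)
    assert is_valid(n, s)
    # the cycle pattern: "1" = (0,0) at s_{ii00}, "2" = (0,1) at s_{i,i+1,0,0}
    assert all(s[i, i, 0, 0] == (0, 0) for i in range(n))
    assert all(s[i, (i + 1) % n, 0, 0] == (0, 1) for i in range(n))
```

For $q'<n$ one would extend the upper-left corner of $A$ and $B$ arbitrarily, as in the text; only the column/row permutation property matters for validity.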
This completes the proof that $x\notin\span(S\cup V_0)$ implies $S\nvDash x$. This shows that $\models$ is a linear matroid, and we get a description of maximal incomplete sets of checks by means of alternating paths.
We can also describe the minimal dependent sets. Put
$$D_{R,C}=\{r_{ik}:i\in R,k< n\}\cup\{c_{jl}:j\in C,l< n\}\cup\{b_{ij}:(i\in R\land j\notin C)\lor(i\notin R\land j\in C)\}$$
for $R,C\subseteq\{0,\dots,n-1\}$. If $R$ or $C$ is nonempty, so is $D_{R,C}$, and
$$\sum_{i\in R}\Bigl(\sum_kr_{ik}-\sum_jb_{ij}\Bigr)-\sum_{j\in C}\Bigl(\sum_lc_{jl}-\sum_ib_{ij}\Bigr)\in V_0$$
shows that $D_{R,C}$ is dependent. On the other hand, if $D$ is a dependent set, there is a linear combination
$$\sum_i\alpha_i\Bigl(\sum_kr_{ik}-\sum_jb_{ij}\Bigr)-\sum_j\beta_j\Bigl(\sum_lc_{jl}-\sum_ib_{ij}\Bigr)\ne0$$
where all basis vectors with nonzero coefficients come from $D$. If (WLOG) $\alpha:=\alpha_{i_0}\ne0$, put $R=\{i:\alpha_i=\alpha\}$ and $C=\{j:\beta_j=\alpha\}$. Then $R\ne\varnothing$, and $D_{R,C}\subseteq D$.
On the one hand, this implies that every minimal dependent set is of the form $D_{R,C}$. On the other hand, $D_{R,C}$ is minimal unless it properly contains some $D_{R',C'}$, and this can happen only if $R'\subsetneq R$ and $C=C'=\varnothing$ or vice versa. Thus $D_{R,C}$ is minimal iff $|R|+|C|=1$ or both $R,C$ are nonempty.
This also provides an axiomatization of $\models$ by rules of the form $D\smallsetminus\{x\}\models x$, where $x\in D=D_{R,C}$ is minimal. It is easy to see that if $R=\{i\}$ and $C\ne\varnothing$, the rules for $D_{R,C}$ can be derived from the rules for $D_{R,\varnothing}$ and $D_{\varnothing,\{j\}}$ for $j\in C$, hence we can omit these. (Note that the remaining sets $D_{R,C}$ are closed, hence the corresponding rules have to be included in every axiomatization of $\models$.)
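The dependency of $D_{R,C}$ can be confirmed by computing the displayed linear combination directly; the following sketch (my own code) checks for $n=3$ that its support is exactly $D_{R,C}$ for every choice of $R$ and $C$:

```python
from itertools import combinations

n = 3

def r(i, k): return ('r', i, k)
def c(j, l): return ('c', j, l)
def b(i, j): return ('b', i, j)

def D(R, C):
    return ({r(i, k) for i in R for k in range(n)}
            | {c(j, l) for j in C for l in range(n)}
            | {b(i, j) for i in range(n) for j in range(n)
               if (i in R) != (j in C)})

def combo(R, C):
    # sum_{i in R} (sum_k r_ik - sum_j b_ij) - sum_{j in C} (sum_l c_jl - sum_i b_ij)
    v = {}
    def acc(x, a):
        v[x] = v.get(x, 0) + a
        if v[x] == 0:
            del v[x]
    for i in R:
        for k in range(n): acc(r(i, k), 1)
        for j in range(n): acc(b(i, j), -1)
    for j in C:
        for l in range(n): acc(c(j, l), -1)
        for i in range(n): acc(b(i, j), 1)
    return v

idx = range(n)
for rsize in range(n + 1):
    for csize in range(n + 1):
        if rsize == csize == 0:
            continue
        for R in combinations(idx, rsize):
            for C in combinations(idx, csize):
                v = combo(set(R), set(C))
                assert set(v) == D(set(R), set(C))  # support is exactly D_{R,C}
```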
To sum it up:
Theorem: Let $n\ge2$.
1. $S\models x$ if and only if $x\in\span(S\cup V_0)$. In particular, $\models$ is a linear matroid.
2. All minimal complete sets of checks have cardinality $3n^2-2n$. (One such set consists of all checks except for one row from each band and one column from each stack.)
3. The closed sets of $\models$ are intersections of maximal closed sets, which are complements of Sudoku permutations of the sets
   - $\{b_{00},b_{01},b_{11},b_{12},\dots,b_{mm},b_{m0}\}$ for $0< m< n$,
   - $\{c_{00},b_{00},b_{01},b_{11},b_{12},\dots,b_{mm},r_{m0}\}$ for $0\le m< n$,
   - $\{c_{00},b_{00},b_{01},b_{11},b_{12},\dots,b_{m-1,m},c_{m1}\}$ for $0\le m< n$.
4. The minimal dependent sets of $\models$ are the sets $D_{R,C}$, where $R,C\subseteq\{0,\dots,n-1\}$ are both nonempty, or $|R|+|C|=1$.
5. $\models$ is the smallest consequence relation such that $D_{R,C}\smallsetminus\{x\}\models x$ whenever $x\in D_{R,C}$ and either $|R|,|C|\ge2$, or $|R|+|C|=1$.
For a very nice bijective proof of the Murnaghan-Nakayama rule I recommend Section 3.3 of Nick Loehr's article Abacus proofs of Schur function identities, SIAM J. Disc. Math. 24 (2010) 1356-1370. Chapter 11 of Loehr's recent textbook Bijective combinatorics gives an expanded version of his article with minimal prerequisites.
To explain the connection: the Murnaghan-Nakayama rule states that if $\lambda$ is a partition of $m$, $\pi$ is a $k$-cycle and $\sigma$ is a permutation of the remaining $m-k$ points then
$$\chi^\lambda(\pi \sigma) = \sum (-1)^{\ell(\lambda/\mu)} \chi^\mu(\sigma) $$
where the sum is over all partitions $\mu$ obtained from $\lambda$ by removing a $k$-hook from the border of $\lambda$, and $\ell(\lambda/\mu)$ is the leg-length of the hook (i.e. the number of rows of the Young diagram occupied by the hook, minus one). In particular, if $\lambda = (n-1,n-2,\ldots,1)$, $\sigma = 1$ and $\pi$ is a $3$-cycle, then we get
$$\chi^{(n-1,\ldots,1)}(\pi) = -\sum \chi^{\delta_i}(1) $$
since every $3$-hook that can be removed from $(n-1, \ldots, 1)$ is an $L$-shape of leg-length $1$. Now divide by $\chi^{(n-1,\ldots,1)}(id)$ and use the hook formula to get the sum in the question.
For the intended application we only need the Murnaghan-Nakayama rule for cycles. However my feeling is that any proof of the rule for cycles will do almost all the work needed for the general version. For instance, stated using symmetric polynomials, the Murnaghan-Nakayama rule for cycles is $s_\mu p_k = \sum_\lambda (-1)^{\ell(\lambda/\mu)} s_\lambda$, where the sum is over all partitions $\lambda$ obtained from $\mu$ by adding a $k$-hook. Given this, the full result follows in a few lines by taking $\mu = \varnothing$ and repeatedly multiplying by power-sum symmetric polynomials.
There is however an alternative route to the final answer that might be of interest, using explicit formulae for the characters of symmetric groups on cycles. These can be deduced from the Frobenius formula for character values: see 4.10 and Exercise 4.15 of Fulton & Harris, Representation theory: a first course, for the method. The relevant formula for $3$-cycles is (5.2) in R. E. Ingram, Some characters of the symmetric group, Proc. Amer. Math. Soc. 1 (1950) 358-369. It states that
$$\frac{\chi^{\lambda}(\pi)}{\chi^\lambda(id)} m(m-1)(m-2) = 3K_3 (\lambda) - \frac{3m(m-1)}{2} $$
where $\lambda$ is a partition of $m$ and $\pi$ is a $3$-cycle. Here $K_3(\lambda)$ can be defined as follows: put zeros in the boxes in the leading diagonal of the Young diagram of $\lambda$. In each row, put $1^2$, $2^2$, $\ldots$ in the boxes right of the $0$. In each column, put $1^2$, $2^2$, $\ldots$ in the boxes below the $0$. Then $K_3(\lambda)$ is the sum of all the numbers in the boxes of the Young diagram. Calculation shows that
$$K_3(n-1,\ldots,1) = \frac{(n+1)n(n-1)(n-2)}{12} = \frac{N(N-1)}{3} $$
where $N = \binom{n}{2}$, as in the question. Therefore
$$\frac{\chi^{(n-1,\ldots,1)}(\pi)}{\chi^{(n-1,\ldots,1)}(id)} N(N-1)(N-2) = -\frac{N(N-1)}{2} $$
and so
$$\sum_{i=1}^{n-2} \frac{f_{\delta_i}}{f_{(n-1,\ldots,1)}} = -\frac{\chi^{(n-1,\ldots,1)}(\pi)}{f_{(n-1,\ldots,1)}} = \frac{1}{2(N-2)}. $$
This differs from the answer claimed in the question but seems to be correct at least for $n=3,4,5$. (For example, if $n=3$ then we have $\delta_1 = \varnothing$ and the sum in the question is $1/f_{(2,1)} = 1/2$.)
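The numerical claim extends readily by machine. In the sketch below (my own code; I take the $\delta_i$ to be the partitions obtained from the staircase by removing a $3$-hook, matching the displayed sum), the identity $\sum_i f_{\delta_i}/f_{(n-1,\ldots,1)} = 1/(2(N-2))$ is verified for $n=3,\dots,6$ via the hook length formula:

```python
from fractions import Fraction
from math import factorial

def hooks(lam):
    # multiset of hook lengths of the partition lam (weakly decreasing tuple)
    conj = [sum(1 for li in lam if li > j) for j in range(lam[0])]
    return [(lam[i] - j) + (conj[j] - i) - 1
            for i in range(len(lam)) for j in range(lam[i])]

def f(lam):
    # number of standard Young tableaux of shape lam, by the hook length formula
    if not lam:
        return 1
    p = 1
    for h in hooks(lam):
        p *= h
    return factorial(sum(lam)) // p

def connected(cells):
    # is the cell set edge-connected?
    start = next(iter(cells)); seen = {start}; todo = [start]
    while todo:
        a, b = todo.pop()
        for nb in ((a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)):
            if nb in cells and nb not in seen:
                seen.add(nb); todo.append(nb)
    return len(seen) == len(cells)

def three_hook_removals(lam):
    # partitions mu with lam/mu a border strip of 3 cells
    # (3 cells cannot contain a 2x2 box, so connectedness suffices)
    m, rows, res = sum(lam), len(lam), []
    def gen(i, prev, mu):
        if i == rows:
            if sum(mu) == m - 3:
                cells = {(a, b) for a in range(rows) for b in range(mu[a], lam[a])}
                if connected(cells):
                    res.append(tuple(x for x in mu if x))
            return
        for v in range(min(prev, lam[i]), -1, -1):
            gen(i + 1, v, mu + [v])
    gen(0, lam[0], [])
    return res

for n in range(3, 7):
    lam = tuple(range(n - 1, 0, -1))  # staircase (n-1, ..., 1)
    N = n * (n - 1) // 2
    total = sum(Fraction(f(mu), f(lam)) for mu in three_hook_removals(lam))
    assert total == Fraction(1, 2 * (N - 2))
```

For $n=3$ the only removal is the whole staircase $(2,1)$, giving $1/f_{(2,1)}=1/2$, in agreement with the parenthetical remark above.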
By some sort of strange mathematical cosmic entanglement, it appears that François Brunault answered his own question in the other thread while I was writing this answer.
The answer is indeed yes. Feel free to click on the original question and vote up his answer! Or vote this answer up (I've made it community wiki).