Misplaced alignment tab character with \math@cr@@@

Tags: alignment, errors, math-mode

My thesis suddenly does not compile anymore. LaTeX throws the following error:

! Misplaced alignment tab character &.
\math@cr@@@ ->\ifst@rred \nonumber \fi &
\relax \make@display@tag \ifst@rred …
l.1007 \end{equation}
The sensitivity of a classification model will tell us

The strange thing is that I did not even edit the highlighted area, which compiles correctly if I extract the lines into a new document.

MWE:

\documentclass[11pt]{article}
\usepackage{amsmath}
\usepackage[margin=1in, paperwidth=8.3in, paperheight=11.7in]{geometry}
\begin{document}
\begin{equation}
    Sensitivity  = \dfrac{\text{\textit{\# of true positives of class i}}}{\text{\textit{\# of true positives of class i + \\
                \# of false negatives positives of class i}}}
\end{equation} The sensitivity of a classification model will tell us
how well each class (climber, sprinter, mixed) performs within its own
group. For example, suppose the model classified 1000 observations as
climbers in the test subset. Only 700 of these predicted observations
actually were climbers (true positives), while 300 further observations
that belonged to the class climbers were not predicted as such.
Then, the sensitivity for the class climbers would be 700/(700+300) = 0.7.
\end{document}

Strangely, the areas that I did edit also compile correctly on their own:

MWE 2:

\documentclass[11pt]{article}
\usepackage{amsmath}
\usepackage[margin=1in, paperwidth=8.3in, paperheight=11.7in]{geometry}
\begin{document}
1000 observations contained $NA$ data for the variables $avg\_temperature$
and $avg\_calories$. To handle this problem the author decided on two
different strategies: first, the observations which contain $NA$ values
were dropped; second, those values were instead imputed. The imputation
was done using the IRMI algorithm, first mentioned by \cite{Templ2011}.
The basis for the work of the latter authors is the IVEWARE algorithm from
\cite{Raghunathan01}, which generates iterative estimates for the missing
values using a chain of regression models and picks values from the
generated predictive distributions. The IRMI algorithm addresses the
inability of the IVEWARE algorithm to produce robust results for data
containing outliers, adds more flexibility by removing the restriction of
at least one fully observed variable, and uses an Akaike information
criterion for model selection. In the latter process a user-specified
number of the most important variables for the imputation sequence is
chosen. A short example of the algorithm for both variables of the strava
dataset that contain missings is the following. \\
In the first step a KNN algorithm is used to create initial values for
those missings. Then, for each variable $j$ of the dataset, the
observations that contain missings are marked. Let $A$ describe all
observations that were marked as containing missings and $m$ the variable
which includes the missings. So $\{A, m\}$ describes all observations of
the variable $m$ that contain missings, therefore $m \notin j$. Then
linear regression estimates $\hat{\beta}^j$ are calculated from all other
variables, including an intercept, $X_{A, m}^j$ against the missing
observations of the target variable $y_{A, m}^m$, so that
$y_{A, m}^m = \hat{\beta}^j X_{A, m}^j + \epsilon$, with $\epsilon$ being
some error term. Alternatively one can choose to use stepwise model
selection by AIC, so that we include only the most important variables
$i$ from the best model. If one chooses not to use the stepwise model,
$i = j$. Then we can replace (new) estimates for the missing values by
$\hat{y}_{A, m}^m = \hat{\beta}^i X_{A, m}^i$. Afterwards the regression
and replacement steps are repeated $\min(2, M)$, $m \in M$, times until
$\sum_{a}^{A}\left(\hat{\boldsymbol{y}}_{a, m}^{m}-\tilde{\boldsymbol{y}}_{a, m}^{m}\right)^{2}<\delta \quad \text{for all } a \in A \text{ and } m \in M$,
where $\delta$ is a small constant, $\hat{\boldsymbol{y}}_{a, m}^{m}$ is
the $a$-th imputed value of the current iteration and
$\tilde{\boldsymbol{y}}_{a, m}^{m}$ is the $a$-th imputed value of the
last iteration. The results of the classification and regression tasks in
the results section will explore whether an improvement of the results can
be achieved with imputed missing values in the dataset. The baseline will
be the strava dataset which has simply dropped the missing values,
compared with one using imputed values generated by the IRMI algorithm.
Possible restrictions of the results obtained with the imputed dataset
will be discussed in the discussion section.
 
 \begin{equation}
    type_{i_D, k} =  
    \begin{cases}
        climber  & \mbox{if} \quad \sum_{i_{d}}^{i_{D}}climber\_points_{i_d, k} >  \biggl\{ \sum_{i_{d}}^{i_{D}}sprinter\_points_{i_d, k} \biggr\}*1.5 \\
        & \mbox{and} \quad  \biggl\{\sum_{i_{d}}^{i_{D}}climber\_points_{i_d, k} -  \sum_{i_{d}}^{i_{D}}sprinter\_points_{i_d, k} \biggr\} \geq 30  \\
        sprinter & \mbox{if} \quad  \sum_{i_{d}}^{i_{D}}sprinter\_points_{i_d, k} >  \biggl\{ \sum_{i_{d}}^{i_{D}}climber\_points_{i_d, k} \biggr\}*1.5   \\
        & \mbox{and} \quad  \biggl\{\sum_{i_{d}}^{i_{D}}sprinter\_points_{i_d, k} -  \sum_{i_{d}}^{i_{D}}climber\_points_{i_d, k} \biggr\} \geq 30 \\
        mixed   &  \mbox{else} 
    \end{cases}
    \label{eq:type_rule}
 \end{equation}
 
\end{document}

Any idea what the origin of this error could possibly be?

Best Answer

The posted example produces no error, although the \\ does nothing useful, as \text is like \mbox and is always a single line:

\documentclass[11pt]{article}
\usepackage{amsmath}
\usepackage[margin=1in, paperwidth=8.3in, paperheight=11.7in]{geometry}
\begin{document}
\begin{equation}
    Sensitivity  =
    \dfrac{\text{\textit{\# of true positives of class
    i}}}{\text{\textit{\# of true positives of class i + \\
                 \# of false negatives positives of class i}}}
\end{equation} The sensitivity of a classification model will tell us
how well each class (climber, sprinter, mixed) performs within its own
group. For example, suppose the model classified 1000 observations as
climbers in the test subset. Only 700 of these predicted observations
actually were climbers (true positives), while 300 further observations
that belonged to the class climbers were not predicted as such.
Then, the sensitivity for the class climbers would be 700/(700+300) = 0.7.
\end{document}

(Screenshot of the compiled output: the equation renders as expected, with no error.)

However, the error message shows that you have used an amsmath alignment \\ in such a place, and changing equation to align produces essentially the error that you show:

\documentclass[11pt]{article}
\usepackage{amsmath}
\usepackage[margin=1in, paperwidth=8.3in, paperheight=11.7in]{geometry}
\begin{document}
\begin{align}
    Sensitivity  =
     \dfrac{\text{\textit{\# of true positives of class
      i}}}{\text{\textit{\# of true positives of class i + \\
     \# of false negatives positives of class i}}}
\end{align} The sensitivity of a classification model will tell us
how well each class (climber, sprinter, mixed) performs within its own
group. For example, suppose the model classified 1000 observations as
climbers in the test subset. Only 700 of these predicted observations
actually were climbers (true positives), while 300 further observations
that belonged to the class climbers were not predicted as such.
Then, the sensitivity for the class climbers would be 700/(700+300) = 0.7.
\end{document}

produces

! Misplaced alignment tab character &.
\math@cr@@@ ->&
               \omit \global \advance \row@ \@ne \ifst@rred \nonumber \fi \i...
l.8 \end{align}
                The sensitivity of a classification model will tell us
? 

However, the error message that you show indicates that your document used \end{equation}, so (I would guess) you have a non-standard redefinition of equation based on align. In any case, the fix is to remove the spurious \\ .
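
For illustration only: the preamble is not shown in the question, so the following redefinition is purely hypothetical, but rebuilding equation on top of amsmath's align machinery should reproduce essentially the same class of error, with \end{equation} appearing in the traceback, as soon as the \\ is present.

\documentclass[11pt]{article}
\usepackage{amsmath}
% hypothetical: equation redefined on top of align, the kind of
% non-standard setup the error message hints at
\renewenvironment{equation}{\align}{\endalign}
\begin{document}
\begin{equation}
    Sensitivity =
    \dfrac{\text{\textit{\# of true positives of class i}}}
          {\text{\textit{\# of true positives of class i + \\
                          \# of false negatives of class i}}}
\end{equation}
\end{document}

And here is a minimal sketch of the fix (my wording of the denominator, not the question's): the first equation simply drops the spurious \\ ; the second shows one way, using a centred array in the denominator, to get a genuine two-line denominator, since \\ is not allowed inside \text.

\documentclass[11pt]{article}
\usepackage{amsmath}
\begin{document}
% spurious \\ removed: the denominator is a single line of text
\begin{equation}
    Sensitivity =
    \dfrac{\text{\textit{\# of true positives of class i}}}
          {\text{\textit{\# of true positives of class i +
                          \# of false negatives of class i}}}
\end{equation}
% if the denominator really should break over two lines, wrap it in a
% centred array, which does accept \\
\begin{equation}
    Sensitivity =
    \dfrac{\text{\textit{\# of true positives of class i}}}
          {\begin{array}{@{}c@{}}
             \text{\textit{\# of true positives of class i +}} \\
             \text{\textit{\# of false negatives of class i}}
           \end{array}}
\end{equation}
\end{document}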