In these data, your two raters agreed 100% of the time (the function excludes all rows that contain an NA). Each rater also individually has a 100% estimated chance of selecting category "1", so the probability of agreement by chance is $p_e = 1 \times 1 = 1$. Cohen's kappa is $\kappa = (p_o - p_e)/(1 - p_e)$, so its denominator is $1 - p_e = 0$ and you are dividing by zero. Since the numerator $p_o - p_e$ also happens to be zero, R evaluates $0/0$ and returns NaN.
It seems to me this is an edge case that the irr package developers might want to address in the next version of the package.
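For a concrete illustration, here is a minimal sketch that reproduces the NaN (the data frame is made up, and it assumes the irr package is installed):

```r
library(irr)

# Hypothetical ratings in which both raters always choose category "1"
ratings <- data.frame(rater1 = rep(1, 10),
                      rater2 = rep(1, 10))

# p_o = 1 and p_e = 1, so kappa = (1 - 1) / (1 - 1) = 0/0
kappa2(ratings)  # the reported kappa should be NaN
```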
I would argue that Cohen's kappa and Gwet's gamma are both problematic approaches to the estimation of inter-rater reliability. Both make a number of assumptions about the behavior of raters that are rarely tenable in practice and produce paradoxical results when these assumptions are violated (Feng, 2013; Zhao, Liu, & Deng, 2012). The popularity of Cohen's kappa stems largely from historical precedent/inertia and the "intuitive appeal" of its logic (i.e., applying Bayes' rule to the estimation of chance agreement). The resistance of the field to change to Gwet's gamma is probably more due to sociological reasons than statistical ones, although I think statistical arguments against Gwet's gamma could be made (see again the cited articles). Unfortunately, a truly effective alternative has not yet been developed. At the moment, your best bet is probably to report multiple measures. As @Alexis stated in her comments, one attractive option is to use specific agreement for each category as suggested by Cicchetti & Feinstein (1990).
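If you want to try the specific-agreement route, a minimal sketch is below; the 2x2 counts are invented for illustration, and the formulas are the per-category agreement proportions described by Cicchetti & Feinstein (1990):

```r
# Hypothetical 2x2 table of counts for two raters and a yes/no category
#                 rater 2: yes   rater 2: no
# rater 1: yes        a = 80         b = 5
# rater 1: no         c = 10         d = 5
a <- 80; b <- 5; c <- 10; d <- 5

# Proportions of specific agreement (Cicchetti & Feinstein, 1990)
p_pos <- 2 * a / (2 * a + b + c)  # agreement specific to "yes": 160/175, about 0.91
p_neg <- 2 * d / (2 * d + b + c)  # agreement specific to "no":   10/25,  0.40
```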
References
Cicchetti, D. V., & Feinstein, A. R. (1990). High agreement but low kappa: II. Resolving the paradoxes. Journal of Clinical Epidemiology, 43(6), 551–558.
Feng, G. C. (2013). Factors affecting intercoder reliability: A Monte Carlo experiment. Quality & Quantity, 47(5), 2959–2982.
Zhao, X., Liu, J. S., & Deng, K. (2012). Assumptions behind inter-coder reliability indices. In C. T. Salmon (Ed.), Communication Yearbook (pp. 418–480). Routledge.
Best Answer
Cohen's kappa is known to have limitations for skewed datasets.
Quoting an example from here:
Consider the following matrix:
The example has an observed agreement of 0.85, but Cohen's kappa is only 0.04.
The solution suggested in this article is to report two separate agreement metrics, one for the positive class and one for the negative class.
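As a rough illustration of that limitation (the ratings below are invented and are not the table from the linked article; the sketch assumes the irr package), heavily skewed data can give high observed agreement but a near-zero, even negative, kappa:

```r
library(irr)

# Invented, heavily skewed ratings for 50 subjects: category "1" is common,
# category "0" is rare, and the raters disagree only on the rare cases
rater1 <- c(rep(1, 45), rep(1, 2), rep(0, 3))
rater2 <- c(rep(1, 45), rep(0, 2), rep(1, 3))

mean(rater1 == rater2)               # observed agreement: 0.90
kappa2(data.frame(rater1, rater2))   # kappa should come out around -0.05
```

Reporting the specific agreement for each class separately (as in the sketch under the first answer) would show near-perfect agreement for the common category and essentially none for the rare one, which is far more informative here than the single kappa value.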