There is a standard trick in analysis: one chooses a subsequence, then a subsequence of that, and so on, and then wants a single sequence that is eventually a subsequence of all of them, so one takes the diagonal. I've always called this the diagonalization trick.
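To fix notation (mine, not from any particular source), the trick can be sketched as follows. Let $(x_n)$ be a sequence, and suppose that for each $m \geq 1$ we have extracted a subsequence $(x^{(m)}_k)_k$ of $(x^{(m-1)}_k)_k$, with $(x^{(0)}_k)_k = (x_k)_k$, chosen so that $(x^{(m)}_k)_k$ has some desired property $P_m$. The diagonal sequence is
$$
y_k := x^{(k)}_k, \qquad k = 1, 2, 3, \dots
$$
For each fixed $m$, the tail $(y_k)_{k \geq m}$ is a subsequence of $(x^{(m)}_k)_k$, so $(y_k)$ inherits every property $P_m$ that survives passing to tails and subsequences (e.g. convergence of $f_m(y_k)$ for countably many functionals $f_m$).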
I heard once that this is due to Cantor, but I haven't been able to find a reference (all searches for "diagonal" and "Cantor" lead to his argument about the uncountability of [0,1]).
Does anyone have an exact reference?
Thanks.
B.
Best Answer
From this, it sounds like a very early instance is in Ascoli's proof of his theorem: pp. 545-549 of Le curve limite di una varietà data di curve, Atti Accad. Lincei 18 (1884) 521-586. (Which, alas, I can't find online.)
Note that this predates Cantor's argument that you mention (for uncountability of [0,1]) by 7 years.
Edit: I have since found the above-cited article of Ascoli, here. And I must say that the modern diagonal argument is less "obviously there" on pp. 545-549 than Moore made it sound. The notation is different and the crucial subscripts rather hard to read, so at first sight I feel the need for a native Italian speaker to help spot where precisely Ascoli passes to the diagonal sequence (as I guess he must somehow)...
(One may also consult a self-summary of the article in Rend. Reale Istituto lombardo di scienze e lettere 21 (1888) 365-371 here.)