As you've undoubtedly noticed, you can't just argue as in the case of finite products, thinning out the sequence again and again to get convergence in more and more components. After any finite number of steps, you still have an infinite subsequence of your original sequence, but if you do infinitely many steps, then every term of your original sequence might eventually get removed. Then, instead of having a subsequence at the end of the process, you've got nothing.
The idea of the diagonal argument is to slightly modify the process so that your sequence doesn't entirely disappear. Very roughly, you restrict your thinning-out operations to ensure that an infinite subsequence remains at the end of the process. Here are the details:
Start with your original sequence, and, before doing any thinning, promise yourself that you will never delete the first of its terms; call that term $a_1$. Now thin out the sequence so that the first components converge, but, in accordance with your promise, keep $a_1$ in your new, thinned-out sequence. This does not harm the first-component-convergence. Keeping $a_1$ means that the sequence of first-components has one unavoidable term at the beginning, namely the first component of $a_1$, but one term at the beginning doesn't affect convergence.
So now you have your first thinned-out sequence, starting with $a_1$, and having its first-components converging. Now make a second promise, namely that the second term of this thinned-out sequence, which I'll call $a_2$, will never be deleted. Then thin out the sequence again, just as in your finite-product proof, to make the sequence of second-components converge, but, while thinning it out, keep your two promises. That is, $a_1$ and $a_2$ are in this second thinned-out sequence. Again, you can do this because two terms at the beginning have no effect on convergence.
Continue in this way, alternating promises with thinnings. After $n$ steps, you have a subsequence of your original sequence with two crucial properties. (1) Its first, second, $\dots$, $n$-th components are convergent sequences, and (2) its first, second, $\dots$, $n$-th terms, which I'm calling $a_1,a_2,\dots,a_n$, will be the same in all future thinned-out sequences.
Now look at the infinite sequence $a_1,a_2,\dots$ consisting of the subjects of all your promises. For each $n$, the sequence of its $n$-th components converges: from the $n$-th term on, it is a subsequence of what you had after $n$ thinnings, where convergence of the $n$-th components was ensured, and the finitely many earlier terms don't affect convergence.
This means that $a_1,a_2,\dots$ converges in the product topology. Since it's clearly a subsequence of the sequence you began with, the proof is complete.
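Here is a minimal Python sketch of the bookkeeping described above, run on one concrete sequence in the product $\{0,1\}^{\mathbb{N}}$. It is an illustration, not a proof: a finite prefix of indices stands in for the infinite sequence, and a majority vote stands in for the infinite pigeonhole (Bolzano–Weierstrass) step; the names `component`, `indices`, and `promised` are my own.

```python
# Illustration only: the diagonal argument run on a concrete sequence
# in {0,1}^N.  The k-th point of the sequence has n-th component equal
# to the n-th binary digit of k.
def component(k, n):
    return (k >> n) & 1

indices = list(range(1024))   # finite prefix standing in for 0, 1, 2, ...
promised = []                 # the diagonal subsequence a_1, a_2, ...

for stage in range(8):
    # Promise: the term at position `stage` survives all future thinnings.
    promised.append(indices[stage])
    head, tail = indices[:stage + 1], indices[stage + 1:]
    # Thin the unpromised tail so that component `stage` becomes constant
    # (majority vote here; the infinite pigeonhole in the real proof).
    vals = [component(k, stage) for k in tail]
    target = max(set(vals), key=vals.count)
    indices = head + [k for k in tail if component(k, stage) == target]

# For each n, the n-th components of the promised terms are eventually
# constant, i.e. the diagonal subsequence converges in every coordinate.
```

After the loop, `promised` is strictly increasing (so it names a genuine subsequence), and for each $n$ all promised terms beyond the $n$-th agree in their $n$-th component, mirroring properties (1) and (2) above.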
Theorem: if $X$ is countably compact, then $X$ is strongly limit compact: every countably infinite subset $A$ has an $\omega$-limit point, i.e. a point $x$ such that $U \cap A$ is infinite for every neighbourhood $U$ of $x$.
Proof: suppose not; then some countably infinite $A \subseteq X$ has no $\omega$-limit point, so every $x \in X$ has a neighbourhood $O_x$ such that $O_x \cap A$ is finite. Now a nice trick: for each finite subset $F$ of $A$ (and there are only countably many finite subsets of $A$) define
$$O(F) = \bigcup\{O_x: O_x \cap A = F\}$$
As every $O_x$ is a subset of one of the $O(F)$ (namely the one with $F = O_x \cap A$), and the $O_x$ cover $X$, the $O(F)$ form a countable cover of $X$. Hence there is a finite subcover $O(F_1), \ldots, O(F_N)$. But then there is some $a_0 \in A \setminus \bigcup_{i=1}^N F_i$ (as the $F_i$ are finite subsets of the infinite set $A$), and this $a_0$ is not covered by any of the $O(F_i)$ for $i \le N$: if $a_0 \in O(F_i)$, then $a_0 \in O_x$ for some $x$ with $O_x \cap A = F_i$, so $a_0 \in F_i$, contradicting the choice of $a_0$. This contradiction shows that $A$ does have an $\omega$-limit point.
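For the parenthetical claim that $A$ has only countably many finite subsets, here is a quick sketch (harmlessly identifying the countable set $A$ with $\mathbb{N}$): finite subsets of $\mathbb{N}$ correspond bijectively to natural numbers via binary indicator digits.

```python
# Finite subsets of N are in bijection with N: read a subset as the
# positions of the 1-bits in the binary expansion of a natural number.
def encode(F):
    """Finite subset of N -> natural number."""
    return sum(1 << i for i in F)

def decode(n):
    """Natural number -> finite subset of N."""
    return frozenset(i for i in range(n.bit_length()) if (n >> i) & 1)
```

For instance `encode({0, 2, 5})` is `37`, and `decode(37)` recovers `{0, 2, 5}`.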
BTW the reverse also holds, as you can see here, from which I also borrowed the above argument.
But now the sequential compactness can be proved: let $(x_n)$ be a sequence in $X$. If $A = \{x_n : n \in \mathbb{N}\}$ is finite, there is a constant (hence convergent) subsequence. So we can assume $A$ is infinite, and then $A$ has an $\omega$-limit point $p \in X$, as we saw above. Let $U_n$, $n \in \mathbb{N}$, be a countable local base at $p$. Pick $n_1$ with $x_{n_1} \in U_1 \cap A$. Then, having chosen $n_1 < n_2 < \ldots < n_k$ with $x_{n_k} \in (U_1 \cap \ldots \cap U_k) \cap A$ for some $k \ge 1$, note that $\bigcap_{i=1}^{k+1} U_i$ is an open neighbourhood of $p$, so it contains infinitely many points of $A$; in particular we can pick $n_{k+1} > n_k$ such that $x_{n_{k+1}} \in \bigcap_{i=1}^{k+1} U_i$. Continue this recursion.
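The recursion can be watched in action in a toy metric-space example of my own (not part of the proof): $p = 0$ is an $\omega$-limit point of the sequence $x_n = (-1)^n/n$ in $\mathbb{R}$, with countable local base $U_i = (-1/i,\, 1/i)$.

```python
# Toy run of the recursion: pick n_1 < n_2 < ... with
# x_{n_k} in U_1 ∩ ... ∩ U_k, where U_i = (-1/i, 1/i) and p = 0.
def x(n):
    return (-1) ** n / n

def in_U(value, i):
    return abs(value) < 1.0 / i

picked = []   # the indices n_1 < n_2 < ...
n = 0
for k in range(1, 11):
    n += 1    # force n_{k+1} > n_k
    while not all(in_U(x(n), i) for i in range(1, k + 1)):
        n += 1
    picked.append(n)

# picked = [2, 3, ..., 11]; the values x(picked[k-1]) tend to p = 0.
```

Since the base is nested here, $U_1 \cap \ldots \cap U_k = U_k$, and the $k$-th picked term satisfies $|x_{n_k}| < 1/k$, exactly the convergence $x_{n_k} \to p$ that the proof produces.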
It's now standard to check that $x_{n_k} \to p$, so $(x_{n_k})$ is a convergent subsequence of $(x_n)$.
Because I used $\omega$-limit points, there was no need for $T_1$-ness; with ordinary limit points you would need $T_1$ to make this argument work.