Your attempt is excellent; however, I think it is essential to point a few things out:
1) Your proof of the first part is correct, but if you are a fan of clear writing, it could do with a little rephrasing and trimming, as follows: Let $\epsilon > 0$, and consider a subsequence $\{p_{n_k}\}$. For this $\epsilon$, there is a natural number $N$ such that $n > N \implies |p_n-p| < \epsilon$. Now, because $n_k \geq k$ for every $k$, taking $K = N$ gives $n_k \geq k > N$ for all $k > K$. Hence $|p_{n_k}-p| < \epsilon$ for all $k > K$, and so $p_{n_k}$ converges to $p$.
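The $\epsilon$–$N$ argument above can be sanity-checked numerically. This is only a sketch: the concrete sequence $p_n = 1/n$ (limit $p=0$) and the subsequence indices $n_k = k^2$ are my own illustrative choices, not part of the proof.

```python
# If p_n -> p, then every subsequence p_{n_k} -> p.
# Illustrative choices: p_n = 1/n with limit 0, and n_k = k^2 (so n_k >= k).

def p(n):
    return 1.0 / n

eps = 1e-3
N = 1000  # for this eps: n > N  =>  |p_n - 0| < eps, since 1/1001 < 1e-3
assert all(abs(p(n)) < eps for n in range(N + 1, N + 2000))

# As in the proof, K = N works for the subsequence, because n_k = k^2 >= k > N.
K = N
assert all(abs(p(k * k)) < eps for k in range(K + 1, K + 200))
```

The same $K = N$ works for any strictly increasing choice of indices, which is exactly the content of the inequality $n_k \geq k$.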
2) A sequence certainly can have infinitely many subsequences. For example, from the sequence $1,2,3,\dots$ you get the subsequences $2,3,\dots$, then $3,4,\dots$, then $4,5,\dots$, and so on. However, if your sequence is constant, for example, then all of its subsequences are equal to the sequence itself, so you get only one distinct subsequence.
2.5) This is the contentious part. There is no finiteness condition on $I$, which creates a problem for the existence of the maximum. For example, suppose $N=1$ for $i_1$, $N=2$ for $i_2$, $N=3$ for $i_3$, and so on. Then $N$ doesn't have a maximum if $I$ is infinite, which it is most of the time. Hence, the argument breaks down here.
3) That's true, but by then the argument has already broken down, so it is of no significance.
Now, how you do the second part may come as a bit of a surprise. The reason is this:
Suppose I have a convergent sequence $a_1,a_2,\dots$ that converges to $a$. I think of a new first number, call it $a_0$, and consider the sequence $a_0,a_1,a_2,\dots$, which I renumber and call $b_1,b_2,\dots$. Well, this sequence is also convergent!
Proof: Suppose you are given $\epsilon > 0$. Then there exists $N \in \mathbb{N}$ such that $n > N \implies |a_n-a| < \epsilon$. But $b_{n+1}=a_n$, so considering the number $N+1$: $n>N+1 \implies n-1>N \implies |a_{n-1}-a| <\epsilon \implies |b_n-a| <\epsilon$. Hence $N+1$ works for this $\epsilon$ and $\{b_n\}$, and it follows that $b_n$ is convergent.
Now, all you need is this little trick: from your sequence $\{a_n\}$, remove the first element $a_1$. Whatever remains is a subsequence of $\{a_n\}$, which is given to be convergent. Now put back $a_1$; the resulting sequence, which is $\{a_n\}$ itself, is convergent by the fact above! Hence, we are done.
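The shift lemma behind this trick can be checked numerically. This is only a sketch: the tail sequence $a_n = 1/n$ and the prepended value $a_0 = 42$ are my own illustrative choices.

```python
# Shift lemma: if a_n -> a, then the sequence b_1 = a_0, b_{n+1} = a_n
# (one extra term stuck on the front) also converges to a.

def a(n):
    return 1.0 / n  # the "tail" sequence; converges to 0

a0 = 42.0  # arbitrary prepended first term (illustrative choice)

def b(n):
    return a0 if n == 1 else a(n - 1)  # b_{n+1} = a_n

eps = 1e-2
N = 100  # works for a_n: n > N  =>  |a_n - 0| < eps, since 1/101 < 1e-2
# Exactly as in the proof, N + 1 then works for b_n:
assert all(abs(b(n)) < eps for n in range(N + 2, N + 500))
```

The early term $b_1 = 42$ is irrelevant: convergence only constrains the tail, which is why removing and restoring finitely many terms never changes the limit.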
This result is somewhat odd, because we did not seem to use the whole strength of the statement "all subsequences converge". But that's okay, we've got your result!
Let $Y$ be a complete metric space whose closed balls are totally bounded. A sequence in $Y$ is semiconvergent if and only if it has at most one limit point in $Y$.
Proof:
Suppose $(x_n)$ has two limit points in $Y$. Choose a bounded open set containing both, and take the subsequence of $(x_n)$ consisting of the terms lying in this open set (there are infinitely many such terms, since each limit point attracts infinitely many of them). This gives a bounded subsequence that still has two limit points, hence is not convergent.
For the converse, it suffices to show that a bounded sequence $(x_n)$ with a unique limit point $x$ is convergent. Since $(x_n)$ is bounded, it is contained in a closed ball, which is compact by total boundedness of closed balls and completeness of $Y$. Call this closed ball $K$. If $(x_n)$ doesn't converge to the unique limit point $x$, there is $\epsilon > 0$ such that $(x_n)$ has infinitely many terms not contained in the open ball $U=B_\epsilon(x)$. Let $(y_n)$ be the subsequence of $(x_n)$ contained in $K\setminus U$, which is a closed and hence compact subset of $K$. Since compactness implies sequential compactness for metric spaces, $(y_n)$ has a limit point in $K\setminus U$, and thus so does $(x_n)$. But this limit point is distinct from $x$, contradicting uniqueness.
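Boundedness (that is, compactness of $K$) is essential in the converse. A sketch in code of the standard counterexample, an unbounded real sequence with a unique limit point that still diverges; the specific sequence is my own choice, not from the proof above:

```python
# x_n = 0 for even n, x_n = n for odd n.
# Unique limit point: 0 (from the even-indexed terms),
# yet x_n diverges because the odd-indexed terms grow without bound.

def x(n):
    return 0 if n % 2 == 0 else n

# Infinitely many terms equal to 0, so 0 is a limit point.
assert all(x(2 * k) == 0 for k in range(1, 100))

# But the sequence leaves every ball around 0 infinitely often,
# so it cannot converge to 0 (or to anything else).
assert all(x(2 * k + 1) > 1 for k in range(1, 100))
```

In $\Bbb{R}$ this sequence is not contained in any compact $K$, so the subsequence $(y_n)$ from the proof has no guaranteed limit point, and the argument does not apply.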
Note:
In particular, this applies to $Y=\Bbb{R}^n$ for any $n$.
Best Answer
Are you referring to series or sequences?
The approach you gave with subsequences isn't usually the easiest (at least, as far as I know). If you can't guess the limit of the whole sequence, you can start with a simpler subsequence, though this only gives a hunch: a sequence converges to a limit $\ell$ if and only if each of its subsequences converges to $\ell$ too. You can see for yourself that this can't be used directly to determine the convergence of a sequence: you just can't calculate an infinite number of limits! Still, if an easy subsequence you found converges to a certain limit, then you know what the overall limit should be, if it exists. If instead two subsequences have different limits, then your sequence doesn't have a limit. (Note that the limit I'm speaking of might as well be infinite.)
Here's a not-so-trivial example: $a_n=\cos(\pi n)\frac1{n}$. Using subsequences, you can try one in which $\cos(\pi n)$ is constant (for integer $n$ we have $\cos(\pi n)=(-1)^n$, so it can only be $-1$ or $1$). Take $\cos(\pi n)=1$, i.e. $n=2k$ ($k\in\mathbb N$). Then $a_{2k}=\cos(2k\pi)\frac1{2k}=\frac1{2k}$, which converges to zero. You should expect, then, that the overall limit of $a_n$ is zero too, but you can't use this result alone to prove that the limit really is zero (see the next example), so you still have to use other methods.
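A quick numerical check of this example (a sketch; note that `math.cos(math.pi * n)` equals $(-1)^n$ only up to floating-point error, hence the small tolerances):

```python
import math

def a(n):
    return math.cos(math.pi * n) / n

# For integer n, cos(pi*n) = (-1)^n, so |a_n| = 1/n -> 0.
assert all(abs(a(n)) <= 1.0 / n + 1e-12 for n in range(1, 1000))

# The even-indexed subsequence a_{2k} = 1/(2k) converges to 0.
assert all(abs(a(2 * k) - 1.0 / (2 * k)) < 1e-9 for k in range(1, 500))
```

Here the full sequence happens to converge to the same limit as the subsequence, but as the text says, that conclusion needs a separate argument (e.g. squeezing $|a_n| \le 1/n$).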
An example of a sequence that doesn't have a limit: $b_n=(-1)^n$. Take the subsequence for even values of $n$, that is (using $n=2k$, with $k\in\mathbb N$) $b_{2k}=(-1)^{2k}=\left((-1)^2\right)^k=1$, so it converges to $1$. But then you find another subsequence, for odd $n$ ($n=2k+1$), that is $b_{2k+1}=(-1)^{2k+1}=-1$. Those two have different limits; consequently $b_n$ cannot converge (in fact, it oscillates between $1$ and $-1$).
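The two-subsequence argument can be verified directly (a minimal sketch):

```python
def b(n):
    return (-1) ** n

# The even-indexed subsequence is constantly 1 ...
assert all(b(2 * k) == 1 for k in range(1, 100))
# ... while the odd-indexed subsequence is constantly -1.
assert all(b(2 * k + 1) == -1 for k in range(1, 100))
# Two subsequences with different limits => b_n does not converge.
```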
As for your question «What's the trick to finding the sub-sequence?»: intuition.