The official meaning of the assertion that
$$1+r+r^2+r^3+\cdots=S$$
is that
$$\lim_{n \to \infty}(1+r+r^2+\cdots +r^n) =S$$.
By the usual formula for the sum of a finite geometric series,
$$1+r+r^2+\cdots +r^n=\frac{1-r^{n+1}}{1-r}$$
(unless, of course, $r=1$).
It is reasonably clear that if $|r|<1$,
$$\lim_{n\to\infty}\frac{1-r^{n+1}}{1-r}=\frac{1}{1-r}$$
and that if $|r|\ge 1$, then
$$\lim_{n \to \infty}(1+r+r^2+\cdots +r^n)$$
does not exist.
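A quick numerical check of both claims (a sketch; `partial_sum` is just an illustrative helper name):

```python
def partial_sum(r, n):
    """Compute 1 + r + r**2 + ... + r**n by direct addition."""
    return sum(r**k for k in range(n + 1))

# For |r| < 1 the partial sums approach 1/(1 - r):
r = 0.5
print(partial_sum(r, 50))   # very close to 1/(1 - 0.5) = 2

# For |r| >= 1 they grow without bound instead of settling down:
print(partial_sum(2, 10))   # 2047, roughly doubling with each extra term
```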
But that is not really your question. What I think you are saying is roughly this: let
$$S=1+r+r^2+r^3+\cdots$$
Then at the level of formal "algebraic" manipulation, we have (??)
$$S-1=rS$$
and "therefore" (??)
$S=1/(1-r)$
(at least if $r \ne 1$).
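Spelled out, the step behind the first (??) is that subtracting $1$ leaves behind a copy of $S$ scaled by $r$:
$$S-1=r+r^2+r^3+\cdots=r\left(1+r+r^2+\cdots\right)=rS,$$
so that, formally, $S(1-r)=1$.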
But do "formal" manipulations always yield correct results? Not necessarily. For example, you have probably already seen plausible-looking algebraic manipulations that appear to yield the absurd "result" that $0=1$ (usually the flaw is a carefully hidden division by $0$).
The formal manipulation yields, if we take $r=1/2$, the perfectly correct result that
$$1+\frac{1}{2}+\frac{1}{2^2}+\frac{1}{2^3}+\cdots =2$$
This makes physical sense. If you look at the real number line, and put a dot at $1$, then at $1+1/2$, then at $1+1/2+1/4$, and so on, your dots are clearly approaching $2$, and are after a while indistinguishably close to $2$.
Now look at the formal sum
$$S=1+2+2^2+2^3+\cdots$$
Purely formal manipulation then yields $S-1=2S$, and therefore $S=-1$. Does this make any kind of physical sense? Certainly if you add more and more terms of the above series, you are not getting in any sense close to $-1$. And at the crude informal level, it is clear that the sum is "infinite," whatever that may mean. Formal manipulations on non-existent objects can yield absurd results.
So in this case formal manipulation has produced a result that makes no physical sense. There are a number of instances where blind manipulation yields wrong results, so early in university, students are trained to apply
algebraic "rules" only in situations where these rules are valid.
However, the situation is more complicated than that! Once upon a time complex numbers, when they came up in a formal calculation, were dismissed as "absurd." Now they are part of the essential toolkit of the electrical engineer! There are similar phenomena with series.
In combinatorics, for example, formal power series are successfully used, at the purely manipulational level, even when the series technically do not converge. And in various areas of mathematics, divergent series of a suitable type can be very useful, even in computations!
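As a small illustration of that purely manipulational level (a sketch; `inverse_series` is a hypothetical helper name), the coefficients of a reciprocal power series can be computed term by term from the convolution identity $B(x)\cdot C(x)=1$, with no convergence question ever arising:

```python
def inverse_series(b, n):
    """Coefficients of 1/B(x) up to x**n, where b lists B's coefficients and b[0] != 0.

    The recurrence comes from requiring that the product of B(x) and
    the answer have constant term 1 and all higher coefficients 0.
    """
    c = [1 / b[0]]
    for k in range(1, n + 1):
        s = sum(b[j] * c[k - j] for j in range(1, min(k, len(b) - 1) + 1))
        c.append(-s / b[0])
    return c

# 1/(1 - 2x) as a formal power series: coefficients 1, 2, 4, 8, ...
# exactly the terms of the "absurd" series above, handled safely.
print(inverse_series([1, -2], 5))  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```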
But at this stage, you should be disciplined, and apply formal manipulations only in situations where you know, or have been assured that, they are "safe." Later, perhaps, you can explore situations where venturing beyond safe confines yields interesting and useful results.
$$\left(1-\frac12\right)+\left(\frac12-\frac13\right)+\left(\frac13-\frac14\right) +\cdots +\left(\frac{1}{n} - \frac{1}{n+1}\right) $$
$$ = 1-\frac12+\frac12-\frac13+\frac13-\frac14+\frac14 -\cdots +\frac{1}{n} - \frac{1}{n+1}$$
$$ = 1-\left(\frac12-\frac12\right)-\left(\frac13-\frac13\right)-\left(\frac14-\frac14\right)-\space \cdots \space - \left(\frac{1}{n}-\frac{1}{n}\right)-\frac{1}{n+1}$$
Notice how each of the terms in parentheses is zero, so we are left with: $$\boxed{\text{Sum of first } n \text{ terms: }1-\frac{1}{n+1}}$$
If we want the infinite sum we must take the limit as $n \to \infty$, since $n$ is the number of terms in the partial sum. As $n$ becomes arbitrarily large, $\dfrac{1}{n+1}$ tends towards $0$, so the sequence of finite sums approaches: $$1-0 = \boxed{1}$$
Not all infinite series need to be arithmetic or geometric! This special one is called a telescoping series.
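The telescoping formula can be checked exactly in a few lines (a sketch; `telescoping_partial_sum` is just an illustrative name), using rational arithmetic so no rounding error intrudes:

```python
from fractions import Fraction

def telescoping_partial_sum(n):
    """Exact n-th partial sum of 1/(1*2) + 1/(2*3) + ... + 1/(n*(n+1))."""
    return sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))

# Each partial sum equals 1 - 1/(n+1) exactly:
for n in (1, 5, 100):
    assert telescoping_partial_sum(n) == 1 - Fraction(1, n + 1)

print(float(telescoping_partial_sum(100)))  # 0.990099..., approaching 1
```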
The operation of addition is a binary operation: it is an operation defined on pairs of real (or complex) numbers. When we write something like $a+b+c$, apparently adding three numbers, we’re really doing repeated addition of two numbers, either $(a+b)+c$ or $a+(b+c)$ (assuming that we don’t change the order of the terms); one of the basic properties of this operation is that it doesn’t actually matter in which order we do these repeated binary additions, because they all yield the same result.
It’s easy enough to understand what it means to do two successive additions to get $a+b+c$, or $200$ to get $a_0+a_1+\ldots+a_{200}$; it’s not so clear what it means to do infinitely many of them to get $\sum_{k\ge 0}a_k$. The best way that’s been found to give this concept meaning is to define this sum to be the limit of the finite partial sums:
$$\sum_{k\ge 0}a_k=\lim_{n\to\infty}\sum_{k=0}^na_k\tag{1}$$
provided that the limit exists. For each $n$ the summation inside the limit on the righthand side of $(1)$ is an ordinary finite sum, the result of performing $n$ ordinary binary additions. This is always a meaningful object. The limit may or may not exist; when it does, it’s a meaningful object, too, but it’s the outcome of a new kind of operation. It is not the result of an infinite string of binary additions; we don’t even try to define such a thing directly. Rather, we look at finite sums, which we can define directly from the ordinary binary operation of addition, and then take their limit. In doing this we combine an algebraic notion, addition, with an analytic notion, that of taking a limit.
Finite sums like $a_0+\ldots+a_{200}$ all behave the same way: they always exist, and we can shuffle the terms as we like without changing the sum. Infinite series do not behave the same way: $\sum_{n\ge 0}a_n$ does not always exist, and shuffling the order of the terms can in some cases change the value. This really is a new operation, with different properties.
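The shuffling claim can be seen numerically (a sketch; the helper names are illustrative). The alternating harmonic series $1-\frac12+\frac13-\frac14+\cdots$ sums to $\ln 2$, but rearranging it to take one positive term followed by two negative terms gives $\frac12\ln 2$: the same terms in a different order, with a different sum.

```python
import math

def alternating(n_terms):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ... with n_terms terms."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    """Same terms, reordered: block j is +1/(2j-1) - 1/(4j-2) - 1/(4j)."""
    total = 0.0
    for j in range(1, n_blocks + 1):
        total += 1 / (2 * j - 1) - 1 / (4 * j - 2) - 1 / (4 * j)
    return total

print(alternating(100000))  # ~ 0.6931  (ln 2  = 0.693147...)
print(rearranged(100000))   # ~ 0.3466  (ln 2 / 2 = 0.346573...)
```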