In addition to @pkofood 's answer (+1):
After 1,000 tosses you have 600 heads and 400 tails. What the fairness of the coin tells you is that in the next 99,000 tosses you are likely to get about 49,500 heads and 49,500 tails. That would mean that, over the whole 100,000 tosses, you would have about 50,100 heads and 49,900 tails, which is very close to a 50/50 split.
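A quick sanity check of the arithmetic (a minimal sketch; the even split of the remaining tosses is just the expected value, not a guarantee):

```python
# Observed after the first 1,000 tosses
heads, tails = 600, 400

# Expected outcome of the remaining 99,000 fair tosses: half heads, half tails
remaining = 99_000
heads += remaining // 2   # +49,500 expected heads
tails += remaining // 2   # +49,500 expected tails

total = heads + tails
print(heads, tails)       # 50100 49900
print(heads / total)      # 0.501 -- very close to 50%
```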
Your interpretation is known as the "Gambler's fallacy".
In the passage, the phrase "entirely predictable" is misleading; a single toss of a fair coin is not predictable at all (except that you can say "it will be heads or tails").
I'm assuming that in the 100-toss game, you win as soon as you get a successful guess of heads or tails. The answer to your question comes down to the geometric distribution:
$$X \sim \text{Geometric}(p),$$
which gives the probability distribution of the number of flips required to get a successful toss. More technically and generally, this is the number of Bernoulli trials needed to obtain the first success, and coin-tossing is a perfect example of a sequence of Bernoulli trials.
Under this distribution, if we have $x$ as the number of tosses required to win (i.e. correctly guess the coin toss), then with $p(x)$ being the probability of needing $x$ tosses, we have
$$p(x) = p(1 - p)^{x-1},$$
where $x$ is the number of tosses, and $p$ is the probability of our guess coming up, which is $1/2$. So we can see that the probability of needing just 1 toss to win is:
$$p(1)=0.5(1-0.5)^{0} = 0.5$$
This is to be expected. The probability of needing 2 tosses to win is
$$p(2) = 0.5(1-0.5)^1 = 0.25,$$
and the probability of needing exactly $101$ tosses to win is
$$p(101) = 0.5(1 - 0.5)^{101-1} = 3.94\times10^{-31}.$$
(Losing the game means needing *more* than 100 tosses, which has probability $(1-0.5)^{100} \approx 7.89\times10^{-31}$, still vanishingly small.)
So you are pretty much guaranteed to win the 100-toss game!
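The numbers above are easy to check numerically. A minimal sketch in Python (`geom_pmf` is a name I've introduced for illustration):

```python
def geom_pmf(x, p=0.5):
    """P(first success occurs on toss x) for a sequence of Bernoulli(p) trials."""
    return p * (1 - p) ** (x - 1)

print(geom_pmf(1))   # 0.5
print(geom_pmf(2))   # 0.25

# Probability of losing the 100-toss game: no success in the first 100 tosses
p_lose = (1 - 0.5) ** 100
print(p_lose)        # ~7.89e-31
```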
Best Answer
In what follows, I will assume you mean someone gives you 1:1 odds on the loaded coin.
You are looking for the Kelly criterion, which states:
$$ f^* = \frac{ b p - q }{ b } $$
where (just copying from Wikipedia) $f^*$ is the fraction of your bankroll to wager, $b$ is the net fractional payout (in your case presumably $b=1$, i.e. a dollar invested returns an extra dollar if heads is thrown), $p$ is the probability of winning, and $q = 1-p$.
For your example, the fraction the Kelly criterion says to invest is $f^* = 0.75 - 0.25 = 0.5$; i.e., the Kelly criterion tells you to invest half of your savings each time you play.
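The formula is a one-liner to implement (a sketch; `kelly_fraction` is a name I've introduced):

```python
def kelly_fraction(p, b=1.0):
    """Kelly criterion: fraction of bankroll to wager.

    p -- probability of winning the bet
    b -- net fractional payout (1.0 means a 1:1 bet)
    """
    q = 1 - p
    return (b * p - q) / b

print(kelly_fraction(0.75))   # 0.5 -- bet half your bankroll
print(kelly_fraction(0.5))    # 0.0 -- a fair coin at 1:1 is not worth betting on
```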
As intuition: you don't want to invest all your money, even on a favorably loaded coin, because a single bad throw would wipe out your savings. What the Kelly criterion makes clear is exactly which function you should be maximizing.
The Kelly criterion requires that you maximize your winnings based on:
$$ p \ln(1 + b x) + (1-p) \ln(1 - x) $$
Maximizing for $x$ yields the equation for $f^*$ above.
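You can verify this maximization numerically with a simple grid search over the admissible fractions $0 \le x < 1$ (a sketch, using the example's values $p = 0.75$, $b = 1$):

```python
import math

p, b = 0.75, 1.0

def expected_log_growth(x):
    """The objective p*ln(1 + b*x) + (1-p)*ln(1 - x)."""
    return p * math.log(1 + b * x) + (1 - p) * math.log(1 - x)

# Grid search over fractions 0 <= x < 1 (x = 1 is excluded: ln(0) diverges)
xs = [i / 10_000 for i in range(10_000)]
best = max(xs, key=expected_log_growth)
print(best)   # 0.5 -- agrees with f* = (b*p - q)/b
```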
Unfortunately I am a little unclear as to why this particular equation is being maximized as opposed to some other equation. I have heard an information theoretic argument for why this is so (notice that the equation above looks very much like an entropy equation) but, sadly, still remain puzzled.
EDIT:
Well, I feel pretty dumb. As whuber points out in the comments, the Kelly criterion is quite easy to derive. If we assume you invest a constant proportion of your savings at each trial, call it $r$, then your wealth after $n$ trials, $W_n$, starting from initial savings $W_0$, will be:
$$ W_n = (1 + b r)^{p n} (1 - r)^{(1-p) n} W_0 $$
Taking logarithms, noting that this expression, as a function of $r$, has a single maximum, then setting the derivative equal to zero and solving gives the Kelly criterion formula $r = f^*$ as above.
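The derivation can be checked directly against the closed-form wealth formula: maximizing $W_n$ over $r$ should land on the same Kelly fraction. A sketch (deterministic version with exactly $pn$ wins, as in the formula above):

```python
# Wealth after n trials when reinvesting a constant fraction r each time,
# assuming exactly p*n winning trials (the closed-form W_n above)
p, b, n, W0 = 0.75, 1.0, 100, 1.0

def wealth(r):
    return (1 + b * r) ** (p * n) * (1 - r) ** ((1 - p) * n) * W0

# Grid search over 0 <= r < 1
rs = [i / 1000 for i in range(1000)]
r_star = max(rs, key=wealth)
print(r_star)   # 0.5 -- the same Kelly fraction f* = (b*p - q)/b
```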