After writing up this answer I found this paper that comes to the same conclusions and also generalizes the problem to $q$ hat colours. I'm posting the answer anyway to have it here on math.SE in self-contained form.
As described in the Wikipedia article Gerry linked to and this book it references, an optimal strategy concentrates the wrong guesses onto as few configurations as possible. Each individual player guesses incorrectly exactly half the time if she doesn't pass, and ideally these incorrect guesses should all be concentrated on configurations where everyone guesses incorrectly whereas the correct guesses should ideally be spread out one per configuration.
Let's denote the set $\def\red{\text{red}}\def\blue{\text{blue}}\def\pass{\text{pass}}\{\red,\blue\}$ of hat colours by $H$ and the set $\{\red,\blue,\pass\}$ of guesses by $G$. Then a strategy for $n$ prisoners is a function $H^n\to G^n$ such that the $k$-th entry of the value doesn't depend on the $k$-th entry of the argument.
Given a strategy, we're interested in the proportion of vectors in $H^n$ for which the prescribed guess vector is not the constant $\pass$ vector and all of its non-pass entries match the corresponding entries of the argument. Let's call such vectors good and the others bad.
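To make the definitions concrete, here is a small Python sketch (the function names are my own) that classifies every vector for the classic three-prisoner strategy: each prisoner looks at the other two hats, guesses the opposite colour if they match, and passes otherwise.

```python
from itertools import product

PASS = None  # encode hat colours as 0/1 and a pass as None

def majority_strategy(config):
    """Three-prisoner strategy: entry k never depends on entry k of config."""
    guesses = []
    for k in range(3):
        others = [config[i] for i in range(3) if i != k]
        if others[0] == others[1]:
            guesses.append(1 - others[0])  # bet the hats are not all equal
        else:
            guesses.append(PASS)
    return tuple(guesses)

def is_good(config, strategy):
    """Good: at least one prisoner guesses, and every guess is correct."""
    guesses = strategy(config)
    stated = [(g, c) for g, c in zip(guesses, config) if g is not PASS]
    return bool(stated) and all(g == c for g, c in stated)

good = [c for c in product((0, 1), repeat=3) if is_good(c, majority_strategy)]
bad = [c for c in product((0, 1), repeat=3) if not is_good(c, majority_strategy)]
print(len(good), len(bad))  # 6 2: only the two monochromatic vectors are bad
```

Note how the three incorrect guesses are concentrated on each of the two bad vectors, while each good vector receives exactly one (correct) guess.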
Adjacent to every good vector $g\in H^n$ is at least one bad vector $b\in H^n$ that differs from $g$ only in the hat colour of one of the prisoners who guess correctly for $g$ (of which there is at least one): since that prisoner's guess doesn't depend on her own hat, her guess is the same for $b$ but now incorrect, so $b$ is bad. Conversely, given a subset $S\subseteq H^n$ of bad vectors such that every vector in $H^n\setminus S$ is adjacent to at least one element of $S$, we can make all vectors outside $S$ good by assigning to each of them an adjacent vector in $S$ (arbitrarily if there's more than one) and letting only the prisoner in whose entry the two vectors differ guess, namely the colour that yields the vector outside $S$.
Thus, an optimal strategy is defined by a minimal subset $S\subseteq H^n$ such that all vectors in $H^n$ are adjacent to at least one element of $S$. Such a minimal subset is called a binary optimal covering code of length $n$ and radius $1$, and the number of vectors in such a minimal subset is denoted by $K(n,1)$. This table (linked to from this page) gives the known bounds on $K(n,1)$ up to $n=33$.
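A brute-force sketch (feasible only for tiny $n$; the helper names are mine) that recovers the first entries of that table, $K(3,1)=2$ and $K(4,1)=4$, by searching for a smallest radius-$1$ covering code:

```python
from itertools import combinations, product

def covers(code, n):
    """Is every vector of {0,1}^n within Hamming distance 1 of some codeword?"""
    def ball(v):  # the radius-1 Hamming ball around v
        yield v
        for i in range(n):
            yield v[:i] + (1 - v[i],) + v[i + 1:]
    codeset = set(code)
    return all(any(u in codeset for u in ball(v))
               for v in product((0, 1), repeat=n))

def K(n):
    """Smallest size of a binary covering code of length n and radius 1."""
    vectors = list(product((0, 1), repeat=n))
    for size in range(1, 2 ** n + 1):
        if any(covers(code, n) for code in combinations(vectors, size)):
            return size

print(K(3), K(4))  # 2 4
```

For $n=3$ the search finds covering codes such as $\{000,111\}$, matching the bad vectors of the three-prisoner strategy above.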
For $n=2^k-1$ the Hamming codes described in the article and the book are optimal binary covering codes with $2^{n-k}$ vectors, giving winning probability $1-2^{n-k}/2^n=1-2^{-k}=n/(n+1)$. For other values of $n$, the exact value of $K(n,1)$ is known only for $n\le 9$ and for $n$ a power of $2$; the lower bound for $K(27,1)$ was improved only recently.
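The Hamming-code strategy can be verified by exhaustive simulation. In the sketch below (my own rendition: prisoners are labelled $1,\dots,n$ and red is encoded as $1$), the syndrome is the XOR of the labels of the red hats, and the prisoners collectively bet that the true configuration is not a codeword, i.e. that its syndrome is nonzero.

```python
from itertools import product

def hamming_strategy(config):
    """Hamming-code strategy for n = 2^k - 1 prisoners labelled 1..n."""
    n = len(config)
    guesses = []
    for j in range(1, n + 1):
        t = 0  # XOR of labels of the red hats prisoner j can see
        for i in range(1, n + 1):
            if i != j and config[i - 1] == 1:
                t ^= i
        if t == 0:        # syndrome is nonzero only if own hat is red
            guesses.append(1)
        elif t == j:      # syndrome is nonzero only if own hat is blue
            guesses.append(0)
        else:             # syndrome is nonzero either way: pass
            guesses.append(None)
    return guesses

wins = 0
for config in product((0, 1), repeat=7):
    guesses = hamming_strategy(config)
    stated = [(g, c) for g, c in zip(guesses, config) if g is not None]
    if stated and all(g == c for g, c in stated):
        wins += 1
print(wins, "/", 2 ** 7)  # 112 / 128, i.e. probability 7/8
```

The strategy loses exactly on the $2^{7-3}=16$ codewords, where all seven prisoners guess and all are wrong, just as the concentration argument above demands.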
This is a classic riddle, whose solution surprisingly requires the axiom of choice as discussed here.
The solution, however, is short and clever. Encode the colors into $0$ and $1$, and define the equivalence relation on $2^\Bbb N$, $\langle x_i\rangle\sim\langle y_i\rangle$ if and only if there is some $k$ such that for all $n\geq k$, $x_n=y_n$. Using the axiom of choice the prisoners pick a representative from each equivalence class.
When his turn comes, the $n$-th prisoner looks up the representative of the equivalence class of the string of hats he sees ahead, taking all the hats up to and including his own to be blue (any choice works, since changing finitely many entries doesn't change the class). Since all the prisoners read their guesses off the same representative, the representative and the fashionable selection of hats by the warden agree from some point on, so after finitely many deaths everyone else will survive.
-- "The needs of the many, outweigh the needs of the few."
It is in general an open problem to find the optimal probability of success for $n$ prisoners when $n\notin\{2^k-1,2^k\mid k\in \mathbb N_+\}$; outside those two families the exact answer is known only for $n\le 9$. The best known bounds for $n$ up to $33$ are given in the table in the linked answer.
Even though an optimal strategy is unknown, the following is a pretty good strategy. Find the largest integer $m$ such that $m\le n$ and $m=2^k-1$, for some $k$. The first $m$ prisoners use the Hamming code strategy, while the remaining $n-m$ prisoners pass. This gives a probability of failure of $1/(m+1)$, which is at most $2/(n+1)$. Since it can be shown that every strategy has a failure rate of at least $1/(n+1)$, this strategy is within a factor of $2$ of optimal.
For $n=100$, the success rate is $63/64\approx 98.4\%$.
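A quick sketch of this bookkeeping (the helper name `best_hamming_length` is mine): find the largest $m=2^k-1$ with $m\le n$ and compare the resulting failure rate with the bounds $1/(n+1)$ and $2/(n+1)$.

```python
def best_hamming_length(n):
    """Largest m <= n of the form 2^k - 1 (assumes n >= 1)."""
    m = 1
    while 2 * m + 1 <= n:
        m = 2 * m + 1  # 1, 3, 7, 15, ...
    return m

for n in (7, 10, 100):
    m = best_hamming_length(n)
    print(n, m, 1 / (m + 1), 2 / (n + 1), 1 / (n + 1))
# e.g. n = 100 gives m = 63: failure 1/64, between 1/101 and 2/101
```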
I will now prove that every strategy must fail with probability at least $1/(n+1)$. Fix a strategy, and tabulate the guesses made in each of the $2^n$ possible scenarios. Whenever the prisoners win, there is at least one correct guess, so the total number of correct guesses is at least $W$, the number of winning scenarios. Whenever the prisoners lose, there are at most $n$ incorrect guesses, and since incorrect guesses occur only in losing scenarios, the total number of incorrect guesses is at most $nL$, where $L$ is the number of losing scenarios. Furthermore, each prisoner is correct in exactly half of the scenarios in which she guesses: her guess doesn't depend on her own hat, so those scenarios pair up into pairs differing only in her hat, and she is right in exactly one scenario of each pair. Hence the total numbers of correct and incorrect guesses are equal, and combining the three observations gives $$ W\le nL, $$ which can be manipulated to show that $$ P(\text{losing})=\frac{L}{W+L}\ge \frac{1}{n+1}. $$
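The counting argument can be checked exhaustively for the three-prisoner strategy (the code and names are mine). Note that here $W=nL$ holds with equality, which is exactly why that strategy is optimal.

```python
from itertools import product

def majority_strategy(config):
    """Three-prisoner strategy: guess the opposite if the two visible hats match."""
    guesses = []
    for k in range(3):
        a, b = (config[i] for i in range(3) if i != k)
        guesses.append(1 - a if a == b else None)
    return guesses

correct = incorrect = W = L = 0
for config in product((0, 1), repeat=3):
    guesses = majority_strategy(config)
    stated = [(g, c) for g, c in zip(guesses, config) if g is not None]
    right = sum(g == c for g, c in stated)
    correct += right
    incorrect += len(stated) - right
    if stated and right == len(stated):
        W += 1
    else:
        L += 1

print(correct, incorrect, W, L)  # 6 6 6 2: correct == incorrect and W == 3*L
```

The tally shows $6$ correct and $6$ incorrect guesses overall, $W=6$ winning and $L=2$ losing scenarios, so $P(\text{losing})=2/8=1/(3+1)$, matching the lower bound.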