The sum of the numbers on all of the hats must be congruent to one of $0, 1, 2 \pmod{3}$. If a gnome knows the sum of all the hats $\pmod{3}$ and the sum of the other gnomes' hats $\pmod{3}$, he can subtract to get the number on his hat $\pmod{3}$, and thus, the number on his hat.
Since the gnomes don't know the sum of all the hats $\pmod{3}$, they do the following: gnome $1$ assumes that the sum of the numbers on all the hats is $1 \pmod{3}$, gnome $2$ assumes that the sum of the numbers on all the hats is $2 \pmod{3}$, and gnome $3$ assumes that the sum of the numbers on all the hats is $0 \pmod{3}$. They each calculate the number on their hat based on this assumption. One of these gnomes must have made the right assumption, and thus, guesses their hat correctly.
For $n$ gnomes, the $i$-th gnome assumes the sum of the numbers on all of the hats is $i \pmod{n}$, and guesses his hat accordingly. By the same logic, one of the gnomes guesses correctly.
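A minimal sketch of this strategy in Python (assuming, for concreteness, that each hat carries a value in $\{0, \ldots, n-1\}$; the function name `play` is my own):

```python
import random

def play(hats):
    """Gnome i (1-indexed) assumes the grand total is i mod n and
    deduces his own hat from the sum of the hats he can see."""
    n = len(hats)
    total = sum(hats)
    guesses = []
    for i in range(1, n + 1):
        others = total - hats[i - 1]      # what gnome i actually sees
        guesses.append((i - others) % n)  # hat forced by his assumption
    return guesses

# whichever gnome's assumption matches the true total mod n is right,
# and exactly one assumption matches -- always
for _ in range(1000):
    hats = [random.randrange(5) for _ in range(5)]
    correct = sum(g == h for g, h in zip(play(hats), hats))
    assert correct == 1
```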
Note: If each hat value is uniformly random, then no matter what strategy the $n$ gnomes use, each individual gnome's guess is correct with probability exactly $1/n$. By linearity of expectation, the expected number of gnomes who guess correctly is always $n \cdot 1/n = 1$. Thus, if the gnomes' strategy has a non-zero probability of two or more gnomes guessing correctly, it must also have a non-zero probability of zero gnomes guessing correctly. Therefore, in any strategy that guarantees at least one correct guess, exactly one gnome guesses correctly every time (no more and no fewer).
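The "exactly $1/n$ each" claim can be checked exhaustively for small $n$ (a brute-force sketch; since only residues mod $n$ matter, the hats range over $\{0, \ldots, n-1\}$):

```python
from itertools import product

n = 3
wins = [0] * n   # wins[i]: assignments in which gnome i+1 guesses right
for hats in product(range(n), repeat=n):
    total = sum(hats)
    for i in range(1, n + 1):
        guess = (i - (total - hats[i - 1])) % n
        wins[i - 1] += (guess == hats[i - 1])

# each gnome is right in exactly 1/n of the n**n equally likely assignments
assert all(w == n ** (n - 1) for w in wins)
```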
Skip forward to below the "=====" for the answer that you're probably looking for. Read the first part to find out why I put it after the "=====".
To make sense of this, you need to think about probability spaces. And to do that right, you need more information about the meaning of the words in your question.
Case 1: There's a distribution $d$ from which the hats for each person are drawn independently randomly. The players guess black/white uniformly randomly. In this case, the expected number of correct guesses is 50 out of 100.
Case 2: There's a distribution as before, again with hat colors drawn independently, but the players get to look at the others' hats before guessing; each then guesses black with probability equal to the fraction of black hats among the 99 hats they can see. (Roughly: if 95 of the others have black hats and 4 have white hats, you guess "black" 95 out of 99 times, perhaps by rolling a die to generate your guess.) The expected number of correct guesses in this case is always at least 50, but can be far greater. If the distribution $d$ is highly skewed, this strategy wins big. Note that the players are still "guessing randomly" here ... just not uniformly randomly.
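Case 2 is easy to check by simulation (a hedged sketch; the name `expected_correct`, the parameter `d_black`, and the trial count are my own choices):

```python
import random

def expected_correct(d_black, n=100, trials=5000):
    """Each hat is black independently with probability d_black; each
    player guesses black with probability equal to the fraction of
    black hats among the other n-1 players."""
    total = 0
    for _ in range(trials):
        hats = [random.random() < d_black for _ in range(n)]
        blacks = sum(hats)
        for mine in hats:
            p_black = (blacks - mine) / (n - 1)   # fraction this player sees
            total += ((random.random() < p_black) == mine)
    return total / trials

# near d_black = 0.5 this hovers around 50; for a skewed d_black = 0.95
# it is roughly 100 * (0.95**2 + 0.05**2) ~ 90 -- far above 50
```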
Case 3: The hat-placer is an adversary, and has thought about strategies you might employ. The hat-placer carefully chooses a number $k$ of black hats and $100-k$ white hats, and then distributes these randomly among the players by picking uniformly randomly a permutation of the numbers $1, \ldots, 100$. (Note that this still meets the condition that each hat "was put on their head randomly".) The players guess uniformly randomly from "black" or "white", without observing the others' hats. The expected number of correct guesses is again 50.
Case 4: Same adversarial setup as in case 3, but the players use the 'Bayesian' approach of case 2. In this case, the adversary will presumably optimize, which turns out to mean setting $k = 50$, and the expected number of correct guesses is again essentially 50 (exactly $4900/99 \approx 49.5$ under this guessing rule); any skew in $k$ only helps the players.
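For cases 3 and 4 we can let the adversary sweep over $k$ (again a sketch; the function name and trial count are mine):

```python
import random

def correct_with_k(k, n=100, trials=4000):
    """k black hats dealt in a random order; players use the
    proportional guessing rule of case 2."""
    total = 0
    for _ in range(trials):
        hats = [True] * k + [False] * (n - k)
        random.shuffle(hats)
        for mine in hats:
            p_black = (k - mine) / (n - 1)   # black hats this player sees
            total += ((random.random() < p_black) == mine)
    return total / trials

# exact expectation is (k*(k-1) + (100-k)*(99-k)) / 99, minimized at
# k = 50 where it equals 4900/99 ~ 49.5; skewing k helps the players
```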
=====
Anyhow, case 2 makes the point that saying what distribution is being used in each step of randomness in the problem is critical to assessing expected values. Just saying "randomly" doesn't guarantee uniform randomness. And "straight guesses" doesn't actually mean much of anything to me, although I'm guessing that to you it means "uniformly randomly chosen from 'black' and 'white'."
Let me ramble on a little further still, and formulate the problem a little differently.
You have a fixed but unknown list of 100 bits, $b_1, \ldots, b_{100}$, each either a $0$ or a $1$.
You generate another list of 100 bits, $c_i$, $i = 1, \ldots, 100$, chosen independently and identically distributed from the uniform distribution on the set $\{0, 1\}$.
You ask "What is the expected number of $i$ for which $b_i = c_i$?"
The answer in this case is $50$, and does not depend on the initial bit sequence $b$. The proof is straightforward: the probability space consists of all possible $c$-sequences; there are $2^{100}$ of these, each equally probable.
If we look at the $i$th digit of each of these sequences, $c_i$ is zero in half of them and one in the other half. Hence the probability that $c_i$ equals $b_i$ is exactly $1/2$, and the expected value of the indicator of the event $c_i = b_i$ is $1/2$. By linearity of expectation, the expected number of matching bits is the sum of the expected number of matching first bits, matching second bits, and so on, hence is $100 \cdot 1/2 = 50$.
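This is small enough to verify exhaustively for, say, $n = 10$ bits (a sketch; the exact average is $n/2$ regardless of the fixed list $b$):

```python
import random
from itertools import product

n = 10
b = [random.randrange(2) for _ in range(n)]   # arbitrary fixed bit list

# average number of matches over all 2**n equally likely c-sequences
total = sum(sum(bi == ci for bi, ci in zip(b, c))
            for c in product(range(2), repeat=n))
average = total / 2 ** n
assert average == n / 2   # exactly n/2, whatever b happens to be
```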
If the villain isn't so demented as to tell you things you already knew, you can reason like this: had you both had ink dots, each of you would already see one on the other's forehead, so the villain's announcement that one of you may have a dot would have told you nothing new. Since the villain bothered to say it, you don't both have one; thus if you each guess that you don't have one, at least one of you must be right.