[Math] Difference between Minimax theorem and Nash Equilibrium existence

game theory

Von Neumann's Minimax theorem (quoted from Wikipedia):

For every two-person, zero-sum game with finite strategies, there exists a value V and a mixed strategy for each player, such that (a) Given player 2's strategy, the best payoff possible for player 1 is V, and (b) Given player 1's strategy, the best payoff possible for player 2 is −V.
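In matrix form (a standard restatement, with $A$ denoting player 1's $m \times n$ payoff matrix and $\Delta_k$ the set of mixed strategies, i.e. probability vectors in $\mathbb{R}^k$):
$$\max_{x \in \Delta_m} \min_{y \in \Delta_n} x^\top A\, y \;=\; \min_{y \in \Delta_n} \max_{x \in \Delta_m} x^\top A\, y \;=\; V.$$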

Nash's theorem:

Every finite game has a mixed strategy equilibrium.

Now, to me, it seems that the Minimax theorem is simply a special case of Nash's theorem for two-player zero-sum games: once an equilibrium is established, the claims about the game value follow immediately.
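Concretely, in the matrix notation above: if $(x^*, y^*)$ is a Nash equilibrium of the zero-sum game with payoff matrix $A$, set $V = (x^*)^\top A\, y^*$. The equilibrium conditions say that $x^\top A\, y^* \le V \le (x^*)^\top A\, y$ for all mixed strategies $x, y$, hence
$$\min_{y} \max_{x}\, x^\top A\, y \;\le\; \max_{x}\, x^\top A\, y^* \;=\; V \;=\; \min_{y}\, (x^*)^\top A\, y \;\le\; \max_{x} \min_{y}\, x^\top A\, y,$$
and since $\max_x \min_y \le \min_y \max_x$ always holds, all four quantities equal $V$. This is the "follows immediately" step I have in mind.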

But in my Game Theory course, we studied these as two separate theorems, with entirely different proofs. Some exams even asked for proofs of both theorems as separate questions, which makes the claim that "Minimax follows immediately from Nash's theorem" look suspect.

Am I misunderstanding some fundamental difference between these two theorems? Or did we simply learn two different proofs of the same thing?

Best Answer

Wikipedia agrees with you, saying "In zero-sum games, the minimax solution is the same as the Nash equilibrium" (in the article on Minimax). So Nash's existence theorem is indeed a generalization of the Minimax theorem.

Presumably, the proof of the minimax theorem is much simpler than the proof of the general theorem. Another crucial difference is that the proof of the minimax theorem is constructive (it amounts to solving a linear program), whereas finding a Nash equilibrium is PPAD-complete, even for two-player games. It is even hard to find an approximate Nash equilibrium.
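To illustrate the constructive part: the value and an optimal strategy for a two-player zero-sum game can be computed with a single linear program. Here is a minimal Python sketch using scipy.optimize.linprog (the helper name solve_zero_sum and the rock-paper-scissors test matrix are my own illustration, not something from the course or Wikipedia):

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Value V and an optimal mixed strategy x for the row player
    (the maximizer) of the zero-sum game with payoff matrix A."""
    m, n = A.shape
    # Variables: x_1, ..., x_m (mixed strategy) and v (game value).
    # Maximize v, i.e. minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every pure column j: sum_i A[i, j] * x_i >= v,
    # written as -A^T x + v <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # The strategy entries must sum to 1.
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]  # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    return res.x[-1], res.x[:m]

# Rock-paper-scissors: value 0, optimal strategy (1/3, 1/3, 1/3).
A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]])
v, x = solve_zero_sum(A)
print(v, x)  # ~0.0, ~[0.333 0.333 0.333]
```

By LP duality, the column player's optimal strategy solves the dual program and the two optimal values coincide, which is exactly the content of the minimax theorem.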