[Math] Games that never begin

descriptive-set-theory, set-theory

Games that never end play a major role in descriptive set theory. See, for example, Kechris' GTM.

Question: Does there exist a literature concerning games that never begin?

I have in mind two players, Alice and Bob, making alternate selections from ${\Bbb N}$, their moves indexed by increasing non-positive integers, the game terminating when Bob plays his move 0.

As for payoff sets and strategies, define these as for games that never end, mutatis mutandis.

One major difference: a pair of strategies, one for Alice and one for Bob, no longer determines a unique run of the game, but rather a set of runs, possibly empty. Even so, one may still say that Alice's strategy beats Bob's if every run compatible with both strategies belongs to the payoff set.
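To fix notation, here is one way the definitions might read (a sketch, assuming Bob moves at the even indices $0, -2, -4, \dots$ and Alice at the odd indices $-1, -3, \dots$, since Bob plays move 0 and the players alternate):

```latex
\[
  \text{a run:}\quad x \colon \mathbb{Z}_{\le 0} \to \mathbb{N},
\]
\[
  \text{$x$ is compatible with Alice's strategy $\sigma$:}\quad
  x(n) = \sigma\bigl(x \restriction (-\infty, n)\bigr)
  \ \text{ at every turn $n$ of Alice's},
\]
\[
  \text{the run set:}\quad
  R(\sigma,\tau) = \bigl\{\, x : x \text{ compatible with both } \sigma \text{ and } \tau \,\bigr\}.
\]
```

Note that each strategy takes an entire infinite history $x \restriction (-\infty, n)$ as input, and that $R(\sigma,\tau)$, unlike the unique run of a game that never ends, may be empty.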

Another major difference involves the set-theoretic size of strategies. Now Alice and Bob play every move in the light of an infinite history. So size considerations mean that certain familiar arguments (for example, the construction of non-determined games from the axiom of choice) don't work in any obvious way.

Question: What payoff sets give determined games that never begin?

Edits: By the cold light of morning, I see that I abused the words "mutatis mutandis." À la Kechris, Alice's strategy beats Bob's if the unique run of the game falls in the payoff set. What I had in mind here was that Alice's strategy beats Bob's if the run set (the set of runs consistent with both strategies) is a subset of the payoff set. Joel David Hamkins' clever answer remains trenchant, only now with the import that Alice always wins by playing a strategy whose run set is empty regardless of Bob's strategy.

Joel's Alice needs infinite memory, but if her strategy at each of her moves consists of increasing her own previous move by 1, that also necessarily produces an empty run set regardless of Bob's strategy: any compatible run would yield, reading backwards in time, an infinite strictly decreasing sequence of natural numbers, which is impossible.

Possible fix 1: Alice must play an engaged strategy, a strategy that produces a nonempty run set against at least one strategy of Bob's.
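In symbols (writing $R(\sigma,\tau)$ for the set of runs compatible with both strategies), the engagement condition might be stated as:

```latex
\[
  \sigma \text{ is engaged} \iff \exists\,\tau\ \ R(\sigma,\tau) \neq \varnothing.
\]
```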

Possible fix 2: The residual game at any turn has only a finite future, so one player has a winning strategy from that point on. Call a strategy rational if it requires the player to play a winning move in the residual game whenever one exists, and strongly rational if it requires the player to play the least winning move whenever one exists. To avoid easily constructed disengaged strategies that don't even reference the payoff set, insist on rational, or even strongly rational, strategies.
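The backward induction behind Fix 2 can be made concrete. A minimal sketch, under hypothetical simplifying assumptions not in the original setup: moves are drawn from a finite set, the infinite past is held fixed and folded into a `payoff` predicate on the finite suffix of remaining moves, and Alice wins the residual game iff that predicate holds of the completed suffix.

```python
def alice_wins(suffix, moves_left, payoff, moves, alice_to_move):
    """True iff Alice wins the residual game under optimal play,
    given the moves already appended in `suffix`."""
    if moves_left == 0:
        return payoff(suffix)
    # Alice needs SOME winning continuation; Bob must be beaten by ALL.
    branch = any if alice_to_move else all
    return branch(
        alice_wins(suffix + (m,), moves_left - 1, payoff, moves, not alice_to_move)
        for m in moves
    )

def least_winning_move(suffix, moves_left, payoff, moves, alice_to_move):
    """The 'strongly rational' choice: the least winning move for the
    player to move, or None when no winning move exists."""
    for m in sorted(moves):
        if alice_wins(suffix + (m,), moves_left - 1, payoff, moves,
                      not alice_to_move) == alice_to_move:
            return m
    return None

# Example: two moves remain, Alice then Bob, each choosing from {0, 1};
# Alice wins iff the sum is odd.  Bob can mirror Alice's move, so Alice
# has no winning first move:
assert least_winning_move((), 2, lambda s: sum(s) % 2 == 1, {0, 1}, True) is None
```

The exponential search is of course only illustrative; the point is that the finite horizon makes "winning move in the residual game" a decidable, well-defined notion, so rational and strongly rational are meaningful restrictions.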

Possible fix 3: Combine the previous fixes.

Best Answer

As a supplement to Joel's answer, you may want to look at this nice paper of Bollobás, Leader, and Walters concerning continuous games. As a starting point they discuss the classical Lion and Man game introduced by Rado. In this game a lion chases a man inside the unit disk; both have identical maximum speeds. The lion wins if he catches the man, and the man wins if he is never caught.

If the lion always runs directly toward the man, he will get arbitrarily close to the man but never catch him. On the other hand, if the lion instead moves at top speed so as to stay on the radial vector from the centre to the man, it was long considered 'clear' that this was a winning strategy. Proof: without loss of generality, the man stays on the boundary of the disk. However, in 1952, Besicovitch exhibited an ingenious winning strategy for the man! Thus, staying on the boundary is with loss of generality for the man.

Nonetheless, one can ask the perplexing question of whether the lion also has a winning strategy. In this particular game, it turns out that the answer is no. But by changing the metric space, Bollobás, Leader, and Walters prove that there are games in a similar vein where both lion and man have winning strategies!
