Hodges, Wilfrid :: Logic and Games


Stories about Man vs. Nature, being ∃ vs ∀

In the published version of his John Locke lectures at Oxford, Hintikka in 1973 raised the Dawkins question (see above) for these games. His answer was that one should look to Wittgenstein's language games, and the language games for understanding quantifiers are those which revolve around seeking and finding. In the corresponding logical games one should think of ∃ as Myself and ∀ as a hostile Nature who can never be relied on to present the object I want; so to be sure of finding it, I need a winning strategy. This story was never very convincing; the motivation of Nature is irrelevant, and nothing in the logical game corresponds to seeking. In retrospect it is a little disappointing that nobody took the trouble to look for a better story. It may be more helpful to think of a winning strategy for ∃ in G(φ) as a kind of proof (in a suitable infinitary system) that φ is true.
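
To pin down what a play of G(φ) looks like: a minimal sketch (Python; the encoding and the name eloise_wins are mine, not anything from the article) of game-theoretic truth over a finite domain. ∃ moves at disjunctions and existentials, Nature at conjunctions and universals, and ∃ having a winning strategy coincides with ordinary first-order truth for sentences in negation normal form.

```python
# Formulas in negation normal form as nested tuples:
#   ('atom', test)           test: assignment dict -> bool
#   ('or', f, g) / ('and', f, g)
#   ('exists', var, f) / ('forall', var, f)

def eloise_wins(formula, domain, assignment=None):
    """Does ∃ have a winning strategy in G(formula) over a finite domain?
    ∃ moves at 'or'/'exists', ∀ (Nature) at 'and'/'forall'; by
    game-theoretic semantics this coincides with ordinary truth."""
    a = assignment or {}
    tag = formula[0]
    if tag == 'atom':
        return formula[1](a)
    if tag == 'or':      # ∃ picks a disjunct
        return any(eloise_wins(f, domain, a) for f in formula[1:])
    if tag == 'and':     # ∀ picks a conjunct
        return all(eloise_wins(f, domain, a) for f in formula[1:])
    if tag == 'exists':  # ∃ must find the object herself
        _, var, body = formula
        return any(eloise_wins(body, domain, {**a, var: d}) for d in domain)
    if tag == 'forall':  # hostile Nature presents the object
        _, var, body = formula
        return all(eloise_wins(body, domain, {**a, var: d}) for d in domain)
    raise ValueError(tag)

# ∀x ∃y (y > x) fails on a finite order: Nature plays the top element.
phi = ('forall', 'x', ('exists', 'y', ('atom', lambda a: a['y'] > a['x'])))
print(eloise_wins(phi, range(4)))  # False
```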

Could this be what I hope it is? Oh please let it be so. To be perused more thoroughly at a later point. To clarify: I hope it's about PRK's remainders, and plugging net holes.

Incentivized picture-making

Imagine ∃ taking an oral examination in proof theory. The examiner gives her a sentence and invites her to start proving it. If the sentence has the form

φ ∨ ψ

then she is entitled to choose one of the sentences and say ‘OK, I'll prove this one’. (In fact if the examiner is an intuitionist, he may insist that she choose one of the sentences to prove.) On the other hand if the sentence is

φ ∧ ψ

then the examiner, being an examiner, might well choose one of the conjuncts himself and invite her to prove that one. If she knows how to prove the conjunction then she certainly knows how to prove the conjunct.

The case of φ → ψ is a little subtler. She will probably want to start by assuming φ in order to deduce ψ; but there is some risk of confusion because the sentences that she has written down so far are all of them things to be proved, and φ is not a thing to be proved. The examiner can help her by saying ‘I'll assume φ, and let's see if you can get to ψ from there’. At this point there is a chance that she sees a way of getting to ψ by deducing a contradiction from φ; so she may turn the tables on the examiner and invite him to show that his assumption is consistent, with a view to proving that it isn't. The symmetry is not perfect: he was asking her to show that a sentence is true everywhere, while she is inviting him to show that a sentence is true somewhere. Nevertheless we can see a sort of duality.
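
The table-turning at implication can be caricatured in code. This is a classical toy only, assuming material implication, and nothing like Lorenzen's actual dialogue rules (which constrain the moves much further to get intuitionistic logic); the encoding and the name wins are mine.

```python
def wins(formula, val, proponent=True):
    """Can the current defender of `formula` win the examination game?
    wins(f, val, True) coincides classically with truth of f under val,
    wins(f, val, False) with falsity."""
    tag = formula[0]
    if tag == 'atom':
        return val[formula[1]] == proponent
    if tag == 'or':      # the defender of a disjunction picks the disjunct
        pick = any if proponent else all
        return pick(wins(f, val, proponent) for f in formula[1:])
    if tag == 'and':     # the attacker (the examiner) picks the conjunct
        pick = all if proponent else any
        return pick(wins(f, val, proponent) for f in formula[1:])
    if tag == 'imp':
        _, f, g = formula
        if proponent:    # refute the assumed antecedent (roles swap) or prove g
            return wins(f, val, False) or wins(g, val, True)
        return wins(f, val, True) and wins(g, val, False)
    raise ValueError(tag)

# With p true and q false she wins ((p → q) → p) by refuting p → q:
print(wins(('imp', ('imp', ('atom', 'p'), ('atom', 'q')), ('atom', 'p')),
           {'p': True, 'q': False}))  # True
```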

Ideas of this kind lie behind the dialectical games of Paul Lorenzen. He showed that with a certain amount of pushing and shoving, one can write rules for the game which have the property that ∃ has a winning strategy if and only if the sentence that she is presented with at the beginning is a theorem of intuitionistic logic. In a gesture towards medieval debates, he called ∃ the Proponent and the other player the Opponent. Almost as in the medieval obligationes, the Opponent wins by driving the Proponent to a point where the only moves available to her are blatant self-contradictions.

Lorenzen claimed that his games provided justifications for both intuitionist and classical logic (or in his words, made them ‘gerechtfertigt’, Lorenzen (1961, 196)). Unfortunately any ‘justification’ involves a convincing answer to the Dawkins question, and this Lorenzen never provided. For example he spoke of moves as ‘attacks’, even when (like the examiner's choice at φ ∧ ψ above) they look more like help than hostility.

The article Dialogical Logic gives a fuller account of Lorenzen's games and a number of more recent variants. In its present form (January 2013) it sidesteps Lorenzen's claims about justifying logics. Instead it describes the games as providing semantics for the logics (a point that Lorenzen would surely have agreed with), and adds that for understanding the differences between logics it can be helpful to compare their semantics.

From this point of view, Lorenzen's games stand as an important paradigm of what recent proof theorists have called semantics of proofs. A semantics of proofs gives a ‘meaning’ not just to the notion of being provable, but to each separate step in a proof. It answers the question ‘What do we achieve by making this particular move in the proof?’ During the 1990s a number of workers at the logical end of computer science looked for games that would stand to linear logic and some other proof systems in the same way as Lorenzen's games stood to intuitionist logic. Andreas Blass, and later Samson Abramsky and colleagues, gave games that corresponded to parts of linear logic, but at the time of writing we don't yet have a perfect correspondence between game and logic. This example is particularly interesting because the answer to the Dawkins question should give an intuitive interpretation of the laws of linear logic, a thing that this logic has badly needed. The games of Abramsky et al. tell a story about two interacting systems. But while Abramsky began with games in which the players politely take turns, he later allowed the players to act ‘in a distributed, asynchronous fashion’, taking notice of each other only when they choose to. These games are no longer in the normal format of logical games, and their real-life interpretation raises a host of new questions.

Reread and reduce. Something in here about semantics being picture-making, and potentially-but-not-quite-adversarial (incentive-driven?) picture-making. It might be a step or two higher in the chain of game construction dependencies.

Complexity and counterfactuals, and also time-space tradeoffs

Another kind of logical game that allows lies is Ulam's Game with Lies. Here one player thinks of a number in some given range. The second player's aim is to find out what that number is, by asking the first player yes/no questions; but the first player is allowed to tell some fixed number of lies in his answers. As in Pudlák's games, there is certainly a winning strategy for the second player, but the question is how hard this player has to work in order to win. The measure this time is not space or memory but time: how many questions does he have to ask? Cignoli et al. 2000, Chapter 5, relate this game to many-valued logic.
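
The questioner's bookkeeping is easy to make concrete, even though the hard part of the subject, the optimal questioning strategies that the question counts measure, is not. A sketch (all names mine): track, for each number of lies the responder may have spent, which candidates remain.

```python
def update(state, question, answer):
    """One round of Ulam's game with at most k = len(state)-1 lies.
    state[i] = candidates consistent with exactly i lies so far;
    question = a set ('is your number in here?'); answer = True/False."""
    k = len(state) - 1
    new = [set() for _ in state]
    for i, cands in enumerate(state):
        for x in cands:
            if (x in question) == answer:
                new[i].add(x)        # the answer is truthful about x
            elif i < k:
                new[i + 1].add(x)    # the answer is a lie about x: one more spent
    return new                       # (at i == k a lying answer eliminates x)

def resolved(state):
    alive = set().union(*state)
    return next(iter(alive)) if len(alive) == 1 else None

# Secret 5 in range(8), one lie allowed; the responder lies on round two.
state = [set(range(8)), set()]
state = update(state, set(range(4)), False)  # truthful
state = update(state, {4, 5}, False)         # the lie
state = update(state, {5}, True)             # truthful
print(state, resolved(state))                # candidates left: {5, 6, 7}
```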

This relies really nicely on modal logic, but is this really what happens, say, in an algorithm dry run? Don't previous steps give you parts of an answer? (Parts measured in space? What is the dry run of a sort on a one-bit-online computer? Why didn't I pay more attention in Formal Methods round 1?)

Sound informal argument - or, soundness in models that are game objects - also, persuasion by order

To return for a moment to Lorenzen: he failed to distinguish between different stances that a person might take in an argument: stating, assuming, conceding, querying, attacking, committing oneself. Whether it is really possible to define all these notions without presupposing some logic is a moot point. But never mind that; a refinement of Lorenzen's games along these lines could serve as an approach to informal logic, and especially to the research that aims to systematise the possible structures of sound informal argument. On this front see Walton and Krabbe 1995. The papers in Bench-Capon and Dunne 2007 are also relevant.

Sound informal argument: or, ways to trade in propositions that could potentially be drafts of normative rules to follow? Ways to characterise what would happen, by modal logics, in counterfactual universes regarding other players? Persuasion in terms of who to convince first in a set of people about a rule change?

SHIT, who do you convince first to join your Vickrey-auction-subverting coalition? Or your regular-auction-subverting, or general game-subverting or -rewriting coalition?

Stability theory as a game, and possibly stability of mechanisms against predictably subversive players

In the traditional cut-and-choose game you take a piece of cake and cut it into two smaller pieces; then I choose one of the pieces and eat it, leaving the other one for you. This procedure is supposed to put pressure on you to cut the cake fairly. Mathematicians, not quite understanding the purpose of the exercise, insist on iterating it. Thus I make you cut the piece I chose into two, then I choose one of those two; then you cut this piece again, and so on indefinitely. Some even more unworldly mathematicians make you cut the cake into countably many pieces instead of two.

These games are important in the theory of definitions. Suppose we have a collection A of objects and a family S of properties; each property cuts A into the set of those objects that have the property and the set of those that don't. Let ∃ cut, starting with the whole set A and using a property in S as a knife; let ∀ choose one of the pieces (which are subsets of A) and give it back to ∃ to cut again, once more using a property in S; and so on. Let ∃ lose as soon as ∀ chooses an empty piece. We say that (A, S) has rank at most m if ∀ has a strategy which ensures that ∃ will lose before her m-th move. The rank of (A, S) gives valuable information about the family of subsets of A definable by properties in S.
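
For small finite (A, S) the value of this game can be brute-forced. A sketch under my own naming, counting the total number of cuts ∃ makes before ∀ can hand her an empty piece; the rank in the quoted sense is this value, up to the off-by-one hiding in ‘before her m-th move’.

```python
from functools import lru_cache

def survivable_cuts(A, S):
    """Total number of cuts ∃ makes, under optimal play by both sides,
    before ∀ can choose an empty piece (for finite A she always loses
    eventually).  A: iterable of objects; S: iterable of sets of objects
    acting as the 'knives'.  Brute force; assumes S is nonempty."""
    knives = [frozenset(s) for s in S]

    @lru_cache(maxsize=None)
    def value(piece):
        best = 0
        for s in knives:
            inside, outside = piece & s, piece - s
            if not inside or not outside:
                best = max(best, 1)  # ∀ takes the empty half: this cut is her last
            else:
                best = max(best, 1 + min(value(inside), value(outside)))
        return best

    return value(frozenset(A))

# Toy run: four points, knives are the proper initial segments.
A = {0, 1, 2, 3}
S = [{x for x in A if x < k} for k in (1, 2, 3)]
print(survivable_cuts(A, S))  # 3
```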

Variations of this game, allowing a piece to be cut into infinitely many smaller pieces, are fundamental in the branch of model theory called stability theory. Broadly speaking, a theory is ‘good’ in the sense of stability theory if, whenever we take a model A of the theory and S the set of first-order formulas in one free variable with parameters from A, the structure (A, S) has ‘small’ rank. A different variation is to require that at each step, ∃ divides into two each of the pieces that have survived from earlier steps, and again she loses as soon as one of the cut fragments is empty. (In this version ∀ is redundant.) With this variation, the rank of (A,S) is called its Vapnik-Chervonenkis dimension; this notion is used in computational learning theory.
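
The ∀-free variant has a standard equivalent formulation via shattering: the Vapnik-Chervonenkis dimension is the size of the largest subset of A on which the properties in S realise every possible trace. A brute-force sketch for small examples (names mine):

```python
from itertools import combinations

def vc_dimension(A, S):
    """Size of the largest subset of A shattered by S: every one of its
    2^d subsets must appear as (subset & s) for some s in S.
    Exponential brute force, only for small (A, S)."""
    sets = [frozenset(s) for s in S]
    elems = list(A)
    for d in range(len(elems), 0, -1):
        for cand in combinations(elems, d):
            cand = frozenset(cand)
            if len({cand & s for s in sets}) == 2 ** d:
                return d
    return 0

# Intervals over four points shatter pairs but never triples:
points = [1, 2, 3, 4]
intervals = [set(range(a, b + 1)) for a in points for b in points if a <= b]
print(vc_dimension(points, intervals))  # 2
```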

Stability theory, huh? Return when discussing mechanism stability in the context of Lonely Vickrey. (We keep returning to Lonely Vickrey.)

Game trees as derivation trees, and many-sorted logic

The games described in this section adapt almost trivially to many-sorted logic: for example the quantifier ∀xσ, where xσ is a variable of sort σ, is an instruction for player ∀ to choose an element of sort σ. This immediately gives us the corresponding games for second-order logic, if we think of the elements of a structure as one sort, the sets of elements as a second sort, the binary relations as a third and so on. It follows that we have, quite routinely, game rules for most generalised quantifiers too; we can find them by first translating the generalised quantifiers into second-order logic.
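
The adaptation really is trivial in code: tag each quantifier with a sort and let the relevant player choose from that sort's domain; monadic second-order quantification then falls out by adding the powerset of the individuals as a second sort. A sketch in the style of the evaluator above (names mine):

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def ms_wins(formula, domains, assignment=None):
    """Many-sorted evaluator: each quantifier names a sort, and the
    relevant player chooses from that sort's domain."""
    a = assignment or {}
    tag = formula[0]
    if tag == 'atom':
        return formula[1](a)
    if tag == 'exists':   # ∃ chooses an element of the named sort
        _, var, sort, body = formula
        return any(ms_wins(body, domains, {**a, var: d}) for d in domains[sort])
    if tag == 'forall':   # ∀ chooses an element of the named sort
        _, var, sort, body = formula
        return all(ms_wins(body, domains, {**a, var: d}) for d in domains[sort])
    raise ValueError(tag)

# Monadic second-order ∀X ∃x (x ∈ X ∨ X = ∅), sets as a second sort:
inds = [0, 1, 2]
domains = {'ind': inds, 'set': powerset(inds)}
phi = ('forall', 'X', 'set',
       ('exists', 'x', 'ind',
        ('atom', lambda a: a['x'] in a['X'] or not a['X'])))
print(ms_wins(phi, domains))  # True
```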

This section, not just this quote - if the extensive game tree is a derivation tree moving through string-grammar production rules, can I look at a similar tree-like derivation for moving through graph-grammar production rules? What are the players, in that case?

(The generalization of duals to 2-structures keeps coming to mind - if I have a way to divide up a graph into equivalence classes, do I have a way to characterise players corresponding to winning strategies for each class? (Is that the right semantics choice? Do the classes correspond to something else?))

Forcing - or, design by committee

Forcing games are also known to descriptive set theorists as Banach-Mazur games; see the references by Kechris or Oxtoby below for more details of the mathematical background. Model theorists use them as a way of building infinite structures with controlled properties. To sketch the idea, imagine that a countably infinite team of builders are building a house A. Each builder has his or her own task to carry out: for example to install a bath or to wallpaper the entrance hall. Each builder has infinitely many chances to enter the site and add some finite amount of material to the house; these slots for the builders are interleaved so that the whole process takes place in a sequence of steps counted by the natural numbers.

To show that the house can be built to order, we need to show that each builder separately can carry out his or her appointed task, regardless of what the other builders do. So we imagine each builder as player ∃ in a game where all the other players are lumped together as ∀, and we aim to prove that ∃ has a winning strategy for this game. When we have proved this for each builder separately, we can imagine them going to work, each with their own winning strategy. They all win their respective games and the result is one beautiful house.

More technically, the elements of the structure A are fixed in advance, say as a0, a1, a2 etc., but the properties of these elements have to be settled by the play. Each player moves by throwing in a set of atomic or negated atomic statements about the elements, subject only to the condition that the set consisting of all the statements thrown in so far must be consistent with a fixed set of axioms written down before the game. (So throwing in a negated atomic sentence ¬φ has the effect of preventing any player from adding φ at a later stage.) At the end of the joint play, the set of atomic sentences thrown in has a canonical model, and this is the structure A; there are ways of ensuring that it is a model of the fixed set of axioms. A possible property P of A is said to be enforceable if a builder who is given the task of making P true of A has a winning strategy. A central point (due essentially to Ehrenfeucht) is that the conjunction of a countably infinite set of enforceable properties is again enforceable.
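
The interleaving that gives each of countably many builders infinitely many slots is ordinary diagonal bookkeeping, and it is exactly what lets each builder run their own winning strategy undisturbed in Ehrenfeucht's conjunction result. A minimal sketch:

```python
from itertools import count, islice

def turns():
    """Builder i gets a turn on every diagonal d >= i, hence infinitely
    many turns in total: the bookkeeping behind running countably many
    winning strategies side by side."""
    for d in count():
        for i in range(d + 1):
            yield i

print(list(islice(turns(), 10)))  # [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
```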

Now all I need is a way to replace φ with ¬φ, and I have rule-changing.

Stability theory, again noted

(Verbatim repeat of the stability theory passage quoted above under ‘Stability theory as a game’: cut-and-choose with infinitely many pieces, ‘good’ theories having small rank, and the ∀-free variant whose rank is the Vapnik-Chervonenkis dimension.)

.:ahem:. is this what I need for talking about Vickrey's instability

todo: clean up references to this node, Logic and Games (Stanford Encyclopedia of Philosophy), so we can delete it
