Dietrich, Franz and List, Christian :: Where Do Preferences Come From?
- authors
- Dietrich, Franz and List, Christian
tl;dr Preferences over Properties.
The model discussed in this paper is one where the outcomes of a game each have various properties, and player preferences operate over these properties rather than over the outcomes themselves.
In addition, players filter their preferences on the basis of which properties they may consider relevant. That set of relevant properties is called a player's motivational state. Each player has a set of possible motivational states. Two axiom systems are presented: 1+2, and 1+3.
Axiom 1: motivational states that contain no properties that discriminate between two outcomes do not generate preferences that distinguish between those outcomes.
Axiom 2: for motivational states A, B with A \subset B, if B \setminus A contains no properties possessed by either of two outcomes, then B and A agree with respect to those outcomes.
Axiom 3: for motivational states A, B with A \subset B, if B \setminus A contains no properties that discriminate between two outcomes, then B and A agree with respect to those outcomes.
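In symbols, treating a property p as the set of outcomes that have it (matching the x \in p notation below; my paraphrase, not the paper's exact wording):

Axiom 1: if \forall p \in M: (x \in p \iff y \in p), then x \sim_M y.
Axiom 2: for A \subseteq B, if \forall p \in B \setminus A: (x \notin p \land y \notin p), then x \preceq_A y \iff x \preceq_B y.
Axiom 3: for A \subseteq B, if \forall p \in B \setminus A: (x \in p \iff y \in p), then x \preceq_A y \iff x \preceq_B y.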
A scenario that distinguishes the systems:
Suppose motivational states A \subset B \subset C; properties p_1 \in A, p_2 \in B \setminus A, p_3 \in C \setminus B; outcomes x \in p_1, y \notin p_1, x,y \in p_2, x \in p_3, y \notin p_3. Under the 1+2 system, there can exist a player preference family such that x \preceq_{A} y \land y \preceq_{B} x. Under the 1+3 system this would not be possible: p_2 contains both outcomes, which means it does not distinguish between them. But x \preceq_{B} y, y \preceq_{C} x is possible, because p_3 does distinguish between the outcomes.
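A quick sanity check of this scenario, with each property encoded as the set of outcomes that have it and the axiom conditions written out directly (my own encoding, not code from the paper):

    # Outcomes and motivational states (a property is encoded as the set of outcomes that have it).
    x, y = "x", "y"
    A = {"p1": {x}}
    B = {"p1": {x}, "p2": {x, y}}
    C = {"p1": {x}, "p2": {x, y}, "p3": {x}}

    def discriminates(prop, a, b):
        # A property discriminates between two outcomes iff exactly one of them has it.
        return (a in prop) != (b in prop)

    def axiom2_forces_agreement(smaller, larger, a, b):
        # Axiom 2 bites when no added property is possessed by either outcome.
        added = [prop for name, prop in larger.items() if name not in smaller]
        return all(a not in prop and b not in prop for prop in added)

    def axiom3_forces_agreement(smaller, larger, a, b):
        # Axiom 3 bites when no added property discriminates between the outcomes.
        added = [prop for name, prop in larger.items() if name not in smaller]
        return all(not discriminates(prop, a, b) for prop in added)

    print(axiom2_forces_agreement(A, B, x, y))  # False: 1+2 permits a reversal between A and B
    print(axiom3_forces_agreement(A, B, x, y))  # True:  1+3 forbids that reversal
    print(axiom3_forces_agreement(B, C, x, y))  # False: p3 discriminates, so even 1+3 permits one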
Two theorems are presented:
If \mathcal{M} is intersection-closed, an agent's family of preference-orders satisfies 1+2 iff it is "property-based".
If \mathcal{M} is subset-closed, an agent's family of preference-orders satisfies 1+3 iff it is "property-based in a separable way", i.e. the ranking over the powerset of all properties exhibits independence of irrelevant alternatives: S_1 \ge S_2 \text{ iff } S_1 \cup T \ge S_2 \cup T, for T disjoint from S_1 and S_2.
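To make "property-based" concrete: a minimal sketch (my own encoding, with a made-up numeric score standing in for the ranking over sets of properties described above) in which one ranking over bundles generates every motivational state's preferences:

    # Hypothetical weights over properties; an additive score over bundles is
    # automatically separable in the sense above.
    weights = {"p1": 2.0, "p2": 1.0, "p3": 3.0}

    def bundle_score(bundle):
        return sum(weights[p] for p in bundle)

    def props_of(outcome, properties):
        # properties: dict mapping property name -> set of outcomes that have it
        return {name for name, ext in properties.items() if outcome in ext}

    def weakly_prefers(a, b, motivstate, properties):
        # x \preceq_M y iff x's M-relevant property bundle scores no higher than y's.
        return (bundle_score(props_of(a, properties) & motivstate)
                <= bundle_score(props_of(b, properties) & motivstate))

    properties = {"p1": {"x"}, "p2": {"x", "y"}, "p3": {"y"}}
    print(weakly_prefers("x", "y", {"p1", "p2"}, properties))  # False: x's bundle scores 3.0, y's 1.0

A family generated this way satisfies Axiom 1 by construction (outcomes with the same M-relevant bundle get the same score), and, because the score is additive, properties possessed by neither or by both outcomes shift the comparison not at all, which gives Axioms 2 and 3.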
A topology over the properties?
Motivational states are connected to properties as follows:
if a player's set of possible motivational states is intersection-closed, then the player's family of preferences between alternatives satisfies 1+2 iff it is property-based, i.e. you can derive a single underlying ranking over property bundles that generates every state's preferences.
if a player's set of possible motivational states is subset-closed, then the player's family of preferences satisfies 1+3 iff it is property-based in a separable way.
A proof of these theorems would probably give me the types I could write a simulator for this in.
In the latter case (subset-closed), the set of a player's possible motivational states is a topology over the union of all the possible motivational states, i.e. over the full set of properties the player can ever attend to.
In the former case (intersection-closed), the same family is a pi-system. (See also the table here.)
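Checking which case a given \mathcal{M} falls into is mechanical, and writing it down suggests the types a simulator would want (motivational states as frozensets of property names); a sketch:

    from itertools import combinations

    def is_intersection_closed(M):
        # pi-system: the intersection of any two possible states is also a possible state.
        return all(a & b in M for a, b in combinations(M, 2))

    def is_subset_closed(M):
        # every subset of a possible state is also a possible state.
        return all(frozenset(sub) in M
                   for state in M
                   for r in range(len(state) + 1)
                   for sub in combinations(state, r))

    M = {frozenset(), frozenset({"p1"}), frozenset({"p2"}), frozenset({"p1", "p2"})}
    print(is_intersection_closed(M), is_subset_closed(M))  # True True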
Properties of outcomes as intentions
Agents select for outcomes, operating from inside certain contexts. Properties that are derivable per DL's theorems 1 and 2 can then be said to be a construction of "what the agent was going for."
What is the organizing principle of the properties that then tells us the relationship between aim A and aim B? That structure is then the structure of intention.
We can read Anscombe fully as a first step to understanding how to assign semantics to motivational states, outcomes, and properties respectively; and what structure over properties to begin looking at (and what might follow as a result.)
Components of organizing principles that suggest themselves, from our understanding so far:
epistemics. the motivational states are indications of what agents know about the alternatives. thus, the properties are factual assertions about the outcomes that matter to agents.
the properties of outcomes can be factual assertions that describe the extensional game further
assertions about reachability.
assertions stronger than reachability
conditions that help us answer the question "given what they did, what did they want?" ("can we derive it? can we prove it?")
conditions that help us assert (or, less likely, disprove) that "given that an agent did that, they must have wanted something that is congruent to the structure presented." This gives us a falsifiable assertion that I can throw a dataset at.
relevance. the motivational states represent the playing-out of a resource bound on reasoning; we can read the bound off how many properties are in each state, which is monotonically related to how many outcomes the agent can distinguish between.
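For the relevance reading, "how many outcomes the agent can distinguish between" is just the number of distinct property profiles the motivstate induces over the outcomes; a small sketch (made-up outcomes and properties):

    def distinguishable_classes(outcomes, properties, motivstate):
        # Group outcomes by their property profile restricted to the motivstate;
        # outcomes in the same group are indistinguishable in that state (Axiom 1).
        groups = {}
        for o in outcomes:
            profile = frozenset(p for p in motivstate if o in properties[p])
            groups.setdefault(profile, []).append(o)
        return list(groups.values())

    outcomes = ["x", "y", "z"]
    properties = {"p1": {"x"}, "p2": {"x", "y", "z"}}
    print(len(distinguishable_classes(outcomes, properties, {"p2"})))        # 1
    print(len(distinguishable_classes(outcomes, properties, {"p1", "p2"})))  # 2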
a coalition that lies to its members
why does it lie?
members want things that aren't aligned with the coalition as a whole. this bears editing.
is anyone perfectly aligned with the coalition?
arrow argues for yes. but arrow operates under unbounded reasoning and complete orderings over the alternatives. let's assume dietrich-list's property-based, salience-limited preferences, which allow for circular preferences so long as the universe never allows them to be salient at the same time. so, we only have to assert consistency over the subset of the preferences in question (which, well,)((why?))
so, we only have to assert consistency across all choices made under the same set of salient properties.
(I think this is axiom 1 in D-L.)
So,
if we never change the salient properties, this is α and β rationality; probably this is P and I (Pareto and Independence) up in the aggregator mechanism also.
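For reference, the standard statements of Sen's conditions, for a choice function C over menus S \subseteq T:

α (contraction consistency): if x \in S \subseteq T and x \in C(T), then x \in C(S).
β (expansion consistency): if x, y \in C(S), S \subseteq T, and y \in C(T), then x \in C(T).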
what is the relationship of a grand coalition's shapley value to a welfare function?
Both are allocations.
The Shapley value is Pareto-optimal.
An allocation function obedient to U provides allocations even where there is no core.
So, the Shapley value is the "easy" case: the subset of the allocation problem where Pareto dictates a unique solution.
NO.
Shapley needs to be Pareto-optimal, but it is something more than that. It encodes information about the mechanism, which an allocation elides altogether.
The claim of an allocation is that any mechanism is either irrevocable or degenerate, because everybody in a system would go with it as the optimal choice (if it were offered as a focal point on which coordination problems might also be solved).
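To pin the "both are allocations" point down: the Shapley value of a TU game always sums to v(N), so it is itself an allocation of the grand coalition's worth. A small sketch with a made-up symmetric characteristic function:

    from itertools import permutations

    def shapley(players, v):
        # Average each player's marginal contribution over all orderings of the players.
        phi = {p: 0.0 for p in players}
        orderings = list(permutations(players))
        for order in orderings:
            coalition = set()
            for p in order:
                phi[p] += v(coalition | {p}) - v(coalition)
                coalition.add(p)
        return {p: phi[p] / len(orderings) for p in players}

    def v(coalition):
        # Toy worths: singletons are worth 0, any pair 60, the grand coalition 120.
        return {0: 0, 1: 0, 2: 60, 3: 120}[len(coalition)]

    players = ["a", "b", "c"]
    phi = shapley(players, v)
    print(phi)                                    # symmetric game, so 40.0 each
    print(sum(phi.values()) == v(set(players)))   # efficiency: the values exhaust v(N)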
OKAY SO there are several layers to this reasoning.
Extensional games
Property adding game
Property removing game
CONCRETE QUESTION: let properties be words in the builder-assistant game. how many rounds of play does it take before the epistemic game catches up to common knowledge of which properties constitute a player p's motivational state?
once we have the collaborative picture, then we can try to break it.
can we build a simulation in which siloing happens?
can we define a complexity measure according to which a language exhibits siloing, or other such effects, past a threshold in that measure?
My naive first guess for a complexity measure is "number of tokens in the motivstate." My second guess is "minimum number of steps needed to achieve common knowledge of everybody's motivstate in a coalition."
what is the shape of the game that exhibits the minimum number of steps needed to achieve common knowledge?
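A crude way to start on that question (my own toy, and it tracks something weaker than common knowledge: how long it takes a single observer to pin down the true motivstate from observed pairwise choices, i.e. the second complexity guess above):

    import random
    from itertools import combinations

    random.seed(0)
    properties = {"p1": {"x"}, "p2": {"y"}, "p3": {"z"}}
    outcomes = ["x", "y", "z"]
    weights = {"p1": 1.0, "p2": 2.0, "p3": 3.0}   # made-up underlying weighing of properties

    def compare(a, b, state):
        # -1, 0, or 1 depending on how the state ranks a against b.
        score = lambda o: sum(weights[p] for p in state if o in properties[p])
        return (score(a) > score(b)) - (score(a) < score(b))

    true_state = frozenset({"p1", "p3"})
    candidates = {frozenset(s) for r in range(len(properties) + 1)
                  for s in combinations(properties, r)}

    rounds = 0
    pairs = list(combinations(outcomes, 2))
    while len(candidates) > 1 and rounds < 50:
        a, b = random.choice(pairs)
        observed = compare(a, b, true_state)
        candidates = {c for c in candidates if compare(a, b, c) == observed}
        rounds += 1
    print(rounds, candidates)   # observations needed before only the true state survives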
the reason DL hath presented an axiomatization is that axioms are falsifiable statements.
They serve as the condition A in the guarded statement "if A holds in universe U then model M holds in universe U".
We can check whether A holds via falsifiability, and elevate the rest of model M to hold also if it does.
SO: I oughtta find a dataset of preferences to test for the axioms suggested.
Time to strengthen our semiotics.
reasons why players might move from one motivstate to the other
The condition being that the reasoning must be expressed in terms already defined in the game structure, or derived from them.
the motivstates represent what they know about the outcomes - updating to add a property is adding a fact to the universe of consideration.
the motivstates represent "relevance" - updating to add a property necessitates dropping another one, and players optimize for reaching outcomes that are preferred under a maximal motivstate that they can't actually ever hold.
the motivstates represent information about the game itself:
reachability - it would be a simplification of the game tree
whether some future game is gluable onto outcomes with a specific property.
a fact about the other players (what type would that fact have?)
remember that we are trying to assert that motivstates are things that players can infer about other players. what would they be trying to find out?
easy answer: what the other players are going to do.
verbs - intentions that connect moves to ends, that can be inferred and matched to signalling moves over the course of play.
the motivstates represent concerns that are apt to some environmental fact: e.g. the season dictating the parameters for selecting fruit. in monsoon you must look for thick skins, in summer you must look for high water content, etc. interesting, because it's a way to show that the preference cycle might be entrained by an environmental cycle, and therefore rational in context.
This is called a zeitgeber. See Diseasonality - by Scott Alexander - Astral Codex Ten for the place I first heard about this; there's a LARGE body of literature attached to the idea of a zeitgeber, and I should read it if and when it seems like a good idea to. Modeling it using this kind of cycle-allowing system would be pretty cool.
Spitball intuition: size of motivstate tracks how "tight" your curves can be on a cycle - shifting sands do not make for fine-grained preferences.
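A toy version of the fruit example (property names and numbers entirely made up), where the season sets the motivstate and the pairwise preference between two fruits reverses across seasons while every in-season choice stays consistent:

    # A property is encoded as the set of fruits that have it.
    properties = {"thick_skin": {"coconut"}, "high_water": {"watermelon"}}

    # The environmental cycle (zeitgeber) dictates which properties are salient.
    season_to_motivstate = {"monsoon": {"thick_skin"}, "summer": {"high_water"}}

    def preferred(a, b, motivstate):
        # Pick whichever fruit has more of the currently salient properties.
        count = lambda f: sum(f in properties[p] for p in motivstate)
        return a if count(a) >= count(b) else b

    for season in ["monsoon", "summer", "monsoon"]:
        print(season, "->", preferred("coconut", "watermelon", season_to_motivstate[season]))
    # monsoon -> coconut, summer -> watermelon, monsoon -> coconut:
    # the cycle in choices tracks the environmental cycle rather than any inconsistency.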
something something lotka-volterra
I do need to spend some time studying this magic when I can.
motivational states and cores
The premise is that somehow, magically, a coalition can determine your motivational state. Given this, we can look at the formation of stable states or cores (remember that these are defined via some dominance relation over allocations, which will certainly be modified to incorporate the motivational-state-determination magic, specifically the motivational states that result from it).
The question is, how in specific does a player's motivational state get formed or influenced?
Two candidate games we have already come up with: the property-adding game and the property-removing game. Here we would need to determine a mechanism for adding or removing that relied on consensus. Also, it should be an extensional game, which is to say we need to figure out how the extensional version of this works.
Structures are constructed from the outcomes and player preferences over the outcomes.
My earlier misunderstanding was that adding to m.s. A a property p that discriminates between x and y (giving m.s. B) still meant that B and A could not reverse preferences on x and y. This is untrue: if p discriminates between x and y, both axiom systems allow a preference reversal. Player preference orderings are between sets of properties, and do not respect any structure over the properties other than how they attach to outcomes.
This is why we are trying to add structure over the properties that isn't their attachment to outcomes.
Given a set of player preferences over the outcomes, can we derive a player preference ordering over the properties?
Given a motivstate A and a player's preference ordering over outcomes \preceq_A, we can construct a relation over the properties in A: R_A = \{(p_1,p_2) \mid \forall x \in p_1, y \in p_2 : x \preceq_A y \lor x \sim_A y\}. R_A is reflexive and antisymmetric but not transitive: we can have p_1 R_A p_2 \land p_2 R_A p_3 \land \lnot (p_1 R_A p_3).
not transitive because there might exist x \in p_1, y \in p_3 with y \notin p_2 and y \preceq_A x.
counterexample: p_1 = \{x,z\}, p_2 = \{y,w\}, p_3 = \{w\}, \preceq_A = \{(x,y), (y,w), (z,x)\}; \therefore p_1 R_A p_2, p_2 R_A p_3, p_3 R_A p_1, yet \lnot (p_2 R_A p_1) (since x \preceq_A y but not y \preceq_A x), so transitivity fails.
Can construct a relation using a stronger condition: \preccurlyeq_A = \{(p_1,p_2) | \forall x \in p_1, y \in p_2, x \preceq_A y\} . This is transitive, and therefore a partial order.
So, yes.
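A mechanical double-check of that counterexample (my own encoding, taking \preceq_A literally as the listed pairs and reading x \sim_A y as "the two are not ranked either way", which is what the example needs):

    # The base relation on outcomes, taken literally as listed (no closure).
    prec = {("x", "y"), ("y", "w"), ("z", "x")}

    def ranked(a, b):
        return (a, b) in prec

    def weak_R(p_a, p_b):
        # R_A: every cross pair is either ranked a-below-b, or not ranked at all.
        return all(ranked(a, b) or (not ranked(a, b) and not ranked(b, a))
                   for a in p_a for b in p_b)

    def strong_R(p_a, p_b):
        # The stronger relation: every cross pair must actually be ranked a-below-b.
        return all(ranked(a, b) for a in p_a for b in p_b)

    p1, p2, p3 = {"x", "z"}, {"y", "w"}, {"w"}
    print(weak_R(p1, p2), weak_R(p2, p3), weak_R(p3, p1))  # True True True: the cycle
    print(weak_R(p2, p1))   # False: p2 R p3 and p3 R p1 hold but p2 R p1 fails, so not transitive
    print(strong_R(p1, p2)) # False: the stronger relation simply declines to rank these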