Dietrich, Franz and List, Christian :: Where Do Preferences Come From?

authors
Dietrich, Franz and List, Christian

tl;dr Preferences over Properties.

The model discussed in this paper is one where the outcomes of a game each have various properties, and player preferences operate over these properties rather than over the outcomes themselves.

In addition, players filter their preferences on the basis of which properties they may consider relevant. That set of relevant properties is called a player's motivational state. Each player has a set of possible motivational states. Two axiom systems are presented: 1+2, and 1+3.

  1. motivational states that contain no properties that discriminate between two outcomes do not generate preferences that distinguish between the outcomes.

  2. for motivational states A, B with A \subset B: if B \setminus A contains no properties that are present in two outcomes, then B and A agree with respect to those outcomes.

  3. for motivational states A, B with A \subset B: if B \setminus A contains no properties that discriminate between two outcomes, then B and A agree with respect to those outcomes.

A scenario that distinguishes the systems:

Suppose motivational states A \subset B \subset C; properties p_1 \in A, p_2 \in B \setminus A, p_3 \in C \setminus B; and outcomes with x \in p_1, y \notin p_1, x, y \in p_2, x \in p_3, y \notin p_3. Under the 1+2 system there can exist a player preference family such that x \preceq_{A} y \land y \preceq_{B} x. Under the 1+3 system this would not be possible: p_2 contains both outcomes, which means it does not distinguish between them. But x \preceq_{B} y, y \preceq_{C} x is possible, because p_3 does distinguish between the outcomes.
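A quick sketch of this scenario in code, to make the two readings concrete. The checking functions below are my own paraphrase of axioms 2 and 3 as stated above, not the paper's formulations:

```python
# Outcomes encoded as the set of properties they satisfy (the scenario above).
extension = {"x": {"p1", "p2", "p3"}, "y": {"p2"}}

A = frozenset({"p1"})
B = frozenset({"p1", "p2"})
C = frozenset({"p1", "p2", "p3"})

def discriminates(p, a, b):
    # A property discriminates two outcomes iff exactly one of them has it.
    return (p in extension[a]) != (p in extension[b])

def shared(p, a, b):
    return (p in extension[a]) and (p in extension[b])

def axiom2_forces_agreement(small, big, a, b):
    # Paraphrase of axiom 2: if big \ small has no property shared by both outcomes,
    # the two motivational states must agree on a vs b.
    return not any(shared(p, a, b) for p in big - small)

def axiom3_forces_agreement(small, big, a, b):
    # Paraphrase of axiom 3: if big \ small has no property discriminating the outcomes,
    # the two motivational states must agree on a vs b.
    return not any(discriminates(p, a, b) for p in big - small)

print(axiom2_forces_agreement(A, B, "x", "y"))  # False: p2 is shared, so 1+2 allows the A/B reversal
print(axiom3_forces_agreement(A, B, "x", "y"))  # True: p2 doesn't discriminate, so 1+3 forbids it
print(axiom3_forces_agreement(B, C, "x", "y"))  # False: p3 discriminates, so the B/C reversal is allowed
```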

Two theorems are presented:

  1. If \mathcal{M} is intersection-closed, an agent's family of preference orders satisfies 1+2 iff it is "property-based".

  2. If \mathcal{M} is subset-closed, an agent's family of preference orders satisfies 1+3 iff it is "property-based in a separable way", i.e. the ranking over the powerset of all properties exhibits independence of irrelevant alternatives: for T disjoint from S_1 and S_2, S_1 \ge S_2 \text{ iff } S_1 \cup T \ge S_2 \cup T.
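To make the separability clause concrete: one way a ranking over property sets can be property-based in a separable way is to score each set additively by per-property weights, and the independence condition then holds by construction. The weights below are made up; this is an illustration of the condition, not the paper's construction:

```python
from itertools import chain, combinations

properties = ["p1", "p2", "p3"]

def powerset(xs):
    return [frozenset(c) for c in chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

# Additive scoring by per-property weights (made up).
weights = {"p1": 2.0, "p2": -1.0, "p3": 0.5}
def score(S):
    return sum(weights[p] for p in S)

def geq(S1, S2):
    return score(S1) >= score(S2)

# Brute-force check of the independence condition:
# for any T disjoint from S1 and S2, S1 >= S2 iff S1 ∪ T >= S2 ∪ T.
sets = powerset(properties)
separable = all(
    geq(S1, S2) == geq(S1 | T, S2 | T)
    for S1 in sets for S2 in sets for T in sets
    if not (T & (S1 | S2))
)
print(separable)  # True: any additive scoring passes the check
```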

A topology over the properties?

Motivational states are connected to properties as follows:

  1. if a player's set of possible motivational states is intersection-closed, then the player's preferences between alternatives satisfy 1+2 iff they are property-based, i.e. you can derive the properties.

  2. if a player's set of possible motivational states is subset-closed, then the player's preferences between alternatives satisfy 1+3 iff they are property-based.

A proof of these theorems would probably give me the types I could write a simulator for this in.
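Pending that, here is my guess at the types such a simulator would need, including one reading of "property-based": in state M, compare outcomes by a fixed ranking of their salient property sets. All names and the scoring choice here are mine:

```python
from typing import Callable, Dict, FrozenSet

Property = str
Outcome = str
MotivState = FrozenSet[Property]

# Which properties each outcome has.
Extension = Dict[Outcome, FrozenSet[Property]]

# A ranking over property sets, encoded as a score (higher = better).
PropertyRanking = Callable[[FrozenSet[Property]], float]

def property_based_family(ranking: PropertyRanking, extension: Extension):
    """Build a preference family from a single ranking over property sets:
    in motivational state M, x is weakly preferred to y iff the ranking puts
    x's salient properties at least as high as y's."""
    def weakly_prefers(state: MotivState, x: Outcome, y: Outcome) -> bool:
        return ranking(extension[x] & state) >= ranking(extension[y] & state)
    return weakly_prefers

# Toy usage: ranking = "more salient properties is better".
ext: Extension = {"x": frozenset({"p1", "p3"}), "y": frozenset({"p2"})}
prefs = property_based_family(lambda s: len(s), ext)
print(prefs(frozenset({"p1", "p2"}), "x", "y"))  # True: each has one salient property, so indifference
```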

In the latter case, the set of a player's possible motivational states is a topology over the set of properties given by the union of all the possible motivational states.

In the former case, the same is a pi-system. (See also the table here.)
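Quick closure checks for a family of motivational states, assuming the family is represented as a set of frozensets; both predicates below are my own helpers:

```python
from itertools import combinations

def intersection_closed(family):
    # Pi-system-style check: every pairwise intersection of members stays in the family.
    return all((a & b) in family for a, b in combinations(family, 2))

def subset_closed(family):
    # Every subset of every member is also a member.
    return all(
        frozenset(sub) in family
        for m in family
        for r in range(len(m) + 1)
        for sub in combinations(m, r)
    )

fam = {frozenset(), frozenset({"p1"}), frozenset({"p1", "p2"})}
print(intersection_closed(fam), subset_closed(fam))  # True False: intersection-closed, but {"p2"} is missing
```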

Properties of outcomes as intentions

Agents select for outcomes, operating from inside certain contexts. Properties that are derivable per DL's theorems 1 and 2 can then be said to be a construction of "what the agent was going for."

What is the organizing principle of the properties that then tells us the relationship between aim A and aim B? That structure is then the structure of intention.

We can read Anscombe fully as a first step to understanding how to assign semantics to motivational states, outcomes, and properties respectively; and what structure over properties to begin looking at (and what might follow as a result.)

Components of organizing principles that suggest themselves, from our understanding so far:

a coalition that lies to its members

why does it lie?

members want things that aren't aligned with the coalition as a whole. this bears editing.

is anyone perfectly aligned with the coalition?

arrow argues for yes. but arrow is under unbounded reasoning and a partial ordering over preferences. let's assume dietrich-list's property-based, salience-limited preferences, which allow for circular preferences so long as the universe never allows them to be salient at the same time. so, we only have to assert consistency over the subset of the preferences in question (which, well,)((why?))

why do preferences have to be consistent within a given decision? well, because they need to /resolve/ for that decision. all contexts in which it is legitimate to require a decision are contexts in which we must be able to provide zero or one selections. the union over all these choices must be a partial order for VNM rationality - i.e. to have a win condition that's the same from start to finish. for time-varying (or context-varying) win conditions, we are aiming to be VNM within the equivalence class of a context. outside can go die, though.

so, we only have to assert consistency across all choices made under the same set of salient properties.

(I think this is axiom 1 in D-L.)
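A sketch of what "consistency only within a context" could mean operationally: group observed strict choices by the set of salient properties under which they were made, and demand acyclicity within each group but not across groups. This is my own toy formalisation, not D-L's:

```python
from collections import defaultdict

# Each observation: (salient property set, chosen outcome, rejected outcome).
observations = [
    (frozenset({"p1"}), "y", "x"),               # under one context, y is chosen over x
    (frozenset({"p1", "p2", "p3"}), "x", "y"),   # under a richer context, the choice reverses
]

def has_cycle(edges):
    # Depth-first search for a cycle in the directed graph of strict choices.
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
    visiting, done = set(), set()
    def dfs(node):
        visiting.add(node)
        for nxt in graph[node]:
            if nxt in visiting or (nxt not in done and dfs(nxt)):
                return True
        visiting.discard(node)
        done.add(node)
        return False
    return any(dfs(n) for n in list(graph) if n not in done)

by_context = defaultdict(list)
for context, chosen, rejected in observations:
    by_context[context].append((chosen, rejected))

# Consistency is only demanded inside each context; the reversal across contexts is fine.
print({ctx: not has_cycle(edges) for ctx, edges in by_context.items()})
```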

So,

what is the relationship of a grand coalition's shapley value to a welfare function?

An allocation, both are.

Shapley value is Pareto-optimal.

an allocation function obedient to U provides allocations even where there is no core.

so, the shapley value is the "easy" case, the subset of the allocation problem where Pareto dictates a unique solution.

NO.

Shapley needs to be Pareto optimal, but it is something more than that. It encodes information about mechanism, which allocation elides altogether.

The claim of an allocation is that any mechanism is either irrevocable or degenerate, because everybody in a system would go with it as the optimal choice (if it were offered as a focal point on which coordination problems might also be solved).
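For concreteness, a small Shapley value computation on a made-up three-player game, with the efficiency check (the whole value of the grand coalition is allocated), which is the usual sense in which the Shapley value is Pareto-optimal; the averaging over player orderings is also where the "mechanism" information lives:

```python
from itertools import permutations

players = ["1", "2", "3"]

# A made-up characteristic function: the value each coalition can secure on its own.
values = {
    frozenset(): 0, frozenset("1"): 1, frozenset("2"): 1, frozenset("3"): 1,
    frozenset("12"): 3, frozenset("13"): 3, frozenset("23"): 3,
    frozenset("123"): 6,
}
def v(coalition):
    return values[frozenset(coalition)]

def shapley(players, v):
    # Average each player's marginal contribution over all orders of arrival.
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v(coalition)
            coalition.add(p)
            totals[p] += v(coalition) - before
    return {p: t / len(orders) for p, t in totals.items()}

phi = shapley(players, v)
print(phi)                              # symmetric game, so each player gets 2.0
print(sum(phi.values()) == v(players))  # efficiency: the grand coalition's value is fully allocated
```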

OKAY SO there are several layers to this reasoning.

Extensional games

Property adding game

Property removing game

CONCRETE QUESTION: let properties be words in the builder-assistant game. how many rounds of play does it take before the epistemic game catches up to common knowledge of which properties constitute a player p's motivational state?
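A toy version of that question, under assumptions that are entirely mine: the assistant starts from a finite set of candidate motivational states, the builder chooses between offered pairs by a simple "more salient properties is better" rule, and the assistant eliminates candidates inconsistent with each observed choice. "Rounds" here just counts observations until one candidate remains, a crude stand-in for the full common-knowledge story:

```python
# Properties are words; the builder's true motivational state is a set of words.
extension = {
    "slab":   {"flat", "heavy"},
    "pillar": {"tall", "heavy"},
    "block":  {"flat"},
    "beam":   {"tall"},
}
true_state = frozenset({"tall"})

def prefers(state, a, b):
    # Toy choice rule: prefer the outcome with more salient properties; ties give no information.
    sa, sb = len(extension[a] & state), len(extension[b] & state)
    if sa == sb:
        return None
    return a if sa > sb else b

candidates = {frozenset(s) for s in [{"flat"}, {"tall"}, {"heavy"}, {"flat", "heavy"}, {"tall", "heavy"}]}
pairs = [("slab", "pillar"), ("block", "beam"), ("slab", "beam"), ("pillar", "block")]

rounds = 0
for a, b in pairs:
    if len(candidates) <= 1:
        break
    choice = prefers(true_state, a, b)
    rounds += 1
    # Keep only candidate states that would have produced the same observed choice.
    candidates = {s for s in candidates if prefers(s, a, b) == choice}

print(rounds, candidates)  # 3 rounds here before only {"tall"} survives
```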

once we have the collaborative picture, then we can try to break it.

can we build a simulation in which siloing happens?

can we define a complexity measure according to which a language exhibits siloing, or other such effects, past a threshold in that measure?

My naive first guess for a complexity measure is "number of tokens in the motivstate." My second guess is "minimum number of steps needed to achieve common knowledge of everybody's motivstate in a coalition."

what is the shape of the game that exhibits the minimum number of steps needed to achieve common knowledge?

the reason DL have presented an axiomatization is that axioms are falsifiable statements.

They serve as the condition A in the guarded statement "if A holds in universe U then model M holds in universe U".

We can check whether A holds, per falsifiability, and elevate the rest of model M to hold as well if it does.

SO: I oughtta find a dataset of preferences to test for the axioms suggested.
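A sketch of what such a test might look like for axiom 1 on a hypothetical dataset of (salient property set, pair, judgment) observations: the axiom predicts indifference whenever no salient property discriminates the pair, so violations are exactly the strict judgments on non-discriminated pairs. The data and field layout are invented:

```python
# Which properties each outcome has (invented).
extension = {"x": {"p1", "p2"}, "y": {"p2"}, "z": {"p2", "p3"}}

# Hypothetical observations: (salient property set, outcome a, outcome b, judgment),
# where judgment is "a", "b", or "indifferent".
observations = [
    ({"p2"}, "x", "y", "indifferent"),  # p2 doesn't discriminate x,y: consistent with axiom 1
    ({"p2"}, "x", "z", "a"),            # p2 doesn't discriminate x,z either: a strict judgment violates it
    ({"p1"}, "x", "y", "a"),            # p1 discriminates, so axiom 1 says nothing here
]

def discriminates(p, a, b):
    return (p in extension[a]) != (p in extension[b])

violations = [
    obs for obs in observations
    if not any(discriminates(p, obs[1], obs[2]) for p in obs[0]) and obs[3] != "indifferent"
]
print(violations)  # [({'p2'}, 'x', 'z', 'a')]
```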

Time to strengthen our semiotics.

reasons why players might move from one motivstate to another

Condition being that the reasoning must be expressed in terms already defined in the game structure, or derived from them.

motivational states and cores

The premise is that somehow, magically, a coalition can determine your motivational state. Given this, we can look at the formation of stable states or cores (remember that these are the allocations undominated according to some dominance function, which will certainly be modified to incorporate the motivational-state-determination magic, specifically the motivational states that result from it).

The question is: how, specifically, does a player's motivational state get formed or influenced?

Two candidate games we have already come up with: the property adding game and the property removing game. Here we would need to determine a mechanism for adding or removing that relies on consensus. Also, each is an extensional game, which is to say we need to figure out how the extensional version of this works.
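A minimal sketch of one possible consensus rule for the property adding game (my own guess at a concrete mechanism: a proposed property enters the shared motivational state only if every player votes yes):

```python
from typing import Dict, Set

def propose_add(prop: str, shared_state: Set[str], votes: Dict[str, bool]) -> bool:
    # One round of the property adding game under a unanimity rule.
    if all(votes.values()):
        shared_state.add(prop)
        return True
    return False

state = {"p1"}
print(propose_add("p2", state, {"builder": True, "assistant": True}), state)   # accepted: p2 joins the state
print(propose_add("p3", state, {"builder": True, "assistant": False}), state)  # rejected: state unchanged
```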

Structures are constructed from the outcomes and player preferences over the outcomes.

My earlier misunderstanding was that if m.s. B adds to m.s. A a property p that discriminates x and y, then B and A still could not reverse preferences on x and y. This is untrue: if p discriminates between x and y, both axiom systems allow for preference reversal. Player preference orderings are between sets of properties, and do not respect any structure over the properties other than how they attach to outcomes.

This is why we are trying to add structure over the properties that isn't their attachment to outcomes.

Given a set of player preferences over the outcomes, can we derive a player preference ordering over the properties?

Given a motivstate A and some player's preference ordering over outcomes \preceq_A, we can construct a relation over the properties p \in A: R_A = \{(p_1, p_2) \mid \forall x \in p_1, y \in p_2: x \preceq_A y \lor x \sim_A y\}. R_A is reflexive and antisymmetric but not transitive: it is possible that p_1 R_A p_2 \land p_2 R_A p_3 \land \lnot (p_1 R_A p_3).

We can construct a relation using a stronger condition: \preccurlyeq_A = \{(p_1, p_2) \mid \forall x \in p_1, y \in p_2: x \preceq_A y\}. This one is transitive, and therefore a partial order.

So, yes.
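A toy check of both constructions, mainly to see the transitivity failure of R_A concretely. I'm reading x \sim_A y as "incomparable under \preceq_A"; the outcomes and partial order below are made up:

```python
from itertools import product

# Which outcomes have each property (one outcome per property keeps the example tiny).
holders = {"p1": {"u"}, "p2": {"v"}, "p3": {"w"}}

# A partial order of weak preference over outcomes: w ⪯ u, and v is incomparable to both.
weakly_below = {("u", "u"), ("v", "v"), ("w", "w"), ("w", "u")}

def leq(x, y):
    return (x, y) in weakly_below

def incomparable(x, y):
    return not leq(x, y) and not leq(y, x)

def R(p1, p2):
    # Weak construction: no member of p1 is strictly preferred over any member of p2.
    return all(leq(x, y) or incomparable(x, y) for x, y in product(holders[p1], holders[p2]))

def strong(p1, p2):
    # Strong construction: every member of p1 is weakly below every member of p2.
    return all(leq(x, y) for x, y in product(holders[p1], holders[p2]))

print(R("p1", "p2"), R("p2", "p3"), R("p1", "p3"))                 # True True False: transitivity fails for R
print(strong("p3", "p1"), strong("p1", "p2"), strong("p2", "p3"))  # True False False: only the comparable pair survives
```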
