Bayesian networks come in equivalence classes
A set of conditional independencies can be encoded by many different Bayes nets. Together these form a Markov equivalence class: a class of DAGs that share a skeleton (the undirected graph obtained by dropping edge directions, i.e. symmetrizing the adjacency matrix \(A\)) and the same colliders, i.e. v-structures \(X \rightarrow Y \leftarrow Z\) where \(X\) and \(Z\) are not adjacent.
In other words, you can't select a member of the class for free: the extra directional structure must carry attached semantics of some kind (e.g. interventions or domain knowledge), since observational independencies alone don't distinguish members of the class.
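A minimal sketch of the point above (helper names are my own, not from any library): two DAGs are Markov equivalent iff they share a skeleton and colliders, so the chain \(X \rightarrow Y \rightarrow Z\) and the fork \(X \leftarrow Y \rightarrow Z\) are indistinguishable from conditional independencies alone, while the collider \(X \rightarrow Y \leftarrow Z\) sits in its own class.

```python
from itertools import combinations

def skeleton(dag):
    """Undirected edge set: drop directions from the DAG's edges."""
    return {frozenset(edge) for edge in dag}

def colliders(dag):
    """V-structures x -> y <- z where x and z are not adjacent."""
    parents = {}
    for x, y in dag:
        parents.setdefault(y, set()).add(x)
    skel = skeleton(dag)
    found = set()
    for y, ps in parents.items():
        for x, z in combinations(sorted(ps), 2):
            if frozenset((x, z)) not in skel:  # non-adjacent parents
                found.add((x, y, z))
    return found

def markov_equivalent(d1, d2):
    """Same skeleton and same colliders => same equivalence class."""
    return skeleton(d1) == skeleton(d2) and colliders(d1) == colliders(d2)

chain    = {("X", "Y"), ("Y", "Z")}   # X -> Y -> Z
fork     = {("Y", "X"), ("Y", "Z")}   # X <- Y -> Z
collider = {("X", "Y"), ("Z", "Y")}   # X -> Y <- Z

print(markov_equivalent(chain, fork))      # True: same skeleton, no colliders
print(markov_equivalent(chain, collider))  # False: the v-structure differs
```

Picking between `chain` and `fork` is exactly the "not for free" step: it needs interventional data or other semantics, not more observational independencies.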
Various regimes for attaching such semantics have been proposed. Much of the wonkiness in thinking about agency operationalizes to this problem, or to something larger that contains it.
Further reading:
- (Pearl, Judea 2009)
- (Koller, Daphne and Milch, Brian 2003)
- https://causalincentives.com/ - DeepMind working group
- https://rpatrik96.github.io/posts/2021/10/poc2-markov/
- (Castelletti, Federico and Consonni, Guido and Della Vedova, Marco L. and Peluso, Stefano 2018)
- (Bergemann, Dirk and Morris, Stephen 2016) - given that we can coordinate, what is the likely causal shape of the world in which we are coordinating?
- (Kamenica, Emir and Gentzkow, Matthew 2011) - game a learning agent using the order in which you reveal information