What do Searle's constitutive/regulative rules, Lewis' convention theory, and Chen and Micali's generalized-utility style coalitions have in common?
(Searle 1969)
All three concepts describe elaborations upon an existing ruleset. In other words, it's all motivated protocol-writing, i.e. game rigging, i.e. skin in the game of the law.
This is to say:
Imagine you're dropped into the world today and are trying to make sense of "how things work". Among the many choices you could make for an overarching abstraction to organize your exploration, you, as a human being (i.e. a social, political animal used to groupthink), will probably choose to "assign agency" to your environment – to divide it up into "who"-shaped things and "what they are doing". This is also useful to you because it clearly establishes a call-and-response dialogue between your actions and "whatever else is happening": at this resolution, essentially anything can be modeled as a social dynamic.
In this framework, you can typecast1 the regularities intrinsic to the environment as rules.
Do the rules change? From your perspective, they change often; as you move through the world and gather data, you are making continuous updates to your model of the way things seem to work. This of course also means that the perceived consequences of your actions keep changing too. It's intrinsically necessary to have this flexibility in your perspective – and by intrinsically, I mean that if you try to treat the game as if the rules are immutable, you're inevitably at greater risk of losing (i.e. not getting what you want). Unfair?
Good point, Calvin: can you change the rules? In principle, of course you can. You, as a fuzzy human agent, are not bound by the consistency demands of the frameworks you use to navigate the world. Nor are the rules that hold here, now, in your locus, necessarily true everywhere you go.
games with other agents
One of the biggest sources of this kind of locally true rule is the set of mutually understood protocols you use alongside other agents. Of course, as aforementioned, anything can be thought of as an agent, but there really is something to the idea of agency beyond modeling; a rock might hate you, in your mind, but it's not going to make decisions in light of your decisions. You can compete with a rock, or coordinate with one, but only in patterns that the rock "was going to follow anyway".2
A complex agent, on the other hand, is someone you can mutually condition decisions with. You can comprehend it as encoding complex data about the world. You can model it as wanting complex things – goal formation – and acting to get what it wants – goal seeking. You must be aware that what it will do in the context of you and what you will do in the context of it are mutually recursive – not only a product of your own reflexive interaction with the world, but of both of your reflexive interactions.
(Unclear how relevant this is, but I'll pull it out to its own thing later if it turns out to be tangential.)
Point being: agents are aware of the rules, and other agents are aware of them too.
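The mutual recursion above can be sketched as iterated best response in a toy two-agent coordination game – each agent picks the move that's best against its model of the other, then revises as the other revises. Everything here (the payoffs, the names, the update loop) is an invented illustration, not anything from the theories cited above:

```python
# A minimal sketch of mutually conditioned decisions: two agents in a
# 2x2 coordination game, each choosing a best response to a model of
# the other, round after round, until the pair of choices stabilizes.
# Payoffs are hypothetical: both agents prefer matching, and matching
# on "A" pays better than matching on "B".

# PAYOFFS[(my_move, their_move)] -> my utility
PAYOFFS = {
    ("A", "A"): 2, ("A", "B"): 0,
    ("B", "A"): 0, ("B", "B"): 1,
}

def best_response(their_move):
    """The move that maximizes my payoff, given a prediction of theirs."""
    return max(("A", "B"), key=lambda my_move: PAYOFFS[(my_move, their_move)])

def settle(p1_guess, p2_guess, rounds=10):
    """Each agent re-decides against the other's last move until nothing changes."""
    p1, p2 = p1_guess, p2_guess
    for _ in range(rounds):
        updated = (best_response(p2), best_response(p1))
        if updated == (p1, p2):  # fixed point: neither wants to deviate
            break
        p1, p2 = updated
    return p1, p2

print(settle("B", "B"))  # → ('B', 'B')
```

Note what the fixed point shows: if both agents happen to expect "B", they settle on matching at "B" even though matching at "A" would pay both of them more – a locally true rule that holds only because each agent models the other as following it, which is roughly the shape of a Lewisian convention.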