Note on Mechanism Design
December 14, 2021
A while back Nate Coffman published a great post on mechanism design (among other things), to which I added a rather cryptic comment. I subsequently started drafting an expansion, but quickly went down a massive rabbit hole and didn’t get anywhere—here I’d like to rein that in and briefly hash out a quick-and-dirty version of the main point. I’ll make a distinction between two different kinds of coordination, then argue that one of them is obscured by common framings of coordination failure and its solutions. I believe this has some worrying long-term implications.
There are various ways to circle in on the distinction I want to make—it is between social and economic consensus, between trustful and trustless mechanisms, between agent-constituting and agent-mediating processes, or between mutual recognition and strategic alliance. Intuitively the distinction is quite easy to grasp (economic coordination is producing a vaccine, social coordination is getting everyone to wear a mask), but it is harder to pin down explicitly, or to see why its terms pick out genuinely distinct phenomena. Here I’ll approach it by considering two broad categories of “solution” to the prisoner’s dilemma. These are:
- Modify the pay-off matrix.
- Modify the decision procedure.
The simplest way to solve a prisoner’s dilemma is to have the mob boss threaten to shoot defectors. This is an example of pay-off matrix modification, since it aims to make the price of defection so high that cooperation becomes the optimal strategy from the point of view of players’ self-interest. This is the domain of mechanism design proper, the goal of which is to design incentives with socially desirable Nash equilibria. ‘Socially desirable’ can often be unpacked to mean Pareto optimal, in which case everything becomes neat and mathematically rigorous. The mob boss approach is certainly a kind of mechanism design, but would generally be regarded as a piece of really bad mechanism design—it is centralised, unegalitarian, and depends on a coercive threat of violence. It is the dreaded ‘trusted 3rd party’ mechanism, the Hobbesian Leviathan. Conscientious mechanism design rather busies itself designing incentives which achieve Pareto optimality in a fair and distributed manner, ideally with many more carrots than sticks. It does this with technical architectures (e.g. cryptocurrencies) or via institutional structures like property and antitrust laws (e.g. Harberger taxes). Its overall approach to coordination is summarised by the slogan “markets but better”—its entire MO is the design of trustless mechanisms, and it underpins all current thinking on things like DAOs and Web3.
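To make this concrete, here is a minimal sketch in Python (my own illustration; the payoff numbers, function names, and penalty value are all arbitrary assumptions, not drawn from anything above). It brute-forces the pure-strategy Nash equilibria of a prisoner’s dilemma, then applies the mob boss’s penalty to defectors and shows the equilibrium shift from mutual defection to mutual cooperation:

```python
from itertools import product

# payoffs[(row, col)] = (row player's payoff, column player's payoff);
# "C" = cooperate, "D" = defect. Standard PD numbers, chosen arbitrarily.
PD = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def nash_equilibria(payoffs, strategies=("C", "D")):
    """All pure-strategy profiles where neither player gains by
    unilaterally deviating."""
    eqs = []
    for r, c in product(strategies, repeat=2):
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(a, c)][0] for a in strategies)
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, a)][1] for a in strategies)
        if row_ok and col_ok:
            eqs.append((r, c))
    return eqs

print(nash_equilibria(PD))  # [('D', 'D')]: mutual defection

# The mob boss: subtract a penalty P from any player who defects.
# With P = 3, defection stops being a best response and the unique
# Nash equilibrium becomes mutual cooperation.
P = 3
punished = {
    (r, c): (u - P * (r == "D"), v - P * (c == "D"))
    for (r, c), (u, v) in PD.items()
}
print(nash_equilibria(punished))  # [('C', 'C')]
```

Note that the intervention operates entirely on the numbers; the players’ decision procedure is left untouched.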
OK so the important thing to note about this whole approach is that it doesn’t actually solve the prisoner’s dilemma. Mechanism design cannot solve a prisoner’s dilemma, because a prisoner’s dilemma has no socially desirable Nash equilibria. What mechanism design does is to try to arrange things so that prisoner’s dilemmas and similar don’t arise in the first place. I’ll return to this in a moment, but for now let’s turn to the second approach: modifying the decision procedure. We know that given the pay-off matrix definitive of a prisoner’s dilemma, a rationally self-interested decision procedure will always lead to mutual defection. The question then is: given the same pay-off matrix, how can we modify the decision procedure to ensure cooperation? One answer to this question is provided through the concept of ‘superrationality,’ first elaborated by Douglas Hofstadter in a chapter of Metamagical Themas. A superrational agent is the same as a rational agent in all respects, except for one additional assumption: they take the other players to be superrational too, and superrational players always converge on the same strategy as one another. When evaluating a decision, a superrational agent will always consider what would happen if everyone else were to make it, and trusts that everyone else is reasoning along these lines too. This models the kind of reasoning embodied in Kant’s categorical imperative: “Act only according to that maxim by which you can at the same time will that it should become a universal law.”—an equilibrium strategy of a superrational decision procedure can accordingly be referred to as a ‘Kantian’ equilibrium. The Kantian equilibria of a given incentive structure may be different to its Nash equilibria—indeed, mutual cooperation is a Kantian (but not a Nash) equilibrium of the prisoner’s dilemma.
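Continuing the sketch above (again just my illustration, on the same arbitrary matrix): since superrational players always play the same strategy as one another, a superrational agent only needs to compare the diagonal profiles, and mutual cooperation drops out as the unique Kantian equilibrium:

```python
# Same arbitrary matrix as before, restated so this runs standalone.
PD = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def kantian_equilibria(payoffs, strategies=("C", "D")):
    """Strategies s such that no player would prefer everyone to
    switch from s to some other strategy t; only the diagonal
    (symmetric) profiles are ever compared."""
    return [s for s in strategies
            if all(payoffs[(s, s)][0] >= payoffs[(t, t)][0] for t in strategies)]

print(kantian_equilibria(PD))  # ['C']: (C, C) pays 3, (D, D) pays 1
```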
So it seems superrationality solves the prisoner’s dilemma—as in actually solves it, rather than just kicking it down the road. But this raises a whole heap of different questions. Why would someone employ a superrationally self-interested rather than a rationally self-interested decision procedure? In popular debates on these sorts of issues, for instance on the use of game theory in neoclassical economics, this question has often taken on a weirdly essentialist tint. Are we really rationally self-interested actors? Or are we in some sense irrational actors, as game theory’s left-wing critics have sometimes suggested? What vulgar proponents on both sides of this issue seem to miss is that a decision over which decision procedure to employ in a given situation is exactly that: a decision. There needn’t be some prior fact of the matter about what kind of agents we are in essence—we may (and do) adopt different reasoning procedures in different contexts. Indeed, making a decision about which decision procedure to employ is itself a coordination problem.
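One crude way to see this is to treat the choice of procedure as a move in a prior game. In the hypothetical sketch below (my own toy model: a rational player always defects, since defection is dominant, while a superrational player plays the Kantian strategy on the assumption that their counterpart reasons likewise), the resulting meta-game reproduces the pay-off structure of the underlying dilemma:

```python
PD = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(procedure):
    # Crude model: a rational player defects (the dominant strategy);
    # a superrational player cooperates, trusting that superrational
    # counterparts reach the same conclusion.
    return "C" if procedure == "superrational" else "D"

for p in ("superrational", "rational"):
    for q in ("superrational", "rational"):
        print(f"{p:13} vs {q:13} ->", PD[(play(p), play(q))])

# superrational vs superrational -> (3, 3)
# superrational vs rational      -> (0, 5)
# rational      vs superrational -> (5, 0)
# rational      vs rational      -> (1, 1)
```

Since choosing a procedure inherits the dilemma structure of the game it governs, the choice cannot itself be settled by rational self-interest; it has to be coordinated on by other means.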
I think this point is underappreciated and has huge practical implications, yet it is also kind of obvious when you think about it. A professional context, for example, is one in which you can expect others to employ a rationally self-interested decision procedure. This has nothing to do with whether they’re intrinsically self-interested—it is simply that what we call ‘professionalism’ demarcates a social practice in which everyone expects one another to reason as if they were; this shared understanding is what allows it to function as a social institution. If you find yourself in that context it would be silly not to reason from rational self-interest—it would be to commit a faux pas at best, and to make yourself exploitable at worst. The signs and norms of professionalism are a social map which allows groups of individuals to coordinate on a rationally self-interested decision procedure. By the same token, the signs and norms of, say, moral discourse are a social map which allows us to coordinate on something more like a superrational decision procedure. We use different maps for different situations.
At this point I can begin to outline my concern. It has already been mentioned that mechanism design achieves a certain mathematical rigour. It does this by making two different kinds of modelling abstraction: i. it assumes a given decision procedure, and ii. it makes some assumptions about its users’ preference hierarchies. The assumption about decision procedure is effectively materialised within the ‘user space’ of the mechanism—i.e. the mechanism becomes impossible to use if you don’t adopt its assumed decision procedure. If we build economic institutions on models that assume rational self-interest, then we should not be surprised to find people reasoning on this basis in practice—to do otherwise is to become invisible or disadvantaged with respect to those institutions. The other kind of assumption secures the success of the mechanism, given the assumed decision procedure. Markets assume market actors will seek to maximise profits, or at the very least seek good deals, and crypto-economic systems contain similar presuppositions.
These assumptions often have an ambiguous and contentious status, as demonstrated by Bitcoin. It is generally agreed that Nakamoto consensus is secure up to a 51% attack—a blockchain-specific attack that can be performed by an actor with control of more than half of the chain’s mining power. But many of Bitcoin’s most ardent evangelists will insist that it is secure even against a 51% attack, because anyone who acquired this much power would have more to gain by preserving the integrity of the blockchain than by attacking it. For many this is what secures Bitcoin’s status as a truly autonomous and self-sustaining system—Nick Land, for example, tries to make this point do a lot of work in his Crypto-Current writings. But this logic assumes that any possible attacker would be motivated by money, and this is sensitive to historical conditions, if not just outright spurious. If Bitcoin ever became central to global financial systems, then controlling or distorting it may come to carry political significance. In this scenario, a state actor may have strategic reasons to perform a 51% attack that have nothing to do with profit.
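A toy back-of-the-envelope model makes the fragility explicit (all numbers and the political_value parameter are invented for illustration; real attack economics are far messier). The ‘attacking never pays’ defence is an inequality over monetary quantities, and it flips the moment the attacker’s utility function acquires a non-monetary term:

```python
def attack_is_worthwhile(double_spend_gain, stake_value, price_drop,
                         political_value=0.0):
    """Naive model: attacking yields the double-spend gain plus any
    non-monetary 'political value', at the cost of the attacker's own
    stake losing value as confidence in the chain collapses."""
    monetary_payoff = double_spend_gain - stake_value * price_drop
    return monetary_payoff + political_value > 0

# A profit-motivated whale: their holdings lose more than the attack gains.
print(attack_is_worthwhile(double_spend_gain=1e8,
                           stake_value=5e9, price_drop=0.5))   # False

# A state actor assigning strategic value to breaking the chain: the
# same monetary loss, but now the attack "pays".
print(attack_is_worthwhile(double_spend_gain=1e8,
                           stake_value=5e9, price_drop=0.5,
                           political_value=5e9))               # True
```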
The point here is that mechanisms depend on modelling assumptions which both constrain the action space of their users and may drift away from real conditions over time. Taken together these two features can spell bad news. Imagine that Web3 has taken off wildly and we have moved all our finance and governance onto these new systems. We have now put ourselves in a position where our decision procedure has been pre-emptively chosen for us—within the political sphere we are all game theoretic agents in practice—but we have no complaints, because the mechanisms work well and all the Nash equilibria we converge on are socially desirable. We reckon we’ve nailed it. But over time things start to drift. Political and environmental conditions change, and the priorities assumed by the mechanism no longer reflect the real world. Oh shit—suddenly we start to find ourselves in prisoner’s dilemmas again. But this time we are doubly fucked, because now all our governance mechanisms have a game theoretic decision procedure hardwired into them, which means the only equilibria available to us are Nash equilibria. There is no chance of switching to a different decision procedure and seeking, say, a Kantian equilibrium instead—we have literally thrown this ability away, along with all the old institutions that embodied it, in order to optimise for a bunch of Nash equilibria which are simply no longer relevant. Game over.
I think much of the emancipatory rhetoric of Web3—down with trusted 3rd parties! the old capitalism is little more than a tissue of vested interests, we can do property and markets so much better!—is leading us directly into this trap. Interestingly, there’s a passage in Scott Alexander’s much-lauded Meditations on Moloch essay that seems to agree with this:
People are using the contingent stupidity of our current government to replace lots of human interactions with mechanisms that cannot be coordinated even in principle. I totally understand why all these things are good right now when most of what our government does is stupid and unnecessary. But there is going to come a time when – after one too many bioweapon or nanotech or nuclear incidents – we, as a civilization, are going to wish we hadn’t established untraceable and unstoppable ways of selling products.
I’m sure there are many people who would look at that and think “what is he talking about? Web3 is all about coordination.” Thing is, it is and it isn’t—it’s about moving social interactions onto systems with improved economic coordination. But this comes at a price: the very same thing that makes them good at economic coordination is what makes it impossible for their users to socially coordinate. They actively undermine our ability to solve prisoner’s dilemmas at the same time as trying to arrange things so they don’t come up. Over long timescales this is suicide.
Where I might go one step beyond Scott Alexander is to suggest that the current impetus behind Web3 is not just the failure of present governance, but also a more general failure in our collective understanding of the nature of these kinds of institutions and what it is they actually do for us. A certain kind of crypto enthusiast sees all social institutions as mere trusted 3rd parties, as Leviathans who function only through an illiberal monopoly on the use of coercive force. They see only coercive Nash equilibria, never Kantian equilibria. This is the same kind of mistake as the inability to imagine the successes of the Chinese Communist Party in any terms other than crude authoritarianism, forgetting that historically the Chinese people have not been shy to revolt against leaders they don’t like, and that if the CCP now commands a de facto authority over the population, it is probably because most of them consider it to be legitimate. It’s a mistake that tells us more about ourselves than anything, of how unthinkable it is within our present cultural milieu that an authority demanding such a high degree of self-sacrifice could ever be considered legitimate by those over whom it is exercised. The sheer drive and enthusiasm poured into these trustless systems is, I think, a measure of how much we have given up on the very concept of legitimate authority.
To wrap up, then: rather than designing incentives with good Nash equilibria, we should think more about how to design the decision that lies one step back from this: the decision of which decision procedure to use. Since this is a question of coordination, it is in principle designable. But this will never look much like mechanism design in the sense talked about above, because it cannot help itself to the same kind of modelling assumptions. It cannot assume a decision procedure because this is precisely what it aims to establish, and if it cannot assume such a procedure then how can it find rational grounds to decide one way or another? This is a difficult question, one that requires a more philosophical kind of approach, but which will nevertheless have far-reaching practical implications. Luckily we do have some concrete examples: the moral language game is a social technology which appears capable of materialising a superrational structure of agency (you don’t have to be a deontologist to agree that the categorical imperative represents our best intuition about how the moral ought is used). It does this without any kind of centralised threat, relying instead on a kind of decentralised trust. The challenge is to understand how it does this, and then to use these insights as the basis for building new communication structures and social institutions.
Tags
coordination   agency   superrationality   kant   land   crypto