What is rational
September 21, 2007. Posted by ficial in brain dump, games.
Caveat: this post is the result of game design, market discussion, the late hour, and a midori seltzer. It is no doubt less profound than it seems to me right now, but I expect that even in the cold light of day it’s at least interesting.
Game theory (and the economic models based thereon) assumes/defines that the agents involved are rational. This works fairly well for some things, but is hard to apply to others. Specifically, They (of the infamous Them) have to go through all kinds of contortions to explain things like altruism as any kind of rational behavior.
When difficulties arise they tend to be explained as:
– alternate modes of valuation (e.g. the ‘makes you feel good’ metric)
– projected iteration (e.g. in prisoner’s dilemma)
– relative position (e.g. in the money bowl game*)
– people are irrational
I think there’s a much simpler, better explanation. A rational agent usually means one that seeks to maximize its welfare (or the utility of its position, or however you want to describe it). In that case it’s hard to label an agent that engages in altruistic behavior as rational. However, it makes much more sense if the definition of rational is changed. There are two reasonable alternatives that spring to mind. First, rather than maximizing welfare, a rational agent might seek to minimize loss. However, I’m not wholly convinced that’s actually different, so I’m ignoring it for now.
Second, rather than aiming for maximum welfare, a rational agent aims for sufficient welfare, and each agent may put ‘sufficient’ at a different level. I think it models human agents more accurately, at the least. It easily accounts for altruism (at least for simple forms of it – you still have to jump through hoops in the case of self-destructive altruism). It handles the prisoner’s dilemma well (strict maximization produces weird, counter-intuitive results for various penalty/reward values). It covers the money bowl game well (someone grabs the bowl once it holds enough, and thereby guarantees a sufficient result rather than risking a null result). Finally, it doesn’t resort to the idea that people are irrational when the models don’t match real-world behavior.
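To make the prisoner’s dilemma case concrete, here’s a toy sketch in Python. The payoff numbers are the standard textbook ones, and the “satisficer” rule (cooperate if the symmetric outcome clears your personal bar) is my own illustrative stand-in for a sufficiency-seeking agent, not anything from the literature:

```python
# Standard prisoner's dilemma payoffs (higher is better for the row player).
MOVES = ("cooperate", "defect")
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def maximizer_move():
    # Defect strictly dominates cooperate (5 > 3 and 1 > 0),
    # so a payoff-maximizer defects no matter what the opponent does.
    best = MOVES[0]
    for m in MOVES[1:]:
        if all(PAYOFF[(m, o)] > PAYOFF[(best, o)] for o in MOVES):
            best = m
    return best

def satisficer_move(threshold):
    # A sufficiency-seeker asks: does the symmetric cooperative outcome
    # clear my personal bar? If so, cooperate; otherwise fall back to
    # the safe defection.
    if PAYOFF[("cooperate", "cooperate")] >= threshold:
        return "cooperate"
    return "defect"

print(maximizer_move())    # defect -> mutual defection pays 1 each
print(satisficer_move(3))  # cooperate -> mutual cooperation pays 3 each
```

Two maximizers end up at mutual defection (1 each), while two satisficers with modest bars land on mutual cooperation (3 each) – the “weird, counter-intuitive” gap between the two definitions of rational.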
Maximization works OK in some models because they’re so restricted / simple. As complexity increases, maximization along multiple axes becomes incomputable and/or impossible. As such, aiming for maximization in the complicated environment of the real world is unrealistic, perhaps even irrational. So an economic actor seeking maximization disregards non-monetary axes as a way to simplify the system to the point where a ‘rational’ choice is at least theoretically possible.
However, it is reasonable to aim for given, non-maximum levels on many different axes (it’s still hard with many intertwined axes, but possible). Sufficiency makes real-world problems tractable not by pruning the problem space (dangerous in the real world, where things like the environment, or getting enough sleep, can get chopped out) but by drastically increasing the range of acceptable solutions.
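A quick sketch of that tractability point, with entirely made-up numbers: three intertwined axes where improving one tends to cost another. “The maximum” across all axes at once doesn’t exist, but plenty of positions clear a modest bar on every axis:

```python
import random

random.seed(0)

def random_position():
    # Hypothetical agent positions along three intertwined axes;
    # the coupling (more money -> less free time, etc.) is invented
    # purely for illustration.
    money = random.random()
    time_ = random.random() * (1 - money)
    health = random.random() * (1 - money / 2)
    return {"money": money, "time": time_, "health": health}

positions = [random_position() for _ in range(10_000)]

# Maximizing every axis at once is ill-posed: the best-money position
# is not the best-time position, so no single overall maximum exists.
best_each = {axis: max(positions, key=lambda p: p[axis])
             for axis in ("money", "time", "health")}
no_overall_best = len({id(p) for p in best_each.values()}) > 1

# Satisficing is easy by comparison: any position clearing a modest
# bar on all axes is acceptable, and there are lots of them.
GOOD_ENOUGH = 0.3
acceptable = [p for p in positions
              if all(v >= GOOD_ENOUGH for v in p.values())]
print(no_overall_best, len(acceptable))
```

Sufficiency doesn’t shrink the problem; it widens the target, which is exactly why it stays computable as axes multiply.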
I suspect, though I can’t in any way back it up, that the widespread conflation of rationality with maximization arose during a time of drastic increase in humanity’s ability to understand and intentionally affect the world, or at least the common perception thereof. I’m guessing late 1800s or early 1900s, in the pre-Einstein era of science.
That’s probably enough rambling for now….
* the money bowl is a simple game, run in real life and as thought experiments. Put 10 people in a room, with an empty bowl in the center. Explain that you’ll be putting $10 in the bowl once every 30 seconds for the next 10 minutes. Anyone can claim the bowl at any time, thus getting all the cash and leaving none for anyone else. However, if the game goes the full 10 minutes and no one has claimed the bowl, then EACH PLAYER gets $200. The game never runs the full 10 minutes – someone always takes the bowl before it’s full. The usual explanation is that the person who takes the bowl is attempting to maximize their relative position, but it gets really complicated to try to apply that explanation when you run multiple instances of the experiment at the same time (or even just say you’re doing so).
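The sufficiency explanation of the money bowl is easy to simulate. In this Python sketch each player gets an invented personal “sufficient” amount between $20 and $180 (those numbers are mine, not from any real run of the game), and grabs the bowl as soon as it holds that much – locking in a sure result rather than risking a null one:

```python
import random

random.seed(1)

def play(num_players=10, deposits=20, amount=10):
    # $10 every 30 s for 10 minutes = 20 deposits, $200 max in the bowl.
    # Each player's 'sufficient' level is made up for illustration:
    # somewhere between $20 and $180.
    thresholds = [random.randint(2, 18) * 10 for _ in range(num_players)]
    bowl = 0
    for step in range(deposits):
        bowl += amount
        # Everyone whose bar the bowl now clears is willing to grab it;
        # one of them gets there first.
        willing = [p for p, t in enumerate(thresholds) if bowl >= t]
        if willing:
            grabber = random.choice(willing)
            return step + 1, grabber, bowl
    return deposits, None, bowl  # unreached while any bar is <= $200

steps, grabber, bowl = play()
print(f"bowl grabbed after {steps} deposits, holding ${bowl} (player {grabber})")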