I've spent the last few months thinking a lot about complexity, the study of complex systems. I've read two books, Complexity: A Guided Tour and Complexity: The Emerging Science at the Edge of Order and Chaos, and taken a course from the Santa Fe Institute. At its core, a complex system is a system composed of simple components that interact nonlinearly. It has no central control but displays emergent behavior: hierarchical organization, information processing, dynamics, evolution, and learning. The discipline can be applied to many topics like biology, evolution, and networks, but I was particularly drawn to how it can model human behavior. Traditional economics assumes that individuals are selfish and act to maximize their utility, yet we see cooperation across society. Applying complexity to this question can help us understand when it is in your self-interest to cooperate with others, and how that cooperation comes about and persists over time.
"Show me the incentive, and I will show you the outcome." — Charlie Munger
Incentives drive everything. At its core, there are only four permutations of relationships: win-win, win-lose, lose-win, and lose-lose. Of these four, only win-win relationships are sustainable in the long term. It's the only permutation that can reach a sustained Nash equilibrium where both participants feel incentivized to maintain the relationship. This type of relationship is only seen in non-zero-sum games, where one player's gain doesn't come directly from the other player's loss. The problem is that it's often difficult to tell what type of relationship you are entering, because other people's incentives aren't clear. People can lie about their incentives and have hidden motivations.
Andrew Huberman talks about the idea of overt vs. covert contracts. Overt contracts are situations in which the exchange with another person is clear: for example, hiring someone to complete a task, or starting a company with a clear equity split upfront. When you are upfront about the exchange, the contract is overt. Another example Huberman gives is the dog-to-human relationship. Part of why we love dogs so much is that the relationship is 100% overt; they don't manipulate you. Covert contracts, on the other hand, are always bad. When you start a company with someone but they are unhappy with the equity split, they become resentful. The resentful person feels they deserve more, so they covertly start taking something else from the relationship: they show up late or don't try very hard to complete their tasks. This is an unfortunate feature of human interaction. Huberman attributes it to our large prefrontal cortex: when we don't feel safe, we start extracting something else from the exchange. We do this because, in the short term, it's harder to sit with the insecurity of a relationship, so we try to create a false sense of certainty. But the only certainty here is that covert contracts end badly, whereas overt contracts end well; they are win-win. To add further complexity to this problem, even if you are engaging in overt contracts you can't be certain that the other person isn't engaging in covert ones. Mind reading isn't a technology we've unlocked yet, so you can only be responsible for your side of the contract. Again, incentives drive everything, so do what you can to play non-zero-sum games and enter into win-win relationships.
A similar example can also be seen in the startup world. Jesse Beyroutey introduced me to the study of Market Game Theory which applies game theory concepts to study the behavior of players in a market. Oftentimes, an incumbent dominates market share but a startup enters with a new business model and takes a significant share from the incumbent. This happens because of dominance friction, which is when there is a misalignment between a company and its customer or another party in its value chain. Startups can expose this misalignment and beat incumbents.
An example is how Uber beat the taxicab business. Taxi operators monopolized car transportation in most cities, so they could charge exorbitant prices and didn't focus on delivering a good customer experience. Uber took advantage of this misalignment and offered consumers a better product: cheaper, faster rides. They aligned their profit model with their customers' cost drivers and beat the incumbents. Better alignment of incentives produced a win-win relationship that toppled incumbents with a seemingly strong advantage.
To bring this back to complexity, we can use two games to model out what self-organization and cooperation might look like in economics. First let's look at the Prisoner's Dilemma, a classic game used in Game Theory.
This matrix shows the outcomes of the game. In its most basic form, each player can remain silent (cooperate) or confess (defect), and the payoffs depend on what the other player does. Notice that it is always in the individual's best interest not to cooperate; confessing is the strictly dominant strategy. Yet if both players follow it, they reach the worst joint outcome. It is a tragedy-of-the-commons situation: individuals, acting in their self-interest, overexploit a shared resource, leading to its depletion and ultimately harming everyone who relies on it.
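To make this concrete, here is a minimal sketch of the payoff matrix in Python. The specific sentence lengths are an assumption (standard textbook values), since the original matrix image isn't reproduced here; any numbers with the same ordering tell the same story.

```python
# A standard Prisoner's Dilemma payoff matrix (years in prison, so lower
# is better). The exact numbers are assumed, not taken from the original
# matrix; only their ordering matters.
# Each entry: (my_move, their_move) -> (my_years, their_years)
PAYOFFS = {
    ("silent", "silent"):   (1, 1),   # both cooperate: light sentences
    ("silent", "confess"):  (3, 0),   # I stay silent, they confess: I lose big
    ("confess", "silent"):  (0, 3),   # I confess, they stay silent: I walk free
    ("confess", "confess"): (2, 2),   # both confess: bad for everyone
}

def my_years(my_move, their_move):
    return PAYOFFS[(my_move, their_move)][0]

# Confessing is strictly dominant: whatever the other player does,
# I serve less time by confessing.
for their_move in ("silent", "confess"):
    assert my_years("confess", their_move) < my_years("silent", their_move)

# Yet mutual confession is worse for both than mutual silence.
print(PAYOFFS[("confess", "confess")], "vs", PAYOFFS[("silent", "silent")])
# -> (2, 2) vs (1, 1)
```

The dominant-strategy check is exactly the tragedy described above: each player's individually rational move produces the collectively worst stable outcome.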
"The pursuit of self-interest by each leads to poor outcomes for all" — Robert Axelrod
Robert Axelrod is a political scientist and complexity researcher who explored the role of cooperation in social games. Around 1980 he hosted two tournaments to find the best strategies for the iterated Prisoner's Dilemma; the first drew 14 entries and the second 62, with matches running roughly 200 rounds. What was interesting about the results was that there wasn't a single best strategy: how well a strategy performs depends on the other strategies it interacts with. However, the successful strategies shared a few qualities: they were nice (cooperated to begin with), forgiving, retaliatory (punished defection), and clear (didn't make random moves). In both tournaments, the winner was tit for tat (TFT), a strategy that starts out cooperating and then mimics its opponent's previous move. What's remarkable is that a player following TFT can never score higher than its current opponent: every match ends in a loss or a draw, never a win, yet TFT comes out ahead of all other strategies in total.
On the other hand, a strategy that never cooperates can never lose an individual match; it can only draw or win. Yet overall it performs worse. So not winning at the local level can produce a global maximum. The key insight is that winning does not mean beating the other person; it's not zero-sum. As I mentioned earlier, you should strive to play non-zero-sum games, where winning doesn't require taking from the other player and cooperation can pay even among rivals.
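Here is a tiny round-robin tournament in Python that illustrates the dynamic. The three strategies are illustrative stand-ins, not Axelrod's actual entries; the payoff values (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for exploitation) are the standard ones he used. Notice that TFT never outscores its opponent within any single match, yet it ties for the best overall total:

```python
import itertools

# Per-round payoffs (points, higher is better): standard Axelrod values.
def payoff(me, other):
    if me == "C" and other == "C": return 3   # mutual cooperation
    if me == "C" and other == "D": return 0   # sucker's payoff
    if me == "D" and other == "C": return 5   # temptation to defect
    return 1                                   # mutual defection

# Each strategy sees its own history and the opponent's history.
def tit_for_tat(my_hist, their_hist):
    return "C" if not their_hist else their_hist[-1]  # copy their last move

def always_defect(my_hist, their_hist):
    return "D"

def grudger(my_hist, their_hist):
    return "D" if "D" in their_hist else "C"  # cooperate until betrayed

def play_match(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        score1 += payoff(m1, m2)
        score2 += payoff(m2, m1)
        h1.append(m1); h2.append(m2)
    return score1, score2

strategies = {"TFT": tit_for_tat, "ALL-D": always_defect, "GRUDGER": grudger}
totals = {name: 0 for name in strategies}
for (n1, f1), (n2, f2) in itertools.combinations(strategies.items(), 2):
    a, b = play_match(f1, f2)
    totals[n1] += a; totals[n2] += b

print(totals)  # -> {'TFT': 799, 'ALL-D': 408, 'GRUDGER': 799}
```

ALL-D "wins" its match against TFT (204 to 199, by exploiting the first round), but its totals collapse because it can never build the long runs of mutual cooperation that TFT and the grudger enjoy with each other.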
Another interesting model is the El Farol Bar problem, proposed by Brian Arthur of the Santa Fe Institute. The setup is that every Thursday night, a fixed population wants to have fun at the El Farol bar, unless it's too crowded:
If less than 60% of the population goes to the bar, they'll all have more fun than if they stayed home.
If more than 60% of the population goes to the bar, they'll all have less fun than if they stayed home.
Everyone must decide at the same time whether to go or not, with no knowledge of others' choices.
Traditional economics makes a few key assumptions: players are perfectly rational, self-interested agents; they have complete knowledge of the others' strategies; and they reason deductively. Under these assumptions the model predicts an efficient outcome, the best possible situation for all (the bar is never overcrowded but the maximum number of people attend): an example of the invisible hand at work. Complexity economics starts with a different set of assumptions: each player is a self-interested agent with bounded rationality, limited knowledge of the others' strategies, and only inductive reasoning, and players adapt over time. The question complexity wants to answer is whether players can still cooperate without communication or rational deductive reasoning; in other words, whether self-organized efficiency emerges under these more realistic conditions.
Using tools from the study of complexity, the game can be modeled with the NetLogo software; the results are shown here. In the model you can toggle different starting conditions, but overall the results show that we can reproduce similar player behavior without the unrealistic assumptions of traditional economics. Self-organized cooperation and efficiency are possible without perfect rationality, complete knowledge, or deductive reasoning. Even with more realistic assumptions, we see the emergence of the invisible hand.
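In the same spirit as the NetLogo model, here is a small agent-based sketch of the El Farol problem in Python. The predictor set, the learning rule, and all parameters are my own illustrative choices, not Arthur's exact specification: each agent holds a couple of simple forecasting rules, acts on whichever has been most accurate so far, and attendance tends to fluctuate around the 60-person threshold with no communication at all.

```python
import random

N, CAPACITY, WEEKS = 100, 60, 300  # population, comfort threshold, horizon
random.seed(0)

# Simple forecasting rules mapping attendance history -> this week's guess.
# This predictor set is an illustrative assumption, not Arthur's original.
PREDICTORS = [
    lambda h: h[-1],                      # same as last week
    lambda h: sum(h[-4:]) / len(h[-4:]),  # average of recent weeks
    lambda h: 2 * h[-1] - h[-2],          # extrapolate the recent trend
    lambda h: N - h[-1],                  # mirror around the population size
]

class Agent:
    def __init__(self):
        # Each agent holds two random predictors and tracks their errors.
        self.predictors = random.sample(range(len(PREDICTORS)), 2)
        self.errors = {p: 0.0 for p in self.predictors}

    def decide(self, history):
        # Act on whichever predictor has been most accurate so far,
        # going to the bar only if it forecasts a comfortable crowd.
        best = min(self.predictors, key=lambda p: self.errors[p])
        return PREDICTORS[best](history) < CAPACITY

    def learn(self, history, actual):
        # Inductive reasoning: update each predictor's accumulated error.
        for p in self.predictors:
            self.errors[p] += abs(PREDICTORS[p](history) - actual)

agents = [Agent() for _ in range(N)]
history = [N, N]  # seed history so trend predictors have two points
for week in range(WEEKS):
    attendance = sum(agent.decide(history) for agent in agents)
    for agent in agents:
        agent.learn(history, attendance)
    history.append(attendance)

# Mean attendance over the last 50 weeks, with no central coordination.
print(sum(history[-50:]) / 50)
```

No extreme can persist: if everyone stays home, the best predictors forecast an empty bar and agents return; if everyone goes, they forecast a crowd and agents stay away. The adaptive churn in between is the self-organization the NetLogo model demonstrates.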
I'm excited to continue studying this discipline and applying it to understand emergent human behavior. Tools from the study of complexity, like the models above, provide a new lens for understanding incentives. There are exciting insights to be unlocked, and I'd like to better understand these concepts in the hopes of applying this framework to other areas of life. If anyone is thinking through similar concepts, I'd love to chat.