Chapter 7 — The Game Theory Trap
Here is the structural problem that makes everything harder.
Imagine a fishery. Ten boats. The fish population can sustain a harvest of 1,000 tonnes per year indefinitely. If each boat takes 100 tonnes, the system is stable. Everyone prospers. The boundary condition is respected.
But no individual boat has an incentive to limit its catch. If you take only 100 tonnes and your competitor takes 200, you lose. They gain. And the fish population declines — but not because of you. Because of them. So you take 150 to stay competitive. They take 200. The population drops faster. Next year, the sustainable yield is lower, but the pressure to catch more is higher.
This is not a hypothetical. It is the history of Atlantic cod, Pacific bluefin tuna, and dozens of other fisheries. Rational agents, each locally optimising, collectively destroying the resource they depend on.
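The dynamic can be sketched numerically. A minimal simulation, assuming logistic stock growth with illustrative parameters (`r`, `K`) chosen so the maximum sustainable yield is 1,000 tonnes per year, as in the example above:

```python
# Fishery sketch: logistic stock regrowth vs. a fixed annual harvest.
# r and K are illustrative assumptions: with r = 0.5 and K = 8000,
# the maximum sustainable yield r*K/4 is exactly 1,000 tonnes/year,
# taken at a stock of K/2 = 4,000 tonnes.

def simulate(harvest_per_year, years=30, r=0.5, K=8000.0, stock=4000.0):
    """Return the stock trajectory under a fixed total annual harvest."""
    trajectory = [stock]
    for _ in range(years):
        growth = r * stock * (1 - stock / K)  # logistic regrowth
        stock = max(stock + growth - harvest_per_year, 0.0)
        trajectory.append(stock)
    return trajectory

cooperative = simulate(harvest_per_year=1000)  # 10 boats x 100 tonnes
escalated = simulate(harvest_per_year=1500)    # boats ratchet up the catch

print(f"cooperative stock after 30 years: {cooperative[-1]:.0f} t")  # 4000 t
print(f"escalated stock after 30 years:   {escalated[-1]:.0f} t")    # 0 t
```

Under the cooperative harvest the stock holds steady indefinitely; under the escalated harvest it collapses to zero within a decade. The numbers are a toy, but the asymmetry is the point: regrowth falls as the stock falls, while the harvest pressure does not.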
The core of the problem: thousands of isolated optimisation engines competing without system-level coordination. No boat has an incentive to leave enough fish to repopulate, because each knows another boat will take them first. Once there are multiple agents, the outcome is determined by the structure of the game, not by any one agent's intentions.
The formal structure is well understood. It is a multi-player prisoner's dilemma, iterated over time with imperfect information and asymmetric payoffs. The Nash equilibrium — the stable state where no agent can unilaterally improve their outcome — is collective overfishing. Not because the agents are stupid. Because the equilibrium of the game is destructive.
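The equilibrium can be checked mechanically in the two-boat case. The payoff numbers below are illustrative, not from the text: they encode the long-run value of the catch, so mutual restraint beats mutual overfishing, yet overfishing is always the better unilateral reply — the defining shape of a prisoner's dilemma:

```python
# Two-boat prisoner's dilemma with illustrative payoffs. A strategy
# pair is a Nash equilibrium if neither boat can improve its payoff
# by unilaterally switching strategy.

STRATEGIES = ["restrain", "overfish"]

# PAYOFF[(a, b)] = (payoff to boat A, payoff to boat B)
PAYOFF = {
    ("restrain", "restrain"): (3, 3),  # sustainable: best joint outcome
    ("restrain", "overfish"): (1, 4),  # the restrained boat loses share
    ("overfish", "restrain"): (4, 1),
    ("overfish", "overfish"): (2, 2),  # both worse off than (3, 3)
}

def is_nash(a, b):
    """True if no player gains from a unilateral deviation."""
    pa, pb = PAYOFF[(a, b)]
    best_a = all(PAYOFF[(alt, b)][0] <= pa for alt in STRATEGIES)
    best_b = all(PAYOFF[(a, alt)][1] <= pb for alt in STRATEGIES)
    return best_a and best_b

equilibria = [(a, b) for a in STRATEGIES for b in STRATEGIES if is_nash(a, b)]
print(equilibria)  # [('overfish', 'overfish')]
```

Only mutual overfishing survives the deviation check, even though both boats would prefer the mutual-restraint outcome. That gap between the equilibrium and the collectively preferred outcome is the trap.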
And the escape routes are limited:
Option 1: The Regulator. An external authority monitors the boundary conditions and forces agents to stay within sustainable limits. Fishing quotas. Carbon caps. Banking capital requirements. This can work — it has worked, in specific cases, for specific periods. But regulators face their own optimisation problems: lobbying pressure narrows their objective function; information asymmetry means agents know more than regulators; regulatory capture turns the referee into a player. The regulator is itself an agent in a game.
Option 2: The Oligopoly. A small number of dominant players agree — explicitly or tacitly — to respect boundary conditions and prevent new agents from entering. OPEC is the canonical example. This can stabilise the resource, but it creates its own pathology: restricted competition, rent extraction, barriers to entry, and the concentration of power in the hands of the few players who agreed to cooperate. The cure is a different disease.
Option 3: Boundary-Aware Agents. The agents themselves internalise the boundary conditions into their objective functions. Not because a regulator forces them, but because their optimisation framework is multi-dimensional enough to include systemic sustainability. This is the hardest option. It requires agents to voluntarily accept lower short-term returns in exchange for long-term system stability. It requires them to solve a coordination problem without a coordinator.
It is also the only option that scales without creating new pathologies.
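What "internalising the boundary condition" might look like inside an objective function can be sketched. The construction below is an assumption for illustration, reusing the toy fishery numbers: each boat maximises its catch minus a penalty for pushing the stock below the floor that sustains the maximum yield. With no penalty weight, the short-term optimiser reappears; with a sufficient weight, the individually optimal harvest is exactly the sustainable share:

```python
# Sketch of a boundary-aware objective (illustrative numbers, as above:
# r = 0.5, K = 8000, 10 boats, stock at the MSY level K/2 = 4000, and
# the other nine boats each taking their sustainable 100 tonnes).

R, K, BOATS = 0.5, 8000.0, 10

def next_stock(stock, total_harvest):
    """Stock next year under logistic regrowth and this year's harvest."""
    growth = R * stock * (1 - stock / K)
    return max(stock + growth - total_harvest, 0.0)

def best_harvest(penalty, stock=4000.0, others=900.0, floor=4000.0):
    """Harvest (0..300 t) maximising catch minus a penalty for pushing
    next year's stock below the sustainable floor."""
    def objective(h):
        shortfall = max(floor - next_stock(stock, others + h), 0.0)
        return h - penalty * shortfall
    return max(range(0, 301, 10), key=objective)

print(best_harvest(penalty=0))  # 300: the pure short-term optimiser
print(best_harvest(penalty=2))  # 100: exactly the sustainable share
```

The sketch dodges the hard part — every boat must adopt the penalty term for the floor to hold, which is precisely the coordination problem without a coordinator — but it shows the mechanism: the boundary condition enters the objective function itself, rather than being imposed from outside.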
AlgoRhythmics is the argument that Option 3 is not utopian — it is the next necessary evolution of optimisation itself.