Chapter 10 — Humans as Meta-Optimisers

Humans are not endpoints in the optimisation chain. We are not consumers to be targeted, labour to be allocated, or data to be harvested. We are — or were designed by evolution to be — something far more computationally interesting.

We are meta-optimisers.

We are:

  • Long-horizon simulators. We can model states that do not yet exist. No other species plans its grandchildren's education. No algorithm has genuine long-term intentionality — it has a discount rate.
  • Pattern recognisers across domains. We can see that the dynamics of a fishery collapse resemble the dynamics of a financial bubble resemble the dynamics of an addiction cycle. Cross-domain pattern recognition is what allows us to build frameworks like this one.
  • Meaning generators. We do not just process information. We assign significance. We decide what matters. This is not a weakness or an inefficiency — it is the mechanism by which objective functions get chosen in the first place.
  • Constraint integrators. We can hold multiple, conflicting objectives in our heads simultaneously and navigate trade-offs without collapsing them into a single metric. A parent balancing career, childcare, health, finances, and relationships is solving a multi-objective optimisation problem that would crash most algorithms.
  • Vision engines. We can imagine systems that do not exist and build them. Language, agriculture, cities, markets, the internet — each was an act of speculative construction. We model the future, not just the present.

We can model collapse before it happens. We can detect non-linear risk. We can coordinate symbolically at scale in ways that no other optimising agent on this planet can approach.

But high-frequency industrial systems compress us into:

  • Consumers. Optimised for purchase frequency.
  • Labour inputs. Optimised for productivity per unit cost.
  • Data exhaust. Optimised for behavioural prediction.
  • Throughput nodes. Optimised for transaction volume.

Not because we are weak. Because narrow objective systems reward cognitive narrowing. Single metrics are cognitively cheap. Holding multi-dimensional trade-offs is cognitively expensive. The system selects for cheap cognition because cheap cognition is fast cognition, and fast cognition serves the high-frequency feedback loops that the system depends on.
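The cost of that collapse can be made concrete. A minimal sketch (all plans, weights, and numbers below are hypothetical) of why a single weighted metric is cheaper but lossier than holding the objectives apart: two plans can be incomparable under Pareto dominance, yet any scalar score silently picks a winner and discards the trade-off.

```python
# Toy sketch (hypothetical numbers): comparing two weekly plans on
# three objectives at once, versus collapsing them into one score.

def dominates(a, b):
    """True if plan `a` is at least as good as `b` on every objective
    and strictly better on at least one (Pareto dominance)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Objectives: (career, health, family), higher is better.
plan_a = (9, 3, 4)
plan_b = (5, 7, 7)

# A single weighted metric forces a total order...
def score(p):
    return 0.6 * p[0] + 0.2 * p[1] + 0.2 * p[2]

print(score(plan_a) > score(plan_b))  # career-heavy weights pick plan_a

# ...but neither plan Pareto-dominates the other: the trade-off the
# scalar metric just "resolved" was real, and it is still unresolved.
print(dominates(plan_a, plan_b), dominates(plan_b, plan_a))
```

The scalar comparison is one cheap operation; the dominance check has to inspect every objective and is allowed to answer "neither". That refusal to collapse is exactly the expensive cognition the paragraph above describes.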

We outsourced complexity. And forgot we were capable of carrying it.