Can humans be modeled as utility maximizers?

A human being could theoretically be modeled as a utility maximizer, but doing so would require ascribing to them a very complicated utility function: one that changes depending on the context, sometimes quite erratically and in response to irrelevant stimuli, and that satiates after a certain point.
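To make this framing concrete, here is a deliberately tiny sketch (the scenario, names, and numbers are all invented for illustration) of a utility function with the properties described above: it satiates via diminishing returns, and its valuations shift with context, including stimuli that are irrelevant to the underlying outcome. An actual human-equivalent function would be incomparably more complicated.

```python
import math

def toy_utility(food: float, context: str) -> float:
    """A toy 'utility' over food. log1p gives diminishing returns,
    so the function satiates; the context multipliers make the same
    outcome be valued differently for arguably irrelevant reasons."""
    value = math.log1p(food)   # satiation: flattens as food grows
    if context == "just_ate":
        value *= 0.1           # same outcome, much lower valuation
    elif context == "saw_food_ad":
        value *= 1.5           # irrelevant stimulus inflates the valuation
    return value

print(toy_utility(3.0, "neutral"), toy_utility(3.0, "saw_food_ad"))
```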

An alternative framing presents humans as having many preferences, goals, and values that sometimes conflict with each other, are context-dependent, and can change over time, whether through deliberate reflection or in response to irrelevant circumstances.
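A minimal sketch of this alternative framing, again with invented preference names and scores: rather than one scalar utility, the agent carries several separately scored preferences that can disagree with each other, with scores that depend on context.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Preference:
    name: str
    score: Callable[[str, str], float]  # (action, context) -> value in [-1, 1]

# Hypothetical preferences that pull in opposite directions in some contexts.
preferences = [
    Preference("health",  lambda act, ctx: -0.8 if act == "eat_cake" else 0.2),
    Preference("comfort", lambda act, ctx: 0.9 if (act, ctx) == ("eat_cake", "stressed") else 0.1),
]

def evaluate(action: str, context: str) -> Dict[str, float]:
    # Deliberately no single aggregate score: the unresolved conflict
    # between preferences is the point of this framing.
    return {p.name: p.score(action, context) for p in preferences}

print(evaluate("eat_cake", "stressed"))  # {'health': -0.8, 'comfort': 0.9}
```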

Arguably, this alternative framing is also more accurate with respect to how humans evolved under natural selection. Evolution optimized humans for inclusive genetic fitness, but it was too weak an optimizer to imbue them with an inherent objective of maximizing that fitness. Instead, humans ended up with a genome that, given a particular (physical and social) environment, can produce a brain with some innate preferences[1] and a capacity to acquire many other preferences through experience. In the ancestral environment, these preferences tended to track things that were beneficial for inclusive genetic fitness. In modern environments, this is often no longer the case. In other words, humans (just like all other animals) are Adaptation-Executers, not Fitness-Maximizers.

Some frameworks aiming to conceptualize the development of human values include Shard Theory and Beren Millidge’s Computational Anatomy of Human Values.


  1. Or at least preferences that develop remarkably easily in an extremely wide variety of circumstances. ↩︎