Last month, I had the pleasure of reading Kelly Clancy’s excellent Playing with Reality, a wide-ranging history of technology and society, named an Economist book of the year. Over 300-odd pages, Clancy, a physicist and neuroscientist, covers topics as varied as probability, game theory, evolutionary biology, and the invention of AI.
Throughout the book, Clancy maintains a steady through-line: the models we develop to understand our world inevitably reach back around to shape that very world, and not always for the better. By applying a critical eye to these histories, à la Yuval Noah Harari, we create the possibility of different futures.
This essay provides chapter-by-chapter summaries of the book, along with commentary linking the material to the broader themes of this blog.
I. How to Know the Unknown
The book’s first section covers gaming’s fundamentals: the science of dopamine in the brain, and the early history of probability.
1. The Play of Creation
Clancy introduces her central theme of games as world-building tools
The book opens with the example of Rithmomachia, an early chess-like game based on the ideas of Pythagoras, the ancient Greek philosopher. While popularly known for his eponymous theorem, the historical Pythagoras was more a religious leader than a scholar, preaching that numbers hid secrets of the divine, and that their study could lead to mystical insight. These (incorrect) beliefs were encapsulated in and reinforced by this popular game and, in Clancy’s view, held European scholarship back for centuries.
Games, in Clancy’s view, shape our minds, as winning requires players to think within the game’s terms. Every game is a simple model of the world, and rewards players to the extent that they are able to inhabit that model. As “systems furnished with a goal,” games help players learn to navigate the unknown, can connect people across language and culture, and provide an important source of satisfaction. As game designer Raph Koster has observed: “fun is just another word for learning.”
Clancy argues that games have been central to the evolution of intelligence: “play expanded nature’s search strategy of randomness into the behavioral realm. … Play is to intelligence as mutation is to evolution.” Through play, humans can rehearse and prepare for hypothetical real-world scenarios, helping them thrive in a dynamic and uncertain world. However, as compelling as a play experience might be, a game is never more than a well-defined mathematical object. By allowing our thinking to be shaped by imaginary constraints, we may ultimately limit our ability to navigate messy and complex reality.
Commentary
The example of Rithmomachia provides a compelling hook, in particular through the way that the game’s misconstrued assumptions ultimately constrained the thinking of its players. I have long been a fan of Raph Koster, whose book A Theory of Fun for Game Design had a big influence on the design of Chore Wheel.
2. How Heaven Works
Clancy weaves the discovery of dopamine into the early history of AI
The chapter opens with a history of the “sleeping sickness,” the mysterious illness which plagued thousands in the years after WWI. In searching for a cure, scientists discovered dopamine. Initially understood as a neurotransmitter aiding movement, dopamine came over time to be seen as a general driver of behavior: “what is movement ultimately for, save the pursuit of reward and avoidance of punishment?” Later studies took the argument further: dopamine is not about reward per se, but about beliefs and expectations of reward. It was during this era that psychologists like B.F. Skinner would speculate about the possibility of socially-engineered utopias.
Clancy introduces the counterpoint of games as a measure of intelligence and a tool for education. Early AI research made explicit the link between random action and reward, as seen in the example of MENACE — a physical computer system which learned to play tic-tac-toe by reallocating beads throughout various matchboxes, based on game outcomes. Over time, this idea of reinforcement learning would emerge as a prediction-based approach to computational learning.
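To make the mechanism concrete, here is a minimal sketch of the matchbox scheme in Python. This is my illustration, not a description of the original machine: the initial bead counts and reinforcement amounts are arbitrary, and the board representation is left abstract.

```python
import random
from collections import defaultdict

# One "matchbox" per board state, holding a bead count for each legal move.
boxes = defaultdict(lambda: defaultdict(lambda: 3))  # start with 3 beads per move

def choose_move(state, legal_moves):
    """Draw a move with probability proportional to its bead count."""
    beads = [boxes[state][move] for move in legal_moves]
    return random.choices(legal_moves, weights=beads)[0]

def reinforce(history, outcome):
    """After a game, adjust the beads for every (state, move) played."""
    for state, move in history:
        if outcome == "win":
            boxes[state][move] += 3  # add beads: the move becomes likelier
        elif outcome == "loss":
            boxes[state][move] = max(1, boxes[state][move] - 1)  # remove a bead
```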
Over time, these two threads would come together, with a collaboration between computational and neuroscience researchers resulting in a “set of now canonical papers suggesting that dopamine serves to broadcast the brain’s reward prediction error” — a landmark achievement linking two scientific disciplines.
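The formal core of that link, in the now-standard temporal-difference formulation, fits in a few lines. A sketch in my own notation (the discount factor and learning rate are illustrative):

```python
def prediction_error(reward, value_next, value_now, gamma=0.9):
    """Reward prediction error: how much better (or worse) things went
    than expected. On the canonical account, a positive error maps to a
    burst of dopamine, a negative error to a dip."""
    return reward + gamma * value_next - value_now

def update_value(value_now, error, alpha=0.1):
    """Learning: nudge the prediction toward reality, scaled by the error."""
    return value_now + alpha * error
```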
Clancy reflects on the incredible human ability to construct rewards systems for events far in the future (planting crops for harvest) or even purely hypothetical (the promise of life after death), allowing for complex coordinated behavior to emerge. She leaves the reader with an invitation, foreshadowing some later chapters of the book: to improve predictions, either the participant’s mental model can be improved, or the world can be made more predictable.
Commentary
This chapter introduces Clancy’s characteristic style, weaving together the stories of some of history’s most brilliant minds. It also introduces one of her main themes: the interaction of psychological and computational research throughout the 20th century.
3. Dice Playing God
In this chapter, Clancy reviews the history of probability theory
This chapter opens with a survey of divination in early cultures. Augury, whether through animal entrails or cracked bones, was a widespread practice in early societies. Clancy speculates that these techniques, while seemingly pointless to modern observers, were in fact an early source of randomness, helping participants break free of limiting cognitive biases. Bones would over time evolve into dice and lots, seen as “revealed technology for God’s will” and a means of accessing heavenly insight.
Games of chance would prove consequential for societies, for better and for worse. Among Native American tribes, gambling would become a way of reallocating resources across social classes. It was also addictive, with ubiquitous gambling regularly destabilizing societies. Writing about the late Roman empire, historian Andrew Steinmetz observed that “every inhabitant of that city, down to the populace was addicted to gambling.” Of France before the revolution, he remarks that “at the death of Louis XIV, three-fourths of the nation thought of nothing but gambling.”
In a fascinating discussion, Clancy asks why a mathematical treatment of probability did not arrive until the Renaissance. She argues that the ancient Greeks were fundamentally philosophical, not experimental, in their views. The idea that empirical methods could help answer questions of reality was simply not available to them. Further, the empirical techniques necessary for a science of uncertainty did not arrive in Europe until the middle ages, with the import of the Indian numbering system.
Tracing this history, Clancy describes the first formalization of probability by Gerolamo Cardano in the 16th century, then developed further by Antoine Gombaud, Blaise Pascal, and Pierre de Fermat in the 17th century, with the first calculations of the expected outcomes of a series of dice throws. This emerging field quickly gained momentum, with concepts of decision-making under uncertainty, marginal utility, and Bayesian learning arriving in the years that followed. These statistical techniques were soon applied to questions of public health and social policy, leading to widespread benefits.
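The flavor of those early calculations is easy to recover. Here is the classic de Méré puzzle, the one Gombaud is said to have brought to Pascal and Fermat (the worked numbers are mine, not Clancy’s): why does betting on at least one six in four rolls win money, while betting on at least one double-six in twenty-four rolls of two dice loses?

```python
# P(at least one success) = 1 - P(no successes), the key insight.
p_one_six = 1 - (5 / 6) ** 4          # ≈ 0.518: a (slightly) favorable bet
p_double_six = 1 - (35 / 36) ** 24    # ≈ 0.491: a (slightly) losing bet

print(f"at least one six in 4 rolls:         {p_one_six:.3f}")
print(f"at least one double-six in 24 rolls: {p_double_six:.3f}")
```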
Clancy closes with a discussion of why gambling is so addictive. It is not merely about the reward, she reasons, as people do not crowd around vending machines. Rather, the allure of gambling is that of exploring the unknown. Recalling the previous chapter, Clancy explains that dopamine release is highest when the odds of winning are 50%. Evolutionarily, this motivated humans to persevere in the face of failure. But a science of probability can also allow us to delude ourselves, convincing ourselves we know more than we do by assigning numbers to our uncertainty.
Commentary
Clancy’s argument for the social advantages of divination is refreshing. The idea of early augury not as pure superstition but as an actually useful technique for reducing bias is fascinating, as is her interpretation of Native American gambling as a mechanism of wealth redistribution. Her characterization of ancient Greek thought as empirically myopic is equally striking, and I have long been fascinated by the history of numeral systems.
II. Naming the Game
The book’s second section focuses on the history of war-games, from chess to the groundbreaking Kriegsspiel through to the development of game theory. Clancy emphasizes how simplified models of “rational actors” often come up short.
4. Kriegsspiel, the Science of War
Clancy traces the historical development and influence of war games
Chess originated over 1,500 years ago in India, and came to Europe by way of Arab trade and conquest in the centuries which followed. The game is a highly abstracted model of war, and the 18th-century mathematician Johann Hellwig would attempt to enrich it by modernizing the pieces, enlarging the board, and incorporating dice to determine attack damage. A few decades later, the father-and-son team of Georg Leopold von Reisswitz and Georg Heinrich Rudolf Johann von Reisswitz, both Prussian military officers, would take this work further. Eager to serve their king, they transformed Hellwig’s game into the beginnings of a “war computer,” incorporating to-scale maps, realistic terrain, weather effects, and a data-driven scoring table. Players had separate boards, modeling the uncertainty of fog-of-war. A full game could take weeks.
The game debuted in 1824 and was a hit among the military aristocracy, with senior officials using it to refine and develop their own military strategies, going on to win real battles. The game’s effectiveness was seemingly validated by Wilhelm I’s successful military unification of Germany, and it became increasingly popular afterwards. Over time, complex rulebooks were replaced by professional umpires, making games faster and easier to play. As the game became more accessible, strong gameplay became a way for officers to advance, making the military as a whole more meritocratic.
As Clancy is keen to remind us, the game had its limits. Germany’s invasion of Belgium in World War I, while tactically sound, was politically disastrous, bringing Britain into the war and leading to Germany’s defeat. Kriegsspiel, sadly, had no diplomacy module.
In the years following World War II, Kriegsspiel would become the inspiration for tabletop games like Warhammer, Settlers of Catan, and the classic 1974 role-playing game Dungeons & Dragons, and ultimately become the basis for the text-based and graphical computer games we play today.
Commentary
There is nothing quite like a secret history. The father-son von Reisswitz team were product visionaries before such a term even existed, and would likely have felt at home at Xerox PARC or Bell Labs. Their legacy is vast, with their work having shaped not only the course of European military history, but contemporary game design itself.
5. Rational Fools
Clancy reviews the historical origins of game theory
Armed with an emerging science of probability, mathematicians in the 18th and 19th centuries would begin attempting to formalize popular games, in order to discover winning strategies. This project reached its apotheosis in John von Neumann, the legendary Hungarian mathematician who, inspired by his advisor David Hilbert’s vision of systematizing mathematics, as well as by the rise of antisemitism in Europe, would attempt to describe “laws” of human nature.
In 1926, von Neumann published a paper proving that for any two-player zero-sum game, there is a single best solution — the so-called rational strategy. This strategy, known as “minimax,” involves each player attempting to minimize their opponent’s maximum gain — as when a child cuts a cake evenly to minimize the size of their sibling’s piece. In later work, von Neumann found that when the number of players increases, the stable strategy becomes unpredictable due to the introduction of possible alliances and coalitions — a version of the famous “three-body problem.”
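Returning to the two-player case: minimax reduces to a few lines of code. A sketch, assuming a hypothetical `game` object exposing `is_terminal`, `score`, and `children` (names mine):

```python
def minimax(state, maximizing, game):
    """Value of `state` under optimal play by both sides: the maximizer
    picks the highest-value move, the minimizer the lowest -- each
    limiting the best the other can achieve."""
    if game.is_terminal(state):
        return game.score(state)  # e.g. +1 win, 0 draw, -1 loss
    values = (minimax(child, not maximizing, game)
              for child in game.children(state))
    return max(values) if maximizing else min(values)
```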
Simultaneously, Oskar Morgenstern was developing his own foundations for economics, which he felt was too reliant on unrealistic assumptions. To Morgenstern, economics was “a fragmented field whose practitioners were asking the wrong questions with the wrong tools, offering useless and static models built on a foundation of impossible assumptions.”
Von Neumann and Morgenstern would team up and write the classic “Theory of Games and Economic Behavior” in 1944, laying the foundations of game theory. By formalizing economist Paul Samuelson’s insights about “revealed preferences” and incorporating notions of decision-making under uncertainty, the pair was able to develop a consistent theory for predicting participant strategies and choices.
The work was initially ignored by economists, who found the material too formal, and the results too narrow to be useful. It was John Nash’s 1951 introduction of his eponymous equilibrium, providing a stable solution for games involving many players and many possible outcomes, which made game theory palatable to economists. The ideal of “economic equilibria” was adopted by conservative economists, becoming the basis of market fundamentalist thinking. Selfishness, Clancy observes, made people predictable.
In the present day, game-theory-influenced market designs pervade our economy. Clancy writes:
It is hard to argue with the willful naiveté of a theorist whose model agrees with their ideology. Free-market fundamentalists contend that markets offer a kind of ethical alchemy. Selfishness is held up like a philosopher’s stone, capable of transmuting what was once considered a sin into economic virtue.
However, while optimal in theory, these markets have not delivered widespread abundance in practice. Driving her point home, Clancy argues that this is largely due to the narrow assumptions about human beings that these models require.
Commentary
Here Clancy continues building momentum in her arguments. In recounting the story of these legendary mathematicians, Clancy reminds the reader at every turn that these mathematical models bear only the most incidental relationship to actual human behavior.
6. The Clothes Have No Emperor
Clancy mounts her major critique of game theory.
Despite its elegant mathematics, game theory has been deeply flawed in practice. Because the theory rests on mathematical tautologies, its many failed predictions are never seen as limitations of the theory, but rather as “proof” that human beings are fundamentally irrational, unwilling to play ball.
A classic example is the famous “prisoner’s dilemma”, developed by RAND mathematicians Merrill Flood and Melvin Dresher in 1950. Purporting to disprove Adam Smith’s notion of an “invisible hand,” this thought experiment demonstrates that, at least in some cases, rational self-interest leads to collective harm. This influential model would go on to influence the behaviors of corporations, governments, and militaries across the globe. And yet, as Clancy points out, the model’s predictions are often bunk. Participants frequently cooperate, engaging in “tit-for-tat” behaviors to “train” defectors to play nice.
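The cooperative result is easy to reproduce in simulation. A toy iterated prisoner’s dilemma (the payoff values are the standard textbook ones, not necessarily Flood and Dresher’s):

```python
# (my_move, their_move) -> (my_payoff, their_payoff)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a, hist_b), strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Tit-for-tat loses head-to-head against a defector (9 vs 14), but a pair
# of tit-for-tat players (30 each) far outscores a pair of defectors (10 each).
print(play(tit_for_tat, always_defect))
print(play(tit_for_tat, tit_for_tat))
```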
The field of behavioral economics emerged in part as a response to these “mistakes,” seeking to document the many ways that humans are irrational — but often only producing irreproducible results. As Clancy points out, physicists eagerly update their models in response to new data. Economists do not.
Game-theoretical models show that people should frequently defect; in practice, people cooperate, and punish defectors. Much of this discrepancy can be attributed, Clancy argues, to the way in which beliefs affect player choices. As economist Herbert Gintis notes, norms and beliefs “choreograph” player actions, and cooperative norms may have emerged as a way of helping humans survive in their early environments. Clancy notes that cooperation releases dopamine — ergo, being nice is its own reward. Rather than modeling humans as greedy optimizers, Clancy suggests, perhaps we could model them as dynamic learners looking to make increasingly better predictions.
The prevalence of zero-sum views, largely the influence of game-theoretical attitudes, drives zero-sum behaviors. Clancy draws a striking contrast between the 1968 publication of Garrett Hardin’s infamous “Tragedy of the Commons” — a theoretical statement of coordination failure — and Elinor Ostrom’s earlier 1965 PhD dissertation empirically documenting the ways that real communities actually manage shared resources. Quoting Ostrom: “People collectively work out globally beneficial outcomes by cleaving to values like reputation, trust, and reciprocity — not by discarding these values.” Ignored for decades, Ostrom would go on to win the Nobel Prize in 2009.
Closing the chapter, Clancy reminds us that human nature is fundamentally plastic. Beliefs about rational-self interest will drive people to be selfish, while beliefs about the value of cooperation will drive people to behave cooperatively. She writes: “people will only be as good as the games they’re incentivized to play.”
Commentary
Here Clancy provides the substance of her critique: the obsession with game theory created cultural beliefs which would go on to produce deeply sub-optimal outcomes. By telling people they are selfish, Clancy argues, they are more likely to become so. In contrast, Ostrom’s work, a major influence on Chore Wheel, showed that people are not actually that selfish in practice, as long as sufficient structure is in place to help organize behaviors.
7. A Map that Warps the Territory
Clancy sharpens her critique, exploring the way game theory has shaped warfare.
Advances in military technology have shaped the course of history: the “Greek fire” of the Byzantines, crossbows in Europe, gunpowder in China, dynamite in Europe, and ultimately nuclear weapons in America. Every advance sparks a brief flash of hope that the devastating new weapon might finally convince people to stop fighting, but society inevitably adapts to a higher level of violence.
With the development of nuclear weapons in the 1940s, game theory began extending its influence into the military realm. The staggering scope of the nuclear threat convinced policy-makers to cede more judgment to operations researchers. The idea of a Nash equilibrium would lead to the doctrine of M.A.D. — Mutually Assured Destruction — and the logical consequence, the need for an enormous nuclear arsenal. Whereas traditional diplomacy called for an ethical standard of mutual regard, the logic of game theory insisted on mutual selfishness.
In fairness, Clancy admits, the strategy was not a failure, with the Cold War famously characterized by its lack of total war. However, the resulting arms race drove the cost, and the risk of random error, sky-high. We must never forget that at one critical moment, the outbreak of war was averted by the judgment of a single officer.
This optimizing, operations-research mindset would continue to shape US foreign policy during the Vietnam war, after Robert McNamara was appointed Secretary of Defense. As Clancy recounts, McNamara was “enamored with numbers” and would privilege his own technical analyses over the judgment of military leadership. This would infamously lead to disaster during the war, with the US consistently failing to achieve military objectives despite a staggering loss of life. Recalling chapter 5’s “three-body problem,” the models ultimately failed to account for actors — in this case the Vietnamese — with very different beliefs and values.
The popularity of game theory would decline. Grim military simulations like Proud Prophet and films like the iconic WarGames would shape and reflect a public opinion which increasingly saw game theory as a faulty tool of out-of-touch elites. Yet, game theory remains prevalent. Computational strategy models would form the basis of drone piloting systems, and combat simulators remain popular entertainment. As war becomes increasingly abstracted, Clancy reminds us, human costs are concealed.
Commentary
Clancy concludes her critique by detailing the extent to which game theory would come to influence military policy, and the staggering cost of this influence. Building on her arguments in previous chapters, Clancy shows how the “logic” of game theory would compel decision-makers to put aside their own best judgment and engage in risky, destructive behaviors.
III. Building Better Players
The book’s third section turns towards evolutionary biology and artificial intelligence, and the way that biologists have employed game-theoretic models to understand evolutionary dynamics.
8. Chess, the Drosophila of Intelligence
Clancy frames evolution as a type of learning process.
Throughout the 20th century, chess was seen as a gold standard for measuring intelligence. In a game of chess, agents have goals — desires — and can be better or worse at pursuing those goals. Desire is a prerequisite for intelligence, Clancy argues, with arguably the most fundamental desire being the desire to survive. As such, evolution — the process of life continuing to survive — can be seen as a type of intelligent learning process.
In general, life is no more complex than the demands placed on it by its environment. As Clancy writes, “the complexity of an environment limits the level of intelligence that its inhabitants can attain.” And yet, as organisms co-evolve, their collective complexity increases. As such, evolution can be seen as a type of learning: “[tuning] the gene pool, which orchestrates a cell’s protein machinery, to the requirements of its environment.” Over time, organisms developed methods for adapting more quickly than genetic evolution could allow, through a general ability to learn new behaviors through transmission and play.
Humans evolved big brains to allow them to manage complex social relationships. However, the popular theory that more relationships require more brainpower is not supported by the data — among the social insects, more relationships often lead to more specialization instead. In a touching turn, Clancy offers a countervailing theory: larger brains are needed for pair-bonding — developing a deeper relationship with one other individual. This enhanced affective capability could then radiate outwards, allowing for a greater number of social bonds in general.
Commentary
This short chapter sets up the book’s next section, introducing the idea of evolution as a learning process bootstrapping itself up the intelligence hierarchy. She reiterates her theme of play as an aid to learning, and suggests that larger brains let us empathize more deeply.
9. The End of Evolution
Clancy traces the development of evolutionary biology and the puzzle of altruism
Before Darwin, “teleological” arguments for intelligent design were widespread. Inspired by Malthus and his own fieldwork in the Galapagos, Darwin would famously discern that life is not a static balance, but a dynamic flux based on individual natural selection. “Group selection” and altruistic behaviors broadly would remain a puzzle.
Eventually, Gregor Mendel’s studies with peas would allow statistician Ronald Fisher to formalize the process of random recombination. In his 1930 The Genetical Theory of Natural Selection, Fisher framed group selection as an equilibrium point in a game. Decades later, biologist W. D. Hamilton would read Fisher and develop the idea of “kin selection” as a formal model that accounted for altruistic behavior, within limits.
Against this backdrop, Clancy introduces us to George Price, an “academic dilettante” who was involved with both the Manhattan Project and later Bell Labs. She writes that “his career was one great frantic digression, pinballing between major technological breakthroughs of the era … desperate to make his mark in science.” Reflecting on his own failed family, Price wanted to better understand the origins of the parental care instinct, and began studying Hamilton’s work.
Inspired by Hamilton’s analytic approach, Price would develop, along with John Maynard Smith, a groundbreaking game-theoretic model of animal conflict, in which limited, but not total, war leads to the best outcomes. Published in 1973 as “The Logic of Animal Conflict,” the paper introduces the now-famous analogy of “hawks” and “doves” and shows that while a hawk may individually dominate a dove, the dove strategy is more resource-efficient in the long run. Equilibrium — the “evolutionarily stable strategy” — involves a dynamic balance of both populations.
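The hawk-dove equilibrium is simple enough to verify numerically. A sketch using replicator dynamics, with the standard Maynard Smith payoffs; the resource value `V` and fight cost `C` are chosen for illustration:

```python
V, C = 2.0, 6.0  # contested resource worth V; injuries from a fight cost C

def payoffs(p):
    """Expected payoff to each strategy when a fraction p of players are hawks."""
    hawk = p * (V - C) / 2 + (1 - p) * V  # fight other hawks, bully doves
    dove = (1 - p) * V / 2                # share with doves, flee from hawks
    return hawk, dove

p = 0.9  # start with a hawk-heavy population
for _ in range(500):
    hawk, dove = payoffs(p)
    p += 0.1 * p * (1 - p) * (hawk - dove)  # replicator step: fitter strategy grows

print(f"hawk share: {p:.2f}; analytic ESS (V/C): {V / C:.2f}")  # both ≈ 0.33
```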
Price wanted to go further, dissatisfied with the theory of kin selection as an explanation for altruism. Working from first principles, he developed his eponymous equation, modeling in a very general way how population traits change in frequency over time. This equation would become very influential due to its ability to describe inheritance, be it physical or cultural, in a concise and general way.
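The equation itself, in its standard modern form (my notation; Clancy does not write it out), splits the change in a population’s average trait value into a selection term and a transmission term:

$$
\Delta \bar{z} \;=\; \underbrace{\frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}}}_{\text{selection}} \;+\; \underbrace{\frac{\operatorname{E}(w_i\,\Delta z_i)}{\bar{w}}}_{\text{transmission}}
$$

Here $z_i$ is individual $i$’s trait value, $w_i$ its fitness, and $\bar{z}$, $\bar{w}$ the population averages: traits spread when they covary with fitness, plus whatever change occurs during transmission itself. Nothing in the formula cares whether the trait is a gene or an idea, hence its generality.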
Unfortunately for Price, even altruism, in his model, could be seen as “selfish.” This pushed Price over the edge. He gave away all of his possessions and began walking the streets of London, offering to help strangers in any way he could — desperate to prove that “real” altruism existed. He soon lost his housing, and within months, was dead. Hamilton would later describe Price’s life as “a completed work of art.”
Clancy closes this chapter with a meditation on the way that games precede brains, and also create them. To quote: “we do not play the game of life; the game of life is what plays us into being.” By way of increasingly complex environments, modern humans evolved. She is not without criticism of biology’s game-theoretic approach, in particular for enabling the ideas of eugenics, but acknowledges it as a “theoretical backbone” that propelled the field forward.
Commentary
This epic chapter revolves around the Shakespearean tragedy of George Price, driven mad by his own quest for knowledge. Having been unfamiliar with this history, I found his story very moving. The subtext of the chapter underscores Clancy’s theme: altruism is always the “missing piece” these models leave out.
10. Nous Ex Machina
Clancy picks up her history of game-playing programs
Claude Shannon and Alan Turing are two of the biggest names in the history of computing. Shannon would invent information theory; Turing would formalize the idea of computation. Both were enchanted with the idea of a computer that could play games. Games, they both understood, were about searching through possible moves and selecting the best one — a reasonable approximation for intelligence.
Both developed simple game-playing systems, on the basis of von Neumann’s “minimax” strategy. While none were particularly strong players, these early systems acted as benchmarks for measuring progress, and research into game-playing systems would lead to many spin-off discoveries and greatly enrich the field of computer science. Clancy pauses to point out that “other forms of genius — physical, emotional, linguistic, musical—are difficult to measure or model, and so were ignored.”
A few decades later, things would pick up again. In 1983, at Bell Labs, UNIX inventor Ken Thompson would create Belle, a brute-force chess program and the first to become a US national master. In 1989, mathematician Jonathan Schaeffer would develop Chinook, a checkers program built on similar principles, which would nearly beat the game’s world champion. A few years later, Feng-hsiung Hsu would take Belle’s design, invent a faster chip, and join IBM to create the legendary Deep Blue, which would famously beat world chess champion Garry Kasparov in 1997. Reflecting on this history, Clancy notes that Deep Blue’s success was not due to any conceptual breakthrough, but rather a prodigious increase in computing power.
It would take the development of consumer video games, beginning with Nolan Bushnell’s Atari in 1972 and continuing through the personal computing revolution led by Apple Computer, to push computational game-playing forward. When compared to board games, Clancy argues, interactive computer games provided a richer environment for developing a game-playing program, while still providing the clarity of a win or a loss. Consumer demand for better graphics would spur the development of GPUs, which would — in a striking coincidence — prove to be extremely useful for training large neural networks. In 2013, DeepMind would be the first to put these pieces together, using reinforcement learning to train powerful neural networks to play a library of early Atari games.
Clancy’s history of the co-evolution of computer science and game-playing programs is compelling, and leaves us to ponder: what is the nature of this new intelligence? In her words: “it was clear, however, that human thought looked nothing like these elaborate equations drifting down the complex topology of minimax gradients.” Is there an ineffable je ne sais quoi to human intelligence which can never be achieved by machines? Or is that simply what we’d like to believe?
Commentary
As someone who played a lot of computer games as a child, it is validating to see the way that this field has had such an impact on technological progress writ large. The idea of a co-evolution of game complexity and program complexity underscores Clancy’s themes.
11. Cogito Ergo Zero Sum
Clancy extends this history to the present-day
Stanisław Ulam was a professor at the University of Wisconsin-Madison when he was recruited to the Manhattan Project in 1943. Working with John von Neumann on the bomb’s implosion process, Ulam realized that in lieu of modeling the “combinatorial explosion” of atomic trajectories occurring during a real explosion — an impossible task at the time — he could instead model a random subsample of paths, and use them to extrapolate the total behavior:
Instead of following quadrillions of paths through all possible outcomes, they’d follow a random subset, sampled in proportion to the known probabilities of events. They’d estimate the statistic of the full population by calculating the statistics of an unbiased representative sample.
This technique would become the basis of what is now known as “Monte Carlo” sampling.
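The idea fits in a few lines of code. A toy version (my example, not Ulam’s): rather than enumerate all 6^10 ≈ 60 million ways ten dice can land, estimate a statistic from a random sample of “paths”:

```python
import random

def sample_sum():
    """One random 'path': the sum of ten dice."""
    return sum(random.randint(1, 6) for _ in range(10))

N = 100_000
hits = sum(sample_sum() > 40 for _ in range(N))
print(f"P(sum of 10 dice > 40) ≈ {hits / N:.3f}")  # ≈ 0.15, no enumeration needed
```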
Following Deep Blue’s victory in 1997, the frontier of machine intelligence moved from chess to Go, which was seen as too complex for brute-force alone. In 1992, Bernd Brügmann would develop a Go-playing engine using a technique known as Monte Carlo tree search, which chose moves by randomly playing out a subset of games before every turn and picking the move which led to the greatest number of victories (a sketch follows below). In 2012, a related program called Zen would be the first to win against a top-ranked human player. Only four years later, DeepMind’s AlphaGo, which combined MCTS with reinforcement learning to enable learning through self-play, would defeat Go world champion Lee Sedol. Clancy reflects that “learning games build judgment through experience, and games are precisely this: generators of fictive experience.”
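As described, Brügmann’s scheme is “flat” Monte Carlo move selection; full MCTS additionally grows a search tree to bias playouts toward promising lines. A sketch of the flat version, assuming a hypothetical `game` interface (`legal_moves`, `apply`, `is_terminal`, `winner`, `to_play`; names mine):

```python
import random

def choose_move(state, game, playouts_per_move=100):
    """From each candidate move, finish the game many times with random
    play; pick the move with the most wins for the player to move."""
    player = game.to_play(state)
    best_move, best_wins = None, -1
    for move in game.legal_moves(state):
        wins = 0
        for _ in range(playouts_per_move):
            s = game.apply(state, move)
            while not game.is_terminal(s):
                s = game.apply(s, random.choice(game.legal_moves(s)))
            wins += game.winner(s) == player
        if wins > best_wins:
            best_move, best_wins = move, wins
    return best_move
```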
Following AlphaGo, partially trained on human gameplay, DeepMind would develop AlphaZero, which learned via self-play alone. The result was the emergence of “alien” playing styles which challenged and subverted common beliefs about Go strategy. AlphaZero would also beat Stockfish, the world’s leading brute-force chess program, demonstrating the power of self-play learning over even cutting-edge brute-force techniques. Clancy cautions:
Does the ability to maximize a narrowly-defined “objective function” truly reveal intelligence? Or can maximizing an immediately measurable quantity lead to unintended outcomes? … How can we trust what goes beyond our comprehension?
The idea of learning through competitive play continues to shape AI research. In 2006, Fei-Fei Li established the ImageNet competition, which set the stage for major breakthroughs in computer vision in the 2010s. The biennial CASP contest in protein-folding would lead to Foldit, and later AlphaFold, for modeling the behaviors of novel proteins. Neural networks are often built as “generative adversarial networks,” in which two networks are trained through an ongoing competition to confuse each other. Large language models are trained on the game of “predict the best response to the user’s query.” Little of this “gameplay,” Clancy cautions, is grounded in any notion of “reality.”
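To make the adversarial idea concrete, here is a toy GAN in PyTorch (my sketch, not from the book): a generator learns to mimic a one-dimensional Gaussian while a discriminator learns to tell real samples from fakes.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "reality": samples near 3.0
    fake = G(torch.randn(64, 8))           # the generator's forgeries

    # Discriminator: score real data as 1, generated data as 0.
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator into scoring fakes as 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward 3.0
```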
Unsurprisingly, Clancy takes a dim view of the exuberance currently felt among AI researchers. In a rush to optimize narrow metrics, she warns, “we often harm the system we’d hoped to improve.” Unlike machines, humans can reflect on their inner worlds. Humans have the ability to transfer knowledge between domains, learning from a handful of examples instead of thousands. Perhaps the true test of intelligence, Clancy wonders, lies not in beating games, but in designing them.
Commentary
Having been in graduate school at the time of AlphaGo’s victory, this history feels personal. I remember studying Monte Carlo techniques and reading DeepMind’s papers as they came out. I also connect with Clancy’s critique — outside of the well-defined worlds of game-playing, data is noisy and signals are correlated; a model trained on bad data will do bad things.
IV. Building Better Games
The book’s fourth and final section looks at contemporary game design and asks whether a critically-informed deployment of mechanism design can cultivate better collective outcomes.
12. SimCity
Clancy introduces the game as metaphor for society
For centuries, people understood society as a type of organism. Then, in the 13th century, Dominican friar Jacobus de Cessolis released the Book of the Customs of Men and the Duties of Nobles, adopting chess as a metaphor for a society based on complementary and rigid social roles. Clancy writes that “Cessolis offered a more dynamic metaphor: society was a game governed by rules… Chess soon replaced the body as the reigning literary metaphor for European society.”
Seven centuries later, game designer Will Wright would take this metaphor further with his groundbreaking (and best-selling) 1989 SimCity. Inspired by the equations and models of libertarian management theorist Jay Forrester, Wright gave players unprecedented access to a (simplified) experience of social engineering. Forrester, who in 1969 had published his models in a book called Urban Dynamics, had inspired a generation of urban planners and a series of mostly failed experiments. As Clancy notes, Forrester’s models were backed by neither theory nor data, with “models [that] could be used to support the conclusions of statists and libertarians alike, depending on what aspects were included. What mattered was who wielded them first, and more forcibly.” Despite these flaws, simulation games, from the useful SimRefinery to the deeply misleading SimHealth, would hold much popular appeal.
Commentary
This short chapter sets up the book’s final section, introducing the idea of “society as a game” and some early experiments in “social game design.” Unsurprisingly, these chapters contain much that is relevant to Zaratan.
13. Moral Geometry: Playing Utopia
Clancy explores the ways that games can teach morality
The popular children’s game Snakes and Ladders was created centuries ago in India to teach children about morality. Through rolls of the dice, children ascend ladders of good karma or succumb to snakes of vice. Through this and other games, Clancy argues, children learn social norms, and over time “playing children learn to replace the respect of authority with the mutual respect of other players’ wills,” leading ideally to a healthy civic culture.
With Leviathan in 1651, Thomas Hobbes would famously advance a “game-theoretic” argument for the state: by sacrificing individual agency, collective peace is ensured. Philosopher Jean-Jacques Rousseau would introduce the idea of a “stag hunt” as a complement to the prisoner’s dilemma, in which two hunters cooperate to catch a hearty stag. In 1977 philosopher Edna Ullmann-Margalit would write The Emergence of Norms, detailing the ways that social norms can transform a prisoner’s dilemma into a stag hunt. Weaving in her discussions of dopamine and altruism, Clancy writes that “acting in accordance with one’s values often feels like a reward of its own, and this is exactly how norms might change the reward payoff.”
In the 1950s, game theorists would discover that some games allow for an infinite number of stable strategies, as long as all the players are willing to play along. This “folk theorem” can help explain the emergence of fairness as an efficient strategy: “fairness can be thought of as a heuristic that helps people choose among infinite feasible strategies the one that benefits the largest number of players. Morals, by another name, are just smart plays.”
In recent decades, there has been increased interest in using game design to improve social institutions. This technique, known as “gamification,” promises that “desk jobs will be alchemized into entertaining affairs, education will be made effortless, and even the most tedious tasks will become enjoyable.” The reality has been less rosy:
In practice, it’s been hijacked by corporate thought leaders and business interests, applied in the most uninteresting ways imaginable. Many were originally designed to make games more addictive rather than more pleasurable, developed to enthrall players to game platforms.
Taking her critique further, Clancy notes that not everyone playing these games consented to play them, nor were they designed with all players in mind, arguing that “if we explicitly model our social and financial systems as games, we must ensure they are games that all members of society agree to play, and ones in which everyone can win.” As it stands, many of these games provide only narrow criteria for victory and encourage socially-harmful strategies, such as the toxic behaviors common to social media platforms.
Clancy defends games, however, for providing feelings of agency. This can help protect our mental health, even if it cannot solve the underlying social ills:
Games give players a sense of control, which is no small merit. The sense of agency is a critically overlooked part of psychological health. … A bullied child may take pride in being an expert in-game archer. Yet this doesn’t change the reality that the child is being bullied.
Warning that a fixation on shallow rewards can distract us from real erosions — à la Ready Player One — of our rights and standards of living, Clancy concludes with a call to go beyond narrow game worlds and develop a broader and more universal sense of interconnectedness and empathy.
Commentary
There is a lot going on in this chapter, in which Clancy weaves together her discussion of game theory in part II with her discussion of evolution in part III. Where game theory would preach a doctrine of selfishness, the evolutionary models would carve out increasingly more room for cooperation.
14. Mechanism Design: Building Games Where Everyone Wins
Clancy introduces mechanism design for intentionally developing social systems
In 1983, recognizing the demand for organ transplants, Dr. Barry Jacobs pioneered the idea of an “organ market.” Immediately, those in poverty were pressured to sell their bodies for cash, and within a year these markets were banned. Unfortunately, the alternative mechanisms for coordinating organ transplant were haphazard and ineffective, and led to many avoidable deaths.
Eventually, in 2003, researchers Alvin Roth, Tayfun Sönmez, and M. Utku Ünver designed a novel mechanism of “kidney clearinghouses” which matched donors and receivers across long chains of exchange, significantly improving health outcomes. Reflecting on this history, Clancy recalls Oskar Morgenstern, who “hoped that game theory would provide economists with a more expressive mathematical language, empowering them to go from describing institutions to inventing new ones.”
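The intellectual ancestor of these clearinghouse designs is the “top trading cycles” procedure of David Gale (via Shapley and Scarf), and a toy version conveys the flavor (my sketch; the real kidney mechanisms add chains, altruistic donors, and medical compatibility constraints):

```python
def top_trading_cycles(preferences):
    """Each agent i owns item i; preferences[i] ranks all items, best first.
    Repeatedly: everyone points at the owner of their favorite remaining
    item; any cycle of pointers trades and leaves. Returns {agent: item}."""
    remaining = set(preferences)
    assignment = {}
    while remaining:
        points_to = {i: next(item for item in preferences[i] if item in remaining)
                     for i in remaining}
        # Walk pointers until a node repeats: that loop is a trading cycle.
        seen, node = [], next(iter(remaining))
        while node not in seen:
            seen.append(node)
            node = points_to[node]
        for agent in seen[seen.index(node):]:
            assignment[agent] = points_to[agent]
            remaining.discard(agent)
    return assignment

# Pair 0 wants 1's kidney, 1 wants 2's, 2 wants 0's: a three-way swap.
print(top_trading_cycles({0: [1, 0, 2], 1: [2, 1, 0], 2: [0, 2, 1]}))
```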
The foundations of mechanism design were laid by Leonid Hurwicz in the 1960s, who built on Hayek’s famous articulation of the free market as a distributed computer by introducing the idea of “incentive compatibility.” Participants often have an incentive to bend rules in their favor, and so mechanisms should be designed such that selfish and selfless behaviors are closely aligned: consider the example of splitting the cake. Clancy explains: “the mechanism designer’s goal, then, is to invent games that reward players for being truthful.” Mechanism designers, unlike game theorists, plan for rule-breaking.
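The canonical illustration of incentive compatibility (mine, not Clancy’s) is the Vickrey second-price auction: the highest bidder wins but pays the second-highest bid, so a bid determines only whether you win, never what you pay, making truthful bidding the dominant strategy.

```python
def vickrey_auction(bids):
    """Second-price sealed-bid auction. `bids` maps bidder -> bid;
    returns (winner, price). Shading your bid below your true value can
    only ever cost you a win you would have been happy to pay for."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]

print(vickrey_auction({"alice": 120, "bob": 90, "carol": 100}))  # ('alice', 100)
```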
A landmark example of mechanism design in action is the case of the electromagnetic spectrum. Markets for radio spectrum throughout much of the 20th century were poorly designed, with consequentially bad outcomes. In 1994, the debt-strapped Clinton administration worked with Pacific Bell and economists Robert Wilson and Paul Milgrom to design a new auction system for wireless frequencies, looking to both raise money for the federal government and allocate spectrum more efficiently. The system worked, raising billions for the federal government and diversifying access to the wireless spectrum. And yet, Clancy reminds us, there was still abusive behavior among major telecoms, and the market remains highly consolidated. No mechanism is perfect.
Mechanisms have limits. Economist Kenneth Arrow would famously prove in 1950 that, under certain conditions, a “perfect” voting system is logically impossible; there will always be competing trade-offs. As in chapter 5, the problem of the measurement of preferences remains fundamental. Contemporary economists like E. Glen Weyl have advanced the field through innovations like “quadratic voting,” which deploys notions of budgets and escalating costs to meaningfully structure decision problems.
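The heart of quadratic voting is a single cost rule, under which casting n votes on an issue costs n² credits; voters must budget intensity across issues rather than shout at full volume everywhere. A minimal sketch:

```python
def qv_cost(votes):
    """Quadratic voting: n votes on one issue cost n**2 credits."""
    return votes ** 2

# With 100 credits, the same budget buys ten single votes on ten issues,
# or one ten-vote stand on the single issue you care most about -- not both.
print(sum(qv_cost(1) for _ in range(10)))  # 10 credits across 10 issues
print(qv_cost(10))                         # 100 credits on one issue
```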
The introduction of blockchains over the last two decades has added an additional dimension to the problem of designing safe and reliable institutions. Bitcoin, Ethereum, and the chains which followed introduced novel types of computational substrate, allowing wholly new classes of mechanisms to come into being.
The question of power pervades mechanism design, both in terms of the power of the participants, and the power of the designers themselves. Citing Cory Doctorow, Clancy argues that “software protocols have become a battleground over what values are included or codified into our realities, dictating the games we’re forced to play as end users.”
Clancy concludes with a warning. Powerful actors can bend rules to benefit themselves, and excessive reliance on rules and structures can lead to an ossified cultural life. In contrast, Clancy notes “the preponderance of cultures with a Carnival season, loosening social restrictions to allow for new interactions across the social hierarchy.” Mechanisms can help create new games, but they can’t replace the actual playing. It is through balancing rules and subversion, structure and play, that we will imagine and iterate towards a better world.
Commentary
Having been designing mechanisms professionally since 2018, this chapter speaks to me and the professional community I am a part of. I wrote about Arrow’s theorem in my master’s thesis, have published several papers describing novel mechanisms, and have written many articles analyzing voting, reputation, and budgeting systems. I am also a participant in the Summer of Protocols research group studying the power dynamics surrounding protocol design. Given the scope of this book, it is obviously flattering to find oneself situated so neatly in the conclusion. If anyone would like to discuss these topics in more detail, be in touch.
Epilogue
Clancy draws her sober conclusion
Grounded in her broad professional expertise, and writing in a spirit of critique, Clancy encourages us to appreciate the potential of these evolving techniques while never losing sight of their fundamental limitations.
Games are compelling, Clancy acknowledges, while reminding us that “relentless maximization is the logic of a cancerous tumor, not of health.” While people often display strong responses to short-term rewards, they “are motivated by many things besides money: the joy of discovery, production, security, a stable family life, the company of their colleagues.”
As rich as a game world can be, something will always be left out, and Clancy warns that “we should be wary of trading our autonomy for entertainment.” Games are tools for building skills of connection. We should use these tools, without forgetting what they were designed for.
Commentary
This book is an achievement. Weaving together centuries of history and dozens of major figures, Clancy constructs an elaborate argument for both how and why games evolved, demonstrating gaming’s incredible potential as well as its serious risks. While Clancy is often critical, her message is deeply optimistic: people can be fair and good, and we can create institutions to encourage more of both. This is exactly what we are trying to do at Zaratan, and we are grateful to be doing this work.