Gaming the Future

My rating: 8/10
Gaming the Future is a book about exploring what our future might look like, and the ways we can plan ahead to navigate traps. What will it take to make it out of the 21st century?

Summary

Gaming the Future is a book about exploring what our future might look like, and the ways we can plan ahead to navigate traps. While some technological innovations unlock new levels of progress and opportunities, their proliferation can also come with risks. It’s not enough to sit back and hope that everything works out alright. The future can be quite dangerous.

One of the main focuses of the book is how the advancement of technology concentrates more power in the hands of a few than has ever been possible in the past, and the problems this can cause if not addressed. The proposed antidote is the implementation of technology for intelligent voluntary cooperation. This would enable decentralized and secure cooperation and hopefully unlock a paretotopian future of high technology and high freedom.

What I Got Out of It

I learned a lot from reading this book. It does not shy away from looking potential catastrophe in the face, and it is unequivocal about the sacrifices we will need to make in order to survive the next century. Overall this book significantly decreased my hopes for humanity surviving into the next century, but at the same time it revealed a path for making it happen.

The book starts by introducing the idea of value diversity: the fact that humans have very different values. This idea is prevalent throughout the book, as many of the suggested paths forward are intended to preserve this value diversity without extinguishing it or threatening those with different values. I can sympathize with this perspective. The scary thing about developing a homogeneous society is that it may mean going through a one-way door that makes it impossible to change course ever again, enforced by technology we have lost control over.

Next, the book explores the toolkit we have for navigating the future, with an emphasis on the tools which allow intelligent voluntary cooperation. This involves looking at Schelling points, money, contracts, institutions, etc., and the steps we can take to make sure they are best serving our interests.

After developing these prerequisite concepts, the book takes a much closer look at the threats we will face in the future. I have never read anything which approaches these threats in such an authentic way, and I found the explanations very enlightening. Two of the main categories of risk are small kills all (i.e. the possibility for someone to pose an existential threat from their bedroom using advanced technology) and civilizational suicide (i.e. civilization’s reliance on a fallible single point of failure). The book is very frank about how difficult it will be to overcome these challenges, in one place stating that “we have no simple answer to this problem” when discussing first-strike instabilities. Suggested solutions involve encrypted sousveillance, automatic robotic enforcement of the law, and the widespread use of formally verified programs.

The book finishes by asking what the ultimate goal is. Where are we racing to? We are in a lucky period of time where our technological progress has outpaced our population growth, but the default is for intelligence to exist at subsistence levels where it finds equilibrium, and this is where we should expect to find ourselves in the future. The book doesn’t make any claims about where we should be racing to, and instead suggests that voluntary cooperation is a good heuristic for choosing amongst possible destinations.

While I agree that voluntary cooperation is a good way to go, I’m pessimistic about the odds of maintaining our current levels of value diversity. That being said, moving away from our current levels of value diversity doesn’t necessarily destine us for a violent massacre. Death from old age has been the main channel for updating population values over time, and we can hope to continue relying on this technique at least until we develop life extension technology.

Key Takeaways

  • Centralized vulnerabilities create temptations to corruption that cannot be resisted.
  • Civilization is an inherited game shaped by those before you.
  • Perhaps the biggest problems arise when there is a state of the world that we would all prefer to jump to, but lack the coordination to do so. A look at how institutions evolved to deal with these factors shows how to further diminish them. With contracts, players can make binding commitments to particular future actions and cooperate for mutual benefit.
  • There’s No Rule That Says We’ll Make It (YouTube)
  • There is no law saying that the result will be the continually growing spontaneous order intelligence of civilization. It could be the outcome of a winner-takes-all arms race to expand first.
  • If you want to respect everyone’s preferences, you need to find some system for cooperation.
  • The instrumental goal of cooperating evolved into the felt goal of caring about others’ goals, i.e. empathy.
  • A cascade of mutually expected conflict can result in a Hobbesian Trap, where each side’s expectation that the other will strike makes a preemptive strike seem rational (see the toy payoff model after this list).
  • Subsidiarity is a principle of social organization that holds that social and political issues should be dealt with at the most immediate or local level that is consistent with their resolution.
  • Currency allows the equivalent of multi-way deals via separate pairwise trades. Prices can represent a summary of individual valuations of a good: the demanders’ use values and the suppliers’ costs. Instead of running back and forth between Alice and Carole, Bob can pay for the promise of Alice’s cow, and that currency can then be used to buy Carole’s grain (see the ledger sketch after this list).
  • Civilization is a network of entities with specialized knowledge, making requests of entities with different specializations.
  • The Streisand effect is an example of psychological reactance, wherein once people are aware that some information is being kept from them, they are significantly more motivated to access and spread that information.
  • At some threshold, the majority wins. You can’t make that outcome impossible.
  • Mobile phone plans with monthly subscriptions instead of year-long contracts make for a more competitive telecom market.
  • Blockchains without strong privacy (today, most blockchains) create dangerous new opportunities for real world crime. If real-world commerce moves onto such blockchains, we’ll be in a world where private agreements are impossible.
  • Reports by consumers put the producer in an iterated situation with the community as a whole while trademarking establishes valuable long-term producer reputations.
  • How cities are built, how people self-organize, and what kind of society they organize into is largely based on how the economics of defense and attack evolve with technological progress. This is a main focus in the book “The Sovereign Individual”.
  • Today’s legal systems evolve by competing on at least three axes. First, on how well they insulate us from underlying rules of biology, such as violence. Second, on how well they create rules for their own survival, such as capitalism vs. communism. Third, on how well they insulate us from their own dynamics, such as separation of powers to allow for watchers being watched.
  • Eventually, someone invents a fence. A debate ensues between the legal absolutists, who argue that fences are not legitimate, and the fence absolutists, who argue that property law has become irrelevant.
  • 18 years of age is frequently, but not always, the threshold for being a voluntarily consenting adult. There is no obvious non-arbitrary standard that works better than simply agreeing on this arbitrary one. Anything that tries to be more accurate will necessarily be less simple. Common knowledge of the expectation of legitimacy is the ultimate governance. 18 is a Schelling point: easy and simple, and better than the alternatives.
  • It took centuries for today’s open societies to outcompete tyrannical ones, just as it took decades for open source software to outcompete proprietary software. We should not expect quick wins of decentralized systems over centralized service providers, even if the decentralized ones bring a confident rule of law. There will be many failed ventures, and it will take time to build up an adequate level of functionality. But we have repeatedly witnessed that the long-term winners are those which create a rules framework leading to a predictable basis for cooperative interaction with minimal risk. As long as future levels of the game are defined by a rich taxonomy of rights and composability of contracts, cooperation can evolve. As civilization unlocks level after level, we expect non-human cognition to play an increasing part in the growth of knowledge, wealth, and innovation. It takes a long time for decentralized systems to win out because their advantage is in responding to abuses of power.
  • Two main traps to steer away from: small kills all via technological proliferation, and civilizational suicide via a single point of failure.
    • Small kills all: Nuclear violence, robotic violence, biotechnology risks, nanotechnology risks. As each of these becomes more powerful, it becomes easier and easier for somebody to pose an existential threat from their bedroom. The Yudkowsky-Moore Law of Mad Science: “Every 18 months, the minimum IQ necessary to destroy the world drops by one point.”
    • Civilizational suicide: To deal with the risks of small kills all, it can seem tempting to explore solutions that prop up the powers of governments, or even establish a world government, in the search for safety. These world governments could be equipped with pervasive surveillance and robotic enforcement. According to adverse selection, whenever we create an opportunity to hold power, we create competition for it. The greater the power, the greater the race to capture it, and the less likely that the competitors will all have purely good intentions. The temptation to create powerful central coordination, for instance to solve small kills all risks from the proliferation of technologies, is Robin Hanson’s best guess for the Great Filter that humanity may face.
  • To defend against civilizational suicide, the system must be multipolar so that the different components monitor each other, and if one component goes rogue, the rest of the system must be able to gather enough force to counter it. Instead of having a mutually trusting system of active shields, we need a mutually suspicious system of active shields. Such a shield should have three features: monitor, detect, and defend (a toy sketch of this loop appears after this list).
    • Monitor: Encrypted bottom-up sousveillance. Such a system would keep top-down surveillance in check, and would release information only when it detects anomalies, in order to reduce the invasion of privacy.
    • Detect: You need to plan ahead for which types of activities should be flagged as anomalies. The time gap between knowing what is buildable and being able to build it creates room to increase safety.
    • Defend: We need to design the enforcement mechanisms that are activated if illegal activity is detected. They should be very transparent in their operations in order to minimize corruption. For example, body cams make corrupt behavior harder to hide (and thus less likely to occur) and make proper behavior that has an ugly outcome easier to defend; they both hurt bad cops and help good cops.
  • Unfortunately, the technological designs resulting from sophisticated design-ahead also create a first-strike instability: even if no party wants to start a conflict, the fear that another party might incentivizes striking first. We have no simple answer to this problem.
  • In the physical realm, an attack is costly for the attacker. There is a marginal cost per victim, if nothing else, of the attacker’s attention. A good defense raises the marginal cost of attack. By contrast, software attacks typically have zero marginal cost per victim. Once malware works, its damage can be multiplied by billions using only the victim’s resources. Any vulnerable software system exposed to the outside world will eventually be attacked. We must build invulnerable systems when we can, and otherwise minimize the damage from a successful attack.
  • Authorization-based access control is strong on proactive safety but weak on reactive damage control. Identity-based access control is weak on proactive safety but stronger on reactive damage control.
  • We call intentional interference an attack. Unlike much of the rest of computer security, the ocap approach does not see bugs and attacks as separate problems. OCaps provide modularity and abstraction mechanisms effective against interference, whether accidental or intentional. The ocap approach is consistent with much of the best software engineering. Indeed, the ocap approach to encapsulation and to request making (to boundaries and channels) is found in cleaned-up forms of mostly functional programming, object-oriented programming, actor programming, and concurrent constraint programming. The programming practices needed to defend against attacks are “merely” an extreme form of the modularity and abstraction practices that these communities already encourage. Protection against attacks also protects against accidents (see the caretaker sketch after this list).
  • As of 2022, all large corporations manage their pervasive insecurities rather than fixing them. This is only sustainable because attacks are not extremely sophisticated yet.
  • The U.S. Constitution gave each government official the least power necessary to carry out the job, what can be called the Principle of Least Privilege.
  • It will be difficult to build artificial agents with a cognitive architecture that can internalize the costs associated with actions we regard as wrong. Today’s AI systems can already deceive humans, and future artificially intelligent agents may develop trickery we have not evolved to detect. A pure Homo economicus paradigm with an unbounded ability to fake is frightening, since the bounds on humans’ ability to fake account for much of our civilization’s stability and productiveness. The potential traps of a future dominated by pure game-theoretic optimizers are terrifying.
  • Just as humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than that there will be a single AI a billion times more powerful than anything else. If that one thing goes off the rails, or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.
  • Whatever we do, we do within a game left to us by prior generations. Even if we do “nothing”, we endow future generations with a strategic set of relationships, with payoffs and the potential for players to make violent and nonviolent moves. We have come full circle to the start of this book; we can’t exempt ourselves from creating the game within which future players decide.
  • If we simply valued minimizing suffering, we could set up a future that succeeds at doing so, for instance by going extinct. If we value growth of cognition, creativity, and adaptive complexity, there are different, more complicated choices to make. In this book, we suggested that intelligent voluntary cooperation is a good heuristic for choosing amongst this set of choices and proposed a few moves for the next game iterations.
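
A few of the takeaways above are concrete enough to sketch in code; the sketches below are my own toy illustrations, not code or numbers from the book. First, the Hobbesian Trap: with illustrative payoffs where peace beats striking first, which in turn beats being struck first, preemption becomes the “rational” move once each side’s fear of the other crosses a threshold.

```python
# Toy Hobbesian Trap model (illustrative payoffs, not from the book).
# Payoffs to "me": peace > striking first > being struck first.
PEACE, STRIKE_FIRST, STRUCK_FIRST = 10, 2, -10

def payoff_of_waiting(p_opponent_strikes: float) -> float:
    """Expected payoff of holding back, given my belief the opponent strikes."""
    return (1 - p_opponent_strikes) * PEACE + p_opponent_strikes * STRUCK_FIRST

def best_move(p_opponent_strikes: float) -> str:
    """Preemption dominates once fear of the other side crosses a threshold."""
    if STRIKE_FIRST > payoff_of_waiting(p_opponent_strikes):
        return "strike first"
    return "wait"

# Threshold here: p > (PEACE - STRIKE_FIRST) / (PEACE - STRUCK_FIRST) = 0.4,
# so mutual fear above 40% makes both sides preempt though both prefer peace.
for p in (0.1, 0.3, 0.5, 0.7):
    print(f"P(opponent strikes)={p:.1f} -> {best_move(p)}")
```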
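
Second, the currency takeaway: this minimal ledger (names, prices, and goods invented for illustration) shows how money turns a circular set of wants into independent pairwise trades, with no simultaneous multi-way swap required.

```python
# Minimal ledger: currency replaces multi-way barter with pairwise trades.
# (Names and prices are illustrative, not from the book.)
balances = {"Alice": 0, "Bob": 20, "Carole": 0}
goods = {"Alice": ["cow"], "Bob": [], "Carole": ["grain"]}

def pay_for(buyer: str, seller: str, good: str, price: int) -> None:
    """One pairwise trade: currency moves one way, the good moves the other."""
    assert balances[buyer] >= price and good in goods[seller]
    balances[buyer] -= price
    balances[seller] += price
    goods[seller].remove(good)
    goods[buyer].append(good)

# Bob pays for the promise of Alice's cow; that currency then buys
# Carole's grain -- two pairwise trades instead of a three-way swap.
pay_for("Bob", "Alice", "cow", 20)
pay_for("Alice", "Carole", "grain", 20)
print(balances)  # {'Alice': 0, 'Bob': 0, 'Carole': 20}
print(goods)     # {'Alice': ['grain'], 'Bob': ['cow'], 'Carole': []}
```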
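
Third, the monitor/detect/defend loop: one way encrypted sousveillance might work is to keep records sealed until a detector flags an anomaly, and only then release them to auditable enforcement. The anomaly feature and threshold here are invented stand-ins, and the sketch assumes the third-party cryptography package (pip install cryptography).

```python
# Sketch of the monitor/detect/defend loop for encrypted sousveillance.
# (My own illustration; the anomaly rule is an invented stand-in.)
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # held in escrow, released only on detection
sealer = Fernet(key)
records = []

def monitor(event: str, weight_kg: float) -> None:
    """Monitor: record everything, but store it encrypted."""
    records.append(sealer.encrypt(event.encode()))
    detect(weight_kg)

def detect(weight_kg: float) -> None:
    """Detect: inspect only coarse features in the clear; flag anomalies."""
    if weight_kg > 100.0:     # stand-in threshold, planned ahead of time
        defend()

def defend() -> None:
    """Defend: on an anomaly, unseal the evidence for auditable enforcement."""
    for sealed in records:
        print("released to enforcement:", sealer.decrypt(sealed).decode())

monitor("routine shipment", weight_kg=3.0)      # stays sealed
monitor("suspicious shipment", weight_kg=250.0) # triggers release of evidence
```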
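
Finally, the ocap takeaway: in an object-capability style, holding a reference is the authority. Wrapping the reference in a revocable forwarder, the classic “caretaker” pattern, adds the reactive damage control that identity-based systems get from stripping a user’s rights after the fact. This is my own toy example, not code from the book.

```python
# Toy object-capability (ocap) sketch: possession of a reference IS the
# authority. (Illustrative example; not code from the book.)

class LogAppender:
    """The underlying resource. Whoever holds a reference may append."""
    def __init__(self):
        self.entries = []
    def append(self, line: str) -> None:
        self.entries.append(line)

class Caretaker:
    """A revocable forwarder: grants access now, allows damage control later."""
    def __init__(self, target: LogAppender):
        self._target = target
    def append(self, line: str) -> None:
        if self._target is None:
            raise PermissionError("capability revoked")
        self._target.append(line)
    def revoke(self) -> None:
        self._target = None

log = LogAppender()
cap = Caretaker(log)       # hand `cap` (not `log`) to a less-trusted component
cap.append("ok")           # works while the capability is live
cap.revoke()               # reactive damage control: cut off this one grant
try:
    cap.append("after revocation")
except PermissionError as e:
    print(e)               # -> capability revoked
```

Note how the caller never proves an identity: there is no access check to forget, and damage control is per-grant rather than per-user.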