This is my post for day 30 of the Inkhaven writing retreat.
If you’ve ever played video games, you might be familiar with the concept of a mini game. Mini games are separate, smaller games within a larger game, each with its own rules and a well-defined context. They can add novelty to the player’s experience without over-complicating the game as a whole. But they also reflect how life works sometimes.
Our daily lives just so happen to be well-modeled by lots of different mini-models that fire off in the right context. You only need to use your driving skills when you’re in the car, and you only need to consult your model of shoelaces when you’re tying your shoes. Another example is the whole phenomenon of buildings being made up of rooms. When you’re in a room, you mostly only need to think about what’s in that room. You mostly only need to know how the rooms relate to each other by walking between them. Only rarely do you need to know exactly what’s on the other side of a wall, or what room happens to be above your kitchen.
One of the miracles of modernity has been the slow but steady process of discovering and understanding how all of reality in its multitudinous richness emerges from one simple model, one set of equations that turned out to be so dominant and unwavering that we decided to call them laws.
Though these laws are in some sense the truest model we have, they are not the most useful in any practical sense. We don’t use the Standard Model to fix our cars or mow the lawn, nor do committees consult it when setting zoning or fire safety policies. We don’t even do engineering with software that runs quantum field theory, except when the purpose of the engineering is to probe quantum field theory further. Our best modeling software is highly optimized for its context, using simplified laws like Kirchhoff’s rules or classical Hamiltonians.
The more “unified” models of our world have higher predictive accuracy if you are willing and able to trace out all their implications, and they are the only models that have a chance of getting the answers right at the edges between contexts, or at extreme values. But that “if” is a huge one, and I think it’s essentially never satisfied. We have a lot of spare compute in the world, and if we could fix cars better using quantum mechanics, I think we just would.
People are constantly doing predictive modeling in their brains in order to navigate the world, and we are essentially stuck using highly contextualized, simplified models. An enthusiastic foodie might use some domain knowledge from biochemistry to design a recipe, but once they do, they’ll essentially cache their calculations into an “if I’m cooking X, set the oven to Y” type of rule. If you ask them why they’re doing that, they might be able to recover their original modeling as the justification, but they also might not.
That said, I believe that most people could move their minds substantially further toward unification than they do. I mostly do so because I value having true beliefs for their own sake, but I have noticed that over time I tend to get better results when predicting related outcomes than my friends who model more contextually. We are making different trade-offs, and they are better than I am at many more-specific skills.
I do think that a unified mindset is especially important in epistemics, since this isn’t domain modeling so much as meta-modeling. I think having a deep understanding of the Bayesian worldview can help someone in all areas of life: personal, interpersonal, and impersonal.
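As a rough gloss on what I mean by that worldview, the central identity is Bayes’ rule, which says how strongly to believe a hypothesis H after seeing evidence E:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

In words: your updated confidence in H is your prior confidence, reweighted by how well H predicted the evidence you actually saw.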
All that is to say I am the kind of person who knows what room is above the kitchen, because I subconsciously made sure to model that when I moved into the house.
I think that if you want to be someone who is highly skilled and adaptive in a wide range of contexts, if you want to go around the world navigating it like a video game and solving lots of problems, then you probably shouldn’t prioritize this kind of mental unification. Being this kind of person is extremely valuable, and I salute you.
If instead you want to try really, really hard to get the right answers to the world’s hardest and most important problems, then I think you need to be the kind of person who focuses on unifying your models. I see existential risk from AI as one of these hardest problems.
Without a doubt, these two types of people need to work together. The contextual modelers need to be willing to trust the unified modelers’ claims about things like overarching principles and limitations. And the unified modelers need to be willing to trust the contextual modelers’ claims about how specific domains work. The contextual models have lower leverage, and the unified models have a higher probability of being catastrophically wrong. I’m not sure exactly when we should trust whom, but the answer is somewhere in between.