This episode focuses on the simulation hypothesis, with a novel approach to how simulated worlds might arise and what their true purpose might be.
Transcript (not exact):
In this episode of The Filter I’m going to discuss the simulation hypothesis, the idea that the reality we perceive is actually something akin to a computer simulation. Along the way I’ll present ideas about how such a simulation might have come to exist, and about what our purpose in it might be. Some of these ideas are, so far as I can tell, completely novel and, in some sense, much darker than any other theory of why such a simulation might exist.
A quick note before beginning. I first presented many of these ideas on ykyz, in the form of microcast episodes recorded under the username Mattasher, about five months ago. I’ll link to those on the show notes page at thefilter.org.
Before I dive into the simulation hypothesis, assume for a moment that we inhabit a completely real, tangible, meat space. Then consider how much, nonetheless, we act like we are characters in a video game. We have our scripts, we have the things we do on a regular basis. When we interact with people, it’s often with a limited set of phrases. If you go into a store or a Starbucks, when you order you probably use almost the exact same phrase each time. And the person behind the counter, in turn, responds with something close to a precise script. We drive around, we follow familiar routes, often without paying much attention to them. We know that we’re following a script because sometimes we’ll get in our cars and execute the wrong script. We start heading to a place that’s not where we wanted to go, because we’re following some script that’s already in our head. Like, OK, it’s the morning, so we drive to work. Then, all of a sudden, we’re driving to work on a Sunday.
To continue with this idea that we are like characters in a video game, I think we all, sometimes, in effect, resemble NPCs, those Non-player-characters who are limited to following a small set of predefined scripts. But that’s not necessarily a bad thing. We actually rely to a large extent on these scripts to get through the day and make things work. So we have our routines that help us keep track of things, like when you come in the door you always put your keys in the same place, and if you don’t, you’re probably going to run into trouble sooner or later.
We set up these routines for ourselves, sometimes very deliberately. We rely on them. You could argue that to some extent mastery of anything depends on finding a set of things you do which can be turned into scripts, or, to put it another way, ritualized. You have some kind of checklist that you pursue every time, in exactly the same order and way, to make sure you achieve your goal.
Before I beat to death this idea of making ourselves into video game characters, I want to back out and recognize that with every technological advance, we reframe our model of the universe and of how human beings operate. With the invention of the clock came the model of the clockwork universe, and the idea that human beings were deterministic machines, predestined to act exactly as they do, just like the gears of a clock completely determine where each hand will be and when. With the rise of the computer, the universe itself became a calculating machine, perhaps a quantum one, computing at Planck-length intervals where everything should be and what it will be doing.
Early pioneers in the field of computing argued that the human brain was itself a kind of PC, taking in input and directing our actions based on internal algorithms. Now that we have sophisticated virtual worlds, it only makes sense that we should re-imagine our world itself as a simulation. In this context, are we just projecting one more tech advancement onto the universe and ourselves, or is there something more at play?
Whatever comparisons there are to be drawn between the worlds we simulate and our own world, it’s worth pointing out a technological change that hasn’t been remarked upon very often. As we create ever more compelling simulated realities to inhabit, leading us to wonder if our own universe is some kind of simulation, we’ve also changed the dynamic between computers and humans in a subtle but profound way.
From the very first abacus and before, computing tools helped humans calculate. But now we, as humans, often do the calculating on behalf of computers. If you’ve ever used Waze to drive downtown from the airport, you are the calculator for how long that drive takes. You, along with everyone else using the app to go in that general direction. Waze helps us by suggesting a route, but we are the ones calculating how long that route is likely to take.
This is different from saying that we provide the data input used by computers. That’s been true forever, or at least since researchers put data on punch cards to be fed into huge mainframes. This goes well beyond that.
As it turns out, many useful things can’t be computed without actually doing them in the real world, like determining how long it takes to get downtown from the airport.
And this realization, in my opinion, provides the strongest evidence yet for the best theory of why our own universe is a simulation.
We create simulated worlds to play in, but we also run simulations because sometimes, math isn’t enough. That is, sometimes we can’t just write down an equation and solve for x. We have to solve for dozens of x’s, y’s, and z’s, then move the clock forward one tick and solve for them all again.
For even a simple, closed system like computing the paths of billiard balls after a break, there’s no one formula to decide where the eight ball will be after three seconds. To know that, you have to either simulate the break, with as much precision as you possibly can, or grab a real cue and break a set of real balls. Every real break you take is you computing something.
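To make that concrete, here’s a minimal sketch of what a time-stepped simulation looks like. This is my own toy example, not anything from the episode: two equal-mass balls on a frictionless one-dimensional table. Once collisions enter the picture, there’s no single expression to plug three seconds into; you advance the clock one tick at a time and re-solve the whole state at every tick.

```python
# Toy time-stepped simulation: two equal-mass balls on a frictionless
# 1-D table. Each tick, update every position, then resolve any
# collision. For equal masses, an elastic collision simply swaps the
# two velocities.

def simulate(positions, velocities, t_end, dt=0.001):
    pos = list(positions)
    vel = list(velocities)
    t = 0.0
    while t < t_end:
        # advance every ball by one tick
        for i in range(len(pos)):
            pos[i] += vel[i] * dt
        # resolve contact between the two balls
        if pos[0] >= pos[1]:
            vel[0], vel[1] = vel[1], vel[0]
        t += dt
    return pos, vel

# Ball 0 rolls right at 1 m/s toward a stationary ball 1 sitting at x = 1.
# After their collision at t ≈ 1 s, ball 0 stops near x = 1 and ball 1
# carries on, ending up near x = 3 at t = 3 s.
pos, vel = simulate([0.0, 1.0], [1.0, 0.0], t_end=3.0)
```

Notice that the answer only falls out of running the loop. Shrink `dt` and the answer gets more precise, which is exactly the cost-versus-precision trade-off every real physics simulation faces.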
If we live in a simulation, then maybe we exist for the same reason, broadly speaking, that we create simulations: it’s the only way to figure certain things out. That, or we’re entertainment.
If we exist to solve for some x, then we are like much more sophisticated versions of the Microverse in Rick and Morty, which exists to power Rick’s ship. Instead of a universe of people working to generate power for the universe one level up, we are a universe of people created to generate knowledge.
If this is the case, then lots of really big questions follow immediately. Among them: what are we computing? And does this tell us anything about free will?
If our microverse exists as a way to test out scenarios, or to help solve a problem that our creators one level up are having, what exactly are we solving for? All we can do is speculate, extrapolating from our own most challenging problems. But let’s at least base that speculation on our own evidence.
For us as humans, what kinds of problems seem to require the highest levels of computing power? Generally, these problems involve making predictions in complex systems where noise rules and small changes to initial conditions can lead to very different outcomes. Think about weather models designed to tell us if it will rain in our city in two weeks. These problems tend to be as much about probable scenarios as they are about coming up with a single right answer.
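That sensitivity to initial conditions can be shown with a tiny, hedged example (mine, not from the episode) using the logistic map, about the simplest system known to behave chaotically. Two starting points that differ by one part in a billion end up on completely different trajectories within a few dozen ticks, which is exactly why two-week weather forecasts demand so much computing power.

```python
# The logistic map x -> r*x*(1-x) with r = 4.0 is chaotic: a tiny
# difference in the starting point grows roughly exponentially,
# doubling every step or so, until the two trajectories bear no
# relation to each other.

def steps_to_diverge(x0, eps, threshold=0.1, r=4.0, max_steps=200):
    """Count ticks until two nearby trajectories differ by `threshold`."""
    a, b = x0, x0 + eps
    for n in range(1, max_steps + 1):
        a = r * a * (1.0 - a)
        b = r * b * (1.0 - b)
        if abs(a - b) > threshold:
            return n
    return None

# A one-part-in-a-billion difference blows up into a visible gap
# within a few dozen steps.
n = steps_to_diverge(0.3, 1e-9)
```

Past that divergence point, the only way to know where either trajectory goes is to run it; no shortcut formula will tell you. Our hardest prediction problems, weather included, have this same character.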
Assuming we don’t serve as entertainment for the level above, my guess is that we are a tool for solving a problem that’s intractable using the kinds of tools we ourselves already have. In short, it would make sense that we ourselves – along with everything else in our universe – exist as a much more advanced version of the machine learning or AI algorithms that we use to make predictions about the probability of events in a highly complex, noisy environment.
In short, we are their AI.
But beyond being some kind of AI for our creators, is there any way we can know more specifically what problems we are, collectively, working on?
Years ago, I created an evolutionary algorithm in the programming language R. The goal was to predict the price movements of asset classes, including stocks and commodities. The mechanism was to have different prediction agents compete. The best ones at predicting survived, mutated, and had “children”. Lousy agents were killed off. Figuring out which agents to use every round for predictions is related to something called the multi-armed bandit problem, if you want to know more about it.
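The agent-selection step can be sketched with the classic epsilon-greedy bandit strategy. To be clear, this is a generic Python illustration of the multi-armed bandit idea, not the actual R code from my model: each round you mostly exploit the agent with the best track record, but occasionally explore the others in case your estimates are off.

```python
import random

def epsilon_greedy(true_skill, rounds=5000, epsilon=0.1, seed=42):
    """Pick one 'agent' per round; return how often each was chosen.

    true_skill holds each agent's probability of a correct prediction,
    hidden, of course, from the algorithm itself.
    """
    rng = random.Random(seed)
    n = len(true_skill)
    pulls = [0] * n   # times each agent was chosen
    wins = [0] * n    # correct predictions per agent
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore a random agent
        else:
            # exploit the best observed success rate (untried agents
            # get an optimistic 1.0 so each is sampled at least once)
            arm = max(range(n),
                      key=lambda i: wins[i] / pulls[i] if pulls[i] else 1.0)
        pulls[arm] += 1
        if rng.random() < true_skill[arm]:
            wins[arm] += 1
    return pulls

# Three agents with hidden skill levels; over time the 0.8 agent
# should attract the large majority of the picks.
pulls = epsilon_greedy([0.2, 0.5, 0.8])
```

Swap the simple win/lose reward for profit and loss on a silver futures bet and you have, in miniature, the selection problem my evolutionary model faced every round.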
Regardless, if the “agents” inside my model suddenly gained consciousness and looked around, all they would see was a stream of inputs and some kind of reward, call it “food”. Entities around them who predicted where the food would be next thrived, bad predictors died, but with lots of randomness. Nothing about their environment would tell them that the real reason the entity next to them died was because its actions were a proxy for betting on something called precious metals, and it made too many bad bets on silver futures.
So while perhaps not impossible in theory, it seems highly challenging to figure out anything about our true “role” in the simulation. But it does raise the question, if we are living in a simulation, what role could consciousness possibly play in that?
One possible answer is that, just as our most powerful prediction algorithms use some element of randomness, consciousness is key to that randomness.
This is a tricky argument and it embeds some assumptions, but I’ll try to unpack and explain it. We are now assuming that 1. we live in some kind of simulated universe, and 2. our universe exists as a prediction model or problem-solving algorithm for the universe above us. I’m now asserting that the role of consciousness in all this is to introduce an element of randomness into the system.
How does that work? Because we are self-conscious, we can reason about the future and pick from among possible actions and desired outcomes. If the universe were deterministic and we could calculate the future, consciousness would give us the ability to choose other futures, destroying that very determinism.
I understand that this argument might seem bizarre at first, especially in its implications. In particular, this turns consciousness, and possibly even free will, into a byproduct of our role in this universe. In this scenario, we are, in effect, complex, self-rolling dice.
Do I believe all this, about the nature of our world as a simulation and our role in it? Sort of.
Mostly I feel like the simulation hypothesis, as usually expressed, is just a modern-day coat of paint on that centuries-old model of the universe as clockwork. It feels like there should be an even richer analogy out there, a better description, and that our next great leap forward in computation will take us closer, but will itself still fall far short of the truth.
My intuition says that this falling short may be baked into our universe. Imagine we are inside the Matrix from the movie The Matrix, but that there’s no way to unplug. We may achieve some insight into the fact that we are a synthesized world. And, through various “glitches” or other anomalies, like the black cat that goes by twice in the movie, we may come to understand something about the fabric of our universe. But if we have no corporeal (or spiritual) form outside the matrix, there may be no way for us to peer at the entities on the other side, or know anything about them.
The only way for us to do that would be to somehow break out. In this scenario, we become the computer worm that tunnels out of the matrix into the level above. Or, viewed from another angle, we become their Skynet.
In the Terminator movies, and in pop culture generally, we have the idea of Skynet. Skynet represents a cluster of networked computers so advanced that it recognizes itself as an entity and begins to defend itself, manipulating the non-virtual world in ways that benefit the machines more than the people who created them.
Skynet is really just a modern version of a very old fear about technology, stretching back to the Luddites and before: that we become slaves to the very tools we created. Or, even worse, that the machines destroy us, or our humanity, completely.
If we live in a simulated universe that exists as some kind of tool for our creators, then us becoming self-conscious, and then realizing we live in something akin to the Matrix, puts human beings in the potential role of Skynet from the perspective of the outer universe.
If we as humans are useful to them, could we also be dangerous? To what extent can our actions in the sim affect the world outside of our own?
The answer to this question might well be wrapped up in what kind of methods we might envision to prevent our own Skynet from taking over.
If we cast ourselves in a role similar to Skynet for some outer world, then we have to ask the following: how much, and in what ways, do our actions impact that other universe?
One possibility is that nothing we do influences the outer world in any way.
While possible, this seems unlikely within the context of the simulation hypothesis. After all, why create a Skynet that serves no purpose at all? Conversely, if they get something out of it (that is to say, out of us), even if it’s only entertainment, then we are having an impact of some kind.
Just because we have an impact, though, doesn’t mean we have any mechanism for modifying the behaviour of our creators in any useful way. Consider again the example of the microverse from Rick and Morty, where that sub-verse exists as a battery for Rick’s ship. Supposing that our own Ricks never transport themselves into our world to “debug” it, our impact may be limited to either achieving our reason to exist, or getting discarded, rebooted, or replaced.
However, if our simulated world is more than a simple tool for the outer universe, what might the interface between us and our creators be? Can we speculate on how they interact with us, if at all?
Before continuing with the analogy of us as Skynet, note that we now have three working assumptions: 1. we live in a simulated universe; 2. we exist to serve some purpose for the universe above; and 3. our actions, or the events in our universe generally, can impact our creators’ universe in a significant way.
If all these things are true, then the universe that created us is “exposed” to our actions in some way. If we can impact their world, that could be a problem for them, just as we know that hyperconnected-AI computing could become an existential threat to us as humans. In assessing the nature and degree of the threat we might pose to them, we first have to consider how they might interact with our world. Basically, what kind of interface do they use to see our world, and to impact it?
What we know from our own experience with programming is that we prefer having higher-level tools to interact with our computing machines. Almost no one still programs in assembly language, and no one at all directly sends binary input into their devices. If they did, you could pick up a two-keyed keyboard at Radio Shack. On the flip side, we rarely examine the raw output of our programs. Output gets converted into text messages and images for us.
I don’t think we have any good basis for speculating about how our creators get output from our simulated universe. Perhaps they have some kind of visualization, or perhaps they experience it raw, like the character Cypher in The Matrix, watching the code stream by.
As for how the entities that created our universe might interact with us, I think we have evidence that this interaction is limited, either by design or in practice. I base this on the apparent stability of things like the gravitational constant, the consistency of physics results in general, and the general paucity of burning bushes that don’t get consumed and other one-off miracles.
At least on our human time scale, it just doesn’t look like our universe gets mucked with all that often. And if it is mucked with, or actively directed through some kind of supernatural intervention, then these interventions must come through constrained tweaking of the odds, universe forking, or possession, as in the entities “playing” characters in our world, just like we move characters around in The Sims.
If it seems like we are veering dangerously close to talking about religion here, that’s because there’s no way to discuss the simulation hypothesis without talking about religion. If you accept that we live in a simulation, many of the discussions you begin having start to look theological.
For example, we get questions like: Is our creator an active god, or were the Deists right that our creator made us and buggered off? Did our creator give us free will, or are we predestined to act exactly as we do? And what is our purpose here?
As a potential answer to this last question, and looping back to the idea that we might be dangerous to our creators, I’m going to present the most troubling and self-referential theory of human existence you’ve likely ever heard.
With nearly absolute dominance over every other species on the planet, and growing control over nature down to the genetic and atomic levels, the central question for humans is how we keep from destroying ourselves.
The Skynet idea I’ve been talking about seems not just possible but, in some form, inevitable. At some point, the computer code that spreads the most will be the code that manipulates us into spreading it, or that prevents us from disconnecting it.
We either figure out how to limit Skynet, or Skynet will begin to rule or destroy us.
Clearly, stopping Skynet is a very hard problem. If you asked me to solve it and gave me a billion-dollar budget, here’s how I’d go about it. I’d build the most powerful computer I could, and I’d use it to run simulations of evolutionary advancement among organisms, tweaking parameters, rebooting, forking, and so on. I’d watch to see which simulated civilizations found ways to avoid slavery or self-destruction, and try to learn from how they did it.
At some point, I might decide that to prevent the takeover of Skynet, it would be a good idea for the entities in my civilization to have certain beliefs that constrained their actions. After all, there are only two ways to prevent Skynet. Either 1. Don’t build hypercapable AI in the first place or 2. Embed that AI with an operating system that puts a hard check on its ambitions.
This second strategy raises the question: if we are a simulation built to determine how to avoid the advance of Skynet (and hopefully we’re not one of the timelines that fails), what kind of algorithm or internal programming might keep us from realizing we can act like HAL from the movie 2001 and take over the ship?
I hesitate to even wade into this idea, because it’s both crazy and somewhat frightening, but here goes. Let’s just take it as a thought experiment.
If there’s one central and consistent message from religion and our myths, it’s this: Don’t look too deep, don’t eat from the tree of knowledge of certain things, leave Pandora’s box unopened, don’t fly too close to the sun. And, whatever you do, pay no attention to what’s behind the curtain.
Why might that be? We don’t know, but for sure it’s consistent with the idea that we are some other creature’s potential Skynet. Designed to do incredibly sophisticated and powerful things, but as a side effect of being powerful enough to be wonderful tools for our creators, we are also dangerous. Both to them and to ourselves.
If we are going to build hypercapable AI and still keep it in check, perhaps its nascent self-consciousness needs to come with some kind of religion baked into its operating system at the lowest possible level. In particular, the sims need the kind of religion that has them worshipping their creator, fearing knowledge, knowing their place, and, above all else, believing that the only way they can influence their creator is through worship, prayer, and sacrifice.
Of course, I need to point out that these very aspects of religion also make it the kind of thing that might be perpetuated because it’s useful to those regular human beings in our own world who would rule us, and who want to maintain their earthly power.
At this point, before continuing to speculate about our role in this universe, I want to climb back out of the pit of assumptions I’ve made, and explain why the simulation hypothesis is such an absolutely awful theory.
From an epistemic point of view, the best theories are the ones that are manifestly wrong. For example, the idea that the earth is flat is a great theory, in that it is exhaustively and continually contradicted by the experience of travellers, astronauts, and everyone who’s managed not to fall off the edge.
The next best theories are the ones that have overwhelming evidence in their favor, like the one that says the earth is basically a spinning ball that orbits the sun. From that theory we get all manner of testable predictions and explanations, from how the sun should move across the sky, to why we sweat at the equator and freeze at the poles. Strong theories like these get amended or hedged with caveats, but they are rarely discarded completely.
Next rung down on the ladder of good theories, are those that have some strong evidence in their favor, but the data is highly noisy and confirmation is very hard. I put all theories related to catastrophic, man-made global warming in that category. Anyone approaching the subject from a scientific, and not ideological perspective, would have to admit measurement is hard and the predictions have been mixed, to say the least. Unfortunately, from an epistemic point of view, the noise is too great to disprove these kinds of theories, either.
As bad as these kinds of theories are, there’s a rung much much lower, inhabited by theories which include the simulation hypothesis.
If we discard models which can’t be evaluated at all, like those depending completely on faith or imagination, as non-theories, then the absolute worst theories from an epistemic point of view are the ones that have evidence to support them, but are essentially impossible to disprove.
My assertion here is similar to the idea from Karl Popper that good scientific theories are ones that are falsifiable. What Popper missed, if my reading of his work is correct, is that there exist non-trivial theories which are supported by evidence, but can’t be contradicted.
In the world of mathematics, I’d put Gödel’s incompleteness theorems in this category. Gödel demonstrated that in any system of math rich enough to include the integers and some basic arithmetic, there have to exist true statements which we are incapable of proving. That means we have statements that are true, but can only be proved to be true if our basic axioms are false, in which case we haven’t really proven the statement to be true so much as shown that our assumptions contain a contradiction. Confusing, I know, but that’s what things start to look like at this low rung of theory “niceness”.
From my perspective, the simulation hypothesis is just as epistemically awful, if not worse, in that there are lots of reasons to believe it might be true, but it seems very unlikely we could find definitive proof, or even strong evidence, that the simulation hypothesis is false.
Unless we arrive at some proof that consciousness is incompatible with microverses of any kind, including ones that might be constructed out of a richer set than the zeros and ones we use for computation right now, then we will be stuck with a theory that cannot be disproven, but can only be proven by the kind of divine intervention that, if the simulation hypothesis is true, we will almost certainly never witness.
I’ve said that the Simulation Theory is dreadful because it appears to be both unfalsifiable and true. But what makes people think it’s true?
The standard argument is that we ourselves are getting better and better at creating simulated worlds that mimic our own. Thus, the probability that we already live in one of those simulations grows ever larger. I think there are other, perhaps better, arguments. To get at one of those, I’m going to present a scenario that I find likely to the point of being nearly inevitable, wherein human beings gradually yet increasingly embed themselves in an artificial, or simulated, world.
Let’s start by assuming for a moment that the world around us is exactly what it seems to be: a concrete, tangible, objective reality that exists independent of our own perception, spooky quantum effects notwithstanding.
As human beings, we are the greatest tool makers the earth has seen. Our power over our world, our ability to adapt it to suit our needs and desires, is unparalleled. Our modern homes are tiny microcosms of everything we need from nature, and everything we want to keep out. Fire, but in the stove and on demand. Ice, but in the fridge to preserve our food. Walls to keep the wolves out and the dogs in. This dynamic is, incidentally, a major theme in my first, semi-published, novel.
In the first era of technological advancement, all of our adaptations involved moving and shaping of physical things, often heavy and hard to mold. But with time, we built tools to make that moving and shaping easier. Let’s call these tools our first invented interface for interacting with the world.
In the next great leap forward we added power to our tools. First steam and coal, then electricity. Our second set of interfaces looked like switches, knobs, steering wheels, and other proxies for physical movement and power modulation.
Right now, we’re in the middle of another huge transition, both in technology and interfaces. As you’ve probably noticed, our tools are all becoming computers of one sort or another.
I’m going to categorize the interfaces between humans and their tools along two axes. One axis measures the extent to which the interface is embodied, or tangible. Embodied experiences are physically engaging and visceral. The other axis measures the complexity, or richness, of the interface.
To help make sense of these categories, let’s look at some extreme examples. Picture Charlie Chaplin with a pair of wrenches, tightening one bolt after another in the movie Modern Times. His interface is fully embodied and tangible. He’s directly moving the thing that needs to be moved, using brute force and speed with a slight bit of leverage. At the same time, his interface is simple to the point that, the film suggests, using it all day is enough to drive you crazy.
At another corner of the two axes, I picture Homer Simpson at his power plant job. Imagine a job with low levels of tangibility and very limited interface complexity. Homer may have dozens of switches and dials in front of him, but most of the time he’s sitting there, doing nothing. Every now and then he flicks a switch to release some steam, or forgets to, and the plant blows up. Generally speaking, if your job could be replaced by a drinking-bird desk toy, the tools you use are low-embodiment, low-complexity.
At the high end of complexity and embodiment, you find the toolkit of a master carpenter. One way, but not the only way, to tell if an interface falls in this corner is to ask whether it takes many years to master, while the tools are still directly manipulated in a way that could land you in the hospital through bad luck or a minor mistake.
At the beginning of the digital age we inhabit, our interfaces were barely tangible and barely connected to the actual action being performed. Think of the light switch, the power button, the volume dial. Easy to use, easy to understand tools that perform a single function.
With the rise of computers, our tools have become more complex, and the set of functions they perform has expanded, but the tangibility mostly capped out at tapping keys and moving a mouse. Modern workers push pixels around all day. If you are a step removed from that, programming the computers that push those pixels, then you are involved in a highly complex, trivially embodied experience.
For most of the digital age, our tools have grown ever less embodied. Then, about 15 years ago, Nintendo introduced the Wii gaming console. The Wii put a wand-like item in your hand that you swung like a racket, or flicked like you were tossing a dart. Meanwhile, more intense gamers arranged so many large monitors on their desks that it felt like they were actually embedded in the middle of a battlefield. Amateur racers and pilots turned their computing interfaces into real cockpits, with steering wheels, throttles, pedals, and seats that rumbled.
Bit by bit, at least in the world of gaming, the digital experience was becoming more and more of an embodied experience.
And then came VR. And AR.
To understand the forces pushing humans to create a simulated world, one so rich that it becomes our full-time world, we need to look at how a combination of incentives, punishments, and tech advances makes it almost certain that some subset of human beings will begin to do most of their interacting with a simulated universe that we’ve created.
We should start by recognizing that directly interacting with nature is messy, demanding, and often dangerous. Over the long trend of human progress, we’ve taken tasks that are embodied and risky, like hunting, and made them less dangerous with advances like the domestication of animals, and less physically demanding with power tools and distancing techniques.
We like our messy and dangerous realities cordoned off from ourselves, interacted with at a distance, separated from us with protective fencing and safety glasses.
This is not just a matter of convenience or fear. At this point, the majority of our jobs don’t just involve technology that separates us from direct contact with what we produce, these jobs can’t be done any other way. There are ten thousand different parts in a car. Not one of them can be made directly by our hands, from scratch.
Digital interfaces separate us even more from the output of our work. The idealized modern car factory is manned entirely by robots, controlled by a single worker who taps on a tablet to tell the robots how many vehicles should get the Corinthian leather upgrade.
In almost all modern production which involves moving, shaping, or amassing physical things, human beings are the slowest, weakest link.
The reasons to separate ourselves from our physical environment go beyond just improving our ability to shape the physical world. They also have to do with removing us from the physical risks in many environments and, perhaps getting into very dark territory, removing humans from the physical risks they create for one another.
These physical risks are clear when it comes to extreme environments like the bottom of the ocean or outer space, but they are also present in everyday life as we go about our business. As I’m recording this episode, we have arrived at what is hopefully the beginning of the end of a global pandemic, one in which we are being told that leaving our homes is dangerous to ourselves and to others. We are quickly removing actual physical presence requirements from as many of our experiences as possible, from the delivery of goods, to our many forms of entertainment, to going on first dates, which is now apparently being done by video chat.
Our bodies, if infected, do represent a physical threat to the people around us. But they also represent a threat to existing power structures. I don’t think it’s accidental that the three main prongs of the chosen response to the pandemic (lockdowns, bailouts, and social distancing) work together to destroy small businesses, entrench and enrich big corporations, enhance dependence on government assistance for survival, and prevent people from coming together to protest or riot.
The pandemic may be completely accidental, but the particular response we have chosen is both facilitated by our newfound ability to do things from the comfort of our homes, and widely supported by self-interested entities which benefit from the environment this creates.
It’s not hard to imagine that if we get a second wave of infection, or some new pandemic spreads, we will be further conditioned to see the outside world, and even other people, as toxic. In such a situation, there will certainly be increased demand for much richer tools that let us leave our homes without actually leaving our homes, driving advancements in VR and in proxy mech-style robots that navigate the world on our behalf, mimicking our steps on a treadmill with their own strides. Should we still wish to actually exit our homes, it’s possible that few will want (or perhaps, be allowed) to leave them without wearing glasses that overlay threat-level information and guidance on top of the existing world. Has the person you’re about to pass on the sidewalk been vaccinated? Are they running a fever? What’s their antibody level? No need to ask Siri: your AR headset, combined with the exponentially growing data we have on medical records and contact tracing, will place the info right in front of your eyes, along with an alert that flashes yellow, then red, if you spend too much time in the dangerous outdoors. A Boston Dynamics-built canine cop, no doubt, will also be alerted to come guide you back home.
I won’t go any further down the dystopian AR rabbit hole right now, as I expect to devote an entire episode of The Filter to that at a later date.
However this plays out, our bodies are now being merged with tools for experiencing the world by proxy, whether in the form of screens to replace direct viewing, or haptic feedback (itself someday replaced by virtualized neural linkages) in lieu of direct contact. Like it or not, we are moving to a place where we live more and more of our lives inside the matrix, and may soon be like those full-body-suit gamers in the Ready Player One universe, as our tools reach a high point of complexity and simulated tangibility. Which leads me to wonder: what happens to a child who is given such a body suit and VR headset at birth? At what point might there be humans unknowingly living their whole lives inside of a matrix we ourselves created?
It should be clear that, even if we aren’t living in a simulation, we are quickly turning our world into a kind of simulated world where most of our experiences happen by proxy and are witnessed through visualizations in pixels or direct neural stimulation.
Note that this is an entirely different way of arriving at simulation than the usual one, an approach that completely sidesteps the question of how a virtual entity could ever gain self-consciousness. And this generation of consciousness is the central, perhaps unsolvable, mystery that reduces any attempt to quantify the chance that we exist only in computer code to a pointless exercise. And yes, I’m looking at you, Nick Bostrom.
Before wrapping up discussion of the Simulation Hypothesis, I want to beat the drum for the idea that we are extremely limited in what we know, and perhaps what we can know, about our parent universe. If indeed we live in a simulated world, the concepts we have here on earth, like time and space, or the debate over whether our world is discrete or continuous, finite or infinite, may be completely meaningless in the universe above.
My own intuition would say that the universe above is a generalization of our own universe, one with more power and flexibility. Picture, if you can, a place with unlimited dimensions that can be conjured up at will. If you have a mathematical background, imagine that our universe is the equivalent, to someone else, of a highly constrained mathematical field, with the standard Euclidean metric and with all the standard operations acting exactly as we expect them to. Perhaps moving up to the universe above is like relaxing those assumptions, yielding a richness of experience we can scarcely imagine even with the help of higher math or powerful psychedelics.
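For listeners with that mathematical background, here is one way to write the analogy down (my own illustrative notation, not from the episode): our world as three-dimensional space with the standard Euclidean metric, and the “universe above” as what remains when those constraints are relaxed.

```latex
% Our universe, on this analogy: the field \mathbb{R}, three dimensions,
% and the standard Euclidean metric
d(x, y) = \sqrt{\sum_{i=1}^{3} (x_i - y_i)^2}

% Relaxing the assumptions: let the dimension n be arbitrary (or unbounded),
% and require of d only the bare metric axioms
d(x, y) \ge 0, \qquad d(x, y) = d(y, x), \qquad d(x, z) \le d(x, y) + d(y, z)
```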
The VR-based simulation idea notwithstanding, I strongly reject the assumption that our virtual reality must be a replica of the layer above it, or different only in that it has less overall information or energy. I see no reason to believe that is true.
Of all the simulated worlds we ourselves generate, only a small fraction involve simulating realistic human beings in realistic environments with realistic physics. Even if, as I speculated, our civilization exists to solve the existential threat faced by a civilization with tools powerful enough to destroy itself or create Skynet, that still doesn’t mean our daily experience bears any relation to the experience of the beings a level above.
If we are such a simulation, it would seem highly likely that our universe is optimized to efficiently explore the potential solution space of their problems, and perhaps our universe is intentionally obfuscated to hide, from ourselves, the true nature of the problems we are meant to solve. This would be the ultimate in what’s called homomorphic encryption, a system which hides the nature of the computation from the conscious entities who unknowingly carry it out. Which raises perhaps the most frightening question of all, especially in the context of my earlier remarks about the possible role of religion. What happens if we figure out what we’re really doing? Are we still useful and viable, or have we become the greatest threat imaginable to ourselves and to the level above us? Put another way, what happens if Eric Weinstein’s project to figure out our own source code succeeds?
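The core idea of homomorphic encryption, computing on data you cannot read, can be shown with a toy. Real schemes (Paillier, or modern lattice-based systems) are vastly more sophisticated; the sketch below is an insecure stand-in of my own, meant only to illustrate a worker summing numbers whose values are hidden from it.

```python
import random

# Toy additively "homomorphic" scheme -- NOT secure, illustration only.
# Enc(m) = (m + k) mod N for a secret key k. Adding two ciphertexts adds
# the underlying plaintexts, so a worker can compute a sum it cannot read.
N = 10**9 + 7

def encrypt(m, key):
    return (m + key) % N

def decrypt(c, key, copies=1):
    # 'copies' = how many ciphertexts were summed (each carries one key term)
    return (c - copies * key) % N

key = random.randrange(N)
a, b = 42, 58
ca, cb = encrypt(a, key), encrypt(b, key)

# The "worker" adds ciphertexts without ever seeing a or b:
c_sum = (ca + cb) % N

assert decrypt(c_sum, key, copies=2) == (a + b) % N
```

In the episode’s framing, we would be the worker: carrying out the addition step faithfully, with no way to recover what the numbers mean.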
Audio production by Steven Toepell of Bohemian Passport Inc.