Resolving the Unknown
What if reality were simply a machine generating plausible explanations for phenomena?
A man walks into the apartment complex of his friend, whom he was meeting to see a movie. He knocks on the door but there’s no response. He calls a few times, eventually knocks again a little harder, and the door swings open by itself. He peeks in and sees a mess - papers all over the floor. He walks in to make sure everything is okay. As he walks, he steps on something - “a knife?” he thinks. He picks it up. It has blood on the handle. A shiver runs down his spine. He carefully walks around and opens a closet door. A body falls out on top of him, knocking him to the floor. He is petrified, frozen in fear. Then the police walk in. They ask him what’s going on, but he’s in so much shock he cannot even speak.
To the police, everything seems as if the man murdered his friend. It seems like the most obvious explanation, and this type of scenario has happened countless times before. In some cases, the innocent even start believing they committed the crime themselves, even though they didn’t.
This is a case of “resolving the unknown”: coming up with a reasonable explanation for what we don’t know (in this case, who murdered the friend). The more details the police find that weave a story fitting that explanation, the more certain they become that the explanation is true.
We do this all the time, largely without even thinking about it consciously. We take for granted that the world around us works in a consistent and predictable way. But what does that really mean?
When something is predictable, it means that we have information that tells us how it should behave. Usually this is from previous experiences. But why?
If I pour coffee into my cup, I know that it won’t leak out the bottom. I’ve done it countless times before with other containers. If it looks like a container, and has a certain weight that indicates it’s thick enough, then I presume it will retain the liquid even if it’s hot.
Of course I don’t know this for sure. It’s not like I’ve checked all the molecules to make sure they’re properly bonded. Yet if I were to investigate after the fact why it’s able to hold this liquid, looking under a microscope I would probably find things like this: it’s largely ceramic, consisting of clay (silicon dioxide and alumina), tightly packed and then covered in a glaze (also silicon dioxide). If I checked the layout of the molecules, I’d find them to be uniform. There would be trillions of them, far too many to count. So how could I ever know for sure?
Isn’t that amazing though? The fact that I somehow relied on these properties without even knowing what they were, let alone that they were reliable, except through a few empirical tests on materials I thought looked and behaved similarly. I could easily have been wrong though. But this happens all the time. We use things without understanding them at all. Yet somehow we cope, and things work the way we expect regardless. It’s kind of a fluke if you think about it. It’s a fluke that anything works so predictably.
What if this is all wrong though? What if we have the whole thing upside down?
Let’s take a step back. What if, instead of discovering that the cup is made of particles (and just happening to be right about its composition and bonding), it wasn’t a discovery at all - what if reality generated this explanation on the fly?
Let me repeat that because it’s the crux of my argument.
What if reality is simply a machine that generates plausible explanations for phenomena, even though the cause of those phenomena is something else entirely? Not just plausible, but absolutely convincing.
It’s easy to understand what I’m referring to if you’re familiar with video games. In video games you may see a spaceship in 3D. The spaceship has thrusters, engines that burst out some kind of plasma to move it forward as a propulsion mechanism. But, of course, that isn’t what’s actually moving it forward: algorithms that move 3D object coordinates are doing that.
From a far distance, you may see the spaceship as a speck flying across the deep void. You don’t see the thrusters, and the computer doesn’t even draw them, because of something called “Level of Detail” that skips rendering things you can’t see clearly. But as you get closer, at some point the thrusters will appear. Parts of the machine gradually appear, giving you the confidence that these engines are what is moving this thing forward.
This is how video games “resolve the unknown”, and they do it “Just In Time” - just at the point where you have enough information to draw conclusions about causes.
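The Level of Detail idea above can be sketched in a few lines of code. This is a minimal illustration, not how any particular game engine does it; the distance thresholds and part names are invented for the example:

```python
# Sketch of "Level of Detail" (LOD): detail is only generated
# when the viewer is close enough to perceive it.
# Thresholds and part names are invented for illustration.

def visible_parts(distance):
    """Return the spaceship parts the renderer bothers to draw
    at a given viewer distance (smaller = closer)."""
    if distance >= 1000:
        return ["speck"]                      # far away: just a dot of light
    if distance >= 200:
        return ["hull"]                       # close enough to see a shape
    return ["hull", "engines", "thrusters"]   # details "resolve" just in time

print(visible_parts(5000))  # ['speck']
print(visible_parts(150))   # ['hull', 'engines', 'thrusters']
```

The thrusters never exist until you are near enough to demand them - the renderer resolves the unknown only at the moment an observer could check.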
That’s virtual reality. Now what if reality works the same way? What if reality is actually a kind of simulator that simulates efficient causes (generating explanations) but doesn’t actually run on them?
This is what simulation theory claims, and it’s very close to what we are talking about in this blog. I was talking to ChatGPT the other day about the concept and it suggested calling the phenomenon “transrational fideism” - a faith that transcends rationalism. The primary cause is the continuity of our thoughts - predictability, mixed with the need for novelty. This generates phenomena, and then “reality” generates plausible efficient-cause explanations when we investigate, “on the fly”.
Reality, then, is a machine that simply allows moment-to-moment phenomena to arise based on the characteristics of consciousness - allows us to “manifest” - but it always provides a rational explanation for those phenomena, resolving the unknown, when, and if, we demand it. The explanation, though, is just a façade - a decoy that makes us believe it is the cause, when it isn’t. And sadly we sometimes do believe it to be the cause, and then live our lives according to its rules, its laws, unknowingly enslaving ourselves.
Think about this the next time you look for an explanation for something.