Quantum Physics and Faith
From Quantum Bayesianism to Algorithmic Information Theory: Is there a strong mathematical basis for Idealism, and the power of faith?
The conundrum of quantum physics has led to a plethora of quite unusual theories. Not least among these are those that suggest reality depends on the observer, exclusively so.
Before we delve into the details of these theories, let’s introduce a few concepts in simple terms that are critical to understanding something very important about these new theories about reality.
What’s QBism?
QBism, short for "quantum Bayesianism," is an interpretation of quantum mechanics that emphasizes the role of personal subjective experience in understanding the theory and experimental results.
QBism argues that the mathematics of quantum mechanics should be understood simply as a tool for making personal probability judgments about the outcomes of measurements, rather than as a description of objective reality.
Critics have often associated this interpretation with the philosophical position of solipsism, which holds that the self is the only thing that can be known or verified to exist.
At its root is something called Bayesianism, named after the 18th-century statistician and theologian Thomas Bayes. Bayes was born in London around 1701 and studied logic and theology at the University of Edinburgh. He was an ordained Presbyterian minister and spent most of his working life in Tunbridge Wells, Kent, England.
Bayes developed the basic concepts of Bayesianism in a paper entitled "An Essay towards solving a Problem in the Doctrine of Chances," published posthumously in 1763, two years after his death. However, Bayes' work was not widely known or used until it was rediscovered and developed by others, most influentially in the 20th century.
In simple terms, Bayesianism is a way of thinking about probability that emphasizes the role of personal beliefs and subjective experience in understanding the likelihood of different events.
An example of Bayesianism in action is as follows: Imagine you're trying to figure out the probability that a coin is fair (has a 50/50 chance of coming up heads or tails) vs. biased (has a higher chance of one outcome over the other).
Initially, let's say you have no information about the coin, so you assign an equal probability of 50% to it being fair and 50% to it being biased.
Then, you flip the coin 10 times and it comes up heads every time. The probability of getting 10 heads in a row from a fair coin is very low (about 0.1%), so this evidence counts heavily against the fair-coin hypothesis.
According to Bayesianism, you update your belief by combining your prior (a 50% chance that the coin is fair) with the likelihood of the new evidence (ten heads in a row) under each hypothesis, yielding a posterior probability. The final probability that the coin is fair drops sharply, but it never reaches exactly zero.
In this way, Bayesianism allows you to incorporate new information and update your beliefs to make better predictions - perhaps in a way that is more akin to how we would naturally predict outcomes, given our awareness of how we lack complete information.
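The coin example above can be sketched in a few lines of Python. This is a minimal illustration, not a general framework; the prior of 50% and the assumed bias of 0.9 towards heads are illustrative choices, not values from the text.

```python
def posterior_fair(n_heads: int, n_flips: int,
                   prior_fair: float = 0.5, biased_p: float = 0.9) -> float:
    """Return P(fair | data) for a coin that is either fair (p = 0.5)
    or biased towards heads with probability `biased_p` (an assumed value)."""
    # Likelihood of the observed data under each hypothesis
    lik_fair = 0.5 ** n_flips
    lik_biased = (biased_p ** n_heads) * ((1 - biased_p) ** (n_flips - n_heads))
    # Bayes' theorem: posterior is proportional to prior times likelihood
    evidence = prior_fair * lik_fair + (1 - prior_fair) * lik_biased
    return prior_fair * lik_fair / evidence

# Ten heads in a row: belief in "fair" drops sharply but never reaches zero.
print(posterior_fair(10, 10))
```

Note that no amount of evidence drives the posterior to exactly zero here, which is precisely the point made above: beliefs are revised, not annihilated.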
Ontological?
Now what does it mean to ask if this might be ontological?
Ontology is the study of what exists - i.e. how you might tell the difference between something existing and something being an illusion.
While Bayesianism by itself is concerned only with determining probability - i.e. with what we can know - applying it to quantum mechanics makes a much more significant claim: that reality itself behaves (in some ways) according to our predictions.
The very core of our everyday experience is shaped by the decoherence of quantum states into the "classical" world around us. For the QBist, this happens only when the observer gains knowledge about the system, and it unfolds according to the probabilities the observer assigned.
This is why some people have called out QBism for being solipsist: it only ever talks about reality in terms of subjective probabilities.
Solomonoff Induction
There is another method for calculating probabilities that is often described as the "computational formalization" of Bayesian probability: it's called Solomonoff induction.
Solomonoff induction weighs possible prediction systems by their ability to predict past events (and, as we will see below, by their simplicity). That weighted mixture is then used as a reference point for calculating future probabilities.
In our coin example, imagine you have two possible sources of information for predicting the coin toss: a simple rule that says the coin is fair (has a 50/50 chance of coming up heads or tails) and a neural network that has been trained on a dataset of coin toss outcomes.
Using Solomonoff induction, you would assign a probability to each prediction based on how well each source has predicted the coin toss outcomes in the past. For example, let's say the simple rule has been accurate 70% of the time, and the neural network has been accurate 85% of the time. In this case, you would assign a higher probability to the prediction made by the neural network.
As new data comes in, for example after a toss and the outcome is known, the probabilities would be updated accordingly. Solomonoff induction will provide an optimal prediction by weighting the predictions of all sources, taking into account all the observations and adjusting the prediction based on the past performance of each source.
So the difference here is that Solomonoff induction assigns probabilities to the predicting systems themselves, whereas Bayesianism by itself assigns probabilities directly to outcomes based on past data.
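The predictor-weighting scheme described above can be sketched as a simple Bayesian mixture: each "source" outputs a probability of heads, and its weight is multiplied by how well it predicted each observed toss. The two sources here (a fair-coin rule and a heads-biased rule with p = 0.9) are illustrative assumptions, and this is a toy stand-in for Solomonoff induction, not the full uncomputable construction.

```python
def update_weights(weights, predictions, outcome):
    """Multiply each source's weight by the probability it assigned to the
    observed outcome, then renormalise so the weights sum to one."""
    new = [w * (p if outcome == "H" else 1 - p)
           for w, p in zip(weights, predictions)]
    total = sum(new)
    return [w / total for w in new]

# Source A says the coin is fair; source B says it is biased towards heads.
sources = [0.5, 0.9]
weights = [0.5, 0.5]          # start with no preference between sources

for toss in "HHHHHHHHHH":     # ten heads in a row
    weights = update_weights(weights, sources, toss)

print(weights)  # the biased-coin source now carries almost all the weight
```

After ten heads, nearly all the weight sits on the biased-coin source, so the mixture's next prediction is dominated by it - exactly the "past performance" bookkeeping described above.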
Algorithmic complexity
By itself, Solomonoff induction doesn’t tell us much. It just helps us find a system that can help us predict the future, from a pragmatic point of view.
The next step is to look at this from the perspective of something called algorithmic complexity. This lets us calculate how complex a system is.
Now, if we have two systems that make the same predictions, but one is more complex than the other, then you could argue that the simpler of the two is more likely to be reliable. The extra parts of the more complex system played no role in the past predictions, so they have never been tested - they are untested assumptions carried along for free.
To calculate the complexity of a system, we use something called Kolmogorov complexity. In simple terms, the Kolmogorov complexity of an object is the length of the shortest computer program that can produce it.
For example: imagine you have two algorithms that sort a list of numbers, the well-known "Bubble Sort" algorithm and the "Quicksort" algorithm. Both algorithms produce the same sorted list as output, but their descriptions differ in length.
Measuring the Kolmogorov complexity of each algorithm means measuring the length of its shortest description - and note that this is about description length, not speed. Quicksort runs much faster, but bubble sort can be written in fewer instructions, so as a description it has the lower Kolmogorov complexity. When two programs account for the data equally well, Solomonoff induction prefers the one with the shorter description.
So there's an implicit requirement in Solomonoff induction, and arguably in Bayesianism itself: a preference for the simplest algorithm that explains how likely a certain event is, given past experience.
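Kolmogorov complexity is uncomputable in general, but compressed length is a standard computable stand-in for it. The sketch below (the strings are illustrative) compares a string generated by a tiny rule with a random one, using zlib's compressed size as the complexity proxy.

```python
import random
import zlib

def approx_complexity(s: str) -> int:
    """Length in bytes of the zlib-compressed string: a computable
    upper-bound proxy for Kolmogorov complexity."""
    return len(zlib.compress(s.encode("utf-8"), 9))

patterned = "01" * 500          # describable by a very short program
random.seed(0)                  # fixed seed so the example is reproducible
noisy = "".join(random.choice("01") for _ in range(1000))

print(approx_complexity(patterned), approx_complexity(noisy))
# The patterned string compresses far better than the noisy one.
```

The patterned string has low approximate complexity because a short description ("repeat 01 five hundred times") regenerates it, while the noisy string has no description much shorter than itself - the same asymmetry Solomonoff induction exploits when it prefers simple hypotheses.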
Law Without Law
This is the point of entry for Markus Müller. He has written a paper titled Law without law: from observer states to physics via algorithmic information theory, which takes algorithmic information theory and turns it into a basis for reality itself - an ontology.
Like Quantum Bayesianism, it calculates everything from a first-person perspective. This theory, though, is quite intentionally solipsistic in its foundation.
Here’s a summary of that paper:
The way we currently understand physics is that any physical theory should describe how the objective world outside of us works. However, quantum theory suggests that physical systems do not always have clear objective properties that can be observed. This raises questions about our current understanding of the world, and some argue that we should consider a different perspective where the focus is on the observer rather than the world.
The approach of the theory described here is based on the idea of induction, which is a way of making predictions about future events by taking into account past observations. It is also based on algorithmic information theory. The main idea is that it is the observer's state of mind, rather than the world or physical laws, that determines the chances of what they will observe next.
Surprisingly, despite starting from a rather solipsistic point of view, this approach leads to a theory that predicts the appearance of an external world that follows simple, computable, probabilistic laws. Objective reality is not assumed in this approach but emerges as a statistical phenomenon - as patterns and regularities in the observer's sequence of experiences.
By using Solomonoff induction, with probabilities weighted by Kolmogorov complexity, we see how such simple postulates can account for the emergence of the classical world around us, all from a first-person, quasi-solipsistic starting point.
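The simplicity-weighting at the heart of this picture can be illustrated with a toy version of Solomonoff's universal prior (this is an illustration of the 2^-length weighting only, not Müller's actual construction). Each hypothetical "program" is a short Python expression generating bit i of the observer's experience, and its character count stands in for program length.

```python
# Toy rule set: description string -> function producing bit i.
rules = {
    "0": lambda i: 0,                  # always zero
    "1": lambda i: 1,                  # always one
    "i%2": lambda i: i % 2,            # alternating bits, short description
    "(i*i)%2": lambda i: (i * i) % 2,  # same behaviour, longer description
}

def predict_next(observed):
    """Weight each rule by 2^-len(description), keep only rules consistent
    with the observed bits, and return P(next bit = 1)."""
    n = len(observed)
    weights = {desc: 2.0 ** (-len(desc))
               for desc, f in rules.items()
               if all(f(i) == b for i, b in enumerate(observed))}
    total = sum(weights.values())
    return sum(w for desc, w in weights.items() if rules[desc](n) == 1) / total

print(predict_next([0, 1, 0, 1]))  # surviving rules all predict 0 next
```

Only the rules consistent with past experience survive, and shorter rules carry exponentially more weight - a miniature of the claim that simple, computable regularities come to dominate what the observer expects to see next.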
And it is related in many ways to the same realizations of Quantum Bayesianism: that we must start with what we know, and we learn more, update probabilities to create a pattern that can be used to determine the reality we see.
Between the unexplained results of quantum physics and studies of consciousness that point to similar first-person dynamics of pattern-finding and data compression, we may finally be in a position to start assembling the pieces of a solid foundation that resurrects that age-old idealistic worldview.