
What physicists believe about quantum mechanics

A few years ago David Bourget and David Chalmers did a follow-up survey to their 2009 one polling philosophers on what they believe about various questions. One of the questions was about quantum mechanics, particularly the measurement problem and its various interpretations. Over the decades there have been surveys of physicists themselves on this question, but most, if not all, had very small sample sizes, usually just the attendees at a particular conference.

As part of the Quantum Centennial (the celebration of 100 years of quantum mechanics) Nature has done a fairly large survey of the community of quantum researchers with over 1100 respondents. The results are interesting, although not particularly surprising.

Copenhagen still comes out on top with 36%. It’s interesting that it’s stronger with experimentalists than with theorists (half vs a third). I suspect the experimentalists are hewing to a very pragmatic version of the interpretation, which highlights a concern that the term “Copenhagen interpretation” means different things to different people. The article acknowledges this, noting that 29% of those who selected Copenhagen favored an ontic version of the wave function, versus 63% who came down on the epistemic side.

15% are Everettians (or “consistent-histories” advocates, who, I suspect, object to being lumped in with the many-worlders), 7% pilot-wave, 4% spontaneous collapse, 4% relational quantum mechanics, and a smattering of other views.

Overall, 47% of respondents see the wave function as just a mathematical tool, 36% take a partially or completely realist view (my position), and 8% take it to represent only subjective beliefs about experimental outcomes.

45% see a boundary between classical and quantum objects (5% see it as sharp) while 45% don’t (my view).

Just before the paywall, there is a question about the observer in quantum mechanics, with 9% saying it must be conscious. Another 56% said there had to be an observer, but that the “observer” can just be interaction with a macroscopic environment, and 28% argued that no observer at all is needed. (I think interaction with the macroscopic environment and the resulting decoherence is key, but it seems misleading to call that environment an “observer”.)

All interesting. Of course, how popular or unpopular a view is has no real bearing on whether it matches reality. Prior to Galileo’s telescopic observations in 1609, an Earth-centered universe was the most popular cosmology. Only a minuscule handful of astronomers accepted Copernicus’ view that the Earth orbits the sun. Until the quantum-measurement equivalent of the telescope comes along, all we can do is reason as best we can with the current data.

The results here are interesting to compare with what the philosophers thought on the Bourget-Chalmers survey. On quantum mechanics, philosophers were 24% agnostic, 22% hidden variable theories, 19% many-worlds, 17% collapse, and 13% epistemic. Once we take into account all the various forms of “Copenhagen interpretation”, these seem in a similar ballpark, except that philosophers are more open to hidden variable approaches. (It may be easier to favor hidden variables if you’re not the one who has to find them.)

My own view comes down to a preference for structural completeness (or at least more structurally complete models), which to me currently favors a cautious and minimalist take on the Everettian approach (as I described a few months ago). However, my credence in this conclusion is only 75-80%. That the survey indicates most physicists aren’t super confident in their own conclusions here makes me feel better.

This reminds me of a new approach that Jacob Barandes has been promoting on various podcasts (see this recent Sean Carroll episode as an example). Barandes calls it Indivisible Stochastic Quantum Mechanics. I won’t pretend to understand exactly what he’s trying to accomplish with it, but it involves rejecting the wave function completely and replacing it with something stochastic from the beginning, which strikes me as less structurally complete than the wave function, and so a move in the wrong direction. But maybe I’ll turn out to be wrong.

Anyway, now we have a firmer idea of where the physics community currently stands on quantum interpretations, or at least a firmer one than we did before. How would you have answered the survey questions? (There’s actually a small quiz in the article which is worth taking to see the logic leading to particular interpretations.)

Is quantum immortality a real thing?

In discussions about the Everett interpretation of quantum mechanics, one of the concerns I often see expressed is about the perverse low-probability outcomes that would exist in the quantum multiverse. For example, if every quantum outcome is realized, then in some branches of the wave function, entropy has never increased. In some branches, every attempt at quantum computing has produced the wrong result, and people have concluded it doesn’t work. In other branches, you as a macroscopic object might quantum tunnel through a wall.

Of course, for enthusiasts, this comes with a hopeful aspect, because in some branches you would go on living indefinitely, no matter how improbable that might be. Hugh Everett himself was reportedly a believer in quantum immortality, and so had little concern about the unhealthy lifestyle that led to his early demise in this branch. The idea is that if every outcome happens, then there are versions of you reading this that will live until the heat death of the universe.

This is vividly illustrated in the infamous quantum suicide thought experiment. One version described by Max Tegmark involves rigging up a gun to fire if a certain quantum event happens. Say the quantum event has a 50% chance of happening in any one second. You then put your head in front of the gun and begin the experiment. In half of all worlds where you begin the experiment, you die in the first second, but you go on living in the other half. In half of that remaining half you die in the next second, but go on living in the other half.

For you as the experimenter this goes on indefinitely with increasingly improbable outcomes leading to your survival. Of course, in virtually all worlds you leave behind grieving friends and family who are less convinced. But for you subjectively, if many-worlds is reality, you continue living until the experiment ends.
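
To make the arithmetic concrete, here’s a minimal sketch (my own illustration, not Tegmark’s) of how quickly the surviving branch weight shrinks under the 50%-per-second setup described above:

```python
# Minimal sketch (illustrative only): the Born-rule weight of branches in
# which the experimenter is still alive after n seconds shrinks
# exponentially, but never reaches exactly zero.

def survival_weight(seconds: int, p_fire: float = 0.5) -> float:
    """Fraction of total branch weight in which the gun has not yet fired."""
    return (1.0 - p_fire) ** seconds

for n in (1, 10, 60, 3600):
    print(f"after {n:>4} s: {survival_weight(n):.3e}")
```

After a single minute the surviving weight is already below one part in 10^18, which is the sense in which, on the Everettian picture, the surviving branches are real but vanishingly thin.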

(Before getting too comforted by the possibility of quantum immortality, it’s important to remember that this is more of a side-life than an afterlife. Most of the versions of you will still experience an approaching death. It’s also worth noting that a you a million years from now would likely have evolved into something utterly strange and unrecognizable to the you of today. And there’s no guarantee this ongoing existence would be pleasant. Indeed, under many-worlds, some would inevitably be hellish.)

One question that often comes up in discussions about this is whether reality allows for these infinitesimally low-probability outcomes, or whether there is some inherent minimal discreteness at the base of reality that prevents them. There’s nothing in the math to indicate such discreteness, but of course the math, at least the math we have today, is a description of reality that is likely only an approximation.

However in a recent interview with Curt Jaimungal, David Wallace, a proponent of the many-worlds interpretation, may have provided another reason to doubt these outcomes: quantum interference. (Note: if the embed doesn’t work right, the relevant remarks are at around the 1:21 mark. Also you don’t have to watch the interview to understand this post, but it is an interesting discussion.)

https://www.youtube.com/watch?v=4MjNuJK5RzM&t=4901s

To understand Wallace’s point, it helps to know a few things about how quantum decoherence works. Decoherence is the process by which a quantum system loses its wave-like nature and becomes more particle-like. As the system interacts with its environment, the phase relations that keep the wave coherent become disrupted. The wave becomes fragmented, and we call the fragments “particles”. This leads to the famous (infamous?) quantum interference effects disappearing, as demonstrated by the double-slit experiment.

But the word “disappearing” here, in reference to the interference effects, should be understood to mean “becoming undetectable”, not ceasing to exist. Under decoherence the interference never goes away entirely. Like the wave overall, it becomes fragmented, and settles into an underlying “noise”. (Note: this is actually a difference in predictions between collapse and non-collapse interpretations that should, in principle, be testable. Of course, figuring out a way to do the test is another matter.)
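
For those who like to see this in symbols, here’s a rough sketch in standard textbook notation (my formulation, not Wallace’s): if a system in a superposition of states becomes entangled with distinct environment states, tracing out the environment leaves a reduced density matrix of the form

```latex
% Rough sketch in standard notation (not Wallace's own formulation).
% System entangled with its environment: |\Psi> = \sum_i c_i |i>|E_i>.
% Tracing out the environment gives the system's reduced density matrix:
\[
\rho_{\mathrm{sys}} = \sum_i |c_i|^2 \, |i\rangle\langle i|
  \;+\; \sum_{i \neq j} c_i c_j^{*} \, \langle E_j | E_i \rangle \, |i\rangle\langle j|
\]
% Decoherence drives the environmental overlaps <E_j|E_i> toward zero,
% suppressing the off-diagonal (interference) terms, but for any finite
% environment they never become exactly zero.
```

Those tiny but nonzero off-diagonal terms are the “underlying noise” described above.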

Wallace’s point is that infinitesimally low-probability outcomes should be swamped by this remnant interference from higher-probability outcomes, meaning that they should be prevented from existing. If so, the branches where entropy never increased, where quantum computing never works, or, to use his example, where he as a macroscopic object quantum tunnels through a wall, shouldn’t exist.

What does this mean for quantum immortality? I don’t know that it wipes it out entirely. Many of the initial survival scenarios may be very low probability, but not profoundly low, and so may not be swamped by interference from the other branches. But it does seem to shorten the duration and overall make it less certain, even once someone accepts the existence of the other worlds. So there may be versions of you reading this that live for decades or centuries beyond the normal lifespan, maybe even millennia, but probably not until the end of the universe.

Still, the implications are interesting and fun to speculate about. If there is a version of me alive in the far future, I wonder if he (it?) will remember these speculations.

What do you think of Wallace’s point? If we assume many-worlds is reality, does the idea of quantum immortality seem plausible? Or are there other reasons to doubt it?

Many-worlds without necessarily many worlds?

IAI has a brief interview with David Deutsch on his advocacy for the many-worlds interpretation of quantum mechanics. (Warning: possible paywall.) Deutsch has a history of showing little patience with other interpretations, and this interview is no different. A lot of the discussion centers on his advocacy for scientific realism, the idea that science is actually telling us about the world, rather than just providing instrumental prediction frameworks.

Quick reminder: the central mystery of quantum mechanics is that quantum systems seem to evolve as waves, superpositions of many states, with the different states interfering with each other, all tracked by a mathematical model called the wave function. But when measured, these systems behave as localized particles, with the model only able to provide probabilities for the measurement result, although the measurement results as a population show the interference patterns from the wave function. This is often called “wave function collapse”.
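
As a bare-bones illustration (standard textbook notation, not tied to any particular interpretation), the simplest case is a single qubit:

```latex
% Simplest textbook case: a qubit in a superposition of two states,
% with complex amplitudes alpha and beta.
\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1
\]
% On measurement, the outcome is 0 with probability |alpha|^2 and 1 with
% probability |beta|^2 (the Born rule). The bare formalism provides nothing
% more than these probabilities for an individual measurement, even though
% repeated runs reveal the interference encoded in the amplitudes.
```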

Various interpretations attempt to make sense of this situation. Many deny the reality of what the wave function models. Others accept it, but posit the wave function collapse as a real objective event. Some posit both a wave and a particle existing throughout. The Everett approach rejects wave function collapse and argues that if we just keep following the mathematical model, we get decoherence and eventually the same observations. But that implies that quantum physics applies at all scales, meaning that it’s not just particles that exist in superpositions of many states, but measuring equipment, labs, people, planets, and the entire universe.

Reading Deutsch’s interview, it occurred to me that my own structural realist outlook, a more cautious take on scientific realism, is reflected in my more cautious acceptance of Everettian quantum mechanics. People like Deutsch are pretty confident that there is a quantum multiverse. I can see the reasoning steps that get them there, and I follow them, to a point. But my own view is that the other worlds remain a possibility, yet far from a certainty.

I think this is because we can break apart the Everettian proposition into three questions.

  1. Does the mathematical structure of quantum theory provide everything necessary to fit the current data?
  2. If so, can we be confident that there won’t be new data in the future that drives theorists to make revisions or add additional variables?
  3. What effect would any additions or changes have on the broader predictions of the current bare theory?

My answer to 1 is yes, with a moderately high credence, maybe around 80%. I know people like Deutsch and Sean Carroll have this much higher. (I think Carroll says his is around 95% somewhere on his podcast.) And I think they have defensible reasons for it. Experimentalists have been stress-testing bare quantum theory for decades, with no sign of a physical wave function collapse or additional (hidden) variables. Quantum computing seems to have taken it to a new level.

But there remain doubts, notably about how to explain probabilities. I personally don’t see this as that big an issue. The probabilities reflect the proportion of outcomes in the wave function. But I acknowledge that a lot of physicists do. I’m not a physicist, and I’m very aware of the limitations of my very basic understanding of the math, so it’s entirely possible I’m missing something, which is why I’m only at 80%.

(Often when I make the point about the mathematical structures, it’s noted that there are multiple mathematical formalisms: wave mechanics, matrix mechanics, path integrals, etc. But while these are distinct mental frameworks, they reportedly always reconcile. These formalisms are equivalent, not just empirically, but mathematically. They always provide the same answer. If they didn’t, we’d see experimental physicists trying to test where they diverge. We don’t, because there aren’t any divergences.)

If our answer to 1 is yes, it’s tempting to jump from that to the broader implications, the quantum multiverse. (Or one universe with a much larger ontology. Some people find that a less objectionable description.)

But then there are questions 2 and 3. I have to say no to 2. The history of science seems to show that any claim that we’ve found the final theory of anything is a dubious proposition, a point Deutsch acknowledges in the interview. All scientific theories are provisional, and we don’t know what we don’t know. And there are the gaps we do know about, such as how to bring gravity into the quantum paradigm. It seems rational to wonder what kinds of revisions they may eventually require.

Of course, 3 is difficult to answer until we get there. I do doubt any new discoveries would drive things toward the other interpretations people currently talk about, or overall be less bonkers than the current predictions. Again, given the history of science, it seems more likely that any revision would replace the other worlds with something even stranger and more disconcerting.

So as things stand, there’s no current evidence for adding anything to the structure of raw quantum theory. That does imply other worlds, but the worlds remain untestable for the foreseeable future.

To be clear, I don’t buy that they’re forever untestable. We can’t rule out that some clever experimentalist in the future will find a way to detect interference between decohered branches, to recohere them (which has been done, but only very early in the process), or some other approach we haven’t imagined yet.

My take is that the untestability of the other worlds means that Everettian quantum mechanics, in the sense of pure wave mechanics, shouldn’t be accepted because we like the worlds, or rejected because we dislike them. For now, the worlds should be irrelevant for a scientific assessment. The only question is whether anything needs to be added to the bare theory, a question, it should be noted, we can ask regardless of whether we’re being realist or antirealist about any of this.

All of which means that while my credence in austere quantum mechanics is 80%, my credence in the other worlds vacillates somewhere around 50%. In other words, I’m agnostic. This resonates with the views I’ve seen from a number of physicists, such as Stephen Hawking, Sidney Coleman, John Preskill, and most recently, Brian Cox, who accept the Everett view but downplay the other worlds. Even Sean Carroll notes in one of his AMAs that he doesn’t really care so much about the other worlds as about the physics at the core of the theory.

But maybe I’m missing something. Are the questions I raised above as easy to separate as I’m thinking? Or are there problems with pure wave mechanics I’m overlooking?

Avoiding the structural gaps

A long-standing debate in quantum physics is whether the wave function is real. A quick reminder: quantum entities appear to move like waves, including portions interfering with each other. These waves are modeled with the wave function. But once measured, quantum objects manifest as localized points or field excitations. The wave function can’t predict the measurement outcome, only probabilities for what the result will be.

A popular move here is to decide the wave function isn’t real, that it’s just a mathematical contrivance. Doing so seems to sidestep a lot of uncomfortable implications. But it leaves us trying to explain the statistical outcomes of measurements, which show patterns from portions of the wave interfering with itself. Those effects, along with entanglement, are heavily used in quantum computing. If the wave function isn’t modeling something real, then its usefulness in technology starts to look like a magic incantation.

Of course, accepting wave function realism leaves us with something that seems to operate in a higher dimensional “configuration space.” And we end up having to choose between unsettling options, like an objective wave function collapse on measurement, a pilot wave guiding the particle in a non-local manner, or just accepting pure wave mechanics despite its implications.

Valia Allori has an article at IAI arguing against quantum wave function realism. (Warning: you might hit a paywall.) The main thrust of her argument, as I understand it, is that we shouldn’t allow ourselves to be lured farther away from the manifest image of the world (the world as it intuitively appears to us) when there are viable alternatives.

Her article is a response to Alyssa Ney’s argument for wave function realism, which touts reclaiming locality as one of its benefits. Allori argues that this aims to satisfy an intuition we develop in three-dimensional space, that there aren’t non-local effects, “spooky actions at a distance”. But wave function realism only preserves locality across configuration space, which Allori views as a Pyrrhic victory.

Overall, Allori seems to view this as a conflict between two different sets of intuitions. On one side, we have views that are closer to the overall manifest image of reality, one with three dimensions, but at the cost of non-local phenomena. She doesn’t view this as ideal, but deems it preferable to the idea of a universal wave function existing in near infinite dimensions. In her view, embracing theories too far away from the manifest image puts us on the path that leads to runaway skepticism, where nothing we perceive can be trusted.

But I think looking at this in terms of intuitions is a mistake. When it comes to models of reality, our intuitions have historically never been particularly useful. Instead they’ve often led us astray, causing us to insist that the Earth was the center of the universe, that humans were separate from nature, or that time and space were absolute, all ideas that had to be abandoned in the face of empirical realities. The reason to prefer locality isn’t merely to privilege one intuition over others, but to prefer theories that provide a structurally complete accounting.

A while back I described this as a preference for causally complete theories. But causation is a relation across time that is made asymmetrical by the second law of thermodynamics, the law that entropy always increases. The more fundamental reality is the set of structural relations. A theory which can account for all (or at least more) of those relations should, I think, be preferred to theories that have larger gaps in their accounting.

By that standard, I perceive wave function antirealism to have huge gaps, gaps which proponents of the idea seem comfortable with, but I suspect only because, as Allori does, they deem it a lesser evil than the alternative. Of course, objective collapse and pilot-wave theories also have gaps, but they seem smaller, albeit still weaknesses that I think should make them less viable.

Pure wave mechanics seems like the option with the fewest gaps. Many would argue that accounting for probabilities remains a crucial gap, but that seems like more of a philosophical issue than a scientific one, a matter of how best to talk about what probabilities mean. In many ways, it highlights issues that already exist in the philosophy of probability.

Overall then, my take is that the goal isn’t to preserve the manifest image of reality, but to account for it in our scientific image. Preferring theories that are closer to the manifest image just because they are closer, particularly when those theories have larger gaps than the alternatives, seems to amount to what is often called “the incredulous stare”, simply rejecting a proposition because it doesn’t comport with our preexisting biases.

But maybe I’m overlooking something? Are there reasons to prefer theories closer to the manifest image? Is there a danger in excessive skepticism as Allori worries? Or is preferring a more complete accounting itself still privileging certain intuitions over others?