
Multiverses and sleeping beauty

Alastair Wilson interviewed by Richard Marshall.

Alastair Wilson is a Vulcan somewhere else in the multiverse. He thinks about what a metaphysics of science might be and never stops contemplating the Everettian multiverse, which he thinks is one of the most beautiful ideas in the history of science. It’s a theory that he thinks shows physicists to be less conservative than philosophers. He looks at the philosophical puzzles connected with it, criticises overlapping worlds, is puzzled by questions of identity criteria, thinks Sleeping Beauty has an important connection to the theory, is less sure about crystal balls and indecisive Gods, thinks it is definitely science and can’t be junked, thinks the laws of nature are metaphysically necessary, has deep thoughts on quiddities, and has things to say about the spats between metaphysicians and scientists. This one boldly goes…

3:AM: What made you become a philosopher?

Alastair Wilson: As a boy I wanted to be a physicist and come up with the Theory of Everything. But I took an undergraduate degree in physics & philosophy and turned out to be better suited to the philosophy side. Jeremy Butterfield and Bill Newton-Smith were strong initial influences; they encouraged me to follow up on my ideas about quantum mechanics and modality. Oxford was (and is) a very rich and exciting environment in which to study philosophy of physics, and I tried to absorb as much as possible from people like Harvey Brown, Simon Saunders, David Wallace, Oliver Pooley, Frank Arntzenius, Antony Eagle, Chris Timpson, Cian Dorr, Hilary Greaves, Dennis Lehmkuhl and Eleanor Knox. I stayed on to do the B.Phil and D.Phil (supervised by Simon Saunders and John Hawthorne) before spending a fantastic couple of years in Australia as a postdoc at Monash and then moving to Birmingham.

3:AM: One of the things you might be doing is a metaphysics of science, but there are difficulties with defining what this would be. Stephen Mumford and Matthew Tugby had a go recently and you had your doubts about their stab at it. So why the doubts and how might you define it – or would you decline the offer of even trying?

AW: Defining fields or sub-fields isn’t an activity at which philosophers have a great track record. Even when a definition isn’t obviously extensionally inadequate, it’s a bit unclear what we stand to gain. One worry I had about the Mumford-Tugby definition was that it risked excluding empiricist approaches. Many self-identified practitioners of metaphysics of science have been anti-Humeans, but it would be unnecessarily restrictive to set disciplinary boundaries so as to exclude Humean perspectives that deny objective modality in nature. We’ve learned a lot about the interaction between metaphysics and science from Humeans like David Albert, Barry Loewer, John Norton, and John Roberts.

I have mixed feelings about the term ‘metaphysics of science’ itself: an overly proscriptive attitude can easily be read into it. The late great E.J. Lowe maintained that it should be the philosopher’s work to provide a metaphysical foundation for science, a (necessary and a priori) framework within which (contingent and a posteriori) physical theories can be coherently formulated. That philosophy-first stance definitely isn’t something I want to endorse.

If I had to define the metaphysics of science, I’d probably go for a boring definition in terms of other sub-disciplines: it’s any work that engages both with topics historically studied as part of metaphysics and with topics studied by general philosophy of science or by the philosophies of the special sciences. That covers everything covered by the Mumford-Tugby definition, as well as (for example) work on the nature of space and time or on quantum entanglement, and it’s a bit less of a mouthful.

3:AM: One of the things you’ve dedicated yourself to contemplating is the Everettian multiverse. Before we go further into your thoughts can you sketch out what this is, how it solves Schrödinger’s cat puzzle and other issues? Is it science? And why do you think scientists like it more than philosophers?

AW: The Everettian multiverse is one of the most beautiful ideas in the history of science. If it’s wrong, at least it’s gloriously, elegantly, ambitiously wrong. My approach to Everettian quantum mechanics (EQM) builds directly upon that of the ‘Oxford Everettians’ – David Deutsch, Simon Saunders, David Wallace, and Hilary Greaves. Wallace’s presentation of the view has become canonical, and any seriously interested readers should start by ignoring me and reading his lovely book The Emergent Multiverse.

EQM involves a straightforward scientific realist attitude to the quantum-mechanical formalism. The best explanation of the empirical success of quantum mechanics is that it’s tracking some real structure in the world; and if the theory seems to describe superpositions of macroscopic states, we should at least explore the possibility that there are superpositions of macroscopic states. Everett’s remarkable idea was to reconcile macroscopic superpositions with the ‘manifest image’ by interpreting superpositions not as indeterminacy but as multiplicity. Instead of a single cat in an indeterminate state – half-alive-half-dead – we have two cats each in a determinate state – one alive and one dead.

Is it science? I don’t see why not, at least if our theories about quarks and gluons and about gravitational waves and about galaxies beyond the visible universe count as science. Such things are all posited on the basis that they allow for theories with superempirical theoretical virtues; they’re testable indirectly through the empirical generalizations that they help to elegantly explain.

There are lots of reasons to worry about EQM, some of them worth taking seriously. Amongst the interesting objections are the various aspects of the probability problem (on which more below) and concerns about emergent ontology and lack of determinate identity conditions for worlds. Technical difficulties (the ‘preferred basis problem’) which loomed large in older discussions have largely been resolved by decoherence theory, though this remains pretty controversial. To my mind the less interesting objections include ontological extravagance and departure from common sense. If we can solve the probability and ontology problems in satisfactory ways, then objections based on intuition or aesthetics aren’t going to carry much weight.

I think philosophers are generally a more conservative bunch than physicists. Modulo concerns about falsifiability, physicists are often quite open to radical hypotheses of all kinds. Philosophers tend to be more reluctant to accept radical metaphysics, at least unless they’ve thought it up themselves.

3:AM: You consider some of the philosophical puzzles associated with it. Simon Saunders and David Wallace have come up with a semantics of the multiverse that maintains some of our ordinary talk about things like time and classical logic’s bivalence, but it doesn’t preserve all our common sense talk, does it? Can you explain the problem?

AW: The aim is to make sense of how probability works in the context of EQM. The problem is that we want to ascribe probabilities to contents concerning the future, and it is not clear how to make sense of these contents as sayable or thinkable by agents embedded in Everettian multiverses. If reality is about to branch, with one branch seeing Up and the other Down, isn’t it the case that both Up and Down will occur? If so, how can we attach probabilities other than 1 or 0 to the outcomes of any future quantum events?

Saunders and Wallace exploited a Lewis-style worm-theoretic account of personal identity to give a semantics for utterances about future branching events. So in a branching event we have two continuant spacetime worms, which share common segments prior to the branching. One worm continues into the Up branch and one continues into the Down branch. In my mouth, ‘I’ refers to one of these worms but I can’t know which. Probabilities are then attached to self-locating contents: the probability that I will see Up is the probability that my worm is the one which extends into an Up branch. However, this solution doesn’t generalize to propositions about future events occurring after the agent’s own death: in that case the agent’s worm doesn’t extend into either branch. We need to say something a bit more general.

3:AM: In order to understand the multiverse you propose a metaphysics of non-overlapping worlds, don’t you? Can you explain what this is?

AW: In informal or popular discussions, people use two metaphors pretty much interchangeably to describe the Everettian multiverse: branching worlds and parallel worlds. Of course, these two metaphors are in tension: the former suggests mereological overlap of worlds, like a branching tree, whereas the latter suggests mereological distinctness of worlds, like a packet of spaghetti. I’ve suggested taking this distinction seriously, and thinking of the multiverse in parallel-world terms. Instead of literally having parts in common, worlds simply resemble one another up to some time and differ thereafter; this is a picture which David Lewis called ‘divergence’. It recovers an indexical (self-locating) fact-of-the-matter about the future, which provides the right sorts of contents for probabilities to attach to.

One essential feature of Oxford-style EQM is that the multiverse is non-fundamental: it’s a high-level, emergent macroscopic structure that depends (somehow) on the nature of the fundamental quantum state. I think this feature gives us licence to adopt a parallel-world understanding of EQM on the basis that it resolves certain conceptual problems with probability. We only need to make a minor tweak in our interpretation of the consistent-histories quantum formalism to obtain a diverging picture: we need to interpret one element of the formalism as representing property-types rather than property-tokens. The physics works out just the same, but now there are indexical contents available for probabilities to attach to: contents concerning what will happen in an agent’s own complete big-bang-to-end-of-time world. As in David Lewis’ modal realism, contents concerning nomically contingent matters of fact are reconceived as essentially indexical in nature. This means we can use an Everettian ontology to do much of the reductive work in metaphysics for which Lewis used his modal realism. But that’s another story, one which I’m currently wrestling into book form.

3:AM: Must there be determinate identity criteria for the many universes, and can they be provided if we do need them?

AW: Good questions – I don’t know the answer to either. If Everett is to be a ‘pure interpretation’, without any gratuitous additions unmotivated by the physics, then it looks like we need to be able to do without any such precise criteria. Whether this is problematic depends on how we think about the emergence of the worlds from the fundamental underlying quantum state. David Wallace has argued that this kind of imprecision is ubiquitous in higher-level scientific ontology, so that EQM is at least in good company. I suspect matters aren’t so simple – the emergence of the classical from the quantum is certainly very different from any previously encountered emergence phenomena – but I don’t have a settled view.

3:AM: Does an Everettian universe have no probability? And what does Sleeping Beauty have to do with anything?

AW: Sleeping Beauty has an important connection to how EQM gets confirmed. Put yourself in the position of someone wondering if EQM is correct, and about to perform a quantum-mechanical experiment. You reason that if EQM is correct, there will be two different outcomes, each seen by a single agent, while if a traditional one-world indeterministic theory is true instead then there will be a single random outcome seen by a single agent. This is very like the predicament of Sleeping Beauty, who knows that if the coin lands Tails there will be two wakings, while if it lands Heads there will only be a single waking.

The standard ‘thirder’ solution to Sleeping Beauty counts sleeping and then awakening as providing Beauty with evidence for Tails. The analogous move in cases of Everettian confirmation counts *merely performing* any quantum experiment as providing an agent with evidence in favour of EQM. As Darren Bradley nicely puts it: “The ancients could have worked out that they have overwhelming evidence for many-worlds simply by realizing it was a logical possibility and observing the weather.” Something’s gone badly wrong here, but it’s not a problem for Everettians per se: it’s a problem for anyone who doesn’t want to rule out multiverse scenarios a priori.
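The long-run-frequency intuition behind the thirder view can be sketched with a small simulation (a minimal illustration, not part of the interview; the function name is hypothetical). Heads produces one waking, Tails produces two, so across many repetitions two-thirds of all wakings occur in Tails runs – which is the thirder’s credence in Tails upon waking.

```python
import random

def tails_waking_fraction(trials=100_000, seed=0):
    """Simulate repeated Sleeping Beauty experiments and return the
    fraction of all wakings that occur in Tails runs.
    Heads -> one waking; Tails -> two wakings."""
    rng = random.Random(seed)
    tails_wakings = 0
    total_wakings = 0
    for _ in range(trials):
        if rng.random() < 0.5:      # coin lands Heads: one waking
            total_wakings += 1
        else:                        # coin lands Tails: two wakings
            tails_wakings += 2
            total_wakings += 2
    return tails_wakings / total_wakings

# Tends to 2/3 as trials grows: the thirder's credence in Tails.
print(tails_waking_fraction())
```

Whether this long-run frequency is the right guide to Beauty’s credence on any single waking is, of course, exactly what halfers dispute.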

3:AM: And an imperfect crystal ball?

AW: That’s Darren Bradley’s thought experiment. It’s intended to show that we can get inadmissible evidence even in cases without any supernatural processes being active. (Inadmissible evidence is a technical term introduced by David Lewis; it’s evidence that bears on the truth-value of a chancy proposition other than by providing information about its chance.) Establishing that inadmissible evidence need not be spooky would help to defuse a common thirder objection to ‘halfers’: that the halfer view commits Beauty to having 2/3 credence that a fair future coin toss will come out Heads.

I think the response doesn’t work, because the crystal ball case still relies on supernatural processes (albeit merely counterfactual ones). In contrast, the inadmissible evidence which halfers say that Sleeping Beauty gets doesn’t rely on any supernatural process, actual or counterfactual.

3:AM: And an indecisive God?

AW: I diagnose the difference between Everettian confirmation and Sleeping Beauty as tied up with the role of chance. The relevant disanalogy between the cases is that in Sleeping Beauty it’s a chancy matter how the coin lands, but in Everettian confirmation it’s not a matter of chance whether EQM is correct. Noting this disanalogy permits us to adopt the thirder solution to Sleeping Beauty without introducing problematic automatic confirmation of EQM.

Darren Bradley points out that the analogy can be restored by considering an alternative cosmological scenario. If the hypothesis we’re considering is that God chose between creating one indeterministic world and creating an Everettian multiverse via a chancy coin-toss, the automatic confirmation effect is reintroduced. Darren thinks it’s absurd that God’s decision procedure should make a difference to the confirmation of one-world vs many-world hypotheses. In contrast, I think it’s exactly what we should have expected. If God were to repeat the chancy world-creation process many times, then the overwhelming majority of observers would end up being in Everettian multiverses rather than in stochastic worlds.

3:AM: The weird thing about the Everettian multiverse is that although many physicists like it, and popular culture too, it does seem to hover on the brink of the incredible. Why do scientists embrace it given that it exceeds any possible empirical testing? Why not just junk these other universes like some philosophers recommend we do with Kant’s unknowable things in themselves, or God?

AW: EQM is surprising, certainly. But we can’t just junk the other universes without undermining the explanatory power of quantum mechanics or introducing additional physical processes like wavefunction collapse. If EQM is right, then we gain evidence for the existence of other worlds whenever we observe quantum-mechanical interference effects. In that sense, it doesn’t exceed possible empirical tests.

3:AM: Wallace’s approach to the multiverse you call ‘minimalist.’ What have others added that he leaves out, and are you sympathetic with what he does? I guess this is about what criteria he applies for cutting the theory down to the bone and why it stops where it does.

AW: The main things people have tried to add to EQM are i) determinate identity conditions for worlds and ii) extra ontology to provide subject-matters for probability. To take a couple of examples from the 1980s, Deutsch once proposed introducing an ‘interpretation basis’ of quantities with determinate values as a basic ingredient of the theory, and Albert and Loewer proposed adding an infinity of immaterial stochastically-evolving minds. Wallace (following Saunders) rejects any such additions and aims for a pure interpretation of the basic unamended quantum formalism: the equations applied in labs and taught in physics departments. Bryce DeWitt once said, echoing some remarks of Everett’s, that “the mathematical formalism of the quantum theory is capable of yielding its own interpretation”. I think this is a bold and beautiful line of thought, and I really hope it can be made to work.

3:AM: Do you think that the laws of nature are metaphysically necessary – that they apply in all possible worlds or is this just an option and contingentism might be the truth? Don’t many physicists think more in terms of contingentism, and shouldn’t that be decisive? Or aren’t they aware enough of the metaphysics to have an opinion?

AW: I’ve defended a version of necessitarianism in print – it’s what Jonathan Schaffer has called modal necessitarianism, the view that the laws of the actual world are the laws of all possible worlds. Of course, I’m not 100% certain that modal necessitarianism is correct – so certainly in that sense, it might be false. But if it’s true, it’s necessarily true; in which case, our contingentist intuitions are misleading. We can conceive of different laws obtaining, of course, but necessitarians characteristically deny that conceivability entails possibility.

Physicists certainly think about various hypotheses that run contrary to the actual laws. If necessitarianism is correct, then these hypotheses are necessarily false. But that doesn’t stop us thinking about them, any more than it stops logicians and mathematicians entertaining necessarily false logical or mathematical propositions. Once we keep epistemic possibility carefully apart from genuine possibility, there’s no real difficulty for necessitarianism here.

3:AM: What’s quidditism and are you a skeptic about it, and why if you are isn’t this just a rehash of the more familiar and by now weak-ass external world skepticism, as Jonathan Schaffer argues?

AW: Quidditism is an anti-structuralist thesis about properties. It says, roughly, that there’s a fact of the matter about which properties are which, over and above any fact of the matter about which properties do what. I like the characterization of the view given by David Lewis:

‘quidditists are those who hold that there are pairs of possible worlds which differ only with respect to which properties play which nomic roles.’ So according to quidditism, mass could have played the charge role and charge could have played the mass role, with all else left unchanged. If you’re an anti-quidditist, you’ll deny these possibilities and will probably also say things like ‘properties are individuated by their nomic roles’.

Anti-quidditism can also be characterized as scepticism about quiddities – individuating properties of properties (suchnesses) that are analogous to haecceities (thisnesses) for individuals. I don’t think this way of putting it is particularly helpful, because it can too easily be assimilated to sceptical arguments. While I think there are grounds to resist that assimilation, I prefer to motivate anti-quidditism by a methodological line of thought, along lines indicated by John Hawthorne. The question is whether merely quidditistic differences between worlds do enough theoretical work to justify our recognizing additional structure within the space of genuine possibility. This ties in to some foundational questions in the metaphysics of modality. Is a theory which recognises the extra quidditistic possibilities more complex (because it involves extra possibilities) or simpler (because it involves fewer constraining necessities)?

3:AM: There’s fighting talk in some quarters of science that science will answer all the questions about what exists and what’s real. Metaphysicians kick back with talk about fundamentals and groundings for the possibility of science. Why should we heed the philosophers in the face of the undoubted power and success of scientific theories?

AW: I’m broadly on the side of the scientists and naturalized metaphysicians. It’s very dangerous for metaphysicians to think of their work as some kind of prerequisite for or prolegomenon to any future science. However, science is already suffused with the ideology of fundamentality and explanation, and we do need to make sense of that; it may well be that tools and arguments developed in metaphysics will be useful for this job. At the very least, there’s plenty of work to do in highlighting bad amateur metaphysics done by scientists, and in suggesting ways to do it better. I’m thinking in particular here of Huw Price’s critique of Stephen Hawking and of David Albert’s critique of Lawrence Krauss. Then again, and ironically, the model of a philosophical critique of a physicist is by a physicist: John Bell’s critique of Niels Bohr’s Copenhagen interpretation.

3:AM: And for the readers here at 3:AM are there five books you could recommend that would take us further into your philosophical world?

AW: Ontological Relativity and Other Essays by WVO Quine is the first great work of modern metaphysics.

How the Laws of Physics Lie by Nancy Cartwright is a delightful book full of powerful ideas and vigorous arguments. I disagree with almost everything in it, but have learned a lot from it.

On the Plurality of Worlds by David Lewis is the other great work of modern metaphysics. Delightfully written and I think right about astonishingly many things.

Time and Chance by David Albert / Time’s Arrow & Archimedes’ Point by Huw Price – I couldn’t decide between these two fabulous treatments of the problem of the arrow of time. Read them both, in this order.

The Emergent Multiverse by David Wallace – A canonical treatment of everything Everettian.


ABOUT THE INTERVIEWER
Richard Marshall is still biding his time.


First published in 3:AM Magazine: Friday, May 30th, 2014.