## How Pragmatism Reconciles Quantum Mechanics With Relativity etc

Interview by Richard Marshall.


Richard Healey is the pragmatist philosopher of physics who thinks there’s a need to interpret quantum mechanics, that none of the standard interpretations is good enough, that the idea of a nonseparable world helps, and that a pragmatist approach is the way to go. He discusses how to dispel the Feynman mystery, the paradox of Wigner’s friend, how to reconcile quantum mechanics with relativity, whether his approach is realist or instrumentalist, whether quantum mechanics makes ontological claims, time, quantum nonlocality and Dr Bertlmann’s socks, and getting free of the prejudices we call common sense. Go figure…

**3:AM:** Why did you become a philosopher?

**Richard Healey:** I became a philosopher when I came to realize that this was the best way to make a contribution while pursuing my passion. As a young teenager I became fascinated by the modern physics I read about in semi-popular books and newspaper reports. Trying to understand the world, I was naive enough to think that knowledge of what it was like at a fundamental level was the key to understanding everything else about it! I was also eager to know how people could possibly have found out the things I read about. I endured high-school physics as a necessary evil if I wanted to achieve the deeper understanding I looked forward to at university. Then I heard about a new joint honours degree in Physics and Philosophy at Oxford whose description seemed tailor-made for my interests. I knew about philosophical thinking from my elder brother, who had studied Greats at Oxford (Latin and Greek languages, ancient history, and philosophy—modern as well as ancient). So that’s where I went for my B.A. I wasn’t yet committed to becoming a philosopher, and realized I still didn’t know enough fundamental physics. So I went to Sussex University to take a Master’s degree in theoretical high energy physics. I realized during my year at Sussex that creative work in theoretical physics requires unusual talent and an ability to immerse oneself in a very narrow subject with no guarantee of success. I didn’t want to pursue that path. I was seeking understanding on a broader range of topics, pursuing what Wilfrid Sellars took as the aim of philosophy—to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term. More specifically, I wanted to understand quantum theory: to struggle with its conceptual problems and to explore its broader implications. So both talents and temperament led me into philosophy, and to Hilary Putnam and W.V.O. Quine at Harvard, where I took my Ph.D.

**3:AM:** You’re an expert in the philosophy of quantum mechanics, amongst other things. Some physicists have said that there’s no need to interpret the theory – it is what it is and we should just use it and develop it. After all, disputes about how to use it tend to be short-lived and consensus reached. So why do you think there is a need to interpret the theory, and why is it so hard?

**RH:** Quantum theory comes in many forms, including the non-relativistic and relativistic quantum mechanics of particles as well as Lagrangian and algebraic quantum field theory: but it is common to lump these all together and call them quantum mechanics. There is a general consensus that any fundamental theory will be some form of quantum mechanics. One expects a fundamental theory to be capable of precise formulation and to say what the world is like at the deepest level. But quantum mechanics confounds these expectations. In the words of the physicist John Stewart Bell: “The problem is this: quantum mechanics is fundamentally about ‘observations’. It necessarily divides the world into two parts, a part which is observed and a part which does the observing. The results depend in detail on just how this division is made, but no definite prescription for it is given. All that we have is a recipe which, because of practical human limitations, is sufficiently unambiguous for practical purposes.” He contrasted quantum mechanics unfavorably with classical mechanics in this respect: “In classical mechanics we have a model of a theory which is not intrinsically inexact, for it neither needs nor is embarrassed by an observer.” We could solve Bell’s problem by replacing his ambiguous recipe with a precise formulation of quantum mechanics and showing exactly how this should be applied. Bell’s model for a solution would also be a theory that tells us what the world is like: in his terms, it would be a theory of beables, not observables.

What an interpretation of quantum mechanics must do is to go beyond the recipe that is good for all practical purposes to achieve a precise formulation of the theory without using vague terms like ‘observer’ and ‘measurement’ and to show how this formulation may be successfully applied. It is not obvious how we could do this except by reformulating quantum mechanics as a theory of beables we could use to describe or represent the world at a fundamental level. Quantum mechanics talks of observables and quantum states (wave-functions), but severe technical and conceptual difficulties arise if one attempts to certify either of these as beables. Attempts to view the quantum state as a beable lead to the notorious measurement problem, while a variety of “no-go” theorems block the attempt to view observables as beables. That is why interpretation has proved so difficult.

**3:AM:** There are many interpretations aren’t there – the orthodox Copenhagen and its rivals – the Everettian interpretations, a naïve realist interpretation, a quantum logical interpretation of the theory and so on. Are none of these interpretations good enough for you?

**RH:** No. My early research convinced me that naive realism—essentially the attempt to portray observables as beables—would not work, and I think there is now general agreement on its failure. Quantum logic might be considered a last-gasp attempt to revive naive realism, but despite its interest for the philosophy of logic it never seemed promising as a way to understand quantum mechanics. There are as many versions of “the” orthodox Copenhagen interpretation as there are proponents: Bohr’s version is in many ways the most interesting, but it is quite different from the version due to Dirac and von Neumann that is often taught to students as orthodoxy. I think Bell put his finger on the main problem with “the” Copenhagen interpretation: it presupposes a notion of measurement without the resources to specify clearly what constitutes a measurement. Everettian interpretations were always popular among cosmologists and are currently enjoying a resurgence among my philosophical colleagues, but I remain skeptical. Despite their elegant solution to the measurement problem and the issue of non-locality, Everettians have yet to convince me that they can make sense of a notion of probability applicable to a deterministically branching multiverse. By challenging views of probability, self-location and materiality, the current decision-theoretic approach due to Deutsch and Wallace does raise fascinating philosophical questions. But I have yet to be convinced by their answers to these questions, and (in my view) it is mistaken to view the universal wave-function as a beable.

**3:AM:** What do you think a successful interpretation of the theory has to achieve?

**RH:** I touched on this question already. A successful interpretation must explain how quantum mechanics may be formulated as a precise physical theory and unambiguously applied to real-life physical situations. My present view is that this can be done without recasting it as a theory of beables, in which case quantum mechanics will not itself describe or represent the physical situations to which it is applied. But by applying quantum mechanics we become better able to describe and represent those situations in non-quantum terms. I say ‘non-quantum’ rather than ‘classical’ to acknowledge that the progress of science naturally introduces novel language to describe or represent the world (Bose-Einstein condensate, Mott insulator, quark-gluon plasma). My point is that characteristically quantum terms like ‘quantum state’, ‘observable’, ‘Born probability’ do not represent beables. So I no longer agree with those philosophers who believe that a successful interpretation of quantum mechanics has to say how the world could possibly be the way quantum mechanics says it is.

Any interpretation has to address a number of long-standing conceptual puzzles, including the measurement problem (including Schrödinger’s cat), the problem of non-locality and the problem of Wigner’s friend. I say address rather than solve, because my present view is that these are problems to be dissolved by showing they never arise in the first place if one adopts the right view of quantum mechanics. They are symptoms of a mistaken understanding of the theory.

**3:AM:** Your views about how to interpret the theory have evolved since your 2009 book. Can you sketch what your initial interpretation of a nonseparable world looked like?

**RH:** My interest in gauge theories leading up to my book *Gauging What’s Real* emerged from the attempt to extend to quantum field theory an interpretation of non-relativistic quantum mechanics developed in my first book *The Philosophy of Quantum Mechanics: an Interactive Interpretation*. A key idea of that earlier book was that a compound system like a pair of hydrogen atoms formed by dissociation of a hydrogen molecule could have holistic properties over and above those it inherited from properties of its components. An example would be a property whose best expression in English is having oppositely directed spins—a property of the pair even when neither atom actually has a determinate spin! To describe the history of such a pair one would have to ascribe properties to a region of space(-time) that were not determined by properties of its constituent points. I called that non-separability, and saw it as important to reconciling the “quantum non-locality” involved in violations of Bell inequalities with relativity. This same non-separability would also occur even for a single particle like an electron if (as I thought) its position were not restricted to a point of space at each moment.

Another puzzling phenomenon (the Aharonov-Bohm effect) seems to manifest a quite different sort of non-locality: the interference pattern formed when electrons pass by a long, thin solenoid (a current-carrying wire tightly coiled around a long, thin cylinder) depends on the magnetic flux through the cylinder even though the electrons never experience any (electro-)magnetic field in the region through which they pass. I came to think of the AB effect and violations of Bell inequalities as different manifestations of the same phenomenon—non-separability. The idea was that in neither case was there any action at a distance: there didn’t need to be, because what was acted upon (the particle pair, or the electron) was itself not spatially localized. Moreover, in the AB effect what acted was itself not spatially localized. Let me explain. Even in the absence of electric and magnetic fields in the region outside a very long, thin solenoid, classical electromagnetic theory posits spatially non-localized structures called holonomies. The holonomy of each closed curve encircling the solenoid once is proportional to the magnetic flux inside the solenoid, and the location on a detection screen of the fringes manifested by interference of electrons passing by the solenoid is a simple function of these holonomies. But a holonomy is a property of a closed curve that is not determined by any properties of the points that make it up. If the passage of an electron by the solenoid were a non-separable process, then it might interact locally with the non-separable holonomies.
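The flux dependence described above can be illustrated with a back-of-the-envelope calculation. The sketch below is my own toy example, not Healey's formalism (the function names are invented for the illustration): the relative phase between paths passing on either side of the solenoid is proportional to the enclosed flux, so the fringes shift even though the electrons traverse a field-free region.

```python
import math

# Illustrative sketch of the Aharonov-Bohm fringe shift (my example):
# the relative phase between the two paths around the solenoid is
# (e/hbar) times the enclosed magnetic flux.
HBAR = 1.054571817e-34      # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C

def ab_phase(flux_weber):
    """Extra phase difference between the two paths, from the enclosed flux."""
    return E_CHARGE * flux_weber / HBAR

def fringe_intensity(path_phase, flux_weber):
    """Two-path interference intensity at a screen point whose geometric
    path-length difference contributes path_phase."""
    return math.cos((path_phase + ab_phase(flux_weber)) / 2) ** 2

# A flux of h/2e shifts the phase by pi: bright and dark fringes swap,
# although no local field ever acts on the electrons.
half_flux_quantum = math.pi * HBAR / E_CHARGE
center_no_flux = fringe_intensity(0.0, 0.0)                  # bright
center_with_flux = fringe_intensity(0.0, half_flux_quantum)  # dark
```

The point of the sketch is only that the observable pattern is a function of the flux, i.e. of the holonomies, and of nothing locally present along the electrons' paths.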

What does this have to do with quantum field theory? It is customary to introduce the AB effect the way I did as a phenomenon within the scope of classical electromagnetic theory and non-relativistic quantum mechanics. But while neither theory can be considered fundamental, each is naturally viewed as an ancestor of the quantum field theories of the Standard Model of high energy physics—currently our most successful fundamental theories. Classical EM was the first gauge theory, and non-relativistic quantum mechanics was the first quantum theory. The thought that prompted the investigation leading to my book *Gauging What’s Real* was that one might come to understand the ontologies of quantum gauge field theories as non-separable. I was encouraged in this thought when I found that some physicists advocated so-called loop representations of these theories. These looked like promising candidates for a formal implementation of a holonomy interpretation. Philosophers continue to puzzle over the ontology of quantum field theories: are they about particles or fields or something else entirely? I had banged my head against that problem for several years in the 1990s: but now I hoped to solve it through an ontology of holonomy properties. The hope was that we could come to see the world as non-separable at a fundamental level—that it ultimately consisted of space-time regions bearing non-separable holonomy properties of various kinds interacting locally with one another in a way that could be represented by quantum field theories.

But what actually emerged in the book was much less. I still think a holonomy interpretation of classical gauge theories including electromagnetism is viable and to be preferred in the context of non-relativistic quantum mechanics. But I came to realize that there is no obvious way to extend this to the quantum gauge field theories of the Standard Model. Moreover, the main barrier to its extension was quantum theory! By this time a number of problems had surfaced for my old interactive interpretation even of non-relativistic quantum mechanics. Even though these did not strike me as fatal, responding to them threatened to become a project of adding epicycles that made the view more and more baroque and less and less likely to be extendable to quantum field theory. In *Gauging What’s Real* I attempted to remain neutral on how to interpret quantum mechanics. Afterwards I began seriously rethinking my views, stimulated by extended visits to Anton Zeilinger’s experimental quantum optics and information institute in Vienna and to the Perimeter Institute for Theoretical Physics in Waterloo, Ontario.

**3:AM:** You’ve since then become interested in a pragmatist approach, haven’t you? But given that from that position meaning comes from use, and everyone agrees about how to use the theory, don’t you face a huge problem right from the get-go with this approach?

**RH:** Good question! Part of my answer was foreshadowed by my answer to 2. As Bell made clear, it is only at a superficial level that everyone agrees how to use the theory. If one probes deeper one realizes that there are actually different ways to apply quantum mechanics to a situation and the results depend in detail on how one chooses to apply it. Bell’s recipe—“treat as much quantum mechanically as you need to, so that treating more quantum mechanically wouldn’t make a significant difference”—is (as he stresses) vague and depends on a value judgment by the user of the theory. Experimentalists have no difficulty in making that judgment on a daily basis. But a theoretician (or philosopher) with a conscience must acknowledge that a theory that cannot be stated without a prior judgment on what matters in practice falls short of a precisely formulated scientific theory. Moreover, such a theory can never deliver a single, consistent story of what the world is like at a fundamental level. This would not be so bad if there were in principle a right way to apply the theory, though we could never do this in practice because of the intractable complexity of theory and world. But Bell’s point is that the structure of quantum mechanics itself implies there is no right way to apply it—it is intrinsically inexact.

Now let me back up a bit, since as a pragmatist I don’t entirely agree with Bell here. He thinks classical mechanics did not face this problem since “at least one can envisage an accurate description of the world in terms of classical mechanics”. I don’t think one can. Classical mechanics makes available a vast collection of mathematical models of increasing complexity (containing more and more particles spread throughout the universe and interacting in a welter of ways). One applies classical mechanics by choosing a model and taking it to represent a physical situation. Our world is so huge and complex that any model capable of accurately representing it would be so far beyond human cognitive resources that we could not use it. But only in use does a mathematical model represent anything. So we cannot envisage an accurate description of the world in terms of classical mechanics. All we can do is develop better and better inaccurate models to serve particular descriptive, predictive and explanatory purposes.

The same thing is true in quantum mechanics. By treating more and more quantum mechanically we can get better and better predictions, but also better and better descriptions and explanations. We use quantum models when representing physical situations even though no quantum model is itself used to represent a physical situation. We use them to make descriptive non-quantum claims that figure in predictions and explanations. By treating more and more quantum mechanically we can make those descriptive claims better and better. We can think of this as an improvement in accuracy, but not if we think of increased accuracy as improved approximation to the one true description of the world.

This is where the pragmatism about meaning comes in. The content of a non-quantum claim accrues to it through its inferential links to other claims, and ultimately to perception and action. By treating more and more quantum mechanically we become able to describe the world in non-quantum terms by claims that have a fuller and richer content, as manifested by the increased number of reliable inferences they license. We can represent the content of each of these claims truth-conditionally, but only trivially since we have no independent descriptive language to fall back on. Increased accuracy cannot be understood as better and better approximation to any truth that we can express—in either quantum or non-quantum terms. Since claims in classical mechanics get their content through their inferential links also, descriptive claims based on quantum mechanics are no less precise in their content than those based on classical mechanics.

**3:AM:** So how does the pragmatist approach dispel the Feynman mystery which lies at the theory’s heart?

**RH:** Feynman located the mystery already in a two-hole interference experiment with many individual particles—he chose electrons. Focusing attention on the proposition (A) that each electron either goes through hole 1 or it goes through hole 2 [and not both], he rehearsed a familiar argument with the (false) conclusion that no interference fringes will appear in the statistical pattern of localized “hits” registered by the electrons on a detection screen placed behind the holes. Two other patterns with only one of the holes open may be recorded, each in a separate experiment: neither experiment produces a pattern with interference fringes. Assume (A). If an electron goes through hole 1 then it will behave the same way whether hole 2 is open or not, and an electron going through hole 2 will behave the same whether or not hole 1 is open. So the pattern on the screen in the original experiment with both holes open must be formed by combining the results of two other experiments: the pattern with hole 2 closed and the pattern with hole 1 closed. The pattern formed by combining these two patterns also displays no interference fringes. But the actual pattern formed by the electrons with both holes open does display interference fringes. So (A) must be false. Now any apparatus capable of detecting through which of the two open holes an electron has just passed in the experiment never detects anything but an entire electron just behind one hole or the other. So (A) is true of all observed electrons. But as the sensitivity of such an apparatus is increased the interference fringes disappear. (A) cannot be checked experimentally without destroying the interference pattern! Here is what Feynman concluded from his analysis of the two-hole interference experiment:

“…if one has a piece of apparatus which is capable of determining whether the electrons go through hole 1 or hole 2, then one can say it goes through either hole 1 or hole 2. [otherwise] one may not say that an electron goes through either hole 1 or hole 2. If one does say that, and starts to make any deductions from the statement, he will make errors in the analysis. This is the logical tightrope on which we must walk if we wish to describe nature successfully.”

But how could the absence of a piece of apparatus revoke one’s right to free speech? Presumably although one can assert (A) in any circumstances, Feynman’s advice was that one should do so only when the apparatus is present, because only then is (A) meaningful and conducive to correct inferences. But the mystery remains: How can the presence of a piece of apparatus render (A) both meaningful and correct, and what exactly is meant by the presence of such a piece of apparatus?
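Feynman's argument can be replayed numerically in a toy model (the numbers below are arbitrary, chosen only for illustration): adding the two one-hole amplitudes and then squaring produces fringes, while adding the two one-hole probabilities, as proposition (A) would require, produces none.

```python
import cmath
import math

# Toy two-hole model: amplitude at screen position x from a hole at
# height hole_y, for a wave of unit wavelength (all units arbitrary).
WAVELENGTH = 1.0
K = 2 * math.pi / WAVELENGTH   # wavenumber
HOLE_SEP = 5.0                 # separation between the holes
SCREEN_DIST = 100.0            # holes-to-screen distance

def amplitude(x, hole_y):
    r = math.hypot(SCREEN_DIST, x - hole_y)
    return cmath.exp(1j * K * r) / r

def both_open(x):
    """Both holes open: amplitudes add, then square -> fringes."""
    return abs(amplitude(x, HOLE_SEP / 2) + amplitude(x, -HOLE_SEP / 2)) ** 2

def proposition_a(x):
    """What (A) predicts: add the one-hole probabilities -> no fringes."""
    return abs(amplitude(x, HOLE_SEP / 2)) ** 2 + abs(amplitude(x, -HOLE_SEP / 2)) ** 2

xs = [0.05 * i for i in range(400)]
fringes = [both_open(x) for x in xs]
no_fringes = [proposition_a(x) for x in xs]
# `fringes` swings between near zero and roughly double the smooth
# `no_fringes` curve: interference that (A) cannot reproduce.
```

The contrast between the two curves is exactly the contrast Feynman's argument turns on: the combined one-hole patterns cannot match the pattern actually observed with both holes open.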

The key to answering both questions is quantum decoherence. A quantum system like an electron (or an apparatus) is very sensitive to environmental interactions. (That is why it is so hard to build a quantum computer.) The effect on a system of its environment may itself be modeled quantum mechanically, though usually only in an idealized way because of the diversity and complexity of actual environments. At least in simple idealized models, the entangled quantum state of system+environment extremely rapidly approaches, and then remains in, a special form. For many environmental interactions this form privileges the system’s position, in the following sense: by ascribing a definite (though perhaps unknown) position to the system one can make many reliable inferences about its behavior.

On a pragmatist inferentialist view of content, this helps to endow a claim about the system’s position with a high degree of content, while claims about other properties (for example its energy) lack such well-defined content. A claim like (A) is both meaningful and correct in the presence of environmental interactions modeled as privileging the electron’s position in this way. No “observer” need have set up any apparatus to exploit the electron’s interaction with the environment to detect its position by noting its effect on this environment (e.g. by the ambient light scattered from the electron). In the presence of such an environment quantum mechanics assigns a definite probability to an electron’s passing through hole 1 rather than hole 2, but no definite probability to other properties this environmental interaction does not privilege.
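A minimal caricature of position decoherence may help here (my own toy model, not anything from Healey's work): each environmental scattering event multiplies the position coherences of the electron's reduced density matrix by an overlap factor, while leaving the position probabilities untouched.

```python
# Toy decoherence model: an electron in an equal superposition of
# 'hole 1' and 'hole 2' positions, coupled to an environment whose
# post-scatter states overlap by the factor `overlap`.
def reduced_density_matrix(n_scatters, overlap=0.9):
    """2x2 reduced density matrix in the position basis after n scatters."""
    coherence = 0.5 * (overlap ** n_scatters)
    return [[0.5, coherence],
            [coherence, 0.5]]

before = reduced_density_matrix(0)
after = reduced_density_matrix(50)
# The diagonal (position probabilities) stays at 0.5 each, while the
# off-diagonal coherence decays exponentially with the number of
# scatters: this is the sense in which the environment "privileges"
# position, so that position-talk like (A) supports reliable inferences.
```

The exponential suppression of the off-diagonal terms is what makes the system behave, for all inferential purposes, as if it had a definite though unknown position.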

So the pragmatist approach dispels Feynman’s mystery not by describing an electron’s journey through the holes to the screen, but by showing how quantum mechanics itself can help us to see how much we can significantly say about the electron in different environmental circumstances, and how we should apportion our degrees of belief in contrary significant claims about it. With no position-decohering interactions before the screen, the advice is not to say anything about an electron’s route through the experiment; interaction with the screen, by contrast, licenses use of quantum mechanics in estimating where it is likely to be detected there.

**3:AM:** You also argue that Wigner’s paradox is best approached as a pragmatist. Can you sketch out the puzzle and say why the pragmatist is superior to other attempts to solve it?

**RH:** This is actually the paradox of Wigner’s friend (in a paper of mine I called the friend John, after Eugene Wigner’s fellow high-school student in Budapest, John von Neumann). It goes like this. Imagine Eugene’s friend John conducting a quantum measurement on something (the spin of a silver atom, say) inside a laboratory that is completely isolated from the rest of the world—by hypothesis there are no mechanical, electromagnetic or any other physical interactions between the lab and its external environment (a condition that would be completely unrealizable in practice). John observes the atom as spinning up along his chosen axis—it is detected in the upper half of his detection screen—and writes the result in his notebook. Meanwhile, Eugene remains outside the laboratory, where he is physically unable to observe what is going on inside. According to Wigner (and many others, including Dirac and von Neumann), John observes a determinate result of his measurement only insofar as the quantum state (“wave function”) of the atom (+ detection screen + notebook entry + …) ceases to be an entangled superposition and physically collapses onto one of that superposition’s components, corresponding to spin up (rather than down). But according to Eugene, who has not (yet) made any observation, the quantum state of the entire lab (including John’s notebook and John’s body and brain as well as the silver atom and detection screen) remains an entangled superposition.

So Eugene and John assign different quantum states to the lab and its contents—one representing a determinate result of John’s experiment, the other representing no definite result. If Eugene now enters the lab (inevitably interacting physically with it) and observes its contents, it is his (Eugene’s) observation that then collapses the lab’s state to produce a result of John’s measurement. When he asks John what result he obtained, John will say “spin up”. Eugene will not take this as a true report of what happened before he entered the lab, but as a physical response brought about only by his observation on entering the lab (even though Eugene’s further examination of the lab’s contents will reveal multiple “records” apparently confirming the truth of John’s report). Wigner himself (at one time) proposed to resolve this paradox by supposing that it is consciousness (and only consciousness) that collapses the quantum state. On this supposition, a collapse occurred as soon as John became aware of the result of his experiment, and Eugene simply found this out when he entered the lab—Eugene’s subsequent observation did not need to induce any further collapse.

A pragmatist dissolves the paradox by rejecting Wigner’s view that the quantum state represents the physical condition of a system to which it is assigned. Instead, relative to a specified physical situation, a quantum state provides an objective guide for any agent who might be in that situation—a guide to the significance of claims about a system, and what credence to attach to each significant claim. So quantum state “collapse” is not a physical process, but an objective constraint on updating beliefs in the light of a change in physical (and so epistemic) situation. And differently situated agents (like John and Eugene) should consistently assign different quantum states to the same system—neither of which serves to represent its physical condition. John’s measurement yields a determinate result as soon as the silver atom interacts with the detection screen, whether or not John or anyone else becomes conscious of this result. John and Eugene use their respective quantum state assignments to adjust their degrees of belief about what this result is, each in the light of all information physically available to him at the time.

**3:AM:** Does pragmatism help resolve the issue of reconciling quantum mechanics with relativity?

**RH:** Yes, in three ways. First, by adopting the pragmatist view of the non-representational function of the quantum state briefly sketched in my answer to question 8. Second, by understanding probability in terms of its role as providing an objective guide to credence (degree of belief) for a physically situated (and so epistemically limited) agent. Third, by understanding causation in terms of its role as providing an objective guide to an agent’s assessment of the chances of various possible consequences of his actions. You can see how all three ways work together in a classic example that exhibits the apparent conflict between quantum mechanics and relativity—Bohm’s version of the Einstein-Podolsky-Rosen thought-experiment. If one adopts these three pragmatist views it becomes clear why there is no conflict.

In their thought-experiment, EPR applied quantum mechanics to a pair of systems in an entangled state. Bohm considered a pair of systems whose spin states are entangled. This version is more easily realized in real experiments. Quantum mechanics (correctly) predicts that in Bohm’s entangled state, a measurement of any particular spin-component on one system is certain (probability 1) to yield the opposite result to a measurement of the same spin-component on the other system. It also (correctly) predicts that a measurement of spin-component with respect to any axis on either particle alone has probability ½ of a spin up outcome and probability ½ of a spin down outcome.

Consider a case in which a spin-component with respect to some axis is measured on each particle in a Bohm pair when the particles are far apart. Suppose the decision as to which spin-component is to be measured on each particle is made independently, randomly, and immediately before the measurements are carried out, by adjusting the axis to which each apparatus is set. If the settings and measurements occur far enough apart in the two wings of the experiment then not even light could travel between either the setting or the measurement event in one wing and either setting or measurement event in the other wing (in terms of relativistic space-time structure, these events at one wing are space-like separated from the corresponding events at the other wing). Experiments like this have been done, and their results bear out the quantum predictions, not only for the perfect (anti-)correlations when the apparatuses in both wings are set to measure spin-component with respect to the same axis, but also for the magnitude of the less-than-perfect correlations when these axes differ by specific angles.
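The quantum predictions just described are easy to tabulate. The sketch below uses the textbook singlet-state formulas (my illustration, nothing specific to Healey's account): it reproduces the perfect anti-correlation at equal settings, the ½–½ marginals, and a Bell-inequality (CHSH) violation at suitably chosen angles.

```python
import math

# Spin-singlet (EPR-Bohm) predictions: joint probability of outcomes
# a, b (each +1 or -1) for measurement axes separated by angle theta.
def p_joint(a, b, theta):
    if a == b:
        return 0.5 * math.sin(theta / 2) ** 2
    return 0.5 * math.cos(theta / 2) ** 2

def correlation(theta):
    """Expectation of the product of outcomes; equals -cos(theta)."""
    return sum(a * b * p_joint(a, b, theta)
               for a in (+1, -1) for b in (+1, -1))

# Same axis: opposite outcomes with certainty, yet each marginal is 1/2.
anti = p_joint(+1, -1, 0.0) + p_joint(-1, +1, 0.0)
marginal_up = p_joint(+1, +1, 0.0) + p_joint(+1, -1, 0.0)

# CHSH combination at the angles that maximize the quantum value:
# any local account obeys |S| <= 2, but the singlet gives 2*sqrt(2).
a1, a2, b1, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = abs(correlation(a1 - b1) - correlation(a1 - b2)
        + correlation(a2 - b1) + correlation(a2 - b2))
```

The less-than-perfect correlations at intermediate angles are exactly what the CHSH combination exploits, and they are what the experiments mentioned above confirm.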

Focus on the perfect anti-correlations at the same settings. Although the outcome at each wing appears to be an individually random (probability ½) event, whether a particular outcome occurs in one wing is completely determined by the outcome in the distant wing (it has probability 1 or 0, depending on that outcome). If the outcome at one wing causally determines the outcome at the other this is difficult to reconcile with relativity. Nothing in relativity breaks the symmetry between such a pair of space-like separated events to mark one as a cause of the other: in particular, neither event occurs invariantly earlier. If there is a physically asymmetric causal dependence this conflicts with fundamental relativistic (Lorentz) invariance. Fortunately, there are reasons to deny that the counterfactual dependence between the distant outcomes is causal. To see how these arise it is helpful first to consider the notion of chance in relativity.

General probabilities like those provided by quantum mechanics are useful to a physically situated agent as sources of authoritative advice about what to believe and what to do. Such advice pertains to particular, individual events the agent is not in a position to be certain about. Application of a general probability statement yields the chance of such an event, and it is this chance that is authoritative over the agent’s beliefs. David Lewis captured the constitutive connection between chance and credence in his Principal Principle, which, he said, tells us everything we know about chance. Its basic idea was to take the chance of an event as making redundant any other accessible information when rationally setting one’s degree of belief in its occurrence. Lewis took all information about the past to be (in principle) accessible. It followed that the chance of an event typically changes as more and more historical information becomes accessible, until the chance becomes either 1 or 0 at the time the event does or does not occur.

In the absence of an absolute time in relativity, the analog of the past is the space-time region encompassed by the backward light-cone of a space-time point, and the analog of its future is the space-time region encompassed by its forward light-cone. Assuming nothing travels faster than light, the information accessible at a point is confined to what happens in its backward light-cone: what happens in space-like separated regions outside its light-cone is just as inaccessible as what happens in its future light-cone. The natural adaptation of Lewis’s Principle to relativity makes the chance of an event relative not to time, but to a space-time point. This has the important consequence that two agents moving in the same way but in different places should sometimes assign different chances to the same event at the same time (relative to their state of motion).

Suppose Alice is in one wing of an EPR-Bohm experiment while Bob is in the other. Suppose also that Bob’s outcome occurs at time tb momentarily earlier than Alice’s at ta with respect to their common state of motion, even though their outcomes are space-like separated. At any time t between tb and ta the chance of Alice’s outcome being spin-up is ½ where Alice is, but either 0 or 1 where Bob is. So the question as to whether Alice’s outcome was predetermined is not well defined. The general probabilities supplied by quantum mechanics yield both Alice’s chance and Bob’s chance at t, and this is the advice each should then take when setting credences at t. Alice and Bob are offered different advice, but in each case the advice is appropriate to one so physically (and therefore epistemically) situated. To extract this advice from quantum mechanics, Bob can consult the quantum state he should assign to Alice’s particle just after tb. This state takes account of his outcome: it is updated just the way it would be if it had physically collapsed, though there was no physical collapse and nothing changed in Alice’s wing at tb. Since Alice is then not in a position to know about Bob’s outcome she cannot (and should not) assign this quantum state to her particle.
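The asymmetry of chances described here can be made concrete with a small simulation (an illustrative sketch, not part of the interview; it simply hard-codes the perfect anti-correlation at identical settings). Alice’s unconditional relative frequency of “up” sits near ½, while conditional on Bob’s outcome it is exactly 0 or 1, mirroring the two chances at t.

```python
import random

def run_epr_trials(n, seed=0):
    """Simulate n EPR-Bohm trials at identical settings.

    Outcomes are perfectly anti-correlated: if Bob gets 'up',
    Alice gets 'down', and vice versa."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        bob = rng.choice(["up", "down"])          # individually random, prob 1/2
        alice = "down" if bob == "up" else "up"   # perfect anti-correlation
        trials.append((alice, bob))
    return trials

trials = run_epr_trials(100_000)

# Alice's unconditional frequency of 'up': her chance at t is 1/2.
p_alice_up = sum(a == "up" for a, _ in trials) / len(trials)

# Conditional on Bob's 'up' outcome at tb, Alice's frequency is 0:
# where Bob is, the chance of Alice's outcome is already settled.
bob_up_trials = [a for a, b in trials if b == "up"]
p_alice_up_given_bob_up = sum(a == "up" for a in bob_up_trials) / len(bob_up_trials)

print(p_alice_up)               # close to 0.5
print(p_alice_up_given_bob_up)  # exactly 0.0
```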

Causation is linked to chance by the principle that e causally depends on f if and only if some hypothetical intervention only on f would alter the chance of e. Such an intervention need not be within the power of any actual agent: it need not even be physically possible. But to evaluate the claim that e causally depends on f one has to adopt the perspective of a hypothetical agent able to intervene and so alter f. This follows from the constitutive role of causation as a guide to action.

At t, Alice’s chance of each possible outcome of her measurement is ½ irrespective of Bob’s outcome. So no hypothetical intervention only on Bob’s outcome would change this chance. At t, Bob’s chance of Alice’s outcome is either 0 or 1, depending on Bob’s outcome at tb. No hypothetical intervention on Bob’s outcome is possible at t, since by then Bob’s outcome has already occurred. Would a hypothetical intervention only on Bob’s outcome prior to tb alter Bob’s chance at t of Alice’s outcome? This question presupposes that it makes sense to speak of a hypothetical intervention only on Bob’s outcome. But for anyone who accepts quantum mechanics this makes no sense! Bob’s outcome is the result of a random process whose possible outcomes each have fixed probability ½: some interventions might alter this process, but not just by altering its outcome.

Since no possible intervention only on the outcome in one wing would alter any chance of an outcome in the other wing, the dependence between these outcomes expressed by their perfect (anti-)correlations is not causal. Moreover, the chances, probabilities and quantum state assignments underlying this analysis may all be understood in a way that is manifestly consistent with fundamental relativistic (Lorentz) invariance.

**3:AM:** So does your pragmatism at work in these two cases mean that we should think of quantum mechanics as a realist or an instrumentalist theory or is it a middle way?

**RH:** Too often contemporary philosophers apply the terms ‘realism’ and ‘instrumentalism’ loosely in evaluating a position, as in the presumptive insult “Oh, that’s just instrumentalism!” Each term may be understood in many ways, and applied to many different kinds of things (theories, entities, structures, interpretations, languages, ….). I once characterized my pragmatist view of quantum mechanics as presenting a middle way between realism and instrumentalism. But by adopting one rather than another use of the terms ‘realism’ and ‘instrumentalism’ one can pigeonhole my view under either label.

In this pragmatist view, quantum probabilities do not apply only to results of measurements. This distinguishes the view from any Copenhagen-style instrumentalism according to which the Born rule assigns probabilities only to possible outcomes of measurements, and so has nothing to say about unmeasured systems. An agent may use quantum mechanics to adjust her credences concerning what happened to the nucleus of an atom long ago on an uninhabited planet orbiting a star in a galaxy far away, provided only that she takes this to have happened in circumstances when that nucleus’s quantum state suffered suitable environmental decoherence.

According to one standard usage, instrumentalism in the philosophy of science is the view that a theory is merely a tool for systematizing and predicting our observations. For the instrumentalist, nothing a theory supposedly says about unobservable structures lying behind but responsible for our observations should be considered significant. Moreover, instrumentalists characteristically explain this alleged lack of significance in semantic or epistemic terms: claims about unobservables are meaningless, reducible to statements about observables, eliminable from a theory without loss of content, false, or (at best) epistemically optional even for one who accepts the theory. My pragmatist view makes no use of any distinction between observable and unobservable structures, so to call it instrumentalist conflicts with this standard usage.

In this view, quantum mechanics does not posit novel, unobservable structures corresponding to quantum states, observables, and quantum probabilities; these are not physical structures at all. Nevertheless, claims about them in quantum mechanics are often perfectly significant, and many are true. This pragmatist view does not seek to undercut the semantic or epistemic status of such claims, but to enrich our understanding of their non-representational function within the theory and to show how they acquire the content they have.

There is a widespread view that the role of the wave-function (or more general mathematical object) is to represent a novel physical structure—the quantum state—whose existence is evidenced by the theory’s success. In this view, a wave-function represents a physical structure that either exists independently of the more familiar physical systems to which claims about positions, spin etc. pertain or else grounds their existence and properties. From this realist perspective, it may seem natural to label as instrumentalist any approach opposed to that account of the quantum state. But a pragmatist should concede the reality of the quantum state; its existence follows trivially from the truth of quantum claims ascribing quantum states to systems. What he should deny is that quantum state ascriptions are true independently of or prior to the true magnitude claims that (in his view) back them. A more radical pragmatist would reject the representationalist presupposition of this realist/instrumentalist dilemma: the assumption that mere representation is both a (key) function of a novel element of theoretical structure and figures centrally in an account of its content. The truth of a quantum state ascription trivially implies that a wave-function represents something, much as the truth of ‘1 + 1 = 2’ implies that ‘1’ represents the number one. By eschewing a ‘thicker’ notion of representation, this more radical pragmatist could seek to undermine the view that representation of a tolerably insubstantial sort could either be a non-perspectival function of an element of theoretical structure or usefully appealed to in an account of its content. I’m not presently convinced you have to be so radical to understand the significance of the quantum revolution!

**3:AM:** Does this pragmatist approach change what you used to think about gauging what’s real?

**RH:** It doesn’t change much if anything about what I said in the book about how to understand classical gauge theories. But it does help me to see why that way of thinking didn’t provide a good guide to understanding their quantum counterparts. In particular, non-separability, though it exists, is not nearly as important as I used to think in understanding a quantum theory. And the thought that quantum gauge field theories posit a non-separable world now strikes me as mistaken. Since I now think of quantum theories of all kinds as “ontologically light” I have come to a novel resolution of the vexed question of what quantum field theories are about: like all quantum theories, they introduce no novel ontology, but advise their users on the significance and credibility of claims (now including ontological claims) about other things.

So a quantum field theory may be used to make claims about particles in one context, and about classical fields in another context. And quantum field theories themselves offer advice on the contexts that make each type of ontological claim appropriate—advice made explicit through the application of quantum field-theoretic models of decoherence. One interesting thing I haven’t thought about much is what contexts (if any) would make appropriate claims about non-separable holonomy properties on the basis of a quantum gauge field theory.

**3:AM:** A key question for us all is how macroscopic systems can be explained by the microscopic. Are we wrong to think of this in terms of trying to reduce the macro to the micro? Ladyman and Ross have argued that there are different levels but not a fundamental one. What do you think?

**RH:** This question was posed without mention of quantum mechanics, even though this is (one of) our most fundamental theory/(ies). Quantum mechanics is often described as a theory of how the world behaves at the microscopic level. But it’s both more and less than that. Quantum mechanics was first applied at the atomic scale. Since then it has been successfully applied over an enormous range of length, time and energy scales, from applications of quantum chromodynamics to calculate the proton-neutron mass difference through the explanation of the massive superconducting magnets used in MRI scanners and the Large Hadron Collider, up to applications to quantum cosmology including the emergence of large scale structure through quantum fluctuations in the very early universe. So it’s not just a theory of the microworld.

On the other hand, it’s not clear that quantum mechanics is used to describe the world in any of these applications, despite physicists’ tendency to call any successful application of a theory a description! Indeed, in my pragmatist view the function of the distinctively quantum elements of the theory’s models is not to describe the world but to advise us on how better to describe it in other terms.

One respect in which my view of quantum mechanics is not instrumentalist is that I take quantum theory to represent an enormous advance in our ability to understand and explain natural phenomena, notably including many macroscopic phenomena like superfluidity and Bose condensation as well as more familiar things like colors and chemical properties of elements and compounds, lasers, atomic clocks in the GPS system, different types of magnetism, semiconductors, nuclear fission and fusion. But there are several reasons why the explanation is not best described as taking the form of a reduction of (a theory describing) the phenomenon to quantum mechanics.

Reduction is often thought to take the form of a derivation of the laws of the reduced theory from those of the reducing theory. But in my view quantum mechanics has no laws! In particular, the Schrödinger equation is not a fundamental dynamical law representing the evolution of a physical magnitude (the quantum state), and the Born rule is not a fundamental stochastic law. This follows from the fact that neither quantum states nor quantum probabilities are physical magnitudes.

Other philosophers (van Fraassen, Giere) also downplay the significance of laws in understanding the structure of a scientific theory. But there is a much more widespread acceptance of the importance of models in this context. The predominant view is that the primary function of a theory’s models is to represent physical systems. So one could think of reduction as corresponding to the embedding of the reduced theory’s models into those of the reducing theory, thereby connecting their representational structures. While I think this is a pretty good start in understanding many reductions in classical physics (like that of light to electromagnetic radiation, or the gas laws to kinetic theory), it won’t help us to understand how quantum mechanics can help explain macroscopic phenomena. This is because (in my pragmatist view) models of quantum theory do not function representationally.

So while I think quantum theory helps us to understand all kinds of otherwise puzzling phenomena, it does not do this by saying what’s going on at a deeper level: ontologically speaking, there is no quantum level. Quantum theory is fundamental to contemporary physics, and is likely to remain so for the foreseeable future. But it does not contain fundamental laws, and does not contribute its own fundamental ontology. Since quantum mechanics is in these ways parasitic on other descriptive or representational frameworks it cannot be expected to provide a basis for the reduction of the macroscopic to the microscopic. Nor, therefore, can anything else within the horizon of contemporary physics.

Some philosophers (Jonathan Schaffer, for example) have seriously considered the possibility that there is no fundamental level because, ontologically speaking, “it’s turtles all the way down”. My view is very different. The “levels” metaphor is of limited value. Here’s a different metaphor. Theories in physics form a team, and quantum mechanics is a vital player—without quantum mechanics there are many, many things we couldn’t understand about our world. But quantum mechanics can’t play every position at once, and no player is indispensable—physics can do a lot without quantum mechanics.

**3:AM:** You once asked whether we could coherently deny the reality of time. Now that you’re a pragmatist, how do you answer the question? Has anything changed?

**RH:** I argued that it is not coherent to deny the reality of time, but that we may some day come to realize that time is not fundamental, and that temporality emerges (conceptually, not successively!) from some more fundamental physical structure(s). My idea was that we may come to think of time as real in the way that color is real—a non-fundamental feature of the physical world of particular interest to folks like us because of our physical constitution. Looking back on it, that was already a pragmatist idea though I didn’t think of it that way then. I can now add another reason for not denying the reality of time (pace Carlo Rovelli, with whom I agree on many things!) In my pragmatist view, any form of quantum theory is tailored for the use of physically situated agents like us. My answer to question 9 made it apparent how important to our physical situation is our location in time (indeed, in space-time). This is a reason for skepticism about the possibility that space-time might emerge from an application of a quantum theory to something like a spin-foam, as in loop quantum gravity. The worry is whether it could make sense to talk of applying a quantum theory in a pre-spatiotemporal world.

**3:AM:** How spooky is quantum nonlocality? (Well, you did ask!) Is reality genuinely spooky from the quantum theory perspective? What do you find weird (if anything) and should common sense guide theorists – or is it actually a hindrance?

**RH:** This is really two questions. I’ll start with quantum nonlocality. As I said in answer to question 9, two things are not spooky about quantum nonlocality: There is no instantaneous action at a distance, and quantum mechanics meshes beautifully with relativity theory. But quantum entanglement and the theory’s successful explanation of violations of Bell inequalities (as manifested in its correct predictions for the correlations I described in that answer) have brought us face to face with a surprising and perhaps disquieting feature of the world.

I can do no better than quote John Bell:

“Dr. Bertlmann likes to wear socks of different colours. Which colour he will have on a given foot on a given day is quite unpredictable. But when you see that the first sock is pink you can already be sure that the second sock will not be pink. Observation of the first, and experience of Bertlmann, gives immediate information about the second. There is no accounting for tastes, but apart from that there is no mystery here. And is not the EPR business just the same?”

Of course, Bell showed it is not just the same—“the reasonable thing just doesn’t work.” The full patterns of correlation correctly predicted by quantum mechanics for all the correlations described in my answer to question 9 cannot be explained as resulting from a common cause that separately and independently pre-determines the response of Alice’s and Bob’s detectors to each particle in a pair no matter what that detector happens to be set to. This is so even though that seemed to be the only possible explanation of the perfect (anti-)correlations when they ended up with the same settings, barring direct causal connections between space-like separated events at the two wings. It is as if Bertlmann’s second sock somehow always assumes a different color even though neither sock had a color before it was examined.
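Bell’s result can be checked numerically against the quantum predictions. The sketch below is my illustration, not the interview’s; it assumes the textbook singlet-state correlation E(a, b) = -cos(a - b) and the standard CHSH settings, and shows both the perfect anti-correlation at identical settings and a violation of the bound |S| ≤ 2 that any common-cause (local hidden variable) model must satisfy.

```python
import math

def E(a, b):
    """Quantum correlation for spin measurements on a singlet pair
    along directions a and b (in radians): E(a, b) = -cos(a - b)."""
    return -math.cos(a - b)

# Perfect anti-correlation at identical settings, as in the EPR-Bohm setup.
assert E(0.0, 0.0) == -1.0

# CHSH combination: any common-cause (local hidden variable) model
# satisfies |S| <= 2, but quantum mechanics predicts up to 2*sqrt(2).
a1, a2 = 0.0, math.pi / 2              # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(abs(S))  # 2*sqrt(2), about 2.828: the classical bound of 2 is violated
```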

In one paper, Bell stated an intuitive principle of local causality, which he later attempted to make more precise to prove his result in greater generality:

“The direct causes (and effects) of events are near by, and even the indirect causes (and effects) are no further away than permitted by the velocity of light.”

In my pragmatist view, quantum non-locality does not show that space-like separated events are causally connected in a way that would conflict with this principle. But there is still a serious tension with the first part of Bell’s condition. We can locate a common cause of correlated space-like separated outcomes in their common past (the overlap of their backward light cones). But even when this cause has been fully specified, the unconditional probability of one outcome still differs from its probability conditional on the other outcome. And we cannot use quantum theory to explain why outcomes of experiments on systems assigned entangled quantum states are correlated as they are in the usual way—by describing a continuous causal process connecting an invariantly earlier common cause to these outcomes. In daily life an earlier common cause of a regular correlation between distant events is always connected to each such event by a continuous causal process: and after this cause has been fully specified, the probability of an event here is independent of the outcome there.

I admit I still think this is weird, and I crave some deeper explanation of the correlations. But the only suggestions I’ve seen of where to look for one strike me as just as weird as the phenomena themselves (retro-causation, locality in some higher dimension, branching worlds, …).

Now for the second part of your question. Common sense is one guide for a theorist, but it should always be used with caution. A wise theorist should bear in mind the view (attributed to Einstein in 1948) that common sense is actually nothing more than a deposit of prejudices laid down in the mind prior to the age of eighteen. Many ideas in physics have proved to be important even though they conflict with common sense. For years I have tried without success to convince my highly intelligent brother that no inconsistency arises between the invariance of the speed of light and results of thought-experiments designed to measure this speed. I have come to suspect that philosophers and even first-rate physicists are sometimes misled by their common sense intuitions about causation, probability, and even time into trying to locate these as fundamental elements of physical reality rather than thinking of their objectivity as arising from the essential role they play in the lives of agents such as ourselves. We should all strive to free our imaginations from the prejudices of our eighteen-year-old selves.

**3:AM:** And for the readers here at 3:AM, are there five books you could recommend to take us into your philosophical world?

**RH:** Richard Healey *The Quantum Revolution in Philosophy* (when it comes out from Oxford University Press—maybe in 2016?) Until then,

Simon Friederich *Interpreting Quantum Theory* (www.bookdepository.com/Interpreting-Quantum-Theory-Simon-Friederich/9781137447142) Palgrave Macmillan

John S. Bell *Speakable and Unspeakable in Quantum Mechanics* 2nd Edition. Cambridge University Press

Huw Price *Naturalism Without Mirrors* Oxford University Press

Tim Maudlin *Quantum Non-Locality and Relativity* 3rd Edition Wiley-Blackwell

(The only one I can fully endorse is the first book, but it hasn’t been published yet!)

**ABOUT THE INTERVIEWER**

**Richard Marshall** is still biding his time.

Buy his book here to keep him biding!

First published in 3:AM Magazine: Sunday, May 24th, 2015.