
Thinking About Mindreading, Mirroring and Embodied Cognition et al…

Interview by Richard Marshall.


Alvin Goldman is the inter-disciplinary philosopher working through ideas we can read further about in A Theory of Human Action; Epistemology and Cognition; Liaisons: Philosophy Meets the Cognitive and Social Sciences; Knowledge in a Social World; Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading; Reliabilism and Contemporary Epistemology: Essays; Joint Ventures: Mindreading, Mirroring, and Embodied Cognition; Epistemology: A Contemporary Introduction (with M. McGrath); and Social Epistemology: Essential Readings, all of which give you a clear idea of where we’re going in this interview. If there are still folks out there who say philosophy is too busy picking its own belly-button to be relevant then this should put that myth to bed…


3:AM: What made you become a philosopher?

Alvin Goldman: Like most philosophers, I was attracted to the subject fairly early. Although epistemology proved to be my most enduring specialty, this hardly seemed inevitable at the start of my professional career. In fact, my doctoral dissertation (at Princeton) was on action theory, and I doubt that I even listed epistemology as a specialty when I entered the job market. However, as an assistant professor (at Michigan) I was called upon to teach epistemology, and inevitably (in that period) the course gave center stage to the Gettier problem. I formulated a novel approach to the subject during that course, and was then rapidly pulled into the field.

3:AM: Before telling that part of the story can you tell us about your work on action theory.

AG: Sure. Entering philosophy in the early 1960s, one found books by such authors as Anscombe, Peters, Hamlyn, and Melden, who endorsed the Wittgensteinian thesis that action explanation invokes only reasons, not causes. Davidson soon broke with these authors on the role of causes, but agreed with Anscombe on another pivotal issue: the individuation of actions. Action individuation became a core topic in my own (1964) dissertation, which was subsequently revised and published as A Theory of Human Action (1970). I advocated a fine-grained approach to individuation as contrasted with the coarse-grained approach of Anscombe and Davidson. Kim pursued a similar theme at roughly the same time, but our arguments for it were rather different. My central argument focused on the “by-relation” that holds between some (ordered) action pairs. Consider a chess player who moves his hand along a certain path, thereby moving his queen to king-knight-seven, thereby checkmating his opponent, and thereby giving his opponent a heart-attack. The Anscombe-Davidson approach would have us regard all of these actions as identical to one another, whereas the fine-grained approach would treat them as four metaphysically distinct act-tokens.


I contended that the indicated actions cannot be identical with one another because each of the acts listed later in the order stands in the “by”-relation to each of the acts listed earlier. Moreover, the by-relation is an asymmetric, irreflexive, and transitive relation. The chess player checkmates his opponent by moving his queen to king-knight-seven, but he does not move his queen to king-knight-seven by checkmating him. Since the by-relation is asymmetric and no entity stands in an asymmetric relation to itself (necessarily, I am not taller than myself), these actions cannot be identical to one another. Additional arguments were advanced for denying the identity of certain action pairs because only one of them has a certain causal property. Saying “hello” very loudly may have the property of causing somebody to wake up, but (merely) saying “hello” does not have this causal property. Thus, these acts are not identical.

3:AM: They are connected somehow though aren’t they? Is this where your idea of ‘level generation’ comes in?

AG: Yes. While denying that action pairs of these kinds are identical, it could hardly be denied that they intuitively have an intimate connection with one another. So I introduced the term ‘level-generation’ (or simply ‘generation’) to mark the relevant relationship. In the chess example, moving one’s hand level-generates the second, third, and fourth actions. The transitivity of the generation relation, moreover, invites us to diagram such action patterns with the help of tree structures, including branches. The set of nodes on a (simple) action tree are all level-generated by a single “basic action”. This idea and its elaboration have largely been lost in the mist of time, but the core idea has been adopted and developed in interesting ways by John Mikhail.
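The structure Goldman describes can be made concrete in a short sketch. This is an illustrative toy model, not Goldman's own formalism: the act names and helper function are invented here, and the by-relation is modeled as the transitive closure of the direct level-generation links from the chess example.

```python
# Toy model of the "by"-relation as a strict partial order over act-tokens.
# Act names and the closure helper are illustrative, not Goldman's notation.
from itertools import product

# Direct level-generation links from the chess example.
generates = {
    ("move hand", "move queen"),
    ("move queen", "checkmate opponent"),
    ("checkmate opponent", "give heart attack"),
}

def transitive_closure(pairs):
    """Close the relation under transitivity, as the by-relation requires."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(tuple(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

by = transitive_closure(generates)

# Asymmetry and irreflexivity: no act stands in the by-relation to itself,
# so any two acts linked by "by" must be metaphysically distinct act-tokens.
assert all((b, a) not in by for (a, b) in by)   # asymmetric
assert all(a != b for (a, b) in by)             # irreflexive
assert ("move hand", "give heart attack") in by  # transitivity at work
```

The closure yields a simple chain-shaped action tree rooted in the basic action of moving one's hand, matching the diagrammatic tree structures Goldman mentions.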

In retrospect, level-generation is readily viewed as a forerunner of a more general concept in current metaphysics. It is a fairly vivid specimen of what is nowadays called the grounding relation (Fine 2001; Schaffer 2009). One object or state of affairs is grounded in another object or state of affairs, it is often said, when the former exists, obtains, or holds in virtue of the other one. Like level-generation, the “in virtue of” relation is asymmetric and irreflexive. Such a relation is cheerfully accepted nowadays with little protest; but this was far from the case in 1970.

3:AM: So tell us about your approach to epistemology.

AG: My approach to action theory was rather controversial, but probably less controversial than my first contribution to epistemology, which was “A Causal Theory of Knowing” (Goldman, 1967). This paper offered an unorthodox solution to the well-known Gettier problem about knowledge (Gettier 1963). Causal theories were blooming in the 1960s; for example, causal theories of perception (Grice, 1961), and of memory (Martin & Deutscher, 1966). But those topics were not fundamentally epistemological topics. Epistemological matters, at least until that juncture, were considered to be purely matters of justification rather than causation or discovery (Reichenbach, 1938). My causal thesis, however, targeted a central epistemological notion. The theory was later abandoned (for problems of detail rather than “principle”). But in later work I continued to champion the thesis that a proposition that is known or justifiedly believed must have a suitable causal provenance. Specifically, it must be caused (or causally sustained) by suitable psychological processes. These processes must consist in a sequence of reliable (truth-conducive) belief-forming processes (Goldman 1979, 1986).

3:AM: Can you give us some examples?

AG: The requirements of causation and reliability (for doxastic justification in particular) emerge from simple examples. Suppose you have excellent evidence for some proposition P, but you believe it for an entirely different reason (or cause). Then the belief will not be justified. Moreover, even if possession of the evidence is causally responsible for the belief, the latter might still be unjustified if the route from evidence to belief is an unreliable process. For example, if you arrive at P by wishful thinking, or by running the evidence through a purely random mental algorithm, the belief will still not be justified.


3:AM: What makes your approach to epistemology naturalistic?

AG: One possible interpretation is that a “naturalistic” account of knowledge is one that renders knowledge as a product of (biologically) “natural” powers, rather than overly intellectualized techniques. I saw the discrimination account of knowledge that I offered in (Goldman, 1976) as naturalized in this sense, as suggested in the paper’s final paragraph. Perhaps Quine also thought of naturalism in this sense, for he remarked in a personal note (in 1970) that he had read “A Causal Theory of Knowing” “with interest.”

However, a different sense of naturalization is more closely associated with Quine, viz., the involvement of science in the study in question. In Quine’s classic (though controversial) paper of 1969, “Epistemology Naturalized,” he described his new form of epistemology as a “chapter of psychology” (Quine, 1969). In due course, I also embraced this idea as part of a program for an enlightened epistemology. In reply to criticism, Quine explained that he never meant to deny that there is a normative component to epistemology. I fully agree with this. But what, then, is the role for scientific psychology within epistemology?

My reply, roughly, was to split epistemology (individual epistemology, anyway) into two parts (corresponding to parts 1 and 2 of Epistemology and Cognition). The first part is dedicated to the “analytic” task of identifying the criteria, or satisfaction conditions, for various normative epistemic statuses. With respect to the normative status of justifiedness (of belief), the proposed criterion is the reliability of the belief-forming processes by which the belief is produced. Defense of this criterion of justifiedness was not based on scientific psychology, but rather on a familiar form of armchair methodology. The second part is the task where science enters the picture. Psychological science is required to identify the kinds of operations or computations available to the human cognizer, and how well they work when operating on certain inputs and under certain conditions. All this can lead, non-trivially, to something that answers to what I call reliable versus unreliable belief-forming processes. Many strands of cognitive psychology make contributions toward addressing these tasks (even if they don’t all, systematically, describe their contributions in this fashion). An analogous conception of naturalization was later adopted for other branches of philosophy, including philosophy of mind and metaphysics (see below).

The naturalization theme has emerged in a searching debate by philosophers who critically examine the armchair tradition of analytic philosophy, i.e., the tradition of consulting our “intuitions.” Philosophers appeal to intuitions in many sub-fields. They ask whether Gettier cases (“intuitively”) qualify as instances of knowledge, whether a specified action in a specified scenario is morally permissible, or whether cases of teleportation realize the relation of personal identity across time and place. The standard assumption is that our intuitive responses deliver (mostly) correct answers to these questions. In short, intuitions are treated as evidence about the correct classification of cases in ethics, personal identity, and so forth.

3:AM: Do you think this methodology appropriate?

AG: Naturalistically inclined philosophers tend to raise serious doubts and questions about the evidential probativeness of intuitions. Isn’t an intuition a rather dubious type of mental state on which to rest our trust? Scientists do not settle matters by appeal to intuition. Why should philosophers do so? Rationalists reply that intuitions are important cognitive tools: don’t we need them in mathematics, in logic, and so forth? It is questionable, however, whether the mental operations used by mathematicians are the same as those used by subjects in rendering intuitive judgments about, for example, Gettier cases. In the latter usage, intuitions are just classification judgments, i.e., decisions about whether certain examples or scenarios are instances or tokens of certain categories or kinds. Surely, aren’t we competent in that ubiquitous type of cognitive operation?

3:AM: So does this put you in the experimental philosopher camp?

AG: As a naturalist, I share the sentiment of experimental philosophers who pose this challenge to intuition, and who insist that the bona fides of intuitions need to be tested by empirical means if we are to invest our confidence in them. In other words, their mettle must be put to empirical test (Goldman 2010). X-phi practitioners have been doing this since 2001, when the initial X-phi study of intuitions by Weinberg, Nichols, and Stich (2001) was published. However, I am not convinced that such studies have in fact delivered fatal blows to the claims of intuition to (reasonably) robust evidential status. If philosophical intuitions are to be credited with evidential status, they should be reliable indicators of the contents of our categories (e.g., knowledge, personal identity, etc.). Wide variability in intuitional responses presents prima facie evidence for low reliability. And this is what the original Weinberg et al. study reported. But those results have not been replicated. Moreover, proponents of the intuitional method argue that the test of reliability should really be raised only for philosophers, who prima facie have greater expertise than laypersons in making classification judgments about “hard” cases (e.g., Gettier cases). An X-phi study by Turri (2013) lends support to the view that philosophers really do have greater expertise. So, a strong case against intuitional reliability has yet to be convincingly made, although I support the legitimacy of continuing attempts to test this.

3:AM: Someone like Herman Cappelen denies that intuitions play any evidential role in philosophy. Do you disagree with him then?

AG: Yes, this is totally wrong. Even if the term “intuition” isn’t always used, there is widespread reliance on single-case classification judgments to resolve important questions. Appeal to such judgments has long played a probative role in philosophy, normally without introducing the term ‘intuition’ per se. Consider this famous passage from John Locke:

[S]hould the soul of a prince, carrying with it the consciousness of the prince’s past life, enter and inform the body of a cobbler … everyone sees he would be the same person with the prince … (Fraser, A. C. (ed.), Locke’s An Essay Concerning Human Understanding, vol. I, p. 457.)

Although Locke’s critical term here is “sees” rather than “intuits,” he clearly intends his prince/cobbler case to prompt a classification judgment from his readers; and he relies on such judgments as evidence to support his memory theory of personal identity. Thus, like innumerable philosophers before and after him, Locke uses what we nowadays call “intuitional” methodology. There is no question that its use is widespread. Only its evidential standing is up for debate.


3:AM: So how do you connect science and philosophy?

AG: My impulse is to bring science into partnership with philosophy wherever possible (and relevant) (cf. Goldman 2014). This was easily exercised in philosophy of mind as well as epistemology. Interestingly, when I turned to the topic of “folk psychology” (later called “theory of mind” or “mindreading”), I noticed that even philosophers preoccupied with some variety of cognitive science oddly chose a strikingly a priori methodology to answer the question of how mindreading is executed. Their answers came straight from the pages of other philosophers, who had floated armchair-based hypotheses about how people assign mental states (to self and others). This included such leaders in philosophy of mind as Jerry Fodor, Daniel Dennett, Paul Churchland, and Stephen Stich. Each embraced either a “theory-theory” or a “rationality” (or “charity”) theory. Yet both theories were products of either armchair philosophy of science or armchair conceptual analysis.

The theory-theory had been proposed by Wilfrid Sellars, who began with the standard story in philosophy of science that theoretical terms are understood in terms of laws that connect them to observations. If you substitute propositional attitudes for theoretical terms of physical science and substitute laypersons for scientists, you derive the idea that mental state concepts (at least the propositional attitudes) are understood in terms of folk-psychological laws that connect them to observables. But Sellars had no empirical evidence of this sort for this conjecture. Nor did Fodor, Churchland, or Stich (not initially, at any rate). Similarly, Davidson and others adopted their rationality or charity theories from Quine. Yet neither Quine nor Davidson had any empirical evidence that supported their ideas. It was all armchair speculation.

3:AM: So what was your alternative route?

AG: People like Robert Gordon (1986) and I (Goldman 1989, 1992) defended the simulation approach to mindreading. This too was initially based on armchair reflection. But in due course developmental psychologists produced relevant empirical findings, which seemed to support the theory-theory. The major finding was that 3-year-old children are deficient by comparison with 4-year-olds in ascribing false beliefs to others, at least in verbal false belief tasks (Wimmer & Perner, 1983). Without applauding this theoretical maneuver, I salute the fact that these investigators genuinely tried to reconcile theory with experimental evidence. From then on, not only psychologists recognized the necessity of shaping their theoretical view to comport with empirical findings; even philosophers working in this area could not ignore the science.

Other parts of cognitive science, including neuroscience, entered the mindreading landscape a decade or so later. In 1998 I chanced to encounter a fast-breaking discovery by a laboratory of neuroscientists in Parma, Italy, led by Giacomo Rizzolatti. This group worked on motor behavior in the macaque monkey, and had identified an intriguing class of neurons in the premotor cortex of macaques. These neurons were activated both when preparing to perform certain motor actions and also – quite surprisingly – when observing another monkey (or a human experimenter) perform the same action (e.g., grasping or holding something with a particular grip). Rizzolatti and colleagues called these neurons “mirror neurons,” because their activation in the brain of an observer mirrored activation in the brain of the actor being observed. Notably, these parts of the monkey brain are homologous to those of the human brain. I learned about this when a member of the Parma team, Vittorio Gallese, delivered a talk at a conference in Tucson, Arizona. It immediately struck me that this mirroring pattern of brain activation sounded like what might transpire if the simulation theory, which I had previously found plausible, were true. It would be like an observer internally imitating, or simulating, what is happening in the target individual. Might mirroring be the mechanism by which interpersonal mindreading takes place, perhaps in a rudimentary way in monkeys, but in a more sophisticated form in humans? I proposed this to Gallese, and we published the idea in Trends in Cognitive Sciences (1998). The idea apparently struck an (initially) responsive chord in many readers. Indeed, as a staunch critic of mirror neurons later reported, this paper and another one published the same year by Rizzolatti became “two of the most highly cited papers in all of psychology and neuroscience in the last decade and a half” (Hickok 2014, p. 22).


All of this illustrates how philosophers can collaborate productively with cognitive scientists. (Interestingly, Rizzolatti also collaborates with a philosopher, Corrado Sinigaglia.) My current view of human mindreading (see Goldman 2006) assigns a prominent (though not exclusive) role to simulational processes. However, I now propose a dual-system approach to such processes. On the one hand, mirror-like processes occur at a “low” level of cognition, featuring automatic interpersonal simulation, as when observing a facial expression of another’s emotion elicits the activation of a similar emotion in the matching area of the observer’s brain. This standardly occurs with, for example, disgust or fear, and it occurs largely below the level of consciousness. In addition to this low-level kind of process, there are controlled simulation processes driven by the imagination, understood as a global capacity to create or recreate mental states by deliberate internal state-manipulation. A self-created state such as an imagined act of eating 30 M&Ms can bear a striking cortical resemblance to an actual event of the same kind, i.e., actually eating 30 M&Ms (Morewedge et al., 2010). Both levels of simulation, low-level and high-level, can play pivotal roles in mindreading or empathizing. (Goldman & Sripada, 2005; Wikipedia, “Simulation theory of empathy”.)


3:AM: You’ve also integrated cognitive science into philosophy via metaphysics haven’t you? Can you tell us how you do this?

AG: Here the idea isn’t the modest one of applying cognitive scientific findings to questions about the metaphysics of mind. Few philosophers would resist that methodological maneuver. What I have in mind is more ambitious, and at first glance unpromising: to seek help from psychology (etc.) when tackling the metaphysics of the physical world (and the world of abstracta). How and why might cognitive science enter the picture here?

The human mind-brain is the organ through which we perceive and conceptualize the world. Furthermore, cognitive science assumes (based on wide-ranging empirical studies) that the mind-brain isn’t a “neutral” detector of the objective nature of things. It is, rather, an organ that shapes experience and thought, that contains built-in operating procedures that constrain or bias the ways we represent the world. If our aim, then, is to limn the nature of the universe, we had better not rest content with our naïve or commonsense modes of perception and thought. If biases introduced by the mind-brain can be identified, this may help us improve upon our spontaneous, naïve understandings of the external world that are delivered by our native equipment. In other words, when we consider, as metaphysicians, the choice of various anti-realist or deflationary characterizations of various types of objects, qualities, or relations, cognitive science offers potentially instructive background information. This is a path that metaphysicians should find familiar, since an analogous path had been trodden with physics. Many contemporary metaphysicians take it for granted that our naïve view of the universe – a view which is arguably Newtonian – needs to be revised with a view of the world that features exotic entities and relations (of the sort that relativity theory and quantum mechanics introduce). To be sure, there are significant differences between the ways that physics and cognitive science might impact metaphysics. What they share, at a minimum, is the significant possibility of substantial revisionism.


I develop the prospects for revisionary metaphysics in light of cognitive science in “Naturalizing Metaphysics with the Help of Cognitive Science” (Goldman, 2015a). There the matter is expressed in a Bayesian framework. Consider two competing accounts of some type of entity, property, or relation: a “realist” account and an “anti-realist” account. Sometimes the anti-realist account might be a form of eliminativism; other times it would consist in some other variety of deflationary account. Further assume that our initial attraction to the realist view stems from our undergoing certain kinds of mental states, such as intuitions with certain contents, or irresistible feelings of the flow of time. Furthermore, we are strongly inclined to regard the likelihood (conditional probability) of this mental event occurring if the realist account is correct as very high, whereas the likelihood of the same mental event occurring if the anti-realist account were true is regarded as very low. Hence, the posterior probability that the realist account is true will be pretty high (assuming that its prior probability isn’t too low). Can cognitive science contribute any information that would reasonably produce a change in these probability assessments? Yes. Suppose that cognitive science provides reasons to raise the likelihood of the relevant mental state occurring if the anti-realist hypothesis were true. Then some straightforward Bayesian reasoning would lead us to raise the posterior probability of the anti-realist hypothesis. In Goldman 2015a, I show how the flow-of-time example and three other examples from contemporary metaphysical argumentation can be expressed in terms of the same sort of Bayesian analysis, and how each would lead a reasonable (Bayesian) metaphysician to make the sort of probability revision just indicated.
The point, then, is that reasonable deployment of putative findings in cognitive science can and should have epistemic impacts on one’s metaphysical thinking. Hence, cognitive science is (sometimes) epistemologically relevant to metaphysics.
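The Bayesian reasoning Goldman describes can be run with made-up numbers. This is a minimal sketch, with priors and likelihoods invented purely for illustration; E stands for the relevant mental event (say, the felt "flow" of time).

```python
# Bayes' theorem over two exhaustive hypotheses: realist vs. anti-realist.
# All numbers are illustrative, not drawn from Goldman's paper.

def posterior_realist(prior_realist, lik_realist, lik_antirealist):
    """P(realist | E), where lik_* = P(E | hypothesis)."""
    prior_anti = 1 - prior_realist
    p_evidence = prior_realist * lik_realist + prior_anti * lik_antirealist
    return prior_realist * lik_realist / p_evidence

# Initial assessment: E seems very likely if realism is true,
# very unlikely otherwise, so realism ends up highly probable.
before = posterior_realist(0.5, 0.9, 0.1)   # = 0.9

# Cognitive science suggests E would occur even if anti-realism were true,
# raising P(E | anti-realist); the posterior for realism drops accordingly.
after = posterior_realist(0.5, 0.9, 0.8)    # ~ 0.53
```

Raising the single likelihood P(E | anti-realist) from 0.1 to 0.8, with everything else held fixed, moves the realist posterior from 0.9 to roughly 0.53: exactly the kind of probability revision a reasonable Bayesian metaphysician would make.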

3:AM: Not only are you busy partially naturalising metaphysics but you’re also a leading figure in social epistemology. Can you say what this is?

AG: My interest in social epistemology was initially triggered by intellectual movements outside of philosophy such as post-modernism, social constructivism, and the “strong programme” in the sociology of science. These movements ran counter to the cornerstones of traditional epistemology — such notions as truth, rationality, and objective knowledge — although (with the exception of Richard Rorty) they rarely made explicit references to the epistemological tradition. A pervasive theme, however, was to try to replace notions like truth with some sort of social phenomenon. For example, they tried to replace or reconstitute truth in terms of notions like societal acceptance or consensus.


The positive proposals associated with these programs struck me as hopelessly confused. Nonetheless, it had to be conceded that philosophers had heretofore devoted scant attention to the influence that societies, institutions, and interpersonal interactions exert on our epistemic life. Epistemologists ought to be actively exploring the social dimensions of knowledge, with the tools already available to them. I made a first proposal for how to do this in a 1987 paper (Goldman, 1987) and later wrote a hefty monograph, Knowledge in a Social World (Goldman, 1999), which sought to apply a truth-linked criterion of epistemic valuation (“veritism”) to a wide range of social practices and institutions.

More recently I have articulated a program for social epistemology (SE) that distinguishes three branches of the field (Goldman, 2011). Branch 1 seeks to formulate principles for saying when one cognitive agent can and should exploit the statements (or “testimonies”) of others in making up his/her own mind on a subject. A special class of cases are ones in which the agent is a layperson or novice in a given domain D and seeks advice from an expert (or someone he takes to be an expert). How can he decide who is a genuine expert, or which of two people has greater expertise (in case they disagree)? An intriguing variant of this problem is the problem of peer disagreement (Feldman), where the question is how much trust an agent should place in another as compared to herself.

Branch 2 of SE introduces a new class of epistemic agents, i.e., group or collective agents that can serve as subjects of beliefs in their own right. Assuming that such agents are metaphysically legitimate, what are the best ways for such collective entities to form and revise their beliefs? If a group’s beliefs are fixed by some sort of aggregation of its members’ beliefs, what are the optimal principles or procedures? (List & Pettit, 2011) And what are the criteria by which a group belief should qualify as justified or unjustified? (Goldman, 2014b; Lackey forthcoming)

Branch 3 of SE focuses on social systems, institutions, or networks, which, although they may not be epistemic agents in themselves, establish procedures and structures that can advance or impede the quality of individuals’ beliefs. For example, systems of education, technologies of communication, and legal systems can influence public discourse for good or ill. In the political sphere the principles that regulate speech can have an enormous impact on electoral viewpoints and hence outcomes. The power of American corporations to influence elections under Citizens United is best seen as a serious flaw in a social-epistemic system.

As the last example illustrates, SE is a field that overlaps with political theory at certain points. Chapter 10 of Knowledge in a Social World (Goldman, 1999) explored the relationship between voter knowledge and what I called “democratic success.” People tend to think of democracy, fundamentally, as a system of voting — specifically, majoritarian voting — because it gives voters an equal chance to influence political outcomes. But voting power per se does not guarantee citizens’ prospects for getting their preferred outcomes. To the extent that they are misinformed or under-informed about crucial matters, their preferences will be undercut and thwarted. Herein lies a major reason why SE, especially in its third branch, is highly relevant to political matters.

This said, it may not surprise the reader to learn that I have recently turned to political theory, especially democratic theory. Although the line of thought described in the preceding paragraph might strongly hint at an epistemic approach to democracy, a recent paper takes a slightly different course (Goldman, 2015b). There I develop a power-equality approach to democracy with central focus on the measurement of power. The novelty of some of the analytical tools introduced there can, I believe, advance our treatment of subtle issues in democratic theory. I hope to make additional contributions in this territory.

ABOUT THE INTERVIEWER

Richard Marshall is still biding his time.

Buy his book here to keep him biding!

First published in 3:AM Magazine: Saturday, June 6th, 2015.