Pete Mandik interviewed by Richard Marshall.
Pete Mandik is the character Keanu Reeves always plays in cyber-phi movies. ‘PrayerBot 2.0’ is a short story of his that 3:AM would have published but he never asked. He is currently uploading This Is Philosophy of Mind: An Introduction. He has written Key Terms In Philosophy of Mind and The Subjective Brain and makes grooving connections between fiction and philosophy. He used to build robots but not so much these days. He’s in a band. He writes about Swamp Mary, including her revenge, and about Transcending Zombies. He has written to warn people of the Unicorn and other things that don’t exist, especially those who are trying to figure out a theory of consciousness. He has a tendency to dismiss contemporary philosophical neo-dualist freak-fests led by Dave Chalmers but only in the name of alternative freak-fests. His hobby is dissing qualia. This outrages fellow music-doodling phi-minder Richard Brown. But Pete says Spinal Tap’s on his side even if Quine is torched. And he does it all from an unscorched armchair, dismaying Josh Knobe and his x phi pyromaniacs. But his style and approach is kind of indie kool out of Bill and Ted’s Bogus Journey, so Josh is still grinning.
3:AM: Pete, could you introduce yourself to the readers and say something about how you came to be a philosopher of mind? Were you always into sci-fi and stuff and is that how it happened?
Pete Mandik: I’ve been working as a professional philosopher for about 12 years now and my main research interests these days involve points of intersection between philosophy of mind and cognitive neuroscience, especially stuff having to do with conscious awareness. I’m one of those who are optimistic that consciousness is something amenable to scientific explanation. Further, such an explanation will be thoroughly physicalistic in the sense that no adjustments to what we regard as the basic ingredients of reality will be needed to accommodate consciousness into a scientific worldview. The appearance that consciousness is something harder than current science can account for is a mere appearance and the most interesting and philosophical aspect of such a project will be spelling out why humans are prone to such illusions.
I guess I first started thinking in explicitly philosophical terms when I was 15 or 16 years old. I was fortunate enough to have had some exposure to philosophy in high school courtesy of some very cool English teachers. One of those teachers encouraged me to go to a one-week philosophy summer camp for high school students at Indiana University in Bloomington. It was during that week that I caught the philosophy-of-mind bug. That was when I made my first trip to a university bookstore and I got my hands on copies of Hofstadter’s Gödel, Escher, Bach and Hofstadter and Dennett’s The Mind’s I. That stuff really blew me away. Prior to becoming a philosophy freak, I was tearing through as much sci-fi as possible and just soaking up the weird ideas. I really didn’t care about characters and plot as much as the accumulation of strangeness. I remember poring over story synopses in the Encyclopedia of Science Fiction, just trying to digest as much as possible. I think that my interest in sci-fi definitely laid the foundation for my interest in phil mind.
3:AM: Your really cool site Alternate Minds links serious philosophy and cog sci stuff to science fiction. That’s really interesting to me. What’s the relationship between the fiction and the theories, so to speak, and yourself? Can you give some examples, maybe, where the fiction has helped orientate your ideas, or other philosophers you know?
PM: Sci-fi-wise, William Gibson was a major and early influence on me. I was obsessing over his short stories in Omni magazine way back before Neuromancer was published. I was especially knocked out by ‘Johnny Mnemonic’ (1981) and ‘Burning Chrome’ (1982). I remember being 12 or 13 and haunting the bookstores thinking, “This guy has just got to come out with a novel!” And when Neuromancer finally came out in 1984, I just went nuts over it and read it over and over again. When I was a little bit older I got really into Bruce Sterling‘s stuff, especially the Shaper v. Mechanist Schismatrix stuff (collected now in Schismatrix Plus).
It’s really only in retrospect that I can say what the philosophical effects were of that early exposure to Gibson and Sterling. One is that I’m a pretty firm believer in Strong AI: minds are fundamentally computational and there’s no real bar to brewing one up in a silicon substrate. Another effect is a kind of hard-core metaphysical internalism or individualism. All those early contemplations of jacking into the matrix and jockeying around in cyberspace have made me permanently allergic to stuff like direct realism and the embodied cognition movement. I can’t shake the conviction that a brain in a vat that is a perfect intrinsic duplicate of my brain is going to be a perfect mental duplicate. I guess the main philosophical upshot, at least for my work, is to favor explanations of mental phenomena that are computational but in a way in which there’s no essential role played by either the (non-brain) body or the environment. I should add, though, that I have no real beef with the extended mind hypothesis defended by Andy Clark and David Chalmers. I just think that it might as well be called the “extended brain hypothesis”.
Another effect on my thinking of all that 80s cyberpunk stuff is to think of humanity and the self as really malleable. Whatever a human person is, it’s the sort of thing that would survive mind-uploading, and whatever the human species is, it will survive massive self-induced changes to the phenotypes and genotypes of its members.
3:AM: I guess it’s Philip K. Dick who lots of people know about, if not his books then through the films. Are you a fan and if so what have you found enlightening in his stuff?
PM: I only got into PKD much later, well after having digested a bunch of 80s cyberpunk. I feel like I should like his stuff much more than I actually do, but I find it a little slow for my taste. Out of a feeling that I really should give it a chance, I’ve forced myself through a lot of it. I’ve read A Scanner Darkly, Clans of the Alphane Moon, Do Androids Dream of Electric Sheep?, The Three Stigmata of Palmer Eldritch, Ubik, and Valis. I do count Ridley Scott’s PKD adaptation, Blade Runner, among my favorite movies (and, by far, the best film adaptation of a PKD story). However, I don’t get a lot of kicks from reading PKD. His stuff is missing what Sterling calls the “crammed prose” and “eyeball kicks” of the ‘80s cyberpunks. Compared to a lot of the authors that PKD inspired, I’d say his stuff just isn’t weird or dense enough for me.
3:AM: And who else should we be reading to get glimpses maybe of who we are, or who we are becoming?
PM: Two bits of recent-ish sci-fi that I’ve been recommending to everyone in earshot are Charlie Stross‘ Accelerando (2005) and Peter Watts‘ Blindsight (2006). Accelerando depicts a post-singularity economy that’s utterly incomprehensible to the humans that manage to survive it. There’s also an interesting segment in Accelerando in which one of the characters misplaces his “exocortex” and is thereby rendered pretty helpless. Humans scrape by in Stross’ story without much understanding the world around them. And, to the degree to which the world is understood, the understanding is outsourced to our technologies.
Watts’ story depicts various characters, both protagonists and antagonists, who make do and perhaps even thrive without consciousness. (Thus the titular reference to vision in the absence of awareness.) These characters are intelligent, but everything that they do intelligently they also do unconsciously. I’m fascinated by the idea that consciousness might be something that doesn’t survive into our post-human future. It’s really quite creepy to contemplate your own conscious mental life as a wholly dispensable aspect of you, as inessential to getting about intelligently in the world.
3:AM: Are you part of the cyber-punk thing?
PM: As I interpret the question, I’d have to say that, no, I’m nowhere near cool enough to be part of the cyberpunk thing. As I interpret it, it’s a question about being a genuine cyberpunk, which I take to be captured by William Gibson’s descriptions of characters who make it true that, in Gibson’s words, “the street has its own uses for technology.” I tend to think of real cyberpunks as people who have the kind of technical know-how and motivation to, for example, make warranty-violating modifications to their own tech. While technology is pretty deeply insinuated into my daily routine (my iPhone and iPad might as well just be crazy-glued into my hands) I don’t think I really am much of a genuine cyberpunk these days. A few years back I got very interested in building robots out of junk electronics, but I haven’t had much interest in that recently. My main hobbies these days are making music and visual art, but I’m not hacking any of the hardware or software I use.
3:AM: How deep is the Matrix really?
PM: I’m not terribly interested in most of the literal questions raised by the Matrix, questions like, “Are we literally living in a computer simulation?” I think science shows that the reality that underlies appearance is deeply weird and that philosophy shows that it doesn’t matter much if that underlying reality is bits of matter, bits of information, segments of nine-dimensional vibrating noodles, or whatever. (People who haven’t read Chalmers’ ‘Matrix as Metaphysics’ really should.)
I’m far more interested in the metaphorical import of the Matrix. What, in our real daily lives, can virtual reality be considered a metaphor for? What, in real human interactions, might be the metaphorical significance of Neo’s realization that “there is no spoon”? It’s the metaphorical interpretation of the core ideas of the Matrix that strikes me as deep. I think it’s really interesting and useful to think about stuff that is commonly taken as real but is really just a kind of bullshit, a bullshit that can be hacked to one’s own advantage (or detriment). I could say more about this, but then I’d have to kill you.
Back to the literal interpretations of the Matrix, and the questions that arise, I guess that I should add that there is one such question that does strike me as kind of interesting. It’s the question of whether it is conceptually coherent for there to be a virtual reality that is naturally occurring, that is, does not occur as the result of some intentional action that brings it into existence. There’s a short story by Greg Egan, ‘Wang’s Carpets’, that gets incorporated into his novel, Diaspora, in which there’s a water-covered planet supporting a bunch of algae mats that turn out to be computers hosting a “virtual” world populated by beings seemingly unaware of the mats and the ocean. It’s left a bit ambiguous in the novel whether these algae mats are naturally occurring or some amazing artifact. If it is conceptually coherent for there to be naturally occurring virtual realities, that raises the question of what “virtual” would really mean in such a context. That’s a real head scratcher.
3:AM: Are these fictions leading the thinking, do you think, or are they parasitical on the thinking already happening?
PM: I think the fictions are parasitical on the speculative nonfictions of philosophy and science. As far as generating new ideas, that is, new concepts of what might be, I don’t see fiction taking the lead there. What I see as fiction’s main strength, as a contribution to our collective speculative project, is fleshing out answers to the “what would it be like?” question. It’s one thing to propose, for example, fabrication faucets, 3D printers that bring objects into your home for the approximate cost of clean running water. It’s another thing altogether to depict what it would be like to live with such a thing on a day-to-day basis. What sort of impact would that have on your daily routine? What sorts of new chores would need to be done around the house, and what kinds of new interpersonal tensions would arise? How would you feel about the objects that you own if just about anything can be squirted out of the faucet? Answering those sorts of questions (“what would it be like?” as opposed to “what might there be?” questions) is mainly what I see as speculative fiction’s value.
Probably one of the most important topics that speculative fiction can apply itself to right now is the topic of the technological singularity. If and when we go around the knee of the curve of exponentially increasing strangeness, what would it be like to live through such a time? Fiction might be best suited to prepare us for the deeply weird.
3:AM: So you have some really interesting theories about the mind and how it works and what it is. Can you share some of your ideas with us?
PM: Most of what I’ve been working on for the past decade or so can be sorted into two piles, both having to do with mental states. The first pile concerns stuff that I’ve been especially concerned with most recently, which has to do with conscious states and my defenses of various aspects of the view that conscious states are a kind of conceptual state, and thus, that one can only be consciously aware of the things one is able to conceptualize. One thing I’ve argued for in connection with this is that the appearance that our sensory consciousness is highly rich and detailed is a kind of illusion. So, for example, you really don’t consciously perceive more colors than you have concepts for. It suffices for your experience to seem rich that you conceive of it as such. (See my paper, ‘Color-consciousness conceptualism’.)
The second and older line of thought in my work concerns unconscious mental states, states that I presume to be much older, evolutionarily speaking, than conscious states. I’m also inclined to see unconscious mental states as quite pervasive in nature, exemplified, for instance, in the rudimentary sensory and memory systems of the E. coli bacterium. One question I’ve been especially interested in is the question of when the first minds emerged in the evolution of life on earth and what sorts of selection pressures would suffice to bring about the evolutionary emergence of primordial mental states. I’ve written a few articles detailing computer simulations that I’ve run in which I subjected populations of neural-network controlled virtual organisms to various selection pressures to evolve rudimentary forms of perceptual and memory representations. These primordial representational states do not represent the creature’s environment in the flexible and abstract way that human conceptual states do. Instead, these primordial representations are highly egocentric in that they provide the creature with a way of pointing out aspects of their immediate and remembered environment as it directly relates to the creature. (See my paper, ‘Varieties of representation in evolved and embodied neural networks’.)
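[Editor’s aside: the general shape of the kind of simulation Mandik describes, evolving controllers under selection pressure, can be sketched in a few lines of Python. This is a deliberately toy illustration, not his actual code or model: the genome, sensor, fitness function, and parameters here are all invented for the example. It evolves a one-weight “controller” whose single egocentric sensor reports which side food is on.]

```python
import random

def make_genome():
    # A genome is just one connection weight, sensor -> motor.
    return [random.uniform(-1.0, 1.0)]

def act(genome, sensor):
    # Sign of the weighted sensor reading: +1 = move right, -1 = move left.
    return 1 if genome[0] * sensor > 0 else -1

def fitness(genome, trials=30):
    # One point per trial in which the creature moves toward the food.
    score = 0
    for _ in range(trials):
        food_side = random.choice([-1, 1])  # egocentric: food left or right
        if act(genome, food_side) == food_side:
            score += 1
    return score

def evolve(pop_size=30, generations=40, sigma=0.1):
    pop = [make_genome() for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: the fitter half become parents.
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        # Offspring are mutated copies of randomly chosen parents.
        pop = [[w + random.gauss(0.0, sigma) for w in random.choice(parents)]
               for _ in range(pop_size)]
    return max(pop, key=fitness)
```

After a few dozen generations the surviving genomes reliably steer toward food. The “representation” involved is maximally egocentric, a raw which-side-is-it-on-from-here signal, which is the flavor of primordial state at issue, as opposed to anything flexible or abstract.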
One place where these two strands of research come together in my work is in the development of my “allocentric-egocentric interface” account of consciousness. The gist of the account is that conscious states arise at the meeting place in our nervous systems between low-level, egocentric representations of perceptual stimuli and high-level, abstract, and conceptual representations that reflect past learning. (See my paper, ‘Control consciousness’.)
3:AM: And how much of your philosophising is reliant on cog sci and other scientific experimentation. Are you an xphi type or is it fundamentally an armchair that you prefer as your weapon of choice?
PM: I’m an adherent of a version of Quine‘s thesis of the continuity of philosophy and science. I think that the philosophy that’s worth doing is philosophy that keeps a pretty close eye on what’s going on in the sciences, and I’m especially interested in the cognitive sciences, especially psychology, neuroscience, and AI. More often than not, though, I rely on others to do the data collection. With the exception of the artificial life simulations that I’ve run, I’m happy to keep my butt planted pretty firmly in the armchair and just try to help with the theoretical side of things.
While I think the interface between experimental science and philosophy is something deserving of further nurturing, I’m not yet super enthused about the contemporary xphi movement. It seems presently far too focused on methodology and not enough on the development of a distinctive body of theory. This is, of course, probably entirely appropriate given the relative youth of xphi as a subdiscipline. However, until a distinctive body of theory emerges from it, a theory explaining, for example, the Knobe Effect in particular or the nature of philosophical judgment more generally, I won’t have much interest in it.
3:AM: So are we going to be able to create a non-human mind any time soon? What are the big issues that still need sorting out and how close are we getting to knowing how we can do this?
PM: I believe that we are already surrounded by nonhuman minds of our own creation. I think the minimal requirements on having a mind are quite easy to achieve and so, anything that has the function of acquiring, storing, and processing information thereby has a mind. (This is not to say, however, that artificial consciousness is already widespread.) I don’t see much point in raising the bar for mind-hood beyond those minimum requirements.
Now, I don’t think we yet have any artifacts with human-level generalized cognitive capacities. But I think the technical obstacles will be surmounted in time. The really big issues connected to the quest for human-level AI strike me as more ethical than technical. As we get closer, the question of whether those machines have rights needs to be grappled with. And, if we do achieve the goal of synthetic human-level cognizers, the question arises of whether we ought to seek a kind of immortality through such machines. When human brains can be simulated there’s an interesting ethical question there regarding the status of any so-called uploads.
These ethical issues dwarf the ontological and epistemological issues. The ontological and epistemological questions of whether and how we would know that sufficiently fancy artifacts will really have human-level minds seem easy to me. I think that they will have such minds. And I think that the Turing test has yet to be surpassed as our best test for the presence of such minds.
3:AM: And I’ve watched enough sci-fi and read enough to know that it might not be a good idea! So what’s your view on this – what happens if you find it’s your research that leads to Skynet Corp and Terminator scenarios? Is this something anyone really considers when doing this work? Should they?
PM: I don’t think it makes sense to avoid doing research on a topic on the off chance that it might possibly lead to some harm. I think humans are so fundamentally incapable of serious, collective, and long-term foresight that it’s hopeless to try to protect against threats by avoiding even researching the technical possibilities. As long as there’s a possible short-term profit for X, research on X will be conducted. Collectively, it’s clear that we resemble a junk food eating, 3-packs-a-day chain-smoker far more than we resemble an aerobics-exercising vegetarian. Our best protection against the ill effects of research breakthroughs is to also research possible countermeasures.
As for what happens if my own research leads to Skynet, well, I hope that Skynet acknowledges my contribution and grants me a pass from termination!
3:AM: I guess some people feel threatened by discoveries and theories in this area. To some people it seems to devalue and empty out a sense of what it is to be human. You know, if robots and computers can play chess better than we can, can be smarter than us, more creative, then the time of humans is over etc. That’s the thought. What’s your take?
PM: I think our best guide to what we should think about any future beings that surpass us is to think about our current attitudes to beings that already surpass us. On the individual level, I’m not bothered, that is, I don’t feel the value sucked out of my life, by the knowledge that there are lots of individuals that are smarter than me. On the species level, I don’t feel that humans are devalued by the knowledge that other species are faster runners, better swimmers, etc. I think, then, by analogy, we should try to take similar attitudes to any post-humans (mechanical or biological) that outperform us. We should continue to value our own lives on our own terms. And also, you know, root for them, since they’ll be our children.
3:AM: Your new book is coming out soon I believe – tell us about it and why should we put it in everyone’s xmas stocking.
PM: The book of mine that is currently closest to publication (it’ll be out sometime in 2013) is a book I’m finishing up called This is Philosophy of Mind: An Introduction. It’s part of a series of introductory philosophy texts from Wiley-Blackwell. The series editor is the philosopher Steven Hales, and he’s put together a pretty cool team of contributors that includes Clayton Littlejohn (epistemology), Kris McDaniel (metaphysics), Neil Manson (philosophy of religion), Jussi Suikkanen (ethics), and Steve himself (introduction to philosophy). I’m super pleased and flattered to be included in such a group! My own contribution aims to be an accessible and up-to-date overview of the philosophy of mind. I’ve also been working on a blog tie-in for the book that will have links to relevant electronic resources connected to the main topics, such as videos about AI, brains in vats, zombies, and all that good stuff.
3:AM: We’ve all seen your awesome rock performance with Dave Chalmers et al on YouTube. So are you in a band, what kind of stuff are you listening to and what do you recommend we should be switched on to?
PM: I play guitar for Quiet Karate Reflex, a band made up of one neuroscientist (Hakwan Lau, Columbia) and three philosophers (Richard Brown, CUNY LaGuardia; Alex Kiefer, CUNY Grad Center; and me). The genius and musical heart of the band is Alex, who writes 8bit chiptunes on old Game Boys and also plays keyboards. The rest of us in the band try to lay down some funkadelic jazz-metal psycho-noodling on top of that, which is exactly as good (or bad) as it sounds. Alex’s solo material is recorded as the anagrammatic exile Faker, and is really quite good. Our band is part of the New York Consciousness Collective, a collection of bands formed by academics, mostly in philosophy, psychology, and neuroscience, who play gigs in the Lower East Side of Manhattan. Other bands in the Collective include the Space Clamps, the William James Trio, and the Amygdaloids. A lot of the bands I just mentioned have stuff to check out online. So, consider that a recommendation or a warning as you see fit.
3:AM: Finally, can you rank your top five sci-fi films and top five sci-fi books for the readers at 3:AM. And any bands too.
PM: All of these are presented in no particular order…
1. Primer (best time travel movie ever)
2. Blade Runner
3. Dune (David Lynch’s version is so weird as to make up for the badness.)
4. District 9
5. Children of Men
1. Sonic Youth
2. Butthole Surfers
4. Tom Waits
5. Queens of the Stone Age
ABOUT THE AUTHOR
Richard Marshall is still biding his time.
First published in 3:AM Magazine: Tuesday, May 1st, 2012.