
Philosophers Wrong About Knowledge Since Plato Bombshell!

Interview by Richard Marshall.

First of all, it is a lie that philosophers traditionally defined knowledge as justified true belief (“JTB”). Gettier criticized a view that nearly no philosopher ever held. Roderick Chisholm might have been, at one point, the only one. But “Philosophers Since Plato Wrong about Knowledge!” is a better headline than “A Philosopher Wrong about Knowledge!” Second, there was never any evidence that JTB was the “commonsense” view either, and recent work by experimental philosophers, particularly Christina Starmans and Ori Friedman, shows that it is not the commonsense view. So it was a fake problem, with no basis in either commonsense epistemology or the history of the discipline. Finally, the problem is not hard to solve.

One challenge facing any communication system is how to prevent individuals from sending dishonest signals that will benefit themselves at a cost to others. For instance, in some bird species, up to two-thirds of predator alarm calls are false, intended to scare conspecifics from preferred feeding or mating opportunities.

Commonsense morality implicitly rejects “ought implies can.” Over and over again, in a wide range of circumstances, we found that people overwhelmingly attributed moral obligations to people unable to fulfill them. In some cases, nearly 90% of people respond this way.

Knowledge does not require belief, as those categories are ordinarily understood. In the same way, knowledge does not require reliability.

People alter their description of obvious details, to the point of contradicting themselves, when they think that someone does not deserve to be blamed for doing something objectively wrong. People want to excuse the transgression and, as a result, roughly half of them say — and perhaps, on some level, even sincerely believe — that no transgression occurred … and that it unintentionally occurred.

John Turri is a philosopher and cognitive scientist at the University of Waterloo. His current research focuses on social cognition and communication, using tools from philosophy and experimental psychology. Here he discusses why the Gettier problem in contemporary epistemology is philosophy’s version of ‘fake news’, experimental philosophy and psychology, assertion, factive norms, virtue epistemology, inability and obligation, why abilism should replace reliabilism, and attitudes to breaking rules.

3:AM: What made you become a philosopher?

John Turri: I grew up in a working-class neighborhood in Detroit and went to university on an athletic scholarship. Philosophy was definitely not on the radar when I enrolled in an introductory drama course taught by William Missouri Downs. Bill gave a lecture on the purpose of art, which focused on opposing views of Plato and Aristotle. It seemed amazing that thousands of years after these two philosophers died, their views were still relevant enough to be taught in a university course by a successful writer. That impressed me and, in retrospect, probably helped pave the way toward becoming a philosopher, but it definitely wasn’t sufficient. After all, I heard lectures on the Pythagorean Theorem too, which also remains relevant after thousands of years, but I didn’t become a mathematician.

Another part of it, I suspect, was symbolic, connected to the image of philosophy in Western intellectual culture. In this image, philosophy is willing to question anything — including widely and deeply held assumptions about important practical issues such as religious faith in supernatural beings, coercive use of state power, or the treatment of non-human animals — and to defend, based on available evidence, audaciously simple and general answers. Of course, this willingness is not unique to philosophy; exceptional work in many disciplines shares in it. And philosophy does not always measure up to the image: witness the recent shameful mistreatment of Rebecca Tuvel. Some philosophers instigated this vicious display of anti-intellectual authoritarianism, and many others piled on. (There is already a Wikipedia entry detailing the outrageous affair, which crystallizes some ugly trends that have been accelerating in the profession recently.) In a different vein, unfortunately a lot of contemporary philosophical scholarship is a decadent maze of involuted, introverted, and sterile conversations about narrow and artificial topics. So, in the end, I think it is to some extent a coincidence that I ended up a philosopher. And that’s probably related to why I also ended up a cognitive scientist.

[Image: David Lynch]

3:AM: Epistemology, and branches of thought arising from epistemological issues, is where you work. One of the big questions of contemporary epistemology in recent times has been the question of knowledge and trying to define it in terms of justified true belief. This has led to industrial-scale demonstrations that it isn’t. You have argued that Gettier cases don’t show that knowledge isn’t justified true belief but rather that they reveal a flaw in JTB. Can you sketch what you take to be the flaw and how you supplement the traditional components of JTB — and once you’ve done that, are we left with an argument that shows that despite the Gettier cases knowledge truly is justified true belief after all?

JT: The Gettier problem is contemporary epistemology’s version of fake news. First of all, it is a lie that philosophers traditionally defined knowledge as justified true belief (“JTB”). Gettier criticized a view that nearly no philosopher ever held. Roderick Chisholm might have been, at one point, the only one. But “Philosophers Since Plato Wrong about Knowledge!” is a better headline than “A Philosopher Wrong about Knowledge!” Second, there was never any evidence that JTB was the “commonsense” view either, and recent work by experimental philosophers, particularly Christina Starmans and Ori Friedman, shows that it is not the commonsense view. So it was a fake problem, with no basis in either commonsense epistemology or the history of the discipline. Finally, the problem is not hard to solve. So when it is discussed, an avalanche of distinctions, complications, and permutations must quickly subdue the uninitiated, who might otherwise dare to think the problem pedestrian or, worse, speak a solution. It has required effort, including self-deception and indoctrination, for philosophers to continue pretending, for decades, that the Gettier problem is a profound and formidable challenge. We owe our students, ourselves, and the wider intellectual community better than this.

3:AM: Why don’t findings by experimental philosophers and psychologists worry you when they claim that there is evidence that Gettier subjects do know – thus threatening the assertion of professional philosophers that they don’t?

JT: I welcome those findings because they advance our understanding of the content of our ordinary knowledge concept, which turns out to be a pretty important part of ordinary social cognition. Philosophers don’t always carefully distinguish between claims about how our concepts or judgments work, on the one hand, and the non-conceptual world, which those concepts or judgments pertain to, on the other. Both of those things are legitimate areas of philosophical inquiry. Hopefully, findings like the one you mention will cause philosophers to be more careful regarding which they’re referring to, and more judicious in the methods they employ to answer their principal research questions.

3:AM: I guess this links with what some would say is a weakness in philosophical enquiry compared with, say, psychology. This is an area you’ve delved into. So what have you found here — are there differences in how these two subjects are perceived to operate that have significance for how philosophical enquiry ought to be conducted in the future? I’m thinking that perhaps gendered preferences in how enquiry is conducted — observation over intuition, team over individual — might explain why women are underrepresented in the philosophical profession.

JT: Any area of inquiry, philosophy included, ought to be closely informed by relevant methods and findings from other disciplines. To do otherwise is at best silly and at worst willfully ignorant and arrogant. In my view, that is a completely separate issue from how philosophy and various other disciplines, including psychology, are perceived by the broader intellectual culture or general public. If it turned out that being better informed resulted in philosophical inquiry being less favorably perceived, then I would count that as an unfortunate cost of doing business the right way.

Nevertheless, as things turn out, the opposite is probably true. In a series of behavioral experiments, Wesley Buckwalter and I found that people generally favor empirical methodology over armchair methodology, and they tend, correctly, to associate the former with psychology and the latter with philosophy. (This finding was replicated by another research group.) So being better informed by empirical findings and methods will probably improve the perception of philosophical inquiry. Moreover, this methodological preference was significantly stronger for women than for men. And since psychology and philosophy are closely related disciplines dealing with many of the same issues, a plausible hypothesis is that this gender-based methodological disparity contributes to the fact that philosophy attracts mostly men, whereas psychology attracts mostly women.

[Image: David Lynch]

3:AM: Assertion is important in the area of epistemology isn’t it – and both philosophers and cognitive scientists of various stripes are involved in trying to work out what it is and why it strikes us as being so significant. Why do you think the best way to understand the norm of assertion is via knowledge rather than alternatives such as justification?

JT: Yes, assertion is an important research topic for philosophers and scientists alike, because sharing information is an essential part of social life, and a principal way of sharing information is to make assertions (that is, to tell one another that various things are true). But humans aren’t the only species to share information; communication occurs throughout the animal kingdom, all the way down to bacteria. One challenge facing any communication system is how to prevent individuals from sending dishonest signals that will benefit themselves at a cost to others. For instance, in some bird species, up to two-thirds of predator alarm calls are false, intended to scare conspecifics from preferred feeding or mating opportunities. Obviously, birds benefit from not becoming a predator’s next meal. But what prevents the channel from being flooded with so much misinformation that it ceases to be useful?

Researchers in the interdisciplinary field of animal communication studies have identified some mechanisms that prevent this from happening. In their terminology, these are factors that make communication systems “evolutionarily stable.” One mechanism is to attend preferentially to informationally constrained signals, which only signalers with access to certain information will produce. For example, sparrows need to distinguish conspecifics who are invading their territory from those who occupy neighboring territory. A sparrow accomplishes this based on whether the conspecific imitates the song the sparrow just sang (“song matching”), or sings a different song that the sparrow has sung previously (“repertoire matching”). Repertoire matching is an informationally constrained signal of neighbor status because it “requires knowledge” of the other bird’s repertoire. Another mechanism is social policing, which involves testing for honesty and retaliating for dishonesty, either through physical aggression or by imposing negative reputational costs. Behavioral ecologists describe “receiver retaliation” as a “behavioral rule” that discourages dishonesty and a “key factor” in making communication systems evolutionarily stable.

What prevents humans from lying enough to destabilize the practice of assertion? Taking a clue from decades of research on animal communication more generally, one hypothesis is that our practice of assertion is (partially) sustained by a socially policed information constraint. On this approach, mastering the practice requires internalizing a rule that assertions should express knowledge, which will have detectable behavioral consequences. As it turns out, the hypothesis is supported by a diverse and robust body of evidence, including observed conversational patterns, developmental studies showing that from an early age human children link knowledge and assertability, historical linguistics, and experimental studies testing adult judgments about assertability. An exciting recent development is that the adult findings are cross-culturally robust, having been observed in English speakers from North America and, now, Korean speakers from the Korean peninsula too.

This impressive body of convergent evidence convinces me that not only is there a norm of assertion, but also that it is best understood as involving knowledge. No alternative proposal, such as ones featuring evidence or belief, is even remotely comparably well supported. People are either fooling themselves or obfuscating if, at this point, they think that these issues can be usefully addressed by, say, inventing another way of “explaining” alleged “intuitions” regarding assertions about losing lottery tickets.

3:AM: Does looking at Gettier cases help you here? And are telling and showing equally covered by this knowledge norm?

JT: Looking at Gettier cases is only one small part of the overall picture and, in all honesty, is probably genuinely interesting only to researchers antecedently invested in that peculiar genre. Telling (asserting) and showing (providing an instructional demonstration of how to do something) are not covered by the same rule, but I have argued that there is evidence for a related but different rule pertaining to showing. On this hypothesis, you should show someone how to do something only if you know how to do it (and your demonstration exhibits the know-how). So both rules feature knowledge, albeit different kinds of knowledge.

3:AM: How does the challenge of selfless assertion threaten your knowledge based account – and how do you see it off?

JT: The objection is that sometimes a person should assert what he does not believe; and knowledge requires belief; so sometimes a person should assert what he does not know; so knowledge is not the norm of assertion. Advocates of this objection have presented thought experiments that, according to them, constitute counterexamples of just this sort. My response is that the argument fails for many reasons. First, the argument is invalid because it presupposes an unduly restrictive interpretation of what a “norm of assertion” would require. Some norms impose exceptionless requirements; others tolerate exceptions. Isolated examples cannot disprove abundantly evident central tendencies. Second, setting the argument’s invalidity aside, in the thought experiments, it is clear that the person does believe the proposition. As evidence of this, when hundreds of theoretically uncommitted adults considered the thought experiments, they strongly agreed that the person believes the proposition. Third, even if the person doesn’t believe, she might still know, at least on the ordinary understanding. As evidence of this, it has been repeatedly demonstrated that knowledge does not require belief, as those categories are ordinarily understood.

3:AM: What are factive accounts of norms of belief and decision making and why don’t you think critics of the position are right to argue that they are counterintuitive and mischaracterise ordinary practices of evaluating beliefs and decisions. And what’s at stake in this?

JT: Factive norms enjoin truth. A factive norm of assertion says that assertions should express truths; a factive norm of belief says that beliefs should be true; a factive norm of decision-making says that decisions should be based on truths (when deliberation occurs, at least). Critics have, as you note, claimed that factive norms are counterintuitive and misdescribe our ordinary evaluative practices. I think they’re wrong about this because results from many carefully controlled behavioral experiments show that ordinary evaluations are deeply sensitive to truth. Holding all else equal, people strongly judge that false propositions should not be asserted, false propositions should not be believed, and decisions should not be based on falsehood. As for what is at stake, this matters because one of philosophy’s principal tasks is to illuminate our ordinary concepts and practices, or what Wilfrid Sellars called “the manifest image.”

3:AM: What is virtue epistemology – is it helpful to link it with virtue ethics in some way?

JT: Roughly speaking, virtue epistemology is a current in contemporary anglophone “analytic” epistemology that takes “intellectual virtue” to be, somehow, at least nominally, essential to epistemological theorizing. Beyond that, it is hard to say anything general that is both true and useful, because virtue epistemologists disagree about what intellectual virtue is, about its proper role in epistemological theorizing, and about what such theorizing aims at. There have been many attempts to link it with virtue ethics, with varying degrees of success. I think that Mark Alfano’s work is a real highlight here. Intriguing also are Jason Baehr’s efforts in helping to found a middle school in Long Beach, California, based on a commitment to cultivating virtue.

3:AM: Do experiments showing us the link between inability and obligation in morals shed light on norms of epistemology?

JT: They could but presently any such light shines only indirectly. I guess that most Western intellectuals have heard the slogan “ought implies can,” or the view, endorsed by many moral philosophers, that if you have a moral obligation, it automatically follows that you’re able to fulfill it. These philosophers typically defend the view on the grounds that it is reflected in the very meaning of moral language or that it is a core commitment of commonsense morality. But there are theoretical reasons to reject the “ought implies can” principle, and some philosophers, myself included, do not find it the least bit plausible. But that is a separate question from whether commonsense morality is committed to “ought implies can,” which is something that Wesley Buckwalter and I set out to test a few years ago. The results were absolutely clear: commonsense morality implicitly rejects “ought implies can.” Over and over again, in a wide range of circumstances, we found that people overwhelmingly attributed moral obligations to people unable to fulfill them. In some cases, nearly 90% of people respond this way. The principal finding has been replicated in many ways and by multiple labs. At this point, it is clear that, contrary to what its philosophical proponents have claimed, “ought implies can” is revisionary.

Many philosophers have appealed to “ought implies can” when arguing about epistemic evaluation. On their view, just as inability obviously limits your moral obligations, it also limits your intellectual obligations. But “ought implies can” is no more plausible in the intellectual realm than in the moral. And while the matter has not been directly tested yet, I suspect that the intellectual version of the principle is equally revisionary.

[Image: David Lynch]

3:AM: Is abilism replacing reliabilism as the new epistemic paradigm?

JT: It hasn’t yet, but it definitely should! Reliabilism is the dominant paradigm in contemporary anglophone “analytic” theory of knowledge. According to reliabilism, knowledge is reliably produced true belief, where being produced “reliably” means having been produced by an ability that produces mostly true beliefs. The most charitable interpretation of reliabilism is that it is a theory of our ordinary knowledge concept, the one we use in everyday thought and talk. Given reliabilism’s paradigmatic status, you might think that it is supported by some very powerful theoretical arguments. But you’d be wrong — the paucity of explicit argumentation in its favor is surprising. In any event, reliabilism faces at least two fatal problems. First, as has been demonstrated repeatedly in experimental studies, beginning with a wonderful paper by Blake Myers-Schulz and Eric Schwitzgebel, knowledge does not require belief, as those categories are ordinarily understood. Second, in the same way, knowledge does not require reliability. The evidence for this comes from a series of carefully controlled behavioral experiments, in which people strongly attribute knowledge to individuals who usually get things wrong — up to 90% of the time! — and who are explicitly categorized as unreliable.

According to abilism, knowledge is an accurate representation produced by cognitive ability. (Assuming that the basic cognitive abilities are perception, memory, and inference, this amounts to roughly the view that knowledge is the detection, retention, or discovery of truths.) Abilism differs from reliabilism in three ways. First, knowledge need not be produced by a reliable ability; either reliable or unreliable abilities will do. Someone’s memory or vision might usually misrepresent the world, but on the occasions when it results in the retention or detection of truth, it could still count as knowledge. Second, knowledge does not require belief but, instead, only a representation or attribution. A visual representation or memory trace might not be a belief, but it could count as knowledge. In both of these respects, abilism explains the available behavioral data very well. The third difference from reliabilism is, at this point, purely theoretical and more tentative: knowledge does not require strict truth but, instead, only approximate truth, or what I call “accuracy.” I say that this disagreement is currently theoretical and tentative because the matter has not yet been tested; we really don’t have evidence one way or the other yet. If it turns out that people strongly tend to require strict truth rather than approximate truth, then I would amend abilism to reflect that fact. As an added bonus, abilism also provides a promising account of the knowledge concept operative in cognitive science and the science of animal behavior. And, arguably, something like abilism was a popular theory of knowledge thousands of years ago among Nyāya epistemologists in classical Indian philosophy.

Here is another way to see the motivation for abandoning reliabilism in favor of abilism. Consider a twelve-month-old human infant just beginning to walk. He is highly unreliable at walking, but nevertheless some of his first steps are accomplishments. Goals in soccer are accomplishments, but soccer players usually fail in their scoring attempts. When a native English speaker is learning Mandarin, her first grammatical Mandarin sentence is an accomplishment, but she is not reliable at speaking grammatical Mandarin. In general, accomplishments occur along the road to proficiency and do not require reliable abilities (that is, abilities that usually succeed). Therefore, since knowledge is an intellectual accomplishment, we should not expect it to require reliability either. Why should knowledge be different from any other human accomplishment in this respect?

3:AM: You argue that if someone blamelessly breaks a rule then no rule was broken? Is that right? Can you sketch out how you deal with this question and why it is important?

JT: Research in my lab discovered something surprising about how people respond to rule-breaking: judging that someone blamelessly broke a rule can lead people to claim, paradoxically, that no rule was broken at all. This is not my personal view but rather a description of a central tendency in human judgment. For instance, in one study, people read about Doreen, who just had her car serviced and is driving home. The speed limit is 60 miles per hour, and Doreen’s speedometer says that she is traveling 58 miles per hour. But, unknown to Doreen, something the mechanic did caused the speedometer to malfunction, and she is actually driving 63 miles per hour. Unsurprisingly, people understand the factual details of this case very well. Ask them how fast Doreen is driving, and everyone answers “63 miles per hour.” Ask them what the speed limit is, and everyone answers “60 miles per hour.” But ask them whether Doreen is breaking the speed limit, and roughly half answer “no.” Follow that up by asking them whether Doreen is unintentionally breaking the speed limit, and everyone answers “yes”! So she is driving over the speed limit but she is not breaking the speed limit, and even though she is not breaking the speed limit, she is unintentionally breaking the speed limit.

[Image: David Lynch]

Basically, people alter their description of obvious details, to the point of contradicting themselves, when they think that someone does not deserve to be blamed for doing something objectively wrong. People want to excuse the transgression and, as a result, roughly half of them say — and perhaps, on some level, even sincerely believe — that no transgression occurred … and that it unintentionally occurred! We call this “excuse validation.” (It is related to “blame validation,” an important discovery by psychologist Mark Alicke.) The evidence suggests that excuse validation is caused by a trade-off between accuracy and fairness in describing behavior. Even though excuse validation can seem illogical on the surface, speech can be used for many purposes, and it needn’t be unreasonable to prioritize outcomes other than accuracy. Interestingly, there is some evidence that the popularity of “ought implies can” and resistance to the knowledge account of assertion among philosophers, which we discussed earlier, are the result of excuse validation. In this respect, as in many others, philosophers reveal themselves to be all too human.

3:AM: And finally for the readers here at 3:AM, are there five books you could recommend to take us further into your philosophical world?

JT: Yes, there are, and they can be found in this set of six:

• Thomas Reid, An Inquiry into the Human Mind on the Principles of Common Sense

• Jane Goodall, In the Shadow of Man

• Stanley Milgram, Obedience to Authority: An Experimental View

• Peter Singer, Animal Liberation

• Edward Herman and Noam Chomsky, Manufacturing Consent: The Political Economy of the Mass Media

• Justin Sytsma and Jonathan Livengood, The Theory and Practice of Experimental Philosophy

ABOUT THE INTERVIEWER
Richard Marshall is still biding his time.

Buy his new book here or his first book here to keep him biding!

First published in 3:AM Magazine: Saturday, July 1st, 2017.