Saving Wittgenstein, Credence Knowledge and Semantics
Interview by Richard Marshall.
‘Once you see that credences can be knowledge, you realize that this view can solve a lot of philosophical problems. Lately, I’ve been applying probabilistic knowledge to problems with social and political consequences, like saying why you can’t convict someone based on merely statistical evidence, and why there’s something epistemically wrong with racial profiling.’
‘I’m hopeful that probabilistic knowledge will bridge the gap that currently exists between traditional and formal epistemology. Traditional epistemologists talk a lot about knowledge. Formal epistemologists talk a lot about credences. I argue that we can talk about both at once. We can agree with Tim Williamson and say that “knowledge is first.” But that doesn’t mean credences have to be second.’
‘Wittgenstein got stuck when it came to analyzing propositions about the colors of objects. Take four propositions about the color of some particular object A: that A is red, that A is yellow, that A is green, and that A is blue. These propositions aren’t logically independent from each other, so at most one of them could be an elementary proposition. But then the worry is that once we determine that one of these four propositions is elementary, we’ll have no way to analyze the three leftover propositions, since they won’t each be truth functions of the elementary color proposition. The proposition that A is red isn’t just the negation of the proposition that A is yellow, for instance. And so it appears impossible to analyze these four color propositions into logically independent elementary propositions. A lot of people think that this problem motivated Wittgenstein to abandon the central project of the Tractatus.’
Sarah Moss works primarily in epistemology and the philosophy of language, and often on questions at the intersection of these subfields. Her recent papers explore how updating de se credences resembles communicating de se beliefs, for instance, and what semantic theories accommodate intuitive norms governing credences in counterfactuals. She has recently finished a book manuscript defending a unified probabilistic theory of the contents of belief, assertion, and knowledge. Her current work concerns the consequences of her theory of probabilistic knowledge for how we think about racial profiling and legal standards of proof. Here she discusses why the semantics of epistemic expressions is important, her probabilistic theory of epistemology, credences, linguistics and philosophy of language, approaches to theories of time, imprecise credences, time-slice epistemology and finally why Wittgenstein abandoned the Tractatus and why he needn’t have. Elegant…
3:AM: What made you become a philosopher?
Sarah Moss: A series of lucky accidents and generous advisors. I was a math major in college and was all set to go to math grad school—I took the Math GRE; I spent the summer doing some beautiful research at the NSA. I would have been very happy as a mathematician. But then I happened to take a class with Chris Korsgaard and fell head over heels for philosophy. I could exercise just the same parts of my brain, but on problems with meaningful consequences. It was perfect.
After college, I did the B.Phil. at Oxford. My first tutorial supervisor happened to be Tim Williamson, who assigned me a lot of essays in language and epistemology. I hadn’t taken those subjects before, so it was a steep learning curve. I remember that in the beginning, Tim had sort of an amused smile a lot of the time during our meetings. At the time, I had no idea why. Now I know it was because I was fumbling my way through inventing ideas and making some hideous mistakes along the way. I remember reasoning aloud to the conclusion that identity was relative to a sortal. And assuming at one point that knowledge was justified true belief. I’m sure it was hilarious. Tim was extremely patient. Both Chris and Tim: I owe a lot to their mentorship as I was getting started. Later, when I was a grad student at MIT, Bob Stalnaker had an enormous influence on my philosophical values and interests. But I am also grateful that in the early days, Chris and Tim had faith in me before I knew what I was doing.
3:AM: You’ve argued for a novel semantics of several epistemic expressions. Before giving us the details, can you say something about the philosophical stakes in this area of work – why is this an important branch of philosophy and not just a piece of linguistics?
SM: I think the semantics of epistemic vocabulary is connected to fundamental questions in the philosophy of mind and language, as well as epistemology. I just finished writing a book where I spell out those connections. Here’s the basic idea. Traditional semantics assigns something like propositions as the contents of sentences at a context. Tradition also has it that propositions are the contents of belief and knowledge. I argue that we should change all of that. Instead of propositions, my semantics assigns probabilistic contents to sentences. For example, take the sentence ‘it’s unlikely that Jones smokes’. I argue that the semantic content of that sentence is a set of probability spaces—the set of probability spaces that assign low probability to the proposition that Jones smokes. The same goes for epistemic modal sentences and conditionals; on my view, they all get probabilistic contents.
And now for the payoff: I argue that these probabilistic contents are not just the contents of sentences, but also contents of belief and knowledge. Say you have .4 credence that Jones smokes. I say that this just means that you believe a certain probabilistic content. And in just the same way that your full beliefs can constitute knowledge, I say that your .4 credence can constitute knowledge. So my story starts with a fairly innocuous claim about the semantics of words like ‘unlikely’, and ends with a radical claim about what sorts of mental states get to be knowledge. The best part is that once you see that credences can be knowledge, you realize that this view can solve a lot of philosophical problems. Lately, I’ve been applying probabilistic knowledge to problems with social and political consequences, like saying why you can’t convict someone based on merely statistical evidence, and why there’s something epistemically wrong with racial profiling.
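The idea of a probabilistic content can be made concrete with a small sketch (the encoding here is an editorial illustration, not Moss’s own formalism): model a probability space over two worlds as a single number p = Pr(Jones smokes), so that a probabilistic content is a set of such spaces, represented as a predicate on p.

```python
# A minimal sketch (an illustration, not Moss's formalism):
# a probability space over two worlds -- one where Jones smokes,
# one where she doesn't -- is represented by p = Pr(Jones smokes).
# A probabilistic content is a set of such spaces, here a predicate on p.

def unlikely_jones_smokes(p):
    """Content of 'it's unlikely that Jones smokes': the set of
    probability spaces assigning low probability to smoking.
    The 0.5 threshold is an assumption for illustration."""
    return p < 0.5

def credence_is(value):
    """Content believed by someone with exactly `value` credence that
    Jones smokes: the set of spaces where Pr(smokes) equals value."""
    return lambda p: p == value

# Someone with .4 credence that Jones smokes believes a content whose
# every member is also in the content of 'it's unlikely that Jones
# smokes' -- so the first content entails the second.
assert unlikely_jones_smokes(0.4)
assert not unlikely_jones_smokes(0.7)
```

On this picture, asserting ‘it’s unlikely that Jones smokes’ and having a .4 credence are both ways of standing in a relation to sets of probability spaces, which is what lets the same notion of content serve sentences, beliefs, and knowledge at once.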
3:AM: So how do you approach this and why is your approach superior to standard truth conditional theories?
SM: The standard view is that sentences like ‘it’s unlikely that Jones smokes’ express full beliefs, as opposed to credences or probabilistic beliefs. There has been a lot of discontent with this view recently—especially in the last ten years, which have seen an explosion of relativist and expressivist theories of epistemic modals. A lot of arguments against the standard view depend on facts about whether particular sentences sound good or bad. In my book, I give more foundational arguments against the standard view, arguments that depend on theoretical advantages as opposed to ordinary language judgments.
For example, spot me some basic assumptions from standard decision theory, like the fact that we have credences and that our actions ought to be guided by them. Say you are deciding whether to buy some cigarettes as a birthday present for your friend Jones. You might think to yourself, “It’s unlikely that Jones smokes, and it’s fairly probable that she likes flowers, so I should get her flowers instead of cigarettes.” As I see it, the natural view here is that you’re reasoning with credences, the same sorts of partial beliefs that ought to guide your actions according to standard decision theory. If credences can play this role in your thinking to yourself, then the natural view is that they can play the same role when you are thinking aloud. In other words, given that we believe probabilistic contents, the natural view is that we can also judge them and assert them. By contrast, the standard view imposes an ad hoc restriction on the contents that we can assert, limiting them to a small fraction of the contents that we believe.
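The reasoning in the gift example can be rendered as a toy expected-utility calculation in the style of standard decision theory (the specific credences and utility numbers below are invented for illustration):

```python
# A toy expected-utility calculation (all numbers invented for
# illustration) showing how credences guide the gift decision above.

credence_smokes = 0.1         # 'it's unlikely that Jones smokes'
credence_likes_flowers = 0.8  # 'it's fairly probable that she likes flowers'

# Expected utility of each gift, weighting the utility of the gift
# landing well or badly by your credence in the relevant proposition:
eu_cigarettes = credence_smokes * 10 + (1 - credence_smokes) * -5
eu_flowers = credence_likes_flowers * 8 + (1 - credence_likes_flowers) * 0

# The credences favor flowers, matching the reasoning in the interview.
assert eu_flowers > eu_cigarettes
```

The point of the example is not the particular numbers but the role the credences play: the very states that standard decision theory says should guide action are the ones doing the work in the spoken reasoning.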
3:AM: What are credences when used in relation to epistemology and knowledge claims? Are they alternatives to beliefs?
SM: The credences that I am talking about are sometimes called degrees of confidence, or subjective probabilities. They’re measured on a scale from 0 to 1, and it’s often said that if you’re rational, your credences have to obey the probability axioms. If you have .4 credence that Jones smokes, for instance, then you had better have .6 credence that she doesn’t smoke.
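The coherence constraint just mentioned can be stated as a small check (a sketch of one consequence of the probability axioms, with the tolerance parameter an implementation detail):

```python
# A minimal sketch of the coherence constraint described above:
# credences in a proposition and its negation should each lie in
# [0, 1] and sum to 1 (a consequence of the probability axioms).

def coherent(credence_p, credence_not_p, tol=1e-9):
    """Check that a pair of credences in a proposition and its
    negation obeys the probability axioms."""
    in_range = 0.0 <= credence_p <= 1.0 and 0.0 <= credence_not_p <= 1.0
    sums_to_one = abs(credence_p + credence_not_p - 1.0) < tol
    return in_range and sums_to_one

# .4 credence that Jones smokes requires .6 credence that she doesn't:
assert coherent(0.4, 0.6)
assert not coherent(0.4, 0.5)
```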
The exact relationship between credences and full beliefs is a very hard question, more complicated than it appears at first. For instance, a lot of people think that as you gain more justification for fully believing a proposition, your credence in that proposition should go up. In my book, I argue that this is a mistake. It’s misleading to think of credences as “degrees of belief.” Credences are themselves just a different kind of belief, namely beliefs with probabilistic contents.
With respect to epistemology and knowledge claims, I’m hopeful that probabilistic knowledge will bridge the gap that currently exists between traditional and formal epistemology. Traditional epistemologists talk a lot about knowledge. Formal epistemologists talk a lot about credences. I argue that we can talk about both at once. We can agree with Tim Williamson and say that “knowledge is first.” But that doesn’t mean credences have to be second.
3:AM: Does linguistics have a role to play in philosophy of language?
SM: Well, linguistics often plays a useful role in my work. In my book, for instance, I use some linguistics literature on the phenomenon of loose speech to defend a theory of the relationship between simple sentences like ‘Jones smokes’ and sentences containing probability operators. Then I use that theory to answer the very hard question I just mentioned about exactly how credences are related to full beliefs. So linguistic theories can play an important role not just in the philosophy of language, but in other subfields too.
3:AM: Philosophy had a ‘linguistic turn’ but I thought that was so last century? So how has philosophy of language moved on from the critiques that seemed to make it obsolete? Why should we all heed what you guys are arguing?
SM: I don’t know that I would defend general claims about the philosophical value of linguistics or philosophy of language. Both have been instrumental in my work in epistemology, metaphysics, and philosophy of mind. But it’s not as if you can say in advance exactly what role linguistics will play in these subfields. You just have to learn some linguistics, and then go do some philosophy, and see where the linguistics comes in handy. Same for philosophy of language. Epistemic contextualism is a classic example of where philosophy of language has been useful in epistemology. Probabilistic knowledge is another. The stage theory of persistence is a good example of philosophy of language playing a useful role in metaphysics. In any subfield, philosophy of language will prove its relevance by solving problems. As long as that keeps happening, philosophy of language is not in danger of being obsolete.
3:AM: What is a four-dimensionalist theory of persistence trying to do and why do you say an error theory of such things is superior to Ted Sider’s theory of persistence?
SM: Four-dimensionalists think that in addition to spatial parts, objects have temporal parts. There is a part of you that exists just from 3am to 4am, for instance. There are other parts of you that last for only a split second before going out of existence. Ted Sider and I are both four-dimensionalists, but we disagree about one further question—namely, what sorts of objects do we ordinarily talk about? When I say ‘Richard Marshall’, am I talking about a short-lived object, or a long one? A temporal stage that only exists at the instant I say your name, or a worm that persists for years before and after?
The worm theory is the more natural view. But it seems to have some problems. Say you have a ship sitting alone in a harbor, and say that later it will be split apart into two ships. Suppose you ask, “How many ships are currently in the harbor?” According to some worm theorists, you should say “Two,” since both of the ships that will exist after the split are currently in the harbor. It is just that these ships are located in exactly the same place right now. In his book, Ted Sider defends a stage theory of persistence that avoids this counterintuitive result. Sider argues that in fact, we often count instantaneous stages of ships. We can truly say there is just one ship currently in the harbor, namely the common temporal part of the two ship worms that will later diverge. Now, an initial problem for Sider is that stage theorists have to say odd things about other counting sentences. Suppose you ask, “How many ships have ever existed?” If you are counting instantaneous stages of ships, the answer will be “Infinitely many,” which is again a bad result! In response to this problem, Sider ultimately endorses a hybrid of the stage theory and worm theory of persistence. He thinks that whether we count worms or stages depends on context.
Unfortunately, I don’t think the hybrid theory fixes all the problems. Go back to the case where one ship is sitting in the harbor, and now ask, “How many ships were in the harbor during this last hour?” The intuitive answer is “One.” But if you are counting worms, the answer is “Two,” and if you are counting stages, the answer is “Infinitely many.” No hybrid of these two bad theories is going to give you the intuitive answer. There are fancier theories you can try. But ultimately, I think that four-dimensionalists should have gotten off the bus much earlier. When we do metaphysics and become convinced that two ships can be in the same place at the same time, we should change our minds about how many ships are in front of us in the harbor in the fission case. We should not shy away from our disagreement with our previous selves. We should go from thinking that our ordinary counting sentences are true to thinking that they are strictly speaking false.
3:AM: What’s time-slice epistemology, and why do you defend it? And who thinks you’re wrong, and why?
SM: Time-slice epistemology is roughly the view that whether you are rational right now just depends on facts about you right now, and not on facts about what you believed or how you acted before. I’d say that until recently, almost everyone thought that this view was wrong. At first, it just seems like a non-starter. For instance, say that yesterday, you believed that squirrels can swim. It would be irrational for you to just stop believing that today, if you haven’t learned anything about squirrels in the meantime. And a very natural way to explain this fact is to endorse a diachronic norm like “In the absence of new evidence, hold on to your beliefs over time!”
According to time-slice epistemology, though, we don’t really need diachronic norms in order to say what rationality requires of you. Rather, the fact that you should generally hang on to your beliefs is actually just a consequence of a synchronic norm of rationality—namely, that at any given moment, you ought to believe exactly what your evidence supports. From this, it follows that if your evidence stays the same, what you ought to believe will stay the same. But notice that temporally local facts about your past and current evidence are doing all the work, when it comes to determining what rationality requires of you. I think this view is elegant. In “Time-Slice Epistemology and Action Under Indeterminacy,” I argue that it avoids the ad hoc restrictions that people often throw onto diachronic norms. Also, I agree with Brian Hedden that this view accounts for similarities between the relations that you bear to your past and future self, on the one hand, and the relations that you bear to other people, on the other.
3:AM: What do imprecise credences show us about the way we make decisions? Do they flush out the complexity of our intuitions when trying to make rational decisions, a complexity sometimes missed in the philosophical literature?
SM: There is some disagreement in the literature about how you should make decisions when your credences are imprecise. As I see it, having imprecise credences is a lot like having incommensurable values. Here is a classic moral dilemma from Sartre: say you value being a good son and being a good soldier, and there is just no way for you to weigh those values against each other. Then you might be torn about whether you ought to spend the next year at home with your mother or fighting in the army. As I see it, the same problem can arise when you are torn between different credences, as opposed to torn between different values. Suppose that the only thing you value is being an effective soldier, but your evidence leaves it completely open whether the army or the navy is more effective. Then you might have imprecise credences about which branch of the military is more effective, and as a result, you might be torn about which branch you ought to enter. Just like with moral dilemmas, standard decision theory does not have the resources to help you make a decision here.
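The soldier example can be sketched formally by representing an imprecise credal state as a set of probability functions (this encoding and its numbers are an editorial illustration of why expected-utility maximization goes silent):

```python
# Sketch: an imprecise credal state as a set of probability functions
# (encoding and numbers invented for illustration). Each p is
# Pr(the army is more effective than the navy).

credal_set = [0.3, 0.5, 0.7]  # the evidence leaves the matter open

def eu_join_army(p):
    # You value only being an effective soldier: utility 1 if you
    # join the more effective branch, 0 otherwise.
    return p * 1 + (1 - p) * 0

def eu_join_navy(p):
    return (1 - p) * 1 + p * 0

# Expected-utility maximization yields no verdict: some members of
# the credal set favor the army, others do not.
verdicts = [eu_join_army(p) > eu_join_navy(p) for p in credal_set]
assert any(verdicts) and not all(verdicts)
```

This mirrors the structure of Sartre’s dilemma: just as no common scale ranks the two values, no single probability function in the credal set speaks for the agent, so standard decision theory cannot break the tie.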
3:AM: Does time-slice epistemology help understand particular ethical dilemmas involving incommensurate values?
SM: If anything, it’s closer to the reverse! I argue in “Credal Dilemmas” that our theory of how we should act on imprecise credences should be informed by our theory of how we should act on incommensurable values. Say you intend to join the army, and you even go out and buy an army uniform, but then you have a change of heart and decide to stay at home with your mother. You might end up worse off than if you had simply acted on consistent values from the start. But I wouldn’t necessarily say your change of heart is irrational. I think that if that’s right, then we should be similarly forgiving of agents with imprecise credences. Say you are torn between two opinions, and you start by acting according to one opinion, and then later change your mind and act according to another. I wouldn’t necessarily say you are irrational, even if changing your mind leaves you worse off than if you had simply picked some precise credence to act on from the start.
3:AM: Wittgenstein and his Philosophical Investigations is often thought of when we talk about the linguistic turn. But you’ve spent some time looking at his Tractatus. There are some who argue that he abandoned the Tractatus because of the colour incompatibility problem. So firstly, what is the problem and why was it a problem for Wittgenstein?
SM: The early Wittgenstein thought that all propositions could ultimately be analyzed as truth functions of elementary propositions, where these elementary propositions were logically independent from each other. This logical atomism was a central theme of his Tractatus. But Wittgenstein got stuck when it came to analyzing propositions about the colors of objects. Take four propositions about the color of some particular object A: that A is red, that A is yellow, that A is green, and that A is blue. These propositions aren’t logically independent from each other, so at most one of them could be an elementary proposition. But then the worry is that once we determine that one of these four propositions is elementary, we’ll have no way to analyze the three leftover propositions, since they won’t each be truth functions of the elementary color proposition. The proposition that A is red isn’t just the negation of the proposition that A is yellow, for instance. And so it appears impossible to analyze these four color propositions into logically independent elementary propositions. A lot of people think that this problem motivated Wittgenstein to abandon the central project of the Tractatus.
3:AM: So do you show that Wittgenstein needn’t have abandoned his project? How does your approach differ from others?
SM: I think that the color incompatibility problem rests on a mistake. It’s true that the proposition that A is red can’t be analyzed as a truth function of the proposition that A is yellow. But suppose that none of the four color propositions mentioned above is elementary. Instead, let’s say there are two elementary propositions: that A is red or yellow, and that A is red or green. These two propositions are logically independent. And our initial four color propositions are truth functional combinations of them. The proposition that A is red is just their conjunction, for instance. And so it turns out that our initial set of four color propositions can be analyzed as truth functions of elementary propositions after all. In my paper on the Tractatus, I argue that given certain natural assumptions, the same goes for any set of propositions. So Wittgenstein needn’t have abandoned his project—on the contrary, we can prove his project can be carried out. My approach to the color incompatibility problem is different from most because I refrain from assuming that the surface structure of a sentence reflects its logical form. As long as we keep in mind the possibility that ordinary expressions like ‘red’ are not logically simple, we can solve the color incompatibility problem.
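The analysis Moss describes can be checked mechanically. In the sketch below, the two elementary propositions are P = ‘A is red or yellow’ and Q = ‘A is red or green’; Moss says red is their conjunction, and the truth functions for the remaining three colors are reconstructed to follow the same pattern:

```python
from itertools import product

# Checking the analysis: with elementary propositions
# P = 'A is red or yellow' and Q = 'A is red or green',
# each color proposition is a truth function of P and Q.

def red(P, Q):    return P and Q              # red: the conjunction
def yellow(P, Q): return P and not Q          # reconstructed
def green(P, Q):  return (not P) and Q        # reconstructed
def blue(P, Q):   return (not P) and (not Q)  # reconstructed

colors = [red, yellow, green, blue]

# P and Q are logically independent: all four truth-value combinations
# are possible. Each combination makes exactly one color true, so the
# four color propositions come out pairwise incompatible, as desired.
for P, Q in product([True, False], repeat=2):
    assert sum(bool(c(P, Q)) for c in colors) == 1
```

The check confirms the key point: nothing in the color incompatibilities blocks analysis into logically independent elementaries, so long as the elementaries need not match the surface vocabulary of ‘red’, ‘yellow’, ‘green’, and ‘blue’.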
3:AM: And finally, for the readers here at 3:AM, are there five books you can recommend that will take us further into your philosophical world?
SM: If my book had two parents, they would be Stalnaker’s Context and Content and Williamson’s Knowledge and Its Limits, and I would recommend those classics to anyone.
Any philosophers interested in time-slice epistemology should check out Brian Hedden’s recent book Reasons Without Persons.
For fun reading, The Minority Body is currently on my bedside table, and I’m loving every page of it. And I guess it would be strange not to mention that I am excited for my own book to come out with Oxford University Press next year—for the last three years, my philosophical world has been revolving around Probabilistic Knowledge!
ABOUT THE INTERVIEWER
Richard Marshall is still biding his time.
First published in 3:AM Magazine: Saturday, November 26th, 2016.