
medieval matters

Stephen Read interviewed by Richard Marshall.

[Photo: Oleg Voskoboynikov]

Stephen Read is the philosopher who brings it all back home from the heady days of the late middle ages and further. He’s always brooding on how great the medievals were at logic, about the two schools, about Aristotle and Buridan, about octagons of opposition, about why we should all be reading Thomas Bradwardine the paradox crusher who has solutions to Fitch, the Knower, the liar, Curry’s, Field’s, Pseudo-Scotus’s, about how Aristotle and the medievals got to LOT before Fodor, about why Unger is dead wrong and why logic can’t replace metaphysics contra Williamson. Take this one a sip at a time, like a really slow burn malt…

3:AM: What made you become a philosopher? Was it something you expected to become?

Stephen Read: With hindsight, I can see how it happened, but at the time I didn’t realise it. My favourite subjects at school were literature (especially, poetry) and mathematics. My mathematics teacher, a charismatic and wonderfully eccentric man, Bryan Spielman, had me read Bertrand Russell’s Introduction to Mathematical Philosophy, but I still really had no idea what philosophy was. I chose the university I wished to attend, the University of Keele in the English Midlands, on the basis, first, that it was the only university offering joint degrees in English and Mathematics, and secondly, because it offered, uniquely in England at that time, a four-year degree in which the first year was spent on a foundation course covering every subject in the humanities, social sciences and the natural sciences. It was during that first year that I discovered philosophy properly, though even then I had no ambitions to become a philosopher. That came later during postgraduate work, in Bristol and Oxford, on logic and philosophy of language. Then I was hooked, and fortunately, able to pursue it as a career, when I was offered a lectureship in Logic and Metaphysics here at the University of St Andrews in Scotland.

3:AM: You’ve specialized in logic and done pioneering work on medieval logic. Perhaps we could start there. Can you explain the importance of the discovery of Aristotle’s logic in the twelfth century to the subsequent blossoming of the medieval logicians? I guess an obvious question is why Aristotle got lost in the first place and why he was rediscovered? Were there contextual issues that made his logic suddenly seem relevant? And why did it take so long before Frege and Russell et al. moved to a more mathematised version of logic?

SR: Speaking of the discovery of Aristotle’s logic (and the rest of his philosophy) is rather like talking about the discovery of America or Australia—the native inhabitants had known about it long before the Europeans arrived. Actually, Aristotle’s works were lost twice, the first time within about two hundred years of his death in 323 BCE, many of them irretrievably. What survived was assembled by Andronicus of Rhodes in around 60 BCE. The second loss was more relative, with the collapse of Roman civilisation in Western Europe. Aristotle’s writings were preserved in the Eastern Empire, and, translated into Arabic, were well known to Islamic scholars. Eventually, Western Europe emerged from its darkness, during which only a couple of Aristotle’s logical treatises and some other elementary works were read and studied, and the rest of Aristotle’s logic was translated from Greek and Arabic into Latin in the 12th century, and Aristotle’s other works by the middle of the 13th century.

The history of the revival of logic in the late middle ages (12th – 15th centuries) is a complex one, and there are competing interpretations of what happened. One of the greatest logicians of that period, Peter Abelard, composed his major works before much of Aristotle’s logic had been recovered, but his ideas, coupled to a fascination with the gradual emergence of Aristotle’s quite phenomenal treatises, led to a development of great novelties in the 13th and 14th centuries. Why then—and why not earlier, in the ancient world—are questions I can’t answer. All I know is that there was more work done in logic, and better work, in the 14th century than in any century other than the fourth century BCE (in which Aristotle is the only significant figure whose work has survived), and the 20th century. Several of my colleagues protest at my saying that, claiming 19th century logic as important—which it is, and the work of Boole, Frege, Schröder and perhaps others was essential for what came later, but it’s no better and much less in extent than what we find in the 14th, with Ockham, Burley, Buridan, Albert of Saxony, Heytesbury, Wyclif, and many, many more.

Your question about the late arrival of a mathematized version of logic is easier to answer. It awaited the development of sufficiently sophisticated mathematics, which did not come until the 19th century and the development of significant advances in algebra by Galois, De Morgan, Cayley and others: that was a prerequisite to Boole’s and Schröder’s algebra of logic. The other spur to the development of logistic (a language of logic) came from the pursuit of rigour in the foundations of mathematics during the 19th century in response to the inconsistencies in mathematical analysis as it tackled the infinite and the infinitesimal in the calculus. Finally, once Frege had developed a “concept-script” (Begriffsschrift) he was able to reduce proofs to a sequence of elementary and rigorously sound steps, formalizing the notion of a proof.

3:AM: Two schools flourished – one in Paris and the other in Oxford. Can you say why they went different ways with their ideas of material and formal consequence? Is this issue of consequence the key to understanding why these two schools divided or do the differences spread over a much broader field of interests?

SR: Back to the middle ages. I’ve just been writing about the Oxford/Paris split in a chapter on 14th-century logic for the Cambridge Companion to Medieval Logic which I am co-editing with Catarina Dutilh Novaes. Again, recording what happened is easier than explaining why, though even establishing the facts is tricky, depending on what texts happen to survive from before the invention of printing, when works existed only in manuscript on precious sheepskin and even rarer and more precious paper.

During the 13th century there were just two main centres of learning in Western Europe, Paris and Oxford (though there were many other smaller schools) and the best and most creative scholars migrated to them. Two logical notions (among others) had been developed towards the end of the previous century, signification and supposition, the former quite like our notion of meaning, the latter a blend of our ideas of reference and quantification, but really rather different from both. In their analysis of the language of arguments, logicians attributed different modes of signification and of supposition to different expressions to explain their logical behaviour. In Paris, modes of signification became the preferred tool of analysis, though eventually it seems to have proved inadequate to the task. Oxford seems to have preferred modes of supposition as its tool of choice, and it was this which won out in the end, being reintroduced to Paris in the early 14th century (possibly by Burley). For example, in ‘Every man is running’, ‘man’ was said to supposit distributively (that’s its mode of supposition) for all (existing) men, and ‘running’ was said to supposit confusedly for any (existing) runners, and the proposition would be true if all men were included among all the runners. There were other properties of terms (as signification and supposition are called), but in time, these two properties became the centrepiece of the logical analysis of language among the medievals.

3:AM: You’ve looked long and hard at Buridan who was a key figure of the Parisian school. Was he the main rediscoverer of Aristotle? What could we do that we couldn’t before he came along?

SR: No, Aristotle’s works had all been recovered and discussed extensively about a hundred years before Buridan was writing, which was from the 1320s to the 1350s. In some ways, he’s the first to make a significant break with Aristotle on logic, though he doesn’t put it like that—his is a much subtler opposition. Buridan was highly unusual in continuing to teach and research in logic (and other philosophical matters) throughout his life. Most other logicians wrote their works when young teaching masters in the Arts Faculty, before going on to a career in the church, or law, or medicine, and/or moving on to write on theology (when they would often still discuss philosophy and even logic). Buridan continued to develop his ideas, and remained a teacher in Arts.

Logical ideas, in particular, the concept of logical consequence (what follows from what) had been developed since the 12th century, but Buridan is the most systematic and radical of thinkers about it. I can pick out two things he introduced (or at least, made clearer for future generations) concerning logical consequence. Aristotle’s logic focussed on the theory of the syllogism, of what can be inferred from two premises, each of one of four forms (A-form: ‘Every S is P’, E-form: ‘No S is P’, I-form: ‘Some S is P’, O-form: ‘Not every S is P’ or equivalently, ‘Some S is not P’) and their modalizations with ‘necessarily’ and ‘possibly’—e.g., ‘Every S is necessarily P’, ‘No S is possibly P’. But his theory presupposed that there are other sorts of inference, e.g., conversions—one-premise inferences from, e.g., ‘Some S is P’ to ‘Some P is S’ where subject and predicate are switched—and reductio per impossibile arguments, e.g., if assuming a proposition leads one to contradict a given truth, one can infer its contradictory.

Aristotle used such a method in proving the validity of some of his syllogisms. (Incidentally, I’ll use the word ‘proposition’ to refer to a declarative sentence, as the medievals did, not in the modern sense of the meaning of such a sentence.) The medievals started to develop a more general theory of inference subsuming the syllogism but covering very much more. Buridan’s major contribution came not just in giving a clear statement of this more general theory, but in identifying the core notion of consequence as that of formal consequence, one “true in all terms”, as he puts it. Logical consequence is a matter of logical form, resulting from the meaning of the logical terms and the structure of the proposition, holding regardless of what non-logical terms are involved.

The second innovation of Buridan’s, one that lasted down the centuries right until I was a student, was a new method of validating and invalidating putative syllogisms, replacing Aristotle’s reliance on conversion and reductio per impossibile. The central notion of the new method was that of the distributive mode of supposition. Some terms in syllogistic propositions distribute what is said over all the things they supposit for, namely, ‘S’ in ‘Every S is P’, ‘S’ and ‘P’ in ‘No S is P’ and ‘P’ in ‘Some S is not P’—i.e., some S is none of the Ps. Then Buridan can sum up the validity of the syllogism in three necessary and sufficient rules: any term distributed in the conclusion must be distributed in its premise; the middle term (the term common to the two premises) must be distributed in at least one premise; and exactly one premise must be negative if and only if the conclusion is. You and your readers may recall these rules from elementary logic class.
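For readers who remember those elementary logic classes, the three rules can be stated mechanically. Here is a minimal Python sketch of them—the encoding, helper names and worked examples are mine, not Buridan’s, and I have added the traditional proviso that nothing follows from two negative premises:

```python
# Each proposition is a triple (form, subject, predicate), form one of A/E/I/O.
DIST = {                     # which places are distributed in each form
    "A": ("subj",),          # Every S is P: S distributed, P confused
    "E": ("subj", "pred"),   # No S is P: both distributed
    "I": (),                 # Some S is P: neither distributed
    "O": ("pred",),          # Some S is not P: P distributed
}
NEGATIVE = {"A": False, "E": True, "I": False, "O": True}

def distributed(prop, term):
    form, subj, pred = prop
    return (term == subj and "subj" in DIST[form]) or \
           (term == pred and "pred" in DIST[form])

def buridan_valid(major, minor, conclusion):
    form_c, s, p = conclusion
    # The middle term is the one common to both premises, absent from the conclusion
    middle = (({major[1], major[2]} & {minor[1], minor[2]}) - {s, p}).pop()
    # Rule 1: any term distributed in the conclusion is distributed in its premise
    for term in (s, p):
        if distributed(conclusion, term):
            prem = major if term in (major[1], major[2]) else minor
            if not distributed(prem, term):
                return False
    # Rule 2: the middle term is distributed in at least one premise
    if not (distributed(major, middle) or distributed(minor, middle)):
        return False
    # Rule 3: exactly one premise is negative iff the conclusion is negative
    # (plus the traditional proviso, my addition: nothing follows from two negatives)
    negs = NEGATIVE[major[0]] + NEGATIVE[minor[0]]
    return negs <= 1 and (negs == 1) == NEGATIVE[form_c]

# Barbara (valid): Every M is P, every S is M, so every S is P
print(buridan_valid(("A", "M", "P"), ("A", "S", "M"), ("A", "S", "P")))  # True
# Undistributed middle (invalid): every P is M, every S is M, so every S is P
print(buridan_valid(("A", "P", "M"), ("A", "S", "M"), ("A", "S", "P")))  # False
```

The second example fails precisely on Rule 2: ‘M’ is the predicate of both A-form premises, so it is nowhere distributed.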

3:AM: What were Buridan’s octagons of opposition? Do they help us understand the difference between the way these guys approached their logical systems and current practice, and illustrate the claim that Catarina Dutilh Novaes makes that these are just as rigorous and precise as modern logic – but not as powerful?

SR: Aristotle spends some time and effort clarifying the logical relations between the four propositional forms we noted just now, but Apuleius of Madaura (the same man who wrote the raunchy Latin novel, The Golden Ass), in the second century CE, was the first to refer to a Square of Opposition with contradictory propositions at opposite corners. Buridan generalized the square to an octagon to capture his theory of the modal syllogism (a considerable departure from Aristotle’s own theory), so that contradictory modal propositions, e.g., ‘Some S is possibly P’ and ‘No S is possibly P’, stand at diametrically opposite corners of the octagon. He proceeded to show that other forms of proposition, propositions with oblique terms, e.g., ‘Some bishop’s donkey is running’, and propositions of (as he called them) “unusual construction” (I’ll elaborate in a moment) share the same octagonal relationships. As is often said, a picture is worth a thousand words, and these “big figures” (as Buridan called them) display graphically a wealth of detail of logical connections between quite complicated propositions. Buridan notes that there are potentially 28 relations between eight nodes, and only four cases fail to define an interesting relation.

The other pairs are contradictory (cannot both be true or both false), contrary (cannot both be true but might both be false), subcontrary (cannot both be false but might both be true) and subalternate (one implies the other). Moreover, at each node, Buridan lists nine equivalent forms (e.g., ‘It’s possible that everyone is running’, ‘It’s not necessary that everyone is not running’, ‘It’s not impossible that everyone is running’, ‘It’s not necessary that no one is running’ and so on). It’s frequently said that the techniques of medieval logic were not as powerful as those of modern logic, and the development of mathematical methods during the 20th century makes that true, but medieval logic was highly sophisticated. Terry Parsons, in his recent book, Articulating Medieval Logic, challenges many of the specific claims that have been made concerning medieval inadequacies.
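These four relations can even be recovered mechanically. Here is a minimal Python sketch—my own illustration, not a medieval method—that brute-forces the assertoric A/E/I/O forms over tiny models, following the medieval convention that affirmatives carry existential import and negatives do not:

```python
from itertools import product

DOMAIN = range(2)

def subsets(xs):
    xs = list(xs)
    return [frozenset(x for x, keep in zip(xs, bits) if keep)
            for bits in product([0, 1], repeat=len(xs))]

def truth(form, S, P):
    if form == "A": return bool(S) and S <= P     # Every S is P (import: S nonempty)
    if form == "E": return not (S & P)            # No S is P (no import)
    if form == "I": return bool(S & P)            # Some S is P
    if form == "O": return not S or not (S <= P)  # Some S is not P (true if S empty)

def relation(f, g):
    pairs = [(truth(f, S, P), truth(g, S, P))
             for S in subsets(DOMAIN) for P in subsets(DOMAIN)]
    both_true  = any(a and b for a, b in pairs)
    both_false = any(not a and not b for a, b in pairs)
    if not both_true and not both_false: return "contradictory"
    if not both_true:  return "contrary"
    if not both_false: return "subcontrary"
    if all(b for a, b in pairs if a): return "subalternate"  # f implies g
    return "independent"

print(relation("A", "O"))  # contradictory
print(relation("A", "E"))  # contrary
print(relation("I", "O"))  # subcontrary
print(relation("A", "I"))  # subalternate
```

Replacing the four assertoric forms with Buridan’s eight modal nodes would, in the same way, classify all 28 pairs of corners of his octagon.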

3:AM: Does the difference between Latin and the vernacular languages of the middle ages create problems for these logicians? Is this a problem that modern mathematised logical systems can avoid, and is that one of the advantages of the modern systems, or is something lost by jettisoning the vernacular?

SR: I don’t think the relation between the international language of the church and the universities, Latin, and the vernacular languages created particular problems, though one can clearly see the effect of those vernacular languages on the development of Latin over the thousand and more years from classical antiquity to the late middle ages. For example, classical Latin is an SOV language, in which in its simplest sentences, the verb comes last. (Talk of free word order, though to some extent true in classical Latin, is really only true of poetry, and one sees that too in English.) The Frankish languages of those who invaded Europe used our more familiar SVO word order, with the verb separating subject and object. The medieval logicians wanted to formulate rules to capture the logical interpretation and connections of different sentences, and having a fixed syntactic structure helped in this regard. For example, one rule said that an expression like ‘every’ distributes the term immediately following it, and confuses the term mediately following (as exemplified in ‘Every S is P’), another that what distributes the undistributed renders undistributed the distributed—an example is negation: where ‘S’ is distributed and ‘P’ confused (undistributed) in ‘Every S is P’, ‘S’ is undistributed (in fact, has something called determinate supposition) and ‘P’ is distributed in ‘Not every S is P’.

Buridan’s sentences of “unusual construction” that I mentioned place the verb after the predicate (e.g., ‘Every S P is’, ‘Some S P is not’). The irony is that this was the standard form given to those sentences in the classical SOV Latin of Boethius, the last logician of antiquity writing in Rome in the early 6th century, 900 years before Buridan. Bringing the predicate forward in the sentence gives not just an unusual construction for Buridan, but an unusual interpretation, for in this way the predicate often escapes the distributing effect of negation. So whereas ‘P’ in ‘Some S is not P’ is distributed, in ‘Some S P is not’ it is undistributed. One unexpected advantage of the introduction of these novel forms is that O-propositions can now be converted, whereas for Aristotle and Boethius they could not. ‘Some S is not P’ converts to ‘Every P S is not’ (though they are not equivalent, just as ‘Every S is P’ converts to ‘Some P is S’ without being equivalent to it).

The regimentation of the language whose logic was studied brought medieval techniques very close to what we do nowadays with our symbolic notations. The mathematics behind modern logic does not lie in the notation but in the metatheory which interprets the symbolic language. The sentences of unusual construction increase the expressive power of the regimented Latin, allowing a more systematic account of the logic.

3:AM: Thomas Bradwardine is a medieval logician who you have linked with an approach to Fitch’s epistemic paradox. This is not an isolated case in your work, you seem to find contemporary logical issues already being discussed in the fourteenth century. So first could you lay out what the paradox actually is before we turn to Bradwardine?

SR: I’m not sure Fitch’s paradox deserves to be called a paradox, but it is linked to what is certainly a paradox, the Knower paradox. In a paper on value theory written in the 1940s but only published in 1963, Frederic Fitch stated a theorem: suppose α is an attribute of propositions, e.g., propositions which are known. Then no proposition of the form ‘S but S is not α’ can be an α-proposition. In particular, no proposition of the form ‘S but it is not known that S’ can be known. Yet there are surely many unknown truths, so there must be truths of this form. Consequently, not every truth can be known. This was presented by Fitch as a counterexample to widespread optimism, known as the Principle of Knowability, that every truth could be known. Nothing is really paradoxical there, but it is surprising.

What truly deserves the title ‘paradox’ is the Knower paradox. Consider the proposition, ‘You don’t know this proposition’—call it U, say. Suppose you know U. Then U is true (one can only know truths), so you don’t know U. Contradiction, so (by reductio ad absurdum) you don’t know U. But that is what U says. So U is true, and moreover, you’ve just proved it’s true, so you know U. That really is a contradiction—we can prove both that you know U and that you don’t, that is, that U is both true and false. But surely that’s impossible!
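The reasoning can be compressed into a short derivation—a sketch in standard epistemic notation, my formalization rather than anything in the medieval texts, where $K$ abbreviates ‘you know that’ and factivity ($K\varphi \to \varphi$) is the only epistemic principle used:

```latex
\begin{align*}
1.\ & KU \to U                    && \text{factivity: knowledge implies truth}\\
2.\ & U \leftrightarrow \lnot KU  && \text{what } U \text{ says of itself}\\
3.\ & KU \to \lnot KU             && \text{from 1 and 2}\\
4.\ & \lnot KU                    && \text{from 3, by reductio}\\
5.\ & U                           && \text{from 2 and 4}\\
6.\ & KU                          && \text{steps 4–5 constitute a proof of } U
\end{align*}
```

Lines 4 and 6 together are the contradiction.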

3:AM: So how does Bradwardine help us with this problem?

SR: How are Fitch’s observation and the Knower paradox connected? It was some remarks of Thomas Bradwardine’s on epistemic paradox that helped me see this. Bradwardine was a very clever polymath who began his career teaching in Arts in Oxford in the 1320s and ended it as Archbishop of Canterbury, dying of the Black Death in 1349. He made significant contributions to mathematical physics and to theology, and as a young man, to logic in his diagnosis of the central fallacy in what were then known as “insolubles”, paradoxes of self-reference. The most famous of these is the Liar paradox, ‘What I am saying is false’, which we’ll come to in a moment. The key to Bradwardine’s solution is his observation that propositions often mean more than at first appears. For example, ‘All philosophers are sceptics’ not only implies that Plato, Descartes, Russell and others are sceptics, it actually means that, said Bradwardine. Every proposition, he claimed, means everything that follows from what it means—meaning is closed under consequence. I call this his closure postulate. For example, Epimenides, the Cretan who said that all Cretans are liars, was actually calling himself a liar—part of what was meant by his assertion was that he was a liar.

Bradwardine makes two main claims in his treatise on insolubles. The first concerns semantic paradoxes like the Liar, the second epistemic paradoxes like the Knower. The first states that every proposition which says of itself that it is not true also says of itself that it is true—and he proves this in detail, from the closure postulate above, that every proposition means whatever follows from what it means. The second claim was that every proposition which says of itself that it is not known (or believed, etc.) also says of itself that it is not known that it is not known (believed etc.). What I observed was that Bradwardine’s reasoning also shows that the Knower paradox (‘You do not know this proposition’) also says of itself that it is true, so it really says ‘This proposition is a truth unknown to you’. So it is a self-referential version of Fitch’s sentence ‘Some proposition is an unknown truth’.

Suppose Bradwardine is right, and ‘You don’t know this proposition’ also means that you don’t know that you don’t know it. (Actually, I have some doubts about the cogency of his proof, but leave that aside.) Then we prove as above that you don’t know it, so we infer that you know that you don’t know it. But part of what it means is that you don’t know that you don’t know it. So it’s false. That’s a good reason for not knowing it—you can’t know falsehoods. So we can’t infer that you do know it, and the paradox is solved.

3:AM: The liar paradox is another big headache moderns have been wrestling with. Bradwardine has an approach that you link with Grice and the idea that an utterance can signify a number of things. So first can you lay out the paradox and how moderns have tried to deal with it?

SR: We saw the Liar paradox above—the simplest of the semantic paradoxes takes the form ‘This proposition is false’. Suppose it’s true. Then it’s true that it’s false, so it’s false. Contradiction—nothing can be both, so by reductio ad absurdum it’s not true, whence if every proposition is either true or false, it must be false. But that’s what it says. So it’s (also) true, i.e., we’ve shown that it’s both true and false. Contradiction and paradox.

However, that argument assumed that every proposition is either true or false—known as the Principle of Bivalence. So we might conclude that the Liar sentence is neither true nor false—that’s better than it’s being both. But now take the sentence ‘This proposition is not true’. Suppose it’s true, then it’s not true, and it must be either true or not, so by reductio it’s not true. But that’s what it says, so it’s (also) true. No mention of Bivalence there, but we have appealed to the Principle of Non-Contradiction, that it can’t be both true and not true, yet we ended up showing that it is both.

There is a whole range of modern responses. Perhaps three main ones can be described. First, there is the response due to Alfred Tarski, in brief, that natural languages are inconsistent and their semantics defies logical analysis. So we should simply describe how truth works in formalized languages, where there is a hierarchy of object languages, the truth of whose sentences can only be described in its metalanguage, on and on ad infinitum. Saul Kripke famously dismissed Tarski’s response as defeatist, rejecting Tarski’s idea of an infinite hierarchy of truth-predicates. Instead he sought a clear logical analysis of how a language can consistently contain its own truth-predicate. His answer was that truth is a partial predicate obtained as the fixed point of an iterated operation which gradually increases the extension of ‘true’ and ‘false’ as far as they will go. Despite the fact that many semantic notions remain inexpressible in his system, notably the predicate ‘paradoxical’, Kripke’s proposal has proved very popular and has inspired a whole research industry (he described his own proposal as just an ‘Outline of a Theory of Truth’). The third distinctive approach is that of Graham Priest and other dialetheists, who reject the Law of Non-Contradiction and claim that the seemingly paradoxical arguments show that some propositions are indeed both true and false, and so actually both true and not true. This is truly grasping the nettle.
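Kripke’s least-fixed-point idea can be made concrete in a few lines. The following toy Python sketch is my own drastic simplification—the language has only named sentences of the forms ‘s is true’ and ‘s is not true’, plus base facts, and all the names are illustrative—but it shows the construction iterating the valuation until it stabilizes: grounded sentences receive a value, while the Liar and the Truthteller remain undefined:

```python
FACTS = {"snow": True}  # base-language sentences with classical truth values

SENTENCES = {           # name: (kind, target sentence the truth-talk is about)
    "t1": ("tr", "snow"),                  # "'snow' is true"
    "truthteller": ("tr", "truthteller"),  # "this very sentence is true"
    "liar": ("not_tr", "liar"),            # "this very sentence is not true"
}

def fixed_point():
    # Partial valuation of the truth-talk sentences: an absent key means
    # "no truth value yet". Grow the valuation monotonically until stable.
    val = {}
    while True:
        new = dict(val)
        for name, (kind, target) in SENTENCES.items():
            if target in FACTS:
                v = FACTS[target]
            elif target in val:
                v = val[target]
            else:
                continue        # target still undefined, so leave name undefined
            new[name] = v if kind == "tr" else not v
        if new == val:
            return val          # least fixed point reached
        val = new

fp = fixed_point()
print(fp.get("t1"))           # True: grounded in a base fact
print(fp.get("liar"))         # None: the Liar never receives a truth value
print(fp.get("truthteller"))  # None: ungrounded at the least fixed point
```

The extensions of ‘true’ and ‘false’ grow with each pass exactly as far as grounding permits; the Liar’s self-reference means it can never be settled, which is Kripke’s diagnosis of its pathology.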

Bradwardine’s approach, like Tarski’s, does not have any implications for the logic involved and can be pursued in a very strong logic. The connection between Bradwardine’s approach and the ideas of Paul Grice lies in Bradwardine’s closure postulate. Grice’s interest lay in the conventionality of linguistic meaning, as contrasted with natural meaning. Those spots mean measles, that smoke means fire; these are cases of natural meaning. Linguistic meaning, like the meaning of three balls outside a pawnbroker’s, is a matter of convention. But what is the “overarching idea”, Grice asked, that unites natural and conventional signs? It’s that what a sign means is in some way a consequence, he replied, and consequence is transitive (consequences of consequences are themselves consequences), so for Grice meaning is multiple, as for Bradwardine.

3:AM: And how does Bradwardine and plural signification deal with the puzzle? Are you convinced? Is it better than the other solutions on offer?

SR: Bradwardine’s first claim is that the Liar paradox, and other paradoxes where a proposition says of itself that it is false, or not true, also say of themselves that they are true, and so are implicitly contradictory. Consequently, they are false and not true, because not everything they mean can obtain—he explicitly endorses the Principles of Bivalence and Non-Contradiction, that every proposition is either true or false, and none can be both true and not true. His proof of the first claim is rigorous and to me convincing, provided one accepts the closure postulate, which is certainly plausible. Every proposed solution to the paradoxes requires some revision of familiar ideas. The closure postulate is very powerful—perhaps too powerful, and I’ve explored ways of weakening it while preserving Bradwardine’s solution. Other approaches to the paradoxes demand greater sacrifices, it seems to me—Tarski’s, that no coherent account of the semantics of natural language is possible; Kripke’s, that we revise logic and even then have no account, except a hierarchical account like Tarski’s, of predicates like ‘paradoxical’; Priest’s, that we revise logic very drastically in order to avoid triviality, the result that every proposition turns out both true and false. I’m not myself averse to revising the standard logic of Frege and Russell—indeed, I argued in favour of doing so in my first book, Relevant Logic. But we should revise logic for reasons to do with logic itself, not for arguably extraneous reasons, simply to accommodate a solution to the semantic (and other) paradoxes.

Another test of proposed solutions is whether they can be extended to other paradoxes than the Liar. Another tricky paradox is known as Curry’s paradox, named after Haskell B. Curry, who put it forward in 1942, unaware that it had already been discussed in the 14th century. Take any proposition, e.g., ‘God exists’, and consider the proposition ‘If this proposition is true then God exists’. If this proposition is true, then it is a true conditional with a true antecedent, so God exists. That is, if that proposition is true, God exists. But that’s what it says, so it must be true, in which case it really is a true conditional with true antecedent, so God exists. That’s an amazingly quick argument for a powerful conclusion, so we might expect some trickery. Indeed, take the proposition ‘If this proposition is true then God does not exist’. The same reasoning concludes that God does not exist. We can apparently prove anything whatever with this argument. I’ve argued that Bradwardine’s solution can be extended to deal with Curry’s paradox. So have proponents of other solutions, but it’s particularly difficult for those solutions that demand a revision to the logic, for they mostly have to give up either Modus Ponens (if A, and if A then B, then B) or Conditional Proof (if B follows from A, then if A then B).

3:AM: Hartry Field is one of the contemporary giants of logic. And yet you show that there’s a medieval solution to his paradox that, unlike Crispin Wright’s solution, doesn’t require any revision of logic. And it’s that guy Bradwardine again. So first, what’s Field’s paradox?

SR: Field’s paradox is a paradox of validity. One of the first paradoxes that interested me was a validity paradox due to a 14th-century logician now known as Pseudo-Scotus, for we don’t know who he was except that he definitely wasn’t John Duns Scotus even though his treatise was published as Scotus’s in the complete works in the 17th century. Consider the inference ‘God exists, so this inference is invalid’. Pseudo-Scotus thought this inference was simply invalid, but his contemporaries realised it was in fact paradoxical. For suppose it’s valid. Then by their lights, it has a true premise and false conclusion, so it must be invalid, whence by reductio ad absurdum, it’s invalid. But the reasoning assumes that God exists, that is, it deduces the conclusion that the inference is invalid from the premise that God exists. But that means the inference is valid after all. Contradiction again.

This reasoning is closely related to Curry’s paradox, which we can turn into a validity paradox directly as ‘This inference is valid, so God exists’. Field’s paradox trades on inconsistency in place of validity: two propositions are inconsistent if together they entail a contradiction; and an inference is valid if the premises are inconsistent with the opposite of the conclusion. Now consider the proposition ‘This proposition is inconsistent with A’ for any proposition A; call that proposition B. Then B is inconsistent with A (for if B were true, A would be inconsistent with it and so false), so B is true. This holds for any proposition A, so for every proposition A there is another proposition B that is true and inconsistent with it. But every proposition inconsistent with a truth is false. So every proposition is false. Help!

Crispin Wright comments that just as Russell caused Frege to discover Russell’s paradox, so Field caused Wright to discover what he called Field’s paradox. What happened was that Field challenged Wright to give an inferentialist account of negation, that is, give an account of ‘not’ in terms of rules of inference. Wright’s thought was that ‘not-A’ is the weakest proposition inconsistent with A, so he proposed defining ‘not-A’ as ‘Some truth is inconsistent with A’. (Then B, ‘This proposition is inconsistent with A’ stands to ‘not-A’ much as the Knower stands to Fitch’s proposition.) Inferentially, A and not-A entail a contradiction—they’re inconsistent; and we can infer ‘not-A’ from anything which is inconsistent with A.

Field’s theory of truth is a development of Kripke’s. What led Tarski to reject natural languages as inconsistent was belief in the so-called “truth-equivalences”, that for any A, A is equivalent to ‘A is true’. Some of these equivalences fail in Kripke’s theory, e.g., ‘The Liar sentence is true’ is not equivalent to the Liar sentence, that is, ‘The Liar sentence is false’. Field proposed a Kripkean theory in which the truth-equivalences are preserved—at whatever cost. That cost includes restricting Conditional Proof (as mentioned above) and negation-introduction, that is, Wright’s meaning-conferring rule for negation. To my mind, that is a heavy and unacceptable cost—one can find a less costly theory, namely, Bradwardine’s.

3:AM: And how does Bradwardine crush it? Do you think that this really does show that we can save truth from paradox without revising logic?

SR: Once again, the solution lies in recognising that B, ‘This proposition is inconsistent with A’, says more than at first appears. Suppose B is inconsistent with A. Then A and B together entail a contradiction, so by negation-introduction (which I, like Wright, accept as giving the meaning of ‘not’), we can infer ‘not-A’, that is, ‘not-A’ follows from supposing that B is inconsistent with A. But B means that B is inconsistent with A, so by Bradwardine’s closure postulate, B also means that ‘not-A’ is true, that is, that A is false. Now in Wright’s argument for Field’s paradox, having shown that B was inconsistent with A, we inferred that B was true, for that is (part of) what B means. That inference was too hasty, for the truth of B requires that things be wholly as it says they are, that is, its truth requires not only that B is inconsistent with A but also that A be false. So we can only infer that if A is false, B is true, and then the generalization averring that there is a true proposition inconsistent with A holds not for every proposition A, but only for false ones, and that is trivial, obvious and non-paradoxical.

Endorsing any proposed solution to the paradoxes is a hostage to fortune, for there are an indefinite number of paradoxes to solve and explain. Bradwardine’s solution seems to me to have a pretty good track-record in solving paradoxes without requiring any revision of logic—the Liar, Curry’s paradox, Field’s paradox, Pseudo-Scotus’s paradox, and many more.

3:AM: You have pointed to some interesting overlaps between the way the medievals understood concepts and Jerry Fodor’s classic account in ‘Concepts’. But you point to contrasts too. By the time your account has run through, Fodor is in the company of Aristotle by way of Boethius, Augustine and Ockham and we have the notion of concepts as signs and a language of thought. Have I got that right, that they had a prototype version of Fodor’s LOT?

SR: Absolutely. As you say, it all goes back to Aristotle, mediated by Augustine in the 5th century CE and Boethius in the 6th. In the opening chapter of his treatise Perihermeneias (in Latin, De Interpretatione, sometimes rendered in English as ‘On Interpretation’, but better as ‘On Propositions’), Aristotle described three levels of language: written expressions are conventional signs of spoken expressions, which in turn are conventional signs of mental expressions, that is, concepts. Concepts in their turn are natural likenesses of things. Boethius adds that, by that likeness of the concept to the thing, the spoken expression itself becomes a sign of the thing as well as of the concept. This goes beyond Aristotle, but was further endorsed by Augustine, who was himself little influenced by Aristotle. The final step, however, was taken only in the 13th century, when the concept was itself taken as a sign. Then, just as spoken and written expressions can be composed into spoken and written propositions, so too concepts can be composed into mental propositions and we have a full-fledged language of concepts, or thoughts.

William of Ockham was one of the foremost protagonists of this idea of a mental language, enquiring what the grammar of this language was, and how many of the features of spoken language—nouns, verbs, gender, case, declension and so on—are repeated in mental language. Ockham’s “Mental” (as Trentman dubbed it in 1970) very definitely prefigures Fodor’s LOT. The one really significant difference is that Fodor denies the natural likeness which the medievals recognised between concept and thing—he calls his theory “non-cognitivist”: we respond to things in a certain way, but not because of any common aspect that we recognise in them. In contrast, the medievals explicitly describe concepts as cognitions, acquired by recognition of things’ essences.

3:AM: Tim Williamson has recently called for metaphysicians to use modal logic if they are to make more progress. Peter Unger says all philosophers are wasting their time, have discovered precious little and should hand over everything to science. From your specialized field in philosophy are you sympathetic to either of these positions? And a broader question I suppose that links to both is whether you think we need philosophy? Why should we heed philosophers?

SR: I think it must be obvious by now that I have no sympathy at all with Unger’s position, nor with other nihilist positions, of which Fodor’s is perhaps another example. Their only value lies in their provocation, stimulating us to think how metaphysics is possible. To come back to where we started this discussion, it was wonderful years ago to be appointed to a post in logic and metaphysics, but also worrying, given the opposition to any sort of metaphysics from logical positivism and from the ordinary language philosophy dominant when I was a student. Eventually, the influence of Quine and the later Wittgenstein, the leading figures in the anti-metaphysical movement, started to wane, but not before a further influence from Frege, transmuted through Michael Dummett, led to the substitution of conceptual analysis and philosophy of language for metaphysics. Williamson, offering, as his book’s title says, Modal Logic as Metaphysics (perhaps echoing Dummett’s Logical Basis of Metaphysics), seems to be proposing the replacement of metaphysics by the logic of necessity and contingency. Even the early Wittgenstein, with whom Williamson shares the thesis of necessitism (that what there is, is necessary, the same in all possible worlds), inspired the positivists’ revulsion at metaphysics (even if he did not share it) and thought language and logical syntax our only guide to what there is. Although I share with Williamson the belief that one logic is the right logic (though a different one, relevant logic, rather than the classical logic Williamson prefers), I’m a pluralist about modal logic: only the right metaphysics can reveal which logic models necessity and contingency, not vice versa. Logic can reveal what follows from what, and the consequences of our assumptions can often be surprising, but logic cannot replace metaphysics.

3:AM: And for the curious here at 3:AM are there five books you could recommend that would take us further into your philosophical world?

SR: To start with a book of my own, the commission from Oxford University Press to write my Thinking about Logic (OUP 1995) was to survey the philosophy of logic for those who might know no logic. It contains very little on medieval logic, however, so for a taste of that, I’d suggest looking at my forthcoming English translation of Buridan’s Treatise on Consequences (Fordham UP, 2014). My edition and translation of Bradwardine’s Insolubilia (Dallas Medieval Texts and Translations 10, Peeters 2010) has an introduction to the genre and a detailed account of his solution. A fascinating reflection on medieval logic as a whole (though explicitly not an introduction to medieval logic) can be found in Terry Parsons’ Articulating Medieval Logic (Oxford UP 2014). Last but not least, I’d recommend everyone to read Aristotle’s Prior Analytics. I return to it regularly, amazed at what Aristotle was able to do seemingly from nowhere (though there were antecedents in Plato) and on his own. But the text is highly compressed and dense, and each sentence needs to be savoured and pondered before moving to the next.

Richard Marshall is still biding his time.


First published in 3:AM Magazine: Friday, October 3rd, 2014.