Truthmaking

Gonzalo Rodriguez-Pereyra interviewed by Richard Marshall.

Gonzalo Rodriguez-Pereyra is the de Chirico mannequin of philosophy. He thinks all the time about the mysteries of truthmakers, the indiscernibility of identicals, resemblance nominalism, universals and metaphysical slingshots. There’s a kind of weird pristine beauty to this that makes him a surreal chillin’ jive.

3:AM: What made you become a philosopher? Were you a philosophical boy or was it something that you changed into?

Gonzalo Rodriguez-Pereyra: I suppose it was interest in philosophical problems. When I was eleven or twelve years old, I became for a while fixated on the question whether there could be two ‘identical’ stones. This is, of course, the question whether the principle of identity of indiscernibles is true and, as I formulated it then, I was bound to fall into confusion about it. But, although I was not aware I was philosophising, I was. So I would say I was a philosophical boy. Those thoughts about ‘identical stones’ are the earliest philosophical thoughts I remember. But when I was a teenager I also thought about the more typical philosophical problems teenagers think about: the existence of god, the objectivity of morality, whether one can know that the external world exists. At one point I took a copy of Berkeley’s Principles from my father’s library. That was the first philosophy book I read. I found it fascinating and wanted to read more philosophy. So I picked up Karl Jaspers’ Philosophy. Then I read Descartes’ Meditations. No doubt I understood very little about them. But by the time I had read those books, I knew I wanted to be a philosopher. So I decided to do philosophy at university, with a view to becoming a professional philosopher. Being a rather unstable character, at some points I had doubts about becoming a professional philosopher, but the example of two of my teachers, Ezequiel de Olaso and Juan Rodriguez Larreta, made me confirm my original decision.

3:AM: You’re a metaphysician and a Descartes and Leibniz expert. When you were asked about the role of metaphysics in relation to other areas of philosophy and the natural sciences you cited Descartes in his preface to the French edition of the Principles, where he compared philosophy to a tree whose roots are metaphysics. You went on to say that ‘… the totality of knowledge forms a tree whose roots are Metaphysics.’ That’ll shock quite a few people who think that knowledge is now a whole zoo of sub-disciplines, each able to answer the ultimate questions of its own domain e.g. physics by the physicists, neuro-science by the neuro-scientists and so on. It’ll shock the Wittgensteinians who’ll say metaphysics is just misunderstood grammar. So why should they believe you and Descartes?

GRP: I suppose those groups of people will be shocked by different features of my assertion. Those who take knowledge to be a whole zoo of sub-disciplines will react to my giving metaphysics a privileged position in that zoo or to my thinking of knowledge as a tree, with more and less fundamental parts. Those who think that metaphysics is just misunderstood grammar will react to my giving metaphysics some place or another in the system of knowledge. But I don’t think that those with a ‘zoo’ view of knowledge must necessarily reject my ‘tree’ view. Whether they must or not depends on what, exactly, the ‘zoo’ view amounts to. For my view is not that the theoretical disciplines do not have autonomy to decide their own questions, but that the concepts of physics, chemistry, neuro-science, etc. presuppose the concepts of metaphysics. Metaphysics is the study of the most general nature and basic structure of reality, and therefore the concepts of metaphysics, concepts like time, space, identity, resemblance, substance, property, fact, event, composition, possibility, etc., are the most fundamental concepts. Thus metaphysics is the most fundamental theoretical discipline.

This does not mean that metaphysics is about concepts; metaphysics is about reality, but those concepts are supposed to apply to the most basic features of reality. In one way or another all other disciplines (whether philosophical or not) employ these concepts and/or others derived from them and so metaphysics contains the conceptual foundations of the rest of knowledge.

The role of metaphysics in relation to other disciplines, whether philosophical or not and including the natural sciences, is thus a foundational role. Lack of clarity in the concepts of metaphysics implies lack of clarity in other disciplines – both theoretical and practical disciplines – employing those concepts or employing concepts that depend on those of metaphysics. Since there are relations of priority between the other disciplines too, knowledge has the form of a tree, and since metaphysics is the most fundamental one of the theoretical disciplines, it represents the roots of the tree. But nothing here means that the other theoretical disciplines – whether they are natural or social sciences, humanities, or other branches of philosophy – cannot decide their own questions using their own methods.

To say that metaphysics is just misunderstood grammar is to misunderstand a large portion of what goes on in metaphysics – now and in the past. No doubt there are examples of metaphysical reasoning that betray a misunderstanding of grammar. A famous example is Heidegger on nothing, famously criticised by Carnap. Some of Walter Burley’s arguments for universals seem also to be guilty of misunderstanding grammar. But Ockham criticised those and he did that to put forward his own favoured metaphysical position on the issue. Nor must one suppose that all realists about universals misunderstand grammar. David Armstrong, for instance, has argued for the existence of universals; but his argument for the existence of universals cannot be accused of being an expression of misunderstood grammar. Indeed he has explicitly rejected any kind of argument for universals based on the assumption that to every meaningful word there must correspond an entity. These are just examples, and what they show is that not all metaphysics is misunderstood grammar.

3:AM: You co-edited a book Real Metaphysics which was a collection in honour of Hugh Mellor. Outside of philosophy perhaps few people will have heard of Mellor. Why was he significant for you and why should he be better known?

GRP: Hugh Mellor was my PhD supervisor. As such he was very demanding but also very supportive and generous with his time. There were times at which I would meet him every week to discuss my work. My PhD thesis was a defence of, and a case for, resemblance nominalism. On my first meeting with him in Cambridge, one or two days after I arrived there as a graduate student, I told him that I had a defect, which was that although I felt I was quite good at criticising a philosophical position, I was not very good at defending and making a case for a philosophical position. He said that that was not an uncommon situation for people at my stage, and that I should try developing a position in my thesis. He also said I should try developing a position which he rejected. I accepted the challenge and since I was interested in the problem of universals and Hugh was a believer in universals, I decided to defend and argue for resemblance nominalism. He was a great supervisor, and was supportive when I was going through a difficult time, and so I have a huge personal debt to him.

Hugh is, of course, very well known in the areas of philosophy where he has been active. But he is a philosophers’ philosopher, and so I do not see why he should be better known, ‘as a philosopher’, outside philosophy. But Hugh is also an actor, and so some people outside philosophy will have heard of him in this capacity.

3:AM: One big contemporary topic is about the relation of the mind and the body. It’s often discussed in a way that doesn’t seem like metaphysics. The mind-body problem can sound just like a scientific issue in many of the contemporary presentations. But you’ve written about this in terms of Leibniz’s theory of pre-established harmony. So what has a Leibnizean metaphysical approach got to offer in this area of investigation and are attempts to suppress metaphysical dimensions of the problem an added problem?

GRP: I have written on Leibniz’s pre-established harmony as a solution to the mind-body problem as he understood it. But this does not mean that I have defended Leibniz’s solution. What I have written on that is of a purely historical nature. I think there is a metaphysical problem of the relation between mind and body. Thinking that there is no metaphysical dimension to the problem is an error. This does not mean that there are no other aspects of the problem that are more amenable to a scientific treatment. Anyway, I do not think that Leibniz has a great deal to contribute to the contemporary discussion of the mind-body problem. His position is a form of parallelism (the ‘harmony’ bit of the theory), but grounded in a theory of the nature and aims of God (the ‘pre-established’ bit of the theory). The parallelism, or denial of any causation between mind and body, derives basically, and fallaciously, from a theory of substances as having complete concepts that include everything that is true of them. But sometimes he argues for the parallelism by elimination. He considers two alternative theories: interactionism and occasionalism. But his objections to interactionism are rather poor. And although I think some of his objections to occasionalism are quite sophisticated and interesting, to the best of my knowledge occasionalism is not really on the map of contemporary philosophy of mind.

3:AM: For Leibniz all truths about created individual beings are contingent, and contingency is defined in terms of infinite concepts (proofs of such propositions would require an infinite number of steps). This account faces the problem of lucky proof and the problem of guaranteed proof. You defend Leibniz don’t you? Can you say what these problems are and how Leibniz can be defended from them?

GRP: Leibniz believed in freedom, both divine and human, and he thought that contingency was a necessary condition of freedom. That is, if an agent A acts freely when choosing X, then A’s choosing X cannot be necessary. But there are some elements in his philosophy that seem to make contingency impossible. And so he struggled to make room for contingency in his philosophy. For instance, he believed that every truth is analytic, in the sense that in every truth the concept of the predicate is included in the concept of the subject. Take, for instance, the proposition ‘Peter denies Christ’. If being a denier of Christ is in the individual concept of Peter, it seems that it is necessary for Peter to deny Christ. Of course, this generalises and so seems to make every truth necessary.

One of the things Leibniz said in response to this was that sometimes the concept of the predicate can be found in the concept of the subject after a finite number of steps. This is what typically happens with propositions about species of things, e.g. ‘A triangle has three sides’. The concept of the subject in such a proposition is finite and so it only takes a finite number of steps in the process of analysis to find the concept of the predicate. This proposition, then, has a ‘proof’, since it can be reduced to an identity, e.g. ‘A closed three-sided figure has three sides’, in a finite number of steps by substituting definitions for the concept under analysis (in this case the concept ‘triangle’). But there are other propositions where the analysis cannot be completed in a finite number of steps. For Leibniz a proposition like ‘Peter denies Christ’ is contingent because, due to the fact that the individual concept of Peter is infinitely complex, its analysis cannot be completed in a finite number of steps, and so it does not have a proof.
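The finite analysis just described can be sketched as a chain of definitional substitutions (a schematic reconstruction under the assumed definition of ‘triangle’, not Leibniz’s own notation):

```latex
\begin{align*}
&\text{`A triangle has three sides'} \\
\rightarrow\ &\text{`A closed three-sided figure has three sides'}
  && \text{(substitute the definition of `triangle')} \\
\rightarrow\ &\text{an identity: the predicate-concept is exhibited in the subject-concept}
  && \text{(analysis terminates in finitely many steps)}
\end{align*}
```

For ‘Peter denies Christ’, by contrast, the individual concept of Peter is infinitely complex, so no such chain of substitutions terminates.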

Leibniz’s idea was that the distinction between necessary and contingent propositions is the distinction between true propositions that can be proved in a finite number of steps, and true propositions that cannot. The former are necessary, the latter contingent. It seems to me that this strategy to save contingency fails outright. Contingency and necessity have nothing to do with the number of steps it takes to prove or analyse a proposition. What Leibniz has done is, effectively, to change the subject. He has spotted a difference among propositions and has decided to call that difference the difference between necessary and contingent propositions. But, clearly, that difference is not such a difference.

However, that the strategy fails does not mean that it is vulnerable to all the objections that have been levelled against it. One such objection is that it faces the problem of lucky proof. This problem, first brought to the attention of scholars by Robert Adams, is that even if the individual concept of Peter is infinitely complex, we might be lucky and discover that it contains the concept ‘denier of Christ’ at the beginning of our analysis or shortly after having begun it. Indeed, under certain assumptions about the order that any proper analysis must follow, there is a guarantee that the concept of any predicate that is in the subject will be found in its concept after a finite number of steps, however large.

This is the problem of guaranteed proof. The difficulty posed by these problems is to explain why finding the concept of the predicate in that of the subject after a finite number of steps would not constitute a finite proof of the proposition ‘Peter denies Christ’. But Leibniz has an answer to this, an answer that he suggests in several texts, namely that in order to prove a proposition like ‘Peter denies Christ’ one needs to prove the consistency of the infinitely complex concept ‘Peter’. Proving the consistency of this concept requires its full decomposition and the examination and comparison of all its constituents. So, even if the concept ‘denier of Christ’ is found in the concept of Peter at some stage of the analysis, there is never a point at which one has completed the proof of ‘Peter denies Christ’.

3:AM: Leibniz is famous for his principle of the identity of indiscernibles. I guess most people told about this principle would say it was not just true but kind of obvious. But you think it’s just false. So can you tell us what we’re not understanding?

GRP: Whether the principle is trivially true or simply false depends, partly, on what one means by the principle. If it means that no two things can have all their properties in common, and one counts things like ‘being identical to Julius Caesar’ as properties, then the principle is trivially true. For no two things could share all their properties, including their identity properties. But this is not an interesting version of the principle. More interesting versions of the principle are obtained by restricting the class of properties over which the principle quantifies, i.e. by formulating the principle as the principle that there cannot be two things that share all the properties ‘of a certain kind’ (and one has to explain what kind that is).
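The contrast between the trivial and the restricted versions can be put schematically in second-order notation (a standard schematisation, not taken from the interview; the class symbol is an illustrative choice):

```latex
% Unrestricted version: F ranges over all properties, including
% identity properties such as (\lambda z.\, z = a); on this reading
% the principle is trivially true.
\forall x\, \forall y\, \bigl[\, \forall F\, (Fx \leftrightarrow Fy) \rightarrow x = y \,\bigr]

% Restricted version: F ranges only over a class \mathcal{P} of
% properties `of a certain kind' (e.g. pure properties), and the
% principle is formulated with modal force; whether this version
% is true is the substantive question.
\Box\, \forall x\, \forall y\, \bigl[\, \forall F \!\in\! \mathcal{P}\; (Fx \leftrightarrow Fy) \rightarrow x = y \,\bigr]
```

On the restricted reading with \(\mathcal{P}\) the pure properties, Max Black’s two-sphere world, discussed below, is the standard counterexample.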

The problem of how to characterise the properties that would trivialise the principle is one of the hardest problems concerning the principle of identity of indiscernibles, and one of the problems to which least attention has been paid. I know of only two articles where that problem is discussed systematically (one by Bernard Katz and one by myself). In a nutshell, my view is that a property F trivialises the principle of identity of indiscernibles if and only if differing with respect to F ‘is’ or may ‘be’ differing numerically (where this means that merely establishing a difference with respect to such properties only establishes a numerical difference between the things in question – i.e. it does ‘not’ mean that differing with respect to them entails a numerical difference between the things in question, since, obviously, all properties are like that). So, on my view, if you exclude such properties from the domain of quantification of the principle of identity of indiscernibles, the principle is not ‘trivially’ true.

I also think that the non-trivial versions of the principle are false, at least when the principle is formulated with modal force, i.e. as saying that there cannot be two things that share all their properties. I suppose that many, if not most, philosophers nowadays will agree that the principle is false when it quantifies over so-called pure properties, i.e. intrinsic properties (e.g. ‘being green’) or relational properties whose having does not depend on the identity of the ‘relatum’ (e.g. ‘being two miles from a tall tower’).

The standard way of arguing that the principle of identity of indiscernibles understood in this way is false is by appealing to Max Black’s possible world, where there are only two iron spheres, one mile apart from each other, each having the same colour, shape, diameter, temperature, etc. as the other.

Impure properties are those whose having depends on the identity of the ‘relatum’ (e.g. ‘being identical to Julius Caesar’, ‘being two miles from the Eiffel Tower’). But the huge majority of philosophers seem to think that including impure properties in the range of the quantifiers of the principle would make the principle trivial. I have argued that it does not: quantifying over ‘being identical to Julius Caesar’ and other such properties trivialises the principle of identity of indiscernibles, but quantifying over properties like ‘being two miles from the Eiffel Tower’, or ‘being the father of Aristotle’ does not trivialise the principle of identity of indiscernibles.

So I have argued that there is a non-trivial version of the principle of identity of indiscernibles that quantifies over some impure properties. The truth or falsity of such a principle is another matter, a matter for which I have not argued in print. However, I think that a principle of identity of indiscernibles that quantifies over all non-trivialising properties, including non-trivialising impure properties, may be false.

Leibniz, on the other hand, thought that even the strongest versions of the principle of identity of indiscernibles were true. That is, he thought that there cannot be two things that are even only intrinsically alike. So what he thought true was not the trivially true version of the principle, but what most philosophers think is false. Although I think his arguments for the principle do not work in the end, his arguments are really fascinating and ingenious.

3:AM: Another idea you engage with is about what exists at a basic level. You discuss the bundle theory in connection to the principle of identity of indiscernibles. So what’s the issue, how does the bundle theory deal with it and what’s your view about the fundamental basic ontology?

GRP: The bundle theory is the view that particulars are entirely constituted by universals. These universals are purely qualitative ones, i.e. what I called ‘pure properties’ above. As I said, many philosophers think that a version of the principle of identity of indiscernibles that rules out things with the same pure properties is false. And philosophers have traditionally argued that the bundle theory is committed to the truth of such a version of the principle of identity of indiscernibles. For, the thought seems to be, since particulars are entirely constituted by universals, no two distinct particulars could share all their universals. Therefore, it has been argued, the falsity of the relevant version of the principle of identity of indiscernibles shows that the bundle theory is false.

I think this is wrong. The bundle theory does not entail the principle of identity of indiscernibles. It entails it only when conjoined with a principle to the effect that no distinct particulars can be constituted by exactly the same entities. But there are no reasons to accept such a principle. We would have such reasons if particulars were sets of universals, or if they were ‘mereological sums’ of universals. But there are independent reasons why the bundle theorist will not want to account for particulars as sets or mereological sums of universals. Once it is seen that there is no reason to accept the principle that no distinct particulars can be constituted by exactly the same entities, the bundle theory ceases to be committed to the principle of identity of indiscernibles.

In my view the bundle theorist should say that when a bundle is located somewhere, there is an ‘instance’ of the bundle there. The instance is entirely constituted by the universals of the bundle. But the bundle and the instance are two distinct entities. Bundles of universals can be multiply located, but their instances cannot, and particulars are instances of a bundle of universals. Then in Black’s world what we have is a bi-located bundle instantiated by two numerically distinct particulars. This is how the bundle theory can accommodate Black’s world. Even more, this version of the bundle theory can be used to show that the principle of identity of indiscernibles is false. For bundles of universals can be in more than one place at the same time; so a bundle can have more than one instance; so there can be numerically distinct particulars sharing the same universals; so the principle of identity of indiscernibles is false.

By arguing that the bundle theory does not entail and is not committed in any way to the principle of identity of indiscernibles, I have thereby defended the bundle theory from a traditional objection to it (namely that since the principle of identity of indiscernibles is false, the bundle theory must be false too). But I do not believe in the bundle theory anyway. The bundle theory postulates universals and I do not believe in them; so I do not believe in the bundle theory.


First published in 3:AM Magazine: Friday, September 7th, 2012.