:: Article

Wittgenstein’s Radiator and Le Corbusier’s treacherous knot.

By Richard Marshall.

Francesca Hughes, The Architecture of Error: Matter, Measure, and the Misadventures of Precision, MIT Press, 2014.

Le Corbusier warned that ‘… in the old-world timber beam there may be lurking some treacherous knot.’ The fear of errors lying hidden in materials became a starkly manifested paranoia as the precision of explanation was fetishised in the twentieth century and onwards. Materials seemed to deviate from this precision, and some more than others. A preference for metal over wood was one consequence: metal seemed to deviate less than wood. In one of her chapters Hughes explains how this thought led to a move away from building planes out of wood to building them out of metal – at the cost of flight! Airplanes in the First World War were typically made of wood. Wood, however, was exactly the treacherous-knot material that Le Corbusier feared. Metal, on the other hand, was thought less susceptible to error, and so very soon after the war planes were being made of metal. These early metal planes could not actually fly, yet they were deemed superior to the wooden ones that could because they represented an error-free reality. Metal collapsed the distinction between explanation and description. The price of this collapse, Hughes writes, ‘… was flight itself.’ She asks the obvious question: ‘If airplanes do not need to be able to fly, do explanations need to tell the truth?’

In this great book – entertaining, lucid and full of delicious detail and narrative as well as intelligent, lively assessments of the details, and great pictures too! – the attempt to remove error manifests itself in the way the precision of theory opposes the actuality of the built material. Does the precision of explanatory theory come at the cost of descriptive veracity? Philosopher Nancy Cartwright answers yes: ‘Fundamental equations are meant to explain, and paradoxically enough the cost of the explanatory power is descriptive adequacy. Really explanatory laws of the sort found in theoretical physics do not state the truth.’ An ideological commitment to science, precision and predictability comes at the cost of truth and functionality. Francesca Hughes’ hugely enjoyable and rather brilliant book gives us examples of how this has happened and manifested itself, and makes a powerful case for calling out this ideology, and not just in its application to architecture but in many other spheres too.

In ‘How the Laws of Physics Lie’ Nancy Cartwright draws a distinction between inference to the most likely cause and inference to the best explanation. Only when an explanation is bridged to its most likely cause can we start to test its truthfulness. There’s a simple point here: the best explanation can be false because it is approximate. Consistency with the facts as we have them depends on many things, and approximation works better with some materials than with others, so that, for example, Hooke’s law works better with aluminium than with spruce. Approximation is clever enough and helps us think. Cartwright distinguishes phenomenological laws, which are about appearances, from theoretical laws, which are about the underlying reality, and says that what actually happens in scientific work is a subtle negotiation between these two types of law. We ‘… separate laws which are fundamental and explanatory from those that merely describe.’
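To make the Hooke’s law point concrete (my gloss, not Hughes’s or Cartwright’s): the law treats a material’s response to load as strictly linear, which is best read as the first term of a more general relation.

% Hooke's law as a linear idealisation of the stress-strain response
% (\sigma = stress, \varepsilon = strain, E = Young's modulus):
\sigma(\varepsilon) = E\,\varepsilon
\qquad\text{read as the truncation of}\qquad
\sigma(\varepsilon) = E\,\varepsilon + c_2\,\varepsilon^{2} + c_3\,\varepsilon^{3} + \cdots

For a near-isotropic metal such as aluminium the neglected terms stay small over the working range; for spruce, with its anisotropic grain and hidden knots, they do not, which is why the same linear idealisation describes one material so much better than the other.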

But don’t explanations have to tell the truth? Instrumentalists like Duhem said no. Philosopher Bas van Fraassen agrees and thinks truth an optional extra to the best explanation. But if there is no link between satisfactory explanation and truth, then why do we always seem to think there is? An explanation is just an artifice of inference. Cartwright says the fundamental is held in check by the phenomenological via the architecture of approximation. Fundamental laws apply only to objects in the model; phenomenological laws apply to real objects.

Cartwright accuses physicists of lying when they claim to discover laws of physics that tell us something about the metaphysical reality of the universe, when their purported discoveries are actually highly distorted inventions. The inventions are idealisations that enable the required scientific measurements. The idealisations are various and non-uniform. But some philosophers, like Michael Weisberg and the great Roy Sorensen, argue that Cartwright is wrong to say the physicists are lying.

Weisberg argues that there are three types of idealization involved: ‘… consensus has clustered around three types of positions, or three kinds of idealization. While their proponents typically see these positions as competitors, I will argue that they actually represent three important strands in scientific practice’.

The first type of idealization involves models that allow for excellent predictions and include as much of reality’s detail as they can. Such a model nevertheless recognizes that not all data and experiments are available. Human fallibility and general limitations mean that some experiments – regarding the movement of the planets, say – are beyond our powers. Yet these models try to give the most accurate predictions, as close as possible to the way things actually are. The approach assumes that everything in the universe is relevant while knowing that no one can know everything in the universe. Galileo is the archetype of this attitude: ‘We are trying to investigate what would happen to moveables very diverse in weight, in a medium quite devoid of resistance, so that the whole difference of speed existing between these moveables would have to be referred to inequality of weight alone. … Since we lack such a space, let us (instead) observe what happens in the thinnest and least resistant media, comparing this with what happens in others less thin and more resistant.’ This Galilean type of modeler systematically approximates reality, following this as a recipe for building the model back towards reality. It is a pragmatic idealization.

The Galilean modeler represents the system under examination up to a point determined by what is available at the time, recognizing explicitly that certain things are just not available. It is a temporary lie, but one that hopes, through a process of accretion of further facts, to build towards reality eventually. The Galilean is able to say how much distortion has been built into her system. Weisberg argues that this is a common approach: idealisation is acknowledged to distort reality, but the amount of distortion is estimated. It is a process of systematic approximation. In the study of Galileo from which Weisberg takes the idea, McMullin says that ‘… models can be made more specific by eliminating simplifying assumptions and “de-idealization”, as it were; the model then serves as the basis for a continuing research program’. The Galilean approach is clearly one involving idealization modeling, but it continually refers back to a metaphysical realism, so that its aims of truth, good predictions and accurate representations are all relativised to an idea of what the world actually is. The ideal of this approach is therefore to ‘de-idealise’. Le Corbusier’s terror of the hidden knot is, in this respect, a paranoid fear that we can’t know we’ve de-idealised enough. Parts of Hughes’s book trace this paranoia.
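Here is a minimal sketch of the Galilean recipe, in code rather than in Hughes’s or Weisberg’s own terms, with toy numbers throughout: start from the idealised model of fall in a medium ‘devoid of resistance’, then de-idealise by adding a drag term, so that the gap between the two predictions is an explicit estimate of how much distortion the idealisation built in.

import math

G = 9.81  # gravitational acceleration in m/s^2

def fall_time_ideal(height):
    # Idealisation: free fall in a medium 'devoid of resistance'.
    return math.sqrt(2.0 * height / G)

def fall_time_with_drag(height, k):
    # De-idealisation: add linear drag with rate k (per second), so the
    # distance fallen is x(t) = (G/k)*t - (G/k**2)*(1 - exp(-k*t)).
    # Solve x(t) = height by bisection (x increases monotonically with t).
    def fallen(t):
        return (G / k) * t - (G / k**2) * (1.0 - math.exp(-k * t))
    lo, hi = 0.0, 1_000.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if fallen(mid) < height:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

h, k = 50.0, 0.1  # toy drop height (m) and drag rate (1/s)
t_ideal = fall_time_ideal(h)
t_real = fall_time_with_drag(h, k)
print(f"idealised: {t_ideal:.2f} s  de-idealised: {t_real:.2f} s  "
      f"estimated distortion: {t_real - t_ideal:.2f} s")

The point of the sketch is only that the Galilean modeler can state, in numbers, the size of her temporary lie.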

Minimalist idealization differs from the Galilean approach in that it isn’t a temporary idealization. Rather, it finds a permanent minimal model to explain its phenomena. It is minimal because it includes only a few causal facts in its system and ruthlessly eliminates all others: it takes only the first-order causal facts in a system. Strevens has labeled this approach a kairetic account of scientific explanation, where modeling focuses on only those elements that ‘make a difference’ to the phenomena. By ‘making a difference’ Strevens means strictly ‘causal entailment’. Strevens agrees with Cartwright’s accusation that science lies by failing to own up to using minimal models. Robert Batterman discusses minimal models as constructions of ‘… highly idealized minimal models of the universal, repeatable features of a system’, where adding detail erodes the explanatory power of the model. Minimal models use the imagination to strip away the local and the irrelevant to provide us with clear discernment of the relevant and the universal. A mathematical model is its prototype.

The models are highly distorting. In an explanation of crystal formation, for example, science converts the three-dimensional reality of crystals into a one-dimensional representation. This seems outlandish, says Sorensen – although, as we’ll see in a minute, he has a way of rescuing this from the instrumentalists. It differs from the Galilean approach because it doesn’t strip down and then rebuild towards reality again. The simplicity remains and the actual complexity is forever eradicated from the explanation. Explanatory power and descriptive truth are inversely related. Hughes’s book is also about how this minimal modeling has impacted on architecture.

Weisberg argues that this extreme instrumentalism doesn’t necessarily imply anti-realism. The Galilean approach just shows that the world is too complex for us to have a full grasp of it, yet it assumes that as more knowledge is gained models will get closer to the actual truth. In the minimal model the distortions can’t be truthful. However, the purpose of such models is to locate an underlying, ubiquitous and general deep structure to reality. Minimal modeling is part of a larger strategy to discover what different systems all have in common. In this way Weisberg argues that no minimal-model idealization is a true description of the universe, but that despite this there is a meta-realist position lying behind it. Nevertheless, for some architectural modelers, the fear of error is rooted in this approach, because pragmatically any distortion may be fatal.

Weisberg’s third type of modeling idealization is multiple-model idealization. The Buddhist story of the blind men and the elephant illustrates the idea. In the story, the blind men are each given a different part of the animal to touch and, from their descriptions, try to work out what animal they are confronting. This picture captures science at work on very complex phenomena such as weather systems. The idea is that no single simplification will capture the complexity of the phenomena; even a minimal-model approach will fail to capture all the relevant causes. The approach therefore countenances different kinds of models that capture different parts of the system. They may or may not all overlap, but by treating them all together the scientist hopes to capture the whole phenomenon. It is as if separate map-makers converged with their maps to make one single map. But convergence is not guaranteed.
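A toy way to picture multiple-model idealization in code – my illustration, with made-up numbers, not anything Weisberg or Levins formalises: several deliberately partial models of the same weather-like system, each idealising away everything but one factor, whose outputs are pooled rather than derived from one master model.

import math

def persistence_model(today_temp):
    # Partial model 1: tomorrow will be like today; everything else idealised away.
    return today_temp

def seasonal_model(day_of_year, annual_mean=12.0, swing=8.0):
    # Partial model 2: only the seasonal cycle matters.
    return annual_mean + swing * math.sin(2.0 * math.pi * (day_of_year - 80) / 365.0)

def pressure_model(pressure_hpa):
    # Partial model 3: only the pressure anomaly matters.
    return 12.0 + 0.05 * (pressure_hpa - 1013.0)

# Each blind man reports on his part of the elephant; the 'single map' here is a plain average.
forecasts = [
    persistence_model(today_temp=14.0),
    seasonal_model(day_of_year=200),
    pressure_model(pressure_hpa=1020.0),
]
print(f"pooled forecast: {sum(forecasts) / len(forecasts):.1f} C")

Nothing forces the partial pictures to agree, which is the convergence problem flagged above.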

This is a form of perspectivism, and in this it has an ancestor in Nietzsche. For Weisberg it is the least well-developed of the three idealization strategies used by science in terms of justification. Levins justifies the position in terms of trade-offs. Different theorists have different goals – accuracy, precision, generality, simplicity and so on – and these are traded off in order to reach a unification. He writes: ‘The multiplicity of models is imposed by the contradictory demands of a complex, heterogeneous nature and a mind that can only cope with few variables at a time; by the contradictory desiderata of generality, realism, and precision; by the need to understand and also to control; even by the opposing esthetic standards which emphasize the stark simplicity and power of a general theorem as against the richness and the diversity of living nature. These conflicts are irreconcilable. Therefore, the alternative approaches even of contending schools are part of a larger mixed strategy. But the conflict is about method, not nature, for the individual models, while they are essential for understanding reality, should not be confused with that reality itself’.

This approach is clearly a greater threat to the metaphysical realist position than the other two. The weather system, for example, may require no causal detail at all to be understood: statistics will be all that is required. In such a case a metaphysical realism is not easily justified, and Weisberg suggests that it is then advisable not to be a realist. Taken together, the three idealization models on offer don’t give a unanimous answer to the question of whether science requires a realist or an anti-realist position. Weisberg’s approach justifies a limited, pluralistic antirealism. Much of science is realist – chemists and pilots and architects are realists about their work – and most of the idealization models place an emphasis on realism, so Weisberg’s pluralism is a limited one. Hughes’s book nimbly moves back and forth to interrogate how architects have danced around the seeming gap between the idealizations and the reality of what they are designing.

Relevant for any scientific idealization are inclusion rules and fidelity rules. Inclusion rules determine what phenomena are being targeted, and fidelity rules concern ‘… the degrees of precision and accuracy with which each part of the model is to be judged’. Completeness is the ideal of any representation associated with classical science. To be complete, a theory must include all relevant phenomena, to fulfill the requirement of its inclusion rules, and it must represent every aspect of the target system, to fulfill the requirements of the fidelity rules.

Recognition that complete adherence to the inclusion and fidelity rules is impossible motivates the use of idealization. Completeness is a regulative ideal in Kant’s sense. Weisberg argues that such ideals ‘… do not describe a cognitive achievement that is literally possible, rather, they describe a target or aim point. They give the theorist guidance about what she should strive for and the proper direction for the advancement of her research program. If a theorist adopts COMPLETENESS, she knows that she should always strive to add more detail, more complexity, and more precision to her models. This will bring her closer to the ideal of completeness, although she will never fully realize this goal’.

Philosopher Peter Godfrey-Smith argues that science is realist. He writes: ‘One actual and reasonable aim of science is to give us accurate descriptions (and other representations) of what reality is like. This project includes giving us accurate representations of aspects of reality that are unobservable’. Weisberg’s three-model approach to idealization allows for a broadly realist position, although it is patchy, in that some idealizations within the approach may have to remain staunchly anti-realist.

Roy Sorensen wants more unity, as I hinted above, and thinks that an argument mounted from within the philosophy of language is helpful. His argument hinges on a distinction between supposing and asserting. He argues that idealization is an act of supposition. The idealised models are therefore not assertions of the truth about the world but fictions that don’t make claims about reality as such. They are a species of thought experiment. He argues that all thought experiments are suppositions, that all scientific idealizations are thought experiments, and that thought experiments are a species of scientific experiment.

This speech act of supposition unifies the three models of idealization and tames the anomalies that lead Weisberg to assume a limited pluralism with regard to realism. The anti-realism of the multiple-models idealization is removed by not assuming that any assertion is being attempted. A supposition isn’t an assertion and so can tolerate the inconsistency between the different theories of the perspectivism embedded in multiple models. The suppositions assign no probability to a theory. An assertion requires a justification from epistemology: ‘how do you know?’ A supposition doesn’t.

A supposition is distinct from make-believe and story-telling because in make-believe the author is pretending to give testimony; supposition isn’t pretending to do that. In this way the scientific idealiser avoids the accusation of being a liar. Supposition is non-metaphorical and non-idiomatic; it avoids the pragmatic requirements of normal language use and is, in this sense, a form of autism. The challenge is then to explain how a supposition can explain anything in the actual world.

Application conditions need to be clear: the idealizations need to be very precise about how they apply. The model–world relation may appeal to theoretical similarity or isomorphism, and it avoids the risk of assertion by emphasizing that idealization is a supposition, not an assertion. In this way the menace of idealisation is undermined.

Sorensen’s language-sensitive approach leads to further refinements of what scientific theorizing assumes. A subtle claim he makes is to deny that logical equivalence implies equal closeness to the truth. Suppositions that model reality well can be logically equivalent to suppositions that model reality badly; despite this, the good ones are better than the bad ones because they are closer to the truth. Even meaningless statements can be closer to the truth than other meaningless statements. Popular discourse about scientific standards of reliability and validity tends to overlook this and is perilously distorting, yet science, including physics, accommodates the idea. So any answer system that fails to accommodate the idea that closeness to the truth can vary where logical equivalence does not misrepresents the scientific ideal.

Logical equivalence is derived from the entailments of whatever criterion is being used. Popper thinks verisimilitude is a matter of having true consequences and avoiding false ones, and logically equivalent statements have the same consequences. But counting consequences is difficult. Goodman’s new riddle of induction used predicates like ‘grue’ and ‘bleen’ to make such calculations difficult, and a criterion in one language can be further from the truth in another.

Assessments of scientific language agree that languages should be ranked in terms of how good they are at eradicating such difficulties. Carnap argued that indexicals and ambiguity were a problem, and ruled out languages that used them in order to remove the difficulty. Quine argued that only a language cutting nature at the joints would do: language would have to be precise. Goodman wanted entrenched predicates with a good track record in past inductions, and therefore endorsed a type of continuity constraint.

Perhaps mereological criteria that prevent double counting will work. Breaking criteria down into relationships between component parts and the whole is, prototypically, the way criteria have developed for assessments in high-stakes exams, for example. Winner-takes-all theorists suggest we wait and see which criterion prevails over time at preserving the truth best. But this allows the winner to legislate retrospectively on the criteria for winning, and it is now, not in the future, that we need to be able to judge truth-values. Again, these are issues that bubble throughout Hughes’s marvellous book in relation to architecture.

Strict logical implication would render any criterion implying a false conclusion logically equivalent to any other that also implied a falsehood. But this would be uninformative. We count near misses as better than big misses, and fewer misses as better than more misses. Scientists use the discourse of ‘closer to the truth’. Scientists hold that there are five kingdoms of complex organisms; a scientist idealising to six is closer to the truth than one idealising to sixty thousand. In maths the same sort of considerations justify the same approach.

‘Closer to the truth’ discourse allows for errors within margins of acceptability to be established on the grounds that some errors are better than others. Salsburg explains this when he writes about the way mathematics overlaps with empirical science:
‘The numbers we get from this random sample are most likely wrong, but we can use the theorems of mathematical statistics to determine how to sample and measure in an optimum way, making sure that, in the long run, our numbers will be closer to the truth than any others’. We are licensed to suppose a mathematical model to get closer to the truth without having to assert that its results are the truth. If Hughes is to be believed, architects fear that closer to the truth may never be close enough.
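Salsburg’s point can be made concrete with a small sketch – mine, with made-up numbers, not his: the mean of a random sample is almost never exactly the population mean, but as the sample grows the estimate is, in the long run, closer to the truth than any rival guess, and statistics tells us roughly how close.

import random

random.seed(0)
# A stand-in 'population' whose true mean we pretend not to know.
population = [random.gauss(100.0, 15.0) for _ in range(100_000)]
true_mean = sum(population) / len(population)

for n in (10, 100, 1_000, 10_000):
    sample = random.sample(population, n)
    estimate = sum(sample) / n
    print(f"sample size {n:>6}: estimate {estimate:7.2f}, error {abs(estimate - true_mean):5.2f}")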

But progress in science can be measured in terms of its getting closer to the truth. In maths, truth seems more stable even though errors are made, so progress in maths is best measured in terms of full truth, though not exhaustively. Statistical and analogical reasoning figure in maths even though deductive truth overrules both. In assessment, statistical reasoning is used; statistical reasoning does not violate mathematical standards, and given that maths and science are largely intertwined, there is no violation of scientific standards purely from using statistical reasoning.

There are inconsistent theories that get closer to the truth than consistent ones, so logical equivalence cannot be the sole determining factor in the concept of ‘closer to the truth’. This is true in physics, the science that assessment-theorists most want to emulate. Schott gives as an example Bohr’s theory, which bought its predictive accuracy at the price of deepening inconsistency with ordinary mechanics and electrodynamics:
‘Bohr’s theory of the Balmer series is based upon several novel hypotheses in greater or less contradiction with ordinary mechanics and electrodynamics, …yet the representation afforded by it of the line spectrum is so extraordinarily exact that a considerable substratum of truth can hardly be denied to it. Therefore, it is matter of great theoretical importance to examine how far really it is inconsistent with ordinary electrodynamics, and in what way it can be modified so as to remove the contradictions’.

Consistency is often seen as something that is required to achieve scientific truth. Yet verisimilitude in physics can accommodate contradiction and inconsistency.
Norton says:
‘If we have an empirically successful theory that turns out to be logically inconsistent, then it is not an unreasonable assumption that the theory is a close approximation of a logically consistent theory that would enjoy similar empirical success. The best way to deal with the inconsistency would be to recover this corrected, consistent theory and dispense with the inconsistent theory. However, in cases in which the corrected theory cannot be identified, there is another option. If we cannot recover the entire corrected theory, then we can at least recover some of its conclusions or good approximations to them, by means of meta-level arguments applied to the inconsistent theory.’

This is not a denial that consistent theories are available. But it suggests that if they can’t easily be found, approximate theories that work quite well are legitimate. Some theorists deny that consistency is always available, however; the multiple-models idealisation approach suggests this. If this is right, then the failure to use an inconsistent theory that is close to the truth becomes an act of folly. Shapere, for example, writes with classical electrodynamics in mind: ‘… there can be no guarantee that we must always find a consistent reinterpretation of our inconsistent but workable techniques and ideas’.

Frisch agrees: ‘If acceptance involves only a commitment to the reliability of a theory, then accepting an inconsistent theory can be compatible with our standards of rationality, as long as inconsistent consequences of the theory agree approximately and to the appropriate degree of accuracy. Thus, instead of Norton’s and Smith’s condition that an inconsistent theory must have consistent subsets which capture all the theory’s acceptable consequences, I want to propose that our commitment can extend to mutually inconsistent subsets of a theory as long as predictions based on mutually inconsistent subsets agree approximately’.

Inconsistent statements can be equally close to the truth. If the truth is that it is about noon, ‘it is ten to twelve’ is as close to the truth as ‘it is ten past twelve’, yet the two statements are mutually inconsistent. They are part of an acceptable repertoire of thinking about truth proximity.

Meaningless statements may be closer to the truth than others to the extent that they resemble meaningful statements that do have a proximity to the truth. Yet meaningless statements are degenerate in terms of logical consequence: a meaningless statement has no logical consequences. Even so, some nonsense can be used to get close to the truth. Meaninglessness can carry implicature, which may allow us to decide that, though strictly equivalent, one meaningless statement is closer to the truth than another. So with time: noon and midnight are singularities, and strictly ‘12.00 PM’ is meaningless. But if it is noon, that report is closer to the truth than a report of ‘12.10 PM’, as Sorensen points out.

Here a general point about the subtleties of the philosophy of language can be made. Grice’s ‘Studies in the Way of Words’, for example, is the prototype of the kind of study that Sorensen uses to defend the role of supposition and thought experiment in mainstream scientific activity. Thought experiments are genuine experiments and an essential part of science, as are idealisations and models.

Architectural paranoia about error arises when a view of science and precision is beholden to logical positivists and ordinary language philosophers who supposed that there are many more meaningless statements than there actually are. The stereotypical picture of science as rigorous, precise and methodical in a Positivist manner restricts true engagement with the genuine presuppositions and practices of science, and distorts the role that distortion and error play in getting at truths. As Yablo has recently been arguing, we sometimes need to say sentences and think thoughts that contain falsehoods in order to express the truth.

Wittgenstein wrote a whole book that claimed to be meaningless but was close enough to the truth to be useful. In the ‘Tractatus’ he wrote:
‘My sentences are illuminating in the following way: to understand me you must recognize my sentences – once you have climbed out through them, on them, over them – as senseless (You must, so to speak, throw away the ladder after you have climbed up on it).
You must climb out through my sentences; then you will see the world correctly’.

Donald Davidson argued that linguistic meaning requires a principle of charity, constraining interpreters to maximize agreement with the author of the statements they interpret. So Wittgenstein can be read as meaningless and still fulfil Davidson’s principle. If Frisch is right in thinking that there may not be a consistent substitute for an inconsistency, then Wittgenstein’s meaningless text may be as close to the truth as you can get. Wittgenstein’s text, like many works of art, is an attempt by the human to transcend the limits of language and thought. And in the final chapter of her book this is where Hughes ends up, contemplating Wittgenstein’s radiator and the crazy requirements of anti-error that went into its design.

Wittgenstein’s house for his sister was a distraction from his failing mental health, and his experiment in precision and architecture. His iconic white house was actually ‘the silky sheen of white ochre stuccolustro with some red added’, with charcoal grey floors and grey-green metalwork; it was painted white after the war. Materials were determined down to a millimetre at any point. He brought down the margin of tolerance in ‘a radical inflation of exactitude’, closing the gap between the calculated and the measured, between the conceptual and the actual, in a realization of the idea of being ‘closer to the truth.’ The process here is about exactitude rather than an instantiation of the standardization that was one consequence of the fetishisation of being error-free. It was a return to an eighteenth-century exactitude where numbers were ‘horning in’ on matter. Different doors were different sizes calculated to the millimetre, as were door handles, the thickness of walls and so on. This was the opposite of any mass-production precision. The precision he demanded meant many manufactured objects were rejected; the casting of the radiators was beyond anyone in Austria. Ornament became the exactitude of measurement. Excess precision, excess labour and physical absence triumphed. In this respect Wittgenstein’s was an aesthetic of excess, not minimalism. Hermine Wittgenstein says: ‘When I am writing this a great yearning comes over me to see again those noble doors, in which one would be able to recognize their creator’s spirit, even if the rest of the house were to be destroyed… ‘ She sees this as a ‘house turned logic.’

A memoir of one who was there recalls: ‘I can still hear the locksmith, who asked him with regard to a keyhole, ‘tell me Herr Ingenieur, is a millimetre here really that important?’ and even before he had finished his sentence, the loud energetic ‘Ja’ that almost startled him.’ Lorraine Daston writes: ‘… the more precise the measurement, the more it stands as the solitary achievement of the measurer, rather than the replicable common property of the group.’ Hughes suggests the house is a laboratory where Wittgenstein strove to overcome the human error of the human mind, an exercise like the Tractatus where the precision built a meaningless text as close to the truth as you can get, something, like many works of art, attempting to transcend the limits of language and thought. Poet Elizabeth Bishop and art critic Clement Greenberg are cited by Hughes in this arena with deft aptness.

Elizabeth Bishop:

‘no focus
no gestures: no light shades (they were removed in favor of bare bulbs),
no radiator covers, no keyhole covers

some artificial colour

no distortion

no angst or effort (showing)

plenty of ego

(dead-pan architecture)’

Clement Greenberg wrote on colour-field painting in the 1950s: ‘Every part … is equivalent in stress to every other part.’

Wittgenstein’s strange house was an art of heightening specificity, a continuation in materials of Babbage’s mechanized calculation, of the calculus project of Leibniz. Fetishised surface ornament was turned into a fetishised excess of surface precision. The result is a building whose ornament is hidden within secret calculations beyond conscious perceptual abilities, working thus at a level beyond the standard, and closer, closer to its baroque truth. Inside its simplicity and minimalism it is an extremity of complexity and ornamentation.

Where is error now? In the surface materiality, because counting is only safe if the pieces don’t change. Always that little dark threat again: that perhaps something has been added or subtracted and we didn’t notice. This is materiality’s ‘white noise’, returning us to Le Corbusier’s treacherous knot.


ABOUT THE AUTHOR
Richard Marshall is still biding his time.

Buy the book here to keep him biding!

First published in 3:AM Magazine: Saturday, November 15th, 2014.