Coming back to Ludwig

Wittgenstein at the bar of the Folies Bergère

By what James Joyce would call a vicus of recirculation, I find myself once more in agreement with Ludwig Wittgenstein after an unexpected falling-out.

It was my reading of Wittgenstein’s well-known dictum that ‘the meaning of a word…is its use in the language’ along with his notion of ‘language games’ and ‘forms of life’ that sent me plunging back into the preliterate past on what proved a course of unexpected discovery.

Regarding the dictum, I had always taken the emphasis to fall on ‘use’, so that the contrast was between a word actively employed – dynamic, doing something – and the word in isolation, in repose, as it were, sleeping between the covers of the dictionary. My reading was that ‘meaning’ was not an inherent property of words but something they had only when they were put to use, like a charge they acquired when they were active – and where that happened, of course, was the ‘language game’ or ‘form of life’.

The difference between those two, as I understand it, is a matter of scale: put simply, the language game is the smaller unit, a particular activity defined within the wider context of the ‘form of life’ – which in turn has a range of applications, from all the activities typical of a group or occupation to the culture of a tribe (such as Amazonian Indians or Cambridge dons). In any case, these were places where language was used, and in which the meaning of Wittgenstein’s dictum could be tested.

For example, the ancient practice of buying a bus ticket from a conductor might be classed as a language game, so that the utterance ‘fourpence please’ (together with the proffering of coin) will initiate an activity which culminates in the conductor issuing a ticket to that value; or in Wittgenstein’s ‘building game’ the utterance ‘slab’ will see the builder’s assistant bring him the appropriate item from the pile. Wittgenstein’s point, as I understood it, was that the meaning of the word derived from its context and was inferred from it – in contrast to the conventional explanation that the utterance is a shortened form of a longer one that is ‘understood’ but not spoken, so that ‘fourpence, please’ really means ‘could I have a fourpenny ticket, please?’ and ‘slab!’ means ‘bring me a slab’. In other words, the grammarian’s urge to supply a complete sentence in order to explain the meaning of the word or phrase was not needed: its use (in the language game) showed what it meant.

What caused me to stumble, when I stepped back into preliterate times, was the realisation that words are an artefact of writing (though as a matter of fact, not all writing systems make word divisions). They do not exist as separate elements in speech, which presents as a rhythmic flow of sound. Furthermore, preliterate speech was embedded in, and inseparable from, human activity: it was always part of a language game or form of life, having nowhere else to go – our preliterate ancestors could not contemplate the meaning of the written word nor hear it spoken in isolation (as on a radio programme, say). As such, I realised, speech would be overshadowed by a whole range of accompaniments that were more immediately ‘speaking’ than it was itself: glances of the eye, facial expressions, gestures, posture, bodily movement (one need only think of the effectiveness of mime in communicating to see this).

At first, I took refuge in the fact that ‘meaning’ could be seen, in a sense, as ‘spread across’ the whole activity (or language game, if you like) – any speech was a contributory part to something that was understood as a whole (so that people would ask ‘what is going on here?’ rather than ‘what did he say?’ in order to find out what was happening). However, I gradually realised that I was bringing my own (literate) understanding to bear in doing that: our ancestors would not have had any use for ‘meaning’ as a term: people might have misunderstood situations or others’ intentions, but not their utterances (chiefly because those did not play as significant a part as they do for us: they would not carry the same burden of meaning, since intention was largely conveyed by other forms of expression, and understanding came from paying attention to the situation, what people were doing, rather than to what was said).

This led me, unexpectedly, into conflict with Wittgenstein. Even though it was his dictum about use and his notion of the language game that had set me on my course, I saw now that meaning was an inherent property of words, and that was a change that came about through the discovery of words, as elements of speech, through writing. Put simply, something written on a tablet can be read and understood in itself, without reference to anything external. ‘The cat sat on the mat’ does not require to be checked against an actual situation to be understood: if I know the words and can read, then I know exactly what it means, regardless of whether there is a cat or a mat for it to sit on. In other words, you can know what something means without knowing whether or not it is true, in the sense of referring to an actual state of affairs. I came to see that Language, as we know it today, is not only an artefact of writing (which gives speech a visible, permanent form that can be analysed) but also has a peculiar character: it is what I would call both an abstract entity and a self-validating structure (or whole).

By ‘abstract entity’ I mean the process by which it is transformed from preliterate speech, inextricably entangled with human activity and overshadowed by the more immediate forms of expression that were its invariable accompaniments, into a separate, self-contained entity: through being written down, and so separated from its original context, speech can for the first time be considered as something in itself; at the same time, its visible form allows it to be analysed, so that what is heard as a rhythmic flow can be seen as a pattern articulated from separate elements or words.

By ‘self-validating structure’ I mean the property noted above, that something written can be understood in itself, without reference to anything external, and this because words can be seen as the nodes or meeting-points in a complex structure (the Forth Bridge, say) where each part is supported by every other part and each node is the meeting point of forces that fix it in place. In the same way, the meaning of a word is determined, not by its relation to anything external, but rather by its relation to all the other words, its place in the language.

[This, of course, is the basic structuralist model, which considers any given sentence in terms of two different planes: the horizontal plane consists of the sentence itself, and how the words in it relate to one another. In an uninflected language like English, position matters: The cat sat on the hat means something different from The hat sat on the cat; in an inflected language like Latin, it is the word endings that indicate the grammatical relation (puella amat nautam, nautam puella amat and amat nautam puella all mean ‘the girl loves the sailor’ but puellam amat nauta means ‘the sailor loves the girl’). The vertical plane consists of all the words that could stand in the stead of those in the sentence. It is easier to picture this in terms of a one-armed bandit or fruit machine: instead of a row of bells or lemons, the sentence The cat sat on the hat is displayed in the window with each word on a separate drum or cylinder. The lever is pulled, the drums spin, and a new sentence is formed: A bird nested under an arch. This has the same grammatical form (article + noun [subject] + verb + preposition* + article + noun [object]) but means something quite different. Thus, we understand any given sentence not only in terms of its actual content, but also in relation to any grammatically similar sentence; to put it another, slightly puzzling way, the meaning of a word in any given sentence is defined in part by what it is not, i.e. by all the words that could stand (grammatically) in its stead.]
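The fruit-machine picture can be put in a few lines of code – a toy of my own devising, with arbitrary word lists, not anything from the structuralist literature: each ‘drum’ holds the paradigmatic set of words that could occupy one slot of the grammatical frame, and pulling the lever yields a new sentence of the same form.

```python
import random

# A toy sketch of the structuralist 'fruit machine': each drum holds the
# paradigmatic set of words that could occupy one slot of the frame
# article + noun (subject) + verb + preposition + article + noun (object).
slots = [
    ["The", "A"],                 # article
    ["cat", "bird", "girl"],      # subject noun
    ["sat", "nested", "slept"],   # verb
    ["on", "under", "beside"],    # preposition
    ["the", "a"],                 # article
    ["hat", "arch", "mat"],       # object noun
]

def pull_lever(rng):
    """Spin every drum: choose one word per slot. Whatever comes up shares
    the grammatical form of 'The cat sat on the hat', whatever it means."""
    return " ".join(rng.choice(words) for words in slots)

rng = random.Random()
print(pull_lever(rng))
```

Every pull produces a grammatical sentence; the meaning of the word in each window is shaped, in part, by all the other words that could have landed there.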

And it was thinking on this that led me, by a roundabout route, a vicus of recirculation, to reconsider what Wittgenstein actually said, and where the emphasis should fall in the dictum I have quoted: ‘the meaning of a word is its use in the language.’ Perhaps the stress should fall not on ‘use’ but on the closing phrase, ‘in the language’.
This slight shift significantly alters the perspective: where, before, I had seen ‘use’ as drawing attention to the dynamic character of meaning – that it was only present when words were being used, rather than when they were idle – I now saw the dictum as a repudiation of Wittgenstein’s earlier ‘picture theory’ of meaning, outlined in the Tractatus.

In that, the meaning of a proposition is a matter of correspondence: as the arrangement of elements in a picture must correspond to what it depicts (so that one could, as it were, draw lines from one to the other) so too the arrangement of words in a proposition must correspond to, or picture, some state of affairs in the world. In short, the meaning of a proposition depends on an external relation between it and the world. But if we read Wittgenstein’s dictum with the emphasis on the last phrase, it could be understood as saying that the meaning of a word is a purely internal matter: it is defined by its use in the language, not in relation to anything outside the language. A word does not mean by pointing to something outside itself that validates it; its meaning is determined by its relations to other words, those alongside it in the proposition and (as suggested by the structuralist model described above) those that could stand in its place.

So, having arrived at my notion of Language as a self-validating structure or whole and entered into it gladly as one might a public house after a long stravaig on a hot day, who do I find but Ludwig himself already drinking at the bar, having got there a long way ahead of me.

I’m glad to find myself back in his company.

*I realise that grammarians of a more modern stripe would call ‘nested under’ (and ‘sat on’) phrasal verbs, but I am an old-fashioned sort of fellow.

The Shepherd Boy and the Philosopher: a fable about numbers

‘It’s surreal to me that it’s 2022 and there are still people out there who think 2 + 2 = 4 is an objective truth that was true before humans even existed and not just like a thing society agreed on because it’s useful’ (culled from Twitter, where people say the most extraordinary things out loud)

Let us start with big sister and little brother minding their flock of twenty sheep. Little brother hugs one of the sheep and says, ‘Methera is my favourite!’ Big sister asks, ‘Why do you call her Methera?’ Little brother looks surprised. ‘Because that’s her name,’ he says. ‘You call them all by name at the end of the day, but Methera’s the only one I can pick out – O (pointing to another sheep) that’s Bumfitt!’

Big sister realises that little brother has misunderstood. She explains that she isn’t calling the sheep, she’s counting them. She shows him, using her fingers as she says, ‘Yan, tyan, tethera, methera, pimp!’ (she holds up all five fingers on one hand, then moves to the other) ‘Sethera, lethera, hovera, dovera, dik!’ (she holds up all ten fingers of both hands, then curls one into her palm as she continues) ‘Yanadik, tyanadik, tetheradik, metheradik, bumfitt!’ (little brother laughs) ‘Yanabumfitt, tyanabumfitt, tetherabumfitt, metherabumfitt, giggot!’ And when she reaches ‘giggot’ she makes a score on the ground with her crook. ‘There, you see – we have one score of sheep: I counted them. Now you try!’

It takes little brother a few attempts to get the sequence right – because that’s the important thing – but luckily the rhythm, rhyme and pattern all help: ‘dovera hovera’ just doesn’t sound right the way ‘hovera, dovera’ does, and in the same way his ear and tongue tell him it’s ‘sethera lethera’ not the other way round. Once he’s got the sequence, big sister tests him on the rule.
‘The rule is “add one”, you see – each number in the sequence is one more than the number before it. That’s why you need to know the sequence: the value of each number depends on its place.’

She tests him by holding up different numbers of fingers and having him count them, then adding one. He gets the hang of it quickly and pretty soon if she holds up seven he says ‘lethera’ right away and if she adds another three he counts ‘hovera, dovera,’ in his head then says ‘dik!’ out loud. Then big sister gathers up some pebbles and sets out all the numbers up to twenty – a group of one, then two and so on – so that he can see them all side by side. They play around with the pebbles and see how they can make up the numbers in different ways: Yan and tethera give you methera, but so does tyan and tyan. By the end of the day, he’s able to count the sheep all by himself and he knows what numbers are.

When he’s a bit older, little brother goes into the town and meets a merchant with an abacus. He watches him for a bit and the merchant, aware of his interest, asks, ‘Can you count?’ ‘Yes!’ says little brother, and slides each bead across on the abacus as he says ‘Yan, tyan, tethera, methera…’ ‘Oho, a shepherd boy!’ says the merchant. ‘Here in the town we have another way of counting, but it’s the same really.’ And he writes out the numbers 1-10 in the dust as he counts on the abacus. ‘Now, we say “one, two, three, four, five” but you can say “yan, tyan, tethera, methera, pimp” – the names might be different but the numbers are still the same.’ He teaches the boy to count using numbers – 1, 2, 3 – and shows him how he can go beyond a score: 21, 22, 23. They pass a pleasant day playing around with numbers and translating the merchant’s numbers into shepherd’s numbers and back again.

Another day, when he’s older again, the boy goes to the city and meets a philosopher. ‘Do you know what numbers are?’ the philosopher asks. ‘O, yes,’ says the boy, ‘I can count.’ And to show him, he counts to twenty the shepherd’s way and then the merchant’s way. ‘Indeed, you can count,’ says the philosopher, ‘but that wasn’t what I asked – what are numbers?’ The boy is puzzled a moment, since he thinks he has just shown him, but then he writes out the figures 1 to 10 in the dust. ‘I suppose you mean these? That’s what numbers are.’ ‘But I could do that a different way,’ says the philosopher. ‘Here’s how the Romans used to do it.’ And he writes it out in Roman numerals, I, II, III, IV, V and so on. ‘And do you know your alphabet?’

The boy recites it for him.

‘Well, you could use that too,’ says the philosopher. ‘Any sequence you know can be used to count if you follow the rule of “add one”: so a is 1, b is 2, c is 3 and so on. Or d is methera, e is pimp, f is sethera and g is lethera, if you like.’

‘I see that,’ says the boy. ‘They’re just different names for the same thing, or different ways of doing the same thing.’

‘But what is that thing?’ asks the philosopher. ‘That’s what I’d really like to know! If seven and lethera and 7 and VII and even g are all the same thing, what is that thing? And where is it?’

The boy shrugs. He can sense the philosopher’s excitement, but he doesn’t share it. It does not seem necessary to him to know these things.

‘What does it matter, as long as you can count?’ he asks. ‘Isn’t that the important thing? If you follow the sequence and apply the rule your sums will always work out. Tyan and tyan will give you methera, two and two will give you four, 2 + 2 will always equal 4.’

‘But isn’t that the wonder of it?’ says the philosopher. ‘Here are these things – numbers – and we can call them by all sorts of different names, but they always add up, and you know you can rely on that. Suppose someone came up and said, 2+2=5 – how would you react?’

‘I’d tell him he was wrong, that he couldn’t count.’

‘But suppose he insisted? How could you show him that he was wrong?’

After some thought, the boy says, ‘I’d ask him to count to ten. If he did it right, then I’d show him using pebbles for numbers, and he’d see that 2 and 2 couldn’t make 5 but had to be 4.’

‘But what if he did it wrong? What if he counted 1, 2, 3, 5, 4?’

‘Then I could show him that we both agreed, but that we used number-names differently: what he called 4, I call 5 and the other way about. I’d like to see what he did with the higher numbers, too – like 14 and 25 and 44 – but as long as we both used a consistent sequence, even in a different order, we could make our sums work out, because it’s the place in the sequence that determines the value, along with the rule – go to the next in the sequence, you add one, go back, you take one away.’

‘And isn’t that marvellous? Suppose someone else came up and said, “For me, two and two is seven, and two and three is eleven, and eleven and seven is nine, and nine and two is one”?’

‘Well, I wouldn’t trust him to count anything, that’s for sure. But I could ask him what rule he’s following, what sequence he’s using.’

‘And what if he says, “O, I don’t follow any rule (and you can’t make me!) I just use the numbers in any order I like – 7, 4, 5, 8, 2, 3 one day and 6, 1, 7, 4, 9 the next. I’m a free spirit. If I say 2+2=7 then that’s what it equals, for me. After all, numbers are just something we’ve invented: you can use them any way you want”?’

‘Then I’d ask him, “But what do you use them for? I use mine to count sheep.” I suppose I might try to diddle him, just to teach him a lesson, but that would hardly be fair, since he clearly doesn’t know what he’s talking about: he can’t count, he doesn’t know what numbers are.’

‘Which brings us back to my original question,’ says the philosopher. ‘Just what are numbers? There’s something mysterious about them. They seem to be the same for everyone, though we can call them by different names. Once you apply the rule of ‘add one’ to a sequence, you always end up with the same numbers, no matter what you call them, because they always add up the same way, and you know they always will, as long as you’ve got the sequence right. Yet they don’t seem to be anywhere: I mean, you can’t pick them up, or go and look at them or show them to someone else – but you know they must exist, and that they’re infinite, because no matter how high you count, you can always add one more. It’s amazing!’

The boy shrugs. ‘Maybe numbers aren’t a thing at all,’ he suggests. ‘Maybe they’re something you do, like – like playing the piano.’

This brings the philosopher up short but seems to please him. ‘What made you say that?’ he asks. ‘I don’t know,’ says the boy, ‘it just came to me. I suppose because there isn’t a ‘what’ you can ask about – it’s just playing the piano: it’s something you do. So’s playing with numbers.’

‘But what are you playing with?’ demands the philosopher afresh.

The boy shrugs. It does not seem necessary to him to know such things. It is only when he is an old man that he one day says to his big sister as they sit by the fire, ‘I see it now – he was trying to fit them into his scheme of things. Only I didn’t have a scheme of things, so it didn’t matter to me. He wanted to find a way to think about them, to connect them up to a bigger picture so that it all worked. I suppose that’s what philosophy is: trying to fit everything into the same big picture. I seem to have managed without one all this time. What about you?’

But his sister is already asleep.
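The boy’s claim in the dialogue – that any consistent sequence supports the same arithmetic, because place in the sequence determines value, even when the names come in an unfamiliar order – can be checked with a short sketch (my own illustration; the sequences are the shepherd’s count and the stranger’s permuted count from the fable):

```python
# Place in the sequence determines value: to add two number-names, find each
# name's place, add the places, and read off the name at the resulting place.
shepherd = ["yan", "tyan", "tethera", "methera", "pimp",
            "sethera", "lethera", "hovera", "dovera", "dik"]

# The stranger who counts '1, 2, 3, 5, 4': same names, different order.
stranger = ["one", "two", "three", "five", "four",
            "six", "seven", "eight", "nine", "ten"]

def add(sequence, a, b):
    """Add two number-names under the rule that value = place in the
    sequence (places counted from one, so list index = value - 1)."""
    value = (sequence.index(a) + 1) + (sequence.index(b) + 1)
    return sequence[value - 1]

print(add(shepherd, "tyan", "tyan"))  # tyan (2nd) + tyan (2nd) -> 'methera', the 4th name
print(add(stranger, "two", "two"))    # in the stranger's order the 4th name is 'five'
```

The sums come out the same in both systems; only the name attached to the fourth place differs – what the shepherd calls methera and we call four, the stranger calls five.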

The Magic Money Tree: is Covid-19 a game-changer?

The idea of ‘convention’ and its associated activity of ‘deeming’ are fundamental to human activity, as I think I have said elsewhere.

By ‘convention’ I mean the agreement to be bound by something, to deem it to have a power which in reality resides with us.

The paradigm of this concept is when a child, in the course of a game, goes through the motions of tying you up with imaginary rope, then says, ‘There now, you’re my prisoner and you can’t escape’. Both child and adult know, on one level, that this is not the case (being called to the tea-table will dissolve it in an instant) but also that there is a space ‘in the game’ where it holds good, where you both agree to act as if you were bound.

There are two important things to note here: the paradox at the heart of this activity, and its origin in childhood, which suggests that it is ancient, instinctive and intuitive.

The paradox is that the power which we deem the external thing to have is actually our own: we are bound only by our own agreement to follow the rules; it is something which, in theory at least, we can shrug off at any time (though habit can be coercive). The fact that adult conventions are backed by a system of law and enforcement is proof of this: yes, I can be fined or sent to jail if I transgress certain rules that society has agreed, but that requirement for added enforcement is an admission that the conventions, of themselves, have no power to compel.

When children play games and set out the rules, they are not imitating adult behaviour; the reverse is actually the case – the use we make of convention and deeming in adult life derives, I would argue, from that instinctive childhood behaviour. Imposing order on the external world, structuring our lives by giving ourselves rules to follow, is, I would suggest, a method we have evolved that allows us to make sense of experience and manage the problem of existence; but how we do it is up to us.

And that brings us to the magic money tree and these extraordinary times we are living through. It was Theresa May, I think, who said that there was ‘no magic money tree’ in response to a question raised about funding the NHS. As many have since remarked in the recent turn of world events brought about by the coronavirus pandemic, the government now seems to have found an entire magic forest.

If we deconstruct the expression ‘magic money tree’ as Theresa May deployed it, we find it is shorthand for the notion that ‘no matter how worthy the cause (e.g. nurses’ pay) the government can’t just conjure up money to pay for it.’ The implication is that the government is bound by some iron necessity which it could not disobey no matter how much it wanted to; but that is simply not true – the necessity exists only within the conventions of the game – as recent events have shown, there is a magic money tree, if the game has reached a stage where one is needed to keep it going.

The truth is that our economy (by which I mean global free market capitalism, which obtains generally) has no basis in necessity: it is simply a grand and elaborate game, a set of conventions that can be reduced to the notion that people must work (in providing goods and services) in order to earn money to pay for the goods and services which people work to produce, in order to earn money, etc. – but we do not have to live this way.

To be sure, there are necessities within the global free market economy (I must have money in order to survive) but those are – literally, not metaphorically – just the rules of the game: its logic is internal; it is not founded on any external necessity – there is, ultimately, no reason for it. The fact that we do not have to live this way is demonstrated by the fact that not everyone does, even now, and not so long ago, no-one did – civilisation, living in settled communities supported by agriculture, accounts for only ten thousand of the 200,000 years our particular species has been on earth; in other words, for 95% of human existence we have lived very differently from the way we do now.

Take the case of Richard Branson, one of those who have played the present economic game with such skill that they have amassed a vast pile of the magic leaves we call money, yet who is calling on his airline staff to make sacrifices, which led one Liam Young to tweet the other day:

‘Virgin Atlantic have 8,500 employees and Branson has asked them to take 8 weeks unpaid leave. It would cost £4.2 million to pay all of these employees £500 a week to cover this leave. In total that’s a cost of £34 million for 8 weeks. Richard Branson is worth £4 billion.’

[in percentage terms, £34 million is 0.85% of £4 billion, so the implication is that even after doing this, Branson’s wealth would be 99.15% intact]

Now, there are various points for comment here: on the face of it, there seems a great unfairness in asking others to give up their income when you yourself have plenty, even if (as people have pointed out) having a net worth of £4 billion does not mean you have that amount at your immediate disposal; and it is the fact of people working at such and such a cost to provide such and such a service that makes that net worth what it is.

The most succinct summation of the matter was offered by the person who commented, ‘Branson didn’t get his £4 billion by paying staff any more than he absolutely had to.’

No doubt she meant this critically, but it expresses an important truth. For Branson to pay his employees for not working is to play a different game from the one that made him so absurdly rich. That game depends on paying the market rate (which is, by definition, as low as you can get away with) for people to provide goods and services which you sell at a profit. To pay them for unproductive activity is (again, literally, not metaphorically) as if the person who wins at Monopoly, having amassed all the money and ruined everyone else, then says ‘let’s just keep playing, and if you land on my property, I’ll pay you till you’ve all got some money again.’ And of course you can do that if you want, but it’s a different game (and one that completely subverts the point of the original).

I expect, when all this is done, that the world will settle into a new shape, and perhaps a surprising one: habits and customs, once broken, may not be resumed; we may come to see that we need not do what we always have. At the moment it is clear that the present game – global free market capitalism – is in serious danger of failing, so the various participants (i.e. national governments) are willing to abandon the rules, at least temporarily – hence the magic money forest – in order to maintain a semblance of economic activity till we are in a position to start again in earnest; but what they may find, after this, is that people don’t want to play that game any more.


A picture of the world


Let us suppose two people, poring over a map spread on a table; make it an Ordnance Survey two-and-a-half-inch-to-the-mile one. They are planning a cycle journey together that will traverse the area shown on the map, by one of several routes. Both are skilled in reading maps, so that in tracing a possible route they can visualise the terrain it would take them through, the steepness of the gradients, the possibility of views and so on.

For the time that they are studying the routes, they are wholly absorbed: there is just them and the map; they feel the need of nothing else. The map and their discussions are interwoven, interactive. At length they decide the best way to go.

But the journey, for whatever reason, is never made as they planned.

Only years later, one of them undertakes it, in remembrance of the other, who has died.

The relation between the first scene, with the map, and the second, illustrates what I mean to show by this Venn diagram:

[Venn diagram]

And both are intended to show the relation between our everyday construct of the world (the blue bit, which, by convention, we deem to be reality, and which corresponds to the map scene) and how the world really is.

Why Writing is like a Playtex Bra

‘It lifts and separates’ is a slogan that will be familiar to those of my generation – it was advertised as the chief virtue of the Playtex ‘Cross Your Heart’ bra. However, it also serves as a memorable illustration of my theory concerning the origin of what we think of as Language.

The conventional account presents Language, as we have it now, as evolved speech, i.e. its origins go back to our first utterance, with the acquisition of a written form for transcribing it a logical development that occurs in due course – around five thousand years ago – but only after speech has held sway as the primary means of human communication for a couple of hundred thousand years.

However, I think that is not what happened; in particular, the notion that speech was the original and primary means of human communication, occupying a position analogous to what we call Language (both spoken and written) today, is an erroneous backwards projection, based on the status that speech only now enjoys.

The conventional account could be summed up briefly thus: ‘First, we learned to speak, and that is what made us human and marked us out as special; then we learned to write down what we said in order to give it permanent form, and that enabled us to store the knowledge and wisdom which have brought us to our present pre-eminence.’

However, there are good reasons to suppose that the eminence currently enjoyed by speech actually results from the invention of writing and its impact on human expression – what I would call the Playtex Moment, because the effect of that impact was to lift speech above the rest of human expression, and separate it.

Prior to the invention of writing, and indeed for a good time after it, since its impact was far from immediate, speech was, I would say, simply one aspect of human expression, and by no means the most important. By ‘human expression’ I mean the broad range of integrated activity – facial expression, gesture, posture and bodily movement, and a range of sounds, including speech – which human beings use to express their thoughts and feelings. Bear in mind that up to the invention of writing (and for a good time after it) speech was always part of some larger activity, to which it contributed, but did not (I would assert) dominate.

My ground for supposing this is that it is only through the effort to give speech a written form (which probably did not start to happen till writing had been around for a thousand years) that we come to study it closely, and to analyse it. I suggest there are two reasons for this – the first is that it was not possible to study speech till it was given a permanent, objective form; the second is that the need to analyse speech is part and parcel of the process of giving it written form. Crucially, it is only in writing that the notion of the word as a component of speech arises; speech naturally presents as an uninterrupted flow – rhythm and emphasis are of significance, but word separation is not. Word separation – which not every writing system uses – is a feature of writing, not speech.

In the same way, the whole analysis of speech in terms of the relations between words – grammar – arises from writing (for the good reason that it is only through writing that we can become aware of it). It is the understanding of Language that arises through the development of writing as a tool to transcribe speech that elevates and separates speech from the other forms with which it has hitherto been inseparably bound up.

The notion that we invented writing expressly to transcribe speech does not bear examination*: it was invented for the lesser task of making lists and inventories, as a way of storing information. It was only very gradually that we began to realise its wider potential (the earliest instance of anything we might call literature occurs a good thousand years after the first appearance of writing). Rather than writing being a by-product of speech, speech – as we now know it, our primary mode of communication and expression – is a by-product of writing.

And that is why writing is like a Playtex bra: it lifts and separates speech from all the other forms of human expression – but also (to push the analogy to its limits, perhaps) offers a degree of support that is only bought at the expense of containment and subjugation.

The interesting corollary is that if our present mode of thinking is Language-based – in the sense of ‘Language’ that is used here, a fusion of writing and speech – then that, too, is a relatively recent development**; however much it might seem second nature to us, it is just that – second nature: our natural mode of thought – instinctive and intuitive, developed by our ancestors over several hundred thousand years – must be something quite other, with a different foundation (which is, I would suggest, metaphor).

*if you doubt this, examine it: ask how such an idea would first have occurred, that things should be written down and given a permanent form – to remember them? People had been remembering without the aid of writing for thousands of years – why should they suddenly feel the need to devise an elaborate system to do something they could do perfectly well already? Ask also why, of all things, they would select speech as the one thing to make a record of – only if it were the sort of speech we have now – very much formed and influenced by writing – would it seem the obvious thing to record. Finally, ask how they would go about it – devising a script for the purpose of recording speech requires the sort of analysis of speech we can only acquire through already having devised such a script.

**do not underestimate what I mean by this: it is more than using words to think with. It is the complete model of the world as an objective reality existing independently and governed by logic and reason, and all that stems from that; all of which can be shown to derive from Language, which in turn arises from the impact of writing on human expression – a process initiated some two and a half thousand years ago, in classical Greece, by Plato and Aristotle.

The Actual Colour of the Sun

‘The sun is actually white, it just appears yellow to us through the Earth’s atmosphere.’

This is a line that appeared on Facebook a while ago, courtesy of my friend Else Cederborg, who posts all sorts of curious and interesting things.

It is a common form of argument that most will readily understand and generally accept without thinking too hard about it. Yet the sun is not actually white, nor does it just appear yellow, facts you can easily check for yourself.

Depending on the time of day and the atmospheric conditions, the sun is variously deep blood red, orange, dusky pink and (behind a veil of mist or thin cloud) a sort of milky white; much of the time it has an incandescent brilliance which can hardly be called a colour since we cannot bear to look at it directly. Though it may appear yellow, the usual place to encounter a yellow sun is in a representation of it: a child’s drawing, for instance (against a blue sky above the square white house, surrounded by green grass, with its four four-paned windows and red triangular roof with single chimney belching smoke) or else in the icons used in weather forecasting.

How we come to accept an argument that runs counter to all experience, and perversely insists on a condition for actuality (being seen outside the Earth’s atmosphere) which few of us will ever experience, is a case worth examining.

At the heart of it is that curious human thing, a convention (I say ‘human’ though it might be that other creatures have conventions, but that is for another time). A convention, effectively, is an agreement to treat things as other than they are – it is a form of pretence. This definition might seem surprising, since what is usually emphasised in conventions is their arbitrary character – the typical example being which side of the road we drive on, which varies from country to country.

However, what makes the rule of the road a convention is that we endow it with force, a force that it does not actually have: we say that you must drive on the left (or the right); that you have to – even though any one of us can show this not to be the case. Of course, if you engage in such a demonstration, you might find yourself liable to a range of legal sanctions, or worse still, involved in an accident. The fact that the rule has to be backed by force proves that it has no intrinsic force; on the other hand, the risk of accident shows that the pretence is no mere whim, but sound common sense: conventions are there because they are useful.

A great deal of human behaviour is conventional if you look at it closely: things are deemed to be the case which are not, actually (international borders are a good example – again, these are arbitrary, but it is the power with which they are endowed that makes them conventions). In this regard, it is worth recalling a remark that Wittgenstein makes somewhere (Philosophical Investigations, I think) to the effect that ‘explanation has to come to an end somewhere’. In other words, despite what we tell children, ‘just because’ is an answer. Conventions are so because we say they are so.

So what colour is the sun?

That takes us to the boldest and most fundamental of conventions, the one that underpins our standard way of thinking about the world, for which we have Plato to thank. Plato insists at the outset on discrediting the senses, saying that they deceive us, giving us Appearance only, not Reality.

This is a move of breathtaking boldness, since effectively it dismisses all experience as ‘unreal’ – as a starting position for philosophy, it ought to be quite hopeless, since if we cannot draw on experience, where are we to begin? If reality is not what we actually experience, then what can it be?

However, there is a different angle we can consider this from, which makes it easier to understand how it has come to be almost universally adopted. What Plato is proposing (though he does not see this himself) is that we should view the world in general terms: that we should not allow ourselves to be distracted by the individual and particular, but should see things as multiple instances of a single Idea or Form; or, as Aristotle develops it, as members of a class which can be put under a single general heading: trees, flowers, cats, horses, at one level, plants and quadrupeds at another, and so on.

Implied in this is the elimination of the subject: the world is not as it appears to me or to you or to any particular individual (since particular individuals exist only at the specific level): the world is to be considered as objective, as it is ‘in itself’, i.e. as a general arrangement that exists independently of any observation.

This is a very useful way of looking at things even though it involves a contradiction. Its utility is demonstrated by the extraordinary progress we have made in the 2,500 years or so since we invented it. It is the basis of science and the foundation of our systems of law and education.

Effectively, it starts by accepting the necessary plurality of subjective experience: we see things differently at different times – the sun is sometimes red, sometimes pink, sometimes incandescent; different people have different experiences: one man’s meat is another man’s poison. In the event of dispute, who is to have priority? That problem looks insoluble (though the extent to which it is an actual problem might be questioned).

So, we must set subjective experience and all that goes with it to one side, and rule it out as the basis of argument. Instead, we must suppose that the world has a form that exists independently of us and is not influenced by our subjective observation: we posit a state of ‘how things actually are in themselves’ which is necessarily the same and unchanging, and so is the same for everyone regardless of how it might appear.

This is how we arrive at the position stated at the outset, that the sun is ‘actually’ white, and that its ‘actual state’ should be taken as ‘how it appears from space,’ even though we live on earth and seldom leave it.

There is a confusion here, which has surfaced from time to time in the history of philosophy since Plato’s day. The strict Platonist would dismiss the whole question of the sun’s true colour in terms akin to Dr Johnson’s, on being asked which of two inferior poets (Derrick or Smart) was the greater: ‘Sir, there is no settling the point of precedency between a louse and a flea.’ Colour belongs to the world of Appearance, not the world of Forms or Ideas.

However, as the insertion of the argument about the earth’s atmosphere shows, we feel the need of a reason to dismiss the evidence of our own eyes (and this is where that little sleight-of-mind about ‘just appearing yellow’ comes in – so that we are not tempted to look too closely at the sun as it actually appears, we are fobbed off with a conventional representation of it that we have been accepting since childhood). The argument is that the atmosphere acts as a filter, much as if we looked at a white rose through coloured glass and variously made it appear green or red or blue: so just as we agree the rose is ‘actually’ white, so too the sun’s ‘actual’ colour must be as it appears unfiltered.

This, however, is specious. Leaving aside the fact that the rose is certainly not white (as you will soon discover if you try to paint a picture of it) at least our choice of default colour can be justified by adding ‘under normal conditions of light’ which most people will accept; but ‘as it appears outside the earth’s atmosphere’ can hardly be called ‘normal conditions of light.’

What has happened here is that the subjective ‘filter’ which Plato wished to circumvent by eliminating the subject – ‘let us ignore the fact that someone is looking, and consider only what he sees’ – has reappeared as an actual filter, so that the whole appearance/reality division has been inserted into the objective world by an odd sort of reverse metaphor. The need to give priority to one of the many possible states of the sun is still felt, and it is solved by arbitrarily selecting ‘the sun as it appears from space’ as its actual state, because that seems logical (though in fact it is not).

The subjectivity of colour in particular, and the subject-dependence of all perception in general, is like a submerged reef that appears at various periods in the sea of philosophy. Locke is troubled by colour, which he wants to class as a ‘secondary quality’ since it evidently does not inhere in the object, as he supposes the primary qualities – solidity, extension, motion, number and figure – to do. Colour is ranked with taste, smell and sound as secondary, which is just a restatement of Plato’s rejection of the senses in other terms.

Berkeley, however, sees that this is a specious distinction and gets to the heart of the matter with his axiom esse est percipi – ‘to be is to be perceived’. Everything we know requires some mind to perceive it: a point of view is implicit in every account we give. There can be no objective reality independent of a subject (as the terms themselves imply, since each is defined in terms of the other).

The response to this is illuminating. Berkeley’s dictum is rightly seen as fatal to the notion of an objectively real world, but instead of accepting this and downgrading Plato’s vision to a conventional and partial account employed for practical purposes, every effort is made to preserve it as the sole account of how things are, the one true reality.

Berkeley’s own position is to suppose that only ideas in minds exist, but that everything exists as an idea in the mind of God, so there is a reality independent of our minds, though not of God’s (a neat marriage of philosophy and orthodox Christianity – Berkeley did become a bishop in later life. The Californian university town is named after him).

Kant’s solution is to assert that there is an independent reality – the ding-an-sich, or ‘thing-in-itself’ – but that logically it must be inaccessible to us: we know it is there (presumably as an article of faith) but we cannot know what it is like.

Schopenhauer’s solution is the most ingenious, and to my mind the most satisfying. He agrees that we cannot know the ding-an-sich as a general rule, but there is one notable exception: ourselves. We are objects in the world and so are known like everything else, via the senses – this is Plato’s world of Appearance, which for Schopenhauer is the World as Representation (since it is re-presented to our minds via our senses). But because we are conscious and capable of reflecting, we are also aware of ourselves as subjects – not via the senses (that would make us objects) but directly, by being us. (To put it in terms of pronouns: you know me and I know you, but I am I; I am ‘me’ to you (and in the mirror) but I am ‘I’ to myself.)

And what we are aware of, Schopenhauer says, is our Will; or rather, that our inner nature, as it were, is will: the will to exist, the urge to be, and that this is the inner nature of all things – the elusive ding-an-sich: they are all objective manifestations of a single all-pervasive Will (which happens in us uniquely to have come to consciousness).

This idea is not original to Schopenhauer but is borrowed from Eastern, specifically Hindu and Buddhist, philosophy, which belongs to a tradition separate from (and possibly older than) Platonic thought (it is interesting to note that a very similar way of looking at things has been likewise borrowed by evolutionary biologists, such as Richard Dawkins, with the notion of ‘the selfish gene’ taking the place of Schopenhauer’s ‘Will’ as the blind and indifferent driving force behind all life).

I find Schopenhauer’s account satisfactory (though not wholly so) because it is a genuine attempt to give an account of the World as we experience it, one that reconciles all its elements (chiefly us as subjects with our objective perceptions) rather than the conventional account of Plato (and all who have followed him) which proceeds by simply discounting the subject altogether, effectively dismissing personal experience and reducing us to passive observers. Although the utility of this convention cannot be denied (in certain directions, at any rate) its inherent limitations make it inadequate to consider many of the matters that trouble us most deeply, such as those that find expression in religion and art; and if, like those who wish to tell us that the sun is ‘actually’ white, we mistake the conventional account for an actual description of reality*, we end up by dismissing the things that trouble us most deeply as ‘merely subjective’ and ‘not real’ – which is deleterious, since they continue to trouble us deeply, but that trouble has now been reclassified as an imaginary ailment.

*perhaps I should say ‘the world’ or ‘what there is’. There is a difficulty with ‘reality’ and ‘real’ since they are prejudiced in favour of the convention of an objective world: this becomes clearer when you consider that they could be translated as ‘thingness’ and ‘thing-like’. The paradigm for ‘reality’ then becomes the stone that Dr Johnson kicked with such force that he rebounded from it, i.e. any object in the world, so that the subject and the subjective aspect of experience is already excluded.

‘Like, yet unlike.’

‘Like, yet unlike,’ is Merry’s comment in The Lord of the Rings when he first sees Gandalf and Saruman together: Gandalf, returned from the dead, has assumed the white robes formerly worn by Saruman, who has succumbed to despair and been corrupted by evil and is about to be deposed. So we have two people who closely resemble one another yet are profoundly different in character.

Scene: a school classroom. Enter an ancient shuffling pedagogue. He sets on his desk two items. The first is a picture depicting a scene from the days of empire, with a khaki-clad officer of the Camel Corps holding a horde of savage Dervishes at bay, armed only with a service revolver.

Teacher (in cracked wheezing voice): The sand of the desert is sodden red,—
Red with the wreck of a square that broke; —
The Gatling’s jammed and the Colonel dead,
And the regiment blind with dust and smoke.
The river of death has brimmed his banks,
And England’s far, and Honour a name,
But the voice of a schoolboy rallies the ranks:
‘Play up! play up! and play the game!’

Cackling to himself, he unveils his second prop, a glass case in which a stuffed domestic tabby cat – now rather moth-eaten, alas! – has been artfully disguised to give it the appearance of a (rather small) African lion.

Teacher (as before): The lion, the lion
he dwells in the waste –
he has a big head,
and a very small waist –
but his shoulders are stark
and his jaws they are grim:
and a good little child
will not play with him!

Once recovered from his self-induced paroxysm of mirth, almost indistinguishable from an asthma attack, he resumes what is evidently a familiar discourse.

Teacher: We remember, children, that whereas the simile (put that snuff away, Hoyle, and sit up straight) says that one thing is like another, the metaphor says that one thing is another, in this case that the soldier was a lion in the fight. Now in what respects was he a lion? It can scarcely be his appearance, though I grant that his uniform has a tawny hue not dissimilar to the lion’s pelt; certes, he has no shaggy mane (did I say something amusing, Williams? Stop smirking, boy, and pay attention) and instead of claws and teeth he has his Webley .45 calibre revolver. Nonetheless, he displays a fearless courage in the face of great odds that is precisely the quality for which the King of Beasts is renowned, so that is why we are justified in calling him a lion. What is that, Hoyle? Why do we not just say he is like a lion? Ha – hum – well, you see, it makes the comparison stronger, you see, more vivid.

Hoyle does not see, but dutifully notes it down, and refrains from suggesting that ‘metaphor’ is just a long Greek word for a lie, since he knows that will get him six of the belt in those unenlightened days.

[curtain]

But young Hoyle the snuff-taker has a point. Aristotle, it will be recalled, writing in his Poetics, says that the poet ‘above all, must be a master of metaphor’, which he defines as ‘the ability to see the similarity in dissimilar things’. But this definition is as problematic as the teacher’s explanation: why is a comparison between two things whose most striking feature is their dissimilarity made stronger and more vivid by saying that they are actually the same?

The best that people seem able to manage in answer to this is that the literary metaphor has a kind of shock value. To illustrate the point, they generally allude to the conceits of the metaphysical poets, such as Donne, where what strikes us first as outrageous, is – once explained – redeemed by wit and ingenuity:

Oh stay, three lives in one flea spare,
Where we almost, nay more than married are.   
This flea is you and I, and this
Our marriage bed, and marriage temple is;   
Though parents grudge, and you, w’are met,   
And cloistered in these living walls of jet.

The best metaphor, it seems, is one where the dissimilarity is more striking than the resemblance.

But mention of the metaphysical poets recalls a different definition of metaphor, one provided by Vita Sackville-West in her book on Andrew Marvell:

‘They saw in it [metaphor] an opportunity for expressing … the unknown … in terms of the known concrete.’

That is in the form that I was wont to quote in my student days, when it made a nice pair with the Aristotle quoted above; but I think now that I did Vita Sackville-West a disservice by truncating it. Here it is in full:

‘The metaphysical poets were intoxicated—if one may apply so excitable a word to writers so severely and deliberately intellectual—by the potentialities of metaphor. They saw in it an opportunity for expressing their intimations of the unknown and the dimly suspected Absolute in terms of the known concrete, whether those intimations related to philosophic, mystical, or intellectual experience, to religion, or to love. They were ‘struck with these great concurrences of things’; they were persuaded that,
Below the bottom of the great abyss
There where one centre reconciles all things,
The World’s profound heart pants,
and no doubt they believed that if they kept to the task with sufficient determination, they would succeed in catching the world’s profound heart in the net of their words.’

If I had my time again (for indeed that ancient pedagogue described above is me) and wished to illustrate this, I would go about it rather differently.

Let us suppose a scene where a child cowers behind her mother’s skirts while on the other side a large and overbearing man, an official of some sort, remonstrates with the mother demanding she surrender the child to his authority. Though she is small and without any looks or glamour – a very ordinary, even downtrodden sort – the woman stands up boldly to the man and defies him to his face with such ferocity that he retreats. I am witness to this scene and the woman’s defiance sends a thrill of excitement and awe coursing through me. In recounting it to a friend, I say ‘In that moment, I seemed to glimpse her true nature – I felt as if I was in the presence of a tiger, defending her cubs.’

This is a very different account of metaphor. It is no longer a contrived comparison for (dubious) literary effect between two external things that are quite unlike, in which I play no part save as a detached observer; instead, I am engaged, involved: the metaphor happens in me: the identity is not between the external objects, but in the feeling they evoke, which is the same, so that the sight before me (the woman) recalls a very different one (the tiger) which felt exactly the same.

The first point to note is that the contradiction implicit in Aristotle’s account has disappeared. There is no puzzle in trying to work out how a woman can be a tiger, because the unity of the two lies in the feeling they evoke. And as long as my response is typically human and not something unique to me, then others, hearing my account, will feel it too, and being stirred in the same way, will recognise the truth expressed by saying ‘I felt I was in the presence of a tiger.’

Further, the very point that seemed problematic at first – the dissimilarity – is a vital element now. It is the fact that the woman appears as unlike a tiger as it is possible to be that gives the incident its force: this is an epiphany, a showing-forth, one of those ‘great concurrences of things’ that seem like a glimpse of some reality beyond appearance, ‘the World’s profound heart’.

Yet that description – ‘some reality beyond appearance’ – is just what pulled me up short, and made me think of the Tolkien quote I have used as a heading. Is not this the very language of Plato, whose world of Forms or Ideas is presented as the Reality that transcends Appearance?

Yet the world as presented by Plato is essentially the same as that of Aristotle, which has become, as it were, our own default setting: it is a world of objective reality that exists independently of us; it is a world where we are detached observers, apprehending Reality intellectually as something that lies beyond the deceptive veil of Appearance. It is the world we opened with, in which metaphor is a contradiction and a puzzle, perhaps little better than a long Greek word for a lie.

Though both accounts – the Platonic-Aristotelian world on one hand, and Vita Sackville-West’s version on the other – seem strikingly similar (both have a Reality that lies beyond Appearance and so is to some extent secret, hidden), there are crucial differences in detail; like Gandalf and Saruman, they are like, yet unlike in the fundamentals that matter.

The Platonic world is apprehended intellectually. What does that mean? Plato presents it in physical terms, as a superior kind of seeing – the intellect, like Superman’s x-ray vision, penetrates the veil of Appearance to see the Reality that lies beyond. But the truth of it is less fanciful. What Plato has really discovered (and Aristotle then realises fully) is the potential of general terms. A Platonic Idea is, in fact, a general term: the Platonic idea of ‘Horse’ is the word ‘horse’, of which every actual horse can be seen as an instance or embodiment. Thus, to apprehend the World of Forms is to view the actual world in general terms, effectively through the medium of language.

This can be imagined as being like a glass screen inserted between us and the landscape beyond, on which we write a description of the landscape in general terms, putting ‘trees’ where there is a forest, ‘mountains’ for mountains, and so on. By attending to the screen we have a simplified and more manageable version of the scene beyond, yet one that preserves its main elements in the same relation, much as a sketch captures the essential arrangement of a detailed picture.

But the Sackville-West world is not mediated in this way: we confront it directly, and engage with it emotionally: we are in it and of it. And our apprehension of a different order of reality is the opposite of that presented by Plato; where his is static, a world of unchanging and eternal certainties (which the trained intellect can come to know and contemplate), hers is dynamic, intuitive, uncertain: it is something glimpsed, guessed at, something wonderful and mysterious which we strive constantly (and never wholly successfully) to express, in words, music, dance, art.

The resemblance between the two is no accident. Plato has borrowed the guise of the ancient intuited world (which we can still encounter in its primitive form in shamanic rituals and the like) and used it to clothe his Theory of Forms so that the two are deceptively alike; and when you read Plato’s account as an impressionable youth (as I did) you overlay it with your own intimations of the unknown and the dimly suspected Absolute and it all seems to fit – just as it did for the Christian Neoplatonists (in particular, S. Augustine of Hippo) seeking a philosophical basis for their religion.

I do not say Plato did this deliberately and consciously. On the contrary, since he was operating on the frontier of thought and in the process of discovering a wholly new way of looking at the world, the only tools available to express it were those already in use: thus we have the famous Simile of the Cave, as beguiling an invitation to philosophy as anyone ever penned, and the Myth of Er, which Plato proposes as the foundation myth for his new Republic.

And beyond this there is Plato’s own intuition of a secret, unifying principle beyond immediate appearance, ‘the World’s profound heart’, which we must suppose him to have since it is a persistent human trait: is it not likely that when he had his vision of the World of Forms, he himself supposed (just as those who came after him did) that the truth had been revealed to him, and he was able to apprehend steadily what had only been glimpsed before?

It would explain the enchantment that has accompanied Plato’s thought down the ages, which no-one ever attached to that of his pupil Aristotle (‘who is so very nice and dry,’ as one don remarked) even though Aristotelianism is essentially Plato’s Theory of Forms developed and shorn of its mysterious presentation.

So there we have it: a new explanation of metaphor that links it to a particular vision of the world, and an incidental explanation of the glamour that attaches to Plato’s Theory of Forms.

Like, yet unlike.


More thinking about thinking

As I remarked elsewhere, a lot of my own thinking might be described as ‘subvocalisation’, i.e. speaking without voicing the actual words. Even as I am typing this, I am constructing the sentences ‘in my head’ – though I would not say that I hear them: this is not someone else’s voice, it is mine, and though I do hear my own voice when I speak, I am stopping short of speaking here (though since I do occasionally break into actual speech, it is evidently the same process).

This stopping some way short of action might be a useful model for thought, and also offer an explanation of how it becomes progressively ‘internalized’ so that eventually it is considered a (purely) mental process.

Let us imagine a man who comes into a clearing in a woodland. He considers the trees around him, then focuses his attention on a couple of them. These he examines in more detail – they resemble one another, each having branches of similar girth and shape. To these branches he gives particular attention, eventually confining himself to just one of them, which he looks at from various angles, stroking it, following the sweep of it with his hand, and so on.

We would not have to watch him long before saying ‘this man has something in mind’ (though we might equally say, ‘he intends something’) and we would not be at all surprised to see him return later with tools to saw off the chosen branch and start to work it into some sort of shape.

So how much more is there to this than meets the eye? Is there an ‘interior’ process that accompanies the various gestures and movements, the looking and touching and so on, and does this constitute ‘what the man is (really) thinking’? And does that same process recur when the man is actually sawing off the branch, stripping it of its bark, etc?

We do, I think, feel less need of it in the second case – after all, the man is now actually doing something – we might even say ‘he is putting his thoughts into action’.

Take another example: a young woman looks at a climbing wall. Her eyes range over the whole of it, then begin to plot a particular path. Along with the direction of her gaze, her hands and feet rehearse certain movements, as if she is working out a sequence to go with the route her eyes are mapping out. What is the ‘accompanying internal process’ here?

Is there anything more to it than ‘looking with intent’, i.e. rehearsing the actions you intend to perform, but stopping short of performing them fully? (When a bowler in cricket goes through the action of bowling before he actually does so, or a golfer rehearses a stroke, what (if anything) is ‘going through his mind’?)

And what does ‘intent’ consist of? Need it involve visualising images or supplying a commentary of some sort on what you intend to do? We do not, after all, give ourselves instructions in this way when we perform an action, yet we clearly understand the difference between a deliberate, voluntary action and an involuntary one – even where the deliberate action is also instinctive (walking, running or catching, for instance).

Indeed, it occurs to me that in the days when I aspired to be a bowler, I found that the best results came when I focused my attention on the stump I wished to hit: it was as if by directing my gaze I was also directing my actions. I am also reminded that very young children just learning to walk will often seem to be ‘drawn’ by their gaze – they look at a target and totter-stumble towards it, arms outstretched, but always with their ‘eyes on the prize’.

The position I am moving towards is that what we consider ‘thinking’ might (in some cases) be better termed ‘willing’ or ‘intending’. The sort of ‘thinking in speech’ that I have described above as ‘subvocalisation’ is a special case in one sense that may mislead us – it has a content that we can identify and describe, namely words. In intending to speak (or as is the case now, write) words, it seems to me that I form those words ‘in my head’ just as if I were going to say them, only I do not say them. However, I am quite clear that I do not hear them spoken (I am listening to the football commentary on the radio at the moment, and that is quite different in kind to the parallel process of forming these words I am writing now).

What misleads here is that unspoken speech still has the recognisable form of speech, but we do not have a description for unperformed action; yet there must surely be an equivalent. I am loth to take the easy route of borrowing from information technology (which can mislead in its own way) but surely there is the equivalent of a program here? Must not all deliberate action be programmed, in the sense of having a set of instructions which our nerves transmit and our muscles execute, even if we have no conscious awareness of it? Is such a program not what presents itself to our consciousness as ‘the intention to do something’? So is it not likely that we rehearse our actions by running that program without executing it, and this is what thinking – in the sense of envisaging a future action – consists of?
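The program analogy in the paragraph above can be made concrete in a toy sketch – purely illustrative, with every name invented for the purpose, and making no claim to model cognition. The point it shows is simply the distinction drawn above: a ‘program’ of actions can exist, and be rehearsed (run over, inspected), quite apart from its execution.

```python
# A toy sketch of the idea that a deliberate action is a 'program':
# a sequence of instructions that can be assembled and rehearsed
# without being executed. All names here are illustrative inventions.

def grip_branch():
    return "hand closes around branch"

def saw_through():
    return "saw moves back and forth"

def strip_bark():
    return "bark peeled away"

# Forming the intention: the program exists as data, a plan not yet enacted.
plan = [grip_branch, saw_through, strip_bark]

# 'Thinking', on this model, is rehearsal: we can run over the plan,
# step by step, without performing any of it - nothing is called here.
rehearsal = [step.__name__ for step in plan]
print(rehearsal)

# Acting is the same program, now executed: each step is actually called.
performance = [step() for step in plan]
print(performance)
```

The woodsman contemplating his branch and the climber plotting her route would, on this picture, both be assembling and rehearsing such a plan; the difference from action is only that the final step – execution – is withheld.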

Points worth pondering, at least.

A penny for them…

‘What are you thinking of? What thinking? What?
I never know what you are thinking.’
– Eliot, The Waste Land

‘He’s the sort that you never know what he’s thinking’ defines a recognisable character but carries a curious implication. There is a strong suggestion of duplicity, of inner workings at odds with outer show. Even among long-time married couples you will sometimes hear it said (in exasperated tones) ‘all these years we’ve been married and I still have no idea what goes on in that head of yours’.

But that exasperated tone indicates the same curious implication of the first case – namely, that we expect to know what people are thinking; that not to know is what is considered remarkable, the exception that proves the rule. C. Auguste Dupin, a notable precursor of Sherlock Holmes created by Edgar Allan Poe, makes a striking demonstration of this in The Murders in the Rue Morgue:

‘One night we were walking down one of Paris’s long and dirty
streets. Both of us were busy with our thoughts. Neither had spoken
for perhaps fifteen minutes. It seemed as if we had each forgotten that
the other was there, at his side. I soon learned that Dupin had not
forgotten me, however. Suddenly he said:
“You’re right. He is a very little fellow, that’s true, and he would
be more successful if he acted in lighter, less serious plays.”
“Yes, there can be no doubt of that!” I said.
At first I saw nothing strange in this. Dupin had agreed with me,
with my own thoughts. This, of course, seemed to me quite natural.
For a few seconds I continued walking, and thinking; but suddenly
I realized that Dupin had agreed with something which was only a
thought. I had not spoken a single word.’

Dupin’s explanation of his apparent mind-reading runs to another page and three-quarters, and though something of a virtuoso performance, it is based on sound principles – Dupin observes his friend’s actions and expressions closely, and is able to follow his train of thought by skilful inference, both from what he sees and what he already knows.

The incident starts when, in evading a hurrying fruit-seller, his companion stubs his toe on an ill-laid paving stone:

‘You spoke a few angry words to yourself, and continued walking. But you kept looking down, down at the cobblestones in the street, so I knew you were still thinking of stones.
“Then we came to a small street where they are putting down street stones which they have cut in a new and very special way. Here your face became brighter and I saw your lips move. I could not doubt that you were saying the word stereotomy, the name for this new way of cutting stones.’
….
‘Later I felt sure that you would look up to the sky. You did look up. Now I was certain that I had been following your thoughts as they had in fact come into your mind.’
….
‘I saw you smile, remembering that article and the hard words in it.
“Then I saw you stand straighter, as tall as you could make yourself. I was sure you were thinking of Chantilly’s size, and especially his height.’

Two things are worth noting here, I think. The first is Dupin’s attention to such things as the direction of his friend’s gaze, the expression on his face, and his whole bodily posture, all of which he reads as indicative of thought – I was going to say ‘accompaniments of thought’ but that would be the wrong word, I think, for reasons I will come to presently. The second thing is one particular detail – ‘I saw your lips move’ – and the observation the narrator makes at the end of the episode – ‘Dupin was right, as right as he could be. Those were in fact my thoughts, my unspoken thoughts’.

These highlight two important points about thought that are often overlooked: that it has a physical aspect, and that it is closely connected to speech. We use the expression ‘difficult to read’ of people like the man cited at the start, the ‘sort that you never know what he’s thinking’, and this reminds us that we do rely to a great extent on non-verbal physical indications of ‘mental’ activity.

Indeed, it is interesting to consider just how contrary to everyday experience is the notion that mental activity and thought are hidden, private processes that take place ‘in our heads’ so that only we ‘have access to them’. I put those expressions in quotes because I think they are misleading, in the same way that it is misleading to speak of facial expression etc. as ‘accompaniments’ to thought – I would say they are better considered as an integral part of thinking. We see this from the expression ‘I learned to hide my thoughts’ which is connected with controlling – indeed, suppressing – these external manifestations of thought.

The fact that we must make a conscious effort to conceal thought suggests that it is far from the ‘hidden process’ it is often supposed to be and calls into question the whole range of terms we use that suggest it is – such as the notion of thoughts being ‘in our head’ and our having ‘private access to them’ alluded to above. The implication there is that the head (or brain, or mind) is a sort of space in which our thoughts are stored (and where other mental activity takes place); furthermore, it is a private space, a sort of secret room to which we alone have access. (In this connection, consider the various fictional representations of telepathy and mind-reading, which often involve clutching the head, pressing the temples etc., either in an effort to keep its contents from being rifled, or in the attempt to pilfer them – thoughts are seen as something contained which can, by certain means, be extracted.)

In St Ambrose’s day (c. 340–397) it was considered remarkable that he could read without moving his lips, from which we infer that most people then did so. I believe that this is now termed ‘subvocalisation’ and it appears to have been studied extensively in connection with reading but less so with thought. I am conscious that a great deal of my own thought consists of articulating sentences ‘in my head’, a process that I consider the same as speaking in all but the final act of voicing the words aloud (an interpretation supported by the fact that sometimes I do actually speak my thoughts aloud) – hence my interest in the expression Poe uses above, ‘my unspoken thoughts’.

It would be interesting to know whether the late Romans of St Ambrose’s day moved their lips when thinking, or indeed habitually spoke their thoughts aloud, openly or in an undertone. Even now, this is more common than we might suppose – people often blurt out their thoughts without meaning to, and most of us are familiar with the expression ‘did I just say that aloud?’ (and the feeling that accompanies it) when we say what might have been better kept to ourselves. There are also people who have the habit of framing their thoughts as spoken questions, which can be disconcerting till you realise that they are not actually seeking an answer from you, personally: it is just another form of saying ‘I wonder if…’.

So it would seem that, just as we have learned for the most part to read without moving our lips, so we have also gradually shed (or learned to suppress) the more obvious physical manifestations of what we now consider ‘mental’ activities, such as thinking, imagining, remembering etc., though my guess (as with subvocalisation in relation to reading) is that there is probably still a fair bit that could be detected in muscle movement, brain activity and the like (though it would be an interesting experiment to see if these too – the brain activity in particular – can be controlled).

From the effort we must make to conceal thought, and our varying success in doing so, it is reasonable to infer that the ‘natural’ mode of thought is holistic, involving the body (at least) as much as the brain: consider, for instance, two rather different examples. One is the domestic cat, and how it is transformed on spying a bird that might be its prey or an intruder on its territory: its intentions can be read very clearly from its bodily posture and movement. The other is the recent emergence in sport – particularly at the highest level – of the practice of ‘visualisation’, which is rather more than simply picturing what you want to happen; it is a full-scale physical anticipation of it, typified by the rituals with which Jonny Wilkinson used to precede his kicking attempts in rugby.

It is interesting to set all this alongside the long-standing tradition in philosophy that regards mental activity as private, personal and inaccessible to others, which has led some to the extreme of solipsism, the doctrine that your own self is the only being you can confidently believe to exist. Much blame for this can be laid at the door of Descartes, often seen as the herald of the modern era in philosophy, though the mind-body dualism generally attributed to him can be dated back to Plato (much as his most noted dictum, cogito ergo sum – ‘I think, therefore I am’ – can be traced back to St Augustine a thousand years before). Descartes makes the classic error of supposing that because we are deceived in some cases, it is possible that we might be deceived in every case – overlooking the fact that such a state of affairs would render ‘being deceived’ and its allied concepts of mistake, illusion, hallucination and the like incomprehensible: if we were deceived in all things, we would not be aware of it; the fact that we have a word for it demonstrates that, in most cases, we are not deceived, and that we also recognise the special and generally temporary circumstances in which we are.

If we go back to Plato, I think we can find the real root of the notion that thoughts are private. It is bound up with what I consider the relocation of meaning that takes place around the time of Classical Greece, about 25 centuries ago, and is made possible by the invention of writing. Only once a word can be written on a page does it become possible to consider it apart from the milieu in which it naturally occurs, human activity involving speech. Such activity (what Wittgenstein calls ‘forms of life’ and ‘language games’) is the ultimate source of meaning (cf. Wittgenstein again, ‘the meaning of a word is its use in the language’). Prior to the invention of writing, there was neither the means nor indeed any reason to consider speech apart from the larger activity of which it formed a part; indeed it is doubtful whether people would even have had the concept of words as components of speech, which presents itself as a rhythmic flow, rather than a concatenation of smaller elements.

With writing, all that changes. For the first time, language can be studied and analysed at leisure. A sentence written on a page is comprehensible in itself, or so it appears, without reference to who wrote it or in what context. From this it is an easy step to the notion that meaning is an inherent property of words, rather than situations (what we overlook in this, of course, is that we are able to supply the context in which the words have meaning; but as such instances as the Phaistos Disc remind us, that ability can be lost, so that the marks we see have no more meaning for us than if they were random scratches made in play).


This relocation of meaning is fundamental to Plato’s Theory of Forms (or Ideas), in which he argues that the senses are deceived by the world of Appearance and that only the intellect can apprehend the true nature of Reality, the transcendent and immutable Forms. As I have argued elsewhere, there is a strong case to be made that Platonic Ideas are in fact words – specifically, general and abstract terms – so the Platonic Ideas of ‘Cat’, ‘Table’, ‘Justice’ and ‘Good’ are the words cat, table, justice, good, which stand for these ‘things’ (i.e. the general idea of ‘cat’, the abstract idea of ‘justice’) just as a specific name stands for the actual thing it denotes. (Though Plato pictures the words pointing to the transcendent Form or Idea, in actual fact the words themselves, allied to the use we make of them, are all that is needed.)

It is this objectification of general and abstract ideas that leads to the notion of mental processes as private and inaccessible to others. We can point to something as an act of justice or goodness, but once we acquire the notion of justice as an idea, we introduce a new class of objects, those which can be apprehended only by the intellect. Strictly speaking, ‘object’ is used metaphorically here, but with Plato’s insistence that the Forms are the true Reality, this gets overlooked, and we start to think of thoughts, memories, ideas, impressions and the like as ‘mental objects’ that exist ‘in our minds’ or our ‘imaginations’ which we conceive as a kind of space, a sort of private viewing room.

The point to note here is that the metaphor preserves the Subject-Object relation, which is easily grasped in relation to physical objects – I know what it is to look at a tree, a cat or indeed another person: I am here and it is there. However, a degree of mystery seeps in when this is extended to ideas, thoughts and suchlike, particularly as philosophy develops the account it gives of them. Thus by Hume’s time we no longer simply see a tree: we form a mental impression of one, of which we can then make a copy, which he calls an idea – and this copy is what we use in remembering or imagining a tree, ‘calling it to mind’. This development clearly goes hand in hand with a growing understanding of light and optics and the physiology of the eye, but it is facilitated by having the notion of ‘mental space’ and regarding ideas as objects.

However, what is of most interest is how this alters our view of the Subject. From being a holistic notion which makes no distinction between mind and body – ‘I am this person looking at that tree’ – the subject begins to retreat in what becomes an infinite regress: the tree that we see out the window becomes a representation of a tree (to use Schopenhauer’s term) or the impression of a tree (to use Hume’s), which is now ‘in the mind’ but is still, somehow, seen. And if we have a memory of that tree – an idea, to use Hume’s term – or the thought of a tree, or the mental image of one, then that, too, seems to be an object which we somehow apprehend – so the seeing, knowing or thinking subject – ourself – is forever edging out of the picture, never able, as subject, to become itself the object of consideration.

This is what leads the earlier Wittgenstein to suppose, in the Tractatus, that the subject is the boundary of experience, that it does not exist in the world but somehow outside or on the edge of it. Others have suggested that the Subject is a temporary manifestation generated (not unlike an electrical charge) by the combination of our brain and body and nervous system: it exists while we are alive (perhaps only when we are awake) and simply ceases when the physiology that generated it dies.

Yet all this, I would argue, is simply the result of philosophy’s having painted itself into a corner by adopting the way of thinking about the world that starts out with Plato. By dismissing the objects of sense as mere Appearance, and substituting the objects of intellectual apprehension as Reality, we reduce the Subject from an active participant in the world to a passive, detached observer: Wittgenstein’s boundary of experience. Reality is redefined as solely objective, and there is no room in it for the subject: ‘objectivity’ is praised while the subjective (often qualified by ‘merely’) is dismissed as unreliable, partial, mere ‘personal opinion’.

But let us step back, go back indeed to where we started, with Dupin, and the notion of thinking as a holistic activity which involves us as a totality, which is both physical and ‘mental’ (if indeed that distinction can be made at all). The view mentioned earlier, that the Subject (which can be identified with consciousness) is a kind of transitory by-product of our physiology, seems to be supported by the latest developments in brain-imaging, which allow us to observe electrical activity in the neural networks of the brain: there is a correlation between certain activities and the part of the brain that ‘lights up’ when we are engaged in them. This has even led some to say that what brain imaging shows us are our actual thoughts – that thoughts simply are these patterns of electrical activity.

But I wonder. It has been demonstrated that people can lower their blood pressure aided by an index of it in the form of a display; likewise, people can be trained to suppress the physiological symptoms which polygraph tests – so-called ‘lie detectors’ – depend on for their evidence. It would be interesting to see if the lighting-up of neural networks is something that can be similarly controlled or disguised – for if we can learn to ‘hide our thoughts’ by controlling outward appearances, why should we suppose that we cannot do likewise with other physical manifestations of them, once we are aware of them?

It is illuminating to look at this from the other side: not only can we suppress or disguise the physical manifestations of thought, we can also imitate them – that is what actors do. And of course a standard acting technique is to have a store of memories that move us, which can be called to mind when the requisite emotion is called for – so if I wish to portray a character stricken by grief, I conjure a memory of a time when I myself was grieved and my outward aspect will conform, much as does the player’s in Hamlet, who

But in a fiction, in a dream of passion,
Could force his soul so to his own conceit
That from her working all his visage wann’d,
Tears in his eyes, distraction in’s aspect,
A broken voice, and his whole function suiting
With forms to his conceit

Wittgenstein asks somewhere how we know that we are imitating someone’s expression, and adds that it is not by studying our face in a mirror. Our response to a smile, from a very early age, is a smile; but we are not imitating what we see – after all, we do not know what our own face looks like. What guides us is rather the feeling that goes with the smile. The best way I can think to put this is that, as human beings, we know what an expression feels like from the inside.

And I would add a note of caution here: do not import the model of cause and effect that we use in analysing the objective world. The joy we feel within does not cause the smile; it is not prior to it – the two are aspects of the same thing. I am reminded of an expression I learned as a boy doing my catechism – ‘an outward sign of inward grace’. There is a whole range of things that we know, not through becoming acquainted with them, but by doing them, by being them. And although we speak of ‘seeing’, ‘hearing’ and the rest of the senses separately, we cannot actually turn them on and off, but do them all at once and all the time; what we vary is the attention we give each one, and for most of us sight predominates, with hearing next and the rest a good way behind, except when they force themselves on our attention.

What we actually experience, unanalysed, is not simply ‘the world’ – that is only half the story; what we experience is ‘being in the world’. All experience has this dual aspect: we know it from the inside and the outside at the same time. That is what makes communication possible, what understanding, properly understood, consists of. It is what in art, in all its forms – music, painting, sculpture, poetry, dance – enables us to ‘get it’: by considering the outward sign, we experience what it is like from inside, we recognise the feeling it expresses as something we, too, have felt.

The clever model that Plato and Aristotle invented, that underpins all Western thought, has enabled us to achieve remarkable things, but only at the considerable expense of ignoring one half of our experience and pretending that it does not matter.

Perhaps what Descartes should have said is not cogito ergo sum, nor even sum ergo sum (since it is not something we know by deduction) but simply sum – I am.

Seeing Better

‘See better, Lear!’ is the admonition Kent gives his King after he has petulantly banished his youngest daughter, Cordelia, because she ‘lacks that glib and oily art’ to flatter him as her false sisters have done. Sight and blindness is a central theme in King Lear, as is its corollary, deception, both of others and oneself.

Kent’s words came to me when I was ruminating on my latest occupation, drawing shiny things.


One of the things that drawing teaches is seeing better, and that indeed is a large part of my reason for pursuing it recently, as a kind of philosophical experiment (since February I have been drawing a monkey a day, in response to a challenge by a friend).

The status of colour crops up in philosophical discussions at various periods – it is Locke, I think, who argues that colours are not ‘primary qualities’ (such as shape, extension and solidity) but only ‘secondary’ in that they involve an interaction between eye and object and cannot be said to inhere in the object itself as the primary qualities are supposed to do – but it is really a subset of a larger argument that takes us back (as always) to Plato.

Plato, it will be recalled, dismisses the world brought to us via the senses as deceptive Appearance, maintaining that the true nature of the world – Reality – can only be apprehended by the intellect: it is the world of Forms or Ideas. As I have argued elsewhere (‘In the beginning was the Word’) what Plato has really discovered is the power of general terms – the Platonic Idea or Form ‘table’ is not something that lies beyond the word ‘table’, to which it points, it is in fact the word ‘table’ itself – which can be used in thought to stand for any table, because – unlike a picture – it does not resemble any particular table.

This introduces a whole new way of thinking about the world, where it is no longer seen directly, through the despised senses, but apprehended by the intellect through the medium of language. And there is no better way of appreciating this than to try and draw something shiny.

[drawing of a Daimler]

What colour is the car? Why, black, of course – with some shiny bits. That is how it was described on the official documentation – Daimler DR450, Black. But what about all those other colours, then? Ah, now, that’s just reflections of one thing and another – you can ignore them; the car’s real colour is black (and its radiator grille etc aren’t coloured at all, they’re shiny chrome plate).

What trying to draw it teaches you is not only that you can’t ignore the many other colours that are there (if you want your picture to be any good at all) but it also brings home to you that your regular habit (or at least mine) is to dismiss a great deal of what your eyes tell you and pretend it isn’t there, that it doesn’t count: ‘that is just light reflected off a polished surface; that is just a reflection; that’s just a shadow.’

And that is Platonism in action: the intellect overrides the senses, reserves judgement to itself – and it does it through words: ‘light’ conveniently labels – and so keeps you from looking at – something that is very difficult to render faithfully in a drawing. You find that reflective surfaces, far from being bright, are often dark and dull; a tiny patch left uncoloured on a white page becomes a gleam of light when surrounded by greys and blues, even black. And your mind, on seeing the drawing, converts it back to an image of a plated surface – perhaps the most interesting part of the process.

It is as if we erect a glass screen between ourselves and the world, and on the screen we write the words that correspond to the things beyond – ‘mountains, trees, clouds, house, road, cars, people’ – and most of the time what we see is not what is in front of us, but only the words on the screen that give us the simplified general picture, at once a tool of immense power (enabling rapid thought unencumbered by distracting detail) and a great impoverishment of our experience – it inserts a carapace between us and the world.

See better. Draw. Then go out and look.