Why Writing is like a Playtex Bra

‘It lifts and separates’ is a slogan that will be familiar to those of my generation – it was advertised as the chief virtue of the Playtex ‘Cross Your Heart’ bra. However, it also serves as a memorable illustration of my theory concerning the origin of what we think of as Language.

The conventional account presents Language, as we have it now, as evolved speech, i.e. its origins go back to our first utterance, with the acquisition of a written form for transcribing it a logical development that occurs in due course – around five thousand years ago – but only after speech has held sway as the primary means of human communication for a couple of hundred thousand years.

However, I think that is not what happened; in particular, the notion that speech was the original and primary means of human communication, occupying a position analogous to what we call Language (both spoken and written) today, is an erroneous backwards projection, based on the status that speech only now enjoys.

The conventional account could be summed up briefly thus: ‘First, we learned to speak, and that is what made us human and marked us out as special; then we learned to write down what we said in order to give it permanent form, and that enabled us to store the knowledge and wisdom which has enabled us to achieve our present pre-eminence.’

However, there are good reasons to suppose that the eminence currently enjoyed by speech actually results from the invention of writing and its impact on human expression – what I would call the Playtex Moment, because the effect of that impact was to lift speech above the rest of human expression, and separate it.

Prior to the invention of writing, and indeed for a good time after it, since its impact was far from immediate, speech was, I would say, simply one aspect of human expression, and by no means the most important. By ‘human expression’ I mean the broad range of integrated activity – facial expression, gesture, posture and bodily movement, and a range of sounds, including speech – which human beings use to express their thoughts and feelings. Bear in mind that up to the invention of writing (and for a good time after it) speech was always part of some larger activity, to which it contributed, but did not (I would assert) dominate.

My ground for supposing this is that it is only through the effort to give speech a written form (which probably did not start to happen till writing had been around for a thousand years) that we come to study it closely, and to analyse it. I suggest there are two reasons for this – the first is that it was not possible to study speech till it was given a permanent, objective form; the second is that the need to analyse speech is part and parcel of the process of giving it written form. Crucially, it is only in writing that the notion of the word as a component of speech arises; speech naturally presents as an uninterrupted flow – rhythm and emphasis are of significance, but word separation is not. Word separation – which not every writing system uses – is a feature of writing, not speech.

In the same way, the whole analysis of speech in terms of the relations between words – grammar – arises from writing (for the good reason that it is only through writing that we can become aware of it). It is the understanding of Language that arises through the development of writing as a tool to transcribe speech that elevates and separates speech from the other forms with which it has hitherto been inseparably bound up.

The notion that we invented writing expressly to transcribe speech does not bear examination*: it was invented for the lesser task of making lists and inventories, as a way of storing information. It was only very gradually that we began to realise its wider potential (the earliest instance of anything we might call literature occurs a good thousand years after the first appearance of writing). Rather than writing being a by-product of speech, speech – as we now know it, our primary mode of communication and expression – is a by-product of writing.

And that is why writing is like a Playtex bra: it lifts and separates speech from all the other forms of human expression – but also (to push the analogy to its limits, perhaps) offers a degree of support that is only bought at the expense of containment and subjugation.

The interesting corollary is that if our present mode of thinking is Language-based – in the sense of ‘Language’ that is used here, a fusion of writing and speech – then that, too, is a relatively recent development**; however much it might seem second nature to us, it is just that – second nature: our natural mode of thought – instinctive and intuitive, developed by our ancestors over several hundred thousand years – must be something quite other, with a different foundation (which is, I would suggest, metaphor).

*if you doubt this, examine it: ask how such an idea would first have occurred, that things should be written down and given a permanent form – to remember them? people had been remembering without the aid of writing for thousands of years – why should they suddenly feel the need to devise an elaborate system to do something they could do perfectly well already? Ask also why, of all things, they would select speech as the one thing to make a record of – only if it were the sort of speech we have now – very much formed and influenced by writing – would it seem the obvious thing to record. Finally, ask how they would go about it – devising a script for the purpose of recording speech requires the sort of analysis of speech we can only acquire through already having devised such a script.

**do not underestimate what I mean by this: it is more than using words to think with. It is the complete model of the world as an objective reality existing independently and governed by logic and reason and all that stems from that; all that can be shown to derive from Language, which in turn arises from the impact of writing on human expression, a process that is initiated about two and a half thousand years ago, in classical Greece, by Plato and Aristotle.

The Actual Colour of the Sun

‘The sun is actually white, it just appears yellow to us through the Earth’s atmosphere.’

This is a line that appeared on Facebook a while ago, courtesy of my friend Else Cederborg, who posts all sorts of curious and interesting things.

It is a common form of argument that most will readily understand and generally accept without thinking too hard about it. Yet the sun is not actually white, nor does it just appear yellow, facts you can easily check for yourself.

Depending on the time of day and the atmospheric conditions, the sun is variously deep blood red, orange, dusky pink and (behind a veil of mist or thin cloud) a sort of milky white; much of the time it has an incandescent brilliance which can hardly be called a colour since we cannot bear to look at it directly. Though it may appear yellow, the usual place to encounter a yellow sun is in a representation of it: a child’s drawing, for instance (against a blue sky above the square white house, surrounded by green grass, with its four four-paned windows and red triangular roof with single chimney belching smoke) or else in the icons used in weather forecasting.

How we come to accept an argument that runs counter to all experience, and perversely insists on a condition for actuality (being seen outside the Earth’s atmosphere) which few of us will ever experience, is a case worth examining.

At the heart of it is that curious human thing, a convention (I say ‘human’ though it might be that other creatures have conventions, but that is for another time). A convention, effectively, is an agreement to treat things as other than they are – it is a form of pretence. This definition might seem surprising, since what is usually emphasised in conventions is their arbitrary character – the typical example being which side of the road we drive on, which varies from country to country.

However, what makes the rule of the road a convention is that we endow it with force, a force that it does not actually have: we say that you must drive on the left (or the right); that you have to – even though any one of us can show this not to be the case. Of course, if you engage in such a demonstration, you might find yourself liable to a range of legal sanctions, or worse still, involved in an accident. The fact that the rule has to be backed by force proves that it has no intrinsic force; on the other hand, the risk of accident shows that the pretence is no mere whim, but sound common sense: conventions are there because they are useful.

A great deal of human behaviour is conventional if you look at it closely: things are deemed to be the case which are not, actually (international borders are a good example – again, these are arbitrary, but it is the power with which they are endowed that makes them conventions). In this regard, it is worth recalling a remark that Wittgenstein makes somewhere (Philosophical Investigations, I think) to the effect that ‘explanation has to come to an end somewhere’. In other words, despite what we tell children, ‘just because’ is an answer. Conventions are so because we say they are so.

So what colour is the sun?

That takes us to the boldest and most fundamental of conventions, the one that underpins our standard way of thinking about the world, for which we have Plato to thank. Plato insists at the outset on discrediting the senses, saying that they deceive us, giving us Appearance only, not Reality.

This is a move of breathtaking boldness, since effectively it dismisses all experience as ‘unreal’ – as a starting position for philosophy, it ought to be quite hopeless, since if we cannot draw on experience, where are we to begin? If reality is not what we actually experience, then what can it be?

However, there is a different angle we can consider this from, which makes it easier to understand how it has come to be almost universally adopted. What Plato is proposing (though he does not see this himself) is that we should view the world in general terms: that we should not allow ourselves to be distracted by the individual and particular, but should see things as multiple instances of a single Idea or Form; or, as Aristotle develops it, as members of a class which can be put under a single general heading: trees, flowers, cats, horses, at one level, plants and quadrupeds at another, and so on.

Implied in this is the elimination of the subject: the world is not as it appears to me or to you or to any particular individual (since particular individuals exist only at the specific level); the world is to be considered as objective, as it is ‘in itself’, i.e. as a general arrangement that exists independently of any observation.

This is a very useful way of looking at things even though it involves a contradiction. Its utility is demonstrated by the extraordinary progress we have made in the 2,500 years or so since we invented it. It is the basis of science and the foundation of our systems of law and education.

Effectively, it starts by accepting the necessary plurality of subjective experience: we see things differently at different times – the sun is sometimes red, sometimes pink, sometimes incandescent; different people have different experiences: one man’s meat is another man’s poison. In the event of dispute, who is to have priority? That problem looks insoluble (though the extent to which it is an actual problem might be questioned).

So, we must set subjective experience and all that goes with it to one side, and rule it out as the basis of argument. Instead, we must suppose that the world has a form that exists independently of us and is not influenced by our subjective observation: we posit a state of ‘how things actually are in themselves’ which is necessarily the same and unchanging, and so is the same for everyone regardless of how it might appear.

This is how we arrive at the position stated at the outset, that the sun is ‘actually’ white, and that its ‘actual state’ should be taken as ‘how it appears from space,’ even though we live on earth and seldom leave it.

There is a confusion here, which has surfaced from time to time in the history of philosophy since Plato’s day. The strict Platonist would dismiss the whole question of the sun’s true colour in terms akin to Dr Johnson’s, on being asked which of two inferior poets (Derrick or Smart) was the greater: ‘Sir, there is no settling the point of precedency between a louse and a flea.’ Colour belongs to the world of Appearance, not the world of Forms or Ideas.

However, as the insertion of the argument about the earth’s atmosphere shows, we feel the need of a reason to dismiss the evidence of our own eyes (and this is where that little sleight-of-mind about ‘just appearing yellow’ comes in – so that we are not tempted to look too closely at the sun as it actually appears, we are fobbed off with a conventional representation of it that we have been accepting since childhood). The argument is that the atmosphere acts as a filter, much as if we looked at a white rose through coloured glass and variously made it appear green or red or blue: so just as we agree the rose is ‘actually’ white, so too the sun’s ‘actual’ colour must be as it appears unfiltered.

This, however, is specious. Leaving aside the fact that the rose is certainly not white (as you will soon discover if you try to paint a picture of it), at least our choice of default colour can be justified by adding ‘under normal conditions of light’, which most people will accept; but ‘as it appears outside the earth’s atmosphere’ can hardly be called ‘normal conditions of light’.

What has happened here is that the subjective ‘filter’ which Plato wished to circumvent by eliminating the subject – ‘let us ignore the fact that someone is looking, and consider only what he sees’ – has reappeared as an actual filter, so that the whole appearance/reality division has been inserted into the objective world by an odd sort of reverse metaphor. The need to give priority to one of the many possible states of the sun is still felt, and it is solved by arbitrarily selecting ‘the sun as it appears from space’ as its actual state, because that seems logical (though in fact it is not).

The subjectivity of colour in particular, and the subject-dependence of all perception in general, is like a submerged reef that appears at various periods in the sea of philosophy. Locke is troubled by colour, which he wants to class as a ‘secondary quality’ since it evidently does not inhere in the object, as he supposes the primary qualities – solidity, extension, motion, number and figure – to do. Colour is ranked with taste, smell and sound as secondary, which is just a restatement of Plato’s rejection of the senses in other terms.

Berkeley, however, sees that this is a specious distinction and gets to the heart of the matter with his axiom esse est percipi – ‘to be is to be perceived’. Everything we know requires some mind to perceive it: a point of view is implicit in every account we give. There can be no objective reality independent of a subject (as the terms themselves imply, since each is defined in terms of the other).

The response to this is illuminating. Berkeley’s dictum is rightly seen as fatal to the notion of an objectively real world, but instead of accepting this and downgrading Plato’s vision to a conventional and partial account employed for practical purposes, every effort is made to preserve it as the sole account of how things are, the one true reality.

Berkeley’s own position is to suppose that only ideas in minds exist, but that everything exists as an idea in the mind of God, so there is a reality independent of our minds, though not of God’s (a neat marriage of philosophy and orthodox Christianity – Berkeley did become a bishop in later life; the Californian university town is named after him).

Kant’s solution is to assert that there is an independent reality – the ding-an-sich, or ‘thing-in-itself’ – but logically it must be inaccessible to us: we know it is there (presumably as an article of faith) but we cannot know what it is like.

Schopenhauer’s solution is the most ingenious, and to my mind the most satisfying. He agrees that we cannot know the ding-an-sich as a general rule, but there is one notable exception: ourselves. We are objects in the world and so are known like everything else, via the senses – this is Plato’s world of Appearance, which for Schopenhauer is the World as Representation (since it is re-presented to our minds via our senses). But because we are conscious and capable of reflecting, we are also aware of ourselves as subjects – not via the senses (that would make us objects) but directly, by being us. (To put it in terms of pronouns: you know me and I know you, but I am I; I am ‘me’ to you (and in the mirror) but I am ‘I’ to myself.)

And what we are aware of, Schopenhauer says, is our Will; or rather, that our inner nature, as it were, is will: the will to exist, the urge to be, and that this is the inner nature of all things – the elusive ding-an-sich: they are all objective manifestations of a single all-pervasive Will (which happens in us uniquely to have come to consciousness).

This idea is not original to Schopenhauer but is borrowed from Eastern, specifically Hindu and Buddhist, philosophy, which belongs to a separate (and possibly earlier) tradition from that of Platonic thought (it is interesting to note that a very similar way of looking at things has likewise been borrowed by evolutionary biologists, such as Richard Dawkins, with the notion of ‘the selfish gene’ taking the place of Schopenhauer’s ‘Will’ as the blind and indifferent driving force behind all life).

I find Schopenhauer’s account satisfactory (though not wholly so) because it is a genuine attempt to give an account of the World as we experience it, one that reconciles all its elements (chiefly us as subjects with our objective perceptions) rather than the conventional account of Plato (and all who have followed him) which proceeds by simply discounting the subject altogether, effectively dismissing personal experience and reducing us to passive observers. Although the utility of this convention cannot be denied (in certain directions, at any rate) its inherent limitations make it inadequate to consider many of the matters that trouble us most deeply, such as those that find expression in religion and art; and if, like those who wish to tell us that the sun is ‘actually’ white, we mistake the conventional account for an actual description of reality*, we end up by dismissing the things that trouble us most deeply as ‘merely subjective’ and ‘not real’ – which is deleterious, since they continue to trouble us deeply, but that trouble has now been reclassified as an imaginary ailment.

*perhaps I should say ‘the world’ or ‘what there is’. There is a difficulty with ‘reality’ and ‘real’ since they are prejudiced in favour of the convention of an objective world: this becomes clearer when you consider that they could be translated as ‘thingness’ and ‘thing-like’. The paradigm for ‘reality’ then becomes the stone that Dr Johnson kicked with such force that he rebounded from it, i.e. any object in the world, so that the subject and the subjective aspect of experience is already excluded.

‘Like, yet unlike.’

‘Like, yet unlike,’ is Merry’s comment in The Lord of the Rings when he first sees Gandalf and Saruman together: Gandalf, returned from the dead, has assumed the white robes formerly worn by Saruman, who has succumbed to despair and been corrupted by evil and is about to be deposed. So we have two people who closely resemble one another yet are profoundly different in character.

Scene: a school classroom. Enter an ancient shuffling pedagogue. He sets on his desk two items. The first depicts a scene from the days of empire, with a khaki-clad officer of the Camel Corps holding a horde of savage Dervishes at bay, armed only with a service revolver.

Teacher (in cracked wheezing voice): The sand of the desert is sodden red,—
Red with the wreck of a square that broke; —
The Gatling’s jammed and the Colonel dead,
And the regiment blind with dust and smoke.
The river of death has brimmed his banks,
And England’s far, and Honour a name,
But the voice of a schoolboy rallies the ranks:
‘Play up! play up! and play the game!’

Cackling to himself, he unveils his second prop, a glass case in which a stuffed domestic tabby cat – now rather moth-eaten, alas! – has been artfully disguised to give it the appearance of a (rather small) African lion.

Teacher (as before): The lion, the lion
he dwells in the waste –
he has a big head,
and a very small waist –
but his shoulders are stark
and his jaws they are grim:
and a good little child
will not play with him!

Once recovered from his self-induced paroxysm of mirth, almost indistinguishable from an asthma attack, he resumes what is evidently a familiar discourse.

Teacher: We remember, children, that whereas the simile (put that snuff away, Hoyle, and sit up straight) says that one thing is like another, the metaphor says that one thing is another, in this case that the soldier was a lion in the fight. Now in what respects was he a lion? It can scarcely be his appearance, though I grant that his uniform has a tawny hue not dissimilar to the lion’s pelt; certes, he has no shaggy mane (did I say something amusing, Williams? stop smirking, boy, and pay attention) and instead of claws and teeth he has his Webley .45 calibre revolver. Nonetheless, he displays a fearless courage in the face of great odds that is precisely the quality for which the King of Beasts is renowned, so that is why we are justified in calling him a lion. What is that, Hoyle? Why do we not just say he is like a lion? Ha – hum – well, you see, it makes the comparison stronger, you see, more vivid.

Hoyle does not see, but dutifully notes it down, and refrains from suggesting that ‘metaphor’ is just a long Greek word for a lie, since he knows that would get him six of the belt in those unenlightened days.

[curtain]

But young Hoyle the snuff-taker has a point. Aristotle, it will be recalled, writing in his Poetics, says that the poet ‘above all, must be a master of metaphor’, which he defines as ‘the ability to see the similarity in dissimilar things’. But this definition is as problematic as the teacher’s explanation: why is a comparison between two things whose most striking feature is their dissimilarity made stronger and more vivid by saying that they are actually the same?

The best that people seem able to manage in answer to this is that the literary metaphor has a kind of shock value. To illustrate the point, they generally allude to the conceits of the metaphysical poets, such as Donne, where what strikes us first as outrageous, is – once explained – redeemed by wit and ingenuity:

Oh stay, three lives in one flea spare,
Where we almost, nay more than married are.   
This flea is you and I, and this
Our marriage bed, and marriage temple is;   
Though parents grudge, and you, w’are met,   
And cloistered in these living walls of jet.

The best metaphor, it seems, is one where the dissimilarity is more striking than the resemblance.

But mention of the metaphysical poets recalls a different definition of metaphor, one provided by Vita Sackville-West in her book on Andrew Marvell:

‘They saw in it [metaphor] an opportunity for expressing … the unknown … in terms of the known concrete.’

That is the form in which I was wont to quote it in my student days, when it made a nice pair with the Aristotle quoted above; but I think now that I did Vita Sackville-West a disservice by truncating it. Here it is in full:

‘The metaphysical poets were intoxicated—if one may apply so excitable a word to writers so severely and deliberately intellectual—by the potentialities of metaphor. They saw in it an opportunity for expressing their intimations of the unknown and the dimly suspected Absolute in terms of the known concrete, whether those intimations related to philosophic, mystical, or intellectual experience, to religion, or to love. They were ‘struck with these great concurrences of things’; they were persuaded that,
Below the bottom of the great abyss
There where one centre reconciles all things,
The World’s profound heart pants,
and no doubt they believed that if they kept to the task with sufficient determination, they would succeed in catching the world’s profound heart in the net of their words.’

If I had my time again (for indeed that ancient pedagogue described above is me) and wished to illustrate this, I would go about it rather differently.

Let us suppose a scene where a child cowers behind her mother’s skirts while on the other side a large and overbearing man, an official of some sort, remonstrates with the mother demanding she surrender the child to his authority. Though she is small and without any looks or glamour – a very ordinary, even downtrodden sort – the woman stands up boldly to the man and defies him to his face with such ferocity that he retreats. I am witness to this scene and the woman’s defiance sends a thrill of excitement and awe coursing through me. In recounting it to a friend, I say ‘In that moment, I seemed to glimpse her true nature – I felt as if I was in the presence of a tiger, defending her cubs.’

This is a very different account of metaphor. It is no longer a contrived comparison for (dubious) literary effect between two external things that are quite unlike, in which I play no part save as a detached observer; instead, I am engaged, involved: the metaphor happens in me – the identity is not between the external objects, but in the feeling they evoke, which is the same, so that the sight before me (the woman) recalls a very different one (the tiger) which felt exactly the same.

The first point to note is that the contradiction implicit in Aristotle’s account has disappeared. There is no puzzle in trying to work out how a woman can be a tiger, because the unity of the two lies in the feeling they evoke. And as long as my response is typically human and not something unique to me, then others, hearing my account, will feel it too, and being stirred in the same way, will recognise the truth expressed by saying ‘I felt I was in the presence of a tiger.’

Further, the very point that seemed problematic at first – the dissimilarity – is a vital element now. It is the fact that the woman appears as unlike a tiger as it is possible to be that gives the incident its force: this is an epiphany, a showing-forth, one of those ‘great concurrences of things’ that seem like a glimpse of some reality beyond appearance, ‘the World’s profound heart’.

Yet that description – ‘some reality beyond appearance’ – is just what pulled me up short, and made me think of the Tolkien quote I have used as a heading. Is not this the very language of Plato, whose world of Forms or Ideas is presented as the Reality that transcends Appearance?

Yet the world as presented by Plato is essentially the same as that of Aristotle, which has become, as it were, our own default setting: it is a world of objective reality that exists independently of us; it is a world where we are detached observers, apprehending Reality intellectually as something that lies beyond the deceptive veil of Appearance. It is the world we opened with, in which metaphor is a contradiction and a puzzle, perhaps little better than a long Greek word for a lie.

Though both accounts – the Platonic-Aristotelian world on one hand, and Vita Sackville-West’s version on the other – seem strikingly similar (both have a Reality that lies beyond Appearance and so is to some extent secret, hidden), there are crucial differences in detail; like Gandalf and Saruman, they are like, yet unlike in the fundamentals that matter.

The Platonic world is apprehended intellectually. What does that mean? Plato presents it in physical terms, as a superior kind of seeing – the intellect, like Superman’s x-ray vision, penetrates the veil of Appearance to see the Reality that lies beyond. But the truth of it is less fanciful. What Plato has really discovered (and Aristotle then realises fully) is the potential of general terms. A Platonic Idea is, in fact, a general term: the Platonic Idea of ‘Horse’ is the word ‘horse’, of which every actual horse can be seen as an instance or embodiment. Thus, to apprehend the World of Forms is to view the actual world in general terms, effectively through the medium of language.

This can be imagined as being like a glass screen inserted between us and the landscape beyond, on which we write a description of the landscape in general terms, putting ‘trees’ where there is a forest, ‘mountains’ for mountains, and so on. By attending to the screen we have a simplified and more manageable version of the scene beyond, yet one that preserves its main elements in the same relation, much as a sketch captures the essential arrangement of a detailed picture.

But the Sackville-West world is not mediated in this way: we confront it directly, and engage with it emotionally: we are in it and of it. And our apprehension of a different order of reality is the opposite of that presented by Plato; where his is static, a world of unchanging and eternal certainties (which the trained intellect can come to know and contemplate), hers is dynamic, intuitive, uncertain: it is something glimpsed, guessed at, something wonderful and mysterious which we strive constantly (and never wholly successfully) to express, in words, music, dance, art.

The resemblance between the two is no accident. Plato has borrowed the guise of the ancient intuited world (which we can still encounter in its primitive form in shamanic rituals and the like) and used it to clothe his Theory of Forms so that the two are deceptively alike; and when you read Plato’s account as an impressionable youth (as I did) you overlay it with your own intimations of the unknown and the dimly suspected Absolute and it all seems to fit – just as it did for the Christian neoPlatonists (in particular, S. Augustine of Hippo) seeking a philosophical basis for their religion.

I do not say Plato did this deliberately and consciously. On the contrary, since he was operating on the frontier of thought and in the process of discovering a wholly new way of looking at the world, the only tools available to express it were those already in use: thus we have the famous Simile of the Cave, as beguiling an invitation to philosophy as anyone ever penned, and the Myth of Er, which Plato proposes as the foundation myth for his new Republic.

And beyond this there is Plato’s own intuition of a secret, unifying principle beyond immediate appearance, ‘the World’s profound heart’, which we must suppose him to have since it is a persistent human trait: is it not likely that when he had his vision of the World of Forms, he himself supposed (just as those who came after him did) that the truth had been revealed to him, and that he was able to apprehend steadily what had only been glimpsed before?

It would explain the enchantment that has accompanied Plato’s thought down the ages, which no-one ever attached to that of his pupil Aristotle (‘who is so very nice and dry,’ as one don remarked) even though Aristotelianism is essentially Plato’s Theory of Forms developed and shorn of its mysterious presentation.

So there we have it: a new explanation of metaphor that links it to a particular vision of the world, and an incidental explanation of the glamour that attaches to Plato’s Theory of Forms.

Like, yet unlike.

St. Anselm and the Blackbird

Blackbird

Its eye a dark pool
in which Sirius glitters
and never goes out.
Its melody husky
as though with suppressed tears.
Its bill is the gold
one quarries for amid
evening shadows. Do not despair
at the stars’ distance. Listening
to blackbird music is
to bridge in a moment chasms
of space-time, is to know
that beyond the silence
which terrified Pascal
there is a presence whose language
is not our language, but who has chosen
with peculiar clarity the feathered
creatures to convey the austerity
of his thought in song.

– R.S. Thomas

St Anselm was Archbishop of Canterbury and lived from 1033 to 1109 at the start of the intellectual renaissance that the High Middle Ages brought to Western Europe. It was a period of great intellectual ferment, an Age of both Faith and Reason, when the best minds of the day applied with passionate curiosity the learning they were rediscovering to the big topic of the day: God.

It takes some effort of the imagination in this secular age to realise that for the mediaeval mind, Theology was the Queen of Sciences, as exciting in its day as quantum physics is now. ‘What is God?’ is the question that the greatest of the mediaevals – one of the greatest intellects ever, Thomas Aquinas – asked at an early age and pursued the rest of his life.

The learning they were rediscovering had two principal strands, both of which had been kept alive elsewhere, since the Eastern Empire, centred on Constantinople, continued after the Western one, centred on Rome, had fallen, though increasingly encroached upon latterly by a new intellectual and religious power to the East and South: Islam.

The most immediately accessible strand, because it was written in Latin, was the neoPlatonism of the late Roman period, whose most notable exponent was Augustine of Hippo. Platonism, with its notion of a transcendent Reality composed of eternal, immutable Forms and a vision of Truth as a brilliant sun that is the source of all wisdom, is a good fit for Christianity – so little is needed to reconcile them that Plato (with the Christ-like Socrates as his literary mouthpiece) can seem almost a pagan prophet of Christianity.

The second strand was more difficult, because it took a circuitous route from the Greek-speaking Eastern empire through the Arabic of Islamic scholars (Avicenna and Averroes, principally) before being translated into Latin where the two cultures met in Spain. This second strand centred chiefly on the writings of Plato’s pupil, one of the greatest minds of any age, Aristotle.

It was Aquinas who met the challenge of reconciling this new influx of pagan (and heretic) thought with catholic teaching, and did so with such effect that he remains to this day the chief philosopher of the catholic church, the Summa Theologica being his principal work*.

This period marks the second beginning of Western thought; its first beginning had been some fourteen centuries previously with the Classical Age of Greece, and the two giants, Plato and Aristotle. It is important to realise that what might seem at first glance a recovery of ancient wisdom was in reality nothing of the sort: it was the rediscovery of a new and startling way of looking at things, one that displaced and subjugated the traditionally accepted way of understanding our relation to the world that had held since time immemorial.

What made this new way of thought possible was the written word. For the first time, it was possible to separate one of the elements of human expression, speech, from the larger activity of which it was part, and give it what appeared to be an independent and objective form. This did not happen at once; indeed, it took about three thousand years from the invention of writing, around 5500 years ago, to the realisation of its potential in Classical Greece.

The word written on the page is the precondition of the relocation of meaning: from being a property of situations, inseparable from human activity and conveyed by a variety of methods, such as facial expression, gesture, bodily posture, with speech playing a minor role, meaning now becomes the property of words, and is deemed, by implication, to exist independently and objectively, and to be more or less fixed.

This one change is the foundation of modern thought: it is what allows Plato, with breathtaking audacity, to reverse the relation between the intellect and the senses and proclaim that what the senses tell us is mere Appearance, and that Reality is apprehended by the intellect – and consists of the world viewed from a general aspect: effectively, through the medium of language. It is the beginning of a world-view that casts us as detached spectators of an independent objective reality, a world-view that cannot be acquired naturally and instinctively, but only through a prolonged process of education, based on literacy.

When, some fourteen centuries later, Anselm devises his ‘ontological proof’ for the existence of God, it is squarely within this intellectual framework erected by Plato and Aristotle:
‘[Even a] fool, when he hears of … a being than which nothing greater can be conceived … understands what he hears, and what he understands is in his understanding. … And assuredly that, than which nothing greater can be conceived, cannot exist in the understanding alone. For suppose it exists in the understanding alone: then it can be conceived to exist in reality; which is greater. … Therefore, if that, than which nothing greater can be conceived, exists in the understanding alone, the very being, than which nothing greater can be conceived, is one, than which a greater can be conceived. But obviously this is impossible. Hence, there is no doubt that there exists a being, than which nothing greater can be conceived, and it exists both in the understanding and in reality.’

This is straightforward enough, if you take your time and attend to the punctuation: the expression ‘that than which nothing greater can be conceived’ is Anselm’s definition of God; and even a simpleton, he says, can understand it; but to exist in reality is better than to exist merely in the imagination, so a God that exists in reality is greater than one which exists only in the imagination; so if God is that than which nothing greater can be conceived, then God must exist in reality (because that leaves no room, as it were, to conceive of anything greater).
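For readers who like to see the joints of such an argument laid bare, here is one conventional way of setting out its skeleton in modern logical notation – a sketch of my own, it should be said, not Anselm’s formulation; the predicates U, R and G are labels introduced purely for the purpose:

\[
\begin{aligned}
&\text{Let } U(x) \text{ mean `}x\text{ exists in the understanding', } R(x) \text{ mean `}x\text{ exists in reality',}\\
&G(y,x) \text{ mean `}y\text{ is conceivably greater than }x\text{', and let } g \text{ be that than which}\\
&\text{nothing greater can be conceived.}\\[4pt]
&1.\quad U(g) &&\text{(even the fool understands the phrase)}\\
&2.\quad U(g)\land\lnot R(g)\ \rightarrow\ \exists y\,G(y,g) &&\text{(a real counterpart would be greater)}\\
&3.\quad \lnot\exists y\,G(y,g) &&\text{(by the definition of } g\text{)}\\
&4.\quad \lnot\bigl(U(g)\land\lnot R(g)\bigr) &&\text{(from 2 and 3, by modus tollens)}\\
&5.\quad U(g)\land R(g) &&\text{(from 1 and 4: } g \text{ exists in the understanding and in reality)}
\end{aligned}
\]

Set out like this, it is easier to see that all the work is done by the second premise – the assumption that existing in reality as well as in the understanding is ‘greater’ than existing in the understanding alone.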

Much has been said and written about this argument since it was first made over 900 years ago, but I want to concentrate on a single aspect of it, which is the continuity it implies between the human understanding and reality. To use an image, if we conceive the intellect as a skyscraper, then by taking the lift to its utmost height and climbing, so to speak, onto the roof, we arrive at Reality, the only thing that is higher than the height of our understanding.

This is what leads us to suppose – via the notion that we are created in the image and likeness of God – that God must be the perfection of all that is best in us; and if we esteem our intellectual faculties above all else (as, in the ‘West’, we seem to do) then God must be the supreme intellect.

This presents a problem, one that has considerable force in arguments against the existence of God: though a lesser intellect cannot fully comprehend a greater one, they share a great deal of common ground, and the greater intellect can certainly attune itself to the capacity of the lesser: this is a familiar case (though not always!) between adult and child, teacher and pupil. Why, then, does God not deal directly with us at our intellectual level? Why doesn’t God speak our language? He surely would, if he could; yet he must be able to, since he is God – so the fact that he does not makes him appear either perverse (like a parent playing a cruel sort of game where he pretends not to be there, and does not answer when his child calls out to him, though he may do something that indirectly suggests his presence, like throwing a ball or making the bushes move) or absent, since he would if he could, but does not.

Thomas’s poem is an answer to this conundrum, though it is not a comfortable one. Perhaps our assumption that reality is at the top of the skyscraper is an error: maybe it is outside, at ground level. Maybe God speaks to us all the time, but we do not recognise the fact, because ‘God’ is quite other than we suppose, and cannot be contained in the intellectual framework that Plato and Aristotle have bequeathed to us.

This would explain on the one hand why religion – in its broadest sense – is bound up with immemorial ritual (which belongs to the world before Plato and Aristotle) and on the other, why, in an age that puts its confidence in intellect and reason – the ‘new thinking’ that Plato and Aristotle invented, not so very long ago in terms of our earthly existence – God is proving increasingly difficult to find.

*in the context of this piece, it is worth recalling that Aquinas on his deathbed said that his work now seemed ‘all so much straw.’

A penny for them…

‘What are you thinking of? What thinking? What?
I never know what you are thinking.’
– Eliot, The Waste Land

‘He’s the sort that you never know what he’s thinking’ defines a recognisable character but carries a curious implication. There is a strong suggestion of duplicity, of inner workings at odds with outer show. Even among long-time married couples you will sometimes hear it said (in exasperated tones) ‘all these years we’ve been married and I still have no idea what goes on in that head of yours’.

But that exasperated tone indicates the same curious implication as the first case – namely, that we expect to know what people are thinking; that not to know is what is considered remarkable, the exception that proves the rule. C. Auguste Dupin, a notable precursor of Sherlock Holmes created by Edgar Allan Poe, makes a striking demonstration of this in The Murders in the Rue Morgue:

‘One night we were walking down one of Paris’s long and dirty streets. Both of us were busy with our thoughts. Neither had spoken for perhaps fifteen minutes. It seemed as if we had each forgotten that the other was there, at his side. I soon learned that Dupin had not forgotten me, however. Suddenly he said:
“You’re right. He is a very little fellow, that’s true, and he would be more successful if he acted in lighter, less serious plays.”
“Yes, there can be no doubt of that!” I said.
At first I saw nothing strange in this. Dupin had agreed with me, with my own thoughts. This, of course, seemed to me quite natural. For a few seconds I continued walking, and thinking; but suddenly I realized that Dupin had agreed with something which was only a thought. I had not spoken a single word.’

Dupin’s explanation of his apparent mind-reading runs to another page and three-quarters [you can read it here], and though something of a virtuoso performance, it is based on sound principles – Dupin observes his friend’s actions and expressions closely, and is able to follow his train of thought by skilful inference, both from what he sees and what he already knows.

The incident starts when, in evading a hurrying fruit-seller, his companion stubs his toe on an ill-laid paving stone:

‘You spoke a few angry words to yourself, and continued walking. But you kept looking down, down at the cobblestones in the street, so I knew you were still thinking of stones.
“Then we came to a small street where they are putting down street stones which they have cut in a new and very special way. Here your face became brighter and I saw your lips move. I could not doubt that you were saying the word stereotomy, the name for this new way of cutting stones.’
….
‘Later I felt sure that you would look up to the sky. You did look up. Now I was certain that I had been following your thoughts as they had in fact come into your mind.’
….
‘I saw you smile, remembering that article and the hard words in it.
“Then I saw you stand straighter, as tall as you could make yourself. I was sure you were thinking of Chantilly’s size, and especially his height.’

Two things are worth noting here, I think. The first is Dupin’s attention to such things as the direction of his friend’s gaze, the expression on his face, and his whole bodily posture, all of which he reads as indicative of thought – I was going to say ‘accompaniments of thought’ but that would be the wrong word, I think, for reasons I will come to presently. The second thing is one particular detail – ‘I saw your lips move’ – and the observation the narrator makes at the end of the episode – ‘Dupin was right, as right as he could be. Those were in fact my thoughts, my unspoken thoughts’.

These highlight two important points about thought that are often overlooked: that it has a physical aspect, and that it is closely connected to speech. We use the expression ‘difficult to read’ of people like the man cited at the start, the ‘sort that you never know what he’s thinking’, and this reminds us that we do rely to a great extent on non-verbal physical indications of ‘mental’ activity.

Indeed, it is interesting to consider just how contrary to everyday experience is the notion that mental activity and thought are hidden, private processes that take place ‘in our heads’ so that only we ‘have access to them’. I put those expressions in quotes because I think they are misleading, in the same way that it is misleading to speak of facial expression etc. as ‘accompaniments’ to thought – I would say they are better considered as an integral part of thinking. We see this from the expression ‘I learned to hide my thoughts’, which is connected with controlling – indeed, suppressing – these external manifestations of thought.

The fact that we must make a conscious effort to conceal thought suggests that it is far from the ‘hidden process’ it is often supposed to be and calls into question the whole range of terms we use that suggest it is – such as the notion of thoughts being ‘in our head’ and our having ‘private access to them’ alluded to above. The implication there is that the head (or brain, or mind) is a sort of space in which our thoughts are stored (and where other mental activity takes place); furthermore, it is a private space, a sort of secret room to which we alone have access. (In this connection, consider the various fictional representations of telepathy and mind-reading, which often involve clutching the head, pressing the temples etc., either in an effort to keep its contents from being rifled, or in the attempt to pilfer them – thoughts are seen as something contained which can, by certain means, be extracted.)

In St Ambrose’s day (c. 340–397) it was considered remarkable that he could read without moving his lips, from which we infer that most people then did so. I believe that this is now termed ‘subvocalisation’ and it appears to have been studied extensively in connection with reading, but less so with thought. I am conscious that a great deal of my own thought consists of articulating sentences ‘in my head’, a process that I consider the same as speaking in all but the final act of voicing the words aloud (an interpretation supported by the fact that sometimes I do actually speak my thoughts aloud) – hence my interest in the expression Poe uses above, ‘my unspoken thoughts.’

It would be interesting to know whether the late Romans of St Ambrose’s day moved their lips when thinking, or indeed habitually spoke their thoughts aloud, openly or in an undertone. Even now, this is more common than we might suppose – people often blurt out their thoughts without meaning to, and most of us are familiar with the expression ‘did I just say that aloud?’ (and the feeling that accompanies it) when we say what might have been better kept to ourselves. There are also people who have the habit of framing their thoughts as spoken questions, which can be disconcerting till you realise that they are not actually seeking an answer from you, personally: it is just another form of saying ‘I wonder if…’.

So it would seem that, just as we have learned for the most part to read without moving our lips, so we have also gradually shed (or learned to suppress) the more obvious physical manifestations of what we now consider ‘mental’ activities, such as thinking, imagining, remembering etc., though my guess (as with subvocalisation in relation to reading) is that there is probably still a fair bit that could be detected in muscle movement, brain activity and the like (though it would be an interesting experiment to see if these too – the brain activity in particular – can be controlled).

From the effort we must make to conceal thought, and our varying success in doing so, it is reasonable to infer that the ‘natural’ mode of thought is holistic, involving the body (at least) as much as the brain: consider, for instance, two rather different examples. One is the domestic cat, and how it is transformed on spying a bird that might be its prey or an intruder on its territory: its intentions can be read very clearly from its bodily posture and movement. The other is the recent emergence in sport – particularly at the highest level – of the practice of ‘visualisation’, which is rather more than simply picturing what you want to happen; it is a full-scale physical anticipation of it, typified by the rituals with which Jonny Wilkinson used to precede his kicking attempts in rugby.

It is interesting to set all this alongside the long-standing tradition in philosophy that regards mental activity as private, personal and inaccessible to others, which has led some to the extreme of solipsism, the doctrine that your own self is the only being you can confidently believe to exist. Much blame for this can be laid at the door of Descartes, often seen as the herald of the modern era in philosophy, though the mind-body dualism generally attributed to him can be dated back to Plato (much as his most noted dictum, cogito ergo sum – ‘I think, therefore I am’ – can be traced back to St Augustine a thousand years before). Descartes makes the classic error of supposing that because we are deceived in some cases, it is possible that we might be deceived in every case – overlooking the fact that such a state of affairs would render ‘being deceived’ and its allied concepts of mistake, illusion, hallucination and the like incomprehensible: if we were deceived in all things, we would not be aware of it; the fact that we have a word for it demonstrates that, in most cases, we are not deceived, and that we also recognise the special and generally temporary circumstances in which we are.

If we go back to Plato, I think we can find the real root of the notion that thoughts are private. It is bound up with what I consider the relocation of meaning that takes place around the time of Classical Greece, about 25 centuries ago, and is made possible by the invention of writing. Only once a word can be written on a page does it become possible to consider it apart from the milieu in which it naturally occurs, human activity involving speech. Such activity (what Wittgenstein calls ‘forms of life’ and ‘language games’) is the ultimate source of meaning (cp. Wittgenstein again, ‘the meaning of a word is its use in the language’). Prior to the invention of writing, there was neither the means nor indeed any reason to consider speech apart from the larger activity of which it formed a part; indeed it is doubtful whether people would even have the concept of words as components of speech, which presents itself as a rhythmic flow, rather than a concatenation of smaller elements.

With writing, all that changes. For the first time, language can be studied and analysed at leisure. A sentence written on a page is comprehensible in itself, or so it appears, without reference to who wrote it or in what context. From this it is an easy step to the notion that meaning is an inherent property of words, rather than situations (what we overlook in this, of course, is that we are able to supply the context in which the words have meaning; but as such instances as the Phaistos Disc remind us, that ability can be lost, so that the marks we see have no more meaning for us than if they were random scratches made in play).


This relocation of meaning is fundamental to Plato’s Theory of Forms (or Ideas), in which he argues that the senses are deceived by the world of Appearance and that only the intellect can apprehend the true nature of Reality, the transcendent and immutable Forms. As I have argued elsewhere, there is a strong case to be made that Platonic Ideas are in fact words – specifically, general and abstract terms – so the Platonic Ideas of ‘Cat’, ‘Table’, ‘Justice’ and ‘Good’ are the words cat, table, justice, good, which stand for these ‘things’ (i.e. the general idea of ‘cat’, the abstract idea of ‘justice’) just as a specific name stands for the actual thing it denotes. (Though Plato pictures the words pointing to the transcendent Form or Idea, in actual fact the words themselves, allied to the use we make of them, are all that is needed.)

It is this objectification of general and abstract ideas that leads to the notion of mental processes as private and inaccessible to others. We can point to something as an act of justice or goodness, but once we acquire the notion of justice as an idea, we introduce a new class of objects, those which can be apprehended only by the intellect. Strictly speaking, ‘object’ is used metaphorically here, but with Plato’s insistence that the Forms are the true Reality, this gets overlooked, and we start to think of thoughts, memories, ideas, impressions and the like as ‘mental objects’ that exist ‘in our minds’ or our ‘imaginations’, which we conceive as a kind of space, a sort of private viewing room.

The point to note here is that the metaphor preserves the Subject-Object relation, which is easily grasped in relation to physical objects – I know what it is to look at a tree, a cat or indeed another person: I am here and it is there. However, a degree of mystery seeps in when this is extended to ideas, thoughts and suchlike, particularly as philosophy develops the account it gives of them. Thus by Hume’s time we no longer simply see a tree: we form a mental impression of one, of which we can then make a copy, which he calls an idea – and this copy is what we use in remembering or imagining a tree, ‘calling it to mind’. This development clearly goes hand in hand with a growing understanding of light and optics and the physiology of the eye, but it is facilitated by having the notion of ‘mental space’ and regarding ideas as objects.

However, what is of most interest is how this alters our view of the Subject. From being a holistic notion which makes no distinction between mind and body – ‘I am this person looking at that tree’ – the subject begins to retreat in what becomes an infinite regress: the tree that we see out the window becomes a representation of a tree – to use Schopenhauer’s term, or the impression of a tree, to use Hume’s – which is now ‘in the mind’ but is still, somehow, seen. And if we have memory of that tree – an idea, to use Hume’s term – or the thought of a tree, or the mental image of one, then that, too, seems to be an object which we somehow apprehend – so the seeing, knowing or thinking subject – ourself – is forever edging out of the picture, never able – as subject – to become itself the object of consideration.

This is what leads the earlier Wittgenstein to suppose, in the Tractatus, that the subject is the boundary of experience, that it does not exist in the world but somehow outside or on the edge of it. Others have suggested that the Subject is a temporary manifestation generated (not unlike an electrical charge) by the combination of our brain and body and nervous system: it exists while we are alive (perhaps only when we are awake) and simply ceases when the physiology that generated it dies.

Yet all this, I would argue, is simply the result of philosophy’s having painted itself into a corner by adopting the way of thinking about the world that starts out with Plato. By dismissing the objects of sense as mere Appearance, and substituting the objects of intellectual apprehension as Reality, we reduce the Subject from an active participant in the world to a passive, detached observer: Wittgenstein’s boundary of experience. Reality is redefined as solely objective, and there is no room in it for the subject: ‘objectivity’ is praised while the subjective (often qualified by ‘merely’) is dismissed as unreliable, partial, mere ‘personal opinion’.

But let us step back, go back indeed to where we started, with Dupin, and the notion of thinking as a holistic activity which involves us as a totality, which is both physical and ‘mental’ (if indeed that distinction can be made at all). The view mentioned earlier, that the Subject (which can be identified with consciousness) is a kind of transitory by-product of our physiology, seems to be supported by the latest developments in brain-imaging, which allow us to observe electrical activity in the neural networks of the brain: there is a correlation between certain activities and the part of the brain that ‘lights up’ when we are engaged in them. This has even led some to say that what brain imaging shows us are our actual thoughts – that all they are is these patterns of electrical activity.

But I wonder. It has been demonstrated that people can lower their blood pressure aided by an index of it in the form of a display; likewise, people can be trained to suppress the physiological symptoms which polygraph tests – so-called ‘lie detectors’ – depend on for their evidence. It would be interesting to see if the lighting-up of neural networks is something that can be similarly controlled or disguised – for if we can learn to ‘hide our thoughts’ by controlling outward appearances, why should we suppose that we cannot do likewise with other physical manifestations of them, once we are aware of them?

It is illuminating to look at this from the other side: not only can we suppress or disguise the physical manifestations of thought, we can also imitate them – that is what actors do. And of course a standard acting technique is to have a store of memories that move us, which can be called to mind when the requisite emotion is called for – so if I wish to portray a character stricken by grief, I conjure a memory of a time when I myself was grieved, and my outward aspect will conform, much as does the player’s in Hamlet, who

But in a fiction, in a dream of passion,
Could force his soul so to his own conceit
That from her working all his visage wann’d,
Tears in his eyes, distraction in’s aspect,
A broken voice, and his whole function suiting
With forms to his conceit

Wittgenstein asks somewhere how we know that we are imitating someone’s expression, and adds that it is not by studying our face in a mirror. Our response to a smile, from a very early age, is a smile; but we are not imitating what we see – after all, we do not know what our own face looks like. What guides us is rather the feeling that goes with the smile. The best way I can think to put this is that, as human beings, we know what an expression feels like from the inside.

And I would add a note of caution here: do not import the model of cause and effect that we use in analysing the objective world. The joy we feel within does not cause the smile; it is not prior to it – the two are aspects of the same thing. I am reminded of an expression I learned as a boy doing my catechism – ‘an outward sign of inward grace’. There is a range of things that we know, not through becoming acquainted with them, but by doing them, by being them. And although we speak of ‘seeing’, ‘hearing’ and the rest of the senses separately, we cannot actually turn them on and off, but do them all at once and all the time; what we vary is the attention we give each one, and for most of us, sight predominates, with hearing next and the rest a good way behind, except when they force themselves on our attention.

What we actually experience, unanalysed, is not simply ‘the world’ – that is only half the story; what we experience is ‘being in the world’. All experience has this dual aspect: we know it from the inside and the outside at the same time. That is what makes communication possible, what understanding, properly understood, consists of. It is what in art, in all its forms – music, painting, sculpture, poetry, dance – enables us to ‘get it’: by considering the outward sign, we experience what it is like from inside, we recognise the feeling it expresses as something we, too, have felt.

The clever model that Plato and Aristotle invented, that underpins all Western thought, has enabled us to achieve remarkable things, but only at the considerable expense of ignoring one half of our experience and pretending that it does not matter.

Perhaps what Descartes should have said is not cogito ergo sum, nor even sum ergo sum (since it is not something we know by deduction) but simply sum – I am.

Seeing Better

‘See better, Lear!’ is the admonition Kent gives his King after he has petulantly banished his youngest daughter, Cordelia, because she ‘lacks that glib and oily art’ to flatter him as her false sisters have done. Sight and blindness are a central theme in King Lear, as is their corollary, deception, both of others and of oneself.

Kent’s words came to me when I was ruminating on my latest occupation, drawing shiny things.


One of the things that drawing teaches is seeing better, and that indeed is a large part of my reason for pursuing it recently, as a kind of philosophical experiment (since February I have been drawing a monkey a day, in response to a challenge by a friend).

The status of colour crops up in philosophical discussions at various periods – it is Locke, I think, who argues that colours are not ‘primary qualities’ (such as shape, extension and solidity) but only ‘secondary’ in that they involve an interaction between eye and object and cannot be said to inhere in the object itself as the primary qualities are supposed to do – but it is really a subset of a larger argument that takes us back (as always) to Plato.

Plato, it will be recalled, dismisses the world brought to us via the senses as deceptive Appearance, maintaining that the true nature of the world – Reality – can only be apprehended by the intellect: it is the world of Forms or Ideas. As I have argued elsewhere (‘In the beginning was the Word’) what Plato has really discovered is the power of general terms – the Platonic Idea or Form ‘table’ is not something that lies beyond the word ‘table’, to which it points, it is in fact the word ‘table’ itself – which can be used in thought to stand for any table, because – unlike a picture – it does not resemble any particular table.

This introduces a whole new way of thinking about the world, where it is no longer seen directly, through the despised senses, but apprehended by the intellect through the medium of language. And there is no better way of appreciating this than to try and draw something shiny.

(drawing of a Daimler DR450)

What colour is the car? Why, black, of course – with some shiny bits. That is how it was described on the official documentation – Daimler DR450, Black. But what about all those other colours, then? Ah, now, that’s just reflections of one thing and another – you can ignore them; the car’s real colour is black (and its radiator grille etc aren’t coloured at all, they’re shiny chrome plate).

What trying to draw it teaches you is not only that you can’t ignore the many other colours that are there (if you want your picture to be any good at all) but it also brings home to you that your regular habit (or at least mine) is to dismiss a great deal of what your eyes tell you and pretend it isn’t there, that it doesn’t count: ‘that is just light reflected off a polished surface; that is just a reflection; that’s just a shadow.’

And that is Platonism in action: the intellect overrides the senses, reserves judgement to itself – and it does it through words: ‘light’ conveniently labels – and so keeps you from looking at – something that is very difficult to render faithfully in a drawing. You find that reflective surfaces, far from being bright, are often dark and dull; a tiny patch left uncoloured on a white page becomes a gleam of light when surrounded by greys and blues, even black. And your mind, on seeing the drawing, converts it back to an image of a plated surface – perhaps the most interesting part of the process.

It is as if we erect a glass screen between ourselves and the world, and on the screen we write the words that correspond to the things beyond – ‘mountains, trees, clouds, house, road, cars, people’ – and most of the time what we see is not what is in front of us, but only the words on the screen that give us the simplified general picture, at once a tool of immense power (enabling rapid thought unencumbered by distracting detail) and a great impoverishment of our experience – it inserts a carapace between us and the world.

See better. Draw. Then go out and look.

In the beginning was the word… or was it?


Reflecting on the origin of words leads us into interesting territory. I do not mean the origin of particular words, though that can be interesting too; I mean the notion of words as units, as building blocks into which sentences can be divided.

How long have we had words? The temptation is to say ‘as long as we have had speech’ but when you dig a bit deeper, you strike an interesting vein of thought.

As I have remarked elsewhere [see ‘The Muybridge Moment‘] it seems unlikely that there was any systematic analysis of speech till we were able to write it down, and perhaps there was no need of such analysis. Certainly a great many of the things that we now associate with language only become necessary as a result of its having a written form: formal grammar, punctuation, spelling – the three things that probably generate the most unnecessary heat – are all by-products of the introduction of writing.

The same could be said of words. Till we have to write them down, we have no need to decide where one word ends and another begins: the spaces between words on a page do not reflect anything that is found in speech, where typically words flow together except where we hesitate or pause for effect. We are reminded of this in learning a foreign language, where we soon realise that listening out for individual words is a mistaken technique; the ear needs to attune itself to rhythms and patterns and characteristic constructions.
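(For readers who think in code, the point can be made concrete. Below is a minimal sketch in Python – the toy lexicon and the letter-stream are my own invented examples, not anything from the argument above: once speech is written down as an unbroken stream of letters, deciding where one word ends and another begins is a genuine act of analysis, and the same stream can sometimes be carved up in more than one way.)

```python
# A toy illustration: an unbroken stream does not come with word
# boundaries; segmenting it is an act of analysis, and the result is
# not always unique. The lexicon and stream are invented examples.

lexicon = {"the", "cat", "cati", "son", "is", "on", "table"}

def segmentations(stream, lexicon, prefix=()):
    """Return every way of dividing the stream into words from the lexicon."""
    if not stream:
        return [list(prefix)]
    results = []
    for i in range(1, len(stream) + 1):
        head = stream[:i]
        if head in lexicon:
            results += segmentations(stream[i:], lexicon, prefix + (head,))
    return results

print(segmentations("thecatisonthetable", lexicon))
# [['the', 'cat', 'is', 'on', 'the', 'table'],
#  ['the', 'cati', 'son', 'the', 'table']]
```

(Note what the sketch presupposes: a lexicon – that is, a prior decision about what counts as a word – which is exactly the decision that writing forces on us.)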

So were words there all along, just waiting to be discovered? That is an interesting question. Though ‘discovery’ and ‘invention’ effectively mean the same, etymologically (both have the sense of ‘coming upon’ or ‘uncovering’), we customarily make a useful distinction between them – ‘discovery’ implies pre-existence – so we discover buried treasure, ancient ruins, lost cities – whereas ‘invention’ is reserved for things we have brought into being, that did not previously exist, like bicycles and steam engines (an idea also explored in Three Misleading Oppositions, Three Useful Axioms).

So are words a discovery or an invention?

People of my generation were taught that Columbus ‘discovered’ America, though even in my childhood the theory that the Vikings got there earlier had some currency; but of course in each case they found a land already occupied, by people who (probably) had arrived there via a land-bridge from Asia, or possibly by island-hopping, some time between 42,000 and 17,000 years ago. In the same way, Dutch navigators ‘discovered’ Australia in the early 17th century, though in British schools the credit is given to Captain Cook in the late 18th century, who actually only laid formal claim in the name of the British Crown to a territory that Europeans had known about for nearly two centuries – and its indigenous inhabitants had lived in for around five hundred centuries.

In terms of discovery, the land-masses involved predate all human existence, so they were there to be ‘discovered’ by whoever first set foot on them, but these later rediscoveries and colonisations throw a different light on the matter. The people of the Old World were well used to imperial conquest as a way of life, but that was a matter of the same territory changing hands under different rulers; the business of treating something as ‘virgin territory’ – though it quite plainly was not, since they found people well-established there – is unusual, and I think it is striking where it comes in human, and particularly European, history. It implies an unusual degree of arrogance and self-regard on the part of the colonists, and it is interesting to ask where that came from.

Since immigration has become such a hot topic, there have been various witty maps circulating on social media, such as one showing ‘North America prior to illegal immigration’.

The divisions, of course, show the territories of the various peoples who lived there before the Europeans arrived, though there is an ironic tinge lent by the names by which they are designated, which for the most part are anglicised. Here we touch on something I have discussed before [in Imaginary lines: bounded by consent] – the fact that any political map is a work of the imagination, denoting all manner of territories and divisions that have no existence outside human convention.

Convention could be described as our ability to project or impose our imagination on reality; as I have said elsewhere [The Lords of Convention] it strikes me as a version of the game we play in childhood, ‘let’s pretend’ or ‘make-believe’ – which is not to trivialise it, but rather to indicate the profound importance of the things we do in childhood, by natural inclination, as it were.

Are words conventions, a form we have imposed on speech much as we impose a complex conventional structure on a land-mass by drawing it on a map? The problem is that the notion of words is so fundamental to our whole way of thinking – may, indeed, be what makes it possible – that it is difficult to set them aside.

That is what I meant by my comment about the arrogance and self-regard implied in treating America and Australia as ‘virgin territory’ – it seems to me to stem from a particular way of thinking, and that way of thinking, I suggest, is bound up with the emergence of words into our consciousness, which I think begins about two and a half thousand years ago, and (for Europeans at least) with the Greeks.

I would like to offer a model of it which is not intended to be historical (though I believe it expresses an underlying truth) but is more a convenient way of looking at it. The years from around 470 to 322 BC span the lives of three men: the first, Socrates, famously wrote nothing, but spoke in the market place to whoever would listen; we know of him largely through his pupil, Plato. It was on Plato’s pupil, Aristotle, that Dante bestowed the title ‘maestro di color che sanno’ – master of those that know.

This transition, from the talking philosopher to the one who laid the foundations of all European thought, is deeply symbolic: it represents the transition from the old way of thought and understanding, which was inseparable from human activity – conversation, or ‘language games’ and ‘forms of life’ as Wittgenstein would say – to the new, which is characteristically separate and objective, existing in its own right, on the written page.

The pivotal figure is the one in the middle, Plato, who very much has a foot in both camps, or perhaps more accurately, is standing on the boundary of one world looking over into another newly-discovered. The undoubted power of his writing is derived from the old ways – he uses poetic imagery and storytelling (the simile of the cave, the myth of Er) to express an entirely new way of looking at things, one that will eventually subjugate the old way entirely; and at the heart of his vision is the notion of the word.

Briefly, Plato’s Theory of Forms or Ideas can be expressed like this: the world has two aspects, Appearance and Reality; Appearance is what is made known to us by the senses, the world we see when we look out the window or go for a walk. It is characterised by change and impermanence – nothing holds fast, everything is always in the process of changing into something else, a notion of which the Greeks seemed to have a peculiar horror; in the words of the hymn, ‘change and decay in all around I see’.

Reality surely cannot be like that: Truth must be absolute, immutable (it is important to see the part played in this by desire and disgust: the true state of the world surely could not be this degrading chaos and disorder where nothing lasts). So Plato says this: Reality is not something we can apprehend by the senses, but only by the intellect. And what the intellect grasps is that beyond Appearance, transcending it, is a timeless and immutable world of Forms or Ideas. Our senses make us aware of many tables, cats, trees; but our intellect sees that these are but instances of a single Idea or Form, Table, Cat, Tree, which somehow imparts to them the quality that makes them what they are, imbues them with ‘tableness’, ‘catness’ and ‘treeness’.

This notion beguiled me when I first came across it, aged fourteen. It has taken me rather longer to appreciate the real nature of Plato’s ‘discovery’, which is perhaps more prosaic (literally) but no less potent. Briefly, I think that Plato has discovered the power of general terms, and he has glimpsed in them – as an epiphany, a sudden revelation – a whole new way of looking at the world; and it starts with being able to write a word on a page.

Writing makes possible the relocation of meaning: from being the property of a situation, something embedded in human activity (‘the meaning of a word is its use in the language’) meaning becomes the property of words, these new things that we can write down and look at. The icon of a cat or a tree resembles to some extent an actual cat or tree but the word ‘cat’ looks nothing like a cat, nor ‘tree’ like a tree; in order to understand it, you must learn what it means – an intellectual act. And what you learn is more than just the meaning of a particular word – it is the whole idea of how words work, that they stand for things and can, in many respects, be used in their stead, just as the beads on an abacus can be made to stand for various quantities. What you learn is a new way of seeing the world, one where its apparently chaotic mutability can be reduced to order.

Whole classes of things that seem immensely varied can now be subsumed under a single term: there is a multiplicity of trees and cats, but the one word ‘tree’ or ‘cat’ can be used to stand for all or any of them indifferently. Modelled on that, abstract ideas such as ‘Justice’, ‘Truth’ and ‘The Good’ can be seen as standing for some immutable, transcendent form that imbues all just acts with justice, and so on. Plato’s pupil Aristotle discarded the poetic clothing of his teacher’s thought, but developed the idea of generalisation to the full: it is to him that we owe the system of classification by genus and species and the invention of formal logic, which could be described as the system of general relations; and these are the very foundation of all our thinking.
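(A programmer’s aside, and purely an analogy of mine, not anything Plato or Aristotle wrote: classification by genus and species behaves very like a type hierarchy, and a general term behaves like a class that stands indifferently for any of its instances. A minimal sketch in Python, with invented names:)

```python
# An analogy only: genus and species as a type hierarchy, the general
# term standing indifferently for any particular instance.

class Animal:          # the genus
    pass

class Cat(Animal):     # a species of the genus
    pass

tabby = Cat()          # many particular cats...
tom = Cat()

# ...but the one general term covers them all indifferently,
# and species membership implies genus membership:
print(all(isinstance(c, Cat) for c in (tabby, tom)))     # True
print(all(isinstance(c, Animal) for c in (tabby, tom)))  # True
```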

In many respects, the foundations of the modern world are laid here, so naturally these developments are usually presented as one of mankind’s greatest advances. However, I would like to draw attention to some detrimental aspects. The first is that this new way of looking at the world, which apprehends it through the intellect, must be learned. Thus, at a stroke, we render natural man stupid (and ‘primitive’ man, to look ahead to those European colonisations, inferior, somewhat less than human). We also establish a self-perpetuating intellectual elite – those who have a vested interest in maintaining the power that arises from a command of the written word – and simultaneously exclude and devalue those who struggle to acquire that command.

The pernicious division into ‘Appearance’ and ‘Reality’ denigrates the senses and all natural instincts, subjugating them to the intellect, which it vaunts; and along with that goes the false dichotomy of Heart and Head, where the Head is seen as the Seat of Reason, calm, objective, detached, which should properly rule the emotional, subjective, passionate and too-easily-engaged Heart.

This, in effect, is the marginalising of the old way of doing things that served us well till about two and a half thousand years ago, which gave a central place to those forms of expression and understanding which we now divide and rule as the various arts, each in its own well-designed box: poetry, art, music, etc. (a matter discussed in fable form in Plucked from the Chorus Line)

So what am I advocating? That we undo all this? No, rather that we take a step to one side and view it from a slightly different angle. Plato could only express his new vision of things in the old way, so he presents it as an alternative world somewhere out there beyond the one we see, a world of Ideas or Forms, which he sees as the things words stand for, what they point to – and in so doing, makes the fatal step of discarding the world we live in for an intellectual construct; but the truth of the matter is that words do not point to anything beyond themselves; they are the Platonic Forms or Ideas: the Platonic Idea of ‘Horse’ is the word ‘Horse’. What Plato has invented is an Operating System; his mistake is in thinking he has discovered the hidden nature of Reality.

What he glimpsed, and Aristotle developed, and we have been using ever since, is a way of thinking about the world that is useful for certain purposes, but one that has its limitations. We need to take it down a peg or two, and put it alongside those other, older operating systems that we are all born with, which we developed over millions of years. After all, the rest of the world – animal and vegetable – seems to have the knack of living harmoniously; we are the ones who have lost it, and now threaten everyone’s existence, including our own; perhaps it is time to take a fresh look.

The Lords of Convention

‘The present king of France is bald’ seems to present a logical problem that ‘the cat is on the table’ does not – there is no present king of France, so how can we assert that he is bald? and is the sentence true or false?

But I am much more interested in the second sentence: ‘the cat is on the table’ – what does it mean?

(‘Cat on a Table’ by John Shelton, 1923-1993)

Can it mean, for instance, ‘it’s your cat, I hold you responsible for its behaviour’?

Consider:

Scene: a sunny flat. A man sprawls at ease on the sofa. To him, from the neighbouring room, a woman.

Woman: The cat is on the table.

(Man rolls his eyes, sighs, gets up reluctantly)

Should you want to grasp the difference between the philosophy of the early Wittgenstein, as expressed in the Tractatus Logico-Philosophicus, and his later philosophy, as expressed in Philosophical Investigations (and I accept that not everyone does) then this example epitomises it. It also pins down – or at least, develops further – thoughts I have been having lately about meaning, objectivity and the impact of the invention of writing on thought.

The form of the question in the second paragraph above is curious: ‘what does it mean?’ – where ‘it’ refers to the sentence. The clear implication is that meaning is a property of the sentence, of words – an assertion that may not strike us as strange, till we set it alongside another that we might ask – ‘what do you mean?’

I would suggest that the first question only becomes possible once language has a written form: before that, no-one would think to ask it, because there would be no situation in which you could come across words that were not being spoken by someone in a particular situation – such as the scene imagined above. Suppose we alter it slightly:

Woman: The cat is on the table.
Man: What do you mean?
Woman: What do you mean, what do I mean? I mean the cat is on the table.
Man: What I mean is, the cat is under the sideboard, eating a mouse – look!

The words spoken here all have their meaning within the situation, as it were (what Wittgenstein would call the Language Game or the Form of Life) and the question of their having their own, separate meaning simply does not arise; if we seek clarification, we ask the person who spoke – the meaning of the words is held to be something they intend (though it is open to interpretation, since a rich vein of language is saying one thing and meaning another, or meaning more than we say – just as in our little scene, the line about the cat is far less about description of an event, far more about an implied criticism of the owner through the behaviour of his pet – which in turn is probably just a token of some much deeper tension or quarrel between the two).

Only when you can have words written on a page, with no idea who wrote them or why, do we start to consider that the meaning might reside in the words themselves, that the sentence on the page might mean something of itself, without reference to anything (or anyone) else.

This relocation of meaning – from the situation where words are spoken, to the words themselves – is, at the very least, a necessary condition of Western philosophy, by which I mean the way of thinking about the world that effectively starts with Plato and stretches all the way to the early Wittgenstein, whose Tractatus can be viewed as a succinct summary of it, or all that matters in it; and perhaps it is more than a necessary condition – it may be the actual cause of Western philosophy.

The crucial shift, it seems to me, lies in the objectification of language, and so of meaning, which becomes a matter of how words relate to the world, with ourselves simply interested bystanders; and this objectification only becomes possible, as I have said, when speech is given an objective form, in writing.

If you were inclined to be censorious, you might view this as an abnegation of responsibility: we are the ones responsible for meaning, but we pass that off on language – ‘not us, guv, it’s them words wot done it.’ However, I would be more inclined to think of it as an instance of that most peculiar and versatile human invention, the convention. Indeed, a convention could be defined as an agreement to invest some external thing with power, or rather to treat it as if it had power – a power that properly belongs to (and remains with) us.

(The roots of convention are worth thinking about. I trace them back to childhood, and the game of ‘make-believe’ or ‘let’s pretend’ which demonstrates a natural facility for treating things as if they existed (imaginary friends) or as if they have clearly defined roles and rules they must follow (the characters in a game a child plays with dolls and other objects it invests with life and character). Is it any wonder that a natural facility we demonstrate early in childhood (cp. speech) should play an important part in adult life? In fact, should we not expect it to?)

It is convenient to act as if meaning is a property of words, and is more or less fixed (and indeed is something we can work to clarify and fix, by study). It facilitates rapid and efficient thought, because if words mean the things they denote, then we can, in a sense, manipulate the world by manipulating words; and this is especially so once we have mastered the knack of thinking in words, i.e. as a purely mental act, without having to write or read them in physical form.

We can perhaps appreciate the power of this more fully if we consider how thinking must have been done before – and though this is speculation, I think it is soundly based. I would argue that before the advent of writing no real analysis of speech was possible: we simply lacked any means of holding it still in order to look at it. An analytic approach to language sees it as something built up from various components – words of different sorts – which can be combined in a variety of ways to express meaning. It also sees it as something capable of carrying the whole burden of expression, though this is a species of circular argument – once meaning is defined as a property of words, then whatever has meaning must be capable of being expressed in words, and whatever cannot be expressed in words must be meaningless.

Without the analytic approach that comes with writing, expression is something that a person does, by a variety of means – speech, certainly, but also gesture, facial expression, bodily movement, song, music, painting, sculpture. And what do they express? in a word, experience – that is to say, the fact of being in the world; expression, in all its forms, is a response to Life (which would serve, I think, as a definition of Art).

Such expression is necessarily subjective, and apart from the cases where it involves making a physical object – a sculpture or painting, say – it is inseparable from the person and the situation that gives rise to it. Viewed from another angle, it has a directness about it: what I express is the result of direct contact with the world, through the senses – nothing mediates it (and consider here that Plato’s first step is to devalue and dismiss the senses, which he says give us only deceptive Appearance; to perceive true Reality, we must turn to the intellect).

Compare that with what becomes possible once we start thinking in words: a word is a marvel of generalisation – it can refer to something, yet has no need of any particular detail – not colour, size, shape or form: ‘cat’ and ‘tree’ can stand indifferently for any cat, any tree, and can be used in thought to represent them, without resembling them in any respect.

‘A cat sat on a table under a tree’

might be given as a brief to an art class to interpret, and might result in twenty different pictures; yet the sentence would serve as a description of any of them – it seems to capture, in a way, some common form that all the paintings share – a kind of underlying reality of which each of them is an expression; and that is not very far off what Plato means when he speaks of his ‘Forms’ or ‘Ideas’ (or Wittgenstein, when he says ‘a logical picture of facts is a thought’ (T L-P 3) ).

While this way of thinking – I mean using words as mental tokens, language as an instrument of thought – undoubtedly has its advantages (it is arguably the foundation on which the modern world is built), it has been purchased at a price: the distancing and disengagement from reality, which is mediated through language, and the exclusion of all other forms of expression as modes of thought (effectively, the redefinition of thought as ‘what we do with language in our heads’); the promotion of ‘head’ over ‘heart’ by the suppression of the subject and the denigration of subjectivity (which reflects our actual experience of the world) in favour of objectivity, which is a mere convention, an adult game of make-believe –

all this points to the intriguing possibility, as our dissatisfaction grows with the way of life we have thus devised, that we might do it differently if we chose, and abandon the tired old game for a new one.

Where to Find Talking Bears, or The Needless Suspension of Disbelief


Something I have been struggling to pin down is a clear expression of my thoughts on the oft-quoted dictum of Coleridge, shown in its original context here:

‘it was agreed, that my endeavours should be directed to persons and characters supernatural, or at least romantic, yet so as to transfer from our inward nature a human interest and a semblance of truth sufficient to procure for these shadows of imagination that willing suspension of disbelief for the moment, which constitutes poetic faith.’

This strikes me as a curious instance of something that has become a commonplace – you can almost guarantee to come across it in critical discussion of certain things, chiefly film and theatre – despite the fact that it completely fails to stand up to any rigorous scrutiny. It is, in a word, nonsense.

But there is another strand here, which may be part of my difficulty. This dictum, and its popularity, strike me as a further instance of something I have grown increasingly aware of in my recent thinking, namely the subjugation of Art to Reason. By this I mean the insistence that Art is not only capable of, but requires rational explanation – that its meaning can and should be clarified by writing and talking about it in a certain way (and note the crucial assumption that involves, namely that art has meaning).

This seems to me much like insisting that everyone say what they have to say in English, rather than accepting that there are languages other than our own which are different but equally good.

But back to Coleridge. If the ‘willing suspension of disbelief for the moment’ is what ‘constitutes poetic faith,’ then all I can say is that it must be an odd sort of faith that consists not in believing something – or indeed anything – but rather in putting aside one’s incredulity on a temporary basis: ‘when I say I believe in poetry, what I mean is that I actually find it incredible, but I am willing to pretend I don’t in order to read it.’

That is the pernicious link – that this suspension of disbelief is a necessary prerequisite of engaging with poetry, fiction or indeed Art as a whole; we see it repeated (as gospel) in these quotations, culled at random from the internet:

‘Any creative endeavor, certainly any written creative endeavor, is only successful to the extent that the audience offers this willing suspension as they read, listen, or watch. It’s part of an unspoken contract: The writer provides the reader/viewer/player with a good story, and in return, they accept the reality of the story as presented, and accept that characters in the fictional universe act on their own accord.’

(‘Any creative endeavour’? ‘is only successful’? Come on!)

‘In the world of fiction you are often required to believe a premise which you would never accept in the real world. Especially in genres such as fantasy and science fiction, things happen in the story which you would not believe if they were presented in a newspaper as fact. Even in more real-world genres such as action movies, the action routinely goes beyond the boundaries of what you think could really happen.

In order to enjoy such stories, the audience engages in a phenomenon known as “suspension of disbelief”. This is a semi-conscious decision in which you put aside your disbelief and accept the premise as being real for the duration of the story.’

(‘required to believe’? ‘in order to enjoy’? Really?)

The implication is that we spend our waking lives in some sort of active scepticism, measuring everything we encounter against certain criteria before giving it our consideration; and when we come on any work of art – or at least one that deals with ‘persons and characters supernatural, or at least romantic’ – we immediately find it wanting, measured against reality, and so must give ourselves a temporary special dispensation to look at it at all.

This is rather as if, on entering a theatre, we said to ourselves ‘these fellows are trying to convince me that I’m in Denmark, but actually it’s just a stage set and they are actors in costumes pretending to be other people – Hamlet, Claudius, Horatio, Gertrude; of course it doesn’t help that instead of Danish they speak a strange sort of English that is quite unlike the way people really talk.’

The roots of this confusion go back what seems a long way, to classical Greece (about twenty-five centuries) though in saying that we should remember that artistic expression is a great deal older (four hundred centuries at least; probably much, much more). I have quoted the contest between Zeuxis and Parrhasius before:

…when they had produced their respective pieces, the birds came to pick with the greatest avidity the grapes which Zeuxis had painted. Immediately Parrhasius exhibited his piece, and Zeuxis said, ‘Remove your curtain that we may see the painting.’ The painting was the curtain, and Zeuxis acknowledged himself conquered, by exclaiming ‘Zeuxis has deceived birds, but Parrhasius has deceived Zeuxis himself.’

– Lempriere’s Classical Dictionary

This is the epitome of the pernicious notion that art is a lie, at its most successful where it is most deceptive: thus Plato banishes it from his ideal state, because in his world it is at two removes from Reality. Plato’s Reality (which he also identifies with Truth) is the World of Forms or Ideas, apprehended by the intellect; the world apprehended by the senses is Appearance, and consists of inferior copies of Ideas; so that Art, which imitates Appearance, is but a copy of a copy, and so doubly inferior and untrustworthy.

Aristotle takes a different line on Appearance and Reality (he is willing to accept the world of the senses as Reality) but continues the same error with his theory of Mimesis, that all art is imitation – which, to use Aristotle’s own terminology, is to mistake the accident for the substance, the contingent for the necessary.

To be sure, some art does offer a representation of reality, and often with great technical skill; and indeed there are works in the tradition of Parrhasius that are expressly intended to deceive – trompe l’oeil paintings, which in the modern era can achieve astonishing effects. But far from being the pinnacle of art (though they are demonstrations of great technical skill) these are a specialist subset of it, and in truth a rather minor one, a sort of visual joke.

Insofar as any work of art resembles reality there will always be the temptation to measure it against reality and judge it accordingly, and this is particularly so of the visual arts, especially cinema, though people will apply the same criterion to fiction and poetry.

They are unlikely to do so in the case of music, however, and this exception is instructive. Even where music sets out to be specifically representative (technically what is termed ‘program(me) music’, I believe) and depict some scene or action – for instance Britten’s ‘Sea Interludes’ – it still does not look like the thing it depicts (for the simple reason that it has no visual element). Music is so far removed in character from what it depicts that we do not know where to start in making a comparison – we see at once that it is a different language, if you like.

The Sea Interludes are extraordinarily evocative, yet we would not call them ‘realistic’, something we might be tempted to say of a photo-realistic depiction of a seascape compared to one by Turner, say:

(Tom Nielsen – ‘First light surf’)

(JMW Turner, ‘Seascape with storm coming on’, 1840)

Of all the different forms of Art, it is cinema that has gone furthest down this erroneous path – with the rise of CGI, almost anything can be ‘realised’ in the sense of presenting it in fully rounded, fully detailed form, and the revival of 3D imagery in its latest version and various other tricks are all geared to the same end of making it seem as if you were actually there in the action, as if that were the ultimate goal.

Yet even with the addition of scent and taste – the only senses yet to be catered for in film – the illusion is only temporary and never complete: we are always aware at some level that it is an illusion, and indeed the more it strives to be a perfect illusion the more aware we are of its illusory nature (we catch ourselves thinking ‘these special effects are amazing!’).

On the other hand, a black and white film from decades ago can so enrapture us that we are completely engaged with it to the exclusion of all else – we grip the arms of our seat and bite our lip when the hero is in peril, we shed tears at the denouement, we feel hugely uplifted at the joyous conclusion – but none of this is because we mistake what we are seeing for reality; it has to do with the engagement of our feelings.

In marked contrast to the cinema, the theatre now rarely aims at a realistic presentation; on the contrary, the wit with which a minimum of props can be used for a variety of purposes (as the excellent Blue Raincoat production of The Poor Mouth did with four chairs and some pig masks) can be part of the pleasure we experience, just as the different voices and facial expressions used by a storyteller can. It is not the main pleasure, of course, but it helps clarify the nature of the error that Coleridge makes.

How a story is told – the technique with which it is presented, whether it be on stage, screen or page – is a separate thing from the story itself. Take, for instance, these two fine books by Jackie Morris:


‘East of the Sun, West of the Moon’ and ‘The Wild Swans’ are traditional tales; in retelling them, Jackie Morris puts her own stamp on them, not only with her own words and beautiful illustrations, but also with some changes of detail and action (for more about the writing of East of the Sun, see here).

The nature of these changes is interesting. It is like retuning a musical instrument: certain notes that jarred before now ring true; the tales are refreshed – their spirit is not altered but enhanced.

This ‘ringing true’ is an important concept in storytelling and in Art generally (I have discussed it before, in this fable). On the face of it, both these tales are prime candidates for Coleridge’s pusillanimous ‘suspension of disbelief’: in one, a talking bear makes a pact with a girl which she violates, thus failing to free him from the enchantment laid on him (he is actually a handsome prince); in consequence, the girl must find her way to the castle East of the Sun, West of the Moon, an enterprise in which she is aided by several wise women and the four winds; there she must outwit a troll-maiden. In the other, a sister finds her eleven brothers enchanted into swans by the malice of their stepmother, and can only free them by taking a vow of silence and knitting each of them shirts of stinging nettles.

After all, it will be said, you don’t meet with talking bears, any more than you do with boys enchanted into swans, in the Real World, do you?

Hm. I have to say that I view the expression ‘Real World’ and those who use it with deep suspicion: it is invariably employed to exclude from consideration something which the speaker does not like and fears to confront. As might be shown in a Venn diagram, what people mean by the ‘Real World’ is actually a subset of the World, one that is expressly defined to rule out the possibility of whatever its proponents wish to exclude:

(Venn diagram: the ‘Real World’ drawn as a subset of the World)

In other words, all they are saying is ‘you will not find talking bears or enchanted swans if you look in a place where you don’t find such things.’
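(The circularity can even be exhibited mechanically. A minimal sketch in Python – the sets and their members are invented examples echoing the paragraph above: define the ‘Real World’ by subtracting the contested things from the World, and the sceptic’s verdict follows by construction, not by discovery.)

```python
# The 'Real World' move as set subtraction; the elements are invented
# examples chosen to echo the surrounding text.

world = {"Belisha beacons", "horse-blankets", "the Retail Price Index",
         "stories", "talking bears (in stories)"}

# The 'Real World' is defined by excluding exactly what is in dispute...
real_world = world - {"stories", "talking bears (in stories)"}

# ...so the verdict is guaranteed by the definition:
print("talking bears (in stories)" in real_world)  # False, by construction
print("talking bears (in stories)" in world)       # True
```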

Cue howls of protest: ‘you don’t meet talking bears walking down the street, do you?’ Well, it depends where you look: if you look at the start of East of the Sun, you will meet a talking bear walking through the streets of a city. Further howls: ‘But that’s just a story!’

(Some people met this bear on the London Underground, but I don’t think it spoke.)

Well, no – it isn’t just a story; it’s a story – and stories and what is in them are as much part of the world as Belisha beacons, horse-blankets and the Retail Price Index. The World, after all, must include the totality of human experience. The fact that we do not meet with talking bears in the greengrocer’s (and has anyone ever said we might?) does not preclude the possibility of meeting them in stories, which is just where you’d expect to find them (for a similar point, see Paxman and the Angels).

The Muybridge Moment


The memorable Eadweard Muybridge invented a number of things, including his own name – he was born Edward Muggeridge in London in 1830. He literally got away with murder in 1874, when he travelled some seventy-five miles to shoot dead his wife’s lover (prefacing the act with ‘here’s the answer to the letter you sent my wife’) but was acquitted by the jury (against the judge’s direction) on the grounds of ‘justifiable homicide’. He is best known for the sequence of pictures of a galloping horse shot in 1878 at the behest of Leland Stanford, former Governor of California, to resolve the question of whether the horse ever has all four feet off the ground (it does, though not at the point people imagined). To capture the sequence, Muybridge used multiple cameras and devised a means of showing the results which he called a zoopraxiscope, thereby inventing stop-motion photography and the cinema projector, laying the foundations of the motion-picture industry.

(‘The Horse in Motion’ by Eadweard Muybridge; animation by Nevit Dilmen. Library of Congress Prints and Photographs Division, http://hdl.loc.gov/loc.pnp/cph.3a45870. Public domain, via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:The_Horse_in_Motion-anim.gif)

Muybridge’s achievement was to take a movement that was too fast for the human eye to comprehend and freeze it so that each phase of motion could be analysed. It was something that he set out to do – as deliberately and methodically as he set out to shoot Major Harry Larkyns, his wife’s lover.

It is interesting to consider that something similar to Muybridge’s achievement happened a few thousand years ago, entirely by accident and over a longer span of time, but with consequences so far-reaching that they could be said to have shaped the modern world.

We do not know what prompted the invention of writing between five and six thousand years ago, but it was not a desire to transcribe speech and give it a permanent form; most likely it began, alongside numbering, as a means of listing things, such as the contents of storehouses – making records for tax purposes, perhaps, or of the ruler’s wealth – and from there it might have developed as a means of recording notable achievements in battle and setting down laws.

We can be confident that transcribing speech was not the primary aim because that is not something anyone would have felt the need to do. For us, that may take some effort of the imagination to realise, not least because we live in an age obsessed with making permanent records of almost anything and everything, perhaps because it is so easy to do so – it is a commonplace to observe that people now seem to go on holiday not to enjoy seeing new places at first hand, but in order to make a record of them that they can look at once they return home.

And long before that, we had sayings like

vox audita perit, littera scripta manet
(the voice heard is lost; the written word remains)

to serve as propaganda for the written word and emphasise how vital it is to write things down. One of the tricks of propaganda is to take your major weakness and brazenly pass it off as a strength (‘we care what you think’ ‘your opinion matters to us’ ‘we’re listening!’ as banks and politicians say) and that is certainly the case with this particular Latin tag: it is simply not true that the spoken word is lost – people have been remembering speech from time immemorial (think of traditional stories and songs passed from one generation to the next); it is reasonable to suppose that retaining speech is as natural to us as speaking.

If anything, writing was devised to record what was not memorable. Its potential beyond that was only slowly realised: it took around a thousand years for anyone to use it for something we might call ‘literature’. It is not till the classical Greek period – a mere two and a half millennia ago (Homo sapiens is reckoned at 200,000 years old, the genus Homo at 2.8 million) – that the ‘Muybridge moment’ arrives, with the realisation that writing allows us to ‘freeze’ speech just as his pictures ‘froze’ movement, and so, crucially, to analyse it.

When you consider all that stems from this, a considerable degree of ‘unthinking’ is required to imagine how things must have been before writing came along. I think the most notable thing would have been that speech was not seen as a separate element but rather as part of a spectrum of expression, nigh-inseparable from gesture and facial expression.  A great many of the features of language which we think fundamental would have been unknown: spelling and punctuation – to which some people attach so much importance – belong exclusively to writing and would not have been thought of at all; even the idea of words as a basic unit of language, the building blocks of sentences, is a notion that only arises once you can ‘freeze’ the flow of speech like Muybridge’s galloping horse and study each phase of its movement; before then, the ‘building blocks’ would have been complete utterances, a string of sounds that belonged together, rather like a phrase in music, and these would invariably have been integrated, not only with gestures and facial expressions, but some wider activity of which they formed part (and possibly not the most significant part).

As for grammar, the rules by which language operates and to which huge importance is attached by some, it is likely that no-one had the least idea of it; after all, speech is even now something we learn (and teach) by instinct, though that process is heavily influenced and overlaid by all the ideas that stem from the invention of writing; but then we have only been able to analyse language in that way for a couple of thousand years; we have been expressing ourselves in a range of ways, including speech, since the dawn of humanity.

When I learned grammar in primary school – some fifty years ago – we did it by parsing and analysis. Parsing was taking a sentence and identifying the ‘parts of speech’ of which it was composed – not just words, but types or categories of word, defined by their function: Noun, Verb, Adjective, Adverb, Pronoun, Preposition, Conjunction, Article.

Analysis established the grammatical relations within the sentence, in terms of the Subject and Predicate. The Subject, confusingly, was not what the sentence was about – which puzzled me at first – but rather ‘the person or thing that performs the action described by the verb’ (though we used the rough-and-ready method of asking ‘who or what before the verb?’). The Predicate was the remainder of the sentence, what was said about (‘predicated of’) the Subject, and could generally be divided into Verb and Object (‘who or what after the verb’ was the rough-and-ready method for finding that).
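(A light-hearted aside for anyone who likes to see a rule made mechanical: the schoolroom rough-and-ready method described above can be written out in a few lines of Python. The sentence and its verb are my own toy examples – real grammar is, of course, nothing like this simple.)

```python
# The schoolroom rule as code: 'who or what before the verb' is the
# Subject; the rest is the Predicate, split into Verb and Object
# ('who or what after the verb'). Toy example only.

def rough_and_ready(sentence: str, verb: str):
    words = sentence.rstrip(".").split()
    i = words.index(verb)                 # locate the verb
    subject = " ".join(words[:i])         # who or what before the verb
    obj = " ".join(words[i + 1:])         # who or what after the verb
    return {"Subject": subject, "Predicate": {"Verb": verb, "Object": obj}}

print(rough_and_ready("The cat chased the mouse.", "chased"))
# {'Subject': 'The cat', 'Predicate': {'Verb': 'chased', 'Object': 'the mouse'}}
```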

It was not till I went to university that I realised that these terms – in particular, Subject and Predicate – derived from mediaeval Logic, which in turn traced its origin back to Aristotle (whom Dante called maestro di color che sanno – master of those that know) in the days of Classical Greece.

(portraits: Socrates, Plato, Aristotle, Alexander the Great)

Aristotle is the third of the trio of great teachers who were pupils of their predecessors: he was a student of Plato, who was a student of Socrates. It is fitting that Aristotle’s most notable pupil was not a philosopher but a King: Alexander the Great, who had conquered much of the known world and created an empire that stretched from Macedonia to India by the time he was 30.

That transition (in a period of no more than 150 years) from Socrates to the conquest of the world neatly embodies the impact of Classical Greek thought, which I would argue stems from the ‘Muybridge Moment’ when people began to realise the full potential of the idea of writing down speech. Socrates, notably, wrote nothing: his method was to hang around the market place and engage in conversation with whoever would listen; we know him largely through the writings of Plato, who uses him as a mouthpiece for his own ideas. Aristotle wrote a great deal, and what he wrote conquered the subsequent world of thought to an extent and for a length of time that puts Alexander in eclipse.

In the Middle Ages – a millennium and a half after his death – he was known simply as ‘The Philosopher’ and quoting his opinion sufficed to close any argument. Although the Renaissance was to a large extent a rejection of Aristotelian teaching as it had developed (and ossified) in the teachings of the Schoolmen, the ideas of Aristotle remain profoundly influential, and not just in the way I was taught grammar as a boy – the whole notion of taxonomy, classification by similarity and difference, genus and species – we owe to Aristotle, to say nothing of Logic itself, from which not only my grammar lessons but rational thought were derived.

I would argue strongly that the foundations of modern thought – generalisation, taxonomy, logic, reason itself – are all products of that ‘Muybridge Moment’, made possible only by the ability that writing gives us to ‘freeze’ language and then analyse it.

It is only when you begin to think of language as composed of individual words (itself a process of abstraction) and how those words relate to the world and to each other, that these foundations are laid. Though Aristotle makes most use of it, the discovery of the power of generalisation should really be credited to his teacher, Plato: for what else are Plato’s Ideas or Forms but general ideas, and indeed (though Plato did not see this) those ideas as embodied in words? Thus, the Platonic idea or form of ‘table’ is the word ‘table’ – effectively indestructible and eternal, since it is immaterial, apprehended by the intellect rather than the senses, standing indifferently for any or all particular instances of a table – it fulfils all the criteria*.

Which brings us to Socrates: what was his contribution? He taught Plato, of course; but I think there is also a neat symbolism in his famous response to being told that the Oracle at Delphi had declared him ‘the wisest man in Greece’ – ‘my only wisdom is that while these others (the Sophists) claim to know something, I know that I know nothing.’ As the herald of Plato and Aristotle, Socrates establishes the baseline, clears the ground, as it were: at this point, no-one knows anything; but the construction of the great edifice of modern knowledge in which we still live today was just about to begin.

However, what interests me most of all is what constituted ‘thinking’ before the ‘Muybridge Moment’, before the advent of writing – not least because, whatever it was, we had been doing it for very much longer than the mere two and a half millennia that we have made use of generalisation, taxonomy, logic and reason as the basis of our thought.

How did we manage without them? and might we learn something useful from that?

I think so.

*seeing that ideas are actually words also solves the problem Hume had concerning general ideas: if ideas are derived from impressions, then is the general idea ‘triangle’ isosceles, equilateral, scalene, or some impossible combination of them all? – no, it is just the word ‘triangle’. Hume’s mistake was in supposing that an Idea was a ‘(faint) copy’ of an impression; actually, it stands for it, but does not resemble it.