‘Could a rule be given from without, poetry would cease to be poetry, and sink into a mechanical art. It would be μόρφωσις, not ποίησις. The rules of the IMAGINATION are themselves the very powers of growth and production. The words to which they are reducible, present only the outlines and external appearance of the fruit. A deceptive counterfeit of the superficial form and colours may be elaborated; but the marble peach feels cold and heavy, and children only put it to their mouths.’ [Coleridge, Biographia ch. 18]

‘ποίησις’ (poiēsis) means ‘a making, a creation, a production’ and is used of poetry in Aristotle and Plato. ‘μόρφωσις’ (morphōsis) in essence means the same thing: ‘a shaping, a bringing into shape.’ But Coleridge has in mind the New Testament use of the word as ‘semblance’ or ‘outward appearance’, which the KJV translates as ‘form’: ‘An instructor of the foolish, a teacher of babes, which hast the form [μόρφωσις] of knowledge and of the truth in the law’ [Romans 2:20]; ‘Having a form [μόρφωσις] of godliness, but denying the power thereof: from such turn away’ [2 Timothy 3:5]. I trust that's clear.

There is much more on Coleridge at my other, Coleridgean blog.

Sunday 19 July 2020

On Memory



Contents
Preface
Chapter 1. Towards Total Memory
Chapter 2. Memory and Fiction
Chapter 3. Memory and Religion
Chapter 4. Irrepressible Memory
Chapter 5. Inaccessible Memory
Afterword


from the Preface


‘The key thing is not memory as such. It is the anticipation of memory’ Pierre Delalande

Everyone knows there are two kinds of memory: short-term memory and long-term memory. These are distinct functions in terms of brain architecture (such that, for example, mental deterioration or injury can destroy one but not the other) although they are, obviously, linked, both doing similar things, instantiating the actions of particular networks of neuronal activity in the brain. Working memory, say the psychologists, serves as a mental processor, encoding and retrieving information [see, for instance, Alan Baddeley's Working Memory, Thought, and Action (Oxford University Press 2007)]. The two broader categories of short-term and long-term memory get differentiated and refined further when physiologists look at specific temporal ranges:
Atkinson and Shiffrin [“Human Memory: A Proposed System and Its Control Processes”, in Kenneth W. Spence & Janet Taylor Spence (eds.), Psychology of Learning and Motivation (New York: Academic Press 1968), 89–195] proposed a multi-store model in which kinds of memory are distinguished in terms of their temporal duration. Ultra short term memory refers to the persistence of modality-specific sensory information for periods of less than one second. Short term memory refers to the persistence of information for up to thirty seconds; short term memory, which receives information from ultra short term memory, is to some extent under conscious control but is characterized by a limited capacity. Long term memory refers to the storage of information over indefinitely long periods of time; long term memory receives information from short term memory and is characterized by an effectively unlimited capacity. Though this taxonomy does not distinguish among importantly different kinds of long term memory—in particular, it does not distinguish between episodic and semantic memory—it has been applied productively in psychological research. [Kourken Michaelian and John Sutton, ‘Memory’, in Edward N. Zalta (ed) Stanford Encyclopedia of Philosophy (Summer 2017 Edition)]
Memory science also identifies a third kind of memory: sensory memory, the very brief persistence of raw sense-impressions before attention selects among them. (The muscle-memory you deploy when you drive your car or play the piano is something else again, usually filed under procedural memory.) They tie sensory memory into the other two kinds with various diagrams and charts.
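Purely by way of illustration (this is my own toy caricature, not anything drawn from Baddeley or from Atkinson and Shiffrin), the multi-store model can be sketched as a little data structure, with decay and capacity limits standing in for the temporal ranges described above:

```python
# A caricature of the multi-store model: stimuli enter a fast-decaying
# sensory register, attention promotes them to a capacity-limited short-term
# store, and rehearsal consolidates them into an unbounded long-term store.
# All capacities here are illustrative, not empirical values.
from collections import deque

class MultiStoreMemory:
    def __init__(self):
        self.sensory = deque(maxlen=50)    # persists for under a second
        self.short_term = deque(maxlen=7)  # limited capacity, ~30 seconds
        self.long_term = {}                # effectively unlimited

    def perceive(self, stimulus):
        self.sensory.append(stimulus)      # everything lands here first

    def attend(self, stimulus):
        # Conscious attention moves an item onward; the deque's maxlen
        # silently evicts the oldest short-term item when the store is full.
        if stimulus in self.sensory:
            self.short_term.append(stimulus)

    def rehearse(self, stimulus):
        # Rehearsal consolidates a short-term item into long-term storage.
        if stimulus in self.short_term:
            self.long_term[stimulus] = True
```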


So there they are: your three basic kinds of memory and their interrelations.

I'm not interested in them. My interest is in other modes of memory, modes that are (I would argue) just as valid as, indeed more important than, these more conventional forms. So far as I can see these other modes are either under-discussed or not discussed at all, not even recognised as modes of memory. Nonetheless I am going to try and argue that the three conventional modes hitherto mentioned are actually the least interesting kinds of memory.
......



from Chapter 1: Towards Total Memory

Memory costs. That is to say, in a biological sense, large brains are expensive organs to run. In order for evolution to select for them there must be an equivalent or more valuable pay-off associated with the cost. In the case of Homo sapiens that pay-off is our immensely supple, adaptable and powerful minds; something that could not be run on anything cheaper, biologically speaking, than the organ we actually possess. This is because consciousness and self-consciousness depend to a large extent upon memory; or perhaps it would be more accurate to say, consciousness and self-consciousness rely upon a sense of continuity through time, which is to say, upon memory. Memory is what we humans have instead of an actual panoptic view of the fourth dimension. We know all about its intermittencies and unreliabilities of course—indeed, the larger discourse of memory, from Freud and Proust on to modern science, has delved deeply into precisely those two qualities. My focus here happens to be on neither of them, but I don’t disagree: memory is often intermittent and unreliable. It’s also the best we’ve got.

When evolutionary scientists talk about the ‘cost’ of something, they have a particular sense of the word in mind. James G. Burns, Julien Foucaud and Frederic Mery have interesting things to say about the costs associated with memory (and learning) specifically: ‘costs of learning and memory are usually classified as constitutive or induced,’ they say. The difference here is that ‘constitutive (or global) costs of learning are paid by individuals with genetically high-learning ability, whether or not they actually exercise this ability’:
As natural populations face a harsh existence, this extra energy expenditure should be reflected in reduction of survival or fecundity: energy and proteins invested in the brain cannot be invested into eggs, somatic growth or the immune system. Hence, learning ability is expected to show evolutionary trade-offs with some other fitness-related traits. [James G. Burns, Julien Foucaud and Frederic Mery, ‘Costs of Memory: Lessons from Mini-Brains’, Proceedings of the Royal Society 278 (2011), 925]
‘Induced costs’ touch on the idea that ‘the process of learning itself may also impose additional costs reflecting the time, energy and other resources used.’
This hypothesis predicts that an individual who is exercising its learning ability should show a reduction in some fitness component(s), relative to an individual of the same genotype who does not have to learn. … Questions regarding the induced costs of learning and memory are not only restricted to the cost of ‘how much’ information is processed, but also to ‘how’ they are processed.
Intriguingly, recent research (‘from both vertebrate and invertebrate behavioural pharmacology’) challenges ‘the traditional view of memory formation as a direct flow from short-term to long-term storage.’ Instead, ‘different components of memory emerge at different times after the event to be memorized has taken place.’

Memory, in other words, has always been part of an unforgiving zero-sum game of energy expenditure. It is possible to hypothesise that the general reduction in memory ability as people get older (when we all tend to become more forgetful and less focussed) reflects a specific focalisation of energy expenditure at the time of greatest reproductive fitness. We can all think—though I’m dipping now into pop evopsych, an ever-dubious realm—of how unattractive women find forgetful men: how much trouble a husband gets into, for example, if he forgets a wedding anniversary or a birthday.

But this is one of, I think, only a very few instances where human technological advance directly interferes with the much longer term evolutionary narratives. For the first time in the history of life we have access to a form of memory that doesn’t cost—or more precisely, that costs less and less with each year that passes whilst simultaneously becoming more and more capacious and efficient. Indeed: not only do we have access to this memory, we are all of us working tirelessly to find more intimate ways of integrating this memory into our daily lives. I’m talking of course about digital memory. Right now, in the top pocket of my shirt I am carrying a palm-sized device that grants me instant access to the totality of human knowledge, as archived online. Everything that humanity has achieved, learned and thought can be ‘remembered’ by me at the touch of my fingers on the glass screen. Everybody I know carries something similar. It is no longer even a remarkable thing.

It may be that Moore’s Law is the single most significant alteration to the environment within which human evolutionary pressures operate. As that Law rolls inexorably along, we come closer to that moment when cost itself will no longer present an obstacle to total memory. By ‘total’ I mean: the circumstance where everything that we have done, experienced, said or thought is archived digitally and virtually, and can be accessed at any time. Digital memory is exterior to the brain (at least it is so at the moment); but like an additional hard-drive being cable-plugged into your laptop, it augments and enhances brain-memory and brain-function. Which London taxi driver need learn the ‘knowledge’ when sat-nav systems are so cheap? Or to put it another way: the existence of a cheap sat-nav instantly transforms me, Joe-90-like, into a sort of super black-cab-driver, with instant access not only to every quickest route through the London streets, but the whole country and indeed the whole world. This is one small example of a very large phenomenon.

What I'm talking about here is the ‘Extended Mind Thesis’ (EMT), that argues the human mind need not be defined as exclusively the stuff, or process, or whatever that is generated inside the bones of the human skull. Here is David Chalmers:
A month ago I bought an iPhone. The iPhone has already taken over some of the central functions of my brain . . . The iPhone is part of my mind already . . . [in such] cases the world is not serving as a mere instrument for the mind. Rather, the relevant parts of the world have become parts of my mind. My iPhone is not my tool, or at least it is not wholly my tool. Parts of it have become parts of me . . . When parts of the environment are coupled to the brain in the right way, they become parts of the mind. [Chalmers is here quoted from the foreword he wrote to a book-length elaboration of this idea: Andy Clark’s Supersizing the Mind: Embodiment, Action and Cognitive Extension (OUP 2008)]
I find this idea pretty persuasive, I must say; but I am not a philosopher of mind. Not all philosophers of mind like this thesis. Jerry Fodor, for instance, attempted several times to dismantle Clark’s argument. In a review-essay published in the London Review of Books Fodor takes a heuristic trot through one of Clark’s thought-experiments. Imagine two people, Otto and Inga ‘both of whom want to go to the museum. Inga remembers where it is and goes there; Otto has a notebook in which he has recorded the museum’s address. He consults the notebook, finds the address and then goes on his way. The suggestion is that there is no principled difference between the two cases: Otto’s notebook is (or may come with practice to serve as) an “external memory”, literally a “part of his mind” that resides outside his body.’ Fodor asks himself: ‘so could it be literally true that Chalmers’s iPhone and Otto’s notebook are parts of their respective minds?’ He answers, no. I don’t take the force of his objections. So for instance:
[Clark’s] argument is that, barring a principled reason for distinguishing between what Otto keeps in his notebook and what Inga keeps in her head, there’s a slippery slope from one to another ... That being so, it is mere prejudice to deny that Otto’s notebook is part of his mind if one grants that Inga’s memories are part of hers. … But it does bear emphasis that slippery-slope arguments are notoriously invalid. There is, for example, a slippery slope from being poor to being rich; it doesn’t follow that whoever is the one is therefore the other, or that to insist on the distinction is mere prejudice. Similarly, there is a slippery slope between being just a foetus and being a person; it doesn’t follow that foetuses are persons, or that to abort a foetus is to commit a homicide. [Jerry Fodor, ‘Where is my mind?’ LRB 31:3 (2009)]
But this really is to miss the point. The analogy (since Fodor forces it) is not that Clark is arguing the brain is ‘rich’ and the notebook ‘poor’ as if these were precisely the same kind of thing differing only in degree; but rather that they both have something in common—as ‘rich’ and ‘poor’ have money in common—the difference being only that one, the brain, has lots of this (call it ‘mind’) and the other, the notebook, has very little. That seems fair enough to me. Fodor goes on to deliver what he takes to be a knockout blow:
The mark of the mental is its intensionality (with an ‘s’); that’s to say that mental states have content; they are typically about things. And … only what is mental has content.
But lots of the data on my computer is ‘about’ things. Arguably, even the arrangement of petals on a flower is ‘about’ something (it’s about how lovely the nectar is inside; it’s about attracting insects). Fodor is surprised Clark doesn’t deal with intensionality, but I’m going to suggest it’s a red herring and move on.
Surely it’s not that Inga remembers that she remembers the address of the museum and, having consulted her memory of her memory then consults the memory she remembers having, and thus ends up at the museum. The worry isn’t that that story is on the complicated side; it’s that it threatens regress. It’s untendentious that Otto’s consulting ‘outside’ memories presupposes his having inside memories. But, on pain of regress, Inga’s consulting inside memories about where the museum is can’t require her first to consult other inside memories about whether she remembers where the museum is. That story won’t fly; it can’t even get off the ground.
Fodor, on the evidence of this, has never heard of the concept of a mnemonic. Or is he claiming that the mnemonics I have in my mind are, somehow, not in my mind ‘on pain of infinite regress’?

I’ll stop. This may be one of those issues where reasoned argument is unlikely to persuade the sceptical; and if reasoned argument can’t then snark certainly won’t. The most I can do here, then, is suggest that the principle be taken, at the least, under advisement; or the remainder of my thesis here will fall by the wayside. It seems to me that the following extrapolations of contemporary technological development are, topologically (as it were) equivalent: (a) a person who stores gigabytes of personal information (including photos, messages and other memorious material) in their computer or iPhone; (b) the person who uses advances in biological and genetic technology to augment the physiological structures of their brain tissue to enable them to ‘store’ and flawlessly access gigabytes of memorious data; (c) the future cyborg who integrates digital memory and biological memory with technological implants; (d) the individual whose memories are entirely ‘in the cloud’, or whatever futuristic equivalent thereof is developed.

And actually this (it seems to me) is not the crux of the matter. The extraordinary increase in capacity for raw data storage is certainly remarkable; but as mere data this would be inert, an impossibly huge haystack the sifting of which would take impossible lengths of time. The real revolution is not the sheer capacity of digital memory, but the amazingly rapid and precise search engines which have been developed to retrieve data from it.
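To see why retrieval rather than raw capacity is the revolution, consider a toy inverted index, the basic data structure that underlies text search (this sketch is mine, purely illustrative; real search engines elaborate the same idea at planetary scale, with ranking, caching and much else besides):

```python
# A minimal inverted index: mapping each word to the documents containing it
# turns an inert haystack into something searchable in effectively no time.
from collections import defaultdict

def build_index(documents):
    """documents: {doc_id: text}. Returns {word: set of doc_ids}."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index[words[0]])
    for word in words[1:]:
        results &= index[word]
    return results

archive = {1: "the marble peach feels cold", 2: "a memory of marble and museums"}
idx = build_index(archive)
print(search(idx, "marble memory"))  # {2}
```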
.....


from Chapter 2: Memory and Fiction

That the ‘novel’ is a mode of memory is not an idea original to me. Dickens's fiction, in a sense, ‘remembers’ Victorian London for us, as Scott's fiction ‘remembers’ 18th-century Scotland. This is to say more than just that (although it is to say that) our collective or historical memory is mediated through these things—more, at any rate, through fiction (Shakespeare's plays, Jane Austen's novels, Homer's poetry) than through annalistic pilings-up of blank historical data. Our own individual memories, those products (long-term and short-term) of brain function, narrativise the past much more than they isolate or flashbulb past-moments. Fiction is always memorious.

This memoriousness is complicated but not falsified by the fact that fiction is not, well, true. There never was a boy called Oliver Twist, and though there was a figure called Rob Roy he wasn't at all like Scott's version of him. The veracity of art does not run exactly in harmony with the veracity of history, but neither is it completely orthogonal to it. But that doesn't matter. Our own individual memories are immensely plastic and dubious, fictions based on fact. Our collective memories likewise.

What's more striking, I think, is what happens to this idea in an age (like ours) when science fiction increasingly becomes the cultural dominant. After all, unlike Homer, Shakespeare or Dickens, SF is in the business of future-ing its stories, no? As early as 1910, G K Chesterton pondered the paradoxes of predicating ‘memoir’ on futurity:
The modern man no longer presents the memoirs of his great grandfather; but is engaged in writing a detailed and authoritative biography of his great-grandson. Instead of trembling before the spectres of the dead, we shudder abjectly under the shadow of the babe unborn. This spirit is apparent everywhere, even to the creation of a form of futurist romance. Sir Walter Scott stands at the dawn of the nineteenth century for the novel of the past; Mr. H. G. Wells stands at the dawn of the twentieth century for the novel of the future. The old story, we know, was supposed to begin: “Late on a winter’s evening two horsemen might have been seen—.” The new story has to begin: “Late on a winter’s evening two aviators will be seen—.” The movement is not without its elements of charm; there is something spirited, if eccentric, in the sight of so many people fighting over again the fights that have not yet happened; of people still glowing with the memory of tomorrow morning. A man in advance of the age is a familiar phrase enough. An age in advance of the age is really rather odd. [Chesterton, What’s Wrong With the World (1910), 24-25]
That few science fiction novels are actually written in the future tense doesn’t invalidate Chesterton’s observation. A novel notionally set in 2900, narrated by an omniscient narrator in the past tense, interpellates us hypothetically into some post-2900 world. Science fiction adds a bracingly vertiginous sense to memory. In Frank Herbert’s Dune Messiah (1969), Paul Atreides—the prophet/messiah leader of the inhabitants of a desert planet—is blinded. According to the rather severe code of his tribe he must be sent into the wilderness to die, but he avoids this fate in part by demonstrating that he can still see, after a fashion. His prophetic visions of the future are so precise, and so visual, that it is possible for him to remember past visions he previously had of the present moment, and use them, though he is presently eyeless, to navigate and interact with his world as if he were sighted. The way memory operates here, as a paradoxical present memory of the past’s future, is the perfect emblem of science fiction’s tricksy dramatization of memory. There are science fiction tales of artificial memory, enhanced memory, memory that works forwards rather than backwards; of robot memory and cosmic memory. And given the genre’s predilection for fantasies of total power, it does not surprise us that there are many SF fables of total memory.

That said, it is a story not often bracketed with ‘Pulp SF’—Borges’ ‘Funes the Memorious’—that is typically deployed when notions of ‘total’ memory are discussed. And he stands as a useful conceptual diagnostic to the thesis I’m sketching here. It’s a trivial exercise translating Borges’ hauntingly oblique narrative into the language of Hard SF. What might the world look like in the case where digital memory is so capacious, and so well integrated into our daily lives, as to give us functionally total memories? This, to be clear, is not to posit a world in which we carry around in our minds the total memory of everything—that would indeed be a cripplingly debilitating state of mind. But our present-day incomplete memories don’t work that way either. We remember selectively. Indeed, the circumstance (let’s say for example: the post-traumatic circumstance) in which we are unable to deselect certain memories is a grievous one, such that people who suffer from it are advised to seek professional psychiatric help. So, given that we use our memories selectively, and are comfortable remembering only what we need when we need it, the future I’m anticipating would only be a sort of augmentation of the present state of affairs.

You would go through your life with your entire previous existence accessible to you at will. Would this be a good thing? Or do you tend to the view, fired perhaps by the Funes-like consensus that total memory would be in some sense disastrous, that it would not? ‘If somebody could retain in his memory everything he had experienced,’ claimed Milan Kundera, in his novel Ignorance (2000), ‘if he could at any time call up any fragment of his past, he would be nothing like human beings: neither his loves nor his friendship would resemble ours’. Funes himself dies young, after all; as if simply worn out by his prodigious memoriousness. We might conclude: all our efforts are focussed on attempting to make our ‘memory’ better. Now that technology has overtaken us we should, on the contrary, be pondering how we can, most creatively and with what spiritual utilitarianism, make it worse.

I shall register the obvious objection. Since total recall would crowd-out actual experience with the minute-for-minute remembrance of earlier experiences, we would have to be very selective in the ways we access our new powers. The question then becomes: what would our processes of selection be? How robust? How reliable? What if we put in place (as my thought-experiment digital future certainly enables us to do) a filter that only allows us to access happy memories? Would this change our sense of ourselves—make us more content, less gloomy, happier in our lot? Would this in turn really turn us into Kunderan alien beings? The problem becomes ethical: it is surely mendacious to remember only the good times. The ‘reality’ is both good and bad, and fidelity to actuality requires us to balance happy memories with sad ones. This, however, depends upon a category error, embodied in the tense. Where memory is concerned reality is not an ‘is’; reality is always a ‘was’. Memories feed into the reality of present existence, but never in an unmediated or unselective way. Indeed, current research tends to suggest that something like the opposite of my notional filter actually operates in human memory—that as we get older we tend to remember the unhappy events of the past over the happier ones.
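To make the thought-experiment concrete, here is a deliberately crude sketch of such a selection filter. Everything in it (the Memory record, the happiness score, the threshold) is my own hypothetical apparatus, invented for illustration, not anything proposed in the research alluded to above:

```python
# A toy version of the happy-memories filter discussed above. The 'happiness'
# score and the threshold are hypothetical devices, for illustration only.
from dataclasses import dataclass

@dataclass
class Memory:
    description: str
    happiness: float  # -1.0 (miserable) ... 1.0 (joyful)

def recall(archive, query, min_happiness=0.0):
    """Return memories matching the query, silently dropping unhappy ones."""
    return [m for m in archive
            if query in m.description and m.happiness >= min_happiness]

archive = [
    Memory("our wedding day", 0.9),
    Memory("the wedding anniversary I forgot", -0.7),
]
print(recall(archive, "wedding"))  # only the happy memory surfaces
```

The ethical problem lives entirely in that min_happiness parameter: set it anywhere above the floor and the archive is already, quietly, mendacious.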

The bias that ‘total memory’ would in some sense be damaging to us strikes me as superstition. Funes’s imaginary experiences are a poor match for the sorts of thought-experiments to which his name has been, latterly, attached. Christian Moraru toys with describing Funes’ situation as one of disorder, but then has second thoughts. ‘Disorder may not be the right word here since Funes’s memory retrieves a thoroughly integrated, systematic, and infinite world. Taking to a Kabbalistic extreme Marcel Proust’s spontaneous memory, one present fact or detail involuntarily leads in Funes’s endlessly relational universe to a “thing (in the) past” and that to another, and so on. Remembrance reaches deeper and deeper and concurrently branches off, in an equally ceaseless search for an ever-elusive origin or original memory.’ He goes on:
With one quick look, you and I perceive three wineglasses on a table; Funes perceived every grape that had been pressed into the wine and all the stalks and tendrils of its vineyard. He knew the forms of the clouds in the southern sky on the morning of April 20, 1882, and he could compare them in his memory with the veins in the marbled binding of a book he had seen once, or with the feathers of spray lifted by an oar on the Rio Negro on the eve of the Battle of Quebracho. Nor were these memories simple—every visual image was linked to muscular sensations, thermal sensations, and so on. He was able to reconstruct every dream, every daydream he had ever had. Two or three times he had reconstructed an entire day; he had never once erred or faltered, but each reconstruction had itself taken an entire day. [Christian Moraru, Memorious Discourse: Reprise and Representation in Postmodernism (Fairleigh Dickinson University Press 2005), 21-22]
Moraru finds in Funes’ memory ‘a trope of postmodern discourse’ which he defines as ‘representation that operates digressively, and conspicuously so, through other representations.’ He is interested in the ‘interrelational nature of postmodern representation, its quintessential intertextuality … [that] in saying itself says the other, as it were, re-cites other words, speaks other idioms, the already- and elsewhere-spoken and written.’ Actual memory does not think back to drinking wine in the sunshine and thereby recall not just the wine and the sunshine but the individual life-stories of each and every grape that was grown in order to be pressed into the juice that eventually fermented into wine. On an individual level that would be magic, not memory. But there is a sense, a technological-global sense, in which Moore’s law is pointing us towards precisely that collective social and cultural conclusion.

The real message of ‘Funes’ is not that a complete memory would render life unliveable (lying in a darkened room, taking a whole day to remember a previous day in every detail, dying young and so on). The real message is: a perfect memory would be transcendent. It would enable us to recall not just the things that had happened to us, but the things that happened to everyone and everything with which we came into contact. This, of course, has no brain-physiological verisimilitude, but it speaks to a deeper sense of the potency of memory. In memory we construct another world that goes beyond our world. Imagination can do this too, but for many people imagination is weaker than memory; or perhaps it would be more accurate to say, imagination manifests itself most powerfully in memory, in the buried processes of selection and augmentation. Not for nothing do we dignify processes of recollection beyond the simplest as memory palaces.

......

The Philip K Dick story ‘We Can Remember It For You Wholesale’ (1966) sports one of the truly great SF story titles, I think; a title that has been poorly served by its two Hollywood movie adaptations, both of which clunk down to Total Recall. Dick’s protagonist, the flinchingly-named Douglas Quail, can’t afford the holiday-trip to Mars he earnestly desires. So he visits REKAL, a company that promises to insert into his brain the ‘extra-factual memory’ of a trip to Mars; and not as a mere tourist, neither, but as a secret agent. Exciting! Not real, but (Dick's premise tacitly prompts us to think) once something has happened it is no longer real either, it's just a sort-of phantasm in our memorious brains. Fake the phantasm and you can obviate the expense and inconvenience of actually doing the things, perhaps dangerous things, needful to be remembered.

The story goes on to explore a narrative ambiguity: is the superspy adventure an artificial memory, or has the REKAL process accidentally unearthed real memories of Quail as a government assassin? In the original story, Quail returns to REKAL to have a false memory of detailed psychiatric analysis inserted in order to restore his psychological balance and prevent any further urge to visit REKAL, which is quite a nice twist. But Dick, never knowingly under-twisted, adds another: this return visit uncovers deeper ‘actual’ memories (or else implants them) in which Quail remembers being abducted by aliens at the age of nine. Touched by his innate goodness these aliens decide to postpone their invasion of Earth until after his death. This means that, merely by staying alive, Quail is protecting the Earth from disaster. He is, in one sense, the single most important individual alive.

Dick’s main theme is not just that memory is unreliable—hardly a novel observation, that—and not even the more radical idea that ‘real’ and ‘artificial’ memories have an equal validity as far as the process of remembering goes. It’s actually that ‘real’ and ‘made-up’ memory in competition in the mind nonetheless tend to gravitate back to narratives of ego inflation. What I always remember is that I am the centre of memory, that the events and persons of the universe are arrayed about me. The same circumstance does not normally obtain in matters of moment-to-moment perception (megalomania excepted) because this involves us in intersubjectivity in a way memory does not. Or more precisely, memory is a particular and involuted form of intersubjectivity, where the two subjectivities interacting are present-me and past-me.

The movie adaptations of this story are, in a way, even more interesting. Both jettison Dick’s complicated conceit of memory, ambiguously real or artificial, layered upon memory in favour of a simpler narrative line, better suited to the visual medium in which the story is now being told. Quail thinks himself a nobody, a mere construction worker. He goes to REKAL to be given artificial memories of a more exciting life. These memories trigger authentic memories of his actual life as a spy. In both films (though to a lesser degree in the earlier of them) the strong implication is that he has a true identity and it is this latter. The bulk of both storylines is then given over to the cinematic storytelling of his spy-action adventures.

What’s so fascinating about this is the way both texts portray memory (something that is, we might say, by its nature recollected after the event) as a vivid and kinetic ongoing present set of experiences. Neither movie has its protagonist sitting in a chair remembering being a spy; both, rather, show Quail running, fighting, shooting and getting the girl in the cinematic present. Since we all know how memory works (and that it doesn’t work this way) it seems plain that some strange dislocation is happening at the level of representation of the text. We are shown Quail living his quotidian life; we are shown that life transformed seamlessly into his artificial memories of being a spy. In both movie versions a hinge-scene is staged where an individual attempts to intervene in the action-adventure shoot-up Quail’s life has become. These individuals both tell Quail that the world he is currently experiencing is not real; and that if he perseveres in this fantasy it will kill him. Quail is offered a pill, a token (it is claimed) of his willingness to give up the dream and return to the real world. In both films Quail suspects a ruse and refuses the pill in the most violent way imaginable, by shooting dead the messenger who carries it.

This, to be clear, is a special case of a more general SF trope. There is no shortage of texts that develop the idea of a virtual reality or drug-created alternate reality that runs concurrent with actual reality—the Matrix films are probably the most famous iteration of this, but there are scores of examples from science fiction more generally. Linked to this is the ‘dream narrative’ trope, where John Bunyan or Alice explore a continuous but fantastical timeline that is revealed, at the story’s end, to have been running in parallel with actual reality through the logic of dreams. In both the case of ‘virtual reality’ and ‘dreaming’ it’s an easily comprehensible logic that moves from actual reality into the alternate reality and back again. The Total Recall movies, though—and the story on which they are based—do something more dislocating. Memory is not an alternative parallel reality in the way that VR or dreaming is. Nonetheless these texts treat it as though it is. Remembering something that happened previously is elided with experiencing something now. This is to drag the events remembered out of the past and into the immediacy of the present; or perhaps it is to retard the experience of the present into something always already recalled.

This may look like a trivial misalignment of narrative logics, or perhaps only the limitations of the representational logics of cinema. Think of the visual cliché: a character is shown on-screen ‘remembering’: wavy lines flow across the image and a dissolve-cut takes us to ‘the remembered events’. But Total Recall short-circuits this convention: memory happens in the present, as on-going narrative. This in turn means that the distinction between present and past, the distinction which it is memory’s main function to reinforce, vanishes. Memory is no longer of the past, or even rooted in the past; it is refashioned as a technological artifice (‘REKAL’) that configures ‘memory’ as the continuous present, and augments that present-ness by making the happening-now into a continuous adrenalized onward rushing (running, fighting, escaping, plunging on).

This, I think, is the implication of a 21st-Century Funes. A technologically actualised ‘total’ memory could well destabilise the authentic ‘reality’ of the remembered experience. It might mean that we get to set our own selection algorithms for memory recall, such that we only recall those memories that make us happy, or paint us in a good light—that, for instance, reinforce the sense we have of ourselves as action heroes rather than boring 9-to-5ers. It might mean that we erode the difference between ‘real’ memory, the memory of artifice (films we have seen, books we have read) and actual artificial memory itself. This is the old threat of Postmodernism, exhilarating and alarming in equal measure: the notion that simulacra really will come to precede the things they supposedly copy. But I’m suggesting something more. Total memory, as Funes tacitly and Total Recall explicitly say, will transcend the past. It will break down the barrier between past and present, and reconfigure it as a more vital now. It will subsume the particularity of memory and render it wholesale.



from Chapter 3: Memory and Religion

Like Judaism, Christianity and Islam are both memorious religions. Religion need not necessarily be so, I think, but it's presumably not a coincidence that the two biggest religions in the world today are. Roland Bainton argues:
Judaism is a religion of history and as such may be contrasted with both religions of nature and religions of contemplation. Religions of nature discover God in the surrounding universe; for example, in the orderly course of the heavenly bodies, or more frequently in the recurring cycle of withering and resurgence of vegetation. This cycle is interpreted as the dying and rising of a god in whose experience the devotee can share through various ritual acts, and thus become divine and immortal. For such a religion the past is not important, since the cycle of the seasons is the same one year as the next. Religions of contemplation, at the other pole, regard the physical world as an impediment to the spirit which, abstracted from the things of sense, must rise by contemplation to union with the divine. The sense of time itself is to be transcended, so that here again history is of no import. But religions of history, like Judaism, discover God "in his mighty acts among the children of men". Such a religion is a compound of memory and hope. It looks backward to what God has already done ... [and it] looks forward with faith: remembrance is a reminder that God will not forsake his own. [Bainton, The Penguin History of Christianity (volume 1, 1967), 9]
Memory and history are interconnected; history (personal and collective) being what we remember and memory (individual and textual) being how we access history. And when you look at it like that it's quite surprising that it is the religions of history that so dominate human worship. The problematic is a large one, after all: if God intervenes in human history at a certain point in time, what about all the people who happened to be born and to die before that moment? Religions of nature and contemplation can embrace them easily. Religions of history must necessarily come to terms with the ruthlessness of history. History, after all, is famously a winners' discourse. What about the losers? Calling them (say) virtuous pagans, or pretending they simply don't exist, jars awkwardly with Christian and Islamic emphases on the excluded, the underdog and the poor.

Immediate or strictly contemporaneous religions (Scientology, say) tend to seem absurd to us, even though the miracles they declare are no more intrinsically risible than those of Christianity, Islam or Hinduism. The reason this is so, I suspect, is because we are so acculturated to the idea of religious belief working as memory rather than as to-hand experience … or at least not as this latter for most people (ecstatics and schizophrenics excepted, I mean).

As is the case with our memory, many details are omitted, and many contradictions and infelicities reworked into more-than-truly-contiguous narratives. Like memory, religion doesn’t always or even particularly intrude on everyday living—it requires a will-to-contemplation to evoke it, actually, although a properly functioning religion is bound to provide copious aides-mémoire (liturgy, ceremony, Sunday schools and their equivalents and so on) to help in this respect. Consulting family photographs, after all, has a liturgical aspect to it for many of us; in Pixar's Coco (dir. Lee Unkrich 2017) these family photos and their place in the lives of the living literally translate into the wellbeing and status of the dead generations in the afterlife.

I'd suggest that most religion asks us to look back, to honour our mothers and fathers, to worship our ancestors, to consider the origins of life and the cosmos and be thankful for them; but of course there are also portions of religion that ask us to look forward. The believer is to orient her life by her future reward or punishment. The Bible is, by weight, mostly history; but it ends as future-prophecy. Nonetheless, I'd be tempted to argue that the memory-gravity of religion means that those portions of religious practice or thought that have a significant future component end up doing that strange thing of construing future apocalypse as memory … the odd past-oriented backwardness of St John’s revealed future, for instance. Indeed, the more I think about it, the more it strikes me that this is one of the things that science fiction has in common with religion.

Religion endures best in adulthood if it has been impressed upon us in childhood. This means that we are, when we live in faith, steering ourselves according to how we remember our younger days. I suspect something like this is behind Jesus's celebrated ‘except ye be converted, and become as little children, ye shall not enter into the kingdom of heaven’.
......


from Chapter 4: Irrepressible Memory

According to the New Scientist (‘Déjà vu: Where fact meets fantasy’ by Helen Phillips) only 10% of people claim never to have experienced déjà vu (I'm one of that ten per cent, actually). For some people, at the other end of the scale, it becomes a veritable psychopathology:
Mr P, an 80-year-old Polish émigré and former engineer, knew he had memory problems, but it was his wife who described it as a permanent sense of déjà vu. He refused to watch TV or read a newspaper, as he claimed to have seen everything before. When he went out walking he said the same birds sang in the same trees and the same cars drove past at the same time every day. His doctor said he should see a memory specialist, but Mr P refused. He was convinced that he had already been.
The article rehearses arguments from brain chemistry to explain this widespread feeling (perhaps it is indeed ‘the consequence of a dissociation between familiarity and recall’). But I read the article wondering: could something as banal and everyday as this be behind Nietzsche's unflinching adherence to the doctrine of Eternal Recurrence? (A philosophical slogan: ‘Eternal Return, the consequence of a dissociation between familiarity and recall...’) Could memory, Funes-style, prove so strong that it overwhelms us, strong-arms us to the floor? Should we be afraid of memory?

We're not, because our day-to-day experience of memory, as we stand there trying to remember where we put our car keys, or what the second line of Twelfth Night is, or what we even came downstairs for, is of an elusiveness that indexes fragility. To speak in terms of the opposite of this is a convention, but an empty one. Some reviewerish boilerplate, from Jane Yeh in an old edition of the TLS:
... should appeal to a wide readership, given the universal scope of its themes—family tensions, and the adult author's changing relationship to her parents, the power of memory ...
But of course we don't, actually, talk of ‘the power of memory’. Rather, all our experience leads us to the consideration of the weakness of memory. This is not just a question of the feebleness of our powers of recall (the necessary, non-Funes weakness), or the way memory is a sixty-pound weakling compared to the muscular shaping requirements of our preconceptions, our repressive superegos and so on. It is to challenge the idea that simply recalling something is ‘powerful’ in its own right: as if we're sitting in the cinema of our minds in 1890 and are amazed simply by virtue of the fact that anything is projected on the screen at all. It betrays, I suppose, a tacit belief that memory ought not to be able to move us, to influence our present; that we ought to live in a sort of unfettered continuous present. Or maybe it's a simple misprision: for memory read the past. Two things almost wholly unrelated, however often they're confused.

This extends, I think, even to traumatic memories. There are instances where memory overwhelms the rememberer, as in PTSD, but these instances are not the default, even though trauma of varying intensities is the default of ordinary living. Not thinking about things is, actually, a fairly effective way of dealing with trauma and upset; and not thinking about things can certainly become a habit. But this isn’t the same thing as forgetting, and certainly not the same thing as ‘repressing’ a memory. Freud's insistence that the repressed always returns is more a statement of faith than an evidence-based assertion. I mean, it strikes me as a good faith. It says: nothing stays secret for ever, you cannot bury anything permanently, your true nature will eventually emerge, that affair you had will eventually come to light, those memories you are distracting yourself from don't go away just because you are distracting yourself from them (although, as I say, the distraction can perhaps be prolonged indefinitely). This is a worthwhile ethos by which to live life. It is not true, though. Memories, it seems, are not only sometimes lost; the default position for memories is to lose them, or rather it is to overwrite the memories with simplified neural tags or thumbnail versions of the memory. We do this to stop our minds exploding, but it means that it is not repressed memory that always returns, but repressed desire (the desire that shaped the recasting of the memory in the first place). That sounds truer; short of neural-surgical intervention, repressed desire always does return ... it just doesn't necessarily return at the same strength. If memory is strong, then total memory would be omnipotent. But if memory is weak, then total memory would follow a different hyperbolic trajectory into nothingness.
.....


from Chapter 5: Inaccessible Memory

Coming hard after the previous chapter, and its claim concerning the irrepressibility of memory, the title of this chapter runs the risk of seeming mere trolling. But, if you'll bear with me, I have a particular something in mind.

When we remember something particular from our childhoods, we recognise the specific recollection as memory. When I remember that I left the iron on, just now, I recognise that as memory. Both forms of memory have content, and are comprehensible, and that might tempt us into thinking that having content and being comprehensible are two features of memory as such: that if we have a memory that baffles us, then that just means that we haven't contextualised it, or understood it. I think this is wrong. I think more memory, and more important memory, provides neither of those two things. If we define memory by its accessibility then we rule out from the very concept of memory those memorious processes that are not accessible, even if those processes are vital to memory and mental health.

I'll give you an example of what I mean: dreams as memory.

I need to be specific, here. We all dream, and sometimes we remember what we dream and sometimes we don't. But those remembered-dreams are second-order memories, friable attempts to translate one kind of (non-rational, not consciously controlled) mental process into another that is quite different. I'm not talking about our memories of dreams; I'm talking about our dreaming as itself an iteration of memory.

Because of course dreams are a way of remembering stuff, often the stuff that happened in the day. We know that dreams ‘process’ the events of the day (and sometimes other days) and our anxieties and desires pertaining to them—we process these events, in other words, by remembering them in this peculiar way we call dreaming. More, we know that if we are prevented from dreaming we die. Torturers from ancient Rome to the CIA have long known this. Doctors diagnose the rare but real condition fatal insomnia: ‘a neurodegenerative disease eventually resulting in a complete inability to go past stage 1 of NREM sleep. In addition to insomnia, patients may experience panic attacks, paranoia, phobias, hallucinations, rapid weight loss, and dementia. Death usually occurs between 7 and 36 months from onset.’ If I fail to remember where I put my car-keys, even if I permanently fail to remember this thing, it will not kill me. In this sense dreaming-as-remembering is much, much more important than remembering-as-conscious-recall.

If we don't tend to think of dreams as a fourth kind of memory (alongside sensory memory, short-term memory and long-term memory) it's because we are hamstrung by a prior assumption that memory must be accessible and conscious to count as memory. But I wonder if the absolute physiological necessity of dreaming, and the relative disposability of the other three kinds of memory (for even patients with severe neurological decline who lose both long- and short-term memory can carry on living otherwise just fine) suggest that not only are we ignoring a vital kind of memory, we have got the relative importance of these things entirely the wrong way about. What if, instead of dreams being a shadowy and dislocated imitation of ‘real’ memory, long-term and short-term memory are both the fundamentally inessential tips of a much larger subconscious iceberg? Perhaps most of our remembering happens unconsciously, inaccessibly, in somnicreative form?

I say so in part, of course, because it situates my earlier claims that fiction (that art, that culture in the broadest sense) is a mode of memory in both the individual and the collective sense. But these processes of memory are not directly analogous to what happens in our brains when we retrieve either recent or archived memories. They are closer to somatic memory, except that they are rarely actually somatic. And they are, I think, the bulk of memory as it figures.

Say, rather than repressing or purging our memories, we are (short of surgical interventions that literally excise portions of our brain) remembering all the time, in a nexus of ways that are inaccessible, or largely inaccessible, to our conscious minds. Say that this process of continuous, paraliminal remembering actually constitutes our consciousness: is the bulk of what consciousness means for a living being, and that the stuff we consciously think of, the stuff of which we are aware and over which we exercise a degree of mental control, is the excrescence, the bit of that process that pokes out into the realm of self-aware mentation.

Perhaps this seems far-fetched to you. I can see why. We can, after all, only discuss memory in the idiom of consciousness and rationality. If the bulk of memory actually happens outwith those two territories then it's hard to see what we can usefully say about it. It's like Kant's exhaustive but tentative groping around the shape the inaccessible Ding an Sich leaves in the accessible but fallible and untrustworthy spread of human perceptions. By what process do we transpose the alien idiom of memory we call ‘dream’ into the graspable idiom of consciousness as such?

Adam Phillips, in Terrors and Experts, says this about the interpretation of dreams: ‘a dream is enigmatic—it invites interpretation, intrigues us—because it has transformed something unacceptable, through what Freud calls the dream work, into something puzzling. It is assumed that the unacceptable is something, once the dream has been interpreted, that we are able to recognize and understand. And this is because it belongs to us; we are playing hide-and-seek, but only with ourselves. In the dream the forbidden may become merely eccentric or dazzlingly banal; but only the familiar is ever in disguise. The interpreter, paradoxically—the expert on dreams—is in search of the ordinary.’ [64]

But why must the extraordinary be turned into the ordinary? That sounds like false reckoning (or false translation) to me. The implication here is 'because it started out that way'; but that's surely not true: dreams are as likely, or are more likely, to grind their metaphorical molars upon extraordinary aspects of our life. The perfectly habitual aspects of it won't snag the unconscious's interest. So could it be that dream-interpreters turn the extraordinary into the ordinary only because the ordinary sounds more comprehensible to us, because it produces the sort of narrative the dreamer prefers to wake up to? (‘...those skinny cattle eating the fat cattle and not getting fat? That's about harvests, mate.’) But if the currency of dreams is the extraordinary, common sense suggests that the interpretation of dreams should be extraordinary too—suggests that the function of the dreaming is bound-up with its extraordinariness. The sense of recognition Phillips is talking about here, that ‘aha! that's what it means!’ is all about the transcendent rush, the poetry, not about the mundanity. But the very fact that it's a rush, the very thrill of it, ought to make us suspicious. It is not the currency of true memory to elate us, after all. It's cool, but it's not the truth.

This is the flaw in the Biblical narrative of Joseph: his dreams are too rational, too strictly allegorical. They don't have the flavour, the vibe, of actual dreams. We can, I think, tell the difference between a report of an actual dream and the faux-dream confected for, as it might be, a novel. Writer C K Stead says as much: ‘In my most recently published novel I decided one or other of the central characters should experience or remember a significant dream in each of seven chapters. When I tried to invent these they seemed in some indefinable way fake; so I hunted through old notebooks and found dreams I had recorded which could be used with a minimum of alteration.’ Most writers will know what he means. It's one reason I like this Idries Shah story:
Nasrudin dreamt that he had Satan's beard in his hand. Tugging the hair he cried: 'The pain you feel is nothing compared to that which you inflict on the mortals you lead astray!' And he gave the beard such a tug that he woke up yelling in agony. Only then did he realise that the beard he held in his hand was his own. [Shah, The World of Nasrudin (Octagon Press 2003), 438]
One of the things that's cool about it is the way it captures the feel of an actual dream. But mostly, of course, it's the implication that our subconscious not only understands but is capable of timing the revelation comically to deflate the dark grandeur of our secret fantasies. Nasrudin's dream knows more about Nasrudin than he does, I think. And by extension all our dreams know more about all of us, and remember more about all of us, than we do ourselves.

This, I think, is the most compelling part of recentering dreams in our accounts of memory. Because doing so recognises the extent to which we are all artists.
The beauteous appearance of the dream-worlds, in the production of which every man is a perfect artist, is the presupposition of all plastic art, and ... half of poetry also. We take delight in the immediate apprehension of form; all forms speak to us; there is nothing indifferent, nothing superfluous. But, together with the highest life of this dream-reality we also have, glimmering through it, the sensation of its appearance: such at least is my experience, as to the frequency, ay, normality of which I could adduce many proofs, as also the sayings of the poets. ... And perhaps many a one will, like myself, recollect having sometimes called out cheeringly and not without success amid the dangers and terrors of dream-life: “It is a dream! I will dream on!” I have likewise been told of persons capable of continuing the causality of one and the same dream for three and even more successive nights: all of which facts clearly testify that our innermost being, the common substratum of all of us, experiences our dreams with deep joy and cheerful acquiescence. [Nietzsche, Birth of Tragedy (transl. Hausmann), 23-24]
Blake was fond of the verse from Numbers (11:29) ‘would to God that all the Lords people were Prophets!’ I feel the same, but for artists. And the ongoing progression of Moore's Law and the interpenetration of our lives with technology that facilitates our expression, brings that utopia, that mode of remembering, ever closer. Lord, as Dickens prayed, keep my memory green.


from the Afterword

Some aspects of the ever-increasing technological interpenetration of our lives cater to our conscious minds. Some address our subconscious. It may be worth speculating as to what the version of memory argued for here—a total memory predicated upon continuing improvements in processing power, one that encompasses not only ordinary information-retrieval instances but also a larger collective artistic or religious communal memory, and even (perhaps) the buried part of the iceberg of memory to which we don't have access—would look like in practice. It might free us from the vagaries of physiological memory, its vulnerabilities and intermittencies. By the same token it might cast us upon the not-so-tender mercies of algorithms. Memory might become the province of the strategies of control of those congeries of State Power currently asserting dominance.

There are two current-day strategies here. One is the Nineteen Eighty-Four approach typified by contemporary China, which believes in a top-down authoritarian domination of online activity via restriction, censorship and punishment. The other, much more widely pursued, is the Brave New World approach of the West, where punters are told they are free to frolic in unlimited online pastures when in fact a combination of targeted nudging, ever-evolving algorithms and the sheer soma-like excess of hedonistic online content actually confines and herds the user even more effectively than Chinese top-down control. The ‘internet’ (to generalise ridiculously) can be wielded by Foucauldian Power to, say, ensure Brexit or the election of Donald Trump, to promote certain socio-cultural memories and excise others, and all without the apparatus of apparent oppression. Those who are conscious of oppression and who can see their oppressors are, in a sense, better off (because they at least have a clear target) than those who are oppressed but can neither identify a specific tyrant nor even be sure that they are oppressed.

As technology becomes an increasing part of our memory, on the hyperbolic path towards total memory, these latter strategies might easily become constitutive of memory as such. Our future memories might well become bizarre hybrids of actual remembrance and Orwellian memory-hole, and the fact that we won't necessarily even be aware of these controlling dynamics might well align this new memory with the buried portion of our actual memory, our dream-memory and other unconscious memorious drives. It's not, I concede, a hopeful prognosis. Of course, I may be wrong, may (indeed) be profoundly wrong. But it seems to me, looking around, that we have already tacitly conceded that our collective consciousness (I thumbnail this as ‘the internet’ but it's larger than that) is already apprehending important social and political questions not by ratiocination but according to a set of unconscious processes not strictly accessible to our conscious wills. We are, in other words, already remembering our past—and so, shaping our future—in the way dreams remember things rather than the way consciousness remembers things, and I see no reason why that might not intensify into the future. Our tech, I would hazard, will bed that in. We will increasingly dream our memories, both individual and collective, and do so much more comprehensively thanks to technology. In a late poem, the great Les Murray seems to put his finger on something.
Routines of decaying time
fade, and your waking life
gets laborious as science.

You huddle in, becoming
the deathless younger self
who will survive your dreams
and vanish in surviving.
I wonder.

Saturday 18 July 2020

"Thetis of the Shining Breasts"


The image, there, is Sir James Thornhill's ‘Thetis Accepting the Shield of Achilles from Vulcan’ (1710), currently in the Tate.

You know the story already: Achilles, grief-stricken and furious at the death of his lover Patroclus, resolves to return to the war from which he had previously withdrawn. His divine mother Thetis visits the palace of Hephaestus, the smith of the gods, begging him to make her son a marvellous suit of armour and shield to protect him in battle. This the god does, because he owes Thetis a favour.
She [Charis] called to Hephaestus, the famed craftsman, and spake to him, saying: “Hephaestus, come forth hither; Thetis hath need of thee.” And the famous god of the two strong arms answered her: “Verily then a dread and honoured goddess is within my halls, even she that saved me when pain was come upon me after I had fallen afar through the will of my shameless mother, that was fain to hide me away by reason of my lameness. Then had I suffered woes in heart, had not Eurynome and Thetis received me into their bosom—Eurynome, daughter of backward-flowing Oceanus. With them then for nine years' space I forged much cunning handiwork, brooches, and spiral arm-bands, and rosettes and necklaces, within their hollow cave; and round about me flowed, murmuring with foam, the stream of Oceanus, a flood unspeakable. Neither did any other know thereof, either of gods or of mortal men, but Thetis knew and Eurynome, even they that saved me. And now is Thetis come to my house; wherefore it verily behoveth me to pay unto fair-tressed Thetis the full price for the saving of my life.” [Iliad 18:392-408; this is A.T. Murray's old Loeb translation from 1924]
Accounts differ as to why and by whom Hephaestus was thrown out of heaven; but Homer is clear that Thetis saved his life. In return the smith makes the armour, and not only forges the shield but decorates it with a wide range of gorgeously rendered pastoral and city scenes of peaceful Greek life.

Anyway: recently I re-read Auden's great poem ‘The Shield of Achilles’ (1955).
She looked over his shoulder
For vines and olive trees,
Marble well-governed cities
And ships upon untamed seas,
But there on the shining metal
His hands had put instead
An artificial wilderness
And a sky like lead.

A plain without a feature, bare and brown,
No blade of grass, no sign of neighborhood,
Nothing to eat and nowhere to sit down,
Yet, congregated on its blankness, stood
An unintelligible multitude,
A million eyes, a million boots in line,
Without expression, waiting for a sign.

Out of the air a voice without a face
Proved by statistics that some cause was just
In tones as dry and level as the place:
No one was cheered and nothing was discussed;
Column by column in a cloud of dust
They marched away enduring a belief
Whose logic brought them, somewhere else, to grief.

She looked over his shoulder
For ritual pieties,
White flower-garlanded heifers,
Libation and sacrifice,
But there on the shining metal
Where the altar should have been,
She saw by his flickering forge-light
Quite another scene.

Barbed wire enclosed an arbitrary spot
Where bored officials lounged (one cracked a joke)
And sentries sweated for the day was hot:
A crowd of ordinary decent folk
Watched from without and neither moved nor spoke
As three pale figures were led forth and bound
To three posts driven upright in the ground.

The mass and majesty of this world, all
That carries weight and always weighs the same
Lay in the hands of others; they were small
And could not hope for help and no help came:
What their foes like to do was done, their shame
Was all the worst could wish; they lost their pride
And died as men before their bodies died.

She looked over his shoulder
For athletes at their games,
Men and women in a dance
Moving their sweet limbs
Quick, quick, to music,
But there on the shining shield
His hands had set no dancing-floor
But a weed-choked field.

A ragged urchin, aimless and alone,
Loitered about that vacancy; a bird
Flew up to safety from his well-aimed stone:
That girls are raped, that two boys knife a third,
Were axioms to him, who'd never heard
Of any world where promises were kept,
Or one could weep because another wept.

The thin-lipped armorer,
Hephaestos, hobbled away,
Thetis of the shining breasts
Cried out in dismay
At what the god had wrought
To please her son, the strong
Iron-hearted man-slaying Achilles
Who would not live long.
The short-line stanzas here (1, 4, 7) detail the kind of things portrayed upon the actual Homeric shield of Achilles; the longer-lined stanzas interspersed give us grim modern glosses.

But one thing puzzled me: why ‘Thetis of the shining breasts’? It looks like a maternal image, or perhaps even a sexual one. But here's the thing: although it is Homeric practice to attach epic epithets to characters' names, neither Homer nor any other classical poet uses this particular epithet. In fact Homer uses only two epithets for Thetis: Θέτις ἀργυρόπεζα (as at Iliad 18:369 and 381), which means ‘silver-footed Thetis’, and Θέτις ἁλοσύδνης (eg Iliad 20:207), which means ‘brine-born Thetis’. How do these breasts get into Auden's poem?

The answer is via a mistranslation (by no means the only one) in George Chapman's Elizabethan-Jacobean rendering of Homer. Here's how Chapman translates the lines quoted—in Murray's more literal version—at the head of this post:
She led her in, and in a chair of silver (being the fruit
Of Vulcan’s hand) she made her sit, a footstool of a suit
Apposing to her crystal feet; and call’d the God of fire,
For Thetis was arriv’d, she said, and entertain’d desire
Of some grace that his art might grant. “Thetis to me,” said he,
“Is mighty, and most reverend, as one that nourish’d me,
When grief consum’d me, being cast from heav’n by want of shame
In my proud mother, who, because she brought me forth so lame,
Would have me made away; and then, had I been much distress’d
Had Thetis and Eurynome in either’s silver breast
Not rescu’d me; Eurynome that to her father had
Reciprocal Oceanus. Nine years with them I made
A number of well-arted things, round bracelets, buttons brave,
Whistles, and carquenets. My forge stood in a hollow cave,
About which, murmuring with foam, th’ unmeasur’d ocean
Was ever beating; my abode known nor to God nor man,
But Thetis and Eurynome, and they would see me still,
They were my loving guardians.” [Chapman’s Iliad 18: 344-361]
‘I [had] been much distress’d/Had Thetis and Eurynome in either’s silver breast/Not rescu’d me’: the Greek is simpler: εἰ μή μ᾽ Εὐρυνόμη τε Θέτις θ᾽ ὑπεδέξατο κόλπῳ, ‘had not Eurynome and Thetis received me into their bosom.’ Why does Chapman add in the silver? Partly because elsewhere in Homer Thetis's feet are described as silver; and partly because it makes the breasts shine, since these two beings are divine sea-nymphs. Which is to say: Hephaestus being received into their bosoms is Hephaestus falling an immense distance from heaven and hitting water rather than land, and so surviving. Their bosoms are silver-shining because that's how the sunlit sea is.

One thing this does, or so it seems to me, is draw out how dry Auden's horrible vision of contemporaneity is. It's all landscapes, wastelands, concrete and barbed wire. It is, we could say, a thirsty vision. The silver breasts of Thetis (the ‘she’ of the opening line, I suppose) represent not maternal nutrition nor even the individuated bliss of sexual connection, so much as the blue stretch of ocean, the diver leaping and sheathing himself in water: cool freedom and the escape of survival. At any rate, that (I think) is how Auden got ‘Thetis of the shining breasts’.

Tuesday 7 July 2020

Talking Back To Fiction: or, Gee, I Really Hope Somebody Got Fired For That Blunder


In a 2004 essay on philosophical fiction the late, lamented Jerry Fodor argues that, though ‘the philosophical novel’ is a well-established mode, viz. ‘Comp. Lit. 102: readings in Dostoevsky, Kafka, Mann, Gide, Sartre’ (‘little or no philosophical sophistication required’), in fact philosophy and fiction aren’t particularly miscible. Fodor sees metaphysicians and novelists as doing quite different things: ‘practically by definition, theories traffic in abstractions; they purport to see where the eye does not. Novels, by contrast, tend to be concerned with the surfaces of things.’ Then he says this:
Philosophical theories are worse candidates than most for novelistic treatment. The whole function of a philosophy is to be argued with, pro or con, and it is churlish to argue with a novel: ‘Call me Ishmael.’ I won’t! ‘About two in the morning he returned to his study.’ In fact, it was nearer 3.15. You can’t talk back to a novel: ‘What’s that supposed to mean?’ and ‘Why should I believe that?’ are out of place. But these are the queries that philosophers want to test their theories on; not just because philosophers are churlish by profession, but also because theories to which such questions aren’t posed can get away with murder.
Reading this I was, to use the old nautical cliché, taken aback. If Fodor had spent as much time in the halls of Science Fiction and Fantasy fandom as I have, I fancy he wouldn’t have been so blithely confident that readers of novels don’t answer back. Nor is it just SF/F, of course. As with many things, The Simpsons has a meme for this. In ‘The Itchy & Scratchy & Poochie Show’ (s8 ep14, first shown 1997) the ‘Itchy and Scratchy’ producers decide to liven up the show by adding a new character—voiced by Homer—Poochie, a dog with ‘attitude’ who surfs, raps, and plays electric guitar. Homer accompanies the stars of the show to a fan convention, where they field questions like this:
“In episode 2F09, when Itchy plays Scratchy's skeleton like a xylophone, he strikes the same rib twice in succession, yet he produces two clearly different tones. I mean, what are we to believe, that this is some sort of a... [nerdy chuckle] a magic xylophone or something? Gee, I really hope somebody got fired for that blunder.”
It’s funny because it’s true. It’s been recognised as true (and therefore funny) at least since William Shatner’s Saturday Night Live ‘Get a Life!’ sketch in 1986. Fans talk back to their novels (and films, and games, and comics) all the time.

I suppose it’s the case that a hefty proportion of fans talking back concerns franchises rather than single texts, because one thing that grinds fandom’s gears is: inconsistencies in worldbuilding and character-development. This can be specifics, where things happen that don’t fit the material specificities of earlier instalments, or where the timeline goes screwy, or it can be more about a perception of tone, or vibe: as when fandom divides into two shouty cohorts, one declaring vehemently that Star Wars Corporate Product Movie/Game/Novel x+1 doesn’t ‘feel’ like a proper Star Wars text and the other insisting just as vehemently that it does. “J J Abrams can’t capture the true Star Wars-ness of Star Wars” is one version of this argument, which is interesting to me in that if the Star Wars sequel trilogy shows one thing very clearly it is Abrams sweating with the exertion of pastiche-ing Star Wars as energetically and completely as possible, cramming in as many Easter eggs as the basket can hold. That, though, doesn’t capture echt-Star Warsosity for many.

But bald issues of consistency and canonicity aren’t the only things that provoke fans to answer their texts back. Another is: problems with diversity, the use of derogatory stereotypes and so on. A third, more meta-level point of fannish engagement has to do with genre itself. Many’s the SF fan who will talk loudly back at even a standalone SF novel because it is not ‘proper’ SF, or because it doesn’t get the physics right, or is too long, or too short, or too infodumpy or whatever.

I was going to add something here to the effect that literary criticism is a mode of talking back at texts too, but actually I’m not so sure. In one sense, of course, it’s absolutely literary criticism’s job to look at a sentence like ‘Call me Ishmael’ and interrogate it. But the specific challenge Fodor presents—the ‘no! in thunder’ he implies—is rarely part of the idiom of literary criticism. There have been one or two notable flame wars, but mostly we academic critics are politely, even mouse-ishly, happy to busy ourselves contributing to an ongoing accretive discourse. This may be one of the things that differentiates critics from fans, actually.

It’s also, of course, about the willing suspension of disbelief. One of my boy Coleridge’s most influential ideas, this, although I’m not sure I see that ‘will’ is actually the mot juste. We do indeed suspend our disbelief when we read, see a play or watch a movie, but this is rarely a matter of active will. It is, on the contrary, a habitual decoupling of aspects of our natural scepticism that we learn, or are acculturated into, when we’re young and that becomes second nature by the time we’re adults. The withdrawal from scepticism could be called ‘gullibility’, and in a sense I suppose we are gullible for stories: fools for them, holy fools even. But there are degrees, or perhaps whole separate magisteria, within the realm of ‘gullibility’, and it’s possible to moderate our ingenuousness without shouting at the text ‘Emma Woodhouse, handsome, clever, and rich …’ OH YEAH? FOR ALL I KNOW SHE WAS POOR AND UGLY—HELL, SHE NEVER EVEN EXISTED AT ALL, WAKE UP SHEEPLE. An argument with somebody can be a slanging match, sure; or it can be a civilised debate. The thing is, I’m not sure either paradigm describes what critics—and most readers—do with texts. Something far less specifically engaged, mostly. Something rather more passive-aggressive.

The point is that books can’t answer back, or not very well. If we’re arguing with a version of a book we have in our head then I suppose it might answer back, to some extent, but only in the echo-chamber sense that we're using the text to talk to ourselves, actually. If we’re arguing with an author—with J K Rowling for instance, something many hostile and abusive people do on social media daily—then we’ve missed the point.

Our talking-back at books, as fans and critics, is Socratic, but in a very particular sense of Socratic. I'm talking about the way Socrates knows it all, and his interlocutors know nothing, so that Plato has to gussy up a series of what are, we can be honest, monologues with repeated interjections from the other guys of ‘how true that is!’ and ‘I see!’ and the like. T H Irwin puts it well: ‘Socrates conducts strenuous, maddening and one-sided discussions of moral questions with interlocutors who lack his argumentative skill. … Socrates needs to assume that his discussions with interlocutors involve a genuine and honest exercise of the interlocutors’ capacity for moral judgment, and that their capacity for moral judgment is both reliable and corrigible. … It is far more difficult to decide whether the assumptions are plausible.’ Assume it’s a free-and-fair exchange of views and you’ll probably conclude: they thrashed these complex ideas out and all agreed that Socrates is right! But we can be honest. A debate between Socrates and Some-schmuckates was never going to be free and fair.

I suppose another way of seeing these dialogues is picturing Socrates as Tom Hanks in ragged shorts with a huge beard and his interlocutors as a basketball with a face painted on it in blood. Of course they’re going to agree with Socrates. They exist in order to affirm that Socrates is right. That’s baked into the form itself. Can you imagine a dialogue that went…
SOCRATES: Do you not agree that ideas must be derived from a previous state of existence because they are more perfect than the sensible forms given them by experience? If the soul existed in a previous state then it will exist in a future state, for a law of alternation pervades all things. And, if the ideas exist, then the soul exists; if not, not.

CEBES: But I can hold in my mind the idea of an inexistent soul. Therefore, if my idea exists, the soul cannot.

SOCRATES: [long pause] You know what? You’re right. I hadn’t thought of that. Bollocks. Ah well, maybe I’ll get it right next time. Let’s grab some moussaka.
Of course not. That’s not the idiom of the Platonic dialogues. And my point is: this Socratic exchange is, actually, how we argue with our books. The books we subject to our reactions are Cebeses and Menos and Critos, whose role is to nod and say ‘yes indeed’ and ‘truly’ and ‘of course’ while we monologue at them with our own obsessions and fascinations and needs and failings. As philosophy is said to be all footnotes to Plato, so the history of fan and critical engagement with literature is all footnotes to Plato.

One way of reading Barthes’ ‘Death of the Author’ is to see it this way, as a manifestation of the urge to keep our texts as Wilson-the-Basketballs and not have to complicate things by learning, let us say, that J K Rowling’s views on the reality of biological sex differ from ours, and whatnot. When this latter happens (which is to say, when a book we have interrogated Socratically, to the point where its ‘Quite right, Socrates!’, ‘Correct!’ and ‘Indeed, yes!’ have convinced us that it cleaves to our very soul, reveals itself, in its author's eyes at any rate, to be doing something quite other) the sting is sharp, and we can lash out.

Saturday 4 July 2020

From Hallucination to Delirium



That's Gordon Teskey's theory, at any rate [the paragraph above is from Teskey's Delirious Milton: The Fate of the Poet in Modernity (Harvard University Press 2006), 29]. I wonder if I agree. Which is, as I'm sure you know, how an Englishman says ‘I really don't agree.’

Friday 3 July 2020

Christiad Sidebar: Was Jesus Called Jesus?


The Christiad is an epic-poem retelling of Christ's life in Latin hexameters (by Marco Girolamo Vida, published in 1535) and I've been translating it in little daily gobbets over the last few months. Why have I been doing that? Hmm: why have I been doing that?

The thing is, I’ve gone down a series of rabbit holes whilst working on this project, which wasn’t really what I set out to do. What I set out to do was just to give myself a little task to start and help structure my days during Lockdown. My wife has taken up embroidery for the same reason. Some people are doing jigsaws, or baking bread, or learning the trombone. This is my equivalent. And I suppose that’s still the function it serves: one or two dozen lines of Latin rendered into English daily, with a little bloggish commentary appended. But the longer I’ve gone on, the more the latter element has bloated. I use the topic of the day’s portion as an excuse to poke around online, to go through JSTOR and other scholarly resources, and dig out anything that strikes me as interesting. A lot of the scholarship I find is many decades old, but that doesn’t bother me: I’m not trying to retrain as an actual expert in Renaissance Italy or 1st-century AD Judea, after all.

Here’s one thing I didn’t realise before I started this, for instance. Jesus may not have been called Jesus. I don’t mean in the sense that Jesus is a Greek name (Ἰησοῦς, Iēsous) because the Gospels were written in Greek, a language Jesus himself probably didn’t speak at all and certainly didn’t converse in day-to-day. That’s true of course: ‘Jesus’ is the Hellenized version of the Hebrew name Yeshua or Y'shua (ישוע‎), etymologically related to another biblical name, Joshua. I knew that already. What I didn’t realise is that this particular historical Yeshua/Joshua may not have been called Yeshua/Joshua.

So: even my fairly scanty reading into the huge amount of work that has been done on the historical Jesus tells me three things:

1. It’s overwhelmingly likely Jesus was a historical figure, like Mohammed or Ataturk, rather than a purely mythological invention like Moses or King Arthur. There’s a lot of data about him and his life, and although most of that is the NT texts and apocrypha (most written later, some much later, all written to advance particular theological rather than historical agendas, and all rewritten and smoothed over many centuries) some of it comes from other writers with less of an axe to grind, and some of it is papyrological and archaeological evidence. He was a real person, it seems.

2. Scholars also agree on the historicity of John the Baptist, who, it seems likely, led his own purity-baptismal eschatological sect and had his own followers. Despite later Christian revision it seems clear that Jesus started out not as a self-proclaiming messiah figure but as a follower or disciple of John. Indeed, it seems likely that John was, for much of Jesus’s life, the more famous, or notorious, figure: a Jewish, perhaps Samaritan-Jewish, end-times preacher who insisted upon a strict regime of personal purity for his followers to prepare them for the imminent apocalypse. Baptism was an important part of his cult, although it seems Jesus developed doctrinal differences from his master on this matter.
[This was] the difference between preaching baptism as the first step, and preaching it (as Jackson and Lake, here, believe the historical John did) as the last step, the culmination of a series of purifying modes of living undertaken by a small sect of ascetic followers: ‘the real difference between Josephus and the Gospels as a whole is that Josephus represents [John] as preaching to those who had especially devoted their lives to virtue, and offering baptism as the crowning point of righteousness, whereas the Gospels, including Luke, represent the baptism of John as one of repentance for the remission of sins.’ John's way (if this is right) retains the common-sense connection between actual washing and spiritual washing, where Christ's call to baptism breaks it, or sets it in some strange new, almost ironical relation.
Perhaps these differences caused Jesus to break away from John’s sect and set up his own; or perhaps Herod’s execution of John left the original group leaderless and Jesus took over and steered it in a new direction. Either way, when later Christians came to narrate this relationship they could neither write John out of history (he was much too famous in the 1st-C AD Near East) nor concede that he had precedence over their preferred messiah, Jesus. This leads to the story in which Jesus (though later Christians insisted he had been born without sin) comes to John to be baptised and have his sin washed away, and also to the characterisation of John not as a prophetic leader in his own right but only as a kind of carnival barker, announcing the coming of somebody bigger than himself. Neither of these last two ideas really makes logical sense, but there we are.
Jesus began as a follower of John the Baptist. Jesus was certainly baptized by John, and he seems not to have begun his own ministry until after the arrest of the Baptist. That all suggests that he was in the beginning a disciple of the Baptist. All our evidence about John the Baptist indicates that he was a prophet attempting to prepare the Jewish people for some urgent, imminent apocalyptic event, probably the arrival of the “reign of God.” So Jesus began as an adherent of an apocalyptic movement. … Jesus also appointed twelve male disciples, doubtless as an eschatological symbol for the messianic reconstitution of the twelve tribes of Israel. He probably expected that these twelve men would be heads of the miraculously reconstituted twelve tribes in the eschatological world. [Dale B. Martin, New Testament History and Literature (Yale University Press 2012), 191]
It's worth quoting Martin’s book a little more:
Beyond that general picture, we can say a few more things about the historical Jesus, most of which I cannot defend here because doing so would merit a book of its own. Jesus was a lower-class Jewish peasant from Nazareth, a small village in Galilee. There is no reason to believe the later legends that he was born in Bethlehem. He grew up probably in a family of hand laborers. He had brothers and probably sisters. His mother was named Mary, and his father, Joseph. Since we hear nothing of Joseph’s activities from Jesus’ adulthood, he likely was dead by the time Jesus began his preaching. His mother, though, and at least his brother James later were figures in the movement after Jesus’ death, with James ending up as the main leader of the Jewish church in Jerusalem. Jesus certainly spoke Aramaic as his first language. If he spoke Greek at all, it was only enough to get by in bilingual situations. He probably could not write, and if he could read, it was only minimally.

Jesus did gather followers around him, some of whom were certainly women in central positions. Mary Magdalene was doubtless a close follower, later respected by the community after Jesus’ death. ... I also think Jesus taught against the traditional household and formed, in its place, a band of men and women separated from their traditional households and families and bound to one another as a new, eschatological household of God. There are few aspects of Jesus’ ministry more certain to be historical than that he called people away from their families for the sake of the coming kingdom of God. The historical Jesus, therefore, was certainly not a “family man” in any way advocated by modern Christianity or ancient household ethics.

In spite of the possibility that Jesus was something of an ascetic with regard to marriage and family, he was not one with regard to eating and drinking. In fact, one of the things that may have differentiated the ministry of Jesus from that of John the Baptist, his early teacher, and other Jewish ascetics was that he and his followers did not follow an ascetic agenda with regard to food and drink. I think it is historical that he was rumored to be a man who enjoyed feasting and drinking when the rare opportunity arose for someone so poor, and that he kept the company of tax collectors, prostitutes, and other disreputable persons. [Martin 193-4]
3. What about the Joshua-Jesus name? Well: Jesus and his followers were not the only apocalyptic religious movement knocking around 1st-C Roman Judea. From Josephus we know of at least two others: John the Baptist’s (whose own movement has been partially erased and glommed onto Jesus’s by later Christian writing) and another, perhaps led by a man called Dositheos (or perhaps a different name), whose movement was put down by Pontius Pilate.
Helen Bond notes that for the first six years of Pilate's tenure the Syrian legate Lamia was in Rome, which meant that Pilate couldn't simply send for troop reinforcements from the north if he had trouble. ‘Pilate would have had great difficulty in contacting [Lamia] if he needed the support of his legions, a situation that would mean that any potential uprising had to be put down quickly before it could escalate.’ [Bond, Pontius Pilate in History and Interpretation (Cambridge University Press, 1998), 15]. We can assume that his default, leadership-wise, was to act swiftly and with some violence in the face of any popular disquiet.

A case in point: around the same time as the events recorded in the NT, Pilate dealt with a different self-proclaimed Messiah, a Samaritan (conceivably a man called Dositheos) who tried to start a movement and possibly a rebellion. Josephus' Antiquities of the Jews [18.4.1-2] records that this messianic sect stormed Mount Gerizim, hoping to find artefacts they believed had been buried there by Moses. As the group was armed, Pilate decided their action was insurrectionary. He brought Roman troops to the scene, dispersing the gathering and killing many, including the ringleader. Executing messiahs was part of his job spec, we might say.

After this event other Samaritans, claiming the group killed had not been armed, complained to Lucius Vitellius the Elder, the governor of Syria. He (either because the complaint had genuine merit and it was a way of calming the people he had to rule over, or else for reasons of Roman political jockeying-for-power) managed to get Pilate recalled to Rome to be judged by Tiberius. Tiberius, however, died before Pilate's arrival (which dates the end of Pilate's governorship to AD 36/37). We don't know what happened to Pilate after that.
‘Dositheos’ means ‘given by God’, more a title (like ‘Christ’, the anointed one) than a given name. That he was a Samaritan is interesting. I'll explain what I mean.

There are various non-Gospel sources for this period, including near-contemporary Jewish non-Christians like Josephus, and the sacred writings of other groups. The Jews were not then (any more than they are now) a single, homogeneous group, and although they shared many rituals, practices and beliefs there were important differences. In the south of what is today Israel were Judean Jews; in the midlands (the present-day West Bank) were Samaritan Jews; in the north were Galilean Jews. And that’s just three groups in the immediate vicinity. Two things they all had in common were: a belief in one God rather than many, and a belief that God would send a messiah to his chosen people. There was, however, little consensus on what this messiah would be like.

One thing we can be certain of is that no 1st-C Jew believed the messiah would be in any way like the figure who later emerged out of the Nicene Council of AD 325: that is to say, a figure not only from God but of God and the same as God, one in three and three in one, possessed of all the powers of God—coeternal with the Father and begotten from His same substance. Important though that figure has become to Christianity, it’s a long way from what 1st-C Jews were expecting. Different Jewish groups had different ideas as to what the messiah would be like. Some thought he would be, in effect, an exceptionally just and beneficent ruler, others that he would be a spiritual not a temporal leader, or that his concern would be to re-establish the true Temple; others that he would be a healer and miracle worker; others again that he would usher in the end-times. These different conceptions of the coming messiah took different Old Testament figures as their prototypes: Joshua, the ruler; Moses, the great spiritual leader; Elisha, the wonder-worker.

The Samaritan angle is interesting here. There was hostility between Samaritan Jews and Judean and Galilean Jews, but they were all Jews. Some scholars think that the NT includes a number of (in context, surprisingly positive) references to Samaritans as a deliberate attempt by Jesus's Galilean and Judean followers to proselytise Christ’s status as messiah to Samaria. Moreover, many Samaritan religious texts have come down to us, and they provide an interesting perspective on the Christian scriptures.

Although they spoke more or less the same Aramaic as Judean and Galilean Jews, the Samaritans looked forward not to the messiah but to a figure they called the taheb:
The term most frequently encountered in Samaritan texts for the eschatological agent is the Taheb, a title which allows several translation-interpretations: the ‘restorer,’ the ‘returning one’, or the ‘repentant.’ [James D. Purvis, ‘The Fourth Gospel and the Samaritans’, Novum Testamentum 17:3 (1975), 182]
Who was going to return, or restore? It would, it seems, be a renewed Moses, or perhaps a renewed Joshua, or conceivably a renewed Elisha or Elijah, depending on which sect you belonged to—and although Judean and Galilean Jews had a different word for messiah, many of them had similar expectations of him.

‘Marqah, the classical theologian of Samaritanism,’ Purvis explains, ‘contributed significantly to the sect's literature and liturgy. The major work attributed to him, the Memar Marqah, or Teaching of Marqah, is especially rich in the traditions it preserves concerning Moses and Joseph.’ Because so much of Marqah’s writing has been preserved, and because ‘it is clear that Marqah was not a representative of that branch of Samaritanism which glorified Joshua—that stream of thought is reflected in the Samaritan Book of Joshua as well as in some other sources—’, the belief that the Taheb would be a new Moses is seen as mainstream Samaritanism.
The figure is associated with the Divine promise to Moses in Deut. 18:18-22 (‘I will raise up for them a prophet like you from among their brethren, etc.’). It has been said that at his appearance the Taheb will recover the sacred vessels which have been hidden in a cave on Mt. Gerizim, and that he will have with him the rod of Moses and a container of Manna. Such is the understanding of the contemporary Samaritan community concerning this figure.
Purvis’s footnote to this last claim is the rather charming: ‘So, my conversations with Samaritans.’ But if modern-day Samaritan Jews expect the coming messiah to be a new Moses, not all 1st-century Samaritan Jews—or other kinds of Jew—thought that. For many the messiah would rather be a Joshua, primarily a judge and ruler; or an Elisha, primarily a wonder-worker and healer. The episode mentioned above, when the Samaritan ‘Dositheos’ (or whoever it was) and an army of followers ‘stormed Mount Gerizim, hoping to find artefacts they believed had been buried there by Moses’, suggests that more-or-less contemporaneous with Jesus’s ministry there was a separate individual claiming to be the Jewish messiah: a Moses-figure ‘sent by God’.

Jesus’s ministry contains a lot of Elisha-like miracle-working, and NT scholars have unearthed a good deal of eschatological (that is, Moses-messiah) aspects too. But those eschatological aspects had to be downplayed, and even erased, in later Christian versions of the sacred texts—for the rather obvious reason that the world did not end within the lifetime of the disciples, as a few remnants of the original Gospel suggest Jesus’s original followers believed it would. Hence, just as the whatever-his-original-name Samaritan executed by Pilate after storming Mount Gerizim called himself ‘Dositheos’, so the whatever-his-original-name Galilean we today call by a Hellenized version of Joshua’s name acquired that title because the ‘Joshua’-messiah was expected to come not to perform miracles, nor to usher in the end times, but to rule. ‘The Joshua Taheb concept itself remains an enigma’, Purvis concedes, ‘with much less by way of textual evidence’; although he does speculate that Moses—the version of the messiah thought to bring in the end-times—was a reaction against the idea of the messiah as Joshua: ‘the association of the Taheb with Moses rather than Joshua would also have been due to the original use of Joshua in some Samaritan circles as a non-eschatological model, i.e., as a model leader for the restoration in history of the old priestly order.’
A. D. Crown has suggested that the Joshua-like Taheb is also known from Justin Martyr, or, that the Joshua-Jesus typology in Justin (a native of Samaria) was dependent upon an older Samaritan Joshua-Taheb typology. … Bowman has recently related the alleged Joshua Taheb to John's gospel by suggesting that the unnamed feast of John 5:1 ff. was Purim and that the visit to Samaria of John 4 coincided with the Samaritan minor feast of sammu't happesah. Just as the Samaritan woman supposedly saw Jesus as the coming Joshua (“who would restore the Temple on Mt. Gerizim, recapture the land and divide it among the Samaritans as the true Israel”), the story in John 5 supposedly points to Jesus as a Joshua-like figure through whom the remembrance of Amalek would be eradicated (Exodus 17:14)—i.e. through him and not through Esther or Mordecai. The statement of John 5:46, “for he [Moses] wrote of me,” refers, Bowman claims, to Exodus 17:14 (“Write this as a memorial in a book and recite it in the ears of Joshua, that I will utterly blot out the remembrance of Amalek from under heaven”), and not to Deut. 18:18. Bowman notes that this Joshua-Jesus typology in reference to Amalek is found also in Justin Martyr and the Epistle to Barnabas.
Of course, it’s possible that Jesus’s given name was Yeshua/Joshua. But it’s also possible that he adopted this messianic name to indicate the kind of messiah he presented as—or that his followers retrospectively gave it him, to establish the terms on which his messiah-ness was to continue. The fact that the Gospels give him two names, Joshua and Emmanuel (עִמָּנוּאֵל: a very different Hebrew and Aramaic name, meaning ‘God is with us’), perhaps suggests that his given name was the latter and his messianic name the former.

Or perhaps ... not? Of course it’s impossible to be sure. But I do find all this stuff really fascinating. You're at liberty to disagree.

The image at the head of this blog is of Joshua and the Israelite people: from a Carolingian miniature, c. 840.