‘Could a rule be given from without, poetry would cease to be poetry, and sink into a mechanical art. It would be μόρφωσις, not ποίησις. The rules of the IMAGINATION are themselves the very powers of growth and production. The words to which they are reducible, present only the outlines and external appearance of the fruit. A deceptive counterfeit of the superficial form and colours may be elaborated; but the marble peach feels cold and heavy, and children only put it to their mouths.’ [Coleridge, Biographia ch. 18]

‘ποίησις’ (poiēsis) means ‘a making, a creation, a production’ and is used of poetry in Aristotle and Plato. ‘μόρφωσις’ (morphōsis) in essence means the same thing: ‘a shaping, a bringing into shape.’ But Coleridge has in mind the New Testament use of the word as ‘semblance’ or ‘outward appearance’, which the KJV translates as ‘form’: ‘An instructor of the foolish, a teacher of babes, which hast the form [μόρφωσις] of knowledge and of the truth in the law’ [Romans 2:20]; ‘Having a form [μόρφωσις] of godliness, but denying the power thereof: from such turn away’ [2 Timothy 3:5]. I trust that's clear.

There is much more on Coleridge at my other, Coleridgean blog.

Sunday, 19 July 2020

On Memory

Chapter 1. Towards Total Memory
Chapter 2. Memory and Fiction
Chapter 3. Memory and Religion
Chapter 4. Irrepressible Memory
Chapter 5. Inaccessible Memory

from the Preface

‘The key thing is not memory as such. It is the anticipation of memory.’ [Pierre Delalande]

Everyone knows there are two kinds of memory: short-term memory and long-term memory. These are distinct functions in terms of brain architecture (such that, for example, mental deterioration or injury can destroy one but not the other) although they are, obviously, linked, both doing similar things, instantiating the actions of particular networks of neuronal activity in the brain. Working memory, say the psychologists, serves as a mental processor, encoding and retrieving information [see, for instance, Alan Baddeley's Working Memory, Thought, and Action (Oxford University Press 2007)]. The two broader categories of short-term and long-term memory get differentiated and refined further when physiologists look at specific temporal ranges:
Atkinson and Shiffrin [“Human Memory: A Proposed System and Its Control Processes”, in Kenneth W. Spence & Janet Taylor Spence (eds.), Psychology of Learning and Motivation (New York: Academic Press 1968), 89–195] proposed a multi-store model in which kinds of memory are distinguished in terms of their temporal duration. Ultra short term memory refers to the persistence of modality-specific sensory information for periods of less than one second. Short term memory refers to the persistence of information for up to thirty seconds; short term memory, which receives information from ultra short term memory, is to some extent under conscious control but is characterized by a limited capacity. Long term memory refers to the storage of information over indefinitely long periods of time; long term memory receives information from short term memory and is characterized by an effectively unlimited capacity. Though this taxonomy does not distinguish among importantly different kinds of long term memory—in particular, it does not distinguish between episodic and semantic memory—it has been applied productively in psychological research. [Kourken Michaelian and John Sutton, ‘Memory’, in Edward N. Zalta (ed) Stanford Encyclopedia of Philosophy (Summer 2017 Edition)]
Memory science also identifies a third kind of memory, the sort of muscle-memory that you deploy when you drive your car or play the piano. This they call procedural memory, and they tie it into the other two kinds with various diagrams and charts.

So there they are: your three basic kinds of memory and their interrelations.

I'm not interested in them. My interest is in other modes of memory, modes that are (I would argue) just as valid as, indeed more important than, these more conventional forms. So far as I can see these other modes are either under-discussed or not discussed at all, not even recognised as modes of memory. Nonetheless I am going to try and argue that the three conventional modes hitherto mentioned are actually the least interesting kinds of memory.

from Chapter 1: Towards Total Memory

Memory costs. That is to say, in a biological sense, large brains are expensive organs to run. In order for evolution to select for them there must be an equivalent or more valuable pay-off associated with the cost. In the case of homo sapiens that pay-off is our immensely supple, adaptable and powerful minds; something that could not be run on anything cheaper, biologically speaking, than this costly organ. This is because consciousness and self-consciousness depend to a large extent upon memory; or perhaps it would be more accurate to say that consciousness and self-consciousness rely upon a sense of continuity through time, which is to say, upon memory. Memory is what we humans have instead of an actual panoptic view of the fourth dimension. We know all about its intermittencies and unreliabilities of course—indeed, the larger discourse of memory, from Freud and Proust onwards to modern science, has delved deeply into precisely those two qualities. My focus here happens to be on neither of them, but I don’t disagree: memory is often intermittent and unreliable. It’s also the best we’ve got.

When evolutionary scientists talk about the ‘cost’ of something, they have a particular sense of the word in mind. James G. Burns, Julien Foucaud and Frederic Mery have interesting things to say about the costs associated with memory (and learning) specifically: ‘costs of learning and memory are usually classified as constitutive or induced,’ they say. The difference here is that ‘constitutive (or global) costs of learning are paid by individuals with genetically high-learning ability, whether or not they actually exercise this ability’:
As natural populations face a harsh existence, this extra energy expenditure should be reflected in reduction of survival or fecundity: energy and proteins invested in the brain cannot be invested into eggs, somatic growth or the immune system. Hence, learning ability is expected to show evolutionary trade-offs with some other fitness-related traits. [James G. Burns, Julien Foucaud and Frederic Mery, ‘Costs of Memory: Lessons from Mini-Brains’, Proceedings of the Royal Society 278 (2011), 925]
‘Induced costs’ touch on the idea that ‘the process of learning itself may also impose additional costs reflecting the time, energy and other resources used.’
This hypothesis predicts that an individual who is exercising its learning ability should show a reduction in some fitness component(s), relative to an individual of the same genotype who does not have to learn. … Questions regarding the induced costs of learning and memory are not only restricted to the cost of ‘how much’ information is processed, but also to ‘how’ they are processed.
Intriguingly, recent research (‘from both vertebrate and invertebrate behavioural pharmacology’) challenges ‘the traditional view of memory formation as a direct flow from short-term to long-term storage.’ Instead, ‘different components of memory emerge at different times after the event to be memorized has taken place.’

Memory, in other words, has always been part of an unforgiving zero-sum game of energy expenditure. It is possible to hypothesise that the general reduction in memory ability as people get older (when we all tend to become more forgetful and less focussed) reflects a specific focalisation of energy expenditure at the time of greatest reproductive fitness. We can all think—though I’m dipping now into pop evopsych, an ever-dubious realm—of how unattractive women find forgetful men: how much trouble a husband gets into, for example, if he forgets a wedding anniversary or a birthday.

But this is one of, I think, only a very few instances where human technological advance directly interferes with the much longer term evolutionary narratives. For the first time in the history of life we have access to a form of memory that doesn’t cost—or more precisely, that costs less and less with each year that passes whilst simultaneously becoming more and more capacious and efficient. Indeed: not only do we have access to this memory, we are all of us working tirelessly to find more intimate ways of integrating this memory into our daily lives. I’m talking of course about digital memory. Right now, in the top pocket of my shirt I am carrying a palm-sized device that grants me instant access to the totality of human knowledge, as archived online. Everything that humanity has achieved, learned and thought can be ‘remembered’ by me at the touch of my fingers on the glass screen. Everybody I know carries something similar. It is no longer even a remarkable thing.

It may be that Moore’s Law is the single most significant alteration to the environment within which human evolutionary pressures operate. As that Law rolls inexorably along, we come closer to that moment when cost itself will no longer present an obstacle to total memory. By ‘total’ I mean: the circumstance where everything that we have done, experienced, said or thought is archived digitally and virtually, and can be accessed at any time. Digital memory is exterior to the brain (at least it is so at the moment); but like an additional hard-drive being cable-plugged into your laptop, it augments and enhances brain-memory and brain-function. Which London taxi driver need learn the ‘knowledge’ when sat-nav systems are so cheap? Or to put it another way: the existence of a cheap sat-nav instantly transforms me, Joe-90-like, into a sort of super black-cab-driver, with instant access not only to every quickest route through the London streets, but the whole country and indeed the whole world. This is one small example of a very large phenomenon.

What I'm talking about here is the ‘Extended Mind Thesis’ (EMT), which argues that the human mind need not be defined as exclusively the stuff, or process, or whatever it is, that is generated inside the bones of the human skull. Here is David Chalmers:
A month ago I bought an iPhone. The iPhone has already taken over some of the central functions of my brain . . . The iPhone is part of my mind already . . . [in such] cases the world is not serving as a mere instrument for the mind. Rather, the relevant parts of the world have become parts of my mind. My iPhone is not my tool, or at least it is not wholly my tool. Parts of it have become parts of me . . . When parts of the environment are coupled to the brain in the right way, they become parts of the mind. [Chalmers is here quoted from the foreword he wrote to a book-length elaboration of this idea: Andy Clark’s Supersizing the Mind: Embodiment, Action and Cognitive Extension (OUP 2008)]
I find this idea pretty persuasive, I must say; but I am not a philosopher of mind. Not all philosophers of mind like this thesis. Jerry Fodor, for instance, attempted several times to dismantle Clark’s argument. In a review-essay published in the London Review of Books Fodor takes a heuristic trot through one of Clark’s thought-experiments. Imagine two people, Otto and Inga ‘both of whom want to go to the museum. Inga remembers where it is and goes there; Otto has a notebook in which he has recorded the museum’s address. He consults the notebook, finds the address and then goes on his way. The suggestion is that there is no principled objection between the two cases: Otto’s notebook is (or may come with practice to serve as) an “external memory”, literally a “part of his mind” that resides outside his body.’ Fodor asks himself: ‘so could it be literally true that Chalmers’s iPhone and Otto’s notebook are parts of their respective minds?’ He answers, no. I don’t take the force of his objections. So for instance:
[Clark’s] argument is that, barring a principled reason for distinguishing between what Otto keeps in his notebook and what Inga keeps in her head, there’s a slippery slope from one to another ... That being so, it is mere prejudice to deny that Otto’s notebook is part of his mind if one grants that Inga’s memories are part of hers. … But it does bear emphasis that slippery-slope arguments are notoriously invalid. There is, for example, a slippery slope from being poor to being rich; it doesn’t follow that whoever is the one is therefore the other, or that to insist on the distinction is mere prejudice. Similarly, there is a slippery slope between being just a foetus and being a person; it doesn’t follow that foetuses are persons, or that to abort a foetus is to commit a homicide. [Jerry Fodor, ‘Where is my mind?’ LRB 31:3 (2009)]
But this really is to miss the point. The analogy (since Fodor forces it) is not that Clark is arguing the brain is ‘rich’ and the notebook ‘poor’ as if these were precisely the same kind of thing differing only in degree; but rather that they both have something in common—as ‘rich’ and ‘poor’ have money in common—the difference being only that one, the brain, has lots of this (call it ‘mind’) and the other, the notebook, has very little. That seems fair enough to me. Fodor goes on to deliver what he takes to be a knockout blow:
The mark of the mental is its intensionality (with an ‘s’); that’s to say that mental states have content; they are typically about things. And … only what is mental has content.
But lots of the data on my computer is ‘about’ things. Arguably, even the arrangement of petals on a flower is ‘about’ something (it’s about how lovely the nectar is inside; it’s about attracting insects). Fodor is surprised Clark doesn’t deal with intensionality, but I’m going to suggest it’s a red herring and move on.
Surely it’s not that Inga remembers that she remembers the address of the museum and, having consulted her memory of her memory then consults the memory she remembers having, and thus ends up at the museum. The worry isn’t that that story is on the complicated side; it’s that it threatens regress. It’s untendentious that Otto’s consulting ‘outside’ memories presupposes his having inside memories. But, on pain of regress, Inga’s consulting inside memories about where the museum is can’t require her first to consult other inside memories about whether she remembers where the museum is. That story won’t fly; it can’t even get off the ground.
Fodor, on the evidence of this, has never heard of the concept of a mnemonic. Or is he denying that the mnemonics I have in my mind are, somehow, not in my mind ‘on pain of infinite regress’?

I’ll stop. This may be one of those issues where reasoned argument is unlikely to persuade the sceptical; and if reasoned argument can’t then snark certainly won’t. The most I can do here, then, is suggest that the principle be taken, at the least, under advisement; or the remainder of my thesis here will fall by the wayside. It seems to me that the following extrapolations of contemporary technological development are, topologically (as it were) equivalent: (a) a person who stores gigabytes of personal information (including photos, messages and other memorious material) in their computer or iPhone; (b) the person who uses advances in genetic technology to biologically augment the physiological structures of their brain tissue, enabling them to ‘store’ and flawlessly access gigabytes of memorious data; (c) the future cyborg who integrates digital memory and biological memory with technological implants; (d) the individual whose memories are entirely ‘in the cloud’, or whatever futuristic equivalent thereof is developed.

And actually this (it seems to me) is not the crux of the matter. The extraordinary increase in capacity for raw data storage is certainly remarkable; but as mere data this would be inert, an impossibly huge haystack the sifting of which would take impossible lengths of time. The real revolution is not the sheer capacity of digital memory, but the amazingly rapid and precise search engines which have been developed to retrieve data from it.
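The storage-versus-retrieval point can be made concrete with a toy sketch (everything here, the data and the function names alike, is my own invention for illustration, not anything from real search-engine internals): even a minimal inverted index turns an inert heap of records into something that answers a query instantly, which is what makes the archive usable as ‘memory’ at all.

```python
# Toy illustration: storage alone is inert; an index makes it memory-like.
# All names and example data below are hypothetical.
from collections import defaultdict

def build_index(records):
    """Map each word to the set of record ids that contain it."""
    index = defaultdict(set)
    for rec_id, text in enumerate(records):
        for word in text.lower().split():
            index[word].add(rec_id)
    return index

def search(index, records, query):
    """Return every record containing all the words of the query."""
    words = query.lower().split()
    if not words:
        return []
    ids = set.intersection(*(index.get(w, set()) for w in words))
    return [records[i] for i in sorted(ids)]

archive = [
    "lunch with Inga at the museum cafe",
    "drove to the museum in the rain",
    "piano practice in the evening",
]
index = build_index(archive)
print(search(index, archive, "museum rain"))  # → ['drove to the museum in the rain']
```

The point of the sketch is that lookup cost depends on the index, not on how vast the archive is; without the index, every query means rereading the whole haystack.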

from Chapter 2: Memory and Fiction

That the ‘novel’ is a mode of memory is not an idea original to me. Dickens's fiction, in a sense, ‘remembers’ Victorian London for us, as Scott's fiction ‘remembers’ 18th-century Scotland. This is to say more than just that (although it is to say that) our collective or historical memory is mediated through these things—more, at any rate, through fiction (Shakespeare's plays, Jane Austen's novels, Homer's poetry) than through annalistic pilings-up of blank historical data. Our own individual memories, those products (long-term and short-term) of brain function, narrativise the past much more than they isolate or flashbulb past-moments. Fiction is always memorious.

This memoriousness is complicated but not falsified by the fact that fiction is not, well, true. There never was a boy called Oliver Twist, and though there was a figure called Rob Roy he wasn't at all like Scott's version of him. The veracity of art does not run exactly in harmony with the veracity of history, but neither is it completely orthogonal to it. But that doesn't matter. Our own individual memories are immensely plastic and dubious, fictions based on fact. Our collective memories likewise.

What's more striking, I think, is what happens to this idea in an age (like ours) when science fiction increasingly becomes the cultural dominant. After all, unlike Homer, Shakespeare or Dickens, SF is in the business of future-ing its stories, no? As early as 1910, G K Chesterton pondered the paradoxes of predicating ‘memoir’ on futurity:
The modern man no longer presents the memoirs of his great grandfather; but is engaged in writing a detailed and authoritative biography of his great-grandson. Instead of trembling before the spectres of the dead, we shudder abjectly under the shadow of the babe unborn. This spirit is apparent everywhere, even to the creation of a form of futurist romance. Sir Walter Scott stands at the dawn of the nineteenth century for the novel of the past; Mr. H. G. Wells stands at the dawn of the twentieth century for the novel of the future. The old story, we know, was supposed to begin: “Late on a winter’s evening two horsemen might have been seen—.” The new story has to begin: “Late on a winter’s evening two aviators will be seen—.” The movement is not without its elements of charm; there is something spirited, if eccentric, in the sight of so many people fighting over again the fights that have not yet happened; of people still glowing with the memory of tomorrow morning. A man in advance of the age is a familiar phrase enough. An age in advance of the age is really rather odd. [Chesterton, What’s Wrong With the World (1910), 24-25]
That few science fiction novels are actually written in the future tense doesn’t invalidate Chesterton’s observation. A novel notionally set in 2900, narrated by an omniscient narrator in the past tense, interpellates us hypothetically into some post-2900 world. Science fiction adds a bracingly vertiginous sense to memory. In Frank Herbert’s Dune Messiah (1969), Paul Atreides—the prophet/messiah leader of the inhabitants of a desert planet—is blinded. According to the rather severe code of his tribe he must be sent into the wilderness to die, but he avoids this fate in part by demonstrating that he can still see, after a fashion. His prophetic visions of the future are so precise, and so visual, that it is possible for him to remember past visions he previously had of the present moment, and use them, though he is presently eyeless, to navigate and interact with his world as if he were sighted. The way memory operates here, as a paradoxical present memory of the past’s future, is the perfect emblem of science fiction’s tricksy dramatization of memory. There are science fiction tales of artificial memory, enhanced memory, memory that works forwards rather than backwards; of robot memory and cosmic memory. And given the genre’s predilection for fantasies of total power, it does not surprise us that there are many SF fables of total memory.

That said, it is a story not often bracketed with ‘Pulp SF’—Borges’ ‘Funes the Memorious’—that is typically deployed when notions of ‘total’ memory are discussed. And it stands as a useful conceptual diagnostic for the thesis I’m sketching here. It’s a trivial exercise translating Borges’ hauntingly oblique narrative into the language of Hard SF. What might the world look like in the case where digital memory is so capacious, and so well integrated into our daily lives, as to give us functionally total memories? This, to be clear, is not to posit a world in which we carry around in our minds the total memory of everything—that would indeed be a cripplingly debilitating state of mind. But our present-day incomplete memories don’t work that way either. We remember selectively. Indeed, the circumstance (let’s say for example: the post-traumatic circumstance) in which we are unable to deselect certain memories is a grievous one, such that people who suffer from it are advised to seek professional psychiatric help. So, given that we use our memories selectively, and are comfortable remembering only what we need when we need it, the future I’m anticipating would only be a sort of augmentation of the present state of affairs.

You would go through your life with your entire previous existence accessible to you at will. Would this be a good thing? Or do you tend to the view, fired perhaps by the Funes-like consensus that total memory would be in some sense disastrous, that it would not? ‘If somebody could retain in his memory everything he had experienced,’ claimed Milan Kundera, in his novel Ignorance (2000), ‘if he could at any time call up any fragment of his past, he would be nothing like human beings: neither his loves nor his friendship would resemble ours’. Funes himself dies young, after all; as if simply worn out by his prodigious memoriousness. We might conclude: all our efforts are focussed on attempting to make our ‘memory’ better. Now that technology has overtaken us we should, on the contrary, be pondering how we can most creatively and with what spiritual utilitarianism, make it worse.

I shall register the obvious objection. Since total recall would crowd out actual experience with the minute-for-minute remembrance of earlier experiences, we would have to be very selective in the ways we access our new powers. The question then becomes: what would our processes of selection be? How robust? How reliable? What if we put in place (as my thought-experiment digital future certainly enables us to do) a filter that only allows us to access happy memories? Would this change our sense of ourselves—make us more content, less gloomy, happier in our lot? Would this in turn really turn us into Kunderan alien beings? The problem becomes ethical: it is surely mendacious to remember only the good times. The ‘reality’ is both good and bad, and fidelity to actuality requires us to balance happy memories with sad ones. This objection, however, depends upon a category error, embodied in the tense. Where memory is concerned reality is not an ‘is’; reality is always a ‘was’. Memories feed into the reality of present existence, but never in an unmediated or unselective way. Indeed, current research tends to suggest that something like the opposite of my notional filter actually operates in human memory—that as we get older we tend to remember the unhappy events of the past over the happier ones.
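Purely as a thought experiment, the ‘happiness filter’ above can be sketched in a few lines (the memory records and their valence scores are invented for illustration; no such system or API exists): recall becomes a query with a tunable threshold, and the ethical question is simply which threshold becomes the default.

```python
# Toy model of the hypothetical selective-recall filter.
# Each 'memory' carries an invented valence score: positive = happy, negative = sad.
memories = [
    {"event": "wedding day", "valence": 0.9},
    {"event": "missed the anniversary", "valence": -0.7},
    {"event": "first drive in the new car", "valence": 0.5},
    {"event": "argument at work", "valence": -0.4},
]

def recall(memories, min_valence=0.0):
    """Return only those memories at or above the chosen valence threshold."""
    return [m["event"] for m in memories if m["valence"] >= min_valence]

print(recall(memories))        # filtered recall: only the happy memories
print(recall(memories, -1.0))  # unfiltered 'reality': good and bad alike
```

The mendacity the objection worries about lives entirely in the default value of `min_valence`: set it at zero and the past is all sunshine; set it at minus one and you get the balanced ‘was’.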

The presumption that ‘total memory’ would in some sense be damaging to us strikes me as superstition. Funes’s imaginary experiences are a poor match for the sorts of thought-experiments to which his name has been, latterly, attached. Christian Moraru toys with describing Funes’ situation as one of disorder, but then has second thoughts. ‘Disorder may not be the right word here since Funes’s memory retrieves a thoroughly integrated, systematic, and infinite world. Taking to a Kabbalistic extreme Marcel Proust’s spontaneous memory, one present fact or detail involuntarily leads in Funes’s endlessly relational universe to a “thing (in the) past” and that to another, and so on. Remembrance reaches deeper and deeper and concurrently branches off, in an equally ceaseless search for an ever-elusive origin or original memory.’ He goes on:
With one quick look, you and I perceive three wineglasses on a table; Funes perceived every grape that had been pressed into the wine and all the stalks and tendrils of its vineyard. He knew the forms of the clouds in the southern sky on the morning of April 20, 1882, and he could compare them in his memory with the veins in the marbled binding of a book he had seen once, or with the feathers of spray lifted by an oar on the Rio Negro on the eve of the Battle of Quebracho. Nor were these memories simple—every visual image was linked to muscular sensations, thermal sensations, and so on. He was able to reconstruct every dream, every daydream he had ever had. Two or three times he had reconstructed an entire day; he had never once erred or faltered, but each reconstruction had itself taken an entire day. [Christian Moraru, Memorious Discourse: Reprise and Representation in Postmodernism (Fairleigh Dickinson University Press 2005), 21-22]
Moraru finds in Funes’ memory ‘a trope of postmodern discourse’ which he defines as ‘representation that operates digressively, and conspicuously so, through other representations.’ He is interested in the ‘interrelational nature of postmodern representation, its quintessential intertextuality … [that] in saying itself says the other, as it were, re-cites other words, speaks other idioms, the already- and elsewhere-spoken and written.’ Actual memory does not think back to drinking wine in the sunshine and thereby recall not just the wine and the sunshine but the individual life-stories of each and every grape that was grown in order to be pressed into the juice that eventually fermented into wine. On an individual level that would be magic, not memory. But there is a sense, a technological-global sense, in which Moore’s law is pointing us towards precisely that conclusion, at a collective social and cultural level.

The real message of ‘Funes’ is not that a complete memory would render life unliveable (lying in a darkened room, taking a whole day to remember a previous day in every detail, dying young and so on). The real message is: a perfect memory would be transcendent. It would enable us to recall not just the things that had happened to us, but the things that happened to everyone and everything with which we came into contact. This, of course, has no brain-physiological verisimilitude, but it speaks to a deeper sense of the potency of memory. In memory we construct another world that goes beyond our world. Imagination can do this too, but for many people imagination is weaker than memory; or perhaps it would be more accurate to say, imagination manifests itself most powerfully in memory, in the buried processes of selection and augmentation. Not for nothing do we dignify processes of recollection beyond the simplest as memory palaces.


The Philip K Dick story ‘We Can Remember It For You Wholesale’ (1966) sports one of the truly great SF story titles, I think; a title that has been poorly served by its two Hollywood movie adaptations, both of which clunk it down to Total Recall. Dick’s protagonist, the flinchingly-named Douglas Quail, can’t afford the holiday-trip to Mars he earnestly desires. So he visits REKAL, a company that promises to insert into his brain the ‘extra-factual memory’ of a trip to Mars; and not as a mere tourist either, but as a secret agent. Exciting! Not real, but (Dick's premise tacitly prompts us to think) once something has happened it is no longer real either, it's just a sort-of phantasm in our memorious brains. Fake the phantasm and you can obviate the expense and inconvenience of actually doing the things, perhaps dangerous things, needful to be remembered.

The story goes on to explore a narrative ambiguity: is the superspy adventure an artificial memory, or has the REKAL process accidentally unearthed real memories of Quail as a government assassin? In the original story, Quail returns to REKAL to have a false memory of detailed psychiatric analysis inserted in order to restore his psychological balance and prevent any further urge to visit REKAL, which is quite a nice twist. But Dick, never knowingly under-twisted, adds another: this return visit uncovers deeper ‘actual’ memories (or else implants them) in which Quail remembers being abducted by aliens at the age of nine. Touched by his innate goodness, these aliens decided to postpone their invasion of Earth until after his death. This means that, merely by staying alive, Quail is protecting the Earth from disaster. He is, in one sense, the single most important individual alive.

Dick’s main theme is not just that memory is unreliable—hardly a novel observation, that—and not even the more radical idea that ‘real’ and ‘artificial’ memories have an equal validity as far as the process of remembering goes. It’s actually that ‘real’ and ‘made-up’ memories in competition in the mind nonetheless tend to gravitate back to narratives of ego inflation. What I always remember is that I am the centre of memory, that the events and persons of the universe are arrayed about me. The same circumstance does not normally obtain in matters of moment-to-moment perception (megalomania excepted) because perception involves us in intersubjectivity in a way memory does not. Or more precisely, memory is a particular and involuted form of intersubjectivity, where the two subjectivities interacting are present-me and past-me.

The movie adaptations of this story are, in a way, even more interesting. Both jettison Dick’s complicated conceit of memory, ambiguously real or artificial, layered upon memory, in favour of a simpler narrative line, better suited to the visual medium in which the story is now being told. Quail thinks himself a nobody, a mere construction worker. He goes to REKAL to be given artificial memories of a more exciting life. These memories trigger authentic memories of his actual life as a spy. In both films (though to a lesser degree in the earlier of them) the strong implication is that he has a true identity, and that it is this latter one. The bulk of both storylines is then given over to the cinematic storytelling of his spy-action adventures.

What’s so fascinating about this is the way both texts portray memory (something that is, we might say, by its nature recollected after the event) as a vivid and kinetic ongoing present set of experiences. Neither movie has its protagonist sitting in a chair remembering being a spy; both, rather, show Quail running, fighting, shooting and getting the girl in the cinematic present. Since we all know how memory works (and that it doesn’t work this way) it seems plain that some strange dislocation is happening at the level of representation of the text. We are shown Quail living his quotidian life; we are shown that life transformed seamlessly into his artificial memories of being a spy. In both movie versions a hinge-scene is staged where an individual attempts to intervene in the action-adventure shoot-em-up Quail’s life has become. These individuals both tell Quail that the world he is currently experiencing is not real; and that if he perseveres in this fantasy it will kill him. Quail is offered a pill, a token (it is claimed) of his willingness to give up the dream and return to the real world. In both films Quail suspects a ruse and refuses the pill in the most violent way imaginable, by shooting dead the messenger who carries it.

This, to be clear, is a special case of a more general SF trope. There is no shortage of texts that develop the idea of a virtual reality or drug-created alternate reality that runs concurrent with actual reality—the Matrix films are probably the most famous iteration of this, but there are scores of examples from science fiction more generally. Linked to this is the ‘dream narrative’ trope, where John Bunyan or Alice explore a continuous but fantastical timeline that is revealed, at the story’s end, to have been running in parallel with actual reality through the logic of dreams. In both the case of ‘virtual reality’ and ‘dreaming’ it’s an easily comprehensible logic that moves from actual reality into the alternate reality and back again. The Total Recall movies, though—and the story on which they are based—do something more dislocating. Memory is not an alternative parallel reality in the way that VR or dreaming is. Nonetheless these texts treat it as though it is. Remembering something that happened previously is elided with experiencing something now. This is to drag the events remembered out of the past and into the immediacy of the present; or perhaps it is to retard the experience of the present into something always already recalled.

This may look like a trivial misalignment of narrative logics, or perhaps only a limitation of the representational logic of cinema. Think of the visual cliché: a character is shown on-screen ‘remembering’: wavy lines flow across the image and a dissolve-cut takes us to ‘the remembered events’. But Total Recall short-circuits this convention: memory happens in the present, as ongoing narrative. This in turn means that the distinction between present and past, the distinction which it is memory’s main function to reinforce, vanishes. Memory is no longer of the past, or even rooted in the past; it is refashioned as a technological artifice (‘REKAL’) that configures ‘memory’ as the continuous present, and augments that present-ness by making the happening-now into a continuous adrenalized onward rushing (running, fighting, escaping, plunging on).

This, I think, is the implication of a 21st-Century Funes. A technologically actualised ‘total’ memory could well destabilise the authentic ‘reality’ of the remembered experience. It might mean that we get to set our own selection algorithms for memory recall, such that we only recall those memories that make us happy, or paint us in a good light—that, for instance, reinforce the sense we have of ourselves as action heroes rather than boring 9-to-5ers. It might mean that we erode the difference between ‘real’ memory, the memory of artifice (films we have seen, books we have read) and actual artificial memory itself. This is the old threat of Postmodernism, exhilarating and alarming in equal measure: the notion that simulacra really will come to precede the things they supposedly copy. But I’m suggesting something more. Total memory, as Funes tacitly and Total Recall explicitly say, will transcend the past. It will break down the barrier between past and present, and reconfigure it as a more vital now. It will subsume the particularity of memory and render it wholesale.

from Chapter 3. Memory and Religion

Like Judaism, Christianity and Islam are both memorious religions. Religion need not necessarily be so, I think, but it's presumably not a coincidence that the two biggest religions in the world today are. Roland Bainton argues:
Judaism is a religion of history and as such may be contrasted with both religions of nature and religions of contemplation. Religions of nature discover God in the surrounding universe; for example, in the orderly course of the heavenly bodies, or more frequently in the recurring cycle of withering and resurgence of vegetation. This cycle is interpreted as the dying and rising of a god in whose experience the devotee can share through various ritual acts, and thus become divine and immortal. For such a religion the past is not important, since the cycle of the seasons is the same one year as the next. Religions of contemplation, at the other pole, regard the physical world as an impediment to the spirit which, abstracted from the things of sense, must rise by contemplation to union with the divine. The sense of time itself is to be transcended, so that here again history is of no import. But religions of history, like Judaism, discover God "in his mighty acts among the children of men". Such a religion is a compound of memory and hope. It looks backward to what God has already done ... [and it] looks forward with faith: remembrance is a reminder that God will not forsake his own. [Bainton, The Penguin History of Christianity (volume 1, 1967), 9]
Memory and history are interconnected; history (personal and collective) being what we remember and memory (individual and textual) being how we access history. And when you look at it like that it's quite surprising that it is the religions of history that so dominate human worship. The problematic is a large one, after all: if God intervenes in human history at a certain point in time, what about all the people who happened to be born and to die before that moment? Religions of nature and contemplation can embrace them easily. Religions of history must necessarily come to terms with the ruthlessness of history. History, after all, is famously a winners' discourse. What about the losers? Calling them (say) virtuous pagans, or pretending they simply don't exist, jars awkwardly with Christian and Islamic emphases on the excluded, the underdog and the poor.

Immediate or strictly contemporaneous religions (Scientology, say) tend to seem absurd to us, even though the miracles they declare are no more intrinsically risible than those of Christianity, Islam or Hinduism. The reason this is so, I suspect, is because we are so acculturated to the idea of religious belief working as memory rather than as to-hand experience … or at least not as this latter for most people (ecstatics and schizophrenics excepted, I mean).

As is the case with our memory, many details are omitted, and many contradictions and infelicities reworked into more-than-truly-contiguous narratives. Like memory, religion doesn’t always or even particularly intrude on everyday living—it requires a will-to-contemplation to evoke it, although a properly functioning religion is bound to provide copious aides-memoire (liturgy, ceremony, Sunday schools and their equivalents and so on) to help in this respect. Consulting family photographs, after all, has a liturgical aspect to it for many of us; in Pixar's Coco (dir. Lee Unkrich 2017) these family photos and their place in the lives of the living literally translate into the wellbeing and status of the dead generations in the afterlife.

I'd suggest that most religion asks us to look back, to honour our mothers and fathers, to worship our ancestors, to consider the origins of life and the cosmos and be thankful for them; but of course there are also portions of religion that ask us to look forward. The believer is to orient her life by her future reward or punishment. The Bible is, by weight, mostly history; but it ends as future-prophecy. Nonetheless, I'd be tempted to argue that the memory-gravity of religion means that those portions of religious practice or thought that have a significant future component end up doing that strange thing of construing future apocalypse as memory … the odd past-oriented backwardness of St John’s revealed future, for instance. Indeed, the more I think about it, the more it strikes me that this is one of the things that science fiction has in common with religion.

Religion endures best in adulthood if it has been impressed upon us in childhood. This means that we are, when we live in faith, steering ourselves according to how we remember our younger days. I suspect something like this is behind Jesus's celebrated ‘except ye become as little children, ye shall not enter into the kingdom of heaven’.

from Chapter 4: Irrepressible Memory

According to the New Scientist (‘Déjà vu: Where fact meets fantasy’ by Helen Phillips) only 10% of people claim never to have experienced déjà vu (I'm one of that ten, actually). For some people, at the other end of the scale, it becomes a veritable psychopathology:
Mr P, an 80-year-old Polish émigré and former engineer, knew he had memory problems, but it was his wife who described it as a permanent sense of déjà vu. He refused to watch TV or read a newspaper, as he claimed to have seen everything before. When he went out walking he said the same birds sang in the same trees and the same cars drove past at the same time every day. His doctor said he should see a memory specialist, but Mr P refused. He was convinced that he had already been.
The article rehearses arguments from brain chemistry to explain this widespread feeling (perhaps it is indeed ‘the consequence of a dissociation between familiarity and recall’). But I read the article wondering: could something as banal and everyday as this be behind Nietzsche's unflinching adherence to the doctrine of Eternal Recurrence? (A philosophical slogan: ‘Eternal Return, the consequence of a dissociation between familiarity and recall...’) Could memory, Funes-style, prove so strong that it overwhelms us, strong-arms us to the floor? Should we be afraid of memory?

We're not, because our day-to-day experience of memory, as we stand there trying to remember where we put our car keys, or what the second line of Twelfth Night is, or what we even came downstairs for, is of an elusiveness that indexes fragility. To speak in terms of the opposite of this is a convention, but an empty one. Some reviewerish boilerplate, from Jane Yeh in an old edition of the TLS:
... should appeal to a wide readership, given the universal scope of its themes--family tensions, and the adult author's changing relationship to her parents, the power of memory ...
But of course we don't, actually, talk of ‘the power of memory’. Rather, all our experience leads us to the consideration of the weakness of memory. This is not just a question of the feebleness of our powers of recall (the necessary, non-Funes weakness), or the way memory is a sixty-pound weakling compared to the muscular shaping requirements of our preconceptions, our repressive superegos and so on. It is to challenge the idea that simply recalling something is ‘powerful’ in its own right: as if we're sitting in the cinema of our minds in 1890 and are amazed simply by virtue of the fact that anything is projected on the screen at all. It betrays, I suppose, a tacit belief that memory ought not to be able to move us, to influence our present; that we ought to live in a sort of unfettered continuous present. Or maybe it's a simple misprision: for memory read the past. Two things almost wholly unrelated, however often they're confused.

This extends, I think, even to traumatic memories. There are instances where memory overwhelms the rememberer, as in PTSD, but these instances are not the default, even though trauma of varying intensities is the default of ordinary living. Not thinking about things is, actually, a fairly effective way of dealing with trauma and upset; and not thinking about things can certainly become a habit. But this isn’t the same thing as forgetting, and certainly not the same thing as ‘repressing’ a memory. Freud's insistence that the repressed always returns is more a statement of faith than an evidence-based assertion. I mean, it strikes me as a good faith. It says: nothing stays secret for ever, you cannot bury anything permanently, your true nature will eventually emerge, that affair you had will eventually come to light, those memories you are distracting yourself from don't go away just because you are distracting yourself from them (although, as I say, the distraction can perhaps be prolonged indefinitely). This is a worthwhile ethos by which to live life. It is not true, though. Memories, it seems, are not only sometimes lost; the default position for memories is to lose them, or rather to overwrite them with simplified neural tags or thumbnail versions. We do this to stop our minds exploding, but it means that it is not repressed memory that always returns, but repressed desire (the desire that shaped the recasting of the memory in the first place). That sounds truer: short of neural-surgical intervention, repressed desire always does return ... it just doesn't necessarily return at the same strength. If memory is strong, then total memory would be omnipotent. But if memory is weak, then total memory would follow a different hyperbolic trajectory, into nothingness.

from Chapter 5: Inaccessible Memory

Coming hard after the previous chapter, and its claim concerning the irrepressibility of memory, the title of this chapter runs the risk of seeming mere trolling. But, if you'll bear with me, I have a particular something in mind.

When we remember something particular from our childhoods, we recognise the specific recollection as memory. When I remember, just now, that I left the iron on, I recognise that as memory. Both forms of memory have content, and are comprehensible, and that might tempt us into thinking that having content and being comprehensible are two features of memory as such: that if we have a memory that baffles us, then that just means that we haven't contextualised it, or understood it. I think this is wrong. I think most memory, and the most important memory, provides neither of those two things. If we define memory by its accessibility then we rule out from the very concept of memory those memorious processes that are not accessible, even if those processes are vital to memory and mental health.

I'll give you an example of what I mean: dreams as memory.

I need to be specific, here. We all dream, and sometimes we remember what we dream and sometimes we don't. But those remembered-dreams are second-order memories, friable attempts to translate one kind of (non-rational, not consciously controlled) mental process into another that is quite different. I'm not talking about our memories of dreams; I'm talking about our dreaming as itself an iteration of memory.

Because of course dreams are a way of remembering stuff, often the stuff that happened in the day. We know that dreams ‘process’ the events of the day (and sometimes other days) and our anxieties and desires pertaining to them—we process these events, in other words, by remembering them in this peculiar way we call dreaming. More, we know that if we are prevented from dreaming we die. Torturers from ancient Rome to the CIA have long known this. Doctors diagnose the rare but real condition fatal insomnia: ‘a neurodegenerative disease eventually resulting in a complete inability to go past stage 1 of NREM sleep. In addition to insomnia, patients may experience panic attacks, paranoia, phobias, hallucinations, rapid weight loss, and dementia. Death usually occurs between 7 and 36 months from onset.’ If I fail to remember where I put my car-keys, even if I permanently fail to remember this thing, it will not kill me. In this sense dreaming-as-remembering is much, much more important than remembering-as-conscious-recall.

If we don't tend to think of dreams as a fourth kind of memory (alongside sensory memory, short-term memory and long-term memory) it's because we are hamstrung by a prior assumption that memory must be accessible and conscious to count as memory. But I wonder if the absolute physiological necessity of dreaming, and the relative disposability of the other three kinds of memory (for even patients with severe neurological decline who have lost both long- and short-term memory can carry on living otherwise just fine), suggests that not only are we ignoring a vital kind of memory, we have got the relative importance of these things entirely the wrong way about. What if, instead of dreams being a shadowy and dislocated imitation of ‘real’ memory, long-term and short-term memory are both the fundamentally inessential tips of a much larger subconscious iceberg? Perhaps most of our remembering happens unconsciously, inaccessibly, in somnicreative form?

I say so in part, of course, because it situates my earlier claims that fiction (that art, that culture in the broadest sense) is a mode of memory in both the individual and the collective sense. But these processes of memory are not directly analogous to what happens in our brains when we retrieve either recent or archived memories. They are closer to somatic memory, except that they are rarely actually somatic. And they are, I think, the bulk of memory as it figures.

Say, rather than repressing or purging our memories, we are (short of surgical interventions that literally excise portions of our brain) remembering all the time, in a nexus of ways that are inaccessible, or largely inaccessible, to our conscious minds. Say that this process of continuous, paraliminal remembering actually constitutes our consciousness: is the bulk of what consciousness means for a living being, and that the stuff we consciously think of, the stuff of which we are aware and over which we exercise a degree of mental control, is the excrescence, the bit of that process that pokes out into the realm of self-aware mentation.

Perhaps this seems far-fetched to you. I can see why. We can, after all, only discuss memory in the idiom of consciousness and rationality. If the bulk of memory actually happens outwith those two territories then it's hard to see what we can usefully say about it. It's like Kant's exhaustive but tentative groping around the shape the inaccessible Ding an Sich leaves in the accessible but fallible and untrustworthy spread of human perceptions. By what process do we transpose the alien idiom of memory we call ‘dream’ into the graspable idiom of consciousness as such?

Adam Phillips, in Terrors and Experts, says this about the interpretation of dreams: ‘a dream is enigmatic—it invites interpretation, intrigues us—because it has transformed something unacceptable, through what Freud calls the dream work, into something puzzling. It is assumed that the unacceptable is something, once the dream has been interpreted, that we are able to recognize and understand. And this is because it belongs to us; we are playing hide-and-seek, but only with ourselves. In the dream the forbidden may become merely eccentric or dazzlingly banal; but only the familiar is ever in disguise. The interpreter, paradoxically—the expert on dreams—is in search of the ordinary.’ [64]

But why must the extraordinary be turned into the ordinary? That sounds like false reckoning (or false translation) to me. The implication here is 'because it started out that way'; but that's surely not true: dreams are as likely, or more likely, to grind their metaphorical molars upon extraordinary aspects of our life. The perfectly habitual aspects of it won't snag the unconscious's interest. So could it be that dream-interpreters turn the extraordinary into the ordinary only because the ordinary sounds more comprehensible to us, because it produces the sort of narrative the dreamer prefers to wake up to? (‘...those skinny cattle eating the fat cattle and not getting fat? That's about harvests, mate.’) But if the currency of dreams is the extraordinary, common sense suggests that the interpretation of dreams should be extraordinary too—suggests that the function of the dreaming is bound up with its extraordinariness. The sense of recognition Phillips is talking about here, that ‘aha! that's what it means!’, is all about the transcendent rush, the poetry, not about the mundanity. But the very fact that it's a rush, the very thrill of it, ought to make us suspicious. It is not the currency of true memory to elate us, after all. It's cool, but it's not the truth.

This is the flaw in the Biblical narrative of Joseph: his dreams are too rational, too strictly allegorical. They don't have the flavour, the vibe, of actual dreams. We can, I think, tell the difference between a report of an actual dream and the faux-dream confected for, as it might be, a novel. Writer C K Stead says as much: ‘In my most recently published novel I decided one or other of the central characters should experience or remember a significant dream in each of seven chapters. When I tried to invent these they seemed in some indefinable way fake; so I hunted through old notebooks and found dreams I had recorded which could be used with a minimum of alteration.’ Most writers will know what he means. It's one reason I like this Idries Shah story:
Nasrudin dreamt that he had Satan's beard in his hand. Tugging the hair he cried: 'The pain you feel is nothing compared to that which you inflict on the mortals you lead astray!' And he gave the beard such a tug that he woke up yelling in agony. Only then did he realise that the beard he held in his hand was his own. [Shah, The World of Nasrudin (Octagon Press 2003), 438]
One of the things that's cool about it is the way it captures the feel of an actual dream. But mostly, of course, it's the implication that our subconscious not only understands but is capable of timing the revelation comically to deflate the dark grandeur of our secret fantasies. Nasrudin's dream knows more about Nasrudin than he does, I think.  And by extension all our dreams know more about all of us, and remember more about all of us, than we do ourselves.

This, I think, is the most compelling part of recentering dreams in our accounts of memory. Because doing so recognises the extent to which we are all artists.
The beauteous appearance of the dream-worlds, in the production of which every man is a perfect artist, is the presupposition of all plastic art, and ... half of poetry also. We take delight in the immediate apprehension of form; all forms speak to us; there is nothing indifferent, nothing superfluous. But, together with the highest life of this dream-reality we also have, glimmering through it, the sensation of its appearance: such at least is my experience, as to the frequency, ay, normality of which I could adduce many proofs, as also the sayings of the poets. ... And perhaps many a one will, like myself, recollect having sometimes called out cheeringly and not without success amid the dangers and terrors of dream-life: “It is a dream! I will dream on!” I have likewise been told of persons capable of continuing the causality of one and the same dream for three and even more successive nights: all of which facts clearly testify that our innermost being, the common substratum of all of us, experiences our dreams with deep joy and cheerful acquiescence. [Nietzsche, Birth of Tragedy (transl. Hausmann), 23-24]
Blake was fond of the verse from Numbers (11:29) ‘would to God that all the Lords people were Prophets!’ I feel the same, but for artists. And the ongoing progression of Moore's Law and the interpenetration of our lives with technology that facilitates our expression, brings that utopia, that mode of remembering, ever closer. Lord, as Dickens prayed, keep my memory green.

from the Afterword.

Some aspects of the ever-increasing technological interpenetration of our lives cater to our conscious minds. Some address our subconscious. It may be worth speculating as to what the version of memory argued for here—a total memory predicated upon continuing improvements in processing power, encompassing not only ordinary information-retrieval instances but also a larger collective artistic or religious communal memory, and even (perhaps) the buried part of the iceberg of memory to which we don't have access—would look like in practice. It might free us from the vagaries of physiological memory, its vulnerabilities and intermittencies. By the same token it might cast us upon the not-so-tender mercies of algorithms. Memory might become the province of the strategies of control of those congeries of State Power currently asserting dominance. There are two current-day strategies here. One is a Nineteen Eighty-Four approach, typified by contemporary China, which believes in a top-down authoritarian domination of online activity via restriction, censorship and punishment. The other, much more widely pursued, is the Brave New World approach of the West, where punters are told they are free to frolic in unlimited online pastures when in fact a combination of targeted nudging, ever-evolving algorithms and the sheer soma-like excess of hedonistic online content confines and herds the user even more effectively than Chinese top-down control. The ‘internet’ (to generalise ridiculously) can be wielded by Foucauldian Power to, say, ensure Brexit or the election of Donald Trump, to promote certain socio-cultural memories and excise others, and all without the apparatus of apparent oppression. Those who are conscious of oppression and who can see their oppressors are, in a sense, better off (because they at least have a clear target) than those who are oppressed but can neither identify a specific tyrant nor even be sure that they are oppressed.
As technology becomes an increasing part of our memory, on the hyperbolic path towards total memory, these latter strategies might easily become constitutive of memory as such. Our future memories might well become bizarre hybrids of actual remembrance and Orwellian memory-hole, and the fact that we won't necessarily even be aware of these controlling dynamics might well align this new memory with the buried portion of our actual memory, our dream-memory and other unconscious memorious drives. It's not, I concede, a hopeful prognosis. Of course, I may be wrong, may (indeed) be profoundly wrong. But it seems to me, looking around, that we have already tacitly conceded that our collective consciousness (I thumbnail this as ‘the internet’ but it's larger than that) is already apprehending important social and political questions not by ratiocination but according to a set of unconscious processes not strictly accessible to our conscious wills. We are, in other words, already remembering our past—and so, shaping our future—in the way dreams remember things rather than the way consciousness remembers things, and I see no reason why that might not intensify into the future. Our tech, I would hazard, will bed that in. We will increasingly dream our memories, both individual and collective, and do so much more comprehensively thanks to technology. In a late poem, the great Les Murray seems to put his finger on something.
Routines of decaying time
fade, and your waking life
gets laborious as science.

You huddle in, becoming
the deathless younger self
who will survive your dreams
and vanish in surviving.
I wonder.


  1. For what it’s worth, I’m one of those people to whom Fodor seems obviously right and Clark obviously wrong. Perhaps you need to snark at me more?

    You write: “ But lots of the data on my computer is ‘about’ things. Arguably, even the arrangement of petals on a flower is ‘about’ something (it’s about how lovely the nectar is inside; it’s about attracting insects).” I think maybe you are misunderstanding what Fodor means by “about”? Surely he would say that the data on your computer is just a bunch of zeroes and ones, and that any aboutness happens in your mind when it consults that data. And that nectar in a flower isn’t “about attracting insects,” it just attracts insects. The aboutness occurs in the mind of a human being contemplating the relationship between nectar and insects.

    Fodor is very funny on philosophers’ hatred of untenable dualisms, and I think he’s right to say that that hatred leads them to reject dualisms that are not only tenable but necessary. I don’t think there’s an explanatory gain when you go from “My library and notebooks are tools I use to increase the power of my brain” to “My brain includes my library and notebooks.” Indeed, I think there’s a conceptual loss.

    1. My friend, I think you've put up with more than a lifetime's dose of my snark already.

      The petals/nectar example probably isn't very well chosen here; but "aboutness happens in the mind" is exactly the point, isn't it? By which I mean, the question under discussion is precisely what constitutes "the mind" in this case? Even in a minimalist, Fodorian understanding memories aren't "about" anything (they're just arrangements of neurones, or matrices of electrical and chemical interactions, or whatever) until active consciousness retrieves them, or they press upon our thoughts, or (in my reading here) we dream them, or whatever. And if that's true then what is the substantive difference between the reservoir of neuronal arrangements inside my brain and the reservoir of ones-and-zeroes on my computer? Or to put it more snarkily: "hah, yer Mom!"

  2. > the question under discussion is precisely what constitutes "the mind" in this case?

    Well, if you want to go there — If you were to visit my office, and I were to say “Come in and take a look at my mind,” or if I were to ask you to come to Kensington Gardens with me because my mind is so lovely this time of year, I think you’d be confused. That’s why I believe it’s useful to distinguish between my mind and the objects external to my mind with which it interacts.

    1. Well, it seems to me the problem with saying "come to Kensington Gardens with me because my mind is so lovely this time of year" is the "my" part. Kensington Gardens isn't yours, after all. But having me visit your office, seeing your books all arranged on shelves ... how are they not part of your mind? Your office, your books, they shaped you, when you need more detail on exactly how they did so you can consult them. They're yours, and they're an annex to your mind. Is it so hard to parse?

    2. They're not part of my mind because I could give them all away and my mind would still be the same. There's a very loose sense in which those experiences and places and events and things that have shaped my mind are part of my mind — we use such phraseology all the time. "That school will always be a part of me." But I don't think such metaphors can successfully be literalized in the way Clark wants to do.

    3. I disagree: if you gave away all your books your mind wouldn't be quite the same, I think: it wouldn't be able to develop the complex literary-critical ideas it presently does (unless you replaced the lost books with new copies, or looked online) for instance ... the analogy that strikes me is: if you suffered an injury to the grey matter of your brain and lost a bunch of specific memories, you wouldn't be quite the same person you were before. You'd still be you, of course, but that's not to suggest that those little grey cells aren't a part of the architecture of your mind.

      I think there's slippage when you equate this kind of thing with the broader sense of "my experiences are a part of me because they have shaped me". True, that, but not the same thing. All our experiences shape us in lots of ways; but only some of our experiences can be recalled to memory, and where those experiences are concerned the question is: does it prejudice the experience, or its shaping power upon you, if your recollection is (a) rummaging around in your own mind, otherwise unaided, or (b) consulting a diary entry you wrote as an aide-memoire? Surely (a) and (b) are versions of the same thing?

      Your resistance to this idea is fascinating to me, I must say. It seems very clear to me. Your glasses aren't part of your eyeball, but they are part of your vision. Your books and iPhone etc aren't part of your brain, but they are part of your memory.

    4. I resist the idea because it’s obviously false, duh.

      I don’t understand your second paragraph so I can’t reply to that. But I would say that my little grey cells are part of the architecture of my mind, but it is not true that, as Clark says, my iPhone and my library are part of the architecture of my mind. My mind is what it is in part because of my encounters with my iPhone and my library, but that is not the same thing, for precisely the same reason that lifting dumbbells to build up my muscles does not make the dumbbells part of my body. My mind is what it is in part because of my encounters with you, also, but that does not mean that your little grey cells are part of my mind as my own little gray cells are part of my mind — mine are gray rather than grey because I’m an American.

      To follow Clark’s argument is to lose the ability to distinguish between self and world, which is either a highly advanced form of Buddhist contemplation or mere solipsism, and I am neither a Buddhist nor a solipsist.

    5. A related thought: Clark’s thesis is wrong for the same reason Aristotle was wrong when he said that a slave is part of his master, an extension (as McLuhan might have put it) of the master.

    6. "I don’t understand your second paragraph so I can’t reply to that" ... OK so I think I see what's happened here (though I could be wrong, and you'll tell me if I've misread you). You think I'm arguing that there's no real difference between the inside of our brains and the outside world, between self and other. But I'm not arguing that, because I don't believe that's true. My point is not Buddhist (via whatever iteration of Buddhism). If I had to tag it to a thinker or school or whatnot, it would be, I guess, Deleuze-Guattarian.

      I'm talking about prostheses.

      We all use prostheses, of various kinds. Spectacles are common, for instance. When are spectacles most themselves, most spectacles-ish? As Heidegger would say, when we forget we are even wearing them, they are just correcting our eyesight and letting us get on with our day-to-day. The glasses then become part of how our eyes work. They're not literally part of our eyeballs, but they are part of our vision.

      Dumbbells are not a prosthesis; they are a tool (for making our muscles bigger, say). But a tool is a different thing to a prosthesis. A replacement hip-joint or a pacemaker is a prosthesis: a pacemaker is not the same thing as heart-muscle, but it's part of our functioning heart (if we have one). Would you insist that a person's pacemaker is not part of their body?

      Pacemakers are cardiac prostheses, but I'm interested in memorious prostheses. Homer had to hold all the Iliad in his mind. I don't have to do this, because I have it on my shelves as a book (as several books in fact, and online and so on). The book of the Iliad is not literally part of my brain, but it is part of my memory, because it's how I recall the Iliad (when I want to). By extension this is true of all my books; and my address book; and the telephone directory on my phone and so on and so forth. These things are memory-prostheses, like pacemakers or spectacles.

      The Aristotelian example is a ticklish one, though. Because it would be hard to deny that slavers sometimes do use slaves as prostheses: if my legs are weak, I might get my slave to push me around in a chair, say, thereby turning him into an ambulatory prosthesis. But, I'd say, more usually slavers use slaves as tools, to work and generate wealth. That's different. I suppose I'd say that the sense in which Aristotle was wrong when he said that a slave is part of his master (I agree this is a wrongheaded thing to assert) is a moral sense. It is morally wrong to turn people into prostheses, because they are people. But it's not morally wrong to turn spectacles into prostheses, or books, or iPhones, because these things are not people.


    7. Ah, this is indeed clarifying! I now see that we disagree about everything. I don’t agree with the clean distinction between prosthetic and tool (nor do I, like McLuhan, understand all tools as prosthetics); I don’t think an aide-memoire is a prosthetic; and I don’t think a book is an aide-memoire. I also don’t think I agree that there is a kind of ideal Iliad inside my mind — you seem almost a Berkeleyan there — which the book helps me to remember.

      Other than that....

    8. Also, re: “You think I'm arguing that there's no real difference between the inside of our brains and the outside world” — no, I understand that we’re not talking about brains but rather minds, and that Clark’s view still makes a clear distinction between the part of my mind that’s inside my skull and the parts that are outside it. I just think that view requires a kind of mental colonization of the world that tends towards solipsism. That’s why I prefer the language of encounter to Clark’s assimilative model. It makes me think of the Borg.

    9. Maybe one last thing? You and Clark seem to believe that saying (a) “I remember X” is equivalent to (b) “X is a part of my memory” and that both are equivalent to (c) “X is part of my mind.” I reject the move from (a) to (b) and even if I accepted it I would still reject the move from (b) to (c). Maybe that’s the best summing-up I can do.

    10. Either clarifying or perhaps unclarifying. Hmm. You read me saying "Homer had to hold all the Iliad in his mind. I don't have to do this, because I have it on my shelves as a book" as an assertion that I keep a Berkeleyan ideal Iliad in my head at all times? But that's exactly the opposite of what I'm saying! I don't need such a thing because the Iliad is right there, on my shelves.

      "Mental colonization of the world" is a fair enough summary I suppose; although adding "... that tends towards solipsism" seems contrary to me, since solipsism is an inwardizing "it's all me", and this is a motion outwards into the world as other, that same world that contains lots of other consciousnesses. And to be honest, it's less a Borgist assimilation, and more a sort of Voltairean cultivation of a little garden of memorious potential around ourselves.

      As for your (a), (b) and (c): your final comment does not strike me as easy as the Jackson 5 song suggests it ought to be. I am straining my admittedly limited brain power to see what the distinction even is between (a) and (b) ("I remember, I remember, the house where I was born" is another way of saying the house is in my memory now, surely); and if a thought is not part of my mind then it's hard to see what it is. But here it's very likely I've misunderstood you, or am missing something obvious.

    11. ... to be more precise: I have a hazy sense of the Iliad in my mind, not a shining Berkeleyan idea of it: I know the story, and I can remember a few bits and pieces. But if I want to do something with that memory, write about it say, then I must needs augment that haziness with the better-remembered, indeed 100% accurate, memory of Homer's poem in the book from my shelves. In a prosthesis-related analogy, it's an armature; like the robotic super-suit Ripley wears to augment her strength so as to be able to fight the Queen Alien at the end of Aliens, the sort of thing without which the task (writing an essay on Homer, fighting aliens, whatever) becomes unachievable.

    12. I'm starting to suspect that I'm not going to convince you on that one. Fair enough: I shall swallow that bitter pill, and simply agree to differ ... and if you want to tell the one-legged man that, however useful he finds his false limb, he must under no circumstances consider it a part of his walking-around on pain of perpetrating a Borg-like and solipsistic leggic colonization of the world, then it would relieve me of an uncomfortable duty.

    13. I relish the prospect of addressing that one-legged man!

      Actually, what I need to do is to stop trying to express myself in a comment thread — let me reflect and consider your points more carefully and then give you a more substantive (and perhaps even generous) reply. These are important points. And who knows, after reflection I may tell you that you were right about everything!

    14. Was it not Clausewitz who said: "all exchanges online are wars of attrition, arguments in comments-threads especially so"?

  3. What's happened to the other book outlines? I was definitely going to read them, some time...