‘Could a rule be given from without, poetry would cease to be poetry, and sink into a mechanical art. It would be μóρφωσις, not ποίησις. The rules of the IMAGINATION are themselves the very powers of growth and production. The words to which they are reducible, present only the outlines and external appearance of the fruit. A deceptive counterfeit of the superficial form and colours may be elaborated; but the marble peach feels cold and heavy, and children only put it to their mouths.’ [Coleridge, Biographia ch. 18]

‘ποίησις’ (poiēsis) means ‘a making, a creation, a production’ and is used of poetry in Aristotle and Plato. ‘μóρφωσις’ (morphōsis) in essence means the same thing: ‘a shaping, a bringing into shape.’ But Coleridge has in mind the New Testament use of the word as ‘semblance’ or ‘outward appearance’, which the KJV translates as ‘form’: ‘An instructor of the foolish, a teacher of babes, which hast the form [μóρφωσις] of knowledge and of the truth in the law’ [Romans 2:20]; ‘Having a form [μóρφωσις] of godliness, but denying the power thereof: from such turn away’ [2 Timothy 3:5]. I trust that's clear.

There is much more on Coleridge at my other, Coleridgean blog.

Wednesday, 23 December 2020

The Tintinnabulation at the End of the World: a Christmas Story

 



Zuzu: Look, Daddy. Teacher says, every time a bell rings an angel gets his wings.
George: That's right, that's right! Attaboy, Clarence!
IT’S A WONDERFUL LIFE


Individual nations took various steps to maximise their campanoproductivity through the 1950s, but it was only after the 1961 UN resolution—backed by both the USA and the Soviet Union, and with the tacit approval of China—that global tintinnabulation properly began to ramp up. By midsummer 1965 the recently-formed Global Bellsound Survey calculated that close to a million distinct and individual bell-ring sounds were being generated every ten seconds by human beings all across the planet. The development of computers led to increasing efficiencies in tintinnabulation. By the end of the century it was possible for a single, modestly-priced computer to sound a hundred distinct bell-sound noises every second (resulting in over three billion such sounds a year). Hundreds of millions of such machines were devoted to this business by 2000 alone. Some pundits predicted the craze would die out, but manifest alterations in the fabric of reality—a huge increase in angelic visitations and apparitions, especially in dreams, but also measurable distortions in the Earth’s magnetic field, gravitational anomalies, mysterious messages appearing in the least likely places and so on—convinced most people that this was labour with which it was worth persevering. Economists argued that the true driver was the percentage of angel-oriented prayers (perhaps as high as 65%) that now resulted in positive returns. Priests, Imams and Rabbis spoke of a newly productive mode of religious engagement. Music fans applauded the supersession of guitars and drums by bells. By 2010 more than 40% of global GDP was devoted to tintinnabulating. Every city resounded, day and night, with a cacophony of bells chiming, deep or shrill, long or brief. Most people made do with noise-cancelling headphones or (in poorer parts of the world) with wadding stuffed into the earholes, although some elected for radical surgery to sever the auditory nerves. As 2015 began, the global production of new angels was assessed at 10^85 annually, more than the number of atoms in the observable universe. By 2017 this number was calculated to be 10^190, a figure significantly greater than the number of Planck volumes composing the observable universe. No bricks-and-mortar Tower of Babel ever had, or ever could have, approached so nearly the ineffable throne of infinity as this new project by a humanity unified by internet media. Quantum computing raised productivity still further. God was being swamped with new angels, overwhelmed by an exponentially increasing spiritual volume. Some theologians argued that the nature of the Divine was such as to be able to accommodate a literally infinite addition of angelic pressure. Others observed that the War in Heaven—so memorably narrated in Paradise Lost—had involved some paltry thousands of rebel angels, and that had shaken the pillars of paradise. Who could say what effect the creation of so many googols of new angelic natures, each one an infinitude-in-one, theologically speaking, would have upon the Godhead? By the end of 2020 the line of the graph approached closer and closer to vertical. This was when, in a one-third-full car-park in Leicester, UK, God made His final manifestation in this reality. He spoke medieval French (a detail that occasioned a good deal of academic hypothesis and disputation in the years that followed) but what He said was clear enough: “That’s it, I’m out, I’m through with this joint. In sufficient numbers, even gnat-bites will chase away the brontosaurus.
You’ve had six thousand years of my rule. Get ready for a new manner of management, a Graham’s Number of angels, your new commissariat. Take it from me, there’s very little on which they all agree.” With that, He vanished. And so, as we look forward to the twenty-twenties, some folk are apprehensive, and some excited, and most are perfectly indifferent, caught up, as is the human way, in their own day-to-day concerns.

Tuesday, 15 December 2020

La Chose En Soi

 


My French publishers asked me to record a 5-minute video talking about my novel The Thing Itself, prior to its appearance in translated form. For reasons that now escape me, I elected to do this in a hat. As you can see, my French is not very good. Still, if an Englishman can't speak French clumsily, and with an atrocious accent, then what was the Battle of Waterloo even for?

Saturday, 12 December 2020

The Odourless God


 

‘Professor Strass, welcome to the vidcast!’

‘It’s my pleasure, Landon.’

‘Let’s talk about your new book, The Odourless God. If I understand it correctly, your argument is that the human belief in God, and the numinous sensations that attend that belief, are rooted, evolutionarily speaking, in the diminution of our sense of smell? In what you call the odourless sublime?’

‘Exactly so. My starting point is, I feel, a thoroughly uncontentious one. It’s a matter of basic evolutionary fact: we have become, relatively speaking, alienated from our sense of smell. Our long-ago ancestors had much more acute senses of smell than we do, and they navigated their world much more by smell and touch than by vision.’

‘As many animals do today.’

‘My dog for one! Indeed it was my dog that set this whole project in motion. But, yes, that’s my starting point. From there I extrapolate into more shall-we-say hypothetical intellectual territory. Over the timescale of recent evolution, sight and smell have swapped places in our human sense-hierarchy. Now we largely navigate the world with our eyes. But we have carried over a sense of smell as intimate, bodily, personal, and of sight as remote, distant, uncanny.’

‘Tell us about your dog!’

‘Well: I was taking my dog on a walk one autumnal day and he was sniffing every tree stump and every nettle bed, tail wagging, totally immersed in the joyous immediacy of his experience. We passed through a little copse of trees and came out the other side into the broad expanse of the rec. Here there happened to be  a great heap of copper-coloured leaves. The wind ticked the far edge of this carpet, and it stirred, as if alive, and Steps—my dog—changed: his whole manner altered, tail down, ears back, peering into the middle distance. He was spooked by this movement because he could see it but he couldn’t smell it. It freaked him out. I mean, his eyesight has never been very good, even for a dog; but that moment … suddenly I understood. It was my eureka moment.’

‘You saw, then, the thesis of what would become your book?’

‘Yes indeed. You see, Landon: we’re a huggy, close-knit, smell-and-touch collection of species, we simians. That which we can’t touch, that which we can’t smell … the moon, the lightning flashing across the sky, the horizon … these spook us in a profound way. Such things register with us as uncanny, immense, far. And we’ve carried that apperception through into modernity. I mean, we keep trying to drag God back from His infinite remoteness and eternal inhumaneness. We keep trying to smell Him. Jesus says next-to-nothing about sexual morality in the Gospels, except some stuff about how we shouldn’t be so quick to judge. But for a hundred generations his followers have talked endlessly, obsessively about sexual morality. Why? Because sex is intimate, touch and taste and smell, and that’s where we’re comfortable. So we keep trying to drag God into our beds with us, even if He is there to judge and tut-tut. But that’s not where the numinous impulse originated. It was something seen, very far away, beyond our reach. Something radically unsmellable. Our ears go back, our tails go down, we get that fizzy sensation in our stomachs. We feel something is there, something beyond us. God. The great unodour.’

‘You say God doesn’t smell, but if I think of God I think of—let’s say: incense? The smoke of sacrifice and so on.’

‘Yes, and fragrant oils and unctions, and all the olfactory paraphernalia of worship. But of course that’s not God. That’s us, offering God what we sense he lacks, odour. That’s us trying to supply the uncanny absence, trying to bridge the unsettling distance, between us and God. In our hind-brains, bedded down by the inertia of evolution’s longue durée, is a belief that the world is what we can touch, and taste, and smell. But there is more to the world than that; we can't deny it. There is the world that we can neither touch, taste nor smell but which we can see—a world that has loomed ever larger as our vision became increasingly important to us as a species, with a correlative waning of our olfactory powers—and that unsettles us in a profound way. That sense of something slightly off-kilter, existentially speaking … the sensation we associate with that, which Aldous Huxley called the numinous … that is behind our religious feelings. Actual religions are attempts, some less, some vastly more elaborate, to make sense of that twist in our reality.’

‘And you think this has something to do with our addiction—you do use the word addiction—to social media?’

‘I think this is the core reason, the hidden reason why we have all converted so rapidly and en masse as a species to social media, yes. These new technologies allow us the sorts of instantaneous connections and intersubjectivities that, not so long ago, were only possible in face-to-face encounters with other human beings. But with face-to-face encounters our exchange is underpinned by somatic evidence. If you are close enough to whisper something to somebody, then you can smell them. But social media is a constant stream of intimate, emotionally-charged communications that are, whatever else they are, perfectly odourless.’

‘The emotion they are charged with is often anger, of course.’

‘Indeed. And we tolerate that. We even seek it out, although it makes us unhappy. Think how many people on Twitter refer to it as this hellsite. They’re only partly joking, and yet they still flock to it. Strong emotion is stimulating. But—and this is the crucial thing—human-to-human anger is somatic. Fists fly and you can feel the punches, you can smell the sweat. And when we make up, after a row, we hug and kiss and smell one another. Think of “make-up sex” for example! Those somatic cues are important in the way we, as a social body, modulate our stronger emotions. And those somatic cues are missing, both with respect to the wrath of God, and on social media. These new screen-based interactions expose us to the vast wrath of the collective and yet deny us the somatic cues to make sense of, and remedy, that anger.’

‘Because they are odourless interactions, and so make us think, on a subconscious and atavistic level, that we are mediating our intersubjectivities through God?’

‘Exactly so.’

‘So what of Jesus? Christians believe in his case God became a man. He must have been smellable, no?’

‘I think the whole question of Christ is fascinating, really. One might say, in a common-sense way, that if Christ was a regular first-century-AD man then of course he would have carried with him the odours of a regular man. But is that how Christ figures, in art? In worship? Think of all the representations of Jesus: don’t you think that, in almost all of them, he looks remarkably scrubbed? Clean? Odour-free? Do we contemplate Jesus using the toilet? Do we ever consider—I don’t say this to be blasphemous, but simply as a function of something we all recognise as integral to human embodiment—a farting Jesus? The ancient Greeks believed that the gods' skin smelled faintly of honey, not of usual human reeks and stinks, and surely something like that is true of Jesus in our imagination of him. Don’t you think? Similarly it is important that Jesus never engaged in that intimate, odiferous human business of sexual intercourse. It is a balance, I think. When he is fully man Jesus eats and drinks with other human beings, but when he resurrects, and is, as we could say, transitioning back to being fully God, what does he say to his followers? Noli me tangere. Don’t touch me. That kind of physical intimacy is not the true currency of God.’

‘So if you’re right, what are the implications? Where, for instance, do you think social media are headed?’

‘There has always been a utopian dimension to social media. Look back on the early days of these formats, and how many people were gushing about their world-changing, utopian possibilities. It hasn’t panned out that way, of course. But neither are these media going away. I predict they will increasingly mediate our human interactions, and that we as a species will be increasingly driven into them. To desomaticize our connections. Driven for reasons we don’t consciously comprehend, because to us these odourless screen-mediated spaces feel godly, somehow. For better and worse. A god of love and an angry, jealous god. But a god.’

Wednesday, 23 September 2020

Coming Soon

 


[Update: December 1st, 2020] These are the, variously Dantean, epigraphs for this novel. Click for a clearer image:
 
I'm a big fan of epigraphs. And in this case they're particularly crucial to the working of the novel. That said, my twin worries are (a) people won't twig the Tennysonian gesture in the first; and, (b) with respect to the second, necessity (that is, copyright law) made me take out all specific references to Lord of the Rings in my text—Tolkien and Joyce being, obviously, the yin-yang of the 20th-C anglophone novel—thus defanging much of the textual and epigraphic point. Ah well: we work with what we have, not with what we'd like.

Sunday, 13 September 2020

La Troll Doom Sans Mercy

 



“O what can ail thee, sat-at-home,
    Alone and palely twittering?
The charm has withered from this site,
    And no birds sing.

O what can ail thee, sat-at-home,
    So haggard and so woe-begone?
The squirrel’s granary is full,
    And the harvest’s done.

I see a lily on thy brow,
    With anguish moist and fever-dew,
And on thy cheeks a fading rose
    Fast withereth too.”

“I joined this website long ago,
    Full beautiful—it made me glad,
When tweets were short, and tone was light,
    And fun was to be had.

It found me links of relish sweet,
    And breaking news, and punning-dew,
And sure in language strange it said—
    ‘What else is new?’

It came part of my daily rote.
    But now it’s changed—Ah! woes umpteen!—
The latest tweet I ever read
    On the cold phone screen.

I saw pale men and women too,
    Keyboard-warriors were they all;
They cried—‘La Troll Site Sans Mercy
    Thee hath in thrall!’

I saw their starved snark on my phone,
    With horrid warning gaped peeve,
And I awoke and found me here,
    And said: ‘it’s time to leave’.

And this is why I sojourn here,
    Alone and barely twittering,
The fun is withered from this app,
    And no birds sing.”

++++

I resisted Twitter for a while, back in the dim-distant, assuming it was, you-know, ‘for kids’, and that it would be unseemly for me, a man in my (as I then was) forties, to loiter there. I succumbed, though, in the first instance at my publisher’s urging. You need to be on social media, Adam, they said. You need to get the word out, they said. So I made an account at @arrroberts and, frankly, got swept up. Man alive but Twitter was addictive back then. It used to be fun (remember that?). I got to meet interesting people, to follow intriguing links, to delight in pithy wit and hear breaking news stories before they were officially announced. And again there was just the structure of it, built to gratify the ADHD-ish loops in my brain architecture (I mean, having lived to the age I now am I’ve developed a raft of little strategies for dealing with this part of my nature, siphoning off its energies in various ways; but it doesn’t go away). So here I was: my rat-paw pressing the little lever to make a savoury pellet drop from a hatch, over and over. This, of course (of course) is an inherently unhealthy thing to be doing, psychologically speaking. Although I suppose there are worse ways to indulge that craving. Or there were.

Twitter, though, has changed. Nowadays I wake up, get a cup of tea, open Twitter on my phone, scroll through and it just makes me anxious and unhappy. By the time I close the site in order to get on with my day my mood has been dragged down. Why do that to myself? That looks like a rhetorical question, that, but I really mean it. So many of us going on to what we call, only half in jest, this hellsite daily. What do we think we’re doing?

I’ve been trying to put my finger on what it is that gets me down. It is, I think, in a strange way, actually a function of the positivity of the platform—I mean the way it enables people (and has enabled me) to make new friends, to set up friend-circles, to connect with people all over the world via shared interests. But the truth is that this possibility is increasingly becoming a hermeticism rather than an open-ended hospitality to otherness. We justify our hiding-away to ourselves in terms of how distressing it is to have to engage with, or even to be made aware of, certain kinds of people and certain species of views. But the gloating use of the block button, the self-congratulatory declaration that we have blocked person X, Y or Z, leads us deeper into the stylite’s remoteness that finds itself less and less tolerant of hearing contrary views, or even of being reminded that not everybody in the world thinks the way we do. I’m not sure what the answer to this is, if I’m honest, in a larger sense. And I’m aware that what I’m suggesting here—stepping away—is in many ways an abdication of responsibility. But it seems to me less self-deluding, less morally mendacious, than ‘curating’ our social media until they become a sealed chamber of righteous mirrors.

When our daughter was born my wife and I got one of those big How To Parent books (it's a terrifying business, the first few weeks with your first child: a whole, helpless human being is now your life-or-death responsibility!). From amongst the welter of advice that volume contained, one thing really stuck with me. It's this: when your kid is older you will, on occasion, have to discipline him/her. They will do something naughty, or worse, and it's important that you draw a line so that they can learn the difference between acceptable and unacceptable behaviour. But when you do that, it's very important you tell the child “you have done a bad thing” and not “you are a bad child”. It might look like a small thing, but it's not. Kids internalise what we tell them and what they internalise can have large impacts on their later development. “You have done a bad thing” gives them the chance to change what they do; “you are a bad child” requires them to change who they are, and that's not a burden we should lay on them. But here’s the thing: that burden has become absolutely the default on Twitter, or so it seems to me. If Celebrity X says something with which you disagree, your first step is: they therefore are a bad person (they are a Nazi, a terf, an abuser etc). This then licenses you to pour your contumely upon them, to treat them as a legitimate target for your wrath and violence and outrage. It's essentialism, in a word. It's everywhere and I find it exhausting and wrong and bad: a world whose fundamental premise is that redemption is impossible, except through total capitulation to my value-system, and for many people not even then. But it's not something I can change, so it may well be that the best thing I can do is step away.

Another way of putting this would be to note that Twitter, and by degrees other forms of social media, have become astonishingly Schmittian spaces—astonishingly not least because Carl Schmitt is about as cancel-worthy an individual as Twitter could possibly conceive. But here we are. These media make the expression of nuance harder; they encourage instant acclamation and instant judgment, and above all they pander to a mind-set in us by which we divide the universe into friends and enemies. Reading William Davies’s recent essay on Schmitt in the LRB brought this powerfully home to me.
In the late 1920s, the political philosopher and jurist Carl Schmitt, subsequently to join the Nazi Party, developed a theory of democracy that aimed to improve on the liberal version. In place of elections, representatives and parliaments, all talk and gutless indecision, Schmitt appealed to the one kind of expression that people can make for themselves: acclamation. The public should not be expected to deliberate or exercise power in the manner that liberals hoped. But they can nevertheless be consulted, as long as the options are limited to ‘yea’ or ‘nay’. The public can ‘express their consent or disapproval simply by calling out’, Schmitt wrote in Constitutional Theory (1928), ‘calling higher or lower, celebrating a leader or a suggestion, honouring the king or some other person, or denying the acclamation by silence or complaining’. ‘Public opinion,’ he continued, ‘is the modern type of acclamation.’
Social media are predicated, as Davies says, on a basically Schmittian bedrock: that the world is divided for each of us into friends and enemies. ‘The outcome of all this [in our modern, socially-mediated world] is a politics with which Schmitt’s name is commonly associated, one that reduces to a base distinction between “friend and enemy”. The distinction itself is what counts, not whatever fuels or justifies it.’ It is, increasingly, where we are. Now, this seems to me bad and wrong on its own terms; but it also seems to me that it’s a game rigged—as actual literal Nazi Schmitt perhaps intuited—in favour of the Right. The way the Left has reacted to accusations of ‘cancel culture’ is a case in point: it betrays a sense that we're on the run, a fatal muddling of responses, offered with angry vehemence but all saying different things: ‘there’s no such thing!’ is common, as is ‘it’s just another term for karma!’ and also ‘cancel culture doesn’t go far enough: these people just lose lucrative media contracts, not their heads’. The right is clearer-eyed:
The right understands how to play this ‘culture war’: they know to identify the most absurd or unreasonable example of your opponents’ worldview; exploit your own media platform to amplify it; articulate an alternative in terms that appear calm and reasonable; and then invite people to choose. It isn’t all one-way traffic, of course. There is no shortage of progressive and left-wing opinion on social media that aims primarily at harming conservatives by misrepresenting them. One difference is that the left isn’t in control of the majority of the newspapers (though its opponents accuse it of controlling much else, from the BBC to universities).
Schmitt’s friend/enemy formulation, set out in ‘The Concept of the Political’, argues that the political is different to all other domains. It's different to the theological, where value is premised on something extrinsic; and different to the economic, which at least makes a distinction between profitable and not profitable. This is because the political construes identity; and more to the point, because, for Schmitt, our identity is predicated upon the distinction between friend and enemy, a distinction determined ‘existentially’: the enemy is whoever is ‘in a specially intense way, existentially something different and alien, so that in the extreme case conflicts with him are possible.’ And the crucial thing about the enemy is their radical wrongness. They don’t just say or do wrong things, they are wrong. And once the idea seeps into your consciousness that this other person is not only saying wrong things but is themselves fundamentally wrong, terrible consequences become possible. A terror can insinuate itself into you that their wrongness might be catching—it’s primal, in some cases, and therefore unconsidered.

At any rate, here I am: nearly a dozen years after joining, thinking of conscious uncoupling. I’m very small-beer, I know—I follow about 1500 people, and fewer than 10,000 follow me (a sizeable proportion of that latter number, I presume, bots). Quitting, I won’t be missed: there are plenty of other tweeters whose feeds will provide you with bad puns, chatter about SF-y things and occasional links to literary-critical academic-English 19th-C literature-y things, if those are what you’re looking for. I suppose I’ll see how it goes.

Tuesday, 25 August 2020

Adam Roberts & François Schuiten, "The Compelled" (NeoText 2020)


Out today (available as e-text only, Amazon US, Amazon UK). The story's not bad but François's illustrations are something else again.


Friday, 14 August 2020

If I Were Called In To Construct


And I should raise in the east
A glass of water
                      LARKIN

If I were called in to construct a religion
I would start with opposition.
Pick an established faith, like Larkin's Water,
and attack it as insufficiently aquatic.

I would bewail the drowned
and blame Larkin;
and gather an army and make war
upon the Larkinians,
kill them, seize their Larkwives
and their Larkine.

I would establish the Holy Romarine Empire,
crown my good with brotherhood
from land to shingly land.
Scorch my enemies and parch my friends.
After that comes expansion, missionaries,
elaborate ritual, green-and-purple robes,
High Holy Days to mark the fullest tides.

Then a long period of decline
as theologians bicker over
increasingly crumbling minutiae
and ordinary people live by
a calcified version of the once flowing spirit
(stalactites, coral, ice)

when I will walk the beach, with all the stiffness of age—
as the breakers come and keep coming
bowing before the land like heretics—
and contemplate Mystery, salt and unsustaining.

Monday, 27 July 2020

Earth Versus the Heliists



Historians and analysts are still arguing over how it is that we of the Earth were able to overwhelm the vastly more numerous and powerful inhabitants of the Sun. There are, of course, many factors that enabled our victory. As a species we worked hard to overcome our manifest disadvantages, our relative puniness and underpopulation not least. The dwellers on the surface of the sun thrive in temperatures from below water’s freezing point up to many millions of degrees C, whereas each of us Earthlings must maintain our body's core within a tiny temperature range or perish. The least Heliist citizen is huge, powerful and intelligent. The best of us is a pygmy by comparison. The Heliists comprise trillions of individuals, any single one of whom, were s/he to land on Earth, could easily defeat, single-handed, an entire army of homo sapiens in open battle.

How did we win? I intervene into this learned and, alas, furious debate to note only one thing, for it has I think been overlooked by our analysts and historians. Although the empires and queendoms of the Sun are manifold, perhaps as many as forty thousand separate realms, and their interrelation is complex and often belligerent, in this one particular thing, we had the advantage of disadvantage. Of, that is, one disadvantage in particular: we long ago chose the Copernican model of the solar system over the Ptolemaic.

So powerful and compelling is the latter model, so deeply in tune with our ego-senses of our importance, that only very grievous discrepancies between the theory and the observed motions of planets in the sky—retrograde curls in orbits, the impossibility of determining the radii of planetary orbits by Ptolemy’s geometrical methods—forced us to abandon it. Such abandonment was painful and protracted and it entailed a whole conceptual revolution that, amongst other things, weakened the unitary hold of Catholic Christianity on Europe and set in motion the Scientific and Industrial revolutions.

But for those gigantic beings of light and plasma who roam the surface of the sun no such Copernican Revolution occurred. Because there was no need for it. Their egos and their celestial observations were precisely allied. And so we, though weak and susceptible, were forced into a more realistic understanding of our place in the cosmos. And so we developed all manner of technological prostheses and fixes and augmentations. The Heliists never did this.

It was this mind-set, plus (as we are often reminded) the loss of millions of human lives, that led to our victory in the recent war. It was this that lost it for the Sun. My friends, when you next celebrate our great and unlikely victory, sacrifice a rooster to Copernicus.

Thursday, 23 July 2020

Man Considered as a Three-Legged Stool



I visited my old friend Sam
in heaven: Saint Peter let
him through the door to join me outside.
‘Ten minutes,’ said the Saint. ‘No more.’

‘Thanks, mister,’ we said. ‘Thanks.’

I’d brought cigarettes.
He lit his from mine. For a time
we just smoked, companionably,
saying nothing, just looking down
the view’s visceral swoon—bright
fluid stars in the shimmering pool-blue
and below, very far, almost lost
in its depths, Earth like a beetle’s back
blue and green and black.

‘So,’ I said. ‘What’s it like?’
‘Good,’ he said, ‘it’s. Obviously it's good.’
We laughed, both, nervy-like.
‘The thing is—deathhood:
it’s great, it really is. Obviously.’

‘Obviously.’

‘It’s just …’ He leaned against the wall.
Breathed a spear of smoke into the blue.
‘It’s just it’s all
threes.’ ‘Threes?’ ‘Threes.
Father, Son and Spirit is all the logic.
The principle. Thrinciple. Mystical.
Nine ranks of angels, all
with three sets of wings, three
ears, three eyes. Thrice three and three.
Even maths here is trinomial.
Fractions have three elements. Nothing’s
either-or.’ ‘And?’

He tossed his stub, ember-red, over the lip
to spin past the giant white-blue lights.

‘Look at me,’ he said. ‘Look at you.
Two arms, two legs, two eyes, two thoughts.
I don’t quite fit. I’m missing something.
It’s how He made me, how He wants it, but.

I’m out of kilter. My line's not true.
It’s bliss that's loss as well.
It’s how He made us, twos, twos,
three-legged-stools with a leg missing.
And black-whites, is-oughts,
Life, death, they don't obtain.
I'm unrebuilt. I'm still mundane.
It's not a complaint. But.’

‘Time,’ boomed the Saint.
‘Break's over, chaps.’

Sam shook my hand, a touch solemn,
nodded, went back inside to a blare
of trumpets. I started down the big stair
grasping the rail.
Unsteady on my pins. But to slip there
would be a pretty catastrophic lapse
and days-worth of plummeting fall.



Sunday, 19 July 2020

On Memory



Contents
Preface
Chapter 1. Towards Total Memory
Chapter 2. Memory and Fiction
Chapter 3. Memory and Religion
Chapter 4. Irrepressible Memory
Chapter 5. Inaccessible Memory
Afterword


from the Preface


‘The key thing is not memory as such. It is the anticipation of memory.’ [Pierre Delalande]

Everyone knows there are two kinds of memory: short-term memory and long-term memory. These are distinct functions in terms of brain architecture (such that, for example, mental deterioration or injury can destroy one but not the other) although they are, obviously, linked, both doing similar things, instantiating the actions of particular networks of neuronal activity in the brain. Working memory, say the psychologists, serves as a mental processor, encoding and retrieving information [see, for instance, Alan Baddeley's Working Memory, Thought, and Action (Oxford University Press 2007)]. The two broader categories of short-term and long-term memory get differentiated and refined further when physiologists look at specific temporal ranges:
Atkinson and Shiffrin [“Human Memory: A Proposed System and Its Control Processes”, in Kenneth W. Spence & Janet Taylor Spence (eds.), Psychology of Learning and Motivation (New York: Academic Press 1968), 89–195] proposed a multi-store model in which kinds of memory are distinguished in terms of their temporal duration. Ultra short term memory refers to the persistence of modality-specific sensory information for periods of less than one second. Short term memory refers to the persistence of information for up to thirty seconds; short term memory, which receives information from ultra short term memory, is to some extent under conscious control but is characterized by a limited capacity. Long term memory refers to the storage of information over indefinitely long periods of time; long term memory receives information from short term memory and is characterized by an effectively unlimited capacity. Though this taxonomy does not distinguish among importantly different kinds of long term memory—in particular, it does not distinguish between episodic and semantic memory—it has been applied productively in psychological research. [Kourken Michaelian and John Sutton, ‘Memory’, in Edward N. Zalta (ed) Stanford Encyclopedia of Philosophy (Summer 2017 Edition)]
Memory science also identifies a third kind of memory, the sort of muscle-memory that you deploy when you drive your car or play the piano. This they call procedural memory, and they tie it into the other two kinds with various diagrams and charts.


So there they are: your three basic kinds of memory and their interrelations.

I'm not interested in them. My interest is in other modes of memory, modes that are (I would argue) just as valid as, indeed more important than, these more conventional forms. So far as I can see these other modes are either under-discussed or not discussed at all, not even recognised as modes of memory. Nonetheless I am going to try and argue that the three conventional modes hitherto mentioned are actually the least interesting kinds of memory.
......



from Chapter 1: Towards Total Memory

Memory costs. That is to say, in a biological sense, large brains are expensive organs to run. In order for evolution to select for them there must be an equivalent or more valuable pay-off associated with the cost. In the case of homo sapiens that pay-off is our immensely supple, adaptable and powerful minds; something that could not be run on anything cheaper, biologically speaking, than that organ. This is because consciousness and self-consciousness depend to a large extent upon memory; or perhaps it would be more accurate to say, consciousness and self-consciousness rely upon a sense of continuity through time, which is to say, upon memory. Memory is what we humans have instead of an actual panoptic view of the fourth dimension. We know all about its intermittencies and unreliabilities of course—indeed, the larger discourse of memory, from Freud and Proust on to modern science, has delved deeply into precisely those two qualities. My focus here happens to be on neither of those two qualities, but I don’t disagree: memory is often intermittent and unreliable. It’s also the best we’ve got.

When evolutionary scientists talk about the ‘cost’ of something, they have a particular sense of the word in mind. James G. Burns, Julien Foucaud and Frederic Mery have interesting things to say about the costs associated with memory (and learning) specifically: ‘costs of learning and memory are usually classified as constitutive or induced,’ they say. The difference here is that ‘constitutive (or global) costs of learning are paid by individuals with genetically high-learning ability, whether or not they actually exercise this ability’:
As natural populations face a harsh existence, this extra energy expenditure should be reflected in reduction of survival or fecundity: energy and proteins invested in the brain cannot be invested into eggs, somatic growth or the immune system. Hence, learning ability is expected to show evolutionary trade-offs with some other fitness-related traits. [James G. Burns, Julien Foucaud and Frederic Mery, ‘Costs of Memory: Lessons from Mini-Brains’, Proceedings of the Royal Society 278 (2011), 925]
‘Induced costs’ touch on the idea that ‘the process of learning itself may also impose additional costs reflecting the time, energy and other resources used.’
This hypothesis predicts that an individual who is exercising its learning ability should show a reduction in some fitness component(s), relative to an individual of the same genotype who does not have to learn. … Questions regarding the induced costs of learning and memory are not only restricted to the cost of ‘how much’ information is processed, but also to ‘how’ they are processed.
Intriguingly, recent research (‘from both vertebrate and invertebrate behavioural pharmacology’) challenges ‘the traditional view of memory formation as a direct flow from short-term to long-term storage.’ Instead, ‘different components of memory emerge at different times after the event to be memorized has taken place.’

Memory, in other words, has always been part of an unforgiving zero-sum game of energy expenditure. It is possible to hypothesise that the general reduction in memory ability as people get older (when we all tend to become more forgetful and less focussed) reflects a specific focalisation of energy expenditure at the time of greatest reproductive fitness. We can all think—though I’m dipping now into pop evopsych, an ever-dubious realm—of how unattractive women find forgetful men: how much trouble a husband gets into, for example, if he forgets a wedding anniversary or a birthday.

But this is one of, I think, only a very few instances where human technological advance directly interferes with the much longer term evolutionary narratives. For the first time in the history of life we have access to a form of memory that doesn’t cost—or more precisely, that costs less and less with each year that passes whilst simultaneously becoming more and more capacious and efficient. Indeed: not only do we have access to this memory, we are all of us working tirelessly to find more intimate ways of integrating this memory into our daily lives. I’m talking of course about digital memory. Right now, in the top pocket of my shirt I am carrying a palm-sized device that grants me instant access to the totality of human knowledge, as archived online. Everything that humanity has achieved, learned and thought can be ‘remembered’ by me at the touch of my fingers on the glass screen. Everybody I know carries something similar. It is no longer even a remarkable thing.

It may be that Moore’s Law is the single most significant alteration to the environment within which human evolutionary pressures operate. As that Law rolls inexorably along, we come closer to that moment when cost itself will no longer present an obstacle to total memory. By ‘total’ I mean: the circumstance where everything that we have done, experienced, said or thought is archived digitally and virtually, and can be accessed at any time. Digital memory is exterior to the brain (at least it is so at the moment); but like an additional hard-drive being cable-plugged into your laptop, it augments and enhances brain-memory and brain-function. Which London taxi driver need learn the ‘knowledge’ when sat-nav systems are so cheap? Or to put it another way: the existence of a cheap sat-nav instantly transforms me, Joe-90-like, into a sort of super black-cab-driver, with instant access not only to every quickest route through the London streets, but the whole country and indeed the whole world. This is one small example of a very large phenomenon.

What I'm talking about here is the ‘Extended Mind Thesis’ (EMT), that argues the human mind need not be defined as exclusively the stuff, or process, or whatever that is generated inside the bones of the human skull. Here is David Chalmers:
A month ago I bought an iPhone. The iPhone has already taken over some of the central functions of my brain . . . The iPhone is part of my mind already . . . [in such] cases the world is not serving as a mere instrument for the mind. Rather, the relevant parts of the world have become parts of my mind. My iPhone is not my tool, or at least it is not wholly my tool. Parts of it have become parts of me . . . When parts of the environment are coupled to the brain in the right way, they become parts of the mind. [Chalmers is here quoted from the foreword he wrote to a book-length elaboration of this idea: Andy Clark’s Supersizing the Mind: Embodiment, Action and Cognitive Extension (OUP 2008)]
I find this idea pretty persuasive, I must say; but I am not a philosopher of mind. Not all philosophers of mind like this thesis. Jerry Fodor, for instance, attempted several times to dismantle Clark’s argument. In a review-essay published in the London Review of Books, Fodor takes a heuristic trot through one of Clark’s thought-experiments. Imagine two people, Otto and Inga ‘both of whom want to go to the museum. Inga remembers where it is and goes there; Otto has a notebook in which he has recorded the museum’s address. He consults the notebook, finds the address and then goes on his way. The suggestion is that there is no principled objection between the two cases: Otto’s notebook is (or may come with practice to serve as) an “external memory”, literally a “part of his mind” that resides outside his body.’ Fodor asks himself: ‘so could it be literally true that Chalmers’s iPhone and Otto’s notebook are parts of their respective minds?’ He answers, no. I don’t take the force of his objections. So for instance:
[Clark’s] argument is that, barring a principled reason for distinguishing between what Otto keeps in his notebook and what Inga keeps in her head, there’s a slippery slope from one to another ... That being so, it is mere prejudice to deny that Otto’s notebook is part of his mind if one grants that Inga’s memories are part of hers. … But it does bear emphasis that slippery-slope arguments are notoriously invalid. There is, for example, a slippery slope from being poor to being rich; it doesn’t follow that whoever is the one is therefore the other, or that to insist on the distinction is mere prejudice. Similarly, there is a slippery slope between being just a foetus and being a person; it doesn’t follow that foetuses are persons, or that to abort a foetus is to commit a homicide. [Jerry Fodor, ‘Where is my mind?’ LRB 31:3 (2009)]
But this really is to miss the point. The analogy (since Fodor forces it) is not that Clark is arguing the brain is ‘rich’ and the notebook ‘poor’ as if these were precisely the same kind of thing differing only in degree; but rather that they both have something in common—as ‘rich’ and ‘poor’ have money in common—the difference being only that one, the brain, has lots of this (call it ‘mind’) and the other, the notebook, has very little. That seems fair enough to me. Fodor goes on to deliver what he takes to be a knockout blow:
The mark of the mental is its intensionality (with an ‘s’); that’s to say that mental states have content; they are typically about things. And … only what is mental has content.
But lots of the data on my computer is ‘about’ things. Arguably, even the arrangement of petals on a flower is ‘about’ something (it’s about how lovely the nectar is inside; it’s about attracting insects). Fodor is surprised Clark doesn’t deal with intensionality, but I’m going to suggest it’s a red herring and move on.
Surely it’s not that Inga remembers that she remembers the address of the museum and, having consulted her memory of her memory then consults the memory she remembers having, and thus ends up at the museum. The worry isn’t that that story is on the complicated side; it’s that it threatens regress. It’s untendentious that Otto’s consulting ‘outside’ memories presupposes his having inside memories. But, on pain of regress, Inga’s consulting inside memories about where the museum is can’t require her first to consult other inside memories about whether she remembers where the museum is. That story won’t fly; it can’t even get off the ground.
Fodor, on the evidence of this, has never heard of the concept of a mnemonic. Or is he claiming that the mnemonics I have in my mind are, somehow, not in my mind ‘on pain of infinite regress’?

I’ll stop. This may be one of those issues where reasoned argument is unlikely to persuade the sceptical; and if reasoned argument can’t, then snark certainly won’t. The most I can do here, then, is suggest that the principle be taken, at the least, under advisement; or the remainder of my thesis here will fall by the wayside. It seems to me that the following extrapolations of contemporary technological development are, topologically (as it were) equivalent: (a) a person who stores gigabytes of personal information (including photos, messages and other memorious material) in their computer or iPhone; (b) the person who uses advances in genetic and biological technology to augment the physiological structures of their brain tissue to enable them to ‘store’ and flawlessly access gigabytes of memorious data; (c) the future cyborg who integrates digital memory and biological memory with technological implants; (d) the individual whose memories are entirely ‘in the cloud’, or whatever futuristic equivalent thereof is developed.

And actually this (it seems to me) is not the crux of the matter. The extraordinary increase in capacity for raw data storage is certainly remarkable; but as mere data this would be inert, an impossibly huge haystack the sifting of which would take impossible lengths of time. The real revolution is not the sheer capacity of digital memory, but the amazingly rapid and precise search engines which have been developed to retrieve data from it.
.....


from Chapter 2: Memory and Fiction

That the ‘novel’ is a mode of memory is not an idea original to me. Dickens's fiction, in a sense, ‘remembers’ Victorian London for us, as Scott's fiction ‘remembers’ 18th-century Scotland. This is to say more than just that (although it is to say that) our collective or historical memory is mediated through these things—more, at any rate, through fiction (Shakespeare's plays, Jane Austen's novels, Homer's poetry) than through annalistic pilings-up of blank historical data. Our own individual memories, those products (long-term and short-term) of brain function, narrativise the past much more than they isolate or flashbulb past-moments. Fiction is always memorious.

This memoriousness is complicated but not falsified by the fact that fiction is not, well, true. There never was a boy called Oliver Twist, and though there was a figure called Rob Roy he wasn't at all like Scott's version of him. The veracity of art does not run exactly in harmony with the veracity of history, but neither is it completely orthogonal to it. But that doesn't matter. Our own individual memories are immensely plastic and dubious, fictions based on fact. Our collective memories likewise.

What's more striking, I think, is what happens to this idea in an age (like ours) when science fiction increasingly becomes the cultural dominant. After all, unlike Homer, Shakespeare or Dickens, SF is in the business of future-ing its stories, no? As early as 1910, G K Chesterton pondered the paradoxes of predicating ‘memoir’ on futurity:
The modern man no longer presents the memoirs of his great grandfather; but is engaged in writing a detailed and authoritative biography of his great-grandson. Instead of trembling before the spectres of the dead, we shudder abjectly under the shadow of the babe unborn. This spirit is apparent everywhere, even to the creation of a form of futurist romance. Sir Walter Scott stands at the dawn of the nineteenth century for the novel of the past; Mr. H. G. Wells stands at the dawn of the twentieth century for the novel of the future. The old story, we know, was supposed to begin: “Late on a winter’s evening two horsemen might have been seen—.” The new story has to begin: “Late on a winter’s evening two aviators will be seen—.” The movement is not without its elements of charm; there is something spirited, if eccentric, in the sight of so many people fighting over again the fights that have not yet happened; of people still glowing with the memory of tomorrow morning. A man in advance of the age is a familiar phrase enough. An age in advance of the age is really rather odd. [Chesterton, What’s Wrong With the World (1910), 24-25]
That few science fiction novels are actually written in the future tense doesn’t invalidate Chesterton’s observation. A novel notionally set in 2900, narrated by an omniscient narrator in the past tense, interpellates us hypothetically into some post-2900 world. Science fiction adds a bracingly vertiginous sense to memory. In Frank Herbert’s Dune Messiah (1969), Paul Atreides—the prophet/messiah leader of the inhabitants of a desert planet—is blinded. According to the rather severe code of his tribe he must be sent into the wilderness to die, but he avoids this fate in part by demonstrating that he can still see, after a fashion. His prophetic visions of the future are so precise, and so visual, that it is possible for him to remember past visions he previously had of the present moment, and use them, though he is presently eyeless, to navigate and interact with his world as if he were sighted. The way memory operates here, as a paradoxical present memory of the past’s future, is the perfect emblem of science fiction’s tricksy dramatization of memory. There are science fiction tales of artificial memory, enhanced memory, memory that works forwards rather than backwards; of robot memory and cosmic memory. And given the genre’s predilection for fantasies of total power, it does not surprise us that there are many SF fables of total memory.

That said, it is a story not often bracketed with ‘Pulp SF’—Borges’ ‘Funes the Memorious’—that is typically deployed when notions of ‘total’ memory are discussed. And Funes stands as a useful conceptual diagnostic for the thesis I’m sketching here. It’s a trivial exercise translating Borges’ hauntingly oblique narrative into the language of Hard SF. What might the world look like in the case where digital memory is so capacious, and so well integrated into our daily lives, as to give us functionally total memories? This, to be clear, is not to posit a world in which we carry around in our minds the total memory of everything—that would indeed be a cripplingly debilitating state of mind. But our present-day incomplete memories don’t work that way either. We remember selectively. Indeed, the circumstance (let’s say, for example: the post-traumatic one) in which we are unable to deselect certain memories is a grievous one, such that people who suffer from it are advised to seek professional psychiatric help. So, given that we use our memories selectively, and are comfortable remembering only what we need when we need it, the future I’m anticipating would only be a sort of augmentation of the present state of affairs.

You would go through your life with your entire previous existence accessible to you at will. Would this be a good thing? Or do you tend to the view, fired perhaps by the Funes-like consensus that total memory would be in some sense disastrous, that it would not? ‘If somebody could retain in his memory everything he had experienced,’ claimed Milan Kundera, in his novel Ignorance (2000), ‘if he could at any time call up any fragment of his past, he would be nothing like human beings: neither his loves nor his friendship would resemble ours’. Funes himself dies young, after all; as if simply worn out by his prodigious memoriousness. We might conclude: all our efforts have hitherto been focussed on attempting to make our ‘memory’ better. Now that technology has overtaken us we should, on the contrary, be pondering how we can, most creatively and with what spiritual utilitarianism, make it worse.

I shall register the obvious objection. Since total recall would crowd out actual experience with the minute-for-minute remembrance of earlier experiences, we would have to be very selective in the ways we access our new powers. The question then becomes: what would our processes of selection be? How robust? How reliable? What if we put in place (as my thought-experiment digital future certainly enables us to do) a filter that only allows us to access happy memories? Would this change our sense of ourselves—make us more content, less gloomy, happier in our lot? Would this in turn really turn us into Kunderan alien beings? The problem becomes ethical: it is surely mendacious to remember only the good times. The ‘reality’ is both good and bad, and fidelity to actuality requires us to balance happy memories with sad ones. This, however, depends upon a category error, embodied in the tense. Where memory is concerned reality is not an ‘is’; reality is always a ‘was’. Memories feed into the reality of present existence, but never in an unmediated or unselective way. Indeed, current research tends to suggest that something like the opposite of my notional filter actually operates in human memory—that as we get older we tend to remember the unhappy events of the past over the happier ones.

The bias that ‘total memory’ would in some sense be damaging to us strikes me as superstition. Funes’s imaginary experiences are a poor match for the sorts of thought-experiments to which his name has been, latterly, attached. Christian Moraru toys with describing Funes’ situation as one of disorder, but then has second thoughts. ‘Disorder may not be the right word here since Funes’s memory retrieves a thoroughly integrated systematic, and infinite world. Taking to a Kabbalistic extreme Marcel Proust’s spontaneous memory, one present fact or detail involuntarily leads in Funes’s endlessly relational universe to a “thing (in the) past” and that to another, and so on. Remembrance reaches deeper and deeper and concurrently branches off, in an equally ceaseless search for an ever-elusive origin or original memory.’ He goes on:
With one quick look, you and I perceive three wineglasses on a table; Funes perceived every grape that had been pressed into the wine and all the stalks and tendrils of its vineyard. He knew the forms of the clouds in the southern sky on the morning of April 20, 1882, and he could compare them in his memory with the veins in the marbled binding of a book he had seen once, or with the feathers of spray lifted by an oar on the Rio Negro on the eve of the Battle of Quebracho. Nor were these memories simple—every visual image was linked to muscular sensations, thermal sensations, and so on. He was able to reconstruct every dream, every daydream he had ever had. Two or three times he had reconstructed an entire day; he had never once erred or faltered, but each reconstruction had itself taken an entire day. [Christian Moraru, Memorious Discourse: Reprise and Representation in Postmodernism (Fairleigh Dickinson University Press 2005), 21-22]
Moraru finds in Funes’ memory ‘a trope of postmodern discourse’ which he defines as ‘representation that operates digressively, and conspicuously so, through other representations.’ He is interested in the ‘interrelational nature of postmodern representation, its quintessential intertextuality … [that] in saying itself says the other, as it were, re-cites other words, speaks other idioms, the already- and elsewhere-spoken and written.’ Actual memory does not think back to drinking wine in the sunshine and thereby recall not just the wine and the sunshine but the individual life-stories of each and every grape that was grown in order to be pressed into the juice that eventually fermented into wine. On an individual level that would be magic, not memory. But there is a sense, a technological-global sense, in which Moore’s law is pointing us towards precisely that collective social and cultural conclusion.

The real message of ‘Funes’ is not that a complete memory would render life unliveable (lying in a darkened room, taking a whole day to remember a previous day in every detail, dying young and so on). The real message is: a perfect memory would be transcendent. It would enable us to recall not just the things that had happened to us, but the things that happened to everyone and everything with which we came into contact. This, of course, has no brain-physiological verisimilitude, but it speaks to a deeper sense of the potency of memory. In memory we construct another world that goes beyond our world. Imagination can do this too, but for many people imagination is weaker than memory; or perhaps it would be more accurate to say, imagination manifests itself most powerfully in memory, in the buried processes of selection and augmentation. Not for nothing do we dignify processes of recollection beyond the simplest as memory palaces.

......

The Philip K Dick story ‘We Can Remember It For You Wholesale’ (1966) sports one of the truly great SF story titles, I think; a title that has been poorly served by its two Hollywood movie adaptations, both of which clunk down to Total Recall. Dick’s protagonist, the flinchingly-named Douglas Quail, can’t afford the holiday-trip to Mars he earnestly desires. So he visits REKAL, a company that promises to insert into his brain the ‘extra-factual memory’ of a trip to Mars; and not as a mere tourist, neither, but as a secret agent. Exciting! Not real, but (Dick's premise tacitly prompts us to think) once something has happened it is no longer real either; it's just a sort-of phantasm in our memorious brains. Fake the phantasm and you can obviate the expense and inconvenience of actually doing the things, perhaps dangerous things, needful to be remembered.

The story goes on to explore a narrative ambiguity: is Quail's superspy adventure an artificial memory, or has the REKAL process accidentally unearthed real memories of Quail as a government assassin? In the original story, Quail returns to REKAL to have a false memory of detailed psychiatric analysis inserted in order to restore his psychological balance and prevent any further urge to visit REKAL, which is quite a nice twist. But Dick, never knowingly under-twisted, adds another: this return visit uncovers deeper ‘actual’ memories (or else implants them) in which Quail remembers being abducted by aliens at the age of nine. Touched by his innate goodness, these aliens decide to postpone their invasion of Earth until after his death. This means that, merely by staying alive, Quail is protecting the Earth from disaster. He is, in one sense, the single most important individual alive.

Dick’s main theme is not just that memory is unreliable—hardly a novel observation, that—and not even the more radical idea that ‘real’ and ‘artificial’ memories have an equal validity as far as the process of remembering goes. It’s actually that ‘real’ and ‘made-up’ memories in competition in the mind nonetheless tend to gravitate back to narratives of ego inflation. What I always remember is that I am the centre of memory, that the events and persons of the universe are arrayed about me. The same circumstance does not normally obtain in matters of moment-to-moment perception (megalomania excepted) because perception involves us in intersubjectivity in a way memory does not. Or more precisely, memory is a particular and involuted form of intersubjectivity, where the two subjectivities interacting are present-me and past-me.

The movie adaptations of this story are, in a way, even more interesting. Both jettison Dick’s complicated conceit of memory, ambiguously real or artificial, layered upon memory in favour of a simpler narrative line, better suited to the visual medium in which the story is now being told. Quail thinks himself a nobody, a mere construction worker. He goes to REKAL to be given artificial memories of a more exciting life. These memories trigger authentic memories of his actual life as a spy. In both films (though to a lesser degree in the earlier of them) the strong implication is that he has a true identity and it is this latter. The bulk of both storylines is then given over to the cinematic storytelling of his spy-action adventures.

What’s so fascinating about this is the way both texts portray memory (something that is, we might say, by its nature recollected after the event) as a vivid and kinetic ongoing present set of experiences. Neither movie has its protagonist sitting in a chair remembering being a spy; both, rather, show Quail running, fighting, shooting and getting the girl in the cinematic present. Since we all know how memory works (and that it doesn’t work this way) it seems plain that some strange dislocation is happening in the level of representation of the text. We are shown Quail living his quotidian life; we are shown that life transformed seamlessly into his artificial memories of being a spy. In both movie versions a hinge-scene is staged where an individual attempts to intervene in the action-adventure shoot-up Quail’s life has become. These individuals both tell Quail that the world he is currently experiencing is not real; and that if he perseveres in the fantasy it will kill him. Quail is offered a pill, a token (it is claimed) of his willingness to give up the dream and return to the real world. In both films Quail suspects a ruse and refuses the pill in the most violent way imaginable, by shooting dead the messenger who carries it.

This, to be clear, is a special case of a more general SF trope. There is no shortage of texts that develop the idea of a virtual reality or drug-created alternate reality that runs concurrent with actual reality—the Matrix films are probably the most famous iteration of this, but there are scores of examples from science fiction more generally. Linked to this is the ‘dream narrative’ trope, where John Bunyan or Alice explore a continuous but fantastical timeline that is revealed, at the story’s end, to have been running in parallel with actual reality through the logic of dreams. In the case both of ‘virtual reality’ and of ‘dreaming’ it’s an easily comprehensible logic that moves from actual reality into the alternate reality and back again. The Total Recall movies, though—and the story on which they are based—do something more dislocating. Memory is not an alternative parallel reality in the way that VR or dreaming is. Nonetheless these texts treat it as though it is. Remembering something that happened previously is elided with experiencing something now. This is to drag the events remembered out of the past and into the immediacy of the present; or perhaps it is to retard the experience of the present into something always already recalled.

This may look like a trivial misalignment of narrative logics, or perhaps only the limitations of the representational logics of cinema. Think of the visual cliché: a character is shown on-screen ‘remembering’: wavy lines flow across the image and a dissolve-cut takes us to ‘the remembered events’. But Total Recall short-circuits this convention: memory happens in the present, as on-going narrative. This in turn means that the distinction between present and past, the distinction which it is memory’s main function to reinforce, vanishes. Memory is no longer of the past, or even rooted in the past; it is refashioned as a technological artifice (‘REKAL’) that configures ‘memory’ as the continuous present, and augments that present-ness by making the happening-now into a continuous adrenalized onward rushing (running, fighting, escaping, plunging on).

This, I think, is the implication of a 21st-Century Funes. A technologically actualised ‘total’ memory could well destabilise the authentic ‘reality’ of the remembered experience. It might mean that we get to set our own selection algorithms for memory recall, such that we only recall those memories that make us happy, or paint us in a good light—that, for instance, reinforce the sense we have of ourselves as action heroes rather than boring 9-to-5ers. It might mean that we erode the difference between ‘real’ memory, the memory of artifice (films we have seen, books we have read) and actual artificial memory itself. This is the old threat of Postmodernism, exhilarating and alarming in equal measure: the notion that simulacra really will come to precede the things they supposedly copy. But I’m suggesting something more. Total memory, as Funes tacitly and Total Recall explicitly say, will transcend the past. It will break down the barrier between past and present, and reconfigure it as a more vital now. It will subsume the particularity of memory and render it wholesale.



from Chapter 3: Memory and Religion

Like Judaism, Christianity and Islam are both memorious religions. Religion need not necessarily be so, I think, but it's presumably not a coincidence that the two biggest religions in the world today are. Roland Bainton argues:
Judaism is a religion of history and as such may be contrasted with both religions of nature and religions of contemplation. Religions of nature discover God in the surrounding universe; for example, in the orderly course of the heavenly bodies, or more frequently in the recurring cycle of withering and resurgence of vegetation. This cycle is interpreted as the dying and rising of a god in whose experience the devotee can share through various ritual acts, and thus become divine and immortal. For such a religion the past is not important, since the cycle of the seasons is the same one year as the next. Religions of contemplation, at the other pole, regard the physical world as an impediment to the spirit which, abstracted from the things of sense, must rise by contemplation to union with the divine. The sense of time itself is to be transcended, so that here again history is of no import. But religions of history, like Judaism, discover God "in his mighty acts among the children of men". Such a religion is a compound of memory and hope. It looks backward to what God has already done ... [and it] looks forward with faith: remembrance is a reminder that God will not forsake his own. [Bainton, The Penguin History of Christianity (volume 1, 1967), 9]
Memory and history are interconnected; history (personal and collective) being what we remember and memory (individual and textual) being how we access history. And when you look at it like that it's quite surprising that it is the religions of history that so dominate human worship. The problematic is a large one, after all: if God intervenes in human history at a certain point in time, what about all the people who happened to be born and to die before that moment? Religions of nature and contemplation can embrace them easily. Religions of history must necessarily come to terms with the ruthlessness of history. History, after all, is famously a winners' discourse. What about the losers? Calling them (say) virtuous pagans, or pretending they simply don't exist, jars awkwardly with Christian and Islamic emphases on the excluded, the underdog and the poor.

Immediate or strictly contemporaneous religions (Scientology, say) tend to seem absurd to us, even though the miracles they declare are no more intrinsically risible than those of Christianity, Islam or Hinduism. The reason this is so, I suspect, is because we are so acculturated to the idea of religious belief working as memory rather than as to-hand experience … or at least not as this latter for most people (ecstatics and schizophrenics excepted, I mean).

As is the case with our memory, many details are omitted, and many contradictions and infelicities reworked into more-than-truly-contiguous narratives. Like memory, religion doesn’t always or even particularly intrude on everyday living—it requires a will-to-contemplation to evoke it, actually, although a properly functioning religion is bound to provide copious aides-mémoire (liturgy, ceremony, Sunday schools and their equivalents and so on) to help in this respect. Consulting family photographs, after all, has a liturgical aspect to it for many of us; in Pixar's Coco (dir. Lee Unkrich 2017) these family photos and their place in the lives of the living literally translate into the wellbeing and status of the dead generations in the afterlife.

I'd suggest that most religion asks us to look back, to honour our mothers and fathers, to worship our ancestors, to consider the origins of life and the cosmos and be thankful for them; but of course there are also portions of religion that ask us to look forward. The believer is to orient her life by her future reward or punishment. The Bible is, by weight, mostly history; but it ends as future-prophecy. Nonetheless, I'd be tempted to argue that the memory-gravity of religion means that those portions of religious practice or thought that have a significant future component end up doing that strange thing of construing future apocalypse as memory … the odd past-oriented backwardness of St John’s revealed future, for instance. Indeed, the more I think about it, the more it strikes me that this is one of the things that science fiction has in common with religion.

Religion endures best in adulthood if it has been impressed upon us in childhood. This means that we are, when we live in faith, steering ourselves according to how we remember our younger days. I suspect something like this is behind Jesus's celebrated ‘except ye be converted, and become as little children, ye shall not enter into the kingdom of heaven’.
......


from Chapter 4: Irrepressible Memory

According to the New Scientist (‘Déjà vu: Where fact meets fantasy’ by Helen Phillips) only 10% of people claim never to have experienced déjà vu (I'm one of that ten per cent, actually). For some people, at the other end of the scale, it becomes a veritable psychopathology:
Mr P, an 80-year-old Polish émigré and former engineer, knew he had memory problems, but it was his wife who described it as a permanent sense of déjà vu. He refused to watch TV or read a newspaper, as he claimed to have seen everything before. When he went out walking he said the same birds sang in the same trees and the same cars drove past at the same time every day. His doctor said he should see a memory specialist, but Mr P refused. He was convinced that he had already been.
The article rehearses arguments from brain chemistry to explain this widespread feeling (perhaps it is indeed ‘the consequence of a dissociation between familiarity and recall’). But I read the article wondering: could something as banal and everyday as this be behind Nietzsche's unflinching adherence to the doctrine of Eternal Recurrence? (A philosophical slogan: ‘Eternal Return, the consequence of a dissociation between familiarity and recall...’) Could memory, Funes-style, prove so strong that it overwhelms us, strong-arms us to the floor? Should we be afraid of memory?

We're not, because our day-to-day experience of memory, as we stand there trying to remember where we put our car keys, or what the second line of Twelfth Night is, or what we even came downstairs for, is of an elusiveness that indexes fragility. To speak in terms of the opposite of this is a convention, but an empty one. Some reviewerish boilerplate, from Jane Yeh in an old edition of the TLS:
... should appeal to a wide readership, given the universal scope of its themes--family tensions, and the adult author's changing relationship to her parents, the power of memory ...
But of course we don't, actually, talk of ‘the power of memory’. Rather, all our experience leads us to the consideration of the weakness of memory. This is not just a question of the feebleness of our powers of recall (the necessary, non-Funes weakness), or the way memory is a sixty-pound weakling compared to the muscular shaping requirements of our preconceptions, our repressive superegos and so on. It is to challenge the idea that simply recalling something is ‘powerful’ in its own right: as if we're sitting in the cinema of our minds in 1890 and are amazed simply by virtue of the fact that anything is projected on the screen at all. It betrays, I suppose, a tacit belief that memory ought not to be able to move us, to influence our present; that we ought to live in a sort of unfettered continuous present. Or maybe it's a simple misprision: for memory read the past. Two things almost wholly unrelated, however often they're confused.

This extends, I think, even to traumatic memories. There are instances where memory overwhelms the rememberer, as in PTSD, but these instances are not the default, even though trauma of varying intensities is the default of ordinary living. Not thinking about things is actually a fairly effective way of dealing with trauma and upset; and not thinking about things can certainly become a habit. But this isn’t the same thing as forgetting, and certainly not the same thing as ‘repressing’ a memory. Freud's insistence that the repressed always returns is more a statement of faith than an evidence-based assertion. I mean, it strikes me as a good faith. It says: nothing stays secret for ever, you cannot bury anything permanently, your true nature will eventually emerge, that affair you had will eventually come to light, those memories you are distracting yourself from don't go away just because you are distracting yourself from them (although, as I say, the distraction can perhaps be prolonged indefinitely). This is a worthwhile ethos by which to live life. It is not true, though. Memories, it seems, are not only sometimes lost; the default position for memories is to lose them, or rather to overwrite them with simplified neural tags or thumbnail versions of the original. We do this to stop our minds exploding, but it means that it is not repressed memory that always returns, but repressed desire (the desire that shaped the recasting of the memory in the first place). That sounds truer; short of neural-surgical intervention, repressed desire always does return ... it just doesn't necessarily return at the same strength. If memory is strong, then total memory would be omnipotent. But if memory is weak, actually, then total memory would follow a different hyperbolic trajectory into nothingness.
......


from Chapter 5: Inaccessible Memory

Coming hard after the previous chapter, and its claim concerning the irrepressibility of memory, the title of this chapter runs the risk of seeming mere trolling. But, if you'll bear with me, I have a particular something in mind.

When we remember something particular from our childhoods, we recognise the specific recollection as memory. When I remember, just now, that I left the iron on, we recognise that as memory too. Both forms of memory have content, and are comprehensible, and that might tempt us into thinking that having content and being comprehensible are two features of memory as such: that if we have a memory that baffles us, then that just means that we haven't contextualised it, or understood it. I think this is wrong. I think most memory, and the most important memory, provides neither of those two things. If we define memory by its accessibility then we rule out from the very concept of memory those memorious processes that are not accessible, even if those processes are vital to memory and mental health.

I'll give you an example of what I mean: dreams as memory.

I need to be specific, here. We all dream, and sometimes we remember what we dream and sometimes we don't. But those remembered-dreams are second-order memories, friable attempts to translate one kind of (non-rational, not consciously controlled) mental process into another that is quite different. I'm not talking about our memories of dreams; I'm talking about our dreaming as itself an iteration of memory.

Because of course dreams are a way of remembering stuff, often the stuff that happened in the day. We know that dreams ‘process’ the events of the day (and sometimes other days) and our anxieties and desires pertaining to them—we process these events, in other words, by remembering them in this peculiar way we call dreaming. More, we know that if we are prevented from dreaming we die. Torturers from ancient Rome to the CIA have long known this. Doctors diagnose the rare but real condition fatal insomnia: ‘a neurodegenerative disease eventually resulting in a complete inability to go past stage 1 of NREM sleep. In addition to insomnia, patients may experience panic attacks, paranoia, phobias, hallucinations, rapid weight loss, and dementia. Death usually occurs between 7 and 36 months from onset.’ If I fail to remember where I put my car-keys, even if I permanently fail to remember this thing, it will not kill me. In this sense dreaming-as-remembering is much, much more important than remembering-as-conscious-recall.

If we don't tend to think of dreams as a fourth kind of memory (alongside sensory memory, short-term memory and long-term memory) it's because we are hamstrung by a prior assumption that memory must be accessible and conscious to count as memory. But I wonder if the absolute physiological necessity of dreaming, and the relative disposability of the other three kinds of memory (for even patients with severe neurological decline who have lost both long- and short-term memory can carry on living otherwise just fine) suggest that not only are we ignoring a vital kind of memory, we have got the relative importance of these things entirely the wrong way about. What if, instead of dreams being a shadowy and dislocated imitation of ‘real’ memory, long-term and short-term memory are both the fundamentally inessential tips of a much larger subconscious iceberg? Perhaps most of our remembering happens unconsciously, inaccessibly, in somnicreative form?

I say so in part, of course, because it situates my earlier claims that fiction (that art, that culture in the broadest sense) is a mode of memory in both the individual and the collective sense. But these processes of memory are not directly analogous to what happens in our brains when we retrieve either recent or archived memories. They are closer to somatic memory, except that they are rarely actually somatic. And they are, I think, the bulk of memory as it figures.

Say, rather than repressing or purging our memories, we are (short of surgical interventions that literally excise portions of our brain) remembering all the time, in a nexus of ways that are inaccessible, or largely inaccessible, to our conscious minds. Say that this process of continuous, paraliminal remembering actually constitutes our consciousness: is the bulk of what consciousness means for a living being, and that the stuff we consciously think of, the stuff of which we are aware and over which we exercise a degree of mental control, is the excrescence, the bit of that process that pokes out into the realm of self-aware mentation.

Perhaps this seems far-fetched to you. I can see why. We can, after all, only discuss memory in the idiom of consciousness and rationality. If the bulk of memory actually happens outwith those two territories then it's hard to see what we can usefully say about it. It's like Kant's exhaustive but tentative groping around the shape the inaccessible Ding an Sich leaves in the accessible but fallible and untrustworthy spread of human perceptions. By what process do we transpose the alien idiom of memory we call ‘dream’ into the graspable idiom of consciousness as such?

Adam Phillips, in Terrors and Experts, says this about the interpretation of dreams: ‘a dream is enigmatic—it invites interpretation, intrigues us—because it has transformed something unacceptable, through what Freud calls the dream work, into something puzzling. It is assumed that the unacceptable is something, once the dream has been interpreted, that we are able to recognize and understand. And this is because it belongs to us; we are playing hide-and-seek, but only with ourselves. In the dream the forbidden may become merely eccentric or dazzlingly banal; but only the familiar is ever in disguise. The interpreter, paradoxically—the expert on dreams—is in search of the ordinary.’ [64]

But why must the extraordinary be turned into the ordinary? That sounds like false reckoning (or false translation) to me. The implication here is 'because it started out that way'; but that's surely not true: dreams are as likely, or are more likely, to grind their metaphorical molars upon extraordinary aspects of our life. The perfectly habitual aspects of it won't snag the unconscious's interest. So could it be that dream-interpreters turn the extraordinary into the ordinary only because the ordinary sounds more comprehensible to us, because it produces the sort of narrative the dreamer prefers to wake up to? (‘...those skinny cattle eating the fat cattle and not getting fat? That's about harvests, mate.’) But if the currency of dreams is the extraordinary, common sense suggests that the interpretation of dreams should be extraordinary too—suggests that the function of the dreaming is bound-up with its extraordinariness. The sense of recognition Phillips is talking about here, that ‘aha! that's what it means!’ is all about the transcendent rush, the poetry, not about the mundanity. But the very fact that it's a rush, the very thrill of it, ought to make us suspicious. It is not the currency of true memory to elate us, after all. It's cool, but it's not the truth.

This is the flaw in the Biblical narrative of Joseph: his dreams are too rational, too strictly allegorical. They don't have the flavour, the vibe, of actual dreams. We can, I think, tell the difference between a report of an actual dream and the faux-dream confected for, as it might be, a novel. Writer C K Stead says as much: ‘In my most recently published novel I decided one or other of the central characters should experience or remember a significant dream in each of seven chapters. When I tried to invent these they seemed in some indefinable way fake; so I hunted through old notebooks and found dreams I had recorded which could be used with a minimum of alteration.’ Most writers will know what he means. It's one reason I like this Idries Shah story:
Nasrudin dreamt that he had Satan's beard in his hand. Tugging the hair he cried: 'The pain you feel is nothing compared to that which you inflict on the mortals you lead astray!' And he gave the beard such a tug that he woke up yelling in agony. Only then did he realise that the beard he held in his hand was his own. [Shah, The World of Nasrudin (Octagon Press 2003), 438]
One of the things that's cool about it is the way it captures the feel of an actual dream. But mostly, of course, it's the implication that our subconscious not only understands but is capable of timing the revelation comically to deflate the dark grandeur of our secret fantasies. Nasrudin's dream knows more about Nasrudin than he does, I think.  And by extension all our dreams know more about all of us, and remember more about all of us, than we do ourselves.

This, I think, is the most compelling part of recentering dreams in our accounts of memory. Because doing so recognises the extent to which we are all artists.
The beauteous appearance of the dream-worlds, in the production of which every man is a perfect artist, is the presupposition of all plastic art, and ... half of poetry also. We take delight in the immediate apprehension of form; all forms speak to us; there is nothing indifferent, nothing superfluous. But, together with the highest life of this dream-reality we also have, glimmering through it, the sensation of its appearance: such at least is my experience, as to the frequency, ay, normality of which I could adduce many proofs, as also the sayings of the poets. ... And perhaps many a one will, like myself, recollect having sometimes called out cheeringly and not without success amid the dangers and terrors of dream-life: “It is a dream! I will dream on!” I have likewise been told of persons capable of continuing the causality of one and the same dream for three and even more successive nights: all of which facts clearly testify that our innermost being, the common substratum of all of us, experiences our dreams with deep joy and cheerful acquiescence. [Nietzsche, Birth of Tragedy (transl. Hausmann), 23-24]
Blake was fond of the verse from Numbers (11:29) ‘would to God that all the Lords people were Prophets!’ I feel the same, but for artists. And the ongoing progression of Moore's Law, and the interpenetration of our lives with technology that facilitates our expression, bring that utopia, that mode of remembering, ever closer. Lord, as Dickens prayed, keep my memory green.


from the Afterword.

Some aspects of the ever-increasing technological interpenetration of our lives cater to our conscious minds. Some address our subconscious. It may be worth speculating as to what the version of memory argued for here—a total memory predicated upon continuing improvements in processing power, one that encompasses not only ordinary information-retrieval instances but also a larger collective artistic or religious communal memory, and even (perhaps) the buried part of the iceberg of memory to which we don't have access—would look like in practice. It might free us from the vagaries of physiological memory, its vulnerabilities and intermittencies. By the same token it might cast us upon the not-so-tender mercies of algorithms. Memory might become the province of the strategies of control of the congeries of State Power currently asserting dominance.

There are two current-day strategies here. One is the Nineteen Eighty-Four approach typified by contemporary China, which believes in top-down authoritarian domination of online activity via restriction, censorship and punishment. The other, much more widely pursued, is the Brave New World approach of the West, where punters are told they are free to frolic in unlimited online pastures when in fact a combination of targeted nudging, ever-evolving algorithms and the sheer soma-like excess of hedonistic online content actually confines and herds the user even more effectively than Chinese top-down control. The ‘internet’ (to generalise ridiculously) can be wielded by Foucauldian Power to, say, ensure Brexit or the election of Donald Trump, to promote certain socio-cultural memories and excise others, and all without the apparatus of apparent oppression. Those who are conscious of oppression and who can see their oppressors are, in a sense, better off (because they at least have a clear target) than those who are oppressed but can neither identify a specific tyrant nor even be sure that they are oppressed. As technology becomes an increasing part of our memory, on the hyperbolic path towards total memory, these latter strategies might easily become constitutive of memory as such. Our future memories might well become bizarre hybrids of actual remembrance and Orwellian memory-hole, and the fact that we won't necessarily even be aware of these controlling dynamics might well align this new memory with the buried portion of our actual memory, our dream-memory and other unconscious memorious drives.

It's not, I concede, a hopeful prognosis. Of course, I may be wrong, may (indeed) be profoundly wrong. But it seems to me, looking around, that we have already tacitly conceded that our collective consciousness (I thumbnail this as ‘the internet’ but it's larger than that) is already apprehending important social and political questions not by ratiocination but according to a set of unconscious processes not strictly accessible to our conscious wills. We are, in other words, already remembering our past—and so, shaping our future—in the way dreams remember things rather than the way consciousness remembers things, and I see no reason why that might not intensify into the future. Our tech, I would hazard, will bed that in. We will increasingly dream our memories, both individual and collective, and do so much more comprehensively thanks to technology. In a late poem, the great Les Murray seems to put his finger on something.
Routines of decaying time
fade, and your waking life
gets laborious as science.

You huddle in, becoming
the deathless younger self
who will survive your dreams
and vanish in surviving.
I wonder.