‘Could a rule be given from without, poetry would cease to be poetry, and sink into a mechanical art. It would be μόρφωσις, not ποίησις. The rules of the IMAGINATION are themselves the very powers of growth and production. The words to which they are reducible, present only the outlines and external appearance of the fruit. A deceptive counterfeit of the superficial form and colours may be elaborated; but the marble peach feels cold and heavy, and children only put it to their mouths.’ [Coleridge, Biographia ch. 18]

‘ποίησις’ (poiēsis) means ‘a making, a creation, a production’ and is used of poetry in Aristotle and Plato. ‘μόρφωσις’ (morphōsis) in essence means the same thing: ‘a shaping, a bringing into shape.’ But Coleridge has in mind the New Testament use of the word as ‘semblance’ or ‘outward appearance’, which the KJV translates as ‘form’: ‘An instructor of the foolish, a teacher of babes, which hast the form [μόρφωσις] of knowledge and of the truth in the law’ [Romans 2:20]; ‘Having a form [μόρφωσις] of godliness, but denying the power thereof: from such turn away’ [2 Timothy 3:5]. I trust that's clear.

There is much more on Coleridge at my other, Coleridgean blog.

Friday, 25 March 2016

The Beauty/Truth Equivalence


Lots of famous people attended Coleridge's 1811-12 course of lectures on Shakespeare and Milton: Hazlitt, Crabb Robinson, Aaron Burr, Mary Russell Mitford, Samuel Rogers and Lord Byron. But as Richard Holmes notes there were two important absentees: ‘the seventeen year-old John Keats, who had just begun attending surgical lectures at St Thomas’s Hospital, across the river by Westminster Bridge; and the nineteen-year old Percy Bysshe Shelley, who had just eloped with his first wife Harriet to Edinburgh’ [Holmes, Coleridge: Darker Reflections, 267]. That means that when, in lecture 8, Coleridge talked of 'Shakespeare the philosopher, the grand Poet who combined truth with beauty and beauty with truth', John Keats was not one of those who heard him. And since the lecture was not published in Coleridge's lifetime, he can't have read the words either. Still, it's a striking coincidence that Keats's most famous poem builds to precisely that equivalence: 'Beauty is truth, truth beauty,—that is all/Ye know on earth, and all ye need to know.'


The beauty/truth equivalence has always fascinated me, in part because I'm really not sure what it means. And because I'm spending time at the moment reading the proofs of a forthcoming EUP edition of Coleridge's Lectures on Shakespeare, I wondered a little if maybe, on the evening of the 12th December 1811, Keats bunked off his actual studies and crossed the river to hear this. Not likely, though. And, deciding to rummage around a little in the eighteen-teens, I soon discovered that he didn't need to hear these words fall from Coleridge's lips. Because the truth-beauty equivalence was everywhere.

It could have been, for instance, that Keats had been reading Mark Akenside's long didactic poem The Pleasures of the Imagination, first published in 1744 and often reprinted (for instance in 1819, when Keats was drafting his Ode). Here's the 1819 publisher's argument to the poem:

Or maybe Keats browsed a little in the new 1816 translation of Proclus, the Platonist, who has a great deal to say about 'the triad symmetry, truth, and beauty':
If, however, truth is indeed the first, beauty the second, and symmetry the third, it is by no means wonderful, that according to order, truth and beauty should be prior to symmetry; but that symmetry being more apparent in the first triad than the other two, should shine forth as the third in the secondary progressions. For these three subsist occultly in the first triad ... For we have spoken of these things in a treatise consisting of one book, in which we demonstrate that truth is co-ordinate to the philosopher, beauty to the lover, and symmetry to the musician; and that such as is the order of these lives, such also is the relation of truth, beauty, and symmetry to each other. [192-93]
This is less improbable than you might think: the translation in question was by Tom Taylor, brother to Keats's friend and correspondent John Taylor.

Indeed, the more I look, the more it strikes me that loads of people in the eighteen-teens were debating the Beauty-Truth equivalence.




Thursday, 24 March 2016

Imitating Taylor Imitating McGregor Imitating Guinness Imitating



:1:

On the (rare) occasions when I teach cinema rather than my usual literature, I have been known (rarely) to offer students a more-or-less polemical abridged history of 20th-century film and television: an epitome of modern visual culture in three individuals. Given how saturated we all are, nowadays, in visual culture, how many tens of thousands of hours of TV and YouTube and movies and so on we have all assimilated before we even reach the age of majority, it's easy to forget how counterintuitive the visual text is. 20th- and 21st-century visual texts like films and TV shows are more different to the sorts of visual media that preceded them than they are similar to them: watching a play is not a nascent form of watching a movie; animated cartoons are much more than paintings that move. At any rate, I suggest that the three key innovations can be thumbnailed as: Eisenstein; Griffith; Chaplin. That's three men who, between 1915 and 1925, established the parameters that in the most crucial sense distinguish modern 'visual culture' texts from older, literary, theatrical and painterly ones.



Eisenstein is significant for in effect inventing key elements of the visual grammar of film: most famously montage, or the assemblage of images and sequences linked by jump-cuts. Early theorists were astonished by the effectiveness of the jump-cut: Benjamin, in his 'The Work of Art in the Age of Mechanical Reproduction' essay (1936) argues that jump-cuts are so deracinating for the ordinary sensorium, and so widely disseminated through the new mass-media, that they would accomplish nothing short of a revolution in human life. Film, he argues,
affords a spectacle unimaginable anywhere at any time before this. It presents a process in which it is impossible to assign to a spectator a viewpoint ... unless his eye were on a line parallel with the lens. This circumstance, more than any other, renders superficial and insignificant any possible similarity between a scene in the studio and one on the stage. In the theater one is well aware of the place from which the play cannot immediately be detected as illusionary. There is no such place for the movie scene that is being shot. Its illusionary nature is that of the second degree, the result of cutting. That is to say, in the studio the mechanical equipment has penetrated so deeply into reality that its pure aspect freed from the foreign substance of equipment is the result of a special procedure ... The equipment-free aspect of reality here has become the height of artifice; the sight of immediate reality has become an orchid in the land of technology.... Thus, for contemporary man the representation of reality by the film is incomparably more significant than that of the painter, since it offers, precisely because of the thoroughgoing permeation of reality with mechanical equipment, an aspect of reality which is free of all equipment.
As it turned out, Benjamin was wrong. The film jump-cut has not shaken human sensibilities free of the bag-and-baggage of 'traditional' visual representation, and the reason it has not is that, in fact, jump-cuts mimic the process of human perception very closely. That might look like a counterintuitive thing to say, so I'll expand on it a little. We might consider that the cut performs a kind of violence to the 'natural' business of looking, since its essence is moving without interval from one thing to an unadjacent and possibly unrelated thing. In fact the way our eyes and brains collaborate to piece together their visual apprehension of the world much more closely resembles an Eisensteinian 'cut' than it does a slow pan. If we turn our head from left to right, our eyes remain on the original object occupying our visual field, counter-swivelling in the eye sockets as the head turns, until they reach a point at which they flick very quickly across to something else in the visual field until the new object is acquired. They then fix on Object B as the head continues to turn. The brain reads this process not as 'Object A, blurry motion, Object B...' but as a straight jump-cut from Object A to Object B. Try it at home if you don't believe me. I move my head from looking at this screen, to the right of my computer where the half-drunk mug of tea sits on the desk, then to the bookshelves, and finally to the door, and what my brain 'sees' is these four items as discrete elements, all cut together. I don't pan smoothly all the way round. That's not how the eyes and the brain 'see'.

I don't mean to go on: but the way that Eisensteinian 'montage' works is not to permeate our consciousness with a radical new mechanical sensibility, as Benjamin thought; but actually to bring the visual experience of watching film closer to reality than was ever the case in (say) watching a play in the theatre. It's not just a more dynamic visual experience, it's a visual experience radically defined by its dynamism, formally speaking. And that—rather than just the fact that film is images that move—is the thing that is new in human culture. No mode of representation in the history of humankind had been able to do that before. Of course, montage and the cut are now so deeply integrated into visual texts, and we are exposed to so much of both from such an early age, that the technique 'feels' like second nature. It's not, though; and my shorthand for that aspect of visual culture is 'Eisenstein'.


My second name is 'Griffith', which is to say D W Griffith, and I invoke him as a shorthand for two things, both embodied by his immensely successful movie Birth of a Nation (1915). Not its deep-dyed and ghastly racism, nor the part it played in revitalising the Ku Klux Klan, difficult though it is to separate the other aspects of the film from that horrid heritage; but two things that the success of the movie baked into 20th-century cinematography: the 'feature' length of the movie as a two-hours-or-so experience, and the sheer spectacle of the battle scenes. The former has become simply a convention of movies, and a more or less arbitrary one; but the latter has come absolutely to dominate cinema. All the highest grossing movies now offer their audiences spectacle on a colossal scale; and the rise of CGI has proved the apotheosis of spectacle as such. Griffith achieved his effects with vast casts and enormous sets, especially on his follow-up film Intolerance (1916), the most expensive film made to that point; and although it seems it wasn't quite the colossal flop it was once thought to have been (it just about recouped its enormous outlay), it set a precedent for spectacular overspend. In other respects Griffith was a very un-Eisensteinian director, specialising in panoramic long shots and slow camera pans. He tended to punctuate scenes with iris effects, and his art is much closer to old tableaux vivants (though on an unprecedented scale) than Eisenstein's. But so spectacular! Visual spectacle is not unknown before cinema, of course: there were plenty of spectacular stage shows and circuses in the 19th and early 20th centuries. But cinema proved simply better at this business than the stage. Indeed, I sometimes think that 'spectacle' has become such a bedrock of modern cinema culture that it makes sense to see 21st-century film as the development of narrative and character specifically via the idiom of spectacle rather than via any other idiom.



The third name, Chaplin, is of course here to represent a new kind of celebrity. Indeed, it's hard to overstate how famous Chaplin was, in his heyday. His films have neither Eisenstein's formal panache, nor Griffith's scope and splendour: but what they do have is Chaplin himself, a performer of genius. 'Chaplin had no previous film acting experience when he came to California in 1913,' as Stephen M. Weissman notes. Nonetheless, 'by the end of 1915 he was the most famous human being in the entire world.' He was more than famous, in fact: he invented a new mode of 'being famous': a hybrid of professional achievement and press-mediated personal scandal, blended via a new global iconicity. The closest pre-mass-media equivalent might be Byron; a close-running 1920s contemporary might be Valentino; but Chaplin surpassed both, and there's no 21st-century figure who even approaches his level of early 20th-century fame. He was the first star as superstar, the first great celebrity as brand, a global VIP whose image is still current today.

So. OK: the danger, with a deliberately simplified thesis like this, is that students will take it as an ex cathedra pronouncement wholly describing the history of early 20th-century cinema. It is, clearly, very far from being that. I would, however, be prepared to defend the case that the three quantities represented by these three individuals are what differentiates 20th-century visual culture, in the broadest sense, from its historical predecessors: theatre, tableaux vivants, dance, painting, sculpture and illustration, magic lanterns and so on. It's not only that cinema and, later, TV (and latterly games, online culture and so on) have proved massively more popular, and have evinced much deeper global penetration, than the earlier visual forms, although clearly they have. It's that these latter qualities are the result of the medium's formal expression of those three elements: a new expressively dynamic visualised logic; new and ever more sublime possibilities of spectacle; and a new recipe of celebrity.

These are large questions, of course, and they have been more extensively discussed than almost any feature of contemporary culture. In The Senses of Modernism: Technology, Perception, and Aesthetics (2002) Sara Danius summarises the broader currents:
For a theory of the dialectics of technology and perceptual experience, one could use as a starting point Marx's proposition that the human senses have a history. The cultivation of the five senses, Marx contends, is the product of all previous history, a history whose axis is the relation of human beings to nature, including the means with which human beings objectify their labour. One could also draw on the theory of perceptual abstraction implicit in Walter Benjamin's writings on photography, mechanical reproducibility and Baudelaire's poetry. Benjamin's theory could then be supplemented with Guy Debord's notion of the society of spectacle, or, for a more apocalyptic perspective, Paul Virilio's thoughts on the interfaces between technologies of speed and the organization of the human sensorium. One might also consult Marshall McLuhan's theory of means of communication which, although determinist, usefully suggests that all media can be seen as extensions of the human senses; and Friedrich Kittler's materialist inquiry into the cultural significance of the advent of inscription technologies such as phonography and cinematography.
A pretty good list of the usual 'Theory' suspects, and we could springboard from there in a number of different directions. For the moment, though, I'd like to try to think through a couple of somethings occasioned by watching my 8-year-old son at play in the world of visual and digital media.


:2:

Which brings me to autobiographical anecdote. My lad is pretty much like any other 8-year-old. So, for instance, he watches a ton of TV and he likes to play in both the somatic sense (I mean, with his body: running around, climbing stuff, larking about) and in the digital sense. He has a PS3 and his favourite game at the moment is Plants Versus Zombies. But even more than playing this game, he enjoys watching other people play video games; and by 'other people' I mean people on YouTube. He will gladly spend hours and hours at this activity, and it fascinates me.

It fascinates me in part because it seems, to my elderly sensibilities, a monumentally boring and pointless activity. I can see the fun in playing a game; my imagination fails me when it comes to entering into the logic of watching videos of individuals with improbable monikers such as Stampylongnose and Dan TDM playing Minecraft, talking all the time about what they are doing as they do it. Partly I think this is because it seems to me to entail a catastrophic sacrifice of agency, when I suppose I'd assumed agency to be core to the 'fun' of play. But my boy is very far from alone. The high-pitched always-on-the-edge-of-hilarity voice of Stampylongnose ('hellooo! this is Stampy!') might set my teeth on edge as it once again echoes through our house; but Joseph Garrett, the actual person behind the YouTube persona, hosts one of the ten most watched YouTube channels in the world. He is huge. He is bigger, in terms of audience, than many movie stars.

This is clearly a significant contemporary cultural mode, and I wonder what it's about. It may have something to do with the medium-term consequences of Benjamin's loss of 'aura', something he discusses as a central feature of modern culture: that the new art exists in a radically new way, not only as a form of art that is mechanically reproducible, but as a whole horizon of culture defined by its mechanical reproducibility. This is the starting point for Baudrillard's later meditations on the precession of the simulacra; because before it is anything else, the simulacrum is the icon of perfect mechanical reproducibility. 'Reality,' Manuel Castells grandly declares, 'is entirely captured, fully immersed in a virtual image setting, in a world of make-believe, in which appearances are not just on the screen through which experience is communicated, but they become the experience' [Castells, The Information Age: Economy, Society and Culture (2000), 373].  For Baudrillard, of course, this entails a weird kind of belatedness, in which the simulation which once upon a time came after the reality now precedes it. Castells is saying something more extreme, I think: that reality and simulation are now the same thing. For my son, this may be true.

Not that the boy spends literally every waking hour on a screen (part of our responsibility as parents is ensuring that he doesn't, of course). There was one time when he was actually (that is, somatically) playing: wielding an actual toy lightsaber and doing a lot of leaping about. I was the bad guy, of course ('I am your father!' and so on). In the course of this game, my boy adopted a slightly strangulated posher-than-normal voice and said something along the lines of: 'Anakin, take the droids and bring up that shuttle!' And I recognised the line as something he had heard from an episode of the animated Star Wars: The Clone Wars series that he had been watching. My son was playing at being Obi-Wan Kenobi, based on what he had seen of the character from that show.


I confess I was very struck by this. Kenobi in this show is voiced by the US actor James Arnold Taylor, and my son was doing an impression of him. But of course Taylor is himself, in this role, doing an impression of Ewan McGregor from the Star Wars prequel movies. And McGregor in those films is doing a kind of impression of Alec Guinness. So my son was copying James Arnold Taylor copying Ewan McGregor copying Alec Guinness. Baudrillard might call this a fourth-order simulation, and talk about how distinctively postmodern it is. Indeed, he would probably go further and cite it as an instance of the precession of simulacra: for when my lad finally watched Star Wars: A New Hope he was disappointed at how little like the 'actual' Kenobi this stiff old geezer playing him was.

A related question is the extent to which Guinness himself, born an illegitimate child in a rented Maida Vale flat, was 'imitating' something when he spoke, after the manner of the upper-middle-class-aspirational elocution-lesson-taking ethos of his time and social milieu. I'll come back to that.

Still, however lengthy this chain of simulation grows, it follows a recognisably linear, domino-tumble logic of simulation. The boom in watching YouTubers playing video games seems to me something else. Of course it's tempting simply to deplore this new cultural logic, as Matthew B. Crawford does in his recent The World Beyond Your Head: On Becoming an Individual in an Age of Distraction (2015). Diana Schaub summarises: Crawford's book anatomises 'the fragile, flat, and depressive character of the modern self and the way in which this supposedly autonomous self falls ready prey to massification and manipulation by corporate entities that engineer a hyper-stimulating, hyper-palatable artificial environment. Lost in the virtual shuffle are solid goods like silence (a precondition for thinking), self-control (a precondition for coherent individuality), and true sociality.' In other words, this new visual logic is the symptom of something pathological in modern subjectivity itself:
Crawford is particularly good at showing how forms of pseudo-reality are being deliberately manufactured, not in obedience to some inner dynamic of technological progress, but rather in accord with “our modern identification of freedom with choice, where choice is understood as a pure flashing forth of the unconditioned will.” Freedom, thus conceived, is essentially escapist; we seek to avoid any direct encounter with the tangible world outside our elaborately buffered me-bubble. I saw this impulse at work when our young son would flee from the pain of watching a game his Orioles seemed destined to lose, retreating to a video version of baseball in which he could basically rig a victory. He preferred an environment he could control to the psychic risk of real engagement. I thought we had done enough by setting strict time limits and restricting his gaming options to sports only. But it became obvious that the availability of this tyrannical refuge was an obstacle to his becoming a better lover of baseball (and, down the road, a better friend, husband, and father).
This is rather too pearl-clutching for my taste. Which is to say, I don't think it's true, actually, that these new modes of visual media will result in a whole generation incapable of interacting with real life as friends, spouses and parents. But there's something here, isn't there? I wonder if there is some quite radical new mode of art being inaugurated.

Returning, then, to my initial three figures, and the pared-down, cartoonified narrative about 'visual culture in the 20th/21st centuries' they embody: the thing is, I look at video games, and at the paratexts of video games (like YouTube playthrough videos), and I see none of the three present. Video games come in various formal flavours, but none of those flavours make much use of montage or jump-cuts, at least not in gameplay. First-person shooters fill the screen with what is notionally the player's-eye-view of the topography through which s/he moves; but this view moves according to a formal logic of pans and tilts, not according to jump-cuts. With platformers we track the motion of Mario (or whoever) left to right, up, down and so on. The visual field of games, which used to be Space-Invaders rudimentary, now tends to be very rich, busy with detail, high-def, cleverly rendered, complex; but the formal visual grammar of games and gaming tends to be pre-Eisenstein. That's really interesting, I think.

Similarly, games are rarely if ever spectacular. They often construe very large topographies, landscapes a player can spend weeks exploring; but they almost never approach sublimity; they don't do what cinema can do on the spectacle front. One of the differences between playing one of the many Lord of the Rings video games and watching Jackson's trilogy is that in the former Middle Earth comes over as an extensive (even interminable) game-space, whereas in the latter it comes across as a spectacular, actual landscape. This is not to say that games are simplistic visual texts. Many of them are very unsimple, visually. But their complexity is all on the level of content: the multifariousness of elements to be played, the visual rendering, the complexity of the through-line play. It is not complexity on the level of textual form.

And games do not make superstars, not in the sense that movies and, to a lesser extent, TV make superstars. The odds are you hadn't heard of Stampylongnose before I mentioned him seven paragraphs earlier; but even if you did happen to recognise the name, I'd be surprised if you could have put a face to it, or would have been able to place the guy's real name, Joseph Garrett. To repeat myself, though: he is one of the top ten most watched YouTubers in the world. He earns £260,000 a month doing what he does. That's more than three million quid a year. A film actor who could command a £3 million fee for a movie would be a household name. This sort of visual culture is different to cinema. Of course, when something is this lucrative, celebrities want in:
Kim Kardashian has a video game ... Bjork has one. Taylor Swift announced a game on February 3, and Kanye West announced his own on February 11. These announcements have been met (at least in my circles) with a mix of disbelief and mockery.
'Kardashian's game promises to give you the experience of being Kim Kardashian.' Pass. These people will surely make money, but they will not get to the heart of the game experience. Gaming, as a culture, is not predicated upon celebrity in the way that film and TV are.

To be clear: I am not saying any of this to denigrate games and gaming. Video games are clearly a major cultural force, and many game texts are aesthetically fascinating, significant works of art. But I am saying that games are, formally and culturally, quite different to films and TV; and I think I am making that assertion in a stronger form than it is sometimes made. Broadly speaking, attempts to cross over from games to films have flopped; and cash-in games made to piggy-back on the success of movies have a chequered history. The biggest games, from GTA to Minecraft, from Space Invaders to Doom to FIFA, from Myst to Candy Crush to Twitter, are their own things.

I used to believe that the key difference between cinema/TV and games is that the former are passive and the latter active. I no longer believe that. In part this is because the former really aren't passive: fandom's textual poachers engage with their favourite texts actively, inventively, remixing and reappropriating, cosplaying and fanfictionalising. And, contrariwise, I wonder if the appeal of games is less to do with the play-engagement and more with the simple immersion, something just as easily achieved—or more so—by watching play-through videos. YouTube, the second most accessed website in the world (after Google), and barely ten years old, was bought by Google, back in 2006, for $1.65 billion; and although corporations and film companies do post their content to the site, the overwhelming driver of content is ordinary people, fans, gamers and so on. Hardly passive.

So when it comes to games, in what are kids immersing themselves? The specific game-worlds, of course, but something else too. Minecraft has been described as 'infinite lego', but it has the advantage over actual lego not only that it supplies an inexhaustible quantity of bricks, bombs, landscapes and so on, but that it is unconstrained by the quiddity of reality itself. Its famous low-res graphics emphasise this: it is not simulating a session with actual lego in the sitting room, it is establishing its own world. Its graphics are about separating its simulation from the belatedness of simulation. My favourite Minecraft story concerns the players who assemble imaginary components inside the virtual world of the game to build working computers: hard drives, screens, programmable devices, some of which are as big as cities. This is, of course, both cool and monumentally pointless. But it's more than that: it suggests a revised model of the old Baudrillardian precession of simulacra, which in turn, or so I am suggesting, explains the unique appeal of this mode. It is a logic of simulation closer to embedding than the old Pomo mirrorverse horizontal string of simulation.

From my point of view, my son copying James Arnold Taylor copying Ewan McGregor copying Alec Guinness represents a linear chain of simulation, because I saw Alec Guinness first, murmuring 'an elegant weapon, for a more civilized age' and 'these aren't the droids' in his RADA-trained R.P. voice. Then, many years later, I saw McGregor doing that thing Scotsmen all think they can do, and affecting a strangulated posh-o English accent. Later still I became aware of the animated Clone Wars cartoons. So the whole thing lays out a trail that I can trace back. For my son's generation it is not like that: lacking this linear meta-context, simulation for him is centre-embedded. I'll say a little more about this, but with reference to an old science fiction novel rather than a work of professional linguistics: Ian Watson's great The Embedding (1973). Watson lays out the distinction between linear syntax and embedded forms. We can follow 'This is the maiden all forlorn/That milked the cow with the crumpled horn/That tossed the dog that worried the cat/That killed the rat that ate the malt/That lay in the house that Jack built'; and we can continue to follow it, no matter how many additional clauses are added. But syntactically embed the units and it becomes much trickier:
'This is the malt that the rat that the cat that the dog worried killed ate.' How about that? Grammatically correct—but you can hardly understand it. Take the embedding a bit further and the most sensitive, flexible device we know for processing language—our own brain—is stymied. [Watson, The Embedding, 49]
I wonder if the logic of simulation that my son is growing up with isn't more impacted, more structurally embedded, than the kind of simulation Baudrillard theorised. And I wonder if he isn't perfectly fine with that. All these copies are embedded inside one another in the paradoxical topography of the virtualised world. This may explain why these aspects of contemporary culture combine novelty with cultural ubiquity the way they do. They are construing the consciousnesses that can enjoy them.
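Watson's contrast can be made mechanical, if you like. Here is a minimal Python sketch (my own toy illustration, nothing from Watson or from linguistics proper) that builds both versions of the sentence from the same pieces: the linear version simply appends each new clause to the end, while the centre-embedded version nests every clause inside the previous one, so that all the verbs pile up at the end and have to be resolved, stack-like, by the reader.

```python
# Toy illustration of linear vs. centre-embedded syntax, using the
# "house that Jack built" pieces quoted above. Not Watson's code: mine.

agents  = ["the dog", "the cat", "the rat"]
verbs   = ["worried", "killed", "ate"]   # agents[i] did verbs[i] to the next noun down
patient = "the malt"

# Linear, right-branching version: each new clause simply hangs off the end,
# as in the nursery rhyme, so the sentence stays easy to follow at any length.
linear = "This is " + " ".join(f"{a} that {v}" for a, v in zip(agents, verbs)) + " " + patient
print(linear)
# -> This is the dog that worried the cat that killed the rat that ate the malt

# Centre-embedded version: every clause is pushed inside the previous one,
# so the verbs stack up at the end and the reader has to hold them all in memory.
embedded = "This is " + patient + " that " + " that ".join(reversed(agents)) + " " + " ".join(verbs)
print(embedded)
# -> This is the malt that the rat that the cat that the dog worried killed ate
```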

There still is a real world, of course; however fractal the newer logics of simulation grow, we are still anchored somewhere. Where though? Whom is Alec Guinness imitating, anyway? I suppose a Baudrillardian would say: class, in an aspirational sense of the term. Which is to say: ideology. Which is to say: the force that drives the maximisation of profit and works to smooth out obstacles to that process. But two things occur to me, here. One is that this is not, whatever Matthew B. Crawford argues, something that video games culture has invented. Rather it is the horizon within which all culture happens; and the purpose of Guinness's painstakingly upper-middle-class accent was to smooth over the jagged realities of class and wealth inequality. To wave the hand, to convince us these aren't the realities we are looking for. Since those realities are (as Jameson might say) the whole reality of History, eliding them is an attempt to occlude history. Sometimes this is called 'postmodernism', hence Baudrillard.

But I'm not sure this is all there is. In terms of the 'real world' logic, hard as it is to gainsay, by which Star Wars: A New Hope (1977) was released before the Star Wars: The Clone Wars animated series (2008), the Baudrillardian chain of simulation is a straightforward thing. But, of course, in terms of the in-text logic, the old-man Obi-Wan played by Guinness comes after the young-man Obi-Wan voiced by James Arnold Taylor. I don't want to make too much of this, except to say that 'The child is father of the man' is a peculiarly appropriate slogan for a series, like Star Wars, so deeply invested in paternity and filiality. Rather I'm suggesting that, on the level of representation, the question of who is imitating whom where Kenobi is concerned is more complicated than you might think. Lucas wrote the part with Toshiro Mifune in mind; the Jedi were originally a sort-of Samurai caste, hence Obi-Wan's Japanese-style name. Guinness's urbane upper-middle-class English performance, clearly, is not 'imitating' Toshiro Mifune, except insofar as Lucas's script constrains him within that larger logic of cultural appropriation. But an advantage in thinking about the logic of simulation that applies here in terms of an embedded, rather than a linear, chain is that it leaves us free to think in more involved ways about precedence and antecedence, about simulation and originality, in this context. In this sense we might want to argue: what Guinness is simulating, in his performance, is simulation itself.

All of this relates most directly, I think, to the way video games are reconfiguring the logic of the dominant visual modes of contemporary culture. The three pillars on which Old Cinema was erected, and to which I have, at the beginning of this too, too lengthy post, attached names, tended to emphasise an intensified temporality: the more dynamic visual rendering of time, the excitement of spectacle, the cultic thrill of celebrity. But if the balance has been towards the time-image (think of all the time-travel movies; think of 'bullet-time'), then with games I wonder if we are not seeing a return to a more spatialised logic.

The kinetic montage and vitality of the cut; an unprecedented scope and scale of the spectacular; a new level and iconicity of superstar celebrity. On these three pillars was the monumental and eye-wateringly profitable edifice of 20th-century cinema erected; and lucrative, culturally important visual texts continue to be developed along these lines: of course they do. But the new visual cultures of the 21st century are starting over, with three quite different pillars. I'm not entirely sure what the three are, yet. But I'd hazard a sense of immense, intricate but oddly unspectacular new topographies of the visual, what we might call the Minecraftization of visual culture, something much more concerned with the spatial than the temporal aspect of the medium. And I wonder about a new configuring of the balance between passive 'watching' and active 'engagement' as salients of the audience experience, with a new stress on the latter quality. And I wonder about a new mobilization of the visual, texts no longer a matter of public cinema or private TV, but disseminated into every tablet and phone and computer, in the pocket of almost everyone on the planet. How's that for embedding?

One of the main thrusts of Crawford's polemic is that this new digital culture is predicated upon an ideology of distraction. And this makes an immediate kind of sense: many people complain of a shrinkage of collective attention span, one that plays into the hands of those who would prefer to get on with despoiling the environment and maximising social inequality in the service of their own profit. What kind of collective reaction can we muster when reading anything longer than 140 characters prompts us to eye-rolling, sighing and 'tl;dr'; when we can be distracted by an endless succession of cute cat videos and memes and other such metaphorical scooby-snacks? Maybe Crawford is right about this. But by way of counter-example, I can only point to my son. He is precisely as easy to distract as any 8-year-old. But he is also capable, when watching Dan TDM's sometimes immensely lengthy playthroughs of Minecraft, of paying close attention for literally hours and hours and hours. That's something.

Wednesday, 16 March 2016

Further Thoughts on Sonnet 146: the Musica Sacra Connection



I've noted on this blog before that I've a soft spot for Sonnet 146, the 'Poor Soule the center of my Sinfull Earth' one. Here are some more thoughts on it.

Fairly abstruse thoughts, mind. Still: there are many songs in Shakespeare's plays, and he often collaborated with musicians and composers. For example, it seems likely that 'It Was A Lover And His Lass' from As You Like It was either a collaboration between Shakespeare and Thomas Morley or else Morley's own composition: Shakespeare and Morley lived in the same London parish; and 'It Was A Lover And His Lass' was printed, as by Morley alone, in The First Book of Ayres of 1600. It's surely as likely that Shakespeare appropriated Morley's song for his play as that he wrote it himself, although it's also likely that he cultivated professional relationships with various London musicians. Plays needed music, after all.

Morley was a publisher of music as well as a composer, and Thomas Este (his name is on the title page of the Musica Sacra to Sixe Voyces) was his chief 'assigne' or printer. Musica Sacra to Sixe Voyces is an English translation, by one 'R.H.', of Italian sonnets by Francesco Bembo, set to music by Croce:


Soko Tomita calls this 'a set of authentic Italian madrigali spirituali and the only Italian madrigal book translated complete into English'. There's some evidence that Shakespeare was interested in Bembo; and I wonder if R.H.'s version of the sixth sonnet here directly influenced Shakespeare's own collection of sonnets, published the following year.


Since we know almost nothing about the sequence of events that led to the publication of Shakespeare's Sonnets by Thomas Thorpe in 1609, not even whether Shakespeare was involved in the process or not, we are licensed to speculate. It's possible Thorpe published with Shakespeare's permission. It's even possible that Shakespeare, asked for copy by his publisher, bundled together some sonnets he'd written as a young man, in the early 1590s, when he was randier and more lustful, with some newer sonnets written in 1608 and 1609, by which time he had become more moral, more (in the loose sense of the word) puritanical about sex, more religious. Sonnet 146 would surely be one of the later poems, if so. And it's not impossible that Shakespeare might have read R.H.'s Musica Sacra sonnets, and written his Sonnet 146 as a version of, or a more loosely inspired extrapolation of, that sixth sonnet. What do you reckon?
Poor soul, the centre of my sinful earth,
Prest by these rebel powers that thee array?
Why dost thou pine within, and suffer dearth,
Painting thy outward walls so costly gay?
Why so large cost, having so short a lease,
Dost thou upon thy fading mansion spend?
Shall worms, inheritors of this excess,
Eat up thy charge? is this thy body's end?
Then soul, live thou upon thy servant's loss,
And let that pine to aggravate thy store;
Buy terms divine in selling hours of dross;
Within be fed, without be rich no more:
So shalt thou feed on Death, that feeds on men,
And, Death once dead, there's no more dying then.
The various similarities and verbal parallels can be left as an exercise for the reader. One attractive aspect to this theory, howsoever farfetched it may be, is that if it is true then we have a strong steer as to the music in Shakespeare's head as he wrote this sonnet. Sonnets are little songs after all; and 'Poor soul, the centre of my sinful earth' goes pretty well to Croce's setting.



Tuesday, 15 March 2016

Thoughts That Do Often Lie Too Deep For Tears



One of the most famous lines in all of Wordsworth, this, if not the single most famous: of course, the conclusion to his masterly 'Ode: Intimations of Immortality from Recollections of Early Childhood' (1807). This rich and complex poem starts from the simple observation that when WW was a child he had an unforced, natural access to the splendour and joy of the cosmos, but growing old has alienated him from that blessed mode of being-in-the-world. It starts:
There was a time when meadow, grove, and stream,
The earth, and every common sight,
To me did seem
Apparelled in celestial light,
The glory and the freshness of a dream.
It is not now as it hath been of yore;—
Turn wheresoe'er I may,
By night or day,
The things which I have seen I now can see no more.
It ends:
And O, ye Fountains, Meadows, Hills, and Groves,
Forebode not any severing of our loves!
Yet in my heart of hearts I feel your might;
I only have relinquished one delight
To live beneath your more habitual sway.
I love the Brooks which down their channels fret,
Even more than when I tripped lightly as they;
The innocent brightness of a new-born Day
Is lovely yet;
The Clouds that gather round the setting sun
Do take a sober colouring from an eye
That hath kept watch o'er man's mortality;
Another race hath been, and other palms are won.
Thanks to the human heart by which we live,
Thanks to its tenderness, its joys, and fears,
To me the meanest flower that blows can give
Thoughts that do often lie too deep for tears.
I have a simple question: what does it mean to talk of 'thoughts that do often lie too deep for tears'? I'm not asking after the psychological or existential ramifications of the phrase; I'm asking about its semantic content.

You see, it's a phrase that seems to me to imply two quite incompatible meanings. One: the speaker of the poem is saying that even the meanest flower that blooms—like the one at the top of this post—can sometimes make him cry. These tears come from a source that is usually, in his day-to-day living, repressed, buried deep in the psyche, since he is English and therefore too buttoned-down to permit weeping. But the encounter with simple natural beauty liberates this emotion from its prison, and the cathartic tears can at last flow. These are thoughts that often, but not always, lie too deep for expression as tears. In other words, the encounter with the wild-flower in the last two lines of the 'Ode' is, in its bittersweet way, a positive one.

But there's another way of reading the line. This would posit a psychological topography in which, in descending layers, we have: the normal everyday placidity, and below that the propensity to weep, and below that something else, some profound sorrow or depression too deeply ingrained in the human soul ever to find release in tears. Children cry, when provoked, as we all know, because they are more intuitively in touch with their emotions; this is, in one sense, the whole thesis of the 'Ode'. But the poem also embodies the mournful observation that men like Wordsworth's speaker here have lost the capacity for that kind of emotional ingenuousness. Now to encounter nature, in the form of the wildflower, creates a sense of sorrow so deep that it cannot even be relieved by crying. It is not like this with every thought, and not always. But it often is.

Clearly the line can't mean both of these two things. The first suggests that a grown-up saudade finally relieves itself in crying the sorts of tears that great beauty can provoke. The second implies that the Ode is an elegy for the barrenness of modern emotional existence, a parched state where the sorrowful thoughts cannot even provoke tears, because they lie too deep for them. Tears of complicated joy, versus I-have-no-mouth-and-I-must-scream despair. Hmm.


Sunday, 6 March 2016

John Green: the Antecession of Adolescence



It seems to me hard to deny that YA fantastika has, over the first years of this new century, achieved a mode of cultural dominance: that Potter, Katniss and the MCU bestride our contemporary cultural production like colossi; that Malorie Blackman and Patrick Ness are more important contemporary UK novelists than Martin Amis and Zadie Smith. But I have to admit that my saying so may merely reflect my own bias towards SF/Fantasy. Perhaps I overestimate the centrality of Fantasy to the contemporary YA phenomenon. I'm not sure I do, but it's possible. It's one thing to talk about Rowling, Collins, Meyer, Blackman, Ness and Pullman (and Lemony Snicket, and Philip Reeve, and Eoin Colfer, and Tony DiTerlizzi and Holly Black, and Jonathan Stroud, and Tom Pollock, and Rick Riordan, and Cassandra Clare ... and on and on the list goes) as representing some important cultural movement.



But I have to concede that not all today's YA is fantastika. Or, to put it another way: if my argument is that the key YA texts are all Fantasy, then how do I account for those commercially huge, culturally major YA writers who don't write Fantasy? Two names in particular leap out: the marvellous Jacqueline Wilson, and the mega-selling John Green. Both work in what we could loosely call 'realist' idioms, writing about children and teenagers. Both are very good. What about them?

Take Green. Now, I like Green a great deal: he has a funny, personable and informative online presence as *clears throat* a vlogger, and he writes intelligent, witty and prodigiously successful novels. If those novels don't move me the way they evidently move millions of younger readers, that merely reflects my age. They're not aimed at people like me. Or it would be truer to say: they're not primarily aimed at people like me. And, to speak for myself, I admire and enjoy the charm with which he writes, the cleverly packaged wisdom, the lightness of touch he brings to serious matters.

A lot of this has to do with Green's skill with one-liners. The crafting of excellent one-liners is a much more demanding skill than many people realise. It is a business I rate and respect. Sometimes Green writes one-liners to get a laugh, which (of course) is the conventional function of the one-liner: 'Getting you a date to prom is so hard that the hypothetical idea itself is actually used to cut diamonds' [from Paper Towns], or '"It's a penis," Margo said, "in the same sense that Rhode Island is a state: it may have an illustrious history, but it sure isn't big."' [from the same novel]. But just as often he writes one-liners designed to make you feel, or think, rather than laugh. That's harder to do, I think. The most famous line from The Fault in Our Stars is 'I fell in love the way you fall asleep: slowly, and then all at once', which has the form of a one-liner but is built to produce a particular affect rather than a laugh. Rather beautiful, too.

Green has two big-ish themes to which he keeps returning, and which we might peg as 'death' and 'authenticity', both inflected through the prism of teenage intensity. That he's good on this latter quantity (that is, on the way adolescents feel more intensely, have goofier highs and moodier lows, than grown-ups; the way they experience love as first love in all its Romeo-and-Juliet full-on-ness) is evidenced by his enormous appeal amongst his target audience, to whom his books clearly speak; and this also doubtless explains that element of his writing that I don't quite grok, being middle-aged and English and dwelling accordingly upon a buttoned-down emotional plateau of politeness and tea and low-level anguish. But I don't think 'teenage intensity' is his primary theme; I think it's the idiom via which he chooses to express a fascination with death and authenticity. In Looking for Alaska (2005), the main character Miles 'Pudge' Halter spends a year at a boarding school where he has various adventures with schoolfriends and schoolenemies, and where he falls in love with the beautiful but unhinged Alaska Young. The story bundles along pleasantly funny and bittersweet until the end, when Alaska drives drunk, crashes her car and dies, a death that is perhaps suicide. One of the first things we learn about Pudge is that he is fascinated with people's famous last words, and one of the things that first bonds Pudge and Alaska is their shared interest in Simón Bolívar's enigmatic final line: 'Damn it. How will I ever get out of this labyrinth!' ('Is the labyrinth living or dying?' Alaska wonders. 'Which is he trying to escape—the world or the end of it?'). The novel's own ending, and Pudge's attempts to come to terms with both his bereavement and his guilt at possible, though unwitting, complicity in her death (since he and another friend distracted the school authorities in order to let Alaska get away in her car), insert this morbid fascination rather cruelly into reality. What Pudge realises is that he wasn't in love with Alaska, but with an idea of Alaska he had in his head. 'Sometimes I don't get you,' he tells her; and she replies ('she didn't even glance at me, she just smiled') 'You never get me. That's the whole point.'

There's something important in this, I don't deny. It has to do with the teenage tendency towards self-obsession and egoism, of course; but it's also to do with a broader, neo-Arnoldian existential disconnect, the unplumbed salt estranging sea that lies between all our islands. 'It is easy to forget,' is how Paper Towns puts it, 'how full the world is of people, full to bursting, and each of them imaginable and consistently misimagined.' I take it that Green's point is: we owe other people a duty to at least try and relate to them as they are, and not to ignore them, or rewrite them in our minds as we would like them to be. This, in my reading, is the 'labyrinth' from which the characters in Looking for Alaska are trying to escape: it is inauthenticity, and the means of escape Green suggests are such quantities as forgiveness and acceptance. If I call this stance 'authenticity', I'm not trying to tag-in Existentialism. This position has more in common with Holden Caulfield's animadversion to all things 'phony' than it does with Sartre.



Still, it's fair to say that 'Existentialism' was interested in the connections between angst, authenticity and death, and there's something in that combo in Green's writing that doesn't weave right. I feel like an uglier and grumpier Oscar Wilde mocking Little Nell, but part of me found itself unable to buy into The Fault In Our Stars (2012), Green's biggest success, first as bestselling book, then as box-office-topping movie: the undeniably heartfelt story of two teenage cancer sufferers falling in love. When Malorie Blackman rewrites Romeo and Juliet in her Noughts and Crosses, her focus is on the arbitrary grounds of the Montague-Capulet hostility, and the toxic social environment that results. When Green rewrites the same story it is not inter-familial hatred but death itself that interposes itself between the two young lovers. That's both the book's strength, and, perhaps, its weakness. The love-story reads as believable and sweet; but the book as a whole treads that debatable line between sensibility and sentimentality, and the brute fact of death, at the story-end, distorts what the book is trying to say about love. It swathes the experience in a cloak of existential all-or-nothingness, which tends to present the experience as, as it were, all icing and no actual cake. I'm not trying for a cheap shot, here, or at least I hope I'm not. I'm not accusing The Fault in Our Stars of wallowing in any misery-lit melodramatic tragic schlock simply because it juxtaposes young love and cancer. Hazel and Augustus, in the novel, don't fall in love because they have cancer; the cancer is just something they have to try and deal with as they fall in love. But because such cancer truncates life, the novel can't help but offer up a truncated representation of love, and this tangles awkwardly with the fact that this is a story of intense teenage passion. Romeo and Juliet experienced emotional intensity with one another, no doubt; but what sort of marriage would they have had, in the event they had survived the end of the play? What would they have looked like, as a couple, in their thirties? Or their sixties? Hazel, in The Fault In Our Stars, dismisses the old insistence that 'without pain, how could we know joy?'
This is an old argument in the field of thinking about suffering and its stupidity and lack of sophistication could be plumbed for centuries but suffice it to say that the existence of broccoli does not, in any way, affect the taste of chocolate.
Which is neat, and uses the one-liner form nicely. It just doesn't use that form in a way that actually suffices to say. After all: a person wouldn't live very healthily, or very long, on a diet of pure chocolate and no broccoli. One of the ways love is more than mere lust is that love lasts; and if there's no timescale into which such lasting can be projected it is somewhere between difficult and impossible to be sure about the love. The true test of love is not in-the-moment intensity, but endurance. I appreciate that's a very middle-aged-adult, and a very un-teen, thing to say. That's the whole point.

But I don't mean to get distracted. Rather, I want to say something about Paper Towns (2008), a more interesting novel (I'd say) than The Fault In Our Stars. This novel frontloads its death (its characters start the story by discovering the body of a divorcé), which is a better, which is to say less conventionally melodramatic, way of doing things. It goes on to tell the story of Florida teen Quentin "Q" Jacobsen, and his young neighbour, the eccentric but (of course) beautiful Margo Roth Spiegelman. Margo, a character who doesn't quite escape the taint of Manic Pixie Dreamgirlishness, recruits Q to help her take elaborate and comical revenge upon various kids at their school who have slighted her. Halfway through the story Margo disappears. The community begins to think she has committed suicide, but a series of clues persuades Q that she is still alive, and living in the 'paper town' of Agloe, New York: a simulacrum of a town invented by mapmakers that has, oddly enough, turned into a real town. He and his friends drive up to rescue her after their high school graduation, but she doesn't want to be rescued. The book ends with Q accepting that he has lived inauthentically, devoted to a version of Margo he has concocted out of his own desire and insecurity, and that it's not fair to Margo to relate to her in that way. Sailing dangerously close to the unSeinfeld learning-hugging-growing arc, Paper Towns' Q realises 'the fundamental mistake I had always made—and that she had, in fairness, always led me to make—was this: Margo was not a miracle. She was not an adventure. She was not a fine and precious thing. She was a girl.' By the novel's end this point is bedded-in: 'What a treacherous thing to believe that a person is more than a person.'

The novel as a whole is concerned with this question of inauthentic living, with the simulacrum. The 'paper town' of Agloe is real: a non-existent place included in a map of NY State to catch out any mapmakers foolish enough to plagiarise. At the end of his novel, Green notes this fact: 'Agloe began as a paper town created to protect against copyright infringement. But then people with these old Esso maps kept looking for it, and so someone built a store, making Agloe real'. The map precedes the territory, the description comes before the reality described. 'Margo always loved mysteries,' Q tells us. 'And in everything that came afterward, I could never stop thinking that maybe she loved mysteries so much that she became one.' It's neatly done. Margo never quite comes alive, but her quirky puppet-ness doesn't impede the story. Arguably the reason she doesn't feel fully alive is that she, in terms of the in-logic of the story, doesn't want to. Which has some interesting implications for characterisation, actually.

Then again, there are moments when the simulacrum is less postmodern, and more old-school phony. Margo, on her hometown Orlando FL, ventriloquises the echt Holden Caulfield:
You can tell what the place really is. You see how fake it all is. It's not even hard enough to be made out of plastic. It's a paper town. I mean look at it, Q: look at all those cul-de-sacs, those streets that turn in on themselves, all the houses that were built to fall apart. All those paper people living in their paper houses, burning the future to stay warm. All the paper kids drinking beer some bum bought for them at the paper convenience store. Everyone demented with the mania of owning things. All the things paper-thin and paper-frail. And all the people, too. I've lived here for eighteen years and I have never once in my life come across anyone who cares about anything that matters.
This is attractively meta (since any town described in a book made of paper bound together is going to be a paper town), even a touch modish. It either captures with nice irony, or else is deplorably complicit with, that teenage certainty that they know 'what matters', and that what matters is more than just living an ordinary, unexceptional life, like boring grown-ups do.

Then again, maybe the conceit of Paper Towns does tip a more Baudrillardian than Sartrean nod. It invites us to go back to 1981's Simulacres et Simulation. Maybe, in this novel, Green goes beyond 1950s phony-baiting, and into the precession of simulacra as such, and maybe that's why the novel works better for me. Baudrillard, you'll recall, distinguishes three phases:
First order simulacra, associated with the premodern period, where representation is clearly an artificial placemarker for the real item. The uniqueness of objects and situations marks them as irreproducibly real and signification obviously gropes towards this reality.

Second order simulacra, associated with the modernity of the Industrial Revolution, where distinctions between representation and reality break down due to the proliferation of mass-reproducible copies of items, turning them into commodities. The commodity's ability to imitate reality threatens to replace the authority of the original version, because the copy is just as "real" as its prototype.

Third order simulacra, associated with the postmodernity of Late Capitalism, where the simulacrum precedes the original and the distinction between reality and representation vanishes. There is only the simulation, and originality becomes a totally meaningless concept.
This is where we are, says Jean. Disneyland (whose Floridian sibling, Disney World, is among Orlando's most famous sites) started as a copy of the perfect American small town; now, Baudrillard suggests, America itself is a kind of copy of Disneyland. The simulation precedes the reality. So it is that we care about, and invest emotionally in, the fictional neighbours represented in EastEnders and Coronation Street, and barely know our actual real-world neighbours. So it is that the things that happen in the world only feel real to us when we see them reported on the TV news. When Baudrillard talks about the 'precession of simulacra' in Simulacra and Simulation, he means that simulacra have come to precede the real, and that the real is, in his pungent phrase, 'rotting', its vestiges littering what he calls 'the desert of the real'.

I suppose we could say that the difference is that Baudrillard celebrates this new simulacral logic, whereas Green finds it both exhilarating and terrifying. Having encountered a dead body, and heard gunshots, and been afraid in various ways, Q comes to understand that there is a deeper fear underlying his 'real' or 'actual' experiences of fear. Or not 'underlying', but 'preceding': 'This fear bears no analogy to any fear I knew before,' he tells us. 'This is the basest of all possible emotions, the feeling that was with us before we existed, before this building existed, before the earth existed. This is the fear that made fish crawl out onto dry land and evolve lungs, the fear that teaches us to run, the fear that makes us bury our dead.' Not a fear of death, and not a fear of inauthenticity as such, but rather the fear that inauthenticity is the only reality.

This is bringing me, slowly and after many too many words (I know, I know), back to my original point. Why does so much globally popular YA take the form of Fantasy? If there is a metaphorical relationship between the magical school (or the daemon-accompanied alt-world, or the sexy vampire, or whatever) and reality, we might expect there to be a mimetic relationship between the Orlando teenagers, or the cancer-suffering teenagers in Indianapolis, and reality. But that's not how it works.

Maybe that's what books like Green's offer us: 'realism' rather than realism, a different logic of fantasy that repudiates the idea that there is a clear reality to be metaphorised. A nostalgia for the future rather than for the present or for the past. That's what Alaska thinks, at any rate, in Looking for Alaska:
"Imagining the future is a kind of nostalgia."

"Huh?" I asked.

"You spend your whole life stuck in the labyrinth, thinking about how you'll escape it one day, and how awesome it will be, and imagining that future keeps you going, but you never do it. You just use the future to escape the present."
If I say that Green's novels place a particular emphasis upon this pseudo-nostalgia, I don't mean it as negative criticism. Baudrillard, in that 'Precession of Simulacra' essay, insists that 'When the real is no longer what it was, nostalgia assumes its full meaning.'

I don't mean to over-reach, argument-wise, but I wonder if this speaks to the reasons why YA has become so culturally dominant. Once upon a time kids wore jeans and listened to rock music until they passed out the other side of their adolescent phase: then they put on suits and dresses and went to work and listened to Classical Music. Got blue rinses. Smoked pipes and grew beards. Now it's the kids who dress as hipsters, in suits, with sculpted facial hair and dye in their hair, and middle-aged dinosaurs like me who wear jeans and listen to rock music. What started as a chronological descriptor covering the years 13-19 has expanded: a transitional period, when one is no longer a child but not yet an adult, that has bulged at both ends. The phase now starts earlier and ends much later: people in their 20s, their 30s, even their 40s, still living with their parents, or pursuing their teenage pursuits (look at me, and my continuing passion for science fiction, for one example), or examining their own souls and saying: you know? I really don't feel 'grown-up'. Not properly. Properly grown-up is the desert of the real of individual subjectivity. Baudrillard, again from the 'Precession of Simulacra' essay:
This world wants to be childish in order to make us believe that the adults are elsewhere, in the "real" world, and to conceal the fact that true childishness is everywhere - that it is that of the adults themselves who come here to act the child in order to foster illusions as to their real childishness.
If we think of it like that, then the whole cultural edifice of children's and YA literature becomes an attempt, on the largest scale, to fix and establish a simulacrum of 'youth', for the benefit of the adults. 'The child is father to the man' becomes evacuated of its original natural piety and spiritual truth, and becomes instead the slogan of causal disconnection in a youth-obsessed society in which adolescence no longer precedes adulthood, but replaces it altogether. Things that once distinguished childhood from adulthood, in the sense that kids would not do these things and adults would—trivial things like drinking and smoking, or profound things like having sex and dying—become, in Green's novels, how teens spend their time. They are all nostalgic for a future that, in Green's textual universe, will never come. It's the precession of sim-maturity that marks the erosion of the distinction between the immature and the mature. Why do teens in John Green novels keep dying? There's a line in Catch-22 that I've always liked, where Yossarian rails that a certain airforceman friend of his, killed in action, was old, very old, very very old, twenty-two. That doesn't sound very old, his interlocutor returns. He died, says Yossarian; you don't get any older than that. Which also has the shape of a one-liner, whilst packing a significantly larger existential punch than the my-dog's-got-no-nose standard sample. (It was also Heller, in the rather underrated Something Happened, who said: 'When I grow up I want to be a little boy'.)

What Green's novels embody is a larger logic of YA: a kind of impossible nostalgia for a future adulthood that the protagonists not only have never experienced, but fear will never come. As in Harry Potter, or The Hunger Games, the story is: teens are compelled to act as adults, to assume adult responsibilities, commit adult murders, risk the fate of all adults (which is death). But this isn't the precession of adulthood; it's the Baudrillardian erasure of adulthood. That's the fantasy. Maybe.

Friday, 4 March 2016

Paradise Lost's Goodness Infinite



Felix culpa means fortunate fall, or lucky delinquency, and is one of the central theological problematics not just of Milton's Paradise Lost but of Christianity itself. Adam's sin brought death into the world; but then again it's fortunate Adam fell, insofar as it set up Christ's salvation. Or, indeed: 'if the Jews had not prevailed upon the Romans to crucify our Lord,' as Disraeli says in Tancred, 'what would have become of the Atonement?':
Could that be a crime which secured for all mankind eternal joy? Which vanquished Satan, and opened the gates of Paradise? Such a tenet would sully and impugn the doctrine that is the corner-stone of our faith and hope. Men must not presume to sit in judgment on such an act. They must bow their heads in awe and astonishment and trembling gratitude.
Milton says something similar:
O goodness infinite, goodness immense!
That all this good of evil shall produce,
And evil turn to good; more wonderful
Than that which by creation first brought forth
Light out of darkness! full of doubt I stand,
Whether I should repent me now of sin
By mee done and occasiond, or rejoyce
Much more, that much more good thereof shall spring,
To God more glory, more good will to Men
From God, and over wrauth grace shall abound. [Paradise Lost, 12:469-78]
Abound, at the end there, gestures towards Romans 5:20: 'Moreover the law entered, that the offence might abound. But where sin abounded, grace did much more abound'. And there's something deliberately springy about that bounding, a play on words: that sin which binds men shall also give men the sprightliness to bounce free. People think of Milton as a ponderous sort of poet, and in some senses he is, but he is also playful, sometimes wittily so. The whole point of this passage is that sin is not a terrible end, but is instead the germ of something wonderful. And look at the first line there: goodness infinite contains at its very heart, spanning the last letter of goodness and the first two letters of infinite, the word sin. Sin, Milton is intimating, is the very kernel of infinite goodness.

Tuesday, 1 March 2016

Frankenstein and the French Stone



1.

Is there anything new to say about this work, one of the most discussed and reinterpreted of all Gothic novels? Well, there are the standard points, of course, some of which have become platitudes: that it is the first SF novel; the first great fable of the scientific age, a penetrating story of man’s material-technical overreaching and the danger of unintended consequences; or more specifically that it is a myth about the way Western science’s masculinist bias circumvents the feminine principle with disastrous consequences. There are critics who approach the novel from a biographical point of view, and argue that it embodies Shelley’s ambivalence to the Romantic and radical circles in which she moved, or that it encodes her horror at her miscarried pregnancy. This speaks to the multivalent nature of Shelley’s success, here, although it also points up the dangers of reductionism when trying to get a handle on what makes the book (for all its clumsinesses and awkward moments) so dream-haunting.

It probably is fair to say that most people know this book through its myriad adaptations rather than through its early nineteenth-century prose, at least in the first instance; such that actually reading it, particularly the rather prosy outer frame narrative (an Englishman called Walton is exploring the Arctic, eager to push back the boundaries of geographical knowledge; and he writes home to his sister with accounts of his voyage), can be rather estranging.

The novel certainly starts slowly. Even when Walton encounters Frankenstein, at the point of collapse from exhaustion, pursuing a strange figure across the ice, it takes a while for the novel to start generating its distinctive, eerie and suggestive tone and affect. Frankenstein’s own first-person narrative is folded into Walton’s account here; and after his detailed account of his upbringing, his desire to conquer death, his researches and the creation of his monster—not to mention his horror at his own actions, and a period of hysterical amnesia—we get a second inset narrative, the monster’s own life story. This first-person narration, nestling as the smallest Russian doll inside the nested structure of the novel, is the one most people think of as ‘the story’ of Frankenstein. Indeed, the celerity with which adaptors and filmmakers stripped away Walton’s frame narrative (Branagh’s 1994 movie is an exception here) suggests that it’s the relationship between the creator and his creation that really ignites the imagination, not the third-party explorer and observer, the figure akin to us as readers.

The issue here isn’t really one of story details so much as tone. Filmmakers aim for a heightened intensity, a (melo)dramatic pitch; but Shelley’s own approach reaches its peculiar dark sublimity by going, as it were, down rather than up. Bring to mind any cinematic version you may have seen of the moment where the monster is brought to life: crashing thunder and lightning, dramatic music, the hysterical scientist screaming ‘live, my creation, live!’. To read the beginning of the novel's chapter 5 is to be struck by how far Shelley herself was prepared to dial down this crucial moment:
It was on a dreary night of November that I beheld the accomplishment of my toils. With an anxiety that almost amounted to agony, I collected the instruments of life around me, that I might infuse a spark of being into the lifeless thing that lay at my feet. It was already one in the morning; the rain pattered dismally against the panes, and my candle was nearly burnt out, when, by the glimmer of the half-extinguished light, I saw the dull yellow eye of the creature open; it breathed hard, and a convulsive motion agitated its limbs.
Not quite anticlimactic, but more cannily downbeat, this. It speaks to something important about the way the novel has been creatively read, of course. Which is to say: Frankenstein the novel does deal with those intensities of the Romantic Sublime (‘sense of wonder’, ‘enchantment’) that get the hairs stirring on the backs of our necks; but it does so by descent, rather than ascent, and via an apprehension of the guilt of creation rather than human technological hubris. If you bear with me, I’ll explain what I mean.



2.

Here’s something I wrote about Frankenstein in a book called 50 Key Figures in Science Fiction (Routledge 2009):
The novel’s core story is probably well-enough known not to need extensive summary. Scientist Victor Frankenstein constructs and animates an eight-foot-tall artificial man, but, obscurely horrified by what he has done, abandons his creation and temporarily loses his memory. The creature (it is never named) comes into the world physically strong, but mentally a tabula rasa to be written upon by experience—as it transpires, mostly the experience of others’ hostility towards its hideous appearance. It learns not only to speak but, improbably enough, to read and write by eavesdropping unnoticed on a peasant family. Thereafter it becomes murderous, a consequence not only of others’ hostility but also its reading Milton’s Paradise Lost and identifying with the outcast Satan. Lonely, it seeks out its maker demanding that he create a monstrous bride. Frankenstein agrees and builds a second, female creature, but belatedly alarmed at the implication of his two creations breeding and populating the world with monsters, he tears it to pieces. In revenge the monster kills Frankenstein’s own wife. Frankenstein then pursues his creation to the arctic wastes, where he dies; the novel ends with the creature still alive, but promising to kill itself. Summarised so baldly, this perhaps seems clumsily plotted (Shelley was 19 when she wrote it) and the novel itself does sometimes lapse into a rather melodramatic crudeness. But it also possesses remarkable imaginative power, not least in the embodiment, in both heart-wracked scientist and sublime monster, of two enduringly iconic archetypes of the genre.
The opinion that science fiction starts with Mary Shelley’s novel has had several adherents (and several dissenters) but is most closely associated with British SF author and critic Brian Aldiss. For Aldiss, Frankenstein encapsulates ‘the modern theme, touching not only on science but man’s dual nature, whose inherited ape curiosity has brought him both success and misery’ [Aldiss, Billion Year Spree 26]. Aldiss wrote his own oblique fictional treatment of the same story, Frankenstein Unbound (1974), in which a modern man propelled by ‘timeslips’ back to the Romantic era meets not only Mary Shelley, but Frankenstein and his monster too—this latter proving an eloquent commentator upon man’s capacity for dialectically interconnected creation and destruction. As a description of the novel, and an implicit characterisation of sf as a whole, this has persuaded many.

Now Frankenstein, as every schoolchild knows, is the name of the scientist, not the name of the monster (although transferring the name from creator to creation is now so widely disseminated a solecism as hardly to merit rebuke). The monster has no name. What, then, is Frankenstein’s creature?

The answer is obvious. It is a monster.

Now, monster is an interesting word. It derives from the Latin, monstrum, which means (I crack open my Lewis and Short) ‘a divine omen, indicating misfortune, an evil omen, portent’. This word is in turn from moneo: ‘to teach, instruct, tell, inform, point out; to announce, predict, foretell’ (from this we get the French ‘montrer’, and the English ‘demonstrate’). Originally a calf (say) born with two heads would be a monster in the sense of being ominous: through it the gods would be trying to tell us something. Though the word now has the connotation of a large and terrifying fantastical beast, the earlier meaning still haunts it. Godzilla, say, is a monster in the contemporary vulgar sense, but also in the sense that he is trying to tell us something (in his case, something about the evils of nuclear testing). Frankenstein’s monster, of course, is often read as trying to tell us something about science, or man’s hubris, or about the nature of creation itself. Me, I wonder if the monster’s main function, and the ground of its prodigious success, is that it demonstrates something closer to home: you. Yes, I mean you madam; and you sir. I’ll come back to this in a moment.

What about the creator’s name, ‘Frankenstein’? It’s a common-enough Germanic moniker (the invaluable Wikipedia tells us: ‘Mary Shelley maintained that she derived the name “Frankenstein” from a dream-vision. Despite her public claims of originality, the significance of the name has been a source of speculation. … The name is associated with various places in Germany, such as Castle Frankenstein (Burg Frankenstein) in Hesse or Castle Frankenstein in Frankenstein, Palatinate.’) But I have a fanciful theory about the name; or half-fanciful, and I intend to air it here.

The half that’s less fanciful is the first syllable, which seems to me very likely, in its reference to France, to encode a symbolic allusion to the French Revolution. The half that’s more fanciful would link the stone (‘-Stein’ in German) with the French for stone, –pierre, as a sort of sidestep towards Robespierre, architect of the French revolutionary Terror … like Frankenstein, a well-bred, well-educated man impatient with old forms, who wished to conquer the injustices of the world but who ended up creating only a monster of Terror.

This may strike you as more tortuously implausible than it does me, not just because I tend to see in this rebus (Frankenstein = French ‘stone’ = French [robes]-pierre) an example of the way the creative subconscious works, but because there are a great many people who share my sense that the novel is in a symbolic sense ‘about’ the French revolution. Chris Baldick’s book In Frankenstein’s Shadow: Myth, Monstrosity and Nineteenth-Century Writing (Oxford 1987) traces the many appropriations of Shelley’s monster in the culture of the century, noting how very often revolution, upheaval or popular dissent was troped precisely as a ‘Frankenstein’s monster’. Like the Revolution, the monster is a creature of power and uncanny novelty, brought into being with the best intentions, but abandoned by its architect and running into bloodsoaked courses of remorseless violence and terror. Which is to say: the monster emblematises Revolution because it focuses terror. Indeed, for an English liberal in the first decades of the 19th century there were two key Revolutions in recent history: the French and the American. It may not be a coincidence that, after making his European monster, the French-Swiss Frankenstein is persuaded to make a second, on the understanding that the pair will emigrate to America. He changes his mind:
Even if they were to leave Europe and inhabit the deserts of the new world, yet one of the first results of those sympathies for which the daemon thirsted would be children, and a race of devils would be propagated upon the earth who might make the very existence of the species of man a condition precarious and full of terror.
That last word—terror—is crucial for the novel. The word ‘terror’ chimes like a bell through the whole text. Terror, of course, was Robespierre’s touchstone: here, for example, from his Discours sur les principes de morale politique (February 1794):
Si le ressort du gouvernement populaire dans la paix est la vertu, le ressort du gouvernement populaire en révolution est à la fois la vertu et la terreur: la vertu, sans laquelle la terreur est funeste; la terreur, sans laquelle la vertu est impuissante. La terreur n’est autre chose que la justice prompte, sévère, inflexible; elle est donc une émanation de la vertu; elle est moins un principe particulier, qu’une conséquence du principe général de la démocratie, appliqué aux plus pressants besoins de la patrie.

[If virtue be the spring of a popular government in times of peace, the spring of that government during a revolution is virtue combined with terror: virtue, without which terror is destructive; terror, without which virtue is impotent. Terror is only justice prompt, severe and inflexible; it is then an emanation of virtue; it is less a distinct principle than a natural consequence of the general principle of democracy, applied to the most pressing wants of the country.]
Terror is an emanation of virtue because it is the purest form of justice; and Frankenstein’s mythic heft and potency derive surely in large part from the sense that there is a cruel, implacable justice behind the monster’s violence. If people had treated him well, and seen past his hideous exterior, he would have repaid their trust. Because they treated him with violence and disgust, those are the human qualities he mirrors back. This comes close to the secret brilliance of the book: it is that our creations will punish us, they will pursue us (as we pursue them, seeking to punish them); and that this will happen because, in a crucial sense, they are us. It is that out of ourselves and against ourselves comes the fiercest and most unrelenting urge to punish, to bring to justice, the most acute terror. I’m reminded of something Hazlitt wrote (this is from his essay ‘On Will Making’, 1821):
It is the wound inflicted upon our self-love, not the stain upon the character of the thoughtless offender, that calls for condign punishment. Crimes, vices may go unchecked or unnoticed; but it is the laughing at our weaknesses, or thwarting our humours, that is never to be forgotten. It is not the errors of others, but our own miscalculations, on which we wreak our lasting vengeance. It is ourselves that we cannot forgive.
I can’t think of a book that is as eloquent in its apprehension of the dark truth embedded in that last sentence as Frankenstein.

What, then, is Frankenstein? It is Revolution (and its bloody aftermath) as myth; it is the excavation of the guilt of Enlightenment creation and action. It is, in short, a descent into Hell. Indeed, I would suggest, we can read the novel as a thoughtfully structured piece of mythic intertextuality about this great theme. I’m thinking of Western culture’s many narratives about infernal descent; in particular, think about Dante’s great Divina Commedia. Dante’s Hell is a funnel shaped cavern located inside the earth—something Shelley’s own ‘funnel-shaped’ narrative structure apes, with Walton’s frame narrative containing the smaller but deeper account of Frankenstein himself, and that circle of story containing again the smaller yet more profound narrative of the monster. Thinking in these terms perhaps explains some of the odder moments in Shelley’s text; or at least, I’m prepared to be persuaded so.

For example: one stumbling block for many readers is Frankenstein’s weird hysterical amnesia—having spent months making his creation, he is so horrified by the result that he stumbles away and forgets all about it until four months later, when the monster’s murders bring it all back to him. A reader who judges by standards of psychological verisimilitude will find this hard to swallow; but it makes better sense if we read with an eye to the mythic provenance, for of course entry to the underworld happens only after the shades of the dead have drunk of the waters of Lethe, or forgetfulness. By the same token, the novel’s final scenes in the frozen polar wasteland (striking and memorable stuff, if somewhat gnashingly written by Shelley) are modelled on Dante’s final encounter with Satan at the conclusion of the Inferno: trapped forever not in fire, but embedded in a vast field of ice. The monster’s self-identification with the devil (via Milton) only reinforces this hellish troping. The hell of Enlightenment liberalism is you, or your hideous, monstrous doppelganger, your creation, your child.


3.

Frankenstein is amongst other things a novel about being part of a family, about the generation of life and the toll taken by familial pressures. American critics Sandra Gilbert and Susan Gubar read Shelley as ‘this orphaned literary heiress’ for whom ‘highly charged connections between femaleness and literariness must have been established early’ particularly ‘in relation to the controversial figure of her dead mother.’ [Gilbert and Gubar, The Madwoman in the Attic (New Haven, Yale University Press 1979), 222] That mother, Mary Wollstonecraft, was—of course—the author of a foundational text of Western feminist thought, A Vindication of the Rights of Woman. Gilbert and Gubar’s big, inspiring, occasionally wayward study of female writers was itself foundational, in a smaller way, for the second wave of postwar academic feminist enquiry. Certainly their feminist reading of the novel, as a female appropriation of previously masculine myths of authorship and creation—a Romantic proto-feminist act of bibliogenesis—proved influential in academe.

Since the 1970s Frankenstein has been the subject of many perceptive feminist readings. Indeed, according to Diane Long Hoeveler this novel ‘has figured more importantly in the development of feminist literary theory than perhaps any other novel, with the possible exception of Charlotte Brontë’s Jane Eyre’ [Hoeveler, ‘Frankenstein, feminism and literary theory’, in Esther H. Schor (ed) The Cambridge Companion to Mary Shelley (Cambridge University Press, 2003), 45]. The brilliantly imaginative ways in which the novel deconstructs traditional understandings of ‘masculinity’ and ‘femininity’ (not least in its new myth of the man who gives ‘birth’ to life thereby birthing death and terror too; which is to say, its effective critique of masculinist structures of society, science and literature) speaks both to the great change in conceptions of femaleness that was starting to gain momentum in Shelley’s day, and also to the potential of non-realist modes of art such as science fiction to represent, dramatise and disseminate precisely those changes. Not for nothing does Debra Benita Shaw’s 2000 feminist study describe SF as a whole as The Frankenstein Inheritance.

But having said that, I can’t help feeling that this success has its own limitations. Certainly Shelley’s own career has been overwritten by the impact of Frankenstein: she wrote many other things, but only specialists know anything about them. More to the point, it could be argued that the novel has been almost hijacked by its heritage. What I mean by this is: we tend to read it nowadays as a science fiction novel (which is to say, in ways conditioned by the habits of reading twentieth- and twenty-first-century SF) rather than reading it as it was originally read and reviewed, as a novel of philosophical speculation in the tradition of Voltaire’s Candide (1759), Mary Wollstonecraft’s Mary (1788) or Godwin’s Caleb Williams (1794). To read the book this way would be to concentrate more upon the first section as a meditation on the proper boundaries of human knowledge, and to read the Monster’s first-person narrative as a bold attempt to dramatise the theory-of-mind of John Locke, and to pay less attention to the pitiful/Satanic intensities of the monster’s violence and alienation. But violence and alienation speak more directly to us today, I suppose.

Incidentally: the illustrations accompanying this post are from Bernie Wrightson's 1983 edition of Frankenstein. Amazing, aren't they?