‘Could a rule be given from without, poetry would cease to be poetry, and sink into a mechanical art. It would be μόρφωσις, not ποίησις. The rules of the IMAGINATION are themselves the very powers of growth and production. The words to which they are reducible, present only the outlines and external appearance of the fruit. A deceptive counterfeit of the superficial form and colours may be elaborated; but the marble peach feels cold and heavy, and children only put it to their mouths.’ [Coleridge, Biographia ch. 18]

‘ποίησις’ (poiēsis) means ‘a making, a creation, a production’ and is used of poetry in Aristotle and Plato. ‘μόρφωσις’ (morphōsis) in essence means the same thing: ‘a shaping, a bringing into shape.’ But Coleridge has in mind the New Testament use of the word as ‘semblance’ or ‘outward appearance’, which the KJV translates as ‘form’: ‘An instructor of the foolish, a teacher of babes, which hast the form [μόρφωσις] of knowledge and of the truth in the law’ [Romans 2:20]; ‘Having a form [μόρφωσις] of godliness, but denying the power thereof: from such turn away’ [2 Timothy 3:5]. I trust that's clear.

There is much more on Coleridge at my other, Coleridgean blog.

Friday, 25 March 2016

The Beauty/Truth Equivalence


Lots of famous people attended Coleridge's 1811-12 course of lectures on Shakespeare and Milton: Hazlitt, Crabb Robinson, Aaron Burr, Mary Russell Mitford, Samuel Rogers and Lord Byron. But as Richard Holmes notes, there were two important absentees: ‘the seventeen year-old John Keats, who had just begun attending surgical lectures at St Thomas’s Hospital, across the river by Westminster Bridge; and the nineteen-year old Percy Bysshe Shelley, who had just eloped with his first wife Harriet to Edinburgh’ [Holmes, Coleridge: Darker Reflections, 267]. That means that when, in lecture 8, Coleridge talked of 'Shakespeare the philosopher, the grand Poet who combined truth with beauty and beauty with truth', John Keats was not one of those who heard him. And since the lecture was not published in Coleridge's lifetime, he can't have read the words either. Still, it's a striking coincidence that Keats's most famous poem builds to precisely that equivalence:


The beauty/truth equivalence has always fascinated me, in part because I'm really not sure what it means. And because I'm spending time at the moment reading the proofs of a forthcoming EUP edition of Coleridge's Lectures on Shakespeare, I wondered a little if maybe, on the evening of the 12th December 1811, Keats bunked off his actual studies and crossed the river to hear this. Not likely, though. And, deciding to rummage around a little in the eighteen-teens, I soon discovered that he didn't need to hear these words fall from Coleridge's lips. Because the truth-beauty equivalence was everywhere.

It could have been, for instance, that Keats had been reading Mark Akenside's long didactic poem The Pleasures of the Imagination, first published in 1744 and often reprinted (for instance in 1819, when Keats was drafting his Ode). Here's the 1819 publisher's argument to the poem:

Or maybe Keats browsed a little in the new 1816 translation of Proclus, the Platonist, who has a great deal to say about 'the triad symmetry, truth, and beauty':
If, however, truth is indeed the first, beauty the second, and symmetry the third, it is by no means wonderful, that according to order, truth and beauty should be prior to symmetry; but that symmetry being more apparent in the first triad than the other two, should shine forth as the third in the secondary progressions. For these three subsist occultly in the first triad ... For we have spoken of these things in a treatise consisting of one book, in which we demonstrate that truth is co-ordinate to the philosopher, beauty to the lover, and symmetry to the musician; and that such as is the order of these lives, such also is the relation of truth, beauty, and symmetry to each other. [192-93]
This is less improbable than you might think: the translation in question was by Tom Taylor, brother to Keats's friend and correspondent John Taylor.

Indeed, the more I look, the more it strikes me that loads of people in the eighteen-teens were debating the Beauty-Truth equivalence.




Thursday, 24 March 2016

Imitating Taylor Imitating McGregor Imitating Guinness Imitating



:1:

On the (rare) occasions when I teach cinema rather than my usual literature, I have been known (rarely) to offer students a more-or-less polemical abridged history of 20th-century film and television: an epitome of modern visual culture in three individuals. Given how saturated we all are, nowadays, in visual culture, how many tens of thousands of hours of TV and YouTube and movies and so on we have all assimilated before we even reach the age of majority, it's easy to forget how counterintuitive the visual text is. 20th- and 21st-century visual texts like films and TV shows are more different to the sorts of visual media that preceded them than they are similar to them: watching a play is not a nascent form of watching a movie; animated cartoons are much more than paintings that move. At any rate, I suggest that the three key innovations can be thumbnailed as: Eisenstein; Griffith; Chaplin. That's three men who, between 1915 and 1925, established the parameters that in the most crucial sense distinguish modern 'visual culture' texts from older, literary, theatrical and painterly ones.



Eisenstein is significant for, in effect, inventing key elements of the visual grammar of film: most famously montage, or the assemblage of images and sequences linked by jump-cuts. Early theorists were astonished by the effectiveness of the jump-cut: Benjamin, in his 'The Work of Art in the Age of Mechanical Reproduction' essay (1936), argues that jump-cuts are so deracinating for the ordinary sensorium, and so widely disseminated through the new mass media, that they would accomplish nothing short of a revolution in human life. Film, he argues,
affords a spectacle unimaginable anywhere at any time before this. It presents a process in which it is impossible to assign to a spectator a viewpoint ... unless his eye were on a line parallel with the lens. This circumstance, more than any other, renders superficial and insignificant any possible similarity between a scene in the studio and one on the stage. In the theater one is well aware of the place from which the play cannot immediately be detected as illusionary. There is no such place for the movie scene that is being shot. Its illusionary nature is that of the second degree, the result of cutting. That is to say, in the studio the mechanical equipment has penetrated so deeply into reality that its pure aspect freed from the foreign substance of equipment is the result of a special procedure ... The equipment-free aspect of reality here has become the height of artifice; the sight of immediate reality has become an orchid in the land of technology.... Thus, for contemporary man the representation of reality by the film is incomparably more significant than that of the painter, since it offers, precisely because of the thoroughgoing permeation of reality with mechanical equipment, an aspect of reality which is free of all equipment.
As it turned out, Benjamin was wrong. The film jump-cut has not shaken human sensibilities free of the bag-and-baggage of 'traditional' visual representation, and the reason it has not is that, in fact, jump-cuts mimic the process of human perception very closely. That might look like a counterintuitive thing to say, so I'll expand on it a little. We might consider that the cut performs a kind of violence to the 'natural' business of looking, since its essence is moving without interval from one thing to an unadjacent and possibly unrelated thing. In fact the way our eyes and brains collaborate to piece together their visual apprehension of the world much more closely resembles an Eisensteinian 'cut' than it does a slow pan. If we turn our head from left to right, our eyes remain on the original object occupying our visual field, counter-swivelling in their sockets as the head turns, until they reach a point at which they flick very rapidly across the visual field to acquire a new object. They then fix on Object B as the head continues to turn. The brain reads this process not as 'Object A, blurry motion, Object B...' but as a straight jump-cut from Object A to Object B. Try it at home if you don't believe me. I move my head from looking at this screen, to the right of my computer where the half-drunk mug of tea sits on the desk, then to the bookshelves, and finally to the door, and what my brain 'sees' is these four items as discrete elements, all cut together. I don't pan smoothly all the way round. That's not how the eyes and the brain 'see'.

I don't mean to go on: but the way that Eisensteinian 'montage' works is not to permeate our consciousness with a radical new mechanical sensibility, as Benjamin thought, but actually to bring the visual experience of watching film closer to reality than was ever the case in (say) watching a play in the theatre. It's not just a more dynamic visual experience, it's a visual experience radically defined by its dynamism, formally speaking. And that—rather than just the fact that film is images that move—is what is new in human culture. No mode of representation in the history of humankind had been able to do that before. Of course, montage and the cut are now so deeply integrated into visual texts, and we are exposed to so much of them from such an early age, that it all 'feels' like second nature. It's not, though; and my shorthand for that aspect of visual culture is 'Eisenstein'.


My second name is 'Griffith', which is to say D W Griffith (that's him, wearing the white hat), and I invoke him as shorthand for two things, both embodied by his immensely successful movie Birth of a Nation (1915). Not its deep-dyed and ghastly racism, nor the part it played in revitalising the Ku Klux Klan, difficult though it is to separate the other aspects of the film from that horrid heritage; but rather two things that the success of the movie baked into 20th-century cinematography: the 'feature' length of the movie as a two-hours-or-so experience, and the sheer spectacle of the battle scenes. The former has become simply a convention of movies, and a more or less arbitrary one; but the latter has come absolutely to dominate cinema. All the highest-grossing movies now offer their audiences spectacle on a colossal scale; and the rise of CGI has proved the apotheosis of spectacle as such. Griffith achieved his effects with vast casts and enormous sets, especially on his follow-up film Intolerance (1916), the most expensive film made to that point, and one of cinema's biggest flops. In other respects he was a very un-Eisensteinian director, specialising in panoramic long shots and slow camera pans. He tended to punctuate scenes with iris effects, and his art is much closer to old tableaux vivants (though on an unprecedented scale) than to Eisenstein's. But so spectacular! Visual spectacle is not unknown before cinema, of course: there were plenty of spectacular stage shows and circuses in the 19th and 20th centuries. But cinema has proved simply better at this business than the stage. Indeed, I sometimes think that 'spectacle' has become such a bedrock of modern cinema culture that it makes sense to see 21st-century film as the development of narrative and character via the idiom of spectacle.



The third name, Chaplin, is of course here to represent a new kind of celebrity. Indeed, it's hard to overstate how famous Chaplin was in his heyday. His films have neither Eisenstein's formal panache nor Griffith's scope and splendour: but what they do have is Chaplin himself, a performer of genius. 'Chaplin had no previous film acting experience when he came to California in 1913,' as Stephen M. Weissman notes. Nonetheless, 'by the end of 1915 he was the most famous human being in the entire world.' He was more than famous, in fact: he invented a new mode of 'being famous', a hybrid of professional achievement and press-mediated personal scandal, blended via a new global iconicity. The closest pre-mass-media equivalent might be Byron; a close contemporary rival might be Valentino; but Chaplin surpassed both. He was the first star as superstar, the first great celebrity as brand, a global VIP whose image is still current today.

So. OK: the danger with a deliberately simplified thesis like this is that students will take it as an ex cathedra pronouncement wholly describing the history of early 20th-century cinema. This, clearly, is very far from that. I would, however, be prepared to defend the case that the three quantities represented by these three individuals are what differentiates 20th-century visual culture, in the broadest sense, from its historical predecessors: theatre, tableaux vivants, dance, painting, sculpture and illustration, magic lanterns and so on. It's not only that cinema and, later, TV (and latterly games, online culture and so on) have proved massively more popular, and have evinced much deeper global penetration, than the earlier visual forms, although clearly they have. It's that these latter qualities are the result of the medium's formal expression of those three elements: a new, expressively dynamic visualised logic; new and ever more sublime possibilities of spectacle; and a new recipe of celebrity.

These are large questions, of course, and they have been more extensively discussed than almost any feature of contemporary culture. In The Senses of Modernism: Technology, Perception, and Aesthetics (2002) Sara Danius summarises the broader currents:
For a theory of the dialectics of technology and perceptual experience, one could use as a starting point Marx's proposition that the human senses have a history. The cultivation of the five senses, Marx contends, is the product of all previous history, a history whose axis is the relation of human beings to nature, including the means with which human beings objectify their labour. One could also draw on the theory of perceptual abstraction implicit in Walter Benjamin's writings on photography, mechanical reproducibility and Baudelaire's poetry. Benjamin's theory could then be supplemented with Guy Debord's notion of the society of spectacle, or, for a more apocalyptic perspective, Paul Virilio's thoughts on the interfaces between technologies of speed and the organization of the human sensorium. One might also consult Marshall McLuhan's theory of means of communication which, although determinist, usefully suggests that all media can be seen as extensions of the human senses; and Friedrich Kittler's materialist inquiry into the cultural significance of the advent of inscription technologies such as phonography and cinematography.
A pretty good list of the usual 'Theory' suspects, and we could springboard from there in a number of different directions. For the moment, though, I'd like to try to think through a couple of somethings occasioned by watching my 8-year-old son at play in the world of visual and digital media.


:2:

Which brings me to an autobiographical anecdote. My lad is pretty much like any other 8-year-old. So, for instance, he watches a ton of TV and he likes to play both in the somatic sense (I mean, with his body: running around, climbing stuff, larking about) and in the digital sense. He has a PS3 and his favourite game at the moment is Plants Versus Zombies. But even more than playing this game, he enjoys watching other people play video games; and by 'other people' I mean people on YouTube. He will gladly spend hours and hours at this activity, and it fascinates me.

It fascinates me in part because it seems, to my elderly sensibilities, a monumentally boring and pointless activity. I can see the fun in playing a game; my imagination fails me when it comes to entering into the logic of watching videos of individuals with improbable monikers such as Stampylongnose and Dan TDM playing Minecraft, talking all the time about what they are doing as they do it. Partly I think this is because it seems to me to entail a catastrophic sacrifice of agency, whereas I suppose I'd assumed that agency is core to the 'fun' of play. But my boy is very far from alone. The high-pitched always-on-the-edge-of-hilarity voice of Stampylongnose ('hellooo! this is Stampy!') might set my teeth on edge as it once again echoes through our house; but Joseph Garrett, the actual person behind the YouTube persona, hosts one of the ten most watched YouTube channels in the world. He is huge. He is bigger, in terms of audience, than many movie stars.

This is clearly a significant contemporary cultural mode, and I wonder what it's about. It may have something to do with the medium-term consequences of Benjamin's loss of 'aura', something he discusses as a central feature of modern culture: that the new art exists in a radically new way, not only as a form of art that is mechanically reproducible, but as a whole horizon of culture defined by its mechanical reproducibility. This is the starting point for Baudrillard's later meditations on the precession of the simulacra; because before it is anything else, the simulacrum is the icon of perfect mechanical reproducibility. 'Reality,' Manuel Castells grandly declares, 'is entirely captured, fully immersed in a virtual image setting, in a world of make-believe, in which appearances are not just on the screen through which experience is communicated, but they become the experience' [Castells, The Information Age: Economy, Society and Culture (2000), 373].  For Baudrillard, of course, this entails a weird kind of belatedness, in which the simulation which once upon a time came after the reality now precedes it. Castells is saying something more extreme, I think: that reality and simulation are now the same thing. For my son, this may be true.

Not that the boy spends literally every waking hour on a screen (part of our responsibility as parents is ensuring that he doesn't, of course). There was one time when he was actually (that is, somatically) playing: wielding an actual toy lightsaber and doing a lot of leaping about. I was the bad guy, of course ('I am your father!' and so on). In the course of this game, my boy adopted a slightly strangulated posher-than-normal voice and said something along the lines of: 'Anakin, take the droids and bring up that shuttle!' And I recognised the line as something he had heard in an episode of the animated Star Wars: The Clone Wars series that he had been watching. My son was playing at being Obi Wan Kenobi, based on what he had seen of the character in that show.


I confess I was very struck by this. Kenobi in this show is voiced by the US voice actor James Arnold Taylor, and my son was doing an impression of him. But of course Taylor is himself, in this role, doing an impression of Ewan McGregor; and McGregor in his films is doing a kind of impression of Alec Guinness. So my son was copying James Arnold Taylor copying Ewan McGregor copying Alec Guinness. Baudrillard might call this a fourth-order simulation, and talk about how distinctively postmodern it is. Indeed, he would probably go further and cite this as an instance of the precession of simulacra: for when he finally watched Star Wars: A New Hope my lad was disappointed at how little like the 'actual' Kenobi the stiff old geezer playing him was.

A related question is the extent to which Guinness himself, born an illegitimate child in a rented Maida Vale flat, was 'imitating' something when he spoke, after the manner of the upper-middle-class-aspirational elocution-lesson-taking ethos of his time and social milieu. I'll come back to that.

Still, however lengthy this chain of simulation grows, it follows a recognisably linear, domino-tumble logic of simulation. The boom in watching YouTubers playing video games seems to me something else. Of course it's tempting simply to deplore this new cultural logic, as Matthew B. Crawford does in his recent The World Beyond Your Head: On Becoming an Individual in an Age of Distraction (2015). Diana Schaub summarises: Crawford's book anatomises 'the fragile, flat, and depressive character of the modern self and the way in which this supposedly autonomous self falls ready prey to massification and manipulation by corporate entities that engineer a hyper-stimulating, hyper-palatable artificial environment. Lost in the virtual shuffle are solid goods like silence (a precondition for thinking), self-control (a precondition for coherent individuality), and true sociality.' In other words, this new visual logic is the symptom of something pathological in modern subjectivity itself:
Crawford is particularly good at showing how forms of pseudo-reality are being deliberately manufactured, not in obedience to some inner dynamic of technological progress, but rather in accord with “our modern identification of freedom with choice, where choice is understood as a pure flashing forth of the unconditioned will.” Freedom, thus conceived, is essentially escapist; we seek to avoid any direct encounter with the tangible world outside our elaborately buffered me-bubble. I saw this impulse at work when our young son would flee from the pain of watching a game his Orioles seemed destined to lose, retreating to a video version of baseball in which he could basically rig a victory. He preferred an environment he could control to the psychic risk of real engagement. I thought we had done enough by setting strict time limits and restricting his gaming options to sports only. But it became obvious that the availability of this tyrannical refuge was an obstacle to his becoming a better lover of baseball (and, down the road, a better friend, husband, and father).
This is rather too pearl-clutching for my taste. Which is to say, I don't think it's true, actually, that these new modes of visual media will result in a whole generation incapable of interacting with real life as friends, spouses and parents. But there's something here, isn't there. I wonder if there is some quite radical new mode of art being inaugurated.

Returning, then, to my initial three figures, and the pared-down, cartoonified narrative about 'visual culture in the 20th/21st centuries' they embody: the thing is, when I look at video games, and at the paratexts of video games (like YouTube playthrough videos), I see none of the three present. Video games come in various formal flavours, but none of those flavours makes much use of montage or jump-cuts, at least not in gameplay. First-person shooters fill the screen with what is notionally the player's-eye view of the topography through which s/he moves; but this view moves according to a formal logic of pans and tilts, not according to jump-cuts. With platformers we track the motion of Mario (or whoever) left to right, up, down and so on. The visual field of games, which used to be Space-Invaders rudimentary, now tends to be very rich, busy with detail, high-def, cleverly rendered, complex; but the formal visual grammar of games and gaming tends to be pre-Eisenstein. That's really interesting, I think.

Similarly, games are rarely if ever spectacular. They often construe very large topographies, landscapes a player can spend weeks exploring; but they almost never approach sublimity; they don't do what cinema can do on the spectacle front. One of the differences between playing one of the many Lord of the Rings video games and watching Jackson's trilogy is that in the former Middle Earth comes over as an extensive (even interminable) game-space, whereas in the latter it comes across as a spectacular, actual landscape. This is not to say that games are simplistic visual texts. Many of them are very unsimple, visually. But their complexity is all on the level of content: the multifariousness of elements to be played, the visual rendering, the complexity of the through-line of play. It is not complexity on the level of textual form.

And games do not make superstars, not in the sense that movies and, to a lesser extent, TV make superstars. The odds are you hadn't previously heard of Stampylongnose until I mentioned him seven paragraphs earlier; but even if you did happen to recognise the name, I'd be surprised if you could have put a face to it, or would have been able to place the guy's real name, Joseph Garrett. To repeat myself, though: he is one of the top ten most watched YouTubers in the world. He earns £260,000 a month doing what he does. That's three million quid a year. A film actor who could command a £3 million fee for a movie would be a household name. This sort of visual culture is different to cinema. Of course, when something is this lucrative celebrities want in:
Kim Kardashian has a video game ... Bjork has one. Taylor Swift announced a game on February 3, and Kanye West announced his own on February 11. These announcements have been met (at least in my circles) with a mix of disbelief and mockery.
'Kardashian's game promises to give you the experience of being Kim Kardashian.' Pass. These people will surely make money, but they will not get to the heart of the game experience. Gaming, as a culture, is not predicated upon celebrity in the way that film and TV are.

To be clear: I am not saying any of this to denigrate games and gaming. Video games are clearly a major cultural force, and many game texts are aesthetically fascinating, significant works of art. But I am saying that games are, formally and culturally, quite different to films and TV; and I think I am making that assertion in a stronger form than it is sometimes made. Broadly speaking, attempts to cross over from games to films have flopped; and cash-in games made to piggy-back on successful movies have a chequered history. The biggest games, from GTA to Minecraft, from Space Invaders to Doom to FIFA, from Myst to Candy Crush to Twitter, are their own things.

I used to believe that the key difference between cinema/TV and games is that the former are passive and the latter active. I no longer believe that. In part this is because the former really aren't passive: fandom's textual poachers engage with their favourite texts actively, inventively, remixing and reappropriating, cosplaying and fanfictionalising. And, contrariwise, I wonder if the appeal of games is less to do with the play-engagement and more to do with simple immersion, something just as easily achieved—or more so—by watching play-through videos. YouTube, the second most accessed website in the world (after Google), and only a decade or so old, was bought by Google back in 2006 for $1.65 billion; and although corporations and film companies do post their content to the site, the overwhelming driver of content is ordinary people, fans, gamers and so on. Hardly passive.

So when it comes to games, in what are kids immersing themselves? The specific game-worlds, of course, but something else too. Minecraft has been described as 'infinite lego', but it has the advantage over actual lego not only that it supplies an inexhaustible quantity of bricks, bombs, landscapes and so on, but that it is unconstrained by the quiddity of reality itself. Its famous low-res graphics emphasise this: it is not simulating a session with actual lego in the sitting room, it is establishing its own world. Its graphics are about separating its simulation from the belatedness of simulation. My favourite Minecraft story concerns the players who assemble imaginary components inside the virtual world of the game to build working computers: hard drives, screens, programmable devices, some of which are as big as cities. This is, of course, both cool and monumentally pointless. But it's more than that: it suggests a revised model of the old Baudrillardian precession of simulacra, which in turn, or so I am suggesting, explains the unique appeal of this mode. It is a logic of simulation closer to embedding than the old Pomo mirrorverse horizontal string of simulation.

From my point of view, my son copying James Arnold Taylor copying Ewan McGregor copying Alec Guinness represents a linear chain of simulation, because I saw Alec Guinness first, murmuring 'an elegant weapon, for a more civilized age' and 'these aren't the droids' in his RADA-trained R.P. voice. Then, many years later, I saw McGregor doing that thing Scotsmen all think they can do, and affecting a strangulated posh-o English accent. Later still I became aware of the animated Clone Wars cartoons. So the whole thing lays out a trail that I can trace back. For my son's generation it is not like that: lacking this linear meta-context, simulation for him is centre-embedded. I'll say a little more about this, but with reference to an old science fiction novel rather than a work of professional linguistics: Ian Watson's great The Embedding (1973). Watson lays out the distinction between linear syntax and embedded forms. We can follow 'This is the maiden all forlorn/That milked the cow with the crumpled horn/That tossed the dog that worried the cat/That killed the rat that ate the malt/That lay in the house that Jack built'; and we can continue to follow it, no matter how many additional clauses are added. But syntactically embed the units and it becomes much trickier:
'This is the malt that the rat that the cat that the dog worried killed ate.' How about that? Grammatically correct—but you can hardly understand it. Take the embedding a bit further and the most sensitive, flexible device we know for processing language—our own brain—is stymied. [Watson, The Embedding, 49]
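For anyone who wants to see the distinction in the baldest terms, here is a toy sketch of my own (purely illustrative Python, nothing from Watson's novel) that builds both versions of the sentence from the same three who-did-what-to-whom relations. The linear version hands each noun its verb as it goes, clause by clause; the centre-embedded version stacks all the nouns up first and defers every verb to the very end, which is roughly the load a reader has to carry, and why the malt/rat/cat/dog sentence defeats us.

```python
# Toy illustration (an assumption of mine, not Watson's code): the same
# three relations, strung out linearly versus centre-embedded.

relations = [("the dog", "worried", "the cat"),
             ("the cat", "killed", "the rat"),
             ("the rat", "ate", "the malt")]

def right_branching(relations):
    # Linear, House-that-Jack-Built style: each clause resolves as it arrives.
    out = "This is " + relations[0][0]
    for subject, verb, obj in relations:
        out += f" that {verb} {obj}"
    return out + "."

def centre_embedded(relations):
    # Centre-embedded: all the nouns are listed first, and the verbs only
    # pay them off at the end, innermost pairing first.
    nouns = [relations[-1][2]] + [subject for subject, _, _ in reversed(relations)]
    verbs = [verb for _, verb, _ in relations]
    return "This is " + " that ".join(nouns) + " " + " ".join(verbs) + "."

print(right_branching(relations))
# This is the dog that worried the cat that killed the rat that ate the malt.
print(centre_embedded(relations))
# This is the malt that the rat that the cat that the dog worried killed ate.
```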
I wonder if the logic of simulation that my son is growing up with isn't more impacted, more structurally embedded, than the kind of simulation Baudrillard theorised. And I wonder if he isn't perfectly fine with that. All these copies are embedded inside one another in the paradoxical topography of the virtualised world. This may explain why these aspects of contemporary culture combine novelty with cultural ubiquity the way they do. They are construing the consciousnesses that can enjoy them.

There still is a real world, of course; however fractal the newer logics of simulation grow, we are still anchored somewhere. Where though? Whom is Alec Guinness imitating, anyway? I suppose a Baudrillardian would say: class, in an aspirational sense of the term. Which is to say: ideology. Which is to say: the force that drives the maximisation of profit and works to smooth out obstacles to that process. But two things occur to me, here. One is that this is not, whatever Matthew B. Crawford argues, something that video games culture has invented. Rather it is the horizon within which all culture happens; and the purpose of Guinness's painstakingly upper-middle-class accent was to smooth over the jagged realities of class and wealth inequality. To wave the hand, to convince us these aren't the realities we are looking for. Since those realities are (as Jameson might say) the whole reality of History, eliding them is an attempt to occlude history. Sometimes this is called 'postmodernism', hence Baudrillard.

But I'm not sure this is all there is. In terms of the 'real world' logic, hard as it is to gainsay, by which Star Wars: A New Hope (1977) was released before the Star Wars: The Clone Wars animated series (2008), the Baudrillardian chain of simulation is a straightforward thing. But, of course, in terms of the in-text logic, the old-man Obi Wan played by Guinness comes after the young-man Obi Wan voiced by James Arnold Taylor. I don't want to make too much of this, except to say that 'The child is father to the man' is a peculiarly appropriate slogan for a series, like Star Wars, so deeply invested in paternity and filiality. Rather I'm suggesting that, on the level of representation, the question of who is imitating whom where Kenobi is concerned is more complicated than you might think. Lucas wrote the part with Toshiro Mifune in mind; the Jedi were originally a sort-of samurai caste, hence Obi Wan's Japanese-style name. Guinness's urbane upper-middle-class English performance, clearly, is not 'imitating' Toshiro Mifune, except insofar as Lucas's script constrains him within that larger logic of cultural appropriation. But an advantage of thinking about the logic of simulation that applies here in terms of an embedded, rather than a linear, chain is that it leaves us free to think in more involved ways about precedence and antecedence, about simulation and originality, in this context. In this sense we might want to argue: what Guinness is simulating, in his performance, is simulation itself.

All of this relates most directly, I think, to the way video games are reconfiguring the logic of the dominant visual modes of contemporary culture. The three pillars on which Old Cinema was erected, and to which I have, at the beginning of this too, too lengthy post, attached names, tended to emphasise an intensified temporality: the more dynamic visual rendering of time, the excitement of spectacle, the cultic thrill of celebrity. But if the balance has been towards the time-image (think of all the time-travel movies; think of 'bullet-time'), then with games I wonder if we are not seeing a return to a more spatialised logic.

The kinetic montage and vitality of the cut; an unprecedented scope and scale of the spectacular; a new level and iconicity of superstar celebrity. On these three pillars was the monumental and eye-wateringly profitable edifice of 20th-century cinema erected; and lucrative, culturally important visual texts continue to be developed along these lines: of course they do. But the new visual cultures of the 21st century are starting over, with three quite different pillars. I'm not entirely sure what the three are, yet. But I'd hazard a sense of immense, intricate but oddly unspectacular new topographies of the visual, what we might call the Minecraftization of visual culture, something much more concerned with the spatial than the temporal aspect of the medium. And I wonder about a new configuring of the balance between passive 'watching' and active 'engagement' as salients of the audience experience, with a new stress on the latter quality. And I wonder about a new mobilization of the visual, texts no longer a matter of public cinema or private TV, but disseminated into every tablet and phone and computer, in the pocket of practically everyone on the planet. How's that for embedding?

One of the main thrusts of Crawford's polemic is that this new digital culture is predicated upon an ideology of distraction. And this makes an immediate kind of sense: many people complain of a shrinkage of collective attention span, one that plays into the hands of those who would prefer to get on with despoiling the environment and maximizing social inequality in the service of their own profit. What kind of collective reaction can we muster when reading anything longer than 140 characters prompts eye-rolling, sighing and 'tl;dr'; when we can be distracted by an endless succession of cute cat videos and memes and other such metaphorical scooby-snacks? Maybe Crawford is right about this. But by way of counter-example, I can only point to my son. He is precisely as easy to distract as any 8-year-old. But he is also capable, when watching Dan TDM's sometimes immensely lengthy playthroughs of Minecraft, of paying close attention for literally hours and hours and hours. That's something.

Friday, 18 March 2016

Carnegie Medal: Thoughts on Recent Winners



So: the shortlist for this year's prestigious Carnegie Medal for best children's novel has been announced:
One by Sarah Crossan
The Lie Tree by Frances Hardinge
There Will Be Lies by Nick Lake
The Rest of Us Just Live Here by Patrick Ness
Five Children on the Western Front by Kate Saunders
The Ghosts of Heaven by Marcus Sedgwick
Lies We Tell Ourselves by Robin Talley
Fire Colour One by Jenny Valentine
The Carnegie is always an interesting prize, and doubly so if one happens to be teaching a course on children's literature (as I have been doing this term). There are at least two ways in which it merits discussion: one, since the judges just have a really good record of choosing strong, thought-provoking, worthwhile books; and two, since many school English teachers, mandated by the national curriculum to teach contemporary fiction, take their lead from the prize and bulk-order the winning title for their school libraries to teach it. That gives this medal a special place in the broader culture of 'young people reading'.

I don't know who is going to win the 2016 title; though it would be a brave person who bet against Frances Hardinge's marvellous Lie Tree. I'm more interested in meditating a little on whether there is an identifiable trend in recent Carnegie winners—the last six titles, say, to take us back to the beginning of this decade. (Going by this list, 2016's trend seems to be 'lies': for not only do we have Hardinge's The Lie Tree, Nick Lake's There Will Be Lies and Robin Talley's gnarly, affecting love story Lies We Tell Ourselves, there are also such titles as The Rest of Us Just Lie Here and Lie Colour One. But putting that to one side for a moment.)

In what follows I appreciate I'm running the risk of generalising and therefore flattening the specificity of what is a properly diverse set of novels. But I do wonder if a general taste for dystopia has now assumed a culturally dominant force. The Hunger Games is one of the best-selling series of the century so far; Harry Potter takes its charming old-school, er, school into pretty dystopian territory when Voldemort comes back and establishes a fascist dictatorship. Other series like Noughts and Crosses, Divergent, and The Maze Runner spin variations on the same dystopic vision. Game of Thrones, popular among teens and adults both, is a dystopian Fantasy of rare grimness and horribleness. Once upon a time James Bond (the movies, I mean: Fleming's novels were always dour and self-hating) was camp and colourful and carefree escapism; now it is scowling, dark and testicle-crushing. Superman used to be bright colours and antic adventure; Batman's bashing-up of bad guys would be embroidered with on-screen POW! and SMASH! pop-ups. Now we're staring down the barrel of Batman Versus Superman, which so far as I can see will be two grown men joylessly punching one another in the rain for two hours.

I don't want to overstate things. There's still a lot of joy in a lot of popular culture, of course. But it does seem to me as if the really popular culture-texts, the ones with the most extensive global cultural penetration, all skew dystopic. And whilst the Carnegie titles haven't enjoyed quite the sheer scale of success of Potter, Hunger Games and Twilight, they are interesting and significant texts nonetheless.

So is there a trend? The prize has been running since 1936, but I'm going to concentrate on the 21st century, and our decade in particular. The 2001 winner was The Amazing Maurice and His Educated Rodents, an installment of Pratchett's Discworld sequence, and as charming and funny and clever as any of them.


Hopping forward to 2008: Reeve's Arthurian novel Here Lies Arthur won, a book that creates a notionally 'realistic' sixth century in order to think about the power and endurance of legends and the legendary:


Then to 2009, when Gaiman's Graveyard Book won. Now I don't want to sound like I'm sniping at Gaiman's success (and this novel has been very successful indeed); and there's undeniably a lot to like about this novel, a retelling of Kipling's Jungle Book in a postmortem world of graves, ghouls, ghosts and vampires. I suppose we could say it's 'about' death, but although it is inventive and enjoyable I wonder if it doesn't, ultimately, pull its punches on that existentially desolating subject. I wouldn't call it cutesy. I might call it cute, but cute is OK. I do think it's a little, well, safe.


We might say: that's only right, since it's a children's novel. But to say so would be to ignore the turn the Carnegie took in the 20-teens: and I think that the start of this decade does mark a turn. First off, the final volume of Patrick Ness's excellent Chaos Walking trilogy (the first two installments of which had been shortlisted and longlisted respectively) won the prize in 2010.


There's a great deal to say about this trilogy, much more than I can manage here. It starts as the story of an all-male settlement on a distant planet, men and boys who have somehow been altered so that they cannot help but hear the cacophonic thoughts of others. Ness's cleverly-rendered 'Noise' (some playful typography that never becomes egregious) uses telepathy to capture the claustrophobic oppressiveness of a certain kind of toxic interpersonal environment. Twitter used to be fun when I started using it; it's getting more and more like Ness's 'Noise', and increasingly I find myself wondering if I should be a Todd and run away. We start the Chaos Walking trilogy believing that the women of Prentisstown have all died of the 'germ' that infected the men and made them telepathic; and also that this bug was released upon the human settlers by the alien aboriginal Spackle. The truth, we soon discover, is otherwise: the women are still alive, though elsewhere, and the Spackle are not to blame. But although Ness's storytelling is always lively and interesting, and although he is a funny as well as a moving writer, he is absolutely unsparing in his representation of the way violence harms the self as much as it harms others. The scene in the first book where Todd stabs a Spackle to death is genuinely upsetting to read; and the descent of the various populations into war in the end makes gruelling reading. Prentisstown, and its Donald Trump-ish fuhrer Prentiss, are the stuff of a straightforward dystopian vision.

Ness won the 2011 prize as well, for the heartbreaking A Monster Calls.


The cover sports two medals, as you can see, since Jim Kay also won that year's Kate Greenaway prize for illustration (awarded by CILIP, the same people who administer the Carnegie). As well he should: his images for this book are amazing.



But the whole book is amazing. The monster is a sort of giant animated yew tree, and it calls, always at 12:07, on 13-year-old Conor. Conor's mother is dying of cancer; his father lives on the other side of the Atlantic, and his grandmother, who cares for him, is distant and cold. He's bullied at school and something of a loner. The monster tells him three stories, and in return Conor must tell the creature his own nightmare. But the monster's stories deliberately eschew moralising; they are tales of human suffering and loss. In his waking life, as when he finally fights back against his bullies, Conor seems to be in some sense possessed by the spirit of his troublesome monster. His mother's condition worsens, and at 12:07 she dies in the hospital. If I describe this book as 'Not Now, Bernard, but for bereavement' I in no way mean to belittle it. Not Now, Bernard is, in its way, a kind of masterpiece.



A Monster Calls began as an idea of Siobhan Dowd's, herself a previous Carnegie winner, and herself dying of cancer. But the premise stands or falls on how well the monster is written, and Ness does an extraordinary job with that. His monster possesses just the right touch of the sublime:


This in turn grounds the emotional directness of Conor's relationship to his dying mother, which you really would have to have a heart of stone to read without being touched.


You couldn't call A Monster Calls a dystopia. But neither is it a bag of laughs, and Ness brings the same unflinching approach to psychological trauma here that shapes the representation of actual violence in the Chaos Walking books.

The 2013 winner, Sally Gardner's Maggot Moon, certainly is dystopia, though: and dystopia of a remarkably old-fashioned, Nineteen Eighty-Four-ish stripe.



That's Standish Treadwell on the cover there: a misfit, dyslexic, one-blue-eye-one-brown-eye kid in an Orwellian alt-historical 1950s (I'm guessing) Britain called the ‘Motherland’: deprivation and oppressive surveillance, disappearances and secret police, children at Standish's school beaten, sometimes to death, for noncompliance. Standish lives with his Gramps in a ramshackle house in distant 'Zone 7' whilst the Motherland North-Korea-ishly pushes ahead on a grand and pointless moon rocket project. It's a well-written but surprisingly dispiriting read, Maggot Moon; or I thought so. But more to the point, there's something queerly old-fashioned about this dystopia. Hunger Games, or in their more schematic ways Maze Runner or Divergent, construct dystopias extrapolated from the experience of modern teens; Maggot Moon extrapolates a 21st-century teen dystopia from the experience of 21st-century teens' grandparents, all boiled cabbage and nosy neighbours, shonky totalitarianism and petty spite. It's a strange and oddly nostalgic mode of dystopia, this; though dystopia it certainly is.

Which brings us to Kevin Brooks' The Bunker Diary; 2014's winner and easily the most controversial recipient of the Carnegie Medal.


Teenager Linus Weems is asked by a feeble-looking blind man to help him load something into his van; duped and drugged, Linus wakes up in a windowless underground bunker. The only way in or out is an elevator, which his captor, 'The Man Upstairs', controls.



Over time various other people are deposited in this prison; the youngest a nine-year-old girl. Linus records it all in his diary, and the Man Upstairs withholds food until the captives kill one another. Some die, or commit suicide; others kill. Eventually the elevator stops running; the inmates deduce that the Man Upstairs has passed away, or moved on, or been captured, and resign themselves to their fate. It's a heroically grim and forbidding story with an exceptionally bleak ending: everyone else is dead, and Linus's diary entries trail off to indicate that he too has expired.

Now there's a kind of Saw-lite vibe to The Bunker Diary; and a stubborn refusal to permit readers any kind of catharsis, which is either boldly admirable or else just annoying. But the reaction to the book's win was way out of proportion to the matter at hand. It approached hysteria in the Telegraph; and the Guardian, reporting it, managed to slap a picture of the (I don't doubt, charming and personable) Brooks onto their website that made him look like a stare-eyed loon.


From that report:
Brooks was named winner of the Carnegie on Monday, joining a long roster of prestigious winners which includes Arthur Ransome – who won the first ever award in 1936 – Alan Garner, Penelope Lively and Philip Pullman. The Bunker Diary, which was turned down by publishers for years because of its bleak outlook, is told in the form of the diary of a kidnapped boy held hostage in a bunker. Awarding him with the medal, judges said he had created "an entirely credible world with a compelling narrative, believable characters and writing of outstanding literary merit".

But writing in the Telegraph, in a piece headlined "why wish this book on a child?" and describing The Bunker Diary as "a vile and dangerous story", literary critic Lorna Bradbury vigorously disagreed. She called the book "much nastier" than other dystopian fictions such as The Hunger Games and Divergent, writing: "Here we have attempted rape, suicide and death by various means, all of it presided over by our anonymous captor, the 'dirty old man' upstairs who it's difficult not to imagine masturbating as he surveys the nubile young bodies (including a girl of nine)."

Saying that the novel "makes versions of the imprisonment narrative for adult readers, such as Emma Donoghue's Room, based on the case of Josef Fritzl, the Austrian who locked up his daughter for 24 years, look tame", Bradbury questioned whether books like Brooks' were "good for our teenagers".
There's no actual masturbation or rape specified in Brooks' novel: this is what we might call 'projection' on the part of the evidently rather fevered imagination of Lorna Bradbury. But it is undeniably a grim read.

Also grim is last year's potent winner, Tanya Landman's Buffalo Soldier:


This centres on Charlotte, a slave girl in 1860s America who becomes free with the Emancipation Proclamation but finds her life no better, and in some respects worse. It is a vivid and powerful piece of writing, but far from comfortable: her parents raped and murdered, and Charlotte herself raped and threatened, she decides she'd face a slightly reduced risk dressed as a boy. And it is as a boy, 'Charlie', that she joins the US Army, and becomes one of the Buffalo Soldiers, African American regiments (but with White officers) sent out to subdue Native Americans. Part of the point of the book is the way the brutalised subaltern will in turn find and brutalise any sub-subaltern that power presents to them: the freed Blacks in Landman's novel are at the bottom of the social heap, until they meet the Native Americans. Charlie ends up in a relationship with one Indian, 'Jim', and I wouldn't say the novel is bleached of all hope. Nor can we really call it dystopian, unless 'dystopia' seems to us a reasonable description of the Civil War-era and postbellum USA. Which we might well think.

So where are we? To one degree or another, all the 20-teens winners of the Carnegie medal have been either dystopias or else narratives of trauma, suffering and misery. All of them. And one way of getting at the point I'm trying to make is that these titles (with the possible exception of A Monster Calls) are dark in an unrelieved way: tragic but without catharsis. Indeed a thumbnail definition of today's Grimdark might be: a commitment to tragedy that is by definition non-cathartic.

What does this say about youth culture today? Because whatever it does say will, again by definition, say something important about 21st-century culture more generally, determined as that latter quantity is so hugely by YA, from Potter to the MCU, from pop to SF. Why the scope and persistence of this taste for dystopia? The more hopeful reading might be the Jamesonian one: that dystopia is actually our age's way of 'doing' utopia: the utopia of an age that can no longer believe in the wide-eyed ingenuousness of the epoch that wrote all those unironic utopian books, but which doesn't want to give up the idea that things might be made better.



Jameson's thesis in his Archaeologies of the Future is that dystopia represents an ‘Anti-anti-utopianism’ which is, in a counterintuitive way, utopian:
What is crippling is the universal belief . . . that the historic alternatives to capitalism have been proven unviable and impossible, and that no other socioeconomic system is conceivable, let alone practically available. The value of the utopian form thus consists precisely in its capacity as a representational meditation on radical difference, radical otherness, and . . . the systematic nature of the social totality [28]

[This new mode of] Utopia as a form provides the answer to the universal ideological conviction that no alternative is possible. It does so ... by forcing us to think the break itself . . . not by offering a more traditional picture of what things would be like after the break [323]
Maybe that's right; but I find myself wondering if Jameson's approach is the best way of talking about all this. It is, in a manner of speaking, a kind of assertive mourning for the injustices of the present, aimed at working through the horror towards something better. But maybe Freud's distinction between mourning and melancholia better glosses the present-day vogue for dystopia by leaning more on the latter quantity. Young readers connect with books like The Hunger Games or The Bunker Diary because they feel, in some symbolic sense, trapped in a desperate game designed by adults to control and kill them. The real tragedy of the circumstance is that their suffering, absent catharsis, is not even tragic. 'Within depression,' Kristeva notes in Black Sun, 'if my existence is on the verge of collapsing, its lack of meaning is not tragic – it appears obvious to me, glaring and inescapable.' Those last two words describe the premises of many of these Westeros-Panem YA topographies very neatly.


The counter-intuitive part of all this, for a middle-aged fogey like myself, is that in many ways young people today have 'it' better than any previous generation. Quite apart from material improvements in the quality of life, the future belongs to them, as it always has. They can hardly lose. But then again, perhaps that is the problem. 'Perhaps,' Kristeva also wonders 'my depression points to my not knowing how to lose—I have perhaps been unable to find a valid compensation for my loss.' Difficult to see from where such compensation could be sourced. Chaos Walking and Buffalo Soldier are both, in different ways, about the terrible fallout from acts of colonial intrusion, but it may be that the relevant empire is less historical in these novels than it is the Empire of Trauma itself, under whose distant and implacable authority we all now subsist.


Wednesday, 16 March 2016

Further Thoughts on Sonnet 146: the Musica Sacra Connection



I've noted on this blog before that I've a soft spot for Sonnet 146, the 'Poor Soule the center of my Sinfull Earth' one. Here are some more thoughts on it.

Fairly abstruse thoughts, mind. Still: there are many songs in Shakespeare's plays, and he often collaborated with musicians and composers. For example, it seems likely that 'It Was A Lover And His Lass' from As You Like It was a collaboration between Shakespeare and Thomas Morley: Shakespeare and Morley lived in the same London parish; and 'It Was A Lover And His Lass' was printed, as by Morley alone, in The First Book of Ayres of 1600. It's surely as likely that Shakespeare appropriated Morley's song for his play as that he wrote it himself, although it's also likely that he cultivated professional relationships with various London musicians. Plays needed music, after all.

Morley was a publisher of music as well as a composer, and Thomas Este (his name is on the title page of the Musica Sacra to Sixe Voyces, above) was his chief 'assigne' or printer. Musica Sacra to Sixe Voyces is a translation from the Italian of Francesco Bembo, with music by Croce, the English version being the work of one 'R.H.':


Soko Tomita calls this 'a set of authentic Italian madrigali spirituali and the only Italian madrigal book translated complete into English'. There's some evidence that Shakespeare was interested in Bembo; and I wonder if R.H.'s version of the sixth sonnet here directly influenced Shakespeare's own collection of sonnets, published the following year.


Since we know almost nothing about the sequence of events that led to the publication of Shakespeare's Sonnets by Thomas Thorpe in 1609, not even whether Shakespeare was involved in the process or not, we are licensed to speculate. It's possible Thorpe published with Shakespeare's permission. It's even possible that Shakespeare, asked for copy by his publisher, bundled together some sonnets he'd written as a young man, in the early 1590s, when he was randier and more lustful, with some newer sonnets written in 1608 and 1609, by which time he had become more moral, more (in the loose sense of the word) puritanical about sex, more religious. Sonnet 146 would surely be one of the later poems, if so. And it's not impossible that Shakespeare might have read R.H.'s Musica Sacra sonnets, and written his Sonnet 146 as a version of, or a more loosely inspired extrapolation of, sonnet 6 up there. What do you reckon?
Poor soul, the centre of my sinful earth,
Prest by these rebel powers that thee array?
Why dost thou pine within, and suffer dearth,
Painting thy outward walls so costly gay?
Why so large cost, having so short a lease,
Dost thou upon thy fading mansion spend?
Shall worms, inheritors of this excess,
Eat up thy charge? is this thy body's end?
Then soul, live thou upon thy servant's loss,
And let that pine to aggravate thy store;
Buy terms divine in selling hours of dross;
Within be fed, without be rich no more:
So shalt thou feed on Death, that feeds on men,
And, Death once dead, there's no more dying then.
The various similarities and verbal parallels can be left as an exercise for the reader. One attractive aspect to this theory, howsoever farfetched it may be, is that if it is true then we have a strong steer as to the music in Shakespeare's head as he wrote this sonnet. Sonnets are little songs, after all; and 'Poor soul, the centre of my sinful earth' goes pretty well to this:



Tuesday, 15 March 2016

Thoughts That Do Often Lie Too Deep For Tears



This is one of the most famous lines in all of Wordsworth, perhaps the single most famous: the conclusion, of course, to his masterly 'Ode: Intimations of Immortality from Recollections of Early Childhood' (1807). This rich and complex poem starts from the simple observation that when Wordsworth was a child he had an unforced, natural access to the splendour and joy of the cosmos, but that growing old has alienated him from that blessed mode of being-in-the-world. It begins:
There was a time when meadow, grove, and stream,
The earth, and every common sight,
To me did seem
Apparelled in celestial light,
The glory and the freshness of a dream.
It is not now as it hath been of yore;—
Turn wheresoe'er I may,
By night or day,
The things which I have seen I now can see no more.
It ends:
And O, ye Fountains, Meadows, Hills, and Groves,
Forebode not any severing of our loves!
Yet in my heart of hearts I feel your might;
I only have relinquished one delight
To live beneath your more habitual sway.
I love the Brooks which down their channels fret,
Even more than when I tripped lightly as they;
The innocent brightness of a new-born Day
Is lovely yet;
The Clouds that gather round the setting sun
Do take a sober colouring from an eye
That hath kept watch o'er man's mortality;
Another race hath been, and other palms are won.
Thanks to the human heart by which we live,
Thanks to its tenderness, its joys, and fears,
To me the meanest flower that blows can give
Thoughts that do often lie too deep for tears.
I have a simple question: what does it mean to talk of 'thoughts that do often lie too deep for tears'? I'm not asking after the psychological or existential ramifications of the phrase; I'm asking about its semantic content.

You see, it's a phrase that seems to me to imply two quite incompatible meanings. One: the speaker of the poem is saying that even the meanest flower that blooms—like the one at the top of this post—can sometimes make him cry. These tears come from a source that is usually, in his day-to-day living, repressed, buried deep in the psyche, since he is English and therefore too buttoned-down to permit weeping. But the encounter with simple natural beauty liberates this emotion from its prison, and the cathartic tears can at last flow. These are thoughts that often, but not always, lie too deep for expression as tears. In other words, the encounter with the wild-flower in the last two lines of the 'Ode' is, in its bittersweet way, a positive one.

But there's another way of reading the line. This would posit a psychological topography in which, in descending layers, we have: the normal everyday placidity, and below that the propensity to weep, and below that something else, some profound sorrow or depression too deeply ingrained in the human soul ever to find release in tears. Children cry, when provoked, as we all know, because they are more intuitively in touch with their emotions; this is, in one sense, the whole thesis of the 'Ode'. But the poem also embodies the mournful observation that men like Wordsworth's speaker here have lost the capacity for that kind of emotional ingenuousness. Now to encounter nature, in the form of the wildflower, creates a sense of sorrow so deep that it cannot even be relieved by crying. It is not every thought; it's not always like this. But it often is.

Clearly the line can't mean both of these two things. The first suggests that a grown-up saudade finally relieves itself in crying the sorts of tears that great beauty can provoke. The second implies that the Ode is an elegy for the barrenness of modern emotional existence, a parched state where the sorrowful thoughts cannot even provoke tears, because they lie too deep for them. Tears of complicated joy, versus I-have-no-mouth-and-I-must-scream despair. Hmm.


Sunday, 6 March 2016

John Green: the Antecession of Adolescence



It seems to me hard to deny that YA fantastika has, over the first years of this new century, achieved a mode of cultural dominance: that Potter, Katniss and the MCU bestride our contemporary cultural production like colossi; that Malorie Blackman and Patrick Ness are more important contemporary UK novelists than Martin Amis and Zadie Smith. But I have to admit that my saying so may merely reflect my own bias towards SF/Fantasy. Perhaps I overestimate the centrality of Fantasy to the contemporary YA phenomenon. I'm not sure I do, but it's possible. It's one thing to talk about Rowling, Collins, Meyer, Blackman, Ness and Pullman (and Lemony Snicket, and Philip Reeve, and Eoin Colfer, and Tony DiTerlizzi and Holly Black, and Jonathan Stroud, and Tom Pollock, and Rick Riordan, and Cassandra Clare ... and on and on the list goes) as representing some important cultural movement.



But I have to concede that not all today's YA is fantastika. Or, to put it another way: if my argument is that the key YA texts are all Fantasy, then how do I account for those commercially huge, culturally major YA writers who don't write Fantasy? Two names in particular leap out: the marvellous Jacqueline Wilson, and the mega-selling John Green. Both work in what we could loosely call 'realist' idioms, writing about children and teenagers. Both are very good. What about them?

Take Green. Now, I like Green a great deal: he has a funny, personable and informative online presence as *clears throat* a vlogger, and he writes intelligent, witty and prodigiously successful novels. If those novels don't move me the way they evidently move millions of younger readers, that merely reflects my age. They're not aimed at people like me. Or it would be truer to say: they're not primarily aimed at people like me. And, to speak for myself, I admire and enjoy the charm with which he writes, the cleverly packaged wisdom, the lightness of touch he brings to serious matters.

A lot of this has to do with Green's skill with one-liners. The crafting of excellent one-liners is a much more demanding skill than many people realise. It is a business I rate and respect. Sometimes Green writes one-liners to get a laugh, which (of course) is the conventional function of the one-liner: 'Getting you a date to prom is so hard that the hypothetical idea itself is actually used to cut diamonds' [from Paper Towns], or '"It's a penis," Margo said, "in the same sense that Rhode Island is a state: it may have an illustrious history, but it sure isn't big."' [from the same novel]. But just as often he writes one-liners designed to make you feel, or think, rather than laugh. That's harder to do, I think. The most famous line from The Fault in Our Stars is 'I fell in love the way you fall asleep: slowly, and then all at once', which has the form of a one-liner but is built to produce a particular affect rather than a laugh. Rather beautiful, too.

Green has two big-ish themes to which he keeps returning, and which we might peg as 'death' and 'authenticity', both inflected through the prism of teenage intensity. That he's good on this latter quantity (that is, on the way adolescents feel more intensely, have goofier highs and moodier lows, than grown-ups; the way they experience love as first love in all its Romeo-and-Juliet full-on-ness) is evidenced by his enormous appeal amongst his target audience, to whom his books clearly speak; and this also doubtless explains that element of his writing that I don't quite grok, being middle-aged and English and dwelling accordingly upon a buttoned-down emotional plateau of politeness and tea and low-level anguish. But I don't think 'teenage intensity' is his primary theme; I think it's the idiom via which he chooses to express a fascination with death and authenticity. In Looking for Alaska (2005), the main character Miles 'Pudge' Halter spends a year at a boarding school where he has various adventures with schoolfriends and schoolenemies, and where he falls in love with the beautiful but unhinged Alaska Young. The story bundles along, pleasantly funny and bittersweet, until the end, when Alaska drives drunk, crashes her car and dies, a death that is perhaps suicide. One of the first things we learn about Pudge is that he is fascinated with people's famous last words, and one of the things that first bonds Pudge and Alaska is their shared interest in Simón Bolívar's enigmatic final line: 'Damn it. How will I ever get out of this labyrinth!' ('Is the labyrinth living or dying?' Alaska wonders. 'Which is he trying to escape—the world or the end of it?'). The novel's own ending, and Pudge's attempts to come to terms both with his bereavement and with his guilt at his possible, though unwitting, complicity in her death (since he and another friend distracted the school authorities in order to let Alaska get away in her car), insert this morbid fascination rather cruelly into reality. What Pudge realises is that he wasn't in love with Alaska, but with an idea of Alaska he had in his head. 'Sometimes I don't get you,' he tells her; and she replies ('she didn't even glance at me, she just smiled') 'You never get me. That's the whole point.'

There's something important in this, I don't deny. It has to do with the teenage tendency towards self-obsession and egoism, of course; but it's also to do with a broader, neo-Arnoldian existential disconnect, the unplumbed salt estranging sea that lies between all our islands. 'It is easy to forget,' is how Paper Towns puts it, 'how full the world is of people, full to bursting, and each of them imaginable and consistently misimagined.' I take it that Green's point is: we owe other people a duty to at least try and relate to them as they are, and not to ignore them, or rewrite them in our minds as we would like them to be. This, in my reading, is the 'labyrinth' from which the characters in Looking for Alaska are trying to escape: it is inauthenticity, and the escape routes Green suggests are such quantities as forgiveness and acceptance. If I call this stance 'authenticity', I'm not trying to tag-in Existentialism. The position has more in common with Holden Caulfield's animadversion against all things 'phony' than it does with Sartre.



Still, it's fair to say that 'Existentialism' was interested in the connections between angst, authenticity and death, and there's something in that combo in Green's writing that doesn't quite sit right with me. I feel like an uglier and grumpier Oscar Wilde mocking Little Nell, but part of me found itself unable to buy into The Fault In Our Stars (2012), Green's biggest success, first as bestselling book, then as box-office-topping movie: the undeniably heartfelt story of two teenage cancer sufferers falling in love. When Malorie Blackman rewrites Romeo and Juliet in her Noughts and Crosses, her focus is on the arbitrary grounds of the Montague-Capulet hostility, and the toxic social environment that results. When Green rewrites the same story it is not inter-familial hatred but death itself that comes between the two young lovers. That's both the book's strength and, perhaps, its weakness. The love-story reads as believable and sweet; but the book as a whole treads that debatable line between sensibility and sentimentality, and the brute fact of death, at the story's end, distorts what the book is trying to say about love. It swathes the experience in a cloak of existential all-or-nothingness, which tends to present that experience as, as it were, all icing and no actual cake. I'm not trying for a cheap shot here, or at least I hope I'm not. I'm not accusing The Fault in Our Stars of wallowing in misery-lit melodramatic tragic schlock simply because it juxtaposes young love and cancer. Hazel and Augustus, in the novel, don't fall in love because they have cancer; the cancer is just something they have to try and deal with as they fall in love. But because such cancer truncates life, the novel can't help but offer up a truncated representation of love, and this tangles awkwardly with the fact that this is a story of intense teenage passion. Romeo and Juliet experienced emotional intensity with one another, no doubt; but what sort of marriage would they have had, had they survived the end of the play? What would they have looked like, as a couple, in their thirties? Or their sixties? Hazel, in The Fault In Our Stars, dismisses the old insistence that 'without pain, how could we know joy?'
This is an old argument in the field of thinking about suffering, and its stupidity and lack of sophistication could be plumbed for centuries, but suffice it to say that the existence of broccoli does not, in any way, affect the taste of chocolate.
Which is neat, and uses the one-liner form nicely. It just doesn't use that form in a way that actually suffices to say. After all, a person wouldn't live very healthily, or very long, on a diet of pure chocolate and no broccoli. One of the ways love is more than mere lust is that love lasts; and if there's no timescale into which such lasting can be projected, it is somewhere between difficult and impossible to be sure about the love. The true test of love is not in-the-moment intensity, but endurance. I appreciate that's a very middle-aged-adult, and a very un-teen, thing to say. That's the whole point.

But I don't mean to get distracted. Rather, I want to say something about Paper Towns (2008), a more interesting novel (I'd say) than The Fault In Our Stars. This novel frontloads its death (its characters start the story by discovering the body of a divorcé), which is a better, which is to say a less conventionally melodramatic, way of doing things. It goes on to tell the story of Florida teen Quentin "Q" Jacobsen and his young neighbour, the eccentric but (of course) beautiful Margo Roth Spiegelman. Margo, a character who doesn't quite escape the taint of Manic Pixie Dreamgirlishness, recruits Q to help her take elaborate and comical revenge upon various kids at their school who have slighted her. Halfway through the story Margo disappears. The community begins to think she has committed suicide, but a series of clues persuades Q that she is still alive, and living in the 'paper town' of Agloe, New York: a simulacrum of a town invented by mapmakers that has, oddly enough, turned into a real town. He and his friends drive up to rescue her after their high school graduation, but she doesn't want to be rescued. The book ends with Q accepting that he has lived inauthentically, devoted to a version of Margo he has concocted out of his own desire and insecurity, and that it's not fair to Margo to relate to her in that way. Sailing dangerously close to the un-Seinfeld learning-hugging-growing arc, Paper Towns' Q realises 'the fundamental mistake I had always made—and that she had, in fairness, always led me to make—was this: Margo was not a miracle. She was not an adventure. She was not a fine and precious thing. She was a girl.' By the novel's end this point is bedded in: 'What a treacherous thing to believe that a person is more than a person.'

The novel as a whole is concerned with this question of inauthentic living, with the simulacrum. The 'paper town' of Agloe is real: it began as a non-existent place, included on a map of New York State to catch out any rival mapmakers foolish enough to plagiarise, and subsequently became an actual place. At the end of his novel, Green notes this fact: 'Agloe began as a paper town created to protect against copyright infringement. But then people with these old Esso maps kept looking for it, and so someone built a store, making Agloe real'. The map precedes the territory, the description comes before the reality described. 'Margo always loved mysteries,' Q tells us. 'And in everything that came afterward, I could never stop thinking that maybe she loved mysteries so much that she became one.' It's neatly done. Margo never quite comes alive, but her quirky puppet-ness doesn't impede the story. Arguably the reason she doesn't feel fully alive is that, in terms of the internal logic of the story, she doesn't want to. Which has some interesting implications for characterisation, actually.

Then again, there are moments when the simulacrum is less postmodern, and more old-school phony. Margo, on her hometown Orlando FL, ventriloquises the echt Holden Caulfield:
You can tell what the place really is. You see how fake it all is. It's not even hard enough to be made out of plastic. It's a paper town. I mean look at it, Q: look at all those cul-de-sacs, those streets that turn in on themselves, all the houses that were built to fall apart. All those paper people living in their paper houses, burning the future to stay warm. All the paper kids drinking beer some bum bought for them at the paper convenience store. Everyone demented with the mania of owning things. All the things paper-thin and paper-frail. And all the people, too. I've lived here for eighteen years and I have never once in my life come across anyone who cares about anything that matters.
This is attractively meta (since any town described in a book made of paper bound together is going to be a paper town), even a touch modish. It either captures with nice irony, or else is deplorably complicit with, that teenage certainty of knowing 'what matters', and that what matters is more than just living an ordinary, unexceptional life, the way boring grown-ups do.

Then again, maybe the conceit of Paper Towns does tip a more Baudrillardian than Sartrean nod. It invites us to go back to 1981's Simulacres et Simulation. Maybe, in this novel, Green goes beyond 1950s phony-baiting, and into the precession of simulacra as such, and maybe that's why the novel works better for me. Baudrillard, you'll recall, distinguishes three phases:
First order simulacra, associated with the premodern period, where representation is clearly an artificial placemarker for the real item. The uniqueness of objects and situations marks them as irreproducibly real and signification obviously gropes towards this reality.

Second order simulacra, associated with the modernity of the Industrial Revolution, where distinctions between representation and reality break down due to the proliferation of mass-reproducible copies of items, turning them into commodities. The commodity's ability to imitate reality threatens to replace the authority of the original version, because the copy is just as "real" as its prototype.

Third order simulacra, associated with the postmodernity of Late Capitalism, where the simulacrum precedes the original and the distinction between reality and representation vanishes. There is only the simulation, and originality becomes a totally meaningless concept.
This is where we are, says Jean. Disneyland started as a copy of the perfect American small town; now, Baudrillard suggests, America itself is a kind of copy of Disneyland (Orlando FL, of course, has its own version in Disney World). The simulation precedes the reality. So it is that we care about, and invest emotionally in, the fictional neighbours represented in Eastenders and Coronation Street, and barely know our actual real-world neighbours. So it is that the things that happen in the world only feel real to us when we see them reported on the TV news. When Baudrillard talks about the 'precession of simulacra' in Simulacra and Simulation, he means that simulacra have come to precede the real, and that the real is, in his pungent phrase, 'rotting', its vestiges littering what he calls 'the desert of the real'.

I suppose we could say that the difference is that Baudrillard celebrates this new simulacral logic, whereas Green finds it both exhilarating and terrifying. Having encountered a dead body, and heard gunshots, and been afraid in various ways, Q comes to understand that there is a deeper fear underlying his 'real' or 'actual' experiences of fear. Or not 'underlying', but 'preceding': 'This fear bears no analogy to any fear I knew before,' he tells us. 'This is the basest of all possible emotions, the feeling that was with us before we existed, before this building existed, before the earth existed. This is the fear that made fish crawl out onto dry land and evolve lungs, the fear that teaches us to run, the fear that makes us bury our dead.' Not a fear of death, and not a fear of inauthenticity as such, but rather the fear that inauthenticity is the only reality.

This is bringing me, slowly and after many too many words (I know, I know), back to my original point. Why does so much globally popular YA take the form of Fantasy? If there is a metaphorical relationship between the magical school (or the daemon-accompanied alt-world, or the sexy vampire, or whatever) and reality, we might expect there to be a mimetic relationship between the Orlando teenagers, or the cancer-suffering teenagers in Indianapolis, and reality. But that's not how it works.

Maybe that's what books like Green's offer us: 'realism' rather than realism, a different logic of fantasy that repudiates the idea that there is a clear reality to be metaphorised. A nostalgia for the future rather than for the present or for the past. That's what Alaska thinks, at any rate, in Looking for Alaska:
"Imagining the future is a kind of nostalgia."

"Huh?" I asked.

"You spend your whole life stuck in the labyrinth, thinking about how you'll escape it one day, and how awesome it will be, and imagining that future keeps you going, but you never do it. You just use the future to escape the present."
If I say that there's a particular emphasis upon this pseudo-nostalgia in Green's novels, I don't mean it as negative criticism. Baudrillard, in that 'Precession of Simulacra' essay, insists that 'When the real is no longer what it was, nostalgia assumes its full meaning.'

I don't mean to over-reach, argument-wise, but I wonder if this speaks to the reasons why YA has become so culturally dominant. Once upon a time kids wore jeans and listened to rock music until they passed out the other side of their adolescent phase: then they put on suits and dresses and went to work and listened to Classical Music. Got blue rinses. Smoked pipes and grew beards. Now it's the kids who dress as hipsters, in suits and sculpted facial hair, with dye in their hair, and middle-aged dinosaurs like me who wear jeans and listen to rock music. What started as a chronological descriptor covering the years 13-19, a transitional period when one is no longer a child but not yet an adult, has expanded, bulging at both ends. Now the phase starts earlier and ends much later: people in their 20s, their 30s, even their 40s, still living with their parents, or pursuing their teenage pursuits (look at me, and my continuing passion for science fiction, for one example), or examining their own souls and saying: you know? I really don't feel 'grown-up'. Not properly. Properly grown-up is the desert of the real of individual subjectivity. Baudrillard, again from the 'Precession of Simulacra' essay:
This world wants to be childish in order to make us believe that the adults are elsewhere, in the "real" world, and to conceal the fact that true childishness is everywhere - that it is that of the adults themselves who come here to act the child in order to foster illusions as to their real childishness.
If we think of it like that, then the whole cultural edifice of children's and YA literature becomes an attempt, on the largest scale, to fix and establish a simulacrum of 'youth', for the benefit of the adults. 'The child is father to the man' is evacuated of its original natural piety and spiritual truth, and becomes instead the slogan of causal disconnection in a youth-obsessed society in which adolescence no longer precedes adulthood, but replaces it altogether. Things that once distinguished childhood from adulthood, in the sense that kids would not do these things and adults would—trivial things like drinking and smoking, or profound things like having sex and dying—become, in Green's novels, how teens spend their time. They are all nostalgic for a future that, in Green's textual universe, will never come. It's the precession of sim-maturity that marks the erosion of the distinction between the immature and the mature. Why do teens in John Green novels keep dying? There's a line in Catch-22 that I've always liked, where Yossarian rails that a certain air-force friend of his, killed in action, was old, very old, very very old, twenty-two. That doesn't sound very old, his interlocutor returns. He died, says Yossarian; you don't get any older than that. Which also has the shape of a one-liner, whilst packing a significantly larger existential punch than the my-dog's-got-no-nose standard sample. (It was also Heller, in the rather underrated Something Happened, who said: 'When I grow up I want to be a little boy.')

What Green's novels embody is a larger logic of YA: a kind of impossible nostalgia for a future adulthood that the protagonists not only have never experienced, but fear will never come. As in Harry Potter, or The Hunger Games, the story is: teens are compelled to act as adults, to assume adult responsibilities, commit adult murders, risk the fate of all adults (which is death). But this isn't the precession of adulthood; it's the Baudrillardian erasure of adulthood. That's the fantasy. Maybe.