On the (rare) occasions when I teach cinema rather than my usual literature, I have been known (rarely) to offer students a more-or-less polemical abridged history of 20th-century film and television: an epitome of modern visual culture in three individuals. Given how saturated we all are, nowadays, in visual culture, how many tens of thousands of hours of TV and YouTube and movies and so on we have all assimilated before we even reach the age of majority, it's easy to forget how counterintuitive the visual text is. 20th- and 21st-century visual texts like films and TV shows differ more from the sorts of visual media that preceded them than they resemble them: watching a play is not a nascent form of watching a movie; animated cartoons are much more than paintings that move. At any rate, I suggest that the three key innovations can be thumbnailed as: Eisenstein; Griffith; Chaplin. That's three men who, between 1915 and 1925, established the parameters that in the most crucial sense distinguish modern 'visual culture' texts from older, literary, theatrical and painterly ones.
Eisenstein is significant for in effect inventing key elements of the visual grammar of film: most famously montage, or the assemblage of images and sequences linked by jump-cuts. Early theorists were astonished by the effectiveness of the jump-cut: Benjamin, in his 'The Work of Art in the Age of Mechanical Reproduction' essay (1936) argues that jump-cuts are so deracinating for the ordinary sensorium, and so widely disseminated through the new mass-media, that they would accomplish nothing short of a revolution in human life. Film, he argues,
affords a spectacle unimaginable anywhere at any time before this. It presents a process in which it is impossible to assign to a spectator a viewpoint ... unless his eye were on a line parallel with the lens. This circumstance, more than any other, renders superficial and insignificant any possible similarity between a scene in the studio and one on the stage. In the theater one is well aware of the place from which the play cannot immediately be detected as illusionary. There is no such place for the movie scene that is being shot. Its illusionary nature is that of the second degree, the result of cutting. That is to say, in the studio the mechanical equipment has penetrated so deeply into reality that its pure aspect freed from the foreign substance of equipment is the result of a special procedure ... The equipment-free aspect of reality here has become the height of artifice; the sight of immediate reality has become an orchid in the land of technology.... Thus, for contemporary man the representation of reality by the film is incomparably more significant than that of the painter, since it offers, precisely because of the thoroughgoing permeation of reality with mechanical equipment, an aspect of reality which is free of all equipment.

As it turned out, Benjamin was wrong. The film jump-cut has not shaken human sensibilities free of the bag-and-baggage of 'traditional' visual representation, and the reason it has not is that, in fact, jump-cuts mimic the process of human perception very closely. That might look like a counterintuitive thing to say, so I'll expand on it a little. We might consider that the cut performs a kind of violence to the 'natural' business of looking, since its essence is moving without interval from one thing to an unadjacent and possibly unrelated thing. In fact the way our eyes and brains collaborate to piece together their visual apprehension of the world much more closely resembles an Eisensteinian 'cut' than it does a slow pan.
If we turn our head from left to right, our eyes remain on the original object occupying our visual field, counter-swivelling in the eyesockets as the head turns, until they reach a point where they flick very rapidly to something else in the visual field, a movement vision scientists call a saccade. They then fix on Object B as the head continues to turn. The brain reads this process not as 'Object A, blurry motion, Object B...' but as a straight jump-cut from Object A to Object B. Try it at home if you don't believe me. I move my head from looking at this screen, to the right of my computer where the half-drunk mug of tea sits on the desk, then to the bookshelves, and finally to the door, and what my brain 'sees' is these four items as discrete elements, all cut together. I don't pan smoothly all the way round. That's not how the eyes and the brain 'see'.
I don't mean to go on: but the way that Eisensteinian 'montage' works is not to permeate our consciousness with a radical new mechanical sensibility, as Benjamin thought, but actually to bring the visual experience of watching closer to reality than was ever the case in (say) watching a play in the theatre. It's not just a more dynamic visual experience, it's a visual experience radically defined by its dynamism, formally speaking. And that—rather than just the fact that film is images that move—is the thing that is new in human culture. No mode of representation in the history of humankind had been able to do that before. Of course, montage and the cut are now so deeply integrated into visual texts, and we are exposed to so much of them from such an early age, that they 'feel' like second nature. They're not, though; and my shorthand for that aspect of visual culture is 'Eisenstein'.
My second name is 'Griffith', which is to say D W Griffith, and I invoke him as a shorthand for two things, both embodied by his immensely successful movie Birth of a Nation (1915): not its deep-dyed and ghastly racism, nor the part it played in revitalising the Ku Klux Klan, difficult though it is to separate the other aspects of the film from that horrid heritage; rather, the two things that the success of the movie baked into 20th-century cinematography: the 'feature' length of the movie as a two-hours-or-so experience, and the sheer spectacle of the battle scenes. The former has become simply a convention of movies, and a more or less arbitrary one; but the latter has come absolutely to dominate cinema. All the highest-grossing movies now offer their audiences spectacle on a colossal scale; and the rise of CGI has proved the apotheosis of spectacle as such. Griffith achieved his effects with vast casts and enormous sets, especially on his follow-up film Intolerance (1916), the most expensive film made to that point, and although it seems it wasn't quite the colossal flop it was once thought to have been (it just about recouped its enormous outlay) it set a precedent for spectacular overspend. In other respects Griffith was a very un-Eisensteinian director, specialising in panoramic long shots and slow camera pans. He tended to punctuate scenes with iris effects, and his art is much closer to old tableaux vivants (though on an unprecedented scale) than Eisenstein's. But so spectacular! Visual spectacle was not unknown before cinema, of course: there were plenty of spectacular stage shows and circuses in the 19th and early 20th centuries. But cinema proved simply better at this business than the stage.
Indeed, I sometimes think that 'spectacle' has become such a bedrock of modern cinema culture that it makes sense to see 21st-century film as the development of narrative and character specifically via the idiom of spectacle rather than in any other sense.
The third name, Chaplin, is of course here to represent a new kind of celebrity. Indeed, it's hard to overstate how famous Chaplin was in his heyday. His films have neither Eisenstein's formal panache nor Griffith's scope and splendour: but what they do have is Chaplin himself, a performer of genius. 'Chaplin had no previous film acting experience when he came to California in 1913,' as Stephen M. Weissman notes. Nonetheless, 'by the end of 1915 he was the most famous human being in the entire world.' He was more than famous, in fact: he invented a new mode of 'being famous': a hybrid of professional achievement and press-mediated personal scandal, blended via a new global iconicity. The closest pre-mass-media equivalent might be Byron; the nearest 1920s contemporary might be Valentino; but Chaplin surpassed both, and there's no 21st-century figure who even approaches his level of early 20th-century fame. He was the first star as superstar, the first great celebrity as brand, a global VIP whose image is still current today.
So. OK: the danger with a deliberately simplified thesis like this is that students will take it as an ex cathedra pronouncement wholly describing the early history of 20th-century cinema. It is, clearly, very far from being that. I would, however, be prepared to defend the case that the three qualities represented by these three individuals are what differentiates 20th-century visual culture, in the broadest sense, from its historical predecessors: theatre, tableaux vivants, dance, painting, sculpture and illustration, magic lanterns and so on. It's not only that cinema and, later, TV (and latterly games, online culture and so on) have proved massively more popular, and have evinced much deeper global penetration, than the earlier visual forms, although clearly they have. It's that these latter qualities are the result of the medium's formal expression of those three elements: a new expressively dynamic visualised logic; new and ever more sublime possibilities of spectacle; and a new recipe of celebrity.
These are large questions, of course, and they have been more extensively discussed than almost any feature of contemporary culture. In The Senses of Modernism: Technology, Perception, and Aesthetics (2002) Sara Danius summarises the broader currents:
For a theory of the dialectics of technology and perceptual experience, one could use as a starting point Marx's proposition that the human senses have a history. The cultivation of the five senses, Marx contends, is the product of all previous history, a history whose axis is the relation of human beings to nature, including the means with which human beings objectify their labour. One could also draw on the theory of perceptual abstraction implicit in Walter Benjamin's writings on photography, mechanical reproducibility and Baudelaire's poetry. Benjamin's theory could then be supplemented with Guy Debord's notion of the society of spectacle, or, for a more apocalyptic perspective, Paul Virilio's thoughts on the interfaces between technologies of speed and the organization of the human sensorium. One might also consult Marshall McLuhan's theory of means of communication which, although determinist, usefully suggests that all media can be seen as extensions of the human senses; and Friedrich Kittler's materialist inquiry into the cultural significance of the advent of inscription technologies such as phonography and cinematography.

A pretty good list of the usual 'Theory' suspects, and we could springboard from there in a number of different directions. For the moment, though, I'd like to try to think through a couple of somethings occasioned by watching my 8-year-old son at play in the world of visual and digital media.
Which brings me to an autobiographical anecdote. My lad is pretty much like any other 8-year-old. So, for instance, he watches a ton of TV and he likes to play in both the somatic sense (I mean, with his body: running around, climbing stuff, larking about) and in the digital sense. He has a PS3 and his favourite game at the moment is Plants vs. Zombies. But even more than playing this game, he enjoys watching other people play video games; and by 'other people' I mean people on YouTube. He will gladly spend hours and hours at this activity, and it fascinates me.
It fascinates me in part because it seems, to my elderly sensibilities, a monumentally boring and pointless activity. I can see the fun in playing a game; my imagination fails me when it comes to entering into the logic of watching videos of individuals with improbable monikers such as Stampylongnose and Dan TDM playing Minecraft, talking all the time about what they are doing as they do it. Partly I think this is because it seems to me to entail a catastrophic sacrifice of agency, where I suppose I'd assumed agency is core to the 'fun' of play. But my boy is very far from alone. The high-pitched always-on-the-edge-of-hilarity voice of Stampylongnose ('hellooo! this is Stampy!') might set my teeth on edge as it once again echoes through our house; but Joseph Garrett, the actual person behind the YouTube persona, hosts one of the ten most watched YouTube channels in the world. He is huge. He is bigger, in terms of audience, than many movie stars.
This is clearly a significant contemporary cultural mode, and I wonder what it's about. It may have something to do with the medium-term consequences of Benjamin's loss of 'aura', something he discusses as a central feature of modern culture: that the new art exists in a radically new way, not only as a form of art that is mechanically reproducible, but as a whole horizon of culture defined by its mechanical reproducibility. This is the starting point for Baudrillard's later meditations on the precession of the simulacra; because before it is anything else, the simulacrum is the icon of perfect mechanical reproducibility. 'Reality,' Manuel Castells grandly declares, 'is entirely captured, fully immersed in a virtual image setting, in a world of make-believe, in which appearances are not just on the screen through which experience is communicated, but they become the experience' [Castells, The Information Age: Economy, Society and Culture (2000), 373]. For Baudrillard, of course, this entails a weird kind of belatedness, in which the simulation which once upon a time came after the reality now precedes it. Castells is saying something more extreme, I think: that reality and simulation are now the same thing. For my son, this may be true.
Not that the boy spends literally every waking hour on a screen (part of our responsibility as parents is ensuring that he doesn't, of course). There was one time when he was actually (that is, somatically) playing: wielding an actual toy lightsaber and doing a lot of leaping about. I was the bad guy, of course ('I am your father!' and so on). In the course of this game, my boy adopted a slightly strangulated posher-than-normal voice and said something along the lines of: 'Anakin, take the droids and bring up that shuttle!' And I recognised the line as something he had heard from an episode of the animated Star Wars: The Clone Wars series that he had been watching. My son was playing at being Obi Wan Kenobi, based on what he had seen of the character in that show.
I confess I was very struck by this. Kenobi in this show is voiced by the US actor James Arnold Taylor, and my son was doing an impression of him. But of course Taylor is himself, in this role, doing an impression of Ewan McGregor from the Star Wars prequel movies. And McGregor in those films is doing a kind of impression of Alec Guinness. So my son was copying James Arnold Taylor copying Ewan McGregor copying Alec Guinness. Baudrillard might call this a fourth-order simulation, and talk about how distinctively postmodern it is. Indeed, he would probably go further and cite this as an instance of the precession of simulacra: for when he finally watched Star Wars: A New Hope my lad was disappointed at how little like the 'actual' Kenobi this stiff old geezer playing him was.
A related question is the extent to which Guinness himself, born an illegitimate child in a rented Maida Vale flat, was 'imitating' something when he spoke, after the manner of the upper-middle-class-aspirational elocution-lesson-taking ethos of his time and social milieu. I'll come back to that.
Still, however lengthy this chain of simulation grows, it still follows a recognisably linear domino-tumble logic of simulation. The boom in watching YouTubers playing video games seems to me something else. Of course it's tempting simply to deplore this new cultural logic, as Matthew B. Crawford does in his recent The World Beyond Your Head: On Becoming an Individual in an Age of Distraction (2015). Diana Schaub summarises: Crawford's book anatomises 'the fragile, flat, and depressive character of the modern self and the way in which this supposedly autonomous self falls ready prey to massification and manipulation by corporate entities that engineer a hyper-stimulating, hyper-palatable artificial environment. Lost in the virtual shuffle are solid goods like silence (a precondition for thinking), self-control (a precondition for coherent individuality), and true sociality.' In other words, this new visual logic is the symptom of something pathological in modern subjectivity itself:
Crawford is particularly good at showing how forms of pseudo-reality are being deliberately manufactured, not in obedience to some inner dynamic of technological progress, but rather in accord with “our modern identification of freedom with choice, where choice is understood as a pure flashing forth of the unconditioned will.” Freedom, thus conceived, is essentially escapist; we seek to avoid any direct encounter with the tangible world outside our elaborately buffered me-bubble. I saw this impulse at work when our young son would flee from the pain of watching a game his Orioles seemed destined to lose, retreating to a video version of baseball in which he could basically rig a victory. He preferred an environment he could control to the psychic risk of real engagement. I thought we had done enough by setting strict time limits and restricting his gaming options to sports only. But it became obvious that the availability of this tyrannical refuge was an obstacle to his becoming a better lover of baseball (and, down the road, a better friend, husband, and father).

This is rather too pearl-clutching for my taste. Which is to say, I don't think it's true, actually, that these new modes of visual media will result in a whole generation incapable of interacting with real life as friends, spouses and parents. But there's something here, isn't there? I wonder if there is some quite radical new mode of art being inaugurated.
Returning to my initial three figures, and the pared-down cartoonified narrative about 'visual culture in the 20th/21st centuries' they embody. The thing is, I look at video games, and the paratexts of video games (like YouTube playthrough videos) and I see none of the three present. Video games come in various formal flavours, but none of those flavours make much use of montage or jump cuts, at least not in game play. First person shooters fill the screen with what is notionally the player's-eye-view of the topography through which s/he moves; but this view moves according to a formal logic of pans and tilts, not according to jump-cuts. With platformers we track the motion of Mario (or whoever) left to right, up, down and so on. The visual field of games, which used to be Space-Invaders rudimentary, now tends to be very rich, busy with detail, high-def, cleverly rendered, complex; but the formal visual grammar of games and gaming tends to be pre-Eisenstein. That's really interesting, I think.
Similarly, games are rarely if ever spectacular. They often construct very large topographies, landscapes a player can spend weeks exploring; but they almost never approach sublimity; they don't do what cinema can do on the spectacle front. One of the differences between playing one of the many Lord of the Rings video games and watching Jackson's trilogy is that in the former Middle Earth comes over as an extensive (even interminable) game-space, where in the latter it comes across as a spectacular, actual landscape. This is not to say that games are simplistic visual texts. Many of them are very unsimple, visually. But their complexity is all on the level of content: the multifariousness of elements to be played, the visual rendering, the complexity of the through-line of play. It is not complexity on the level of textual form.
And games do not make superstars, not in the sense that movies and, to a lesser extent, TV make superstars. The odds are you hadn't previously heard of Stampylongnose until I mentioned him seven paragraphs earlier; but even if you did happen to recognise the name, I'd be surprised if you could have put a face to it, or would have been able to place the guy's real name, Joseph Garrett. To repeat myself, though: he is one of the top ten most watched YouTubers in the world. He earns £260,000 a month doing what he does. That's over three million quid a year. A film actor who could command a £3 million fee for a movie would be a household name. This sort of visual culture is different to cinema. Of course, when something is this lucrative, celebrities want in:
Kim Kardashian has a video game ... Bjork has one. Taylor Swift announced a game on February 3, and Kanye West announced his own on February 11. These announcements have been met (at least in my circles) with a mix of disbelief and mockery.

'Kardashian's game promises to give you the experience of being Kim Kardashian.' Pass. These people will surely make money, but they will not get to the heart of the game experience. Gaming, as a culture, is not predicated upon celebrity in the way that film and TV are.
To be clear: I am not saying any of this to denigrate games and gaming. Video games are clearly a major cultural force, and many game texts are aesthetically fascinating, significant works of art. But I am saying that games are, formally and culturally, quite different to films and TV; and I think I am making that assertion in a stronger form than it is sometimes made. Broadly speaking, attempts to cross over from games to films have flopped; and cash-in games made to piggy-back the success of successful movies have a chequered history. The biggest games, from GTA to Minecraft, from Space Invaders to Doom to FIFA, from Myst to Candy Crush to Twitter, are their own things.
I used to believe that the key difference between cinema/TV and games is that the former are passive and the latter active. I no longer believe that. In part this is because the former really aren't passive: fandom's textual poachers engage with their favourite texts actively, inventively, remixing and reappropriating, cosplaying and fanfictionalising. And, contrariwise, I wonder if the appeal of games is less to do with the play-engagement and more with the simple immersion, something just as easily achieved—or more so—by watching play-through videos. YouTube, the second most accessed website in the world (after Google), only ten years old, was bought by Google back in 2006 for $1.65 billion; and although corporations and film companies do post their content to the site, the overwhelming driver of content is ordinary people, fans, gamers and so on. Hardly passive.
So when it comes to games, in what are kids immersing themselves? The specific game-worlds, of course, but something else too. Minecraft has been described as 'infinite lego', but it has the advantage over actual lego not only that it supplies an inexhaustible quantity of bricks, bombs, landscapes and so on, but that it is unconstrained by the quiddity of reality itself. Its famous low-res graphics emphasise this: it is not simulating a session with actual lego in the sitting room, it is establishing its own world. Its graphics are about separating its simulation from the belatedness of simulation. My favourite Minecraft story concerns the players who assemble imaginary components inside the virtual world of the game to build working computers: hard drives, screens, programmable devices, some of which are as big as cities. This is, of course, both cool and monumentally pointless. But it's more than that: it suggests a revised model of the old Baudrillardian precession of simulacra, which in turn, or so I am suggesting, explains the unique appeal of this mode. It is a logic of simulation closer to embedding than the old Pomo mirrorverse horizontal string of simulation.
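It is worth pausing on how those in-game computers are even possible. The principle (this is my gloss, not anything from Mojang's code) is that one Minecraft component, the redstone torch, outputs the inverse of its input; and inversion-based gates like NOT and NOR are functionally complete, so any digital circuit, up to and including a working CPU, can in principle be stacked up from them. A toy sketch, with hypothetical function names standing in for blocks of redstone:

```python
# Toy illustration of the principle behind Minecraft's in-game computers:
# a redstone torch inverts its input, and NOT/NOR gates are functionally
# complete, so arbitrary logic can be assembled from them. The function
# names here are my own illustrative labels, not game APIs.

def torch(a):
    # a redstone torch: output is the inversion of its input signal
    return not a

def nor(a, b):
    # two signals feeding a single torch behave as a NOR gate
    return torch(a or b)

def or_gate(a, b):
    # inverting a NOR recovers OR
    return torch(nor(a, b))

def and_gate(a, b):
    # De Morgan: AND(a, b) = NOR(NOT a, NOT b)
    return nor(torch(a), torch(b))

def half_adder(a, b):
    # the first step towards in-game arithmetic: add two one-bit numbers
    carry = and_gate(a, b)
    total = and_gate(or_gate(a, b), torch(carry))  # XOR built from OR/AND/NOT
    return total, carry

print(half_adder(True, True))   # 1 + 1 = binary 10, i.e. (False, True)
```

Chain enough of these half-adders (or rather, their redstone equivalents, laid out over city-sized tracts of game-space) and you have the programmable devices those players build.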
From my point of view, my son copying James Arnold Taylor copying Ewan McGregor copying Alec Guinness represents a linear chain of simulation, because I saw Alec Guinness first, murmuring 'an elegant weapon, for a more civilized age' and 'these aren't the droids' in his RADA-trained R.P. voice. Then, many years later, I saw McGregor doing that thing Scotsmen all think they can do, and affecting a strangulated posh-o English accent. Later still I became aware of the animated Clone Wars cartoons. So the whole thing lays out a trail that I can trace back. For my son's generation it is not like that: lacking this linear meta-context, simulation for him is centre-embedded. I'll say a little more about this, but with reference to an old science fiction novel rather than a work of professional linguistics: Ian Watson's great The Embedding (1973). Watson lays out the distinction between linear syntax and embedded forms. We can follow 'This is the maiden all forlorn/That milked the cow with the crumpled horn/That tossed the dog that worried the cat/That killed the rat that ate the malt/That lay in the house that Jack built'; and we can continue to follow it, no matter how many additional clauses are added. But syntactically embed the units and it becomes much trickier:
'This is the malt that the rat that the cat that the dog worried killed ate.' How about that? Grammatically correct—but you can hardly understand it. Take the embedding a bit further and the most sensitive, flexible device we know for processing language—our own brain—is stymied. [Watson, The Embedding, 49]

I wonder if the logic of simulation that my son is growing up with isn't more impacted, more structurally embedded, than the kind of simulation Baudrillard theorised. And I wonder if he isn't perfectly fine with that. All these copies are embedded inside one another in the paradoxical topography of the virtualised world. This may explain why these aspects of contemporary culture combine novelty with cultural ubiquity the way they do. They are constructing the consciousnesses that can enjoy them.
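As an aside, Watson's two structures can be generated mechanically from the same clause-material, which makes the formal difference vivid: the linear version appends each relative clause after the last, while the centre-embedded version stacks all the nouns first and defers every verb to the end, in reverse order. A small sketch (my own illustration, not anything from the novel):

```python
# Contrast linear (right-branching) and centre-embedded syntax, built
# from the same noun/verb pairs as Watson's malt/rat/cat/dog example.

pairs = [("the dog", "worried"), ("the cat", "killed"), ("the rat", "ate")]

def linear(head, pairs):
    # each clause follows the previous one; arbitrarily extensible
    s = head
    for noun, verb in reversed(pairs):
        s += f" that {noun} {verb}"
    return s

def embedded(head, pairs):
    # all the nouns pile up first; their verbs resolve only at the
    # end, in reverse order, so the reader must hold everything open
    nouns = " that ".join(noun for noun, _ in reversed(pairs))
    verbs = " ".join(verb for _, verb in pairs)
    return f"{head} that {nouns} {verbs}"

print(linear("this is the malt", pairs))
print(embedded("this is the malt", pairs))
```

The embedded version reproduces Watson's sentence exactly; add a fourth pair and it becomes effectively unparseable, while the linear version stays as readable as the nursery rhyme.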
There still is a real world, of course; however fractal the newer logics of simulation grow, we are still anchored somewhere. Where though? Whom is Alec Guinness imitating, anyway? I suppose a Baudrillardian would say: class, in an aspirational sense of the term. Which is to say: ideology. Which is to say: the force that drives the maximisation of profit and works to smooth out obstacles to that process. But two things occur to me, here. One is that this is not, whatever Matthew B. Crawford argues, something that video games culture has invented. Rather it is the horizon within which all culture happens; and the purpose of Guinness's painstakingly upper-middle-class accent was to smooth over the jagged realities of class and wealth inequality. To wave the hand, to convince us these aren't the realities we are looking for. Since those realities are (as Jameson might say) the whole reality of History, eliding them is an attempt to occlude history. Sometimes this is called 'postmodernism', hence Baudrillard.
But I'm not sure this is all there is. In terms of the 'real world' logic, hard as it is to gainsay, by which Star Wars: A New Hope (1977) was released before the Star Wars: The Clone Wars animated series (2008), the Baudrillardian chain of simulation is a straightforward thing. But, of course, in terms of the in-text logic, the old-man Obi Wan played by Guinness comes after the young-man Obi Wan voiced by James Arnold Taylor. I don't want to make too much of this, except to say that 'The child is father of the man' is a peculiarly appropriate slogan for a series, like Star Wars, so deeply invested in paternity and filiality. Rather I'm suggesting that, on the level of representation, the question of who is imitating whom where Kenobi is concerned is more complicated than you might think. Lucas wrote the part with Toshiro Mifune in mind; the Jedi were originally a sort-of samurai caste, hence Obi Wan's Japanese-style name. Guinness's urbane upper-middle-class English performance, clearly, is not 'imitating' Toshiro Mifune, except insofar as Lucas's script constrains him within that larger logic of cultural appropriation. But an advantage of thinking about the logic of simulation that applies here in terms of an embedded, rather than a linear, chain is that it leaves us free to think in more involved ways about precedence and antecedence, about simulation and originality, in this context. In this sense we might want to argue: what Guinness is simulating, in his performance, is simulation itself.
All of this relates most directly, I think, to the way video games are reconfiguring the logic of the dominant visual modes of contemporary culture. The three pillars on which Old Cinema was erected, and to which I have, at the beginning of this too, too lengthy post, attached names, tended to emphasise an intensified temporality: the more dynamic visual rendering of time, the excitement of spectacle, the cultic thrill of celebrity. But if the balance has been towards the time-image (think of all the time-travel movies; think of 'bullet-time'), then with games I wonder if we are not seeing a return to a more spatialised logic.
The kinetic montage and vitality of the cut; an unprecedented scope and scale of the spectacular; a new level and iconicity of superstar celebrity. On these three pillars was the monumental and eye-wateringly profitable edifice of 20th-century cinema erected; and lucrative, culturally important visual texts continue to be developed along these lines: of course they do. But the new visual cultures of the 21st century are starting over, with three quite different pillars. I'm not entirely sure what the three are, yet. But I'd hazard a sense of new, immense, intricate but oddly unspectacular topographies of the visual, what we might call the Minecraftization of visual culture, something much more concerned with the spatial than the temporal aspect of the medium. And I wonder about a new configuring of the balance between passive 'watching' and active 'engagement' as salients of the audience experience, with a new stress on the latter quality. And I wonder about a new mobilization of the visual, texts no longer a matter of public cinema or private TV, but disseminated into every tablet and phone and computer, literally in the pocket of everyone on the planet. How's that for embedding?
One of the main thrusts of Crawford's polemic is that this new digital culture is predicated upon an ideology of distraction. And this makes an immediate kind of sense: many people complain of a shrinkage of collective attention span, one that plays into the hands of those who would prefer to get on with despoiling the environment and maximizing social inequality in the service of their own profit. What kind of collective reaction can we muster when reading anything longer than 140 characters prompts eye-rolling, sighing and 'tl;dr'; when we can be distracted by an endless succession of cute cat videos and memes and other such metaphorical scooby-snacks? Maybe Crawford is right about this. But by way of counter-example, I can only point to my son. He is precisely as easy to distract as any 8-year-old. But he is also capable, when watching Dan TDM's sometimes immensely lengthy playthroughs of Minecraft, of paying close attention for literally hours and hours and hours. That's something.