
Pop Renaissance

Contributing columnist for monthly newspaper Short North Gazette
Jared Gardner

editor@guttergeek.com

THIS IS AN ARCHIVE
This page contains articles from 2009.

Visit Writer's Web site www.jaredgardner.org

Old Wine in New Bottles
Or, Why the Kindle Changes Pretty Much Nothing

OCTOBER 2009

My kids hovered in anticipation as I placed the needle in the groove and set the machine spinning. Their jaws dropped as the Jimmie Lunceford Orchestra’s recording of “White Heat” began to play. Mistaking their wide-eyed interest for musical appreciation, I prodded them.

“Isn’t it amazing?”

But they weren’t hearing the music: They were too busy watching the record spin at 78 revolutions per minute.

“How does it work?” my eldest asked.

“It’s magic!” my youngest declared.

It was only then that I realized this was the first record they had ever seen. After a tedious explanation as to how records worked (roughly half of which I made up), I played them the same song on iTunes. This time they heard it and started tapping their feet and then bouncing around the room in time with the music.

It wasn’t that the recording on iTunes was better. In fact, it was considerably worse: It was my first attempt to digitize one of the many 1930s jazz 78s my godfather had left me; and still unfamiliar with the procedure, I had pretty much clipped all the high notes, muddled the bass notes and turned everything else into something that sounded like it was played through a thick layer of wax paper. But the kids heard it the second time because, on the medium through which they had listened to recorded music for their entire lives, the technology became invisible (and of course there isn’t enough wax paper in the world to make Lunceford’s band sound anything less than awesome). My eldest even asked if I would put it on his iPod.

I tell this little domestic tale because it helps, I think, put some of the buzz and crackle about eBooks – and Amazon’s Kindle in particular – in some much-needed perspective. Ever since Amazon first released their electronic book device in late 2007, the digital cognoscenti and the cultural gatekeepers have been battling it out over what it all means for the future of the book, of literacy, of the universe as we know it. Predictably, opinions have largely come down on one of two sides. The Kindle is either completely “life-changing” (Oprah) and the beginning of “a cultural revolution” (Slate), or it is the end of everything that makes books special – the comfort, the smell, the “serenity.” “I can hear Johann Gutenberg rolling over in his grave,” one Bloomberg columnist moaned.

[Illustration: Millard Draudt]

Mine is a two-Kindle family, and I am here to tell you that both sides in this debate are predictably, tiresomely wrong. The Kindle is no more the end of print than digitization has proved the end of records. Books made of paper will simply, like records made of vinyl, become more specialized, marketed for particular users with particular needs. Long after it was assumed that the traditional record was gone forever, vinyl sales today are stronger than they have been in years. The reason is simple: Records sound better, certainly better than the compressed files stored on your iPod. Of course, much of the time most of us don’t notice, or we are willing to trade the loss in audio fidelity for the convenience of being able to take our music with us on the bus or to the gym. But a new generation of serious music aficionados is rediscovering vinyl’s wider range and warmer tones. Even my kids, once I had played the record of “White Heat” for the 100th time, began to notice the difference, no longer distracted by the magic of the mysterious record-playing machine.

But if vinyl is not dead, it is also never going to return to its former glory days. It is and will remain an item for audiophiles, collectors, DJs, and scholars (currently vinyl makes up less than 0.5 percent of total music sales). And that is OK. For most of us, most of the time, CDs and MP3s are more convenient, more portable, and accessible across a range of devices and environments.

Now, I’m no audiophile, having done considerable damage to my ears long ago listening to music (on vinyl) at excruciating volume. But I am, for better or worse, a professional reader: I read all day, every day, from morning to night, and when I am not reading, I tend to be writing about something I have read recently. My book collection continues to grow exponentially, consuming more and more of my house and office. The Kindle made sense to me for the same reason that CDs and MP3s did in the ‘80s and ‘90s: It offers me a way out of my growing feeling (not entirely paranoid) of being buried alive.

Almost immediately, however, it became clear that for me the Kindle was not going to make the book a historical artifact. First, much of what I read – from academic books to graphic novels – is simply not available on the Kindle. And even if these books were available, I would prefer to read them on paper for the foreseeable future. Academic titles depend heavily on citation and footnotes, and the paper book remains a far more efficient storage and retrieval device than the Kindle for bouncing around in a book – from page to endnote, to index, and back. And with only 16 shades of gray, the Kindle is a long way from offering graphics capable of displaying image-heavy texts. So my two largest libraries will continue to overburden the floorboards.

That left about 50 percent of my annual book purchases that I could plausibly move to the Kindle. I loaded up a pile of books (purchased with alarming ease from the online Kindle store), and set out to take my new overpriced reader for a test drive.

As with my kids listening to “White Heat” on the turntable, my first book was all but wasted on me (a very good book, too – In the Woods by Tana French). I just couldn’t see past the technology. And the technology is pretty impressive, even if the design looks like something out of the late ‘80s. Unlike traditional computer screens, the Kindle essentially “prints” each page on the surface of the screen, making the experience very much like reading paper (a sort of gray-green paper). Accustomed to daily eye strain from reading on the computer screen, I was delighted to discover that I could read the Kindle for hours – even in the sun – without running to the medicine cabinet for a handful of Advil.

And then I realized that by my second book I had been reading for hours without remembering that I was reading a Kindle at all. It was a splendid book, The Little Stranger by Sarah Waters, a wonderful riff on the traditional haunted house story – and while I can perfectly envision the details of the house in which the adventures took place, I have no memory of the technology in my hands while reading it. In other words, a good book did what a good book should do: It made the technology invisible, irrelevant.

After a few months with the Kindle, the number of books flowing into the house has if anything increased. Only now, about half of them are digital. Yes, I know all the arguments against the eBook. For one, you can’t lend your eBooks, thanks to the restrictions of the digital rights management (DRM) software Amazon uses to control digital piracy. That’s all right with me: I hate lending books anyway, and now I have a good excuse. Another more meaningful concern involves format. While we know the printed page will be usable in 50 years, what are the odds that the .mobi format used by the Kindle will be accessible in even half that time? Not great. Which means manufacturers will get me to buy Dickens and Proust (maybe even Sarah Waters and Tana French) all over again, just as the music industry has managed to get me to buy the White Album three times now.

It is not as if the books on my shelf will last forever. I am old enough that many of my paperbacks are falling apart as I reread them, and only last week I repurchased Evelyn Waugh’s Vile Bodies (sadly, not available for the Kindle) so that I don’t have to keep picking up the falling pages.

After all, what makes the books (or music) last is not the medium on which they are printed or the technology on which they are read. It is the way they get into our bloodstream, like “White Heat,” forcing us to tap out a beat we didn’t know we had in us, or fully inhabit a haunted English manor house we have never seen in real life. The Kindle, with its dull grey screen and monotonous Caecilia font, is the equivalent of the compressed MP3 on your iPod: You trade some of the warmth of the “original” for other conveniences, other pleasures. But like vinyl records, the book on paper offers something special for the discerning reader, something that will not be lost even if it ultimately becomes of interest only to a particular kind of reader who appreciates the loving care of a typesetter explaining the choice of a 10-point Linotron Galliard (my personal favorite) or a book cover designed by Susan Mitchell or Chip Kidd (again, my faves).

My Kindle will no doubt one day soon be sitting next to my first-generation iPod, an expensive and not very attractive paperweight, replaced by something prettier, faster, more convenient. And that is OK. I have figured out how to transfer my 78s to iTunes, but I still keep my record player out for those times I want to hear both the pops and the pop that only vinyl can do. I will happily buy the White Album at least two more times before I die, I am quite certain, and I will buy Vile Bodies at least that many times. The song, and the book, remains the same.

Film in the Age of the Ultimate, Endless Cut

AUGUST 2009

The other day I had a conversation with my 4-year-old niece about The Wizard of Oz, my favorite film, which I had just had the good fortune to share with her. “You know what I liked best on that DVD?” she asked me. “When the Wicked Witch got melted by Dorothy.” It is of course a great moment in the film (although my personal favorite remains Dorothy and Scarecrow’s encounter with the curmudgeonly apple trees: “Are you saying my apples aren’t what they ought to be?”). But what struck me most about the encounter was that for her this classic was classified as a DVD. I looked to her older cousins with that condescending amusement uncles put on at such moments, but to my surprise they picked up her language without missing a beat. Each of them selected their own favorite scenes in the “DVD,” and then they settled down to arguing the various merits of their choices without ever stopping to notice the pained look on my face.

I should not have been surprised. With increasing frequency, I have encountered similar moments with young people quite a bit older than my niece and her cousins. Of all the changes I have witnessed in 20 years of studying and teaching film, the introduction of the DVD is by far the most significant. When my students increasingly speak of films as DVDs, they are not simply changing the name to reflect current technology; they are in many ways describing a radically different thing than I spent all those years studying and teaching: They are naming the future of film by its proper name.

This is why the DVD is such a different thing than its nearest predecessor, the video cassette. VHS was a significant innovation in how audiences interacted with film, as I mentioned in last month’s column. Quentin Tarantino, for example, came of age working in a video store, which ultimately served as his film school, and was among the first of a new generation of filmmakers to have access to any film he wanted, at any time. The style of filmmaking he developed as a direct result of this unprecedented access – a mélange of allusion, pastiche, and outright plagiarism – has proved very influential. But “the movies” as an institution did not change, and for all their ubiquity throughout the 1980s and ‘90s, one never talked about getting together to watch the “video” of the Godfather.

The reason is simple: video did not make the original film better, bigger, or even substantially different – except insofar as it often took widescreen formats and shrank them down to what was then the standard format for television screens, 4:3. In fact, at the insistence of the Directors Guild of America (DGA), a disclaimer – “This film has been modified from its original version” – was required on every cassette. The version you were watching on your TV was most definitely not the original.

As a graduate student I spent much of my indentured servitude carrying around heavy reels of 16mm film and learning how to operate incredibly finicky projectors prone to burst into flames at odd moments. We had video cassettes, but it was understood that they were good for reference only – to show a clip during a lecture or to review a scene later to double-check an observation from screening notes. After all, the 16mm rentals we ordered for our classes were already a diminished thing from the 35mm originals. But at least they had the proper aspect ratio and looked – and this was the key – as the filmmakers intended.

Everything changed in the late 1990s with the DVD. The 16mm rental companies I worked with all those years have gone the way of the wainwright and the haymonger. I have relied exclusively on digital projection in my film classes for a decade now, my early qualms quickly overcome by the convenience (and safety) DVDs afforded. Further, the DVD allows something that neither 16mm nor video ever did: perfect stills for careful and close analysis.

And yet, even though I have been by necessity living on the bleeding edge of these changes for many years, I was nonetheless caught by surprise by an ultimately seismic shift in my students’ attitudes toward what constituted the true and “original” film, the proper object of study for cinephiles. It became most clear to me this past spring with the release of Watchmen, which was by necessity a huge event for the 100 or so students gathered with me to study the phenomenon of film and comics (see last month’s column).

For those who missed the hype, Watchmen is an adaptation of an extremely influential 1980s graphic novel by Alan Moore about middle-aged superheroes confronting a murder mystery and an impending apocalypse. For a range of reasons too complicated to go into here, a film version of Watchmen got stalled for years, and then decades, and so by the time of the film’s release in March of this year the pop culture world was abuzz, and destined to be pretty well disappointed (after all, the film was directed by the charmingly untalented Zack Snyder).

The film was released shortly before the class began, which meant that for the first time I would not have a DVD to screen in class. Instead, the students were required to see it on their own, a requirement easily met since all of them had already seen it. But quite early in our discussions of the film it became clear that many of the students did not feel as if they had seen it at all. As I raised some questions for debate about the adaptation in the film, students repeatedly sought to bring the debate to an end by insisting that we couldn’t argue the merits of Snyder’s film since we had only seen the theatrical release. And that, they insisted, was most certainly not the thing itself. Any real discussion of the film’s accomplishment would have to wait until the DVD release, when we could see it properly and as the director intended.

Again, I should not have been surprised by this. In recent years I have been encountering more frequent objections from students to my refusal to screen the longer, DVD versions of films (often advertised as “director’s cuts,” a nostalgic appeal to the idea of the director as maverick visionary in a studio of corporate suits and numbers-crunchers). When screening Apocalypse Now, for example, several students were quite annoyed that I showed the version I had seen in the theaters in 1979 and not the version Coppola released for DVD in 2001. Aside from the fact that the re-edited film was 50 minutes longer and could no longer be screened within the confines of my class, it was also, I argued, not the “original” film. But for younger film fans raised in the DVD age, both arguments fell very flat.

The very notion of “original” or “true” no longer makes sense to today’s film fans. Take, for example, the release of Watchmen on DVD this month. Advertised as the “director’s cut,” the two-disc release extends the already bloated 162-minute theatrical running time by 24 minutes. In addition, the DVD offers a music video by My Chemical Romance and a series of “video journals” with behind-the-scenes looks at production, design and special effects. And with the ability the DVD gives to freeze and zoom, I am able to appreciate (as I was not in the theater) the layers of design and detail everywhere in the film, the obsessive love Snyder and his crew devoted to re-creating the alternative present Moore imagined a quarter century ago.

But there is something else included in the DVD case that is particularly striking: a coupon for $10 off the “Ultimate Collectors Edition,” which will be released in December. This next edition will be five discs, including full director’s commentary and the addition of the Tales of the Black Freighter story. The Black Freighter story was an integral part of the original graphic novel, offering a subtle commentary on the main story via a particularly gruesome pirate story comic book being read at a newsstand by a young man as the world was literally coming to pieces around him. In conjunction with the original theatrical release, the animated Black Freighter story was released on a separate DVD, along with a mock-documentary based on another feature of the original graphic novel, “Under the Hood,” a tell-all exposé written in retirement by one of the original masked superheroes. The five-disc version promises to bring all these moving parts together into one giant DVD, one whose viewing will consume hours of a viewer’s life and be absolutely un-screenable in any theater or classroom.

I had heard that this still-longer version of the DVD was forthcoming, but I was frankly shocked to see it advertised prominently inside the DVD case. Surely consumers who had just shelled out good money for their two-disc version would be outraged to discover that less than six months later an “ultimate edition” would be coming?

Apparently not. The fans I have talked to, including many who are belligerently disappointed with the adaptation, plan on purchasing both versions of the film in order to watch its continued evolution. Instead of seeing the theatrical release as the original and true film, as film fans and scholars did for a century, the theatrical release is increasingly understood as a first draft of a work in progress.

This is also how the film industry sees theatrical releases, and for good reason. After all, even with a decline in DVD sales in the past year or so, the vast majority of studio revenue comes not from a film’s theatrical run but from DVD sales and rentals. We no longer consume most of our films in movie theaters, nor will we again. We watch most of our films in living rooms in digital formats, especially DVD, and the studios have begun making movies specifically designed with this reality in mind.

There is, for example, the phenomenon of the “puzzle film” or what film scholar Graeme Harper has called the “new cinema of complexity,” films like Christopher Nolan’s Memento or David Lynch’s Mulholland Drive. These films are made with the understanding that they will be watched more than once by viewers in search of clues and insights almost certain to be missed the first time through. When I screened Memento a couple of years ago, one of my students asked me if I was going to show the “unlocked” version of the film. When I stared at him blankly he told me how the “limited edition” DVD offered a series of puzzles that, once completed successfully, allowed the viewer to watch the story (famously told in reverse-chronological order) in “normal” time. “But why would you want to do that?” I asked. “To see what the film looks like that way,” he replied.

And why not? As I discussed in April’s column when considering pop music, the days of the “true” or the “authentic” version are over. Why should the same not be true for film as well? For film scholarship, of course, this presents a series of conundrums not likely to be solved in my lifetime, but with luck most film scholars are film fans first and foremost. And why should we not want to experiment with re-mixing our films? In many ways, the inclusion of “deleted scenes” and “alternate endings” in DVDs, a feature that has become all but ubiquitous in the last decade, is an invitation to viewers to go make their own editorial decisions, their own final cuts.

When I next teach my course on comics and film, which version of the Watchmen film will I screen? My easy answer of always choosing the theatrical release will no doubt start to look increasingly absurd in an age when the theatrical release is little more than an advertisement for the vastly longer and more interactive digital versions to follow. And in fact, long before the official “ultimate” release, fans are already planning their own edits, debating which scenes should be included, which sounds to edit in or out of the soundtrack, discussing techniques for editing Nite Owl out of a climactic scene so that it more properly resembles Moore’s script, etc. What if, as is not entirely implausible, the fan edits are in fact better? I strongly suspect that a generation from now “film studies” will involve sitting students down at workstations with digital films – including deleted scenes, alternate endings – and asking them to produce their own edits of the film. I am not sure how I feel about such a future vision of film study, but I am sure it is coming. The genie is out of the bottle. The truth is film as it has traditionally been defined and studied is dead. No longer do our DVDs have the disclaimer “This film has been modified from its original version.” Instead, the versions we see in the theaters should probably be the ones with the disclaimer “This film is in the earliest stages of being endlessly modified into its ‘ultimate’ edition.”

Oh, and The Wizard of Oz, which celebrates its 70th birthday this month? In September it will be released in a new edition on Blu-ray, complete with 5.1 Dolby Digital audio. Why not? Still, when I screen it next in class, I intend to take advantage of the option to watch it with the original mono audio track. Don’t tell!

Comics in the New Millennium
Part II: When Movies Met Comics

JULY 2009

In the May column, I considered the rising visibility and prominence of what has been historically one of the most neglected and denigrated pop culture media, the comic book. But I deferred addressing what is perhaps the most notable sign of the changing fortunes of comics: the explosion of comics-based films in the past decade. My justification for this deferral was a question of space, but in truth it was also because the question of film’s turn to comics has more to say about film than it does about comics, opening up a whole new set of questions. After ducking those questions last month while finishing up a course on the very topic, I feel prepared at least to start formulating some potential answers to the big question: Why?

When the first wave of comics-based films began appearing toward the end of the last decade, many in both film and comics assumed (and hoped) that it was only a trend. After all, Hollywood cycles through genres at the speed of light, and even some of the cycles best remembered today (the gangster or horror films of the 1930s, for example) lasted only three or four years. Beginning somewhat quietly with The Crow (1994) and Men In Black (1997), and then accelerating wildly at the start of the new millennium, Hollywood’s fascination with comics is now well over a decade old and showing no signs of abating. While estimates vary according to which “insider” is dishing, a good guess is that there are roughly 200 comics-related films in some stage of development in Hollywood at this moment. While only a fraction (say, 1 in 10) of those will ever see the light of day, the number demonstrates the growing relationship between the two industries. If one needs further proof, stop by the Comic-Con in San Diego later this summer: an event once reserved for the geekiest comics fans and chain-smoking professionals, the Con is now chin-deep in producers, film stars, and all the attendant glitterati.

For aficionados of both film and comics, this relationship is at best a mixed blessing – but most often a source of deep concern about the future of both comics and film. For many film critics, the “comic book movie” is another nail in the coffin of film art, the ultimate triumph of style over substance, spectacle over story and character. And while one might imagine that comics fans and critics would delight in the greater exposure film gives to their historically neglected medium, many in the world of comics fear that the increasing partnerships with the film industry have doomed every new comic to being essentially a Hollywood pitch. In both cases, what is feared is that comics are becoming more like film, and movies more like comics.

In many ways, those inclined to be anxious about the purity of film or comics have reason to be worried. A couple of months ago, when visiting Columbus to give a lecture at the Wexner Center, Alex McDowell, the production designer for such influential comics-related films as The Crow and Watchmen, generously agreed to answer questions from my students. Predictably, given how many of them had aspirations to make films and/or comics on their own, the discussion soon turned to the inevitable: “What advice would you have for someone trying to break into film or comics today?” His response was emphatic: Don’t try. Don’t plan on a career in film or comics because film and comics as we know them and the industries that have produced and distributed them for the past century are changing so fast that they will be unrecognizable in a generation. Instead, he advised, think about pursuing what he called “narrative media,” the ever-changing convergence of storytelling forms across multiple media that will shape creative and professional opportunities in the years to come.

This was the first week of class, and I could tell the students were baffled and a bit disappointed by his answer. But he was giving them the best advice on the subject they could hope to hear. While most critics assume that the pursuit of comics properties is a sign of a creative vacuum in Hollywood, there is something deeper at work here. In fact, what has emerged in the last generation and accelerated since the rise of the digital age is more like a return to origins for both film and comics. Far from being an aberration or dilution of their “true” formal properties, the argument could just as easily be made that the convergence of film and comics is a restoration of the genetic links that once bound the two media together when they emerged as the new media modes of storytelling in the early years of the 20th century.

The first comics-based film was the 1903 film Happy Hooligan Interferes, a direct adaptation of Frederick Burr Opper’s Happy Hooligan – one of the smash hits of the brand-new newspaper comics supplement. The film was directed by Edwin S. Porter, known to film historians as one of the great pioneers of narrative film. Along with other filmmakers of this first generation, Porter would produce dozens of adaptations of famous comic strips, including my personal favorite, a remarkably creative adaptation of Winsor McCay’s famous strip, Dream of a Rarebit Fiend (1906). The reason filmmakers were turning to comics materials a century ago was clear: sequential comics had found ways to create multi-dimensional characters, tell complex stories that captured the fragmentary and chaotic energy of the new century, and attract huge and consistent audiences.

These earliest partnerships between comics and film remind us that film wasn’t born telling stories that followed classical Hollywood rules – those rules had to be developed (largely imported and adapted from earlier storytelling modes, such as the novel and theater), codified, and policed. The earliest films, for a generation following the birth of the form, were something else entirely, what Tom Gunning has influentially referred to as a “cinema of attractions.” These early movies privileged audience interaction, spectacle, and the fragment over the passive voyeurism that has characterized the experience of watching movies for the past century. In other words, these early films were very much like comics.

It is important to keep in mind how films were actually watched a hundred years ago. There were two primary technologies in 1906 by which a film fan might watch a movie, and neither of them looked anything like the multiplex. You could stop by your local arcade and watch it on a Mutoscope – a kind of mechanical flip-book, peep-show device that allowed you to control the rate of “projection” – or you could swing down to the new-fangled nickelodeon, the first movie theaters that had just come on the scene in 1905. But while the films were silent, as the saying goes, the experience of going to a nickelodeon was anything but: a raucous, interactive space, where people came and went, commented loudly on the films, and hopped around town greedily consuming everything they could find (“nickel madness,” as the addiction was known at the time).

In other words, in 1906, people consumed their movies pretty much the way they consumed their comics – in bits and pieces, serially, compiling their own collections out of the offerings at the city’s competing nickelodeons or newspapers. These early films weren’t really “beginning/middle/end” narratives like the later silent features: they were fragments, 100-foot pieces compiled by the nickelodeon operator into longer programming, with viewers hopping from nickelodeon to nickelodeon making their own collages of the fragments. They were creating their own “collections,” just as the early comics readers were cutting and pasting their favorite strips from the papers into scrapbooks. Ten years later, a completely different regime was in place for narrative film: the Hollywood studio system, the motion picture palace, the star system, continuity editing, and all the other features that make film “magic.”

It was in magic that the new studio moguls decided their fortunes lay as they unpacked their belongings in a little cowtown known as Hollywood one sunny day in 1914. Instead of inviting viewers to participate as active critics, fans, meaning-makers and editors, as the earliest technology had encouraged them to do, the industry now moved toward longer, novelistic narrative, seamless ‘invisible’ editing, and an increasingly elaborate set of rules and disciplines governing the viewing of film, rules that put audiences at a greater and greater remove from the thing itself and allowed the magic to work. As I discussed last column, such distancing of the reader is precisely what comics, always bound to the fragmentary and collaborative nature of the form itself, could never do.

Of course today, everything has changed. But this change had little to do with comics. This shift has been the result of a series of technological developments that transformed the ways in which we consume films. It began in the early 1980s with the proliferation of home video. For the first time, the average person could watch a film on their own schedule, could even rewind and fast-forward. It is a change we now so fully take for granted that it is hard for those born after 1980 to comprehend it. When I tell my students how my generation depended on revival theaters for any opportunities to see movies after their initial run, they look at me as if I were describing daily life before indoor plumbing.

Still, the impact of the VHS was nothing compared to the earthquake brought about by the DVD at the end of the 1990s (not-so-coincidentally at precisely the same time as the first of the current ongoing wave of comics-based movies). Now, not only can movies be watched on one’s own schedule, but individual frames can be frozen and zoomed for infinite dissections; films can be broken down for frame-by-frame analyses; movies can even be re-edited, watched out of order, remixed. Once coupled with the high-definition home theaters and powerful home video editing software that are increasingly common in American households, all the rules of film viewing are now officially extinct (although it must be confessed that the field of Film Studies has not yet woken up to this fact).

While exact figures are hard to come by, traditional theatrical film viewership now accounts for only about 20 percent of total movie revenue (compared to around 100 percent in the golden age of the Hollywood studios). I have students who claim to have never been to a movie theater who have nonetheless seen more films in their 20 years than I have in my 40-odd. They watch films on their laptops, on TV screens in living rooms, on their iPods. They chat (in person, online) about them with their friends while they are watching them, they analyze them in painstaking detail on their blogs, they loop them again and again on their PSPs. In other words, film has returned to the world it left behind in 1906: the DVD, the iPod, and the home theater are in some ways the 21st-century version of the Mutoscope and the nickelodeon. (Actually, they are something more or at least very different, but that is a topic for next month’s column, when I will look in detail at the new release of the DVD of the recent comics-based film, Watchmen).

Having discovered that viewers are interacting with their films in these ways, Hollywood has, not surprisingly, returned to its own fork in the road, to the time when comics and film were both young and very much in love. It makes sense that Hollywood, which has survived everything from the transition to sound to the rise of television and the breakup of the studio system, would be ahead of the curve on this one. To court comics readers as they have been doing aggressively in the past decade might seem to make little business sense at first glance. After all, a best-selling comic book sells just a couple of hundred thousand copies a month, nowhere near the number needed to guarantee a successful return on a motion picture. But what Hollywood is after is not comics readers’ dollars (although they are happy to accept them) but an understanding of the ways in which they read. Because, going forward, this is how we will be reading our films. As Alex McDowell said to my class, many of the things he was most proud of in his production design for Watchmen would not even be visible until the DVD – until, that is, the remote control gave the viewer the power to stop time and study each frame in excruciating detail, finding narrative elements that are not bound by the inexorable forward movement of the projected film. The DVD has made us all comics readers now, even if we never choose to pick up a comic book.


Comics in the New Millennium
Part I: The Origin Issue

MAY 2009

Over the next two months, I will explore what is one of the most striking aspects of contemporary popular culture: the sudden visibility of comics and the graphic novel. The form (whether comic strip, comic book, or graphic novel) is not much more than a century old (unless you accept a more flexible definition of the form that includes stained glass windows or Aztec codices), and perhaps its most consistent attribute over this long century has been that comics have remained in the gutters of American culture. Now, all of a sudden, comics and the graphic novel seem to be everywhere. Bookstores that used to sneer when I asked whether they carried graphic novels now devote prime real estate to these very same books. And even more surprisingly, comics have been gaining respect from institutions like the Pulitzer Prize or the New York Times, historically devoted to preserving an elite culture against the encroachments of the low. And comics have always represented the lowest of the low, so much so that we routinely use “comic book” to express the shallowness of a story or character.

What has changed? Are comics better than they were a generation ago? Well, yes, they are. But this improvement in quality is not sufficient to explain changing attitudes toward the form, since remarkable depths of talent, quality, and innovation existed in comics for decades without attracting the notice of the cultural gatekeepers. What has made a form that came of age in the late 19th century suddenly relevant and meaningful to so many readers in this new century? I don’t claim to have all the answers to these questions, but I believe they are questions whose answers will help us begin to imagine what our popular culture will look like in this coming century.

Like many who grew up with comics, I put them away when I was told it was time to set aside childish things, and in my decade away I heard little to convince me that I was missing much: the industry collapsed in a sea of excess and speculation, and my prized collection, which I had once fantasized would help put my kids through college, was suddenly all but worthless. Liberated from any such fantasies, I opened up several boxes from my childhood one weekend in the late ‘90s and spent the day wallowing in nostalgia. By the next day, however, my curiosity was piqued: What had been happening to comics while I was gone?

A lot had happened, at least at first glance. Neither the racks of manga nor the videogame tie-ins were familiar, and the array of new titles and independent publishers made me feel like Rip Van Winkle. But for the most part the atmosphere resembled the eighties: new titles loudly displayed, plastic-wrapped back issues on the wall, and the usual assortment of boys, girls, women and men looking simultaneously guilty and (finally) at home. I quickly settled in and began unpacking.

For the next two years I read everything I could find. I had returned at just the right time to encounter the work of a new breed of independent cartoonists – articulate grandchildren of the underground comix of the early 1970s – and the mature work of a generation of pioneers who had completely transformed the once convention-bound world of superhero comics. And so I read. And read, waiting to hit the bottom of what I was certain must be a very shallow barrel. I still haven’t found the bottom.

As Daniel Clowes (one of those remarkable new guys I encountered for the first time in the late ‘90s) has argued, comics have a unique ability to reinvent themselves completely every fifteen years or so. Writing in 1998 as I was reading his work for the first time, he prophesied, “The next creative epoch is due to begin in 1998; are we on the brink of rapture or Armageddon?” Although Clowes is being at least partly playful, the fact that he sees the new “epoch” in the history of comics – an epoch in which he himself is playing a central role – in such millennialist terms is not entirely disingenuous. Comics are always facing apocalypse and always rising again from the ashes as something entirely new and yet deeply connected to its own history.

Part of what has allowed comics to reinvent themselves for each generation is the fact that it is a medium with no dominant conventions – stylistic, grammatical, narrative – other than a handful of basic elements: sequential panels and a highly charged interdependence between word and image. It was this combination of word and image that especially marked the form as debased when the comic strip was newly arrived in Sunday papers around the nation. The idea, still deeply felt in many corners, that text and image do not belong together and cannot communicate sophisticated ideas together goes back to 18th-century aesthetics and the forging of distinct academic practices for the making and criticism of painting and literature. By the 19th century when the sequential comic was born, the combination of word and image was understood to be inherently un-aesthetic, un-academic – in a word, childish (which is why children’s books throughout the 19th century were the primary form that continued to combine word and image).

Of course today the traditional notion that image and text cannot be meaningfully combined is belied by daily experience – by print and television advertising, and most spectacularly at the end of the 20th century by the Internet, which has flooded us with text and image in dizzying combination (framed by the dimensions of the screens we carry about with us each day). It is not surprising that the “renaissance” of comics coincides with the development of the Internet and the increasing need to learn how to navigate complicated combinations of word and image in our everyday lives. The comics form, recovering from the ashes of its own commercial implosion in the early 1990s, was reborn as an “art” in the 1990s at a time when the need to create and read word and image together was both more pressing and less culturally despised than ever in the century-long history of the form.

Because of the unique way in which it brings together different systems of communication (words, symbols, icons, images) into a crowded field where meaning is both collaborative and competitive – between frames, between reader and writer – comics have become a preeminent form for those interested in developing and interrogating methods of reading the everyday world. It is in the space between the frames – the “gutter” as it is called in the trade – that the work of bringing reader and writer into collaboration takes place. As the critic and cartoonist Scott McCloud has influentially put it in Understanding Comics, the reader must always bring “closure” to the space between frames. Even in the most simple of narratives, the reader must actively participate to fill in the space between the frames with the “missing” action and connect the words to the image.

This collaborative aspect of comics also contributes, I believe, to the increasing relevance of the form in the 21st century. Arguably the most significant change in popular culture in the digital age, as I touched on last month, has involved a diminishing distance between creators and audience. And with that diminished distance has come a growing demand for interactivity, for spaces and places in which the audience can become active creators in their own right. Comics, an inherently collaborative and interactive medium, had been doing this for decades before anyone even thought of a personal computer, let alone YouTube. The new comics have set about reinventing themselves by building on the direct relationship to readers that Marvel (and E.C. Comics before them) pioneered, but today’s comics grant even more power and freedom to the reader. Over and again, the contemporary comic acknowledges the reader as a co-creator (sometimes with open arms, sometimes with frustration), and it is in the gutters, formal and cultural, that this relationship is forged.

Of course, I have not discussed what is in many ways the most visible sign of the growing influence and marketability of comics: the seemingly endless stream of adaptations of comics and graphic novels that have found their way to the big screen in the past decade. Next month, we will look in detail at the history of this relationship and its consequences – the good, the bad, and the ugly – for the future of comics, movies, and popular culture.

The Best Album of 2008 contains not a single note of original music
... and I feel fine


APRIL 2009

Greg Gillis (aka Girl Talk) performing and being carried by the crowd.
[From the film RiP!: A Remix Manifesto, 2009.]

For the first time ever, I was genuinely shocked by my own answer to the predictable year-end question: What was the best album of 2008? This past year was a very good year for pop music, with some terrific new artists. There was, in fact, hardly a wrong answer in the bunch. I could have, for instance, picked Santogold, Bon Iver, or Fleet Foxes, all of whom released terrific (and remarkably different) debut albums in 2008, and all of which I am still listening to well into 2009. Or I could have played it very safe and gone with the vastly over-hyped and overly hip TV on the Radio, which probably graced the top of more top-10 lists than any other band at year’s end. Any of those would have been received with the usual sage nods and counter-arguments in that year-end “best of” ritual we all love and dread in equal measure.

Instead, as I thought back to my experience of listening to music in 2008 and thinking about how that music reflected my sense of the year, I realized that the album I have listened to the most and returned to most often was in fact an album by an artist who doesn’t play a note or sing a word. In fact, my own personal pick for best album of 2008 was also undeniably the most unoriginal album of the year: Girl Talk’s Feed the Animals (Illegal Art, 2008).

Girl Talk is the stage name of a Pittsburgh-based DJ named Greg Gillis. Actually, Gillis would strenuously object, and for good reasons, to my identifying him as a “DJ” (to make the point, he even markets T-shirts with the slogan “I’m Not a DJ”). At first glance, his attempts to distance himself from the traditional work of the DJ – someone who orders and overlaps other people’s music for club or radio environments to create an appropriate and crowd-pleasing mix – seem odd and somewhat pretentious. After all, Gillis’s albums are all entirely made up of other people’s music. His latest (and his best), for example, combines more than 300 different samples from popular music hits (and misses) from the last several decades. If he is not a DJ, as he insists, what is he?

And that is the big question I found myself pondering when I announced my pick for best album of 2008 to incredulous stares. “But he is not even a musician!” “He’s just a DJ!” Actually, no one said those things, but they were thinking them. Well, probably they were just thinking: “He’s old, what does he know?” Or “I bet he just liked the album because it samples so many songs from the ‘70s and ‘80s.” All of which is, of course, true. I am old, and so many of his sources – from the Beach Boys through Tone Loc, from “Hey, Mickey” to “Come On Eileen” – are part of the soundtrack of my younger years. And I am old enough that many of his contemporary sources were either unfamiliar – Dolla, Rich Boy – or hopelessly puerile (“Soulja Boy,” “Low”). But not so hopelessly, it turns out: in the hands of Gillis, the infuriating banality of Soulja Boy becomes something else entirely once blended with Thin Lizzy’s “Jailbreak” and ELO’s “Don’t Bring Me Down.” And that is where the magic lies.

In fact, compelled by some of the nostalgia Gillis’s ‘70s and ‘80s archive inspired, I went and tracked down many of the original tracks he samples, imagining that if I was enjoying Feed the Animals so much, surely the originals would be even more satisfying. Not so, as it turned out. Almost universally, Gillis made even my fondest musical memories better than they were the first time around (the one notable exception would be The Band’s “The Weight,” which became decidedly smaller, shriller in the album’s remix). As for the source material newer to me, in downloading a gaggle of the songs from iTunes, I discovered a similar phenomenon. I didn’t like any of them half as much as I did once Gillis chopped, mashed, mixed and deconstructed them in Feed the Animals.

Admittedly, sometimes Gillis is just plain showing off, and with his skills he is perfectly entitled to a few virtuoso moments. The mixing of Busta Rhymes’ “Woo Hah!!” with the Police’s “Every Little Thing She Does is Magic” is so improbable that you cannot help but gasp when you hear for the first time how perfectly it works. But as his chosen stage name would suggest, Gillis does not entirely fit in with the typical macho DJ culture; for the most part he does not put his cleverness and technical skills center stage.

Of course, for every source we recognize, there are also those that are beyond the recovery of even the most devoted fans (the music blogosphere was tracking down Girl Talk’s sources before he even finished uploading the original tracks). That is because in today’s digital world Gillis can break songs down into the musical equivalent of the sub-atomic level. A kickdrum here, a bass riff there, even the echo from the recording studio can find its way into his library of samples and effects, transplanted onto another layer from a completely different song. And suddenly, concerns about “originality” and about the “authentic” seem quite beside the point.

In truth, concerns about originality, authenticity, and the vision of the Artist are fairly short-lived in the longer history of Western culture. In the 18th century, for example, “mashups” and “remixes” were everywhere. True, they didn’t use those terms, but they did borrow from each other, create new adventures for characters from other people’s novels, and cut and paste each other’s works in a popular culture that did not prize originality or authenticity as much as it did the creative edit, the brilliant remix. It was not until the Romantic movement at the end of the century and the international copyright laws that came with it that we inherited our fetish of the Work of Art as the product of the Solitary Man of Genius.

Two centuries is a pretty good run for the artist with a capital “A.” If I were a betting man, I would prophesy that the 21st century will belong to the editor with a capital “E.” Lawrence Lessig has dubbed the phenomenon “Remix Culture” – the rise of cultural productions entirely generated from the raw material of other people’s stuff. Lessig, a lawyer and expert in intellectual property, has long been a champion of unleashing the creative potential of remix culture. I have been reading his thoughts on the topic for years, and had remained on the fence in the coming culture wars until I discovered Girl Talk (which happened, coincidentally, just as I was reading Lessig’s latest book, Remix: Making Art and Commerce Thrive in the Hybrid Economy [Penguin, 2008]). As he writes about Girl Talk in this book, “This is not simply copying. Sounds are being used like paint on a palette. But all the paint has been scratched off other paintings.” “So,” he asks, “how should we think about it? What does it mean, exactly?”

For Lessig, it means, at once, something radically new and something as old as modernity itself. After all, the idea of intellectual property, an 18th-century invention, was never intended to lock up ideas and culture from public remixing. Quite the contrary, in fact. The original copyright law preserved intellectual property for only 14 years and then returned it to the public from which all ideas were understood to originate, to which all ideas ultimately were understood to belong, and from which all new ideas – the progress of culture itself – would be generated. Our modern practice of locking up intellectual property for decades, lifetimes, even in perpetuity (try reprinting the poems of Emily Dickinson without Harvard’s permission, for example), was never envisioned by those who first developed the modern concept of copyright. Even Ben Franklin, Founding Father Extraordinaire, never patented his many inventions. Instead, he believed that “as we enjoy great advantages from the inventions of others, we should be glad of an opportunity to serve others by any invention of ours; and this we should do freely and generously.” Two and a half centuries ago, Franklin “stole” other people’s ideas and improved on them, but he also “gave away” his own best ideas. And he got very, very rich in the process, a refutation of the all-too-familiar argument that without our copyright laws no artist could survive.

I have friends who work in popular culture industries who are convinced that digital “piracy” has undermined their ability to make a living, and I am in no position to say they are wrong. But is it possible to imagine another world, another approach to ideas and culture not organized around “property,” one that would allow them to thrive as Ben Franklin did in his day?

As Franklin says, we make nothing that is not indebted to the inventions of those who came before us. In his recent biography of Joseph Priestley, The Invention of Air (Riverhead, 2008), for example, Steven Johnson reminds his readers repeatedly of the many ways in which Priestley’s famous discovery of oxygen was in fact simultaneously a moment of profound individual inspiration and a remarkable (conscious and unconscious) act of collaboration (including with his good friend Franklin). In the end, of course, we want to believe in the solitary Genius, and so Priestley got the credit, even though, as Johnson points out, in many ways he completely misinterpreted what it was he had discovered. Perhaps more importantly, Priestley himself would have much preferred to share the credit with all who came before and after.

That Steven Johnson would have chosen Priestley as the subject for this biographical project is itself telling. After all, Johnson is the man who, in 2005, wrote the controversial book Everything Bad Is Good for You, which argued that (contrary to the doom and gloom of the cultural gatekeepers) popular culture was actually growing more complex and challenging. And as Johnson has argued in all his books, the great achievements of our present moment (like those of every moment that preceded it) are the result of building on the ideas, productions and “intellectual property” of others – from the discovery of oxygen, to the invention of rock and roll, to the birth of cinema, and so on. We make more and better culture the more our cultural productions are free to circulate. And our culture makes us smarter, better, the freer it is.

So am I really suggesting that what Girl Talk is up to is in any way equivalent to the conditions that led to Priestley’s discovery of oxygen? Well, yes, I suppose I am. And I think Priestley and Franklin and many of their 18th-century collaborators would approve of the analogy. But, it might reasonably be argued: if ours is the most restrictive culture yet for the circulation of intellectual property, why are we also witnessing a period in which something like Girl Talk can exist (and in future columns I will consider related achievements in other corners of the popular culture universe)?

The answer lies somewhere around the moment in musical (and, in my case, personal) history from which Gillis draws so many of his samples – and in teenage memories that anyone under the age of, well, my age can summon effortlessly: the mixtape, that dorky, romantic phenomenon of the late 1970s. With the birth of the first commercially accessible analog audio tapes, consumers for the first time began making their own mixes, deeply personal combinations of songs often given to friends or left anonymously for teenage crushes. From the audiotape was born not only the cliché of the mixtape, but rap music, the DJ, remix culture, the bootleg, and most especially the realization that consumers could take any album and reorder the songs, mix them together, share them, distribute them, personalize them. Long before the digital revolution and all it brought with it – CDs, Napster, iTunes – our relationship to pop culture had already changed, forever. Girl Talk is the culmination of a generation of mixtapes, only here the mixtape is so personal, so profound that it becomes not only more than the sum of its parts but, through what can only be described as the alchemy of remix culture, something completely different from what it was at the start.

Of course, what I have not addressed here is an important fact: Gillis is, technically, a criminal. His record label, Illegal Art, wears its status on its sleeve, and it is probably only a matter of time before the right combination of lawsuits, congressional investigations, and bad press brings this particular front of the pop renaissance to a tragic close. In the meantime, go download Feed the Animals, pay what you can and what you will. And while you’re at it, stop by http://nfb.ca/rip to view the just-released documentary RiP!: A Remix Manifesto, featuring Greg Gillis and Lawrence Lessig (sadly, no Ben Franklin), and think about what we lose every time an artist like Gillis is silenced. And think as well about what we gain by the fact that, in the end, he and those who follow his example will not and cannot be silenced. Long live the (remix) revolution.

More people are reading,
but is the modern canon stocked with the right stuff?

FEBRUARY 2009

This column will be based on a proposition that will strike many readers as absurd: we are, at this moment in history, in the middle of a cultural “renaissance.” All the signs, it will be protested, surely point in the opposite direction. For example, in 2004 the National Endowment for the Arts announced the results of a massive study concluding that “advanced literacy” was in a steady decline. “America can no longer take active and engaged literacy for granted,” Dana Gioia, chairman of the NEA, announced. “As more Americans lose this capability, our nation becomes less informed, active, and independent minded.”

Twenty million advanced readers lost as of 2002, and the losses were only expected to mount as the new dark century unfolded. Such apocalyptic visions were of course very much in line with the millennialism of the moment (remember Y2K?), and the media jumped on the NEA’s findings and ran with them in many directions (blithely ignoring the fact that it was the mass media itself that Gioia’s NEA held responsible for the degeneration of mankind).

Much less attention was paid, however, to a follow-up study released this month by the NEA that somewhat grudgingly admitted that “Reading is on the rise.” How could this be so? According to the vision the NEA painted five years ago, we should all be rooting around in the gutters for our next cultural meal by this point. Instead we were reading more? Gioia struggled to explain it: it is, he suggested, because more reading is being “required” by teachers, employers, and parents – who are, in his vision, no doubt responding to the urgent pleas that have emanated from his office over the last decade. Take away the requirements, and our natural tendency toward degeneration returns. As Gioia warned anyone inclined to take too much solace in these “astonishing” counter-intuitive results: “Have we become a nation of Lionel Trillings? The answer is absolutely not yet.”

A nation of Lionel Trillings. Gioia’s ideal representative for “advanced literacy” is telling. Trilling was one of the last great “public intellectuals,” and he was indeed one of the more erudite and eloquent critics of his day. He was also fiercely hostile to what he saw as the threat posed to elite or high culture by the popular. He devoted his career to fighting off the threats posed by the “dumbing down” of America and to celebrating the power of high culture to make us a more reflective and thereby moral people. In a sense, then, Gioia’s vision of the ideal for his country – a nation of Lionel Trillings – says everything we need to know about what “advanced literacy” means in this context: it means reading books that Trilling would approve of and not reading books (or, God forbid, watching the programs, playing the video games, listening to the songs) that are associated with what Trilling’s contemporary, the critic Robert Warshow, referred to as the “immediate experience” of popular culture.

In fact, I would suggest Warshow, and not Trilling, as a more appropriate ideal for our nation. Before his death in 1955, Warshow, who shared many of Trilling’s attitudes toward popular culture, began to reflect that perhaps there was something more to all this “trash” than at first appeared. As is the case for many of us, his intellectual conversion was brought about in large measure by the most personal of observations: his son, despite all the advantages of education and good sense, loved comic books, the very comic books that Congressional subcommittees were investigating for their corrupting influence on America’s youth – as David Hajdu describes in detail in last year’s fascinating history, The Ten-Cent Plague: The Great Comic-Book Scare and How It Changed America (Picador, 2008). Warshow argued with his son, teased him about the time he wasted on comics, and bullied him lovingly into devoting more time to the “advanced literacy” Gioia and his spiritual godfather, Trilling, would have insisted was the literacy worth having. But his son persisted in his devotion to comics, and in the end Warshow was forced to admit they were certainly not doing him any harm. After all, his 11-year-old son was smart, funny, and creative – “more alert, skillful, and self-possessed than I or any of my friends were at eleven.” And it was entirely likely that his son’s love of comics had at least something to do with the young man he was shaping up to be.

Sadly, Warshow died of a heart attack the following year, before he had a chance to see how Paul and the comics he loved would turn out. Both ultimately turned out quite well indeed, and I like to imagine that Warshow – a more generous critic than Trilling ever was – would have been delighted by both (especially by Paul, who followed in his father’s footsteps and became a critic). And I can’t help but be reminded of Warshow when I struggle with my own 11-year-old son over his fierce love of video games. Don’t get me wrong: I play and enjoy video games (as future columns will no doubt reveal), and I believe there are great things that can and do happen in them. But when I see the hours and hours they consume, the frustrations and obsessions they inspire, I find myself sounding a lot like Robert Warshow in 1954. Or even like Gioia in 2004. Those hours and days devoted to mastering the increasingly complex obstacles and puzzles of the best video games are hours and days not spent with Charles Dickens or Edgar Allan Poe. As Warshow mused a half century ago, his son could “be reading things like ‘The Pit and the Pendulum’… – which, to be sure, would be better.”

But then I must force myself to pause (as Warshow paused in his day) and remind myself that my beloved Dickens and Poe were themselves deeply embedded in the popular culture of their own day. Both were magazine writers and editors, courting popular audiences and experimenting with techniques and approaches designed to elicit strong “immediate” emotions in their readers. One of them, Dickens, worked the system brilliantly and became extremely successful as a result of the mass marketplace; the other, Poe, famously never found the same success and was plagued by bad luck and some very bad instincts, dying broke and alone. Today, of course, Dickens and Poe are precisely the kinds of authors Gioia worries that we are no longer reading much of. They represent the “advanced literacy” we must protect at all costs from the encroachments of popular culture and its attendant literacies – all traces of these past authors’ own popular culture origins and investments now conveniently scrubbed away by generations of anthologies and syllabi.

To be clear, my son does read Dickens and Poe, but he probably reads a bit less of them because he also plays video games. And yet, as Warshow was forced to concede of his own son, he is in every way a more engaged, intellectual, and self-possessed 11-year-old than I was at his age. Do video games have anything to do with this? We spend so much time asking the opposite question (how does popular culture corrupt our children?) that we have almost neglected to ask the questions Warshow was asking before his untimely death. Just as his son was drawn to the very best comics of his time – those produced by EC Comics, including the brilliantly satirical Mad Magazine – so too is my son drawn to what I suspect history will judge the very best video games of this time: Ico, Zelda, God of War, Prince of Persia, and my own nomination for best game of 2008, Spore. Will some of these emerge as classics in future syllabi and “anthologies” fifty, a hundred years from now? Yes, no doubt, although what such classrooms and anthologies will look like is hard to imagine at the moment.

Back when I was an undergraduate some quarter century ago, I wondered, wistfully, what it would have been like to live through the emergence of a truly new media form – to be alive in the U.S. in 1789 when the first American novel was published, or in the 1890s when the first films were projected, or in 1950 when television entered the homes of average Americans. In the early ’80s, I could not yet imagine that I was going to be so blessed as to live through such a revolution, indeed, one whose impact on the culture would be as great as all of these combined – arguably greater than any event in the history of media since the invention of the printing press.

Can we be truly shocked that in the 1990s and up through 2002, as the NEA reported, people were distracted away from traditional literary texts by the need to learn how to read and navigate this new medium, the Internet, and the vast array of completely new forms, genres, and voices that it unleashed? Was that truly a sign of the decline of literacy, or the natural result of people recognizing that something important was going on here? And if reading of the kind Gioia would recognize has been suddenly increasing in recent years, doesn’t that suggest that we are not dealing with an either/or proposition here? As traditional books merge increasingly into what popular culture theorist Henry Jenkins has described as “convergence culture” – translated into e-books, audio books, and other digital forms – the distinction becomes increasingly meaningless for most users. Literature, far from being some bastion of civilization guarding the gates against an evil Mass Media, is itself reabsorbed into the river of popular culture. And as discriminating readers, like my son Eli, sit down to make choices as to the culture they want to partake in, literature is present as one choice among many. And as it turns out, Dickens still passes the test. As I write this, my son plans to spend all day alternating between reading a novel (Great Expectations) and playing a video game (Call of Duty: World at War).

It turns out Gioia and the National Endowment for the Arts had little to worry about. Except that far from becoming a nation of Lionel Trillings, devoted to preserving a High culture from the encroachments of the Low, we are more likely to become a nation of Robert Warshows, skeptical critics of our mainstream media, separating the wheat from the chaff – succumbing neither to utopian fantasy nor to reactionary refusal.

The chaff has always swamped the wheat in any age, even those eras we now recognize as being among the most culturally rich in our history. In the United States we have had two so-called “renaissances” before – one in the second half of the 19th century (the precise dates depend on whether we are talking about literature, where the flowering tends to be described as happening earlier, or art and architecture, where it is decidedly a post-Civil War affair), and another in the early years of the 20th century – the Harlem Renaissance. Both of these earlier “Renaissances” were deeply connected to what we now think of as popular culture, including tabloids, true crime, music hall theatre, popular music. But the historians and critics who had the privilege of endowing these periods of cultural production with the label “Renaissance” worked studiously to erase the popular culture from their record, focusing instead on the forms we have been taught are “proper” arts and legitimate culture: poetry, novels (of a certain refinement), drama, architecture, etc.

It has been almost a century since anyone had the audacity to talk about “renaissances” in this country. And yet for about ten years now, I will argue in the coming months, we have been living through one, though it is not taking place primarily in traditional forms like poetry, drama, and architecture, or in any of the other usual places where we look for evidence of our cultural regeneration. There are plenty of exciting things happening in those genres, to be sure, and in the months ahead I will mention some. But the real heart of this current renaissance is happening in forms that we have either historically neglected, outright scorned, or only recently realized even existed.

And to all who would challenge this assertion by pointing out that the vast majority of what is currently produced in these forms is garbage, I would reply that in 1855 or 1925, the vast majority of literature, drama, and music was just as bad as it is now. What made those periods come alive and resonate across the generations had as much to do with the remarkable skills of their readers as with those of their artists – including, no doubt, the 11-year-old spiritual ancestors of Paul or Eli, readers who found their way to Whitman or Langston Hughes despite their parents’ disapproval, and whose devotion to this pathfinding poetry finally forced their parents to get off their high horse and take a serious look at what this new-fangled stuff was all about. It is time for us to turn our backs on Gioia and others who would have us become a nation of Lionel Trillings and to look seriously at this culture that our children are devouring with the energy and illumination of those who know they are on to something great.

Over the course of future columns, I will share my own explorations of this pop renaissance with you. I am old enough, I promise, not to give in without a fight to every fad I encounter, but wild-eyed enough not to exclude anything – however unfamiliar or strange – from serious consideration. And as I work to explore the contours and limits of this renaissance, I invite readers to share their own discoveries with me.

Jared Gardner teaches American literature, film and popular culture at the Ohio State University. He can be reached at editor@guttergeek.com


© 2009-12 Short North Gazette, Columbus, Ohio. All rights reserved.
