I am quite excited to announce my newest publication, as it marks my first venture into a fully realized work of videographic criticism. “Adaptation.’s Anomalies” was just published in [in]Transition, culminating a project I began at the Scholarship in Sound & Image workshop we hosted in Middlebury last summer. (I’m also presenting the video on a panel of videographic work at SCMS in Atlanta, Friday April 1 at 12:15pm.)

While the video stands on its own, I encourage readers to visit the journal’s version for contextualizing material, including my author’s statement and two open peer reviews that provide good insights into the project. I hope it prompts a conversation, either here or at [in]Transition!


Last month I shared my plan to use specifications grading in my Television and American Culture course this spring semester. I just finished marking the first exam, which provides my first real opportunity to reflect on how the experiment is going. (Make sure to read that previous post for the specifics of the approach and course design.) Below I walk through the first exam, what my students did, and reflect on what this system has revealed to me about my teaching and students’ learning.

The course has 31 students enrolled, and all seem to be on board with the grading approach. I asked students to sign a short form to affirm their understanding of the grading system, and asked them to indicate (with no binding commitment) which “bundle” of assignments, and thus which final grade, they planned on working toward in the course. 85% of the students said they planned on working toward an A, with the remaining 15% indicating the B bundle. This wasn’t much of a surprise, given that the norm at Middlebury is toward receiving A grades – if anything, the surprise was that as many as 5 students said they were striving toward “only” a B in the course. It will be interesting to track how this initial plan matches the work that students end up doing, as I expect there will be some who started aiming at an A who choose to do less work as the semester proceeds, and perhaps a few who revise their aim higher.

The first exam consisted of two questions, each with two versions. The Basic versions provide opportunities for students to demonstrate their ability to restate the course content in their own words (which, on an open-book take-home exam, should not be particularly challenging), while the Advanced versions ask students to apply this knowledge to specific examples or to craft their own arguments about the concepts. Completing more Advanced questions allows students to qualify for B or A final grades, while every student must satisfactorily complete at least the Basic versions of six exam questions throughout the semester. For the first exam, all students may revise their Unsatisfactory answers at no “cost,” while future exams require them to spend “flexibility tokens” to revise answers. Each question on the three exams focuses on one of the six units in the course, so it is all very structured and hopefully transparent as to what is being evaluated. Continue reading ‘First Update on My Specifications Grading Experiment’


In January, I helped bring Anne Trubek to Middlebury to do a workshop for faculty called “Writing for the Public.” Anne is a friend and a great writer with a diverse resume, so when she announced that she was adding campus workshops to her Thinking Writer slate of online courses, I jumped at the chance to bring her to Vermont.

It was a great couple of days, as Anne worked with 15 faculty to discuss the mechanics of popular publishing, how to craft a pitch, what topics make sense for what venues and audiences, and what distinctive perspectives academics have to offer and how best to write to those strengths. Anne also called attention to how unusual the phrase “writing for the public” is, noting that everyone except academics simply calls it “writing.”

She encouraged us to develop a pitch and start working on an article to discuss with her and our peers, which I did. This was shortly after the public squabble between NBC and Netflix at the Television Critics Association about Netflix’s unreleased viewership numbers. So I developed a pitch that would address that timely controversy, trying to explain Netflix’s unique business model to industry outsiders and make the case for why we should care. I drafted the piece that night, Anne & I hashed out edits over breakfast, and then I sent the piece to five journalistic venues.

That was Thursday January 28th. The only one I heard back from quickly was a friendly acquaintance who edits a prominent culture site, who passed due to overlap with previous coverage. Silence for the next week. (Anne says this is not uncommon.) Then a week later, I got a positive reply from The Atlantic, who asked for potential revisions, which I turned around quickly. Then more waiting, then a positive email saying they’re waiting for the right time in their cycle to run it, with some final revisions (especially since the timely hook was now stale).

That right time was today, as the article was published: “Why Netflix Doesn’t Release Its Ratings.” For me, it’s a piece that straddles genres: too off-the-cuff & speculative for scholarship, too depersonalized for a blog post, too rudimentary for posting to a more academic audience on a site like Flow or the late-lamented Antenna. I see it as a form of public media literacy, hoping to raise awareness of how underlying business models impact how we engage with and talk about television. I hoped it would get people interested in the behind-the-scenes systems of our major communication media.

It did get one notable person interested – shortly after the article came out, I got an email from a producer at the public radio show Marketplace, who was hoping to have me record an interview about the topic. Thankfully the scheduling and technology worked, so my interview appeared on tonight’s show.

I share this here, both as a more permanent link as part of my revitalized blogging, and to share a sense of how such a popular press piece comes into being. And, admittedly, to support and promote Anne’s workshops as a valuable way to learn how to write (for the public)!


Today I started my spring course, Television and American Culture, a class I have offered around 15 times. It’s the course that inspired my textbook (of the same name), and my co-edited book How to Watch Television also was structured to fit with the course’s design. In short, it’s the course that I’ve dedicated the most work to honing, and I feel that overall it works quite well… except for one facet: grading.

I hate grading. I hate how grades function in higher education for students, for faculty, for parents, and for institutions. I hate how grades often work as an obstruction to learning, rather than a motivation, reward, or neutral assessment. I firmly believe that, at least here at Middlebury, figuring out a way to rethink the culture of grades would be the most effective and impactful reform we could make. Such reforms are challenging and slow-moving at an institutional level, but I was moved to jump into the deep end and rethink how grading works in this course. And thus I’m running an experiment this semester by completely changing the course’s grading system.

The approach I am taking is called Specifications Grading, which emerged from a fairly well-established alternative approach to grading typically called Contract Grading. I first learned about contract grading a few years ago through Cathy Davidson blogging about her use of the system at Duke. The idea bubbled around in my head for years, but I decided to give it a whirl after reading this piece by Linda Nilson on specifications grading, which is based on her book. The difference between specifications & contract grading is a bit fuzzy, and ultimately not as important as their similarities, which are tied to three key principles:

  • All individual assignments are graded on a Pass / Fail or Satisfactory / Unsatisfactory basis. The bar for Satisfactory is set higher than what we typically think of as “passing work” (more like a typical B than a C), with a satisfactory assignment being one that meets its clearly-articulated specifications and learning goals. This means that an assignment that meets some but not all of the goals & specs is Unsatisfactory, a much more rigorous bar than how most faculty (especially in the humanities) grade papers. This also means that you need not spend time quibbling between giving a paper a B+ vs. A–; it either meets the expectations, or it doesn’t. Instead, I can spend my assessment time providing qualitative feedback, which is more rewarding for everyone. Plus the system has options for revision so that a student receiving an Unsatisfactory can choose to improve their work and hopefully satisfactorily accomplish the assignment goals.
  • Assignments are designed to demonstrate that students have achieved the course’s specific learning goals. This seems obvious, but I was surprised by how weakly the old assignments for my course were connected to stated learning goals. Under this approach, you should be able to clearly highlight how each assignment serves the stated goals. Making those connections explicit greatly improved the conceptual basis for the assignments I give, and I hope will make assessing whether they accomplish those goals easier.
  • Final grades are determined by students’ accomplishments in a hierarchy of “assignment bundles.” If we set the passing bar for the course at a C, then we designate which quantity and depth of assignments are necessary to accomplish the course’s base learning goals. Additional assignments are added to that base to reflect more sophisticated and deeper learning, creating bundles for B and A levels. This system gives students full control over which of these bundles they will strive to accomplish, based on their own learning priorities and self-aware judgment about time management and intellectual goals.

This last system of bundles is kind of a “hack” to the system: because most of us teach in institutions that require us to enter a single letter grade into a transcript at the end of the semester, we need to be able to produce such a metric. However, the specifications approach eliminates the stresses of grading each assignment by designing a course which allows students to choose their own learning paths transparently, as linked to grades at the end of the process. Hopefully, at the end of the semester, I can know that a student who received a B demonstrated that they learned four of the course’s explicit learning goals, while a student who received an A learned all five. (See below for the specific language laying out this system for students.)

In designing my syllabus, I embraced a tiered set of learning goals, based on various schema of levels of learning and cognition. The base level focuses on learning and comprehending the information covered in the course, and being able to express this knowledge effectively: this is what any student who passes the course should accomplish, and the C bundle assesses this knowledge. The next tier involves applying that knowledge to analyzing new examples and scenarios, with assignments in the B bundle requiring such analytical application. The highest tier invites students to generate their own arguments and synthesize both information and analytic approaches across realms of knowledge, captured in the additional requirements of the A bundle. Thus a high grade indicates not that a student did the same assignments particularly well, but that they demonstrated more challenging modes of engagement and analysis. This seems like a more accurate demarcation of learning.

So today in class, I rolled it out for students, walking through the policy statements reproduced below.* It took some time for them to grapple with the new system, but I think they got it, and I sensed that they mostly thought it was a cool idea. One said, “it’s kind of like a board game” – which I affirmed, but emphasized that “winning” means understanding the system enough to actively engage in the material to achieve the level of learning you aim to accomplish, not gaming the system. We will see how it unfolds, and I will try to update the blog on the experiment in progress. I’d love to hear what readers think of such a system, whether you’ve tried anything similar, and any advice for what might emerge as the semester progresses.

  • A few have asked the size of the course: around 30 students, mostly sophomores & juniors. About 1/3 declared majors, with another 1/3 who might become majors.

Continue reading ‘Rethinking Grading: An In-Progress Experiment’


I’ve griped about the problems with closed peer review in academic publishing before, whether in the black box of tenure reviews, or celebrating the open review for Complex TV, or wondering about Why a Book?, or envisioning new possibilities with MediaCommons. My unifying frustration in all of these gripes is that throughout academia, the strongest elements of peer review — the dialogue that leads to higher quality scholarship, the labor that goes into providing thoughtful commentary on other scholars’ work, the contextualization of placing scholarship into particular conversations and subdisciplines, the validation from particular peers you respect praising your work — are kept completely invisible to readers. What is left is simply the gatekeeping function, where we either see the fact of publication and are left to assume it must have been deemed worthy by someone for some reasons, or know nothing of what gets rejected and why.

I believe in open review, and have tried to practice it whenever I can – right now I’m participating in a great open peer review process for the new Debates in Digital Humanities volume, with contributors commenting on each others’ essays. But such experiments are far from widespread and still viewed skeptically by many traditionalists. The one place where a modest form of open peer review is broadly practiced is book blurbs.

Blurbs are far from typical peer review: they are solicited after a book has been fully approved for publication, they offer no opportunity for feedback or revision, and they are designed to simultaneously promote the book and highlight the blurber’s ability to offer praise in a concise and pithy way. And yet, they offer something that other forms of peer review do not: openness.

In a blurb, author and blurber each know who the other is, as do readers. While this might seem like it would foster conflicts of interest and opportunities to simply promote your friends and colleagues, the openness provides a counter to this. Consider the great new book Matinee Melodrama: Playing with Formula in the Sound Serial by Scott Higgins. I had the pleasure of reading the manuscript and offering this brief blurb: “Matinee Melodrama manages a mean feat: making a mostly forgotten, formulaic format seem new and exciting, shining an informative, fascinating light from film history onto today’s television, comics, and videogames.” I had a lot more to say about it than fit in such a sentence (all good, of course!), but the point of such a blurb is less about what I say than that I say it – we read blurbs for the people who write them, and how their praise informs our perception of the book’s quality and appeals. Perusing the blurbs for Higgins’s book, you see my comment alongside those of Steve Neale, Charles Wolfe, and Leonard Maltin, suggesting that the book will be of interest to film scholars, media scholars, and popular critics. These four signed sentences tell you much more about the book’s potential appeals and merits than the far more substantive and lengthy anonymous peer reviews, of which we know absolutely nothing (except that they endorsed publication).

As for conflicts of interest, I think open review can be more honest than blind review. Scott Higgins is actually an old friend of mine from graduate school, and it is true that I would not feel comfortable saying something negative about his work in an official capacity. In a closed blind review, I could easily praise the shoddy work of a friend (not Scott, whose work is never shoddy) and nobody would be the wiser. In an open review or a blurb, I am staking my name publicly on the integrity of my judgment—if Matinee Melodrama were a weak book, readers would wonder why I praised it so. (I can think of such an instance, where a really bad academic book was blurbed by a scholar I quite respect; to this day, I wonder what she was thinking…)

Similarly, open review can help temper perceived conflicts of interest between author and publisher. I will have a videographic essay published in the next issue of [in]Transition, a journal for which I serve as MediaCommons’s project manager. My video went through the journal’s standard practice of open peer review, and thus there will be two signed reviews published alongside the piece to justify its publication; perhaps some might wonder whether my piece was treated favorably by the editorial team, but two notable videographic creator/scholars will have publicly endorsed it, making the rationale for publication transparent. Would they sign lengthy positive open reviews of a bad project, just to appease the editors’ favoritism? I know that I wouldn’t.

I’m not saying blurbs should replace peer review, but they highlight how little readers know about the actual peer reviewers and their thoughts about any given work. The fact of publication is not enough to ensure its quality and value, and knowing the perspectives and positions of those who vetted a work is important context that is left invisible within closed review. But until more publishers and journals adopt open peer review standards, blurbs are the most transparent comments we have.


This is the third and final (and, to me, most interesting) excerpt from my essay draft on “Videographic Criticism as a Digital Humanities Method.” The first laid out my approach to deformative criticism via the format of PechaKuchas; the second explored videographic 10/40/70 analyses. I highly recommend watching some of the musical videos discussed near the end of the post.

A videographic 10/40/70 relies upon the single shot as the core unit of a film, a key tendency common to much academic work on moving image media. My third and final type of videographic deformation also highlights the shot, but from a distinctly different approach. One of the most prominent forms of quantitative and computational analysis within film studies is statistical stylistics, especially as shared on the crowd-sourced Cinemetrics website. While there are numerous metrics on the site, the most common and well known is ASL, or average shot length, computed by dividing the running time of a full film by its number of discrete shots. The resulting number indicates a film’s overall editing pace, charting a spectrum from quickly-cut movies (such as Batman Begins at 2.37 seconds or Beverly Hills Chihuahua at 2.72) to longer-take films (such as An American In Paris at 21 seconds or Belle de Jour at 24).[1] The most typical range is between 3 and 8 seconds per shot, with much variability between historical eras, genres, national traditions, and specific filmmakers.

An ASL is in itself a kind of deformation, a reduction of a film to a single numeric representation. Cinemetrics does allow more detailed quantification and visualization of a film’s editing patterns—for instance, this is a more granular and graphic elaboration of Mulholland Drive’s ASL of 6.5:[2]

MD Cinemetrics visualization

But these numbers, tables, and graphics make the film more distant and remote, leaving me uncertain what we can learn from such quantification. According to Yuri Tsivian, Cinemetrics’s founder, the insights are quite limited: “ASL is useful if the only thing we need to know is how long this or that average shot is as compared to ASL figures obtained for other films, but it says nothing about each film’s internal dynamics.”[3] Certainly comparison is the most useful feature of ASL, as it allows quantitative analysis amongst a large historical corpus, a general approach that has proven quite productive in digital humanities across a range of fields. But I wonder about Tsivian’s quick dismissal that ASL “says nothing about each film’s internal dynamics.” Doesn’t a film with a 2.5 second cutting rate feel and function differently than one with a 15 second ASL? Certainly, and it doesn’t take a quantification to notice those differences. But perhaps such a quantification might guide a more thorough understanding of editing rates by extending the deformation?

Videographic methods allow us to impose a film’s ASL back onto itself. I have created a videographic experiment called an “equalized pulse”: instead of treating ASL as a calculated average abstracted from the film, I force a film to conform to its own average by speeding up or slowing down each shot to last precisely as long as its average shot length.[4] This process forces one filmic element that is variable within nearly every film, shot lengths, to adhere to a constant duration that emerges quantitatively from the original film; but it offsets this equalizing deformation with another one, making the speed of each shot, which is typically constant, highly variable. Thus in a film with an ASL of 4 seconds, the equalized pulse extends a 1-second shot to 25% speed, while an 8-second shot runs at 200% speed. If you equalized an entire film to its average pulse, it would have the same running time and the same number of shots, but most would be slowed down or sped up to conform to an identical length. Every shot exerts the same temporal weight, but each feels distinct in its tempo and pace. The result is, unsurprisingly, very strange—but I believe productively so.
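To make the arithmetic concrete, here is a minimal Python sketch of the speed calculation; the shot durations are hypothetical, not drawn from any actual film:

```python
# Per-shot speed factors for an "equalized pulse": every shot is retimed
# to last exactly the film's (or sequence's) average shot length.

shot_durations = [1.0, 8.0, 4.0, 2.6, 27.0]  # hypothetical shot lengths, in seconds

asl = sum(shot_durations) / len(shot_durations)  # average shot length

for i, duration in enumerate(shot_durations, start=1):
    speed = duration / asl  # below 100% slows the shot down; above speeds it up
    print(f"shot {i}: {duration:4.1f}s -> {asl:.2f}s, played at {speed:.0%} speed")
```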

What does Mulholland Drive look and feel like when equalized to a pulse of its 6.5 second ASL? Can we learn something about the “film’s internal dynamics” more than its numeric representations on Cinemetrics? Take the film’s opening scene following the credits, with Rita’s car accident on the titular street; in the original, it lasts 4:07 with 49 shots ranging in length between .3 and 27 seconds.

The deformed version with an equalized pulse of every shot lasting precisely 6.5 seconds runs 5:18, as the original sequence is cut comparatively faster (ASL of 5.04 seconds) than the film as a whole. The effect is quite uncanny, with super slow-motion action sequences bookended by sped up shots with less onscreen action; the car accident is particularly unsettling, turning a 9-shot, 6-second sequence into a grueling and abstract 58-second ordeal that oddly exaggerates the effect of experiencing a moment of trauma in slow motion. As a whole, the video does convey the sense that a pulse of 6.5 seconds feels quite deliberate and drawn out, although the variability of action obscures the consistency of the editing pulse.

Another scene from Mulholland Drive offers quite different effects, despite the same algorithmic deformation to its same equalized pulse. The memorable scene in Winkies Diner, where two unnamed men discuss and confront a dream, has always been the film’s pivotal scene for me, signaling its affective impact that transcends any rational comprehension or interpretation. When equalized to a 6.5 second pulse, the scene’s uncanniness is ratcheted up, downplaying the dialogue rhythm for a more even distribution between the two men. The slow motion close-ups with distorted voice highlight the dreamlike quality, and the overall slower pace increases the sense of foreboding that already pervades the scene. By the time the horrific bum is revealed at the scene’s end, I find myself completely enthralled by the editing pulse and pulled into the affective horror that the scene always produces, suggesting that its impact is not dependent on Lynch’s designed editing rhythms. I have not extended this equalized pulse to the entire film, but clearly each scene and sequence will feel quite different, even with a uniform shot length throughout.

Mulholland Drive is a film packed with abundant strangeness, even before its deformation; how does an equalized pulse impact a more conventional example? Even though Mildred Pierce features the unusual combination of noir crime and family melodrama, it is still a far more straightforward film in keeping with its 1940s era. Its ASL of 10.09 is much slower than films of today, but is fairly typical of its time. Equalizing the pulse of a crucial scene in the family melodrama, with Veda driving a wedge between Mildred and Monty who finally end their dysfunctional relationship, highlights various character interactions.

When Mildred gives Veda a car, the deformation speeds through Veda thanking her mother but lingers on her exchange with Monty, underscoring the closeness between the stepfather and daughter—in the original, the emphasis is reversed in terms of timing, but equalizing the shots actually better represents Veda’s attitudes. The deformation lingers over shots without dialogue, letting us closely examine facial expressions and material objects, but speeds through lengthy dialogue shots, like an impatient viewer fast-forwarding through the mushy emotional scenes. The final lines exchanged between Mildred and Monty are unreasonably drawn out, milking their mutual contempt for all they are worth. The scene is still legible, especially emotionally, but it redirects our attention in unpredictable ways—arguably a key goal of an effective deformance.

What about the other end of the pacing spectrum, equalizing the pulse of an action film like Raiders of the Lost Ark? The film has an ASL of 4.4 seconds, longer than most contemporary action movies but still quite brisk, especially for director Steven Spielberg. I deformed the iconic opening sequence, but used the sequence’s faster ASL of 3.66 rather than the whole film’s pacing, as that allows for a direct comparison of the original and equalized versions.[5]

The effect is definitely striking, as the deformed version races through the build-up toward action and peril, while lingering painfully on darts flying through the air, near-miss leaps, and other moments of derring-do. In the slowed down shots, you notice odd details you never would see in the regular film, like the discoloration of Indy’s teeth, and sense a very different momentum. When placed side by side with the original, it highlights how much of the sequence is weighted toward the approach and build-up rather than the action, while the deformed version lingers on moments that regularly flit by.

Raiders Equalized Timeline

The editing timeline visualizes these differences, but in a way that is analytically obscure; the videographic form allows us to feel and experience the analysis in ways that computational visualization cannot. What stands out most to me in watching and listening to this deformation is the role of music, as John Williams’s score still manages to hit its key themes and punctuate the action, despite its variable tempo and rhythms.

This experiment in equalizing a film’s pulse points most interestingly toward different types and functions of rhythm and tempo. In a conventionally edited film, variation of shot length is a main source of rhythmic play, both in creating emotional engagement and guiding our attention. Eliminating that variation by equalization creates other forms of rhythm and tempo, as we notice the relative screen time given to various characters, anticipate the upcoming edits in a steady pulse, and engage with the interplay between image and sound. These equalized deformations highlight how much the analysis of editing and ASL privileges the visual track over the audio—we are not quantifying audio edits or transitions in such metrics, as sounds bridge across shots, slowing or speeding up like an accordion.

Experimenting with these equalized pulse videos piqued my curiosity about how visual editing functions in conjunction with music, especially in instances where the musical track is more dominant, as with film musicals or music videos. These explorations into musical sequences proved to be the most exciting examples of equalized pulse, as they highlight the transformation of rhythm and tempo: the musical track stretches and squashes to create unpredictable rhythms and jettisons its standard tempo, allowing the steady beat of the changing visuals to define the speed.

For instance, “Can’t Buy Me Love” from The Beatles film A Hard Day’s Night becomes a collage of fast and slow motion when equalized to its sequence ASL of 4.9 seconds, making an already playful and experimental sequence even more unpredictable. Musical sequences combined with dance add another layer of rhythmic play, as with the transformation of Singin’ in the Rain’s “Broadway Melody” into a deformed and almost uncanny work when equalized to its ASL of 14.9 seconds.

Musical numbers typically are edited at a slower pace than their films as a whole, providing more attention to performance and dance without being pulled away by edits. A rare exception is one of the fastest cut films listed on the Cinemetrics site, and certainly the fastest cut musical I know of: Moulin Rouge, with an ASL of 1.9 seconds.

The “Roxanne” number, with an even brisker ASL of 1.05 seconds, is the only equalized pulse video I’ve yet made where the visual tempo becomes noticeably dominant, offering a steady beat of images and sounds whose speed deformations go by so quickly as to often escape notice.

These equalized pulse versions of musical numbers are the most engaging and affective examples of videographic deformations I have made, functioning as compelling cultural objects both on their own and as provocatively deformative paratexts. They also demand further analysis and study, opening up a line of examination concerning the relative uses of edits, music, and dance to create rhythm and tempo. As such, these videographic deformations are not scholarship on their own, but they do function as research, pointing the way to greater scholarly explorations. Whether that subsequent scholarship is presented in written, videographic, or multimodal forms is still to be determined, but I hope that this discussion has shown how videographic criticism is more than just a form of dissemination. Transforming a bound cultural object like a film into a digital archive of sounds and images enables a mode of critical engagement that is impossible to achieve by other methods; as such, videographic criticism functions as a digital humanities research method that is poised to develop the field of film and media studies in unpredictable new ways.


Some bonus equalized pulse videos to consider:

 

[1] Unless otherwise noted, all ASL data are taken from Barry Salt’s dataset on Cinemetrics; even though the site includes many more films with crowdsourced information, I have found they lack the consistency and methodological clarity of Salt’s list, which makes comparisons among films easier.

[2] Salt’s list doesn’t include this film, so I used the ASL and this graphic from Nikki Esselaar’s submission.

[3] Yuri Tsivian, “Taking Cinemetrics into the Digital Age,” Cinemetrics.

[4] The process to do this is fairly straightforward in Adobe Premiere: first cut the source video into clips per the original edits. Then select all of the clips and use the Clip Speed / Duration tool. Unlink the Speed and Duration variables, and enter the number of seconds and frames in Duration corresponding to the ASL. Relink Speed and Duration, and be sure to check the Maintain Audio Pitch and Ripple Edit buttons. The only trouble comes when a clip is stretched or sped up by more than 1000%, as the audio then needs to be manually processed with more complex intervening steps.
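For those working outside Premiere, a similar retiming can be sketched from Python by shelling out to ffmpeg — an alternative workflow I am assuming here, not the Premiere process described above, and the file names are hypothetical. Note that ffmpeg’s pitch-preserving atempo filter only accepts factors between 0.5 and 2.0, so extreme speeds must be chained, paralleling the manual processing Premiere requires past 1000%:

```python
# Sketch: retime a single shot to the target pulse with ffmpeg (assumed installed).
import subprocess

def atempo_chain(speed: float) -> str:
    """Decompose a speed factor into chained atempo filters within [0.5, 2.0]."""
    parts = []
    while speed < 0.5:
        parts.append("atempo=0.5")
        speed /= 0.5
    while speed > 2.0:
        parts.append("atempo=2.0")
        speed /= 2.0
    parts.append(f"atempo={speed:.4f}")
    return ",".join(parts)

def equalize_shot(src: str, dst: str, speed: float) -> None:
    subprocess.run([
        "ffmpeg", "-i", src,
        "-filter:v", f"setpts={1 / speed:.4f}*PTS",  # stretch/compress video time
        "-filter:a", atempo_chain(speed),            # retime audio, keeping pitch
        dst,
    ], check=True)

# A 1-second shot filling a 4-second pulse plays at 25% speed.
equalize_shot("shot01.mp4", "shot01_pulse.mp4", speed=0.25)
```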

[5] The opening 12:47 of the film consists of 209 shots, resulting in a 3.66 ASL.


This is the second excerpt from my essay draft on “Videographic Criticism as a Digital Humanities Method.” The first laid out my approach to deformative criticism via the format of PechaKuchas. This one moves toward another instance of deformation, inspired by the work of Nicholas Rombes.

Videographic PechaKuchas take inspiration from another form, the oral presentation, but we can also translate other modes of film and media scholarship itself to deformative videographic forms. One of the most interesting examples of parameter-driven deformative criticism is Nicholas Rombes’s “10/40/70” project.[1] In a series of blog posts and a corresponding book, Rombes created screen captures of frames from precisely the 10, 40, and 70 minute marks in a film, and then wrote an analysis of the film inspired by these three still images. Rombes acknowledged that he was deforming the film by transmuting it into still images, thus disregarding both movement and sound, but he aimed to draw out the historical connections between filmmaking and still photography through this shift of medium. The choice of the three time markers was mostly arbitrary, although they roughly mapped onto the beginning, middle, and end of a film. The result was that he could discover aspects of the film that were otherwise obscured by narrative, motion, sound, and the thousands of other still images that surrounded the three he isolated—a clear example of a deformance in Samuels and McGann’s formulation.

What might a videographic 10/40/70 look like? It is technologically simple to patch together clips from each of the designated minute markers to create a moving image and sound version of Rombes’s experiment. Although we could use a range of options for the length of each clip, after some experimentation I decided to mimic Rombes’s focus on individual frames by isolating the original shots that include his marked frames, leading to videos with exactly three shots, but with far more variability in length, rhythm, and scope. As with Rombes’s experiment, the arbitrary timing leads to highly idiosyncratic results for any given film. [I recommend watching the videos before reading the analyses.]
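For a sense of the mechanics, here is a minimal sketch of assembling such a video with the moviepy library (version 1.x), assuming a precomputed list of shot boundaries; the file name and boundary values are hypothetical:

```python
# Sketch: assemble a videographic 10/40/70 from a film and its shot boundaries.
from moviepy.editor import VideoFileClip, concatenate_videoclips

film = VideoFileClip("film.mp4")

# Hypothetical shot boundaries in seconds (e.g., from a shot-detection pass);
# a real feature film would have hundreds of entries.
boundaries = [0.0, 3.2, 8.7, 615.0, 630.4, 2405.0, 2412.8, 4210.0, 4225.5, 6840.0]

def shot_containing(t, bounds):
    """Return the (start, end) of the shot that contains time t."""
    for start, end in zip(bounds, bounds[1:]):
        if start <= t < end:
            return start, end
    raise ValueError(f"no shot contains t={t}")

marks = [10 * 60, 40 * 60, 70 * 60]  # the 10-, 40-, and 70-minute marks
clips = [film.subclip(*shot_containing(t, boundaries)) for t in marks]
concatenate_videoclips(clips).write_videofile("film_10_40_70.mp4")
```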

Raiders of the Lost Ark yields a trio of shots without obvious narrative or thematic connection, but in isolation, we can recognize the cinematographic palette that Steven Spielberg uses to create action melodrama: camera movement to capture moments of stillness with an emphasis on off-screen or deep space, contrasted with facial closeups to highlight character reactions and emotion.

Star Wars: A New Hope also calls attention to movement, with consistent left-to-right staging: first with the droids moving across the desert, then with Luke running to his landspeeder, then with Obi-Wan’s head turning dramatically, which is continuous with the rightward wipe edit that closes out the second shot. Both of these iconic films are driven by plot and action, but the arbitrary shots belie coherent narrative, allowing us to focus more on issues of visual style, composition, and texture.

Depending on the resulting shots, narrative can certainly play into these deformations. In Fargo, we start with a shot of Jerry sputtering about negotiating a car sale in the face of an irate customer, which abruptly cuts to Jerry sputtering about negotiating with kidnappers to Wade and Stan in a diner, highlighting the consistent essence of Jerry’s character, underscored by his nearly identical wardrobe across different days in the original film. The scene plays out in an unbroken 80-second static shot, pulling us away from the deformation and placing us back into the original film, as the coherent narrative eclipses the incoherence of the 10/40/70 exercise. But knowing that we are watching a deformation, we wait for the unexpected cut to jump us forward in time, splitting our attention between the film and its anticipated manipulation. The narrative action guides the transition, as Wade impatiently refuses to abide by Jerry’s plan to deliver the ransom himself and stalks away saying “Dammit!” The resulting arbitrary edit follows the most basic element of narrative, cause and effect: we cut to Wade being shot by one of the kidnappers, punctuated by a musical sting and evoking Stan’s earlier line that they’ll need “to bite the bullet.” The final jarring effect stems from the final shot being less than 3 seconds long, a startling contrast to the previous long take, and underscores the incongruity between mundanity and brutality, boring stasis and vicious action, that is the hallmark of Fargo and much of the Coen brothers’ work. Although it certainly feels like an unusual video, Fargo 10/40/70 also functions as a cultural object in its own right, creating emotional responses and aesthetic engagement in a manner that points to one of the strengths of videographic work.

It’s interesting to compare Rombes’s results working with stills versus a videographic version of the same film. Rombes analyzes three stills from Mildred Pierce, and they point him toward elements of the film that are frequently discussed in any analysis: the contradictions and complexities of Mildred’s character, how she fits into the era’s gender norms, and the blurs between film noir and melodrama. The images launch his analysis, but they do not direct it into unexpected places.

I find the videographic version of these three moments more provocative, as they create more opportunities for misunderstanding and incoherence. The first shot finds Wally panicking and discovering Monty’s dead body in a noirish moment of male murder and mayhem, but quickly gives way to a scene of female melodrama between mother Mildred and daughter Veda. Mildred’s first line, “I’m sorry I did that,” suggests a causal link that she is apologizing for murdering Monty. Knowledge of the film makes this causality much more complex, as the murder is a future event that sets the stage for the rest of the film being told in flashback; in the frame story, Mildred appears to have murdered Monty, with the flashback slowly revealing the real killer to be Veda. Thus this scene works as a decontextualized confession made to the actual (future) murderer, adding temporal resonance and highlighting how the entire flashback and murder plotline was a genre-spinning element added to the screenplay but not present in the original novel. The third scene picks up the discussion of the restaurant and finances, bringing it back to the conflict between Wally and Monty—if we were to temporally rearrange the shots to correspond to the story chronology, the opening shot of Wally finding Monty’s body would seem to payoff this conflict, and create a closed loop of causality for this deformed version of the film. This brief analysis is no more valid or compelling than Rombes’s discussion, but it is certainly less conventional, triggered by the narrative and affective dimensions cued by the videographic deformation that ultimately seems more suggestive and provocative than the three still images.

And here are a few bonus 10/40/70 videos that I made but did not analyze – feel free to provide your own analysis in the comments!

Next time: a new and provocative mode of deformation, based on the computational method of average shot lengths!

[1] Nicholas Rombes, 10/40/70: Constraint as Liberation in the Era of Digital Film Theory (Zero Books, 2014).


I’ve spent the last month working on an essay called “Videographic Criticism as Digital Humanities Method” for the second edition of Debates in the Digital Humanities. The full essay should be online soon for open peer review, but I want to share three excerpts that feature numerous video examples, as the blog makes it easier to embed videos and control the layout, and I am including more examples here than will be in the book version. Plus these are presented as “conversation starters,” so I hope they provoke some comments here!

The first excerpt frames the mode of “research experiment” that videographic work can do, via the PechaKucha form that I previously presented as part of our summer workshop – here it is:

Where the possibilities of videographic method get most intriguing is via the combination of the computational possibilities of video editing software with the poetics of expression via sounds and images. The former draws from science-derived practices of abstraction that are common to digital humanities: taking coherent cultural objects like novels or paintings and transforming them into something less humanistic, like datasets or graphs. The latter draws from artistic practices of manipulation and collage: taking coherent cultural objects and transforming them into the raw materials to create something more unusual, unexpected, and strange. Videographic criticism can loop the extremes of this spectrum between scientific quantification and artistic poeticization together, creating works that transform films and media into new objects that are both data-driven abstractions and aesthetically expressive. I will outline three such possibilities that I have developed, using case studies of films that I know well and have used in the classroom, hoping to discover new insights into familiar texts.

The model of poeticized quantification that I am proposing resembles the vector of literary analysis that Lisa Samuels and Jerome McGann call “deformative criticism.”[1] Such an approach strives to make the original work strange in some unexpected way, deforming it unconventionally to reveal its structure and discover something new from it. Both Stephen Ramsay and Mark Sample extend Samuels and McGann’s model of deformances into the computational realm, considering how algorithms and digital transformations might create both new readings of old cultural objects and new cultural objects out of old materials.[2] This seems like an apt description of what videographic criticism can do: creating new cultural works composed from moving images and sound that reflect upon their original source materials. While all video essays might be viewed as deformances, I want to explore a strain of videographic practice that emphasizes the algorithmic elements of such work.

One way to deform a film algorithmically is through a technique borrowed from conceptual art: imposition of arbitrary parameters. From Oulipo, the collective of French artists who pioneered “constrained writing,” to proto-videographic artworks like Douglas Gordon’s 24 Hour Psycho or Christian Marclay’s The Clock, to obsessive online novelties of alphabetized remixes of films like ARST ARSW (Star Wars) and Of Oz The Wizard (The Wizard of Oz), artists have used rules and parameters to unleash creativity and generate works that emerge less from aesthetic intent than from unexpected generative outcomes. We can adopt such an unorthodox approach to scholarship as well, allowing ourselves to be surprised by what emerges when we process our dataset of sounds and images using seemingly arbitrary parameters. One such approach is a concept that Christian Keathley and I devised as part of our workshop: a videographic PechaKucha. This format was inspired by oral PechaKuchas, a form of “lightning talk” consisting of exactly 20 slides, each lasting exactly 20 seconds, resulting in a strictly parametered presentation. Such parameters force decisions that override critical or creative intent, and offer helpful constraints on our worst instincts toward digression or lack of concision.

A videographic PechaKucha adopts the strict timing from its oral cousin, while focusing its energies on transforming its source material. It consists of precisely 10 video clips from the original source, each lasting precisely 6 seconds, overlaid upon a one-minute segment of audio from the original source. There are no mandates for content, for ideas, for analysis—it is only a recipe to transform a film into a one-minute video derivation or deformance. In doing videographic PechaKuchas ourselves, with our workshop participants, and with our undergraduate students, we have found that the resulting videos are all quite different in approach and style despite their uniform length and rhythm. For instance, Tracy Cox-Stanton transforms the film Belle de Jour into a succession of shots of main character Séverine vacantly drifting through rooms and her environment, an element of the film that is far from central to the original’s plot and themes.

Or Corey Creekmur compiles images of doors being opened and shut in The Magnificent Ambersons to highlight both a visual and thematic motif from the film.

In such instances, the highly parametric exercise allows the critic to discover and express something about each film through manipulation and juxtaposition that would be hard to discern via conventional viewing, and even harder to convey so evocatively via writing.

I started using this exercise in my teaching last semester – in a narrative theory course, students were asked to make a PechaKucha of one of the films we had viewed together in the course, with the only requirement that they not try to retell the same story as the film presents. For a sense of the range of possibilities, here are two PechaKuchas for Barton Fink, created by different pairs of students:

Such PechaKuchas follow arbitrary parameters to force a type of creativity and discovery that belies typical academic intent, but they are still motivated by the critic’s insights into the film, aiming to express something. A more radically arbitrary deformance removes intent altogether, allowing the parameters to work upon the film and removing the critic’s agency. I devised the concept for a videographic PechaKucha randomizer, which would randomly select the 10 video clips and assemble them on top of a random minute of audio; Mark Sample and Daniel Houghton executed my concept by creating a Python script to generate random PechaKuchas from any source video. The resulting videos feel like the intentionally designed PechaKucha videos that I and others have made with their uniform length and rhythm, but the content is truly arbitrary and random, including repeated clips, idiosyncratic moments from closing credits, undefined sound effects, and oddly timed clips that include edits from the original film. And yet they are just as much of a distillation of the original film as those made intentionally, and as such have the possibility to teach us something about the source text or create affective engagement with the deformed derivation.
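Their script is not reproduced here, but a minimal sketch of the same recipe (again assuming moviepy 1.x and a hypothetical file name, and emphatically not their actual code) might look like this:

```python
# Sketch of a random videographic PechaKucha: ten random 6-second clips
# laid over one random minute of audio from the same source.
import random
from moviepy.editor import VideoFileClip, concatenate_videoclips

film = VideoFileClip("film.mp4")

# Ten random 6-second video clips, silenced so the audio bed stands alone.
starts = [random.uniform(0, film.duration - 6) for _ in range(10)]
clips = [film.subclip(s, s + 6).without_audio() for s in starts]

# One random minute of the source audio.
a_start = random.uniform(0, film.duration - 60)
audio = film.audio.subclip(a_start, a_start + 60)

concatenate_videoclips(clips).set_audio(audio).write_videofile("pechakucha.mp4")
```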

Just as the algorithmic Twitter bots created by Mark Sample or Darius Kazemi operate at a fairly low signal-to-noise ratio, most randomly generated PechaKuchas are less than compelling as stand-alone media objects; however, they can be interesting and instructive paratexts, highlighting elements from the original film or evoking particular resonances via juxtaposition, and prompting unexpectedly provocative misreadings or anomalies.

For instance, in a generated PechaKucha from Star Wars: A New Hope, Obi-Wan Kenobi’s voice touts the accuracy of Stormtroopers as the video shows a clip of them missing their target in a blaster fight, randomly resonating with a popular fan commentary on the film.

Another generated PechaKucha of Mulholland Drive distills the film down to the love story between Betty and Rita, highlighting the key audio moment of Betty confessing her love with most clips drawn from scenes between the two characters; the resulting video feels like a (sloppy but dedicated) fannish remix celebrating their relationship.

A generated PechaKucha of All the President’s Men is anchored by one of the film’s most iconic lines, while the unrelated images focus our attention on patterns of shot composition and framing, freed by our inattention to narrative.

There are nearly infinite possibilities of how algorithmic videos like these might create new deformations that could help teach us something new about the original film, or constitute a compelling videographic object on its own merits. Each act of deformative videographic criticism takes approximately two minutes to randomly create itself, generating endless unforeseen critical possibilities.

Next time: a videographic take on another film studies deformance, Nicholas Rombes’s 10/40/70 project.

[1] Lisa Samuels and Jerome J. McGann, “Deformance and Interpretation,” New Literary History 30, no. 1 (1999): 25–56.
[2] Stephen Ramsay, Reading Machines: Toward an Algorithmic Criticism (Champaign: University of Illinois Press, 2011); Mark Sample, “Notes towards a Deformed Humanities,” Sample Reality, May 2012, http://www.samplereality.com/2012/05/02/notes-towards-a-deformed-humanities/.


One of the outcomes for the Scholarship in Sound and Image workshop we hosted in June is a forthcoming book, The Videographic Essay: Criticism in Sound and Image, that Christian Keathley and I are writing/editing. I’ve written a chapter focused on copyright and fair use issues, which I have posted below for open commentary and feedback before we send the book to press. I’d appreciate anyone who is interested in videographic criticism or remix video to let me know if this chapter covers your questions about copyright, as well as copyright experts letting me know if you think anything should be clarified or changed. Thanks in advance!

[Note: this post has been updated following thoughtful feedback from Steve Anderson and Kevin Ferguson. I will continue to update it with any revisions to maintain it as a useful open resource.]


But Is Any Of This Legal?: Some Notes About Copyright and Fair Use

There comes a time in any discussion about videographic criticism when the question of copyright comes up.  As with any form of culture that involves making something new out of materials created by others, videographic criticism raises key issues around notions of ownership, authorship, originality, and ethics.  We cannot be comprehensive in such a short volume, and luckily there are many useful resources available to educators and scholars listed at the end of this chapter — including a videographic exploration of the topic, Eric Faden’s ‘A Fair(y) Use Tale’, which explains copyright and fair use via an assemblage of clips from Disney animated films.  This chapter provides only a brief overview of the topic, hopefully reassuring videographic practitioners and teachers that what they aim to do is (probably) legal.

A few important caveats.  First and foremost, we are not lawyers and this is not legal advice!  As with any practice that might tread into thorny legal areas, it is up to you to decide how much risk you are willing to take, and to research the particular issues that might arise in consultation with experts.  We will note that for those videographic makers and teachers working at universities, institutions tend to be quite risk-averse, so you should know that if you ask your university lawyers or copyright experts if what you are doing is legal, odds are they will say ‘no’.  Likewise, you could always ask permission to use copyrighted material in videographic work, but in most instances (especially if the original is owned by a commercial media company), the answer will be to decline the request. It is debatable whether proceeding with what might be considered a fair use after the rights holder has refused permission strengthens or weakens a fair use claim: asking permission might be viewed as an act of good faith that is important to establish in legal proceedings, but it also puts your transformative work on the radar of a rights holder, who might be inclined to pursue costly legal action to suppress your work.  We also should note that our experience and knowledge is based in United States copyright law, where ‘fair use’ is a legal exception to copyright; few countries follow that exact model, while a number have similar ‘fair dealing’ provisions but with significant variations from each other.  If you are producing videographic criticism outside the U.S., you should explore any relevant national laws.[1]

Within the United States, most videographic criticism falls squarely under the provisions of fair use, allowing you to reuse copyrighted materials without permission, with some important exceptions.  Fair use is vague by design, requiring a judgment call (by a judge in court) as to whether a use violates copyright law based on four interrelated factors: the nature of the use, the nature of the copyrighted work, the extent of the original being used, and the impact the use might have on the market value of the original.  None of these factors overrides the others, and all are judged on a spectrum of degrees, rather than a simple ‘yes or no’ binary.  In fact, almost no works are ever formally evaluated to be fair or infringing uses, as that requires an actual court case, which rarely happens.  However, a knowledge of fair use guidelines is helpful in assessing whether a use would likely be judged as allowable in the rare case of an actual lawsuit going to court, and can be asserted as a defense to any pre-trial actions. Generally, most videographic criticism would likely be seen as fair rather than infringing use on all four factors, although there are often wrinkles involved in some cases.

The first factor concerns the nature of the use of copyrighted materials. A videographic essay is by definition a transformative use of original material, aimed at providing commentary, criticism, and/or parody that fulfills the spirit of fair use.  Additionally, it is often noncommercial and educational, which also leans toward fair use; however, some videographic essays have been distributed commercially, as with supplements to DVD releases, so there is no single mandate that fair uses must be noncommercial.  However, not every element in a videographic piece might fall under fair use, an issue that often arises with music. Consider the epigraph exercise created by Jason Mittell as mentioned earlier in the book.

This video uses three copyrighted sources without permission: footage and sound from the film Adaptation, quotations from Michel Foucault’s essay ‘What Is an Author?’, and music from the song ‘I Should Live in Salt’ by The National. The film clip and textual quotation seem both to be clearly transformative in nature, aimed at critical commentary.  While the use of the song is transformative by creating a sonic loop from its opening 15 seconds, there is no commentary or criticism implied in its use — in fact, the primary reason for its use was that the mood it evoked felt appropriate, suggesting that it borrows something from the original without transforming it.  For that reason, the music would probably fail the ‘nature of use’ factor, but that does not necessarily mean it would be ruled an infringement.

The second factor concerns the nature of the original copyrighted work(s) being used without permission.  Typically, works that are more original and creative are given more protection than less original works.  This is not a judgment of quality, but of process and intent, as incorporating shots from a news report showing a public protest would be regarded as less protected than incorporating an original monologue from a fiction film.  This factor is typically the least significant in videographic criticism, as most sources are from original fictional materials (or highly original documentaries).  All three sources in the Adaptation epigraph qualify as original protected works, thus raising the bar for the other three factors; if the music were replaced by a copyrighted recording of crowd noise, then it would likely be accorded less protection than an original musical piece like The National’s song.

The third factor concerns the extent of the use of copyrighted work, focused on the quantity and quality of the portion used.  There is a misconception that there is a magic percentage that is allowable, such as 5% or 10% of the original, but this is untrue.  Like all factors, extent is a judgment call that considers both how much of the original is used, and to what degree that use repurposes the ‘heart’ of the original.  Most videographic criticism about feature films or television programs uses only a small portion of the originals — the Adaptation epigraph incorporates 30 seconds of a 114-minute film, three sentences from a 20-page essay, and 15 seconds from a four-minute song, all of which are clearly very small portions of their originals (approximately 0.4%, 0.7%, and 6.2% respectively).  Additionally, none would be considered the most essential parts of those originals, as the looped instrumental guitar riff is the most pared-down element from the song, and neither the quotations nor the film clip would be regarded as essential.  However, imagine a videographic essay focused on a short film or an epigraph that quotes a large portion of a poem — such uses would be more likely to be considered infringing. Likewise, some transformative works can reuse the entirety of the original, such as Douglas Gordon’s 24 Hour Psycho or Matt Bucy’s Of Oz The Wizard, an alphabetized remix of The Wizard of Oz—these would certainly fail the third factor, but potentially still be upheld on the other three.

The fourth factor considers how the use might impact the value of the original, especially concerning the effect on its commercial possibilities. While a videographic essay that offers a highly negative analysis of a film might arguably suppress its commercial viability, the transformative critical role would override that concern, in the same way that a negative review that quotes a book might discourage sales without constituting a copyright violation. The more relevant question is whether the transformative use would effectively usurp the original’s commercial value, leading consumers to choose the derivative work instead of the original. This is hard to imagine for most videographic work; if anything, transformative reuse of materials as in this Adaptation video would more likely inspire people to seek out the original film, essay, or song to understand their broader contexts. However, if a videographic piece did potentially curtail the market for a similar derivative work produced by the original rights holder, such as an analysis of a film scene that might be included as a special feature on a DVD, it might be regarded as an infringement.

As mentioned above, fair use is primarily understood as a legal defense that can be asserted in court if a copyright holder sues you for infringement, and these four factors would come into play in such hearings. However, this rarely actually happens, as most accusations of infringement never reach formal legal proceedings, or they are settled before rulings are issued; as of 2015, only one case involving videographic work or video remix has yielded a legal ruling (and it was determined to be fair use).[2] Given how unlikely it is that formal legal proceedings will result in directly judging each of the four factors on their merits, it is probably not worth getting bogged down in those legal particularities. Another approach is to follow the “best practices” of other videographic work, as these are the more common precedents of creative transformative uses that have not been found to be infringing; in most cases, rights holders do not object to transformative reuse, and thus we should consider the many instances of videos being published without objection as establishing community norms of best practice. The Center for Media & Social Impact has documented best practices in fair use for a number of different types of creative and critical practice, including the most relevant Code of Best Practices in Fair Use for Online Video. Based on these best practices, nearly all videographic works clearly fall within the purview of fair use.

Just because legal proceedings are rare doesn’t mean that infringement accusations do not occur in the videographic realm. The most common situation is when somebody posts a videographic work on a sharing site like YouTube and receives a takedown notice, as in the publicized case of prominent videographic critic Kevin B. Lee versus YouTube.[3] Such sites have automated ‘bots’ that search new videos for copyrighted material, and when there is a match with such footage or music, the system will disable the video. There is no analysis for fair use or consideration of the various factors that might override potential infringement, and some automated takedowns are ‘false positive’ hits for non-copyrighted material (especially music). A recent court ruling did place a burden on sites to consider fair use before taking down a video, although it is unclear exactly how that will affect these automated systems. If your video has been flagged and taken down, you can file an appeal, dipping into a legal realm that few video makers are familiar with. Even if your fair use claim is upheld by the site (as such claims often are), the chilling effect discourages video creators from transforming copyrighted material out of uncertainty and fear.

In actuality, the risks of posting a video using unauthorized copyrighted material are quite low. The most common outcome would be a takedown from a video site, which would require either appealing or moving the video to another site. CriticalCommons.org is a nonprofit site designed for academics to share videos for teaching and research purposes; it has no automated takedown system, its operators are strong advocates for fair use, and it is thus a useful place to post videographic work with minimal fear of takedowns. Regardless of the hosting site, it is extremely unlikely that any action would proceed beyond a takedown request or cease-and-desist letter, as the upside for a rights holder in suing an academic videographic creator would be minimal. In fact, the potential negative press coverage and reputation damage could ultimately be more harmful to a company, and the last thing a media corporation wants is a court ruling that helps further establish and reinforce fair use rights. However, the fear of getting a threatening letter from corporate lawyers can be sufficient to make an independent video maker withdraw their work and stop posting videographic work, even if the legal threat is not substantive.

In the United States, there is another level to copyright concerns beyond fair use. The Digital Millennium Copyright Act (DMCA) added a key obstruction to videographic and remix work: the anti-circumvention provision. The 1998 law made it illegal to override copy protection systems on DVDs and other forms of digital rights management (DRM), regardless of whether the resulting use was fair, or even whether the disc contained a non-copyrighted film; additionally, circumventing DRM was made a criminal rather than civil offense. Thus even when a videographic essay is clearly a fair use or draws on authorized material, it was a crime to circumvent the copy protection on a DVD in order to create clips and remix the footage.

Thankfully, the law allows the Library of Congress to establish exemptions to this provision, and since 2010 such an exemption has made it legal for critics, scholars, remixers, and students to override DVD protections to edit clips for scholarly and educational purposes, including videographic criticism. This exemption was expanded to include Blu-ray discs in 2015, meaning that it is no longer illegal to ‘rip’ a DVD or Blu-ray in order to create videographic criticism, regardless of any eventual fair use determination. However, many university technologists and copyright authorities are still reluctant to embrace these exemptions, fearing potential litigation, so it is important for academic video makers to assert our own rights and those of our students. Additionally, technologies of video distribution are changing faster than the laws: many source materials may only be legally available via online streams or digital downloads, which are not exempted from anti-circumvention laws, and circumventing them might expose subscribers or purchasers to claims of violating terms of service. Finally, fair use is predicated upon transforming lawfully obtained material, and thus the rise of illegal file-sharing might tempt videographic critics and student creators to use illegally downloaded videos as source material, which would greatly weaken any fair use claim (as well as open creators up to other legal action).

It is clearly vital to follow and participate in these legal developments, as the exemptions need to be renewed every three years and new technologies pose new obstacles to the otherwise legal practices of videographic criticism. Fair use has been compared to a muscle that will atrophy if not actively exercised; videographic criticism is some of the most vigorous exercise that scholars can offer their fair use muscles.

 

Resources on Copyright and Fair Use

The Center for Media and Social Impact has many resources available for understanding fair use, including “best practices” guides for a number of relevant realms, such as online video and documentary filmmaking: http://www.cmsimpact.org/fair-use

The Electronic Frontier Foundation has aggressively defended fair use rights and transformative works, with details on case law and resources for defending against takedowns: https://www.eff.org/issues/intellectual-property

[in]Transition collects and updates resources for videographic criticism, including fair use and copyright: http://mediacommons.futureofthebook.org/intransition/resources

Steve Anderson, “Fair Use and Media Studies in the Digital Age,” Frames Cinema Journal 1, no. 1 (2012), http://framescinemajournal.com/article/fair-use-and-media-studies-in-the-digital-age/.

Patricia Aufderheide and Peter Jaszi, Reclaiming Fair Use: How to Put Balance Back in Copyright (Chicago: University of Chicago Press, 2011).

Peter Decherney, Hollywood’s Copyright Wars: From Edison to the Internet (New York: Columbia University Press, 2012).

Eric Faden, “A Fair(y) Use Tale,” Online video, 2007, http://cyberlaw.stanford.edu/blog/2007/03/fairy-use-tale.

Lawrence Lessig, Remix: Making Art and Commerce Thrive in the Hybrid Economy (New York: Penguin Press, 2008).

Jason Mittell, “Letting Us Rip: Our New Right to Fair Use of DVDs,” ProfHacker, July 27, 2010, http://chronicle.com/blogs/profhacker/letting-us-rip-our-new-right-to-fair-use-of-dvds/25797; Jason Mittell, “How to Rip DVD Clips,” ProfHacker, August 12, 2010, http://chronicle.com/blogs/profhacker/how-to-rip-dvd-clips/26090.

[1] For a good overview of comparative international fair use and fair dealing provisions, see The Fair Use / Fair Dealing Handbook, http://infojustice.org/archives/29136.

[2] Northland Family Planning Clinic, Inc. v. Ctr. for Bio-Ethical Reform, 868 F. Supp. 2d 962 (C.D. Cal. 2012).

[3] See Matt Zoller Seitz, “Copy Rites: YouTube vs. Kevin B. Lee,” Slant Magazine, January 13, 2009, http://www.slantmagazine.com/house/article/copy-rites-youtube-vs-kevin-b-lee for a discussion of this case.


Every year, WordPress sends users a Year-in-Review email highlighting all of their blogging over the past year. For 2015, my blogging consisted of… four posts. This made me sad.

So even though I don’t typically do them, I’m making a New Year’s resolution to blog more. I’m not going to wait until I have a fully thought out post to share. I’m not going to avoid posts that are just sharing links and random thoughts. In other words, I’m going to try to blog as if Twitter didn’t exist. My goal is at least one post every two weeks – I’ll strive for one a week, and settle for once a month.

I do have two things to share that deserve the more archival presence of a blog post rather than just an ephemeral tweet. Both grew out of the topic of my last blog post (from six whole months ago!), our summer workshop on videographic criticism. The direct outgrowth is a special issue of [in]Transition that I co-edited with Christian Keathley, featuring five new practitioners of videographic criticism who developed their craft at our workshop. It’s a fabulous group of videos, highlighting a range of approaches and topics; additionally, each video features two thoughtful peer reviews that do a great job advancing the conversation over how videographic work functions as scholarship. I’m quite proud of this issue and the work that my colleagues produced.

The second publication to share came out last month, and was an indirect byproduct of the workshop. One of our participants, Tracy Cox-Stanton, edits the journal The Cine-Files, and she invited all of us to contribute to a dossier on teaching specific films. I asked if she’d bend the “cine” focus of the journal to consider a piece about teaching The Wire, and she obliged. The resulting essay, “Teaching The Wire,” relays an anecdote from 6 years ago and explores the course I discussed back when I was an active blogger. I’m proud to be part of a great dossier of pedagogical reflections, even if I’m the lone teacher of television represented.

Okay, back on the blogging horse – so more to come soon!

 


The last two weeks were some of the most exciting and energizing of my academic career. My colleague Chris Keathley and I hosted an NEH-sponsored digital humanities workshop at Middlebury, called Scholarship in Sound & Image, focused on producing videographic criticism. We define videographic criticism as creating videos that serve an analytic or critical purpose, exploring and presenting ideas about films and moving images via sounds and images themselves. This workshop flows directly from the journal of videographic criticism, [in]Transition, that Chris and I co-founded (with Catherine Grant, Drew Morton, and Chris Becker) – and which recently won an Anne Friedberg Innovative Scholarship Award of Distinction from the Society for Cinema and Media Studies. It also connects with my own work as faculty director of Middlebury’s Digital Liberal Arts Initiative.

This post does not aim to recap the entire workshop, nor share everything that we did – Chris and I are working on another way to capture that material. But as I and others have been posting about the workshop on social media, people seem really interested to know more about what we did. Additionally, my role in the workshop was a hybrid of facilitator and participant, as I produced my own videos alongside other attendees, who were faculty from other institutions across the U.S. and Europe – prior to this workshop I had no direct experience making videographic criticism, so this marked my own transition from theorist to practitioner. And as is my wont, when I make something, I want to share it. So here are the videos I produced for the workshop, framed within the assignments we gave participants – this should provide a good taste of the type of work we undertook.

Our approach is based on a couple of core principles. The first is to learn by doing – even though more than half of our participants had virtually no video editing experience, we had them start making projects on the very first day. Luckily my colleague Ethan Murphy is fantastic at teaching the tools of video production, so he gave them a crash course in Adobe Premiere on day one, and then everyone learned via practice. Our mantra in the first week was Make First, Talk Later – a distinct challenge for a group of academics!

Our second principle is that formal parameters will lead to content discoveries – instead of asking participants to make a video that serves a particular content goal (such as criticism, analysis, comparison, etc.), we created exercises with very strict formal requirements, but open to whatever content people were interested in. To facilitate this process, each participant selected a single film or series to serve as their source text for a sequence of five exercises to be produced in the workshop’s first week; this produced a strong focus for experimentation, and allowed participants to come to know each other’s films as the exercises accumulated. I chose the film Adaptation, as I am writing a short book about the film this summer (for this book series) – while I am interested in making videographic criticism about television, I correctly guessed that working with a source text as long as a television series would be far more unwieldy than the contained length of a film.

Below are the parameters for each video exercise that we assigned, along with my own creation for each assignment. Remember, these are formal etudes rather than motivated works of scholarship; however, I and many of my fellow producers did create videos that offer meaningful and effective explorations of our chosen films, especially for viewers familiar with the originals.

Continue reading ‘Making Videographic Criticism’


I was recently invited by The Conversation to write something for their site exploring some of the arguments of my new book for a general audience. I like reading The Conversation to see academics writing in a journalistic voice (something that some are better at than others), and I support their embrace of Creative Commons licensing and open republishing. So in that vein, here’s the piece below the fold – the arguments will be familiar to regular readers of Just TV, but hopefully worth sharing regardless.

Continue reading ‘Why has TV storytelling become so complex?: A journalistic take’


I’m holding in my hand a copy of my new book, Complex TV: The Poetics of Contemporary Television Storytelling.

Complex TV selfie

Every book is its own unique journey. This one feels like the longest (which it was) and most significant, at least intellectually if not professionally. I presented the earliest version of the ideas that would eventually form the spine of the book over ten years ago, at a colloquium at Middlebury College, where my friend and colleague Michael Newbury made the hugely influential suggestion that I check out Neil Harris’s concept of the operational aesthetic as a parallel to what I was describing about television storytelling. I published the first essay that would chart the book’s vector in 2006, as “Narrative Complexity in Contemporary American Television,” which definitely came out at the right time and place to generate a lot of enthusiasm and momentum for this project.

Even though over the past 10 years I wrote a different book and edited another, Complex TV has been the project that has occupied most of my thinking, that fueled my work during my wonderful year in Germany, and that framed my identity as a scholar. The overthinking pessimist in me thinks about the hole that its completion creates, the absence of scholarly identity and drive that has yet to be filled. But the rest of me shouts that side down, as I’m eager to celebrate the book’s launch at the Society for Cinema and Media Studies conference this week in Montreal and enjoy the sense of completion.

I want to briefly focus on the book’s paratexts, as I am as proud of how the book has been published as I am of the content. As many of you know, I wrote the book in public, posting each chapter to MediaCommons and soliciting open peer review throughout 2012-13. I’m happy to say that the MediaCommons draft of the book will remain online for the foreseeable future, serving both as an open access version of the book’s ideas and evidence of the writing process. Hopefully this will help demonstrate that making a book’s content available online for free helps rather than hurts a book’s sales. (Feel free to add to that evidence via NYU Press, Amazon, or your favorite bookseller!)

MediaCommons hosted the pre-print paratext, but I have created another site for the book, collecting supplementary videos of scenes that I reference and discuss. The videos themselves are hosted on Critical Commons, an essential site for sharing fair use video content, while the supplementary site is published in Scalar, an incredibly rich tool developed at USC for multimedia interactive publishing. I’m really happy with how the site turned out, combining quotes from the book and videos via a number of interfaces. I particularly like this gallery view, representing the book in thumbnail form.

Complex TV video gallery

So now my work is done. I leave it to the readers to explore the book and its paratexts, and please let me know what you think!


I have two new book chapters out that I want to share. The first is an essay called “Lengthy Interactions with Hideous Men: Walter White and the Serial Poetics of Television Antiheroes,” published in a brand new anthology, Storytelling in the Media Convergence Age: Exploring Screen Narratives, edited by Roberta Pearson and Anthony Smith. The chapter, which is largely about Breaking Bad and antiheroes, is adapted from the Character chapter in Complex TV, so I won’t reproduce the content here.

The second is unrelated to television narrative altogether (or at least was, before I got hold of the topic and started off with a diversion into television studies). A couple of years ago, Mark J.P. Wolf invited me to contribute to a book he was editing about LEGO. I declined to write a chapter, as I was knee-deep in finishing Complex TV and couldn’t commit to an original research essay on a new topic, but I was interested in the topic and offered to write an afterword for the book.

It evolved into something a bit more pointed than a typical afterword, and I want to share it here, now that the excellent book has been published as LEGO Studies: Examining the Building Blocks of a Transmedial Phenomenon, in hopes of inspiring people to read the whole book. I post it on the day that a crime against LEGO, against art, and against humanity itself was perpetrated: The LEGO Movie was snubbed in the Oscar nominations. It’s small solace, but I offer this essay in appreciation of how The LEGO Movie inspired my own thinking about how we can both build things up and tear them down productively.
Continue reading ‘D.I.Y. Disciplinarity — (Dis)Assembling LEGO Studies For The Academy’


This is not an organized or ranked list. This is a collection of the cultural things (mostly TV, but not exclusively) that I most loved in 2014, presented in alphabetical order. There are many things not on this list – they are absent because either I did not love them or I did not consume them. (If it is a movie, it’s probably the latter, as I saw almost no new films this year.)

The Americans – one of those odd series that I always fall way behind on, but always love when I watch it. I still have a few episodes left to finish the second season, which is completely inexplicable.

Andre Braugher’s performance on Brooklyn 99 – I enjoy the series well enough, but Braugher cements his status as one of television’s all-time most indelible performers with his supporting role. Has any sitcom ever been so defined by a purely deadpan character before? He never tells a joke, but he is always the funniest person on-screen. If I could find a video of his monologue from “The Mole” online, I’d provide it as Exhibit A for how to create humor without being funny.

Bob’s Burgers – I find myself taking it for granted by now, but Bob’s might be the greatest animated series since The Simpsons, and still completely unpredictable in its fifth season.

The Colbert Report – he went out in style, embracing both the egomania of the title character and the sense of gonzo absurdism that has always made the show more than just a satire of punditry.

Fargo – there was no reason to expect this would be anything but a failure. Instead, it took full advantage of its form, providing intertextual pleasures with the film, while functioning on its own as a delightfully dark morality play.

Girls – I still love it, and when it’s on target (like the beach house episode), nothing is quite like it.

The Good Wife – consistently the best show on television. Doing everything that makes both network and cable drama great, and getting better every season.

Hannibal – the first season was a dark romp; the second was pure madness. The finale was probably the most sustained example of avant-garde filmmaking I’ve ever seen on television.

Her – released late last year, but I only saw it this summer, and I loved it too much to have it go unmentioned. Spike Jonze has directed four features, all perfect in their own ways.

Jane the Virgin – I love whimsy so much, and Jane nails its tone perfectly.

Last Week Tonight with John Oliver – taking the Daily Show model forward by embracing the long-form investigative comedy structure that the weekly non-commercial format allows.

The Leftovers – a searing season of television that was imperfect, but so powerful. Carrie Coon’s performance was probably the best I’ve seen this year, with one moment from the finale burned into my retinas for months.

The LEGO Movie – while I saw very few films this year, I’m pretty sure this would be one of the best I saw regardless. The first 3/4 are a pitch-perfect blend of action, comedy, and satire, while the final act makes it into a heartfelt postmodern masterpiece.

The LEGO Movie Videogame – my son and I play LEGO videogames together regularly, and this is my favorite. Besides nailing the tone of the film (and adding many more jokes), it’s the best implementation of the game series’s mechanics and gameplay yet.

Olive Kitteridge – one of the great untapped potentials of American television is creating miniseries adaptations of interesting novels, unfettered by the time constraints of film. A few years ago, Todd Haynes set the bar with his brilliant Mildred Pierce miniseries, and this year HBO succeeds again with Lisa Cholodenko’s wrenching miniseries. Amazing performances and sense of place.

Review – prior to this, I knew little of Andy Daly, but soon discovered his brilliance in a masterful comic performance. The scene in outer space was the most I laughed all year.

Serial / This American Life – I’ve been writing about Serial for Antenna, where I chart out many of the issues I have with it, especially its uneven use of its serial form. Nevertheless, it is great audio storytelling that has rightly garnered attention. But I hope it will inspire people to listen more to This American Life (from which it spun off), which still does thoughtful nonfiction audio better than anyone – probably my favorite podcast moment of the year was this story about a man leaving a Utah cult with his kids, featuring heartbreaking music from Stephin Merritt.

Solforge – despite having taught a videogame course this spring, I played very few traditional console or PC games this year, and nothing notable (except LEGO Movie Videogame). But I played a lot of mobile games, and by far my favorite is this collectible card game that takes full advantage of its native digital format – think Magic the Gathering but not constrained by the material limits of cards. I’ve tried the far-more-popular Hearthstone, but Solforge has more imaginative gameplay and interesting mechanics. Like all CCGs, it takes a while to get into it, but if you sign up via this link, I’d be happy to send you some cards to help build decks.

Transparent – in a year with many great new series, I think this was my favorite. Despite having no characters who are conventionally likeable, it exudes warmth and affection.

Sharon Van Etten, “Are We There” – “Tramp” was my favorite album of 2012, so I was skeptical that this year’s follow-up could match those heights. “Are We There” might even be better, working as one of those albums that I simply cannot stop listening to.

Veep – I’m uncertain about the direction the plot took this season, but no series is more consistent in generating laughs.

The Wire scholarship – although it is widely regarded by academics as peerless television, scholarship about The Wire has been pretty erratic. 2014 saw the publication of two excellent and accessible (and short!) books about the series, both of which I had the honor of reviewing pre-publication: Frank Kelleter’s Serial Agencies: The Wire and its Readers is a compelling take on the cultural circulation of the series, while Linda Williams’s On The Wire is an impressive analysis of the series in the framework of melodrama. Both are highly recommended for Wire fans and scholars alike.

You’re the Worst – again, no expectations helped make this comedy into a surprising little gem, with far more of a soul than its misanthropic premise might suggest.

One final trend to mention is that this seems to have been the year when television direction began to eclipse (or at least match) its writing. There have always been series whose style and tone help distinguish them, but so many of my favorite series this year (Fargo, Transparent, Hannibal, Girls, The Leftovers, Olive Kitteridge) were notable for their innovative and striking visual and sonic sensibilities. Even series that I didn’t love this year, like True Detective, Louie, Game of Thrones, Gracepoint, and The Missing (and some I haven’t watched yet, like The Knick and The Honorable Woman), stood out more for their excellent direction than their writing (at least this year). It will be interesting to see how this plays out going forward, as TV’s production model still privileges writers over directors, but perhaps this is shifting, as per The Knick.

Finally, one of the worst elements of 2014 was how bad I’ve been about blogging. I really hope to post more frequently in 2015 – see you then!



