As mentioned last month, we’ve been fortunate enough to get another NEH grant to conduct two more videographic criticism workshops at Middlebury, in June 2017 and June 2018. We are now accepting applications for the 2017 workshop, which is open to graduate students in Film & Media Studies or related disciplines. Please spread the word to qualified and interested graduate students!
I also just returned from Miami University of Ohio, where I led a two-day workshop on videographic criticism. Chris Keathley and I developed a highly compressed excerpt from our summer workshop aimed at faculty new to videographic criticism, and at Miami I offered it to a dozen faculty. I wasn’t sure how the approach would hold up over just two days, but I think it was highly successful (and the participants seemed to agree!). So we’ve decided to publicly “offer” ourselves as available to visit campuses (or a consortium of nearby institutions) to run the compressed two-day version for faculty and/or grad students interested in a crash course in videographic criticism. If you’re interested, let me know!
Filed under: Academia, digital humanities, Middlebury, Videographic Criticism
In my last post, I closed the book on my spring Television & American Culture course, reflecting on the general success of using specifications grading for the course. As I launch into a new semester, I’m using the same approach in a different course, Theories of Popular Culture (the whole syllabus is available at the link), making some adjustments to address both a very different set of educational goals and contexts and some of the lessons learned from my first go-round.
Theories of Popular Culture is an upper-level seminar (around 15 students once the dust clears), fulfilling both the theory requirement for the Film & Media Culture department and Middlebury’s College Writing (CW) requirement (all students must take an introductory writing course as part of their first-year seminar, and an advanced CW course like this, ideally within their major). Thus the bar is set much higher than in last semester’s intro course, and the expectation is that students produce both higher-quality work and a greater quantity of writing and revision. This is the eighth time I’ve taught this course, and I think both the content and assignments work very well, so I was not looking to do a major overhaul of either. Rather, I was trying to implement this grading system to increase student flexibility and transparency, focus on learning over grades, and avoid the stresses and negative patterns tied to traditional grading.
In adapting the course learning goals to the tiered system that forms the foundation of specifications grading, I immediately ran into a problem with the CW requirement: students fulfill it by passing a course tagged as CW. That means the goals of the CW requirement must be included at the base level of my course, so that every student who passes the course fulfills them. While the CW program doesn’t provide explicit learning goals, I tried to adapt some of its advice to CW faculty concerning writing and revision, baking it into the course learning goals:
All students who pass the course (with a minimum grade of C) will have demonstrated the ability to:
- Describe how various theoretical frameworks approach the study of popular culture
- Apply specific vocabulary and concepts to analyze popular culture
- Read dense theoretical writings and summarize their core ideas
- Communicate their ideas orally and in writing with fluency and clarity, per college CW standards
- Revise their writing to improve both ideas and communication, per college CW standards
Students who achieve a higher level of mastery (with a minimum grade of B) will have also demonstrated the ability to:
- Analyze popular culture with original insights, effective use of sources, and connections between theoretical models, different examples, and cultural contexts
- Engage in serious conversation about often fraught topics with an ethos of “rhetorical resilience”
Students who achieve the highest level of mastery (with a grade of A) will have also demonstrated the ability to:
- Create, substantiate, and communicate an original analytic argument that synthesizes multiple facets of popular culture, appropriate types of evidence, and theoretical approaches with sophistication
- Meet class expectations per the assigned schedule with consistency
I admit I’m not entirely happy with this breakdown, because I believe my expectations for successful writing and revision per the CW program are higher than the expectations for the C level should be. Additionally, the need to produce a significant amount of writing and revision for CW credit (typically 25+ pages) takes away one of the most successful aspects of my spring course: making the final essay optional. The best solution I came up with would be to disentangle the CW credit from the course grade: students would earn whatever grade they earn, and those who met the CW expectations would receive that credential separately (and those who didn’t, wouldn’t). However, that’s not how things work here: the CW marker is tagged to a course, not an outcome, so anyone who passes a CW course fulfills the requirement on their transcript. Needless to say, reworking this system is not something that an individual faculty member can implement on an ad-hoc basis, so I’m stuck with keeping the CW goals as part of the course’s ground-floor requirements, and working with students to make sure they fulfill them.
Two of the other shifts in how I scaffold assignments and assign grades are embedded in the assignment bundles:
C Bundle – Students who complete the following will pass the course with a grade of C:
- Actively attend all course meetings, with up to five absences, per the attendance policy below
- Complete at least 8 reading responses to a Satisfactory level
- Complete all 4 essays to a Satisfactory level, with at least one successful revision
B Bundle – Students who complete the following will pass the course with a grade of B:
- Actively attend all course meetings, with up to three absences, per the attendance policy below
- Complete at least 10 reading responses to a Satisfactory level
- Complete all 4 essays to a Satisfactory level, with at least one Sophisticated mark and at least one successful revision
- Actively demonstrate engaged and productive in-class participation during at least four course meetings
A Bundle – Students who complete the following will pass the course with a grade of A:
- Actively attend all course meetings, with up to two absences, per the attendance policy below
- Complete at least 12 reading responses to a Satisfactory level
- Complete all 4 essays to a Satisfactory level, with at least three Sophisticated marks and at least one successful revision
- Actively demonstrate engaged and productive in-class participation during at least eight course meetings
One key difference is that instead of different versions of an assignment (Basic vs. Advanced prompts for my TV exams), I’m implementing differential evaluation for the same prompt, allowing Satisfactory and Sophisticated as dual passing marks. Each assignment will have some additional specifications to achieve Sophisticated, so it does function somewhat like an Advanced version, but it is really more about execution than taking on different questions. In my mind, a Sophisticated essay will demonstrate upper-level learning around originality and synthesis of ideas, as well as more effective rhetoric and prose style in conveying those ideas. The pitfall to avoid is treating this as a backdoor way of giving A vs. B grades under different names, so I will strive to emphasize the specifications rather than more subjective evaluation, especially in giving feedback for potential revisions.
The other major change involves class participation. In my TV class, I was a bit dismayed that a few students who earned an A or A– never contributed much in class discussions; although I technically said that attendance would measure participation, there was no real way to implement that. So given the smaller size and more theoretical/analytical bent of this course, I’ve created a tracking system for participation: at the end of each class, I will mark each student who I thought demonstrated active engagement and made productive contributions that day. With a 15-person class, that seems manageable, although we will see if I can be consistent in my tracking.
The final difference involves the use of tokens and flexibility. Last semester, I found that too many students were trying to game the system by handing in weak first drafts and revising them as de facto extensions, or relying on tokens to cover falling behind on their weekly responses. So this semester I’m being stricter with tokens; students get three to use for any of these purposes:
- Eliminate an absence from their attendance record
- Count an Unsatisfactory or not completed reading response as Satisfactory
- Revise and resubmit an Unsatisfactory essay to fulfill Satisfactory expectations (due 1 week after essay is returned)
- Revise and resubmit a Satisfactory essay to fulfill Sophisticated expectations (due 1 week after essay is returned)
- Submit an essay assignment up to 48 hours late
Unlike last semester, the first revision is not “free”: each revision will cost a token. If a student uses all three initial tokens and needs more for revisions, additional tokens can be “purchased” at the cost of one gradation of the final letter grade—thus if a student achieves the expectations for the B bundle but must revise an essay multiple times and uses four total tokens, that student would receive a B– for the course. While this may seem a bit harsh, it will hopefully discourage procrastination and manipulation of the expectations, while still providing some agency and control for students and reinforcing the pedagogical values of transparency and flexibility that students really valued last spring.
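For concreteness, here is a minimal sketch of that token arithmetic (a purely hypothetical illustration of the policy described above, not anything actually used to compute grades in the course):

```python
# Hypothetical sketch of the token "purchase" rule: every token used beyond
# the three free ones costs one gradation off the bundle's final letter grade.
GRADATIONS = ["A", "A-", "B+", "B", "B-", "C+", "C", "C-"]

def final_grade(bundle_grade: str, tokens_used: int, free_tokens: int = 3) -> str:
    extra = max(0, tokens_used - free_tokens)
    index = min(GRADATIONS.index(bundle_grade) + extra, len(GRADATIONS) - 1)
    return GRADATIONS[index]

print(final_grade("B", 4))  # -> "B-", the example from the paragraph above
```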
Like before, this is an experiment. My primary goal is to encourage students to focus on learning rather than grades, and take more ownership of their education. But I also recognize that this is a very challenging course, both with the highly theoretical content and the quantity of writing, so I expect there will be some bumps along the way. I will hopefully offer updates as we go.
Filed under: Academia, Middlebury, Teaching
Tags: specifications grading
I’ve had a lingering “to be continued” here for a few months, as I promised to report on my experiment with specifications grading from the spring, beyond my first mid-semester update. The delay was first due to waiting on the results of a post-semester survey of my class and of a colleague’s course that used a similar approach to grading. Once we got those results, my head was already deep into summer mode of writing deadlines and family fun. But now, on the eve of the fall semester, I’m ready to return to the classroom and to the topic of grading.
In short, all evidence suggests that my experiment last semester was a success, and I’ll be using a similar approach to grading this fall in my course, Theories of Popular Culture. I’ll detail some of my revisions to the approach as customized for that course – a writing-intensive upper-level seminar of 15, rather than an intro-level survey of 30+ students – in another post. But here I’d like to explore how my Television and American Culture course turned out, and offer some reflections on the benefits and limitations of specifications grading.
Filed under: Academia, Middlebury, Teaching
Tags: specifications grading
I am tremendously excited to announce that Christian Keathley and I received another Institute for Advanced Topics in the Digital Humanities grant from the National Endowment for the Humanities, allowing us to host two additional years of our videographic criticism workshop, Scholarship in Sound & Image, at Middlebury College in 2017 and 2018!!!
The first workshop in 2015, supported by the same NEH grant program, was hugely successful for both participants and conveners, leading to numerous published videographic works and our brief book about the approach. The opportunity to repeat the workshop two more times is wonderful, and we are already thinking about how we will evolve our approach for two new cohorts of participants.
We have already made one major change: in 2015, the workshop was open to any film & media scholar, regardless of degree or rank. However, we received more than 100 applicants for only 12 spots, so we ended up only taking participants who had their Ph.D.s in hand. For these new workshops, we’ve explicitly divided the potential participants:
- For the June 18 – July 1, 2017, workshop, participants must be enrolled in a graduate program at the time of application (December 2016).
- For the June 17 – 30, 2018 workshop, participants must have received a Ph.D. by the time of application (December 2017).
We hope dividing the applicant pool this way will allow us to reach a broader range of participants, and customize our content for different audiences.
If you are interested in applying for one of the workshops, information will be available in September 2016 on the workshop website. As of now, the information there is archived from the 2015 workshop, but it should give a good sense as to what we’ll be doing for the next two versions. I’ll post info on this blog as well, when we are ready to accept applications, so stay tuned!
Filed under: Academia, digital humanities, Middlebury, New Media, Videographic Criticism
I’m in Berlin, one of my favorite cities, to participate in the Seriality Seriality Seriality conference, the culminating event of the Popular Seriality Research Unit that I have been affiliated with for the past six years. It’s wonderful to be here to celebrate the conclusion of the research unit, and also a moment of nostalgia for my ongoing participation with this wonderful group of scholars, who hosted me in Göttingen while I wrote Complex TV.
For the conference, I participated in the first panel, along with an all-star crew of friends and colleagues Frank Kelleter, Sean O’Sullivan, Jeff Sconce, Robyn Warhol, and Daniela Wentz. Frank chaired the panel with this prompt, asking us to draft 5-minute responses: “What does it mean for the study of popular serialities that its most visible research paradigm is (American) television? How can television studies be re-imagined as part of seriality studies? Should it be? Is there serial life after television?” Below is my response, designed to provoke conversation (which it did!) – I share it here to (serially) extend that discussion:
I would like to address (or rather mention and then skirt around) the last question: “Is there serial life after television?” I think this is particularly interesting because I believe television is becoming notably less serialized. To explain why, I must acknowledge that much of the writing on serial television (including my own) has fallen prey to a misunderstanding of seriality that I’d like to address.
I have frequently defined seriality most simply as “Continuity with Gaps.” We can elaborate each of these two necessary ingredients – continuity suggests long-form storytelling, repetition and reiteration, historicity and memory, and transmedia expansion. Gaps suggest temporal ruptures, narrative anticipation, moments for viewer productivity, opportunities for feedback between producers and consumers, and a structured system for a shared cultural conversation.
Much recent scholarly work on serial television (including my own) has overemphasized the former. The past twenty years have seen a remarkable increase in long-form television storytelling, in the proliferation of continuity across media, and in cultural practices where fans expand continuities. Such broadening and deepening of continuity is important, and clearly vital to the mode of complex television that I have written about.
However, in overemphasizing continuity, we have underemphasized the gaps and not paid sufficient attention to their waning role as the dominant structure of serial distribution and consumption. The very technologies that I and others have pointed to as enabling the rise of long-form television continuities—time-shifting DVRs, bound volumes of DVD box sets, downloadable and on-demand streaming video—all short-circuit the structured system for a shared cultural conversation that serialized gaps have long offered. The latter technology of streaming video has equally disrupted serialized production and distribution practices in favor of the model of “full-drop seasons” via Netflix and Amazon, releasing a set of episodes in a distinctly non-serialized fashion. Counter to accounts in the popular press, this is not the only or most common way that people watch TV today, but it is becoming increasingly widespread and will soon be regarded as an established, normal option for media distribution and consumption, rather than just the hot new thing.
To be clear: a full-drop of a new season of television, to be viewed when and how you like, is not a serial. There are no gaps (at least between episodes – under this model, seasons become the new episode). So-called “binge viewing,” or my preferred non-judgmental term of “compressed viewing,” is not a serialized experience. There is no shared cultural conversation until everyone finishes the season on their own schedule. There are no productive gaps for viewer engagement, paratextual production, or feedback between producers and consumers. There is no method for simultaneous, collaborative forensic fandom, where viewers come together to figure out what has happened and predict what will happen. There are no opportunities for the agonizing anticipation after an anxious cliffhanger, where you would give anything to get the next episode instead of waiting a week or more—now, you just get the next episode. This full-drop mode of production, distribution, and consumption is distinctly different than seriality, and thus we need to consider what is lost when we eliminate these productive serial gaps. Compressed viewing is individualistic and decontextualized, whereas serial viewing is potentially communal, social, and rooted in its historical moment.
So back to Frank’s question: Is there serial life after television? And let me posit the inverse: Is there television life after seriality? Obviously, the easy answer to both is yes; such forms will not just become extinct, but rather evolve, transform, and mutate. But we need to think carefully about what these transformations will look like, and what the decline or remediation of such serial experiences will mean for us theorists of seriality.
Let me conclude with a communal call that comes with a memorable slogan: mind the gap. In our scholarship and conversations about seriality, let us reemphasize these gaps, and highlight how much will be lost without these structures of shared experience that are so essential to the cultural practice of popular seriality.
Filed under: Academia, Conferences, Narrative, Television
As of today, my institution Middlebury College has officially embraced open access as the default way that faculty share our research.
What this means is that we have adopted a policy whereby faculty grant the institution a license to republish their scholarly essays in an online open access repository, making it standard that copies of faculty publications are freely available, even when they have been published in high-priced scholarly journals. It does not mean that faculty have to change where we publish, or even that we must deposit our work in the repository (as there is an automatic waiver for anyone who wishes to opt-out). But by changing the default, we hope to change behavior and awareness so that it becomes commonplace for faculty to share publications through our institutional repository, and thus people searching for scholarly work will find links to these free open versions of publications. (You can learn more about OA institutional policies through the Coalition of Open Access Policy Institutions or through Harvard’s excellent resource site.)
This has been a long haul for me and my colleagues. I remember first having this conversation in 2008 with Mike Roy, who had just arrived at Middlebury as the new Dean for Library and Information Services. I was on the Faculty Library Advisory Committee, and Mike and I met to discuss what initiatives we each hoped would move forward. He introduced me to the idea of an institutional open access policy, and wondered if other faculty would buy into it. I expressed major skepticism, thinking there was a lack of both awareness and enthusiasm to go down that path for any but a small sliver of faculty. He said he’d take a slow approach, raising awareness and building momentum until we were ready to take action.
Eight years later, we’re ready. Today the faculty nearly unanimously passed the resolution that our Open Access committee, which Mike co-chaired with my colleague Svea Closser in Anthropology, drafted and discussed for over a year. We brought Peter Suber, one of the foremost experts on open access, to campus to advise our work and give a public presentation to raise awareness. We did one-on-one interviews with 50 faculty to understand how this policy might apply across various fields. We fielded and answered many skeptical questions, collected on our lengthy FAQ. We presented the policy and its rationale in at least 5 formal faculty meetings or targeted sessions. As Suber told us, keep having such meetings until faculty stop coming. (And they did.)
In the end, most people understood the policy (which is rather complicated in its legal maneuvers) and certainly grasped its intent. One thing I found interesting is how various OA supporters latched onto different core reasons to embrace the policy. Mike, given his position running our library, was motivated both by the library’s mission to disseminate knowledge broadly and by the way the huge costs of the current subscription model for closed scholarly access eat up library budgets for little gain. Svea, who studies public health in Africa and Asia, wants valuable research like hers to be available to the communities she studies, which typically lack the resources to subscribe to paywalled journals. Personally, I am most motivated by outrage over the ways that publishers take free faculty labor as writers, editors, and reviewers, and then turn around and charge our institutions to access the fruits of that labor. Over the course of our campus discussions, we heard many other good reasons to support such a policy, while the primary arguments against it boiled down to a general suspicion of such changes and of unintended consequences.
Needless to say, I am thrilled that the vast majority of my colleagues sided with us, and tomorrow we get to start the hard work of both building the technical infrastructure to make our repository functional, and the cultural work of getting faculty to implement our policy by making the open sharing of our research a new default. Kudos to my colleagues for embracing the policy, and especially to Mike, Svea, and my fellow committee members for their leadership and work, enabling me to type the rarest of all phrases: I found my work on this college committee enjoyable, productive, and fully worth my time!
Filed under: Academia, Middlebury, Open Access, Publishing
I’m excited to announce the publication of my latest book, The Videographic Essay: Criticism in Sound and Image.
It’s a gratifying publication in many ways. It is the first project that I have co-authored with my good friend and colleague Christian Keathley, and as such, it was quite fun to put together. It is based on the NEH-funded workshop on videographic criticism that we ran at Middlebury in June 2015, so it both brings back many memories from those fabulous two weeks, and shares much of what we did with a larger audience, including my overview of fair use for videographic practice. It also features the writing of three other friends who collaborated on the workshop with us, Catherine Grant, Eric Faden, and Kevin B. Lee.
I’m also quite happy with its mode of publication. The book is published by caboose books, a small independent press based in Montreal that strives to publish works in film studies that go against most trends in academic publishing by being affordable and accessible. Our book is part of a series, Kino-Agora, featuring titles that straddle the boundary between long essay and short book—ours is only 64 pages. But it is priced accordingly: you can buy the book directly from caboose for $5 plus shipping, or from Amazon for $8 (free shipping) or as a $4 Kindle download.
I also created a companion site on Scalar, featuring many examples of videographic exercises created by the participants in our workshop. The open access Scalar site should provide a good sampling of the type of work produced at the workshop, and also features numerous videos produced by participants over the past year. We hope it will be a useful resource for both teaching this type of work and for inspiring people to take the videographic plunge!
We hope the low price will be tempting enough to encourage readers to explore this new mode of critical engagement. I can certainly say that my own adventures in video making have been incredibly rewarding and have expanded my critical horizons – I hope this book will help others join in!
Filed under: Books, digital humanities, Fair Use, Not Quite TV, Open Access, Publishing, Technology, Videographic Criticism
I am quite excited to announce my newest publication, as it marks my first venture into a fully realized work of videographic criticism. “Adaptation.‘s Anomalies” was just published in [in]Transition, culminating a project I began at the Scholarship in Sound & Image workshop we hosted in Middlebury last summer. (I’m also presenting the video on a panel of videographic work at SCMS in Atlanta, Friday April 1 at 12:15pm.)
While the video stands on its own, I encourage readers to visit the journal’s version for contextualizing material, including my author’s statement and two open peer reviews that provide good insights into the project. I hope it prompts a conversation, either here or at [in]Transition!
Filed under: digital humanities, Film, Narrative, Not Quite TV, Publishing, Videographic Criticism
Last month I shared my plan to use specifications grading in my Television and American Culture course this spring semester. I just finished marking the first exam, which provides my first real opportunity to reflect on how the experiment is going. (Make sure to read that previous post for the specifics of the approach and course design.) Below I walk through the first exam, what my students did, and what this system has revealed to me about my teaching and my students’ learning.
The course has 31 students enrolled, and all seem to be on board with the grading approach. I asked students to sign a short form affirming their understanding of the grading system, and asked them to indicate (with no binding commitment) which “bundle” of assignments, and thus which final grade, they planned on working toward in the course. 85% of the students said they planned on working toward an A, with the remaining 15% indicating the B bundle. This wasn’t much of a surprise, given that the norm at Middlebury skews toward A grades – if anything, the surprise was that as many as 5 students said they were striving toward “only” a B in the course. It will be interesting to track how this initial plan matches the work that students end up doing, as I expect some who started out aiming at an A will choose to do less work as the semester proceeds, and perhaps a few will revise their aim upward.
The first exam consisted of two questions, each with two versions. The Basic versions provide opportunities for students to demonstrate their ability to restate the course content in their own words (which, on an open-book take-home exam, should not be particularly challenging), while the Advanced versions ask students to apply this knowledge to specific examples or to craft their own arguments about the concepts. Completing more Advanced questions allows students to qualify for B or A final grades, while every student must satisfactorily complete at least the Basic versions of six exam questions throughout the semester. For the first exam, all students may revise their Unsatisfactory answers at no “cost,” while future exams require them to spend “flexibility tokens” to revise answers. Each question on the three exams focuses on one of the six units in the course, so it is all very structured and, I hope, transparent as to what is being evaluated.
Filed under: Academia, Middlebury, Teaching
Tags: specifications grading
Today I started my spring course, Television and American Culture, a class I have offered around 15 times. It’s the course that inspired my textbook (of the same name), and my co-edited book How to Watch Television also was structured to fit with the course’s design. In short, it’s the course that I’ve dedicated the most work to honing, and I feel that overall it works quite well… except for one facet: grading.
I hate grading. I hate how grades function in higher education for students, for faculty, for parents, and for institutions. I hate how grades often work as an obstruction to learning, rather than a motivation, reward, or neutral assessment. I firmly believe that, at least here at Middlebury, figuring out a way to rethink the culture of grades would be the most effective and impactful reform we could make. Such reforms are challenging and slow-moving at an institutional level, but I was moved to jump into the deep end to rethink how grading works in this course. And thus I’m running an experiment this semester by completely changing the course’s grading system.
The approach I am taking is called Specifications Grading, which emerged from a fairly well-established alternative approach to grading typically called Contract Grading. I first learned about contract grading a few years ago through Cathy Davidson blogging about her use of the system at Duke. The idea bubbled around in my head for years, but I decided to give it a whirl after reading this piece by Linda Nilson on specifications grading, which is based on her book. The difference between specifications & contract grading is a bit fuzzy, and ultimately not as important as their similarities, which are tied to three key principles:
- All individual assignments are graded on a Pass / Fail or Satisfactory / Unsatisfactory basis. The bar for Satisfactory is set higher than what we typically think of as “passing work” (more like a typical B than a C), with a satisfactory assignment being one that meets its clearly articulated specifications and learning goals. This means that an assignment that meets some but not all of the goals & specs is Unsatisfactory, a much more rigorous bar than how most faculty (especially in the humanities) grade papers. It also means that I need not spend time quibbling between giving a paper a B+ vs. A–; it either meets the expectations, or it doesn’t. Instead, I can spend my assessment time providing qualitative feedback, which is more rewarding for everyone. Plus the system has options for revision, so that a student receiving an Unsatisfactory can choose to improve their work and satisfactorily accomplish the assignment goals.
- Assignments are designed to demonstrate that students have achieved the course’s specific learning goals. This seems obvious, but I was surprised by how weakly the old assignments for my course were connected to stated learning goals. Under this approach, you should be able to clearly highlight how each assignment serves the stated goals. Making those connections explicit greatly improved the conceptual basis for the assignments I give, and I hope will make assessing whether they accomplish those goals easier.
- Final grades are determined by students’ accomplishments in a hierarchy of “assignment bundles.” If we set the passing bar for the course at a C, then we designate which quantity and depth of assignments are necessary to accomplish the course’s base learning goals. Additional assignments are added to that base to reflect more sophisticated and deeper learning, creating bundles for the B and A levels. This system gives students full control over which of these bundles they will strive to accomplish, based on their own learning priorities and self-aware judgment about time management and intellectual goals.
This last system of bundles is kind of a “hack”: because most of us teach in institutions that require us to enter a single letter grade into a transcript at the end of the semester, we need to be able to produce such a metric. However, the specifications approach eliminates the stresses of grading each assignment by designing a course that allows students to choose their own learning paths transparently, as linked to grades at the end of the process. Hopefully, at the end of the semester, I can know that a student who received a B demonstrated that they learned four of the course’s explicit learning goals, while a student who received an A learned all five. (See below for the specific language laying out this system for students.)
In designing my syllabus, I embraced a tiered set of learning goals, based on various schema of levels of learning and cognition. The base level focuses on learning and comprehending the information covered in the course, and being able to express this knowledge effectively: this is what any student who passes the course should accomplish, and the C bundle assesses this knowledge. The next tier involves applying that knowledge to analyzing new examples and scenarios, with assignments in the B bundle requiring such analytical application. The highest tier invites students to generate their own arguments and synthesize both information and analytic approaches across realms of knowledge, captured in the additional requirements of the A bundle. Thus a high grade indicates not that a student did the same assignments particularly well, but that they demonstrated more challenging modes of engagement and analysis. This seems like a more accurate demarcation of learning.
So today in class, I rolled it out for students, walking through the policy statements reproduced below.* It took some time for them to grapple with the new system, but I think they got it, and I sensed that they mostly thought it was a cool idea. One said, “it’s kind of like a board game” – which I affirmed, but emphasized that “winning” means understanding the system enough to actively engage in the material to achieve the level of learning you aim to accomplish, not gaming the system. We will see how it unfolds, and I will try to update the blog on the experiment in progress. I’d love to hear what readers think of such a system, whether you’ve tried anything similar, and any advice for what might emerge as the semester progresses.
* A few have asked about the size of the course: around 30 students, mostly sophomores & juniors. About 1/3 are declared majors, with another 1/3 who might become majors.
Filed under: Academia, Teaching, Television
Tags: specifications grading
I’ve griped about the problems with closed peer review in academic publishing before, whether in the black box of tenure reviews, or celebrating the open review for Complex TV, or wondering about Why a Book?, or envisioning new possibilities with MediaCommons. My unifying frustration in all of these gripes is that throughout academia, the strongest elements of peer review — the dialogue that leads to higher quality scholarship, the labor that goes into providing thoughtful commentary on other scholars’ work, the contextualization of placing scholarship into particular conversations and subdisciplines, the validation from particular peers you respect praising your work — are kept completely invisible to readers. What is left is simply the gatekeeping function, where we either see the fact of publication and are left to assume it must have been deemed worthy by someone for some reason, or know nothing of what gets rejected and why.
I believe in open review, and have tried to practice it whenever I can – right now I’m participating in a great open peer review process for the new Debates in Digital Humanities volume, with contributors commenting on each other’s essays. But such experiments are far from widespread and are still viewed skeptically by many traditionalists. The one place where a modest form of open peer review is broadly practiced is book blurbs.
Blurbs are far from typical peer review: they are solicited after a book has been fully approved for publication, they offer no opportunity for feedback or revision, and they are designed simultaneously to promote the book and to highlight the blurber’s ability to offer praise in a concise and pithy way. And yet, they offer something that other forms of peer review do not: openness.
In a blurb, the author and the blurber each know who the other is, as do readers. While this might seem like it would foster conflicts of interest and opportunities to simply promote your friends and colleagues, the openness provides a counter to this. Consider the great new book Matinee Melodrama: Playing with Formula in the Sound Serial by Scott Higgins. I had the pleasure of reading the manuscript and offering this brief blurb: “Matinee Melodrama manages a mean feat: making a mostly forgotten, formulaic format seem new and exciting, shining an informative, fascinating light from film history onto today’s television, comics, and videogames.” I had a lot more to say about it than fit in such a sentence (all good, of course!), but the point of such a blurb is less about what I say than that I say it – we read blurbs for the people who write them, and for how their praise informs our perception of the book’s quality and appeals. Perusing the blurbs for Higgins’s book, you see my comment next to those of Steve Neale, Charles Wolfe, and Leonard Maltin, suggesting that the book will be of interest to film scholars, media scholars, and popular critics. These four signed sentences tell you much more about the book’s potential appeals and merits than the far more substantive and lengthy anonymous peer reviews, of which we know absolutely nothing (except that they endorsed publication).
As for conflicts of interest, I think open review can be more honest than blind review. Scott Higgins is actually an old friend of mine from graduate school, and it is true that I would not feel comfortable saying something negative about his work in an official capacity. In a closed blind review, I could easily praise the shoddy work of a friend (not Scott, whose work is never shoddy) and nobody would be the wiser. In an open review or a blurb, I am staking my name publicly on the integrity of my judgment—if Matinee Melodrama were a weak book, readers would wonder why I praised it so. (I can think of such an instance, where a really bad academic book was blurbed by a scholar I quite respect; to this day, I wonder what she was thinking…)
Similarly, open review can help temper perceived conflicts of interest between author and publisher. I will have a videographic essay published in the next issue of [in]Transition, a journal for which I serve as project manager for MediaCommons. My video went through the journal’s standard practice of open peer review, and thus there will be two signed reviews published alongside the piece to justify its publication; perhaps some might wonder whether my piece was treated favorably by the editorial team, but two notable videographic creator/scholars will have publicly endorsed it, making the rationale for publication transparent. Would they sign lengthy positive open reviews of a bad project, just to appease the editors’ favoritism? I know that I wouldn’t.
I’m not saying blurbs should replace peer review, but they highlight how little readers know about the actual peer reviewers and their thoughts about any given work. The fact of publication is not enough to ensure its quality and value, and knowing the perspectives and positions of those who vetted a work is important context that is left invisible within closed review. But until more publishers and journals adopt open peer review standards, blurbs are the most transparent comments we have.
Filed under: Academia, Books, Media Studies, Publishing
Tags: open review, peer review
This is the third and final (and, to me, most interesting) excerpt from my essay draft on “Videographic Criticism as a Digital Humanities Method.” The first laid out my approach to deformative criticism via the format of PechaKuchas; the second explored videographic 10/40/70 analyses. I highly recommend watching some of the musical videos discussed near the end of the post.
A videographic 10/40/70 relies upon the single shot as the core unit of a film, a key tendency common to much academic work on moving image media. My third and final type of videographic deformation also highlights the shot, but from a distinctly different approach. One of the most prominent forms of quantitative and computational analysis within film studies is statistical stylistics, especially as shared on the crowd-sourced Cinemetrics website. While there are numerous metrics on the site, the most common and well known is ASL, or average shot length, computed by dividing a film’s total running time by its number of discrete shots. The resulting number indicates a film’s overall editing pace, charting a spectrum from quickly cut movies (such as Batman Begins at 2.37 seconds or Beverly Hills Chihuahua at 2.72) to longer-take films (such as An American In Paris at 21 seconds or Belle du Jour at 24). The most typical range is between 3 and 8 seconds per shot, with much variability between historical eras, genres, national traditions, and specific filmmakers.
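As a quick illustration of that arithmetic (with a hypothetical film, not figures from Cinemetrics):

```python
# ASL = total running time / number of discrete shots.
# Hypothetical example: a 120-minute film cut into 1,100 shots.
running_time_seconds = 120 * 60
shot_count = 1_100
asl = running_time_seconds / shot_count
print(round(asl, 2))  # -> 6.55 seconds per shot, within the typical 3-8 second range
```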
An ASL is in itself a kind of deformation, a reduction of a film to a single numeric representation. Cinemetrics does allow more detailed quantification and visualization of a film’s editing patterns—for instance, this is a more granular and graphic elaboration of Mulholland Drive’s ASL of 6.5:
But these numbers, tables, and graphics make the film more distant and remote, leaving me uncertain what we can learn from such quantification. According to Yuri Tsivian, Cinemetrics’s founder, the insights are quite limited: “ASL is useful if the only thing we need to know is how long this or that average shot is as compared to ASL figures obtained for other films, but it says nothing about each film’s internal dynamics.” Certainly comparison is the most useful feature of ASL, as it allows quantitative analysis amongst a large historical corpus, a general approach that has proven quite productive in digital humanities across a range of fields. But I wonder about Tsivian’s quick dismissal that ASL “says nothing about each film’s internal dynamics.” Doesn’t a film with a 2.5-second cutting rate feel and function differently than one with a 15-second ASL? Certainly, and it doesn’t take a quantification to notice those differences. But perhaps such a quantification might guide a more thorough understanding of editing rates by extending the deformation?
Videographic methods allow us to impose a film’s ASL back onto itself. I have created a videographic experiment called an “equalized pulse”: instead of treating ASL as a calculated average abstracted from the film, I force a film to conform to its own average by speeding up or slowing down each shot to last precisely as long as its average shot length. This process forces one filmic element that is variable within nearly every film, shot lengths, to adhere to a constant duration that emerges quantitatively from the original film; but it offsets this equalizing deformation with another one, making the speed of each shot, which is typically constant, highly variable. Thus in a film with an ASL of 4 seconds, the equalized pulse extends a 1-second shot to 25% speed, while an 8-second shot runs at 200% speed. If you equalized an entire film to its average pulse, it would have the same running time and the same number of shots, but most would be slowed down or sped up to conform to an identical length. Every shot exerts the same temporal weight, but each feels distinct in its tempo and pace. The result is, unsurprisingly, very strange—but I believe productively so.
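To make the speed arithmetic concrete, here is a minimal sketch of the calculation (with made-up shot lengths; my actual videos were produced in Adobe Premiere, as described in the note at the end of this post):

```python
# "Equalized pulse": every shot is retimed to last exactly one ASL,
# so its playback speed becomes (original length / ASL).
shot_lengths = [1.0, 4.0, 8.0, 13.0]          # hypothetical shot lengths, in seconds
asl = sum(shot_lengths) / len(shot_lengths)   # average shot length = 6.5 seconds

for length in shot_lengths:
    speed = length / asl                      # 1s shot -> ~15% speed; 13s shot -> 200% speed
    print(f"{length:>5.1f}s shot plays at {speed:.0%} speed for {asl:.1f}s on screen")
```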
What does Mulholland Drive look and feel like when equalized to a pulse of its 6.5 second ASL? Can we learn something about the “film’s internal dynamics” more than its numeric representations on Cinemetrics? Take the film’s opening scene following the credits, with Rita’s car accident on the titular street; in the original, it lasts 4:07 with 49 shots ranging in length between .3 and 27 seconds.
The deformed version with an equalized pulse of every shot lasting precisely 6.5 seconds runs 5:18, as the original sequence is cut comparatively faster (ASL of 5.04 seconds) than the film as a whole. The effect is quite uncanny, with super slow-motion action sequences bookended by sped up shots with less onscreen action; the car accident is particularly unsettling, turning a 9-shot, 6-second sequence into a grueling and abstract 58-second ordeal that oddly exaggerates the effect of experiencing a moment of trauma in slow motion. As a whole, the video does convey the sense that a pulse of 6.5 seconds feels quite deliberate and drawn out, although the variability of action obscures the consistency of the editing pulse.
Another scene from Mulholland Drive offers quite different effects, despite the same algorithmic deformation to its same equalized pulse. The memorable scene in Winkies Diner, where two unnamed men discuss and confront a dream, has always been the film’s pivotal scene for me, signaling its affective impact that transcends any rational comprehension or interpretation. When equalized to a 6.5 second pulse, the scene’s uncanniness is ratcheted up, downplaying the dialogue rhythm for a more even distribution between the two men. The slow motion close-ups with distorted voice highlight the dreamlike quality, and the overall slower pace increases the sense of foreboding that already pervades the scene. By the time the horrific bum is revealed at the scene’s end, I find myself completely enthralled by the editing pulse and pulled into the affective horror that the scene always produces, suggesting that its impact is not dependent on Lynch’s designed editing rhythms. I have not extended this equalized pulse to the entire film, but clearly each scene and sequence will feel quite different, even with a uniform shot length throughout.
Mulholland Drive is a film packed with abundant strangeness, even before its deformation; how does an equalized pulse impact a more conventional example? Even though Mildred Pierce features the unusual combination of noir crime and family melodrama, it is still a far more straightforward film in keeping with its 1940s era. Its ASL of 10.09 is much slower than films of today, but is fairly typical of its time. Equalizing the pulse of a crucial scene in the family melodrama, with Veda driving a wedge between Mildred and Monty, who finally end their dysfunctional relationship, highlights various character interactions.
When Mildred gives Veda a car, it speeds through her thanking her mother but lingers on her exchange with Monty, underscoring the closeness between the stepfather and daughter—in the original, the emphasis is reversed in terms of timing, but equalizing the shots actually better represents Veda’s attitudes. The deformation lingers over shots without dialogue, letting us closely examine facial expressions and material objects, but speeds through lengthy dialogue shots, like an impatient viewer fast-forwarding through the mushy emotional scenes. The final lines exchanged between Mildred and Monty are unreasonably drawn out, milking their mutual contempt for all that it is worth. The scene is still legible, especially emotionally, but it redirects our attention in unpredictable ways—arguably a key goal of an effective deformance.
What about the other end of the pacing spectrum, equalizing the pulse of an action film like Raiders of the Lost Ark? The film has an ASL of 4.4 seconds, longer than most contemporary action movies but still quite brisk, especially for director Steven Spielberg. I deformed the iconic opening sequence, but used the sequence’s faster ASL of 3.66 rather than the whole film’s pacing, as that allows for a direct comparison of the original and equalized versions.
The effect is definitely striking, as the deformed version races through the build-up toward action and peril, while lingering painfully on darts flying through the air, near-miss leaps, and other moments of derring-do. In the slowed down shots, you notice odd details you never would see in the regular film, like the discoloration of Indy’s teeth, and sense a very different momentum. When placed side by side with the original, it highlights how much of the sequence is weighted toward the approach and build-up rather than the action, while the deformed version lingers on moments that regularly flit by.
The editing timeline visualizes these differences, but in a way that is analytically obscure; the videographic form allows us to feel and experience the analysis in ways that computational visualization cannot. What stands out most to me in watching and listening to this deformation is the role of music, as John Williams’s score still manages to hit its key themes and punctuate the action, despite its variable tempo and rhythms.
This experiment in equalizing a film’s pulse points most interestingly toward different types and functions of rhythm and tempo. In a conventionally edited film, variation of shot length is a main source of rhythmic play, both in creating emotional engagement and guiding our attention. Eliminating that variation by equalization creates other forms of rhythm and tempo, as we notice the relative screen time given to various characters, anticipate the upcoming edits in a steady pulse, and engage with the interplay between image and sound. These equalized deformations highlight how much the analysis of editing and ASL privileges the visual track over the audio—we are not quantifying audio edits or transitions in such metrics, as sounds bridge across shots, slowing or speeding up like an accordion.
Experimenting with these equalized pulse videos piqued my curiosity about how visual editing functions in conjunction with music, especially in instances where the musical track is more dominant, as with film musicals or music videos. These explorations into musical sequences proved to be the most exciting examples of equalized pulse, as they highlight the transformation of rhythm and tempo: the musical track stretches and squashes to create unpredictable rhythms and jettisons its standard tempo, allowing the steady beat of the changing visuals to define the speed.
For instance, “Can’t Buy Me Love” from The Beatles film A Hard Day’s Night becomes a collage of fast and slow motion when equalized to its sequence ASL of 4.9 seconds, making an already playful and experimental sequence even more unpredictable. Musical sequences combined with dance add another layer of rhythmic play, as with the transformation of Singin’ in the Rain’s “Broadway Melody” into a deformed and almost uncanny work when equalized to its ASL of 14.9 seconds.
Musical numbers are typically edited at a slower pace than their films as a whole, providing more attention to performance and dance without being pulled away by edits. A rare exception is one of the fastest-cut films listed on the Cinemetrics site, and certainly the fastest-cut musical I know of: Moulin Rouge, with an ASL of 1.9 seconds.
The “Roxanne” number, with an even brisker ASL of 1.05 seconds, is the only equalized pulse video I’ve yet made where the visual tempo becomes noticeably dominant, offering a steady beat of images and sounds whose speed deformations go by so quickly as to often escape notice.
These equalized pulse versions of musical numbers are the most engaging and affective examples of videographic deformations I have made, functioning as compelling cultural objects both on their own and as provocatively deformative paratexts. They also demand further analysis and study, opening up a line of examination concerning the relative uses of edits, music, and dance to create rhythm and tempo. As such, these videographic deformations are not scholarship on their own, but they do function as research, pointing the way to greater scholarly explorations. Whether that subsequent scholarship is presented in written, videographic, or multimodal forms is still to be determined, but I hope that this discussion has shown how videographic criticism is more than just a form of dissemination. Transforming a bound cultural object like a film into a digital archive of sounds and images enables a mode of critical engagement that is impossible to achieve by other methods; as such, videographic criticism functions as a digital humanities research method that is poised to develop the field of film and media studies in unpredictable new ways.
Some bonus equalized pulse videos to consider:
Unless otherwise noted, all ASL data are taken from Barry Salt’s dataset on Cinemetrics; even though the site includes many more films with crowdsourced information, I have found that those entries lack the consistency and methodological clarity of Salt’s list, which also makes comparisons among films easier.
The process is fairly straightforward in Adobe Premiere: first cut the source video into clips per the original edits. Then select all of the clips and use the Clip Speed / Duration tool. Unlink the Speed and Duration variables, and enter the number of seconds and frames in Duration corresponding to the ASL. Relink Speed and Duration, and be sure to check the Maintain Audio Pitch and Ripple Edit buttons. The only troubles come when a clip is stretched or sped up more than 1000%, as then the audio needs to be manually processed with more complex intervening steps. (A rough command-line sketch of the same operation follows these notes.)
 The opening 12:47 of the film consists of 209 shots, resulting in a 3.66 ASL.
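For anyone working outside Premiere, here is a rough command-line equivalent of retiming a single shot (a sketch only: it assumes ffmpeg’s setpts and atempo filters and that each shot has already been exported as its own clip file; it is not the workflow I actually used):

```python
# Hypothetical alternative to the Premiere workflow: retime one clip with ffmpeg,
# using setpts for video speed and atempo for pitch-preserving audio speed.
import subprocess

def retime_clip(src: str, dst: str, original_len: float, target_len: float) -> None:
    """Stretch or compress a clip so it lasts exactly target_len seconds."""
    speed = original_len / target_len          # e.g. a 13s shot -> 6.5s plays at 2.0x
    # atempo traditionally accepts only factors between 0.5 and 2.0, so chain it
    # for extreme changes (the analogue of the >1000% trouble noted above).
    factors, remaining = [], speed
    while remaining < 0.5 or remaining > 2.0:
        step = 0.5 if remaining < 0.5 else 2.0
        factors.append(step)
        remaining /= step
    factors.append(remaining)
    audio = ",".join(f"atempo={f}" for f in factors)
    filters = f"[0:v]setpts=PTS/{speed}[v];[0:a]{audio}[a]"
    subprocess.run(["ffmpeg", "-i", src, "-filter_complex", filters,
                    "-map", "[v]", "-map", "[a]", dst], check=True)

retime_clip("shot_01.mp4", "shot_01_pulse.mp4", original_len=13.0, target_len=6.5)
```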
Filed under: Academia, digital humanities, Film, New Media, Publishing, Videographic Criticism
Tags: A Hard Day's Night, average shot length, Cinemetrics, Mildred Pierce, Moulin Rouge, Mulholland Drive, Raiders of the Lost Ark, Singin' in the Rain
This is the second excerpt from my essay draft on “Videographic Criticism as a Digital Humanities Method.” The first laid out my approach to deformative criticism via the format of PechaKuchas. This one moves toward another instance of deformation, inspired by the work of Nicholas Rombes.
Videographic PechaKuchas take inspiration from another form, the oral presentation, but we can also translate other modes of film and media scholarship itself into deformative videographic forms. One of the most interesting examples of parameter-driven deformative criticism is Nicholas Rombes’s “10/40/70” project. In a series of blog posts and a corresponding book, Rombes created screen captures of frames from precisely the 10-, 40-, and 70-minute marks of a film, and then wrote an analysis of the film inspired by these three still images. Rombes acknowledged that he was deforming the film by transmuting it into still images, thus disregarding both movement and sound, but he aimed to draw out the historical connections between filmmaking and still photography through this shift of medium. The choice of the three time markers was mostly arbitrary, although they roughly mapped onto the beginning, middle, and end of a film. The result was that he could discover aspects of the film that were otherwise obscured by narrative, motion, sound, and the thousands of other still images that surrounded the three he isolated—a clear example of a deformance in Samuels and McGann’s formulation.
What might a videographic 10/40/70 look like? It is technologically simple to patch together clips from each of the designated minute markers to create a moving image and sound version of Rombes’s experiment. Although we could use a range of options for the length of each clip, after some experimentation I decided to mimic Rombes’s focus on individual frames by isolating the original shots that include his marked frames, leading to videos with exactly three shots, but with far more variability in length, rhythm, and scope. As with Rombes’s experiment, the arbitrary timing leads to highly idiosyncratic results for any given film. [I recommend watching the videos before reading the analyses.]
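The selection logic itself is trivial to express (a minimal sketch with a hypothetical shot list; in practice I assembled these clips by hand in an editing program):

```python
# 10/40/70: given (start, end) times in seconds for every shot in a film,
# return the complete shot that contains each marked frame.
MARKERS = [10 * 60, 40 * 60, 70 * 60]   # the 10-, 40-, and 70-minute marks

def shots_at_markers(shots: list[tuple[float, float]], markers=MARKERS):
    return [next((s, e) for (s, e) in shots if s <= m < e) for m in markers]

# e.g. shots_at_markers(shot_list) -> the three shots to excerpt and join in sequence
```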
Raiders of the Lost Ark yields a trio of shots without obvious narrative or thematic connection, but in isolation, we can recognize the cinematographic palette that Steven Spielberg uses to create action melodrama: camera movement to capture moments of stillness with an emphasis on off-screen or deep space, contrasted with facial closeups to highlight character reactions and emotion.
Star Wars: A New Hope also calls attention to movement, with consistent left-to-right staging: first with the droids moving across the desert, then with Luke running to his landspeeder, then with Obi-Wan’s head turning dramatically, which is continuous with the rightward wipe edit that closes out the second shot. Both of these iconic films are driven by plot and action, but the arbitrary shots belie coherent narrative, allowing us to focus more on issues of visual style, composition, and texture.
Depending on the resulting shots, narrative can certainly play into these deformations. In Fargo, we start with a shot of Jerry sputtering about negotiating a car sale in the face of an irate customer, which abruptly cuts to Jerry sputtering about negotiating with kidnappers to Wade and Stan in a diner, highlighting the consistent essence of Jerry’s character, underscored by his nearly identical wardrobe across different days in the original film. The scene plays out in an unbroken 80-second static shot, pulling us away from the deformation and placing us back into the original film, as the coherent narrative eclipses the incoherence of the 10/40/70 exercise. But knowing that we are watching a deformation, we wait for the unexpected cut to jump us forward in time, splitting our attention between the film and its anticipated manipulation. The narrative action guides the transition, as Wade impatiently refuses to abide by Jerry’s plan to deliver the ransom himself and stalks away saying “Dammit!” The resulting arbitrary edit follows the most basic element of narrative, cause and effect: we cut to Wade being shot by one of the kidnappers, punctuated by a musical sting and evoking Stan’s earlier line that they’ll need “to bite the bullet.” A final jarring effect stems from the last shot being less than 3 seconds long, a startling contrast to the previous long take, and it underscores the incongruity of mundanity and brutality, of boring stasis and vicious action, that is the hallmark of Fargo and much of the Coen brothers’ work. Although it certainly feels like an unusual video, Fargo 10/40/70 also functions as a cultural object in its own right, creating emotional responses and aesthetic engagement in a manner that points to one of the strengths of videographic work.
It’s interesting to compare Rombes’s results working with stills to a videographic version of the same film. Rombes analyzes three stills from Mildred Pierce, and they point him toward elements of the film that are frequently discussed in any analysis: the contradictions and complexities of Mildred’s character, how she fits into the era’s gender norms, and the blurred lines between film noir and melodrama. The images launch his analysis, but they do not direct it into unexpected places.
I find the videographic version of these three moments more provocative, as it creates more opportunities for misunderstanding and incoherence. The first shot finds Wally panicking and discovering Monty’s dead body in a noirish moment of male murder and mayhem, but quickly gives way to a scene of female melodrama between mother Mildred and daughter Veda. Mildred’s first line, “I’m sorry I did that,” suggests a causal link that she is apologizing for murdering Monty. Knowledge of the film makes this causality much more complex, as the murder is a future event that sets the stage for the rest of the film being told in flashback; in the frame story, Mildred appears to have murdered Monty, with the flashback slowly revealing the real killer to be Veda. Thus this scene works as a decontextualized confession made to the actual (future) murderer, adding temporal resonance and highlighting how the entire flashback and murder plotline was a genre-spinning element added to the screenplay but not present in the original novel. The third scene picks up the discussion of the restaurant and finances, bringing it back to the conflict between Wally and Monty—if we were to temporally rearrange the shots to correspond to the story chronology, the opening shot of Wally finding Monty’s body would seem to pay off this conflict, and create a closed loop of causality for this deformed version of the film. This brief analysis is no more valid or compelling than Rombes’s discussion, but it is certainly less conventional, triggered by the narrative and affective dimensions cued by the videographic deformation, which ultimately seems more suggestive and provocative than the three still images.
And here are a few bonus 10/40/70 videos that I made but did not analyze – feel free to provide your own analysis in the comments!
Next time: a new and provocative mode of deformation, based on the computational method of average shot lengths!
 Nicholas Rombes, 10/40/70: Constraint as Liberation in the Era of Digital Film Theory (Zero Books, 2014).
Filed under: Academia, digital humanities, Film, Media Studies, Technology, Videographic Criticism | 2 Comments
Tags: Fargo, Mildred Pierce, Nicholas Rombes, Raiders of the Lost Ark, Star Wars
I’ve spent the last month working on an essay called “Videographic Criticism as Digital Humanities Method” for the second edition of Debates in the Digital Humanities. The full essay should be online soon for open peer review, but I want to share three excerpts that feature numerous video examples, as the blog makes it easier to embed videos and control the layout, and I am including more examples here than will appear in the book version. Plus these are presented as “conversation starters,” so I hope they provoke some comments here!
The first excerpt frames the mode of “research experiment” that videographic work can do, via the PechaKucha form that I previously presented as part of our summer workshop – here it is:
Where the possibilities of videographic method get most intriguing is in the combination of the computational possibilities of video editing software with the poetics of expression via sounds and images. The former draws from science-derived practices of abstraction that are common to the digital humanities: taking coherent cultural objects like novels or paintings and transforming them into something less humanistic, like datasets or graphs. The latter draws from artistic practices of manipulation and collage: taking coherent cultural objects and transforming them into the raw materials to create something more unusual, unexpected, and strange. Videographic criticism can loop together the extremes of this spectrum between scientific quantification and artistic poeticization, creating works that transform films and media into new objects that are both data-driven abstractions and aesthetically expressive. I will outline three such possibilities that I have developed, using case studies of films that I know well and have used in the classroom, hoping to discover new insights into familiar texts.
The model of poeticized quantification that I am proposing resembles the vector of literary analysis that Lisa Samuels and Jerome McGann call “deformative criticism.” Such an approach strives to make the original work strange in some unexpected way, deforming it unconventionally to reveal its structure and discover something new from it. Both Stephen Ramsay and Mark Sample extend Samuels and McGann’s model of deformances into the computational realm, considering how algorithms and digital transformations might create both new readings of old cultural objects and new cultural objects out of old materials. This seems like an apt description of what videographic criticism can do: creating new cultural works composed from moving images and sound that reflect upon their original source materials. While all video essays might be viewed as deformances, I want to explore a strain of videographic practice that emphasizes the algorithmic elements of such work.
One way to deform a film algorithmically is through a technique borrowed from conceptual art: the imposition of arbitrary parameters. From Oulipo, the collective of French artists who pioneered “constrained writing,” to proto-videographic artworks like Douglas Gordon’s 24 Hour Psycho or Christian Marclay’s The Clock, to obsessive online novelties like the alphabetized film remixes ARST ARSW (Star Wars) and Of Oz The Wizard (The Wizard of Oz), artists have used rules and parameters to unleash creativity and generate works that emerge less from aesthetic intent than from unexpected generative outcomes. We can adopt such an unorthodox approach to scholarship as well, allowing ourselves to be surprised by what emerges when we process our dataset of sounds and images using seemingly arbitrary parameters. One such approach is a concept that Christian Keathley and I devised as part of our workshop: a videographic PechaKucha. This format was inspired by oral PechaKuchas, a form of “lightning talk” consisting of exactly 20 slides each lasting exactly 20 seconds, resulting in a strictly parametered presentation. Such parameters force decisions that override critical or creative intent, and offer helpful constraints on our worst instincts toward digression or lack of concision.
A videographic PechaKucha adopts the strict timing from its oral cousin, while focusing its energies on transforming its source material. It consists of precisely 10 video clips from the original source, each lasting precisely 6 seconds, overlaid upon a one-minute segment of audio from the original source. There are no mandates for content, for ideas, for analysis—it is only a recipe to transform a film into a one-minute video derivation or deformance. In doing videographic PechaKuchas ourselves, with our workshop participants, and with our undergraduate students, we have found that the resulting videos are all quite different in approach and style despite their uniform length and rhythm. For instance, Tracy Cox-Stanton transforms the film Belle du Jour into a succession of shots of main character Séverine vacantly drifting through rooms and her environment, an element of the film that is far from central to the original’s plot and themes.
Or Corey Creekmur compiles images of doors being opened and shut in The Magnificent Ambersons to highlight both a visual and thematic motif from the film.
In such instances, the highly parametric exercise allows the critic to discover and express something about each film through manipulation and juxtaposition that would be hard to discern via conventional viewing, and even harder to convey so evocatively in writing.
I started using this exercise in my teaching last semester – in a narrative theory course, students were asked to make a PechaKucha of one of the films we had viewed together in the course, with the only requirement that they not try to retell the same story as the film presents. For a sense of the range of possibilities, here are two PechaKuchas for Barton Fink, created by different pairs of students:
Such PechaKuchas follow arbitrary parameters to force a type of creativity and discovery that belies typical academic intent, but they are still motivated by the critic’s insights into the film, aiming to express something. A more radically arbitrary deformance removes intent altogether, allowing the parameters to work upon the film without the critic’s agency. I devised the concept for a videographic PechaKucha randomizer, which would randomly select the 10 video clips and assemble them on top of a random minute of audio; Mark Sample and Daniel Houghton executed my concept by creating a Python script to generate random PechaKuchas from any source video. The resulting videos, with their uniform length and rhythm, feel like the intentionally designed PechaKucha videos that I and others have made, but the content is truly arbitrary and random, including repeated clips, idiosyncratic moments from closing credits, undefined sound effects, and oddly timed clips that include edits from the original film. And yet they are just as much a distillation of the original film as those made intentionally, and as such they have the possibility to teach us something about the source text or to create affective engagement with the deformed derivation.
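Their script is not reproduced here, but the recipe is simple enough to approximate. The sketch below is my own illustrative version rather than Sample and Houghton’s code, again using moviepy with a hypothetical source file: draw ten random six-second clips and lay them over a random minute of the film’s audio.

```python
# Rough approximation of a random videographic PechaKucha generator
# (illustrative only, not Sample and Houghton's script; moviepy 1.x API).
import random
from moviepy.editor import VideoFileClip, concatenate_videoclips

NUM_CLIPS, CLIP_LEN, AUDIO_LEN = 10, 6, 60  # ten 6-second clips over one minute of audio

film = VideoFileClip("film.mp4")  # hypothetical source file

# Ten random six-second video clips, drawn independently (so repeats can occur).
starts = [random.uniform(0, film.duration - CLIP_LEN) for _ in range(NUM_CLIPS)]
video = concatenate_videoclips([film.subclip(s, s + CLIP_LEN) for s in starts])

# One random minute of audio from the same source, laid under the new video track.
audio_start = random.uniform(0, film.duration - AUDIO_LEN)
audio = film.audio.subclip(audio_start, audio_start + AUDIO_LEN)

video.set_audio(audio).write_videofile("pechakucha_random.mp4")
```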
Much like the algorithmic Twitter bots created by Mark Sample or Darius Kazemi, which have a fairly low signal-to-noise ratio, most randomly generated PechaKuchas are less than compelling as stand-alone media objects; however, they can be interesting and instructive paratexts, highlighting elements from the original film or evoking particular resonances via juxtaposition, and prompting unexpectedly provocative misreadings or anomalies.
For instance, in a generated PechaKucha from Star Wars: A New Hope, Obi-Wan Kenobi’s voice touts the accuracy of Stormtroopers as the video shows a clip of them missing their target in a blaster fight, randomly resonating with a popular fan commentary on the film.
Another generated PechaKucha of Mulholland Drive distills the film down to the love story between Betty and Rita, highlighting the key audio moment of Betty confessing her love with most clips drawn from scenes between the two characters; the resulting video feels like a (sloppy but dedicated) fannish remix celebrating their relationship.
A generated PechaKucha of All the President’s Men is anchored by one of the film’s most iconic lines, while the unrelated images focus our attention on patterns of shot composition and framing, freed by our inattention to narrative.
There are nearly infinite possibilities for how algorithmic videos like these might create new deformations that could teach us something new about the original film, or constitute compelling videographic objects on their own merits. Each act of deformative videographic criticism takes approximately two minutes to randomly create itself, generating endless unforeseen critical possibilities.
Next time: a videographic take on another film studies deformance, Nicholas Rombes’s 10/40/70 project.
 Lisa Samuels and Jerome J. McGann, “Deformance and Interpretation,” New Literary History 30, no. 1 (1999): 25–56.
 Stephen Ramsay, Reading Machines: Toward an Algorithmic Criticism (Champaign: University of Illinois Press, 2011); Mark Sample, “Notes towards a Deformed Humanities,” Sample Reality, May 2012, http://www.samplereality.com/2012/05/02/notes-towards-a-deformed-humanities/.
Filed under: Academia, Film, New Media, Open Access, Publishing, Videographic Criticism | 5 Comments
Tags: All the President's Men, Barton Fink, Belle du Jour, Magnificent Ambersons, Mulholland Drive, Star Wars