PEV01 – An Introduction to Performance Enhancing Visual Effects

The term “Performance Enhancing Visual Effects” (PEVs) or “Performance Based Visual Effects” was coined by Alan E. Bell, the editor of (500) Days of Summer, The Amazing Spider-Man, and most recently The Hunger Games: Catching Fire and both parts of Mockingjay. It encompasses all manipulations of the source material aimed at making an actor’s performance better. Usually the alteration takes place only in a part of the frame, as opposed to the typical cutting and juxtaposition of whole frames. In my opinion, the availability and recent popularisation of these techniques constitute a significant shift in the history and process of editing.

The origins in visual effects

Originally such sub-frame manipulations were not available to an editor due to limitations in technology – it was hard enough to obtain a frame-accurate edit on tape, or to make a proper negative cut on film. Anything else was the purview of VFX houses and took a lot of time and effort to get right. When NLEs first came about, their sole job was to generate the Edit Decision List (EDL), which would then be assembled during the on-line edit, either on very expensive hardware or by the old-school film cutters. Editors worked on low-resolution proxy files to keep the editing experience smooth. With VFX budgets decided well in advance, editors simply had very little or even no say about what type of VFX work was required.

One could argue that one of the very first applications of PEVs was done in The Crow in 1994, where the VFX house Dream Quest Images put the face of the deceased Brandon Lee, taken from previously recorded takes, onto the body of another actor in order to finish the movie. It took hundreds of hours to do, and it most definitely was not an editor’s job. Take a look:

Crossing the Rubicon

With the introduction of computer-based VFX and the rapid progression of hardware and software, more and more became possible in less and less time and for less and less money. These days it’s no longer required that you edit your movie using off-line or proxy files (though larger productions still do so). I would wager that most editors out there actually work either with the original or with final-quality media. The introduction of masking tools and advanced retiming algorithms into NLEs marks the point at which even an editor can perform relatively complex manipulations that will end up on screen. Up until then, one had to resort to software such as After Effects, Fusion or Nuke. Today this is no longer the case.

From the simplest split screen, where two or more separate takes are fused together into one, through time offsetting and more complex retiming, and up to merging performances using face or body replacement, these techniques are now available for an editor to perform. Even if the final work is sometimes still done by VFX artists, or requires the use of other software, it is executed at the editor’s behest. These manipulations are not required for the shot to be complete – unlike compositing, keying, matte painting, and other special or visual effects. The parts are chosen by the editor, and the idea of how to alter and merge them is also his or hers.
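
To make the split comp idea concrete, here is a minimal sketch of the pixel-level operation, written in Python with NumPy and OpenCV purely as an illustration – it is not how any particular NLE implements the feature, and the file names are hypothetical. It assumes two takes shot on a locked-off camera, so the frames line up, and it blends the left part of one take with the right part of another across a soft, feathered vertical seam.

import cv2
import numpy as np

def split_comp(frame_a, frame_b, seam_x, feather=40):
    """Combine two aligned frames: frame_a left of the seam, frame_b right of it,
    blended over a `feather`-pixel-wide zone so the join is invisible."""
    h, w = frame_a.shape[:2]
    # Horizontal ramp: 1.0 well left of the seam, 0.0 well right of it,
    # falling off linearly across the feather zone centred on seam_x.
    x = np.arange(w, dtype=np.float32)
    ramp = np.clip((seam_x + feather / 2 - x) / feather, 0.0, 1.0)
    # Expand the ramp to a full-frame, single-channel mask that broadcasts over the colour channels.
    mask = np.tile(ramp, (h, 1))[..., np.newaxis]
    blended = frame_a.astype(np.float32) * mask + frame_b.astype(np.float32) * (1.0 - mask)
    return blended.astype(np.uint8)

# Hypothetical example: the actor on frame-left from take 2, the actor on frame-right from take 4.
take_a = cv2.imread("take02_frame0100.png")
take_b = cv2.imread("take04_frame0100.png")
comp = split_comp(take_a, take_b, seam_x=960)
cv2.imwrite("split_comp_frame0100.png", comp)

In an NLE the same result is usually achieved by stacking the two takes on separate tracks and drawing a soft-edged mask on the upper one; the feathered ramp above simply plays the role of that mask.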

The editor can now stop thinking of a single frame as an untouchable, indivisible whole. He can cut into frames, warp them, extend them, and treat them as malleable material from which the whole piece will be formed. This is a major shift in the approach to editing.

What’s the point?

These manipulations improve how an actor’s performance is perceived. We can pick the most appropriate parts from all takes, irrespective of other actors’ performances, and sometimes even merge one actor’s performance from multiple takes. A number of these techniques actually alter the actor’s recorded performance – removing eye blinks during action scenes, extending a fist or a pistol so that it actually hits the other actor’s face, retiming body parts, or other deformations. These effectively create a new performance, one that was too dangerous or for some other reason not possible to obtain on set.

Very often these techniques are referred to as “tricks” that don’t have much to do with storytelling. This is not so. Understanding what you can achieve with their help can sometimes mean turning a shot that does not work into a great performance. If you can enhance the story, make a scene stronger, more evocative – why not do it? Who says that a frame is not to be divided, extended or warped? Why should we compromise on a take that was not perfect, if we can create such a take from the ones we already have? The decisions of course belong to the editor and the director, but the technology is no longer a barrier. It’s time to embrace it. If such opportunities are not taken, the end result is not as good as it could be.

There is an argument to be made against altering an actor’s performance as recorded on set, but it is mostly an ideological one. Editors have long been known to make a “poor” performance better through editing. This is nothing new; in fact, it’s part of an editor’s job to make actors look as good as possible. However, taking credit for “fixing” an actor’s performance is, in my opinion, an expression of an editor’s overblown ego. It’s almost impossible to create something from nothing. The performance has to be there already; we are just taking the best parts – even if it’s the actor’s hand gesture from take two and his facial expression from take four.

Of course, these techniques require more time than simple cutting, so they will most often be an editor’s last resort. On the other hand, knowing what is possible can sometimes help the director save time during production, especially on more complex shots. Some directors – like David Fincher – deliberately frame their shots in a way that allows for easier manipulation in the edit bay.

An editor who embraces these techniques stops thinking of a frame as a sacrosanct, indivisible entity and perceives things in a more granular way – the performance of a single actor within a frame, or even the performance of his individual body parts. He can juxtapose not just whole shots, but performances taken from the same or different takes, adjusted to achieve the best result. To “rescue” more shots, and make them more meaningful in the end. To make the movie as good as it can be.

What will follow?

In the upcoming months I intend to interview as many people as I can who I know are using these techniques. The most prominent names include – apart from the grandfather of PEVs, Mr. Bell himself – Angus Wall, Kirk Baxter and his assistant Tyler Nelson. However, I am looking not only for the opinions of editors, but also, if possible, of directors and VFX artists who are tasked with executing the final versions of shots prepared by editorial. Perhaps I can also get a word or two from editing teachers. If you know someone who would be interested in expressing his or her view on the topic, please let me know, either via email or here in the comments.

In the meantime, see all the articles from this series published so far, visit Alan Bell’s blog for a brief overview of the techniques he uses, and enjoy the following demonstration of the split comping technique by Ben Gill:
