48 Hours with Resolve and Fusion


Disclaimer: Blackmagic Design was kind enough to supply me with a full version of Fusion Studio to check out the latest features. It most certainly influenced this article, but I believe you will find it interesting regardless.

While taking part in the 48 Hour Film Project I had the opportunity to use the latest versions of both Resolve and Fusion, and I want to share this interesting experience of rapid post-production with you.

PEV04 – Norman Hollyn on Performance Enhancing Visual Effects


This interview is part of the larger series about Performance Enhancing Visual Effects. Norman Hollyn is an editor best known for his work on Heathers. He's a professor at USC, where he teaches editing. He also wrote two books on this topic, "The Film Editing Room Handbook" and "The Lean Forward Moment", which I heartily recommend. For those more visually inclined, a great course of his, "Foundations of Video: The Art of Editing", is available on Lynda.com, and you can find a few of his webinars on Moviola.com.

Before I published my previous interviews with Alan E. Bell and Zbigniew Stencel, I sent Norman drafts of both and asked for his opinions. Here is his reply.

PEV03 – Zbigniew Stencel (Editor) About Performance Enhancing Visual Effects


This interview is part of the larger series about Performance Enhancing Visual Effects. Zbigniew Stencel is a Polish film editor. He worked on the recently released No Panic, With A Hint of Histeria, and on Czułość, which is due to arrive in cinemas in the first quarter of 2016. Read on to learn what he has to say about PEVs.

Please briefly introduce yourself.

My name is Zbigniew Stencel. I'm an editor based in Poland. I graduated from the Polish National Film, Television and Theatre School in Łódź, specializing in film editing. I have cut documentaries, music videos and ads. In recent years I have focused on various stereoscopic 3D projects, and I have just edited two feature films.

How did you first learn about Performance Enhancing Visual Effects (PEVs)?

I guess Kirk Baxter, Angus Wall and Alan Edward Bell got me inspired. I read and watched various interviews and videos with them, where they described the process. But I had used these kinds of effects intuitively before. I thought it was cheating, and I was ashamed that I couldn't find a "proper" editing solution to a problem in the movie. I was taught to always find a way using a simple cut, not to resort to "tricks". When I learned about PEVs I was relieved that maybe it's not cheating, that it is the evolution of editing techniques. I think using PEVs gets editing even closer to directing.

Could you describe PEVs in your own words?

PEVs are techniques of merging the audiovisual elements of film that are at the editor's disposal. It's mainly visual elements, but I extend that to audio, with all the ways that sound can help "sell" a shot or a scene. It's the synthesis of things that were shot on set, but not at the same time. It's a collage with consistency. PEVs create what was intended on set but never happened there.

Can you recall the first time you used these effects, and can you tell us what your thought process was at that time?

I guess it was one of the films in film school. Two characters were sitting on a bench. I wanted to hold a shot of them sitting for a little longer, but one of the characters got up and walked away. I did a split screen with slowdown on this moving character, so the two of them sat together a little longer. It looked acceptable, so I left it in the movie.

I too have heard PEVs referred to as "tricks" unrelated to storytelling, even by experienced editors. Where do you think this perception might come from? Is it unfamiliarity with the technique, a misunderstanding, or is there something more to it?

Experienced editors might think that PEVs will replace the basic skills and techniques of storytelling in film editing. PEVs are an expansion of our skill set. Maybe some less tech-savvy editors are afraid that these techniques are difficult, or that the "classic NLE" means of editing will become obsolete. I think any problem can be resolved with simple cuts to make a particular thing work in the movie. With PEVs we can not only make something work in the edit, but make it look better.

You said that PEVs make editing more like directing. This is a very astute observation. Can you elaborate on that?

When you make actors do things that they didn't do on set, you're stepping into the shoes of the director. Using PEVs is like doing a re-shoot. In most cases PEVs help with problematic situations with actors in the footage. Directors and actors are very close on set, but in the editing bay the intimacy starts to grow between the footage and the editor. This relationship is not as direct as its counterpart on set, but editors are still "saying" to actors "do this, do that" with cuts, and even more so with PEVs.

Zbigniew in his editing/grading suite. He decided to avoid showing his favourite NLE – because he loves and knows them all.

Do PEVs influence the way you approach editing?

I try to use PEVs as a last resort. I dive into the footage to find what I may have omitted, to look at it from a different perspective, to find gems. Only when I'm sure I will not find what I'm looking for do I use PEVs. It would be lazy to use PEVs before that. All those editors in the history of cinema resolved difficult situations with their creativity – we cannot lower our standards and the standards of our profession.

Does the knowledge and ability to perform PEVs allow you to be a better editor and tell better stories?

It's always about telling a better story. Be it PEVs, transitions, effects – they all should serve the story. Cliché, but that's what I think. But I'm a fan of a simple cut. I'm more proud of myself when I resolve a problem with straight cuts than with PEVs.

Do PEVs change the way you look at the footage and think about it?

Yes, but I try to use the "PEVs perspective" on footage only when all other ways of solving a problem in editing have failed. When I see the footage for the first time I'm not automatically analyzing: "Here I will use a split screen, here I will do a slowdown and add a mask". All these ideas come later, if they're really required.

What differentiates PEVs from "true" editing?

I think PEVs don't make the editing any less "true". But maybe cutting on a Steenbeck, KEM or Moviola was "true" editing. It was all about cuts. Any dissolve involved sending the cut to the laboratory, not to mention more difficult processes. I edited on a Steenbeck for a year in film school and I thought about simple cuts, not masks, slowdowns, cloning or repositions. So maybe the "true edit" is something that has been available almost since the dawn of cinema: a cut.

Do you think PEVs should be taught to new editors in film schools? To what extent?

When students understand the basic principles of editing and they can solve issues with techniques known to the editors in the era of film – then yes, they should be taught to use PEVs. But in the first years of education, maybe, just maybe, they should be forbidden to use them.

Do PEVs influence your relationship with the director?

Yes. At first, directors were hesitant to use this technique. They didn't know that the tools were in an NLE. After a few shots done this way turned out well, they were like: let's merge this part of this take with the other shot into one. If I wasn't sure that I had searched through all the footage for a particular scene or sequence, I didn't agree to use PEVs.

How often are you using PEVs?

I try to minimize the use of this technique, as I explained before. It can range from 5 to 15 shots in a movie.

How much are you doing yourself, and how much is passed to the VFX team?

I'm doing an offline version with offline-resolution footage in an NLE. The VFX guys get online-resolution clips and they do the final shot. I have to be sure to pass all the necessary metadata and references so that they can recreate what was approved in the offline. So the idea is created in the editing room, but it's realized by the VFX department.

What was the simplest shot you created?

I think it was the one I created in film school, still intuitively. I like these kinds of simple treatments, but my ambition is to create – if a film I am editing requires it – some PEVs with face replacement or roto to smoothly add body parts. These kinds of manipulations are still out of my league.

Can you tell us about your most complex shot?

It was a reaction shot of six people. In each take they weren’t synchronized in their movements. I had to merge their reactions so they would turn their heads and bodies at the same time. The shot was static, but you had to compensate for slight differences in camera position and positions of the actors. It’s not that hard technically, but the choice of particular takes and timing is crucial. Of course, the VFX team made it even better.

And the most satisfying one?

The most complex shots are the most satisfying ones.

How much time do you spend on a shot?

From a few minutes to half an hour at the most. Time is precious in the editing bay, so if I can't achieve it in a short time, I just make it less polished and leave it for the VFX team. Sometimes shots are more complicated, and NLE tools are not enough.

What kind of tools do you use to make PEVs in your daily work?

In today's post-production world there is no comfort in knowing only one NLE, so I use the one that's desired for a particular project. Inside Media Composer there's the AniMatte effect. Inside Premiere Pro – masks, which are now very nicely built into the effect controls of clips, and other effects. In Final Cut Pro X – the Draw Mask effect. I also use retiming on selected composited shots – my favorite is FluidMotion retiming in Media Composer. So masks and retiming are the basic tools for PEVs.

Thanks a lot for your thoughts.

PEV02 – Alan E. Bell (ACE) About Performance Enhancing Visual Effects


This interview is part of the larger series about Performance Enhancing Visual Effects. Alan E. Bell (ACE) is the editor who invented the term and made it his trademark. He is best known for his work on The Green Mile, (500) Days of Summer, The Amazing Spider-Man, The Hunger Games: Catching Fire, and recently The Hunger Games: Mockingjay Part 1 and Part 2. Read on to learn his thoughts about PEVs.

PEV01 – An Introduction to Performance Enhancing Visual Effects

The term "Performance Enhancing Visual Effects" (PEVs), or "Performance Based Visual Effects", was coined by Alan E. Bell, the editor of (500) Days of Summer, The Amazing Spider-Man, and most recently The Hunger Games: Catching Fire and the two parts of Mockingjay. It encompasses all manipulations of the source material aimed at making an actor's performance better. Usually the alteration takes place only in part of the frame, as opposed to the typical cutting and juxtaposition of whole frames. In my opinion the availability and recent popularisation of these techniques constitute a significant shift in the history and process of editing.

Rumours That Leave Some Terrified…


When the news first came out that The Foundry was going to be up for sale in 2015, and a friend told me there were rumours that Adobe might be interested in acquiring it, I at first dismissed it as rather unlikely. However, this week, first an Adobe employee tweeted The Foundry's reel, and soon after an article in The Telegraph confirmed this possibility, which makes things almost official.

Some of the early Twitter reactions were not very enthusiastic, to say the least:


If you are familiar with the profiles of these two companies, the initial wariness is understandable. Adobe delivers tools for everyone, while The Foundry has been traditionally associated with high-end workflows for larger visual effects studios. Perhaps the dismay of some of The Foundry's users comes from the fact that The Foundry does not really need anything from Adobe to supplement their great products in the niche they have positioned themselves in. Were it not for the venture capital (The Carlyle Group in this case) that simply wants to profit from its investment, there would hardly be any interest for The Foundry in mingling with Adobe. From Adobe's perspective though, acquiring The Foundry is a perfect opportunity to fill in the areas which have always been their weak points – true 3D (Modo, Mari, Katana) and high-end compositing (Nuke).

Personally I would not mind having an additional set of icons added to my Creative Cloud license. Depending on how this (potential) marriage is handled, it can be the beginning of something great, or a potential disaster for some. I am cautiously optimistic.

Both companies have their own mature line-up of products that are mostly self-sufficient. The real challenge is immediately obvious: integrating these is not going to be a piece of cake. For example, Adobe's internal scripting platform revolves around JavaScript, while The Foundry's is centered around Python. These are not compatible in any way, shape or form. Adobe has their own UI framework called Drover, while The Foundry is using the Linux-gone-multiplatform standard Qt. This is also very unlikely to change, and perhaps shouldn't. To cater for the needs of large studios, The Foundry delivers not only for Windows and OS X, but also – and perhaps most importantly – for Linux. This is an area where Adobe has arguably limited experience – they once released a version of Photoshop for Unix, which was subsequently discontinued because of a total lack of interest. Will Adobe then have to develop at least the Creative Cloud Desktop application for Linux to handle licensing? This might be interesting.

The questions appear almost instantaneously: what will happen to the alliance between Adobe and Maxon when they acquire their own 3D software package (Modo)? If Nuke becomes the main compositing tool for Adobe, how will it impact the development of After Effects as a platform, and what will happen to quite a few compositing plug-ins? This is the most obvious place where these technologies can clash, and some third-party developers might be left out in the cold. How much of the development power will be focused on integration, and on creating Dynamic Link engines in all the applications that talk to each other, as opposed to implementing new, cutting-edge features or fixing bugs? Without a doubt, it would be great to see a link to a Nuke composition in Premiere Pro – and this might in fact be not so difficult to achieve, since Nuke can already run in render mode in the background. However, how will it impact the development of the "Flame killer", Nuke Studio, itself? Hard-core Nuke users will most definitely see the necessity to use Premiere as a hub as a step back, especially when it comes to conforming – an area which is known to be an Achilles heel for Premiere (see my previous notes about it) – and the VFX workflow. And if we are to take a hint from what happened with the acquisition of SpeedGrade, when most development resources were moved towards creating Dynamic Link with Premiere and the actual development of SpeedGrade itself almost stalled, this might be worrying.

Certainly there are some valid concerns about Adobe's responsiveness towards The Foundry's usual clients, as the market audience for the products will inevitably shift. At the same time Adobe does crave to work on the higher end, and it's much easier for high-profile people like David Fincher to ask for features, and receive them, than for common folks like you and me. So the studios will still have leverage on Adobe. However, a challenge will come from the fact that The Foundry's tools (with the exception of Modo) are not as accessible and intuitive as Adobe's, and very often require extensive personal training to use properly. Again, with the IRIDAS acquisition as an example, Adobe will try to make small changes in the UI where necessary, but in general the efforts will be spent elsewhere. Personally I don't ever envision myself using Katana, which is most definitely a specialised relighting tool for high-end 3D workflows, mostly working with assets coming from software owned by Autodesk. If I were to name a single product that is most likely to be dropped after the acquisition, it would be Katana. It would take quite a lot of pressure from the studios using it to keep it in development. Adobe would have no skin in this game – in fact, possibly quite the opposite. One way or another, I highly doubt Katana will make it to the hands of Adobe's typical end-user. It might become a separate purchase, like Adobe Anywhere is now.

On the plus side, this acquisition will indeed make Adobe's video pipeline next to complete. We used to snicker at the slides and demos suggesting, or even insisting, that it's possible to do everything within the Creative Cloud. We knew that making even a simple 3D projection in After Effects was an effort often destined to fail. A lot of great work has been done in After Effects despite its shortcomings, but the workarounds are often time-consuming – with Nuke at our disposal this would no longer be the case. It indeed has the potential to make Adobe a one-stop shop in post-production. And even more good news? A drop in price is inevitable, especially after the recent acquisition of Eyeon by Blackmagic Design.

If I am to make predictions, I'd say that some of The Foundry's products (the After Effects plug-ins, Modo, Nuke, Mari, and Mischief, if it doesn't get integrated into Photoshop/Illustrator) will immediately become part of the Creative Cloud offer. Adobe will be showcasing Modo and Nuke to sell more CC licenses. A lot of users who have just shelled out thousands of dollars for their Nuke licenses will be unhappy, but Adobe will most likely give them some grace period – maybe in the form of free Creative Cloud licenses for current The Foundry users without an active CC subscription, or something similar. However, to avoid legal issues with Linux users, where Adobe is not able, and will most likely never be able, to deliver their full line of Creative Cloud products, a separate offering will be made for this platform – perhaps on custom order, similarly to CC for Enterprise customers. Linux versions will keep up feature-wise with their counterparts at the beginning, but depending on the number of licenses sold this way, they might stall or be discontinued. Katana is most likely the first to go. The whole Nuke line will be integrated into a single product – hopefully Nuke Studio, but possibly what is now known as NukeX. The latter would be unfortunate, as there is quite a lot of potential in Nuke Studio, but I'm not sure the Adobe folks will understand it at the moment, as they seem to be only now learning about the high-end VFX workflow. Hopefully an outcry from the clients will be enough. Hiero, however, will also most likely be dropped, as it is essentially redundant with the conform part of Nuke Studio.

I hope some of the original The Foundry branding will be retained, but I am a bit afraid that we will quite quickly see either square icons with the Nuke symbol, or even the letters Nu, Mo, Ma, Mc. Hopefully someone can point to the Adobe Media Encoder icon as a precedent, so that at least the Nuke symbol remains intact. Adobe's letter salad is becoming a bit tedious to keep up with.

Again, if we are to take a hint from the IRIDAS acquisition, The Foundry development team will remain mostly the way it is – unless people decide that they don't want to have anything to do with Adobe as a company, which does happen from time to time – but it will be integrated into the Adobe culture. Adobe seems to be pretty good at this kind of thing, so the falloff should be minimal. Development-wise, attempts at making exchange between the various applications easier will most certainly get priority, right after making sure Creative Cloud licensing works. An importer of Modo files into After Effects; perhaps a bridge between After Effects and Nuke, sharing cameras, tracking data, scene geometry, and some layers; or attempts at Dynamic Link between Nuke and Premiere – these are my initial guesses. Perhaps even the XML exchange between Premiere and Hiero/Nuke Studio will finally be fixed, and at some point The Foundry applications will be able to read and/or write Premiere Pro project files. Adobe's XMP model of metadata will most likely be employed throughout the Collective.

On the plus side, it will allow The Foundry to focus. I had the impression that for some time the company had begun to behave like Adobe in the CS5–CS6 era: trying to expand the market, pumping out flashy new features instead of focusing on stability and bug fixing, diluting the Nuke line, and in general trying to lure people into buying their products or updates. The Creative Cloud subscription model, regardless of how it was perceived when it was introduced about two years ago, helps in this regard quite dramatically, as there is less pressure on the developers to cater to the needs of the marketing department (vide the introduction of the Text node in Nuke) and to maintain various versions of the software. This should translate into more time and manpower being directed towards the "core" development – the good stuff.

I think this is promising – if it ever happens. There has already been a precedent of a lower-end company acquiring high-end tools and making them available to the public without necessarily watering down their value. We've all seen it. Most of us loved it. The company's name is Blackmagic Design, and the tools were DaVinci Resolve and Eyeon Fusion. Here's hoping that Adobe handles this acquisition in a similarly efficient and successful manner, bringing high-end 3D and compositing tools to the hands of many. That is, if this buyout ever happens. Because you know what? Why wouldn't Blackmagic simply outbid them, just for the sheer thrill of disrupting the market?

Adobe conforming tool – my vision solidifies

I have been pondering my recent discussion with David McGavran, the Engineering Manager for Adobe Premiere Pro, about the limitations of Premiere's own XML format when it comes to interchange. I am grateful for this exchange. I realized that my ideas cannot be implemented in Adobe Premiere Pro itself. After all, it is a relatively uncomplicated tool specializing solely in editing. I hoped it could become a Smoke-like base for other applications to work from, but this turns out not to be feasible in any foreseeable future.

However, instead of letting go of my dreams, I decided to take a wider look on the problem, and paint the vision in even broader strokes. Fortune favors the brave.

Right now the Production Premium suite is still a patchwork of applications with significantly different structures, stemming from the various technologies that Adobe acquired along the way. The interchange between them is sometimes very good (especially with Photoshop files), sometimes mediocre (like sending a Premiere project to SpeedGrade), and often limited to a single workstation running all the applications (like Dynamic Link). Even though I remain amazed at how much Adobe engineers have been able to achieve within the limitations of software architectures, some dating from over 20 years ago, there are times when the integration is still sorely lacking.

With the recent switch in Adobe's policy towards the Creative Cloud solution, it makes even more sense to give a broader structure to this patchwork of loosely related applications, especially in the world of post-production, where effective teamwork, along with project and asset management, is one of the vital keys to success.

Adobe has already made an attempt to create an asset management system in the past, although it turned out to be a dead end. I don't know the exact reasons why they cancelled Version Cue in CS5, but for me and a few companies that I worked for at the time, the issue was stability. After three consecutive crashes of the VC database, and literally days of attempts to recover the assets, we gave up on this quite promising solution. Clearly it was not production ready, even after a few years of work.

The void, however, remains, and the suite still lacks an application that would bind everything together, at least in the post-production world: a comprehensive project management and conforming tool.

Let’s take a look at a sample, deliberately vague workflow involved in film post-production:

  1. Dailies ingest and grading
  2. Rough Cut
  3. VFX work alongside the editorial
  4. Audio engineering and mixing
  5. Final grading
  6. Finishing and mastering

Hopefully there is a picture lock between steps 3 and 4; however, the pride of Adobe has always been the possibility of retaining flexibility up to the very end of the process, and personally I would love to retain it.

Even though the production suite does contain applications that can take care of each part of the process separately, tying them all together mostly still involves at least a well-thought-out folder structure, and perhaps a third-party asset management tool, and is prone to human error, especially during backup and archiving, and in an environment involving more than one person. Any sensible version control is also lacking, and when it is implemented in a rudimentary fashion (raising the version number in an After Effects project file name), it can break other dependencies, like Dynamic Link.

What would the missing application need to do?

  1. Media ingest, transcoding and metalogging – similar to Prelude, but also importing from a partially created Premiere project if some editing was already done in the field
  2. Sending media to SpeedGrade, or via FCP XML to any other grading app
  3. Receiving graded media, either with .look files or as color-corrected new versions (i.e. tracking versions of a clip regardless of its filename and/or extension)
  4. Sending media to Premiere projects, supporting templates and bin organization
  5. Conforming Premiere projects with graded media and relinking without opening Premiere
  6. Preparing and managing assets for VFX work in After Effects or Photoshop on a shot-by-shot basis, with templates and bin organization
  7. Tracking versions of VFX assets, including rendering and review
  8. Reviewing and exporting Premiere sequences without opening Premiere
  9. Conforming Premiere projects for FCP XML or AAF export and import, and keeping track of conformed/rendered files
  10. Re-conforming XML or AAF imports for Premiere
  11. Outputting any project from any of the suite apps
  12. Archiving and backup options for projects
  13. Managing meta-assets like templates, grades, presets, user preferences and others
  14. Possibly a few other important things that I forgot to include

All of this – of course – with the possibility of working with many users and many separate workstations, and in both stand-alone and integrated versions.
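To make the idea a little more concrete, here is a purely hypothetical sketch (in Python, with invented names like AssetVersion and Shot, so nothing here reflects an existing Adobe API) of the kind of version-tracking data model such a tool would have to maintain, so that relinking and re-conforming never depend on file names alone:

    # Purely hypothetical sketch: a minimal data model for tracking shot versions,
    # so that relinking does not depend on file names or extensions (point 3 above).
    from dataclasses import dataclass, field

    @dataclass
    class AssetVersion:
        path: str           # current location of the media file
        stage: str          # "offline", "graded", "vfx", "final"
        version: int = 1
        approved: bool = False

    @dataclass
    class Shot:
        shot_id: str        # stable ID, independent of file name and extension
        versions: list = field(default_factory=list)

        def add_version(self, path, stage):
            v = AssetVersion(path, stage, version=len(self.versions) + 1)
            self.versions.append(v)
            return v

        def latest(self, stage):
            # what a relink or conform step would ask for
            candidates = [v for v in self.versions if v.stage == stage]
            return max(candidates, key=lambda v: v.version, default=None)

    shot = Shot("SC012_SH040")
    shot.add_version("offline/SC012_SH040.mov", "offline")
    shot.add_version("grades/SC012_SH040_v002.dpx", "graded")
    print(shot.latest("graded"))

Everything above – ingest, conform, review, archiving – would essentially be operations on structures like these, shared between applications and users.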

In the end, I'd love to have this functionality, or integration with Shotgun or any other "big iron" project management system. Right now part of this is being done with the Panel API that Adobe added to Premiere in CS6, but it's just a single-application patch, which works only in certain kinds of workflows. Granted, it's a step ahead – and I hope that fully featured scripting is the next big step in the proper direction – but it's still not enough.

Am I asking for too much? A lot of the necessary bricks seem to be in place already. I hope that you can see how such an application would contribute towards even greater usability of the Production Premium suite, especially in a more collaborative environment. Even though it seems like another patch on top of the patchwork, it would be more like a gate to the outside world and a useful internal interchange manager, rather than a half-hearted attempt to fix problems on the level of a single application that leaves some of us wanting.

Is it feasible to give more structure to the patchwork of Adobe Production Premium? Can Adobe engineers do it by themselves, or should they acquire a technology that is already somewhat mature, like CatDV? Who knows. However, perhaps passing these ideas to Wes Plate or other brilliant guys on the Adobe team would make them excited enough about such a development project that they would be interested in pursuing it, and that management would consider it worthwhile. Think big, Adobe! Audaces fortuna iuvat!

Green screen primer

Having recently had an opportunity to do some green screen work, which at first glance seemed to be a quick job, and later turned out to require some pretty hefty rotoscoping and compositing, I decided to write down another caveat, this time on using a green screen. Please note that the pictures are for illustrative purposes only. For convenience, wherever they are labelled as YCbCr colorspace, I used Photoshop Lab/YUV to create them, which is very similar, but not identical, to YCbCr. Also, many devices use clever conversion and filtering during chroma subsampling, which reduces aliasing and is generally better at preserving detail than Photoshop is in its RGB->Lab->RGB conversion, so the loss of detail and the differences might be a little smaller than depicted here, but they are real nevertheless.

Green screen mostly came about because of the way digital camera sensors are built. The most common Bayer pixel pattern in the CMOS sensors used by virtually all single-chip cameras consists of two green photosites and a single blue and a single red one (RGGB). This is a sensible design if you consider the fact that the human eye is most sensitive in the green-yellowish region of the light spectrum. It also means that you will automatically get twice as much resolution from the green channel of a typical single-chip camera as from either the red or the blue one. Add to this the fact that the blue photosites usually produce the noisiest signal – the sensor is typically the least sensitive to blue light, so the signal is simply the weakest there and needs the most amplification – and you might start to get a clue why green screen seems to be such a good idea for digital acquisition.

Typical CMOS RGGB pixel mosaic. There are twice as many green photosites as red or blue ones.
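If you want to convince yourself about the numbers, here is a minimal sketch (Python with NumPy assumed) that builds an RGGB mask for an HD-sized sensor and counts the photosites per channel:

    # Minimal sketch of an RGGB Bayer layout: in every 2x2 block of photosites,
    # two sample green and one each samples red and blue.
    import numpy as np

    def bayer_rggb_mask(height, width):
        mask = np.empty((height, width), dtype=np.uint8)  # 0 = R, 1 = G, 2 = B
        mask[0::2, 0::2] = 0   # red photosites
        mask[0::2, 1::2] = 1   # green photosites, first row of each block
        mask[1::2, 0::2] = 1   # green photosites, second row of each block
        mask[1::2, 1::2] = 2   # blue photosites
        return mask

    mask = bayer_rggb_mask(1080, 1920)
    print([(mask == channel).sum() for channel in range(3)])
    # -> [518400, 1036800, 518400]: twice as many green samples as red or blue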

So far this discussion has not concerned 3-sensor cameras or the newest Canon C300, with its sensor at twice the size of the encoded output; the next part, however, does.

The green channel has the largest input (over 71% in the Rec 709 color space specification) into the calculated luma (Y) value, which is most often the only one that gets encoded at full resolution when a compression scheme called chroma subsampling is used – which is almost a given in most cases. All color information is usually compressed in one way or another. In the 4:2:0 chroma subsampling scheme – common to AVCHD in DSLRs and XDCAM EX – the color channels are encoded at 1/4 of their resolution (half width and half height), and in 4:2:2 at half resolution (full height, half width). These encoding schemes were developed based upon the observation that the human eye is less sensitive to loss of detail in color than in brightness, and less sensitive to it in the horizontal direction than in the vertical. Regardless of how well they function as delivery codecs (4:2:2 is in this matter rather indistinguishable from uncompressed), they can have a serious impact on compositing, especially on keying.

Graphical example of how various chroma subsampling methods compress color information.
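To see what this means in practice, here is a rough sketch (Python with NumPy assumed; the color-difference planes are simplified rather than a full Rec 709 YCbCr conversion) of how the luma weights and a 4:2:0 or 4:2:2 round trip work:

    # Rough sketch: luma is built mostly from green (Rec 709 weights) and kept at
    # full resolution, while chroma planes are averaged down (2x2 blocks for 4:2:0,
    # horizontal pairs for 4:2:2) and blown back up on decode.
    import numpy as np

    REC709 = np.array([0.2126, 0.7152, 0.0722])   # R, G, B contributions to luma

    def chroma_round_trip(plane, scheme="4:2:0"):
        h, w = plane.shape
        if scheme == "4:2:0":                      # half width, half height
            small = plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
            return small.repeat(2, axis=0).repeat(2, axis=1)
        if scheme == "4:2:2":                      # half width, full height
            small = plane.reshape(h, w // 2, 2).mean(axis=2)
            return small.repeat(2, axis=1)
        return plane                               # 4:4:4 is left untouched

    rgb = np.random.rand(8, 8, 3)                  # stand-in for a tiny frame
    luma = rgb @ REC709                            # full-resolution Y
    cb = rgb[..., 2] - luma                        # simplified color-difference plane
    cb_420 = chroma_round_trip(cb, "4:2:0")        # what the keyer gets back
    print("max Cb error after a 4:2:0 round trip:", np.abs(cb - cb_420).max())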

Recording 4:4:4 RGB gives you uncompressed color information and is ideal for any keying work, but it is important to remember that you won't get more resolution from the camera than its sensor can give you. With the typical RGGB pattern, and a sensor resolution not significantly higher than the final delivery, you will still be limited by the debayering algorithm and the lowest number of pixels. It's excellent if you can avoid introducing the compression and decompression artifacts which will inevitably appear with any sort of chroma subsampling, but it might turn out that there is little to be gained in pursuing a 4:4:4 workflow due to the lack of a proper signal path – as is the case with any HDMI interface on DSLRs, which outputs an 8-bit 4:2:0 YCbCr signal anyway, or with many cameras not having proper dual-link SDI to output digital 4:4:4 RGB. An analog YCbCr (component) output signal is always at least 4:2:2 compressed.

A good alternative to 4:4:4 is raw output from the camera sensor – provided that you remember everything I wrote before about the actual sensor resolution. So far there are only two sensible options in this regard – RED R3D and ARRIRAW.

There are also not very many codecs and acquisition devices that allow you to record 4:4:4 RGB, and most still require fast and big storage arrays, so their application is rather limited to bigger productions with bigger budgets. This is slowly changing due to the falling prices of SSD drives, which easily satisfy the write-speed requirements, and portable recorders like the Convergent Design Gemini, but storage space and archiving of such footage still remain a problem, even in the days of LTO-5.

Chroma subsampling introduces artifacts that are mostly invisible to the naked eye, but can make proper keying hard or even impossible.

Readers with more technical aptitude can consult two more detailed descriptions of problems associated with chroma subsampling:

  1. Merging computing with studio video: Converting between R’G’B’ and 4:2:2
  2. Towards Better Chroma Subsampling

The higher sensitivity of the human eye and of cameras to green also means that you don't need as much light to light a green screen as you would a blue one. The downside, however, is that green screen spill is much more invasive, and because green is not a complementary color to red, it is much more noticeable and annoying than blue spill and requires much more attention during removal. Plus, spending a whole day in a green screen environment can easily give you a headache as well.

Generally it is understandable why green screen is the default choice for a digital pipeline. However, as with all rules of thumb, there is more than meets (or irritates) the eye.

When considering keying, you need to remember that it is not enough to get the highest resolution in the channel where your screen is present (assuming that it is correctly lit, does not spill into other channels, and there is not much noise in the footage). Keying algorithms still rely on contrasting values and/or colors, using the separate RGB color channels. Those channels – if chroma subsampled – are reconstructed from YCbCr in your compositing software.

Therefore, even assuming little or no spill from the green screen onto the actors, if you have a gray object (let it be a shirt) whose value in the green channel is similar to that of the green screen, then this channel is made useless for keying by this very fact. You can't get any contrast from it. You and your keying algorithm are left to try to obtain proper separation in the remaining channels, first red, and then blue (where most likely most of the noise resides, and which has a meager 7% input into luminance), which automatically reduces your resolution and introduces more noise. In the best case you get a less crisp and slightly unstable edge. In the worst, you have to resort to rotoscoping, defeating the purpose of shooting on the green screen in the first place.

Now consider the same object on a blue screen – when your blue screen has the same luminance as the neutral object, you throw the blue channel away and can most likely use the green and red channels for keying. A much better option, wouldn't you say?

Difference between blue screen and green screen keying with improper exposure: if the green value of an object on a green screen is similar to that of the screen itself, keying will be a problem.
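A toy example (Python, with made-up normalised RGB values, so only the relative numbers matter) of the argument above: the same mid-gray shirt against a green and a blue backing:

    # Per-channel contrast between a mid-gray shirt and the backing color.
    gray_shirt   = (0.55, 0.55, 0.55)
    green_screen = (0.15, 0.60, 0.20)   # hypothetical well-lit green backing
    blue_screen  = (0.15, 0.25, 0.60)   # hypothetical well-lit blue backing

    def channel_contrast(fg, screen):
        return {ch: round(abs(f - s), 2) for ch, f, s in zip("RGB", fg, screen)}

    print("vs green screen:", channel_contrast(gray_shirt, green_screen))
    # G contrast is only ~0.05: the high-resolution channel is useless, and the
    # key has to lean on the subsampled red and noisy blue channels
    print("vs blue screen: ", channel_contrast(gray_shirt, blue_screen))
    # here R and G still separate well; only the noisy B channel is thrown away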

Of course, this caveat holds true only for items with a green channel level close to that of the screen. If we want to extract shadows, it's a completely different story – we need to get contrast in the shadows as well, and to this end a green screen will most likely be more appropriate. But if we don't, then choosing the color of the screen entails more than simply looking at what color the uniforms or props are, or following the basic rule of thumb that "green is better for digital". You need to look at the exposure as well.

There are a few other ways to overcome this problem. One is to record 4:4:4 using a camera that can deliver a proper signal; then you are only limited by the amount of noise in each channel. Another is to shoot at twice the resolution of the final image (4K against 2K delivery), and then reduce the footage size before keying and compositing. This way the noise will be seriously reduced, and the resolution in every channel will be improved, as the sketch below illustrates. Of course, it is then advisable to output the intermediates to a 4:4:4 codec (most VFX software will make excellent use of DPX files) to retain the information.
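A small sketch (Python with NumPy assumed) of the second idea: averaging each 2x2 block of an oversampled, noisy image roughly halves the noise standard deviation:

    # Simulate sensor noise on a flat 4K patch, then downscale it to 2K by
    # averaging 2x2 blocks; averaging four samples halves the noise sigma.
    import numpy as np

    rng = np.random.default_rng(0)
    noise_4k = rng.normal(0.0, 0.02, size=(2160, 4096))
    noise_2k = noise_4k.reshape(1080, 2, 2048, 2).mean(axis=(1, 3))
    print("4K noise std:", noise_4k.std())    # ~0.020
    print("2K noise std:", noise_2k.std())    # ~0.010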

Another sometimes useful – and cheap – solution might be to shoot vertically (always progressive, right?), thus gaining some resolution. Remember, however, that in the 4:2:2 and 4:1:1 compression schemes it is the horizontal (and, after rotation, vertical) resolution that gets squashed, so the gain might not be as high as you hoped, and the loss now falls on the dimension that is more critical for perception, so make sure you're not making your situation worse.

The key in keying is not only knowing what kind of algorithm or plugin to use. It is also knowing what kind of equipment, codec and surface should be used to obtain optimal results, and it all starts – as with most things – even before the set is built. Especially if you're on a budget.

To sum up:

  • Consult your VFX supervisor, and make sure he's involved throughout the production process.
  • Use field monitoring to see what the exposure in the green channel looks like, and whether you are getting proper separation.
  • Consider a different camera and/or codec for green/blue screen work.
  • Try to avoid chroma subsampling. If it's not feasible, try to get the best possible signal from your camera.
  • Consider shooting VFX scenes at twice the final resolution to get the best resolution and the least noise.