Nuke Studio For After Effects


When I first saw the demonstration of Nuke Studio at NAB, with each new feature I was thinking: yes, this is exactly how it should be done. This is precisely how Dynamic Link should have worked from the get-go in Adobe applications. Congratulations to The Foundry for making it happen. I wish I could afford your tool :)

The Strange Maths of After Effects Timecode

For the past few weeks I’ve been developing for That Studio a number of scripts for After Effects that are supposed to make one’s life easier when dealing with an edit in Premiere that requires handling more than a couple of VFX shots.

One thing that surprised me is that you can’t access a layer’s timecode directly using scripting, even though you can using expressions. At first I thought that I could use the Time Remap effect to do this, since its values are supposedly shown as a timecode. But brief experimentation shows that this is not the case. Time Remap values are given in seconds from the start of the layer, regardless of its timecode. Ouch.

Therefore one has to resort to a brutish hack to obtain the layer’s starting timecode: create a text layer, add an expression that reads the layer.sourceTime value and assigns it to the text, and then read the source text in the script. That’s hardly an elegant solution, and it would be so much better if layer.sourceTime were supported not only in expressions.
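Once the script reads the value back from the text layer, it still needs to format raw seconds as a timecode string. A minimal sketch of such a conversion – the helper name and parameters are my own, and it assumes non-drop-frame display – works both in ExtendScript and plain JavaScript:

```javascript
// Convert seconds to a non-drop-frame timecode string.
// rate: actual frame rate (e.g. 23.976), base: timecode base (e.g. 24)
function secondsToTimecode(seconds, rate, base) {
    var totalFrames = Math.round(seconds * rate);
    var f = totalFrames % base;                     // frames
    var totalSeconds = Math.floor(totalFrames / base);
    var s = totalSeconds % 60;                      // seconds
    var m = Math.floor(totalSeconds / 60) % 60;     // minutes
    var h = Math.floor(totalSeconds / 3600);        // hours
    function pad(n) { return (n < 10 ? "0" : "") + n; }
    return pad(h) + ":" + pad(m) + ":" + pad(s) + ":" + pad(f);
}
```

For example, secondsToTimecode(73325.2419085752, 23.976, 24) yields “20:20:51:22”.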

Perhaps I could live with that, but on top of it there is a nasty bug hidden in AE, and it can bite you rather hard if you’re not careful.

You might be familiar with the fact that when you precompose a single footage layer and choose to leave the effects in the main composition, the precomp will have the length of the whole footage file and will inherit the exact timecode… more or less.

You might not be aware that if you are using a 23.976 fps frame rate, the timecode might be off by a frame or even two when you render this layer – which you will notice only if you open the rendered file, because even the render queue will show the correct timecode value. You can mitigate this by manually entering the starting timecode in the composition settings window.

The real problem begins when you try to set this timecode via scripting. Let’s say you want the composition to start at 20:20:51:22, which at 23.976 fps translates to 73325.2419085752 seconds. When you assign it to the desired composition, the internal AE procedure will truncate it to 73325.2421875, which results in a timecode that is not frame-accurate – and even though it is displayed correctly in the composition, it is incorrectly rendered and written to file. You can check it yourself by running the following script on a selected composition (make sure to first enter 20:20:51:22 into the Start Timecode field of the Composition Settings dialog):

var c = app.project.activeItem; // the selected composition
var a = c.displayStartTime;     // read the start time (in seconds)
c.displayStartTime = a;         // assigning it back already alters the value

Note that the value changes by the sheer fact of assigning the timecode back to the composition start time. My supposition is that the internal workings of AE convert the double value into a float, thereby reducing the precision at higher values. That’s definitely not nice. Depending on how critical the timecode is for your application, it might rule out the automation of certain tasks and make rendering more troublesome – you always have to enter the starting timecode manually, even if you precompose. I have not found a way to reliably code around this bug.
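This supposition is easy to reproduce outside of AE: Math.fround in plain JavaScript rounds a double to the nearest single-precision (32-bit) float, and it produces exactly the truncated value quoted above:

```javascript
// 20:20:51:22 at 23.976 fps, expressed in seconds:
// (20*3600 + 20*60 + 51) * 24 + 22 = 1758046 frames
var start = 1758046 / 23.976;       // 73325.2419085752...
// Round to the nearest 32-bit float - apparently what AE does internally:
var truncated = Math.fround(start); // 73325.2421875
// At ~0.0417 s per frame the error (~0.0003 s) is well below one frame,
// but it can be enough to tip the frame rounding near a frame boundary.
```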

The conclusion so far is: forget about accurate timecode renders from AE, especially if you are using 23.976, 29.97 or higher fps and do not start from a full second. Lower integer frame rates seem less affected, as do their easy divisors (12 for 24 fps, etc.). Unfortunately, even if this is fixed in future releases, it still means that you can’t rely on this feature in scripts targeting earlier versions of AE.

It’s the Feature Countdown…

Like last year around NAB, this time Adobe announced a preview of the upcoming release of its Digital Video Applications (DVA), which include Premiere Pro, After Effects, SpeedGrade, Media Encoder, Prelude and Audition.

You can read a general overview of the new features in a few places (Creative Cow, fxguide, Studio Daily), various Adobe blogs give you a complete overview of the upcoming features, Scott Simmons at ProVideoCoalition gives a more in-depth look into the new features of Premiere Pro itself, and for those visually oriented, Josh prepared a video. Here, as usual, I will attempt to give you a more nuanced view of the possible long-term impact on real-world workflows. I also hope to showcase some of the most important features in more detail pretty soon, and our favorite NLE will enjoy a separate detailed post. Right now let’s take a view from a few stories high.

The overall theme of this upcoming release is something that I would describe with a single word: “Finally…”. Finally we have a number of features that many have been asking for – sometimes for years. I know it may sound a bit ungrateful, and it’s no secret that we would all have loved to have these from the very beginning. But this release seems to really deliver – features galore, big and small, all to make your life as an editor easier. Therefore, when I use the word “finally”, it means that the feature is really there, without “buts” and “howevers”, and with full appreciation of the required time and resources, and the long road that everyone had to travel to arrive at this point.

One of the more interesting developments in this cycle is the new version of Prelude, with its ability to trim clips in rough cuts (finally), and tagging. The first feature makes it feasible to do rough cuts in Prelude without the need to precisely mark the subclips, or to adjust the selections on the fly. We can finally do quick assemblies, selects, what have you, and be as OCD about them as we want.

Tagging, the second feature, has tremendous potential, similar to Keyword Collections in Final Cut Pro X. The ability to apply tags during playback using a fully customizable, state-of-the-art, JSON-driven Tag Panel, or to apply several tags to a single marker, makes logging much easier and much faster. This is precisely what was missing in the metadata workflow between Adobe applications. Therefore we finally have a non-hierarchical way to quickly and consistently annotate the source footage. In the preview software versions that I had access to, tags are currently searchable only in Prelude, and only on a per-clip basis via the search box in the project panel. Hopefully this is going to trickle down at least to Premiere, which can already see the tags but can’t yet search for them or show them in any meaningful way, and the searching capabilities will become more advanced.

On a technical note, tagging is implemented via an interesting extension of Flash Cue markers, and I will definitely elaborate on it soon. Right now, if you for some reason feel unhappy that the FLV format is being totally ditched from Adobe video applications in this upcoming release, you can comfort yourself with the knowledge that not everything Flash-related is going to waste.

There are a few other new features that span several CC applications. For one, Premiere and SpeedGrade now sport Master Clip Effects (MCE). The idea is quite simple – when you apply an effect to a given clip, not to its instance in the timeline, it is applied as a separate “layer”, so to speak, below all timeline effects, and ripples through all instances of the master clip on all timelines in the project. It’s a great feature, especially useful for color correction, and the fact that SpeedGrade can also work in this mode makes it even better. Here, however, I am not saying “finally”, because there seem to be a few quirks associated with it. I will elaborate on them in a separate Premiere Pro note, and perhaps even wait for the release version of the software to see how these issues are resolved.

Regrettably, Master Clip Effects do not apply to sequences – yet? – which would make them totally awesome, as would the possibility to render and replace such a modified master clip or sequence to a codec of one’s choosing. But even without these, it’s a killer feature.

The mechanism used for MCE also allowed Adobe to create Live Text Templating for After Effects compositions in Premiere. Here we can see the roll-out of a 1.0 release of a feature which covers only the most basic – though also the most frequently requested – ability to edit selected text layers of Dynamically Linked AE comps straight in Premiere Pro. It’s a boon for all lower thirds, titles and similar graphics. It’s pretty simple to work with – you mark your composition as a Premiere template in the comp settings, and then any unlocked text layer in this composition or its pre-comps (very clever!) is going to be accessible in Premiere the same way MCEs are – by using match frame on the timeline clip or opening it in the Source Monitor, and then looking at the Effect Controls panel.

The drawback of Live Text Templates using the MCE mechanism is that if you want to duplicate such a composition in the timeline, it will duplicate the clip in the Project Panel, similarly to how Titles currently work. This is perhaps not the most elegant solution – ideally we’d have only a single template and adjust the effects on the instances in the timeline – but it does work, the link to the original AE composition remains, you can easily change anything in it in After Effects, and the changes will propagate to Premiere. Of course, it is not the final word in terms of templating. If you have comments or ideas, make sure to send them to Adobe – I know they are listening.

Next, both Premiere Pro and After Effects can now constrain their effects using masks. Here I can definitely say “finally!”, at least when it comes to Premiere. Finally you can create a vignette or limit your color correction using a mask. Such masks can also be tracked, using the same technology that has been available in After Effects for the past half a year. It works and it’s great. There is a minor limitation – currently there are only two types of masks in Premiere, elliptical and polygonal. No bezier shapes or variable feathering; to access these you still need to go to After Effects. But these two types will suffice for the usual 80% of cases, especially given the controls to expand or uniformly feather the mask.

Interestingly, the implementation in Premiere Pro seems much easier and more elegant than its After Effects counterpart. While masks in Premiere are shown under each effect, AE requires you to navigate your timeline, make sure your effect masks do not interfere with the masks that you apply to the layer, and so on. Quite a few inelegant steps along the way. The only upside is that you can reuse these masks for multiple effects, which is not possible in Premiere. But the masks do carry over when you send a clip to AE via Dynamic Link, which is also a welcome addition.

Thanks to these features a lot of things suddenly become much easier to achieve in Premiere itself, without the use of Dynamic Link or After Effects round-tripping, especially in the motion graphics area. The only large things currently missing here are the Pixel Motion algorithm for speed changes and Motion Blur. I’m going to elaborate on the impact of this release on the future of the Creative Impatience plugins in another note.

One of the best features in the upcoming Premiere Pro is the improved performance of the search field in the project panel. Yes, it seems tiny in comparison to all the other loudly touted upgrades, and it’s more of a fix than a marketable feature. But it means that we will finally be able to use the search box in larger projects, and that is no small change. On a similar note, marker names will finally be visible in the marker panel and can be searched for. For the list of all the features see Premiere’s blog, and an upcoming post on this website.

Astute readers and long-time users of Premiere have most likely already noticed one feature that is sorely missing to complete the “finally” list. Unfortunately, the Project Manager does not get an update in the upcoming release, and we will still not be able to easily transcode or trim our projects either for archive or exchange. If I had to name a major disappointment, this would be the one. Here’s hoping that we’ll see some working solution soon – IBC perhaps?

But enough complaining. On another front, SpeedGrade seems to have finally received support for AMD GPUs in the Direct Link mode, including dual GPUs – owners of new Mac Pros rejoice – a new YUV vectorscope that works like every other vectorscope on the planet and sports a decent graticule, a control to clamp the scopes instead of resizing them, and vertical sliders supplementing the offset/gamma/gain rings, easier to access and manipulate with a mouse. Some keyboard shortcuts have been unified with the ones present in Premiere, and you can also enable or disable any track in the timeline in the Direct Link mode, which previously was impossible, making it easier to consider various grading options or simply hide distracting elements.

Media Encoder can finally be installed separately from other applications, which should make reinstalling and troubleshooting much easier, in case you ever need to do that. Apart from a number of bug fixes, it also adds support for the industry-standard AS-11 DPP, which you will never need unless you are delivering broadcast material to the UK, and – perhaps more importantly – encoding of unencrypted DCPs, which will be helpful if you’re going to submit your great movie to a film festival. Now, if only we had access to the DCI P3 color space in Premiere…

Audition received a modest update as well – support for Dolby Digital, multichannel WAV files, and some multitrack enhancements that should make your life easier if your sessions stretch vertically enough for you to consider turning your monitor short edge up.

Finally, After Effects enjoys integrated Mercury Transmit for live preview, support for HTML5 third-party panels (which can be pretty significant in the long run), some updates to the Curves effect (still not compatible with Premiere’s RGB Curves though), and an interesting technology for improving the mattes that you get from Keylight or other keying effects. Both will definitely come in handy, especially since I’m currently heavily involved with the Hero Punk project, which was shot entirely on green screen.

There are also updates to Story and Anywhere, but I can’t meaningfully comment on either.

All in all, it looks like it’s going to be a pretty solid release, centered on Premiere Pro. Dare I say – finally? 😉

Adobe Anywhere – are we there yet?

At NAB 2012 Adobe gave an intriguing sneak peek at its technology for collaborative editing. At IBC 2012 Michael Coleman introduced the new Adobe Anywhere and presented its integration with Adobe Premiere. Like most demos, this one looked pretty impressive, and even gave away a few interesting developments in the upcoming version of Premiere, but it also left me pondering the larger picture.

Indeed, the Mercury Streaming Engine’s performance seems impressive. The ability to focus on the whole production instead of on a single aspect of it, automatic (?) file management (and backup?), the use of relatively slow machines on complex projects, working over long distances – all this is really promising. There is no doubt about it. However…

No back-end or management application was presented. No performance requirements were given. How soon does a server saturate its own CPU, GPU and HDD resources? Apart from performing all the usual duties, it must now also encode to the Adobe streaming codec, and all the horsepower must still come from somewhere. If the technology uses the standard frame servers developed for Dynamic Link and Adobe Media Encoder, how are the resources divided, and how is the quality of service ensured? How effective is the application, and more importantly – how stable? I hope the problems with database corruption in Version Cue are a thing of the past, and that they will not happen with Anywhere at any time.

Adobe engineers have been working on the problem for about 4 years, so there is a high chance that my fears are unwarranted. At the same time, though, I’ve learnt not to expect miracles, and there will always be some caveats, especially with early releases of the software.

Of course, this explains why Adobe wants to first target Anywhere at their broadcast clients. Perhaps there is also a sentiment that since the video division finally has enterprise clients, it needs to take care of them – hopefully not at the expense of smaller businesses and freelance editors like me. But setting up the servers, managing the hardware and the whole architecture takes expertise, and it is mostly the big guys who have the resources to implement the recommendations. We still do not know what the entry-level cost is going to be, but I highly doubt it’s going to be cheap.

Not that small post-houses would not profit from Anywhere. I can easily see how it could be incorporated into our workflow, and how it could resolve a few problems that we have to manage on a daily basis. But will we be able to supply the back-end architecture? That remains to be seen.

Interestingly, this approach of beefing up one’s machine room contrasts with another trend that we have been seeing – the horsepower of average desktops being more than enough to handle pretty complex projects. All of this remains totally unused in the model promoted by Adobe Anywhere. I wonder what Walter Biscardi thinks of it, and whether he plans on using it at all.

I’m also curious how version control is handled. How are the changes propagated – can you somehow merge conflicting projects, or do you need to choose one over the other? It is important. I gather that you can always go back to previous versions, but will they be available only from the administrative panel, or also from the applications themselves? Only time will tell.

It’s good that there is a possibility of expanding the system. I think a natural application that will be developed very shortly after the release will be some kind of review player, where you can see the most recent result of the project, add markers and possibly annotations (why not? as a Premiere Pro title, for example). It would be especially useful for mobile platforms, like the iPad, where Premiere or even Prelude is not available. Such tools could become crucial for approval and collaborative workflows in general.

There is also another point, which gave rise to the question in the title of this note. Is it the conforming uber-app that I’ve been arguing for? From the limited demonstrations to date, unfortunately, the answer is still no. We are not there yet. Even though Adobe Anywhere seems very promising for collaborative editing, it is not yet there for collaborative finishing (or archiving, for that matter).

The elephant in the room seems to be client review and approval. It’s OK to serve a quarter-resolution picture if you are editing on a laptop without external monitoring. But once you get into the realm of finishing, especially with your client at your back, you want the highest-quality picture that you can get, with as little compression as possible. Anywhere is most likely not going to be able to serve that. Would you have to leave the ecosystem then?

Even though support exists for After Effects, Premiere Pro and Prelude, the holy grail still remains the ability to take a Premiere project in its entirety and work on it in Audition or SpeedGrade, and then bring it back to Premiere for possible corrections in the picture edit with all the changes made in the other programs intact. Or to export an XML or EDL without the hassle of hours of preparation when custom plugins, effects or transitions are being used. Nope – not there yet.

There is also the question of its integration into larger, more diverse pipelines, involving other programs and assets, not only from Adobe, but also from other vendors, like The Foundry or Autodesk. It’s true that Anywhere does have its own API for developers, although it remains to be seen how open and how flexible the system will be, especially in terms of asset management.

Yet, despite all these doubts and supposed limitations, it seems to be a step in the right direction. And, as Karl Soule claims, the release of Anywhere is going to be big.

Feather crop in Premiere Pro

I think the idea of feathered edges on a piece of footage cropped with the standard Premiere Pro Crop effect is as old as the Crop effect itself. I know that I’ve been waiting for Adobe to make it happen since I started using their software, which means version 6.5 of Premiere (not yet “Pro” back then). And I know I’m not the only one.

How many of you have fallen prey to the hope that the Feather Edges effect would actually work as it should with cropped footage? Or wished for more control than blurring the alpha channel via Channel Blur? Or used the Titler or Photoshop pictures as track mattes?

Fortunately, there’s no more need for any of this. Not because the guys at Adobe actually decided to focus their efforts on this non-critical, though pretty uncomplicated, task. Drawing on my background as a would-be computer scientist, physicist, and – of course – video editor, I decided to delve into the dreaded Premiere Pro/After Effects SDK, and created the effect myself.

So, without further ado – here’s the Feathered Crop effect that I’ve written. It seems to be pretty popular (even more than the Vignette) and has gone through a few iterations already, each one adding new functionality.

The effect is free, but I appreciate donations, especially if you like the results that you are getting. I’d like to thank everyone for their generous support, and kind words. Enjoy!

Why Premiere Pro could use scripting

I’ve been testing the workflow from Premiere Pro to DaVinci Resolve (similarly to other, more renowned people). For many reasons I want to avoid sending a flattened file, relying instead on XML interchange, but a few simple, annoying issues make it pretty inconvenient:

  1. We’re using XDCAM EX in an mp4 wrapper and NXCAM (AVCHD) files, which Resolve does not support. Transcoding is necessary, although that’s a subject for another entry.
  2. Time remapping in Resolve is much worse than even in Premiere, not to mention After Effects. All speed changes should be rendered and replaced before exporting XML.
  3. Some effects should be rendered, but transitions should be left untouched.
  4. All Dynamic Link clips should be rendered and replaced.

Doing these things manually takes a whole lot of time, and is very prone to mistakes. This is a perfect example when a simple script would make one’s life so much easier. The script would:

  1. Traverse the timeline, looking for clips with the properties mentioned in points 2–4.
  2. Create a new video layer or a sequence, whichever would be faster.
  3. Copy the clips there one by one and queue an export for each to the desired codec, encoding the timecode and track either in the metadata or in the name.
  4. After the export is done, import the renders and replace the old clips with the new ones.

Alternatively, I could have one script to export (1-3), and another to reimport (4).
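For illustration only – Premiere exposes no scripting API, so the data model below is entirely invented – the traversal in point 1 boils down to a simple filter over clip records:

```javascript
// Hypothetical clip records; Premiere offers no scripting access,
// so this object shape is made up purely to sketch the logic.
function needsRender(clip) {
    return clip.speed !== 100       // point 2: any speed change
        || clip.hasFlaggedEffect    // point 3: effects that must be baked in
        || clip.isDynamicLink;      // point 4: Dynamic Link compositions
}

// Collect the clips that would be rendered, replaced and re-imported.
function clipsToReplace(timeline) {
    var result = [];
    for (var i = 0; i < timeline.length; i++) {
        if (needsRender(timeline[i])) result.push(timeline[i]);
    }
    return result;
}
```

The rest of the script would be bookkeeping around these few lines – queuing exports and swapping the clips back in.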

See? It’s relatively simple. The possibilities of scripting are almost infinite. For example, I could also change all the time-remapped clips automatically into Dynamically Linked AE compositions and render them using its superior Pixel Motion algorithm – although I would rather Adobe included it in Premiere itself, getting rid of the old and awful frame blending. I could even attempt to change them to their Twixtor equivalents, although I must say that my experience with this effect is pretty crashy.

I looked at the Premiere Pro SDK to see if I could write a plugin that would make this job easier, but as far as I can tell, no such possibility exists. The plugin architecture for Premiere is pretty limited and compartmentalized, and using C++ for this seems like overkill.

Adobe, please support scripting (JavaScript, Python, or any other obscure language) in Premiere Pro. This way users will be able to create their own tools to solve the inefficiencies of the program, and your job will become much easier. And Premiere Pro will prosper and develop much more quickly and effectively. Besides – you don’t want FCPX to overtake you, do you?

The anatomy of a promo

This is my latest production. It’s a promotional spot for a non-profit organization that is dedicated to another passion of mine – historical personal combat.

What follows is an overview of the production of this short movie, including how the screenplay changed during production, breakdown of my editing process, and a few techniques that we used in post-production to achieve the final result.


It was a collaborative, voluntary effort, and included cooperation from parties in various cities in Poland. The Warsaw sequences (both office and training) were shot with a Sony EX-1R at 1080i50, with the exception of slow-motion shots that were recorded at 720p60. The sequences from Wroclaw and Bielsko-Biala were shot with DSLRs at 1080p25. Therefore the decision was made to finish the project in 720p25, especially since the final distribution would be via YouTube.

The most effort went into filming the Warsaw training, where we even managed to bring a small crane on set. Of the two crane shots that we filmed, only one was partially used in the final cut – the one where all the people are running across the open clearing. We envisioned it as one of the opening shots. As a closing shot we filmed, from the same spot, the goodbyes and people leaving the clearing while the camera was moving up and away. It seemed a good idea at the time, one that would be a nice closure to the whole sequence, and perhaps to the movie as well.

We had some funny moments when Michal Rytel-Przelomiec (the camera operator and DOP) climbed up a tree to shoot the running people from above, and after a few takes he shouted that he could last only one more, because the ants had definitely noticed his presence and started their assault. What a brave and dedicated guy!

A few days later we were able to shoot the office sequence. The first (and back then still current) version of the screenplay involved a cut, after the text message was sent, to what was supposedly a reminiscence of another training, and finished up with a return to the office, where Maciek (the guy in the office) would pick up a sword and rush at the camera. Due to spatial considerations on set (we were filming in Maciek’s office after hours), we decided to alter the scenario, especially since we had already filmed the training sequences, including the farewell closing shot. Therefore, instead of Maciek picking up a sword and attacking the camera, he actually rushed away to the training, leaving the office for something dearer to his heart. It was also Michal’s idea to shoot the office space at 3200K white balance to create a more distant, cold effect, and it worked really well.


All the footage (about 2 hours’ worth) was imported into Adobe Premiere CS5, which allowed us to skip transcoding and work with the source files from beginning to end. After Effects CS5 and Dynamic Link were used for the modest city titles only, although they could perhaps have been used to improve a few retimed shots. Music and effects were also mixed in Premiere.

The promo was in production for over half a year, mostly because we were waiting for footage from other cities, some of which never materialized, and we decided to finish the project with what we had. The actual cutting was pretty quick, and mostly involved looking for the best sequences to include from the other cities. Some more time was spent on coming up with the desired final look for the short movie.


The general sequence of events was laid out by the screenplay written by Maciek Talaga. At first the clip started immediately with the corporate scene. We were supposed to have some similar stories from other cities, and I was ready to use a dual or even quadruple split screen for parallel action, but since the additional footage never materialized, I decided to pass on this idea. In the end it allowed us to focus more on Maciej Zajac and made him the main hero of our story, which was not planned from the start.

After leaving the office we had to transition to the training, and preferably to another place. Wroclaw had a nice gathering sequence and a completely different atmosphere (students, backpacks, friendship and warmth), which made an excellent contrast to the cool corporate scenes from Warsaw, presenting another kind of people involved in pursuing the hobby.

The order of the following cuts was determined by the fact that we had very little material from Bielsko-Biala, and all of it involved the middle of the warm-up. We had excellent opening shots from Warsaw, which were great for setting the mood and adding some more mystery. I used them all, and even wanted to transition to push-ups and other exercises; however, once the guys had already stopped running, coming back to it in the Bielsko sequence ruined the natural tempo of the event. Therefore, with great regret, I had to shorten the crane shot to the extent that it most likely does not register as a crane shot at all, and transition to Bielsko for the remaining part of the warm-up.

Coming back to Warsaw seemed a little odd, so I decided to cut to Wroclaw to emphasize the diversity, and to a short sequence with a few shots of a warm-up with swords. Here I especially like the last two cuts: the one that cuts on action with the move of the sword, underlined by the camera move in the next shot, and then the one that moves the action back to Warsaw, where a guy exits the frame with a thrust. I was considering using a wipe here, but it looked too cheesy, so I decided to stick to a straight cut.

As an alternative, I could have first come back to Warsaw and moved the Wroclaw sequence between the warm-up and the sparring, but this would have created an alternating cadence of Warsaw–other place–Warsaw, and I wanted to break this rhythm and avoid that. Therefore I was stuck in Warsaw for the remainder of the movie, even though it had at least two distinct parts left. We had an ample selection of training footage from Wroclaw; however, it was conducted in a gym, and including it would ruin the overall mood and the contrast of closed office space vs. open training space, so in the end we decided against it.

Unfortunately we did not have any footage of gearing up, so the transition between the flourish part in Warsaw and the sparring is one of the weakest parts of this movie, and I would love to have had something else to show. I did not come up with anything better than the cut on action, though.

The sparring sequence is mostly cut to music – a selection of the most dynamic and most spectacular actions from our shoot (not choreographed in any way), including a few speed manipulations here and there to make the sword hits land at the proper moments or to emphasize a few nice actions, including the disarm at the end. There were a few lucky moments during shooting where Michal zoomed in on a successful thrust, and I tried to incorporate them as much as I could, to obtain the best dynamics and to convey as much of the atmosphere of competitive freeplay as possible.

The sequence ends on a positive note with the fighters removing their masks and embracing each other. I tried to avoid cutting in the middle of this shot, but it was too long, and I wanted to keep both the moment where the fencing masks come off and the glint on the blade of the sword at the end (which was not added in post). In the end the jump cut is still noticeable, but it holds up. There is a small problem with the music at the end, because I had to cut it down and extend it a little bit to hold it through the closing sequence, but it is minor and does not distract too much from the overall story.

Apart from the serious and confrontational aspect of the training, we wanted to stress the companionship, and I believe that both the meeting sequence in Wroclaw and the final taking off of the masks and embrace convey this message well.

During cutting I realized that, regardless of the added production value of the crane farewell shot, there was no way to include it at the end. It was too long, it lessened the emotional content, and it paled in comparison to the final slow-motion shots that I decided to use, including the final close-up of Maciek, which constituted the ellipsis present in the first version of the screenplay. Therefore it had to go, regardless of our sentiment towards it.

The feedback from early viewers was that Maciej Zajac was not easily recognizable to people who did not know him, which made us wish for something more. The idea of beginning with sounds and no picture came from Maciek Talaga, and I only tweaked it a little. We first thought about opening with the shot where Maciej takes off the fencing mask, but it did not look good at all, and the transition to the office scene was awkward at best. In the end I proposed the closing close-up as the first shot, which in our opinion nicely tied the whole thing together: it introduces Maciek, sets the focus on him as a person, and nicely contrasts the “middle ages dream or movie” with his later work at the office. The excellent brief textual messages authored by Maciek Talaga also added a lot to the whole idea.

Color grading

All color correction was done in Premiere Pro with the use of standard CC filters and blending modes. I experimented with the look in the midst of editing, trying to come up with something that would best convey the mood. I started with a high-contrast, saturated theme, and quickly moved to a variation of bleach bypass with a slightly warmer, yellowish shift in the midtones. However, it still lacked the necessary punch, and in the end I decided to over-emphasize the red color (an important one for the organization as well) with a slight Pleasantville effect. It gave the movie a slightly unreal, mysterious feeling, and the contrast underlined the seriousness of the effort.

The office sequence did not need much more than the variation of bleach bypass, as it contained nothing actually red. An increase in contrast and a slight desaturation were mostly enough to bring it to the desired point, thanks to Michal’s idea of shooting it at a lower Kelvin. The Warsaw sequence required an additional “leave color” layer, where everything apart from red was partially desaturated, plus a little more push towards yellow in the highlights and midtones, all blended in color mode over the previous bleach bypass stack. I will do a detailed breakdown of the color correction I used in a separate entry, although with the introduction of SpeedGrade in Adobe CS6 this technique might become obsolete.
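The “leave color” idea is easy to prototype outside Premiere. Below is a minimal sketch in Python/NumPy — not the actual Premiere filter, just an illustration of the principle, with a crude red-dominance test standing in for a proper hue-based selection: everything that is not predominantly red is blended toward its luma.

```python
import numpy as np

def leave_color_red(img, strength=0.8):
    """Partially desaturate every pixel that is not predominantly red.

    img: float array of shape (H, W, 3), RGB values in [0, 1].
    strength: how far non-red pixels are pulled toward grayscale.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Rec. 709 luma as the grayscale target.
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    gray = np.stack([luma] * 3, axis=-1)
    # Crude "redness": how much the red channel dominates the other two.
    redness = np.clip(r - np.maximum(g, b), 0.0, 1.0)
    mask = np.clip(redness * 4.0, 0.0, 1.0)[..., None]  # 1 = keep color
    # Red areas keep their saturation; everything else drifts toward gray.
    return img * mask + (img * (1 - strength) + gray * strength) * (1 - mask)

# Tiny demo: one saturated red pixel and one saturated green pixel.
demo = np.array([[[0.9, 0.1, 0.1], [0.1, 0.9, 0.1]]])
out = leave_color_red(demo)
```

The red pixel passes through untouched, while the green one loses most of its saturation; tuning `strength` below 1.0 gives the “partially desaturated” look rather than a full black-and-white Pleasantville.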

Michal also suggested a clearer separation between the various cities, so I pushed Wroclaw more towards blue, as it involved more open air, and Bielsko more towards yellowish-green, to emphasize its more “wild” aspect. In the end, I had the most trouble with the footage from this place, because as shot it was dark, had a bluish tint, and required pretty heavy grading, which on H.264 is never pleasant. Overall I’m satisfied with the results, although a few places could still benefit from a few more touches.

The blooming highlight on the fade-out of the opening and closing shot was a happy accident, a result of fading out all corrected layers simultaneously mixed with the Lighting Effects filter, which was at first intended only for vignetting (as mentioned in another entry).

I like the overall result. I also enjoyed the production every step of the way, and even though it could still be improved here and there, I am happy. It was an excellent team effort, and I would like to thank all the people who contributed to its final look.

Image deblurring and warp stabilizer would be a killer combo

In case you have been living under a rock and have not yet seen the recent Adobe presentation on image deblurring, here is the video. I recommend you watch it first, and then read on:

The demo itself is pretty impressive. I’m sure it won’t fix every photo, and it will have its own share of problems, but I don’t think anybody would disagree that this technology is truly revolutionary. Richard Harrington blogged “It will change everything”, and it surely will. There is a lot of creative potential in this technology as it is.

However, the real killer would be translating it to video. I can’t even begin to count how many times I have tried to stabilize shaky footage, only to back down because of the motion blur that no stabilizer has yet been able to remove. No matter how good the stabilizer, be it a simple track with a position/rotation/scale lock or a more advanced algorithm like the warp stabilizer, if the camera movement is erratic you will get a variable amount of motion blur, which is often more painful to watch than the original shaky footage. Therefore I took all claims about the warp stabilizer being the new Steadicam with more than a grain of salt.

However, if the warp stabilizer did include image deblurring, it would indeed be another game changer. Interestingly, kernel calculation for moving pictures might actually be helped quite a lot by temporal data and tracking (although subframe calculations would still be necessary), and the algorithm for video might in the end be less computation-intensive on a per-frame basis. And instead of a simple stabilize option, we would have the option to remove motion blur, or even to calculate proper motion blur for the newly stabilized footage.
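To make the distinction concrete: once the blur kernel is known, removing the blur is classic non-blind deconvolution. What Adobe demonstrated — estimating the kernel blindly from the image itself — is the hard part, and that is exactly where tracking data could help for video. Here is a minimal NumPy sketch of the easy half (a toy Wiener deconvolution with a known horizontal motion kernel, assuming circular convolution and synthetic data; none of this is Adobe’s algorithm):

```python
import numpy as np

def motion_kernel(length, size):
    """Horizontal linear motion-blur PSF of the given length,
    centered in an image-sized (size x size) kernel."""
    k = np.zeros((size, size))
    row = size // 2
    start = (size - length) // 2
    k[row, start:start + length] = 1.0 / length
    return k

def wiener_deblur(blurred, kernel, snr=100.0):
    """Non-blind Wiener deconvolution in the frequency domain.
    kernel must be the same size as the image, centered."""
    H = np.fft.fft2(np.fft.ifftshift(kernel))
    G = np.fft.fft2(blurred)
    # Wiener filter: inverse filter regularized by the noise level.
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * G))

# Demo: blur a synthetic image with a 7-pixel motion kernel, then recover it.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
psf = motion_kernel(7, 64)
blurred = np.real(np.fft.ifft2(
    np.fft.fft2(sharp) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deblur(blurred, psf, snr=1e6)
```

With the kernel known exactly and almost no noise, the restoration is nearly perfect; with a real photo the kernel must be estimated first, and frequencies the kernel wiped out can only be regularized, not recovered, which is why blind deblurring is the genuinely impressive part of the demo.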

How great would that be, huh?

For those willing to delve deeper into the history of this research, here is a nice article that describes it: You saw the unblur clip with the audience gasping…here is the source. And for those interested in other impressive work at Adobe, check out the rest of the Adobe sneak videos. Look especially at video meshes, pixel nuggets and local layer ordering. These technologies might find their way to your favorite editing software as well.