Feather crop in Premiere Pro

I think the idea of feathered edges on a piece of footage cropped with the standard Premiere Pro crop effect is as old as the crop effect itself. I know I’ve been waiting for Adobe to implement it since I started using their software, which means version 6.5 of Premiere (not yet “Pro” back then). And I know I’m not the only one.

How many of you have fallen prey to the hope that the “Feather Edges” effect would actually work as it should with cropped footage? Or wished for more control than blurring the alpha channel with Channel Blur? Or resorted to the Titler or Photoshop images as track mattes?

Fortunately, there’s no need for this any more. Not because the folks at Adobe decided to focus their efforts on this non-critical, though fairly uncomplicated, task. Drawing on my background as a would-be computer scientist, physicist, and – of course – video editor, I decided to delve into the dreaded Premiere Pro/After Effects SDK and create the effect myself.

So, without further ado – here’s the Feathered Crop effect that I’ve written. It seems to be pretty popular (even more so than the Vignette) and has already gone through a few iterations, each one adding new functionality.
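
For the technically curious, the core of the technique is simple. Here is a minimal numpy sketch of the general idea – not the plugin’s actual source, which is written in C++ against the SDK – where the matte ramps from fully transparent at the crop edge to fully opaque over the feather distance:

```python
import numpy as np

def feathered_crop_matte(width, height, left, top, right, bottom, feather):
    # Signed distance (in pixels) from each pixel to the nearest crop
    # edge; positive means inside the crop rectangle.
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float64)
    distance = np.minimum.reduce([xs - left, right - xs, ys - top, bottom - ys])
    if feather <= 0:
        return (distance >= 0).astype(np.float64)  # hard crop
    # Alpha ramps linearly from 0 at the edge to 1 at `feather` px inside.
    return np.clip(distance / feather, 0.0, 1.0)

# Example: a 1920x1080 frame, 100 px cropped on each side, 50 px feather.
matte = feathered_crop_matte(1920, 1080, 100, 100, 1820, 980, 50)
# frame[..., 3] = frame[..., 3] * matte   # multiply into an RGBA alpha channel
```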

The effect is free, but I appreciate donations, especially if you like the results you are getting. I’d like to thank everyone for their generous support and kind words. Enjoy!

An idea on how to dramatically improve Premiere Pro

I will admit right at the beginning – the idea is stolen from Autodesk Smoke 2013. I hope they don’t have a patent for that, because it’s so fantastic. But first let me make an obligatory digression.

There are a few things to like in Smoke, and others not to like. Something that really turned me off was the fact that something as simple as a clip with an alpha channel would not play in the timeline without rendering. Excuse me? As far as I know, no other NLE on the market still requires that. And we’re not even in 2013 yet. This constant need for rendering was what turned me away from Final Cut Pro. I thought we were long past that.

I also didn’t like the fact that the order of applied effects is pretty strict, although ConnectFX and Action are really well-developed and pretty flexible tools, as you would expect from the makers of great finishing software. This is the part I liked. But after creating your comp and coming back to the timeline, you always have to render it to preview. Period.

The real trick of the Smoke rooms seems to come down to clever media management that is hidden from the user. I fail to comprehend how it is different from rendering a Dynamic Linked composition in Premiere Pro – except for the fact that Premiere will at least attempt to play it when asked, while Smoke will just show “Unrendered frame”. But then, maybe it’s just me.

However, Smoke has a feature that in my opinion is awesome, and should be implemented in Premiere Pro as soon as possible. It treats each source clip as a sequence from the get-go. It’s a brilliant idea.

In case you are wondering why I am so excited about it, let me list what you could do with a clip before you put it on the timeline, once such an option is available:

  1. Set audio gain and levels.
  2. Add additional audio channels or files and synchronize them.
  3. Composite another clip on top – or even make it a fully-fledged composition.
  4. Add versions of the clip.
  5. Apply a LUT or a grade.
  6. Pre-render the clip into a proxy, or transcode it dynamically, as After Effects does.

Can you see it now? You can work with your source material before making any edit, and at the same time all these adjustments will carry over to clips being inserted into the timeline – or to clips already present – after the edit is made.
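
To make the idea more concrete, here is a tiny, purely hypothetical sketch of such a data model. None of these names come from Premiere or Smoke – they are only meant to show what “every clip is a sequence” could mean in practice:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Layer:
    source_file: str
    effects: List[str] = field(default_factory=list)  # e.g. a LUT or a grade
    enabled: bool = True

@dataclass
class SourceClip:
    """A source clip treated as a one-item sequence from the get-go.

    Hypothetical model only -- not how Premiere actually stores clips.
    """
    name: str
    layers: List[Layer]                                    # clips composited on top
    audio_gain_db: float = 0.0
    extra_audio: List[str] = field(default_factory=list)   # synced external audio
    proxy: Optional[str] = None                            # pre-rendered proxy path
    versions: List["SourceClip"] = field(default_factory=list)

# Every edit then inserts this container rather than the raw file, so gain,
# grades and synced audio travel with the clip onto the timeline.
```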

I would love to see this implemented in Premiere. I don’t think it would be that hard, since sequence nesting is already possible, as is merging audio clips. It seems to be only one more step, with perhaps some clever way to turn layers or effects on and off for a clip already present on the timeline. It is the ultimate flexibility that would allow quite a few new workflows to appear. I hesitate to use the abused words “a game changer” – but I can’t help feeling terribly excited about it.

Oh, and while we’re at it, why don’t we tie it in with scripting, and with the Premiere Pro project file as a universal container for other applications to work from?

My vision of Adobe SpeedGrade

SpeedGrade seems like a very promising addition to the Adobe Creative Suite, as I have already mentioned. However, after playing with it for a short while, I found with regret that it does not fit our current infrastructure and workflows. Below is a short list of the changes that I consider pretty important. Judging by the comments and questions asked during the Adobe SpeedGrade webinar, these requests seem to be quite common among other interested parties.

First, as of now the only way to output a video signal from SpeedGrade is via a very expensive SDI daughterboard for nVidia Quadro cards. This is a pretty uncommon configuration in most post facilities. These days a decent-quality monitoring card can be bought for a tenth of the price of the nVidia SDI option. If the software is to gain wider popularity, this is an issue that needs to be addressed.

Adobe seems to have been painfully aware of its importance, even before the release. I’m sure that had it been an easy task, it would have been accomplished long ago. Unfortunately, the problem is rooted deep in the SpeedGrade architecture. Its authors say that SG “lives in the GPU”. This means that getting output to another device might require rewriting a lot – if not most – of the underlying code, similar to what Adobe did in Premiere Pro CS5 when it ditched QuickTime and introduced its own Mercury Playback Engine. Will they consider the rewrite worthwhile? If not, they might just as well kill the application.

Second, as of now SG supports a very limited number of grading control surfaces. Unless the choice is widened to include at least the Avid Artist Color and the new Tangent Element panels, it will again push the application into the corner of obscurity.

Third, the current integration with Premiere is very disappointing. It requires either using an EDL or converting the movie into a sequence of DPX files. Its choice of input formats is also very limited, which means that in most cases you will have to forget about one of Premiere’s main selling points – native editing – or embrace an offline-online workflow, which is pretty antithetical to the flexible spirit of the other Adobe applications.

The integration needs to be tightened, and (un)fortunately Dynamic Link will not be the answer. DL is fine for single clips, but a colorist must operate on the material as a whole to be effective. Therefore SG will have to read whole Premiere sequences and work directly with Premiere’s XML (not to be confused with FCP XML). That also means it will have to read all the file formats, and render all the effects and transitions, that Premiere does. Will it be done via Premiere becoming a frame server for SpeedGrade, as After Effects is for Premiere when Dynamic Link is employed? Who knows – after all, Media Encoder already runs a process called PremiereProHeadless, which seems to be responsible for rendering without the Premiere GUI being open, so a basic structure seems to be in place already. How much will it conflict with SpeedGrade’s own frame server? How will effects be treated to achieve real-time playback? Perhaps SpeedGrade could use Premiere’s render files as well?

An interesting glimpse of what is to come can also be seen in an obscure effect in After Effects which allows you to apply a custom look from SpeedGrade to a layer. Possibly something like this is in store for Premiere Pro, where an SG look would be applied to graded clips. The question remains whether the integration will follow the way of Baselight’s plugin, with the possibility of making adjustments in Premiere’s effect panel, or whether we will have to reopen the project in SG to make changes.

This tighter integration also means that export will most likely be deferred to Adobe Media Encoder, which would solve the problem of the pretty limited choice of output options presently available in SpeedGrade.

As of now, SpeedGrade does not implement curves. Even though the authors claim that any correction done with curves can be achieved with the other tools present in SG, curves are sometimes pretty convenient and allow you to solve some problems more efficiently. They will also be more familiar to users of other Adobe applications like Photoshop or Lightroom. While not critical, introducing various curve tools would widen SG’s user base and make it more appealing.
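
For readers who have not used them: a curve is just a monotone mapping from input to output levels, edited as a handful of control points and applied as a lookup table. A minimal numpy sketch, with made-up control points (real curve tools use smooth splines; linear interpolation keeps the sketch short):

```python
import numpy as np

# Control points of an assumed gentle S-curve (input -> output, 0..1).
points_in  = np.array([0.00, 0.25, 0.75, 1.00])
points_out = np.array([0.00, 0.20, 0.80, 1.00])

# Bake the curve into a 256-entry lookup table.
lut = np.interp(np.linspace(0.0, 1.0, 256), points_in, points_out)

def apply_curve(frame_8bit):
    """Apply the curve to an 8-bit RGB frame (values 0-255)."""
    return (lut[frame_8bit] * 255.0).astype(np.uint8)
```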

Speaking of appeal, some GUI redesign is still in order to make the application more user-friendly and Adobe-like. I don’t think a major overhaul is necessary, but certainly a little would go a long way. Personally I don’t have problems with how the program operates now, but for less technically inclined people it would be good to make SpeedGrade more intuitive and easier to use.

These are my ideas on how to improve the newest addition to the Adobe suite. As you can see, I am again touting the idea of a container format for video projects – and Premiere Pro’s project file, being XML, is a perfect candidate. Frankly, if SpeedGrade is not reading .prproj files by the next release, I will be very disappointed.
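
Reading .prproj files from the outside is less exotic than it may sound – to the best of my knowledge the file is simply gzip-compressed XML. A minimal Python peek (the schema inside is undocumented, so counting element tags is about as far as one can safely go):

```python
import gzip
import xml.etree.ElementTree as ET
from collections import Counter

with gzip.open("project.prproj", "rb") as f:  # any Premiere project file
    root = ET.fromstring(f.read())

print(root.tag)  # root element of the (undocumented) project schema
print(Counter(element.tag for element in root.iter()).most_common(10))
```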

Why Premiere Pro could use scripting

I’ve been testing the workflow from Premiere Pro to DaVinci Resolve (as have other, more renowned people). For many reasons I want to avoid sending a flattened file, relying on XML interchange instead, and a few simple but annoying issues make it pretty inconvenient:

  1. We’re using XDCAM EX in an MP4 wrapper and NXCAM (AVCHD) files, which Resolve does not support. Transcoding is necessary, although that’s a subject for another entry.
  2. Time remapping in Resolve is much worse than even in Premiere, not to mention After Effects. All speed changes should be rendered and replaced before exporting the XML.
  3. Some effects should be rendered, but transitions should be left untouched.
  4. All Dynamic Link clips should be rendered and replaced.

Doing these things manually takes a whole lot of time and is very prone to mistakes. This is a perfect example of where a simple script would make one’s life so much easier. The script would:

  1. Traverse the timeline, looking for clips with the properties mentioned in points 2-4 above.
  2. Create a new video layer or a new sequence, whichever would be faster.
  3. Copy the clips there one by one and queue an export of each to the desired codec, encoding timecode and track either in metadata or in the file name.
  4. After the export is done, import the renders and replace the old clips with the new ones.

Alternatively, I could have one script to export (steps 1-3), and another to reimport (step 4). A rough sketch of the first one follows.
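
To be clear, everything in this sketch is hypothetical – the premiere module and every call on it are invented for illustration, since no such scripting API exists today:

```python
# Hypothetical Python API for Premiere Pro -- every name below is invented.
import premiere

FLAGGED_EFFECTS = {"Time Remapping"}  # example set of effects to bake in

def needs_baking(clip):
    """Points 2-4 above: speed changes, flagged effects, Dynamic Link comps."""
    return (clip.speed != 100.0
            or clip.is_dynamic_link
            or any(effect.name in FLAGGED_EFFECTS for effect in clip.effects))

sequence = premiere.active_sequence()
jobs = []
for track_number, track in enumerate(sequence.video_tracks, start=1):
    for clip in track.clips:
        if needs_baking(clip):
            # Encode track and timecode in the name so the render can be
            # conformed back on reimport (step 4).
            name = f"V{track_number}_{clip.start_timecode}_{clip.name}"
            jobs.append(premiere.queue_export(clip, preset="DNxHD", name=name))

premiere.wait_for(jobs)
for job in jobs:  # step 4: import the renders and swap them in
    premiere.replace_clip(job.source_clip, job.output_file)
```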

See? It’s relatively simple. The possibilities of scripting are almost infinite. For example, I could also automatically turn all the time-remapped clips into Dynamic Linked AE compositions and render them using After Effects’ superior Pixel Motion algorithm – although I would rather Adobe included it in Premiere itself, getting rid of the old and awful frame blending. I could even attempt to replace them with their Twixtor equivalents, although I must say that my experience with that plugin is that it is pretty crash-prone.

I looked at the Premiere Pro SDK to see if I could write a plugin that would make this job easier, but as far as I can tell, no such possibility exists. The plugin architecture for Premiere is pretty limited and compartmentalized, and using C++ for this seems like a bit of an overkill anyway.

Adobe, please support scripting (JavaScript, Python, or any other language, however obscure) in Premiere Pro. This way users will be able to create their own tools to work around the program’s inefficiencies, and your job will become much easier. Premiere Pro will prosper and develop much more quickly and effectively. Besides – you don’t want FCPX to overtake you, do you?

What pro users want from Premiere Pro, what Adobe will not deliver and why

After acquiring IRIDAS, Adobe is now in a perfect position to replace the EOLed Final Cut Studio as the preferred suite of applications for editing and – now relatively low-cost – finishing. This is also what is most likely to happen, even though personally I would love Premiere Pro, After Effects, Lightroom, Photoshop, Audition and now SpeedGrade to be integrated into one single seamless application a la Smoke. I am obviously not the only person to think about that (see the comments here), nor even the first one by any stretch of the imagination.

Here is why I don’t think it will happen, though. For one, the recent changes in pricing – the fact that Adobe software has become very affordable for new businesses and startups – are something the company is not going to throw away by building a single finishing application encompassing the functionality of the whole suite. Arguably, the fact that you can essentially rent a single specific tool for your job for next to nothing is one of the things that make Adobe software more popular than ever. This business model would be seriously undermined by converting the suite into a single application, and actually none of us thinks that would be a wise thing to do.

Secondly, the architectures of After Effects and Premiere Pro – not even mentioning Audition – seem to be quite different. Even though Adobe has gone to great lengths to ensure proper translation of projects between the applications, there is a world of difference between this and actually merging the two together in a Smoke-like manner. Don’t be fooled by the similarities of the interface: the engines running these two are quite different, and enclosing one in the other might be impossible without rewriting most of the code. Adobe already did that while creating the 64-bit applications, and there is hardly any incentive to do it again, especially since the time for development has actually shortened due to the “dot half” releases.

The only sensible way to approach this would be to create a new application from scratch, but that would essentially duplicate the features of already existing programs without any real benefit to the business, and at no less than twice the cost. This is not something that is going to happen without a serious incentive. Perhaps the incorporation of SpeedGrade into the suite might be a good pretext, but it all depends on the underlying architecture of the program itself, and it is not going to happen soon, so don’t hold your breath until CS7 or even CS8.

I bet that in the short term we will see a remake of SpeedGrade’s interface to better suit the CS family, perhaps a few more options added, and a “Send to…” workflow established between Premiere, After Effects and SpeedGrade, perhaps with the addition of a plugin a la the recent Baselight development for the old FCP. This is what it is feasible to expect in CS6. SpeedGrade will be able to see and render all Premiere and After Effects effects, transitions and so on, thanks to the incorporation of either Dynamic Link or the standalone renderers already present in Adobe Media Encoder, and hopefully it will be able to merge projects from Audition as well.

Perhaps a new common project file format will be born, independent of any single application, akin to a container, where each application reads and works only on its own parts, and it all comes together in SpeedGrade (finishing), Bridge (playback) or even AME for export. And if nobody at Adobe is working on such an idea yet, then please start immediately, because this is exactly what is needed in big shared workflows. This format would get rid of some of the really annoying problems of Dynamic Link, and would open up a lot of possibilities.

In the long run we might see the birth of a new Ubertool – a true finishing app from Adobe – and if the container-project idea is embraced, the workflow might even be two-way. I would imagine that this tool would also incorporate some management ideas from the recently demonstrated Foundry Hiero, like versioning, conforming, or even preparing material to send to Premiere Pro, AE, Audition, etc. for other artists. The Adobe suite needs more than just color grading software for completion: it needs a true project management and finishing application. That would be an excellent logical step for Adobe to take, and then its workflow would really encompass all stages of pre-production, production proper and post. Which I hope will happen in the end.

One thing I am sure Adobe will not do: they will not repeat the debacle of FCPX. The new Ubertool might be able to do all that the other apps do, and probably more, perhaps even better, but Adobe will not fade the smaller tools out of existence immediately, if ever, and all of them will be able to talk to each other as seamlessly as possible.

Tactile input is important

The recent (?) fascination with touch-controlled interfaces is perhaps good for their development, but in my opinion they are not necessarily the future of device manipulation.

One of the big mixed blessings is that you have to rely on visual feedback to operate such an interface. Directly manipulating items that you would always want to look at anyway – like photos – is perhaps a tad faster, and wide-sweeping gestures beat hunting for “next” or “previous” buttons; but it is not necessarily so with interfaces that rely on mixed tactile/visual, and sometimes even auditory, feedback.

An excellent example is the keyboard. A keyboard gives you at least three kinds of feedback: tactile (the feel of pressing the key, and its shape), auditory (the click of the key press), and finally visual – letters appear (or not :) ) on the screen. Many people do not appreciate the first two, mostly because they were never trained to type without looking at the keyboard to find each letter, or to use all their fingers while typing. Personally I believe that touch-typing classes should be obligatory in primary or high school, and would be more useful in daily life than many other subjects. For example, when visiting a doctor, I often find that he takes more time to type the diagnosis with two fingers than to actually examine his patient. What a terrible waste of time.

Anyway, the reason mixed feedback is important is efficiency. Once you learn to stop looking at the keyboard while you type, you reach a new level of efficiency. You start relying on tactile and auditory input to feel and hear that you have pressed a key – and, to an extent, to know which key you pressed – using visual feedback only for confirmation, not estimation. For those who wonder why there are small embossed dashes on the F and J keys: they mark where your index fingers rest when you use proper typing technique.

A touch screen does not give you this advantage. You use visual cues to find the proper key – covering the very key you want to press, and thus robbing yourself of feedback – and then use vision again for verification. You use a single channel to process the information. It is slower not only because tactile information reaches your brain and is processed faster than visual information, but also because you use serial processing instead of parallel. While typing on a classical keyboard I know I have pressed the wrong or the right key even before I get the confirmation on the screen. Therefore it is much easier for me to switch from typing to correcting mode (and yes, there is a noticeable switch going on) than it is when I am typing on a touch-screen keyboard. My impression is also that the responsiveness and robustness of touch-screen interfaces are still not at the level of keyboards, but I might be wrong, since this field evolves very quickly.

Another example where tactile input is vital is a device that one operates without looking at it. One that comes to mind is the mp3 player. Usually this device sits in my pocket, or in some other place that I do not have easy visual access to, and for a good reason. Therefore if I want to increase the volume, lock the controls or change/rewind the track, I would prefer not to have to put the device in the center of my visual attention. Running, cycling, driving – these are activities that do not lend themselves well to visual distractions. Admittedly, using any device while driving lessens one’s concentration and might result in an accident, but this is precisely why most car interiors are built in such a way that you can rely on tactile input to turn on the radio, the heating, the air conditioning and everything else.

Therefore, it makes little sense to design an mp3 player around touch-screen input. When physical buttons are present, you can learn their layout and operate the device without needing to look at it. You get immediate auditory feedback – the volume increases, the next track starts playing, and so on. And you can easily lock and unlock the controls, which is perhaps the biggest advantage of all.

There is also another issue. When using a touch screen to manipulate photos, you often cover the very part you are interested in manipulating, thereby robbing yourself of the visual feedback that the touch screen is supposed to give you. This is not necessarily an optimal way to work. I agree that it is the way we manually paint or write, but that only shows the limitation of our tools (limbs). Personally, when faced with the choice between a touch-screen tablet and a standard screen with a cursor driven by a traditional graphics tablet, I prefer the latter, simply because my field of view is wider. Motion estimation is similar in both cases, even if the second setup takes more time to learn and get used to – like learning to use any tool or device.

All these examples show that if touch-screen interfaces are to become more useful, they will have to evolve additional feedback mechanisms. As of now, there are too many applications where they are detrimental to efficiency, and when we consider them with the “coolness” factor set aside, their sensible application is still limited in scope.

Features of an ideal mp3 player

I mostly listen to audiobooks, podcasts and lectures, but occasionally I also put on some music. So far I have not encountered a player that incorporates all of the following features, which I consider crucial:

  1. variable playing speed (100-300%) – this one is so crucial, since it saves so much time! 175-200% is often the norm for me when listening.
  2. bookmarks – as above, it’s hard to live without it.
  3. auto-save of current position on power-off – seemingly obvious, right? think again!
  4. physical buttons, especially volume, pause/play and lock – I keep my player in my pocket, and when I’m driving, riding a bike or simply running, I prefer tactile input and don’t want to look at the screen at all.
  5. variable speed of fast-forward and rewind – in case I need to get to the fifth hour of an eight-hour book, I don’t want it to take 10 minutes! Ideally it would be pressure-sensitive, but I can live with one that speeds up over time.
  6. AAC and AAX compatibility – yeah, I use Audible, and would prefer not to waste time transcoding this stuff to mp3s.
  7. folder browsing and playlists – ideally selected on a PC, but I could live with player-selected playlists that actually work.
  8. good handling of VBR-encoded mp3s – believe it or not, there are files which can confuse most players on the market… neither bookmarks nor fast-forwarding and rewinding work with them.
  9. long battery life – for long trips, either on the road, or in the air.

And I think this is it. I would love to have sound normalization to 0 dB, but I’m a realist here, especially with 8-hour sound files. I can live with a good volume slider or buttons.

I used to have a Creative Zen, and I liked it very much for its interface, but it didn’t have feature number 1, and it died on me a couple of years back. Right now I use a Vedia B6 – it doesn’t have features 2, 5, 6 and 8, and is quite an unstable product, to say the least. But after discovering feature number 1, I never looked back.

If you happen to know a player that suits all my needs – hey, let me know. If you build one – even better :)