Democratization of Color Grading – what’s the next move?

Yesterday BlackMagic released an upgrade to the free version of its industry-standard grading tool, daVinci Resolve. The biggest and most influential change was surely the removal of the two-node limit present in the previous Lite version. This bold move essentially makes professional color correction software available to everyone for free. I am still waiting for the announced Windows version, which would make it even more accessible, but it seems almost a given at the beginning of next year.

There are still limitations – you can output at most at HD resolution (even though you can work with footage that is much larger), you won’t get noise reduction, and you are limited to a single GPU. That said, most of the people at whom this version of the software is aimed hardly ever think about projects in 2K and above, and have not considered buying a second GPU except perhaps for gaming. However you choose to look at it, BlackMagic did surprise everyone by providing an amazing piece of truly professional software for free. This kind of democratization of grading tools is certainly terrific, and unexpected. It is, however, not yet disruptive enough. What will BlackMagic’s next move be?

I see this release as a preemptive strike against Adobe (see my previous post on Adobe acquiring Iridas) and as following Apple’s recent “prosumerisation” trend. In Adobe CS6 we will almost certainly see the SpeedGrade color-correction software integrated – to many this means they will get the tool almost for free (for the price of an upgrade, but you would most likely want to upgrade anyway). To win over new users, there was little else BlackMagic could do. The question still remains, though: why would BlackMagic voluntarily give up part of their income? Why not sell the newly unlocked Lite version for $99 or $199 and profit handsomely? What’s in it for them, apart from perhaps profiting from the monitoring interfaces they already sell? Let’s speculate a little.

One of the things that distinguishes “real” from “would-be” colorists is a control surface. It’s a tool dedicated to increasing the speed and ease with which you operate the software. All companies that provide serious grading software also sell dedicated panels to go with it. This hardware is extremely expensive, costing anywhere from ten thousand to several hundred thousand dollars. BlackMagic does have its own model, which costs about $20,000. Of course, in the world of high-turnover, high-end productions, such costs are recovered fairly quickly. But this highly demanding pro world is relatively small, and the competing companies rather numerous: BlackMagic, Digital Vision (formerly Nucoda), Baselight, Autodesk, Quantel, to name a few important ones.

Certainly no home-grown editor and would-be colorist will shell out $20k for a tool that will sit idle 90% of their working time. To address this, companies like Euphonix (now Avid) and Tangent Devices developed less sophisticated models that cost about $1,500. For a pro this is often a very reasonable price for an entry-level piece of hardware that will pay for itself pretty quickly. For a prosumer, however, it is still at least two to three times too much, especially considering how rarely the tool would be used. Regular consumers are willing to pay $499 for a new iPhone, avid gamers usually spend that much on a new GPU, and I guess this is about the limit of what a prosumer color-grading surface could cost to catch on big time.

From a business perspective, selling 10,000 pieces of hardware at $500 each earns you far more than selling ten at $20k each ($5 million versus $200,000 in revenue). Apple knew that when they released Final Cut Pro X (regardless of what you think about the program). The professional market is quite saturated, and there is not much to be gained there. It is also very demanding. Prosumers are much easier to please, and their tools do not have to withstand the amount of abuse that pros put them through. Following the Apple model – giving the tool to prosumers – is a surer promise of profit than appealing to the demanding pros.

The question is – who will make this move? Two years ago I would have said that Apple might be one of the best candidates, but after the weird color controls introduced in Final Cut Pro X, and with all their efforts focused on touch panels, I’m pretty sure they are not the one. I don’t expect Tangent Devices or Avid to undercut the sales of their relatively low-cost models, especially after Tangent recently revamped their panels. BlackMagic is the most likely candidate, because right now they only have their high-end model. Creating a new version takes a lot of R&D resources, both time and money, and it is pretty hard to compete in this segment. BlackMagic has also always appealed to those with lower budgets, and this kind of disruptive move is exactly what you would expect from this company.

Therefore I am waiting for a simple control surface that will cost about $500–$700, will be sturdy enough to last me two years of relatively light to moderate use, and sensitive enough for the kind of color grading that I presently do – nowhere near a truly professional level, but sometimes quite demanding nevertheless. I understand the big problem is producing decent color wheels, but I don’t lose hope that somebody will come up with a neat idea and implement it. And no, a multitouch panel will not do. If you wonder why, read my other article on the importance of tactile input. The whole point of a control surface is that you don’t have to look at it while grading.

Finally, is the realm of professional colorists in any danger from the newcomers? To a certain extent, perhaps. The field will certainly become more competitive, and even more dynamic; perhaps a few players will drop out of the market. On the other hand, more people will be educated about what a good picture looks like, more will demand that quality, and more will be able to appreciate the excellent work that most professionals do. All in all, it will probably affect the job of the editor more than that of the colorist, bringing the two even closer together – editors will be required to learn color correction to stay in business. In high-end productions not much will change; dedicated professionals will still be sought out, both for training and for expertise. Perhaps some rates will go down, but most likely in the middle range. In the end I think it will have a net positive effect on what we do and love.

Will we then see a new product during NAB 2012 or IBC 2012? I would certainly be the first in line with my credit card. And if we do – you heard it here first. :)

Image deblurring and warp stabilizer would be a killer combo

In case you have been living under a rock, and have not yet seen the recent Adobe presentation on image deblurring, here is the video. I recommend you watch it first, and then read on:

The demo itself is pretty impressive. I’m sure it won’t fix every photo, and it will have its own share of problems; however, I don’t think anybody would disagree that this technology is really revolutionary. Richard Harrington blogged “It will change everything”, and it surely will. There is a lot of creative potential in this technology as it stands.

However, the real killer would be translating it to video. I can’t even begin to count how many times I have tried to stabilize shaky footage only to back down considerably because of motion blur that no stabilizer has yet been able to remove. No matter how good the stabilizer, be it simple tracking with a position/rotation/scale lock or a more advanced algorithm like the warp stabilizer, if the camera movement is erratic you will get a variable amount of motion blur, which is often more painful to watch than the original shaky footage. Therefore I took all claims about the warp stabilizer being a new Steadicam with more than a grain of salt.

However, if the warp stabilizer did include image deblurring, it would indeed be another game changer. Interestingly, kernel estimation for moving pictures might actually be helped quite a lot by temporal data and tracking (although sub-frame calculations would still be necessary), and the algorithm for video might in the end be less computation-intensive on a per-frame basis. And instead of a simple stabilize option, we would have the option to remove motion blur, or even to synthesize proper motion blur for the newly stabilized footage.
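To make the idea concrete, here is a minimal sketch of the kind of deblurring step such a tool might perform once a per-frame blur kernel (PSF) has been estimated – say, from the tracker’s motion path. This is plain Wiener-style deconvolution in NumPy, not Adobe’s algorithm; the kernel, frame and parameter names are purely illustrative:

```python
# Illustrative sketch: deblur a grayscale frame with a *known* blur kernel
# via regularized inverse filtering (Wiener-style) in the frequency domain.
import numpy as np

def wiener_deconvolve(frame, psf, noise_balance=0.01):
    """frame: 2D float array in [0, 1]; psf: 2D blur kernel;
    noise_balance: regularization, larger = smoother, less ringing."""
    psf = psf / psf.sum()
    # Pad the kernel to frame size and center it so the result is not shifted.
    psf_padded = np.zeros_like(frame)
    kh, kw = psf.shape
    psf_padded[:kh, :kw] = psf
    psf_padded = np.roll(psf_padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    F = np.fft.fft2(frame)
    H = np.fft.fft2(psf_padded)
    # Wiener filter: invert H where it is strong, damp it where it is weak.
    G = np.conj(H) / (np.abs(H) ** 2 + noise_balance)
    return np.real(np.fft.ifft2(F * G)).clip(0.0, 1.0)

# Example kernel: horizontal motion blur, as a shaky pan might produce.
motion_psf = np.zeros((9, 9))
motion_psf[4, :] = 1.0
# deblurred = wiener_deconvolve(blurry_frame, motion_psf)
```

The hard part in practice is estimating the kernel itself; the inversion is the easy bit, which is exactly why the motion data a stabilizer already tracks could help so much.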

How great would that be, huh?

For those willing to delve deeper and read about the history of this research, here is a nice article from fxguide.com that describes it: You saw the unblur clip with the audience gasping…here is the source. And for those interested in other impressive work at Adobe, check out the rest of the Adobe sneak videos. In particular, look at video meshes, pixel nuggets and local layer ordering. These technologies might find their way into your favorite editing software as well.

What pro users want from Premiere Pro, what Adobe will not deliver and why

After acquiring IRIDAS, Adobe is now in a perfect position to replace the now-EOLed Final Cut Studio as the preferred suite of applications for editing and relatively low-cost finishing. This is also what is most likely to happen, even though personally I would love Premiere Pro, After Effects, Lightroom, Photoshop, Audition and now SpeedGrade to be integrated into one single seamless application à la Smoke. I am obviously not the only person to think about that (see comments here), nor even the first one by any stretch of the imagination.

Here is why I don’t think it will happen, though. For one, the recent changes in pricing and the fact that Adobe software has become very affordable for new businesses and startups are not something the company is going to throw away by building a single finishing application encompassing the functionality of the whole suite. Arguably, the fact that you can essentially rent a single specific tool for your job for next to nothing is one of the things that makes Adobe software more popular than ever. This business model would be seriously undermined by converting the suite into a single application, and frankly none of us thinks that would be a wise thing to do.

Secondly, the architectures of After Effects and Premiere Pro – not to mention Audition – seem to be quite different. Even though Adobe has gone to great lengths to ensure proper translation of projects between the applications, there is a world of difference between this and actually merging the two together in a Smoke-like manner. Don’t be fooled by the similarities of the interfaces. The engines running these two are quite different, and enclosing one in the other might be impossible without rewriting most of the code. Adobe already did that when creating the 64-bit applications, and there is hardly any incentive to do it again, especially since their development time has actually shortened due to the “dot half” releases.

The only sensible way to approach this is to create a new application from scratch, but that would essentially mean duplicating the features of already existing programs without any real benefit to the business, and for at least twice the cost. This is not something that is going to happen without a serious incentive. Perhaps the incorporation of SpeedGrade into the suite might be a good enough pretext, but it all depends on the underlying architecture of the program itself, and it is not going to happen soon – don’t expect it before CS7 or even CS8.

I bet that in the short term we will see a remake of SpeedGrade’s interface to better suit the CS family, perhaps a few more options will be added, and a “Send to…” workflow will be established between Premiere, After Effects and SpeedGrade, perhaps with the addition of a plugin à la the recent Baselight development for the old FCP. This is what it is feasible to expect in CS6. SpeedGrade will be able to see and render all Premiere and After Effects effects, transitions, etc., thanks to the incorporation of either Dynamic Link or the standalone renderers that are already present in Adobe Media Encoder, and hopefully it will be able to merge projects from Audition as well.

Perhaps a new common project file format will be born, independent of any single application, akin to a container, where each application reads and works only on its own parts, and it all comes together in SpeedGrade (finishing), Bridge (playback) or even AME for export. And if nobody at Adobe is working on such an idea yet, then please start immediately, because this is exactly what is needed in big shared workflows. This format would get rid of some of the really annoying problems of Dynamic Link, and would open up a lot of possibilities.

In the long run we might see the birth of a new Ubertool – a true finishing app from Adobe – and if the container-project idea is embraced, the workflow might even be two-way. I would imagine that this tool would also incorporate some management ideas from the recently demonstrated Foundry Hiero, like versioning, conforming, or even preparing material to send to Premiere Pro, AE, Audition, etc. for other artists. The Adobe suite does not need only color grading software to be complete. It needs a true project management and finishing application; that would be an excellent, logical step for Adobe to take, and then their workflow really would encompass all stages of pre-production, production and post. Which I hope will happen in the end.

One thing I am sure Adobe will not do is repeat the debacle of FCPX. The new Ubertool might be able to do all that the other apps do, and probably more, perhaps even better, but Adobe will not fade the smaller tools out of existence immediately, if ever, and everything will be able to talk to everything else as seamlessly as possible.

Is there a reality out there?

Human beings love stories. We thrive on them, they evoke our emotions and give us reason to live (or die for that matter). Stories are what give us sense and meaning.

Stories are also almost absolutely subjective, at least in terms of their final impact on individuals. The final act of every creation—stories included—happens in the mind of the recipient, and it is a stage over which the storyteller has no control at all. It is the sum of our experiences that decides what our perception of a story will be. Therefore there are as many variants of a single story as there are people on the planet. And from this perspective, who is to say which story is better?

This kind of reasoning brought us the idea of post-modernism, where even the concept of reality itself began to be treated as subject to one’s interpretation, and wholly dependent on one’s point of view. At its peak, this philosophy proposed that we in fact create reality itself through our mental processes, and that by changing the way we think about and perceive the outside world we can totally remake it, because it is nothing more than a narrative. It also proposed an ultra-egalitarianism, postulating the absolute equality of everybody’s view of the world.

I’ll spare you the paradoxes of post-modernist philosophies, and a listing of the flaws of this unfortunately quite prevalent magical thinking, but there is one thing that I personally can’t ignore. This uber-egalitarianism proposes that science is “just another piece of storytelling”, and that it has no special claim on saying what’s real and what’s not. This point of view saddens me, especially when it is voiced by famous psychotherapists or people who really should know better.

This statement is very, very incorrect. While I absolutely agree that each of us creates their own inner picture of reality, and that through our actions we can influence the outside world, sometimes even making an important contribution to some big change, this is by no means the same as creating outside reality, or grounds for claiming that there is no objective reality at all.

There exist certain rules that every atom in the universe seems to follow. These rules can be discovered by systematic observation, and they can even be described in the abstract language of mathematics. Even seemingly chaotic and stochastic systems like the weather can be described with a certain probability and at a certain “resolution”. It is amazing that we were able to create a semantics that allows us to predict events that happen “randomly”, like radioactive decay. Of course, the course of every single such event cannot be described—in that sense they are truly random—but their general, statistical behavior is well enough established to build reliable nuclear reactors or medical imaging devices.

Such is our advancement in this kind of observation that it allows us to build tools like GPS, which take into account space-time curvature and relativistic time dilation, or to tackle the idea of quantum computing. We are so certain of the laws that govern the universe that, in our arrogance, we create amazing things that rely on these laws to function properly. And they do work. There are laws in the universe that we can all rely upon.

These laws do not care who you are, where you live, whether you are a human being, an amoeba, or a piece of antimatter. They do not care about your life story. They are identical for everyone and everything, a true example of uber-egalitarianism. As such, these laws are the reality: the objective reality, ever-present for everyone and everything in the same way. And science is the process that seeks to discover and describe these laws. Therefore to dismiss it as “yet another storytelling device” is a mark of ignorance or a sad lack of understanding of what this process entails.

To be fair, science is messy, and mathematics is an abstract language that—sadly—not very many people know or even want to learn. Science involves first making observations (which are objective, repeatable facts), and then trying to come up with some kind of abstract description that allows prediction of the future behavior of the observed system. This part happens in objective reality. The problems may start when one has to translate or interpret the findings.

All our “natural” languages are inherently imprecise, being a product of our daily experience and the environment we live in. We have problems translating ideas that fall outside such experience, because we lack a proper frame of reference to convey the true meaning of elaborate equations—equations that are very precise and leave little room for debate. We have to resort to metaphors to describe constructs like the electron cloud, which are described without fault by mathematical equations but can never be properly described in common words. Such attempts at passing knowledge to the “uninitiated” are prone to misunderstanding and superficiality, as we can witness, for example, with the idea of an “observer” in quantum mechanics. For some reason people started associating the act of localizing an electron (measurement) with the presence of some kind of consciousness observing the act, sparking a lot of shallow, misguided philosophical speculation. This is the limit of metaphors, and there are things lost or added in translation. Such is the nature of telling stories—their authors have almost no influence on how they will be understood.

In the end, science does indeed tell stories that are supposed to help us make sense of the world we live in. The difference is that the stories being told are based on the most fundamental aspects of repeatable, reliable, objective reality—a translation from the very precise language of mathematics into our limited, poetic language of everyday life. These stories are not made up. They might be better or worse translations, but what they describe is real.

Some people seem to be offended by the word “objective”, and prefer to use “consensus” instead. I think that is a misnomer. The laws do not care about our consent. Even though there is a substantial subjective element to how things are explained, interpreted and understood, the facts and the laws themselves remain reliably unchanged. There is an objective reality out there, and we rely on it in all our activities every day—especially now, as you are reading these words. :)

Thanks to these laws, we are alive, and can go on telling our stories in a manner that is most convenient for us 😀

Tactile input is important

The recent (?) fascination with touch-controlled interfaces is perhaps good for their development, but in my opinion they are not necessarily the future of device manipulation.

One of the big mixed blessings is that you have to rely on visual feedback to operate such an interface. While it is perhaps a tad faster to directly manipulate items that you want to look at anyway—like photos—and wide-sweeping gestures are faster than hunting for “next” or “previous” buttons, the same is not necessarily true of interfaces that rely on mixed tactile/visual, and sometimes even auditory, feedback.

An excellent example is a keyboard. A keyboard gives you at least three kinds of feedback: tactile (the feel of pressing the key and of its shape), auditory (the click of the keypress), and finally visual – letters appear (or not :) ) on the screen. Many people do not appreciate the first two, mostly because they were never trained to type without looking at the keyboard to find each letter, or to use all their fingers while typing. Personally, I believe touch-typing classes should be obligatory in primary or high school, because the skill is genuinely useful in daily life. For example, when visiting a doctor, I often find that he spends more time typing the diagnosis into his computer with two fingers than actually examining his patient. What a terrible waste of time.

Anyway, the reason mixed feedback is important comes down to efficiency. Once you learn to stop looking at the keyboard while you type, you reach a new level of efficiency. You start relying on tactile and auditory input to feel and hear whether you have pressed a key, and to an extent which key you pressed, using visual feedback only for confirmation, not estimation. For those who wonder why there are small embossed dashes on the F and J keys – they mark where your index fingers rest when you use proper typing technique.

A touch screen does not give you this advantage. You use visual cues to find the proper key—robbing yourself of that very feedback by covering the key you want to press—and then again for verification. You use a single channel to process the information. It is slower not only because tactile information reaches your brain and is processed faster, but also because you are processing serially instead of in parallel. While typing on a classical keyboard I know whether I have hit the right or the wrong key even before I get the confirmation on the screen. Therefore it is much easier for me to switch from typing to correcting mode (and yes, there is a noticeable switch going on) than when I am typing on a touch-screen keyboard. My impression is also that the responsiveness and robustness of touch-screen interfaces are still not at the level of keyboards, but I might be wrong, since this field evolves very quickly.

Another example where tactile input is vital is devices that one should be able to operate without looking at them. One that comes to mind is an mp3 player. Usually this device sits in my pocket or somewhere else I do not have easy visual access to, and for good reason. So if I want to increase the volume, lock the controls or change/rewind the track, I would prefer not to have to put the device at the center of my visual attention. Running, cycling, driving—these are activities that do not lend themselves well to visual distractions. Admittedly, using any device while driving lessens one’s concentration and might cause an accident, but this is precisely why most car interiors are built so that you can rely on tactile input to turn on the radio, the heating, the air conditioning and everything else.

Therefore, it makes little sense to design an mp3 player with touch-screen input. When physical buttons are present, you can learn their layout and operate the device without needing to look at it. You get immediate auditory feedback — the volume increases, the next track starts playing, and so on. And you can easily lock/unlock the controls, which is perhaps the biggest advantage of all.

There is also another issue. While using a touch screen to manipulate photos, you often cover the very part you are manipulating, robbing yourself of the visual feedback that the touch screen is supposed to give you. This is not necessarily an optimal way to work. I would agree that it is the way we paint or write by hand, but that only shows the limitation of our tools (our limbs). Personally, when faced with the choice between a touch-screen tablet and a standard screen with a cursor plus a traditional pen tablet, I prefer the latter, simply because my field of view is wider. Motion estimation is similar in both cases, even if the second setup takes more time to learn and get used to, like learning any tool or device.

All these examples show that if touch-screen interfaces are to become more useful, they will have to evolve additional feedback mechanisms. For now, there are too many applications where they are detrimental to efficiency, and once you set their “coolness” factor aside, their applicability is still limited in scope.

Maintenance can be creative work as well

Up until recently I had not realized this simple truth: things do decay, and to maintain their functionality one has to expend energy, sacrifice time and put actual effort into it. I think this is one of those fundamental truths that apply to life, the universe and everything. If left unattended, entropy will take its course, and things will die.

Maintenance is one of those jobs which, when performed properly, is invisible. Like a good matchmove or a good cut. Maintenance allows the action (life) to move forward smoothly and without glitches. As such, in accordance with the old adage “out of sight, out of mind”, it is often under-appreciated and ignored until an accident happens.

Indeed, it is not as flashy as “creative” work, and its results are never direct. But come to think of it, hardly any creative work can be done when the tools are not prepared, not working correctly, or broken. Of course, overcoming such problems can be a creative endeavor in itself, but the satisfaction is usually reserved for the person dealing with the problem, and will most likely be lost on the recipients of the finished work, possibly hindering the results. The work might of course bear some marks of the problem in question, making it perhaps unique in its own way, and it is a sign of true mastery to turn problems into creative opportunities. But even in such cases the newly developed process needs to be streamlined and maintained.

I have always seen maintenance as a dull job that needs to be done. Backups, cleanups, servicing – they all ate away at the time that could have been used for “the proper work” or for leisure. I did not appreciate that such jobs also require creativity, if only to solve the annoyances of software glitches, hardware incompatibilities and the like. Thinking through the safest and most optimal workflow or the proper hardware configuration is a job in itself, one that requires knowledge, research, time, and also some economic sense. It is also the very important art of differentiating wants (everybody wants the best tools available) from actual needs (the productivity gains from faster computers tend to flatten out beyond a certain point). This kind of fine-tuning and deep analysis can be very rewarding, especially if you come up with a clever way to solve a complicated problem.

Of course, there is no way to absolutely avoid real accidents or to foresee everything. The loss of the HDCAM SR tape factory during the recent events in Japan is perhaps something that very few people were really prepared for. But proper studio maintenance would at least give you some window of time to prepare a plan B for archiving and delivery.

This kind of philosophy can be applied to any aspect of life. By working, I provide the means for my family to exist and grow. Even if most of my current salary goes towards maintaining what already exists, hardly allowing for investment, there is a certain satisfaction in providing the base so that others can employ their energy in a better way.

Similarly in relationships, after the obvious novelty wears off, maintenance is often under-appreciated and can lead to a perception of boredom. We need to remember that our brains are hardwired for novelty, and we get the biggest dopamine and endorphin rush from new challenges and new “stuff” (which is perhaps why shopping works as a mood raiser for many people). Tweaking and fine-tuning are usually tedious. It is hard to appreciate the things you do have when there are so many things you could have had, if only you had worked harder, longer, or in another job.

And yet, taking time to maintain what you have is essential to not losing it, whatever “it” may be. Since this is the case, I might as well take time to enjoy it :)

Have a good day.

Three (or more) ways to make a vignette in Premiere Pro

UPDATE: You can download the plugin that I wrote here.

One feature that I miss in Premiere Pro is masking and vignettes in its standard color-correction tools. Unless you are using plugins like Colorista, other dedicated grading software, or simply sending your sequence to After Effects (if you have it), there is no obvious way to make a vignette. Here, however, are four ways to accomplish this effect, each with its pros and cons.

The first two ways to make a vignette require the use of a blending mode, so you need to understand what blending modes actually do. I recommend going to the ProVideo Coalition site; they have a nice tutorial on the subject. We will be using the Multiply mode to darken the image, or Overlay to saturate and lighten/darken the image (basically increasing contrast and “punch”).

Multiply mode darkens the underlying image using the luminance of the layer (clip) it is applied to: pure black turns the underlying layer black, 50% gray darkens it by half (for example, 50% gray over a 50% gray base gives a tone at 25% brightness), and pure white is completely transparent.

Overlay mode is partly Multiply and partly its opposite. In Overlay mode, 50% gray is neutral (transparent), tones darker than 50% gray darken the image much like Multiply, and tones lighter than 50% gray lighten it in the opposite way, pushing bright areas towards white. The overall effect is an increase in contrast and saturation (if you want more “punch” from your footage, try duplicating it on the layer above, applying Overlay mode to the copy, and seeing what happens – it’s a common trick).
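If you prefer to see the math, here is a minimal sketch of the pixel arithmetic behind these two modes, with values normalized to 0.0–1.0. It is the textbook formula in plain NumPy, for illustration only – not necessarily Premiere’s exact implementation:

```python
# Textbook Multiply and Overlay blend math on normalized (0.0-1.0) values.
import numpy as np

def multiply(base, blend):
    # Black gives black, 50% gray halves the brightness, white changes nothing.
    return base * blend

def overlay(base, blend):
    # 50% gray is neutral; darker tones multiply, lighter tones screen,
    # which is what boosts contrast and "punch".
    return np.where(base < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

base = np.array([0.2, 0.5, 0.8])
print(multiply(base, 0.5))   # [0.1  0.25 0.4 ]  -- everything darkened by half
print(overlay(base, 0.5))    # [0.2  0.5  0.8 ]  -- 50% gray leaves it untouched
```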

I hope you’re not confused yet :) Now for the vignetting:

1. Photoshop file

Simply create a Photoshop file or TIFF with the dimensions of your sequence. Set your foreground color to black and your background color to white or gray, select the radial gradient (the second gradient option), click on “reverse”, and drag from the center of attention outwards, drawing the vignette shape. The lightest point should sit where the center of attention will be in your footage, with the darkest areas on the outside. Save the file; it should look something like this:

Import the file into your project by dragging it into the Project window, put it on the timeline above your footage, and apply the appropriate blending mode – it’s available in the Opacity section of the Effect Controls panel for this clip. Tweak the opacity setting to achieve the desired effect.

It is a very simple method, and also the least CPU-intensive, although it requires switching to another program to do part of the work and does not provide an easy way to change settings – you have to edit the file itself. Another advantage is that you can put it on the top layer and affect all layers below.
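If you don’t have Photoshop at hand, the same kind of gradient file can also be generated in a few lines of code. This is just an illustrative sketch using NumPy and Pillow – the dimensions, center and falloff values are placeholders to adjust to your own sequence:

```python
# Illustrative sketch: generate a white-to-black radial gradient PNG that can
# be imported into Premiere and blended with Multiply, instead of drawing it.
import numpy as np
from PIL import Image

width, height = 1920, 1080          # match your sequence dimensions
cx, cy = width / 2, height / 2      # where the viewer's attention should go
falloff = 1.2                       # >1 = softer edge, <1 = harder edge

y, x = np.mgrid[0:height, 0:width]
# Distance from the chosen center, normalized so the corners are roughly 1.0.
dist = np.hypot(x - cx, y - cy) / np.hypot(cx, cy)
# White in the middle, darkening towards the edges.
mask = np.clip(1.0 - dist ** falloff, 0.0, 1.0)

Image.fromarray((mask * 255).astype(np.uint8), mode="L").save("vignette.png")
```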

2. Separate layer with ramp

Create a new solid in the Project window. The color is unimportant; make it the full size of your sequence. Then put it over the footage and apply a Ramp effect to it (it’s in the “Generate” category). Select a radial ramp, reverse it so that the lighter end is in the middle, and move the start point towards the center of attention and the end point towards the edge of the vignette. Your ramp should look similar to the Photoshop file above. Then apply the blending mode and adjust the opacity as in method 1.

This method is a little more CPU-intensive, but it lets you change the vignette without leaving Premiere, and does not require you to have Photoshop or any similar tool at all. You can even animate the vignette if you feel like it.

3. The Circle effect

If you don’t care about elliptical vignetting, you can use the Circle effect, which lives in the oh-so-intuitive “Generate” category. It is a really versatile effect that I only found recently. When you apply it for the first time, you will most likely dismiss it – as I did. However, it has most of what a decent vignette needs – set the blending mode to Multiply, set the color to black, add feather, invert the mask, and there you go. What is missing is the ability to draw an ellipse instead of a circle, and to rotate it. Still, it can be pretty useful, and it is not very CPU-intensive. No CUDA acceleration, though.

By the way, if you thought that the Ellipse effect present in the same Generate category would make your day, you’d be sorely disappointed. It’s a completely different effect, and incidentally it is totally broken in Premiere Pro, even though it works well in After Effects.

4. Lighting Effects

The most demanding method, but also the one that gives you the most options, including the possibility of additional color correction, uses an effect I have hardly ever seen mentioned in the context of Premiere – Lighting Effects. It is quite a powerful tool, giving you a lot of the After Effects lights functionality without the need for Dynamic Link or the like. If you want to create a vignette, simply apply it to the chosen clip, then do some tweaking:

  1. Select the first light as a spot light (usually the default).
  2. Click on the effect name or the transform icon to the left of it to see the visual overlay in the program monitor.
  3. Adjust the center point, both radii, and the angle so that the light is centered where you want to direct the viewer’s attention.
  4. Optionally, tweak the Focus (feathering) and Intensity properties for additional effects.
  5. You can also tweak Ambient Light Intensity and Exposure to adjust the overall lightness or darkness of the image.

Voilà! That’s it. Below are some before-and-after pics. As you can see, I decided to go for a rather subtle effect, but Lighting Effects is a really powerful—if CPU-intensive and not GPU-accelerated—tool that you can add to your editing and color-correction arsenal. It has enormous potential, and creating a simple vignette with it may even sound like blasphemy, but it’s a good place to start exploring. The only drawback is that you can’t apply it to multiple layers below like you can with the first two methods. But hey, you can’t do that in Color either, so don’t complain 😀

Visual overlay of the Lighting Effects filter

Footage before

Footage after

Premiere Pro positive

Some of you might wonder why I keep complaining about Premiere Pro instead of moving to some “more professional” software like FCP or Avid.

It so happens that Premiere Pro has a number of features which make it a really great tool for a video editor like me, who often needs to work on a project from start to finish – from ingest to distribution – under relatively tight time constraints.

I like the fact that I can do 90% of my audio in Premiere Pro – it has basic mixing tools, automation and support for VST plugins on whole audio tracks, and it works in 32-bit; even though the built-in plugins are of mediocre quality, one can always supplement them with more advanced ones, and the output is usually acceptable. I certainly could achieve better results in Audition or Pro Tools, but in most cases there is no need to. There is also no problem importing most audio file formats (good luck handling mp3 files in FCP), although the Mac version does have some strange issues with audio that was not sampled at 48 kHz, and the workaround is simply moving to a PC or converting the files. That might be hard if your sound operator recorded at 32 kHz, though…

Speaking of formats – CS5 is really amazing at handling almost anything you throw at it – image sequences, AVCHD, AVCCAM, DVCPRO, XDCAM, RED footage… Call it what you will, PPro will swallow it and let you work with it without the need to transcode the files. If you want a professional intermediate, you can always get the Cineform codec, or on the Mac even use ProRes. But there is no need to – and that is a huge timesaver.

This is especially true with CS5 and CUDA graphics cards. I put a few XDCAM EX clips in a sequence, and on an i7 PC with a GTX 460 I threw on some of the color-correction filters I routinely use. Fast Color Corrector was a blast – no wonder, it always is. Luma Curve was a blast – nice. Three-Way Color Corrector – no problem. Seven additional Three-Ways? You bet CS5 could handle them in real time. It gave up only after I added four more Luma Curves just to see how much is too much – I got a few seconds of playback, and then it stopped. Whew. Essentially, anything I would ever need is handled in real time… on a PC. If I wanted the same on a Mac, I would have to buy either an outdated and no-longer-supported nVidia GTX 285 or a Quadro that costs over $1,000.

Color correction in Premiere is certainly not as easy and streamlined as in dedicated tools like Color or daVinci, and I must admit that basic CC is much better handled in Avid, but still, you do have presets, and you do have most of the tools you need (a basic color corrector, three-ways with secondaries, luma and RGB curves, keying and other important stuff). A few things are missing – vignettes and a saturation curve – but these can be remedied by installing additional plugins like Colorista.

I must admit that I miss real-time scopes. All the scopes are present in Premiere, but they don’t update during playback, which is a pity. Kudos to FCP for having those, and also for the icons on its color correctors that copy settings to the next or second-next clip on the timeline. In Premiere, changing settings across clips is a little more troublesome. At least, however, you don’t have to double-click a clip to see its effect controls, like you do in FCP.

Premiere also has great management of render files. FCP can lose them upon any movement on your part, and if you combine that with FCP’s tendency to force you to render almost anything that is not a color corrector or a transition, it becomes a major drawback. Premiere remembers the position of files and filters, and there is always a chance of getting the render files back if the clips realign themselves properly, even in a different place on the timeline. A huge, huge advantage.

What I like most about Premiere, though, is that it is really the most flexible editing software I have ever used. FCP is a not-so-close second; Avid, even with the recent updates, still lags far behind. Work really goes fast (when there are no nasty surprises along the way), much faster than in the others. Even though I’m pretty proficient with both FCP and PPro, I find the latter smoother, quicker and more intuitive to work with.

Therefore the only reason I’m whining so much is that this particular piece of software could be much more reliable, and much better, if it were free of the bugs and weird gotchas that tend to happen from time to time. And if the Mac version had all the plugins and transitions that are available on the PC… oops, I did it again. Sorry :)

Synchronicity and confirmation bias – a difference

And now for something completely different.

The concept of synchronicity is quite simple – it is the subjective feeling that two events are meaningfully related. For example, I look for information on problem A, then in my spare time I listen to an overdue episode of a biotech podcast that happens to have guests from a virology podcast, which I then decide to check out, and I find that a new episode of this second podcast has the answer to my problem A in it. Amazing synchronicity – I found the solution to a problem without looking for it, and in a place where I would not have expected it. It feels meaningful to me, and gives me a lot of joy. Another example – I talk about the effects of sword cuts with friends, and then suddenly a person from the other side of the globe who is not involved in the discussion sends me, on Facebook, pictures of the test-cutting he did that very day, which illustrate precisely the point I was trying to make. Wow! What are the odds of that?

However, from an objective standpoint synchronicity is simply a coincidence. Regardless of how meaningful the event is for me, for other people there is no such connection – unless they share my belief system, or I manage to convince them otherwise. In general, people not involved in solving problem A will look at the virology podcast, get no butterflies in their stomach, and say there is nothing unusual about it. And from their perspective they are right.

Synchronicities, when they happen, really do add meaning to our lives, and push us into a state of mind closer to “being in the zone”. That is, if we allow them to. They can create the impression that there is an invisible hand guiding our destiny and leading us forward, making life easier and lighter. Why not use this to our advantage? Life without synchronicities is tiresome, boring, and gray. Synchronicity provides me with a moment of awe and wonder in which I can immerse myself, take a deep breath, and appreciate life more. Screw objectivism, this feels good! And it makes a great story as well! (Which is probably why it feels this way, but that’s another matter entirely.)

But then, don’t overdo it. If you start actively looking for synchronicities, you are employing a strategy to find meaning where there is none. This strategy is called confirmation bias. You assume there is a meaning, and you simply look for signs to confirm your preconceived idea. What you find can give you peace of mind (or sometimes a headache, if you happen to find something you weren’t looking for), but in the end you are only deluding yourself, chasing dreams and shadows of meaning, not real meaning itself. Stop. Cease and desist.

The trick is not to be too active, but to stay observant and open to new experiences and surprises. Synchronicities do happen, but their only magic is in our heads. Embrace the magical moment of inner realization, and don’t make the mistake of trying to step into the same river twice. It is not going to happen. Move on; wait for another day and another miracle.

Life happens to be beautiful. From one synchronicity to the next.

Human heuristics

Some argue that we should live our lives in a rational way, calmly and dispassionately analyzing the pros and cons of each situation. While this is perhaps a noble ideal, it is also impossible to attain, and even those who advocate such a way of life are subject to the quirks and limitations of our minds and brains.

Here are a few important examples that each of us should remember:

  1. It is very hard, if not impossible, to separate our thinking about things into risk and benefit. The general rule of thumb we employ is that when we consider something beneficial, we tend to undervalue the risks associated with it, either ignoring, diminishing or rationalizing them, and vice versa. If a thing can be both beneficial and risky (like nuclear energy, for example), it proves a hard issue for us to swallow and analyze. Perhaps many emotions are the result of this consciously or unconsciously perceived conflict.
  2. We have a bias for novelty. New things are inherently more interesting than old, common, known ones. In this day and age this imperfection is really starting to become a hindrance, and getting caught in the endless cycle of news is so easy with our smartphones, tablets, RSS readers and the like. Heck, I often can’t even finish reading one book without thinking about all the others standing on the shelves around me.
  3. A similar variation of the previous bias is the one for rarity – if a thing is rare, it catches our attention much more quickly than if it is common. The media are a good example of this rule in action. Combined with our inability to assess risks and understand big numbers, it makes it a real challenge to obtain a view of reality that is… well… as close to “real” as possible.
  4. We have a bias for finding patterns, even where there are none. This is totally understandable, but it makes our lives more difficult if we actually want to know what is true and what isn’t. It’s even worse when we look for an invisible agent in places where there doesn’t have to be one. It might be fun (or scary), and it makes a great story (and we are suckers for good stories, I tell you), but it “ain’t exactly real”, as one famous singer put it.
  5. Confirmation bias plays well into the bias for finding patterns. Once we think up a possible pattern, it is easier and more “natural” to look for evidence proving our idea than to come up with evidence that disproves it. Reading a balanced article can actually increase our bias instead of reducing it. It takes a lot of courage to consider the possibility that I might be wrong.
  6. Anchoring. Our brain latches onto the first thing that comes to its attention and uses it for further reasoning, even if it has nothing to do with the problem at hand. It is a great feature that helps us process complex problems and arrive at conclusions in a sensible amount of time, but in a distracting environment such conclusions can be totally unwarranted. And we fall for anchoring each and every time – there is no way to defend against it, even if we try to compensate for it! Beware of marketers who ask you seemingly innocent questions at the beginning of a conversation, especially ones involving numbers. It is a trap. Run like hell! :)

The good news is that we are slowly starting to understand the role that emotions play in our decision-making and in shaping our view of the world. They are crucial; they cannot and should not be eliminated – they are what keeps us going through life. But at the same time, it is vital to be aware of one’s weak spots and blind spots, because we all have them in abundance.