The Inevitable Convergence – Episode II


In the aftermath of IBC 2013 I wrote about the inevitable convergence of various software packages. It was easy to see how vendors began expanding their packages into areas beyond their primary intended roles. NAB 2014 confirms this ongoing trend, and breeds more and more interesting solutions at various price points.

Let’s quickly sum it up: BlackMagic Design gave Resolve a serious boost in the editing and collaboration realms. The Foundry announced Nuke Studio, bringing the Hiero timeline into Nuke – or the other way around, if you prefer – upping the VFX management expectations for everyone and aiming at the on-line market. Autodesk enhanced real-time timeline capabilities in both Flame and Smoke, while Adobe is constantly tightening the interaction between its various applications to make them work seamlessly as one. The case can be made that Avid is attempting precisely the same thing, gathering all its offerings into the Avid Everywhere platform, mirroring Adobe Anywhere, though with a proxy workflow instead of real-time server rendering.

All in all, this expansion outside the primary areas suggests that the applications are mostly mature, the toolset required to fulfil the primary functions is pretty much there, and the software companies are aggressively attempting to widen their user base. This is especially the case with grading packages, where the competition is relatively intense, and the high-end segment is no longer perceived as the only viable market. Witness Digital Vision licensing its precision control surface to SGO Mistika, and going the software-only route with Nucoda, dropping its price in a clear attempt to widen its reach.

Which raises the question – is specialized software doomed to fail in the long run? Will the likes of Baselight eventually run out of the resources to sustain themselves? Certainly, there are some comfortable niches where individual applications do and will exist – Mocha for planar tracking and Silhouette for rotoscoping seem to be pretty good examples. But they thrive in a space where they have no competition, protected by patents or relative obscurity. It’s a very cozy place to be in, but there are not many like it. How will Nuke fare against Mamba FX, now that the latter has a Mac version? How will Premiere, Avid and FCPX survive the BlackMagic incursion?

Today, for pure editing, still nothing beats dedicated NLEs. I bet it might be a year or two before somebody attempts a larger editing project in Nuke Studio or Resolve. But I can easily see shorter forms resorting to these tools – especially to Resolve, for its unbeatable price point and relative ease of use – and Nuke Studio will comfortably find its place in VFX editorial and possibly finishing.

Lastly, there is the problem of feature bloat and discoverability. When software starts to expand into areas not envisioned at the moment of its conception, the risk of hitting a development wall is pretty huge, since neither the code base nor the user interface was optimized for these additional tasks, and the forays will most likely appear clumsy to the eyes of users of specialized packages. Nuke will never be as good a roto tool as Silhouette, and I highly doubt it will outclass After Effects in motion graphics.

Will the convergence happen though? Will there be enough overlap between Adobe Creative Cloud, Nuke Studio, Autodesk Flame, and DaVinci Resolve that the choice will come down to user preference and – gosh – pricing? Not unless BlackMagic partners with SGO or Eyeon, or takes over Toxik from Autodesk. If that happens, all bets are off.

As for now, we can happily choose any tool we deem appropriate for the job and our budgets.

It’s the Feature Countdown…

Like last year around NAB, Adobe announced a preview of the upcoming release of its Digital Video Applications (DVA), which include Premiere Pro, After Effects, SpeedGrade, Media Encoder, Prelude and Audition.

You can read a general overview of the new features in a few places (Creative Cow, fxguide, Studio Daily), various Adobe blogs give you a complete overview of the upcoming features, Scott Simmons at ProVideoCoalition takes a more in-depth look at the new features of Premiere Pro itself, and for the visually oriented, Josh prepared a video. Here, as usual, I will attempt to give you a more nuanced view of the possible long-term impact on real-world workflows. I also hope to showcase some of the most important features in more detail pretty soon, and our favorite NLE will enjoy a separate detailed post. Right now let’s take a view from a few stories high.

The overall theme of this upcoming release is something I would describe with a single word: “Finally…”. Finally we have a number of features that many have been asking for – sometimes for years. I know it may sound a bit ungrateful, and it’s no secret that we all would have loved to have these from the very beginning. But this release seems to really deliver – features galore, big and small, all to make your life as an editor easier. Therefore, when I use the word “finally”, it means that the feature is really there, without “buts” and “howevers”, and with full appreciation of the required time and resources, and of the long road that all had to travel to arrive at this point.

One of the more interesting developments in this cycle is the new version of Prelude with its ability to trim clips in rough cuts (finally), and tagging. The first feature makes it feasible to do rough cuts in Prelude without the need to precisely mark the subclips, or to adjust the selections on the fly. We can finally do quick assemblies, selects, what have you, and be as OCD about them as we want.

Tagging, the second feature, has tremendous potential, similar to Keyword Collections in Final Cut Pro X. The ability to apply tags during playback using a fully customizable, JSON-driven Tag Panel, or to apply several tags to a single marker, makes logging much easier and much faster. This is precisely what was missing in the metadata workflow between Adobe applications. We finally have a non-hierarchical way to quickly and consistently annotate source footage. In the preview versions that I had access to, tags are searchable only in Prelude, and only on a per-clip basis via the search box in the project panel. Hopefully this will trickle down at least to Premiere – which can already see the tags, but can’t yet look for them or show them in any meaningful way – and the searching capabilities will become more advanced.
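To make the idea concrete, here is a little sketch of what a JSON-driven tag panel and multi-tag markers could look like. The structure and field names below are purely my assumptions for illustration – Adobe’s actual schema is not public in the preview builds.

```python
import json

# Hypothetical tag panel definition -- illustrative only, not Adobe's schema.
tag_panel = {
    "name": "Interview Logging",
    "tags": [
        {"label": "Good take", "shortcut": "1"},
        {"label": "B-roll", "shortcut": "2"},
        {"label": "Audio issue", "shortcut": "3"},
    ],
}

def tag_marker(marker, *labels):
    """Attach several tags to a single marker, as Prelude now allows."""
    marker.setdefault("tags", []).extend(labels)
    return marker

# A single marker carrying more than one tag -- the non-hierarchical
# annotation discussed above.
marker = {"clip": "A001_C003", "timecode": "01:02:15:10"}
tag_marker(marker, "Good take", "Audio issue")
print(json.dumps(marker, indent=2))
```

The appeal of a flat, multi-tag scheme is exactly that a clip can sit in several “collections” at once without any folder hierarchy.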

On a technical note, tagging is implemented via an interesting extension of Flash Cue markers, and I will definitely elaborate on it soon. Right now, if for some reason you feel unhappy that the FLV format is being completely ditched from Adobe video applications in this upcoming release, you can comfort yourself with the knowledge that not everything Flash-related is going to waste.

There are a few other new features that span several CC applications. For one, Premiere and SpeedGrade now sport Master Clip Effects (MCE). The idea is quite simple – when you apply an effect to a given clip, rather than to its instance in the timeline, it is applied as a separate “layer”, so to speak, below all timeline effects, and ripples through all instances of the master clip on all timelines in the project. It’s a great feature, especially useful for color correction, and the fact that SpeedGrade can also work in this mode makes it even better. Here, however, I am not saying “finally”, because there seem to be a few quirks associated with it. I will elaborate on them in a separate Premiere Pro note, and perhaps even wait for the release version of the software to see how these issues are resolved.
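The layering idea can be sketched in a few lines of code. This is a toy model of the behavior described above – master clip effects sit below per-instance timeline effects and ripple to every instance – with all class and effect names invented for illustration:

```python
# Toy model of Master Clip Effect layering -- names are invented.

class MasterClip:
    def __init__(self, name):
        self.name = name
        self.master_effects = []  # shared by every instance, "below" the rest

class TimelineInstance:
    def __init__(self, master):
        self.master = master
        self.timeline_effects = []  # applied on top, per instance

    def effect_stack(self):
        # Master clip effects render first (lower layer), then timeline effects.
        return self.master.master_effects + self.timeline_effects

clip = MasterClip("A001_C003")
cut_1 = TimelineInstance(clip)
cut_2 = TimelineInstance(clip)
cut_2.timeline_effects.append("Gaussian Blur")

clip.master_effects.append("Lumetri Color")  # one change on the master...
print(cut_1.effect_stack())  # ...ripples to every instance
print(cut_2.effect_stack())
```

The key design point is that the instances hold a reference to the master rather than a copy of its effects, which is why a single change propagates everywhere.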

Regrettably, Master Clip Effects do not apply to sequences – yet? – which would make them totally awesome, as would the possibility to render and replace such a modified master clip or sequence to the codec of one’s choosing. But even without these, it’s a killer feature.

The mechanism used for MCE also allowed Adobe to create Live Text Templating for After Effects compositions in Premiere. Here we can see the roll-out of a 1.0 release of a feature, which covers only the most basic – though also the most frequently requested – ability to edit selected text layers of Dynamically Linked AE comps straight in Premiere Pro. It’s a boon for all lower thirds, titles and similar graphics. It’s pretty simple to work with – you mark your composition as a Premiere template in the comp settings, and then any unlocked text layer in this composition or its pre-comps (very clever!) is accessible in Premiere in the same way MCEs are – by using match frame on the timeline clip or opening it in the Source Monitor, and then looking in the Effect Controls panel.

The drawback of Live Text Templates using the MCE mechanism is that if you want to duplicate the composition in the timeline, it will duplicate the clip in the Project Panel, similarly to how Titles currently work. This is perhaps not the most elegant solution – ideally we’d have a single template, and only adjust the effects on the instances in the timeline – but it does work, and the link to the original AE composition remains: you can easily change anything in After Effects, and the changes will propagate to Premiere. Of course, it is not the final word in terms of templating. If you have comments or ideas, make sure to send them to Adobe – I know they are listening.

Next, both Premiere Pro and After Effects can now constrain their effects using masks. Here I can definitely say “finally!”, at least when it comes to Premiere. Finally you can create a vignette or limit your color correction using a mask. Such masks can also be tracked, using the same technology that has been available in After Effects for the past half a year. It works, and it’s great. There is a minor limitation – currently there are only two types of masks in Premiere, elliptical and polygonal. No bezier shapes or variable feathering; to access these you still need to go to After Effects. But these two types will suffice for the usual 80% of cases, especially given the controls to expand or uniformly feather the mask.

Interestingly, the implementation in Premiere Pro seems much easier and much more elegant than its After Effects counterpart. While masks in Premiere are shown under each effect, AE requires you to navigate the timeline, make sure your effect masks do not interfere with the masks applied to the layer, and so on – quite a few inelegant steps along the way. The only upside is that in AE you can reuse these masks for multiple effects, which is not possible in Premiere. But the masks do carry over when you send a clip to AE via Dynamic Link, which is also a welcome addition.

Thanks to these features, a lot of things suddenly become much easier to achieve in Premiere itself, without the use of Dynamic Link or After Effects round-tripping, especially in the motion graphics area. The only large things currently missing are the Pixel Motion algorithm for speed changes, and Motion Blur. I will elaborate on the impact of this release on the future of Creative Impatience plugins in another note.

One of the best features in the upcoming Premiere Pro is the improved performance of the search field in the project panel. Yes, it seems tiny in comparison to all the other loudly touted upgrades, and it’s more of a fix than a marketable feature. But it means that we will finally be able to use the search box in larger projects, and that is no small change. On a similar note, marker names will finally be visible in the marker panel, and can be searched for. For the list of all the features see Premiere’s blog, and an upcoming post on this website.

Astute readers and long-time users of Premiere have most likely already noticed one feature that is sorely missing from the “finally” list. Unfortunately, Project Manager does not get an update in the upcoming release, and we will still not be able to easily transcode or trim our projects for archiving or exchange. If I had to name a major disappointment, this would be the one. Here’s hoping that we’ll see some working solution soon – IBC perhaps?

But enough complaining. On another front, SpeedGrade seems to have finally received support for AMD GPUs in the Direct Link mode, including dual GPUs – owners of new Mac Pros rejoice – a new YUV vectorscope that works like every other vectorscope on the planet and sports a decent graticule, a control to clamp the scopes instead of resizing them, and vertical sliders supplementing the offset/gamma/gain rings, easier to access and manipulate with a mouse. Some keyboard shortcuts have been unified with the ones present in Premiere, and you can also enable or disable any track in the timeline in the Direct Link mode – previously impossible – making it easier to consider various grading options or simply hide distracting elements.

Media Encoder can finally be installed separately from the other applications, which should make reinstalling and troubleshooting much easier, should you ever need to do that. Apart from a number of bug fixes, it also adds support for the industry-standard AS-11 DPP – which you will never need unless you are delivering broadcast material to the UK – and, perhaps more importantly, encoding of unencrypted DCPs, which will be helpful if you’re going to submit your great movie to a film festival. Now, if only we had access to the DCI P3 color space in Premiere…

Audition received a modest update as well – support for Dolby Digital, multichannel WAV files, and some multitrack enhancements that should make your life easier if your sessions stretch vertically enough for you to consider turning your monitor short edge up.

Finally, After Effects enjoys integrated Mercury Transmit for live preview, support for HTML5 third-party panels (which can be pretty significant in the long run), some updates to the Curves effect (still not compatible with Premiere’s RGB Curves though), and an interesting technology for improving the mattes that you get from Keylight or other keying effects. Both will definitely come in handy, especially since I’m currently heavily involved with the Hero Punk project, which was shot entirely on green screen.

There are also updates to Story and Anywhere, but I can’t meaningfully comment on either.

All in all, it looks like it’s going to be a pretty solid release, centered on Premiere Pro. Dare I say – finally? 😉

Adobe Anywhere – Currently An Enterprise Solution Only


At NAB we’ve seen a few reveals from Adobe, among them the premiere of Adobe Anywhere. I speculated extensively on Anywhere in the past, and I was perhaps a bit too optimistic in my assessment of the required hardware and bandwidth, motivated mostly by the hope that we would be able to install it in our small facility as well. Alas, it’s not going to happen.

As of now, Anywhere requires at least 4 servers to run: one being a collaboration hub, and 3 Mercury Streaming Engines. Karl Soule explained that this is the required minimal structure, because the MSE machines also take care of the rendering. This hardware should cover the needs of 6–8 editors, and supposedly scales well by adding additional machines. It’s certainly not inexpensive (starting at $5,000 but most likely reaching $15,000 to $20,000 per machine), and the cost is further increased by Windows 2008 Server Enterprise Edition (about $2,300 per license) and by each MSE requiring at least one Tesla K10 processing unit costing $3,000.

I was not mistaken though about replacing expensive SAN licenses with something a bit more affordable. The two currently recommended systems (Harmonic MediaGrid and the Isilon X400 series) sport their own filesystems, which cover most of the SAN benefits without incurring the overhead. Plus they work via Ethernet, lowering the price of the backbone architecture even further. However, don’t get your hopes up – these solutions are still pretty expensive, going into hundreds of thousands of dollars.

Obviously, Anywhere is not a plug-and-play solution – it requires tailoring to the specific workflow and setup in one’s facility, and Adobe has its own service engineers who will install and configure it. Judging by the fact that the cost of software and installation is also not publicly available, it is safe to assume that it ventures into the “if you have to ask, you can’t afford it” territory.

My bandwidth estimation was also too optimistic. The suggested pipe for a seamless experience seems to be 25–40 Mbps, which is not insignificant, and in fact might be the biggest limiting factor to the actual spread of Anywhere. While it’s easily achievable locally, it is far beyond standard 3G data rates (2 Mbps), requiring LTE or HSPA+ connections, not always easily available; it is slightly beyond the WiFi 802.11a and g standards, requiring at least an 802.11n connection using multiple antennas; and it is at the edge of what the most recent ADSL modems can provide (40 Mbps in ideal conditions). So perhaps Bob Zelin’s dream of remote editing will still be limited by the last-mile infrastructure, at least for a time.

In the end, the message is pretty clear: right now Adobe Anywhere is aimed at enterprise players like CNN and large post houses who can afford the necessary equipment, or can perhaps fit it into an already existing hardware structure. Certainly, the benefits are great, but the little folk can only hope that at some point these solutions will trickle down.

The brighter side of Anywhere


Today another interesting thought about Adobe Anywhere struck me. Essentially, part of the idea of Anywhere is to completely separate the UI from the renderer. The concept in itself is nothing new – all 3D applications use it. However, as far as I know, this is the first time it has been applied to an NLE.

The obvious implication is the possibility of network rendering, and of using more than a single machine for hefty tasks. Of course, there are a lot of caveats to multi-machine rendering – we already know that not all problems scale well, and sometimes the overhead of distributing processes is higher than the gain in speed. But in general, the more, the better.

The less obvious implication is the possibility of running various types of background processing, like rendering or caching parts of a sequence that are not currently being worked on, if the resources allow. This means a quicker final render. It also allows running multiple processes, or even multiple parts of the rendering engine, at the same time, allowing for better utilization of the server’s computational power.

However, the most overlooked implication is the fact that the renderer is UI-agnostic. It doesn’t care what kind of client gets connected; it only cares that the communication is correct. And this has potentially huge implications for the future of the applications in Adobe’s video production line.


The Anywhere renderer most likely already consists of several separate modules – After Effects, Premiere, possibly even Lumetri (SpeedGrade) – which can be chained in any sequence. For example, the client sends a request to render a frame at half resolution consisting of the following stack: a P2 clip on V1, a Red One clip with the Ultra Key on V2, a Dynamically Linked AE clip with lower thirds on V3, and a color correction via Lumetri on top of it all.

Adobe Anywhere has the Premiere renderer open the sequence, find the dynamically linked clip overlaid on the source material, and ask After Effects to render it, compositing V1 and V2 in the meantime; once AE is done, it composites the results together. Then it sends the whole thing to Lumetri for color correction. Each stage is most likely cached, so that when the color correction or the keyer is changed, After Effects does not have to re-render its part. Finally the frame is rendered and sent out to the client.
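The staged, cached pipeline described above can be sketched in a few lines. This is pure speculation about how such a chain could be organized – the module names, cache keys and string “renders” are all my invention, not Adobe’s implementation:

```python
# Speculative sketch of a staged render chain with per-stage caching.
cache = {}

def cached(stage, key, render_fn):
    """Re-render a stage only when its inputs (the key) change."""
    if (stage, key) not in cache:
        cache[(stage, key)] = render_fn()
    return cache[(stage, key)]

def render_frame(frame, grade_version, comp_version):
    # The AE module renders the dynamically linked lower third...
    ae_lower_third = cached("after_effects", (frame, comp_version),
                            lambda: f"AE_comp_v{comp_version}@{frame}")
    # ...while the Premiere module composites V1 and V2 in the meantime.
    base = cached("premiere_composite", frame,
                  lambda: f"P2+RedOne_keyed@{frame}")
    composite = f"{base}+{ae_lower_third}"
    # Lumetri runs last; tweaking the grade reuses every cached stage below.
    return cached("lumetri", (frame, grade_version, composite),
                  lambda: f"graded_v{grade_version}({composite})")

first = render_frame(42, grade_version=1, comp_version=1)
regraded = render_frame(42, grade_version=2, comp_version=1)
```

Note how the second call, which only changes the grade, never touches the After Effects stage – that is exactly the benefit of caching each link in the chain.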

Running locally

However, it’s all in the client-server architecture – you might say. How about those of us who use the programs on a single machine? Anywhere will not have any impact here… or will it?

There is nothing stopping Adobe from installing both the server and the client software on the same machine. After all, the communication does not much care about the physical location of client and server. It cares about the channels, and whether the messages are being heard. Perhaps the hardware might be a little problematic, considering that both the UI and the renderer most likely use GPU acceleration. But other than that, all Adobe needs to do is distribute the Adobe Anywhere Render Engine as part of any local application. It would perhaps be custom-configured for local usage, to streamline some tasks, but it’s going to be the same Anywhere.

And that, dear readers, could be huge.


Separating the UI and the renderer is a brilliant move. In the long run it allows Adobe to alter the client without rewriting, or even incorporating, the renderer code in the application. For all Anywhere cares, the UI could be as simple as an HTML5 application which would send and receive the proper messages. Need I say anything more? Let your imagination run wild already.

Regular readers perhaps already see where I am going with this. Newcomers are encouraged to read about my vision of an Adobe conforming tool, and – why not – Stu Maschwitz’s proposition of merging Premiere Pro and After Effects, or my hopes of seeing a Smoke-like ubertool from Adobe. Any such application could access Anywhere’s backend, and could be optimized to suit specific needs, giving birth to a number of task-specific tools – tools easily written, quick to update, perhaps even accessible via mobile devices. And in time they might also work locally, on a single powerful machine. Or on a number of them. Wherever you prefer. Anywhere.

Adobe Anywhere didn’t spring out of nowhere

Yesterday a few pieces of the puzzle came together in my head, and I realized that Adobe Anywhere was in no way conceived as a brand-new solution – it is in fact the result of the convergence of many years of research and development of a few interesting technologies.

A couple of years ago I saw a demonstration of remote rendering of Flash files, streaming the resulting picture to a mobile device. For a long time I thought nothing of it, because Flash has always been on the periphery of my interests. But yesterday I suddenly saw how relevant this demonstration was. I believe it was a demo of Adobe Flash Media Server, and it was supposedly showing a great way to let users whose devices lacked the power enjoy more advanced content without taxing their resources too much, and possibly to stream content to iOS devices not running Flash. Granted, the device had to be able to play streamed video, but it didn’t have to render anything. All the processing was done on the server.

Can you see the parallels already?

Recently Adobe Flash Media Server – which Adobe acquired with Flash when it bought Macromedia in 2005 – changed its name to Adobe Media Server, proudly offering “broadcast quality streaming” and a few other functionalities, not limited to serving Flash anymore. The road from Adobe Media Server to an Adobe Anywhere server does not seem very far. All you need is a customized Premiere Pro frameserver and project version control, which in itself is perhaps based on the phased-out Version Cue. Or not. The required backbone technologies seem to have been here for a while.

Mercury Streaming Engine backbone

What follows are a few technical tidbits that came with this realization and a few hours of research. Those of you not interested in this kind of nerdy detail, skip to the next section.

To deliver the video at astonishing speed, Adobe Anywhere most likely uses a protocol called RTMFP (Real Time Media Flow Protocol), which has its roots in the research done on the MFP protocol by Amicima – Adobe acquired this company back in 2006. RTMFP, as opposed to most other streaming protocols, is UDP-based, which means that much less time and bandwidth is spent on maintaining the communication, but also that there is no inherent mechanism in the protocol for confirming that all data has arrived. However, some of the magic of RTMFP makes this UDP-based protocol not only reliable, but also allows for clever congestion control and “absolute” security, at the same time bypassing most NAT and firewall issues.
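The general trick of building reliability on top of an unreliable, UDP-like transport is worth a tiny illustration. RTMFP’s actual mechanics are far more sophisticated; this toy sketch (no real sockets, an invented lossy channel) only shows the core idea: number the datagrams, acknowledge what arrives, retransmit what does not:

```python
import random

def lossy_send(packet, loss_rate, rng):
    """An unreliable channel: silently drops packets, like raw UDP."""
    return packet if rng.random() > loss_rate else None

def reliable_deliver(packets, loss_rate=0.3, seed=7):
    rng = random.Random(seed)  # deterministic "loss" for the demo
    received = {}
    pending = dict(enumerate(packets))  # sequence number -> payload
    while pending:  # keep retransmitting until every packet is acked
        for seq, payload in list(pending.items()):
            arrived = lossy_send((seq, payload), loss_rate, rng)
            if arrived is not None:
                received[arrived[0]] = arrived[1]
                del pending[seq]  # the "ack": sender stops retransmitting
    # Sequence numbers let the receiver restore the original order.
    return [received[seq] for seq in sorted(received)]

frames = ["frame0", "frame1", "frame2", "frame3"]
assert reliable_deliver(frames) == frames
```

Because acknowledgements and ordering live in the application layer rather than in the transport, the protocol can also choose *not* to retransmit stale video frames – one plausible reason a UDP base suits low-latency streaming.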

The specification of RTMFP was submitted by Adobe in December 2012 to the Internet Engineering Task Force (IETF), and is available on-line in its drafts repository.

More in-depth information about RTMFP can be found in two MAX presentations from Adobe. One of them is no longer available through the Adobe website, but you can still access its Google-cached version: MAX 2008 Develop; the other, from MAX 2011 Develop, is still available on the site. Note that both are mostly Flash-specific, although the first one has a great explanation of what the protocol is and what it does.

It is still unclear what type of compression is used to deliver the footage. I highly doubt it is any inter-frame codec, because the overhead of compressing a number of frames would introduce a noticeable lag. Most likely it is some kind of intra-frame compressor – perhaps a Scalable Video Coding version of H.264, or JPEG 2000 in its Motion JPEG 2000 variant – that would change the quality setting depending on the available bandwidth. The latter is perhaps not as efficient as the former, but even at full HD 1920×1080, a JPEG 2000 frame at a quite decent 50% quality is only 126 kB, a 960×540 frame only 75 kB, and if you lower the quality to a still viewable 30%, you can get down to 30 kB, which requires about 6 Mbps to display 25 frames per second in real time – essentially giving you a seamless experience over a wireless connection. And who knows, perhaps even some version of H.265 is experimentally employed.
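A quick back-of-the-envelope check of those numbers – per-frame size in kilobytes times frame rate, converted to megabits per second:

```python
# Stream bandwidth from intra-frame sizes: kB/frame -> Mbps at a given fps.
def stream_mbps(frame_kb, fps):
    return frame_kb * 8 * fps / 1000  # kB -> kilobits, then /1000 -> Mbps

print(stream_mbps(126, 25))  # 1080p frames at 50% quality: 25.2 Mbps
print(stream_mbps(75, 25))   # 960x540 frames: 15.0 Mbps
print(stream_mbps(30, 25))   # 30% quality: 6.0 Mbps
```

So the full-quality 1080p stream lands right at the top of the suggested 25–40 Mbps pipe, while the 30% quality stream fits comfortably within a decent WiFi link.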

Audio is most likely delivered via the Speex codec, which is optimized for UDP transmission and live conferencing.

Ramifications and speculations

There are of course several performance questions, some of which I have already raised: are you really getting the frame rate of your sequence (1080p60, for example), or is there temporal compression down to 24 or 25 frames – or any number, depending on the available bandwidth? And how is the quality of the picture displayed on a broadcast monitor next to my edit station affected? Yes, I know, Anywhere is supposed to be for lightweight remote editing. But is it really, once you have the hardware structure in place?

When it comes to the server, if I had to guess today, a relatively fast SAN and an equivalent of an HP Z820 with several nVidia GPUs or Tesla cards should be enough to take care of a facility hosting about half a dozen editors or so. Not an inexpensive machine, although if you factor in the lower cost of the editing workstations, it does not seem so scary. The downside is that such editing workstations would only be feasible for editing in Premiere Pro, and most likely little else – no horsepower for After Effects or SpeedGrade. Which brings me to the question: how are Dynamic Link and linked AE comps faring under Anywhere? How are rendering and resource allocation resolved? Can you chain multiple servers or defer jobs from one machine to another?

Come to think of it, in an environment using only Adobe tools, Anywhere over local Ethernet might actually be more effective than having all the edit stations pull the required media from the SAN itself, because it greatly reduces the bandwidth necessary for a smooth editing experience. The only big pipe required goes between the storage and the server. And this is a boon to any facility, because the backbone – be it fiber, 10-gigabit Ethernet, or PCI Express – still remains one of the serious costs as far as installing the service is concerned. I might even go further and suggest abandoning the SAN protocol altogether when only Adobe tools are used, thus skipping the SAN overhead both in network access and in price, although I believe that in these days of affordable software from various developers it would be a pretty uncommon workflow.

In the end I must admit that all of this is just an educated guess, but I think we shall soon see how right or wrong I was. Since Al Mooney has already shown a custom build of the next version of Premiere Pro running Adobe Anywhere, it is almost certain that the next release will have Anywhere as one of its major selling points.

Adobe Anywhere – are we there yet?

At NAB 2012 Adobe gave an intriguing sneak peek at its technology for collaborative editing. At IBC 2012 Michael Coleman introduced the new Adobe Anywhere and presented its integration with Adobe Premiere. Like most demos, this one looked pretty impressive, and even gave away a few interesting developments in the upcoming version of Premiere, but it also left me pondering the larger picture.

Indeed, the Mercury Streaming Engine’s performance seems impressive. The ability to focus on the whole production instead of on a single aspect of it, automatic (?) file management (and backup?), the use of relatively slow machines on complex projects, working over long distances – all of this is really promising. There is no doubt about it. However…

No back-end or management application was presented. No performance requirements were given. How soon does a server saturate its own CPU, GPU and HDD resources? Apart from performing all the usual duties, it must now also encode to the Adobe streaming codec, and all that horsepower must still come from somewhere. If the technology uses the current standard frame servers developed for Dynamic Link and Adobe Media Encoder, how are the resources divided, and how is the quality of service ensured? How effective is the application, and more importantly – how stable? I hope the problems with database corruption in Version Cue are a thing of the past, and will not resurface in Anywhere.

Adobe engineers have been working on the problem for about 4 years, so there is a high chance that my fears are unwarranted. At the same time, though, I’ve learnt not to expect miracles, and there will always be some caveats, especially with the early releases of the software.

Of course, this explains why Adobe wants to first target Anywhere at its broadcast clients. Perhaps there is some sentiment that, since the video division finally has enterprise clients, it needs to take care of them – hopefully not at the expense of smaller businesses and freelance editors like me. But setting up the servers, managing the hardware and the whole architecture takes expertise, and it is mostly the big guys who have the resources to implement the recommendations. We still do not know what the entry-level cost is going to be, but I highly doubt it’s going to be cheap.

Not that small post-houses would not profit from Anywhere. I can easily see how it could be incorporated into our workflow, and how it could easily resolve a few problems that we have to manage on a daily basis. But will we be able to supply the back-end architecture? It remains to be seen.

Interestingly, this approach of beefing up one’s machine room contrasts with another trend that we have been seeing – the horsepower of average desktops being more than enough to handle pretty complex projects. All of this remains totally unused in the model promoted by Adobe Anywhere. I wonder what Walter Biscardi thinks of it, and whether he plans on using it at all.

I’m also curious how version control is resolved. How are changes propagated – can you in some way merge conflicting projects, or do you need to choose one over the other? It is important. I gather that you can always go back to previous versions, but will they be available only from an administrative panel, or also from the applications themselves? Only time will tell.

It’s good that there is a possibility of expanding the system. I think a natural application, developed very shortly after the release, will be some kind of review player, where you can see the most recent result of the project, add markers and possibly annotations (why not? as a Premiere Pro title, for example). It would be especially useful on mobile platforms, like the iPad, where Premiere or even Prelude is not available. Such tools could become crucial for approval and the collaborative workflow in general.

There is also another point, which gave rise to the question in the title of this note. Is it the conforming uber-app that I’ve been arguing for? From the limited demonstrations to date, unfortunately, the answer is still no. We are not there yet. Even though Adobe Anywhere seems very promising for collaborative editing, it is not yet there for collaborative finishing (and archiving, for that matter).

The elephant in the room seems to be client review and approval. It’s OK to serve a quarter-resolution picture if you are editing on a laptop without external monitoring. But once you get into the realm of finishing, especially with your client sitting behind you, you want the highest quality picture that you can get, with as little compression as possible. Anywhere is most likely not going to be able to serve that. Would you have to leave the ecosystem then?

Even though support exists for After Effects, Premiere Pro and Prelude, the holy grail still remains the ability to take a Premiere project in its entirety and work on it in Audition or SpeedGrade, and then bring it back to Premiere for possible corrections to the picture edit with all the changes made in the other programs intact. Or to export an XML or EDL without hours of preparation when custom plugins, effects or transitions are being used. Nope – not there yet.

There is also the question of its integration into larger, more diverse pipelines, involving other programs and assets, not only from Adobe, but from other vendors like The Foundry or Autodesk. It’s true that Anywhere does have its own API for developers, although it remains to be seen how open and how flexible the system will be, especially in terms of asset management.

Yet, despite all these doubts and supposed limitations, it seems to be a step in the right direction. And, as Karl Soule claims, the release of Anywhere is going to be big.

What’s coming in Premiere CS7?


Update: see how right or wrong I was with what is actually coming in the next release.

In their recent video concerning the introduction of Adobe Anywhere – which I will elaborate on more in another note – the guys from Adobe revealed a few interesting upcoming (or at least being tested) features in Adobe Premiere.

Both presenters were using a custom development build of Adobe Premiere (the same one that you can see in Al Mooney’s presentation at IBC 2012). Take a look especially at around 2:35, where the transition is being applied. For your convenience I include a cropped screenshot of the timeline panel, where all the interesting stuff is happening.

Take a look at the mute and solo switches for audio tracks, the much wider transition bar centered on the clip, and an interesting button to the left of snapping, most likely toggling the display of audio waveforms. The last option brings to mind the possibility of delaying the creation of peak and conforming audio files in the preferences (a wild guess). Notice that there is no “untwirl” triangle in either the video or audio tracks. The menu to select which property is keyframed on each clip in the timeline is not visible either.

Update: I forgot to mention that at some point in the IBC presentation Al Mooney drops the clip from video track 2 to track 1, and he most likely did not do it the old-fashioned way, but used a keyboard shortcut. It’s a new feature as well, requested by many Final Cut users.

Michael seems to be dynamically adjusting the length of the incoming crossfade during application – with the cursor keys, perhaps? What a wonderful idea, straight from Illustrator! No more clicking in the Effect Controls panel or hunting for the small handles – access it directly from the timeline.

I’m also curious about the red pencil icons on the media in the project window. Are they simply markers for Adobe Anywhere assets – checked out, in sync – or is there something else going on?

Interestingly, the clip in the timeline is MXF – is support for DNxHD in MXF containers coming to Premiere as well? We shall see… One thing is certain – OpenCL support for AMD cards on Windows is most likely coming to Adobe Premiere very soon (see here at about 3:10).

These are all the details that I’ve seen in the video. But there is one additional, very important development. For some reason, even though Anywhere is being touted quite heavily, very little has been made of the fact that the footage in Anywhere is delivered in a proprietary Adobe codec. This is important. A native Adobe codec is something that many of us have been asking for since even before the creation of Cineform and its brief inclusion in one of the versions of Premiere, so that in collaborative environments we can skip QuickTime and its dreaded problems with gamma.

Most certainly the encoders will be platform agnostic (as opposed to Apple ProRes). We’ll see how well it stacks up against Avid’s DNxHD, and whether it can be used as a mastering/delivery codec as well. Of course, the real keys to popularity are getting hardware vendors like BlackMagic, AJA or Convergent Design to support direct recording in this format – which most likely will not happen overnight – and getting really high-end tools like Nuke to work with it.