Interview with CineForm’s David Newman

CineForm is one of the most popular intermediate codecs today, helping not only filmmaking professionals but also video enthusiasts. I’m very happy today to introduce you to David Newman, CineForm’s CTO. David is also the company’s compression and image processing engineer.

1. What sets CineForm apart from other intermediate codecs, e.g. Avid’s DNxHD, R3D, Lagarith, or Apple’s ProRes?

David Newman: Workflow. Nine years ago when we created the CineForm codec, a codec was not our ultimate goal; in fact, we didn’t intend to build a codec at all. We set out to make post-production workflows completely software-based, rather than relying on the hardware RT solutions of the day. We found that existing codecs were too slow, with quality issues hampering our workflow goals, so we set out to build something new. As this new codec was not to be a final format, it had to be very easy to convert into and out of without accelerator hardware, support a huge range of deep pixel formats, and operate with as many tools and platforms as possible. This flexibility has resulted in CineForm compression being used throughout the Hollywood post industry as a mezzanine format: a high-quality, fast archive for film and TV finishing.

In more recent years we do have some competition in the mezzanine/intermediate format space, each competitor addressing some of the reasons that brought the CineForm codec into being, yet none of the codecs listed is as flexible or as widely supported. However, data in and data out is not all we are interested in (it never was). Over the last four years our development efforts have involved more of the creative than the technical side of image development. This started with RAW image compression, which could not be displayed without a debayer/demosaic operation; since that operation is more of an art form than hard science, we felt it should be under user control. This expanded to include white balance, curves, primary color correction, secondaries through film emulation LUTs, image re-framing and burn-in metadata — all as part of the decoder. The more recent addition of FirstLight gives the filmmaker full non-destructive control over how the image is developed, independently of their NLE, while remaining fully compatible with all their tools. Tool-independent color correction is one of the ways CineForm is reinventing the workflow through a codec.
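To make the idea of image development inside the decoder more concrete, here is a loose conceptual sketch in Python (not the actual CineForm SDK). The metadata layout and the operations are invented for illustration; the point is only that changing the look means changing a small metadata record, never the compressed essence.

    # Conceptual sketch only: not the CineForm SDK. A decoded frame is
    # "developed" according to a small metadata record; the source pixels
    # are never modified, so the look stays fully non-destructive.
    import numpy as np

    def develop(frame: np.ndarray, meta: dict) -> np.ndarray:
        """Develop a decoded RGB frame (float values in 0..1) per metadata."""
        out = frame.copy()                            # source is left untouched
        gains = np.asarray(meta.get("white_balance", (1.0, 1.0, 1.0)))
        out = np.clip(out * gains, 0.0, 1.0)          # per-channel WB gains
        out = out ** (1.0 / meta.get("gamma", 1.0))   # simple tone-curve stand-in
        if "crop" in meta:                            # re-framing stored as metadata
            top, left, h, w = meta["crop"]
            out = out[top:top + h, left:left + w]
        return out

    # Two different "looks" from the same untouched frame, metadata only.
    frame = np.random.rand(1080, 1920, 3).astype(np.float32)
    warm = develop(frame, {"white_balance": (1.05, 1.0, 0.92), "gamma": 2.2})
    neutral = develop(frame, {"gamma": 1.0})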

2. So far, there’s been only one camera that records directly to CineForm (to my knowledge). Why not more prosumer or pro cameras? Why is it difficult to persuade camera manufacturers?

David Newman: We are a software company with only 8 people, mostly engineers. The camera guys generally need an FPGA or ASIC implementation for lower power usage and long battery life. While the CineForm codec is designed with a potential hardware implementation in mind, and would have excellent low-power characteristics, it will be the responsibility of a licensing partner to do the hardware port. In some ways we’re a victim of our software successes and Intel’s ever-increasing performance: hardware vendors from Silicon Imaging, Wafian, CineDeck and 1Beyond all simply use a mobile Intel CPU for live CineForm encoding, which constantly gets faster and cheaper, pushing back the need for a hardware port.

3. Any plans to resurrect the HDMI-recording device? Would that make sense today?

David Newman: It would totally make sense today, more so than ever. CineDeck is hugely popular, and a prosumer version would address a far larger market. That original proposal was just to tease out hardware partner(s); we did have one for a while, but they didn’t survive the market change. We are still looking for a partner to do it, and it is getting easier to build such a device.

4. You are also a filmmaker. How important do you feel “visual quality of footage” is? Do you believe that source footage quality is paramount in creating an astounding piece, or is it all in the hands of the filmmaker to create magic via whatever means he/she has at his/her disposal? Basically, “is it the camera, or the camera-man?”

David Newman: The camera-man. We develop workflow-based compression, which means we understand that quality at the expense of everything else does not yield the best results — i.e. uncompressed rarely, if ever, benefits the user. While our Filmscan-2 quality mode is indistinguishable from uncompressed, I don’t use it in my own projects, as a perfect source does not make a movie. There is a balance between the volume of data, speed of retrieval, and image flexibility (how much you can push it in post). I’ve shot a Canon 7D directly to CineDeck to bypass the camera’s H.264 compression; yes, it was better in post, but my shoot was less flexible (this was an on-stage event, so mobility was not an issue). On other projects I’ve shot to in-camera compression and posted in CineForm. So I choose the acquisition format to fit the project’s needs; while quality is important, it is not the driving factor.

5. CineForm is a mature codec. However, as an engineer, I’m sure you have more ideas on how to enrich the product. Could you share some of these ideas with us?

David Newman: The codec core is very mature, having changed very little on the encode side in the last 4-5 years, yet progress has not stopped. The codec is now at version 6.5.1, which just added uncompressed profiles for YUV and RGB (RAW already had one). This helps with battery life and compute loads on mobile acquisition devices where not all frames are compressed (as you can’t tell uncompressed from compressed in FS2, this can happen without the filmmaker ever being aware). It also lets those doubting the compression quality test and compare for themselves, so it has both technical and marketing value. We will be adding many more image development features for 2D and 3D workflows, some of which I’m really excited to use in my own projects. We are currently working on an idea so outside the box that we don’t yet know the limits of how it will be used; it takes today’s already powerful metadata engine to another level.

6. Do you think that 3D movies will become the norm soon, or is this yet another Hollywood effort to resurrect 3D unsuccessfully, as has happened at least 5 times in the last 80 years?

David Newman: It will not become the norm, but it will settle at an ongoing percentage of the feature film market, likely around 10%. The technology is now at the point where it is not the limiting factor, as it was in all the previous incarnations. Now that I’ve seen a good number of 3D features, I’m starting to miss it in some that aren’t — Iron Man 2 should have been 3D, but I’m happy that Inception was not.

7. Any plans for a Linux decoder via the GStreamer framework?

David Newman: We have a Linux version of the encoder and decoder SDK that third parties have started to implement. This SDK would make a GStreamer implementation straightforward. We are not sure whether CineForm or a third party will do it first. It is up for grabs. Care to do some Linux coding?
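For readers wondering what such an integration might look like, here is a minimal, purely hypothetical sketch: no CineForm GStreamer element exists as of this interview, so the “cineformdec” plugin name below is invented to stand in for whatever a port of the Linux decoder SDK might expose. The surrounding calls are the standard GStreamer Python bindings.

    # Hypothetical sketch: "cineformdec" is an invented element name standing
    # in for a future GStreamer plugin built on the CineForm Linux SDK.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    # Demux an AVI, decode the video with the hypothetical CineForm element,
    # convert the colorspace and display the result.
    pipeline = Gst.parse_launch(
        "filesrc location=clip.avi ! avidemux ! cineformdec ! "
        "videoconvert ! autovideosink"
    )
    pipeline.set_state(Gst.State.PLAYING)

    # Wait until end-of-stream or an error, then shut the pipeline down.
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(
        Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS
    )
    pipeline.set_state(Gst.State.NULL)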

8. Are you personally a supporter of WebM or H.264 becoming HTML5’s default video codec?

David Newman: Decoder licensing should be made free for all but the highest volume distributors.

9. How do you feel about the explosion of online videography and filmmaking by amateur enthusiasts in the last 2-3 years? Do you see this as an artistic and social revolution, or merely a sign o’ the times?

David Newman: I believe it’s the impact of the Canon 5DMk2 on enthusiasts, more than any other market or social force. That camera, along with the 7D, T2i, GH1, etc., has allowed part-time/hobbyist filmmakers to make images that look like the high-end projects which inspired them. This is what I got into this business to help create. As a hobbyist filmmaker myself for 20+ years, I could never afford the high-end tools, so I started building my own, working to make the results look as good — and that just got a lot easier.

10. And the 10-million-dollar question: would CineForm be a good codec for applications like remote medical procedures, or… spying satellites? Do you eye markets apart from the entertainment industry?

David Newman: We are in some of these markets: we have users at NASA, on extreme deep-ocean ROVs, in terrain imaging, and in many other applications well beyond our design goals. All these projects require fast, high-quality compression processing, and our easy-to-use SDK certainly helps.

5 Comments »

Glenn Thomas wrote on August 24th, 2010 at 6:12 PM PST:

Nice interview!

Since upgrading to NEO HD recently, I’ve become a big fan of First Light. Something I would like to see is a Vegas plugin that would allow tweaking of all the First Light parameters, so I’m not switching between Vegas and First Light all the time. Although I doubt it would be possible due to the different levels at which plugins are used in Vegas: media FX, clips, tracks and master.

I would still be interested in that portable HDMI device too. A device that enables recording straight to CineForm, built into a decent-sized LCD monitor, would be brilliant.


33_hertz wrote on August 25th, 2010 at 9:58 AM PST:

I purchased NeoScene a few months ago when I read your blog and saw that you recommended it. VMS9 wouldn’t accept my Sanyo’s files (although 10 now does) so NeoScene solved that problem and works great.

So I was very interested to see on my RSS feed that you had interviewed David. I thought you asked some very good questions, not that I understand all the technical bits haha.

I was also impressed with David’s willingness to interact with his customers in support forums.

Anyway, thank you for a very informative interview. 🙂


David Newman wrote on August 26th, 2010 at 11:52 AM PST:

It seems my answer to number 8 was prophetic. H.264 is now free for internet usage, permanently, beyond the originally proposed 2015 date.


Stephen Armour wrote on August 28th, 2010 at 5:27 AM PST:

As I was downloading the newest NEO 4K iteration, I was thinking about how it’s now almost a 100MB download. My, how things have changed since 2005!

That’s not a complaint, but a compliment on steadily growing value.


Michael C. wrote on August 30th, 2010 at 9:24 AM PST:

“H.264 is now free for internet usage” — it is free only for “free-to-view” videos. As Google moves towards a “pay-to-view” model, the free AVC codec becomes less relevant. Google, advertisers and content providers will be making money on the movies sold in the Google Store. Who cares about home videos of 1-yr-olds sucking on their fingers anyway (apparently some do; people are strange). MPEG-LA simply stated that it prefers hunting for larger fish.

