Archive for July, 2009

Color grading of the week, Part 8

Some more color grading, some extreme, some not. Lighthouses were my subject tonight. I mostly used Magic Bullet, Bump Map, Color Corrector, Unsharp Mask, and a variety of Pixelan’s Vegas plugins.

Before:

Picture by chjab, licensed under the Creative Commons “Attribution” 2.0 license.
After:

Before:

Picture by clapon, licensed under the Creative Commons “Attribution/Share-Alike” 2.0 license.
After:

Before:

Picture by Bocian & Tusia, licensed under the Creative Commons “Attribution” 2.0 license.
After:

Before:

Picture by Jake Wellington, licensed under the Creative Commons “Attribution” 2.0 license.
After:

Apple, AT&T, shame on you

After Adam and Thom, I will be putting the brakes on my Apple purchases too. A few days ago I promised one more Apple iPhone review to a third-party store, so when I am done with that, it’s over for me as well.

Apple is quickly becoming the new Microsoft in the eyes of the people. The funny thing, though, is that Apple was always like that; it’s just that people outside of the Bay Area didn’t know all of the juicy details most of us residents know. You see, Silicon Valley is a small place. You would be surprised how small it is. Word gets out easily. So while I might not have been blogging or reporting much about the small tidbits I happened to hear over the years (in order to protect my sources), the truth is, Apple never had a good name as a workplace or a business in the area. Not before Steve Jobs came back as CEO, and certainly not after.

But I guess, the cat is out of the bag now.

Between the crude iPhone application approval process, the ban on background processes, and the selling of locked-only phones (something that’s illegal in other countries, and I hope it becomes so here too), I loathe what Apple is doing. They have created the best smartphone experience, they started the true smartphone revolution with the iPhone, but at the same time they try to limit progress in other areas.

It’s not a coincidence that the un-approved iPhone apps, some previously approved by Phil Schiller himself, were ALL Google Voice-related (I think 2-3 third-party Voice apps were pulled, plus the official Google Voice app, which was rejected from the get-go). The hard work and sweat of these developers all went to waste. It’s more than obvious that AT&T is behind this plot. You see, Apple has nothing to lose from Google Voice; if anything, Apple has everything to gain from it (it makes their phone more useful)! But AT&T has everything to lose. Google Voice allows free US-bound calls and dirt-cheap international calls (just $0.02 per minute from the US to France or Greece), which of course puts AT&T’s business at risk.

What Google is doing with Google Voice is nothing but progress. They have the bandwidth, so they go with it. AT&T, on the other hand, is nothing but a new RIAA/MPAA, scared of the new realities that technology brings! They can’t, or won’t, change their business and/or technologies, and so they fight the new kids on the block who use technology in a more flexible way.

Add to that what AT&T did to my iPhone last month: they cut off my EDGE support. My iPhone is LOCKED to AT&T, and it is NOT jailbroken. It is as vanilla as it gets. The only difference here is that I didn’t buy the iPhone from AT&T at the time, as I only needed to use their PayAsYouGo plan (since I make no more than 4-5 calls per month).

Think about it. AT&T blocks NO OTHER cellphone maker from that plan! They single out the non-AT&T-bought iPhones, as if they are a plague, EVEN when they are as vanilla as they get. THIS ALONE could be used for a class action lawsuit. I had no plans to contact the EFF about it, but you can be sure I will do so now, after the latest Google Voice fiasco. There are too many grudges to hold now, I am afraid.

In the meantime, I have emailed Apple with feedback about their practices, and I suggest you do so too.

Update: Kroc wrote on Twitter that “I think you peeps should create a charter that defines ‘iPhone fixed’ and publish it”.

Here’s my list, in this order, for Apple:
1. No more locked phones. It should also be possible to buy phones at full price, with no ties to any carrier. Subsidized phones with a contract should also be unlocked.
2. Denying authorization for iPhone/iPod apps should be restricted to malware/spyware/buggy apps, and to illegal apps (e.g. a Nazi-related app). All other apps should be allowed through to the App Store. Sexual-related apps should be allowed, but behind age verification or a warning.
3. Allow background processes on the iPhone/iPod, as long as they don’t compromise the system (e.g. battery life, system software, towers). The iPhone is not a real smartphone without background processes.

For AT&T (and ANY other carrier):
1. Stop dictating to manufacturers what software they can and can’t put on their phones. It’s not your job. It only becomes your job if your towers are compromised. Otherwise, SHUT IT.
2. No more locked phones. Period. You can still subsidize phones, but they have to be unlocked.
3. Allow EDGE/3G/GPRS for all phones. Artificially limiting the non-contract-bound BlackBerries and iPhones is unacceptable, even when I am willing to pay your crazy prices ($10 per 1 MB of data transferred)!
4. Allow “PayAsYouGo” calls from Europe, and from every area within the US (e.g. the Ukiah area in CA, USA). It’s *unacceptable* not to be able to use my phone when I am on vacation (I am blocked from calling out in these areas), even if I am willing to pay your crazy-ass prices!

Finally, if Apple can’t design protection that opens up the application CPU without compromising the radio CPU, they need better engineers. The notion that the towers aren’t designed to deal with faulty clients is just bullshit. Who would design a client-server system where the server trusts the client?

The Panasonic FZ38 video digicam

Panasonic announced the FZ38 yesterday, an 18x super-zoom camera (called the FZ35 in the US). A super-zoom is not usually the brightest idea for a digicam purchase. However, this camera has other features that can prove very useful to people who need 24p with manual controls on the cheap ($400).

See, this camera is the first consumer non-DSLR digicam that offers manual controls in video mode! It has both shutter speed and aperture controls! At this point, I can only assume that in this manual video mode the exposure will stop jumping around as it currently does with other Panasonic digicams.

Then, there’s the frame rate thing. According to DPReview, the camera can do 25p or 30p (depending on whether it’s the European or the US model), at 17 mbps AVCHD-Lite. However, it saves the videos in a really bad way that makes editors *think* it recorded 50p or 60p, when in fact it has just duplicated each frame. In Sony Vegas you have to explicitly set the clip to 25 fps or 30 fps (depending on whether you used the European or the US model) in order to avoid the duplicated frames. Sample 25p/50p .mts file here (bottom of the page).
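You can verify the duplication yourself by diffing consecutive frames. Here’s a minimal sketch in Python with OpenCV; the filename is hypothetical, and the clip is assumed remuxed/transcoded to something OpenCV’s ffmpeg backend can open:

# Count near-identical consecutive frames in a "50p" FZ38 clip.
# If roughly every other frame is a duplicate, the camera really
# recorded 25p and simply doubled each frame.
import cv2
import numpy as np

cap = cv2.VideoCapture("fz38_sample.mp4")  # hypothetical filename
ok, prev = cap.read()
dupes = total = 0
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    total += 1
    # Mean absolute difference between consecutive frames;
    # near-zero means this frame duplicates the previous one.
    if np.mean(cv2.absdiff(frame, prev)) < 1.0:  # threshold is a guess
        dupes += 1
    prev = frame
cap.release()
print(f"{dupes} of {total} frames are near-duplicates")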

So, in a scenario where you shot 25p (with a 1/50th shutter speed), you edit at 25p, and at the end you re-time the video to 24p (if desired). This produces a very filmic motion look, as close as it gets with a digicam. Yes, there are other cheap digicams that do 24p right out of the box, but they don’t offer manual shutter speed control, which is an important ingredient in the quest for the filmic motion look.
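Vegas can do that re-timing at render time, but as an illustration of what the conform means (every frame is kept, each just shown slightly longer: the classic 4% “PAL to film” slowdown), here’s a sketch driving ffmpeg from Python, with hypothetical filenames:

# Conform 25p to 24p: all frames kept, each displayed for 1/24s
# instead of 1/25s (a 4% slowdown); audio is slowed to match.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input_25p.mp4",   # hypothetical input file
    "-vf", "setpts=25/24*PTS",         # stretch video timestamps
    "-r", "24",                        # flag the output as 24 fps
    "-af", "atempo=0.96",              # slow audio by 24/25 = 0.96x
    "output_24p.mp4",
], check=True)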

Additionally, color/saturation/contrast/brightness controls are offered in video mode, as well as manual white balance. In conclusion:

Pros:
– Shutter speed manual control
– Aperture manual control
– 25p can easily be converted to 24p
– Color adjustments
– Manual white balance
– Manual focusing in addition to auto focusing
– Thanks to its CCD sensor, it won’t produce wobbly (rolling-shutter) videos

Cons:
– Small sensor (1/2.33″)
– Its doubling of frame rate is stupid and unnecessary
– Not a fast lens (meaning, less background blur than the HV20)
– To get 25p recording you need to buy the European (FZ38) version.
– We are still not sure if the exposure will keep jumping even in manual control mode (there was no word about ISO/gain control, you see)
– Its 17 mbps bitrate is much lower than the 24 mbps of Canon’s 720p digicams

Color grading of the week, Part 7

Before:


Picture by Alex Witherspoon, licensed under the Creative Commons “Attribution” 2.0 license.

After:

I used Sony Vegas’ Color Corrector to fix the white balance, then a modified “bleach bypass” Magic Bullet look, and a bit of unsharp mask to give it a more filmic look.

Manifest Destiny

An amazingly well-done horror short movie shot with an HV30 & a 35mm adapter.

The importance of accelerated drivers

Geeks.com sent over one of their popular computer-parts products, a video card: the nVidia GeForce 8800GT (512 MB GDDR3, PCIe, PureVideo2 HD support, OpenGL 2.0, DirectX 10, HDCP). This is a test of video playback performance with the Vista-default drivers (Vista 64-bit, SP2) versus the nVidia accelerated drivers (latest stable, v190).

I tested a Canon 5D Mark II file, since it’s a heavy format: MOV, h.264 High Profile, 40 mbps, no audio. The file was played back full screen at 1:1 (1920×1080) in 32-bit color, using various decoders and media players, and the frame rate and CPU usage were measured. I used a video file with a lot of movement to visually judge whether VLC was playing the file in real time (of the players I tried, it was the only one without a way to show fps performance; the rest provided concrete numbers). Results below:

The CoreAVC Pro CUDA-accelerated version had, of course, the best result, with just 3% CPU utilization (the Vista default drivers have no CUDA support). Even with CUDA turned off, there was still a small speed-up with the newer, non-Vista drivers. The rest of the decoders also had it easier, either with a better frame rate or with less CPU utilization. Where they didn’t do better in terms of frame rate, it was mostly because of multi-threading issues, as these decoders are written in legacy-style code (JBQ and I still joke about how even today’s programmers can’t get multi-threading right). The only decoder I tested that was actually multi-threaded was CoreAVC’s. These guys rock.

Please note that I had to use a speed-up option in VLC to get real-time decoding with it. By default, VLC doesn’t play the 5D files in real time, not even on the quad-core 2.4 GHz Dell PC I used for the test.
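For anyone who wants to reproduce this kind of measurement, here’s a minimal sketch of the sampling side, in Python with the psutil package (the player process name is an assumption; adjust it to whatever player you’re testing):

# Sample the CPU utilization of a running media player while it
# plays the 40 mbps 5D Mark II clip, then print the average.
import psutil

PLAYER = "vlc.exe"  # assumption: change to the player under test

procs = [p for p in psutil.process_iter(["name"])
         if (p.info["name"] or "").lower() == PLAYER]
if not procs:
    raise SystemExit(f"{PLAYER} is not running")

player = procs[0]
samples = [player.cpu_percent(interval=1.0) for _ in range(30)]  # ~30 s
print(f"average CPU: {sum(samples) / len(samples):.1f}%")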

The moral of the story is:
– Use graphics cards that have a fast memory bus. The generic (non-PureVideo) 2D-accelerated decoding path is mostly bandwidth-bound, so get cards that don’t cut costs by using slow memory or buses.
– Don’t leave your PC with the default XP/Vista/Win7 drivers. Upgrade to the latest stable version from your manufacturer’s web site.
– When possible, use CoreAVC Pro as your default decoder in media players/editors (unfortunately Vegas won’t use it, since Vegas doesn’t support DirectShow decoders, but Premiere might).
– Prefer nVidia over ATi. nVidia’s PureVideo architecture is better supported by decoders, be it CoreAVC or Adobe’s CS4.
– Don’t ever opt for an Intel integrated card, unless you are really short on money.

The 6th and final season of LOST

Today at Comic-Con was “Lost” day. The two writers and the cast showed up in front of 6,500 fans to show teaser clips of Lost’s sixth and final season and (vaguely) answer some fan questions. Here’s a rundown:
– Small clips were shown from a timeline where the 815 crash never happened.
– The writers said that the timeline might have changed.
– But they were quick to also say that if they were to erase 5 seasons of plot that would be a “cheat to the viewers”.
– When asked if there will be flashbacks in season 6, the writers said that the “format will be different”.

Now, put all that together and theorize. Potential spoilers below (although I’ve been wrong before):

I believe that S6 will start off with Oceanic 815 landing safely at LAX. The story will be split between two parallel universes: our original timeline, and a timeline where the crash never happened. Dead characters like Charlie and Ana Lucia will be shown alive and well. It’s already confirmed that the actress who plays Juliet will be back for a few episodes, and maybe Shannon & Boone too.

The other timeline will continue just after Jacob’s death. The people who survived the nuclear detonation (which is what created the split in the timeline) will be transported back to 2007-8. This is what Jacob meant when he said, just before he died, “they are coming”.

In the other timeline, the heroes will simply try to get on with their lives, until that weird guy named Jacob tells them that things are not how they should have been. They think he’s crazy; even the real Locke thinks so! But soon after that, the “special” people among the 815 passengers will momentarily cross over to the other reality, maybe through dreams, or other means. This is what will make them think that maybe this Jacob guy is not so crazy after all.

These crossovers have a unique effect on the other timeline: apparitions and whispers. This is how those are going to be explained.

At the end of the season the two timelines will be merged back. But someone will have to make a decision: do you merge back into a timeline where so many people died, but found their true selves just before they died? At this point we should remember the main point of Lost, which is all about redemption and fate.

I posted the above theory on DarkUFO’s site, the biggest Lost fan site on the internet. There were many comments there from people seemingly confused by the various announcements, but when I posted the theory, everyone stopped commenting. A few said that it makes sense, and then they stopped commenting too (so far). I feel like a party pooper. It wasn’t intentional, of course; this was simply a theory of mine, but apparently one that made total sense to them. It shows once more that what people want are more mysteries, not answers. It’s the answers that kill a show like this, not the added mysteries. It’s counter-intuitive for many people to realize this, though.

Meet Fade, the HV20 photographer

This is the art of Fade from TN, USA. Fade uses the modest (but legendary) 3.1 MP Canon HV20 camcorder to shoot her amazingly artistic and beautiful pictures. I am a huge fan of her art and I had to write this blog post as a shrine to it. She’s yet another bright example showing that you don’t need the best camera to shoot the best photograph or the best video. You just need the vision, and the skill. Enjoy.

Finally, a few more nice pictures shot by others with the HV20.

The power of color grading

I had a shock tonight. I was watching the day’s most popular YouTube videos and stumbled on this and this video, promotional clips from the new movie “Funny People“. The clips felt very video-like; they had nothing of the $70 million look the movie cost to produce. Which of course gives us hope that our consumer HD cameras can produce great-looking video if we know how to post-process it. I tried to find out which camera was used, but to no avail (it has a digital look, though).

Later, I searched for and found the trailer at Apple’s site, to see more of the movie’s scenes. When I watched it, though, it was a completely different look. Obviously, the YouTube clips were ungraded!!! The actual trailer really did look like a Hollywood movie! I can’t believe how the colorists were able to make this originally terrible-looking footage look so good. Check the before and after!

Of course, there’s always the chance that the YouTube clips were the ones graded to be super-contrasty for some reason, or that whoever exported them messed them up, but I don’t think so. It really feels like the YouTube clips are the original ungraded clips, and the trailer was graded. Which shows us how important grading is. You could take any digital HD camera and make the footage look just as fabulous (as long as you have access to RAW).

Update: I played with the HD YouTube clips and tried to reproduce the look. Even while working with the useless (for color grading needs) 2 mbps YouTube clips, I was able to get pretty close to the trailer look (I would need the movie’s RAW files to emulate it completely). Which means that the YouTube clips are *definitely* the original clips, as they came out of the camera, ungraded! If *I* was able to get this close with these useless 2 mbps files, the movie’s colorist could very easily reach that trailer look with his RAW 4K files. Which again shows us that if we know how to light, frame, shoot, and grade, we can get the “film look” even with a consumer HD camera (we would have to try a bit harder, of course, but it’s possible). No need for 35mm adapters.

I used the freeware Aav6cc plugin (saturated greens and yellows, desaturated reds), Sony Vegas’ “Contrast” plugin (-16 value), and the “Color Corrector” plugin (low saturation, higher gamma, a bit of offset, mids+lows towards yellow). Piece of cake, huh!
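For those without Vegas, here’s a rough approximation of that recipe in Python with OpenCV/numpy. The plugins’ exact math isn’t public, so every number below is an eyeballed assumption, not the plugins’ actual parameters:

import cv2
import numpy as np

img = cv2.imread("frame.png").astype(np.float32) / 255.0  # hypothetical frame

# 1. Hue-selective saturation (a stand-in for the Aav6cc step).
# For float images, OpenCV's HSV hue is in degrees (0-360).
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
greens_yellows = (h > 40) & (h < 160)
reds = (h < 20) | (h > 340)
s[greens_yellows] *= 1.3          # saturate greens and yellows
s[reds] *= 0.7                    # desaturate reds
img = cv2.cvtColor(cv2.merge([h, np.clip(s, 0, 1), v]), cv2.COLOR_HSV2BGR)

# 2. Lower the contrast (standing in for Vegas' "Contrast" at -16).
img = 0.5 + (img - 0.5) * 0.85

# 3. Color Corrector-ish: global desaturation, slight offset,
# higher gamma, and mids pulled toward yellow (less blue).
gray = img.mean(axis=2, keepdims=True)
img = gray + (img - gray) * 0.85
img = np.clip(img + 0.02, 0, 1) ** (1 / 1.1)
img[..., 0] *= 0.95               # BGR layout: reducing blue warms the image

cv2.imwrite("graded.png", (np.clip(img, 0, 1) * 255).astype(np.uint8))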

Update 2: One more. Again, I would need the original RAW/4k files to do better than that.

Tutorial: Stereoscopic 3D with Sony Vegas

3D is back as the next big thing (until holograms arrive, hah!) and many forces are heavily pushing for it on all fronts. Soon enough, we will be enjoying 3D without the need for annoying glasses too.

Since July 2009, YouTube has supported 3D videos. It offers various viewing styles to fit all kinds of tastes and… glasses. So, here’s how to shoot, edit and export such 3D footage in stereoscopic mode (a mode that allows YouTube to offer more than one viewing style) with Sony Vegas Pro 9 and prior versions, or Vegas Platinum 10 or earlier. Vegas Pro 10+ and Platinum 11+ have their own, different way of editing 3D.

The shoot

1. Buy this and this. Here’s a cheaper twin-head model if you’re short on money.

2. Place two identical cameras on the twin-head tripod. If not identical, they should at least be similar models (e.g. the HF10 and the HF100). Leave less than an inch (~2 cm) of space between the two cameras at zoom level 0; but if you zoom in, say 3x, increase their spacing to about 2-3 inches (4-6 cm). Of course, you need to be super-precise about your zoom level each time (and genlock the cameras if they have that feature).

3. Set up the cameras in exactly the same way: frame rate, resolution, zoom level, exposure compensation, shutter speed, etc.

4. Make sure the cameras are level with each other (you can enable your cameras’ grey grid markers to check whether the horizon is tilted in one of the two). Try to shoot your subject in a non-static way, always making sure there’s some background visible, so the image conveys “depth”.

5. Press “Record” on both cameras (maybe even by using a remote control, if your cameras came with one). I suggest you record in plain 50i/60i because 3D requires more frames to look natural (although PF25/PF30/24p/25p are workable, PF24 can be very problematic depending on the pulldown removal algorithm used, so stick with the default frame rates).

6. Use the clapper board to clap. The sound it makes will be used later to line up the footage from the two cameras.
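In Vegas you’ll line the clips up by eye on the audio waveform (step 2 below), but the clap alignment can also be automated with a cross-correlation. Here’s a sketch in Python with numpy/scipy, assuming you’ve extracted the region around the clap from each camera’s audio to mono WAV files at the same sample rate (filenames hypothetical):

# Find the audio offset between the two cameras by cross-correlating
# the clap, then convert it to timeline frames.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

FPS = 29.97  # assumption: your project frame rate

rate_a, cam_a = wavfile.read("cam_a.wav")  # hypothetical filenames
rate_b, cam_b = wavfile.read("cam_b.wav")
assert rate_a == rate_b, "resample the WAVs to a common rate first"

a = cam_a.astype(np.float64)
b = cam_b.astype(np.float64)
corr = correlate(a, b, mode="full", method="fft")
lag = int(np.argmax(corr)) - (len(b) - 1)  # samples camera A lags camera B

print(f"offset: {lag} samples = {lag / rate_a:.3f} s "
      f"= {lag / rate_a * FPS:.1f} frames")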

The editing

1. Load Vegas and set up a 1920×540 project if your cameras were full HD, or a 1280×360 project if your cameras were 720p (notice how the vertical resolution is half of 1080p/720p). Make sure the rest of the project properties are correct (e.g. frame rate, field order, aspect ratio). Select “Best” for quality, and “interpolation” for the de-interlacing algorithm. Here’s how it would look if you shot NTSC 1080/60i HD:

2. Place the two nearly identical clips from the two cameras in the timeline, one on a video track above the other. Zoom in on the timeline and find the point where the clapper makes the clapping sound. Line up the two clips based on it, and trim off the excess edges.

3. Load the “Track Motion” dialog for the top video track. Click the “Lock Aspect Ratio” icon in its toolbar. Then, under the “Position” section, set X: -480, Y: 0, Width: 1,920, Height: 540. It should look like this:

4. Load the “Track Motion” dialog for the bottom video track. Do the same as above, but use 480 for X (instead of -480), then close the dialog. You should now have something that looks like this in the (ultra-wide) preview screen (see the sketch after this list for what these two Track Motion steps effectively do):

5. That’s it. Your video is now stereoscopic. Save the project.
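Those Track Motion values simply squeeze each 16:9 eye to half its width and place the two halves side by side. For the curious, here’s what the packing amounts to outside of Vegas, sketched in Python with OpenCV (hypothetical filenames; the clips are assumed already synced and 1920×1080):

# Pack the synced left/right 1080p clips into 1920x540 side-by-side
# stereoscopic frames (each eye squeezed to 960x540).
import cv2

left = cv2.VideoCapture("left_eye.mp4")    # hypothetical filenames
right = cv2.VideoCapture("right_eye.mp4")
fps = left.get(cv2.CAP_PROP_FPS)
out = cv2.VideoWriter("sbs_3d.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps, (1920, 540))
while True:
    ok_l, frame_l = left.read()
    ok_r, frame_r = right.read()
    if not (ok_l and ok_r):
        break
    out.write(cv2.hconcat([cv2.resize(frame_l, (960, 540)),
                           cv2.resize(frame_r, (960, 540))]))
left.release()
right.release()
out.release()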

The exporting

1. Export as described here, but with two modifications: first, ignore the “project properties” setup in step 1 (we already did that above); second, the resolution. If you are exporting at 720p, the resolution you should export at is 1280×360. Everything else is the same as in that exporting tutorial. If you are exporting at 1080p, export at 1920×540 and give it a bit more bitrate (e.g. 8-9 mbps).

2. When the video is exported, upload it to YouTube. Make sure you add the following TAGS to your video, otherwise YouTube won’t offer the 3D menu options: HD, 3D, yt3d:enable=true, yt3d:aspect=16:9 (eventually it will be possible to tell YouTube directly that your videos are 3D, so the 3D tags won’t be needed, but for now, use them).

3. After a while, YouTube will have converted your video to HD (it transcodes the low-resolution versions first, and HD becomes available a few hours later). Wear your glasses, select the right YouTube 3D menu option on your video page (depending on what kind of glasses you have), and enjoy!

Notes

1. The kind of export we did here is called stereoscopic (side-by-side, with the two wide clips next to each other). There’s also a way to directly export a hard-coded anaglyph red-cyan image, but that’s the old way of doing things. YouTube can now dynamically adapt the stereoscopic image to various methods and viewing styles, so the stereoscopic method described in this article is the one you should choose.

2. You can edit & export at full HD rather than just 540 pixels of height, but you will have to create a 3840×1080 project to do that (1920+1920 wide by 1080 tall). Unfortunately, only Sony Vegas Pro 9+ supports such high project resolutions.

3. YouTube requires about 2x the CPU speed to play back 3D videos. So an older PC that barely plays an HD YouTube video smoothly won’t be able to play back a 3D version of that video smoothly.

4. If you don’t have two cameras, you can “fake” it by using the exact same clip twice, offsetting one copy by 4-5 frames in its video track compared to the other. This is how I did it in my 3D test here, since I don’t have two HV20s (although I am seriously thinking of getting an HV30 now, to use in 3D mode). This hack of course doesn’t produce realistic 3D, but it’s good enough to test things out and learn the workflow (see the sketch below).
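A sketch of this fake-3D trick, again in Python with OpenCV; it’s the same side-by-side packing as above, just with one “eye” offset (the filename and the offset value are assumptions):

# Fake 3D: open the same clip twice and skip a few frames on the
# "right eye" before packing side by side, as in the tutorial above.
import cv2

OFFSET_FRAMES = 4  # assumption: 4-5 frames, per the text above

left = cv2.VideoCapture("clip.mp4")   # hypothetical filename
right = cv2.VideoCapture("clip.mp4")
for _ in range(OFFSET_FRAMES):
    right.read()  # discard: the right eye runs a few frames ahead

fps = left.get(cv2.CAP_PROP_FPS)
out = cv2.VideoWriter("fake_3d.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps, (1920, 540))
while True:
    ok_l, frame_l = left.read()
    ok_r, frame_r = right.read()
    if not (ok_l and ok_r):
        break
    out.write(cv2.hconcat([cv2.resize(frame_l, (960, 540)),
                           cv2.resize(frame_r, (960, 540))]))
left.release()
right.release()
out.release()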