Capture Cards and Codecs
April 8, 2002
By Marco Solorio
Depressed about Uncompressed?
Building the "perfect" Uncompressed FCP system.
About this article:
Thinking of going uncompressed? This article focuses on the different uncompressed hardware solutions available for Final Cut Pro. It attempts to answer questions such as: "What capture card should I buy, and do I even NEED an uncompressed system?" Also included is a side-by-side evaluation of the quality and functionality of each uncompressed hardware solution on the market today.
Have you been exhausted by the number of choices there are for non-linear editing packages? It seems there are more choices today than ever before. With so many choices come many "budget brackets," so to speak. To keep things a little less complicated, we'll focus on systems that can work for the Final Cut Pro user running an Apple Macintosh.
Here are some questions to consider as you prepare:
- Are you editing in DV and want to "move up" to uncompressed? What are the advantages and disadvantages?
- If you're thinking of adding hardware to create an FCP uncompressed system, will it be for editing purposes or CGI purposes? Maybe both?
- What are your I/O needs? SDI, Component, FireWire, Y/C? Break-out-box or direct connections?
- Is High-Definition (HD) in your facility's future?
- What about the need for 24 FPS real-time editing for film?
- Not all systems have a full set of real-time capabilities. How much real-time functionality can you sacrifice for other needs you may require?
- For many, image quality is the highest priority. How do the codecs compare in quality, and what does that mean for your finished product?
- What are the highlights of all these uncompressed systems?
- How does your ideal "perfect system" fit in your budget?
- Is there really a "perfect system" that can cover everything?
Let's start with the basics
Final Cut Pro and DV. For the person who's on a budget but wants to stick with practical quality, this is a great combination. From independent filmmaking to corporate video production, DV is a very viable and trustworthy format. I hate to use the term "broadcast quality," as it's so loosely applied these days, but major networks use a large number of DV cameras in their arsenals. Let's not forget that FCP 3 has come a long way in terms of its interaction with the DV format. On a quick G4, you can get real-time (RT) dissolves, color correction and more. Even the new DV-offline mode may come in handy for some. Obviously, though, there are limits to this new wave of CPU- and software-based DV interaction, such as no real-time preview to an external video monitor. But this is only the beginning of CPU-based DV editing, and the future looks very, very good.
Should I add uncompressed hardware to my FCP system?
Let's stick with the DV conversation for a moment. The instant the image hits the chips in the DV camera, the media is compressed at a ratio of 5:1 as it's laid to DV tape. There are many advantages to this compression. One is the ability to fit your hour-long video on a tiny DV tape. Another is that applications like FCP can do CPU-intensive edits in real time, even on the slower IDE hard drives that come built into the computers (including PowerBooks). The obvious disadvantage is that in some cases the image may appear "crumpled up," like a wad of paper that's been smoothed back out (under very close observation). The artifacts caused by the 5:1 compression may not appear on your video monitor, but upon closer inspection in your FCP canvas at 100% size, they may become more noticeable.
There's another side of the DV coin: the 4:1:1 (NTSC) and 4:2:0 (PAL) color space. Without going off on a wild tangent and getting too technical, the 4:1:1 or 4:2:0 color space can be thought of as compression of the color resolution, with 4:4:4 being the perfect source to compare against. You're basically getting one quarter of the color resolution of a perfect source. This is not to be confused with color depth; all the systems mentioned in this article are at least 24-bit, or "millions of colors." For detailed information on 4:X:X color space, visit Adam Wilt's website.
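As a rough illustration of these ratios, here's a sketch (my own demonstration, not any vendor's implementation) that derives the fraction of full color resolution each scheme keeps from its J:a:b label, where J is the 4-pixel reference width, a is the number of chroma samples in the first row, and b is the number of additional chroma samples in the second row:

```python
# Illustrative only: fraction of 4:4:4 chroma resolution retained
# per 4x2 block of pixels under common subsampling schemes.

def chroma_fraction(a, b):
    """a, b from J:a:b notation; 4:4:4 keeps 8 chroma samples per 4x2 block."""
    return (a + b) / 8.0

schemes = {"4:4:4": (4, 4), "4:2:2": (2, 2), "4:1:1": (1, 1), "4:2:0": (2, 0)}
for name, (a, b) in schemes.items():
    print(f"{name}: {chroma_fraction(a, b):.0%} of full color resolution")
```

This matches the figures in the article: 4:2:2 keeps half the color resolution, while both 4:1:1 (NTSC DV) and 4:2:0 (PAL DV, and MPEG-2 for DVD) keep one quarter, though they distribute the samples differently.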
All of the real-time uncompressed systems for FCP work in 4:2:2 color space, or in other words, half the color resolution (by the 4:4:4 yardstick). When I first heard about "uncompressed NLE systems," I thought they were lossless (like the "None" or "Animation" codec), which basically means every single pixel is exactly reproduced from the source with zero loss. It's true that the file data is uncompressed (averaging about 20 megabytes per second), but the color space is in fact a 2:1 reduction due to the 4:2:2 sampling. So although the image may show compression-like artifacts, they're not the result of file-size compression; they come from the color space conversion to 4:2:2 and, in some cases, the addition of color filtration (or lack thereof).
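The "about 20 megabytes per second" figure is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes an 8-bit 4:2:2 D1 NTSC frame (720 x 486, 29.97 fps, which averages 2 bytes per pixel) and DV's fixed 25 Mbit/s video essence rate; the exact constants are my assumptions, not vendor specs:

```python
# Rough data-rate comparison: uncompressed 8-bit 4:2:2 D1 NTSC vs. DV.
WIDTH, HEIGHT = 720, 486          # D1 NTSC frame size
FPS = 30000 / 1001                # 29.97 fps
BYTES_PER_PIXEL = 2               # 8-bit 4:2:2 averages 2 bytes/pixel

uncompressed_mb_s = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS / 1_000_000
dv_mb_s = 25_000_000 / 8 / 1_000_000   # DV video: 25 Mbit/s

print(f"Uncompressed 4:2:2: ~{uncompressed_mb_s:.1f} MB/s")  # ~21.0 MB/s
print(f"DV:                 ~{dv_mb_s:.1f} MB/s")            # ~3.1 MB/s
print(f"Ratio:              ~{uncompressed_mb_s / dv_mb_s:.1f}x")  # ~6.7x
```

The roughly 6-7x ratio is also why an uncompressed capture of DV footage eats so much more drive space for no visible gain, as discussed later in the article.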
So here's a common misconception: I'll buy a 4:2:2 uncompressed system for FCP, and my DV footage will look better when I capture it to FCP. Not at all! In fact, you will neither gain nor lose quality across the entire process from capture to edit to tape mastering. Remember, DV is 4:1:1 with 5:1 compression, no matter what! If you capture that DV footage into a 4:2:2 uncompressed NLE, you will retain the original DV artifacts created when the footage was written to DV tape. Think of it this way: Remember that day at the office you photocopied your butt on the Xerox machine? Take that hideous photocopy and scan it in a super high-resolution drum scanner. The drum scanner won't make your butt look any better or add ANY detail to the Xerox image. It will only show the ugly (very ugly in this case) artifacts of the Xerox's photocopy output.
Okay, so now that we're thoroughly grossed out, we understand that 4:2:2 uncompressed systems cannot make DV footage look better. When you capture DV via FireWire and edit in DV mode, you're essentially editing the file that was in the camera. It is an exact replication of the source. There is no generation loss or conversion process. Realistically, you're simply doing a file transfer, just as you would over the Internet. So how can an uncompressed system make the quality worse? The moment you step outside of FireWire and the DV codec, you're going to lose generational quality. Here's why.
Since uncompressed 4:2:2 systems use their own codec (compression-decompression) format, the DV clip needs to be converted to the uncompressed system's native format, either by transcoding it (best quality) or by capturing it from your DV deck in real time, either digitally (SDI) or through analog inputs (component, Y/C, composite). Decompressing the DV codec to an uncompressed codec costs you half a generation of quality. Going back to DV from an uncompressed codec costs another half generation. The entire round trip costs one full generation of quality. We'll focus more on this loss a little later in the article.
If you're going to transcode the DV footage, there are tricks to remember to make sure you get results better than an SDI or analog capture. Importing DV into an uncompressed 4:2:2 system can be done in three ways (in order of quality):
1. Transcoding:
DV has a frame size of 720 x 480, which is six lines shorter than the 720 x 486 D1 size that all uncompressed 4:2:2 systems use. The worst thing to do is stretch the DV image vertically to fit the 720 x 486 size. This adds interpolated pixels and totally messes up the natural interlaced fields in the DV image, which will look awful upon playback. Another thing to avoid is placing the DV image dead center in the 486-line-high frame: this will reverse the interlaced fields! You must keep the vertical position in even-numbered pairs. For instance, you don't want 3 rows of black pixels at the top and 3 rows of black pixels at the bottom. For best results, stick with the standard spec of 4 rows on the top and 2 rows on the bottom. Any even-numbered [top, bottom] split of [0,6], [2,4], [4,2] or [6,0] keeps the fields in the proper order, but [4,2] is your safest choice for standardization. Once you get the positioning correct, you can safely render the DV clip to your uncompressed codec's best settings. Obviously, this can be a very time-consuming procedure, but you will obtain the best results. However, this is still not an exact, pixel-for-pixel replication. The uncompressed codec still has to work out how to reproduce the pixels in its 4:2:2 color space, even though the DV image is in 4:1:1 color space. You will get color filtering on top of the original color filtering!
A simple test can show this phenomenon in action.
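The padding rule above can also be sketched in code. This is a hypothetical helper of my own (not an FCP or vendor feature) that pads a 720 x 480 DV frame into a 720 x 486 D1 frame, enforcing the even-pair rule so the interlaced field order is preserved, with the recommended [4, 2] split as the default:

```python
# Hypothetical illustration: pad a 720x480 DV frame to 720x486 D1 with
# black rows, keeping the vertical offset even so fields don't flip.
WIDTH, DV_HEIGHT, D1_HEIGHT = 720, 480, 486

def pad_dv_to_d1(frame, top=4, bottom=2):
    """frame: list of DV_HEIGHT rows, each a list of WIDTH pixel values."""
    assert len(frame) == DV_HEIGHT, "expected a 720x480 DV frame"
    assert top + bottom == D1_HEIGHT - DV_HEIGHT, "must add exactly 6 rows"
    assert top % 2 == 0, "pad in even pairs or the field order reverses"
    black = [0] * WIDTH
    return ([black[:] for _ in range(top)]
            + frame
            + [black[:] for _ in range(bottom)])

dv = [[128] * WIDTH for _ in range(DV_HEIGHT)]   # flat gray test frame
d1 = pad_dv_to_d1(dv)
print(len(d1))                        # 486
print(d1[0][0], d1[4][0], d1[-1][0])  # 0 128 0
```

Note that a [3, 3] split fails the even-pair assertion by design: shifting the image an odd number of lines is exactly the field-reversing mistake described above.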
2. Digital Capture:
SDI is the best way to capture DV in real-time to your uncompressed system. The loss is minuscule. You either get a very expensive DV deck with SDI I/O, or you get a product like the Miranda DV-Bridge to go between your DV deck and your uncompressed system's SDI I/O. Any way you look at it, an SDI integrated system will cost good money. But you will get what you pay for. Interlaced fields and proper positioning of the DV frame at D1 size will all be correct.
3. Analog Capture:
If you have a component DV deck and your uncompressed system has component I/O, then this is the next best thing. The loss is still hard to detect, but it's there nonetheless. If you're one of the many DV users with a simple DV deck like a Sony DSR-11, then Y/C (S-Video) is your next best bet. The quality is still pretty reasonable, actually. Once again, interlaced fields and proper positioning of the DV frame at D1 size will all be correct.
Sometimes the time required to transcode each clip to the codec's uncompressed format outweighs the small quality loss of a digital or analog capture. I always say that if it's so imperative that your DV footage be transcoded instead of captured in real time, then it shouldn't have been shot on DV in the first place. Obviously you'll still want to transcode for short, time-critical pieces like commercials or a single visual-effects clip. But for projects like hour-long documentaries, transcoding could take eons.
Do you really want to go to uncompressed if you're just using DV gear?
Okay, so you've either transcoded or captured your DV footage to your uncompressed 4:2:2 system. You now have a clip that's roughly six times larger in file size than the original DV clip, and it won't look any better in the end. So NOW do you really want to go to uncompressed if you're just using 4:1:1 DV gear? Possibly. Here are some thoughts:
If you incorporate any graphics or animation in your projects, your graphic clips will greatly benefit from editing in an uncompressed FCP timeline as opposed to a DV-based timeline. If you want to use an uncompressed system and you're wondering whether the original 4:1:1 DV footage (transcoded to 4:2:2 uncompressed) will look a lot worse than the graphic clips alongside it, the answer is, "not really." I would say it's more noticeably worse to render your graphics to DV against DV clips in a DV timeline than it is to bring your DV clips into an uncompressed timeline with uncompressed graphics. Remember, the DV video won't gain anything in either process (DV or uncompressed), but the graphics will. Or more specifically, the graphics won't lose as much color space going to an uncompressed timeline as they would going to a DV timeline.
There is a catch though. If you capture DV footage to an uncompressed timeline and add graphics or effects to an uncompressed timeline and in the end you lay your project back to a DV tape, you have successfully wasted time and hard drive space with no gain in video quality. All your uncompressed 4:2:2 graphics and effects will get down-sampled back to 4:1:1 color space with 5:1 compression on the DV tape. Think about your DV source clips too: instead of going back and forth via FireWire where there is zero generation loss in the transmission process, you have converted the DV clip to an uncompressed format (instant loss, and even more so if performed via analog conversions) and converted back to DV on your master tape (another loss through conversion). There are only two advantages to this entire DV/uncompressed process (with graphics and/or rendering included in the mix):
1. Going to a master tape format better or equal to 4:2:2.
2. Mastering to a tape or format different from the 4:1:1 DV source you captured from. This includes formats like MPEG-2 for DVD. Even though MPEG-2 for DVD is 4:2:0 (the same color space value DV PAL uses), the MPEG-2 codec is entirely different from the DV codec, even if the two share the same or a similar color space value. Whenever you change codec formats, transfer from the best resolution possible!
Let's examine the process of 4:1:1 DV to uncompressed 4:2:2 and back to 4:1:1 DV a little closer. Some people insist that rendering their graphics on an uncompressed timeline and recording the edit on a 4:1:1 DV tape is better than simply rendering their graphics to DV on a DV timeline and recording to their DV deck. This simply is not true, as there is no loss or gain in the process. Let's look at a table of generation loss in a 4:1:1 DV to 4:2:2 uncompressed mix, starting with the most basic: pure DV editing.