Because it is just a pain, and with debayering, not such an issue. See, in nature, color tends to change less than the brightness component, so color tends to change less pixel by pixel. Given that, plus some other cues, including the overlap between the channels' response curves, debayering routines can do a good job of guesstimating the missing color components per pixel.
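To make that concrete, here is a minimal sketch (purely illustrative, not any camera's actual pipeline; the function name is made up) of the simplest bilinear debayer, which fills in the two missing colors at each photosite by averaging the nearest real samples of each channel, leaning on exactly that smooth-chroma assumption:

```python
import numpy as np

def bilinear_debayer(mosaic):
    """Fill in the two missing colours at every photosite by averaging
    the nearest samples of each channel, relying on the assumption that
    colour changes more slowly than brightness from pixel to pixel.
    Assumes an RGGB tile; real debayer routines add edge-aware logic."""
    h, w = mosaic.shape
    # Masks marking where each colour was actually sampled (RGGB tiling).
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    def box3(a):
        # Sum over each pixel's 3x3 neighbourhood (zero-padded borders).
        p = np.pad(a, 1)
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    rgb = np.empty((h, w, 3))
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        sparse = np.where(mask, mosaic, 0.0)
        # Neighbourhood average of only the real samples of this colour.
        est = box3(sparse) / box3(mask.astype(float))
        # At a sampled photosite, keep the measured value itself.
        rgb[..., ch] = np.where(mask, mosaic, est)
    return rgb
```

On a flat patch of colour this reconstructs the original exactly, which is the point: where chroma really does change slowly, the guess is a good one. It falls apart on sharp coloured edges, which is where the fancier edge-directed routines earn their keep.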
I just realised something: it may be possible to see purer colors. Because of the overlap in the response curves for colors in our eyes, individual primaries are often picked up by multiple receptors, reducing purity. So it should be possible either to pick wavelengths that have no cross response, or to trick the other receptors into not seeing a color. I can't remember if this is possible, but it's interesting.
Like the dual color-blind inspirational reforming political leader might say: "There are no white people... there are no yellow, red or black people in this world... there are only all us green people!" ;)
So what does that mean to us laymen who are only viewing the final produced image with our eyes, and not intellectually as a series of numbers? Does this mean, according to one of the last posts, that the Sony F35, which does true 4:4:4, basically has greater and truer color fidelity than the Red MX/Epic etc.? Does this mean its colors are truer, more accurate, richer, what? Does this explain Red's micro-tone separation problems on skin tones etc.? Just wondering.
I hardly think that the F35's particular color filter array makes it significantly more "true" 4:4:4 than something with a Bayer pattern; it is still an array, and any one photosite still only has one color filter on it. From what I understand, the C300 does a similar thing with its Bayer CFA in that it treats quad pixels as RGB pixels; R/G/B are still not co-sited. I would tend to think that these cameras that skip the demosaic/debayer step aren't giving you more color information, they just give you less luminance information, since they do not attempt to reconstruct or interpolate missing RGB data from the full pixel array. So maybe the F35 is 4:4:4 in 1080p terms, but using that same terminology in 4k terms, the F35 delivers 4k 2:2:0, whereas RED delivers 4k 4:2:0 (sorry Graeme for perpetuating the misuse of 4:4:4 etc. outside of its intended YUV sampling realm, but for simplicity rather than accuracy it seems to be something people have an easier time understanding for comparison). Perhaps all you'd have to do to get back F35-like 4:4:4 from RED 4kHD is a nearest-neighbor downscale to 1080p? Or maybe RED will eventually release the 1080p RGB recording mode mentioned years back, and maybe it will just read quad pixels as RGB pixels without debayering, but why would we want that over 4k debayered and downscaled?
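For illustration, here is a toy version of that "read quad pixels as RGB pixels without debayering" idea (assuming an RGGB tile; the function name is hypothetical):

```python
import numpy as np

def quad_to_rgb(mosaic):
    """Read each 2x2 RGGB quad as a single RGB output pixel: R and B
    come straight from their photosites, the two greens are averaged.
    No interpolation, so every output value was actually measured, but
    you give up half the linear resolution, and R/G/B are still not
    co-sited within the quad."""
    r = mosaic[0::2, 0::2]
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0
    b = mosaic[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)
```

A 4k mosaic comes out as a 2k-ish RGB frame with no guessed values in it, which is the appeal; the catch, as above, is that a proper demosaic-then-downscale from the same data generally measures sharper.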
Maybe what we need are displays that use the same CFA as our cameras; perhaps OLPFs, aliasing, and debayering would then no longer be so problematic. Actually, it just occurred to me that that is basically what the F35 does, since most LCDs use a similar RGB stripe arrangement for their pixels...
As I mentioned above, what you're talking about now is a complete abuse of terminology. 4:2:2 etc. refer to Y'CbCr data and say nothing about measured resolution. If you want to talk measured resolution, you're going to have to be very careful with your terminology....
The F35 doesn't measure a full 1920 horizontally - it's soft. That's why it's not a 3k camera (or a 6k camera, as others have claimed). Vertically you get pure luma aliasing; horizontally it's pure chroma moire. The chroma moire is because there's a 1/3 output pixel offset in the red and blue compared to green, so you get a wave of rainbow through what should be a black and white image! The luma aliasing is due to inadequate optical filtering.
So there's your "True 4:4:4" - go on - take it, warts and all. As I said above, basic sampling theory denies it to you.... And if you're tempted to go the RED route of higher resolution acquisition and "do a Canon" and deny yourself access to that higher resolution data, you won't be totally happy either, because the "average green" demosaic is softer than a full demosaic (or downsampling demosaic) and still shows chroma moire that is not visible in the full demosaic from the same data (I know because I coded up such a decode to try it out...). I'd actually say Canon's approach is better than Sony's on the F35, though...
This thread was not started to discuss the use of proper terminology. The truth is that our industry's terminology is not correct or incorrect but simply in a constant state of flux. 4:4:4 was coined when we may have both been in grade school. Technically, the term implies a specific technical conversation. However, in 2012 the actual importance of full RGB color sampling (4:4:4) has changed relative to emerging technology like RED's R3D file format. So the question still begs to be answered: How does the machine work? Does the R3D contain full luma resolution and full RGB color resolution (fidelity)?
Wayne is on target here when he says "Bayer is adequate if you treat it right, especially downsized right, but not as good as triple sampling. My pixel shift handled right might be suitable, but less accurate again, at 8k+ it may not be much of an issue." Summed up: any Bayer-type CMOS sensor can get close to proper 4:4:4 full color sampling by oversampling luma and chroma resolution with the intention of downsizing to an intended target resolution. Sample at 4K and conform to 2K/HD. The ARRI Alexa, Sony F65, and Canon C300 work this way. The Epic is designed to conform for max luminance resolution regardless of color. The debayering and software conversion process can be tweaked to make color reproduction seem more complete, but it can never add discrete RGB information that was never sampled at the sensor to begin with.
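As a sketch of that "sample at 4K, conform to 2K" step, here is a plain 2x2 box-average downscale of an already-demosaiced RGB frame (a simplification; real conform pipelines use better resampling filters, and the function name is made up):

```python
import numpy as np

def box_downscale_2x(rgb):
    """Average each 2x2 block of a full-resolution RGB frame into one
    output pixel. Run after demosaicing a 4K Bayer capture, this folds
    four interpolated samples into every delivered 2K pixel, which is
    why oversample-then-downscale gets close to full per-pixel RGB."""
    return (rgb[0::2, 0::2] + rgb[0::2, 1::2] +
            rgb[1::2, 0::2] + rgb[1::2, 1::2]) / 4.0
```

The averaging also suppresses any residual per-pixel debayer guesswork and noise, which is a big part of why the downscaled 2K from a 4K Bayer capture holds up so well against "native" approaches.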
"Does the R3D contain full luma resolution and full RGB color resolution (fidelity)?" It contains the full detail in both luma and chroma that the sensor saw.
I'm sorry, but asking questions about Y'CbCr chroma subsampling of sensors is like asking what colour a bald man's hair is. It's the wrong question to ask - it's meaningless. The correct question to ask is: in the final RGB image that is decoded from the raw R3D files, what is the measured luma and chroma resolution? That question has been answered above.
Yes, the sensor can only record what it can see. And in the Epic's case, it samples Red and Blue at only a quarter of its photosites (and Green at half) toward its intended 4K RGB image. Does the Epic's R3D file contain the full luma and chroma resolution that was sampled at the sensor? Yes. Is that resolution 100% luma, 100% Green, 100% Red, and 100% Blue per pixel at 4K? Because of the inherent design of the Epic's Bayer sensor, the answer is no.
Chroma subsampling and baldness aside, let's not decide that these issues are far too "technical" to even discuss with those who actually shoot with these cameras.
Adrian, even taking a theoretically perfect RGB sampling sensor, it's still not going to have 100% of each of R, G, B, because it can't avoid sampling theory and still needs an optical filter. As I mentioned above, Sony tried to do RGB sampling with the F35 and it just led to chroma moire and luma aliasing. Foveon has poor colorimetry, noise issues (from the layered sensor) and luma aliasing (from lack of optical filtering). Three-chip cameras are much lower resolution, and don't work well with large sensors or cinema glass.
The only real solution for the problem is exactly what we're doing - high resolution sensors, optimum optical filtering.
These issues are not too technical - that's why we discuss them.