What about this rumor? Is it similar to Foveon?
No idea, Uli. But the picture attached is nothing like Foveon, which uses photon penetration depth through silicon as a (poor) colour discriminator.
Unsharp mask is only 'my friend' if it is used to recover slight softness. Kodak called that 'aperture correction' back in the Cineon scanner days.
Sharpening the linear looking RCX output does not recover all the detail that the Rocket card has, unfortunately.
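For reference, unsharp masking in its classic form just adds back a scaled difference between the signal and a blurred copy of it, which boosts high frequencies much like the 'aperture correction' mentioned above. A minimal 1-D sketch (illustrative only, not any particular scanner's or camera's implementation):

```python
import numpy as np

def unsharp_mask(signal, amount=1.0):
    """Classic unsharp mask on a 1-D signal: add back (signal - blurred)."""
    # 3-tap box blur with edge padding as the 'soft' copy
    p = np.pad(signal.astype(float), 1, mode="edge")
    blurred = (p[:-2] + p[1:-1] + p[2:]) / 3.0
    return signal + amount * (signal - blurred)

# A step edge gains the familiar over/undershoot that reads as 'sharper'
step = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
print(unsharp_mask(step))
```

Note the overshoot on either side of the step: that is all unsharp mask does, which is why it can only recover slight softness, not detail that was never sampled.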
If I apply superresolution to the RCX output, it resolves a lot less than superresolution applied to the Rocket output.
Superresolution 'loves' aliased inputs, because there is a lot of detail to be extracted from an undersampled signal, and that is exactly what Bayered data is.
Conrad, you are totally right: The future of this is combining superres into the debayer process. Old debayer research was driven by still image needs.
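To see where a superres-aware debayer has room to work, here is a toy sketch of what a plain demosaic does (pure NumPy, neighbour-averaging interpolation; nothing like RED's actual pipeline or any production algorithm). The per-channel gaps it has to guess at are exactly the places extra detail could be mined:

```python
import numpy as np

def conv2_3x3(a, k):
    """3x3 convolution with zero padding (pure NumPy, for illustration)."""
    h, w = a.shape
    p = np.pad(a.astype(float), 1)
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def demosaic_bilinear(mosaic):
    """Naive bilinear-style demosaic of an RGGB Bayer mosaic -> HxWx3 image."""
    h, w = mosaic.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # R photosites
    masks[0::2, 1::2, 1] = True   # G photosites (even rows)
    masks[1::2, 0::2, 1] = True   # G photosites (odd rows)
    masks[1::2, 1::2, 2] = True   # B photosites
    k = np.array([[0.25, 0.5, 0.25],
                  [0.5,  1.0, 0.5 ],
                  [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        sparse = np.where(masks[..., c], mosaic, 0.0)
        weight = conv2_3x3(masks[..., c].astype(float), k)
        rgb[..., c] = conv2_3x3(sparse, k) / np.maximum(weight, 1e-9)
    return rgb

# Sanity check: a flat grey frame survives naive debayer exactly;
# detail near the photosite pitch would be smeared by the interpolation.
flat = np.full((8, 8), 0.5)
print(np.allclose(demosaic_bilinear(flat), 0.5))  # True
```

The interpolation step is where old still-image debayer research stopped; folding multi-frame superresolution into that step, instead of running it after the fact, is the idea being discussed above.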
Let's shift gears and move beyond that! I am not too concerned with processing speed. Time will solve that.
The wonderful thing is that the detail can be 'mined' out of the old r3d's when the above is productized.
Red has a lot of irons in the fire, I understand fully this is lower priority.
That is a laugh. I am sure Rob was somebody I much liked, but there is so much to it that is not even in the books, which I came up with (invented). I received a personal professional opinion about REDCODE compared to the original CineForm Bayer wavelet compression years ago, but I don't know what has been done to it since then.

Genius is as rare as hen's teeth. If you take everything you can learn, and learn from what everybody else has done, you may still only reach the state of the art at most; getting beyond that is the tricky part. Sorry to be such a pessimist, but a number of behind-the-scenes things have happened, and I have seen so many things turn out less than they should be. It is good that Rob is, from memory, a worthy sort of person. However, until somebody can meet my 1000:1 true lossless compression goal, they have not caught up. I know I have not mentioned this before, because people spin out when I say a conservative 10:1, let alone 100:1. Almost every forecast I give is an understatement, often a severe understatement. I am pushing for fast, visually lossless routines of up to 10,000:1. The ideas I have pursued to their end game are so complex and so many years in advance. People are not good at seeing and manipulating the domain space of problems; many designs are inadequate, if not hopeless, written by smart but nevertheless half-clueless people, whom I treat nicely, thank, and even praise for what they have actually achieved. That is life, unfortunately.

I wish that I had not been too sick to work on my own, and that these dead spots had not turned up in my head; I could have had my own business, or at least my own research department. I can't even remember properly who Rob was now, and it is hard to grasp onto my past ideas and how best to fit them together. I should at least have a consulting job advising research and development departments how to improve things. I will have trouble advising on how to do things the best way now, but I can still swing a mean solution.
One thing I can advise for anybody doing engineering here: grasp onto the process-and-mechanism domain. That is where you can make vast improvements by rearranging things, because we live in a physical, process-and-mechanism-based universe; it is that simple. In programming it is not just which object or routine you use, it is which one you can make better, in process and mechanism. It is good to have people around who have a clue what to do. Maybe this REDRAY will turn out to be really something, and my faith in humanity will be restored.
There's some really juicy stuff here. Nicely done, boys. But let's be clear and get back to the point: is the R3D a true 4:4:4 recording?
What we are talking about here is full color sampling at the pixel-by-pixel level. We are not talking about software conversions and color-space transforms. The question becomes: "Is each pixel in the intended, recorded resolution (in this case 4K) derived from individual, finite RGB values?" With regard to the RED One, Epic, and Scarlet, the answer is no. These are not true 4:4:4 cameras, and, to be honest, I have my doubts that they are even 4:2:2. RED's cameras sacrifice color fidelity in favor of higher resolution. For the Epic to record a true 4:4:4 R3D file there would need to be twice the number of pixels on the sensor. Whether the file is recorded as RAW sensor data or 4K video is irrelevant: if full RGB color is not discretely sampled at the sensor for the given output resolution, then it is not 4:4:4. As my father always said, "There is no such thing as free."
Now, I'm sure the Epic could be capable of delivering a very nice 1080p 4:4:4 output, but because the system is designed to output only 4K+, there is no way to recover this lost color information from within the R3D file.
Adrian, you miss an important point: in any sane camera design the input image is bandwidth-limited via optical low-pass filtering so that it won't produce excessive aliasing. That means you shouldn't be seeing resolution at the sensor level implied by your definition of 4:4:4. There is one type of stills camera that acts like this, the Sigma Foveon, and it shows quite strong luma aliasing; so although you do get the colour resolution you desire, you also get corrupting aliases along with it. Sampling theory says there's only one real way around this: aim for a much higher resolution and employ proper filtering.
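To make the sampling-theory point concrete (a generic numerical illustration, not tied to any particular sensor): a frequency above Nyquist produces exactly the same sample values as a lower, folded frequency, so the two are indistinguishable after sampling. That is why the OLPF has to remove such detail before the photosites ever see it:

```python
import numpy as np

fs = 100.0       # sampling rate (arbitrary units, e.g. photosites per mm)
f_true = 70.0    # detail frequency above the Nyquist limit (fs / 2 = 50)
n = np.arange(200)

samples = np.cos(2 * np.pi * f_true * n / fs)
# The folded frequency fs - f_true = 30 yields identical sample values:
alias = np.cos(2 * np.pi * (fs - f_true) * n / fs)
print(np.allclose(samples, alias))  # True: 70 cycles alias down to 30
```

Once the samples are taken, no amount of processing can tell the real 70 from the aliased 30; filtering before sampling is the only fix.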
In single-chip (non-Foveon) cameras, the one that fits the bill is the Sony F35, and guess what? It shows strong luma aliasing vertically and chroma moiré horizontally. Again, by your definition it's 4:4:4, but it's not a 4:4:4 that I want, due to the corrupting influence of the aliases...
That said, as I say above, the terminology of 4:2:2 etc. implies you're starting off with an RGB image, which you transform to Y'CbCr and decimate the chroma. Applying such terminology elsewhere in the image making process is wrong.
Bayer has been suggested as comparable to 4:2:2, but in reality it is just a different beast that does not strictly match even 4:2:0, as not every pixel is green. Still, it can be made to do a good job against 4:2:2, and even 4:4:4 (just an opinion), given suitable conditions of course. In fact it is close to 4:4:4 (the real color gaps become sub-pixel) when the resolution is dropped to roughly half of the debayered output. So some might use this to claim that the real color resolution is only half, etc., but let's not go there.
I am not referring to 4:4:4 or 4:2:2 Y'CbCr and chroma subsampling for bandwidth's sake (compression in order to transmit data after it has already been recorded). I am referring to full-resolution color sampling, which is usually designated 4:4:4 in the 10-bit RGB video model. The Epic uses a CMOS Bayer sensor to capture its images. Wikipedia offers a simple explanation: "In a Bayer filter arrangement, green is given twice as many detectors as red and blue (ratio 1:2:1) in order to achieve higher luminance resolution than chrominance resolution." Therefore, a proper 2:1 Bayer pattern will produce a full (4:4:4) luminance-resolution image with subsampled color via a fully sampled green channel, a half-sampled blue channel, and a half-sampled red channel (4:2:2).
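The 1:2:1 ratio quoted from Wikipedia is easy to verify by tiling an RGGB pattern across a photosite grid and counting colours (the grid size below is purely illustrative, not any camera's actual sensor dimensions):

```python
import numpy as np

# Tile a 2x2 RGGB Bayer cell across an illustrative photosite grid
h, w = 2160, 4096
tile = np.array([["R", "G"],
                 ["G", "B"]])
pattern = np.tile(tile, (h // 2, w // 2))

counts = {c: int((pattern == c).sum()) for c in "RGB"}
print(counts["G"] // counts["R"], counts["G"] // counts["B"])  # 2 2
print(counts["R"] == counts["B"])  # True: the 1:2:1 R:G:B ratio
```

So green covers half the photosites and red and blue a quarter each, which is exactly the full-green, half-red, half-blue sampling described above.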
The Sony F35 is an excellent example of a camera with the ability to provide fully sampled color images for its defined output (1920x1080 HD). The F35 uses a single CCD imager made up of 1920x1080 triple RGB pixels. By definition the F35 uses discrete red, green, and blue pixels to sample full chroma (4:4:4) before combining them into a single luma/chroma pixel value. Technically the F35 has three times the resolution of HD in its luminance sample rate. Is it a 3K camera? Sony didn't think so. The Arri Alexa uses a Bayer pattern similar to the Epic's, and though it comes close to the Epic in luminance resolution, it was only marketed as an HD or 2K camera. The Alexa, like the new Sony F65, purposefully oversamples its intended luminance resolution in order to properly sample its chromatic resolution. So the Alexa samples 3.2K luma and HD/2K chroma to derive a proper 4:4:4 HD/2K output. The F65 samples 8K luma and 4K chroma to derive a proper 4:4:4 4K output. Can the Alexa record RAW sensor data at 3.2K? Yes. Is it a proper 4:4:4 full-color 3.2K recording? No. Can the F65 record 8K RAW? Yes. Is it a proper 4:4:4 8K recording? No. The same applies to the Epic. And because the Epic is only intended to provide a 4K pipeline via its R3D files, it simply does not have enough pixels on the sensor to sample full 4:4:4 color at 4K.
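The oversampling arithmetic in this argument reduces to halving the photosite count per axis, since each 2x2 Bayer tile contributes only one R and one B sample. A rule-of-thumb sketch using the figures quoted in the thread (not manufacturer specifications):

```python
# Rule of thumb only: each 2x2 Bayer tile holds one R and one B photosite,
# so "every output pixel gets its own R and B" caps out near half the
# photosite resolution per axis. Widths below are the thread's figures.
def full_chroma_width(sensor_width_px):
    """Widest output (in pixels) with one R and one B sample per pixel."""
    return sensor_width_px // 2

print(full_chroma_width(8192))  # ~8K photosites -> 4096: the F65-style 4K claim
print(full_chroma_width(3200))  # ~3.2K photosites -> 1600: roughly HD chroma
```

By this yardstick a ~4K Bayer sensor supports full-chroma output at only ~2K, which is the core of the objection to calling a 4K R3D "true 4:4:4".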
Saying that 4:4:4 color sampling does not apply to RED's R3D files because they are RAW may be technically accurate at that stage in the workflow, but it is a mostly irrelevant distinction when discussing the finer details of how a camera works for its intended end user: the artist, not the technician.