05-06-2007, 11:21 PM
Phew, just trying to collate a couple of things previously mentioned.
1. Image data from the CMOS sensor/Bayer filter is passed through a 14-bit A/D converter.
2. The black output of all pixels is set using extended (>4.5K) masked ‘black’ pixels.
3. The white output of R, G & B pixels is set to 5000K using factory calibration.
4. Uninterpolated raw pixel output is compressed using wavelet-based compression.
5. Native camera output is a compressed 12-bit linear Red Raw 4K file.
6. An internal demosaic produces a compressed 12-bit linear Red RGB HD file.
7. Compressed Raw or RGB files, plus an additional QT wrapper, are written to Red media.
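Steps 2 and 3 above can be sketched generically: subtract the black level measured from the masked pixels, then scale each Bayer channel by its factory white gain. The function names, the RGGB layout and all numbers here are my assumptions for illustration, not Red's actual pipeline:

```python
import numpy as np

def calibrate_bayer(raw14, black_level, white_gains):
    """raw14: 2D array of 14-bit sensor counts, RGGB mosaic (assumed).
    black_level: mean count of the masked 'black' pixels.
    white_gains: per-channel (R, G, B) gains from the 5000K calibration."""
    # Step 2: set black by subtracting the masked-pixel level.
    signal = np.clip(raw14.astype(np.float64) - black_level, 0, None)
    # Step 3: set white by applying per-channel gains at each Bayer site.
    gains = np.empty_like(signal)
    gains[0::2, 0::2] = white_gains[0]  # R sites
    gains[0::2, 1::2] = white_gains[1]  # G sites
    gains[1::2, 0::2] = white_gains[1]  # G sites
    gains[1::2, 1::2] = white_gains[2]  # B sites
    return signal * gains

def to_12bit(signal, full_scale=2**14 - 1):
    """Requantize the calibrated 14-bit range to 12-bit linear codes."""
    return np.round(signal / full_scale * (2**12 - 1)).astype(np.uint16)
```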
1. Ingest the 12-bit linear Red Raw 4K file from Red media or backup.
2. Output the required file version to the NLE at various bit depths, color spaces and resolutions.
Raw: 8, 16, 32-bit linear; 10-bit log
RGB: 10, 12, 16-bit linear; 10-bit log
Resolution: 720p to 4.5K
Color space: Native RGB, YCbCr, sRGB, XYZ …
3. tba....
Is any of that correct?
stop press: corrected...thanks cail
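The 10-bit log option in the output list above implies a lin-to-log transform somewhere in the chain. Red's actual curve isn't public, so this sketch uses a generic Cineon-style encode purely to illustrate the shape of such a step:

```python
import numpy as np

def lin12_to_log10(lin12):
    """Map 12-bit linear code values to 10-bit Cineon-style log codes,
    anchoring linear full scale at log code 685 (the Cineon white point)."""
    norm = np.clip(lin12 / 4095.0, 1e-6, 1.0)
    # 300 = 500 codes per decade (0.002 density/code) x 0.6 film gamma.
    log_code = 685.0 + 300.0 * np.log10(norm)
    return np.clip(np.round(log_code), 0, 1023).astype(np.uint16)
```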
05-07-2007, 08:54 AM
The nominal 'white point' at 5000K is, as I understand it, defined by the color filters over the CMOS photosites installed in the factory. No white cards necessary (although you can definitely shoot one for use in REDCINE later).
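If you do shoot a white or gray card, balancing in post boils down to deriving per-channel gains from the card patch. A minimal sketch with names of my own invention (this is not REDCINE's API):

```python
import numpy as np

def gains_from_card(patch_rgb):
    """patch_rgb: (N, 3) linear samples of the card patch.
    Returns (R, G, B) gains that equalize the channel means to green."""
    means = patch_rgb.mean(axis=0)
    return means[1] / means  # anchor to green, as most raw pipelines do
```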
05-07-2007, 04:38 PM
Thanks very much Cail,
Yep, that's 5000K not 5K!
As for actual 'native' white point calibration technique, I wouldn't have a clue.
However, I'm guessing you'd want to know each pixel's saturation point relative to the other pixels,
so that color looks uniform across the image and noise and clipping levels are equal.
How they get the transfer characteristics of each pixel homogeneous is anyone's guess!
Also, I assume this is measured in some 'absolute' way against the outside world before it leaves the factory, because... that's what factories like to do, I guess!
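Per-pixel uniformity of the kind guessed at above is conventionally achieved with a two-point (offset + gain) correction built from a dark frame and a uniformly lit flat field. Whether Red does exactly this is my assumption; the sketch below is just the generic technique:

```python
import numpy as np

def build_correction(dark_frame, flat_frame):
    """Per-pixel offset from the dark frame; per-pixel gain normalizing
    each photosite's flat-field response to the sensor mean."""
    offset = dark_frame.astype(np.float64)
    flat = flat_frame.astype(np.float64) - offset
    gain = flat.mean() / flat
    return offset, gain

def correct(raw, offset, gain):
    # Two-point correction: remove fixed-pattern offset, equalize gain.
    return (raw.astype(np.float64) - offset) * gain
```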
05-08-2007, 11:09 AM
I started the list at the beginning of this thread as I’m trying to define the basic elements of the Red camera,
redcine and other post tools within a complete workflow.
So when quality problems appear, armed with a little theory, I have a chance of finding out where the problems may be.
This all started when I began to look at the nle/grading/compositing options for Red that were out there, see here (http://www.reduser.net/forum/showthread.php?t=1476).
I rang round, got a few product names, costs, resolutions etc, but found most of the information nearly useless in a real world situation.
Okay, so then I decided to look at the different codecs & bit depths that each product supported, as I realized each of these products
was merely a container for those codecs/file types and I should focus on the codecs first. I was back where I started.
Actually worse: I didn't know the native color spaces, bit depths or lin/log encodings of these applications.
I hadn’t even begun to look at conforming standards, edl/xml etc, used by each of these applications.
Okay, I realized I needed to start at the very beginning of the workflow and work my way along.
Things like, the camera outputs this wavelet compressed raw/rgb signal, it uses this bit depth and this color space....natively!
And that Redcine can do certain lin/log, bit depth, resolution, color space and codec conversions.
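Of the conversions just listed, bit-depth changes are the simplest: a rescale between code ranges. Going up (12-bit to 16-bit) loses nothing; coming down is where rounding happens. A generic sketch, not Redcine's implementation:

```python
import numpy as np

def rescale_bits(codes, src_bits, dst_bits):
    """Map integer code values from a src_bits range to a dst_bits range."""
    src_max = (1 << src_bits) - 1
    dst_max = (1 << dst_bits) - 1
    return np.round(codes.astype(np.float64) * dst_max / src_max).astype(np.uint32)
```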
Moving on, Graeme was helpful when he mentioned the robustness of the converted image is derived from the sensor/wavelet compression.
Because when the camera comes out I can isolate this and find out how far I can push this clean sensor/bayer/wavelet signal in grading.
But there are other problems like, if I need to blend different source footage/composites etc from different colour spaces, resolutions,
compressions, codecs etc.....what will be the optimum workflow?
In a scene shot by a Red camera, I certainly (still) don't know how much color cast (and the subsequent white balance)
will cause colors to clip in another color space or in color correction downstream!
I didn’t even know how a ‘viewing’ lut and a ‘burnt in’ lut are separately managed.
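My understanding of that distinction, as a toy 10-bit example: a 'viewing' LUT is applied only on the way to the display and never touches the stored pixels, while a 'burnt-in' LUT rewrites the pixel data itself. Names and the x1.2 contrast LUT are mine, purely illustrative:

```python
import numpy as np

def apply_lut(image, lut):
    """lut: 1D array of 1024 output codes indexed by 10-bit input code."""
    return lut[image]

# Toy contrast-boost LUT (x1.2 in integer math, clipped to 10-bit range).
lut = np.minimum(np.arange(1024) * 6 // 5, 1023).astype(np.uint16)
stored = np.array([[100, 500], [800, 1000]], dtype=np.uint16)

preview = apply_lut(stored, lut)  # viewing LUT: 'stored' stays untouched
baked = apply_lut(stored, lut)    # burnt-in: this would replace 'stored'
```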
I realize there’s no substitute for experience; however, I needed more info, so I read quite a few books on this, but nothing came close to what’s required.
The other day I went up to the national film school because I heard they had a 4K projector.
I went into the auditorium looking for the projectionist and met a crew setting up a Thompson Viper.
They informed me the Viper was for a 3rd party shoot and the school was very much SD, using HDV for any high definition. Great.
Whilst I was there I went through their library looking at video theory books. Most of them were over 4 years old and nearly useless.
My point is……Reduser.net is my school!
As I said, I’m starting a workflow list. I’m not after Red ip or undisclosed specifications.
I’m after the simple facts of Red / RedCine so I can begin to plot the basic elements of a workflow,
so hopefully I & others, can use this info to make some quality calls.
Sure, Red is going to have further quality improvements/announcements coming along, plus some convenience things like Red API support in different packages.
I know lots more good stuff is coming…
But right now, I don’t really care if the whole Red thing was 8bit sRGB standard definition workflow.
So long as I have a general idea of what’s happening where.
I also know, from the tiny amount I’ve looked at this field, that there must be some real hurricane problems in the image processing, math conversions/interpolations,
lack of standards and just the pure ‘real-time’ crunching required to produce a robust compressed image on the fly!
So my utter respects and kudos to the whole Red team for not only having to know all of the information mentioned above,
but 5 times more, and in a real world manner, and then build it!
And all handled by such a tiny team! If you guys take 3 more months with this….I (am just beginning to) understand!!!
ps: take a couple of weeks off…Jim can look after everything!
pps: don’t worry about the list, I’ll work it out.