Less Pixels More Bit Depth...



Shawn Larkin
12-10-2008, 08:27 PM
To Whomever It May Concern,

Great, now we have a 28K Camera that no one in the world can view at full resolution--pixel for pixel. (But I can't wait for the next gen "super-duper-ultra-higher-than-the-definition-of-high-definition" monitor and/or video projector.)


So I get how 8K, or 24K, or 28K or 1 Billion K is great for still photography. But we all know managing anything over 2K is tough. Even if it is compressed, managing 4K and 6K will be, well, "interesting" to deal with in post. Thank God parallel processing is just around the corner! But I still wonder how one might actually see all of those pixels for a composite or just, well, to actually see them.

Anyhow, this is not to say I don't LOVE all those pixels. I do. And I can't wait to actually (try to) use them to make movies. But what I and my trusty DI colorist want more than more pixels is--as the title of this post suggests--MORE BIT DEPTH. I say let's go to 16; that's a nice manageable number that really helps with the banding, noise, and other artifacting of the image. It's already supported in the still world, and it's about time those of us in Film Post used it (although 16 bit TIFF has been a VFX standard forever, 10 bit DPX is the new standard for DI). 10 bits ain't enough and 12 is, well, OK. But I've been playing with 16 vs. 12 vs. 10, and in my testing 16 is where it's at. It's not as difficult as float to process or work with, and it really helps when you start pushing the gamut far.

How hard would it be for RED to develop REDCODE to support 16 bit? I don't know. But I do know that I much prefer grading 16 bit images to 10 or 12 bit images. And you would too if you tried it.

So there you have it--One man's opinion about what all of you pixel-steroid abusers might have overlooked.

Or Not...

Jeff Carpenter
12-10-2008, 08:41 PM
I'm pretty sure the Monstro sensors are 16 bit. Correct me if I'm wrong.



Shawn Larkin
12-10-2008, 08:55 PM
I did not see it on the spec sheet. But I think--don't know for sure--that it has more to do with the REDCODE. Anything shooting RAW (in the DSLR world at least) can then be processed/encoded/transcoded into the appropriate format with the appropriate bit depth.

AND maybe the hardware has to be optimized for the capture to allow for 16 bit--I don't know.

But I do know that up to now, you can only go to 12 bit with RED ONE. So either way, I want to get the equivalent of 16 bit native files out of the camera/REDCODE. I don't want to take a 10 or 12 bit RAW capture and up-res the color bit depth...if you know what I mean.

Is there a link to where this process is specified or shown possible that I don't know about?

Hell, for that matter AND since we are talking about "possibility" here, why not have the option to go to float? I know that would be just as impossible to manage as 28K, but at least then they'd match: impossible resolution with impossible bit depth.

The Bottom Line is: those of us that care about quality (and not just pixel quantity) would like more options, starting at 16 bit. If you can throw in float just for good measure, why not? I mean, I wouldn't shoot it; that's crazy talk. But I wouldn't shoot 28K for cinema either.

So if anyone knows the "real" spec for color bit depth OR how the camera and REDCODE go about acquiring the image bit depth, please let us (me) know.

Merci.

NateWeaver
12-10-2008, 09:27 PM
It's been discussed before (with Graeme involved, I believe) that 13 or 14 bits would only be recording the noise floor of the imaging chain, let alone 16. The CMOS readout would have to get quieter for more than 12 bits to make a difference.

In other words, the A/D is not really the weakest link right now, although that's a really bad way to put it.

Paris Remillard
12-10-2008, 09:30 PM
Monstro is 16-bit.

Shawn Larkin
12-10-2008, 10:08 PM
I've read a lot of the previous posts on this stuff. Mr. Nattress, combatentropy, and others really get into it.

Basically, if Monstro is 16 bit, we hope that it's giving us a clean enough image AND the conversion to/from RED CODE can make use of this clean 16 bit signal.

Otherwise--and I am really oversimplifying here--it's best that they continue to stick with 10 bit log / 12 bit linear, which best fits the signal at its cleanest without eating up more (unnecessary) disk space.

As a side note, I would prefer that the image was linear, because ALL the apps anyone uses in post end up doing a log-to-lin conversion for display and compositing/coloring purposes.

Anyone have any more info on the possible Epic bit depths? Will RED have it handled to actually make use of a clean 16 bit signal (one that isn't crammed into 10 or 12 bits)?

jbeale
12-10-2008, 10:16 PM
It may not be directly relevant, but of all the DSLR cameras on the market now that currently provide 14 bit RAW files, I believe there is no clear evidence that all those bits are actually justified. That is, 12 bit would still capture all the "real" sensor information in those cameras, the remaining bits being just noise. Maybe Red's sensors are better than the current DSLRs, but it would be pretty remarkable if they even came close to 16 bits of true DNR.

Kyle Presley
12-10-2008, 10:20 PM
The only thing you'd get out of a 16 bit Red One is more accurate noise, as Graeme has said over and over and over again. It may help with banding though...

Shawn Larkin
12-10-2008, 10:38 PM
I am hoping that Monstro does actually provide a real 16 bit DNR image capture (one that isn't just cramming the noise of a not-truly-16-bit image into the file), and that I can work with that 16 bit file in post. I suppose that is ultimately what this thread is about.

Pawel Achtel
12-10-2008, 11:45 PM
One way to reduce noise is to capture at higher resolution and downsample. An additional advantage of such an approach is higher MTF at high frequencies after the down-sample. No one said that you have to finish or distribute in 9k or 28k. The advantage of 28k or 9k acquisition is the light gathering capability of the larger sensor. All things being equal, this is your way to 16-bit 2k or 4k content. But this is achieved in post, not in the camera. Whilst 16-bit lower resolution acquisition is impossible to achieve using existing technology, the end result is possible by using the 9k and 28k sensors that you're complaining about. The workflow is different, though.
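
The oversample-and-downsample route can be sketched numerically. This is a toy model with assumed numbers (a flat gray patch, Gaussian per-pixel read noise, and 7x7 block averaging standing in for a roughly 28K-to-4K resize), not RED specs:

```python
import math
import random
import statistics

random.seed(42)

# Toy model: a flat gray patch with Gaussian per-pixel read noise,
# downsampled by averaging 7x7 blocks (49 input pixels per output pixel),
# roughly the linear ratio of a 28K -> 4K resize.
signal = 100.0        # true scene value (arbitrary units)
noise_sigma = 8.0     # per-pixel noise, standard deviation (assumed)
block = 49            # 7 * 7 input pixels per output pixel
n_out = 2000          # output pixels to simulate

high_res = [signal + random.gauss(0.0, noise_sigma)
            for _ in range(block * n_out)]
low_res = [sum(high_res[i * block:(i + 1) * block]) / block
           for i in range(n_out)]

sigma_high = statistics.pstdev(high_res)
sigma_low = statistics.pstdev(low_res)

print(f"per-pixel noise:      {sigma_high:.2f}")
print(f"after 49:1 averaging: {sigma_low:.2f}")  # roughly sigma / 7
print(f"extra usable bits:    {math.log2(sigma_high / sigma_low):.2f}")
```

Averaging 49 pixels cuts the random noise by about a factor of 7 (the square root of 49), which is worth close to three extra "clean" bits in the downsampled image, exactly the trade Pawel describes.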

George Wilcox
12-11-2008, 12:22 AM
In the world of DSLRs going from 10 bits to 12 bits to 14 bits has resulted in better color reproduction across the board, broader ISO ratings, etc. (although admittedly, there are other technological things improving at the same time). The big benefit of more bits is more natural color reproduction and better shadow and highlight detail - 16 bit is the way to go.

Jarred Land
12-11-2008, 12:41 AM
Monstro is 16 bit. There is an advantage to 16 bits.. anyone who has worked with 16 bit medium format sensors will tell you the same. The difference is small.. you really need to dive into in-range tonality and highlights to see it... but it is there. The extra processing precision with 16 bits gives you a little more flexibility as well, especially when you start to push things around.

Pawel Achtel
12-11-2008, 12:51 AM
Monstro is 16 bit. There is an advantage to 16 bits.. anyone who has worked with 16 bit medium format sensors will tell you the same. The difference is small.. you really need to dive into in-range tonality and highlights to see it... but it is there. The extra processing precision with 16 bits gives you a little more flexibility as well, especially when you start to push things around.

That's great news. I did run out of tonality steps quite often using HDCAM (in camera) by "pushing things around" (adjusting gamma, colour balance and matrix heavily). The result was awful flickering of clipped tonal ranges. But, this was 10-bit DSP. See my post about it here: http://www.reduser.net/forum/showthread.php?t=22992

I don't think I will have the problem with the 12-bit Red One (I haven't tried it yet), but I am absolutely confident 16-bit quantization will fix the problem once and for all.

Michael Lindsay
12-11-2008, 01:02 AM
Hi Jarred

Would 14bit precision have been enough for Monstro? Or with only 2 bits to go is it easier to just sit on byte boundaries?

I ask because I seem to remember Mr Nattress suggesting (in a Dalsa discussion) that there was very little chance 16bit precision was necessary.

I'm happy either way.

regards

Michael (delighted not to touch 8bit vanilla HDCAM ever again) Lindsay

Graeme Nattress
12-11-2008, 07:01 AM
You must have a bit depth on your a-to-d that is "better" than the dynamic range of your sensor, or else you're limiting your sensor. Similarly, it's not much use to have a better a-to-d than your sensor is capable of feeding, noise-wise, or else you're just pushing extra bits around for no image benefit.

So, current RED One has a 12bit a-to-d and those 12bits get stored in a 12bit REDCODE.

Now, future RED sensors will improve over the current generation Mysterium, and will warrant higher bit depths on the a-to-d. REDCODE will develop along with the new sensors and new a-to-d so that it all works together as it should.

I've not had any banding issues with the RED One, other than banding caused by displays or other things not actually related to the data in the R3D file itself. That is good.

As to the Dalsa discussion, they were using a 14bit a-to-d and padding that up to 16bits for storage. So two of those 16bits were "marketing bits" so to speak. When RED is talking about bit depth, it's bit depth on the a-to-d, so real bits.

Graeme

Shawn Larkin
12-11-2008, 07:36 AM
I'm glad that we might actually have a 16 bit sensor with a 16 bit a-to-d and REDCODE that can work with these "real" 16 bit images.

That's exactly what I wanted to hear...if it is true.

J. Eric Camp
12-11-2008, 08:03 AM
fantastic

jbeale
12-11-2008, 08:25 AM
In the world of DSLRs going from 10 bits to 12 bits to 14 bits has resulted in better color reproduction across the board, broader ISO ratings, etc. (although admittedly, there are other technological things improving at the same time). The big benefit of more bits is more natural color reproduction and better shadow and highlight detail - 16 bit is the way to go.

My point was that more bits in your recording give you more dynamic range, less posterization, etc., if and only if your sensor + analog electronics + A/D noise level is low enough. Otherwise there is no benefit to adding bits in the camera. You might as well just use a smaller A/D and save cost and data storage space. If you want to add pure noise bits and do your post operations at a higher bit depth, which can help especially if you do big corrections and/or contrast stretches, you can still do that in post regardless of the original acquisition bit depth. In other words, if your system noise exceeds the 12-bit quantization level, then "doing it in post" (upconverting to 14 or 16 bits) is actually just as good.
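
That if-and-only-if condition can be checked with a toy quantizer. Everything here is assumed for illustration (read noise of about four 12-bit levels, as in the DSLRs cited below): once noise dominates the quantization step, a 14-bit quantizer measures the true signal essentially no more accurately than a 12-bit one.

```python
import math
import random

random.seed(1)

full_scale = 1.0
step12 = full_scale / 4096        # one 12-bit code value
step14 = full_scale / 16384       # one 14-bit code value
read_noise = 4 * step12           # noise of ~4 twelve-bit levels (assumed)

def rms_error(step, n=100_000):
    """Quantize signal+noise with the given step size and return the
    RMS deviation of the coded value from the true, noise-free signal."""
    err2 = 0.0
    for _ in range(n):
        true = random.random() * full_scale
        noisy = true + random.gauss(0.0, read_noise)
        coded = round(noisy / step) * step
        err2 += (coded - true) ** 2
    return math.sqrt(err2 / n)

e12 = rms_error(step12)
e14 = rms_error(step14)
# Both come out near the read noise itself; the finer quantizer buys
# almost nothing once noise dominates the step size.
print(f"RMS error, 12-bit quantizer: {e12 / step12:.3f} (in 12-bit levels)")
print(f"RMS error, 14-bit quantizer: {e14 / step12:.3f} (in 12-bit levels)")
```

Both pipelines end up within a fraction of a percent of each other: the total error is dominated by the noise term, so the two extra bits record nothing usable.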

There is a physics prof. at U. Chicago, Emil Martinec, who put up what I found to be a well-reasoned and well-supported web page regarding DSLR bit depth and resolution for the current DSLRs.

"Curiously, all the 14-bit cameras on the market (as of this writing) do not merit 14-bit recording. The noise is more than four levels in 14-bit units on all of these cameras (Nikon D3/D300, Canon 1D3/1Ds3 and 40D); the additional two bits are randomly fluctuating, since the levels are randomly fluctuating by +/- four levels or more. Twelve bits are perfectly adequate to record the image data without any loss of image quality, for any of these cameras (though the D3 comes quite close to warranting a 13th bit). "

http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html#bitdepth

NateWeaver
12-11-2008, 08:27 AM
I think even if, in the move to 16bit, those last 2 bits are recording nothing but noise, the move will help push post production forward and get us prepared for the future.

Also, recording the noise floor of the analog chain more accurately I'm sure can have its benefits. Oversampling spatial resolution does wonders for the refinement of the image. I would imagine oversampling bit depth would have its own benefits, however small...likely more natural looking grain I'd guess, especially if underexposed images are being pulled out of the muck.

Michael Lindsay
12-11-2008, 09:50 AM
As to the Dalsa discussion, they were using a 14bit a-to-d and padding that up to 16bits for storage. So two of those 16bits were "marketing bits" so to speak. When RED is talking about bit depth, it's bit depth on the a-to-d, so real bits.

Graeme

that's interesting to know... thanks!!

What is less useful is making unnecessary post production demands (like Dalsa).

On a side note, can you tell us the distribution of bits against stops of light for the current camera? Linear, to me, suggests half the light per stop climbing down into the shadows. Is that correct?

thanks again

Michael L

Graeme Nattress
12-11-2008, 10:06 AM
REDCODE RAW also helps in keeping the files manageable, but yes, if you're going uncompressed, at least pack the bits (like a 10bit DPX does) or use some simple lossless compression.

Linear means 1 bit per stop. Yes, that's correct.

Graeme
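
The "1 bit per stop" behaviour of linear encoding can be made concrete by counting code values per stop in a 12-bit linear signal. This is a generic sketch of linear quantization, not anything from the actual REDCODE pipeline:

```python
# Count how many 12-bit linear code values land in each stop of exposure.
# The top stop spans codes [2048, 4096); each stop down gets half as many
# codes, which is the "1 bit per stop" property of linear encoding.
codes_per_stop = []
top = 4096
for stop in range(12):
    low = top // 2
    codes_per_stop.append(top - low)   # code values inside this stop
    top = low

print(codes_per_stop)  # [2048, 1024, 512, ..., 2, 1]
```

Half of all code values describe the brightest stop, and the twelfth stop down is left with a single code (plus code 0 for black), which is why log or raw-plus-compression schemes are attractive for storage.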

Michael Lindsay
12-11-2008, 10:12 AM
Hi Graeme

In theory, is it possible to distribute the encoding differently and still keep the other advantages of RAW?

thanks for the education.

Michael L

Graeme Nattress
12-11-2008, 10:16 AM
You need a non-linear amp ahead of the a-to-d to do that, which can cause more problems than it's worth. It's also worth remembering that things like colour transform matrices need to work on linear light data, and thus you must have a reliable and accurate way to get back to linear if you make it non-linear.

Graeme

Michael Lindsay
12-11-2008, 10:22 AM
Graeme

Thanks ..

Michael

Daniel Browning
12-11-2008, 11:50 AM
I hope I don't annoy too many folks with the size of this post. If you were having trouble sleeping, maybe I can help with that, at least. :)



But what I and my trusty DI colorist want more than more pixels is--as the title of this post suggests--MORE BIT DEPTH.


I don't think you want more bit depth by itself: that just wastes recording space on random noise. To make use of 16 bits would require at least 15 stops of dynamic range, so that's what you're asking for: 4 more stops of dynamic range (a 16x improvement in linear terms, quite a tall request).

The 12-bits in the RED ONE are plenty sufficient to encode everything the camera can deliver: 11.3 stops of dynamic range. More bits would just bloat file sizes with no benefit.

In fact, most people don't use all 11.3 stops, because the last few stops are pretty noisy. If you only use 9.5 stops, you could do just fine with a 10-bit R3D and save a lot of room in the file size. (It would be nice if RED allowed photographers to choose lower precision recording to save space, but many customers would probably aim directly for their foot and shoot, so it probably won't happen.)

If all RED ONE cameras got a free 16-bit ADC and 16-bit REDCODE upgrade tomorrow, it would not affect image quality at all; the only change would be bigger file sizes. If there were a 10-bit ADC that had less noise than the existing 12-bit ADC, the 10-bit would provide a better image. It would only be 9.6 stops, but they would be less noisy. But I suspect RED built the lowest-noise ADC they could.

Of course, for the highest quality, post-processing should occur with greater precision than the original file. 16-bit int or even 32-bit float will give slightly better results than grading in 12-bit lin or 10-bit log, particularly for bigger changes.
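
A toy round trip illustrates that last point (assumed numbers, not any real grading pipeline): pulling a smooth ramp down two stops and back up collapses tonal levels when every intermediate result is requantized to 10-bit codes, while float precision preserves them all.

```python
# Toy round trip: grade a smooth ramp down two stops and back up, once
# with every intermediate result requantized to 10-bit code values,
# once in full float precision.
levels = 1024                 # 10-bit working space
gain = 4.0                    # two stops

src = [x / (levels - 1) for x in range(levels)]   # smooth 0..1 ramp

def grade_10bit(v):
    # requantize to 10-bit code values after each operation
    down = round(v / gain * (levels - 1)) / (levels - 1)
    return round(down * gain * (levels - 1)) / (levels - 1)

def grade_float(v):
    return (v / gain) * gain   # exact in floating point

levels_10bit = len({grade_10bit(v) for v in src})
levels_float = len({grade_float(v) for v in src})
print(f"distinct output levels, 10-bit pipeline: {levels_10bit}")
print(f"distinct output levels, float pipeline:  {levels_float}")
```

The 10-bit pipeline keeps only about a quarter of the original tonal levels after this one round trip, while the float pipeline keeps all 1024, which is why heavy grades benefit from working precision above the recording precision.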

So what you probably intended to ask for is more dynamic range. If you're using all 11.3 stops now, and RED increases dynamic range per pixel by two stops in the Mysterium-X, to 13.3 stops, then a 14-bit recording would be sufficient to capture all the information put out by the sensor, and you can use all 13.3 stops.

But if you only use 9.3 stops right now with RED ONE because the last two stops are too noisy for your taste, then you only need 10 bits right now: the last two are being clipped to black. If RED increases dynamic range per pixel by two stops, and you want to start using 11.3 stops (out of the 13.3 total), then 12-bit will still be enough to capture all the information that you will use.

So there are six things:


Number of stops the camera is capable of.
Number of stops that most people will use.
Bit precision needed to encode what the camera is capable of.
Bit precision used by the ADC
Bit precision recorded to the file.
Bit precision used in post production


For RED ONE we have:


Number of stops the camera is capable of: 11.3
Number of stops that most people will use for 4K DI: 9.6
Bit precision needed to encode what the camera is capable of: 12
Bit precision needed to encode what most people will use for 4K DI: 10
Bit precision used by the ADC: 12
Bit precision recorded to the file: 12
Bit precision used in post production: 10-32


It's important to know that the precision of the ADC is separate from what the camera is capable of using, and also what gets recorded. The Pentax K10D had a 22-bit ADC, but was only capable of 14-bits and only recorded 14-bits. (Actually, there were some reports of slight posterization, but the fixed pattern noise was so much stronger than the random read noise that most could not use all 14-bits anyway).

Essentially, as long as the ADC has enough precision for the camera, it doesn't matter how much higher it is (it could be 1000 bits and it wouldn't matter), you still only want to record enough precision that the entire signal is preserved, and then some (one half-stop of totally random noise).
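That rule of thumb can be written as a tiny helper. The function name and the half-stop default are my own shorthand for the rule stated above, nothing official:

```python
import math

def recording_bits(stops_of_dr, noise_margin_stops=0.5):
    """Hypothetical helper: bits of linear precision needed to record a
    sensor's full dynamic range plus ~half a stop of random noise."""
    return math.ceil(stops_of_dr + noise_margin_stops)

# Reproduces the "bit precision needed" rows in the tables above:
for name, stops in [("RED ONE", 11.3),
                    ("Mysterium-X (guess)", 13.3),
                    ("Monstro (guess)", 15.3)]:
    print(f"{name}: {stops} stops -> {recording_bits(stops)} bits")
```

With one linear bit per stop, 11.3 stops plus half a stop of noise rounds up to 12 recorded bits, and the guessed 13.3- and 15.3-stop sensors land on 14 and 16 bits, matching the tables.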

Perhaps Mysterium-X will look like this (guessing):


Number of stops the camera is capable of: 13.3
Number of stops that most people will use for 4K DI: 11.6
Bit precision needed to encode what the camera is capable of: 14
Bit precision needed to encode what most people will use for 4K DI: 12
Bit precision used by the ADC: 16
Bit precision recorded to the file: 14
Bit precision used in post production: 16


And perhaps Monstro will go like this (guessing):


Number of stops the camera is capable of: 15.3
Number of stops that most people will use for 4K DI: 13.6
Bit precision needed to encode what the camera is capable of: 16
Bit precision needed to encode what most people will use for 4K DI: 14
Bit precision used by the ADC: 16
Bit precision recorded to the file: 16
Bit precision used in post production: float


Here's an analogy: say it's your job to count how many rain drops fall into a bucket per second. You're given a paper with only two columns, so you have two digits of precision. It starts raining and you very carefully write down numbers:

1.2
1.5
1.3
1.3
1.4

And so on. Soon you get so good at it that you think two columns is not enough precision, and you could do three columns:

1.35
1.46
1.20

But then someone points a big sprinkler right at the bucket and it sends hundreds of drops that are mixing with the true rain drops, and you can't tell them apart. So your precision goes down.

Rain drops are like photons, the true "signal" that the sensor tries to record. Sprinkler drops are like electrons, the read noise that we don't want to record. The number of columns is like the bit depth, or precision, of the ADC. It only makes sense to record numbers that have meaning; if they are lost in the noise, then using more precision will not give us any information: you might as well make up random numbers.

Here's an example of bit depth in current products: the D300. In 12-bit mode, the dynamic range is such that the last bit is not even used. 11 bits would have been fine. The Sony sensor Nikon used for the D300 doesn't have a 14-bit mode (the ADC is on-chip), but Nikon needed one in order to score Marketing points, so the engineers did a quadruple readout (reading the sensor four times for one shot), then did a little processing on all four readouts to remove some of the noise. This improves things significantly so that the camera has less noise and the 12th bit is now being utilized. The last two bits in 14-bit are still just random noise and can be discarded. So Nikon is essentially offering a very slow (1.5 FPS or something) method of reducing read noise.

Nikon has done the same thing with the D3X. Using the 12-bit Sony A900 sensor and reading it four times (at a very slow 1.8 FPS) to improve the noise a little bit so they can call it 14-bit.

[10,000 character limit reached. Continued in next post...]

Daniel Browning
12-11-2008, 11:51 AM
[16-bit] is already supported in the still world and it's about time for those of us in Film Post to use it


The 16-bit recording of MFDB is just a marketing ploy. This demonstration shows that the last four bits do nothing but record random noise: http://forums.dpreview.com/forums/read.asp?forum=1019&message=29766597

http://www.thebrownings.name/photo/misc/John_Sheehy_16_bit.jpg

Also, Emil Martinec demonstrated that even 14-bit precision in most cameras is wasteful:

http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/

In particular, look at the demonstration on page three:

http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html

Here is the 12-bit version (look at the huge gaps in the histogram):
http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/underexposed-6bit.gif

Here is the 14-bit version (histogram is filled out):
http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/underexposed-8bit.gif

I can't tell any difference between the two.

So I would say that what you and your colorist (and all of us) really want is higher SNR, more dynamic range, better response to underexposure, less noise, better compression, less quantization, etc. When that is achieved, I expect an increase of bit depth to go along with it.


I'm pretty sure the Monstro sensors are 16 bit.

I saw that, too. It wasn't specified if just the ADC was 16-bit or the REDCODE along with it. RED needs to put 16-bit on the specs for marketing reasons; whether those bits are used to bloat the file or not is something else entirely. Everyone else records more bits than they actually need because it helps their snake oil marketing. I expect RED will only have a 16-bit ADC, but won't actually record any more bits than are actually needed. It's like a reverse bait-and-switch: sell the snake oil (22-bit ADC!), but secretly replace it with something *useful* (e.g. 14-bit REDCODE). At least, that's what I would do.


It may not be directly relevant, but of all the DSLR cameras on the market now that currently provide 14 bit RAW files, I believe there is no clear evidence that all those bits are actually justified.

Correct. The best of them (Nikon D3) only need 13 bits, and most only need 12 bits (or less). If they combined the low read noise of ISO 1600 with the high FWC of ISO 100 (as in patent 6734905), then the D3 would need 15 bits, but until then 13 is the max.


The only thing you'd get out of a 16 bit Red One is more accurate noise, as Graeme has said over and over and over again. It may help with banding though...

By banding, I assume you mean posterization. If it helped with that, then it would mean there was more signal to record. A too-small bit depth causes quantization error (AKA quantization noise), and then it causes posterization (banding). Most of the time people see posterization in post production because of their display or processing (e.g. 8-bit display or 10-bit processing can easily show posterization that isn't there in the raw file).
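The display-path point can be demonstrated directly (a minimal sketch, using naive truncation rather than any particular monitor pipeline): requantizing a smooth 12-bit ramp to 8 bits for display creates banding that was never in the source data.

```python
# A smooth 12-bit ramp truncated to 8 bits for display: every display
# code now covers 16 source codes, so the viewer sees hard steps that
# do not exist in the underlying data.
raw = list(range(4096))              # every 12-bit code, a smooth ramp
display = [v >> 4 for v in raw]      # naive truncation to 8 bits

transitions = sum(1 for a, b in zip(display, display[1:]) if a != b)
print(f"12-bit source levels:  {len(set(raw))}")
print(f"8-bit display levels:  {len(set(display))}")
print(f"visible steps (bands): {transitions}")
```

The 4096 smooth source levels become 256 flat bands separated by 255 hard steps, which is the posterization people then blame on the raw file.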


In the world of DSLRs going from 10 bits to 12 bits to 14 bits has resulted in better color reproduction across the board, broader ISO ratings, etc. (although admittedly, there are other technological things improving at the same time).

All the improvements in those areas have been due to other reasons. Color reproduction improved due to resolution, noise, color filter arrays that more closely match the human eye, and software. ISO ratings improved due to higher quantum efficiency (15-50%) and less read noise. The references above show that, for almost all DLSR cameras, 12-to-14 bits made no difference.

As long as read noise, FWC, and QE stay the same per area, every increase in resolution can be accompanied by a decrease in bit depth to capture the same dynamic range. If technology improves per area (as it has in the past), the same or more precision is required. The ideal camera, which won't be possible for decades, if ever, is a photon counter. It has so much resolution (we're talking gigapixels) that the chance of two photons striking the same pixel is vanishingly small. The bit depth required is just 1: on or off.

In post processing, you can convert such a 1-bit image up to any desired bit depth. Here's a demonstration by Jay Turberville:

http://forums.dpreview.com/forums/read.asp?forum=1018&message=30280847

(Click thumb for full image)

http://www.thebrownings.name/photo/misc/quantum_sensor_thumb.jpg (http://www.thebrownings.name/photo/misc/quantum_sensor_simulate.jpg)

You can see that one-bit precision captures all sorts of tonal gradations. There are also plenty of methods for handling color. "Some researchers are actually trying to build one right now: Donald Figer and his colleagues at Rochester Institute of Technology got a $2.8M grant to develop a light detector that would count individual photons for astronomy applications":

http://image-sensors-world.blogspot.com/2008/10/rit-team-to-develop-photon-counting.html

There's also a paper by Eric Fossum, inventor of CMOS sensors:

"What To Do With Sub-Diffraction-Limit (SDL) Pixels? -- A Proposal for a Gigapixel Digital Film Sensor (DFS)"
http://ericfossum.com/Papers/2005%20Program%20What%20to%20Do%20with%20Sub-Diffraction%20Limit%20(SDL)%20Pixels.pdf

In it he proposes grouping 1-bit pixels into clumps, like film grain, for post-processing.

So what I would actually like to see is less bit depth, and more resolution.

Marcus la Grone
12-19-2008, 01:45 PM
The sensor array fundamentally accounts for a lot of the noise down in the floor. While well designed silicon can greatly help the noise floor, there is a simple way to help even "simple" arrays: good heat management. I come from a scientific imaging background, and the standard trick we often use to get more 'good bits' is to cool the array. Going from ambient down to -40F (-40C... fancy that) adds about 3 to 4 useful bits. The problem is no one wants to carry the extra battery around for the TE coolers (or, as is my present case, do all their photography outside in Alaska). Well designed heat pipes, however, can greatly reduce waste heat and therefore noise. If the guys at RED say they can get us 16 good clean bits, I'll be impressed, but happy. But then again, so far I'm already impressed with what they have done.

Daniel Browning
12-19-2008, 02:24 PM
While well designed silicon can greatly help the noise floor, there is a simple way to help even "simple" arrays: good heat management.

That's certainly the case for any exposure longer than 1/10 second, but typical photography is not affected by thermal noise to any measurable degree. Dark current of about 10 e-/sec is typical for a modern pixel at room temperature, and temporal noise is only (approx.) square root of that. All the other noise sources (e.g. read noise) are so much higher that thermal noise can't be noticed at 1/10 second.
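The arithmetic behind that claim can be spelled out. The dark current is the 10 e-/sec figure from the post; the read-noise value is an assumed typical figure for illustration, not a measured spec:

```python
import math

# Rough numbers: ~10 e-/sec dark current per pixel at room temperature
# (from the post above); read noise is an assumed typical figure.
dark_current = 10.0   # electrons per second per pixel
read_noise = 5.0      # electrons RMS (assumption for illustration)

for exposure in (0.01, 0.1, 1.0, 30.0):
    dark_electrons = dark_current * exposure
    thermal_noise = math.sqrt(dark_electrons)   # shot noise on dark current
    total_noise = math.sqrt(read_noise**2 + thermal_noise**2)
    print(f"{exposure:6.2f}s exposure: thermal {thermal_noise:5.2f} e-, "
          f"total {total_noise:5.2f} e-")
```

At 1/10 second the thermal contribution is about 1 electron against several electrons of read noise, so it barely registers in the total; by 30 seconds it dominates, which is why cooling pays off for long exposures and astrophotography.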

EDIT: that's not to say heat management is unimportant. Some say it's the biggest challenge in designing motion cameras (I wouldn't know). Can't have cameras catching fire or "letting the magic smoke out". :)

Personally, I'm very interested in astrophotography and timelapse, so I would love a Thermo-Electric cooler; I requested it in my DSMC wish list post.

Marcus la Grone
12-19-2008, 02:47 PM
Going to have to sort of agree and sort of disagree with the short exposure statement. (Using DALSA arrays) At 30fps we've seen improvements from cooling; granted, they aren't as impressive as the improvements for long exposures. On the other hand, while the noise may be "okay" at room temp, encased in a camera most FPAs shoot up north of 45C. 10 e-/sec is actually kinda noisy in my book; if your well depth is 150k, that is only a 15k "happy range", aka ~13-14 bits.
wrt the title of the thread--I'm greedy, I want more pixels and more bits...

Daniel Browning
12-19-2008, 03:00 PM
Going to have to sort of agree and sort of disagree with the short exposure statement. (Using DALSA arrays) At 30fps we've seen improvements from cooling; granted, they aren't as impressive as the improvements for long exposures. On the other hand, while the noise may be "okay" at room temp, encased in a camera most FPAs shoot up north of 45C.

Interesting. Thanks for the information.

Ian Andolina
12-21-2008, 03:11 PM
Daniel: thank you for such detailed and fascinating posts!

George Wilcox
12-23-2008, 01:04 AM
"Curiously, all the 14-bit cameras on the market (as of this writing) do not merit 14-bit recording. The noise is more than four levels in 14-bit units on all of these cameras (Nikon D3/D300, Canon 1D3/1Ds3 and 40D); the additional two bits are randomly fluctuating, since the levels are randomly fluctuating by +/- four levels or more. Twelve bits are perfectly adequate to record the image data without any loss of image quality, for any of these cameras (though the D3 comes quite close to warranting a 13th bit). "

http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html#bitdepth

There is certainly much more to a clean image than the number of bits - the design of the sensor, micro-lenses, gap between sensors, materials, noise filtering, and so on. But if you look at comparisons of the images produced by different DSLRs, as I have rather obsessively, I can tell you that the higher bit depth sensors produce better images. Have a visit to http://www.imaging-resource.com to compare.

Comparing the new Sony A900 to the Canon EOS 5D MkII, the Sony uses 12bit A/D and the Canon uses 14bit. The Canon absolutely smokes the Sony like a cheap cigar! Especially at high ISO ratings. It also bests the EOS 1Ds MkIII, which has the same bit depth and resolution. This shows that improvements in signal processing and sensor design play a key role, but pixel depth is important too. As an interesting point, a 12bit pixel can reproduce 4096 tonal levels per channel. A 14bit pixel can reproduce 16,384 levels and a 16bit pixel can produce 65,536 levels per channel. So a 16bit pixel can reproduce 16 times the number of levels a 12bit pixel can. That's a whole lot of richness and detail in highlights and shadows and a lot more room to move, all yours for a modest 33% increase in bits. Unsubstantiated rumor has it that Canon's 1Ds Mk IV will sport 18bits/pixel with 262,144 levels per channel, but I, for one, look forward to 24 bit DSLR and cine cameras with 16.7 million levels per channel. Now that would finally go toe to toe with film! Color depth is the weak link in digital imaging so far, but as you can see, Jim Jannard is dedicated to heading in the right direction. 16bit will be a huge step in the right direction. Bravo Jim; keep those delicious bits AND pixels coming!:w00t:

Daniel Browning
12-23-2008, 09:56 AM
There is certainly much more to a clean image than the number of bits - the design of the sensor, micro-lenses, the gap between photosites, materials, noise filtering, and so on. But if you look at comparisons of the images produced by different DSLRs, as I have rather obsessively, I can tell you that the higher bit depth sensors produce better images.


The 14-bit cameras produce better images exactly because of the other reasons you mentioned (and more), not the greater recorded bit depth.



Comparing the new Sony A900 to the Canon EOS 5D MkII, the Sony uses a 12-bit A/D and the Canon a 14-bit. The Canon absolutely smokes the Sony like a cheap cigar!


First of all, the Canon JPEG engine is far better than Sony's, which accounts for the difference you see on that web site. RAW is a different story: at base ISO with a good raw conversion the Sony has better contrast, more resolution, and similar noise.

Second, correlation does not imply causation. I know a lot of people who measure fuel economy as "how much it costs to fill up." That proxy only works as long as tank sizes stay the same. If Marketing had their way, they would just make the gas tank smaller: the price at the pump would drop, and those people would think fuel economy had improved, at least until they realized their measurement methodology was fundamentally flawed.

The bit depth is just one of the many differences between the two cameras; one cannot assume it is the primary cause of the difference in image quality. In fact, for JPEG, bit depth cannot be the cause at all, because the default black point and tone curves crush all detail in the last two bits anyway.

By the way, I just got my 5D2 a few days ago and I will be testing it as soon as I get the chance. One thing that I will test for is what bit depth is really needed for the camera. So far it looks very promising and has very low read noise.



Especially at high ISO ratings.


High ISO requires even *less* bit depth than base ISO. At ISO 3200, the 5D2 cannot make use of even 10 bits because so much of the signal is just read noise. The bit depth of the camera has to be designed for the amplification with the most dynamic range, which is base ISO in most cameras.

In any case, even if the Canon *was* better than the Sony, one could not assume that it was because it recorded more bits. It could be because the sensor has higher sensitivity and lower read noise. Or maybe the manufacturer switched to a sloppier color filter to improve sensitivity at the expense of spectral response (i.e. color accuracy), as Canon did with the 5D2.

Or maybe the new 14-bit ADC really does have less read noise, just not two stops less. For example, if a camera had 11.0 stops of dynamic range and a 12-bit ADC, then the 12 bits are sufficient to record the entire signal. If the same camera were upgraded with a 14-bit ADC that improved read noise by 0.5 stops, it would have 11.5 stops of dynamic range. But that can still be recorded losslessly within 12 bits. Which is what manufacturers *should* do, because it would be best for the photographer.
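That 0.5-stop example can be made concrete. A minimal sketch, using hypothetical full-well and read-noise figures (illustration only, not measurements of any real camera): the engineering dynamic range in stops is log2(full well / read noise), and the file only needs enough bits to span that range.

```python
import math

# Hypothetical sensor figures (illustration only, not any real camera):
full_well = 60000.0      # electrons at saturation
read_noise_old = 29.3    # electrons RMS, original ADC (~11.0 stops of DR)
read_noise_new = 20.7    # electrons RMS, improved ADC (~0.5 stop better)

for rn in (read_noise_old, read_noise_new):
    dr = math.log2(full_well / rn)  # engineering dynamic range in stops
    # A 12-bit file spans 12 stops of levels, so both sensors fit losslessly.
    print(f"read noise {rn} e-: {dr:.1f} stops -> fits in 12 bits: {dr <= 12}")
```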

Instead, Marketing realizes that they can market additional bits, even if they are nothing but bloat, because people will assume that no company is evil enough to waste disk space just for marketing purposes. Unfortunately, marketers are smart enough to do it in small increments, so that more people are fooled. If they jumped to 32-bit or 1000-bit depth immediately, no one would believe it, but adding 2 fake bits here and 2 there is easier to sell.

One can truncate most 14-bit cameras to 12 bits in post production and never even notice the difference.



It also bests the EOS 1Ds MkIII which has the same bit depth and resolution. This shows that improvements in signal processing and sensor design play a key role, but pixel depth is important too.


Here's how you can measure the role of bit depth: truncate the bits in post production. If there's no difference after you lop them off, they were marketing bits. If it results in posterization, then they were real bits. All of the early 14-bit cameras had at least two marketing bits. Some of the more recent ones have only one marketing bit (e.g. the D700 uses 13 of its 14 bits).
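That truncation test is easy to simulate. A minimal sketch with synthetic data (not from any real camera): generate a flat 14-bit patch whose read noise is about 6 DN in 14-bit units, zero the bottom two bits, and note that the truncation error is buried under the noise floor.

```python
import random

random.seed(42)

# Synthetic flat patch from a hypothetical 14-bit camera whose read noise
# (~6 DN in 14-bit units) randomizes the bottom two bits.
mean, noise_dn = 2000, 6.0
samples = [max(0, min(16383, round(random.gauss(mean, noise_dn))))
           for _ in range(10000)]

# "Truncate in post": zero the two least significant bits (effective 12 bits).
truncated = [s & ~0b11 for s in samples]

# Truncation error is at most 3 DN, well under the 6 DN noise floor, so the
# truncated file is visually indistinguishable: those were marketing bits.
max_err = max(abs(a - b) for a, b in zip(samples, truncated))
print(max_err)
```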




As an interesting point, a 12-bit pixel can reproduce 4096 levels. A 14-bit pixel can reproduce 16,384 and a 16-bit pixel 65,536 levels per pixel. So a 16-bit pixel can reproduce 16 times the number of levels a 12-bit pixel can. That's a whole lot of richness and detail in highlights and shadows and a lot more room to move, all yours for a modest 33% increase in bits.


That's all true (setting aside Bayer interpolation), but it only applies if the sensor and electronics can keep up. Anyone can record 16 bits or 22 bits or a thousand bits; that's not hard. The hard part is building the sensor and electronics that actually put something useful in those bits.

For example, the Pentax K10D has a 22-bit ADC, but it is certainly not capable of 4.2 million distinct values per pixel. It actually does make use of 14 bits, but the last two stops have such strong pattern noise that no one can stand to use them in real life, so it's effectively 12 bits (despite the 22-bit ADC).



keep those delicious bits AND pixels coming!

Yes!