HDR bracketing



E.J. Sadler
09-11-2008, 09:14 AM
It would be nice to have a one-click multiple-exposure mode for HDR use. You set the number of exposures you want and the over/under f-stop range to cover, and the camera takes those images with a single click. This would make hand-held HDR work more feasible, and enable a set-and-forget 'safety bracket' for a lot of situations.
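Something like this rough Python sketch is all the camera would have to work out internally (purely illustrative; 'num_shots' and 'stop_range' are made-up names, not anything from Red):

# Hypothetical sketch of a one-click bracket: N shots spread evenly
# across +/- stop_range EV around the metered exposure.
def bracket_offsets(num_shots, stop_range):
    if num_shots < 2:
        return [0.0]
    step = (2.0 * stop_range) / (num_shots - 1)
    return [round(-stop_range + i * step, 2) for i in range(num_shots)]

print(bracket_offsets(5, 2))  # [-2.0, -1.0, 0.0, 1.0, 2.0]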

mikeburton
09-11-2008, 09:29 AM
I second this request! If it could do this it would be huge in the VFX community.

RED-Tank
09-11-2008, 10:04 PM
I am third! Even better would be a 'movie mode' with HDR bracketing, ha ha... silly to ask for this :p.

Raphael Varandas
09-11-2008, 11:19 PM
4th... it would be nice.

Tony de Vries
09-12-2008, 04:35 AM
Why bother? The sensor itself should be HDR capable! :detective2:

Even Solberg
09-12-2008, 04:52 AM
Because you may want to do the HDR yourself. Or just pick the best shot of the bunch.

fde101
09-12-2008, 08:04 AM
This would be done with exposure bracketing, wouldn't it?

Endgroove
09-13-2008, 10:34 AM
Because you may want to do the HDR yourself. Or just pick the best shot of the bunch.

Bracketing strikes me as more of a work-around, not a feature of HDR photography. Bracketing and alignment would not be necessary if the sensor had enough DR to cover all the luminosity of a given scene to begin with.

Creatively, an HDR sensor (e.g. 100,000:1 :w00t: ) would open up HDR photography to subjects that actually move...

My wish is for an HDR sensor... At the very least I vote for a sensor with best-in-class dynamic range, thus requiring fewer brackets.

Sean

Aeron
09-13-2008, 03:29 PM
Unless Red pushes the DR capabilities of the sensor significantly, the majority of high-contrast scenes would not be covered, i.e. with the darkest parts clearly visible and the highlights not blown out. This will change in the future, but as the technology stands, for VFX HDR lighting you need to bracket.

It would be great to have a sensor capable of covering larger dynamic ranges with low noise; that would surely be the ideal and would put the Red cam in a very competitive position. But if it isn't possible to push the current DR envelope, then I agree an auto-bracketing feature would be great.

It would be very cool if the auto-bracketing worked out the f-stop range needed with clever metering, as well as giving the option of how many stops to use.
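Roughly what I have in mind, as a back-of-the-envelope Python sketch (not any real camera API; the spot-meter readings and the 11-stop sensor figure are just assumed numbers):

import math

# Sketch: given spot readings of the darkest and brightest parts of the
# scene (relative luminance) and the sensor's usable dynamic range in
# stops, estimate how many extra stops the bracket set has to cover.
def stops_needed(shadow_lum, highlight_lum, sensor_dr_stops):
    scene_range = math.log2(highlight_lum / shadow_lum)  # scene contrast in stops
    return max(0.0, scene_range - sensor_dr_stops)

print(stops_needed(0.5, 4096.0, 11.0))  # scene is ~13 stops, sensor 11 -> ~2 stops of bracketing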

Dan Hudgins
09-13-2008, 03:42 PM
Bracketing strikes me as more of a work-around, not a feature of HDR photography. Bracketing and alignment would not be necessary if the sensor had enough DR to cover all the luminosity of a given scene to begin with.

Creatively, an HDR sensor (e.g. 100,000:1 :w00t: ) would open up HDR photography to subjects that actually move...

My wish is for an HDR sensor... At the very least I vote for a sensor with best-in-class dynamic range, thus requiring fewer brackets.

Sean



It's not just the dynamic range of the A2D converter or the number of bits; the noise in the sensor and pre-amp can be cut by fusing more than one exposure.

With the RED ONE (tm), the sensor noise, both fixed and random, gets enlarged somewhat by the low-pass effect of the compression. If you fuse several exposures, the chaos of the noise helps more actual detail get through the compression.

Making an exposure change for each image of the HDR series helps add more chaos to the images, so that you do not get the same compression artifacts on each of them, making the real data come through better in the fused HDR image.

I have done tests with DSLR images, and using bracketing, color separation filters, and up to 27 exposures does make the result better. This could be of use with the RED ONE (tm) for motion-control and time-lapse work.
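The fusion step itself is nothing exotic. A minimal Python/numpy sketch, assuming (my assumption) you already have the bracketed frames as linear floating-point arrays along with their offsets in stops:

import numpy as np

# Fuse a bracketed series: scale each linear frame back to a common
# exposure, ignore clipped pixels, and average what remains. Averaging
# several frames beats down the random noise.
def fuse_brackets(frames, ev_offsets, clip=0.98):
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weight = np.zeros_like(frames[0], dtype=np.float64)
    for frame, ev in zip(frames, ev_offsets):
        valid = frame < clip                              # skip blown-out pixels
        acc += np.where(valid, frame, 0.0) / (2.0 ** ev)  # undo the bracket offset
        weight += valid
    return acc / np.maximum(weight, 1.0)                  # average of the usable samples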

In some of my other posts you can read about how to use two RED ONE (tm) cameras in sync to get HDR with motion today.

Endgroove
09-13-2008, 08:49 PM
It's not just the dynamic range of the A2D converter or the number of bits; the noise in the sensor and pre-amp can be cut by fusing more than one exposure.

With the RED ONE (tm), the sensor noise, both fixed and random, gets enlarged somewhat by the low-pass effect of the compression. If you fuse several exposures, the chaos of the noise helps more actual detail get through the compression.

Making an exposure change for each image of the HDR series helps add more chaos to the images, so that you do not get the same compression artifacts on each of them, making the real data come through better in the fused HDR image.

Yes, averaging. But with HDR stills photography, moving subjects cause ghosting. Less compression is necessary with stills, and a chip with phenomenal dynamic range would almost certainly help toward obviating both of these issues... not that I expect this to happen.

I will check out your posts about HDR movies... Sounds fun. :)


Unless Red pushes the DR capabilities of the sensor significantly, the majority of high-contrast scenes would not be covered, i.e. with the darkest parts clearly visible and the highlights not blown out. This will change in the future, but as the technology stands, for VFX HDR lighting you need to bracket.

It would be great to have a sensor capable of covering larger dynamic ranges with low noise; that would surely be the ideal and would put the Red cam in a very competitive position. But if it isn't possible to push the current DR envelope, then I agree an auto-bracketing feature would be great.

It would be very cool if the auto-bracketing worked out the f-stop range needed with clever metering, as well as giving the option of how many stops to use.

I agree. I just thought while we're fantasizing... :biggrin:

In fact, assuming the Red SLR has video, bracketing could in theory take place very quickly, especially if we don't have to get a mirror out of the way for each frame...

Here's a thought: why not have a function where the exposure EVs for a series of brackets are determined by ISO (gain at the chip)? Just a thought.
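Something along these lines (purely illustrative Python; the base ISO is just an example), since one stop of EV is simply a doubling of the gain:

# Sketch of an ISO-based bracket: instead of changing shutter or
# aperture, shift the gain at the chip by the bracket offset in stops.
def iso_bracket(base_iso, ev_offsets):
    return [round(base_iso * (2.0 ** ev)) for ev in ev_offsets]

print(iso_bracket(320, [-2, -1, 0, 1, 2]))  # [80, 160, 320, 640, 1280]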

In any case, assuming my HDR chip fantasy doesn't come true, having good flexibility with regard to HDR workflow is not much to ask for.

...here's to hoping :greedy:

Dan Hudgins
09-14-2008, 05:13 AM
Yes, averaging. But with HDR stills photography, moving subjects cause ghosting. Less compression is necessary with stills, and a chip with phenomenal dynamic range would almost certainly help toward obviating both of these issues... not that I expect this to happen.

I will check out your posts about HDR movies... Sounds fun. :)


There is no direct relation between the number of bits the sensor puts out from the A2D and the dynamic range, except when the pre-amp is linear.

If you adjust the gamma or slope/curve in the analog part of the sensor, you can get 14+ bits of dynamic range output through 8 bits of digital data; the noise will dither the bits up and down on successive frames.
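As a rough numeric illustration of that idea (this is not any camera's actual curve, just a made-up log-style transfer function in Python):

import math

# A log-style curve squeezes a ~14-stop linear range into 8-bit codes;
# the inverse curve recovers approximate linear values afterwards.
STOPS = 14.0

def encode_8bit(linear):  # linear scene value in (0, 1], 1.0 = clipping
    stops_below_peak = -math.log2(max(linear, 2.0 ** -STOPS))
    return round(255 * (1.0 - stops_below_peak / STOPS))

def decode_linear(code):
    return 2.0 ** (-STOPS * (1.0 - code / 255))

print(encode_8bit(1.0), encode_8bit(2.0 ** -13))   # 255 at clip, 18 deep in the shadows
print(round(decode_linear(encode_8bit(0.25)), 3))  # ~0.25 again after the round trip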

The main problem with high dynamic range in sensors is that you need to bleed off some of the charge, making the sensor non-linear. I do not see an issue with a non-linear sensor since film is very non-linear. You can always linearize the data to some extent later for the de-Bayer if needed.

I was talking about using the filter wheel and multiple exposures for stop-motion-type work, something that can be done today with the camera as it is.

Using two cameras and a Pellicle Beam Splitter can also let you get HDR from moving subjects, without any motion problems, today.

The main problem with high dynamic range from a single sensor is in the analog part and thermal noise.

You can use two or more sensors and a Pellicle Beam Splitter to get more dynamic range without having to improve the sensors over what you can purchase right now. By reducing the light to one of the sensors, it will record the highlights without clipping, while the other will get enough light to be over the noise floor; you use a 5%-95% Pellicle Beam Splitter and any filters as needed.
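The merge afterwards is simple. A minimal Python/numpy sketch, assuming (my assumption) linear frames from both sensors, normalized so that 1.0 is sensor clipping:

import numpy as np

# Merge two sensors behind a 5%/95% pellicle beam splitter: 'bright' saw
# 95% of the light (clean shadows, clipped highlights), 'dim' saw 5%
# (noisy shadows, intact highlights). Rescale the dim path onto the same
# scale and use it wherever the bright path has clipped.
def merge_split(bright, dim, split=0.05, clip=0.98):
    dim_rescaled = dim * ((1.0 - split) / split)  # 0.95/0.05 = 19x
    return np.where(bright < clip, bright, dim_rescaled)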

If you have RAW Bayer data, you can get more dynamic range by reducing the image size; a 4K 12-bit image becomes a 2K 14-bit image from the same RAW data. Compression filters out some of the noise on each pixel, so with compressed data you do not get as much improvement in real image data from averaging.
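To put a number on that 12-bit-to-14-bit claim, here is a toy Python/numpy sketch (not any vendor's actual pipeline):

import numpy as np

# Summing each 2x2 block of 12-bit samples gives values up to 4 x 4095,
# which needs 14 bits, and averaging four samples cuts the random noise
# per output pixel roughly in half.
def bin_2x2(raw):  # raw: 2D array of 12-bit samples
    h, w = raw.shape
    blocks = raw[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3)).astype(np.uint16)  # 14-bit range output

print(bin_2x2(np.full((4, 4), 4095, dtype=np.uint16)).max())  # 16380 < 2**14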

Endgroove
09-14-2008, 10:22 AM
There is no direct relation between the number of bits the sensor puts out from the A2D and the dynamic range, except when the pre-amp is linear.

If you adjust the gamma or slope/curve in the analog part of the sensor, you can get 14+ bits of dynamic range output through 8 bits of digital data; the noise will dither the bits up and down on successive frames.

The main problem with high dynamic range in sensors is that you need to bleed off some of the charge, making the sensor non-linear. I do not see an issue with a non-linear sensor since film is very non-linear. You can always linearize the data to some extent later for the de-Bayer if needed.

I was talking about using the filter wheel and multiple exposures for stop-motion-type work, something that can be done today with the camera as it is.

Using two cameras and a Pellicle Beam Splitter can also let you get HDR from moving subjects, without any motion problems, today.

The main problem with high dynamic range from a single sensor is in the analog part and thermal noise.

You can use two or more sensors and a Pellicle Beam Splitter to get more dynamic range without having to improve the sensors over what you can purchase right now. By reducing the light to one of the sensors, it will record the highlights without clipping, while the other will get enough light to be over the noise floor; you use a 5%-95% Pellicle Beam Splitter and any filters as needed.

If you have RAW Bayer data, you can get more dynamic range by reducing the image size; a 4K 12-bit image becomes a 2K 14-bit image from the same RAW data. Compression filters out some of the noise on each pixel, so with compressed data you do not get as much improvement in real image data from averaging.

This is all fascinating, but :biggrin: I'm in fantasy land here. :biggrin:

Assuming we could have a chip that was linear all the way but could capture more light in the scene, we would have a chip that would by definition: A) have a lower noise floor; B) obviate bracketing and other work-arounds (e.g. multi-camera); C) assuming we could capture all the light in a given scene, obviate gamma curves, as the light represented in the file would correlate directly to the light in the scene; and D) have it all in a small package.

Then of course we'd need HDR delivery mediums...

And, oh yeah, a fairy-godmother...:)

In all seriousness though, it seems to me that one way to do this (in a small SLR-type package) would be to have a chip with typical DR but extremely high frame rates. Assuming you could do bracketing with such a device (possibly with ISO instead of exposure), and assuming you could get all your exposures in under, say, 1/60 of a second, well then you'd be cooking with gas. (And at 500-ish FPS something will definitely be cooking :excl:)

Dan Hudgins
09-14-2008, 11:55 AM
In all seriousness though, it seems to me that one way to do this (in a small SLR-type package) would be to have a chip with typical DR but extremely high frame rates. Assuming you could do bracketing with such a device (possibly with ISO instead of exposure), and assuming you could get all your exposures in under, say, 1/60 of a second, well then you'd be cooking with gas. (And at 500-ish FPS something will definitely be cooking :excl:)

If, say, you can shoot at 480fps and want 24fps, you then have 20 exposures to work with for each output frame.

It is good to reduce the exposure time of each sub-exposure to 1/20th to hold more highlight detail, but you may get reciprocity failure in the shadow areas (and overall, to some extent).

Reciprocity failure is when the sum of several exposures does not give the same response as the same total exposure time given in one exposure. You get the maximum response in the mid range, with less effective exposure for very short or very long exposures.

Some sensors might have more or less reciprocity failure.

With short exposures the shadow detail will get more noise in the relation,

Signal + Noise = Output

The linear part of the curve will be less linear in the foot when there is reciprocity failure, which is itself non-linear.

The higher-frequency readout can also put more RFI/EMI noise into the image, along with capacitive coupling between the sensor chip's pixel elements, and between the sensor chip, the circuit board, and other parts.

So if you want to shoot 10,000 short exposures per frame and work out detail from the dark, noisy images, you have some other issues in addition to the pre-amp noise, A2D converter issues, fixed-pattern noise in the sensor, and such.
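A back-of-the-envelope way to see the shadow problem in Python (the electron counts and the read noise figure are just assumed numbers): the signal adds up across the sub-exposures, but every readout adds its own read noise, and those add in quadrature.

import math

# Summing N short exposures collects the same signal as one long
# exposure, but pays the read noise N times (adding in quadrature),
# so shadow SNR drops as N grows.
def shadow_snr(signal_e, read_noise_e, n_exposures):
    shot_noise = math.sqrt(signal_e)                    # photon shot noise
    read_noise = read_noise_e * math.sqrt(n_exposures)  # one read per sub-exposure
    return signal_e / math.sqrt(shot_noise**2 + read_noise**2)

for n in (1, 20, 10000):
    print(n, round(shadow_snr(100.0, 5.0, n), 2))
# 1 -> 8.94, 20 -> 4.08, 10000 -> 0.2: the deep shadows drown in accumulated read noise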

If you have two sensors each getting a 1/48-second exposure, they are both operating in their "best" exposure conditions for the high and low exposures.