View Full Version : Combined exposures
02-16-2007, 07:59 PM
Thinking about all these approaches for getting increased dynamic range.
What software can combine images in the following fashion?
You sample a scene over multiple exposures.
You sample one exposure to pick up shadow details and another to pick up detail in the highlights. Nothing new there!
You notice certain pixels change value in relation to their surrounding pixels!
For example, an area in the scene shot at 1/1000 sec was all black. You then shoot the same scene at 1/100 sec and notice the pixel in the centre is darker than its neighbors.
Or similarly, another part of the image was all white when shot at 1/100sec.
You then shoot the same scene at 1/1000sec and notice the pixel in the centre is darker than its neighbors.
Can you get the mean difference (average or similar) value of any pixels that have changed in relation to their neighbors, and use this new pixel value in a combined image?
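No shipping software does exactly this, as far as I know, but the neighbor-comparison idea can be sketched in a few lines of Python. This is a hypothetical sketch, not any real product's algorithm: grayscale frames as nested lists, and a made-up rule that keeps, per pixel, the value from whichever exposure shows more local detail.

```python
def neighbor_delta(img, x, y):
    """Difference between a pixel and the mean of its 4-connected neighbors."""
    neighbors = [img[y - 1][x], img[y + 1][x], img[y][x - 1], img[y][x + 1]]
    return img[y][x] - sum(neighbors) / 4.0

def merge_by_local_contrast(short_exp, long_exp):
    """Per interior pixel, keep the value from whichever exposure shows the
    larger neighbor-relative change -- i.e. the exposure that actually
    resolved detail at that point. Border pixels fall back to long_exp."""
    h, w = len(short_exp), len(short_exp[0])
    out = [row[:] for row in long_exp]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if abs(neighbor_delta(short_exp, x, y)) > abs(neighbor_delta(long_exp, x, y)):
                out[y][x] = short_exp[y][x]
    return out
```

A real implementation would compare a larger neighborhood and blend rather than hard-switch, but the decision logic would be this shape.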
02-16-2007, 08:47 PM
For still images, Photoshop is able to stitch HDR images together out of a series of exposures.
For video images, there are many threads on this. Try the search function. One of the threads is
No one has yet come up with a system that works where there is motion.
02-16-2007, 09:05 PM
Just to clarify:
You're talking about combining two images of different exposures, but of relative time similarity (like frames 1 and 2 of a 48 fps exposure instead of 24) and then taking them, combining, and then spatial tone mapping on top of that?
I bet it would be a neat effect, but unless there's some nifty way of resolving which exposure gets precedence when it comes to motion, you're going to have some difference between the two frames, no matter how slight, and it might result in some ghosting.
That said, if you have some footage like that to play with, I'd be happy to play around and see if I can make a node that will do what you ask... at the very least it might be interesting to experiment... even if we just end up with a strange semi-ghosting HDR-ish look we call the "Farland look" ;)
For the tone mapping portion, I've been messing a lot with that recently, as referenced here:
02-16-2007, 11:01 PM
That’s correct, but I still don’t have a clue whether it’s all bollocks or not!
I was reading your thread with the nice church pics and saw there was contrast missing in the final church scene, so I thought it was important to have relative (local) luminance changes as much as global ones.
Thinking about Stuart English’s procedure for HDR using a PC-controlled RED: you sample at half the maximum frame rate for the resolution you’re using (30 fps at 2K), then take back-to-back ‘paired frames’ (expose at the end of the first frame and the start of the second) and combine them with some magic. So you’d be shooting at 30 fps, exposing at, say, 1/100 sec at the end of the first frame and at 1/1000 sec at the beginning of the second frame.
I’m hoping a lot of the glow/reduced contrast, or whatever else makes up an HDR picture, will be reduced if you make localized, error-corrected decisions about the image rather than global ones.
Now, with motion: I’m hoping that if smart folk (that means you and others) can get things like MPEG codecs to work out what’s moved and track it, surely the same approach can track and match pixels that may have moved between the two exposures when they shouldn’t have. Mainly to improve the loss of resolution and reduce ghosting on high-contrast edges, where the software doesn’t know whether a bright pixel in the second frame is the result of the increased exposure, or because, in the second exposure, sunlit pixels outside moved across window-pane pixels from the first frame.
After that, do some proximity pixel comparisons across the two frames/exposures of the same scene. Now to cure cancer!!
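That "did this pixel move, or did it just get brighter from the exposure change?" test can be expressed very simply. A sketch, assuming linear-light pixel values and made-up numbers for the exposure ratio and tolerance; clipped (all-black or all-white) pixels would need to be skipped by the caller:

```python
def is_motion(pix_a, pix_b, exposure_ratio, tolerance=0.25):
    """Flag a pixel pair as motion if the longer exposure's value is not
    explained by the exposure difference alone.
    pix_a: value from the 1/1000 s frame; pix_b: from the 1/100 s frame;
    exposure_ratio: 10.0 here (1/100 s gathers 10x the light of 1/1000 s)."""
    predicted = pix_a * exposure_ratio  # what pix_b should be if nothing moved
    return abs(pix_b - predicted) > tolerance * predicted
```

Pixels that pass this check can be merged normally; pixels that fail would be handed to the motion-tracking/ghost-suppression step instead.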
02-17-2007, 09:49 AM
With 4K capture to 1080p output you could pan and scan a scene to make up for a minimal amount of variation, as long as you have two or three stationary reference points.
02-17-2007, 10:47 AM
There is some more info on this topic at http://www.dvxuser.com/V6/archive/index.php/t-59340.html.
I think they are talking about running the camera at 2X (or even 5X) speed, and changing the exposure on every other frame like Glenn and Wade are discussing.
In the digital still imaging class I teach at the local community college, I show a simple method to increase apparent dynamic range with Photoshop using layers. Simply record a scene at two different exposure times: one for the highlights and the other for the shadows. Paste the highlight exposure on top of the shadow exposure, giving you two layers. Then change the opacity of the top (highlight) layer to let some of the detail from the shadow layer show through. We usually need to tweak overall exposure and contrast with adjustment layers to get the image to look right. In some cases this method seems to work better than Photoshop's HDR function.
Maybe you could perform a similar function in FCP or other editor by putting two tracks recorded with different exposure times of the same scene, then change the opacity of the top track.
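The opacity blend that Photoshop (or an editor's opacity control) performs in Normal blend mode is just a per-pixel weighted average. A minimal sketch in Python, with frames as nested lists of grayscale values:

```python
def opacity_blend(top, bottom, opacity):
    """Composite a 'highlight' layer over a 'shadow' layer at the given
    opacity (0.0-1.0) -- the same arithmetic as lowering the top layer's
    opacity in Normal blend mode."""
    return [[opacity * t + (1.0 - opacity) * b
             for t, b in zip(top_row, bot_row)]
            for top_row, bot_row in zip(top, bottom)]
```

At 50% opacity a 100-value highlight pixel over a 200-value shadow pixel lands at 150, which is why the result usually still needs contrast tweaks afterwards.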
As you have discussed, motion is very problematic. In the Photoshop HDR function, there is a check box "Attempt to align images," but I don't think this will work when an object is moving toward or away from the camera. It might help for lateral motion.
If the shutter speeds were fast enough, say 1/500 and 1/1000 sec, do you think you might not notice the disparity?
02-18-2007, 05:11 PM
Well, here we go back toward multiple chips: one very sensitive chip and one much less sensitive chip with, for instance, an ND filter in front of it. We put a prism in front of the two Bayer sensors and, at 24p, record the overexposed and underexposed frames with no temporal offset. Since we recorded the same frame at two separate, aligned locations, we now have 20+ stops of latitude to play with. Perhaps the RED 2.0 will have two chips and a prism?
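The latitude arithmetic behind a claim like "20+ stops" is simple: if the two sensors' usable ranges overlap, the ND just shifts the second chip's range down, and the combined range is the union of the two. A sketch, where the per-chip and ND figures are assumptions, not RED specs:

```python
def combined_stops(chip_stops, nd_stops):
    """Two identical sensors behind a beam splitter, one behind an ND:
    the ND shifts the second chip's usable range down by nd_stops, so
    the combined latitude extends by that amount (assuming overlap)."""
    return chip_stops + nd_stops
```

So two hypothetical 11-stop chips with a 10-stop ND on one of them would cover 21 stops end to end, though in practice you would want a stop or two of overlap for clean cross-fading between the chips.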