Hey guys --
I realize that there's already an HDR thread, but ultimately I'm interested in discussing something a bit different, so rather than hijack the thread, I figured I'd start a new one.
In the other thread people were talking about wanting to use HDR techniques with REDCODE output. Generally I tend to shy away from the term "HDR" because it brings to mind the strange-looking things from Flickr that we see so often -- oversaturated colors, strange black ghosting, and a complete lack of shadow. That's all well and good for the occasional music video (and I'd love to see if anyone manages to get that kind of image in motion), but what I wanted was something I could use when the normal range of tools (gamma, exposure, brightness, contrast, etc.) just won't capture the full range of what I'm looking for in an image. I want to boost the shadow values or save the highlights without messing with any of the intermediate values. Sure, there are ways of masking out and replacing bits of the image, but that's really not practical with motion pictures (unless you've got a lot of time on your hands). So I went on a quest last night to find some means of salvaging really rangey images that looked better than what I could accomplish with the normal set of tools.
Also, a lot of HDR depends on multiple exposures, etc., but for my purposes I'm more interested in mapping the luminance of the whole 11-stop range into an image displayable on a computer screen / HD / DV, while retaining the characteristics of the original image.
Disclaimer: I'm a DP and a photographer, not a programmer. I'm also not a video engineer, so I can't claim to know everything about gamma, etc., and I won't take it personally if one or more of you can demonstrate a better means of doing this sort of thing -- especially those of you who work in post and are really good at it. In fact, I encourage it; my goal in participating on this board is to share & learn.
Ok, so I found this little algorithm online:
In essence, it brightens the luminance of really dark pixels (compared to their immediate neighbors and the overall brightness of the frame) and darkens the luminance of really bright pixels (same), so it really only affects the outliers.
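Since that description is a little abstract, here's a toy Python sketch of the general idea -- only pixels that sit far from a blend of their neighborhood mean and the frame average get pulled back toward it. The function name, the blend, and the threshold/strength numbers are all mine, not from the paper:

```python
def compress_outliers(lum, window=1, threshold=0.5, strength=0.5):
    """lum: 2-D list of luminance values in [0, 1]. Returns a new 2-D list."""
    h, w = len(lum), len(lum[0])
    frame_avg = sum(sum(row) for row in lum) / (h * w)
    out = [row[:] for row in lum]
    for y in range(h):
        for x in range(w):
            # local mean over a (2*window+1)^2 neighborhood, clipped at edges
            ys = range(max(0, y - window), min(h, y + window + 1))
            xs = range(max(0, x - window), min(w, x + window + 1))
            vals = [lum[j][i] for j in ys for i in xs]
            local = sum(vals) / len(vals)
            ref = (local + frame_avg) / 2.0  # blend local and global context
            diff = lum[y][x] - ref
            if abs(diff) > threshold:        # only touch the outliers
                out[y][x] = ref + diff * strength
    return out
```

A mid-gray pixel surrounded by mid-gray stays put; a lone blown-out pixel gets darkened toward its surroundings, which is the "only affects the outliers" behavior described above.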
I opened up Shake and Xcode and got to making. My code is messy and not optimized, so I'm not going to post it (pvt msg me if you want it anyway), but I made a Shake node and it did exactly as I had hoped.
Here's an uncompressed TIFF of part of my Shake window (all the gfx are at actual size):
As labeled, the images are (left to right):
1) the image in shake,
2) gamma corrected image + contrast modification to the way I wanted it artistically,
3) the result of the Spatial Tone Mapping node I made, which included some tweaking within the algorithm (the c and lambda values) to make it look the way I wanted it to.
Overall, I think this works pretty well, and I'm planning on using it as a stopgap measure when I have out-of-control highlights or shadows; but what I'd like is for you all to tell me if there's something I'm missing (besides using curves manually or using mattes to isolate parts of the image), tool-wise or technique-wise. I'm relatively new (1 yr or so) to the compositing world (Shake).
BTW, here's the original image, and the equations as used in the Shake macro:
Original Image (memorial.exr)
c and u are the controls: c is the multiplying factor on the average luminance of the whole image (YA), and u controls the display color on the monitor (for our purposes it's a saturation control).
Y = black and white luminance image
YA = solid greyish image of the avg of Y
GC = Global contrast factor
YL = local luminance (pixels around them)
GC = c*YA
CL = YL*[log(.0001 + YL/Y)] + GC
R = [(R/Y)^u] * YD
G = [(G/Y)^u] * YD
B = [(B/Y)^u] * YD
The c slider controls the overall light/dark balance, with .15 being the "middleish" range as best I can tell.
I used .4 for the u value, as they did in the paper.
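For anyone who wants to play with the numbers, here's a rough per-pixel Python sketch of the math above. To be clear about what's mine versus what's in the post: the Rec. 709 luminance weights, the natural log, and treating the undefined YD as the compressed luminance CL are all my assumptions, and the function name is made up:

```python
import math

def tone_map_pixel(r, g, b, YA, YL, c=0.15, u=0.4):
    """YA: frame-average luminance; YL: local (neighborhood) luminance."""
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b    # Rec. 709 weights (my assumption)
    GC = c * YA                                  # global contrast factor
    CL = YL * math.log(0.0001 + YL / Y) + GC     # compressed local luminance;
                                                 # log base is my assumption
    YD = CL                                      # assuming YD is the display
                                                 # (compressed) luminance
    return ((r / Y) ** u * YD,
            (g / Y) ** u * YD,
            (b / Y) ** u * YD)
```

One sanity check you can do: feed it a neutral gray (r = g = b), and the three output channels come back equal, so the ratios (R/Y)^u only shift color where the input actually has color -- which lines up with u acting as a saturation control.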
I cheated in the following ways:
1) I used the LumToAlpha node from the ShakeSDK for Y, when I probably should have used their standardized equation.
2) I used a hasty method of calculating YL which, at small resolutions, may create artifacting because of a lack of smoothness, so I added an optional blur on YL (I didn't need to use it, though). For this image I used 4x4 blocks of averages, rather than the suggested 10x10, to prevent the artifacting.
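In case it helps anyone reproduce cheat #2, here's a rough Python version of the block-average approach to YL -- every pixel in a cell just gets that cell's mean. The block size and edge handling are my guesses at what I described, and a blur pass over the result (which I left optional in the macro) would smooth the seams between cells:

```python
def block_average(lum, block=4):
    """Hasty local-luminance estimate: each block x block cell of the 2-D
    list lum is replaced by its own mean. Smaller blocks (4x4 here) mean
    the 'local' value tracks the pixel more tightly than 10x10 would."""
    h, w = len(lum), len(lum[0])
    out = [[0.0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            cell = [lum[y][x] for y in ys for x in xs]
            avg = sum(cell) / len(cell)
            for y in ys:
                for x in xs:
                    out[y][x] = avg
    return out
```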
I'm happy with how it rendered the dome, the shadow values in the roof, and the stained glass. I'm not happy about the blooming (but I can live with it), or the fact that it perceptually saturated the colors (because of the way I did the Y channel and the math at the end), but I think messing with the u value could have fixed that.
Apologies for not being terribly clear. I'm a stream of consciousness poster.