To me the core issue of this thread is whether staying RAW as long as possible is really a significant benefit over a proper transcode to a robust RGB format.
To determine that, several issues need to be examined comparatively to find any pitfalls or benefits in terms of image quality. The debate over the practicality of staying RAW longer is typically more about the particular production environment one encounters on a given project. It would be great if we could cite universally beneficial practices that apply to any project, but as the spectrum of comments on this forum proves, that's not reality.
The reality I see is that budget, turnaround time, sophistication of personnel, post tools, etc. are all factors that need to be considered, all the way down to how a scene is lit and exposed. I also believe that native scene DR, and whether elements in the frame with lower exposure levels need to "play" in the finish, have to be considered in choosing a strategy. For example, for a dimly lit scene I might add more light to get a cleaner "RAW negative" and then restore the shadows in the grade to keep the noise from intruding. On a sunny exterior I might underexpose a little to hold more highlight information, knowing that with all that blue light I should be able to dig whatever I need out of the shadows with minimal noise.
ISO on a digital camera can be counterintuitive, so I suggest we think in terms of signal-to-noise ratio. One of the things I noticed immediately with MX sensor performance in the Epic vs the R1 was that the increased precision in the shadows (which I attribute to the added bit depth) changed the noise "signature"; I would wager that's one of the things Gunleik notices when comparing them. The lighter compression options made possible by the higher data rates also seem to give more texture to areas of relatively consistent density/chroma, which I assume is due to less interpolation in the wavelet encode - but that's a Graeme issue that he may be constrained from discussing in detail publicly. In any case, to add a more seat-of-the-pants perspective to all this tech talk, I have a story:
My friend, and legendary RedUser, Evin Grant, embraced the noise right from the earliest Mysterium/R1 days. He liked the texture and thought of it as digital grain. He loved the look of film and I think he found the "clean" digital negative too sterile, so he sought out some noise to add some guts and grit. He made some very evocative imagery with that approach, though his skill as a DP was probably more to "blame" (hehe) than his ardor for noise. I bring this up because at the time I was firmly in the other camp, trying to kill noise, because I disliked the way the noise we had in 2008 looked roughly the same no matter what the other image metrics were - it was easier to see and harder to kill under 3000K, but its character was roughly the same as in dark shadows under daylight.
With the introduction of the MX sensor and the concurrent release of new color science (RC2/RG2 IIRC), the noise "character" changed rather dramatically. Instead of chunky, it was now a smooth gray bottom end and, even better, we had another two stops of genuine DR before the image even hit the noise floor. For grain aficionados like Evin, there were now several post plug-ins that could dial in the amount and type of grain desired - I believe there was even one company that used scans of popular film stocks to create a random noise pattern that could be comped in during DI/finishing.
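For anyone curious what "comping grain in" amounts to mechanically, here is a minimal sketch. It assumes a float image in [0, 1] and a grain plate normalized around mid-gray (0.5); the function name and strength value are hypothetical, not any particular plug-in's API:

```python
import numpy as np

def add_grain(frame, grain_plate, strength=0.35):
    """Overlay a mid-gray-normalized grain plate onto a float frame.

    frame: image with values in [0, 1].
    grain_plate: same shape, centered on 0.5 (so subtracting 0.5
    yields a zero-mean texture). strength scales the grain amplitude.
    """
    out = frame + strength * (grain_plate - 0.5)
    return np.clip(out, 0.0, 1.0)

# toy example: flat mid-gray frame plus synthetic gaussian "grain"
rng = np.random.default_rng(0)
frame = np.full((4, 4), 0.5)
grain = 0.5 + rng.normal(0.0, 0.05, size=(4, 4))
result = add_grain(frame, grain)
```

Real grain tools are far more involved (per-channel response, exposure-dependent grain size, etc.), but the core operation is this kind of zero-mean overlay.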
With the Epic/MX package I am now seeing noise "signatures" that track more accurately to the specific scene and to subtle differences in the deep shadows. This makes me more comfortable with letting the noise "play" in certain material for some of the same goals Evin had. I still think adding random grain in post is a better way to get that grittier look, but that takes some serious computational horsepower in post AFAIK. In any case, it's worth noting that all noise is not created equal, and so determining a specific numerical value of S/N in dB is not as purely empirical as we camera nerds would like.
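To make that last point concrete, here is a sketch of the usual patch-based S/N measurement, assuming a nominally flat gray patch (the patch values and 18% gray level are illustrative). It boils the noise down to one number and, as argued above, says nothing about its character - chunky vs. fine, chroma vs. luma:

```python
import numpy as np

def patch_snr_db(patch):
    """Estimate S/N in dB from a nominally flat patch.

    Signal is taken as the patch mean, noise as its standard
    deviation: SNR_dB = 20 * log10(mean / std). A single scalar -
    two very different-looking noise patterns can score the same.
    """
    signal = patch.mean()
    noise = patch.std()
    return 20.0 * np.log10(signal / noise)

# toy example: 18% gray patch with mild gaussian noise
rng = np.random.default_rng(1)
patch = 0.18 + rng.normal(0.0, 0.002, size=(64, 64))
snr = patch_snr_db(patch)
```

Two cameras can post identical numbers here while one has smooth gray shadows and the other has chunky chroma blotches, which is exactly why the number alone doesn't settle these debates.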
FWIW, even with the newest color science I still pop on a 1/4 CTB anytime the light sources are under 3000K. I know that Graeme has gone to great lengths to optimize the reconstruction of balanced images even when the color temp is near the extremes, but it seems to help (perhaps it's just my bias clouding my judgment). I bring it up in this thread because I believe it makes the choice to transcode to an RGB format at the one-light stage easier to make.
Cheers - #19