DXOMark has SNR charts in two forms:
1.) "print" which downsamples all cameras to the size of an 8x10
2.) "screen" which analyzes all cameras at their actual size
On Chart "1", yeah the 36MP D800 does only very very slightly worse (insignificantly so: you can barely see the difference on the chart, much less in real life, I'm just making a point) than the 12MP D3s. But look at Chart "2". OOPS!!
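For anyone curious, the effect of that "print" normalization can be put in rough numbers. A sketch under stated assumptions (that DXOMark resamples everything to a common ~8 MP output, and that downsampling averages uncorrelated noise; the screen SNR inputs below are made-up placeholders, not measured values):

```python
import math

# Sketch of a DXOMark-style "print" normalization. Assumption: the published
# method resamples every camera to a common ~8 MP output, and averaging during
# downsampling shrinks noise by the square root of the megapixel ratio
# (expressed in dB as 20*log10 of the linear factor).

def print_snr_db(screen_snr_db, sensor_mp, ref_mp=8.0):
    """Convert a per-pixel ("screen") SNR to the ~8 MP "print" reference."""
    return screen_snr_db + 20 * math.log10(math.sqrt(sensor_mp / ref_mp))

# Made-up placeholder screen SNRs, just to show the mechanism:
d3s_print = print_snr_db(38.0, 12.1)    # 12 MP D3s
d800_print = print_snr_db(34.0, 36.3)   # 36 MP D800
# The D800's extra pixels claw back 10*log10(36.3/12.1) ~ 4.8 dB in "print",
# which is why the two charts tell such different stories.
```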
(even worse is that it's the same comparing the D3s to the only-16MP D4)
Right now many Red users are downsampling to 2K. In other words, Chart "1" is the relevant result.
But just as the whole point of all the D800's pixels was to be able to print bigger than the D3s (and certainly bigger than 8x10), one day cinemas are going to switch to 4K and higher and we're all going to have to start looking at Chart "2".
But I'm still going to be asking, "What could they have done if they hadn't increased density?"
Take a look at the chart comparing the 12MP FF35 D3s (or even original D3) and the 12MP APS-C D2x. Night and day. That's what I wish Red had done this time around. Bigger and better pixels (or even just better pixels) for huge gains, not more and better pixels for incremental improvements or worse.
You know what'd be really cool?
Give us a FF35 (or better yet 36x36, 3:2 is so 1997) sensor with a similar pixel density to the MX sensor. Only the windowed S35 5K area is available for motion. But for stills shooters...
Now that's a DSMC camera.
The thing is, you can probably see a resolution difference between 2K and 5K when viewed at normal distances. It's subtle, you'd need above-average eyesight, and it shows only on some percentage of scene types (ones that contain many tiny sharp details), not all. But can you see a resolution difference between 4K acquired at 5K Bayer and 6K Bayer? No, according to just about all visual experts, you can't. So all those pixels are wasted on the abilities of human eyes. Better dynamic range, better color, better sensitivity, absent IR issues, 100% absent jello - all of these are visible all the time, even on an over-compressed web feed. So I can't help but wonder: why make an engineering trade-off for something we can't see, one that compromises something we can all always see? Because small pixels are a trade-off. No matter how great the color, sensitivity, etc. of Dragon might be, all of these would have been even better with fewer, larger pixels.
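The "bigger pixels" argument can be sketched numerically. This is a back-of-envelope model under simplifying assumptions (shot-noise-limited light, equal quantum efficiency and fill factor, photon count proportional to pixel area; the sensor width and exposure level are illustrative figures, not Red's actual specs):

```python
import math

# Back-of-envelope look at why larger pixels help, assuming shot-noise-limited
# light, equal quantum efficiency/fill factor, and photon count proportional
# to pixel area. For Poisson-distributed light, per-pixel SNR = sqrt(photons).

def photons_for_pitch(pitch_um, photons_per_um2=100.0):
    # photons_per_um2 is an arbitrary illustrative exposure level
    return photons_per_um2 * pitch_um ** 2

def shot_noise_snr(photons):
    return math.sqrt(photons)

# Roughly S35-width sensor (~24.6 mm across, an approximation) read out at
# two horizontal resolutions:
pitch_5k = 24.6e3 / 5120   # ~4.8 um pitch
pitch_6k = 24.6e3 / 6144   # ~4.0 um pitch

snr_5k = shot_noise_snr(photons_for_pitch(pitch_5k))
snr_6k = shot_noise_snr(photons_for_pitch(pitch_6k))
# Per-pixel SNR ratio equals the pitch ratio, 6144/5120 = 1.2x in favor of 5K
# at the same sensor width - before downsampling claws any of it back.
```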
On top of that, these trade-offs will make my clients unhappy, as they will further choke post for them. Here we work for clients, who have their own pipelines and their own ways. Alexa is already often chosen to "avoid the choke", and now they come along with something that will make that even more frequent. We are not Tom Lowe, working alone on gear of his choice and/or hiring whomever he likes to do post for him. We don't have that level of control in our work ecosystem, and few DPs have that kind of one-man-show situation. If our clients work on 2007 Mac Pros, there is nothing we can do. Like most people, our work is not aimed at an IMAX release either.
And on top of that... heat and compression issues could have been even further perfected with 5K-only, not to mention the cost and set time associated with filling even more SSDs and backing them up. Sure, maybe SSDs will be cheaper by then, but they won't be free.
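The storage side of the trade-off is simple arithmetic. A rough sketch, where every number is an illustrative assumption (24 fps, 16-bit raw, ~10:1 compression, these frame dimensions) rather than an actual REDCODE data rate:

```python
# Rough storage arithmetic for 5K vs 6K acquisition. Every number here is an
# illustrative assumption (24 fps, 16-bit raw, ~10:1 compression, these frame
# dimensions); real REDCODE data rates differ.

def data_rate_mb_s(width, height, fps=24, bits_per_pixel=16, compression=10.0):
    bytes_per_frame = width * height * bits_per_pixel / 8 / compression
    return bytes_per_frame * fps / 1e6

rate_5k = data_rate_mb_s(5120, 2700)
rate_6k = data_rate_mb_s(6144, 3160)
# The ratio is just the pixel-count ratio, ~1.40x: forty percent more SSDs
# to fill, copy, and back up for the same shooting day.
```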
Of course, if you want to do a 1000% re-crop, you need all the pixels you can get, but how often do we do this? Color and dynamic range, low noise - these are needed in each and every shot, not "that one time four months ago".
On Reduser there are a huge number of atypical users - more people working alone (and who combat that feeling of loneliness by hanging out here) than in the general population, where approaches must be sold to others and people must be convinced. They are also die-hard Red enthusiasts. I love our cameras, but like many DPs I know, not mainly because of the resolution. If you gave me an Epic with 3K RAW in the current 5K field of view, but with better color and sensitivity, I would gladly trade it for my current 5K Epic. It would save money and hassle in post, and color and light rendition are what my clients and I care about most, past a certain point. Again, 5K is already way, way past that point.
LMAO ... I changed my mind ... I marked 5K ... but I want 6K now!
(Corollary to Jannard's byline: I reserve the right to change my mind, including desired camera specs and delivery dates.)
- color precision
- color gamut
- color consistency
- 16-bit utilization factor
- native DR
- sensor noise floor
- in-camera noise reduction efficiency
- independence from custom hardware for a fluid raw workflow
A resolution increase seems understandable given one of the company's goals: achieving motion picture quality that matches or surpasses an equivalent DSLR. 15 stops of 16-bit ~20MP motion imagery fits that context.
An easy way to get less noise is to increase source resolution. If you downscale your CMOS sensor's output to 1/9 the pixel count (1/3 per side), you get extremely good debayering. 6K / 3 per side = about HD....
HD from 6K will shine like nothing else.
Video / debayering noise will be close to none.
Pixel sharpness will be far above any of our lenses.
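The noise gain from that downscale can be sketched as follows, assuming uncorrelated per-pixel noise and simple box averaging (which real downscalers only approximate):

```python
import math

# Sketch of the noise gain from downscaling, assuming uncorrelated per-pixel
# noise and simple box averaging: averaging k source pixels per output pixel
# cuts the noise standard deviation by sqrt(k).

def downscale_snr_gain_db(src_width, dst_width):
    linear = src_width / dst_width   # 6144 -> 1920 is a 3.2x linear downscale
    pixels_averaged = linear ** 2    # ~10.2 source pixels per output pixel
    return 20 * math.log10(math.sqrt(pixels_averaged))

gain = downscale_snr_gain_db(6144, 1920)  # ~10 dB under these assumptions
```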
Then again, explain to me why you would not want a 6K sensor that you could just touch-screen down to a 5K sensor... if you do not care about the extra sensor size or pixel count.
The one and only reason to say "no thanks" would be economy. But then again, evolution will go on, and if you do not have that 6K within a year's time, others will... There will be cameras out there, Alexa Studio among others, that will be preferred by a lot of DPs, but with greater range, frame rate, sensor size, and pixel count you are still competing, and probably doing so with better gear.
If you shoot for yourself, not renting, not counting pixels, and do not care about source resolution, then you need to ask yourself... why did you get 5K in the first place?