You forget that digital cameras can have a 360 degree shutter. The motion continues without breaks (read-reset may cause a slight break, but that should be unnoticeable).
I actually wrote down a fairly detailed proposal of the technique last year - I posted it somewhere here, but can't find the thread right now, so here's my essay again:
Timeslice subframes: SNR / DR improvement with firmware and software modification for temporal supersampling
By Eki Halkka, 17. Nov. 2008 (Edited 20. Nov, 23. Nov)
The idea is to use "exposure timeslice subframes" to store more information for each frame via temporal supersampling - a bit like adding more cores to a processor instead of raising its clock speed, if you will.
Dynamic range can already be expanded with temporal supersampling using current tools - by shooting at a higher frame rate and combining the frames in post. Simply averaging two frames together reduces noise by a factor of √2, or in other words adds about 3 dB to the signal-to-noise ratio (SNR). Adding the frames together adds one stop of dynamic range (DR).
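The averaging/adding arithmetic can be sketched with a toy simulation. This is only an illustration with made-up numbers (the names `signal`, `frame_a`, `frame_b` and the noise level are hypothetical, not RED data); it just shows that averaging two identically exposed frames shrinks the noise std by √2, while summing them doubles the usable signal range:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two consecutive exposures of the same static scene: identical signal,
# independent noise (illustrative values, not real sensor data).
signal = np.full((480, 640), 100.0)
frame_a = signal + rng.normal(0.0, 5.0, signal.shape)
frame_b = signal + rng.normal(0.0, 5.0, signal.shape)

# Averaging: signal unchanged, noise std drops by a factor of sqrt(2).
averaged = (frame_a + frame_b) / 2.0
print(round(float((frame_a - signal).std()), 2))   # per-frame noise, ~5.0
print(round(float((averaged - signal).std()), 2))  # ~5.0 / sqrt(2) ~ 3.54

# Adding: the signal doubles, so sensor saturation is reached one stop
# later -- provided the container can hold values above full scale.
summed = frame_a + frame_b
print(round(float(summed.mean() / frame_a.mean()), 2))  # ~2.0
```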
Unfortunately, this is a rather complicated procedure in practice right now.
What I suggest is to make temporal supersampling an integrated part of the RED workflow. Ideally, this should be done on the raw linear sensor data, before debayering.
For the sake of simplicity, I'll use a target of 25 fps and a 1/50 shutter (nice round numbers) in the following example of a practical implementation:
The camera sensor is set to run at 100 fps with a 1/100 (360 degree) shutter speed. The exposure is set properly for this, not for the targeted 25 fps, 1/50 shutter speed - each timeslice thus captures half the light, preventing overexposure by one stop.
Out of these originally captured 100 fps frames - or exposure "timeslices", as I like to call them - each group of four would be treated as "subframes" of a final 25 fps frame.
Captured frames 1 and 2 of each group would be saved, but flagged as partial frames 1a and 1b. Captured frames 3 and 4 would be discarded.
The result would be a 50 fps stream, but the extra frame rate would be used to store twice the intra-frame precision, not more temporal information.
If possible, the unused subframes (3 and 4) would never be captured at all - the sensor would have a bit of time to cool down, which might help when implementing this at higher target frame rates.
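The grouping rule above can be sketched in a few lines of plain Python. This is only a sketch of the bookkeeping, not actual firmware - `tag_subframes`, `GROUP`, and `KEEP` are made-up names for illustration:

```python
# At a 100 fps sensor rate targeting 25 fps, every group of four captured
# frames maps to one output frame; captures 1-2 become subframes "a"/"b"
# and captures 3-4 are dropped (ideally never read out at all).
GROUP = 4  # sensor frames per target frame (100 / 25)
KEEP = 2   # subframes saved per target frame (1/50 effective shutter)

def tag_subframes(capture_indices):
    """Map raw capture indices to (target_frame, subframe_label) tags,
    or None for discarded timeslices."""
    tags = []
    for i in capture_indices:
        slot = i % GROUP
        if slot < KEEP:
            tags.append((i // GROUP, "ab"[slot]))
        else:
            tags.append(None)  # slices 3 and 4: sensor idles
    return tags

print(tag_subframes(range(8)))
# [(0, 'a'), (0, 'b'), None, None, (1, 'a'), (1, 'b'), None, None]
```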
Supersampled subframes give a lot of new flexibility in post production:
A) If subframes 1a and 1b are averaged together, the result is a 1/50 shutter speed frame with about 3 dB less noise, but the same dynamic range.
B) If subframes 1a and 1b are added together, we end up with a 1/50 shutter speed frame that has the same noise level but one stop more dynamic range - it's essentially a 13-bit accumulation of the original two 12-bit images, reaching sensor saturation one stop later. This naturally requires the post-production software to hold values that "go over 100%", floating-point style.
C) Any blend of the above is also possible - they're two sides of the same phenomenon.
D) One bonus possibility is to simply discard one of the two subframes - the user could change the shutter speed in post between 1/50 and 1/100, at the cost of losing the improved SNR / DR.
E) If all four captured 100 fps timeslices were saved as subframes, instead of just the two in the example above, the user would also have 1/33 and 1/25 shutter speeds at their disposal, trading more motion blur and capture storage space for additional DR and SNR in the resulting image - making the blur-vs-noise trade-off in post rather than in the field.
F) Timeslice subframes might also open up other interesting possibilities, e.g. isolating moving areas of the frame by difference matting.
G) As a bonus, this method would work as an "electronic ND filter" - the aperture could be opened by a stop without overexposing.
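Options A, B, D and E boil down to one small combiner in post. Here's a sketch in Python/numpy, assuming equally exposed, linear float subframes; `combine` and its parameters are hypothetical names for illustration, not any real post tool's API:

```python
import numpy as np

def combine(subframes, mode="average", use=None):
    """Blend equally exposed timeslice subframes of one target frame.
    `use` selects a subset of subframe indices (options D / E)."""
    chosen = subframes if use is None else [subframes[i] for i in use]
    stack = np.stack(chosen).astype(np.float64)
    if mode == "average":        # option A: lower noise, same range
        return stack.mean(axis=0)
    if mode == "add":            # option B: saturation pushed one stop up;
        return stack.sum(axis=0) # needs a float container for values >100%
    raise ValueError(f"unknown mode: {mode}")

a = np.full((2, 2), 0.6)  # subframe 1a (toy values)
b = np.full((2, 2), 0.8)  # subframe 1b
print(combine([a, b], "average"))           # option A: 0.7 everywhere
print(combine([a, b], "add"))               # option B: 1.4, over "100%"
print(combine([a, b], "average", use=[0]))  # option D: subframe 1a only
```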
Summing it up
+ About 3 dB less noise (6 dB with four subframes)
+ At least one f-stop better dynamic range at original noise level (two f-stops with four subframes)
+ Ability to control shutter speed in post (i.e. 1/100, 1/50, 1/33, 1/25)
+ Capture proper motion blur for video and faster shutter speed for stills simultaneously.
+ Can also be used as an "electronic ND filter"
+ Should help to minimize compression artifacts by averaging them over frames
+ Should be doable with current hardware, with minor changes to firmware and software.
+ The technique scales up as technology evolves
- With Red One, the technique probably only works in windowed modes
- The technique only works at relatively slow target frame rates
- The shutter speed variation is limited
- The storage requirements increase
- The post processing burden increases
- The technique improves the dynamic range "from the top up" by preventing overexposure - there's no low light sensitivity gain.
Doable with current technology
Red One can shoot 2K at up to 120 fps, which is enough for the above 3 dB / one f-stop improvement at target frame rates up to 30 fps. Depending on what the frame rate bottleneck is (sensor, electronics, redcode, other...), it might be possible to use three or more subframes instead of two in a "burst mode", for a further improvement of roughly 2-3 dB / half a stop or more, especially as the sensor has time to cool down between bursts.
Using this method would require adding a new timeslice/subframe recording mode to the camera's firmware, with the flags I described above. As there's probably not enough processing power in the camera to do proper frame blending, the real-time preview should be modified to account for this method - possibly simply by adjusting display gain if needed.
The post-production software would also need to be modified to read the flags and combine frames from the new recording mode in the ways I described above. If individual frames cannot be flagged in redcode, it would probably be enough to simply have a "footage uses two subframes" flag in the clip's metadata.
But that's it - that's all that's needed to make this work, as far as i can see.
But wait, there's more
In future cameras, faster subframe rates would allow even bigger improvements. If RED ever comes up with a 1000 fps high-speed camera, the high temporal precision could be used to store many additional stops as timeslice subframes - the camera would in fact qualify as a true high dynamic range motion camera.
Another highly desirable option would be to use timeslices with varying shutter speeds - the same way HDRI is currently captured with still cameras. I assume this would require much bigger modifications, but the potential benefits would also be big. In addition to capturing HDRI, the same footage could be used for video with a (relatively) slow shutter speed partial frame and for stills with a (relatively) fast shutter speed partial frame. The trade-off is a difference in exposure (the still frame would be underexposed), but the benefit of usable, sharp stills of moving subjects might be worth it.
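A minimal sketch of that varying-shutter merge, assuming two linear-light timeslices and a known exposure ratio (`merge_bracket` is a hypothetical helper, and substituting the rescaled short slice only into clipped highlights is just one simple merge strategy):

```python
import numpy as np

def merge_bracket(long_slice, short_slice, ratio, clip=1.0):
    """Merge two linear-light timeslices shot at different shutter speeds.
    `ratio` = long exposure time / short exposure time."""
    scaled_short = short_slice * ratio  # normalize to the long exposure
    clipped = long_slice >= clip        # where the long slice blew out
    return np.where(clipped, scaled_short, long_slice)

long_s = np.array([0.2, 1.0, 1.0])    # highlights clipped at 1.0
short_s = np.array([0.05, 0.3, 0.6])  # 1/4 the exposure, nothing clipped
print(merge_bracket(long_s, short_s, ratio=4.0))
# [0.2 1.2 2.4] -- highlight values recovered above the one-slice ceiling
```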