Thank you!
I am trying to research the state of HFR in the "ultra high framerate" sphere; there is little on realtime HFR above 120fps.

Originally Posted by
David Mullen ASC
All I recall are the old tests that Douglas Trumbull did for his Showscan process. He felt that at really high frame rates, over 100, the ability for the audience to perceive a difference was too small to offset the practical issues of shooting and projecting at those rates.
From what I am (faintly) aware of Douglas Trumbull's old tests, he was unable to push frame rates up the exponential curve necessary to make it worthwhile.
What we discovered is that you have to double the frame rate in order to have a really noticeable effect.
e.g. 120fps -> 240fps -> 480fps -> 960fps
Basically, each of these steps halves motion blur. The diminishing returns continue but never quite stop, because these rates are equivalent to display motion blur of:
1/120sec blur -> 1/240sec blur -> 1/480sec blur -> 1/960sec blur
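To put numbers on the halving rule, here's a minimal sketch, assuming an idealized flickerless sample-and-hold display where persistence equals the full refresh cycle (the figures are just the arithmetic above):

```python
# Persistence per refresh for an ideal sample-and-hold display:
# each doubling of frame rate halves the eye-tracking motion blur.
for fps in (120, 240, 480, 960):
    print(f"{fps:3d} fps -> {1000.0 / fps:.2f} ms of persistence (display motion blur)")
# 120 fps -> 8.33 ms, 240 fps -> 4.17 ms, 480 fps -> 2.08 ms, 960 fps -> 1.04 ms
```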
With experimental ultra-high refresh rate headroom now available for scientists/researchers/experimenters, it has been observed that display persistence (refresh cycle length, for sample-and-hold displays) behaves the same way as camera shutter speed. 480fps@480Hz on a flickerless display (like a 360-degree shutter) has exactly the same display motion blur as a photograph taken with a 1/480sec shutter speed, for the same physical angular motion speed of the display plane relative to the human eye.
Which is indeed noticeable.
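A hedged illustration of that shutter-speed equivalence (the 2400 pixels/second panning speed and the helper function are made-up example values, assuming an ideal sample-and-hold display and a global camera shutter):

```python
# Eye-tracked motion blur width = on-screen speed x blur window (persistence or shutter time).
def blur_px(speed_px_per_sec, blur_window_sec):
    return speed_px_per_sec * blur_window_sec

speed = 2400.0                     # example panning speed, pixels per second
print(blur_px(speed, 1.0 / 480))   # 480fps @ 480Hz flickerless display -> 5.0 px of blur
print(blur_px(speed, 1.0 / 480))   # photograph at a 1/480sec shutter   -> 5.0 px of blur
print(blur_px(speed, 1.0 / 120))   # same motion on a 120Hz display     -> 20.0 px of blur
```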
The jump from 120fps HFR to 1000fps HFR is roughly as noticeable as the jump from 60fps HFR to 120fps HFR. Yes, it's a much, much bigger jump that becomes mandatory to punch through the diminishing-returns curve -- but we're comparing 1/60 versus 1/120 versus 1/1000, which is 16.67ms versus 8.33ms versus 1.0ms -- nearly the same spacing (an 8.3ms delta versus a 7.3ms delta). It clearly outlines that overlooked diminishing-returns headroom: you need a massively ginormous jump in frame rate to be really noticeable. But it's quite useful for VR and Holodecks to try to achieve near-zero persistence -- something 120fps HFR can't do without strobing or blur side effects, for the aforementioned reasons.
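As a quick sanity check of that delta arithmetic:

```python
# Frame time (persistence) in milliseconds, and how much each jump removes.
def frame_time_ms(fps):
    return 1000.0 / fps

print(frame_time_ms(60) - frame_time_ms(120))     # ~8.33 ms removed going 60 -> 120 fps
print(frame_time_ms(120) - frame_time_ms(1000))   # ~7.33 ms removed going 120 -> 1000 fps
```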
Today, it's more practical to experiment: current off-the-shelf DLP chips can be driven by custom firmware to display a monochrome image at a true 1000Hz refresh rate, thanks to the DMD's ultra-rapid pixel switching speed. Several scientific suppliers now sell 500Hz and 1440Hz DLP projectors (albeit often at five-figure prices) -- e.g. ViewPixx. My prediction is that ultra-high refresh rates will eventually become a minor cost-add to future displays within 20 years. But right now I'm researching the source side of the equation.
4K is cheap now; tomorrow, 1000Hz may be too.

Originally Posted by
David Mullen ASC
Keep in mind that 1000 fps is over a 5-stop exposure loss compared to 24 fps, though you can gain one-stop by not using a shutter at those rates. But even if it's just a 4-stop loss, your 800 ASA camera just became a 50 ASA camera.
I'm certainly aware! Past high-speed cameras needed a ton of light.
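For what it's worth, a rough back-of-envelope of that exposure math (assuming a 180-degree shutter at 24fps versus a shutterless 360-degree 1000fps capture -- purely illustrative, not a real camera model):

```python
import math

exposure_24fps   = 1.0 / 48    # seconds per frame at 24fps with a 180-degree shutter
exposure_1000fps = 1.0 / 1000  # seconds per frame at 1000fps with no (360-degree) shutter

stops_lost = math.log2(exposure_24fps / exposure_1000fps)
print(f"{stops_lost:.1f} stops of exposure lost")              # ~4.4 stops
print(f"800 ASA behaves like ~{800 / 2**stops_lost:.0f} ASA")  # ~38 ASA (~50 ASA at a round 4-stop loss)
```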
Several brainstorms have been privately discussed amongst us about what will be needed 20 years from now to film for a VR or Holodeck environment in a much truer manner:
-- One theoretical camera being imagined is a photon camera. After you record the video (essentially timecoded photons), you can play it back at any frame rate you want, whether 37fps or 24fps or 1000fps. Post-processing would allow full brightness at low framerates to be multiplexed with the temporal resolution of ultra-high framerates (see the sketch after this list).
-- One can convert a 1000fps video into a much brighter 25fps video simply by stacking every 40 frames into 1. It's still the same number of photons, so with a good high-efficiency sensor this can in theory brighten a 1000fps video to the same brightness as a 25fps video. But when objects move, conventional denoising algorithms are REALLY bad. However, machine learning has shown some shocking improvements -- artificial-intelligence denoising algorithms could in theory re-brighten 1000fps videos while keeping temporal artifacts at bay.
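A toy sketch of both bullets (the event format and helper names here are hypothetical -- no such camera API exists yet -- but the arithmetic is just photon counting):

```python
import numpy as np

# Each photon is a (timestamp_in_microseconds, row, col) record. Frames at ANY
# playback rate are just time bins, and stacking 40 consecutive 1000fps bins
# contains exactly the same photons as one 25fps frame (hence 40x the brightness).
def photons_to_frames(photons, fps, duration_us, height, width):
    n_frames = duration_us * fps // 1_000_000
    frames = np.zeros((n_frames, height, width), dtype=np.uint32)
    for t_us, row, col in photons:
        frames[t_us * fps // 1_000_000, row, col] += 1   # accumulate photon counts
    return frames

# Illustrative synthetic data: one second of random photon arrivals on a 4x4 sensor.
rng = np.random.default_rng(0)
photons = [(int(rng.integers(1_000_000)), int(rng.integers(4)), int(rng.integers(4)))
           for _ in range(100_000)]

hi = photons_to_frames(photons, 1000, 1_000_000, 4, 4)   # dark individual frames
lo = photons_to_frames(photons,   25, 1_000_000, 4, 4)   # 40x brighter frames

# Summing each run of 40 consecutive 1000fps frames reproduces the 25fps frames:
print(np.array_equal(hi.reshape(25, 40, 4, 4).sum(axis=1), lo))   # True
```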
I saw some really good artificial-intelligence scalers at CES 2018 that actually converted 1080p into 4K and 8K material in a really fantastic way -- it was as if it knew that the blurry house and windows and tree leaves were actually the real thing, and artistically (in realtime!) put in real sharp wood, real sharp leaves, etc., creating detail where there wasn't any before. Twenty years from now, AI algorithms could probably convert a VHS videotape into a retina-sharp Holodeck environment in realtime, by automatically recognizing objects and using their own library of 3D objects and materials to replace the blurry stuff with the real thing. Pretty much a supercomputer's worth of realtime processing.
Some of us have commented that the exact same kind of process could be used to apply AI-based denoising/deblurring to frame-stacked-brightened 960fps videos. A 960fps source could theoretically also output 480fps, 240fps, 120fps, 60fps and 24fps versions (as divisors of 960), and an AI algorithm could use these multiple versions at multiple brightnesses (the lower the framerate, the brighter and less noisy) to successfully ultra-brighten the 960fps video without the darkness and noise.
Back in the 1930s and 1940s, you really needed bright light to expose color film at just a mere 24 frames per second (e.g. The Wizard of Oz) -- far more light back then than today's cameras need for 1000fps. Tomorrow's breakthroughs, a couple of decades from now, could make much brighter 1000fps filming practical, or some other framerateless method necessary for VR realism without the side effects of using a frame rate (i.e. static imagery to represent moving images).

Originally Posted by
David Mullen ASC
At those high rates, the screen becomes essentially flicker less and seems like a clear window, assuming there is sufficient resolution.
At ultrahigh framerates, flicker doesn't need to be involved to eliminate motion blur.
Blurless sample-and-hold. Blurless 360-degree.
Ultra-high framerates make low persistence possible without the need for black periods.
Which is better (for VR / Holodecks), because real life doesn't strobe and doesn't enforce motion blur above and beyond human vision.
Real life has no frame rate either, and a frame rate has many side effects (stroboscopic effects, wagon-wheel effects, stepping effects, persistence motion blur from eye-tracking across static frames) which affect passing a theoretical Holodeck Turing Test (being unable to tell VR and real life apart). But since we can't go framerateless or analog-motion, ultra HFR is the next best thing.

Originally Posted by
David Mullen ASC
But I also think you run into a variant of the uncanny valley phenomenon when shooting narrative fiction -- the more "real" the process becomes, the more artificial the techniques of moviemaking appear -- costumes, lighting, make-up, etc. Which is one reason why this sort of approach works better in nature documentaries, for example, because everything in front of the camera IS real.
Yes, this is a consideration.
That said, virtual reality literally requires the realism factor or you get nausea/headaches. In a "Holodeck" situation, this is a very different phenomenon from just staring at a flat screen: you're unable to bake in things like depth of field, defocus, motion blur, etc. -- the human eye needs to do that instead when you're in a virtual environment. Even motion blur (beyond human vision) creates major nausea in VR. This becomes important when designing a virtual reality movie, or a movie that you can eventually walk into.
Most of the time this is computer-generated graphics, but it can also include 360-degree movies (which currently often look like a flat plane, and have lots of problems) -- and before 360 VR movies become true 6dof stereoscopic 3D in one way or another (intense research is occurring as we speak), a lot of technological progress is needed.
Some of this is futurist stuff, obviously; however, I'm fascinated by the topic of HFR beyond 120fps. Humankind will certainly need to plan a path toward eventual ultra-HFR for virtual reality purposes -- e.g. 360-degree 6dof films that actually look like being immersed in a Holodeck, like the real thing, instead of looking at the inside of a hollow projection sphere.
All fascinating areas of study, and I'd be happy to collaborate on ideas with a few bleeding-edge HFR people to push beyond 120fps HFR. As a display-side guy, I have to reach across the universe to the source-side people and work on long-term solutions that solve the "Holodeck Turing Test" problem of matching real life even for fast motions like downhill skiing, racing, speedboating, etc. -- where all motion blur needs to come naturally from the human eye rather than existing in the source/display.
VR is growing by leaps and bounds, but the ultimate test is when VR stops looking fake and starts looking identical to real life. I believe the Holodeck Turing Test experiment is achievable within 1 to 2 human generations (ultra-high-Hz retina-resolution VR headset + ultra-high-framerate video): successfully tricking a person into believing they're wearing transparent ski goggles and staring at real life (even for fast motions), when it is actually virtual reality -- a Holodeck, per se. It is going to require a lot of future co-operation between the display-side experts and the source-side experts. Still, this is real "within our kids' lifetime" stuff!