iSCSI is very workable and robust... we have several iSCSI SANs at work, and I'm running one at home. Using GigE copper works just fine (especially if you have jumbo-frame support in your infrastructure).
The bigger issue, I believe, is that even the above-average Joe is going to be challenged to figure out how to find, install, and configure an iSCSI initiator for his personal machine, and then figure out how to map the camera in as a LUN in order to get at the data.
Of course, RED could roll all that together in a point-n-click installer... but I suspect that something you can just aim a web-browser at, or a discoverable share/export system that your Mac or PC can mount (NFS/CIFS), is much more likely a "workable" solution for most folks.
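For reference, here's roughly what that initiator dance looks like on a Linux box with open-iscsi installed. The target IP and IQN below are placeholders, not anything RED has actually published:

```shell
# Discover targets advertised by the camera/SAN (IP is a placeholder).
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to a discovered target (this IQN is made up for illustration).
sudo iscsiadm -m node -T iqn.2008-01.com.example:camera.ssd \
    -p 192.168.1.50 --login

# The LUN should now show up as a block device (e.g. /dev/sdX) to mount.
lsblk
```

Not hard for a storage admin, but you can see why a plain web page or CIFS share is a friendlier ask for most users.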
But, yes, the ability to insert, say, two SSD modules, and have the camera maintain mirrored copies of everything would also be extremely useful. Especially if they could be remotely mounted via ethernet so you could actually check shots from a computer without having to download.
Same with the SSD. I can transfer footage via ethernet, and the times I'm an idiot and forget a dock while on a shoot, I'll use it. But the dock will be in my own edit bay, and when I need it, I'll have it and use it over ethernet.
Don't get me wrong, these features are great to have and will someday save my ass when I've made a stupid mistake or forgotten something on a trip away from home (every time I've gone international, I've forgotten the power adapter for my laptop), but the ol' standards are still better for me.
From the standpoint of footage transfer, there is something to be said about a FW800 connection and eSATA vs Ethernet.
Just my 2 cents...
The long (135mm and up) L-primes can shoot sharp shots with 2.8x worth of TCs on a camera capable of 70lp/mm. So, that's 70*2.8=196lp/mm from the lens.
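That teleconverter math is just multiplication: the TC magnifies the image, so detail the sensor resolves at 70 lp/mm has to exist at 2.8x that spatial frequency on the lens side. A quick sanity-check in Python (function name is mine, just for illustration):

```python
def required_lens_resolution(sensor_lp_mm, tc_magnification):
    """A teleconverter magnifies the image by tc_magnification, so the
    lens must resolve detail that much finer than the sensor's limit."""
    return sensor_lp_mm * tc_magnification

# 70 lp/mm sensor behind 2.8x worth of TCs -> roughly 196 lp/mm needed
print(required_lens_resolution(70, 2.8))
```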
Yes, I completely agree that there would be some definite motion problems doing it the way I described. It would be interesting to see what you could do with it, but the Arri method seems like the only way you could really do it right: vary the gain, because the exposure time has to stay the same or the frames are going to be different regardless. Even on a stills camera, if there's any wind and you have trees in the shot, it can be difficult to get a usable result.
The question is, hypothetically speaking, the last frame will still contain the movement of the first frame (since the reset was never called), albeit blurred I imagine, so it would be quite interesting to see how that would combine. I have no camera I could attempt this with, since there is no control over the reset.
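The accumulation idea can be sketched numerically. This is a toy NumPy model under idealized assumptions (linear sensor, no noise, no saturation): each non-destructive read reports the total charge since the last reset, so the final read really does "contain" the first interval's exposure, and differencing adjacent reads recovers the per-interval values:

```python
import numpy as np

# Hypothetical per-interval light hitting one pixel over four
# sub-exposures between resets (made-up numbers).
light_per_interval = np.array([10.0, 12.0, 9.0, 11.0])

# Non-destructive readout: each read is the accumulated charge so far,
# so the last frame includes everything back to the first interval.
reads = np.cumsum(light_per_interval)

# Differencing adjacent reads recovers the individual short exposures,
# which is where the shorter-exposure HDR data would come from.
recovered = np.diff(reads, prepend=0.0)
```

Of course this ignores the motion problem entirely: anything that moved during the window is smeared across the accumulated reads, which is exactly the issue discussed above.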
I have only played slightly with HDR, so I'm not even sure what framerate would be required to get what kind of range in the image.
And I don't care about *seeing* HDR images on an HDR monitor; I like taking them and getting all that extra detail in the image. I cannot even imagine how that would look in video.
So I guess audio streaming for headphone monitoring is out, but nothing stops them displaying some meters and allowing level control. It would be so great if it could stream audio, though. Perhaps if the REDmote Pro could receive WiFi, it could do this, plus the video monitoring, when close enough to be in WiFi range? Perhaps that's part of why it costs as much more as it does?
....the speculation continues...