From the openexr.org site ... version 2 of OpenEXR is in beta. What fascinates me is:
* Deep Data. Pixels can now store a variable length list of samples. The main rationale behind deep-images is to have multiple values at different depths for each pixel. OpenEXR v2 supports both hard surface and volumetric representation requirements for deep compositing workflows.
This basically allows 3D streaming with volumetric textures (I'm assuming something like voxels), which I'd think would revolutionize 3D processing. Does anyone have any insight into this new format? I can't find much about it, and the ILM site is a bit too sophisticated for me to understand.
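To make the deep-data idea concrete, here's a toy sketch of what a "deep pixel" looks like and how it gets flattened back to an ordinary pixel. This is NOT the OpenEXR API, just my own illustration of the data model: each pixel carries a variable-length list of samples, each with its own depth, color, and alpha, and compositing merges them front-to-back with the standard "over" operator. The function name and sample layout are mine.

```python
# Illustrative sketch of a "deep pixel" (not the actual OpenEXR API):
# a variable-length list of (depth, color, alpha) samples per pixel.

def flatten_deep_pixel(samples):
    """samples: list of (depth, (r, g, b), alpha) tuples.
    Returns the flat (r, g, b, a) from front-to-back 'over' compositing."""
    out_rgb = [0.0, 0.0, 0.0]
    out_a = 0.0
    # sort nearest-first so closer samples occlude farther ones
    for depth, color, alpha in sorted(samples, key=lambda s: s[0]):
        weight = (1.0 - out_a) * alpha
        for i in range(3):
            out_rgb[i] += weight * color[i]
        out_a += weight
    return (out_rgb[0], out_rgb[1], out_rgb[2], out_a)

# A deep pixel with two samples: an opaque red surface at depth 1.0,
# and half-transparent green "fog" at depth 0.5 in front of it.
pixel = [
    (1.0, (1.0, 0.0, 0.0), 1.0),
    (0.5, (0.0, 1.0, 0.0), 0.5),
]
print(flatten_deep_pixel(pixel))  # -> (0.5, 0.5, 0.0, 1.0)
```

The point of keeping the per-sample list in the file (rather than just this flattened result) is that a compositor can later insert new elements at any depth without re-rendering.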
edit/add ... I'm finding it's not impossible to do z-depth on dropped frames (i.e. shoot 48 fps at a 360 degree shutter, do the z-depth between frames, and only save at 24 fps / 180 degrees with z-depth). This would be a lot nicer if I had an Epic (I could shoot 96 fps and get a really nice solve per frame using 4 frames; also, since this is pre-blur on the 2 frames that get added to make the 180 degree shutter, the solve is way crisper for the mids and highs). I'll write something up when I have some time.
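For anyone following the shutter math: at 96 fps with a 360 degree shutter each frame exposes 1/96 s, so averaging two adjacent frames approximates one 1/48 s exposure, which is a 180 degree shutter on a 24 fps timeline, while the unused frames in each group stay sharp for the depth solve. Here's a hedged sketch of that frame-combining step (function name and the flat-list frame format are mine, not from any real pipeline):

```python
# Sketch of synthesizing 24 fps / 180-degree-shutter frames from a
# 96 fps / 360-degree-shutter capture. Assumption: averaging two
# adjacent 1/96 s exposures approximates one 1/48 s exposure.

def synth_180_shutter(frames_96fps):
    """frames_96fps: list of frames, each a flat list of pixel values.
    Returns 24 fps frames, each the average of the first two frames of
    every group of four (the other two are left sharp for depth solving)."""
    out = []
    for i in range(0, len(frames_96fps) - 3, 4):
        a, b = frames_96fps[i], frames_96fps[i + 1]
        out.append([(x + y) / 2.0 for x, y in zip(a, b)])
    return out

# Fake capture: 8 tiny "frames" whose pixels all equal the frame index.
frames = [[float(n)] * 4 for n in range(8)]
print(synth_180_shutter(frames))  # -> [[0.5, 0.5, 0.5, 0.5], [4.5, 4.5, 4.5, 4.5]]
```

The same idea at 48 fps is even simpler: one 1/48 s frame already is a 180 degree exposure on a 24 fps timeline, so you just drop every other frame after solving depth between them.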