Possible RED Workflow:
Images are captured on set in 4k RAW and recorded to a portable hard drive.
The data is then ingested onto an on-set laptop connected to external hard drives for archival and editorial.
Once back 'home', the footage is batch-processed out to SD proxies using REDCine. Bulk naming would be possible using simple scripting, with a template such as: TestProject_%Camera_%Tape_%Date_%ClipNum_SD.avi
Obviously some of the variables would come from the REDCode metadata and others would be defined within the REDCine project settings.
Within each of these processed RGB files there would also be some absolute clip and/or frame ID metadata.
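As a sketch of the bulk-naming idea, here is how such a template could be expanded from clip metadata. None of these field names are real REDCode fields; they are just illustrative:

```python
# Hypothetical template expansion for bulk proxy naming.
# %Field placeholders are filled from per-clip metadata.
template = "TestProject_%Camera_%Tape_%Date_%ClipNum_SD.avi"

clip_meta = {
    "Camera": "A",
    "Tape": "001",
    "Date": "20061114",
    "ClipNum": "0042",
}

name = template
for field, value in clip_meta.items():
    name = name.replace("%" + field, value)

print(name)  # TestProject_A_001_20061114_0042_SD.avi
```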
Within REDCine the software would also be logging a Reference Table for the footage:
REDCode4k_ShotA == Test_Project_CameraA….SD.avi
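In code terms the Reference Table is just a mapping from original REDCode clip to the proxy it produced, which can be inverted when an EDL comes back referencing proxies (the file names here are purely illustrative):

```python
# Illustrative Reference Table: REDCode source -> SD proxy.
reference_table = {
    "REDCode4k_ShotA": "TestProject_CameraA_001_20061114_0042_SD.avi",
}

# When the EDL references proxies, invert the table to find the 4k source.
proxy_to_source = {v: k for k, v in reference_table.items()}
print(proxy_to_source["TestProject_CameraA_001_20061114_0042_SD.avi"])
```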
I would then hand off the SD proxies to an editor, and they can do whatever it is they normally do in their post workflow to edit. They will do no grading on this footage.
The editor will deliver an EDL to the post house (or, using features mentioned later, submit the EDL for processing to the post house).
REDCine loads the EDL, sorts through its Reference Table, and starts exporting clips in one of several configurable manners:
Let's say for example that you have shot two takes that you have now edited.
You have a 401 frame long sequence named "DV_A_FOOTAGE"
and a 200 frame long sequence named "DV_B_FOOTAGE"
You edit this in, say, Avid and export an EDL which consists of:
CLIP_0001: DV_A_FOOTAGE from frames 100-150
CLIP_0002: DV_B_FOOTAGE from frames 0-10
CLIP_0003: DV_A_FOOTAGE from frames 250-400
REDCINE looks at this EDL and sees that DV_A_FOOTAGE is only needed in the frame ranges 100-150 and 250-400, and DV_B_FOOTAGE only from frames 0-10. The user has configured REDCine to add 10 frames of padding beyond what the EDL requires, so that transitions can still be adjusted in the online, plus a 5-frame slate.
So REDCINE now exports a new 4k online cut of two files consisting of:
4K_A_FOOTAGE: [5 frame slate][DV_A_FOOTAGE: 90-160][5 frame slate][DV_A_FOOTAGE: 240-400]
4K_B_FOOTAGE: [5 frame slate][DV_B_FOOTAGE: 0-20]
REDCINE now performs a simple math operation on all of the EDL frames (assuming frames start at 0 and frame ranges are end-exclusive):
CLIP_001: 4K_A_FOOTAGE from frames 15-65
CLIP_002: 4K_B_FOOTAGE from frames 5-15
CLIP_003: 4K_A_FOOTAGE from frames 90-240
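The remapping can be sketched in a few lines (my own illustration, not RED's code; frame ranges are read as start-inclusive, end-exclusive, which reproduces the remapped ranges above):

```python
# Sketch of the EDL remap REDCine would perform after trimming + slating.
SLATE = 5   # slate length in frames
PAD = 10    # padding added around each needed range

# EDL cuts as (source, start, end), half-open ranges.
edl = [
    ("DV_A_FOOTAGE", 100, 150),
    ("DV_B_FOOTAGE", 0, 10),
    ("DV_A_FOOTAGE", 250, 400),
]
source_len = {"DV_A_FOOTAGE": 401, "DV_B_FOOTAGE": 200}

# 1. Collect padded, clamped ranges per source (merging overlaps would go here too).
chunks = {}
for src, start, end in edl:
    lo = max(start - PAD, 0)
    hi = min(end + PAD, source_len[src])
    chunks.setdefault(src, []).append((lo, hi))

# 2. Lay out each exported file: [slate][chunk][slate][chunk]...
layout = {}  # src -> list of (lo, hi, offset_in_export)
for src, ranges in chunks.items():
    pos = 0
    layout[src] = []
    for lo, hi in sorted(set(ranges)):
        pos += SLATE                      # slate before each chunk
        layout[src].append((lo, hi, pos))
        pos += hi - lo

# 3. Remap an EDL cut into the exported file's frame space.
def remap(src, start, end):
    for lo, hi, offset in layout[src]:
        if lo <= start and end <= hi:
            return (offset + start - lo, offset + end - lo)
    raise ValueError("cut not covered by exported chunks")

for src, start, end in edl:
    print(src, remap(src, start, end))
```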
The user exports this new EDL and loads it into Fusion, Combustion, AE, Smoke, or Avid.
A second option would be, instead of grouping 4K_A_FOOTAGE into one file, to split it up into two files:
4K_A_1_FOOTAGE: [5 frame slate][DV_A_FOOTAGE: 90-160]
4K_A_2_FOOTAGE: [5 frame slate][DV_A_FOOTAGE: 240-400]
4K_B_FOOTAGE: [5 frame slate][DV_B_FOOTAGE: 0-20]
The EDL is then updated accordingly.
Settings exposed to the user would include:
Reduce Footage (Instead of exporting the full clip frame for frame from the original)
Pad cuts by ________ frames (Contextual if reducing)
Create new Clip for Each Cut (Each cut based on the EDL)
Generate Slate [design]
The slate would be a ridiculously simple text-creation tool, maybe even following basic HTML formatting and XML parsing:
<ORIGINAL FRAME RANGE><BR>
<LENGTH IN FRAMES><BR>
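One way the settings above, including the slate design, might be represented, purely as a sketch and not RED's actual format:

```python
# Hypothetical export-settings structure for the options listed above.
export_settings = {
    "reduce_footage": True,   # trim to EDL ranges instead of full clips
    "pad_frames": 10,         # contextual: only applies when reducing
    "clip_per_cut": False,    # one file per EDL cut vs. grouped per source
    "slate": {
        "enabled": True,
        "length_frames": 5,
        # Simple HTML-like slate template, parsed per clip:
        "template": "<ORIGINAL FRAME RANGE><BR><LENGTH IN FRAMES><BR>",
    },
}
```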
This would greatly facilitate working with VFX and color grading, where you don't necessarily want to send someone more frames than they really need. You don't want to grade frames you won't see, and quite honestly people just might not have enough hard drive space for *another* copy of a significant portion of their film in 4k.
This way, if you only use 2 frames of a 1000-frame clip, REDCINE only exports those two frames (or maybe a few more if you have defined front and end clip padding). This would also greatly accelerate the REDCINE process, since it won't have to encode, white balance and LUT 1000 frames.
These clips are then graded in whatever grading application the user wants, such as Combustion, Fusion, Lustre, etc., and re-rendered into a new folder.
The EDL is loaded into either an online 4k editor such as Smoke, or a compositing application for render such as Combustion (w/ Automatic Duck) or Fusion, which supports EDLs natively, and rendered out as a 4k master.
The Command Line
Now, I mentioned earlier that the editor could "submit" the EDL for processing to the post house. One thing I've found almost every post house wants is a somewhat automated, business-wide pipeline. Giving access to all of the functionality of the REDCine software via a command line would enable third parties to tightly integrate REDCine into their workflows.
Now I'm sure you guys are going to invest a great deal of energy and time into getting the interface for REDCine right, but don't take this the wrong way: I don't trust you to make it perfect for me. I don't expect anybody to ever create the perfect interface.
By exposing all of the primary features of the REDCine software through a command-line interface, any program could make a call to the REDCine software, very much like most 3D rendering engines, such as Mental Ray.
An example might be a plug-in for Combustion that allows the user to "Request Full Resolution". Combustion would look at the file name DV_FootageA.avi and send a call to REDCine.exe:
REDCine.exe -r rcode -i 'C:\Renders\DV_FootageA.avi' -h 871 -w 2048 -s 15 -e 65 -L 'C:\Luts\test lut.lut' -p '…\Test Project.settings' 'C:\Renders\2k_FootageA.avi'
The footage is rendered and saved to the appropriate place, and Combustion then automatically updates its file path for that footage to 2k_FootageA.avi.
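A sketch of how such a plug-in might build and run that call (the flags mirror the hypothetical command line above; none of this is a real REDCine interface, and the settings-file path is just a placeholder):

```python
import subprocess  # the plug-in would use this to run the command


def build_redcine_cmd(proxy, out, start, end, lut, project):
    """Assemble the hypothetical REDCine.exe command line as an argv list."""
    return ["REDCine.exe", "-r", "rcode", "-i", proxy,
            "-h", "871", "-w", "2048",
            "-s", str(start), "-e", str(end),
            "-L", lut, "-p", project, out]


cmd = build_redcine_cmd(r"C:\Renders\DV_FootageA.avi",
                        r"C:\Renders\2k_FootageA.avi",
                        15, 65,
                        r"C:\Luts\test lut.lut",
                        "Test Project.settings")
print(" ".join(cmd))

# The plug-in would then run it and, on success, repoint its footage path:
#   subprocess.run(cmd, check=True)
```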
It could also make custom network rendering simple. On my last 3D project we used a PHP/SQL-based network rendering system for our Maya work. All of the render settings would be set up using the web interface and then submitted to the database. The long-term plan for the web interface was to put all of our status/comment/review online, but we didn't have time to complete it. The point is that, using batch files, a primitive render-management system was possible. All third-party distributed render systems, such as Frantic Films' Deadline, currently exploit command-line rendering.
I'm not exactly sure what I would use the REDCine Command line interface for, but that's the point. I'm not sure how exactly I'll use REDCine in general, which means when I have that deadline looming and I feel the need to automate a process to save some time, I don't want to be stuck in a situation where there is something I can't do.
Theoretically, using the REDCine command line, all of the advanced EDL work I mentioned earlier could be done in another application. So if you don't do it, someone else could, using calls to REDCine.exe.
Avid could theoretically write an application which interfaces directly with the REDCine engine and skips the EDL step completely.
Some obvious parameters would be:
Codec, codec settings, resolution, start frame, end frame, LUT locations, project settings, color presets, output locations, input locations.
The goal is for other applications to be able to fully control the REDCine software, much like through an API. When I think of REDCine I actually think of it as a rendering application, much like Mental Ray or even Maya.exe, and I would hope that it could be as easily integrated into pipelines around the world.
I hope this all makes sense. Feel free to ask questions; I can ramble, and it's hard to remember everything I would do without answering specific questions, as I'm sure you know.
- Gavin Greenwalt