I hope that this is the appropriate forum for this idea.
All other things being equal, downsampling a sensor's output to half its native resolution (a quarter of the pixels) gives a noticeably better result than shooting the lower resolution natively. I only half know what I'm talking about, but the higher resolution capture records 'high frequency' detail that the natively lower resolution image never sees, and some of that benefit survives the down-sample even though the final pixel counts are identical. It also means you can work with lower resolution footage from import all the way to delivery without needing the most powerful, and therefore most expensive, data systems (computers and RAIDs).
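To make the down-sampling idea concrete, here's a minimal sketch of half-resolution downsampling by 2x2 box averaging, using synthetic values rather than real sensor data. Averaging four photosites per output pixel should cut uncorrelated noise by roughly half (1/sqrt(4)):

```python
import numpy as np

# Synthetic "sensor" frame: flat grey signal plus Gaussian noise.
rng = np.random.default_rng(0)
signal = np.full((512, 512), 100.0)
noisy = signal + rng.normal(0.0, 8.0, signal.shape)   # noise sigma = 8

# Average each non-overlapping 2x2 block -> half resolution,
# a quarter of the pixels.
half = noisy.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(noisy.std())   # close to 8
print(half.std())    # close to 4: noise roughly halved
```

This is only the crudest possible down-sample (a box filter); a real pipeline would use a better filter kernel, but the noise-averaging benefit is the same in spirit.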
Now, a lot of folks will shoot full frame (5K on the EPIC) and down-sample to 4K (or HD), and that's great. I imagine Dragon users will do the same. But here's the deal: you can arguably get a result 80% as good as native or down-sampled 4K when you down-sample from 6K to 3K, while handling only a quarter of the pixels of the 6K original.
And the added benefit is that you don't need to deBayer. IOW, instead of colour interpolation, you're combining each 2x2 photosite quad (RGGB, I think: one red, two greens, one blue) into a single output pixel, averaging the two greens.
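A rough sketch of what that binning might look like, assuming an RGGB mosaic with red at the top-left of each quad (the actual layout and decode path for RED sensor data would come from RED's own SDK, so treat this as illustrative only):

```python
import numpy as np

def bin_bayer_2x2(mosaic: np.ndarray) -> np.ndarray:
    """Collapse each 2x2 RGGB Bayer quad into one RGB output pixel.

    mosaic: (H, W) raw Bayer frame, H and W even.
    Returns an (H//2, W//2, 3) RGB image at half resolution,
    with no colour interpolation (no deBayer) at all.
    """
    r  = mosaic[0::2, 0::2]          # red photosites
    g1 = mosaic[0::2, 1::2]          # first green of each quad
    g2 = mosaic[1::2, 0::2]          # second green of each quad
    b  = mosaic[1::2, 1::2]          # blue photosites
    g  = (g1 + g2) / 2.0             # average the two greens
    return np.stack([r, g, b], axis=-1)

# Tiny example: a 4x4 mosaic becomes a 2x2 RGB image.
frame = np.arange(16, dtype=np.float64).reshape(4, 4)
half = bin_bayer_2x2(frame)
print(half.shape)  # (2, 2, 3)
```

Each output pixel gets its red and blue straight from the photosites in its own quad, so chroma never has to be guessed from neighbours the way demosaicing does.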
So you have: minimal moiré (such as it exists - maybe offer a version without a low-pass filter?); a nice, smooth image; somewhat less noise (much of it will carry through, but averaged noise is more pleasant to look at); a result that may not need sharpening; and a 3K file small enough to edit directly, without proxies. Your equipment and storage costs come down accordingly.
Did I mention that you don't need to deBayer? The most data-intensive task would be ingestion/importing: the 6K file would be down-sampled as it copies from the REDmag to the hard drive - or perhaps after the copy finishes. The only problem I can see is colour management, but I have no deep knowledge of that field.