But on the other hand, I don't really believe that the RED or any other sensor can deliver 16 bits of meaningful data, even if that's what they give you in the raw file format. Also, the number of bits you claim to use matters less if the compression format you use isn't lossless. The data rate becomes the important number, not the bit depth.
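To put a rough number on that, here's a back-of-the-envelope sketch. The resolution, frame rate, and 36 MB/s record rate are assumed round figures for illustration, not published RED specifications; the point is only the gap between the claimed bit depth and the bits-per-photosite budget a compressed data rate can actually carry.

```python
# Back-of-the-envelope: claimed bit depth vs. what the data rate can carry.
# Resolution, frame rate, and the 36 MB/s record rate are assumed round
# figures for illustration only, not published camera specs.
width, height, fps = 4096, 2304, 24
claimed_bits_per_photosite = 16

uncompressed_mbits = width * height * fps * claimed_bits_per_photosite / 1e6
recorded_mbits = 36 * 8  # hypothetical 36 MB/s record rate, in megabits

effective_bits = recorded_mbits * 1e6 / (width * height * fps)
print(f"uncompressed raw:  {uncompressed_mbits:,.0f} Mb/s")
print(f"recorded:          {recorded_mbits:,.0f} Mb/s")
print(f"average bits per photosite actually stored: {effective_bits:.2f}")
```

With those assumed numbers the compressed stream averages out to only a bit or two per photosite, which is why I'd look at the data rate before I'd look at the bit depth printed on the spec sheet.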
A good way to test this is to run your 16-bit sources out to 10-bit DPX or 8-bit TIFF/PNG. If you swap in the lower-bit-depth footage, is there any test you can do (like some sort of extreme color correction) that shows a difference between the 16-bit original and the lower-bit-depth replacement? If not, then I think those extra bits aren't really doing anything.
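Here's a minimal sketch of that swap test in Python with NumPy, standing in for "extreme color correction" with an aggressive shadow-lifting gamma. The gamma value and the synthetic dark ramp are my own placeholders; with real footage you'd read in the 16-bit source and the 8-bit export instead.

```python
import numpy as np

def extreme_grade(img, gamma=0.3):
    """Aggressively lift shadows: an extreme grade that exposes banding."""
    return np.clip(img, 0.0, 1.0) ** gamma

def swap_test(source_16bit):
    """Grade the 16-bit source and an 8-bit quantized copy identically, then compare."""
    hi = source_16bit.astype(np.float64) / 65535.0   # normalize 16-bit to [0, 1]
    lo = np.round(hi * 255.0) / 255.0                # simulate an 8-bit TIFF/PNG round trip

    diff = np.abs(extreme_grade(hi) - extreme_grade(lo))
    # If the worst-case error stays under one 8-bit display step (1/255),
    # the extra bits made no visible difference even under this grade.
    print(f"max difference after grading: {diff.max():.6f} "
          f"({diff.max() * 255.0:.2f} display steps)")

# Synthetic dark gradient, where banding would show up first.
ramp = np.linspace(0, 4096, 1920, dtype=np.uint16)
swap_test(np.tile(ramp, (1080, 1)))
```

On a perfectly clean synthetic ramp like this the difference does show up, which lines up with the point below about noiseless sources; real, grainy footage is where the gap tends to disappear.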
Another way to look at it: if you can see visible noise (grain) on your 8-bit computer monitor, then surely the bits beyond 8 are just recording more of that noise. Higher bit depths matter more for images that have no noise, like motion graphics or 3D renders, where the computer can generate as many meaningful bits as it wants rather than being limited by what a CCD can capture. Likewise, once you start processing the images, the computer can introduce precision that is worth holding on to. But I'm just talking about the source footage here.
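To make that concrete, here's a rough numerical sketch, under the assumption that grain behaves like Gaussian noise about two 8-bit steps wide; the noise level is a made-up figure for illustration, not a measured sensor characteristic.

```python
# Rough sketch of the noise argument: grain modeled as Gaussian noise roughly
# two 8-bit steps wide (an assumed figure, not a measured one).
import numpy as np

rng = np.random.default_rng(0)

true_level = 0.42                       # flat gray patch, normalized to [0, 1]
noise_sigma = 2.0 / 255.0               # grain ~2 8-bit steps: visible on an 8-bit monitor
noisy = np.clip(true_level + rng.normal(0.0, noise_sigma, (1080, 1920)), 0.0, 1.0)

as16 = np.round(noisy * 65535) / 65535  # "16-bit" storage
as8  = np.round(noisy * 255) / 255      # "8-bit" storage

# The error the 8-bit version adds is small next to the grain already there,
# and both depths recover the underlying level equally well once averaged.
print(f"grain std:              {noisy.std():.6f}")
print(f"added 8-bit quant std:  {(as16 - as8).std():.6f}")
print(f"mean from 16-bit: {as16.mean():.6f}   mean from 8-bit: {as8.mean():.6f}")
```

In this toy case the extra quantization error from dropping to 8 bits is buried well under the grain, and the grain itself dithers the 8-bit copy enough that the underlying level comes back just as cleanly.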