Thread: Compression Wedges

    re: motion tests

    if you're not really geeky..skip this post btw

    my personal tests have always focused on low light, and in that regard 22:1 is always much, much less noisy, in terms of "low light"-originated noise, than less-compressed options.. and this is what I'd expect, as the noise gets thrown away in increasing amounts as compression is increased.. it's what the wavelet transform is all about, after all.. separating signal from noise.. which in reality means throwing away small values.. which in reality sometimes means stripping bark off trees.. but whatever, compression is necessary for practical reasons..

    what this test is showing is codec-introduced artifacts.. but it's worth noting that trees are the ultimate stress test for redcode, wavelet quantization and compression distribution. and if you're not shooting in a forest, 22:1 is totally usable.. and IMO, if you're not shooting closeup stills for the front cover of "tree bark weekly".. probably fine in a forest also, especially if you're going to downsample to 4k..

    5:1 is not my sweet spot, 12:1 is.. 10:1 in a forest ;) ..and that's on the mx, at 5k
    I have not yet found an economically viable way of storing footage long term
    Crime and Punishment was shot at 10:1 @ 4k on mx btw.. 5:1 is simply not an option for most

    there is one test I have always wanted to do, but never completed fully: to count how many individual, i.e. original, 48-bit RGB colours you can find in any given red image

    ..turns out it's not a trivial calculation, so I have never managed to compute a value that analysed more than about 75 lines.. I am still working on the problem though

    one way to find this value is to use a giant lookup table, which needs to be 65536 * 65536 * 65536 in size if we are to believe we are capturing 16-bit raw values.. or 4096 * 4096 * 4096 if we are capturing 12-bit (I'm still pretty irritated that apparently we're not allowed to know when the camera is recording 12 or 16-bit.. but hey..).. or 65536 * 65536 * 65536 if your camera manufacturer won't let you know what you are recording, or when.. but likes to have bigger numbers in their specs for reasons of corporate insecurity, no doubt.. nevertheless..
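    for scale, a quick back-of-envelope sketch (my numbers, not from any camera spec; I'm assuming one byte per table cell) shows why the full lookup table is hopeless on ordinary hardware:

    ```python
    # rough sizes for the brute-force lookup table: one cell per possible
    # colour, one byte per cell assumed, purely for scale
    cells_12bit = 4096 ** 3      # 12-bit per channel
    cells_16bit = 65536 ** 3     # 16-bit per channel

    print(cells_12bit)           # 68719476736      (~64 GiB)
    print(cells_16bit)           # 281474976710656  (~256 TiB)
    ```

    so even the 12-bit table needs ~64 GiB, and the 16-bit one is in tape-library territory.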

    ..both arrays are too large for most systems.. so another method is to use a dynamic array and only add values that don't already exist in the list, keeping a count of originals.. in which case you need an array of approx. 6000x4000 entries, worst case, for a 6k image.. which does fit into my RAM.. but again, after analysing about 75 lines the search for "individual colours" gets pretty slow.. and every line you analyse makes it slower still..
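    the slowdown here comes from the linear search through the list; a hash set makes the membership test effectively constant-time, so the per-line cost stays flat. a minimal python sketch of the same counting idea (not the original code, and the toy image below is random data, not a red frame):

    ```python
    import random

    def count_original_colours(pixels):
        """count distinct (r, g, b) tuples; each channel 0..65535."""
        seen = set()
        for rgb in pixels:
            seen.add(rgb)        # a set silently ignores values it already holds
        return len(seen)

    # duplicates are only counted once
    print(count_original_colours([(1, 2, 3), (1, 2, 3), (4, 5, 6)]))  # 2

    # toy 100x100 "image" with random 16-bit channels, purely illustrative
    random.seed(1)
    image = [tuple(random.randrange(65536) for _ in range(3))
             for _ in range(100 * 100)]
    print(count_original_colours(image))
    ```

    with a set, analysing the whole frame is one pass over the pixels instead of a search that grows with every colour seen.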

    a third way to decide if a value is original is to create a filename with the value concatenated. the number of files you have created is the number of original colours; if a colour has already been seen it merely overwrites its corresponding file with an identical one... but unfortunately most microcomputers (macs or pcs) can't handle more than about 20000 files in a folder together, so again the analysis becomes critically slow after about 50 lines, like the other methods, but for different reasons. for a 6k image you might generate 6000x4000 files etc.. substantially more than most non-Cray computers can generally handle..

    an alternative to the above is to use each colour component value (R, G and B) as a nested folder name.. but this also becomes very slow.. though at a slightly slower rate, if that makes sense.. I think this system got me to line 75.. the number of sub-folders is your original colour count..

    anyway, whatever image I've tested, up until the point the calculation gets too slow and I've bailed.. usually after analysing about 50 lines.. and whichever horizontal line I start the analysis at (I can choose that).. I seem to get a value of between 22-30%

    ..meaning between roughly a quarter and a third of pixel colours are not original.. which seems a very high number given the extremely low probability that three independent colour values ranging between 0..65535 will all be identical to another three values with the same range, within a finite set of just a few thousand samples.. of course, some of those values are interpolations generated by the debayering.. but it's still a high percentage
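    to put a rough number on that "extremely low probability": for uniformly random 48-bit colours, the expected number of duplicate pairs among n samples is approximately n·(n−1)/(2·2^48), the standard birthday-problem estimate. a quick sketch, using my own assumed sample size of ~50 lines of a 6k-wide frame:

    ```python
    # birthday-problem approximation: expected duplicate pairs among n
    # uniform random 48-bit colours (sample size is my assumption)
    n = 6144 * 50                # ~50 lines of a 6k-wide image
    N = 2 ** 48                  # possible 48-bit colours
    expected_dupes = n * (n - 1) / (2 * N)
    print(round(expected_dupes, 4))   # ~0.0002 — far less than one collision
    ```

    so chance alone predicts essentially zero duplicates, not 22-30%; whatever duplication is measured has to come from the processing (debayer interpolation, quantization) rather than coincidence.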

    so it is in this way that I have analysed redcode image compression. the quantization is certainly there in all compression ratios. I'm looking forward to testing output from an AJA or URSA actually, for comparison in regard to "original colours"

    ..still trying to find the best method.. maybe three arrays.. <ignore me now, I'm thinking out loud>..
    Last edited by egon touste; 03-17-2015 at 02:39 PM.
