Thread: Redray codec accuracy.

  1. #21  
    Senior Member
    Join Date
    Jan 2008
    Posts
    6,278
I'm curious how the 3D-5D wavelets turned out that they use in some security camera compression (applied after correcting for non-authentic image content, i.e. noise).

As for your comparison: if you can compare to modern redcode, you are basically comparing to cineform technology anyway. Kinefinity uses an official cineform, but at what bit depth and quality?

So, you are saying you are taking 165MB of effectively lossless redcode down to 9MB, or taking a 16 bit, 4:4:4 demosaiced frame grab down to 9MB. Those 4:4:4 grabs would be largely devoid of real detail, as two thirds of it is recreated after a low-pass filter smudges a lot of the difference out, in a simply compressible way. Devoid of noise, 4:4:4 (especially this stuff) becomes much more compressible, and at 16 bits a lot more again. JPEG HDR used to get incredible compression rates on frames; what you are describing is mainly differences in peaks that are described similarly in the data, producing huge savings.
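To illustrate that point about noise being the limiting factor, here is a minimal sketch (plain Python standard library, nothing to do with any camera's actual pipeline): the same simple left-neighbour prediction plus zlib applied to raw noise and to a low-pass-filtered copy of that noise.

import random, zlib

random.seed(0)
noisy = [random.randrange(256) for _ in range(1 << 16)]      # stand-in "sensor noise"

# crude low-pass: an 8-sample running average, standing in for the smoothing
# that demosaic interpolation applies to the recreated channels
smooth = [sum(noisy[max(0, i - 7):i + 1]) // len(noisy[max(0, i - 7):i + 1])
          for i in range(len(noisy))]

def predicted_size(samples):
    # left-neighbour prediction residuals (mod 256), then zlib level 9
    residuals = bytes((v - p) & 0xFF for v, p in zip(samples, [0] + samples[:-1]))
    return len(zlib.compress(residuals, 9))

print(predicted_size(noisy))   # stays around 64 KB: noise does not predict
print(predicted_size(smooth))  # noticeably smaller: smooth data predicts, so its residuals carry less entropy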

So, I should clarify: the 2-4:1 maximum in the old days was for 8 bit video. What it is now for what you are dealing with, I wouldn't know. 4:1+ might be the norm on Bayer, even more again with prefiltering. Everything is largely mappable except unexpected things like noise. In testing small cameras' codec quality, I will move things around unpredictably and examine the data rate and quality. Another test was wavy water, which used to tank older codecs and make them macroblock; with h264 we got smudging as the solution instead. However, it is possible to make the codec map the waves, something I pushed for. With extra data rate the wave problem became a lot less of an issue anyway. But strong flapping leaves and branches, or swirls of mist, making up most of the picture, are the sorts of things that separate out lossy cameras.

Anyway, in an interesting conversation I had with a cineform (or SI) engineer, they objected to my pushing for noise elimination to dramatically increase compressibility, on the grounds that it removed authentic image; I politely pointed out that the noise was not in the scene, and the recreated pixel was likely much more authentic. But it seems to be a potential factor. It also means interesting things for lossless, as the noise was probably a main factor limiting normal compressibility to 2:1-4:1 ratios.

It also means that your routine might gain more than it would on normal footage, because of prefiltering.
    An explorer explores new ideas and things, but somebody stuck in mud, complains.
    - But the explorer complains about how much better the territory he has discovered is instead.

    -Since when has denying that the possible is possible been reason? Such is the life of some skepticism.
    -Normalcy involves what is realistically possible, not just what you say.

  2. #22  
    Banned
    Join Date
    Jul 2016
    Posts
    64
Quote Originally Posted by Wayne Morellini
Cineform can do lossless and, I believe, higher bit depths. David Newman created cineform raw, so maybe it applies there. Although it is not mathematically lossless, by increasing things it can get there. However, it is just much better technology than jpeg2k, hence the boost to redcode when they changed their lossy codec from jpeg2k. My point is that jpeg2k is not a good wavelet implementation to compare to. XR may also produce the same quality, but at a lot less processing.

I encourage you to also add a few lines of code to jp2k and see what happens. I'm actually not too thrilled it turned out the way it did; wavelets were promising.
jpeg2000 is consistent, losslessly speaking .. it compresses some hard to compress images well .. but loses out on the easy gains ..
j2k in lossy mode is pretty good .. i don't think the wavelet transform does anything for lossless encoding ratios though.. in fact , i'm pretty certain it hurts them ..

    i have two desires for a codec

1) in camera / during capture = lossy for motion (extends recording duration.. and lossy can also be faster, allowing higher fps..).. but lossless for stills/ timelapse/ low fps/ astronomy / long exposure shots etc
    2) when preserving material for the future, and during editing .. i want lossless

    lots of players working on 1), .. fewer working on 2)
my ideal camera could be swung around, and would encode using different encoding settings, in an alternating pattern, using several methods for a specified time.. it would not record any files, simply compute their sizes.. and then would dial in appropriate settings for the scene
    this could be done by simply panning the camera around slowly .. a forest is very different than a living room ..
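a minimal sketch of that probe pass (encode_size() here is a hypothetical stand-in for whatever encoder is in use; it is assumed to return the encoded byte count without writing a file):

from typing import Callable, Iterable, Sequence

def probe_settings(frames: Iterable,
                   candidates: Sequence[dict],
                   encode_size: Callable[[object, dict], int]) -> dict:
    # encode each probe frame with one candidate in an alternating pattern,
    # accumulate only the sizes, never write a file, then keep the cheapest
    totals = [0] * len(candidates)
    for n, frame in enumerate(frames):
        i = n % len(candidates)
        totals[i] += encode_size(frame, candidates[i])
    return candidates[min(range(len(candidates)), key=totals.__getitem__)]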

    the only problem i found with cineform, ..wasn't the codec , but the debayering .. red have always done a better job imo

my system is "dumb and simple / non-adaptive" currently.. which is why i find the results so odd.. there is nothing as complex as .. say .. a paeth filter going on..
i believe , as do the creators of the FLIF format.. that the future of lossless encoding will involve machine learning.. so the format i am devising has this in mind, and will eventually contain metadata i've not seen in other formats, and the ability to understand previously undefined filtering methods of high complexity, by understanding definitions of new filters.. provided they follow the filter syntaxes provided .. quality will always remain the same .. but how long you wish to analyse the image before compression will be selectable ..and scalable.. it is not intended for hardware implementation , but a software / network solution for film, and petapixel panoramas etc
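for reference, the paeth filter mentioned above is just the per-pixel predictor from PNG filtering; a minimal sketch of the standard published algorithm (for illustration only, not part of the format described here):

def paeth_predict(left, above, upper_left):
    # PNG's Paeth predictor: start from the linear estimate left + above - upper_left,
    # then return whichever of the three neighbours is closest to that estimate
    estimate = left + above - upper_left
    d_left = abs(estimate - left)
    d_above = abs(estimate - above)
    d_upper = abs(estimate - upper_left)
    if d_left <= d_above and d_left <= d_upper:
        return left
    if d_above <= d_upper:
        return above
    return upper_left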

    ..you've got me interested in seeing what is possible in the lossy realm .. but my system is not designed for playback ..encoding is done a byte at a time , decoding one bit at a time .. both are approx. symmetric in time required .. but some of the functionality i want to support does not make large LUT/fast decoding trivial to implement .. so that is not high on my agenda yet ..and openjpeg is slower at decoding to full images..

    jpeg-xr makes smaller files than j2k .. i really need some 48bit jpeg-xr test files ..or a 48bit encoder
in the meanwhile .. i'll make a 24bit version of my system to do a preliminary test .. the few jxr files i created came out larger than my files .. but i need to try that on 50,000 images to get better stats ..

    i don't think j2k is a great wavelet format either.. but its "out there" .. and i cannot test against what is not ..

regarding compressing helium frame grabs: i render out an 8k helium r3d grab to a 48bit tiff . it is debayered .. the interpolations often create new "leaf values" .. speaking in Huffman terms .. they do not reduce
so the tiff is 165mb , the jpeg2000 file is 120mb, and my files are 47mb .. that's pretty clear hopefully..
if i drop the channel depth to 10bit per component.. that's when the file becomes a visually lossless ..and true lossless .. 30bit file .. hopefully that makes sense ..quality is 100% ;) .. but 10bit components ..which could be in log space..
..and here is a bit you may be missing .. even though a 4:4:4 image may be interpolated (which can only result in equal or greater image complexity btw.. never less complexity.. regardless of the interpolation method.. the latter being the case with my own debayering algorithms..).. if that 9mb file was a "raw" file .. the leaf count CAN ONLY DROP ..as "leaves created through high quality interpolation" are removed.. but even if no leaves disappear .. i definitely only need to write 1/3 as many codes out to disk .. so the file .. i can say with certainty .. could become 3mb in raw format ..
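just to make the ratios in those numbers explicit (sizes exactly as quoted above):

tiff_mb, j2k_mb, mine_mb = 165, 120, 47
print(round(tiff_mb / j2k_mb, 2))   # jpeg2000 vs tiff  -> about 1.38:1
print(round(tiff_mb / mine_mb, 2))  # my files vs tiff  -> about 3.51:1
# raw (undebayered) argument: one component per photosite means a third as many
# codes as the interpolated 4:4:4 frame, so a 9 MB lossless 4:4:4 file could
# drop to roughly 9 / 3 = 3 MB even if no interpolated "leaves" disappear
print(9 / 3)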

    of course , with a lossless system .. there can be no data rate control

    i would prefer to save the full sensor readout verbatim .. even if it contains transport noise and the likes .. as software noise reduction can be more complex than a fixed hardware solution, and can improve over time..so the image can be reinterpreted .. but the lossy wavelet transforms in j2k are in effect noise reduction filters anyway .. but that might make a deeper noise reduction algorithm less effective later on .. and pre compression filters can compensate / reduce noise , through good prediction .. which my system does not do.. yet ..

anyway, once a shot is recorded in camera.. i don't care .. my problem has always been .. where to put the finished D.I. .. which normally means deleting something else !
    Last edited by konrad grant; 03-24-2017 at 07:45 PM.

  3. #23  
    Senior Member
    Join Date
    Jan 2008
    Posts
    6,278
Yes, they found many years back that wavelets produced much better results at high compression ratios than the newer codecs (back then), but not at lower ratios. But we are at least a generation past that now in non-wavelet technology. The thing about cineform is that they handcraft for speed and results. At some point it effectively becomes lossless as you drop the ratio, too; even mpeg2 would do that. At some point it pays to switch how you compress (and not just once, hint). Even jpeg basically models one way, and tries to compress the bits that don't conform to the model. Now, imagine if you could examine the data and compress it hundreds of different ways, according to which worked best against the others in different parts of the image. Of course, I'm encouraging the industry to keep going down a wasteful path here, where maybe millions of sets of analysis have to be done before you determine the best set (unless they have a mind to shortcut this).
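A toy sketch of that idea, using standard-library compressors purely as stand-ins for real image-coding modes: split the data into tiles, try every method on each tile, and keep whichever wins, remembering which method was used so it can be undone later.

import bz2, lzma, zlib

METHODS = (zlib.compress, bz2.compress, lzma.compress)   # stand-ins for real coding modes

def compress_per_region(data: bytes, tile_size: int = 4096):
    # returns a list of (method index, compressed tile); the index is what a
    # decoder would read back to know how to undo each tile
    tiles = []
    for offset in range(0, len(data), tile_size):
        tile = data[offset:offset + tile_size]
        best = min(((i, method(tile)) for i, method in enumerate(METHODS)),
                   key=lambda pair: len(pair[1]))
        tiles.append(best)
    return tiles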

For my universal file compression ideas, I determined you could make a compression scheme that resulted in optimal compression for each data type, maybe plus some overhead, all without you having to figure it out by hand. You know why I know this: to us the answer is obvious, to others it seems non-obvious, obscured. They are probably scrambling to figure out what I mean and how to do it. We are talking many billions of GDP per year with some of these technologies. Of course I'm being obscure again, as I often am, usually just painting part of the solution, and lesser solutions.

Lol, I just read the rest past the first paragraph of your post. We are of the same mind, as I thought. So, yes, you should be able to get lossless for playback; however, according to my calculations, human imaging can go into terabytes per second data rates (or was that terabits, I forget). That still means a big need for lossy, unless I can get my high-end lossless out. That would be one of the biggest achievements of computational design. But I don't usually stop until I have the ideal (perfect) answer in these sorts of things. Whereas in physical design we see so little beyond the logical aspects that it is hard to know how good a solution is compared to the unknown ideal solution. Even if you know the ideal solution, it may well involve things of such intricacy that it is hard to understand, like things at the Planck scale or smaller, to get the best possible result. But in logic, it is possible to determine an ideal path. Except with this, it is applying the logical to the real world, which we don't fully understand. But as we are not recording much below 2400dpi, the rest does not matter too much in exact detail, and if I can get within 1% of the ideal result (1% measured against the 100% of the ideal result), I would be happy. If I had to compress Planck-scale space, or just the subatomic, on the other hand, it might be extremely inefficient, because we don't understand that space. Too much for neurotypicals.

  4. #24  
Senior Member Elsie N
    Join Date
    Oct 2009
    Posts
    6,681
    Guys, instead of inventing a new wheel, why not just use this one that google just released?
    One camera is a shoot...but four (or more'-) Hydrogens is a prohhhh-duction... Elsie the Wraith

  5. #25  
    Senior Member
    Join Date
    Jan 2008
    Posts
    6,278
.........., some of that is like what I was suggesting in the Elphel cinema camera projects before I gave up on them; it was one of the things that caused me to give up. I wasn't well enough to get into it, and one of the guys gave it a try but couldn't get it to work. Thank you Mr Google, and others, for proving me right. It wasn't that it was going to be the best compression, but lower-processing, easier compression.

I had a dispute with a top engineer I went to university with back in the day. I claimed I could get better compression than some law limiting compression he was quoting (like a follower). It was through prediction done in a certain way.

    We are talking about a lot bigger differences Elsie, even compression to less than 0.1%. I actually am one of the people that did reinvent the wheel, so don't go throwing that one around me too liberally.

I will review this as an off-the-shelf solution for a software recorder (as it will take considerable time to write and tune codec designs, usually).

  6. #26  
    Senior Member
    Join Date
    Jan 2008
    Posts
    6,278
    What Google and web standards have been interested in is free open standards with no licensing. This is likely another attempt at a codec using free IP to do what an existing one does.

Now, Jpeg has jumped in quality; at lossless I don't know, but it wasn't a top lossless performer anyway, so this may not be a great thing to compare to. Only 26% smaller is nowhere near as good as 74% smaller; that would be great.

I'm happy to ship product with a competent codec, rather than ship with one of mine without protection. Thanks for the heads up.

  7. #27  
    Banned
    Join Date
    Jul 2016
    Posts
    64
Quote Originally Posted by Elsie N
    Guys, instead of inventing a new wheel, why not just use this one that google just released?
elsie .. it's just not a good wheel . i intended to spend 6 months creating something with better compression ratios .. as i had a bit of space ..but my initial prototype did that after 3 days working on it.. to my enormous surprise.. using "very old algorithms" .. (improvements have been small since then.. but i'm collecting statistics.. getting ready to test some more advanced/complex ideas..)

.. the openjpeg2000 encoder project is now one of the few reference jpeg2000 encoders .. it's taken years , and two google summer of code pushes to get it there (ie, massive support)

    ..but my files compress 56% smaller on average .. without the complexity.. so i'm personally happy to leave google to design the next generation of internet bloating formats .. whilst i help you make bit-for-bit backups of material that's important to you , that takes up less space

check out the free lossless image format (FLIF) elsie .. that is currently the cutting edge of lossless encoding imo.. but it is slow.. as it uses machine learning. it is built into imagemagick already, but may not be finalised yet.. one of the designers of that, jon sneyers, emailed me the other day after i sent him a linkedin message .. not had time to quiz him about its status yet . my compression system uses imagemagick to grab images , and it can also encode into any format imagemagick supports. in the meanwhile you can try the format by executing the command below . it's very impressive on large images in terms of compression ratios

imagemagick, from the command line: convert <yourimage.tif> <yourimage.flif>

@wayne : the recent improvements to jpeg are just changes to the DCT coefficients, from what i've read . jpeg lossless uses a completely different encoding system, so if my assumptions are correct that won't affect jpeg lossless .. which is the compression system used by DNG .. right there is the problem .. new formats using old compression routines from 1992 .. when machines had 128kb of ram ..
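for context, that 1992 lossless jpeg mode is predictive rather than DCT-based; its seven fixed predictors (ITU-T T.81, with a = sample to the left, b = sample above, c = sample above-left) look roughly like this sketch:

def t81_predict(mode, a, b, c):
    # the seven fixed predictors of lossless JPEG; integer halving done as shifts
    return {1: a,
            2: b,
            3: c,
            4: a + b - c,
            5: a + ((b - c) >> 1),
            6: b + ((a - c) >> 1),
            7: (a + b) >> 1}[mode]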

  8. #28  
    Senior Member
    Join Date
    Jan 2008
    Posts
    6,278
Sorry, I forgot to specify. Yeah, JPEG 1 for lossless is not that good. By recent I meant going back 14 years. Actually, jpeg lossless was like something I was proposing that was dismissed, but there you have it: somebody else proved it in an industry standard, no less. You'd think some of these people that are supposed to know this stuff must have been clean-rooming for decades to miss that one. I basically work from the ground up, pretty clean-room, except for free stuff with any patent expired. I did this with my OS design: I got texts from the University of South Australia that were rather old, read them, reinvented and invented new, and had covered the uni material by the time I started uni (but still with old texts and tech). I got the department of defense published standards paper on security management to read up on some auditing and managed-systems process, but never got to read it, and had done my own process anyway. This whole Mantle/Metal API thing is like what I was designing into the OS from the beginning. Java/Oak, Taos and my simple VOS all virtually started the same year. Only one wasn't close to funding markets and didn't receive funding. Java was the least efficient design, and Taos undoubtedly the second most efficient, but it blew Java away. As a matter of fact, the leading PC Java/JavaScript (a blur now) engine was a Taos-derived product from them, one which others tried to emulate to get more performance.

  9. #29  
    Banned
    Join Date
    Jul 2016
    Posts
    64
Quote Originally Posted by Wayne Morellini
Java/Oak, Taos and my simple VOS all virtually started the same year. ... the leading PC Java/JavaScript (a blur now) engine was a Taos-derived product from them...
    I'm confused by java/javascript .. they are not connected in any way , other than being programming languages .. but they share no common ancestor , or development, and are not similar in any way ..other than the misleading name ..

    from oracle:

    How is JavaScript different from Java?
    The JavaScript programming language, developed by Netscape, Inc., is not part of the Java platform.
    JavaScript does not create applets or stand-alone applications. In its most common form, JavaScript resides inside HTML documents, and can provide levels of interactivity to web pages that are not achievable with simple HTML.
    Key differences between Java and JavaScript:
    Java is an OOP programming language while Java Script is an OOP scripting language.
    Java creates applications that run in a virtual machine or browser while JavaScript code is run on a browser only.
    Java code needs to be compiled while JavaScript code are all in text.
    They require different plug-ins.

  10. #30  
    Senior Member
    Join Date
    Jan 2008
    Posts
    6,278
That is old times. It is not right to say they are not connected. Originally JavaScript used Java, in the sense that it used some Java APIs, and JavaScript could be compiled to Java byte code and run. Things have diverged since (and the web people hate plugins now, which is a pity). It is correct to say the language itself is not the same. Anyway, you are not interested in web services, so it may not matter for your purposes. (I also need to look this all up to make sure my memory is not failing me.)

However, things have changed as desktop libraries were released for independent JavaScript desktop applications. Now JavaScript is to receive its own virtual binary code. It is no longer that Java and JavaScript are separate; it is that most platforms support JavaScript, and that JavaScript has won, in my opinion, for desktop and web purposes. You can develop to the JavaScript API (even in C) and get a portable application to future host environments as JavaScript support is ported to them. Of course the world is messier than that, but it is a good sentiment.

I am delighted. I was getting ready to learn JavaScript and make my own virtual binary code on it that used the JavaScript API. Then I found out the community was preparing to do it. Bonus: I can just use a subset of the JavaScript API with that binary, for portability, in an engine written for whatever mobile phone chipset, for low-powered gadgets. That cuts out a lot of initial work. I can do my own binary some other time.

I was planning to propose to the Linux community to write the majority of applications to the JavaScript platform in that binary, for portability between OS's, so any OS gets an instant library by porting JavaScript. Except for applications that require extra performance, which are done natively as much as needed. Most applications don't require the most performance. So, except for games, 80%+ of applications could just use JavaScript. This would remove some of the biggest obstacles in computing.
