Thread: Redray codec accuracy.

  1. #11 - Senior Member (Join Date: Jan 2008, Posts: 5,097)
    I don't know, I might have before you Else lol!

    But the IP market gives me the creeps. It is heavily suited to less mindful, very large companies and their drones rather than to the masses with a lot of creative potential, which creates a real drain and bottleneck even for companies. You should not have to go into hock for heaps just to establish and protect a patent worldwide before you can do anything with it, or be required to go through very expensive NDA processes and further development in secret while hunting for investors. In a true marketplace you would list your invention, people would look at it, say "there's one", and contact you, rather than you hunting across the country or planet for somebody to put under NDA and license. That would speed up the rate of progress and change, and weed out lesser solutions quicker. In a more open marketplace, multiple companies could license at once, without negotiation, at set rates (the rate calculation is a little complex, but it is something I came up with back in the 1990s). You would never need to meet a licensee, but if they had any sense they would use you as a consultant.
    Since when has denying that the possible is possible been reason? Such is the life of some skepticism.

    Normalcy involves what is realistically possible, not just what you say.

    Inferiorly superior (humbleness) rather than superiorly inferior (arrogance). -
    Reputation is something the unwise apply. But integrity is what the wise apply.

  2. #12  
    Quote Originally Posted by Wayne Morellini View Post
    If anybody wants to beta test for Konrad, that is good here.

    Now, Konrad. My head is a bit misty at the moment and I'm next to a busy road, so forgive the sloppiness of my reply. I can tell you some of the stuff about redcode, but first: the jpeg2k part of your compression comparison is a bit of a 'how long is a piece of string' exercise. How much compression on average do you need for true lossless, how much for a known form of visually lossless, and how does it compare to various visually lossless modern redcode data rates? Those are the basic sorts of comparisons. An average loss in dB against an image is accurate but not so relevant, as it is where that loss occurs that matters. Preserving the more significant parts of an image over the less significant parts skews the perception of image quality. This is something I put forward nearly 13 years ago.

    Now, redcode. I know people, and redcode was apparently originally based on Jpeg2k, which is bloated, slow, etc. Then redcode took on cineform-related technology and vastly improved.

    Now, redray performs very high compression, by reports at least visually lossless (though I have not seen tests). So, does your codec do 4k at less than 9mb/s, visually lossless, etc.?

    Now, it is often difficult to push antiquated, fundamentally inefficient routines to high efficiency, but a high-efficiency routine can be different and simpler. So the second level of denial of interest in routines happens there (the first includes 'you can't x, y, z', 'another guy claiming something', 'yada yada yada', 'blah, blah, blah', etc.). These levels include people with inadequate or no idea, or people ill-equipped to do something with it financially or practically. The third level is that it is an IP nightmare, where you can fall foul of patents; it is a lot of trouble, and companies that know things may often not be interested once they do know. A simple Huffman encoder by itself doesn't appeal as a potential redray challenger.

    But I'm open minded, so I'm interested in what it is. Still, protect your IP. If you really have something and it is all too difficult, maybe Google will be interested in paying and rolling it into their open source codecs, but then you get the chicken-and-egg difficulty of getting them to sign a permanent non-compete, which only a company with the utmost morality (i.e. virtually none of them) would be willing to do under legal advice. Even with a limited non-compete clause they will likely still be reluctant to sign. On top of that you have the issue of filing for a patent and covering the cost of the patent rollout over the next year or so, during which time you may have nobody taking up the technology to pay for the patents; then you let the patent lapse and the companies you previously had under NDA and non-compete swoop in to sweep it up. One engineer I knew was part of a group that had a new leading hardware technology, and the big company they wanted to license to, apparently, was intent on just waiting them out. We really do need a revision of patent law: free registration of IP, with the patent period starting upon a contract or commercial use of the IP. That way you can present, companies can window shop, and they never have the ability to wait out IP unless it is during the active period of commercialisation. This is the sort of thing I'm putting forward for IP reform. There also needs to be a more open market mechanism for these things after a first exclusive license period, which is another reform I'm putting forward. We might be looking at a 90% reduction in progress due to the way the current systems work, which means we are maybe a century or more behind in some areas (some newer areas may be more novel, harder, and have less scope to advance quickly with the aid of smaller competitors without many millions or billions in project investment).
    firstly .. where loss takes place is irrelevant .. my compression system is lossless, and for archive .. my files and the j2k output decode to bit-for-bit identical images .. so the lengths of the strings can be compared, and measured in bytes ;)

    ..as for your lossy ideas 13 years ago .. they caught on ;) .. lossy is a p*ece of p*ss to pull off ..

    does my codec perform 4k at less than 9mb/s visually lossless? .. no, it's lossless .. not visually lossless .. but it would depend on the image, and whether I was recording raw or RGB .. and you didn't specify the frame rate .. or bit depth .. and is that megabits or megabytes? .. I'll assume you mean 1fps and megabytes .. and say yes ;) .. of course 4480x1920 (r1 widescreen) is fewer pixels than 4096x2160 .. so in that scenario it could do 4.5k .. or a 28k image .. if its height is 1 pixel ..
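    (to pin the arithmetic down, here's a rough free pascal sketch .. the 24fps and 16bit RGB figures are only assumptions for illustration, since no frame rate or bit depth was specified:)

    program DataRateSketch;
    {$mode objfpc}
    // rough arithmetic only: how much raw data a 4k RGB stream produces, and what
    // compression ratio a 9 MB/s (or 9 Mbit/s) budget would demand.
    // the frame rate and bit depth below are illustrative assumptions.
    const
      Width    = 4096;
      Height   = 2160;
      Channels = 3;     // RGB 4:4:4
      BitDepth = 16;    // bits per channel (assumed)
      Fps      = 24;    // assumed frame rate
    var
      BytesPerFrame, BytesPerSecond: Int64;
    begin
      BytesPerFrame  := Int64(Width) * Height * Channels * (BitDepth div 8);
      BytesPerSecond := BytesPerFrame * Fps;
      WriteLn('raw bytes per frame       : ', BytesPerFrame);   // about 53 MB per frame
      WriteLn('raw MB per second         : ', BytesPerSecond / (1000 * 1000):0:1);
      WriteLn('ratio needed for 9 MB/s   : ', BytesPerSecond / (9 * 1000 * 1000):0:1, ':1');
      WriteLn('ratio needed for 9 Mbit/s : ', BytesPerSecond / (9 * 1000 * 1000 / 8):0:1, ':1');
    end.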

    but seriously .. I think I could modify it to do visually lossy encoding .. without complex maths .. but I don't expect I'd meet the redcode data rate .. nor do I think redray would match my encoding quality ;p .. but I imagine I could produce a pleasing image at that data rate .. I just have little interest in doing so currently .. go figure ..

    in my world, it's all about lossless, high dynamic range encoding .. and archiving .. I'll leave playback to dvd ;)
    I'm not confined by a hardware implementation .. I'm producing a 64bit image pipeline for storing very large numbers of pixels as densely as I can ..

    I'm going to launch a small kickstarter soon, with one goal being to enable compression of extremely large hdr panoramic/stitched images using my system (and small images too, of course) .. along with some downsampling and upsampling tools so you can view them at HD .. or SD .. without issues .. at the end of the project, everyone will be able to see the compression code .. it's going open source ;)
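    (the downsampler doesn't need to be anything exotic .. a plain 2x box filter gives the general idea .. this sketch is just an illustration, not my actual tool:)

    program DownsampleSketch;
    {$mode objfpc}
    // a plain 2x box-filter downsample for one 16-bit channel, sketch only;
    // real tools would handle odd sizes, several channels and gamma properly.
    type
      TPlane16 = array of array of Word;   // [y][x], one colour channel

    function DownsampleHalf(const Src: TPlane16): TPlane16;
    var
      x, y, w, h: Integer;
      Sum: LongWord;
    begin
      h := Length(Src) div 2;
      w := Length(Src[0]) div 2;
      SetLength(Result, h, w);
      for y := 0 to h - 1 do
        for x := 0 to w - 1 do
        begin
          // average each 2x2 block of source samples, with rounding
          Sum := LongWord(Src[2*y][2*x]) + Src[2*y][2*x+1] +
                 Src[2*y+1][2*x] + Src[2*y+1][2*x+1];
          Result[y][x] := Word((Sum + 2) div 4);
        end;
    end;

    var
      Plane, Small: TPlane16;
      x, y: Integer;
    begin
      SetLength(Plane, 4, 4);              // tiny 4x4 test plane
      for y := 0 to 3 do
        for x := 0 to 3 do
          Plane[y][x] := Word(1000 * (y * 4 + x));
      Small := DownsampleHalf(Plane);
      WriteLn('downsampled [0,0] = ', Small[0][0]);   // (0 + 1000 + 4000 + 5000) / 4 = 2500
    end.
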
    ..couldn't care less what google do .. the fuller their hard drives are, the better .. I'm happy for them to choke on their own data ..

    .. it's about 400 lines of code so far .. written in free pascal .. ready for translation to other languages .. but I have a long list of possible improvements and tests to carry out first .. including supporting floating point colour accuracy etc

    once the design is locked down (with some open hooks for adding features), then hopefully there will be a java multi-platform decoder created .. keith long from east end studio in London has offered to do a translation; if anyone can, it's probably him .. he writes code for the banks .. untangles bad Russian multi-threading routines .. makes stuff work nicely .. and my code is peanuts-easy .. and he likes the fact that I use parentheses comprehensively in my equations .. he says it's professional ;)

    the adobe machine has already issued me a private compression tag for the tiff format too, so implementing it within tiff is already an option .. though it won't be supported by third parties unless I write a patch for libtiff etc .. or they want to assimilate it; they're not interested in making work for themselves, as they are a small team .. but I shall keep them posted when I'm done ..

    there are no patents involved .. it's a simple system .. it's just surprising that doing a few things that seemed obvious to me .. that surely must all have been done before .. would leave jpeg2000 and every other variant of jpeg in the dust .. truly very strange, and surprising. I shall write a white paper regardless, called "don't believe every research paper you read: a guide to lossless HDR image compression and archiving"

    if the kickstarter gets funded .. I shall offer a prototype encoding/decoding tool on day one .. and then regular updates .. with periodic function bonuses along the way .. the aim is to under-promise and over-deliver .. and I'm happy I can deliver on what I promise, based on what I have ..

    4k would fund 12 months of development time currently .. so that's what I'll be asking for .. trying to take a break from doing rentals for a while .. but I have a 4k bank loan to service .. which is distracting me ..

    redcode compression is not based on jpeg2000 .. it is jpeg2000 .. to become redcode, additional processing/filtering of the captured image is required, of course .. which no doubt helps the compression engine, i.e. jpeg2000 ...
    Last edited by konrad grant; 03-20-2017 at 10:43 PM.

  3. #13 - Senior Member (Join Date: Jan 2008, Posts: 5,097)
    Quote Originally Posted by konrad grant View Post
    (post #12 quoted in full above)
    Oh great, a temperamental being, touchy with the only person interested in his stuff.

    So, less misty, but on a sleeping tablet. So it is lossless. Crappy lossless is a piece of piss to pull off; good visually lossless at high efficiency is not. Actually, high-efficiency lossy tends to get pretty complicated to push those levels of efficiency up. One reason, I would believe, is just people boosting a base technique that is not the best. The sacrificial image breakdown I publicly posted and pointed ambarella to was based on my analysis of visual perception in various forms of art and computer games 20 years ago. Before ambarella, a lot of high-compression-ratio footage on consumer cameras was crap. That is why I put it forward: to better preserve the more noticeable and meaningful parts of the image. They use ambarella tech in higher end cameras and broadcast infrastructure.

    Now, the discussion was related to redray, and you bring a gun to a knife fight, so it was a bit confusing, Konrad. As I explained, I was a bit off colour. So, of course, I was solely comparing against the initial redray claims of 9 megabits per second (actually less). After an actual showing one late night, a bunch of people (I think plied with alcohol) started acting like it was 9mb/s lossless (I guessed they might have been mistaken and it could be visually lossless, but it may just compare to the crappy cinema exhibition compression ratios). I believe that was 4k24p; the bit depth I don't remember. Better than others, but a number of people are aware of my own new advanced compression ideas from the previous camera projects that led to the Red camera. To this day, they have kept quiet about how it actually works. I also basically kept most of the best mechanism ideas to myself. So redray achieves half or a quarter of what I was aiming at with those. But the truth is far more interesting.

    So, you are not serious about being temperamental. Good, because I have been perceiving that you might be of a certain mindset that can take the complex and find simple solutions for it. It is obvious to simplify these things, but not to many people. I encourage you to take a good path and commercialise it; we really need good solutions.

    Now let's put this into perspective. I tell people I'm aiming for at least 10x lossless, because I'm really aiming for at least 100x, and, by an ancient proposal of mine, hopefully over 1000x in a particular way (all without non-authentic parts of an image, like noise). A simpler codec I am currently holding I expect to achieve 9mb/s at 8kp50, if not lossless then visually lossless. But to patent these things would require so many patents as to practically write a new book, and I have neither the money nor the time to do it right now. I was the primary school kid in the playground coming up with viable processing structures for artificial intelligence at age 10, 11 or 12. So storage of data, and processing, are very important to me.

    So your 400 lines of code sound more interesting to me, from the perspective of things I have covered, knowing that some changes can lead to simple, large gains. But whatever you do, remember cineform, particularly cineform raw, which is much better than the Jpeg2k you are comparing to, and also the newer form of JPEG that came out of Microsoft!

  4. #14 - Senior Member (Join Date: Jan 2008, Posts: 5,097)
    Forgot: look up WebAssembly. They are developing a virtual machine code format for the web to sit alongside JavaScript, which hopefully will give a more stable app than JavaScript compiled on different platforms (and it opens up the possibility of processors built for it on common hardware).

  5. #15  
    Quote Originally Posted by Wayne Morellini View Post
    (post #13 quoted in full above)
    wayne, I've always liked you .. I am not touchy .. I am not familiar with your work, so it's not always obvious what you know

    what I know is, about 3 months ago I spent 3 days creating an encoder .. and I learnt that most assumptions I'd made about such an endeavour were inaccurate ..

    my encoder is currently encoding .. I ran out of test images a week ago .. so I hit up pexels.com .. j2k lossless is making files 3x bigger currently .. that's openjpeg2000 .. as I am a peasant

    I'll make a java decoder first, then maybe an encoder .. I know nothing about javascript .. except that it and java are completely unrelated .. as someone once wrote, java is to javascript what cars are to carpets .. but I'll ask keith to take a look .. he's doing a c# translation of the decoder too ..

    my free pascal compiler can produce windows, Linux, mac and android .. and raspberry pi binaries etc .. but that is beyond me; I'm not familiar enough with mac/Linux/unix etc .. but I'm keeping my code simple, so that, with outside help, I can get encoding and decoding on multiple platforms, using binaries and java ..

    I'm driven by a personal need for a lossless backup system worth its salt .. I thought I'd found it with FLIF, but then I calculated that compressing all my test images would take a year .. I compressed some to compare against my system .. they track quite well btw .. but after 2 weeks of compressing I was nowhere through the list ..

    note, flif is a lossless encoder .. but it can encode lossy images by pre-filtering the image .. and the results are excellent, as are the quality and bitrate .. compared to the alternatives, such as lossy jpeg ..
    this is the same approach I would take to lossy encoding .. and it has the side benefit that you can re-encode the image after changes without degrading anything .. as underlying everything is a lossless encoder ..
    banding does not occur with 7bit colour channels .. just doing that gives me a very large ratio .. if playback is all you require ..
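    (as a sketch of the pre-filter idea: quantise each 16bit sample down to 7 bits' worth of levels, expand it back, then hand it to the normal lossless encoder .. this is only the general idea, not my actual filter:)

    program PrefilterSketch;
    {$mode objfpc}
    // lossy-by-prefilter, in the FLIF spirit: quantise samples before handing them
    // to a lossless encoder. the quantised image compresses much harder, and
    // re-encoding it later loses nothing further. illustration only.

    function QuantiseTo7Bit(Sample: Word): Word;
    var
      Level: Integer;
    begin
      // 16-bit sample -> one of 128 levels (rounded) -> back to the 16-bit range
      Level  := (Integer(Sample) * 127 + 32767) div 65535;
      Result := Word((Level * 65535) div 127);
    end;

    begin
      WriteLn(QuantiseTo7Bit(0));       // 0
      WriteLn(QuantiseTo7Bit(32768));   // 33025 (mid grey, within half a quantisation step)
      WriteLn(QuantiseTo7Bit(65535));   // 65535
    end.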

    if you would like to try a windows binary/proto encoder .. just drop me a private message, sir .. and we can exchange email
    if you are as knowledgeable about codecs as I've always assumed you are, I'd be interested in your opinions
    I've had i.p. stolen .. but that's another story ..
    Last edited by konrad grant; 03-21-2017 at 02:04 PM.

  6. #16 - Senior Member (Join Date: Jan 2008, Posts: 5,097)
    Thanks, Konrad. What is the average compression ratio you are achieving? Industry best was usually around 3-4:1 for lossless 13 years ago, with a clean 4:4:4 image, and probably high bit depth adding more. There was one that claimed 6:1 with 4:4:4 (and the space images on their site looked sparse in detail), so I'm not sure about that one. That is before you get into inter-frame techniques. So yes, certain things help Bayer compression, and probably the low-pass filtering reducing pixel differences. Now, an open source implementation might be a little different from the commercial best, so hard ratios are a good thing to look at. Sorry about all the 'visually lossless' before; I was a bit under the weather and it wasn't obvious.

    Now, what you said is true: when you look at solutions with real eyes, and new eyes, you can see a whole lot of new stuff. Some of it is just false ends people avoid, or have forgotten, or never learnt because it didn't pan out (sometimes it didn't pan out because the people were not successful, but it actually can be better). I've got the same problem in some areas of science: are these people blind to xyz, or do they merely not talk about it, having taken it into account because for some reason it's not valid? But logic is simpler: give it a go.

    JavaScript the language is different in the way it works; it was designed to sit alongside Java in the browser and borrows Java-like naming and syntax, but it is a separate language. View JavaScript as an advanced form of C which can be compiled to stand-alone desktop applications these days.

    The WebAssembly virtual machine code binary format is meant to let applications install and run fast, nearly as fast as native C applications; even the Unreal game engine has been demonstrated in WebAssembly by Epic. It means high-speed internet applications, and it supports compiling from the normal C-family languages, so you could write your codec in C and compile it. The advantage is that it becomes a networkable application, which probably doesn't suit your archive ambition as much as player applications, now that we have established you are not looking at visually lossless or competing with RR.

    OK, my private R&D codec is on the future list, but it has been low priority in the past; I'm working on a more serious project. So my knowledge of the others is passing, but what I see in mine, I don't see much mentioned. It is expressly about certain ..... that the techniques are applied to. From your talk I think you may be cultivating good views. But seriously, ratio figures: I don't know what openjpeg2k does, and I'm not much interested (the newer JPEG does it faster), and I have my own waveform ideas that took me a decade or two to realise. It's all on the back burner behind other things getting done. All the good stuff started decades ago, but I think people are still concentrating on even older techniques that require a lot of work for their efficiencies.

    You will likely hear from me sometime, but right now I'm supposed to be doing a few things other than writing here (NAB new cameras, where are you :) ).

  7. #17  
    my ratios are all relative to equivalent openjpeg2000 files (openjpeg being one of the few reference-compliant j2k encoders I could find), and on RGB 4:4:4 images ..
    my files are 55.12906% smaller than the j2k files on average .. so far .. as I say, that was compressing all the images I have, at varying bit depths
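    (to put that as a ratio: 55.129% smaller means my files are roughly 44.9% of the j2k size, i.e. about 2.23:1 relative to j2k .. a quick sketch of the arithmetic, with made-up totals for the aggregate figure:)

    program RatioSketch;
    {$mode objfpc}
    // turning "percent smaller" into a ratio, and an aggregate ratio from totals.
    // the byte totals below are invented purely for illustration.
    var
      PercentSmaller, Ratio: Double;
      TotalOriginal, TotalCompressed: Int64;
    begin
      // 55.12906% smaller than j2k  ->  files are about 44.87% of the j2k size
      PercentSmaller := 55.12906;
      Ratio := 100.0 / (100.0 - PercentSmaller);
      WriteLn('ratio vs j2k : ', Ratio:0:2, ':1');                // about 2.23:1

      // an aggregate ratio, total input bytes vs total output bytes
      TotalOriginal   := Int64(300) * 1024 * 1024 * 1024;         // hypothetical 300 GiB in
      TotalCompressed := Int64(95) * 1024 * 1024 * 1024;          // hypothetical total out
      WriteLn('overall      : ', TotalOriginal / TotalCompressed:0:2, ':1');
    end.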

    assuming jpeg-xr (Microsoft HD Photo) compresses 2.5:1 on average, as the jpeg site says .. it's likely not beating my ratios .. as the jpeg site also says its ratios are similar to j2k ..

    I don't know what a 3-4:1 ratio means either .. as some files don't compress well regardless of the format used .. that's just the way the numbers go .. so I'd never state an expected ratio .. but indeed some files do go down to 20% of their original size etc .. or less at times

    what I do know is that my system tracks FLIF (free lossless image format) quite accurately .. and none of the jpeg formats come close at lower sample depths, or with files containing sparse leaf counts .. hopefully that makes sense
    funny thing is .. I think I could fix j2k compression by adding just a very few lines of code ..

    I've also tested compression after converting 16bit integer samples to 16bit half-float samples .. which is a lossy transform, yielding even better ratios .. but of course it's not lossless in a true sense ..
    kickstarter have told me my project is legit .. I'll hit it up in a few weeks, once I've organised myself

    have fun camera watching, wayne .. I'll check in on camera progress in a few years .. I just sold my epic dragon kit .. but I've kept my r1 for personal projects

  8. #18 - Senior Member (Join Date: Jan 2008, Posts: 5,097)
    Yeah, we are talking averages, and just comparing the original files to the compressed files to get a starting idea. Try it against a good wavelet codec, like cineform? So, how big were your files in total compared to the original total, Konrad?

  9. #19  
    Quote Originally Posted by Wayne Morellini View Post
    Yeah, we are talking averages, and just comparing the original files to the compressed files to get a starting idea. Try it against a good wavelet codec, like cineform? So, how big were your files in total compared to the original total, Konrad?
    remember .. I'm creating a lossless format .. cineform is "visually lossless" .. it doesn't do lossless, and it runs at lower bit depths .. if I drop my bit depths to 10bit, 165mb helium frame grabs become 9mb files .. that's lossless 10bit .. it can be re-encoded forever .. without quality loss
    my comparisons are always against lossless "compressed" formats that support at least 16bit colour channels .. as there is no point in creating something less effective .. so I'm happy to test anything that does the same job ..
    .. also .. because I compressed 300gig of test files and didn't have enough space .. I had to delete the input files as I encoded, as I was making two new encodings of each file (a jpeg2000 .. and my format) .. but I could calculate that number too if you like .. I'll just have to decode the compressed files ..

    I've not compared my system against the FFV1 codec yet, though I'm familiar with ffv1, having used it years ago .. it supports 16bit channels, and is lossless .. but I'll wager its simple median-filter prediction puts it behind my encoder's ratios .. even though my filters are even simpler!
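    (for reference, ffv1's median prediction is the classic JPEG-LS / LOCO-I median-of-three .. sketched below for illustration; it has nothing to do with my own filters:)

    program MedianPredictorSketch;
    {$mode objfpc}
    // the classic median predictor used by FFV1 / JPEG-LS style coders: predict each
    // sample from its left, top and top-left neighbours, then entropy-code only the
    // prediction error (the residual).
    uses Math;

    function MedianPredict(Left, Top, TopLeft: Integer): Integer;
    begin
      if TopLeft >= Max(Left, Top) then
        Result := Min(Left, Top)
      else if TopLeft <= Min(Left, Top) then
        Result := Max(Left, Top)
      else
        Result := Left + Top - TopLeft;   // smooth region: planar gradient guess
    end;

    begin
      // e.g. a smooth ramp: left=100, top=104, top-left=101 -> predicts 103,
      // so an actual sample of 103 leaves a residual of 0
      WriteLn('prediction : ', MedianPredict(100, 104, 101));
      WriteLn('residual   : ', 103 - MedianPredict(100, 104, 101));
    end.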

    I've tested a few jpeg XR files .. took some 48bit images and encoded them into 24bit jpeg xr (.. I don't currently have a 48bit jpeg xr encoder ..) .. the results are approximately the same size as the 48bit encoding of the same file that my system generated (about 6% smaller, actually) .. but I'm not throwing away half the data before compression .. so I'd also wager that, when I can make 48bit jpeg-xr files, they will be a whole lot bigger than the 24bit ones .. and hence a whole lot bigger than my Huffman coder's files ..

  10. #20 - Senior Member (Join Date: Jan 2008, Posts: 5,097)
    Cineform can do lossless and, I believe, higher bit depths. David Newman created cineform raw; maybe it applies there. Although it is not mathematically lossless by default, by raising the settings it can get there. However, it is just much better technology than jpeg2k, hence the boost to redcode when they changed their lossy codec away from jpeg2k. My point is that jpeg2k is not a good wavelet implementation to compare to. Xr may also produce the same quality, but with a lot less processing.

    I encourage you to also add a few lines of code to Jp2k and see what happens. I'm actually not too thrilled it turned out the way it did; wavelets were promising.
