Redray codec accuracy



Wayne Morellini
02-18-2017, 08:22 AM
Let's talk business. Redray has been around for a while. Are there comparison tests for its accuracy compared to different codecs?


Thanks.

Elsie N
02-18-2017, 08:27 AM
Redray and the codec it used, .RED, seem to have just faded away, but I remember them showing some data rates years ago when it was first unveiled. This is probably something best addressed by Graeme, but as we all know he is up to his elbows in IPP2 stuff.

I suspect someday we will see employment contracts stipulating that the employee will be required to take on cyborg properties either by implant, hardwire plug-in or wi-fi. Until then, we are stuck with Analog Graeme and his limitation of doing the work of just one person. '-)

Wayne Morellini
02-18-2017, 08:52 AM
The redray player was out in the wild, so somebody must have done some testing?

Gavin Greenwalt
02-23-2017, 02:07 PM
From what I heard it was comparable to H.265, which made it kind of irrelevant, since H.265 is widely implemented in all new hardware whereas RedRay required a dedicated player.

Wayne Morellini
02-23-2017, 09:37 PM
You are joking. All those guys raving about how lossless it looked at 9mb/s, it is incredible they could get that wrong. H265 can't do that. Couldn't they tell the difference? Gavin, are there links to any actual tests?

Wayne Morellini
02-23-2017, 09:38 PM
Oh, and thanks for the heads up Gavin. :)

Wayne Morellini
02-23-2017, 09:45 PM
Hmm, does that mean if I put a demo of cows in a windy field up in 4k 9mb/s h265, in a dimly lit room at midnight, after serving drinks for an hour, and put strippers beside the screen bathed in red light, and call it a revolutionary codec and claim to be Steve Jobs, people would say how ultra realistic and lossless it looked, how good it is for recording porn, and start asking me when the next Mac Pro was coming out? :)

konrad grant
03-12-2017, 10:43 AM
ive compressed every image i have on disk .. about 160k of them, 300tb worth .. using my own compression algorithm, and also with jpeg2000.
the images I encoded were 8bit, 24bit and 48bit colour depths, and my compression system is based on a simple Huffman coder that encodes images in about 1/10 of the time the openjpeg encoder requires .. but has produced a folder of images that is 55.12906% smaller than j2k.
I can also add complete error detection and correction data to the file .. which increases filesizes by 2.5% (per megabyte of compressed data)
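To make that concrete, here is a minimal sketch of the kind of simple Huffman byte coder being described (not konrad's actual encoder; it just builds code lengths from byte frequencies and reports the coded payload size, ignoring the table and any pre-filtering; the filename is a placeholder):

import heapq
from collections import Counter

def huffman_code_lengths(data: bytes) -> dict:
    # Build Huffman code lengths from byte frequencies (lengths are all we need
    # to measure the coded size; actual bit codes would follow canonically).
    freq = Counter(data)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    heap = [(f, i, {sym: 0}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**a, **b}.items()}
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def coded_payload_bytes(data: bytes) -> int:
    # Size of the Huffman-coded payload only, without header/table overhead.
    freq = Counter(data)
    lengths = huffman_code_lengths(data)
    bits = sum(freq[s] * l for s, l in lengths.items())
    return (bits + 7) // 8

if __name__ == "__main__":
    raw = open("test_image.tif", "rb").read()   # any test file will do
    print(len(raw), "->", coded_payload_bytes(raw), "bytes (payload only)")

In practice the ratios konrad reports would come from whatever filtering/prediction happens before a stage like this; a plain byte-frequency pass over raw pixels gains very little on its own.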

I get the same results compressing raw DNG files, but ive not optimised for raw or single channel data yet

ive noticed that the 8k weapon footage ive downloaded from reduser generally encodes very well, typically..165mb tiff-> 120mb j2k-> 47mb Huffman coder... I'm finding weapon footage compresses much easier than dragon orientated footage, and its not necessarily caused by a reduction in noise .. its slightly stranger than that ..

better compression algorithms are easy to design, and improving existing ones too.. the hard bit is implementing them in hardware , whether it be via HDL code which is well beyond me, or other methods, which are also ..tricky!.. and for new formats..getting them adopted of course

I got fed up with j2k lossless compression some time ago .. quite simply .. its a lossy format for practical purposes .. its lossless mode creates bloated files (most of the time)
replacing it as a backup format has been easy for me. shame no one else is interested .. although if anyone wants to be a windows beta tester .. i could arrange that .. ive run out of test images!

I think it was Graeme who said designing the .red codec was, i paraphrase, "remarkably easy" .. which suggests to me they adapted an existing codec .. i doubt they knocked out a brand new, ground-up hardware based codec .. more like "pre-filtered an image and fed it into an existing codec" .. like they do with redcode (pre-filter data -> jpeg2000) .. which still costs an arm and a leg in research and development to implement in hardware .. and headaches .. but means you have off the shelf hardware support to use as a starting point

Wayne Morellini
03-12-2017, 05:22 PM
If anybody wants to beta test for Konrad, that is good here.

Now, Konrad. My head is a bit misty at the moment and I'm next to a busy road, so forgive the sloppiness of my reply. I can tell you some of the stuff about redcode, but first: the jpeg2k part of your compression comparison is a bit of 'how long is a piece of string' for judging compression. How much compression on average you get for true lossless, how much for a known form of visually lossless, or how it compares to various visually lossless modern redcode data rates, are the basic sorts of comparisons. An average loss in dB against an image is accurate but not so relevant, as it is where that loss occurs that matters. Preserving the more significant parts of an image at the expense of the less significant parts skews the perception of image quality. This is something I put forward nearly 13 years ago.
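For reference, the 'average loss in dB' metric being talked down here is usually PSNR; a minimal sketch of it, assuming numpy, which shows how it averages error over the whole frame with no notion of where the loss lands:

import numpy as np

def psnr(original: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    # Mean squared error weights every pixel equally, then converts to decibels.
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # bit-exact, i.e. mathematically lossless
    return 10.0 * np.log10(peak ** 2 / mse)

A codec can trade a few visible errors on an important edge for many invisible ones in flat areas and score the same PSNR, which is the point being made.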

Now, redcode. I know people, and redcode was apparently originally based on Jpeg2k, which is bloated, slow, etc. Then redcode took on cineform-related technology and vastly improved.

Now, redray performs very high compression, by reports at least visually lossless (but I have not seen tests). So, does your codec perform 4k at less than 9mb/s visually lossless etc?

Now, it is often difficult to get antiquated and fundamentally inefficient routines up to high efficiency metrics. But a high efficiency routine can be different and simpler. So, a second level of denial of interest in routines happens there (the first includes 'you can't x,y,z', 'another guy claiming something', 'yada yada yada', 'blah, blah, blah', etc.). The levels include people with inadequate or no idea, or ill-equipped to do something with it financially or practically. The third level is that it is an IP nightmare, where you can fall foul of patents, and it's a lot of trouble; companies that know things know it, and often may not be interested once they do know. A simple Huffman encoder by itself doesn't appeal as a potential redray challenger.

But I'm open minded, so I'm interested in what it is, but protect your IP. If you really have something and it's all too difficult, maybe Google will be interested to pay and roll it into their open source codecs, but then you get the chicken-and-egg difficulty of getting them to sign a non-compete permanently, which companies, virtually NONE of which have the utmost morality, would not be interested in doing under legal advice. So, with a limited non-compete clause, they likely will still be reluctant to sign, plus you have the issue of filing for a patent and the cost of completing the patent rollout over the next year or so, in which time you may have nobody taking up the technology to pay for the patents, and then the patent lapses and companies you previously had under NDA and non-compete swoop in to sweep it up. One engineer I knew was part of a group that had a new leading hardware technology, and the big company they wanted to license to, apparently, was intent on just waiting them out. We really do need a revision of patent law to have free registration of IP, with a patent period that starts upon contracted commercial use of the IP. Then you can present and companies can window shop, but never have the ability to wait out IP unless it is during the active period of commercialisation. This is the sort of thing I'm putting forward for IP reform. There also needs to be a more open market mechanism for these things after a first exclusive license period, which is another reform I'm putting forward. We might be looking at a 90% reduction in progress due to the way the current systems work. Which means we are maybe a century or more behind in some areas (some newer areas may be more novel, harder, and have less legs to advance quickly with the aid of smaller competitors without many millions or billions of project investment).

Elsie N
03-12-2017, 07:24 PM
I invented that years ago. '-)

Wayne Morellini
03-12-2017, 09:59 PM
I don't know, I might have before you, Elsie, lol!

But the IP market gives me the creeps. It is heavily suited to less mindful very large companies and their drones, rather than to the masses with a lot of creative potential, making a real drain and bottleneck even for companies. You should not have to go into hock for heaps just to establish and protect a patent worldwide before you go and do anything with it, or be required to go through very expensive NDA processes and further development in secret while hunting for investors etc. A true marketplace is: you have it listed, people look at it, say "there's one", and contact you, rather than you hunting across the country or planet for somebody to NDA and license. It would make the rate of progress and change speed up rapidly and get rid of lesser solutions quicker. In a more open marketplace multiple companies could license at once without negotiation, at set rates (the rate calculation is a little complex, but something I came up with maybe in the 1990's). You never need to meet a licensee, but if they have any sense they would use you as a consultant.

konrad grant
03-20-2017, 10:10 PM
firstly .. where loss takes place is irrelevant .. my compression system is lossless, and for archive .. my files and the j2k output are bit for bit identical images .. so the lengths of strings can be compared .. and measured in bytes ;)

..as for your lossy ideas 13 years ago .. they caught on ;) .. lossy is a p*ece of p*ss to pull off ..

does my codec perform 4k at less than 9mb/s visually lossless? .. no, its lossless .. not visually lossless .. but it would depend on the image, whether I was recording raw or RGB .. and you didn't specify the framerate .. or bit depth .. and is that megabits or megabytes? .. ill assume you mean 1fps and megabytes .. and say yes ;) .. of course 4480x1920 (r1 wide screen) is slightly fewer pixels than 4096x2160 .. so in that scenario .. it could do 4.5k .. or a 28k image .. if its height is 1 pixel ..
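For what it's worth, the two readings are a long way apart. Assuming 4K DCI at 24p and 10-bit RGB (neither of which the thread actually pins down):

9 megabits/s ÷ 24 fps ≈ 47 KB per frame
9 megabytes/s ÷ 24 fps ≈ 375 KB per frame
4096 x 2160 x 3 channels x 10 bits ≈ 33 MB per uncompressed frame

so the claim works out to roughly 700:1 in the megabit reading and roughly 90:1 in the megabyte reading.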

but seriously..i think i could modify it to do visually lossy encoding ..without complex math.. but i do not expect id meet the redcode datarate .. nor do i think redray would match my encoding quality ;p .. but i imagine i could produce a pleasing image at that datarate .. but i have little interest in doing so currently..go figure ..

in my world, its all about lossless, high dynamic range encoding .. and archiving .. ill leave playback to dvd ;)
I'm not confined by hardware implementation .. i'm producing a 64bit image pipeline for storing very large numbers of pixels as densely as i can ..

i'm going to launch a small kickstarter soon, with one goal being to enable compression of extremely large hdr panoramic /stitched images using my system (and small images too of course)..along with some downsampling, upsampling tools so you can view them at HD etc ..or SD ..without issues .. at the end of the project .. everyone can see the compression code..its going opensource ;)
..couldn't care less what google do .. the fuller their hard drives are ..the better .. i'm happy for them to choke on their own data ..

.. its about 400 lines of code so far .. written in free pascal .. ready for translation to other languages .. but i have a long list of possible improvements and tests to carry out first .. including supporting floating point colour accuracy etc

once the design is locked down (with some open hooks for adding features), then hopefully there will be a java multi-platform decoder created .. keith long from east end studio in London has offered to do a translation, if anyone can, its probably him .. he writes code for the banks .. untangles bad Russian multi-threading routines .. makes stuff work nice .. and my code is peanuts easy .. and he likes the fact i use parentheses comprehensively in my equations .. he says its professional ;)

the adobe machine already issued me a private compression tag for the tiff format too, so implementing it within tiff is an option already .. though it wont be supported by third parties unless i write a patch for libtiff etc .. or they want to assimilate it .. they're not interested in making work for themselves, as they are a small team .. but i shall keep them posted when i'm done ..

there are no patents involved .. its a simple system, just surprising that doing a few simple things that seemed obvious to me .. that surely must have all been done before .. would leave jpeg2000 and every other variant of jpeg in the dust .. truly very strange, and surprising. i shall write a white paper regardless, called "don't believe every research paper you read, a guide to lossless HDR image compression and archiving"

if the kickstarter gets funded .. i shall offer a prototype encoding /decoding tool on day one..and then regular updates.. with periodic function bonuses along the way .. the aim is to under promise , and over deliver .. and i'm happy i can deliver on what i promise etc ..based on what i have ..

4k would fund 12 months of development time currently .. so that's what ill be asking for .. trying to take a break from doing rentals for a while .. but have a 4k bank loan to service .. which is distracting me ..

redcode compression is not based on jpeg2000 ..it is jpeg2000.. to become redcode, additional processing/filtering of the captured image is required of course ..which no doubt helps the compression engine ie jpeg2000 ...

Wayne Morellini
03-21-2017, 04:56 AM
Oh great, a temperamental being, touchy with the only person interested in his stuff.

So, less misty, but on a sleeping tablet. So is lossless. Crappy lossless is a piece of piss to pull off, not good visually lossless at high efficiencies. Actually, high efficiency lossy tends to get pretty complicated to push those levels of efficiency up. One reason, I would believe, is just boosting a base technique that is not the best. The sacrificial image breakdown I publicly posted and pointed ambarella to was based off my analysis of visual perception in various forms of art and computer games 20 years ago. Before ambarella, a lot of high compression ratio footage on consumer cameras was crap. That is why I put it forward, to better preserve the more noticeable and meaningful parts of the image. They use ambarella tech in higher end cameras and broadcast infrastructure.

Now, the discussion was related to redray, and you bring a gun to a knife fight, so it was a bit confusing, Konrad. As I explained, I was a bit off colour. So, of course, I was solely comparing against the 9mbit per second initial redray claims (actually less). After an actual showing one late night, a bunch of people (I think plied with alcohol) started acting like it was 9mb/s lossless (I guessed they might have been mistaken and it could be visually lossless, but it may just compare to the crappy cinema exhibition compression ratios). I believe that was 4k24p, and the bit depth I don't remember. Better than others, but a number of people are aware of my own new advanced compression ideas in the previous camera projects that led to the Red camera. To this day, they have kept quiet about how it actually works. I also basically kept most of the best mechanic ideas to myself. So, redray achieves half or a 1/4 of what I was aiming at with those. But the truth is far more interesting.

So, you are not serious about being temperamental. Good because I have been perceiving that you might be of a certain mindset that can take the complex and find simple solutions for it. It is obvious to simplify these things, but not to many. I encourage you to take a good path and commercialise it, we really need good solutions.

Now let's put this into perspective. I tell people I'm aiming for at least 10x lossless, because I'm really aiming for at least 100x, and by an ancient proposal of mine, hopefully over 1000x in a particular way (all without non-authentic parts of an image, like noise). A simpler codec I am currently holding back should achieve 9mb/s 8kp50, if not lossless then visually lossless. But to patent these things would require so many patents as to practically write a new book. I have neither the money nor time to do it right now. I was the primary school kid in the playground coming up with viable processing structures for artificial intelligence at age 10, 11 or 12. So, storage of data, and processing, are very important to me.

So your 400 lines of code sound more interesting to me, from the perspective of things I have covered, knowing some changes can lead to simple large gains. But whatever you do, remember cineform, particularly cineform raw, which is much better than the Jpeg2k you are comparing to, and also the newer form of JPEG that came out of Microsoft!

Wayne Morellini
03-21-2017, 04:59 AM
Forgot: look up WebAssembly. They are developing a portable virtual machine code for the browser, which hopefully might give a more stable target than compiling to JavaScript on different platforms (and opens up the possibility of processors built for it on common hardware).

konrad grant
03-21-2017, 01:44 PM
wayne, ive always liked you .. i am not touchy .. i am not familiar with your work, so its not always obvious what you know

what i know is, about 3 months ago i spent 3 days creating an encoder .. and i learnt that most assumptions id made about such an endeavour were inaccurate ..

my encoder is currently encoding .. i ran out of test images a week ago.. so i hit up pexels.com .. j2k lossless is making files 3x bigger currently .. that's openjpeg2000 .. as i am a peasant

ill make a java decoder first, then maybe an encoder .. i know nothing about javascript ..except they are completely unrelated .. as someone once wrote, java is to javascript, what cars are to carpets .. but ill ask keith to take a look ..he's doing a c# translation of the decoder too ..

my free pascal compiler can produce windows, Linux, mac and android ..and raspberry pi binaries etc etc.. but that is beyond me, im not familiar enough with mac/Linux/unix etc .. but im keeping my code simple , so that, with outside help, i can get encoding and decoding on multiple platforms, using binaries and java ..

im driven by a personal need for a lossless backup system worth its salt , ..i thought id found it with FLIF , but then i calculated compressing all my test images would take a year .. i compressed some to compare against my system ..they track quite well btw.. but after 2 weeks of compressing i was nowhere through the list ..

note, flif is a lossless encoder .. but can encode lossy images by pre-filtering the image .. and the results are excellent, as is the quality, and bitrate .. compared to the alternatives, such as lossy jpeg ..
this is the same approach i would take to lossy encoding .. and it has the side benefit that you can re-encode the image after changes without degrading anything .. as underlying everything is a lossless encoder ..
banding does not occur with 7bit colour channels .. just doing that .. gives me a very large ratio .. if playback is all you require ..
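A minimal sketch of that pre-filter-then-lossless idea, assuming Pillow and numpy (the filenames are placeholders): quantise once, up front, to 7 significant bits per channel, then keep everything after that in a lossless container so later re-encodes lose nothing further.

import numpy as np
from PIL import Image

def quantise_to_7bit(path_in: str, path_out: str) -> None:
    # Drop the least significant bit of each 8-bit channel (7 significant bits),
    # then store the result losslessly; PNG here is just a stand-in container.
    img = np.asarray(Image.open(path_in).convert("RGB"))
    quantised = (img >> 1) << 1           # the only lossy step
    Image.fromarray(quantised).save(path_out, format="PNG")

quantise_to_7bit("frame.tif", "frame_7bit.png")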

if you would like to try a windows binary/proto encoder.. just drop me a private message sir .. and we can exchange email
if you are as knowledgeable about codecs as ive always assumed you are , id be interested in your opinions
ive had i.p stolen, ..but that's another story ..

Wayne Morellini
03-21-2017, 08:25 PM
Thanks Konrad. What is the average ratio of compression you are achieving? Industry best was usually around 3-4:1 13 years ago, with a clean 4:4:4 image and probably high bit depth adding more. There was one that claimed 6:1 with 4:4:4 (and the space images on their site looked spartan of detail), so I don't know about that one. That is before you get into inter-frame techniques. So, yes, certain things help Bayer compression, and probably the low pass filtering reducing pixel difference. Now, an open-source implementation might be a little different from the commercial best. So, hard ratios are a good thing to look at. Sorry about all the 'visually lossless' before; I was a bit under it and it wasn't obvious.

Now, what you said is true: when you look at solutions with real eyes, and new eyes, you can see a whole lot of new stuff. Some of it is just false ends people avoid, or have forgotten, or never learnt because it didn't pan out (sometimes it didn't because the people were not successful, but it actually can be better). I've got the same problem in some areas of science: are these people blind to xyz, or do they merely not talk about it, having taken it into account because for some reason it's not valid? But logic is simpler: give it a go.

JavaScript the language is different in the way it works, but was designed to compile to Java byte code, so used subsets of Java APIs and some new stuff (which I would imagine is in the Java standard anyway). View JavaScript as an advanced form of C which can be compiled to stand-alone desktop applications these days.

The web assembly virtual machine code binary format is meant to install and run applications fast, nearly as fast as native C applications, so even the Unreal game engine has been demonstrated in web assembly by Epic. It means high speed internet applications, and it supports compiling from normal C languages. So, you can write your codec in C and compile. The advantage is it's a networkable application, which probably doesn't suit your archive ambition as much as player applications, now we have established you are not looking at visually lossless or competing with RR.

OK, my private R&D codec is on the future list, but has been low priority in the past. I'm working on a more serious project. So, my knowledge of others is passing, but what I see, I don't see much mentioned. It is expressly about certain ..... that the techniques are applied to. From your talk I think you may be cultivating good views. But seriously, ratio figures: I don't know what openjpeg2k does, and I'm not much interested (the newer JPEG does it faster), and I have my own waveform ideas that took me like a decade or two to realise. It's all on the back burner behind other things getting done. All the good stuff started decades ago, but I think people are still concentrating on even older techniques that require a lot to get efficiencies.

You will likely hear from me sometime, but now I'm supposed to be doing a few other things than writing here (NAB new cameras, where are you :) ).

konrad grant
03-22-2017, 11:51 PM
my ratios are all relative to equivalent openjpeg2000 files (which is one of the few reference compliant j2k encoders I could find), and on RGB 4:4:4 images..
my files are 55.12906% smaller than j2k on average .. so far .. as i say .. that was compressing all the images I have , at varying bit depths

assuming jpeg-xr (Microsoft HD Photo) compresses 2.5:1 on average .. as the jpeg site says .. its likely its not beating my ratios .. as the jpeg site also says its ratios are similar to j2k ..

I don't know what a 3-4:1 ratio means either .. as some files don't compress well regardless of the format used .. just the way the numbers go .. so id never state an expected ratio .. but indeed some files do go down to 20% of their original size etc .. or less at times

what I do know is, my system tracks FLIF (free lossless image format) quite accurately .. and none of the jpeg formats come close at lower sample depths, or with files containing sparse leaf counts .. hopefully that makes sense
funny thing is.. i think i could fix j2k compression by adding just a very few lines of code ..

ive also tested compression after converting 16bit integer samples to 16bit half float samples .. which is a lossy transform, yielding even better ratios.. but its not lossless of course in a true sense ..
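That integer-to-half-float step looks roughly like this (numpy assumed), and the round trip shows why it isn't lossless in the true sense: float16 only has an 11-bit significand, so 16-bit integers above 2048 can't all be represented exactly.

import numpy as np

samples = np.arange(0, 65000, dtype=np.uint16)   # stay below float16's ~65504 ceiling
as_half = samples.astype(np.float16)             # the lossy transform
back = as_half.astype(np.uint16)                 # round trip
changed = np.count_nonzero(back != samples)
print(changed, "of", samples.size, "sample values do not survive the round trip")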
kickstarter have told me my project is legit .. ill hit it up in a few weeks , once ive organised myself

have fun camera watching wayne .. ill check in on camera progress in a few years .. i just sold my epic dragon kit .. but i've kept my r1 for personal projects

Wayne Morellini
03-23-2017, 02:24 AM
Yeah, we are talking average, and just comparing original files to compressed, to get a starting idea. Try it against a good wavelet codec, like cineform? So, how compressed were your files in total compared to the original total, Konrad?

konrad grant
03-24-2017, 01:57 AM
remember .. i'm creating a lossless format .. cineform is "visually lossless" .. it doesn't do lossless, and it runs at a lower bit depth .. if i drop my bit depths to 10bit .. 165mb helium frame grabs become 9mb files .. that's lossless 10bit .. can be re-encoded forever .. without quality loss
my comparisons are always against lossless "compressed" formats that support at least 16bit colour channels .. as there is no point in creating something less effective ? ..so im happy to test anything that does the same job ..
.. also .. because i compressed 300gig of test files ..and didn't have enough space.. i had to delete the input files as i encoded ..as i was making two new encodings of each file (a jpeg2000 .. and my format) .. but i could calculate that number too if you like .. ill just have to decode the compressed file ..

ive not compared my system against FFV1 codec yet, though im familiar with ffv1, having used it years ago .. that supports 16bit channels, and is lossless .. but ill wager its simple median filter prediction puts it behind my encoder ratios..even though my filters are even simpler !
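For anyone following along, the median prediction being referred to is roughly the MED predictor used by JPEG-LS and, in similar form, by FFV1: each pixel is predicted from its left, top and top-left neighbours and only the residual is entropy coded. A minimal sketch (not FFV1's actual code):

def med_predict(left: int, top: int, top_left: int) -> int:
    # Median edge detector: picks top across a horizontal edge,
    # left across a vertical edge, and a planar estimate otherwise.
    if top_left >= max(left, top):
        return min(left, top)
    if top_left <= min(left, top):
        return max(left, top)
    return left + top - top_left

def row_residuals(row_above: list, row: list) -> list:
    # Residuals for one row; the first column just uses the pixel above it.
    out = []
    for x, pixel in enumerate(row):
        pred = row_above[0] if x == 0 else med_predict(row[x - 1], row_above[x], row_above[x - 1])
        out.append(pixel - pred)
    return out

The residuals cluster around zero on natural images, which is what makes the entropy coder's job easy.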

ive tested a few jpeg XR files .. took some 48bit images -> encoded into a 24bit jpeg xr (..i don't currently have a 48bit jpeg xr encoder..).. the results are approximately the same size as the 48bit encoding of the same file that my system generated (about 6% smaller actually).. but i'm not throwing away half the data before compression .. so i'd also wager that.. when i can make 48bit jpeg-xr files .. they will be a whole lot bigger than the 24bit ones ..and hence a whole lot bigger than my Huffman coder files ..

Wayne Morellini
03-24-2017, 03:23 AM
Cineform can do lossless and, I believe, higher bit depths. David Newman created cineform raw, maybe it applies there. Although it is not mathematically lossless, by increasing the data rate it can get there. However, it is just much better technology than jpeg2k, hence the boost to redcode when they changed their lossy compression from jpeg2k. My point is that jpeg2k is not a good wavelet implementation to compare to. XR also may produce the same quality, but with a lot less processing.

I encourage you to also add a few lines of code to Jp2k, and see what happens. I'm actually not too thrilled it turned out the way it did; wavelets were promising.

Wayne Morellini
03-24-2017, 03:54 AM
I'm curious how the 3D-5D wavelets turned out that they use in some security camera compression (after correcting the non-authentic parts of the image, i.e. noise).

Your comparison: if you can compare to modern redcode, you are basically comparing to cineform technology anyway. Kinefinity uses an official cineform, but at what depth and quality?

So, you are saying you are taking a 165MB effectively lossless redcode frame down to 9MB, or taking a 16 bit, 4:4:4 demosaiced frame grab to 9MB. Those 4:4:4 grabs would be incredibly devoid of real detail, as two thirds is recreated after a low pass filter smudges a lot of difference out, in a simply compressible way. Devoid of noise, 4:4:4 (especially this stuff) becomes much more compressible, and 16 bits a lot more again. Jpeg HDR used to get incredible compression rates on frames, as you are describing something that is mainly differences in peaks that are described similarly in data, producing huge savings.

So, I should clarify: the 2-4:1 maxing out in the old days was 8 bit video. What it is now for what you are dealing with I wouldn't know. 4:1+ might be the norm on Bayer, even more again with prefiltering. Everything is largely mappable except unexpected things like noise. In testing small cameras' codec quality, I will wave things around unpredictably and examine the data rate and quality. Another one was wavy water; it used to tank out and macroblock older codecs, but with h264 we got smudging as a solution. However, it is possible to make the codec map the waves, something I pushed for. With extra data rate the wave thing became a lot less of a problem anyway. But strongly flapping leaves and branches, and swirls of mist making up most of the picture, are the sorts of things that separate out lossy cameras.

Anyway, an interesting conversation I've had with cineform, or SI, whichever engineer it was: they objected to my pushing for noise elimination to dramatically increase compressibility, as removing authentic image data, to which I politely pointed out that the noise was not in the scene and the recreated pixel was likely much more authentic. But it seems to be a potential factor. It means interesting things for lossless too, as the noise was probably a main factor limiting normal compressibility to 2:1-4:1 max.

It also means that your routine might get further ahead than it would on normal footage, because of the prefiltering.

konrad grant
03-24-2017, 06:58 PM
jpeg2000 is consistent losslessly speaking .. it compresses some hard to compress images well .. but loses at the easy gains ..
j2k in lossy mode is pretty good .. i don't think the wavelet transform does anything for lossless encoding ratios though .. in fact, i'm pretty certain it hurts it ..

i have two desires for a codec

1) in camera/ during capture = lossy for motion (duration extending..but also lossy can be faster, allow higher fps..).. but lossless for stills/ timelapse/low fps/astronomy / long exposure shots etc
2) when preserving material for the future, and during editing .. i want lossless

lots of players working on 1), .. fewer working on 2)
my ideal camera could be swung around, and would encode using different encoding settings, in an alternating pattern, using several methods for a specified time .. it would not record any files, simply compute their sizes .. and then would dial in appropriate settings for the scene
this could be done by simply panning the camera around slowly .. a forest is very different from a living room ..
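A toy version of that compute-the-size-but-don't-record idea, assuming Pillow, with JPEG quality standing in for whatever settings the camera would really be dialling in (the filename is a placeholder):

import io
from PIL import Image

def pick_quality(frame: Image.Image, target_bytes: int, candidates=(95, 85, 75, 60)) -> int:
    # Trial-encode the frame at each candidate setting, measuring size in memory
    # only, and return the highest quality that fits the target budget.
    for q in candidates:
        buf = io.BytesIO()
        frame.save(buf, format="JPEG", quality=q)
        if buf.tell() <= target_bytes:
            return q
    return candidates[-1]

frame = Image.open("pan_sample.tif").convert("RGB")   # one grab from the slow pan
print("chosen quality:", pick_quality(frame, target_bytes=400_000))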

the only problem i found with cineform, ..wasn't the codec , but the debayering .. red have always done a better job imo

my system is "dumb and simple/ non -adaptive" currently..which is why i find the results so odd.. there is nothing as complex as .. say ..a paeth filter going on..
i believe, as do the creators of the FLIF format, that the future of lossless encoding will involve machine learning .. so the format i am devising has this in mind, and will eventually contain metadata ive not seen in other formats, and the ability to understand previously undefined filtering methods of high complexity, by understanding definitions of new filters .. provided they follow the filter syntaxes provided .. quality will always remain the same .. but how long you wish to analyse the image before compression will be selectable .. and scalable .. it is not intended for hardware implementation, but a software / network solution for film, and petapixel panoramas etc

..you've got me interested in seeing what is possible in the lossy realm .. but my system is not designed for playback ..encoding is done a byte at a time , decoding one bit at a time .. both are approx. symmetric in time required .. but some of the functionality i want to support does not make large LUT/fast decoding trivial to implement .. so that is not high on my agenda yet ..and openjpeg is slower at decoding to full images..

jpeg-xr makes smaller files than j2k .. i really need some 48bit jpeg-xr test files ..or a 48bit encoder
in the meanwhile .. ill make a 24bit version of my system to do a preliminary test .. the few jxr files i created came out larger than my files .. but i need to try that on 50,000 images to get better stats ..

i don't think j2k is a great wavelet format either.. but its "out there" .. and i cannot test against what is not ..

regarding compressing helium frame grabs: i render out an 8k helium r3d grab to 48bit tiff. it is debayered .. the interpolations often create new "leaf values" .. speaking in Huffman terms .. they do not reduce
so the tiff is 165mb , the jpeg2000 file is 120mb, and my files are 47mb .. that's pretty clear hopefully..
if i drop the channel depth to 10bit per component .. that's when the file becomes a visually lossless, and true lossless, 30bit file .. hopefully that makes sense .. quality is 100% ;) .. but 10bit components .. which could be in log space ..
..and here is a bit you may be missing .. even though a 4:4:4 image may be interpolated (which can only result in equal or greater image complexity btw .. never less complexity .. regardless of the interpolation method .. the latter being the case with my own debayering algorithms ..) .. if that 9mb file was a "raw" file .. the leaf count CAN ONLY DROP .. as "leaves created through high quality interpolation" are removed .. but even if no leaves disappear .. i definitely only need to write 1/3 as many codes out to disk .. so the file .. i can say with certainty .. could become 3mb in raw format ..

of course , with a lossless system .. there can be no data rate control

i would prefer to save the full sensor readout verbatim .. even if it contains transport noise and the likes .. as software noise reduction can be more complex than a fixed hardware solution, and can improve over time..so the image can be reinterpreted .. but the lossy wavelet transforms in j2k are in effect noise reduction filters anyway .. but that might make a deeper noise reduction algorithm less effective later on .. and pre compression filters can compensate / reduce noise , through good prediction .. which my system does not do.. yet ..

anyway, once a shot is recorded in camera .. i don't care .. my problem has always been .. where to put the finished D.I .. which normally means deleting something else!

Wayne Morellini
03-24-2017, 07:54 PM
Yes, they found many years back that wavelets produced much better results at high compression against newer codecs (back then) than at lower ratios. But we are at least a generation past that now in non-wavelet technology. The thing about cineform is that they handcraft for speed and results. At some point it effectively becomes lossless as you drop the ratio too. Even mpeg2 would do that. At some point it pays to switch how you compress (and not just once, hint). Even jpeg basically models one way, and tries to compress the bits that don't conform to the model. Now, imagine if you could examine the data and compress hundreds of different ways according to which worked best against the others in different parts of the image. Of course, I'm encouraging the industry to keep going down a wasteful path here, where maybe millions of sets of analysis have to be done before you determine the best set (unless they have a mind to short cut this).
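A crude sketch of that try-several-ways-per-region idea, assuming numpy, Pillow and plain zlib, with three trivial candidate filters standing in for the 'hundreds'; a real codec would also have to signal which choice was made per tile:

import zlib
import numpy as np

def best_filter_per_tile(img: np.ndarray, tile: int = 64):
    # For each tile, try a few simple decorrelating filters, deflate each result,
    # and keep whichever compresses smallest.
    filters = {
        "raw":        lambda t: t,
        "delta_left": lambda t: np.diff(t, axis=1, prepend=t[:, :1]),
        "delta_up":   lambda t: np.diff(t, axis=0, prepend=t[:1, :]),
    }
    choices, total = [], 0
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            block = img[y:y + tile, x:x + tile].astype(np.int16)
            sizes = {name: len(zlib.compress(f(block).tobytes()))
                     for name, f in filters.items()}
            best = min(sizes, key=sizes.get)
            choices.append(best)
            total += sizes[best]
    return choices, total

if __name__ == "__main__":
    from PIL import Image
    gray = np.asarray(Image.open("frame.tif").convert("L"))   # placeholder filename
    names, size = best_filter_per_tile(gray)
    print(size, "bytes after per-tile filter selection")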

For my universal file compression ideas, I determined you could make a compression scheme that resulted in optimal compression for each data type, maybe plus some overhead, all without you figuring it out. You know why I know this: to us the answer is obvious, to others it seems non-obvious, obscured. They are probably scrambling to figure out what I mean and how to do it. We are talking many, many billions of GDP per year with some of these technologies. Of course I'm being obscure again, as I often am, usually just painting part of the solution, and lesser solutions.

Lol, I just read the rest past the first paragraph of your post. We are of the same mind, as I thought. So, yes, you should be able to get lossless for playback, however according to my calculations human imaging can go into terabytes per second data rates (or was that terabits, I forget). That still means a big need for lossless, unless I can get my high end lossless out. That would be one of the biggest achievements of computational design. But I don't usually stop until I have the ideal (perfect) answer in these sorts of things. Whereas in physical design we see so little beyond the logical aspects, it is hard to know how good a solution is compared to the unknown ideal solution. Even if you know the ideal solution, it may well involve things of such intricacies that it is hard to understand, like things at the Planck scale, or smaller, to get the best possible solution. But in logic, it is possible to determine an ideal path. Except with this, it is applying the logical to the real world, which we don't fully understand. But as we are not recording much below 2400dpi, the rest does not matter too much in exact detail, and if I can get within 1% of the ideal result (1% measured against the 100% of the ideal result), I would be happy. If I had to compress the Planck space, or just the subatomic, on the other hand, it might be extremely inefficient, because we don't understand that space. Too much for neurotypicals.

Elsie N
03-24-2017, 08:11 PM
Guys, instead of inventing a new wheel, why not just use this one (https://developers.google.com/speed/webp/) that google just released?
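For anyone who wants to try it on their own stills, the lossless mode is a one-liner (assuming the libwebp cwebp tool is installed), though note that WebP lossless tops out at 8 bits per channel, which matters for the archiving use case being discussed:

<cwebp from command line>: cwebp -lossless <yourimage.png> -o <yourimage.webp>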

Wayne Morellini
03-24-2017, 11:32 PM
.........., some of that is like what I was suggesting in the elphel cinema camera projects before I gave up on them; that was one of the things that caused me to give up. I wasn't well enough to get into it, and one of the guys gave it a try but couldn't get it to work. Thank you Mr Google, and others that proved me right. It wasn't that it was going to be the best compression, but lower-processing, easier compression.

I had a dispute with a top engineer I went to university with in the day. I claimed I could get better compression than some law limiting compression he was quoting (like a follower). It was through prediction in a certain way.

We are talking about a lot bigger differences Elsie, even compression to less than 0.1%. I actually am one of the people that did reinvent the wheel, so don't go throwing that one around me too liberally.

I will review this as an off the shelf solution for a software recorder (as it will take considerable time to write and tune codec designs, usually).

Wayne Morellini
03-24-2017, 11:53 PM
What Google and web standards have been interested in is free open standards with no licensing. This is likely another attempt at a codec using free IP to do what an existing one does.

Now, Jpeg has jumped in quality; at lossless I don't know, but it wasn't a top lossless performer anyway, so this may not be a great thing to compare to. Only 26% smaller is not as good as 74% smaller; that would be great.

I'm happy to shift product with a competent codec, rather than shift with one of mine without protection. Thanks for the heads up.

konrad grant
03-25-2017, 08:14 PM
elsie .. its just not a good wheel. i intended to spend 6 months creating something with better compression ratios .. as i had a bit of space .. but my initial prototype did that after 3 days working on it .. to my enormous surprise .. using "very old algorithms" .. (improvements have been small since then .. but i'm collecting statistics .. getting ready to test some more advanced/complex ideas ..)

.. the openjpeg2000 encoder project is now one of the few reference jpeg2000 encoders .. its taken years , and two google summer of code pushes to get it there (ie, massive support)

..but my files compress 56% smaller on average .. without the complexity.. so i'm personally happy to leave google to design the next generation of internet bloating formats .. whilst i help you make bit-for-bit backups of material that's important to you , that takes up less space

check out the free lossless image format, elsie .. that is currently the cutting edge of lossless encoding imo .. but is slow .. as it uses machine learning. it is built into imagemagick already, but may not be finalised yet .. one of the designers of that, jon sneyers, emailed me the other day after i sent him a linkedin message .. not had time to quiz him about its status yet. my compression system uses imagemagick to grab images, and it can also encode into any format imagemagick supports. in the meanwhile you can try the format by executing the command below. its very impressive on large images in terms of compression ratios

<imagemagick from command line>: convert <yourimage.tif etc> <yourimage.FLIF>

@wayne : the recent improvements to jpeg are just changes to the DCT coefficients from what ive read . jpeg lossless uses a completely different encoding system, so if my assumptions are correct..that won't affect jpeg lossless ..which is the compression system used by DNG .. right there is the problem .. new formats using old compression routines from 1992 .. when machines had 128kb of ram ..

Wayne Morellini
03-25-2017, 10:48 PM
Sorry, I forgot to specify. Yes, JPEG 1 for lossless is not that good. By recent I meant going back 14 years. Actually, JPEG lossless was like something I was proposing that was dismissed, but there you have it: somebody else proved it, in an industry standard no less. You'd think some of these people who are supposed to know this stuff must have been clean-rooming for decades to miss that one. I basically work from the ground up, pretty much clean-room, except for free stuff with any patent expired.

I did this with my OS design: I got texts from the University of South Australia that were rather old, read them, reinvented and invented new things, and had effectively covered the uni material by the time I started uni (but still with old texts and tech). I got the Department of Defense published standards paper on security management to read up on some audited, managed systems process, but never got to read it, and had done my own process anyway. This whole Mantle/Metal API thing is like what I was designing in the OS from the beginning. Java/Oak, Taos and my simple VOS all virtually started the same year. Only one wasn't close to the funding markets and didn't receive funding. Java was the least efficient design, and Taos undoubtedly the second most efficient, but it blew Java away. In fact, the leading PC Java/JavaScript (a blur now) engine was a Taos-derived product from them, one which others tried to imitate to get more performance.

konrad grant
03-27-2017, 01:02 AM

i'm confused by java/javascript .. they are not connected in any way, other than being programming languages .. they share no common ancestor or development, and are not similar in any way ..other than the misleading name ..

from oracle:

How is JavaScript different from Java?
The JavaScript programming language, developed by Netscape, Inc., is not part of the Java platform.
JavaScript does not create applets or stand-alone applications. In its most common form, JavaScript resides inside HTML documents, and can provide levels of interactivity to web pages that are not achievable with simple HTML.
Key differences between Java and JavaScript:
Java is an OOP programming language while Java Script is an OOP scripting language.
Java creates applications that run in a virtual machine or browser while JavaScript code is run on a browser only.
Java code needs to be compiled while JavaScript code are all in text.
They require different plug-ins.

Wayne Morellini
03-27-2017, 07:24 AM
That is old times. It is not right to say they are not connected. Originally JavaScript used Java, in the sense that it used some Java APIs. JavaScript could be compiled to Java bytecode and run. Things have diverged (and the web people hate plugins now, which is a pity). It is correct to say the language itself is not the same. Anyway, you are not interested in web services, so it may not matter for your purposes. (I also need to look this all up to make sure my memory is not failing me.)

However, things have changed as desktop libraries were released for independent JavaScript desktop applications. Now JavaScript is to receive its own virtual binary code. It is no longer that Java and JavaScript are separate; it is that most platforms support JavaScript, and JavaScript has won, in my opinion, for desktop and web purposes. You can develop to a JavaScript API (even in C) and get a portable application to future host environments as JavaScript support is ported to them. Of course the world is messier than that, but it is a good sentiment.

I am delighted. I was getting ready to learn JavaScript and make my own virtual binary code on it that used the JavaScript API. Then I found out the community was preparing to do the same. Bonus: I can just use a subset of the JavaScript API with that binary for portability, in an engine written for whatever mobile phone chipset, for low-powered gadgets. That cuts out a lot of initial work. I can do my own binary some other time.

I was planning to propose to the Linux community to write the majority of applications to the JavaScript platform in that binary, for portability between OSes, so any OS gets an instant library by porting JavaScript. Except for applications that require extra performance, which are done natively as much as needed. Most applications don't require the most performance, so, except for games, 80%+ of applications could just use JavaScript. This would remove some of the biggest obstacles in computing.

konrad grant
03-27-2017, 11:41 AM

according to my friend who gets paid a "LOT" of money to write/maintain code for a large bank from home.. but only actually has to write / fix someone else's code about once every 3 months by the sounds of it.. repeated: java is to javascript what cars are to carpets ..absolutely no connection according to him.. nil..zero .. but as you say, javascript may have been implemented through java/plugins etc .. yes, i dislike java..except for its arbitrary precision maths library..and javascript ..wtf is that?..hang on..don't tell me!!.. i'm happy to be wrong .. as i've not formed my own opinion ;)

i created my own version of basic when i was 17.. and it was .. back when screens were green ..nowadays i collect books on compiler design.. just in case ..
..i also invented multithreading .. before the days of windows/protected mode .. i believe the old dos timer interrupt i used to hook into was $1C .. pc plus magazine paid me £75 in 1989 .. those were the days .. when porn was dithered and only viewable by standing 15ft from your monitor and squinting.. before computer video formats existed ..and came as an executable .. and was watched and enjoyed only because it was a technical achievement ..

ps, i'm finding j2k and jpeg-xr are quite closely matched .. but j2k is still winning in my 24bit compression tests .. really need to find a 48bit xr encoder .. found some source code ..might need to actually try and compile one myself .. with a c compiler ..urrr!!

Wayne Morellini
03-27-2017, 04:22 PM
He gets paid how often? Again, there is Java the language, and then the Java system and API. I could make a language, whatever, "based" on a specification of a subset of the Java system; it might be based on Java, but not on the Java language.

As he is a code maintainer, he might like to research the JavaScript developments of the last 5-10 years, where a lot of the action was. I wasn't up to posting links on it late last night, or at the moment, but a lot of the older developments were rolled into Firefox OS. JavaScript was knocked together well by somebody who knocked languages together quickly in those days. It didn't have Java in the name originally; that was added to popularise it. They worked out something with the then owner of Java to use the name. Best to think of it as an old script language for the Java system rather than for the Java language. Personally, I think they should have tried to make it a syntax subset and true subset of the Java language too, so that people could write code in the subset between the two and still retain a simple implementation; that would have resolved a number of things. But they didn't. Anyway, forget about the Java thing; the game is now more JavaScript as far as cross-platform support goes.

Lol, the Tandy color model :). I guess you mean CP/M, the IBM PC XT, a Hercules card, or the Amstrad PCW (one of my favorite computers. You can learn a lot about design from that thing; the addition of two more shades made a good difference to the abstraction of the graphics, versus the Mac).

Oh good, an old-timer around here with normalised function. No wonder you are easy to talk with. All these new-found kids with their attitudes and Twitter. I mean, what is that? You are practically calling yourself a twit. In our day you actually had to keep computers near a power point, and not one of those new-found Microsoft things....

Neither one of us invented multithreading, I am afraid; it has probably been going on since before we were born. When I did my revolutionary OS technology, and got to find out about mainframes and minicomputers, I found my design had broad elements of them. No surprise, as I was greatly influenced by the notion of the Commodore serial bus having independently running hardware attached to it (whereas normally on the PC, drives were heavily dependent on the system; IDE followed this trend). The Commodore serial bus was based on an HP serial bus, which would have been influenced by earlier mainframe/mini design philosophies. So it is inevitable that we both came to somewhat similar optimal design solutions (except there was only one of me, not the 10k to 1m people working towards the mainframes/minis).

No, I'm impressed. Not like many around here who run/use/maintain somebody else's ideas and think they know it all, without much independent thought beyond input.

Wayne Morellini
03-27-2017, 07:18 PM
From the Wikipedia JavaScript page (or what I now call "Doctor Google" of Google University). Thankfully I didn't have to spend time looking around the web for this.

https://en.m.wikipedia.org/wiki/JavaScript


Although there are strong outward similarities between JavaScript and Java, including language name, syntax, and respective standard libraries, the two are distinct languages and differ greatly in their design. JavaScript was influenced by programming languages such as Self and Scheme.

JavaScript is also used in environments that are not Web-based, such as PDF documents, site-specific browsers, and desktop widgets. Newer and faster JavaScript virtual machines (VMs) and platforms built upon them have also increased the popularity of JavaScript for server-side Web applications. On the client side, developers have traditionally implemented JavaScript as an interpreted language, but more recent browsers perform just-in-time compilation. Programmers also use JavaScript in video-game development, in crafting desktop and mobile applications, and in server-side network programming with run-time environments such as Node.js


Netscape Communications realized that the Web needed to become more dynamic. Marc Andreessen, the founder of the company believed that HTML needed a "glue language" that was easy to use by Web designers and part-time programmers to assemble components such as images and plugins, where the code could be written directly in the Web page markup. In 1995, the company recruited Brendan Eich with the goal of embedding the Scheme programming language into its Netscape Navigator. Before he could get started, Netscape Communications collaborated with Sun Microsystems to include in Netscape Navigator Sun's more static programming language Java, in order to compete with Microsoft for user adoption of Web technologies and platforms.[11] Netscape Communications then decided that the scripting language they wanted to create would complement Java and should have a similar syntax, which excluded adopting other languages such as Perl, Python, TCL, or Scheme. To defend the idea of JavaScript against competing proposals, the company needed a prototype. Eich wrote one in 10 days, in May 1995.

Although it was developed under the name Mocha, the language was officially called LiveScript when it first shipped in beta releases of Netscape Navigator 2.0 in September 1995, but it was renamed JavaScript[2] when it was deployed in the Netscape Navigator 2.0 beta 3 in December.[12] The final choice of name caused confusion, giving the impression that the language was a spin-off of the Java programming language, and the choice has been characterized as a marketing ploy by Netscape to give JavaScript the cachet of what was then the hot new Web programming language.


A common misconception is that JavaScript is similar or closely related to Java. It is true that both have a C-like syntax (the C language being their most immediate common ancestor language). They also are both typically sandboxed (when used inside a browser), and JavaScript was designed with Java's syntax and standard library in mind. In particular, all Java keywords were reserved in original JavaScript, JavaScript's standard library follows Java's naming conventions, and JavaScript's Math and Date objects are based on classes from Java 1.0,[117] but the similarities end there.


In January 2009, the CommonJS project was founded with the goal of specifying a common standard library mainly for JavaScript development outside the browser.[26]


Java introduced the javax.script package in version 6 that includes a JavaScript implementation based on Mozilla Rhino. Thus, Java applications can host scripts that access the application's variables and objects, much like Web browsers host scripts that access a webpage's Document Object Model (DOM).[86][87]

Firefox OS is now officially discontinued at Mozilla, but development continued at Panasonic, and supposedly at another private company by a former Mozilla developer of it, though I have not seen a current reference for a while. This is the biggest shame. Mozilla, though, has been behind on some things.

https://en.m.wikipedia.org/wiki/Firefox_OS

https://en.m.wikipedia.org/wiki/Google_Native_Client

https://en.m.wikipedia.org/wiki/WebAssembly

https://en.m.wikipedia.org/wiki/Asm.js

Note: I don't agree with their use of "syntax" there; I am using the notion in a deeper sense than that.

I think I read that CommonJS is now being demoted.

Eich apparently put languages together readily, so he had the experience and, I would think, a code tool set (not likely written from scratch, but put together and conformed to the present JavaScript syntax), which is still a big feat in 10 days (perhaps with some previous contemplation, planning and pre-work).

I was so sick and tired, years ago, of a certain long-running project that I was thinking of doing something like my own simple OS and codec over 6 months, and then, with some research and pre-planning, doing the camera in 6 weeks (even 2 weeks, as most of the work is done in the OS and codec, and the camera becomes little more than setting up a rudimentary program and GUI controls) and watching the look on their faces. If you have a good, fast (unlike me) programmer experienced in the area, or at least in realtime embedded programming, who knows his stuff, they should be able to do a simple camera-recorder-like app using APIs in 2-6 weeks, not 18 months+, I think, using an existing camera recording code base (hard to remember; there were a few projects floating around with a professional programmer, while not publicly acknowledging the advice on realtime embedded programming and setting up Windows for it). I had been advised by a machine vision company, which set their cameras up for use in cinematic filming, that it was a simple "trick" to achieve reliable recording, and, not knowing the details of setting up a realtime embedded environment on Windows (which had a history), this was my estimation too.

I knew people set Windows up for realtime use, and even replaced the core with a realtime nucleus. Anyway, Windows advanced; they actually adopted the TRON realtime nucleus. TRON was a Japanese Windows-challenger project that saw widespread use in consumer electronics and other embedded spaces, and was very superior; the home edition didn't see so much success over the years. Unfortunately, things like Java, DOS, Windows, JavaScript and C suck the air out of the room, away from better solutions.

As you can see, the Oracle statement doesn't fully cover everything; it is making the two more clearly separated for people to understand.

For me it is simple business economics: JavaScript is simply the most widespread common target to develop for, and if it became more standardised for standalone use across embedded and desktop spaces, it could be a good thing.

Frankly, if they had compatibly sub-setted Java in the first place, it would have been much better for the web and the industry. I think JavaScript came off an embedded Java specification, but I can't remember. They simply could have cored out Java and added a functional mode (Java 8 received functional features), preserving a binary form alongside scripting, as in the later WebAssembly; that would have been great. Hopefully WebAssembly pushes things in a better direction than standard Java bytecode, though. It's interesting that, after all the talk about the original Android virtual code, it proved worse than bytecode in various tests. The new ART environment did better, and now follows what I wanted: compile the virtual code to real local binary code once, and use that. They also allow native binary code segments in the package, so the specific processor type of the target can be used, which allows for efficient handcrafted machine code and removes most of Android's penalty for high-performance applications (obviously not all of it), like games. If only I could run a WebAssembly binary version of Resolve. :)

Something like JavaScript's WebAssembly opens up the adoption of new, better technologies and systems, as they only need to support its codebase to have widespread usability from the start.

Anyway, it is nothing you need, as you only have to target PC, Linux and Mac OS, and sell code to picture archival companies.

Nick Timmons
03-28-2017, 11:26 AM
Conrad, just...wow

konrad grant
03-28-2017, 01:15 PM
Conrad, just...wow

which bit is confusing you ? ..jpeg being computationally crap .. or javascript being systematically sh*t ? ..or my research into image compression, which has made me think wow nearly every day this week.. give us a clue .. wow is so vague ..and doesn't really contribute much ..

Wayne Morellini
03-28-2017, 05:02 PM
Yes, Nick, your post is about the only confusing thing here at the moment.