|
Dejay
|
2024-12-20 03:49:23
|
So sequential loading or partial loading would actually be desired
|
|
|
_wb_
|
|
AccessViolation_
I think you might be able to make an encoder that does this by using a VarDCT layer and one layer for the residuals and blending them in some way?
|
|
2024-12-20 03:50:00
|
This is totally possible but it will compress worse than just doing lossless compression. The entropy in the residuals will likely be at least as large as the original in the image itself (even if the amplitude of the signal will obviously be much lower), so you can just as well use normal lossless compression, possibly with a lossy preview frame.
|
|
|
AccessViolation_
|
2024-12-20 03:51:50
|
Interesting. I knew the entropy/variance would be about as large, but because of quantization I assumed lower amplitude meant fewer possible values, and thus fewer bits needed to represent each of them
|
|
|
_wb_
|
|
CrushedAsian255
This wouldn't work as VarDCT is inexact
|
|
2024-12-20 03:54:57
|
It is exact enough to not cause off-by-ones when doing 8-10 bit, though you may have to disable some things like gaborish/epf/xyb to get enough precision.
|
|
2024-12-20 03:57:57
|
Lower amplitude can help a little bit to improve things, but most entropy is in the least significant bits anyway (the MSBs are what can be predicted best), and remember you're basically combining the entropy of the image itself with the entropy of the compression artifacts (which come "for free" in VarDCT but are costly for lossless compression techniques)
|
|
|
I don't think it's possible at all in the API. Best you could do is extract the codestream and insert it at the correct point
|
|
2024-12-20 04:03:27
|
There's no API for codestream manipulation, only one for pixels -> codestream and one for jpeg bitstream -> codestream. It would be interesting to add some API for jxl-to-jxl transcoding, with ways to do (partial) codestream copy or to do entropy-reencoding without modifying the data itself. But this kind of thing is quite a lot of implementation effort while the use cases where this would actually get used are relatively niche, so I doubt it will become a priority to add something like that anytime soon.
|
|
|
AccessViolation_
|
|
_wb_
Lower amplitude can help a little bit to improve things, but most entropy is in the least significant bits anyway (the MSBs are what can be predicted best), and remember you're basically combining the entropy of the image itself with the entropy of the compression artifacts (which come "for free" in VarDCT but are costly for lossless compression techniques)
|
|
2024-12-20 04:10:40
|
Right, that makes sense. So this whole thing is basically an all-round poor approach: it's bad for images with lots of detail, since you have lots of variance to deal with regardless, and also for simpler, smoother images, because those would be very easy to predict for a special-purpose lossless encoder and the DCT would just add more artifacts which end up in the residuals
|
|
2024-12-20 04:12:55
|
I guess the only reason AVIF went with this is because it's super simple to add on if you already have a whole lossy stack
|
|
|
_wb_
|
2024-12-20 04:17:19
|
It might be a good approach for near-lossless compression though: say you want to tolerate a fixed amount of error per pixel. Then a "lossless" layer could be added to correct the lossy layer where needed, but it would be a relatively sparse layer, since in most places the quality of the lossy layer would be high enough to stay within the threshold (it's just hard to guarantee that for every single pixel position)
|
|
|
Traneptora
|
|
Insanity
Mainly JPEG XL; trying to get back an EXR or a PNG with HDR from an HDR JXL in most software is a dead end
|
|
2024-12-20 10:45:00
|
try viewing it with mpv
|
|
|
AccessViolation_
Just out of curiosity, is there a mode in JPEG XL that will VarDCT encode the source image, then store the residuals of the source image minus the compressed image, and modular mode compress those, effectively giving you a VarDCT-based lossless mode?
I had a thought that you could basically turn any existing lossy image or video compression format into a lossless format by just entropy coding the residuals after lossy compression for every frame
|
|
2024-12-20 10:46:23
|
this is theoretically possible but it saves essentially nothing over just modularly encoding the image in the first place
|
|
2024-12-20 10:46:35
|
you're better off just doing progressive squeeze modular
|
|
|
AccessViolation_
Interesting. I knew the entropy/variance would be about as large, but because of quantization I assumed lower amplitude meant fewer possible values, and thus fewer bits needed to represent each of them
|
|
2024-12-20 10:49:30
|
Most of the entropy is in the lower-order bits, rather than the higher-order bits, because the way something like JXL works is that it predicts what the next pixel will be and then records the difference between the actual pixel and the prediction. The predictive nature (kinda like a delta-coded image) essentially makes it so the higher-order bits only need to be sent once
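To make that concrete, here's a toy sketch of the idea (plain C++, not libjxl code): predict each sample from its left neighbour and store only the difference. On smooth data the residuals sit near zero, so the high-order bits are effectively implied by the prediction and only the low-order "surprise" bits carry new information.
```cpp
#include <cstdio>
#include <vector>

int main() {
  // A smooth-ish scanline: values drift slowly, as in most photographic rows.
  std::vector<int> row = {118, 119, 121, 122, 122, 124, 127, 129, 128, 130};

  // "W" prediction: predict each sample from its left neighbour and store the
  // difference (the residual). The first sample has no left neighbour, so it
  // is stored as-is.
  std::printf("sample  prediction  residual\n");
  int left = 0;
  for (int x : row) {
    int prediction = left;
    int residual = x - prediction;
    std::printf("%6d  %10d  %8d\n", x, prediction, residual);
    left = x;  // the decoder reconstructs x = prediction + residual, so it has
               // the same `left` value available for the next sample
  }
  // The residuals are small signed numbers: the high-order bits of each sample
  // are implied by the prediction, so only the low-order bits need coding.
}
```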
|
|
|
AccessViolation_
|
2024-12-20 10:57:22
|
I should look into how predictors work in more depth. All I know is that they're similar in purpose to 'filters' in PNG. It's intriguing
|
|
|
Traneptora
Most of the entropy is in the lower-order bits, rather than the higher-order bits, because the way something like JXL works is that it predicts what the next pixel will be and then records the difference between the actual pixel and the prediction. The predictive nature (kinda like a delta-coded image) essentially makes it so the higher-order bits only need to be sent once
|
|
2024-12-20 10:57:53
|
Appreciate the info!
|
|
|
A homosapien
|
2024-12-20 11:02:15
|
Wasn't _wb_ writing an article about how predictors work?
|
|
|
AccessViolation_
|
2024-12-20 11:02:53
|
This covers predictors
<https://cloudinary.com/blog/jpeg-xls-modular-mode-explained>
|
|
|
CrushedAsian255
|
2024-12-20 11:03:06
|
Cherry Blossom 🌸
|
|
|
AccessViolation_
|
2024-12-20 11:03:28
|
I've only skimmed it, I still wanna read it
|
|
|
jonnyawsom3
|
2024-12-21 12:18:38
|
And this https://discord.com/channels/794206087879852103/824000991891554375/1309512822979559436
|
|
2024-12-21 12:18:39
|
https://cdn.discordapp.com/attachments/824000991891554375/1309822385343696936/Screenshot_2024-11-23_at_11.06.16_AM.png?ex=67673be7&is=6765ea67&hm=01e488824f091a549699ef225d587885134d37ed893f75a7a065b86399679ba0&
|
|
|
Demiurge
|
2024-12-21 05:22:33
|
Average light or average value?
|
|
|
jonnyawsom3
|
2024-12-21 07:13:43
|
> W: value of pixel on the left
> N: value of pixel above
> NW: value of topleft pixel
> NE: value of topright pixel
> WW: value of pixel to the left of the pixel on the left
> Select: predictor from lossless WebP
> Gradient: W+N-NW, clamped to min(W,N)..max(W,N)
> Weighted: weighted sum of 4 self-correcting subpredictors based on their past performance (warning: not clamped so can get out of range)
> AvgW+N,AvgW+NW,AvgN+NW,AvgN+NE: average of two pixel values
> AvgAll: weighted sum of various pixels: (6 * top - 2 * toptop + 7 * left + 1 * leftleft + 1 * toprightright + 3 * topright + 8) / 16
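For concreteness, two of those transcribed into plain code, using the same neighbour names as the list (this is just a literal reading of the formulas above, not code from libjxl):
```cpp
#include <algorithm>
#include <cstdio>

// Gradient: W + N - NW, clamped to min(W,N)..max(W,N), as in the list above.
int GradientPredictor(int W, int N, int NW) {
  int g = W + N - NW;
  return std::clamp(g, std::min(W, N), std::max(W, N));
}

// AvgAll, transcribing "(6*top - 2*toptop + 7*left + leftleft + toprightright
// + 3*topright + 8) / 16" with the directional names used elsewhere in the
// chat (N = top, NN = toptop, W = left, WW = leftleft, NEE = toprightright,
// NE = topright).
int AvgAllPredictor(int N, int NN, int W, int WW, int NEE, int NE) {
  return (6 * N - 2 * NN + 7 * W + WW + NEE + 3 * NE + 8) / 16;
}

int main() {
  // Arbitrary neighbour values, just to show the two predictors in action.
  int W = 100, N = 104, NW = 101, NN = 103, WW = 99, NE = 105, NEE = 107;
  std::printf("Gradient: %d\n", GradientPredictor(W, N, NW));
  std::printf("AvgAll:   %d\n", AvgAllPredictor(N, NN, W, WW, NEE, NE));
}
```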
|
|
|
_wb_
|
2024-12-21 09:44:13
|
Modular predictors work on integer sample values numerically, without caring about the photometric interpretation.
|
|
|
CrushedAsian255
|
|
_wb_
Modular predictors work on integer sample values numerically, without caring about the photometric interpretation.
|
|
2024-12-21 11:07:16
|
Are averages rounded up or down
|
|
|
_wb_
|
2024-12-22 09:38:45
|
Towards zero, like (a+b)/2 in C.
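In other words, plain truncating integer division; the difference from flooring only shows up for negative sums, e.g.:
```cpp
#include <cstdio>

int main() {
  // (a + b) / 2 in C/C++ truncates toward zero, so negative sums round "up":
  std::printf("(3 + 4) / 2   = %d\n", (3 + 4) / 2);    // 3
  std::printf("(-3 + -4) / 2 = %d\n", (-3 + -4) / 2);  // -3, not -4 (no flooring)
}
```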
|
|
|
AccessViolation_
|
2024-12-22 10:34:56
|
Since predictors work with scan line data and can't predict from the bottom to top, and I imagine there can very easily be cases where some bottom-to-top prediction just happens to be more correct, is there some coding tool to 90-degree-rotate the block of pixels some preferred number of times and give that to the predictor, and then rotate it back during decoding?
|
|
|
jonnyawsom3
|
|
AccessViolation_
Since predictors work with scan line data and can't predict from the bottom to top, and I imagine there can very easily be cases where some bottom-to-top prediction just happens to be more correct, is there some coding tool to 90-degree-rotate the block of pixels some preferred number of times and give that to the predictor, and then rotate it back during decoding?
|
|
2024-12-22 10:39:39
|
It's not scanline, it's per group. If you look at the list I copied from the JXL art site, you can see we have N and NE
|
|
|
CrushedAsian255
|
|
AccessViolation_
Since predictors work with scan line data and can't predict from the bottom to top, and I imagine there can very easily be cases where some bottom-to-top prediction just happens to be more correct, is there some coding tool to 90-degree-rotate the block of pixels some preferred number of times and give that to the predictor, and then rotate it back during decoding?
|
|
2024-12-22 10:40:09
|
What situation would bottom be able to predict top more than top being able to predict the bottom?
|
|
|
jonnyawsom3
|
2024-12-22 10:40:16
|
But what you said still applies to upwards prediction, from S to N
|
|
|
AccessViolation_
|
2024-12-22 10:41:32
|
Oh, I thought there was no upwards prediction
|
|
|
CrushedAsian255
|
|
AccessViolation_
Oh, I thought there was no upwards prediction
|
|
2024-12-22 10:42:14
|
There is none
|
|
|
jonnyawsom3
|
2024-12-22 10:42:23
|
Sorry, that's what I mean. We have N and NE predictions (Going across scanlines) but no S
|
|
|
AccessViolation_
|
|
CrushedAsian255
What situation would bottom be able to predict top more than top being able to predict the bottom?
|
|
2024-12-22 10:49:05
|
No right, it wouldn't; I think I had the wrong mental model for predictors. Those sample images made me think that the area the predictor filled in is basically taken as a whole and subtracted from the image data under it, and those are the residuals. But I think those images were created just to visually demonstrate the 'pattern' of the different predictors. Because in those images it looks like it's making a prediction, and then using only that prediction as the new pixel value for the prediction step of the next pixel, while in practice I assume it would use the real pixel value (previous prediction + residual) for the next prediction.
The reason I thought bottom-to-top could be different from top-to-bottom is that if the image is mostly blue but has red borders at the top and left, all predictors would say that whole area is red, while if you rotated the image 180 degrees it would predict more blue and be more correct
|
|
|
CrushedAsian255
|
|
AccessViolation_
No right, it wouldn't; I think I had the wrong mental model for predictors. Those sample images made me think that the area the predictor filled in is basically taken as a whole and subtracted from the image data under it, and those are the residuals. But I think those images were created just to visually demonstrate the 'pattern' of the different predictors. Because in those images it looks like it's making a prediction, and then using only that prediction as the new pixel value for the prediction step of the next pixel, while in practice I assume it would use the real pixel value (previous prediction + residual) for the next prediction.
The reason I thought bottom-to-top could be different from top-to-bottom is that if the image is mostly blue but has red borders at the top and left, all predictors would say that whole area is red, while if you rotated the image 180 degrees it would predict more blue and be more correct
|
|
2024-12-22 10:50:01
|
The residuals would have to override and say blue once and then the predictors should keep working from there
|
|
2024-12-22 10:50:29
|
Or you could use x and y MA decision
|
|
|
AccessViolation_
No right, it wouldn't; I think I had the wrong mental model for predictors. Those sample images made me think that the area the predictor filled in is basically taken as a whole and subtracted from the image data under it, and those are the residuals. But I think those images were created just to visually demonstrate the 'pattern' of the different predictors. Because in those images it looks like it's making a prediction, and then using only that prediction as the new pixel value for the prediction step of the next pixel, while in practice I assume it would use the real pixel value (previous prediction + residual) for the next prediction.
The reason I thought bottom-to-top could be different from top-to-bottom is that if the image is mostly blue but has red borders at the top and left, all predictors would say that whole area is red, while if you rotated the image 180 degrees it would predict more blue and be more correct
|
|
2024-12-22 10:51:08
|
Either way, the residuals would have to course correct one place or another
|
|
|
AccessViolation_
|
2024-12-22 10:52:28
|
Right yeah, so they would only have to course correct once and then the predictors would predict the correct (blue) values after that. In contrast to those sample images, where there is no 'real' image data in the predicted area in question so the predictions just cascade without ever being corrected by residuals
|
|
2024-12-22 10:52:53
|
Is that right?
|
|
|
CrushedAsian255
|
2024-12-22 10:53:10
|
Correct, the residuals usually have to course correct even with predictors that are pretty good
|
|
2024-12-22 10:53:22
|
I guess maybe excluding the weighted predictor
|
|
2024-12-22 10:53:36
|
It’s pretty good at correcting itself
|
|
|
AccessViolation_
|
2024-12-22 10:54:03
|
I see
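A tiny sketch of that corrected mental model (toy code, not libjxl): the decoder's next prediction is based on the already-reconstructed value (prediction + residual), so one large residual at the red-to-blue transition is enough to put the predictor back on track.
```cpp
#include <cstdio>
#include <vector>

int main() {
  // Toy 1-D "image": a red-ish start (200) followed by blue-ish values (30),
  // echoing the red-border / blue-interior example above.
  std::vector<int> actual = {200, 200, 30, 30, 30, 30, 30, 30};

  // Encoder side: residual = actual - prediction, where the prediction is the
  // previous *reconstructed* sample (identical to `actual` here, lossless).
  std::vector<int> residuals(actual.size());
  int prev = 0;
  for (size_t i = 0; i < actual.size(); ++i) {
    residuals[i] = actual[i] - prev;
    prev = actual[i];
  }

  // Decoder side: each prediction uses the already-decoded value
  // (prediction + residual), not the previous prediction. So the jump from
  // 200 to 30 costs one big residual, after which predictions are right again.
  prev = 0;
  for (size_t i = 0; i < actual.size(); ++i) {
    int prediction = prev;
    int decoded = prediction + residuals[i];
    std::printf("prediction=%3d residual=%5d decoded=%3d\n",
                prediction, residuals[i], decoded);
    prev = decoded;
  }
}
```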
|
|
|
jonnyawsom3
|
2024-12-22 10:54:28
|
https://discord.com/channels/794206087879852103/824000991891554375/1311405735862734858
|
|
|
AccessViolation_
|
|
https://cdn.discordapp.com/attachments/824000991891554375/1309822385343696936/Screenshot_2024-11-23_at_11.06.16_AM.png?ex=67673be7&is=6765ea67&hm=01e488824f091a549699ef225d587885134d37ed893f75a7a065b86399679ba0&
|
|
2024-12-22 10:57:11
|
Now I kind of want to see a version of this where there is real image data in the predicted segment, and the visualizations are replaced with heat maps showing how large the residuals are.
I actually considered making a tool to do this at one point, similar to jxlatte, which could be used to 'debug' the efficiency of the encoder. It could for example show you a heatmap over the whole image, with the estimated bits per pixel required for a given image area, show you predictor effectiveness, patches etc. But that's like way too much to take on for me
|
|
2024-12-22 11:00:54
|
My plan was to patch an existing decoder to make it record this data while decoding, but it would still be quite the task
|
|
|
jonnyawsom3
|
2024-12-22 11:01:17
|
Oh that already exists.. Or some of it... Used to work...
|
|
|
CrushedAsian255
|
|
Oh that already exists.. Or some of it... Used to work...
|
|
2024-12-22 11:02:18
|
Benchmark XL mentioned?
|
|
|
AccessViolation_
|
|
CrushedAsian255
I guess maybe excluding the weighted predictor
|
|
2024-12-22 11:08:23
|
Could you briefly explain what the weighted/self-correcting predictor is, or how it works - or link me to some source describing it?
I've heard those terms a few times, I assume they're not just referring to being corrected by the residuals like we just talked about
|
|
|
CrushedAsian255
|
|
AccessViolation_
Could you briefly explain what the weighted/self-correcting predictor is, or how it works - or link me to some source describing it?
I've heard those terms a few times, I assume they're not just referring to being corrected by the residuals like we just talked about
|
|
2024-12-22 11:08:51
|
I don’t fully understand myself, but it’s some fancy predictor that does some fancy stuff
|
|
2024-12-22 11:09:12
|
It uses the residuals to adjust its predictions to be more correct or something
|
|
2024-12-22 11:09:50
|
<@794205442175402004> can explain it way better
|
|
|
AccessViolation_
|
2024-12-22 11:09:57
|
Oh I found an overview in the article on modular mode
|
|
2024-12-22 11:10:07
|
*which I still haven't read*
|
|
2024-12-22 11:10:54
|
Okay I'm just going to turn off all distractions and read the god damn thing I've been wanting to read it for like over a month now
|
|
|
jonnyawsom3
|
|
AccessViolation_
Oh I found an overview in the article on modular mode
|
|
2024-12-22 11:16:08
|
The modular mode article describes the modular mode. Useful when asking questions about the modular predictors in the modular mode
|
|
2024-12-22 11:16:14
|
:>
|
|
|
AccessViolation_
|
2024-12-22 11:17:25
|
I am forbidding myself from saying anything in this server until I've read it
|
|
|
jonnyawsom3
|
|
CrushedAsian255
Are averages rounded up or down
|
|
2024-12-22 11:17:33
|
Also here's the PR for that https://github.com/libjxl/libjxl/pull/3329
|
|
|
CrushedAsian255
Benchmark XL mentioned?
|
|
2024-12-22 11:22:16
|
Yeah, but I can't get it to write any debug output on Windows for the last year or so
|
|
|
AccessViolation_
|
2024-12-22 05:02:49
|
I finished reading the Modular mode blog, it was a fun and interesting read
|
|
2024-12-22 05:03:13
|
My main question is how did they cook up the `(6 * N − 2 * NN + 7 * W + WW + NEE + 3 * NE + 8) / 16` predictor lmao
|
|
2024-12-22 05:05:29
|
It's so very specific
|
|
2024-12-22 05:17:32
|
Oh nvm it's just `AvgAll` <:galaxybrain:821831336372338729>
|
|
2024-12-22 05:19:43
|
Still doesn't clear up why the weird constants, though
|
|
2024-12-22 05:59:50
|
ah it's weighted. when I first saw it I thought it was for some very specific pattern that's somehow common in a lot of images
|
|
|
jonnyawsom3
|
|
AccessViolation_
My main question is how did they cook up the `(6 * N − 2 * NN + 7 * W + WW + NEE + 3 * NE + 8) / 16` predictor lmao
|
|
2024-12-22 06:03:45
|
IIRC it's what Jyrki found worked best after some benchmarking for Delta Palette
|
|
2024-12-22 06:04:21
|
Don't actually know *why* it's the best, but it gave the best results so it got added to the list
|
|
|
lonjil
|
2024-12-22 10:03:14
|
can you even call it a weighted average when there's a bare + 8 in there?
|
|
|
|
veluca
|
2024-12-22 10:28:25
|
that's just rounding
|
|
2024-12-22 10:28:42
|
but the -2 is more interesting for an average 😛
|
|
|
AccessViolation_
|
2024-12-22 10:40:10
|
> There are four different “sub-predictors”, which are parameterized (the constants in their formula can be signaled). One of them is just a simple predictor (W + NE – N), but the three others have terms in them that make them ‘self-correct’: the prediction algorithm keeps track of the prediction errors in the previous row of the image, and these errors are then fed back into the new predictions. This feedback loop causes the predictor to automatically correct its past mistakes, to some extent
<https://cloudinary.com/blog/jpeg-xls-modular-mode-explained#the_self_correcting_predictor>
I don't really understand this part specifically:
> the prediction algorithm keeps track of the prediction errors in the previous row of the image, and these errors are then fed back into the new predictions
In which way is it "feeding back the error", is it very different from storing a residual which also 'corrects' it for the next pixels?
|
|
|
jonnyawsom3
|
|
IIRC it's what Jyrki found worked best after some benchmarking for Delta Palette
|
|
2024-12-22 10:43:10
|
I think this is where I saw that https://discord.com/channels/794206087879852103/824000991891554375/1285199764442447902
|
|
2024-12-22 10:44:09
|
Ah yeah, I just had to scroll down a bit https://discord.com/channels/794206087879852103/824000991891554375/1285206622884462613
|
|
|
CrushedAsian255
|
|
AccessViolation_
> There are four different “sub-predictors”, which are parameterized (the constants in their formula can be signaled). One of them is just a simple predictor (W + NE – N), but the three others have terms in them that make them ‘self-correct’: the prediction algorithm keeps track of the prediction errors in the previous row of the image, and these errors are then fed back into the new predictions. This feedback loop causes the predictor to automatically correct its past mistakes, to some extent
<https://cloudinary.com/blog/jpeg-xls-modular-mode-explained#the_self_correcting_predictor>
I don't really understand this part specifically:
> the prediction algorithm keeps track of the prediction errors in the previous row of the image, and these errors are then fed back into the new predictions
In which way is it "feeding back the error", is it very different from storing a residual which also 'corrects' it for the next pixels?
|
|
2024-12-22 10:56:18
|
Feeding back the error means it compares its prediction to the actual value (prediction + residual)
|
|
|
Orum
|
2024-12-22 10:57:34
|
weird, somehow cjxl gave me an invalid <:JXL:805850130203934781> image
|
|
2024-12-22 10:57:56
|
unfortunately I can't repeat it as it was from a screen capture <:SadOrange:806131742636507177>
|
|
|
CrushedAsian255
|
|
Orum
weird, somehow cjxl gave me an invalid <:JXL:805850130203934781> image
|
|
2024-12-22 10:59:06
|
maybe the file didn’t save correctly..?
|
|
|
Orum
|
2024-12-22 10:59:32
|
well it was about the size I'd expect
|
|
|
jonnyawsom3
|
2024-12-22 11:00:14
|
Do you have the file?
|
|
|
Orum
|
2024-12-22 11:00:26
|
the broken ones? I just deleted them....
|
|
2024-12-22 11:00:37
|
I couldn't decode them anyway 🤷♂️
|
|
|
AccessViolation_
|
|
CrushedAsian255
Feeding back the error means it compares its prediction to the actual value (prediction + residual)
|
|
2024-12-22 11:01:14
|
In which way does knowing its prediction residual help it do better on the next row? I know one part of it is that bad sub-predictors are weighted lower, but that's mentioned as the second way in which self-correction happens. It also sounds like the sub-predictors individually are self-correcting, and it's not clear to me how that works
|
|
|
CrushedAsian255
|
|
AccessViolation_
In which way does knowing its prediction residual help it do better on the next row? I know one part of it is that bad sub-predictors are weighted lower, but that's mentioned as the second way in which self-correction happens. It also sounds like the sub-predictors individually are self-correcting, and it's not clear to me how that works
|
|
2024-12-22 11:02:00
|
Fancy JPEG XL alien technology from the future
|
|
|
Orum
|
2024-12-22 11:04:25
|
oh, very weird, now I'm just getting corrupted output 😓
|
|
|
CrushedAsian255
|
2024-12-22 11:04:36
|
Is your disk full
|
|
|
Orum
|
|
CrushedAsian255
|
2024-12-22 11:04:58
|
Try running cjxl from cmd line
|
|
|
Orum
|
2024-12-22 11:05:22
|
I wonder if this is a hardware issue
|
|
|
AccessViolation_
|
|
AccessViolation_
> There are four different “sub-predictors”, which are parameterized (the constants in their formula can be signaled). One of them is just a simple predictor (W + NE – N), but the three others have terms in them that make them ‘self-correct’: the prediction algorithm keeps track of the prediction errors in the previous row of the image, and these errors are then fed back into the new predictions. This feedback loop causes the predictor to automatically correct its past mistakes, to some extent
<https://cloudinary.com/blog/jpeg-xls-modular-mode-explained#the_self_correcting_predictor>
I don't really understand this part specifically:
> the prediction algorithm keeps track of the prediction errors in the previous row of the image, and these errors are then fed back into the new predictions
In which way is it "feeding back the error", is it very different from storing a residual which also 'corrects' it for the next pixels?
|
|
2024-12-22 11:06:55
|
<@794205442175402004> any chance you could explain how the sub-predictors of the self-correcting predictor *themselves* are self-correcting? I understand that all sub-predictors are evaluated and ones that did badly are weighted lower, but it also sounds like they do some sort of self-correction individually and it's not clear to me how that works
|
|
|
CrushedAsian255
|
|
AccessViolation_
<@794205442175402004> any chance you could explain how the sub-predictors of the self-correcting predictor *themselves* are self-correcting? I understand that all sub-predictors are evaluated and ones that did badly are weighted lower, but it also sounds like they do some sort of self-correction individually and it's not clear to me how that works
|
|
2024-12-22 11:08:59
|
The predictors themselves reference the true error values (if I’m reading the specification correctly)
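To illustrate just that one sentence from the article quote above ("errors in the previous row are fed back into the new predictions"), here's a toy sketch with a single made-up sub-predictor; it is only an illustration of the idea, not the actual spec formulas:
```cpp
#include <cstdio>
#include <vector>

int main() {
  // Toy image with a vertical gradient: every row is the previous row + 5.
  const int kW = 6, kH = 4;
  std::vector<std::vector<int>> img(kH, std::vector<int>(kW));
  for (int y = 0; y < kH; ++y)
    for (int x = 0; x < kW; ++x) img[y][x] = 100 + 5 * y + x;

  // The sub-predictor here is just "N" (the sample above), but it also adds
  // the error it made one row earlier at the same column. The decoder knows
  // the true values (prediction + residual), so it can keep identical error
  // bookkeeping without any extra signalling.
  std::vector<int> prev_row_error(kW, 0);
  for (int y = 1; y < kH; ++y) {
    std::vector<int> cur_row_error(kW, 0);
    for (int x = 0; x < kW; ++x) {
      int n = img[y - 1][x];                  // plain "N" sub-prediction
      int corrected = n + prev_row_error[x];  // fed-back error term
      cur_row_error[x] = img[y][x] - n;       // true error of the raw sub-predictor
      std::printf("y=%d x=%d  N-pred err=%d  self-corrected err=%d\n",
                  y, x, img[y][x] - n, img[y][x] - corrected);
    }
    prev_row_error = cur_row_error;
  }
  // On this gradient the raw N predictor is off by 5 every time, while the
  // self-corrected version is exact from the second predicted row onward. In
  // the real weighted predictor there are four parameterized sub-predictors,
  // and their outputs are mixed with weights based on their past errors.
}
```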
|
|
|
Orum
|
2024-12-22 11:18:59
|
like, why is there a big blue square in the middle of this? 🤔
|
|
|
CrushedAsian255
|
2024-12-22 11:21:33
|
Umm that’s a bug
|
|
2024-12-22 11:21:39
|
A very big bug
|
|
2024-12-22 11:21:49
|
Something’s definitely going wrong
|
|
|
Orum
|
2024-12-22 11:22:04
|
yeah, I know
|
|
2024-12-22 11:26:20
|
it seems to have something to do with all those yellow boxes
|
|
2024-12-22 11:26:32
|
that look almost like a yellow line across the middle
|
|
2024-12-22 11:26:47
|
because the corruption is always in that area
|
|
2024-12-22 11:27:17
|
here's another one
|
|
|
CrushedAsian255
|
2024-12-22 11:28:48
|
Can you send the original image
|
|
|
Orum
|
2024-12-22 11:28:50
|
unfortunately I don't have a single one that came out correctly
|
|
2024-12-22 11:29:00
|
these are piped directly into cjxl
|
|
2024-12-22 11:29:12
|
I'll need to change my capture script
|
|
|
CrushedAsian255
|
|
Orum
|
2024-12-22 11:32:26
|
actually I don't even need tee
|
|
2024-12-22 11:32:38
|
I'll just have it write to a ppm and then encode it as well
|
|
|
jonnyawsom3
|
2024-12-22 11:46:30
|
5 corrupted modular groups
|
|
|
Orum
|
2024-12-22 11:54:02
|
okay, I have some ppms that look to have failed with cjxl
|
|
2024-12-22 11:54:07
|
1 min...
|
|
2024-12-23 12:00:45
|
so decode this with dwebp to ppm first, then encode with `cjxl v0.10.3 4a3b22d2 [AVX2,SSE4,SSE2]`, and the encoding options `-e 1 -d 0 --quiet --streaming_output`, and tell me if you see a corrupted image:
|
|
2024-12-23 12:01:02
|
this one won't even decode after I encode it:
|
|
2024-12-23 12:02:13
|
oof, I need to update libjxl....
|
|
2024-12-23 12:03:28
|
very curious if this is fixed on 0.10.4 or 0.11.x
|
|
|
Demiurge
|
2024-12-23 12:32:39
|
This is why it's very nice to have a lossless "verify pixels" mode where it automatically decodes and verifies while encoding... to help catch weird corner cases where it's not lossless...
|
|
|
CrushedAsian255
|
2024-12-23 12:53:17
|
im now getting nervous about my large archive
|
|
2024-12-23 12:53:29
|
im on 0.11.1
|
|
2024-12-23 12:55:05
|
after some testing: 0.11.1 has this problem as well, only happens on `-e 1`
|
|
2024-12-23 12:55:12
|
`--streaming_output` doesnt affect it
|
|
2024-12-23 01:00:22
|
I cropped the image to this, decode this to PNG and use `-d 0 -e 1` and it will generate a completely corrupted file
|
|
|
Orum
|
|
CrushedAsian255
after some testing: 0.11.1 has this problem as well, only happens on `-e 1`
|
|
2024-12-23 01:02:53
|
you mind filing a bug report, as you're on the latest version?
|
|
|
CrushedAsian255
|
|
Orum
you mind filing a bug report, as you're on the latest version?
|
|
2024-12-23 01:03:09
|
im on the latest stable version
|
|
|
Orum
|
2024-12-23 01:03:19
|
well, that's still a lot better than what I'm on
|
|
2024-12-23 01:03:53
|
I can file it if you'd prefer
|
|
|
CrushedAsian255
|
2024-12-23 01:04:07
|
im going to try to install the latest version on a Linux
|
|
|
Orum
|
2024-12-23 01:04:56
|
you mean, the latest git head?
|
|
|
CrushedAsian255
|
|
Orum
|
2024-12-23 01:05:06
|
alright, let me know how it goes
|
|
2024-12-23 01:05:43
|
the number of times Factorio has exposed problems in libjxl.... it should be given an "honorary libjxl debugging tool" award <:CatSmile:805382488293244929>
|
|
|
CrushedAsian255
|
|
jonnyawsom3
|
2024-12-23 01:08:31
|
I mean, it's a good benchmark. Synthetic layered sprites adding up to photo-like images
|
|
|
CrushedAsian255
|
2024-12-23 01:08:43
|
confirmed, still happens on `JPEG XL encoder v0.12.0 24e12baf [NEON,NEON_WITHOUT_AES]`
|
|
2024-12-23 01:09:03
|
im making an issue
|
|
|
Orum
|
2024-12-23 01:10:53
|
thanks; link me after you've made it
|
|
2024-12-23 01:11:12
|
wait, NEON.... you're on ARM?
|
|
|
CrushedAsian255
|
|
Orum
|
2024-12-23 01:12:32
|
ah, good to know, then it's at least likely to be in the non-assembly parts of the code
|
|
|
CrushedAsian255
|
2024-12-23 01:14:24
|
#4026
|
|
|
Orum
|
2024-12-23 01:14:58
|
thanks, checking it out now, and will add info if there's anything I can think of that's missing
|
|
|
CrushedAsian255
|
2024-12-23 01:15:45
|
<@179701849576833024> (cause you're online), is there any sort of debug mode build option in `cjxl`/`djxl` ?
|
|
|
jonnyawsom3
|
|
Orum
ah, good to know, then it's at least likely to be in the non-assembly parts of the code
|
|
2024-12-23 01:15:53
|
There's assembly in the code?
|
|
2024-12-23 01:16:02
|
I thought that was just AV1 and WebP
|
|
|
CrushedAsian255
|
2024-12-23 01:16:13
|
i think they mean like vector intrinsics and stuff
|
|
|
|
veluca
|
|
CrushedAsian255
<@179701849576833024> (cause you're online), is there any sort of debug mode build option in `cjxl`/`djxl` ?
|
|
2024-12-23 01:16:35
|
you could do a very verbose build by changing stuff in status.h
|
|
|
CrushedAsian255
|
|
veluca
you could do a very verbose build by changing stuff in status.h
|
|
2024-12-23 01:17:12
|
which value to change?
|
|
2024-12-23 01:17:36
|
`#define JXL_DEBUG_V_LEVEL 0`?
|
|
|
|
veluca
|
2024-12-23 01:17:58
|
https://github.com/libjxl/libjxl/blob/main/lib/jxl/base/status.h#L25
|
|
2024-12-23 01:17:59
|
yup
|
|
2024-12-23 01:18:06
|
going to sleep now though
|
|
|
CrushedAsian255
|
2024-12-23 01:18:12
|
what should i set it to? 10000?
|
|
|
|
veluca
|
2024-12-23 01:18:42
|
10 ought to be enough
|
|
2024-12-23 01:18:52
|
if you can repro this on x86 it'd be nice
|
|
|
CrushedAsian255
|
2024-12-23 01:18:59
|
i don't have x86
|
|
2024-12-23 01:19:26
|
we know it happens on x86 already though
|
|
2024-12-23 01:19:34
|
<@548366727663321098> was on x86
|
|
|
jonnyawsom3
|
2024-12-23 01:19:37
|
Actual references use the debug level up to 3, but it's all >= so 10'll work
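So, putting the above together, the edit being discussed is just bumping the constant on the linked line of lib/jxl/base/status.h before rebuilding, along the lines of:
```cpp
// lib/jxl/base/status.h (the line linked above); the default is 0.
// Raising it enables more verbose debug output; per the discussion, actual
// uses only go up to level 3, so 10 covers everything.
#define JXL_DEBUG_V_LEVEL 10
```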
|
|
|
Orum
|
2024-12-23 01:21:00
|
yeah, I just posted about it on the issue
|
|
|
jonnyawsom3
|
2024-12-23 01:22:12
|
I can't reproduce it
|
|
|
Orum
|
2024-12-23 01:22:41
|
you using the same options?
|
|
|
CrushedAsian255
|
2024-12-23 01:22:53
|
make sure youre using `-d 0 -e 1`
|
|
|
jonnyawsom3
|
2024-12-23 01:22:54
|
`-d 0 -e 1` right?
|
|
|
CrushedAsian255
|
|
Orum
|
2024-12-23 01:23:17
|
🤔 interesting
|
|
|
CrushedAsian255
|
2024-12-23 01:23:21
|
what version
|
|
|
jonnyawsom3
|
2024-12-23 01:23:23
|
```cjxl test_iamge2.png Test.jxl -d 0 -e 1
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 1]
Compressed to 95428 bytes (2.172 bpp).
667 x 527, 72.433 MP/s [72.43, 72.43], , 1 reps, 16 threads.```
|
|
2024-12-23 01:23:33
|
https://embed.moe/https://cdn.discordapp.com/attachments/794206170445119489/1320562369419673660/Test.jxl?ex=676a0d0b&is=6768bb8b&hm=4c152980131db8e08f4679d916c2664326a709cc208f87d74338a1ba85277c22&
|
|
2024-12-23 01:23:53
|
Thanks Discord
|
|
|
CrushedAsian255
|
2024-12-23 01:24:01
|
did you use the files from the ZIP?
|
|
|
jonnyawsom3
|
2024-12-23 01:24:09
|
There's a ZIP?
|
|
|
CrushedAsian255
|
2024-12-23 01:24:13
|
on the github issue
|
|
|
jonnyawsom3
|
|
CrushedAsian255
I cropped the image to this, decode this to PNG and use `-d 0 -e 1` and it will generate a completely corrupted file
|
|
2024-12-23 01:24:35
|
I used that, gimme a sec
|
|
|
Orum
|
2024-12-23 01:24:45
|
https://github.com/libjxl/libjxl/issues/4026
|
|
|
CrushedAsian255
|
|
I used that, gimme a sec
|
|
2024-12-23 01:24:46
|
maybe discord recompression
|
|
2024-12-23 01:25:51
|
im building libjxl with debug
|
|
|
jonnyawsom3
|
2024-12-23 01:26:37
|
Yeah, nothing for me
|
|
|
CrushedAsian255
|
2024-12-23 01:27:50
|
ahh great
|
|
2024-12-23 01:27:53
|
libjxl isn't building with debug
|
|
2024-12-23 01:28:19
|
```error: invalid operands to binary expression ('const Kind' and 'ModularStreamId (size_t)' (aka 'jxl::ModularStreamId (unsigned long)'))```
|
|
|
jonnyawsom3
|
2024-12-23 01:28:38
|
```cjxl -d 0 -e 1 glitched_image.png Test.jxl
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 1]
Compressed to 1272.6 kB (1.227 bpp).
3840 x 2160, 338.094 MP/s [338.09, 338.09], , 1 reps, 16 threads.
cjxl -d 0 -e 1 corrupted_image.png Test2.jxl
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 1]
Compressed to 95364 bytes (2.170 bpp).
667 x 527, 72.287 MP/s [72.29, 72.29], , 1 reps, 16 threads.```
|
|
|
CrushedAsian255
|
2024-12-23 01:28:56
|
very strange
|
|
|
Orum
|
2024-12-23 01:29:02
|
try decoding to ppm first?
|
|
|
jonnyawsom3
|
2024-12-23 01:29:08
|
Corrupted images are larger
|
|
|
CrushedAsian255
|
2024-12-23 01:29:13
|
i wasnt using ppm
|
|
|
Orum
|
2024-12-23 01:29:24
|
yeah, I'm just curious because I was
|
|
2024-12-23 01:30:24
|
I wonder if it's just a race condition
|
|
|
CrushedAsian255
|
2024-12-23 01:30:41
|
if its a race condition it wouldnt be happening every time
|
|
|
Orum
|
2024-12-23 01:30:50
|
on different HW it might
|
|
|
jonnyawsom3
|
2024-12-23 01:31:54
|
I *am* on an 8 year old Zen 1 chip
|
|
|
CrushedAsian255
|
2024-12-23 01:32:14
|
because its happening for me on both
```
JPEG XL encoder v0.11.1 0.11.1 [NEON]
JPEG XL encoder v0.12.0 24e12baf [NEON,NEON_WITHOUT_AES]
```
|
|
2024-12-23 01:32:25
|
those are the archs i can test on
|
|
2024-12-23 01:32:38
|
<@548366727663321098> test on the file I made
|
|
2024-12-23 01:32:42
|
`corrupted_image.png`
|
|
|
Orum
|
2024-12-23 01:32:58
|
1 sec
|
|
|
CrushedAsian255
|
2024-12-23 01:33:26
|
maybe something with SIMD?
|
|
2024-12-23 01:33:33
|
thats the only difference i can think of
|
|
2024-12-23 01:33:44
|
SSE4 and NEON might be doing something funky at `-e 1`
|
|
|
Orum
|
2024-12-23 01:34:07
|
yeah but I'm sure Zen 1 has SSE4
|
|
|
CrushedAsian255
|
|
```cjxl -d 0 -e 1 glitched_image.png Test.jxl
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 1]
Compressed to 1272.6 kB (1.227 bpp).
3840 x 2160, 338.094 MP/s [338.09, 338.09], , 1 reps, 16 threads.
cjxl -d 0 -e 1 corrupted_image.png Test2.jxl
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 1]
Compressed to 95364 bytes (2.170 bpp).
667 x 527, 72.287 MP/s [72.29, 72.29], , 1 reps, 16 threads.```
|
|
2024-12-23 01:35:01
|
> `JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]`
apparently not
|
|
|
Orum
|
2024-12-23 01:35:06
|
why does apple hide these files? 😮💨
|
|
|
jonnyawsom3
|
2024-12-23 01:35:08
|
For the corruped JXL
`failed to render image: frame error: modular stream error: entropy decoder error: error from bitstream: I/O error: unexpected end of file`
|
|
|
CrushedAsian255
|
|
Orum
why does apple hide these files? 😮💨
|
|
2024-12-23 01:35:28
|
?
|
|
|
Orum
|
2024-12-23 01:35:38
|
starts the file names with a '.'
|
|
|
CrushedAsian255
|
2024-12-23 01:36:02
|
Cmd+Shift+.
|
|
2024-12-23 01:36:10
|
show hidden files
|
|
2024-12-23 01:36:21
|
or do you mean the dumb `._` files
|
|
|
Orum
|
2024-12-23 01:36:22
|
oh wait, these are just the thumbnails
|
|
|
CrushedAsian255
|
2024-12-23 01:36:52
|
what?
|
|
|
Orum
|
2024-12-23 01:37:07
|
in the archive there's a `__MACOSX` directory
|
|
2024-12-23 01:37:19
|
and in that there are some images that I guess are supposed to be thumbnails?
|
|
|
CrushedAsian255
|
|
Orum
in the archive there's a `__MACOSX` directory
|
|
2024-12-23 01:38:03
|
yeah thats apple's dumb `AppleDouble` thing
|
|
|
Orum
and in that there are some images that I guess are supposed to be thumbnails?
|
|
2024-12-23 01:38:24
|
sometimes they're thumbnails other times they're just random data files
|
|
2024-12-23 01:38:31
|
they're like Alternate data streams in NTFS
|
|
|
Orum
|
2024-12-23 01:38:36
|
and yeah, I get the same issues with your pngs
|
|
|
jonnyawsom3
|
|
Orum
yeah but I'm sure Zen 1 has SSE4
|
|
2024-12-23 01:39:16
|
|
|
2024-12-23 01:39:40
|
cjxl has acted weird for a while, since around 0.10 I think, not using certain hardware extensions
|
|
|
CrushedAsian255
|
2024-12-23 01:39:41
|
odd `cjxl` isnt using sse4 then
|
|
2024-12-23 01:39:45
|
did you build from source
|
|
|
jonnyawsom3
|
2024-12-23 01:39:54
|
Of course not, I'm on Windows
|
|
2024-12-23 01:39:57
|
;P
|
|
|
Orum
|
|
|
|
2024-12-23 01:40:06
|
isn't that vulnerable to sinkclose, among other things? 😓
|
|
|
Quackdoc
|
2024-12-23 01:40:11
|
it would be nice if cjxl had a flag to hash the input and output file to verify it was truly lossless, currently I do this all manually
|
|
|
CrushedAsian255
|
|
Quackdoc
it would be nice if cjxl had a flag to hash the input and output file to verify it was truly lossless, currently I do this all manually
|
|
2024-12-23 01:40:19
|
thats what we all want
|
|
|
Quackdoc
|
2024-12-23 01:41:21
|
iirc xxhash is entirely just one header, maybe that could just be pulled [Hmm](https://cdn.discordapp.com/emojis/1113499891314991275.webp?size=48&name=Hmm)
|
|
|
Orum
|
2024-12-23 01:42:14
|
Quack, are you able to reproduce it?
|
|
|
jonnyawsom3
|
|
Orum
isn't that vulnerable to sinkclose, among other things? 😓
|
|
2024-12-23 01:42:24
|
Oh I also have UAC disabled
|
|
|
Quackdoc
|
2024-12-23 01:42:33
|
uh, I havent tried, what cmdline should I use?
|
|
|
CrushedAsian255
|
|
Quackdoc
uh, I havent tried, what cmdline should I use?
|
|
2024-12-23 01:42:42
|
check the issue
|
|
|
Orum
|
2024-12-23 01:42:43
|
it's all in the bug report
|
|
|
CrushedAsian255
|
2024-12-23 01:42:47
|
https://github.com/libjxl/libjxl/issues/4026
|
|
|
Quackdoc
|
2024-12-23 01:44:08
|
seems fine on my machine
|
|
|
Orum
|
2024-12-23 01:44:19
|
well, this bug is annoying AF then
|
|
|
jonnyawsom3
|
|
Quackdoc
seems fine on my machine
|
|
2024-12-23 01:44:43
|
What's the cjxl output?
|
|
|
Orum
|
2024-12-23 01:45:20
|
more importantly, what CPU?
|
|
|
Quackdoc
|
2024-12-23 01:46:34
|
```ps
➜ orum cjxl glitched_image.png out.jxl -d 0 -e 1
JPEG XL encoder v0.11.1 794a5dcf [AVX2,SSE4,SSE2]
Encoding [Modular, lossless, effort: 1]
Compressed to 1272.6 kB (1.227 bpp).
3840 x 2160, 398.904 MP/s [398.90, 398.90], , 1 reps, 12 threads.
➜ orum magick identify -format "%# " glitched_image.png
79f3d5173c42448c5d03332f624728b7f8e133fdfea33747762ab1b5a2038ff5 % ➜ orum magick identify -format "%# " out.jxl
79f3d5173c42448c5d03332f624728b7f8e133fdfea33747762ab1b5a2038ff5 %
```
```ps
➜ orum cjxl corrupted_image.png out.jxl -d 0 -e 1
JPEG XL encoder v0.11.1 794a5dcf [AVX2,SSE4,SSE2]
Encoding [Modular, lossless, effort: 1]
Compressed to 95364 bytes (2.170 bpp).
667 x 527, 51.445 MP/s [51.45, 51.45], , 1 reps, 12 threads.
➜ orum magick identify -format "%# " out.jxl
65fe3a6815b38afb1b38f23bb033f1eb07ce6e475096673412f8a059aae27188 % ➜ orum magick identify -format "%# " corrupted_image.png
65fe3a6815b38afb1b38f23bb033f1eb07ce6e475096673412f8a059aae27188 %
```
|
|
|
Orum
more importantly, what CPU?
|
|
2024-12-23 01:46:39
|
ryzen 2600
|
|
|
Orum
|
2024-12-23 01:46:57
|
🤔 you have access to anything newer?
|
|
|
jonnyawsom3
|
2024-12-23 01:47:17
|
~~So half decade old Ryzen is superior~~
|
|
|
Quackdoc
|
|
Orum
🤔 you have access to anything newer?
|
|
2024-12-23 01:47:41
|
cpu? no.
|
|
|
jonnyawsom3
|
2024-12-23 01:47:53
|
Oh, there's a cursed idea
|
|
2024-12-23 01:47:56
|
I'll use my phone
|
|
|
CrushedAsian255
|
|
Quackdoc
|
2024-12-23 01:48:11
|
I mean, I do have my phone ready
|
|
2024-12-23 01:48:35
|
it's an old piece of shit tho
|
|
|
jonnyawsom3
|
2024-12-23 01:48:42
|
The PNGs should be fine from Discord
|
|
|
Orum
|
2024-12-23 01:49:02
|
yeah, as long as you download them and not the preview
|
|
|
jonnyawsom3
|
2024-12-23 01:49:32
|
Oh....
|
|
2024-12-23 01:49:40
|
It fails to encode a group
|
|
2024-12-23 01:49:49
|
The top left of the image is just missing
|
|
|
Quackdoc
|
2024-12-23 01:49:53
|
this is what I have to work with
```ps
➜ ~ neofetch
-o o- u0_a207@localhost
+hydNNNNdyh+ -----------------
+mMMMMMMMMMMMMm+ OS: Android 14 aarch64
`dMMm:NMMMMMMN:mMMd` Host: lge LM-G710
hMMMMMMMMMMMMMMMMMMh Kernel: 4.9.337-perf+
.. yyyyyyyyyyyyyyyyyyyy .. Uptime: 13 days, 26 mins
.mMMm`MMMMMMMMMMMMMMMMMMMM`mMMm. Packages: 351 (dpkg), 1 (pkg)
:MMMM-MMMMMMMMMMMMMMMMMMMM-MMMM: Shell: zsh 5.9
:MMMM-MMMMMMMMMMMMMMMMMMMM-MMMM: Terminal: /dev/pts/1
:MMMM-MMMMMMMMMMMMMMMMMMMM-MMMM: CPU: Qualcomm SDM845 (8) @ 1.766GHz
:MMMM-MMMMMMMMMMMMMMMMMMMM-MMMM: Memory: 2948MiB / 3628MiB
-MMMM-MMMMMMMMMMMMMMMMMMMM-MMMM-
+yy+ MMMMMMMMMMMMMMMMMMMM +yy+
mMMMMMMMMMMMMMMMMMMm
`/++MMMMh++hMMMM++/`
MMMMo oMMMM
MMMMo oMMMM
oNMm- -mMNs
```
|
|
2024-12-23 01:50:07
|
I may oom [av1_dogelol](https://cdn.discordapp.com/emojis/867794291652558888.webp?size=48&name=av1_dogelol)
|
|
2024-12-23 01:52:20
|
oop, cjxl on my phone broke
|
|
|
jonnyawsom3
|
|
Quackdoc
|
2024-12-23 01:52:44
|
```ps
➜ orum cjxl corrupted_image.png out.jxl -d 0 -e 1
JPEG XL encoder v0.11.1 [NEON,NEON_WITHOUT_AES]
Encoding [Modular, lossless, effort: 1]
Compressed to 96333 bytes (2.192 bpp).
667 x 527, 31.020 MP/s [31.02, 31.02], , 1 reps, 8 threads.
➜ orum magick identify -format "%# " corrupted_image.png
65fe3a6815b38afb1b38f23bb033f1eb07ce6e475096673412f8a059aae27188
➜ orum magick identify -format "%# " out.jxl
identify: unable to read image data `out.jxl' @ error/jxl.c/ReadJXLImage/774.
```
|
|
|
jonnyawsom3
|
|
|
|
2024-12-23 01:54:15
|
What the fuuuuuck is this
|
|
2024-12-23 01:54:24
|
It has a single offset group instead of 2?
|
|
2024-12-23 01:54:58
|
Wait no, it's just missing the bottom of that lower group
|
|
2024-12-23 01:55:05
|
The top left is gone though
|
|
|
CrushedAsian255
|
2024-12-23 02:07:25
|
`libsvtav1`
|
|
|
A homosapien
|
|
Orum
🤔 you have access to anything newer?
|
|
2024-12-23 05:09:56
|
I can't replicate on my intel i5 12400 or my ryzen 5 5600H. Both of my systems are on Windows so it might be a linux issue?
|
|
2024-12-23 05:10:41
|
Lemme test on my phone and my linux VM real quick
|
|
2024-12-23 05:18:50
|
Can't seem to replicate the corruption there either
|
|
2024-12-23 05:21:13
|
I tested versions 0.10.2 and 0.11.0
|
|
2024-12-23 05:21:19
|
How strange
|
|
|
Tirr
|
2024-12-23 05:24:07
|
it smells like a data race...
|
|
|
CrushedAsian255
|
|
A homosapien
I can't replicate on my intel i5 12400 or my ryzen 5 5600H. Both of my systems are on Windows so it might be a linux issue?
|
|
2024-12-23 06:14:57
|
its happening on Mac as well
|
|
|
Tirr
it smells like a data race...
|
|
2024-12-23 06:24:03
|
is there any way to force libjxl to run single-threaded?
|
|
|
A homosapien
|
2024-12-23 06:25:45
|
`--num_threads 0`
|
|
|
CrushedAsian255
|
2024-12-23 06:25:57
|
can't be data race, i set ` --num_threads=0` and still happening
|
|
|
A homosapien
|
2024-12-23 06:26:18
|
also, try `--num_threads 1` just to double check
|
|
|
CrushedAsian255
|
2024-12-23 06:26:38
|
still happening
|
|
2024-12-23 06:28:15
|
no other flags i set are seeming to do anything
|
|
2024-12-23 06:28:23
|
probably because `-e 1` is a specialised code path
|
|
|
Tirr
|
2024-12-23 07:11:24
|
oh it was e1... then it might be a SIMD bug
|
|
2024-12-23 07:12:24
|
maybe <@179701849576833024> need to look into this?
|
|
|
CrushedAsian255
|
|
Tirr
oh it was e1... then it might be a SIMD bug
|
|
2024-12-23 07:14:44
|
That would explain why its acting differently on different architectures and different CPUs
|
|
|
Tirr
|
2024-12-23 07:15:23
|
seems like NEON is faulty and AVX2 is fine
|
|
|
|
veluca
|
2024-12-23 07:17:16
|
😭
|
|
2024-12-23 07:17:33
|
can someone test turning off avx2?
|
|
|
Tirr
|
2024-12-23 07:23:17
|
don't have access to my x86 machines right now
|
|
|
|
veluca
|
2024-12-23 07:44:21
|
seems to work on SSE but not on AVX2 for me
|
|
|
Tirr
|
2024-12-23 07:46:07
|
then NEON and AVX2 is faulty and SSE2 is fine?
|
|
|
|
veluca
|
2024-12-23 07:47:13
|
apparently?
|
|
2024-12-23 07:47:22
|
that's... worrying
|
|
|
Tirr
|
2024-12-23 07:49:48
|
yeah that's quite bad...
|
|
|
|
veluca
|
2024-12-23 07:54:49
|
```
// Even if NEON only has 128-bit lanes, it is still significantly (~1.3x) faster
// to process two vectors at a time.
constexpr static size_t kLogChunkSize = 4;
```
|
|
2024-12-23 07:54:56
|
ah, I guess that explains it
|
|
|
CrushedAsian255
|
|
veluca
```
// Even if NEON only has 128-bit lanes, it is still significantly (~1.3x) faster
// to process two vectors at a time.
constexpr static size_t kLogChunkSize = 4;
```
|
|
2024-12-23 08:03:42
|
Is this only in the e 1 code path?
|
|
|
|
veluca
|
2024-12-23 08:06:05
|
yep
|
|
2024-12-23 08:23:45
|
huh, I seem to have isolated the bug in `BitWriter::WriteMultiple` ... didn't expect that
|
|
2024-12-23 09:05:53
|
https://github.com/libjxl/libjxl/pull/4027
|
|
2024-12-23 09:06:02
|
classic "I have no idea how this ever worked" moment
|
|
|
Tirr
|
2024-12-23 09:20:12
|
so it worked only with shorter vectors...
|
|
|
|
veluca
|
2024-12-23 09:24:07
|
no not even that
|
|
2024-12-23 09:24:22
|
all the vectorized code was broken
|
|
2024-12-23 09:24:35
|
looks like fjxl doesn't *actually* have a SSE version
|
|
|
Tirr
|
2024-12-23 09:25:01
|
what
|
|
2024-12-23 09:25:23
|
how that even worked
|
|
|
Demiurge
|
2024-12-23 10:09:45
|
Lossless verify mode would be cool...
|
|
|
DZgas Ж
|
|
veluca
looks like fjxl doesn't *actually* have a SSE version
|
|
2024-12-23 10:28:51
|
SSE or SSE2 ?
I still have an sse2 processor and I could check it out
|
|
|
|
veluca
|
2024-12-23 10:31:12
|
It's either avx2, neon, or scalar (or avx512)
|
|
|
DZgas Ж
|
2024-12-23 10:32:01
|
Well, I have avx512 too...
|
|
2024-12-23 10:32:55
|
After 14 years, I'm just planning to switch from amd to amd
|
|
2024-12-23 10:37:07
|
when fjxl was first added, I don't remember having any problems. But okay, I'll check 0.11 now
|
|
|
Dandy
|
2024-12-23 10:38:20
|
Are there any other iOS libraries like this? I’m specifically looking for one that supports HDR encoding https://github.com/awxkee/jxl-coder-swift
|
|
|
DZgas Ж
|
|
veluca
It's either avx2, neon, or scalar (or avx512)
|
|
2024-12-23 10:51:45
|
AVX encoding creates a broken file that I can't even open, although SSE2 does everything fine
|
|
|
jonnyawsom3
|
2024-12-23 10:56:38
|
Weird that I didn't get the corruption
|
|
|
DZgas Ж
|
|
|
|
2024-12-23 10:58:33
|
It's funny, I can't even open my broken file, but I can open your broken file
|
|
|
jonnyawsom3
|
2024-12-23 11:00:06
|
That was encoded on a Kirin 970 ARM CPU
|
|
|
DZgas Ж
|
2024-12-23 11:00:22
|
to use fjxl just buy sse2 processor <:KekDog:805390049033191445>
|
|
|
|
veluca
|
2024-12-23 11:06:53
|
The fix is already submitted 😛
|
|
|
DZgas Ж
|
2024-12-23 11:08:32
|
<:JXL:805850130203934781>
⬜🖼️🖼️
🖼️🖼️🖼️
|
|
|
Orum
|
2024-12-23 12:05:33
|
I will try out the fix later today <:Hypers:808826266060193874>
|
|
|
Baecho
|
|
CrushedAsian255
|
|
Baecho
✨
|
|
2024-12-24 04:47:02
|
<:JPEG_XL:805860709039865937>
|
|
|
Demiurge
|
2024-12-24 10:31:49
|
I know I've been here for a while... but I just wanted to say:
Cool image format, guys. Good work. I'm grateful. And like many others, I can't wait for it to become more ubiquitous as adoption and momentum keeps building up. And for people to continue to develop new and creative encoders and decoders and integrate them into common libraries and tools. Until it's as ubiquitous as PNG one day.
|
|
2024-12-24 10:32:14
|
:)
|
|
|
DZgas Ж
|
|
Demiurge
I know I've been here for a while... but I just wanted to say:
Cool image format, guys. Good work. I'm grateful. And like many others, I can't wait for it to become more ubiquitous as adoption and momentum keeps building up. And for people to continue to develop new and creative encoders and decoders and integrate them into common libraries and tools. Until it's as ubiquitous as PNG one day.
|
|
2024-12-24 11:31:28
|
Bad image format: development has been going on for 7 years and there is still no support for 4x4 256x256 blocks. Decoding is unbearably slow. There's no progressive tile decoding. It's impossible to encode images like 30000x30000 due to memory problems, and also impossible to open them.
The guys are good 👍 <:logo:829708783336816671>
|
|
|
Demiurge
|
2024-12-24 11:41:13
|
software development has been kind of a mess, but the format is good. I can open it just fine on any linux or apple device. The decode speed is just fine too, the same or better than libpng or dav1d. Hopefully the reference implementation improves and becomes more reliable when it comes to memory usage. And hopefully people continue to write better jxl codec implementations from scratch too
|
|
|
Traneptora
|
2024-12-24 11:41:23
|
inability to encode massive images is a library limitation, not a format one
|
|
|
Demiurge
|
2024-12-24 01:25:58
|
There definitely is progressive decoding, tile by tile... idk what you are even complaining about here honestly... the reference encoder has lots of room for improvement for sure, but I'm grateful that the format itself was thoughtfully designed, and I hope it becomes more ubiquitous and takes over the world, and more codec implementations get written that improve over and surpass libjxl, while libjxl itself hopefully improves too
|
|
|
A homosapien
|
2024-12-24 07:39:44
|
I'm most excited for jxl-rs and its potential improvements over libjxl
|
|
2024-12-24 07:40:01
|
Speaking of which, I should resume my decoder benchmarks
|
|
|
CrushedAsian255
|
|
DZgas Ж
Bad image format: development has been going on for 7 years and there is still no support for 4x4 256x256 blocks. Decoding is unbearably slow. There's no progressive tile decoding. It's impossible to encode images like 30000x30000 due to memory problems, and also impossible to open them.
The guys are good 👍 <:logo:829708783336816671>
|
|
2024-12-24 09:10:33
|
Can’t encode 4K images in 2 kB at visually lossless quality, useless format /s
|
|
2024-12-24 09:11:07
|
Merry Christmas
|
|
|
jonnyawsom3
|
|
CrushedAsian255
Can’t encode 4K images in 2 kB at visually lossless quality, useless format /s
|
|
2024-12-24 09:12:26
|
Welll, if the image is a pure white square to start with....
|
|
2024-12-24 09:12:53
|
~~AVIF would still make it bigger lossless~~
|
|
|
Demiurge
|
|
A homosapien
I'm most excited for jxl-rs and its potential improvements over libjxl
|
|
2024-12-24 09:22:00
|
Me too.
|
|
|
CrushedAsian255
|
2024-12-24 11:11:02
|
yeah
|
|
2024-12-27 12:22:16
|
<@890419667045343252> in Annex C, are “pre clustered distributions” the number of different contexts, and is “clustering” combining multiple different distributions to use the same context model? Also, do all the distributions share the same decoding state?
|
|
2024-12-27 12:22:24
|
Wait wrong person lol
|
|
2024-12-27 12:22:29
|
<@794205442175402004>
|
|
|
_wb_
|
2024-12-27 12:30:51
|
The pre-clustered context IDs are, for MA trees, just the leaf indices. They are clustered so that only one distribution and hybriduintconfig has to be signaled per cluster.
|
|
2024-12-27 12:31:19
|
They do share state like the lz77 window and ANS state
|
|
|
CrushedAsian255
|
|
_wb_
The pre-clustered context IDs are, for MA trees, just the leaf indices. They are clustered so that only one distribution and hybriduintconfig has to be signaled per cluster.
|
|
2024-12-27 12:32:28
|
What counts as one cluster?
|
|
2024-12-27 12:32:46
|
Can’t the MA trees reference the same context ID for the same effect?
|
|
|
_wb_
|
2024-12-27 12:33:12
|
The MA trees don't reference context IDs, it's just an implicit counter
|
|
|
CrushedAsian255
|
|
_wb_
The MA trees don't reference context IDs, it's just an implicit counter
|
|
2024-12-27 12:39:28
|
Ah, so the clustering allows multiple leaf nodes to share a singular context?
|
|
|
_wb_
|
2024-12-27 12:42:09
|
Yes
|
|
2024-12-27 12:43:26
|
The context modeling for HF coefficients in VarDCT mode uses the same clustering mechanism (but without MA trees and with a fixed relatively large ctx model instead)
|
|
|
CrushedAsian255
|
2024-12-27 12:43:38
|
Does this also happen in places like H.4.2? It says it reads 6 “pre-clustered distributions”, does this mean the distributions haven't been clustered yet, or that they have been implicitly clustered so that each distribution gets its own cluster?
|
|
|
|
veluca
|
2024-12-27 12:45:31
|
it means it reads a total of 6 distributions, possibly clustered together, so in practice there might be less than 6 encoded distributions
|
|
|
CrushedAsian255
|
2024-12-27 12:46:51
|
That makes sense, thanks
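A schematic of what was just described (illustration only, not the real bitstream syntax): the pre-clustered context IDs (for MA trees, the leaf indices; or a fixed count like the 6 in H.4.2) go through a cluster map, and only one distribution is actually signaled per cluster.
```cpp
#include <cstdio>
#include <vector>

// Stand-in for the actual per-cluster data (ANS/LZ77 distribution plus its
// hybriduintconfig).
struct Distribution {
  const char* name;
};

int main() {
  const int num_contexts = 6;  // e.g. the "6 pre-clustered distributions" case

  // Cluster map: context ID -> cluster index. Here contexts 0, 2, 5 share
  // cluster 0 and contexts 1, 3, 4 share cluster 1, so only 2 distributions
  // actually get encoded.
  std::vector<int> cluster_map = {0, 1, 0, 1, 1, 0};
  std::vector<Distribution> clusters = {{"cluster A"}, {"cluster B"}};

  for (int ctx = 0; ctx < num_contexts; ++ctx) {
    // A decoder would use clusters[cluster_map[ctx]] to decode any symbol
    // whose context (e.g. MA-tree leaf) is `ctx`.
    std::printf("context %d -> %s\n", ctx, clusters[cluster_map[ctx]].name);
  }
}
```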
|
|
|
Tirr
|
2024-12-27 12:56:18
|
(see also: https://tirr-c.github.io/jxl-docs/entropy-config/clustering.html)
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2024-12-27 02:16:41
|
I read somewhere that arithmetic coding was a problem back then because of patents that are now expired.
JXL does not use arithmetic coding (iirc).
Would it be beneficial for JXL?
Will it be added to JXL (perhaps as a non-backwards-compatible subset)?
|
|
|
jonnyawsom3
|
2024-12-27 02:25:04
|
You mean rANS?
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2024-12-27 02:26:23
|
oh right, that's the name
|
|
2024-12-27 02:26:53
|
come from here
https://youtu.be/RFWJM8JMXBs
> TO CLARIFY: don't be afraid to use these algorithms! the patents for arithmetic coding are all expired, and the only justifiably patentable material in Microsoft's patent are specific implementations of adaptive-probability rANS (which JPEG XL actually doesn't use). Microsoft has also given anyone permission to use it in open source, royalty-free codecs
|
|
|
AccessViolation_
|
2024-12-27 03:31:40
|
So nice of Microsoft to give the open source community permission to use a technology that wasn't theirs that they tried and failed to patent and then claimed patent rights to anyway
|
|
|
TheBigBadBoy - 𝙸𝚛
come from here
https://youtu.be/RFWJM8JMXBs
> TO CLARIFY: don't be afraid to use these algorithms! the patents for arithmetic coding are all expired, and the only justifiably patentable material in Microsoft's patent are specific implementations of adaptive-probability rANS (which JPEG XL actually doesn't use). Microsoft has also given anyone permission to use it in open source, royalty-free codecs
|
|
2024-12-27 04:19:33
|
JXL does use ANS, but it doesn't use ANS with adaptive-probability which is specifically what the patent is for
|
|
2024-12-27 04:30:13
|
Iirc Jon previously said that having adaptive probabilities isn't that useful if you have good predictors and context modeling, which JXL does have, and even then, adaptive probability ANS updates the probabilities somewhat slowly. So if you scan the image left to right, top to bottom, then it would be good for a large vertical difference (🇳🇱) , but on horizontal differences (🇲🇫) the pixels would constantly be changing along your scan pattern, faster than it would update the probabilities, and it wouldn't be that useful
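A toy way to see that last point (made-up model and numbers, nothing JXL-specific): an adaptive probability estimate that nudges itself toward each observed symbol does fine when the statistics change once along the scan (the 🇳🇱 case above) but stays stuck near 50/50 when they flip every few symbols (the 🇲🇫 case).
```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Ideal code length (bits/symbol) of a binary stream under a slowly adapting
// probability estimate.
double RunAdaptive(const std::vector<int>& symbols) {
  double p_one = 0.5;        // estimated probability of symbol "1"
  const double rate = 0.05;  // slow update, as adaptive coders typically use
  double total_bits = 0.0;
  for (int s : symbols) {
    double p = s ? p_one : 1.0 - p_one;
    total_bits += -std::log2(p);
    p_one += rate * (s - p_one);  // adapt toward the observed symbol
  }
  return total_bits / symbols.size();
}

int main() {
  // One change over the whole scan: a long run of 0s, then a long run of 1s.
  std::vector<int> one_change(300, 0);
  one_change.insert(one_change.end(), 300, 1);

  // Constant change along the scan: the symbol flips every three samples.
  std::vector<int> constant_change;
  for (int i = 0; i < 600; ++i) constant_change.push_back((i / 3) % 2);

  std::printf("bits/symbol, one change:      %.3f\n", RunAdaptive(one_change));
  std::printf("bits/symbol, constant change: %.3f\n", RunAdaptive(constant_change));
}
```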
|
|
|
CrushedAsian255
|
2024-12-27 10:14:44
|
Adaptive probability is poor man’s meta adaptive predictors
|
|
|
Orum
|
2024-12-28 04:40:26
|
I'm glad we got assistance from both 🇳🇱 and 🇫🇷 there
|
|
|
AccessViolation_
|
2024-12-28 07:42:52
|
And nice of unicode to include a rotated dutch flag specifically for cases like this
|
|
|
CrushedAsian255
|
2024-12-28 07:47:46
|
Very much appreciate
|
|
|
Demiurge
|
2024-12-28 10:30:11
|
jxl can have a hierarchical DCT right? Like a recursive DCT with multiple different hierarchical levels of detail.
|
|
2024-12-28 10:30:48
|
I wonder if jxr can be converted to jxl without fully decoding to pixels...
|
|
2024-12-28 10:31:15
|
Or at least a large portion of jxr
|
|
|
CrushedAsian255
|
2024-12-28 10:39:25
|
jxl has LF frames
|
|
2024-12-28 10:39:32
|
the 1:8 downsampled version can be itself a DCT
|
|
2024-12-28 10:39:50
|
only 1:8 recursion is supported though i think
|
|
2024-12-28 10:40:00
|
1:8, 1:64, 1:512, 1:4096
|
|
|
spider-mario
|
2024-12-28 11:06:39
|
are there enough lossy JXR files in the wild (with lost original source) to make that worthwhile, though?
|
|
|
CrushedAsian255
|
2024-12-28 12:04:14
|
It’s more of an interesting question
|
|
2024-12-28 12:05:24
|
I am also working on a PNG lossless recompression system based on preflate-rs (when that comes out of beta). Someone here has done that before using Python, but I'm hoping to make mine more performant
|
|
|
Demiurge
I wonder if jxr can be converted to jxl without fully decoding to pixels...
|
|
2024-12-28 12:06:22
|
How does JXR work again? Haven’t really researched the format
|
|
|
Demiurge
|
2024-12-28 12:06:57
|
It looks like it's a dct format that was designed to be lossless when the quant coeffs are 1
|
|
2024-12-28 12:07:10
|
And it uses a hierarchal 4x4 dct
|
|
2024-12-28 12:07:38
|
A 4x4 within a 4x4
|
|
|
CrushedAsian255
|
2024-12-28 12:09:15
|
You could possibly work out some form of lossless conversion between the 4x4-within-4x4 and a 16x16 DCT
|
|
2024-12-28 12:10:42
|
Although the specifics of the DCT used may not be the same as JXL
|
|
2024-12-28 12:10:47
|
So it might not be compatible
|
|
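A hedged numerical sketch of that caveat, in 1-D with SciPy's orthonormal DCT-II: the two-level "4 within 4" construction and a single 16-point DCT agree only on the overall DC term, so an exact conversion would be a real basis change rather than a coefficient copy (and, as noted above, the transform JXR actually uses may differ again).
```python
# Hedged 1-D sketch: compare a single 16-point DCT against a two-level
# "4 within 4" construction (4-point DCT per block, then a 4-point DCT
# across the block DC terms).
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
x = rng.standard_normal(16)

full = dct(x, norm="ortho")                  # one 16-point DCT

blocks = x.reshape(4, 4)
stage1 = dct(blocks, axis=1, norm="ortho")   # 4-point DCT inside each block
stage2_dc = dct(stage1[:, 0], norm="ortho")  # 4-point DCT across the block DCs

# Only the overall DC term coincides; the other hierarchical coefficients
# differ from the full transform's low frequencies, so a coefficient-domain
# remap between the two would not be a simple copy.
print(full[:4])
print(stage2_dc)
```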
|
Meow
|
|
spider-mario
are there enough lossy JXR files in the wild (with lost original source) to make that worthwhile, though?
|
|
2024-12-28 01:13:52
|
HDR screenshots on Windows?
|
|
|
spider-mario
|
2024-12-28 01:14:07
|
are they lossy?
|
|
|
Meow
|
2024-12-28 01:14:48
|
I don't have a setup to test that
|
|
2024-12-28 01:16:41
|
GeForce Experience is a big source of those JXR screenshots
|
|
|
Demiurge
|
2024-12-28 03:21:54
|
The specifics are probably slightly different. But who knows, maybe it's similar enough that the error is small
|
|
|
Meow
|
2024-12-28 03:25:49
|
Do self-made JXR images count?
|
|
|
Demiurge
|
2024-12-28 05:59:30
|
Native jxl support can't come soon enough in windows...
|
|
2024-12-28 05:59:42
|
At least Mac and Linux work great
|
|
|
RaveSteel
|
2024-12-28 08:04:51
|
Is JXL support even present in the insider builds? Or not there yet?
|
|
|
CrushedAsian255
|
|
Demiurge
At least Mac and Linux work great
|
|
2024-12-28 10:47:53
|
JXL animations don't work on Mac
|
|
|
Demiurge
|
2024-12-28 10:58:48
|
Can Preview play animated gif?
|
|
2024-12-28 11:00:51
|
No animations is a bummer but better than nothing at least
|
|
|
CrushedAsian255
|
|
Demiurge
Can Preview play animated gif?
|
|
2024-12-28 11:08:25
|
yes, also APNG
|
|
2024-12-28 11:08:39
|
well Preview shows each frame
|
|
2024-12-28 11:08:44
|
but Quick Look can play the animation
|
|
2024-12-28 11:10:15
|
This is APNG in Preview
|
|
|
Demiurge
|
2024-12-28 11:11:27
|
Ah yes I remember
|
|
2024-12-28 11:11:44
|
It's been a while since I used it
|
|
2024-12-28 11:12:07
|
That's cool, would be nice for Apple to plumb animation support
|
|
2024-12-28 11:12:15
|
For jxl
|
|
2024-12-28 11:12:45
|
Still, it's really nice how well it works already despite the lack of animation support
|
|
|
Meow
|
|
Demiurge
Native jxl support can't come soon enough in windows...
|
|
2024-12-29 05:48:45
|
I've checked 24H2. JXL files can be selected for wallpapers although it doesn't work
|
|
|
Demiurge
|
2024-12-29 06:29:11
|
Such a tease.
|
|
|
RaveSteel
|
2024-12-29 05:13:52
|
Small indie company, please understand
|
|
|
DZgas Ж
|
2024-12-29 05:24:18
|
<:ReeCat:806087208678588437>
|
|
|
KKT
|
2024-12-29 09:08:37
|
So I was playing around with Clearsky.app and noticed something interesting/odd:
|
|
2024-12-29 09:08:50
|
Is that someone here?
|
|
2024-12-29 09:09:10
|
#2 worldwide is pretty impressive.
|
|
|
jonnyawsom3
|
2024-12-29 09:26:45
|
They seem to go by Soap on other sites, and aren't very happy https://bsky.app/profile/jpegxl.bsky.social/post/3lctst5mdj22h
|
|
|
KKT
|
2024-12-29 11:10:38
|
Yeah, seemed aggressive.
|
|
|
user_to_be_deleted_f5n17u49
|
2025-01-01 06:27:32
|
I saw someone on reddit saying you should always hash when doing lossless. I don't use imagemagick, so I'm not familiar with the command they posted. Is this "hash" just a cryptographic hash, or is it something else entirely? What is the significance? Do hashes between the original file and the jxl match when lossless?
|
|
2025-01-01 06:33:40
|
While I'm at it, are there any other best practices with conversion to JXL I should know about, particularly for lossless? I'm using JXL for my photography collection. I keep JPG, JXL, and PAR2 files for both, with all the original EXIF metadata when possible, and custom metadata.
|
|
|
username
|
|
user_to_be_deleted_f5n17u49
I saw someone on reddit saying you should always hash when doing lossless. I don't use imagemagick, so I'm not familiar with the command they posted. Is this "hash" just a cryptographic hash, or is it something else entirely? What is the significance? Do hashes between the original file and the jxl match when lossless?
|
|
2025-01-01 06:37:52
|
I think they mean taking a non-cryptographic hash of the raw decoded pixels to make sure that the original image and the image converted to another lossless format (such as JXL) match up when viewed/decoded
|
|
|
Demiurge
|
2025-01-01 06:37:54
|
It's a good idea to do some kinda sanity check when converting a huge amount of files without checking each one by hand.
|
|
2025-01-01 06:39:50
|
For lossless, the pixel data should be identical, and you can verify this by decoding the jxl and the original and making sure the pixels match or have the same hash
|
|
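A minimal sketch of that pixel-level check for the pixel-lossless case (e.g. PNG -> JXL), assuming ImageMagick 7 with JXL read support is on PATH; file names are placeholders. It decodes each image to raw 16-bit RGB and hashes those bytes, so container and metadata differences don't affect the comparison:
```python
# Minimal sketch, assuming "magick" (ImageMagick 7 with JXL support) is on
# PATH; file names are placeholders.
import hashlib
import subprocess

def pixel_hash(path: str) -> str:
    raw = subprocess.run(
        ["magick", path, "-depth", "16", "RGB:-"],  # raw 16-bit RGB to stdout
        check=True, capture_output=True,
    ).stdout
    return hashlib.sha256(raw).hexdigest()

if pixel_hash("original.png") == pixel_hash("lossless.jxl"):
    print("pixel data matches: conversion was lossless")
else:
    print("pixel data differs: keep the original and investigate")
```
For the JPEG recompression case specifically, hashing the reconstructed JPEG file itself is the more direct check.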
|
user_to_be_deleted_f5n17u49
|
2025-01-01 06:40:28
|
What's a non-cryptographic hash? Do you happen to know how I could hash the pixels like that in Powershell or bash? (I may be misusing "cryptographic hash" here; I'd still think of MD5 as one, even though it isn't secure)
|
|
|
Demiurge
|
2025-01-01 06:40:52
|
For lossy, you can check the maximum amplitude of the difference/distortion and make sure it doesn't exceed a sensible threshold
|
|
|
user_to_be_deleted_f5n17u49
|
2025-01-01 06:41:05
|
Oh am I decoding to raw and hashing that?
|
|
|
Demiurge
|
|
user_to_be_deleted_f5n17u49
What's a non-cryptographic hash? Do you happen to know how I could hash the pixels like that in Powershell or bash? (I may be misusing "cryptographic hash" here; I'd still think of MD5 as one, even though it isn't secure)
|
|
2025-01-01 06:41:27
|
There are graphics tools/libraries like magick that can compare pixel data
|
|
2025-01-01 06:41:49
|
Just use a tool like vips or gm/magick
|
|
|
user_to_be_deleted_f5n17u49
|
2025-01-01 06:42:30
|
Fair enough! Tools exist for a reason. Thanks for the help.
|
|
|
Demiurge
|
2025-01-01 06:44:25
|
Yeah. If you need help formulating a specific command, someone can help you too
|
|
|
user_to_be_deleted_f5n17u49
|
2025-01-01 06:45:20
|
I've been writing scripts to handle everything, I was just trying to figure out the purpose and if I could do it without needing a new dependency
|
|
|
Demiurge
|
2025-01-01 06:50:38
|
Depends on if you prefer using vips or if you prefer using imagemagick/graphicsmagick
|
|
|
user_to_be_deleted_f5n17u49
|
2025-01-01 06:51:31
|
I've never used either!
|
|
2025-01-01 06:51:54
|
I've been running parchive2 and libjxl alone thus far
|
|
|
Demiurge
|
2025-01-01 07:10:05
|
It would be nice if some type of sanity check was done by cjxl itself
|
|
2025-01-01 07:10:28
|
But there isn't as of now
|
|
|
Orum
|
|
user_to_be_deleted_f5n17u49
What's a non-cryptographic hash? Do you happen to know how I could hash the pixels like that in Powershell or bash? (I may be misusing "cryptographic hash" here; I'd still think of MD5 as one, even though it isn't secure)
|
|
2025-01-01 07:39:39
|
There are plenty of these, though usually they're referred to as "checksums" instead of "hashes" as the latter is generally assumed to be cryptographic. Anyway, for example: https://en.wikipedia.org/wiki/Fletcher%27s_checksum
|
|
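The linked Fletcher checksum is small enough to sketch inline; the 16-bit variant, just to show what a fast non-cryptographic checksum looks like:
```python
# Fletcher-16: two running sums modulo 255, packed into one 16-bit value.
def fletcher16(data: bytes) -> int:
    sum1, sum2 = 0, 0
    for byte in data:
        sum1 = (sum1 + byte) % 255  # running sum of the bytes
        sum2 = (sum2 + sum1) % 255  # running sum of the running sums
    return (sum2 << 8) | sum1

print(hex(fletcher16(b"abcde")))  # prints 0xc8f0
```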
|
user_to_be_deleted_f5n17u49
|
|
Orum
There are plenty of these, though usually they're referred to as "checksums" instead of "hashes" as the latter is generally assumed to be cryptographic. Anyway, for example: https://en.wikipedia.org/wiki/Fletcher%27s_checksum
|
|
2025-01-01 07:52:13
|
Ah, I see, thanks! I'll give this a read.
|
|
2025-01-01 07:53:15
|
So, to be sure; when using magick identify, the hash between a lossless JXL and the original JPEG is expected to be different, but if I then decode said JXL back to a JPEG, the new JPEG and the original should have matching hashes. Do I have that right?
|
|
|
Demiurge
|
2025-01-01 07:53:57
|
Correct
|
|
|
user_to_be_deleted_f5n17u49
|
2025-01-01 07:54:12
|
Okay, good, thank you! I was worried for a second.
|
|
|
Demiurge
|
2025-01-01 07:55:01
|
For that particular scenario, you must compare the file hash. For other lossless scenarios (like png) you must compare the pixel hash.
|
|
|
user_to_be_deleted_f5n17u49
|
2025-01-01 07:55:50
|
Ah, so JPEG <=> JXL I could just hash the whole file, but anything else I'd want magick identify?
|
|
|
Demiurge
|
2025-01-01 07:56:03
|
Correct
|
|
2025-01-01 07:57:45
|
For lossy, some distortion is expected, but not an extreme level of distortion.
|
|
2025-01-01 07:58:34
|
If you can measure the maximum amplitude of distortion, and see that it exceeds a certain reasonable threshold, it could be a sign of silent errors too
|
|
2025-01-01 07:58:56
|
Like a missing block or something
|
|
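A hedged sketch of that lossy sanity check using ImageMagick's peak-absolute-error metric (file names are placeholders, and ImageMagick needs JXL read support for this to work):
```python
# "magick compare" prints the metric on stderr and exits non-zero when the
# images differ, so no check=True here.
import subprocess

result = subprocess.run(
    ["magick", "compare", "-metric", "PAE", "original.png", "lossy.jxl", "null:"],
    capture_output=True, text=True,
)
print("peak absolute error:", result.stderr.strip())
# a value near the full sample range would suggest something like a missing block
```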
|
user_to_be_deleted_f5n17u49
|
2025-01-01 07:59:32
|
In lossless, though, could I expect hashing to catch everything, more or less?
|
|
|
Demiurge
|
2025-01-01 08:00:34
|
The problem is there are no built-in sanity checks to ensure this doesn't happen, and silent output errors have happened in certain corner cases in the past
|
|
|
user_to_be_deleted_f5n17u49
In lossless, though, could I expect hashing to catch everything, more or less?
|
|
2025-01-01 08:01:27
|
If you are doing lossless then anything less than a perfect match is an unacceptable error.
|
|
|
user_to_be_deleted_f5n17u49
|
2025-01-01 08:01:44
|
That's what I would have thought, but I wanted to verify.
|
|
|
Demiurge
|
2025-01-01 08:01:51
|
The pixel hash should match exactly
|
|
2025-01-01 08:05:17
|
Silent errors are the worst kind of errors and the libjxl reference software isn’t really known for its defensive programming techniques
|
|
|
user_to_be_deleted_f5n17u49
|
2025-01-01 08:06:55
|
So here's what I got from my test, I think this lines up with what's been described.
```
2de481c344fe59824e7fab7ce12f4307a0b523fded424e7453087c34871ce0fd converted.png
2de481c344fe59824e7fab7ce12f4307a0b523fded424e7453087c34871ce0fd deconstructed.jxl
2de481c344fe59824e7fab7ce12f4307a0b523fded424e7453087c34871ce0fd deconverted.jxl
2de481c344fe59824e7fab7ce12f4307a0b523fded424e7453087c34871ce0fd lossless.jxl
87184060c93d41b47d33f47f4a2df7a3fde83c1e11d39e5e89767f6c5f43e84a original.jpg
87184060c93d41b47d33f47f4a2df7a3fde83c1e11d39e5e89767f6c5f43e84a reconstructed.jpg
```
`original.jpg` -> `lossless.jxl`
`lossless.jxl` -> `reconstructed.jpg`
`lossless.jxl` -> `converted.png`
`reconstructed.jpg` -> `deconstructed.jxl`
`converted.png` -> `deconverted.jxl`
|
|
2025-01-01 08:07:16
|
(using `magick identify -format "%# %f\n" ./*`)
|
|
|
Demiurge
|
2025-01-01 08:10:00
|
It's mainly designed for good performance, and they just keep stapling things onto aging and increasingly messy code
|
|
2025-01-01 08:10:49
|
Why do you have so many files... what's "deconstructed" and "deconverted"
|
|
|
user_to_be_deleted_f5n17u49
|
2025-01-01 08:12:28
|
I'm going back and forth, just to see what happens. I turn the original JPG into a JXL. I take that JXL and decode it twice, once into a JPG and once into a PNG. Then I take each of those and re-encode them as JXLs. Those would be "deconstructed" (JPG -> JXL -> JPG -> JXL) and "deconverted" (JPG -> JXL -> PNG -> JXL).
|
|
2025-01-01 08:13:53
|
If I take `deconverted.jxl` and try to decode it back to a JPG, I get a third hash? It fails to losslessly decode. That feels incorrect?
|
|
2025-01-01 08:16:27
|
So it goes JPG -> JXL -> PNG -> JXL -> JPG, with the last decode producing a third hash not before seen.
|
|
|
Demiurge
|
2025-01-01 08:26:37
|
You can compare the file hash without using magick (for jpeg recompression)
|
|
|
user_to_be_deleted_f5n17u49
|
|
user_to_be_deleted_f5n17u49
So it goes JPG -> JXL -> PNG -> JXL -> JPG, with the last decode producing a third hash not before seen.
|
|
2025-01-01 08:29:46
|
What of this, though? I don't think it'd be expected that the final operation would fail to be done losslessly, no? Do we report such failures somewhere? Or does that just happen sometimes and we kinda just say "oh well"?
|
|
|
monad
|
|
user_to_be_deleted_f5n17u49
What of this, though? I don't think it'd be expected that the final operation would fail to be done losslessly, no? Do we report such failures somewhere? Or does that just happen sometimes and we kinda just say "oh well"?
|
|
2025-01-01 08:44:11
|
JPEG -> JXL can be lossless. assuming it is:
JXL -> PNG is lossy
PNG -> JXL can be lossless
JXL -> JPEG is lossy
|
|
2025-01-01 08:45:20
|
Since it is a lossy operation, a different hash is not surprising.
|
|
|
user_to_be_deleted_f5n17u49
|
2025-01-01 08:45:21
|
JXL -> PNG produced an identical hash to the hash of the JXL, so is the pixel data not preserved?
|
|
|
monad
|
2025-01-01 08:49:05
|
There are no absolute pixel values in that JXL (or the original JPEG). The PNG is just the decoded result of the JXL, so you should expect the JXL's decoded result to match that same decoded result stored as a PNG.
|
|
|
_wb_
|
2025-01-01 08:49:16
|
ImageMagick hashes are just comparing pixel values as seen by ImageMagick, which is usually compiled to use uint16 internally. One thing to note is that it completely ignores colorspace.
|
|
|
user_to_be_deleted_f5n17u49
|
2025-01-01 08:49:48
|
Ooooh. I think I understand now.
|
|
|
Orum
|
|
monad
JPEG -> JXL can be lossless. assuming it is:
JXL -> PNG is lossy
PNG -> JXL can be lossless
JXL -> JPEG is lossy
|
|
2025-01-01 08:51:47
|
JXL -> PNG shouldn't be lossy unless the JXL is in a format that PNG doesn't support
JXL -> JPEG isn't lossy if the JXL was originally created from a JPEG source
|
|
|
monad
|
2025-01-01 08:53:53
|
I'm describing the implications of the specific case posed.
|
|
|
Orum
|
2025-01-01 08:53:58
|
ah
|
|
2025-01-01 08:54:38
|
well even then JPG -> JXL -> PNG shouldn't be lossy (just bad for compression)
|
|
|
monad
|
2025-01-01 08:57:08
|
JXL to PNG drops the JPEG data and leaves a single decoded interpretation which other decoders may disagree with.
|
|
|
Orum
|
2025-01-01 08:57:56
|
sure, but that can happen just with decoding a JPEG; it doesn't mean the format itself is at fault
|
|
|
_wb_
|
2025-01-01 09:12:37
|
It's also lossy in the sense that the DCT and YCbCr color transforms are not exact when quantizing the result to uint8 or even uint16 RGB
|
|
|
Orum
|
2025-01-01 09:20:36
|
maybe, but for practical purposes it doesn't matter
|
|
|
monad
|
2025-01-01 09:22:07
|
What matters here is that JPEG -> JXL -> PNG -> JXL -> JPEG doesn't roundtrip, per measurement.
|
|
|
_wb_
|
2025-01-01 09:38:21
|
JPEG -> PNG -> JPEG also usually doesn't roundtrip
|
|
|
Orum
|
2025-01-01 09:53:29
|
well lossy -> lossless -> lossy didn't either, until JXL existed and added lossless recompression for JPEG
|
|
|
CrushedAsian255
|
2025-01-01 11:45:42
|
only jpg -> jxl -> jpg is roundtrip
|
|
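A hedged sketch of that one byte-exact roundtrip, assuming cjxl keeps the JPEG reconstruction data (its default for JPEG input) and both tools are on PATH; file names are placeholders:
```python
import hashlib
import subprocess

def file_hash(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

subprocess.run(["cjxl", "original.jpg", "lossless.jxl"], check=True)
subprocess.run(["djxl", "lossless.jxl", "reconstructed.jpg"], check=True)

# Byte-identical output => identical file hashes; any PNG detour breaks this.
assert file_hash("original.jpg") == file_hash("reconstructed.jpg")
```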
|
Demiurge
|
|
_wb_
ImageMagick hashes are just comparing pixel values as seen by ImageMagick, which is usually compiled to use uint16 internally. One thing to note is that it completely ignores colorspace.
|
|
2025-01-02 12:23:14
|
Yeah, it's unfortunate that there isn't a better alternative available for performing some basic decode sanity checks on encode...
|
|
|
DZgas Ж
|
2025-01-02 04:50:45
|
jpg -> jpg -> jpg is real life
|
|
|
Meow
|
|
DZgas Ж
jpg -> jpg -> jpg is real life
|
|
2025-01-02 05:28:10
|
https://youtu.be/w7UDJUCMTng
|
|
|
CrushedAsian255
|
2025-01-02 05:29:43
|
jpeg looks like a gameboy and webp looks like unicorn vomit
|
|
|
Meow
|
2025-01-02 05:38:32
|
Wonder if Jpegli -> Jpegli -> Jpegli ... would result in the same
|
|
|
Orum
|
2025-01-02 06:06:59
|
JPEG looks like you entered the matrix
|
|
|
monad
|
2025-01-02 06:21:59
|
JPEG -> JPEG is better than JXL -> JXL currently. The video above is a misleading, broken example
|
|
|
Orum
|
2025-01-02 06:24:48
|
🤔 I'll have to test it again when I get home
|
|
2025-01-02 06:25:05
|
last time I tested it, certain areas became blurry, but the rest looked really good
|
|
|
monad
|
2025-01-02 06:27:36
|
https://discord.com/channels/794206087879852103/803645746661425173/1230065259784572928
|
|
|
Orum
|
2025-01-02 06:27:48
|
I did mine back in '21 so yeah, time to update
|
|
2025-01-02 06:28:55
|
have you tried in 0.11.1?
|
|
2025-01-02 06:30:02
|
also it seems mostly fine without gaborish
|
|
|
monad
|
2025-01-02 06:30:34
|
haven't, but I'm not aware of any significant changes affecting this
|
|
|
Orum
|
2025-01-02 06:31:09
|
> Default is 'encoder chooses'
the hell does that even mean?
|
|
|
monad
|
2025-01-02 06:31:50
|
I know there was one commit from Jon mitigating generation loss in a very specific situation, but nothing general
|
|
|
Orum
> Default is 'encoder chooses'
the hell does that even mean?
|
|
2025-01-02 06:32:14
|
what is the context
|
|
|
Orum
|
2025-01-02 06:32:23
|
the `--gaborish` option
|
|
|
monad
|
2025-01-02 06:33:31
|
the encoder can choose whether or not to signal gaborish
|
|
|
Orum
|
2025-01-02 06:33:41
|
right, but *how* does it choose?
|
|
|
monad
|
2025-01-02 06:35:17
|
I don't know the details. I think it's disabled at very high quality, probably other cases too
|
|
|
Orum
|
2025-01-02 06:35:28
|
what distance were you using?
|
|
|
monad
|
2025-01-02 06:36:55
|
Well, I didn't look at this recently and that's not my video, but I think that user posted their script somewhere.
|
|
|
Orum
|
2025-01-02 06:38:02
|
Well back when I tested it in 2021, it did much better than that. This was 1st generation, -d 0.5:
|
|
2025-01-02 06:38:12
|
and this was 500th:
|
|
2025-01-02 06:39:34
|
weirdly it had the most problems with larger, featureless areas
|
|
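For reference, a hedged sketch of a generation-loss test along those lines (flags and file names are assumptions; cjxl/djxl are expected on PATH):
```python
# Re-encode the decoded output 500 times at a fixed distance.
import subprocess

src = "gen_000.png"
for gen in range(1, 501):
    jxl = f"gen_{gen:03d}.jxl"
    png = f"gen_{gen:03d}.png"
    subprocess.run(["cjxl", src, jxl, "-d", "0.5"], check=True)
    subprocess.run(["djxl", jxl, png], check=True)
    src = png  # each generation's decode feeds the next encode
```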
|
monad
|
2025-01-02 06:42:01
|
https://discord.com/channels/794206087879852103/803645746661425173/1023189031011491921
|
|
2025-01-02 06:42:20
|
last time they mentioned quality
|
|
|
Orum
|
2025-01-02 06:43:27
|
yeah, my guess is just that gaborish is disabled at 0.5 so I didn't notice the degradation
|
|
2025-01-02 06:43:35
|
or it wasn't even implemented at the time 🤷♂️
|
|
|
A homosapien
|
|
Orum
yeah, my guess is just that gaborish is disabled at 0.5 so I didn't notice the degradation
|
|
2025-01-02 08:09:58
|
You are correct, gaborish is disabled at distances 0.5 or less
|
|
2025-01-02 08:11:14
|
But I also remember there was talk about tuning the gabor filter to be more resilient against generation loss.
|
|
|
Orum
|
2025-01-02 09:31:37
|
I think in the short term it should just be off unless explicitly forced on
|
|
|
_wb_
|
2025-01-02 09:38:46
|
it is quite useful against blockiness at d1-3 though
|
|
2025-01-02 09:39:46
|
(at higher distance its kernel size is too small to help much and you really need epf, at lower distance it is not really needed)
|
|
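If someone wants to isolate those filters in a generation-loss test, the relevant cjxl options can be pinned instead of left to "encoder chooses". A sketch; the `--gaborish` option is the one discussed above, while the `--epf` flag and its value are assumptions on my part:
```python
# Hedged sketch: pin the loop filters instead of letting the encoder choose.
import subprocess

subprocess.run(
    ["cjxl", "input.png", "output.jxl", "-d", "1.0",
     "--gaborish=0",   # force the Gabor-like smoothing off
     "--epf=0"],       # assumed syntax: force the edge-preserving filter off
    check=True,
)
```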
|
CrushedAsian255
|
|
_wb_
(at higher distance its kernel size is too small to help much and you really need epf, at lower distance it is not really needed)
|
|
2025-01-02 10:39:25
|
so is gaborish like a laid-back deblocking filter and epf is like a full on one?
|
|
|
Quackdoc
|
2025-01-02 10:48:14
|
I wonder if maybe I should find a spare machine and have it dedicated to running lossless encode/decode tests for cjxl [Hmm](https://cdn.discordapp.com/emojis/1113499891314991275.webp?size=48&name=Hmm) I do have some old craptops and mini PCs
|
|