JPEG XL

benchmarks

AccessViolation_
2025-01-25 10:42:18
how long did that take 💀
RaveSteel
2025-01-25 10:42:25
but only with transcoding, no re-encode
AccessViolation_
2025-01-25 10:42:33
ohh okay
RaveSteel
2025-01-25 10:42:37
haven't tried reencoding
2025-01-25 10:42:49
I like to preserve the originals as far as possible
2025-01-25 10:43:01
If possible, I want literally every file I have to be lossless
AccessViolation_
2025-01-25 10:43:06
Makes sense
2025-01-25 10:43:52
I mean Modular mode gets surprisingly close to virtually lossless VarDCT levels sometimes
jonnyawsom3
AccessViolation_ It's happened several times now that I go "I wonder if this software has any requests for JXL" and when I check the github issues it's one of you guys <:KekDog:805390049033191445>
2025-01-25 10:44:07
Need that Blender support for my textures 😩
2025-01-25 10:44:42
I logged my cmd output so I can retrieve settings
2025-01-25 10:45:30
WebP does okay... Then you remember it's 8-bit, so it broke normal maps
RaveSteel
2025-01-25 10:49:35
WebP certainly has its uses, but high fidelity is not among them
Orum
2025-01-25 11:12:30
8-bit normal maps are quite usable
RaveSteel
2025-01-25 11:16:43
Of course, but it is a limitation nonetheless
Orum
2025-01-25 11:16:50
true
A homosapien
AccessViolation_ tried effort 11 yet? :p
2025-01-25 11:18:17
We should really just make the encoder do a no op when effort 11 is called for anything but lossless.
2025-01-25 11:19:47
It's mentioned in the documentation, but I think the program yelling, "STOP! This does nothing! You are wasting your time." in your face is better.
jonnyawsom3
2025-01-25 11:20:21
Well, it just does effort 10 effectively doesn't it?
A homosapien
2025-01-25 11:21:51
It falls back to effort 10 yeah.
2025-01-25 11:23:05
But that can still lead to a lot of wasted time
2025-01-25 11:24:32
I think the devs underestimated how obsessively people would try e11 on *everything*
RaveSteel
2025-01-25 11:25:06
Which is a clear indication of a need for e 12 to e 20 <:KekDog:805390049033191445>
Orum
2025-01-25 11:26:15
I mean I do up to e10 for research purposes but for everyday use I use anywhere between e1 (e.g. on screenshots) and e7
RaveSteel
2025-01-25 11:26:38
e1 is perfect for screenshots indeed
AccessViolation_
A homosapien It's mentioned in the documentation, but I think the program yelling, "STOP! This does nothing! You are wasting your time." in your face is better.
2025-01-25 11:26:41
Ironically enough, it does not waste time when it's a no op, and wastes a lot of time when it's doing lossless <:galaxybrain:821831336372338729>
2025-01-25 11:27:21
I think at least? I remember doing lossless transcoding at e11 and it finished like instantly
2025-01-25 11:27:32
Oh vardct would probably work
RaveSteel
2025-01-25 11:28:27
It was mentioned multiple times already I believe, but lossless transcoding doesn't really improve when setting an effort greater than 8, maybe 9
2025-01-25 11:28:51
I myself haven't really tested this, but from the few times I did it was so
A homosapien
2025-01-25 11:51:56
Effort 9 does increase density by ~8-10% but it takes ~20x more encoding time compared to effort 7.
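As a rough cheat sheet of the effort trade-offs discussed above, a sketch of the corresponding commands (filenames hypothetical; `-d 0` and `-e` are standard cjxl flags, assumed to be a recent cjxl on PATH):

```shell
cjxl screenshot.png out1.jxl -d 0 -e 1   # fastest lossless, fine for screenshots
cjxl photo.png      out2.jxl -d 0 -e 7   # everyday sweet spot
cjxl archive.png    out3.jxl -d 0 -e 9   # ~8-10% denser, but ~20x slower than e7
```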
RaveSteel
2025-01-26 01:30:41
Tried exporting an image from Krita, this time with RGB noise, once in an extra layer and once flattened. Interestingly, it's the exact opposite of the CRT scanline texture from last time: the JXL with the noise in a separate layer is massively larger than the flattened image
2025-01-26 01:30:53
```
47M gradient_test_lossless_noise_layers.jxl
12M gradient_test_lossless_noise_no_layers.jxl
```
2025-01-26 01:31:41
The noise layer was exported from GIMP at 25% opacity, since JXL (or Krita's implementation of JXL) does not support layers with reduced opacity
2025-01-26 01:32:42
AccessViolation_
2025-01-26 01:34:05
I produced transparent layers in Krita by using the eraser tool, setting it to the biggest size and some opacity, and then erasing the whole layer in one sweep
RaveSteel
2025-01-26 01:34:37
That is certainly one option
AccessViolation_
2025-01-26 01:35:21
You effectively just gotta do any sort of transparency by using the alpha channel, or maybe a layer in the multiplication blend mode, but idk if krita supports outputting the multiplication blend mode in JXL
2025-01-26 01:36:21
Those test results are interesting though
RaveSteel
2025-01-26 01:37:02
I also encoded the image with cjxl and --photon_noise_iso 64000 to get a similar noise level. The noise doesn't look as nice, but at 1.5MB it handily beats both JXLs with the noise overlay created in GIMP
AccessViolation_ You effectively just gotta do any sort of transparency by using the alpha channel, or maybe a layer in the multiplication blend mode, but idk if krita supports outputting the multiplication blend mode in JXL
2025-01-26 01:37:48
Hm, I don't really have a lot of experience with Krita, but I'll have to check if this is possible
AccessViolation_
RaveSteel ``` 47M gradient_test_lossless_noise_layers.jxl 12M gradient_test_lossless_noise_no_layers.jxl ```
2025-01-26 01:41:50
My expectations were for the size to be about the same. Since input noise just can't be predicted period, I would expect the noise to take up like 99% of the data, and the gradient is just slight upticks or downticks in the prediction residuals, barely adding up to anything, regardless of whether the gradient is encoded separately or within the noise. But I don't know why it got so much worse
2025-01-26 01:42:28
Ohh
RaveSteel
2025-01-26 01:44:04
I was just trying to reencode the image with layers at higher effort only to realize that cjxl flattens layers together upon encoding
2025-01-26 01:44:13
Very unfortunate
AccessViolation_
2025-01-26 01:44:35
If the noise was fairly subtle in the no-layers version, the residuals are small, for example maybe a difference from 0 to 20 (out of 255) per pixel. But since the layered image was exported at full opacity like it says in that notice above, the differences in the noise are from 0 to 255
2025-01-26 01:44:42
Could that be it?
2025-01-26 01:44:48
idk what your noise looks like
RaveSteel
2025-01-26 01:45:33
like this
2025-01-26 01:45:42
Random RGB noise created with GIMP
AccessViolation_
2025-01-26 01:46:01
Right, and what did it look like in the no-layers, blended with the gradient?
RaveSteel
2025-01-26 01:46:34
like this
2025-01-26 01:46:52
I have to say that the noise looks quite nice though
AccessViolation_
2025-01-26 01:46:53
Ahhh yeah that explains it
2025-01-26 01:50:15
In that combined image the noise looks very minimal. If a pixel in the top right is green, the one next to it will probably be green-ish too, despite the noise. It's predictable. But your noise-only layer is basically pure information: there is no pattern, it's all over the place, it goes from the minimum pixel value to the maximum pixel value. The current pixel doesn't tell you anything about the next one. It effectively just cannot be compressed
RaveSteel
2025-01-26 01:50:28
Right, random noise
2025-01-26 01:50:45
Always hard to compress
2025-01-26 01:52:25
So, I assume that predictable patterns will do better in a separate layer while random noise will do better flattened?
2025-01-26 01:52:46
Because the flattening will make it more predictable?
AccessViolation_
2025-01-26 01:59:54
I think the issue here specifically isn't to do with whether it's flattened or not. It's just that one image has more noise.

In the flattened image, the base image effectively makes up 95% of the image as a whole, and the noise 5% (these are blended together). In the layered image, the base image takes up 100% of the first layer, and the noise takes up 100% of the second layer. They are *visually* blended so the base image and noise look like they make up 95% and 5% respectively, but that's only what it's showing you, not how it's stored.

So it's not really the layering approach that's at play here, it's that one image actually has more noise, because Krita reducing the opacity just makes it *look* like there is less noise, but that noise is actually stored in the file in full. I think if you do the experiment again, but set every layer to 100% opacity and make the noise layer transparent using the alpha channel (like by using the eraser tool), you'll see similar sizes in both of them, or hopefully the layered one being better since it's able to see the patterns in the base image more easily
2025-01-26 02:19:34
Krita's layer opacity setting is like listening to a playlist only at 30% volume. You hear less sound but your music takes the same amount of disk space. What happens when Krita flattens it or makes the actual texture itself transparent, is like re-exporting the music at 30% of the original volume. Now it actually has reduced disk space
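The incompressibility point here is easy to demonstrate with any general-purpose compressor; a minimal Python sketch using zlib (not libjxl, but the same principle applies to the noise layer vs the smooth gradient):

```python
import random
import zlib

random.seed(0)
n = 256 * 256

# Pure RGB-style noise: every byte independent and uniform -> near-incompressible
noise = bytes(random.randrange(256) for _ in range(n))

# Smooth repeating gradient: highly predictable -> compresses extremely well
gradient = bytes(i % 256 for i in range(n))

noise_c = len(zlib.compress(noise, 9))
grad_c = len(zlib.compress(gradient, 9))
print(noise_c, grad_c)  # noise stays near its original 65536 bytes
```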
RaveSteel
2025-01-26 02:19:55
The noise was already made transparent in GIMP, since Krita does not support exporting JXL at reduced opacity
2025-01-26 02:20:38
It's just hard to see in discord
AccessViolation_
2025-01-26 02:22:01
Oh - my bad
RaveSteel
2025-01-26 02:22:27
Just preview shenanigans due to discord's dark background
AccessViolation_
2025-01-26 02:23:38
Okay then I would definitely expect this to be better. It should make the background more predictable <:Thonk:805904896879493180>
2025-01-26 02:23:52
super weird
RaveSteel
2025-01-26 02:25:01
I can share the files tomorrow
AccessViolation_
2025-01-26 02:26:19
Yeah I'd love to see that
2025-01-26 02:26:46
I also prepared another test with a real bayer dithering pattern this time, to see how that goes
2025-01-26 02:27:40
But uhh it's 03:30 here so good night. jxl keeping me up 😶
monad
Orum why not -g 3 in the second command?
2025-01-26 11:40:05
e11 overrides all of `-P`, `-E`, `-I`, `-g`, `--modular_palette_colors`, `-X`, `-Y` and `--patches`. instead, it tries various set combinations of settings. it does happen to try all group sizes, but each in conjunction with different combinations of other settings.
jonnyawsom3
2025-01-28 05:39:36
Yesterday a friend and I realised they'd accidentally made a 16K image instead of an 8K one in Blender years ago, so why not run some tests... Original, ECT -1 at 512 MB RAM / 100 seconds, Effort 1 at 1.2 GB RAM / 3 seconds, Effort 2 at 1.1 GB RAM / 7 seconds
2025-01-28 05:42:35
`-g 0` actually improved effort 2. Wonder if that applies generally due to the fixed MA tree
AccessViolation_
2025-01-28 10:55:38
2025-01-28 10:55:53
1st + 2nd = 3rd
2025-01-28 11:29:11
One thing that's a bit unfortunate: the subtracted-out pattern isn't always just a single value. Especially for artistic use or palette images, the difference between any two colors that are dithered can be pretty large and also different throughout the image. Whereas 'real' dithering like turning a 16-bit gradient into an 8-bit one, will probably have values that are a bit closer together. But this is a palette image, so you'd get a palette for the base image and another palette for the delta layer
2025-01-28 11:33:51
These are the PNG, flattened (single-layer) and decomposed (separate dither and base layers) files, all exported at the maximum encoding efforts Krita supports (9, for both JXL and PNG)
RaveSteel
2025-01-28 12:09:21
nice
AccessViolation_
2025-01-28 05:13:01
I had an idea: instead of storing the subtract values of the pattern and then kAdd-blending it on, make it an overlay that's already the right color and just kReplace-blend it on, so that the dither pattern layer can be made up of indices into the same palette that the base image uses. Useful in images like pixel art where the dither pattern isn't just one value away from the surrounding pixels, which I think it otherwise would be for 'true' dithering. But I realized that in this particular example with the dithered sky, it's not worth doing this at all; it would be more effective to just store all the 2x2 tile combinations of the pattern *with* the background in patches and apply them everywhere, which should then run-length encode really well (so should this dither pattern in the first place, but something else in the pipeline appears to be messing with the values a bit before LZ77 has a chance to compress them)
2025-01-28 05:18:11
Would it be preferred if I posted things related to optimizing repeating patterns and image decomposition using layers in its own thread?
RaveSteel
2025-01-28 06:19:54
I think a thread would make sense for having continuity
_wb_
2025-01-30 05:44:05
This is the exact same data, just plotted with the axes differently: First one is how it is traditionally done, with bpp on a logscale and the metric on a linear scale. This has the effect of enlarging the low bitrates and low qualities. Second one is how I like to do it: bpp on a linear scale, metric on a suitable scale to actually see what's happening. Linear scale bpp makes the most sense to me since the cost of storage/transfer is also linear. So if 2 bpp can be reduced to 1.5 bpp, that is a larger actual saving than if 0.5 bpp can be reduced to 0.25 bpp, assuming same number of pixels. From the plot on the left you'd conclude that AVIF > JXL > WebP >> JPEG. From the plot on the right you'd conclude something different.
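_wb_'s linear-bpp argument can be sketched numerically; the 4K pixel count below is a made-up example:

```python
# Toy numbers illustrating why a linear bpp axis reflects storage cost:
# the same axis distance always means the same number of bytes saved.
def bytes_saved(bpp_before, bpp_after, pixels):
    return (bpp_before - bpp_after) * pixels / 8

pixels = 3840 * 2160
high = bytes_saved(2.0, 1.5, pixels)   # 0.5 bpp saved at the high end
low = bytes_saved(0.5, 0.25, pixels)   # 0.25 bpp saved at the low end
# On a log axis the 0.5 -> 0.25 step looks bigger (it halves the rate),
# but in actual bytes it saves only half as much as the 2.0 -> 1.5 step.
print(high, low)
```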
AccessViolation_
2025-01-31 09:02:48
Potentially found a good category of images on Wikimedia Commons for benchmarking lossless compression: [Category:PNG that should use vector graphics ](<https://commons.wikimedia.org/wiki/Category:PNG_that_should_use_vector_graphics>)
2025-01-31 09:03:10
75000 images
2025-01-31 09:04:45
Some of these appear to be photographic in nature, probably miscategorized because they look like symbols if you don't look closely
2025-01-31 09:07:46
mostly digital maps, charts, symbols, logos, chemical compounds, etc
2025-01-31 09:09:35
Hmm. Upon closer inspection some of these have pretty clear lossy compression artifacts...
2025-01-31 09:15:44
Yeah, this seems to be mostly a category with the idea that people eventually replace these with SVGs, not necessarily that they're PNGs that look like they could have been SVG renderer output, so it includes some transcoded photos and such...
2025-02-01 09:40:02
Almost everything in this image can use patches https://xkcd.com/radiation/
2025-02-01 09:40:29
2025-02-01 09:41:46
And cjxl does use patches, pretty well (distance 1, effort 7)
2025-02-01 09:43:03
Interestingly it only uses patches for the outer tiles in those grids. I wonder why that is
A homosapien
AccessViolation_ And cjxl does use patches, pretty well (distance 1, effort 7)
2025-02-01 10:11:11
How do you view the use of patches?
AccessViolation_
2025-02-01 10:27:35
`~/Programs/jxlatte/jxlatte.jar 1528229508475.jxl out.png`
2025-02-01 10:28:10
The patch preview only works on lossy images, at least in this (possibly outdated) version of jxlatte
Orum
2025-02-01 10:29:06
do patches work at all on lossless?
AccessViolation_
2025-02-01 10:29:18
Yeah
2025-02-01 10:31:21
``` 72kB d0-patches0.jxl 34kB d0-patches1.jxl ```
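For reference, the two files above would differ only in the patches flag; a sketch of the commands (input filename hypothetical; `--patches` is a real cjxl option):

```shell
cjxl in.png d0-patches0.jxl -d 0 --patches=0   # patch detection disabled
cjxl in.png d0-patches1.jxl -d 0 --patches=1   # patch detection forced on
```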
jonnyawsom3
AccessViolation_ `~/Programs/jxlatte/jxlatte.jar 1528229508475.jxl out.png`
2025-02-01 11:47:38
djxl -s 8 also works
monad
AccessViolation_ Interestingly it only uses patches for the outer tiles in those grids. I wonder why that is
2025-02-01 01:15:28
It's related to the heuristic for text detection, which looks for some threshold of similar pixels in a region to identify background content. The perimeters of the grids met the threshold for possible patch extraction, but not the inner areas.
AccessViolation_
2025-02-01 04:09:06
Interesting
veluca
AccessViolation_ Interestingly it only uses patches for the outer tiles in those grids. I wonder why that is
2025-02-01 06:35:16
because I'm bad at writing code 😛
2025-02-01 06:35:43
more seriously, probably in the center of the grid the code thinks there's a different "background" color
AccessViolation_
2025-02-01 06:36:05
no! no self deprecating >:(
2025-02-01 06:38:14
what would happen if you disabled the heuristic, would it not create patches at all or would it try way too hard and take a lot of time, but get most of the patch candidates in the end?
veluca because I'm bad at writing code 😛
2025-02-01 06:39:55
i don't know how patch detection works in detail but i was pleasantly surprised to see it got basically all of the text, which is what it was designed for, so i'd say you did pretty good :)
veluca
2025-02-01 06:41:21
no patches at all
2025-02-01 06:42:13
I even have a fairly good idea of a better heuristic, but who has time
AccessViolation_
2025-02-01 06:43:05
do you have time to write about it instead? because I am curious how it'd work 👀
veluca
2025-02-01 06:45:17
I mean, a vague idea is to do it the same way you do a lempel-ziv compressor, but with something like 4x4 patches that get hashed
2025-02-01 06:45:42
and $some way of extending the 4x4
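A toy sketch of that LZ-style idea, with every name made up for illustration: hash each 4x4 tile, and tiles seen more than once become patch candidates (extending beyond 4x4 is left out, as veluca says):

```python
from collections import defaultdict

def tile_candidates(img, tile=4):
    """Group identical non-overlapping tile x tile blocks by hashing them."""
    h, w = len(img), len(img[0])
    seen = defaultdict(list)  # tile hash -> list of (y, x) positions
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            key = tuple(img[y + dy][x + dx]
                        for dy in range(tile) for dx in range(tile))
            seen[hash(key)].append((y, x))
    # tiles occurring more than once are candidates for patch extraction
    return {k: v for k, v in seen.items() if len(v) > 1}

# 8x8 image: two identical 4x4 checker tiles on top, two flat tiles below
checker = [[(x + y) % 2 for x in range(4)] for y in range(4)]
flat = [[7] * 4 for _ in range(4)]
img = [checker[r] + checker[r] for r in range(4)] + \
      [flat[r] + flat[r] for r in range(4)]
cands = tile_candidates(img)
print(len(cands))  # two groups of repeated tiles
```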
AccessViolation_
2025-02-01 06:52:03
if you did the extending part with some sort of recursion, as in first running it on 4x4 pixels, then again on some 2x2 of those, and so on, you might even be able to utilize that logic directly to achieve a 'hierarchical patches' idea I had. The idea was that since patches work on the letter level, not on the word level, you have the level 0 patch layer which contains the letters, then a level 1 patch layer which contains indices into that to create words, then using that as another patch layer. Then the final image can index into both, prioritizing larger patches like words, but also being able to index into the individual letters for encoding uncommon words that don't deserve their own larger patch. Letters and words are just an example, it could work for anything though. If that's possible at least, it was just an idea I had
2025-02-01 06:55:07
Incidentally I also have an active project for a cheap lz77 heuristic that could tell predictors how likely their residuals are lz77-able, instead of just scoring them by the residuals themselves
2025-02-01 06:58:33
(sorry for yapping)
A homosapien
2025-02-01 07:19:43
This server is all about yapping, no need to apologize
AccessViolation_
2025-02-01 07:24:56
i think it can seem a little rude since i show genuine interest in what someone else is up to and post a memory dump of my own right after, so i've been trying to recognize it ahead of time
2025-02-01 07:25:08
trying to keep my adhd in check a bit
monad
AccessViolation_ what would happen if you disabled the heuristic, would it not create patches at all or would it try way too hard and take a lot of time, but get most of the patch candidates in the end?
2025-02-01 07:34:05
The current design looks for features surrounded by background. You can relax the parameters of the heuristic identifying areas of background, and you can relax the constraint that a patch candidate needs to be surrounded by background. But using more patches is not helpful where other tools would be more efficient. It's a more complex problem than simply extracting all repeated content, considering both density and compute efficiency.
AccessViolation_
monad The current design looks for features surrounded by background. You can relax the parameters of the heuristic identifying areas of background, and you can relax the constraint that a patch candidate needs to be surrounded by background. But using more patches is not helpful where other tools would be more efficient. It's a more complex problem than simply extracting all repeated content, considering both density and compute efficiency.
2025-02-01 07:50:44
so i guess the problem here is that it sees too many green tiles compared to white background. I guess if a flood-fill type of algorithm was used for the background it might have worked?
2025-02-01 07:51:10
in this specific case I mean
monad
2025-02-01 07:52:19
My point is even if you find more patches, you may not benefit density at some compute target.
AccessViolation_
2025-02-01 07:53:20
that's fair, I was mostly asking about the first part of your message. it makes sense that patches everywhere aren't always the best solution (but that seems like a separate thing from how they're detected, unless the background algorithm also works to not detect them where they usually wouldn't be beneficial)
monad
2025-02-01 07:59:25
Every four pixels it looks at the surrounding area and identifies how many of those pixels are the same as the pixel in question. If there are enough, then it will later search for patches in that area. So, for any color in the inner part of the grid, there are not enough pixels of the same color according to the heuristic. Yes, you can change how it works. Yes, it was designed to be effective by some measure.
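A toy version of that heuristic (window size and threshold are invented for illustration, not libjxl's actual values): a sampled pixel counts as background only if enough of its neighbourhood shares its color, which is exactly what fails in the dense grid interior:

```python
def is_background(img, y, x, radius=2, threshold=0.8):
    """Count same-colored pixels in a (2*radius+1)^2 window around (y, x)."""
    h, w = len(img), len(img[0])
    color, same, total = img[y][x], 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                total += 1
                same += img[ny][nx] == color
    return same / total >= threshold

flat = [[0] * 9 for _ in range(9)]                          # uniform background
grid = [[(x + y) % 2 for x in range(9)] for y in range(9)]  # dense checkerboard
# The flat region passes; the checkerboard interior never reaches the threshold.
print(is_background(flat, 4, 4), is_background(grid, 4, 4))
```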
2025-02-01 08:04:16
I posted some results from tweaking patches here: https://discord.com/channels/794206087879852103/803645746661425173/1246697827069857852, eventually adding tile detection on top of the text detection. While I could improve outcomes for many images, It was a more complex task to accurately predict whether any individual patch would benefit density.
2025-02-01 08:05:51
I'm a bit skeptical about those squares for example, I imagine they're not as important as the text in that image.
AccessViolation_
2025-02-01 08:11:59
What are some ways in which using patches could get worse? I can see some ways in which it might not be *better*, like yeah those green tiles will probably lz77 compress really well regardless of the predictor used (so long as the MA tree keeps the one that is used consistent), but using patches for it would also lz77 compress well since it's the same indices over and over
_wb_
2025-02-01 08:14:03
If the patches are large enough it should usually be a good idea to use them. Main question is how much time to spend on finding patches...
jonnyawsom3
AccessViolation_ What are some ways in which using patches could get worse? I can see some ways in which it might not be *better*, like yeah those green tiles will probably lz77 compress really well regardless of the predictor used (so long as the MA tree keeps the one that is used consistent), but using patches for it would also lz77 compress well since it's the same indices over and over
2025-02-01 08:18:47
This is one example https://discord.com/channels/794206087879852103/803645746661425173/1326476011654873098
AccessViolation_
2025-02-01 08:20:28
Hmm
- detect background color (flood fill, common 4x4 colors, etc)
- do contrast-based edge detection, and create bounding boxes around good-patch-sized blobs of edges that include or are inside of much of that background color
- if many blobs turn out to have bounding boxes of the same size, you might have found a good patch candidate
- now do more expensive patch candidate selection on those previous candidates
This is one example https://discord.com/channels/794206087879852103/803645746661425173/1326476011654873098
2025-02-01 08:23:51
that's adorable it's trying so hard
A homosapien
2025-02-04 09:53:26
I downclocked my RAM to see how much memory speed impacts performance. The image is 45 MP, all tests ran at lossless effort 1, 20 reps.
```
╔════════════╤═════════════════════════╗
║  Decoding  │  i5 12400 - 32 GB DDR4  ║
╠════════════╪════════════╤════════════╣
║            │  2133 MHz  │  3200 MHz  ║
╟────────────┼────────────┼────────────╢
║ 1 Thread   │ 29.9 MP/s  │ 32.4 MP/s  ║
╟────────────┼────────────┼────────────╢
║ 4 Threads  │ 108.1 MP/s │ 120.1 MP/s ║
╟────────────┼────────────┼────────────╢
║ 8 Threads  │ 175.0 MP/s │ 186.0 MP/s ║
╟────────────┼────────────┼────────────╢
║ 12 Threads │ 214.6 MP/s │ 216.2 MP/s ║
╚════════════╧════════════╧════════════╝

╔════════════╤═════════════════════════╗
║  Encoding  │  i5 12400 - 32 GB DDR4  ║
╠════════════╪════════════╤════════════╣
║            │  2133 MHz  │  3200 MHz  ║
╟────────────┼────────────┼────────────╢
║ 1 Thread   │ 41.2 MP/s  │ 43.2 MP/s  ║
╟────────────┼────────────┼────────────╢
║ 4 Threads  │ 120.8 MP/s │ 125.2 MP/s ║
╟────────────┼────────────┼────────────╢
║ 8 Threads  │ 163.7 MP/s │ 172.7 MP/s ║
╟────────────┼────────────┼────────────╢
║ 12 Threads │ 182.3 MP/s │ 186.6 MP/s ║
╚════════════╧════════════╧════════════╝
```
RaveSteel
2025-02-04 10:16:46
Would be interesting to test on an AMD Ryzen, which are more sensitive to lower RAM speeds
jonnyawsom3
2025-02-05 01:10:59
I'm on a Ryzen 1700 with 16 GB of DDR4 at 2400 MHz. Those speeds seem about the same as mine, so no major differences at least
A homosapien
Meow The original PNG for reference
2025-02-06 12:34:53
Just a quick follow up, this image is cursed in more ways than I thought. WebP actually performs *worse* than PNG, which is surprising. First time I've seen that happen.
```
Optimized PNG - 356,474 bytes
WebP -z 9 ----- 408,200 bytes
Libjxl 0.11 --- 313,266 bytes
```
Parameters are `cjxl in.png out.jxl -d 0 -e 10 -P 0 -E 4 --patches 0`. For images like these it's often the case that the "none" predictor works best, similar to PNG.
RaveSteel
2025-02-06 12:38:16
AVIF performs better than the WebP here, interesting: 318,437 bytes Optimized.avif
A homosapien
2025-02-06 12:39:44
Terrifying <:tfw:843857104439607327>
RaveSteel
2025-02-06 12:39:52
Weird indeed
2025-02-06 12:40:11
It is even almost better than your e10 JXL
A homosapien
2025-02-06 12:41:24
It's not as bad as that one time GIF beat JXL though 😅
RaveSteel
2025-02-06 12:42:42
I should test this again on a larger GIF set. But as we've found out by now, GIF -> optimised APNG -> JXL will often result in smaller filesizes than just GIF -> JXL
jonnyawsom3
A homosapien It's not as bad as that one time GIF beat JXL though 😅
2025-02-06 12:43:32
I have that file here. GIF beat *everything*
A homosapien
2025-02-06 12:44:31
A mythical image beyond our realm of reality. <:NotLikeThis:805132742819053610> It defies all logic somehow
RaveSteel
2025-02-06 12:45:20
Interesting
jonnyawsom3
2025-02-06 12:45:22
All while being a real-world example
A homosapien Just a quick follow up, this image is cursed in more ways than I thought. WebP actually performs *worse* than PNG which is surprising. First time I've seen that happen.``` Optimized PNG - 356,474 bytes WebP -z 9 ----- 408,200 bytes Libjxl 0.11 --- 313,266 bytes```Parameters are `cjxl in.png out.jxl -d 0 -e 10 -P 0 -E 4 --patches 0` For images like these it's often the case that the "none" predictor works best, similar to PNG.
2025-02-06 12:46:55
Group size had no improvement?
A homosapien
2025-02-06 12:47:12
-g 2 was optimal iirc
2025-02-06 12:47:33
Or was it -g 1 I don't remember
RaveSteel
2025-02-06 12:50:12
Time to bust out e11
2025-02-06 12:50:31
Dammit, still larger
2025-02-06 12:51:35
e11 is only marginally smaller than WebP <:KekDog:805390049033191445>
jonnyawsom3
2025-02-06 12:52:02
Eugh... Just got sent an image by a friend, was practically uncompressed so ran ECT... Hardly improved, I open it and look a bit closer....
2025-02-06 12:52:19
Not even dithering, just pure, RGB noise.... God damn it
RaveSteel
2025-02-06 12:52:49
For what purpose did he have that image lying around?
jonnyawsom3
2025-02-06 12:53:32
Commissioned artwork. They had sent me it before but much smaller. Turns out it was a WIP so the noise filter hadn't been applied, along with being a re-compressed JPEG
RaveSteel
2025-02-06 01:00:52
This JXL uses around 40% of my 7950x to play lol
Meow
A homosapien Just a quick follow up, this image is cursed in more ways than I thought. WebP actually performs *worse* than PNG which is surprising. First time I've seen that happen.``` Optimized PNG - 356,474 bytes WebP -z 9 ----- 408,200 bytes Libjxl 0.11 --- 313,266 bytes```Parameters are `cjxl in.png out.jxl -d 0 -e 10 -P 0 -E 4 --patches 0` For images like these it's often the case that the "none" predictor works best, similar to PNG.
2025-02-06 01:32:20
cwebp turns it into RGB. That may be the cause
A homosapien
2025-02-06 01:58:40
It should still be palettized tho since there are only 256 discrete colors
AccessViolation_
Not even dithering, just pure, RGB noise.... God damn it
2025-02-06 08:30:58
was this a challenge because of what you have in your bio? replace it with photon noise, I doubt they'd notice :3
jonnyawsom3
AccessViolation_ was this a challenge because of what you have in your bio? replace it with photon noise, I doubt they'd notice :3
2025-02-06 09:07:02
I can't 'just' replace the noise. I'd need the original file, and photon noise doesn't work properly on Lossless
AccessViolation_
2025-02-06 09:33:58
nah i know, i just meant you could probably manually recreate something that looked close enough to the original if you wanted to
Meow
2025-02-12 11:51:03
```
50 files (libjxl v0.11.1)
2,615,247,628 bytes  oxipng -o 0
1,590,449,934 bytes  cjxl -d 0 (e7)                    ~22 minutes
1,355,276,849 bytes  cjxl -d 0 -e 10 -E 4 -I 100 -g 3  ~61 hours
```
2025-02-12 11:57:41
```
e7   5859 x 7680,   2.498 MP/s [2.50, 2.50]
e10  10500 x 10500, 0.002 MP/s [0.00, 0.00]
```
The slowest of each
RaveSteel
2025-02-12 11:57:59
Now compare how long it takes to open in an image viewer
2025-02-12 11:58:16
Or use ffmpeg -benchmark
2025-02-12 11:58:58
Which is a whole lot slower than opening the image normally, but at least it is repeatable and gives proper output
Meow
2025-02-12 11:59:06
Not really relevant on macOS
RaveSteel
2025-02-12 11:59:49
Elaborate
Meow
2025-02-12 12:00:51
Already slow for Preview to open a normal-sized JXL
RaveSteel
2025-02-12 12:01:22
Darn
2025-02-12 12:01:52
I would still be interested in an ffmpeg comparison if you are inclined
Meow
2025-02-12 12:02:12
I don't have ffmpeg installed
RaveSteel
2025-02-12 12:08:17
Oh well
2025-02-12 12:08:20
Nevermind then
Meow
2025-02-12 01:58:11
Yeah posted publicly (SFW) and nothing new from what I posted here https://meow.pro/meow/e-10-e-4-i-100-g-3/
2025-02-12 01:59:44
I think I won't use that again soon
TheBigBadBoy - 𝙸𝚛
Meow Yeah posted publicly (SFW) and nothing new from what I posted here https://meow.pro/meow/e-10-e-4-i-100-g-3/
2025-02-12 02:28:48
wait what OneDrive has thumbnails for JXL [⠀](https://cdn.discordapp.com/emojis/852007419474608208.webp?size=48&name=av1_woag)
2025-02-12 02:29:54
quite a strange collection for benchmarking tho <:kekw:808717074305122316>
jonnyawsom3
2025-02-12 02:40:20
Oh, you didn't time oxipng?
2025-02-12 02:44:47
The blog post is SFW, but the OneDrive folder certainly isn't. Also surprised the thumbnails load for JXL, guess we can add another to <#803574970180829194>
Meow
Oh, you didn't time oxipng?
2025-02-12 03:10:38
I just wanted to make a fair test so I used -o 0 only for faster optimisation and removing metadata before testing further
2025-02-12 03:12:38
I used that -e 10 for normal-sized ones too but they're not in this test
The blog post is SFW, but the OneDrive folder certainly isn't. Also surprised the thumbnails load for JXL, guess we can add another to <#803574970180829194>
2025-02-12 03:15:02
You must click three different links to reach the NSFW
TheBigBadBoy - 𝙸𝚛 wait what OneDrive has thumbnails for JXL [⠀](https://cdn.discordapp.com/emojis/852007419474608208.webp?size=48&name=av1_woag)
2025-02-12 03:18:56
Hmm saw a French word
TheBigBadBoy - 𝙸𝚛
2025-02-12 03:23:57
Indeed
Meow
2025-02-12 03:25:27
Oh a few. Download, ago [number] days
2025-02-12 03:28:21
I could have used `oxipng -o 4` but it's just too good compared to most of the available PNG files
2025-02-12 03:29:10
`-o 0` already reduced significantly
Fox Wizard
TheBigBadBoy - 𝙸𝚛 quite a strange collection for benchmarking tho <:kekw:808717074305122316>
2025-02-12 03:37:00
Nah, nothing strange about a "Bustycat in Heat" <:KekDog:884736660376535040>
TheBigBadBoy - 𝙸𝚛
2025-02-12 03:48:51
But I can't blame him as I did the same once with that one: || https://cdn.discordapp.com/attachments/673202643916816384/1127986007724261498/1104459542487633950.webp || <:KekDog:805390049033191445>
Fox Wizard
2025-02-12 03:59:15
At least it doesn't reveal anything XD
TheBigBadBoy - 𝙸𝚛
2025-02-12 04:00:20
true. I was surprised that FFmpeg's default settings give a different output than dwebp
2025-02-12 04:01:22
to fix that you'd need `-sws_flags full_chroma_inp+full_chroma_int+accurate_rnd` <:KekDog:805390049033191445>
jonnyawsom3
2025-02-12 05:12:14
I think I tried to bench those Fuck off windows I'm ranting about FFMPEG
2025-02-12 05:13:38
I tried to bench those arguments to see how much of a difference they made, since there's conflicting info about whether `accurate_rnd` actually does anything in newer versions, or whether `full_chroma_inp` applies automatically. I know `full_chroma_int` did have an effect, though I don't recall how drastic it was
I think I tried to bench those Fuck off windows I'm ranting about FFMPEG
2025-02-12 05:17:06
Oh, interesting, the mouse didn't show
spider-mario
I think I tried to bench those Fuck off windows I'm ranting about FFMPEG
2025-02-12 05:28:40
it sounds like it’s trying to get you to think about your retirement plan or something
jonnyawsom3
spider-mario it sounds like it’s trying to get you to think about your retirement plan or something
2025-02-12 05:38:44
It's more like telling me to buy a funeral plan. This system is 8 years old so I need to kill it off before their pestering can do anything anyway
2025-02-12 05:41:04
Doesn't help the only upgrade paths are being discontinued. So I'll have to build an entire new system most likely, which I certainly can't afford
RaveSteel
2025-02-12 05:45:55
Just wait for Windows 12 /s
spider-mario
2025-02-12 06:07:42
or maybe install ReactOS
jonnyawsom3
2025-02-12 11:02:47
Trying a Clang build <@207980494892040194> cooked up
```
Distance 1 Effort 7 Github
7680 x 4320, 5.436 MP/s [5.44, 5.44], , 1 reps, 16 threads.
Wall time:   0 days, 00:00:06.749 (6.75 seconds)
User time:   0 days, 00:00:05.406 (5.41 seconds)
Kernel time: 0 days, 00:00:50.781 (50.78 seconds)

Distance 1 Effort 7 Clang
7680 x 4320, 7.146 MP/s [7.15, 7.15], , 1 reps, 16 threads.
Wall time:   0 days, 00:00:05.332 (5.33 seconds)
User time:   0 days, 00:00:04.890 (4.89 seconds)
Kernel time: 0 days, 00:00:35.140 (35.14 seconds)

Lossless Effort 1 Github
7680 x 4320, 114.661 MP/s [114.66, 114.66], , 1 reps, 16 threads.
Wall time:   0 days, 00:00:00.938 (0.94 seconds)
User time:   0 days, 00:00:00.062 (0.06 seconds)
Kernel time: 0 days, 00:00:02.781 (2.78 seconds)

Lossless Effort 1 Clang
7680 x 4320, 305.596 MP/s [305.60, 305.60], , 1 reps, 16 threads.
Wall time:   0 days, 00:00:00.785 (0.79 seconds)
User time:   0 days, 00:00:00.093 (0.09 seconds)
Kernel time: 0 days, 00:00:01.703 (1.70 seconds)

Lossless Effort 7 Github
7680 x 4320, 3.809 MP/s [3.81, 3.81], , 1 reps, 16 threads.
Wall time:   0 days, 00:00:09.356 (9.36 seconds)
User time:   0 days, 00:00:04.296 (4.30 seconds)
Kernel time: 0 days, 00:01:48.828 (108.83 seconds)

Lossless Effort 7 Clang
7680 x 4320, 5.416 MP/s [5.42, 5.42], , 1 reps, 16 threads.
Wall time:   0 days, 00:00:06.814 (6.81 seconds)
User time:   0 days, 00:00:04.578 (4.58 seconds)
Kernel time: 0 days, 00:01:14.546 (74.55 seconds)
```
2025-02-12 11:02:55
```
Github djxl
15000 x 13746, 188.104 MP/s [188.10, 188.10], , 1 reps, 16 threads.
Creation time 2025/02/12 22:45:56.717
Exit time     2025/02/12 22:45:58.055
Wall time:   0 days, 00:00:01.338 (1.34 seconds)
User time:   0 days, 00:00:02.187 (2.19 seconds)
Kernel time: 0 days, 00:00:13.500 (13.50 seconds)

Clang djxl
15000 x 13746, 239.420 MP/s [239.42, 239.42], , 1 reps, 16 threads.
Wall time:   0 days, 00:00:01.107 (1.11 seconds)
User time:   0 days, 00:00:02.390 (2.39 seconds)
Kernel time: 0 days, 00:00:10.078 (10.08 seconds)
```
A homosapien
2025-02-13 04:48:46
Quackdoc
A homosapien
2025-02-13 05:12:14
This is neat, I'll have to check about doing this locally on my phone. I also want to try and compare using png-rs eventually, but it has no C API T.T
Meow
2025-02-13 06:20:24
My next benchmark project will be WebP near-lossless, but using JXL as one of the metrics
2025-02-13 06:21:56
Only one source image will be mine this time
2025-02-13 08:53:38
Earlier tests showed that WebP near-lossless 0, 20, 40, 60, and 80 roughly correspond to JXL distance 2.0, 1.2, 0.5, 0.3, and 0.2
Demiurge
Fox Wizard Nah, nothing strange about a "Bustycat in Heat" <:KekDog:884736660376535040>
2025-02-13 11:14:32
I love meow's discord "about me"
Fox Wizard
2025-02-13 11:18:43
Why did I read it <:KekDog:884736660376535040>
CrushedAsian255
2025-02-13 11:32:19
Something about data compression always makes one either an otaku or furry for some reason
2025-02-13 11:32:24
It’s something I’ve noticed
Meow
Meow Earlier tests showed that WebP near-lossless 0, 20, 40, 60, and 80 roughly correspond to JXL distance 2.0, 1.2, 0.5, 0.3, and 0.2
2025-02-15 12:44:03
1.7, 0.95, 0.55, 0.35 and 0.25 in the new test
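Reading along: Meow's measured pairs above can be turned into a quick lookup. A minimal sketch, using the newer numbers (near-lossless 0, 20, 40, 60, 80 → distance 1.7, 0.95, 0.55, 0.35, 0.25) with linear interpolation in between; the function name is made up, and the mapping is specific to this corpus, not a property of the codecs.

```python
# Illustrative helper (hypothetical name): interpolate the measured
# correspondence between cwebp -near_lossless levels and cjxl -d distances.
PAIRS = [(0, 1.7), (20, 0.95), (40, 0.55), (60, 0.35), (80, 0.25)]

def jxl_distance_for_near_lossless(level: float) -> float:
    """Linearly interpolate a JXL distance for a WebP near-lossless level."""
    pts = sorted(PAIRS)
    if level <= pts[0][0]:
        return pts[0][1]
    if level >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= level <= x1:
            t = (level - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

print(jxl_distance_for_near_lossless(30))  # midway between the 20 and 40 anchors
```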
pshufb
CrushedAsian255 Something about data compression always makes one either an otaku or furry for some reason
2025-02-15 01:29:14
A lot of anime watchers seem to care about video quality and so the path from that to being interested in codecs seems straightforward. See e.g. dark shikari.
Demiurge
2025-02-15 01:40:11
What ever happened to dark shikari
2025-02-15 01:40:22
There used to be a blog and everything
Orum
2025-02-15 01:40:59
disappeared into the ether at some point, never to be heard from again
Demiurge
CrushedAsian255 Something about data compression always makes one either an otaku or furry for some reason
2025-02-15 01:41:15
How else are you going to compress all those terabytes of waifu data
CrushedAsian255
Demiurge How else are you going to compress all those terabytes of waifu data
2025-02-15 01:41:27
Fair point
Meow
2025-02-15 07:15:54
The data collection is done https://docs.google.com/spreadsheets/d/1vq0VLsB580yGS9LXAuxf6LTv8CVM_Jv-tBln5ruQ5do/edit?usp=sharing
2025-02-15 07:16:36
including corresponding lossy WebP, Jpegli, and JXL quality/distance
2025-02-16 04:43:04
The JXL sheet is there as a reference. The main comparison is to its own lossy part, and to Jpegli.
2025-02-16 04:54:45
I may test the WebP part again, but without `-m 6`
2025-02-16 04:57:16
About 8 to 10 times slower than JXL with `-m 6`
2025-02-16 03:29:34
The next benchmark project is finding the ssimulacra2 scores corresponding to JXL distances for almost 400 digital artworks
2025-02-16 03:30:48
d0.5 shows an average of 90.72142788, which is higher than I expected
2025-02-16 04:34:43
Tested four today. Interesting `d0.3 91.96229814 d0.5 90.72142788 d1.0 87.68818199 d3.0 77.23289308`
2025-02-16 05:22:17
Higher distances bring some more significant outliers. For d3.0 in this benchmark, one image goes as low as 58.63179688, while another is as high as 84.63561018
Meow The data collection is done https://docs.google.com/spreadsheets/d/1vq0VLsB580yGS9LXAuxf6LTv8CVM_Jv-tBln5ruQ5do/edit?usp=sharing
2025-02-18 02:06:32
I've updated it again to add the 10th image. This is my first time doing a detailed benchmark; any comments?
2025-02-18 02:11:58
Not sure if 10 distinct images are enough for a benchmark
jonnyawsom3
2025-02-18 05:37:48
<@263300458888691714> Not sure how much hassle it is for you to run tests, but any chance you could try running `-d 0 -e 2` with and without `-g 0`? I've been seeing it consistently outperform the default of 1 but want to see if it holds up with a larger corpus. If it does, we could change the group size by default for effort 2
monad
2025-02-18 06:48:16
I'll try it
2025-02-18 11:27:48
benchmark_xl v0.12.0 f73fc6f0 [\_AVX2\_,SSE4,SSE2]
20 total threads, 2030 tasks, 0 threads, 20 inner threads
```
1015 images
Encoding      kPixels      Bytes        BPP   E MP/s   D MP/s
-------------------------------------------------------------
jxl:d0:2      1039647  800359291  6.1586958   61.534  264.353
jxl:d0:2:g0   1039647  800953474  6.1632680   68.347  353.244
Aggregate:    1039647  800656327  6.1609815   64.851  305.583
```
benchmark_xl v0.12.0 f73fc6f0 [\_AVX2\_,SSE4,SSE2]
20 total threads, 2030 tasks, 0 threads, 0 inner threads
```
1015 images
Encoding      kPixels      Bytes        BPP   E MP/s   D MP/s
-------------------------------------------------------------
jxl:d0:2      1039647  800359291  6.1586958   24.545   38.439
jxl:d0:2:g0   1039647  800953474  6.1632680   24.185   37.945
Aggregate:    1039647  800656327  6.1609815   24.364   38.191
```
jonnyawsom3
2025-02-18 11:53:49
Hmm, so good for multithreaded (de)code but not a density improvement
_wb_
2025-02-18 12:17:11
my guess is it will depend quite a bit on the image contents: e.g. for photos it likely doesn't make much difference; for simple illustrations with lines and gradients that are well-predicted by the gradient predictor, it could help a bit to bump up the group size (fewer poorly predicted pixels); while images with more "local" characteristics might benefit from a smaller group size so the histograms fit the local contents better.
monad
2025-02-18 01:21:05
some categories

**LPCB** 20 inner threads
```
107 images
Encoding      kPixels       Bytes        BPP   E MP/s   D MP/s
--------------------------------------------------------------
jxl:d0:2      1154190  1266224502  8.7765409  141.070  471.324
jxl:d0:2:g0   1154190  1272288235  8.8185703  163.681  519.575
```
0 inner threads
```
107 images
Encoding      kPixels       Bytes        BPP   E MP/s   D MP/s
--------------------------------------------------------------
jxl:d0:2      1154190  1266224502  8.7765409   27.191   38.685
jxl:d0:2:g0   1154190  1272288235  8.8185703   26.222   37.993
```
**Scope ITAP** 20 inner threads
```
255 images
Encoding      kPixels       Bytes        BPP   E MP/s   D MP/s
--------------------------------------------------------------
jxl:d0:2      1125814  1281053819  9.1031291  103.148  383.918
jxl:d0:2:g0   1125814  1280877871  9.1018788  112.521  431.984
```
**Scope Art** 20 inner threads
```
443 images
Encoding      kPixels       Bytes        BPP   E MP/s   D MP/s
--------------------------------------------------------------
jxl:d0:2      2663857  1435281564  4.3103848  132.347  473.021
jxl:d0:2:g0   2663857  1439479421  4.3229916  145.857  524.062
```
**Scope Pixiv** 20 inner threads
```
637 images
Encoding      kPixels       Bytes        BPP   E MP/s   D MP/s
--------------------------------------------------------------
jxl:d0:2      1157230  1021934712  7.0646900   76.394  289.462
jxl:d0:2:g0   1157230  1020343798  7.0536919   83.989  342.313
```
**Scope Manga** 20 inner threads
```
349 images
Encoding      kPixels      Bytes        BPP   E MP/s   D MP/s
-------------------------------------------------------------
jxl:d0:2      1005250  453271317  3.6072323   94.127  459.246
jxl:d0:2:g0   1005250  454549965  3.6174080  102.502  516.090
```
2025-02-18 01:21:27
**Scope GameScreens** 0 inner threads
```
590 images
Encoding      kPixels       Bytes        BPP   E MP/s   D MP/s
--------------------------------------------------------------
jxl:d0:2      2748558  2368756369  6.8945405   27.853   39.110
jxl:d0:2:g0   2748558  2361767219  6.8741978   26.321   38.320
```
**Scope PixelArt** 0 inner threads
```
2051 images
Encoding      kPixels      Bytes        BPP   E MP/s   D MP/s
-------------------------------------------------------------
jxl:d0:2      3929080  293349229  0.5972884   28.522   47.610
jxl:d0:2:g0   3929080  302902542  0.6167399   29.290   47.506
```
**Scope GameSets** 20 inner threads
```
545 images
Encoding      kPixels      Bytes        BPP   E MP/s   D MP/s
-------------------------------------------------------------
jxl:d0:2       222827  106846745  3.8360335   28.826  152.352
jxl:d0:2:g0    222827  107011990  3.8419661   32.104  225.266
```
0 inner threads
```
545 images
Encoding      kPixels      Bytes        BPP   E MP/s   D MP/s
-------------------------------------------------------------
jxl:d0:2       222827  106846745  3.8360335   18.495   35.072
jxl:d0:2:g0    222827  107011990  3.8419661   18.505   35.046
```
**Scope LowPoly** 20 inner threads
```
431 images
Encoding      kPixels      Bytes        BPP   E MP/s   D MP/s
-------------------------------------------------------------
jxl:d0:2      1306519  829813172  5.0810617   85.794  374.639
jxl:d0:2:g0   1306519  826829751  5.0627938   93.495  432.108
```
**Scope Fractals** 0 inner threads
```
308 images
Encoding      kPixels       Bytes         BPP   E MP/s   D MP/s
--------------------------------------------------------------
jxl:d0:2      1105849  1445407312  10.4564492   23.220   34.870
jxl:d0:2:g0   1105849  1442358292  10.4343918   21.973   34.302
```
jonnyawsom3
2025-02-18 01:45:38
Interesting, my bpp are between 0.01 and 0.08 lower with `-g 0`, averaging around 0.05. Though encode speed is lower too, suggesting the heuristics are doing something extra
Meow
2025-02-20 01:14:45
```
     byte     sec  command
  857,192          (gif)
1,039,970   2.010  -d 0
  682,581  39.250  -d 0 -e 10 -E 4 -I 100 -g 3
```
2025-02-20 01:15:21
A delicious benchmark
lonjil
2025-02-20 01:40:47
>see spoilered benchmark image
>"oh this is probably just porn or whatever"
>*dies instantly upon clicking*
RaveSteel
2025-02-20 01:46:04
[roach](https://cdn.discordapp.com/emojis/626353358445477888.gif?size=128&animated=true&name=roach)
AccessViolation_
Meow ``` byte sec 857,192 (gif) 1,039,970 2.010 -d 0 682,581 39.250 -d 0 -e 10 -E 4 -I 100 -g 3```
2025-02-20 02:47:14
dr kel when no MREs
_wb_
2025-02-20 04:24:30
why does the background of that gif have such flickering dithering going on?
2025-02-20 04:25:17
probably it could be quite a bit smaller if it was 'cleaner'
Meow
2025-02-21 02:07:29
Just a quick search after a news report about some customer accidentally eating the head of a cockroach inside a taro ice pop
2025-02-21 04:32:24
Oh a cockroach leg only
Quackdoc
2025-02-21 04:34:38
never heard of that until now, and I won't be having it ever, thanks
CrushedAsian255
_wb_ why does the background of that gif have such flickering dithering going on?
2025-02-21 04:55:50
Probably a naive video-to-GIF converter
jonnyawsom3
2025-02-21 05:10:36
Per frame palette
Meow
2025-02-24 04:50:17
Almost full with only three JXL (d3) images on this device
```
  551,522
  566,845
  222,794
= 1,341,161
```
_wb_
2025-02-24 05:07:48
this is the SFW folder? 🥵
Tirr
2025-02-24 05:14:31
maybe 💾 means "not"
_wb_
2025-02-24 05:17:06
I remember when I first saw a 3.5 inch floppy I thought "oh so this is a hard disk?"
2025-02-24 05:17:22
I was used to 5.25 inch floppies which were actually, you know, floppy
2025-02-24 05:21:11
it didn't help that there was often this "HD" symbol on them (for "high density", 1.44 MB) which of course I would interpret as "hard disk"
2025-02-24 05:22:11
took me a while before I understood that a hard disk was actually something else and that this hard-case floppy with "HD" on it was actually also just a floppy 🙂
AccessViolation_
2025-02-24 06:20:33
i didn't know there were floppy disks that were actually floppy when assembled. i always thought the 'floppy' part was referring to the internal disk in the casing
Tirr maybe 💾 means "not"
2025-02-24 06:21:30
||the images are there to turn your floppy disk into a hard drive||
Orum
AccessViolation_ ||the images are there to turn your floppy disk into a hard drive||
2025-02-24 06:30:19
||yeah, I had a teacher in college who had a thick accent and pronounced the word 'disks' like 'dicks'... you can see where I'm going with this one when it came to floppies and HDs||
AccessViolation_
2025-02-24 06:32:14
lmao
jonnyawsom3
2025-02-24 07:04:57
Meanwhile I never had the privilege of seeing a floppy in person, until last year when a friend brought an old digital camera that used floppies to store the photos. I was going to JXL them, but we ran out of time 😔
_wb_
2025-02-24 08:02:48
5.25 inch floppies are floppy. You could quite easily fold one in two if you wanted to (though it would destroy it)
2025-02-24 08:04:16
I never used those larger 8 inch floppies but I heard rumors that they were such low density that you could in fact fold them, mail them to someone, and still recover the data
AccessViolation_
2025-02-24 08:06:00
oh, interesting
spider-mario
Meanwhile I never had the privilege of seeing a floppy in person, until last year when a friend brought an old digital camera that used floppies to store the photos. I was going to JXL them, but we ran out of time 😔
2025-02-24 08:12:56
I’ve seen them but I don’t remember ever seeing them _in use_
2025-02-24 08:13:12
the memories I have of seeing, holding and manipulating one in person are well after they fell out of fashion
2025-02-24 08:13:51
I don’t think I had a floppy drive
2025-02-24 08:15:25
ah, it was a 3.5"
juliobbv
Meow Almost full with only three JXL (d3) images on this device ``` 551,522 566,845 222,794 = 1,341,161```
2025-02-24 08:24:27
try it with the DMF format, so you can squeeze four
2025-02-24 08:25:14
or port that weird driver that allows writing the full 2 MB on a floppy
Meow
2025-02-25 04:08:34
https://en.wikipedia.org/wiki/Distribution_Media_Format
2025-02-25 04:22:59
Not a solution for macOS
juliobbv
2025-02-25 06:07:05
hmm, it used to be that you could use them on macOS, but apparently today's USB floppy drives have trouble reading them from the hardware side anyway
Meow
2025-02-25 08:35:03
I only have the USB (Type-A) floppy drive. Apple was the first to ditch such devices
2025-02-25 02:11:19
```
   byte     sec  command
943,880          (PNG)
654,838   1.958  oxipng -o 4
448,367   0.968  cjxl -d 0
436,728   3.841  cwebp -lossless -m 6 -q 100
370,601  35.539  cjxl -d 0 -e 10 -E 4 -I 100 -g 3
```
2025-02-25 02:12:31
A simple benchmark for the "floppy disk" itself
2025-02-25 02:24:21
So the default JXL can't completely beat WebP lossless for 💾
_wb_
2025-02-25 02:39:22
```
Floppy.webp.png
Encoding    kPixels   Bytes       BPP     E MP/s   D MP/s  Max norm    SSIMULACRA2   PSNR   pnorm       BPP*pnorm       QABPP  Bugs
----------------------------------------------------------------------------------------------------------------------------------------
webp:ll:m1     1048  457188  3.4880676    1.277  103.932         nan  100.00000000  52.16  0.00000000  0.000000000000  3.488     0
webp:ll:m2     1048  444712  3.3928833    0.830   92.640         nan  100.00000000  52.16  0.00000000  0.000000000000  3.393     0
webp:ll:m5     1048  445408  3.3981934    0.687  101.428         nan  100.00000000  52.28  0.00000000  0.000000000000  3.398     0
webp:ll:m6     1048  439566  3.3536224    0.219  125.650         nan  100.00000000  52.28  0.00000000  0.000000000000  3.354     0
jxl:d0:1       1048  564380  4.3058777   80.209   23.010         nan  100.00000000  99.99  0.00000000  0.000000000000  4.306     0
jxl:d0:3       1048  469729  3.5837479    5.147    7.379         nan  100.00000000  99.99  0.00000000  0.000000000000  3.584     0
jxl:d0:4       1048  451626  3.4456329    1.641    6.050         nan  100.00000000  99.99  0.00000000  0.000000000000  3.446     0
jxl:d0:7       1048  447550  3.4145355    1.008   11.069         nan  100.00000000  99.99  0.00000000  0.000000000000  3.415     0
jxl:d0:8       1048  441631  3.3693771    0.480   14.250         nan  100.00000000  99.99  0.00000000  0.000000000000  3.369     0
jxl:d0:9       1048  426955  3.2574081    0.270    9.963         nan  100.00000000  99.99  0.00000000  0.000000000000  3.257     0
jxl:d0:10      1048  408316  3.1152039    0.094   13.717         nan  100.00000000  99.99  0.00000000  0.000000000000  3.115     0
Aggregate:     1048  459965  3.5092554    1.211   26.337  0.00000000  100.00000000  78.36  0.00000000  0.000000000000  3.509     0
```
2025-02-25 02:41:59
so webp m2–m5 are smaller than jxl e7 (but slower to encode) and faster than jxl e8 (but larger), so they're still on the Pareto front
2025-02-25 02:44:51
probably the more important thing where libjxl is lagging behind is decode speed — which I think is to some extent caused by webp being able to hardcode everything for 8-bit while libjxl is still decoding everything to float32
Meow
2025-02-25 02:59:05
And cwebp uses `-m 4` by default 👀
damian101
_wb_ probably the more important thing where libjxl is lagging behind is decode speed — which I think is to some extent caused by webp being able to hardcode everything for 8-bit while libjxl is still decoding everything to float32
2025-02-25 05:27:19
but not for lossless?
2025-02-25 05:27:27
or isn't this about lossless
jonnyawsom3
2025-02-25 06:09:22
I'm trying to recall, there's no fast decode like e1 encode right? Just the fixed MA tree used for e1 and e2 that probably aids caching
_wb_
2025-02-25 06:11:32
There is some specialization of some decode paths in terms of MA trees mostly, which makes e1 decode faster than e7.
2025-02-25 06:12:46
But modular decode always decodes to int32 buffers which then get converted to float32 buffers and only at the API interface do they get converted to uint8 buffers if it's an 8-bit image.
2025-02-25 06:13:26
So there is some room left for speeding up special cases like 8-bit or <= 16-bit.
jonnyawsom3
2025-02-25 07:04:09
I know a while back you also mentioned using JIT compiling to avoid 'branching traversal of the MA trees' or something along those lines
_wb_
2025-02-25 07:57:34
we're not doing that, but there is some on the fly tree simplification and special casing going on
2025-02-25 07:58:15
in other news: tweaked jpeg recompression a bit, improving compression by 0.2% or so: https://github.com/libjxl/libjxl/pull/4130
AccessViolation_
_wb_ But modular decode always decodes to int32 buffers which then get converted to float32 buffers and only at the API interface do they get converted to uint8 buffers if it's an 8-bit image.
2025-02-25 07:58:30
for 8-bit images, could the decoder instead enter a code path where `u16` is used internally instead of `f32`?
_wb_
AccessViolation_ for 8-bit images, could the decoder instead enter a code path where `u16` is used internally instead of `f32`?
2025-02-25 08:04:34
yes, the main reason we normalize everything to f32 is for convenience: this way everything that might happen after modular decode like adding splines or noise, blending frames, color conversions etc only has to be implemented for f32 buffers. But it should be possible to detect the "simple" case where it's just a single-frame 8-bit image with no fancy extra stuff, and avoid the excessive precision (i32 in modular decode, then converted to f32, then back to u8). That should improve memory locality and SIMD effectiveness, and thus also speed.
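A toy sketch of the "simple case" detection described above; all names here are hypothetical illustrations, not libjxl API, and the condition list is just the features mentioned in this conversation.

```python
# Hypothetical sketch: decide whether a modular frame could skip the
# i32 -> f32 -> u8 round trip and stay in an integer fast path.
from dataclasses import dataclass

@dataclass
class FrameInfo:
    bit_depth: int
    has_splines: bool = False
    has_noise: bool = False
    is_multi_frame: bool = False
    needs_color_conversion: bool = False

def can_use_integer_fast_path(info: FrameInfo) -> bool:
    """True when nothing after modular decode actually needs float buffers."""
    return (
        info.bit_depth <= 8
        and not info.has_splines
        and not info.has_noise
        and not info.is_multi_frame
        and not info.needs_color_conversion
    )
```

The real decoder would of course have more conditions; the point is only that the common single-frame 8-bit case is cheap to detect up front.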
AccessViolation_
2025-02-25 08:07:17
i feel like at least in this era, 99% of the images people are going to be decoding are 8-bit single-frame images, so if that significantly improves performance that might be worth considering 👀
2025-02-25 08:09:29
in a Rust-based decoder this would probably be a lot less tedious to implement with generics; you can write functions with generic arguments and use them in both the `u16` and `f32` code paths, and thanks to monomorphization the compiler will optimize each instantiation for its specific data type
_wb_
2025-02-25 08:11:19
I'm talking only about lossless / modular mode here btw, in vardct things are "natively float"
AccessViolation_
2025-02-25 08:14:38
ahh okay, it's a lot less than 99% then haha
Traneptora
_wb_ But modular decode always decodes to int32 buffers which then get converted to float32 buffers and only at the API interface do they get converted to uint8 buffers if it's an 8-bit image.
2025-02-25 10:32:49
Ah this is something I did in jxlatte that sped up lossless quite a bit
2025-02-25 10:33:13
I had my buffers be (effectively) unions of float32 and int32
2025-02-25 10:34:27
and modular rectangles decoded to int32 but could be converted to float if needed for, e.g. blending
2025-02-25 10:34:55
if they weren't ever needed it would remain int all the way to png output
veluca
_wb_ I'm talking only about lossless / modular mode here btw, in vardct things are "natively float"
2025-02-25 11:16:16
we could use f16 though 😄
A homosapien
_wb_ in other news: tweaked jpeg recompression a bit, improving compression by 0.2% or so: https://github.com/libjxl/libjxl/pull/4130
2025-02-26 12:14:07
I'm seeing a minor file size regression (~0.4% larger) ~~and a significant performance regression (~37% slower)~~.
```
Main
--------------------
time ./cjxl chen.jpg main.jxl -d 0 -e 9 -I 100 -g 3 -j 1 -E 4 -P 0 --brotli_effort 11
JPEG XL encoder v0.12.0 c6355600 [AVX2,SSE4,SSE2]
Encoding [JPEG, lossless transcode, effort: 9]
Compressed to 2474.1 kB including container

real    0m5.306s
user    0m5.600s
sys     0m0.164s

PR
--------------------
time ./cjxl chen.jpg PR.jxl -d 0 -e 9 -I 100 -g 3 -j 1 -E 4 -P 0 --brotli_effort 11
JPEG XL encoder v0.12.0 [_AVX2_,SSE4,SSE2]
Encoding [JPEG, lossless transcode, effort: 9]
Compressed to 2485.3 kB including container

real    0m7.295s
user    0m7.647s
sys     0m0.132s
```
2025-02-26 12:14:27
Are the file size improvements only for effort 7?
2025-02-26 12:19:36
Weird, now ~~I'm seeing a performance improvement (~9%) but~~ still the same file size increase (~0.4%) at effort 7
```
Main
--------------------
time ./cjxl chen.jpg main.jxl -j 1
JPEG XL encoder v0.12.0 c6355600 [AVX2,SSE4,SSE2]
Encoding [JPEG, lossless transcode, effort: 7]
Compressed to 2484.2 kB including container

real    0m0.309s
user    0m0.541s
sys     0m0.092s

PR
--------------------
time ./cjxl chen.jpg PR.jxl -j 1
JPEG XL encoder v0.12.0 [_AVX2_,SSE4,SSE2]
Encoding [JPEG, lossless transcode, effort: 7]
Compressed to 2495.4 kB including container

real    0m0.281s
user    0m0.476s
sys     0m0.152s
```
2025-02-26 01:00:38
Keep in mind I'm testing larger images around 10 MP. Seems like effort 9 is just ~~overall worse. Effort 7 is slightly faster but~~ slightly less dense.
2025-02-26 01:01:49
I *do* see file size improvements with smaller images around 0.5 MP.
2025-02-26 01:03:10
Still overall less dense compared to libjxl 0.9 though
_wb_
2025-02-26 06:53:53
Hm, so your corpus behaves differently from the test corpora I used then...
A homosapien
2025-02-26 07:48:56
I think it just depends on the resolution. Most image corpora I've seen don't really test above 2-3 MP.
_wb_
2025-02-26 07:49:04
maybe I should get a better test corpus
2025-02-26 07:49:26
I used these: https://imagecompression.info/test_images/
2025-02-26 07:50:11
but maybe I better test on just some random photos from my phone, that's probably more representative
A homosapien
2025-02-26 07:50:20
Try these on for size: https://www.hasselblad.com/learn/sample-images/ Free 50-100 MP jpegs for extreme cases 👍
_wb_ I used these: https://imagecompression.info/test_images/
2025-02-26 08:30:00
Okay, so the performance regression has something to do with my Ubuntu VM, I recompiled on Windows natively and both perform within margin of error. Bad testing environment, sorry. 😅 I still see a small 0.2-0.4% increase in file size with my test images. FYI I don't use image corpora, I just gather random images on my computer. They cover a wide range of content/resolutions.
2025-02-26 08:31:41
That's how I found the regression from 0.8 btw, I was just compressing anything & everything on my computer lol. <:PepeOK:805388754545934396>
_wb_
2025-02-26 08:32:02
I don't expect speed differences, but in terms of compression when I test it on some random jpeg photos from an Android phone and from an iPhone, I do see a small increase in file size so that means those test images / encoder settings I was using are not representative for my "in the wild" examples
Quackdoc
2025-02-26 08:34:50
whenever I want to test an image encoder, I just download 20k images from a booru and call it good. Eventually I want to make an app that just rips all the images from websites I view
A homosapien
2025-02-26 08:36:00
For "real world" photographic sources my phone's camera roll works well enough
2025-02-26 08:40:41
Downloading random funny images from Google/Wikipedia/Reddit also works for an "in the wild" corpus
2025-02-26 08:43:47
I don't think that Kodak image set represents today's modern internet memes/screenshots/generation-loss'ed jpegs
2025-02-26 08:54:29
For example, this random Twitter meme featuring a big-chested anime woman illustrates these issues. Can the Kodak image set do the same? I don't think so.
Quackdoc
2025-02-26 08:54:53
what do you mean kodak doesn't accurately represent an image that has been jpeg to jpeg transcoded 200 times because of a discord meme
A homosapien
2025-02-26 08:59:32
*Know Your Meme* is more representative of the modern internet than any scientifically homogenous, carefully curated, sterile image set out there.
Quackdoc
2025-02-26 09:02:20
I hate that I have nothing I could possibly say to refute that
A homosapien
A homosapien For example, this random Twitter meme featuring a big-chested anime woman illustrates these issues. Can the Kodak image set do the same? I don't think so.
2025-02-26 09:07:35
The issues in question: e10 larger than e9, Jon's PR not being better, 0.9 being able to use `-E` and being the overall densest.
```
cjxl params: -j 1 -e 9 -I 100
Jon's PR --- 726,879 bytes
main + e10 - 721,921 bytes
main ------- 721,633 bytes
0.9 -------- 721,328 bytes
0.9 + E2 --- 721,141 bytes
```
Meow
A homosapien I don't think that Kodak image set represents today's modern internet memes/screenshots/generation-loss'ed jpegs
2025-02-26 09:12:42
I recently ran the ssimulacra2 benchmark with over 400 mostly NSFW PNG artworks
2025-02-26 09:19:28
They're all done in 2024 so quite new
jonnyawsom3
A homosapien The issues in question: e10 larger than e9, Jon's PR not being better, 0.9 being able to use `-E` and being the overall densest. ``` cjxl params: -j 1 -e 9 -I 100 Jon's PR --- 726,879 bytes main + e10 - 721,921 bytes main ------- 721,633 bytes 0.9 -------- 721,328 bytes 0.9 + E2 --- 721,141 bytes ```
2025-02-26 09:50:14
Huh... I didn't think transcoding used MA trees, unless those gains are purely from the DC
_wb_
A homosapien Okay, so the performance regression has something to do with my Ubuntu VM, I recompiled on Windows natively and both perform within margin of error. Bad testing environment, sorry. 😅 I still see a small 0.2-0.4% increase in file size with my test images. FYI I don't use image corpora, I just gather random images on my computer. They cover a wide range of content/resolutions.
2025-02-26 01:23:33
could you retry with the changes I made?
Traneptora
Huh... I didn't think transcoding used MA trees, unless those gains are purely from the DC
2025-02-26 05:37:45
the DC coeffs are encoded using modular, like in standard JXL
2025-02-26 05:38:09
the MA tree gains are in that area
AccessViolation_
A homosapien Try these on for size: https://www.hasselblad.com/learn/sample-images/ Free 50-100 MP jpegs for extreme cases 👍
2025-02-26 05:56:43
oh this is *really* nice
2025-02-26 05:58:58
Ah, unfortunately a lot of them are rather noisy. I really like when there's basically no noise, so you can evaluate the performance of image coding, especially lossless, just on the patterns of the image subject
2025-02-26 06:03:32
though at that resolution, good luck with that...
jonnyawsom3
2025-02-26 06:06:52
Ah damn, I forgot to take a good monochrome image while I was in London. Wanted to get a nice one for greyscale tests
lonjil
AccessViolation_ though at that resolution, good luck with that...
2025-02-26 08:13:42
just downscale to get a less noisy image
AccessViolation_
2025-02-26 08:25:53
not a bad idea
jonnyawsom3
2025-02-26 08:30:03
That poses the question, what downscaling method is best to remove noise?
AccessViolation_
2025-02-26 08:34:29
without prior knowledge on this, the first thing I would try (after simply averaging pixels) would be averaging some amount of pixels, weighted by how similar they already are to each other. So, for example, if a single pixel would throw off the average a lot even though the others are already very similar, it contributes less to the final average.
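A minimal numpy sketch of that idea: average each block, but weight pixels by their closeness to the block mean so lone outliers count less. The weighting function here is an arbitrary choice for illustration, not an established filter.

```python
import numpy as np

def weighted_block_downscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Hypothetical sketch: average factor x factor blocks, weighting each
    pixel by its closeness to the block mean, so lone outliers count less."""
    h = img.shape[0] - img.shape[0] % factor
    w = img.shape[1] - img.shape[1] % factor
    blocks = img[:h, :w].astype(np.float64)
    blocks = blocks.reshape(h // factor, factor, w // factor, factor, -1)
    blocks = blocks.transpose(0, 2, 1, 3, 4).reshape(
        h // factor, w // factor, factor * factor, -1)
    mean = blocks.mean(axis=2, keepdims=True)
    # Weight decays with distance from the block mean; the falloff is a free parameter.
    weights = 1.0 / (1.0 + np.abs(blocks - mean))
    out = (blocks * weights).sum(axis=2) / weights.sum(axis=2)
    return out if out.shape[-1] > 1 else out[..., 0]
```

On a block of three 10s and one 100, a plain average gives 32.5 while this pulls the result much closer to 10.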
CrushedAsian255
2025-02-26 08:55:12
Maybe take a median?
_wb_
2025-02-26 09:10:06
The best way imo to get rid of noise is to do a Gaussian blur first, with a sigma equivalent to half a downscaled pixel, then just a box filter.
2025-02-26 09:10:48
Avoid downscaling filters with negative weights, like Lanczos, since they introduce ringing
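A rough numpy sketch of that recipe, under my reading of it: Gaussian blur with sigma equal to half a downscaled pixel (factor/2 source pixels), then a box average. Edge handling via reflection is my own assumption, and the 3-sigma kernel truncation is an arbitrary choice.

```python
import numpy as np

def gaussian_kernel(sigma: float) -> np.ndarray:
    # Truncate at 3 sigma; normalize so the weights sum to 1.
    radius = max(1, int(round(3 * sigma)))
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def _filter_axis(a: np.ndarray, k: np.ndarray, axis: int) -> np.ndarray:
    # 1-D convolution along one axis, with reflected edges to avoid darkening.
    radius = len(k) // 2
    pad = [(radius, radius) if ax == axis else (0, 0) for ax in range(a.ndim)]
    padded = np.pad(a, pad, mode="reflect")
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), axis, padded)

def blur_then_box_downscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Gaussian blur (sigma = half a downscaled pixel), then box-average blocks."""
    k = gaussian_kernel(factor / 2.0)
    blurred = _filter_axis(img.astype(np.float64), k, 0)  # rows
    blurred = _filter_axis(blurred, k, 1)                 # columns
    h = img.shape[0] - img.shape[0] % factor
    w = img.shape[1] - img.shape[1] % factor
    blocks = blurred[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```

All weights are non-negative, so unlike Lanczos this cannot ring; the cost is some extra softening.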
Traneptora
That poses the question, what downscaling method is best to remove noise?
2025-02-26 11:00:38
bicubic b=1 c=0 will reduce noise cause it has a blurring effect
2025-02-26 11:10:28
if you want it to be a cardinal cubic then you can try hermite (b=0, c=0)
jonnyawsom3
KRA files are just ZIP with different extensions, so here's a demo one just with a layer of my Doggo.png test image. KRZ files are for archival and skip the `mergedimage.png`
2025-02-28 10:07:45
<@245794734788837387>
2025-03-01 03:16:08
A note to self: Test `-P 3` on progressive lossless images. First results hint it's best for the Squeeze
Meow
2025-03-01 01:47:15
Running a benchmark for WebP near-lossless vs WebP lossy / JXL / Jpegli again with 353 artworks this time
jonnyawsom3
A note to self: Test `-P 3` on progressive lossless images. First results hint it's best for the Squeeze
2025-03-01 04:40:17
After further testing, it's not `-P 3`, for some reason effort 5+ causes significantly larger progressive lossless files. Using effort 4 has minimal overhead compared to non-progressive
2025-03-01 05:14:22
```
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 4]
Compressed to 8739.9 kB including container (5.921 bpp).
3968 x 2976, 2.850 MP/s [2.85, 2.85], , 1 reps, 16 threads.
Encoding [Modular, lossless, effort: 5]
Compressed to 13702.1 kB including container (9.283 bpp).
3968 x 2976, 1.388 MP/s [1.39, 1.39], , 1 reps, 16 threads.
```
Adding `-Y 0` to disable local palette seems to help, but still a long way off (Patches were disabled on both e5 runs for speed)
```
Encoding [Modular, lossless, effort: 5]
Compressed to 12258.6 kB including container (8.305 bpp).
3968 x 2976, 1.534 MP/s [1.53, 1.53], , 1 reps, 16 threads.
```
Meow
2025-03-01 06:22:18
```
byte         sec    ssimulacra2  command
970,483,716                      (PNG)
969,921,289                      (Alpha converted to grey bg)
----
747,496,648  6,396  100.0000000  cwebp -lossless -m 6 -q 100
677,263,619  1,140  100.0000000  cjxl -d 0
----
655,460,446  6,653  92.39339902  cwebp -near_lossless 80 -m 6 -q 100
620,081,602  144    92.28029366  cjpegli -q 100
411,639,120  312    92.49223606  cjxl -d 0.2 -a 0.2
----
602,689,220  6,507  91.61000444  cwebp -near_lossless 60 -m 6 -q 100
432,434,134  74     91.50709178  cjpegli -q 99
285,565,253  290    91.54658918  cjxl -d 0.35 -a 0.35
----
573,363,544  6,663  90.17948322  cwebp -near_lossless 40 -m 6 -q 100
290,696,672  66     90.15305513  cjpegli -q 97
198,763,445  279    90.36016919  cjxl -d 0.55 -a 0.55
----
558,806,258  6,588  87.73899044  cwebp -near_lossless 20 -m 6 -q 100
232,007,806  567    87.54638977  cwebp -sharp_yuv -m 6 -q 99
189,430,551  64     87.43204715  cjpegli -q 93
131,084,991  328    87.63679506  cjxl -d 1.0 -a 1.0
----
550,627,860  6,521  84.10761986  cwebp -near_lossless 0 -m 6 -q 100
139,203,521  60     84.10544909  cjpegli -q 88
122,573,156  478    84.13292201  cwebp -sharp_yuv -m 6 -q 92
93,210,924   324    84.11337606  cjxl -d 1.65 -a 1.65
```
2025-03-01 06:34:49
This is a more objective benchmark based on the 353 digital artworks in various styles
A homosapien
_wb_ could you retry with the changes I made?
2025-03-02 12:30:11
It's much better! I'm no longer seeing file size regressions with high res images. File sizes measured in bytes.
```
10MP Photo
╔══════════╤═══════════╤═══════════╤═════════════════╗
║          │ effort 7  │ effort 9  │ max (I100 & E2) ║
╠══════════╪═══════════╪═══════════╪═════════════════╣
║ new-PR   │ 1,111,384 │ 1,103,200 │ 1,100,923       ║
╟──────────┼───────────┼───────────┼─────────────────╢
║ old-PR   │ 1,111,359 │ 1,103,225 │ 1,100,948       ║
╟──────────┼───────────┼───────────┼─────────────────╢
║ jxl-0.11 │ 1,110,628 │ 1,102,531 │ 1,100,254       ║
╟──────────┼───────────┼───────────┼─────────────────╢
║ jxl-0.9  │ 1,110,659 │ 1,100,953 │ 1,089,308       ║
╚══════════╧═══════════╧═══════════╧═════════════════╝

50MP Photo
╔══════════╤════════════╤════════════╤═════════════════╗
║          │ effort 7   │ effort 9   │ max (I100 & E2) ║
╠══════════╪════════════╪════════════╪═════════════════╣
║ new-PR   │ 15,662,021 │ 15,606,378 │ 15,602,688      ║
╟──────────┼────────────┼────────────┼─────────────────╢
║ old-PR   │ 16,855,906 │ 17,507,564 │ 17,503,872      ║
╟──────────┼────────────┼────────────┼─────────────────╢
║ jxl-0.11 │ 15,714,786 │ 15,660,344 │ 15,656,653      ║
╟──────────┼────────────┼────────────┼─────────────────╢
║ jxl-0.9  │ 15,714,846 │ 15,660,286 │ 15,653,181      ║
╚══════════╧════════════╧════════════╧═════════════════╝

100MP Photo
╔══════════╤════════════╤════════════╤═════════════════╗
║          │ effort 7   │ effort 9   │ max (I100 & E2) ║
╠══════════╪════════════╪════════════╪═════════════════╣
║ new-PR   │ 50,760,429 │ 50,591,277 │ 50,573,301      ║
╟──────────┼────────────┼────────────┼─────────────────╢
║ old-PR   │ 53,523,780 │ 53,625,598 │ 53,607,623      ║
╟──────────┼────────────┼────────────┼─────────────────╢
║ jxl-0.11 │ 50,734,605 │ 50,566,598 │ 50,548,622      ║
╟──────────┼────────────┼────────────┼─────────────────╢
║ jxl-0.9  │ 50,734,611 │ 50,562,417 │ 50,472,014      ║
╚══════════╧════════════╧════════════╧═════════════════╝

YouTube Thumbnail
╔══════════╤══════════╤══════════╤═════════════════╗
║          │ effort 7 │ effort 9 │ max (I100 & E2) ║
╠══════════╪══════════╪══════════╪═════════════════╣
║ new-PR   │ 90,380   │ 88,924   │ 88,759          ║
╟──────────┼──────────┼──────────┼─────────────────╢
║ old-PR   │ 90,537   │ 89,092   │ 88,928          ║
╟──────────┼──────────┼──────────┼─────────────────╢
║ jxl-0.11 │ 90,684   │ 89,193   │ 89,028          ║
╟──────────┼──────────┼──────────┼─────────────────╢
║ jxl-0.9  │ 90,731   │ 89,173   │ 89,030          ║
╚══════════╧══════════╧══════════╧═════════════════╝

Twitter meme
╔══════════╤══════════╤══════════╤═════════════════╗
║          │ effort 7 │ effort 9 │ max (I100 & E2) ║
╠══════════╪══════════╪══════════╪═════════════════╣
║ new-PR   │ 732,394  │ 727,348  │ 726,539         ║
╟──────────┼──────────┼──────────┼─────────────────╢
║ old-PR   │ 732,406  │ 727,687  │ 726,879         ║
╟──────────┼──────────┼──────────┼─────────────────╢
║ jxl-0.11 │ 727,401  │ 722,441  │ 721,633         ║
╟──────────┼──────────┼──────────┼─────────────────╢
║ jxl-0.9  │ 727,402  │ 721,737  │ 721,141         ║
╚══════════╧══════════╧══════════╧═════════════════╝
```
The new PR performs much better overall and is consistently beating the old PR. Sometimes it even trades blows with 0.9! It could improve even more with medium-resolution content like the Twitter meme (which I posted earlier) and the 10MP photo.
AccessViolation_
2025-03-04 08:07:30
predictors/residuals/context visualizations
Meow
2025-03-05 12:33:43
Should I add more formats such as AVIF? https://docs.google.com/spreadsheets/d/1nTYDRiI5IDpBU5fiMuQEMVIpWRaK_66N5YcVoUmDhyA/edit?usp=sharing
juliobbv
Meow Should I add more formats such as AVIF? https://docs.google.com/spreadsheets/d/1nTYDRiI5IDpBU5fiMuQEMVIpWRaK_66N5YcVoUmDhyA/edit?usp=sharing
2025-03-05 10:53:48
libaom's `--tune iq` might be worth giving a test
Meow
juliobbv `--tune iq` libaom might be worth giving it a test
2025-03-06 02:10:42
What is that `--tune iq` for?
juliobbv
Meow What is that `--tune iq` for?
2025-03-06 02:16:54
it's the newest tune, optimized for Image Quality (IQ); the heuristics were guided by SSIMULACRA2 but double-checked with subjective quality checks
2025-03-06 02:17:18
as opposed to any other particular metric like PSNR or SSIM
Meow
2025-03-06 02:52:34
Hmm, I should experiment with that before doing a full benchmark
paperboyo
juliobbv `--tune iq` libaom might be worth giving it a test
2025-03-06 08:33:51
[I know it really belongs to other-codecs, but just a quickie…] I wanted to get rid of some blockiness at AVIF effort `6`, but going to `5`, which gets rid of it, takes ~twice as long to encode… Are there any speed benchmarks available for `tune=iq` (and for efforts in general), i.e. what's the performance hit? I couldn't find any mass speed benchmarks, and me encoding 5 images isn't really a proper way…
jonnyawsom3
2025-03-06 08:55:50
I did some testing based around enabling resampling at lower distance values. The current default is 20, but I found at distance 15 with an adjusted scale it's already higher quality and lower bpp. More interestingly though, version 0.8 beat both by a mile, with the resampling tweak only improving it further. I'll probably do a few more tests and then make a PR. One result had significantly worse scores, around -30 with ssimulacra2, but visually looked better. Encode speed also suffers significantly due to this PR https://github.com/libjxl/libjxl/pull/471 . Ssimulacra2 results below have either identical or lower bpp when resampled
```
Distance 20                       7.34862819
Distance 5.75 Resampling 2        9.68362533
Distance 20 v0.8                 12.12171614
Distance 5.75 Resampling 2 v0.8  15.37069878
```
juliobbv
paperboyo [I know it really belongs to other-codecs, but just a quickie…] I wanted to get rid of some blockiness on AVIF effort `6`, but going to `5`, which gets rid of it, takes ~twice longer to encode… Are there any speed benchmarks available for `tune=iq` (and for efforts in general), what’s the performance hit? I couldn’t find any mass speed benchmarks and me encoding 5 images isn’t really a proper way…
2025-03-06 06:10:16
there are no speed benchmarks on tune iq AFAIK (it's relatively new), but speed is roughly proportional to other tunings, of which there are some benchmarks out there
RaveSteel
2025-03-07 10:30:02
Comparing two large-resolution videos (6016x3384) made up of photos surprised me: the lossless 521 MiB JXL decodes much faster than the 2.1 GiB FFV1 MKV, with the latter also being more resource-intensive. I had expected the animated JXL to be smaller, but slower
2025-03-07 10:31:46
AVIF performed so badly that I didn't even bother keeping it
Meow
2025-03-08 12:45:21
```
byte           sec    command
1,285,825,364         (PNG after djxl)
1,107,332,566  162    oxipng -o 0 -a
983,356,520    1,476  oxipng -a
975,553,116    2,366  oxipng -o 4 -a
----
1,344,423,003  65     qoi
1,121,548,403  122    jxrencapp -l 0
1,055,738,327  45     cjxl -d 0 -e 1
933,718,761    920    avifenc -l
932,893,526    529    heif-enc -L -p chroma=444 --matrix_coefficients=0
747,496,648    6,396  cwebp -lossless -m 6 -q 100
677,263,619    1,140  cjxl -d 0
```
More lossless benchmarks
2025-03-08 12:46:20
In case people don't know: `oxipng` uses `-o 2` by default
2025-03-08 12:49:32
I could add AVIF, HEIC, and even JXR later as well when I'm free
2025-03-08 02:19:21
Yeah encoding JXR is slightly larger and faster than the fastest OxiPNG
2025-03-08 03:13:56
AVIF and HEIF are really close
𐑛𐑦𐑕𐑣𐑸𐑥𐑩𐑯𐑦 | 最不調和の伝播者 | 異議の元素
2025-03-09 01:38:06
is there any progress on the spline encoding talked about several months ago?
AccessViolation_
2025-03-09 08:44:11
ahh these always feel good
2025-03-09 08:45:14
massive decrease in file size from `e 9` to `e 10`. from 445 KB to 87 KB
2025-03-09 08:45:27
original: PNG, 550 KB
A homosapien
2025-03-09 08:45:28
Food? 🍔
AccessViolation_
2025-03-09 08:45:51
lol
2025-03-09 08:45:53
dyslexia moment
couleur
AccessViolation_ original: PNG, 550 KB
2025-03-09 08:46:32
what program produces that?
2025-03-09 08:46:37
what kind of graph even is that
AccessViolation_
2025-03-09 08:48:11
it's the control flow graph of a disassembled function from some malware. the software is [Cutter](<https://cutter.re/>), useful for disassembling and reverse engineering compiled code
2025-03-09 08:49:10
and generating large images perfect for benchmarks :)
couleur
AccessViolation_ original: PNG, 550 KB
2025-03-09 08:51:52
how well does jpegli avif and webp perform on this?
2025-03-09 08:52:51
webp lossless 90kb
AccessViolation_
2025-03-09 08:53:13
I don't know how to use those from the command line so I can't easily test. but I'd imagine WebP lossless does pretty great and AVIF lossless does pretty poor
𐑛𐑦𐑕𐑣𐑸𐑥𐑩𐑯𐑦 | 最不調和の伝播者 | 異議の元素
AccessViolation_ I don't know how to use those from the command line so I can't easily test. but I'd imagine WebP lossless does pretty great and AVIF lossless does pretty poor
2025-03-09 08:55:17
`cwebp -m 6 -lossless input.png -o output.webp`
AccessViolation_
2025-03-09 08:55:43
as far as I know, how lossless AVIF works is that it encodes it as a lossy image first, and then stores a second image that contains all the values that it needs to add or subtract from that to get the original image back. this means it will create a bunch of compression artifacts around those lines - like any lossy format would - and then store a bunch of extra data to remove them. not ideal
username
𐑛𐑦𐑕𐑣𐑸𐑥𐑩𐑯𐑦 | 最不調和の伝播者 | 異議の元素 `cwebp -m 6 -lossless input.png -o output.webp`
2025-03-10 12:52:07
I would recommend this instead: `cwebp -mt -lossless -z 9 -alpha_filter best -metadata icc -progress -v input.png -o output.webp` this is for max effort so it is a bit slow
𐑛𐑦𐑕𐑣𐑸𐑥𐑩𐑯𐑦 | 最不調和の伝播者 | 異議の元素
username I would recommend this instead: `cwebp -mt -lossless -z 9 -alpha_filter best -metadata icc -progress -v input.png -o output.webp` this is for max effort so it is a bit slow
2025-03-10 12:53:01
tbh `-m 6` is already quite slow, this sounds overkill
2025-03-10 12:55:31
probably would reformat to `cwebp -mt -m 6 -z 9 -near_lossless 100 -alpha_filter best` later however...
username
2025-03-10 01:00:01
`-z` contains presets for setting both `-q` and `-m` at the same time because `-q` relates to speed/effort when lossless is defined. check here for what is set at each preset level: https://github.com/webmproject/libwebp/blob/874069042ead095f8a8d6bdd35b9b145ce80af43/src/enc/config_enc.c#L140
monad
2025-03-10 01:01:41
I think I decided z6 was about the practical limit in my case dealing with a couple hundred thousand images. higher efforts scale poorly in general
TheBigBadBoy - 𝙸𝚛
username I would recommend this instead: `cwebp -mt -lossless -z 9 -alpha_filter best -metadata icc -progress -v input.png -o output.webp` this is for max effort so it is a bit slow
2025-03-10 01:02:38
and if you want the best WebP compression you should bruteforce `-z 1..9` <:KekDog:805390049033191445>
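The brute-force over `-z 0..9` joked about above could be scripted along these lines (a hypothetical editorial sketch; it assumes `cwebp` is on PATH and keeps whichever preset produced the smallest file):

```python
import os
import subprocess
import tempfile

def smallest(sizes):
    """Return the (label, size) pair with the smallest size."""
    return min(sizes.items(), key=lambda kv: kv[1])

def brute_force_z(src, dst):
    """Encode src with every cwebp -z preset (0-9) and keep the smallest
    output. Slow by construction: it runs ten full lossless encodes."""
    sizes, paths = {}, {}
    with tempfile.TemporaryDirectory() as tmp:
        for z in range(10):
            out = os.path.join(tmp, "z%d.webp" % z)
            subprocess.run(["cwebp", "-mt", "-z", str(z), src, "-o", out],
                           check=True, capture_output=True)
            paths[z] = out
            sizes[z] = os.path.getsize(out)
        best, size = smallest(sizes)
        os.replace(paths[best], dst)  # keep only the winner
    return best, size
```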
username
TheBigBadBoy - 𝙸𝚛 and if you want the best WebP compression you should bruteforce `-z 1..9` <:KekDog:805390049033191445>
2025-03-10 01:06:23
not only that, but you would probably also have to do it with both libwebp/cwebp 1.4.0 and 1.5.0, because in some testing I saw 1.5.0 making lossless images larger for some reason
TheBigBadBoy - 𝙸𝚛
2025-03-10 01:07:25
too much of a hassle for me lol
Meow
𐑛𐑦𐑕𐑣𐑸𐑥𐑩𐑯𐑦 | 最不調和の伝播者 | 異議の元素 `cwebp -m 6 -lossless input.png -o output.webp`
2025-03-10 01:51:29
You could add `-q 100` even if that's lossless
juliobbv
_wb_ Possibly ssimu2 puts a bit too much weight on the zoomed-out scales
2025-03-10 05:49:47
this is what I've noticed as well IME
2025-03-10 05:52:51
but scale weighting does seem to be one of those things it's inherently hard to tune for because of each person's individual preferences can be significantly different (crisper and more distorted vs. smoother and less distorted)
2025-03-10 05:55:00
so different cohorts end up with different aggregates with a different spectrum of preferences and thus different weights after training
Meow
2025-03-10 07:11:59
Quick benchmark on a significantly slower laptop for just one image
```
byte        sec     command
----
64,267,322  4.417   oxipng -a -f 0 --zc 1
----
38,811,969  2.011   qoi
38,332,388  15.147  oxipng -a -o 0
36,019,648  4.001   cwebp -lossless -m 0 -f 0 -q 0 -mt
28,886,875  2.378   cjxl -d 0 -e 1
```
2025-03-10 07:35:09
Tried my best to find the quickest `cwebp` approach
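Byte/sec tables like these can be reproduced with a small timing harness along these lines (an editorial sketch, not the script actually used here; `bench` takes the full encoder command plus the path it writes to):

```python
import os
import subprocess
import time

def bench(cmd, out_path, reps=3):
    """Run an encoder command `reps` times and report the best wall-clock
    time in seconds plus the size in bytes of the file it wrote."""
    best = float("inf")
    for _ in range(reps):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        best = min(best, time.perf_counter() - start)
    return best, os.path.getsize(out_path)

# Hypothetical usage:
# t, size = bench(["cjxl", "in.png", "out.jxl", "-d", "0", "-e", "1"], "out.jxl")
```

Taking the best of several repetitions reduces noise from caching and background load, which matters when the encoders being compared are only fractions of a second apart.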
Orum
2025-03-10 07:40:43
surprised qoi is faster than e 1, though with such a large increase in size it's not really worth it for the small decrease in time
Meow
2025-03-10 07:44:53
Ran again with `qoi` and `cjxl -d 0 -e 1` each taking 2.106 and 2.427 seconds respectively
2025-03-10 07:45:42
QOI remained slightly faster
2025-03-10 07:46:37
Maybe that's due to a slower environment
Orum
2025-03-10 07:47:32
what CPU is this?
Meow
2025-03-10 07:48:00
Intel Core i5-8350U
Orum
2025-03-10 07:48:59
so, quad core and AVX2... yeah that surprises me a bit
2025-03-10 07:50:17
still, not worth it to use qoi though, as the trade-off in actually writing that image to a drive probably negates any speed bonus you get from encoding it
Meow
2025-03-10 07:51:00
It uses one thread only though
2025-03-10 07:51:50
`cjxl` could easily surpass with a faster CPU
Orum
2025-03-10 07:56:03
I would think even a 4-core would be faster though for cjxl
2025-03-10 07:56:18
I don't have anything less than 6-core though to test on
2025-03-10 07:56:30
well, except for low-power stuff like a Pi
Meow
2025-03-10 07:59:29
Not sure if there are other parameters for `cwebp -lossless` that produce smaller files without being much slower than `-m 0 -f 0 -q 0`
Orum
2025-03-10 08:00:48
well cwebp seems to only be single threaded for lossless, even with `-mt`, with the exception of `-z 9` (where it still only uses 2 threads)
2025-03-10 08:06:19
Meow
2025-03-10 08:15:05
Ah, that must be why encoding lossy WebP is much faster
Orum
2025-03-10 08:16:45
IDK, I don't really test lossy WebP, and I really only use it for uploading things to discord (though with cjpegli, I use it a lot less now)
AccessViolation_
2025-03-10 09:06:55
there is *something* about effort 10 that makes these graphs like ~15% of the size they were for effort 9...
AccessViolation_ ahh these always feel good
2025-03-10 09:07:20
(context)
2025-03-10 09:07:38
the larger the image the bigger the gains, it seems
A homosapien
2025-03-10 09:08:57
for lossless e10 enables the mixed predictor `-P 15` instead of the Gradient and Weighted Predictors.
2025-03-10 09:09:06
and for larger images chunked encoding is also disabled at e 10
AccessViolation_
2025-03-10 09:23:52
`-e 9 -P 15 -E 3 -I 100 --num_threads 0` is still within the same order of magnitude as `-e 9`. they say effort 10 also enables the "global MA tree", it doesn't look like that's something I can enable for effort 9 using parameters. i guess that must be it, then
2025-03-10 09:27:11
I know that the entropy coding effort and LZ77 search space also goes up, but I doubt an incremental increase to those is going to suddenly result in 1/10th the file size
monad
AccessViolation_ there is *something* about effort 10 that makes these graphs like ~15% of the size they were for effort 9...
2025-03-10 09:36:45
patches
AccessViolation_
2025-03-10 09:43:22
OH
2025-03-10 09:43:25
yes! thank you
2025-03-10 09:45:30
I honestly thought lossless modular effort 7 and up used patches by default
Meow
Meow Quick benchmark on a significantly slower laptop for just one image ```byte sec command ---- 64,267,322 4.417 oxipng -a -f 0 --zc 1 ---- 38,811,969 2.011 qoi 38,332,388 15.147 oxipng -a -o 0 36,019,648 4.001 cwebp -lossless -m 0 -f 0 -q 0 -mt 28,886,875 2.378 cjxl -d 0 -e 1```
2025-03-10 12:13:27
The speeds aren't very different with i5-9600K
```
byte        sec     command
----
64,267,322  4.808   oxipng -a -f 0 --zc 1
----
38,811,969  1.598   qoi
38,332,388  13.041  oxipng -a -o 0
36,019,648  3.258   cwebp -lossless -m 0 -f 0 -q 0 -mt
35,330,537  6.417   heif-enc -L -p preset=ultrafast --matrix_coefficients=0
34,204,919  5.610   avifenc -s 9 -j all -l
28,886,875  1.969   cjxl -d 0 -e 1
```
jonnyawsom3
2025-03-10 12:34:22
Could you try with a clang compiled cjxl? That might tip the scales between QOI and effort 1 JXL. Assuming Windows, I can send my binary if you need it
Orum IDK, I don't really test lossy WebP, and I really only use it for uploading things to discord (though with cjpegli, I use it a lot less now)
2025-03-10 12:35:59
Annoyingly, Discord seems to lower the preview quality further when it's a JPEG file. They do the same for WebP, but not as badly
AccessViolation_ I honestly thought lossless modular effort 7 and up used patches by default
2025-03-10 12:37:47
Only in images under 2048 x 2048, due to chunked encoding. Effort 10 disables chunked encoding, so patches get re-enabled. I fixed the documentation for it in my PR yesterday while fiddling with resampling
AccessViolation_
2025-03-10 12:42:54
ahh I see
Meow
Could you try with a clang compiled cjxl? That might tip the scales between QOI and effort 1 JXL. Assuming Windows, I can send my binary if you need it
2025-03-10 01:05:36
Hackintosh 👀 but that 8350U is with Windows 11
jonnyawsom3
Meow Hackintosh 👀 but that 8350U is with Windows 11
2025-03-10 01:06:33
Worth a shot
Meow
2025-03-10 01:07:27
Not with that laptop now. To test tomorrow possibly
Orum
Annoyingly, Discord seem to lower the preview quality further when it's a JPEG file. They do the same for WebP but not as badly
2025-03-10 01:09:33
well they can't even get the colors right for sRGB so no matter what you do you're screwed
Meow
Worth a shot
2025-03-11 03:34:33
Yes, the speed of `cjxl` is faster
```
byte        sec    command
----
38,811,969  2.086  qoi
28,886,875  1.785  cjxl -d 0 -e 1
```
2025-03-11 03:35:38
Why does the clang-compiled library encode faster?
CrushedAsian255
Meow Why does the clang-compiled library encode faster?
2025-03-11 05:49:52
Differing compiler optimisations probably
A homosapien
2025-03-11 06:02:37
Clang is better than MSVC, not a big surprise
Meow
2025-03-15 12:07:42
```
byte    sec     command
----
84,018  0.041   qoi
83,936  0.038   cjxl -d 0 -e 1
65,798  1.291   oxipng -o 4
47,660  0.836   cjxl -d 0
35,702  1.342   cwebp -lossless -m 6 -q 100
28,103  10.502  cjxl -d 0 -e 10 -E 4 -I 100 -g 3
```
CrushedAsian255
2025-03-15 10:46:58
Nice
Meow
2025-03-16 10:20:17
Added qoi back. How can I miss that
_wb_
2025-03-16 01:08:45
How does lower effort jxl compare?
jonnyawsom3
2025-03-16 01:18:03
Maybe even clang vs normal
_wb_
2025-03-16 01:49:04
I would expect e1 to be quite ok for such an image
lonjil
2025-03-16 01:58:08
add [fpnge](<https://github.com/veluca93/fpnge>)!
Meow
_wb_ How does lower effort jxl compare?
2025-03-16 02:19:18
Yeah 82 bytes smaller and 0.003 seconds faster
2025-03-16 02:19:45
Quite OK JXL
lonjil add [fpnge](<https://github.com/veluca93/fpnge>)!
2025-03-16 02:33:57
Needs some research first
2025-03-16 02:34:36
Seems people are really interested in that drawing board icon
lonjil
Meow Needs some research first
2025-03-16 02:35:17
what kind of research?
jonnyawsom3
2025-03-16 03:20:47
I was trying to think if splitting it into separate layers or using splines could help
Meow
lonjil what kind of research?
2025-03-16 04:11:51
Figuring out what it is and how to use it
lonjil
Meow Figuring out what it is and how to use it
2025-03-16 04:12:15
It's a very fast PNG encoder
Meow
2025-03-16 04:15:41
I heard of fpng before
2025-03-16 04:30:16
Don't know how to use that fpnge
Orum
2025-03-16 04:58:09
don't bother
2025-03-16 04:58:38
IIRC that's the one that only takes PNG input, so it's useless as a fast encoder
spider-mario
2025-03-16 06:17:30
it’s also a library that could be integrated in a tool that reads from other formats (or generates pixels directly)
2025-03-16 06:17:59
only the bundled `fpnge_main.cc` is limited to PNG input
juliobbv
Meow Quite OK JXL
2025-03-16 10:40:42
QOJXL
CrushedAsian255
Meow ```byte sec command ---- 84,018 0.041 qoi 83,936 0.038 cjxl -d 0 -e 1 65,798 1.291 oxipng -o 4 47,660 0.836 cjxl -d 0 35,702 1.342 cwebp -lossless -m 6 -q 100 28,103 10.502 cjxl -d 0 -e 10 -E 4 -I 100 -g 3```
2025-03-16 11:39:43
-e 11 is bigger...?
```
JPEG XL encoder v0.11.1 0.11.1 [NEON]
Encoding [Modular, lossless, effort: 11]
Compressed to 30018 bytes (0.229 bpp).
1024 x 1024, 0.002 MP/s [0.00, 0.00], , 1 reps, 14 threads.
cjxl gimp.png -d 0 -e 11 gimp.jxl --allow_expert_options  1535.08s user 6.70s system 97% cpu 26:14.66 total
```
monad
2025-03-17 12:44:13
not likely if you compare for the same input and encoder version.
jonnyawsom3
CrushedAsian255 -e 11 is bigger...? ``` JPEG XL encoder v0.11.1 0.11.1 [NEON] Encoding [Modular, lossless, effort: 11] Compressed to 30018 bytes (0.229 bpp). 1024 x 1024, 0.002 MP/s [0.00, 0.00], , 1 reps, 14 threads. cjxl gimp.png -d 0 -e 11 gimp.jxl --allow_expert_options 1535.08s user 6.70s system 97% cpu 26:14.66 total ```
2025-03-17 01:10:28
Maybe the original file compared to this under the alpha?
Meow
2025-03-17 01:14:45
The genuine file
2025-03-18 03:03:47
SVG source
2025-03-25 02:00:22
I have been slowly setting up a MacBook4,1 as another benchmark platform
2025-03-25 02:05:43
Its Geekbench 6 scores indicate it would be good for benchmarks https://browser.geekbench.com/v6/cpu/122599
2025-03-25 02:07:07
Multi-core Score is <10% of my main device
2025-03-27 05:34:08
I've installed `qoi`, `libjxl`, `webp`, `libavif`, `libheif` and `oxipng` on it via MacPorts
Meow The speeds aren't very different with i5-9600K ```byte sec command ---- 64,267,322 4.808 oxipng -a -f 0 --zc 1 ---- 38,811,969 1.598 qoi 38,332,388 13.041 oxipng -a -o 0 36,019,648 3.258 cwebp -lossless -m 0 -f 0 -q 0 -mt 35,330,537 6.417 heif-enc -L -p preset=ultrafast --matrix_coefficients=0 34,204,919 5.610 avifenc -s 9 -j all -l 28,886,875 1.969 cjxl -d 0 -e 1```
2025-03-28 01:44:16
The benchmark for the same image on the MacBook4,1 is done
```
byte        sec     command
----
64,267,322  10.833  oxipng -a -f 0 --zc 1
----
43,662,994  4.300   qoi
40,848,451  30.326  avifenc -s 9 -j all -l
38,332,388  30.078  oxipng -a -o 0
36,019,648  8.105   cwebp -lossless -m 0 -f 0 -q 0 -mt
29,846,465  6.145   cjxl -d 0 -e 1
```
2025-03-28 01:45:54
What really surprised me is that there are significant filesize differences for everything except PNG and WebP
2025-03-28 01:49:14
Only the `qoi` utility is different and `avifenc` is slightly older; all libraries were installed via MacPorts on that Mac. How could such differences happen?
2025-03-28 01:55:07
I've made sure that the JXL file can be opened
jonnyawsom3
Meow The benchmark for the same image on MacBook4,1 is done ```byte sec command ---- 64,267,322 10.833 oxipng -a -f 0 --zc 1 ---- 43,662,994 4.300 qoi 40,848,451 30.326 avifenc -s 9 -j all -l 38,332,388 30.078 oxipng -a -o 0 36,019,648 8.105 cwebp -lossless -m 0 -f 0 -q 0 -mt 29,846,465 6.145 cjxl -d 0 -e 1```
2025-03-28 05:35:32
Was that using the clang version of cjxl again?
Meow
2025-03-28 05:38:02
Hmm it says depending on clang-17 https://ports.macports.org/port/libjxl/details/
2025-03-28 05:39:00
But the clang version of cjxl binary on Windows didn't produce size differences
jonnyawsom3
2025-03-28 02:11:32
Might take a while to unravel the code spaghetti and fix this
2025-03-28 02:12:06
I know how, it'll just take time
Meow
2025-03-28 02:26:17
I simply copied workflows from my main computer to that old Mac
jonnyawsom3
2025-03-31 12:52:26
Neat, 4x faster decoding than PNG on a random file I found in a game's cache folder of all places
2025-04-02 10:03:37
```
20,088,614  FD1.jxl
20,088,614  FD2.jxl
 9,867,679  FD3.jxl
 9,867,679  FD4.jxl
 5,510,643  New-FD1.jxl
 6,797,611  New-FD2.jxl
 7,152,288  New-FD3.jxl
 8,902,969  New-FD4.jxl
 5,477,158  Default.jxl
```
Still work to do, but progress. Now the different levels actually increase decode speed at the cost of filesize, while being smaller than current.
2025-04-02 10:05:08
Still working on Faster Decoding 4, may turn it into 3 levels instead
2025-04-02 10:40:47
Oh, also progressive lossless now ranges from 20%-50% larger instead of 100%. Lots more tweaking to do though, since this is just a side effect of fixing Faster Decoding
```
1,552,046  Normal.jxl
3,159,831  Old-Progressive.jxl
2,336,824  New-Progressive.jxl
```
2025-04-03 07:06:06
More progress for Lossless Faster Decoding
```
Summary                                              PNG         7,106,592
djxl --disable_output --num_reps 10 FD4.jxl ran                  6,031,578
  1.07 ± 0.01 times faster than djxl --disable_output --num_reps 10 FD3.jxl     5,765,455
  2.02 ± 0.06 times faster than djxl --disable_output --num_reps 10 FD2.jxl     4,932,279
  2.37 ± 0.03 times faster than djxl --disable_output --num_reps 10 FD1.jxl     4,807,033
  3.66 ± 0.05 times faster than djxl --disable_output --num_reps 10 Normal.jxl  4,807,964
```
Depends on image content, effort and resolution, but overall a lot better than it was
Orum
2025-04-03 08:04:30
are all of these optimizations only for lossless?
jonnyawsom3
2025-04-03 08:17:41
Yeah, lossy has separate parameters that already work well enough
2025-04-03 08:21:09
Gonna try and fix progressive lossless too, and did a flyby fix of allowing different predictors with Delta Palette which was added years ago but got removed again in a recent PR
username
2025-04-03 08:22:32
ill DM you some lines to mess with that I tried to get sapien to mess with
Orum
2025-04-03 09:57:11
eh, I could *always* use faster decode speed, especially in lossy
2025-04-03 09:57:32
mainly because I'd love to find something I can use to replace prores
jonnyawsom3
2025-04-03 10:09:23
I've got ideas, but I'll need to figure out what it already does for lossy
2025-04-04 10:52:02
Did some work on Progressive Lossless too
```
(Oxi)PNG   8,482,827 bytes
Adam7 PNG  9,940,041 bytes (Full image at 2.2%)
Effort 1   8,135,838 bytes (Not progressive)
Effort 2   7,566,316 bytes
Effort 3   7,962,127 bytes (Weighted predictor)
Effort 4   7,099,777 bytes
Effort 5   7,130,420 bytes
Effort 6   7,081,891 bytes
Effort 7   7,019,834 bytes
Default 7  5,628,080 bytes (Not progressive)
Effort 8   6,750,659 bytes
Effort 9   6,728,930 bytes
Effort 10  6,722,822 bytes (Full image at 1%)
```
2025-04-07 01:02:38
More results in <#1358733203619319858>
2025-04-17 12:18:28
So, libjxl *really* doesn't like dark areas. Original, `-d 1`, `-d 0.1` and classic JPEG at under half the bpp of `-d 0.1` while still looking better
2025-04-17 12:22:21
And jpegli changes the brightness entirely
A homosapien
2025-04-17 12:28:37
What does butteraugli say?