JPEG XL

Info

rules 57
github 35276
reddit 647

JPEG XL

tools 4225
website 1655
adoption 20712
image-compression-forum 0

General chat

welcome 3810
introduce-yourself 291
color 1414
photography 3435
other-codecs 23765
on-topic 24923
off-topic 22701

Voice Channels

General 2147

Archived

bot-spam 4380

benchmarks

veluca
2024-12-28 08:28:58
and I never spent too long on making it work well for modular
Quackdoc
2024-12-28 08:40:13
I used it for doing my JXL videos, but now all I think it does is bug out. I haven't benched them in a while
A homosapien
2025-01-04 10:08:56
2025-01-04 10:10:50
12.5 MP image btw
jonnyawsom3
2025-01-04 10:21:42
I assume the faster decoding results are effort 7?
A homosapien
2025-01-04 10:27:34
Yup
Traneptora
A homosapien
2025-01-05 12:29:51
what feature is starting to be used at effort 3 that makes it that much slower to decode?
jonnyawsom3
2025-01-05 12:32:24
Effort 1 and 2 have fixed MA trees with lookup tables in the decoder
2025-01-05 12:34:27
And IIRC the weighted predictor is enabled at effort 3
AccessViolation_
2025-01-05 12:48:30
effort 3 exclusively uses the self-correcting predictor as far as I understand it, yeah
2025-01-05 12:52:33
https://discord.com/channels/794206087879852103/803645746661425173/1321405548981256253
jonnyawsom3
2025-01-05 06:40:21
Being self-correcting, it makes sense to use as the sole predictor, but the drop in decode speed from its adaptive nature is quite big... Wonder if using exclusively gradient instead would've been better
Tirr
2025-01-05 06:41:50
gradient only is effort 2
jonnyawsom3
2025-01-05 08:45:11
Oh, right
Traneptora
2025-01-07 07:36:18
self-correcting is very slow I've found
2025-01-07 07:36:29
I'm not sure how good it is in practice as a lock compared to gradient
jonnyawsom3
2025-01-07 08:18:50
I'm surprised no other predictors are tried until effort 9, at the very least it's what faster_decoding should do
_wb_
2025-01-07 08:22:26
Generally the gradient predictor and the weighted predictor are pretty much as good as the other ones. Better context modeling gives more advantage than better prediction.
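A minimal Python sketch of the gradient (ClampedGradient) predictor discussed here, plus a toy stand-in for the adaptive idea. The real libjxl weighted predictor blends four sub-predictors with per-position error estimates and is far more involved; `ToySelfCorrecting` is an invented illustration of why adaptivity costs decode time, not the actual algorithm:

```python
def clamped_gradient(n, w, nw):
    # Predict N + W - NW (a planar fit), clamped to [min(N, W), max(N, W)].
    lo, hi = (n, w) if n < w else (w, n)
    return min(max(n + w - nw, lo), hi)

class ToySelfCorrecting:
    # Toy illustration only: bias the gradient prediction by a running
    # average of recent prediction errors. The per-pixel state update is
    # the part that makes adaptive predictors slower to decode.
    def __init__(self):
        self.bias = 0.0

    def predict(self, n, w, nw):
        return clamped_gradient(n, w, nw) + round(self.bias)

    def update(self, actual, predicted):
        # Exponential moving average of the error; decoder must do this
        # for every pixel, in order, which limits parallelism.
        self.bias = 0.9 * self.bias + 0.1 * (actual - predicted)
```

The clamp is what keeps the gradient predictor cheap: no history, so a decoder can use fixed lookup-table paths like the ones mentioned for efforts 1 and 2.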
jonnyawsom3
2025-01-08 09:02:05
While messing with dithered 1-bit images, I noticed a sudden memory spike at e5; turns out it was using patches on the blue noise
```
wintime -- cjxl "C:\Users\jonat\Documents\Compression\Dithering\Blue Noise.png" Test.jxl -d 0 -e 10
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 10]
Compressed to 168.4 kB (0.650 bpp).
1920 x 1080, 0.154 MP/s [0.15, 0.15], , 1 reps, 16 threads.
PageFaultCount: 652758
PeakWorkingSetSize: 1.892 GiB
QuotaPeakPagedPoolUsage: 33.38 KiB
QuotaPeakNonPagedPoolUsage: 27.75 KiB
PeakPagefileUsage: 1.977 GiB
Creation time 2025/01/08 08:57:33.917
Exit time 2025/01/08 08:57:47.380
Wall time: 0 days, 00:00:13.463 (13.46 seconds)
User time: 0 days, 00:00:01.015 (1.02 seconds)
Kernel time: 0 days, 00:00:14.546 (14.55 seconds)
```
```
wintime -- cjxl "C:\Users\jonat\Documents\Compression\Dithering\Blue Noise.png" Test.jxl -d 0 -e 10 --patches=0
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 10]
Compressed to 115.0 kB (0.444 bpp).
1920 x 1080, 0.140 MP/s [0.14, 0.14], , 1 reps, 16 threads.
PageFaultCount: 77667
PeakWorkingSetSize: 97.77 MiB
QuotaPeakPagedPoolUsage: 33.38 KiB
QuotaPeakNonPagedPoolUsage: 12.21 KiB
PeakPagefileUsage: 98.69 MiB
Creation time 2025/01/08 08:58:22.390
Exit time 2025/01/08 08:58:37.242
Wall time: 0 days, 00:00:14.851 (14.85 seconds)
User time: 0 days, 00:00:00.109 (0.11 seconds)
Kernel time: 0 days, 00:00:16.953 (16.95 seconds)
```
2025-01-08 09:04:47
This is the first progressive view when using lossy, I'm honestly surprised it matched so many together
monad
2025-01-08 01:22:50
it's matching tiny rectangles, what's surprising
jonnyawsom3
2025-01-08 09:32:34
It's matching almost individual pixels, on what's meant to be noise, on top of a lossy image that should be changing the background so that they aren't detected as patches. Unless the old Dots code is still triggering?
_wb_
2025-01-08 09:44:40
We need better patch/dots heuristics to deal with this kind of stuff. Currently patches are mostly for text in screenshots and dots are intended for something like a few stars in the sky in a regular photo, not these kind of 1-bit images...
jonnyawsom3
2025-01-08 09:48:57
Specifically for 1-bit, I know you wanted to use a modular form of bitpacking involving squeeze too. So there are a lot of ways to optimize an image like this, although maybe a simple check could be "If patches cover >75% of the image, use lossless instead", since the overhead from signalling every pixel is the only reason the filesize increased
_wb_
2025-01-08 09:52:23
Lossless also uses patches though. But it should be made smarter, e.g. in a 1-bit image it is not going to be useful to have very small patches...
jonnyawsom3
2025-01-08 09:57:27
Right, I should've clarified. If most of the image is patches, just encode the entire frame as lossless without patches or dots. Though, at that point, we're recreating your [More Patches for Non-Photo](https://github.com/libjxl/libjxl/pull/1395) PR with a sledgehammer instead of a comb...
_wb_
2025-01-08 10:04:57
Also if most of the image is patches but those patches are relatively large, like letters of text, then you want to keep them as patches...
jonnyawsom3
2025-01-08 10:09:57
Good point. Finding the balance between patch size and signalling overhead
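The heuristic being floated could look something like this. Purely a hypothetical sketch: the function name, thresholds, and patch representation are all made up for illustration, not anything in libjxl:

```python
def should_drop_patches(patch_boxes, image_w, image_h,
                        coverage_limit=0.75, small_patch_px=16):
    """Hypothetical check: skip the patch tool only when patches cover
    most of the frame AND are mostly tiny (per-patch signalling overhead
    dominates, as with near-single-pixel dither dots). Large patches
    such as letters of text stay worthwhile even at high coverage.

    patch_boxes: list of (x, y, w, h) rectangles."""
    if not patch_boxes:
        return False
    areas = [w * h for (_, _, w, h) in patch_boxes]
    coverage = sum(areas) / (image_w * image_h)
    median_area = sorted(areas)[len(areas) // 2]
    return coverage > coverage_limit and median_area < small_patch_px
```

So a 1-bit dither (thousands of tiny boxes covering the frame) would fall back to plain Modular coding, while a screenshot full of text (large letter-sized boxes) would keep its patches.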
2025-01-08 11:21:06
Did a small test after thinking of Hydrium https://discord.com/channels/794206087879852103/806898911091753051/1326662563684941834
```
wintime -- java -jar "C:\Program Files\JPEG XL\jxlatte.jar" Test.jxl
Decoded to pixels, discarding output.
PageFaultCount: 300590
PeakWorkingSetSize: 1.009 GiB
QuotaPeakPagedPoolUsage: 446.9 KiB
QuotaPeakNonPagedPoolUsage: 22.29 KiB
PeakPagefileUsage: 2.202 GiB
Creation time 2025/01/08 23:10:53.944
Exit time 2025/01/08 23:11:08.904
Wall time: 0 days, 00:00:14.959 (14.96 seconds)
User time: 0 days, 00:00:01.687 (1.69 seconds)
Kernel time: 0 days, 00:00:17.375 (17.38 seconds)
```
```
wintime -- jxl-oxide Test.jxl
2025-01-08T23:08:13.000099Z  INFO jxl_oxide_cli::decode: Image dimension: 7680x4320
2025-01-08T23:08:34.563922Z  INFO jxl_oxide_cli::decode: Took 21563.24 ms (1.54 MP/s)
2025-01-08T23:08:34.564099Z  INFO jxl_oxide_cli::decode: No output path specified, skipping output encoding
PageFaultCount: 2166433
PeakWorkingSetSize: 1.31 GiB
QuotaPeakPagedPoolUsage: 43.63 KiB
QuotaPeakNonPagedPoolUsage: 11.02 KiB
PeakPagefileUsage: 1.915 GiB
Creation time 2025/01/08 23:08:12.979
Exit time 2025/01/08 23:08:34.670
Wall time: 0 days, 00:00:21.691 (21.69 seconds)
User time: 0 days, 00:00:03.000 (3.00 seconds)
Kernel time: 0 days, 00:00:29.421 (29.42 seconds)
```
```
wintime -- djxl --disable_output Test.jxl
JPEG XL decoder v0.12.0 4b77417 [AVX2,SSE2]
Decoded to pixels.
7680 x 4320, 0.149 MP/s [0.15, 0.15], , 1 reps, 16 threads.
PageFaultCount: 99732426
PeakWorkingSetSize: 1.128 GiB
QuotaPeakPagedPoolUsage: 33.91 KiB
QuotaPeakNonPagedPoolUsage: 8.227 KiB
PeakPagefileUsage: 1.134 GiB
Creation time 2025/01/08 23:14:38.183
Exit time 2025/01/08 23:18:21.198
Wall time: 0 days, 00:03:43.014 (223.01 seconds)
User time: 0 days, 00:02:19.718 (139.72 seconds)
Kernel time: 0 days, 00:01:23.187 (83.19 seconds)
```
2025-01-13 09:28:58
Continuing from https://discord.com/channels/794206087879852103/794206170445119489/1328288915022549012
2025-01-13 09:29:00
```
wintime -- cjxl -d 0 -e 1 Test.ppm Test.jxl --streaming_input --streaming_output
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 1]
Compressed to 128383.4 kB (4.981 bpp).
15000 x 13746, 132.127 MP/s [132.13, 132.13], , 1 reps, 16 threads.
PageFaultCount: 189782
PeakWorkingSetSize: 596.6 MiB
QuotaPeakPagedPoolUsage: 1.185 MiB
QuotaPeakNonPagedPoolUsage: 7.164 KiB
PeakPagefileUsage: 10.14 MiB
Creation time 2025/01/13 09:21:16.954
Exit time 2025/01/13 09:21:18.584
Wall time: 0 days, 00:00:01.629 (1.63 seconds)
User time: 0 days, 00:00:01.500 (1.50 seconds)
Kernel time: 0 days, 00:00:11.250 (11.25 seconds)

wintime -- cjxl -d 0 -e 1 Test.ppm Test.jxl --streaming_input
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 1]
Compressed to 128382.5 kB (4.981 bpp).
15000 x 13746, 152.497 MP/s [152.50, 152.50], , 1 reps, 16 threads.
PageFaultCount: 299185
PeakWorkingSetSize: 913.5 MiB
QuotaPeakPagedPoolUsage: 1.185 MiB
QuotaPeakNonPagedPoolUsage: 17.39 KiB
PeakPagefileUsage: 1.28 GiB
Creation time 2025/01/13 09:22:02.205
Exit time 2025/01/13 09:22:03.988
Wall time: 0 days, 00:00:01.783 (1.78 seconds)
User time: 0 days, 00:00:00.843 (0.84 seconds)
Kernel time: 0 days, 00:00:10.546 (10.55 seconds)

wintime -- cjxl -d 0 -e 1 Test.ppm Test.jxl
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 1]
Compressed to 127863.9 kB (4.961 bpp).
15000 x 13746, 154.664 MP/s [154.66, 154.66], , 1 reps, 16 threads.
PageFaultCount: 450323
PeakWorkingSetSize: 1.468 GiB
QuotaPeakPagedPoolUsage: 33.38 KiB
QuotaPeakNonPagedPoolUsage: 17.52 KiB
PeakPagefileUsage: 2.433 GiB
Creation time 2025/01/13 09:22:13.166
Exit time 2025/01/13 09:22:15.818
Wall time: 0 days, 00:00:02.652 (2.65 seconds)
User time: 0 days, 00:00:00.750 (0.75 seconds)
Kernel time: 0 days, 00:00:10.562 (10.56 seconds)
```
2025-01-13 09:33:49
So streamed output slows down effort 1 for some reason, makes the Peak Pagefile usage show only 10 MB when the Working Set uses 600 MB (unless I'm understanding it wrong), and 300+ MB are being used to buffer the output when the file is 121 MB. Strange
Orum
2025-01-13 10:22:55
very interesting
2025-01-13 10:23:28
though it's not by much... I'll see if I can replicate that
2025-01-15 07:06:54
I get the opposite:
```
cjxl -d 0 -e 1 tjs.ppm tjs.jxl --streaming_input --streaming_output
Time (mean ± σ): 152.1 ms ± 5.0 ms [User: 512.3 ms, System: 297.7 ms]
Range (min … max): 146.3 ms … 161.2 ms    20 runs

cjxl -d 0 -e 1 tjs.ppm tjs.jxl --streaming_input
Time (mean ± σ): 216.4 ms ± 4.5 ms [User: 537.5 ms, System: 567.5 ms]
Range (min … max): 212.5 ms … 226.4 ms    13 runs

cjxl -d 0 -e 1 tjs.ppm tjs.jxl
Time (mean ± σ): 309.7 ms ± 4.5 ms [User: 611.0 ms, System: 342.8 ms]
Range (min … max): 304.6 ms … 317.3 ms    10 runs
```
2025-01-15 07:09:52
almost exactly half the time with streaming I/O
2025-01-15 07:10:33
I wonder if it's HW dependent 🤔
2025-01-15 07:13:13
well both our results do show lower real time, but mine also has less user and system time
jonnyawsom3
2025-01-15 07:17:10
I was going by the cjxl MP/s
2025-01-15 07:20:26
Likely hardware differences (mine is 8 years old) and wintime versus hyperfine
Orum
2025-01-15 07:29:31
I tend to prefer externally timed tests, as tools can misreport (and have misreported) execution time
_wb_
2025-01-15 07:50:38
External timing includes the time to decode the input (which can be longer than the encode time in case of e1) or for decoders to encode the output (which can be longer than the decode time when decoding VarDCT to PNG)
2025-01-15 07:51:28
In a sense, cjxl/djxl do external timing of the libjxl part.
jonnyawsom3
2025-01-15 09:11:14
```
hyperfine -w 3 "cjxl -d 0 -e 1 Test.ppm nul --streaming_input --streaming_output" "cjxl -d 0 -e 1 Test.ppm nul --streaming_input" "cjxl -d 0 -e 1 Test.ppm nul --streaming_output" "cjxl -d 0 -e 1 Test.ppm nul"
Benchmark 1: cjxl -d 0 -e 1 Test.ppm nul --streaming_input --streaming_output
Time (mean ± σ): 1.699 s ± 0.029 s [User: 10.398 s, System: 0.882 s]
Range (min … max): 1.667 s … 1.754 s    10 runs
Benchmark 2: cjxl -d 0 -e 1 Test.ppm nul --streaming_input
Time (mean ± σ): 1.705 s ± 0.030 s [User: 9.994 s, System: 0.877 s]
Range (min … max): 1.679 s … 1.760 s    10 runs
Benchmark 3: cjxl -d 0 -e 1 Test.ppm nul --streaming_output
Time (mean ± σ): 2.042 s ± 0.040 s [User: 10.030 s, System: 0.663 s]
Range (min … max): 2.001 s … 2.132 s    10 runs
Benchmark 4: cjxl -d 0 -e 1 Test.ppm nul
Time (mean ± σ): 2.301 s ± 0.049 s [User: 10.118 s, System: 0.883 s]
Range (min … max): 2.195 s … 2.358 s    10 runs
Summary
cjxl -d 0 -e 1 Test.ppm nul --streaming_input --streaming_output ran
1.00 ± 0.02 times faster than cjxl -d 0 -e 1 Test.ppm nul --streaming_input
1.20 ± 0.03 times faster than cjxl -d 0 -e 1 Test.ppm nul --streaming_output
1.35 ± 0.04 times faster than cjxl -d 0 -e 1 Test.ppm nul
```
I was getting huge variance between runs; realised it was my SSD saturating its SATA connection. Using `nul` avoids any writing at all, but does mean `--streaming_output` has no effect. I'm also using a 200MP image, so that's probably why you got different timings, since I was I/O constrained and you have much shorter encodes with likely less threading too
Orum
_wb_ External timing includes the time to decode the input (which can be longer than the encode time in case of e1) or for decoders to encode the output (which can be longer than the decode time when decoding VarDCT to PNG)
2025-01-15 09:13:52
not sure I would call reading a ppm file 'decoding'
jonnyawsom3
2025-01-15 09:14:17
It obviously still has an effect on timings though
Orum
2025-01-15 09:14:20
I just put the file on tmpfs
2025-01-15 09:15:17
but yeah, it looks similar to my results
CrushedAsian255
It obviously still has an effect on timings though
2025-01-15 10:20:06
Especially on slower drives / larger images
jonnyawsom3
2025-01-15 10:37:07
In my case, both
2025-01-17 03:03:16
So, `--disable_output` doesn't seem to disable the output...
```
wintime -- djxl Test.jxl --disable_output
JPEG XL decoder v0.12.0 3d7cec2 [AVX2,SSE2]
Decoded to pixels.
7680 x 4320, 135.359 MP/s [135.36, 135.36], , 1 reps, 16 threads.
PageFaultCount: 132287
PeakWorkingSetSize: 497.1 MiB
QuotaPeakPagedPoolUsage: 33.92 KiB
QuotaPeakNonPagedPoolUsage: 9.023 KiB
PeakPagefileUsage: 778.9 MiB
Creation time 2025/01/17 15:02:05.728
Exit time 2025/01/17 15:02:06.023
Wall time: 0 days, 00:00:00.295 (0.30 seconds)
User time: 0 days, 00:00:00.453 (0.45 seconds)
Kernel time: 0 days, 00:00:03.000 (3.00 seconds)
```
Sending an output to `nul` *effectively* does
```
wintime -- djxl Test.jxl --output_format ppm nul
JPEG XL decoder v0.12.0 3d7cec2 [AVX2,SSE2]
Decoded to pixels.
7680 x 4320, 150.037 MP/s [150.04, 150.04], , 1 reps, 16 threads.
PageFaultCount: 83570
PeakWorkingSetSize: 212.3 MiB
QuotaPeakPagedPoolUsage: 33.92 KiB
QuotaPeakNonPagedPoolUsage: 9.156 KiB
PeakPagefileUsage: 214 MiB
Creation time 2025/01/17 15:01:53.150
Exit time 2025/01/17 15:01:53.425
Wall time: 0 days, 00:00:00.274 (0.27 seconds)
User time: 0 days, 00:00:00.296 (0.30 seconds)
Kernel time: 0 days, 00:00:02.687 (2.69 seconds)
```
2025-01-17 03:06:29
Scratch that, writing a PNG is still faster to decode and less memory hungry, so whatever `--disable_output` is doing must be very wrong
RaveSteel
2025-01-17 03:17:11
Same behaviour with different images? Writing a ppm to /dev/null is equally as fast as disabling output for me
2025-01-17 03:17:27
png to /dev/null takes around 1.5-2 seconds longer
jonnyawsom3
2025-01-17 03:18:32
And sending PPM to `nul` with `--streaming_output` with cjxl is faster than `--disable_output` too, around 12% faster
```
cjxl C:\Users\jonat\Pictures\VRChat\2024-06\VRChat_2024-06-09_04-37-50.894_7680x4320.png -e 1 -d 0 nul --streaming_output
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 1]
Compressed to 36592.7 kB including container (8.823 bpp).
7680 x 4320, 124.296 MP/s [124.30, 124.30], , 1 reps, 16 threads.
Time (abs ≡): 885.6 ms [User: 2700.0 ms, System: 55.9 ms]

cjxl C:\Users\jonat\Pictures\VRChat\2024-06\VRChat_2024-06-09_04-37-50.894_7680x4320.png -e 1 -d 0 --disable_output
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 1]
Compressed to 36592.7 kB including container (8.823 bpp).
7680 x 4320, 107.868 MP/s [107.87, 107.87], , 1 reps, 16 threads.
Time (abs ≡): 925.9 ms [User: 2622.8 ms, System: 118.4 ms]
```
RaveSteel Same behaviour with different images? Writing a ppm to /dev/null is equally as fast as disabling output for me
2025-01-17 03:27:45
Trying with other images, the speed is erratic: sometimes the same, sometimes a few milliseconds or a few percent apart, but the higher memory usage is consistent
```
JPEG XL decoder v0.12.0 3d7cec2 [AVX2,SSE2]
Decoded to pixels.
3840 x 2160, 129.827 MP/s [129.83, 129.83], , 1 reps, 16 threads.
3840 x 2160, 152.179 MP/s [152.18, 152.18], , 1 reps, 16 threads.
djxl Test4.jxl --output_format ppm nul ran
1.07 ± 0.04 times faster than djxl Test4.jxl --disable_output
```
2025-01-17 03:37:20
The cjxl results are consistent though
RaveSteel
2025-01-17 04:21:31
I didn't test memory usage, but for multiple other images disable output and writing ppm to /dev/null are pretty much always equal. Decoding to png and writing that to /dev/null always takes a bit more time
Orum
2025-01-17 04:23:22
yeah, I noticed png decode time is pretty awful
2025-01-17 04:23:33
like so bad that I can't even compare fpnge to cjxl
jonnyawsom3
2025-01-17 04:27:38
This is PNG encode, not decode
Orum
2025-01-17 04:29:17
😕 it looks like you were doing decode, not encode
2025-01-17 04:29:39
oh, you mean in the earlier djxl tests
jonnyawsom3
2025-01-17 04:36:13
Ehh, sorry. Shouldn't have mixed the results
2025-01-17 04:37:50
On yet another note, I realised `--faster_decoding` lossless (I know it's broken) doesn't set block size to 128x128, and effort 1 is locked at 256x256 as far as I can tell. Wonder if it would be much of an improvement in speed, since it actually helped compression at e2 in my small test
Orum
2025-01-18 09:08:04
how is faster decoding for lossless broken?
jonnyawsom3
2025-01-18 10:11:05
It seems to do things like disable expensive predictors, but without replacing them with cheaper ones, causing huge filesizes
damian101
2025-01-22 02:52:41
I just stumbled across a case where ssimulacra2 really falls apart. Visually, the quality very obviously decreases from one image to the next. And Butteraugli agrees:
6.6608324051 3-norm: 2.825210
16.4659614563 3-norm: 5.622267
22.6416397095 3-norm: 8.332853
SSIMULACRA does, too:
0.10421639
0.18506859
0.24036341
VMAF as well:
71.326272
57.736229
34.167257
SSIM:
0.795168
0.654748
0.552588
But SSIMULACRA2 produces these ridiculous scores:
64.03269612
24.03410571
32.43812252
Orum
2025-01-22 03:13:27
my guess is it's an alignment issue
monad
2025-01-22 10:51:17
https://jon-cld.s3.amazonaws.com/test/index.html
juliobbv
Orum my guess is it's an alignment issue
2025-01-22 05:51:11
yeah, I suspect SSIMULACRA2 sees more "blocking" in the 1/3 image than in the 1/4, because the nearest-neighbor edges can fall outside the boundary of even pixels
2025-01-22 05:51:37
which is an issue
RaveSteel
2025-01-22 10:11:20
<@384009621519597581> Since I couldn't use the shader, I wanted to try if there is a difference by just creating a test image in Krita with a CRT scanline in the top layer. Not at all conclusive, just one image with no layers and one with layers, exported in Krita without making changes to the encoder except for setting it to lossless, but the difference for this one case is huge:
```
1,5M gradient_test_crt_layers.jxl
7,6M gradient_test_crt_no_layers.jxl
```
AccessViolation_
2025-01-22 10:12:31
Oh that's what you meant, I thought you just meant I could create the gradient in Krita
2025-01-22 10:12:52
I didn't know it gives you control over whether to use JXL's layers, that's neat
RaveSteel
2025-01-22 10:18:36
I had to use Krita since GIMP does not support exporting JXL with layers, but Krita works quite nicely
2025-01-22 10:18:50
Here's the images
2025-01-22 10:18:55
AccessViolation_
2025-01-22 10:20:08
Oh shit I dyslexia'd hard on that, misread as 1.5M and 1.6M and thought you said the difference was not huge
2025-01-22 10:20:12
That's actually amazing
RaveSteel
2025-01-22 10:20:48
It is. The base image without the crt scanlines was 850kb
AccessViolation_
2025-01-22 10:24:45
Heh, when zoomed out they render differently in Waterfox
2025-01-22 10:25:23
Both seem to have a different Moire pattern
2025-01-22 10:26:05
but when viewing at 100% they appear identical
2025-01-22 10:26:41
Probably something that should be looked into, but I don't know which part of the stack that bug is in
2025-01-22 10:30:48
PNG does surprisingly well here too at 3 MB
RaveSteel
2025-01-22 10:30:59
They look identical when viewed with a few image viewers I've tried, so probably just waterfox?
AccessViolation_ PNG does surprisingly well here too at 3 MB
2025-01-22 10:31:32
Does the PNG have layers?
2025-01-22 10:32:57
Nonetheless, an extremely solid 50% reduction for the JXL compared to the PNG
AccessViolation_
RaveSteel Does the PNG have layers?
2025-01-22 10:33:23
No, but it's not uncommon for PNG to outperform JXL on images with repeating patterns thanks to its use of LZ77
RaveSteel
2025-01-22 10:33:36
Interesting
AccessViolation_
2025-01-22 10:34:09
PNG doesn't support layers at all as far as I'm aware
2025-01-22 10:34:42
both JXLs decode to a png of the same size
RaveSteel
AccessViolation_ PNG doesn't support layers at all as far as I'm aware
2025-01-22 10:34:59
Seems so, no mention of layers in the current specification
AccessViolation_ both JXLs decode to a png of the same size
2025-01-22 10:35:39
I had even suspected a bug in Krita's encoding of JXL, so I reencoded the 7.5MB image with cjxl, but same size
2025-01-22 10:36:05
It also makes no difference whether I combine the layers beforehand or choose the flatten image option during export
AccessViolation_
2025-01-22 10:37:02
Do you have to do a destructive merge of the layers to get krita to not output it as a layered JXL?
RaveSteel
2025-01-22 10:37:43
The "flatten image" is pre-selected, but just uncheck it and you'll have a layered JXL
2025-01-22 10:38:16
Here is a TIFF with ZIP deflate compression and layers
AccessViolation_
2025-01-22 10:38:29
Oh by the way, got the non-layered version down to 930 kB using effort 10
RaveSteel
2025-01-22 10:38:43
thats almost as good as the base image
AccessViolation_
2025-01-22 10:39:17
I've also never seen this much of an improvement going from effort 7 to 10
RaveSteel
2025-01-22 10:39:54
Wow, saving the layers "as Photoshop" bloats the filesize up to over 20 MB
2025-01-22 10:40:06
No matter if ZIP or RLE is selected
2025-01-22 10:40:12
For the TIFF
2025-01-22 10:40:42
But we can clearly observe the advantage of having a format that supports layers
2025-01-22 10:40:49
Very impressive
AccessViolation_
2025-01-22 10:41:10
Yeah this is really cool, I honestly wasn't expecting this much of an improvement
RaveSteel
2025-01-22 10:41:31
So we can safely assume that screenshots of games like Balatro, or emulated games with a CRT filter, would be immensely smaller if layers were used
AccessViolation_
2025-01-22 10:42:26
Can you test outputting a layered JXL on effort 10 with Krita, btw? I'm not sure how going from lossless jxl to lossless jxl works with cjxl, and -e 10 increased the file size a little bit
RaveSteel
2025-01-22 10:43:35
Krita supports effort 9 at most, so I'll try that, but if you have Krita you can open the layered JXL I posted yourself
2025-01-22 10:43:49
GIMP also doesn't seem to support showing layers of layered JXLs
AccessViolation_
RaveSteel Kirta supports effort 9 at most, so I'll try that, but if you have Krita you can open the layered JXL I posted yourself
2025-01-22 10:44:07
Wow it's nice that that just, works, as an authoring feature
RaveSteel
2025-01-22 10:44:14
Very much so
2025-01-22 10:45:08
Krita's export with e9 is 1.3MB
2025-01-22 10:45:38
jonnyawsom3
AccessViolation_ I didn't know it gives you control over whether to use JXL's layers, that's neat
2025-01-22 10:45:58
Krita is unironically one of the finest JXL implementations around, thanks to <@274048677851430913>. IIRC it even retains the layer names when importing back to Krita, along with every option having a tooltip explaining its function
RaveSteel
2025-01-22 10:46:23
It did retain the layer names for me
jonnyawsom3
2025-01-22 10:46:32
There was an idea to import paged JXLs as layer groups too
RaveSteel
2025-01-22 10:47:09
The layer names are embedded into the JXL metadata
```
JPEG XL file format container (ISO/IEC 18181-2)
JPEG XL image, 1920x1200, (possibly) lossless, 16-bit RGB+Alpha
Color space: 9080-byte ICC profile, CMM type: "lcms", color space: "RGB ", rendering intent: 0
layer: full image size, name: "Background"
layer: full image size, name: "Malebene 1 (eingefügt)"
```
jonnyawsom3
AccessViolation_ Heh, when zoomed out they render differently in Waterfox
2025-01-22 10:50:03
Waterfox had a bug for me that exponentially increased memory usage with zoom level, hit 10GB on a 1080p image. Seems they have a lot of weird bugs with it
AccessViolation_ No, but it's not uncommon for PNG to outperform JXL on iamges with repeating patterns thanks to its use of LZ77
2025-01-22 10:50:41
PNG is DEFLATE, JXL is LZ77 though restricted to groups as far as I'm aware
AccessViolation_
2025-01-22 10:51:40
> PNG uses DEFLATE, a non-patented lossless data compression algorithm involving a combination of LZ77 and Huffman coding. From Wikipedia
jonnyawsom3
2025-01-22 10:52:15
Oh yeah
AccessViolation_
2025-01-22 10:55:14
I'm not sure why JXL's LZ77 isn't as efficient. Maybe it tries smarter stuff first, after which there aren't many patterns left for LZ77, even though LZ77 would have been more effective as the first thing tried. If that's the case, an encoder option to eagerly try LZ77, without the other coding tools destroying the patterns first, would probably make JXL better than PNG in all the 'repeating pattern' cases where PNG currently does much better
2025-01-22 10:55:31
But in this case it seems effort 10 figured something out, lol
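The LZ77-on-repeats point is easy to demonstrate with stdlib zlib, which is the same DEFLATE that PNG applies to its filtered scanlines:

```python
import os
import zlib

# A 64-byte tile repeated thousands of times, versus incompressible noise
# of the same length. DEFLATE's LZ77 stage turns the exact repeats into
# short back-references, which is the kind of win PNG gets on
# scanline/CRT-style patterns.
tile = bytes(range(64))
repeated = tile * 4096              # 256 KiB of exact repeats
noise = os.urandom(len(repeated))   # same size, no repeats to find

small = len(zlib.compress(repeated, 9))
big = len(zlib.compress(noise, 9))
print(small, big)  # the repeated data compresses far smaller than the noise
```

The repeats collapse to a tiny fraction of the noise's compressed size, despite both inputs being 256 KiB.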
AccessViolation_ Oh by the way, got the non-layered version down to 930 kB using effort 10
2025-01-22 11:00:08
Wait nope. I think my file explorer updated the size as the file was still being written to. It's 1.8 MB
RaveSteel
2025-01-22 11:00:27
Still quite good
AccessViolation_
2025-01-22 11:00:34
For sure
jonnyawsom3
2025-01-22 11:01:19
Oh, I didn't notice that in Krita
RaveSteel
2025-01-22 11:05:46
Yes, I created the image with 16 bits since I wanted to use a gradient
AccessViolation_
2025-01-22 11:12:50
Now the next thing it needs is natively encoding the straight line tool as splines <:KekDog:805390049033191445>
2025-01-22 11:13:42
Scratch that, just add a "jpeg xl spline tool"
2025-01-22 11:14:03
you can export to other formats but they'll be bigger!
jonnyawsom3
2025-01-22 11:14:03
Hmm, I'm using the frame stitcher to try and do effort 10 with layers, but the output is just black regardless of blend mode
2025-01-22 11:14:58
RaveSteel
2025-01-22 11:15:41
One last thing, saving the image as a krita project is larger than the exported JXL because Krita is using PNG internally xd
jonnyawsom3
2025-01-22 11:15:52
Worse than that
2025-01-22 11:16:13
KRA files use raw bitmaps, compressed into the ZIP
RaveSteel
2025-01-22 11:16:47
Sheesh
2025-01-22 11:17:05
Someone needs to make a pull request so they can start to use JXL internally <:KekDog:805390049033191445>
jonnyawsom3
2025-01-22 11:18:24
I did mention that to Kampidh at some point heh
AccessViolation_
2025-01-22 11:19:16
I like it when software specific save files are a record of the operations performed instead of their results
2025-01-22 11:19:56
Makes for insanely small saves, but the loading process is basically recreating the work by applying those same operations, so it can be a bit slow, especially if many operations have been performed
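A toy sketch of that "record of operations" idea, with invented names and an invented op set (nothing Krita actually does): the file stores editing steps, not the resulting bitmap, and loading replays them.

```python
import json

def apply_op(img, op):
    # Replay a single recorded editing step on a 2D list of pixel values.
    name, *args = op
    if name == "fill":
        (color,) = args
        return [[color] * len(row) for row in img]
    if name == "set":
        x, y, color = args
        img = [row[:] for row in img]
        img[y][x] = color
        return img
    raise ValueError(f"unknown op {name!r}")

def save(ops):
    # The whole "project file" is just the op log; its size is independent
    # of canvas dimensions.
    return json.dumps(ops)

def load(blob, width, height):
    # Loading = replaying every op in order, so load time grows with the
    # number of operations, not with the image size on disk.
    img = [[0] * width for _ in range(height)]
    for op in json.loads(blob):
        img = apply_op(img, op)
    return img
```

For example, `save([["fill", 7], ["set", 1, 0, 9]])` is a few dozen bytes whether the canvas is 3x2 or 30000x20000; the trade-off is exactly the replay cost described above.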
Kampidh
I did mention that to Kampidh at some point heh
2025-01-22 11:25:47
KRXL <:KekDog:805390049033191445>
jonnyawsom3
2025-01-22 11:26:53
Already have KRA and KRZ why not KRX
AccessViolation_
2025-01-22 11:29:24
We did it to DNG, we can give Krita the makeover too
jonnyawsom3
KRA files use raw bitmaps, compressed into the ZIP
2025-01-22 11:29:43
I stand corrected, I... Don't know what this is
```
VERSION 2
TILEWIDTH 64
TILEHEIGHT 64
PIXELSIZE 4
DATA 510
448,960,LZF,11865
```
2025-01-22 11:30:57
AccessViolation_
RaveSteel Someone needs to make a pull request so they can start to use JXL internally <:KekDog:805390049033191445>
2025-01-22 11:32:30
I mean *especially* being able to pack all the layers into the same JXL file could be huge as we have seen
2025-01-22 11:33:02
No actually, it's separating the layers that was huge, which using multiple images for the layers would also achieve regardless of the format used
2025-01-22 11:33:56
Presumably it currently uses independent images for layers which can't share data though, and JXL can share data between layers, as I understand it
2025-01-22 11:34:26
On top of JXL being just better even if every layer was its own complete image, of course
jonnyawsom3
AccessViolation_ Presumably it currently uses independent images for layers which can't share data though, and JXL can share data between layers, as I understand it
2025-01-22 11:38:43
KRA files are just ZIP with different extensions, so here's a demo one just with a layer of my Doggo.png test image. KRZ files are for archival and skip the `mergedimage.png`
2025-01-22 11:44:34
Well, replacing the layer breaks it, but both 'PNG's work fine as JXLs
2025-01-22 11:45:58
A theoretical KRX, using effort 1 JXLs
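Since a KRA file is just a ZIP archive, the layer-swap experiment above can be scripted with nothing but the standard library. A minimal sketch (the helper names and member names are made up for illustration):

```python
import zipfile

def list_layers(path):
    """Return the names of all members stored in a KRA archive (a plain ZIP)."""
    with zipfile.ZipFile(path) as z:
        return z.namelist()

def replace_member(path, member, data):
    """Rewrite the archive with one member's bytes swapped out,
    e.g. replacing a layer's PNG payload with a JXL one."""
    with zipfile.ZipFile(path) as z:
        items = {name: z.read(name) for name in z.namelist()}
    items[member] = data
    with zipfile.ZipFile(path, "w") as z:
        for name, payload in items.items():
            z.writestr(name, payload)
```

Note this naive rewrite drops per-member compression settings, which may be part of why a swapped layer confuses Krita.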
RaveSteel
2025-01-22 11:47:44
Nice
jonnyawsom3
AccessViolation_ We did it to DNG, we can give Krita the makeover too
2025-01-22 11:54:01
...She needs it https://invent.kde.org/graphics/krita/-/blob/master/libs/image/tiles3/swap/kis_lzf_compression.cpp?ref_type=heads
2025-01-22 11:54:31
14 Years ago
2025-01-22 11:54:56
And as far as I can tell it's not been touched since
AccessViolation_
2025-01-22 11:55:20
Damn
jonnyawsom3
2025-01-22 11:57:05
I mean on the plus side, no harm in leaving it as an option for backwards compatibility :P
RaveSteel
2025-01-22 11:57:15
It probably can't even be merged anymore due to its age
2025-01-22 11:57:52
Ah, i thought it was a pull request, but it's in fact a commit
2025-01-22 11:57:53
my bad
AccessViolation_
2025-01-22 11:58:28
If someone ends up making it use JXL they need to make sure to brag about it a little bit, like by saying "reduced save files by X% by using JPEG XL internally" in the release notes. The more the format gets on the good side of artists the better
2025-01-22 11:58:33
:p
RaveSteel
2025-01-22 11:59:20
Will be especially important for those working with ridiculously large files and therefore also very large filesizes
2025-01-23 12:00:00
Used to work on gigabyte sized images sometimes, the project files were very large indeed
AccessViolation_
2025-01-23 12:01:25
Wow yeah you'd want JXL for that
jonnyawsom3
2025-01-23 12:01:42
God bless chunked encoding
AccessViolation_ I'm not sure why JXL's LZ77 isn't as efficient. Maybe it tries smarter stuff first, after which there's not many patterns left for LZ77, even though it would have been more effective if that was the first thing it tried. If that's the case, an encoder option to eagerly try LZ77 without doing other coding tools that destroy the patterns first would probably make JXL better than PNG in all the 'repeating pattern' cases where PNG currently does much better
2025-01-23 01:12:42
https://discord.com/channels/794206087879852103/822105409312653333/1306947850076160000
RaveSteel
2025-01-23 01:15:14
Setting -I 0 leads to a much larger image for the unlayered example, 7.5MB to 12.3MB
2025-01-23 01:15:43
-I 100 results in the 7.5MB image
jonnyawsom3
2025-01-23 01:15:56
The LZ77 is still limited to groups though. So it can't see large patterns
RaveSteel Setting -I 0 leads to a much larger image for the unlayered example, 7.5MB to 12.3MB
2025-01-23 01:16:06
What about just on the CRT texture?
RaveSteel
2025-01-23 01:16:44
the crt scanline image is just 10KB as a PNG
2025-01-23 01:18:15
``` 12 scanlines.png 80 scanlines_I100.jxl 1020 scanlines_I0.jxl ```
2025-01-23 01:18:16
oof
2025-01-23 01:19:30
here's the original image, if you want to try yourself
2025-01-23 01:21:48
Using e10 is still 2KB larger than the PNG
2025-01-23 01:31:08
Currently trying out different predictors with this texture, using P0 and e10 got me down to 2.3KB!
2025-01-23 01:34:41
It took over 8 minutes to encode though
juliobbv
2025-01-23 01:36:30
this image should be perfect for patches I think
2025-01-23 01:36:45
can jpeg xl use patches for each layer?
RaveSteel
2025-01-23 01:38:05
0 difference in filesize with patches, at least with the normal github release
2025-01-23 01:38:34
maybe monad's build would work better?
2025-01-23 01:38:46
No idea, since I have never tried it
juliobbv
2025-01-23 01:38:53
yeah, I was thinking semi-handcrafted maybe?
RaveSteel
2025-01-23 01:39:11
P1 is 1.7KB
2025-01-23 01:39:46
Will post a list when the encodes are done
jonnyawsom3
RaveSteel maybe monad's build would work better?
2025-01-23 01:55:26
Unfortunately he lost it, but yeah. libjxl is tuned for text detection for patches, Monad tuned it for tiles and repeating patterns
RaveSteel
2025-01-23 01:56:07
Well, this specific texture features nothing but repetitions, so it should probably work fine
2025-01-23 02:41:44
``` 2304 scanlines_P0.jxl 1729 scanlines_P1.jxl 1897 scanlines_P2.jxl 24299 scanlines_P3.jxl 1852 scanlines_P4.jxl 1876 scanlines_P5.jxl 3157 scanlines_P6.jxl 2159 scanlines_P7.jxl 2042 scanlines_P8.jxl 1918 scanlines_P9.jxl 2082 scanlines_P10.jxl 2175 scanlines_P11.jxl 2219 scanlines_P12.jxl 24269 scanlines_P13.jxl 2076 scanlines_P14.jxl 12541 scanlines_P15.jxl ```
2025-01-23 02:42:23
P1 resulted in the smallest filesize, 1.7KB, all encoded at e10
jonnyawsom3
2025-01-23 02:51:37
Strange that 'Mix all' obviously didn't mix them all and see what was best...
_wb_
2025-01-23 07:38:14
It does, but it doesn't take lz77 into account iirc
AccessViolation_
2025-01-23 10:06:19
I wonder if there's some sort of clever heuristic where during the prediction evaluation the encoder can easily find out if that particular predictor results in residuals that are unusually good for LZ77, without actually performing LZ77 to find out, which would be slow
2025-01-23 10:28:04
Like maybe a small hashmap-like running array of 32 entries, and the difference between the previous residual and the current residual is placed in the array at an index which is a hash of that value. The array entries are a tuple of the last residual_diff value that fell onto that index, and the amount of times it was replaced by some other residual difference, changing it. So something like: ``` residual_diff: current_residual - previous_residual; residual_diff_hash: (residual_diff.hash() % 32) -> u8; map: array[(residual_diff, replaced_count); 32]; if map[residual_diff_hash].residual_diff != residual_diff { map[residual_diff_hash].residual_diff = residual_diff; map[residual_diff_hash].replaced_count += 1; } ``` This is similar to the running palette array in QOI, but instead of being a measure of the amount of unique residuals, it keeps track of the amount of unique *changes* from residual to residual. If you instead used the residuals themselves, this metric would be tripped up by a small set of recurring residuals even if they are in a random pattern that LZ77 would not be good for. Then if most counts in this array are fairly low relative to the amount of predictions done, that means not many diffs were evicted, which is a good indicator that you saw the same pattern over and over.
2025-01-23 10:31:05
So you could have an option to do this for the predictors as they are evaluated, and then at the end you can look at the values to determine if it's likely LZ77-able, and choose to use that predictor, whereas the predictor would otherwise get a bad score for having large (*repeating*) residuals
_wb_ It does, but it doesn't take lz77 into account iirc
2025-01-23 10:33:45
So this could be one way of making it take that into account, but I should test this first on a small demo that just has predictors and this algo to see how good of a metric it is for the expected performance of LZ77
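That pseudocode translates into a small runnable experiment. A minimal Python sketch of the same idea, assuming 32 buckets and Python's built-in hash (the function name and scoring are made up for the experiment, not anything in libjxl):

```python
def lz77_friendliness(residuals, buckets=32):
    """Estimate how repetitive a residual stream is without running LZ77.

    Tracks the difference between consecutive residuals in a small
    hash-indexed array and counts evictions: few evictions relative to the
    stream length suggests the same diffs recur, i.e. LZ77 would likely
    compress these residuals well.
    """
    table = [None] * buckets          # last residual_diff seen per bucket
    evictions = 0
    for prev, cur in zip(residuals, residuals[1:]):
        diff = cur - prev
        slot = hash(diff) % buckets
        if table[slot] != diff:
            if table[slot] is not None:   # initial fills don't count
                evictions += 1
            table[slot] = diff
    # Lower score = more repetitive = more LZ77-friendly residuals.
    return evictions / max(len(residuals) - 1, 1)
```

A perfectly periodic stream scores 0, while noisy residuals evict constantly and score near 1, which is the signal the predictor selection could use.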
CrushedAsian255
2025-01-23 11:36:27
does `jxl_from_tree` support patches?
A homosapien
RaveSteel ``` 2304 scanlines_P0.jxl 1729 scanlines_P1.jxl 1897 scanlines_P2.jxl 24299 scanlines_P3.jxl 1852 scanlines_P4.jxl 1876 scanlines_P5.jxl 3157 scanlines_P6.jxl 2159 scanlines_P7.jxl 2042 scanlines_P8.jxl 1918 scanlines_P9.jxl 2082 scanlines_P10.jxl 2175 scanlines_P11.jxl 2219 scanlines_P12.jxl 24269 scanlines_P13.jxl 2076 scanlines_P14.jxl 12541 scanlines_P15.jxl ```
2025-01-23 12:21:23
I got it down to 582 bytes <:pancakexl:1283670260209156128> <:galaxybrain:821831336372338729> <:FeelsAmazingMan:808826295768449054>
2025-01-23 12:21:35
RaveSteel
2025-01-23 12:21:42
Which settings did you use?
2025-01-23 12:21:58
Oh, my e11 encode for P0 just finished
2025-01-23 12:22:00
2025-01-23 12:22:04
lol
2025-01-23 12:22:13
it has been encoding for 2 hours
A homosapien
RaveSteel Which settings did you use?
2025-01-23 12:23:36
```wintime -- cjxl.exe bw.pam 0.8.jxl -d 0 -e 8 -g 3 -P 2 JPEG XL encoder v0.8.2 954b460 Read 1920x1200 image, 4608128 bytes, 2430.1 MP/s Encoding [Modular, lossless, effort: 8], Compressed to 582 bytes (0.002 bpp). 1920 x 1200, 3.61 MP/s, 12 threads. PeakWorkingSetSize: 349 MiB PeakPagefileUsage: 408 MiB Creation time 2025/01/23 04:23:03.549 Exit time 2025/01/23 04:23:04.310 Wall time: 0 days, 00:00:00.760 (0.76 seconds) User time: 0 days, 00:00:00.125 (0.12 seconds) Kernel time: 0 days, 00:00:00.671 (0.67 seconds)```
2025-01-23 12:23:48
effort 8 was really fast and really small
RaveSteel
2025-01-23 12:23:53
nice
CrushedAsian255
2025-01-23 12:34:00
i just finished a 1 and a half hour e11 encode, 418 bytes
2025-01-23 12:34:29
RaveSteel
2025-01-23 12:35:02
is p0 the "try every" one?
2025-01-23 12:35:12
or does someone have to run -e 11 in a loop for each -P
2025-01-23 12:35:13
😭
RaveSteel
2025-01-23 12:35:42
I am just running a loop
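For anyone wanting to reproduce the sweep, a minimal Python driver for the per-predictor loop (filenames and effort are assumptions from this thread; guarded so it only invokes cjxl if it's actually on PATH):

```python
import shutil
import subprocess

def build_cmd(predictor, src="scanlines.png", effort=10):
    """cjxl invocation testing one modular predictor (-P 0..15) losslessly."""
    out = src.rsplit(".", 1)[0] + f"_P{predictor}.jxl"
    return ["cjxl", src, out, "-d", "0", "-e", str(effort), "-P", str(predictor)]

if shutil.which("cjxl"):
    for p in range(16):  # sweep every predictor, then compare the file sizes
        subprocess.run(build_cmd(p), check=True)
```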
CrushedAsian255
2025-01-23 12:35:46
who has that -e 13 build?
RaveSteel I am just running a loop
2025-01-23 12:36:02
i feel bad for your CPU
RaveSteel
2025-01-23 12:36:54
No idea what you mean, it's only been encoding for 3 hours now xd
A homosapien
2025-01-23 12:48:25
```time ./cjxl bw.pam scanlines.jxl -d 0 -e 8 -I 0 -g 3 -P 1 JPEG XL encoder v0.11.1 [AVX2,SSE4,SSE2] Encoding [Modular, lossless, effort: 8] Compressed to 322 bytes (0.001 bpp). 1920 x 1200, 4.905 MP/s [4.90, 4.90], , 1 reps, 12 threads. real 0m0.477s user 0m0.402s sys 0m0.184s```
2025-01-23 12:48:49
idk effort 8 just has that magic sauce ya know ¯\_(ツ)_/¯
2025-01-23 12:49:29
I think `cjxl` hangs at effort 9 and higher
2025-01-23 12:49:53
or it could just be encoding for several hours I can't really tell
CrushedAsian255
2025-01-23 12:50:57
huh there is something weird about using pam vs png
2025-01-23 12:51:07
can you send over `bw.pam`?
A homosapien
2025-01-23 12:51:57
Grayscale + Alpha
CrushedAsian255
2025-01-23 12:52:08
instead of RGB?
A homosapien
2025-01-23 12:52:30
why waste bits on RGB when the image is clearly black and white?
2025-01-23 12:55:02
oh I got it down to 314 bytes nice
AccessViolation_
CrushedAsian255 i just finished a 1 and a half hour e11 encode, 418 bytes
2025-01-23 12:56:01
Is this the same base image that RaveSteel used? Somehow `djxl` decodes them to different PNGs which is very weird
CrushedAsian255
RaveSteel here's the original image, if you want to try yourself
2025-01-23 12:56:29
im using this file
2025-01-23 12:56:40
metadata stripped maybe
AccessViolation_
2025-01-23 12:56:48
Ahh maybe
A homosapien
2025-01-23 12:57:20
remember to run ssimulacra2 to check if they are pixel identical
CrushedAsian255
A homosapien oh I got it down to 314 bytes nice
2025-01-23 12:57:33
how?
RaveSteel
2025-01-23 12:57:40
compare image hashes using `/usr/bin/identify -format "%m %#\n"'`
2025-01-23 12:57:58
or ssimulacra2, yes
AccessViolation_
RaveSteel compare image hashes using `/usr/bin/identify -format "%m %#\n"'`
2025-01-23 01:00:29
Did that, same hash yeah
A homosapien
CrushedAsian255 how?
2025-01-23 01:00:34
magic ✨ 🧙‍♂️ 🪄 (libjxl 0.10 for some reason)
RaveSteel
2025-01-23 01:02:26
For reference, optimizing the source PNG with oxipng and very high settings yields a reduction by almost 50%, from 10.2KB to 5.4KB
2025-01-23 01:02:37
Still much worse than JXL at around 300-350 Bytes though xd
A homosapien
AccessViolation_ No, but it's not uncommon for PNG to outperform JXL on images with repeating patterns thanks to its use of LZ77
2025-01-23 01:02:58
Betcha feel very silly right now 😛
CrushedAsian255
2025-01-23 01:03:11
cjxl -e 19 Attempts to encode the image using every possible input parameter into every commit of cjxl ever pushed to the public GitHub
AccessViolation_ No, but it's not uncommon for PNG to outperform JXL on images with repeating patterns thanks to its use of LZ77
2025-01-23 01:03:55
JPEG xl also has lz77 doesn’t it?
AccessViolation_
2025-01-23 01:04:05
Yeah, read on from there
CrushedAsian255
2025-01-23 01:04:06
Also patches
AccessViolation_
A homosapien Betcha feel very silly right now 😛
2025-01-23 01:04:44
Nahh, being blown away by JXL is something that happens to me often!
CrushedAsian255
2025-01-23 01:05:40
Does Jpegxl use lz77 with ans or Huffman?
A homosapien
2025-01-23 01:05:48
JXL wins yet again [absolute_cinema](https://heatnation.com/wp-content/uploads/2019/04/636625868623447717-AP-APTOPIX-Heat-Bucks-Basketball-39255807-e1556638154381.jpeg)
CrushedAsian255
2025-01-23 01:06:36
We should as a community try to find all images where a different format beats JXL and then try to optimise encoder parameters to make JXL always win
AccessViolation_
AccessViolation_ Like maybe a small hashmap-like running array of 32 entries, and the difference between the previous residual and the current residual is placed in the array at an index which is a hash of that value. The array entries are a tuple of the last residual_diff value that fell onto that index, and the amount of times it was replaced by some other residual difference, changing it. So something like: ``` residual_diff: current_residual - previous_residual; residual_diff_hash: (residual_diff.hash() % 32) -> u8; map: array[(residual_diff, replaced_count); 32]; if map[residual_diff_hash].residual_diff != residual_diff { map[residual_diff_hash].residual_diff = residual_diff; map[residual_diff_hash].replaced_count += 1; } ``` This is similar to the running palette array in QOI, but instead of being a measure of the amount of unique residuals, it keeps track of the amount of unique *changes* from residual to residual. If you instead used the residuals themselves, this metric would be tripped up by a small set of recurring residuals even if they are in a random pattern that LZ77 would not be good for. Then if most counts in this array are fairly low relative to the amount of predictions done, that means not many diffs were evicted, which is a good indicator that you saw the same pattern over and over.
2025-01-23 01:07:56
I had this idea to make it easier for JXL encoders to detect when predictor residuals could be easily compressed with LZ77 where they would otherwise not be picked since large residuals = bad
2025-01-23 01:08:35
I'll probably explore that further, I already have some code for predictors set up so it wouldn't be too hard to modify it for that experiment
RaveSteel
2025-01-23 01:08:51
I find that images at very small sizes will often be better with lossless WebP than with JXL, specifically thumbnails. There isn't too much potential gain to be had here though, because thumbnails need to load fast and JXL needs to be encoded at very low efforts to beat WebP in loading times, leading to larger filesizes in many, if not most, cases
CrushedAsian255
2025-01-23 01:08:52
What is stopping JXL from doing full LZ77 like PNG?
AccessViolation_
CrushedAsian255 What is stopping JXL from doing full LZ77 like PNG?
2025-01-23 01:10:01
If I were to guess, it's that large residuals that LZ77 could compress nicely are still large residuals, which is seen as poor predictor performance. So other predictors might perform a bit better but destroy patterns that LZ77 would be good at, leading to worse compression in the end
2025-01-23 01:11:34
Hence my idea to basically tell the encoder "hey this predictor looks like it performs poorly but it looks like its residuals are actually very compressible with LZ77 compared to those of the others so consider that"
CrushedAsian255
2025-01-23 01:12:10
So like adding a cheap LZ77-compressibility heuristic to the predictor selection logic?
AccessViolation_
2025-01-23 01:12:21
Yep exactly
CrushedAsian255
2025-01-23 01:12:51
The thing with JXL is it has so many advanced coding tools that the potential of combining them is even more than just using them sequentially
Meow
2025-01-23 01:17:42
JXL remains horrible on grayscale screentone images
AccessViolation_
2025-01-23 01:20:14
JXL's modular mode is expressive enough to basically replicate a PNG's per-row predictor model, no? The only thing that I think is missing is the paeth predictor
2025-01-23 01:20:51
Then the question is, can you specify that all it should do after that is LZ77 and entropy coding
2025-01-23 01:21:19
Or are there other things that can't be disabled that might make it harder for LZ77
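For reference, the Paeth predictor mentioned as the missing piece is tiny; a direct Python transcription of the PNG specification's definition (a = left, b = above, c = upper-left):

```python
def paeth(a, b, c):
    """PNG's Paeth predictor: pick the neighbour closest to a + b - c,
    preferring left, then above, then upper-left on ties."""
    p = a + b - c
    pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
    if pa <= pb and pa <= pc:
        return a
    if pb <= pc:
        return b
    return c
```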
CrushedAsian255
AccessViolation_ JXL's modular mode is expressive enough to basically replicate a PNG's per-row predictor model, no? The only thing that I think is missing is the paeth predictor
2025-01-23 01:21:38
just a bunch of nested y coordinate checks as a binary search down to a row predictor
2025-01-23 01:22:04
RCT may still be helpful
Meow JXL remains horrible on grayscale screentone images
2025-01-23 01:23:02
Elaborate?
AccessViolation_
Meow JXL remains horrible on grayscale screentone images
2025-01-23 01:23:05
I remember you talking about this, PNG did better for those, now I'm wondering if replicating what PNG does with JXL could match if not exceed it
RaveSteel
2025-01-23 01:23:47
Good question, <@277378565304090636> would you be so kind and share an image that was much worse with JXL compared to PNG?
Meow
2025-01-23 01:24:46
It saves -288.6%
CrushedAsian255
2025-01-23 01:24:51
Iirc JXL has all the coding features PNG has, I guess other than supporting LZ77 references outside of the group boundary
2025-01-23 01:25:22
But patches do exist so that could partially alleviate the group boundary issue
AccessViolation_
2025-01-23 01:25:31
I'm just going to assume I shouldn't open that one at work <:galaxybrain:821831336372338729>
Meow
2025-01-23 01:25:34
I simply used cjxl -d 0
CrushedAsian255
2025-01-23 01:25:49
My phone wants to open the JXL in weibo
Meow
2025-01-23 01:28:11
The original PNG for reference
CrushedAsian255
2025-01-23 01:29:17
That is certainly an image
Meow
2025-01-23 01:29:39
Really hope there'll be a solution
AccessViolation_
2025-01-23 01:30:36
if it's not going to compress lewd screentone furry art well, what even is the point tbh
CrushedAsian255
2025-01-23 01:30:55
useless format 🤷‍♂️
Meow
2025-01-23 01:31:57
There are thousands of hentai comics in screentone
RaveSteel
Meow It saves -288.6%
2025-01-23 01:32:55
Reducing the colours using GIMP to 256 colours leads to a different image hash but a 100 ssimulacra2 score, while also reducing the filesize to ~595KB at e7, smaller than the PNG
AccessViolation_
2025-01-23 01:32:56
currently it's 570 KB for the PNG for me, and 799 KB for an effort 7 lossless jxl - not bad
2025-01-23 01:33:05
I guess there have been improvements since
RaveSteel
2025-01-23 01:34:30
e8 is 593KB for the original PNG encoded to JXL
Meow
2025-01-23 01:34:46
Uh isn't the PNG about 360 KB?
AccessViolation_
2025-01-23 01:34:50
e10 is 577 kB
CrushedAsian255
2025-01-23 01:35:10
the png is 356500 bytes
AccessViolation_
2025-01-23 01:35:13
the png as output from djxl is 567 kB
RaveSteel
Meow Uh isn't the PNG about 360 KB?
2025-01-23 01:35:47
I just decoded the JXL you posted to PNG, I hadn't yet seen that you posted the original PNG, my bad
AccessViolation_ the png as output from djxl is 567 kB
2025-01-23 01:36:06
Mine was 850KB
Meow
CrushedAsian255 the png is 356500 bytes
2025-01-23 01:36:11
Discord secretly added 26 bytes
CrushedAsian255
2025-01-23 01:36:21
probably an empty Exif chunk
2025-01-23 01:36:57
yeah
AccessViolation_
2025-01-23 01:37:11
How does one ignore exif in cjxl again
CrushedAsian255
2025-01-23 01:37:12
2025-01-23 01:37:34
14 + 12 (size,type,crc) = 26
AccessViolation_
2025-01-23 01:37:52
Ah it's `-x strip=exif`
CrushedAsian255
2025-01-23 01:38:25
-e 10 -g 3 -E 4 -I 100
2025-01-23 01:38:28
whoops this isn't terminal
2025-01-23 01:40:55
`-d 0 -x strip=exif -e 10 -g 3 -E 4 -I 100` got 448.4 kB
AccessViolation_
2025-01-23 01:45:25
So much of this image is just different types of patches
RaveSteel
2025-01-23 01:45:30
Yep
2025-01-23 01:47:26
The lowest I could get without running e10 is 401KB
A homosapien
2025-01-23 01:47:45
I got it down to 348.9 Kb pixel identical e 9
CrushedAsian255
2025-01-23 01:47:49
What params?
RaveSteel
CrushedAsian255 `-d 0 -x strip=exif -e 10 -g 3 -E 4 -I 100` got 448.4 kB
2025-01-23 01:47:52
-g 3 is actually a good bit larger with this image
A homosapien
2025-01-23 01:48:02
ya -g 2 is better
CrushedAsian255
2025-01-23 01:48:15
Screw it, -e 11 time
AccessViolation_
2025-01-23 01:48:52
this is a 3500 × 3500 image
2025-01-23 01:48:55
see you tomorrow
CrushedAsian255
2025-01-23 01:49:24
⚰️ <- me when encoding finishes
AccessViolation_
2025-01-23 01:50:24
come back when it's done and don't forget to tell us what it was like watching the last star in the universe burn out
RaveSteel
2025-01-23 01:50:42
I enabled patches with this image, encode took almost 10 times longer and is 200KB larger smh
CrushedAsian255
2025-01-23 01:50:51
oops
2025-01-23 01:50:59
``` JPEG XL encoder v0.11.1 0.11.1 [NEON] Encoding [Modular, lossless, effort: 11] ```
2025-01-23 01:51:02
it's started
2025-01-23 01:51:07
cya in a few weeks
AccessViolation_
2025-01-23 01:51:48
Oh god I just noticed the nipples
CrushedAsian255
2025-01-23 01:53:13
I’m glad I have low vision
Meow
AccessViolation_ So much of this image is just different types of patches
2025-01-23 01:53:38
That's how screentone works
CrushedAsian255
2025-01-23 01:54:01
Someone should try JBIG2 on the image
AccessViolation_
2025-01-23 01:54:10
Lmao
2025-01-23 01:54:29
Do the one with the character substitution bug
2025-01-23 01:54:34
There will be no detail left but the outlines
RaveSteel
2025-01-23 01:58:16
WebP is 412KB at max settings
AccessViolation_
Meow That's how screentone works
2025-01-23 01:58:50
right, I just wasn't expecting it to be pixel perfect like this
Meow
2025-01-23 01:59:48
Except the outlines there would be just some patterns of pixels
CrushedAsian255
2025-01-23 02:00:35
`-e 11` finished surprisingly quickly
2025-01-23 02:11:39
wait i forgot to send the result, 334 kB
2025-01-23 02:11:46
2025-01-23 02:11:53
(discord uses KiB)
Meow
2025-01-23 02:17:10
ssimulacra2: 100.00000000 Wow it succeeded and the colour space remains "Gray"
2025-01-23 02:17:54
Lossless WebP becomes RGB
2025-01-23 02:21:34
That JXL is 6.4% smaller than PNG
RaveSteel
2025-01-23 02:29:57
But it sadly isn't practically viable to have each image take over 10 minutes to encode
Meow
RaveSteel But it sadly isn't viable practically to have each image take over 10 minutes encoding
2025-01-23 02:36:12
That's so Zopfli-like
jonnyawsom3
CrushedAsian255 What is stopping JXL from doing full LZ77 like PNG?
2025-01-23 03:59:17
Group size, it can only 'see' inside 1 group at a time
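As I understand it, the modular group side length doubles with each `-g` step, from 128 px at `-g 0` up to 1024 px at `-g 3`, and that square is the window LZ77 matches are confined to. A one-liner, assuming that mapping is right:

```python
def modular_group_dim(g):
    """Side length in pixels of a modular group for cjxl's -g flag (0-3),
    assuming the documented 128 << g mapping."""
    assert 0 <= g <= 3
    return 128 << g
```

So even at `-g 3`, a repeating pattern that spans more than 1024 pixels can't be matched across group borders.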
AccessViolation_ if it's not going to compress lewd screentone furry art well, what even is the point tbh
2025-01-23 04:08:14
My friend used to run a backup server for one of the biggest furry sites... Offered to compress everything to JXL and report the savings, but then had a drive failure and lost it all 😔
2025-01-23 04:09:06
Would've been a good corpus too, going back a decade and including everything from photorealistic to pixel art
AccessViolation_
2025-01-23 04:09:28
I need that corpus actually for uhh
2025-01-23 04:09:32
benchmarking yes
jonnyawsom3
2025-01-23 04:11:56
We spent a night running e11 on an old Google data centre server he'd bought from working there, pre-0.10 so it was using around 50GB of memory
2025-01-23 04:12:09
Felt like justice
AccessViolation_
2025-01-23 04:13:11
Casually owning a google data center server is such a flex
jonnyawsom3
2025-01-23 05:45:21
128GB RAM with 2 Xeons IIRC, for £120
AccessViolation_
2025-01-23 05:46:37
2025-01-23 05:46:38
What was the original file size?
2025-01-23 05:48:28
If the JPEG input is high quality enough they might be able to compress the pixel data as a `-d 2` JXL or something, and save a lot more bandwidth while preserving a lot more quality
2025-01-23 05:50:15
Obviously lossless transcoding is preferable to the user for keeping the original file completely intact, but if you're okay with transcoding them to lower quality JPEGs you could save the same amount and preserve more quality if you go straight to JXL
2025-01-23 05:50:44
I think JPEGs produced by phone cameras are high quality enough to have very few artifacts
jonnyawsom3
2025-01-23 05:50:56
The joys of Android meant it took me this long to copy and rename the file
AccessViolation_
2025-01-23 05:51:28
Oh I usually zip them
2025-01-23 05:52:29
Okay so that particular one is compressed to ~15% of the original
jonnyawsom3
AccessViolation_ I think JPEGs produced by phone cameras are high quality enough to have very few artifacts
2025-01-23 05:56:30
Quality 96 4:2:0 with AI upscaling shenanigans (even in this RAW mode...)
2025-01-23 05:58:16
I'd also like to remind you, my phone is 8 years old... So this was I *think* the first time AI interpolation was used
AccessViolation_
2025-01-23 10:13:08
2025-01-23 10:13:08
just dropping this here
Meow
My friend used to run a backup server for one of the biggest furry sites... Offered to compress everything to JXL and report the savings, but then had a drive failure and lost it all 😔
2025-01-24 01:54:27
How many files were there?
jonnyawsom3
Meow How many files were there?
2025-01-24 02:07:36
2025-01-24 02:08:29
Last I checked it was only 2TB, guess people are using more videos and higher resolutions
Meow
2025-01-24 02:12:05
Looks like e621
jonnyawsom3
2025-01-24 02:20:01
Correct, I didn't name it to preserve people's sanity
Meow
2025-01-24 03:00:44
It's the only large furry site that supports WebM
jonnyawsom3
2025-01-24 03:55:56
Exclusively WebM at that
A homosapien
A homosapien oh I got it down to 314 bytes nice
2025-01-24 04:24:02
302 bytes, I think this is the limit without handcrafting it in JXL art 😅
_wb_
2025-01-24 08:15:57
My hypothesis: in the 1:4 and more zoomed out scales, the last image gets a very high score since it looks like the image was obtained by doing something like `convert orig.png -scale 25% -sample 400% distorted.png` (a 4x nearest-neighbor upscale from a 4x downscale). That causes its score to be a bit higher than that of the other image, which looks like 3x3 NN, which will cause all scales to have some errors since 3 does not divide any power of two.
I just stumbled across a case where ssimulacra2 really falls apart. Visually, the quality very obviously decreases from one to the next image. And Butteraugli agrees: 6.6608324051 3-norm: 2.825210 16.4659614563 3-norm: 5.622267 22.6416397095 3-norm: 8.332853 SSIMULACRA does, too: 0.10421639 0.18506859 0.24036341 VMAF as well: 71.326272 57.736229 34.167257 SSIM: 0.795168 0.654748 0.552588 But SSIMULACRA2 produces these ridiculous scores: 64.03269612 24.03410571 32.43812252
2025-01-24 08:16:16
(in a late response to this)
2025-01-24 08:16:58
(I started typing that a while ago but for some reason forgot to press enter)
AccessViolation_
2025-01-24 08:30:11
That's the site I landed on when looking up what a certain E number on the ingredients list of some product was <:galaxybrain:821831336372338729> I had heard of it but not enough to remember the name
monad
RaveSteel 0 difference in filesize with patches, at least with the normal github release
2025-01-24 08:37:59
libjxl doesn't consider extra channels for patches (dunno if theoretically possible). anyway, highly predictable patterns are often more efficiently represented with other coding tools. patches work better separating features that are otherwise difficult to predict.
spider-mario
AccessViolation_ That's the site I landed on when looking up what a certain E number on the ingredients list of some product was <:galaxybrain:821831336372338729> I had heard of it but not enough to remember the name
2025-01-24 09:15:24
fwiw, wikipedia has a list https://en.wikipedia.org/wiki/E_number#Full_list
AccessViolation_
2025-01-24 09:19:46
Thanks, that'll be useful
damian101
_wb_ (I started typing that a while ago but for some reason forgot to press enter)
2025-01-24 10:36:57
sounds familiar 😅
_wb_ My hypothesis: in the 1:4 and more zoomed out scales, the last image gets a very high score since it looks like the image was obtained by doing something like `convert orig.png -scale 25% -sample 400% distorted.png` (a 4x nearest-neighbor upscale from a 4x downscale). That causes its score to be a bit higher than that of the other image, which looks like it's 3x3 NN which will cause all scales to have some errors 3 does not divide any power of two.
2025-01-24 11:44:16
Yes, correct. I was testing downscaling methods (btw, this scaler performs really well for 1/n scaling: https://legacy.imagemagick.org/Usage/filter/#box, definitely my favorite, although edges could be smoother). As I know a bit about how SSIMULACRA2 works, I wasn't too surprised by this. However, I was surprised that SSIMULACRA, another multi-scale SSIM metric, was seemingly unaffected.
_wb_
2025-01-24 06:36:51
Possibly ssimu2 puts a bit too much weight on the zoomed-out scales
2025-01-24 06:37:23
2025-01-24 06:38:36
is that a reasonable / clear / fair way to report some results?
jonnyawsom3
2025-01-24 06:49:11
Seems good to me, maybe a bolder line between formats to help tell where they change at a glance, though that's already done for JPEG XL
...She needs it https://invent.kde.org/graphics/krita/-/blob/master/libs/image/tiles3/swap/kis_lzf_compression.cpp?ref_type=heads
2025-01-24 07:40:29
I am curious now, how would effort 1 lossless compare to that archaic format? I'd test it, but... That source code is the only info I can find on the format, so I have no way to decode them other than using them normally in Krita
_wb_
2025-01-24 07:48:19
One of the comments says it is based on https://oldhome.schmorp.de/marc/liblzf.html
2025-01-24 07:49:18
I would assume it is somewhere between PNG and QOI
jonnyawsom3
2025-01-24 08:02:08
Uncompressed this image is `6,220,840 Bytes`, Krita compressed it to `5,536,243 Bytes`, QOI is `3,574,360 Bytes`, PNG is `3,143,287 Bytes` and JXL e1 is `2,759,209 Bytes`, so it falls quite far behind any other format. Though, I did find out it's supposedly tiled for Krita to reduce memory usage. Could probably be done with either individual JXL files or cropped decoding in future though.
AccessViolation_
Uncompressed this image is `6,220,840 Bytes`, Krita compressed it to `5,536,243 Bytes`, QOI is `3,574,360 Bytes`, PNG is `3,143,287 Bytes` and JXL e1 is `2,759,209 Bytes`, so it falls quite far behind any other format. Though, I did find out it's supposedly tiled for Krita to reduce memory usage. Could probably be done with either individual JXL files or cropped decoding in future though.
2025-01-24 09:30:00
Could it acquire a file lock and encode the image with e1, while simultaneously running another job to encode it with higher effort? So if you quickly save and close Krita you get the e1 file, but if you don't close it immediately it will compress it further in the background
2025-01-24 09:30:27
So you don't lose the benefit of fast saving while also allowing it to compress it better if it can
jonnyawsom3
2025-01-24 09:32:13
Did a small test using file layers, opening speed seems identical
AccessViolation_ Could it acquire a file lock, encode the image with e1, while simultaneously running another job to encode it with higher effort. So if you quickly save and close Krita you get the e1 file, but if you don't close it immediately it will compress it further in the background
2025-01-24 09:34:03
KRA files are ZIPs, so you'd need to also pack the ZIP after e1, then re-pack it at higher effort, at which point they've probably already closed it
AccessViolation_
2025-01-24 09:39:26
Do they need to be zips? Wonder if you could make it a `.kraxl` (henceforth canonically pronounced 'cracksel') that's a layered JXL with any ancillary data that Krita needs in a Box
2025-01-24 09:41:23
And you get the thumbnail feature for free!
jonnyawsom3
AccessViolation_ Do they need to be zips? Wonder if you could make it a `.kraxl` (henceforth canonically pronounced 'cracksel') that's a layered JXL with any ancillary data that Krita needs in a Box
2025-01-24 09:49:42
https://docs.krita.org/en/general_concepts/file_formats/file_kra.html
AccessViolation_
2025-01-24 09:58:08
Oh that's quite a lot
2025-01-24 09:58:49
More than I expected but surely the necessary information could theoretically all be represented in a single JXL
2025-01-24 10:08:11
To be clear I don't mean this in the sense of adding JXL to the format it already uses, but rather an idea for a new Krita file format that's one big JXL
2025-01-24 10:09:05
Because it's fun to think about all the ways JXL could be shoehorned into software regardless of whether the effort required would be worth it <:CatBlobPolice:805388337862279198>
monad
_wb_
2025-01-24 11:18:01
The least clear part is the WebP line that changes color. I suggest more straightforward formatting rules: underline and blue highlight for overall Pareto-optimal, green highlight for non-JXL Pareto-optimal, bold text and dark highlight for overall best, dark highlight for non-JXL best. Thus, the WebP line becomes all blue with no other changes.
jonnyawsom3
2025-01-25 01:49:31
Downloaded what I thought was a 32-bit EXR, but cjxl was overshooting by 2x on lossless
```
wintime -- cjxl Tree.pfm Tree.jxl -d 0
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 7]
Compressed to 6187.6 kB (58.875 bpp).
928 x 906, 0.522 MP/s [0.52, 0.52], , 1 reps, 16 threads.
PageFaultCount: 71664
PeakWorkingSetSize: 87.7 MiB
QuotaPeakPagedPoolUsage: 33.39 KiB
QuotaPeakNonPagedPoolUsage: 11.55 KiB
PeakPagefileUsage: 110 MiB
Creation time 2025/01/25 01:43:50.768
Exit time 2025/01/25 01:43:52.406
Wall time: 0 days, 00:00:01.638 (1.64 seconds)
User time: 0 days, 00:00:00.312 (0.31 seconds)
Kernel time: 0 days, 00:00:02.031 (2.03 seconds)
```
Lo and behold, I think it was my old friend the palette overflow again
```
wintime -- cjxl9 Tree.pfm Tree.jxl -d 0
JPEG XL encoder v0.9.1 b8ceae3 [AVX2,SSE4,SSSE3,SSE2]
Encoding [Modular, lossless, effort: 7]
Compressed to 2868.3 kB (27.292 bpp).
928 x 906, 0.062 MP/s [0.06, 0.06], 1 reps, 16 threads.
PageFaultCount: 129033
PeakWorkingSetSize: 94.31 MiB
QuotaPeakPagedPoolUsage: 37.7 KiB
QuotaPeakNonPagedPoolUsage: 12.48 KiB
PeakPagefileUsage: 120.8 MiB
Creation time 2025/01/25 01:44:06.979
Exit time 2025/01/25 01:44:20.557
Wall time: 0 days, 00:00:13.577 (13.58 seconds)
User time: 0 days, 00:00:00.156 (0.16 seconds)
Kernel time: 0 days, 00:00:14.734 (14.73 seconds)
```
2025-01-25 01:50:09
Not sure why it takes so long though; effort setting and patches have no effect
_wb_
monad The least clear part is the WebP line that changes color. I suggest more straightforward formatting rules: underline and blue highlight for overall Pareto-optimal, green highlight for non-JXL Pareto-optimal, bold text and dark highlight for overall best, dark highlight for non-JXL best. Thus, the WebP line becomes all blue with no other changes.
2025-01-25 07:48:51
You mean the webp z0 nonphoto encode time should be dark blue instead of dark green? Yeah, that would probably look better. I went back and forth on it, having no dark green cell and two dark blue ones seemed a bit strange but how it is now is also strange...
A homosapien
2025-01-25 08:11:52
Also I feel like a good middle ground for oxipng is `-o 4` rather than `-o max`
monad
_wb_ You mean the webp z0 nonphoto encode time should be dark blue instead of dark green? Yeah, that would probably look better. I went back and forth on it, having no dark green cell and two dark blue ones seemed a bit strange but how it is now is also strange...
2025-01-25 08:12:08
Yes, that's the cell.
2025-01-25 08:16:50
I interpret that oxipng entry as an intentional example of high effort. webp's z9 is impractical, but also a valid example of high effort for the format.
A homosapien
2025-01-25 08:17:49
then why not add another row for oxipng `-o 2`
2025-01-25 08:17:56
both medium effort and max effort
_wb_
2025-01-25 08:43:01
Could do that but it would just be somewhere in between the two rows that are already there.
jonnyawsom3
Uncompressed this image is `6,220,840 Bytes`, Krita compressed it to `5,536,243 Bytes`, QOI is `3,574,360 Bytes`, PNG is `3,143,287 Bytes` and JXL e1 is `2,759,209 Bytes`, so it falls quite far behind any other format. Though, I did find out it's supposedly tiled for Krita to reduce memory usage. Could probably be done with either individual JXL files or cropped decoding in future though.
2025-01-25 09:13:09
So Krita does poorly all round, JXL does far better, but falls behind on floats
```
Krita
 9,052,670 Doggo8int.kra
13,323,976 Doggo16int.kra
11,236,307 Doggo16float.kra
13,373,784 Doggo32float.kra

cjxl v0.12
 2,760,410 Doggo8int.jxl
 8,989,466 Doggo16int.jxl
 6,564,718 Doggo16float.jxl
15,192,812 Doggo32float.jxl

cjxl v0.9
 2,760,370 Doggo8int9.jxl
 8,989,466 Doggo16int9.jxl
 2,697,219 Doggo16float9.jxl
 2,697,219 Doggo32float9.jxl
```
AccessViolation_
2025-01-25 07:06:16
Initial test of the idea to detect and subtract common dither patterns out of images and place them in their own layer:
```
 23M dither-flattened.jxl
1.9M dither-layered.jxl
```
Both are from the same Krita project, a transparent dither pattern additively blended with a gradient, Modular lossless, encoded at effort 7, with and without flattening. Both render to the same thing
2025-01-25 07:12:51
Here are the files. The flattened (single-layer) one was re-encoded with effort 10 to make it small enough for Discord
RaveSteel
2025-01-25 07:13:02
Would you mind sharing the krita project as well?
AccessViolation_
2025-01-25 07:13:28
Sure, one moment
RaveSteel
2025-01-25 07:13:43
But pretty nice result overall
2025-01-25 07:13:54
The future is looking bright
2025-01-25 07:14:06
We just need more adoption😅
2025-01-25 07:15:35
A lot of artists like to add noise/grain to give their images some texture; this would massively reduce filesizes for them and for the sites their works are shared on
2025-01-25 07:15:49
And it doesn't even require transcoding or high-effort compression
AccessViolation_
2025-01-25 07:16:38
Actually, you can just open the __layered__ JXL as I specifically made sure that all images and blending modes etc could be fully represented in JXL itself - if this wasn't the case Krita would have forcefully flattened it (it warned me of this)
2025-01-25 07:16:44
Just tested it 🙂
RaveSteel
2025-01-25 07:17:05
Indeed
AccessViolation_
2025-01-25 07:17:11
If you want the krita file regardless, [here it is](<https://wormhole.app/XMy1m#3eLsjd6U39AMxNoE59Yomw>)
RaveSteel
2025-01-25 07:17:15
Thanks
AccessViolation_
2025-01-25 07:27:01
I couldn't get it to work how I planned: my goal was to create a 16-bit gradient, create a fill pattern on a separate layer that dithers it to 8-bit, and export it as such. But Krita doesn't make that easy, if it's possible at all (I don't think it is), so instead I think this is still just a 16-bit gradient with a suboptimal, arbitrarily chosen, additively blended dither pattern that apparently *does* somewhat work to remove the banding on my 8-bit-per-channel display <:KekDog:805390049033191445> But for this test that shouldn't be relevant. This proves that if images can be decomposed into a base image and its dither pattern, that can lead to potentially very big savings
RaveSteel A lot of artists like to add noise/grain to give their images some texture; this would massively reduce filesizes for them and for the sites their works are shared on
2025-01-25 07:31:06
Exactly, so long as it isn't random - and a lot of artistic use of dithering specifically isn't, from what I've seen
RaveSteel
2025-01-25 07:32:52
Even adding random noise would probably be a good bit smaller if added as a layer instead of flattening. I'll try it out later
AccessViolation_
2025-01-25 07:33:36
That would be interesting to see for sure!
_wb_
2025-01-25 07:47:33
Probably it is possible to construct a family of jxl art to produce various artistic patterns. If you want noise, the noise synthesis is also an option...
RaveSteel
2025-01-25 07:50:44
Would this be viable during the creation of artistic work? Since noise generation is often a filter, this would be happening before JXL is selected as export format. I guess software that allows for JXL export could replace the noise filter with JXL noise synthesis during the export
2025-01-25 07:51:51
But this replacement can't be done if the creator flattens the layers together manually before exporting
2025-01-25 07:51:52
hm
AccessViolation_
2025-01-25 07:52:50
Now that I think about it, it's probably easiest to use some software to actually dither 16-bit images into 8-bit with a Bayer matrix pattern, then use image editing software to subtract the dithered from the non-dithered, leaving me with the dither pattern, and then use Krita again to apply the pattern with the additive mode, and then export as flattened and layered JXLs again
2025-01-25 07:56:16
Ideally it's good enough at compressing the bayer pattern itself, but if the dither actually just adds or subtracts 1 value from the R,G,B channels (I don't know if it always does) then the entire dither layer can just be indices into another layer containing a handful of possible dither pattern tiles
RaveSteel But this replacement can't be done if the creator flattens the layers together manually before exporting
2025-01-25 07:59:15
Well JXL's noise is deterministic, so it's relatively easy for an encoder to subtract the noise that it *would* itself have generated from a section of the image, at different strengths, and see if that makes the image significantly less noisy. If it does, then you know it's using JXL noise and you can subtract it out entirely, and add it back by specifying that the image should use JXL noise
2025-01-25 08:01:01
I mean if we had some sort of large set of "deterministic patterns artists often apply" then you could literally apply them in reverse, one by one, and if they make the image significantly easier to compress then you know "oh it's using that", subtract it out of the image, add it to a layer, and done
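A rough sketch of that "apply a known pattern in reverse and see if the image gets easier to compress" test. Assumptions to be clear about: zlib is only a crude proxy for real codec compressibility, and the 0.9 threshold is arbitrary, neither comes from libjxl:

```python
import zlib
import numpy as np

# 4x4 Bayer matrix recentered to small +/- offsets, a common ordered-dither kernel
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]], dtype=np.int16) - 8

def compressed_size(img: np.ndarray) -> int:
    # zlib as a cheap stand-in for "how well would this compress"
    return len(zlib.compress(img.astype(np.uint8).tobytes(), 6))

def try_remove_pattern(img: np.ndarray, kernel: np.ndarray) -> bool:
    """Tile the candidate pattern over a grayscale image, subtract it, and
    report whether the residual compresses significantly better (arbitrary
    10% threshold) than the original."""
    h, w = img.shape
    kh, kw = kernel.shape
    tiled = np.tile(kernel, (h // kh + 1, w // kw + 1))[:h, :w]
    residual = np.clip(img.astype(np.int16) - tiled, 0, 255)
    return compressed_size(residual) < 0.9 * compressed_size(img)
```

An encoder could loop this over a library of known dither kernels; as noted, it would be slow, but each check is mechanical.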
2025-01-25 08:01:17
This would be slow, yes, but theoretically possible
RaveSteel
2025-01-25 08:01:34
Right, but using a noise filter in GIMP or photoshop does not use JXL noise generation, so JXL noise synthesis cannot be applied, except for adding more noise on top. Or am I incorrect?
2025-01-25 08:02:07
If the JXL encoder removes non-JXL-generated noise and then re-adds it via noise synthesis, the exported image would not be truly lossless
AccessViolation_
2025-01-25 08:03:34
No that's right - what I meant is that those image editors could copy JXL's noise algorithm, and then *destructively* blend the noise with the image, so that the image looks identical in every export format. And the JXL encoder could then be smart enough to recognize and subtract out its own noise pattern. But on second thought, that would be completely unnecessary, since the image editing software could just use the noise feature of JXL by default, and only destructively apply it when exporting to other formats
2025-01-25 08:04:34
They could have different options like "AVIF noise, JXL noise" depending on which format you plan on exporting to, and when you export to some other format the software just destructively applies the selected noise to the image itself before encoding
RaveSteel
2025-01-25 08:06:54
I think software implementing this is far out in the future, but would be great to see happening
2025-01-25 08:07:27
Also AVIF noise would just be a blur filter <:KekDog:805390049033191445>
2025-01-25 08:07:44
Kidding of course xd
AccessViolation_
2025-01-25 08:08:55
Lol
2025-01-25 08:09:53
You know, we currently have 'film grain simulation' to artificially apply a nostalgic effect of the past to our digital video
2025-01-25 08:11:34
So I wonder if in the future, when we've found out P = NP and are able to compress all media to the theoretical maximum 100% of the time, we're going to see "Macroblocking simulation" and "Vaseline simulation" for JPEG XL and AVIF respectively
RaveSteel
2025-01-25 08:12:02
Imagine
2025-01-25 08:14:04
Regarding artists, one of the greater problems artists are facing, kinda, is that most do not know how to properly export to or make use of certain formats. Many export to GIF with a global palette, because that's what their software does, not knowing that GIF can look much better using a per-frame palette, or by just straight up using a different format like AVIF or APNG. JXL sadly isn't really viable yet
2025-01-25 08:14:34
I have encountered one singular artist who was exporting as APNG
AccessViolation_
2025-01-25 08:17:39
I hope there's going to be a tipping point where, through word of mouth, many artists will know that JXL, either lossless or lossy, is the best format for their art
2025-01-25 08:18:34
I wonder if there's already any editors specifically for pixel art that make use of JXL's native 8x8 upscaling. Would be cool
RaveSteel
2025-01-25 08:20:01
Probably not I would assume
2025-01-25 08:20:37
But again, knowledge problem for many creators. For example, we have all seen pixel art upscaled using bicubic or similar
AccessViolation_
2025-01-25 08:21:50
Ow, yeah
2025-01-25 08:22:58
What Pixilart seems to do is have people upload their art at the original resolution and then upscale it for them, judging by it displaying the original resolution - which is actually pretty annoying, since when I download pieces for testing I need to downscale them in editing software before I can apply JXL's upsampling when evaluating how well different pixel art compresses
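For what it's worth, undoing a clean integer nearest-neighbour upscale like that is mechanical enough to script. A small sketch (grayscale only, scale factor assumed known):

```python
import numpy as np

def undo_nearest_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Reverse an integer nearest-neighbour upscale by keeping one pixel per
    factor x factor block, after checking every block really is constant."""
    h, w = img.shape
    if h % factor or w % factor:
        raise ValueError("dimensions are not a multiple of the factor")
    # view the image as a grid of factor x factor blocks
    blocks = img.reshape(h // factor, factor, w // factor, factor)
    # every pixel in a block must equal that block's top-left pixel
    if not (blocks == blocks[:, :1, :, :1]).all():
        raise ValueError("not a clean nearest-neighbour upscale")
    return blocks[:, 0, :, 0]
```

The constant-block check is what makes this safe to automate: if the site ever used bicubic instead, it raises rather than silently producing a blurry downscale.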
RaveSteel
2025-01-25 08:23:26
At least they are scaling correctly
AccessViolation_
2025-01-25 08:27:49
Yeah, it works
jonnyawsom3
2025-01-25 08:53:21
Ya know....
```
wintime -- cjxl Tiles.png Tiles.jxl -d 0 -e 10 -g 3
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 10]
Compressed to 359 bytes (0.001 bpp).
2560 x 1440, 0.859 MP/s [0.86, 0.86], , 1 reps, 16 threads.
PageFaultCount: 91730
PeakWorkingSetSize: 155.8 MiB
QuotaPeakPagedPoolUsage: 33.39 KiB
QuotaPeakNonPagedPoolUsage: 11.15 KiB
PeakPagefileUsage: 162.8 MiB
Creation time 2025/01/25 20:12:12.971
Exit time 2025/01/25 20:12:17.297
Wall time: 0 days, 00:00:04.325 (4.33 seconds)
User time: 0 days, 00:00:00.312 (0.31 seconds)
Kernel time: 0 days, 00:00:05.703 (5.70 seconds)
```
Sometimes I hate being right the first try
```
wintime -- cjxl Tiles.png Tiles.jxl -d 0 -e 11 --num_threads 8 --allow_expert_options
JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 11]
Compressed to 359 bytes (0.001 bpp).
2560 x 1440, 0.006 MP/s [0.01, 0.01], , 1 reps, 8 threads.
PageFaultCount: 17467406
PeakWorkingSetSize: 1.588 GiB
QuotaPeakPagedPoolUsage: 33.2 KiB
QuotaPeakNonPagedPoolUsage: 52.98 KiB
PeakPagefileUsage: 1.75 GiB
Creation time 2025/01/25 20:30:24.992
Exit time 2025/01/25 20:41:32.403
Wall time: 0 days, 00:11:07.411 (667.41 seconds)
User time: 0 days, 00:00:31.515 (31.52 seconds)
Kernel time: 0 days, 00:32:03.125 (1923.12 seconds)
```
2025-01-25 08:54:07
On the upside, 1440p e11 in 10 minutes
RaveSteel
2025-01-25 09:00:47
What were you trying to verify?
jonnyawsom3
2025-01-25 09:14:30
Just trying to get the smallest file, lo and behold the first command I tried was the best and the fastest
So Krita does poorly all round, JXL does far better, but falls behind on floats
```
Krita
 9,052,670 Doggo8int.kra
13,323,976 Doggo16int.kra
11,236,307 Doggo16float.kra
13,373,784 Doggo32float.kra

cjxl v0.12
 2,760,410 Doggo8int.jxl
 8,989,466 Doggo16int.jxl
 6,564,718 Doggo16float.jxl
15,192,812 Doggo32float.jxl

cjxl v0.9
 2,760,370 Doggo8int9.jxl
 8,989,466 Doggo16int9.jxl
 2,697,219 Doggo16float9.jxl
 2,697,219 Doggo32float9.jxl
```
2025-01-25 09:14:47
Tidied the results up compared to Krita too
Orum
2025-01-25 09:52:51
why not -g 3 in the second command?
2025-01-25 09:53:51
or does it already test all group sizes?
jonnyawsom3
2025-01-25 09:59:14
e11 *should* test all parameters to a certain extent, so all group sizes, `-I 0, 25, 50, 75, 100`, etc.
RaveSteel A lot of artists like to add noise/grain to give their images some texture; this would massively reduce filesizes for them and for the sites their works are shared on
2025-01-25 10:00:19
I've seen quite a few do this, sometimes on top of their own watermark/signature. Very annoying when trying to re-compress or even just view on a poor connection
RaveSteel
2025-01-25 10:03:23
I have a 3000x7000 JPG from an artist with RGB noise, 38MB. Transcoding it to JXL is 22MB. Just thinking how much smaller it could be with the noise in a separate layer...
2025-01-25 10:04:33
A lossless JXL would likely be smaller than this lossy JPEG
Orum
2025-01-25 10:09:45
heh, noise is everything when it comes to lossless file size, but for lossy it really depends on how intent you are on preserving it
2025-01-25 10:10:03
though grain synth on AVIF is pretty <:FeelsAmazingMan:808826295768449054>
jonnyawsom3
RaveSteel Regarding artists, one of the greater problems artists are facing, kinda, is that most do not know how to properly export to or make use of certain formats. Many export to GIF with a global palette, because that's what their software does, not knowing that GIF can look much better using a per-frame palette, or by just straight up using a different format like AVIF or APNG. JXL sadly isn't really viable yet
2025-01-25 10:22:08
The biggest issue is that the sites they post to don't support anything other than the big three. I had a discussion with one who wanted to post Lossless WebP, but the only site accepting it was Itaku. So now they post Lossy PNG from Pingo to the other sites
RaveSteel
2025-01-25 10:23:19
Very true, adoption is a big thing in the chain of problems
2025-01-25 10:24:09
And even if some support more, their implementation may be bad
AccessViolation_
I've seen quite a few do this, sometimes on top of their own watermark/signature. Very annoying when trying to re-compress or even just view on a poor connection
2025-01-25 10:24:17
How many of these would you say used some sort of predictable dithering pattern? This whole thing basically falls apart if the pattern is random
RaveSteel
2025-01-25 10:24:47
I've mentioned this before, but upload a GIF or video to pixiv, the largest platform for Japanese artists and one of the largest in general, and it will deconstruct the video into an MJPEG
2025-01-25 10:25:10
Low quality MJPEG*
AccessViolation_ How many of these would you say used some sort of predictable dithering pattern? This whole thing basically falls apart if the pattern is random
2025-01-25 10:26:19
I think it's probably just the noise filter in their software, instead of some dithering noise, if I am understanding you correctly
AccessViolation_
RaveSteel I have a 3000x7000 JPG from an artist with RGB noise, 38MB. Transcoding it to JXL is 22MB. Just thinking how much smaller it could be with the noise in a separate layer...
2025-01-25 10:28:18
Hmm, I think VarDCT and even JPEG 1 basically already do this. They break the image up into its frequency components; higher frequencies, e.g. pixel noise, are deemed less important and get dropped
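That frequency split can be illustrated with a plain 8x8 DCT. Note the hard zig-zag mask below is a deliberately crude stand-in for real quantization tables, which scale coefficients rather than zeroing them:

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    # Orthonormal DCT-II basis, as used (up to scaling) by JPEG's 8x8 transform
    j = np.arange(n)
    m = np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2 / n)

def drop_high_frequencies(block: np.ndarray, keep: int = 4) -> np.ndarray:
    """Transform an 8x8 block to frequency space, zero the high-frequency
    coefficients (crude stand-in for coarse quantization), transform back."""
    D = dct_matrix(8)
    coeffs = D @ block @ D.T
    # keep only coefficients near the top-left (low-frequency) corner
    mask = np.add.outer(np.arange(8), np.arange(8)) < keep
    return D.T @ (coeffs * mask) @ D
```

Smooth content (flat areas, gradients) lives in the kept corner and survives almost untouched; per-pixel noise lives in the discarded coefficients, which is why quality-100 JPEGs of noisy art stay so large: nothing gets dropped.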
RaveSteel
2025-01-25 10:29:23
Well, this particular JPEG was exported at quality 100, so it dropped pretty much nothing lol
AccessViolation_
2025-01-25 10:32:48
Ooo then it's probably high quality enough to encode the raw pixel data into a new lossy JXL 👀
jonnyawsom3
RaveSteel And even if some support more, their implementation may be bad
2025-01-25 10:33:14
Another friend does pixel art and wanted APNG, but the site thumbnailer doesn't support it, so it's static until you click on the file and then naturally downloading it doesn't work in most viewers
RaveSteel
AccessViolation_ Ooo then it's probably high quality enough to encode the raw pixel data into a new lossy JXL 👀
2025-01-25 10:33:57
Here, if you want to go at it, 3rd image from the top https://www.pixiv.net/en/artworks/111977115
Another friend does pixel art and wanted APNG, but the site thumbnailer doesn't support it, so it's static until you click on the file and then naturally downloading it doesn't work in most viewers
2025-01-25 10:35:08
At least pixel art can (often) adequately be represented by the 256 colours GIF supports, but I can imagine how annoying this must be
AccessViolation_
2025-01-25 10:35:08
Ugh
RaveSteel
2025-01-25 10:35:12
Ah
jonnyawsom3
RaveSteel At least pixel art can (often) adequately be represented by the 256 colours GIF supports, but I can imagine how annoying this must be
2025-01-25 10:35:50
Issue is it's often scaled by 10x, so GIF decoders choke
AccessViolation_ I wonder if there's already any editors specifically for pixel art that make use of JXL's native 8x8 upscaling. Would be cool
2025-01-25 10:37:03
I did mention it to one https://github.com/aseprite/aseprite/issues/4682
AccessViolation_
RaveSteel Here, if you want to go at it, 3rd image from the top https://www.pixiv.net/en/artworks/111977115
2025-01-25 10:37:06
command should be `cjxl input.jpg output.jxl --lossless_jpeg 0 -d 1` in case you wanted to try it for yourself
jonnyawsom3
2025-01-25 10:37:20
-j 0 -d 1
RaveSteel
AccessViolation_ command should be `cjxl input.jpg output.jxl --lossless_jpeg 0 -d 1` in case you wanted to try it for yourself
2025-01-25 10:37:45
I already tried to my heart's content xd
2025-01-25 10:37:54
But thanks
AccessViolation_
2025-01-25 10:37:55
Ohh ok xD
RaveSteel
2025-01-25 10:38:48
https://files.catbox.moe/8hdvik.jpg
AccessViolation_
I did mention it to one https://github.com/aseprite/aseprite/issues/4682
2025-01-25 10:38:49
It's happened several times now that I go "I wonder if this software has any requests for JXL" and when I check the github issues it's one of you guys <:KekDog:805390049033191445>
RaveSteel
2025-01-25 10:38:50
link to the file via catbox
AccessViolation_ It's happened several times now that I go "I wonder if this software has any requests for JXL" and when I check the github issues it's one of you guys <:KekDog:805390049033191445>
2025-01-25 10:39:10
I definitely have opened some issues xddd
AccessViolation_
AccessViolation_ It's happened several times now that I go "I wonder if this software has any requests for JXL" and when I check the github issues it's one of you guys <:KekDog:805390049033191445>
2025-01-25 10:40:28
ah yes a wee 40 MB JPEG
RaveSteel link to the file via catbox
2025-01-25 10:40:54
re: this*
RaveSteel
2025-01-25 10:41:00
Just a smol file
2025-01-25 10:41:04
xd
AccessViolation_
2025-01-25 10:41:54
tried effort 11 yet? :p
RaveSteel
2025-01-25 10:42:06
yes
2025-01-25 10:42:12
It was the same size as e9