JPEG XL

benchmarks

AccessViolation_
2025-11-24 06:10:15 I think they do the same for progressive JPEG 1. as far as I'm aware, the DC components of progressive JPEG actually look like perfect squares, and the decoders in browsers blur them of their own volition specifically to make progressive look nicer
_wb_
2025-11-24 06:14:23 Neither the jpeg 1 nor the jxl spec says anything about how you're supposed to render partial image data, only the final image is specified. So it's not in or out of spec to show low-res previews in a blurry way, in a blocky way, or using the fancy upsampler we use in libjxl.
AccessViolation_
2025-11-24 06:17:50 if there ends up being a way to query the percentage of bytes loaded (maybe there already is, I'm not super familiar with the Web API), websites could potentially do it themselves with JS and CSS. which in practice means no one will do it, but I like that it gives you control
2025-11-24 06:19:13 can you tell whether this one uses the default upsampler?
2025-11-24 06:20:06 because it doesn't look very blurry, but also I remember something along the lines of: LF blurring works on a 1:8 scale of the image, so it might not have the chance to blur those blocks that much given how big they are 🤔
Tirr
2025-11-24 06:22:08 that's just a result of raw unsqueeze transform and 8x NN upsampling for LF frame, nothing fancy. that one was decoded using jxl-oxide iirc
_wb_
2025-11-24 06:22:32 This looks like just unsqueezing with assumed zero residuals. The tendency term will cause it to look like that.
AccessViolation_
2025-11-24 06:23:23 oh hmm. it might be good to replace that graphic with VarDCT progressive loading, people will definitely see those more (and it's sort of expected to be used for a photographic image like this)
Tirr
2025-11-24 06:25:18 it's actually progressive VarDCT encoding, though LF frame uses Modular because squeeze transform is much better at creating lower resolution previews
AccessViolation_
2025-11-24 06:25:54 ah, gotcha
Tirr
2025-11-24 06:26:05 so it uses squeeze until 1:8 then switches to DCT
jonnyawsom3
2025-11-24 11:22:34 https://github.com/libjxl/jxl-rs/issues/387
Elias
2025-11-25 09:12:07 is there a benchmark of how fast JPEG XL decoding is at different levels of compression?
_wb_
2025-11-25 10:26:05 `benchmark_xl --input=blah.png --codec=jxl:d1,jxl:d2,jxl:d3 --decode_reps=30 --num_threads=0 --inner_threads=NB_THREADS`
2025-11-25 10:32:58 These are the kind of numbers I get with 4 threads:
```
Encoding   kPixels    Bytes        BPP   E MP/s   D MP/s     Max norm  SSIMULACRA2   PSNR       pnorm       BPP*pnorm  QABPP  Bugs
----------------------------------------------------------------------------------------------------------------------------------------
jxl:d0.2      5242  3523753  5.3768204   11.758  133.465   0.74666458  90.75474509  50.15  0.20155062  1.083701491906  5.377     0
jxl:d0.5      5242  1986915  3.0317917   12.640  182.482   1.14921882  88.17999824  44.31  0.42358538  1.284222622528  3.484     0
jxl:d1        5242  1196239  1.8253159   11.615  163.103   1.72378463  83.06925731  40.91  0.69828552  1.274591623605  3.146     0
jxl:d2        5242   689070  1.0514374   10.365  199.808   2.75924460  73.21579877  37.37  1.11687348  1.174322525976  2.901     0
jxl:d3        5242   487047  0.7431747    9.850  211.756   3.72705648  64.65200124  35.56  1.45422875  1.080746076298  2.770     0
jxl:d4        5242   386094  0.5891327    8.837  133.573   5.33052577  57.45251261  34.37  1.74048901  1.025378975295  3.140     0
jxl:d6        5242   276871  0.4224716   10.431  142.772   6.08248985  45.78377116  32.84  2.20675997  0.932293458274  2.570     0
jxl:d8        5242   221731  0.3383347   10.444  144.541   7.63092791  37.06003665  31.78  2.65585771  0.898568704211  2.582     0
jxl:d12       5242   174917  0.2669022    9.157  367.764  27.60635948  -4.51364357  27.76  8.81798618  2.353539564305  7.368     0
Aggregate:    5242   609766  0.9304284   10.502  176.985   3.60911399  59.51716417  36.68  1.27187356  1.183387241367  3.469     0
```
Elias
2025-11-26 01:15:32 oh wow thank you!
Orum
2025-11-26 02:08:12 perhaps more interesting is at different encoding efforts with roughly the same visual quality
KKT
2025-11-26 09:09:43 And if you want to test against AVIF: `--codec custom:avif:avifenc:avifdec`
๐‘›๐‘ฆ๐‘•๐‘ฃ๐‘ธ๐‘ฅ๐‘ฉ๐‘ฏ๐‘ฆ | ๆœ€ไธ่ชฟๅ’Œใฎไผๆ’ญ่€… | ็•ฐ่ญฐใฎๅ…ƒ็ด 
2025-12-21 09:14:39 has anyone ever benchmarked average/mean SSIMU2 score of AVC, HEVC and VP9 at different CRF/Q values??
Emre
2025-12-21 11:39:11 Would this count?
2025-12-21 11:39:14
2025-12-21 11:39:40
๐‘›๐‘ฆ๐‘•๐‘ฃ๐‘ธ๐‘ฅ๐‘ฉ๐‘ฏ๐‘ฆ | ๆœ€ไธ่ชฟๅ’Œใฎไผๆ’ญ่€… | ็•ฐ่ญฐใฎๅ…ƒ็ด 
2025-12-21 11:39:53 ~~how to read Butteraugli?~~
Emre
2025-12-21 11:40:03 Lower is better. 0.0 is perfect
Lumen
2025-12-21 11:40:18 lacking in details in encoder versions/arguments
Emre
2025-12-21 11:40:38
2025-12-21 11:40:53 I am sharing them one by one <:KekDog:805390049033191445>
2025-12-21 11:43:15 Git upstream versions from may 2025
Exorcist
2025-12-21 12:16:47
Emre
2025-12-21 12:27:59 Yeah in that case svt-av1 with preset 4 would demolish everything to a bigger extent here.
2025-12-21 12:28:40 This is just the highest metric capacity for all of these encoders. Basically their maximum at those bitrates.
jonnyawsom3
2025-12-23 01:31:31 Continuing from https://discord.com/channels/794206087879852103/794206170445119489/1453016879093518430 with a decade old Huawei Mate 10 Pro. Performance mode on:
```
time cjxl --disable_output 56JonnyMaze.png -d 0 -e 1 --num_reps 10
JPEG XL encoder v0.11.1 [NEON,NEON_WITHOUT_AES]
Encoding [Modular, lossless, effort: 1]
Compressed to 10545.8 kB including container (8.925 bpp).
6875 x 1375, geomean: 158.682 MP/s [110.94, 165.50], 10 reps, 8 threads.

real 0m0.952s
user 0m2.203s
sys  0m0.251s
```
Performance mode off:
```
time cjxl --disable_output 56JonnyMaze.png -d 0 -e 1 --num_reps 10
JPEG XL encoder v0.11.1 [NEON,NEON_WITHOUT_AES]
Encoding [Modular, lossless, effort: 1]
Compressed to 10545.8 kB including container (8.925 bpp).
6875 x 1375, geomean: 136.847 MP/s [103.99, 153.84], 10 reps, 8 threads.

real 0m1.078s
user 0m2.436s
sys  0m0.273s
```
2025-12-23 01:42:29
```
time cjxl --disable_output 56JonnyMaze.png -d 0 -e 2 --num_reps 10 --num_threads 0
JPEG XL encoder v0.11.1 [NEON,NEON_WITHOUT_AES]
Encoding [Modular, lossless, effort: 2]
Compressed to 9197.3 kB including container (7.784 bpp).
6875 x 1375, geomean: 4.402 MP/s [4.33, 4.44], 10 reps, 0 threads.

real 0m21.873s
user 0m16.474s
sys  0m5.384s
```
```
time cjxl --disable_output 56JonnyMaze.png -d 0 -e 1 --num_reps 10 --num_threads 0
JPEG XL encoder v0.11.1 [NEON,NEON_WITHOUT_AES]
Encoding [Modular, lossless, effort: 1]
Compressed to 10545.8 kB including container (8.925 bpp).
6875 x 1375, geomean: 49.531 MP/s [45.14, 49.76], 10 reps, 0 threads.

real 0m2.292s
user 0m2.096s
sys  0m0.195s
```
```
time cjxl --disable_output 56JonnyMaze.png -d 0 -e 1 --num_reps 10 --num_threads 2
JPEG XL encoder v0.11.1 [NEON,NEON_WITHOUT_AES]
Encoding [Modular, lossless, effort: 1]
Compressed to 10545.8 kB including container (8.925 bpp).
6875 x 1375, geomean: 82.174 MP/s [69.77, 83.24], 10 reps, 2 threads.

real 0m1.525s
user 0m2.113s
sys  0m0.205s
```
veluca
2025-12-23 02:11:19 e2 is a bit slower 😛
jonnyawsom3
2025-12-23 02:15:08 Just a tad
_wb_
2025-12-23 04:42:33 though only some of that is inherent; if you implemented e2 in a way similar to e1 (avoiding conversion to float32 and back to ints, doing YCoCg at the same time as converting interleaved to planar) it would probably be quite a bit faster
veluca
2025-12-23 04:49:03 what's e2 doing again?
jonnyawsom3
2025-12-23 05:42:26 Apparently my desktop is roughly on par with my phone using a single core
```
JPEG XL encoder v0.12.0 029cec42 [_AVX2_] {Clang 20.1.8}
Encoding [Modular, lossless, effort: 1]
Compressed to 10514.7 kB including container (8.898 bpp).
6875 x 1375, geomean: 50.507 MP/s [48.303, 51.103], 10 reps, 0 threads.
```
2025-12-23 05:52:44 I wonder how group sizes could affect effort 1 too. Size 0 could lower memory further and might even give a slight density boost thanks to locality in the image
2025-12-23 05:57:36 Not sure if this is what you meant, but generalising the only-int path *should/could* lower memory and increase speed for all efforts where the input is int(16), while the YCoCg path could also be used for efforts 2, 3 and 4 before other RCTs get applied at effort 5. The slowest part (I assume) is the MA trees
2025-12-23 05:58:14 I should also update that table to say global palette doesn't work above 2048x2048/below effort 10...
veluca
2025-12-23 05:58:59 ok, so biggest difference is that it uses an actual MA tree and ANS
jonnyawsom3
2025-12-23 05:59:58 As far as I can see, those are the only differences
_wb_
2025-12-23 06:43:06 But it's a simple fixed tree that is just a single lookup. Also probably switching to prefix codes wouldn't come at a big cost for e2/e3. With fixed trees I don't think you will get contexts with distributions where ANS makes a big difference.
veluca
2025-12-23 07:26:23 > switching to prefix codes wouldn't come at a big cost for e2/e3
that's easy to test 😛
username
2025-12-23 07:28:26 semi-related question but how easy or hard would it be to make **lossy** e1 similar to libjxl-tiny?
awxkee
2025-12-23 07:30:59 Lately I wanted to update https://github.com/awxkee/jxl-coder-swift and decided to do some rough benchmarking to see if there have been any improvements over the last year, and found an interesting thing. I did not find the image I tried last time, so I encoded a new one at 16 bits and 8 bits (effort 7, q 81). This time JXL was 2 times faster than dav1d on iPhone for that image, but 2 times slower than dav1d for the same picture on a Mac M3
veluca
2025-12-23 07:31:21 I have a branch somewhere that encodes things super fast with lossy ycbcr
awxkee
2025-12-23 07:31:26 So it seems it can actually be faster on aarch64, but it's not obvious why
veluca
2025-12-23 07:31:30 (was meant for motion jpeg)
awxkee
2025-12-23 07:31:56 Especially interesting that the gap is so huge each time, almost always 2x
veluca
2025-12-23 07:32:53 https://github.com/veluca93/libjxl/tree/fmjxl
2025-12-23 07:33:26 is that with multiple threads?
awxkee
2025-12-23 07:33:51 Yes
2025-12-23 07:35:56 Single-threaded, JXL is ~15% slower than AVIF on iPhone (didn't compute exactly), and ~20% on Mac
2025-12-23 07:36:34 Interesting <:WTF:805391680538148936>
veluca
2025-12-23 07:45:05 ok, so it's a question of parallelism/memory bandwidth
2025-12-23 07:45:37 makes sense with libjxl using f32 vs dav1d probably using 16-bit integers internally
jonnyawsom3
2025-12-23 07:59:55 If it were memory bandwidth, I'd have expected the iPhone to do worse too, not 2x faster
veluca
2025-12-23 08:02:53 mh, good point, I flipped the logic
awxkee
2025-12-23 08:04:35 if that matters this is a build with JXL_HIGH_PRECISION = 0
_wb_
2025-12-23 09:08:54 that only makes the very last stages use 16-bit ints, but most decode steps are still done in f32
๐†๐€ ๐€๐ฆ๐ž๐ซ๐ข๐œ๐š
2025-12-27 01:07:09 Hello! I just joined the server because I thought you guys might be interested in this little project I'm doing, since it involves a large dataset and I actually intend on using it (it's more than just a benchmark). I'm about to start converting the entire ASTER_GDEM-DEM TIFF dataset to JPEG XL at effort=max, quality=0.1, lossy. I'll probably upload the final result to Google Drive so you guys can check it out if you want to. If there is anything you want me to keep track of, let me know!
Quackdoc
2025-12-27 01:12:48 [chad](https://cdn.discordapp.com/emojis/862625638238257183.webp?size=48&name=chad)
A homosapien
2025-12-27 01:34:55 Awesome! Would you be willing to test other effort levels? Typically, higher efforts don't yield much gain over the defaults for lossy, but I wonder if that is still true for this data set and quality level.
monad
2025-12-27 01:46:48 Probably e7 is best for lossy . . .
๐†๐€ ๐€๐ฆ๐ž๐ซ๐ข๐œ๐š
2025-12-27 01:48:53 I might, depending on how long it will take (it's currently looking like it's going to take a bit longer than 2 weeks). I do want a high effort level even if the gain is small, due to the overall size of the dataset. I've got a batch converter script working to compress it all via GIMP; I'm not sure if there is a better way, so let me know if there is a significantly better way to do this. My computer is old, so it will be slow no matter what.
A homosapien
2025-12-27 01:56:53 Yeah a lot of the testing/dev work was done on effort 7, so that's about as good as it gets for lossy. I tend to notice efforts 8+ don't really look much better (in a few cases worse even). Also, at distance=0.1 I wonder if lossless would be smaller <:Thonk:805904896879493180>
๐†๐€ ๐€๐ฆ๐ž๐ซ๐ข๐œ๐š
2025-12-27 02:03:51 At least with the implementation that gimp is using, it absolutely is not!
2025-12-27 02:06:12 I was getting 5.6MiB lossless, 163.8KiB lossy distance=0.1
2025-12-27 02:12:12 Is that not normal?
A homosapien
2025-12-27 02:14:08 depends on the data you are compressing, but those numbers look odd
RaveSteel
2025-12-27 02:14:17 for batch encoding without editing the image you should probably use imagemagick or libvips
2025-12-27 02:15:13 either compress to JXL directly with either, or convert to a mezzanine format such as PNG and pass that to cjxl
2025-12-27 02:15:43 you'll probably get the best results with the latter
2025-12-27 02:17:01 do note that you will have to use external tools to preserve metadata, should you wish to do so
๐†๐€ ๐€๐ฆ๐ž๐ซ๐ข๐œ๐š
2025-12-27 02:46:33 This would have been way easier to set up than going through GIMP. I just tried going from PNG and passing it to cjxl, and surprisingly it gave almost identical results both in terms of speed and compression ratio. I guess the nature of the data is just better suited to lossy.
2025-12-27 02:57:37 You were right about the effort settings; I'm getting smaller files when using less effort (for lossy only). Thanks for the tip, I never would have tried that if it weren't for you.
monad
2025-12-27 03:12:39 Does it? It seems quite suspicious.
jonnyawsom3
2025-12-28 07:42:17 Wondered why my results were so weird, then realised it was spending 8x longer decoding the PNG than encoding the JXL
```
wintime -- fjxl "poster-25-12-28 05-08-33.png" nul 2 10 8
297.093 MP/s
4.344 bits/pixel
PageFaultCount: 2333585
PeakWorkingSetSize: 2.639 GiB
QuotaPeakPagedPoolUsage: 31.77 KiB
QuotaPeakNonPagedPoolUsage: 32.4 KiB
PeakPagefileUsage: 4.409 GiB
Creation time 2025/12/28 19:33:54.408
Exit time     2025/12/28 19:34:14.908
Wall time:   0 days, 00:00:20.499 (20.50 seconds)
User time:   0 days, 00:00:39.968 (39.97 seconds)
Kernel time: 0 days, 00:00:03.578 (3.58 seconds)
```
2025-12-29 11:52:33 Non-perceptual encoding is pretty busted with VarDCT. Modular is number 2, then normal VarDCT and normal Modular
2025-12-29 11:56:35 And as a bonus, our semi-fixed saturation build
2025-12-30 09:40:14 Oh, and because I somehow forgot, the original. You can (hopefully) see that the second image kept the saturation the best, because the XYB issue doesn't apply
VXsz
2025-12-31 01:43:59 Was testing the decoding optimization flag, with and without progressive decoding, and I got this weird result: 4K photo, ~50% of a 12700K (500 reps)
2025-12-31 01:44:35 CPU usage was slightly higher without progressive decoding, but it seems to scale really weirdly either way
jonnyawsom3
2025-12-31 01:45:20 What version of the encoder were you using?
VXsz
2025-12-31 01:45:27 v0.11.1
2025-12-31 01:45:56 ``` JPEG XL encoder v0.11.1 [AVX2,SSE4,SSE2] ```
jonnyawsom3
2025-12-31 01:46:13 We fixed major bugs with both progressive and faster decoding in v0.12 (not released yet...), up to 75% smaller and 25% faster than v0.11 in our testing
VXsz
2025-12-31 01:46:43 oh nice, is it available anywhere/nightly for testing? I was curious why the last build was from 11/2024 lol
jonnyawsom3
2025-12-31 01:47:26 This hosts the nightly Github build https://artifacts.lucaversari.it/libjxl/libjxl/latest/
VXsz
2025-12-31 01:48:33 Also, it seems like it's hitting a ceiling of 260 MP/s with my 12700K at like 8 threads or more IIRC (it scored even lower with more threads). Am I hitting hyperthreading limits or something like context switching?
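A ceiling like that is consistent with a serial fraction of the work dominating; a quick Amdahl's-law sketch shows the shape. The 90%-parallel figure below is purely illustrative, not a measured property of the decoder.

```python
# Quick Amdahl's-law sanity check for a throughput ceiling: if a fraction
# `p` of the work parallelizes, speedup over 1 thread caps at 1/(1-p),
# no matter how many threads you add. Numbers are illustrative only.
def amdahl_speedup(p: float, n: int) -> float:
    """Ideal speedup on n threads when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# With p = 0.9: 8 threads give ~4.7x, and infinite threads only 10x,
# so threads past ~8 barely help; SMT contention can make it worse.
```

That, plus memory bandwidth shared across cores, would explain scores dropping again past 8 threads.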
jonnyawsom3
2025-12-31 01:48:39 Oh wait, I'm an idiot, you're doing lossy right? Our changes were to lossless
VXsz
2025-12-31 01:48:56 yeah lossy
2025-12-31 01:48:57 rip
jonnyawsom3
2025-12-31 01:49:01 No wonder the size seemed small for 4K haha
VXsz
2025-12-31 01:49:19 yeah was just doing some testing
2025-12-31 01:50:38 Though the thing I'm targeting (visual novel engine) usually has aggressively compressed images
2025-12-31 01:50:48 I think I will just roll with progressive decoding
2025-12-31 01:51:05 seems like the obvious choice
2025-12-31 01:52:12 speaking of, I couldn't test how long it takes to decode the downsampled images, djxl prints just the MP/s speed and not the output size/time
A homosapien
2025-12-31 02:00:36 Are you on Windows?
VXsz
2025-12-31 02:00:44 Linux
A homosapien
2025-12-31 02:03:03 I'm not familiar with Linux, but scripting something that prints out time & file size shouldn't be too hard.
VXsz
2025-12-31 02:18:17 they would be inaccurate, I think
2025-12-31 02:18:47 load time + reps would affect it quite a bit I feel
2025-12-31 02:19:01 maybe I could do time/reps
A homosapien
2025-12-31 02:21:11 libjxl has `--num_reps`
2025-12-31 02:21:46 I'm not sure what the best app for measuring time is on Linux
2025-12-31 02:21:59 I hear good things about hyperfine
jonnyawsom3
2025-12-31 03:06:45 Load time is why the tools only output MP/s, just the other day I had a file that spent 8 seconds loading the PNG but only 1 second encoding it
2026-01-02 09:25:01 I'm noticing that for low color content, effort 6+ with no predictor does far better than all efforts with predictors. Can't really think of a reason why though
_wb_
2026-01-02 09:29:46 for lossless?
2026-01-02 09:33:29 when using a palette with N colors without predictors, the entropy coder just gets N values and the histogram can correspond to the frequencies (after lz77 removes the runs/repeats). With predictors, there are 2N values so it can be less efficient.
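The alphabet-size effect can be illustrated with a toy Shannon-entropy count. The streams below are made up, and real libjxl histograms and LZ77 behave differently; this only shows that more distinct symbols can cost bits for the same stream length.

```python
# Toy illustration: a flat N-entry palette gives the entropy coder N
# symbols; predictor residuals can roughly double the alphabet, which
# can cost bits even though the underlying image is the same.
from collections import Counter
from math import log2

def shannon_bits(symbols) -> float:
    """Total Shannon-entropy cost (in bits) of a symbol stream."""
    counts = Counter(symbols)
    n = len(symbols)
    return sum(-c * log2(c / n) for c in counts.values())

flat = [0, 1, 2, 3] * 100                     # palette indices: 4 symbols
residual = [0, 1, 2, 3, -1, -2, -3, 1] * 50   # residuals: 7 symbols
# Same length, but the wider alphabet has higher total entropy.
```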
jonnyawsom3
2026-01-02 10:06:11 Right. Maybe I'll see if the palette detection can hand the color count to the predictor selection, picking none when it's 256 max (GIF/PNG8)
_wb_
2026-01-02 12:46:16 Sometimes prediction does help though. Like when it's basically like a grayscale image.
jonnyawsom3
2026-01-02 01:48:19 Hmm, good point. No easy way to tell if it's a greyscale photo or repetitive pixel art
username
2026-01-02 02:47:27 I mean, couldn't you do a good-enough guesstimate by checking the palette to see if all the colors are just different shades of each other within a threshold or something? of course this wouldn't work well in every single case, but it would probably be good enough to bump up the average. although I guess this might fall outside of an "easy way" depending on how tough it actually is to have detection logic like this, and of course libjxl probably doesn't have anything like this set up in it.
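A very crude form of that guesstimate, limited to the grayscale-ish case, could just measure how far each palette entry sits from the gray axis. The 16-step tolerance below is invented, and this is nothing libjxl actually does.

```python
# Crude "all shades of each other" check for the grayscale-ish case:
# treat the palette as shades when every (r, g, b) entry has low chroma,
# i.e. its channels differ by at most `tol`. Threshold is made up.
def looks_like_shades(palette, tol=16):
    """True if every (r, g, b) entry is within tol of the gray axis."""
    return all(max(r, g, b) - min(r, g, b) <= tol for r, g, b in palette)

gray_ramp = [(i, i, i) for i in range(0, 256, 8)]              # shades
pixel_art = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)]
```

A fuller version would test colinearity along an arbitrary hue axis rather than just the gray one, which is where the "tough detection logic" worry kicks in.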
monad
2026-01-03 02:27:34 like this?
VXsz
2026-01-05 01:53:51 Hey, how can I benchmark like this? its exactly what I'm looking for
AccessViolation_
2026-01-05 04:24:26 I feel like I need to learn how to do lossy modular properly. with the default encoding of avifenc, this turns into a 181 KB file, which I had trouble matching in JXL at the same quality. but a *lossless* JXL is just 2.5 times the size of the AVIF
2026-01-05 04:25:04
2026-01-05 04:26:03 (first one is the PNG source, second one is the AVIF)
jonnyawsom3
2026-01-05 04:33:19 Ya know, I only just realised after all the talk in <#822105409312653333>, the reason AVIF is so good at non-photo and anime is probably because the content detection actually works, not because AVIF is inherently great at it
AccessViolation_
2026-01-05 04:40:05 well, content detection is one thing, you still need the coding tools to represent them well
2026-01-05 04:42:09 I guess you mean compared to JXL, which doesn't make attempts at detecting the type of content before encoding
2026-01-05 05:06:36 odd artifacts when using modular lossy palette
2026-01-05 05:08:25 and without (also q 70, e 10 , modular), and the source for reference
2026-01-05 05:14:29 even quality 99 lossy modular has a lot of color fringing/bleeding
jonnyawsom3
2026-01-05 05:48:09 What command did you use?
2026-01-05 05:48:26 That doesn't look like delta palette
AccessViolation_
2026-01-05 06:03:46 `time cjxl source.png q99.jxl -q 70 -m 1 -e 10 --intensity_target=400 -g 3 --modular_lossy_palette` turns out, it's the intensity target
2026-01-05 06:05:07 when I removed both the intensity target and modular lossy palette it went away. I incorrectly assumed the intensity target had nothing to do with it
jonnyawsom3
2026-01-05 06:05:52 That's not how you enable lossy palette
2026-01-05 06:06:05 Never heard of that interaction before though
2026-01-05 06:08:01 `time cjxl source.png DP.jxl -d 0 --modular_lossy_palette --modular_palette_colors 0 -e 10 -g 3 -E 11 -I 100`
2026-01-05 06:08:15 That should enable it at the best density
AccessViolation_
2026-01-05 06:10:39 oh nice, thanks
2026-01-05 06:11:29 btw, even `-q 70 -m 1 -e 10 --intensity_target=400 -g 3` results in the artifacts
2026-01-05 06:12:04 in 0.11.1 at least
2026-01-06 10:28:34 I just had a thought, can we similarly try to detect screen content, and if it's determined it's likely a screenshot, use adaptive quantization to spend more bits on things against flat colors (e.g. icons in GUIs) while spending fewer bits on any photographic content that happens to be present?
jonnyawsom3
2026-01-06 10:29:49 It should already do that since butteraugli would mark the flat colors as low quality on the quant map
AccessViolation_
2026-01-06 02:44:24 some experimentation backs this up, yeah. a screenshot of my desktop with wallpaper (so the image is largely photographic) compresses well with both AVIF and JXL, and JXL actually has the application dock, panel and their icons looking good, AVIF less so. AVIF instead has the photographic background in slightly higher quality than the JXL
2026-01-06 03:07:58 I hope that at some point the encoder will be able to segment the image into different frames (or use patches to achieve that), so we can combine VarDCT, lossless modular and lossy modular all together
2026-01-06 04:15:26 okay, I think I know how to compress this image well, it's just that I don't know how to tell cjxl to do it, or whether it's possible
2026-01-06 04:41:24 disabling squeeze is key here; squeeze removes a lot of fidelity. at that point with `--modular_lossy_palette --modular_palette_colors 0`, the distance setting reduces the color accuracy/amount, so it needs a custom palette instead. but that custom palette should not include those thousands of colors that are just there because of anti-aliasing between the few select colors. the colors should ideally be quantized based on some combination of how infrequent they are and how visually distinct they are (e.g. is this a shade that results from anti-aliasing that can be quantized, or is this a tiny, high-contrast detail)
2026-01-06 04:44:14 and if there's some specific area that's just not fit for palette while the rest of the image is, take it out with a patch, or put it on a different frame (not that would be helpful for this image, but probably generally good to do when working with images where most but not all parts are easily palette-able)
_wb_
2026-01-06 04:47:22 is it squeeze or XYB that is the problem? When using lossy palette it uses RGB, when using default lossy modular (squeeze) it uses XYB. Squeeze can also work on RGB (or rather, on YCoCg), but I don't know if cjxl/libjxl still expose that as an encode option
AccessViolation_
2026-01-06 04:54:53 squeeze, and no squeeze (normalized for file size)
2026-01-06 04:55:25 there is this, but the encoder automatically selects the best one already
```
-C K, --modular_colorspace=K
    Color transform: -1 = default (try several per group, depending on
    effort), 0 = RGB (none), 1-41 = fixed RCT (6 = YCoCg).
```
2026-01-06 04:59:18 btw, I assume you're not referring to this
username
2026-01-06 04:59:43 the heuristics for this aren't perfect! for example, in the PR that <@238552565619359744> submitted to make progressive lossless way smaller, the RCT had to be hardcoded because the heuristics kept picking really, really bad choices, bloating up the file size by a ton
AccessViolation_
2026-01-06 05:00:59 oh hmm, I checked all 41 options manually but I think that was without squeeze? I'm not sure
2026-01-06 05:04:11 and that was also at a low effort because I couldn't be bothered to wait 10 seconds every time i ran the command
veluca
2026-01-06 05:10:25 Have you tried using a global, somewhat large palette, btw?
AccessViolation_
2026-01-06 05:11:55 I don't know if this is something inherent or something that can be tuned, but I have noticed that one thing that makes VarDCT substantially worse for these types of images is not necessarily the amount of artifacts themselves, but the fact that they're artifacts in color rather than just in brightness. AVIF has plenty of artifacts when compressing images like this, but they're all in the form of turning sharp edges between colors into gradients, and introducing brightness ringing. JXL, on the other hand, really quickly starts bleeding colors and adding new colors in the artifacts
2026-01-06 05:13:52 I looked at that, but from the description in cjxl, it seemed it requires the amount of colors to already be low enough?
2026-01-06 05:14:16 but I'll try
veluca
2026-01-06 05:21:17 You can set what "low enough" means 🙂
2026-01-06 05:21:47 I'm also curious what's the actual colour count
AccessViolation_
2026-01-06 05:25:09 I feel like I need other parameters as well; it does nothing by itself no matter the value. do I need `--modular_lossy_palette` and/or `--modular_palette_colors=N`?
veluca
2026-01-06 05:30:18 I don't actually remember what cjxl does ^^'
AccessViolation_
2026-01-06 05:30:51 `Colors: 18741` according to imagemagick identify
2026-01-06 05:41:34 yeah the local `-Y` and global `-X` palette options don't do anything no matter which values I try
veluca
2026-01-06 05:54:21 That seems small enough that a global palette might be beneficial
AccessViolation_
2026-01-06 06:05:53 to beat lossy AVIF for this image, I feel like some anti-anti-aliasing or... aliasing? would go a long way too, the vast majority of those unique pixels are because of this
2026-01-06 06:17:36 okay, this might be doable. instead of going just by pixel counts and local pixel distinctness like my first idea was, you could identify the few 'main' colors of the image and then try to identify transition colors. basically by asking the question, "is this a pixel value we can expect when blending between any two of these 'main' colors?". if the answer is yes, don't add it to the palette as it's part of an anti-aliased edge. if it isn't, it's part of some other unique detail. you can then choose to e.g. move all 'main' colors into their own frame and do lossless palette compression there, and do lossless predictor-based compression, lossy modular or VarDCT for the transition-pixel frame (possibly surround transition pixels by some non-transition pixels to give VarDCT an easier time if you go that route). and if you want, those non-transition pixels that are still unique can be in their own frame too, maybe with VarDCT (as a bunch of unique non-transition pixels sounds like a photographic segment of the image)
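That "expected blend of two main colors" test reduces to a point-to-segment distance in RGB. A toy sketch under simplifying assumptions (invented threshold; real anti-aliasing blends in whatever space the rasterizer used, not necessarily linear RGB):

```python
# Toy "is this pixel a blend of two main colors?" check: a pixel counts
# as an anti-aliasing transition if it lies close to the straight RGB
# segment between some pair of main colors. Threshold is made up.
def dist_to_segment(p, a, b):
    """Euclidean distance from color p to the segment a-b in RGB space."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(x * x for x in ab) or 1          # guard a == b
    t = max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    q = [a[i] + t * ab[i] for i in range(3)]     # closest point on segment
    return sum((p[i] - q[i]) ** 2 for i in range(3)) ** 0.5

def is_transition(p, mains, tol=8.0):
    """True if p is near the blend line of any pair of main colors."""
    return any(dist_to_segment(p, a, b) <= tol
               for i, a in enumerate(mains) for b in mains[i + 1:])
```

Pixels that fail this test for every pair are the "unique detail" candidates that would go to the non-palette frame.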
2026-01-06 06:22:20 tomorrow I might try to write up some code that tries to do this segmentation. probably in an isolated Rust project though because it would take me weeks to successfully integrate any major part of this into cjxl before I even find out whether it works <:KekDog:805390049033191445>
2026-01-06 06:29:37 just a sanity check, it's not possible to combine predictor-based encoding and palette encoding using MA-trees, right?
veluca
2026-01-06 06:29:56 that's delta palette
AccessViolation_
2026-01-06 06:32:28 ooo
2026-01-06 06:33:15 okay thanks, hours of creating a worse implementation of something that JXL already does have been saved
2026-01-06 06:36:34 presumably prominent in lossless, right? a lossless encoding of that image is only 2.5 times larger than the lossy AVIF, which is impressive, and I guess it must make heavy use of that
veluca
2026-01-06 06:37:23 maybe <@604964375924834314> knows how the delta palette encoder works, but in principle I could see following an approach of "collect all the most frequent colors, then try and see how badly we can delta-palette-encode the remaining pixels with the deltas, and if it's too bad add them to the palette"
2026-01-06 06:37:40 nah, I don't think it gets used by the encoder at all unless you very explicitly ask for it
AccessViolation_
2026-01-06 06:38:27 oh, well this is exciting ^^
veluca
2026-01-06 06:39:50 delta palette + lz77 could do a pretty good job here
2026-01-06 06:40:08 but probably also some better-than-current-encoder lz77+prediction approach
AccessViolation_
2026-01-06 06:50:06 ah, this is the delta palette, used in "a lossy way"
```
--modular_lossy_palette
    Use delta palette in a lossy way; it is recommended to also set
    --modular_palette_colors=0 with this option to use the default
    palette only.
```
I did try that, but it didn't seem right in terms of which colors it chose. for example at low distances it distorts the colors; this is `-q 10`. quality 70 is around where color similarity is barely noticeable. I think it might not be using a palette derived from the image when used on its own
_wb_
2026-01-06 07:20:03 Currently we don't have any code in libjxl to use non-delta-palette in a lossy way, nor to use delta-palette in a lossless way.
2026-01-06 07:22:01 and the lossy delta-palette encoder is more of a proof-of-concept thing than something that actually works well
jonnyawsom3
2026-01-06 07:28:51 I thought the lossy delta palette only worked in lossless mode, or at least that's what my old testing suggested. Changing quality values did nothing
2026-01-06 07:30:56 Global palette doesn't work with chunked encoding (like a lot of things currently...), so <@384009621519597581> would have to use effort 10 or force patches on to go back to non-chunked
2026-01-06 07:31:24 (Again, based on my own tests, I think someone said it worked for them but it could've been group palette)
AccessViolation_
2026-01-06 07:32:12 ah dang, I thought setting `--num_threads=1` also disabled chunked encoding
jonnyawsom3
2026-01-06 07:33:14 Singlethreaded is just a symptom, not the cause
AccessViolation_
2026-01-06 07:37:47 also, I was curious what LZ77 would be for in this. is that for the palette part of it? or for the deltas you see from repeating anti-aliased edges on e.g. slopes of intersecting colors
jonnyawsom3
2026-01-06 07:39:02 LZ77 runs on the residuals of each group; palette is also stored as a sub-frame IIRC, so it would do both the quantized groups and the palette for them, but not between groups/frames
AccessViolation_
2026-01-06 07:39:45 I assumed you meant the latter, but I had to double check and you can include an offset to the predictor, so on those linear(ish?) gradients you can compensate for the consistent decrease with offsets and the residuals should be close to zero
2026-01-06 07:40:06 unless the delta palette allows just signaling a predictor, and not an offset, unlike MA tree contexts
jonnyawsom3
2026-01-06 07:42:12 To signal different predictors in a group, an MA tree is required as far as I can tell. At least that's how `-P` and `-I` interact
2026-01-06 07:42:27 Otherwise it's 1 fixed predictor as a single MA node
AccessViolation_
2026-01-06 07:43:16 this is what it says on the delta palette > Optionally, the first d of the n palette colors can be interpreted as 'delta entries'. This means that the signaled 'color' values do not represent a fixed color, but rather a difference vector that gets applied to the predicted sample values for the current pixel position. **Both the number d and the predictor choice are signaled**.
2026-01-06 07:43:22 (emphasis mine)
2026-01-06 07:46:13
2026-01-06 07:46:13 so these gradient-like intersections between palette colors here would be runs of similar offsets from the 'west' predictor. but if you can't signal an offset along with the predictor, you're going to end up with a run of about equal (but maybe not quite equal) values, that may be LZ77-able if you're lucky. but if an offset *can* be provided along with the predictor, you'll just get a bunch of nice close-to-zero residuals
2026-01-06 07:48:53 if this is the case it would be great, because then they probably compress really well with an orthogonal predictor and offset if I understand the delta palette correctly, at least
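A toy numpy sketch (not libjxl code) of the point above: on a linear horizontal ramp, the 'west' predictor turns the image into a constant run of residuals, which LZ77 or context modeling handles trivially, and a signaled offset of 1 would make them exactly zero.

```python
import numpy as np

# A horizontal ramp, standing in for an anti-aliased edge spread over 8 columns.
img = np.tile(np.arange(8, dtype=np.int32), (4, 1))

pred = np.zeros_like(img)
pred[:, 1:] = img[:, :-1]   # 'west' predictor: predict each pixel from its left neighbour
res = img - pred            # residuals the entropy coder actually sees

print(res[0])               # constant run: [0 1 1 1 1 1 1 1]
```

With a delta-palette entry of 1 applied on top of the predictor, every residual in the run would be zero instead of one.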
veluca
2026-01-06 08:02:39 lz77 helps with all the flat color areas
A homosapien
2026-01-06 08:50:31 I got JXL down to 170 KB while looking decent
2026-01-06 08:52:04 The trick is quantizing the png beforehand, then using JXL's amazing lossless abilities to compress the image down.
veluca
2026-01-06 09:09:06 I'm not surprised
2026-01-06 09:09:26 a better lossy screen content encoder should be able to do way better than that
jonnyawsom3
2026-01-06 09:27:10 Surprisingly, it even affects effort 1
```
cjxl -d 0 yeah.jxl Test.jxl -e 1
JPEG XL encoder v0.12.0 029cec42 [_AVX2_] {Clang 20.1.8}
Encoding [Modular, lossless, effort: 1]
Compressed to 938.3 kB (2.491 bpp). 2163 x 1393, 25.458 MP/s, 16 threads.

cjxl -d 0 yeah.jxl Test.jxl -e 1 --noise 1
JPEG XL encoder v0.12.0 029cec42 [_AVX2_] {Clang 20.1.8}
Encoding [Modular, lossless, effort: 1]
Compressed to 386.0 kB (1.025 bpp). 2163 x 1393, 10.999 MP/s, 16 threads.
```
2.5x smaller with global palette (I assume)
2026-01-06 09:28:27 The noise flag tries to estimate photon noise based on image content. I'm pretty sure it's broken, but either way it would hardly affect filesize, and it disables chunked encoding
veluca
2026-01-06 09:29:49 yeah e1 doesn't do local palette at all so...
jonnyawsom3
2026-01-06 09:30:15 Effort 3 goes from 2.9bpp to 0.9 bpp
AccessViolation_
2026-01-06 09:30:54 wow, really nice! using pngquant is clever
jonnyawsom3
2026-01-06 09:37:42 It gets weirder
```
cjxl -d 0 yeah.png nul -e 1
JPEG XL encoder v0.12.0 029cec42 [_AVX2_] {Clang 20.1.8}
Encoding [Modular, lossless, effort: 1]
Compressed to 392.4 kB (1.042 bpp). 2163 x 1393, 304.466 MP/s, 16 threads.
```
PNG input gives the expected results, so JXL input is doing something weird with effort 1
2026-01-06 09:40:28 Effort 2, 3 and 4 show that behaviour even with a PNG; effort 5 enables group palette so the size reduction drops to 2%
AccessViolation_
2026-01-06 11:58:04 this is also reported to be 15-30 times faster for some reason. really weird
2026-01-06 11:58:07 but interesting
jonnyawsom3
2026-01-07 11:12:59 I realised what it is: JXL input decodes to float, and fast lossless only supports 16-bit int, so it was falling back to effort 2... But that should only happen with lossy files so I'm still a little confused
monad
2026-01-07 03:37:07 There has been different behavior between JXL and PNG input ever since JXL was supported.
2026-01-07 03:38:54 Also with ssimulacra2, can't trust JXL input.
_wb_
2026-01-08 11:11:04 JXL decodes to float also for lossless files; float32 has 23 mantissa bits, so it has enough precision to store 16-bit uint data losslessly
2026-01-08 11:11:53 ssimulacra2 works with float32 images
2026-01-08 11:13:30 Converting 8-bit or 16-bit PNG to lossless JXL and back is lossless, but I think there's some slight difference in the float32 versions of the image
2026-01-08 11:16:10 In particular, the conversion of 8-bit uint to float32 and the conversion of an identical 16-bit uint version of the image (all values integer-multiplied by 257 = 65535/255 which involves no rounding) to float32 does not produce the exact same float32 values, IIRC
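A quick numpy check of that claim (with the caveat that libjxl's actual conversion path may differ): straight IEEE float32 division gives bit-identical results for the 8-bit and ×257-scaled 16-bit versions, because both round the same real number v/255 exactly once. A multiply-by-reciprocal fast path rounds the two reciprocals separately first, so those two paths need not agree in the last mantissa bits.

```python
import numpy as np

v8 = np.arange(256, dtype=np.uint8)
v16 = v8.astype(np.uint16) * 257            # exact: 65535 = 255 * 257, no rounding

# One IEEE division: both paths round the same real value v/255 once -> bit-identical.
a = v8.astype(np.float32) / np.float32(255)
b = v16.astype(np.float32) / np.float32(65535)
print(np.array_equal(a, b))                 # True

# Reciprocal fast path: 1/255 and 1/65535 are each rounded to float32 first,
# so the two conversions can disagree by an ulp or so.
c = v8.astype(np.float32) * np.float32(1 / 255)
d = v16.astype(np.float32) * np.float32(1 / 65535)
print(np.abs(c - d).max())
```

So whether the two float32 images match exactly depends on how the normalization is implemented, not on the data itself.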
Lumen
2026-01-08 11:16:42 you should use something else than ssimulacra2 if you want to check mathematical equality though
2026-01-08 11:17:07 typically ssimulacra2 uses SSIM internally which, when optimized aggressively, does not give 100 for identical images anymore because of fp error
_wb_
2026-01-08 11:19:25 ssimulacra2 should give 100 for fully identical inputs, but even an off-by-one in the float32 mantissa (which is well below the error needed to change a uint16 value) can cause the score to drop to something like 93
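For scale (a quick check, assuming samples normalized to [0, 1]): one float32 ulp at 1.0 is about 1.2e-7, well below the 0.5/65535 ≈ 7.6e-6 of error needed to flip a rounded uint16 sample, which is the "well below" being referred to above.

```python
import numpy as np

# Size of one float32 ulp at 1.0 (the coarsest spacing inside [0, 1]).
one_ulp = float(np.nextafter(np.float32(1.0), np.float32(2.0)) - np.float32(1.0))

# Smallest error that can change a sample when rounding back to uint16.
uint16_flip = 0.5 / 65535

print(one_ulp, uint16_flip)  # ~1.19e-07 vs ~7.63e-06
```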
Lumen
2026-01-08 11:19:51 it's more an issue with optimizing; there is an extremely fragile step in ssim
2026-01-08 11:19:54 which I see in vship
2026-01-08 11:19:56
2026-01-08 11:20:00 this step
2026-01-08 11:20:08 if you just let ffast-math act on it
2026-01-08 11:20:12 you lose 100
_wb_
2026-01-08 11:20:17 ah that's possible
Lumen
2026-01-08 11:20:29 just using su11+su22 - (m11+m22) and it's dead
_wb_
2026-01-08 11:20:43 in the libjxl version of ssimulacra2 you do get 100 when comparing an image to itself
Lumen
2026-01-08 11:20:52 it's worse if you use the trick to avoid computing su11 and su22 separately to avoid an entire gaussian blur
2026-01-08 11:21:30 I ended up giving that up for the upcoming 4.1.0 of vship because it brings a 10% speed improvement. I will now tell people to use PSNR to test for equality ahah
2026-01-08 11:21:36 there is no reason to use SSIMULACRA2 for that
_wb_
2026-01-08 11:22:50 when comparing a jxl to a decoded png of the same jxl, there can be a tiny difference between the two float32 images and any tiny difference will cause the score to drop
2026-01-08 11:23:09 to test for equality you should use pixel-wise max error, not PSNR
Lumen
2026-01-08 11:24:34 true, but I believe people know how to compute PSNR, while the other is not as clear in people's minds
jonnyawsom3
2026-01-08 11:25:05 Ah right, I thought this changed that, but it was only metadata https://github.com/libjxl/libjxl/pull/3385
_wb_
2026-01-08 11:25:10 an off-by-one in one pixel in a large image may not be visible in PSNR, since PSNR is basically the same as RMSE
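A small numpy illustration of that point (a hypothetical 1080p grayscale image, not any particular test set): a single off-by-one pixel yields a PSNR above 110 dB, indistinguishable from "perfect" in most reports, while the pixel-wise max error flags it immediately.

```python
import numpy as np

a = np.full((1080, 1920), 128, dtype=np.uint8)
b = a.copy()
b[0, 0] += 1                                  # one off-by-one pixel

diff = a.astype(np.float64) - b.astype(np.float64)
mse = np.mean(diff ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)        # PSNR is just a log-scaled RMSE
max_err = int(np.abs(diff).max())

print(round(psnr, 1), max_err)                # ~111.3 dB, max error 1
```

The error averages away in the MSE over two million pixels, so PSNR stays sky-high; max error is the only one of the two that actually certifies equality.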
Lumen
2026-01-08 11:26:02 ah I see, that's painful
jonnyawsom3
2026-01-08 11:27:15 As long as you say what number means lossless and if higher or lower is bad, I think most people don't care which metric it is
Lumen
2026-01-08 11:27:53 it's just that usually when people want actual mathematical losslessness they don't really care about psychovisual metrics
2026-01-08 11:28:00 so why use a slow metric?
2026-01-08 11:28:14 (for images ssimulacra2 is fine but on video not as much if using cpu current implems)
2026-01-08 11:29:01 \_wb\_ also just mentioned ways certain metrics could make the testing give wrong results
2026-01-08 11:29:27 ~~I take the 10% perf boost~~
2026-01-08 11:30:50 I was able to reach a new reported world record of 760 fps of SSIMU2 (1080p) thanks to an A100 lol
2026-01-08 11:33:17 that's 1.576 GPixels/s 😵‍💫
jonnyawsom3
2026-01-08 11:33:48 You could live-validate effort 1 lossless with that
Lumen
2026-01-08 11:34:21 it'd still be slower than an actual CPU equality test though-
jonnyawsom3
2026-01-08 11:34:55 Though it did hit 6 and 11 GP/s on ideal scenarios
2026-01-08 11:35:05 True
Lumen
2026-01-08 11:37:07 also, to reach that speed I gave up on having 100 for equality, as I said above. By losing this property we allow an optimization which removes an entire gaussian blur. It would even be faster to add a special equality check that just returns 100 than to do this gaussian blur
AccessViolation_
2026-01-08 12:06:49 I recently found out patches can utilize masks, and if I understand them correctly that means you're not just limited to rectangular shapes
2026-01-08 12:07:29 which opens up a lot of possibilities
jonnyawsom3
2026-01-08 12:09:53 Huh?
2026-01-08 12:10:03 You mean blend modes?
AccessViolation_
2026-01-08 12:12:15 yeah, I didn't know you could use them in this way. > The index of the extra channel to use as the 'alpha channel' for the purpose of patch blending is signaled. It does not have to be the (first) extra channel of type Alpha. This flexibility allows for example to have a profile picture in a reference frame with two alpha channels: one that corresponds to a rounded rectangle and another that follows the outline of the face. This picture can then be alpha-blended to the underlying image in different ways.
_wb_
2026-01-08 01:37:47 patch blending is actually crazy expressive, considering that in the encoder we don't use anything other than just kAdd on just the color channels. We made patch blending include all frame blending modes, and in both orders too (patch on top and patch below), including the option to use multiple alpha channels etc.
jonnyawsom3
2026-01-08 01:57:59 I read that as crazy expensive at first, which technically it is since the blending hasn't been very optimised IIRC (try decoding a hydrium image)
AccessViolation_
2026-01-08 01:59:12 > I read that as crazy expensive at first heh me too
2026-01-08 02:01:21 I'm surprised blending isn't optimized, as that sounds like something that's parallelizable on the pixel level, and as such would be really easy to vectorize
jonnyawsom3
2026-01-08 02:06:38 The main thing is since the encoder only uses kAdd for patches and kReplace for animations, they could probably be fast pathed quite effectively
2026-01-08 02:07:26 And last I checked some GIFs fail due to not using kAdd when the disposal mode is set to nothing
AccessViolation_
2026-01-08 02:08:01 cursed idea number I've-lost-count: you can use patches for deblocking. VarDCT-compress a lower quality version of the image that's offset by four pixels. Because it's offset by four pixels, block borders will always be in different locations. If the real image has really bad artifacts on certain block borders, patch-copy that part from the lower-quality offset image and use a smooth transparency fall-off to blend it. Then remove all parts from the low-quality image except those you are actively patching over.

I doubt this is bit-efficient compared to just setting a lower distance in the first place, but it's fun that it's possible
2026-01-08 02:08:27 I wish we had a tool to manually make silly JXLs like this
2026-01-08 02:10:00 JPEG XL Studio when? <:Stonks:806137886726553651>
jonnyawsom3
2026-01-08 02:11:08 It'd be fun to have a GUI where you can edit and re-order the bitstream. Arrange group order like puzzle pieces
AccessViolation_
2026-01-08 02:11:35 yeah it would be a lot of fun to play with
2026-01-08 02:12:26 especially because sometimes I have dumb ideas like this and I wonder how they would work in practice. it's way too much effort to implement them in the encoder for a laugh, so a tool like that would be perfect
2026-01-08 02:13:38 so something like aomanalyzer if it also allowed to make changes to it
๐†๐€ ๐€๐ฆ๐ž๐ซ๐ข๐œ๐š
2026-01-08 03:42:07 I'm getting close to finishing up testing how JPEG XL handles the ASTER_GDEM dataset. I'm planning on sharing the results on Google Drive, but even compressed, not all of the datasets will fit, so which ones do you guys want? Also, none of the losslessly compressed datasets are even close to being small enough to put onto Google Drive. If you really need the losslessly compressed datasets (not just the summaries), let me know and I might try to self-host something for like a week. That would be a pain though, so please only ask for that if it would be actually useful to you.
AccessViolation_
2026-01-08 04:10:33 just jumping in to say you can always create a torrent
๐†๐€ ๐€๐ฆ๐ž๐ซ๐ข๐œ๐š
2026-01-08 04:11:37 I'll look into that, I've heard of torrents but don't really know what they are. Thanks!
jonnyawsom3
2026-01-08 04:28:26 If I'm honest, I have no clue who here will use the files, but it'll be real interesting to see the results I think effort 9 distance 0.3 would've run a lot faster and smaller with roughly the same results, but hindsight is 20/20
๐†๐€ ๐€๐ฆ๐ž๐ซ๐ข๐œ๐š
2026-01-08 04:38:24 I kind of figured that, but hey, I was running it while I wasn't using my computer so \\('~')/
Smegas
2026-01-12 07:15:59
```
cjxl .\Test_jxl.jpg Test_jxl_Q96_E8.jxl -q 96 -e 8 --lossless_jpeg 0
JPEG XL encoder v0.11.0 4df1e9e [AVX2,SSE2]
Encoding [VarDCT, d0.460, effort: 8]
Compressed to 542.1 kB (0.867 bpp). 2500 x 2000, 10.549 MP/s [10.55, 10.55], , 1 reps, 12 threads.

cjxl .\Test_jxl.jpg Test_jxl_Q96_E8.jxl -q 94 -e 8 --lossless_jpeg 0
JPEG XL encoder v0.11.0 4df1e9e [AVX2,SSE2]
Encoding [VarDCT, d0.640, effort: 8]
Compressed to 314.2 kB (0.503 bpp). 2500 x 2000, 0.929 MP/s [0.93, 0.93], , 1 reps, 12 threads.
```
2026-01-12 07:16:35 JPEG XL encoder v0.11.0 4df1e9e [AVX2,SSE2]
2026-01-12 07:16:43 Windows 11
2026-01-12 07:18:01 I don't know if anyone has brought this up, but I was blown away by this test.
2026-01-12 07:20:20 The difference between q=96 and q=94 is gigantic! And q=94 is 10x slower! It should be the other way around and not 10x!
jonnyawsom3
2026-01-12 07:31:20 <https://github.com/libjxl/libjxl/blob/main/doc/encode_effort.md> > Chunked encoding is also disabled under these circumstances: > Efforts 8 & 9 VarDCT at distances >0.5.
RaveSteel
2026-01-13 12:13:43 <@132186538325835776> https://files.catbox.moe/w2yntb.jxr For this image I first encoded to EXR via magick and then passed that to a build of libjxl that supports EXR input. JXR - 16.6MiB JXL - 14.6MiB ``` JXL 07593a088c4b64163b3d39e11d670c4801224464dfc5f6b0ad2740885ea629f0 TIFF 07593a088c4b64163b3d39e11d670c4801224464dfc5f6b0ad2740885ea629f0 Riven 07_07_2024 22_32_33.jxl JXL 3440x1440 3440x1440+0+0 16-bit sRGB 14.2971MiB 0.000u 0:00.001 Riven 07_07_2024 22_32_33.jxr TIFF 3440x1440 3440x1440+0+0 16-bit sRGB 37.7932MiB 0.000u 0:00.002 ``` For some reason identify shows the filesize of the JXR wrong, it is actually 16.6MiB ``` JPEG XL file format container (ISO/IEC 18181-2) JPEG XL image, 3440x1440, (possibly) lossless, 16-bit float (5 exponent bits) RGB+Alpha Color space: RGB, D65, sRGB primaries, Linear transfer function, rendering intent: Perceptual ```
dogelition
2026-01-13 12:41:34 my tests have changed a bit, compare to https://discord.com/channels/794206087879852103/803645746661425173/1250101680620568727: ``` 30493053 -> 27838445 bytes, -8.71% 16173522 -> 12083875 bytes, -25.29% 34901826 -> 25180634 bytes, -27.85% 24749140 -> 21494972 bytes, -13.15% 29241593 -> 27427514 bytes, -6.20% 30205570 -> 30323007 bytes, 0.39% Best: -27.85% Worst: 0.39% Mean: -13.47% ```
2026-01-13 12:42:06 best has gotten worse, worst has gotten better, mean is slightly worse effort 9 doesn't really help: ``` 30493053 -> 28022318 bytes, -8.10% 16173522 -> 12000595 bytes, -25.80% 34901826 -> 25014097 bytes, -28.33% 24749140 -> 21500926 bytes, -13.12% 29241593 -> 27391054 bytes, -6.33% 30205570 -> 30580681 bytes, 1.24% Best: -28.33% Worst: 1.24% Mean: -13.41% ```
jonnyawsom3
2026-01-13 12:45:06 What if you try v0.9?
dogelition
2026-01-13 12:46:49 not entirely sure how to test that, i'm using a c++ program built with mingw gcc against mingw libjxl for the conversion currently
jonnyawsom3
2026-01-13 12:59:17 Ah right... Could you upload a resulting JXL then? Thankfully 0.9 accepts JXL input so I could run some tests
dogelition
2026-01-13 01:01:33 i did have that idea but it seems when i recompress a 16 bit float using cjxl it turns it into 32 bit float (with a significantly larger file size)
2026-01-13 01:01:54 i can upload some jxl files though, sure
jonnyawsom3
2026-01-13 01:07:18 Right, yeah https://discord.com/channels/794206087879852103/803645746661425173/1458783569341059258
dogelition
2026-01-13 01:10:28 https://wormhole.app/6YJxL5#3l1G8AjI4S2vBjEP38kSGQ
veluca
2026-01-13 04:58:22 does anybody here feel like collecting a few images that decode surprisingly slowly on their machine with jxl-rs (compared to libjxl single thread)? (and ideally upload them to github issues, trying to avoid duplicates, and mentioning their CPU :D) (for now "surprisingly slowly" is probably >2x/3x slower)
lonjil
2026-01-13 05:03:25 Once again I'm annoyed by how few JXL images I actually have to test decoding on
jonnyawsom3
2026-01-14 02:31:45 An interesting thing I realised: Oxide doesn't have SIMD upsampling, so I can directly compare it with jxl-rs
```
              jxl-rs     Oxide
-d 0 -e 2     15 MP/s vs 18 MP/s
Resampling 2  31 MP/s vs 6.8 MP/s
Resampling 4  55 MP/s vs 7.4 MP/s
Resampling 8  69 MP/s vs 7.3 MP/s
```
Quite the difference, so jxl-rs should do a lot better rendering progressive images
2026-01-14 02:39:12 It's actually faster than libjxl, 60MP/s for 8x resampling
2026-01-14 09:34:29 Doing some tests based on this comment https://discord.com/channels/794206087879852103/822105409312653333/1428847517637939282 and my recent dive into progressive offsetting decode speed
```
                        djxl --num_threads 0 --num_reps 10  vs  jxl_cli -s -n 10
Normal                  25.477 MP/s   16.130 MP/s
Progressive DC          24.546 MP/s   15.351 MP/s
Progressive AC          20.253 MP/s   11.596 MP/s
Progressive DC+AC (-p)  19.397 MP/s   10.932 MP/s
```
2026-01-14 09:43:16 Previously only the AC got implemented for anything progressive related, but I'd say now is a good time to switch to DC instead. Decoding is 25% faster, upsampling already has SIMD so is near-instant for both jxl-rs and libjxl, while the first pass would be available an order of magnitude faster
2026-01-19 12:52:13 Spent the past hour doing this because AVIF annoys me https://www.reddit.com/r/jpegxl/comments/1qfjrk9/comment/o0e3654/
2026-01-19 12:53:16 <https://aomedia.org/blog%20posts/AV1-Image-File-Format-Specification-Gets-an-Upgrade-with-AVIF/>
2026-01-19 12:54:24 JXL was around 1000x faster than AVIF's "Innovative" 16bit lossless encoding while still being smaller
juliobbv
2026-01-19 01:04:37 16 bit AVIF was never meant to beat JXL
2026-01-19 01:05:11 not sure where that implication came from
2026-01-19 01:08:26 support was added to expand the featureset while offering a back-compat pathway for lower bit depth decoding
jonnyawsom3
2026-01-19 01:10:01 It got posted on the Reddit and someone asked about effort 10, but while running some tests I realised the blog used the slowest setting to skim past PNG. I'm glad it's backwards compatible at least, but I just can't see it getting used unless the lossless encoder got significantly better
juliobbv
2026-01-19 01:11:56 well, that's up to the customers to decide
2026-01-19 01:12:51 but I don't think JXL's current position will ever be threatened, lossless has always been a strong suit
Orum
2026-01-19 01:36:09 AVIF/AV1 always sucked at lossless
CrushedAsian255
2026-01-19 01:50:17 it's because 90% of the extremely clever things AV1 uses for efficient low-bitrate compression are not available in lossless mode
Orum
2026-01-19 01:50:57 regardless of the reason, it's pointless to even compare
2026-01-19 01:51:34 it's obviously not a priority for AOM, because none of them care about lossless
2026-01-19 01:52:25 hell, a big part of the reason H.264 is good at lossless is because one of the x264 devs really cared about it and spent a crazy amount of time optimizing it
whatsurname
2026-01-19 02:02:26 They do care (for YUV sources though)
jonnyawsom3
2026-01-19 02:11:11 A bit like WebP
Orum
2026-01-19 02:15:00 doubt
2026-01-19 02:15:20 honestly I'm surprised they even included support for it in the spec
whatsurname
2026-01-19 02:18:53 Lossless coding improvement is claimed as one of notable features of AV2, so we shall see
veluca
2026-01-19 09:27:05 maybe it will get close to PNG 😛
Meow
2026-01-20 05:54:09 As I said before, lossless file size on par with PNG is a feature: users can quickly tell that it's lossless
whatsurname
2026-01-20 07:32:45 Well AVIF with YCgCo-R can already beat PNG.
Anyway, the original comment is about lossless coding for videos, not images
jonnyawsom3
2026-01-20 08:40:34 Can it?
_wb_
2026-01-20 08:49:34 I guess it depends on the kind of image, for photographic images, yes, it beats PNG. For something like an image with alpha and small number of RGBA colors so PNG can use a palette and do the whole image using 1 byte per pixel, AVIF will never be competitive since it has to put the RGB and the A in separate codestreams so it is impossible to exploit any correlation between RGB and A.
AccessViolation_
2026-01-20 08:53:41 how did lossless AVIF work again? was it lossy compressing the image and then losslessly storing the residuals?
jonnyawsom3
2026-01-20 09:08:01 Every benchmark I've seen showed PNG being smaller, but maybe things changed or they didn't use the right kind of image
whatsurname
2026-01-20 09:19:43 Most existing benchmarks use identity matrix (which is bad for compression) instead of YCgCo-R
2026-01-20 09:20:56 YCgCo-R is still pretty new and not supported everywhere
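For reference, YCgCo-R is the lifting-based variant that is exactly reversible in integer arithmetic (unlike plain YCgCo), which is what makes it usable for lossless coding. A minimal sketch of the forward and inverse transform, following the usual lifting formulation (not taken from any particular encoder):

```python
import numpy as np

def rgb_to_ycgco_r(r, g, b):
    co = r - b
    t = b + (co >> 1)          # >> is an arithmetic (floor) shift, also for negatives
    cg = g - t
    y = t + (cg >> 1)
    return y, cg, co

def ycgco_r_to_rgb(y, cg, co):
    t = y - (cg >> 1)          # undo the lifting steps in reverse order
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

# Round-trip a random 8-bit RGB image: every lifting step is invertible,
# so the reconstruction is exact.
rng = np.random.default_rng(42)
r, g, b = rng.integers(0, 256, size=(3, 64, 64)).astype(np.int64)
r2, g2, b2 = ycgco_r_to_rgb(*rgb_to_ycgco_r(r, g, b))
print(np.array_equal(r, r2) and np.array_equal(g, g2) and np.array_equal(b, b2))  # True
```

The identity matrix keeps R, G and B fully correlated; this decorrelation at zero rounding cost is where the compression win over identity comes from.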
_wb_
2026-01-20 09:22:46 PNG is smaller on anything like screen content where lz77 will help a lot. But for photo, avif can match/beat it.
2026-01-20 09:23:47 Non photo:
2026-01-20 09:24:03 Photo:
2026-01-20 09:25:24 In any case, even with YCoCg, I don't think there is anything avif has to offer compared to jxl, when it comes to lossless.
veluca
2026-01-20 09:29:14 I mean, if you use s5+, you can beat jxl e1! 😛
AccessViolation_
2026-01-20 09:29:43 I didn't know JXL was *that* much better than the others, even at e1 (for non-photographic)
2026-01-20 09:30:07 and matching the others at e1 for photographic is equally impressive
whatsurname
2026-01-20 09:34:02 Yeah I don't think lossless AVIF will be competitive even with AV2, it only gets close to WebP at best but is 10x slower in my preliminary test with AVM
jonnyawsom3
2026-01-20 09:35:37 Missing that Jyrki magic
whatsurname
2026-01-20 09:36:06 ^ that is with YCgCo-R, with identity matrix it's only close to PNG
AccessViolation_
2026-01-20 09:36:30 maybe AV2 will use JXL in its bitstream for lossless :3c
2026-01-20 09:37:12 but then Sisvel would seek to claim patent rights over JXL as well, we can't have that
Traneptora
2026-01-21 12:05:39 what I do to crush PNG images is I use `optipng` with a specific setting mostly to do things like palette reduction and empty alpha discarding
2026-01-21 12:06:04 and then I extract out the IDATs, uncompress them, and then recompress them with `pigz -c -f -n -z 11`
2026-01-21 12:06:12 and then splice it back into the PNG
jonnyawsom3
2026-01-21 12:21:44 I was going to say why not OxiPNG, but that doesn't do multithreaded deflate
Meow
2026-01-21 01:31:24 yeah recently oxipng just reached version 10
monad
2026-01-21 03:19:49 It is not that much better than PNG at non-photo in general. UI/pixel art/algorithmic stuff can be encoded much more densely in fast PNG than in JXL e1-3. Across a broader compute budget, WebP is far more efficient than JXL on such content. As you get closer to photo-like characteristics, whether paintings, 3D renders, generative ML stuff, JXL catches up or pulls ahead.
Adrian The Frog
2026-01-21 08:31:31 `cjxl -e 7 -d 5` this seems like a particularly bad case for jxl desaturation, i guess because all of the color info is very high frequency
2026-01-21 08:31:51 (left is jxl)
jonnyawsom3
2026-01-21 08:41:53 My vision is now covered in squiggles, but yeah. Trying our test-fix of the quant tables brings back some color
2026-01-21 08:42:47 Actually I'm gonna spoiler it because it really hurts to look at
o7William
2026-01-29 03:42:37 where can I find this awesome references?
Dunda
2026-01-29 03:46:54 This is a very, very pretty image
AccessViolation_
2026-01-29 03:47:25 right?
Dunda
2026-01-29 03:47:35 Whoops I didn't realise will's reply went so far back in time
2026-01-29 03:48:17 It's ghostly like the buddhabrot, so perhaps they're a similar idea
AccessViolation_
2026-01-29 03:50:24 it's from this paper https://arxiv.org/pdf/2506.05987
o7William
2026-01-29 03:50:52 ah I see thanks you
AccessViolation_
2026-01-29 03:51:14 page 41
2026-01-30 04:03:59 I did some testing of JPEG XL against AVIF, and at the medium to high quality end JXL is indeed better than AVIF in terms of compression performance: at the same file size, AVIF does a lot more smoothing, removing noise or fine textures present in the image. The only thing is that JXL suffers from color bleeding (which may or may not be related to the desaturation issue; I would describe it as the opposite, making certain high-contrast features like sharp edges *more* contrasty, even at very high qualities). Interestingly, pretty much the exact same artifacting (at least visually) was also there with lossy modular with squeeze at very high qualities
jonnyawsom3
2026-01-30 05:03:43 What encode options did you use for the AVIF?
AccessViolation_
2026-01-30 05:16:36 it was avifenc at quality 95, no other options specified
2026-01-30 05:29:11 with JXL and AVIF both at q95, for this image, the AVIF is 5.2 MB and the JXL is 4.9 MB. the more pronounced structures of the image are preserved really well in both, but AVIF seems too eager to remove the noise from otherwise slow gradients, and on top of that (or maybe as a result of that), introduces noticeable posterization at 100% zoom
2026-01-30 05:31:47
2026-01-30 05:35:48
Exorcist
2026-01-30 05:57:54
AccessViolation_
2026-01-30 06:38:12 I forgot about `tune=iq`. I know my version of avifenc doesn't support it yet, though
A homosapien
2026-01-30 10:29:14 Following up from https://discord.com/channels/794206087879852103/804324493420920833/1466568154003275923 I did some tests comparing MSVC vs. Clang, with a 1080p image and `-d 1 -e 7 --num_reps 5`. I can confirm there is a ~50% perf increase for lossy.
```
Single Threaded
{MSVC 19.44.35222.0}
geomean: 2.149 MP/s
Wall time: 5.39 seconds
User time: 4.59 seconds

{Clang 19.1.5}
geomean: 3.265 MP/s
Wall time: 3.19 seconds
User time: 2.86 seconds

All Cores 12-Threads
{MSVC 19.44.35222.0}
geomean: 5.641 MP/s
Wall time: 1.84 seconds
User time: 8.20 seconds

{Clang 19.1.5}
geomean: 8.325 MP/s
Wall time: 1.25 seconds
User time: 5.20 seconds
```
2026-01-30 10:42:27 There is also a significant perf increase for decoding lossy images. Outputting to ppm with 20 repetitions.
```
Single Threaded
{MSVC 19.44.35222.0}
geomean: 35.317 MP/s
Wall time: 1.18 seconds
User time: 1.11 seconds

{Clang 19.1.5}
geomean: 50.952 MP/s
Wall time: 0.82 seconds
User time: 0.78 seconds

All Cores 12-Threads
{MSVC 19.44.35222.0}
geomean: 122.531 MP/s
Wall time: 0.35 seconds
User time: 1.70 seconds

{Clang 19.1.5}
geomean: 142.274 MP/s
Wall time: 0.30 seconds
User time: 0.66 seconds
```
veluca
2026-01-30 10:45:55 what's the CPU?
A homosapien
2026-01-30 10:46:40 Fast lossless speeds (`-e 1`) also doubled, ~150 MP/s -> ~320 MP/s. I have an Intel Core i5-12400
veluca
2026-01-30 10:47:58 pretty good
A homosapien
2026-01-30 10:51:59 Other than fast lossless, it seems like the overall speed up for lossless diminishes to around +3-5% give or take.
username
2026-01-30 10:53:32
2026-01-30 10:53:33 relevant image
veluca
2026-01-30 10:57:16 makes sense, significantly less simd 😉
AccessViolation_
2026-02-06 09:44:28 I'm a staff member in another community, and with JXL support in browsers getting ever closer I want to losslessly compress some assets to impress the rest of the team, and add them to the site with a fallback to the original PNG. This is the smallest I've been able to get them; if anyone feels like trying to beat it, that'd be great. I'm sure there are more gains to be had in ways I don't know about
2026-02-06 09:45:43
2026-02-06 09:45:52
2026-02-06 09:48:13 the `hero.png` will probably do well with palette compression because it's got only 1177 colors. which is unfortunate, I wish we had a full color render instead that was a good contender for lossy
2026-02-06 09:49:40 `logo.jxl` is already really good at 14.4% of the original size
username
2026-02-06 09:51:09 hmm something I'm wondering is how much of a size bump is there to making it progressive as well? (with nightly libjxl ofc)
2026-02-06 09:51:40 I know what I'm asking is kinda in the opposite direction of what you are asking for but still I'm curious
AccessViolation_
2026-02-06 09:56:01 I'd be interested in that for sure. Iirc palette compression doesn't work for progressive, so I don't know how good of a fit it would be for `hero.png`, but `logo.png` is mostly a gradient in the vertical direction so it'd do pretty well there I suspect?
2026-02-06 09:57:19 the majority of that image is 'predict west' with residuals of zero, so at least half of the squeeze steps might be more or less free
monad
2026-02-06 02:59:29 For logos, v0.10 measurements showed d0e10E4I100g3Y0patches0 and d0e10E4I100g2modular_palette_colors0X0patches0 to be complementary. This was found from measuring 3840 commands against 70 logo images, collecting a non-minimal set of 23 commands which minimized bpp (by iteratively collecting the command which minimized the most images), then exhaustively finding the two of those which together produced the least bpp.
A homosapien
2026-02-06 11:31:08
2026-02-06 11:32:22 I eked out small gains, probably not worth the time invested 😅.
AccessViolation_
2026-02-07 12:01:48 but appreciated nonetheless!
2026-02-07 12:02:07 I'm going to be using these then :>
monad
2026-02-07 05:17:55 some decode
VcSaJen
2026-02-08 02:17:48 Good to know that decoding effort 10 is practically as quick as decoding default effort
monad
2026-02-08 08:54:01 addressing an old question, the e7E11g3 photo hack does come with a decode penalty
jonnyawsom3
2026-02-08 09:03:46 Makes sense, E means channels can't decode in parallel and g reduces the amount of threads while increasing memory
monad
2026-02-08 09:05:41 `num_threads0`, though
jonnyawsom3
2026-02-08 09:48:37 I'm not sure if it's just due to the larger memory required, but group size has a very non-linear impact on both encode and decode, even while singlethreaded
monad
2026-02-08 10:53:28 dunno what that means, but I know single-threaded encode tends to be faster as group size increases
_wb_
2026-02-08 11:08:59 Using prevchannels does come at a cost in decode speed, that's why it is disabled by default.
jonnyawsom3
2026-02-09 12:10:48 Interesting, maybe I was using an edge case image or it's just my CPU being old again
monad
2026-02-11 09:56:29 turns out, if you need to best WebP density for scaled pixel art, you can still get half-decent decode speed
AccessViolation_
2026-02-15 10:53:36
2026-02-15 10:53:58 source image:
2026-02-15 11:01:42 (the encoder did not enable the EPF with the default of "encoder chooses")
spider-mario
2026-02-15 11:14:03 aah, the "Stop Doing Math" graph
2026-02-15 11:14:58 (https://knowyourmeme.com/photos/2029841-stop-doing-math)
AccessViolation_
2026-02-15 12:38:27 oh yeah!
2026-02-15 12:45:44 I should've recognized it. I love that meme format
_wb_
2026-02-15 03:38:46 It doesn't? We should change that imo. EPF should be as useful for lossy modular as for lossy vardct.
AccessViolation_
2026-02-15 04:12:24 I meant it didn't in this specific case
2026-02-15 04:18:19 I mean, maybe you're right. I just know that for this image, it wasn't enabled, and assumed the encoder made that decision because it thought it was best according to some heuristic
_wb_
2026-02-15 05:07:11 Afaik the only thing influencing whether to enable EPF (and how many iterations of EPF) is the distance setting
2026-02-15 05:08:09 The heuristics are about how to modulate the local strength of EPF, in case of vardct (when using modular it's just a global filter, iirc)
AccessViolation_
2026-02-15 05:08:21 gotcha
2026-02-15 05:09:49 I tested another image out of curiosity, one with small and big planes of flat colors, and it actually did worse there. the sharp line between two distinct colors gets blurred. it makes sense, they're low contrast sections so EPF will apply a heavy blur
2026-02-15 05:10:07 it's interesting that it worked so well on that image above and so poorly on that other one
2026-02-15 05:14:03 even though EPF is designed to weigh pixels of different colors less heavily when deciding what to do with the current pixel, so I'm not sure what happened there... (the source image in question is included)
2026-02-15 05:17:54 Since EPF is just decode time, at higher effort levels you could presumably try 0, 1, 2, 3 at encode time and see which gives a better butteraugli score, but I don't know if butteraugli is fast enough for that to be worth it ๐Ÿค”
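The behavior being discussed here can be sketched with a toy 1-D edge-preserving smoother. This is NOT libjxl's actual EPF (whose kernel, sigma handling, and per-block modulation are different); it just shows the general mechanism: a neighbor's weight falls off with its difference from the center sample, so edges well above the smoothing sigma survive while low-contrast edges get blurred away.

```python
import numpy as np

def toy_epf_1d(signal, sigma, iterations=1):
    """Toy 1-D edge-preserving smoother (illustrative only, not libjxl's EPF).

    Each sample is averaged with its two neighbors, but a neighbor's weight
    drops exponentially with its squared difference from the center sample.
    With a large sigma, low-contrast edges get smoothed away too.
    """
    x = signal.astype(float)
    for _ in range(iterations):
        out = x.copy()
        for i in range(1, len(x) - 1):
            window = x[i - 1:i + 2]
            w = np.exp(-((window - x[i]) ** 2) / (2 * sigma ** 2))
            out[i] = np.sum(w * window) / np.sum(w)
        x = out
    return x

# A high-contrast edge survives, a low-contrast edge gets smeared:
hi = np.array([0, 0, 0, 200, 200, 200.0])
lo = np.array([0, 0, 0, 20, 20, 20.0])
print(toy_epf_1d(hi, sigma=10))  # edge stays essentially sharp
print(toy_epf_1d(lo, sigma=10))  # values near the edge drift toward each other
```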
_wb_
2026-02-15 07:40:58 I think most metrics will not like EPF in general.
RaveSteel
2026-02-15 08:43:31 <@&807636211489177661>
spider-mario
2026-02-15 09:39:26 despite the recent widening of admin privileges, the `@admin` ping still notifies only two people; should we update that?
monad
2026-02-15 09:53:13 there was a widening of admin privileges? yes, some dedicated role should signify, if not admin
username
2026-02-15 10:04:27 moderators?
monad
2026-02-15 11:43:41 yes, if 'admin' is the full server admin role (which initially was just to hide Jon's crown), 'mod' or such can be the community moderation role, and IMO that should be a primary visible role so it's as clear as possible we should ping those particular people
AccessViolation_
2026-02-16 02:13:17 grr, q95, still color fringing, I haven't been able to make it not do that
2026-02-16 02:14:45 even q99 has it
jonnyawsom3
2026-02-16 02:18:01 The B quant table strikes again
AccessViolation_
2026-02-16 03:33:26 that actually makes me wonder if part of the reason why VarDCT appears disproportionately bad on screen content with higher distances, is also because of this
2026-02-16 03:34:42 because a lot of the artifacts you do get, are in different colors, which makes them a lot worse than if they were just luma artifacts (I've mentioned this issue before, but wasn't sure what exactly caused it)
jonnyawsom3
2026-02-16 03:39:29 If the artifacts are blue or yellow, it's the quant table again
AccessViolation_
2026-02-16 03:42:48 interesting, I'll keep an eye out for that
jonnyawsom3
2026-02-16 09:38:27 Found an interesting image that gets covered in grey noise after encoding. Disabling Gaborish hardly helps so it's not the sharpening, v0.8 did better so that suggests block selection, but using RGB lossy modular is fine suggesting it's XYB again...
2026-02-16 09:41:25 Here's a crop of it for easier testing, on the full image it causes a lot of desaturation since it's almost entirely red Original, v0.8, Main, RGB Lossy Modular
Adrian The Frog
2026-02-22 06:11:01 can anyone manage to get a decent lossy result on this animated gif? original 20.5 kb, `-d 0 -e 1` is 19.6, `-d 0 -e 7` is 17.5, `-d 0 -e 11` is 15.3 kb, `-d 2 -e 10` is 34.2 kb, `-d 5 -e 10` is 15.3 kb, `-m 1 -d 2 -e 10` is 25.1 kb, `-m 1 -d 5 -e 10` is 18.9 kb
2026-02-22 06:19:37 webp lossless seems to handle it decently well
2026-02-22 06:22:18 it doesn't clear the transparent area between frames lol, although maybe that's ffmpeg's fault
2026-02-22 06:22:39
2026-02-22 06:23:14 and doesn't loop by default which is strange
username
2026-02-22 06:24:13 maybe should try using gif2webp instead? https://developers.google.com/speed/webp/docs/gif2webp
Adrian The Frog
2026-02-22 06:30:55 mostly I'm just wondering why jxl is doing so badly tho
2026-02-22 06:31:34 webp doesn't even support inter frames
AccessViolation_
2026-02-22 11:26:31 I thought animated WebP was a VP8 video
jonnyawsom3
2026-02-22 04:23:40 Well 1 it's lossless WebP and 2 it's still intra only. I doubt you'll get much from lossy since GIF is already so quantized by nature
Adrian The Frog
2026-02-22 05:29:53 animated webp can do lossy. gif does use palette compression, do you think jxl is not taking advantage of this when converting?
jonnyawsom3
2026-02-22 06:52:30 Yeah, but you said you were using lossless
Adrian The Frog
2026-02-23 03:54:55 does lossless not use a palette?
jonnyawsom3
2026-02-23 04:20:41 Sorry, meant to send another message but forgot. I meant you're using lossless WebP so it's not VP8, but lossless JXL *should* be using palette... Though using `-X 0` to *dis*able palette doesn't do anything, so I'm not sure what's going on
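For reference, the core idea of a palette transform can be sketched in a few lines. This is a toy illustration of the concept only, not libjxl's actual Palette transform (which also supports things like delta palettes): replace each pixel with an index into a table of the image's distinct colors, losslessly and reversibly.

```python
import numpy as np

def palettize(img):
    """Toy palette transform: map each RGB pixel to an index into a
    table of the image's distinct colors. Fully reversible."""
    h, w, _ = img.shape
    flat = img.reshape(-1, 3)
    palette, indices = np.unique(flat, axis=0, return_inverse=True)
    return palette, indices.reshape(h, w)

def depalettize(palette, indices):
    # Look up each index in the palette to reconstruct the pixels.
    return palette[indices]

# 2-color test image: the round trip is bit-exact.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = 255
pal, idx = palettize(img)
print(pal.shape[0])  # number of distinct colors in the palette
print(np.array_equal(depalettize(pal, idx), img))
```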
2026-02-23 04:23:01 Running it through an APNG optimizer first got it down to 11.4 KB, should still be lossless
username
2026-02-23 04:29:17 for context, the current libjxl encoder does **very little** in the way of taking advantage of what the JXL format supports animation-wise. running an image through an APNG optimizer first to get differential frames and blending, then passing that into libjxl, will get a smaller result, since libjxl doesn't do any of that itself yet but can copy/convert over the work from a pre-existing APNG
jonnyawsom3
2026-02-23 04:30:43 (By "very little", it does nothing. It just encodes the frames it's given, pre-optimized or not)
monad
2026-02-23 06:16:35 use libwebp_anim
_wb_
2026-02-23 07:44:05 libjxl doesn't distinguish between layers and frames in how they get encoded, only the duration is different. But semantically, there is kind of a big difference: if you have a layered image, generally you want to preserve the layer data as is, while if you have an animation, you generally only care about what the blended frames look like, and you don't care about what's in the actual frame data.
Exorcist
2026-02-23 07:45:47 I searched libwebp and found that `anim_encode.c` does diff frames
_wb_
2026-02-23 07:45:51 Basically in animations, blend modes and the actual frame data is generally seen as just a coding tool and the semantic object is the blended frame. While in layered images, blend modes and the actual layer data is part of the image data that needs to be preserved.
2026-02-23 07:48:14 This distinction is similar to "does the color of an invisible pixel matter?". When delivering an image for display, obviously it doesn't, invisible is invisible so who cares. When in a production workflow though, it can be that you want to keep the alpha editable and then you do need to preserve the color of invisible pixels since they might become visible after editing.
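A toy illustration of the point above (plain Python, nothing libjxl-specific): once a pipeline premultiplies alpha, the color of a fully transparent pixel is gone for good, so a workflow that only preserves the blended result cannot give it back after editing.

```python
def premultiply(r, g, b, a):
    """Premultiplied alpha: scale each color channel by coverage (a in 0..255)."""
    return (r * a // 255, g * a // 255, b * a // 255, a)

# A fully transparent pixel that still carries a meaningful color:
pixel = (200, 100, 50, 0)
print(premultiply(*pixel))  # the color channels collapse to zero, unrecoverable
```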
VcSaJen
2026-02-23 07:52:58 How large is frame header overhead? Would encoding several "clumps" of changes via several zero-duration frames be smaller than just encoding one bounding box of changes?
_wb_
2026-02-23 08:28:15 What's smaller depends on the data; frame header overhead is not large, but regions of zeroes are also cheap, so it's hard to tell what's best.
jonnyawsom3
2026-02-23 08:30:33 Blending is pretty slow to decode (for now), just see Hydrium's decode speed... So I'd just stick to 1 frame with all the changed regions
_wb_
2026-02-23 08:35:25 I guess that's mostly true in the MT case, since frames are decoded sequentially while groups within a frame are decoded in parallel. But in principle it should be possible to also decode frames in parallel, though it does get a bit tricky.
jonnyawsom3
2026-02-23 08:38:47 All I know is that decoding default Hydrium files is extremely slow and memory intensive compared to full-frame
AccessViolation_
2026-02-23 08:45:37 In the past I tested a use case where the pattern of ordered dithering was lifted into a different frame, such that one frame had banding and no dithering, while a different frame had the dither pattern alpha blended on top. this resulted in very large savings on effort 7, but effort 9 and above did equally well on the original without lifting the dither pattern into a new frame
2026-02-23 08:47:31 I think I even had one frame per color for the dither frames? there were several. so using multiple frames to give the encoder an easier time can be worth it
VcSaJen
2026-02-23 09:13:23 Yeah, I also thought about background extraction into different layer (frame?).
๐‘›๐‘ฆ๐‘•๐‘ฃ๐‘ธ๐‘ฅ๐‘ฉ๐‘ฏ๐‘ฆ | ๆœ€ไธ่ชฟๅ’Œใฎไผๆ’ญ่€… | ็•ฐ่ญฐใฎๅ…ƒ็ด 
2026-02-23 06:04:27 there's `img2webp`, requires unpacking individual frames first though
o7William
2026-02-26 08:15:10 jxl increased this image's file size from 18.5KB to 20.5KB, what could be the cause of this? I'm sure this should perform about as well as manga or similar
2026-02-26 08:16:13 default ffmpeg so probably libjxl 0.11.1 VarDCT effort 7 yes?
2026-02-26 08:16:25 or the modular lossless conversion one
AccessViolation_
2026-02-26 08:23:54 what format was the original?
o7William
2026-02-26 08:24:15 jpg apparently
2026-02-26 08:24:44 seems like a compressed version of the other original
AccessViolation_
2026-02-26 08:25:01 ah, so the one you shared is the one you used
o7William
2026-02-26 08:25:08 yeah it is
AccessViolation_
2026-02-26 08:26:22 it gets smaller with lossless transcoding
o7William
2026-02-26 08:28:44 ffmpeg -i .\pxfuel2.jpg -c:v libjxl -distance 0 -modular 1 modular.jxl
2026-02-26 08:28:52 I ran this and now it's up to 34kb
2026-02-26 08:29:04
AccessViolation_
2026-02-26 08:30:03 an image that's already lossy compressed will have compression artifacts like these. if you then try to lossy compress it again, the encoder will think it's detail that needs to be preserved and will spend a lot of bits on doing so. that's why the file size will often be larger, even if the new format is better
o7William
2026-02-26 08:30:15 figured
2026-02-26 08:30:24 I'll have to find the real original then ig?
AccessViolation_
2026-02-26 08:32:26 that's the fastest option. google reverse image search is good, just plop in the image, and the original will be the oldest (if google has it). that's your best shot
o7William
2026-02-26 08:32:38 I did try lens
2026-02-26 08:32:55 it has a 34kb size which performs well in jxl
2026-02-26 08:33:06 but still seem to be compressed tho
2026-02-26 08:33:10 <:kekw:808717074305122316>
AccessViolation_
2026-02-26 08:36:59 here you go :) it's not antialiased though (there are only two colors, no blending between black and white)
2026-02-26 08:38:34 I used pngquant to reencode the image using only two colors, which means pixel values got rounded to either black or white
2026-02-26 08:38:56 so it's not the actual original, but it doesn't have DCT compression artifacts anymore
o7William
2026-02-26 08:40:03 is it something like dejpeg?
2026-02-26 08:40:49 oh it's an encoder
2026-02-26 08:40:56 or compressor
AccessViolation_
2026-02-26 08:41:34 no, pngquant is for reducing the amount of colors in an image, basically a way of lossy compressing PNG. but because this image *basically* only has two colors, black and white, I can tell it to only use two colors and it will round every pixel to the nearest of black or white. it's more of a lucky coincidence that this works for this specific image
2026-02-26 08:44:00 this is what it looks like when I run the same command on a screenshot I just took
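The rounding described above can be sketched like this. It is a simplification: pngquant actually builds its palette with median-cut quantization and can dither, but for an image that is already essentially black and white, a 2-color palette reduces to a simple threshold.

```python
import numpy as np

def snap_to_two_colors(gray):
    """Round every grayscale pixel to pure black or pure white.
    A simplified stand-in for quantizing a near-bilevel image
    down to a 2-color palette (no median cut, no dithering)."""
    return np.where(gray >= 128, 255, 0).astype(np.uint8)

# JPEG ringing leaves values like 3 or 250 near edges; snapping
# removes those artifacts, but also removes any real antialiasing.
noisy = np.array([[3, 250, 127], [129, 0, 255]], dtype=np.uint8)
print(snap_to_two_colors(noisy))
```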
o7William
2026-02-26 08:44:27 Still, how come you used lossless transcoding and got a smaller size
2026-02-26 08:44:50 Are my modular parameters wrong?
AccessViolation_
2026-02-26 08:46:20 JXL has a special mode for losslessly recompressing JPEGs. it doesn't reencode the pixels, but applies better compression to all the data that's already there. it will save about 20%, and you can always get the original JPEG back
o7William
2026-02-26 08:46:36 Oh well yes I am aware but uh
2026-02-26 08:46:47 I recall there are multiple modular modes
2026-02-26 08:47:00 And lossless transcoding is one of them yes?
2026-02-26 08:47:18 Maybe I did it wrong but I put -modular 1
AccessViolation_
2026-02-26 08:47:47 I think you may be confusing lossless compression with lossless JPEG recompression
2026-02-26 08:49:19 this is a bit of an oversimplification, but you could say JPEG XL has four 'modes':
- lossy (VarDCT)
- lossy (Modular)
- lossless (Modular)
- lossless (lossless JPEG recompression)
o7William
2026-02-26 08:49:57 Oh damn that makes sense
2026-02-26 08:50:17 I mistakenly thought recompression was also one of the modular modes
AccessViolation_
2026-02-26 08:51:02 I suspect what you're thinking of is lossless modular, which is basically like encoding the pixels of a JPEG as a PNG. what I'm talking about, lossless JPEG recompression, is more like putting the original JPEG in a zip file so it becomes smaller, while also not changing any pixel data
o7William
2026-02-26 08:51:44 Well the jpeg recompression is what I planned to do