JPEG XL

benchmarks

o7William
2026-02-26 08:52:14 Normally on Linux it should default to recompression, but that doesn't seem to be the case with ffmpeg on Windows
2026-02-26 08:52:38 So I have to figure out the parameters which I still don't know
AccessViolation_
2026-02-26 08:53:27 ah, sorry for explaining something you already knew. yeah, that sounds about right; I remember reading in here that ffmpeg doesn't support lossless JPEG recompression yet, but that was a while ago
o7William
2026-02-26 08:54:04 It's fine, maybe I forgot something too and can learn it again 👍
RaveSteel
2026-02-26 08:54:09 ffmpeg indeed does not support lossless jpeg recompression
AccessViolation_
2026-02-26 09:02:07 there is also the nefarious lossless VarDCT mode but I think I would get stern looks from the core devs if I mentioned it
2026-02-26 09:03:24 I think I get stern looks from the core devs a lot actually <:KekDog:805390049033191445>
_wb_
2026-02-26 09:12:31 Lossless vardct? I guess that's what jpeg recompression does...
AccessViolation_
2026-02-26 09:14:43 true, but I'm referring to the idea of taking in 8 bit per channel pixel data, and encoding with high enough precision and without quantization to result in the exact same pixel values in a sufficiently high precision decoder
2026-02-26 09:15:39 I vaguely remember libjxl got a bug report from someone who tried this but ran into issues when decoding
veluca
2026-02-26 09:19:07 I guess technically it could work...
AccessViolation_
2026-02-26 09:27:09 I've always wondered how that would compare to lossless Modular for a photographic image in terms of compression ratio. I don't have a good intuition at all for how efficiently a non-quantized DCT representation can be compressed
2026-02-26 09:30:26 If I understand correctly, this is the basis of how lossless AVIF works, though its 'base' image is not entirely lossless, so it additionally stores residuals to get back to the original, and that does about as well as JXL's lossless modular on photographic, somewhat noisy imagery
2026-02-26 09:31:51 (I know this because at one point I had this idea and I was told I had just invented lossless AVIF)
_wb_
2026-02-26 09:32:26 I guess with the small DCT transforms like 2x2 or Hornuss it would be most likely to work
veluca
2026-02-26 09:33:45 yes, that's more or less how lossless AVIF works
2026-02-26 09:34:05 and also.. JPEG-LS?
AccessViolation_
2026-02-26 09:37:16 these 'lossless VarDCT' images would be useful for tracking how much precision is lost due to accumulating rounding errors in the decoding process
2026-02-26 09:39:38 I don't know how you would even go about producing that, but setting the quant tables to 1 is the obvious first step
2026-02-26 09:41:14 no restoration filters, no decorrelation tools that aren't mathematically reversible
veluca
2026-02-26 09:48:42 in jxl you can set quant tables to 1e-8 or so, too
2026-02-26 09:49:22 or 0.00006103515625 more precisely
2026-02-26 09:49:32 (lowest possible non-subnormal fp16 value)
2026-02-26 09:49:41 (... I *think*, maybe you can go lower)
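[editor's note: veluca's value is easy to sanity-check with Python's half-precision `struct` format; this is a generic IEEE 754 check, nothing libjxl-specific. 0.00006103515625 is exactly 2⁻¹⁴, the smallest *normal* fp16 value, and subnormal halves do go lower, down to 2⁻²⁴:]

```python
import struct

# 0.00006103515625 is exactly 2**-14, the smallest normal fp16 value.
x = 2.0 ** -14
assert x == 0.00006103515625
# It survives a round trip through IEEE 754 half precision ('e' format).
assert struct.unpack('<e', struct.pack('<e', x))[0] == x

# Subnormal half-precision values go lower still, down to 2**-24.
tiny = 2.0 ** -24
assert struct.unpack('<e', struct.pack('<e', tiny))[0] == tiny
```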
AccessViolation_
2026-02-26 09:54:20 I might be misunderstanding how values in quant tables affect quantization. I thought 1 meant no quantization since it's the multiplicative identity
veluca
2026-02-26 10:10:23 what happens is that you divide by the quant table value then round
2026-02-26 10:10:31 if the input were an integer, you'd be right
2026-02-26 10:10:42 but dct coefficients (in jxl) can and will be arbitrary floats
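[editor's note: a minimal sketch of the divide-then-round step veluca describes (illustrative only, not libjxl's actual code): with a quant value of 1 the round trip is exact for integer input, but a fractional DCT coefficient still gets rounded away:]

```python
def quantize(coeff, q):
    # Divide by the quant-table value, then round to the nearest integer.
    return round(coeff / q)

def dequantize(level, q):
    return level * q

q = 1.0
# Integer input: a quant value of 1 really is lossless.
assert dequantize(quantize(42, q), q) == 42
# Fractional DCT coefficient: the rounding step still loses information.
assert dequantize(quantize(3.7, q), q) == 4.0
```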
monad
2026-02-26 11:40:07 a very destructive process, isn't it? I believe it is safer to assume the antialiasing should be preserved. seems like an ideal situation for jpeg2png decoding.
2026-02-26 11:43:39 here's an example jpeg2png decode and a derivative jxl at the same byte size as the input jpeg
2026-02-27 12:08:58 or here's an example at 17217 B, closer to the transcoded jpeg at 17210 B
Exorcist
2026-02-27 08:08:32 Here we go again: https://github.com/victorvde/jpeg2png
NovaZone
2026-02-27 09:12:57 Or https://github.com/ilyakurdyukov/jpeg-quantsmooth
TheBigBadBoy - 𝙸𝚛
2026-02-27 09:25:31 (or waifu2x)
Exorcist
2026-02-27 09:32:50 Every time I use jpeg2png, I indeed feel "JPEG is Alien Technology from the Future". In theory, DCT is bad for hard edges, but with jpeg2png it still works!
NovaZone
2026-02-27 09:33:23 Too ancient
2026-02-27 09:33:36 Also anime only
RaveSteel
2026-02-27 09:34:23 upscayl works fine
NovaZone
2026-02-27 09:34:53 Yee, despite their age, they work well
2026-02-27 09:36:05 Thio made an updated one btw https://github.com/ThioJoe/jpeg2png
2026-02-27 09:36:36 Is that 1x?
RaveSteel
2026-02-27 09:36:51 it can work 1x
NovaZone
2026-02-27 09:37:12 Hmm 🤔 esrgan? Or?
veluca
2026-02-27 09:37:13 wonder why we don't see a decent diffusion model to do this around
RaveSteel
2026-02-27 09:37:22 different models available
NovaZone
2026-02-27 09:39:26 2024 ncnn and esrgan kk
Exorcist
2026-02-27 09:40:28 jpeg2png is literal gradient descent, not machine learning
NovaZone
2026-02-27 09:40:48 Can do but it won't be faithful
veluca
2026-02-27 09:41:05 it can absolutely be though
NovaZone
2026-02-27 09:41:36 GAN NNs will be close
2026-02-27 09:42:08 Haven't seen 1 latent diffusion based upscaler that doesn't invent details
veluca
2026-02-27 09:42:55 well, as long as it invents details that are compatible with the JPEG data you have ... (i.e. it creates a JPEG that would recompress to the same pixels)
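[editor's note: the compatibility constraint veluca mentions is easy to state: a restored coefficient must lie in the quantization interval that re-quantizes to the stored level. A toy sketch with hypothetical helper names, assuming nearest-integer rounding:]

```python
def interval(level, q):
    # All coefficients that quantize (divide by q, round) back to `level`.
    return ((level - 0.5) * q, (level + 0.5) * q)

def project(coeff, level, q):
    # Clamp a candidate (e.g. model-hallucinated) coefficient into that
    # interval, so the result would recompress to the same JPEG data.
    lo, hi = interval(level, q)
    return min(max(coeff, lo), hi)

# A hallucinated value of 100 gets pulled back to the interval edge:
assert project(100.0, 4, 2.0) == 9.0
# A value already consistent with the stored level passes through:
assert project(8.2, 4, 2.0) == 8.2
```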
NovaZone
2026-02-27 09:43:52 I suppose but the goal right, is to de-jpeg, not transform the img too much
2026-02-27 09:44:54 Similar to what competent de-h.264 gan models do for video
2026-02-27 09:45:51 Latent diffusion is also heavily limited by the vae
TheBigBadBoy - 𝙸𝚛
2026-02-27 09:46:19 it is really good for digital content, and the above one is a perfect example of where waifu2x excels
NovaZone
2026-02-27 09:46:29 Gans have their own limits and flaws ofc but are at least very faithful
TheBigBadBoy - 𝙸𝚛
2026-02-27 09:47:05 also, I don't understand why "oldness" is used as an argument not to use a piece of software
2026-02-27 09:47:35 and jpeg2png's last commit is 11 years ago <:KekDog:805390049033191445>
NovaZone
2026-02-27 09:47:41 Old means it can't take advantage of newer better techniques
2026-02-27 09:48:02 That's the exception 🤣
2026-02-27 09:48:28 There's wizardry going on with jpeg2png/qs
2026-02-27 09:49:14 In w2x's case it lacks a transformer
2026-02-27 09:49:41 Specifically a derivative of the swin transformer
2026-02-27 09:50:02 Tldr: it can't compensate for lack of training to a certain degree
2026-02-27 09:50:17 Causing warping and tons of artifacts
2026-02-27 09:53:32 https://github.com/chaiNNer-org/spandrel?tab=readme-ov-file#single-image-super-resolution we have many many more competent architectures now xD
VcSaJen
2026-02-27 12:34:56 I tried several "alternatives", they all were worse than waifu2x for digital 2d cartoon images. Seems like they are all trained on photographic content. (Also I remember there was some shareware application that didn't use NNs at all, and it still worked very well for cartoon images.) If you have a list of alternatives, I would test them out (normal exe GUI programs, not raw python, CLI, or web-interface).
NovaZone
2026-02-27 12:37:28 https://imgsli.com/NzA3NjI
2026-02-27 12:41:02 But yea really depends what ur trying to do
2026-02-27 12:41:15 In this case 0 jpegness
2026-02-27 12:44:07 Literally chainner for sisr, and there's a few others for video
2026-02-27 12:44:23 Some use trt/onnx
2026-02-27 12:54:32 But if ur primary goal is toon restoration/upscale https://github.com/Sirosky/Upscale-Hub/releases/tag/AniToon
o7William
2026-02-27 01:08:59 avif seems to also work well with flat color, I assume? I can get it as low as 3.08KB while still visually lossless
2026-02-27 01:12:00 Also I kinda noticed ImageGlass shows the color space changed from Grey/- to sRGB/-
AccessViolation_
2026-02-27 01:12:21 yep AVIF has some coding tools that do particularly well for lossy compression of flat color shapes
2026-02-27 01:13:06 the lossless JXL at effort 7 was somewhere between 1 and 2 kilobytes iirc
o7William
2026-02-27 01:14:17 I tried effort 8 and distance 10 and got around 7.46 KB
AccessViolation_
2026-02-27 01:14:23 lossless?
o7William
2026-02-27 01:14:35 Good question
2026-02-27 01:14:50 It's lossy <:kekw:808717074305122316>
username
2026-02-27 01:26:15 anything above distance 0 is lossy
jonnyawsom3
2026-02-27 01:28:55 A minute or two and I got this
2026-02-27 01:29:23 `-d 0 -g 3 -e 9 -P 0`
monad
2026-02-27 08:06:09 certainly that was not the source for the avif
o7William
2026-03-01 11:36:51 2.23GB <:PepeGlasses:878298516965982308>
2026-03-01 11:42:29 the transparent one is damn bright
Orum
2026-03-01 01:35:30 why was that so big to begin with?
AccessViolation_
2026-03-01 02:16:53 oh shoot, there was a larger one?
2026-03-01 02:17:23 the 8000x8000 actually seemed downscaled because edges were sharpened, I'm glad I know there's a 25000x25000 one now
username
2026-03-01 02:18:45 there's also other files for it too as well like a depth map and IIRC a normal map
2026-03-01 02:19:01 uploaded as different entries
ignaloidas
2026-03-01 02:20:13 maybe was encoded without any predictors? it's around 3.5bpp
AccessViolation_
2026-03-01 02:25:45 it's incredibly noisy unlike the downscaled one, that's probably why
2026-03-01 02:27:15 the slight macroblocking effect when you zoom in on that screenshot seems to be due to how XnView is rendering it, it doesn't appear like that in the image
2026-03-01 02:36:05 early results
2026-03-01 02:36:20 interestingly, effort 2 was a lot faster than effort 1
2026-03-01 02:37:01 probably in part because effort 1 used so much memory that memory compression kicked in, whereas for effort 2 that didn't happen
o7William
2026-03-01 02:37:34 Bouta run out of space before all tests finish 😂
2026-03-01 02:38:01 Also do you see the transparent one
RaveSteel
2026-03-01 02:38:15 e7 with fd4 allows for better density and even loads faster than the source PNG
AccessViolation_
2026-03-01 02:38:48 fd4?
2026-03-01 02:38:51 ah, faster decoding
o7William
2026-03-01 02:38:56 I might be imagining things, but it seems to have higher color depth
AccessViolation_
2026-03-01 02:39:11 oh interesting
2026-03-01 02:40:27 it looks brighter too, I wonder if that's because of how PNG makes assumptions about associated alpha/premultiplication and I know rendering software sometimes gets fucky with that
2026-03-01 02:41:15 I wish I had more RAM
RaveSteel
2026-03-01 02:41:25 use swapfiles <:galaxybrain:821831336372338729>
AccessViolation_
2026-03-01 02:41:35 I use memory compression which is a lot faster
2026-03-01 02:41:46 that's the yellow line
RaveSteel
2026-03-01 02:41:51 yeah, but it can't make use of storage media
AccessViolation_
2026-03-01 02:43:05 I think memory compression is backed by a final swap file on disk, but I haven't verified that. I use whatever the default setup is for Pop!_OS and they were pretty meticulous about tuning the swap behavior so I trust they did it right
RaveSteel
2026-03-01 02:43:48 zram is not backed by storage, it's in-memory-compression
AccessViolation_
2026-03-01 02:43:49 I have 20 GB "swap", which I'm pretty sure is 16 GB (uncompressed) worth of compressed memory, which usually takes up about 1/5th of the space, and a final 4 GB swap file on disk
2026-03-01 02:44:05 yeah I know, but once zram is full it starts using disk swap
RaveSteel
2026-03-01 02:44:12 ah, that's what you meant
_wb_
2026-03-01 02:45:04 Assuming 16-bit is not really needed for this image, you can gain quite a lot by just using a lower precision, for example:
```
$ cjxl 20241111151449\!Mandelbox_mit_farbigem_Nebel_und_Licht_20241111_\(color\).png -d 0 -e 1 20241111151449\!Mandelbox_mit_farbigem_Nebel_und_Licht_20241111_\(color\).png.jxl8 --override_bitdepth 10
JPEG XL encoder v0.11.1 0.11.1 [NEON_BF16,NEON]
Encoding [Modular, lossless, effort: 1]
Compressed to 770763.7 kB (9.866 bpp).
25000 x 25000, 110.078 MP/s [110.08, 110.08], , 1 reps, 12 threads.
$ cjxl 20241111151449\!Mandelbox_mit_farbigem_Nebel_und_Licht_20241111_\(color\).png -d 0 -e 1 20241111151449\!Mandelbox_mit_farbigem_Nebel_und_Licht_20241111_\(color\).png.jxl8 --override_bitdepth 8
JPEG XL encoder v0.11.1 0.11.1 [NEON_BF16,NEON]
Encoding [Modular, lossless, effort: 1]
Compressed to 448212.0 kB (5.737 bpp).
25000 x 25000, 145.433 MP/s [145.43, 145.43], , 1 reps, 12 threads.
```
RaveSteel
2026-03-01 02:45:07 I went overboard with testing large JXLs, so my setup is like this for swapfiles. top one is zram
jonnyawsom3
2026-03-01 02:46:09 Shouldn't Palette already reduce the effective bitdepth?
_wb_
2026-03-01 02:46:33 since it's a generated image, it has noise in all bits of that 16-bit image so Palette cannot do anything here
AccessViolation_
2026-03-01 02:48:12 (cleaned up a bit for clarity) ``` $ cjxl big.png -d 0 -e 1 big.png.jxl8 --override_bitdepth 10 JPEG XL encoder v0.11.1 0.11.1 [NEON_BF16,NEON] Encoding [Modular, lossless, effort: 1] Compressed to 770763.7 kB (9.866 bpp). 25000 x 25000, 110.078 MP/s [110.08, 110.08], , 1 reps, 12 threads. $ cjxl big.png -d 0 -e 1 big.png.jxl8 --override_bitdepth 8 JPEG XL encoder v0.11.1 0.11.1 [NEON_BF16,NEON] Encoding [Modular, lossless, effort: 1] Compressed to 448212.0 kB (5.737 bpp). 25000 x 25000, 145.433 MP/s [145.43, 145.43], , 1 reps, 12 threads. ```
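[editor's note: the bpp figures in these cjxl logs are just compressed size over pixel count; a quick check of the numbers above, assuming cjxl's kB means 1000 bytes:]

```python
def bpp(size_kb, width, height):
    # bits per pixel = total compressed bits / pixel count
    return size_kb * 1000 * 8 / (width * height)

assert abs(bpp(770763.7, 25000, 25000) - 9.866) < 0.001  # 10-bit override
assert abs(bpp(448212.0, 25000, 25000) - 5.737) < 0.001  # 8-bit override
```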
jonnyawsom3
2026-03-01 02:48:58 Ah right, so the image *is* 16bit, but most isn't useful information
AccessViolation_
2026-03-01 02:49:03 I didn't expect the 25000x25000 to be 16 bit as well
_wb_
2026-03-01 02:49:34 I guess 16-bit is useful if you want to do further editing, like adjusting brightness/contrast
2026-03-01 02:49:46 but for delivery of SDR images, 8-10 bits is enough.
jonnyawsom3
2026-03-01 02:49:54 That definitely doesn't sound right... Effort 1 uses 16-bit buffers instead of 32-bit float, so memory should be lower if anything
username
2026-03-01 02:50:29 I feel like this would also be an interesting image to test VarDCT lossy on, especially since with the recent buffering changes to libjxl you can now get a proper progressive group ordering without having to disable chunked encoding
AccessViolation_
2026-03-01 02:50:54 yeah I'm not sure why either, maybe it's because effort 2 wrote chunks back to the file while effort 1 kept them in memory or something?
2026-03-01 02:51:05 I was surprised too
2026-03-01 02:52:05 on 0.11.1
jonnyawsom3
2026-03-01 02:52:18 Hmm
2026-03-01 02:56:01 I don't think it would be smaller, but if you want the fastest loading you could try `-d 0 -g 0 -P 0 -I 0 -e 6` (could go higher, but will already be very slow to encode)
RaveSteel
2026-03-01 02:56:32 I did encode it, was decently fast at 50MP/s and ~400MiB smaller
2026-03-01 02:57:33 and loads in ~2-3 seconds compared to 14-15sec of the original in nomacs
jonnyawsom3
2026-03-01 02:57:45 The 0000 command?
RaveSteel
2026-03-01 02:58:04 just `-e 7 -d 0 --faster_decoding 4`
2026-03-01 02:58:25 libjxl from main
2026-03-01 02:59:16 ``` JPEG XL encoder v0.12.0 98f8988c [_AVX3_ZEN4_,AVX3_DL,AVX2,SSE4,SSE2] {GNU 15.2.1} Encoding [Modular, lossless, effort: 7] Compressed to 1947411.8 kB (24.927 bpp). 25000 x 25000, 49.373 MP/s, 32 threads. ```
AccessViolation_
2026-03-01 02:59:48 I'm interested in creating a 16-bit 8000x8000 one from the 25000x25000 one, with a downscaling method that doesn't introduce sharpened edges, to give the predictors a good shot
RaveSteel
2026-03-01 03:00:13 try integer scaling <:galaxybrain:821831336372338729> /s
jonnyawsom3
2026-03-01 03:01:26 I meant I doubt that command would be smaller, but should decode in less than a second (maybe)
AccessViolation_
2026-03-01 03:01:31 really the author should've let the renderer run for a couple more months so we'd have a nice and smooth image
RaveSteel
2026-03-01 03:01:40 let me try, one sec
AccessViolation_
2026-03-01 03:02:31 though I'm wondering if they ran into precision issues in the renderer itself somewhere, because weirdly this image seems to have posterization on a coarse level *despite* the image being very noisy and high bit depth on a fine level...
2026-03-01 03:02:47 smells of floating point limits to me...
_wb_
2026-03-01 03:03:53 `convert -scale 25%` produces a nice 6250x6250 image with just averaging of 4x4 pixels
AccessViolation_
2026-03-01 03:04:16 ooo nice, I'll use that
2026-03-01 03:05:53 `convert-im6.q16: cache resources exhausted` we're gonna need a bigger boat
2026-03-01 03:06:25 actually, I remember it saying that when it doesn't read the input properly too
_wb_
2026-03-01 03:06:31 still there's a lot of entropy remaining in those least significant bits of the 16-bit version
2026-03-01 03:06:52 ```
bash-3.2$ cjxl 20241111151449\!Mandelbox_mit_farbigem_Nebel_und_Licht_20241111_\(color\).png.png -d 0 20241111151449\!Mandelbox_mit_farbigem_Nebel_und_Licht_20241111_\(color\).png.png.jxl
JPEG XL encoder v0.11.1 0.11.1 [NEON_BF16,NEON]
Encoding [Modular, lossless, effort: 7]
Compressed to 97335.3 kB (19.934 bpp).
6250 x 6250, 3.773 MP/s [3.77, 3.77], , 1 reps, 12 threads.
bash-3.2$ cjxl 20241111151449\!Mandelbox_mit_farbigem_Nebel_und_Licht_20241111_\(color\).png.png -d 0 20241111151449\!Mandelbox_mit_farbigem_Nebel_und_Licht_20241111_\(color\).png.png.jxl --override_bitdepth 10
JPEG XL encoder v0.11.1 0.11.1 [NEON_BF16,NEON]
Encoding [Modular, lossless, effort: 7]
Compressed to 27917.8 kB (5.718 bpp).
6250 x 6250, 7.522 MP/s [7.52, 7.52], , 1 reps, 12 threads.
bash-3.2$ cjxl 20241111151449\!Mandelbox_mit_farbigem_Nebel_und_Licht_20241111_\(color\).png.png -d 0 20241111151449\!Mandelbox_mit_farbigem_Nebel_und_Licht_20241111_\(color\).png.png.jxl --override_bitdepth 8
JPEG XL encoder v0.11.1 0.11.1 [NEON_BF16,NEON]
Encoding [Modular, lossless, effort: 7]
Compressed to 14440.7 kB (2.957 bpp).
6250 x 6250, 11.923 MP/s [11.92, 11.92], , 1 reps, 12 threads.
```
RaveSteel
2026-03-01 03:08:46 also loads in 2-3 seconds but is 2.5GiB instead of the 1.8GiB with e7 and fd4
_wb_
2026-03-01 03:09:52 the 8-bit-per-sample version nicely compresses to 1 bit per sample (2.957 bpp), while the 10-bit version takes almost 2 bits per sample and the 16-bit version more than 6 bits per sample.
2026-03-01 03:11:05 So the 8 MSBs can be compressed well, but the 8 LSBs are pretty noisy even after downscaling 4x
jonnyawsom3
2026-03-01 03:13:06 Huh, interesting
AccessViolation_
2026-03-01 03:21:46 I cannot get the convert command to work, any chance you could get me that through some file sharing service?
_wb_
2026-03-01 03:22:59 Not atm, on my phone now
AccessViolation_
2026-03-01 03:23:17 alright, no worries
RaveSteel
2026-03-01 03:26:32 https://files.catbox.moe/jk8fz0.png
AccessViolation_
2026-03-01 03:27:15 thanks :3
RaveSteel
2026-03-01 03:27:21 for some reason scale and resize with 25% got me a 2000x2000 image, no idea how they calculate 25% from 25000
2026-03-01 03:27:30 so had to specify the 6250 manually
2026-03-01 03:28:33 oh wait
2026-03-01 03:28:41 I had the wrong image
2026-03-01 03:28:56 <@384009621519597581> one sec lmao, I'll get you the right one
2026-03-01 03:34:19 Here's the correct one https://files.catbox.moe/8jsozz.png
AccessViolation_
2026-03-01 03:36:05 yep that file size seems much more realistic :p
username
2026-03-01 03:40:08 anyone wanna try putting the depth channel to use? https://commons.wikimedia.org/wiki/File:Mandelbox_mit_farbigem_Nebel_und_Licht_20241111_(zbuffer).png?useskin=vector
2026-03-01 03:40:41 tbh I don't even remember how you would put this in with cjxl or even if you can
AccessViolation_
2026-03-01 03:40:51 I don't think you can lol
RaveSteel
2026-03-01 03:40:57 you could probably use this as a gainmap instead
_wb_
2026-03-01 03:43:58 You can, but you need to give input as a generalized PAM file, and there is nothing other than cjxl/djxl that can handle those
AccessViolation_
2026-03-01 03:48:11 you could cheat by using a PNG source and setting the depth buffer as the alpha layer. that's just for if you want to test how it would compress. it's semantically wrong, of course
2026-03-01 03:49:25 make sure it doesn't disregard color information it deems invisible due to the alpha value (I think by default it doesn't do this for lossless so you should be good)
username
2026-03-01 03:52:51 unrelated'ish but I thought of a probably bad idea: lossy depth (would have to be lossy modular because it's an extra channel)
AccessViolation_
2026-03-01 03:57:24 tbh, not having vardct for extra channels is a little unfortunate
2026-03-01 03:57:52 probably not too big of a deal, since extra channels will probably be most useful for scientific purposes and those will probably tend to prefer lossless regardless
2026-03-01 03:58:38 still, I don't think it would have hurt
2026-03-01 03:59:04 unless not having it is overall better in ways I don't realize
username
2026-03-01 03:59:59 IMO something interesting is that pik (what VarDCT is based on) supports alpha so I wonder if it was DCT based back then?
2026-03-01 04:00:51 probably complicates the spec/format a bunch having it or something
2026-03-01 04:01:12 I think there might have been a convo in here a few years back about it?
AccessViolation_
2026-03-01 04:01:43 that's what I'm guessing. you can say "but why not also this" about every limitation the format has, and while in isolation every one of those would be good to have, in total it becomes a kitchen sink mess, is what I'm thinking
username
2026-03-01 04:05:30
AccessViolation_
2026-03-01 04:05:34 actually, I wonder if it has to do with block selection, because I know at least the block and XYB channels are unfortunately encoded 'channel per block' rather than 'block per channel'. in other words, every color channel is subject to the same block selection
2026-03-01 04:05:44 hell yeah nailed it
username
2026-03-01 04:06:40
2026-03-01 04:06:41 here's also an older message saying a similar thing
AccessViolation_
2026-03-01 04:07:20 a less fun fact is that the decision to go 'channels per block' rather than 'blocks per channel' was done for performance at the cost of compression efficiency, while after the fact it turned out it didn't improve performance at all, so now it's just there, limiting compression efficiency 😔
2026-03-01 04:08:54 but you know, hindsight...
username
2026-03-01 04:09:41 oh yeah I heard/saw this in here a bit ago. guess it's just another thing to add to the imaginary hypothetical "JPEG XXL"
AccessViolation_
2026-03-01 04:10:20 JPEG XXL will be so much better than AV36 <:Stonks:806137886726553651>
2026-03-01 04:12:14 I guess a question for another channel would be: when? because it's intended to be a long-term format, but as we invent new methods and create new needs, *eventually* we're going to want to move on
2026-03-01 04:12:33 that's gotta be on the order of dozens of years, I wager
_wb_
2026-03-01 04:13:18 Lossless pik was used for alpha. Hardcoding the DCT mode for 3 channels allows a somewhat more efficient implementation and also allows signaling things like adaptive quantization only once per block instead of having extra signaling cost per component.
2026-03-01 04:17:11 I don't think compression improvements alone can still justify a new codec after jxl, the gains just won't be worth the hassle. New functionality would be needed. No idea what that would be though.
AccessViolation_
2026-03-01 04:23:26 Yeah that's fair. Also, the JPEG committee tends to identify new functionality needs and dedicate a custom little format to each of them, and given that JPEG XL is general purpose, a need would have to be very widespread and very important to justify inclusion in a general purpose format
2026-03-01 04:25:49 btw, my effort 10 attempt has been going for at least 40 minutes at this point :)
RaveSteel
2026-03-01 04:26:00 nice
AccessViolation_
2026-03-01 04:27:06 diminishing returns so far, but maybe trying all predictors will make a bit of a dent
2026-03-01 04:30:20 are you sure this is downscaled the right way? these edges look suspiciously artificially sharpened
RaveSteel
2026-03-01 04:30:48 I think Imagemagick defaults to bicubic? but no idea
AccessViolation_
2026-03-01 04:33:00 ah, well, at least it's a lot better than the one provided by the author which is a lot more aggressive
jonnyawsom3
2026-03-01 07:01:54 There's the hacked simple-lossless binary we made that encodes PNG Alpha as Depth
2026-03-01 07:05:54 Lossy Alpha works fine, so depth should too
username
2026-03-01 07:54:03 too few options/control compared to full libjxl; I wanted to make a VarDCT image. I guess simple-lossless would work if I just wanted a simple way to make an image with a depth channel and didn't care about much else. Also, I'm not 100% sure, but I think there *might* be a small error with that patch, although I haven't investigated.
jonnyawsom3
2026-03-01 07:54:42 You can use the JXL as input for cjxl
username
2026-03-01 07:55:19 oh damn i didn't think of that lol
2026-03-01 07:55:53 guess I will do that since I kinda don't want to try and interact with stuff like PAM
jonnyawsom3
2026-03-01 07:57:12 Scrolling past that gave me a thought... Does dithered scaling exist? Averaging the pixels almost certainly doesn't land on a full number, so dithering could be applied. But it might end up worse than a consistent but less accurate result...
_wb_
2026-03-02 07:12:26 if you rescale in 16-bit precision, then you can apply dithering when converting to a lower bit depth
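[editor's note: a minimal sketch of that 16-bit-to-8-bit conversion with randomized dithering, illustrative only (real pipelines would more likely use error diffusion or ordered dithering). The ideal mapping is v/257, and rounding up with probability equal to the fractional part preserves the 16-bit level on average:]

```python
import random

def dither_16_to_8(samples, seed=0):
    rng = random.Random(seed)
    out = []
    for v in samples:
        exact = v / 257          # 65535 / 257 == 255 exactly
        base = int(exact)
        frac = exact - base
        # Round up with probability equal to the fractional part, so the
        # *average* of many dithered samples matches the 16-bit value.
        out.append(min(255, base + (rng.random() < frac)))
    return out

# Exact multiples of 257 have no fractional part and stay deterministic:
assert dither_16_to_8([0, 32896, 65535]) == [0, 128, 255]
```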
AccessViolation_
2026-03-02 10:56:38 your wonderfully small files are now live :D https://slabserver.org/
jonnyawsom3
2026-03-02 11:28:17 At some point we should try expanding the predictor sets to include low color and progressive optimized choices
o7William
2026-03-02 08:31:53 I guess it can be as large as you want
2026-03-02 08:32:15 The image can be reproduced
2026-03-02 08:33:07 It seems to be one of the sample files from a software called Mandelbulber
2026-03-02 08:33:58 And the parameters they used for the image are also on the wiki commons
jonnyawsom3
2026-03-03 05:29:51 Might be due to the old v0.9 quality regression, but in a quick test EPF 3 seemed to improve every image I tried it on compared to the default level of 1 at distance 1
AccessViolation_
2026-03-04 10:18:37
2026-03-04 10:18:38 is this normal? aren't the upsampling modes available in cjxl baked into the format, requiring like a byte to signal?
2026-03-04 10:19:28
2026-03-04 10:20:28
jonnyawsom3
2026-03-04 10:22:52 Only non-separable is default
ignaloidas
2026-03-04 10:23:14 It uses an upsampling matrix, so there's a default one which you can override with whatever you want, but it's a bunch of f16 matrices
jonnyawsom3
2026-03-04 10:23:27 Though that does seem like a very large jump, normally it's only around 100 bytes for NN
ignaloidas
2026-03-04 10:24:04 if it's at least an 8x upsample I don't see how it can be less than 400 bytes higher
AccessViolation_
2026-03-04 10:24:23 damn, nearest neighbor should've been baked in, how else will I brag about my tiny pixel art's small file sizes
RaveSteel
2026-03-04 10:27:20 have this 4x4 image I stole from a JXL thread somewhere
AccessViolation_
2026-03-04 10:27:31 even a direct encoding of the 8x8 pixels per pixel image is smaller than the image with 1/64 the pixels but upsampled...
jonnyawsom3
2026-03-04 10:30:37 30, 80 and 300 bytes of overhead for 2, 4 and 8x NN
2026-03-04 10:30:57 So I was probably thinking of 4x when I last tested it
ignaloidas
2026-03-04 10:47:38 I don't see how it would be only 300 bytes, it's 210 F16 values?
2026-03-04 10:50:30 (I guess you're adding it up? 300+80+30 = 410 bytes, which is closer)
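[editor's note: the 15/55/210 weight counts discussed here match, to my recollection, libjxl's CustomTransformData fields for custom 2x/4x/8x upsampling; treat that mapping as an assumption. They fall out of folding the kernel's symmetry into a triangular count:]

```python
def n_weights(factor):
    # Symmetry folds each kernel down to a triangular number of
    # n = 5, 10, 20 distinct values for 2x, 4x, 8x upsampling.
    n = 5 * factor // 2
    return n * (n + 1) // 2

assert [n_weights(f) for f in (2, 4, 8)] == [15, 55, 210]
# At two bytes per f16 weight, the 8x table alone is 420 bytes of raw
# payload, before any header/entropy-coding overhead.
assert 2 * n_weights(8) == 420
```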
jonnyawsom3
2026-03-04 10:52:19 Right yeah, it's additive. Sorry, getting late here xP
AccessViolation_
2026-03-04 10:54:50 noting this down for JPEG XXL
monad
2026-03-05 04:08:34 I have previously convinced myself that resampling is optimal for highest density in general. I would like to produce a new document carefully clarifying this.
jonnyawsom3
2026-03-05 11:00:55 I'm not sure if it's what you mean, but I did enable automatic 2x resampling at distance 10 or lower, since it usually gave a cleaner image at a smaller or equal size. Unfortunately 4x was just too blurry for distance 25, etc
monad
2026-03-05 02:19:50 No, referring to AccessViolation_'s use case just above. I guess [this](https://discord.com/channels/794206087879852103/803645746661425173/1240228597005025331) is the best surviving document. The tried cjxl commands included jixel's general default, so any density gains came from the resampling alternative. You might also ask questions like: which scale is densest? which options are broadly Pareto optimal? is there distinguishing behavior between consumption and production? what are the densest commands tailored to consumed resampled images vs untouched images specifically (noting resampled images may not end up 1:1)?
jonnyawsom3
2026-03-05 02:48:51 A random thought, but it's worth pointing out lossless JXL only uses 'global' group decisions (not Global MA). It would be possible to change the available predictor set per group, or use LZ77 only if a block is very repetitive/single color, etc. The same goes for per-channel settings, like encoding Alpha with optimal parameters compared to RGB, etc
2026-03-06 02:45:30 Was messing around with lossy modular, seeing if I can give it a quality bump, and I found one of the most obvious examples of the B quantization yet. It's turning green into teal because of all the extra blue caused by the precision loss. Lowering the B quantization brings back the green again
ignaloidas
2026-03-06 02:47:59 I wonder how well would "barely lossy modular" work, where you still convert to XYB but then do no (or almost no) quantization
2026-03-06 02:49:01 for cases when you don't care about being bit-perfect, but don't want to lose precision anyways
Orum
2026-03-06 02:49:16 oh wow, v0.11.2 came out recently
jonnyawsom3
2026-03-06 02:49:19 XYB is inherently lossy, but you'd essentially be encoding a progressive lossless image without actually being lossless if you used no quantization
Orum
2026-03-06 02:49:41 I assume this *finally* has the fix for the lossless `-e 1`?
jonnyawsom3
2026-03-06 02:49:45 Nope
Orum
2026-03-06 02:49:51 what?!
2026-03-06 02:50:11 why was that not included?
jonnyawsom3
2026-03-06 02:50:11 https://discord.com/channels/794206087879852103/804324493420920833/1470736585388724379
ignaloidas
2026-03-06 02:51:49 I mean it's only as lossy as float computations are
jonnyawsom3
2026-03-06 02:52:35 Also <https://github.com/libjxl/libjxl/pull/4027#issuecomment-2854185744>
Orum
2026-03-06 02:52:45 well, my benchmarking will wait for 0.12.0 then
jonnyawsom3
2026-03-06 02:53:38 https://discord.com/channels/794206087879852103/806898911091753051/1270048431737540669
ignaloidas
2026-03-06 02:55:46 I'm a bit surprised it has that much of an effect tbh
jonnyawsom3
2026-03-06 02:56:31 Near-lossless was the main point of lossy modular, distances below 0.1 or 0.05 roughly (0.05 is the limit for VarDCT currently too)
2026-03-06 02:57:09 But it also does pretty well on non-photo content, and if I can get this working, it might actually be competitive with/beat VarDCT at distance 1... For better or worse
2026-03-06 08:38:20 Initial testing, lossy modular in my branch is both smaller and slightly higher quality than VarDCT at distance 1. Only did 10 minutes of tweaking, so plenty more testing to do though
AccessViolation_
2026-03-06 08:42:16 woah
2026-03-06 08:42:22 is that for non-photo?
monad
2026-03-06 08:42:56 it must be
2026-03-06 10:13:51 I tried two images with two target sizes each between latest modular, vardct and resaturation modular. One image was clear win for latest, other image was convincing win for resaturation (i.e. important improvements, but some tradeoff).
o7William
2026-03-09 05:28:29 https://litter.catbox.moe/c2doj5kofl4vvbwb.zip
2026-03-09 05:29:12 converting DNG to jxl causes incorrect color and also creates artifacts at edges
2026-03-09 05:29:36 is this an ffmpeg-only issue?
2026-03-09 05:30:09 and somehow this image also turned 90 degrees counter-clockwise
2026-03-09 05:30:43
jonnyawsom3
2026-03-09 05:42:04 I didn't even know FFMPEG supported DNG input
JaitinPrakash
2026-03-09 05:42:39 It's most likely that ffmpeg isn't reading any metadata that might be present in the dng. And it's almost always recommended to directly use a proper raw development program.
jonnyawsom3
2026-03-09 05:43:33 Could give this a shot, but support might be spotty https://tinydng.com/
2026-03-09 05:43:48 Has a lossless option in the settings
_wb_
2026-03-09 07:08:39 DNG is not just an image format, it's also a syntax for a chain of image processing operations that is to be applied to it.
Quackdoc
2026-03-09 01:39:25 I wonder if oiiotool would convert it properly
2026-03-09 01:40:01 iirc oiio uses libraw
AccessViolation_
2026-03-16 09:49:05 I'm drive-by dropping this website that has some images that make artistic use of a low color palette, if anyone wants some easy images for benchmarking palette compression https://nchrs.xyz/
CrushedAsian255
2026-03-24 11:04:21 dithering?
jonnyawsom3
2026-03-29 06:55:33 Just did a bit of testing with the desaturation again. Subtracting all blue from the image definitely fixes the yellows, some pixels had the blue increased all the way up to 42 from 0 in the source image