JPEG XL

coverage

Post links to articles, blog posts, reddit / hackernews / forum posts, media coverage about or related to JXL here!

Lumen
2025-12-10 08:33:26 Bots are a true obstacle to our digital society. Yet we are only creating more tools to amplify the problem
juliobbv
2025-12-10 08:33:28 you ban one account, three more pop up like a fucking hydra
2025-12-10 08:33:46 yup
2025-12-10 08:34:12 it's too easy to automate account creation
Quackdoc
2025-12-10 01:42:13 so far having a blackhole role has helped us with the av1 community server. some get through but it catches most bots
0xC0000054
2025-12-10 02:19:06 The article that video linked to was also an interesting read. I have seen the aftermath of bot bans, but fortunately not any of the posts from those bots.
jonnyawsom3
2025-12-17 07:01:06 https://pdfa.org/pdf-errata-makeover/
AccessViolation_
2025-12-17 07:03:02 > Chrome’s U-turn on JPEG-XL <:CatBlobPolice:805388337862279198>
2025-12-17 07:53:02 > showing the website https://jpeg-xl.info they even hyphenated the community website URL...
Jarek Duda
2025-12-21 02:15:52 https://www.chipestimate.com/CAST-Introduces-JPEG-XL-Encoder-IP-Core-for-High-Quality-On-Camera-Still-Image-Compression/CAST/news/59275
username
2025-12-21 02:24:16 there was a little bit of discussion about this a few weeks ago. link for anyone curious: https://discord.com/channels/794206087879852103/803574970180829194/1446808996399943812
jonnyawsom3
2025-12-23 10:58:33 https://www.januschka.com/chromium-jxl-resurrection.html
2025-12-23 11:00:09 I'd argue AVIF and WebP serve Animation while JXL is for photos and a PNG replacement
HCrikki
2025-12-24 03:20:14 video animation, not image animation (jxl seems to excel at converting losslessly from gif with guaranteed smaller size like with jpg). then again, even 2001's divx works better than gif, no surprise
2025-12-24 03:22:12 the way those formats can replace gif is by disregarding the gif entirely and going back to the highest quality original/video and converting that, but it imposes challenges on services (i.e. preserving the original source, maintaining a link between known originals, inputs and outputs).
Meow
2025-12-24 03:27:52 > No browser currently supports JXL animations natively
HCrikki
2025-12-24 03:28:24 waterfox supports jxl animation out of the box for windows and android, no ?
Meow
2025-12-24 03:29:43 "No major browser" would be more accurate
NovaZone
2025-12-24 05:43:11 If only libavif wasn't such a pain to work with for animated sequences
2025-12-24 05:48:23 Legit only 4 types of inputs, y4m ofc being the only one that works for "gifs"
RaveSteel
2025-12-24 05:51:54 Libavif is annoying for decode too
Mine18
2026-01-03 02:52:34 i think avif is generally better for all lossy images
_wb_
2026-01-03 03:13:04 AVIF is best for low-medium quality, JXL is best for medium-high quality. AVIF has great coding tools for hiding the artifacts of losing too much image information, JXL has great coding tools for preserving image information and losslessly compressing it. They are quite complementary, in a way.
Emre
2026-01-04 03:01:23 As an avid user, I also find that the newer versions of AVIF, with its slowest presets and YUV444/10-bit mode, are better for lossy compression than JXL. The new screen content detection mode is especially unmatched for any type of screenshot. JXL obviously has many more features for images, and they are great: lossless/modular, JPEG reconstruction, and progressive decoding are all amazing features. From what I understand, the AVIF developers focused heavily on RDO, while the JXL developers worked on features, compatibility, etc.

Previously, JXL was far superior to AVIF, even for lossy images, and AVIF was only competitive in the low-quality range (although even that was questionable if you disregarded metrics). Now, however, I generally see AVIF beating JXL in any metric and quality range (except lossless), apart from a few specific cases (which might be due to my testing pool), and those can be visually confirmed too.

I think JXL indirectly helped AVIF improve significantly, though. The JXL devs' approach to image/video quality metrics, in particular, has influenced how AVIF is developed (most notably, `tune=iq`) and even how we process videos now. For context, I currently maintain a chunked target-quality video encoding tool, and the available metrics are ssimulacra2, butteraugli5pn and CVVDP, which you recommended to me specifically. Therewithal, we do everything we can to discourage the use of ancient metrics like ssim/psnr/vmaf. So... thanks for everything! This Discord channel alone is a gold mine for open source. It's impossible not to learn something new if you spend a few minutes here.

NOTE: I only considered size/quality efficiency, disregarding encoding/decoding complexity & speed, encoder stability and other features.
Quackdoc
2026-01-04 03:16:23 I find it wins in raw quality:size, but I still find that jxl images decode more power efficiently on my low end devices. hoping I can get some proper power monitoring setup for a large test soon. My test typically contains 20k images downloaded from an image booru for anime images, around 10k images ripped from pixelfed which iirc was mostly landscape photography, and maybe like 2k images ripped from koe booru
2026-01-04 03:16:42 (note, encoding this with avif is so much more painful than jxl LOL)
2026-01-04 03:17:45 I dont remember what the avif command was but the quality was typically around an ssimu2rs of 70-80
NovaZone
2026-01-04 03:18:50 Yep img board datasets will really test ur decode perf
2026-01-04 03:19:40 That said on qview at least avif is ever so slightly faster than jxl
Quackdoc
2026-01-04 03:19:45 oh right, and ofc manga, hajime no ippo for manga and for manwha I used really whatever
NovaZone
2026-01-04 03:20:07 Manwha is evil
2026-01-04 03:20:29 No img viewer or encoder was designed for 1000x10000 kek
Quackdoc
2026-01-04 03:20:32 yeah, especially on PCs, but I found on arm devices jxl generally performs slightly better, at least in perf per watt anyways
2026-01-04 03:20:44 very true lol
NovaZone
2026-01-04 03:21:19 Gotta give the best decode to webp tho
2026-01-04 03:21:38 Well technically jpeg but it really needs to die kek
Quackdoc
2026-01-04 03:22:09 haven't really tested webp in a long time but mozjpeg gave me better results at the time when I did
NovaZone
2026-01-04 03:22:50 Webp is weird, best at monochrome, decent at anything 420, awesome decode perf
whatsurname
2026-01-04 03:22:51 It's quite often that it doesn't fit the 16383 limitation for webtoon
NovaZone
2026-01-04 03:22:58 Beyond that tho it's trash kek
Quackdoc
2026-01-04 03:23:50 snip snip time
NovaZone
2026-01-04 03:24:50 Inb4 I figure out how to train a yolo compvis model for bbox manwha detection kek
2026-01-04 03:25:36 Tldr: turn it into a manga xD
2026-01-04 03:27:26 I digress tho, the current ranking imo for lossy: avif > jxl > webp > jpeg
2026-01-04 03:28:07 For lossless: jxl > webp > png
2026-01-04 03:29:03 And yes I know jpeg has a lossless ad-hoc patch xD
2026-01-04 03:29:16 Won't include cause no1 uses it
2026-01-04 03:30:21 For "gif"/animated sequences: avif > webp > png > jxl
jonnyawsom3
2026-01-04 03:56:17 We had a brief discussion about it in the voice channel a few weeks ago. AVIF has numerous big company sponsors and dozens of developers working on it as part of AOM. JXL has essentially been the same half a dozen devs for the past 3 years, split between other projects/priorities. That, along with some regressions, means it's fallen behind AVIF, at least by default. For most images where AVIF wins, I can find a command line combo to match it, but we just don't have the expertise available to overhaul the heuristics.

Me, Sapien and Username partially proved that. In the past year we started looking at the code, running tests with the CLI, and ended up hugely improving the lossless modes for progressive and faster decoding. Lossy also got some attention: enabling resampling sooner with 10x the encode speed, making progressive images actually progressive again, and making them load at 1% instead of 50%.

The capability for greatness is there; a few coding tools aren't even used yet. So still plenty of room to grow, we just need more help to do it.
2026-01-04 03:58:32 Huh, why PNG over JXL? You said yourself that it's the best lossless, and it has an actual lossy mode
2026-01-04 04:00:20 Then there's me
2026-01-04 04:00:23 https://cdn.discordapp.com/attachments/1283125524692078655/1456527709168603294/image.png?ex=695aaae2&is=69595962&hm=bf0e18f60117fe8c4ad97e1aa278c5b0a23c6e0a6be10cd4eb07be014811953a&
NovaZone
2026-01-04 04:03:10 Decode perf
jonnyawsom3
2026-01-04 04:04:03 Ah, it actually decodes faster than PNG with our faster decoding work
NovaZone
2026-01-04 04:05:00 Yea will retest the gif formats again at some point
2026-01-04 04:06:20 Tho the only thing that will probably change is jxl overtaking apng
2026-01-04 04:06:44 Technically it does in every area already except decode
jonnyawsom3
2026-01-04 04:06:49 I did specifically want to overhaul JXL's low color performance sometime too, for GIF conversions which currently require high effort
NovaZone
2026-01-04 04:07:54 If we're talking str8 from an actual gif, then it's: webp > avif > jxl > png
2026-01-04 04:08:24 Avif loses on that end cause libavif is really annoying for that specific conversion xD
2026-01-04 04:08:53 And ideally u do want to use libavif cause of auto tiles
2026-01-04 04:09:24 Ffmpeg aom can do it as well but my gods is it still slow as
AccessViolation_
2026-01-04 07:38:53 how does screen content detection in AVIF work?
Emre
2026-01-04 07:41:10 https://gitlab.com/AOMediaCodec/SVT-AV1/-/blob/master/Docs/Appendix-Antialiasing-Aware-Screen-Content-Detection-Mode.md
2026-01-04 07:41:30 https://files.catbox.moe/1we9y3.avif
2026-01-04 07:41:35 and check this amazing example
2026-01-04 07:42:36 `3840x32766` resolution under 800kb and it's still readable and looks good overall
AccessViolation_
2026-01-04 07:47:05 is intra-block copy similar to patches in JXL?
2026-01-04 07:54:44 that's pretty good! do you have a lossless source? I'm curious how JXL will do here with patch detection
Emre
2026-01-04 07:55:22 I can find it. I remember testing but there was a drastic difference at same bitrate for JXL with patch
2026-01-04 07:55:41 as in JXL was not readable in any shape or form, at all
AccessViolation_
2026-01-04 07:58:32 patch sources should be encoded lossless, so the majority of letters should look really sharp, but there's the issue of not *all* letters being detected correctly or being patch-worthy, which means that among the lossless letters there will be some that are lossy compressed, which will look really off at low distances
username
2026-01-04 07:59:16 the patch detection currently present in the encoder is a bit under-baked
AccessViolation_
2026-01-04 08:00:54 it's also only pixel perfect matches currently, so TrueType or whatever it's called can cause the same letters to have different sub-pixel positions, leading to becoming different patches
username
2026-01-04 08:01:51 ClearType
AccessViolation_
2026-01-04 08:03:38 now that I think about it, it probably makes sense to have an OS-integrated screenshot tool disable ClearType, because afaik otherwise the text in the screenshot will only look good on displays with a similar pixel structure
username
2026-01-04 08:04:23 I was literally about to type a message like that lol. ill finish it anyway
2026-01-04 08:04:48 it would be cool if screenshotting tools disabled ClearType when you go to take a screenshot
AccessViolation_
2026-01-04 08:05:05 it'd be great if you could find it!
Emre
2026-01-04 08:05:14 https://0x0.st/Kz70.png
username
2026-01-04 08:08:10 it seems like changing the "Turn on ClearType" checkbox causes all applications rendering such text to almost immediately change so it's strange that afaik no screenshotting tool takes advantage of this
2026-01-04 08:08:20 I guess it's just something people don't think of sadly
Emre
2026-01-04 08:10:59 Another tip: Firefox based browsers let you take lossless screenshots of the whole web pages (Right Click --> Take Screenshot --> Whole page)
2026-01-04 08:12:14 My command for avif/aom was: ```sh avifenc -a tune=iq -d "10" -y "444" -s "0" -q "something between 20 to 30" --cicp "1/13/1" -r "full" \ --ignore-exif --ignore-profile --ignore-icc --ignore-xmp \ --tilerowslog2 0 --tilecolslog2 0 \ "input.png" -o "output.avif" ```
2026-01-04 08:12:52 for reference, disabling tiles for screenshots makes intrabc much more efficient (though it has no effect for normal images)
jonnyawsom3
2026-01-04 08:12:55 How much RAM do you have?
AccessViolation_
2026-01-04 08:13:06 memory compression to the rescue!
jonnyawsom3
2026-01-04 08:13:52 It will take an eternity, but good luck
AccessViolation_
2026-01-04 08:13:57
2026-01-04 08:14:16 JXL did a lot better
jonnyawsom3
2026-01-04 08:14:41 I thought you said you were trying patches?
AccessViolation_
2026-01-04 08:15:27 I just did: effort 7, distance 6
2026-01-04 08:16:01 I thought that included patch detection, but iirc that's disabled for images this large
2026-01-04 08:16:52 actually, these letters look too good to be lossy, they're probably patches?
jonnyawsom3
2026-01-04 08:17:06 https://discord.com/channels/794206087879852103/794206087879852106/1368950237082816563
Emre
2026-01-04 08:18:06 this is the official docs AFAIK. There was one more GitHub gist from Julio (the main author of the feature)
jonnyawsom3
2026-01-04 08:18:11 Patches requires effort 10 for anything above 2048 pixels in width or height
2026-01-04 08:18:41 Or you can manually do `--patches 1`
2026-01-04 08:19:05 But it will take a week to encode and your RAM will implode
2026-01-04 08:21:52 Chromium-based browsers have it in the run command menu as full-page-screenshot or something similar
AccessViolation_
2026-01-04 08:27:41 some comparisons
username
2026-01-04 08:30:02 filesize isn't close enough
AccessViolation_
2026-01-04 08:43:46 oh, right, I'll try to get it closer
2026-01-04 08:57:21
2026-01-04 08:57:29
2026-01-04 09:00:29 I'm surprised AVIF is this blocky, I thought it went all in on deblocking filters?
Emre
2026-01-04 09:04:42 -d 10 is much bigger for me
2026-01-04 09:05:58 and the score is not that great
AccessViolation_
2026-01-04 09:06:25 weird. what if you try effort 7? ``` % cjxl source.png e7-d10.jxl -e 7 -d 10 JPEG XL encoder v0.11.1 0.11.1 [AVX2,SSE4,SSE2] Encoding [VarDCT, d10.000, effort: 7] Compressed to 732.6 kB (0.047 bpp). ```
Emre
2026-01-04 09:06:49 and avif at 642k
2026-01-04 09:07:11 let me check
2026-01-04 09:08:11 still big but not as big
2026-01-04 09:08:35
AccessViolation_
2026-01-04 09:08:45 must be a difference between 0.11.1 and 0.12.0 I guess
Emre
2026-01-04 09:09:05 it can be. This is git upstream build (I've just built it)
2026-01-04 09:11:11
2026-01-04 09:11:13 however check this
_wb_
2026-01-04 09:11:38 IIRC, in avif it's a bit all-or-nothing with deblocking, in the sense that you can only signal it per 64x64 region and it's usually not a good idea to mix it up since then you see macroblocks when one 64x64 gets filtered and the one next to it doesn't. I suppose in the example you look at, the deblocking is disabled. Enabling it would probably remove blockiness in the photo parts but also make the text blurry. Just guessing though.
Emre
2026-01-04 09:12:01 now it matches avif perf, according to ssimu2 at least
2026-01-04 09:12:40 I'll try to match the bitrate and test with cvvdp/butter/ssimu2 and visually
AccessViolation_
2026-01-04 09:13:03 could you check the ssimulacra2 score of this one that matches the size?
2026-01-04 09:13:32 I prefer the JXL though, in terms of representation of both text and images, so the ssimulacra2 score is more of a curiosity
_wb_
2026-01-04 09:14:18 (also don't trust ssimu2 too much, especially on these kind of large mixed-content images which is not really something ssimu2 was tuned for)
AccessViolation_
2026-01-04 09:14:19 JXL does have some issues where it desaturates certain colors which is evident here as well
Emre
2026-01-04 09:15:07 definitely. Actually it fails miserably in general with these types of images or very big images. CVVDP is more reliable for these types of stuff
AccessViolation_
2026-01-04 09:16:31 do any webp enthusiasts want to throw it into the comparison as well :3c
2026-01-04 09:17:11 I don't know if WebP can even represent images this large
Emre
2026-01-04 09:17:14 okay, I matched them exactly
AccessViolation_
2026-01-04 09:19:46 if someone has a VM with a lot of RAM, this would be an interesting test image
jonnyawsom3
2026-01-04 09:19:55 I made it so that distance 10 enables resampling in v0.12
2026-01-04 09:20:29 It varies per image, but on average it was better
AccessViolation_
2026-01-04 09:21:48 I wonder if you could use a heuristic for whether resampling would be good. e.g. get a read of the global image contrast, and resample if the contrast is particularly low
2026-01-04 09:22:32 or rather, more importantly, see it as a heuristic for when resampling would be particularly bad, and don't do it in that case
2026-01-04 09:23:21 just thinking out loud. maybe something to explore in the future
jonnyawsom3
2026-01-04 09:24:18 Apparently Emre did it https://discord.com/channels/794206087879852103/822105409312653333/1457298689629491391
2026-01-04 09:24:32 Though that was with resampling too
2026-01-04 09:25:48 It was actually the other way around. Low quality and high contrast meant it was bitstarved, resampling blurs slightly but avoided the severe artifacts
2026-01-04 09:26:16 Probably throw it under the non-photo detection list again
AccessViolation_
2026-01-04 09:26:39 interesting
jonnyawsom3
2026-01-04 09:27:33 https://github.com/libjxl/libjxl/pull/4147
AccessViolation_
2026-01-04 09:29:17 now I'm wondering, is 2x resampling effectively the same (in terms of compression efficiency) as zeroing all but the top-left quadrant of coefficients? I know that's how you emulate chroma subsampling, so I guess so? then follows a second question, can you signal different quant tables per image section or per block, or something like that? because then you can, effectively, adaptively and locally resample based on the image contents
2026-01-04 09:30:35 I know resampling also effectively doubles the block size in both directions relative to the image content, so you'd also have to replace 8x8 blocks with 16x16 if you wanted to match it I think?
2026-01-04 09:31:17 yeah, if you had a heuristic for using 16x16 blocks with quant tables to emulate resampling, you could do it selectively. if that's possible
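The premise above, that 2x resampling is roughly a hard low-pass in the coefficient domain, can be sanity-checked numerically. A minimal numpy sketch (an editor-supplied illustration under that assumption, using an orthonormal 8x8 DCT-II; it is not libjxl code and ignores JXL's smarter upsampling filter): zero everything outside the top-left 4x4 quadrant of a block's DCT and compare the reconstruction against an actual 2x box resample.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix; rows are cosine basis vectors."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0] /= np.sqrt(2.0)
    return c

N = 8
C = dct_matrix(N)

# A smooth 8x8 block (diagonal ramp), stand-in for photographic content.
x = np.add.outer(np.arange(N), np.arange(N)).astype(float)

# Forward 2D DCT, keep only the top-left 4x4 quadrant, inverse-transform.
# This is a hard low-pass, loosely analogous to what 2x resampling discards.
y = C @ x @ C.T
y_lp = np.zeros_like(y)
y_lp[:4, :4] = y[:4, :4]
x_lp = C.T @ y_lp @ C

# Compare with an actual 2x box-downsample followed by nearest upsample.
down = x.reshape(4, 2, 4, 2).mean(axis=(1, 3))
x_ds = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)

rng = x.max() - x.min()
rmse_lp = np.sqrt(np.mean((x - x_lp) ** 2)) / rng
rmse_ds = np.sqrt(np.mean((x - x_ds) ** 2)) / rng
print(f"relative RMSE, coefficient zeroing: {rmse_lp:.4f}")
print(f"relative RMSE, 2x box resample:     {rmse_ds:.4f}")
```

For smooth content both errors are small and the coefficient-zeroing error is the smaller of the two, which is consistent with the intuition that resampling and aggressive quantization of high frequencies trade away similar information.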
jonnyawsom3
2026-01-04 09:35:10 We're already having a hard enough time changing the global quant table, let alone trying to do per-block stuff
2026-01-04 09:36:14 Larger blocks already quantize more
Emre
2026-01-04 09:36:31 Both images at 642k: ```sh cjxl test.png test_4.jxl -d 14 -e 7 --keep_invisible=0 -x strip=all --progressive_dc=0 --resampling=1 --ec_resampling=1 --brotli_effort=11 --patches=1 avifenc -a tune=iq -d "10" -y "444" -s "0" -q "20" --cicp "1/13/1" -r "full" --ignore-exif --ignore-profile --ignore-icc --ignore-xmp --tilerowslog2 0 --tilecolslog2 0 "test.png" -o "output.avif" ``` ```sh ssimulacra2 test.png test_4.jxl 53.48004335 butteraugli_main test.png test_4.jxl --intensity_target 203 --pnorm 5 16.7195262909 5-norm: 4.888680 ssimulacra2 test.png avif.png 51.83576267 butteraugli_main test.png avif.png --intensity_target 203 --pnorm 5 38.9060020447 5-norm: 8.934184 fcvvdp test.png jxl.png -m hdr_pq 9.6885 fcvvdp test.png avif.png -m hdr_pq 9.6043 ```
2026-01-04 09:36:53 impressive
AccessViolation_
2026-01-04 09:39:47 I know ssimulacra2 likes smoothing a little too much, and the JXL is much smoother because AVIF doesn't do a lot of deblocking in this image. so if you wanted to metagame it, make AVIF deblock a lot, then even though the image will look worse, the score will probably be better than that of the JXL :p
2026-01-04 09:42:36 <@238552565619359744> if I understand correctly, the reason resampling works is moreso because the current tuning/heuristics are suboptimal, right? since you can emulate subsampling with coefficients, I feel like this must be true. and on top of that adaptive quantization can bring back detail where it's needed? talking theoretically of course, I respect all of this is a lot of work to get right
2026-01-04 09:44:29 or am I drawing the wrong conclusion here
2026-01-04 09:46:52 hmm, though I suspect JXL's clever upsampling algorithm is probably better than what you get from emulating subsampling with quant tables
2026-01-04 10:15:16 okay, actually- it's doing patches by default
2026-01-04 10:19:12 ``` % cjxl source.png e7-d7-p0.jxl -e 7 -d 7 --patches=0 Compressed to 4999.1 kB (0.318 bpp). % cjxl source.png e7-d7-p1.jxl -e 7 -d 7 --patches=1 Compressed to 870.6 kB (0.055 bpp). % cjxl source.png e7-d7.jxl -e 7 -d 7 Compressed to 870.6 kB (0.055 bpp). ```
jonnyawsom3
2026-01-04 10:26:32 Huh... Weird
AccessViolation_
2026-01-04 10:28:25 so, really, it's impressive how well AVIF does *without* a dedicated coding tool for patches. without patches, VarDCT gets worse pretty fast, though lossy modular (without any fancy parameters) does alright
2026-01-04 10:31:17 btw, was there a way to reduce memory usage of VarDCT encoding (aside from turning off patches)? I can't remember
2026-01-04 10:44:26 I can encode this image with effort 7, but effort 8 is too much
jonnyawsom3
2026-01-04 10:47:46 Lowering effort probably
AccessViolation_
2026-01-04 10:47:57 ah dang
2026-01-04 10:50:14 well, to be fair, afaik we make no attempt at detecting screen content except that we do patches, so VarDCT tries to encode this screenshot the same way it would a photo
2026-01-04 10:51:46 and in the future we could do frame blending. VarDCT for the photo parts and (lossy) modular for the text
2026-01-04 10:52:08 but even without that obviously patches are a huge win for screenshots, evidently
2026-01-04 10:52:45 they save so many bits that you can spend them on representing those images nicely
jonnyawsom3
2026-01-04 10:53:37 Oh actually, v0.12 has a 30% memory reduction for patches
AccessViolation_
2026-01-04 10:53:47 ooo
2026-01-04 11:44:52 I'll have to look into how patches currently work, because I've potentially thought of a way to make it pretty fast and low memory with potentially just a bit more misses, and it would also give you a continuous curve of encode effort
2026-01-04 11:51:15 every patch candidate will only look for matches against other patch candidates within a certain configurable `distance` from itself. the letter `p` at the start of the page would be a different patch entirely than the letter `p` near the end of the page because of this, so you then do a second pass to deduplicate all patches. now it's still one patch per letter, but as a compromise, there need to be two patches no more than `distance` apart for them to be considered, because a single instance of a patch candidate that has no nearby surrounding duplicates is ignored
2026-01-04 11:58:37 though I have a suspicion that the expensive part of patches is the detection of candidates itself
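The distance-limited matching idea above can be sketched in a few lines. This is a hypothetical toy (editor-supplied, not libjxl's detector): it only checks exact, grid-aligned windows, whereas a real detector would consider arbitrary offsets, but it shows the two-step structure of local matching followed by global dedup.

```python
import numpy as np
from collections import defaultdict

def find_patches(img: np.ndarray, patch: int = 4, distance: int = 64) -> dict:
    """Toy distance-limited patch matcher.

    1. Slide a patch-sized window over the image (grid-aligned for
       simplicity) and key each window by its raw pixel contents.
    2. A window only becomes a patch if an identical window exists
       within `distance` (Manhattan) pixels, so isolated one-off
       windows are ignored.
    3. Identical contents collapse into one patch with many placements,
       which is the global deduplication pass.
    """
    hits = defaultdict(list)  # pixel contents -> list of (y, x) placements
    for y in range(0, img.shape[0] - patch + 1, patch):
        for x in range(0, img.shape[1] - patch + 1, patch):
            hits[img[y:y+patch, x:x+patch].tobytes()].append((y, x))
    patches = {}
    for key, locs in hits.items():
        # require at least two occurrences no more than `distance` apart
        near = any(abs(ya - yb) + abs(xa - xb) <= distance
                   for i, (ya, xa) in enumerate(locs)
                   for (yb, xb) in locs[i + 1:])
        if near:
            patches[key] = locs
    return patches
```

A letter stamped three times is found as one patch with three placements; a block that appears only once is skipped, matching the "no nearby duplicates" compromise described above.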
jonnyawsom3
2026-01-04 11:58:58 There was an old idea to use LZ77 for patch detection
2026-01-04 12:00:31 The detection also isn't very good, it only works on text and misses a lot of it. Tiled content or sprites usually get missed entirely
AccessViolation_
2026-01-04 12:01:17 hmm yeah
2026-01-04 12:04:39 and there's still my idea of hierarchical patches. instead of littering the image with references to letter patches, have fewer references to word patches, and have the frame that contains word patches itself be built up from a frame of letter patches, to reduce the total amount of patches signaled
2026-01-04 12:06:21 assuming LZ77 doesn't already take care of common repeating sequences of multiple patches, which it might
jonnyawsom3
2026-01-04 12:10:49 Isn't the whole point of LZ77 to compress repeating data?
AccessViolation_
2026-01-04 12:11:43 yeah, so depending on how patches are signaled that will catch it
2026-01-04 12:15:58 also: if you can detect the leading (space between lines) of a paragraph of text, maybe based on a few confirmed patches you already have, you can significantly reduce the number of possible patch candidate locations you have to check: from every possible `x` and `y`, to every possible `x` and `y / leading_pixels`. could be good for lower effort compression of screenshots
_wb_
2026-01-04 01:44:21 That's actually a good idea, currently we aren't taking into account that text (at least in screenshots) is nearly always horizontal and often has regular distance between lines.
2026-01-04 01:52:34 If text is monospace (or icons are on a grid, etc), the horizontal distances are also regular. So a good heuristic could be: once you have some confirmed patches, take the gcd of the x diffs and of the y diffs, and then use that to search for more matches at those specific locations. (A similar heuristic could be used to figure out 'missing letters': it often happens that some of the letters in a block of text are not detected by the patch heuristics even though they would also benefit from being encoded with patches.)
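The gcd heuristic above is easy to prototype. A minimal sketch (editor-supplied, hypothetical helper names, assuming at least two distinct x and two distinct y positions among the confirmed patches):

```python
from functools import reduce
from math import gcd

def grid_from_matches(positions: list[tuple[int, int]]) -> tuple[int, int]:
    """Infer a search grid from confirmed patch positions.

    Given (x, y) positions of confirmed patches, the gcd of the
    successive x differences and of the y differences estimates the
    horizontal advance and the leading.
    """
    xs = sorted({x for x, _ in positions})
    ys = sorted({y for _, y in positions})
    dx = reduce(gcd, (b - a for a, b in zip(xs, xs[1:])), 0)
    dy = reduce(gcd, (b - a for a, b in zip(ys, ys[1:])), 0)
    return dx, dy

def grid_candidates(origin: tuple[int, int], dx: int, dy: int,
                    width: int, height: int) -> list[tuple[int, int]]:
    """All grid positions implied by one confirmed patch at `origin`:
    the restricted set of locations to search for more matches
    (and for 'missing letters')."""
    x0, y0 = origin[0] % dx, origin[1] % dy
    return [(x, y) for y in range(y0, height, dy)
                   for x in range(x0, width, dx)]
```

With monospace-like positions such as x in {10, 18, 26, 42} and y in {5, 17, 29}, the inferred grid is (8, 12), and the candidate set shrinks from every pixel to a few dozen grid points.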
AccessViolation_
2026-01-04 03:38:51 oh I like the missing letters idea
2026-01-04 03:44:57 in that screenshot above they're pretty apparent, but mainly in the blue text because they get desaturated :p
A homosapien
2026-01-04 03:59:59 blue getting desaturated is expected considering the default quant tables
AccessViolation_
2026-01-04 04:01:43 there will also be cases like discord screenshots where the leading is all over the place (in absolute terms) because messages themselves are also spaced out. so another way of doing it, that might be more resilient, is doing patch candidate selection from the left to the right of the image (not in scanline order: actually covering the whole left half first), and then just assuming that there might be a match on the exact same y level. and then from there, see if there's any consistency in terms of vertical spacing, additionally? because I haven't thought about how
2026-01-04 04:04:28 oh wait, your approach does account for that, because you're looking at relative distances from candidates rather than absolute positions
juliobbv
2026-01-04 06:17:08 enabling intraBC in AV1 disables deblocking and filtering due to hardware design constraints
2026-01-04 06:18:20 this is one of the main reasons screen detectors are a thing in AV1 encoders -- they're actually not pure screen detectors, but rather want to answer the question: is enabling screen content tools for the content worth it?
_wb_
2026-01-04 06:19:20 Oh, I didn't know there was such a constraint. That's pretty unfortunate for mixed content (screen content that includes photos)...
2026-01-04 06:20:14 Can it be circumvented by encoding two frames instead?
juliobbv
2026-01-04 06:20:31 maybe, actually
2026-01-04 06:20:55 with one (unfiltered) overlay frame + a visible frame that applies filtering on top of the overlay
2026-01-04 06:21:41 I need to talk to the AVIF team to see if this is a valid configuration
AccessViolation_
2026-01-04 06:56:41 could you summarize what property of a block intraBC actually copies or predicts? I found this: <https://github.com/BlueSwordM/SVT-AV1/blob/master/Docs/Appendix-Intra-Block-Copy.md> but I don't have enough base knowledge of AV1 to understand
Adrian The Frog
2026-01-04 07:15:37 it seems that text rendering software is so reliant on legacy code etc that monitor manufacturers are now going back to RGB stripes for oled instead of trying to actually get decent software support for their layouts (the new generations of both qd-oled and woled panels do this). w/ screenshots tho, disabling cleartype is still good because images are usually viewed slightly zoomed in / out, which would break it anyways
AccessViolation_
2026-01-04 07:50:38 by the way, I tried it without patches out of curiosity, and AVIF is definitely best in that case. at distance 25 (the lowest quality you can select in cjxl) neither Modular mode nor VarDCT looks as good, and VarDCT comes in at 2.1 MB and Modular at 1.9 MB
Emre
2026-01-04 07:52:46 yeah, that was probably why I stated that AVIF was better for screenshots. I hadn't tried with patches.
2026-01-04 07:52:58 Without it, JXL is unusable at that bitrate [kekw~1](https://cdn.discordapp.com/emojis/1167690816010059817.webp?size=48&name=kekw%7E1)
AccessViolation_
2026-01-04 07:54:59 yeah you wouldn't want to actually provide these to users. I think modular has some potential here, I'm gonna try throwing more parameters at it, this was just `--modular 1 -e 9 -d 25 --patches=0`
2026-01-04 07:55:49 (though of course patches were designed specifically for this use case, so it's not fair to disable them in a comparison. I'm just curious how it does without it)
2026-01-04 07:57:13 and also if there hadn't been patches, tuning the rest of the coding tools for screenshots would have been way more important and it would have been more competitive, but that's just speculation
2026-01-04 08:19:39 so what does AVIF actually do here with the intrabc?
2026-01-04 08:20:48 I read it copies predictions over, but I'm not sure what that entails
2026-01-04 08:40:10 that's what makes it do so well here as I understand it?
2026-01-04 08:44:45 oh I recognize this transform :p
monad
2026-01-04 10:40:05 That was my initial tile-like patch detection demonstrated mid-2024, except I assumed the median patch size might be the tile size and calculated the grid offset based on the earliest such patches in the image. That particular strategy yielded better results for images with tiles, but not perfect. I also recently realized and [demonstrated](<https://github.com/libjxl/libjxl/issues/4150#issuecomment-3589510155>) that extracting remaining unique features can benefit density.
dogelition
2026-01-04 10:45:22 i think it just copies the entire pixels of the reconstructed blocks? that makes this wording strange though, i'd expect it to say `decoded` instead of `coded` pixels in that case (also there's an extra word in that sentence) from https://arxiv.org/pdf/2008.06091
AccessViolation_
2026-01-04 10:46:51 oh so they *are* like patches?
2026-01-04 10:50:00 yep seems like it if you read further
dogelition
2026-01-04 10:50:15 from my understanding yes in the spec it's a part of `7.11.3.4. Block inter prediction process`, i.e. using a motion vector to pull in pixels from another frame. except it just reads from the current frame
AccessViolation_
2026-01-04 10:52:25 that makes so much more sense
2026-01-04 10:53:25 I was thinking it was something like inheriting certain properties of nearby blocks if they are similar, and I couldn't wrap my head around how that could possibly be *that* efficient (in the context of the massive screenshot we were experimenting with above)
2026-01-04 10:54:26 thanks ^^ and for the paper too, I'll bookmark that
dogelition
2026-01-04 10:55:56 there's also this, not sure if it includes anything the other paper doesn't: https://www.jmvalin.ca/papers/AV1_tools.pdf
AccessViolation_
2026-01-04 11:04:30 I think I might have stumbled upon that one before
Emre
2026-01-05 03:04:43 since AV1 is originally a video codec, the terminology might be different, but from what I understand, it's like patches. IntraBC (intra block copy) is an encoder mode used for keyframes or still images. In this mode, no inter-frame prediction is used, but all intra tools are still available unless explicitly disabled; this mainly includes directional intra prediction, palette mode, transform skipping and intraBC. So with tune=iq (still image) and the screen content mode, you do more aggressive partitioning, transform choices, intra modes and intraBC.

IntraBC is basically: "copy a block (optionally with residuals) from somewhere else in the same frame". Instead of predicting pixels using directional prediction (angles), or DC / smooth modes, the encoder says: "This block already exists earlier in the frame, just copy it." AFAIK, it's similar to LZ77-style back-references: block-based, motion-vector driven, restricted to already-decoded regions of the same frame. The encoder searches previously reconstructed pixels, finds a block that matches, and encodes a motion vector and an IntraBC flag. At decoding time, the decoder copies pixels from that location and applies the residual if present.

This is coming from an enthusiastic user. <@297955493698076672> can explain how it works, with its limitations, advantages and disadvantages within AVIF/aom and the new AV2 spec, in a more technical way; or correct me if needed
2026-01-05 03:22:44 But compared to patches, they're conceptually similar but not architecturally. AFAIK, JXL uses an explicit patch list, while IntraBC uses motion vectors plus a flag. JXL's pipeline is an image-domain copy, but intraBC goes through the inter-prediction engine. IntraBC is retrofitted into a video codec, whereas JXL patches are designed as a first-class image feature. IntraBC cannot do arbitrary shapes, transform-domain copies, copying forward, or copying with rotation/mirroring. And AV1 has many limitations, video being the primary concern, as well as hardware decoder constraints.
juliobbv
2026-01-05 03:28:56 intraBC is very simple: it just copies a region of pixels from an already decoded part of the image onto the desired area (the relative x, y distance is coded), plus it can encode a residual for further refinement... pretty much like how inter prediction mode works in AV1, but the reference is the frame to decode itself vs other frames
2026-01-05 03:29:44 if you open a small screenshot in aomanalyzer, you can get an intuition for how intraBC works
2026-01-05 03:31:13 https://pengbins.github.io/aomanalyzer.io/
2026-01-05 03:33:18 the main disadvantage with intraBC (in AV1) is that enabling the coding tool **disables** filtering and deblocking within the **entire** frame
2026-01-05 03:33:47 this limitation is no longer present in AV2
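The copy-plus-residual step described above can be sketched in a few lines. This is a purely illustrative toy (my own function names, not aom or libjxl code), assuming the displacement always points into already-decoded pixels of the same frame:

```python
def intra_bc_decode(frame, x, y, size, dx, dy, residual=None):
    # Toy IntraBC: copy a size x size block from already-decoded pixels
    # at (x+dx, y+dy) in the SAME frame, then add an optional residual.
    for j in range(size):
        for i in range(size):
            v = frame[y + dy + j][x + dx + i]
            if residual is not None:
                v += residual[j][i]
            frame[y + j][x + i] = v

# A repeated 2x2 pattern: the right block is coded as "copy from dx=-2".
frame = [[1, 2, 0, 0],
         [3, 4, 0, 0]]
intra_bc_decode(frame, x=2, y=0, size=2, dx=-2, dy=0)
print(frame)  # → [[1, 2, 1, 2], [3, 4, 3, 4]]
```

The only data the encoder has to signal per block is the flag, the (dx, dy) vector and the residual, which is why repeated screen content compresses so well.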
monad
2026-01-05 05:43:16 Agreed, this is a poor example to refute JXL (although with a 0.12 release it might become a good example).
Emre
2026-01-05 05:45:21 with patches enabled, JXL is definitely much better
2026-01-05 05:46:50 One complaint would be: Why not enable patches by default if screen content mode is detected?
2026-01-05 05:47:17 Julio's method does pretty well with the detection part. Something like that could have been used.
monad
2026-01-05 05:52:57 patches are default
jonnyawsom3
2026-01-05 05:55:50 Something like that didn't exist, but they kindly suggested we integrate the same screen content detection. As for enabling patches by default, they technically are... There are two ways of encoding: chunked with multithreading, or non-chunked singlethreaded. Patch detection only works with non-chunked, isn't very thorough, is very slow and uses much more memory due to requiring non-chunked encoding
2026-01-05 05:57:02 I'm honestly surprised you successfully encoded such a large image with it enabled
2026-01-05 05:58:20 Also this probably could've moved to <#803645746661425173> a long time ago
Emre
2026-01-05 06:39:09 I basically have 64G memory (my total use is less than 1G as I am on Gentoo and not using any DE) with 9950x. I can check the total consumption from `cjxl`.
jonnyawsom3
2026-01-05 06:44:59 I'd say run time on `cjxl -d 0 -e 9 -P 15` and `cjxl -d 0 -e 10` That should be roughly effort 10 with and without chunked encoding and patches
Emre
2026-01-05 06:46:11 ```sh ❯ cjxl test.png test.jxl -d 14 -e 10 --keep_invisible=0 -x strip=all --progressive_dc=0 --resampling=1 --ec_resampling=1 --brotli_effort=11 --patches=1 JPEG XL encoder v0.12.0 53042ec5 [_AVX3_DL_] {Clang 21.1.8} Encoding [VarDCT, d14.000, effort: 10] Compressed to 792.7 kB (0.050 bpp). 3840 x 32766, 1.599 MP/s, 32 threads. cjxl RAM (highest): 32458.62 MB ```
2026-01-05 07:14:52
jonnyawsom3
2026-01-05 07:42:54 20x slower and 26x more memory... Yeahhh
2026-01-05 08:33:06 Well, that was horrible. Why is it always *that* kind of content getting spambotted here now?
veluca
2026-01-05 08:40:58 quick description of how that works?
Emre
2026-01-05 08:47:38 hard to describe quickly: https://gitlab.com/AOMediaCodec/SVT-AV1/-/blob/master/Docs/Appendix-Antialiasing-Aware-Screen-Content-Detection-Mode.md https://docs.google.com/document/d/1e_Rrb3wLL_Wco9yKArg6RzhUr4vV5_0m309pMXzg5mE/edit?tab=t.0
jonnyawsom3
2026-01-05 08:47:58 The screen content detection or the block copy detection?
Emre
2026-01-05 08:49:17 oh only 5x more memory actually 🤣 but yeah
2026-01-05 08:49:43 This is modular/lossless though not the one we tested before
jonnyawsom3
2026-01-05 08:50:15 Right.... I'm too lost in lossless, even when we're meant to be doing lossy
Emre
2026-01-05 08:50:31 this was pretty fast with patches, effort 10 and resampling, though with 32G peak memory usage
2026-01-05 08:51:36 not even butteraugli scores better with effort 10 though
jonnyawsom3
2026-01-05 08:52:01 That used no resampling, but I'd assume trying 9 with and without patches would show a major speed and memory difference
Emre
2026-01-05 08:52:06 I remember testing efforts and it was pretty much linear on all metrics, up to 10 (but with natural, real life images)
2026-01-05 08:52:34 > That used no resampling Why not? What's the limitation here?
jonnyawsom3
2026-01-05 08:55:09 You set resampling to 1, which is 1/1 of the original resolution
2026-01-05 08:55:49 Here's the source code of the content classification https://github.com/gianni-rosato/photodetect2
Emre
2026-01-05 08:56:52 ```sh --resampling=-1|1|2|4|8 Resampling for color channels, default = -1. -1 = apply resampling only for very low quality. 1 = 1x1 downsampling. 2 = 2x2 downsampling. 4 = 4x4 downsampling. 8 = 8x8 downsampling. ``` Oh, I definitely misread the docs. But I remember using resampling=1 always being more consistent. What's the reason?
jonnyawsom3
2026-01-05 08:57:48 Probably because then patch detection works. Otherwise letters blend into each other
AccessViolation_
2026-01-05 09:58:20 cjxl has another "problem" with screenshots, mainly that it really likes crushing darks, because of the low default intensity target. I've exaggerated the problem by setting an intensity target of 50 nits. you can see how it removes a lot of fidelity in the darker areas while retaining it in the light areas
jonnyawsom3
2026-01-05 09:59:50 Ah right, I was going to bump it up a little... https://github.com/jonnyawsom3/libjxl/commit/8625454b3cbcdff7d515f1d12841f578e7ff5449
AccessViolation_
2026-01-05 10:00:04 this is basically telling JXL something along the lines of "this image will be viewed in low brightness, so don't worry about those darks too much" the intensity target currently is a bit on the low end for modern displays. iirc the default value now is like 250, while it should probably be closer to 500? which is why in dark screenshots, it'll do even worse
2026-01-05 10:00:44 you can easily customize it with `--intensity_target=N`, so it's just a problem with the default values thankfully
jonnyawsom3
2026-01-05 10:00:44 Issue is we don't want it too bright, or it'll blow out on older displays since it affects the color management
AccessViolation_
2026-01-05 10:03:09 does it actually affect the brightness of the pixels? I thought it just aided the compressor by telling it to which extent it's okay to remove detail in darker areas
2026-01-05 10:04:09 here's 50000 nits
2026-01-05 10:05:01 or maybe eye of gnome has broken color management
jonnyawsom3
2026-01-05 10:12:36 SSIMU2 actually scores higher intensity lower because of the wrong brightness ``` 255.jxl 84.15416046 350.jxl 60.78676800```
2026-01-05 10:13:20 Decoding both to PNG 'fixes' it ``` 255.png 83.78361234 350.png 85.79117789 ```
spider-mario
2026-01-05 10:13:27 it encodes them as brighter XYB values
2026-01-05 10:13:40 if an app were to request output in PQ, it would get PQ values for super bright pixels
2026-01-05 10:14:05 it “works” if you decode to SDR because then it renormalises 0-50000 nits to 0-1 in the output
2026-01-05 10:14:17 but the image “really is” 50000 nits
2026-01-05 10:15:50 (note: this is true with SDR input – it tells the encoder that 0-1 corresponds to 0-`intensity_target` nits – but for PQ input, it always interprets the pixels as whatever brightness is implied by the PQ values)
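The renormalisation described above can be shown with a toy numeric sketch (my own simplified linear mapping, not libjxl code; a real pipeline also goes through the transfer function):

```python
def sdr_to_nits(v, intensity_target):
    # SDR sample v in [0, 1] is interpreted as v * intensity_target nits
    # at encode time (toy linear mapping, ignoring the transfer curve).
    return v * intensity_target

def nits_to_sdr(nits, intensity_target):
    # Decoding back to SDR renormalises 0..intensity_target to 0..1.
    return nits / intensity_target

# Encoded with intensity_target=50000, mid-gray becomes very bright...
nits = sdr_to_nits(0.5, 50000)   # 25000 nits in absolute terms
# ...but an SDR decode with the same declared target "fixes" it:
assert nits_to_sdr(nits, 50000) == 0.5
# An HDR consumer requesting absolute (PQ) output would see 25000 nits.
```

This is why the image "works" on SDR output paths yet "really is" 50000 nits to any consumer that asks for absolute values.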
jonnyawsom3
2026-01-05 10:16:10 Honestly, it's kinda funny how all the default values aren't *quite* right. Quant table obliterates the B to the point it affects other colors, intensity target is 255 when 300 has been standard for the past decade. Definitely a lot of work to do, but thankfully JXL is flexible enough to fix it all (hopefully)
AccessViolation_
2026-01-05 10:31:48 I've been experimenting some more and even though JXL has better patches for text, it looks like AVIF is still better for screenshots in certain scenarios, because it's really good at representing sharp edges
jonnyawsom3
2026-01-05 10:34:08 Even with modular mode?
AccessViolation_
2026-01-05 10:36:39 yeah, modular mode at `-d 6` basically visually halves the resolution where there's detail
2026-01-05 10:39:49 though cjxl also has a lot of issues extracting patches in this screenshot of discord
2026-01-05 10:51:01 I imagine DCT8x32 might help with transitions between flat blocks of colors, those aren't currently used. and lines can become splines
2026-01-05 10:52:38 the thing is, though, transitions between these flat blocks should compress pretty well losslessly? so maybe those can be lifted into the patch reference frame just so they're losslessly encoded
2026-01-05 11:04:36 you could also create two frames, one VarDCT and one lossless modular, and push entire regions of the image that are estimated to compress well losslessly to that, and blend the two frames
jonnyawsom3
2026-01-05 11:08:39 https://github.com/libjxl/libjxl/pull/1395
AccessViolation_
2026-01-05 11:57:03 this would be huge. the file size reduction alone is nice, but the fact that it's also lossless instead of lossy should make screenshots look a *lot* better
2026-01-05 12:01:39 related, can you create "empty" (presumably black-looking) DCT blocks that just take up space, but take no or very little bits to encode their content?
2026-01-05 12:07:34 I had the idea to replace entire single-color DCT blocks with patches, because then multiple blocks of the same color (including smaller ones) can become patches that sample the same area in the reference frame, but I don't know how patches work in terms of what happens with the underlying DCT data as it is. it might not compress better, idk
2026-01-05 12:35:37 you could first split the image into 64x64 blocks, deduplicate all of them and keep only blocks that appear `N` times. set up patches for those. then try all remaining 32x32 blocks, and do the same thing, adding them to the reference frame, while also re-using parts of the 64x64 blocks already in it. then do 16x16. then you could do 8x8. then patch detection for letters and symbols, and then DCT encode the rest? in effect this will probably be similar to intra-block-copy from AV1, but faster to encode because you're not allowing patches to move at a pixel level, just block level
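One pass of that grid-aligned dedup idea could look like this (purely illustrative, nothing like libjxl's actual patch detection; names are mine). It hashes aligned fixed-size blocks and keeps the ones whose exact pixels repeat at least `min_count` times as patch candidates:

```python
from collections import defaultdict

def repeated_blocks(img, size, min_count=2):
    # Group aligned size x size blocks by their content; return the
    # positions of every block whose exact pixels occur min_count+ times.
    groups = defaultdict(list)
    h, w = len(img), len(img[0])
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            key = tuple(tuple(row[x:x + size]) for row in img[y:y + size])
            groups[key].append((x, y))
    return {k: v for k, v in groups.items() if len(v) >= min_count}

# Two identical 2x2 tiles -> one patch candidate covering both positions.
img = [[1, 2, 1, 2],
       [3, 4, 3, 4]]
cands = repeated_blocks(img, size=2)
print(cands)  # one group with positions [(0, 0), (2, 0)]
```

Running the same routine at 64, 32, 16 and 8 would give the coarse-to-fine sweep described above; because candidates never move at sub-block granularity, the search stays much cheaper than a full pixel-level motion search.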
2026-01-05 12:37:37 that would immediately catch things like the red line, the yellow-gray intersection, and the border of the member list, which are currently blurred pretty badly when trying to match AVIF file sizes
2026-01-05 12:38:28 the grid is 16x16, for reference
jonnyawsom3
2026-01-05 12:42:25 I was going to suggest just replacing the detection code in that PR with the AV1 code, both are C and function similarly
AccessViolation_
2026-01-05 12:43:16 worth a shot for sure
2026-01-05 07:21:25 this reminds me of how assets in NES games are compressed
2026-01-05 07:22:13 clip from: https://discord.com/channels/794206087879852103/806898911091753051/1457816415661850861
jonnyawsom3
2026-01-08 05:51:26 Hmm, if we left the encoder the same and set intensity target to 1000, but override the metadata to still be 255, would that effectively increase the quality of darks without blowing it out on HDR displays?
2026-01-08 05:53:57 Actually I guess that's how the djxl option used to work
spider-mario
2026-01-08 05:54:17 I think it would cause the image to be blown out on all displays
2026-01-08 05:55:52 the encoded XYB values would still correspond to bright pixels, but now not even the renormalisation to SDR would save them
jonnyawsom3
2026-01-08 05:58:12 Right, back to the drawing board
AccessViolation_
2026-01-09 11:11:01 could add a parameter that exposes a multiplier for the quality decrease per darkness curve maybe?
2026-01-09 11:14:08 `--darkness_preserving=N` where the default value is `1` which changes nothing, higher values preserve more, lower values preserve less?
2026-01-09 11:17:43 which, by the way, I feel like should be a bit higher by default already. presumably the encoder is okay with reducing quality in darks because we'll be less likely to notice, but in almost all cases I do notice that darks seem lower quality than brighter objects. if tuned properly, I assume the quality should look even, by definition? I know this is hard to quantify because of different viewing conditions, but more often than not, I feel I notice dark elements are lower quality than bright elements
NovaZone
2026-01-10 04:07:01 Perfect tester for this would be a candle exposing elements on a dark gradient backdrop
2026-01-10 04:11:28 But yes the dark is still an issue for all codecs xD
2026-01-10 04:11:48 https://www.youtube.com/watch?v=h9j89L8eQQk
2026-01-10 04:13:11 Av1 works around this by boosting the variance, and by adjusting frame-level luma
2026-01-10 04:13:37 Basically just dump more bits into problematic areas kek
2026-01-10 04:18:13 Still not ideal ofc but it's a hard problem
2026-01-10 04:19:03 Especially considering it is incredibly dependent on source
2026-01-10 04:19:49 Hence my recc, that is the "normalized" tester I would use
whatsurname
2026-01-14 03:42:06 https://news.ycombinator.com/item?id=46597927
Meow
2026-01-14 08:49:20 Now this title invents a new spelling: JpegXL
AccessViolation_
2026-01-14 11:52:55 *cracks joints* time to hop on and correct people that are wrong about JPEG XL on the internet
spider-mario
2026-01-14 03:49:34 I believe we’re still missing jpeg_xl
monad
2026-01-14 04:49:24
2026-01-14 04:49:25 both spellings appear in IrfanView context
jonnyawsom3
2026-01-14 05:07:43 My friends went with Jpg-Xl
2026-01-14 05:08:03 Just say JXL... *please*
spider-mario
2026-01-14 05:24:13 I stand corrected (what a weird expression, by the way)
2026-01-14 05:24:31 right now, I guess I sit corrected
AccessViolation_
2026-01-14 05:42:07 this implies you were sitting incorrectly before
2026-01-14 05:42:19 I'm glad you're taking care of your posture
spider-mario
2026-01-14 09:12:55 me when I see “JpegXL”
monad
2026-01-14 09:17:16 JPEG-li is worse
AccessViolation_
2026-01-14 09:20:30 I use the full name, JPEG Lithium
jonnyawsom3
2026-01-14 09:36:24 Seeing JPEG-XL in blog posts/code with a link to the JPEG XL wiki page
Traneptora
2026-01-14 11:59:12 Do I look like I know what a JpegXL is? I just want a...
Quackdoc
2026-01-16 03:46:34 who runs the jpegxl info website again?
monad
2026-01-16 05:27:23 mostly <@594623456415449088> , but it's open to pull requests <https://github.com/jxl-community/jxl-community.github.io>
Quackdoc
2026-01-16 05:56:43 I'm just thinking, showing a website rendering at one megabit per second, which is what Starlink standby uses. probably a good comparison now that it's in Chrome again.
jonnyawsom3
2026-01-16 06:01:22 We do have a progressive demo on the site using Oxide
Quackdoc
2026-01-16 06:03:06 tis not the same
2026-01-16 06:03:26 showing a video of a site side by side is a great demo for visualization since some people just dont get it
jonnyawsom3
2026-01-16 06:09:16 Right now that wouldn't show anything at all. jxl-rs doesn't do progressive decoding yet, libjxl doesn't have progressive DC implemented and Oxide isn't in any browsers AFAIK
Quackdoc
2026-01-16 06:13:22 even the progressiveness in libjxl via firefox is still a good showcase
2026-01-16 06:13:29 well waterfox
monad
2026-01-16 06:16:10 doesn't the jxl-rs integration do some minimal progressive renders? it should show some information sooner than avif at least
jonnyawsom3
2026-01-16 06:17:49 It *barely* beats AVIF in my tests, but that's because the JXL file on Jon's site was smaller too. It can only load completed AC blocks and the SIMD missing probably wasn't helping
2026-01-16 06:19:33 I'll run some new tests soon, including Waterfox with the alternative progressive encoding to see if it's faster
Demiurge
2026-01-16 07:36:20 https://www.phoronix.com/news/JPEG-XL-Returns-Chrome-Chromium
KKT
2026-01-17 08:10:21 How do we get this updated? https://caniuse.com/jpegxl
username
2026-01-17 08:16:52 https://github.com/Fyrd/caniuse/pull/7457/ was just pushed 15 minutes ago, It will probably take a little bit of time to actually make it to the live site
HCrikki
2026-01-17 08:57:16 claiming 'partial support' over progressive is ridiculous. even original JPEG has its full feature set implemented by no one
2026-01-17 09:00:44 not against pushing for 'progressive' implementation as a target, but so far this just lowers the usage stat numbers without consideration for app implementations or other browsers (search engines themselves deemphasized progressive as a browser strategy by normalizing preloading)
veluca
2026-01-17 09:00:47 eh, who cares
2026-01-17 09:00:54 😛
2026-01-17 09:01:06 it will be supported, all the logic for it is there
HCrikki
2026-01-17 09:01:40 any progress on the apple ecosystem support for progressive loading of jxls ?
veluca
2026-01-17 09:02:20 not like they support progressive loading of anything
lonjil
2026-01-17 09:15:59 Progressive loading was added to caniuse due to how many web devs cited progressive loading as a major reason they want to use jxl.
Quackdoc
2026-01-17 09:22:10 it's a pretty major feature IMO, especially here in canada where they COMPLETELY botched the 5g rollout and fucked over tons of people who now rely on satellite internet and really bad cellular connections
jonnyawsom3
2026-01-17 10:22:53 I'll keep up the progressive DC (and lossless) fight until the day I die. Loading in at 1% blows any other format out of the water in terms of responsiveness. Even if we didn't compete on filesize or decode speed it would make JXLs near-instant, but being smaller, faster and previewing in 1/100th of the time means even my decade old phone going through a railway tunnel could show me a website full of images
username
2026-01-17 10:26:29 I think one of the only concerns I heard related to doing progressive that low is that certain features like patches aren't available at that point which could make the lower stages of the image look very different to the final image
2026-01-17 10:28:13 personally though I would take it anyways because being able to see a full sized image at as low as 1% is incredible
jonnyawsom3
2026-01-17 10:34:21 Either most of the image is patches, in which case download size is greatly reduced, or there's very little patches in which case the progressive steps won't be missing much. I also *think* patches are signalled first, so they'd just render over the lower res image
Quackdoc
2026-01-17 10:46:41 starlink standby mode is 7 cad a month, and it gets unlimited 1mbps down and 750kbps up, which is actually really nice, I use it for emergency internet when power goes down, and as a second backup and progressive loading helps tons lots of people also use starlink mini standby for cottages and stuff
Cacodemon345
2026-01-18 07:39:38 <@&807636211489177661>
_wb_
2026-01-18 10:12:20 https://www.blognone.com/node/149443
2026-01-18 10:13:25 Nothing surprising in that article, but interesting illustration
2026-01-18 10:17:26 https://www.theregister.com/2026/01/14/google_rekindles_relationship_with_jilted/
2026-01-18 10:18:03 At least better than these disturbing AI hands
AccessViolation_
2026-01-18 10:48:19 I highly suspect this is AI as well
2026-01-18 10:51:51 it's got that recognizable style, there aren't many obvious tells though. the only one is that the JXL character has a face while the Chrome character doesn't, which would just be a weird decision from the artist
_wb_
2026-01-18 10:53:15 Yes, probably both are AI, but the first one was a more specific prompt than the second one
AccessViolation_
2026-01-18 10:54:46 oh I mistook what you said, I thought you meant only the second one was AI
2026-01-18 10:55:21 the second one is crazy yeah, especially from such a well known outlet
2026-01-18 10:57:18 it doesn't appear in the article, I guess they're using generative AI to make link embeds less boring so people are more likely to click them?
2026-01-18 10:58:01 hold on, I'm curious if this works...
2026-01-18 10:58:36 https://www.theregister.com/2026/01/14/google_sets_the_moon_on_fire
2026-01-18 10:59:31 aw, I was wondering if maybe the thumbnail generation thing just took in the URL and generated something based on that, but since it's a 404 no embed is generated at all
spider-mario
2026-01-18 11:47:14 > The format's supporters argue JXL can be used to recompress existing JPEG images without loss so they're 20 percent smaller such weird phrasing
2026-01-18 11:47:27 it’s not just the format’s supporters arguing that
2026-01-18 11:47:32 it’s easy to verify
RaveSteel
2026-01-18 11:54:29 I wonder if AI is just cheaper than having a license with a stock image provider?
veluca
2026-01-18 11:54:37 strong AI vibes
_wb_
2026-01-18 01:16:19 I don't think it's AI, it's typical journalism nowadays to just present everything as "X says Y", regardless of whether Y is a fact or a completely crazy opinion. If it is AI, then it's stylistically accurate, imo.
runr855
2026-01-18 03:26:22 Many written media outlets require an absurd amount of daily publications per journalist, they have no way of actually researching the stuff they write about so I guess it makes sense for them to just state "X says Y".
CrushedAsian255
2026-01-18 05:22:39 the first one has that distinctive gpt image art style to it, don’t know how to explain it well
_wb_
2026-01-18 05:42:52 Yes. And the second has many inexplicable fingers.
CrushedAsian255
2026-01-18 05:44:12 AI still can’t do fingers?
_wb_
2026-01-18 05:45:18 It's better at it now, but who knows when that image was generated.
monad
2026-01-18 07:52:21 The most striking giveaways to me as a non-artist: - half the text struck from the logo but a random "J" compromising readability - Gemini watermark in the corner
AccessViolation_
2026-01-18 07:53:06 oh I didn't even catch the watermark
_wb_
2026-01-20 08:57:52 https://tweakers.net/reviews/14116/google-is-om-waarom-jpeg-xl-alsnog-de-standaard-voor-het-web-wordt.html
AccessViolation_
2026-01-20 09:03:54 lol your portrait was a surprise
Meow
2026-01-20 10:27:17 Ironically the banner image is AVIF
AccessViolation_
2026-01-20 10:36:16 I just finished reading it. it's a nice article, I appreciate the deep dive about some of the compression techniques as well
HCrikki
2026-01-20 10:43:37 i noticed the resource consumption potential for web services is never mentioned. lossless transcoding from jpg consumes almost no resources and is quasi-instant (in addition to being pixel-exact reversible). it's a unique capability that can enable web services to complete entire migrations using less than 5% of the resources any other format would've needed (or just their idle cycles)
2026-01-20 10:44:47 massive gamechanger for big tech, considering cloud compute costs significantly increased in the last few years and recently even more
_wb_
2026-01-20 11:00:53 It was a fun interview with the journalist, he wanted to dive pretty deep so we had a pretty long chat and afterwards he had added more technical details and asked me to check what he had written - most was fine, a few mistakes but that's why he asked me, so the final article is technically quite accurate
AccessViolation_
2026-01-20 11:02:24 oh wow, that's a breath of fresh air given the state of most tech journalism I come across
whatsurname
2026-01-20 11:38:20 People in the comments are still complaining about WebP because they can't use it in X/Y/Z, let's guess how long it will take for people to complain about JXL after it's enabled by default in Chrome
username
2026-01-20 11:41:41 funny thing is there is a lot of software that unknowingly supports WebP but doesn't allow anyone to use it
Exorcist
2026-01-20 11:42:58
spider-mario
2026-01-20 12:16:57 especially with this layout, it looks as if it’s intended as a mug shot
2026-01-20 12:17:04 “here is a prime culprit for JPEG XL”
2026-01-20 12:17:35 “please report if seen”
VcSaJen
2026-01-20 01:51:04 To be fair, same could be said for SVG, yet browsers still show partial loads just like PNG's scanlines.
AccessViolation_
2026-01-20 02:29:43 I think we're sort of lucky that JXL is already supported in most creative software and plenty of other software. maybe Chrome's late adoption was a blessing in disguise
whatsurname
2026-01-20 03:07:51 JXL definitely has better day one support than WebP/AVIF, but since people are still complaining about their support today, they’ll complain about JXL as well
lonjil
2026-01-20 03:09:48 Just the other day someone was telling me that webm and webp are evil formats that only work in expensive Adobe software and browsers, and nowhere else.
username
2026-01-20 03:11:04 there is still new software getting released that only supports recording to GIF
HCrikki
2026-01-20 03:11:28 support will come in waves. id personally be more concerned about educating the public about how to generate better quality images efficiently instead of chasing after the lowest kilobyte count possible
whatsurname
2026-01-20 03:13:19 I'd say that's impossible, just use better defaults
2026-01-20 03:14:28 Not only for libjxl, but also Photoshop, WordPress, etc
HCrikki
2026-01-20 03:14:29 doesn't stop people from picking ludicrously slow settings like effort 10, even 11
username
2026-01-20 03:15:28 aren't effort 10 and 11 locked by default?
VcSaJen
2026-01-20 03:30:02 What's there to learn? Just move the slider until preview shows that image is below file size limit.
2026-01-20 03:30:16 If no size limit, use lossless
jonnyawsom3
2026-01-20 04:26:43 11 is behind the expert flag, 10 is normal
KKT
2026-01-20 11:28:45 Hopefully the apps that have it built in keep applying the updates as well. I was just exporting some AVIFs from Pixelmator Pro, and it produced files 2x as big as from the command line. Don't want that happening to jxl.
jonnyawsom3
2026-01-20 11:30:19 v0.12 is going to be the biggest update in a while, lots of fixes, speed and density improvements. We know there's plenty more to do, but it'll all take time to figure out
username
2026-01-21 05:13:38 a bit strange but still counts as coverage I guess: https://news.ycombinator.com/item?id=46708032
runr855
2026-01-21 05:52:59 ```=== For an equal ssimulacra2 score (83.4xx ) === *Test* | Affinity 3.0.2 | cjxl v0.11.1 ------------+------------------+---------------- Quality | q 100 (in app) | -d 0.66 Encode time | 12 seconds | 4 seconds Size | 14.0MB | 10.7MB ------------------------------------------------```
2026-01-21 05:54:34 Would an older version even be that slow? And isn't that ssimulacra2 score very low for what they deem 100% quality?
_wb_
2026-01-21 06:21:36 ugh I was not prepared for that
monad
2026-01-21 07:18:29 jxlman
AccessViolation_
2026-01-21 10:42:03 can you believe your likeness is used in a meme that writes JPEG XL with a hyphen? :p
2026-01-21 10:43:01 the image filename really is `jpegxlman.jxl` lol
2026-01-21 10:43:31 not all heroes wear capes
HCrikki
2026-01-22 08:50:32 about the op111.net bench results mentioned on HN, didnt someone get jxl for one specific image down to ludicrously low filesize? i thought it wouldve been updated with such a note about whats technically possible for either specific types of images or doing a manual tuned encode for one
2026-01-22 08:53:34 Found the post. 704 bytes! -> https://discord.com/channels/794206087879852103/822105409312653333/1429571212283084850
AccessViolation_
2026-01-22 10:14:17 in 0.11.1, that same command results in a 10 kilobyte file rather than a 704 byte file
jonnyawsom3
2026-01-22 10:18:48 Use `-e 10` instead of `-e 9` (And optionally `--patches 0` to save some time)
AccessViolation_
2026-01-22 10:21:41 ah yeah there we go, 754 bytes
jonnyawsom3
2026-01-22 10:28:26 Global MA tree instead of Local
AccessViolation_
2026-01-22 10:29:11 got it down to 569 bytes
jonnyawsom3
2026-01-22 10:33:50 Huh?
AccessViolation_
2026-01-22 10:39:42 P15 instead of P14. I'm trying to find out if there's a certain single predictor that does better, or whether it's using different predictors in different parts of the image, but it's gonna take a while to try them all, so I'm not sure if I'm going to do that
jonnyawsom3
2026-01-22 10:49:32 Ah right, I was going to try P15 but got an error and forgot to re-run it
VcSaJen
2026-01-22 12:52:38 Some foreign podcast focused on web-standards briefly talked about JPEG XL: https://youtu.be/V71wKUp7LtI?t=6390 (from 1:46:29 to 1:51:49) Machine generated translation of machine generated transcript: https://rentry.org/3ub8vxmn Overall current jxl-rs performance in Chrome/Firefox was questioned, but otherwise positive reception.
jonnyawsom3
2026-01-22 12:57:22 There's a commit pending that should roughly double decode speed going to v0.3 ~~and progressive loading would hide it anyway~~
hjanuschka
2026-01-22 05:27:04 the roll of v0.3 should land soon, but won't make it into 145. do you have any tools so i can run your benchmarks?
2026-01-22 05:27:22 would like to see what the net effect on chromium all those perf increases have
jonnyawsom3
2026-01-22 05:32:29 Oh, for my Chromium SIMD comparison I just used this site and formatted it by hand, since it runs 50 reps and gives both mean and total time <https://random-stuff.jakearchibald.com/apps/img-decode-bench/>
A homosapien
2026-01-23 10:48:43 479 bytes, although I think I can push it a bit further. https://discord.com/channels/794206087879852103/803645746661425173/1429594434525204642
2026-01-23 10:55:46 <:galaxybrain:821831336372338729>
_wb_
2026-01-24 10:01:56 https://coywolf.com/news/web-development/jpeg-xl-jxl-is-coming-back-to-chrome/
2026-01-24 10:14:32 https://blog.fileformat.com/en/image/webp-vs-avif-vs-jpeg-xl-the-battle-for-next-gen-image-supremacy/
2026-01-24 10:16:04 https://danielandrade.net/posts/jpeg-xl/
2026-01-24 10:22:25 Ugh that genAI illustration: https://texnologia.net/giati-to-jpeg-xl-borei-telika-na-ginei-to-protypo-standard-tou-web/2026/01
2026-01-24 10:23:39 "30% higher Efficiency ctHoy than JPEP"
2026-01-24 10:24:23 "Lower bandWdyth", yay!
2026-01-24 10:27:19 It looks like an AI-generated translation/rewrite of the tweakers article from Dutch to Greek...
AccessViolation_
2026-01-24 10:32:39 ugh
2026-01-24 10:33:11 I might need to revive my project for the browser extension with the community maintained list of slop pages/websites
2026-01-24 10:33:31 because the situation hasn't gotten better overall
Exorcist
2026-01-24 10:35:20 Use enormous emoji = LLM
_wb_
2026-01-24 10:39:03 https://youtu.be/2rI200MAlqI?si=r4pOZLqgobIaqy_7
VcSaJen
2026-01-24 10:40:16 Plenty of JPEG XL pages use checkbox emojis, none of them LLM-written
AccessViolation_
2026-01-24 10:41:09 what I like about JPEG XL is that a lot of the deeper facts and philosophies around it exist in text only in this server, and LLMs don't have access to them yet. when you ask them in-depth questions about JPEG XL they will always get a bunch of things wrong
2026-01-24 10:41:52 maybe I'll create a site that describes some part of JPEG XL, and for every real page that users can see, there are invisible links to 10 variants that contain total nonsense
2026-01-24 10:42:26 a while ago I found out Grok thought JXL had lapped transforms. tiny things like that
2026-01-24 10:42:47 write at length about the lapped transforms in JPEG XL to poison these models
VcSaJen
2026-01-24 10:43:36 That's a bad thing, not a good thing. Closed Discord ecosystem is one of the things that is slowly killing the internet. No read-only view for guests. No search indexing. No archival. Nothing.
AccessViolation_
2026-01-24 10:44:36 I agree, I should've been more specific. I don't like that so much knowledge about JPEG XL is tucked away here, but I do enjoy that as a result of that, LLMs can't get to it. a net negative, but I enjoy that part of it
2026-01-24 10:45:44 but it looks like with LLMs writing articles about JPEG XL, I won't even have to try very hard at creating nonsense myself
_wb_
2026-01-24 10:45:49 If there were something as convenient for chatting as discord but with public archives and indexing like an old-fashioned forum, let me know. Though for things like 'privately' sharing the spec, discord is nice
VcSaJen
2026-01-24 10:49:18 I would have said "Matrix", but Guest access is also notoriously bad there. Channel owners have to manually enable it (none bother), and none (?) of the web clients even support it.
AccessViolation_
2026-01-24 10:50:15 matrix is all sorts of broken. I liked it conceptually, but the realization of it is all-around bad
2026-01-24 10:50:39 I use it for certain technical communities which only exist there
whatsurname
2026-01-24 11:11:28 Zulip?
HCrikki
2026-01-24 11:27:04 archival of discord chatter? please never, only bots would ever scour that... just commit to a curated public presence instead so the press has something to work with
_wb_
2026-01-24 11:40:34 yes, we should probably just add more stuff to jpegxl.info. If anyone feels like adding stuff there, that would be great — I can also give you push access to the website repo (https://github.com/jxl-community/jxl-community.github.io), I like to trust people (and I can always just revoke the access and revert stuff)
2026-01-24 11:44:26 One thing that's missing from the new version of jpegxl.info is the jxl art page (https://jpegxl.info/old/art/). It would be nice to have a page like that again, maybe with some more explanation of how it works and with up to date links like https://jxl-art.lucaversari.it/
jonnyawsom3
2026-01-24 12:12:04 That reminds me, maybe <@179701849576833024> could update the jxl-art website so we have access to 16bit modular buffers https://discord.com/channels/794206087879852103/794206170445119489/1464002172172763327, then we can make interactive demos/links to things like the smallest JXL file, Predictor demos, RCTs, etc.
monad
2026-01-24 12:15:02 I would be surprised if Sam & Co. weren't siphoning every bit they can out of the deep web. LLMs just aren't reliable in general.
veluca
2026-01-24 01:22:11 is that just updating the website?
jonnyawsom3
2026-01-24 01:23:26 Yeah, updating the libjxl version... Though maybe we can't because v0.12 isn't released
veluca
2026-01-24 01:23:35 `a0711b0976804c9678bfd613c9a344a87dd42278` exists
2026-01-24 01:23:35 😛
2026-01-24 01:25:14 someone should RIIR that page
2026-01-24 01:25:21 and maybe jxl-from-tree 😛
2026-01-24 01:26:37 (I hate js and npm)
jonnyawsom3
2026-01-24 01:42:25 Huh, the buffer setting still isn't working, is it pushed to the site yet?
veluca
2026-01-24 01:52:27 nope, working on it
2026-01-24 01:52:33 it's failing in some ways
2026-01-24 01:56:53 > Cannot read properties of undefined (reading 'byteLength') 😭
2026-01-24 01:57:19 can someone investigate? https://github.com/veluca93/jxl-art
Laserhosen
2026-01-24 02:26:41 An under-reported feature IMO - surprised it's not more prominent on the website.
_wb_
2026-01-24 02:34:35 JPEP XthqzxL
VcSaJen
2026-01-29 11:24:37 I've seen Phoronix using libjxl in their CPU benchmarks, looks like Tom's Hardware also uses libjxl for this purpose: https://www.tomshardware.com/pc-components/cpus/amd-ryzen-7-9850x3d-review/3
jonnyawsom3
2026-01-29 11:42:52 Yeah, IIRC the benchmark has some odd settings. Jon proposed a different set but they were rejected
monad
2026-01-29 09:04:14 Tom's is probably also using Phoronix Test Suite. Looks like default settings except quality and disabling lossless JPEG.
2026-01-29 09:14:12 <https://openbenchmarking.org/innhold/cd6425b5f86affe35c17ce9f46864528b1bbcf16>
jonnyawsom3
2026-01-29 09:45:27 Yeah, I wanted to submit a PR doing 1000 reps of JPEG transcode or fast lossless, to stress test cache instead of just clock speed
monad
2026-01-29 10:10:16 I guess you mean for decoding, but surely there are more practical tests.
jonnyawsom3
2026-01-29 10:11:28 No, encoding. At least for effort 1 since it's so much faster than anything else
monad
2026-01-29 10:12:01 Hm, what is the related use case?
Mine18
2026-02-01 02:19:02 probably something fast for low power devices, like a phone screenshot
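The proposed stress test could be sketched roughly like this (assuming `cjxl` is on `PATH`; `-e` selects encode effort and `-d 0` requests lossless, which are real cjxl flags, but the rep count and file names are illustrative). Repeating the same fast encode many times keeps the input hot in cache, so the loop exercises the memory hierarchy rather than a single cold run:

```python
import shutil
import subprocess
import time

def build_cjxl_cmd(src, dst, effort=1, distance=0):
    """Assemble a cjxl invocation: -e 1 is the fastest effort
    ("fast lossless" path), -d 0 requests lossless output."""
    return ["cjxl", src, dst, "-e", str(effort), "-d", str(distance)]

def bench(src, dst, reps=1000):
    """Run the encode `reps` times and return total wall-clock seconds."""
    if shutil.which("cjxl") is None:
        raise RuntimeError("cjxl not found on PATH")
    cmd = build_cjxl_cmd(src, dst)
    start = time.perf_counter()
    for _ in range(reps):
        subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start
```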
Demiurge
2026-02-01 09:52:43 I think that's called IRC?
_wb_
2026-02-01 10:00:00 IRC doesn't come with archives/search though
ignaloidas
2026-02-02 01:29:48 there's archivers/loggers that can search through logs, e.g. https://libera.catirclogs.org/#
Quackdoc
2026-02-02 01:41:12 yeah but those are fairly unreliable unless they are run by the IRC server
Meow
2026-02-02 02:03:18 Needs a <#794206170445119489> channel for IRC?
hjanuschka
2026-02-03 03:36:25 https://www.januschka.com/jxl-art/
jonnyawsom3
2026-02-03 03:42:34 Looks nice, but doesn't have the 16bit buffer commit > Unexpected node type: 16BitBuffers node_modules
hjanuschka
2026-02-03 03:42:50 uses current libjxl, and if possible uses jxl from browser to render?!
2026-02-03 03:42:54 what how'd you trigger it?
jonnyawsom3
2026-02-03 03:43:50 The tree I posted here for the smallest JXL https://discord.com/channels/794206087879852103/794206170445119489/1463998815919931493
hjanuschka
2026-02-03 05:05:37 ok it renders now - do you have a reference for how it should look? also chromium refuses this - so I need a png alternative for this
2026-02-03 05:05:51 damn - everything is a rabbithole
2026-02-03 05:06:15 added the tree as a preset
jonnyawsom3
2026-02-03 05:21:41 It should just be a black rectangle, but it's the smallest valid JXL image at 12 bytes, so nice to have as a demo
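For context on recognizing such files: per the spec, a bare JPEG XL codestream starts with the bytes 0xFF 0x0A, and the ISOBMFF container starts with a fixed 12-byte signature box. A minimal sketch of that check (the classification labels are my own):

```python
# The two JPEG XL signatures defined by the spec.
JXL_CODESTREAM_SIG = b"\xff\x0a"
JXL_CONTAINER_SIG = b"\x00\x00\x00\x0cJXL \r\n\x87\n"

def jxl_signature(data: bytes) -> str:
    """Classify the leading bytes of a file as a bare JPEG XL
    codestream, an ISOBMFF container, or neither."""
    if data.startswith(JXL_CONTAINER_SIG):
        return "container"
    if data.startswith(JXL_CODESTREAM_SIG):
        return "codestream"
    return "unknown"
```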
hjanuschka
2026-02-03 06:03:34 ok it works now
2026-02-03 06:04:23 yet i have to investigate jxl-rs in chromium now 🤪 the web app works and falls back to png output in case of 16bit thing
Jarek Duda
2026-02-04 07:31:59 JPEG XL for PDF White Paper: https://www.linkedin.com/posts/twain-working-group_jpeg-xl-white-paper-the-benefits-activity-7422687581231804417-CFK7
whatsurname
2026-02-04 08:43:49 It's basically duplicating jpegxl.info (it even uses a screenshot of that site for the JXL icon instead of a proper one). I was expecting more from a "white paper"
0xC0000054
2026-02-04 08:46:14 I am surprised that a TWAIN working group still exists. I thought that scanner standard had been replaced.
Jarek Duda
2026-02-04 08:48:12 sure, but from PDF perspective ... just found its webpage: https://www.info-source.com/white-paper-the-benefits-of-adding-jpeg-xl-to-the-iso-pdf/
monad
2026-02-04 10:08:13 That link seems like some automated redistribution.