JPEG XL

jxl

Anything JPEG XL related

tokyovigilante
2025-12-03 09:16:36 I'm trying that first, and it is pretty good. I've got a big stack of images (3D dataset) I need to show in low quality ASAP, then stream in to full quality while the user is manipulating the data in a 3D texture, which I'm swapping higher quality versions of each image into as they come in.
2025-12-03 09:17:52
```
Input: 512x512, 524288 bytes (signed 16-bit)
Note: MONOCHROME2, RescaleIntercept=-1024, RescaleSlope=1
Pixel range: -2000 - 2706 (stored values, signed)
Lossless:    94798 bytes (18%) | encode: 37.02 ms | decode: 6.62 ms
Progressive: 168043 bytes (32%) | encode: 64.08 ms | decode: 7.42 ms
```
2025-12-03 09:18:29 just working through the API to see how much of the progressive version I need before I can render something
ignaloidas
2025-12-03 09:32:30 FWIW if you want better progressive rendering afaik jxl-oxide is currently the best decoder for it
tokyovigilante
2025-12-03 09:33:55 thanks, would prefer to avoid rust as a dependency if possible (I'm using Nim so C FFI is ideal)
2025-12-03 09:35:07 by best do you mean faster or higher-quality? Raw speed of decoding isn't such a big deal, more early rendering and being able to do a multi-step refinement without adding too much (more) overhead
ignaloidas
2025-12-03 09:35:41 more eager decoding, can start providing an image with fewer bits received than e.g. libjxl
tokyovigilante
2025-12-03 09:36:00 oh yup, that is what I need.
2025-12-03 09:36:39 at the moment I'm not actually getting any progressive decode -
```
Total size: 168043 bytes
Progressive steps: 2

Step | Bytes      | %Data  | Description
-----|------------|--------|------------------------
   1 |         56 |   0.0% | Basic info (header)
   2 |     168043 | 100.0% | Full image complete

Lossless (non-progressive) analysis:
Total size: 94798 bytes
Progressive steps: 2
```
ignaloidas
2025-12-03 09:40:40 If you're using libjxl you want to set how many progressive steps you want to get with JxlDecoderSetProgressiveDetail
tokyovigilante
2025-12-03 09:47:01 is that just for VarDCT encoding though?
ignaloidas
2025-12-03 09:48:16 should work for squeeze steps as well I think?
2025-12-03 09:48:28 admittedly haven't touched that part of the code much
tokyovigilante
2025-12-03 09:49:03 No worries. I'll run a few experiments once the rest of the pipeline is up and running.
AccessViolation_
2025-12-03 12:27:51 I've been thinking about this, and I don't understand the whole DCT concept well enough to reason about this from intuition alone. how beneficial would it be if an encoder could create custom quantization tables specifically tuned to the contents of that specific image? this would effectively be optimizing an image in an entirely different domain... what caused me to think of this was dithering, where, let's say an ordered checkerboard-like pattern is applied at a pixel level, the highest frequency component will be very important to representing that, while they are usually the first to go with quantization tables.
veluca
2025-12-03 12:40:09 I believe the answer would be "depends on the image" 🙂
Exorcist
2025-12-03 12:42:19 > say an ordered checkerboard-like pattern is applied at a pixel level
this is not "natural"
AccessViolation_
2025-12-03 12:46:17 I just feel like there has to be a best quantization table for a given image, while the default ones are generally good for all images. but I'm not sure how much things would actually improve if we somehow found and used that best quantization table
2025-12-03 12:47:42 I imagine it might look more or less like the default one, with some slight transformation applied or some very specific frequencies having a particularly high importance, like in the dither example
Exorcist
2025-12-03 12:55:08 How do you define "some very specific frequencies having a particularly high importance"?
AccessViolation_
2025-12-03 12:59:25 during the DCT process, a block is decomposed into a set of weights that say how much a frequency contributes to the block. those are then divided by a quantization table which is a representation of how important it is that those given frequencies are represented with high precision. and the higher the frequency, the less important it is, but that's just something that's *generally true*, there must be a mathematically best quantization table for a given image with a given BPP
2025-12-03 12:59:42 (if I understand all of this correctly)
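The trade-off being described can be sketched in miniature (pure Python, a naive 1D DCT for brevity; the tables and signal are made up, and real codecs are far more involved than this):

```python
import math

def dct(block):
    """Naive (unnormalized) 1D DCT-II: decompose a block into
    frequency coefficients."""
    n = len(block)
    return [sum(x * math.cos(math.pi * (i + 0.5) * k / n)
                for i, x in enumerate(block))
            for k in range(n)]

def quantize(coeffs, table):
    """Divide each coefficient by its table entry and round.
    Small entries preserve a frequency; large ones discard it."""
    return [round(c / q) for c, q in zip(coeffs, table)]

# A pixel-level checkerboard puts most of its energy in the
# highest-frequency coefficient.
checker = [1, -1, 1, -1, 1, -1, 1, -1]
coeffs = dct(checker)

# A default-style table that quantizes high frequencies coarsely
# wipes that component out entirely...
default_table = [1, 2, 4, 8, 16, 32, 64, 128]
# ...while an image-tuned table that protects the top frequency keeps it.
tuned_table = [1, 2, 4, 8, 16, 32, 64, 1]

print(quantize(coeffs, default_table))
print(quantize(coeffs, tuned_table))
```

With the default-style table the checkerboard's dominant coefficient quantizes to zero; a table tuned to protect that one frequency keeps it, which is the dither example in a nutshell.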
Exorcist
2025-12-03 01:00:26 But why would you use DCT for a non-"natural" signal?
AccessViolation_
2025-12-03 01:00:48 "not natural" referring to the dither example?
2025-12-03 01:02:08 even if it's a natural image just taken by a camera without further editing (if that's what you mean) there must be a best quantization table specifically for that image
2025-12-03 01:03:49 at least, that's what I'm thinking
Magnap
2025-12-03 03:26:30 pretty sure it has to be true: for every image, there's a total order on any choice of quantization tables by how small the image would be if VarDCT-encoded with those tables, so there must be minimal such choices (there could be several that are equally good) for that image
jonnyawsom3
2025-12-03 04:04:21 Just use progressive lossless with a nightly encode from https://artifacts.lucaversari.it/libjxl/libjxl/latest/
`cjxl -d 0 -p`
And test it with https://jxl-oxide.tirr.dev/demo/index.html
2025-12-03 04:05:21 You'll get a preview down to 8 pixels
Traneptora
2025-12-03 06:44:54 I just uploaded `.jxl` to facebook messenger
2025-12-03 06:44:58 and it supported it
2025-12-03 06:44:59 :o
Quackdoc
2025-12-03 06:58:39 0.0
veluca
2025-12-03 07:16:44 facebook has supported jxl upload for quite a while
tokyovigilante
2025-12-03 09:08:59 nice, thanks, handy to have up-to-date debian packages too
TheBigBadBoy - 𝙸𝚛
2025-12-03 10:39:12 if you DL the image, is it still a JXL?
Traneptora
2025-12-03 10:52:32 not with my user agent
Quackdoc
2025-12-03 11:25:09 is it UA, or ACCEPT header that matters?
AccessViolation_
2025-12-04 12:02:39 I remember hearing facebook made a change to allow JXL as an ingest format
2025-12-04 12:03:49 so they allow users to upload them, but transcode them to something else
2025-12-04 12:04:27 that was a long time ago though, something may have changed
Magnap
2025-12-04 11:43:15 when writing the TOC, you still have to write entries for the empty sections, right?
Tirr
2025-12-04 11:46:57 yep
Magnap
2025-12-04 11:48:56 also, in the case where num_groups == num_passes == 1, "there is a single TOC entry and a single section containing all frame data structures", but isn't that TOC entry always "offset is 0"? or do you write the length of the section as the offset?
Tirr
2025-12-04 11:49:52 section sizes are written to the TOC
Magnap
2025-12-04 11:51:38 and then the last size doesn't go into the last value of group_offset, but by knowing it you can skip to the end of the frame, is that right?
Tirr
2025-12-04 11:53:38 uh, do offsets get encoded into the bitstream? I don't remember the details...
2025-12-04 11:53:50 I guess section offsets are computed from encoded section sizes and permutation
ignaloidas
2025-12-04 11:54:52 I'm pretty sure it's section sizes, not offsets
Magnap
2025-12-04 12:01:42 no it's the sizes, as you say, but each offset is computed in the decoder as the cumulative sum of all the sizes up to but not including that section, so the size of the last section isn't needed for calculating the offsets within a frame, so I figured its use must be to skip past the frame
Tirr
2025-12-04 12:04:17 ah yeah you got it right, the decoder needs the size of the last section to skip past the entire frame
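The scheme just worked out can be sketched as follows (the sizes are hypothetical): the TOC stores per-section sizes, each offset is the cumulative sum of the earlier sizes, and the last size only matters for skipping past the frame.

```python
def offsets_from_toc(sizes):
    """Each section's offset is the cumulative sum of the sizes of all
    sections before it (after undoing any permutation), so the first
    offset is always 0."""
    offsets, pos = [], 0
    for size in sizes:
        offsets.append(pos)
        pos += size
    return offsets

toc_sizes = [56, 1024, 980, 2048]   # hypothetical per-section sizes
offsets = offsets_from_toc(toc_sizes)
print(offsets)

# The last size never contributes to any offset, but it tells the
# decoder where the frame ends so it can skip past the whole frame:
end_of_frame = sum(toc_sizes)
```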
lonjil
2025-12-04 01:34:19 I've been playing around with noisy gradients in jxl_from_tree, but I don't think the coding tools available in it can produce this effect. Does anyone know if the coding tools in the standard would allow efficiently producing this image (or something similar)?
Magnap
2025-12-04 01:39:43 am I seeing it right that that's noisy in the space of "where along the gradient are we"?
lonjil
2025-12-04 01:39:51 yeah
_wb_
2025-12-04 01:45:04 It's unfortunate that Noise only works on the 3 color channels. If it would work on alpha, that would give you a way to do it (by making the gradient as a noisy alpha gradient that blends between whatever two colors)
2025-12-04 01:46:40 Maybe using kAdd or kMul blending could help here somehow
Magnap
2025-12-04 04:29:36 for GlobalModular where I want to encode a global tree and also encode some image data, is this the proper way to do it?
- write a true Bool() for the global tree
- write an entropy-encoded stream that describes the tree
- write the distributions part (including whether LZ77 is used and both clustering and post-clustered distributions) of an entropy-encoded stream that describes the residuals, but not any symbols
- write a full modular sub-bitstream with ModularHeader and all
Tirr
2025-12-04 04:39:18 you need to write self-correcting predictor header (write single `true` for all defaults) and modular transforms info (write `0b00` for no transforms) right after signalling whether to use global tree (the first `Bool()`)
Magnap
2025-12-04 04:43:02 oh, so the GlobalModular is just like a modular sub-bitstream except the `use_global_tree` has a different meaning of whether or not to save the tree for later?
2025-12-04 04:44:12 wait, and then ig if you signal `false` for the global tree in GlobalModular you don't write anything further? or can you still write the rest of a modular stream but the global tree just won't be saved?
2025-12-04 04:45:46 the part of the spec I'm trying to understand here is > First, the decoder reads a Bool() to determine whether a global tree is to be decoded. If true, an MA tree is decoded as described in H.4.2. > > The decoder then decodes a modular sub-bitstream (Annex H), where the number of channels is computed as follows: > [...]
Tirr
2025-12-04 04:45:54 I think signalling the global tree and signalling `use_global_tree` are separate things
2025-12-04 04:46:12 `use_global_tree` is a part of Modular sub-bitstream
Magnap
2025-12-04 04:47:14 yeah, that's why I was confused about having to write the rest of a ModularHeader when encoding the global tree
Tirr
2025-12-04 04:48:15 in my understanding, it's possible to have `use_global_tree = false` and use a different tree in `GlobalModular`
Magnap
2025-12-04 04:48:30 ok that fits my previous understanding
2025-12-04 04:49:37 that you write `true`, then write a tree in one entropy-coded stream, then write the residuals distributions as if about to write another entropy-coded stream, and then you write a full Modular sub-bitstream
Tirr
2025-12-04 04:57:00 > you write `true`, then write a tree in one entropy-coded stream
this will signal the global tree, and the Modular sub-bitstream immediately follows, so you won't be writing distributions, but write `use_global_tree = true`, `WpHeader.all_defaults = true`, `transforms.len = 0` and then start an entropy-coded stream (write the residual distributions)
Magnap
2025-12-04 04:59:02 don't you need to signal the residual distributions for each context of the global tree as part of signaling the global tree?
Tirr
2025-12-04 04:59:02 uh well my wording seems a little bit confusing
Magnap
2025-12-04 04:59:41 so is mine, by "a tree" in this message I meant without the residuals, just the tree shape itself
2025-12-04 05:00:10 (well, not just the shape, but also the data of what to branch on and what the leaves contain)
Tirr
2025-12-04 05:00:51 ah yeah you're right, I was a bit confused reading the decoder code
Magnap
2025-12-04 05:01:12 which tbf is not how the spec considers it, since the spec considers "an MA tree" to include the residual distributions, but afaict the tree "shape" and the residual distributions are written in each their own entropy-encoded stream
2025-12-04 05:01:39 oh you know what, I could just do that, that might be clearer 😅
Tirr
2025-12-04 05:02:59 so it's:
- signal the global tree: `true`, meta-adaptive tree structure, distributions for each cluster
- signal Modular header: `use_global_tree = true`, `WpHeader.all_defaults = true`, `transforms.len = 0`
- start an entropy-coded stream for all the channels that belong to GlobalModular, with distributions signalled in the global tree
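As a toy sketch of that ordering (the field names below are invented purely to pin down the sequence; this is not real bitstream syntax):

```python
class RecordingWriter:
    """Hypothetical stand-in that just records what would be written."""
    def __init__(self):
        self.ops = []

    def write(self, field, value):
        self.ops.append((field, value))

def write_global_modular(bw, tree, cluster_dists, channels):
    # 1. Signal the global MA tree: a true Bool(), the tree structure,
    #    then the residual distributions for each cluster.
    bw.write("global_tree_present", True)
    bw.write("ma_tree", tree)
    bw.write("distributions", cluster_dists)
    # 2. Modular sub-bitstream header: reuse the global tree,
    #    all-defaults self-correcting predictor, zero transforms.
    bw.write("use_global_tree", True)
    bw.write("wp_all_defaults", True)
    bw.write("num_transforms", 0)
    # 3. Entropy-coded stream for the channels that belong to
    #    GlobalModular, using the distributions from step 1.
    for ch in channels:
        bw.write("channel_data", ch)

bw = RecordingWriter()
write_global_modular(bw, "tree", "dists", ["ch0", "ch1"])
print([field for field, _ in bw.ops])
```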
Magnap
2025-12-04 05:03:53 ok, that's great to have confirmed
2025-12-04 05:04:13 because that's what I thought I was doing but it's not working 🙃 but at least the bug isn't in my understanding of the spec
Tirr
2025-12-04 05:05:20 you can read jxl-rs or jxl-oxide, I'm reading both now 😅
2025-12-04 05:06:56 you might be writing channels that shouldn't belong to GlobalModular. are you encoding an image larger than `group_dim`?
2025-12-04 05:07:31 I don't remember the exact conditions though...
Magnap
2025-12-04 05:08:16 not to my knowledge; I am encoding a 256x256 image, in a frame with `group_size_shift` 1
2025-12-04 05:09:13 (and no crop or anything)
Tirr
2025-12-04 05:09:40 hmm then all color channels are in GlobalModular...
Magnap
2025-12-04 05:10:47 more frustratingly, I tried taking the data (that is, the part after the TOC) from some JXL art of a black image encoded in GlobalModular (which is what I am trying to produce) and it works but it seems too small to me
Tirr
2025-12-04 05:14:29 maybe you can debug it by modifying a decoder to log all the details and feed it the encoded image
Magnap
2025-12-04 06:02:00 I finally figured it out! I had overlooked that clustering is skipped if there is only 1 distribution
2025-12-04 06:03:15 I tried to decode it with jxl-oxide, and got an error about an invalid hybrid uint config, which made me realize I must have fallen out of sync with the decoder
2025-12-04 06:21:30 totally unrelated, but why was CMYK banned from bitstream level 5?
jonnyawsom3
2025-12-04 06:29:45 I assume due to color pipeline complexity
Jyrki Alakuijala
2025-12-05 11:31:19 perhaps 256x256 dct
lonjil
2025-12-05 11:32:03 hm
juliobbv
2025-12-06 02:09:30 Fixed: JPEG XL → Butteraugli & SSIMULACRA2 → Fab → SVT-AV1-PSY → AOM-AV1 `tune=iq`
2025-12-06 02:09:36 based on a true story
2025-12-06 02:11:16 Fab brought up testing still image perf on SVT-AV1 and that's how we noticed it was performing better than libaom back at the time (early 2024 IIRC)
2025-12-06 02:13:18 then <@703028154431832094> and I started an effort to improve still image even further and that's how we ended up with today's SVT-AV1 and libaom's `tune=iq`
2025-12-06 02:15:33 interestingly enough, we never got around to fine-tune the U and V channels separately in `tune=iq` -- they both use the same quantization matrices and chroma delta q offsets
2025-12-06 02:18:03 but we've found Butteraugli's and SSIMU2's assessments on multi-scale feature retention and competence on luma vs. chroma bit allocation very useful during our development
2025-12-06 02:20:01 I think that's the true influence from JXL
tokyovigilante
2025-12-06 10:46:39 Nice, thanks, I don't get anything usable for my test image (12-bit pixel depth) until about 8kb, but that's much better than `libjxl`, which doesn't report a partial image until about 86kb (as below). Is this just `jxl-oxide` doing it better?
```
=== Streaming Decode Simulation (4KB chunks) ===

16 decode events during streaming:
  [1]   4096 bytes ( 2.9%) - Header decoded
  [2]  86016 bytes (60.4%) - Partial image available
  [3]  90112 bytes (63.3%) - Partial image available
```
jonnyawsom3
2025-12-06 10:51:26 libjxl used to do it, but it was disabled/removed
2025-12-06 10:51:44 I was trying to find the commit that did it, but I gave up
2025-12-06 10:53:33 If you run `jxl-oxide -I --all-frames --with-offset` it should show LfGlobal, which is when actual image data starts
tokyovigilante
2025-12-06 10:55:08
```
JPEG XL image (Container)
  Image dimension: 512x512
  Bit depth: 16 bits
  Color encoding:
    Colorspace: Grayscale
    White point: D65
    Primaries: sRGB
    Transfer function: Linear
  Frame #0 (keyframe)
    Modular (maybe lossless)
    Frame type: Regular
    512x512; (0, 0)
    Offset (in codestream): 8 (0x8)
    Frame header size: 24 (0x18) bytes
    Group sizes, in bitstream order:
      LfGlobal: 83464 (0x14608) bytes
      LfGroup(0): 0 (0x0) bytes
      HfGlobal: 0 (0x0) bytes
      GroupPass { pass_idx: 0, group_idx: 3 }: 15483 (0x3c7b) bytes
      GroupPass { pass_idx: 0, group_idx: 2 }: 15264 (0x3ba0) bytes
      GroupPass { pass_idx: 0, group_idx: 0 }: 14096 (0x3710) bytes
      GroupPass { pass_idx: 0, group_idx: 1 }: 13957 (0x3685) bytes
```
Tirr
2025-12-06 10:56:02 (note that the offsets ignore container boxes)
jonnyawsom3
2025-12-06 10:57:10 That lines up though, libjxl will only decode a full pass, so the entire LfGlobal is required
tokyovigilante
2025-12-06 10:58:20 makes sense. but I can get "something" at ~14-15k?
2025-12-06 10:59:03 or do you mean jxl-oxide is partially decoding the LfGlobal data?
2025-12-06 10:59:23 I did think the Hf data was refining towards lossless
jonnyawsom3
2025-12-06 11:03:04 Oxide can partial decode the LfGlobal, it's how you get a preview at 8kb
2025-12-06 11:04:43 It goes even lower for lossy
AccessViolation_
2025-12-06 12:01:53 can you in theory use splines to remove e.g. ringing artifacts, not by replacing the element that's causing ringing by a spline, but by using splines to cover the ringing itself? or can splines only be used to subtract information before it's encoded in a way that would cause ringing
2025-12-06 12:03:51 intuitively it makes sense to me that splines are applied when IDCT is already done, but now I'm thinking that doesn't make sense because then you gain nothing by using them, you'd still see ringing around elements you replace with splines
Magnap
2025-12-06 12:05:37 I believe the decoder adds the splines towards the end (not sure when it happens relative to the other features), but lemme check
AccessViolation_
2025-12-06 12:05:42 because I was just thinking: there might be cases where a ringing-causing element cannot trivially be replaced by a spline, but the ringing itself can easily be removed by a spline that matches the flat background gradient
Magnap
2025-12-06 12:06:41 Yeah, they get added after patches but before noise
AccessViolation_
2025-12-06 12:08:04 so is this after IDCT?
Magnap
2025-12-06 12:08:14 Yup
2025-12-06 12:08:30 Or Modular decoding 😉
2025-12-06 12:09:20 (and after upscaling)
AccessViolation_
2025-12-06 12:09:38 so to replace spline-worthy elements with splines to avoid compression artifacts, the encoder subtracts the spline from the pixel data *before* DCT?
2025-12-06 12:10:01 (I know the encoder doesn't do any of this, I mean if it would)
2025-12-06 12:10:07 gotcha, thanks :3
2025-12-06 12:11:25 ok so then to cover ringing with splines directly you would have to see how the encoder ends up encoding the image, decode that, roll it back, add splines to cover the compression artifacts it will create, then encode again 😅
Magnap
2025-12-06 12:12:09 I mean you don't have to fully encode the image, you "just" have to do an IDCT 😅
2025-12-06 12:14:05 But yeah splines go before the "normal" image data in the bitstream, so you can't just stream out the bitstream if you wanna add in splines after VarDCT/Modular encoding
AccessViolation_
2025-12-06 12:15:10 yeah that makes sense
2025-12-06 12:16:36 I wonder how splines are going to look with progressive VarDCT. do they get applied right away, and do we see a ghost of spline-encoded data way before the full resolution image is rendered, or do they only pop in during the final full render, causing missing or distorted elements when the image is almost but not fully loaded
Magnap
2025-12-06 12:18:22 are you thinking about multiple passes? because I imagine you could downscale them (or better, do the rendering math in a way that results directly in the downscaled version) for the LF image
AccessViolation_
2025-12-06 12:18:52 ohh yeah that's smart
2025-12-06 12:19:40 it'll probably be costly during decode but if you're network bottlenecked, I don't think that CPU usage will be a problem relative to how long you have to wait for the image. might have to be smart to not bother doing it when the network is fast enough to load the whole image effectively immediately
Magnap
2025-12-06 12:20:59 but it's a good question whether they make more sense as "pass -1" or "pass N+1" if there are multiple passes. arguably you'd expect the encoder to use them for high-frequency information which would make more sense as a later pass, but also since you're getting them first and it's basically early access to high-frequency information, why not put them in early
AccessViolation_
2025-12-06 12:21:46 alternatively, you could render splines right away but apply them 'transparently', e.g. if the image is at 50% fidelity you blend the spline at 20%, at 75% quality you blend it at 50% and at 100% quality it'll be 100%
2025-12-06 12:22:22 just so there's no pop-in, but also no very obvious sharp splines when the image is barely loaded
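That blending idea (not anything in the spec, just the scheme proposed here) could be a piecewise-linear ramp through the example points:

```python
def spline_opacity(fidelity):
    """Piecewise-linear spline blend factor during progressive decode,
    through the example points from the chat:
    50% fidelity -> 20% spline, 75% -> 50%, 100% -> 100%."""
    points = [(0.0, 0.0), (0.5, 0.2), (0.75, 0.5), (1.0, 1.0)]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if fidelity <= x1:
            t = (fidelity - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return 1.0  # fully loaded: splines at full strength
```

Splines fade in smoothly with image fidelity, so there is no pop-in at the end and no sharp spline content overlaid on a barely loaded image.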
Magnap
2025-12-06 12:24:22 that's nice 👀
AccessViolation_
2025-12-06 12:24:29 <@604964375924834314> do you still have that JXL with the manually spline-encoded lamp post?
2025-12-06 12:27:32 or maybe a blur over the spline-affected area that decreases as the image gets sharper would be better, though I do wonder if any of this would be considered compliant
Magnap
2025-12-06 12:27:49 you could probably do some scarily clever heuristic about how much high-frequency content a block already has, but it sounds expensive to apply 🤔
2025-12-06 12:30:47 I don't think the spec really has much to say about decoding partial images other than "don't progressively render SkipProgressive frames"
spider-mario
2025-12-06 01:35:07 it seems the bitstream has changed too much since then; it's probably possible to recreate the image (I still have the control points) but it will be some effort
Magnap
2025-12-06 05:03:22 for the blend modes that define what happens to the alpha channel, what happens if the ec_blend_info for that channel sets a different blend mode?
AccessViolation_
2025-12-06 05:23:18
2025-12-06 05:23:19 any suggestions for improvements or things you want added are welcome (please see <#822120855449894942> for the most recent version)
Magnap
2025-12-06 05:25:39 there's the zune encoder https://github.com/etemesi254/zune-image/tree/dev/crates/zune-jpegxl
cioute
2025-12-06 05:26:08 emm... idk, maybe add jxlviewer and fossify gallery to the android software category? jpegview to the windows software category (i don't like imageglass since it requires a webview)
AccessViolation_
2025-12-06 05:29:13 gotta go for a bit, I'll add these
Tirr
2025-12-06 05:38:14 I think the only reference implementation is libjxl, which is 18181-4
2025-12-06 05:39:10 (at least technically)
Magnap
2025-12-06 05:43:34 incidentally, how does that work? did they submit a tarball? (if so, does said tarball correspond to a specific commit?)
AccessViolation_
2025-12-06 06:16:18 putting the domain the links point to in brackets, yes or no? I prefer it like this because I like knowing what site links direct to at a glance, but it does look a bit cluttered
Tirr
2025-12-06 06:19:34 <@&807636211489177661> 👆 <:BanHammer:805396864639565834>
RaveSteel
2025-12-06 06:23:19 this is at least the second bot with that domain in their bio
2025-12-06 06:23:27 maybe we should get an automod
_wb_
2025-12-06 08:42:15 Current 18181-4 has only (an old version of) libjxl, plan is to at some point update 18181-4 to contain both libjxl and jxl-rs, so both will become reference implementations. There's just a lot of ISO bureaucracy to update standards, so I'd prefer to do that when the dust has settled a bit and we have a good stable release for both. In practice, nobody is using the ISO version of the software anyway, it's more like a long-term archival thing but not really something with practical implications.
Orum
2025-12-06 11:09:33 JPEGs losslessly transcoded to <:JXL:805850130203934781> get some filtering applied when decoding them (so that the output isn't the same as just decoding the original JPEG), right?
jonnyawsom3
2025-12-06 11:13:15 The output isn't the same because it does the maths in higher precision, but it can also use CFL, etc
Orum
2025-12-06 11:14:37 is there a way to force that with djxl?
A homosapien
2025-12-06 11:15:55 force the same output as regular jpeg?
Orum
2025-12-06 11:16:06 yes
2025-12-06 11:16:32 I suppose you could just convert it back to the original jpg, but I mean to something like ppm
jonnyawsom3
2025-12-06 11:17:02 No, because that depends on what JPEG decoder you're comparing it to
Orum
2025-12-06 11:18:40 I'm just thinking something like a reference JPEG decoder...
A homosapien
2025-12-06 11:22:09 The de facto standard for decoding to ppm is libjpeg-turbo
Orum
2025-12-06 11:23:59 yeah, which is what I use, I just don't know what else is out there
2025-12-06 11:24:34 other than djpegli, obviously
A homosapien
2025-12-06 11:25:00 I think to get the higher precision decoding, do `djxl transcode.jxl output.ppm --bits_per_sample 16`
AccessViolation_
2025-12-06 11:30:06 CFL?
jonnyawsom3
2025-12-06 11:31:15 Chroma From Luma
2025-12-06 11:32:09 They want lower precision like Turbo
witext
2025-12-06 11:33:27 Hi people! i'm new to this stuff but i've long been interested in JXL as i heard about it way back and have been looking it up like once a year or smth since
One of the things i'm most excited about with JXL is the animation compatibility, as GIFs famously look bad, but are still the main way we exchange short animated reactions online.
Last time i looked it up, there weren't really any tools for creating animated JXL files, so i was wondering if that's changed and if not, where should i look for updates in the future?
HCrikki
2025-12-06 11:39:30 cjxl can create animated jxls (image animation). afaik it seems to do so both losslessly and with a guaranteed smaller filesize than the original gif
AccessViolation_
2025-12-06 11:41:36 I think ffmpeg can produce animated JXLs too, but it's not my area of expertise. some people here have been working on further improving ffmpeg support, iirc
A homosapien
2025-12-06 11:41:47 there is a flag to turn off CFL for jpeg transcoding in cjxl
AccessViolation_
2025-12-06 11:46:50 I think there used to be a way to do this with the API by requesting an 8-bit per channel buffer and requesting it not be dithered, though as I understand it the behavior currently is to always dither
2025-12-06 11:46:58 but I'm not 100% clear on that
2025-12-06 11:51:26 that's not to say it would look the same as other JPEG decoders, but dithering is the main thing that makes losslessly recompressed JPEGs look very different from the originals if you zoom in, since most JPEG decoders won't bother to decode to a high bit depth and dither the output (at least none that I've personally seen in viewers)
2025-12-07 12:03:02 opinions differ on this, I think that when dealing with a recompressed JPEG the default should be to not dither when the requested output is 8 bpc, to maintain closer visual identity with the original as it appears in most software, but do show higher precision colors (still without image-level dithering) when a higher bpc is requested. though other people with more experience than me seem to prefer dithering because it's a more correct representation of the image data (as I understand, I hope I'm not misrepresenting them)
2025-12-07 12:14:46 speaking of, are there currently any image viewers that support JXL and request a higher bit depth, and dither the frame buffer?
username
2025-12-07 02:51:44 the slides here give a really nice overview: https://docs.google.com/presentation/d/1LlmUR0Uoh4dgT3DjanLjhlXrk_5W2nJBDqDAMbhe8v8/view
Also the spellings of the various JPEG standards should use non-breaking spaces. I'll post a message right after this one that contains a non-breaking space that you can use the "Copy Text" action on, since IMO trying to get a non-breaking space can be a hassle sometimes.
2025-12-07 02:51:47 JPEG XL
2025-12-07 02:51:54 JPEG 2000
2025-12-07 02:51:57 JPEG XR
AccessViolation_
2025-12-07 10:58:32 I tried non-breaking spaces before but I think Discord turns them into normal spaces
2025-12-07 10:59:16 oh, no, I specifically need to use the copy text button like you said. odd
2025-12-07 11:04:27 it seems in hyperlinks they become a bit...too non-breaking
2025-12-07 11:07:54 <@277378565304090636> changed the capitalization to "macOS", I assume that's what you meant?
Mine18
2025-12-07 11:20:15 <@384009621519597581> hey can you add more image viewers like nomacs and imageglass?
username
2025-12-07 11:21:20 it feels like we are recreating the old website as a markdown message lol
Mine18
2025-12-07 11:21:34 maybe lol
Meow
2025-12-07 11:25:01 Sure it's the official name
AccessViolation_
2025-12-07 12:10:40 new spelling just dropped
2025-12-07 12:34:57 I linked to the corresponding presentation on youtube
2025-12-07 12:42:21 I don't like third party link shortening services and I'm starting to reach the character limit, so I don't have much space for large google docs URLs
2025-12-07 12:46:30 fun fact, wikipedia has an internal link shortening service: <https://w.wiki/GXdt>
2025-12-07 12:46:59 <https://meta.wikimedia.org/wiki/Special:UrlShortener>
Meow
2025-12-07 01:03:47 https://w.wiki/tr
AccessViolation_
2025-12-07 01:27:53 I was checking interesting letter combinations and `UwU` made me laugh
2025-12-07 01:28:10 pretty sure you can't control the shortlink so that was by chance
Meow
2025-12-07 03:37:12 The actual template documentation for short URL is https://w.wiki/tR
Traneptora
2025-12-08 06:28:58 typically will not use CFL cause it doesn't work with subsampling
2025-12-08 06:29:16 `magick compare -verbose` broken for anyone else?
2025-12-08 06:29:30
```
leo@gauss ~/Development/Traneptora/jxlatte :) $ magick compare -verbose -metric pae bike_5.png test.png null:-
52428 (0.8)
```
2025-12-08 06:29:35 it's not verbose
2025-12-08 06:30:09 it's also giving the wrong number
5peak
2025-12-08 10:55:58 Arty Fiscal Yntelligence
Orum
2025-12-08 11:00:26 I am curious what video that is from
Traneptora
2025-12-08 11:56:03 voice recognition is conventional software tbh and it functions similarly
jonnyawsom3
2025-12-09 11:30:47 Oh, it's a Discord scam again
2025-12-09 11:31:08 <@&807636211489177661>
veluca
2025-12-09 11:31:52 kinda getting out of hand
jonnyawsom3
2025-12-09 11:32:29 I'm curious if they created a new invite they're all using, or if it's just the one on the website
AccessViolation_
2025-12-10 12:12:25 are the spam bots these accounts that join with 0-day-old accounts?
2025-12-10 12:13:25 I saw some join a few days ago too but that time they didn't immediately do anything
Quackdoc
2025-12-10 02:19:04 time to make a spam role
Orum
2025-12-11 05:33:05 anyone know why levels look all wrong when opening a JXL in GIMP?
2025-12-11 05:33:20 blacks are raised waaaaaaaaaaay up....
2025-12-11 07:57:32 left = ||nomacs|| right = ||GIMP||
2025-12-11 07:58:55 screw it, I'm just going to decode to PPM with `djxl` and open that in GIMP
A homosapien
2025-12-11 08:14:20 also, try different `--colorspace` options to see if you get your desired output
Orum
2025-12-11 08:15:18 🤷‍♂️ by default djxl matches what I get in nomacs, which is what I expect
2025-12-11 08:15:27 GIMP is just doing something weird, but I can't figure out what
jonnyawsom3
2025-12-11 08:24:26 Decoding it as linear I'd guess
Orum
2025-12-11 08:34:27 that would be bizarre
cioute
2025-12-11 10:05:47 place your bets: how long will humanity use jpeg? btw we've already been using ipv4 for a quarter century
Exorcist
2025-12-11 10:14:00 Let me guess, GIMP also doesn't support multi-frame or multi-layer JXL
A homosapien
2025-12-11 10:34:29 Yup, GIMP doesn't support those features. I use Krita for multi layer JXLs
_Broken symmetry
2025-12-11 10:36:19 I know that djxl seems to decode lossy images differently from the way gimp does. I dunno what is going on, but when decoding a lossy image with gimp, it all looks like it's supposed to; when using djxl or applications using djxl, there appears to be "dithering". -# I amplified the "dithering" a bit in 02b, so it is visible. It seems to affect both value and color, but not 100% sure. -# it also happens when passing -m 1 -# Also gimp opens the lossy images as 32-bit floats, dunno if that helps -# JPEG XL decoder v0.11.1 0.11.1 [AVX2,SSE4,SSE2] || and default lossy encoding was used. I dunno if that is related, but I wanted to mention this a while ago. Lossless images are fine.
Orum
2025-12-11 10:40:01 this is a lossless image
_Broken symmetry
2025-12-11 10:46:43 I did convert the image to linear 16-bit float (lossless), and yes, nomacs doesn't display it properly. But when it is saved as a (16-bit linear) png it is displayed properly.
2025-12-11 10:47:17 Also, wanted to more generally share this, as I do not know if it's more of a known issue.
A homosapien
2025-12-11 10:47:23 This is intentional, it's done to avoid banding when decoding the image to 8-bit. GIMP decodes to 32-bit, so dithering is not done. To avoid dithering/banding I use `djxl input.jxl output.png --bits_per_sample 16`. And yes lossless images are not dithered
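A toy sketch of the banding-vs-dithering tradeoff described above (illustrative only, not libjxl's actual dither algorithm): adding noise of about one quantization step before rounding to 8 bits turns flat banded steps into fine grain.

```python
import random

def quantize_16_to_8(v16, dither=False):
    """Quantize a 16-bit sample (0..65535) down to 8 bits (0..255).
    Plain rounding collapses runs of nearby 16-bit values onto the same
    8-bit code (banding in smooth gradients); adding ~1 step of noise
    first trades that banding for grain."""
    scale = 65535 / 255
    noise = random.uniform(-0.5, 0.5) if dither else 0.0
    return max(0, min(255, round(v16 / scale + noise)))

# A smooth 16-bit ramp turns into just two flat 8-bit steps without
# dithering; with dithering the codes alternate along the ramp.
ramp = range(1000, 1200)
flat = [quantize_16_to_8(v) for v in ramp]
noisy = [quantize_16_to_8(v, dither=True) for v in ramp]
print(sorted(set(flat)))  # → [4, 5]
```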
Orum
2025-12-11 10:47:33 nomacs is displaying it correctly for me though (while GIMP is not) 🤔
_Broken symmetry
2025-12-11 10:50:17 Okay, that does help. I assume it is not set as default because of performance reasons?
2025-12-11 10:52:52 The top left is the 8-bit original; below that, the 16-bit float (linear) saved as png. On the top right, gwenview displaying the jxl; below that, nomacs displaying the jxl. Here it is lossless
A homosapien
2025-12-11 10:53:02 16-bit pngs are ridiculously large and inefficient
lonjil
2025-12-11 10:53:41 By default the output bit depth will be the same as the input bit depth (e.g. if the original image is 8 bits then djxl will output 8-bit images, which it dithers)
Orum
2025-12-11 10:53:46 interesting, but still a different situation than I'm in
2025-12-11 10:54:11 also gwen-view's rendering looks wrong too (it's desaturated)
_Broken symmetry
2025-12-11 10:54:34 yes, gwenview is funky
2025-12-11 10:55:38 It displays a grayscale image differently when in sRGB vs Gray (something, I forgot the default name), even though in gimp or krita it is displayed with the same brightness / levels.
Orum
2025-12-11 10:55:40 I thought JXL was supposed to solve some of the issues with colors not being decoded properly <:FeelsSadMan:808221433243107338>
_Broken symmetry
2025-12-11 10:57:05 Remember that they may not have the latest version implemented, or may have some other bugs. It's quite quick to happen
lonjil
2025-12-11 10:58:33 Gwenview can't even display sRGB PNGs correctly, so
2025-12-11 10:59:01 No amount of making a better image format can fix it 🙂
spider-mario
2025-12-11 11:54:36 oh, I don't remember that – what does it do wrong?
lonjil
2025-12-11 12:34:40 I never figured out exactly what it does wrong, but now that I'm trying it again, I can't seem to reproduce the problem, so maybe they fixed it.
Quackdoc
2025-12-11 12:36:58 gimp doesn't do any color management right at all
Traneptora
2025-12-11 12:49:41 is there any way to extract just the LF image from a jxl?
2025-12-11 12:50:04 without upsampling it
2025-12-11 12:50:44 `--downsampling=8` upsamples the LF image using some algorithm to hit same width/height
spider-mario
2025-12-11 01:36:43 how so?
lonjil
2025-12-11 01:40:35 I think it was some weirdness in the color management, it looked like it dithered zoomed images in linear light and then converted to sRGB
Quackdoc
2025-12-11 01:46:29 gimp can do some color management stuff, but if you use stuff with different profiles it breaks: it composites everything in its working space, which means blending breaks (see image). I don't think it can handle HDR transfers at all either, though that might have changed
RaveSteel
2025-12-11 01:56:00 GIMP cannot do HDR yet, this is a GTK3 limitation. Maybe they could work around that, but no such effort exists to my knowledge
Quackdoc
2025-12-11 01:59:37 They should still be able to deal with HLG/PQ transfers though
jonnyawsom3
2025-12-11 02:33:31 The issue is that the viewer can request a color space from libjxl. Generally, that should just be sRGB. Instead, they're signalling nothing or getting Linear, which they then don't color manage (or something along those lines)
Orum
2025-12-11 02:36:34 god, how do the gimp devs screw up color management that badly?
ignaloidas
2025-12-11 02:56:36 you can help those who don't want to color manage, and those who can, but can't help those who think they can but actually can't
jonnyawsom3
2025-12-11 03:43:36 It would probably be smart to just display the LF when downsampling by 8 with VarDCT, but currently it downsamples the entire image and makes a new, smaller DC frame
Quackdoc
2025-12-11 05:02:46 sRGB should be deprecated in favour of gamma 2.2 since it's more explicit
jonnyawsom3
2025-12-11 05:15:35 But most things still default or interpret 2.2 as sRGB anyway
Quackdoc
2025-12-11 05:17:39 what?
2025-12-11 05:18:25 maybe you meant that in reverse, but even then I would be very hesitant before saying most
lonjil
2025-12-11 05:49:14 It's pretty common, I believe, for programs to see a gamma 2.2 tag and assume it should be treated as sRGB
jonnyawsom3
2025-12-11 05:50:10 Oh no I'm getting mixed up with 0.45455.... This is why sleep is important
lonjil
2025-12-11 05:50:28 Meh, same thing
Quackdoc
2025-12-11 06:56:15 not common, some apps may do that, but they then are just explicitly wrong unless they handle sRGB as gamma22, which is fine.
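For reference on the distinction being argued here: sRGB's piecewise EOTF and a pure 2.2 power curve nearly coincide in the mid-tones but diverge near black, which is why treating one as the other mostly shifts shadow levels (and why a gAMA of ~0.45455, i.e. 1/2.2, gets lumped in with sRGB). A quick comparison:

```python
def srgb_to_linear(v):
    # Piecewise sRGB EOTF: a linear segment near black, a 2.4 power above.
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def gamma22_to_linear(v):
    # Pure power-law "gamma 2.2" transfer.
    return v ** 2.2

# The curves agree within ~0.004 at mid-gray but differ by an order of
# magnitude in deep shadows.
for v in (0.01, 0.1, 0.5, 0.9):
    print(f"{v:.2f}  srgb={srgb_to_linear(v):.5f}  g2.2={gamma22_to_linear(v):.5f}")
```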
lonjil
2025-12-11 07:28:53 <@853026420792360980> do you have any knowledge on this? iirc umbrielpng does it when fixing PNGs.
Traneptora
2025-12-11 07:42:09 It does not
2025-12-11 07:42:28 it follows the spec and strips fallback chunks
2025-12-11 07:42:45 cicp > iccp/srgb > chrm/gama
2025-12-11 07:43:02 it only strips chrm and gama if srgb is present
2025-12-11 07:43:43 if there is no color info, or only a chrm which matches bt709, then it inserts srgb chunk
2025-12-11 07:43:58 but if gama is present with no other trc it leaves it there
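The precedence Traneptora describes can be sketched as a simple ordered lookup (illustrative only; the chunk names are from the PNG spec, and umbrielpng's real logic also strips and inserts chunks rather than just picking a winner):

```python
# Precedence among PNG color chunks, per the ordering above:
# cICP beats iCCP/sRGB, which beat cHRM/gAMA.
PRECEDENCE = ["cICP", "iCCP", "sRGB", "cHRM", "gAMA"]

def effective_color_source(chunks):
    """Return the chunk that wins for color interpretation,
    or None if no color info is present at all."""
    for name in PRECEDENCE:
        if name in chunks:
            return name
    return None

print(effective_color_source({"sRGB", "gAMA", "cHRM"}))  # → sRGB
```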
lonjil
2025-12-11 07:44:32 ic ic
2025-12-11 07:45:37 so a gama + bt709 chrm will be left alone?
2025-12-11 07:49:34 Since I seem to have forgotten WHAT did it, now I'm real curious what software I noticed doing it a few years ago. Must've been something.
Traneptora
2025-12-12 04:15:20 that's correct, yea
lonjil
2025-12-12 04:15:43 Hooray
2025-12-12 04:16:05 I wonder if it was imagemagick that did something weird. Or maybe gimp?
Traneptora
2025-12-12 04:16:22 imagemagick has a longstanding bug where it confuses sRGB chunks with gama45
lonjil
2025-12-12 04:16:46 Ah, that'd be it, then
Traneptora
2025-12-12 04:18:45 https://discord.com/channels/794206087879852103/794206087879852106/1083761922287095869
spider-mario
2025-12-12 04:59:22 I think recent versions of ImageMagick have finally fixed it, or do I misremember?
Traneptora
2025-12-12 05:48:12 imagemagick is kinda broke rn
2025-12-12 05:49:16 https://discord.com/channels/794206087879852103/794206170445119489/1447475160218341396
5peak
2025-12-12 05:56:46 ImageTragick
Quackdoc
2025-12-12 06:17:41 https://tenor.com/view/ah-shit-here-we-go-again-ah-shit-cj-gta-gta-san-andreas-gif-13933485
novomesk
2025-12-12 07:48:18 nomacs didn't have color management at all. That started to change with 3.22.0 RC2 https://github.com/nomacs/nomacs/releases/tag/3.22.0-rc.2
Demiurge
2025-12-13 10:28:11 Maybe try graphicsmagick to see if it works any better. gm has some advantages and it's more stability-focused.
2025-12-13 10:29:15 Imagemagick changes things around more often.
Jyrki Alakuijala
2025-12-13 05:45:28 I wanted this to be part of the original decoding API but was overruled by everyone else, and I didn't insist. I thought getting the LF for progressive rendering and thumbnailers would have been nice, and would have used less memory bandwidth.
jonnyawsom3
2025-12-13 06:21:21 Huh, last time I brought it up, most seemed to like the idea
spider-mario
2025-12-13 07:56:27 from what I remember (i.e. not much) of the discussions we've had about this, the main reason for not having it was image features like patches and splines
_wb_
2025-12-13 09:18:00 For progressive rendering, it makes more sense to do it at full res, so we can also benefit from nice upsampling and indeed things like patches and splines. But for thumbnailers it would probably be good to also have a way to get a 1:8 image without upsampling. Though if the image uses excessive amounts of patches, the thumbnail might then look quite different from an actual downscaled version of the image.
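For context, the 1:8 DC image has one sample per 8x8 group of pixels, with partial groups rounded up. A quick sketch of the thumbnail dimensions such a no-upsampling API would hand back (assumed behavior for illustration, not an existing libjxl call):

```python
import math

def dc_size(width, height):
    """Dimensions of the 1:8 DC image: one sample per 8x8 group,
    with partial groups at the right/bottom edges rounded up."""
    return (math.ceil(width / 8), math.ceil(height / 8))

print(dc_size(7216, 5412))  # → (902, 677)
```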
spider-mario
2025-12-13 09:27:37 I could have sworn that we used to have a downsampling-aware implementation of splines
2025-12-13 09:27:41 but maybe not
Traneptora
2025-12-14 02:48:26 I was wondering for debugging purposes but I can probably just mod libjxl to do that
Jyrki Alakuijala
2025-12-16 12:23:54 It was more of an aesthetics based discussion. My personal aesthetics is to leave maximum control at higher architectural levels and to save memory bandwidth as much as easily feasible. Other people's aesthetic was to have most uniform representations and minimal changes in comparison to existing solutions, e.g., jpeg1 progression
2025-12-16 12:25:22 So, instead of handing out the 8x8 image it became a task of libjxl to upsample it and provide the progressive renderings at 1x1 sampling
jonnyawsom3
2025-12-16 12:33:48 Eg: <https://docs.google.com/document/d/1oT7K2h4Xf4E0ScUmsOQx0zXUVJj57qBwcsa3yK9SJr0> > Once jxl-rs supports it. Progressive decoding will use jxl-rs's internal sophisticated non-separable upscaling for DC data, matching the behavior of progressive legacy JPEGs (libjpeg-turbo). This avoids intermediate buffers in Chromium.
2025-12-16 12:34:21 I really wish we *weren't* just matching behaviour from 3 decades ago and actually used the new features....
2025-12-16 01:10:18 I wholeheartedly agree with that. The format has so many capabilities and options, we should allow the end user to take full advantage. Previously we discussed forcing features on/off, such as allowing EPF decoding for JPEGs, enabling/disabling photon noise, etc. Effectively making most encode options decode options too, so people can make variations/debugging images without hacking the library themselves
AccessViolation_
2025-12-17 06:14:49 this is a question about semantics. for the term "meta-adaptive context model", what makes it "meta-adaptive" rather than just "adaptive"? a context model can be fixed, an adaptive context model can be adapted to the specific data, and a meta-adaptive context model... I'm not sure
2025-12-17 06:16:05 or maybe an adaptive context model implies that something else is changed by it, while meta-adaptive implies it itself can be changed, in which case I don't see what other thing an adaptive context model would adapt
2025-12-17 06:20:05 or is "meta-adaptive" rather than "adaptive" just something to make it more clear that the context model itself is what is adapted, and ignoring that this may imply the existence of context models that adapt something other than themselves?
2025-12-17 06:21:28 I know it might sound like I'm trolling, but I'm genuinely wondering how it was interpreted semantically
Magnap
2025-12-17 06:23:33 That's how I've been interpreting it, fwiw
2025-12-17 06:26:02
2025-12-17 06:26:03 Jon explains it starting here
AccessViolation_
2025-12-17 06:27:50 oh hmm okay, then a distinction between meta-adaptive and adaptive context models might be feasible
2025-12-17 06:32:20
- static context model: everything is fixed, like compressing things with fixed huffman trees and dictionaries, and not allowing the payload to include these.
- adaptive context model: the coding tools used are fixed, for example an lz77 pass followed by huffman coding, but you're allowed to tune the parameters of these and provide dictionaries.
- meta-adaptive context model: the coding tools can not just be configured themselves, but also rearranged and disabled/enabled as signaled in the payload
Magnap
2025-12-17 06:34:30 BTW, my attempt at a brief summary: The probability distribution of each residual is modeled as a function of? * static: nothing * adaptive: which image is being decoded * meta-adaptive: which image is being decoded and information from the already decoded pixels and about the pixel to be decoded
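A toy illustration of the "meta" part, assuming nothing about the real JXL MA-tree syntax: the decision tree that maps already-decoded neighbors to a probability context is itself carried in the payload, so the context model, not just its statistics, adapts per image.

```python
def choose_context(tree, left, top):
    """Walk a signaled decision tree over properties of already-decoded
    neighbors; the leaf reached is the probability context to use."""
    props = {"left": left, "top": top}
    node = tree
    while isinstance(node, tuple):
        prop, threshold, if_gt, if_le = node  # inner node
        node = if_gt if props[prop] > threshold else if_le
    return node  # leaf: context id

# A tiny tree "signaled in the bitstream": split on the left neighbor,
# then on the top neighbor. A different image could ship a different tree.
tree = ("left", 128, ("top", 128, 3, 2), ("top", 128, 1, 0))
print(choose_context(tree, left=200, top=50))  # → 2
```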
2025-12-17 06:34:56 Oops, I typed too slow 😅
AccessViolation_
2025-12-17 06:51:48 oh that's a good one too
Demiurge
2025-12-17 07:06:14 Meta adaptive means that it can adapt the adaptation method
2025-12-17 07:06:28 That's what meta means
2025-12-17 07:06:39 :)
Magnap
2025-12-17 07:07:52 Really I don't think it's one scale. "does the model depend on the previously decoded data?" is a very different question than "how much information is signaled about the model?" which I'd argue isn't even that useful as a binary question (if I had to, I'd call the binary distinction implicit/explicit. But having a single bit that indicates one of 2 modes for decompression means your overall model is now no longer fully implicit, and so I don't think it's super useful)
AccessViolation_
2025-12-17 07:44:34 ohh that makes sense
2025-12-17 07:45:03 I see I should have read further down from jon's message that was linked. this explains it pretty well
2025-12-17 07:45:16
2025-12-17 07:45:33 that's my question answered :)
username
2025-12-18 02:09:52 does JXL support anything like this? https://frs.badcoffee.info/PAR_AcidTest/
2025-12-18 02:11:10 I know that images with non-square pixels are a really niche use-case but I'm curious as to whether it's something supported in any capacity or not
HCrikki
2025-12-18 02:50:45 intrinsic size in the header? basically a recommended display dimension (normally meant to accommodate 'retina' upscaling). according to the history document pdf, in other formats PAR needed to be done in exif metadata, but in jxl it can be signaled directly
TheBigBadBoy - 𝙸𝚛
2025-12-18 08:35:47 you can use these Exif properties (used for the DPI and aspect ratio, like the pHYs PNG chunk): XResolution, YResolution, ResolutionUnit
2025-12-18 08:36:24 these are the names used by `exiftool`
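A sketch of how a reader could turn that XResolution/YResolution pair into display dimensions (the 720x480 anamorphic example below is a hypothetical for illustration, not taken from the linked test page):

```python
from fractions import Fraction

def display_size(stored_w, stored_h, x_res, y_res):
    """Turn stored dimensions plus an XResolution/YResolution pair into
    display dimensions. Unequal resolutions mean non-square pixels: the
    pixel aspect ratio (width/height of one pixel) is YRes/XRes, and we
    scale whichever axis keeps us upsampling rather than downsampling."""
    par = Fraction(y_res, x_res)
    if par >= 1:
        return round(stored_w * par), stored_h
    return stored_w, round(stored_h / par)

# Hypothetical anamorphic case: 720x480 stored pixels with a 32:27
# pixel aspect ratio display as roughly 16:9.
print(display_size(720, 480, 27, 32))  # → (853, 480)
```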
jonnyawsom3
2025-12-19 12:54:50 While I was rummaging around the Chromium code, turns out they already do DC decoding of JPEGs if the full size would hit memory limits > // ...we'll only allow downscaling an image if both dimensions fit a > // whole number of MCUs or if decoding to the original size would > // cause us to exceed memory limits. The latter case is detected by > // checking the |max_numerator| returned by DesiredScaleNumerator(): > // this method will return either |g_scale_denominator| if decoding to > // the original size won't exceed the memory limit (see > // |max_decoded_bytes_| in ImageDecoder) or something less than > // |g_scale_denominator| otherwise to ensure the image is downscaled.
KonPet
2025-12-20 11:09:48 Hey, I have recently started looking into the JXL spec a bit, and I found out that there are two patent declarations, one by Cloudinary and one by Google. The declarations say that "the patent holder is prepared to grant a Free of Charge license to an unrestricted number of applicants [...]". So, if I wanted to implement this spec, does that mean that I would first have to ask Cloudinary and Google for a license like that? And if so, what license(s) do I even have to ask for? The declarations do not specify the relevant patents
dogelition
2025-12-20 01:40:10 this is actually an interesting question i think. i thought it would work the same way as with av1, where the patent license lets you `make, use, sell, offer for sale, import or distribute any Implementation`, whereas the libjxl one says: `make, have made, use, offer to sell, sell, import, transfer and otherwise run, modify and propagate the contents of this implementation of JPEG XL`. so my interpretation would be that you really do need to ask for a license first if you're making your own implementation 🤔
KonPet
2025-12-20 02:24:28 Those are implementation specific though? Is this relevant for my own implementation?
dogelition
2025-12-20 02:25:53 i mean the main difference i see is that av1 grants it for *any implementation*, which would presumably include writing your own from scratch. whereas the libjxl one explicitly only grants the patent license for `this implementation` (it's also not entirely clear to me if the av1 patent license is only part of the reference implementation or also the spec itself... in the av1 spec, it just links to the legal section on the aom site, but there the patent license is under "software". on the other hand the patent license file is included in the github repository of the spec)
KonPet
2025-12-20 02:26:14 Oh, I misread that
jonnyawsom3
2025-12-20 02:40:58 I know the JPEG XL developer stance is "Do whatever you want", but the legalese might've gotten lost in translation
veluca
2025-12-20 02:47:58 I can confirm that's my stance, and I can also say I wish ISO had better procedures around patents and around making standards publicly available
KonPet
2025-12-20 03:13:45 yes, completely agree
2025-12-20 03:15:26 fair, but I still want to do it correctly. Admittedly, mostly because I'm curious. I would be very surprised if anyone took notice of my decoder
veluca
2025-12-20 03:16:15 FWIW I'd be surprised if either Google or Cloudinary complained about even a commercial decoder or anything like that (but the usual IANAL applies)
KonPet
2025-12-20 03:16:31 ye
Tirr
2025-12-20 03:18:16 I didn't ask for a license for jxl-oxide at least 😉
KonPet
2025-12-20 03:18:34 by the way, how relevant is Microsoft's rANS patent to JXL? I've heard that it might apply here, but since they haven't sent in a patent declaration (as far as I can see) I'm not even sure if it would be possible for them to do anything
2025-12-20 03:21:26 (oh also, can I just say that I _love_ the jxl spec? After dealing with video for so long, seeing a spec like that was really refreshing. You can almost read it like a book, and it's not just a dump of all kinds of variables with undocumented abbreviations)
username
2025-12-20 03:22:13 there was a long message that went over why this isn't an issue but I can't find it. IIRC that patent is not an issue because it's about ANS with dynamic probabilities but JXL only uses static ones. and apparently Cloudinary has a protective patent or something just in case Microsoft tries something
Tirr
2025-12-20 03:22:16 regarding rANS patent: https://canary.discord.com/channels/794206087879852103/794206170445119489/1285622180507287613
veluca
2025-12-20 03:22:28 (thanks, it was a massive pain to write but happy to see that some people appreciate it)
KonPet
2025-12-20 03:22:54 it's the best spec I've read so far, by a long shot
2025-12-20 03:24:13 ah, thanks! That's very reassuring
lonjil
2025-12-20 03:46:22 I agree. Most ISO specs seem terrible, but the JXL one is quite nice.
2025-12-20 03:46:42 just claim that your own implementation is a derivative work of libjxl 😄
KonPet
2025-12-20 03:47:43 🤯
veluca
2025-12-20 04:01:35 that'd probably be even true if you read the code
2025-12-20 04:01:51 (which is probably easier to read than the spec, despite our best efforts)
KonPet
2025-12-20 04:04:16 to be honest, so far I've found the spec to be the most readable resource, at least for me
dogelition
2025-12-20 04:04:57 ship of theseus moment?
jonnyawsom3
2025-12-20 04:10:51 I'm sure you might've already seen it, but this is definitely worth a read too as a higher level explanation of the format https://arxiv.org/abs/2506.05987
KonPet
2025-12-20 04:23:48 I actually haven't. I just started reading the spec and went with it lol
Magnap
2025-12-20 04:44:45 you're underestimating the spec 😅 it's actually really good!
KonPet
2025-12-20 04:48:14 yes. It's an absolute dream compared to the stuff I usually have to deal with. Video specs are so painful to read that at this point I have pretty much completely given up
veluca
2025-12-20 09:39:23 thanks, it was a lot of work to write it so it's definitely nice to hear that 😄
ignaloidas
2025-12-21 12:13:29 Will add to the spec praise: it's really very readable, way more than the vast majority of standards. The only part that's somewhat hard to understand to me is the entropy coding stuff (and even that is in part because half of it is in the brotli spec)
veluca
2025-12-21 01:10:49 it was really not easy to write that part too, I remember spending a few hours figuring out how to write it and explaining it to unrelated people to try to figure out the best ways (and admittedly it's one of the more complex parts)
2025-12-21 01:12:10 (well, I shouldn't say "unrelated", but at least not as involved as others)
_wb_
2025-12-21 11:43:54 Both Cloudinary and Google are granting a royalty-free license to anyone, no need to ask first. The only condition is the defensive clause: you are not allowed to sue anyone claiming that YOU hold patents that are essential for JPEG XL: if you do that, then the royalty-free license from Google and Cloudinary automatically terminates as of the date such litigation is filed.
AccessViolation_
2025-12-21 02:24:58 hmm, I wonder what that's for 🟥🟩 🟦🟨
2025-12-21 02:25:07 :p
2025-12-21 02:32:03 that's pretty neat though
2025-12-21 02:33:06 I guess that works the other way around too: the fact that MS is shipping JPEG XL implies they themselves don't consider any of their patents to cover JPEG XL
2025-12-21 02:33:50 just in case people still think it does, this is basically confirmation from MS themselves
KonPet
2025-12-21 02:49:32 Now the only issue I have with jxl is the fact that the spec costs so much if you don't get it through e.g. your university like me
veluca
2025-12-21 02:53:45 I very much dislike that too
2025-12-21 02:54:35 we have been considering writing an "alternative" spec for some time
2025-12-21 02:54:46 but that's a lot of work
0xC0000054
2025-12-21 02:57:47 I understand the ISO needs to make money, but their specification costs are ridiculous for individual developers. Especially if you are working on freeware/open source. I ran into that a lot when writing my own parser for the ISOBMFF container used by AVIF.
Exorcist
2025-12-21 02:58:10 It is <:ugly:805106754668068868> that: - arithmetic coding is literally the same as range coding - arithmetic coding is patented - range coding is not patented
2025-12-21 02:59:09 USA patent system is completely dysfunctional
0xC0000054
2025-12-21 02:59:45 For what format? The JPEG patents should have expired years ago.
AccessViolation_
2025-12-21 05:21:09 if I understand things correctly, there is still the possibility that the PDF Association will pay to make the spec freely available since JPEG XL will be a part of PDF
KonPet
2025-12-21 05:30:06 That would be absolutely amazing
cioute
2025-12-21 07:34:46 info-faq update? new soft?
_wb_
2025-12-21 08:46:08 We may also give it another try to let the JPEG committee ask ISO to make the spec available publicly.
jonnyawsom3
2025-12-21 08:51:30 Don't tell ~~the teacher~~ ISO https://discord.com/channels/794206087879852103/1021189485960114198/1256224842894540831
2025-12-21 08:51:52 https://discord.com/channels/794206087879852103/1021189485960114198/1319682405073813534
2025-12-21 08:53:29 With all the companies interested in JPEG XL, I'm sure you could make it a group effort
adap
2025-12-21 10:54:07 Is there a windows thumbnail handler for jxl that works with hdr images?
Quackdoc
2025-12-21 11:08:29 jxl-winthumb should work? it should tonemap to srgb
2025-12-21 11:09:03 oh maybe it doesn't and just makes hdr thumbnails?
adap
2025-12-21 11:13:50 https://github.com/saschanaz/jxl-winthumb this one?
2025-12-21 11:14:50 doesn't seem to work for me
Quackdoc
2025-12-21 11:16:46 the thumbnail itself is probably in HDR, I'm wondering if it should be converted to srgb, I honestly dunno
2025-12-21 11:18:32 it would be worth opening an issue ticket in the repo probably
username
2025-12-21 11:23:31 If that's the case then that means Windows is using the codec incorrectly maybe? I don't know what jxl-winthumb is actually doing, I think the WIC system supports high bit depth but I have no clue how it handles color or if tonemapping even exists
Quackdoc
2025-12-21 11:25:38 I dunno how windows expects to handle thumbnails, but it probably wants jxl-winthumb to convert to srgb
A homosapien
2025-12-21 11:26:10 HDR thumbnails? Imagine a 10000 nit thumbnail <:KekDog:805390049033191445>
Quackdoc
2025-12-21 11:28:08 [chad](https://cdn.discordapp.com/emojis/862625638238257183.webp?size=48&name=chad)
jonnyawsom3
2025-12-23 03:15:05 With ReShade using the simple lossless encoder and JXL DNG being released in libraw soon, now I'm wondering how fast/simple lossless does on mobile, specifically my decade old android I imagine NEON could still do some work, but to output a (regular) compressed DNG takes a few seconds, so either effort 1 would make it instant or it would cook the CPU Squoosh is too old to test online, and ImageToolbox does other processing on top so I can't time it easily
_wb_
2025-12-23 11:38:04 e1 is very fast, the gap between e1 and e2 is large
Quackdoc
2025-12-23 11:51:59 itsnoaxksged in termux
2025-12-23 11:52:14 wow that was a fail, meant to say it is packaged in termux
veluca
2025-12-23 01:18:36 ``` ~ $ cjxl bb.d2.png -e 1 -d 0 bb-e1.jxl JPEG XL encoder v0.11.1 [NEON,NEON_WITHOUT_AES] Encoding [Modular, lossless, effort: 1] Compressed to 41421.7 kB (8.485 bpp). 7216 x 5412, 132.325 MP/s [132.32, 132.32], , 1 reps, 8 threads. ~ $ cjxl bb.d2.png -e 1 -d 0 bb-e1.jxl --num_threads 0 JPEG XL encoder v0.11.1 [NEON,NEON_WITHOUT_AES] Encoding [Modular, lossless, effort: 1] Compressed to 41421.7 kB (8.485 bpp). 7216 x 5412, 91.091 MP/s [91.09, 91.09], , 1 reps, 0 threads. ```
2025-12-23 01:18:39 this on a pixel 7a
2025-12-23 01:20:24 a bit better with some warming up: ``` ~ $ cjxl bb.d2.png -e 1 -d 0 bb-e1.jxl --num_threads 0 --num_reps 10 JPEG XL encoder v0.11.1 [NEON,NEON_WITHOUT_AES] Encoding [Modular, lossless, effort: 1] Compressed to 41421.7 kB (8.485 bpp). 7216 x 5412, geomean: 108.609 MP/s [91.07, 108.95], , 10 reps, 0 threads. ~ $ cjxl bb.d2.png -e 1 -d 0 bb-e1.jxl --num_reps 10 JPEG XL encoder v0.11.1 [NEON,NEON_WITHOUT_AES] Encoding [Modular, lossless, effort: 1] Compressed to 41421.7 kB (8.485 bpp). 7216 x 5412, geomean: 159.669 MP/s [134.80, 171.40], , 10 reps, 8 threads. ```
jonnyawsom3
2025-12-23 01:30:19 Last time I checked it out it had a bunch of warnings, but seems to be working this time. Dumping some results in https://discord.com/channels/794206087879852103/803645746661425173/1453017179376324628
Quackdoc
2025-12-23 01:32:21 I'll try and remember to tonight, but I have access to a bunch of unique phones including some interesting low end ones
2025-12-23 01:33:12 I have an Alcatel and a zte flip phone
2025-12-23 01:33:25 both ofc android under the hood
jonnyawsom3
2025-12-23 01:46:57 I'm surprised the threads don't scale better, but your single core is faster than my 2 core result haha
_wb_
2025-12-23 01:54:40 probably the memory bus is the bottleneck, not compute, because e1 should parallelize well in general.
2025-12-23 01:58:03 2048 x 2560, geomean: 246.379 MP/s [212.61, 260.87], , 10 reps, 1 threads. 2048 x 2560, geomean: 467.439 MP/s [358.05, 488.90], , 10 reps, 2 threads. 2048 x 2560, geomean: 748.359 MP/s [540.54, 773.31], , 10 reps, 4 threads. 2048 x 2560, geomean: 949.307 MP/s [656.44, 1041.58], , 10 reps, 6 threads. ^ this is on my macbook M3 Pro, also NEON but of course beefier than a phone
2025-12-23 01:59:56 not quite linear scaling but the extra threads do help quite a bit
2025-12-23 02:01:46 on phones you typically have quite extreme heterogeneous cores, with 1 or 2 very strong cores and then a bunch of weaker cores, and the weakest ones can be very slow compared to the strong one (but also much more energy friendly)
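Plugging the M3 Pro numbers above into a quick parallel-efficiency check shows the drop-off that points at a shared bottleneck like memory bandwidth:

```python
# Geomean throughput reported above for cjxl -e 1 on an M3 Pro.
single = 246.379
results = {1: 246.379, 2: 467.439, 4: 748.359, 6: 949.307}

# Efficiency = throughput / (threads * single-thread throughput);
# 100% would be perfectly linear scaling.
for threads, mps in results.items():
    eff = mps / (threads * single)
    print(f"{threads} threads: {mps:7.1f} MP/s, efficiency {eff:.0%}")
```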
jonnyawsom3
2025-12-23 02:02:44 We did find out standalone fast lossless scales better than cjxl, around 50% faster when utilising all threads on an x86 system
_wb_
2025-12-23 02:03:11 probably just using as many threads as there are cores (the cjxl default) is not really a good idea on a phone
2025-12-23 02:03:34 that's surprising, I wonder where the difference comes from
2025-12-23 02:04:06 there is a notion of 'effort' within e1, maybe standalone fjxl has a different default than libjxl?
jonnyawsom3
2025-12-23 02:04:18 <https://github.com/libjxl/libjxl/issues/4447#issuecomment-3303319178>
2025-12-23 02:05:02 We matched the effort, as both slowed down in newer builds the gap became smaller, so probably also memory management as the root cause
_wb_
2025-12-23 02:07:16 there are some things in the fast lossless code that can be adjusted, like the palette detection and the amount of sampling to do to determine the histograms for huffman codes. Those would also be the things that are global and not done in parallel (iirc).
2025-12-23 02:08:55 e.g. for a camera using e1 to produce a DNG, you probably want to skip palette completely, and use predefined huffman codes instead of image specific ones
jonnyawsom3
2025-12-23 02:13:23 The table there shows singlethreaded is faster too, and that same improvement scales across all threads: 1 thread +50 MP/s, 6 threads +300 MP/s, etc.
2025-12-23 09:43:49 Apparently I'm still getting over 2x faster with standalone, even with our fixes to main that got cjxl back to full speed ```wintime -- fjxl Test.png nul 2 1000 8 549.354 MP/s 8.925 bits/pixel PageFaultCount: 4353136 PeakWorkingSetSize: 77.62 MiB QuotaPeakPagedPoolUsage: 31.77 KiB QuotaPeakNonPagedPoolUsage: 7.422 KiB PeakPagefileUsage: 122.8 MiB Creation time 2025/12/23 21:38:24.720 Exit time 2025/12/23 21:38:42.138 Wall time: 0 days, 00:00:17.417 (17.42 seconds) User time: 0 days, 00:00:08.156 (8.16 seconds) Kernel time: 0 days, 00:01:18.359 (78.36 seconds)``` ```wintime -- cjxl --disable_output Test.png -d 0 -e 1 --num_reps 1000 --num_threads 8 JPEG XL encoder v0.12.0 029cec42 [_AVX2_] {Clang 20.1.8} Encoding [Modular, lossless, effort: 1] Compressed to 10514.7 kB including container (8.898 bpp). 6875 x 1375, median: 245.813 MP/s [146.618, 265.112] (stdev 103.270), 1000 reps, 8 threads. PageFaultCount: 2967191 PeakWorkingSetSize: 93.27 MiB QuotaPeakPagedPoolUsage: 33.65 KiB QuotaPeakNonPagedPoolUsage: 6.898 KiB PeakPagefileUsage: 145.3 MiB Creation time 2025/12/23 21:39:03.560 Exit time 2025/12/23 21:39:42.760 Wall time: 0 days, 00:00:39.199 (39.20 seconds) User time: 0 days, 00:00:03.265 (3.27 seconds) Kernel time: 0 days, 00:03:20.062 (200.06 seconds)```
2025-12-23 09:45:57 0.03 bpp smaller with cjxl, but setting fjxl to effort 127 only makes it slower instead of smaller
_wb_
2025-12-23 10:06:57 hm, also a large difference in user vs sys time
2025-12-23 10:10:37 looks like the bulk of the time is spent in sys stuff, not usr
2025-12-23 10:11:00 I'm getting this: ``` $ /usr/bin/time cjxl 100.png 100.png.jxl -d 0 -e 1 --num_threads 8 --num_reps 1000 JPEG XL encoder v0.11.1 0.11.1 [NEON_BF16,NEON] Encoding [Modular, lossless, effort: 1] Compressed to 8476.1 kB (12.934 bpp). 2048 x 2560, median: 1088.628 MP/s [301.56, 1117.64] (stdev 293.449), , 1000 reps, 8 threads. 4.98 real 26.12 user 0.30 sys ```
2025-12-23 10:12:00 or with something closer to the current git version: ``` JPEG XL encoder v0.12.0 2a4f12b6 [_NEON_] {AppleClang 17.0.0.17000319} Encoding [Modular, lossless, effort: 1] Compressed to 8476.1 kB (12.934 bpp). 2048 x 2560, median: 1393.966 MP/s [235.752, 1440.995 (stdev 176.341), 1000 reps, 8 threads. 4.25 real 20.22 user 0.30 sys ```
2025-12-23 10:13:11 so most of the time spent in userland, not in kernel
2025-12-23 10:14:57 but you're getting almost everything in kernel time, that's weird
jonnyawsom3
2025-12-23 10:30:23 `wintime` is good for relative comparisons, but I'm not sure how it compares to the standard `time`, so that might be part of it <https://github.com/cbielow/wintime>
2025-12-23 10:38:16 Though, over 5x higher MP/s does seem like quite a lot, even if the time formatting were different
Demiurge
2025-12-23 10:41:18 You normally want to disregard kernel time and look only at user time when benchmarking
2025-12-23 10:41:49 kernel time is usually things like: waiting to communicate with hardware devices.
jonnyawsom3
2025-12-23 10:43:51 In my case Kernel means CPU time, and I think User is OS level stuff, then Wall as end user/real time. It's annoying they don't just match Linux
Demiurge
2025-12-23 10:44:30 When is that ever the case?
2025-12-23 10:45:03 kernel time is kernel time. user time is user time.
jonnyawsom3
2025-12-23 10:45:04 There, apparently https://discord.com/channels/794206087879852103/794206170445119489/1453141072778760328
2025-12-23 10:46:40 Hmm, actually, I wonder if the odd timing for me is related to this https://github.com/libjxl/libjxl/issues/4547 The Kernel time is 25s per core on my cjxl run, but Wall is 39s, so *theoretically* there's 14s of overhead. fjxl would be 8s of overhead (I have no idea what I'm saying)
RaveSteel
2025-12-23 10:48:39 If you have WSL it would be interesting to compare to bare metal performance
Demiurge
2025-12-23 10:50:13 https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-getprocesstimes
2025-12-23 10:50:44 Kernel and user always mean kernel and user. It is not inverted between Linux and Windows unless there's a bug in "wintime" that switches them around by accident.
2025-12-23 10:54:35 https://github.com/cbielow/wintime/blob/main/WinTime/Time.cpp
2025-12-23 10:57:35 Wow. There IS a bug in wintime
2025-12-23 10:57:50 In Time.h the struct is defined in a different order than in Time.cpp
jonnyawsom3
2025-12-23 10:58:27 See, I'm not insane, just stupid
Demiurge
2025-12-23 10:58:46 The order is accidentally switched around
2025-12-23 10:58:55 You found a bug!
2025-12-23 10:59:09 Well I guess we both did
2025-12-23 10:59:30 Anyways you should probably let the author know since you're using their tool
jonnyawsom3
2025-12-23 11:37:57 I made a PR instead, no clue if it'll get merged though since the last release was 2023
A homosapien
2025-12-24 12:22:43 I'll just compile it
Jyrki Alakuijala
2025-12-25 03:02:56 Very good numbers. This is far better than "launchable"; these numbers are Grrreat!
Orum
2025-12-26 12:33:17 so... no new libjxl in 2025?
jonnyawsom3
2025-12-26 01:38:01 Apparently not, but most devtime *is* on jxl-rs, plus holidays so not surprising
_wb_
2025-12-26 09:22:23 2026 will be the year of jxl I guess 🙂
veluca
2025-12-26 11:50:18 can confirm most of the devtime being on jxl-rs 😛
2025-12-26 11:50:42 I give even odds to libjxl 0.12/1.0 or jxl-rs 1.0 coming first
Tirr
2025-12-26 11:59:30 so there's some chance that libjxl 0.12 never comes <:KekDog:805390049033191445>
veluca
2025-12-26 12:15:07 we'd need a jxl-rs encoder for that 😛
Tirr
2025-12-26 12:18:13 or libjxl 1.0 instead
Jim
2025-12-26 12:26:20 December 25, 2026 to be renamed Jxmas.
Meow
2025-12-26 12:55:41 I'd like to see it
DZgas ะ–
2025-12-26 01:14:49 1.0? ...have they solved the fading problem? 😗
jonnyawsom3
2025-12-26 02:13:28 1.0 doesn't mean it's perfect, it just means it won't explode. There are still a lot of ideas that could be done for speed and density, along with fairly big fixes like desaturation and chunk reordering
Demiurge
2025-12-26 02:21:04 Fixing desaturation in libjxl-tiny is actually, I'd say, more important than in regular libjxl. Would be a shame if something like that got permanently ingrained in hardware, quite literally cementing it into JXL's reputation as a format.
2025-12-26 02:26:07 The desaturation follows a predictable pattern, so it's probably possible to come up with a very cheap way of counteracting and mitigating it during the quantization step.
jonnyawsom3
2025-12-26 02:37:59 It's the same code, so fixing one fixes both
Demiurge
2025-12-26 04:17:58 Same? I thought they were completely separate implementations.
jonnyawsom3
2025-12-26 04:20:15 The encoding yes, the quant table no. Both just use the default
2025-12-26 04:46:10 Right now the only way to change the quant table would be `kQuantModeRAW`, but I can't find the API to set it
2025-12-26 04:47:36 I did want to try messing with the lossy modular table too, to prove the fix since it's stored as a LUT
Demiurge
2025-12-26 04:59:05 There is actually no need to use a custom quant table
2025-12-26 05:00:34 All that is needed is to take into account and approximately counteract the bias
2025-12-26 05:01:23 All that a quant table does is determine the allocation of bits. Not how those bits are rounded. The problem is how they are being rounded. Not where they are allocated.
2025-12-26 05:02:15 So changing the quant table is completely the wrong solution
jonnyawsom3
2025-12-26 05:02:45 Well if the quant table is the problem, I think it makes more sense to fix the quant table than to implement a convoluted workaround that has to do a bunch of color theory
Demiurge
2025-12-26 05:03:13 But isn't libjxl using a default, implicit table?
2025-12-26 05:03:33 In order to save costs of encoding a custom one
jonnyawsom3
2025-12-26 05:03:41 Yes, which is why it's not been fixed yet
veluca
2025-12-26 05:04:10 I mean the cost of encoding a couple of custom ones is minimal
2025-12-26 05:04:19 the effort of finding a good one otoh...
Demiurge
2025-12-26 05:04:25 But that's still not the right question to be asking, because the table is irrelevant anyway. It is the rounding method
2025-12-26 05:05:15 That's like throwing more bits at the problem instead of changing the underlying cause
jonnyawsom3
2025-12-26 05:05:44 ... So instead of spending a hundred bytes on parametric tables, you want to implement an entire saturation counteraction system
2025-12-26 05:06:09 While rounding away from 0, which will increase the filesize anyway
veluca
2025-12-26 05:06:23 rounding away from 0 will introduce the opposite problem
Demiurge
2025-12-26 05:06:58 The underlying cause is that there is inherent bias in the rounding method that could probably be "cancelled out" with some quick and dirty addition of some opposing bias
2025-12-26 05:08:30 A mathematically correct solution would probably be annoying to implement but a "good enough" quick and dirty solution should be much better than nothing
veluca
2025-12-26 05:12:02 I'm curious, have you tried seeing if turning off (or on) https://github.com/libjxl/libjxl/blob/main/lib/jxl/enc_group.cc#L339 improves or worsens the situation?
2025-12-26 05:12:44 (or, relatedly, if we tried just rounding towards 0, which I was *sure* there was code for but I can't find it)
Demiurge
2025-12-26 05:20:11 The encoder rounds and approximates and resamples gamma-encoded values as if they were linear values, which causes brightness loss too
2025-12-26 05:20:28 Or desaturation in the case of color channels
2025-12-26 05:21:01 And there is no heuristic or hack to attempt to mitigate or compensate for that one-sided trend or bias towards zero
2025-12-26 05:23:35 Doing the actual computations properly would probably not be practical but that doesn't mean there isn't any way to improve or mitigate it
2025-12-26 05:27:08 In gamma-encoded values 0.5 is not half as bright as 1.0 but is actually a lot darker. And that might be why there's desaturation happening too.
2025-12-26 05:34:07 If that bias can even be roughly approximated and modeled, then that rough approximation can be subtracted from the image before encoding, for example?
_wb_
2025-12-26 05:42:30 I don't think the issue is with DC precision, but rather high freq B getting quantized too aggressively which ends up having a desaturation effect. Or are there examples where the desaturation problem also happens with perfectly flat colors / smooth gradients? In the examples I remember, there's some high freq texture (possibly a subtle one) that gets desaturated.
jonnyawsom3
2025-12-26 05:43:52 The underlying problem is there's too much blue in the XYB. Testing showed it's because there's a very narrow range of B where some saturated colors have 0 blue. If we increase the precision with the quant table, we stay in the range. If we round up, we still go out of the range and desaturate. If we try to bias, we risk giving incorrect colors where no issue existed before
2025-12-26 06:04:34 What's the evidence for this?
Orum
2025-12-26 06:07:36 I just want a new version so I can start using `-d 0 -e 1` again
jonnyawsom3
2025-12-26 06:51:38 Effort 4 was worse than 7, assuming that's the main difference https://discord.com/channels/794206087879852103/794206087879852106/1397220377033445377