JPEG XL

coverage

Post links to articles, blog posts, reddit / hackernews / forum posts, media coverage about or related to JXL here!

Kaldaien
2025-02-20 02:09:53
Yeah, that's the entire problem though. We used to be able to share HDR images and have the Discord client render them in HDR.
2025-02-20 02:10:18
Now we have HDR->SDR tonemapping, and it's broken to add insult to injury.
jonnyawsom3
2025-02-20 02:19:28
WebP can only do 8bit, so they couldn't have been HDR. At least not really
AccessViolation_
2025-02-20 08:11:31
it'd be interesting to see what 8-bit HDR looks like. the color banding would be a lot worse for sure
spider-mario
2025-02-20 08:56:50
you can encode a PQ or HLG image to an 8-bit JPEG with an ICC profile that has a CICP tag, and open that in Chrome
2025-02-20 08:57:03
not so surprisingly, HLG survives this better than PQ
TheBigBadBoy - 𝙸𝚛
The OG report to the resident Discord Dev https://discord.com/channels/794206087879852103/805176455658733570/1331809146156220416
2025-02-20 03:59:06
oh my bad, was supposed to send it to <#805176455658733570> <:KekDog:805390049033191445>
Kaldaien
WebP can only do 8bit, so they couldn't have been HDR. At least not really
2025-02-21 06:20:32
They weren't WebP until this recent change. They were actual AVIF. You just add an ICC profile and Chromium/Electron displays them in HDR. At least until we got screwed over 😦 Now 10- and 12-bit AVIF images have broken tonemapping, and 8-bit HDR AVIF images work (not that anyone should be encoding HDR AVIF images at 8-bit).
2025-02-21 06:21:14
Now it's simply not viable to paste AVIF images into Discord. We have to use PNG instead.
2025-02-21 11:05:57
Who thought this was a good idea, I wonder? So-called tonemapped AVIF on the left.
_wb_
2025-02-21 11:12:40
yuck, what kind of tone mapping is that?
Quackdoc
2025-02-21 11:12:49
they are testing a different tonemapper now
_wb_ yuck, what kind of tone mapping is that?
2025-02-21 11:13:37
"custom reinhard" so I dunno, didn't look at the code
2025-02-21 11:16:29
looks like it's https://docs.opencv.org/4.x/d0/dec/classcv_1_1TonemapReinhard.html with some custom params
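For reference, OpenCV's Reinhard tonemapper is configured through a gamma plus three parameters (intensity, light adaptation, colour adaptation); a minimal sketch of how it is typically invoked, with placeholder parameter values rather than whatever custom values lilliput actually passes:
```cpp
#include <opencv2/opencv.hpp>

int main() {
  // Load a linear-light HDR image (e.g. Radiance .hdr); pixels are 32-bit float.
  cv::Mat hdr = cv::imread("input.hdr", cv::IMREAD_UNCHANGED);

  // gamma, intensity, light_adapt, color_adapt -- placeholder values,
  // not Discord's actual settings.
  cv::Ptr<cv::TonemapReinhard> reinhard =
      cv::createTonemapReinhard(2.2f, 0.0f, 1.0f, 0.0f);

  cv::Mat sdr;
  reinhard->process(hdr, sdr);          // output is float in [0, 1]

  sdr.convertTo(sdr, CV_8UC3, 255.0);   // quantize to 8-bit for saving
  cv::imwrite("tonemapped.png", sdr);
  return 0;
}
```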
_wb_
2025-02-21 11:16:45
the whole state of HDR is giving me flashbacks to the time that you had Apple images with gamma 1.8 and PC images with gamma 2.2 and things weren't signalled so you'd just randomly get wrong colors
Quackdoc
2025-02-21 11:17:10
https://github.com/discord/lilliput/blob/b991bde996d1bb7d865a5c0f71860eef37ea1317/avif.cpp#L97
_wb_
2025-02-21 11:18:19
except now there's also gain maps which makes things even worse and more complicated
Quackdoc
2025-02-21 11:21:16
tone mapping is insanely hard to do; gain maps' complexity comes from lack of standardization and conventions. once they come it will be a large improvement over tonemapping
2025-02-21 11:22:01
the real competition will be AI tonemapping vs gain maps
RaveSteel
2025-02-21 11:45:08
Is there any POC of AI tonemapping? Or just an idea floating around?
CrushedAsian255
Kaldaien Who thought this was a good idea, I wonder? So-called tonemapped AVIF on the left.
2025-02-21 11:55:03
the one on the left is also showing as HDR for me
spider-mario
2025-02-21 11:58:12
uh, indeed
2025-02-21 11:58:33
is it possible that they just forgot to strip the tags that make it HDR after tonemapping or something?
jonnyawsom3
spider-mario is it possible that they just forgot to strip the tags that make it HDR after tonemapping or something?
2025-02-21 12:03:58
Seems like it
spider-mario
2025-02-21 12:05:02
I sense that I’m going to have nightmares of that image
Mine18
Kaldaien Who thought this was a good idea, I wonder? So-called tonemapped AVIF on the left.
2025-02-21 02:12:32
<@1156997134445461574>
Demiurge
Kaldaien Who thought this was a good idea, I wonder? So-called tonemapped AVIF on the left.
2025-02-21 02:40:46
The preview on the right looks bright, washed out and desaturated
2025-02-21 02:41:08
The one on the left looks like a black rectangle.
Quackdoc the real competition will be AI tonemapping vs gain maps
2025-02-21 02:43:46
Or signaled parametric tone mapping like Jon said
2025-02-21 02:44:46
Since gain maps are already typically algorithmically generated by a parametric function
Scott Kidder
Mine18 <@1156997134445461574>
2025-02-21 07:26:16
thanks for the tag - yeah, is there a tone-mapping method you'd recommend here? another suggestion was to apply bt.2446-a, wdyt?
2025-02-21 07:27:48
we'll get this dialed in soon, appreciate the constructive suggestions!
Mine18
2025-02-21 07:34:00
<@862810094734934036> shout your complaints to this man
_wb_
2025-02-21 09:04:01
I like what libjxl/tools/hdr/local_tonemap does
Quackdoc
Scott Kidder thanks for the tag - yeah, is there a tone-mapping method you'd recommend here? another suggestion was to apply bt.2446-a, wdyt?
2025-02-21 09:42:25
bt.2446a is "reliable": it doesn't look the best, but it reliably looks ok
_wb_ I like what libjxl/tools/hdr/local_tonemap does
2025-02-21 09:42:38
what algorithm does libjxl use for tonemapping? 2390?
Kaldaien Who thought this was a good idea, I wonder? So-called tonemapped AVIF on the left.
2025-02-21 09:54:08
how was the image on the right encoded?
_wb_
Quackdoc what algorithm does libjxl use for tonemapping? 2390?
2025-02-21 10:35:32
Good question. <@604964375924834314> ?
spider-mario
2025-02-21 11:03:40
yes https://github.com/libjxl/libjxl/blob/f73fc6f045268e2d12c17490ac490e0721116657/lib/extras/tone_mapping.cc#L37-L39
2025-02-21 11:04:54
https://github.com/libjxl/libjxl/blob/f73fc6f045268e2d12c17490ac490e0721116657/lib/jxl/cms/tone_mapping-inl.h#L33
2025-02-21 11:06:17
the “YRGB” variant, followed by some ad hoc gamut mapping
2025-02-21 11:06:39
(with configurable trade-off between preserving saturation and preserving luminance)
Quackdoc
2025-02-22 12:50:55
2390 is aesthetically the most pleasing I found, but it falls on its face in some images pretty hard.
2025-02-22 12:53:15
hard to say which I prefer. depends on content I guess
spider-mario
2025-02-22 08:15:00
yeah, it depends to what extent the content that falls outside of the destination range is “true” content
2025-02-22 08:15:06
it can end up flattened quite a bit
jonnyawsom3
2025-02-22 12:16:28
In this case I think the issue is the retained HDR ICC and not the tonemapping algorithm
Demiurge
2025-02-22 07:51:38
What does 2390 do, try to preserve the perceptual brightness but compress the dynamic range for values that are too bright?
2025-02-22 07:52:14
Or does it just chop
_wb_
2025-02-22 09:06:38
Just chopping is evil, I don't think it is ever a good idea. It makes things irreversible.
spider-mario
Demiurge What does 2390 do, try to preserve the perceptual brightness but compress the dynamic range for values that are too bright?
2025-02-22 11:40:27
basically this
2025-02-22 11:40:45
it keeps the middle of the range linear and rolls off the extremes
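Concretely, the highlight roll-off in BT.2390 is a Hermite-spline knee applied to the normalized PQ signal; a rough sketch of just that part (the full EETF also normalizes the source range and handles the dark end separately):
```cpp
// Rough sketch of the BT.2390-style highlight roll-off. `e` is the signal in
// the normalized PQ domain [0, 1]; `maxLum` is the target display peak, also
// as a normalized PQ value. Below the knee the signal passes through
// unchanged ("keeps the middle of the range linear"); above it, a Hermite
// spline rolls off smoothly toward maxLum instead of clipping.
double Bt2390HighlightRollOff(double e, double maxLum) {
  const double ks = 1.5 * maxLum - 0.5;  // knee start
  if (e < ks) return e;
  const double t = (e - ks) / (1.0 - ks);
  const double t2 = t * t, t3 = t2 * t;
  return (2 * t3 - 3 * t2 + 1) * ks +
         (t3 - 2 * t2 + t) * (1.0 - ks) +
         (-2 * t3 + 3 * t2) * maxLum;
}
```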
Demiurge
2025-02-22 11:47:21
Why would it squish the darks though?? :(
damian101
Demiurge Why would it squish the darks though?? :(
2025-02-22 11:48:59
in some implementations it doesn't...
2025-02-22 11:50:31
all the black level compensation I've seen so far made it worse...
2025-02-22 11:50:45
generally I mean
2025-02-22 11:53:07
but that was for 0.1 nits, because my monitor has 1000:1 contrast, and peak was 100 nits
Demiurge
2025-02-23 12:26:38
Yeah black compression sounds like a bad idea
2025-02-23 12:27:03
Since bad monitors have bright blacks anyways
Quackdoc
2025-02-23 12:42:57
2390 is a basic implementation that is really designed to be "good enough". 2446a was designed to retain more information without degradation. A lot of people mistakenly believe it was "designed for well mastered content"; this isn't really true, but is rather a symptom of its design. bt.2446 was actually intended to be used to do SDR->HDR and HDR->SDR:
> An important aspect of an HDR eco-system involves the meaningful conversion between SDR content and HDR, for example to enable existing content to be brought into an HDR workflow, or to check SDR content on an HDR display device. Likewise, HDR content may have to be converted to SDR for monitoring purposes, or for distribution. Conversions whereby SDR content is placed into an HDR container without changing the dynamic range is generally termed ‘mapping’, and this is discussed in Report ITU-R BT.2390.
> instead, the focus of this Report is on conversion methods whereby the dynamic range is changed, either from SDR levels to HDR, or vice-versa. The goal of such methods is to reproduce content at a different dynamic range without altering the visual experience to the extent possible. One goal of tone mapping HDR content is to allow the output to be intercut with natively produced SDR content, whereas a goal of inverse tone mapping of SDR content is to allow the output to be intercut with natively produced HDR content
2025-02-23 12:45:47
bt.2446a is *designed* for studios who master explicitly between 100 and 1000 nits:
> the tone mapping and inverse tone mapping techniques described in this section map content between 100 cd/m2 and 1 000 cd/m2 and vice-versa. These levels were chosen because most SDR content is nominally produced at 100 cd/m2, while HDR content may require further conversions between the PQ and HLG systems, which is most straightforward to accomplish for signals that do not exceed 1 000 cd/m2, as discussed in Report ITU-R BT.2408, § 6
2025-02-23 12:46:44
this is not necessarily the same as "well mastered content"; it just so happens that studios who follow the guidelines of sticking to under 1000 nits in HDR have a tendency to have better graded content
2025-02-23 12:50:14
(btw im guilty of saying it's for well designed content, it's not like most people will care or know how to master hdr)
spider-mario
Demiurge Why would it squish the darks though?? :(
2025-02-23 08:37:04
to prevent black crush, under the assumption that a display with a minimum luminance of 0.1 cd/m² would render 0.05 cd/m² and 0.005 cd/m² the same (as 0.1 cd/m²) otherwise
2025-02-23 08:37:37
if the display doesn’t do that then indeed there is no need to raise those values to ones that are higher and slightly less distinct from one another
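The matching dark-end term in BT.2390 is a lift rather than a clip; a small sketch of it, where `b` stands for the display's minimum luminance expressed as a normalized PQ value (this runs after the highlight knee):
```cpp
#include <cmath>

// Sketch of the BT.2390-style black lift: instead of mapping everything below
// the display minimum to the same value, the whole dark end is raised
// slightly, with the (1 - e)^4 factor fading the lift out quickly toward the
// highlights, so nearby shadow values stay distinguishable from one another.
double Bt2390BlackLift(double e, double b) {
  return e + b * std::pow(1.0 - e, 4.0);
}
```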
_wb_
2025-02-23 11:00:19
Don't all displays just pretend that black is black?
spider-mario
2025-02-23 11:00:55
I’m not sure / ICC conversions don’t, except with “black point compensation” enabled
2025-02-23 11:02:25
https://community.adobe.com/t5/color-management-discussions/should-i-match-rgb-values-to-compensate-after-changing-colour-profile/m-p/11140621#M2790
> Black Point Compensation should normally ***always*** be on. There is no commonly occurring reason to turn it off. It prevents hard clipping when remapping the black point, giving a soft taper instead.
2025-02-23 11:03:48
https://www.argyllcms.com/doc/CrushedDisplyBlacks.html
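For what it's worth, the core of black point compensation is just a linear remap in relative XYZ that keeps the white point fixed and sends the source black point to the destination black point; a minimal sketch of that idea (not the exact Adobe/lcms implementation):
```cpp
// Minimal sketch of black point compensation: a linear map in relative XYZ
// that leaves the white point where it is and moves the source black point
// onto the destination black point, so shadows taper instead of clipping.
// Luminances are normalized so that the white point has Y = 1.0.
struct Xyz { double x, y, z; };

Xyz BlackPointCompensate(Xyz v, Xyz white, double srcBlackY, double dstBlackY) {
  // One scale factor derived from the Y of the two black points, applied to
  // all three channels; the offset is chosen so white maps exactly to white.
  const double s = (1.0 - dstBlackY) / (1.0 - srcBlackY);
  return {s * v.x + (1.0 - s) * white.x,
          s * v.y + (1.0 - s) * white.y,
          s * v.z + (1.0 - s) * white.z};
}
```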
_wb_
2025-02-23 11:04:28
I always thought black point compensation was mostly something that matters for print/CMYK where the 'black' can be not very dark
2025-02-23 11:06:53
For a display, can it even know the actual minimum luminance? The amount of backlight getting through is something that depends only on the display technology, but the amount of ambient light getting reflected depends on the ambient light, doesn't it?
damian101
_wb_ For a display, can it even know the actual minimum luminance? The amount of backlight getting through is something that depends only on the display technology, but the amount of ambient light getting reflected depends on the ambient light, doesn't it?
2025-02-23 11:13:59
Many HDR displays definitely take not only peak luminance but also black level into account when tonemapping.
2025-02-23 11:14:34
I doubt that's always for the better, though.
_wb_
2025-02-23 11:17:02
I don't see why you ever would want to clip the darks as opposed to just mapping 0 nits to the display black level and basically just adding the black level to everything
2025-02-23 11:18:59
So if the black level is 0.1 nits then you render 0 nits as 0.1 nits and 5 nits as 5.1 nits, that seems way better than correcting for the black level and rendering 5 nits as exactly 5 nits but at the cost of mapping everything between 0 and 0.1 nits to the same thing.
damian101
2025-02-23 11:20:34
Yeah, but in the implementations I have tried, the transition was much too smooth, it ruined the contrast of the image, lightened dark areas way too much and made the image a lot less appealing.
spider-mario
2025-02-23 11:51:59
maybe part of it is that there are several ways to do it and you might want the flexibility to choose
2025-02-23 11:52:47
when grading HDR, as far as the highlights are concerned, a display that clips is definitely preferable over one that forces tone mapping
2025-02-23 11:53:02
I can see the same logic applying to the shadows
damian101
spider-mario when grading HDR, as far as the highlights are concerned, a display that clips is definitely preferable over one that forces tone mapping
2025-02-23 11:56:24
I don't think so... not if that's the only option you get with a display.
spider-mario
2025-02-23 11:59:28
for general use, maybe not, but for grading, definitely yes: you want the display to render its advertised range without any such processing, and then you restrict your export to that range to make sure that you are not exporting anything you couldn't see
2025-02-23 11:59:51
of course, your own processing might involve some tone mapping of your choice, in software
damian101
spider-mario for general use, maybe not, but for grading, definitely yes: you want the display to render its advertised range without any such processing, and then you restrict your export to that range to make sure that you are not exporting anything you couldn't see
2025-02-23 12:00:45
makes sense, I guess
spider-mario
2025-02-23 12:05:18
on the 2021+ MacBook Pros, there's the day-to-day “Apple XDR” mode that goes up to 1600 nits (which it can do over parts of the display) with tone mapping, and there is the PQ reference mode that stops at 1000 nits (which it can sustain over the whole screen) and then clips
Demiurge
spider-mario if the display doesn’t do that then indeed there is no need to raise those values to ones that are higher and slightly less distinct from one another
2025-02-23 02:40:51
Yeah, displays almost never do that though. That would be clipping
2025-02-23 02:45:24
Hmm... I sort of understand though. When reproducing content on a crappy screen, the shadows will inevitably look lighter, but the rest of the dynamic range can be adjusted to approximately match the same perceived level of brightness maybe.
2025-02-23 02:46:23
Stuff on both the light and dark side will need to be squished and squeezed into the available dynamic range
2025-02-23 02:46:38
To prevent clipping
2025-02-23 02:47:09
I think I understand the theory
HCrikki
2025-02-24 04:34:15
https://pdfa.org/jpegli-works-in-pdf/
Meow
2025-02-24 05:36:51
> Google’s Jpegli promises smaller JPEG images (and so, smaller PDF files) with no change to reader software! Check it out for yourself!
CrushedAsian255
2025-02-24 08:12:45
Still not an excuse to not support JPEG XL
lonjil
2025-02-24 09:30:41
I mean there's like a 99% chance JXL will be in an upcoming PDF revision, right?
jonnyawsom3
2025-02-25 12:34:28
Yeah
Demiurge
2025-02-25 03:23:32
Separating jpegli into a separate repo from libjxl is a disaster.
2025-02-25 03:24:24
How are all the redundant copies of the same files in both repos kept in sync anyways? They share so much code
HCrikki
2025-02-25 03:52:02
should they even be kept in permanent sync? any jxl bits in jpegli's repo could be frozen until jpegli gets a new major version going
2025-02-25 03:52:52
thats still just development-related. the binaries (dlls) shouldnt need any such worry from endusers and devs shipping them as is
Demiurge
2025-02-25 04:02:00
it uses a lot of shared code from libjxl. A significant amount of common code. Plus if they are bundled together, it encourages people to try the newer jxl codec too.
2025-02-25 04:02:22
It really is shooting yourselves in the foot for absolutely no logical reason.
2025-02-25 04:04:27
They really belong in one repo. If anything is to be separated and segregated it should be cjxl/djxl/cjpegli/djpegli from libjxl
2025-02-25 04:19:23
But even that could just be in a subfolder and a simple compile time option to disable all those dependencies instead of a whole bunch of multi line cmake flag arcana
_wb_
2025-02-28 10:24:21
Later today I'll drop the current draft of the big (67-page) paper on JPEG XL here. Plan is to publish it on arXiv in April or so, leaving some time to refine the draft based on feedback from y'all.
AccessViolation_
2025-02-28 10:33:24
aaa exciting!
CrushedAsian255
2025-02-28 10:36:35
is arXiv one of those free access paper websites?
_wb_
2025-02-28 10:49:51
Yes
2025-02-28 03:08:50
ok here it is: https://drive.google.com/file/d/1M3jmC83t2Pkflb46vNSegdiAqT7Io_k4/view?usp=sharing
2025-02-28 03:11:46
any kind of feedback is appreciated
dogelition
2025-02-28 03:52:15
very nitpicky, but i think this should be `Display P3` without the hyphen (<https://developer.apple.com/documentation/coregraphics/cgcolorspace/1408916-displayp3>). also seems a bit inconsistent to use the informal name `Rec.2100` here (+ in the figure 55 caption) and the full proper name `ITU-R BT.2100-2` in other places
AccessViolation_
2025-02-28 04:00:08
as a JPEG XL enthusiast I have been completely desensitized to incorrect hyphen usage <:galaxybrain:821831336372338729>
Meow
2025-02-28 04:02:31
Unnecessary hyphens are atrocious
_wb_
dogelition very nitpicky, but i think this should be `Display P3` without the hyphen (<https://developer.apple.com/documentation/coregraphics/cgcolorspace/1408916-displayp3>) also seems a bit inconsistent to use the informal name `Rec.2100` here (+ in the figure 55 caption) and the full proper name `ITU-R BT.2100-2` in other places
2025-02-28 04:04:52
Right, it's DCI-P3 but Display P3. Not sure what to do with rec2100, using the full formal name every time is annoying (and you could do the same with sRGB etc), but never using it also seems wrong.
Meow
2025-02-28 04:05:47
Apple isn't really consistent
_wb_
2025-02-28 04:06:11
Then again Apple writes jxl as JPEG-XL so maybe I should put an unwanted hyphen in their color space 🙂
AccessViolation_
2025-02-28 04:06:20
maybe do it like this: "ITU-R BT.2100-2, or simply Rec.2100, ..." and then continue using "Rec.2100" going forward
2025-02-28 04:12:58
I love how some members of this community made it in the paper
spider-mario
2025-02-28 04:13:15
from what I’ve seen, typographical usage is to put no space after “BT.” but to put one after “Rec. ”
2025-02-28 04:13:30
so ITU-R BT.2100-2 but Rec. 2100
2025-02-28 04:14:40
Wikipedia: > ITU-R Recommendation BT.2100, more commonly known by the abbreviations Rec. 2100 or BT.2100,
AccessViolation_
2025-02-28 04:16:11
also I really appreciate the full-size PNGs. i've seen too many papers about image compression that use really poor images themselves, to the point where what they're trying to demonstrate isn't even visible
veluca
AccessViolation_ I love how some members of this community made it in the paper
2025-02-28 04:17:19
anyone specific that's not a (core) dev? xD
AccessViolation_
2025-02-28 04:17:42
:D
2025-02-28 04:18:16
it's technically true!
Meow
_wb_ Then again Apple writes jxl as JPEG-XL so maybe I should put an unwanted hyphen in their color space 🙂
2025-02-28 04:22:14
Apple also writes down "USB 4" instead of "USB4", and I even saw "MacOS" on Apple's website
_wb_
2025-02-28 04:30:16
Also feel free to suggest more things to illustrate or ways to improve existing figures. I think figures make a paper like this not just more fun, but also help with a more intuitive understanding.
2025-02-28 04:31:48
(it's also just fun to hack libjxl or make jxlart to illustrate stuff)
AccessViolation_
_wb_ Also feel free to suggest more things to illustrate or ways to improve existing figures. I think figures make a paper like this not just more fun but also it helps for a more intuitive understanding.
2025-02-28 05:10:50
I like the JXL art for the predictors. what I'd find even more interesting additionally are some sample images along with maps that show which predictors are used where in the image as decided by the MA tree, similar to the examples that show where VarDCT blocks of different sizes are used. and maybe along with that, for the same image, a heatmap of the prediction residuals. also one image with a residual heatmap where just the self-correcting predictor is used would also be interesting, so you can see how well the self-correcting predictor works on different textures in the image 👀
2025-02-28 05:14:17
i don't know how hard this would be, but the predictors have always been very interesting to me despite their simplicity, and i've never seen anything visualize their effectiveness in this way
veluca
2025-02-28 05:21:15
wait, patch detection picks up on the profile pictures?
lonjil
2025-02-28 05:25:00
yeah
2025-02-28 05:25:13
my experience is that it almost always does that
_wb_
2025-02-28 05:49:02
iirc, I had to cheat a bit and set the max patch size a bit larger for it to do it here, since those profile pictures are relatively large.
Meow
2025-02-28 05:50:25
That avatar is of course residual
jonnyawsom3
2025-02-28 05:54:05
I'll just post the full section
2025-02-28 05:54:28
I'm surprised the reference frame doesn't do anything like color sorting, so similar areas can be encoded together
2025-02-28 05:55:03
Though, looking at it closer I see it matched letters and symbols together
2025-02-28 06:04:40
Page 55, Blend Modes and Layers. I feel like you could have an example of the compression gains from using layers too. Like when we separated CRT noise from an image https://discord.com/channels/794206087879852103/803645746661425173/1331749968578613352
_wb_
AccessViolation_ i don't know how hard this would be, but the predictors have always been very interesting to me despite their simplicity, and i've never seen anything visualize their effectiveness in this way
2025-02-28 06:19:08
I'll produce and add some examples of residual maps, predictor maps, and context maps, it's interesting indeed. Maybe for various effort settings too...
I'm surprised the reference frame doesn't do anything like color sorting, so similar areas can be encoded together
2025-02-28 06:24:54
there are probably a bunch of tweaks that could be made to that algorithm. I like how it fits all those letters in a relatively small rectangle though 🙂
jonnyawsom3
_wb_ I'll produce and add some examples of residual maps, predictor maps, and context maps, it's interesting indeed. Maybe for various effort settings too...
2025-02-28 06:31:32
Me and <@207980494892040194> tried to get the predictor map working after I spotted this line of code. But it just seemed to crash or fail compiling <https://github.com/libjxl/libjxl/blob/f73fc6f045268e2d12c17490ac490e0721116657/lib/jxl/modular/encoding/enc_encoding.cc#L43>
AccessViolation_
2025-02-28 06:32:26
> With the addition of timing information, frames can represent an animation sequence. Use cases for animation sequences include “GIFs” (short looping video fragments), cinemagraphs, and lossless or very high-fidelity **intra-only video**.
I believe you can sort of encode frames as residuals of some blend operation with the previous frame, no? That's basically very rudimentary inter-frame coding. Also probably valuable to photographers that take a lot of burst images. there will be a lot of 'temporal' similarity which it should compress fairly well if the camera is kept steady
jonnyawsom3
2025-02-28 06:33:29
Surely if the image is shifted even 1 pixel off, the blending might add even more residuals from misaligned edges?
AccessViolation_
2025-02-28 06:38:31
then you're storing edges instead of basically the entire image again. but yeah if there is only very fine detail throughout the image I don't think it'd help. as I understand it, those residuals from frame blending would have predictors run on them too. but i never did any benchmarks for compressing burst images. I wanted to, but got tired of trying to get Krita to do what I want
jonnyawsom3
2025-02-28 07:07:35
It also depends if you mean the traditional Alpha-interframe or actually subtracting the images, since the latter requires floats for negative values
_wb_
Me and <@207980494892040194> tried to get the predictor map working after I spotted this line of code. But it just seemed to crash or fail compiling <https://github.com/libjxl/libjxl/blob/f73fc6f045268e2d12c17490ac490e0721116657/lib/jxl/modular/encoding/enc_encoding.cc#L43>
2025-02-28 07:45:07
Yeah that code is old and needs refactoring. I will just hack something for now, but it could be nice to do the code plumbing to make that debug image dumping code work again.
jonnyawsom3
2025-02-28 08:55:29
The shading on page 58 could have the transparency lowered, it's quite hard to see the JPEG XL and especially the AVIF area due to them being light colors on a light background
CrushedAsian255
Meow That avatar is of course residual
2025-02-28 10:16:29
I wonder why; maybe it's just simple enough to be easier to store as just residuals?
VcSaJen
2025-03-01 02:06:16
I'm surprised that avatars are pixel-perfect on DiscordApp, I expected anti-aliasing to screw that up. Same for letters.
Laserhosen
2025-03-01 02:07:56
Typo on page 11: > chrome from luma
_wb_
2025-03-01 02:28:25
Lol, that is a funny typo
Laserhosen
2025-03-01 03:01:36
Freudian 🙂
AccessViolation_
2025-03-01 07:17:25
i just realized, regarding the title
> The JPEG XL Image Codec: History, Features, Coding Tools, Design Rationale, and Future
the title describes the document to be about the "image codec", and 'codec' is a portmanteau of '[en]**co**der' and '**dec**oder', so specifically the software used to encode/decode JPEG XL images. I believe the proper terminology here could be "The JPEG XL Image Coding Format"? This would then also be consistent with the wording the JPEG Committee uses:
> The JPEG Committee has launched the Next-Generation Image Coding activity, also referred to as JPEG XL
jonnyawsom3
2025-03-01 07:19:27
Could just do Image Format
AccessViolation_
2025-03-01 07:21:23
that would work. personally i do like "Image Coding Format" more probably just because it..sounds cooler? 😅 but at least going by wikipedia, "audio coding format", "video coding format", "media coding format" all are terms used for this. In ISO/IEC 18181, they use "JPEG XL image coding system"
2025-03-01 07:49:47
so I guess "JPEG XL Image Coding System" would be best, to mirror the spec?
_wb_
2025-03-01 10:14:49
I am not sure if there really exists an agreed upon convention for how to call the whole thing. I tend to use "format" for a file format and "codec" for the core payload, e.g. JXL and DNG are different formats but they may use the same codec.
AccessViolation_
2025-03-01 10:17:04
i've seen 'codec' used to mean 'coding format' in other works too. it probably doesn't matter that much anyway since everyone will know what you mean
2025-03-01 10:17:20
but i thought i'd bring it up 🙂
_wb_
2025-03-01 10:20:18
Terminology is kind of fuzzy here. The word "codec" can refer to a specific implementation (e.g. x264) and to the standard in general (e.g. h264).
AccessViolation_
2025-03-01 10:23:46
this feels very similar to the 'Linux' situation. Linux is the kernel, but people will just say Linux to refer to their operating system as a whole, so much so that Wikipedia made a point to name the article about the operating system "Linux" instead of "GNU/Linux" or something like that, and the article about the kernel specifically is called "Linux kernel"
2025-03-01 10:24:53
so even though 'codec' is technically incorrect in this use case if you go by the original portmanteau, over time people started using it in different ways and it doesn't really matter anymore
2025-03-01 10:27:41
personally i'm a big fan of precise language in documents like this, but i recognize that not everyone cares in the way i do which is perfectly valid, so i just wanted to make you aware of it, and you should decide for yourself :)
JaitinPrakash
2025-03-01 10:29:04
I interpret codec as the encoder/decoder, and split "format" into file format and bitstream format. So the h264 "codec" is rather a "codec standard" that defines how to create the actual coder and decoder. So jxl-oxide is a codec for the jpeg xl format that only implements a decoder, while libjxl has both the coder and decoder. And you should always define your terminology anyways, so that all readers know which words mean what.
AccessViolation_
2025-03-01 10:37:39
regardless of how it's done throughout the article, i think there is a lot of value in having the title be consistent with the title in the spec, so,
> The JPEG XL Image Coding System: History, Features, Coding Tools, Design Rationale, and Future
That way people that find either the spec or this document, then search for "JPEG XL Image Coding System" from the title, will more reliably find both
2025-03-01 10:42:43
speaking of, I could add that to the first paragraph in the wikipedia article too
Quackdoc
2025-03-01 10:43:00
I simply avoid the term codec when possible
2025-03-01 10:43:29
i will do so given appropriate context tho
CrushedAsian255
2025-03-01 11:18:42
libjxl is the codec, jpeg xl is the format
𝕰𝖒𝖗𝖊
2025-03-03 06:05:59
AFAIK, codec refers specifically to software that implements both encoding and decoding functionality. However, as discussed, usage has expanded over time.
- Can refer to specific implementations: `jxl`, `jxl-oxide`
- Can refer to standards more generally, similar to `H.264`. Though a standard does not include software.
- Some implementations may only include one part (decoder).
Coding format refers to the standardized specification of how image data is compressed, stored, and interpreted. Image Format, on the other hand, is a broader term that captures the file structure, headers, and general organization of image data. *Image Coding System*, as AccessViolation_ mentioned, is used in the official ISO/IEC 18181 specification for JPEG XL. This is the most standards-aligned terminology and includes the complete specification of the format. As JaitinPrakash said, file format captures the complete file structure including headers, metadata containers, etc, while the bitstream format is the specific encoding of the compressed image data within the file.
Therewithal, "GNU/Linux" is a terrible name. *Linux* is concise, memorable, and easier to pronounce. One or two syllable names are always more successful. The kernel is the foundational, non-replaceable component that defines the system. You can swap out GNU tools; especially now it's extremely easy. Busybox, Toybox, Clang, Musl, Zsh, etc. *Linux* effectively differentiates these systems from Windows, macOS, and BSD variants in a way that highlights their most fundamental technical difference.
- Operating systems are always known to be named after the kernel, not their userspace tools.
- Using a part to refer to the whole is common in computing (we say *Intel machine* not *Intel CPU-based computer*).
- The kernel provides the single common concept across all Linux distributions, while userspace components vary dramatically. It doesn't matter if you use a system with Clang / Musl / LibreSSL / Busybox coreutils / Zsh, Dash / S6 / elftoolchain; if it's Linux, it's Linux.
- Jpeg XL would be the core standard/specification itself. It's the essential definition.
- jxl would be viewed as the common shorthand name for the format and also the file type.
- libjxl would be the codec and can be seen as just one implementation of the JPEG XL standard, analogous to a specific distribution of Linux.
- jxl-oxide can be a partial codec, and it would be viewed as another replaceable implementation, similar to GNU Tools.
monad
AccessViolation_ i just realized, regarding the title > The JPEG XL Image Codec: History, Features, Coding Tools, Design Rationale, and Future [...]
2025-03-03 06:43:12
the paper certainly leverages the physical codec, speaking pedantically. regardless, the codec is the conceptual implementation of the decoding spec, so it's perfectly valid semantically.
AccessViolation_
2025-03-03 08:18:58
hmm that's fair
𝕰𝖒𝖗𝖊 AFAIK, codec refers specifically to software that implements both encoding and decoding functionality. However, as discussed, usage has expanded over time. [...]
2025-03-03 08:27:18
> Image Coding System as AccessViolation_ mentioned, is used in the official ISO/IEC 18181 specification for JPEG XL. This is the most standards aligned terminology and includes the complete specification of the format.
I think there's a lot of value in this. terminology used in the document itself is whatever, people will know what it means, but if you can get the title consistent with other publications, that establishes the document as more 'canonical'. terminology in the title that differs from the spec would be a bit weird, imo
monad
2025-03-03 09:58:49
the brevity of the AVIF section compared to others under *1.1 Precursors* is hilarious.
_wb_ any kind of feedback is appreciated
2025-03-03 10:06:08
couple of mundane edits so far
page 5: *if the new format does not fully cover all of the functionality of the old format it intends to replace (in case of PNG: it lacked animation support), the old format will inevitably remain in use too. Although PNG has been a very successful image format, it failed to render GIF obsolete since it lacks support for animation.* (redundancy between parenthesis and following sentence)
page 56: *decoders are often an order of magnitude faster than decoders* ("decoders" twice)
Foxtrot
2025-03-03 11:02:24
the red part for JPEG XL should be above the others and semi-transparent so it's obvious where it extends to.
jonnyawsom3
2025-03-03 02:14:52
Optionally, could make it the <:JPEG_XL:805860709039865937> teal color too
_wb_
2025-03-03 02:57:48
my thoughts exactly
2025-03-03 02:58:11
this is how I changed it
jonnyawsom3
2025-03-03 03:58:37
Hmm, it could just be me but now I feel like the colors of the axis labels clash with it
_wb_
2025-03-03 04:34:09
actually those axis labels do not really need color
spider-mario
2025-03-03 04:35:17
I wonder if I wouldn’t put OpenEXR a bit more on the left, along with DICOM
2025-03-03 04:35:59
(or is the raster/vector axis not relevant within “application-specific interchange formats”?)
_wb_
2025-03-03 04:42:01
I put it to the right of DICOM since it's also used for things like Pixar animation movies or other non-sensor data
2025-03-03 04:42:50
(while DICOM is only used for sensor data afaik)
2025-03-03 04:47:00
but yeah within "application-specific" I put things side-by-side to save vertical space but those three examples (medical, CGI/video prod, GIS) should in reality overlap horizontally since even in DICOM you could have some vector annotations and of course GIS also has lots of raster images where jxl can be of use as a payload codec
2025-03-03 04:47:31
Updated draft based on the suggestions: https://drive.google.com/file/d/1KHxIms3bfzr_4qYLG_Jqpb41VkzZC3cV/view?usp=sharing
2025-03-03 04:48:26
I spent way too much time on making this figure but I think it was worth the time
2025-03-03 04:49:06
in the PDF you can zoom in on those trees if you like
2025-03-03 04:50:52
the tree for e11 is kind of crazy large for such a small image, here's a small part of it
AccessViolation_
_wb_ I spent way too much time on making this figure but I think was worth the time
2025-03-03 04:56:28
that's so cool <:BlobYay:806132268186861619>
2025-03-03 04:56:40
I forgot that for most effort settings, only two predictors are used
_wb_
2025-03-03 04:58:18
yeah because Weighted and Gradient are both pretty good general-purpose predictors and it's hard to really benefit from the others without spending substantially more effort
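For context, the Gradient predictor mentioned here is essentially the classic W + N - NW prediction clamped to the local neighbourhood; a sketch of the idea (libjxl's exact clamping and overflow handling differ in detail):
```cpp
#include <algorithm>

// Gradient-style prediction as used in modular mode: extrapolate the current
// pixel from its west (w), north (n) and north-west (nw) neighbours as
// n + w - nw (a local plane fit), clamped so the prediction never leaves the
// range spanned by the neighbours. Same family as the JPEG-LS MED predictor.
int GradientPredict(int n, int w, int nw) {
  const int grad = n + w - nw;
  const int lo = std::min({n, w, nw});
  const int hi = std::max({n, w, nw});
  return std::clamp(grad, lo, hi);
}
```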
AccessViolation_
2025-03-03 05:00:24
is effort 11's predictor selection the same as what "mix all" `-P 15` would do on lower effort levels?
_wb_
2025-03-03 05:01:36
not sure how it's wired, maybe that works but still on lower effort it's not trying all properties for the decision nodes in the tree, so it can probably make less use of the various predictors
2025-03-03 05:03:35
in the e11 image you can see how it is using the NE predictor on the left (where the edges go in that NE direction indeed), while it uses a mix of W, AvgW+NW and NW on the right, where the edges indeed go in those directions
2025-03-03 05:05:13
it's pretty cool how each effort setting makes those residuals get smaller, and reaches a smaller total file size even though it has to signal larger and larger trees
AccessViolation_
_wb_ in the e11 image you can see how it is using the NE predictor on the left (where the edges go in that NE direction indeed), while it uses a mix of W, AvgW+NW and NW on the right, where the edges indeed go in those directions
2025-03-03 05:06:26
> in the e11 image you can see how it is using the NE predictor on the left I was going to mention that, it's awesome that it picked up on that so well
2025-03-03 05:06:45
this is so cool to see, honestly
2025-03-03 05:06:53
so glad you added it
_wb_
2025-03-03 05:07:26
well thank you for the suggestion to add this! 🙂
AccessViolation_
2025-03-03 05:08:30
also the predictor selection is a lot more refined than I expected it to be, on e11. I assumed it would mostly divide the image into larger rectangles and then decide which predictor is the best for each, but this is incredible. I wasn't expecting it to use these more complex decision nodes basically at all. I thought that was another case of "the format can do this but the encoder doesn't try"
_wb_
2025-03-03 05:08:43
I should probably get my hacky stuff in libjxl so anyone can make these kinds of visualizations
2025-03-03 05:09:03
it can probably help to further improve the tree building algorithms too
2025-03-03 05:11:38
I wonder what could be achieved with AI-based tree generation. The nice thing is: any tree will give you a lossless image, it only has impact on the compression. So the most catastrophic thing that the AI could do is have suboptimal compression, but the result would still be lossless if the AI is only trusted with the tree generation, nothing else.
AccessViolation_
2025-03-03 05:13:32
ooo yeah
2025-03-03 05:18:34
usually for these types of NP-hard (?) problems it's going to be substantially faster too, because the alternative is just trying more and more and more things
jonnyawsom3
_wb_ it's pretty cool how each effort setting makes those residuals get smaller, and reaches a smaller total file size even though it has to signal larger and larger trees
2025-03-03 05:21:23
It does make me wonder if there's an earlier optimal point where the MA tree size starts adding more than it saves. Visualising the decisions would definitely help with seeing what it's doing right or wrong. Possibly if the 'MA saving/loading' were added too, further tests and experiments would be possible. Ideally it could even be in jxl_from_tree format, so it could be edited externally, but that's getting complex.
_wb_
monad the brevity of the AVIF section compared to others under *1.1 Precursors* is hilarious.
2025-03-03 05:22:20
It was not intentionally hilarious, I just never ended up elaborating that subsection because 1) the focus of the section is on "lessons drawn from precursors", and AVIF is barely a 'precursor' and I don't think we really drew any lessons from it 2) I was not sure how to say anything useful about AVIF without also sounding biased
AccessViolation_
2025-03-03 05:24:00
I wonder how easy it would be to rip out all the MA tree generation and predictor selection logic and just have a standalone thing to create these visualizations for images. could even be a cool web tool or interactive blog like "how JPEG XL losslessly compresses your images" allowing people to upload their own images and seeing it in action
2025-03-03 05:24:48
that's something i'd try if I wasn't absolutely terrified of C <:KekDog:805390049033191445>
_wb_
It does make me wonder if there's an earlier optimal point where the MA tree size starts adding more than it saves. Visualising the decisions would definitely help with seeing what it's doing right or wrong. Possibly if the 'MA saving/loading' were added too, further tests and experiments would be possible. Ideally it could even be in jxl_from_tree format, so it could be edited externally, but that's getting complex.
2025-03-03 05:27:31
you can make a reasonable estimate of how much the signaling cost of an extra node is compared to how much it will save, but there's room for improvement in those estimates. Then there's also of course some cost in encode/decode speed you may want to factor in: larger trees will make both encode and decode slower...
AccessViolation_
2025-03-03 05:32:06
should be an easy estimation, right? evaluating it for every pixel, check how many times a branch is taken. if the left branch is taken 99.9% of the time, it's probably not worth it to have that very large subtree under the right branch there (though as images get bigger, those branches will be more 'worth it' whereas for small images like icons a tiny tree is probably the least wasteful, so maybe go by total pixel count instead of a percentage)
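A hypothetical back-of-the-envelope version of that estimate (the names and the fixed node cost below are made up for illustration, not libjxl's actual heuristic): compare an order-0 entropy estimate of the residuals before and after a candidate split, and keep the split only if the estimated savings exceed the cost of signalling the extra decision node.
```cpp
#include <cmath>
#include <cstddef>
#include <map>
#include <vector>

// Order-0 entropy of a set of residuals, in bits.
double EntropyBits(const std::vector<int>& residuals) {
  if (residuals.empty()) return 0.0;
  std::map<int, std::size_t> counts;
  for (int r : residuals) counts[r]++;
  double bits = 0.0;
  for (const auto& kv : counts) {
    const double p = static_cast<double>(kv.second) / residuals.size();
    bits -= kv.second * std::log2(p);
  }
  return bits;
}

// Hypothetical check: is splitting one leaf into two worth the extra node?
bool SplitIsWorthIt(const std::vector<int>& parent,
                    const std::vector<int>& left,
                    const std::vector<int>& right) {
  constexpr double kNodeSignalCost = 64.0;  // placeholder signalling cost in bits
  const double saved =
      EntropyBits(parent) - (EntropyBits(left) + EntropyBits(right));
  return saved > kNodeSignalCost;
}
```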
jonnyawsom3
2025-03-03 05:36:09
That reminds me, by default the MA tree learning is set to '50% of pixels', but how does it decide/truncate them?
AccessViolation_
2025-03-03 05:43:53
I like how it captured the reflections from the water in the MA tree too
monad
AccessViolation_ is effort 11's predictor selection the same as what "mix all" `-P 15` would do on lower effort levels?
2025-03-03 06:10:22
as demonstrated, it is the same option as P15, but it is not necessarily the case that e11 chooses that option, as it tries multiple
AccessViolation_
2025-03-03 06:25:42
also +1 for the title that's consistent with the spec now 👍
2025-03-03 06:37:13
I really need to clear out like a day to read all of this, I'm so excited to just read it all but I don't have the focus right now 😅 I'll probably save it for the weekend
monad
That reminds me, by default the MA tree learning is set to '50% of pixels', but how does it decide/truncate them?
2025-03-03 07:22:15
it iterates over the channel data choosing pixels based on PRNG. there are additional samples at 10% of the full sample, hence https://discord.com/channels/794206087879852103/804324493420920833/1288947695066546176, which are also chosen based on PRNG
jonnyawsom3
2025-03-03 07:25:36
Ahh, so `-I 1000` means the additional pass is also `-I 100` effectively
AccessViolation_
2025-03-03 07:29:12
> PRNG
I wonder how much that hurts performance just because of poor branch prediction. though I guess with consistent patterns it's possible you miss some pattern in the image that happens to be aligned with the way you sample it
monad
Ahh, so `-I 1000` means the additional pass is also `-I 100` effectively
2025-03-03 07:34:03
except in the 10% case the data is iterated differently, and doesn't seem to ensure visiting every pixel uniquely
2025-03-03 08:04:54
okay, I take that back, it does step through each individual pixel at `-I 990`
AccessViolation_
_wb_ I should probably get my hacky stuff in libjxl so anyone can make these kinds of visualizations
2025-03-04 08:38:54
if you ever do btw, or make the code you used to generate those available, please let me know 👀 i'd love to play around with it
_wb_
2025-03-04 08:43:20
I made a hacky thing that just always dumps stuff directly to ppm and dot files with hardcoded names; the right way to do it requires more code plumbing but I can give you my hacky thing if you don't mind compiling a hacky libjxl
AccessViolation_
2025-03-04 08:56:14
sure i'll take the hacky variant
Demiurge
𝕰𝖒𝖗𝖊 AFAIK, codec refers specifically to software that implements both encoding and decoding functionality. However, as discussed, usage has expanded over time. [...]
2025-03-04 09:53:53
GNU is just plain awful. So is Linux actually. It makes sense for them to go together.
2025-03-04 09:55:12
But calling the OS "Guhnoo slash linucks" will never make sense. Linux is not really an OS, but neither is GNU
2025-03-04 09:55:45
GNU is even further away from being called an OS than Linux is
2025-03-04 09:57:24
It never will be an OS and the FSF is really sore about that. They really really want GNU to be an OS but they never put the effort and responsibility into actually making an OS. So they are trying to say "GNU/Linux" is that imaginary OS that they wish they made
2025-03-04 09:59:35
GNU/Hurd WOULD be their imaginary OS but I don't think it even has a driver for a USB keyboard and mouse yet
2025-03-04 09:59:59
And never will
AccessViolation_
2025-03-04 09:59:59
> It never will be an OS and the FSF is really sore about that
that seems really petty, do they actually care?
Demiurge
2025-03-04 10:00:34
If you read their website "what is GNU?" that's how they basically make it sound like
2025-03-04 10:01:47
They wish they made an OS called GNU but they gave up and never took the responsibility or effort required to have an OS project like BSD or Redox or Serenity or Haiku etc
AccessViolation_
2025-03-04 10:01:57
> GNU is typically used with a kernel called Linux. This combination is the GNU/Linux operating system. GNU/Linux is used by millions, though many call it “Linux” by mistake.
seems fine? though maybe a bit pedantic but /shrug
Demiurge
2025-03-04 10:02:38
So they want to pretend like GNU is an operating system they never actually put the required amount of work into creating by trying to compel people to call Linux "GNU" as if it's their imaginary GNU OS
2025-03-04 10:03:23
It's an imaginary OS project that tries to vicariously live the success of Linux by proxy
2025-03-04 10:04:10
The idea is you're not using Linux, you are actually using GNU
AccessViolation_
2025-03-04 10:04:33
If they're calling it GNU/Linux it sounds like they're embracing the combination rather than spitefully trying to claim everything as their own, though
Demiurge
2025-03-04 10:04:53
You're using their imaginary OS they call GNU. You're using the GNU OS with the Linux kernel
2025-03-04 10:05:04
That's why they call it GNU/Linux
2025-03-04 10:05:16
Same as GNU/Hurd
2025-03-04 10:06:13
They are just pretending that GNU is an operating system that definitely exists and they definitely created
2025-03-04 10:07:33
So they are clearly sore that they never made an actual OS, which is their fault since if they wanted to have an OS they would make one like every other open source OS project like BSD for example
AccessViolation_
2025-03-04 10:09:19
if the pack of software that makes your computer run ("OS") consists of a lot of GNU software and the Linux kernel, I don't think many would object to calling GNU/Linux an operating system. if they have communities that are actually really as upset as you claim about specific terminology used that would surprise me, but I also haven't looked into it so I'll take your word for it
2025-03-04 10:12:20
but also we should probably not be talking about this in this channel 😅
Demiurge
2025-03-04 10:12:58
Their software is frankly not good either. They do the same microsoft style "embrace extend extinguish" thing with standards like inventing their own weird GNU variant dialect with subtle compatibility issues with the C standard for example, and most of their stuff like glib and gmp are infamous for calling abort() and having horrible over-engineered spaghetti code style and resource usage.
Traneptora
AccessViolation_ if the pack of software that makes your computer run ("OS") consists of a lot of GNU software and the Linux kernel, I don't think many would object to calling GNU/Linux an operating system. if they have communities that are actually really as upset as you claim about specific terminology used that would surprise me, but I also haven't looked into it so I'll take your word for it
2025-03-04 10:39:03
their main argument is that the pack of software that makes your computer run consists of mostly GNU software and the Linux kernel but this is actually not true
2025-03-04 10:41:50
many important bits (e.g. mount, kill, hexdump) are util-linux
Demiurge Their software is frankly not good either. They do the same microsoft style "embrace extend extinguish" thing with standards like inventing their own weird GNU variant dialect with subtle compatibility issues with the C standard for example, and most of their stuff like glib and gmp are infamous for calling abort() and having horrible over-engineered spaghetti code style and resource usage.
2025-03-04 10:45:23
glib is not gnu, it's part of the Gnome project
Demiurge
2025-03-04 10:47:42
You mean the gnu object model environment?
Traneptora
2025-03-04 10:48:09
that's not what Glib is
2025-03-04 10:48:27
it originally comes from Gtk
2025-03-04 10:48:42
but was separated out in the 1990s and has been developed in tandem since by the Gnome project
2025-03-04 10:48:51
it's never been a GNU software or developed by FSF
spider-mario
2025-03-04 10:50:29
for what it’s worth, Stallman’s response to the famous “I’d just like to interject” copypasta is even funnier https://www.gnu.org/gnu/incorrect-quotation.en.html
> The main error is that Linux is *not* strictly speaking part of the GNU system—whose kernel is GNU Hurd. The version with Linux, we call “GNU/Linux.” It is OK to call it “GNU” when you want to be really short, but it is better to call it “GNU/Linux” so as to give Torvalds some credit.
2025-03-04 10:51:31
also note how “Most of it was copied from statements I have made, but the part italicized here is not from me”, but the italicized part is like 10% of the text
2025-03-04 10:51:38
so the remaining 90% are in fact from him
2025-03-04 10:51:53
including
> Through a peculiar turn of events, the version of GNU which is widely used today is often called “Linux,” and many of its users are not aware that it is basically the GNU system, developed by the GNU Project.
2025-03-04 10:52:47
which can be found here: https://www.gnu.org/gnu/linux-and-gnu.en.html
Demiurge
2025-03-04 11:06:16
That's what I'm saying. They are literally living in an alternate reality where everyone is using this imaginary OS called GNU that they never bothered developing and publishing and maintaining, because there's too much work involved in having a website and OS releases a la DragonflyBSD or Redox OS
2025-03-04 11:06:47
In this imaginary world, everyone is using this imaginary OS called GNU without realizing it, because they think it's called Linux
Traneptora it's never been a GNU software or developed by FSF
2025-03-04 11:07:18
What does the G in glib stand for?
Traneptora
2025-03-04 11:07:55
doesn't actually stand for anything
2025-03-04 11:08:25
the fact that you're trying to cite acronyms shows you actually don't really know anything about the history
Demiurge
2025-03-04 11:08:53
I guess glib is part of gtk but I thought gnome stands for "gnu networked object model environment"
Traneptora
2025-03-04 11:09:15
it originally did but it hasn't been part of the "gnu project" for a very long time
2025-03-04 11:09:33
and it no longer stands for that
2025-03-04 11:09:52
for example, much of it stems from GIMP, which is, officially, the Gnu Image Manipulation Program, but never actually had anything to do with FSF, it just used the name after the developers asked stallman if they could in the mid 1990s
2025-03-04 11:10:10
but GIMP never actually had any FSF hands in its development
Demiurge
2025-03-04 11:11:04
Well, if you want other examples of infamously badly written libraries, there's still glibc and gmp
Traneptora
2025-03-04 11:12:16
I can't comment on how "badly written" they are as I haven't looked at the code
2025-03-04 11:12:35
but you like to make a lot of assertions for someone who really doesn't appear to actually know that much
Demiurge
2025-03-04 11:12:56
The only time interacting with GNU software is a pleasant experience is when using some of the convenience features of GNU tools like inline replace in sed
spider-mario
2025-03-04 11:13:27
I just use perl for that
Demiurge
2025-03-04 11:14:23
Even when it's convenient, it's not necessary and has better replacements that don't create a compatibility jungle
2025-03-04 11:14:45
So overall GNU is just not pleasant to have to deal with at all
Traneptora but you like to make a lot of assertions for someone who really doesn't appear to actually know that much
2025-03-04 11:20:12
I know more than I appear, sadly.
2025-03-04 11:21:26
Lots of people don't believe me and then regret later on :(
2025-03-04 11:24:17
Cuz I have a peculiar style of joking about things
2025-03-04 11:33:57
It's hard to take me seriously when I don't even take myself that seriously. But people forget that even people like that can be a source of useful information sometimes and it's unwise to dismiss everything just because someone is quirky and peculiar.
2025-03-04 11:35:28
In my experience the people with the most interesting information tend to have an unusual sense of humor
_wb_
AccessViolation_ sure i'll take the hacky variant
2025-03-04 11:59:54
replace libjxl/lib/jxl/modular/encoding/enc_encoding.cc with this and you'll get something that produces `predictors.ppm`, `residuals.ppm` and `context.ppm` — I made it only for single-channel (grayscale) single-group images, if you want to see stuff for multiple channels or larger images I guess you'll have to add some indices into those filenames
𝕰𝖒𝖗𝖊
Demiurge GNU is just plain awful. So is Linux actually. It makes sense for them to go together.
2025-03-04 01:24:01
I agree. But Linux in general is a better word for referring to the ecosystem, while distribution names are better for referring to it as an OS. Again, the static part does not change: *Linux* distributions. Debian calls itself a GNU/Linux distribution, which is terrible. What would you say for Chimera, or Alpine then? *LLVM / Musl / Mimalloc / BSD Coreutils / Dinit / Pipewire / Gnome / Linux*
AccessViolation_
_wb_ replace libjxl/lib/jxl/modular/encoding/enc_encoding.cc with this and you'll get something that produces `predictors.ppm`, `residuals.ppm` and `context.ppm` — I made it only for single-channel (grayscale) single-group images, if you want to see stuff for multiple channels or larger images I guess you'll have to add some indices into those filenames
2025-03-04 01:42:37
thank you!
spider-mario
2025-03-04 01:42:48
to be fair, historically, gcc has probably been quite useful to the linux project
2025-03-04 01:43:17
but these days, it’s harder to argue for as central a position for GNU
2025-03-04 01:43:33
unless one argues that people can’t see their files without GNU `ls`
AccessViolation_
Demiurge It's hard to take me seriously when I don't even take myself that seriously. But people forget that even people like that can be a source of useful information sometimes and it's unwise to dismiss everything just because someone is quirky and peculiar.
2025-03-04 01:46:37
if you're up for some good-faith, honest, unsolicited advice: ||i didn't know you were joking, i thought you were always genuine. to others (me, at least) your passion for the things you talk about reads a lot like anger, or very un-nuanced comments about everything that's wrong with those things||
2025-03-04 01:46:58
if you're not up for that, feel free to ignore it
_wb_
2025-03-04 01:48:37
The concept of the GNU project, i.e. a community initiative to build a FOSS version of some kind of unix style OS, predates Linux by a decade. I like to distinguish the technical contribution of GNU from the philosophical/political one. They did make a nontrivial technical contribution but I think the main legacy of GNU is philosophical/political, not so much technical. From the technical point of view, much of the GNU stuff has meanwhile been superseded by better alternatives.
lonjil
2025-03-04 02:00:36
The thing about the "GNU/Linux" OS is kind of historical revisionism, originally it was just about glibc.
2025-03-04 02:02:30
Early distros used unofficial ports of glibc, and when the GNU project added official Linux support to glibc, people started talking about "GNU/Linux"
2025-03-04 02:02:48
the GNU coreutils or GCC didn't really factor into it
jonnyawsom3
2025-03-04 02:14:34
<#806898911091753051>?
AccessViolation_
_wb_ replace libjxl/lib/jxl/modular/encoding/enc_encoding.cc with this and you'll get something that produces `predictors.ppm`, `residuals.ppm` and `context.ppm` — I made it only for single-channel (grayscale) single-group images, if you want to see stuff for multiple channels or larger images I guess you'll have to add some indices into those filenames
2025-03-04 02:55:25
got it to work! weirdly `-e 11` breaks the ppm files for me
2025-03-04 02:57:11
2025-03-04 03:14:00
works fine with `-e 10` though :)
Demiurge
_wb_ The concept of the GNU project, i.e. a community initiative to build a FOSS version of some kind of unix style OS, predates Linux by a decade. I like to distinguish the technical contribution of GNU from the philosophical/political one. They did make a nontrivial technical contribution but I think the main legacy of GNU is philosophical/political, not so much technical. From the technical point of view, much of the GNU stuff has meanwhile been superseded by better alternatives.
2025-03-04 04:44:27
In other words, they were meaning to make an OS for a long time, before even Linux appeared... but they never did. And they wanted to pretend like they did without spending the effort involved in actually following through.
2025-03-04 04:49:06
They made some mediocre tools. At the time BSD was being litigated in court so people avoided using it since it was unclear who owned the rights to it. So GNU and Linux were in the right place at the right time when superior alternatives already existed, with an even less restrictive license, but people weren't sure if it was legal because corporations were claiming BSD was stolen or whatever
jonnyawsom3
AccessViolation_ works fine with `-e 10` though :)
2025-03-04 06:33:18
Time to bombard you with images to test xD
AccessViolation_
2025-03-04 06:33:34
feel free!
2025-03-04 06:33:45
we could create a thread in <#803645746661425173> even
jonnyawsom3
2025-03-04 06:35:51
Though, if you happen to be on Windows, you could just upload the binaries and I'll be missing for the next week
AccessViolation_
2025-03-04 06:39:45
unfortunately I'm on Linux. but I did just make a command to easily convert images to grayscale and downscale them to the max group size, so that they should Just Work with what Jon brewed up:
```
convert image.png -set colorspace Gray -separate -average -resize 1024x1024\> prepared-image.png
```
and here's the variant you can put in a justfile so you can run `just prepare <your image>`:
`justfile`
```
# change image to grayscale and shrink to 1024
prepare IMAGE:
    convert {{IMAGE}} -set colorspace Gray -separate -average -resize 1024x1024\> prepared-{{IMAGE}}
```
(the reason I did that was so that I wouldn't have to type the commands out constantly, and I wanted to try out `just`) cc <@794205442175402004>, for convenience, if you also plan on trying it out more
_wb_
AccessViolation_ got it to work! weirdly `-e 11` breaks the ppm files for me
2025-03-04 06:56:43
Maybe try `-e 11 --num_threads 0`; it is probably doing multiple encodes in parallel and they are writing to the same file at the same time
AccessViolation_
2025-03-04 06:57:42
ah, i'll try that
2025-03-04 07:00:39
<@238552565619359744> if you have any specific images you want me to test (preferably in png format), feel free to send them
_wb_
2025-03-04 07:02:50
I am curious now if the e11 tree/images I have in the figure are actually correct, they may not be from the run that ended up winning but just the one that was the last one to write the files
AccessViolation_
2025-03-04 07:03:38
oh hmm
2025-03-04 07:04:10
do you think the context/predictors/residuals images are affected by that too?
_wb_ I am curious now if the e11 tree/images I have in the figure are actually correct, they may not be from the run that ended up winning but just the one that was the last one to write the files
2025-03-04 07:05:48
or is the last run necessarily the one that won? I'd expect it to do the winning run again at the very end to actually encode to the file
_wb_
2025-03-04 07:20:43
I have to check the code, I didn't get bad ppms myself but if you got them then at least there is some race condition going on
AccessViolation_
2025-03-04 07:22:37
effort 11 also ignores some presets like `-g 3` so it's not always going to do what you want anyway
2025-03-04 07:22:49
i'll probably just `-e 10` for consistency
monad
2025-03-04 07:23:08
there is always one final encode with the chosen settings
jonnyawsom3
AccessViolation_ effort 11 also ignores some presets like `-g 3` so it's not always going to do what you want anyway
2025-03-04 07:27:28
I was going to say, probably group size related
AccessViolation_
2025-03-04 07:39:11
original, predictors, residuals
2025-03-04 07:39:52
I like how in the residuals you can see dark bands perpendicular to the rings where the predictors perform the best - where the rings perfectly align with their respective directions
2025-03-04 08:08:28
I made a thread for posting more of these. also feel free to post images there and i'll run them through and post the results 🙂 https://discord.com/channels/794206087879852103/803645746661425173/1346574799396147220
monad
2025-03-04 08:26:06
honestly a chad move by Jon
AccessViolation_
2025-03-04 08:28:37
yeah this is really cool
Demiurge
AccessViolation_ if you're up for some good-faith, honest, unsolicited advice: ||i didn't know you were joking, i thought you were always genuine. to others (me, at least) your passion for the things you talk about reads a lot like anger, or very un-nuanced comments about everything that's wrong with those things||
2025-03-05 01:06:43
Well I'm ha ha only serious.
2025-03-05 01:07:45
Like I mean what I say but often say it in a hyperbolic and non literal way
Kleis Auke
2025-03-05 10:06:12
https://jcgt.org/published/0014/01/04/
_wb_
2025-03-05 10:58:51
Ooh that looks interesting, I should refer to that somewhere in the big jxl paper
2025-03-05 11:12:48
> We hope that our work helps to put the problem of spectral image compression on the map and that it will motivate improvements to libjxl.
> Desirable features for future versions of libjxl would be:
> • Handling RGB framebuffers and scalar-valued framebuffers in the same file (for RGB previews),
> • Supporting an arbitrary number of framebuffers (currently limited to 256),
> • Independent control of distance levels and resolutions for different framebuffers within the same file.
2025-03-05 11:15:05
that limit of 256 extra channels is just the Level 10 limit, I guess we should add an encode option to not conform to any Level (I guess we can signal that as Level 255, since not signaling anything is implicitly Level 5)
2025-03-05 11:18:09
last bullet: we do already have `JxlEncoderSetExtraChannelDistance(JxlEncoderFrameSettings* frame_settings, size_t index, float distance)` which should allow you to set a different distance for each extra channel — maybe it wasn't there yet when the article was written
2025-03-05 11:19:02
in the first bullet I don't quite understand what the issue is
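For context, a minimal sketch of how the per-extra-channel distance call mentioned above could be used from C. The image size, the 13 extra spectral bands, the half-float bit depth, and the use of `JXL_CHANNEL_OPTIONAL` are illustrative assumptions; supplying pixel data and reading out the compressed stream are omitted entirely.
```c
/* Hedged sketch, not libjxl documentation: set a different distance per
 * extra channel via JxlEncoderSetExtraChannelDistance. All concrete numbers
 * (sizes, band count, distances) are made up for illustration. */
#include <jxl/encode.h>
#include <stddef.h>

int setup_spectral_encoder(JxlEncoder* enc) {
  JxlBasicInfo info;
  JxlEncoderInitBasicInfo(&info);
  info.xsize = 512;                   /* assumed image size */
  info.ysize = 512;
  info.bits_per_sample = 16;          /* half-float samples, EXR-like */
  info.exponent_bits_per_sample = 5;
  info.num_color_channels = 3;        /* main RGB "preview" image */
  info.num_extra_channels = 13;       /* assumed number of extra spectral bands */
  if (JxlEncoderSetBasicInfo(enc, &info) != JXL_ENC_SUCCESS) return 0;

  JxlEncoderFrameSettings* fs = JxlEncoderFrameSettingsCreate(enc, NULL);
  JxlEncoderSetFrameDistance(fs, 1.0f); /* distance for the main RGB image */

  for (size_t i = 0; i < info.num_extra_channels; ++i) {
    JxlExtraChannelInfo ec;
    JxlEncoderInitExtraChannelInfo(JXL_CHANNEL_OPTIONAL, &ec);
    ec.bits_per_sample = 16;
    ec.exponent_bits_per_sample = 5;
    if (JxlEncoderSetExtraChannelInfo(enc, i, &ec) != JXL_ENC_SUCCESS) return 0;
    /* the call discussed above: a separate distance for each extra channel */
    JxlEncoderSetExtraChannelDistance(fs, i, 0.5f + 0.1f * (float)i);
  }
  return 1;
}
```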
jonnyawsom3
2025-03-05 03:56:01
They could also use more frames along with channels, then it's effectively unlimited
spider-mario
2025-03-05 03:58:15
https://itdaily.fr/nouvelles/logiciel/windows-11-jpg-xl/
> Une fois l’extension installée, les fichiers JPEG existants peuvent être convertis au format JPG XL.
(translation: “Once the extension is installed, existing JPEG files can be converted to the JPG XL format.”)
“JPG XL” 😬
2025-03-05 03:58:27
I think I even prefer JPEG-XL
Tirr
2025-03-05 04:00:32
now we have a matrix:
```
     | no hyphen | with hyphen
-----+-----------+-------------
JPEG | JPEG XL   | JPEG-XL
JPG  | JPG XL    | JPG-XL
```
jonnyawsom3
_wb_ last bullet: we do already have `JxlEncoderSetExtraChannelDistance(JxlEncoderFrameSettings* frame_settings, size_t index, float distance)` which should allow you to set a different distance for each extra channel — maybe it wasn't there yet when the article was written
2025-03-05 04:00:54
I wondered if this might've been related, but I can't see their name on the paper https://github.com/libjxl/libjxl/issues/3906
spider-mario https://itdaily.fr/nouvelles/logiciel/windows-11-jpg-xl/ > Une fois l’extension installée, les fichiers JPEG existants peuvent être convertis au format JPG XL. “JPG XL” 😬
2025-03-05 04:03:52
My friend used it when I first introduced them to the format a year ago. When I asked them why, they replied they had never seen a .jpeg file, only .jpg
spider-mario
2025-03-05 04:04:48
guess we should also rename XHTML to XHTM
jonnyawsom3
2025-03-05 04:06:05
Strangely, I've only ever seen .html
spider-mario
2025-03-05 04:06:55
I have seen a few `.htm`
2025-03-05 04:06:57
and not just on http://www.coboloncogs.org/INDEX.HTM
2025-03-05 04:07:14
(parody of ruby on rails)
Meow
2025-03-05 05:02:02
JXR (JXL HDR)🤔
_wb_
2025-03-05 05:17:17
Actually .htm kind of makes sense, since the file contains not the language but just the markup 🙂
2025-03-05 05:17:38
It looks very ugly though
2025-03-05 05:20:12
JPG XL sounds like a statement: "y'all ain't no experts!"
jonnyawsom3
2025-03-05 06:01:50
"Joint Photographic Group, Extra Large"
spider-mario
2025-03-05 06:50:16
maybe recompressed JPEGs should be `.jpg.xl`
jonnyawsom3
2025-03-05 06:55:11
.jpg-xl
2025-03-05 06:55:18
💀
2025-03-05 06:55:39
Actually....
damian101
spider-mario maybe recompressed JPEGs should be `.jpg.xl`
2025-03-05 06:58:39
aren't those extra xmall?
jonnyawsom3
2025-03-05 07:03:01
`.jpeg xl` with a non-breaking space
2025-03-05 07:03:03
Perfection
Laserhosen
_wb_ > We hope that our work helps to put the problem of spectral image compression on the map and that it will motivate > improvements to libjxl. > Desirable features for future versions of libjxl would be: > • Handling RGB framebuffers and scalar-valued framebuffers in the same file (for RGB previews), > • Supporting an arbitrary number of framebuffers (currently limited to 256), > • Independent control of distance levels and resolutions for different framebuffers within the same file.
2025-03-05 07:15:17
Maybe they're talking about planar vs. interleaved buffers? Like they want to output 3 extra channels interleaved to display them as if they're RGB(?) Not sure that makes a lot of sense for them to mention though, as it's easy enough to interleave them afterwards.
Demiurge
2025-03-05 08:50:04
Jon would probably come up with a good way to store a spectral image in 1 jxl file without RGB preview. He's pretty good at thoughtfully planning that kind of thing...
2025-03-05 08:50:48
Preferably a way for spectral data to be derived from the RGB preview + additional auxiliary spectral info
_wb_
2025-03-05 08:58:30
The straightforward thing would be to just store the channels corresponding to the wavelengths for R,G,B as the main RGB image and the other wavelengths as extrachannels
Demiurge
2025-03-05 08:59:13
The additional spectral data would have to be described and encoded in reference to the rgb preview image
_wb_
2025-03-05 08:59:18
In some colorspace like Rec2100 where the primaries correspond to exact spectral colors
2025-03-05 09:00:30
As far as I understand, a spectral image is kind of just a normal image but with the wavelengths sampled at more points than just the three you need to fool human eyes
Demiurge
2025-03-05 09:01:39
What our eyes see is how they respond to all of the light in aggregate. So the RGB image will show what it looks like and the additional spectral info would have to tell us more detailed information about the spectral composition
2025-03-05 09:02:55
Yeah. So instead of RGB it's a whole bunch of different wavelengths
2025-03-05 09:04:01
But the extra spectral channels should be encoded in a way that references the RGB image, to avoid redundancy and enable an easy RGB preview
2025-03-05 09:04:19
All in one file
2025-03-05 09:04:35
They say they have to use multiple jxl files currently, to compress one exr
_wb_
2025-03-05 09:05:09
I guess the RGB image would not correspond exactly to the actual image we would see, e.g. a spectral cyan color between blue and green would activate both G and B in a normal RGB image while in a spectrally sharp sampled image you would include it in some channel corresponding to that wavelength but not in the G and B channels
Demiurge
2025-03-05 09:05:33
Exactly
2025-03-05 09:05:54
That's why the extra channels should be encoded in reference to the visible rgb channels
2025-03-05 09:06:06
In my opinion.
2025-03-05 09:07:02
The spectral channels should be derived from the RGB image + additional spectral refinement data
_wb_
2025-03-05 09:07:36
Yeah, you could do something like that, encoding differences wrt the spectral low pass data or something
Demiurge
2025-03-05 09:10:21
Yup, just like how rgb color is often encoded in reference to Y channel
2025-03-05 09:12:39
The question is, how convenient is it to do that today with libjxl? To compress an exr file into a single rgb jxl with extra spectral channels and back to exr again
2025-03-05 09:12:43
Like they want
_wb_
2025-03-05 09:13:11
I wonder if a series of YCoCg transforms could be used, starting by doing it on adjacent spectral channels and then recursively on the "Y" of each subbundle of 3 channels
2025-03-05 09:14:11
Or one of the other RCTs
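To make that idea a bit more concrete, here is a rough sketch (illustrative only, not how libjxl does it) of one level of the reversible YCoCg-R transform applied to each triplet of adjacent spectral bands; the recursion on the resulting "Y" planes is only indicated in a comment.
```c
/* Hedged sketch of the scheme discussed above: run the lossless YCoCg-R
 * transform over triplets of adjacent spectral bands so most of the energy
 * lands in one "Y" plane per triplet. Repeating the same step on those Y
 * planes would give the recursive variant. Not libjxl code. */
#include <stddef.h>
#include <stdint.h>

/* One forward YCoCg-R step on a single pixel of three adjacent bands.
 * Exactly invertible in integer arithmetic. */
static void ycocg_r_forward(int32_t* a, int32_t* b, int32_t* c) {
  int32_t co = *a - *c;
  int32_t t  = *c + (co >> 1);
  int32_t cg = *b - t;
  int32_t y  = t + (cg >> 1);
  *a = y;  /* "luma"-like combination of the three bands */
  *b = co;
  *c = cg;
}

/* Apply the step to bands (0,1,2), (3,4,5), ... of a planar spectral image. */
void spectral_rct_one_level(int32_t** bands, size_t num_bands, size_t num_pixels) {
  for (size_t k = 0; k + 2 < num_bands; k += 3) {
    for (size_t i = 0; i < num_pixels; ++i) {
      ycocg_r_forward(&bands[k][i], &bands[k + 1][i], &bands[k + 2][i]);
    }
  }
  /* Recursion idea: collect the Y planes bands[0], bands[3], bands[6], ...
   * into a new array and call spectral_rct_one_level on it again. */
}
```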
Demiurge
2025-03-05 09:14:33
Whatever method gives the best quality/size ratio :)
2025-03-05 09:16:12
If someone wrote an encoder that can convert from spectral EXR and back, with a single jxl file done the "right" way, they would be happy.
2025-03-05 09:16:38
cjxl-spectral
2025-03-05 09:18:01
Their current process looks kind of messy with multiple files and multiple steps
_wb_
2025-03-05 09:27:10
In the worst case you could always embed extra files in some box in a jxl file 🙂
jonnyawsom3
2025-03-05 09:41:58
Looking at the paper itself, I'm surprised they got such improvements with subsampling. I thought JPEG XL only used it for JPEG recompression since it's better to just lower the quality target
2025-03-05 09:43:47
> Our format supports the full feature set of spectral OpenEXR, but compared to this lossless compression, we achieve file sizes that are 10 to 60 times smaller than their ZIP compressed counterparts.
I mean... It's just a JXL file isn't it? Though I guess you can rename a PPM file to anything else and call it your own :P
spider-mario
_wb_ Yeah, you could do something like that, encoding differences wrt the spectral low pass data or something
2025-03-05 09:45:14
if we have the raw spectral information, we can multiply that with the CIE XYZ matching functions, integrate the result and get XYZ triplets directly (although there is the “small” question of what the white point should be)
2025-03-05 09:50:36
(https://en.wikipedia.org/wiki/CIE_1931_color_space#Computing_XYZ_from_spectral_data)
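As a rough illustration of that computation, a sketch of the integration step; the matching-function tables are passed in rather than hard-coded, since the actual CIE 1931 values have to come from the published tables.
```c
/* Hedged sketch: integrate a sampled spectrum against the CIE 1931 matching
 * functions (simple Riemann sum) to obtain an XYZ triplet. No matching-function
 * values are hard-coded; they must be supplied from the standard tables. */
void spectrum_to_xyz(const float* spectrum, /* spectral samples */
                     const float* xbar,     /* CIE x-bar at the same wavelengths */
                     const float* ybar,     /* CIE y-bar */
                     const float* zbar,     /* CIE z-bar */
                     int n,                 /* number of wavelength samples */
                     float delta_lambda,    /* sample spacing in nm */
                     float xyz_out[3]) {
  float X = 0.0f, Y = 0.0f, Z = 0.0f;
  for (int i = 0; i < n; ++i) {
    X += spectrum[i] * xbar[i] * delta_lambda;
    Y += spectrum[i] * ybar[i] * delta_lambda;
    Z += spectrum[i] * zbar[i] * delta_lambda;
  }
  xyz_out[0] = X;
  xyz_out[1] = Y;
  xyz_out[2] = Z;
  /* Normalization (e.g. against the illuminant) and the choice of white point
   * are the "small" open questions mentioned above. */
}
```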
Demiurge
> Our format supports the full feature set of spectral OpenEXR, but compared to this lossless compression, we achieve file sizes that are 10 to 60 times smaller than their ZIP compressed counterparts. I mean... It's just a JXL file isn't it? Though I guess you can rename a PPM file to anything else and call it your own :P
2025-03-05 09:55:03
It's multiple jxl files, with a special metadata box
2025-03-05 09:55:50
Multiple greyscale files too if I read correctly
2025-03-05 09:57:44
If it was converted into 1 single jxl like they want, it could probably be even more efficient compression.
jonnyawsom3
2025-03-05 09:58:39
Ah
> In principle, JPEG XL supports having one main image and up to 255 sub-images, which sounds like a good match for c0 and f1, …, fn−1. Unfortunately, the current implementation in libjxl does not allow us to tweak the compression ratio and subsampling on a per-sub-image basis. Due to these limitations, we currently use one JPEG XL file per channel so that we have full control over the compression parameters. The main file stores c0 and all metadata. This file provides a natural way to preview spectral images in any software with JPEG XL support, since it shows the overall brightness.
Demiurge
2025-03-05 09:59:11
Since the rgb image will be psychovisually compressed and the spectral channels can utilize redundancy between the channels
jonnyawsom3
2025-03-05 09:59:46
It's a shame they didn't reach out. We could've used the lessons learnt from satellite multispectral imagery
Demiurge
_wb_ last bullet: we do already have `JxlEncoderSetExtraChannelDistance(JxlEncoderFrameSettings* frame_settings, size_t index, float distance)` which should allow you to set a different distance for each extra channel — maybe it wasn't there yet when the article was written
2025-03-05 10:00:22
Also they say they are using v0.11
It's a shame they didn't reach out. We could've used the lessons learnt from satellite multispectral imagery
2025-03-05 10:00:45
They say "thanks to libjxl developers for answering our questions"
2025-03-05 10:03:50
I thought jxl has a limit of 4096 extra channels
2025-03-05 10:04:01
Not 256
spider-mario
Demiurge Not 256
2025-03-05 10:05:22
https://discord.com/channels/794206087879852103/822105409312653333/1346803199792906312
Demiurge
2025-03-05 10:06:42
What's level 5 then, 1 extra channel?
_wb_
2025-03-05 10:39:22
Level 5 is max 4 extrachans
jonnyawsom3
2025-03-05 10:50:21
After reading through it, I see they were using resampling of the greyscale images, not 'actual' chroma subsampling inside the JXL spec. Still, if they were talking to libjxl developers, I'm surprised they weren't informed about the extra channel settings, or didn't use the 256 channels across fewer files
Demiurge
2025-03-05 11:03:35
Curious indeed
2025-03-06 12:19:58
https://tenor.com/view/harry-potter-curious-very-curious-thinking-gif-19711495
Traneptora
2025-03-06 03:44:36
tho actual monochromatic light is pretty wild
2025-03-06 03:44:41
try standing under a low-pressure sodium light
VcSaJen
2025-03-06 06:41:22
Microsoft's JPEG XL extension is getting some coverage: https://www.ghacks.net/2025/03/03/windows-11-how-to-add-jpeg-xl-support-officially/ Upcoming default support in Paint.NET is also mentioned
jonnyawsom3
2025-03-06 06:43:37
> Microsoft's Windows 11 operating system supports a wide range of file formats by default. JPEG XL, a royalty-free open image format has not been on that default list up until now.
Well... Not default, but it's official
2025-03-06 06:44:08
> The main downside right now is limited support for the format and a lack of hardware decoding support.
No image format in Windows has hardware decoding...
username
2025-03-06 06:45:20
regular JPEG 1 sorta kinda does although iirc it's just in the form of something with the color transforms or something
AccessViolation_
> Microsoft's Windows 11 operating system supports a wide range of file formats by default. JPEG XL, a royalty-free open image format has not been on that default list up until now. Well... Not default, but it's official
2025-03-06 06:45:31
"up until now"?
2025-03-06 06:46:00
the phrasing is a bit weird
2025-03-06 06:47:19
"Windows 11: How to add JPEG-XL support officially" what is that title? is this a tutorial for microsoft execs?
2025-03-06 06:47:25
this looks AI generated
username
2025-03-06 06:48:28
should there be some kinda easily digestible statement or summary of this situation to correct the record about this and calm people down about this?
jonnyawsom3
2025-03-06 06:56:51
I just replied debunking it, may take a while to show
2025-03-06 06:57:11
This was a quote from Jon previously
>>> Yes. To be clear:
- The Microsoft patent is not relevant to JPEG XL. It is about dynamic ANS while jxl uses only static ANS.
- Microsoft has never claimed it is relevant, also not when ISO and IEC explicitly asked if they have relevant patents. In principle they could of course still suddenly make such a claim, but it does weaken their case significantly.
- Even if it would be relevant, Microsoft has publicly stated that they will not enforce this patent against any FOSS implementations.
- Even if it would be relevant, and even if Microsoft is lying about their intentions, the patent was filed _after_ early versions of libjxl were already published, which makes prior art pretty straightforward to demonstrate.
- Even if all that is not enough: both Cloudinary and Google own defensive patents related to JPEG XL (made available royalty-free, of course), which have only one purpose: fight back against patent trolls. If Microsoft would litigate to make anyone pay to use JPEG XL, the defensive clause in the royalty-free patent grant kicks in and automatically terminates Cloudinary's and Google's patent license to specifically Microsoft, revoking Microsoft's rights, meaning Microsoft can no longer ship JPEG XL in any of its products, not even if they reimplement it from scratch. That is a pretty big stick. A patent troll like Sisvel may not care about defensive patents (they don't have any products to ship), but for a company like Microsoft it seems extremely foolish, from a business perspective, to activate the defensive clause.
2025-03-06 06:59:55
At least, I hope such a large comment posts :P
AccessViolation_
2025-03-06 07:11:35
oop i commented the first bullet point too
2025-03-06 07:11:43
well if anything that should help convince them
VcSaJen
AccessViolation_ this looks AI generated
2025-03-06 07:13:09
...Interesting take. How would AI know about the beta version of Paint.NET, for example?
AccessViolation_
2025-03-06 07:15:09
it's possible people generate the bulk and then make some changes as they see fit
2025-03-06 07:15:53
i'm not saying i think it's definitely ai generated, i'm saying it looks like it, i.e. has some traits that an ai generated article would also have
2025-03-06 07:31:31
I asked an LLM (Phind) about some current downsides of JPEG XL, and it said "needs dedicated hardware for optimal performance", citing an article from cloudinary which says the exact opposite
2025-03-06 07:31:53
amazing
jonnyawsom3
2025-03-06 07:32:45
Could try running it through the AI 'detectors'
AccessViolation_
2025-03-06 07:33:49
I don't have a good experience with those personally, but it was a while ago that I tested them. the one I used then detected some of my own Wikipedia edits as AI with 100% confidence
2025-03-06 07:35:28
I'm okay with knowing this is either AI or very careless journalism that happens to have the same issues generated text often does
VcSaJen
2025-03-06 07:39:46
It's fine calling it careless journalism. Just cut the unnecessary logic jumps.
_wb_
AccessViolation_ I asked an LLM (Phind) about some current downsides of JPEG XL, and it said "needs dedicated hardware for optimal performance", citing an article from cloudinary which says the exact opposite
2025-03-06 07:41:36
ChatGPT said similar things when I asked it to review the big paper on jxl. It also said I shouldn't use "obscure metrics like JND" but I should use well-established metrics like PSNR and SSIM.
couleur
VcSaJen Microsoft's JPEG XL extension is getting some coverage: https://www.ghacks.net/2025/03/03/windows-11-how-to-add-jpeg-xl-support-officially/ Upcoming default support in Paint.NET is also mentioned
2025-03-06 07:42:02
is there a reason for paint.net not to use jpegli
2025-03-06 07:42:13
i should probably move that to <#805176455658733570>
AccessViolation_
VcSaJen It's fine calling it careless journalism. Just cut the unnecessary logic jumps.
2025-03-06 07:42:37
I don't really get the point you're trying to make. besides, use of large language models probably falls in the general category of bad journalism. I could point out that the Paint.NET counterexample is in the only sentence that misspells JPEG XL the way people often do and large language models usually don't, but whatever
_wb_ ChatGPT said similar things when I asked it to review the big paper on jxl. It also said I shouldn't use "obscure metrics like JND" but I should use well-established metrics like PSNR and SSIM.
2025-03-06 07:46:14
it's always fun to question them about the specifics of topics you're an expert on. it makes you realize how bad of an idea it is to use them for information on topics you're not too familiar with
jonnyawsom3
2025-03-06 07:56:27
AI or misunderstanding aside, it's something haha
Meow
username should there be some kinda easily digestible statement or summary of this situation to correct the record about this and calm people down about this?
2025-03-07 02:33:40
What happened to the colours on your browser?
username
Meow What happened to the colours on your browser?
2025-03-07 02:35:36
the website doesn't manually define all of its colors, so some of them get handled as "native" colors, and for me my native colors are dark themed because I messed around with custom Windows theme stuff
Meow
2025-03-07 02:38:58
Another coverage https://www.windowslatest.com/2025/03/05/turn-on-jpeg-xl-jxl-in-windows-11-24h2-a-secret-new-feature/
2025-03-07 02:39:50
JPEG XL: a secret feature 🤫
2025-03-07 03:00:25
Added Windows 11 24H2 to the official support list of JPEG XL on Wikipedia
2025-03-07 03:01:02
Official extensions or plugins are included in this scope
Demiurge
2025-03-07 03:58:33
So secret
2025-03-07 03:58:42
Don't tell anyone 🤫
jonnyawsom3
2025-03-07 04:07:20
A comment says their store page reads
> You can set JPEG XL images as a desktop background and open JPEG XL images in Windows Media Player Legacy. Support for JPEG X is coming to additional apps.
Mine says
> It installs a plug-in used by Windows apps, such as File Explorer, to decode JPEG XL images. Please use File Explorer to view thumbnails after installing the extension.
2025-03-07 04:13:23
Sounds to me like whoever made the plugin is rushing to explain why it doesn't work now that news is spreading
Demiurge
2025-03-07 04:17:08
lol just rename the extension to .jpg and magically it works <:kekw:808717074305122316>
2025-03-07 04:17:34
The bug is in the windows file suffix database not the codec
CrushedAsian255
Demiurge Don't tell anyone 🤫
2025-03-07 11:04:02
I told you guys, JPEG XL is a secret military experiment! Wait, I wasn’t meant to say that out loud?
Demiurge
2025-03-07 11:21:07
There will probably be a fix to smooth out some of the quirks in the next windows system update... probably the next feature update.
2025-03-07 11:21:33
It's really cool they actually pulled the trigger in the first place
couleur
2025-03-07 02:15:49
do you guys know any android gallery/image/video viewers that support jxl
RaveSteel
2025-03-07 02:32:03
Fossify Gallery
spider-mario
2025-03-07 02:50:33
seems it has just reached all the French-speaking websites
couleur
2025-03-07 02:50:35
is that available in pre release or stable
spider-mario seems it has just reached all the French-speaking websites
2025-03-07 02:51:24
what kind of google alerte service is this? le intéressant
spider-mario
2025-03-07 02:51:56
you receive e-mails with a summary of new search results for a given query https://www.google.ch/alerts
2025-03-07 02:56:35
oh, I accidentally had that alert set to “French only”
2025-03-07 02:56:35
ugh
couleur
spider-mario oh, I accidentally had that alert set to “French only”
2025-03-07 03:00:42
do you know what they mean by auto source
2025-03-07 03:00:51
must be different from all source
2025-03-07 03:01:10
(in the "sources" field)
spider-mario
2025-03-07 03:01:21
not sure
BlueSwordM
Could try running it through the AI 'detectors'
2025-03-07 11:37:09
Wouldn't exactly work well for someone who actually knows how to use LLMs.
2025-03-07 11:37:13
They mainly look for slop, but if you finetune an LLM to copy your writing style and try to suppress slop, you'll get much further along.
Demiurge
_wb_ ChatGPT said similar things when I asked it to review the big paper on jxl. It also said I shouldn't use "obscure metrics like JND" but I should use well-established metrics like PSNR and SSIM.
2025-03-08 03:16:58
Anyone got a quick link to Jon's newest paper?
2025-03-08 03:17:22
I was thinking of critiquing it
A homosapien
2025-03-08 03:20:05
Here: https://drive.google.com/file/d/1KHxIms3bfzr_4qYLG_Jqpb41VkzZC3cV/view?usp=sharing
Demiurge
2025-03-08 03:28:04
ty
2025-03-08 03:38:29
What about a channel or thread for discussing the paper?
2025-03-08 03:41:21
https://discord.com/channels/794206087879852103/1347775789139230780
jonnyawsom3
_wb_ the tree for e11 is kind of crazy large for such a small image, here's a small part of it
2025-03-08 10:01:43
I'm assuming it's a very small cost storing the same predictor for so many contexts like this? At first I thought it was simply branching for no reason, then I noticed the colored edges weren't the same shade of grey
2025-03-08 10:02:16
...After I bumped up the resolution of the PDF by tenfold, those are some small big trees haha
🐺 BONES ⚡
2025-03-10 06:35:46
Fedora 43 considering a proposal for using JPEG-XL for the default wallpaper format https://www.phoronix.com/news/Fedora-43-RPM-6.0-Early-Changes (posted to wrong channel initially)
jonnyawsom3
2025-03-10 06:45:59
The comment linked to for the size reduction used an unoptimized PNG; it's actually a lot closer once you optimize it: `2944508 bytes (45.40% smaller): f42-01-day.png`. JXL is still around 1/3 smaller though, just not the 1/2 like suggested
Meow
🐺 BONES ⚡ Fedora 43 considering a proposal for using JPEG-XL for the default wallpaper format https://www.phoronix.com/news/Fedora-43-RPM-6.0-Early-Changes (posted to wrong channel initially)
2025-03-11 01:43:24
JPEG**-**XL😬
Demiurge
2025-03-11 03:57:49
Oh come now. A non breaking space vs a hyphen vs <:JXL:805850130203934781> or just spelling it out like jpegxl, at the end of the day it's all the same :)
2025-03-11 04:07:15
A hyphen has historically been the most common and acceptable way to conjoin words
CrushedAsian255
2025-03-11 05:49:22
Wait till Sony and Philips release JPEG+XL
Meow
CrushedAsian255 Wait till Sony and Philips release JPEG+XL
2025-03-11 05:52:52
And ultimately JPEG±XL
CrushedAsian255
2025-03-11 06:16:45
JPEG×XL
spider-mario
2025-03-11 07:05:45
I'm surprised we haven't seen JPEG/XL yet
_wb_
2025-03-11 07:21:58
or JPEG 3.0
2025-03-11 07:22:31
considering JPEG 2000 is "JP2" and kind of considered a "JPEG version 2"
2025-03-11 07:23:58
then again if you count LS, LS 2, XT, XR, XS then maybe the counter should already be at JPEG 8 or so now
Demiurge
2025-03-11 07:38:22
I would just like to interject for a moment. What you are referring to as jpegxl is in fact, guhnoo slash XL.
spider-mario
_wb_ then again if you count LS, LS 2, XT, XR, XS then maybe the counter should already be at JPEG 8 or so now
2025-03-11 07:39:24
not to be confused with libjpeg 8
Demiurge
2025-03-11 07:39:28
By itself, XL is only one component of a complete GNU system.
jonnyawsom3
spider-mario not to be confused with libjpeg 8
2025-03-11 07:50:33
Which is binary compatible with jpegli, which is a subset of JPEG XL... Not that the DLLs are available anywhere. It comes full circle
Meow
_wb_ or JPEG 3.0
2025-03-11 07:54:36
Just rename to JPEG 40 in 2032
_wb_
2025-03-11 07:55:47
JPEG `0xL`
2025-03-11 07:56:08
very confusing malformed hexadecimal
monad
2025-03-11 09:08:02
JPEG × L
AccessViolation_
Demiurge Oh come now. A non breaking space vs a hyphen vs <:JXL:805850130203934781> or just spelling it out like jpegxl, at the end of the day it's all the same :)
2025-03-11 10:24:25
jonnyawsom3
2025-03-12 11:47:05
Huh https://techbookfest.org/product/kPqEF2hKyKBrcAHcKazFK3?productVariantID=fEiVwDjgz1XkYaViaY95At
RaveSteel
2025-03-12 11:59:48
lmao what
Meow
2025-03-12 02:01:07
2nd edition 👀
_wb_
2025-03-12 03:24:18
would this be a fair/correct description of OS support for jxl?
couleur
2025-03-12 03:26:50
don't forget browsers in another category
5peak
_wb_ would this be a fair/correct description of OS support for jxl?
2025-03-12 03:28:44
100 years already? In binary.
_wb_
2025-03-12 03:34:31
In the "adoption / application domains" section I already had subsections on web browsers, image tools/libraries, photography (DNG), scanners (PDF/R), medical (DICOM), and GIS, but adding something about operating systems seems to make sense, especially now it is being added to Windows too.
5peak
2025-03-12 04:30:34
Hey jpegxl.info owner, year++ ©2024 The JPEG XL Community
jonnyawsom3
2025-03-12 04:36:10
Huh
couleur
2025-03-12 04:36:42
should've gone with https://getfullyear.com/ SMH