JPEG XL

jxl

Anything JPEG XL related

A homosapien
2025-01-02 11:24:56
how old are we talking here?
CrushedAsian255
Orum Well back when I tested it in 2021, it did much better than that. This was 1st generation, -d 0.5:
2025-01-02 12:34:45
I did a similar generation loss experiment with PNG, it did very well preserving the details as well as smooth gradients, even after 1000 generations
2025-01-02 12:35:00
Very impressive and resilient format
2025-01-02 12:38:52
PPM -> PNG -> PPM even round trips!
Quackdoc
A homosapien how old are we talking here?
2025-01-02 12:47:18
uhhh, IBM was still a thing old
2025-01-02 12:47:35
I don't have exact specs on hand as I need to wire them up to a power source
2025-01-02 12:47:35
I know one is a sandybridge and one is an ivy bridge tho
_wb_
CrushedAsian255 so is gaborish like a laid-back deblocking filter and epf is like a full on one?
2025-01-02 07:42:56
Gaborish is a 3x3 uniform smoothing at decode time which gets compensated by a 5x5 uniform sharpening at encode time. EPF is decode time only (not compensated at encode time) and is like a bilateral filter, not smoothing edges away. It is also not uniform but the strength is locally signaled. It can reach a 9x9 region around each pixel (but it's not a simple convolution like gaborish).
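A minimal numerical sketch of that smooth-at-decode / sharpen-at-encode pairing (illustrative Python; the neighbour weight is made up, and the real Gaborish weights and libjxl's 5x5 inverse differ):
```python
import numpy as np

# Decode-time 3x3 smoothing kernel with a hypothetical neighbour weight w
# (weights sum to 1, so flat regions are unchanged).
w = 0.05
k3 = np.full((3, 3), w)
k3[1, 1] = 1 - 8 * w

# On a periodic domain the convolution is exactly invertible via FFT.
N = 64
K = np.zeros((N, N))
K[:3, :3] = k3
K = np.roll(K, (-1, -1), axis=(0, 1))   # centre the kernel on (0, 0)
F = np.fft.fft2(K)

img = np.random.rand(N, N)
smoothed = np.real(np.fft.ifft2(np.fft.fft2(img) * F))
recovered = np.real(np.fft.ifft2(np.fft.fft2(smoothed) / F))
print(np.abs(recovered - img).max())    # ~1e-15: an exact inverse exists...

# ...but that inverse has unbounded spatial support. An encoder must truncate
# it to a small window (5x5 in libjxl's case), so the compensation is only
# approximate, which is what the "is the 5x5 truly the correct inverse"
# question below is about.
inv = np.real(np.fft.ifft2(1.0 / F))
inv5 = np.roll(inv, (2, 2), axis=(0, 1))[:5, :5]
print(inv5.round(4))
```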
Orum
2025-01-02 07:51:57
I feel like there are better ways of deblocking ||but they might be patent encumbered 💀||
veluca
_wb_ Gaborish is a 3x3 uniform smoothing at decode time which gets compensated by a 5x5 uniform sharpening at encode time. EPF is decode time only (not compensated at encode time) and is like a bilateral filter, not smoothing edges away. It is also not uniform but the strength is locally signaled. It can reach a 9x9 region around each pixel (but it's not a simple convolution like gaborish).
2025-01-02 07:53:27
I seriously wonder how much slower a 7x7 sharpening would be 😄 (or for that matter, whether our 5x5 sharpening is truly the correct inverse)
Demiurge
2025-01-02 08:35:02
jxr has filters that are losslessly reversible
2025-01-02 08:35:56
Not sure why though
_wb_
veluca I seriously wonder how much slower a 7x7 sharpening would be 😄 (or for that matter, whether our 5x5 sharpening is truly the correct inverse)
2025-01-02 08:48:48
Ideally encoder gaborish is only the true inverse as distance goes to zero, and does slightly more sharpening than just the inverse as distance goes higher, to compensate for the slight blurring caused by HF DCT quantization.
veluca
2025-01-02 08:50:06
Yes, that would be even better, interpolating between sharper and true inverse as distance goes to 0
jonnyawsom3
2025-01-02 09:17:10
I think it was mentioned a long time ago, but having options interpolate with distance rather than hard cutoffs or being strictly enabled/disabled could help a lot. Similar thing with image resolution and encode effort/group size, etc.
Demiurge
2025-01-02 10:20:34
hf DCT quantization doesn't necessarily cause blurring though, right? It could also cause sharpness or noise
_wb_
2025-01-02 11:23:49
Depends on the rounding, but generally you have a large bucket around zero so a bit more blurring than noise/sharpening. But yeah, if you round heavily towards zero it's blurrier than if you round in a more balanced way
CrushedAsian255
_wb_ Depends on the rounding, but generally you have a large bucket around zero so a bit more blurring than noise/sharpening. But yeah, if you round heavily towards zero it's blurrier than if you round in a more balanced way
2025-01-03 12:13:35
Could you emphasise sharpening and noise by rounding away from zero?
A homosapien
2025-01-03 12:15:07
I feel like there could be some potential psychovisual optimization there
Demiurge
2025-01-03 12:57:28
There definitely is. It's the basis for psycho rdo like in x264
2025-01-03 01:00:38
Basically the theory is you avoid all the nasty blurring-smearing by using a rougher/coarser DCT quantization that tries to preserve more of the spectral energy, especially in the higher-frequency bins, rather than "rounding to zero" which results in a fugly smudge.
2025-01-03 01:02:12
You end up with more sharpness, grain, and potentially unwanted noise, but at least it looks better and preserves more visual information than a blur or a smear that destroys the visual information behind it.
2025-01-03 01:02:58
It's great for natural images but maybe not for synthetic flat high-contrast images.
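A toy illustration of that rounding trade-off (Python; the deadzone quantizer and Laplacian coefficient model are generic textbook stand-ins, not libjxl's actual RDO):
```python
import numpy as np

def quantize(coeffs, step, deadzone):
    # deadzone = 0.5 is balanced rounding; smaller values enlarge the
    # bucket around zero, i.e. round towards zero more aggressively.
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step + deadzone)

rng = np.random.default_rng(0)
hf = rng.laplace(scale=1.0, size=100_000)   # HF DCT coefficients: ~Laplacian
step = 2.0
for dz in (0.2, 0.5):
    q = quantize(hf, step, dz)
    rec = q * step
    kept = np.sum(rec ** 2) / np.sum(hf ** 2)
    print(f"deadzone={dz}: zeroed={np.mean(q == 0):.1%}, "
          f"kept spectral energy={kept:.1%}")
# Rounding towards zero sheds HF energy (reads as blur); balanced rounding
# keeps it, even slightly over-representing it, which reads as sharpness,
# grain, and potentially noise.
```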
BlueSwordM
Orum I feel like there are better ways of deblocking ||but they might be patent encumbered 💀||
2025-01-03 01:30:05
Well, closer to "compute intensive" 🙂
CrushedAsian255
2025-01-03 02:15:32
with jxl are the quantisation weights defined per-frame? per-image? per-group? per-varblock?
Tirr
2025-01-03 03:07:05
per-frame quant matrices, and per-varblock quant strength (`HfMul`)
CrushedAsian255
2025-01-03 03:32:24
Is the quant strength a scalar that multiplies each value in the var block’s decoded coefficients
2025-01-03 03:32:44
Like does a higher value = lower precision ?
Tirr
2025-01-03 03:42:22
higher value = higher precision it seems
CrushedAsian255
2025-01-03 03:51:23
whats the equation?
Tirr
2025-01-03 03:55:16
Section I.5.3: `Mul = (1 << 16) / (quantizer.global_scale * HfMul)`
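A tiny sketch of the relation just quoted (Python; the concrete global_scale value is made up). Higher HfMul gives a smaller Mul, i.e. a finer dequantization step, matching "higher value = higher precision":
```python
def mul(global_scale: int, hf_mul: int) -> float:
    # Mul = (1 << 16) / (quantizer.global_scale * HfMul), per the spec excerpt.
    return (1 << 16) / (global_scale * hf_mul)

for hf_mul in (1, 2, 8):
    print(hf_mul, mul(2048, hf_mul))
# 1 -> 32.0, 2 -> 16.0, 8 -> 4.0: a varblock signalling a larger HfMul gets
# its HF coefficients quantized on a finer grid.
```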
screwball
2025-01-03 07:54:52
i hope blender gets support for JPEG XL
2025-01-03 07:55:48
it gets my image sequences down to 1/10th the size with microscopic changes
jonnyawsom3
2025-01-03 01:06:27
Someone actually already worked to add support... Then disappeared almost immediately after, twice https://projects.blender.org/blender/blender/pulls/118989 https://projects.blender.org/blender/blender/pulls/119257
screwball
jonnyawsom3 Someone actually already worked to add support... Then disappeared almost immediately after, twice https://projects.blender.org/blender/blender/pulls/118989 https://projects.blender.org/blender/blender/pulls/119257
2025-01-03 01:10:49
And it hasn’t been merged?
2025-01-03 01:10:51
Why?
2025-01-03 01:11:15
That sucks
jonnyawsom3
2025-01-03 01:11:22
It wasn't finished, and marked as a draft
2025-01-03 01:12:28
It can only merge on a major revision, due to requiring libjxl. So it's essentially forced to wait for both a proper maintainer and the next release cycle in a few months
screwball
jonnyawsom3 It can only merge on a major revision, due to requiring libjxl. So it's essentially forced to wait for both a proper maintainer and the next release cycle in a few months
2025-01-03 01:14:07
Think there’s a way to get some attention towards it from the blender devs or no? I hope it gets into 4.4 but it probably won’t
jonnyawsom3
screwball Think there’s a way to get some attention towards it from the blender devs or no? I hope it gets into 4.4 but it probably won’t
2025-01-03 01:17:11
They won't work on it themselves, someone else needs to do the work. It shouldn't be hard, since that PR already has the base made, and it's 'just' enabling JXL in the image library, then adding the option to the menus
username
2025-01-03 02:45:33
tbh if JXL gets added to Blender it should really have an option to export with a depth channel
jonnyawsom3
2025-01-03 03:38:08
Well, it would likely use a lot of the hooks for EXR encoding, if I'm right it has all the same features, at least that Blender uses
2025-01-03 03:39:52
With the tests I did for RawTherapee, maybe I should see what jpegli could do for JPEGs in Blender too... Though they just use an image library so they probably can't use it.....
Orum
2025-01-04 12:33:59
using an image library should make it easier
2025-01-04 12:34:07
kimageformats already supports JXL
lonjil
2025-01-04 12:50:20
Blender uses OpenImageIO which already supports JXL.
jonnyawsom3
2025-01-04 05:03:12
Yeah, like I said, it's 'just' enabling libjxl during compile and plugging the settings menu in
Quackdoc
Orum kimageformats already supports JXL
2025-01-04 05:50:47
I would not trust kimageformats for the degree of quality necessary for something like blender
jonnyawsom3
2025-01-04 06:42:13
https://openimageio.readthedocs.io/en/v3.0.1.0/builtinplugins.html#jpeg-xl
2025-01-04 06:42:25
You what
2025-01-04 06:42:54
So, the decode speed setting is so horribly broken, people think it's a second effort setting
Quackdoc
2025-01-04 06:43:56
isn't this supposed to be "faster_decode"
2025-01-04 06:44:16
lmao
jonnyawsom3 You what
2025-01-04 06:44:38
this can be reported here https://github.com/AcademySoftwareFoundation/OpenImageIO/issues and very much should be lol
2025-01-04 06:47:13
if you don't report I will lel
jonnyawsom3
2025-01-04 06:49:16
I was reading trying to find if it was limited to ints or anything, but nope. Seems it does have full support, so just need to plumb it into Blender
Quackdoc
2025-01-04 06:49:58
oiio is generally a fairly reliable tool, I had been using it to convert out of range EXR to JXL and had no issues
jonnyawsom3
2025-01-04 06:51:02
I'll write up a quick issue
2025-01-04 07:02:01
https://github.com/AcademySoftwareFoundation/OpenImageIO/issues/4584
2025-01-04 07:02:38
Other than cjxl itself, there's not many places mentioning faster_decoding, so maybe Jon or another dev can leave a comment in the morning to confirm it
A homosapien
2025-01-04 08:03:43
As shown in my benchmarks, faster decoding is either bugged or a footgun for Modular images. There *might* be a case to be made for VarDCT but honestly I find the decode speeds fast enough already.
spider-mario
2025-01-04 08:04:03
one would think that naming it `faster_decoding` should have been enough
Orum
A homosapien As shown in my benchmarks, faster decoding is either bugged or a footgun for Modular images. There *might* be a case to made for VarDCT but honestly I find the decode speeds fast enough already.
2025-01-04 08:04:29
it works for me 🤷‍♂️
2025-01-04 08:04:50
I like it a lot, as the size increase is marginal but the decode boost can be significant
A homosapien
2025-01-04 08:06:41
That's true, a 2% increase in file size is worth it if decoding speeds increase by 20%.
2025-01-04 08:14:02
But for modular mode it's better to just use effort 1 rather than faster decoding 3
2025-01-04 08:14:49
At least according to my tests, I should post my results in <#803645746661425173>
Orum
2025-01-04 08:18:14
I only tested for VDCT
A homosapien
2025-01-04 08:22:03
For VDCT I agree, using fd is a worthwhile trade-off
veluca
A homosapien But for modular mode its better to just use effort 1 rather than faster decoding 3
2025-01-04 08:31:23
tbf effort 1 is much newer 🙂
A homosapien
2025-01-04 08:33:58
Makes sense
Quackdoc
2025-01-04 08:34:17
I still need to bisect to see if the faster decoding stuff is just a natural progression or if there was a regression. That being said, as long as it's fast and good it doesn't matter much; I'll probably test chimera for image sequences again
A homosapien
2025-01-04 08:34:56
Lossy or lossless?
Quackdoc
2025-01-04 08:35:00
lossy
2025-01-04 08:35:16
i used it for editing videos
A homosapien
2025-01-04 08:36:05
It should be fine, my benchmarks show that fd is worth using
Demiurge
2025-01-04 12:26:01
I haven't noticed a difference in decode speed
Traneptora
2025-01-04 04:52:06
I have a jxl file which is a reconstructed JPEG
2025-01-04 04:52:18
named `foo.jxl`
2025-01-04 04:52:24
if I run `djxl foo.jxl foo-1.png`
2025-01-04 04:52:28
and I also run
2025-01-04 04:52:40
`djxl foo.jxl foo.jpg; djpegli foo.jpg foo-2.png`
2025-01-04 04:52:53
should I expect to get identical pixel data between `foo-1.png` and `foo-2.png`?
2025-01-04 04:53:35
I ask because I'm not. I'm getting a PAE (using magick compare) of 23/255, which is fairly large
2025-01-04 04:53:37
i.e. not roundoff
2025-01-04 04:53:51
(strictly speaking it reports 5911/65535 but these are 8-bit PNGs so I scaled by 257)
2025-01-04 04:54:22
both PNG files have an iCCP attached, which is the same profile, one embedded in the original jxl/jpeg
2025-01-04 04:54:39
(the iCCP chunks have the same crc32 and length so I'm assuming they're actually the same profile)
veluca
2025-01-04 04:55:34
I don't think you should expect that, no (I don't think djpegli and djxl interpret the 8x8 DCT the exact same way -- the DCT is the same, but they do different dequant tricks AFAIU)
jonnyawsom3
2025-01-04 04:56:31
The JXL will use CfL by default. You can disable it and might get closer results, but the JXL is higher quality
Traneptora
2025-01-04 04:56:41
is it?
2025-01-04 04:57:01
the JXL spec includes a jpeg decoder stealthily built in, in the sense that jpeg -> jxl (lossless) is fully specified, and jxl -> pixels is also fully specified, so 18181-1 and 18181-2 contain a jpeg decoder that you can always leverage by using `cjxl -j 1 foo.jpg foo.jxl; djxl foo.jxl foo.png`
2025-01-04 04:57:33
I was under the impression that djpegli was just, well, *this* decoder
jonnyawsom3 The JXL will use CfL by default. You can disable it and might get closer results, but the JXL is higher quality
2025-01-04 04:58:04
so converting to JXL is higher quality? then why would I ever use djpegli instead of `cjxl + djxl`
jonnyawsom3
2025-01-04 04:58:50
You reminded me, I actually stumbled across this last night https://github.com/libjxl/libjxl/commit/538c77b59ca708a46e9a8045d6673108507e65e1
2025-01-04 04:59:01
Suggesting at one point, they were identical
2025-01-04 05:02:50
Differences aren't huge, but they do exist https://discord.com/channels/794206087879852103/804324493420920833/1301645879979282442
Traneptora
2025-01-04 05:07:32
<@179701849576833024> since you're online... ffmpeg just got animated JXL encode support (via libjxl)
2025-01-04 05:07:33
https://github.com/FFmpeg/FFmpeg/commit/f3c408264554211b7a4c729d5fe482d633bac01a
2025-01-04 05:07:35
I merged it last night
2025-01-04 05:08:44
```
ffmpeg -i video.mkv -c libjxl_anim -f image2pipe animated.jxl
```
2025-01-04 05:10:24
https://git.ffmpeg.org/gitweb/ffmpeg.git/commit/f3c408264554211b7a4c729d5fe482d633bac01a
_wb_
Traneptora so converting to JXL is higher quality? then why would I ever use djpegli instead of `cjxl + djxl`
2025-01-04 05:18:54
I don't think it's higher quality than djpegli. Chroma from Luma introduces some small difference but I don't think there's reason to assume it will be towards 'better'. I am not sure where the other differences come from but I can imagine that the dequant or IDCT is not completely identical between libjxl and jpegli.
Traneptora
Traneptora https://git.ffmpeg.org/gitweb/ffmpeg.git/commit/f3c408264554211b7a4c729d5fe482d633bac01a
2025-01-04 05:19:21
the patch is co-authored by Zsolt Vadász
2025-01-04 05:19:32
I believe they are in this discord but I'm having a hard time finding their discord username
2025-01-04 05:19:35
so if that's you plz lmk
veluca
Traneptora <@179701849576833024> since you're online... ffmpeg just got animated JXL encode support (via libjxl)
2025-01-04 05:27:22
yay 😄
VcSaJen
Traneptora `ffmpeg -i video.mkv -c libjxl_anim -f image2pipe animated.jxl`
2025-01-04 05:51:09
What would happen if you don't specify -c libjxl_anim?
Traneptora
VcSaJen What would happen if you don't specify -c libjxl_anim?
2025-01-04 06:00:30
it'll automatically determine from filename and it'll try to encode it as still JXL images concatenated together
VcSaJen
2025-01-04 06:04:12
Does it do the same for GIF?
RaveSteel
2025-01-04 06:04:13
Are there any plans to further improve JXL decoding with FFmpeg? I noticed that using the -benchmark flag shows a significantly higher decode time than using djxl or jxl-oxide
jonnyawsom3
Traneptora it'll automatically determine from filename and it'll try to encode it as still JXL images concatenated together
2025-01-04 06:19:41
Would there be any downside to defaulting to animated?
Traneptora
jonnyawsom3 Would there be any downside to defaulting to animated?
2025-01-04 06:25:57
because single-image animations are legal but usually not what you want
2025-01-04 06:26:09
e.g. if you use `ffmpeg -i input.png output.jxl`
jonnyawsom3
2025-01-04 06:26:32
Riight
Traneptora
2025-01-04 06:26:34
codec choice happens before you know how many frames are in input
2025-01-04 06:26:41
so you can't predicate it based on that
VcSaJen Does it do the same for GIF?
2025-01-04 06:30:12
ngl, idk
Razor54672
2025-01-04 09:23:48
long time no see bois
DZgas Ж
Traneptora https://github.com/FFmpeg/FFmpeg/commit/f3c408264554211b7a4c729d5fe482d633bac01a
2025-01-04 10:42:15
<:Poggers:805392625934663710>
jonnyawsom3
Traneptora https://github.com/FFmpeg/FFmpeg/commit/f3c408264554211b7a4c729d5fe482d633bac01a
2025-01-05 12:42:29
Made a Reddit post announcing that, since no one else did yet https://redd.it/1htt3se
2025-01-05 12:43:10
With a comment thanking you both for the commit
Traneptora
2025-01-07 07:31:39
How would one explain rANS in simple mathematical terms? (simple being relative here)
2025-01-07 07:31:52
suppose someone fully understands the concept of information entropy
veluca
2025-01-07 07:31:54
explain to whom? 🙂
Traneptora
2025-01-07 07:32:04
and fully understands stateless prefix coding (e.g. huffman)
2025-01-07 07:32:17
there's a class at my uni for undergraduates that goes over these concepts
2025-01-07 07:32:33
you prove things like Kraft's inequality, huffman's theorem, etc.
2025-01-07 07:32:44
but it stops there. it doesn't have any discussion of Finite State Entropy
veluca
2025-01-07 07:32:51
I'd probably try to explain "classic" arithmetic coding first, and then explain how rANS is similar-but-different
Traneptora
2025-01-07 07:32:56
to *that* audience, how would you explain it? conceptually by preserving state you can achieve better results than something stateless
veluca
2025-01-07 07:34:06
I personally find AC to be somewhat easier to explain *shrug* but I guess ANS without state renormalization is also fairly simple
_wb_
2025-01-07 09:05:35
Prefix coding requires at least one bit per symbol, and an integer number of bits for each symbol. That means that if you have 90% zeroes, you still have to use 1 bit for them, which corresponds to 50%. AC/ANS overcome this limitation basically by not encoding one symbol at a time but by modifying state and flushing only when needed. That means that in case of encoding a symbol with 90% probability, the state will only require flushing rarely.

In AC, the basic idea is that you divide the unit interval according to the symbol distribution, and then recursively subdivide each subinterval again in the same way. Then every sequence of symbols corresponds to a small interval. The bitstream signals a single number with enough precision to know which interval it is. The result is that more probable symbols cause less of an interval reduction, so you need to signal fewer bits for those.

In the edge case of two symbols each with probability 50%, this boils down to the same thing as just signaling them using one bit per symbol. But if you e.g. have 0 with probability 90%, then after writing 6 zeroes the interval is still only reduced to [0, 0.9^6 = 0.531...], meaning that if the bitstream starts with one 0 bit, it implies that at least the first 6 symbols will be zeroes.
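A toy rendering of that interval argument (Python; just the P(0) = 0.9 example from the message):
```python
# Encoding a run of zeroes with P(0) = 0.9: each "0" keeps the left 90%
# of the current interval, so the interval shrinks only slowly.
lo, hi = 0.0, 1.0
p0 = 0.9
for n in range(1, 7):
    hi = lo + (hi - lo) * p0
    print(n, round(hi - lo, 6))     # width = 0.9 ** n; 0.531441 after 6 zeroes
# A "1" after k zeroes occupies [0.9**(k+1), 0.9**k), which lies entirely
# above 0.5 for k < 6. So any codeword starting with a 0 bit (a number in
# [0, 0.5)) already implies that the first 6 symbols are zeroes.
```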
Jarek Duda
Traneptora but it stops there. it doesn't have any discussion of Finite State Entropy
2025-01-08 01:42:46
To relate prefix codes with tANS (FSE), there is this fast Huffman decoder with a buffer of a fixed number of bits - just shifting the unused bits of the buffer, while tANS would additionally include fractional bits in this shift (modify newX). tANS is defined by the symbol spread: assigning a symbol to each position in such a buffer; spreading them in power-of-2 size blocks, we would get such a Huffman decoder (bottom right)
Traneptora
_wb_ Prefix coding requires at least one bit per symbol, and an integer number of bits for each symbol. AC/ANS overcome this limitation basically by not encoding one symbol at a time but by modifying state and flushing only when needed. [...]
2025-01-08 01:48:24
This is still fundamentally stateless though, isn't it?
Jarek Duda To relate prefix codes with tANS (FSE), there is this fast Huffman decoder with buffer of fixed number of bits - just shifting the unused bits of the buffer, while tANS would additionally include fractional bits in this shift (modify newX). tANS is defined by symbol spread: assigning a symbol to each position in such buffer, spreading them is power-of-2 size blocks we would get such Huffman decoder (bottom right)
2025-01-08 01:49:36
I'm confused about the mathematics behind FSE. The concept makes sense, but how does the math behind it end up working?
Jarek Duda
Traneptora I'm confused about the mathematics behind FSE. The concept makes sense, but how does the math behind it end up working?
2025-01-08 01:54:48
looks like everybody has a different understanding ( https://encode.su/threads/2078-List-of-Asymmetric-Numeral-Systems-implementations ), here is the original:
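For the "how does the math end up working" question, a single-state rANS toy in Python (illustrative only: real coders keep the state bounded and stream bits out during renormalization, which this omits; the frequency table is made up):
```python
PRECISION = 4
M = 1 << PRECISION                    # frequencies sum to M = 16
freq = {"a": 12, "b": 3, "c": 1}      # i.e. P(a) = 12/16, P(b) = 3/16, ...
start, acc = {}, 0
for s in freq:                        # cumulative frequency table
    start[s], acc = acc, acc + freq[s]

def encode(symbols):
    x = 1                             # state grows ~ -log2 P(s) bits per symbol
    for s in symbols:
        x = (x // freq[s]) * M + start[s] + (x % freq[s])
    return x

def decode(x, n):
    out = []
    for _ in range(n):
        r = x % M                     # the low bits identify the symbol...
        s = next(k for k in freq if start[k] <= r < start[k] + freq[k])
        x = freq[s] * (x // M) + r - start[s]   # ...and the step is undone
        out.append(s)
    return x, "".join(reversed(out))  # rANS decodes in reverse (LIFO)

x = encode("aabacaaa")
print(x, decode(x, 8))                # 667 (~9.4 bits vs ~8.9 bits of content)
```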
dkam
2025-01-09 08:09:36
Hello, I have a JPEG image which cjxl successfully converts to jxl, but which isn't viewable in Safari or Preview. Where's the best place to troubleshoot what's happening? I'm happy to upload the image.
dkam Hello, I have a JPEG image which cjxl successfully converts to jxl, but which isn't viewable in Safari or Preview. Where's the best place to trouble shoot what's happening? I'm happy to upload the image.
2025-01-09 08:18:59
If I use image magick, then it successfully creates a jxl - using `magick 19617.jpeg -define jxl:lossless=true 19617_mk.jxl`. For cjxl, I'm using version `cjxl v0.11.1 0.11.1 [NEON]` And magick reports it's version as `Version: ImageMagick 7.1.1-41 Q16-HDRI aarch64 22504` I'm on a Mac using MacOS 15.2
jonnyawsom3
2025-01-09 08:21:04
magick uses normal lossy/lossless encoding, while cjxl uses JPEG Transcoding, so it sounds like there's an issue with the JPEG itself that's causing the JXL to not load. Could you zip the JPEG and the cjxl JXL file then send them here?
dkam
dkam If I use image magick, then it successfully creates a jxl - using `magick 19617.jpeg -define jxl:lossless=true 19617_mk.jxl`. For cjxl, I'm using version `cjxl v0.11.1 0.11.1 [NEON]` And magick reports it's version as `Version: ImageMagick 7.1.1-41 Q16-HDRI aarch64 22504` I'm on a Mac using MacOS 15.2
2025-01-09 08:23:12
Huh - If I use ImageMagick to convert the 'not working' jxl back into a jpeg, it seems to be visible again.
jonnyawsom3 magick uses normal lossy/lossless encoding, while cjxl uses JPEG Transcoding, so it sounds like there's an issue with the JPEG itself that's causing the JXL to not load. Could you zip the JPEG and the cjxl JXL file then send them here?
2025-01-09 08:23:47
Doesn't `-define jxl:lossless=true` actually make it lossless?
jonnyawsom3
2025-01-09 08:23:48
Again, that's because it's not doing JPEG Transcoding, it'll just be making a new file instead
dkam Doesn't `-define jxl:lossless=true` actually make it lossless?
2025-01-09 08:25:38
magick turns the jpeg into pixels, and then feeds that to the JXL encoder. cjxl reads the actual JPEG file itself, and reuses the data inside. The first *is* lossless, but you can't get the same JPEG back out and it'll usually be larger and worse quality if you turn it back into a JPEG. The cjxl and djxl way will give you exactly the same file as you first put in, down to the same bit
2025-01-09 08:26:28
It's... Annoying, to say the least
dkam
2025-01-09 08:27:50
Yeah, that's confusing. The way that option's defined `jxl:lossless=true` makes it feel like it's doing the transcoding thing.
jonnyawsom3
2025-01-09 08:28:54
Most tools like Magick, FFmpeg and CDNs always turn things into pixels so the different encoders can always open them, but that means libjxl can't get the original data
2025-01-09 08:30:06
cjxl flat out refuses or at least errors if it can't generate a **truly** lossless JPEG Transcode, so it's frustrating the others don't at least warn about it
_wb_
2025-01-09 08:34:11
A lot of pipelines work with pixels, not bitstreams, and it would be very nontrivial to change that. But yes, it would be nice if ImageMagick would add a special case that makes `convert foo.jpg foo.jxl` do the right thing.
jonnyawsom3
2025-01-09 08:34:11
<@350116203190157314> can you try opening this file in Safari/Preview?
dkam
2025-01-09 08:34:47
Yep - works fine
jonnyawsom3
2025-01-09 08:35:20
Seems like the issue is the ICC profile then, since all I did was remove it
2025-01-09 08:36:29
Though... Could you try this one too?
dkam
2025-01-09 08:36:57
Doesn't work.
jonnyawsom3
2025-01-09 08:37:21
Okay, it is the ICC colorspace then. Wanted to make sure it wasn't just a dodgy cjxl version
dkam
2025-01-09 08:39:26
So - what do I do? Can I detect weird ICC colorspace issues? I'm converting tb of images from jpeg -> jxl - do I need to remove the ICC profile prior to converting? Or is this something cjxl can fix?
jonnyawsom3
2025-01-09 08:39:40
Well... I think I found the problem https://photosauce.net/blog/post/making-a-minimal-srgb-icc-profile-part-1-trim-the-fat-abuse-the-spec
dkam
2025-01-09 08:39:56
(Also - thanks for your help! )
jonnyawsom3
2025-01-09 08:40:06
That specific image is literally using an ICC from a blog about abusing the specification to make the smallest one possible xD
dkam
2025-01-09 08:41:21
Haha. Any time I've had troubles converting to jxl, it's because of icc profiles.
jonnyawsom3
2025-01-09 08:43:24
Interesting thing is, in that blog they mention MacOS specific data that they skipped. So I wonder if that's why it fails for you but not for me on Windows
dkam
2025-01-09 08:48:15
Right, so, ... I should strip the ICC if it's TinyRGB prior to conversion to JXL? Wait - why does the JPG work on Mac?
jonnyawsom3
2025-01-09 08:52:19
Personally, I'm unsure what you should do. The others here have better knowledge on color handling and ICC, so I'll let them give suggestions. Though, if you are converting TB of JPEGs, then I'm glad I cleared up the difference between Magick and cjxl. JPEG decoders tend to be a lot more loose, with rules relaxed over the years or never enforced. JXL was made a lot more strict and accurate, so it's had to be relaxed a few times when 'valid' JPEGs on the web have been invalid as a JXL. This could be another instance of it; hard to tell without debugging MacOS itself
_wb_ A lot of pipelines work with pixels, not bitstreams, and it would be very nontrivial to change that. But yes, it would be nice if ImageMagick would add a special case that makes `convert foo.jpg foo.jxl` do the right thing.
2025-01-09 08:55:09
Even if they just gave a warning when the input is JPG and the output is JXL, saying "JPEG Transcoding Unsupported" would help a lot
2025-01-09 08:58:28
<@350116203190157314> It might be worth making a [Github issue](https://github.com/libjxl/libjxl/issues/new?assignees=&labels=&projects=&template=bug_report.md&title=) if you have an account. Then it's logged and the devs can check if it's a libjxl issue or an Apple one before you start stripping data
dkam
2025-01-09 09:01:06
Stripping it out with `exiftool -icc_profile= 19617.jpeg` makes it all work fine. What's confusing to me is that MacOS can display the jpeg - cjxl converts it to a JXL which isn't viewable, but then Magick converts it to a viewable JPEG.
_wb_
2025-01-09 09:06:14
It does happen that some things are less picky than others about invalid input. Generally viewers tend to still show corrupt jpegs (incomplete ones, bad ICC profiles, etc) while being more picky when decoding more recent formats.
dkam
2025-01-09 10:29:38
Is it possible to detect and remove it from the jxl after the jpeg has been transcoded?
2025-01-09 10:31:23
So, the Jpeg is viewable on the mac, cjxl -> jxl and the image is no longer viewable, djxl -> jpeg and the image is viewable again.
_wb_
2025-01-10 10:48:25
No, you would have to strip or replace the bad ICC profile before converting the jpeg to jxl.
Demiurge
2025-01-11 08:52:51
Zero Loss Compress automatically tests each image to make sure it's reversible.
2025-01-11 08:53:09
Haven't used it but it looks cool
jonnyawsom3 Though... Could you try this one too?
2025-01-11 08:55:35
Works for me on my iphone
_wb_ It does happen that some things are less picky than others about invalid input. Generally viewers tend to still show corrupt jpegs (incomplete ones, bad ICC profiles, etc) while being more picky when decoding more recent formats.
2025-01-11 08:57:41
Probably because new codec libraries are not designed to deal with malformed input as well as libpng/libjpeg/giflib
_wb_
2025-01-11 09:30:02
Nah in this case I think it's not related to the codec library but some difference in application-level logic where a jpeg with a bad profile is treated like "ok we will just assume it is sRGB then" while a modern format with a bad profile is treated like "it's bad, throw an error". The latter behavior is actually better imo since it gives more incentive to encoder implementations to do the right thing and not produce bad files, but the "let's be forgiving" behavior tends to happen over time because users complain when the bad file doesn't render while some other viewer does render it.
2025-01-11 09:31:33
The same kind of logic has led to browsers having to deal with very malformed html syntax to the point that specifications get written specifying how to deal with it, etc.
Demiurge
2025-01-11 10:27:32
I see...
jonnyawsom3 Well... I think I found the problem https://photosauce.net/blog/post/making-a-minimal-srgb-icc-profile-part-1-trim-the-fat-abuse-the-spec
2025-01-11 10:32:35
I love this guy. I hope this crazy bastard is on the jxl team.
2025-01-11 10:32:41
https://photosauce.net/blog/post/making-a-minimal-srgb-icc-profile-part-3-choose-your-colors-carefully
jonnyawsom3
2025-01-11 10:38:52
You reminded me, I found a nice surprise at the bottom of this page https://glq.pages.dev/posts/high_quality_gifs/
Demiurge
2025-01-11 10:49:12
https://github.com/saucecontrol/Compact-ICC-Profiles
2025-01-11 10:54:26
This could be useful for libjxl
jonnyawsom3
2025-01-11 11:01:39
IIRC something similar is already being/has been worked on by Jon
Demiurge
2025-01-11 11:10:26
This guy is a great writer
2025-01-11 11:10:32
Really well written blog
jonnyawsom3
2025-01-11 11:12:39
https://github.com/libjxl/libjxl/pull/3446
2025-01-11 11:15:45
Oh, actually, <@350116203190157314> I wonder if that would've fixed the ICC in that image
paperboyo
2025-01-11 11:16:16
[happy NY!] ^ this is cool! Is there something that works in reverse? “Oh, I can see what you are doing, honey, and you, and you, you are all just sRGB, I will shrink you even further – turn you into a metadata flag!”
Demiurge
2025-01-11 11:16:24
It kinda blows my mind that the color primaries are wrong in most srgb images
2025-01-11 11:18:24
My first thought is this will complicate the process of automatically determining the intent and simplifying a color profile
jonnyawsom3 https://github.com/libjxl/libjxl/pull/3446
2025-01-11 11:24:19
I don't think the simplified icc profiles libjxl generates are similar to the ultra minified profiles from saucecontrol
2025-01-11 11:25:16
It would be cool if some of his icc profile innovations could be incorporated into the icc profile generation code in libjxl, and then maybe a thank you could be added in the credits for his impressive and cc0-licensed work
2025-01-11 11:26:06
<@718245888069468230> is that you? :o
jonnyawsom3
Demiurge I don't think the simplified icc profiles libjxl generates are similar to the ultra minified profiles from saucecontrol
2025-01-11 11:28:48
They seem *similar*, but libjxl doesn't sacrifice accuracy and can compress the ICCs in JXLs too
Demiurge
2025-01-11 11:31:57
His super tiny profiles don't sacrifice accuracy either. And of course the jxl file format allows the profile data to be compressed or omitted entirely
jonnyawsom3
2025-01-11 11:33:40
I was seeing the mini and magic versions
Demiurge
2025-01-11 11:34:02
I wonder if it's possible to have a lossless non-xyb file using enum color space...
spider-mario
2025-01-11 01:01:14
it is, either with a CICP PNG or with a PPM + `-x color_space=...`
DZgas Ж
2025-01-11 07:30:09
why does the e9 use 10 times more memory for encoding than the e7?
jonnyawsom3
2025-01-11 07:43:43
What image resolution?
Quackdoc
2025-01-11 08:07:52
can confirm that iacobionut's gallery with A15 works fine
2025-01-11 08:08:03
yay, now we have 2 usable galleries
Kremzli
2025-01-11 09:30:47
still doesn't make jxl images show up by themselves
2025-01-11 09:31:02
on A14
2025-01-11 09:31:21
does A15 recognize jxl as an image?
2025-01-11 09:33:31
my file manager thinks they are images but it still doesn't show in the gallery
Quackdoc
2025-01-11 09:33:35
yup, you still need a gallery app that supports jxl, LOS' gallery just gives me black 0x0 pictures
Kremzli
2025-01-11 09:34:07
i have iacobionut's gallery but i didn't give it all files permission, maybe that way it shows?
Quackdoc
2025-01-11 09:35:05
it will show on A15 but not on A14. I already made an issue and it's an issue with how the gallery app queries files, apparently changing it would break API stuff that allows them to be on the google play store.
Kremzli
2025-01-11 09:35:45
alright I'll keep waiting for samsuck to figure out how to do A15 even though android made it easier to port than ever
DZgas Ж
jonnyawsom3 What image resolution?
2025-01-11 09:46:05
10000x14000
A homosapien
DZgas Ж why does the e9 use 10 times more memory for encoding than the e7 ?
2025-01-11 10:41:34
For lossless images e10 disables chunked encoding, which increases RAM usage
Demiurge
2025-01-11 11:18:57
Why do people use e10...
2025-01-11 11:19:15
Should probably put it under "expert options"
2025-01-11 11:19:31
And that should be renamed "idiot options"
A homosapien
DZgas Ж why does the e9 use 10 times more memory for encoding than the e7 ?
2025-01-12 12:27:10
Oops misread "10 times" as "e10", what are the settings used here?
Demiurge
2025-01-12 12:46:37
Anything higher than e7 is really weird: it has really bad multi-thread scaling but usually tighter compression somehow, using a really inefficient algorithm with a lot of room for improvement.
DZgas Ж
A homosapien For lossless images e10 disables chunked encoding which increases ram usage
2025-01-12 10:49:17
lossy
A homosapien Oops misread "10 times" as "e10", what are the settings used here?
2025-01-12 10:51:16
just
-d 1 -e 9 uses 9 GB RAM
-d 1 -e 7 uses ~1 GB RAM
at 10000x14000
2025-01-12 10:52:17
It's just a situation where e7 is pretty good, but if you want to compress harder, you just don't have the memory
A homosapien
DZgas Ж It's just a situation where the e7 is pretty good, but if you want to compress it harder, then just don't have the memory
2025-01-12 03:35:52
I see, for lossy, chunked encoding is disabled at e9. If you are using 0.11, you can add `--streaming_input` if you are memory limited
Orum
2025-01-12 03:42:17
well, for lossless there's a big jump in memory use at e8, though that was in an old version of libjxl that I tested
2025-01-12 03:42:41
e8 lossless uses about twice the RAM when encoding vs e4 through 7
2025-01-12 03:44:09
e1 uses a trivial amount though, which is one of the reasons I like it <:BlobYay:806132268186861619>
2025-01-12 03:44:38
I should update both my lossy and lossless tests again
jonnyawsom3
2025-01-12 06:14:26
That was because of Patches
A homosapien
Orum e8 lossless uses about twice the RAM when encoding vs e4 though 7
2025-01-12 06:36:17
The biggest concern going from effort 7 to effort 8 is encoding times. RAM increases by 1.25x and encoding times increase around 3x.
```
wintime -- cjxl smal.png smal.jxl -d 0
JPEG XL encoder v0.12.0 2368781 [AVX2,SSE2]
Encoding [Modular, lossless, effort: 7]
Compressed to 20893.3 kB (6.561 bpp). 4371 x 5828, 4.460 MP/s, 12 threads.
PageFaultCount: 656378
PeakWorkingSetSize: 353.2 MiB
QuotaPeakPagedPoolUsage: 33.11 KiB
QuotaPeakNonPagedPoolUsage: 16.99 KiB
PeakPagefileUsage: 395.1 MiB
Wall time: 0 days, 00:00:06.120 (6.12 seconds)
User time: 0 days, 00:00:01.750 (1.75 seconds)
Kernel time: 0 days, 00:00:56.218 (56.22 seconds)

wintime -- cjxl smal.png smal.jxl -d 0 -e 8
JPEG XL encoder v0.12.0 2368781 [AVX2,SSE2]
Encoding [Modular, lossless, effort: 8]
Compressed to 20832.3 kB (6.542 bpp). 4371 x 5828, 1.471 MP/s, 12 threads.
PageFaultCount: 1522027
PeakWorkingSetSize: 441.6 MiB
QuotaPeakPagedPoolUsage: 33.11 KiB
QuotaPeakNonPagedPoolUsage: 25.62 KiB
PeakPagefileUsage: 495 MiB
Wall time: 0 days, 00:00:17.720 (17.72 seconds)
User time: 0 days, 00:00:02.156 (2.16 seconds)
Kernel time: 0 days, 00:02:55.218 (175.22 seconds)
```
jonnyawsom3 That was because of Patches
2025-01-12 06:38:46
Chunked encoding is turned off with patches. This causes an 11x increase in encoding time and a 4x increase in RAM relative to default effort 8.
```
wintime -- cjxl smal.png smal.jxl -d 0 -e 8 --patches 1
JPEG XL encoder v0.12.0 2368781 [AVX2,SSE2]
Encoding [Modular, lossless, effort: 8]
Compressed to 20600.9 kB (6.470 bpp). 4371 x 5828, 0.126 MP/s, 12 threads.
PageFaultCount: 80647359
PeakWorkingSetSize: 1.939 GiB
QuotaPeakPagedPoolUsage: 33.11 KiB
QuotaPeakNonPagedPoolUsage: 114.3 KiB
PeakPagefileUsage: 1.95 GiB
Wall time: 0 days, 00:03:23.320 (203.32 seconds)
User time: 0 days, 00:00:20.125 (20.12 seconds)
Kernel time: 0 days, 00:03:38.312 (218.31 seconds)
```
2025-01-12 06:44:00
I hope patches can be made compatible with chunked encoding one day.
2025-01-12 06:44:14
We should have a public jxl Wishlist to keep track of these things. 😅
Orum
A homosapien The biggest concern going from effort 7 to effort 8 is encoding times. RAM increases by 1.25x and encoding times increase around 3x. [...]
2025-01-12 09:22:30
I'm surprised it's only 25% more for you, though maybe memory use has improved in recent versions
jonnyawsom3
Orum I'm surprised it's only 25% more for you, though maybe memory use has improved in recent versions
2025-01-13 12:33:55
If you mean major (minor) versions, patches is disabled until effort 10 (unless the image is under 2048) since the addition of chunked encoding in 0.10
A homosapien I hope patches can be made compatible with chunked encoding one day.
2025-01-13 12:35:55
The issue is it needs to reference the entire image to find matches. It could probably be done cheaper, but also much slower. LZ77 patch detection was one idea
CrushedAsian255
2025-01-13 03:18:11
Possibly only store the last 2-3 chunks to reference from?
2025-01-13 03:18:24
Then if one patch is used often store it for the entire encode?
jonnyawsom3
2025-01-13 03:42:42
Groups maybe; chunks could mean it empties the patch cache before it sees the next one, if it's a complex tile/sprite, etc.
Orum
jonnyawsom3 If you mean major (minor) versions, patches is disabled until effort 10 (unless the image is under 2048) since the addition of chunked encoding in 0.10
2025-01-13 04:48:53
0.10.0 is the latest version I've tested, so that shouldn't be the issue
jonnyawsom3
Orum 0.10.0 is the latest version I've tested, so that shouldn't be the issue
2025-01-13 04:53:28
What resolution were you using?
Orum
2025-01-13 04:54:09
4K UHD
CrushedAsian255
jonnyawsom3 Groups maybe; chunks could mean it empties the patch cache before it sees the next one, if it's a complex tile/sprite, etc.
2025-01-13 05:19:14
Diff between group and chunk?
jonnyawsom3
CrushedAsian255 Diff between group and chunk?
2025-01-13 05:22:04
Chunk is just 'an amount of data'; a group is 128x128 to 1024x1024 squares of pixels, as I understand it
2025-01-13 05:22:24
So if the image is higher bitdepth, etc., the chunk has less pixel data
CrushedAsian255
2025-01-13 06:00:02
Sorry I meant groups then
2025-01-13 06:00:20
I use chunk, group, tile interchangeably
jonnyawsom3
2025-01-13 06:25:46
Streamed and chunked get confused often too
A homosapien
jonnyawsom3 Streamed and chunked get confused often too
2025-01-13 08:03:02
Can you clarify the difference between the two?
jonnyawsom3
A homosapien Can you clarify the difference between the two?
2025-01-13 08:50:58
Streamed is reading the input file without buffering in memory, chunked is only submitting segments of data to the encoder at once

PPM Input
Full: 630 MB
Streamed: 583 MB
Chunked: 282 MB (Current Default)
Streamed and Chunked: 236 MB

PNG Input
Full: 617 MB
Streamed: 617 MB `Warning PPM/PGM streaming decoding failed, trying non-streaming mode.`
Chunked: 267 MB (Current Default)
Streamed and Chunked: 267 MB `Warning PPM/PGM streaming decoding failed, trying non-streaming mode.`
2025-01-13 08:51:19
Currently there isn't a streaming PNG decoder, so only PPM works for that
2025-01-13 09:05:55
Oh... Wait... `--disable_output` doesn't use `--streaming_output` and crashes when used
2025-01-13 09:07:14
So PPM is actually
Streamed: 561 MB
Streamed and Chunked: 213 MB
2025-01-13 10:01:34
```
/** Control what kind of buffering is used, when using chunked image frames.
 * -1 = default (let the encoder decide)
 *  0 = buffers everything, basically the same as non-streamed code path
 *      (mainly for testing)
 *  1 = buffers everything for images that are smaller than 2048 x 2048, and
 *      uses streaming input and output for larger images
 *  2 = uses streaming input and output for all images that are larger than
 *      one group, i.e. 256 x 256 pixels by default
 *  3 = currently same as 2
 *
 * When using streaming input and output the encoder minimizes memory usage at
 * the cost of compression density. Also note that images produced with
 * streaming mode might not be progressively decodeable.
 */
JXL_ENC_FRAME_SETTING_BUFFERING = 34,
```
Well, that's what the code says
DZgas Ж
A homosapien I see, for lossy, chunked encoding is disabled at e9. If you are using 0.11, you can add `--streaming_input` if you are memory limited
2025-01-14 10:12:20
and how exactly does this option work? are the 256x256 chunks encoded like a scan from top to bottom?
A homosapien
2025-01-14 11:16:19
Idk, but I would assume that is a safe guess
Demiurge
2025-01-15 01:43:55
The chunks can be in absolutely any order at all
2025-01-15 01:46:58
A middle-out spiral, centered on a point of interest is sometimes used. But most common is for the chunks to be ordered from top left, to right, to bottom.
2025-01-15 01:51:37
In theory I think the order can be losslessly rearranged without changing the size of the file
CrushedAsian255
2025-01-15 05:31:59
jxltran time
_wb_
Demiurge In theory I think the order can be losslessly rearranged without changing the size of the file
2025-01-15 06:26:43
Correct. Besides a few extra bytes for a funky TOC permutation.
Demiurge
2025-01-15 06:36:23
Yes, completely losslessly
jonnyawsom3
2025-01-15 06:53:11
Have a GUI tool that lets you re-order groups based on number. Then play an animation while your file downloads haha
A homosapien
2025-01-15 08:01:52
Honestly a jxl progressive viewer app would be awesome
jonnyawsom3
2025-01-15 10:05:50
You mean a viewer that loads progressively, or a way to see the progressive scans/percentages to see what it looks like?
CrushedAsian255
2025-01-15 10:18:53
For the second one, jxl-oxide has a progressive video mode, you have to enable it when building but it’s there
2025-01-15 10:19:15
<@206628065147748352>
jonnyawsom3
2025-01-15 10:39:10
Yeah, I know. I can't build it though so I've just been truncating and saving PNGs instead
Demiurge
2025-01-15 06:01:05
There is a website that lets you see any image loaded progressively
2025-01-15 06:01:45
Google jxl saliency demo
username
2025-01-15 06:04:35
that only allows you to progressively decode in browser and most browsers use libjxl which doesn't progressively decode as much as jxl-oxide
jonnyawsom3
2025-01-15 06:13:21
Yeah, no LQIP and poor control over the percentage
2025-01-15 06:13:46
Still strange libjxl disabled that...
Demiurge
2025-01-15 08:36:58
Maybe just to save cpu
AccessViolation_
2025-01-16 11:34:47
I just realized that if cameras shot in JXL natively, they could do vignetting correction by putting the correction gradient in a Modular mode layer and blending it with the image
2025-01-16 11:36:20
... I'm not sure what the benefits of this would be yet, apart from being able to get the uncorrected image back more easily
2025-01-16 11:44:07
It might be worse? The encoder might see the darker peripherals and think it can get away with reducing the information there somewhat, and then the brightness is brought up again. Unless you can tell the encoder to apply blending before using heuristics like that
2025-01-16 11:47:59
Cameras would presumably use a custom encoder so they could just make it work
2025-01-16 11:59:19
Hmm you'd still have the problem that any compression artifacts are multiplied along with it I think
CrushedAsian255
2025-01-17 12:00:02
Don’t really think most people would need/want the uncorrected image when shooting in JPEG/JXL. If they wanted that control they would be shooting in RAW already
jonnyawsom3
2025-01-17 12:00:33
You could just increase the intensity target to increase the quality of the blacks, not to mention you'd already have higher bitdepth than a normal JPEG too
2025-01-17 12:01:11
Though, you could do JXL RAW with a correction layer... Using the channel type Adobe completely ignored for bayer images
AccessViolation_
2025-01-17 12:10:32
you mean this? `kCFA = JXL_CHANNEL_CFA, // Bayer channel`
2025-01-17 12:10:42
What does it do?
2025-01-17 12:11:21
Aren't bayer patterns just encoded by using one layer per photodiode like R,G,G,B
2025-01-17 12:12:25
Oh is it just for identifying that a layer/channel is part of a bayer image?
2025-01-17 12:23:41
> kCFA (for Bayer data, e.g. the second G), Ah, that explains it, it's just the one extra channel you need <https://redlib.nl/r/jpegxl/comments/rmo3ea/jpeg_xl_for_pbr_image_textures/>
jonnyawsom3
AccessViolation_ > kCFA (for Bayer data, e.g. the second G), Ah, that explains it, it's just the one extra channel you need <https://redlib.nl/r/jpegxl/comments/rmo3ea/jpeg_xl_for_pbr_image_textures/>
2025-01-17 12:29:47
Close, this is what I was thinking of https://discord.com/channels/794206087879852103/804324493420920833/1230846934596718622
AccessViolation_
2025-01-17 12:31:16
Ah neat
jonnyawsom3
2025-01-17 12:31:19
So it'd be R, Gavg, B, Gdif to get 'an image' without wasting data on storing a preview
AccessViolation_
jonnyawsom3 Though, you could do JXL RAW with a correction layer... Using the channel type Adobe completely ignored for bayer images
2025-01-17 12:35:53
You know what, all lens corrections except distortion correction could be output to a channel and then you could easily disable/enable them in your editing workflow without having to rely on your software to apply those corrections itself (which it may do worse than the camera if your lens isn't in the database or something)
CrushedAsian255
2025-01-17 12:42:33
So basically JXL raw without needing to even be a special raw
jonnyawsom3
2025-01-17 01:02:58
Pretty much. Most of DNG is just metadata describing the camera, and JXL can also brotli compress the EXIF
Demiurge
2025-01-17 07:07:38
I wonder how bayer sensor data can be efficiently compressed with jxl
2025-01-17 07:08:05
If it even can be
2025-01-17 07:08:38
Some bayer filters have 4 colors
2025-01-17 07:08:50
2 different greens
2025-01-17 07:11:25
What would be the most efficient way to represent all that data in lossless or near-lossless jxl I wonder
2025-01-17 07:17:03
Is there a nearly lossless/reversible method to debayer and convert to xyb, even for bayer filters with 4 different colors?
_wb_
2025-01-17 07:39:12
I would just use a lossy debayered and otherwise processed image redundantly (just like the preview jpeg in DNG) and store the Bayer data as 4 CFA channels where Modular RCTs can help to e.g. subtract one G from the other.
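A small sketch of that layout (Python; the RGGB arrangement and names are assumptions for illustration):
```python
import numpy as np

# Split an RGGB mosaic into 4 CFA planes and decorrelate the two greens with
# an RCT-like average/difference step, so the second green mostly becomes
# near-zero residuals.
rng = np.random.default_rng(0)
mosaic = rng.integers(0, 1024, size=(8, 8))   # toy 10-bit raw values

r  = mosaic[0::2, 0::2]
g1 = mosaic[0::2, 1::2]
g2 = mosaic[1::2, 0::2]
b  = mosaic[1::2, 1::2]

# Store (g_avg, g_dif) instead of (g1, g2); integer lifting keeps it lossless:
g_avg = (g1 + g2) >> 1
g_dif = g1 - g2
g1_rec = g_avg + ((g_dif + 1) >> 1)
g2_rec = g1_rec - g_dif
assert (g1_rec == g1).all() and (g2_rec == g2).all()
```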
jonnyawsom3
Demiurge 2 different greens
2025-01-17 08:40:18
That's what I was talking about with the Avg Green and Difference Green
Demiurge
2025-01-17 08:41:16
Is there a (near-)reversible debayer algo?
2025-01-17 08:42:21
It would be cool if the bayer sensor data could be mostly derived from the xyb image
Orum
2025-01-17 08:53:56
you can always guess at what it would be based on the resulting image
Demiurge
2025-01-17 09:04:23
I think it would be cool if there was a specified and reversible or near-lossless way of deriving bayer data from a multi channel image (with maybe 1 or 2 extra channels of reconstruction data)
2025-01-17 09:07:11
So you could reliably reconstruct the original bayer sensor data while still storing it in a format that's ready to view, without wasting space on storing redundant information
jonnyawsom3
jonnyawsom3 Close, this is what I was thinking of https://discord.com/channels/794206087879852103/804324493420920833/1230846934596718622
2025-01-17 09:25:43
Well this is what that would look like
2025-01-17 09:26:36
And the normal DNG
AccessViolation_
Demiurge So you could reliably reconstruct the original bayer sensor data while still storing it in a format that's ready to view, without wasting space on storing redundant information
2025-01-17 09:53:06
You could probably achieve something similar but not quite as good using RCTs, as I understand. Bayer sensor data and final pixel color will be highly correlated even if advanced demosaicing is used, so I think just storing the residuals of some RCT operations referencing the final image and previous bayer filter channels should compress them fairly well
2025-01-17 09:55:29
Like it's effectively just storing the R,G,B channels _again_ except you're storing just the differences between what the sensor data was and what it was interpolated to be during demosaicing
2025-01-17 09:56:32
Though you'd need to be able to downscale the bayer channels since every bayer channel will be half the size. If that's not possible, you can do the reverse and use the free upscale op on the bayer data and encode the demosaiced image's channels as the residuals of it minus the bayer channels
Demiurge
2025-01-17 09:56:51
Yeah, there's no technical reason why it can't be done afaict, the hard part is specifying it as a repeatable, reliable, reversible, reuseable standard for different types of bayer filters
2025-01-17 09:57:50
But hacking up a quick and dirty proof of concept shouldn't be difficult at all
2025-01-17 09:58:38
Then it just needs to get polished and generalized and homogenized and standardized
spider-mario
AccessViolation_ You know what, all lens corrections except distortion correction could be output to a channel and then you could easily disable/enable them in your editing workflow without having to rely on your software to apply those corrections itself (which it may do worse than the camera if your lens isn't in the database or something)
2025-01-17 11:46:35
chromatic aberration correction is also spatial
2025-01-17 11:47:05
that leaves only vignetting correction, I think? but that one is also rather easy to *un*do
AccessViolation_
2025-01-17 12:17:44
Chromatic aberration correction isn't something where you can just blindly apply the same channel to any image taken with the same camera configuration, unlike vignetting correction, yeah. But it's still easy to compute it for that specific image, and put the residuals of the corrected vs uncorrected image into a channel
2025-01-17 12:20:16
I think distortion correction would also work in theory but with particularly strong correction you basically lose *all* correlation the closer you get to the edges, so at that point you're effectively storing like, two, or 1.5 images worth of data and it would not be worth it to use that approach
spider-mario
2025-01-17 12:20:31
I mean that the residuals would be of the same nature as with geometric distortion
AccessViolation_
2025-01-17 12:22:36
Ohh yeah you're right
2025-01-17 12:56:48
This is the base image I took (with a pretty bad lens) and the diff between that base image and the CA-corrected one
2025-01-17 12:57:18
(the second one is only really clear if you 'open in browser' and zoom in)
2025-01-17 12:59:10
I opened a raw in Darktable and exported a png both with and without CA correction (not using lens data since that wasn't available, just its own guess at it from the CA correction module). And then added them as layers in gimp and subtracted the corrected one from the uncorrected one
jonnyawsom3
2025-01-17 01:04:06
That seems weirdly... Normal... I would've expected slight distortion around the edges at least, but only the sky has major difference as if it were compressed
2025-01-17 01:05:02
Then again, I don't know much about *real* cameras
AccessViolation_
2025-01-17 01:06:18
There's definitely a border around the branches against the bright sky, but I can't really explain why CA correction did something to the sky itself
spider-mario
2025-01-17 01:15:24
lateral (a.k.a. transverse) chromatic aberration (3 in https://en.wikipedia.org/wiki/File:Comparison_axial_lateral_chromatic_aberration.svg), i.e. the correctable kind, is due to different wavelengths landing on the sensor at different distances from the center
2025-01-17 01:15:33
correcting that involves rescaling them
Demiurge
AccessViolation_ This is the base image I took (with a pretty bad lens) and the diff between that base image and the CA-corrected one
2025-01-17 01:16:23
I wonder what spline encoding could do with this image...
spider-mario
spider-mario lateral (a.k.a. transverse) chromatic aberration (3 in https://en.wikipedia.org/wiki/File:Comparison_axial_lateral_chromatic_aberration.svg), i.e. the correctable kind, is due to different wavelengths landing on the sensor at different distances from the center
2025-01-17 01:17:45
(2 = “axial” or “longitudinal” chromatic aberration and is much more annoying/tricky to fix https://www.lenstip.com/526.5-Lens_review-Canon_EF_85_mm_f_1.4L_IS_USM_Chromatic_and_spherical_aberration.html)
AccessViolation_
spider-mario lateral (a.k.a. transverse) chromatic aberration (3 in https://en.wikipedia.org/wiki/File:Comparison_axial_lateral_chromatic_aberration.svg), i.e. the correctable kind, is due to different wavelengths landing on the sensor at different distances from the center
2025-01-17 01:33:44
Ahh that makes sense, I never thought about it that way. So that type of CA correction is effectively kind of like doing geometric distortion correction on specific wavelengths?
2025-01-17 01:33:55
So they all line up
2025-01-17 01:37:34
Also, here's the diff of vignetting correction - though I assume vignetting correction is multiplicative and this was a subtract operation, so that's probably why you see any resemblance of the original image at all. None of the layer blend modes gave me something that looked like what I would have expected, but this one gives you an idea
jonnyawsom3
Demiurge I wonder what spline encoding could do with this image...
2025-01-17 01:49:05
It'd be nice to have some hand-coded JXL files again. One demoing Splines, one Patches (but more thorough, like Monad's build), Delta Palette, 'Modular bit-packing' (described in theory by Jon). I suppose I'm just re-creating the conformance test files, but with real images and examples of the features instead of exclusively testing that function
AccessViolation_
2025-01-17 01:49:49
What's modular bit-packing?
2025-01-17 01:52:54
(I say that with the assumption that things already aren't byte aligned and thus it probably means something specific)
jonnyawsom3
AccessViolation_ What's modular bit-packing?
2025-01-17 01:58:06
You know how PNG can do 4, 2 and 1 bit images, putting 2, 4 or 8 pixels into 1 byte? (Or something like that) Currently JPEG XL can't do that, but Jon mentioned an idea to do something similar using modular mode
CrushedAsian255
2025-01-17 01:59:00
So like encoding 8 1bit pixels as 1 8bit pixel?
2025-01-17 02:00:01
I remember seeing somewhere that a string of Squeeze+Palette transforms could achieve that effect
AccessViolation_
2025-01-17 02:00:18
that sounds so cursed
2025-01-17 02:00:22
but fun
jonnyawsom3
CrushedAsian255 I remember seeing somewhere that a string of Squeeze+Palette transforms could achieve that effect
2025-01-17 02:02:44
<https://github.com/libjxl/libjxl/issues/3775#issuecomment-2317324336>
> The trick here was to apply one squeeze step followed by a palette step, which has the effect of turning the 1-bit image into a 2-bit image of half the size. This image compresses better since the MA tree gets a larger effective local neighborhood for context, etc.
2025-01-17 02:03:31
Huh... That gives me an idea. Currently Squeeze is enabled with `-R 1`, so what if we allowed the number to control the number of Squeeze steps
jonnyawsom3 You know how PNG can do 4, 2 and 1 bit images, putting 2, 4 or 8 pixels into 1 byte? (Or something like that) Currently JPEG XL can't do that, but Jon mentioned an idea to do something similar using modular mode
2025-01-17 02:07:53
https://discord.com/channels/794206087879852103/794206170445119489/1084937919220957244 Intriguing. Yet more secrets of the format <https://www.reddit.com/r/jpegxl/comments/11p0edq/comment/jc3mdot/>
> I don't think bit packing is still as useful as it used to be except for niche use cases like pixel art that intentionally uses low color count. In general people generally don't do things like scanning documents as a 1-bit black and white image anymore.
>
> But theoretically jxl can still kind of do it: you can apply a combination of Squeeze and Palette transforms in order to achieve something similar (but not quite identical) to bit packing. One interesting thing is that it's not limited to packing 2x1 4-bit, 4x1 2-bit or 8x1 1-bit pixels per byte like in PNG, but you could also e.g. do 2x2 3-bit pixels using ~12-bit palette indices applied to 4 channels of squeeze residuals, and other funky things like that. Currently totally unexplored in libjxl, but the modular nature of jxl's modular mode would allow this kind of thing.
>
> Another funky way of effectively doing bit packing is by using e.g. a redundant (2n+1)-color palette for a 1-bit image to represent n pixels at a time, where index i always corresponds to pixel value i%2. The first 2n indices are the ones used for the first value of a group of n pixels, and this value completely determines the remaining n-1 pixels through the MA tree (technically the remaining n-1 pixels end up in a context with a singleton distribution so they get entropy coded down to nothing). Again, this can be made to work for non-horizontal bit packing and for non-power-of-two packings (e.g. packing 3x3 1-bit pixels in a single 9-bit symbol).
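For reference, the PNG-style packing being contrasted here is just this (a toy in numpy; JXL has no such mode, which is what the Squeeze+Palette tricks above approximate):
```python
import numpy as np

def pack_1bit(row):
    """row: 1-D array of 0/1 pixels, length a multiple of 8."""
    return np.packbits(row.astype(np.uint8))  # 8 pixels -> 1 byte

def unpack_1bit(packed, n):
    return np.unpackbits(packed)[:n]

row = np.array([1, 0, 1, 1, 0, 0, 1, 0])
assert (unpack_1bit(pack_1bit(row), 8) == row).all()
```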
CrushedAsian255
2025-01-17 02:58:23
Also, the ability for the palette to be arbitrarily ordered opens up some opportunities for learned palette ordering, allowing for stronger correlation and better MA trees
Oleksii Matiash
AccessViolation_ There's definitely a border around the branches against the bright sky, but I can't really explain why CA correction did something to the sky itself
2025-01-17 03:08:37
I believe it is about the demosaic process. Moving planes changes the data fed to the debayer algorithm
AccessViolation_
2025-01-17 03:47:35
Yeah that makes sense. I assumed Darktable did it by trying to detect fringing and just correcting for it by changing the color values in those specific areas, but if it's morphing the planes then you would expect some amount of difference everywhere when subtracting it from the original
spider-mario
Oleksii Matiash I believe it is about the demosaic process. Moving planes changes the data fed to the debayer algorithm
2025-01-17 03:54:42
isn’t debayering done first?
Oleksii Matiash
spider-mario isn’t debayering done first?
2025-01-17 03:55:58
I can't provide proof, but I believe not
spider-mario
AccessViolation_ Yeah that makes sense. I assumed Darktable did it by trying to detect fringing and just correcting for it by changing the color values in those specific areas, but if it's morphing the planes then you would expect some amount of difference everywhere when subtracting it from the original
2025-01-17 03:56:19
ah, in fact the DNG specification says you’d use the distortion correction opcode to correct chromatic aberration https://helpx.adobe.com/content/dam/help/en/photoshop/pdf/DNG_Spec_1_7_1_0.pdf#page=106
> This opcode can be used to correct lateral (transverse) chromatic aberration by specifying the appropriate coefficients for each image plane separately.
AccessViolation_
2025-01-17 03:58:44
I'm assuming that's not what happened in my case; it was Canon's CR3 format, and for that Darktable seems to rely on the lensfun database for lens correction, and it didn't have any CA data, so I had to use the built-in module to correct CA without any correction information
spider-mario
2025-01-17 03:59:11
> This also allows processing steps to be specified, such as lens corrections, which ideally should be performed on the image data after it has been demosaiced, while still retaining the advantages of a raw mosaic data format.
AccessViolation_ I'm assuming that's not what happened in my case; it was Canon's CR3 format, and for that Darktable seems to rely on the lensfun database for lens correction, and it didn't have any CA data, so I had to use the built-in module to correct CA without any correction information
2025-01-17 03:59:42
oh, I think that module does work the way you described (desaturating edges)
AccessViolation_
2025-01-17 03:59:48
That is neat though. I wish cameras would just shoot in DNG already
spider-mario
2025-01-17 03:59:58
the warping approach is the one taken by the lens correction module (the one that uses lensfun)
2025-01-17 04:01:02
oh, wait, there are _two_ modules other than the lens correction one
2025-01-17 04:01:16
[non-raw](https://docs.darktable.org/usermanual/development/en/module-reference/processing-modules/chromatic-aberrations/) and [raw](https://docs.darktable.org/usermanual/development/en/module-reference/processing-modules/raw-chromatic-aberrations/)
AccessViolation_
2025-01-17 04:02:19
I used this one
2025-01-17 04:03:43
As you can see the 'real' lens correction module is only able to do distortion and vignetting correction, it has no data for anything else
2025-01-17 04:08:49
Both the raw and non-raw variants seem to shift the image a bit
2025-01-18 06:57:12
I thought about developing a raw to both a lossless and a lossy JXL, and then subtracting them in GIMP, to see where in the lossy image the most detail is lost. But since pixel difference isn't necessarily representative of visually noticeable differences, that got me thinking: would it be useful if SSIMULACRA 2 or a similar tool could effectively output a heatmap of the quality over the image? I've thought about this, and I think if encoders are tuned for these metrics you should expect a pretty homogeneous heatmap after lossy encoding. If it isn't homogeneous, then that's an indicator the coder is making incorrect decisions (as judged by the quality metric) which result in it dropping visual quality in different amounts across the image
2025-01-18 06:58:11
I feel like that could be a pretty useful tool?
2025-01-18 07:03:25
This is the first time I've looked into how it works, but it seems like SSIMULACRA 2 internally uses error maps. If that means maps over the image, it might be pretty easy to get it to output an error map instead of a weighted sum, or to calculate the weighted sum over small blocks of pixels to create the heatmap
2025-01-18 07:16:19
I'm not sure it's right that you should expect a homogeneous heatmap - you can save a lot of data in a solid color plane and lose barely any quality, compared to a more detailed area of the image - but I think it might still be useful to see if you're getting equal degradation where you do expect it
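The blockwise variant of that idea is simple to sketch; `metric` here is a hypothetical stand-in for whatever scalar quality score you trust (SSIMULACRA 2 itself aggregates its internal error maps across scales rather than working per block):
```python
import numpy as np

def quality_heatmap(ref, dist, metric, block=64):
    """Score each block x block tile of two same-size images."""
    h, w = ref.shape[:2]
    hm = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            hm[by, bx] = metric(ref[y:y+block, x:x+block],
                                dist[y:y+block, x:x+block])
    return hm  # homogeneous values = even degradation across the image
```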
jonnyawsom3
AccessViolation_ I thought about developing a raw to both a lossless and a lossy JXL, and then subtracting them in GIMP, to see where in the lossy image the most detail is lost. But since pixel difference isn't necessarily representative of visually noticeable differences, that got me thinking: would it be useful if SSIMULACRA 2 or a similar tool could effectively output a heatmap of the quality over the image? I've thought about this, and I think if encoders are tuned for these metrics you should expect a pretty homogeneous heatmap after lossy encoding. If it isn't homogeneous, then that's an indicator the coder is making incorrect decisions (as judged by the quality metric) which result in it dropping visual quality in different amounts across the image
2025-01-18 07:29:04
That's actually what libjxl does internally. Benchmark_xl is broken for me, but I did have an old example I sent to a friend on Telegram
2025-01-18 07:29:55
It's butteraugli based rather than Ssimulacra2 though
2025-01-18 07:32:18
....I'm a fool
2025-01-18 07:32:58
The standalone butteraugli executable has the heatmap as an argument
2025-01-18 07:41:10
Here's what an image looks like after lossy JXL
AccessViolation_
2025-01-18 07:49:33
Oh! Cool
_wb_
2025-01-18 07:50:44
Butteraugli can visualize the heat map already indeed
2025-01-18 07:51:45
For ssimu2 I should add something like that, it's not hard but there are some artistic choices to be made in how to put the 3 error maps and various scales into a single visualization 🙂
jonnyawsom3
2025-01-18 07:52:53
Well, you don't necessarily have to. Could just output 3 error maps
AccessViolation_
2025-01-18 07:54:44
So these heat maps are used internally in libjxl. Does that mean if some better quality metric came around, you could effectively swap butteraugli with that one to basically improve the encoder tuning for free?
2025-01-18 07:56:30
Using the quality metric heatmap in the encoder itself is pretty clever; I thought there was a painstaking manual process of trying to match it - though I assume there's still a lot of manual tuning done
jonnyawsom3
2025-01-18 07:58:18
IIRC me and <@207980494892040194> thought about trying to swap butteraugli for Ssimulacra2, but it'd be a speed penalty and we wouldn't know where to start
AccessViolation_
2025-01-18 08:01:16
If these quality metrics are expensive, I wonder if you could save a lot of encode time at the lower end of effort levels by switching from a perceptual quality metric to something super basic, like straight up pixel value difference weighted by brightness for how noticeable they are 🤔
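Such a "super basic" metric might look like this sketch; the brightness weighting is a crude assumption standing in for a real luminance-masking model:
```python
import numpy as np

def cheap_score(ref, dist):
    err = np.abs(ref.astype(float) - dist.astype(float))
    # Assumed weighting: errors in dark regions count more, since a
    # fixed-size error is more visible at low luminance (Weber-style).
    weight = 1.0 / (1.0 + ref.astype(float))
    return float((err * weight).mean())
```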
_wb_
2025-01-18 08:01:59
Actually, only for VarDCT at e8+ are there iterations to optimize for a metric, and yes, you could replace it with any metric afaiu
2025-01-18 08:02:42
For e7 and lower, no metrics are computed, it is all heuristics that were tuned offline
AccessViolation_
2025-01-18 08:03:07
Oh, interesting
P
2025-01-19 11:11:54
Hi all, could anyone point me to a scientific explanation of how JXL avoids generation loss? I know all the [cool videos from Jon](https://www.youtube.com/results?search_query=generation+loss+jxl) with comparisons between different formats, but I still didn't get how it is that JXL's lossy compression with "the same or higher" quality always converges to the same... result
2025-01-19 11:15:04
Unfortunately the whitepaper doesn't clarify that? I have found a mention [here](https://www.jpegxl.io/faq#generation-loss-resilience), but it's still mostly black magic to me. Any help will be appreciated, thanks!
AccessViolation_
P Unfortunately the whitepaper doesn't clarify that? I have found a mention [here](https://www.jpegxl.io/faq#generation-loss-resilience), but it's still mostly black magic to me. Any help will be appreciated, thanks!
2025-01-19 11:28:51
This paragraph here is weirdly phrased (it almost reads like it's AI generated), but I think it's talking about lossless transcoding, which effectively losslessly recompresses the original JPEG into a JXL, reducing its file size, but retaining the exact same pixel data, and even allowing you to transform it back into the original JPEG file, bit-for-bit. It's a completely lossless operation and as such doesn't suffer from any generation loss at all. This is different from decoding a JXL's pixel data, and then lossy encoding it into a new JXL, and doing that many times over, which is what causes generation loss. I don't know what specifically makes it so resistant to this, so I can't answer that (someone else here probably can though)
jonnyawsom3
2025-01-19 11:41:19
A very simple way of putting it is: it doesn't do anything additive. Other formats usually blur or sharpen the image to make higher compression look good; with JXL that's (Gaborish) disabled at distance 0.5 and below (quality ~95), since you're probably using it in a workflow like Krita, etc.
Demiurge
2025-01-20 01:42:02
Black magic voodoo
A homosapien
2025-01-20 01:42:57
Alien technology from the future 👽
Demiurge
2025-01-20 01:44:07
Next gen alien technology from the future
CrushedAsian255
2025-01-20 02:53:12
Is jpeg xl banned in China
Meow
2025-01-20 04:52:05
monad
P Hi all, could anyone point me to a scientific explanation of how JXL avoids generation loss? I know all the [cool videos from Jon](https://www.youtube.com/results?search_query=generation+loss+jxl) with comparisons between different formats, but I still didn't get how it is that JXL's lossy compression with "the same or higher" quality always converges to the same... result
2025-01-20 05:40:13
The JXL format doesn't fundamentally avoid generation loss. It just so happens the old encoder demonstrated in those videos behaved in a resilient way. The current libjxl encoder doesn't maintain such claims, except in the mentioned case of JPEG transcoding, which is cleverly a lossless operation by default.
Demiurge
2025-01-20 06:56:21
It will only become banned in China once JPEG XL becomes an official religion
Meow
2025-01-20 08:20:36
J||ailing|| X||i the|| L||eader|| Oops
CrushedAsian255
2025-01-20 09:36:10
noooo
2025-01-20 09:36:23
再见 (goodbye) 😭
Meow
2025-01-20 11:25:23
which website is this?
jonnyawsom3
2025-01-21 02:11:16
<@384009621519597581> thinking about the space image yet again... The exposure adjustment could probably just be done by requesting different nits from libjxl, right? Since it has color management built in and lossy is always float 32 anyway
Meow
CrushedAsian255 which website is this?
2025-01-21 03:01:48
https://viewdns.info/chinesefirewall/
CrushedAsian255
2025-01-21 05:59:32
just wondering who is running this Twitter account? https://x.com/JpegXl ||i refuse to call it X||
_wb_
2025-01-21 09:16:22
I do, have been neglecting it a bit lately tbh though
Meow
2025-01-21 09:43:51
J𝕏L
2025-01-21 09:50:13
or even 𝕁𝕏𝕃
AccessViolation_
jonnyawsom3 <@384009621519597581> thinking about the space image yet again... The exposure adjustment could probably just be done by requesting different nits from libjxl, right? Since it has color management built in and lossy is always float 32 anyway
2025-01-21 09:51:21
Oh hmm, that would certainly simplify creating a viewer
juliobbv
2025-01-21 07:17:19
it might be worth creating another account on BlueSky as well
2025-01-21 07:17:41
𝕏 has been on shaky ground lately
Quackdoc
2025-01-21 07:18:33
IMO it's best to cross-post as much as possible; JXL should be on Twitter/X, Bluesky, and the fediverse as well if it is not already
CrushedAsian255
2025-01-21 09:47:26
fediverse = mastodon and friends?
Quackdoc
2025-01-21 09:49:07
correct
AccessViolation_
2025-01-22 05:48:15
I'm thinking about games with CRT filters, film grain or other overlay effects. I wonder if screenshots from games like these would compress better if they stored the filter(s) in a separate layer. I think it might not, because you're storing one RGB and one RGBA image instead of just one RGB image, but I also think it might, because predictors would probably work better on both of them separately (especially on the filter) compared to on the combined image
2025-01-22 05:54:56
For example this CRT filter on Balatro would make it so most predictors that look up perform poorly, limiting which ones are useful, whereas separating that allows you to pick the best predictor for the base image, and then picking a 'west'-like predictor is going to be even more effective for the CRT filter
jonnyawsom3
2025-01-22 06:42:27
Then you could also store the tile of the filter, since it's usually if not always a repeating texture overlay
AccessViolation_
2025-01-22 06:44:26
Oh yeah that'd make it even smaller
2025-01-22 06:44:54
I'm considering buying balatro to see if I can extract the crt texture and then test this
2025-01-22 06:45:38
But I don't know how I would even manually construct a JXL with the CRT filter and base image as separate layers
RaveSteel
2025-01-22 07:21:32
if it is possible to deactivate the filter in the game settings, you could export a jxl with the aforementioned specification using krita
2025-01-22 07:21:34
probably
AccessViolation_
2025-01-22 07:22:17
A friend of mine that has the game generously offered to look through the files and send me the CRT texture when they get home
2025-01-22 07:22:21
Just gotta hope it's not a shader
2025-01-22 07:25:46
If there's such a thing as running a second predictor on the residuals of the first predictor, it might be pretty easy to predict out the CRT in the first pass? First run 'west' to subtract out most of the CRT pattern and then use your other predictors of choice on whatever is left, but it's probably going to destroy some of the patterns of the base image, making them harder to predict
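A toy of why 'west' suits a horizontal scanline overlay (plain numpy, not libjxl's actual predictor code): the residuals vanish everywhere the pattern repeats along a row.
```python
import numpy as np

def west_residual(img):
    pred = np.empty_like(img)
    pred[:, 0] = 0             # first column has no left neighbor
    pred[:, 1:] = img[:, :-1]  # predict each pixel from its west pixel
    return img - pred

scanlines = np.tile(np.array([[3], [1], [0]]), (1, 8))  # CRT-ish rows
print(west_residual(scanlines))  # zero except the first column
```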
2025-01-22 07:28:51
It's a shader :(
RaveSteel
2025-01-22 07:39:58
Can you send the file?
AccessViolation_
2025-01-22 09:50:31
<@167354761270525953>
RaveSteel
2025-01-22 09:56:10
Thanks
juliobbv
AccessViolation_ For example this CRT filter on Balatro would make it so most predictors that look up perform poorly, limiting which ones are useful, whereas separating that allows you to pick the best predictor for the base image, and then picking a 'west'-like predictor is going to be even more effective for the CRT filter
2025-01-22 10:04:34
WestWestWest
2025-01-22 10:08:39
the balatro CRT effect seems blurry enough and the simulated phosphor size is large enough that prob the regular predictors can cope with it to a reasonable degree
AccessViolation_
juliobbv the balatro CRT effect seems blurry enough and the simulated phosphor size is large enough that prob the regular predictors can cope with it to a reasonable degree
2025-01-22 11:00:55
we did some testing in <#803645746661425173>
juliobbv
2025-01-22 11:57:44
exciting stuff for sure!
jonnyawsom3
2025-01-23 02:24:29
<sound:794206087879852103:1141847636865982554>
Traneptora
P Hi all, could anyone point me to a scientific explanation of how JXL avoids generation loss? I know all the [cool videos from Jon](https://www.youtube.com/results?search_query=generation+loss+jxl) with comparisons between different formats, but I still didn't get how it is that JXL's lossy compression with "the same or higher" quality always converges to the same... result
2025-01-23 07:18:12
Short summary is that, like JPEG, it quantizes the higher-order coefficients to integers before entropy-encoding them. However, if you decode and re-encode using the same settings, you'll get pretty much those same integers back, so not much will be different. A better question is "why doesn't this happen with traditional JPEG", and that's more because traditional JPEG doesn't have specified decoding. Theoretically, with a well-engineered JPEG decoder and encoder you could avoid this as well, since the process of encoding a JPEG could be seen as an inverse problem (i.e. figure out what coefficients decoded to this image) rather than a forward problem (convert pixels to coefficients) - but most JPEG encoders don't do that
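The idempotence argument is easy to demonstrate numerically; a pure-numpy toy with stand-in coefficients and a made-up quantization step (not libjxl code - real pipelines add color transforms and filters that can perturb this slightly):
```python
import numpy as np

rng = np.random.default_rng(0)
coeffs = rng.normal(0, 50, size=1000)  # stand-in for DCT coefficients
q = 8.0                                # quantization step

gen1 = np.round(coeffs / q)            # first encode
decoded = gen1 * q                     # decode
gen2 = np.round(decoded / q)           # re-encode with same settings

assert (gen1 == gen2).all()            # no drift between generations
```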
AccessViolation_
2025-01-23 09:09:53
Is there anything JXL could do to make use of the fact that pixel art is resampled such that 1 pixel in the image is actually 3x3 pixels?
jonnyawsom3
AccessViolation_ Is there anything JXL could do to make use of the fact that pixel art is resampled such that 1 pixel in the image is actually 3x3 pixels?
2025-01-23 09:13:57
https://discord.com/channels/794206087879852103/794206087879852106/1239238259784024117
2025-01-23 09:15:58
If you turn the image back to 1:1, you can use the custom upsampling to do 2x, 4x or 8x scaling
2025-01-23 09:17:04
Those were effort 11 too
AccessViolation_
2025-01-23 09:17:27
This is about Noita in a convenient 'low resolution rendering' mode. It renders the pixels that make up the world as 3x3 pixels, and UI elements are commonly 2x2 pixels so that they can be a bit more detailed
2025-01-23 09:19:02
So most of the image can be downscaled as 3x3, but not everything
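A quick way to test whether a region really is an exact k-times nearest-neighbor upscale (so it could be stored 1:1 and upsampled at decode); a sketch assuming a numpy image array:
```python
import numpy as np

def is_exact_upscale(img, k):
    h, w = img.shape[:2]
    if h % k or w % k:
        return False
    small = img[::k, ::k]  # one sample per k x k block
    return np.array_equal(np.repeat(np.repeat(small, k, axis=0),
                                    k, axis=1), img)
```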
jonnyawsom3
2025-01-23 09:19:48
Another case of 'if only JXL were integrated into the rendering'
AccessViolation_
2025-01-23 09:25:54
How would that work if one layer is scaled by a factor of two and another is scaled by a factor of three
2025-01-23 09:28:15
Hold on
2025-01-23 09:28:17
2025-01-23 09:29:05
Okay never mind, the game just roughly fits pixels into 3x3 but not strictly always
2025-01-23 09:47:50
Still does much better than PNG regardless :)
2025-01-23 10:11:34
The game renders some pixels to 2x2 tiles, some to 3x3 and some to 4x4, which is interesting. It also seems the seam in that image above is the game's way of simulating sub-pixel movement by basically moving the pixels up and down along a 'scanline' grid
2025-01-23 10:12:46
Anyway - Noita is a great game for benchmarking, with this low resolution rendering, high resolution rendering, optional pixel art anti-aliasing and dithering to avoid banding there's a lot of different things to test 👀
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
jonnyawsom3 If you turn the image back to 1:1, you can use the custom upsampling to do 2x, 4x or 8x scaling
2025-01-24 12:17:00
Sorry if this is a little off topic, but who made that? I've seen this style somewhere.
jonnyawsom3
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠ Sorry if that is a little of topic, but who made that? I've seen this style somewhere.
2025-01-24 12:21:07
https://www.pixilart.com/art/habitat-sr28bef949d0baws3
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
jonnyawsom3 https://www.pixilart.com/art/habitat-sr28bef949d0baws3
2025-01-24 12:21:33
Thank you 🦦
AccessViolation_
2025-01-24 10:04:26
That's quite a useful website for benchmarking
K
2025-01-26 03:37:48
What's the go-to software to download for converting? Only found this https://github.com/JacobDev1/xl-converter/releases
Orum
2025-01-26 03:43:22
libjxl
2025-01-26 03:44:02
or just use imagemagick
K
2025-01-26 04:07:05
libjxl seems to be CLI, I can't get into that
Orum
2025-01-26 04:09:11
oh, you wanted a GUI app? I know GIMP has support, and I think Affinity Photo does too, but that's commercial software (and I think it has more than a few bugs with JXL support)
Meow
2025-01-26 04:27:01
If there is already libjxl installed, you can do bulk conversion by simply dragging files onto a .bat that contains the cjxl command
2025-01-26 04:28:24
I wrote some for cwebp and OxiPNG but I haven't tried libjxl on Windows
AccessViolation_
K What's the go-to software to download for converting? Only found this https://github.com/JacobDev1/xl-converter/releases
2025-01-26 10:48:57
I've used this, it's pretty nice :)
K
Meow If there is already libjxl installed, you can do bulk conversion by simply dragging files onto a .bat that contains the cjxl command
2025-01-26 10:56:38
I want to do some basic settings with a GUI rather than suffer and become a pro CLI user
Orum oh, you wanted a GUI app? I know GIMP has support, and I think Affinity Photo does too, but that's commercial software (and I think it has more than a few bugs with JXL support)
2025-01-26 10:57:14
Both are editing software though. I think IrfanView supports JXL in a limited manner, so I might play with that if all else fails
AccessViolation_ I've used this, it's pretty nice :)
2025-01-26 10:57:29
How is it?
Meow
2025-01-26 10:57:32
I'm not a pro at all
AccessViolation_
K How is it?
2025-01-26 10:58:35
it works well
Meow
2025-01-26 10:58:43
Just pitiful that winget doesn't include libjxl
AccessViolation_
2025-01-26 10:58:55
though iirc it had some defaults I didn't like, consider checking the settings before you batch convert everything
Meow
2025-01-26 11:01:17
For cwebp it looks like this
```
@echo off
setlocal
for %%f in (%*) do (
    set "inputFile=%%~f"
    set "baseFileName=%%~nf"
    cwebp -lossless -m 6 -q 100 -mt -v -progress "%%~f" -o "%%~nf.webp"
)
pause
```
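A rough cross-platform equivalent for cjxl, sketched in Python rather than batch (assumes cjxl is on PATH; pass files as arguments and each gets a lossless .jxl next to it):
```python
import subprocess
import sys
from pathlib import Path

for arg in sys.argv[1:]:
    src = Path(arg)
    subprocess.run(["cjxl", "-d", "0", str(src),
                    str(src.with_suffix(".jxl"))], check=True)
```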
K
AccessViolation_ though iirc it had some defaults I didn't like, consider checking the settings before you batch convert everything
2025-01-26 11:02:06
Thx boss
2025-01-26 11:02:18
U finish converting or still considering
AccessViolation_
2025-01-26 11:04:42
I don't understand
RaveSteel
2025-01-26 12:32:26
If you use KDE you can use service menus in dolphin to allow converting images via the context menu
2025-01-26 12:32:45
Meow
RaveSteel If you use KDE you can use service menus in dolphin to allow converting images via the context menu
2025-01-26 12:33:53
A bit similar to Quick Action on macOS
RaveSteel
2025-01-26 12:34:04
Create "/home/USER/.local/share/kio/servicemenus/" ands put this file there
Meow A bit similar to Quick Action on macOS
2025-01-26 12:34:33
Yes
Meow
2025-01-26 12:34:40
2025-01-26 12:34:41
I have an example posted yesterday
2025-01-26 12:36:14
It uses AppleScript this time
RaveSteel
2025-01-26 12:37:41
I would test it, but no MacOS xd
Meow
2025-01-26 12:39:10
A window (or tab) will pop up like this
2025-01-26 12:41:07
The code was largely written with ChatGPT's help, through trial and error
2025-01-26 12:44:06
I've tested running it on hundreds of files at the same time
RaveSteel
2025-01-26 12:45:48
nice
Meow
2025-01-26 12:49:48
This is a universal method so other codecs can be used with that as well
HCrikki
K What's the go-to software to download for converting? Only found this https://github.com/JacobDev1/xl-converter/releases
2025-01-26 05:30:22
pretty much the most accessible one (a GUI that behind the scenes uses libjxl). Just make sure you're comfortable with the parameters selected; afaik it's set to *wipe existing metadata* out of the box (barely decreases filesizes but may mess up management of curated image collections)
2025-01-26 05:34:06
there's https://github.com/kampidh/jxl-batch-converter but it's mostly libjxl wearing a differently colored ski mask. Not as polished, but new versions of libjxl can be swapped in, it's easier to mess up if you fiddle without understanding the parameters, and it displays helpful logs
2025-01-26 05:34:52
handles jpegli encoding (cjpegli) and decoding (djpegli) as well
K
2025-01-26 05:55:41
Thanks boss
AccessViolation_ I don't understand
2025-01-26 05:57:15
Sorry typo, was asking if u finished the move to jxl or still thinking
2025-01-26 05:58:12
I'm stuck in the consideration phase at the moment. Not sure if it's time to move yet. There seems to be quite good traction supporting it, but there's also so much (format) competition at the moment
AccessViolation_
K Sorry typo, was asking if u finished the move to jxl or still thinking
2025-01-26 06:13:49
I finished the move
2025-01-26 06:14:13
I used lossless jpeg transcoding, so they take up 20% less space, and I can always get an exact copy of the original files out of them if I want in the future
K I'm stuck in the consideration phase at the moment. Not sure if it's time to move yet. There seems to be quite good traction supporting it, but there's also so much (format) competition at the moment
2025-01-26 06:17:27
JPEG XL is the only image format that can do lossless JPEG transcoding. That mode works basically like putting a document in a zip file: it becomes smaller but still contains the original file
2025-01-26 06:17:46
So you don't lose any data; even if JXL completely flops, you can turn them all back into the original JPEGs
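That reversibility is easy to check yourself; a sketch assuming cjxl/djxl on PATH and a placeholder filename (cjxl transcodes JPEG input losslessly by default, and djxl reconstructs the original JPEG when asked for a .jpg output):
```python
import hashlib
import subprocess
from pathlib import Path

def digest(p):
    return hashlib.sha256(Path(p).read_bytes()).hexdigest()

subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)
subprocess.run(["djxl", "photo.jxl", "roundtrip.jpg"], check=True)
assert digest("photo.jpg") == digest("roundtrip.jpg")  # bit-for-bit
```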
K
AccessViolation_ JPEG XL is the only image format that can do lossless JPEG transcoding. That mode works basically like putting a document in a zip file: it becomes smaller but still contains the original file
2025-01-26 06:18:09
Yeah, I'll be going with this mode for sure
2025-01-26 06:19:29
Have you read anything about "clashes"? Where anyone ran into a bug where lossless isn't reversible? For example (not a good one here), similar to an MD5 hash where two different files result in the same hashsum
AccessViolation_
2025-01-26 06:21:27
There are certain situations where lossless transcoding (currently) isn't possible or may fail; like I think if there's more than 4 MB of metadata attached to the image, it won't do it. I'm not sure if XL Converter verifies that every file was 100% successfully transcoded, but I do know that for some images it doesn't work, and it just ignores those and keeps them as JPEG. Also, I would recommend making a backup of your original images before doing the conversion, or if that's not an option, testing it first on a few copies of the files you want to convert.
Orum
2025-01-26 06:51:27
>more than 4 MB of metadata <:monkaMega:809252622900789269>
AccessViolation_
2025-01-26 10:06:34
Imagine how many people you could kill based on *that much metadata*
CrushedAsian255
AccessViolation_ Imagine how many people you could kill based on *that much metadata*
2025-01-26 11:27:00
i hid the government secrets in an Exif comment
2025-01-26 11:28:23
is there a specific reason why 4 MB is the metadata limit?
Orum
2025-01-27 01:49:19
probably because 2²² has some significance?
2025-01-27 01:50:04
maybe the spec simply limits it because 4MB is already insanely huge and anything more is almost certainly in error
RaveSteel
2025-01-27 01:50:37
4MB of metadata would probably be just another file written into the metadata block
2025-01-27 01:50:46
Most books aren't even 4MB
jonnyawsom3
2025-01-27 01:51:31
IIRC the methodology was: If something is appended that's more than 4MB, you probably want to be treating it differently than just storing it as metadata anyway