JPEG XL

jxl

Anything JPEG XL related

A homosapien
Jyrki Alakuijala I'll take this on my own plate then... 🏋️ thank you for the help so far 🌞🌞🌞
2024-05-28 04:39:32
I think it would be good practice to track this on GitHub. Should I make a new issue? Or should I edit the existing one I made? https://github.com/libjxl/libjxl/issues/3530
Orum
2024-05-28 04:44:41
why compare at the same distance instead of the same BPP?
monad
Demiurge effort=3 and effort=9 are the same size btw. And e=3 encodes AND decodes much, much faster. I really think 3 or 4 ought to be the default for lossless mode. And lossless mode ought to be the default for all files.
2024-05-28 05:26:31
e3 has bad edge-cases in non-photographic: [text](https://loathra.com/jxl/bench/20240506.html#multi-12), [pixel_art_scaled](https://loathra.com/jxl/bench/20240506.html#multi-25)
2024-05-28 05:31:54
so e4 is safer, but also has bad cases in some circumstances: [algorithmic](https://loathra.com/jxl/bench/20240506.html#multi-28)
Demiurge
Demiurge
2024-05-29 07:46:32
0.6.1 definitely looks much better, but it still doesn't look that great.
2024-05-29 07:46:40
I would test earlier versions if I could
2024-05-29 07:47:06
Maybe I should
2024-05-29 07:48:12
That abandoned factory image is just a real problem case, with the leaves getting washed-out colors
Jyrki Alakuijala
A homosapien I think it would be good practice to track this on GitHub. Should I make a new issue? Or should I edit the existing one I made? https://github.com/libjxl/libjxl/issues/3530
2024-05-29 12:30:30
let's make a new one as this is clearly for photographic images?
Demiurge Unfortunately, these all basically look the same to me. Except the lossless one, obviously.
2024-05-29 02:50:36
for me -- the factory vs. sky has decreasing artefacts as versions progress, but saturation of colorful (yellow) details is reduced as version numbers have progressed
Demiurge Unfortunately, these all basically look the same to me. Except the lossless one, obviously.
2024-05-29 02:55:07
looking at all of these I think we improved in pretty much all versions...
yoochan
2024-05-29 03:03:40
I tend to agree 🙂 <@532010383041363969> could you publish a short list of commits (half a dozen) containing very meaningful lossy-quality improvement steps? I'll be glad to make some chronological comparisons (if I can successfully compile them)
Jyrki Alakuijala
2024-05-29 03:16:04
recent 2024 commits or commits from the far past (0.6 era to now) ?
yoochan
2024-05-29 04:41:58
I'm not sure 😄 I found the work I did on commit 4d2bc14 interesting and was trying to extend it... While thinking again, I wonder if I couldn't start with "released" versions. I'll build something with what was published as pre-compiled on the git, and I'll come back with more specific requests on commits, I think
Demiurge
2024-05-29 09:44:00
Unfortunately I see no significant difference... Perhaps a more complex image would be better, like Mercado
2024-05-29 09:46:28
Idk if this is the best resize... I made this, but there's ringing artifacts. https://cdn.discordapp.com/attachments/794206087879852106/1235398636498714634/mercado.png
2024-05-29 09:48:39
I used `vipsthumbnail --linear`
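A minimal sketch of the kind of `vipsthumbnail` invocation being described; the input file name and target size here are assumptions, only the `--linear` flag is taken from the message above.
```
# Downscale in linear light; names and size are placeholders
vipsthumbnail mercado_full.png --linear -s 1024 -o mercado.png
```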
Jyrki Alakuijala
2024-06-01 12:35:54
I did a few experiments and I could not get simple image quality gains. In the images where AVIF looks sharper, AVIF disregards the middle frequency band. That can be nice for video, where missing fine frequencies can be filled in later, but it is possibly weird for photos.
2024-06-01 12:37:33
We could add more control for slightly favoring saturation, sharpness, fewer artefacts (and more blurring), etc.
2024-06-01 12:38:11
I think there is a minute increase in sharpness at HEAD when compared to releases (theoretically thinking about it, didn't look at evidence)
JendaLinda
2024-06-01 12:51:03
What do the content type presets in webp actually do? Could jxl encoder have similar presets as well?
username
JendaLinda What do the content type presets in webp actually do? Could jxl encoder have similar presets as well?
2024-06-01 12:54:53
JendaLinda
username
2024-06-01 01:11:30
Looks like different levels of smoothing, basically.
Orum
2024-06-01 05:25:47
should be doable with a prefilter then, no?
monad
JendaLinda What do the content type presets in webp actually do? Could jxl encoder have similar presets as well?
2024-06-01 05:36:13
I toyed with calling specific cjxl commands based on content
2024-06-01 05:37:56
but I do not really know how to make such a thing very practical
2024-06-01 05:42:42
but that is lossless
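A rough sketch of what such content-based dispatch could look like as a shell wrapper. The content classes and effort choices are assumptions for illustration (the e2/e3 split echoes advice given later in this thread); nothing here is an actual cjxl feature.
```
#!/bin/sh
# Hypothetical wrapper: the caller supplies a content class and a file,
# and we pick lossless cjxl settings accordingly.
encode_by_content() {
  class="$1"; in="$2"; out="${in%.*}.jxl"
  case "$class" in
    photo)    cjxl "$in" "$out" -d 0 -e 3 ;;  # photographic content
    nonphoto) cjxl "$in" "$out" -d 0 -e 2 ;;  # text, UI, pixel art
  esac
}
encode_by_content photo example.png
```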
JendaLinda
monad but I do not really know how to make such a thing very practical
2024-06-01 06:15:10
I suppose the webp presets take effect only in lossy compression.
Demiurge
2024-06-01 07:41:03
I don't think a lossy encoder should ever create blurry smudges if it can create higher-frequency artifacts instead
2024-06-01 07:41:53
In lossy audio compression they call it noise-shaping
2024-06-01 07:43:27
It works well, in psychoacoustics as well as psychovisuals
Jyrki Alakuijala
JendaLinda What do the content type presets in webp actually do? Could jxl encoder have similar presets as well?
2024-06-02 06:13:19
I don't like the philosophy of more encoding control. Some images will be hybrids — someone will put a logo and text on a photograph, etc. Each area should understand its characteristics separately. If it is done, it should in my opinion be done in a way that is easy to understand.
JendaLinda
Jyrki Alakuijala I don't like the philosophy of more encoding control. Some images will be hybrids — someone will put a logo and text on a photograph, etc. Each area should understand its characteristics separately. If it is done, it should in my opinion be done in a way that is easy to understand.
2024-06-02 06:26:10
That's fair. I haven't used those webp presets either because I was not sure if they improve anything.
Demiurge
Jyrki Alakuijala I don't like the philosophy of more encoding control. Some images will be hybrids — someone will put a logo and text on a photograph, etc. Each area should understand its characteristics separately. If it is done, it should in my opinion be done in a way that is easy to understand.
2024-06-02 08:16:38
That's why I like the idea of some kind of edge detection algorithm that separates an image into a DCT "texture" layer and a modular "splines/gradients/solids" layer
2024-06-02 08:17:47
Ideally an encoder should be smart enough to detect image content that will be inefficient to compress with dct mode
2024-06-02 08:18:51
A lot of images end up with a smaller file size under lossless compression, and currently the user has to guess, or double-check and compare the two
CrushedAsian255
2024-06-03 06:55:07
Would this cause issues with encode and decode speeds, or with possible hw codecs?
Demiurge
2024-06-03 12:41:08
Multiple layers would decrease decode speed (increase the time it takes to decode) and hw encoders would probably not ever do that, or anything much more sophisticated than standard JPEG.
Naksu
2024-06-06 01:59:07
Hello, I would like to know if the decoding speed of animated JPEG XL has improved compared to AVIF, or if it is better to favor WebP for now.
yoochan
2024-06-06 02:25:18
are there performance issues?
2024-06-06 02:26:03
when I encode something with cjxl, what criteria would make it skip the container? no metadata in the original file?
HCrikki
2024-06-06 04:55:22
animated avif is an av1 video, same as webp with vp8, not real animations. videos will always run 'better' than any animation based on those videos
lonjil
2024-06-06 04:56:28
animated webp is not a vp8 video, it's a sequence of still frames, that can be vp8 or vp8l (aka webp lossless)
jonnyawsom3
yoochan when I encode something with cjxl, what criteria would make it skip the container? no metadata in the original file?
2024-06-06 05:56:06
Yeah, you can use `--strip=all` in cjxl if I recall along with `--container=0` or something similar
TheBigBadBoy - 𝙸𝚛
jonnyawsom3 Yeah, you can use `--strip=all` in cjxl if I recall along with `--container=0` or something similar
2024-06-06 05:59:47
isn't it `-x strip=all` ?
jonnyawsom3
2024-06-06 06:00:17
Yeah, that's it. Only used it months ago so forgor
TheBigBadBoy - 𝙸𝚛
2024-06-06 06:00:52
no worries, it isn't even in the output of `cjxl -v -v -v -v -v -v -v -v -v -v -v -h` <:KekDog:805390049033191445>
2024-06-06 06:02:04
well it actually is (with `exif`), but doesn't say we can use `all`
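Putting the flags from this exchange together; the spellings are as quoted in the thread and not re-verified against current `cjxl -h`, so treat this as a sketch.
```
# Bare codestream with metadata stripped (flag spellings per the messages above)
cjxl input.png output.jxl --container=0 -x strip=all
```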
yoochan
jonnyawsom3 Yeah, you can use `--strip=all` in cjxl if I recall along with `--container=0` or something similar
2024-06-06 07:02:28
Thanks! My question was not very clear. I meant: if you don't force the presence or the absence of the container (with `--container`), how does cjxl choose to make a raw codestream or a containerised one?
Orum
TheBigBadBoy - 𝙸𝚛 no worries, it isn't even in the output of `cjxl -v -v -v -v -v -v -v -v -v -v -v -h` <:KekDog:805390049033191445>
2024-06-06 07:45:47
probably needs like 12 more `-v`s
Naksu
HCrikki animated avif is an av1 video, same as webp with vp8, not real animations. videos will always run 'better' than any animation based on those videos
2024-06-06 08:00:49
Not really. The compression techniques are maintained, but the images remain independent. I have no issues decoding AV1, VP9, or VP8, but I encounter significant frame drops with AVIF. WebP performs better but is still not as good as video.
2024-06-06 08:05:38
I wanted to know if JPEG XL decoding was more efficient than its counterparts.
_wb_
2024-06-06 08:29:18
Decoding of jxl is roughly similar in speed to decoding avif or webp, though generally avif/webp are faster at lower quality than at higher quality, while jxl is more constant in decode speed regardless of quality. For animations, inter frames are typically less work than intra frames, and avif is the only one that really has inter frames (since you can just use arbitrary av1 payloads).
2024-06-06 08:30:46
So for animation (at least of natural content and at "web quality"), I think avif is a better choice than jxl.
2024-06-06 08:31:16
Not just for decode speed but mostly because of the compression an actual video codec can bring.
2024-06-06 08:32:05
No intra only codec can compete with video codecs (at least when doing web quality lossy)
Naksu
2024-06-06 09:09:19
Thank you for this detailed feedback. However, as my use is not web-oriented, I prioritize lossless quality. Although the animated images I manipulate aren't of very high resolution, I aim for them to be displayed as smoothly as possible. I have determined that maintaining 40 frames per second provides the ideal balance for stable decoding with the WebP format. Unfortunately, while AVIF offers inter-frames and 35% better compression, it struggles to display even 10 frames per second due to slow decoding, rendering it unusable in my situation. Given that JPEG XL also offers significant compression, I was concerned about encountering similar decoding issues.
lonjil
2024-06-06 09:16:47
How about high quality h.264? Should decode quite quickly.
Naksu
2024-06-06 09:21:16
I need an infinite loop and the support of transparency.
lonjil
2024-06-06 09:25:36
ah
veluca
2024-06-06 09:29:15
maybe the faster lossless speeds are fast enough? how big are the frames?
Naksu
2024-06-06 09:33:44
Input frames or output file frames?
Soni
2024-06-06 09:48:03
can JXL store subpixel data? (so many screenshots burn our eyes due to subpixel mismatch)
veluca
Naksu Input frames or output file frames?
2024-06-06 10:27:36
does it make a difference for lossless?
Naksu
2024-06-06 10:43:13
I've read that lossless is decoded faster.
Quackdoc
Naksu Not really. The compression techniques are maintained, but the images remain independent. I have no issues decoding AV1, VP9, or VP8, but I encounter significant frame drops with AVIF. WebP performs better but is still not as good as video.
2024-06-06 10:47:53
avif is just an av1 track inside an mp4 container; if you are having issues decoding avif and not av1, it's either an issue with the file itself or with the decoder implementation
2024-06-06 10:50:31
converting a video to avif is pretty much as simple as `mv file.mp4 file.avif; mp4box -ab avis file.avif`. This isn't totally spec-compliant, but most decoders will probably wind up working with it
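The same trick as a commented two-liner; `file.mp4` is a placeholder for an mp4 that already contains an AV1 track.
```
mv file.mp4 file.avif        # reuse the AV1 track under the .avif name
mp4box -ab avis file.avif    # add the 'avis' (AVIF image sequence) brand
```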
username
2024-06-06 11:34:21
I'm finally getting around to doing this https://discord.com/channels/794206087879852103/803574970180829194/1204264884016123954
2024-06-06 11:35:02
before:
2024-06-06 11:35:07
after:
Naksu
Quackdoc converting a video to avif is pretty much as simple as `mv file.mp4 file.avif; mp4box -ab avis file.avif`. This isn't totally spec-compliant, but most decoders will probably wind up working with it
2024-06-07 12:15:53
I've tried lossy encoding and it works better—at least on Firefox, not Chromium. So it's the lossless version that seems to have difficulties being decoded (the opposite of WebP).
2024-06-07 12:30:42
AV1 just isn't optimized for lossless. The same goes for VP8 or VP9, but at least it doesn't compromise playback. I wonder if HEIF supports animated images?
Quackdoc
Naksu AV1 just isn't optimized for lossless. The same goes for VP8 or VP9, but at least it doesn't compromise playback. I wonder if HEIF supports animated images?
2024-06-07 04:39:50
well, without getting too in-depth since this is the wrong channel for av1/avif: there isn't anything wrong with the AV1 spec itself for lossless, it's just that no one cares about it enough to invest substantial effort into making it good. To begin with, of the three major sw encoders, only aomenc can even do lossless.
2024-06-07 04:40:05
also it offers no "fast decode" settings like svtav1 has, which hurts a lot
VcSaJen
Soni can JXL store subpixel data? (so many screenshots burn our eyes due to subpixel mismatch)
2024-06-07 05:16:46
Do you mean ClearType font antialiasing in Windows and scrolling in Ice Book Reader?
2024-06-07 05:46:40
They generally don't "burn eyes" on mismatch, they just result in colorful edges. Maybe you mean color profile?
_wb_
Soni can JXL store subpixel data? (so many screenshots burn our eyes due to subpixel mismatch)
2024-06-07 06:38:37
You mean the physical subpixel layout? I don't know of any existing ways to represent that information (as, say, some Exif metadata or something), and I also wouldn't know what a viewer could do with that information — I suppose it could do _something_ when rendering the image more zoomed in than 1:1, but I'm not aware of any image viewing software that actually tries to take subpixel layout into account.
Naksu AV1 just isn't optimized for lossless. The same goes for VP8 or VP9, but at least it doesn't compromise playback. I wonder if HEIF supports animated images?
2024-06-07 06:41:53
I think it only supports intra-only in its multi-image variants, but there might be some flavor of HEIF that allows arbitrary HEVC payloads. If it works, I would expect it to not actually work with alpha, even 4:4:4 might be an issue.
Quackdoc well, without getting too in-depth since this is the wrong channel for av1/avif: there isn't anything wrong with the AV1 spec itself for lossless, it's just that no one cares about it enough to invest substantial effort into making it good. To begin with, of the three major sw encoders, only aomenc can even do lossless.
2024-06-07 07:22:50
I disagree. Sure, there is probably room for enc/dec improvements for lossless avif, especially regarding speed. But I do think that there are inherent limitations at the spec level that prevent lossless avif from being very good. In particular:
1. AV1 follows the video codec philosophy that color spaces / color transforms are not in the scope of the core codec, but they are just metadata to be handled at the application layer and the codec itself only gets numbers to be encoded with or without loss (where loss is measured just as numerical error, e.g. using PSNR). So they do not implement any color transforms as part of the codec itself. For lossless, this means they are forced to use RGB without any channel decorrelation, which is not effective. They want to add YCoCg as an option, but it has to be done at the metadata level (i.e. as a new H.273 CICP code point), for which they have to rely on external standardization, and application support is not guaranteed, so it is not easy to do and leads to interop issues. Long story short: they're stuck with RGB in practice, and that means a lot of correlated entropy has to be encoded multiple times. For images with alpha it's even worse: the RGB and A data is inherently encoded independently in AVIF, which means it is impossible to do things like RGBA palettes where a full pixel is encoded only once.
2. While AV1 has some non-DCT coding tools like palette blocks, they are designed for lossy encoding (e.g. palettes are limited to 8 colors per channel). Tools like directional prediction are useful if you can mostly quantize away the residuals, but they don't help much if all the residuals have to be preserved.
3. The entropy coding in AV1 is relatively simple and designed to be hardware-friendly. For lossy, this is fine, since most of the entropy is quantized away anyway. For lossless and high-quality lossy, entropy coding plays a bigger role though in getting good compression density.
Naksu Thank you for this detailed feedback. However, as my use is not web-oriented, I prioritize lossless quality. Although the animated images I manipulate aren't of very high resolution, I aim for them to be displayed as smoothly as possible. I have determined that maintaining 40 frames per second provides the ideal balance for stable decoding with the WebP format. Unfortunately, while AVIF offers inter-frames and 35% better compression, it struggles to display even 10 frames per second due to slow decoding, rendering it unusable in my situation. Given that JPEG XL also offers significant compression, I was concerned about encountering similar decoding issues.
2024-06-07 07:30:10
You can try what happens when encoding an APNG with `cjxl -d 0 -e 2` or something like that. Note that libjxl encodes all frames as they are provided to it, it does not automatically crop the frames to cover only the region that changes, which can make a big difference. JPEG XL is not particularly designed for animation, but it should be able to beat other animated still image formats (apng, webp). We have not spent much effort on making a good encoder or decoder for the case of animation though. For example, if the dimensions are small, probably it would be beneficial to process frames in parallel, but libjxl does not do that at the moment.
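The suggested experiment spelled out as a command; file names are placeholders.
```
# Fast lossless encode of an animated PNG, per the advice above
cjxl input.apng -d 0 -e 2 output.jxl
```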
Quackdoc
_wb_ I disagree. Sure, there is probably room for enc/dec improvements for lossless avif, especially regarding speed. But I do think that there are inherent limitations at the spec level that prevent lossless avif from being very good. In particular: 1. AV1 follows the video codec philosophy that color spaces / color transforms are not in the scope of the core codec, but they are just metadata to be handled at the application layer and the codec itself only gets numbers to be encoded with or without loss (where loss is measured just as numerical error, e.g. using PSNR). So they do not implement any color transforms as part of the codec itself. For lossless, this means they are forced to use RGB without any channel decorrelation, which is not effective. They want to add YCoCg as an option, but it has to be done at the metadata level (i.e. as a new H.273 CICP code point), for which they have to rely on external standardization, and application support is not guaranteed, so it is not easy to do and leads to interop issues. Long story short: they're stuck with RGB in practice, and that means a lot of correlated entropy has to be encoded multiple times. For images with alpha it's even worse: the RGB and A data is inherently encoded independently in AVIF, which means it is impossible to do things like RGBA palettes where a full pixel is encoded only once. 2. While AV1 has some non-DCT coding tools like palette blocks, they are designed for lossy encoding (e.g. palettes are limited to 8 colors per channel). Tools like directional prediction are useful if you can mostly quantize away the residuals, but they don't help much if all the residuals have to be preserved. 3. The entropy coding in AV1 is relatively simple and designed to be hardware-friendly. For lossy, this is fine, since most of the entropy is quantized away anyway. For lossless and high-quality lossy, entropy coding plays a bigger role though in getting good compression density.
2024-06-07 08:10:24
ah my bad, I meant in comparison to h264/h265/vp9
Naksu
Quackdoc well, without getting too in-depth since this is the wrong channel for av1/avif: there isn't anything wrong with the AV1 spec itself for lossless, it's just that no one cares about it enough to invest substantial effort into making it good. To begin with, of the three major sw encoders, only aomenc can even do lossless.
2024-06-07 10:25:29
AV1 is designed for use on the web at very low bitrates. For various reasons, it is therefore very difficult to achieve complete fidelity with this codec, even at high bitrates. I don't think this necessarily stems from a lack of effort on the part of the developers. There is no benefit in mastering it since the H.264, H.265, and H.266 codecs already excel in this area. This is why I was interested in HEIF because I know it will be much more suitable for my needs, but support for its animated version seems to be nonexistent.
_wb_ I think it only supports intra-only in its multi-image variants, but there might be some flavor of HEIF that allows arbitrary HEVC payloads. If it works, I would expect it to not actually work with alpha, even 4:4:4 might be an issue.
2024-06-07 10:25:41
Honestly, I have no idea. It needs to be tested.
_wb_ You can try what happens when encoding an APNG with `cjxl -d 0 -e 2` or something like that. Note that libjxl encodes all frames as they are provided to it, it does not automatically crop the frames to cover only the region that changes, which can make a big difference. JPEG XL is not particularly designed for animation, but it should be able to beat other animated still image formats (apng, webp). We have not spent much effort on making a good encoder or decoder for the case of animation though. For example, if the dimensions are small, probably it would be beneficial to process frames in parallel, but libjxl does not do that at the moment.
2024-06-07 10:25:48
That's why I'm here in the first place, because it's not working. I get this: ```Getting pixel data failed.``` However, thank you for all this information.
Quackdoc
Naksu AV1 is designed for use on the web at very low bitrates. For various reasons, it is therefore very difficult to achieve complete fidelity with this codec, even at high bitrates. I don't think this necessarily stems from a lack of effort on the part of the developers. There is no benefit in mastering it since the H.264, H.265, and H.266 codecs already excel in this area. This is why I was interested in HEIF because I know it will be much more suitable for my needs, but support for its animated version seems to be nonexistent.
2024-06-07 10:37:33
this is a fairly common misconception, and I don't really understand why it is spread. the current implementations do not represent the totality of av1. but even disregarding that, aomenc, when you spend days tweaking it (if you don't mind the slow speed), can compete quite well with libx265 at high qualities (I even used it for "master exports" for a while when I was still working on video). rav1e is actually fairly decent at high fidelity encoding too. the issue is entirely that there is very little optimization here, which is really understandable because who wants to fund it? as you said, h264 and h265 implementations are very capable at higher quality, so there is next to no incentive to hire/sponsor a developer to work on high fidelity encoding, *very few* people actually want that.
_wb_
2024-06-07 11:17:07
I wouldn't say very few people want high fidelity video encoding, but typically different technologies are used anyway in production workflows and professional use cases (things like ProRes, DCP, FFV1, JPEG XS), with the "real video codecs" being used only for the final delivery to consumer devices. While h264/h265/h266 are perhaps still somewhat useful for other things than the end delivery (they do have profiles that are aiming at production workflows, like HEVC's "Main 4:4:4 16 Intra" or "High Throughput 4:4:4 14"), AV1 seems to be designed specifically for web streaming / broadcast, with its highest profile only going up to 12-bit (compared to 14-bit for h264 and 16-bit for hevc and vvc).
Naksu That's why I'm here in the first place, because it's not working. I get this: ```Getting pixel data failed.``` However, thank you for all this information.
2024-06-07 11:19:42
Right, I guess our animated PNG input in cjxl is broken in the current released version. Does it help to use the current git version of libjxl? I think this issue was solved in the mean time, see https://github.com/libjxl/libjxl/issues/3430
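One way to try the current git version, assuming a standard libjxl checkout; `./ci.sh opt` is the build entry point that comes up again later in this thread.
```
git clone --recursive https://github.com/libjxl/libjxl.git
cd libjxl
./ci.sh opt    # optimized build; the cjxl binary should land under build/tools/
```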
Quackdoc
_wb_ I wouldn't say very few people want high fidelity video encoding, but typically different technologies are used anyway in production workflows and professional use cases (things like ProRes, DCP, FFV1, JPEG XS), with the "real video codecs" being used only for the final delivery to consumer devices. While h264/h265/h266 are perhaps still somewhat useful for other things than the end delivery (they do have profiles that are aiming at production workflows, like HEVC's "Main 4:4:4 16 Intra" or "High Throughput 4:4:4 14"), AV1 seems to be designed specifically for web streaming / broadcast, with its highest profile only going up to 12-bit (compared to 14-bit for h264 and 16-bit for hevc and vvc).
2024-06-07 11:45:18
yeah, generally I would say that 12bit is probably enough for a lot of production workflows (av1 even calls it the "professional" profile for 12b444 and 12b422), but av1 doesn't fit in well because what benefits it does have aren't actually that beneficial. it does ofc have bandwidth savings over h264/h265, but they wouldn't be so great that it would be a massive improvement, since this target audience generally isn't massively bandwidth-limited anyway. generally, what market there is for it would be using asics anyway, things like VDI/streaming, sets that deal with many cameras etc., where all these usecases are adequately, and ideally, served with hardware encoding
Naksu
Quackdoc this is a fairly common misconception, and I don't really understand why it is spread. the current implementations do not represent the totality of av1. but even disregarding that, aomenc, when you spend days tweaking it (if you don't mind the slow speed), can compete quite well with libx265 at high qualities (I even used it for "master exports" for a while when I was still working on video). rav1e is actually fairly decent at high fidelity encoding too. the issue is entirely that there is very little optimization here, which is really understandable because who wants to fund it? as you said, h264 and h265 implementations are very capable at higher quality, so there is next to no incentive to hire/sponsor a developer to work on high fidelity encoding, *very few* people actually want that.
2024-06-07 11:48:08
There are very large investors behind AV1, so I doubt it's a funding issue. I think it's just not the developers' intention to focus on this usage, and I doubt its current structure lends itself to that. As for achieving fidelity with AV1 similar to HEVC... yes, with encoders like `aom-av1-psy`, transparency can be achieved without lossless mode, but the generated files are just as large, if not larger, than those for HEVC. The codec wasn't originally designed for this, so it deserves its reputation.
_wb_ Right, I guess our animated PNG input in cjxl is broken in the current released version. Does it help to use the current git version of libjxl? I think this issue was solved in the mean time, see https://github.com/libjxl/libjxl/issues/3430
2024-06-07 11:48:15
Hmm, I'll try it.
Quackdoc
Naksu There are very large investors behind AV1, so I doubt it's a funding issue. I think it's just not the developers' intention to focus on this usage, and I doubt its current structure lends itself to that. As for achieving fidelity with AV1 similar to HEVC... yes, with encoders like `aom-av1-psy`, transparency can be achieved without lossless mode, but the generated files are just as large, if not larger, than those for HEVC. The codec wasn't originally designed for this, so it deserves its reputation.
2024-06-07 11:59:05
but *who* is investing in what project is important. svtav1 development has slowed down considerably, and "very high quality" isn't even a goal for svtav1. It doesn't even support 4:2:2 or greater, which immediately kills its prospects there. Rav1e is, well, pretty much dead; we can pretend it's not, but it's more or less in maintenance mode. so really we *only* have AOM for high fidelity encoding. when using aom, or back then aom-psy, it needs *loads* of tuning, but you can produce high quality files that are in the same weight class as libx265 when encoding. it is loads slower, and its reliability is mediocre at best (rav1e my beloved). but often times you can get results that beat out libx265; I encountered this most reliably when encoding netflix's sample media, as well as ofc my own footage, which I sadly no longer have as I no longer do any real work with video
2024-06-07 11:59:27
though, back to JXL: has anyone tried encoding netflix's TIFF/EXR images into JXL to compare size? http://download.opencontent.netflix.com.s3.amazonaws.com/index.html?prefix=CosmosLaundromat/
Naksu
Quackdoc but *who* is investing in what project is important. svtav1 development has slowed down considerably, and "very high quality" isn't even a goal for svtav1. It doesn't even support 4:2:2 or greater, which immediately kills its prospects there. Rav1e is, well, pretty much dead; we can pretend it's not, but it's more or less in maintenance mode. so really we *only* have AOM for high fidelity encoding. when using aom, or back then aom-psy, it needs *loads* of tuning, but you can produce high quality files that are in the same weight class as libx265 when encoding. it is loads slower, and its reliability is mediocre at best (rav1e my beloved). but often times you can get results that beat out libx265; I encountered this most reliably when encoding netflix's sample media, as well as ofc my own footage, which I sadly no longer have as I no longer do any real work with video
2024-06-07 12:11:03
The codec's position on fidelity remains quite delicate, so its reputation is still justified. I'll probably inquire more about AOM, but x266 will likely quickly overshadow it.
Quackdoc
2024-06-07 12:12:41
if rav1e didn't die, it would probably be excellent for fidelity. rav1e is still... fine, if you don't mind contributing to global warming, but it isn't as efficient as aomenc can be. it is, however, still very reliable, making it a decent option for a very specific niche
Soni
_wb_ You mean the physical subpixel layout? I don't know of any existing ways to represent that information (as, say, some Exif metadata or something), and I also wouldn't know what a viewer could do with that information — I suppose it could do _something_ when rendering the image more zoomed in than 1:1, but I'm not aware of any image viewing software that actually tries to take subpixel layout into account.
2024-06-07 12:50:17
for starters we'd make the screenshot tool store unmasked colors as pixel values
2024-06-07 12:50:56
for example, black on white text would store black or white pixels
2024-06-07 12:52:19
then a shape layer (8 bits per pixel, as if a 3x3 subpixel with the center missing) can be applied to regenerate the subpixel data
2024-06-07 12:52:51
this would make the screenshots legible to viewers that don't support subpixel rendering
Meow
username after:
2024-06-07 01:17:05
I would recommend removing that "Beta" after Safari 17
2024-06-07 01:17:52
It makes readers think Safari support is still in testing
username
Meow I would recommend removing that "Beta" after Safari 17
2024-06-07 01:25:31
alrighty I have changed this now in my local copy that I will eventually submit \👍
_wb_
Soni then a shape layer (8 bits per pixel, as if a 3x3 subpixel with the center missing) can be applied to regenerate the subpixel data
2024-06-07 03:17:54
Interesting, how exactly does that shape layer work?
2024-06-07 03:20:27
You can certainly just store a grayscale or RGB image in jxl with some additional channel for a subpixel shape layer — there's nothing in the spec for that, but you could use a `kOptional` extra channel, give it some name like "SubpixelShape" or whatever so your application knows what to look for, and you can make it work. Of course normal jxl viewers would just ignore that extra channel, but you can make a special viewer that understands the extra channel.
2024-06-07 03:23:57
If this is something common and desirable enough, we could think about standardizing it in one way or another, but I suspect subpixel rendering is kind of a legacy thing that no modern software still does, it brings more trouble than it solves when displays are getting more dense and many of them can be rotated...
VcSaJen
Quackdoc if rav1e didn't die, it would probably be excellent for fidelity. rav1e is still... fine, if you don't mind contributing to global warming, but it isn't as efficient as aomenc can be. it is, however, still very reliable, making it a decent option for a very specific niche
2024-06-07 03:35:12
"Global Warming" C'mon, it's a common talking point in PR slides for marketing, but there's no need to use it here.
Quackdoc
VcSaJen "Global Warming" C'mon, it's a common talking point in PR slides for marketing, but there's no need to use it here.
2024-06-07 03:36:46
it's a joke
2024-06-07 03:37:06
rav1e is super slow to encode, even for a baseline encode
Soni
_wb_ If this is something common and desirable enough, we could think about standardizing it in one way or another, but I suspect subpixel rendering is kind of a legacy thing that no modern software still does, it brings more trouble than it solves when displays are getting more dense and many of them can be rotated...
2024-06-07 04:40:01
sadly ClearType is still all over the place on Windows...
2024-06-07 04:40:55
unless desktops are legacy now
_wb_
2024-06-07 04:50:00
I keep forgetting about Windows. Also I didn't know it was still using ClearType.
Naksu
_wb_ Right, I guess our animated PNG input in cjxl is broken in the current released version. Does it help to use the current git version of libjxl? I think this issue was solved in the mean time, see https://github.com/libjxl/libjxl/issues/3430
2024-06-07 04:58:06
The error is still present.
_wb_
2024-06-07 04:59:22
What does it say, if you build with `./ci.sh opt` ?
lonjil
2024-06-07 05:08:37
I wouldn't be surprised if many Linux installations still use subpixel rendering.
Naksu
_wb_ What does it say, if you build with `./ci.sh opt` ?
2024-06-07 05:51:55
My bad, the command was calling the wrong `cjxl`. It works fine now.
jonnyawsom3
_wb_ Right, I guess our animated PNG input in cjxl is broken in the current released version. Does it help to use the current git version of libjxl? I think this issue was solved in the mean time, see https://github.com/libjxl/libjxl/issues/3430
2024-06-07 06:08:31
Fixed by these https://github.com/libjxl/libjxl/pull/3491 https://github.com/libjxl/libjxl/pull/3525
2024-06-07 06:08:35
I think
dogelition
_wb_ I keep forgetting about Windows. Also I didn't know it was still using ClearType.
2024-06-07 07:06:33
there's also the newer DirectWrite API, which, unlike the GDI APIs, properly supports grayscale rendering. applications that don't use it (i.e. most of them, including windows explorer) can't be configured to do non-broken grayscale rendering though. so even if you have a display with a non-standard subpixel layout, you have to use ClearType configured as RGB or BGR for text to look acceptable
spider-mario
2024-06-07 07:44:52
my workaround is MacType
Naksu
_wb_ You can try what happens when encoding an APNG with `cjxl -d 0 -e 2` or something like that. Note that libjxl encodes all frames as they are provided to it, it does not automatically crop the frames to cover only the region that changes, which can make a big difference. JPEG XL is not particularly designed for animation, but it should be able to beat other animated still image formats (apng, webp). We have not spent much effort on making a good encoder or decoder for the case of animation though. For example, if the dimensions are small, probably it would be beneficial to process frames in parallel, but libjxl does not do that at the moment.
2024-06-09 01:16:51
JPEG XL is currently the most stable format I've tested, along with WebP. Playback is still a bit slower compared to video, but it is far from the significant frame drops of AVIF. That said, the rendering is not lossless; this may be due to the conversion to APNG.
_wb_
2024-06-09 03:18:10
Not lossless? That's weird. Colorspace shenanigans in your player maybe?
Naksu
2024-06-09 05:06:50
I have no idea. I created a new APNG file which I then converted to JPEG XL, and after a good hour of encoding, I got a true lossless file (not at the right FPS, but that's something I need to adjust).
2024-06-09 05:09:20
But speaking of color space, isn't there a way to adjust it with the `cjxl` encoder, or does it rely solely on the input file specifications?
2024-06-09 05:27:49
I see that it doesn't accept AVIF files as input, and APNG doesn't support 10-bit...
Quackdoc
2024-06-09 05:47:28
apng should support 16bit which should cover you
Naksu
2024-06-09 06:07:34
2024-06-09 06:07:38
And the encoder rejects 10-bit profiles.
Quackdoc
2024-06-09 06:11:54
when encoding it: if you have 10-bit input and you encode it to 16-bit, it will just add some 0s; cjxl should be able to just handle that
Naksu
2024-06-09 08:07:58
It increases the bitrate by 20,000 Mb/s. That's not great.
_wb_
2024-06-09 09:52:30
Try --override_bitdepth 10
Naksu
2024-06-10 09:38:27
`Unrecognized option 'override_bitdepth'.`
jonnyawsom3
Naksu `Unrecognized option 'override_bitdepth'.`
2024-06-10 09:40:18
They meant on cjxl, if you want smaller intermediate APNGs then the only real option is to increase the compression setting, assuming the tool you're using has one
Naksu
2024-06-10 09:44:50
Same on cjxl : `Unknown argument: -override_bitdepth`. The compression rate is already set to the maximum for each of my conversions.
lonjil
2024-06-10 09:46:08
hmm it works for me `cjxl --override_bitdepth 10 foo.png foo.jxl`. Which version are you on? `cjxl --version` I am on v0.10.2
Naksu
2024-06-10 09:47:17
v0.10.2 too
2024-06-10 09:47:57
Maybe I need to remove my quality settings?
2024-06-10 09:49:05
Yep, it looks like that's the issue. It's working now.
lonjil
2024-06-10 09:49:11
that's weird
2024-06-10 09:49:43
what was the whole command line?
Naksu
2024-06-10 09:51:01
```
cjxl input.apng -override_bitdepth 10 -q 100 -e 9 output.jxl
cjxl input.apng --override_bitdepth 10 --q 100 --e 9 output.jxl
```
lonjil
2024-06-10 09:51:44
short arguments (single letter) should have 1 dash, long arguments (full word or multiple words) should have two dashes
2024-06-10 09:51:54
`cjxl input.apng --override_bitdepth 10 -q 100 -e 9 output.jxl`
Naksu
2024-06-10 09:53:44
That's right, thank you.
Naksu I have no idea. I created a new APNG file which I then converted to JPEG XL, and after a good hour of encoding, I got a true lossless file (not at the right FPS, but that's something I need to adjust).
2024-06-10 09:57:15
I think I found the source of the issue: When I use low effort for more speed, I get lossy quality even in lossless mode.
spider-mario
2024-06-10 09:57:38
that really shouldn't happen
Naksu
2024-06-10 09:58:59
The encoder clearly indicates that I am in lossless mode, so I am certain it's not my fault.
Orum
2024-06-10 09:59:09
using `-q` <:SadOrange:806131742636507177>
spider-mario
Naksu The encoder clearly indicates that I am in lossless mode, so I am certain it's not my fault.
2024-06-10 09:59:39
sorry, I didn't mean to imply that it was
Orum
2024-06-10 10:01:14
what are your exact commands to encode & decode that give you lossy encoding in lossless mode?
Naksu
spider-mario sorry, I didn't mean to imply that it was
2024-06-10 10:02:57
No problem, I didn't misinterpret it. It's just that I often make mistakes.
Orum what are your exact commands to encode & decode that give you lossy encoding in lossless mode?
2024-06-10 10:03:02
```cjxl input.apng --override_bitdepth 10 -q 100 -e 1 output.jxl``` But it was the same without the `--override_bitdepth`.
Orum
2024-06-10 10:03:20
so it's a 10-bit png as input?
2024-06-10 10:03:46
errr.... apng?
Naksu
2024-06-10 10:04:28
16-bit, I guess.
Orum
2024-06-10 10:04:57
what does `identify` say?
Naksu
2024-06-10 10:05:47
Do I add it to my command?
CrushedAsian255
Naksu Do I add it to my command?
2024-06-10 10:06:23
It's part of ImageMagick
2024-06-10 10:06:34
$ identify <filename>
2024-06-10 10:06:47
On phone, can't code block
Naksu
2024-06-10 10:06:59
Ah, I never use ImageMagick.
Orum
2024-06-10 10:07:36
might be a good time to start <:Hypers:808826266060193874>
CrushedAsian255
Orum might be a good time to start <:Hypers:808826266060193874>
2024-06-10 10:08:08
I'm trying to learn it
Naksu
2024-06-10 10:08:13
Okay, I see what you're asking me to do. Just wait for the encoding to finish.
CrushedAsian255
Orum might be a good time to start <:Hypers:808826266060193874>
2024-06-10 10:08:30
My current way to manipulate images is ||ffmpeg||
Orum
2024-06-10 10:08:50
oh god...
CrushedAsian255
2024-06-10 10:08:54
I use identify all the time though
Orum oh god...
2024-06-10 10:09:01
Ikr
2024-06-10 10:09:23
ffmpeg -i source.png -s:v 640x480 out.jpg
2024-06-10 10:11:47
Is cjxl currently still the best encoder for high quality?
Orum
2024-06-10 10:13:16
depends -- are we talking about high-BPP lossy?
2024-06-10 10:13:28
then pretty much yes
CrushedAsian255
2024-06-10 10:15:12
What about that chroma issue that everyone was talking about a few days ago
Orum
2024-06-10 10:15:25
link?
CrushedAsian255
2024-06-10 10:17:14
https://discord.com/channels/794206087879852103/804324493420920833/1245906480192946256
Orum
2024-06-10 10:19:08
oh god, the furry image...
2024-06-10 10:19:44
yeah, the yellow is slightly off
2024-06-10 10:20:23
I've noticed some color shift in some of my images as well, though that was with old libjxls so I would have to test again to see if it's still an issue
lonjil
2024-06-10 10:20:44
you ever notice that other than some common test photographs, all the images people who are image encoding fanatics test with seem to be furry, porn, or both
CrushedAsian255
2024-06-10 10:23:38
Slightly more seriously, is there a way to pass y4m to cjxl, and if not, what's the best intermediary? I am trying to convert bt.2020 image frames into jxl
Orum
2024-06-10 10:27:41
okay, so there are still slight color shift issues in cjxl
2024-06-10 10:27:54
but I still think in terms of lossy, there aren't better options
2024-06-10 10:30:03
on the one image I have with the most trouble, it shifts away from green and toward magenta (at `-d 1`)
2024-06-10 10:31:07
there are also issues with dark areas losing detail but those can be mitigated with a high intensity target
_wb_
Naksu I think I found the source of the issue: When I use low effort for more speed, I get lossy quality even in lossless mode.
2024-06-10 10:58:18
This should not happen. Can you give an example input file and command line for which this happens? How did you check that it's not lossless?
Naksu
Orum what does `identify` say?
2024-06-10 11:03:10
Orum
2024-06-10 11:03:41
I meant on the APNG input
Naksu
2024-06-10 11:04:43
```input.apng PNG 508x649 508x649+0+0 16-bit sRGB 294.244MiB 0.001u 0:00.000```
Quackdoc
2024-06-10 11:04:52
perhaps encoding as 16 itself is an issue?
Orum
2024-06-10 11:05:52
well, yeah, encoding a 16-bit PNG to 10-bit *anything* is going to be lossy
lonjil
2024-06-10 11:05:59
that's not true
2024-06-10 11:06:13
PNG has a header for indicating how many of the bits are actually real
Quackdoc
Orum well, yeah, encoding a 16-bit PNG to 10-bit *anything* is going to be lossy
2024-06-10 11:06:22
no, it's 10bit to 16bit
2024-06-10 11:06:31
so the extra data should just be garbage
Naksu
_wb_ This should not happen. Can you give an example input file and command line for which this happens? How did you check that it's not lossless?
2024-06-10 11:06:34
It was visually degraded.
Orum
lonjil PNG has a header for indicating how many of the bits are actually real
2024-06-10 11:06:39
doesn't identify show that though?
Quackdoc
2024-06-10 11:06:55
10bit -> 16bit png -> 10bit jxl, this should be fine
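A sketch of that pipeline; using ffmpeg for the 10-to-16-bit padding step is an assumption, only `--override_bitdepth` comes from this thread.
```
# 10-bit source -> 16-bit PNG frames (low bits zero-padded) -> 10-bit jxl
ffmpeg -i input_10bit.mkv -pix_fmt rgb48be frame_%04d.png
cjxl frame_0001.png frame_0001.jxl -d 0 --override_bitdepth 10
```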
lonjil
Orum doesn't identify show that though?
2024-06-10 11:07:04
idk!
Quackdoc
2024-06-10 11:07:15
i can only guess either encoding the apng is broken, or decoding it.
Naksu
2024-06-10 11:07:25
But with an effort at 9, I have a real lossless file now.
Quackdoc
2024-06-10 11:07:37
0.0
2024-06-10 11:07:50
we have a real uh oh moment on our hands lol
_wb_
2024-06-10 11:10:07
quite strange. Can you show an example of the kind of degradation you are seeing?
2024-06-10 11:12:12
any effort at distance 0 not being lossless is a bug — can you simplify things to a single problem frame and see if you can still get the buggy behavior to happen?
2024-06-10 11:13:18
could be some shenanigans with the combination of e1 and animation, I suppose. Does the problem happen only at e1 or also at e2?
CrushedAsian255
lonjil idk!
2024-06-10 11:18:47
Maybe try `identify -verbose`
jonnyawsom3
2024-06-10 11:49:49
My first thought is something acting up with palette and the 10-bit override with extra 16-bit garbage data
Naksu
_wb_ quite strange. Can you show an example of the kind of degradation you are seeing?
2024-06-10 01:17:53
https://slow.pics/s/815wX69z
2024-06-10 01:18:42
I'll test for `-e 2`.
Naksu I'll test for `-e 2`.
2024-06-10 01:20:50
Same, still lots of compression artifacts.
2024-06-10 01:37:48
Wait... I think I was at 90 in quality.
2024-06-10 01:38:58
I thought I had seen 'lossless', but it must have been another command...
2024-06-10 01:43:25
Yeah, I had to run two encodings at the same time with different quality criteria, then I got mixed up.
_wb_
2024-06-10 01:49:37
Low-effort lossy encoding can indeed be less reliable than high-effort lossy encoding in the sense that there might be images or regions of images where low effort lossy is worse than the target quality. I wouldn't recommend using lossy below e4, and I would recommend using e6.
Naksu
2024-06-10 02:01:29
I intended to do lossless compression, that was a mistake on my part. To optimize the file size, I had chosen values around -e 9 or 10, but that seems to significantly slow down decoding. So, what is the recommended value for lossless?
_wb_
2024-06-10 02:20:51
If you want something that encodes/decodes quickly, I'd use e1/e2/e3
2024-06-10 02:21:02
e3 for photographic, e1 or e2 for non-photo
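The recommendation as concrete commands; file names are placeholders.
```
cjxl photo.png photo.jxl -d 0 -e 3              # lossless, photographic
cjxl screenshot.png screenshot.jxl -d 0 -e 2    # lossless, non-photo
```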
Orum
2024-06-10 02:21:55
🤔 is that still true if you're using `--faster_decoding=4` or whatever?
_wb_
2024-06-10 03:37:00
e1/e2/e3 are automatically decoding faster because they have specialized decode paths. --faster_decoding is mostly useful for lossy, though it should also help for higher-effort lossless — but generally I wouldn't really use that option tbh, for lossy the default decode speed is fast enough in practice and for lossless you'll get a bigger difference by switching to lower effort encoding. Something like -e 9 --faster_decoding=4 will probably still be slower than just -e 2
Naksu
2024-06-10 03:43:28
And `-e 2` + `--faster_decoding 4`?
_wb_
2024-06-10 07:47:15
Feel free to benchmark it, don't think it will make a big difference but I can be wrong
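A simple way to run that benchmark; the file names and the `time`-based measurement are assumptions.
```
cjxl input.apng a.jxl -d 0 -e 2
cjxl input.apng b.jxl -d 0 -e 2 --faster_decoding=4
time djxl a.jxl a_out.png    # compare wall-clock decode times
time djxl b.jxl b_out.png
```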
Naksu
2024-06-10 09:03:31
I don't think so, there's always a lag compared to a video.
afd
2024-06-11 08:26:39
Hi, I have 2 questions
A): Can 'jpeg xl' be losslessly cropped? As in pixel perfect, not bit perfect. I am able to do this with current baseline 'jpegs' without the file size increasing using jpegtran, will the jxl format allow this?
B): How can I losslessly re-compress a .jxl without the file size increasing? Currently I can run cjxl in.jpg out.jxl at default settings. But when I take that same output and re-run it again at say cjxl -d 0 -e9 in.jxl out.jxl it always seems to end up bigger than the original file
2024-06-11 08:26:43
thanks
CrushedAsian255
afd Hi, I have 2 questions A): Can 'jpeg xl' be losslessly cropped? As in pixel perfect, not bit perfect. I am able to do this with current baseline 'jpegs' without the file size increasing using jpegtran, will the jxl format allow this? B): How can I losslessly re-compress a .jxl without the file size increasing? Currently I can run cjxl in.jpg out.jxl at default settings. But when I take that same output and re-run it again at say cjxl -d 0 -e9 in.jxl out.jxl it always seems to end up bigger than the original file
2024-06-11 08:32:34
for question 2, it's because the first file is going through a special process that is specifically designed for making JPEGs smaller, whereas the second time you're just using the normal lossless process, which does not know the input was a JPEG
2024-06-11 08:32:47
so it's trying to also transcode the JPEG artifacts
2024-06-11 08:32:53
i.e. why would you do that?
2024-06-11 08:33:10
if you specifically wanted to use a different effort, decode the JXL back into JPG and then cjxl again
2024-06-11 08:34:45
for the first question, I think there is a way to define a display window frame to losslessly crop it, but I'm not 100% sure
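What the decode-back round trip for question 2 looks like; for a JPEG-transcoded jxl, djxl can reconstruct the original JPEG bit-exactly, and the file names here are placeholders.
```
djxl in.jxl restored.jpg          # bit-exact JPEG reconstruction
cjxl restored.jpg out.jxl -e 9    # redo the JPEG-aware transcode at higher effort
```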
jonnyawsom3
afd Hi, I have 2 questions A): Can 'jpeg xl' be losslessly cropped? As in pixel perfect, not bit perfect. I am able to do this with current baseline 'jpegs' without the file size increasing using jpegtran, will the jxl format allow this? B): How can I losslessly re-compress a .jxl without the file size increasing? Currently I can run cjxl in.jpg out.jxl at default settings. But when I take that same output and re-run it again at say cjxl -d 0 -e9 in.jxl out.jxl it always seems to end up bigger than the original file
2024-06-11 08:42:08
Original jpegs used blocks of 8x8 pixels, JXL uses variable blocks in lossy or 128x128 minimum in modular mode (used for lossless, but can do lossy too). JXL has built in cropping, so the files can be made smaller losslessly, but they'll also be bigger than a jpeg crop due to the larger block sizes For the second question, as said above, you want to decode back to the jpeg and then use the higher effort/different settings
_wb_
afd Hi, I have 2 questions A): Can 'jpeg xl' be losslessly cropped? As in pixel perfect, not bit perfect. I am able to do this with current baseline 'jpegs' without the file size increasing using jpegtran, will the jxl format allow this? B): How can I losslessly re-compress a .jxl without the file size increasing? Currently I can run cjxl in.jpg out.jxl at default settings. But when I take that same output and re-run it again at say cjxl -d 0 -e9 in.jxl out.jxl it always seems to end up bigger than the original file
2024-06-11 09:49:00
A) In principle, yes, but with VarDCT it is not possible to crop at arbitrary 8x8-aligned offsets. The current encoder does not allow blocks to cross the 64x64 grid, so in principle images could be cropped losslessly at those offsets, though the bitstream itself only guarantees that for a 256x256 grid. In any case, this is for now only academic, there is no jpegtran-like tool that does jxl transcoding at the coefficient level. It is also possible in principle to do a reversible cropping in jxl, keeping all the image data but simply adjusting the headers so the image canvas only includes the crop. There is no tool at the moment that can do that, but in principle it can be done by updating the image header to change the dimensions to the crop size and updating the frame header to make it have a crop offset and the original dimensions, without having to change the actual image data.
B) If you use jxl as an input format in cjxl, it is always decoded to pixels. Possibly we could/should change that behavior in the case that it is a recompressed jpeg and the encode settings are lossless. The main use case we had in mind was to use lossless jxl (lossless from pixels, not from a jpeg) as input to do a lossy encoding of it, I don't think we had considered the option of re-encoding a recompressed jpeg losslessly. In any case, if the input is a lossy jxl (that is not an ex-jpeg), for now we have no other option than to decode it to pixels; making a jpegtran-like tool that can losslessly transcode lossy jxl is quite nontrivial and not within the scope of what we currently want to have in libjxl.
Orum
_wb_ e1/e2/e3 are automatically decoding faster because they have specialized decode paths. --faster_decoding is mostly useful for lossy, though it should also help for higher-effort lossless — but generally I wouldn't really use that option tbh, for lossy the default decode speed is fast enough in practice and for lossless you'll get a bigger difference by switching to lower effort encoding. Something like -e 9 --faster_decoding=4 will probably still be slower than just -e 2
2024-06-11 09:51:08
I'm mostly interested in using it for lossy though, and it did help decode speed
2024-06-11 09:52:02
fairly small hit to efficiency as well
_wb_
2024-06-11 09:52:12
Yes, for lossy it should help quite a bit.
afd
CrushedAsian255 ie. why would you do that?
2024-06-11 12:06:26
Hi CrushedAsian255, thanks for your reply. Why would I want to losslessly re-compress a .jxl file? Because I may not have been the person who created the files in the first place, or know what the inputs were for any of those files in the beginning. If I have 20,000 various .jxl files gathered from random places on the net, how do I know that they are as optimised as they could be? I don't, so of course I would simply run them through optimisation processes to see *if* they could be shrunk some more. I've been doing this for years with all my file formats, with a lot of success.
CrushedAsian255
afd Hi CrushedAsian255, thanks for your reply. Why would I want to losslessly re-compress a .jxl file? Because I may not have been the person who created the files in the first place, or know what the inputs were for any of those files in the beginning. If I have 20,000 various .jxl files gathered from random places on the net, how do I know that they are as optimised as they could be? I don't, so of course I would simply run them through optimisation processes to see *if* they could be shrunk some more. I've been doing this for years with all my file formats, with a lot of success.
2024-06-11 12:12:10
I'm not sure if such optimisation exists for VarDCT JXL. For lossless you can just convert jxl to jxl, as it's lossless. For JPEG-transcoded JXL, you would want to convert back to JPEG first. Not sure about VarDCT JXL; you could just try jxl->jxl using a higher effort, but there will be some generation loss.
2024-06-11 12:13:30
Also, "as optimised as they can be" depends on how many years you want transcoding to take, i.e. you probably wouldn't want to use "ultraslow" av1 just to squeeze out 5% storage space
2024-06-11 12:13:47
cjxl can take a long time at higher effort levels
afd
2024-06-11 12:23:19
Thanks. I have tested .jxl to jxl back and forth quite a bit to see the effects, and for the first time now, reading the comments, I have a much better idea of why some of my files (the jpeg ones) were getting larger and not smaller. Yes, of course, "optimised" can be a vague term, I agree. However, that highlights things even more about re-compressing jxl files. Some people work in situations where power is not always constant or cheap; wouldn't it be nice to quickly save a bunch of .jxl (less optimised, but less energy/time used) and then at a later date come back to that same set of files when connected to more stable power or cheaper energy and re-optimise again? 🙂
_wb_ A) In principle, yes, but with VarDCT it is not possible to crop at arbitrary 8x8-aligned offsets. The current encoder does not allow blocks to cross the 64x64 grid, so in principle images could be cropped losslessly at those offsets, though the bitstream itself only guarantees that for a 256x256 grid. In any case, this is for now only academic, there is no jpegtran-like tool that does jxl transcoding at the coefficient level. It is also possible in principle to do a reversible cropping in jxl, keeping all the image data but simply adjusting the headers so the image canvas only includes the crop. There is no tool at the moment that can do that, but in principle it can be done by updating the image header to change the dimensions to the crop size and updating the frame header to make it have a crop offset and the original dimensions, without having to change the actual image data. B) If you use jxl as an input format in cjxl, it is always decoded to pixels. Possibly we could/should change that behavior in the case that it is a recompressed jpeg and the encode settings are lossless. The main use case we had in mind was to use lossless jxl (lossless from pixels, not from a jpeg) as input to do a lossy encoding of it, I don't think we had considered the option of re-encoding a recompressed jpeg losslessly. In any case, if the input is a lossy jxl (that is not an ex-jpeg), for now we have no other option than to decode it to pixels; making a jpegtran-like tool that can losslessly transcode lossy jxl is quite nontrivial and not within the scope of what we currently want to have in libjxl.
2024-06-11 12:46:53
Thank you _wb_ for your answer. I understand it might not be possible to crop to arbitrary dimensions based on the file structure. However, having the ability to losslessly crop *even just a little* off an image can make a huge difference to file sizes. I have a lot of astronomical images of large star fields, such as 6000x6000 pixels; often I just want to focus on a small section of stars, square crop the image from say 20 MB down to 50 kB, and insert that smaller image into a science document without losing any of the image quality.
_wb_
2024-06-11 01:04:37
For default lossy (vardct), the jxl bitstream in principle allows cropping on a 256x256 grid (the AC group size) though it requires re-encoding DC groups, and relatively easily (i.e. with only header updates and selective bitstream copying) allows cropping on a 2048x2048 grid (the DC group size). For default lossless (modular with independent groups), in principle cropping on a 256x256 grid can be done with only header updates + selective bitstream copying. Currently there are no tools to do this kind of bitstream chopping. Maybe an interesting project for one of the independent implementers around here who wrote jxl decoders? After all, once you have a decoder that can parse the headers and interpret the TOC, you're halfway there: it's then a matter of making a new TOC and re-concatenating the relevant bitstream segments. <@853026420792360980> maybe you feel like writing such a tool?
CrushedAsian255
2024-06-11 01:09:23
don't ask me lol, I'm still trying to parse the header 💀
afd
2024-06-11 01:35:54
Thank you _wb_. May I also ask, in a similar fashion, would there also be the ability in the future to *losslessly join* 2 or more images, which I do with jpegtran? There are some scenarios where I have received 2 images of, say, an old star field map that was split in half or into multiple tiles; for reasons of bandwidth in the past, a webmaster took a decision to split them up, and years later I've had to rejoin files via command line tools back to the original image so I could study them properly. If both lossless *crop* and lossless *join* could be achieved in the JXL format, it would be nice to move away from PNGs, JPEGs or GIFs and finally simplify to one universal image format.
_wb_
2024-06-11 01:51:31
I mean, as long as you are using lossless (like PNG), you can just use standard image editing tools like imagemagick to crop or join images, and just re-encode them. For lossy, things are way more complicated since standard tools operate on pixels, not internal representations used in lossy codecs, which for JPEG is still doable (just DCT coefficients) but for JXL gets much trickier (because there are more coding tools / block sizes / auxiliary internal images / etc).
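As a worked example of that lossless pipeline (file names are placeholders; `magick` is ImageMagick 7):
```
# crop: decode losslessly, cut with ImageMagick, re-encode losslessly
djxl in.jxl tmp.png
magick tmp.png -crop 512x512+1024+2048 +repage crop.png
cjxl crop.png crop.jxl -d 0

# join: place two tiles side by side, then re-encode losslessly
magick left.png right.png +append joined.png
cjxl joined.png joined.jxl -d 0
```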
lonjil
2024-06-11 01:52:36
for joining arbitrary JXL images, could you store each of them as reference frames and then use patches to assemble the final image? 🙂
afd
2024-06-11 01:53:50
I should say specifically joining baseline JPEGs, as those are my most problematic files due to trying to keep them lossless.
_wb_
2024-06-11 01:53:52
Sure, or just as two frames directly (using the offset in the frame header to position them)
2024-06-11 01:55:53
But things get tricky in the general case, e.g. what if you're joining two jxl images that are themselves already multi-layer? Need to recompute offsets, etc.
2024-06-11 01:56:24
Things can be done but someone has to write tools to do these tricks 🙂
Traneptora
_wb_ For default lossy (vardct), the jxl bitstream in principle allows cropping on a 256x256 grid (the AC group size) though it requires re-encoding DC groups, and relatively easily (i.e. with only header updates and selective bitstream copying) allows cropping on a 2048x2048 grid (the DC group size). For default lossless (modular with independent groups), in principle cropping on a 256x256 grid can be done with only header updates + selective bitstream copying. Currently there are no tools to do this kind of bitstream chopping. Maybe an interesting project for one of the independent implementers around here who wrote jxl decoders? After all, once you have a decoder that can parse the headers and interpret the TOC, you're halfway there: it's then a matter of making a new TOC and re-concatenating the relevant bitstream segments. <@853026420792360980> maybe you feel like writing such a tool?
2024-06-11 03:10:24
I was considering doing something like this a bit ago and it's still on my mind, but time is the biggest issue
Dexrn ZacAttack
2024-06-13 02:53:59
found out someone made a thumbnail thingy that allows explorer to preview JXL files... amazing! It seems to even support the progressive loading?
username
Dexrn ZacAttack found out someone made a thumbnail thingy that allows explorer to preview JXL files... amazing! It seems to even support the progressive loading?
2024-06-13 02:59:09
you sure about progressive loading/decoding? maybe you could take a video of what you are seeing
jonnyawsom3
2024-06-13 03:20:57
You're probably seeing a photo app loading the file explorer thumbnail before the full image
Demiurge
Dexrn ZacAttack found out someone made a thumbnail thingy that allows explorer to preview JXL files... amazing! It seems to even support the progressive loading?
2024-06-13 04:19:29
the jxl_winthumb made with jxl-oxide? Yeah it's pretty cool, too bad it doesn't work with the newer photo viewer, only the old Vista-era photo viewer
2024-06-13 04:19:48
I think that's just a limitation with windows though
jonnyawsom3
2024-06-13 04:33:14
I think you *can* make it work, but it requires a lot of registry edits
Dexrn ZacAttack
username you sure about progressive loading/decoding? maybe you could take a video of what you are seeing
2024-06-13 05:18:50
I see a blurry photo, then the full res after. I'll try to capture a video, hold on
username
2024-06-13 07:57:09
2024-06-13 07:57:43
2024-06-13 07:57:54
which do you all think is better?
2024-06-13 07:58:35
less or more "categories"?
Demiurge
2024-06-13 08:37:58
It's so small that there's no point in splitting the last one up.
2024-06-13 08:38:36
Personally I think the one with 3 categories looks best
Posi832
username which do you all think is better?
2024-06-13 08:38:43
I like the second image more. Maybe headers for each category would make it even easier to skim through (Encoders/Decoders, Browsers, Command-line tools, Image editors, Image tools)
VcSaJen
2024-06-13 09:19:05
Remove Chrome, Edge, Opera, add addon as a separate line instead
Quackdoc
2024-06-13 09:20:55
perhaps a newline for the ~~hall of shame~~ gimped applications
username
Posi832 I like the second image more. Maybe headers for each category would make it even easier to skim through (Encoders/Decoders, Browsers, Command-line tools, Image editors, Image tools)
2024-06-13 09:32:04
I decided against headers for each category since I feel like it would be a bit visually messy and unnecessary, while also possibly contrasting somewhat negatively with the rest of the site, which in comparison is quite simplistic and doesn't contain any other multi-level lists https://discord.com/channels/794206087879852103/803574970180829194/1204264884016123954
CrushedAsian255
username
2024-06-13 12:19:55
Affinity Photo/Designer 2 also contains Affinity Publisher
Kagamiin~ Saphri
2024-06-13 01:14:17
cw: generative AI, AI upscaling ||How curious. I upscaled two images using two different AI-upscaling models. One of them generated a PNG that's **1.25x** bigger than the other. When converted to JXL modular lossless (I just used `cjxl -d 0`), sure they are both smaller than their PNG counterparts (which I didn't pngcrush btw, just saved directly from Upscayl), but the relative size difference increases to **1.5x**! And when compressed into JXL VarDCT (`cjxl -d 1.0`), the relative size difference flips over to **0.93x**!||
monad
Kagamiin~ Saphri cw: generative AI, AI upscaling ||How curious. I upscaled two images using two different AI-upscaling models. One of them generated a PNG that's **1.25x** bigger than the other. When converted to JXL modular lossless (I just used `cjxl -d 0`), sure they are both smaller than their PNG counterparts (which I didn't pngcrush btw, just saved directly from Upscayl), but the relative size difference increases to **1.5x**! And when compressed into JXL VarDCT (`cjxl -d 1.0`), the relative size difference flips over to **0.93x**!||
2024-06-13 01:54:55
is this some exception to your usual experience?
Kagamiin~ Saphri
monad is this some exception to your usual experience?
2024-06-13 02:32:46
it's mostly a curiosity
2024-06-13 02:34:12
I'm fascinated by image compression and I'd love to know more about how different types of compression can have disparities to the point where the size difference between two images flips around like that
Meow
username
2024-06-14 11:18:08
Pixelmator Pro can open JXL but cannot export JXL
zamfofex
2024-06-15 11:45:56
Hello, everyone, I'm here again! Not sure if this is the best channel to ask, but I have decided to continue working on my JPEG XL extension. I decided to move from libjxl to jxl-oxide (because it is much easier to use!) and it seems to be working better than before. (Bar the fact it doesn't use a worker anymore, so it might slow down browsing while loading images.) I have made it available on the `jxl-oxide` branch on <https://github.com/zamfofex/jxl-crx>
```
git clone --branch=jxl-oxide https://github.com/zamfofex/jxl-crx
cd jxl-crx
scripts/build.sh
# if you want to use Chrome:
rm manifest.json
ln -s manifest-v3.json manifest.json
```
Then load it into Chrome/Firefox directly. (The `.zip` for Firefox, or the directory itself for Chrome.) **I'd love any kind of feedback!** Either here or on [#22](https://github.com/zamfofex/jxl-crx/pull/22) Note that now the build uses Node (and npm) for webpack.
Quackdoc
2024-06-15 11:48:34
I may be able to test it later today, <@703028154431832094> has a good page for testing decode speed, https://giannirosato.com/blog/post/corpus-lossy/ this will cripple most wasm implmentations since it does no lazy loading
Hello, everyone, I’m here again! Not sure if this is the best channel to ask, but I have decided to continue working on my JPEG XL extension. I decided to move from libjxl to jxl-oxide (because it is much easier to use!) and it seems to be working better than before. (Bar the fact it doesn’t use a worker anymore, so it might slow down browsing while loading images.) I have made it available on the `jxl-oxide` branch on <https://github.com/zamfofex/jxl-crx> ``` git clone --branch=jxl-oxide https://github.com/zamfofex/jxl-crx cd jxl-crx scripts/build.sh # if you want to use Chrome: rm manifest.json ln -s manifest-v3.json manifest.json ``` Then load it into Chrome/Firefox directly. (The `.zip` for Firefox, or the directory itself for Chrome.) **I’d love any kind of feedback!** Either here or on [#22](https://github.com/zamfofex/jxl-crx/pull/22) Note that now the build uses Node (and npm) for webpack.
2024-06-15 11:53:18
actually, if you can throw me a premade zip I can try it sooner; I don't have the compilation stuff installed on my device rn
zamfofex
2024-06-15 11:58:15
Quackdoc
2024-06-15 12:05:18
the page doesn't crash now in firefox 0.0
2024-06-15 12:05:53
it's still slow, likely due to firefox not rendering the images until the addon is done decoding all of them
2024-06-15 12:07:27
zamfofex
Quackdoc the page doesn't crash now in firefox 0.0
2024-06-15 12:08:10
That sounds good! 😄
username
2024-06-15 12:08:49
welcome back. you could probably post about or ask questions in the https://discord.com/channels/794206087879852103/1065165415598272582 channel
2024-06-15 12:09:10
also I wonder how it compares to this standalone demo page https://jxl-oxide.tirr.dev/demo/index.html
Quackdoc
2024-06-15 12:09:35
chromium is way faster (I don't think it's being kneecapped by rendering), and it too does not crash
2024-06-15 12:10:02
though it did throw an error so I think it stopped decoding them early
2024-06-15 12:10:57
oh yeah, the page did end up crashing, but not before decoding a large amount of images
2024-06-15 12:11:21
if the site had lazy loading it would probably be fine, but I wonder if there is a way to limit the number of images being decoded concurrently
2024-06-15 12:12:21
either way, seems like a large upgrade
zamfofex
2024-06-15 12:17:44
I see! 😄 One neat thing is that now it supports animated JPEG XL images and other such niceties.
Quackdoc
2024-06-15 12:19:02
> Content-Security-Policy: The page's settings blocked the loading of a resource (connect-src) at data:application/wasm;base64,AGFzbQEAAAA… because it violates the following directive: "connect-src *"
2024-06-15 12:19:05
<:SadCheems:890866831047417898>
2024-06-15 12:19:42
I'm using a redirector to force Lemmy to load all images as jxl, and this causes issues when used with the extension
yoochan
2024-06-15 02:02:22
Amazing! Amazing news!
Oleksii Matiash
Quackdoc it's still slow, likely due to firefox not rendering the images until the addon is done decoding all of them
2024-06-15 02:54:16
It does not crash with the old jxl addon either, but..
2024-06-15 02:54:18
2024-06-15 03:00:28
With 17 GB total used by FF
Quackdoc
2024-06-15 03:26:29
0.0
HUK
Kagamiin~ Saphri cw: generative AI, AI upscaling ||How curious. I upscaled two images using two different AI-upscaling models. One of them generated a PNG that's **1.25x** bigger than the other. When converted to JXL modular lossless (I just used `cjxl -d 0`), sure they are both smaller than their PNG counterparts (which I didn't pngcrush btw, just saved directly from Upscayl), but the relative size difference increases to **1.5x**! And when compressed into JXL VarDCT (`cjxl -d 1.0`), the relative size difference flips over to **0.93x**!||
2024-06-16 04:16:21
I would assume this has something to do with Real-ESRGAN's very poor retention of detail; results would prob be different with up-to-date archs. My guess is that VarDCT isn't able to preserve the visual quality as well (due to Real-ESRGAN removing much of it on its own) and ends up with a larger file because of that. Maybe the lack of detail makes it harder to find optimizations? this is all just speculation though
CrushedAsian255
HUK I would assume this has something to do with Real-ESRGAN's very poor retention of detail; results would prob be different with up-to-date archs. My guess is that VarDCT isn't able to preserve the visual quality as well (due to Real-ESRGAN removing much of it on its own) and ends up with a larger file because of that. Maybe the lack of detail makes it harder to find optimizations? this is all just speculation though
2024-06-16 04:35:16
What model is better than ESRGAN?
HUK
2024-06-16 04:55:29
Essentially anything that isn't ESRGAN: DAT, RGT, ATD, SPAN, etc. ESRGAN is over half a decade old
Kagamiin~ Saphri
HUK I would assume this has something to do with Real-ESRGAN's very poor retention of detail; results would prob be different with up-to-date archs. My guess is that VarDCT isn't able to preserve the visual quality as well (due to Real-ESRGAN removing much of it on its own) and ends up with a larger file because of that. Maybe the lack of detail makes it harder to find optimizations? this is all just speculation though
2024-06-16 08:57:27
interesting
Demiurge
Hello, everyone, I’m here again! Not sure if this is the best channel to ask, but I have decided to continue working on my JPEG XL extension. I decided to move from libjxl to jxl-oxide (because it is much easier to use!) and it seems to be working better than before. (Bar the fact it doesn’t use a worker anymore, so it might slow down browsing while loading images.) I have made it available on the `jxl-oxide` branch on <https://github.com/zamfofex/jxl-crx> ``` git clone --branch=jxl-oxide https://github.com/zamfofex/jxl-crx cd jxl-crx scripts/build.sh # if you want to use Chrome: rm manifest.json ln -s manifest-v3.json manifest.json ``` Then load it into Chrome/Firefox directly. (The `.zip` for Firefox, or the directory itself for Chrome.) **I’d love any kind of feedback!** Either here or on [#22](https://github.com/zamfofex/jxl-crx/pull/22) Note that now the build uses Node (and npm) for webpack.
2024-06-16 09:25:31
How could libjxl become easier to use?
_wb_
2024-06-16 09:48:09
I guess we could add a higher level API for one-shot encoding/decoding, exposing fewer options and less functionality but making it just a single function call to convert a bitstream to or from a pixel buffer.
TheBigBadBoy - π™Έπš›
2024-06-16 09:59:13
exposing fewer options? `cjxl -h` already shows only 3 encoding options: distance, quality, effort. How could you do fewer than that <:KekDog:805390049033191445> ~~and I don't want to add another `-v` to expose more options~~
monad
TheBigBadBoy - π™Έπš› exposing fewer options ? `cjxl -h` already shows only 3 encoding options: distance, quality, effort how could you do fewer than that <:KekDog:805390049033191445> ~~and I don't want to add another `-v` to expose more options~~
2024-06-16 10:11:03
> API
nobody uses cjxl
TheBigBadBoy - π™Έπš›
2024-06-16 10:17:03
oh shit I misread <:KekDog:805390049033191445>
CrushedAsian255
_wb_ I guess we could add a higher level API for one-shot encoding/decoding, exposing fewer options and less functionality but making it just a single function call to convert a bitstream to or from a pixel buffer.
2024-06-16 10:18:07
So like a thin wrapper?
_wb_
2024-06-16 10:46:24
Yes. But then again, most applications that need simple encode/decode, will not use codec libraries directly, but higher level libraries like libvips, ImageMagick, ffmpeg, CoreMedia, etc.
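For instance, each of these one-liners works if the tool in question was built with JXL support (exact option names for quality etc. differ per tool):
```
magick in.png out.jxl                  # ImageMagick
vips copy in.png out.jxl               # libvips
ffmpeg -i in.png -c:v libjxl out.jxl   # ffmpeg with the libjxl encoder
```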
zamfofex
_wb_ I guess we could add a higher level API for one-shot encoding/decoding, exposing fewer options and less functionality but making it just a single function call to convert a bitstream to or from a pixel buffer.
2024-06-16 02:22:05
The nice thing about jxl-oxide is that it's not just for "one-shot decoding". It seems to have most meaningful features of libjxl, but the API is just much simpler. Probably the most significant thing it lacks is converting across color profiles other than "to sRGB". You just feed bytes into it, and at any point in time, you can just query the known information about the image.
Quackdoc
2024-06-16 04:18:46
I think it can hmm
CrushedAsian255
2024-06-17 12:45:32
just wondering what was the outcome from https://github.com/libjxl/libjxl/issues/3356
username
CrushedAsian255 just wondering what was the outcome from https://github.com/libjxl/libjxl/issues/3356
2024-06-17 03:36:28
it was brought up that it's possible for there to be bitstreams that end up the same but decode differently (which is a very, very bad thing for lossless), and that seems to have been the thing that made the decision lean towards changing the spec rather than fixing libjxl. But I have to wonder if it's even possible for the libjxl encoder to produce such a bitstream, or if it's purely a theoretical thing
CrushedAsian255
username it was brought up that it's possible for there to be bitstreams that end up the same but decode differently (which is a very, very bad thing for lossless), and that seems to have been the thing that made the decision lean towards changing the spec rather than fixing libjxl. But I have to wonder if it's even possible for the libjxl encoder to produce such a bitstream, or if it's purely a theoretical thing
2024-06-17 03:37:30
so the spec has been changed so it's now intended behaviour?
username
CrushedAsian255 so the spec has been changed so it's now intended behaviour?
2024-06-17 03:39:06
<@794205442175402004> would have to be the one to answer, because even though it's been months and months, I don't know how the spec drafting process works, so it's possible that nothing has been done quite yet.
2024-06-17 03:40:58
also, if the libjxl encoder is incapable of producing JXLs that are undetectable as being invalid, then I'm still in favor of the bug being fixed in libjxl rather than the spec being changed
CrushedAsian255
2024-06-17 03:41:30
I guess this is what happens if you use pre-release software?
username
2024-06-17 03:47:07
with libjxl being the only real full encoder for JPEG XL (and also being the reference encoder) there is the consideration of all the pre-existing JXL files that people have made. However, this bug only appears at effort 8 and above, which isn't the default, and the type of visual data affected mainly includes pixel art not constrained by a single palette, plus pixel data from decoded JPEGs.
Tirr
2024-06-17 03:48:48
I remember that it's actually possible for libjxl to produce an image that's undetectable, and the compression density benefit from fixing libjxl was marginal
2024-06-17 03:49:57
so they decided to go change the spec
username
2024-06-17 04:00:23
the comparison of compression density was done by <@794205442175402004> on the set of images QOI uses for benchmarking https://qoiformat.org/benchmark/ and the image with the biggest difference was this https://upload.wikimedia.org/wikipedia/commons/8/88/Battle_for_Mandicor_0.0.5.png
```
432707 Battle_for_Mandicor_0.0.5.png
197284 Battle_for_Mandicor_0.0.5.png.e8.bug.jxl
196985 Battle_for_Mandicor_0.0.5.png.e8.fixed.jxl
188207 Battle_for_Mandicor_0.0.5.png.e9.bug.jxl
187598 Battle_for_Mandicor_0.0.5.png.e9.fixed.jxl
```
2024-06-17 04:10:35
I am curious as to what the results would be like on https://github.com/libjxl/libjxl/assets/2322996/6e607820-542f-443f-ae2a-5d6c4db39b5e and https://github.com/saschanaz/jxl-winthumb/files/12315745/anomalous-jxl-files.zip
_wb_
username also, if the libjxl encoder is incapable of producing JXLs that are undetectable as being invalid, then I'm still in favor of the bug being fixed in libjxl rather than the spec being changed
2024-06-17 05:33:39
No, libjxl did create bitstreams that would be incorrect according to how it was written in the spec. We have changed the wording in the spec to correspond to what libjxl has always done.
2024-06-17 05:35:44
It was quite rare to find cases where it makes a difference: you need an image where a local palette of more than 256 colors is used and at the same time lz77 matches with special distances are used, which cannot happen in libjxl default effort or lower, but it can happen at e8+.
2024-06-17 06:06:53
In any case, if it made a difference for compression, it was a very small one, and the difference could go either way. So we did not consider it worthwhile to change libjxl and potentially break existing bitstreams, and updated the wording in the spec instead.
CrushedAsian255
HUK Essentially anything that isn't ESRGAN: DAT, RGT, ATD, SPAN, etc. ESRGAN is over half a decade old
2024-06-17 08:48:19
what about RealESRGAN? Lots of upscaling frontends still use it
HUK
2024-06-17 07:43:22
Real-ESRGAN isn't an arch, it's just ESRGAN trained with an OTF degradation pipeline
2024-06-17 07:43:59
https://discord.com/channels/794206087879852103/794206170445119489/1251752202977022014
2024-06-17 07:46:15
Compact or SPAN are the best 1:1 replacements imo (trained), but RealESRGAN is not very impressive anymore, and idk why it would still be used
jonnyawsom3
2024-06-20 12:36:52
<@&807636211489177661>
VcSaJen
HUK Essentially anything that isn't ESRGAN: DAT, RGT, ATD, SPAN, etc. ESRGAN is over half a decade old
2024-06-20 02:39:02
Which one of them is good for 2D toons? So far waifu2x is the best in my experience, despite its age.
CrushedAsian255
2024-06-20 03:17:58
how exactly do patches work in JXL? are they stored as hidden frames that are decoded before the main image? or are they encoded as part of the main image frame?
2024-06-20 03:52:58
would it just be using `kReferenceOnly`?
A homosapien
VcSaJen Which one of them is good for 2D toons? So far waifu2x is the best in my experience, despite its age.
2024-06-20 04:04:37
https://openmodeldb.info/?t=cartoon
2024-06-20 04:04:58
pick the one that suits your use case the best
HUK
VcSaJen Which one of them is good for 2D toons? So far waifu2x is the best in my experience, despite its age.
2024-06-20 05:06:59
Waifu2x is definitely very outdated, and it is also more complicated than just looking under the cartoon category on OMDB (the majority of those models are outdated, very specific, or not viable); it depends on your graphics card and what speeds you are looking for (real time for videos, whichever speed for images, medium for a balance, etc.), though that discussion is prob best suited for elsewhere.
_wb_
CrushedAsian255 would it just be using `kReferenceOnly`?
2024-06-20 05:07:03
Yes, kReferenceOnly is meant for patch frames, as it hides the frame. Though in an animation or multi-layer image where there is similarity between frames/layers, patches can also refer to previous visible frames.
CrushedAsian255
2024-06-20 05:07:32
Is there any way to view these hidden frames?
2024-06-20 05:08:10
Hmm, someone should make some kind of GUI that shows the inner workings of the codec and lets you look behind the scenes of images
monad
CrushedAsian255 Is there any way to view these hidden frames?
2024-06-20 08:15:54
it's straightforward to output on encode, but idk about decode
CrushedAsian255
monad it's straightforward to output on encode, but idk about decode
2024-06-20 09:06:07
What option?
_wb_
2024-06-20 09:32:05
Libjxl does not expose a way to see hidden frames, since they are considered internal stuff (unlike layers of a multi-layer image, which you can get from libjxl by disabling coalescing, and which are conceptually considered part of the decoded data)
2024-06-20 09:32:36
But of course you could make a fork of libjxl that does show all the gory internals
monad
CrushedAsian255 What option?
2024-06-20 01:26:56
`enc_patch_dictionary.cc` -> `FindBestPatchDictionary`
```
if (WantDebugOutput(cparams)) {
  if (is_xyb) {
    JXL_RETURN_IF_ERROR(DumpXybImage(cparams, "reference", reference_frame));
  } else {
    JXL_RETURN_IF_ERROR(DumpImage(cparams, "reference", reference_frame));
  }
}
```
then run `benchmark_xl` with `--debug_image_dir`
2024-06-20 01:42:17
https://discord.com/channels/794206087879852103/803645746661425173/892560805613146172 d1 170514B, d0 47355B, tiles d0 12400B
_wb_
2024-06-20 03:19:28
ah cool I forgot about that
Demiurge
2024-06-20 07:06:46
https://uprootlabs.github.io/poly-flif/polyflif-sample.html
2024-06-20 07:06:57
Speaking of forgetting about things, this is really cool
cutename
2024-06-20 07:40:04
oooh I gotta try JXL performance on pixel art
Demiurge
2024-06-20 07:41:01
I wonder if flif is better at progressive lossless than jxl...
_wb_
2024-06-20 08:07:04
FLIF is better at avoiding compression loss while doing progressive, though it has the same problem where for non-photo it will compress better in non-progressive mode. JXL is better in the quality of progressive steps (you get downscales with pixel averaging, not nearest neighbor like it is in FLIF).
cutename
2024-06-20 08:11:31
is pixel averaging specified, or in other words, could an encoder choose a good scaling algorithm?
jonnyawsom3
2024-06-20 08:40:47
You *can* provide custom upsampling weights for resampled images (NN pixel art scaling), but I don't think you can for progressive scans
TheBigBadBoy - π™Έπš›
cutename is pixel averaging specified, or in other words, could an encoder choose a good scaling algorithm?
2024-06-20 08:41:48
for pixel art, you can hint JXL to use nearest neighbor to further improve lossless compression with `--resampling=1|2|4|8 --already_downsampled --upsampling_mode=0` https://discord.com/channels/794206087879852103/794206087879852106/1239238259784024117
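Concretely, assuming pixel art that was nearest-neighbour upscaled 4x and has been reduced back to its true 1x size (file names are placeholders):
```
cjxl art_1x.png art.jxl -d 0 -e 9 --resampling=4 --already_downsampled --upsampling_mode=0
# a decoder then upsamples 4x with nearest-neighbour weights, reproducing the scaled art
```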
_wb_
cutename is pixel averaging specified, or in other words, could an encoder choose a good scaling algorithm?
2024-06-20 08:43:44
Yes, it is specified. No, an encoder has no choice, it has to be (A+B)/2 or else the reconstruction will not be lossless.
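A toy check of why the rounded-down average plus the difference is enough for lossless reconstruction (this ignores the tendency prediction the real Squeeze transform applies to the residual; the parity trick is a derivation for illustration, not libjxl code):
```
A=5; B=8                      # two neighbouring pixel values
avg=$(( (A + B) >> 1 ))       # rounded-down average
d=$(( A - B ))                # residual
p=$(( d & 1 ))                # sum and difference share parity, recovering the dropped bit
A2=$(( avg + (d + p) / 2 ))   # 5
B2=$(( A2 - d ))              # 8
echo "$A2 $B2"
```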
cutename
2024-06-20 08:44:15
thanks!
monad
cutename oooh I gotta try JXL performance on pixel art
2024-06-20 09:24:01
lossless webp is way more practical. jxl can get denser at highest effort, but it's not comparable in compute efficiency
Orum
2024-06-20 09:30:34
usually lossless webp will be smaller too (than *any* effort level of <:JXL:805850130203934781>)
monad
2024-06-20 09:52:41
if you venture to use settings beyond -e you can get denser than webp
Quackdoc
2024-06-20 09:53:15
time to brute force some settings
monad
2024-06-20 09:53:29
I already did it
2024-06-20 09:54:57
here's a small sample <https://loathra.com/jxl/bench/20240506.html#single-extra-22>
cutename
2024-06-20 09:56:07
what's -strip do?
CrushedAsian255
monad if you venture to use settings beyond -e you can get denser than webp
2024-06-20 09:56:22
Like how WebP has tuning?
monad
2024-06-20 09:58:06
Here's using the upscaling trick where possible, plus best options (jixel)
2024-06-20 09:59:16
(but that script assumes scaled pixel art, which is probably most distributed stuff)
Quackdoc
2024-06-20 09:59:17
even making the script would be too much effort for me lol
monad
cutename what's -strip do?
2024-06-20 10:00:00
that just signifies no metadata
cutename
2024-06-20 10:00:06
ah
monad
2024-06-20 10:11:54
here's a group of settings covering all minimum outputs found across 2207 commands
Demiurge
monad lossless webp is way more practical. jxl can get denser at highest effort, but it's not comparable in compute efficiency
2024-06-20 10:33:50
This is because of a shortcoming in the libjxl encoder, not the format itself, from what I think I heard
2024-06-20 10:35:09
Hopefully one day libjxl improves, or better yet a new encoder steals the spotlight...
monad
2024-06-20 10:35:53
any future encoder does not help much in the present
Demiurge
2024-06-20 10:36:01
Like how jxl-oxide seems to be slowly stealing the spotlight when it comes to decode
monad any future encoder does not help much in the present
2024-06-20 10:37:11
Yeah, for now, things are the way they are. :(
Quackdoc
Demiurge Like how jxl-oxide seems to be slowly stealing the spotlight when it comes to decode
2024-06-20 10:53:34
perf-wise it's a ways off; progressiveness however, it will progressively decode files that libjxl just gives up on lol
_wb_
2024-06-21 09:14:14
At some point we should spend some time on improving lossless compression for images with few colors: not just pixel art but also e.g. bit depth < 8 images (e.g. RGB565), 4-bit grayscale, 1-bit black&white, etc. We have been focusing mostly on getting things right for 8 to 16 bit per channel full color images, but those do behave quite differently from low-color images. I have some ideas on things to try (mimicking pixel packing with Squeeze, better palette heuristics, finding ways to better combine lz77 with MA trees, more generic patch detection with better estimation of patch signaling cost vs benefit) but these are all rather nontrivial things to do, and we're mostly talking about images that already compress extremely well (compared to 'normal' images), so the potential gains in absolute terms are not as big; I mean, a 1% improvement on a 10 MB file has more impact than a 20% improvement on a 10 kB file.
RaveSteel
2024-06-21 07:05:43
Does jxl have support for checksums, like for example DNG and FLAC?
jonnyawsom3
2024-06-21 07:38:22
Not natively, but you could generate one from the image data and then store it as metadata
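A minimal sketch of doing that by hand, hashing the decoded pixels rather than the container so that later metadata edits don't invalidate the checksum (file names are placeholders; note that lossy JXL decoders are allowed small conformance tolerances, so this is most dependable for lossless files or a pinned decoder version):
```
djxl photo.jxl decoded.ppm
sha256sum decoded.ppm          # record this hash alongside the file
```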
spider-mario
2024-06-21 07:42:27
or create accompanying .par2 files
RaveSteel
2024-06-21 07:43:49
Would be nice to have as it would facilitate using jxl as an archival format
spider-mario
2024-06-21 07:49:46
using par2, you can not only detect that something is wrong, but possibly even fix it (here, I manually corrupted three consecutive bytes in a hex editor)
```console
$ par2 repair _W4A2518.jxl
Loading "_W4A2518.jxl.par2".
Loaded 4 new packets
Loading "_W4A2518.jxl.vol0+1.par2".
Loaded 1 new packets including 1 recovery blocks
Loading "_W4A2518.jxl.par2".
No new packets found
There are 1 recoverable files and 0 other files.
The block size used was 31296 bytes.
There are a total of 2000 data blocks.
The total size of the data files is 62591652 bytes.

Verifying source files:
Opening: "_W4A2518.jxl"
Target: "_W4A2518.jxl" - damaged. Found 1999 of 2000 data blocks.

Scanning extra files:
Repair is required.
1 file(s) exist but are damaged.
You have 1999 out of 2000 data blocks available.
You have 1 recovery blocks available.
Repair is possible.
1 recovery blocks will be used to repair.

Computing Reed Solomon matrix.
Constructing: done.
Solving: done.
Wrote 62591652 bytes to disk

Verifying repaired files:
Opening: "_W4A2518.jxl"
Target: "_W4A2518.jxl" - found.

Repair complete.
```
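The matching creation step, for reference (the redundancy level is a judgment call; `-r5` asks for 5% recovery data):
```console
$ par2 create -r5 _W4A2518.jxl
```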
CrushedAsian255
2024-06-22 12:43:54
Ooh, par2 🥰
Crite Spranberry
2024-06-22 03:01:08
IA-32 JXL
Tirr
2024-06-22 10:12:23
I'm working on docs on decoding jxl images <#1254012901816274956>. The main motivation is that afaik there are no open docs describing how to actually decode jxl, and the (paywalled) ISO spec is very difficult to read and understand
Posi+ive
2024-06-22 05:32:54
why do jpegview and irfanview show my jxl file like this?
2024-06-22 05:33:10
imageglass opens it just fine but it's slow
jonnyawsom3
2024-06-22 05:38:54
Color management?
username
2024-06-22 05:40:05
could be? iirc jpegview has color management off by default
Posi+ive
2024-06-22 05:48:49
I see
2024-06-22 05:48:59
I'll look around in the settings
Quackdoc
Tirr I'm working on the docs on decoding jxl images <#1254012901816274956> main motivation is that afaik there's no open docs describing how to actually decode jxl, and the (paywalled) ISO spec is very difficult to read and understand
2024-06-22 05:49:39
Maybe this will get into images :D
Posi+ive
2024-06-22 05:52:35
alright it's fixed
2024-06-22 05:52:59
changing this option did it
Oleksii Matiash
2024-06-22 06:32:33
IrfanView does not see the color profile inside jxl 😦
Naksu
2024-06-24 12:21:53
Hi, is this normal? `-e 2 --faster_decoding 4 -d 1`
2024-06-24 12:22:04
** ** `-e 6 --faster_decoding 4 -d 1`
2024-06-24 12:23:09
** ** Original file:
Orum
2024-06-24 12:28:08
no
2024-06-24 12:28:51
well, actually at `d 1` I'm not sure <:Thonk:805904896879493180>
Naksu
2024-06-24 12:37:16
`-e 2 --faster_decoding 4 -d 0`
2024-06-24 12:37:32
** ** `-e 6 --faster_decoding 4 -d 0`
2024-06-24 12:37:41
** ** Weird
_wb_
2024-06-24 12:38:30
weird indeed, could you share the image?
2024-06-24 12:38:50
what happens without faster_decoding?
JendaLinda
2024-06-24 12:39:30
Depending on the contents of the image, `-d 0` may be smaller than `-d 1`
Naksu
_wb_ weird indeed, could you share the image?
2024-06-24 12:40:14
Haha, no unfortunately. But I will try without faster decoding.
_wb_
2024-06-24 12:41:51
d0 being smaller than d1 can happen for 'clean' images (no photo, nothing 'noisy')
Tirr
2024-06-24 12:42:05
I remember libjxl did something weird when the image is dithered
Naksu
2024-06-24 12:42:13
`-e 2 -d 1`
2024-06-24 12:42:24
** ** `-e 6 -d 1`
2024-06-24 12:42:55
** ** I think we've found the problem.
_wb_
2024-06-24 12:43:15
wow I am really curious about this image now, lol
2024-06-24 12:43:57
so e6 d1 is a LOT smaller than e2 d1, but when adding faster_decoding 4 it becomes a LOT larger
username
Naksu Haha, no unfortunately. But I will try without faster decoding.
2024-06-24 12:45:11
is the image not legal to publicly share, or from someone who prefers it not be shared, or is it just nsfw?
Naksu
2024-06-24 12:45:43
The 3rd option.
_wb_
2024-06-24 12:46:13
If you can try cropping the image to a smaller region that would be OK to share and that still exhibits the weird cjxl behavior, that would be nice
2024-06-24 12:47:31
if it's just nsfw then you can share it as far as I'm concerned, just remove the discord preview or put a spoiler thing around it with a warning so we're not blasting nsfw images at people
username
2024-06-24 12:48:39
or putting it in something like a zip file with a warning would work as well
Naksu
2024-06-24 12:51:29
I'll see what I can do.
2024-06-24 01:56:24
There are only the legs, but it represents the overall composition of the image well. It's a drawing combined with an assembly of textures of different qualities, which results in a lot of banding and aliasing. ||https://litter.catbox.moe/w4e2ua.png||
2024-06-24 01:56:29
Original dimensions: 6260 x 19252 (a kind of dakimakura).
jonnyawsom3
2024-06-24 01:59:16
I'm probably wrong, but the highlights almost look like clipped HDR to me
Naksu
2024-06-24 02:02:50
The PNG is 8-bit, but I don't know about the original PSD.
_wb_
2024-06-24 02:25:24
```
w4e2ua.png
Encoding                    kPixels   Bytes       BPP    E MP/s   D MP/s   Max norm    SSIMULACRA2  PSNR   pnorm       BPP*pnorm       QABPP  Bugs
--------------------------------------------------------------------------------------------------------------------------------------------------------
jxl:2:d1                      18830  583436  0.2478740  142.099  578.221  1.81921705  83.32991363  52.29  0.54721688  0.135640859273  0.451     0
jxl:6:d1                      18830  579887  0.2463662   20.822  488.140  1.40984583  85.53871848  54.98  0.40049452  0.098668329307  0.347     0
jxl:2:d1:faster_decoding=4    18830  619533  0.2632099  126.808  722.429  1.82612011  83.15371525  52.12  0.55527767  0.146154595606  0.481     0
jxl:6:d1:faster_decoding=4    18830  713773  0.3032480   45.504  730.564  1.72649039  84.61082300  54.41  0.41836603  0.126868659417  0.524     0
Aggregate:                    18830  621928  0.2642276   64.280  621.259  1.68630999  84.15829259  53.44  0.47501319  0.125511576137  0.446     0
```
2024-06-24 02:28:35
these numbers are more or less normal: higher effort does produce better quality per byte, faster_decoding does come at some cost in both quality and size but mostly in size...
jonnyawsom3
2024-06-24 02:30:03
Guess we need the full image after all
spider-mario
2024-06-24 02:32:44
for science, of course
jonnyawsom3
2024-06-24 02:34:17
In the name of da Vinci, we require the midriff
Naksu
2024-06-24 03:42:18
This is the heaviest PNG I could find on a forum, and even though there's nothing shocking about it, I find it really weird to have to share this. But if it's for science... ||<https://litter.catbox.moe/pfnoal.png>|| (NSFW)
TheBigBadBoy - π™Έπš›
Naksu This is the heaviest PNG I could find on a forum, and even though there's nothing shocking about it, I find it really weird to have to share this. But if it's for science... ||<https://litter.catbox.moe/pfnoal.png>|| (NSFW)
2024-06-24 03:57:18
a little `ect -9 -strip`: > Processed 1 file in 3m55s and saved 46,59MiB (59,679%)!
Naksu
2024-06-24 04:01:05
But where can I actually see all the possible commands? And what does the one you recommend correspond to?
jonnyawsom3
In the name of da Vinci, we require the midriff
2024-06-24 04:27:46
Huh, so it's actually the midriff that's censored with a blurred lower resolution
TheBigBadBoy - π™Έπš›
Naksu But where can I actually see all the possible commands? And what does the one you recommend correspond to?
2024-06-24 04:32:45
I don't understand. `ect` is something completely unrelated to JXL (it is a file optimizer, meaning that it losslessly recompresses stuff without changing the file format; in this case only PNG). To see all possible commands of `cjxl`, use `cjxl -v -v -v -v -h`...
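For the record, the recommended invocation from above was simply (`image.png` being a placeholder):
```
ect -9 -strip image.png   # max effort (-9), drop metadata (-strip); recompresses the PNG in place
```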
Orum
2024-06-24 04:33:58
both images are already 404 not found 💀
2024-06-24 04:34:15
guess you're not allowed to upload them to catbox 😆
TheBigBadBoy - π™Έπš›
2024-06-24 04:34:50
no worries, I have it; just need to wait for `cjxl -e 10 -d 0` to finish
2024-06-24 04:35:01
in the hope of bringing it below 25 MB to send it here
2024-06-24 04:35:44
maaaaaan it is using 16GB of RAM <:NotLikeThis:805132742819053610>
jonnyawsom3
TheBigBadBoy - π™Έπš› a little `ect -9 -strip`: > Processed 1 file in 3m55s and saved 46,59MiB (59,679%)!
2024-06-24 04:35:57
I shouldn't have tried on my weak cpu, even just on default -3 it's still running Dx I should've done e1 d0
TheBigBadBoy - π™Έπš› maaaaaan it is using 16GB of RAM <:NotLikeThis:805132742819053610>
2024-06-24 04:36:05
Chunked encoding my beloved
Orum
TheBigBadBoy - π™Έπš› maaaaaan it is using 16GB of RAM <:NotLikeThis:805132742819053610>
2024-06-24 04:36:28
...that's all?
2024-06-24 04:36:34
anyway try streaming in/out
TheBigBadBoy - π™Έπš›
Orum ...that's all?
2024-06-24 04:36:49
wdym
jonnyawsom3
2024-06-24 04:36:57
Nah, just e9. 'Actual' streaming makes some heavy tradeoffs
Orum
2024-06-24 04:37:21
I'm just saying I've had a lot more RAM used by cjxl on some images
TheBigBadBoy - π™Έπš›
Orum anyway try streaming in/out
2024-06-24 04:37:37
well I have plenty of free RAM, I'll just wait <:KekDog:805390049033191445> it's just uncommon for me to see a program eating that much
Orum
2024-06-24 04:37:59
wait til you see what ssimulacra2-rs uses...
jonnyawsom3
2024-06-24 04:38:04
I think I hit 13 GB once... Having 16 GB total.....
2024-06-24 04:38:21
Boy oh boy was my SSD having *fun*
Naksu
Orum both images are already 404 not found πŸ’€
2024-06-24 04:38:27
It's normal, I don't want them to be archived.
TheBigBadBoy - π™Έπš›
2024-06-24 04:38:47
it reminds me of that time cjxl took 26 gigs of RAM to compress a 512,000x600 png file <:kekw:808717074305122316> this is that file > in case you want the 512,000x600 png: https://cdn.discordapp.com/attachments/874012579880660993/1047220909288722492/7gk51u.png :av1_pepeShy:
Naksu
TheBigBadBoy - π™Έπš› I don't understand `ect` is something completely unrelated to JXL (it is a file optimizer, meaning that it losslessly recompress stuff without changing the file format - in this case only PNG) to see all possible commands of `cjxl`, use `cjxl -v -v -v -v -h`...
2024-06-24 04:40:37
Ah, I thought you were talking about JPEG XL compression. But thanks for the cjxl command.
TheBigBadBoy - π™Έπš› no worry I have it, just need to wait for `cjxl -e 10 -d 0` to finish
2024-06-24 04:41:47
TheBigBadBoy - π™Έπš›
2024-06-24 04:42:25
oh, I might have gone for slower settings than that <:CatSmile:805382488293244929>
jonnyawsom3
TheBigBadBoy - π™Έπš› a little `ect -9 -strip`: > Processed 1 file in 3m55s and saved 46,59MiB (59,679%)!
2024-06-24 04:43:35
```
wintime -- ect -p -r -k pfnoal.png
Processed 1 file
Saved 46.21MB out of 78.06MB (59.2009%)
PageFaultCount: 220231218
PeakWorkingSetSize: 1.203 GiB
QuotaPeakPagedPoolUsage: 25.39 KiB
QuotaPeakNonPagedPoolUsage: 9.289 KiB
PeakPagefileUsage: 1.203 GiB
Creation time 2024/06/24 17:30:08.744
Exit time 2024/06/24 17:38:00.086
Wall time: 0 days, 00:07:51.342 (471.34 seconds)
User time: 0 days, 00:03:15.156 (195.16 seconds)
Kernel time: 0 days, 00:04:35.953 (275.95 seconds)
```
Took almost twice as long on -3 instead of -9, either my CPU is really showing its age or you've got a really damn good one haha
2024-06-24 04:43:43
But anyway, back to JXL
TheBigBadBoy - π™Έπš›
```wintime -- ect -p -r -k pfnoal.png Processed 1 file Saved 46.21MB out of 78.06MB (59.2009%) PageFaultCount: 220231218 PeakWorkingSetSize: 1.203 GiB QuotaPeakPagedPoolUsage: 25.39 KiB QuotaPeakNonPagedPoolUsage: 9.289 KiB PeakPagefileUsage: 1.203 GiB Creation time 2024/06/24 17:30:08.744 Exit time 2024/06/24 17:38:00.086 Wall time: 0 days, 00:07:51.342 (471.34 seconds) User time: 0 days, 00:03:15.156 (195.16 seconds) Kernel time: 0 days, 00:04:35.953 (275.95 seconds)``` Took almost twice as long on -3 instead of -9, either my CPU is really showing it's age or you've got a really damn good one haha
2024-06-24 04:44:21
i7-10870H so not great not bad ig
Naksu
2024-06-24 04:44:36
It's because I had already done it yesterday.
TheBigBadBoy - π™Έπš›
2024-06-24 04:45:07
oh that explains it <:KekDog:805390049033191445>
2024-06-24 04:45:23
also, this file is weird https://cdn.discordapp.com/attachments/733445413478334504/1254828649178202165/imagen.png
jonnyawsom3
2024-06-24 04:45:39
Like I said, a low resolution blur for censoring
Naksu
2024-06-24 04:45:40
👌
TheBigBadBoy - π™Έπš›
2024-06-24 04:45:47
oh
Naksu
2024-06-24 04:46:31
Nah, I think the guy just messed up his export. The other versions of the image don't have that.
TheBigBadBoy - π™Έπš›
2024-06-24 04:47:27
that's what I thought bc there is censoring in addition to that low-res part
Naksu
2024-06-24 04:47:39
Yep
TheBigBadBoy - π™Έπš›
2024-06-24 04:48:40
when will a JXL progress bar be a thing <:CatSmile:805382488293244929>
jonnyawsom3
2024-06-24 04:49:07
https://discord.com/channels/794206087879852103/804324493420920833/1252594723877556235
Naksu
TheBigBadBoy - π™Έπš› also, this file is weird https://cdn.discordapp.com/attachments/733445413478334504/1254828649178202165/imagen.png
2024-06-24 04:49:38
TheBigBadBoy - π™Έπš›
2024-06-24 04:50:22
so do we have the wrong file?
2024-06-24 04:50:55
w.r.t. the -e6 --faster_decoding being 4.5x heavier than -e2?
Naksu
2024-06-24 04:51:14
You have the softest version.
2024-06-24 04:51:21
<:PepeOK:805388754545934396>
TheBigBadBoy - π™Έπš›
2024-06-24 04:51:54
I just want to know if it's the one with `-e6 --faster_decoding being 4.5x heavier than -e2`
2024-06-24 04:51:59
<:KekDog:805390049033191445>
jonnyawsom3
2024-06-24 04:52:11
If only there were a way to run the command and find out
2024-06-24 04:52:18
Oh wait, e10, I forgot already
2024-06-24 04:53:06
One day, this file will embed.... ECT Optimized
TheBigBadBoy - π™Έπš›
2024-06-24 04:53:33
shit you made it before me <:KekDog:805390049033191445>
jonnyawsom3
2024-06-24 04:53:56
Just an e9 lossless, but now people can help ~~burn their RAM~~ test on it
Naksu
2024-06-24 04:54:47
Why is it lighter than mine?
Orum
2024-06-24 04:55:25
so it looks like this was upscaled twice, first with nearest neighbor, and then again with bicubic?
Fox Wizard
2024-06-24 04:57:42
More porn in the jxl server <:KekDog:884736660376535040>
TheBigBadBoy - π™Έπš›
2024-06-24 04:58:35
If I had known it would take that long, I would have used the cjxl nightly builds <:KekDog:805390049033191445>
jonnyawsom3
Naksu Why is it lighter than mine?
2024-06-24 04:59:08
Could be the viewer not using JXL color management
Fox Wizard
2024-06-24 04:59:08
-e 11 when?
Crite Spranberry
2024-06-24 05:01:59
Displaying the image in r3dfox makes it chug holy shit
2024-06-24 05:02:36
This is a weird ass export or censoring job
2024-06-24 05:03:22
~~would be noice if not oversized~~
Crite Spranberry Displaying the image in r3dfox makes it chug holy shit
2024-06-24 05:03:48
It took like a min to display, and a yt video in the background started having audio cut out
jonnyawsom3
2024-06-24 05:05:20
I'm on `4451ce9` and I'm not getting the weird results, seems like it was already fixed
2024-06-24 05:06:29
e2 4.32 MB
e6 3.99 MB
e2 Faster Decoding 4.61 MB
e6 Faster Decoding 4.59 MB
Orum
2024-06-24 05:08:32
anyway this image should never be encoded, for like 5 different reasons...
2024-06-24 05:09:00
only 1 of which has anything to do with JXL
TheBigBadBoy - π™Έπš›
Orum only 1 of which has anything to do with JXL
2024-06-24 05:11:08
RAM usage ? <:KekDog:805390049033191445>
Naksu
2024-06-24 05:11:33
I didn't intend to encode it. The initial idea was to find the heaviest image possible and see how much it could be reduced.
Orum
2024-06-24 05:11:43
well more the weird BPP results
2024-06-24 05:13:03
but its weird upscaling is reason enough to never use this image, let alone the nauseating subject matter <:YEP:808828808127971399>
Naksu
2024-06-24 05:16:28
But is there indeed a problem with faster decoding in the end?
jonnyawsom3
2024-06-24 05:16:37
<@1219592742011932732> what version of cjxl are you using?
Naksu
2024-06-24 05:17:11
`JPEG XL encoder v0.10.2 d947eb5`
username
Could be the viewer not using JXL color management
2024-06-24 05:17:39
I think they might mean "lighter" as in file size not visuals