JPEG XL

jxl

Anything JPEG XL related

spider-mario
2024-07-18 01:23:11
you can view the default being a power of two minus one as kind of a coincidence
2024-07-18 01:23:20
a stop above is just twice that, as it normally would be
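(As a quick worked example of that rule: one photographic stop is a doubling of light, so a stop above the 255-nit default is 2 × 255 = 510 nits, and two stops is 4 × 255 = 1020 nits.)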
qdwang
2024-07-18 01:24:43
Thanks, you are right. I'll just follow the *2 rule
_wb_
2024-07-18 01:36:32
Nominally, sRGB is 80 nits. But it's rare to see such dim displays nowadays. The current consensus for defining SDR max nits seems to be 203 nits. But typical SDR displays will be even brighter. We used the default 255 because at some point, we were using a linear RGB representation where the numbers went up to the intensity target (so they were basically in nits), and then converting floats to 8-bit values would be just rounding to int. We no longer do that, but the number 255 stuck.
jonnyawsom3
2024-07-18 01:47:11
I've always seen 300 nits as the SDR max, but that could be because I've only recently seen it in passing now that displays are brighter in general
qdwang
_wb_ Nominally, sRGB is 80 nits. But it's rare to see such dim displays nowadays. The current consensus for defining SDR max nits seems to be 203 nits. But typical SDR displays will be even brighter. We used the default 255 because at some point, we were using a linear RGB representation where the numbers went up to the intensity target (so they were basically in nits), and then converting floats to 8-bit values would be just rounding to int. We no longer do that, but the number 255 stuck.
2024-07-18 01:51:20
Thanks for the explanation.
Traneptora
2024-07-22 05:27:07
SDR max doesn't actually have a specific nit value, since it's just relative to whatever the SDR monitor outputs
2024-07-22 05:27:15
most monitors are around 200-300 nits give or take
2024-07-22 05:27:39
but theoretically you can have any SDR white point and any SDR black point in nits
2024-07-22 05:27:49
though most software expects SDR black to be 1/1000th of the SDR white
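(To put numbers on that convention, with an illustrative white level not taken from the chat: a display set to a 200-nit SDR white would nominally put SDR black at 200 / 1000 = 0.2 nits, i.e. a 1000:1 contrast ratio.)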
_wb_
2024-07-22 05:37:16
Yes, in SDR everything is relative. In a cinema, the max value can be only 48 nits (that is what DCI-P3 uses, for example), since you're viewing in a very dark room. The sRGB reference was 80 nits but that was assuming an indoors office environment. In practice many current SDR displays (e.g. phones and laptops) are designed for more varying ambient light conditions and can often do 300 nits or even more. The 203 nits is a recent convention of where to nominally put SDR white when mixing HDR and SDR content.
Quackdoc
2024-07-22 06:38:50
PQ can't come soon enough, I don't want relative transfers any more, so many headaches
2024-07-22 06:39:18
not that most HDR displays actually do a great job at tracking PQ, but that's a different story
_wb_
2024-07-22 06:59:06
it's not really desirable either to interpret PQ exactly. You want your display to have manually/automatically adjustable brightness. It makes sense to make it darker when used in a dark room and brighter when used outdoors.
2024-07-22 07:00:38
absolute luminosity only really makes sense if the viewing conditions are also absolute
2024-07-22 07:01:41
PQ was designed by Dolby with cinema theaters in mind. HLG was designed by BBC with consumer TVs in mind.
2024-07-22 07:04:31
In practice, it seems like most stuff that supports PQ (like HDR TVs or Apple Macbook laptops, say) does interpret PQ in an HLG kind of way: everything is relative to SDR white, which has a brightness that depends on the display brightness setting, rather than having everything in absolute nits as it is theoretically supposed to be in PQ.
qdwang
2024-07-22 07:38:48
The improvement from v0.10.2 to main is huge! I'm wondering when the next version will get released?
Orum
2024-07-22 08:27:14
improvement in what?
qdwang
Orum improvement in what?
2024-07-22 08:38:42
quality and file size changes
Orum
2024-07-22 09:15:40
For lossy only? Or do you mean lossless is smaller and quality for a given file size is better in lossy?
SwollowChewingGum
Traneptora though most software expects SDR black to be 1/1000th of the SDR white
2024-07-23 02:58:06
what about on displays that can display "true" black (ie. oled)?
Traneptora
SwollowChewingGum what about on displays that can display "true" black (ie. oled)?
2024-07-23 03:41:47
In SDR mode they may not use true black, or they might
2024-07-23 03:42:22
basically SDR is a range of 0% to 100%. the black point is 0% and the white point is 100%
2024-07-23 03:42:39
if the black point is actually 0 nits that's called infinite contrast
SwollowChewingGum
2024-07-23 03:43:32
well *technically* 0 nits is impossible due to quantum physics 🤓☝️
Traneptora
2024-07-23 03:45:24
sure, but you asked about true black
2024-07-23 03:45:30
which would be 0 nits
2024-07-23 03:45:57
nits are visible light btw, so infrared radiation does not have any nit value
qdwang
Orum For lossy only? Or do you mean lossless is smaller and quality for a given file size is better in lossy?
2024-07-23 04:51:53
I havenโ€™t tested lossless yet.
lonjil
2024-07-23 07:01:14
OLED can't actually achieve 0 nits since light can reflect off the screen
TheBigBadBoy - 𝙸𝚛
lonjil OLED can't actually achieve 0 nits since light can reflect off the screen
2024-07-23 07:11:05
https://youtu.be/p6q54q2iam8 [⠀](https://cdn.discordapp.com/emojis/1059276598660059136.webp?size=48&quality=lossless&name=av1_trollhq)
lonjil
2024-07-23 07:11:43
But light from the screen would bounce off your face and then back to the screen !
SwollowChewingGum
2024-07-23 07:34:50
Mask?
w
2024-07-23 07:38:40
then you cant see
TheBigBadBoy - 𝙸𝚛
2024-07-23 07:49:14
imagine just seeing flying white eyes in the dark void <:KekDog:805390049033191445>
Fox Wizard
TheBigBadBoy - 𝙸𝚛 imagine just seeing flying white eyes in the dark void <:KekDog:805390049033191445>
2024-07-23 08:08:05
https://tenor.com/view/laugh-black-man-dark-darkness-scary-gif-19605615
frep
2024-07-23 09:44:48
I encoded a 2096x2088 PNG to JPEG XL with effort 9 and the bottom right corner has a small 48x40 "patch" where colors are really distorted
2024-07-23 09:46:32
i got this error on the cover art for femtanyl's "CHASER" which is slightly graphic, but painting out the rest of the image still produces this error after JXL encoding->decoding
2024-07-23 09:49:12
So apparently it only happens >= effort 5
2024-07-23 11:57:16
This issue was demonstrated with 0.10.2. This does not occur on 0.9.3
2024-07-23 12:00:01
0.10.0 crashes(?!) during encode
2024-07-23 12:02:29
0.10.1 as well
2024-07-23 12:03:34
0.10.3, like 0.10.2, doesn't crash but has the same bug
_wb_
2024-07-23 12:22:55
could you open a github issue about this? The DC group size is 2048x2048 so those image dimensions would have one full group and then three very small ones, the one at the bottom right only 48x40. It could be that there's some kind of edge case bug in the encoder heuristics that makes it make bad decisions in this case.
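(The arithmetic, as a quick check: 2096 = 2048 + 48 and 2088 = 2048 + 40, so the image splits into DC groups of 2048×2048, 48×2048, 2048×40, and the problematic 48×40 corner.)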
TheBigBadBoy - 𝙸𝚛
_wb_ could you open a github issue about this? The DC group size is 2048x2048 so those image dimensions would have one full group and then three very small ones, the one at the bottom right only 48x40. It could be that there's some kind of edge case bug in the encoder heuristics that makes it make bad decisions in this case.
2024-07-23 12:24:32
he already did https://github.com/libjxl/libjxl/issues/3717
_wb_
2024-07-23 12:24:49
yeah, just saw. Looking
SwollowChewingGum
2024-07-23 12:32:04
Can someone check if bug exists in master?
qdwang
2024-07-23 12:45:47
Is this a bug? I have a 14-bit grayscale PGM `src.pgm`, and I converted it to lossless JXL using `cjxl -d 0 src.pgm target.jxl`. The problem is that the brightness of the two is slightly different. Using `identify -verbose`, I get these `Channel statistics`:
**src.pgm**
```
Channel statistics:
  Pixels: 24337152
  Gray:
    min: 0 (0)
    max: 15871 (0.968748)
    mean: 642.471 (0.0392157)
    median: 229 (0.0139779)
    standard deviation: 1242.43 (0.0758363)
    kurtosis: 15.1143
    skewness: 3.73396
    entropy: 0.759787
```
**target.jxl**
```
Channel statistics:
  Pixels: 24337152
  Gray:
    min: 0 (0)
    max: 15871 (0.96875)
    mean: 642.457 (0.0392148)
    median: 228.99 (0.0139773)
    standard deviation: 1242.42 (0.0758362)
    kurtosis: 15.1143
    skewness: 3.73397
    entropy: 0.759787
```
So I checked another 16-bit RGB PPM. No problem at all. The `identify` results are exactly the same. Is this a code issue for grayscale images only?
2024-07-23 01:36:38
I've checked again. It seems there's no problem.
yoochan
SwollowChewingGum Can someone check if bug exists in master?
2024-07-23 01:49:31
I did, it does 😄
frep
yoochan I did, it does 😄
2024-07-23 01:53:03
thank you for testing!
yoochan I did, it does 😄
2024-07-23 01:55:26
isn't it weird how master builds still say 0.10.2?
yoochan
2024-07-23 02:59:26
indeed 😄 they could call it 0.10.3pre
Traneptora
frep I encoded a 2096x2088 PNG to JPEG XL with effort 9 and the bottom right corner has a small 48x40 "patch" where colors are really distorted
2024-07-23 03:19:04
lossless? if so that's a bug
qdwang Is this a bug? I have a 14-bit grayscale PGM `src.pgm`, and I converted it to lossless JXL using `cjxl -d 0 src.pgm target.jxl`. The problem is that the brightness of the two is slightly different. …
2024-07-23 03:20:06
can you post src.pgm?
2024-07-23 03:21:09
though if I had to guess, the issue is one of 14 -> 16 bit
2024-07-23 03:21:26
14-bit is 0 to 16383 and 16-bit is 0 to 65535, and 65535 is not divisible by 16383
2024-07-23 03:21:35
so there may be some sort of issue with rounding
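(For illustration, with formulas assumed rather than taken from either library: two common ways to stretch a 14-bit sample x, range 0–16383, into 16 bits are multiply-and-round, x16 = round(x × 65535 / 16383), and bit replication, x16 = (x << 2) | (x >> 12). Both map 0 to 0 and 16383 to 65535, but they can disagree by a code value in between, which is exactly the kind of rounding mismatch being hypothesized here.)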
2024-07-23 03:22:52
<@184373105588699137> https://discord.com/channels/794206087879852103/848189884614705192/1265271751349108796
```
ffmpeg -i foo -c rawvideo -f framecrc -
```
2024-07-23 03:23:00
alternatively `-c rawvideo -f hash -`
2024-07-23 03:23:30
```
leo@gauss ~ :) $ ffmpeg -hide_banner -loglevel error -i george-smile4.jxl -c rawvideo -f hash -
SHA256=c9f1c023f0310760e6bd5ecfab97464a77a4eaecb3ff9102cc55a11449b7f535
```
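One way to turn those commands into a lossless roundtrip check (a sketch: file names are placeholders, and it assumes an ffmpeg built with JXL decoding):
```
# hash the decoded pixels of the source and of the re-encoded JXL;
# equal hashes mean the samples survived the roundtrip
cjxl -d 0 src.png out.jxl
a=$(ffmpeg -hide_banner -loglevel error -i src.png -c rawvideo -f hash -)
b=$(ffmpeg -hide_banner -loglevel error -i out.jxl -c rawvideo -f hash -)
[ "$a" = "$b" ] && echo "roundtrip OK" || echo "MISMATCH"
```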
frep
Traneptora lossless? if so that's a bug
2024-07-23 03:40:40
right! and i'm really concerned
2024-07-23 03:41:38
my first draft of the message started by saying it was a bug but ig it was lost somewhere lol
Traneptora though if I had to guess, the issue is one of 14 -> 16 bit
2024-07-23 03:44:02
doesn't jxl support arbitrary bit depths?
Traneptora
2024-07-23 03:45:12
internally it supports between 8 and 16 (actually up to 32) yea but the library only supports receiving or providing 8 or 16 bit buffers (or 32-bit)
frep
2024-07-23 03:45:32
hhuh
Traneptora
2024-07-23 03:45:45
there's no way to request 14-bit packed pixel data from the library, you have to request 16-bit data
2024-07-23 03:46:06
so in theory if you have a 14-bit buffer you can provide it to the library as a 16-bit buffer, and there's a flag to tell it which bits are significant
frep
2024-07-23 03:46:20
theoretically the library could be expanded to perform this however? not that anyone might implement it...
Traneptora
2024-07-23 03:46:35
well you can provide 14-bit data with the top two bits all zeroes and tell the library it's 14-bit data
2024-07-23 03:46:43
and likewise you can request the data back, with the same flag
2024-07-23 03:46:57
and it'll be between 0 and 16383, but in a 16-bit container
2024-07-23 03:46:58
if that makes sense
frep
2024-07-23 03:47:01
hmmm i get it
Traneptora
2024-07-23 03:48:02
however, in the src.pgm and target.jxl, it says "max: 15871" which leads me to believe that imagemagick is scaling the 14-bit data to 16-bit data for the src.pgm, but it's letting libjxl do the scaling for the target.jxl, and the two libraries don't scale it in exactly the same way b/c of rounding
frep
2024-07-23 03:49:05
mmm conflicting codec handling
2024-07-23 03:51:47
i have noticed there's some images in which webp beats jxl in lossless compression, maybe i'll make an issue
2024-07-23 03:52:25
tho the savings are not huuge or anything, and cwebp's highest compression setting is really slow (e11 worthy)
username
frep tho the savings are not huuge or anything, and cwebp's highest compression setting is really slow (e11 worthy)
2024-07-23 03:54:15
are you passing `-mt` to cwebp? if not you should since it can speed things up even if cwebp can only use a max of like 2 or 3 threads
frep
username are you passing `-mt` to cwebp? if not you should since it can speed things up even if cwebp can only use a max of like 2 or 3 threads
2024-07-23 03:54:56
i haven't really noticed a difference with `-z#` (lossless), but sure i'll try again
username
2024-07-23 03:55:59
there should be a noticeable difference at `-z 9` since the highest level enables the "lossless cruncher" which can be multithreaded with `-mt`
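Spelled out, that suggestion is roughly this (a sketch; see `cwebp -longhelp` for the authoritative flag list):
```
# -z 9: slowest/strongest lossless preset (enables the cruncher); -mt: use multiple threads
cwebp -mt -z 9 input.png -o output.webp
```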
frep
2024-07-23 03:56:32
interesting, i didn't know
qdwang
Traneptora can you post src.pgm?
2024-07-23 04:03:15
I've double-checked it. No problem now. It was my mistake.
Traneptora
2024-07-23 04:03:24
huh
qdwang
2024-07-23 04:10:06
https://github.com/libjxl/libjxl/pull/3563 About this PR, if I only compress image once with high quality settings, should I turn on the Gaborish manually?
Quackdoc
Traneptora <@184373105588699137> https://discord.com/channels/794206087879852103/848189884614705192/1265271751349108796 ``` ffmpeg -i foo -c rawvideo -f framecrc - ```
2024-07-23 04:41:23
I have used hash before, I just pipe to b3sum since it's marginally faster in my scripts and I have it memorized now (shaves maybe 1/8th of a second, but that matters a lot when validating 60k image sets)
Traneptora
Quackdoc I have used hash before, I just pipe to b3sum since it's marginally faster in my scripts and I have it memorized now (shaves maybe 1/8th of a second, but that matters a lot when validating 60k image sets)
2024-07-23 04:49:51
that said, if you're going to hash the output you're relying on ffmpeg's specific ppm encoder
2024-07-23 04:49:57
may be better to just use -f rawvideo
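Combined with the external hasher mentioned above, that looks something like this (a sketch, again assuming an ffmpeg that decodes JXL):
```
# raw decoded pixels straight to the hasher, independent of any image muxer's semantics
ffmpeg -hide_banner -loglevel error -i image.jxl -c rawvideo -f rawvideo - | b3sum
```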
Quackdoc
2024-07-23 04:50:23
I never really considered that ffmpeg's ppm encoder could cause issues 0.0
Traneptora
2024-07-23 04:50:36
well it's more that you're relying specifically on its semantics
2024-07-23 04:50:44
if it's changed, the hashes will change
2024-07-23 04:50:53
I prefer `-f rawvideo`, as the only time you'll get identical hashes is if the bytes are identical but the shape is different (or there's a collision by accident, but lol)
2024-07-23 04:51:22
also why b3sum? is it faster than md5sum?
2024-07-23 04:51:24
what about xxhash?
2024-07-23 04:51:40
xxh64sum sounds faster
2024-07-23 04:54:24
since you don't need cryptographic hashing
Quackdoc
2024-07-23 04:55:33
much faster than md5sum by default at least. these are the commands as-is, iirc b3sum just has way better threading
```ps
➜ Pictures hyperfine --runs 10 'md5sum windwaker-no-fast-1.jxl' 'b3sum windwaker-no-fast-1.jxl' 'xxh32sum windwaker-no-fast-1.jxl' 'xxh64sum windwaker-no-fast-1.jxl'
Benchmark 1: md5sum windwaker-no-fast-1.jxl
  Time (mean ± σ):     796.7 ms ±   9.5 ms    [User: 722.9 ms, System: 67.0 ms]
  Range (min … max):   785.3 ms … 817.8 ms    10 runs

Benchmark 2: b3sum windwaker-no-fast-1.jxl
  Time (mean ± σ):      72.6 ms ±   1.0 ms    [User: 464.9 ms, System: 79.2 ms]
  Range (min … max):    71.2 ms …  74.9 ms    10 runs

Benchmark 3: xxh32sum windwaker-no-fast-1.jxl
  Time (mean ± σ):     147.8 ms ±   6.6 ms    [User: 85.9 ms, System: 61.1 ms]
  Range (min … max):   141.8 ms … 163.1 ms    10 runs

Benchmark 4: xxh64sum windwaker-no-fast-1.jxl
  Time (mean ± σ):     102.2 ms ±   3.5 ms    [User: 42.1 ms, System: 58.2 ms]
  Range (min … max):    96.4 ms … 107.0 ms    10 runs

Summary
  b3sum windwaker-no-fast-1.jxl ran
    1.41 ± 0.05 times faster than xxh64sum windwaker-no-fast-1.jxl
    2.04 ± 0.10 times faster than xxh32sum windwaker-no-fast-1.jxl
   10.98 ± 0.20 times faster than md5sum windwaker-no-fast-1.jxl
```
2024-07-23 04:56:48
im sure there are hashers that are faster than b3sum, but b3sum is something I have on all of my systems by default so it's safer to just default to it than go hunting for a new hashing tool
Traneptora
2024-07-23 04:56:59
xxh64sum may be worth it
2024-07-23 04:57:11
considering that it's like, 42.1ms vs 464.9 ms
Quackdoc
2024-07-23 04:57:31
it's certainly faster for sure. I had never really tested it before
Traneptora
2024-07-23 04:57:45
it's designed to be an extremely fast in-memory non-cryptographic hash
2024-07-23 04:57:56
it's used by lots of databases for hashbuckets and the like
2024-07-23 04:58:05
er, hashtables
2024-07-23 04:58:30
developed by the Lz4 guy, uh what's his name
2024-07-23 04:58:33
uh Yan collet or something
Quackdoc
2024-07-23 05:00:32
i see, I can for sure start including that when I do stuff. b3sum's speed mostly comes from the absurd level of optimization the devs did, which iirc includes an absurd amount of hand-written ASM
Traneptora
2024-07-23 05:00:47
also iirc it looks like it's parallelizing better
2024-07-23 05:01:17
compare the real time of the b3sum to the user time, the user time counts all threads
Quackdoc
2024-07-23 05:01:20
arch ships this by default https://github.com/BLAKE3-team/BLAKE3 which is what I use,
Traneptora compare the real time of the b3sum to the user time, the user time counts all threads
2024-07-23 05:02:09
oh yeah, it hits a lot harder on resources
```ps
➜ Pictures time xxh64sum windwaker-no-fast-1.jxl
74127cc7e3d61d2d  windwaker-no-fast-1.jxl
xxh64sum windwaker-no-fast-1.jxl  0.03s user 0.09s system 98% cpu 0.118 total
avg shared (code):         0 KB
avg unshared (data/stack): 0 KB
total (sum):               0 KB
max memory:                4 MB
➜ Pictures time b3sum windwaker-no-fast-1.jxl
49e5cfa296f70fd0f29a0b7041f4bacd0f40371746d67e74851a88808d12678c  windwaker-no-fast-1.jxl
b3sum windwaker-no-fast-1.jxl  0.52s user 0.06s system 673% cpu 0.085 total
avg shared (code):         0 KB
avg unshared (data/stack): 0 KB
total (sum):               0 KB
max memory:              510 MB
```
Traneptora
2024-07-23 05:03:09
510 MB oof
Quackdoc
2024-07-23 05:04:36
yeah disabling mmaping helps a lot in resource usage, but *really* kills the perf since it kills threading too
```ps
➜ Pictures time b3sum --no-mmap windwaker-no-fast-1.jxl
49e5cfa296f70fd0f29a0b7041f4bacd0f40371746d67e74851a88808d12678c  windwaker-no-fast-1.jxl
b3sum --no-mmap windwaker-no-fast-1.jxl  0.27s user 0.07s system 99% cpu 0.335 total
avg shared (code):         0 KB
avg unshared (data/stack): 0 KB
total (sum):               0 KB
max memory:                4 MB
```
Traneptora
2024-07-23 05:04:57
oh I see it mmaps the input file
2024-07-23 05:05:10
so what happens if you stream it? ie `cat windwaker-no-fast-1.jxl | b3sum -`
2024-07-23 05:05:48
cause that's what matters for your use case rn
Quackdoc
2024-07-23 05:07:08
yup, it hurts. still better than md5 though by a good amount, loses to xxhash a lot
```ps
➜ Pictures time cat windwaker-no-fast-1.jxl | b3sum -
49e5cfa296f70fd0f29a0b7041f4bacd0f40371746d67e74851a88808d12678c  -
cat windwaker-no-fast-1.jxl  0.00s user 0.25s system 69% cpu 0.364 total
avg shared (code):         0 KB
avg unshared (data/stack): 0 KB
total (sum):               0 KB
max memory:                4 MB
b3sum -  0.22s user 0.13s system 95% cpu 0.364 total
avg shared (code):         0 KB
avg unshared (data/stack): 0 KB
total (sum):               0 KB
max memory:                4 MB
```
```ps
➜ Pictures hyperfine --runs 10 'md5sum windwaker-no-fast-1.jxl' 'cat windwaker-no-fast-1.jxl | b3sum' 'xxh32sum windwaker-no-fast-1.jxl' 'xxh64sum windwaker-no-fast-1.jxl'
Benchmark 1: md5sum windwaker-no-fast-1.jxl
  Time (mean ± σ):     814.3 ms ±  11.7 ms    [User: 724.3 ms, System: 84.7 ms]
  Range (min … max):   795.4 ms … 835.9 ms    10 runs

Benchmark 2: cat windwaker-no-fast-1.jxl | b3sum
  Time (mean ± σ):     381.3 ms ±   6.4 ms    [User: 262.6 ms, System: 358.2 ms]
  Range (min … max):   373.3 ms … 395.3 ms    10 runs

Benchmark 3: xxh32sum windwaker-no-fast-1.jxl
  Time (mean ± σ):     164.2 ms ±   4.2 ms    [User: 92.2 ms, System: 70.9 ms]
  Range (min … max):   157.9 ms … 172.6 ms    10 runs

Benchmark 4: xxh64sum windwaker-no-fast-1.jxl
  Time (mean ± σ):     119.8 ms ±   3.4 ms    [User: 37.5 ms, System: 80.6 ms]
  Range (min … max):   116.0 ms … 126.6 ms    10 runs

Summary
  xxh64sum windwaker-no-fast-1.jxl ran
    1.37 ± 0.05 times faster than xxh32sum windwaker-no-fast-1.jxl
    3.18 ± 0.10 times faster than cat windwaker-no-fast-1.jxl | b3sum
    6.80 ± 0.21 times faster than md5sum windwaker-no-fast-1.jxl
```
frep
2024-07-23 06:30:45
i managed to make b3sum faster
2024-07-23 06:30:50
`alias b3sum="xxh64sum"`
2024-07-23 06:31:07
:^)
Demiurge
Traneptora if the black point is actually 0 nits that's called infinite contrast
2024-07-24 09:26:23
Division by zero contrast 😂
Traneptora
2024-07-24 09:26:51
it's called "infinite contrast ratio"
2024-07-24 09:26:55
I didn't invent the term
SwollowChewingGum
2024-07-24 09:28:16
Public cause that happens when you're dealing with learning point numbers
2024-07-24 09:28:21
Floating point numbers
dogelition
Demiurge Division by zero contrast 😂
2024-07-24 09:53:44
lim x -> 0+ 1/x contrast
CrushedAsian255
2024-07-28 02:03:24
YOO I GOT MY OLD ACCOUNT BACK
2024-07-29 12:23:09
why is xxh32sum like 3x slower than xxh64sum
2024-07-29 12:23:18
that doesn't make sense?
okydooky_original
2024-07-29 02:58:08
I've been out of the loop for a while, so I have a couple questions about splines: where are they at with implementation and will they close the gap between JXL and AVIF for simple shaded anime-style images? I was surprised and a little disappointed to see that AVIF did much better with the line art edges than JXL at the same quality setting (eg 80) and confirmed it myself using XL Converter.
jonnyawsom3
2024-07-29 03:22:35
Currently there's no spline encoding in libjxl due to how complex it would be to find good placement of them. Decoding is fully supported, and there are example files and a tool to make a file using splines, but they need to be added by hand until more time can be spent developing an efficient method to encode them
w
2024-07-29 05:31:15
it's been multiple years
2024-07-29 05:31:18
It's joever
Demiurge
CrushedAsian255 why is xxh32sum like 3x slower than xxh64sum
2024-07-29 06:05:10
older and less optimized maybe? just guessing
_wb_
I've been out of the loop for a while, so I have a couple questions about splines: where are they at with implementation and will they close the gap between JXL and AVIF for simple shaded anime-style images? I was surprised and a little disappointed to see that AVIF did much better with the line art edges than JXL at the same quality setting (eg 80) and confirmed it myself using XL Converter.
2024-07-29 06:19:55
Same quality setting does not mean same quality. The avifenc scale is very different from the cjxl scale.
Demiurge
2024-07-29 06:51:22
I think the biggest quality/bitrate breakthroughs in the future will be from hybrid images that use splines or wavelets to encode an extremely smooth and simplified "edge/shape detection" color image, and DCT to encode "texture" only since that's the 1 thing DCT is always really good at. Of course, hybrid images will take at least twice as long to decode, but it's the price you pay for really good lossy bitrate/quality improvements.
okydooky_original
_wb_ Same quality setting does not mean same quality. The avifenc scale is very different from the cjxl scale.
2024-07-29 07:10:51
I'm not a power user, so that's the only "objective" metric I can give to show similar usage. But, yeah. It's annoying how different codecs use different scales behind the scenes, or even user-facing, since it kind of defeats the point in having something for regular people.
2024-07-29 07:12:32
I guess the main thing I noticed was it produced a smaller file in Avif for simple shaded anime images than JXL using either VarDCT or Modular settings and with no artifacts (though, Modular was a clear winner compared to VarDCT).
_wb_
2024-07-29 07:13:45
Encoders are generally not consistent enough to be able to have quality settings that would be consistent across codecs. Just keep in mind that these setting scales are completely arbitrary. It is very easy to make an encoder that just produces worse quality for the same q setting and it will look like it compresses very well 🙂
2024-07-29 07:15:53
In fact, I think that's a 'trick' that both lossy webp and avifenc are applying, since they both have a scale that is producing worse quality for the same q parameter than typical jpeg encoders (libjpeg-turbo or mozjpeg) would produce for that setting.
2024-07-29 07:18:27
I don't really like that kind of trick aimed at deceiving naive users. It basically causes people to accidentally select a quality that's worse than what they were doing before, giving them the impression that they're saving many bytes but it comes with a drop in image fidelity (moreover, a drop that's not so easy to notice since both webp and avif are good at preserving appeal even when fidelity is low).
okydooky_original
2024-07-29 07:28:43
Interesting. I didn't realize it went back even to WebP. Though, I do notice the "blurring" or "smearing" that the VPX codecs apply, which is why I'd pick Jpeg (whose artifacts I was more okay with) over WebP when re-encoding images to save space.
2024-07-29 07:37:16
Regarding Avif + anime pics, it produced an image that looked fine upon broad and fine-tuned inspection, with no artifacts, and was also a bit smaller than its Modular JXL version (which did have some noticeable, but minor, artifacting around the edges). So, I was wondering "what would be needed to close that gap between the formats in this area?" I have a lot of images in that style, but I'd like to use JXL as exclusively as I reasonably can, partly out of loyalty and partly out of the convenience of not having to juggle multiple formats.
2024-07-29 07:40:00
So, splines were what came to mind, since the issue mainly appear around clean edges and I remembered discussion a while back about how that might address that aspect. I'm still really impressed with what JXL is becoming and all the improvements being made already.
paperboyo
2024-07-29 07:48:04
While on the topic of `quality`… In JXL, is its relationship to `distance`… fixed? Static? (I'm looking for the correct word…) If I'm using the exact same `quality`, is it like I would be using exactly the same `distance`, and nothing changes this relationship (eg. `effort`)?
jonnyawsom3
2024-07-29 08:28:38
I can't find the graph, but quality is mapped to distance on a fixed curve
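From memory, the curve in cjxl looks roughly like this (a sketch; verify against `cjxl_main.cc` before relying on it): for quality q between 30 and 99, distance ≈ 0.1 + 0.09 × (100 − q), so q = 90 lands at the default d ≈ 1.0; q = 100 is special-cased to d = 0 (lossless), and below q = 30 the mapping turns quadratic.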
Demiurge
_wb_ I don't really like that kind of trick aimed at deceiving naive users. It basically causes people to accidentally select a quality that's worse than what they were doing before, giving them the impression that they're saving many bytes but it comes with a drop in image fidelity (moreover, a drop that's not so easy to notice since both webp and avif are good at preserving appeal even when fidelity is low).
2024-07-29 09:19:54
I read and enjoyed your article about appeal vs fidelity. I also think it's possible to target both at the same time... Compression artifacts don't have to look deceitful, nor distracting and irritating. If compression artifacts resemble natural-looking noise patterns, then it won't look irritating and distracting nor deceitful, compared to typical smearing, or macroblocking, or blurring caused by image compression. High frequency noise patterns also help to convey a sense of "bad transmission quality" as opposed to deceptively changing the contents of the image.
2024-07-29 09:25:05
Imagine a JXL image getting progressively fuzzier and noisier as the bitrate drops and the distance setting increases, instead of blurry smears and smudges.
2024-07-29 09:25:56
They're functionally the same but the noise helps hide some of the ugliness of the lost information, without hiding the fact that information is missing.
Oleksii Matiash
Demiurge I read and enjoyed your article about appeal vs fidelity. I also think it's possible to target both at the same time... Compression artifacts don't have to look deceitful, nor distracting and irritating. If compression artifacts resemble natural-looking noise patterns, then it won't look irritating and distracting nor deceitful, compared to typical smearing, or macroblocking, or blurring caused by image compression. High frequency noise patterns also help to convey a sense of "bad transmission quality" as opposed to deceptively changing the contents of the image.
2024-07-29 09:27:22
Film photos are the best example of this, I believe
CrushedAsian255
Demiurge Imagine a JXL image getting progressively fuzzier and noisier as the bitrate drops and the distance setting increases, instead of blurry smears and smudges.
2024-07-29 09:30:26
Kinda sounds like analog interference
Demiurge
CrushedAsian255 Kinda sounds like analog interference
2024-07-29 11:17:09
Yes, exactly. That would be more honest AND better looking at the same time, compared to typical digital lossy compression artifacts.
jonnyawsom3
2024-07-29 11:20:11
It reminded me of the way AI art starts from noise, then refines into an image over time. The less refinement, or data, the more noisy the image is. Unfortunately it's a lot harder to do that in a way that looks good at all levels
Demiurge
2024-07-29 11:21:24
Smears and blurs and smudges do a worse job, both at avoiding deception and at avoiding looking awful and weird.
_wb_
So, splines were what came to mind, since the issue mainly appear around clean edges and I remembered discussion a while back about how that might address that aspect. I'm still really impressed with what JXL is becoming and all the improvements being made already.
2024-07-29 12:27:00
I don't think anyone is currently working on an encoder that does spline detection. Another option to improve anime compression is to implement something similar to avif's palette blocks (using modular patches). There are a few tricks that could be tried to improve jxl encoding on anime-style images, but for now this hasn't really been a priority — it's a relatively specific use case, and I think we prefer to first focus on the more general things (like getting libjxl v1.0 done). AVIF works well on anime-style images because it has directional prediction with many angles, which is good for saving bits when encoding clean hard edges. On photos it tends to cause smearing / smoothing artifacts, but on anime-style images the artifacts tend to be OK. Closing the gap between JXL and AVIF on lossy anime-style images will require coming up with nontrivial encoder improvements in libjxl. Don't hold your breath. If you want to use JXL for these images, for now I would recommend using lossless.
frep
Demiurge Imagine a JXL image getting progressively fuzzier and noisier as the bitrate drops and the distance setting increases, instead of blurry smears and smudges.
2024-07-29 12:40:13
Reminds me of what Opus does, it fills some parts with artificial noise which is actually mostly not unpleasant
2024-07-29 12:42:04
I wonder how exactly one would make such an image codec, where it has a "noise floor" you can reach by missing data
Quackdoc
2024-07-29 01:32:49
it's also worth noting that it's not like avif does better for all "anime style" images. more detailed images will usually start shifting heavily towards JXL anyways
_wb_
2024-07-29 02:04:53
Yes, it's mostly when the image is "clean and simple" that lossy avif will work well on it. But those are also images where lossless jxl will work well, in some cases even producing smaller images than high-quality lossy. Images that involve more detail / texture / more realistic drawing styles, will be better for lossy jxl than for lossy avif.
Demiurge
2024-07-29 09:25:05
The worst thing an encoder can do, is to completely change what you're looking at and create something completely different that didn't exist in the original. Like for example, completely change the color of an image, or completely erase certain details or textures from an image. Or create new signals that didn't exist in the original, like moire patterns or banding patterns or macroblock patterns. It's especially bad if it's worse than simply resizing the image to a smaller canvas, or if the artifacts are visible after resizing.
CrushedAsian255
Demiurge The worst thing an encoder can do, is to completely change what you're looking at and create something completely different that didn't exist in the original. Like for example, completely change the color of an image, or completely erase certain details or textures from an image. Or create new signals that didn't exist in the original, like moire patterns or banding patterns or macroblock patterns. It's especially bad if it's worse than simply resizing the image to a smaller canvas, or if the artifacts are visible after resizing.
2024-07-29 09:29:03
Yay HEIC and JBIG2
Demiurge
2024-07-29 09:57:06
what's heic do?
CrushedAsian255
2024-07-29 11:45:11
heic (hevc) can artifact with fake data
frep
2024-07-30 09:57:56
jbig2 is funny
JendaLinda
2024-07-30 09:59:39
I would say JBIG2 is good only for storing coloring pages.
jonnyawsom3
2024-07-30 12:22:15
The pain of the recent Reddit post
> (input image was originally .jpg, converted using imagemagick at the end of 2023)
And now they're wondering why the image is so large when they try to re-compress it with cjxl, I really hope they still have that jpeg to transcode it properly
frep
2024-07-30 03:38:22
big love to the dev who fixed the modular/lossless bug i filed
2024-07-30 03:38:42
lossless is now a little less lossy than before :))
_wb_
2024-07-30 05:58:53
That was a rather nasty one, I am happy it got caught before the 1.0 release
veluca
2024-07-30 06:16:54
no idea how it managed to slip past for so long... how sure are you it also repros on 0.8.2?
frep
veluca no idea how it managed to slip past for so long... how sure are you it also repros on 0.8.2?
2024-07-30 06:46:06
https://github.com/libjxl/libjxl/issues/3717#issuecomment-2247106108
2024-07-30 06:46:22
User-reported
yoochan
2024-07-30 07:44:41
I did ☝️ I tested with the "static" releases provided on the github. I may have made a mistake but I hope not... Please check again if you can. Should a `--check` flag be added? To be used in lossless mode to add a pixel-level check of the decoded compressed image?
frep
2024-07-30 08:20:06
Does libpng do a pixel check?
yoochan
2024-07-30 08:24:48
I guess not... It would sound like the library is not very mature 😄 but jxl is both a bit less mature and a bit more complex
_wb_
2024-07-30 08:32:02
I think we should maybe do a large scale test, running it on some big random corpus with various effort settings and number of threads etc, just to check everything roundtrips.
2024-07-30 08:33:04
(also with asan/msan/tsan builds)
okydooky_original
_wb_ I don't think anyone is currently working on an encoder that does spline detection. Another option to improve anime compression is to implement something similar to avif's palette blocks (using modular patches). There are a few tricks that could be tried to improve jxl encoding on anime-style images, but for now this hasn't really been a priority — it's a relatively specific use case, and I think we prefer to first focus on the more general things (like getting libjxl v1.0 done). AVIF works well on anime-style images because it has directional prediction with many angles, which is good for saving bits when encoding clean hard edges. On photos it tends to cause smearing / smoothing artifacts, but on anime-style images the artifacts tend to be OK. Closing the gap between JXL and AVIF on lossy anime-style images will require coming up with nontrivial encoder improvements in libjxl. Don't hold your breath. If you want to use JXL for these images, for now I would recommend using lossless.
2024-07-31 03:50:52
Sounds good. I'd only argue against it being a "relatively specific use case" in the sense that an increasing contingent of the images on the internet are of this style. But, I understand the logic behind prioritizing the overall parts of libjxl and the fact it'd take some actual work to close that gap. The whole reason I asked about this was so I could have an idea of what would be needed to make that improvement and to have a realistic expectation about what would be required to make that happen. All of that was nicely answered in your reply here. So, thank you and good luck. I'll continue doing what I can to help further along the adoption of the format.
Quackdoc it's also worth noting that it's not like avif does better for all "anime style" images. more detailed images will usually start shifting heavily towards JXL anyways
2024-07-31 03:55:21
Yeah. That's why I specified "simple shaded anime images." I'm currently sorting images downloaded from Pixiv and such into two folders ("Convert to JXL" and "Convert to Avif") based on that criteria, which is of course subjectively eyeballed and guesstimated.
2024-07-31 03:58:20
Lossless JXL can sometimes make smaller files than high quality lossy Avif? I'll have to do some testing on that, then. Maybe someone will come up with a machine learning tool that will scan images and figure out which conversion formats/settings would work best. Just automate everything. Lol
Orum
Lossless JXL can sometimes make smaller files than high quality lossy Avif? I'll have to do some testing on that, then. Maybe someone will come up with a machine learning tool that will scan images and figure out which conversion formats/settings would work best. Just automate everything. Lol
2024-07-31 04:14:08
in basically any format that natively supports both lossy and lossless, you'll usually find cases where lossless will be smaller than high quality lossy
2024-07-31 04:14:20
it's very content dependent though...
Oleksii Matiash
Lossless JXL can sometimes make smaller files than high quality lossy Avif? I'll have to do some testing on that, then. Maybe someone will come up with a machine learning tool that will scan images and figure out which conversion formats/settings would work best. Just automate everything. Lol
2024-07-31 06:17:14
Just compress to both avif and jxll (l - lossless), and compare sizes. But I'd stick to jxll anyway, storage is quite cheap these days. And I hate avif, that's a reason too
Oleksii Matiash Just compress to both avif and jxll (l - lossless), and compare sizes. But I'd stick to jxll anyway, storage is quite cheap these days. And I hate avif, it is the reason too
2024-07-31 06:21:10
🥲
CrushedAsian255
Oleksii Matiash 🥲
2024-07-31 06:31:20
Hope you have backups
Oleksii Matiash
CrushedAsian255 Hope you have backups
2024-07-31 06:32:39
Sure, another hdd with the copy lies in the bookcase
CrushedAsian255
Oleksii Matiash Sure, another hdd with the copy lays in the bookcase
2024-07-31 06:38:40
also hope it's organised and not just a pile of random files in no structure
Oleksii Matiash
CrushedAsian255 also hope itโ€™s organised and not just a pile of random files in no structure
2024-07-31 06:39:26
Sure 🙂 Photos\year\year-month-day <name>\ <files>
frep
CrushedAsian255 Hope you have backups
2024-07-31 07:38:54
I think we're gonna be saying that for quite a bit after that scare 😅
drkt
2024-07-31 11:52:18
Speaking of 1.0, is there any estimate for it? This year, next year?
_wb_
2024-07-31 01:39:47
I think it is likely to happen in 2024.
drkt
2024-07-31 02:18:20
awesome
Orum
2024-07-31 03:29:33
maybe we can finally shut up the Mozilla devs claiming JXL isn't stable <:KekDog:805390049033191445>
HCrikki
2024-07-31 04:39:40
jxl (the specification and bitstream) has been stable since 2022 and conformance is tracked. discussions tend to confusingly mix up jxl and libjxl, i.e. the capabilities of the format itself versus the current state of the reference library
jonnyawsom3
2024-07-31 05:25:48
Admittedly, the conformance tests haven't caught two cases of lossless not being lossless: one needed the spec to be changed so they stayed compatible, and the other was thankfully due to a rare combination of arguments (chunked multithreaded lossless encoding above e4). With some bolstered tests before releases, and maybe using the other JXL decoders for sanity checks, it should be in a very stable state already
okydooky_original
Orum in basically any format that natively supports both lossy and lossless, you'll usually find cases where lossless will be smaller than high quality lossy
2024-07-31 05:52:42
Like with png sometimes being smaller than jpeg.
Oleksii Matiash 🥲
2024-07-31 05:54:38
>muh storage is cheap cope
No. Lol
A homosapien
2024-07-31 05:55:30
I think a more accurate statement would be, "storage is getting cheaper"
Oleksii Matiash
>muh storage is cheap cope No. Lol
2024-07-31 05:55:36
I said that about myself and my choice. For me and my needs - it is
jonnyawsom3
2024-07-31 06:06:17
I'm just sad all my HDDs died, running on a single non-redundant SSD now
okydooky_original
2024-07-31 06:06:35
I know. It's a meme to reply with "storage is cheap" to any data size related problems online, especially chan-style message boards. So, I try to shut it down when I see it, because it's an attitude of "don't solve the core problem, just let it get worse and adapt by spending money."
A homosapien
I know. It's a meme to reply with "storage is cheap" to any data size related problems online, especially chan-style message boards. So, I try to shut it down when I see it, because it's an attitude of "don't solve the core problem, just let it get worse and adapt by spending money."
2024-07-31 06:20:31
"Data hoarding is one hell of a drug. I myself just got into rehab and I deleted a ton of stuff off of my hard drive. Now I wake up everyday with optimism as I look at all the free storage space I have! This has not only improved the performance of my computer, this affects me to the very core of my life. My ex-wife is now talking to me and I might even see my kids again!"
Oleksii Matiash
I know. It's a meme to reply with "storage is cheap" to any data size related problems online, especially chan-style message boards. So, I try to shut it down when I see it, because it's an attitude of "don't solve the core problem, just let it get worse and adapt by spending money."
2024-07-31 06:21:37
I am a big optimization fan (in the better meaning), so I understand you clearly. So let me rephrase my statement - storage is cheaper for me than the quality. But if anything helps me to save storage without losing quality - I'll be the first to use it 🙂
okydooky_original
2024-07-31 06:33:46
No, I understand. I was only attacking the meme. I figured you were in favor of optimization, since you're here in this channel. 😉
2024-07-31 06:35:06
And Call of Duty needs to stop putting out 300GB "games."
jonnyawsom3
A homosapien "Data hoarding is one hell of a drug. I myself just got into rehab and I deleted a ton of stuff off of my hard drive. Now I wake up everyday with optimism as I look at all the free storage space I have! This has not only improved the performance of my computer, this affects me to the very core of my life. My ex-wife is now talking to me and I might even see my kids again!"
2024-07-31 06:41:27
I don't have a wife or kids, but I did this a few weeks ago on 250 GB of random videos I had taken over the years
A homosapien
I don't have a wife or kids, but I did this a few weeks ago on 250 GB of random videos I had taken over the years
2024-07-31 06:49:56
Does looking at all that free storage space fill you with joy? 😊
jonnyawsom3
2024-07-31 07:10:08
It means I don't have to constantly worry about running out anymore, was at 40 GB left at one point with the swapfile using up another 20 GB during a game
A homosapien
2024-07-31 07:17:56
I had similar issues with my laptop. I had 20 GB left with a 12 GB swap file. So I got rid of 80 GB worth of junk.
CrushedAsian255
A homosapien Does looking at all that free storage space fill you with joy? 😊
2024-08-03 12:11:44
whenever i have free space it fills me with the urge to start `yt-dlp`ing random crap that i'll never need ever again
HatFront
2024-08-04 07:18:51
Hello. When I convert animated .jxl to .apng using `djxl` I always get gigantic images. When for example a program or a website doesn't support .jxl and I convert it to .apng I still can't upload it because of the size. I wanted to convert my GIFs to JXL but then if I ever need to go back to another format I will get a gigantic file and lose quality when trying to make it smaller. A 5,3 MiB GIF converted to APNG using ffmpeg is 7 MiB. Looks identical to original. When I convert original GIF or ffmpeg's APNG to JXL using -d 0 I get 6,3 and 6,2 MiB JXL files. When I convert those JXL's to APNG I get 32,9 and 32,6 MiB files. Does anyone know why this is happening? Is there any way to get .apng from .jxl with similar size to the original? Happens with `djxl` on latest commit I compiled (v0.10.2 f7f20ce0), v0.10.3 4a3b22d from GitHub releases and v0.8.3 from Fedora repos.
username
HatFront Hello. When I convert animated .jxl to .apng using `djxl` I always get gigantic images. When for example a program or a website doesn't support .jxl and I convert it to .apng I still can't upload it because of the size. I wanted to convert my GIFs to JXL but then if I ever need to go back to another format I will get a gigantic file and lose quality when trying to make it smaller. A 5,3 MiB GIF converted to APNG using ffmpeg is 7 MiB. Looks identical to original. When I convert original GIF or ffmpeg's APNG to JXL using -d 0 I get 6,3 and 6,2 MiB JXL files. When I convert those JXL's to APNG I get 32,9 and 32,6 MiB files. Does anyone know why this is happening? Is there any way to get .apng from .jxl with similar size to the original? Happens with `djxl` on latest commit I compiled (v0.10.2 f7f20ce0), v0.10.3 4a3b22d from GitHub releases and v0.8.3 from Fedora repos.
2024-08-04 07:29:00
I assume `djxl` isn't using that much compression effort when creating APNG files, leading to the large sizes; you might need to put the APNGs produced by `djxl` through another program such as [APNG Optimizer](https://sourceforge.net/projects/apng/files/APNG_Optimizer/1.4/) to get the file size back down to normal
_wb_
2024-08-04 07:45:16
The main issue is probably that djxl returns full frames
HatFront
username I assume `djxl` isn't using that that much compression effort when creating APNG files leading to the large sizes, you might need to put the APNGs produced by `djxl` through another program such as [APNG Optimizer](https://sourceforge.net/projects/apng/files/APNG_Optimizer/1.4/) to get the file size down back to normal
2024-08-04 08:00:36
Thank you! apngopt 1.4 from Debian repos reduced the image size to 5,7 MiB
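The full workflow that worked here, written out as commands (a sketch; apngopt's argument order is assumed to be input then output, and its flags vary by version):
```
# djxl writes out full frames, hence the size blow-up...
djxl input.jxl decoded.apng
# ...apngopt then rebuilds inter-frame deltas and recompresses (defaults here: 7ZIP, 15 iterations)
apngopt decoded.apng optimized.apng
```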
Demiurge
2024-08-05 06:05:24
That's smaller than the jxl size you reported
2024-08-05 07:43:13
These image optimization tools could be ported to jxl in the future hopefully
CrushedAsian255
Demiurge These image optimization tools could be ported to jxl in the future hopefully
2024-08-05 09:03:36
Interframe video compression with patches?
JendaLinda
2024-08-05 09:22:14
These animation optimizers usually encode differences between old and new pixels, leaving the rest of the frame transparent.
HatFront
Demiurge That's smaller than the jxl size you reported
2024-08-05 10:40:29
Yesterday, I was not trying to create the smallest JXL. The original GIF is 5,3 MiB.
cjxl (v0.10.2 f7f20ce0):
`-d 0` creates 6,2 MiB JXL
`-d 0 -e 9` 5,7 MiB
`-d 0 -e 9 -P 0` 4,4 MiB
`-d 0 -e 9 -P 0 -g 0` 4,3 MiB
djxl + apngopt (defaults, 7ZIP with 15 iterations) creates 5,7 MiB APNG
jonnyawsom3
JendaLinda These animation optimizers usually encode differences between old and new pixels, leaving rest of the frame transparent.
2024-08-05 10:43:10
Both of those are options. The APNG way is both encoding differences to previous frames by using transparency, and also using frames only as large as they need to be (a 32x64 area out of a 512x512 animation, for example). That can only be done for the entire frame though; JXL could likely use adaptable patch sizes along with referencing other frames as you said
CrushedAsian255
Both of those are options. The APNG way is both encoding differences by using transparency to previous frames, but also using frames only as large as they need to be (A 32x64 area out of a 512x512 animation, for example). That can only be done for the entire frame though, JXL could likely use adaptable patch sizes along with referencing outside concurrent frames as you said
2024-08-06 02:34:09
theoretically jpeg xl could also do funky things with layer mixing modes to add kind-of interframe prediction
zamfofex
2024-08-06 08:39:47
https://youtu.be/FlWjf8asI4Y
CrushedAsian255
2024-08-07 12:16:35
Just learned I can use JPEG XL in Obsidian using the community extension "Image Magician" and adding `jxl` to the supported files list
Deleted User
2024-08-09 05:32:15
Hello! Nice to meet you all
CrushedAsian255 Just learned I can use JPEG XL in Obsidian using the community extension "Image Magician" and adding `jxl` to the supported files list
2024-08-09 05:32:24
woah great!
Just me
2024-08-11 08:29:20
Pedantic. 2 keywords are too general right now. Just remove https://github.com/topics/lossy-compression and https://github.com/topics/lossless-compression-algorithm from https://github.com/libjxl/libjxl. The repository has 2 -image-compression keywords and even image-compression. JPEG XL can't compress arbitrary binary files easily...
VcSaJen
2024-08-11 11:31:05
Which advanced parameters outside of distance/quality and effort are most commonly used/most useful for encoding?
yoochan
2024-08-11 12:25:20
the `-I` is often useful for non-photographic images (lossless)
A homosapien
2024-08-11 12:25:39
Well it depends. What content are you encoding, lossy or lossless? Do you care about progressive decoding?
VcSaJen
2024-08-11 12:48:14
Well, I mean which options should be in UI, and which ones are too useless/technical.
A homosapien
2024-08-11 12:49:16
https://github.com/JacobDev1/xl-converter
2024-08-11 12:49:39
This program provides a good starting point
2024-08-11 12:50:43
My only nitpick would be to replace quality with distance
username
2024-08-11 12:51:54
what does the "Intelligent" checkbox next to effort mean/do? does it like change/scale the effort value based on resolution of the input file or something?
A homosapien
2024-08-11 12:52:40
idk, I've been meaning to look into it ¯\\_(ツ)\_/¯
yoochan
2024-08-11 12:55:49
I wouldn't be surprised if the author of this was here, in this discord
VcSaJen
A homosapien https://github.com/JacobDev1/xl-converter
2024-08-11 01:19:27
It seems like the only advanced option is lossy modular mode
2024-08-11 01:32:27
Here's Krita's JPEG XL save dialog. I dunno if it's overboard, or only essentials.
yoochan
2024-08-11 01:48:53
it seems overwhelming!
_wb_
2024-08-11 01:53:56
Just quality and maybe encode effort should be enough to expose, imo. Most of the encode options are more useful for experimentation and maybe some specific use cases, but for general use, even exposing just quality, with some reasonable default for effort (say, e3 for lossless, e6 for lossy), should be ok.
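In cjxl terms, that suggestion would look something like this (illustrative values, not official defaults):
```
cjxl -q 90 -e 6 input.png lossy.jxl     # lossy: expose quality, effort 6
cjxl -d 0 -e 3 input.png lossless.jxl   # lossless: distance 0, light effort
```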
Demiurge
2024-08-11 03:36:32
wow, it exposes literally every single option
2024-08-11 03:36:43
cool
Geniusak
2024-08-11 05:25:40
I think it's fine since it's under the advanced tab
A homosapien
VcSaJen Here's Krita's JPEG XL save dialog. I dunno if it's overboard, or only essentials.
2024-08-11 06:06:02
The most advanced useful option I see there is lossy alpha. For everything else you can let the encoder decide automatically.
jonnyawsom3
VcSaJen Here's Krita's JPEG XL save dialog. I dunno if it's overboard, or only essentials.
2024-08-11 06:48:29
Krita's documentation actually has descriptions and explanations for things not listed in the help of cjxl or the github, since they read the code and implemented it themselves ~~Unlike Adobe~~
A homosapien
2024-08-11 06:49:53
Why does Krita not offer the use of distance? 😔 Like it only took me a minute to learn how to use it and I've never gone back
2024-08-11 06:50:18
Such an underrated feature.
Quackdoc
VcSaJen Here's Krita's JPEG XL save dialog. I dunno if it's overboard, or only essentials.
2024-08-11 06:50:27
krita still not having associated alpha support T.T
okydooky_original
VcSaJen It seems like the only advanced option is lossy modular mode
2024-08-12 02:37:57
I'd love for that to be an optionally automatic feature, like letting the app figure out if modular or VarDCT should be used based on a loose scan of the image contents. Like, "lots of detail" vs "smooth colors with sharp edges/lines."
2024-08-12 02:41:33
Since we're talking about apps, would someone be interested in looking into this issue: https://github.com/T8RIN/ImageToolbox/issues/1202 There's some rendering oddities in the middle, but the main bit is at the beginning and the last few posts.
Just me
2024-08-12 04:13:58
FYI: the latest 4K monitors @ 1000 Hz are crazily bottlenecked by common broadband speeds in 2024.
1/1000 second at 100 Mbps = 100,000 bits = 12.5 kB
1/1000 second at 1000 Mbps = 1,000,000 bits = 125 kB
2024-08-12 04:17:22
This means that progressive should be enabled pretty early for these in online/streaming over broadband scenarios...
Oleksii Matiash
2024-08-12 06:54:59
New fab is here 🫤
spider-mario
2024-08-12 08:00:49
I'm not sure using a 1000Hz monitor implies that you necessarily expect 1000fps decoding for still images
_wb_
2024-08-12 09:10:43
1000 Hz refresh rate, that seems overkill. That's an order of magnitude faster than the temporal resolution of human vision. Maybe it's useful to reduce ghosting etc to have such a high refresh rate, but it seems useless to have the actual content signal at that speed. Just like a display resolution much denser than retina resolution is not really useful for human viewers.
spider-mario
2024-08-12 09:30:23
the temporal resolution of human vision is a bit of a tricky question: https://www.100fps.com/how_many_frames_can_humans_see.htm
2024-08-12 09:36:14
for what it's worth, on my 2019 X1 Carbon (4K model), I could detect the PWM flickering with my naked eyes (in some situations) even though it was measured by notebookcheck at 200 Hz
2024-08-12 10:05:01
I think my main criterion would be "a GPU that supports some form of adaptive sync with this monitor"
yoochan
2024-08-12 10:17:53
I agree, 1000 hz could also mean that the vsync is inexpensive for the gpu
VcSaJen
2024-08-12 12:09:12
To be fair, even an average gaming mouse has a 1000 Hz polling rate
jonnyawsom3
2024-08-12 01:08:50
My friend might've made the solution to that https://youtu.be/VvFyOFacljg
Meow
2024-08-12 04:36:29
The performance of opening JXL hasn't been improved in macOS Sequoia public beta
KKT
Meow The performance of opening JXL hasn't been improved in macOS Sequoia public beta
2024-08-12 05:42:07
Can you check to see if it fixes this? https://discord.com/channels/794206087879852103/804324493420920833/1272600654287994992
Meow
2024-08-12 05:52:17
Never saw that issue with my 10500 * 10500 pixel images on both Sonoma and Sequoia
jonnyawsom3
2024-08-12 05:52:52
Above 8-bit?
Meow
2024-08-12 06:14:04
Oh I didn't have such an image
Oleksii Matiash
2024-08-13 06:53:21
Just in case someone needs large 16-bit jxls. Color space is in the name, size is ~18800x9000
2024-08-13 07:02:05
Btw, can anybody tell me why cjxl tells me *libpng warning: iCCP: known incorrect sRGB profile* or *libpng warning: iCCP: profile 'ProPhoto RGB': 0h: PCS illuminant is not D50* on every png file from photoshop?
Traneptora
Oleksii Matiash Btw, can anybody tell me why cjxl tells me *libpng warning: iCCP: known incorrect sRGB profile* or *libpng warning: iCCP: profile 'ProPhoto RGB': 0h: PCS illuminant is not D50* on every png file from photoshop?
2024-08-13 08:45:05
Photoshop embeds an ICC profile in sRGB pngs, but it embeds an incorrect sRGB profile
2024-08-13 08:45:18
and the libpng devs are aware of it and warn you about it
2024-08-13 08:48:55
apparently this bug has been fixed if you update
KishiAri
2024-08-14 04:33:28
hi, newbie here. i have 1500+ images that all somehow add up to 9GB. how would i go about converting these all as a batch?
2024-08-14 04:34:25
oh, IrfanView support?
jonnyawsom3
2024-08-14 04:35:13
Assuming some are jpegs, I'd suggest using cjxl instead of any other tools
KishiAri
2024-08-14 04:36:44
um wait huh
2024-08-14 04:36:50
im not seeing that in the git
2024-08-14 04:37:58
https://www.mankier.com/1/cjxl
2024-08-14 04:37:59
this?
jonnyawsom3
2024-08-14 04:39:49
Yeah, here's the file in the github <https://github.com/libjxl/libjxl/blob/main/tools/cjxl_main.cc> and the latest release where you can download it directly https://github.com/libjxl/libjxl/releases/tag/v0.10.3
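If you go the cjxl route, a minimal batch sketch for a folder of JPEGs could look like the following; the paths and effort value are just examples, and JPEG inputs are transcoded losslessly by default:
```sh
# Minimal batch loop, assuming cjxl is on PATH and run from the image folder
mkdir -p jxl
for f in *.jpg; do
  cjxl "$f" "jxl/${f%.jpg}.jxl" -e 7 --quiet
done
```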
KishiAri
2024-08-14 04:40:30
hold on, i figured out how IrfanView does it
2024-08-14 04:40:36
i'll check this other one later
jonnyawsom3
2024-08-14 04:43:01
I'm fairly sure cjxl is the only way to use jpeg transcoding currently, so do test the results before deleting all the originals
HCrikki
2024-08-14 04:47:00
use XL Converter, preserve metadata instead of wiping it, make sure automatic JPEG recompression is *enabled* (lossless conversion from jpg to jxl), and stick to effort 7 - higher efforts aren't worth the slowdown for a few fewer kilobytes when you're already guaranteed a 20% smaller filesize anyway
2024-08-14 04:47:14
e7 makes the lossless conversion almost instant. 1500 images should be a few minutes' job.
jonnyawsom3
2024-08-14 04:48:52
*Assuming they're all jpegs. I'd guess there are a lot of PNGs in there too, so those will be a fair bit slower, but still relatively quick
Oleksii Matiash
Traneptora Photoshop embeds an ICC profile in sRGB pngs, but it embeds an incorrect sRGB profile
2024-08-14 05:47:22
But it also warns about AdobeRGB and both ProPhoto profiles
Traneptora
2024-08-14 05:47:46
different warning message
Demiurge
2024-08-14 09:48:36
is there no "jpeg transcode" support in the public libjxl api?
KishiAri hi, newbie here. i have 1500+ images that all somehow add up to 9GB. how would i go about converting these all as a batch?
2024-08-14 09:50:01
I probably would not recommend doing large batch conversions yet, until the lossless encoder gets a "verify" mode
2024-08-14 09:50:52
Just to make sure nothing goes wrong without you noticing, like a rare corner case or bug with a particular image input
2024-08-14 09:52:35
Currently I don't think there would be any way for you to tell if one of the images became corrupted during conversion. There's no test to make sure the decoded sample/coeff values match the original.
CrushedAsian255
Demiurge Currently I don't think there would be any way for you to tell if one of the images became corrupted during conversion. There's no test to make sure the decoded sample/coeff values match the original.
2024-08-14 09:59:32
Oops I already transcoded over 15000 images :/
KishiAri
2024-08-14 10:01:10
noted
2024-08-14 10:01:52
might as well compress it in a 7z
Orum
Demiurge I probably would not recommend doing large batch conversions yet, until the lossless encoder gets a "verify" mode
2024-08-14 10:12:50
you can manually compare the encoded image to the original (which is what I do)
2024-08-14 10:13:30
I'm paranoid about this sort of thing, but it's easy to script it in
Demiurge
2024-08-14 10:22:57
it's pretty fatiguing to manually compare thousands of images though... for lossless compression, it would be a good idea to verify first before deleting the original file, to ensure the original data can be recovered.
CrushedAsian255
Demiurge it's pretty fatiguing to manually compare thousands of images though... for lossless compression, it would be a good idea to verify first before deleting the original file, to ensure the original data can be recovered.
2024-08-14 10:25:27
maybe we as a community should start encoding images losslessly and comparing them en masse to try to find any kind of issues
Orum
Demiurge it's pretty fatiguing to manually compare thousands of images though... for lossless compression, it would be a good idea to verify first before deleting the original file, to ensure the original data can be recovered.
2024-08-14 10:26:16
well by "manual" I mean you aren't doing it within cjxl; I still script it in so it happens automatically when encoding to <:JXL:805850130203934781>
2024-08-14 10:30:49
it's kind of a long script but here's the relevant part:
```sh
if [ "$(djxl "$sf" - --output_format ppm --quiet | diff -q "$uc" -)" ]; then
  de
fi
```
Demiurge
2024-08-14 10:30:54
for jpeg, you could try decoding back to jpeg and checking with diff before deleting the original. and for lossless pixel data like png, you could use a tool to check if the sample values are identical, like vips_equal
2024-08-14 10:31:26
Then you can delete the original files with confidence. And it can be done in a scripted, automated way.
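A minimal sketch of that verify-then-delete idea for JPEG inputs, assuming cjxl and djxl are on PATH; djxl reconstructs the original JPEG bitstream from a losslessly transcoded .jxl, so a byte-for-byte comparison is enough. Filenames are examples:
```sh
for f in *.jpg; do
  jxl="${f%.jpg}.jxl"
  cjxl "$f" "$jxl" --quiet || { echo "encode failed: $f" >&2; continue; }
  # Reconstruct the original JPEG bitstream from the .jxl
  djxl "$jxl" /tmp/roundtrip.jpg --quiet || { echo "decode failed: $f" >&2; continue; }
  if cmp -s "$f" /tmp/roundtrip.jpg; then
    rm "$f"    # byte-identical: safe to delete the original
  else
    echo "mismatch, keeping original: $f" >&2
  fi
done
```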
Orum
2024-08-14 10:32:15
yeah, for jpeg it should be even easier
CrushedAsian255
Orum it's kind of a long script but here's the relevant part:
```sh
if [ "$(djxl "$sf" - --output_format ppm --quiet | diff -q "$uc" -)" ]; then
  de
fi
```
2024-08-14 10:32:49
What's de?
2024-08-14 10:32:55
Translate to German?
Orum
2024-08-14 10:33:19
no, just a simple function that prints an error and dies:
```sh
de() {
  rm "$uc"
  echo "Recompressed $sx file not the same as the original for file: $se"
  exit 1
}
```
CrushedAsian255
2024-08-14 10:33:55
How is uc generated?
Orum
2024-08-14 10:35:14
well in this script it was with dwebp (as the source images were all lossless WebPs), but it could just as easily be done with imagemagick
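For reference, a hypothetical way `$uc` could be produced in that setup: dwebp decodes the lossless WebP source to a PPM that can then be diffed against djxl's PPM output. The `$src` variable name is an assumption; the others follow the script above:
```sh
uc="${sf%.jxl}.ppm"           # $sf is the .jxl file, $uc the uncompressed copy
dwebp "$src" -ppm -o "$uc"    # $src is the original lossless .webp
```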
Demiurge
2024-08-14 10:35:53
I would 100% recommend bulk converting images into jxl format to save space, as long as you are 100% confident in the verification process. If you need help creating a script for converting and verifying, I can help and I'm sure others would be willing to help too.
Orum
2024-08-14 10:36:41
as long as it's not MS batch, powershell, or python I'm willing to help <:CatSmile:805382488293244929>
CrushedAsian255
Orum as long as it's not MS batch, powershell, or python I'm willing to help <:CatSmile:805382488293244929>
2024-08-14 10:37:00
Perl?
Demiurge
2024-08-14 10:37:05
yeah lol powershell is just needlessly different from literally everything else
Orum
2024-08-14 10:37:13
๐Ÿ˜ I love perl
Demiurge
2024-08-14 10:37:14
perl rocks
CrushedAsian255
Demiurge perl rocks
2024-08-14 10:37:26
Iโ€™ve been meaning to learn
Demiurge
2024-08-14 10:37:41
it's just bash on steroids
CrushedAsian255
Demiurge yeah lol powershell is just needlessly different from literally everything else
2024-08-14 10:37:43
Powershell is "we wanted to make Bash but we want to be Microsoft"
Demiurge
2024-08-14 10:38:12
and bash is just ksh but slower and with 100% more "i'd like to interject"
CrushedAsian255
2024-08-14 10:39:15
IMO fish = oh my zsh - POSIX
Demiurge
2024-08-14 10:40:18
powershell uses way too much powershell specific jargon that is completely unique to powershell and not used anywhere else... it feels like I have to learn to speak klingon to use powershell
2024-08-14 10:40:55
I'm sure powershell is great but I just don't have the patience when it just looks like it's trying to be different just for the sake of being different
CrushedAsian255
2024-08-14 10:42:47
We're happy over in UNIX land and Windows is like "me have confusing bash knockoff mixed with CMD?"
Demiurge
2024-08-14 10:56:34
and instead of typing pwd they would prefer to Spell-It-Out with Weird-Capitalization Like-This
2024-08-14 10:56:56
So cool!
2024-08-14 10:57:08
feels so modern!
CrushedAsian255
Demiurge and instead of typing pwd they would prefer to Spell-It-Out with Weird-Capitalization Like-This
2024-08-14 10:58:55
surprised they didn't call it `Microsoft-Power-Shell`
Meow
KKT Can you check to see if it fixes this? https://discord.com/channels/794206087879852103/804324493420920833/1272600654287994992
2024-08-14 11:19:28
It seems the latest public beta of Sequoia (24A5320a) has fixed it
2024-08-14 11:20:13
No glitch in contrast to macOS 14.6.1
KishiAri
CrushedAsian255 We're happy over in UNIX land and Windows is like "me have confusing bash knockoff mixed with CMD?"
2024-08-15 06:56:40
TIL. thanks :D
Quackdoc
CrushedAsian255 We're happy over in UNIX land and Windows is like "me have confusing bash knockoff mixed with CMD?"
2024-08-15 07:01:41
SPEAKING OF, continuing in <#806898911091753051>
RaveSteel
Demiurge and instead of typing pwd they would prefer to Spell-It-Out with Weird-Capitalization Like-This
2024-08-15 07:03:29
Powershell has aliases for many of the Unix/Linux equivalents. Just type alias in powershell lmao
Demiurge
2024-08-15 08:05:30
Don't you mean List-Aliases or something like that
RaveSteel
2024-08-15 08:28:56
No, simply 'alias' will work as well
2024-08-15 08:29:53
"alias" is likely aliased to "List-Aliases" lol
monad
2024-08-15 09:42:45
guys i'm back, did i miss anything important
CrushedAsian255
monad guys i'm back, did i miss anything important
2024-08-15 11:33:43
not really, just everyone pooping on powershell
2024-08-17 03:34:12
Is there any way to indicate the Modular data is using yuv420p/444p? Like an ICC profile? I'm trying to store video frames losslessly
yoochan
2024-08-17 07:10:17
I think this was discussed somewhere. If I remember correctly, modular cannot be told to explicitly proceed in 420 but will encode the planes losslessly as if they were 444 (with a repeated color in the missing spots? Don't remember, the answer should be added to the FAQ)
Tirr
2024-08-17 07:12:39
various codecs use different subsampling methods iirc, so it would not be lossless if you keep the samples from the video frame as-is
2024-08-17 07:13:38
https://discord.com/channels/794206087879852103/794206170445119489/1261606346092969985
CrushedAsian255
yoochan I think this was discussed somewhere. If I remember correctly, modular cannot be told to explicitly proceed in 420 but will encode the planes losslessly as if they were 444 (with a repeated color in the missing spots? Don't remember, the answer should be added to the FAQ)
2024-08-17 07:39:07
i was planning to upscale the u and v channels to convert it from yuv420p to yuv444p
_wb_
2024-08-17 09:13:54
In non-XYB images, there is a frame header field to denote that the YCbCr transform is used (with 444,422,440, or 420 subsampling). This is used when recompressing JPEGs but can also be used in Modular mode to do lossless yuv. Only limitation is that the ycbcr matrix, range and subsampling siting is fixed (it is the way JPEG does it), so you can't e.g. do tv-range yuv with the chroma siting of h264.
CrushedAsian255
_wb_ In non-XYB images, there is a frame header field to denote that the YCbCr transform is used (with 444,422,440, or 420 subsampling). This is used when recompressing JPEGs but can also be used in Modular mode to do lossless yuv. Only limitation is that the ycbcr matrix, range and subsampling siting is fixed (it is the way JPEG does it), so you can't e.g. do tv-range yuv with the chroma siting of h264.
2024-08-17 10:47:21
can I store the pixel data as YUV in modular channels and then apply an ICC profile to convert back to RGB? Now that I think about it, i don't need full lossless, can i just use vardct xyb at like d0.3?
_wb_
2024-08-17 10:49:40
If you need a different yuv matrix than the jpeg one, you have to store it as "RGB" with an ICC profile. If it's the jpeg yuv matrix, the libjxl decoder will do the inverse transform...
CrushedAsian255
_wb_ If you need a different yuv matrix than the jpeg one, you have to store it as "RGB" with an ICC profile. If it's the jpeg yuv matrix, the libjxl decoder will do the inverse transform...
2024-08-17 10:53:25
i'm probably just going to decode to RGB, is 12-bit precision good enough to be lossless?
2024-08-17 10:55:50
is the RGB data stored in JPEG XL gamma encoded? or is it linear?
_wb_
2024-08-17 11:01:30
When using lossless, it is whatever you want it to be.
CrushedAsian255
2024-08-17 11:05:32
so whatever the input is?
Traneptora
CrushedAsian255 so whatever the input is?
2024-08-17 11:28:56
when you use lossless jxl it disables those transforms
2024-08-17 11:29:21
it uses the input space and tags it
jonnyawsom3
VcSaJen Here's Krita's JPEG XL save dialog. I dunno if it's overboard, or only essentials.
2024-08-17 08:44:42
I only just looked again, but for some reason I'm missing the newer Lossless Alpha and CICP options. On 5.2.3 and they were added months ago so not sure where they went
2024-08-17 08:45:03
Same in the Advanced Export too
Kampidh
I only just looked again, but for some reason I'm missing the newer Lossless Alpha and CICP options. On 5.2.3 and they were added months ago so not sure where they went
2024-08-17 10:24:34
it was merged for 5.3, currently available only on nightly builds
jonnyawsom3
2024-08-17 10:26:56
Ahh, thanks
CrushedAsian255
2024-08-18 06:48:52
Is there no way to use anything other than XYB (or YCbCr) for VarDCT?
_wb_
2024-08-18 11:08:20
You can in principle do VarDCT on RGB. So using a suitable ICC profile, you can use any color space. Also note that XYB is configurable: currently libjxl always uses the default XYB but all the parameters of the transform can also be signaled.
CrushedAsian255
Quackdoc
```patch
diff --git a/libavformat/riff.c b/libavformat/riff.c
index df7e9df31b..16e37fb557 100644
--- a/libavformat/riff.c
+++ b/libavformat/riff.c
@@ -34,6 +34,7 @@
  * files use it as well.
  */
 const AVCodecTag ff_codec_bmp_tags[] = {
+    { AV_CODEC_ID_JPEGXL, MKTAG('J', 'X', 'L', ' ') },
     { AV_CODEC_ID_H264, MKTAG('H', '2', '6', '4') },
     { AV_CODEC_ID_H264, MKTAG('h', '2', '6', '4') },
     { AV_CODEC_ID_H264, MKTAG('X', '2', '6', '4') },
```
2024-08-19 05:16:19
just wondering, has this been submitted to upstream yet?
Quackdoc
CrushedAsian255 just wondering, has this been submitted to upstream yet?
2024-08-19 05:17:44
no, it's kinda hacky, I mean it does work and all. but it may have some unintended side effects
2024-08-19 05:18:24
that and mailing lists are a hassle, no chance I would remember to respond to correspondence lol
Traneptora
2024-08-20 11:29:47
it's unlikely to be merged
Jyrki Alakuijala
_wb_ You can in principle do VarDCT on RGB. So using a suitable ICC profile, you can use any color space. Also note that XYB is configurable: currently libjxl always uses the default XYB but all the parameters of the transform can also be signaled.
2024-08-21 08:44:32
you'd need to have your own quantization matrices to get any performance out of that -- finding good quantization matrices is time consuming as everything needs to be in a balance, but in a balance that puts weight on the worst case behaviour
CrushedAsian255
Jyrki Alakuijala you'd need to have your own quantization matrices to get any performance out of that -- finding good quantization matrices is time consuming as everything needs to be in a balance, but in a balance that puts weight on the worst case behaviour
2024-08-21 08:52:25
so basically it's a bad idea?
Jyrki Alakuijala
CrushedAsian255 so basically it's a bad idea?
2024-08-21 10:39:00
it just needs some work -- of course utility of the outcome is a bit unsure, too
CrushedAsian255
Jyrki Alakuijala it just needs some work -- of course utility of the outcome is a bit unsure, too
2024-08-21 11:15:36
i guess this is the benefit of having an extremely expressive bitstream
2024-08-22 08:26:22
what is `-e 11`?
jonnyawsom3
2024-08-22 08:58:07
Slow
CrushedAsian255
2024-08-22 09:00:37
and did literally nothing compared to `-e10` for me
jonnyawsom3
CrushedAsian255 and did literally nothing compared to `-e10` for me
2024-08-22 09:04:31
Only -e 10 and -e 11 or did you have other parameters too?
CrushedAsian255
2024-08-22 09:06:24
`-d 0 -e 11 -I 100 -g 3 -E 4`
2024-08-22 09:06:32
but im not running it again just to test
2024-08-22 09:06:48
it took 25 minutes on my M3 Max with a 1024 x 1024 image
Oleksii Matiash
CrushedAsian255 and did literally nothing compared to `-e10` for me
2024-08-22 09:12:15
-e11 saves some bytes from time to time over -e10, but the time required for images larger than a few KB makes it an instrument of curiosity
CrushedAsian255
Oleksii Matiash -e11 saves some bytes from time to time over -e10, but the time required for images larger than a few KB makes it an instrument of curiosity
2024-08-22 09:12:45
Is it basically my idea of "try every possible valid bitstream"?
jonnyawsom3
2024-08-22 09:14:14
I'm also not sure if using `-I 100 -g 3 -E 4` might disable some of its searches
Oleksii Matiash -e11 saves some bytes from time to time over -e10, but the time required for images larger than a few KB makes it an instrument of curiosity
2024-08-22 09:14:46
I generally stick to 128x128 or less when trying it
CrushedAsian255
2024-08-22 09:14:47
Isn't that what you're meant to add for better compression?
jonnyawsom3
2024-08-22 09:15:19
Normally yeah, but when it's trying to test everything, forcing it into specifics means it might not try the 'correct' combo
2024-08-22 09:17:59
I've had cases where `-g 2` was best, or `-I 74` in one instance
DNFrozen
2024-08-22 09:31:15
hello 👋
2024-08-22 10:05:11
I'm trying to create my own android app (using java) and I want it to be able to read and display images stored on my phone. I tried to use this java implementation of JPEG XL here https://www.reddit.com/r/jpegxl/comments/zb1hvn/standalone_jpeg_xl_decoder_written_in_pure_java/ but something about the jar file causes the build process to fail, even though it works fine in an eclipse project on windows. Is there anyone here who knows how to integrate libjxl in a java android studio project, or how to resolve my jxlatte.jar gradle build issue?
Quackdoc
2024-08-22 10:09:10
I would probably make a ticket on the git repo for that myself, but seeing as you are here, <@853026420792360980> is the dev for it, and would be the person to ask
jonnyawsom3
2024-08-22 10:09:35
<#1042307875898408960> may have some info too
Kampidh
2024-08-22 11:37:22
Got another round of simple color testing https://saklistudio.com/miniblog/modern-image-format-linear-color-analysis/ (along with some comparisons)
oupson
DNFrozen I'm trying to create my own android app (using java) and I want it to be able to read and display images stored on my phone. I tried to use this java implementation of JPEG XL here https://www.reddit.com/r/jpegxl/comments/zb1hvn/standalone_jpeg_xl_decoder_written_in_pure_java/ but something about the jar file causes the build process to fail, even though it works fine in an eclipse project on windows. Is there anyone here who knows how to integrate libjxl in a java android studio project, or how to resolve my jxlatte.jar gradle build issue?
2024-08-22 11:39:09
There are libraries that exist for that if you want, such as https://github.com/awxkee/jxl-coder or https://github.com/oupson/jxlviewer
DNFrozen
oupson There are libraries that exist for that if you want, such as https://github.com/awxkee/jxl-coder or https://github.com/oupson/jxlviewer
2024-08-22 12:25:20
thank you I'll take a look at them.
Traneptora
DNFrozen I'm trying to create my own android app (using java) and I want it to be able to read and display images stored on my phone. I tried to use this java implementation of JPEG XL here https://www.reddit.com/r/jpegxl/comments/zb1hvn/standalone_jpeg_xl_decoder_written_in_pure_java/ but something about the jar file causes the build process to fail, even though it works fine in an eclipse project on windows. Is there anyone here who knows how to integrate libjxl in a java android studio project, or how to resolve my jxlatte.jar gradle build issue?
2024-08-22 12:45:11
"Causes the build process to fail" what do you mean?
DNFrozen
2024-08-22 12:52:14
gradle is crashing with some internal exceptions when i add the jar. but i just tested it with a fresh empty project and there it doesn't crash. so it seems like whatever is happening was caused by something i did. I'm now trying to figure out why this is happening
2024-08-22 12:53:04
i should have tested it with an empty project before asking here. sorry about that
2024-08-22 02:05:49
<@853026420792360980> after updating some plugins, invalidating the cache, multiple restarts of android studio and removing/adding it, it is now suddenly working 🎉 but now I'm not sure how to get the image information out of the JXLImage object, because java.awt that contains the BufferedImage class is not available on android, so i can not use asBufferedImage(). is there another way?
monad
CrushedAsian255 what is `-e 11`?
2024-08-22 02:17:33
It tries 432 setting combinations:
```
-d 0 -e 10 -P !,0 -E 4 -I 100 -g !,0,3 --modular_palette_colors !,0,70000 -X !,0 -Y !,0 --patches !,0 --wp_tree_mode !,2
-d 0 -e 10 -P !,0 -E 4 -I 0 -g !,0,3 --modular_palette_colors !,0,70000 -X !,0 -Y !,0 --patches !,0
```
Read `-g !,0,3` as trying each of `-g -1`, `-g 0`, and `-g 3`. Notice tree mode is not actually exposed to the CLI, so it is possible e11 achieves something otherwise inaccessible. e10E4g3I100 is very effective in general, and e11 will often pick this combination or equivalent, in which case nothing is gained. In e11E4g3I100, those specific settings are overwritten when traversing all combinations above.
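A hypothetical scaled-down version of that sweep, scripted by hand with only the flags already mentioned in this thread; it keeps whichever combination produces the smallest file. Filenames are examples:
```sh
best=
for g in -1 0 3; do
  for I in 0 100; do
    out="try-g${g}-I${I}.jxl"
    cjxl in.png "$out" -d 0 -e 10 -E 4 -g "$g" -I "$I" --quiet
    size=$(wc -c < "$out")
    if [ -z "$best" ] || [ "$size" -lt "$best" ]; then
      best=$size
      cp "$out" best.jxl    # keep the smallest result so far
    fi
  done
done
```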
Traneptora
DNFrozen <@853026420792360980> after updating some plugins, invalidating the cache, multiple restarts of android studio and removing/adding it, it is now suddenly working 🎉 but now I'm not sure how to get the image information out of the JXLImage object, because java.awt that contains the BufferedImage class is not available on android, so i can not use asBufferedImage(). is there another way?
2024-08-22 02:34:53
If java.awt isn't present at all, then you can't. I didn't realize some java distributions were missing that core package, so I'll need to do some refactoring
2024-08-22 02:36:21
I use java.awt.Dimension in a few places too so those classes will fail to load
2024-08-22 02:37:44
My work schedule dies down a bit after this week so I'm going to start a few major codebase refactorings based on performance soon
Managor
2024-08-22 03:01:31
Unexpected. JXL increased the filesize of some jpegs
2024-08-22 03:01:59
Quackdoc
2024-08-22 03:04:26
how were they encoded? just with `cjxl in.jpg out.jxl`?
Oleksii Matiash
Traneptora If java.awt isn't present at all, then you can't. I didn't realize some java distributions were missing that core package, so I'll need to do some refactoring
2024-08-22 03:20:29
Only java.awt.font is present in Android, as I can see. Have never used it though
Traneptora
Managor Unexpected. JXL increased the filesize of some jpegs
2024-08-22 03:22:17
This should basically never happen with lossless jpeg transcoding
HCrikki
2024-08-22 03:22:38
use the jpg->jxl reversible transcode. full conversions to ANY format normally result in either huge lossless output or a worse image that doesn't even guarantee a lower filesize unless you sacrifice even more quality
2024-08-22 03:23:23
xl converter added lossless transcode for jxl when jpgs are the source, but it's disabled by default for no reason. with it enabled, conversions are hyper quick
Traneptora
2024-08-22 03:23:35
note that if you used ffmpeg or imagemagick or something similar, you wouldn't be doing a lossless transcode
2024-08-22 03:23:50
it would be decoding to pixels and encoding those pixels
2024-08-22 03:24:01
which isn't what you want
Managor
Quackdoc how were they encoded? just with `cjxl in.jpg out.jxl`?
2024-08-22 03:24:01
`magick`. Is it wrong?
Traneptora
2024-08-22 03:24:12
imagemagick won't do that
2024-08-22 03:24:15
use cjxl
2024-08-22 03:24:35
imagemagick is doing a decode to pixels and then encoding that pixel buffer
2024-08-22 03:24:54
you can check with magick compare that the images are different
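A minimal sketch of both paths, assuming an ImageMagick build with JXL support; the filenames are examples, and `-metric AE` simply counts differing pixels:
```sh
cjxl photo.jpg photo.jxl       # lossless JPEG transcode: bitstream-reversible
magick photo.jpg magick.jxl    # decode to pixels, re-encode: not reversible

# AE (absolute error) = number of pixels that differ from the original
magick compare -metric AE photo.jpg magick.jxl null:
```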
DNFrozen
2024-08-22 03:37:53
i find it incredible that JPEG XL is able to losslessly recompress jpg images and make them 20% smaller
2024-08-22 03:38:52
that alone makes it quite useful for backups
Managor
2024-08-22 03:56:52
Alright there. Much better
2024-08-22 04:06:27
Every file shrunk in size
2024-08-22 04:06:41
Now to wait for browsers to support it
2024-08-22 04:10:58
> 2 years ago
> jpegxl is only compiled in on nightly. the pref has no effect on beta or release.
Why firefox?
DNFrozen
2024-08-22 04:14:44
maybe google put pressure on them
Quackdoc
DNFrozen maybe google put pressure on them
2024-08-22 04:42:49
I doubt it
HCrikki
2024-08-22 05:00:47
mozilla's releases of firefox are the issue. some rebuilds and derivatives like Floorp and Waterfox support jxl out of the box, and even better than firefox nightly, using patches that were handed out to mozilla for free
DNFrozen
2024-08-22 05:11:48
I've never even considered using alternate builds of firefox 🤔
2024-08-22 05:12:11
are there other good reasons to try them out?
HCrikki
2024-08-22 05:17:09
specific optimisations or different defaults, extra patches. it's not forks or derivatives by any definition, more what we once called 3rd party builds or unofficial compiles
2024-08-22 05:17:59
openmandriva's firefox supports jxl, fedora has a copr that swaps regular ff with the jxl enabled build
Quackdoc
2024-08-22 05:25:37
waterfox seems a bit faster than firefox does on my systems
DNFrozen
2024-08-22 05:37:13
threadripper would be quite nice right now 🙃
HCrikki
2024-08-22 05:52:36
what parameters and source format are you using?
DNFrozen
2024-08-22 05:53:31
all jpg images to jxl with lossless recompression
HCrikki
2024-08-22 05:55:50
odd, my ghetto ryzen crushes conversions. are you using effort 6-7? in regard to jpg->jxl conversions, you already significantly lower the filesize, and a few fewer kilobytes might not be worth the trouble of drastically longer jobs with values higher than 6
DNFrozen
2024-08-22 05:57:40
I'm only using 3 cores for the conversion and it doesn't really matter how long it takes
HCrikki
2024-08-22 06:01:26
just to make sure, it's reconstructible? lossless jxl smaller than the source jpg
DNFrozen
2024-08-22 06:02:49
it should be 🤷‍♂️ when I'm done converting to jxl I'll reconstruct them all and compare the file sizes to my originals
HCrikki
2024-08-22 06:06:41
afaik it's not possible if a full decode+conversion was done. better check the images that already completed
DNFrozen
2024-08-22 06:11:10
well i set it to lossless recompression so i should be able to restore the original files
HCrikki
2024-08-22 06:16:10
what effort? anything higher than 7 multiplies the total duration for really negligible gain. asking since it's a huge conversion
DNFrozen
2024-08-22 06:16:37
9 but what does effort even do?
LMP88959
2024-08-22 06:19:19
uses more tools and more exhaustive checks to get more compression at the expense of encoding time
HCrikki
2024-08-22 06:19:23
try harder to shrink the filesize or hit the target filesize. with lossless, files get even smaller; with lossy, images look better at any given filesize OR files are smaller for the same quality
2024-08-22 06:20:19
with lossless recon even effort 3 should be smaller than the jpg, yet like 50x quicker than e9
DNFrozen
2024-08-22 06:21:12
well my main goal is to make it as small as possible for long term backups
A homosapien
DNFrozen 9 but what does effort even do?
2024-08-22 06:42:56
For the lossless mode of jxl, the higher the effort the more time the encoder spends trying to losslessly optimize the image to take up less file space. In my experience, jpeg transcoding higher than effort 7 (the default) is kind of a waste of time. You only see files shrink by about 0.5-1% more and it takes 10-20 times as long to compress
2024-08-22 06:44:25
If you're happy with waiting a literal week to get 1% more compression I won't stop you
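An easy way to see that trade-off on one of your own files before committing to a huge batch; the filename and effort values are just examples:
```sh
for e in 3 7 9; do
  time cjxl photo.jpg "photo-e${e}.jxl" -e "$e" --quiet
  wc -c "photo-e${e}.jxl"    # compare sizes across efforts
done
```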
DNFrozen
2024-08-22 06:47:03
I've thrown jpg files at paq8 compression programs for far smaller gains just to see what the limit is
gb82
A homosapien If you're happy with waiting a literal week to get 1% more compression I won't stop you
2024-08-22 07:09:25
<:kekw:808717074305122316>
DNFrozen
2024-08-23 07:41:05
my CPU consumes 30W at idle, but if i limit it to 3.3GHz it also consumes just 30W up to about 30% utilization. so from that point of view, i have a certain amount of computing power that doesn't cost extra and is just wasted if not used. and there are certain situations where you just have a certain amount of space available and can't go above it. for example, you can't upgrade your internal phone storage, and even if your phone has a microsd slot, 2TB will always be the hard limit of what you can add because of the SDXC standard that current phones support (well, if they have an sdcard slot at all)
HCrikki
2024-08-23 08:08:38
out of curiosity, what kind of images are you converting? photographs, colored/greyscale comics?
DZgas Ж
2024-08-23 08:08:58
<:ReeCat:806087208678588437>
2024-08-23 08:10:30
sometimes av1 preset 0 with crf 63 seems insufficient
CrushedAsian255
2024-08-23 08:11:05
is there any way to get an image comparison slider that works with jxl and runs offline?
2024-08-23 08:11:11
like squoosh?
DZgas Ж
2024-08-23 08:12:21
but the experience with paq8px shows that time also matters. and decoding time... and the decoding time of 32k x 32k jpeg xl does not suit me.
2024-08-23 08:12:24
<:JXL:805850130203934781>
CrushedAsian255
DZgas Ж but the experience with paq8px shows that time also matters. and decoding time... and the decoding time of 32k x 32k jpeg xl does not suit me.
2024-08-23 08:12:54
shouldn't it do tiled decoding?
DZgas Ж
2024-08-23 08:14:17
it's still infinitely long and rip ram
DNFrozen
HCrikki out of curiosity, what kind of images are you converting? photographs, colored/greyscale comics?
2024-08-23 08:14:26
many anime/manga style images, all colored, but also every image that i have ever taken with my phones or that i have sent/received on whatsapp, scanned documents. everything i can find
2024-08-23 08:19:03
is jxl much better at compressing certain types of images?
Orum
2024-08-23 08:27:04
we talking about lossless?
DNFrozen
2024-08-23 08:28:27
all i do is lossless recompression so far but I'm also curious about the others
2024-08-23 08:30:08
in terms of percentage the best improvement I've seen is a 100% white image that went from 4864 bytes to 484 bytes
Orum
2024-08-23 08:30:33
this is lossless JPEG recompression?
DNFrozen
2024-08-23 08:30:41
yes
2024-08-23 08:31:32
1 color images are not that relevant but still interesting to see
Orum
2024-08-23 08:31:37
🤔 not really sure then, I haven't done enough of that, but the gains result from better entropy encoding
2024-08-23 08:32:50
I imagine low BPP benefits more from that (in terms of % saved), but like I said, I haven't tested it
DNFrozen
2024-08-23 08:33:24
BPP stands for?
Orum
2024-08-23 08:33:29
bits per pixel
jonnyawsom3
2024-08-23 08:44:20
Compressed metadata also helps
DNFrozen
2024-08-23 08:45:17
does lossless recompression also preserve all the image metadata?
Orum
2024-08-23 08:45:30
it should
HCrikki
2024-08-23 08:45:39
xl converter defaults to wiping metadata unless you choose to preserve it iinm
Orum
2024-08-23 08:46:04
wait, it does?
HCrikki
2024-08-23 08:46:21
metadata 'encoder - wipe'
DNFrozen
2024-08-23 08:46:42
found it
Orum
2024-08-23 08:47:08
oh, XL converter is a front end; I was thinking about cjxl 😅
DNFrozen
2024-08-23 08:47:32
well, not a bad thing for me, I've never needed any of this
HCrikki
2024-08-23 08:49:00
how do you plan to consume the jxls afterwards? wondering what viewers or server software people use
DNFrozen
2024-08-23 08:50:29
mmh if i use them for anything other than backups it will probably be 100% personal use where i have 100% control over the software
2024-08-23 08:52:08
it wouldn't make sense to share an image with others when they probably can't use it without converting it back
HCrikki
2024-08-23 08:54:12
reconstructible jpg->jxl seems like it can be decoded as either format almost instantly
2024-08-23 08:55:13
anything under 12 megapixels blitzes here at like 40 milliseconds, almost realtime
DNFrozen
2024-08-23 08:55:48
if i want to view the jxl files on my PC right now, i have to open them in gimp; mspaint or my image viewer don't know what to do with the files
HCrikki
2024-08-23 08:55:53
we really need viewers that do this kind of decoding, would help adoption big time
jonnyawsom3
2024-08-23 08:56:33
This kind of decoding?
HCrikki
2024-08-23 08:56:46
xnviewmp handles jxl fine. some comic viewers too, as long as the right dll is added
DNFrozen
2024-08-23 08:57:45
mmh this progressive decoding feature of jxl means that you can already display a blurry version even if the file is not 100% loaded, correct?
HCrikki
2024-08-23 08:58:26
for reconstructible jpgs, i was under the impression the goal was not only for it to cover conversions with permanent storage of the output, but realtime conversions by web services, cdns and image viewers so any browser or gallery app that lacks jxl still decodes the jpg-encoded jxls just fine
username
DNFrozen mmh this progressive decoding feature of jxl means that you can already display a blurry version even if the file is not 100% loaded, correct?
2024-08-23 08:59:10
more so downloaded rather than loaded, as in cases of local viewing it's probably better to just load the whole image rather than load in steps
HCrikki
DNFrozen mmh this progressive decoding feature of jxl means that you can already display a blurry version even if the file is not 100% loaded, correct?
2024-08-23 08:59:20
jxl is always progressive. you can make it *extra* progressive. however some software may not load images progressively
username
HCrikki jxl is always progressive. you can make it *extra* progressive. however some software may not load images progressively
2024-08-23 08:59:51
well, lossy and jpg transcodes are, but not lossless images
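For reference, a hypothetical pair of encode invocations; cjxl's `-p`/`--progressive` flag requests the "extra" progressive encoding mentioned above, which applies to lossy/VarDCT images. Filenames are examples:
```sh
cjxl input.png lossy-progressive.jxl -d 1 -p   # lossy, extra progressive passes
cjxl input.png lossless.jxl -d 0               # lossless: not progressive
```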
DNFrozen
2024-08-23 09:02:52
so it probably doesn't make sense to do progressive loading in an android app that reads from an sdcard. i wonder what the code for that would even look like. for fully loaded images i call whatever function decodes the image and just use the return value to make a bitmap out of it. how would partial loading work?
username
2024-08-23 09:05:35
I mean if the data/storage access is slow and you are able to work with partial data as it's streamed in from said slow storage then progressive decoding would probably make sense
2024-08-23 09:06:20
as for how to do that on android, uh, I don't have the required know-how for that
DNFrozen
2024-08-23 09:09:26
i think what's slowing image loading down most is the individual file accesses, not the actual reading process. if I'm right about that, then packing like 20 images together in a zip file and just keeping that in memory would probably boost performance much more than any partial loading
HCrikki
2024-08-23 09:12:17
progressive load makes even more sense on slow media but i think the long accepted norm is to have images completely decode or serve their integrated thumbnail before decoding the next image
Orum
2024-08-23 09:14:04
it's weird how cjxl does significantly better than cwebp on some pixel art images, and way, way worse on others
HCrikki
2024-08-23 09:14:18
like on windows with color management, projects have stuck with old default choices and not revisited them post-jxl (assuming devs are even aware of what's possible now)
DNFrozen
2024-08-23 09:17:18
I'm annoyed that i need to convert images to .ico if i want to use them as windows folder icons
2024-08-23 09:18:00
there is no other use case for these ico files that i know of
username
DNFrozen there is no other use case for these ico files that i know of
2024-08-23 09:19:39
iirc they are also used for website favicons
DNFrozen
2024-08-23 09:21:55
one thing that i personally don't like about webp and jxl is that there are so many types of internal encodings combined under one name
2024-08-23 09:22:36
if i see a .jpg file i know it's lossy compressed, and when i see a .png file i know it's lossless
2024-08-23 09:24:20
and .gif is always animated. but with jxl, webp and other container formats i have no idea what I'm dealing with