JPEG XL

jxl

Anything JPEG XL related

Traneptora
2024-10-09 11:05:53
no
2024-10-09 11:05:58
it always produces sRGB pngs
2024-10-09 11:06:19
well I should say
2024-10-09 11:06:23
it's always sRGB or PQ
CrushedAsian255
2024-10-09 11:06:24
```
../java/com/traneptora/jxlatte/JXLImage.java:64: warning: [this-escape] possible 'this' escape before subclass is fully initialized
        int colors = getColorChannelCount();
        ^
../java/com/traneptora/jxlatte/frame/group/LFGroup.java:31: warning: [this-escape] possible 'this' escape before subclass is fully initialized
        this.lfCoeff = new LFCoefficients(reader, this, parent, lfBuffer);
        ^
../java/com/traneptora/jxlatte/frame/modular/MATree.java:54: warning: [this-escape] possible 'this' escape before subclass is fully initialized
        nodes.add(node);
        ^
../java/com/traneptora/jxlatte/frame/modular/ModularChannel.java:78: warning: [this-escape] possible 'this' escape before subclass is fully initialized
        allocate();
        ^
```
Traneptora
2024-10-09 11:06:34
yea, that's a stupid warning
CrushedAsian255
2024-10-09 11:06:45
does it still build?
Traneptora
2024-10-09 11:06:49
it's a warning
2024-10-09 11:07:10
if you call a local method that can be overridden in a constructor, it whines at you
CrushedAsian255
2024-10-09 11:07:14
Traneptora
2024-10-09 11:07:29
fwiw that's a newer warning added in later versions of java
2024-10-09 11:07:47
it warns you about a possible issue in theory because of hypothetical future subclasses
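For context, this is the shape of code the [this-escape] lint complains about: a constructor calling an overridable method, so a hypothetical subclass's override can run against uninitialized state. A minimal, made-up sketch (not jxlatte code):

```java
import java.util.ArrayList;
import java.util.List;

class Base {
    Base() {
        init(); // 'this' escapes: a subclass override runs before the
                // subclass's own field initializers have executed
    }
    protected void init() {}
}

class Derived extends Base {
    private final List<String> log = new ArrayList<>();

    @Override
    protected void init() {
        // NullPointerException: Base's constructor runs before the
        // 'log = new ArrayList<>()' initializer above
        log.add("init");
    }

    public static void main(String[] args) {
        new Derived();
    }
}
```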
CrushedAsian255
2024-10-09 11:07:57
still has the issue
Traneptora
2024-10-09 11:08:04
what issue
2024-10-09 11:08:09
did you pull from latest git
CrushedAsian255
2024-10-09 11:08:11
nevermind im an idiot
2024-10-09 11:08:21
i forgot to copy the new build to the testing dir
Traneptora
lonjil Should it produce PNGs that are extremely similar to what djxl produces without special decoding options? So I can just run a script on a ton of stuff and automatically compare.
2024-10-09 11:08:50
it always produces sRGB or PQ depending on some heuristics of whether or not the image is HDR or not
2024-10-09 11:09:08
you can force one or the other with `--png-hdr=yes` or `--png-hdr=no`
CrushedAsian255
Traneptora it always produces sRGB or PQ depending on some heuristics of whether or not the image is HDR or not
2024-10-09 11:09:17
this is the one thing i don't really like about jpeg xl, it's hard to tell whether the image is hdr or sdr
Traneptora
2024-10-09 11:09:33
if you use djxl to decode a JXL tagged as gamma45 it'll decode it to a gamma45 png
2024-10-09 11:09:56
whereas jxlatte ignores the tag and decodes XYB to sRGB anyway
2024-10-09 11:10:19
so if you try to compare SDR jxlatte decodes to SDR djxl decodes you may need to do some color management first
CrushedAsian255
2024-10-09 11:10:38
also jxlatte seems to output really big PNGs
Traneptora
2024-10-09 11:10:47
for HDR? yea
2024-10-09 11:10:59
or just in general?
2024-10-09 11:11:12
I did that cause java's built-in deflate algorithm is really slow
2024-10-09 11:11:16
so I set the compression level way down
2024-10-09 11:11:27
you can override it with `--png-compression` I think I called it
CrushedAsian255
2024-10-09 11:12:35
it doesn't seem to be doing any compression whatsoever
2024-10-09 11:12:48
2024-10-09 11:13:39
uncompressed data size: 4032*3024*3 = 36.58 MB
jxlatte output size: 36.59 MB
Traneptora
2024-10-09 11:14:24
ye looks like default compression is 0
2024-10-09 11:14:26
I should probably change that
CrushedAsian255
2024-10-09 11:14:55
is 0 just store?
Traneptora
2024-10-09 11:14:59
ye
2024-10-09 11:15:12
it's still valid zlib but it doesn't run any lz77
2024-10-09 11:15:43
try running `--png-compression=1` and see if that affects anything
2024-10-09 11:16:12
I just changed the default, I agree "store" is not a good one
2024-10-09 11:16:22
may be slower because of disk io
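To make "0 is just store" concrete, a standalone sketch using Java's built-in `Deflater` (a demo, not jxlatte's actual PNG writer): level 0 emits valid zlib made of stored blocks, so output comes out slightly larger than the input, while level 1 already runs LZ77.

```java
import java.util.zip.Deflater;

public class DeflateLevels {
    // Deflate 'data' at the given level and return the compressed size.
    static int deflatedSize(byte[] data, int level) {
        Deflater d = new Deflater(level);
        d.setInput(data);
        d.finish();
        byte[] out = new byte[data.length * 2 + 64]; // room for stored blocks
        int n = d.deflate(out);
        d.end();
        return n;
    }

    public static void main(String[] args) {
        byte[] data = new byte[1 << 20]; // 1 MiB of zeros, trivially compressible
        // level 0 prints a bit MORE than 1048576 (stored-block headers),
        // level 1 prints a few KB
        System.out.println("level 0: " + deflatedSize(data, Deflater.NO_COMPRESSION));
        System.out.println("level 1: " + deflatedSize(data, Deflater.BEST_SPEED));
    }
}
```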
CrushedAsian255
2024-10-09 11:17:05
PNG compression 0: 11.64s, 36.58 MB
PNG compression 1: 12.33s, 18.53 MB
2024-10-09 11:17:38
djxl: 4.88s 12.47 MB
Traneptora
2024-10-09 11:17:50
wew that's a massive file if it takes djxl 5s
CrushedAsian255
2024-10-09 11:18:09
2024-10-09 11:18:17
4032x3024 jpeg recompression
2024-10-09 11:18:52
djxl seems to not be able to multithread, `103% cpu`
Traneptora
2024-10-09 11:19:13
why is it so slow on your computer?
CrushedAsian255
2024-10-09 11:19:29
it's not the jxl, it's the png step it seems
Traneptora
2024-10-09 11:19:31
```
leo@gauss ~/Downloads :) $ time jxlatte a350.jxl
Decoded to pixels, discarding output.

real    0m1.506s
user    0m2.791s
sys     0m1.046s
```
2024-10-09 11:19:38
it's 1.5s to decode the jxl
CrushedAsian255
2024-10-09 11:19:55
```
time jxlatte a350.jxl
Decoded to pixels, discarding output.
jxlatte a350.jxl  2.03s user 0.59s system 210% cpu 1.242 total
```
2024-10-09 11:20:01
yeah, png is really slow it seems
Traneptora
2024-10-09 11:20:14
hm
CrushedAsian255
2024-10-09 11:20:18
```
time djxl a350.jxl --disable_output
JPEG XL decoder v0.11.0 0.11.0 [NEON]
Decoded to pixels.
4032 x 3024, 335.911 MP/s [335.91, 335.91], , 1 reps, 14 threads.
djxl a350.jxl --disable_output  0.23s user 0.12s system 671% cpu 0.053 total
```
Traneptora
2024-10-09 11:20:30
interesting
2024-10-09 11:21:01
it's already sRGB too so no color management issues either
CrushedAsian255
2024-10-09 11:22:25
geez,
jxl -> jxl (d0e1): 8 MB, 0.91s cpu, 0.351s real
jxl -> png: 12.5 MB, 4.87s cpu, 4.767s real
png is really slow
2024-10-09 11:22:31
(this is using djxl / cjxl)
Traneptora
2024-10-09 11:22:50
ye, for some reason this particular file really hates png encoders
CrushedAsian255
2024-10-09 11:23:45
maybe it needs work with optimising the images
Traneptora
2024-10-09 11:24:02
hm?
2024-10-09 11:24:07
some files are just hard to encode
2024-10-09 11:24:09
it do be like that
2024-10-09 11:24:17
if fpnge is taking 4s then it's gonna be one of those situations
CrushedAsian255
2024-10-09 11:24:27
fpnge = ?
Traneptora
2024-10-09 11:24:36
fast png encoder
jonnyawsom3
Traneptora ``` git: 'gud' is not a git command. See 'git --help'. ```
2024-10-09 11:24:42
`pip3 install git-gud`
Traneptora
2024-10-09 11:24:48
it's a super fast png encoder that veluca wrote
2024-10-09 11:25:06
I think it's currently included as one of the libjxl extras codecs for png encoding
lonjil
2024-10-09 11:25:33
I don't think djxl uses fpnge yet
CrushedAsian255
`pip3 install git-gud`
2024-10-09 11:26:13
this is cool
lonjil
2024-10-09 11:26:28
It's currently amd64 only so they haven't switched to it yet
CrushedAsian255
2024-10-09 11:26:45
why do png encoders not like my a350 shot?
jonnyawsom3
Traneptora why is it so slow on your computer?
2024-10-09 11:29:48
I made a PR that lowered the djxl PNG compression from 6 to 1 to avoid skewed test results from people outputting to file, but it was merged the day after 0.11 released
CrushedAsian255
2024-10-09 11:30:55
i just output to PPM or use disable output if i am testing djxl performance
jonnyawsom3
2024-10-09 11:30:56
So there's a window where djxl versions reporting as 0.11 are significantly faster than others
CrushedAsian255
2024-10-09 11:31:14
`djxl a350.jxl --disable_output 0.24s user 0.12s system 620% cpu 0.057 total`
2024-10-09 11:31:32
so yeah it's just png being png lol
jonnyawsom3
CrushedAsian255 djxl: 4.88s 12.47 MB
2024-10-09 11:34:26
```
wintime -- djxl a350.jxl a350.png
JPEG XL decoder v0.11.0 0185fcd [AVX2,SSE2]
Decoded to pixels.
4032 x 3024, 167.737 MP/s [167.74, 167.74], , 1 reps, 16 threads.
PageFaultCount: 40173
PeakWorkingSetSize: 103.2 MiB
QuotaPeakPagedPoolUsage: 33.86 KiB
QuotaPeakNonPagedPoolUsage: 7.828 KiB
PeakPagefileUsage: 106.6 MiB
Creation time 2024/10/09 12:32:53.017
Exit time 2024/10/09 12:32:54.198
Wall time:   0 days, 00:00:01.180 (1.18 seconds)
User time:   0 days, 00:00:00.046 (0.05 seconds)
Kernel time: 0 days, 00:00:01.531 (1.53 seconds)
```
13.6 MB output PNG, which is why I changed it
2024-10-09 11:34:35
https://github.com/libjxl/libjxl/pull/3819
CrushedAsian255
2024-10-09 11:35:42
i wonder why jxlatte takes 8s saving the PNG when it's using compression level 0?
2024-10-09 11:36:28
is there an easy way to profile Java applications
jonnyawsom3
2024-10-09 11:36:34
If you want a djxl that's just 0.11 but with that PR https://github.com/libjxl/libjxl/actions/runs/10847095854
CrushedAsian255
2024-10-09 11:37:32
also could jxlatte speed be faster if it was written as one of those java native method things?
spider-mario
2024-10-09 11:44:34
I think it would kind of go counter to the point
CrushedAsian255
2024-10-09 11:51:25
fair
Traneptora
CrushedAsian255 also could jxlatte speed be faster if it was written as one of those java native method things?
2024-10-09 11:53:21
If I was going to do that I would have just written it in C
CrushedAsian255 i wonder why jxlatte takes 8s saving the PNG when it's using compression level 0?
2024-10-09 11:54:17
I don't know, it's somewhat strange
2024-10-09 11:54:24
what if you write to pfm?
CrushedAsian255
2024-10-09 11:54:28
Is there a Java native binding for libjxl?
Traneptora
2024-10-09 11:54:33
yes
CrushedAsian255
Traneptora what if you write to pfm?
2024-10-09 11:54:34
Sorry can’t test right now
Traneptora
2024-10-09 11:54:40
no worries
2024-10-09 11:55:05
I suspect the java deflater, even with 0, is doing something odd
jonnyawsom3
Traneptora what if you write to pfm?
2024-10-09 12:09:01
```
wintime -- java -jar "C:\Program Files\JPEG XL\jxlatte.jar" Graph.jxl
Decoded to pixels, discarding output.
PageFaultCount: 243678
PeakWorkingSetSize: 904.7 MiB
QuotaPeakPagedPoolUsage: 446.9 KiB
QuotaPeakNonPagedPoolUsage: 22.29 KiB
PeakPagefileUsage: 975.6 MiB
Creation time 2024/10/09 13:07:15.747
Exit time 2024/10/09 13:07:17.989
Wall time:   0 days, 00:00:02.241 (2.24 seconds)
User time:   0 days, 00:00:01.328 (1.33 seconds)
Kernel time: 0 days, 00:00:02.671 (2.67 seconds)

C:\Users\jonat\Downloads>wintime -- java -jar "C:\Program Files\JPEG XL\jxlatte.jar" Graph.jxl Test.png
Decoded to pixels, writing PNG output.
PageFaultCount: 282138
PeakWorkingSetSize: 1.019 GiB
QuotaPeakPagedPoolUsage: 446.9 KiB
QuotaPeakNonPagedPoolUsage: 22.29 KiB
PeakPagefileUsage: 2.199 GiB
Creation time 2024/10/09 13:07:24.003
Exit time 2024/10/09 13:07:30.927
Wall time:   0 days, 00:00:06.924 (6.92 seconds)
User time:   0 days, 00:00:01.781 (1.78 seconds)
Kernel time: 0 days, 00:00:07.562 (7.56 seconds)

C:\Users\jonat\Downloads>wintime -- java -jar "C:\Program Files\JPEG XL\jxlatte.jar" Graph.jxl Test.pfm
Decoded to pixels, writing PFM output.
PageFaultCount: 237450
PeakWorkingSetSize: 883.5 MiB
QuotaPeakPagedPoolUsage: 446.9 KiB
QuotaPeakNonPagedPoolUsage: 22.29 KiB
PeakPagefileUsage: 956.5 MiB
Creation time 2024/10/09 13:07:34.027
Exit time 2024/10/09 13:07:37.134
Wall time:   0 days, 00:00:03.106 (3.11 seconds)
User time:   0 days, 00:00:01.859 (1.86 seconds)
Kernel time: 0 days, 00:00:03.515 (3.52 seconds)
```
2024-10-09 12:09:38
So yeah, something strange is happening with PNG, odd that PFM resulted in slightly less memory usage too
Traneptora
So yeah, something strange is happening with PNG, odd that PFM resulted in slightly less memory usage too
2024-10-09 12:14:07
not that weird
2024-10-09 12:14:20
png requires me to cast the float buffer to int
2024-10-09 12:14:28
which allocates a new buffer
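Roughly the conversion being described, as a sketch (helper name and layout assumed, not jxlatte's actual code): PNG wants integer samples, so each float sample gets clamped and quantized into a freshly allocated buffer, an allocation the float-native PFM path skips.

```java
// Hypothetical helper: quantize a [0, 1] float framebuffer to 8-bit
// PNG samples. The new int[] is the extra allocation mentioned above.
static int[] toPng8(float[] samples) {
    int[] out = new int[samples.length];
    for (int i = 0; i < samples.length; i++) {
        float v = Math.min(1.0f, Math.max(0.0f, samples[i])); // clamp
        out[i] = Math.round(v * 255.0f);                      // quantize
    }
    return out;
}
```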
jonnyawsom3
2024-10-09 12:15:11
I meant compared to discarding output
Traneptora
2024-10-09 12:15:39
oh, that's probably just JVM stuffs
2024-10-09 12:15:46
JVM garbage collects when it wants to
jonnyawsom3
2024-10-09 12:15:57
fair
CrushedAsian255
Traneptora JVM garbage collects when it wants to
2024-10-09 12:16:20
Is there a way to manually manage memory in Java?
Traneptora
2024-10-09 12:17:12
there is but if you are going to do that you ideally aren't using Java at all
2024-10-09 12:17:18
it's also a hack
2024-10-09 12:17:33
best you can do realistically is suggest to the system to run gc by doing `System.gc();`
Demiurge
2024-10-09 01:25:35
You can disable garbage collection completely, but then you will run out of memory if you allocate too much.
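Both points in one sketch (demo class is hypothetical): `System.gc()` is only a suggestion, and the "disable GC completely" route is the no-op Epsilon collector from JEP 318, which lets the process die once the heap is exhausted.

```java
public class GcDemo {
    public static void main(String[] args) {
        for (int i = 0; i < 64; i++) {
            byte[] junk = new byte[1 << 20]; // churn ~64 MiB of garbage
        }
        // Only a hint: the JVM may collect now, later, or never.
        System.gc();
        // To "disable" GC entirely, run with the no-op collector:
        //   java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC GcDemo
        // Allocation then proceeds until the heap runs out and the JVM
        // exits with OutOfMemoryError, as noted above.
    }
}
```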
Dejay
2024-10-09 01:59:20
Can jxl "transcode" pngs so they can be restored bitexact? Is that theoretically possible?
2024-10-09 02:24:57
Thanks for the link, seems like it adds a not-insignificant amount of size to the converted jxl. But this would be interesting for "archiving" web pages, when you want to be able to "prove" that the archived version still represents the original website and images
2024-10-09 02:25:11
Not sure if this is really useful
CrushedAsian255
2024-10-09 02:27:23
The version of libjxl shouldn’t affect it, as preflate only wants the uncompressed bytes, not the compressed JXL, so as long as the JXL decodes the same it should work all the same
2024-10-09 02:28:08
This preflate thing sounds very interesting I might try to make a bit exact ZIP recompress using SquashFS LZMA2/XZ and preflate
2024-10-09 02:29:32
Maybe the PNG reconstruction data could be stored in some box (“pbrd”?) instead of being a sidecar file
2024-10-09 02:29:45
I should look further into this as this seems quite interesting
2024-10-09 02:29:52
Libjxl is libjxl though so might have to libjxl
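The box idea is plausible in principle: a JXL container is a sequence of ISOBMFF-style boxes (big-endian 4-byte size, 4-byte type, payload). A hedged sketch of serializing such a box; "pbrd" is the speculative type from the message above, not a registered one.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Build one ISOBMFF-style box: [u32 size][4-char type][payload].
static byte[] box(String fourcc, byte[] payload) throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(buf);
    out.writeInt(8 + payload.length); // size field includes this 8-byte header
    out.writeBytes(fourcc);           // e.g. the hypothetical "pbrd"
    out.write(payload);               // PNG reconstruction data would go here
    return buf.toByteArray();
}
```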
Dejay
CrushedAsian255 This preflate thing sounds very interesting I might try to make a bit exact ZIP recompress using SquashFS LZMA2/XZ and preflate
2024-10-09 02:35:05
I recently discovered [precomp](https://github.com/schnaader/precomp-cpp) which does this for zip, jpg, png and pdf! (huh, not sure what the difference is between this and preflate)
CrushedAsian255
Dejay I recently discovered [precomp](https://github.com/schnaader/precomp-cpp) which does this for zip, jpg, png and pdf! (huh, not sure what the difference is between this and preflate)
2024-10-09 02:38:21
It uses preflate (check the bottom)
2024-10-09 02:38:31
It seems like a decent front end though
2024-10-09 02:38:43
Preflate only deals with raw streams I think
AccessViolation_
2024-10-09 07:00:15
Can lossless images be decoded to lower resolutions, using less resources? Like 1/16, 1/8 etc
2024-10-09 07:02:27
I have a 480 megapixel lossless JXL image and sent it to someone, they said their file manager had a preview for it even though `jxlinfo` tells me the file doesn't have a preview frame, so either it decoded the whole thing, scaled it down and cached it, or decoded the image to a fraction of the resolution? Because decoding it takes quite a bit of time
2024-10-09 07:04:12
I did find something about TOC in the spec draft which seems to be this, but I was under the impression that only lossy images supported any form of progressive decoding, so I'm not sure if it also applies to lossless
lonjil
2024-10-09 07:15:16
squeeze-encoded images can be decoded to lower resolutions
2024-10-09 07:15:29
squeeze isn't used by default for lossless though
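(For reference: the squeeze/responsive mode is what cjxl exposes as the modular `--responsive=1` option, and djxl can then decode a partial resolution via `--downsampling`; treat those flag names as assumptions and confirm against `--help` on your version.)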
AccessViolation_
2024-10-09 07:17:05
Ah, so it's not a guarantee that any image can always be decoded to some lower resolution
2024-10-09 07:18:38
Interestingly I got this file from an 80 MB jxl -> `djxl` to 80 MB png -> `cjxl -d 0` to 50 MB jxl, and that was the one I sent them
_wb_
2024-10-09 08:00:25
There's only a guarantee when you do lossy, for lossless it's non-progressive by default since progressive comes at a cost in compression density.
jonnyawsom3
2024-10-09 08:08:01
And quite a large one currently at that
AccessViolation_
2024-10-09 08:25:49
That would explain why I ran out of memory trying to re-encode it in responsive mode
spider-mario
AccessViolation_ Interestingly I got this file from an 80 MB jxl -> `djxl` to 80 MB png -> `cjxl -d 0` to 50 MB jxl, and that was the one I sent them
2024-10-09 09:27:10
unless something funky is going on with metadata and the like, it should be possible to just `cjxl -d 0` that initial jxl directly, without going through png in between
AccessViolation_
2024-10-09 09:32:38
Oh yeah I know. The reason I went to PNG was because I wanted to see the size difference. I got suspicious when I decoded the JXL to a PNG which was just as large, so at that point I wanted to encode either of them back to JXL with effort 7. Out of habit from benchmarking, I went with PNG -> JXL. Without that context it does seem arbitrary to create a PNG in between, I'm not sure why I even mentioned it, it wasn't relevant
2024-10-09 09:34:12
TL;DR I tend to over explain things, exhibit A: see above
jonnyawsom3
2024-10-09 10:05:32
Going to PNG can help in the case of palette or a redundant Alpha channel, since you can run a quick optimization pass to clean it up ready for cjxl
Demiurge
Dejay Can jxl "transcode" pngs so they can be restored bitexact? Is that theoretically possible?
2024-10-09 10:45:54
no, and bit-exact restoration is kinda silly in the first place... as long as it's pixel-exact, that really should be all that matters... Personally I think bit-exact restoration of JPEG is silly too, as long as the "important" parts are lossless...
_wb_
2024-10-09 11:21:18
I agree that the details of the huffman coding etc don't matter, but preserving metadata, including possibly exotic application-specific APP markers, can be also important. And since it doesn't cost much to go full bit-exact, why not — if only to take any doubt away that something may have been lost.
2024-10-09 11:23:12
In particular it would be harder to convince people that it's really lossless even though djxl decodes it to different pixels than libjpeg-turbo, if we couldn't also reconstruct the bit-exact file.
Demiurge
2024-10-09 11:26:14
Fair enough, and of course metadata is important to exactly preserve too in my opinion
Dejay
Demiurge no, and bit-exact restoration is kinda silly in the first place... as long as it's pixel-exact, that really should be all that matters... Personally I think bit-exact restoration of JPEG is silly too, as long as the "important" parts are lossless...
2024-10-09 11:29:55
One case I was thinking of was caching websites, so you can "prove" that the restored version is bitexact and has the same hash. But I'm not sure how signing works exactly or if that would really be that useful. You could also just hash the "payload" the actual image data etc
2024-10-09 11:32:13
For something like federalist or <[gemini protocol](https://www.reddit.com/r/geminiprotocol/comments/1fzvyu0/distributed_p2p_dht_gemini_protocol/)> (and possibly ebooks)
VcSaJen
Dejay One case I was thinking of was caching websites, so you can "prove" that the restored version is bitexact and has the same hash. But I'm not sure how signing works exactly or if that would really be that useful. You could also just hash the "payload" the actual image data etc
2024-10-10 01:01:22
I remember that Chrome folks wanted to implement seamless JPEG compression when transferred via http, using jpeg xl, similar to gzip compression.
Demiurge
2024-10-10 02:16:51
Wow really? Where did you read about that
2024-10-10 02:17:46
`Accept-Encoding: jpegxl`
VcSaJen
2024-10-10 02:23:24
https://chromestatus.com/feature/5678152091172864 https://issues.chromium.org/issues/40141863
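(Mechanically, the proposal works like gzip transfer encoding: the client advertises support in `Accept-Encoding`, the server recompresses the JPEG losslessly to JXL in transit and labels the response `Content-Encoding: jpegxl`, and the client reconstructs the original JPEG bytes on receipt. Header naming here follows the message above and the linked proposal.)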
CrushedAsian255
2024-10-10 04:01:59
just wondering, does this server have a ban appeals section?
Meow
CrushedAsian255 just wondering does this server have a ban appeals ssection?
2024-10-10 04:42:07
When members get too annoying, <@794205442175402004> would ban that person
Jyrki Alakuijala
2024-10-10 06:40:18
feel free to upvote the hacker news item: https://news.ycombinator.com/item?id=41785820
AccessViolation_
Demiurge no, and bit-exact restoration is kinda silly in the first place... as long as it's pixel-exact, that really should be all that matters... Personally I think bit-exact restoration of JPEG is silly too, as long as the "important" parts are lossless...
2024-10-10 07:29:19
It might have some archival value. For example the Internet Archive could ingest original JPEGs and transcode them to JXL, giving people the ability to retrieve the true original if they want to for whatever reason
CrushedAsian255
2024-10-10 07:32:07
or even by default just convert back to jpeg
AccessViolation_
2024-10-10 07:32:13
When JXL is sufficiently accessible, they could transcode their entire JPEG library, not destroying any information while saving significantly
CrushedAsian255
2024-10-10 07:32:22
as there are probably a lot of images on there that people rarely access
2024-10-10 07:32:54
so the slight slowdown in accessing the file is probably worth it for the storage gains
AccessViolation_
2024-10-10 07:33:24
Is transcoding them back to JPEG fast and cheap enough to do just-in-time with some caching?
CrushedAsian255
2024-10-10 07:33:35
i personally do think there is decent merit for a PNG to JPEG XL lossless converter
AccessViolation_ Is transcoding them back to JPEG fast and cheap enough to do just-in-time with some caching?
2024-10-10 07:34:20
Yes, that's the point
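A sketch of that just-in-time idea (hypothetical helper, not anything the Archive actually runs): keep only the .jxl, reconstruct the original JPEG with djxl on first access, and cache the result. djxl emits a bit-exact .jpg when the .jxl carries JPEG reconstruction data, which cjxl keeps by default for JPEG input (`--lossless_jpeg=1`).

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Reconstruct (or fetch a cached) original JPEG for a stored JXL.
static Path jpegFor(Path jxl, Path cacheDir) throws IOException, InterruptedException {
    Path jpg = cacheDir.resolve(jxl.getFileName() + ".jpg");
    if (!Files.exists(jpg)) {
        // djxl writes a bit-exact JPEG when reconstruction data is present
        Process p = new ProcessBuilder("djxl", jxl.toString(), jpg.toString())
                .inheritIO()
                .start();
        if (p.waitFor() != 0) throw new IOException("djxl failed");
    }
    return jpg;
}
```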
AccessViolation_
2024-10-10 07:34:56
Pretty neat
Demiurge
2024-10-10 07:44:15
Yes
jonnyawsom3
AccessViolation_ When JXL is sufficiently accessible, they could transcode their entire JPEG library, not destroying any information while saving significantly
2024-10-10 11:19:20
Well, not their *entire* library, but 99% of it at least
Inner Hollow
2024-10-10 10:22:20
I'm not a photographer and don't work with any images at all, but I tried to convert a super large resolution photo to JXL via IrfanView (with plugin) and it literally froze my entire computer, had to restart it..
damian101
Inner Hollow I'm not a photographer or work with any images at all, but I tried to convert a super larger resolution photo to JXL via IrfanView(with plugin) and it literally froze my entire computer, had to restart it..
2024-10-10 10:22:58
you most likely ran out of RAM
2024-10-10 10:23:24
your swap file probably got hammered with write requests, which caused your system to freeze
Inner Hollow
you most likely ran out of RAM
2024-10-10 10:23:41
Yeah, I set it to Speed Effort 9 (max) and it even glitched out my spotify running in the background
2024-10-10 10:23:44
<:kekw:808717074305122316>
A homosapien
2024-10-10 10:25:32
How large is the image you're trying to convert? How much RAM do you have?
Inner Hollow
A homosapien How large is the image you're trying to convert? How much RAM do you have?
2024-10-10 10:28:44
16gb Ram, and the image is 14879x7420. I convert images to jxl just for fun, im always amazed by the capability of the compression while preserving the quality..
2024-10-10 10:34:45
Okay I was looking at the task manager during a conversion: effort 5 peaked at 3.5gb RAM usage from IrfanView, effort 7 was over 11gb RAM usage
2024-10-10 10:34:49
Lol
CrushedAsian255
2024-10-10 10:35:01
`what distance?`
Inner Hollow
2024-10-10 10:35:39
I dont actually know much about JXL I don't know what you mean <:CatSmile:805382488293244929>
CrushedAsian255
2024-10-10 10:38:04
what quality setting
AccessViolation_
2024-10-10 10:38:14
Distance is like a quality target, effort is how hard it's going to try
lonjil
Inner Hollow Okay I was looking at the task manager during a conversion, 5 effort peak RAM 3.5gb usage from IrfanView, 7 effort was over 11gb RAM usage
2024-10-10 10:38:33
I guess IrfanView doesn't use streaming encoding, wow! 😄
Inner Hollow
lonjil I guess IrfanView doesn't use streaming encoding, wow! 😄
2024-10-10 10:39:05
Jpeg xl isn't on IrfanView by default, you have to download a plug-in, so I don't think it's very optimized
lonjil
2024-10-10 10:39:22
ahh
Inner Hollow
CrushedAsian255 what quality setting
2024-10-10 10:40:56
There's only 2 options for saving .jxl images in IrfanView, it's "Save quality" 1-100 (100 is Lossless) and "Speed Effort" 1-9 (9 is highest effort), I used Save Quality 80 and Speed Effort 9 (that's when it crashed about 30 seconds later)
2024-10-10 10:41:32
Oh there is also a progressive JXL option but the plug-in doesn't support progressive loading when viewing images, so I never use that
2024-10-10 10:42:42
I now know that i crashed because I ran out of RAM
2024-10-10 10:42:45
👍
HCrikki
2024-10-10 10:48:13
effort 7 is ideal for almost all uses and normally shouldn't be changed except for specific uses or resolutions, since it's not adaptive (small res/icons? can increase effort. large res? lower effort). it's distance that controls the actual target visual quality
AccessViolation_
Inner Hollow 16gb Ram, and the image is 14879x7420 . I convert images to jxl just for fun, im always amazed by the capability of the compression while preserving the quality..
2024-10-10 10:49:41
> I convert images to jxl just for fun, im always amazed by the capability of the compression while preserving the quality

This is what I've spent some time doing as well, it's oddly fun
Inner Hollow
HCrikki effort 7 is ideal for almost all uses and normally shouldnt be changed unless for specific uses or resolutions since its not adaptive (small res/icons? can increase effort. large res? lower effort). its distance that controls the actual target visual quality
2024-10-10 10:50:18
yeah, 7 is the default setting, but i just like to play with it i guess
2024-10-10 10:50:33
sometimes i try low effort like 3 and sometimes high effort, like 9
2024-10-10 10:56:48
also, completely unrelated question, and i dont need answers if there is no definitive answer; so over the years, image compression has gotten better and better, like every other technology, but there has to be some sort of limit where the information is stored in the most optimal way, no? Is the rate of compression improvement slowing down or speeding up? (i.e. are we getting better at compressing every year consistently, or is progress slowing down?)
jonnyawsom3
2024-10-10 11:24:35
IrfanView is on a very old libjxl version, doesn't do JPEG transcoding and naturally doesn't do chunked encoding as a result. So encoding any large image, especially JPEGs, will use a huge amount of memory and time compared to cjxl
CrushedAsian255
2024-10-10 11:34:08
i love cjxl
2024-10-11 12:09:41
china really likes jpeg xl apparently
2024-10-11 12:09:56
Dejay
2024-10-11 02:24:50
I'd love it if cjxl could downscale images. As is, you need to use cjxl for transcoding and something else to reduce resolution. I've been using VIPS but it's a bit weird to use as a command line tool
Meow
CrushedAsian255
2024-10-11 05:55:49
Unfortunately
_wb_
2024-10-11 06:25:12
2024-10-11 06:26:00
Looks like most people searching for avif are mostly interested in how to convert it to another image format 🙂
yoochan
2024-10-11 06:38:15
Doesn't China account for almost one fifth of the world population? It may bias this kind of country-based stats.
CrushedAsian255
_wb_ Looks like most people searching for avif are mostly interested in how to convert it to another image format 🙂
2024-10-11 06:39:24
Oh dear lol
2024-10-11 06:48:12
apparently Indians really like FLIF
Dejay
2024-10-11 07:24:33
So you're saying improving the jxl lossless encoder would improve interest from India? 😉
CrushedAsian255
2024-10-11 07:25:07
that's 1/5 of the population, lets get on it
VcSaJen
yoochan Don't china account for almost one fifth of the world population? It may bias this kind of country based stats.
2024-10-11 07:33:49
China does not use Google
Smegas
IrfanView is on a very old libjxl version, doesn't do JPEG transcoding and naturally doesn't do chunked encoding as a result. So encoding any large image, especially JPEGs, will use a huge amount of memory and time compared to cjxl
2024-10-11 08:25:31
XnView MP uses the newer 10.3 version. Use that software 🙂
Orum
2024-10-11 08:46:35
`cjxl` gives me bad banding, even at `-d 1` <:SadOrange:806131742636507177>
2024-10-11 08:46:57
I suppose compared to jpeg and webp it's great, but that's not exactly the best benchmark
CrushedAsian255
2024-10-11 08:48:40
can you share test images
Orum
2024-10-11 08:48:43
maybe I can mask some of it with some noise...
CrushedAsian255
2024-10-11 08:48:44
that's a pretty big problem
Orum
2024-10-11 08:48:49
1 min
CrushedAsian255
2024-10-11 08:48:51
it should NOT be banding at `-d 1`
Orum
2024-10-11 08:50:52
this is the lossless image
2024-10-11 08:51:15
banding is quite noticeable on the right of the image even at `-d 1`
username
Orum maybe I can mask some of it with some noise...
2024-10-11 08:51:19
noise inserted on the input image or noise with `--photon_noise_iso`?
Orum
2024-10-11 08:51:30
was going to try photon noise, even though I don't particularly like it (compared to AVIF's grain synth)
CrushedAsian255
2024-10-11 08:52:13
i can't see any banding but it could just be my bad eyesight
Orum
2024-10-11 08:52:34
you reencoded it at `-d 1`, right?
CrushedAsian255
2024-10-11 08:52:44
`cjxl Downloads/small_lossless.jxl test-d1.jxl`
2024-10-11 08:52:49
`-d 1` is the default
Orum
2024-10-11 08:52:52
ah okay
CrushedAsian255
2024-10-11 08:53:01
i got this
embed
CrushedAsian255 i got this
2024-10-11 08:53:06
https://embed.moe/https://cdn.discordapp.com/attachments/794206170445119489/1294221206328840203/test-d1.jxl?ex=670a38ed&is=6708e76d&hm=9d971cb745d7665c385d3231931f805473e2bb01c9a8a38884192a06ceebe70d&
Orum
2024-10-11 08:53:16
you have a high contrast display, with good black levels?
CrushedAsian255
Orum you have a high contrast display, with good black levels?
2024-10-11 08:53:25
macbook pro 16 inch
Orum
2024-10-11 08:54:09
well I don't know that specific model, but macs tend to have good displays
2024-10-11 08:54:32
though not really sure about their laptops
username
2024-10-11 08:54:46
I can see the banding
CrushedAsian255
2024-10-11 08:54:50
i can't see the banding when opening it natively but it's quite obvious in affinity photo
2024-10-11 08:55:34
preview:
2024-10-11 08:55:43
affinity photo:
Tirr
2024-10-11 08:56:04
color management shenanigans?
Orum
2024-10-11 08:56:04
woah... I think affinity is doing something wrong there
username
2024-10-11 08:56:29
yeah affinity photo looks like it's messing up
CrushedAsian255
2024-10-11 08:56:36
affinity loads it fine when lossless
2024-10-11 08:56:49
but when loading the encoded version it seems to crap itself
Orum
2024-10-11 08:56:52
maybe an issue with VDCT then?
CrushedAsian255
2024-10-11 08:57:14
is affinity photo using its own jpeg xl decoder?
username
2024-10-11 08:57:57
the banding I see is subtle when opening the d1 test but is present
Orum
2024-10-11 08:58:22
yeah, it's not super-obvious like it is in jpg/webp, but it's *definitely* there
CrushedAsian255
2024-10-11 08:58:31
<@794205442175402004> this might be a serious problem
Dejay
2024-10-11 08:58:38
Well you can see stars softening or vanishing. Which I guess is hard to detect and encode. Maybe it needs a special detector to differentiate stars from noise
CrushedAsian255
2024-10-11 08:59:53
i think macOS preview is doing something weird, the banding is significantly worse when decoding using jxl-oxide to PNG compared to opening it in Preview
username
2024-10-11 09:01:47
give photon noise a try for encoding, I am curious if it does a good job of handling the banding
Tirr
2024-10-11 09:01:55
what's the output of `jxl-oxide -I`?
CrushedAsian255
2024-10-11 09:02:26
could be [this problem](https://discord.com/channels/794206087879852103/794206170445119489/1289440474188349462) again
Tirr what's the output of `jxl-oxide -I`?
2024-10-11 09:02:50
my encode:
```
JPEG XL image (BareCodestream)
Image dimension: 3008x2008
Bit depth: 8 bits
XYB encoded, suggested display color encoding:
  Colorspace: RGB
  White point: D65
  Primaries: sRGB
  Transfer function: sRGB
Frame #0 (keyframe)
  VarDCT (lossy)
  Frame type: Regular
  3008x2008; (0, 0)
```
AccessViolation_
2024-10-11 09:02:54
On the embed.moe one you can put `/png/` and `/jxl/` before the url to swap between them. The png and jxl look identical to me in Firefox Nightly but in Waterfox the JXL is brighter and has more banding
Orum
username give photon noise a try for encoding, I am curious if it does a good job of handling the banding
2024-10-11 09:03:08
it definitely helps
CrushedAsian255
2024-10-11 09:03:09
the original:
```
JPEG XL image (BareCodestream)
Image dimension: 3008x2008
Bit depth: 8 bits
Color encoding:
  Colorspace: RGB
  White point: D65
  Primaries: sRGB
  Transfer function: sRGB
Frame #0 (keyframe)
  Modular (maybe lossless)
  Frame type: Regular
  3008x2008; (0, 0)
```
AccessViolation_ On the embed.moe one you can put `/png/` and `/jxl/` before the url to swap between them. The png and jxl look identical to me in Firefox Nightly but in Waterfox the JXL is brighter and has more banding
2024-10-11 09:03:22
def. sounds like colour management shenanigans
2024-10-11 09:03:41
macOS finder vs djxl vs jxl-oxide vs Affinity Photo all look slightly different to me with slightly different severity of banding
Quackdoc
2024-10-11 09:03:54
none of the images seem uber messed up on my display [av1_cheems](https://cdn.discordapp.com/emojis/720670067091570719.webp?size=48&quality=lossless&name=av1_cheems) time to get a new one I guess
Tirr
CrushedAsian255 my encode: ``` JPEG XL image (BareCodestream) Image dimension: 3008x2008 Bit depth: 8 bits XYB encoded, suggested display color encoding: Colorspace: RGB White point: D65 Primaries: sRGB Transfer function: sRGB Frame #0 (keyframe) VarDCT (lossy) Frame type: Regular 3008x2008; (0, 0) ```
2024-10-11 09:04:45
could you try decoding with `-t png16`?
CrushedAsian255
2024-10-11 09:05:02
in JXL-oxide?
AccessViolation_
2024-10-11 09:05:05
All this time I thought images getting brighter was just a thing JXLs did. I specifically switched from FF Nightly to Waterfox because I thought it had a better JXL stack
Tirr
2024-10-11 09:05:06
yep
CrushedAsian255
Tirr could you try decoding with `-t png16`?
2024-10-11 09:05:41
`unexpected argument '-t' found`
Tirr
2024-10-11 09:05:45
maybe it was `-f`, consult help message
Quackdoc
2024-10-11 09:08:05
yeah I don't see any banding issues on qimgv, and I am running 8bit+frc at least on my display, is this just an application issue?
CrushedAsian255
2024-10-11 09:08:13
`-f png16` seems to make it better
2024-10-11 09:08:24
Affinity photo
2024-10-11 09:08:31
djxl
2024-10-11 09:08:42
finder
2024-10-11 09:08:48
jxl-oxide png8
username
2024-10-11 09:09:00
huh, when testing between old and new Firefox/Gecko with the unmerged JXL patches, new Firefox/Gecko ends up without the banding
CrushedAsian255
2024-10-11 09:09:06
can't post jxl-oxide png16 because discord file limit
Quackdoc
2024-10-11 09:09:28
https://catbox.moe/
Orum
2024-10-11 09:09:40
well affinity is *definitely* doing something wrong
Tirr
2024-10-11 09:09:58
yeah affinity is def wrong
username
2024-10-11 09:10:25
lets ignore affinity for now because that's a whole other problem almost
Quackdoc
2024-10-11 09:10:31
is affinity rendering in 8bit output?
CrushedAsian255
2024-10-11 09:10:32
yeah
2024-10-11 09:11:27
the original: <https://files.catbox.moe/hgfnqp.jxl>
the jpeg xl at d1: <https://files.catbox.moe/xpujus.jxl>
Finder: <https://files.catbox.moe/hswi4c.png>
jxl-oxide png8: <https://files.catbox.moe/gto2lw.png>
jxl-oxide png16: <https://files.catbox.moe/0mkbq1.png>
Affinity (not relevant): <https://files.catbox.moe/xemxgk.png>
djxl: <https://files.catbox.moe/q6um5w.png>
Quackdoc
2024-10-11 09:12:04
I can see *some* banding but honestly, it's within the range I would expect from d1
Orum
2024-10-11 09:12:27
there's a tiny bit in the source image, but it's definitely worse in the d 1 encode
Quackdoc
2024-10-11 09:13:01
well yeah, I wouldn't expect it not to be unless you use like 0.5 or 0.25
Orum
2024-10-11 09:13:27
I guess my expectations of `-d 1` are just too high <:SadOrange:806131742636507177>
username
AccessViolation_ All this time I thought images getting brighter was just a thing JXLs did. I specifically switched from FF Nightly to Waterfox because I thought it had a better JXL stack
2024-10-11 09:13:42
try against either the Waterfox 6.5.0 beta or [r3dfox](https://eclipse.cx/projects/r3dfox) or [Floorp](https://floorp.app/) (EDIT: maybe I should have replied to this message instead https://discord.com/channels/794206087879852103/794206170445119489/1294223693756502058)
Quackdoc
2024-10-11 09:13:48
I always do 0.5 if I want it to be something i would "export"
CrushedAsian255
2024-10-11 09:14:01
here are the files in 7z in case the catbox links expire
Quackdoc
2024-10-11 09:14:24
you prolly could just use `split` [av1_dogelol](https://cdn.discordapp.com/emojis/867794291652558888.webp?size=48&quality=lossless&name=av1_dogelol)
CrushedAsian255
2024-10-11 09:14:37
what split
Orum
2024-10-11 09:14:55
does OSX have split?
Quackdoc
2024-10-11 09:14:57
coreutils tool now that I think about it lol
Tirr
2024-10-11 09:15:11
I think this is the same issue with https://discord.com/channels/794206087879852103/794206170445119489/1289440474188349462
Quackdoc
Orum does OSX have split?
2024-10-11 09:15:14
I install coreutils on everything so sometimes I forget lol
Orum
2024-10-11 09:15:23
heh, I know the feeling
CrushedAsian255
2024-10-11 09:15:28
it does
2024-10-11 09:15:29
just checked
2024-10-11 09:15:48
aight gtg cya
Orum
Tirr I think this is the same issue with https://discord.com/channels/794206087879852103/794206170445119489/1289440474188349462
2024-10-11 09:16:37
yeah but that was cjpegli, not cjxl... though I agree, it's probably something I can combat by raising the intensity_target
Quackdoc
2024-10-11 09:17:09
intensity_target can fuck with the decode so prolly shouldn't use that lel
Orum
2024-10-11 09:17:11
have to offset by also raising the distance though or file size explodes
Quackdoc
2024-10-11 09:18:17
well with libjxl it's probably fine since it does no implicit tonemapping, it would fuck with jxl-oxide most likely
Orum
2024-10-11 09:18:34
good, another thing that can be fixed
Tirr
2024-10-11 09:19:07
yeah I remember I wrote some janky logic within the color management pipeline
Quackdoc
2024-10-11 09:19:35
before and after messing with intensity_target
2024-10-11 09:20:01
note that this is actually "proper" behavior from jxl-oxide since messing with intensity_target is actually changing how bright the pixels are supposed to be, it's not just an encode offset
Tirr
2024-10-11 09:20:12
I guess... it will tonemap on SDR contents
Quackdoc
2024-10-11 09:20:18
[av1_yep](https://cdn.discordapp.com/emojis/721359241113370664.webp?size=48&quality=lossless&name=av1_yep)
Orum
2024-10-11 09:20:24
so libjxl does it incorrectly?
Quackdoc
2024-10-11 09:20:33
uh, kinda? libjxl just does no tonemapping
2024-10-11 09:20:49
neither are really right or wrong, rather I would just consider jxl-oxide's behavior the better default
Orum
2024-10-11 09:20:51
I mean, where is the tonemapping *supposed* to occur?
Quackdoc
2024-10-11 09:21:07
that is entirely dependent on the app,
Tirr
2024-10-11 09:21:15
I'm not very sure about the exact tonemapping conditions
Quackdoc
2024-10-11 09:21:18
jxl-oxide makes you explicitly request it
Orum
2024-10-11 09:21:41
ah, so you can tell oxide not to do any tonemapping when decoding?
Tirr
2024-10-11 09:22:02
maybe it should keep tonemapping on SDR curves, maybe not
2024-10-11 09:23:24
jxl-oxide does something like "if target intensity > 255, tonemap to 255 nits"
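Paraphrasing that heuristic as a sketch (names made up; jxl-oxide itself is Rust, this just restates the logic):

```java
// Tone map only when SDR output is requested and the image declares a
// peak brightness above the nominal SDR level quoted above.
static boolean shouldToneMap(float intensityTargetNits, boolean sdrRequested) {
    final float SDR_PEAK_NITS = 255.0f;
    return sdrRequested && intensityTargetNits > SDR_PEAK_NITS;
}
```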
Quackdoc
2024-10-11 09:23:58
I don't think jxl-oxide can currently tell it to not tonemap
2024-10-11 09:24:14
well if you decode to linear maybe
Tirr
2024-10-11 09:24:49
yeah it will tonemap if target intensity condition matches and SDR is requested
Orum
2024-10-11 09:24:56
man, I just want a way to bias the encoder's bit allocation to darker areas *without* screwing up decode...
Tirr
2024-10-11 09:25:06
maybe it can do better
jonnyawsom3
Orum was going to try photon noise, even though I don't particularly like it (compared to AVIF's grain synth)
2024-10-11 09:25:40
I just woke up, and this may be stupid, but are you adding the photon noise to lossless encodes or lossy?
Quackdoc
2024-10-11 09:25:44
it should probably only tonemap if the user explicitly asks for sRGB
AccessViolation_
2024-10-11 09:25:52
Since this is an sRGB image, why is there tone mapping or anything like that at all? I've only seen these issues in the context of rendering wide gamut or HDR images on sRGB usually
Orum
I just woke up, and this may be stupid, but are you adding the photon noise to lossless encodes or lossy?
2024-10-11 09:26:06
lossy... can you even add it to lossless? <:DogWhat:806133035786829875>
AccessViolation_ Since this is an sRGB image, why is there tone mapping or anything like that at all? I've only seen these issues in the context of rendering wide gamut or HDR images on sRGB usually
2024-10-11 09:26:32
it happens when you raise the intensity_target (in oxide, anyway)
Jyrki Alakuijala
Orum I guess my expectations of `-d 1` are just too high <:SadOrange:806131742636507177>
2024-10-11 09:27:19
about expectations on d1.0: we used to have higher quality at 1.0. Part of it was a 5% pure mistake on my side around 2018, but otherwise we went more into a holistic approach giving ok results, without the kind of guarantees we used to have. Something like distance 0.85 is what we used to mean by 1.0. Also, our focus used to be on high-effort compression (50x slower than e8) and it has fully shifted to effort 7, up to a point where e8 and e9 are not manually verified but are hope-based
Quackdoc
AccessViolation_ Since this is an sRGB image, why is there tone mapping or anything like that at all? I've only seen these issues in the context of rendering wide gamut or HDR images on sRGB usually
2024-10-11 09:27:49
because he is setting intensity_target
Tirr
2024-10-11 09:29:58
~~decode to PQ for better experience~~
Dejay
Jyrki Alakuijala about expectations on d1.0, we used to have higher quality at 1.0 -- one was a 5 % pure mistake on my side around 2018, but otherwise we went more into a holistically ok approach giving ok results but not such guarantees like we used to have. Something like distance 0.85 is what we used to mean with 1.0. Also, our focus used to be on high effort compression (50x slower than e8) and it fully shifted to effort 7, up to a point where e8 and e9 are not manually verified but are hope based
2024-10-11 09:30:54
Did the way butteraugli distance is calculated change?
jonnyawsom3
2024-10-11 09:31:28
I'll read through the 150 messages in a bit, but this looks like
1. The viewer is trying to display a higher bit depth than it supports
2. An old libjxl is outputting 8 bit before the dithering commit was merged
3. Incorrect color management
4. This https://github.com/libjxl/libjxl/pull/3880
Orum lossy... can you even add it to lossless? <:DogWhat:806133035786829875>
2024-10-11 09:32:17
You can, but the noise is in XYB space so it comes out blue and red
2024-10-11 09:32:50
Would be a nice niche otherwise, so you can disable it and get the original back later
Orum
2024-10-11 09:33:28
I would think doing anything at -d 0 would no longer make the image lossless, and though I can theoretically see some uses for it, it could confuse a lot of people
Jyrki Alakuijala
Dejay Did the way butteraugli distance is calculated change?
2024-10-11 09:37:17
yes, the 5% normalization error I made in butteraugli itself -- the rest is just how JPEG XL interprets the distance, plus my focus on effort 7 (instead of efforts 8 and 9, which can inherently give more guarantees because of butteraugli iterations)
2024-10-11 09:39:51
the 5% normalization error corresponds roughly to me initially doing the visual testing at 60 cm distance vs having done it at 63 cm -- or something like that. It is not the end of the world. But it changed what 1.0 is, and because of these I can sympathize with people who would like to get higher quality at 1.0
jonnyawsom3
Orum I would think doing anything at -d 0 would no longer make the image lossless, and though I can theoretically see some uses for it, it could confuse a lot of people
2024-10-11 09:40:04
I won't mention -m 1 -d 1 then
2024-10-11 09:41:25
Do you have the *source* source image by the way? Before the jxl
2024-10-11 09:42:19
Because this really seems like a high bit depth file to me that's somehow been mislabeled
Orum
2024-10-11 09:44:03
well I can export it as 16 or even 32 bit, but the "source" file is really a camera raw file + a bunch of processing parameters
Dejay
Jyrki Alakuijala the 5 % normalization error corresponds roughly to me initially doing the visual testing at 60 cm distance to having done it at 63 cm -- or something like that. It is not the end of the world. But it changed what 1.0 is and because of these I can symphatize with people who would like to get higher quality at 1.0
2024-10-11 09:45:12
Thanks for the info. And all your awesome work, I love this "fire and forget" d=1 feature!
AccessViolation_
2024-10-11 09:48:25
As someone who spent like an hour fucking with avifenc parameters, I agree
Dejay
2024-10-11 09:58:42
It would be cool to add like a second step. Like d=2 is where you can see the image change at normal zoom levels, but you don't see any loss of detail or compression artifacts. So like if you compare the two images you couldn't tell which one is better or the original, just that they are ever so slightly different
2024-10-11 09:58:59
Not sure if that makes any sense
CrushedAsian255
2024-10-11 10:07:26
like AVIF?
Jyrki Alakuijala
Dejay It would be cool to add like a second step. Like d=2 is where you can see the image change at normal zoom levels, but you don't see any loss of detail or compression artifacts. So like if you compare the two images you couldn't tell which one is better or the original, just that they are ever so slightly different
2024-10-11 10:25:03
we didn't want to focus much on this -- it is so difficult to say where the semantic information is:
is a motor block cracked, or is there a hair on the block?
is it ok to remove that line by blurring it away?
if it is a picture of a wall and a painting -- should we blur the wall more or not? perhaps it is about evaluating if the wall should be repainted, or it relates to insurance cover of the damage on the wall
if it is a picture of a bicycle for sale, blurring the wheels may make them look completely used to the end (I got a very cheap unused bike this way from an online auction, I know my image artefacts :-D)
it could relate to the authenticity of the object -- is that violin 300 years old or just 100 years? is that object made of synthetic gypsum, natural gypsum or marble? etc.
because of these, it might be better to separate the process of reducing detail from the compression itself
Dejay Thanks for the info. And all your awesome work, I love this "fire and forget" d=1 feature!
2024-10-11 10:27:21
Thank you! "Fire and forget" was exactly the idea -- people should not be bothered to think about these things any more, computers can take the responsibility for quality/density 🙂
Dejay
Jyrki Alakuijala we didn't want to focus much on this -- it is so difficult to say where the semantic information is is a motor block cracked or is there a hair on the block -- is it ok to remove that line by blurring it away if it is a picture of a wall and a painting -- should we blur the wall more or not, perhaps it is about evaluating if the wall should be repainted or it relates to insurance-cover of the damage on the wall if it is a picture of a bicycle for sale, blurring the wheels may make them look completely used to the end (I got a very cheap unused bike this way from an online auction, I know my image artefacts :-D) it could relate to the authenticity of the object -- is that violin 300 years old or just 100 years? is that object made of synthetic gypsum, natural gypsym or marble? etc. because of these, it might be better to separate the processes of reducing detail from the compression itself
2024-10-11 10:40:44
Ah I see, thanks. There are many images where d=2 leads to noticeable differences but basically no loss of detail. But I guess you'd need a lot of smart classifiers to detect the cases where it does lead to loss of detail. Maybe with advances in AI it could learn to write fast code to detect and tune for all those cases
Jyrki Alakuijala
2024-10-11 10:42:09
that sounds like a plausible plan -- perhaps one could just use humans to indicate where the artefacts/blurring/whatever is too bad and put some more bits there
Dejay
2024-10-11 10:44:36
Now I picture a scene from the invasion of the body snatchers haha
jonnyawsom3
Orum well I can export it as 16 or even 32 bit, but the "source" file is really a camera raw file + a bunch of processing parameters
2024-10-11 11:06:46
So from a camera raw, exported to 8 bit lossless JXL?
Orum
2024-10-11 11:10:41
no, exported to ppm, then fed that into cjxl
Demiurge
2024-10-12 11:20:05
The best thing would be to make a mode or new default behavior to have blue noise type artifacts instead of blur type
2024-10-12 11:21:18
Easier for humans to "see through" to the original image underneath, compared to destructive blurring
2024-10-12 11:26:59
Especially if it's really smooth, uniform, uncorrelated noise with no discernible low-frequency shapes and signals (like blocks)
VcSaJen
2024-10-12 04:21:05
I remember something about uploading a mask indicating level of compression for parts of images. I hope we would see it when jxltran becomes a thing
CrushedAsian255
2024-10-13 10:35:33
if only there was a way to losslessly compress h264 into some other format like jpeg->jxl
RaveSteel
2024-10-13 10:36:54
sadly video compression is highway to loss city in 99% of cases
CrushedAsian255
2024-10-13 10:37:14
yea
HCrikki
2024-10-13 11:57:23
containers typically handle this role in the video world, leaving the video stream generated with the codec *untouched* if no conversions occur - imo it's the best approach to lossless preservation of sources
CrushedAsian255
2024-10-13 11:59:27
i was thinking specifically to make the file smaller
2024-10-13 11:59:31
not just new containers
2024-10-13 11:59:54
like this https://github.com/dropbox/avrecode
HCrikki
2024-10-14 12:00:01
it's also way more efficient to retain the original videos and periodically regenerate new lossy copies using whatever modern codec is mainstream, instead of counting on minimizing generation loss
A homosapien
CrushedAsian255 like this https://github.com/dropbox/avrecode
2024-10-14 12:00:22
holy crap, jpegtran for h264 video?!
RaveSteel
HCrikki its also way more efficient to retain the original videos and periodically regenerate new lossy copies using whatever current modern codec is maintream, instead of counting on minimizing generation loss
2024-10-14 12:00:46
While this is true, recording losslessly or even at very high quality is not feasible for the normal user due to quickly ballooning filesizes
A homosapien
2024-10-14 12:00:57
or is it more like dropbox's lepton?
CrushedAsian255
2024-10-14 12:01:25
it's like jpeg reconstruction in jxl
HCrikki
2024-10-14 12:02:29
that reversibility sounds pointless if it cannot be undone on the fly to play in standard players
lonjil
2024-10-14 12:03:28
it's for reducing storage costs on their servers only
2024-10-14 12:03:42
usually you don't stream video in a player directly from dropbox
CrushedAsian255
2024-10-14 12:03:45
im saying i think it would be nice if there was a jpeg reconstruction but for h264
2024-10-14 12:03:48
instead of like lepton
2024-10-14 12:04:01
A homosapien
CrushedAsian255
2024-10-14 12:04:16
This is libel <:AngryCry:805396146322145301>
HCrikki
2024-10-14 12:05:56
dropbox needed to reduce its storage cost for the *original* videos. most services just generate and serve smaller/optimized versions of those
CrushedAsian255
2024-10-14 12:06:48
HCrikki dropbox needed reducing its storage cost for the *original* videos. most services just generate and serve smaller/optimized versions of those
2024-10-14 12:06:59
like how youtube does it?
HCrikki
2024-10-14 12:23:02
like youtube, cdns... it's simpler minimizing generation loss and adopting newer codecs/tuning this way
Oleksii Matiash
A homosapien holy crap, jpegtran for h264 video?!
2024-10-14 10:16:36
Some h.264 videos can be optimized by recompressing from CAVLC to CABAC, but there is no tool available to do this
CrushedAsian255
Oleksii Matiash Some h.264 videos can be optimized by recompressing from CAVLC to CABAC, but there is no tool available to do this
2024-10-14 10:35:04
Are videos generated by e.g. hw accel or cameras sometimes in cavlc?
Oleksii Matiash
CrushedAsian255 Are videos generated by eg. hw accel or cameras sometimes in cavlc?
2024-10-14 10:35:40
Sometimes. Mostly in cabac, though
CrushedAsian255
2024-10-14 10:36:39
If only codecs like h265/av1 had some kind of h264 recompression mode
2024-10-14 10:37:03
Even if it wasn’t perfect, some way to use the existing h264 data to compress to h265 and minimise gen loss
2024-10-14 10:38:23
Cause JPEG to JPEG XL is *really* nice; however, these days it's video that is using most of the storage space and could benefit most from some form of recompression
lonjil
2024-10-14 10:38:57
How much of the improvements in newer codecs is in the entropy coding and how much is in better intraframe detail deletion and interframe stuff?
CrushedAsian255
2024-10-14 10:39:58
Most is probably interframe prediction
2024-10-14 10:40:26
Entropy coding has just gone from Arithmetic to ANS I think
2024-10-14 10:40:31
And that’s mainly for the speed benefit
Oleksii Matiash
CrushedAsian255 Are videos generated by eg. hw accel or cameras sometimes in cavlc?
2024-10-14 11:28:18
Mostly it is relevant for stupid apps that don't bother with quality and do everything for speed. (x264 uses cavlc only in the fastest preset, ultrafast, afair)
CrushedAsian255
2024-10-14 01:04:46
I usually use very fast
2024-10-14 01:05:03
Good compromise for my use cases (on the fly transcoding on relatively powerful hardware)
Traneptora
2024-10-14 02:43:44
<@386612331288723469> why'd you change your nickname to Jia Tan
Meow
2024-10-14 03:38:09
The former god of xz
Quackdoc
2024-10-14 05:15:09
almost feel sorry for the man, hope one day he comes back with an email just saying "And I would have gotten away with it too, if it weren't for you meddling devs!"
CrushedAsian255
2024-10-14 07:55:56
Ah those stinking optimisers, have to be profilin’ everything!
Traneptora <@386612331288723469> why'd you change your nickname to Jia Tan
2024-10-14 07:59:40
Why not 🤷‍♂️
2024-10-14 07:59:50
Funnies
2024-10-15 12:06:52
2024-10-15 07:15:11
I got one of my friends to install a JPEG XL compatible image viewer! 🎉
Orum
2024-10-15 08:19:56
there are a lot, as anything that uses kimageformats gets <:JXL:805850130203934781> support for free
2024-10-15 08:25:56
I'm trying to think if I have an image viewer that *doesn't* support JXL (other than my browser, of course <:SadOrange:806131742636507177>)
VcSaJen
2024-10-15 08:31:31
What does use kimageformats by default? In distros I used it's a manual install.
Orum
2024-10-15 08:35:24
nomacs, for starters, but I think other stuff does as well... let me check
2024-10-15 08:37:41
kdenlive optionally uses it, but I don't think I've tested it there yet
2024-10-15 08:38:45
just look at the list on the lower right here: https://aur.archlinux.org/packages/kimageformats-git
2024-10-15 08:42:17
mcomix sadly doesn't use it, being GTK and all. I should find a replacement for it.
Demiurge
Orum I'm trying to think if I have an image viewer that *doesn't* support JXL (other than my browser, of course <:SadOrange:806131742636507177>)
2024-10-15 09:13:23
It's crazy how fast new codecs get universal support in image viewers but it takes FOREVER to get support from browsers or Windows Photos. (As long as the codec isn't made by the head of the Chrome Codec team, like avif and webp. Then of course it gets supported in Chrome first and Google uses their clout to force everyone to start using it or get their pagerank demoted.)
HCrikki
2024-10-15 12:03:33
browsers (chrome in particular) tried taking in-house a lot of the functionality that is actually the domain of operating systems
2024-10-15 12:05:23
an honestly managed chromium would've kept and updated jxl support even if downstream chrome had it disabled by default or not compiled in. google knew all other derivatives like edge would've enabled it by default since upkeep would be minimal and shared at upstream
2024-10-15 12:08:06
websites like google properties are free to serve whatever format they prefer, so it's ridiculous trying to force other sites into using unwanted formats - all it does is keep use of jpg and png perpetuated
jonnyawsom3
2024-10-15 12:25:44
In reference to the recent post in <#805722506517807104>:
> Quality setting of 90 preserves tons of detail compared to lossless PNG.

*Huh*
2024-10-15 12:26:21
I think they forgot their own sentence halfway through
CrushedAsian255
2024-10-15 12:26:45
Ok what
2024-10-15 12:26:54
Hang on lemme think about that
2024-10-15 12:27:11
Nope, the math ain’t mathing
Tirr
2024-10-15 12:27:21
more lossless than lossless
CrushedAsian255
2024-10-15 12:27:51
It’s lossless AND removes the noise!!
jonnyawsom3
2024-10-15 12:32:40
"The artifacts make it look so crisp"
CrushedAsian255
2024-10-15 12:33:02
“MP3s sound more warm”
RaveSteel
CrushedAsian255 “MP3s sound more warm”
2024-10-15 12:39:26
https://goodhertz.com/lossy/
2024-10-15 12:39:30
lol
VcSaJen
HCrikki browsers (chrome in particular) tried taking in-house a lot of the functionality that is actually the domain of operating systems
2024-10-16 04:13:06
I heard that MS Edge undoes that, at least for video. It relies on OS instead
CrushedAsian255
RaveSteel https://goodhertz.com/lossy/
2024-10-16 04:22:09
$79.00 for mp3 compression?
Traneptora
2024-10-16 04:58:50
that's kinda
2024-10-16 04:58:51
lame
2024-10-16 04:58:57
may be just a big shitpost tho
Demiurge
Traneptora may be just a big shitpost tho
2024-10-16 09:25:48
nope, they are sincere. they are selling a customizable plugin.
jonnyawsom3
2024-10-16 04:53:46
Unoptimized Greyscale images getting extra colors from lossy isn't ideal... https://github.com/libjxl/libjxl/issues/3896
2024-10-16 04:54:21
Would it be a huge performance hit to check if RGB values are the same?
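A minimal sketch of the check being asked about, assuming numpy and Pillow (the filename is a placeholder; none of this is libjxl API). It's a single vectorized pass over the pixels, which should be cheap next to the encode itself:
```python
# Detect whether an RGB image is really greyscale, i.e. R == G == B
# for every pixel. One vectorized pass over the whole image.
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("input.png").convert("RGB"))

is_gray = bool(
    (rgb[..., 0] == rgb[..., 1]).all() and (rgb[..., 1] == rgb[..., 2]).all()
)
print("grayscale" if is_gray else "color")
```
An encoder could run something like this up front and switch to a single-channel signal when it passes, which is effectively what the linked issue is asking for.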
Demiurge
2024-10-17 02:49:01
Why would you disable XYB for grayscale images?
2024-10-17 02:49:10
XYB was designed for grayscale images as well.
CrushedAsian255
2024-10-17 02:49:34
Can X and B just be Zeros?
Demiurge
2024-10-17 02:49:55
that's exactly how it works and there's no overhead
2024-10-17 02:50:45
it was designed with that in mind
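A quick numeric way to see this, as a sketch: the matrix coefficients below are the published JPEG XL opsin absorbance values and the bias is libjxl's constant, but treating the coded B channel as B - Y (the default Y-to-B correlation of 1) is an assumption about the encoder, so take that part with a grain of salt. X and B - Y come out as zero for any grey input regardless of the exact bias:
```python
# Check that a grey pixel (R == G == B) lands on X = 0 and B - Y = 0 in XYB.
import numpy as np

# Opsin absorbance matrix (linear RGB -> LMS) as published for JPEG XL.
M_OPSIN = np.array([
    [0.30,                0.622,               0.078],
    [0.23,                0.692,               0.078],
    [0.24342268924547819, 0.20476744424496821, 0.55180986650955360],
])
BIAS = 0.0037930732552754493  # libjxl's opsin absorbance bias

def xyb(rgb_linear):
    lms = M_OPSIN @ rgb_linear + BIAS
    lms = np.cbrt(lms) - np.cbrt(BIAS)
    l, m, s = lms
    x = (l - m) / 2          # red-green opponent channel
    y = (l + m) / 2          # luma-like channel
    return x, y, s - y       # B coded relative to Y (assumed correlation 1)

print(xyb(np.array([0.5, 0.5, 0.5])))  # -> X == 0.0, some Y, B - Y ~ 0.0
```
Each matrix row sums to 1, so a grey input gives L == M == S, which is why the two chroma values vanish with no extra work.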
CrushedAsian255
2024-10-17 02:52:04
Is VarDCT per channel?
Demiurge
2024-10-17 02:52:14
also --disable_perceptual sounds like an option that should never be used and has no legitimate use (other than cheating on automated comparisons)
CrushedAsian255
2024-10-17 02:52:27
And maybe debugging
Demiurge
2024-10-17 02:52:53
I dunno, it sounds like something that just adds bloat and complexity and has no legitimate reason to exist.
CrushedAsian255
2024-10-17 02:53:07
What does it even disable?
Demiurge
2024-10-17 02:53:40
no idea what it actually does but judging by the name it probably makes the output look worse on purpose.
2024-10-17 02:54:04
you might as well call it --ugly_mode
2024-10-17 02:54:28
actually that would be a better and more direct name for it
2024-10-17 02:55:06
So people don't accidentally get the idea that it's a useful setting
CrushedAsian255
2024-10-17 02:57:52
It’s probably like --tune ssim2 or something
Demiurge
2024-10-17 03:01:55
that's what it sounds like
2024-10-17 03:05:55
And the only reason that exists is to cheat on automated comparisons, which are worthless.
2024-10-17 03:06:42
Test results are extra worthless if you know the participants were cheating. 😂
jonnyawsom3
Demiurge also --disable_perceptual sounds like an option that should never be used and has no legitimate use (other than cheating on automated comparisons)
2024-10-17 04:00:04
It was done for non-visual uses like data storage and scientific uses, where machines are looking at the pixels instead of humans
Demiurge
2024-10-17 04:05:26
Well, the same perceptual optimizations that are good for human vision (like noise shaping) are also good at preserving discrete signals below the noise floor...
2024-10-17 04:09:36
There just aren't any good metrics that can weigh the significance of low-frequency signals being lost and smeared away by artifacts, compared to the significance of high-frequency noise being lost or altered/replaced with noise of the same spectrum.
CrushedAsian255
jonnyawsom3 It was done for non-visual uses like data storage and scientific uses, where machines are looking at the pixels instead of humans
2024-10-17 04:43:37
Maybe it should be renamed to --non-visual or something
2024-10-17 06:08:09
is JPEG XL ISO or IEC specification 18181?
2024-10-17 06:08:18
or is it both
_wb_
2024-10-17 06:19:13
It's both. Proper way to refer to it is ISO/IEC 18181
CrushedAsian255
_wb_ It's both. Proper way to refer to it is ISO/IEC 18181
2024-10-17 06:19:51
Do you guys get a cut if someone buys the specification?
lonjil
2024-10-17 06:39:39
No
_wb_
2024-10-17 06:54:51
lol no
2024-10-17 06:56:10
that money goes straight to maintaining bureaucrats and their fancy offices in Geneva
2024-10-17 06:57:11
I also have no way of knowing how many times that spec gets sold, I would assume it's a single digit number but I have no clue
CrushedAsian255
2024-10-17 07:11:20
i was thinking of buying the spec for the lolz
_wb_
2024-10-17 07:12:19
like a physical copy?
2024-10-17 07:14:08
I have been wondering if I should buy a physical copy of the thing to put in my bookshelf, but it's kind of expensive and I don't really feel like giving ISO any more money than I already do through my membership fee
2024-10-17 07:14:34
You'd think ISO could send a courtesy copy to the editors of their standards, but nope, they don't do that
VcSaJen
2024-10-17 07:19:58
would it be enough for a hardcover? edit: looks like it's 91 pages, so yes.
CrushedAsian255
2024-10-17 07:22:01
How much is the specification?
2024-10-17 07:22:07
It's like 200 Swiss francs or something?
_wb_
2024-10-17 07:44:35
CHF 216 for part 1, CHF 96 for part 2, CHF 63 for part 3, CHF 42 for part 4 (though parts 3 and 4 are really not worth getting through ISO since they're just a snapshot of the conformance repo and the libjxl repo)
CrushedAsian255
2024-10-17 08:04:18
you could probably work out part 2 just by looking in a hex editor for a while, although you need it for JPEG reconstruction
2024-10-17 08:05:31
We should all buy the spec and take selfies with it to put on the community website
2024-10-17 08:05:40
Or would that make us look like a cult
VcSaJen would it be enough for a hardcover? edit: looks like it's 91 pages, so yes.
2024-10-17 08:06:11
Is hardcover like a bound book or just a stack of paper
Orum
2024-10-17 08:28:11
it just means the cover of the book is hard 🤷‍♂️
Meow
2024-10-17 08:50:03
Experimenting if this works `<link rel="apple-touch-icon" href="bug.jxl">`
jonnyawsom3
Demiurge XYB was designed for grayscale images as well.
2024-10-17 02:40:26
In this case it gives the greyscale image 270 colors instead of staying at 256
Nova Aurora
_wb_ I have been wondering if I should buy a physical copy of the thing to put in my bookshelf, but it's kind of expensive and I don't really feel like giving ISO any more money than I already do through my membership fee
2024-10-17 04:21:47
Does cloudinary pay the fee or do you?
_wb_
2024-10-17 04:22:19
Cloudinary does, that is, I pay and they reimburse me
2024-10-17 04:26:22
it just feels a bit like academic publishing before the academics revolted and open access became the norm: ISO expects us to pay them for the privilege of working for them for free, and then they get to put the fruits of our labor behind a paywall and sell it for profit. Basically the same business model as academic publishers.
2024-10-17 04:42:43
(Source: https://www.iso.org/ar2023.html) It looks like ISO gets:
- CHF 21m from membership fees ("ISO members" are themselves national standardization bodies, like DIN or ANSI, so I presume this is the national bodies transferring part of their own membership fees to ISO, plus maybe some taxpayer money)
- CHF 15m from royalties on national bodies selling standards
- CHF 6.6m from ISO directly selling standards
and it spends all that money on "operations", which I suppose is mostly paying wages for their army of bureaucrats
2024-10-17 04:43:36
The thing is, 22 million is a ridiculously low amount of money to justify keeping all major international standards locked behind a paywall.
2024-10-17 04:53:39
Compared to, say, how much it costs to pay all the people doing the actual work in the ~4000 committees within ISO. Just the meetings alone: according to the numbers in https://www.iso.org/files/live/sites/isoorg/files/about%20ISO/iso_in_figures/docs/ISO_in_Figures_2023.pdf, on average on every working day there are 43 meetings in progress. Some may have low attendance, others high, but as a ballpark number (say ~45 participants per meeting), that's the equivalent of about 2000 full-time jobs (typically engineers or some other highly paid profession) just for the meetings alone, never mind all the work that gets done between meetings. That's an order of magnitude more money, maybe two orders of magnitude.
Nova Aurora
_wb_ Compared to, say, how much it costs to pay all the people doing the actual work in the ~4000 committees within ISO. Just the meetings alone: according to the numbers in https://www.iso.org/files/live/sites/isoorg/files/about%20ISO/iso_in_figures/docs/ISO_in_Figures_2023.pdf, on average on every working day there are 43 meetings in progress. Some may have low attendance, others high, but as a ballpark number (say ~45 participants per meeting), that's the equivalent of about 2000 full-time jobs (typically engineers or some other highly paid profession) just for the meetings alone, never mind all the work that gets done between meetings. That's an order of magnitude more money, maybe two orders of magnitude.
2024-10-17 04:58:31
Who pays for the in person meetings?
_wb_
2024-10-17 04:59:07
the companies or universities the participants are working for
Nova Aurora
_wb_ The thing is, 22 million is a ridiculously low amount of money to justify keeping all major international standards locked behind a paywall.
2024-10-17 05:00:08
That could be a write-off for a national government or a particularly rich university
_wb_
2024-10-17 05:02:39
yeah it's a ridiculously small amount of money if it comes at the cost of having all standards locked behind a paywall, so not indexed by search engines, not really accessible to most people
2024-10-17 05:03:53
then again there are also some national standardization bodies that don't get enough public funding and have to get some income from selling standards too
Nova Aurora
2024-10-17 05:09:03
For example ANSI, because they don't receive funding from NIST
_wb_
2024-10-17 05:26:00
It's a bit of a mess tbh, but the bottom line is basically that all of the actual work gets done by the domain experts, who have their own funding, and the only role ISO staff plays in all of it is basically to come up with bureaucratic nonsense to make committee members jump through all kinds of kafkaesque hoops, and to ensure that standards remain safely locked behind a paywall because otherwise how are they going to pay ISO staff? Geneva is quite expensive, you see.
Cacodemon345
2024-10-17 05:36:06
Everything in the world revolves around the USA, Switzerland and the Netherlands... /s
A homosapien
2024-10-17 05:37:01
I wish academia was more accessible and not locked behind a paywall
2024-10-17 05:37:37
Thankfully most studies I find are free but every now and then I have to go to scihub...
_wb_
2024-10-17 06:10:47
At least in academia if you author a paper, you are free to put it on arxiv. In ISO, if you author a spec and then put it on arxiv they will sue you.
CrushedAsian255
_wb_ At least in academia if you author a paper, you are free to put it on arxiv. In ISO, if you author a spec and then put it on arxiv they will sue you.
2024-10-17 09:11:11
cause it's their IP?
Demiurge
jonnyawsom3 In this case it gives the greyscale image 270 colors instead of staying at 256
2024-10-17 09:50:30
270 gray colors? the chroma channels should be empty...
CrushedAsian255
Demiurge 270 gray colors? the chroma channels should be empty...
2024-10-17 10:14:20
maybe floating point rounding issues
jonnyawsom3
2024-10-17 10:17:16
Which is why I mentioned it here
DZgas Ж
2024-10-17 11:51:50
an interesting observation: if you build a new image where each pixel is the difference between the current pixel and the next one, then when you compress it as PNG it will weigh about 2 times less than the original. But if you compress both the original image and the pixel-difference image as JPEG XL, the two files end up almost the same size.
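A minimal sketch of that experiment, assuming numpy and Pillow, with placeholder filenames (the JXL side would just be running cjxl on both PNGs afterwards):
```python
# Replace each pixel with its difference from the next pixel (mod 256),
# similar in spirit to PNG's "Sub" filter, then save as PNG to compare sizes.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("original.png").convert("RGB")).astype(np.int16)

# Horizontal delta; the last column wraps around to the first.
delta = (img - np.roll(img, -1, axis=1)) % 256

Image.fromarray(delta.astype(np.uint8)).save("delta.png")
```
One plausible reading of the result: the manual delta helps deflate a lot when the original PNG was written with little or no filtering, while JPEG XL's modular mode already applies pixel predictors internally, so pre-deltaing the image buys it nothing.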
_wb_
CrushedAsian255 cause it's their IP?
2024-10-18 04:57:46
Lawyers can debate about these things, and things are different in different countries.
Demiurge
_wb_ At least in academia if you author a paper, you are free to put it on arxiv. In ISO, if you author a spec and then put it on arxiv they will sue you.
2024-10-18 08:56:09
Sounds like grounds to choose freedom instead of ISO
2024-10-18 08:56:59
Anyone can call themselves "the international union of whatever" but that doesn't make them special.
2024-10-18 08:59:24
People are free to ignore them if they aren't serving a useful purpose. They're just bureaucrats with big ideas about global authority.
2024-10-18 08:59:45
But nothing that actually gives them any authority.
2024-10-18 09:08:17
What do you guys have to gain exactly from their organization? I understand they held a contest and brought the FUIF and PIK teams together, but after that, it sounds like all they did was boss you around, take your money (so you pay for the privilege!), and tell you to focus on meaningless bitrates (0.06 bpp) at the expense of high-fidelity optimizations.
2024-10-18 09:09:35
After uniting the FUIF and PIK teams it looks like the rest of what the "international" bureaucrats did was try to sabotage the project rather than help it succeed.
2024-10-18 09:11:45
It has the "JPEG" name and brand because of their involvement but that's of dubious value as well, since that's like calling a super advanced new codec "GIF 2"
CrushedAsian255
Demiurge What do you guys have to gain exactly from their organization? I understand they held a contest and brought the FUIF and PIK teams together, but after that, it sounds like all they did was boss you around, take your money (so you pay for the privilege!), and tell you to focus on meaningless bitrates (0.06 bpp) at the expense of high-fidelity optimizations.
2024-10-18 09:31:57
Not sure if they can just split off the project
Demiurge It has the "JPEG" name and brand because of their involvement but that's of dubious value as well, since that's like calling a super advanced new codec "GIF 2"
2024-10-18 09:33:12
MP3.1
_wb_
2024-10-18 09:56:13
Having the stamp of approval of an international standard does have advantages over just being an industry project by Google and Cloudinary.
2024-10-18 09:58:14
It's mostly about having guarantees over control and maintenance of the standard, e.g. it not being possible for a single company to start making arbitrary changes to it. That's quite important if you want to use it as a building block for other things like DNG or DICOM are doing.
CrushedAsian255
2024-10-18 10:11:43
What about AOM? Do they have something stable?
Demiurge
_wb_ It's mostly about having guarantees over control and maintenance of the standard, e.g. it not being possible for a single company to start making arbitrary changes to it. That's quite important if you want to use it as a building block for other things like DNG or DICOM are doing.
2024-10-18 10:53:35
When you were maintaining FLIF/FUIF, you froze the bitstream, and you guaranteed to yourself and everyone using it that it wasn't going to change... No need for weird bureaucrats to help you do that :)
_wb_
2024-10-18 12:00:52
AOM is an industry consortium, and a relatively recent one too. While it's undoubtedly more modern than ISO in terms of processes, it doesn't necessarily have the same level of guarantees regarding spec stability; e.g. it looks like revisions of the avif spec can be made relatively easily even if they introduce breaking changes - there is no multi-stage process with ballots and comments from national bodies like there is in ISO.