jxl

Anything JPEG XL related

yurume
_wb_ Currently libjxl makes one jxlp per frame in case of animation, so the last one can be arbitrarily far
2022-09-04 12:08:10
ah, of course, that's reasonable, but even in that case metadata would be in the earlier part of the entire file, no?
_wb_
2022-09-04 12:32:20
You mean image metadata like SizeHeader and co? Yes, that should always be almost the first thing
yurume
_wb_ You mean image metadata like SizeHeader and co? Yes, that should always be almost the first thing
2022-09-04 01:57:43
I meant something like Exif or JUMBF
2022-09-04 01:58:18
but in any case, the presence of multiple frames makes things hairier anyway
_wb_
2022-09-04 02:14:42
Oh, Exif etc could be anywhere - spec says that things like orientation should be taken from codestream, not Exif, so we assumed it would be fine if Exif is e.g. at the very end
yurume
2022-09-04 02:16:33
well, so my context for that question was that I realized that I can't do TOC permutation without having a mapping from codestream offsets to file offsets, and it is simpler to scan short of the entire file to construct that mapping (I have a PoC) but this also means that incremental parsing is impossible for some files.
2022-09-04 02:17:06
so I'm okay with non-codestream boxes at the very end, but that strategy will fail if the last codestream box frequently starts at the very end.
2022-09-04 02:21:55
iirc libjxl is more complex, essentially constructing the mapping incrementally and processing as many sections (in the logical order) as possible, but I wondered if it is entirely avoidable
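The offset mapping being discussed can be sketched by scanning container boxes. This is a minimal editor's sketch, not yurume's PoC: it assumes well-formed boxes with plain 32-bit sizes (no extended or size-0 boxes) and ignores every box type except `jxlp`, whose 4-byte part-index field precedes the codestream payload.

```python
import struct

def jxlp_payload_map(data: bytes):
    """Map codestream offsets to file offsets by scanning jxlp boxes.

    Minimal sketch of the mapping discussed above: assumes well-formed
    boxes with plain 32-bit sizes and skips every non-jxlp box."""
    mapping = []  # (codestream_offset, file_offset, payload_length)
    cs_off = pos = 0
    while pos + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, pos)
        if box_type == b"jxlp":
            # a 4-byte part-index field precedes the codestream payload
            mapping.append((cs_off, pos + 12, size - 12))
            cs_off += size - 12
        pos += size
    return mapping

def box(box_type: bytes, payload: bytes) -> bytes:
    """Build a test box: 32-bit big-endian size (incl. header) + type."""
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

# two jxlp parts; the high bit of the second index marks the final part
data = box(b"jxlp", b"\x00\x00\x00\x00AB") + box(b"jxlp", b"\x80\x00\x00\x01CDE")
assert jxlp_payload_map(data) == [(0, 12, 2), (2, 26, 3)]
```

Note this is exactly the full-scan strategy yurume describes: it needs the whole file (or at least all box headers) before the mapping is complete.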
_wb_
2022-09-04 03:15:27
You could also just ignore the 18181-2 stuff and only implement a codestream decoder (i.e. assume you only get fed raw codestream and if there is jxlp concatenation that needs to be done, the calling application has to deal with that)
Jyrki Alakuijala
BlueSwordM <@794205442175402004> How exactly is in-block RDO done currently in libjxl? I know it sounds like a strange question, but I'm currently looking at the differences in use and aptitude of in-block RDO vs normal block RDO, and it is rather interesting to see the differences in approach and how it is done.
2022-09-05 04:29:47
essentially there is no real RDO -- we have a quantization value that comes from the distance setting, and some adaptive quantization calculation that works as an initial guess, but the adaptive quantization is more about looking at what could be exposed by lack of masking than doing RDO. Next, there is (with effort 8 and 9) a round or three of butteraugli, and bad mistakes that butteraugli indicates are used for reducing quantization via the adaptive quantization field -- too good performance can also lead to increasing quantization, but no more than 0.6x the original guess. It is all a big hack
2022-09-05 05:40:02
originally these heuristics were focused on finding the just-noticeable-difference, later reused for RDO-like goals, but it isn't RDO -- it is more like DO with some exceptions
BlueSwordM
Jyrki Alakuijala essentially there is no real RDO -- we have a quantization value that comes from the distance setting, and some adaptive quantization calculation that works as an initial guess, but the adaptive quantization is more about looking at what could be exposed by lack of masking than doing RDO. Next, there is (with effort 8 and 9) a round or three of butteraugli, and bad mistakes that butteraugli indicates are used for reducing quantization via the adaptive quantization field -- too good performance can also lead to increasing quantization, but no more than 0.6x the original guess. It is all a big hack
2022-09-05 09:20:53
Oh yeah, I remember everything now. We talked about it a few months ago :)
Jim
Brinkie Pie I made a Proof of Concept of this with JPEG and service workers a few years ago, but both Chrome and Firefox were a bit buggy and sometimes rendered images as all-white when scrolling. [update: certainly not my finest piece of code; https://ba.jtbrinkmann.de/gallery/ loads a JSON with the list of images incl. byte count for each progressive scan, then uses `fetch` with a range header to load only a portion of the image file, depending on the dimensions needed; https://ba.jtbrinkmann.de/gallery/sw.html also includes a (broken) service worker implementation, which automatically sets the range headers depending on parameters in the image url, so you can use it in `<img>` for example]
2022-09-05 09:47:23
Seems to work OK, though it appears to have only 2 sizes: small and large (at least that is what it looks like when loading). However, when I simulate a really slow connection, most of the images end up failing with a corrupt message. I assume there is a timeout rather than waiting for the file to completely load.
2022-09-05 09:49:12
Even still, it would be nice to see something like this added to the wasm decoder implementation so that it can be used as a polyfill for older browsers.
Traneptora
yurume well, so my context for that question was that I realized that I can't do TOC permutation without having a mapping from codestream offsets to file offsets, and it is simpler to scan short of the entire file to construct that mapping (I have a PoC) but this also means that incremental parsing is impossible for some files.
2022-09-08 02:08:59
do keep in mind that jxlp boxes allow arbitrary subdivision of the codestream, including in the middle of a block, by spec
2022-09-08 02:11:19
if you want a streaming decoder you'll probably have to have a second layer that handles the container level separately
2022-09-08 02:11:45
like how PNG decoders can feed IDAT chunks to zlib and then concatenate the resulting decoded chunks
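The PNG/zlib analogy above can be shown directly with Python's streaming decompressor; this editor's sketch feeds one compressed stream in arbitrary slices, the way a container layer would hand jxlp payloads to a codestream decoder:

```python
import zlib

def decode_streamed(chunks):
    """Feed arbitrarily-sized pieces of one zlib stream to a single
    decompressor, the way a PNG decoder concatenates IDAT chunks."""
    d = zlib.decompressobj()
    out = bytearray()
    for chunk in chunks:
        out += d.decompress(chunk)
    out += d.flush()
    return bytes(out)

payload = b"the codestream may be split anywhere, even mid-block " * 20
stream = zlib.compress(payload)
# split at awkward 7-byte boundaries, like jxlp boxes splitting mid-block
pieces = [stream[i:i + 7] for i in range(0, len(stream), 7)]
assert decode_streamed(pieces) == payload
```

The point being made in the chat is that the splitter (container layer) and the decoder (codestream layer) never need to agree on boundaries.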
yurume
Traneptora if you want a streaming decoder you'll probably have to have a second layer that handles the container level separately
2022-09-08 02:20:08
yes, this is what I eventually settled on, except that at the time of that question I had a full scanning phase before actual parsing, now my scanner can operate on a partially loaded container.
2022-09-08 02:21:05
I concluded that, since there is a jxlp box for each frame, it is not reasonable to do the full scanning at the beginning anyway
Traneptora
2022-09-08 02:21:10
the plan for JXL in ffmpeg is to have the parser combine the codestream into a single monolithic chunk and load that into a contiguous memory block
2022-09-08 02:21:24
and then just use a bitstream reader on that codestream
yurume
2022-09-08 02:21:24
I think this is more or less how libjxl works in general
Traneptora
2022-09-08 02:21:46
FFmpeg decoders generally don't decode partial packets anyway
2022-09-08 02:21:58
the minimum size for an AVPacket is one AVFrame
2022-09-08 02:22:20
it's not strictly required but almost all decoders assume there's a full frame inside a packet they are sent
2022-09-08 02:22:24
which the parser ensures
2022-09-08 02:22:44
parsing happens in 4k chunks at a time
2022-09-08 02:22:53
so writing the parser was actually pretty complicated
2022-09-08 02:23:01
since it required entropy decoding to find the frame boundary
WAZAAAAA
2022-09-08 10:02:59
question Let's pretend for a moment we're living in a web where JXL is omnipresent. You guys familiar with `Chrome Data Saver` or `Opera Turbo`? The browser redirects your traffic to their proxy servers which compress it by reducing the quality of images, and sends it back to you... which sounds like a privacy nightmare. Could YOU on the user-level force JXL progressive decoding to only download, say, max 10% of every image through a Firefox/Chrome flag or whatever, to save bandwidth?
2022-09-09 01:13:47
nah, I was thinking of just a global ON/OFF toggle, and you're right about the accessibility part. The alternative (choose per-image) sounds like too much clutter, or at least something that should remain limited to a hypothetical optional addon or an obscure about:config option. FF Android hiding about:config on stable (also the mobile addon apocalypse) is a tragedy, I wish Mozilla would stop shooting themselves in the foot 😔
yurume
WAZAAAAA question Let's pretend for a moment we're living in a web where JXL is omnipresent. You guys familiar with `Chrome Data Saver` or `Opera Turbo`? The browser redirects your traffic to their proxy servers which compress it by reducing the quality of images, and sends it back to you... which sounds like a privacy nightmare. Could YOU on the user-level force JXL progressive decoding to only download, say, max 10% of every image through a Firefox/Chrome flag or whatever, to save bandwidth?
2022-09-09 05:08:26
this is a good point, but as long as we take `<img srcset=...>` for granted I think there is no *additional* privacy threat
The_Decryptor
2022-09-09 05:30:26
iirc Firefox has a mode where it won't load any images unless you tap on them, could extend that to "show nothing" "show minimal preview" and "show full image" with JXL
2022-09-09 05:31:04
Something like that combined with the JXL transfer encoding could be a killer feature for areas with low data allowances
yurume
2022-09-09 05:31:18
yeah, an ability to adjust progressiveness regardless of screen dimension can be very interesting
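The three-mode idea in this exchange amounts to a byte-budget rule per display mode. A toy sketch: the tiers mirror the messages above, the 10% figure echoes the earlier question about capping downloads, and the 1 KiB floor is invented.

```python
def byte_budget(mode: str, full_size: int) -> int:
    """Bytes of a progressive image to request for a given display mode.

    Tiers follow the discussion above ("show nothing" / "minimal
    preview" / "full image"); the 10% ratio and 1 KiB floor are
    illustrative assumptions, not anything a browser implements."""
    budgets = {
        "nothing": 0,
        "preview": max(1024, full_size // 10),
        "full": full_size,
    }
    return budgets[mode]

assert byte_budget("nothing", 500_000) == 0
assert byte_budget("preview", 500_000) == 50_000
assert byte_budget("full", 500_000) == 500_000
```

With progressive JXL, the "preview" budget would still render a usable low-resolution image rather than a broken file.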
Fraetor
The_Decryptor iirc Firefox has a mode where it won't load any images unless you tap on them, could extend that to "show nothing" "show minimal preview" and "show full image" with JXL
2022-09-09 07:45:00
I think that is an extension, but it works pretty well.
Brinkie Pie
WAZAAAAA question Let's pretend for a moment we're living in a web where JXL is omnipresent. You guys familiar with `Chrome Data Saver` or `Opera Turbo`? The browser redirects your traffic to their proxy servers which compress it by reducing the quality of images, and sends it back to you... which sounds like a privacy nightmare. Could YOU on the user-level force JXL progressive decoding to only download, say, max 10% of every image through a Firefox/Chrome flag or whatever, to save bandwidth?
2022-09-09 07:50:59
technically, it's possible for a client (browser) to abort the request once enough bytes for the desired quality are loaded.
_wb_
2022-09-09 08:18:50
not really afaik. A client can break a connection but I don't think it's possible to abort a specific request while maintaining the connection and not aborting everything else that's coming from that host.
fab
2022-09-09 11:00:10
https://it.m.wikipedia.org/wiki/Speciale:DiffMobile/129262267
2022-09-09 11:00:20
I did a commit here
Hello71
_wb_ not really afaik. A client can break a connection but I don't think it's possible to abort a specific request while maintaining the connection and not aborting everything else that's coming from that host.
2022-09-09 11:17:20
it's possible in http 2, and http 1 pipelining is allegedly rarely used. the bigger problem (imo) is that if your connection is slow enough for you to care about this, you probably have enough rtt that by the time the reset gets to the server you've already downloaded more than you want. you could solve both of these problems by fetching the frame index first then doing a range request, but then that's at least one extra rtt, plus the server might not support range requests at the beginning of a file, plus as we've discussed before jpeg xl doesn't necessarily have an index
_wb_
2022-09-09 12:26:58
the frame index for animations is optional, but the TOC for frames is not optional
Brinkie Pie
_wb_ not really afaik. A client can break a connection but I don't think it's possible to abort a specific request while maintaining the connection and not aborting everything else that's coming from that host.
2022-09-09 02:48:54
I don't know how this looks on the HTTP layer, but in JavaScript there's `XMLHttpRequest.abort()` and `AbortSignal` (for `fetch`). Both apparently do stop the download. I am hesitant to think that it causes connection issues.
yurume
2022-09-09 02:52:52
it is possible to break the connection during transfer, even in HTTP/1.1 ("If a message body has been indicated, then it is read as a stream until an amount of octets equal to the message body length is read or the connection is closed.")
2022-09-09 02:54:05
what's complex is to break and then resume the transfer at a different offset; this would require a new connection (which gets multiplexed in HTTP/2 onwards)
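The "read enough bytes, then drop the connection" behavior discussed here can be demonstrated end-to-end with the standard library. This is an editor's sketch of the mechanics only (a local toy server, not a browser): the client reads roughly the first 4 KiB of a 16 KiB body and then closes the socket mid-transfer.

```python
import http.server
import socket
import threading

BODY = bytes(range(256)) * 64  # 16 KiB stand-in for a progressive image

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        try:
            self.wfile.write(BODY)
        except ConnectionError:
            pass  # client hung up early -- exactly the scenario above

    def log_message(self, *args):
        pass  # keep the demo quiet

srv = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()

want = 4096  # pretend the first 4 KiB hold the preview passes
sock = socket.create_connection(srv.server_address)
sock.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
buf = b""
while len(buf) < want:
    chunk = sock.recv(4096)
    if not chunk:
        break
    buf += chunk
sock.close()  # "abort" mid-body by dropping the connection
srv.shutdown()

body = buf.split(b"\r\n\r\n", 1)[1]
assert 0 < len(body) < len(BODY)
```

As Hello71 points out above, on a real high-latency link the bytes already in flight when you close can easily exceed the amount you wanted, which this loopback demo cannot show.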
Traneptora
2022-09-10 11:38:53
Is there a larger test-set for JPEG XL images, for decoder testing?
0xC0000054
Traneptora Is there a larger test-set for JPEG XL images, for decoder testing?
2022-09-11 03:32:32
The largest JXL test set I know of is the libjxl conformance repository: https://github.com/libjxl/conformance
Traneptora
2022-09-11 06:25:58
I meant larger than that
2022-09-11 06:26:08
this is mostly for testing edge-cases
yurume
2022-09-11 06:27:30
I think that full MC/DC coverage for J40 might produce such a test case set
Traneptora
yurume I think that full MC/DC coverage for J40 may produce a such test case set
2022-09-11 06:32:54
do you have it? what license? etc.
2022-09-11 06:33:08
I mostly test on photos I have lying around that I produce with cjxl
yurume
2022-09-11 06:33:17
of course not, it would be a very long-term project
Traneptora
2022-09-11 06:33:34
oh I thought you were referring to your own test case set, mb
yurume
2022-09-11 06:38:11
I don't have anything like that, unfortunately
2022-09-11 06:38:38
and I think no one, including the libjxl devs, knows *which* edge cases are possible (I have a good idea about that thanks to J40, though)
_wb_
2022-09-11 06:58:12
The combinatorial explosion of using all the coding tools and features is quite large
Traneptora
2022-09-11 06:58:45
at the very least individual coding tools in the codestream itself could have test images
2022-09-11 06:59:17
atm the conformance library tests decoder compliance with weird things, but it doesn't really help build a decoder the way having lots of various ordinary vardct and modular images that use different features would
_wb_
2022-09-11 07:33:34
Yes, like images using just one block type
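Enumerating such single-feature images is a small cross product, which also makes _wb_'s "combinatorial explosion" point concrete. The axis names below are illustrative placeholders chosen by the editor, not the spec's actual tool list:

```python
from itertools import product

# Hypothetical feature axes for a test-image matrix; the names are
# placeholders for illustration, not real cjxl options or spec fields.
AXES = {
    "mode": ["vardct", "modular"],
    "transform": ["dct8", "dct16", "dct32", "identity"],
    "extra": ["none", "patches", "splines", "noise"],
}

matrix = list(product(*AXES.values()))
assert len(matrix) == 2 * 4 * 4  # 32 images from just three small axes
```

Even three short axes give 32 images; covering every real coding tool pairwise, let alone exhaustively, grows far faster.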
Traneptora
2022-09-12 01:16:20
In other news, I'm adding animated JXL support to FFmpeg via libjxl, but apparently ffmpeg.c ignores the timebase set by decoders
2022-09-12 01:19:28
which is kind of annoying
2022-09-12 02:10:37
what this means is that setting the framerate to something very large will decode animated JXL properly
2022-09-12 02:10:45
but setting the framerate to the default of 1/25 will drop frames
2022-09-12 02:30:54
haven't figured out a fix yet
_wb_
2022-09-12 03:27:07
how strange - are no other decoders setting timebase?
Traneptora
_wb_ how strange - are no other decoders setting timebase?
2022-09-12 05:39:17
that's correct, the demuxer sets the timebase
2022-09-12 05:40:15
decoders *can* attempt to set the timebase, but clients of libavcodec are responsible for respecting it
2022-09-12 05:40:21
and ffmpeg.c doesn't
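The timebase problem in this exchange comes down to re-expressing tick counts between clocks. A toy sketch with exact fractions: the 1/25 default is from the messages above, while the 1/1000 animation clock is an editor's assumption for illustration.

```python
from fractions import Fraction

def rescale(ticks: int, src: Fraction, dst: Fraction) -> int:
    """Re-express a duration counted in src-timebase ticks as dst ticks."""
    return int(ticks * src / dst)

tb_default = Fraction(1, 25)   # the 1/25 default mentioned above
tb_anim = Fraction(1, 1000)    # e.g. an animation clocked in milliseconds

# One 1/25 s frame is 40 ms going one way; going the other way,
# a 25 ms frame rounds to zero ticks of the 1/25 clock and is dropped.
assert rescale(1, tb_default, tb_anim) == 40
assert rescale(25, tb_anim, tb_default) == 0
```

This is why a very large framerate (i.e. a fine timebase) "works": no frame duration rounds down to zero ticks.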
gurpreetgill
2022-09-13 03:32:57
👋 v0.7rc has been out since Aug 9th, do we have an estimate of when v0.7 is gonna be released? libvips has started requiring v0.7 libjxl https://github.com/libvips/libvips/commit/0029b3c416de8f520e6282ad5085403a9648a2a7
_wb_
2022-09-13 03:55:36
<@795684063032901642> <@987371399867945100> any ETA on that? anything in particular that still needs to be done/checked before release?
yurume
2022-09-13 06:16:32
PSA: https://github.com/lifthrasiir/j40 is now up.
Moritz Firsching
_wb_ <@795684063032901642> <@987371399867945100> any ETA on that? anything in particular that still needs to be done/checked before release?
2022-09-13 07:05:13
wanted to get https://github.com/libjxl/libjxl/pull/834 in. Other than that it seems only perhaps a little bit of documentation...
gurpreetgill
2022-09-14 01:51:08
export version information in headers b...
yurume
2022-09-16 03:26:42
now that J40 is released, I should probably report all known spec bugs again... and there are 63 unreported issues in the J40 comments. ugh.
_wb_
2022-09-16 05:29:46
If the comments are clear enough, we can just use those as input
2022-09-19 12:22:34
https://people.csail.mit.edu/ericchan/hdr/hdr-jxl.php
2022-09-19 12:22:43
why does this page no longer work in current chrome?
2022-09-19 02:06:45
it works on android chrome 105 for me (but shows the images with weird banding, which is likely due to my android being too old)
2022-09-19 02:07:11
but on macos chrome 105 it doesn't work
2022-09-19 02:08:45
for me at least
spider-mario
2022-09-19 02:12:43
sometimes, it requires a refresh for some reason
_wb_
2022-09-19 02:56:15
lol what is causing that
2022-09-19 02:56:24
indeed just refreshing the page helped
spider-mario
2022-09-19 03:17:25
could it have to do with the chrome jxl decoder not liking to be called on so many images at once in the same page?
_wb_
2022-09-19 03:42:02
Oh, maybe running out of memory if it starts all of them at the same time?
2022-09-19 03:42:17
But then why would refreshing make it any better?
Jyrki Alakuijala
2022-09-20 02:22:21
how many galleries of JPEG XL photos do we know about?
2022-09-20 02:22:54
(online web pages with loads of photographic JPEG XL content)
_wb_
2022-09-21 01:23:20
sigh what is it with these cryptospammers
2022-09-21 01:23:59
I also get lots of cryptospam in my twitter dms lately
Moritz Firsching
2022-09-21 02:05:45
We have done the version 0.7.0 release! https://github.com/libjxl/libjxl/releases/tag/v0.7.0
Jim
2022-09-21 02:05:56
I agree, get rid of these jxl spammers! <:kekw:808717074305122316>
2022-09-21 02:07:05
Congrats on the 0.7 release! <:JXL:805850130203934781>
BlueSwordM
2022-09-21 04:44:36
So sad. Crypto bots with no JPEG-XL mentions smh.
brooke
2022-09-21 06:18:51
asked this in another smaller jxl server before finding this one, but can cjxl resize images? or is this a better question for <#804324493420920833>
2022-09-21 06:20:21
like say i want to resize a 4000x4000 source .png to a 1200x1200 .jxl
190n
2022-09-21 06:20:57
i don't see any option for that so probably just convert to a smaller png first
2022-09-21 06:21:30
would be handy tho
brooke
2022-09-21 06:22:05
so my use case is backlogging album covers, would it be more efficient to encode from a lossless .jpg or .png?
2022-09-21 06:23:35
or should i just test
190n
2022-09-21 06:24:36
wdym "lossless .jpg"
brooke
2022-09-21 06:25:10
or i guess quality 100
2022-09-21 06:25:18
so like, quality 100 .png to .jpg
2022-09-21 06:25:35
i use caesium for compression so i'm assuming the "lossless" option there is just that
190n
2022-09-21 06:26:10
maybe, it won't be lossless tho
2022-09-21 06:26:35
i would go 4k png → 1.2k png → 1.2k jxl
brooke
190n i would go 4k png → 1.2k png → 1.2k jxl
2022-09-21 06:27:07
ty, i've done AV1 / .opus work before but i just got into trying JXL out today so i wasn't sure
2022-09-21 06:27:49
i'm a gui person so using cli stuff is not my forte
BlueSwordM
brooke so my use case is backlogging album covers, would it be more efficient to encode from a lossless .jpg or .png?
2022-09-21 06:31:17
PNG.
2022-09-21 06:31:29
The highest quality source will always yield the highest fidelity result.
brooke
2022-09-21 06:32:09
in a time where data prices get exponentially cheaper i'm obsessed with shrinking everything :(
2022-09-21 06:58:20
i don't mean to be handheld or anything, but if i wanted to get the best quality for the size, are there particular settings i should use here or is `-q 95` sufficient?
2022-09-21 06:59:27
`-e 9` takes like 2 million years for me
brooke `-e 9` takes like 2 million years for me
2022-09-21 07:01:47
but if it outputs eventually i'd probably be fine with it
Pashi
2022-09-21 07:09:01
Brooke, the encoder seems to be tuned to give you the best tradeoff at the default quality setting -d1 (which I believe is equivalent to -q90)
brooke but if it outputs eventually i'd probably be fine with it
2022-09-21 07:10:07
You are probably fine using the default effort level too as -e9 will not give you much benefit considering how much longer it takes compared to -e7
2022-09-21 07:10:17
The default levels are good.
2022-09-21 07:10:29
If you want lossless use -d0
2022-09-21 07:10:43
Otherwise -d1 is the best tradeoff for lossy
2022-09-21 07:11:23
You should not need to bother with any other settings and you usually will want lossless unless there's a good reason for lossy.
2022-09-21 07:12:20
Also whenever you transcode ANYTHING be it music video or images always start with the highest quality source material, never use lossy sources like JPEG if you can avoid it.
2022-09-21 07:15:16
If you already start with a JPEG file as the best thing you have access to, then do not transcode from lossy to lossless like from JPEG to PNG.
brooke
2022-09-21 07:15:24
i saw some commands with a good few parameters so i was wondering what could optimize stuff further if needed
Pashi
2022-09-21 07:16:51
In that case it's better to losslessly crush the JPEG directly into a JXL; or, if you want to degrade the quality further, use -d1: it's hard to notice the change but it will shrink it even smaller than the lossless crush
brooke i saw some commands with a good few parameters so i was wondering what could optimize stuff further if needed
2022-09-21 07:18:11
The defaults are pretty well tuned and optimized. Generally -d1 is the best sweet spot in terms of quality/size for lossy coding. Going above or below is usually not a good tradeoff.
2022-09-21 07:19:00
And higher effort values than 7 are truly not worth the extra time spent
2022-09-21 07:19:30
They usually don't improve coding efficiency but theoretically may improve consistency of quality.
brooke
2022-09-21 07:19:47
so for bulk encoding, do you think this or 190n's method would work better? the goal is just to reduce source images to 1200x1200 and optimize filesize while retaining as much quality as possible
190n i would go 4k png → 1.2k png → 1.2k jxl
2022-09-21 07:20:01
(this)
2022-09-21 07:20:21
i'm just interested in experimenting
Pashi
2022-09-21 07:21:48
Is the source material 4k PNG and you want to reduce it to 1.2k JXL? I would resize/resample to 1.2k and then use -d1 as the only cjxl flag.
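Pashi's two-step recipe (resample first, then let cjxl's defaults do the rest with `-d1` as the only flag) can be sketched as command construction. Nothing here is executed: the `magick` resizer and the temp-file naming are the editor's assumptions, and only the `cjxl src dst -d N` shape comes from the chat.

```python
def cover_pipeline(src: str, dst: str, size: int = 1200, distance: float = 1.0):
    """Build the two commands for the recipe above: resample, then encode.

    ImageMagick's `magick` and the intermediate file name are assumptions;
    cjxl is invoked with -d as the only quality flag, per the advice above."""
    resized = dst + ".tmp.png"
    resize = ["magick", src, "-resize", f"{size}x{size}", resized]
    encode = ["cjxl", resized, dst, "-d", str(distance)]
    return resize, encode

resize, encode = cover_pipeline("cover.png", "cover.jxl")
assert encode == ["cjxl", "cover.jxl.tmp.png", "cover.jxl", "-d", "1.0"]
```

In practice the two argv lists would be handed to `subprocess.run` in order, deleting the temporary PNG afterwards.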
brooke
2022-09-21 07:22:56
alright good to know
Pashi
2022-09-21 07:22:56
If you have any command like questions feel free to ask. Command line is very useful once you understand how it works. You on Windows or...?
brooke
2022-09-21 07:23:07
unfortunately yeah
2022-09-21 07:23:29
never been able to make a permanent switch to linux due to hardware compatibility and workflow
Pashi
2022-09-21 07:24:58
Windows comes with a Linux kernel and VM if you set up WSL2
2022-09-21 07:25:17
Or virtualbox
2022-09-21 07:25:27
Virtualbox is better to be frank...
brooke
2022-09-21 07:25:34
i'm using a custom .iso w/o WSL compatibility so yeah i've only been using virtualbox
Pashi
2022-09-21 07:25:34
Lets you run whatever you like
2022-09-21 07:59:08
What hardware you got?
2022-09-21 08:00:03
Just out of curiosity, what sort of incompatible workflow and hardware you got?
brooke
Pashi Just out of curiosity, what sort of incompatible workflow and hardware you got?
2022-09-21 08:01:09
G15 5515 laptop, i do a lot of archival work that requires some windows-exclusive tools that i've already adapted to
2022-09-21 08:01:22
can't find good linux equivalents in some case
Pashi
2022-09-21 08:01:27
asus?
brooke
2022-09-21 08:01:39
dell
Pashi
2022-09-21 08:01:45
Oh
brooke
2022-09-21 08:01:51
wanted to get a decent gaming laptop on the cheap
Pashi
2022-09-21 08:02:02
Dell is your best bet
brooke
2022-09-21 08:02:05
works flawlessly for windows but linux is really iffy
Pashi
2022-09-21 08:02:11
Asus sucks
brooke
2022-09-21 08:02:13
yeah i know about like the XPS models being good for linux
2022-09-21 08:02:20
but this isn't really geared for it
Pashi
2022-09-21 08:02:27
No just in general not linux specific.
brooke
2022-09-21 08:02:35
yeah i guess
Pashi
2022-09-21 08:02:58
Archival stuff? Like what, rar?
brooke
2022-09-21 08:03:31
general media
2022-09-21 08:04:49
i've been looking into lossy codecs for large-form archival lately after having been a lossless shill for a long time
Pashi
2022-09-21 08:05:47
:)
2022-09-21 08:05:55
Use whatever makes sense for the job
2022-09-21 08:06:13
You already tried Linux on the laptop? What distro?
brooke
2022-09-21 08:06:55
i've always been a strong fedora user but i tried arch too without much success
Pashi
2022-09-21 08:16:24
Both are decent places to start
2022-09-21 08:16:45
Which piece of hardware had trouble?
brooke
2022-09-21 08:18:36
gpus
Pashi
2022-09-21 08:38:25
Hybrid gpu?
brooke
2022-09-21 08:38:46
yes
Pashi
2022-09-21 08:40:04
Pretty strange, everything should theoretically work fine if you install the nvidia package, assuming you have nvidia
2022-09-21 08:42:09
And for AMD everything should just work regardless because AMD doesn’t put their driver behind a stupid license forbidding redistribution.
brooke
2022-09-21 11:19:45
does anyone have a clue how to use .jxl files as cover art on foobar2000? i heard it supports the format now, but i have 0 clue how to do so
2022-09-21 11:22:52
oh wait <@387462082418704387> you would probably know right
2022-09-21 11:23:00
i just put foobar in search
Pashi
2022-09-21 11:48:03
I don't see anything about jxl support on the website
brooke
2022-09-21 11:48:26
nvm i ended up figuring it out myself lol
Pashi
2022-09-21 11:48:36
`Utilized Windows Imaging Component for image parsing, removed libwebp dependency. Album covers in HEIF, AVIF, etc can now be viewed if system codecs are present.`
2022-09-21 11:49:10
Did you have to install a jxl wic plugin?
brooke
2022-09-21 11:50:16
yeah i did when i got it working
2022-09-22 12:18:38
now i just need a 64-bit theme because pretty sure it's only working for 64-bit :/
Pashi
2022-09-22 02:05:17
32 bit computers in this day and age are like using whale oil lamps
brooke
2022-09-22 02:08:25
yeah but foobar's a native 32-bit app and the fact that it just transitioned to 64-bit alongside it is the only reason .jxl works there lmao
eddie.zato
2022-09-22 07:03:06
WIC support was added in foobar 2.0 beta, which has x64 version.
spider-mario
brooke i'm using a custom .iso w/o WSL compatibility so yeah i've only been using virtualbox
2022-09-22 07:31:33
another option than WSL is MSYS2: https://www.msys2.org/
Deleted User
2022-09-22 08:49:56
How do I actually find the spec?
fab
2022-09-22 09:18:51
I found it by writing International on my computer
Deleted User
2022-09-22 11:15:23
...buy? <a:SCconfused:814378693341872159>
2022-09-22 11:15:34
i am poor american
2022-09-22 11:15:38
i no have spending money
2022-09-22 11:15:43
i spent it all on programmer socks
2022-09-22 11:16:44
~~and nitro~~
2022-09-22 11:22:52
Oh rad, thank you.
2022-09-22 11:28:55
Sounds good, thank you!
brooke
2022-09-23 02:57:30
~~trying to encode an image, is something wrong?~~
2022-09-23 02:58:47
oh wait i used `-q` instead of `-d`
2022-09-23 02:58:48
my bad
Pashi
2022-09-23 04:38:34
Are you happy with the results? JXL is pretty nice when used at -d1 and -d0 quality settings.
2022-09-23 04:39:58
Other quality settings aren’t as good of a tradeoff because it was tuned and optimized the most at -d1
2022-09-23 04:42:16
For example, higher quality settings use a LOT more space for hardly any difference in quality, while lower quality settings hardly reduce the size but noticeably mess up the image with a drastic quality reduction.
2022-09-23 04:43:26
So it's generally not worth it and not effective to change the setting from -d1 unless you're using -d0 for lossless
2022-09-23 04:43:43
Or modular mode
2022-09-23 04:46:29
Anything higher or lower is a very poor tradeoff as other quality levels have not been optimized the same way
2022-09-23 04:47:46
The focus of JXL from the beginning has been perceptual transparency because that seems to be the most relevant and most used settings on the web today
_wb_
2022-09-23 04:51:29
I would say d0.5-d4 (or q60-95 or so) is the useful range. Basically:
d0 for lossless/authoring
d0.5 for archival
d1 for most non-web (or high-fidelity web) use cases
d2 for typical web
d3 for low-fidelity web
d4 for ultra-low-fidelity web
2022-09-23 04:52:24
In cloudinary terms I would say d1=best, d2=good, d3=eco, d4=low, roughly speaking.
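_wb_'s tiers can be captured as a tiny preset table. The preset names are just the ones used in these messages (Cloudinary-style), not cjxl options; only `-d` and its values come from the discussion.

```python
# Distance presets per the tiers above; the names follow the chat
# ("best"/"good"/"eco"/"low"), they are not real cjxl flags.
PRESETS = {
    "lossless": 0.0,  # d0: lossless / authoring
    "archival": 0.5,  # d0.5
    "best": 1.0,      # d1: most non-web / high-fidelity web
    "good": 2.0,      # d2: typical web
    "eco": 3.0,       # d3: low-fidelity web
    "low": 4.0,       # d4: ultra-low-fidelity web
}

def cjxl_args(src: str, dst: str, preset: str = "best"):
    """Build a cjxl invocation for a named tier (editor's sketch)."""
    return ["cjxl", src, dst, "-d", str(PRESETS[preset])]

assert cjxl_args("in.png", "out.jxl", "eco") == ["cjxl", "in.png", "out.jxl", "-d", "3.0"]
```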
Pashi
2022-09-23 04:53:54
In my testing the space savings from lower quality settings are much less than expected and the space increase from higher quality settings are much higher than expected.
2022-09-23 04:54:42
It really seems from my testing like -d1 is a noticeable spike in terms of quality/bpp
2022-09-23 04:58:25
At -d1 the filesize is close to as low as it will get for acceptable quality and any further reduction will come at a huge visual cost
2022-09-23 04:59:06
For hardly any space savings considering most of the size was already shaved off
_wb_
2022-09-23 05:04:20
Roughly speaking I think indeed most codecs/encoders have a "sweet spot" where they give the best "bang for buck", and if you move away from that spot the trade-off becomes less nice. For libjxl, that sweet spot region is d1-2; for mozjpeg that region is q60-80, for libjpeg it is q75-92, for avif and webp it is below the lowest quality people actually want to use for still images, around d8 or so.
Pashi
2022-09-23 05:57:38
mozjpeg is nice...
2022-09-23 05:58:01
Especially its resistance to generation loss
2022-09-23 05:59:06
Wish people would stop using stock standard libjpeg turbo where speed isn't absolutely more important than fidelity
2022-09-23 05:59:16
Because turbo is fugly
_wb_
2022-09-23 06:02:07
is there a significant difference in resistance to generation loss between libjpeg and mozjpeg?
2022-09-23 06:03:46
my current experiments (mostly objective metrics based) indicate that mozjpeg is significantly better than libjpeg below ~q80, but then the gain quickly diminishes and above q88 or so mozjpeg actually becomes worse than libjpeg
2022-09-23 06:04:07
(this is with default options, so optimizing for psnr-hvs etc; I haven't tested other options yet)
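One mechanism behind generation-loss resistance can be shown with a toy quantizer: re-encoding with identical settings is idempotent, so loss stops compounding after the first generation, while any change of step (or block grid) re-introduces error. This is an editor's illustration of the general principle, not mozjpeg's actual behavior.

```python
def quantize(x: float, step: float) -> float:
    """Round to the nearest multiple of step, like a single DCT quantizer."""
    return round(x / step) * step

v = 123.4
g1 = quantize(v, 7)   # first generation: information is lost here
g2 = quantize(g1, 7)  # second generation with the SAME step: no new loss
assert g1 == g2 != v
assert quantize(g1, 5) != g1  # a different step re-introduces error
```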
JendaLinda
2022-09-23 06:31:00
JXL does a pretty good job compressing JPEGs encoded at quality 100; it can save as much as 50% of the space. I know that JPEG at quality 100 is suboptimal, but I didn't create those JPEGs.
brooke
_wb_ I would say d0.5-d4 (or q60-95 or so) is the useful range. Basically: d0 for lossless/authoring d0.5 for archival d1 for most non-web (or high-fidelity web) use cases d2 for typical web d3 for low-fidelity web d4 for ultra-low-fidelity web
2022-09-23 06:55:08
-d 3 is a fine middle ground for my use case
2022-09-23 06:55:15
after -d 4 it gets iffy
Pashi
2022-09-23 07:59:29
What is the space savings of -d3 versus -d1?
2022-09-23 07:59:34
In your use case
2022-09-23 08:00:20
You save 1 megabyte and get a really weird looking image forever?
2022-09-23 08:01:08
Well maybe you would save 1 megabyte over multiple images
2022-09-23 08:01:26
For a single image it's rare to get that much savings
2022-09-23 08:01:57
Like I said you hardly save anything at all for the amount of quality you lose permanently.
2022-09-23 08:02:33
The quality drops off steeply after d1
2022-09-23 08:02:52
Without a corresponding change in filesize.
2022-09-23 08:03:24
And the size increases steeply on the other side of d1
2022-09-23 08:03:38
It's a very steep spike
2022-09-23 08:03:57
In both directions.
2022-09-23 08:06:20
But I'm curious what space savings you are getting in your case with d1 vs d3
2022-09-23 08:07:00
Personally I notice a very significant and unacceptable decrease in perceptual quality at d3 and even at d2
_wb_
2022-09-23 08:33:17
it all depends on what the image is meant for — the fidelity needs depend a lot on what the image is used for and if it's just a preview image or "the" version of the image
Pashi
2022-09-23 09:54:04
I suspect you would get much better savings by decreasing the dimensions of the image rather than the quality knob
yurume
2022-09-23 09:59:20
or upsampling. the problem with this approach is that the encoder has no idea whether the image will most often be viewed as a whole or zoomed in.
brooke
Pashi What is the space savings of -d3 versus -d1?
2022-09-23 10:29:39
my use case is just an economic album library so covers are hardly the main focus
2022-09-23 10:31:07
the extent of my worries is that -d 3 can't handle shades of black well
2022-09-23 10:31:56
example (2x zoom)
2022-09-23 10:32:17
additional artifacts are hardly noticeable unless you're two inches away from the screen
2022-09-23 10:33:58
but i mean the source image was already a lossy .jpg
2022-09-23 10:34:04
so i'm not really complaining
Pashi I suspect you would get much better savings by decreasing the dimensions of the image rather than the quality knob
2022-09-23 10:34:54
my album covers are resized to 1200x1200
2022-09-23 10:35:05
then have -d 3 applied to them regardless
2022-09-23 10:35:35
note i do archive source .png files and original lossless tracks so this is purely for low storage libraries
2022-09-24 12:13:49
so there's a guy trying to patch JXL compatibility into his mpv build, and he's repeatedly getting an `ERROR: libjxl >= 0.7.0 not found using pkg-config` error - several solutions have been attempted, but neither he nor i have any ideas. here are some logs: https://pastebin.com/WpEWd3ji
2022-09-24 12:15:31
i've checked for solutions on both github and this server's search
2022-09-24 12:16:41
<@794205442175402004> would you happen to know anything? sorry if this is an unnecessary ping, but i feel like you'd know a tad about compiling
2022-09-24 12:27:42
also https://github.com/libjxl/libjxl/issues/1778
2022-09-24 12:27:54
(same error, just his issue submission)
B_Ræd15
2022-09-24 02:25:43
No lossless jpeg transcode with ffmpeg yet, right?
BlueSwordM
B_Ræd15 No lossless jpeg transcode with ffmpeg yet, right?
2022-09-24 02:57:29
Indeed.
Silikone
2022-09-24 03:40:05
How do you debunk naysayers/shills who use the hardware decoding argument against JXL?
B_Ræd15
2022-09-24 03:43:50
Depends what they're comparing to. PNG and GIF don't have hardware acceleration either afaik
2022-09-24 03:44:41
And I don't know much but I'd assume gpu based acceleration is possible to implement later if this is similar enough to jpeg
Silikone
B_Ræd15 Depends what they're comparing to. PNG and GIF don't have hardware acceleration either afaik
2022-09-24 04:04:10
WebP and JPEG, I guess. Whatever is widespread on the web
2022-09-24 04:05:00
I've heard even phone browsers use a CPU library because of complications with inconsistent content
2022-09-24 04:06:05
For JPEG, that is
2022-09-24 04:06:33
And there isn't even phone hardware for AVIF yet, I think
190n
2022-09-24 04:06:59
some phones have av1
B_Ræd15
2022-09-24 04:07:11
Which phone chips do?
2022-09-24 04:07:26
I only know for sure RTX 3000 and Intel Arc do
190n
2022-09-24 04:08:03
samsung exynos 2100 (in some galaxy s21s) and up, google tensor series, some mediatek parts (their flagship "dimensity" series i believe all have it)
Silikone
2022-09-24 04:08:06
Intel CPUs do too, as do some Radeon cards, except for the cheap crappy ones, even though they need it the most
190n
2022-09-24 04:08:11
qcom said they would bring it to next year's snapdragon iirc
Silikone
2022-09-24 04:08:52
AV1 hardware is in a pretty good position on PC, it seems
2022-09-24 04:09:19
But that doesn't answer the question, does anything use it for AVIF?
190n
2022-09-24 04:09:57
ah, there, i don't think so
2022-09-24 04:12:23
do thumbnail generators for jpeg tend to take advantage of progressive decoding (when available)?
_wb_
brooke <@794205442175402004> would you happen to know anything? sorry if this is an unnecessary ping, but i feel like you'd know a tad about compiling
2022-09-24 05:41:48
I dunno, all build tools seem to be mostly made out of duct tape and glue and when something doesn't work it's tricky to troubleshoot when you're not using the same environment.
190n do thumbnail generators for jpeg tend to take advantage of progressive decoding (when available)?
2022-09-24 05:46:00
Libjpeg does have an option to load jpegs at 1:8 or even n:8 scale, where it loads only DC or does a cheaper iDCT that doesn't look at all AC coefficients. I think this feature is specifically there for thumbnail generators. It works for both progressive and sequential jpegs, but with progressive ones you get the additional advantage that it doesn't need to read and entropy decode the whole file.
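A toy sketch of the 1:8 idea in plain Python (not libjpeg, and the function name is hypothetical): decoding only the DC coefficients amounts to producing one pixel per 8×8 block, i.e. the block average.

```python
def dc_scale_decode(pixels, w, h):
    """Downscale a row-major grayscale image by 8x, one pixel per 8x8 block.

    This mimics what a 1:8 JPEG decode yields: each output pixel is the
    block's DC value, i.e. the average of its 64 samples.
    """
    assert w % 8 == 0 and h % 8 == 0
    out = []
    for by in range(0, h, 8):
        row = []
        for bx in range(0, w, 8):
            block = [pixels[(by + y) * w + (bx + x)]
                     for y in range(8) for x in range(8)]
            row.append(sum(block) // 64)  # integer DC average
        out.append(row)
    return out
```

A real decoder is cheaper still: it skips the AC coefficients entirely rather than averaging decoded pixels, and with progressive JPEGs it doesn't even need to entropy-decode the AC scans.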
2022-09-24 05:47:45
No browser uses hardware acceleration for any image format afaik. Hardware decode is useful for video. For still images, it has more downsides than advantages.
2022-09-24 05:48:52
At least in a web context, where pages tend to have multiple images loading in parallel, of varying dimensions, many of them relatively small.
2022-09-24 05:54:51
Hardware typically needs to be initialized to a certain frame dimension (which has a bit of initialization overhead), then takes a full bitstream as input and returns a full decoded frame. It doesn't work if you want progressive or even just incremental decoding. The initialization cost is ok for video or for cameras where dimensions are fixed, but for web pages it kind of ruins the advantages. At least that's how I understand it. Fact is that no browser does hw decode for any image format, even though jpeg and vp8 hw decode is often available.
2022-09-24 05:56:40
Also hardware decoders typically only support some profile: e.g. typically only 4:2:0, only dimensions limited to full HD or 4k, etc.
2022-09-24 05:59:33
I don't think there are plausible use cases where it could be an issue. The battery cost of radio and display is more significant than that of cpu. It's also just a few bursts of cpu and then the image is loaded and you just look at it — it's not like video where 30 times per second a new frame needs to be decoded.
Demez
2022-09-24 09:32:30
kinda related in a way, but I have an image viewer that uses vulkan that I've been working on lol. there really seem to be few image viewers out there that actually use the gpu for more than just image resizing, aside from a ton of qt ones (qimgv and fragment are my favs)
_wb_ Hardware typically needs to be initialized to a certain frame dimension (which has a bit of initialization overhead), then takes a full bitstream as input and returns a full decoded frame. It doesn't work if you want progressive or even just incremental decoding. The initialization cost is ok for video or for cameras where dimensions are fixed, but for web pages it kind of ruins the advantages. At least that's how I understand it. Fact is that no browser does hw decode for any image format, even though jpeg and vp8 hw decode is often available.
2022-09-24 09:43:51
wonder if these hardware jpeg decoders are faster than simply using turbo jpeg in my case, cause I would need to copy the image data to the GPU anyway lol
_wb_
2022-09-24 09:54:21
Who cares about speed in case of jpeg? Software decode is fast enough to do realtime 8k 60fps motion jpeg decoding, probably (didn't check that, but it's pretty fast in any case)
2022-09-24 10:31:30
No idea - maybe because it doesn't take much die area and it's a nice box to tick?
2022-09-24 10:32:45
If gpu textures can be stored as jpeg, it might be nice as a texture format to save vram
Pashi
2022-09-24 10:56:21
Hardware image decoding is a bad idea. Browsers need to decode a bunch of images at once in parallel, with partial updates / progressive loading, and with an API that works everywhere... they don't need a hardware-specific thing to interface with a real hardware decoder that can only do one thing at a time and is inflexible like hardware.
B_Ræd15
2022-09-24 03:16:55
For Nvidia it's about ai throughput, if you can transfer an encoded bitstream to the gpu and then just decode it you save on memory bandwidth if you're say training or running a deep learning model on thousands of images
Demez
2022-09-24 03:21:05
yeah I never set it up to download stuff into thirdparty and build it yet, so you will have to do that yourself. I should take care of that today tbh
2022-09-24 03:21:19
I could give you a build though lol
_wb_
B_Ræd15 For Nvidia it's about ai throughput, if you can transfer an encoded bitstream to the gpu and then just decode it you save on memory bandwidth if you're say training or running a deep learning model on thousands of images
2022-09-24 03:21:39
Right - and in such workflows, you would typically have all the same dimensions, e.g. 512x512
B_Ræd15
2022-09-24 03:23:10
Yeah
Demez
2022-09-24 03:24:26
<@456226577798135808> just remember its very, very WIP https://cdn.discordapp.com/attachments/863102954965696513/1023134832504602624/ImageViewer_build_2022-09-24__03.rar
fab
2022-09-24 03:26:23
Crashed on Windows 7
2022-09-24 03:26:43
Where 's the GUI?
Demez
2022-09-24 03:27:43
it's ImGui, and I've never tested it on windows 7 yet, but I guess it doesn't work on that lol, gonna have to look into that one
2022-09-24 03:32:02
slightly outdated video https://media.discordapp.net/attachments/997371530562514974/1021312321299156992/2022-09-19_02-48-14.mp4
2022-09-24 03:33:00
~~this should really be in <#806898911091753051> lol~~
Silikone
_wb_ Who cares about speed in case of jpeg? Software decode is fast enough to do realtime 8k 60fps motion jpeg decoding, probably (didn't check that, but it's pretty fast in any case)
2022-09-24 03:47:01
Doesn't matter how fast software decoding is if it's less energy efficient. Hardware decoding is a win even if it doesn't improve speeds, but if it's slower, you'll have to pick a poison
_wb_
2022-09-24 03:49:41
Yeah hw is good for things like cameras where energy efficiency is maybe more important.
2022-09-24 03:53:44
For a phone and the use case of images on the web, I think hw is not very useful, it's a small saving compared to all the other stuff that is going on in a browser. But for encoding of phone camera pictures it might be useful (those are larger).
B_Ræd15
2022-09-24 04:02:08
Yeah phone camera apps definitely use hardware acceleration because they know exactly what hardware is in a device
Brinkie Pie
2022-09-24 04:10:42
on this subject, we should differentiate clearly between decoding and encoding.
B_Ræd15
2022-09-24 04:12:21
When I say hardware acceleration here I mean both
Brinkie Pie
2022-09-24 04:17:17
do camera apps even display photos? I can only think of those tiny thumbnails of the last taken image. Regarding displaying the current camera image, I'd assume they just get the raw pixel values from the camera api.
B_Ræd15
2022-09-24 04:18:16
I know for samsung phones the camera and gallery are basically one app
Demez
2022-09-24 11:54:24
oh, well Linux isn't ready yet and I need to write the linux abstraction myself cause I wanted to write stuff in win32 directly lol
2022-09-25 07:50:05
this image viewer was basically created due to disappointment in many various image viewers for not being able to do certain things, or the way they did things
2022-09-25 08:49:02
- keeping the zoom level while going through images (see fragment image viewer, the only one I've seen do this)
- no support for certain formats and no way to add them in (fragment can't be extended)
- basically wanting key bindings and controls similar to fragment image viewer
- potentially being slow with loading images (image glass is actually very slow with jpeg xl, and uses a LOT of cpu for large images, and seemingly loads other ones in the background, using lots of resources)
- being able to delete files to the recycle bin and undo it in the image viewer or file explorer
- respecting image color space
- proper jpeg decoding (fragment has issues with it)
- some dont seem to have drag and drop support
- maybe have control + left click allow you to drag the image onto something?
- auto updating the file list whenever something is changed in the current directory
- maybe even being able to drag in image urls from discord
2022-09-25 08:49:50
kind of a bunch of random things, and probably some i forgot, but whatever
2022-09-25 03:58:15
yeah it is, and that's another issue with many image viewers, linux only
Jyrki Alakuijala
brooke the extent of my worries is that -d 3 can't handle shades of black well
2022-09-25 06:45:28
you can sometimes improve on blacks by using a slightly higher intensity target, say 300 lumens instead of 200
_wb_
2022-09-25 08:05:44
Some SDR displays do 400 nits nowadays so it's not unreasonable to encode images with a higher intensity target. One thing that is not clear to me though, is at what point should an image be rendered in the SDR way (brightness scaled to whatever the display can do) and at what point it should be rendered in the HDR way (brightness has an absolute meaning and should not just be scaled to whatever the display can do).
spider-mario
2022-09-25 08:07:28
this question has been kind of eating at me as well
_wb_
2022-09-25 08:13:39
I think we may just need to define some arbitrary threshold and make a de facto standard that way - we can add a note about it in the spec
2022-09-25 08:14:23
Say < 500 is SDR, >= 500 is HDR
2022-09-25 08:15:36
Otherwise it just remains ambiguous or based on weird heuristics like "the transfer curve looks like PQ so it's probably HDR"
B_Ræd15
2022-09-25 08:15:58
Isn't HDR supposed to be encoded with some kind of table that allows it to be tone mapped to your specific display?
spider-mario
2022-09-25 08:16:09
it doesn’t have to be “_looks like_ PQ”, though, as we can signal PQ specifically
B_Ræd15 Isn't HDR supposed to be encoded with some kind of table that allows it to be tone mapped to your specific display?
2022-09-25 08:17:05
tone mapping is kind of a separate issue, it is possible in principle to transmit an image in HDR only
2022-09-25 08:17:27
the tone mapping can be ± “generic”
_wb_
2022-09-25 08:17:37
Yeah I know but still, I think it makes more sense that PQ with an intensity target of 80 is treated as SDR and some non-PQ gamma tf with intensity target 2000 is treated as HDR than the other way around...
spider-mario
2022-09-25 08:18:22
yeah, possibly
Fraetor
2022-09-25 08:18:32
Conversely, if you open an SDR image on a 1000 nit continuous HDR display, what does a maximal brightness value (eg. 255,255,255) output as?
spider-mario
2022-09-25 08:18:41
if we are going with a threshold, I think I would have it a bit lower than 500
2022-09-25 08:18:56
DisplayHDR starts at 400
_wb_
2022-09-25 08:19:13
Anything above 250 and below 1000 is fine with me
2022-09-25 08:19:21
400 seems like a reasonable choice
2022-09-25 08:20:15
>= 400 is HDR, < 400 is SDR, simple and clean, no?
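A minimal sketch of the heuristic being floated here (the 400-nit cutoff is the assumption under discussion, not anything the spec currently defines):

```python
# Hypothetical rule from the discussion: classify by the signalled intensity
# target alone, regardless of transfer function (PQ vs gamma).
HDR_THRESHOLD_NITS = 400  # assumed cutoff; DisplayHDR certification starts here

def classify_dynamic_range(intensity_target_nits):
    """Return "HDR" iff the intensity target is at or above the cutoff."""
    return "HDR" if intensity_target_nits >= HDR_THRESHOLD_NITS else "SDR"
```

Under this rule, PQ content with an intensity target of 80 would be rendered as SDR, and gamma content with a target of 2000 as HDR.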
spider-mario
Fraetor Conversely, if you open an SDR image on a 1000 nit continuous HDR display, what does a maximal brightness value (eg. 255,255,255) output as?
2022-09-25 08:20:58
most likely, the OS or the display’s firmware would realise that the image is SDR and would not use its maximum luminance for (255, 255, 255)
Fraetor
2022-09-25 08:45:40
Is there a standard brightness to output it to?
spider-mario
2022-09-25 09:02:10
on macOS, SDR white corresponds to PQ’s 100 nits
2022-09-25 09:02:23
on Windows, from what I understand, there is a specific setting
spider-mario on macOS, SDR white corresponds to PQ’s 100 nits
2022-09-25 09:03:29
(while PQ is supposed to be absolute, macOS has no qualm about scaling it with the brightness setting)
Jyrki Alakuijala
2022-09-25 09:50:38
sRGB defines 80 nits, right?
spider-mario
2022-09-25 10:03:43
officially, yes, but it’s probably rare that it is actually respected
a cat
2022-09-25 11:15:50
can someone share the latest draft spec for JPEG 🙂 i want to start with that first
BlueSwordM
2022-09-26 04:17:49
<@532010383041363969> How is the rANS entropy coder parallelized in libjxl and how well does it scale? Last I checked, rANS decoding can be parallelized to an absurd degree, but not sure about rANS encoding.
yurume
2022-09-26 04:25:24
to my knowledge parallel rANS requires multiple streams in parallel, which is not the case in JPEG XL anyway
_wb_
2022-09-26 05:04:01
Yes, only groups are separate streams, within a group there is no parallelism for the entropy decode.
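A sketch of what that buys you (`decode_groups` is a hypothetical helper, and zlib stands in for the real rANS/prefix decoder): groups are independent streams, so they can be decoded in parallel, while decoding within each stream stays sequential.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def decode_groups(group_streams):
    """Decode independent per-group entropy streams in parallel.

    Each element of group_streams is a self-contained compressed stream,
    so a worker pool can decode them concurrently; no parallelism exists
    inside any single stream.
    """
    with ThreadPoolExecutor() as pool:
        return list(pool.map(zlib.decompress, group_streams))
```

The same shape applies to a JXL decoder: group-level parallelism only, one sequential entropy decode per group.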
BlueSwordM
2022-09-26 05:20:18
Oh I see.
2022-09-26 05:20:36
No wonder they didn't take up an ANS entropy coding scheme for AV1.
veluca
BlueSwordM No wonder they didn't take up an ANS entropy coding scheme for AV1.
2022-09-26 06:30:49
I figure that's mostly because of the reversed encoding requirement...
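A minimal rANS sketch illustrating the reversed-encoding point (standard byte-wise renormalization; names are illustrative and this is not libjxl's coder): the encoder must walk the symbols backwards so the decoder can consume the byte stream forwards, which is awkward for a video encoder that wants to emit bits as it goes.

```python
SCALE_BITS = 12               # symbol frequencies sum to 1 << SCALE_BITS
MASK = (1 << SCALE_BITS) - 1
LOW = 1 << 16                 # lower bound of the normalized state interval

def rans_encode(symbols, freq, cum):
    """Encode symbols in REVERSE order; returns (final state, byte stream)."""
    x, out = LOW, bytearray()
    for s in reversed(symbols):
        x_max = ((LOW >> SCALE_BITS) << 8) * freq[s]
        while x >= x_max:              # renormalize: stream out low bytes
            out.append(x & 0xFF)
            x >>= 8
        x = ((x // freq[s]) << SCALE_BITS) + (x % freq[s]) + cum[s]
    return x, bytes(reversed(out))     # decoder reads the bytes forwards

def rans_decode(x, data, n, freq, cum):
    """Decode n symbols forwards, starting from the encoder's final state."""
    slot2sym = [s for s, f in enumerate(freq) for _ in range(f)]
    pos, syms = 0, []
    for _ in range(n):
        slot = x & MASK
        s = slot2sym[slot]
        x = freq[s] * (x >> SCALE_BITS) + slot - cum[s]
        while x < LOW and pos < len(data):   # renormalize from the stream
            x = (x << 8) | data[pos]
            pos += 1
        syms.append(s)
    return syms
```

Note the asymmetry: `rans_encode` iterates over `reversed(symbols)` and then reverses its output bytes, while the decoder is a plain forward pass.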
Pashi
2022-09-26 07:27:49
Are JXL files supposed to be bigger than GIF?
2022-09-26 07:30:43
Is GIF just the ultimate format in terms of compression efficiency? I thought I could save space by converting GIF to JXL
2022-09-26 07:31:28
But I guess there's just no way to beat the supremely efficient state of the art GIF algorithm
_wb_
2022-09-26 07:31:31
lol
2022-09-26 07:32:58
some gif optimizers do things in a way that happens to compress well with gif but might be a bit tricky to reproduce just from the pixels; e.g. the palette ordering is lost in translation when converting gif input files to libjxl input buffers
yurume
2022-09-26 07:33:37
can we make a specialized encoder just for GIF?
_wb_
2022-09-26 07:34:22
sure, could make a specialized one that doesn't use the libjxl api but directly translates gif palettes to jxl palettes and maybe does something clever to convert lzw to lz77
2022-09-26 07:34:41
it's just quite a bit of effort and I wonder if it's worth it
2022-09-26 07:35:05
it's probably a better workflow to avoid GIF and the color reduction that comes with it completely
Pashi
2022-09-26 07:35:41
For artwork and animations it's nice
2022-09-26 07:35:47
Lots of GIFs
yurume
2022-09-26 07:35:49
I think it is a worthy take (but not a high priority) for naysayers, so to say
Pashi
2022-09-26 07:37:27
Does an indexed palette even lead to space savings?
_wb_
2022-09-26 07:37:50
Of course it can
Pashi
2022-09-26 07:38:13
Considering compression algos can recognize commonly used byte sequences
_wb_
2022-09-26 07:39:13
Jxl encodes planar, not interleaved, so repetition of pixel values has to be encoded for each channel when not doing palette
2022-09-26 07:40:28
But even in interleaved formats like png and webp, palette helps since it allows both distance and literals to be smaller
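A toy sketch of palettization (illustrative only): each pixel becomes one small index literal instead of three channel literals, which also keeps match distances short.

```python
def palettize(rgb_pixels):
    """Build a palette and index list from a list of (r, g, b) tuples.

    Palette order here is first-occurrence; a real encoder would pick an
    ordering that helps prediction (e.g. roughly sorted by luma).
    """
    palette, index_of, indices = [], {}, []
    for px in rgb_pixels:
        if px not in index_of:
            index_of[px] = len(palette)
            palette.append(px)
        indices.append(index_of[px])
    return palette, indices
```

For an image with k distinct colors, the pixel data shrinks from 3 bytes per pixel to one index drawn from an alphabet of size k, plus a k-entry palette.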
Pashi
2022-09-26 07:43:00
For animated GIF does cjxl create an indexed JXL? the cmdline output doesn't mention it.
_wb_
2022-09-26 07:44:45
Yes, except at low effort
2022-09-26 07:45:15
But it will reconstruct a new palette with probably a different order, which may be better or worse
Pashi
_wb_ sure, could make a specialized one that doesn't use the libjxl api but directly translates gif palettes to jxl palettes and maybe does something clever to convert lzw to lz77
2022-09-26 07:46:58
Also why make one that doesn't use JXL API when you could add a function for specifying a custom palette and ordering...
2022-09-26 07:47:51
Orrrr, maybe just use a different algorithm for palette ordering...
_wb_
2022-09-26 07:51:37
it is currently likely to do YCoCg and then a lexicographic ordering, so sorted on luma, more or less, which is not bad
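A sketch of that ordering, assuming the lossless YCoCg-R transform (function names are illustrative, not libjxl's):

```python
def rgb_to_ycocg_r(r, g, b):
    """Lossless YCoCg-R forward transform (integer, reversible)."""
    co = r - b
    tmp = b + (co >> 1)
    cg = g - tmp
    y = tmp + (cg >> 1)
    return y, co, cg

def sort_palette_lumaish(palette):
    """Lexicographic order in (Y, Co, Cg): roughly sorted by luma."""
    return sorted(palette, key=lambda rgb: rgb_to_ycocg_r(*rgb))
```

Sorting on the Y component first means neighboring palette indices tend to have similar brightness, which helps the index plane compress.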
Demez
2022-09-26 07:56:17
imo we should just use a video codec, and in a format that is video only and supports all looping parameters gif does
2022-09-26 07:56:36
would be way better for space saving I imagine
Pashi
2022-09-26 07:59:16
Except for oekaki art
2022-09-26 08:00:04
mspaint-style palettized art and animations
_wb_
2022-09-26 08:19:06
for that, jxl should be better than gif in principle, unless the gif was created with specific trickery that works well in only gif — I don't know what they are doing but I can imagine e.g. dithering patterns that are particularly lzw-friendly
Pashi
2022-09-26 08:24:48
Really weird dithering that looks strange when you zoom in
_wb_
2022-09-26 08:27:31
some things can look like random noise to one encoder (so very hard to compress) while to another it happens to have a magic pattern that compresses to nothing
2022-09-26 08:28:02
see <#824000991891554375> for some extreme examples of such behavior 🙂
Jyrki Alakuijala
2022-09-26 09:48:23
https://twitter.com/inkscape/status/1574010475024683008
Pashi Does an indexed palette even lead to space savings?
2022-09-26 09:50:03
not for photographs; for pixel graphics -- yes. sometimes pixel graphics and photos (or photo-like features like gradients or noisy gradients) are mixed with each other; then it is good if the failure mode when representing photos/gradients as pixel graphics is not too bad
2022-09-26 09:51:40
palettized graphics can allow more pixels to be used for context (given a context model of a fixed size), and context trees can work with a single channel
JendaLinda
Pashi Really weird dithering that looks strange when you zoom in
2022-09-26 10:40:27
There is also a lossy algorithm for GIF that exploits the LZW compression and hides artifacts in the dithering. Such GIFs are impossible to compress losslessly with anything else.
2022-09-26 11:01:03
For pixel art and "ms paint" art with thousands of colors it's worth trying to increase the size of color palette, it may significantly improve compression.
ananke
2022-09-26 03:02:23
hi, I'm trying to use jxl to create some images with more than 4 channels and apparently you can only do it using the API. however, i haven't been able to set up the library using CMake, like it just would not import the functions. I think I'm having some issues with the target_link_libraries line but there's no .lib file and targeting the .dll file doesn't seem to work. any help would be very appreciated!
spider-mario
2022-09-26 03:27:06
are you using libjxl as a submodule, or installing it system-wide?
2022-09-26 03:27:23
if using MSYS2, there is a package: https://packages.msys2.org/base/mingw-w64-libjxl
ananke
spider-mario are you using libjxl as a submodule, or installing it system-wide?
2022-09-26 03:54:58
trying to use it as a submodule. the issue with the msys2 package seems to be that it's still on version 0.6, and channels were only supported in 0.7 afaik
spider-mario
2022-09-26 04:03:50
maybe it’s worth trying to take the PKGBUILD for 0.6 and simply updating it to 0.7 and rebuilding the package
2022-09-26 04:04:41
oh, as it turns out, the PKGBUILD has already been updated even if the binary package has not: https://github.com/msys2/MINGW-packages/tree/master/mingw-w64-libjxl
Pashi
2022-09-26 05:46:31
I hate how poorly written the average PKGBUILD on Arch is
JendaLinda There is also lossy algorithm for GIF, that exploits the LZW compression and hides arifacts in dithering. Such GIFs are impossible to losslessly compress with anything else.
2022-09-26 05:48:02
Sure, it's pretty simple to reduce the size further by turning it into PNG typically...
2022-09-26 05:48:37
Also pngquant is pretty good at reducing size extremely smol using lossy quantization
JendaLinda
Pashi Sure, it's pretty simple to reduce the size further by turning it into PNG typically...
2022-09-26 05:57:31
I've tried that. Lossless codecs failed miserably when compressing lossy GIFs. Lossy codecs could decrease the file size by reducing the quality further.
Jyrki Alakuijala
JendaLinda I've tried that. Lossless codecs failed miserably when compressing lossy GIFs. Lossy codecs could decrease the file size by reducing the quality further.
2022-09-27 02:49:30
lossy compression for png, near-lossless webp lossless, gif etc. is often done in a way where the loss is tightly tied to the format -- and because of that, other formats can't compete by trying to reproduce that exact kind of loss
Pashi
2022-09-27 03:39:42
Well there's usually a simpler way of compiling a package than the method in most arch PKGBUILDs. Usually they are filled with a lot of unnecessary special sauce instead of following the upstream instructions on how to compile a given project. Even though Arch is supposedly the KISS distro.
2022-09-27 03:40:54
Also packages are often not split, like the avahi package being one of the most prominent offenders, including a lot of unneccesary test/demo applications in the same package as "avahi"
2022-09-27 03:41:24
Even though a lot of packages depend on avahi but nobody usually wants the demo programs installed
2022-09-27 03:42:34
Also as a separate problem it's kind of a bummer that pkgbuilds don't have build switches like homebrew does
2022-09-27 03:42:50
Or even gentoo
2022-09-27 03:43:22
It's nice to be able to enable or disable or customize compile time options
2022-09-27 03:43:35
But pacman was not designed for that
2022-09-27 03:45:22
Arch used to be a unique distro whose distinguishing features were the rc.conf file and pacman, now it's fedora with sloppier packagers
2022-09-27 03:45:47
/hottake
2022-09-27 03:47:09
To be fair Arch never supported changing compile time options without making modifications to the PKGBUILD which is kinda lame especially since it's always bragged about having such a great build system likening itself to BSD ports
2022-09-27 03:47:47
It's not even close to BSD
2022-09-27 03:48:18
But it used to be closer when it had an rc.conf
2022-09-27 03:51:48
Only because those follow insane clown logic
2022-09-27 03:51:59
That's like saying at least it's better than Hitler
2022-09-27 03:52:15
It's a pretty low bar
2022-09-27 03:52:24
To jump over
2022-09-27 03:52:58
But yes I agree Arch is a lot more convenient for me to use than Debian or anything related to it
2022-09-27 03:56:06
Like I said arch pkgbuilds typically include a lot of special sauce and making a split package is relatively simple. The consequence of them not doing this is that whenever something has a dependency on avahi you are forced to install these useless programs and have ugly icons for them in the xdg applications menu
2022-09-27 03:57:24
Other distros like debian go the other extreme and split up packages excessively.
2022-09-27 03:57:47
Everything that could possibly be done wrong Debian does wrong
2022-09-27 03:58:16
It's amazing that it's even used at all...
spider-mario
2022-09-27 01:49:13
one thing that would be nice is if PKGBUILDs for split packages could still use just one package() function
2022-09-27 01:49:55
e.g. `make install` everything and then *move* some of the stuff to another subpackage
Pashi
2022-09-27 11:57:32
Yeah it's kind of crap.
2022-09-28 12:00:16
But only because of the way it's defined I think. The order the (split) package functions will run in is undefined, and it is also undefined how to access or copy files from a common folder.
2022-09-28 12:01:23
Like for example install to a common folder and then, as you say, copy select parts to each package-specific folder
2022-09-28 12:01:47
Well, copy or move
2022-09-28 12:03:39
But it's all undefined, when the package specific folders are created, and accessing parent and sibling folders, there's no official rules you should follow it seems
improver
2022-09-28 08:45:10
makefiles could be more rich too tbh, eg `make install-docs` `make install-demos` `make install-programname`
Traneptora
2022-09-28 01:27:47
debian is for people who like using software from 4 years ago
_wb_
2022-09-28 02:34:44
you mean debian-stable?
2022-09-28 02:34:58
debian-unstable seems to be quite OK imo
spider-mario
2022-09-28 03:32:38
I’ve heard debian stable being referred to as “debian obsolete”
_wb_
2022-09-28 03:43:50
it's kind of OK for production environments I think, where you don't need cutting edge versions of everything and perhaps prefer to have battle-tested older versions
Pashi
2022-09-28 06:47:41
How do you make a Debian package?
2022-09-28 06:49:30
What boggles my mind is how Debian became so popular, with the largest number of packages, when I still don't even know how to make a Debian package
2022-09-28 06:50:09
I know how to make packages and scripts for a whole bunch of other systems, even RPM
2022-09-28 06:50:27
But I still to this day have never figured out how you're supposed to do it for Debian
2022-09-28 06:50:51
Because I've tried and I guess it's just beyond my comprehension.
2022-09-28 06:51:07
I don't understand why it's so hard for me to understand or for others to explain
2022-09-28 06:51:51
It should be easy because it's the most popular packaging format right?
2022-09-28 06:51:57
Maybe I'm just stupid
2022-09-28 06:52:16
Or maybe I'm taking crazy pills
2022-09-28 06:52:49
Or maybe the Debian build system is just incomprehensible and we are living in clown world.
2022-09-28 06:53:14
Either way I wish everyone would stop using Debian, it's horrible.
2022-09-28 06:53:53
And I always hear about people having bad experiences with Linux because of a Debian-specific issue
2022-09-28 06:55:22
Same with GNU project actually. GNU software and drama llamas are the source of a lot of issues too.
2022-09-28 06:56:14
People should be thinking about switching to musl libc and ksh
2022-09-28 06:56:32
It's just better.
2022-09-28 06:56:56
Their software is not good.
2022-09-28 06:58:06
Linux kernel isn't that good either honestly. It's a bunch of hardware drivers but the glue holding those drivers together isn't very good.
2022-09-28 06:59:57
The grsecurity guys have literally given up all hope of ever getting their patches upstreamed because of how hard it is to make improvements to the kernel.
2022-09-28 07:00:53
It's just a bunch of sloppily glued together drivers in a vaguely unix like fashion.
2022-09-28 07:01:50
With no other real overarching idea about how the system is designed.
2022-09-28 07:02:34
The only good things about Linux is that it's a convenient collection of driver code
2022-09-28 07:03:32
Granted if any one of those drivers malfunctions the entire system is compromised because the kernel has no design at all
2022-09-28 07:04:58
No microkernel features aside from being able to load and unload driver code.
2022-09-28 07:06:11
No separation and containment and fault tolerance or anything really, it's hardly a kernel at all, just the bare minimum necessary to glue a bunch of drivers together into a vaguely unix like shape
2022-09-28 07:06:57
So yeah what I'm saying is it's sloppy and not particularly good or special.
2022-09-28 07:07:29
and if people moved to a different kernel that would probably be a good thing too.
2022-09-28 07:07:46
So long as it still had hardware support.
2022-09-28 07:10:47
For some reason all of the "original Linux ideas" tend to be terrible and convoluted ways of doing things that are inefficient and hard to use and way more complicated than existing solutions, like udev and dbus...
Brinkie Pie
2022-09-28 07:10:48
I was thinking about making a debian-bashing thread earlier today; guess that would've been a good idea after all
Pashi
2022-09-28 07:12:18
Or the kernel APIs
2022-09-28 07:12:49
Like Linux inventing their own random number API that's worse in every way compared to existing work in OpenBSD
BlueSwordM
Pashi No microkernel features aside from being able to load and unload driver code.
2022-09-28 07:14:44
That is changing a lot in recent years.
Fraetor
2022-09-28 07:38:12
I think that Debian has a lot of popularity because it has been around for a long time. I think slackware is the only major(?) distro that has been around longer and is still maintained.
2022-09-28 07:38:58
I run debian on servers, and the stability there is nice.
2022-09-28 07:39:50
The only alternative long term stable distro is RHEL, which has an even slower cycle than debian.
_wb_
2022-09-28 07:41:53
Let's take this to <#806898911091753051> or a forum thread in <#1019989652079394826>
2022-09-28 08:20:00
http://sneyers.info/progressive/
2022-09-28 08:20:38
interesting to see the difference between chrome stable (which doesn't do progressive jxl yet, only group-by-group decode) and chrome canary (where progressive works)
2022-09-28 08:25:01
load that page in chrome canary with network throttling (e.g. "fast 3G") and then tell me about how WebP and AVIF are improving the web experience
2022-09-28 08:26:41
Bonus points if you can do that with a straight face
paperboyo
_wb_ http://sneyers.info/progressive/
2022-09-28 08:46:44
For me personally, the biggest feature of JXL is progressive, clearly shown winning here for the supporting formats. I get that qualities were chosen for q equivalence, where AVIF already loses some detail while being ⅓ fatter than JXL, but I would like to see a same-filesize comparison too (the child in me would want full equalizer knobs :-). q: is it encoding or (Chrome-decided) decoding that makes a 60% weightier JPEG appear faster/nicer? (nicer being subjective, yeah)
_wb_
2022-09-28 08:55:56
https://sneyers.info/progressive/index-samesize.html
2022-09-28 08:56:05
that's with roughly the same file sizes
paperboyo
2022-09-28 09:04:31
👏 AVIF: this is some plastic cat! 👏
JendaLinda
2022-09-28 09:29:43
I have an impression that AV1 codec is tuned for cartoons.
BlueSwordM
JendaLinda I have an impression that AV1 codec is tuned for cartoons.
2022-09-28 09:31:53
Eh, it's more like the default settings for avifenc all-intra are slightly suboptimal.
2022-09-28 09:32:15
IMO, there are a decent number of settings that, if made default, would make aomenc all-intra look a lot stronger.
paperboyo
paperboyo 👏 AVIF: this is some plastic cat! 👏
2022-09-28 09:39:11
This was actually childish of me. I am not happy about anyone's plastic. I wish everyone the best. And yeah: I haven't asked about encode time/settings (and have too little experimental experience myself to have well-founded opinions of my own).
JendaLinda
2022-09-28 09:48:22
Poorly tuned encoders ruin the whole format.
Pashi
2022-09-29 01:18:23
Theora could be tuned to look a lot better but nobody would know because the default encoder is pretty bad
2022-09-29 01:19:14
https://people.xiph.org/~xiphmont/demo/theora/demo9.html
Deleted User
2022-10-03 06:30:23
Does JXL support transparency?
JendaLinda
2022-10-03 07:10:48
Yes it does.
Deleted User
2022-10-03 10:34:37
Is jxl lossless?
2022-10-03 10:35:44
Wait is there a faq?
spider-mario
Is jxl lossless?
2022-10-03 10:35:52
it can be
Deleted User
spider-mario it can be
2022-10-03 10:36:24
Is it a setting?
Traneptora
Is it a setting?
2022-10-03 10:50:36
when encoding with `cjxl` you set the distance to `0` e.g. `cjxl -d 0 input.png output.jxl`
Deleted User
Traneptora when encoding with `cjxl` you set the distance to `0` e.g. `cjxl -d 0 input.png output.jxl`
2022-10-03 10:53:45
Ah. So is PNG generally better for vector art? Or does the random access aspect of JXL make it superior? (i.e. cropping, changing resolution)
Traneptora
2022-10-03 10:54:37
png is not better than jxl for practically anything except extremely flat horizontal repetitive areas
2022-10-03 10:54:53
and in that scenario JXL encoder could be improved, it just hasn't been yet
Pashi
2022-10-03 10:45:05
It's better for compatibility... for now...
2022-10-03 10:45:18
But yeah PNG kinda is a bad format
2022-10-03 10:46:07
Even compared to bzipped pixel data in farbfeld format
2022-10-03 10:48:37
https://jpegxl.io/articles/faq/
2022-10-03 10:54:38
Basically, JXL's strengths are lossless and high-fidelity lossy compression, plus support for layers, animations, alpha channels, additional non-color channels, and HDR.
2022-10-03 10:54:52
Plus more.
2022-10-03 10:55:04
Like progressive decoding
2022-10-03 10:57:10
Also if you have ever noticed distracting color banding and blockiness in other formats, JXL basically never does that.
2022-10-04 02:42:00
At least not to colors
2022-10-04 02:42:49
It's really good at preserving colors; photographs of the sky, for example, which other lossy codecs do terribly at. Probably because of the XYB encoding
2022-10-04 02:43:03
Even avif sucks in that regard
Jyrki Alakuijala
2022-10-04 03:02:56
thank you for appreciating it -- it was really important for me during the development
2022-10-04 03:03:32
overprovisioning the DC in comparison to what metrics say -- and in general trusting our eyes more than numbers
Pashi
Jyrki Alakuijala overprovisioning the dc in comparison what metrics say -- and in general trusting our eyes more than numbers
2022-10-04 05:53:12
Hell yes. I feel like trusting bad numbers such as PSNR is a big part of why many codecs look so ugly
2022-10-04 05:57:07
We just need to find the right math that describes what our eyes actually notice, so we can write an appropriate fitness function like Butteraugli to use to tune encoders and specify a fidelity target
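[The tuning loop Pashi describes can be sketched as a binary search on an encoder's quality knob until a perceptual metric hits a fidelity target. This is an illustrative toy, not libjxl code: `encode` and `metric_distance` are hypothetical stand-ins for a real encoder and a Butteraugli-style distance.]

```python
def encode(quality: float) -> float:
    # Stand-in encoder: pretend higher quality -> lower perceptual distance.
    # A real pipeline would encode, decode, and compare against the original.
    return 10.0 / (1.0 + quality)

def metric_distance(decoded: float) -> float:
    # Stand-in for a perceptual fitness function (e.g. Butteraugli distance).
    return decoded

def tune_quality(target: float, lo: float = 0.0, hi: float = 100.0,
                 tol: float = 1e-4) -> float:
    """Binary-search the lowest quality whose metric distance <= target."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if metric_distance(encode(mid)) <= target:
            hi = mid   # good enough: try spending fewer bits
        else:
            lo = mid   # too distorted: raise quality
    return hi

q = tune_quality(target=1.0)  # with this toy model, converges near q = 9
```

[The same loop works with any monotone quality knob; the metric choice is the whole game, which is Pashi's point.]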
_wb_
2022-10-04 06:19:56
I am currently trying to figure out if the dc frame and non-dc frame approach are well calibrated. Ssimulacra2 thinks the dc frame approach (--progressive_dc=1) is currently lower quality; butteraugli and ssimulacra1 don't think so.
2022-10-04 06:20:50
So I am trying to figure out if it is ssimulacra2 being hypersensitive to DC error or butteraugli/ssimulacra1 not being sensitive enough to errors in DC
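[The disagreement _wb_ describes can be made concrete by splitting error into a "DC" part (8×8 block means, roughly what a DC frame carries) and a zero-mean residual; a metric weights these differently depending on its design. This is a toy illustration with synthetic arrays, not how ssimulacra2 or butteraugli actually compute anything.]

```python
import numpy as np

def block_means(img: np.ndarray, n: int = 8) -> np.ndarray:
    # Average each n x n block -> coarse "DC" view of the image.
    h, w = img.shape
    return img.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

ref = np.zeros((16, 16))

# Distortion A: a uniform offset on one 8x8 block -> pure DC error.
a = ref.copy()
a[:8, :8] += 1.0

# Distortion B: zero-mean alternation inside one block -> pure AC error.
b = ref.copy()
b[:8, :8] += np.tile([1.0, -1.0], (8, 4))

dc_err_a = np.abs(block_means(a) - block_means(ref)).max()  # nonzero
dc_err_b = np.abs(block_means(b) - block_means(ref)).max()  # zero
```

[Both distortions have the same per-pixel magnitude, yet only A shows up in the DC view; a metric that is hypersensitive (or undersensitive) to the DC component will rank the two encodes differently.]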
Jyrki Alakuijala
2022-10-04 06:21:01
the balance between ac and dc is a bit arbitrary -- it depends a lot on the viewing distance/zoom level
2022-10-04 06:21:36
viewing distance/zoom are partially cultural/environmental things rather than just related to the format
2022-10-04 06:22:03
modern apps often limit zoom to 5x or so