JPEG XL

libjxl

_wb_
2023-02-05 11:56:55
No idea how accurate that image is, also note that sRGB does not cover the full gamut of visible colors
Demiurge
2023-02-06 08:53:02
https://artifacts.lucaversari.it/libjxl/libjxl/latest/ For a supposed "static" build, it seems like it's missing some DLLs.
Kleis Auke
Demiurge https://artifacts.lucaversari.it/libjxl/libjxl/latest/ For a supposed "static" build, it seems like it's missing some DLLs.
2023-02-06 09:28:51
See: https://discord.com/channels/794206087879852103/804324493420920833/1071129331448942767
_wb_
2023-02-06 05:47:04
<@179701849576833024> <@987371399867945100> VarDCT e3 (and libjxl-tiny) are still using ANS, not Huffman, right? How much trouble would it be to make them use Huffman and do entropy coding fjxl style? (sample some blocks to estimate a distribution, then do the main encoding in one pass instead of first tokenizing everything)
veluca
2023-02-06 05:49:21
Eh, it would be a bit of a problem
_wb_
2023-02-06 05:50:58
to me it looks like a big code plumbing headache since basically everything is designed around "first tokenize everything, then compute histograms and context map, and only then spit out bitstream"
2023-02-06 05:54:34
maybe an easier route is to make a "lossy fjxl" as a separate project, also to see if it is even worth doing in the first place (if compression sucks when not using ANS, it may not be something we should add to libjxl)
veluca
2023-02-06 06:10:29
IIRC Huffman loses 7-11% compared to ANS
_wb_
2023-02-06 06:10:30
Likely all the symbols you'll need to entropy code in vardct fit within 16-bit (probably even in 12-bit or so when d > 0.5), so the huffman writer of fjxl could be used...
veluca
2023-02-06 06:10:33
that's easy to try though
_wb_
2023-02-06 06:12:50
I would expect the gap to shrink as quality goes up, since the benefit of being able to represent 0 in less than one bit gets smaller when 0 occurs less often...
veluca
2023-02-06 06:14:23
https://github.com/libjxl/libjxl/blob/main/lib/jxl/enc_ans.cc#L1556 set this to true and try 😛
_wb_
2023-02-06 06:15:13
But then again fast lossy encode imo mostly makes sense at high quality, so you can later do a transcode at the actual quality you want. Doing e1 lossy d3 doesn't make that much sense to me...
veluca https://github.com/libjxl/libjxl/blob/main/lib/jxl/enc_ans.cc#L1556 set this to true and try 😛
2023-02-06 06:18:23
That will still use exact histograms though - what I would propose is to avoid tokenization completely and use fixed or sampled histograms, which will have some additional cost besides the cost of huffman vs ans...
veluca
2023-02-06 06:18:39
meh, that's not going to be a huge difference
2023-02-06 06:19:05
in fjxl sampling 100% makes pretty much no difference wrt sampling 2%
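That fjxl observation is easy to sanity-check outside libjxl. Here is a toy C program (not libjxl code; the geometric source is just a stand-in for skewed residual statistics) that estimates a histogram from a 2% sample and compares the Shannon cost it predicts against the cost from the exact histogram:
```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define NSYM 64
#define N 1000000

/* total bits needed to code `data` with probabilities taken from `hist` */
static double cost_bits(const double hist[NSYM], const int *data, int n) {
  double total = 0.0, bits = 0.0;
  for (int s = 0; s < NSYM; s++) total += hist[s];
  for (int i = 0; i < n; i++) {
    /* give never-sampled symbols a small floor probability */
    double p = hist[data[i]] > 0.0 ? hist[data[i]] / total : 0.5 / total;
    bits -= log2(p);
  }
  return bits;
}

int main(void) {
  static int data[N];
  double full[NSYM] = {0}, sampled[NSYM] = {0};
  /* geometric-ish source: mostly small symbols, like quantized residuals */
  for (int i = 0; i < N; i++) {
    double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    int s = (int)(-4.0 * log(u));
    data[i] = s >= NSYM ? NSYM - 1 : s;
  }
  for (int i = 0; i < N; i++) full[data[i]]++;
  for (int i = 0; i < N; i += 50) sampled[data[i]]++; /* 2% sample */
  printf("cost with exact histogram: %.0f bits\n", cost_bits(full, data, N));
  printf("cost with 2%% sample:       %.0f bits\n", cost_bits(sampled, data, N));
  return 0;
}
```
For a stream of this size the two costs typically agree to well within 1%, consistent with the 2%-sampling observation above.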
_wb_
2023-02-06 06:22:00
Yeah but lossy might have more skewed histograms than lossless, or something
2023-02-06 06:23:15
Anyway maybe it could also be interesting to try sampling histos when using ANS, and encoding backwards, to avoid two passes and the extra memory needed to tokenize everything
spider-mario
Traneptora unlikely as XYB is modeled after human perception, not after tristimulus RGB hardware like CCDs
2023-02-06 08:07:34
(just as a tangential FYI, CMOS overtook CCD 15 years ago)
daniilmaks
Demiurge We wouldn't have RGB if we didn't have LMS cones in our eyes
2023-02-07 01:47:00
Don't mind me, I'm late to the convo... You're not wrong there; however, human visual perception occurs in the brain, not the eyes. Notice "visual" here is an adjective, not the subject.
Demiurge
2023-02-07 02:20:31
I should probably take some more photographs and test the performance of JXL in retaining detail in dark shadows, since it seems the encoder is currently surprisingly poor at that.
2023-02-07 02:21:48
Maybe the image I posted is too obnoxiously large and it's too difficult for most people to concatenate the file
Demiurge
2023-02-07 02:24:08
hopefully others can notice that this AVIF is retaining substantially more detail and texture in the shadows compared to the JXL. Every other part of the JXL looks visually lossless except for the shaded regions.
2023-02-07 02:24:46
So it's not perceptually uniform quantization since some parts of the image look noticeably degraded compared to other parts that look transparent
2023-02-07 09:36:55
djxl says -v gives more information with -h but -v is not accepted as an argument to djxl at all
2023-02-07 09:37:29
also djxl -h does not say what argument to use if you do not want to write an output file
spider-mario
Demiurge Is that a good idea of what those colors look like at that frequency? Because the spot where S is peaking is a lot darker than the spot to the right of it. I would expect the brightest-looking spot to be where the peak is.
2023-02-07 03:18:28
perhaps one way to approach this would be: for each wavelength λ, take XYZ = (x̄(λ), ỹ(λ), z̄(λ)) (https://en.wikipedia.org/wiki/CIE_1931_color_space#Color_matching_functions), and gamut-map that to your monitor’s colorspace
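A minimal sketch of that suggestion in C, with plain per-channel clipping standing in for a real gamut-mapping step; the hard-coded triple is the CMF value near 550 nm from the CIE 1931 2° observer table, and the matrix is the standard XYZ-to-linear-sRGB one:
```c
#include <math.h>
#include <stdio.h>

/* linear -> sRGB transfer function */
static double srgb_encode(double c) {
  return c <= 0.0031308 ? 12.92 * c : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
}

/* XYZ (D65) -> sRGB, with naive clipping as the "gamut mapping" */
static void xyz_to_srgb(double X, double Y, double Z, double rgb[3]) {
  const double lin[3] = {
      3.2406 * X - 1.5372 * Y - 0.4986 * Z,
      -0.9689 * X + 1.8758 * Y + 0.0415 * Z,
      0.0557 * X - 0.2040 * Y + 1.0570 * Z,
  };
  for (int i = 0; i < 3; i++) {
    double c = lin[i] < 0.0 ? 0.0 : lin[i] > 1.0 ? 1.0 : lin[i];
    rgb[i] = srgb_encode(c);
  }
}

int main(void) {
  /* CMF triple (x̄, ȳ, z̄) at ~550 nm, CIE 1931 2° observer */
  double rgb[3];
  xyz_to_srgb(0.4334, 0.9950, 0.0087, rgb);
  printf("R=%.3f G=%.3f B=%.3f\n", rgb[0], rgb[1], rgb[2]);
  return 0;
}
```
For monochromatic 550 nm this clips to pure green, which is roughly the closest a finite-gamut monitor can get.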
Moritz Firsching
Demiurge djxl says -v gives more information with -h but -v is not accepted as an argument to djxl at all
2023-02-10 01:04:17
This seems like a bug, for djxl there are not that many options and all are displayed with `-h`, and `-v` does not work as you pointed out. Let me fix that...
2023-02-10 01:47:32
Removing two characters should fix it: https://github.com/libjxl/libjxl/pull/2181
Demiurge
2023-02-11 12:29:42
It would also be a good idea to remove the misleading help message encouraging the use of a flag that doesn't exist
2023-02-11 12:31:32
looks like it was already removed...
2023-02-11 12:33:17
actually no it wasn't...
2023-02-11 08:02:17
Just a follow up on the picture I took... I encoded a HEIC version and a JXL -q 85 version. Even at such a high quality setting, JXL noticeably destroys the texture of the shaded regions in the image compared to other codecs such as AVIF and HEIC.
2023-02-11 08:03:51
It's unexpected because ever since JPEG-XR, codecs started to focus on retaining detail in dark areas and got a lot better at that.
2023-02-11 08:06:59
Maybe next I'll have to compare JXL to JXR ;(
afed
2023-02-11 08:07:28
it depends a lot on the display settings; also, maybe `--intensity_target=` will help
Demiurge
2023-02-11 08:12:09
No idea how to use that setting...
2023-02-11 08:12:23
I know it takes an integer but I don't know what the scale is.
2023-02-11 08:12:41
Hmm, seems to be measured in nits.
2023-02-11 08:12:49
Well I still don't know what the scale of that is.
2023-02-11 08:13:40
forgive my ignorance.
afed
2023-02-11 08:17:23
`Upper bound on the intensity level present in the image in nits. Leaving this set to its default of 0 lets libjxl choose a sensible default value based on the color encoding.` the default is 300 or 400; maybe try raising it to something like 600-1000 (or even more) and compare, just for the experiments
2023-02-11 08:21:41
If that doesn't help, then just increase the quality, because quality for jxl is still optimized for a given viewing distance, and if comparing from a closer distance the difference is likely to be noticeable
_wb_
2023-02-11 08:32:46
The default is 80, no? Quite low iirc
2023-02-11 08:33:49
Could you share the original image?
Demiurge
2023-02-11 08:33:55
Wow I cannot find any software for reading or writing JXR
2023-02-11 08:34:26
Yes, I shared the original image earlier; it's large, so I split it into 2 parts so you can restore it by concatenation
Demiurge Here is the original file, truncated to fit within discord limits. Restore with `cat`
2023-02-11 08:34:53
https://discord.com/channels/794206087879852103/804324493420920833/1071730694147035216
_wb_
2023-02-11 08:36:00
Ah ok thanks. No time to look at it now, but it would be worth looking into; it's weird if you're saying jxl is destroying darks while avif/heic work well - that's the opposite of what should happen
2023-02-11 08:36:21
<@532010383041363969> also might want to take a look
afed
_wb_ The default is 80, no? Quite low iirc
2023-02-11 08:36:29
probably, I just remember how it is typically used in libaom for `--tune=butteraugli`
Demiurge
2023-02-11 08:38:54
Well I posted an AVIF version as well as a HEIC encode. Sorry the file is so obnoxiously large.
2023-02-11 08:39:27
But you can clearly see that JXL is by far the worst at retaining texture in the shaded area.
2023-02-11 08:39:46
Maybe something changed recently
2023-02-11 08:40:01
with libjxl's tuning
_wb_
Demiurge Well I still don't know what the scale of that is.
2023-02-11 08:40:07
It is basically how bright your display goes. 'Officially', sRGB assumes a display where the brightest white is 80 nits. Current displays tend to go significantly brighter than the displays of the 1990s for which sRGB was designed. E.g. my laptop goes to 400 nits, and HDR screens can go to 1000 nits or more. Since everything gets scaled to the display brightness, artifacts in the darks will become a lot more visible on a brighter display than on a dim one.
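(For reference: in recent cjxl builds this is set with a command-line flag, e.g. `cjxl input.png output.jxl -d 1 --intensity_target=400`, the value being in nits.)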
Demiurge
2023-02-11 08:41:47
Makes sense to me. So it sounds like a sensible scale would be near the range of 400-1000, give or take.
_wb_
2023-02-11 08:45:51
It all depends on the expected viewing conditions and the intended meaning of the image. If you set the intensity target to 1000, it will also make the brights very bright on an HDR screen, so that's probably not what you want to do.
Demiurge
2023-02-11 08:50:50
lol I have to convert JXR to BMP in order to even view it...
2023-02-11 08:51:18
But anyways luckily JXL is not worse than JXR
2023-02-11 08:51:48
That would be pretty embarrassing
_wb_ It all depends on the expected viewing conditions and the intended meaning of the image. If you set the intensity target to 1000, it will also make the brights very bright on an HDR screen, so that's probably not what you want to do.
2023-02-11 08:53:31
Really, that option will change the perceived brightness of the image as well as change the psymodel?
2023-02-11 08:56:29
I tried intensity_target 400 and it significantly increased the filesize
2023-02-11 08:56:44
It also significantly reduced the blurry awfulness.
2023-02-11 09:01:04
Even setting it to 500 and adjusting the distance to match the same bitrate again, heic still looks less blurry.
2023-02-11 09:02:11
I think it just needs better RDO and noise masking heuristics.
_wb_
Demiurge Really, that option will change the perceived brightness of the image as well as change the psymodel?
2023-02-11 09:03:02
<@604964375924834314> can explain better but afaik yes, it defines how bright white is supposed to be, which will affect both how it is supposed to be displayed and how things are compressed. But for "non HDR" values, I think it doesn't really change how the image gets displayed, since SDR screens don't have a defined brightness anyway.
Demiurge
2023-02-11 09:08:16
It just, for some reason, is treating the whole image really well except for my shadow. It's encoding the rest of the image at d1.45 but it's encoding my shadow at -d 3 or something.
2023-02-11 09:08:36
It's discriminating and treating my shadow different than the rest of the image :(
spider-mario
2023-02-11 09:36:37
right, the intensity target is the peak luminance of the image (which for an SDR image is the luminance of (1, 1, 1)), with the default for SDR images being 255
2023-02-11 09:37:00
when displaying a JXL image “as SDR”, that luminance range is mapped back to 0-1, so it has no impact on the brightness of the SDR output
2023-02-11 09:38:11
but using a higher intensity target for an SDR image encodes it as a brighter image for HDR displays
sklwmp
Demiurge Wow I cannot find any software for reading or writing JXR
2023-02-11 09:38:33
Well, Windows can read and write it natively using the Photos app on Windows 11. And yes, that's because Microsoft made the format.
Demiurge
2023-02-11 09:39:14
If you aren't on windows it's basically impossible to find any image viewers that support it.
2023-02-11 09:39:29
I downloaded jxrlib, it seems to be some java thingy?
2023-02-11 09:39:58
No, it's not java, it's just weird
2023-02-11 09:40:25
Like the binary name has WeirdCamelCaseCapitalization for some reason
spider-mario
2023-02-11 09:55:51
I think that’s a common .NET convention
_wb_
Demiurge Here is the original file, truncated to fit within discord limits. Restore with `cat`
2023-02-11 10:30:28
for some reason, `eog` does not display that image. I can convert it with imagemagick though, no idea why eog is complaining
Demiurge
2023-02-11 10:31:52
hmm, gwenview seems to work fine for me on my weird system. I might have mozjpeg installed...
2023-02-11 10:32:07
It probably has to do with arithmetic coding.
2023-02-11 10:32:15
Some JPEG viewers have a hard time with that.
_wb_
2023-02-11 10:33:00
ah lol yes, pretty much most of them
2023-02-11 10:34:14
I don't consider arithmetic coding to be part of the de-facto jpeg format — like 12-bit, lossless, hierarchical and all the other stuff that's in the original spec but is not implemented in the widely deployed decoders
2023-02-11 10:38:24
so that's the shadow you're talking about, in the original image?
2023-02-11 10:38:38
and this is the decoded jxl
2023-02-11 10:39:26
ugh why am I still using this imagemagick that creates buggy png files
2023-02-11 10:41:21
this is the actual original image at that location
2023-02-11 10:44:28
ok so yes, the q85 jxl does smooth things a little in those very dark areas
2023-02-11 10:48:08
When viewing at 2x zoom I can see it quite clearly, though I wouldn't particularly say it's a very problematic amount of smoothing. Viewing at 1:1, it's not that obvious, and q85 isn't supposed to be fully visually lossless, it's a somewhat lower quality than that.
Demiurge
2023-02-11 10:48:53
correcto. Also apparently libvips blows both imagemagick and graphicsmagick out of the water I hear.
2023-02-11 10:50:46
I understand but compared to other codecs (hevc and av1) jxl is performing poorly here relatively speaking, also it seems like the quality in that shaded region is noticeably worse than the rest of the image.
_wb_
2023-02-11 10:53:48
it could be that there is some generation loss aspect at play here too. you're starting from a q95 jpeg or so, which is quite good quality but it's not pristine. avif/hevc stay in the same colorspace (8-bit ycbcr) while jxl uses a different one, which can make a difference
Demiurge
2023-02-11 10:54:46
I am sure that it will only get more noticeable when the image is viewed with a brighter monitor. Or by an image viewer that applies post processing. Or by a "gaming" monitor that brightens shadows.
2023-02-11 10:59:03
It might look subtle, but I think it's not a good sign if JXL is too eager to smooth and blur details away just because they're in a darker part of the image. The more crisp and detailed the shadows look, the more images tend to "pop". So it's very bad for fidelity for an encoder to give shadows too little weight.
2023-02-11 11:00:19
Not only that, but the type of artifacts here are also troubling.
2023-02-11 11:01:47
q85 is well within the range where libjxl is normally very strong and competitive. But the type of artifacts it's showing here, compared to the other codecs, looks almost like 2x2 subsampling.
2023-02-11 11:02:26
It looks very crude instead of trying to use any clever psychovisual tricks to mask the distortion.
2023-02-11 11:06:05
I'm not anything like an image coding expert, but I believe, in the expressive bitstream, there are different ways the image data can be quantized and rounded to increase compression, different ways that produce different types of artifacts, sometimes noise, sometimes blurring, sometimes this weird 2x2 effect...
_wb_
2023-02-11 11:07:12
yeah this is a matter of encoder heuristics, it is assuming it can be more lossy there since you wouldn't notice, but I guess it's kind of wrong there 🙂
Demiurge
2023-02-11 11:07:20
x264 and x265 have a lot of very clever masking tricks that add extra distortion for the sake of preserving detail and texture.
2023-02-11 11:07:41
Because noise often helps preserve detail since our brain is pretty good at removing it.
_wb_
2023-02-11 11:07:52
possibly gaborish is responsible for some of this smoothing, have you tried with gaborish=0?
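(For reference, in recent cjxl builds this is the `--gaborish=0` command-line flag.)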
Demiurge
2023-02-11 11:08:15
No, but I'll give it a shot.
2023-02-11 11:12:58
The file is noticeably smaller and noticeably better quality too.
2023-02-11 11:13:26
Not quite the level of the heic though but I will give it a shot at the same bitrate
2023-02-11 11:14:01
turning off gaborish made a shockingly huge improvement to this photograph.
2023-02-11 11:16:24
It's still pretty subtle. And I think gaborish isn't the main problem because allocating more bits for the shadows will improve things more with gaborish on or off.
2023-02-11 11:17:05
But still, maybe the heuristics for when to enable gaborish should be re-evaluated...
2023-02-11 11:21:54
Even after turning gaborish off and matching the bitrate (d=1.3), it certainly looks a lot better but heic still has a leg up.
2023-02-11 11:22:50
But it definitely closed most of the gap.
veluca
Demiurge But still, maybe the heuristics for when to enable gaborish should be re-evaluated...
2023-02-11 12:08:40
"yes" is not much of a heuristic 😛
2023-02-11 12:08:49
(well, "yes if not trying to be super fast" but still)
Moritz Firsching
Demiurge actually no it wasn't...
2023-02-11 12:23:03
are there some improvements to the help messages of cjxl and djxl left to do?
Demiurge
2023-02-11 12:25:22
cjxl and djxl both use the same function to print the help message.
2023-02-11 12:26:29
And so djxl also says "try using -v with -h" even though that doesn't apply to djxl
2023-02-11 12:27:12
It checks the verbosity level before printing the suggestion
2023-02-11 12:33:45
Who are these assholes charging $30 for a scientific journal published in 2000?
2023-02-11 12:35:44
Sorry, kind of an offtopic question. Just kind of amazed how 23 year old research papers are being so fiercely guarded by numbskulls trying to make a buck.
2023-02-11 12:37:55
I wonder what the reason is, that research papers are always behind paywalls with $30 PDF download buttons, instead of being freely distributed to all curious minds...
2023-02-11 12:38:23
It doesn't really seem to be in the spirit of science at all...
Moritz Firsching
Demiurge And so djxl also says "try using -v with -h" even though that doesn't apply to djxl
2023-02-11 12:57:07
That problem was fixed yesterday, at the current head it should be
```
[I] firsching@bernoulli ~/libjxl ((6280645e…))> ./build/tools/djxl -h
JPEG XL decoder v0.9.0 6280645e [AVX3,AVX2,SSE4,SSSE3,Unknown]
Usage: ./build/tools/djxl INPUT OUTPUT [OPTIONS...]
 INPUT
   The compressed input file.
 OUTPUT
   The output can be (A)PNG with ICC, JPG, or PPM/PFM.
 -V, --version
   Print version number and exit.
[...]
 --quiet
   Silence output (except for errors).
 -h, --help
   Prints this help message.
```
Thanks for reporting it
Demiurge
2023-02-11 12:58:24
Ah neat. Glad I could help.
spider-mario
Demiurge Sorry, kind of an offtopic question. Just kind of amazed how 23 year old research papers are being so fiercely guarded by numbskulls trying to make a buck.
2023-02-11 01:01:02
I have come across articles that _used_ to be freely accessible and are now paywalled… 🤦 🙄
2023-02-11 01:01:18
archive.org to the rescue (no need for sci-hub in this case)
_wb_
2023-02-11 02:50:25
Academic publishers are the worst leeches in society, taking typically publicly funded research, written and edited by researchers, and then putting it behind a paywall so they get to charge a lot of (mostly public) money for basically hosting a website that for some reason is "prestigious".
Demiurge
2023-02-11 09:21:18
If scientists are so smart why don't they create a free way to distribute their papers? :P
2023-02-11 09:21:46
All problems can be solved with science!
2023-02-11 09:22:20
I feel like Cave Johnson saying that lol.
daniilmaks
2023-02-11 10:25:07
We do what we must, because we can
improver
Demiurge If scientists are so smart why don't they create a free way to distribute their papers? :P
2023-02-12 12:38:55
why would they be smart about everything at once?
Demiurge
2023-02-12 12:39:38
Humans never are.
improver
2023-02-12 12:40:16
taking on everything is not a smart thing to do
2023-02-12 12:40:27
it's a question of time and resources
Demiurge
2023-02-12 04:33:13
Certainly. Such lowly mortals cannot be trusted.
improver
2023-02-12 12:24:56
individually, yes
2023-02-12 12:28:40
some corners of academia ain't that bad, like cryptography (they tend to use public journals like IACR's ePrint)
Demiurge
2023-02-12 01:19:06
openbsd guys are pretty nice :)
2023-02-12 01:19:41
So I just switched over to waterfox on one of my computers. Good riddance firefox.
Fraetor
2023-02-12 05:57:40
TBF the ArXiv is sort of that for some fields.
Demiurge
2023-02-13 03:33:55
Here is another example of what I was talking about earlier. How the quality and the characteristic compression artifacts and smearing are radically different in different regions of the same image compressed at a single quality level.
2023-02-13 03:34:09
I was looking at this and decided to do some testing of my own. https://forum.vivaldi.net/topic/61006/support-for-jxl-image-format?lang=en-US
2023-02-13 03:34:35
Here's the original.
2023-02-13 03:35:32
I tried to experiment with how far I could compress the image before I start noticing something wrong with the image.
2023-02-13 03:36:26
I used photon_noise_ISO800 to approximate the noise level of the original image. I got this number by pulling it out of thin air.
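(The flag referred to corresponds to `--photon_noise=ISO800` in recent cjxl builds, i.e. something like `cjxl in.png out.jxl -d 3 --photon_noise=ISO800`.)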
2023-02-13 03:36:49
Here it is at d3.
2023-02-13 03:38:07
Problem is, the quality varies a lot throughout the same image at the same quality setting. Darker areas are punished a lot more than brighter areas.
2023-02-13 03:39:27
d3 is the lowest I tested but even at higher settings, darker areas always get hit first and the level of quality varies a lot depending on the brightness.
2023-02-13 03:39:50
It's not perceptually uniform and it's a pretty extreme difference.
2023-02-13 03:40:46
I think that if people were to mass-compress a whole bunch of images at any lossy quality setting, whether it be d1 or d2 or d3, they will not get consistent results if some images contain darker content.
2023-02-13 03:42:29
Some images it will be impossible to notice any artifacts and will look totally transparent and other images will be noticeably altered. Maybe not for d1 since d1 tends to be extremely conservative, but definitely for anything less than d1. And this also means that d1 could be made more efficient too, since it's probably wasting more bits than it needs to on bright high contrast areas.
ziemek.z
2023-02-13 10:12:59
It's been quite a while since I've been involved, but I'm back for a few moments.
2023-02-13 10:13:15
Yes, I'm trying to update libjxl in Squoosh. Again.
2023-02-13 10:14:41
```
wasm-ld: error: duplicate symbol: skcms_DisableRuntimeCPUDetection
>>> defined in node_modules/jxl/build/mt/lib/libjxl.a(skcms.cc.o)
>>> defined in node_modules/jxl/build/mt/third_party/libskcms.a(skcms.cc.o)
wasm-ld: error: duplicate symbol: powf_
>>> defined in node_modules/jxl/build/mt/lib/libjxl.a(skcms.cc.o)
>>> defined in node_modules/jxl/build/mt/third_party/libskcms.a(skcms.cc.o)
wasm-ld: error: duplicate symbol: skcms_TransferFunction_isSRGBish
>>> defined in node_modules/jxl/build/mt/lib/libjxl.a(skcms.cc.o)
>>> defined in node_modules/jxl/build/mt/third_party/libskcms.a(skcms.cc.o)
wasm-ld: error: duplicate symbol: skcms_TransferFunction_isPQish
>>> defined in node_modules/jxl/build/mt/lib/libjxl.a(skcms.cc.o)
>>> defined in node_modules/jxl/build/mt/third_party/libskcms.a(skcms.cc.o)
wasm-ld: error: duplicate symbol: skcms_TransferFunction_isHLGish
>>> defined in node_modules/jxl/build/mt/lib/libjxl.a(skcms.cc.o)
>>> defined in node_modules/jxl/build/mt/third_party/libskcms.a(skcms.cc.o)
wasm-ld: error: duplicate symbol: skcms_TransferFunction_makeScaledHLGish
>>> defined in node_modules/jxl/build/mt/lib/libjxl.a(skcms.cc.o)
>>> defined in node_modules/jxl/build/mt/third_party/libskcms.a(skcms.cc.o)
wasm-ld: error: duplicate symbol: skcms_ParseWithA2BPriority
>>> defined in node_modules/jxl/build/mt/lib/libjxl.a(skcms.cc.o)
>>> defined in node_modules/jxl/build/mt/third_party/libskcms.a(skcms.cc.o)
```
2023-02-13 10:15:15
``` em++: error: '/emsdk/upstream/bin/wasm-ld -o dec/jxl_dec.wasm /tmp/emscripten_temp_yont4_9_/jxl_dec_0.o node_modules/jxl/build/mt/lib/libjxl.a node_modules/jxl/build/mt/third_party/brotli/libbrotlidec-static.a node_modules/jxl/build/mt/third_party/brotli/libbrotlienc-static.a node_modules/jxl/build/mt/third_party/brotli/libbrotlicommon-static.a node_modules/jxl/build/mt/third_party/libskcms.a node_modules/jxl/build/mt/third_party/highway/libhwy.a -L/emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten/lto --whole-archive /emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten/lto/libembind-rtti.a --no-whole-archive /emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten/lto/libgl.a /emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten/lto/libal.a /emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten/lto/libhtml5.a /emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten/lto/libc.a /emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten/lto/libcompiler_rt.a /emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten/lto/libc++-noexcept.a /emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten/lto/libc++abi-noexcept.a /emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten/lto/libdlmalloc.a /emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten/lto/libc_rt_wasm.a /emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten/lto/libsockets.a -mllvm -combiner-global-alias-analysis=false -mllvm -enable-emscripten-sjlj -mllvm -disable-lsr --allow-undefined --strip-debug --export main --export stackSave --export stackRestore --export stackAlloc --export __wasm_call_ctors --export __errno_location --export malloc --export free --export-if-defined=__start_em_asm --export-if-defined=__stop_em_asm --export-table -z stack-size=5242880 --initial-memory=16777216 --no-entry --max-memory=2147483648 --global-base=1024' failed (returned 1) make: *** [Makefile:39: dec/jxl_dec.js] Error 1 ```
2023-02-13 10:15:53
<@281115117884801025> IIRC you've been messing with that stuff
2023-02-13 10:16:03
https://github.com/ziemek99/squoosh/tree/jxl-081
Traneptora
2023-02-14 02:00:20
<@1028567873007927297> it's a piecewise function
2023-02-14 02:00:34
unfortunately discontinuous
2023-02-14 02:07:10
lemme find where it is in the cjxl code
2023-02-14 02:08:52
ah nvm they found it
spider-mario
2023-02-14 02:51:41
here it is plotted
yoochan
spider-mario here it is plotted
2023-02-14 02:55:05
wrong channel though 😄 the question was raised in <#794206170445119489>
spider-mario
2023-02-14 03:04:43
yes, but accompanied with the comment “oh, meant to ask this in the <#804324493420920833> channel.”
Moritz Firsching
Traneptora unfortunately discontinuous
2023-02-14 03:06:10
I thought it was adapted to be continuous, even C¹, https://github.com/libjxl/libjxl/pull/1824
Traneptora
Moritz Firsching I thought Iit was adapted to be continous, even C_1, https://github.com/libjxl/libjxl/pull/1824
2023-02-14 04:19:21
I didn't remember that being fixed, excellent
spider-mario
2023-02-14 04:29:19
logscale butteraugli, grid
2023-02-14 04:35:56
_wb_
2023-02-14 04:45:44
I don't really see use cases for q<50 tbh, so the shape below q30 doesn't matter that much imo. For "web quality", q60-90 is the relevant range. For "camera quality", q90-97 is the relevant range.
2023-02-14 05:03:53
maybe if you want to have images at 1:1 on dpr3 phones (as opposed to just sending them a 2x image and calling it a day), lower qualities might be relevant for the web... though it's a weird mix of priorities to me if you want to simultaneously send a high-res image to such phones (which I presume is because you care about image fidelity) and then use a low quality (which I presume is because you care about bandwidth/webperf).
2023-02-14 05:15:51
Realistically, based on my experience (which might be limited and not representative since most of that is based on what Cloudinary customers want/do), I think about 40% of the web use cases use a 'good but not perfect' quality around jxl q80, then about 30% of the web use cases are more bandwidth-conscious (typically media/social media) and use a quality around jxl q72, then about 20% of the web use cases (typically e-commerce, especially luxury brands) are more fidelity-conscious and use a quality around jxl q90, and then about 10% of the web use cases are ok with serious fidelity sacrifices and go down to about jxl q65, or even lower on some important images if they can manually check that they still look somewhat acceptable.
2023-02-14 05:18:38
so if I would do a benchmark to compare codecs from the point of view of the web, I would check performance at those fidelity points and weigh the results accordingly (i.e. most weight going to the q70-90 range)
Demiurge
2023-02-15 09:14:36
Hmm, it does indeed look discontinuous, and what's more, I think ffmpeg has its own separate formula for it.
2023-02-15 09:16:32
I guess it doesn't REALLY matter if it's discontinuous or not
daniilmaks
2023-02-15 09:38:09
neither is a discontinuous function tho?
Ufnv
2023-02-15 12:00:06
Hi! Could someone please point me to some guide on how to use the pre-built DLLs in my Visual Studio 2022 project? I believe I should build some library to be able to link to them, but I have no clue where to start
Demiurge
neither is a discontinuous function tho?
2023-02-15 01:22:14
Well, the graph looks linear at higher q-levels and nonlinear at around 30
2023-02-15 01:23:02
I got no idea what ffmpeg is doing but they have their own independent formula for converting q to d
daniilmaks
Demiurge Well, the graph looks linear at higher q-levels and nonlinear at around 30
2023-02-15 07:49:34
correct
Demiurge
2023-02-17 02:18:52
Just wanted to add, this image is another good example of how libjxl is not achieving perceptually-uniform quantization. If you save this image at d=1 or higher, you will notice that the grain present in the dark sky gets blurred/smeared with blocky DCT artifacts well before anything else in the image suffers. Whereas even if you lower the quality to extremely low settings, the level of detail of the city and buildings and streets and people - basically everything except the sky - is retained nearly transparently even at EXTREMELY low quality settings.
2023-02-17 02:19:24
Maybe it has something to do with this being taken at night? Maybe it is a color profile thing?
2023-02-17 02:19:56
This image was originally a high-quality JPEG
2023-02-17 02:21:09
If you decompress the image and recompress at lower quality settings, you will easily notice that the quality of the sky is not perceptually uniform with the rest of the image, which remains nearly transparent even at absurdly low quality settings
2023-02-17 02:22:00
Please see for yourself if you think you might know why this might be happening.
_wb_
2023-02-17 04:01:25
Have you tried decoding to 16-bit and displaying with a viewer that does proper dithering (e.g. MacOS Preview does it right)?
2023-02-17 04:02:20
8-bit without dithering can be a bit blocky/bandy, especially in the darks
Demiurge
2023-02-18 04:39:19
I do not use Mac OS anymore.
2023-02-18 04:40:28
I have not tried decoding to a 16 bit PNG or anything like that, but shouldn't libjxl dither if decoding to an 8 bit buffer?
2023-02-18 04:41:27
If not, that sounds like something that can be fixed in libjxl
2023-02-18 04:47:22
djxl default for PNG is 8-bit as well. Flattening away details by not dithering when decoding to 8-bit is an oversight in the decoder
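(With djxl that test is something like `djxl input.jxl output.png --bits_per_sample=16`, assuming a recent build; dithering down to 8-bit is then left to the viewer.)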
2023-02-18 04:53:54
...Still, I just tested decoding to a 16 bit PNG just now, and the difference is pretty subtle and doesn't really add any depth or detail or texture that was not there before.
2023-02-18 04:54:19
Still there is an incongruity between the quality of the sky and the quality of the rest of the image.
2023-02-18 04:55:30
When I open the image in viewers with poor color profile support, the image looks a lot darker than normal, too. So maybe that has something to do with why the sky is over-quantized.
_wb_ Have you tried decoding to 16-bit and displaying with a viewer that does proper dithering (e.g. MacOS Preview does it right)?
2023-02-18 04:58:03
In sum, I tried just now and it makes no difference. The sky quickly becomes blocky and flattened while the rest of the image retains transparency at absurdly high distance settings
_wb_
Demiurge In sum, I tried just now and it makes no difference. The sky quickly becomes blocky and flattened while the rest of the image retains transparency at absurdly high distance settings
2023-02-18 07:15:48
Is there already an issue about this on github? If not, then could you open one?
Demiurge
2023-02-18 07:26:47
Sometimes I think maybe I'm crazy picking at pixels like this.
_wb_ Is there already an issue about this on github? If not, then could you open one?
2023-02-18 07:27:07
I would if I had a github account... I might create one later...
ziemek.z
Demiurge Sometimes I think maybe I'm crazy picking at pixels like this.
2023-02-18 07:23:08
No, that's exactly how bugs are found! Xiph.org team responsible for Opus Codec couldn't stress enough that any "killer samples" from HydrogenAudio forum are appreciated.
ziemek.z https://github.com/ziemek99/squoosh/tree/jxl-081
2023-02-18 07:25:32
BTW plz anyone help me with Squoosh build ;_;
yoochan
2023-02-19 10:08:17
well well... I have no idea how to help, but I'll fork your repo and try...
Demiurge
ziemek.z No, that's exactly how bugs are found! Xiph.org team responsible for Opus Codec couldn't stress enough that any "killer samples" from HydrogenAudio forum are appreciated.
2023-02-19 11:51:37
I think opus development has been abandoned, and harpsichord is still an opus-killer.
2023-02-19 11:52:46
which is a shame because I love harpsichord
ziemek.z
Demiurge which is a shame because I love harpsichord
2023-02-19 06:09:10
There's always Vorbis, I guess...?
Demiurge I think opus development has been abandoned, and harpsichord is still an opus-killer.
2023-02-19 06:17:27
Most of the stuff (features, stability etc.) is already here. Remember that Opus is over 10 years old, it had its time to mature.
sklwmp
2023-02-20 06:27:53
`benchmark_xl` can sometimes fail when built with `-D_GLIBCXX_ASSERTIONS`, as Arch does by default.
2023-02-20 06:28:29
The weird thing is, it doesn't fail *all* the time, but only sometimes, at some qualities. I don't know if I can replicate this consistently with other images.
2023-02-20 06:32:12
Okay, nevermind, I can pretty easily replicate it with the JPEG XL image from the JPEG website. I'll open a GitHub issue later when I have the time.
2023-02-20 02:27:26
i wanted to have some fun and see if i could compile cjpeg with jpegli, i *almost* got there after a bit of tweaking and some very hacky patches for the missing symbols (jpeg_write_marker, jpeg_float_quality_scaling, and some jpeg_c_* symbols)
Demiurge
ziemek.z Most of the stuff (features, stability etc.) is already here. Remember that Opus is over 10 years old, it had its time to mature.
2023-02-20 02:57:25
If it hasn't gotten any updates during that time then it's not exactly maturing...
ator
2023-02-20 04:10:28
<@532010383041363969> Here's an example file that exhibits some compression artifacts. Under vertical features there can appear a visible "dot" (lighter or darker). With jpegli at distance 3, there's one under the second leg of the "h" in "Irish", and a darkish line under the "r" in "Irish"; with jpegli at distance 4 there are light spots under the "h" in "Irish" and also under the "i" in "English".
2023-02-20 04:10:54
(dunno if discord recompresses these files...)
username
2023-02-20 04:12:18
discord doesn't recompress stuff they only remove metadata
ator
2023-02-20 04:12:35
Yeah, if I zoom in the artifacts are still visible in the 2nd and 3rd image.
username
2023-02-20 04:13:16
although the thumbnails of images in discord are separate so you sometimes need to open the image to get the actual thing instead of a generated thumbnail
Jyrki Alakuijala
_wb_ I don't really see use cases for q<50 tbh, so the shape below q30 doesn't matter that much imo. For "web quality", q60-90 is the relevant range. For "camera quality", q90-97 is the relevant range.
2023-02-20 04:52:33
I consider low 'web quality' to start at q70 -- q60 is below web quality and only rarely occurs on the web
ator (dunno if discord recompresses these files...)
2023-02-20 04:59:44
I see these problems -- I'll discuss a mitigation with Zoltan -- did you see similar problems when the background is gray, or does it mostly occur on reddish backgrounds (perhaps also greenish)?
ator
Jyrki Alakuijala I see these problems -- I'll discuss a mitigation with Zoltan -- did you see similar problems when the background is gray, or does it mostly occur on reddish backgrounds (perhaps also greenish)?
2023-02-20 05:02:54
I've only noticed it on this image, but I haven't looked very closely at other backgrounds. This particular image has been an excellent test for evaluating various compressors and their quality settings, since these color combinations and thin features make artefacts show up very well.
2023-02-20 05:03:57
When I found similar artefacts with the AVIF compressor it was black/red text on a light brown-grayish background.
paperboyo
_wb_ maybe if you want to have images at 1:1 on dpr3 phones (as opposed to just sending them a 2x image and calling it a day), lower qualities might be relevant for the web... though it's a weird mix of priorities to me if you want to simultaneously send a high-res image to such phones (which I presume is because you care about image fidelity) and then use a low quality (which I presume is because you care about bandwidth/webperf).
2023-02-20 05:13:40
> though it's a weird mix of priorities to me if you want to simultaneously send a high-res image to such phones (which I presume is because you care about image fidelity) and then use a low quality (which I presume is because you care about bandwidth/webperf) Yep, that’s exactly what I do. Extra details are worth it. Ideally, there wouldn’t be any, but they are worth some artifacting to me. I never go beyond dpr2, though. This always seemed to me convincing (but I trust my eyes too): https://observablehq.com/@eeeps/visual-acuity-and-device-pixel-ratio#cell-220
2023-02-20 05:24:53
So, hypothetically, if there were an encoder which decided for me to lower the resolution when I ask for lower quality, such an encoder would be entirely useless to me.
_wb_
Jyrki Alakuijala I consider low 'web quality' to start at q70 -- q60 is below web quality and only rarely occurs on the web
2023-02-20 05:42:38
Mozjpeg q60 still gets used on the web in my experience. Even down to q50 in some cases. But only when the image content allows it (in particular, no non-photo) and when there is no brand reputation involved.
paperboyo
_wb_ Mozjpeg q60 still gets used on the web in my experience. Even down to q50 in some cases. But only when the image content allows it (in particular, no non-photo) and when there is no brand reputation involved.
2023-02-20 06:00:47
FWIW, 85 for dpr1 and, ekhm, 45 for dpr2 (effective from, IIRC, dpr>1.3) here. And yes, it results in quite noticeable artifacting on minority of images (posterisation in large area+subtle gradients)
_wb_
2023-02-20 06:19:50
I've seen a Cloudinary competitor use q30, but I dunno if they were successful with that. Sure, they beat us in filesizes, but other than that...
2023-02-20 06:21:07
q40 can be ok for dpr2 but only for 80-90% of the images. 1 in 20 images will look quite bad.
Demiurge
ator <@532010383041363969> Here's an example file that exhibits some compression artifacts. Under vertical features there can appear a visible "dot" (lighter or darker). With jpegli at distance 3, there's one under the second leg of the "h" in "Irish", and a darkish line under the "r" in "Irish"; with jpegli at distance 4 there are light spots under the "h" in "Irish" and also under the "i" in "English".
2023-02-20 10:02:27
Both images look pretty impractically poor.
2023-02-20 10:02:56
In particular the color.
2023-02-20 10:04:23
The shape of the flower is destroyed. The hue of the top right text is destroyed.
2023-02-20 10:05:25
In one of them the entire color of the sign itself is different.
2023-02-20 10:06:12
I wonder if encoding at a higher bitrate would make a difference
Jyrki Alakuijala
_wb_ Mozjpeg q60 still gets used on the web in my experience. Even down to q50 in some cases. But only when the image content allows it (in particular, no non-photo) and when there is no brand reputation involved.
2023-02-21 04:18:52
I don't see significant q50-q60 in my data analysis, less than 2 %
paperboyo
Jyrki Alakuijala I don't see significant q50-q60 in my data analysis, less than 2 %
2023-02-21 04:47:43
[slightly facetious comment, but only slightly] Maybe that’s because not many sites care about image quality providing separate high DPR-specific imagery? 😜
Jyrki Alakuijala
2023-02-21 05:08:38
I have repeated such analysis by manually surfing on an 8k monitor and a 4k monitor, as well as doing more thorough data analysis of images; I have tried to repeat the analysis along the development trajectory -- so I've done it about every two years since 2014
2023-02-21 05:08:51
of course initially I didn't have 4k/8k monitors 😛
2023-02-21 05:10:17
there is one major site that I was able to find that has really low quality images, down to webp q45 (which is likely like jpeg q40), but I just absolutely don't like how their images look and I consider them ill-advised so I just ignore that site in manual analysis
2023-02-21 05:10:42
automated analysis looks at many images and is consistent with manual analysis
2023-02-21 05:11:03
last round of analysis I asked a colleague to do and it produced similar results with my previous approaches
_wb_
Jyrki Alakuijala I don't see significant q50-q60 in my data analysis, less than 2 %
2023-02-21 05:53:03
I would assume that most of the ones that go to quite low qualities, would typically not be using jpeg but rather webp or avif.
2023-02-21 05:55:05
The median avif is 1 bpp, the median jpeg is 2 bpp. But to get the same quality as 2 bpp jpeg, you would need 1.7 bpp avif or so. So basically the median avif user is using a lower fidelity target than the median jpeg user.
Jyrki Alakuijala
2023-02-21 08:42:07
making your browser lie 'no webp, no avif' could then lead to better image quality? :----D
_wb_ The median avif is 1 bpp, the median jpeg is 2 bpp. But to get the same quality as 2 bpp jpeg, you would need 1.7 bpp avif or so. So basically the median avif user is using a lower fidelity target than the median jpeg user.
2023-02-21 08:44:53
It is like early digital television looked worse than the last phases of analog television. Making things worse by busy-busy engineering a lot is something that I don't want to spend my life doing... I love to improve things 🙂
2023-02-21 08:54:29
also, we were able to afford good looking images with old JPEG
2023-02-21 08:54:40
then we made things more affordable by improving compression
2023-02-21 08:55:00
as a result of making things more affordable, we no longer can afford to have good looking images
2023-02-21 08:56:53
it is like if the Swiss would have found an even better and cheaper recipe for chocolate, and once it is offered to the world, everyone would just eat mud instead of chocolate -- commonly decided 😛
2023-02-21 09:03:51
ideas/votes on what should be next after jpegli: opusli, avifli, x264li, av1li, av2li, weppeli, something else?
HLBG007
2023-02-21 09:06:47
bmpli
Traneptora
2023-02-21 09:57:06
jxlli <:PauseChamp:755852977574510632>
BlueSwordM
Jyrki Alakuijala ideas/votes on what should be next after jpegli: opusli, avifli, x264li, av1li, av2li, weppeli, something else?
2023-02-21 10:14:35
opusli 🙂
pandakekok9
2023-02-22 11:03:46
pngli :P
2023-02-22 11:03:55
or gifli
afed
2023-02-22 11:54:00
for png, enough has already been done, and if you combine all these efforts, it's basically pngli (from the jxl team): <https://github.com/google/zopfli> <https://github.com/veluca93/fpnge> <https://github.com/lvandeve/lodepng>. and the best thing that can be done for gif is not using it at all; in my opinion improving gif is just making things worse. seeing how x264 could be improved from a different perspective would be interesting, because avc is still a sort of jpeg for video and x264 is an encoder which got a lot of improvements while staying within the same format and compatibility, but i doubt there's much room for improvement (maybe something for high fidelity). opus and x264 in their later development years got a lot of manual fine-tuning rather than new methods; it will be very difficult to re-tune them with much better results, unless there are some revolutionary ideas and much more efficient methods
veluca
2023-02-22 12:28:08
(fwiw, lodepng predates the jxl team)
afed
2023-02-22 12:31:07
yeah, i meant the same people who were also involved in jxl
_wb_
2023-02-22 12:31:25
It could be interesting to have better intra in jxl (e.g. using cropping, patches and blend modes, which are currently not used at all for multi-frame purposes) and see if we can improve the state of the art for digital cinema and video archival use cases — i.e. lossless or very high fidelity lossy, where the usual video codecs are not very effective and currently things like intra-only j2k and ffv1 are what they use.
2023-02-22 12:31:54
So mjxli?
afed
2023-02-22 12:38:53
and avifli might also be interesting, but prioritizing avif improvements over jxl would be somewhat strange; maybe it would then be better to be involved in avif2 development at earlier stages, so that it would be better as a format from the beginning?
veluca
2023-02-22 12:40:35
(doing that re avif2)
2023-02-22 12:41:13
the AV1 entropy coder is... slightly embarrassing... for images
2023-02-22 12:41:37
(I will deny ever saying this if somebody tries to quote me xD)
_wb_
veluca the AV1 entropy coder is... slightly embarrassing... for images
2023-02-22 01:22:03
it's not designed for still images, it's supposedly designed for hw encode/decode of video (i.e. mostly low amplitude symbols)
veluca
2023-02-22 01:23:16
yeah, clearly xD
2023-02-22 01:23:33
(in case you are wondering, NUM_BASE_LEVELS is ... 3)
DZgas Ж
2023-02-22 03:52:18
last build <:Thonk:805904896879493180>
2023-02-22 04:03:40
hmm. the build that is made here https://artifacts.lucaversari.it/libjxl/libjxl/2023-02-22T14%3A22%3A27Z_5b530446dc8ed8db0d3586986d15e18986c8793c / 1 minute ago works fine
2023-02-22 04:04:51
maybe I have some problems with the MSYS2 build
BlueSwordM
afed for png, enough has already been done, and if you combine all these efforts, it's basically pngli (from the jxl team): <https://github.com/google/zopfli> <https://github.com/veluca93/fpnge> <https://github.com/lvandeve/lodepng>. and the best thing that can be done for gif is not using it at all; in my opinion improving gif is just making things worse. seeing how x264 could be improved from a different perspective would be interesting, because avc is still a sort of jpeg for video and x264 is an encoder which got a lot of improvements while staying within the same format and compatibility, but i doubt there's much room for improvement (maybe something for high fidelity). opus and x264 in their later development years got a lot of manual fine-tuning rather than new methods; it will be very difficult to re-tune them with much better results, unless there are some revolutionary ideas and much more efficient methods
2023-02-22 06:02:51
h.264 stuff could be made better if you could reencode Baseline profile videos into High profile to benefit from the improved entropy coder.
Traneptora
2023-02-22 06:03:18
oh yea, cabac is much better than cavlc
2023-02-22 06:03:36
and you could in theory do that without dequanting
BlueSwordM
2023-02-22 06:04:54
Yes. The only concern would be decoders on older devices, but it doesn't matter too much since you could design a tool that does the inverse.
Traneptora
2023-02-22 06:05:10
decoders on older devices should still be able to handle High profile
2023-02-22 06:05:24
unless they're hardware decoders, but High profile is c. 2004
2023-02-22 06:05:31
and Main is 2003
2023-02-22 06:05:45
there should be very few hardware decoders that can't handle the High profile
afed
2023-02-22 06:13:13
yeah, but those are more like optimizers; I even heard that there are commercial compressors that can reduce an h264 bitstream by 20-30% losslessly - it becomes a different, incompatible format, but still playable. but I was talking about encoder improvements
2023-02-22 06:14:18
like `-tune butteraugli` for x264/x265
2023-02-22 06:19:12
but, just adding more complex metrics is only a small part, without proper manual tuning and lots of visual comparisons it may work worse than simple metrics and methods that the encoder is already well tuned for
Traneptora
2023-02-22 06:22:18
ah, or maybe -tune ssimulacra2 or something?
afed
2023-02-22 06:33:35
yeah, but from what I hear from some experiments with the av1 encoder, ssimulacra2 is much slower (but maybe not with the same optimizations) and butteraugli is easier to use as a metric for encoding in a specific quality
BlueSwordM
afed yeah, but from what I hear from some experiments with the av1 encoder, ssimulacra2 is much slower (but maybe not with the same optimizations) and butteraugli is easier to use as a metric for encoding in a specific quality
2023-02-22 06:49:37
Let me clarify: both metrics are too slow for direct use in a video encoder pipeline, and need a lot of internal tweaks to make them viable (more so butteraugli than ssimu2, since ssimu2 is just faster; but since no encoder currently uses ssimu2, it's a moot point).
w
2023-02-22 06:55:47
slow according to who
2023-02-22 06:55:51
not slow for me
DZgas Ж
afed like `-tune butteraugli` for x264/x265
2023-02-22 07:12:55
<:monkaMega:809252622900789269>
2023-02-22 07:15:13
considering how long butteraugli has been around, I think the x264 developers would have *screwed* it onto the encoder long ago, yes, just for fun. But I haven't heard about it. As I understand it, this is a technical implementation problem.
afed
2023-02-22 08:48:41
there is no technical problem here, but it will require a lot of effort to retune the encoder for that metric; as I said, just adding a new metric is only a small part, and most likely the quality will be even worse. x265 also has some experimental modes with higher use of metrics, but since these modes are not as tuned as the old ones, they are very unstable in quality and rarely used. and x264 has had no real progress for almost 10 years now (since Dark Shikari stopped being active), just some bug fixes and minor improvements
ayumi
2023-02-22 09:02:04
Butteraugli is more than a decade younger than x264. You cannot tune for a metric that does not yet exist.
DZgas Ж
ayumi Butteraugli is more than a decade younger than x264. You cannot tune for a metric that does not yet exist.
2023-02-22 11:36:54
I don't think this is a metric problem.
2023-02-22 11:39:32
Not on the subject at all, but I remember how Windows XP could not open files on an Android smartphone, because there were no built-in drivers. I think there is a direct analogy here: for the new metric to work, you have to add substantially to the encoder code.
afed there is no technical problem here, but it will require a lot of effort to retune the encoder for that metric; as I said, just adding a new metric is only a small part, and most likely the quality will be even worse. x265 also has some experimental modes with higher use of metrics, but since these modes are not as tuned as the old ones, they are very unstable in quality and rarely used. and x264 has had no real progress for almost 10 years now (since Dark Shikari stopped being active), just some bug fixes and minor improvements
2023-02-22 11:42:28
I generally understand this. I also realize that a metric can't just be added once and be done. Although in the case of butteraugli on av1 it looks like that... it was just added in minor version 3.1.0, as if there were no problems with it.
Demiurge
2023-02-27 01:08:20
Where can I follow the latest developments of the new JPEG encoder? :)
Ufnv
2023-02-27 08:47:45
Hi! I've just integrated libjxl into my game, which needs to work with hundreds of thousands of images. Before, I was using webp, and JXL definitely shows a substantial improvement in compression rate and quality. But it looks like something is wrong with decoding speed: it is something like 3-5 times slower than webp decoding. I use default parameters for lossy encoding, except for the quality setting, which I set to 3.0 (works fine on my specific data - visually perfect). The images are 512x256 px in size, with alpha. Where should I look to check what is going wrong with the decoding speed?
spider-mario
2023-02-27 09:29:11
as far as I know, with lossy+alpha, it is likely that alpha takes up most of the decoding time, so perhaps this would be what to look into speeding up
2023-02-27 09:29:24
but I’m afraid that my wisdom pretty much stops here
2023-02-27 09:29:47
maybe subsampled alpha if that’s a practical option?
_wb_
2023-02-27 10:04:24
in general encoding with --faster-decoding might help
2023-02-27 10:07:51
alpha decoding might also be faster when not using lossy alpha (which uses squeeze) and when encoding for specialized decode paths (e.g. `-D 0 -e 2` could help, maybe)
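(Put together, the suggestion amounts to something like `cjxl input.png output.jxl -d 3 -D 0 -e 2 --faster_decoding=2`, quoting the `-D 0 -e 2` part from the message above; `--faster_decoding` takes a small integer level in recent cjxl builds.)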
Demiurge
2023-02-27 10:28:47
squeeze is a naive unoptimized implementation with no simd right?
_wb_
2023-02-27 10:31:05
I did try to optimize it, it does have some simd
Demiurge
_wb_ in general encoding with --faster-decoding might help
2023-02-27 10:31:50
in my experience, at least on the decode benchmark web page, faster-decoding doesn't really make much difference.
_wb_
2023-02-27 10:31:58
not sure how much room for improvement there is — probably faster paths are possible for bitdepths below 14-bit or so
Demiurge in my experience, at least on the decode benchmark web page, faster-decoding doesn't really make much difference.
2023-02-27 10:32:42
best way to benchmark decode speed is `djxl --num_reps=30 --num_threads=[whatever you want to test]`
Demiurge
2023-02-27 10:35:33
Also for some reason these timings seem absolutely absurdly low
2023-02-27 10:36:51
because when I use djxl I get 100 MP/s
2023-02-27 10:36:59
I was just about to mention that lol
2023-02-27 10:37:41
why is this browser test reporting such ridiculously slow speeds?
2023-02-27 10:37:49
I'm using waterfox...
_wb_
2023-02-27 11:40:56
the browser test includes whatever overhead the integration code has, I suppose. Also that benchmark might be too harsh on the browser scheduling, since it basically starts 100 decodes at the same time and counts the time until they're all done
Ufnv
_wb_ alpha decoding might also be faster when not using lossy alpha (which uses squeeze) and when encoding for specialized decode paths (e.g. `-D 0 -e 2` could help, maybe)
2023-02-27 11:52:16
Thanks! In terms of libjxl, what should this look like? Should I set some fields in JxlEncoderFrameSettings?
2023-02-27 11:54:46
BTW on my images it looks like the lib does not benefit from multiple cores, as the images are too small - they are 512x256. Maybe it is possible to set up a parallel runner (JxlDecoderSetParallelRunner) that handles several frames instead?
_wb_
2023-02-27 12:13:48
`JxlEncoderSetExtraChannelDistance(settings, /*index of alpha channel*/ 0, /*distance setting for this channel*/ 0)`
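In context, a sketch of where that call sits (illustrative only: `enc` is assumed to be an initialized JxlEncoder, the alpha channel is assumed to be extra channel 0, and the call needs a libjxl newer than the 0.8.x releases, as noted further down):
```c
#include <jxl/encode.h>

/* lossy color at distance 3, but lossless alpha */
static void configure_frame(JxlEncoder* enc) {
  JxlEncoderFrameSettings* settings = JxlEncoderFrameSettingsCreate(enc, NULL);
  JxlEncoderSetFrameDistance(settings, 3.0f);           /* color channels */
  JxlEncoderSetExtraChannelDistance(settings, 0, 0.0f); /* alpha: 0 = lossless */
}
```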
2023-02-27 12:14:46
I don't think we already have a way to encode multiple frames in parallel, but I might be wrong
Ufnv
2023-02-27 01:05:19
Thanks, will try! Is the decoder itself thread-safe? Can I safely run several decoders in parallel?
veluca
2023-02-27 01:20:03
yep
2023-02-27 01:20:07
well
2023-02-27 01:20:24
you can create multiple decoder objects, but you can't use a single decoder object from multiple threads
Ufnv
2023-02-27 01:27:35
Several decoder objects will work just fine. The examples, however, assume creating a ParallelRunner for the decoder - is there a way not to use the ParallelRunner, but run the decoder in a single thread?
veluca
2023-02-27 01:30:54
just pass it `nullptr`
2023-02-27 01:31:20
for relatively small images it should even be a bit faster
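A minimal sketch of the single-threaded setup, assuming the standard C decoder API (the `nullptr` above becomes NULL in C); the helper function is illustrative:
```
#include <jxl/decode.h>

/* Illustrative sketch: single-threaded decoding. Never installing a
 * parallel runner keeps decoding on the calling thread; passing a NULL
 * runner explicitly should be equivalent. */
JxlDecoder* make_single_threaded_decoder(void) {
  JxlDecoder* dec = JxlDecoderCreate(NULL);
  JxlDecoderSetParallelRunner(dec, NULL, NULL);
  return dec;
}
```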
Ufnv
2023-02-27 01:31:35
Ok, thanks!
veluca
2023-02-27 01:31:37
(also recycling decoders *might* help)
Ufnv
veluca (also recycling decoders *might* help)
2023-02-27 01:33:35
Good idea, as I always have a stream of images to decode, so pre-allocating several decoders for the lifetime of the game makes sense. To re-use a decoder, should I somehow reset it?
veluca
2023-02-27 01:35:16
I think there is a way to do so
2023-02-27 01:35:21
but maybe there isn't xD
Ufnv
2023-02-27 01:35:41
Will check
2023-02-27 01:36:38
The images are all of the same size and encoding, so should be relatively simple to reset
Traneptora
Ufnv Good idea, as I always have a stream of images to decode, so pre-allocating several decoders for the lifetime of the game makes sense. To re-use a decoder, should I somehow reset it?
2023-02-27 07:58:36
`JxlDecoderReset()` is part of the API
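A minimal sketch of the recycling pattern using that call; `decode_one` is a hypothetical per-image helper, not a libjxl function:
```
#include <jxl/decode.h>
#include <stddef.h>

void decode_one(JxlDecoder* dec, const unsigned char* data); /* hypothetical helper */

/* Illustrative sketch: one long-lived decoder, reset between images. */
void decode_stream(const unsigned char* images[], size_t n) {
  JxlDecoder* dec = JxlDecoderCreate(NULL);
  for (size_t i = 0; i < n; ++i) {
    decode_one(dec, images[i]);
    JxlDecoderReset(dec); /* return the decoder to its initial state */
  }
  JxlDecoderDestroy(dec);
}
```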
Ufnv
2023-02-27 08:48:23
Thanks!
2023-02-27 09:08:08
JxlEncoderSetExtraChannelDistance - is it from 0.8.1? I use the 0.8.0 vcpkg build and there is no such function 😦
_wb_
2023-02-27 09:10:59
It's not even in 0.8.1 I think, only in current git head. It's a recent addition to the encode api...
DZgas Ж
2023-02-28 03:23:19
Well, now I have a whole collection of pictures (album covers) which have disgusting and unacceptable JPEG XL artifacts even at high quality: original | d1 | d2
2023-02-28 03:25:19
original for <@532010383041363969>
2023-02-28 03:32:01
All artifacts apparently have the same origin; I think it's all connected with what I've talked about a lot before: the artifact passing through block boundaries https://discord.com/channels/794206087879852103/794206170445119489/1074362924484788306
Jyrki Alakuijala
2023-02-28 09:04:27
this is fantastic work DZgas, thank you so much
2023-02-28 09:04:49
I'll study these with care
afed like `-tune butteraugli` for x264/x265
2023-02-28 09:11:35
I feel -tune butteraugli would be great for x264, possibly giving another 25% there -- I don't want to improve x265 since it is covered with gloomy IP 👻 -- I also don't want to explore whether SSIMULACRA2 can be used for guiding quality, since it wasn't developed for that purpose; with butteraugli I had to do 10 rounds of tuning to make it suitable for guiding quality decisions, and I'm guessing SSIMULACRA2 would need similar incremental improvements (which often make a metric slightly worse at measuring quality itself)
DZgas Ж Well, now I have a whole collection of pictures (album covers) which have disgusting and unacceptable JPEG XL artifacts even at high quality: original | d1 | d2
2023-02-28 09:12:45
could you also describe the errors that you find most irritating in each image? (of course I see that there are severe problems, but it would be nice if we can agree which should be improved)
DZgas Ж
Jyrki Alakuijala could you also describe the errors that you find most irritating in each image? (of course I see that there are severe problems, but it would be nice if we can agree which should be improved)
2023-02-28 09:50:44
I see these... but in fact, it will be much more efficient to use the -e 3 parameter to **see** which artifacts are not there.
2023-02-28 09:52:10
it all comes down to this: inside a block that does not contain a certain color, that color appears from neighboring blocks and spreads over the entire area
Jyrki Alakuijala
2023-02-28 03:57:12
2023-02-28 03:58:25
it is interesting that there shouldn't be much incentive for the codec to put red pixels in the upper left-hand corner, but it does it anyway -- this is the V channel for that image, and it should be more green than red there
2023-02-28 03:58:37
it will be a lot of fun to debug this
2023-02-28 04:28:29
would this be better at d2 ?
2023-02-28 04:39:04
I found the broken algorithm -- it is in 'chroma from luma'
Traneptora
2023-02-28 04:39:35
dzgas is doing very well providing color samples
Jyrki Alakuijala
2023-02-28 04:39:43
both the fast and the slow version are broken in similar ways, I'll need to rework them...
2023-02-28 04:40:21
It is so great to have a caring community so that we can get all details right
2023-02-28 04:40:41
persistent and caring 😄
afed
2023-02-28 04:45:39
especially noticeable for red, even for photographic images, like I showed some example here https://discord.com/channels/794206087879852103/848189884614705192/1069777174170714152
2023-02-28 04:50:50
surprisingly this also applies to jpegli, but to a different extent: the red tones have noticeable blockiness - or perhaps it stands out because human eyes are more sensitive to red
Jyrki Alakuijala
2023-02-28 05:08:32
We will pay more attention to jpegli artefacts at the end of March; we are still figuring out basic things there
_wb_
Jyrki Alakuijala both the fast and the slow version are broken in similar ways, I'll need to rework them...
2023-02-28 05:35:48
How broken is it? Fixing this might result in a nice encoder improvement...
yoochan
Jyrki Alakuijala It is so great to have a caring community so that we can get all details right
2023-02-28 07:43:51
I have some difficulty assessing the level of irony in this one 😄
_wb_
2023-02-28 07:49:11
I don't think there was any irony/sarcasm in that statement.
Demiurge
2023-02-28 08:11:29
I just hope that encoder improvements are never declared "done" like what happened to libopus after Opus achieved success
2023-02-28 08:11:48
lossy compression is never "done"
HLBG007
2023-02-28 08:12:22
I moved the link to benchmarks
Demiurge
2023-02-28 08:12:55
libjpeg is still getting huge quality improvements all these years later, with mozjpeg and now jpegli
2023-02-28 08:13:18
even though jpeg already achieved its success
DZgas Ж
Traneptora dzgas providing color samples doing very well
2023-02-28 08:27:34
🙏
Jyrki Alakuijala I found the broken algorithm -- it is in 'chroma from luma'
2023-02-28 08:28:32
will wait <:JXL:805850130203934781> rework
Demiurge
2023-03-01 05:05:10
For images that have an exotic color profile, shouldn't libjxl transparently perform color profile conversion before lossy compression? For lossy, there is really no reason to keep the original color profile.
2023-03-01 05:05:33
Keeping it would completely screw up all of the psychovisual model's assumptions
_wb_
2023-03-01 06:17:07
It does that.
Demiurge
2023-03-01 07:59:18
Does it? Hmm...
_wb_
2023-03-01 08:06:20
yes, it converts whatever color space the input is in to XYB. When doing lossy, the color profile / color space enum is just metadata so a decoder knows what space it comes from (so it could be a good idea to convert the decoded image back to that space), but it doesn't influence the actual interpretation of the image data, which is always in XYB when using lossy.
2023-03-01 08:10:47
the advantage of this approach is also that a viewer application can do color management without having to do a potentially different and expensive transform for every image. XYB to display space is the only transform that is needed for viewing, regardless of what the original color spaces were.
2023-03-01 08:15:04
For lossless and for cmyk, viewers still need to do arbitrary color management, but that's kind of inevitable. But for lossy, in particular for typical web delivery, I think it's nice that there's this 'automatic normalization' which could in principle help a little to speed up color management since you can directly decode to display space regardless of what the display space is.
Traneptora
2023-03-01 02:50:06
something that does bother me is that libjxl always performs clip-tone-mapping
Jyrki Alakuijala
yoochan I have some difficulty assessing the level of irony in this one 😄
2023-03-01 02:51:56
I didn't attempt to put irony/sarcasm there. I just love to get feedback from actual users and work towards better solutions for all.
Traneptora
2023-03-01 02:52:12
I suppose "relative" rendering intent implies clip tone mapping
Jyrki Alakuijala
_wb_ How broken is it? Fixing this might result in a nice encoder improvement...
2023-03-01 02:53:29
it is in cfl, so no one actually understands how to get it right -- my best attempt is to remove the system altogether, which reduces numbers by ~1% -- I still have some hope of finding something slightly better than removing it, but this one feels more complicated than anything else in JPEG XL and I'll probably need to keep trying for two weeks or so
Traneptora
2023-03-01 02:53:55
consider the following JXL:
2023-03-01 02:54:25
my monitor is sRGB, as far as I'm aware. (It's not wide-gamut)
2023-03-01 02:54:38
if you request sRGB from libjxl when decoding this image, you get the following:
2023-03-01 03:08:09
https://0x0.st/Hzmy.png
2023-03-01 03:08:35
It's noticeably darker than the PQ image when viewed from chromium or mpv
_wb_
2023-03-01 03:19:53
I suppose it isn't actually doing clipping but just scaling everything down so the brightest white becomes SDR white? <@604964375924834314> understands this stuff better than me
Traneptora
2023-03-01 03:29:21
this is the same thing that jxlatte does when you request SDR btw, it just takes the XYB and uses the default opsin matrix to convert it to linear sRGB, and then applies the forward sRGB transfer function
2023-03-01 03:29:34
and afterward it clamps to `[0, 1]`
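A minimal sketch of that output path, assuming the XYB-to-linear-sRGB conversion has already happened; the transfer function and clamp here are the standard sRGB ones, not jxlatte's actual code:
```
#include <math.h>

/* Illustrative sketch: linear light -> sRGB-encoded value, clamped to
 * [0, 1], mirroring the output path described above. */
static float linear_to_srgb_clamped(float v) {
  v = (v <= 0.0031308f) ? 12.92f * v
                        : 1.055f * powf(v, 1.0f / 2.4f) - 0.055f;
  return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v);
}
```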
2023-03-01 03:31:36
I wonder if this is an issue with the intensity target?
Jyrki Alakuijala
2023-03-01 03:39:52
Seems like it is worth documenting the observed (mis)behaviour in an issue
Traneptora
2023-03-01 03:42:26
I don't think it's technically "incorrect" behavior, but it is undesirable
Jyrki Alakuijala Seems like it is worth documenting the observed (mis)behaviour in an issue
2023-03-01 04:02:08
https://github.com/libjxl/libjxl/issues/2251
_wb_
2023-03-01 05:30:59
The problem with sRGB is that it only defines primary colors / white point / transfer function, but not how bright #FFFFFF is supposed to be. So there isn't really any "correct" way to translate HDR images to SDR spaces like sRGB. But I do think we should aim to produce something nice-looking 🙂
Traneptora
2023-03-01 06:07:56
yea, it is display-referred, but it should still have a sane default IMO
2023-03-01 06:08:04
at least for the default Intensity Target, which is 255
_wb_ The problem with sRGB is that it only defines primary colors / white point / transfer function, but not how bright #FFFFFF is supposed to be. So there isn't really any "correct" way to translate HDR images to SDR spaces like sRGB. But I do think we should aim to produce something nice-looking 🙂
2023-03-01 06:09:24
well, it appears to be tonemapping it to some subset of the 0-65535 range
2023-03-01 06:09:49
2023-03-01 06:11:04
`identify -verbose` reports these channel statistics:
2023-03-01 06:11:06
```
Channel statistics:
  Pixels: 8294400
  Red:
    min: 0 (0)
    max: 32740 (0.49958)
    mean: 6524.4 (0.099556)
    median: 5916 (0.0902724)
    standard deviation: 5019.01 (0.0765851)
    kurtosis: 0.54006
    skewness: 0.894579
    entropy: 0.946897
  Green:
    min: 0 (0)
    max: 30019 (0.458061)
    mean: 7113.79 (0.10855)
    median: 6916 (0.105531)
    standard deviation: 5159.37 (0.078727)
    kurtosis: -0.27073
    skewness: 0.513194
    entropy: 0.955864
  Blue:
    min: 0 (0)
    max: 29593 (0.45156)
    mean: 7540.07 (0.115054)
    median: 3124 (0.0476692)
    standard deviation: 8041.49 (0.122705)
    kurtosis: -1.54838
    skewness: 0.493819
    entropy: 0.800535
```
2023-03-01 06:11:14
note that the max value (in float) is ~0.5
2023-03-01 06:11:50
except alpha which is constant full 65535
2023-03-01 06:11:58
(this image has a superfluous alpha channel at a constant 1.0)
spider-mario
2023-03-01 06:46:48
it doesn’t do any tone mapping by default
2023-03-01 06:47:03
it maps [0, intensity_targets] cd/m² to 0-1 in sRGB
2023-03-01 06:48:04
to get tone mapping, you can call e.g. `JxlDecoderSetDesiredIntensityTarget(dec, 255);`
2023-03-01 06:48:18
the djxl equivalent is `--display_nits=255`
2023-03-01 06:48:46
that would tone-map the image to 0-255 cd/m² and then map _that_ to 0-1
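A minimal sketch of that call in context; the wrapper function is illustrative:
```
#include <jxl/decode.h>

/* Illustrative sketch: ask libjxl to tone-map to a 255 cd/m² display
 * before output (the API equivalent of djxl --display_nits=255). */
void request_sdr_output(JxlDecoder* dec) {
  JxlDecoderSetDesiredIntensityTarget(dec, 255.0f);
}
```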
2023-03-01 06:51:36
the output, in that case, looks like this:
yoochan
2023-03-01 06:55:22
Does the original image have a hint on the nits value to be picked by default?
_wb_
2023-03-01 06:56:35
No, it has the nits that the image actually represents but not the nits of your display
Traneptora
2023-03-01 06:57:32
I'm pointing out that for SDR color spaces, which are *display-referred*, we should always map to display nits of 255
_wb_
2023-03-01 06:58:28
Anyway, <@604964375924834314>, wouldn't it make sense to have a lower default display nits when outputting to sRGB?
Traneptora
2023-03-01 06:58:57
^ sRGB 1.0 is defined to be whatever the max of your display is
_wb_
2023-03-01 07:00:12
Technically that is a more lossy thing to output than a 16-bit non-tonemapped image that has the whole dynamic range, but it would look a lot better
spider-mario
2023-03-01 07:01:51
we should perhaps only do that if `original_colorspace.tf.IsPQ()` or something along those lines
Traneptora
2023-03-01 07:01:53
for context, libplacebo assumes SDR displays have a brightness of 200 nits and 1:1000 contrast ratio of black level to white level
spider-mario
2023-03-01 07:02:14
HLG, for example, is not really meant to be tone-mapped in this way
Traneptora
2023-03-01 07:02:41
this is a PQ image; the data was PQ before being cjxl-ed
spider-mario
2023-03-01 07:03:01
yep, just thinking out loud about how to go about generalising it
Traneptora
2023-03-01 07:04:34
hm
2023-03-01 07:04:50
this image is mapping to ~50% in sRGB space which is ~21% in linear space
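For reference, the ~50% encoded to ~21% linear relation is just the standard sRGB EOTF; a quick check (standard formula, not from the chat):
```
#include <math.h>
#include <stdio.h>

/* sRGB EOTF: encoded value -> linear light. */
static double srgb_to_linear(double v) {
  return (v <= 0.04045) ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
}

int main(void) {
  printf("%f\n", srgb_to_linear(0.5)); /* ~0.214, i.e. ~21% linear */
  return 0;
}
```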
spider-mario
2023-03-01 07:09:53
the intensity_target of that image is 10 000 which may well be much higher than necessary
2023-03-01 07:10:26
it is likely that it does not actually reach 10 000 cd/m² anywhere
Traneptora
2023-03-01 07:14:11
well, considering that its maximum in linear light is ~21%
2023-03-01 07:14:29
I'm guessing that it actually reaches around 2100 or so
2023-03-01 08:19:46
It looks like chromium and mpv both do peak detection
2023-03-01 08:19:51
in order to map HDR -> SDR
2023-03-01 08:25:35
so I've added peak detection to jxlatte to make that work better
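A rough sketch of what such peak detection could look like; this is illustrative, not jxlatte's actual implementation:
```
#include <stddef.h>

/* Illustrative sketch: scan linear-light luminance (normalized to
 * [0, 1]) and use the observed maximum, scaled by the declared
 * intensity target, as the effective peak for HDR -> SDR mapping. */
static float detect_peak_nits(const float* luminance, size_t n,
                              float intensity_target) {
  float max_l = 0.0f;
  for (size_t i = 0; i < n; ++i) {
    if (luminance[i] > max_l) max_l = luminance[i];
  }
  return max_l * intensity_target; /* e.g. 0.21 * 10000 ≈ 2100 nits */
}
```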
Jyrki Alakuijala
2023-03-01 09:01:59
Around 2015-2018 I had a high-quality NEC 'photo editing' monitor (at the time when I was developing butteraugli). To extend the experience to HDR I used it at a nominal 250 nits instead of -- for example -- the more boring 80 nits of sRGB. Later a more technical engineer (IIRC Jan) interpreted that as 255 for 8-bit values (like we often have for old 8-bit graphics), and that's how 255 nits became the default.
DZgas Ж I see these... but in fact, it will be much more efficient to use the -e 3 parameter to **see** which artifacts are not there.
2023-03-01 09:14:43
https://github.com/libjxl/libjxl/pull/2252
2023-03-01 09:16:13
at d2
2023-03-01 09:17:06
It was a coding session like: "I kept pressing buttons on the keyboard until it worked"
2023-03-01 09:17:12
but it was fun nonetheless
daniilmaks
2023-03-01 10:04:10
oh that's nice
Traneptora
Jyrki Alakuijala Around 2015-2018 I had a high-quality NEC 'photo editing' monitor (at the time when I was developing butteraugli). To extend the experience to HDR I used it at a nominal 250 nits instead of -- for example -- the more boring 80 nits of sRGB. Later a more technical engineer (IIRC Jan) interpreted that as 255 for 8-bit values (like we often have for old 8-bit graphics), and that's how 255 nits became the default.
2023-03-01 10:09:09
for SDR to HDR they assume 203 nits and contrast ratio of 1:1000
2023-03-01 10:09:15
placebo does, at least
spider-mario
2023-03-01 10:45:25
this sounds kind of debatable
2023-03-01 10:45:54
BT.2408, which proposes the 203 cd/m² level for “HDR Reference White”, says:
> NOTE – The signal level of “HDR Reference White” is not directly related to the signal level of SDR “peak white”
2023-03-01 10:46:15
and in its annex 8, proposes an HDR->SDR mapping that maps 75% HLG (reference white) to 90% SDR, not 100%
2023-03-01 10:47:27
with that mapping, it’s 260 cd/m² on a 1000 cd/m² HLG display that would correspond to (1, 1, 1) sRGB, not 203 cd/m²
2023-03-01 10:47:46
and if 203 cd/m² is diffuse white, I do feel that it should give better results
2023-03-01 10:48:15
you likely don’t want diffuse white to be too deep into the highlight roll-off
2023-03-01 10:51:11
it seems to me that people tend to hear “203 cd/m² - white” and jump to hasty conclusions
Traneptora
2023-03-02 01:01:38
interesting, I've forwarded the question to haasn
Demiurge
Jyrki Alakuijala https://github.com/libjxl/libjxl/pull/2252
2023-03-02 04:37:52
Jyrki the hero
2023-03-02 04:40:03
The unstoppable, ascended warlock priest of image coding
Jyrki Alakuijala
Demiurge The unstoppable, ascended warlock priest of image coding
2023-03-02 09:37:12
Haha, that made my day 😄
yoochan
2023-03-02 10:00:32
this is one exciting thing about encoding! even though the bitstream is frozen, tricks can still be (and will be) found to improve the encoding 🙂 (like for jpegli 😅)
Jyrki Alakuijala
2023-03-02 02:28:01
google/butteraugli is an earlier version; libjxl's butteraugli includes optimizations that make it work with machine learning, and also improvements that help with JPEG XL
2023-03-02 02:28:17
very likely google/butteraugli is more precise against humans
2023-03-02 02:29:01
it is possible that libjxl butteraugli works better with larger errors, as this wasn't my goal when I was building google/butteraugli -- the jpeg committee wanted much lower quality than what was interesting to me
HLBG007
2023-03-02 02:30:55
How many versions of butteraugli exist?
2023-03-02 02:33:24
I'm asking because Google PIK contains a version of butteraugli too https://github.com/google/pik/tree/master/pik/butteraugli
veluca
2023-03-02 02:41:05
that's weird
HLBG007 How many versions of butteraugli exist?
2023-03-02 02:41:21
IDK, I kinda lost count
_wb_
2023-03-02 02:59:40
Are the jpegs RGB or YCbCr?
2023-03-02 02:59:57
And how do you decode them?
Jyrki Alakuijala
HLBG007 How many versions of butteraugli exist?
2023-03-02 03:07:00
use libjxl's butteraugli when you want the latest stuff -- I suspect there are about five versions (one in google/guetzli, one in google/butteraugli, one in pik, one in libjxl, etc.)
DZgas Ж
Jyrki Alakuijala https://github.com/libjxl/libjxl/pull/2252
2023-03-02 03:51:08
great!... no... not quite. The result is not very good: artifacts of this kind have not gone anywhere, but they have weakened a lot... and I also discovered new artifacts that didn't exist before 🤭
2023-03-02 03:55:48
now I'm going to do a full test and give you new album covers. but I can inform you in advance that problems with the yellow color have begun
2023-03-02 03:58:50
but I also noticed that there were far fewer problems with red; even where I hadn't noticed artifacts before, now there are none at all
Jyrki Alakuijala at d2
2023-03-02 04:02:43
I also want to note that the artifact from the last image was not solved at all. Apparently it relates to some other area that needs to be fixed
2023-03-02 04:14:27
<@532010383041363969> oh well
2023-03-02 04:17:08
the first 6 images have new artifacts after the update (I'll make a comparison now); image 7 is here, and it shows that the new update works much better on it; then 3 past images for comparison, and at the end 2 images with very small artifacts of unknown origin
2023-03-02 04:28:10
—Original
—old (27 Feb build, v0.9.0 8137953), -e 9 -d 2
—new (2 Mar build, v0.9.0 b931fc5), -e 9 -d 2
—new, with the NEW artifact marked
2023-03-02 04:30:09
I want to note that the artifact I wanted to show has disappeared from the second-to-last image. Obviously it arose because of the position of the image; most likely the problem was a shift of pixels inside the block.
Jyrki Alakuijala
2023-03-02 07:19:14
I hope that soon I will not have all your album covers in an emergency test corpus, but will be able to solve problems faster than new ones emerge 😅
Traneptora
2023-03-02 10:02:20
as far as I am aware, GIMP uses mozjpeg internally
2023-03-02 10:06:02
I ran `cjpeg -q 98 <lenna.png >lenna.jpg`
2023-03-02 10:06:26
and in GIMP opened lenna.png, unchecked all the metadata, set the quality to 98, and used integer 4:4:4 optimized progressive export to lenna2.jpg
2023-03-02 10:06:50
then `magick compare -verbose -metric pae lenna.jpg lenna2.jpg comparison.png` reported zero for PAE
2023-03-02 10:06:56
i.e. identical pixels returned from the decoder
2023-03-02 10:08:02
here `cjpeg` is mozjpeg
spider-mario
2023-03-02 10:12:57
does it not use the system libjpeg, whichever that might be?
Traneptora
2023-03-02 10:15:55
I don't know, I think on windows it uses Mozjpeg, not the Windows image codec API
2023-03-02 10:16:09
on linux it might just default to /usr/lib/libjpeg.so, which for *me* is mozjpeg
gb82
2023-03-03 10:51:34
Is libjxl not available in the Ubuntu apt repos?
novomesk
gb82 Is libjxl not available in the Ubuntu apt repos?
2023-03-03 11:13:14
Maybe only in future version 23.04 https://packages.ubuntu.com/source/lunar/jpeg-xl
DZgas Ж
2023-03-03 12:35:09
--progressive_dc=2 breaks the color
veluca
2023-03-03 12:36:20
why am I not surprised
Demiurge
2023-03-03 04:59:57
ubuntu is just debian
2023-03-03 05:00:18
That means they never update anything.
2023-03-03 05:01:06
idk why everyone is so attracted to that
2023-03-03 05:01:29
there are much better "server" distros out there
Ufnv
2023-03-04 08:51:53
some update about the decoding speed - alpha really makes a difference. The same image encoded twice with an alpha channel, one version having it completely uniform at 255:
With "complex" alpha: 11.73 MP/s
With "plain" alpha: 24.66 MP/s
BTW, encoding with -d 0 -e 2 makes decoding speed worse
2023-03-04 08:53:18
cannot check with lossless alpha yet - waiting for this option to appear in the stable build
2023-03-04 11:04:00
But overall, the decoding speed even without alpha is still several times slower than webp 😦
DZgas Ж
Jyrki Alakuijala https://github.com/libjxl/libjxl/pull/2252
2023-03-04 12:02:17
<@532010383041363969> I observe global problems with colors going dim with yellow and green, even at high quality (-d 1)
_wb_
2023-03-04 12:02:19
Several times? I think single-core VarDCT decode speed shouldn't be that much slower than webp decoding. What kind of MP/s are you getting?
DZgas Ж
DZgas Ж <@532010383041363969> I observe global problems with colors going dim with yellow and green, even at high quality (-d 1)
2023-03-04 12:33:57
I made pixel difference maps for the old and new version to show this
DZgas Ж <@532010383041363969> oh well
2023-03-04 12:37:18
these are 5 more of my album covers that definitely looked better with the Feb 27 version than with the current Mar 4 one
Ufnv
_wb_ Several times? I think single-core VarDCT decode speed shouldn't be that much slower than webp decoding. What kind of MP/s are you getting?
2023-03-04 02:03:53
well, probably something is seriously wrong with my code. I've just compared the time for 100 decodes of the same image (2048x1024) from JXL and WebP:
jxl: 3290.0590000ms
webp: 985.8875000ms
2023-03-04 02:06:05
Single-core JXL:
jxl: 8899.4905000ms
webp: 966.2073000ms
2023-03-04 02:06:39
Test images
2023-03-04 02:07:35
so, about 9 times slower when single-core
_wb_
2023-03-04 02:07:37
Are you excluding the time to encode the decoded image?
Ufnv
2023-03-04 02:08:02
just decoding from memory.
_wb_
2023-03-04 02:08:15
Decoding from and to memory?
Ufnv
2023-03-04 02:08:25
loading time excluded.
2023-03-04 02:09:13
yes, from and to memory. Actually, for WebP I do one more conversion - converting the pixel format, while for JXL I just leave it as is
_wb_
2023-03-04 02:11:52
Hm, what kind of cpu are you running this on? Does it have avx2?
Ufnv
2023-03-04 02:12:24
Intel i9 11980HK
_wb_
2023-03-04 02:13:46
Did you compile libjxl yourself? Can you check what SIMD it is using? E.g. running cjxl it should say that in the first line of output iirc
Ufnv
2023-03-04 02:14:07
no, I get it using vcpkg
2023-03-04 02:15:23
cjxl reports [AVX2, SSE4, SSSE3, Unknown]
2023-03-04 02:16:32
but it's a separately downloaded cjxl. Can I query libjxl somehow to get this info?
_wb_
2023-03-04 02:21:15
Good question, I don't think so
2023-03-04 02:21:52
vcpkg, is that using msvc?
Ufnv
2023-03-04 02:22:06
Profiler shows `static void jxl::N_AVX2::FromLinearStage<...>::ProcessRow(const class std::vector<...> & const, const class std::vector<...> & const,`
_wb_
2023-03-04 02:22:29
Clang compiles are quite a lot faster than msvc ones, iirc