JPEG XL

benchmarks

_wb_
2023-03-02 07:49:27
but for still images, nobody really wants to do ssimulacra2 < 50, and most would aim for ssimulacra2 > 70
HLBG007
_wb_ if you could add thumbnails of the images to the page, it would be useful — just to have an idea of what kind of image content it is
2023-03-02 07:50:06
https://libwebpjs.hohenlimburg.org/avif-jxl-bench/charts_8-5-all-images.html
_wb_
2023-03-02 07:54:12
those aren't really thumbnails but just the full image browser-downscaled 🙂 it's a bit heavy to load
2023-03-02 07:54:46
I was thinking more like a `convert -resize 300x` version of the images or something
2023-03-02 07:56:24
so basically all images are art/illustrations/game graphics, right?
2023-03-02 07:58:52
that's a good test set in case I ever find time to continue working on https://github.com/libjxl/libjxl/pull/1395
HLBG007
_wb_ so basically all images are art/illustrations/game graphics, right?
2023-03-02 07:59:32
The images I got from this link: https://storage.googleapis.com/avif-comparison/index.html
jonnyawsom3
_wb_ those aren't really thumbnails but just the full image browser-downscaled 🙂 it's a bit heavy to load
2023-03-02 08:00:30
Oddly enough, I was just wondering about that last night; I couldn't remember if browser downscaling actually saves on bandwidth at all (like I know Cloudinary could convert based on the entered link, if I recall) or if it was purely for decode performance. Anyway, back to reading through the server for me
_wb_
2023-03-02 08:01:35
browser downscaling doesn't save bandwidth at all, it fetches the full image and then downscales it to fit the area css layout gives the image
jonnyawsom3
2023-03-02 08:02:19
Yeah, I suppose I only usually see it on images with small filesizes in the first place, so I don't feel the download taking place
_wb_
HLBG007 The images I got from this link: https://storage.googleapis.com/avif-comparison/index.html
2023-03-02 08:02:43
can't find those images there, or are they from https://docs.google.com/spreadsheets/d/1ju4q1WkaXT7WoxZINmQpf4ElgMD2VMlqeDN2DuZ6yJ8/edit#gid=174429822 which is linked at the bottom of that page?
HLBG007
_wb_ browser downscaling doesn't save bandwidth at all, it fetches the full image and then downscales it to fit the area css layout gives the image
2023-03-02 08:03:05
I cannot store the images on my host; it's not allowed in Germany. I would need permission from every image owner
_wb_
2023-03-02 08:05:53
wouldn't that be fair use? it's just a low-res downscale anyway, and you're linking the originals
HLBG007
_wb_ can't find those images there, or are they from https://docs.google.com/spreadsheets/d/1ju4q1WkaXT7WoxZINmQpf4ElgMD2VMlqeDN2DuZ6yJ8/edit#gid=174429822 which is linked at the bottom of that page?
2023-03-02 08:06:33
At the bottom of the page, near the text "Here is another lossless comparison,"
_wb_
2023-03-02 08:08:37
that spreadsheet has thousands of images, which ones did you select?
HLBG007
2023-03-02 08:12:03
my wget downloaded 350 of the 6000 images. wget decided which ones
2023-03-02 08:15:29
but i can download all of them to encode. but then your smartphone will be completely overloaded when i put them all on one site xD
_wb_
2023-03-02 08:37:18
all of them aren't needed, but you now probably have images that are mostly from the same category
2023-03-02 08:37:39
and those categories are selected for relevance to lossless compression
2023-03-02 08:38:20
for lossy, I think photographic images are the most important broad category
paperboyo
2023-03-02 09:37:41
It’s great to have this set and the discussion it generated (and will), thanks <@736640522042605604> (also, double-good to have a non-fanboy here!). I can hardly read graphs, but a cursory look (outliers notwithstanding) seems to suggest two things:
1. on more “photo-like” imagery, both encoders are closer to each other (it’s a known fact AVIF is better on images where PNG would have been used instead of a JPEG in the past)
2. JXL plots are more… consistent, while AVIF differs more depending on image makeup
If the above are true, I would love to see a page like this but with photographs; the current set doesn’t seem representative (as much as I like Miyazaki) unless one is running a site devoted to comics. And to the second point: ofc it’s nice that AVIF can go much smaller at still acceptable quality for some images (and much, much less often, much more weighty). But for me, who is looking at having to set **one quality** for all imagery (likely for years), without any per-image heuristics, JXL looks more consistent? (I would need to set AVIF higher than some images require, only because otherwise other images would get too plasticky.) I think Jon called it “encoder consistency” somewhere. If that’s indeed what I’m seeing.
_wb_
2023-03-02 10:26:06
Here are encoder consistency results from our subjective experiment. The encoder versions used here are by now a bit outdated (this is libjxl 0.6.1, libaom 3.1.2, webp 1.0.3, mozjpeg 4.1.0) but I expect the current situation to not be significantly different.
2023-03-02 10:27:07
the Y axis is standard deviation in the mean opinion scores over the corpus of 250 images; the axis is flipped so higher curves are better (lower stdev so more consistency)
2023-03-02 10:27:41
every dot is a specific encoder setting
2023-03-02 10:37:29
e.g. if you use jxl q65, the quality you get is a mos score of 67 or so on average, +- 5. If however you use libaom at some setting that also has a mos score of 67 on average, it will be +- 7.5. So if you want to e.g. aim for a minimum mos score of 65 across all images, then with jxl you have to aim at an average mos score of 70 or so, while with avif you'll have to aim a bit higher, to get an average of 73 or so, with still the same worst-case quality.
2023-03-02 10:39:43
more encoder consistency means that if you aim at the same target, the risk of getting a bad image goes down (or you can aim lower if you prefer keeping the risk the same and saving more bandwidth)
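A minimal sketch of the target-quality arithmetic described above, using the values from the chat; treating the quoted +-5 / +-7.5 MOS spreads as the safety margin to add on top of the minimum acceptable score is an assumption for illustration:

```python
# Values from the chat; the one-standard-deviation margin is an assumption.
def required_average_mos(min_mos, stdev, sigmas=1.0):
    """Average MOS to target so images ~sigmas below average still reach min_mos."""
    return min_mos + sigmas * stdev

print(required_average_mos(65, 5.0))   # jxl-like consistency  -> 70.0
print(required_average_mos(65, 7.5))   # avif-like consistency -> 72.5
```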
HLBG007
_wb_ wouldn't that be fair use? it's just a low-res downscale anyway, and you're linking the originals
2023-03-02 01:00:49
On your word
2023-03-02 01:03:28
<@794205442175402004> Jon, do you think your smartphone would survive a benchmark of all user avatars of this discord group?
2023-03-02 01:03:50
http://libwebpjs.hohenlimburg.org/avif-jxl-bench/charts_8-5-avatars.html
_wb_
2023-03-02 01:03:55
i'm on my laptop now 🙂
2023-03-02 01:09:49
not sure if ssimulacra2 scores are accurate for images that small — it was tuned on 512x512 images, these images are only 128x128
HLBG007
_wb_ not sure if ssimulacra2 scores are accurate for images that small — it was tuned on 512x512 images, these images are only 128x128
2023-03-02 02:57:17
It looks like it works
Jyrki Alakuijala
2023-03-02 06:59:39
good news -- I believe I found a substantial opportunity in encoding, like 7.77 % bpp*pnorm improvement at d1 in my current test corpus
2023-03-02 07:00:09
(exact value may differ from 7.77 %, optimization is still running)
improver
2023-03-02 07:02:43
very cool
Jyrki Alakuijala
2023-03-02 07:04:13
likely 1.777 % for ssimulacra2 :-), at least they both agree that it's an improvement
HLBG007
Jyrki Alakuijala good news -- I believe I found a substantial opportunity in encoding, like 7.77 % bpp*pnorm improvement at d1 in my current test corpus
2023-03-02 07:05:36
Was my benchmark the reason?
Jyrki Alakuijala
2023-03-02 07:06:05
of course all focus on quality inspires such things
2023-03-02 07:06:43
this improvement originated from the chessboard pattern -- I believe someone from the JPEG committee had such a test image
HLBG007
2023-03-02 07:06:55
My hope is to inspire you to make jxl better when you see which images cause trouble
Jyrki Alakuijala
2023-03-02 07:07:31
it was easy to fix in a basic way, but the chromaticity part is still broken -- Zoltan proposed to check the precision of gaborish, and that is where the problem was
HLBG007
Jyrki Alakuijala it was easy to fix in a basic way, but the chromaticity part is still broken -- Zoltan proposed to check the precision of gaborish, and that is where the problem was
2023-03-02 07:07:56
Which problem exactly?
Jyrki Alakuijala
2023-03-02 07:08:32
gaborish inverse and direct should not spoil the image when applied sequentially
2023-03-02 07:09:32
the problem was that https://github.com/libjxl/testdata/blob/main/jxl/chessboard/colorful_chessboards.png doesn't compress very well
2023-03-02 07:09:49
it gets some funny artefacts
HLBG007
2023-03-02 07:10:48
What do you think about my user avatar benchmark? Is it helpful?
Jyrki Alakuijala the problem was that https://github.com/libjxl/testdata/blob/main/jxl/chessboard/colorful_chessboards.png doesn't compress very well
2023-03-02 07:12:54
Some pictures have this kind of pixel graphics
afed
2023-03-02 07:13:34
maybe at very high qualities it would be better to disable gaborish completely?
Jyrki Alakuijala
2023-03-02 07:20:37
without gaborish we'd benefit from different quantization matrices -- that would also be some work to explore
HLBG007 What do you think about my user avatar benchmark? Is it helpful?
2023-03-02 07:21:25
yes, it is nice -- I don't like that my own photo compresses better with avif 😄 -- need to fix it soon 😄
HLBG007
2023-03-02 07:21:27
Why is this image better in avif? https://i.redd.it/0u3lf3ai29x21.png
Jyrki Alakuijala
2023-03-02 07:22:01
I believe AVIF has a local palette mode that allows 8 colors in a locality
HLBG007
2023-03-02 07:22:33
Better Encoding time
Jyrki Alakuijala
2023-03-02 07:23:14
if you just compress pixels with a palette it is easier than dcting/decorrelating it
2023-03-02 07:23:55
could be that, I don't really know how avif works, it was too complicated for my simple brain
HLBG007
Jyrki Alakuijala if you just compress pixels with a palette it is easier than dcting/decorrelating it
2023-03-02 07:27:00
But why doesn't the jxl encoder do this too?
2023-03-02 07:28:31
https://i.redd.it/01lk9mhfp3i41.png
2023-03-02 07:29:25
https://i.redd.it/02nkxsfye7k41.png
_wb_
2023-03-02 07:30:48
We could use a mix of modular patches and vardct to mimic avif's palette blocks. In avif there's only one thing to check (is this block better done with palette or not); in jxl we don't need to align patches with blocks and we have no limit on the number of palette colors, so there's way more expressivity, but it's also trickier to figure out what the best way to do it actually is.
HLBG007
2023-03-02 07:33:03
Is jbig2 a solution to fix this problem?
yoochan
2023-03-02 07:34:45
for pixel art, wouldn't it be efficient to encode the reduced version (where one macro pixel is downsized to a single pixel) and expand it again at post-processing? (like a resize without any interpolation)
_wb_
2023-03-02 07:38:17
That's possible with custom upscale weights.
2023-03-02 07:38:59
For 2x, 4x and 8x that is
yoochan
2023-03-02 07:39:48
you mean it would be complex if the macro pixel is 13 pixels wide?
_wb_
2023-03-02 07:39:50
But that's a rather niche thing to do, dunno if we should complicate the api for that
2023-03-02 07:40:45
The jxl bitstream has a way to do upscaling with custom weights. Nearest neighbor is a special case of that.
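For intuition, a minimal numpy sketch (illustrative only, not the libjxl API or bitstream syntax) of nearest-neighbor upscaling expressed as upscaling with constant "custom weights":

```python
import numpy as np

def nearest_neighbor_upscale(img, factor):
    """Integer-factor upscale where every output pixel of a block takes the
    source pixel's value, i.e. an all-ones weight kernel per source pixel."""
    kernel = np.ones((factor, factor), dtype=img.dtype)  # the 'custom weights': plain replication
    return np.kron(img, kernel)

pixel_art = np.array([[0, 255], [255, 0]])
print(nearest_neighbor_upscale(pixel_art, 4).shape)  # (8, 8)
```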
yoochan
2023-03-02 07:43:10
oki
HLBG007
2023-03-02 08:25:29
my benchmark has resized thumbnails now https://libwebpjs.hohenlimburg.org/avif-jxl-bench/charts_8-5-all-images.html
Jyrki Alakuijala
2023-03-03 10:04:31
https://github.com/libjxl/libjxl/pull/2257 makes a 10 % improvement at d0.5 and 6 % at d1 on my test corpus (jyrki31) including the colorful chessboards image and dzgas images and some other difficult cases
2023-03-03 10:05:18
it should in general improve how individual pixels are treated and make VarDCT slightly more pixel friendly
2023-03-03 10:05:24
every pixel is precious!
2023-03-03 10:06:13
(not merged yet)
_wb_
Jyrki Alakuijala https://github.com/libjxl/libjxl/pull/2257 makes a 10 % improvement at d0.5 and 6 % at d1 on my test corpus (jyrki31) including the colorful chessboards image and dzgas images and some other difficult cases
2023-03-03 11:49:15
very exciting stuff, and surprising that an improvement of this magnitude is still possible at this stage. See discussion in the private jxl dev chat, I think there might be room to take it even further.
yoochan
2023-03-03 12:01:19
https://tenor.com/view/spongebob-spongebob-squarepants-gif-26050988
Jyrki Alakuijala
2023-03-03 10:47:35
I believe that the improvement is mostly possible by shifting targets -- adding more images where JPEG XL has been performing badly
2023-03-03 10:48:11
It is a difficult/complex value question how much we should emphasize strange pixel-based images vs. usual photographs
2023-03-03 10:49:21
images other than the chessboards I consider relatively usual (including the dzgas images), but the chessboards one is a truly bizarre image -- though it likely should still be supported -- and even better than in the current head
2023-03-03 10:53:21
in general I believe that worst case behaviour should be mitigated, but there can be limits when it is too expensive
monad
2023-03-04 06:07:26
<@532010383041363969> This one's exceptionally antagonistic. The handling has improved with recent efforts, but is still poor. <https://dsp.stackexchange.com/revisions/25990/1>
HLBG007
Jyrki Alakuijala I believe that the improvement is mostly possible by shifting targets -- adding more images where JPEG XL has been performing badly
2023-03-04 07:57:41
I can give more pictures
Jyrki Alakuijala It is a difficult/complex value question how much we should emphasize strange pixel-based images vs. usual photographs
2023-03-04 07:58:33
Each image should be treated equally
Jyrki Alakuijala images other than the chessboards I consider relatively usual (including the dzgas images), but the chessboards one is a truly bizarre image -- though it likely should still be supported -- and even better than in the current head
2023-03-04 07:59:11
These chessboard pictures are also works of art that deserve appreciation
Jyrki Alakuijala in general I believe that worst case behaviour should be mitigated, but there can be limits when it is too expensive
2023-03-04 08:03:39
JPEGXL only gets as much respect as JPEGXL gives to the images
yoochan
2023-03-04 08:06:03
good at everything means good at nothing 😄 I instead think jpegxl should excel at the kind of images it is meant to encode (and still be okayish with the weirdest ones)
jonnyawsom3
2023-03-04 09:40:12
That's a lotta pings. I know it's written somewhere, but sticking to VarDCT for photographs and modular for synthetic is almost what it's designed for if I recall. Especially with the PR to mix and match the techniques based on what would work best per block, it could be worth only optimizing what already works best for a sample. If that makes sense
monad
2023-03-04 07:58:33
Yes, as with any practical compression scheme, the encoder should be optimized for representative data of the kind people tend to create and disseminate. There exists some threshold of artificiality or randomness where we don't generally care whether the encoder performs well, but such images may still inspire practical improvements.
Jyrki Alakuijala
That's a lotta pings. I know it's written somewhere, but sticking to VarDCT for photographs and modular for synthetic is almost what it's designed for if I recall. Especially with the PR to mix and match the techniques based on what would work best per block, it could be worth only optimizing what already works best for a sample. If that makes sense
2023-03-06 09:28:19
Ideally both would deliver the asked quality, just at different bit rates -- neither should produce a blurry or artefacty surprise when a large set of images is compressed. Some webmasters have 5000 or more images; asking them to review individual images is a bit irresponsible.
jonnyawsom3
2023-03-06 09:47:51
Yeah, the quality asked for should always take priority. I was just continuing from your comment about "pixel-based images vs photographs", saying that if either this PR is eventually improved and merged https://github.com/libjxl/libjxl/pull/1395 or it's decided on an image-by-image basis (such as image creation or screenshots), then optimizing each method for its best use case might be better than generalising both
afed
2023-03-06 01:55:56
<@794205442175402004><@179701849576833024> seems that git windows builds are still much slower, especially for fast speeds. depends on compiler, shared/static? maybe it's worth reconsidering the compilation workflows because people mostly use and test official builds
ProcProfile64:
clang v0.9.0 [cjxl e1 lossless] `119,771,153 -> 52,094,805: 43.49%. Cpu 349 mb/s (0.327 sec), real 318 mb/s (0.359 sec) = 91%. ram 401144 KB, vmem 569556 KB`
git build v0.8.1 [cjxl e1 lossless] `119,771,153 -> 51,493,245: 42.99%. Cpu 78 mb/s (1.453 sec), real 78 mb/s (1.452 sec) = 100%. ram 402340 KB, vmem 569940 KB`
clang [fjxl 1 lossless] `239,542,306 -> 52,199,900: 21.79%. Cpu 585 mb/s (0.390 sec), real 585 mb/s (0.390 sec) = 100%. ram 352720 KB, vmem 521388 KB`
strange that fjxl uses double read data, but cjxl e1 does not, or maybe it is wrong values from the windows api to trace such a thing?
Halic `119,771,153 -> 45,307,046: 37.82%. Cpu 140 mb/s (0.812 sec), real 140 mb/s (0.812 sec) = 100%. ram 12236 KB, vmem 8452 KB`
i think that's why Halic is faster in tests from other people and why it doesn't match my results, but memory consumption and compression is still impressive
Qlic2 `119,771,153 -> 47,815,785: 39.92%. Cpu 128 mb/s (0.890 sec), real 128 mb/s (0.890 sec) = 100%. ram 277216 KB, vmem 274532 KB`
veluca
2023-03-06 02:00:26
mhhh weird, otoh I know nothing of windows builds so could you open an issue? 😄
_wb_
2023-03-06 03:44:43
it would be good if we can make sure the official builds are good. That's a rather huge difference between the git build and the clang build...
2023-03-06 03:45:48
what does `119,771,153 -> 52,094,805` mean? is that the file size reduction?
2023-03-06 03:46:51
the git build compressing a little better (but much slower) probably indicates that it isn't using the same simd type as the clang version, right <@179701849576833024> ?
afed
2023-03-06 03:49:14
almost, it is how much data has been read and written (so it may differ from the size of the source and the encode)
veluca
_wb_ the git build compressing a little better (but much slower) probably indicates that it isn't using the same simd type as the clang version, right <@179701849576833024> ?
2023-03-06 03:50:15
that could make sense yep
2023-03-06 03:50:35
ah, I remember that I didn't have dynamic dispatch properly implemented for msvc, that might explain things
afed
2023-03-06 03:54:29
in general, it seems that the clang static build is the best option, it's faster even without native flags and other extra parameters
jonnyawsom3
2023-03-06 04:18:17
I could be misinterpreting, but as far as I can see the main difference is from the git build being 0.8.1 compared to 0.9.0
afed
2023-03-06 04:21:00
there are no changes for lossless between these versions anyway
Jyrki Alakuijala
monad <@532010383041363969> This one's exceptionally antagonistic. The handling has improved with recent efforts, but is still poor. <https://dsp.stackexchange.com/revisions/25990/1>
2023-03-06 04:30:42
This image is terrible for VarDCT -- I'll investigate
jonnyawsom3
2023-03-06 04:49:35
Out of curiosity I ran that through lossless, because why not... 150:1 ratio... Not bad
veluca
Out of curiosity I ran that through lossless, because why not... 150:1 ratio... Not bad
2023-03-06 04:53:22
how many bits per pixel?
2023-03-06 04:53:38
I suspect the tree learning algorithm did a lot of magic with it
jonnyawsom3
2023-03-06 04:54:19
I *just* closed the terminal, but it was 0.014 if I recall
2023-03-06 04:55:24
-d 0 -e 9 -g 3 should give the same result. Thinking about it there's still -e 10 to try too
veluca
2023-03-06 04:55:26
yep, definitely a lot of magic 😛
-d 0 -e 9 -g 3 should give the same result. Thinking about it there's still -e 10 to try too
2023-03-06 04:55:38
with that image size, not this week...
jonnyawsom3
2023-03-06 04:56:33
Well it is still practically winter, could do with a heater for the week
2023-03-06 04:57:40
Compared to other images it also seemed to encode relatively fast on e 9, so maybe even half a week!
afed
afed <@794205442175402004><@179701849576833024> seems that git windows builds are still much slower, especially for fast speeds [...]
2023-03-06 05:15:25
clang v0.9.0 [cjxl e2 lossless] `119,771,153 -> 49,773,654: 41.55%. Cpu 34 mb/s (3.296 sec), real 34 mb/s (3.296 sec) = 100%. ram 2753260 KB, vmem 3196968 KB`
git build v0.8.1 [cjxl e2 lossless] `119,771,153 -> 49,773,669: 41.55%. Cpu 22 mb/s (4.999 sec), real 22 mb/s (4.989 sec) = 100%. ram 2754592 KB, vmem 3196992 KB`
2023-03-06 05:16:57
all single-threaded, also yeah, memory consumption increases significantly
afed clang v0.9.0 [cjxl e2 lossless] ... git build v0.8.1 [cjxl e2 lossless] [...]
2023-03-06 05:43:28
clang v0.9.0 [cjxl e6 lossy] `119,771,153 -> 6,931,089: 5.78%. Cpu 15 mb/s (7.483 sec), real 15 mb/s (7.542 sec) = 99%. ram 1876520 KB, vmem 2767608 KB`
git build v0.8.1 [cjxl e6 lossy] `119,771,153 -> 6,738,856: 5.62%. Cpu 9 mb/s (12.062 sec), real 9 mb/s (12.090 sec) = 99%. ram 1875332 KB, vmem 2767340 KB`
though for lossy there may be some changes between versions, but still I think jxl is slower overall
_wb_
2023-03-06 07:16:46
150:1 for lossless means it basically learned "the rule" or most of it
monad
Out of curiosity I ran that through lossless, because why not... 150:1 ratio... Not bad
2023-03-07 06:46:41
Yes, but that's GIF.
```
 1807631 QLyML.gif
   61918 QLyML.png
   12221 QLyML.d0e9.jxl
    7754 QLyML.webp
    6447 QLyML.d0e9I0P0g3.jxl
27991254 QLyML.d1e9.jxl
```
2023-03-08 09:49:09
Space-saving tip: convert your lossy JXLs back to PNG.
```
27991254 QLyML.d1e9.jxl
25690561 QLyML.d1e9.jxl.png
25440782 QLyML.d1e9.jxl.webp
```
jonnyawsom3
2023-03-08 09:57:13
Interesting....
_wb_
2023-03-08 10:27:17
hehe that's not a very typical behavior 🙂
monad
2023-03-08 10:28:26
It worked with every image I've tried so far.
2023-03-08 10:30:51
But going from lossy JXL to lossless JXL doesn't work.
jonnyawsom3
2023-03-08 10:38:27
"JXL, the brand new PNG encoder!"
monad
2023-03-08 10:47:48
Yep. It holds as long as every image you try is just that one image.
Demez
2023-03-08 11:25:31
first image i tried it with made a way bigger png lol
2023-03-08 11:25:40
and i even used optipng on it
_wb_
2023-03-08 11:31:23
that is more typical behavior 🙂
Orum
2023-03-09 02:54:44
I imagine it depends a bit on which encoding method you use (VarDCT vs modular)
gb82
2023-03-10 04:04:14
What avif encoding speed using libaom is equivalent to cjxl e7?
2023-03-10 04:59:58
looks like s6
afed
2023-03-11 12:20:43
<:Poggers:805392625934663710>
**clang jxl e1** `119,771,153 -> 45,851,422: 38.28%. Cpu 333 mb/s (0.343 sec), real 318 mb/s (0.359 sec) = 95%. ram 395268 KB, vmem 569528 KB`
**git jxl e1** `119,771,153 -> 45,367,590: 37.87%. Cpu 90 mb/s (1.264 sec), real 89 mb/s (1.281 sec) = 98%. ram 396540 KB, vmem 569916 KB`
**clang fast_lossless** 2 1 8 (8 threads) `239,542,306 -> 45,851,181: 19.14%. Cpu 431 mb/s (0.530 sec), real 1125 mb/s (0.203 sec) = 261%. ram 340476 KB, vmem 515180 KB`
**HALIC 0.6.3** `119,771,153 -> 39,421,509: 32.91%. Cpu 187 mb/s (0.609 sec), real 183 mb/s (0.624 sec) = 97%. ram 12212 KB, vmem 8468 KB`
**HALIC 0.6.2** `119,771,153 -> 39,349,943: 32.85%. Cpu 138 mb/s (0.827 sec), real 137 mb/s (0.828 sec) = 99%. ram 12232 KB, vmem 8452 KB`
**QLIC2** `119,771,153 -> 42,167,657: 35.20%. Cpu 130 mb/s (0.874 sec), real 130 mb/s (0.874 sec) = 100%. ram 277228 KB, vmem 274492 KB`
_wb_
2023-03-11 07:56:49
Is there source code or a description of HALIC somewhere?
2023-03-11 07:57:38
All I can find is an encode.su thread: https://encode.su/threads/4025-HALIC-(High-Availability-Lossless-Image-Compression)
2023-03-11 08:03:05
What does "high availability" mean?
jonnyawsom3
2023-03-11 10:05:08
Apparently they'll "share the code later" <https://encode.su/threads/4025-HALIC-(High-Availability-Lossless-Image-Compression)?p=78293&viewfull=1#post78293>
yoochan
2023-03-11 10:24:54
Medium availability then
jonnyawsom3
2023-03-11 11:09:26
Although 9/10 when someone says that, they end up forgetting or losing the code somehow and it's lost forever :P
Jarek
2023-03-11 11:15:42
not necessarily - as the exec is available, reverse engineering is common, e.g. lzna, kraken, razor. I wouldn't be surprised if some are already doing it
afed
_wb_ What does "high availability" mean?
2023-03-11 12:19:59
i think it's just a sort of alternative name for HAkan Lossless Image Compression, probably also because, due to its speed (even without simd and multithreading), low memory consumption and simplicity, it can be used effectively on almost any system
_wb_
2023-03-11 12:25:28
Having sources would help if you want to make it usable on almost any system
afed
2023-03-11 12:57:45
yeah, I hope it will be open source like the author said, maybe he wants to do it after the upcoming GDCC
_wb_
2023-03-11 01:34:33
One question is to what extent he has new ideas that could perhaps be 'ported' to png or jxl encoders.
2023-03-11 01:35:46
(or if there's something particularly valuable but it's a new coding tool not available in jxl, we could consider making a jxl extension for it)
jonnyawsom3
2023-03-11 02:07:31
JXXL
2023-03-11 02:07:48
Or maybe JXS
yoochan
2023-03-11 02:09:05
exists already... https://jpeg.org/jpegxs/ xxl sounds better
jonnyawsom3
2023-03-11 02:10:50
Oh yeah, forgot about that, it certainly deserves the name
2023-03-11 02:57:15
I suppose another thing to consider: if they're getting smaller files than any other format with only 10MB of memory for encode and decode, what possibilities are there once you expand the technique to utilise more resources
afed
2023-03-11 03:09:08
Halic is better than other fast encoders for photographic images, but not better than slower ones, and I think this method is not very scalable to slower modes. For slower ones there is something like LEA/GRALIC
jonnyawsom3
2023-03-11 03:14:50
Ah, I see
afed
2023-03-11 03:21:59
and even the current LEA-like encoder without optimization is pretty fast, close to the current jxl e3, and with proper optimization and simd it will probably be much faster, with compression for photos like the slowest jxl modes or even better
_wb_
2023-03-11 03:24:35
Jxl e1 could in principle be done with only enough memory to read 256 rows (or rather groupdim, so it could go down to 128 if needed) of uncompressed input image plus some small constant overhead, assuming you can seek in the output to write the toc at the end.
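As a rough worked example of that memory bound (the image dimensions here are made up purely for illustration):

```python
# Buffer for 256 rows of uncompressed input, assuming a hypothetical
# 8000-pixel-wide 8-bit RGB image; the small constant overhead is ignored.
rows, width, channels, bytes_per_sample = 256, 8000, 3, 1
buffer_bytes = rows * width * channels * bytes_per_sample
print(f"{buffer_bytes / 2**20:.1f} MiB")  # ~5.9 MiB (half that with 128 rows)
```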
2023-03-11 03:26:31
Does HALIC do arbitrary bitdepth or is it hardcoded for 8-bit?
afed
2023-03-11 03:28:08
most likely only 8 bpc without alpha
BlueSwordM
Apparently they'll "share the code later" <https://encode.su/threads/4025-HALIC-(High-Availability-Lossless-Image-Compression)?p=78293&viewfull=1#post78293>
2023-03-11 05:23:35
More like 0 availability.
afed
2023-03-15 04:44:21
also a big gap from other users for github builds. I think reviewing how binaries are compiled for windows should be a higher priority, because most people do comparisons on official releases (from the Halic thread; v0.8.1 = github build, v0.9.0 = clang)
jonnyawsom3
2023-03-15 04:52:17
I'll say it again, you're not just comparing compilers. You're comparing a build from 6 weeks ago and nearly 200 commits behind, to the bleeding edge optimizations getting put in nearly every day
afed
2023-03-15 04:55:22
most of the changes are for jpegli or small bug fixes, some for lossy quality (and some may even make encoding slower). for fast lossless modes there are no speed changes during this period, only one fix for encoding very small images. so this is purely a difference in compilers, or the new builds may even be a bit slower (for lossy)
jonnyawsom3
2023-03-15 05:04:40
Hmm, I could've sworn I specifically saw a commit that improved speed by roughly 10%, but after checking the commits I can't find it. Maybe it was sooner than I realised
afed
2023-03-15 05:10:07
because I always use only self-compiled builds, I haven't noticed any speed changes for fast lossless except when they come in these commits (and all of them were before 0.8.1 release) <https://github.com/libjxl/libjxl/commits/main/experimental/fast_lossless> but when I tried the github releases, I noticed a significant speed drop
2023-03-15 05:18:50
the same was already happening for gcc vs clang for exactly the same build before some optimizations (gcc is still slower, but not by much) https://discord.com/channels/794206087879852103/803645746661425173/1052978962055307357
kmadura
2023-03-16 09:13:32
Hello. I'm doing simple **lossless** compression time benchmarks for a scientific presentation. What I have noticed is that all cpu cores are used initially only for a split second, then the remaining computation is done on a single core. As an effect I see a noticeable correlation with cpu generation but not much with the number of available threads or clock frequency. For example on 11th-gen i5 and 12th-gen i7 laptop CPUs I get the best performance, while a 6th-gen i5 or a Xeon E-2274G on a bare metal cloud instance (in OVH) performs worse. What I suppose is happening: cjxl first reads the input file and builds/maps the MA tree (or prepares for it) using multiple cores, then relies only on SIMD for the actual calculations. Please correct me if I'm wrong.
2023-03-16 09:21:00
* all machines have 16 or 32GB of DDR4 and NVMe drives (except for one machine with SATA RAID1 SSD)
_wb_
2023-03-16 10:16:14
Most of the lossless encoding time is spent in stuff that isn't parallelized yet, at least at effort > 3. At e1 you should see good multithreading, at e2/e3 some, at e4 not much speedup at all when adding more threads
jonnyawsom3
2023-03-19 03:03:22
Just did a few very quick tests on small textures from a game: from 116KB to 59.7KB (51%), and 202 bytes to 87 bytes (43%). Also found a large texture from a different game: from 47.9MB to 24.7KB (0.05%), safe to say they didn't optimize too well on that one... All lossless naturally, so I'm surprised there's no or very little mention of game engine support in the cases where PNGs are usually used currently. Halving (or more) the largest set of files in most titles seems like a pretty good use case if you ask me
2023-03-19 03:04:54
Unfortunately it's also 3am thanks to my nerding out, so I'll have to play with it more tomorrow
username
2023-03-19 03:06:19
speaking of games/engines that use or support PNG I was thinking that it would be cool if maybe something like GZDoom added JXL support
jonnyawsom3
2023-03-19 03:29:10
I know I spent an evening trying to figure out if you can add input/output formats to blender via an addon, but maybe that's better in <#803574970180829194>
HLBG007
2023-03-22 01:41:17
https://libwebpjs.hohenlimburg.org/avif-jxl-bench/charts_avif-jpegxl.html
username
HLBG007 https://libwebpjs.hohenlimburg.org/avif-jxl-bench/charts_avif-jpegxl.html
2023-03-22 01:55:22
why is almost every single test image there pixel art or non-photographic content? wouldn't they all compress better as lossless instead?
2023-03-22 01:56:07
I took this image from that site and did a quick test
2023-03-22 01:56:24
????? i'm so confused
2023-03-22 01:56:32
is there something I am missing?
HLBG007
2023-03-22 02:20:30
As an end user I do not want to study the whole manual to get the best out of my image. I assume that the encoder decides what is best if I do not explicitly tell it to compress lossy or lossless. The default is lossy in this situation. It is therefore worth considering whether a procedure should be implemented that automatically detects such images and then compresses them losslessly, because that is smaller.
username
2023-03-22 02:27:29
encoder heuristics to automatically detect whether something will compress better as lossless or lossy are a very favorable feature. however, it's something that isn't set in stone; it can always be added in the future without breaking any pre-existing piece of software, as it's an encoding-time decision. something else that has to be brought up is that the lossless mode in avif compresses worse than webp lossless, while also only being able to work in yuv
2023-03-22 02:30:11
libjxl could benefit greatly from lossy vs lossless heuristics. however, saying that a format is better because of current encoders doesn't take into account that things change over time; a spec for a format is something that can't change, while an encoder is something that can
2023-03-22 02:32:34
what you are saying about not having to read a manual to get the best results makes sense, but like I said, encoders can be changed while format specs cannot
2023-03-22 02:33:24
so the problem/situation you bring up is something that can be fixed and changed
2023-03-22 02:40:52
apologies for my poor grammar
HLBG007
2023-03-22 03:02:46
The heuristic here is very simple math:
```js
if (different_colors_of_image + magic_threshold < width * height) {
  // lossless
} else {
  // lossy
}
```
jonnyawsom3
2023-03-22 03:06:37
https://github.com/libjxl/libjxl/pull/1395
2023-03-22 03:07:06
I've talked about that pull request a few times lately
HLBG007
I've talked about that pull request a few times lately
2023-03-22 03:30:06
nice. I just looked into this for the first time and found the solution right away.
2023-03-22 03:30:17
unfortunately the source code of jpegxl is an imposition, otherwise I would build it in myself.
_wb_
2023-03-22 05:22:42
I'm working on a paper on CID22, the dataset that is the result of the big experiment we did last year.
2023-03-22 05:23:26
Does anyone feel like reading an early draft of the paper and possibly giving some feedback?
2023-03-22 05:24:06
Plan is to submit it as a contribution to JPEG AIC-3, and then to shorten it and get it published somewhere.
2023-03-22 05:30:04
For the dataset itself, we've decided to release the annotations for the validation set (49 originals, 4292+49 mos scores). The full set is 250 originals, 21903+250 mos scores, but we'll keep ~80% of the data private so we can use it in the future to do things like testing new metrics. The images of the full set will of course be made available (but without mos scores). I'm now figuring out how I can put everything online in a nice way.
2023-03-22 05:37:14
For comparison, KADID-10k is the biggest publicly available IQA dataset afaik, and it has 81 originals, 10k mos scores. But regarding image compression it only has jpeg and j2k (the other distortions are artificial stuff like adding noise or changing brightness/contrast), at 5 qualities each, so only 810 mos scores are relevant to image compression, and at least half of those are at a silly low quality. So w.r.t. practical image compression, that set is an order of magnitude smaller than what will be the public part of CID22, and two orders of magnitude smaller than the full CID22 data.
yoochan
2023-03-22 05:43:01
I would be interested but I'm a bit short of free time at the moment (my 6-month-old firstborn sucks it up 25h a day)
_wb_
2023-03-22 05:48:29
(this is not from the paper but from a CID22 website I'm working on)
monad
2023-03-22 06:02:51
I'd read it.
Fraetor
2023-03-22 06:12:02
I'd read it.
jonnyawsom3
2023-03-22 06:12:58
I'd read it.
_wb_
2023-03-22 06:18:06
here's an early draft, please don't redistribute
zamfofex
2023-03-22 06:34:49
I feel like you are very trusting to just casually be, like, “please don’t redistribute” in a somewhat well‐populated public server! 😄
username
2023-03-22 06:36:16
I wish I could contribute but I'm not very well versed when it comes to technical papers
_wb_
I feel like you are very trusting to just casually be, like, “please don’t redistribute” in a somewhat well‐populated public server! 😄
2023-03-22 06:43:06
There are no secrets in that paper, after all I want to publish it. I just don't want early drafts to circulate too much.
username
_wb_ here's an early draft, please don't redistribute
2023-03-22 06:43:31
I see that there are version numbers written down for each piece of software used except for some like FSIM and LPIPS
2023-03-22 06:48:18
looking at the github pages for those projects does reveal that the version numbers seemingly don't matter too much for those projects specifically, but it's probably good to have the version numbers listed for completeness' sake
Fraetor
_wb_ here's an early draft, please don't redistribute
2023-03-23 08:51:10
Here's my feedback.
_wb_
2023-03-23 08:55:09
Thanks!
Fraetor
2023-03-23 08:55:32
I apologise for my handwriting.
_wb_
2023-03-25 10:23:44
Second version of the draft.
Fraetor
_wb_ Second version of the draft.
2023-03-26 11:45:03
Page 17: Where you have "one sixth of the images" I think it would be better phrased as "1 in 6 images", as should the 1 in 50 case. This makes it easier to compare the values given for the different cases. Otherwise it looks good!
Jyrki Alakuijala
2023-03-27 03:38:31
Ssimulacra2 was optimized to the data capturing process used in the test set. How much emphasis should go into ssimulacra 2 being tuned to the same process (like instructions/population/screen distribution/lighting/...) ? For example, butteraugli and dssim were optimized and tested with different processes. (There is no guarantee that a user of ssimulacra2 wants to use it exactly with the process used in this paper.)
_wb_
2023-03-27 06:01:03
Yeah it's not a fair comparison, not just because of test conditions but also e.g. the split between test and validation was only on images, not on encoders. It was tuned on training data from the same encoders and settings as what was in the validation data (different images, but still).
2023-03-27 06:04:57
But I don't know of any other big and good iqa dataset. TID2013 and Kadid10k are mostly very low quality images and irrelevant distortions. On mucped22 and the AIC-3 CTC data, ssimulacra2 correlates well, but those are small datasets and mucped22 only has RMOS scores, no scores that can be compared across different originals.
yoochan
Jyrki Alakuijala Ssimulacra2 was optimized to the data capturing process used in the test set. How much emphasis should go into ssimulacra 2 being tuned to the same process (like instructions/population/screen distribution/lighting/...) ? For example, butteraugli and dssim were optimized and tested with different processes. (There is no guarantee that a user of ssimulacra2 wants to use it exactly with the process used in this paper.)
2023-03-29 07:57:38
I'm not sure I understand what you call the data capturing process in this context? like digital photography vs scanner vs I-don't-know-what?
Jyrki Alakuijala
2023-03-29 10:46:30
Jon tuned ssimulacra2 from the same process that is used to measure its success, just different samples
2023-03-29 10:46:41
but samples were obtained using the same process
2023-03-29 10:47:04
samples for tuning butteraugli were obtained from a different process, the same for dssim and every other heuristic
_wb_
2023-03-29 01:16:02
This is why I put ssimulacra2 correlations between brackets: it's not super meaningful, it's expected to correlate well since it was tuned to do so.
2023-03-30 03:39:30
VMAF being very funky on that dataset
2023-03-30 03:41:37
Nice confirmation that butteraugli, dssim and ssimulacra2 are indeed good metrics, as I have been claiming
yoochan
2023-03-30 06:54:27
What improvement does ssimulacra2 bring over butteraugli? Or what key difference?
_wb_
2023-03-30 07:00:11
I think ssimulacra2 extends a bit better into the lower qualities, and it's perhaps a bit simpler computationally. But no metric is perfect, there are only bad metrics and really bad metrics.
Traneptora
_wb_ Nice confirmation that butteraugli, dssim and ssimulacra2 are indeed good metrics, as I have been claiming
2023-03-30 09:03:49
so when you say "good metrics" you really mean "good for a metric" right? i.e. bad metrics instead of really bad metrics <:KEKW:643601031040729099>
_wb_
2023-03-30 09:41:15
exactly 🙂
yoochan
2023-03-31 05:07:51
And if I understood well ssimulacra2 was fine tuned with subjective tests? Butteraugli also?
_wb_
2023-03-31 05:28:12
All perceptual metrics are, I suppose. But metrics like SSIM and VMAF are tuned using subjective tests at very low quality, or appeal-based tests instead of fidelity-based ones
yoochan
2023-03-31 06:42:40
Oki, thanks
veluca
2023-03-31 09:41:40
uhm, looks like I accidentally disabled AVX2/AVX512 in -e 1 three weeks ago... https://github.com/libjxl/libjxl/commit/4c7f15e2998a58006277281b0636d79fe88fff9d
_wb_
2023-03-31 09:50:24
oops, well at least it will not affect the next release 🙂
jonnyawsom3
2023-03-31 11:00:32
That reminded me, I've wondered for a while just how much of a difference AVX512 makes compared to AVX2, not many comparisons or tests of it that I've seen
gb82
2023-03-31 01:19:04
How much of a difference does AVX2 make over more basic builds?
_wb_
2023-03-31 01:37:17
If the simdified stuff is most of the computation: a lot. For something like e9 lossless encoding, it probably makes little difference since the bottlenecks there are not simdified anyway
veluca
2023-03-31 02:12:08
AVX2 probably makes the entire thingy 4x faster
2023-03-31 02:12:12
Maybe more
afed
2023-03-31 02:46:32
but for windows the current releases are also very slow for -e 1, I guess because there is something wrong with msvc and avx/avx2/avx512 is also not enabled or not working correctly
2023-03-31 02:50:48
would be good to fix this as well, because even if the compilation workflow on github is changed, many other apps are still using msvc
veluca
afed but for windows the current releases are also very slow for -e 1, I guess because there is something wrong with msvc and avx/avx2/avx512 is also not enabled or not working correctly
2023-03-31 02:54:37
yup, I am not sure how to do dynamic dispatching on msvc... patches welcome 😛
gb82
2023-03-31 10:33:25
mercury
2023-03-31 10:33:30
FF
2023-03-31 10:33:45
why is mercury so slow<:FeelsReadingMan:808827102278451241>
Demez
2023-03-31 11:09:04
try it on waterfox as well
janwassenberg
veluca yup, I am not sure how to do dynamic dispatching on msvc... patches welcome 😛
2023-04-01 07:51:15
It seems this file is not using Highway in order to avoid the dependency. I guess the biggest missing piece is __builtin_cpu_supports? You can get that by copying the x86 parts of hwy/targets.cc CPU detection.
2023-04-01 07:52:16
but at that point, not sure it's worthwhile to maintain a copy of all this code. seems there is interest in getting the vector code working on MSVC, looks like we're not even doing SSE4 on MSVC.
veluca
2023-04-01 07:53:07
Yeah, I want it to be possible to use that code as a single file
janwassenberg
2023-04-01 07:59:23
yeah. how about 2 files? I've been considering a single-header version of hwy, could be done if it were useful here.
pandakekok9
gb82 why is mercury so slow<:FeelsReadingMan:808827102278451241>
2023-04-04 06:12:18
I was wondering about that as well. They said they haven't enabled profile-guided optimization on their builds, but I didn't think it would affect performance THAT harshly
_wb_
2023-04-12 09:58:28
I'm considering making a ssimulacra 3 tuned on TID2013+KADID10k+CID22. Using multiple source datasets could make it a bit more robust (in the sense of generalizing better to new encoders / new kinds of artifacts etc)...
2023-04-12 11:50:22
going to add KonFiG-IQA scores too (with F boosting only, no other boosts), see http://database.mmsp-kn.de/konfig-iqa-database.html
2023-04-12 11:52:19
optimizing for a weighted sum of CID22 training set MSE, Kendall and Pearson correlation, and Kendall/Pearson correlation with those 3 other datasets
2023-04-12 12:01:02
KonFig has only 840 distorted images and most distortions are artificial/not very relevant to image compression, but it has a better range of distortion amplitudes than TID2013 and KADID10k, which both have mostly extremely distorted images.
afed
2023-04-12 12:36:29
does ssimulacra 3 have any other significant changes? and maybe a different scaling for higher quality?
_wb_
2023-04-12 02:38:03
https://github.com/cloudinary/ssimulacra2/blob/main/src/ssimulacra2.cc#L371 ssimulacra 2 does `- 1.634169143917183` there, which means some high quality but not lossless images will get a maximum score of 100
2023-04-12 02:38:53
in the tuning of ssimulacra 3 I'm no longer allowing such a term, to ensure that 100 means mathematically lossless and any loss will cause the value to be < 100
2023-04-12 02:40:14
(at least in principle, maybe if the error is really really tiny, float precision limits might still cause things to round to 100)
2023-04-12 02:41:55
for now I haven't thought of any new things to try in the actual computation of subscores, so perhaps I shouldn't be calling it ssimulacra 3 but rather ssimulacra 2.1 or something — just retuned weights, no fundamental changes
afed
2023-04-12 02:49:12
yeah, I think ssimulacra 2.x would be better for such changes
_wb_
2023-04-12 03:10:08
ugh, I was also adding some polynomial score scaling of the error (before applying some gamma curve and subtracting it from 100) to get a better fit with the datasets, but it is coming up with polynomials like this
2023-04-12 03:10:46
not quite monotonic - at large errors, the score gets slightly better again
2023-04-12 03:11:08
that's not a nice property, gonna force that polynomial to be monotonic
2023-04-12 03:37:48
it kind of does indicate that at the very low quality end, the TID2013/KADID10k data is probably kind of noisy and not very meaningful — if both images are horribly distorted, they're just both very bad but the difference in MOS scores doesn't mean very much anymore: you can't really say that a TID2013 MOS=1.5 image is better than a MOS=1.4 image, both are just very bad but in pairwise comparisons you probably wouldn't get a clear preference of one over the other
2023-04-12 03:40:37
getting rid of the nonmonotonic score scaling, but doing a saturation thing (plateau-ing as error gets into the ridiculously bad) does make sense
2023-04-12 03:42:32
it should also help to make BD rate computations more robust against poor bpp range choices, since essentially the very bad (e.g. negative) ssimulacra scores get compressed a bit
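A minimal sketch of the kind of monotonic, saturating score mapping being described; the tanh form and the constants are made up purely for illustration, not the actual ssimulacra 2.1 tuning:

```python
import math

def saturating_score(error, scale=10.0, gamma=0.5):
    """Monotonically decreasing in error, 100 only at error == 0, and the
    penalty plateaus instead of going ever more negative for huge errors."""
    saturated = math.tanh(error / scale) * scale  # compresses ridiculous errors
    return 100.0 - 100.0 * (saturated / scale) ** gamma

for e in (0.0, 1.0, 5.0, 50.0, 500.0):
    print(e, round(saturating_score(e), 1))
```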
Traneptora
2023-04-12 04:42:57
Why cubic out of curiosity?
_wb_
2023-04-12 06:29:22
No particular reason, 3 is just a random smallish number
2023-04-12 06:31:00
I don't think giving it a higher degree polynomial to tune is going to bring much extra advantage
Jyrki Alakuijala
_wb_ in the tuning of ssimulacra 3 I'm no longer allowing such a term, to ensure that 100 means mathematically lossless and any loss will cause the value to be < 100
2023-04-18 02:14:59
why not 0.0 for lossless?
2023-04-18 02:22:14
often quickly obtained test data has a reversal of scores at the high end -- small errors in images lead to better scores than no errors or very small errors -- testers are worried when they don't see a small error; they believe they missed a larger error, but don't have time or opportunity to look again, so they score '4' for perfect and '5' for images with small errors
_wb_
2023-04-18 02:36:32
that, but also if the test is done in an appeal-focusing way rather than fidelity-focusing (e.g. by not revealing which image is the original but just asking which is 'better'), people tend to prefer a 'denoised' image to a 'noisy' original, which introduces nonmonotonicity in the scores you get that way
2023-04-20 02:19:09
updated ssimulacra to version 2.1 — not a huge change, but should be a bit more robust overall, and work better at the lower qualities too
2023-04-20 03:41:58
the range of scores gets compacted a bit in ssimulacra 2.1 versus 2.0, so crap quality images don't go as negative anymore
Traneptora
2023-04-20 04:22:56
how do we use ssimulacra2.1? is there a tool built into libjxl?
gb82
_wb_ the range of scores gets compacted a bit in ssimulacra 2.1 versus 2.0, so crap quality images don't go as negative anymore
2023-04-20 04:41:55
:D
Traneptora how do we use ssimulacra2.1? is there a tool built into libjxl?
2023-04-20 04:42:18
https://github.com/cloudinary/ssimulacra2/commit/fb5d0dbe866e5113d320666a09ddbc8d145b5fde ssimu2.1
Foxtrot
2023-04-24 01:26:19
JPEG XL doesn't look good here, but I think it's because the tester is incompetent https://dodonut.com/blog/use-cases-of-web-image-formats/
username
2023-04-24 01:28:40
they also don't seem to be testing WebP correctly either
2023-04-24 01:29:20
they are most likely passing in all their test images at default settings and not changing anything
Eugene Vert
2023-04-24 01:31:11
Yes, they took a jpeg, jxl-transcoded it and compared it to a lossy avif
2023-04-24 01:31:33
https://discord.com/channels/587033243702788123/673202643916816384/1100046382179561544
Foxtrot
2023-04-24 01:32:58
btw, about my test... I didn't compare similarity to the original image with ssimulacra, so it's just a guess... i don't know if q60 behaves similarly in avif and jpeg xl
2023-04-24 01:40:51
maybe he will tell us, let's see https://twitter.com/FoxtrotCZ/status/1650494454766223365
_wb_
Foxtrot JPEG XL doesn't look good here, but I think it's because the tester is incompetent https://dodonut.com/blog/use-cases-of-web-image-formats/
2023-04-24 01:43:15
yes, this looks like pretty incompetent testing. Does he understand that encoders have a quality setting that can be adjusted?
username
2023-04-24 01:44:44
the article also shows AVIF as compressing better than WebP for vector art, which is not true if the right params were used
Foxtrot
2023-04-24 01:44:48
this is a case of "no good deed goes unpunished". if JPEG XL didn't losslessly transcode jpegs by default but did something like lossy q60 like AVIF, it wouldn't look bad in this case 😄
_wb_
2023-04-24 01:46:30
this kind of table is very cringe — what is this even supposed to mean
2023-04-24 01:47:20
I very much doubt that the 10 MB JPEG 2000 file has the same quality ("Good", whatever that means) as the 204 KB AVIF
2023-04-24 01:48:31
(also that header with white text on bright lime green is quite painful)
jonnyawsom3
2023-04-24 01:48:51
Why did they even test JXL if they just say "Not recommended, it's not supported" anyway?
username
2023-04-24 01:49:24
the part where they test vector art is pretty confusing
_wb_
2023-04-24 01:50:00
> In our tests, we approached the evaluation from the perspective of an average user who may not be well-versed in codecs. Additionally, we used the default settings whenever we saved a graphic, for example, from Photoshop. When saving a JPG file, we always chose the best quality option.
jonnyawsom3
username the part where they test vector art is pretty confusing
2023-04-24 01:50:31
"Scale Vector Graphics is good at vector graphics" Ya know I think they might be onto something
_wb_
2023-04-24 01:50:31
So they tried to do the evaluation from the point of view of someone who is incompetent. OK. I think they succeeded in that.
jonnyawsom3
2023-04-24 01:50:52
Maybe a bit too well
Foxtrot
Why did they even test JXL if they just say "Not recommended, it's not supported" anyway?
2023-04-24 01:51:58
I don't even understand what they mean by "not supported" in that case
2023-04-24 01:54:13
> The online conversion tool, convert.io, was then utilized to transfer the downloaded image into different formats, including JPG, PNG, WEBP, AVIF, HEIFC, and JPG XL.
I cannot find JPEG XL or AVIF in https://convert.io/image-converter, so it makes me wonder how they even converted them
username
2023-04-24 01:57:33
maybe they used this? https://jpegxl.io/
Foxtrot
2023-04-24 02:07:15
seems like it, the output size (default, lossless transcode) is almost the same as the one from their test
2023-04-24 02:08:48
hmm, I unchecked "jpeg transcode" but the size is the same as with lossless... maybe a bug?
jonnyawsom3
2023-04-24 02:10:56
The online converters tend to be a bit hit and miss for me, sometimes ignoring settings or just erroring out instead
Foxtrot
2023-04-24 02:16:15
bug reported to their discord
2023-04-24 02:16:35
would be unfortunate if JPEG XL looked bad because of a bug in an online converter...
2023-04-24 02:17:00
and tbh, I even thought that this converter was somehow official because of the URL
monad
Foxtrot bug reported to their discord
2023-04-24 02:38:56
<https://discord.com/channels/794206087879852103/794206170445119489/936386250502447204>
Foxtrot
2023-04-24 03:06:22
understood, won't get fixed ever 😄
2023-04-24 06:02:07
ok, i did some more stupid testing with this image https://unsplash.com/photos/-aljSU61s_s
1) downloaded the original size jpg
2) converted to png with imagemagick
3) converted the png to both jxl and avif
**avif**: default settings (color quality [60 (Medium)], alpha quality [100 (Lossless)])
**avif size**: 598kB
**jxl**: quality set to q64 so the file size matches the avif file
**jxl size**: 599kB
4) converted avif and jxl back to png with their respective decoders
5) compared the decoded files to the original png with ssimulacra2:
**avif**: 69.5
**jxl**: 67.9
so I guess avif slightly wins 🙂 for the same size it has slightly higher quality
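Foxtrot's steps as a script, for anyone wanting to repeat them: a sketch assuming ImageMagick's `magick`, `cjxl`/`djxl`, `avifenc`/`avifdec` and the `ssimulacra2` tool are on PATH (the file names and the q64 setting are just the ones from the chat):

```python
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

run("magick", "original.jpg", "original.png")        # 1-2) jpg -> png working copy
run("avifenc", "original.png", "out.avif")           # 3) avifenc defaults: color q60
run("cjxl", "original.png", "out.jxl", "-q", "64")   #    q64 matched the avif size here
run("avifdec", "out.avif", "dec_avif.png")           # 4) decode back to png
run("djxl", "out.jxl", "dec_jxl.png")
run("ssimulacra2", "original.png", "dec_avif.png")   # 5) prints the scores
run("ssimulacra2", "original.png", "dec_jxl.png")
```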
username
2023-04-24 06:03:03
what where the encoding times like?
Foxtrot
2023-04-24 06:03:14
i didn't test that, dunno
username
2023-04-24 06:03:30
it would be interesting to not only make the size the same between the 2 but also the encoding time
Foxtrot
2023-04-24 06:09:05
ok, I used Measure-Command in Powershell to time encoding the original png to jxl and avif: JPEG XL: 4.9s, AVIF: 13.7s
jonnyawsom3
2023-04-24 06:10:15
3x as long for 3 points of difference
w
2023-04-24 06:10:54
now do jxl -e 10
Foxtrot
2023-04-24 06:12:52
it says max value is 9 😄
_wb_
2023-04-24 06:13:09
for lossy there is no e10 anyway
w
2023-04-24 06:13:17
haha
Foxtrot
2023-04-24 06:13:31
but it was really fast, it errored out immediately 😄
_wb_
2023-04-24 06:13:41
try decoding to 16-bit png, ssimulacra sometimes doesn't like 8-bit
w
2023-04-24 06:14:28
anyway I was gonna say, if you care about time, you can't compare it if it's different times. and is it wall time or user time?
Foxtrot
2023-04-24 06:15:49
wall time
w
2023-04-24 06:16:49
i guess any kind of benchmark is interesting, since every benchmark is specific
_wb_
2023-04-24 06:17:09
probably not a good idea to take already-lossy jpeg as input — I usually downscale by at least 2x so most of the jpeg artifacts are gone
Foxtrot
2023-04-24 06:18:38
Well, I am not really a programmer, so my tests don't have much weight and I cannot do it really properly. I just wanted to see if I could do a better job than that guy in the article 😄
w
2023-04-24 06:20:10
well you've now shown that it's not easy to make any format (jxl) automatically look good, just like the article
2023-04-24 06:20:12
that has weight
gb82
2023-04-24 07:06:45
downscaled by 50% (25% of the pixels) using catrom
2023-04-24 07:07:00
going to graph jxl vs avif
Traneptora
2023-04-24 09:01:41
generally prefer to downscale with Mitchell instead of Catrom
2023-04-24 09:02:13
it's a little bit blurrier but it has fewer ringing artifacts and it tends to hide JPEG ringing in the fullsize things well
Foxtrot
2023-04-24 09:13:44
tried one test for my personal use case... I have multiple product photos saved as lossless PNG with a transparent background. The PNG is circa 10 MB; after re-encoding losslessly, AVIF is 9 MB and JPEG XL 5 MB, so it looks like JXL has a significant lead here
2023-04-24 09:14:50
I guess that makes sense... lossless isn't really needed in video
username
2023-04-24 09:23:40
lossless WebP also beats out lossless AVIF by a noticeable amount
190n
2023-04-24 09:24:51
doesn't png beat lossless avif
gb82
gb82 downscaled by 50% (25% of the pixels) using catrom
2023-04-24 10:31:44
alright tested
2023-04-24 10:31:50
2023-04-24 10:33:27
this is avif s6j16 vs jxl 0.8.1 e7
2023-04-24 10:36:03
```
Benchmark 1: avifenc -c aom -s 6 -j 16 -d 10 -y 444 --min 1 --max 63 -a end-usage=q -a cq-level=24 -a tune=ssim testimage.png testimage.avif
  Time (mean ± σ):     1.313 s ±  0.030 s    [User: 5.412 s, System: 0.092 s]
  Range (min … max):   1.273 s …  1.364 s    10 runs
```
2023-04-24 10:36:14
```
Benchmark 1: cjxl testimage.png testimage.jxl -e 7 -d 1.5 -p
  Time (mean ± σ):     919.3 ms ±   9.3 ms    [User: 3282.6 ms, System: 365.1 ms]
  Range (min … max):   905.6 ms …  936.1 ms    10 runs
```
BlueSwordM
190n doesn't png beat lossless avif
2023-04-25 02:55:54
Luckily, no. Also, unlike WebP, it at least does 10-12 bit lossless.
Demez
2023-04-25 06:24:38
iirc, don't colors get converted or something and have some sort of loss to them in avif lossless, making it not true lossless?
novomesk
Demez iirc, don't colors get converted or something and have some sort of loss to them in avif lossless, making it not true lossless?
2023-04-25 06:27:27
AVIF can be true lossless without RGB->YUV conversion.
yoochan
gb82
2023-04-25 06:33:27
a bit deceiving... what is your metric?
gb82
2023-04-25 06:34:07
Ssimulacra2, as always. I don’t know if I was trying to be deceiving, sorry for not specifying
yoochan
2023-04-25 06:37:32
the results are deceiving 😄 not you
2023-04-25 06:37:59
I expected jpegxl to perform a bit better
2023-04-25 06:39:15
perhaps e8 or e9 could have improved the score
_wb_
2023-04-25 07:10:05
it's a bit strange that the codecs saturate at something like 87 (except jxl but it also has a weird shape at the high end)
2023-04-25 07:10:25
there could be a colorspace issue
2023-04-25 07:12:41
remember that ImageMagick has this bug where it writes PNG files with a gAMA chunk but no sRGB chunk. ImageMagick itself interprets that as meaning the same as sRGB (which is incorrect), but cjxl/ssimulacra2/browsers will interpret it as what it actually is: a pure gamma transfer function without the linear segment in the darks that the sRGB tf has
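(A hedged way to spot affected files, assuming the pngcheck utility is available: a PNG whose chunk list shows gAMA but neither sRGB nor iCCP is the ambiguous case described above.)
```bash
# Dump the chunk list and look for the color-metadata chunks.
pngcheck -v testimage.png | grep -E 'gAMA|sRGB|iCCP'
```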
2023-04-25 07:15:36
also note that `cjxl -p` makes a jxl with more progressive passes than the default, which might hurt compression a little bit and likely hurts encode time too
yoochan
2023-04-25 07:36:57
can we generate a varDCT file which is not progressive at all and win a few more bits?
Tirr
2023-04-25 07:39:09
vardct images always have an 8x progressive pass (the LF image), so there's no "not progressive" vardct image
yoochan
2023-04-25 07:42:23
oki
_wb_
2023-04-25 10:23:19
there would be nothing to gain in terms of compression from not doing LF first
nec
2023-04-27 09:11:32
Have you tried using ssimulacra2 on segments of a video frame? I'm not sure if it's a good or bad idea. On one hand, I split one frame with an average score of 71, which is at least mid+ quality, into 16 parts, and the parts of the frame with movement score only ~52: very strong blur/loss of detail. On the other hand, when I play it at normal speed it's not so perceivable, because it's in motion.
2023-04-27 09:16:21
Also providing more bitrate kinda fixes it too. At 78 score, the same movement parts of the frame already have 72 score.
jonnyawsom3
2023-04-27 09:28:26
The movement is why there are separate video tools: motion hides the artefacts, and moving content can need drastically more bitrate than static content
BlueSwordM
nec Have you tried to use ssimulacra2 for segments of video frame? I'm not sure if it's good or bad idea. On one hand, I split one frame with average score 71, which is at least mid+ quality, on 16 parts and parts of frame with movement have only ~52 score, very strong blur/loss of details. On the other hand when I play it on normal speed, it's not so perceivable, because it's movement.
2023-04-28 08:02:23
Yes, and it works very well 🙂
2023-04-28 08:02:45
Basically, take the lowest quality percentile frames(5% works well), and run with it.
2023-04-28 08:02:51
Good luck with encoding speed though <:KekDog:805390049033191445>
runr855
2023-04-28 03:54:13
Sorry if this is the wrong place to ask, but how can I use ssimulacra2? I can only find source code I'd need to build myself. The GitHub page states the tool is in libjxl, but I can't find any way to use ssimulacra2 in the libjxl files
_wb_
2023-04-28 04:09:59
What OS do you want binaries for?
runr855
2023-04-28 05:31:06
Windows would be perfect
2023-04-28 05:32:34
I'm no expert, but if it's any hassle I could look into compiling it myself; it just sounded like libjxl included ssimulacra2
_wb_
2023-04-28 05:35:52
the libjxl repo includes it, but I suppose it may not be included in the binary releases
Eugene Vert
2023-04-28 05:37:07
It's behind the `JPEGXL_ENABLE_DEVTOOLS` CMake flag iirc
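(A rough build sketch for anyone compiling it themselves; assumes a Unix-like environment, and the exact output path may differ between libjxl versions:)
```bash
git clone --recursive https://github.com/libjxl/libjxl.git
cd libjxl
cmake -B build -DCMAKE_BUILD_TYPE=Release -DJPEGXL_ENABLE_DEVTOOLS=ON
cmake --build build -j
# the metric tool should land somewhere under build/tools/, e.g.:
./build/tools/ssimulacra2 original.png distorted.png
```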
_wb_
2023-04-28 06:15:25
yeah, I hope this will help to get those included in the future: https://github.com/libjxl/libjxl/pull/2443
yoochan
2023-04-28 06:20:59
You may want to try asking <@226977230121598977>, he may have compiled it for windows
2023-04-28 06:25:13
Perhaps
runr855
2023-04-28 08:26:04
Thank you, I will wait for a future release with the benchmarks included. JPEG XL is great work and I hope it will win over time!
DZgas Ж
2023-04-28 08:33:46
👨‍🔬
2023-04-28 08:34:48
Nevertheless, I continue to use the quality assessment "by eye"
_wb_
2023-04-28 08:44:12
that's the best assessment — unfortunately it cannot be automated 🙂
Eugene Vert
2023-05-06 09:05:51
https://discord.com/channels/794206087879852103/822105409312653333/1104501917289304149
```bash
184348 cjxl_e1
184200 zune
```
```bash
hyperfine -- "parallel -j 1 zune -i '{}' --encode-threads=0 -o ./out/'{.}.jxl' ::: *.ppm"
  Time (mean ± σ):     4.337 s ±  0.051 s    [User: 3.101 s, System: 0.824 s]
  Range (min … max):   4.282 s …  4.439 s    10 runs

hyperfine -- "parallel -j 1 cjxl '{}' -m 1 -d 0 -e 1 --num_threads=0 ./out/'{.}.jxl' ::: *.ppm"
  Time (mean ± σ):     1.230 s ±  0.009 s    [User: 0.597 s, System: 0.532 s]
  Range (min … max):   1.216 s …  1.245 s    10 runs
```
Upd: redid the benchmark using ppm instead of png on the kodak dataset
monad
2023-05-07 02:12:58
ok, but which is better
_wb_
2023-05-07 06:23:05
That benchmark is mostly timing png decode speed
zamfofex
_wb_ that's the best assessment — unfortunately it cannot be automated 🙂
2023-05-07 08:55:26
I wonder whether it would be possible to create a lossy “compression” mechanism that is intentionally meant to look worse than JPEG, but that scores better than JPEG at objective/automatic tests. 😄 Mostly as a proof of concept. (I wrote “compression” in quotes because it doesn’t even need to compress the image at all, just modify it some way.)
_wb_
2023-05-07 09:35:36
For PSNR that is easy to do
Eugene Vert
_wb_ That benchmark is mostly timing png decode speed
2023-05-07 05:08:49
Thanks, I hadn't thought about that. I've redone the benchmark using PPM.
_wb_
2023-05-07 05:17:25
Doing `--num_reps` also helps
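(i.e. something along these lines; `--num_reps` repeats the core coding step inside a single cjxl process, so process startup and input parsing are amortized over the runs:)
```bash
# 10 encode repetitions per invocation, timed over several invocations.
hyperfine "cjxl input.ppm out.jxl -e 1 -d 0 -m 1 --num_reps=10"
```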
2023-05-08 11:41:14
Here are some benchmark results using the most recent versions of everything:
2023-05-08 11:48:16
When comparing at same-ballpark encode speeds (which means cjxl e6 vs avifenc s8 vs cwebp m6), jxl is clearly better than the others. If encode time is no issue at all, avif can be about the same as jxl, but then it takes about 50x as much time — say if cjxl e6 takes one second, avifenc s3 takes one minute.
2023-05-08 11:50:44
jpegli is doing quite well above q80, where it is a lot better than mozjpeg in both speed and compression
2023-05-08 11:54:16
at the highest end (q90+), only <@826537092669767691>'s cavif 1.5 manages to get good quality — likely this is because it now uses 10-bit encoding (aom is at the default 8-bit here; I could try 10-bit too later to check if that's indeed what makes the difference)
2023-05-08 12:05:17
aom s9 really doesn't make much sense to use: at the low end it is worse and slower than webp and mozjpeg, at the high end it is worse and slower than jpegli. I don't see any reason to use this speed setting.
2023-05-08 12:20:33
the quality range here is from very poor quality to very high quality; I would say ssimulacra2.1=60 is about the lowest you likely want to go in practice (that's the average quality of libjpeg-turbo q43 4:2:0), 70 is a "well optimized web image" (libjpeg-turbo q66 4:2:0), 80 is a "default quality web image" (libjpeg-turbo q85 4:2:0), 90 is "consumer camera quality" (libjpeg-turbo q93 4:4:4) and anything higher starts to get good enough for authoring workflows where subsequent editing like color adjustments etc are possible.
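(One of those anchor points can be reproduced roughly like this, assuming libjpeg-turbo's cjpeg/djpeg and a ssimulacra2 binary that accepts PPM input (if not, convert both files to PNG first); `-sample 2x2` is 4:2:0, `-sample 1x1` would be 4:4:4:)
```bash
# cjpeg doesn't read PNG, so convert to PPM first.
magick input.png input.ppm

# "default quality web image": libjpeg-turbo q85 4:2:0 ~ ssimulacra2 80 on average
cjpeg -quality 85 -sample 2x2 -outfile q85_420.jpg input.ppm
djpeg -outfile q85_420.ppm q85_420.jpg
ssimulacra2 input.ppm q85_420.ppm
```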
Deleted User
2023-05-08 04:05:44
Hello, I'm doing some benchmarking of my own to compare the lossless modes (RGB -> YUV conversion notwithstanding) of various codecs, and I wondered if weakness compared to AOM on photographic data (i.e. high quality with lots of camera grain) is to be expected?
_wb_
2023-05-08 04:35:02
What do you mean, lossless yuv? That's something very different from lossless rgb
Deleted User
2023-05-08 04:35:35
I mean, I ignore the small perceptual error in converting from RGB (source) to YUV. Should I not?
2023-05-08 04:35:54
(YUV 4:4:4, of course)
_wb_
2023-05-08 04:41:08
Well that removes about 2bpp worth of information
2023-05-08 04:41:59
It's roughly similar to compressing rgb787 instead of rgb888
2023-05-08 04:42:28
Avifenc can do real lossless too, why don't you try that?
Deleted User
2023-05-08 04:56:03
I'm using imagemagick with libheif for convenience
2023-05-08 04:56:39
Huh, guess it does set the identity matrix to keep being lossless?
2023-05-08 05:06:38
I think you're right: IM doesn't allow true lossless AVIF
Traneptora
Huh, guess it does set the identity matrix to keep being lossless?
2023-05-08 07:40:03
yea, AVIF supports lossless RGB without a matrix transform
2023-05-08 07:40:09
you might need to use FFmpeg to make it work tho
_wb_
2023-05-08 08:08:16
Or you can try avifenc or cavif, I think they both have an option for lossless
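(For reference, a sketch of the avifenc route; the `--lossless` flag should set identity matrix coefficients so no RGB->YUV loss occurs, and the round trip can be verified with ImageMagick:)
```bash
# True lossless AVIF: identity matrix, no RGB->YUV loss.
avifenc --lossless input.png output.avif

# Round-trip check: AE = number of differing pixels, should print 0.
avifdec output.avif roundtrip.png
magick compare -metric AE input.png roundtrip.png null:
```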
Traneptora
2023-05-08 08:12:10
yea, that probably works too
gb82
_wb_ When comparing at same-ballpark encode speeds (which means cjxl e6 vs avifenc s8 vs cwebp m6), jxl is clearly better than the others. If encode time is no issue at all, avif can be about the same as jxl, but then it takes about 50x as much time — say if cjxl e6 takes one second, avifenc s3 takes one minute.
2023-05-10 05:34:16
Is this XYB jpegli?
_wb_
2023-05-10 05:40:41
No, it's regular YCbCr (default cjpegli)
2023-05-10 05:41:41
I can add XYB jpegli. Still a bit risky to deploy atm (too much software still treats color space as "optional metadata")
afed
2023-05-10 10:10:00
it's still useful to compare how much better XYB is; also, at least modern browsers have support
_wb_
2023-05-10 11:26:56
Agreed
gb82
2023-05-17 06:17:07
https://cdn.discordapp.com/attachments/673202643916816384/1108259646231298088/Non-Photographic_Image_-_cjpegli.webp
2023-05-17 06:17:28
https://cdn.discordapp.com/attachments/673202643916816384/1108262994695159838/chart5.webp
2023-05-17 06:17:34
I'm very impressed with jpegli
2023-05-17 06:18:12
monad
2023-05-17 07:03:44
1 is a number
_wb_
2023-05-22 12:04:41
This is a summary of my most recent benchmarking of various encoders
username
_wb_ This is a summary of my most recent benchmarking of various encoders
2023-05-22 12:25:30
this is a nice graph! although I do wonder if m5 would provide a better tradeoff compared to m6 for WebP 🤔
m3/m4 = "basic scoring/optimization"
m5 = partial trellis quantization
m6 = full trellis quantization, which the code specifically calls "much slower"
2023-05-22 12:28:12
MozJPEG uses trellis quantization by default; however, I don't know how different its implementation is compared to cwebp's
_wb_
2023-05-22 12:29:05
I manually added those labels and I don't have data on webp m5, but this is the same plot with m4 and m6
2023-05-22 12:29:38
likely m5 is somewhere in between m4 and m6 in both speed and compression
2023-05-22 12:31:14
so yes, likely m5 is a better trade-off, and probably the default m4 an even better one, but it all depends on how important speed is
2023-05-22 12:32:16
(those very fast ones are libjpeg-turbo, 444 and 420)
2023-05-22 12:37:08
If speed is really important, then you need to go to the fastest settings of libaom if you want to get in the same ballpark as libjxl e6 or webp m4...
2023-05-22 12:38:56
but that doesn't really make any sense — at that kind of speed, avif is at the same time worse and slower than the pareto front of jpeg encoders
DZgas Ж
_wb_ This is a summary of my most recent benchmarking of various encoders
2023-05-22 01:44:34
why do all the yuv420 codecs have such a noticeable sharp decline? Is this a problem with the test algorithm? If it really is like this, then for more honest tests one could downscale the image by 2x after compression, or use only black-and-white images, because the presence of yuv420 instantly puts all these codecs at a disadvantage
2023-05-22 01:51:37
sure, mozjpeg 444 in this test will be much better than webp precisely because of 444, but does that mean webp is suddenly worse than jpeg?
2023-05-22 01:55:57
jpeg xl being the kind of format it is, I don't think it can be compared to any codec that doesn't support yuv444, because jpeg xl will always be better. Well, since these tests don't compare very, very strong compression, that would be fair.
2023-05-22 01:56:57
in fact, I don't see any solution other than testing webp/jxl on black-and-white images
_wb_
DZgas Ж why do all codecs with yuv420 have such a noticeable sharp decline. Is this test algorithm problem? If it really like this. Then, for more honest tests, it would be possible to reduce the image by 2 times after compression, or use only a black-and-white image. Because the presence of yuv420 instantly puts all these codecs at a disadvantage
2023-05-22 02:17:51
because chroma subsampling just introduces a cap on the maximum obtainable quality — e.g. q100 libjpeg 420 reaches about the same ssimu2.1 score on average as q93 libjpeg 444, but it uses about twice the bitrate
2023-05-22 02:19:34
at the low end that doesn't matter much since compression will be more destructive than the chroma subsampling anyway, but at the high end, 420 is just too blunt a coding tool — after all it basically boils down to forcedly zeroing 3/4 of the chroma coefficients
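(That cap is easy to see directly; a sketch with libjpeg-turbo, using the q100-4:2:0 vs q93-4:4:4 pairing from above, with hypothetical filenames:)
```bash
magick input.png input.ppm
cjpeg -quality 100 -sample 2x2 -outfile q100_420.jpg input.ppm   # 4:2:0 at max quality
cjpeg -quality 93  -sample 1x1 -outfile q93_444.jpg  input.ppm   # 4:4:4
ls -l q100_420.jpg q93_444.jpg   # per the above, the 4:2:0 file tends to be ~2x larger
# scoring both against the source should give similar ssimulacra2 numbers on average
```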
DZgas Ж
2023-05-22 02:21:16
👆
2023-05-22 02:22:56
1 luma + 1/4 + 1/4 chroma
2023-05-22 02:25:38
It always confuses me; I have to picture the geometry of it to see that the actual amount of data is halved
_wb_
2023-05-22 02:26:47
I don't think it would be more honest to only look at grayscale images so yuv420 is not a disadvantage. If a codec doesn't support 444 then it just cannot do very high quality. Which can be fine if your use case doesn't require very high quality, but it's not something that should be hidden by not looking at color images. It's like doing a race between an old 4-speed Lada and a fancy 8-speed Ferrari, and saying the race is more honest if both are required to only use up to 4th gear because the Lada doesn't 'support' gears 5 to 8.
2023-05-22 02:27:48
4:2:0 is exactly half the data of 4:4:4 (arguably visually the most important half, but still)
DZgas Ж
_wb_ I don't think it would be more honest to only look at grayscale images so yuv420 is not a disadvantage. If a codec doesn't support 444 then it just cannot do very high quality. Which can be fine if your use case doesn't require very high quality, but it's not something that should be hidden by not looking at color images. It's like doing a race between an old 4-speed Lada and a fancy 8-speed Ferrari, and saying the race is more honest if both are required to only use up to 4th gear because the Lada doesn't 'support' gears 5 to 8.
2023-05-22 02:30:38
To show superiority, sure, that's the easier way to do it. And yet the tests I see are done precisely at high qualities
2023-05-22 02:31:51
For me it would be more honest not to include 420-only codecs in such tests at all, because the comparison doesn't really tell you anything
_wb_
2023-05-22 02:31:57
(the "avif team" way of doing the race is to put both the Lada and the Ferrari in first gear and don't allow them to change gears since first gear is the "web quality" gear)
2023-05-22 02:35:46
the range I'm showing on that plot is roughly the range that "matters" for us (Cloudinary): ssimu2.1 score ~60 is about the lowest we would aim to go and ssimu2.1 score ~87 is about the highest we would aim to go. For other use cases you may need ssimu2.1 > 90 but for web delivery 60-87 is more or less the useful range.
DZgas Ж
2023-05-22 02:38:25
I would like to see a test where the jxl-compressed image gets subsampled to yuv420 by chrome afterwards, and is scored after that 🤔
2023-05-22 02:39:35
But I understand your position. It just seems obvious to me that with an identical file size, webp cannot be better than jpeg xl at any quality at all.
2023-05-22 02:42:27
To tell the truth, I can't create, and don't even know of, an image that would come out worse in jpeg xl than in webp. Although, at the moment jpeg xl uses very, very few of all its possible block options; perhaps a pixel-art image with 4x4 pixels would do better
2023-05-22 02:43:29
But that will also stop working once libjxl uses 4x4 blocks.
spider-mario
DZgas Ж sure that Mozjpeg 444 in this test will be much better than webp precisely because 444 - but does this mean that webp is suddenly worse than jpeg
2023-05-22 09:45:11
there is a case to be made for that
_wb_
2023-05-23 05:12:48
Mozjpeg 420 does perform better than mozjpeg 444 though
gb82
2023-05-23 05:15:12
Most of the time
DZgas Ж
spider-mario there is a case to be made for that
2023-05-23 02:31:54
an interesting statement... in any case, I've unlocked guetzli (removed its q84 floor) and will do tests against webp
2023-05-23 02:41:32
q20
2023-05-23 02:42:46
a completely identical file size, and I even disabled the use of WEBP filters. but STILL, WebP looks much better
2023-05-23 02:43:38
When the webp filters are enabled, everything gets even better
2023-05-23 02:50:57
although, if you imagine that webp doesn't exist, then jpeg guetzli really is the best jpeg that can be encoded right now. here is a comparison with my mozjpeg switches
yoochan
2023-05-23 03:03:08
what use case calls for such horrendous quality? 😄
DZgas Ж
yoochan for what use do you need such an horrendous quality ? 😄
2023-05-23 03:05:15
i2pd's sites
yoochan
2023-05-23 03:07:01
oh, interesting. doesn't the final product contain text? like manga? can text be read at this quality?
2023-05-23 03:07:36
what about avif at this size?
DZgas Ж
yoochan ow, interesting, the final product doesn't contains text ? like mangas ? could text be read at this quality ?
2023-05-23 03:09:10
for manga, I use jpeg xl. But since I'm making a one-page website, I had to make 84 images, each containing a 25x25 grid of icons (pics), for choosing a manga
yoochan what about avif at this size ?
2023-05-23 03:09:32
bullshit unsuitable for anything
2023-05-23 03:10:56
it takes too long to encode and doesn't meet expectations; there are size restrictions for manga; there is no progressive decoding for icons. not suitable for anything
yoochan ow, interesting, the final product doesn't contains text ? like mangas ? could text be read at this quality ?
2023-05-23 03:17:03
I used jpeg XL at the 65535 size limit and didn't have any problems. But the icons had to be split across many images, because a single image would need to be much bigger than 65535, and it just so happens that this size is the maximum browsers will display: browsers have a limit on image dimensions and won't show an image that is even 1 pixel larger. When using AVIF for the 84 icon images I ran into a problem: it just lags, decoding it requires a lot of computation. very bad. Since the home page displays a total of 212 megapixels of images, neither JPEG XL nor AVIF can be used for this. JPEG or WEBP only
2023-05-23 03:19:16
☝️ but, given the complexity of the JPEG XL format, I'd assume it's possible to create an encoding option that compresses without using the complex JPEG XL features, as an analogue of fast decoding
yoochan
2023-05-23 03:27:19
if I understand, it's a way for you to load all thumbnails / covers at once in a single file? or even the manga itself?
DZgas Ж
yoochan if I understand, it is a way for you to load all thumbnails / covers at once in a single file ? or even the manga itself ?
2023-05-23 03:34:53
If you are so interested, you can install i2pd, figure out how it works, and go to my site hentai.i2p 🙂 I've made a lot of websites before. I'm just genuinely tired of CSS, JS, and everything else. I was able to make a mono-website using 18 thousand lines of 'html map'
yoochan
2023-05-23 03:39:30
😄 I'll try to have a look
Traneptora
2023-05-23 04:52:56
oh god
diskorduser
2023-05-23 05:36:33
I2pd? Is it like tor? 🤨
jonnyawsom3
2023-05-23 06:48:55
Sounds similar
DZgas Ж
2023-05-23 08:02:15
no
2023-05-23 08:02:48
This is a completely different Internet that is built on the basis of the regular Internet
2023-05-23 08:03:26
Using dozens of encryption algorithms, a DHT peer system, and more
2023-05-23 08:04:56
<@238552565619359744><@263309374775230465>👆
Traneptora oh god
2023-05-23 08:05:54
<:Thonk:805904896879493180> depends on what you're talking about
Traneptora
2023-05-23 08:12:12
using one image with html map sounds like a really bad idea
DZgas Ж
Traneptora using one image with html map sounds like a really bad idea
2023-05-23 08:13:59
you didn't read carefully: I wrote 83 images of 225 buttons each
2023-05-23 08:14:57
index.html is 3,257,843 bytes, and it works just great.
Traneptora
2023-05-23 08:27:34
three megabyte index.html file <:dunkVeryAustralian:802988024358764554>
Foxtrot
2023-05-23 08:49:25
https://github.com/PurpleI2P/i2pd ? It's not in Rust, so it can't be safe. The FBI is watching your memory leaks! <:CatBlobPolice:805388337862279198>
w
2023-05-23 09:01:48
rust isn't even safe. I swear they advertise that so nobody audits it and they can hide an FBI backdoor in the language
DZgas Ж
2023-05-23 09:02:39
<:Thonk:805904896879493180> cringe
2023-05-23 09:05:20
Based on the results of my tests today, I can say this: if you need yuv444, then standard guetzli q84 is a great option. And if you need lower quality, it's better to use WEBP: ```cwebp.exe -q 50 -af -noalpha -pass 10 -m 6```
2023-05-23 09:09:19
because of yuv444, jpeg guetzli has a good advantage at high qualities, but it all becomes useless at low qualities. Considering that WEBP and guetzli were made by the same company, I understand why they put a software quality floor at q84; maybe they ran their own internal tests, because below that q84 quality it's better to use WEBP rather than guetzli
Foxtrot https://github.com/PurpleI2P/i2pd ? Its not in rust so it cannot be safe. FBI is watching your memory leaks! <:CatBlobPolice:805388337862279198>
2023-05-23 09:17:26
<@100707002262503424> <@288069412857315328> https://github.com/PurpleI2P/i2pd/blob/55b2f2c625ae3b7de2d6f20716c908bba801c370/libi2pd/Identity.h#L79 <:kkomrade:896822813414011002>
Foxtrot
2023-05-23 09:18:05
FSB lol 😄
DZgas Ж
Foxtrot FSB lol 😄
2023-05-23 09:19:14
In fact, it's a very good move to use encryption keys from MORE than one country, so that everyone gets double the paranoia <:KekDog:805390049033191445>
Foxtrot
2023-05-23 09:20:30
yeah, I agree, it's definitely in Russia's interest to have a cipher that is uncrackable by the US gov
DZgas Ж
Foxtrot yeah, I agree, its definitely in Russia's interest to have cypher that is uncrackable by US gov
2023-05-23 09:24:19
* it's good to have keys that none of the countries will crack, even if they want to
_wb_
2023-05-24 09:31:38
2023-05-24 09:32:59
A comparison at similar encode speeds
username
2023-05-24 09:47:50
<@226977230121598977> in your testing of cwebp how well does "-sharp_yuv" work?
username <@226977230121598977> in your testing of cwebp how well does "-sharp_yuv" work?
2023-05-24 10:08:37
there is some coverage of it here https://www.ctrl.blog/entry/webp-sharp-yuv.html but not that many example images except the header image https://www.ctrl.blog/media/hero/adobe-webp-q50-vs-sharpyuv.1280x720.webp
yoochan
2023-05-24 11:35:10
impressive!
DZgas Ж
username <@226977230121598977> in your testing of cwebp how well does "-sharp_yuv" work?
2023-05-24 02:48:37
hm. I did some tests, but I didn't see anything. it may be worth doing the tests a little more carefully
2023-05-24 02:52:02
interesting
2023-05-24 02:55:40
8% more data, allocated exclusively to chroma: 540 kB standard, 580 kB with -sharp_yuv
2023-05-24 03:02:32
I did tests by subtracting LUMA from the image, then testing only LUMA. I found that at q50 the CHROMA channels take about 20% of the total file size, which means -sharp_yuv increases the amount of color data by about 1.5x
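(An approximation of that measurement, assuming cwebp and ImageMagick: encode the image normally and as grayscale at the same settings and compare sizes. Grayscale also changes the luma statistics slightly, so this only estimates the chroma share.)
```bash
cwebp -q 50 -m 6 input.png -o color.webp
magick input.png -colorspace Gray gray.png
cwebp -q 50 -m 6 gray.png -o luma_only.webp
ls -l color.webp luma_only.webp   # the size difference approximates the chroma share
```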
2023-05-24 03:06:15
I'd say this is almost an analogue of the chroma quality offset in x265 or x264, which may look visually better on images that are compressed harder than usual, or that contain a lot of colored detail.
2023-05-24 03:07:02
Why it's called sharp_yuv I have no idea.
2023-05-24 03:07:38
-more_quality_for_color
Jyrki Alakuijala
2023-05-25 12:24:24
yuv420 requires blurring of the signal -- there is only one chroma sample for every four luma samples -- to counter the blurring, one can 'sharpen' the signal to reduce RMS error by ~50% or so
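(An easy A/B of that effect, assuming cwebp/dwebp and a ssimulacra2 binary, with hypothetical filenames:)
```bash
# Same settings, with and without the sharp RGB->YUV420 conversion.
cwebp -q 50 -m 6 input.png -o plain.webp
cwebp -q 50 -m 6 -sharp_yuv input.png -o sharp.webp

# Decode and score both against the source.
dwebp plain.webp -o plain.png
dwebp sharp.webp -o sharp.png
ssimulacra2 input.png plain.png
ssimulacra2 input.png sharp.png
```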
Oleksii Matiash
2023-05-25 12:40:29
Did a quick test on a synthetic image, and -sharp_yuv produced artifacts that are even worse than the default mode. So it is definitely not a panacea
DZgas Ж
Oleksii Matiash Did a quick test on synthetic image and -sharp_yuv produced artifacts that are even worse than the default mode. So it is not a panacea definitely
2023-05-25 10:10:52
And you didn't write anything about your test settings. According to my tests, it works fine at strong compression, Q1-Q60
Jyrki Alakuijala yuv420 requires blurring of the signal -- there is only one sample for every four chromasity samples -- to counter the blurring, one can 'sharpen' the signal to reduce rms error by ~50 % or so
2023-05-25 10:16:56
It absolutely cannot work like that, because such an operation would require changing the decoder to use different algorithms when rendering the chroma. All I see is more information for chroma, and that's it. When there is a lot of information, the smoothing filter (-sns) stops kicking in. This is an illusion of sharpness
2023-05-25 10:17:51
👉 webp
Oleksii Matiash
DZgas Ж And didn't write anything about the tests. According to my tests, it works fine on strong compression. Q1-Q60
2023-05-26 06:32:48
Sorry, you are right. I used `cwebp -preset drawing -m 6 -q 90 -sharp_yuv`
Foxtrot
2023-06-02 05:42:31
I wondered what happened to that Dodonut article: https://dodonut.com/blog/use-cases-of-web-image-formats/ So it looks like he added a disclaimer saying he will consider suggestions in future comparisons. So he did a bad job, published wrong results, and doesn't even correct them after it was pointed out to him. Just wow.
_wb_
2023-06-02 05:45:12
I wonder why random people on the internet feel the need to make such comparisons when they clearly have no clue what they're talking about.
Foxtrot
2023-06-02 05:45:41
Also he says: > However, for AVIF and JPEG XL formats, we used the converter available at https://convertio.co/. I can't find a JPEG XL option in that converter... Maybe I'm blind, can anyone else check? Because if that's the case, I wonder where he actually converted it...
2023-06-02 05:48:11
I mean... Is that stupidity or what? Saying he converted JPEG XL with a converter that doesn't support JPEG XL...
_wb_
2023-06-02 05:48:29
Why would you use anything other than the latest version of the reference software itself for all formats? Using external tools or applications can only mean you're using an outdated version, a poor integration, or both.
2023-06-02 05:48:44
(I mean, it can also be fine, but why take any risk?)
Foxtrot
2023-06-02 05:49:33
Well, everyone can make mistakes. What annoys me is that even after being warned, he decided not to correct the wrong results... That's pure laziness.
username
2023-06-02 05:50:43
there's a high volume of online articles and blog posts whose authors don't know what they're talking about; for example, I recently found one recommending that people re-encode/recompress pre-existing JPEG files to make them progressive