JPEG XL


adoption

Adoption of jxl: what software supports jxl already, how to get more adoption?

fab
2021-06-07 02:22:00
is that good
BlueSwordM
_wb_ AVIF however is something that does seem to be a bit of a moving target, and does get regular changes still ("version 1.0" is two years old, but they keep making changes so it appears to be a not very frozen thing)
2021-06-07 02:22:13
Funnily enough, there's not a 1.0 version yet.
fab
2021-06-07 02:22:31
like, will you add a mode this year, or whenever you want?
_wb_
2021-06-07 02:22:53
I think if you want to do that, you should use image editing software to do denoising and plastification or whatever effect you're looking for, and then encode the result of that.
fab
2021-06-07 02:23:28
what is plastification? what plugins are there?
_wb_
2021-06-07 02:23:31
I don't consider it the scope of an encoder to make an image look better than the original
fab
2021-06-07 02:24:04
like, to you, an image
2021-06-07 02:24:18
that weighs 52 kb and was re-encoded from a 93 kb jpeg is bad?
BlueSwordM
2021-06-07 02:24:24
Hey, filtering should always be done by the software, not the encoder purposefully trying to make an image better.
fab
2021-06-07 02:24:32
maybe fb could do based on the size?
2021-06-07 02:24:38
on the resolution?
2021-06-07 02:24:42
is really needed?
2021-06-07 02:24:50
what fb has expressed?
Jim
2021-06-07 02:24:58
I didn't think that would even be possible. The original image is as good of quality as you will get. Each time you re-encode it, there is going to be some loss of quality. Though JXL does a better job of reducing the losses over time.
fab
2021-06-07 02:25:26
with cavif-rs the image looks orange when you re-encode
2021-06-07 02:25:44
but aom has sharpening that helps some images to get lower file sizes
2021-06-07 02:25:50
why not go in that direction?
BlueSwordM
2021-06-07 02:27:15
aomenc/rav1e/SVT-AV1 do not do sharpening...
fab
2021-06-07 02:31:04
why not go in that direction?
2021-06-07 02:31:07
fb needs lossy
2021-06-07 02:31:18
internet needs lossy
_wb_
2021-06-07 02:35:55
The purpose of lossy compression is to reduce bpp without introducing visible artifacts
2021-06-07 02:36:13
That jxl can do quite well
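The bpp figure used throughout this thread is just bits per pixel: compressed file size in bits divided by the pixel count. A minimal sketch (the image dimensions below are made up for illustration):

```python
def bpp(file_size_bytes: int, width: int, height: int) -> float:
    """Bits per pixel of a compressed image."""
    return file_size_bytes * 8 / (width * height)

# e.g. a 52 KB file at a hypothetical 1024x768 resolution
print(round(bpp(52 * 1024, 1024, 768), 4))  # 0.5417
```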
fab
2021-06-07 02:38:52
ah to png
_wb_
2021-06-07 02:39:07
The purpose of lossy is not to introduce visible differences like smoothing or sharpening, even if you like those differences
fab
2021-06-07 02:39:40
when jxl to jxl
2021-06-07 02:40:01
this i think is important to the average user
2021-06-07 02:40:27
also microsoft needs to update the decoder
2021-06-07 02:40:40
at the moment neither xnview nor imageglass has updated anything
raysar
2021-06-07 02:59:07
<@!794205442175402004> For public share, the last "stable" commit is on main/0.3.7/0.4.0 branch?
_wb_
2021-06-07 03:03:29
main is the most recent, 0.4.0 is the most stable
raysar
2021-06-07 03:15:14
I updated my sheet for new jxl users 🙂 if I forgot some software, ask me. https://docs.google.com/spreadsheets/d/1bTeraUXIl-nGM8c53IdURxmJbabX9eXqPZwVSynoH9U
lithium
raysar I updated my sheet for new jxl users 🙂 if I forgot some software, ask me. https://docs.google.com/spreadsheets/d/1bTeraUXIl-nGM8c53IdURxmJbabX9eXqPZwVSynoH9U
2021-06-07 04:06:36
Why is the documentation jxl tab empty?
Jyrki Alakuijala
_wb_ WebP is a Google format, like PSD is an Adobe format. Google defined the bitstream, Google could change it if they wanted to, there is no international standardization organization that has some kind of process, it's just a Google thing.
2021-06-07 04:06:59
important parts of lossy WebP are defined in RFC 6386 (the lossless bitstream and some other parts are, however, just tech reports by Google)
_wb_
2021-06-07 04:16:49
Yes, VP8 is a standard, but how it's hacked into an image format is not a standard, I don't even know if there's a document describing lossy webp. For lossless webp there's a description that is kind of detailed but not quite a spec, for VP8 there's a real spec, and for the muxing of alpha and VP8 and stuff like that, I think there's only code, no detailed description or spec.
2021-06-07 04:18:00
Ah wait there is a 'container spec' on the webp website, it is kind of speccy
Jyrki Alakuijala
2021-06-07 04:19:48
I used the lossless webp bitstream description as the model on what jpeg xl spec should look like
2021-06-07 04:20:27
that is why we got C-ish description instead of the commonly used algol/pascal-like description of what happens at decoding
2021-06-07 04:21:03
usually it is frowned upon by the standards people -- to have any reflection of implementation in the spec
2021-06-07 04:21:18
but hey, it is 3x less work and much easier to get right
_wb_
2021-06-07 04:21:38
Yes, that I like
2021-06-07 04:22:34
The lossless webp description is a good description, it's not really enough to do an independent implementation from that alone though
Jyrki Alakuijala
2021-06-07 04:22:45
I think more stuff should be like the webp bitstream description 🙂 ... there are deficiencies, too, like no conformance set
2021-06-07 04:23:16
there was an independent implementation in 2012 or so
2021-06-07 04:23:29
2014
2021-06-07 04:23:39
by some people related to ffmpeg
2021-06-07 04:23:50
they found a bug in the spec 😄
2021-06-07 04:24:28
(or actually the spec was correct and we had added the bug later to the code, but in a way that it affected encoding and decoding the same)
2021-06-07 04:24:44
changing the spec was 100x easier at that time than changing the implementations 😄
2021-06-07 04:25:32
WebP lossless is a very simple codec in comparison to JPEG XL
2021-06-07 04:26:08
I designed it alone -- including all the research -- in six months and three weeks or something like that
_wb_
2021-06-07 04:27:10
I think it's mostly things like "Note: Older readers may not support files using the lossless format." and "Note: Older readers may not support files using the extended format." that are not so nice for interoperability
Jyrki Alakuijala
2021-06-07 04:27:35
in webp lossless?
2021-06-07 04:27:42
uffffffffff
2021-06-07 04:27:48
I don't know of these 😄
Scope
2021-06-07 04:28:36
Also https://datatracker.ietf.org/doc/html/draft-zern-webp-01
Jyrki Alakuijala
2021-06-07 04:29:25
https://developers.google.com/speed/webp/docs/webp_lossless_bitstream_specification#abstract
2021-06-07 04:30:17
interesting the image/webp submission
2021-06-07 04:30:28
for image/jxl they put it in as provisional until standardization is ready
2021-06-07 04:30:37
I wonder what it would mean for image/webp
Deleted User
Scope Yep (and so far afaik only spider-mario uses Windows) https://discord.com/channels/794206087879852103/803574970180829194/851439593978200094
2021-06-07 04:32:07
Hey, *I also use* Windows as a daily driver... just like 99% of people I know. I only use Linux in the form of command-line WSL 2.
Scope
2021-06-07 04:34:24
This is about devs from the Jpeg XL team or people who can write and maintain a WIC decoder
Deleted User
2021-06-07 04:36:04
That'll slow down the adoption since lots of ppl use Windows.
2021-06-07 04:36:43
Maybe set up a <#806522208692207617> for this project's maintainer to update it? https://github.com/mirillis/jpegxl-wic
Scope
2021-06-07 04:39:03
It's a commercial company, I don't think it would be interested in that other than the support in its products <https://mirillis.com/> So maybe this would be better: <https://github.com/saschanaz/jxl-winthumb/issues/3> (as already mentioned)
fab
2021-06-07 04:39:32
Yes. But if Windows 11 doesn't exist, how do we do it?
lithium
2021-06-07 04:43:05
I'm a Windows user, but I just care about which image encoder can get great lossy quality for drawing content (anime).
_wb_
Maybe set up a <#806522208692207617> for this project's maintainer to update it? https://github.com/mirillis/jpegxl-wic
2021-06-07 05:01:11
Could also just fork it, I suppose?
Scope
2021-06-07 05:27:09
Also, if it's so hard to get support for Jpeg XL (which has objective advantages over AVIF in certain areas), how hard will it be for WebP2? 🤔
fab
2021-06-07 05:38:07
nhw codec is better
2021-06-07 05:38:18
nhw pulsar
diskorduser
2021-06-07 05:56:49
Nhw better? How?
Jyrki Alakuijala
2021-06-07 06:02:13
we provided Chrome with an information package -- they will be digesting it for some time
2021-06-07 06:02:29
one part was the feedback that Sami collected here 😄
fab
diskorduser Nhw better? How?
2021-06-07 06:14:32
Just joking
Deleted User
Jyrki Alakuijala one part was the feedback that Sami collected here 😄
2021-06-07 06:38:23
Well, if I had known that making AVIF look superior to JXL gives higher chances of seeing fully-featured JXL in Chrome by default, I would have written something different. :/
Petr
_wb_ It's not just time โ€” few of us use Windows and are familiar with it
2021-06-08 07:49:53
So would it be a good idea to "hire" a new core dev member who uses primarily Win? 🙂
_wb_
2021-06-08 08:28:52
I don't think it necessarily needs to be a new 'core dev', in the sense that they don't need to be involved with the internals of libjxl, it's more about integration, making/maintaining plugins, example applications, helping others with using the libjxl api.
Deleted User
2021-06-08 08:43:01
For example <@!239702523559018497> isn't a core dev, but still made good integration with Firefox.
_wb_
2021-06-08 10:08:29
<@239702523559018497> do you have an idea of how things will proceed with firefox jxl support? Are you planning to make icc profiles and animation work?
2021-06-08 10:09:17
Also, what is the process to get it enabled-by-default?
Deleted User
_wb_ <@239702523559018497> do you have an idea of how things will proceed with firefox jxl support? Are you planning to make icc profiles and animation work?
2021-06-08 12:32:55
Speaking of color profiles: when could I stop using `cs_tinysrgb` in `f_jxl`? I've just tested it and it hasn't been fixed yet 🙁
_wb_
2021-06-08 12:50:38
Right, that issue with IM creating not-quite-sRGB pngs...
raysar
lithium Why documentation jxl tab is empty?
2021-06-08 01:08:17
I need to compile good info in that tab 🙂
Jyrki Alakuijala
Well, if I would have know that making AVIF look superior to JXL gives higher chances of seeing fully-featured JXL in Chrome by default, then I would have written something different. :/
2021-06-08 02:48:24
I don't know how AOM observes JPEG XL, but the last time they were involved in a comparison was in JPEG XL competition in autumn 2018
2021-06-08 02:48:41
There, AVIF won over every other codec, including both pik and FUIF
2021-06-08 02:49:43
it could be that this is still the basic mental model of AVIF vs. JPEG XL at AOM, and it can be refreshing for them to read opinions that it could be otherwise
2021-06-08 02:51:08
if they observe or believe that AVIF is better or even roughly the same (i.e., same generation) with JPEG XL, they will only see that JPEG XL will divide the market and create confusion
2021-06-08 02:51:41
if we are able to bring info that JPEG XL is actually one generation ahead of AVIF, then they might have more interest in it
2021-06-08 02:52:07
we have been really bad in comparing the codecs (JPEG XL vs. AVIF) for several reasons
fab like even (basis compression) sometimes beats jxl
2021-06-08 02:55:41
Basis is built by really strong minds -- lzham performance was a huge inspiration for me in compression work
fab a different palette
2021-06-08 02:57:23
I consider the delta palette one of the poorly understood gems -- it will solve many problems that previously existed with pixel-crisp images containing some gradients or some photographic elements
fab
Jyrki Alakuijala Basis is built by really strong minds -- lzham performance was a huge inspiration for me in compression work
2021-06-08 03:12:23
ktx2 basis ETC1S, i compressed a screenshot
2021-06-08 03:12:31
https://deploy-preview-1017--squoosh.netlify.app/
2021-06-08 03:14:09
i wonder what version of webp2 they are using on squoosh. did they update it
Jyrki Alakuijala
2021-06-08 03:16:08
basis didn't impress me with the first image that I looked at (against jpeg xl)
2021-06-08 03:16:20
why is jpeg xl a beta?
fab
2021-06-08 03:17:15
compression mode is good compression
2021-06-08 03:17:22
i had 4 kb images with it
2021-06-08 03:17:28
it edited the icon
2021-06-08 03:17:35
so it made like a frame
2021-06-08 03:17:43
jon doesn't like it this way
2021-06-08 03:17:59
but the image could be usable if basis improve
2021-06-08 03:18:15
or if jpeg xl at same bitrate has a smoother image
2021-06-08 03:18:22
aom didn't impress me for screenshots
2021-06-08 03:18:43
https://discord.com/channels/794206087879852103/803574970180829194/851465015763795968
2021-06-08 03:18:49
or for this type of image
2021-06-08 03:19:25
with cavif-rs the image looks orange when you re-encode, but aom has sharpening that helps some images get lower file sizes
Scope
Jyrki Alakuijala There, AVIF won over every other codec, including both pik and FUIF
2021-06-08 03:46:37
My feeling from most comparisons and requirements of the Web community is that a very high priority goes to low bpp and to the efficiency a codec/format can show there (even if in practice it will not be used that much), and all other aspects are considered to a lesser extent. Jpeg XL is stronger on all the other sides (for me much more useful, but I am not the majority opinion), but low bpp is its weak side
Pieter
2021-06-08 04:29:17
I sometimes play boardgames with someone who works on AV1, I could ask his opinion...
Scope
Scope My feeling from most comparisons and requirements of the Web community is that a very high priority goes to low bpp and to the efficiency a codec/format can show there (even if in practice it will not be used that much), and all other aspects are considered to a lesser extent. Jpeg XL is stronger on all the other sides (for me much more useful, but I am not the majority opinion), but low bpp is its weak side
2021-06-08 04:59:41
- Low bpp is easier to promote, easier to show significant visual improvements and traffic savings in marketing articles; even regular users are more likely to compare codecs at low bitrates
- Encoding/decoding speed is not always fully taken into account; if it is slow, it is justified by the fact that encoders haven't yet reached maturity and needed optimizations, as well as acceptance that each more modern format may be hundreds of times more complex and slower than previous ones
- Progressivity: ordinary users usually don't even know what it is and how it affects perception, so they don't think it's necessary; also there are no studies that prove it's always useful, and the Web world already considers it proper to use separate LQIP images
- Ultra-high resolution in a single image and color depth are not considered necessary for the Web
- Lossless is also usually not considered a high priority since it does not create much Web traffic and is mainly needed for UI, graphs and such (where lossy formats are not very effective)
fab
2021-06-08 05:02:58
yes like me that i did q 63.52 new heuristics p1 and said amazing quality
2021-06-08 05:03:13
even if the compression was only about webp
2021-06-08 05:03:19
it wasn't incredible
2021-06-08 05:03:29
and the colors were desaturated
2021-06-08 05:03:50
plus other distortion artifacts that i didn't notice
2021-06-08 05:04:30
but i guess scope is talking about fb
_wb_
Jyrki Alakuijala if we are able to bring info that JPEG XL is actually one generation ahead of AVIF, then they might have more interest in it
2021-06-08 05:04:39
I don't think jxl is a generation ahead of AVIF, maybe 1/3rd of a generation but certainly not a full one. I think the main difference is that jxl was designed as a still image codec for software decoding, while avif is a video codec designed for hardware decode, hacked into an image format.
Jyrki Alakuijala
2021-06-08 05:24:38
generation means ~30 %
veluca
_wb_ I don't think jxl is a generation ahead of AVIF, maybe 1/3rd of a generation but certainly not a full one. I think the main difference is that jxl was designed as a still image codec for software decoding, while avif is a video codec designed for hardware decode, hacked into an image format.
2021-06-08 05:24:42
I think it might be a full generation (or at least 75% of one) in the higher quality
Jyrki Alakuijala
2021-06-08 05:25:24
in visually lossless or 'camera quality', I consider that AVIF often requires 2x more bits
2021-06-08 05:26:21
some weird corner just gets extremely blurry in AVIF
2021-06-08 05:26:59
supposedly AVIF needs to decide about filter strength at 64x64 resolution, whereas JPEG XL gets guidance from adaptive quantization + an 8x8 control field for it
2021-06-08 05:27:33
that alone might be a 15-20 % difference in mid and high quality
2021-06-08 05:28:09
it becomes a difficult decision how to filter a 64x64 block when the alternatives are dct artefacts or blurry details
2021-06-08 05:28:40
it seems to me that so far avifenc has opted more on the blurry side
2021-06-08 05:28:59
(I don't really know what I am talking about, I didn't read the spec, didn't write the code for AV1/AVIF)
_wb_
veluca I think it might be a full generation (or at least 75% of one) in the higher quality
2021-06-08 05:45:28
Yes, if you split it up like that, then you could say jxl is a generation ahead at high fidelity and maybe same-gen at low fidelity
veluca
2021-06-08 05:45:54
probably worse tbh - but that's OK with me
_wb_
2021-06-08 05:54:12
I say 'maybe' same gen at low fidelity because the current encoder is not there yet, but I do think the bitstream should be as capable of doing well there as avif's (but we'll only know for sure once we have an encoder that demonstrates it)
Scope
2021-06-08 07:10:02
About the shared decoder, is it possible to apply all the optimizations from libjpeg_turbo or will it have to be mostly rewritten? <https://github.com/mozilla/standards-positions/issues/522#issuecomment-856035731>
paperboyo
Scope My feeling from most comparisons or requirements of the Web community is that a very high priority goes to low bpp and what efficiency codec/format can show there (even if in practice it will not be so used), and all other aspects are considered to a lesser extent, Jpeg XL has stronger all other sides (for me much more useful, but I am not the majority opinion), but low bpp its weak side
2021-06-08 07:13:25
FWIW, here's a repeat of the POV on new formats of a big news site (although treat it like a personal one), serving all types of imagery, sometimes as a main subject, much more often as auxiliary to textual content:
1. All are exciting, possible fragmentation being more of a headache for image resizers
2. UI is for CSS/SVG, raster imagery has little use there (lossless is useless)
3. Lack of proper progressive in AVIF may be a blocker for using it in certain contexts (hero, fullscreen etc.): JXL win here is massive
4. Only lowest possible bpp matters: you go as low as possible until it starts to look shit and then you take a small step back. Any other attitude is a waste of your, but mostly, your readers' time and money*. Would be a pity to have to decide upfront that only hero images are served as JXL, because AVIF is leaner while looking good in majority of other cases.
5. Having a way of targeting that quality independently of the individual image's makeup would be a godsend (added bonus: lower q the higher the dpr, higher q on wider gamut etc.)
6. Nobody is spending time choosing best encoder and its settings per-image: encoders are not used directly, they are mediated by the resizer service. We understand the burden of providing targeted quality rendition per render, so would be willing to take some compute hit ourselves (there is ample time for preprocessing our-side if that could help inform resizers in their job of knowing what needs to be done to attain target quality)
I have no idea if any tech in the encoder could help with pt. 6. But I can't see resizer services being freely (read: cheaply) willing to create tens of prerenders just to check how to hit the target quality per-render…
* that would also be my personal advice for a photographer's website, nevermind the site where photography is not the main subject
2021-06-08 07:13:28
I'm also an amateur photographer, but the non-web use case(s) here is so different. Here JXL is way ahead of AVIF, I suppose, but I also do not care about it **that** much.
_wb_
Scope About the shared decoder, is it possible to apply all the optimizations from libjpeg_turbo or will it have to be mostly rewritten? <https://github.com/mozilla/standards-positions/issues/522#issuecomment-856035731>
2021-06-08 07:17:44
we already have SIMDified iDCT and YCbCr to RGB, basically we have everything to decode a jpeg bitstream to DCT coefficients in the encoder (which then spends effort on better entropy coding it) and everything to convert DCT coefficients is already in the decoder. So you'd just have to add some of the jpeg parsing to the decoder and the rest is there already.
2021-06-08 07:19:00
Only thing we don't have yet is a streaming jpeg parser, so it also works for progressive. (Currently the encoder expects full input available)
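The YCbCr-to-RGB step mentioned above is, in the usual JFIF full-range BT.601 convention, a fixed per-pixel linear transform. libjxl does it with SIMD over whole rows; this is just the underlying scalar math, not libjxl code:

```python
def ycbcr_to_rgb(y: int, cb: int, cr: int) -> tuple[int, int, int]:
    """Full-range BT.601 YCbCr -> RGB, as used by JFIF JPEG."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: min(255, max(0, round(v)))
    return clamp(r), clamp(g), clamp(b)

print(ycbcr_to_rgb(255, 128, 128))  # (255, 255, 255) -- pure white
print(ycbcr_to_rgb(0, 128, 128))    # (0, 0, 0) -- pure black
```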
paperboyo FWIW, here's a repeat of the POV on new formats of a big news site (although treat it like a personal one), serving all types of imagery, sometimes as a main subject, much more often as auxiliary to textual content: 1. All are exciting, possible fragmentation being more of a headache for image resizers 2. UI is for CSS/SVG, raster imagery has little use there (lossless is useless) 3. Lack of proper progressive in AVIF may be a blocker for using it in certain contexts (hero, fullscreen etc.): JXL win here is massive 4. Only lowest possible bpp matters: you go as low as possible until it starts to look shit and then you take a small step back. Any other attitude is a waste of your, but mostly, your readers' time and money*. Would be a pity to have to decide upfront that only hero images are served as JXL, because AVIF is leaner while looking good in majority of other cases. 5. Having a way of targeting that quality independently of the individual image's makeup would be a godsend (added bonus: lower q the higher the dpr, higher q on wider gamut etc.) 6. Nobody is spending time choosing best encoder and its settings per-image: encoders are not used directly, they are mediated by the resizer service. We understand the burden of providing targeted quality rendition per render, so would be willing to take some compute hit ourselves (there is ample time for preprocessing our-side if that could help inform resizers in their job of knowing what needs to be done to attain target quality) I have no idea if any tech in the encoder could help with pt. 6. But I can't see resizer services being freely (read: cheaply) willing to create tens of prerenders just to check how to hit the target quality per-render… * that would also be my personal advice for a photographer's website, nevermind the site where photography is not the main subject
2021-06-08 07:22:39
Regarding pt 6, I think that's where cjxl is miles ahead of any avif encoder (or jpeg/webp/j2k encoder for that matter).
2021-06-08 07:23:44
Cjxl has a perceptual target and pretty consistently obtains it, reaching a perceptually consistent result without human intervention.
2021-06-08 07:25:32
It only works well in the medium to high fidelity range though (I'd say reliable for d0.5 to d4), at lower fidelity human assessment is still needed to figure out if it's acceptable-ish or crap
2021-06-08 07:27:28
Any other encoder either takes a bitrate as target (a certain bpp goal) or (most commonly) a quantizer scaling (the thing they call 'quality').
2021-06-08 07:29:11
The problem with bpp or quantizer settings is that while it is a scale where "more means better", it is not an absolute scale. For some images, 0.5 bpp is fine, while for others 3 bpp is needed. For some images, -q 50 is fine while for others -q 85 is needed.
2021-06-08 07:30:20
You can do trial and error with perceptual metrics to work around this, and that's exactly what advanced image services have been doing (e.g. the q_auto in Cloudinary does something like that, and there are others that also do something like that)
2021-06-08 07:31:32
Trial and error is feasible with JPEG, but not with AVIF, where encoding is expensive enough to make one encode per image kind of the most you can afford (if it is affordable at all, even)
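The trial-and-error approach described here is essentially a search over the encoder's quality knob until a perceptual metric reaches a target. A sketch of the idea with a hypothetical `encode_and_score` callback (a real version would run an actual encoder plus a metric such as SSIM or butteraugli inside it):

```python
def find_quality(encode_and_score, target, lo=1, hi=100):
    """Binary-search the lowest quality setting whose perceptual
    score reaches `target`, assuming the score grows with quality.
    Each probe is one full encode, which is why this is feasible
    for JPEG but painful for expensive encoders like AVIF."""
    best = hi
    while lo <= hi:
        mid = (lo + hi) // 2
        if encode_and_score(mid) >= target:
            best, hi = mid, mid - 1  # good enough; try lower
        else:
            lo = mid + 1             # too lossy; go higher
    return best

# toy stand-in where the score is simply quality/100:
print(find_quality(lambda q: q / 100, 0.8))  # 80
```

At most ~7 encodes for a 1-100 range, versus the single encode budget _wb_ mentions for AVIF.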
2021-06-08 07:33:20
This is a property of an encoder, not a bitstream. Theoretically you could make a 'dumb' jxl encoder or you could make an avif encoder that also does reach a perceptual target consistently.
2021-06-08 07:34:42
But at the moment, cjxl is the only thing that does it, it's a pioneering encoder in that way, and I don't think we can expect something similar for other codecs anytime soon - it took a lot of effort (mostly by the pik team) to get it where it is now.
2021-06-08 07:37:19
I think this is _the_ killer feature of the jxl encoder from the operational perspective of a content provider
2021-06-08 07:39:12
AVIF might be better at low bpp, if you can do manual encoding, with squoosh or something, and have the time (so money, because human time is not cheap) to find that sweet spot for every image.
2021-06-08 07:41:00
But if you need automated image encoding, and you want to "set it and forget it", then atm AVIF is risky (if you set it too low, you will get a fraction of images that are going to be crap)
Scope
2021-06-08 07:41:11
Yes, it's a very useful thing that works very well, maybe not ideal for some non-photographic images (but it's tunable), and there's nothing like it that works better (other than fully encoding a lot of images until reaching the right value of some metric)
paperboyo
_wb_ I think this is _the_ killer feature of the jxl encoder from the operational perspective of a content provider
2021-06-08 07:44:48
_I think this is the killer feature of the jxl encoder from the operational perspective of a content provider_ 👍 In my humblest of opinions this should be advertised and dissected in blogposts and videos even more than progressive mode or JPEG redelivery. For “my” (mass) use-case, the most important improvements to JXL would be better quality at low bpp and better perceptual quality targeting at low bpp, then.
Scope
_wb_ we already have SIMDified iDCT and YCbCr to RGB, basically we have everything to decode a jpeg bitstream to DCT coefficients in the encoder (which then spends effort on better entropy coding it) and everything to convert DCT coefficients is already in the decoder. So you'd just have to add some of the jpeg parsing to the decoder and the rest is there already.
2021-06-08 07:45:18
This is a pretty big advantage, especially if the decoder has the same speed and features (although I don't think everyone will get rid of libjpeg_turbo very soon, but still) So I think it's worth the investment to be one of the first priorities
_wb_
Scope This is a pretty big advantage, especially if the decoder has the same speed and features (although I don't think everyone will get rid of libjpeg_turbo very soon, but still) So I think it's worth the investment to be one of the first priorities
2021-06-08 07:49:59
Yes, if we can make that work, we could 'sell' libjxl as an upgrade for libjpeg(-turbo), with a different api (inevitably, we have things like alpha, hdr, animation that do not fit into the libjpeg api) but which can read both jpeg and jxl
spider-mario
2021-06-08 07:50:51
we could call it libjpeg-ultra or something like that
2021-06-08 07:51:05
(kidding, let's not)
_wb_
2021-06-08 08:00:50
There are a bunch of applications that basically ship a copy of libjpeg-turbo: both firefox and chrome do that, lots of android apps (if they care about jpeg decode speed, since the system libjpeg-turbo can be quite old and poorly optimized for ARM), many games, etc
Scope
2021-06-08 08:02:29
Also messengers with billions of users
_wb_
2021-06-08 08:03:00
From the pov of binary size, it's a compelling idea to upgrade from libjpeg to libjxl, get a whole new codec, for a binary increase from 150 KB for libjpeg-turbo to maybe 200 KB for libjxl, so net cost only 50 KB
2021-06-08 08:04:53
Maybe with some effort we can even match the libjpeg-turbo binary size
2021-06-08 08:06:49
There is stuff in libjpeg-turbo that nobody really uses but it's there and they cannot remove it without breaking the api. Like decoding a jpeg to a buffer that uses 256-color palette...
2021-06-08 08:07:54
(that was useful in the 90s, when many video cards were still doing the 1 byte per pixel thing to save memory and trade colors for more resolution, basically)
veluca
2021-06-08 08:08:32
200kb is the size *in the chrome APK*
_wb_
2021-06-08 08:08:37
It's ridiculous to have a jpeg decoder doing that though
veluca
2021-06-08 08:08:37
not necessarily the same
_wb_
2021-06-08 08:10:42
If it is 200kb in chrome then it can be 200kb in another app too, no?
2021-06-08 08:10:52
Of course that is libjxl_dec only
2021-06-08 08:11:05
If you also need encode, it's a different story
2021-06-08 08:12:02
We haven't really tried reducing encoder binary size yet, have we?
2021-06-08 08:14:27
It does probably add quite a bit of bytes. Could reduce it by not having all encode paths (e.g. only support up to effort 7 and input color restricted to an enum colorspace)
veluca
2021-06-08 08:17:45
it's complicated
_wb_
2021-06-08 08:22:18
Anyway, there are a lot of applications that only need decode, not encode
2021-06-08 08:22:41
And those who do need encode, likely care less about binary size
2021-06-09 05:20:36
Yes, for that we'd also need to have a jpeg encoder in there... For which we actually also basically have all the code already anyway
2021-06-09 05:22:51
It's a bit weird that browsers have a jpeg and png encoder imo. Maybe that needs to be revisited at some point...
2021-06-09 06:01:03
No, and also doesn't read it
2021-06-09 06:52:46
it's quite a bit of work, mostly 'plumbing'. Making it decode jpeg _progressively_ is probably the trickiest part.
Jyrki Alakuijala
2021-06-09 09:23:23
it might be helpful for Chrome and AOM to believe there is a generation difference if more comparisons like this were made available: https://twitter.com/jyzg/status/1402555421337006084
2021-06-09 09:24:41
"This is what happens when I tried JPEG XL and AVIF on the Kodak image corpus."
2021-06-09 09:24:54
"This is what happens when I tried JPEG XL and AVIF on my vacation image."
2021-06-09 09:24:59
etc.
2021-06-09 09:25:10
people believe that AVIF is magical and works
2021-06-09 09:26:05
but its artefacts at around 1 bpp are worse than what a JPEG XL image looks like after a further round of twitter's lossy jpeg compression
2021-06-09 09:42:56
AVIF removes textures and paints the sky in a bit of an odd way
_wb_
2021-06-09 09:50:09
No no. That is just to set the bpp target
2021-06-09 09:51:07
That flat part of the dune on the right becomes plastic instead of sand with avif
Scope
2021-06-09 09:56:21
I think Jpeg XL should be promoted more on photographic sites and sites with high-resolution images, or where image quality close to the original is very important, because even with this loss of detail average sites usually don't bother
2021-06-09 10:01:00
<https://github.com/eclipseo/eclipseo.github.io>
lithium
Scope I think Jpeg XL should be promoted more on photographic and sites with high-resolution images or where image quality close to the original is very important, because even with this loss of detail average sites usually don't bother
2021-06-09 10:18:07
> image quality close to the original
Also some non-photographic image sites need these jxl features. 🙂
_wb_
2021-06-09 10:30:14
2021-06-09 10:30:39
What does a plot like that even mean?
2021-06-09 10:31:56
It seems to assume lossy compression has only one quality and they all correspond to the same fidelity
Crixis
Jyrki Alakuijala it might be helpful for Chrome and AOM to believe there is a generation difference if more comparisons like this were made available: https://twitter.com/jyzg/status/1402555421337006084
2021-06-09 10:39:04
on this image also at 0.3 bpp jxl is vastly superior
Jyrki Alakuijala
2021-06-09 10:47:12
Originally, they didn't know which bitrates to use so they let BPG decide. That was a bad decision and the effort still suffers from it. For some images the 'big' bitrates are still too low for acceptable quality. I made an attempt to convince the WebP team to renormalize the bitrates in a reasonable way, but they preferred not to touch the methodology -- just replicate it.
_wb_
2021-06-09 10:47:54
https://twitter.com/jonsneyers/status/1402577815506149378?s=19
2021-06-09 10:48:38
I wonder what people actually want in this regard, so I started a poll
Jyrki Alakuijala
_wb_ It seems to assume lossy compression has only one quality and they all correspond to the same fidelity
2021-06-09 10:48:47
I find that about 90% of people working on image quality are comfortable using the same SSIM score as proof of same quality
_wb_
2021-06-09 10:49:36
Yes, but avif is going to be more impressive at the last one.
Jyrki Alakuijala I find that about 90 % of people related to image quality are comfortable using the same ssim-score as a proof of same quality
2021-06-09 10:54:40
It's funny: I haven't met any perceptual metric designer yet who really trusts any objective metric, it's kind of common knowledge e.g. within JPEG that objective metrics can be good for catching bugs but are to be taken with several grains of salt for doing actual image quality assessment, but in more devops-like circles, objective metrics are assumed to be flawless and subjective assessment is met with scepticism.
Scope
2021-06-09 11:01:23
I think it would be better for everyone to vote on Twitter to summarize and make the results more public/popular
_wb_
2021-06-09 11:03:11
If you don't want to make a twitter account, feel free to react to this message with 1️⃣ 2️⃣ 3️⃣ 4️⃣, but otherwise, yes, what Scope said.
2021-06-09 11:04:48
I cannot vote on my own poll but as I have probably said a few times (here or in blogposts etc), I would personally prefer 2️⃣
Scope
2021-06-09 11:08:23
Maybe (though I don't know how much it really helps with the not-so-popular tags and limited time voting)
2021-06-09 11:11:13
On the other hand, about voting, the preferences in this discord and the preferences of random outsiders can be very different 🤔
_wb_
2021-06-09 11:11:45
Why not 2️⃣ > 3️⃣ ?
fab
2021-06-09 11:14:15
i like the focus on psnr jpeg xl is having
2021-06-09 11:14:40
even at speed 1 the psnr is sometimes comparable to webp2 at medium bitrates
2021-06-09 11:14:49
that's not nothing
2021-06-09 11:15:01
but jpeg xl is good
2021-06-09 11:15:46
as jon saw in discord av1 image
2021-06-09 11:15:55
the encoder currently doesn't demonstrate anything
2021-06-09 11:16:12
2021-06-09 11:16:17
2021-06-09 11:16:45
_wb_
2021-06-09 11:16:52
That's an interesting nuance. Better fidelity on desktop where bw is usually not that much of an issue, better size (without sacrificing but also without improving fidelity) on mobile where bw is an issue
fab
2021-06-09 11:16:59
webp2 set at q50
2021-06-09 11:17:04
29310
2021-06-09 11:17:16
jxl 27.3 KB (27,980 bytes)
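(Size comparisons like this are usually normalized to bits per pixel; a minimal sketch, with a made-up 1024x1024 resolution since the actual image dimensions weren't stated:)

```python
# Bits-per-pixel from a file size, the usual way to normalize
# comparisons like the 29310-byte webp2 vs 27980-byte jxl above.
# The 1024x1024 resolution is a hypothetical example; the actual
# image dimensions were not given in the chat.
def bits_per_pixel(size_bytes: int, width: int, height: int) -> float:
    return size_bytes * 8 / (width * height)

print(round(bits_per_pixel(27980, 1024, 1024), 3))  # jxl file
print(round(bits_per_pixel(29310, 1024, 1024), 3))  # webp2 file
```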
_wb_
fab i like the focus on psnr jpeg xl is having
2021-06-09 11:17:36
Focus on psnr? Are you trying to troll? 😅
fab
2021-06-09 11:17:57
on speed higher than 1 psnr is rather good
2021-06-09 11:18:11
aom focuses on vmaf
2021-06-09 11:18:18
webp2 uses same approach
2021-06-09 11:18:38
this image was with the settings i highlighted, at speed 1
2021-06-09 11:18:50
JPEG XL v0.3.7-31bbdcd7
2021-06-09 11:18:56
WEBP2 v0.1.0-a0bc155
2021-06-09 11:19:03
https://forum.doom9.org/showthread.php?t=174300&page=22
_wb_
2021-06-09 11:19:15
Psnr is not a perceptual metric and jxl explicitly doesn't focus on it; in fact it deliberately does things that are bad for psnr, like trashing the blue
fab
2021-06-09 11:34:45
128 kbps mp3 aren't good
2021-06-09 11:35:42
LAME does have some settings to extend the highs. But generally, the idea is that wasting 'bits" on sounds that are usually masked or otherwise inaudible means fewer bits where they might be more important and overall the sound can be worse. (When people hear compression artifacts, it's usually not a loss of highs that they notice/hear.).
paperboyo
_wb_ I wonder what people actually want in this regard, so I started a poll
2021-06-09 12:06:43
Very hard to answer without knowing what baseline it's relative to. My specific case: 1.8–2️⃣. Internet-average pic size/quality: 3️⃣–3.5?
_wb_
2021-06-09 12:08:01
baseline is what you do now, which indeed is different things for different people
2021-06-09 12:12:03
(a lot of sites are still serving basically unoptimized images, they could get same fidelity, much lower sizes now already by properly resizing their images and switching to mozjpeg and webp instead of uploading images straight from camera or from Photoshop saved at high quality, but I am assuming most people who answer that poll are already doing something sensible now)
BlueSwordM
2021-06-09 03:11:21
You know, I believe aq-mode=1 and chroma-deltaquant should become the default for aomenc in AVIF since it provides a large psycho-visual boost while not touching anything else.
spider-mario
_wb_ It's funny: I haven't met any perceptual metric designer yet who really trusts any objective metric, it's kind of common knowledge e.g. within JPEG that objective metrics can be good for catching bugs but are to be taken with several grains of salt for doing actual image quality assessment, but in more devops-like circles, objective metrics are assumed to be flawless and subjective assessment is met with scepticism.
2021-06-09 03:15:10
I think I may have already linked and said this but I believe the over-reliance on โ€œobjectiveโ€ metrics to be a manifestation of the streetlight effect: https://www.discovermagazine.com/the-sciences/why-scientific-studies-are-so-often-wrong-the-streetlight-effect
lithium
2021-06-09 03:15:15
Maybe 10bit is also worth enabling by default?
Eugene Vert
BlueSwordM You know, I believe aq-mode=1 and chroma-deltaquant should become the default for aomenc in AVIF since it provides a large psycho-visual boost while not touching anything else.
2021-06-09 03:15:30
Does enable-chroma-deltaq=1 work on images w/ alpha now?
spider-mario
2021-06-09 03:15:38
when you can assign a number to something, itโ€™s very tempting to see it as โ€œobjectiveโ€ and give it ultimate precedence, without much consideration for what the number actually means or represents
BlueSwordM
lithium Maybe 10bit also worth open on default mode?
2021-06-09 03:17:24
I would like that to be the case, but that will not happen as there is a noticeable encoding/decoding performance deficit when switching to 10b.
Eugene Vert Is enable-chroma-deltaq=1 now works on images w/ alpha?
2021-06-09 03:17:47
Yes, but you should only enable chroma-deltaq on the color stuff, not the alpha stuff, so as not to break the process.
lithium
Eugene Vert Is enable-chroma-deltaq=1 now works on images w/ alpha?
2021-06-09 03:18:43
> avifenc -a color:enable-chroma-deltaq=1
BlueSwordM
BlueSwordM Yes, but you should only enable chroma-deltaq on color stuff, not alpha stuff as not to break the process.
2021-06-09 03:18:55
For example: `-a tune=butteraugli -a aq-mode=1 -a color:enable-chroma-deltaq=1`
Eugene Vert
2021-06-09 03:19:35
Also, from my observation, aq-mode=1 does not work well for anime content; it leads to larger file sizes and a smoother image?
lithium
2021-06-09 03:19:36
same time 🙂
BlueSwordM
Eugene Vert Also, from my observation, aq-mode=1 does not work well for anime content, it leads to larger file size and a smoother image?
2021-06-09 03:21:48
Well, that is to be expected since it is a variance-based aq-mode (which means large flat blocks will be given more bpp), but it actually works well in that regard, preventing nasty artifacts or overblurring of some other stuff.
veluca
2021-06-09 03:22:11
what do *you* use to produce AVIF images? I've heard the recommendation of `--min 0 --max 63 -a end-usage=q -a cq-level=quality --tune=ssim`
BlueSwordM
veluca what do *you* use to produce AVIF images? I've heard the recommendation of `--min 0 --max 63 -a end-usage=q -a cq-level=quality --tune=ssim`
2021-06-09 03:23:04
That works well, but while --tune=ssim works well to preserve more detail on photographic images, it butchers color, which is why I always use `-a tune=butteraugli` now.
veluca
2021-06-09 03:23:37
also, any thoughts on when to 420 and when not to 420?
BlueSwordM
veluca also, any thoughts on when to 420 and when not to 420?
2021-06-09 03:24:01
Never chroma-subsample, unless the source is already in 4:2:0.
veluca
2021-06-09 03:24:08
heh
lithium
2021-06-09 03:24:12
But for now tune=butteraugli only supports 8-bit.
BlueSwordM
lithium But for now tune=butteraugli only support 8bit.
2021-06-09 03:24:30
Yeah. Hopefully, I'll get that fixed as well.
veluca
2021-06-09 03:24:31
our AVIF colleagues recommend to ~always `-y 420` IIRC
_wb_
2021-06-09 03:24:50
For low fidelity, 420 is likely better
BlueSwordM
_wb_ For low fidelity, 420 is likely better
2021-06-09 03:25:21
I do not believe so, from my own testing experience. It might prevent more artifacts, but it butchers color performance even more when you do chroma subsampling at low bpp.
veluca
2021-06-09 03:25:42
also -- any practical experience on the difference between `-s 0` - `-s 6`?
Scope
2021-06-09 03:25:43
Also with `--min 63 --max 63` (when min=max) encoding should be much faster
_wb_
2021-06-09 03:26:23
Well it is image dependent if you can get away with 420 or not, sometimes just doing 420 already destroys something too important before you even start compressing
veluca
2021-06-09 03:26:34
and while I'm on a roll... `svt-av1`, `rav1e` or `libaom`? 😛
BlueSwordM
veluca also -- any practical experience on the difference between `-s 0` - `-s 6`?
2021-06-09 03:27:55
`-s 0` is for absolute maximum efficiency, so not feasible to use in the real world with aomenc since it throws speed out the window. For practical purposes, I usually pick `-s 4`. As for which encoder: while I'd like to recommend rav1e, it seems to have some issues regarding YCbCr color performance in still_image mode for some images. For pure image quality, libaom. If you want to use something faster than `-s 6` in aomenc, I'd recommend switching to SVT-AV1 instead.
_wb_
2021-06-09 03:27:57
But for the ~90% of images where 420 itself is not too destructive, I think for both jpeg and avif (and heic) you get better quality/density by doing 420, at least at the low end (at higher fidelity it's another story)
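What 4:2:0 actually does can be shown with a toy sketch (plain Python, no real codec involved; the chroma-plane values are invented for the example):

```python
# Toy sketch of 4:2:0 chroma subsampling: each 2x2 block of the
# chroma plane is averaged into a single sample, quartering chroma
# resolution before any actual compression even starts. The plane
# values here are made up.
def subsample_420(chroma):
    h, w = len(chroma), len(chroma[0])
    return [
        [(chroma[y][x] + chroma[y][x + 1]
          + chroma[y + 1][x] + chroma[y + 1][x + 1]) / 4
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

# A sharp color edge that happens to align with the 2x2 grid survives...
print(subsample_420([[200, 200, 0, 0],
                     [200, 200, 0, 0]]))   # [[200.0, 0.0]]

# ...but shift it by one pixel and it smears into an average,
# which is the "destroys something too important" case.
print(subsample_420([[0, 200, 200, 0],
                     [0, 200, 200, 0]]))   # [[100.0, 100.0]]
```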
2021-06-09 03:28:45
Our experience (from a while back) is that svt is best at speed 7, and aurora is even better
veluca
BlueSwordM `-s 0` is for absolute maximum efficiency, so not feasible to use in the real world using aomenc since it throws speed out of the water. For practical purposes, I usually pick `-s 4` As for which encoder, while I'd like to recommend rav1e, it seems to have some issues regarding YCbCr color performance in still_image mode for some images. For pure image quality, libaom. If you want to use something faster than `-s 6` in aomenc, I'd recommend switching to SVT-AV1 instead.
2021-06-09 03:29:02
I mean, how much do you feel `-s 0` improves things over `-s 6`, if you don't care about time?
BlueSwordM
veluca I mean, how much do you feel `-s 0` improves things over `-s 6`, if you don't care about time?
2021-06-09 03:30:22
That is something I've not experimentally measured since I only started the encoding tests today, but from anecdotal evidence, anything below `-s 2` or `-s 3` gets negligible gains, unless at extremely low bpp where every millipercent matters xD
veluca
2021-06-09 03:30:45
fair enough
2021-06-09 03:30:57
I haven't seen much of a difference in the higher quality settings
2021-06-09 03:31:17
(anecdotally)
BlueSwordM
BlueSwordM `-s 0` is for absolute maximum efficiency, so not feasible to use in the real world using aomenc since it throws speed out of the water. For practical purposes, I usually pick `-s 4` As for which encoder, while I'd like to recommend rav1e, it seems to have some issues regarding YCbCr color performance in still_image mode for some images. For pure image quality, libaom. If you want to use something faster than `-s 6` in aomenc, I'd recommend switching to SVT-AV1 instead.
2021-06-09 03:31:38
But yeah, now you understand why I prefer using CJXL for images 😛
veluca
2021-06-09 03:31:50
a bit faster? 😛
2021-06-09 03:32:13
and I think it can get in ~1bpp what AVIF can get in 1.4 or so
BlueSwordM
2021-06-09 03:33:19
Speed is not the only consideration. It's also complexity on the encoder side, tradeoffs, having to deal with non-psycho-visually-aided "quality" settings (whereas `-d --distance` works exceptionally well), and feature set.
2021-06-09 03:34:03
I really don't like having to pick a specific quantizer setting for example, I just want a quality knob, and to be done with it.
2021-06-09 03:34:45
I also like not having banding from 8bpc quantization; cjxl does everything in 24-32-bit, bypassing the problem entirely.
2021-06-09 03:35:00
It also has noise synthesis, which makes its low bpp performance in photographic images quite a bit stronger, like AV1 grain synthesis.
lithium
2021-06-09 03:35:39
In my tests with non-photographic images, if I want to find the best lossy quality across different images, the avif quantizer is difficult to control for quality; jxl's butteraugli heuristics are more stable at controlling quality, so I'm really looking forward to vardct getting some improvements for non-photographic images. 🙂
BlueSwordM
2021-06-09 03:37:16
In the end, I do believe AVIF can carve out its niche (extremely strong animation performance and good all-around performance if tuned well, especially at low bpp).
2021-06-09 03:37:58
However, JPEG-XL has been designed from the ground up as a next generation image format, and it is better suited to images overall than AVIF.
lithium
2021-06-09 03:39:26
JPEG XL: Alien Technology From The Future! 😛
Scope
Scope Also with `--min 63 --max 63` (when min=max) encoding should be much faster
2021-06-09 03:39:52
And this is not a joke; although I did not compare the latest builds (maybe something has changed), after certain updates, encoding with a constant quantizer was noticeably faster than with a variable quantizer
BlueSwordM
Scope And this is not a joke, although I did not compare the latest builds (maybe something has changed), but after certain updates encoding with the constant quantizer was noticeably faster than with the variable quantizer
2021-06-09 03:40:45
I mean, to be fair, encoding at such low file sizes is indeed very fast.
Scope
2021-06-09 03:42:30
This is just an example, the main thing is that they are equal
BlueSwordM
2021-06-09 03:42:40
I see 🤔
Scope
2021-06-09 03:46:21
And with `aq-mode=1` I also had worse quality on some art content, so it's not a very universal option for everything
lithium
2021-06-09 03:49:04
Conclusion: we need jpeg xl
BlueSwordM
2021-06-09 03:51:26
Conclusion: We need every encoder to get better 😛
Jyrki Alakuijala
2021-06-09 04:03:58
what is the best command line (or a good commandline) for avifenc ?
BlueSwordM That works well, but while --tune=ssim works well to preserve more detail on photographic images, it butchers color, which is why I always use `-a tune=butteraugli` now.
2021-06-09 04:04:55
how do you observe improvements due to butteraugli?
lithium In my test, For non-photographic images, If I want find best lossy quality on different images, avif quantizer is difficult to control quality, jxl butteraugli and heuristic is more stable to control quality, so I very look forward vardct can implement some improve for non-photographic images. 🙂
2021-06-09 04:06:15
Thank you for the reminder!!! 🙂
BlueSwordM Never chroma-subsample, unless the source is already in 4:2:0.
2021-06-09 04:07:35
in guetzli we noticed that often, even when starting from yuv420 sources, just going to a lower quality with yuv444 gave better compromises
2021-06-09 04:08:20
yuv420 quantization noise in the chromaticity plane goes twice as far, and some red quantization noise can be quite visible far apart
2021-06-09 04:08:49
recompressing can increase this quantization and make it a big issue -- even when the image itself doesn't have yuv444 information in it
2021-06-09 04:09:03
possibly this is less of an issue in a more advanced codec like AVIF than it is for JPEG
lithium
Jyrki Alakuijala what is the best command line (or a good commandline) for avifenc ?
2021-06-09 04:14:07
For me I use it like this; I think BlueSwordM knows more about av1.
> avifenc --min 10 --max 10 -d 10 -s 4 -j 12 -a end-usage=q -a cq-level=10 -a color:aq-mode=1 -a color:enable-chroma-deltaq=1 -a color:enable-dnl-denoising=0 -a color:denoise-noise-level=8
> // default tune psnr
> // -a color:denoise-noise-level=8 for photo
> // -a color:denoise-noise-level=5 for drawing
> // default 1/13/6
> // --cicp 2/2/8 ycocg
> // --cicp 2/2/6 ycbcr
Scope
Jyrki Alakuijala what is the best command line (or a good commandline) for avifenc ?
2021-06-09 04:21:02
Yeah, I think something like this: `avifenc --min X --max X -d 10 -a color:denoise-noise-level=5 -a color:enable-dnl-denoising=0 -a color:enable-chroma-deltaq=1` Maybe sometimes without: `denoise-noise-level/enable-dnl-denoising` And with : `tune=butteraugli/ssim` `aq-mode=1`
Jyrki Alakuijala
2021-06-09 04:25:49
what would you use for the same cjxl spell?
2021-06-09 04:25:58
(for the roughly same purpose)
Scope
2021-06-09 04:29:15
In my opinion there is not much I can change for lossy in cjxl other than the quality and speed settings
2021-06-09 04:32:49
And even in the speed settings I usually don't touch other values besides `-s 7` and `-s 8`, `-s 9` is too slow and doesn't give better results (and can also be worse sometimes) If we talk about the correlation with AVIF quality settings, it's hard to compare and very much depends on the content
lithium
2021-06-09 04:33:30
In my test, avifenc end-usage=q quality compare jxl -d, just a sample data not too much accurate. (use maxButteraugli_jxl, 3Norm, 9Norm, dssim, ssimulacra adjust) > q7~q15 -s4 near cjxl -d 0.5~1.0 -s7 > q15~q22 -s4 near cjxl -d 1.3 ~ 1.45 -s7 > q22~q28 -s4 near cjxl -d 1.6 ~ 1.9 -s7
Jyrki Alakuijala
2021-06-09 04:42:44
I tried to improve my response on mozilla's standard track -- https://github.com/mozilla/standards-positions/issues/522#issuecomment-853919129
Crixis
Jyrki Alakuijala I tried to improve my response on mozilla's standard track -- https://github.com/mozilla/standards-positions/issues/522#issuecomment-853919129
2021-06-09 05:09:19
When did it become bullying? <:kekw:808717074305122316>
lithium
2021-06-09 05:15:14
How do we contact the Vimeo devs and convince them to use jxl? https://medium.com/vimeo-engineering-blog/upgrading-images-on-vimeo-620f79da8605
fab
2021-06-09 05:16:13
jxl isn't meant for them
2021-06-09 05:17:02
unless you do
2021-06-09 05:17:03
JPEG XL v0.3.7-31bbdcd7 -d 3.02577 -s 4 --faster_decoding=3 --use_new_heuristics
2021-06-09 05:17:13
jpeg xl can waste bits
2021-06-09 05:17:45
because it isn't optimized for quantizers or for slower speeds
Scope
lithium How to contact Vimeo dev and convince they use jxl? https://medium.com/vimeo-engineering-blog/upgrading-images-on-vimeo-620f79da8605
2021-06-09 05:21:45
In Daala IRC, but this is useless until JXL support is enabled in browsers by default
veluca
2021-06-09 05:29:02
well, them wanting to use it would give a datapoint to browsers 😛
Scope
Jyrki Alakuijala (for the roughly same purpose)
2021-06-09 06:26:43
However, about JXL magic settings, such people exist https://discord.com/channels/794206087879852103/840831132009365514/852171018198056990
2021-06-09 06:30:36
So, theoretically, the default lossy encoding settings are not enough for 100% of users 🤔
_wb_
2021-06-09 06:33:52
Finetuners are gonna finetune. Most people won't bother (or cannot bother, since they don't get much control over encoder settings - applications with write support rarely expose all the detailed knobs and dials that the encoder has)
Scope
2021-06-09 06:42:23
Yes, and for JXL lossy it can be both good and bad at the same time: good if the default settings/encoder choices are the most optimal and give the best quality, but bad if not and there is no way for users to improve anything
_wb_
2021-06-09 07:15:08
There is always a way: use cjxl or your own encoder directly. But in many cases someone else makes the decision, not the user, and that someone else is an automated system, like the resize/recompression that twitter, facebook, whatsapp etc do: no user control at all, you get what they do with the image and that's it.
2021-06-09 07:16:37
In image editors you likely get a quality slider but that's about it (in some you might get a bit more control, maybe behind some 'advanced' button)
2021-06-09 07:18:43
I think jxl is an opportunity for companies like facebook and twitter to get rid of their reputation as "ruining the images" if they use jxl mostly to get a better / more consistent fidelity, not to save more bandwidth
fab
Scope However, about JXL magic settings, such people exist https://discord.com/channels/794206087879852103/840831132009365514/852171018198056990
2021-06-09 07:42:55
-d 3,02577 -s 4 --faster_decoding=3 --use_new_heuristics
Scope
2021-06-09 07:47:13
b- 77520,3 ƨ- 4 3=ϱniboɔɘb_ɿɘƚƨɒʇ-- ƨɔiʇƨiɿuɘ⑁_wɘn_ɘƨu-- 🪄
fab
2021-06-09 08:24:20
i can do the same at s 7
2021-06-09 08:24:29
but i will get worse compression
2021-06-09 08:24:31
so not 0.227 bpp
2021-06-09 08:24:36
for screenshots
2021-06-09 08:25:05
the improvement in image quality probably won't be noticeable
2021-06-09 08:25:29
because these settings already give higher quality than webp for 99% of users i think
2021-06-09 08:25:45
if you want to tune every single image you can
2021-06-09 08:26:19
but i don't think jpeg xl will be better than webp and avif at 0.227 bpp
2021-06-09 08:26:35
better visually looking
2021-06-09 08:30:55
webp2 is good also
2021-06-09 08:31:46
jpeg xl at the moment is good for the reasons you all gave, but in the end the generation loss and the algorithm don't seem particularly interesting
2021-06-09 08:32:03
because we do not know many things
2021-06-09 08:32:28
also, the quality when you specify a distance is not printed by cjxl
_wb_
2021-06-09 08:46:26
You mean the mapping between -d and -q?
lithium
2021-06-09 08:55:30
About distance quality (just-noticeable error) you can see this, from Jyrki Alakuijala's comment:
> -d 4.1 means that you accept errors that are 4.1x what I consider just-noticeable-error.
> at -d 1.0 you would get exactly what I consider just-noticeable-errors.
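For what it's worth, the `-q`-to-`-d` mapping in the cjxl CLI is roughly linear at high quality; a minimal sketch of the q >= 30 branch (constants approximated from the tool of this era, so treat them as an assumption rather than gospel):

```python
# Approximate mapping from cjxl's -q quality to -d Butteraugli
# distance in the q >= 30 range. The constants are taken from the
# cjxl command-line tool of roughly this era -- verify against your
# libjxl version before relying on them.
def quality_to_distance(q: float) -> float:
    if q >= 100:
        return 0.0  # mathematically lossless target
    if q >= 30:
        return 0.1 + (100 - q) * 0.09
    raise NotImplementedError("q < 30 uses a different, steeper curve")

for q in (90, 80, 70, 60):
    print(q, "->", round(quality_to_distance(q), 2))
```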
_wb_
2021-06-09 08:56:46
2021-06-09 08:58:03
So far, looks like 92% wants at least the same fidelity (not lower fidelity with still good appeal), 45% wants higher fidelity
tufty
2021-06-10 10:16:51
libvips 8.11 is out and includes jpeg-xl support https://twitter.com/jcupitt65/status/1402932085304791045
_wb_
2021-06-10 10:18:12
<:Hypers:808826266060193874>
Jyrki Alakuijala
2021-06-10 12:55:23
perhaps tweet with @twitter ?
2021-06-10 12:56:14
what does it mean? what would be the ideal solution here?
_wb_
2021-06-10 12:57:40
the first generation of digital TV was strictly worse than the end stage of analog TV; they used the advance to ship more channels instead of better or even the same quality
veluca
2021-06-10 02:35:59
probably windows *developers*, but ... 😄
2021-06-10 02:36:16
(we should also package the WIC plugin if we can, I guess)
diskorduser
2021-06-10 07:02:43
Having a Windows package is nice. Is it possible to add a jpegxl plugin to the Windows Store, like they do with av1?
veluca
2021-06-10 07:07:26
I *think* that has to officially come from M$
fab
2021-06-10 07:36:33
not in the windows store, in windows update
2021-06-10 07:36:45
windows sun valley should have jpeg xl
Crixis
veluca I *think* that has to officially come from M$
2021-06-10 08:57:40
I don't think there is a limit on third-party apps
2021-06-10 08:59:06
Also, with the "new Windows", the store will be integrated with a package manager; distributing jxl in this manner would be fantastic
2021-06-10 09:00:51
Spoiler: the package manager will be a copy of apt-get; the author of apt-get spoke about it some time ago
veluca
2021-06-10 09:08:09
well, the thing is that plugins like the AV1/AVIF ones use some non-public API IIRC
Crixis
2021-06-11 06:30:32
Oh, seems my memory is blurry
eclipseo
2021-06-11 06:34:36
Any info on whether camera and smartphone manufacturers would adopt JPEG-XL? Would the force of the JPEG brand help?
BlueSwordM
2021-06-11 02:34:54
Only because there were not many alternatives at the time for 10b output.
2021-06-11 02:35:07
Now that JPEG-XL is here, that might change soon™️
fab
2021-06-11 03:36:56
Jpeg xl is simply not made for a 10 year old computer
2021-06-11 03:37:27
Its processing is heavy 32-bit
2021-06-11 03:38:22
4 minutes for a 16 mpx image at kitten, and 3,254 MB ram consumption
2021-06-11 03:39:00
With a new architecture it will probably use 6 GB
Deleted User
fab Its processing is heavy 32 bit
2021-06-11 03:48:58
Correct me if I'm wrong, but shouldn't 32-bit float actually be faster to process than 24-bit int?
veluca
2021-06-11 03:48:58
why would it be faster?
2021-06-11 03:49:15
not that 24-bit ints exist, you'd use 32-bit ints
Deleted User
2021-06-11 03:49:48
24-bit int doesn't align well with CPU registers
2021-06-11 03:50:03
Would 32-bit int be faster than 32-bit float?
veluca
2021-06-11 03:50:22
depends on what you need to do with them 😛
2021-06-11 03:50:36
addition? sure. multiplication? depends. division? slower
2021-06-11 03:51:11
although for image stuff division is not so common
2021-06-11 03:51:41
so often they'd be faster... but with SIMD archs, they have some interesting limitations in weird places
Deleted User
veluca so often they'd be faster... but with simd archs, have some interesting limitations in weird places
2021-06-11 03:52:41
> some interesting limitations in weird places Which ones for example?
veluca
2021-06-11 03:53:53
IIRC fixpoint multiplications are different on arm and x86
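An example of the kind of quirk veluca means, sketched in Python rather than SIMD (the mapping to specific instructions is from memory, so treat it as an assumption):

```python
# Two flavours of Q15 fixed-point multiplication: one that rounds
# the product (roughly what x86 SSSE3 pmulhrsw does) and one that
# truncates (roughly what ARM NEON sqdmulh does). The instruction
# semantics are paraphrased from memory -- check the Intel and ARM
# intrinsics references before depending on the exact behaviour.
def q15_mul_round(a: int, b: int) -> int:
    return (a * b + (1 << 14)) >> 15

def q15_mul_trunc(a: int, b: int) -> int:
    return (a * b) >> 15

a, b = 16384, 3  # 0.5 and a tiny value, in Q15
print(q15_mul_round(a, b), q15_mul_trunc(a, b))  # -> 2 1, off by one ULP
```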
Deleted User
2021-06-11 03:57:39
Where and how can I learn more about such weird quirks of low-level implementation details?
diskorduser
fab Jpeg xl is simply Not made for 10 year old computer
2021-06-11 04:25:22
So?
Deleted User
fab 4 minute for a kitten 16 mpx image and 3.254 mb ram consumption
2021-06-11 04:38:27
WHOAAAAA Which settings did you use? `-s 7` VarDCT shouldn't be taking *that* long, I've tested it on a 19 MPix image and it took only a couple of seconds on an 8-year-old Dell Latitude E6330...
_wb_
2021-06-11 04:54:34
probably modular with weird options
2021-06-11 04:54:58
modular encoding is still pretty slow at effort > 3
veluca
2021-06-11 04:56:03
kitten is also quite a bit slower than -s 7
Where and how can I learn more about such weird quirks of low-level implementation details?
2021-06-11 04:56:32
all I know I learned from https://developer.arm.com/architectures/instruction-sets/intrinsics/ and similar Intel info xD
Scientia
WHOAAAAA Which settings did you use? `-s 7` VarDCT shouldn't be taking *that* long, I've tested it on a 19 MPix image and it took only a couple of seconds on an 8-year-old Dell Latitude E6330...
2021-06-11 05:53:41
He uses a VERY old laptop
2021-06-11 05:53:52
I think first generation i core
_wb_
2021-06-11 06:05:07
An old enough laptop has a weaker cpu than a cheap phone
fab
2021-06-11 06:06:13
i used -d 0.048176 -s 8
2021-06-11 06:06:36
with JPEG XL v0.3.7-31bbdcd7
2021-06-11 06:08:53
it says encoding to d 0.100
2021-06-11 06:09:01
but i have to verify it
Scientia
2021-06-12 02:10:58
<@416586441058025472> what are your computer specs again?
190n
2021-06-12 06:18:39
new linux image viewer: https://www.reddit.com/r/linux/comments/nxj1xt/image_roll_my_new_simple_and_fast_gtk_image i asked about avif/jxl support
fab
Scientia <@416586441058025472> what are your computer specs again?
2021-06-12 07:28:45
i3 330m 4 gb ram
2021-06-12 07:29:00
this was the best you could have in a laptop in 2009
2021-06-12 07:29:06
799 euros i paid
2021-06-12 07:29:19
dual core still
_wb_
2021-06-12 07:32:54
Perhaps you don't have enough memory to do encoding of large images with the current cjxl
2021-06-12 07:33:40
If it starts using swap space then no way it is going to have reasonable speed
2021-06-12 07:54:42
Also it could help to switch to an OS that doesn't eat half your RAM itself
veluca
2021-06-12 08:09:25
probably not 😛
_wb_
2021-06-12 08:09:51
I think the rpi has a weaker single-core cpu but it has 4 of them, probably still weaker overall
veluca
2021-06-12 08:10:06
the pi 4 is pretty slow
_wb_
2021-06-12 08:11:26
8GB RAM will help though, Fabian's 4GB laptop likely starts swap thrashing on big image encoding
2021-06-12 08:24:29
Clockspeed is only part of the story, it's more about cache, pipelining/throughput, simd instructions, etc
2021-06-12 08:28:54
Hm, how come there are no ARM results on https://openbenchmarking.org/test/pts/jpegxl-decode&eval=3bf51333cfa7b617571e31ae4e2833d0be552161#metrics ?
Jyrki Alakuijala
fab 799 euros i paid
2021-06-12 08:45:31
I wonder if buying a 1000 euro laptop every 10 years or a 500 euro laptop every 5 years is a better strategy on average
_wb_
2021-06-12 09:11:53
Taking warranty, chances of accidents etc into account: cheaper one more regularly is better
2021-06-12 09:12:21
Even without that, it's probably better on average
2021-06-12 09:15:04
Flagship models are always more expensive than they can be, they need to amortize the development cost somewhere and they cannot do it on the low/medium end devices because that market is too competitive
2021-06-12 09:24:33
Probably a good strategy is to get a cheap desktop pc to do your compute on, and then a cheap laptop for when desktop is inconvenient. Can share drives over the network and keep a shell (or even remote X) to the desktop open so the laptop doesn't have to do much compute.
2021-06-12 09:44:28
I have a phone (pretty modern ARM), an OpenPandora (ancient armv7), and an rpi4 which I somehow cannot get to work
2021-06-12 09:49:56
Don't remember, whatever they suggest. Haven't spent much effort on it yet, could be I just used a bad sd card or whatever
veluca
2021-06-12 10:08:15
I have a working rp4
2021-06-12 10:08:39
and also a working rpi3 but it's hooked up to work as a wireless speaker 😛
2021-06-12 10:10:03
with JXL I remember it wasn't so good xD
2021-06-12 10:10:45
raspbian
2021-06-12 10:11:01
and, well, it works, but...
2021-06-12 10:11:56
IIRC it's like 3x slower than my phone
2021-06-12 10:12:29
even with neon, the rpi just doesn't have that much computing power
_wb_
2021-06-12 10:44:45
Rpi aims at cheap and weak, good for hobby projects but not exactly a number cruncher
veluca
2021-06-12 10:48:39
that depends on who you are 😆
2021-06-12 10:48:57
it's probably enough for *my* personal website for at least a few more years
_wb_
2021-06-12 11:10:45
Serving a static website does not require a lot of compute, I'd think, it's probably more important what kind of uplink and ping you have...
2021-06-12 11:15:48
When your entire website fits in RAM (as is the case for a typical personal website), and it's just static stuff, then likely only the network is a bottleneck
veluca
2021-06-12 11:20:43
well, the CPU *can* become a bottleneck, but it takes... a while
Toggleton
2021-06-13 11:35:04
this website does use an RPi-like device (I haven't found which board it is using), but you can optimize your website. The website got quite some traffic while running on PV + battery <https://solar.lowtechmagazine.com/about.html>
diskorduser
Toggleton this website does use a RPI like device (have not found what board it is using) But you can optimize your website. The website got quite some traffic while running on a PV+Battery <https://solar.lowtechmagazine.com/about.html>
2021-06-13 11:42:54
Interesting
eclipseo
2021-06-13 11:51:32
BTW, I've packaged jpegxl for Fedora F33-F35. It will be used as a dependency by AOM for the new butteraugli tuning
Deleted User
Toggleton this website does use a RPI like device (have not found what board it is using) But you can optimize your website. The website got quite some traffic while running on a PV+Battery <https://solar.lowtechmagazine.com/about.html>
2021-06-13 01:04:18
I'm curious if using JPEG XL instead of dithered PNGs on that website would help in terms of both filesize and quality...
2021-06-14 12:39:07
How are Chrome politics looking today?
eddie.zato
2021-06-15 08:30:40
I managed to build qimgv for win64 with jxl/avif support. Good viewer. If anyone is interested, you can get it here https://github.com/eddiezato/eddiezato.github.io/releases/download/bin/qimgv_win64_v092a2g9a5f8d7.7z
_wb_
2021-06-15 11:23:24
Here's a data point on the kinds of fidelity targets our customers have. <@!532010383041363969> We have a `q_auto` feature, which takes an optional fidelity target (`best`, `good`, `eco`, `low`); the default is `good` if save-data is false and `eco` if save-data is true, but the default can be overridden: e.g. someone who cares a lot about fidelity will set their images to `best`, while someone who cares a lot about bandwidth will set it to `eco` (so also if save-data is false) or to `low`. For most codecs this involves search and heuristics; for jxl I will just do a simple mapping (at the moment it's `best` -> d1 (q90), `good` -> d1.9 (q80), `eco` -> d2.8 (q70), `low` -> d3.7 (q60)). To get an idea of what kinds of targets we end up producing: looking at a bit over 57 million image encodes that were done by our biggest customers on June 1st, we had 56% of them at `good`, 19% at `eco`, 17% at `best`, and 8% at `low`.
2021-06-15 11:26:40
I consider jxl to be clearly better than avif for `good` and `best`, avif better for `low`, and I'm not sure about `eco`.
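A minimal sketch of the simple mapping _wb_ describes (an editor's illustration, not actual Cloudinary code; the helper names are hypothetical, the d/q values are the ones quoted in the chat):

```python
from typing import Optional

# Fidelity tier -> JPEG XL Butteraugli distance, as quoted above.
FIDELITY_TO_DISTANCE = {
    "best": 1.0,   # roughly libjpeg q90
    "good": 1.9,   # roughly q80
    "eco":  2.8,   # roughly q70
    "low":  3.7,   # roughly q60
}

def pick_fidelity(requested: Optional[str], save_data: bool) -> str:
    """An explicit target wins; otherwise default to `good`,
    or `eco` when the Save-Data hint is set."""
    if requested is not None:
        return requested
    return "eco" if save_data else "good"

def jxl_distance(requested: Optional[str] = None, save_data: bool = False) -> float:
    return FIDELITY_TO_DISTANCE[pick_fidelity(requested, save_data)]
```

So `jxl_distance()` gives d1.9, `jxl_distance(save_data=True)` gives d2.8, and an explicit `best` overrides Save-Data.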
fab
eddie.zato I managed to build qimgv for win64 with jxl/avif support. Good viewer. If anyone is interested, you can get it here https://github.com/eddiezato/eddiezato.github.io/releases/download/bin/qimgv_win64_v092a2g9a5f8d7.7z
2021-06-15 11:33:44
avif doesn't work
2021-06-15 11:33:50
error 11 or something
Deleted User
2021-06-15 12:52:17
Do you remember my guide for permanently enabling JPEG XL in MS Edge 91? https://discord.com/channels/794206087879852103/803574970180829194/848009313196703745
2021-06-15 12:56:43
Now I've just made it into an easily shareable gist! 😃 https://gist.github.com/ziemek99/6295222469218427bb160cf849cdaa0c
veluca
2021-06-15 04:02:28
I *believe* I can now accept PRs to https://github.com/mirillis/jpegxl-wic - so if some of you feels inclined to (a) update the version of libjxl or (b) send something like an AppVeyor configuration...
_wb_
2021-06-15 04:58:14
How did you get permissions for that?
Crixis
_wb_ How did you get permissions for that?
2021-06-15 05:09:28
hacking
Pieter
Crixis hacking
2021-06-15 05:19:06
veluca
_wb_ How did you get permissions for that?
2021-06-15 05:21:42
it's a secret! 😛
2021-06-15 05:21:46
(Jan asked xD)
_wb_
2021-06-16 02:37:50
https://chromium.googlesource.com/chromium/src/+/b951d2523bfb922aaa9298f765ead69eddf0c2e8
2021-06-16 02:38:03
jxl landing behind a flag in android chrome!
Scope
2021-06-16 02:47:21
> Attempts to further reduce build size didn't produce significant results: for example, removing support for multi-threaded decoding (not used in chrome) -> only -0.5 KiB. 🤔
2021-06-16 02:52:23
So there are no plans to support multi-threading in browsers, even for decoding a large single image, since it's harder to predict and properly allocate threads for that?
_wb_
2021-06-16 02:53:18
Chrome doesn't do that atm with other codecs, afaik. <@795684063032901642> do you know more about this?
2021-06-16 02:59:54
I'm a bit surprised that getting rid of all multithreading is only 0.5kb decoder size reduction - I would have expected a bit more, but that would perhaps require bigger code changes (replacing currently-parallelized things with hardcoded single-threaded variants, simplifying some of the stuff that is complicated only to allow parallelization)
BlueSwordM
2021-06-16 03:01:45
Also, why would multi-threading not be used in Chrome for image codecs even though video decoders can use all of the threads if necessary?
_wb_
2021-06-16 03:03:55
One argument could be that typically you have only one video playing at a time, while there can be many images on a page
BlueSwordM
_wb_ One argument could be that typically you have only one video playing at a time, while there can be many images on a page
2021-06-16 03:08:06
I see. Now the question is: can page loading multi-threading (chunked loading, or one thread per element) still be done?
2021-06-16 03:08:19
If one thread per image is done, then multi-threading per image isn't as needed.
fab
2021-06-16 03:11:08
image codecs are made for bigger size
_wb_
2021-06-16 03:11:22
I don't know much about the internals of browsers but I would expect multiple threads to be used when there are multiple images
fab
2021-06-16 03:11:28
so video codecs have not usefulness
raysar
_wb_ jxl landing behind a flag in android chrome!
2021-06-16 03:49:33
Maybe it's for tomorrow in chrome canary? <:Hypers:808826266060193874>
_wb_
2021-06-16 03:51:12
likely - or else the day after, I guess
veluca
_wb_ I'm a bit surprised that getting rid of all multithreading is only 0.5kb decoder size reduction - I would have expected a bit more, but that would perhaps require bigger code changes (replacing currently-parallelized things with hardcoded single-threaded variants, simplifying some of the stuff that is complicated only to allow parallelization)
2021-06-16 03:56:28
that's definitely true
_wb_ Chrome doesn't do that atm with other codecs, afaik. <@795684063032901642> do you know more about this?
2021-06-16 03:56:43
it's not a hard limitation, just a question of implementing it
_wb_
2021-06-16 04:26:52
Does it use one thread per image or one thread total for all image decoding?
lonjil
2021-06-17 03:02:30
I just noticed that there is a JXL flag available in Firefox beta, but apparently it doesn't do anything...
monad
2021-06-17 05:51:26
Good start.
Jyrki Alakuijala
I'm curious if using JPEG XL instead of dithered PNGs on that website would help in terms of both filesize and quality...
2021-06-17 09:38:51
In my experiments with simple photographs JPEG XL palette mode looks 7x better than 256 color dithered PNG, and is 50% of the filesize. With simple graphics you get less difference, but you get more guarantees that it doesn't look terrible.
2021-06-17 09:42:28
I wonder if a tool that did Mozjpeg, AVIF and JPEG XL at normal bit rates (quality 85-95) and show with flipping the differences would create more awareness of the problems in AVIF
2021-06-17 09:43:12
I observe Mozjpeg being pretty competitive against AVIF -- and Guetzli even more so around quality 90
_wb_
2021-06-17 10:04:58
normal bit rates on the web are not 85-95, more like 65-85
monad
2021-06-17 02:19:55
65 bpp isn't too bad
Jyrki Alakuijala
_wb_ normal bit rates on the web are not 85-95, more like 65-85
2021-06-17 02:33:16
do you have a link?
2021-06-17 02:34:03
last time I sampled it, I found two hills in the histogram, one at 75-80 and another at 85-95; the median looked to be something like 80-85
2021-06-17 02:34:28
(I used imagemagick's identify to measure quality)
2021-06-17 02:35:31
lowest site was running at quality 60 (now when I looked that site again, they had upgraded to 72)
_wb_
2021-06-17 02:37:50
imagemagick identify is not necessarily very accurate for things that aren't encoded with default quant tables from libjpeg-turbo or IM itself
Jyrki Alakuijala do you have a link?
2021-06-17 02:43:21
Roughly speaking, our q_auto low corresponds to a median of q64, eco to a median of q70, good to a median of q79, best to a median of q91 (in libjpeg-turbo terms). By volume, we do ~7% low, ~18% eco, ~61% good, ~13% best.
2021-06-17 02:45:29
(it might end up doing anything between q40 and q95, depending on the image itself, but those are roughly the medians for each fidelity point)
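Editor's arithmetic on the numbers above: weighting the per-tier medians by their volume shares gives a traffic-wide average of roughly q78.

```python
# Volume-weighted mean of the per-tier median libjpeg qualities quoted above.
medians = {"low": 64, "eco": 70, "good": 79, "best": 91}
shares  = {"low": 0.07, "eco": 0.18, "good": 0.61, "best": 0.13}  # sums to ~0.99

weighted = sum(medians[t] * shares[t] for t in medians) / sum(shares.values())
print(round(weighted, 1))  # -> 77.9
```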
Jyrki Alakuijala
2021-06-17 03:02:10
you know what you are doing and can drive the quality down to a sufficient but low level
2021-06-17 03:02:51
not everyone knows about it, and for example images on MIT's website were compressed at quality 99 last time I looked
_wb_
2021-06-17 03:05:37
ah yes that does happen, people also upload stuff straight from their camera to their homepage as a way too large q97 sequential jpeg
2021-06-17 03:07:33
but that sort of thing shouldn't be typical of most browsing - bigger websites and social media shouldn't be doing that (it still does happen, it's basically our sales team's job to find them 😅 )
2021-06-17 03:12:27
anyway, while the _current_ (well-optimized) web is serving images at roughly the fidelity of libjpeg q65-q85, I hope that with jxl we can bring that fidelity to the libjpeg q80-q95 range while reducing the bandwidth to that of libjpeg q40-q70.
BlueSwordM
2021-06-17 03:13:59
Nah, let's keep the data rates of libjpeg q65-q85 and have Q90-Q95+ fidelity with JXL <:Stonks:806137886726553651>
_wb_
2021-06-17 03:20:15
I'm in favor of that, but the people who pay the bandwidth bills probably don't agree 🙂
paperboyo
2021-06-17 03:31:42
My opinion is known, so I will repeat it 😜: that would be a waste for (almost) no gain. And apart from [progressive font](https://www.w3.org/TR/PFE-evaluation/) loading, browsers need to be able to load shape data progressively and dependent on font-size, too.
_wb_
2021-06-17 03:42:31
I think it would be nice to have more auto-adjusting of fidelity based on network conditions, user preference, and target display/medium. That way we could have very high-fidelity images when it makes sense (say when you're browsing at home on a fast network with a good monitor, or when you want to print a web page), and lower-fidelity ones when that makes more sense (when you're on a slower network, on a device with a small high-density screen, when you're paying for mobile data, etc).
2021-06-17 03:44:01
letting it be not so much a web author decision to serve everyone the same fidelity regardless of what they actually want/need
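A sketch of what such auto-adjusting could look like on the server side (editor's illustration; `Save-Data` and `ECT` are real Client Hints request headers, but the tier thresholds here are invented):

```python
from typing import Mapping

def choose_fidelity(headers: Mapping[str, str]) -> str:
    """Pick a q_auto-style fidelity tier from network-related client hints."""
    if headers.get("Save-Data", "").lower() == "on":
        return "eco"
    ect = headers.get("ECT", "4g").lower()  # effective connection type hint
    if ect in ("slow-2g", "2g"):
        return "low"
    if ect == "3g":
        return "eco"
    return "good"  # fast network: serve the default tier
```

A client-side variant could additionally consult user preference and display density, along the lines _wb_ suggests.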
monad
_wb_ but that sort of thing shouldn't be typical of most browsing - bigger websites and social media shouldn't be doing that (it still does happen, it's basically our sales team's job to find them 😅 )
2021-06-17 03:44:37
The Steam game distribution platform doesn't seem to process user data in announcements, guides, artwork. Often devs upload hi-res PNGs and users have to download those to read an announcement even when the display size is much smaller.
_wb_
2021-06-17 03:46:10
browsers could even do smart things like when you're just scrolling quickly through a timeline, it gets the low fidelity versions, but when you stop at a certain image, maybe start pinch-to-zoom on it, then probably you want the high fidelity version
monad The Steam game distribution platform doesn't seem to process user data in announcements, guides, artwork. Often devs upload hi-res PNGs and users have to download those to read an announcement even when the display size is much smaller.
2021-06-17 03:48:48
sounds like they should be updating to Cloudinary (or whatever other similar service). Just passing user-generated content along is the easiest thing to do (also in terms of security risks for their own infrastructure), but it's not the best thing to do...
monad
2021-06-17 03:56:05
Yes, that's why I mention it. Although, I realize bandwidth is probably less of a concern for them overall given that users are downloading GBs of game data anyway.
fab
2021-06-17 03:56:11
when i need new build of jxl
_wb_
2021-06-17 03:57:43
bandwidth is always a concern - even netflix is concerned about the bandwidth of their still images
monad
2021-06-17 03:59:18
Surely Instagram is more concerned about images than Steam. But yes, even as a user I don't like downloading a 15 MB image just to read a paragraph about a game update.
_wb_
2021-06-17 03:59:34
even if you don't care about the bandwidth cost itself, just the slower user experience is a cost too
2021-06-17 04:00:00
(it means less transactions/sales/ad income, simple as that)
monad
2021-06-17 04:01:21
They do optimize content on store pages at least, just not in community posts.
Jyrki Alakuijala
_wb_ bandwidth is always a concern - even netflix is concerned about the bandwidth of their still images
2021-06-17 04:46:48
in bandwidth, the user is still more expensive than the computers (for the waiting): a human's time is roughly 1000x the cost of a computer and a cable
2021-06-17 04:49:12
if someone computes otherwise, they will be repelling users
2021-06-17 04:50:37
the services that grow fastest, have most users to click on ads or products, will survive the 'evolution' of web services -- and not necessarily those who save the most money on their cpu or bandwidth bills
Deleted User
2021-06-17 10:35:20
I think Chrome (93.0.4542.2) has some kind of memory leak/issue when playing animated JXL. If someone could confirm:
2021-06-17 10:35:28
2021-06-17 10:35:32
Chrome reports a RAM usage of over 800 MB for a tab playing this JXL and it takes several seconds until it runs without pausing.
2021-06-17 10:35:44
APNG and WebP both run fine with each at a mere 28 MB usage:
2021-06-17 10:35:57
2021-06-17 10:35:59
2021-06-17 10:37:56
<https://cdn.discordapp.com/attachments/803574970180829194/855214208635502622/animatin_test.png> <https://cdn.discordapp.com/attachments/803574970180829194/855214220446138388/animatin_test.webp>
veluca
2021-06-17 10:47:43
ah, we know about the problem with memory, and we're working on fixing it
2021-06-17 10:47:54
as for the speed, that's a different issue
2021-06-17 10:48:11
one which I'm not 100% sure about
Deleted User
2021-06-17 10:58:45
btw, VP9 (aka. a video codec) uses around 90 MB and the file size is also larger than even WebP. Plus the image looks ugly in Chrome. <#805007255061790730>
improver
2021-06-17 11:20:14
it's mostly still image with moving parts so that makes sense
_wb_
_wb_ Roughly speaking, our q_auto low corresponds to a median of q64, eco to a median of q70, good to a median of q79, best to a median of q91 (in libjpeg-turbo terms). By volume, we do ~7% low, ~18% eco, ~61% good, ~13% best.
2021-06-18 07:44:47
Looking at the top 20 cloudinary customers (aggregated over just one day so not fully representative and certainly not taking into account seasonal effects), I see 1 that uses q_auto:low for most images, 11 that use q_auto:eco for most images, 3 that use q_auto:good and 5 that use q_auto:best.