|
Scope
|
|
_wb_
One option is to use a very small preview frame (say 8x8 pixels), which webdev tools can then use to make a blurhash, gradient or solid color to be inlined in the html as a placeholder (if the preview frame is missing, they can also do it by decoding the whole image and downscaling it; the embedded preview would just be an optimization for those tools, as well as an embedded placeholder in cases where html cannot be produced)
|
|
2021-09-16 12:53:57
|
|
|
2021-09-16 01:57:19
|
Hmm, a losslessly encoded 8x8-pixel JXL, including the header, fits into a 200-byte budget
But for example, in the FB article about previews they use a 42×42-pixel JPEG with no headers - I wonder if something similar is possible with lossy JXL?
https://engineering.fb.com/2015/08/06/android/the-technology-behind-preview-photos/
|
|
2021-09-16 01:57:23
|
And a 42x42-pixel image, even with noticeable losses and limited colors, seems to me better than the highest-quality 8x8
|
|
|
_wb_
|
2021-09-16 02:41:01
|
probably a small preview frame that's a lossy 64x64 vardct image fits within a small budget
|
|
|
Scope
|
2021-09-16 03:05:55
|
Hm, after some experiments, even for 16x16-pixel images with `--override_bitdepth` it is not easy to stay within the 200-byte limit including the header without noticeable color distortion, both for modular and VarDCT, so maybe some other tricks are needed 🤔
|
|
|
_wb_
|
2021-09-16 05:02:39
|
yeah, we don't really have an encoder option to aim at such small budgets - what it should do is just encode it as a single DCT64x64 block where most of the coeffs are zero, but what it will do is segment the image into smaller blocks and waste lots of bits on signaling stuff that doesn't really help for such tiny images
|
|
2021-09-16 05:02:48
|
|
|
2021-09-16 05:02:58
|
|
|
2021-09-16 05:03:21
|
`-m -Q -25 -c 1 -e 9` does something
|
|
2021-09-16 05:03:48
|
but it's not particularly great
|
|
2021-09-16 05:04:34
|
again, we're probably wasting bits on things that don't pay off for such tiny images
|
|
|
Scope
|
2021-09-17 01:53:33
|
However, I am coming more to the conclusion that for a minimal LQIP, if the format does not have some way to do it more effectively than other tools, and if the LQIP is not a part of the image that is guaranteed to be there at any encoding settings, then it is better to create it with external tools
Otherwise it adds more complexity and problems, because it requires loading all the images on the page only partially, and there are added disadvantages from the limit on parallel connections
In general, a separate LQIP will not be worse than an embedded one: it does not take more space (because the embedded one is not part of the image data anyway), it can be more flexible, if it is used it is guaranteed for all images and for all formats, and it can also be preloaded more easily and efficiently
|
|
2021-09-17 02:33:24
|
And maybe in JPEG XL it makes sense to start only with an MQIP (which is guaranteed at least for any VarDCT image) plus incremental and progressive loading
Also, as far as I've looked through different opinions, a lot of people are annoyed by very blurry images (they create additional work for the brain to recognize what's shown), but incremental loading is usually perceived well (plus sites are not always loaded instantly either), so I think that using a dominant color or something like a light blurhash (which is more like a random color pattern) is even more universal than a blurry image (where it is possible to recognize something)
|
|
|
_wb_
|
2021-09-17 02:41:41
|
placeholders kind of need to be inlined in the html anyway if you want to avoid the delay from a request roundtrip, there is not really any way to avoid that
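(A minimal sketch of what such inlining could look like, with hypothetical file names and a truncated data URL; the placeholder is a dominant color plus an optional tiny inlined preview, so no extra request is needed before the real image arrives:)
```html
<!-- Sketch: placeholder inlined in the HTML itself.
     The dominant color is a plain CSS background, and an optional tiny
     preview (a blurhash/gradient rendered to a few pixels, or an 8x8 frame)
     is inlined as a base64 data URL. File names and the data URL are
     hypothetical/truncated. -->
<img src="photo-full.jxl"
     width="4032" height="3024"
     loading="lazy"
     alt="Photo"
     style="width: 100%; height: auto;
            background-color: #6b7a52;
            background-image: url('data:image/png;base64,...');
            background-size: cover;" />
```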
|
|
2021-09-17 02:43:26
|
(though for below-the-fold images, browsers _could_ fetch some initial kilobytes of every image, and fetch the rest of a lazy-loaded image only when scrolling gets near)
|
|
|
Scope
|
2021-09-17 03:18:42
|
Also
https://github.com/WICG/largest-contentful-paint/issues/71
|
|
2021-09-17 03:29:34
|
> **yoavweiss**
> When trying to prioritize that work, a blocker question that came up was "do we have evidence that progressive image loading creates a better experience?".
> Unfortunately, AFAICT, the web performance community has many opinions on this question, but very little data. I'm currently working with partners to try and get such data, that will hopefully prove (or disprove) the benefits of progressive loading.
> Once we have that, we'd be able to prioritize this work.
|
|
|
_wb_
|
2021-09-17 03:32:32
|
There are many opinions, but I think nobody has the opinion that just not showing anything until you suddenly show everything is good
|
|
2021-09-17 03:44:32
|
I think sometimes "we need more data on the benefits of progressive loading" is just an excuse to not have to make the effort of making progressive loading actually work
|
|
2021-09-17 03:46:23
|
It's not that easy to make progressive work properly, even if the bitstream itself is progressive - I just spent another day on fixing stuff related to progressive jxl
|
|
|
Scope
|
2021-09-17 03:47:30
|
In my opinion, and as far as I understand various other opinions, something universal that would improve the user experience in most cases for a progressive JXL would be a sequence like this:
1. A solid dominant-color placeholder (or a color pattern, but not a blurred recognizable image)
2. Center-first/guided group loading (incremental loading doesn't hurt perception the way a real progressive refinement can, and it can be updated as often as needed, as long as it doesn't create noticeable additional resource consumption)
3. Almost-full-quality image, ~60%
4. Full image, 100%
But LQIP as a small recognizable image, and even MQIP, can be debatable and is not better for perception for everyone; also, the benefits of progressive loading can increase with very slow connections or huge images
|
|
|
_wb_
|
2021-09-17 03:51:09
|
I agree, LQIP and MQIP are to some extent debatable and controversial, and probably a matter of personal taste or depending on how your particular brain works. Solid color placeholder, incremental/center-first, HQIP is probably always fine though, like you say.
|
|
2021-09-17 03:53:50
|
And of course with fast enough connections and/or small enough images, progressive is less needed (and the first priority should be to properly size images and not send a 24 MP image to a 720 pixel wide phone)
|
|
|
Scope
|
2021-09-17 04:23:58
|
And also, yes, even for normal JPEGs, progressive encoding is more efficient than non-progressive, and MozJPEG and various optimizers do it by default; how it is used depends mostly on the decoder/render side - it's not that the images will be larger or worse in quality
|
|
2021-09-17 05:24:42
|
<https://github.com/libjxl/libjxl/issues/586>
Hmm, so it does affect performance, sadly 🤔
|
|
|
_wb_
|
2021-09-17 05:31:07
|
Shouldn't make much of a difference
|
|
2021-09-17 05:32:45
|
But yes, the default top to bottom order is free (one bit to signal "default") while a custom order means signaling a permutation (but we have a compressed encoding for that so it shouldn't be that bad)
|
|
|
Scope
|
2021-09-17 05:43:42
|
I also mean the decoding performance. However, is it possible to make a different group order that would be somewhere between these two and better than the current one for perception, but would have almost no effect on decoding performance?
|
|
|
|
veluca
|
2021-09-17 05:44:35
|
it also increases memory usage a bit (or it will once we optimize the code more xD) and it might increase the number of filtered pixels too
|
|
|
_wb_
|
2021-09-17 05:44:50
|
Might be worth benchmarking how big a difference we are talking here
|
|
|
|
veluca
|
2021-09-17 05:44:53
|
but likely both those effects are in the ~5% range
|
|
2021-09-17 05:45:06
|
we need to have the code first 😛
|
|
|
_wb_
|
2021-09-17 05:45:44
|
Ah, it's current code vs potential future code that could optimize for scanline order
|
|
2021-09-17 05:46:16
|
Is it about how many borders there are between already-decoded and not-yet-decoded groups?
|
|
2021-09-17 05:47:37
|
Because then maybe the optimization could be done more generally and also apply to orders that are not scattering stuff all over the place, but growing some shape
|
|
2021-09-17 05:50:07
|
With center-first, all boundary borders are needed, but it's only in the first group that all 4 are needed, and only in a few of the rest that 3 are needed; often it will be just 2, like in scanline order (just not always bottom and right like in scanline order)
|
|
|
Scope
|
2021-09-17 06:33:50
|
It could also be something closer to the center, but not necessarily starting from the exact center point, since the more useful information is more often in that area, and only then decode the borders of the image, for example:
`I do not know how it could be optimal for decoding and algorithmically, but just by intuition I decided to decode the central area shifted towards the top, and the borders of the image at the bottom last (to compensate for missing the important area, and also because in photos the area near the borders is more often important at the bottom)`
|
|
2021-09-17 07:01:24
|
or
|
|
|
_wb_
|
2021-09-17 07:21:40
|
A lot of orders could make sense, depending on the image content. I can also imagine scenarios where you use CSS magic to get art-direction responsive images from the same file, just truncated for the mobile version, which is a smaller crop than the desktop version that has more room for background stuff and a more landscape aspect ratio
|
|
2021-09-17 07:22:36
|
Building the authoring tools to do such funky tricks is another thing though
|
|
|
Scope
|
2021-09-17 07:28:51
|
Yep, but I mean the default order without content knowledge, which would more often show more useful information earlier than just a top-down scanline
|
|
2021-09-17 07:31:32
|
Because many images and photos are created on similar principles and it is not often that an important area will be for example in the upper left corner
|
|
2021-09-17 07:38:16
|
But, obviously, it is better to create a custom order for each image, especially since the encoder has full access to the image, and in the future it would be possible to use neural networks for this purpose, although they are also not perfect (the good thing is that a miss will not be so critical for early loading)
https://www.theverge.com/2021/3/10/22323298/twitter-image-cropping-preview-problematic-racial-bias-testing-fix
|
|
|
_wb_
And of course with fast enough connections and/or small enough images, progressive is less needed (and the first priority should be to properly size images and not send a 24 MP image to a 720 pixel wide phone)
|
|
2021-09-18 11:46:32
|
Also, is it possible to use responsive image features with progressive JXL in browsers right now, without using individual resized images, such as when a full image is very large, but the browser shows it downsized?
|
|
2021-09-18 11:47:55
|
Progressive-responsive features
|
|
|
|
Deleted User
|
|
_wb_
placeholders kind of need to be inlined in the html anyway if you want to avoid the delay from a request roundtrip, there is not really any way to avoid that
|
|
2021-09-19 03:31:58
|
What do you think about this? This HTML is backwards-compatible; it won't break unsupported browsers. The only drawback is the need to specify the image's intrinsic dimensions, but that's necessary anyway if the preview has smaller dimensions than the original (to avoid content shift).
To reveal the preview, remove `src="..."` and change `preview` to `src`.
```html
<img src="https://i.imgur.com/6GUC1TB.jpg" height="3024" width="4032" preview="data:image/avif;base64,AAAAIGZ0eXBhdmlmAAAAAGF2aWZtaWYxbWlhZk1BMUIAAADybWV0YQAAAAAAAAAoaGRscgAAAAAAAAAAcGljdAAAAAAAAAAAAAAAAGxpYmF2aWYAAAAADnBpdG0AAAAAAAEAAAAeaWxvYwAAAABEAAABAAEAAAABAAABGgAAASYAAAAoaWluZgAAAAAAAQAAABppbmZlAgAAAAABAABhdjAxQ29sb3IAAAAAamlwcnAAAABLaXBjbwAAABRpc3BlAAAAAAAAAMkAAACXAAAAEHBpeGkAAAAAAwgICAAAAAxhdjFDgQAMAAAAABNjb2xybmNseAACAAIABoAAAAAXaXBtYQAAAAAAAAABAAEEAQKDBAAAAS5tZGF0EgAKChgd8iW7IEBA0YAylQIT/QUEZNMfQLO/wKOb7gB+Gn42elBobHcoef54gAHQ0dGx8oKzYqQyOUJP8kgAIA0gGyIoLjYqQyOUJP8l9/eoeGfnyFhIFuaX5shH9qiOGtfoaAfvBufHd9n3F2ZH+Jj4aBgHx6snminYaBzbBqbXKCgH+DjWpwi5iBeH13f6+VoZt5i723gIgMCAQGBAIO/e1jX+FPSLVSbZv6b2KjASwHw7Tg5jVPg3f0+h3WITHUlbcR7TGENJbjt2URMRBEVTffn02jVspAehDRtUoOqp3olr9b055H6CzavVPY3fgeMWHbm0OfbAM6ODf5L5xTuc0OZ0l8tvrpiNS8cs7EOopciiUZE0FqLg03VDaHIRMW2NZqpg" />
```
|
|
|
_wb_
|
2021-09-19 03:48:36
|
Isn't the usual approach to put the preview in the src and then swap in javascript?
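(Roughly like this, as a sketch with hypothetical names: the tiny preview goes in `src`, the real image in a `data-` attribute, and a few lines of script swap them once loaded:)
```html
<!-- Sketch of the usual LQIP swap: ship the tiny preview in src and
     swap in the full image from a data- attribute via JavaScript.
     URLs, the class name and the truncated data URL are hypothetical. -->
<img class="lqip" src="data:image/jpeg;base64,..."
     data-full="https://example.com/photo-full.jxl"
     width="4032" height="3024" alt="Photo" />
<script>
  document.querySelectorAll('img.lqip').forEach((img) => {
    const full = new Image();
    // Preload the full image, then swap it in so the preview stays
    // visible until the replacement can be shown without a flash.
    full.onload = () => { img.src = img.dataset.full; };
    full.src = img.dataset.full;
  });
</script>
```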
|
|
|
|
Deleted User
|
|
_wb_
Isn't the usual approach to put the preview in the src and then swap in javascript?
|
|
2021-09-19 03:55:07
|
That wouldn't be backwards compatible with JS-disabled browsers. That's a big no-no for me.
|
|
2021-09-19 03:56:17
|
Adding a new HTML attribute is definitely a backwards-compatible way.
|
|
2021-09-19 03:57:36
|
I've been thinking about other names for this attribute, like `placeholder` (inspired by `<input>`, but reusing attributes may lead to compatibility issues, rlly idk) or `lqip`.
|
|
2021-09-19 03:58:09
|
Another thing would be the maximum allowed length for the value. I think 1023 would suffice...
|
|
2021-09-19 04:01:28
|
That's enough data to squeeze an LQIP into, and it's also a psychological barrier, because it's just below 1 KiB.
|
|
2021-09-19 04:01:57
|
You'd be able to say: "If it's 1 KiB or over, it's no longer LQ, it's just IP."
|
|
|
_wb_
|
2021-09-19 04:03:11
|
Making it a new attribute certainly makes sense to me, now just convince the browsers/w3c, I guess?
|
|
|
|
Deleted User
|
2021-09-19 04:04:17
|
Where and how can I make a suggestion?
|
|
2021-09-19 04:04:55
|
And what do you think about this idea? Do you think it deserves a chance?
|
|
2021-09-19 04:07:24
|
<@!228116142185512960>, WDYT? You're working in web dev, does `preview` IMG attribute have any potential?
|
|
|
Scope
|
2021-09-19 04:09:53
|
I don't think this is a problem; as far as I know, even now there are enough ways to show previews that are compatible with older browsers and don't use JS. Besides, as discussed earlier, very blurry images are unpleasant for many people and create extra work for the brain to recognise them, so a solid colour or something like coloured background patterns may be more universal
|
|
|
surma
|
2021-09-19 04:10:59
|
yeah, data URLs are a workaround imo. Also it’s easily used in the wrong way. If people put an actual URL there, it is yet another request
|
|
2021-09-19 04:11:13
|
I think overall the current explorations of embedding the preview into the image itself are the best way forward
|
|
2021-09-19 04:11:25
|
But Jake might actually have more opinions on this. He's spent a lot more time thinking about this than I have
|
|
|
|
Deleted User
|
|
surma
yeah, data URLs are a workaround imo. Also it’s easily used in the wrong way. If people put an actual URL there, it is yet another request
|
|
2021-09-19 04:13:08
|
That might actually be baked into the spec, too, just like the maximum length I mentioned above. If there's any URL other than a data URL, the browser should reject it.
|
|
|
surma
I think overall the current explorations of embedding the preview into the image itself is the best way forward
|
|
2021-09-19 04:14:05
|
Yeah, and that's exactly why we're all here to follow <:JXL:805850130203934781> development 😜
|
|
|
surma
|
2021-09-19 04:14:09
|
That seems very confusing, tbqh
|
|
|
|
Deleted User
|
|
surma
That seems very confusing, tbqh
|
|
2021-09-19 04:17:39
|
What exactly?
If there's e.g.:
```html
<img src="https://i.imgur.com/6GUC1TB.jpg" height="3024" width="4032" preview="https://i.imgur.com/6GUC1TB.preview.jpg" />
```
the browser should discard the `preview` value completely (since it's not a data URL) and fetch just the `src`.
|
|
|
surma
|
2021-09-19 04:23:10
|
I mean, you can start a discussion on the HTML spec issue tracker if you like
|
|
2021-09-19 04:23:16
|
You don’t need to convince me, but them 😄
|
|
2021-09-19 04:23:39
|
But I think an attribute that takes an URL but is a noop for all URLs but base64, where base64 is a horrible encoding seems unlikely to succeed to me
|
|
2021-09-19 04:23:45
|
(but again, that’s _my_ opinion)
|
|
|
|
Deleted User
|
|
surma
But I think an attribute that takes an URL but is a noop for all URLs but base64, where base64 is a horrible encoding seems unlikely to succeed to me
|
|
2021-09-19 04:25:00
|
> where base64 is a horrible encoding
Do you have any other *universal* alternatives? 😄
|
|
|
surma
But I think an attribute that takes an URL but is a noop for all URLs but base64, where base64 is a horrible encoding seems unlikely to succeed to me
|
|
2021-09-19 04:25:26
|
> takes an URL but is a noop for all URLs but base64
That's on purpose, to avoid any additional requests.
|
|
|
surma
|
2021-09-19 04:25:51
|
I understand your intention, but I don’t think it’s a good design, as it has no precedent in HTML
|
|
2021-09-19 04:26:10
|
I think the problem space is real and important
|
|
2021-09-19 04:26:18
|
but I don’t think this is the right solution
|
|
|
Scope
|
2021-09-19 04:26:24
|
Also, speaking of responsiveness: in the future I would like to move away as much as possible from using multiple copies of the same image at different resolutions. Speaking more concretely about progressive JXL (and possibly JPEG in some way), I would like some possibility to use one image for multiple resolutions, for example as it is now:
```html
<img
  src="Image-lowres.jpg"
  srcset="
    Image-2.jpg 2x,
    Image-3.jpg 3x,
    Image-4.jpg 4x,
    Image-8.jpg 8x
  "
>
```
but to make it work with one image if it is progressive, so that only the necessary amount of data is loaded and separate copies at different resolutions are not needed
|
|
|
|
Deleted User
|
2021-09-19 04:29:34
|
Another thing is detecting whether the <:JXL:805850130203934781> file is progressive at all. If you're using VarDCT, by default you only reach the 1:8 stage of progression before the full image. Still better than fully sequential JPEG, but it should be detected anyway.
|
|
2021-09-19 04:30:17
|
The browser somehow has to know how many bytes it has to load to reach particular stages of progression, and which stages are available at all.
|
|
|
_wb_
|
2021-09-19 04:51:24
|
Base64 is not too bad if the html gets gzipped or brotli'd anyway
|
|
|
Deleted User
The browser somehow has to know how many bytes it has to load to reach particular stages of progression, and which stages are available at all.
|
|
2021-09-19 04:53:52
|
Another approach is to use server-side HTTP/2 priorities; <@826537092669767691> does it like that to get all the headers out first (for layout), then all the DC (for a good preview), then the remaining scans
|
|
|
|
Deleted User
|
|
_wb_
Another approach is to use server-side http2 priorities, <@826537092669767691> does it like that to get all the headers out first (for layout), then all the DC (for a good preview), then the remaining scans
|
|
2021-09-19 04:54:32
|
Oh, that's nice! How does it work exactly?
|
|
|
_wb_
|
2021-09-19 04:55:06
|
Well you start by being a CDN like Cloudflare that can implement that logic on the server side
|
|
2021-09-19 04:55:39
|
For John Doe average webdev it's probably a bit tricky, especially if you don't have much control over hosting
|
|
2021-09-19 04:56:17
|
Client-side range requests are probably a more robust approach, since it 'only' requires updating the browsers
|
|
2021-09-19 04:58:29
|
But that requires either html annotations about ranges, or a file format that contains a table of contents in its header, so the ranges can be derived from the first few hundred bytes
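(A purely hypothetical sketch of the "html annotations about ranges" variant; the attribute name, byte offsets and fetch logic are made up here, and it assumes the browser can render a truncated progressive file:)
```html
<!-- Hypothetical markup: byte ranges for the progressive passes are
     annotated in the HTML (they would have to come from the encoder). -->
<img id="hero" src="photo.jxl" width="4032" height="3024" alt="Photo"
     data-progressive-ranges="0-411 412-9873 9874-1048576" />
<script>
  // Sketch: fetch only the first pass (header + DC) with a Range request
  // and show it; the rest could be fetched later, e.g. when the image
  // scrolls near the viewport.
  const img = document.getElementById('hero');
  const fullUrl = img.src;
  const [firstPass] = img.dataset.progressiveRanges.split(' ');
  fetch(fullUrl, { headers: { Range: `bytes=${firstPass}` } })
    .then((response) => response.blob())
    .then((partial) => { img.src = URL.createObjectURL(partial); });
  // Later, the remaining ranges could be fetched and the blob replaced
  // with fullUrl to show the full-quality image.
</script>
```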
|
|
2021-09-19 04:58:43
|
(jxl is currently the only such file format afaik)
|
|
|
|
Deleted User
|
|
_wb_
Well you start by being a CDN like Cloudflare that can implement that logic on the server side
|
|
2021-09-19 05:06:00
|
Does <:Cloudinary:806630933242445904> do that (serving images partially for better performance)?
|
|
|
_wb_
|
2021-09-19 05:09:36
|
No, we don't, afaik. We rely on other CDNs (Akamai, Fastly, CloudFront, etc.), which makes it harder for us, since you need CDN edge logic for that
|
|