JPEG XL

Info

rules 57
github 35276
reddit 647

JPEG XL

tools 4225
website 1655
adoption 20712
image-compression-forum 0

General chat

welcome 3810
introduce-yourself 291
color 1414
photography 3435
other-codecs 23765
on-topic 24923
off-topic 22701

Voice Channels

General 2147

Archived

bot-spam 4380

jxl

Anything JPEG XL related

Traneptora
2021-09-04 11:10:29
thanks
2021-09-04 11:10:43
that's unfortunate, if they're already locking it behind a config option
2021-09-04 11:11:04
they could just do `image.experimental.jxl.enabled` but apparently not I guess
Fraetor
2021-09-04 11:11:12
I think it was a little bit of a bug that they added the config option to stable, yeah.
2021-09-04 11:11:27
But hopefully it will lead to them adding JXL support to stable.
Traneptora
2021-09-04 11:11:28
yea, either expose it or don't but don't expose it and then have it do nothing :(
2021-09-04 11:11:52
that said google has lots of leverage and likes jxl
2021-09-04 11:12:08
so they can strongarm mozilla into adopting formats they wouldn't otherwise adopt, like webp
Fraetor
2021-09-04 11:39:28
Yeah, I think the only thing stopping JXL from becoming more prevalent is its newness, and competition from AVIF.
2021-09-04 11:40:04
But in my opinion it is a better format than AVIF for most cases where an image should be used.
OkyDooky
2021-09-05 12:11:15
Yes, exactly what I meant. The default "Gaborish" smoothing kernel the decoders are instructed to perform. (<@179701849576833024>)
veluca
2021-09-05 12:14:12
if I am not misinterpreting, approximately...
```
0.06 0.12 0.06
0.12 0.33 0.12
0.06 0.12 0.06
```
2021-09-05 12:15:10
(this has quite a bit of rounding, the actual weights *do* sum to 1 :D)
OkyDooky
2021-09-05 12:15:13
Thank you for digging this up \:-) I'll have a look at its transfer function later...
veluca
2021-09-05 12:16:05
you can check in `lib/jxl/loop_filter.cc` for the more precise values
2021-09-05 12:16:45
`gab_?_weight1` is the weight for taxicab distance 1, `weight2` is for the corners, and the center is whatever is left to sum to 1
OkyDooky
2021-09-05 12:17:01
Doesn't it always sum up to exactly 1.0?
veluca
2021-09-05 12:27:01
yes, which is why we didn't put the third weight in the file 😛
OkyDooky
2021-09-05 12:34:28
So, if anyone's interested, the max gain is 1 (at DC) and the min gain is about 0.13576 (at normalized wavenumbers k=(0,1) and k=(1,0)). So, it attenuates high wavenumbers by about 17.3 dB.
2021-09-05 12:39:03
The "step response" might be interesting as well since a blocking artefact is basically a step. For a vertical/horizontal edge, the filter basically replaces a step [0, 0, 1, 1] with [0, 0.22, 0.78, 1].
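The gains and step response quoted above can be reproduced in a few lines. This is a sketch using the rounded weights from earlier in the thread (the precise values live in `lib/jxl/loop_filter.cc`), so the numbers come out slightly different from the 0.13576 and [0, 0.22, 0.78, 1] quoted here:

```python
import math

# Rounded Gaborish weights from the chat above: w1 = edge neighbours,
# w2 = corners, and the centre is whatever is left so the kernel sums to 1.
w1, w2 = 0.12, 0.06
c = 1.0 - 4 * w1 - 4 * w2

def gain(kx, ky):
    # Frequency response of the symmetric 3x3 kernel at normalized
    # wavenumbers kx, ky in [0, 1] (1 = Nyquist).
    wx, wy = math.pi * kx, math.pi * ky
    return (c
            + 2 * w1 * (math.cos(wx) + math.cos(wy))
            + 4 * w2 * math.cos(wx) * math.cos(wy))

# For a vertical/horizontal edge, the effective 1D kernel is the column sums.
k1d = [w1 + 2 * w2, c + 2 * w1, w1 + 2 * w2]

def filter1d(row):
    # Apply the 1D kernel with edge clamping.
    out = []
    for i in range(len(row)):
        left = row[max(i - 1, 0)]
        right = row[min(i + 1, len(row) - 1)]
        out.append(k1d[0] * left + k1d[1] * row[i] + k1d[2] * right)
    return out

print(gain(0, 0))              # DC gain: 1.0 (up to float rounding)
print(gain(0, 1))              # low gain at Nyquist along one axis
print(filter1d([0, 0, 1, 1]))  # step response of a blocking edge
```

With the rounded weights the step [0, 0, 1, 1] becomes roughly [0, 0.24, 0.76, 1], close to the [0, 0.22, 0.78, 1] computed from the exact weights.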
Traneptora
Fraetor But in my opinion it is a better format than AVIF for most cases when an image should be used.
2021-09-05 05:27:04
`caniuse` reports that JXL is better than AVIF for most purposes
2021-09-05 05:27:19
I believe AVIF has only taken a "lead" simply because it had a head start
2021-09-05 05:27:31
I'm hoping it fades and JXL just takes over seeing as JXL is just a better technology
OkyDooky
2021-09-05 07:04:56
I wouldn't say "is just better technology". Doesn't AVIF have an advantage on arbitrarily oriented 1D content (edges) due to its intra prediction modes?
2021-09-05 07:06:46
VarDCT doesn't do that. I don't know enough about JXL's modular mode to judge...
veluca
2021-09-05 07:36:32
That's *probably* true... But I suspect it is more noticeable at lower bitrates
I wouldn't say "is just better technology". Doesn't AVIF have an advantage on arbitrarily oriented 1D content (edges) due to its intra prediction modes?
2021-09-05 07:39:20
You could say AVIF has an edge there 🤣
_wb_
2021-09-05 08:21:07
My intuition: Directional prediction is a good coding tool to get cheap sharp edges at low bitrates, where it's ok to change the shape to use directions of supported angles and you don't attempt to correct that with the residuals because it's "close enough" for low bitrate. Especially if you have lots of directions like av1, it works quite well, though it can lead to a "vectorized" look with some oil-painting-like smudging artifacts.
2021-09-05 08:24:00
At medium to high fidelity, the advantage of directional prediction starts to dwindle, since you cannot just quantize the residuals away, so it doesn't really reduce entropy anymore
2021-09-05 08:25:14
Much like how interframe motion compensation is great for lossy video, but for lossless video it's not really worth doing (at least in natural, non-synthetic cases)
2021-09-05 08:25:56
(the above is just my gut feeling, fwiw)
ishitatsuyuki
2021-09-05 09:37:11
nice to know about
2021-09-05 09:37:38
just wondering what kind of magic is good for lossless?
2021-09-05 09:37:46
since jxl lossless has very impressive performance
_wb_
2021-09-05 09:43:12
Lossless is a very different game from lossy. I think we get a lot of leverage from our context model + predictor approach: MA trees with a predictor per ctx node is a quite powerful thing that can adjust to a large variety of image contents.
ishitatsuyuki
2021-09-05 09:44:04
noted
_wb_
2021-09-05 09:48:54
FLIF already had MA trees, but jxl added a few things that make it better:
- predictor choice per ctx node
- ctx map that allows trees to be DAGs
- the weighted predictor
- other predictors
- more MA tree properties (e.g. x,y coords)
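To make the "MA tree with a predictor per context node" idea concrete, here is a toy sketch; the tree shape, property names, and thresholds are all made up for illustration and bear no resemblance to the actual libjxl implementation. Inner nodes test a property of the causal neighbourhood; each leaf names the predictor to use in that context.

```python
def clamped_gradient(top, left, topleft):
    # Gradient predictor: Top+Left-TopLeft, clamped between Top and Left.
    p = top + left - topleft
    lo, hi = min(top, left), max(top, left)
    return max(lo, min(p, hi))

# Each inner node: (property, threshold, subtree_if_greater, subtree_otherwise).
# Leaves are predictor names. This particular tree is hypothetical.
TREE = ("abs_tl_gradient", 8,
        ("left", 0, "gradient", "left"),  # busy neighbourhood
        "top")                            # smooth neighbourhood

def predict(tree, top, left, topleft):
    props = {
        "abs_tl_gradient": abs(left - topleft) + abs(top - topleft),
        "left": left,
        "top": top,
    }
    # Walk down the tree until we hit a leaf (a predictor name).
    while isinstance(tree, tuple):
        prop, thr, hi, lo = tree
        tree = hi if props[prop] > thr else lo
    return {"left": left, "top": top,
            "gradient": clamped_gradient(top, left, topleft)}[tree]
```

In a flat region (all neighbours equal) the toy tree lands in the "top" leaf; near an edge it switches to the clamped-gradient leaf, which is the kind of content-adaptive behaviour the MA tree buys.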
BlueSwordM
_wb_ My intuition: Directional prediction is a good coding tool to get cheap sharp edges at low bitrates, where it's ok to change the shape to use directions of supported angles and you don't attempt to correct that with the residuals because it's "close enough" for low bitrate. Especially if you have lots of directions like av1, it works quite well, though it can lead to a "vectorized" look with some oil-painting-like smudging artifacts.
2021-09-05 03:10:19
Can varDCT actually use diagonal(wedge) partitions?
_wb_
2021-09-05 03:14:13
What do you mean exactly?
2021-09-05 03:16:46
The blocks themselves are just rectangular DCTs, with some special stuff for 8x8 (DCT with a cut corner, 'identity')
2021-09-05 03:17:31
The partitioning (block selection) can be arbitrary as long as the 256x256 grid is not crossed
BlueSwordM
_wb_ What do you mean exactly?
2021-09-05 03:21:40
Essentially, I know that varDCT can select blocks from 4x4 to 256x256 (with subdivisions down to 2x2 possible), with the option to do rectangular partitions. In AV1, as part of its inter-coding tools, you have the option to actually use wedge partitions to code diagonal lines more effectively. I have no idea if intra-only coding has access to that.
Scope
2021-09-05 03:25:30
_wb_
2021-09-05 03:28:09
I think the wedge partitioning in av1 is for the directional predictor choice, not for the transform itself, right?
BlueSwordM
_wb_ I think the wedge partitioning in av1 is for the directional predictor choice, not for the transform itself, right?
2021-09-05 03:28:29
Yes.
_wb_
2021-09-05 03:28:41
We don't have directional prediction at all in jxl, so also not with wedge partitioning
2021-09-05 03:29:32
In the block partitioning we can do arbitrary splits though, while av1 is limited to certain kinds of splits
2021-09-05 03:32:51
2021-09-05 03:33:03
That's what av1 can do
2021-09-05 03:34:14
In jxl you can do any block layout as long as everything is aligned to 8x8 and doesn't cross 256x256 boundaries
2021-09-05 03:35:45
Current cjxl only makes selections that don't cross 64x64 boundaries though, and only investigates limited amounts of non-naturally-aligned block selections
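The two layout rules stated above (8x8 alignment, no crossing of 256x256 group boundaries) are easy to check mechanically. This hypothetical helper is just an illustration of the rules as described in the chat, not libjxl code:

```python
def block_layout_valid(blocks):
    # blocks: iterable of (x, y, w, h) rectangles in pixels.
    for x, y, w, h in blocks:
        # Every block must sit on the 8x8 grid.
        if x % 8 or y % 8 or w % 8 or h % 8:
            return False
        # A block may not cross a 256x256 group boundary,
        # i.e. it must start and end inside the same group.
        if x // 256 != (x + w - 1) // 256:
            return False
        if y // 256 != (y + h - 1) // 256:
            return False
    return True
```

For example, a 16x8 block starting at x=248 fails the check because it straddles the group boundary at x=256.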
Scope
2021-09-05 03:40:38
Also, were the predictors that are not in JPEG XL, but are available in other formats, just not that effective?
_wb_
2021-09-05 03:42:18
Select is in practice the same as Paeth, just a bit better and a bit faster. Was redundant to keep both.
2021-09-05 03:44:07
Those weird averages from lossless webp probably don't bring much, and even if they do, you could parametrize the weighted predictor to include those
2021-09-05 03:46:41
Predicting from pixels below the current one is something you can only do in FLIF's funky interlacing. We don't do that funky interlacing anymore.
2021-09-05 03:49:00
For photographic images, FLIF's interlacing works quite well for compression (also lossy FLIF), but the progressive previews suffer a lot from aliasing. FUIF's Squeeze is a better idea if you want modular progressive/lossy. If you don't care about progressive, then the Weighted predictor will give better compression for photos.
2021-09-05 03:49:42
For nonphoto, FLIF's noninterlaced mode generally worked better anyway, and there the prediction from pixels below can also not be used.
2021-09-05 03:50:47
So TL;DR: yes! 😁
Traneptora
2021-09-05 08:57:45
Iirc, Paeth is `Top+Left-TopLeft` clamped between Top and Left
2021-09-05 08:57:50
what is select?
_wb_
2021-09-05 09:46:01
Clamped `Top+Left-TopLeft` is Gradient
2021-09-05 09:47:22
Paeth picks one of Top, Left, TopLeft, the one closest to Top+Left-TopLeft
2021-09-05 09:49:48
Select picks Top or Left, whichever is closest to Top+Left-TopLeft
2021-09-05 09:50:43
It's one less branching instruction and compression is similar or better
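The three predictors described above fit in a few lines of plain Python. This is a sketch based on the descriptions in the chat; the tie-breaking choices (e.g. preferring Top in Select when distances are equal) are guesses, so treat it as illustrative rather than spec-exact:

```python
def clamped_gradient(top, left, topleft):
    # Gradient: the estimate Top+Left-TopLeft, clamped between Top and Left.
    p = top + left - topleft
    lo, hi = min(top, left), max(top, left)
    return max(lo, min(p, hi))

def paeth(top, left, topleft):
    # PNG's Paeth: whichever of Left, Top, TopLeft is closest to the estimate
    # (min() keeps the first of equally close candidates, matching PNG's
    # left-then-top-then-topleft tie-break).
    p = top + left - topleft
    return min((left, top, topleft), key=lambda v: abs(p - v))

def select(top, left, topleft):
    # Select: only Top or Left, whichever is closest to the estimate.
    # One comparison fewer than Paeth.
    p = top + left - topleft
    return top if abs(p - top) <= abs(p - left) else left
```

For top=20, left=30, topleft=10 the estimate is 40; Paeth and the clamped gradient both return 30, and so does Select since Left is closer to the estimate.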
ishitatsuyuki
Scope
2021-09-06 06:01:14
Where does this presentation come from?
Scope
2021-09-06 06:02:31
<https://jpegxl.info/> <https://docs.google.com/presentation/d/1LlmUR0Uoh4dgT3DjanLjhlXrk_5W2nJBDqDAMbhe8v8/>
ishitatsuyuki
2021-09-06 06:02:39
Oh, thanks
Petr
2021-09-06 08:02:04
I like that presentation! Just made a comment in it…
spider-mario
2021-09-06 12:44:27
hi all, we would like to get some more insight into how people use or would prefer to use the noise generation feature, so if you have a moment, we would appreciate it if you could tell us your thoughts here: https://forms.gle/K7TjGQKj8SYZSfba8
fab
2021-09-06 02:33:15
2021-09-06 02:33:26
honestly i want all enabled
2021-09-06 02:33:37
if they disable this option i'll stop using cjxl
Traneptora
2021-09-07 05:52:56
```sh
printf '#!/usr/bin/env -S sh -c "echo scrub"' >bin/git-gud; chmod +x bin/git-gud
```
Jyrki Alakuijala
2021-09-07 09:41:46
https://opensource.googleblog.com/2021/09/using-saliency-in-progressive-jpeg-xl-images.html
Do you happen to know off the top of your head what default parameters these are? I'm curious. (<@532010383041363969>)
2021-09-07 09:46:39
all the quantization matrices
_wb_ We don't have directional prediction at all in jxl, so also not with wedge partitioning
2021-09-07 09:52:28
To keep the codec simpler I didn't include tools in VarDCT that didn't help in the d0.5-d3 range. Prediction tools become more useful when one is introducing more loss. We had three experiments with directional tools; they usually worked in the range of d8-d12 or so, and brought a lot of complexity + a ceiling on the quality they were capable of.
July
2021-09-08 12:47:44
Is there any way to automatically check if a jxl file was converted from jpeg or png and convert back accordingly?
190n
2021-09-08 01:06:15
did someone with webhook permissions get their account hacked?
2021-09-08 01:08:42
<@!179701849576833024> can you delete that?
Cool Doggo
2021-09-08 01:11:19
no admin online?
190n
2021-09-08 01:12:21
veluca is an admin, the admin role isn't listed separately
2021-09-08 01:14:36
oh i bet that message was bridged from matrix
2021-09-08 01:14:44
the pfp looks a little like the matrix icon
BlueSwordM
2021-09-08 02:18:57
<@&803357352664891472> Something to remove https://discord.com/channels/794206087879852103/794206170445119489/884966077493309440
190n
2021-09-08 03:18:11
<a:modCheck:884875028527734795>
veluca
2021-09-08 06:01:42
Done
_wb_
July Is there any way to automatically check if a jxl file was converted from jpeg or png and convert back accordingly?
2021-09-08 06:28:11
If there's a `jbrd` box, you can reconstruct a jpeg, otherwise you can't
yurume
2021-09-08 09:09:35
super-experimental in terms of both stability and performance (preflate is _slow_) 😉
July
_wb_ If there's a `jbrd` box, you can reconstruct a jpeg, otherwise you can't
2021-09-08 01:32:09
How do I check for this? Not really searching for PNG reconstruction, just a way to know which ones are JPGs to reconstruct
_wb_
2021-09-08 01:33:50
there's no easy way atm except for trying `djxl in.jxl out.jpg` and seeing if it gives a warning or not (if it doesn't, the jpg was reconstructed, if it does, it encoded a new jpg)
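Until `jxlinfo` reports it, a rough workaround is to scan the container's box structure yourself. This is a hedged sketch: it assumes the ISOBMFF-style box layout of the JXL container (4-byte big-endian size, 4-byte type, size 1 meaning a 64-bit extended size, size 0 meaning "runs to end of file") and simply looks for a `jbrd` box. Bare codestreams start with 0xFF 0x0A and have no boxes at all, hence no `jbrd`:

```python
import struct

def has_jbrd(data: bytes) -> bool:
    # Bare codestream: no container, so no jbrd box.
    if data[:2] == b"\xff\x0a":
        return False
    pos = 0
    while pos + 8 <= len(data):
        size, = struct.unpack_from(">I", data, pos)
        box_type = data[pos + 4:pos + 8]
        header = 8
        if size == 1:                 # 64-bit extended size follows
            if pos + 16 > len(data):
                break
            size, = struct.unpack_from(">Q", data, pos + 8)
            header = 16
        elif size == 0:               # box extends to end of file
            size = len(data) - pos
        if box_type == b"jbrd":
            return True
        if size < header:             # malformed box, stop scanning
            break
        pos += size
    return False
```

Usage would be `has_jbrd(open("in.jxl", "rb").read())`; if it returns True, `djxl in.jxl out.jpg` should reconstruct the original JPEG bit-exactly.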
July
2021-09-08 01:40:36
Ah alright. Kinda sucks but good to know
2021-09-08 01:41:35
Other question: is there a way to convert directly from webp to jxl? Cmake mentions libwebp when building libjxl but cjxl doesn't accept webp to my knowledge
_wb_
2021-09-08 01:46:32
we're not aiming to replace ImageMagick - there's nothing special we can do with webp images except decode to pixels and encode from there, so we didn't add that to cjxl
2021-09-08 01:47:19
I think it's in benchmark_xl so we can more easily compare webp to jxl using the same tool, but that's probably the only reason why libwebp is mentioned
2021-09-08 01:48:23
in general, cjxl was created to develop and test the encoder
2021-09-08 01:48:48
so input codecs were only added to it when none of the existing input codecs could be used
July
2021-09-08 01:48:58
Ok, that makes sense. Was confused why it was being mentioned but didn't seem to be used
_wb_
2021-09-08 01:49:29
there's nothing that webp can do that apng cannot already do, so there was no need to add webp input
2021-09-08 01:50:37
regarding knowing whether a jxl can be reconstructed to jpg: maybe open an issue about that? it would make sense if `jxlinfo` would provide that info
lithium
2021-09-08 03:08:14
Will cjxl's separate option become a stable option in the near future? I mean, when I promote libjxl to someone, I could use the separate option to compare against avif and say why libjxl is better.
_wb_
2021-09-08 03:12:32
near future probably not, there's still quite some research work needed to bring it from POC to stable option
lithium
2021-09-08 03:20:48
ok, thank you 🙂 , I guess I need to keep waiting.
Scope
2021-09-08 03:26:24
If there were no visible boundary crossings, even the current `separate` option would already be usable on black and white images if not using a very low quality encoding
2021-09-08 03:32:46
2021-09-08 03:33:00
d1
2021-09-08 03:33:17
d1 separate
lithium
2021-09-08 03:46:45
The separate option is very powerful for non-photographic images. I guess if jxl implements this feature, avif will find jxl a tough act to follow in quality comparisons.
Scope
2021-09-08 03:56:26
Also in the future it will theoretically be possible to use `separate` for difficult areas for VarDCT, such as sharp transitions to flat solid color backgrounds and clear lines, where it is difficult to avoid artifacts
2021-09-08 05:32:09
Btw, how do I know which predictors are used through benchmark_xl?
lithium
2021-09-08 06:44:40
I reviewed the previous tests for the separate option (Scope's tests). I think this feature can let d1 reach visually lossless, but I understand there is still a lot of work to do, so please consider increasing the priority of the separate feature. 🙂
July
2021-09-08 07:07:02
I just noticed something very odd: outputting a jxl made from a jpeg losslessly as png via djxl results in a different image than the original jpg
Scope
2021-09-08 07:08:07
https://gitlab.com/wg1/jpeg-xl/-/issues/226
July
2021-09-08 07:11:25
Why is ppm correct, then?
2021-09-08 07:11:33
Tested it and it matches the original jpeg
Scope
2021-09-08 07:25:00
Because on that conversion path one and the same JPEG decoder is used throughout
_wb_
2021-09-08 09:37:21
Jpeg is not fully specified, people just tend to assume that what libjpeg-turbo does is "correct" but the reality is that a jpeg can be decoded in different ways that are all correct.
July
2021-09-08 09:49:28
cjxl exhibits different behavior if you decode the jpeg to pixels there.
2021-09-08 09:49:44
Using djxl on the result to output a png creates one that matches libjpeg-turbo
2021-09-08 09:50:08
Why is that?
veluca
2021-09-08 09:50:18
yeah, cjxl uses libjpeg-turbo to decode the JPEG
2021-09-08 09:50:48
but the thing is that the way JPEG is specified, multiple different PNGs can all be correct
July
2021-09-08 09:52:47
is ppm based on jpeg or something? why does that one match the libjpeg-turbo result when png doesn't? both using djxl of course. not that important just kinda curious as to what's going on and i'm not super knowledgable in this
veluca
2021-09-08 09:58:28
there shouldn't be a difference between ppm and png pixels
July
2021-09-08 09:59:38
ah there isn't now... strange could've sworn i got one where ppm and jpeg matched
2021-09-08 09:59:46
but yeah now ppm and png match
Romao
2021-09-08 10:00:02
yet you can decode it to jpeg and use imagemagick to convert it 1:1 to png
_wb_
2021-09-08 10:00:46
Using jxl as a jpeg decoder is slightly more accurate than libjpeg-turbo
July
2021-09-08 10:01:20
Can you use djxl's jpeg decoder directly on jpegs?
veluca
2021-09-08 10:01:34
no (or not yet)
July
2021-09-08 10:01:59
Why doesn't cjxl use the same decoder? (sorry for the million questions I had no idea the jpeg format was like this)
Fraetor
2021-09-08 10:22:27
For JPEG decompression, cjxl does its special lossless transcode.
2021-09-08 10:23:02
And I guess it just wasn't considered for pixel input from JPEG, but I'm not sure.
_wb_
July Why doesn't cjxl use the same decoder? (sorry for the million questions I had no idea the jpeg format was like this)
2021-09-09 06:46:35
good question - we probably should do that, and drop the libjpeg-turbo dependency.
Scope
2021-09-09 06:57:44
At the moment, how much needs to be done to make the JPEG decoder and a simple encoder work in libjxl itself?
2021-09-09 06:57:48
Because I think some kind of Jpeg encoder will also be needed
veluca
2021-09-09 07:09:58
We *almost* have one of those as it is
2021-09-09 07:10:20
(no Huffman table construction, but otherwise everything is there)
2021-09-09 07:11:12
Writing a (simple, non-progressive, not 100% complete) JPEG to pixels decoder likely doesn't take more than a couple of hours... The question is if we really want it :P
2021-09-09 07:11:33
At least in the form that would come out
Scope
2021-09-09 07:18:42
Well, for compatibility it is still necessary to be able to convert to JPEG, as well as from recompressed JPEG. In addition, some applications would then be able to fully replace their JPEG libraries with JXL, since they usually also need JPEG encoding.
_wb_
2021-09-09 07:18:55
<@!179701849576833024> can't we just take the jpeg bitstream parser from jpeg recompression and instead of tokenizing the coeffs, just proceed with dequant and the rest of decode?
2021-09-09 07:19:15
for jpeg decoding
veluca
2021-09-09 07:19:48
like I said, pretty easy, but not 100% complete and perhaps not exactly the way we want to do it
_wb_
2021-09-09 07:21:17
for jpeg encoding: all we need to do is define a "default `jbrd`", which could be a fixed huffman table, a fixed scan script (maybe first all DC, then all AC), all padding bits zero, etc
2021-09-09 07:22:24
if we don't care about optimizing much, just having something to write jpegs quickly
Traneptora
2021-09-09 08:17:19
What's the generation loss of `-d 1` look like?
2021-09-09 08:17:41
jpeg, for example, always decreases quality with every recompression in a measurable way
Scope
2021-09-09 08:19:54
https://youtu.be/FtSWpw7zNkI
Traneptora
2021-09-09 08:20:09
ah there's already a video on this
2021-09-09 08:20:34
the big trepidation I have with -d 1 for synthetic imagery is generation loss
2021-09-09 08:20:46
rather than just -d 0 or -d 0.05 or something
Scope
2021-09-09 08:25:07
Sometimes I noticed color bleeding; lossy modular mode is better in this regard, but it is slower and not always suitable for all content
Deleted User
veluca Writing a (simple, non-progressive, not 100% complete) JPEG to pixels decoder likely doesn't take more than a couple of hours... The question is if we really want it :P
2021-09-09 01:54:12
Yes, we really want it! https://gitlab.com/wg1/jpeg-xl/-/issues/232
veluca
2021-09-09 02:03:03
I mean implemented that way 😛
Deleted User
2021-09-09 02:17:52
Well, at least it would be a start?
Scope
2021-09-10 07:05:27
I think <#805062027433345044> can be removed as it basically creates unnecessary doubling of <#847067365891244042>
Petr
2021-09-10 07:36:43
That channel also contains a lot of failures (with broken links) so I also vote for removing it.
veluca
2021-09-10 07:42:43
well, broken for you 😛
2021-09-10 07:44:07
now it's gone
w
2021-09-10 07:45:07
this is so sad
fab
2021-09-10 12:22:24
if i do
2021-09-10 12:24:48
2021-09-10 12:25:05
is there some risk of losing image quality
2021-09-10 12:25:36
will the size be good or better
2021-09-10 12:25:40
i would like to try
2021-09-10 12:27:18
is there any thing at the moment that could be better
2021-09-10 12:27:45
this adds noise until it get high bpp to not notice
2021-09-10 12:27:49
(the noise)
2021-09-10 12:27:59
then compress with most available ram
2021-09-10 12:28:06
obviously is not fidelity
2021-09-10 12:28:27
and modular part get damaged like those spikes
2021-09-10 12:29:18
also it cleans image and redistributes noise in a good way
2021-09-10 12:29:34
i'm sure there are more RAM consuming methods
2021-09-10 12:29:51
there is something we don't know
2021-09-10 12:30:43
i should copy files my computer can broke even with windows 11
2021-09-10 12:30:49
before the announcement
2021-09-10 12:31:19
now libavif 1.0 will be announced this year
2021-09-10 12:32:13
jxl photon noise reduces fidelity there's no right thing
2021-09-10 12:32:43
it uses more ram for the artifacts so colors for example get encoded with worse psnr
2021-09-10 12:33:06
also you have to decode those files in png
2021-09-10 12:34:17
also i remember that i did modular q 95.2 with an cjxl encoder after upscaling with paint
2021-09-10 12:34:31
and it worked not sure which setting was
2021-09-10 12:53:23
IT'S NOT WORKING
2021-09-10 12:53:28
I MIXED UP THE ENCODERS
2021-09-10 01:02:36
don't work either way
2021-09-10 01:02:45
do not use this command i suggested on reddit
2021-09-10 04:00:50
Scope
2021-09-10 04:58:25
Also, this changelog does not contain the changes related to the improved (up/down)sampling https://github.com/libjxl/libjxl/pull/572
veluca
2021-09-10 05:01:52
` - Improved the 2x2 downscaling method in the encoder for the optional color channel resampling for low bit rates.`?
Scope
2021-09-10 05:07:40
Hmm, that's okay then, I also thought that the upsampling was also changed, but it looks like not 🤔 (all the changes were only for downscaling)
fab
2021-09-11 04:41:39
that's latest comment of jyrki here
2021-09-11 04:41:40
jyrkialakuijala commented 2 hours ago • I have carefully reviewed the image quality of this method and consider it a significant improvement, and a first step in a six-step improvement for downsampling:
1. this PR
2. iterative refinement of downsampling (using the algorithm in this PR as the initial guess) for even more faithful compression
3. adjust more X and B multipliers for quantization for downsampled images (similar to ramping up those multipliers for higher distances)
4. fine tune visual masking objectives in iterative refinement
5. taking clamping into consideration in iterative refinement
6. automatically downsampling above some distance specification (such as distance 8 or 16), so that users converge to more predictable behaviour (downsampling propagates less ringing and less ugly artefacts than just high distances)

Particularly, I expect improvements in the downsampling method to allow us to switch to the 2x2 downsampling at a lower distance (for example 10 instead of 14), and to allow for better preservation of subpixel geometry such as text and graphics.
2021-09-11 04:41:52
one month ago comment
2021-09-11 04:43:20
2021-09-11 04:43:29
that's my recent comment
2021-09-11 04:44:46
17/08/2021 11:37
2021-09-12 02:52:16
diskorduser
2021-09-12 05:42:26
noooooooooooooooooooooooooooooooooo
_wb_
2021-09-12 05:57:35
?
Cool Doggo
2021-09-12 09:47:10
https://www.youtube.com/watch?v=6AaZeOnGM7U (-d 2 -e 7)
nathanielcwm
Cool Doggo https://www.youtube.com/watch?v=6AaZeOnGM7U (-d 2 -e 7)
2021-09-13 03:10:39
funny how one of the passes made it extremely sharp <:kekw:808717074305122316>
2021-09-13 03:11:08
wait the first one is the source right?
2021-09-13 03:11:31
oh nvm im stupid <:Thonk:805904896879493180>
2021-09-13 03:11:38
forgot about youtube autores
diskorduser
_wb_ ?
2021-09-13 04:44:12
screenshot spam
Scope
2021-09-13 09:53:56
Also, about progressive decoding, but in lossless/modular mode: as far as I understand, at the moment it does not work and is not supported even in Firefox with the progressive-decoding patches? However, since lossless images are much larger and slower to decode, this is even more important for them. Full progressive can further slow down overall decoding time and noticeably increase image size, but is it possible to use LQIP + incremental decoding, like with non-progressive VarDCT JXL?
w
2021-09-13 10:03:19
progressive is just for if you don't have the data
2021-09-13 10:04:42
and it's up to the API in libjxl to spit out frames for firefox
2021-09-13 10:06:23
but what I do find for modular is that it's always so much larger with progressive on for some reason
_wb_
2021-09-13 10:07:12
progressive modular means Squeeze is used and a bunch of lossless coding tools have to be disabled, like palette
w
2021-09-13 10:07:41
oh... that makes a lot of sense
_wb_
2021-09-13 10:09:13
I don't know what currently happens in the decoder but it could be that it errors on partial bitstreams when it tries to undo Squeeze
Scope
2021-09-13 10:13:18
But, is it possible to sort of incrementally decode lossless images that are encoded in the normal way, without `-p`? And also use LQIP as for VarDCT?
2021-09-13 10:30:11
I just tested a bit the case of using JXL for sites with manga and other images for which lossless would be more efficient than VarDCT. But to make people want to replace PNG, it would be nice to have some other advantages besides better compression. According to my own tests, loading times for lossless images can sometimes be long, as can decoding on slow systems, and looking at a blank screen waiting for the full image to download can be a bit annoying. LQIP helps to improve this, and incremental decoding would additionally help (even without real progressive)
Cool Doggo
nathanielcwm wait the first one is the source right?
2021-09-13 10:32:21
here is the source
Scope
2021-09-13 10:35:56
Like this, but it's a non-progressive VarDCT JXL
_wb_
Scope But, is it possible to sort of incrementally decode lossless images that are encoded in the normal way, without `-p`? And also use LQIP as for VarDCT?
2021-09-13 11:48:34
Incremental, yes, in principle that should work.
2021-09-13 11:48:50
LQIP: no, in the no-Squeeze case there is no LQIP
2021-09-13 11:48:57
of course you can always redundantly add a preview frame
Scope
2021-09-13 11:52:42
Hmm, looks like real progressive decoding also doesn't work in Firefox with patches or at least doesn't work with truncated images in this Web emulation 🤔
2021-09-13 11:53:14
Or is this not yet implemented in libjxl?
2021-09-13 05:41:09
But, incremental decoding works for `separate`, although perhaps not in the most visually appealing way
2021-09-13 05:41:25
2021-09-13 05:45:23
But, it does not work for normal lossy/lossless modular mode
_wb_
2021-09-13 05:48:57
How does that demo work?
2021-09-13 05:49:31
Does it truncate the file and then run a wasm version of djxl?
2021-09-13 05:49:48
Or does it truncate and then does browser decode?
2021-09-13 05:50:05
Or a wasm version of a decoder that uses the api?
2021-09-13 05:50:56
Does it work if you do simple lossless like -e 2 or 3?
Scope
2021-09-13 05:51:04
I think so, except without external decoders, but just those that are already supported by browsers https://stan-kondrat.github.io/progressive-image-viewer/
2021-09-13 05:51:33
<https://github.com/stan-kondrat/progressive-image-viewer>
2021-09-13 05:53:13
lossless -s 2
_wb_
2021-09-13 05:55:58
This is on firefox with the patch to add progressive?
Scope
2021-09-13 05:56:18
Yep
2021-09-13 05:56:30
However, as already discussed, it would be nice to have an online page with various types of encoded JXL images for testing
_wb_
2021-09-13 05:57:04
Yes, feel free to add something on jpegxl.info
2021-09-13 05:59:31
<@768090355546587137> any idea why there's no incremental loading of modular images?
2021-09-13 05:59:47
Who wrote that firefox patch again? Was it <@288069412857315328> ?
w
2021-09-13 06:00:06
yeah
_wb_
2021-09-13 06:01:04
What mechanism is used in ff to get the image? Output buffer or pixel callback?
w
2021-09-13 06:01:29
output buffer
_wb_
2021-09-13 06:02:22
And is the buffer what is on screen or do you wait for decode events to do a sync or something?
w
2021-09-13 06:03:26
when it needs more data but the decode is not complete, it does the flush
2021-09-13 06:03:36
The flush thing in the api
_wb_
2021-09-13 06:03:38
Makes sense
2021-09-13 06:03:49
Could be it's a libjxl problem
w
2021-09-13 06:05:20
it works for the non modular so 🤷
_wb_
2021-09-13 06:05:45
We really need to make a decode_oneshot thing that uses the api in a way similar to how a browser would use it, to make it easier to debug this stuff
w
2021-09-13 06:06:43
maybe make it use stdin as well so you can do it procedurally
_wb_
2021-09-13 06:08:15
Yes, or add an option to output the buffer once every X bytes or something
Scope
2021-09-13 06:24:54
And the real progressive decoding does not work either (this is a progressively encoded VarDCT JXL image) FF (this is likely LQIP + incremental decoding)
2021-09-13 06:25:52
Chrome (only incremental decoding)
Scope And the real progressive decoding does not work either (this is a progressively encoded VarDCT JXL image) FF (this is likely LQIP + incremental decoding)
2021-09-13 06:37:04
Hmm, but maybe it works (just not exactly how I thought it would, and it would be better to test on a photographic image)
_wb_
2021-09-13 06:38:32
That does look like real progressive AC scans
w
2021-09-13 06:40:41
Chrome uses the callback and it gets to manipulate the image byte array directly
2021-09-13 06:42:56
rows in FF only allow doing it in order (scanlines) so the only way is to use a buffer, but in that case, just use output buffer in libjxl api
Scope
2021-09-13 07:01:25
Hmm, without using saliency or other orderings that differ from sequential left-to-right, top-to-bottom decoding, progressive JXL is not much different in perception from LQIP + incremental decoding, up to about 60-65% loaded, when the image already looks fully loaded
_wb_
2021-09-13 07:11:57
I think LCP should be defined to occur at that stage, at about 60-65% when the progressive image looks "good enough"
2021-09-13 07:24:39
I think that such an LCP definition would make a lot of sense, and it would encourage browsers to properly implement progressive and webdevs to use it
2021-09-13 07:25:04
(not just for jxl but also for jpeg and maybe for avif if it can be made to work there)
2021-09-13 07:26:32
I think it's silly to define LCP as the final stage moment, which is not what really matters for user experience - a page is "usable" earlier than that
2021-09-13 07:29:32
Defining LCP incorrectly means that a non-progressive webp or avif is considered better than a somewhat larger jpeg, even if visually, when using the (progressive) jpeg, the page looks finished in 500ms while with webp/avif there's still a gaping hole in the page at 750ms (but at 800ms it's fully loaded so LCP is at 800ms, while the jpeg takes 900ms to get there)
veluca
2021-09-13 09:48:13
<@!794205442175402004> we don't do group-by-group output of Modular IIRC, I had a patch somewhere for that but I think we never merged it
_wb_
2021-09-13 10:02:08
We need to merge that, also for images with alpha
veluca
2021-09-13 10:08:24
probably worth you rewriting it from scratch 😛
_wb_
2021-09-13 10:22:11
I suppose it would help to have a special case for Squeeze residuals up to 1:8 being available, and upsample it like DC at that point. We do alpha with squeeze by default in lossy mode, so...
Scope
2021-09-14 09:25:22
Yep, images with alpha also don't work
2021-09-14 09:25:34
2021-09-14 10:07:07
I agree about LCP. But to go into more detail about what I meant, and the choice between real progressive decoding, no progressive at all, and the alternatives: I think LQIP + incremental decoding is a good enough solution. Yes, it's worse than real full progressive decoding, but considering that in most cases images load in no more than a few seconds (longer only on very slow connections), the difference isn't that big. Real progressive requires deliberately encoding images with that option, JXL even in VarDCT mode usually gets a bit larger (in my experience about 3-5% on typical images and qualities), and decoding is more computationally expensive. So all in all it's a decent trade of one set of shortcomings for another. An example of the visual difference: 50% real progressive:
2021-09-14 10:07:11
2021-09-14 10:07:32
50% LQIP + incremental decoding:
2021-09-14 10:08:50
But, 60% real progressive:
2021-09-14 10:09:15
60% LQIP + incremental decoding:
2021-09-14 10:20:04
But, still, perhaps I even prefer LQIP + incremental decoding, given that it's "free" to use (it's always there when encoding, no noticeable decoding cost, no additional increase in image size), and the extra 30-40% of waiting for the full image isn't that long, given that there is also a preview and a partially loaded image. At least until a more elegant use of progressive, like salience, is implemented
2021-09-14 10:40:48
Also, the problem for lossless/modular mode is that progressive makes compression noticeably worse, and there's no free LQIP. It would be useful to have some cheaper version of LQIP, around 200-600 bytes in size, that could be enabled separately from full progressive and that browsers would know to show by default 🤔 Although it depends on whether this can be done more optimally (for decoding, size, etc.) than generating a separate LQIP externally
_wb_
2021-09-14 11:51:29
to be clear: when you say LQIP, you mean the fancy-upsampled 1:8 DC image
2021-09-14 11:52:26
which is of course a kind of LQIP, but it's at the high end
2021-09-14 11:52:56
so let's maybe call it a MQIP (medium quality image placeholder)
2021-09-14 11:53:25
things like a blurhash or even just a single predominant color are also considered LQIP
2021-09-14 11:53:54
with progressive DC, we can in principle get 1:16, 1:32, 1:64 etc versions of the DC too
2021-09-14 11:54:30
(currently progressive DC is disabled by default on d < 4.5 and enabled by default at higher distance, because at that point it's just better for compression)
2021-09-14 11:54:38
(probably already a bit earlier)
2021-09-14 11:58:36
I'm personally in favor of a 3-step progression:
- LQIP: a very blurry 1:64 image (after ~1% of the bytes), only available if progressive DC is used, typically not rendered on connections that are fast enough to skip straight to the
- MQIP: the 1:8 image (after around 15% of the bytes)
- final image, loaded 256x256 group by group, possibly in a center-first or saliency-guided order
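For a sense of scale, here is a back-of-the-envelope sketch (plain Python, independent of libjxl; the function name and the round-up rounding are my own assumptions) of the resolutions these placeholders would have for a given image size:

```python
import math

def placeholder_sizes(width, height):
    """Placeholder resolutions for the 3-step progression discussed above.

    The DC image is stored at 1:8 of full resolution; progressive DC adds
    further halvings down to 1:64. Rounding up is an assumption made here
    for illustration.
    """
    return {
        "LQIP (1:64)": (math.ceil(width / 64), math.ceil(height / 64)),
        "MQIP (1:8)": (math.ceil(width / 8), math.ceil(height / 8)),
        "final": (width, height),
    }

# A 1920x1080 photo gives a 30x17 LQIP and a 240x135 MQIP.
print(placeholder_sizes(1920, 1080))
```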
Scope
2021-09-14 11:59:27
Yes, I mean any preview that is noticeably smaller in size and quality than the original image but shown within the same boundaries and position before it is fully loaded - not to be confused with a preview that is only needed as a thumbnail. And yes, browser/lib support for LQIP as a minimal blurhash-like placeholder would also be very useful
_wb_
2021-09-14 12:01:22
in the discussions about LCP though, there was a sentiment that LCP has to be when the image is "usable", which requires better quality than MQIP, and more like the 60% point with real progressive
2021-09-14 12:01:41
for example because the image can contain text and the text needs to be readable
2021-09-14 12:02:38
at the moment LCP is still just final image though
2021-09-14 12:04:57
anyway, IF the LCP definition would get changed so that real progressive would be rewarded, then I think the best strategy is to have a single HQIP scan that immediately brings things to the "good enough for LCP" point, followed by a final scan for the full detail. If it's only one extra scan, probably the density penalty is smaller than with the current --progressive
Scope
2021-09-14 12:09:53
For VarDCT, yes (as I said before, except in some cases where it is more efficient, the size can be about 3-5% larger). But for lossless it would also be useful to have at least a minimal LQIP, just to roughly indicate the borders and "composition" of the full image
2021-09-14 12:14:06
And for slow networks/servers or giant images, more steps can also be used for incremental decoding; as far as I understand they are not as resource-intensive and don't need to redraw the whole image from scratch the way real full progressive does
2021-09-14 12:35:41
Also, for normally encoded JXL images with `-d 1` and without `--progressive`: are other preview options like LQIP and MQIP available for them by default?
_wb_
2021-09-14 12:44:54
with -d 1, only MQIP is available by default; LQIP only with --progressive_dc=1 or with -d > 4.5 (also, neither the decode api nor djxl actually shows pre-MQIP previews atm)
2021-09-14 12:47:42
for lossless, unless you use Squeeze (which hurts density a lot on many images, unless of course near-lossless squeeze is also OK), there is nothing the bitstream automatically gives you - but a low res preview frame could be added, and assuming viewers/browsers upscale the preview to the image dimensions, that could act like a blurhash/LQIP
Scope
2021-09-14 12:51:41
So only MQIP is free for normal images? 🤔 For lossless, I think some experimentation with LQIP is worth it, but perhaps separate third-party solutions might be better. And beyond this discussion, I think progressive is an important direction to spend resources on (development, research and support). As far as I have noticed, to get more widespread adoption and create hype, developers and ordinary people really like new or unusual features (even when they're often not that useful), so it's good for the format even for marketing reasons
2021-09-14 12:54:21
In my experience and conversations, just improving compression is usually not enough to get people interested (even for some companies, where saving traffic/bandwidth is not the highest priority compared to other things and the possible problems with new formats) <:SadOrange:806131742636507177>
lithium
2021-09-14 03:31:19
Can modular patches only be rectangles? Is it possible to create circle or polygon patches?
_wb_
2021-09-14 03:46:50
Patches are rectangles, but they can have arbitrary shapes, you just need to put a bounding rectangle around them. With blendmode kAdd or kBlend, you just make the part you don't need all zeroes
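A toy illustration (plain Python, nothing from libjxl; the function names are made up) of the trick described above: a circular patch stored in its bounding rectangle, with zeros outside the circle so an additive blend leaves the base image untouched there:

```python
def circular_patch(size, value):
    """Bounding-square patch containing a filled circle; samples outside
    the circle are zero, so an additive blend contributes nothing there."""
    r = size / 2
    return [[value if (x - r + 0.5) ** 2 + (y - r + 0.5) ** 2 <= r * r else 0
             for x in range(size)]
            for y in range(size)]

def blend_add(base, patch, top, left):
    """kAdd-style blending: add the patch samples onto the base image."""
    for dy, row in enumerate(patch):
        for dx, v in enumerate(row):
            base[top + dy][left + dx] += v
    return base

base = [[10] * 8 for _ in range(8)]
blend_add(base, circular_patch(4, 5), 2, 2)
# Pixels under the circle become 15; the rest of the bounding
# rectangle stays 10 because the zero samples add nothing.
```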
lithium
2021-09-14 03:51:20
Cool, I understand. The modular patches feature is really powerful; I think there's still a chance to implement a separate option and fix the border issue.🙂
_wb_
2021-09-14 03:55:26
it will take a while to make full use in the encoder of the bitstream possibilities of things like patches and splines
2021-09-14 03:56:07
until 9 months ago, we were basically just defining the game
2021-09-14 03:56:31
now we (and maybe others?) can start playing it
2021-09-14 03:58:53
but there's still lots of other stuff to do too - like making the API more mature, getting progressive decoding to work properly, and most of all: getting adoption! because having a great encoder for a format that isn't supported, is not that useful 🙂
Scope
2021-09-14 04:01:12
Yes, it's sad that support in browsers is delayed; I think after wider adoption other developers would most likely have joined in improving and experimenting with the bitstream possibilities
lithium
2021-09-14 04:06:40
We really need a great encoder for high-quality lossy. For now av1 (avif) is still not easy to use for still images; jxl is better, but it still needs some quality improvements for graphics.
Scope
_wb_ I'm personally in favor of a 3-step progression: - LQIP that is a very blurry 1:64 image (after 1% of the bytes), only available if progressive DC is used, typically not rendered on connections that are fast enough to skip to the - MQIP that is the 1:8 image (after around 15% of the bytes) - final image, loaded 256x256 group by group, possibly in a center-first or saliency-guided order
2021-09-15 11:05:21
I agree. Btw, is it possible for non-`--progressive` images to also use a non-sequential 256x256 group loading order? Then we would also have free, default MQIP and center-first or salience-guided loading for all images (even non-progressive ones). Also, does center-first group loading by default (I think it's better than sequential in most cases) need changes in browsers or in libjxl?
_wb_
2021-09-15 11:15:18
Yes, group order can always be permuted, also for lossless btw
2021-09-15 11:15:58
It's already implemented in libjxl
2021-09-15 11:16:40
Chrome doesn't show MQIP yet, only final image, but it should do that in whatever group order the bitstream has
2021-09-15 11:18:29
Recompressed jpegs can also get MQIP and center-first/saliency guided group order, btw
2021-09-15 11:21:42
(that's one of the reasons I pushed to get Brunsli out of the spec and use vardct instead: it wasn't just a spec simplification, it's also more powerful because it allows recompressed jpegs to get all the progressive features too)
Scope
2021-09-15 11:25:42
So, it's pretty cool - except perhaps for lacking a minimal blurhash-like LQIP by default for all images, ideally loaded in parallel first for a better user experience. But perhaps this is better left to external tools
_wb_ Chrome doesn't show MQIP yet, only final image, but it should do that in whatever group order the bitstream has
2021-09-15 01:45:45
Hmm, so the group order is defined during encoding, but why is it currently sequential and not center-first? Or is it not yet clearly established that center-first is better?
Fraetor
2021-09-15 01:57:27
I guess blurhash is really for dynamic content where you can't easily just use an HTML colour placeholder.
_wb_
Scope Hmm, so the groups order is defined during encoding, but why currently it is sequential and not center-first or is it not yet clearly stated that it is better?
2021-09-15 02:04:04
Default is sequential, there's an option to change it. I suppose we could do center-first by default, I don't think there are any major downsides to doing that.
2021-09-15 02:04:30
Maybe open an issue about it on github? To get more opinions
Scope
2021-09-15 02:06:45
Option in the current encoder or source code? Oh, found it: ``` --centerfirst Put center groups first in the compressed file. --center_x=0..XSIZE Put center groups first in the compressed file. --center_y=0..YSIZE Put center groups first in the compressed file.```
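For illustration, a minimal sketch (plain Python, independent of libjxl; the exact ordering cjxl produces may differ) of what a center-first permutation of the groups could look like:

```python
def center_first_order(groups_x, groups_y, cx=None, cy=None):
    """Return group coordinates sorted so that groups closest to the
    chosen center come first (ties broken in row-major order).

    cx/cy default to the grid center, mimicking what options like
    --center_x / --center_y would override.
    """
    if cx is None:
        cx = (groups_x - 1) / 2
    if cy is None:
        cy = (groups_y - 1) / 2
    coords = [(x, y) for y in range(groups_y) for x in range(groups_x)]
    return sorted(coords,
                  key=lambda p: ((p[0] - cx) ** 2 + (p[1] - cy) ** 2,
                                 p[1], p[0]))

# For a 3x3 grid the middle group is emitted first and the corners last.
order = center_first_order(3, 3)
```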
2021-09-15 02:27:45
I also remember reading some articles with other variations of similar progressive loading that are also good for perception, but I would need to find them first
fab
2021-09-15 02:40:02
is any improvement of image quality coming?
2021-09-15 02:40:12
or heuristics?
2021-09-15 02:40:29
or modular lossless?
2021-09-15 02:40:45
is jxl finished?
2021-09-15 02:41:00
like exhale (xHE-AAC): two years of development and then it stopped
_wb_
2021-09-15 02:58:29
https://tenor.com/view/star-wars-yoda-patience-sw3-gif-21070142
Scope
Scope I also remember reading some articles with other variations of similar progressive loading that are also good for perception, but I would need to find them first
2021-09-15 03:08:04
Also I think a round-center order, kind of like how a spiral is formed, might also be helpful 🤔
veluca
2021-09-15 07:17:45
If I were <@795684063032901642>, I'd be worried about that picture on the right going around too much 🤣
Nova Aurora
2021-09-15 07:18:42
Looks like you're also trying to create an efficient targeting algorithm for our future robot overlords?
2021-09-15 07:19:07
Would you like some help with that?
2021-09-15 07:19:12
https://tenor.com/view/clippy-microsoft-office-word-publisher-gif-5630386
Scope
2021-09-15 07:21:25
I just couldn't find any better photo to quickly show what I mean, and this one was already on the blog post about progressive decoding, so...
_wb_
2021-09-15 07:25:54
Concentric ellipses do seem better than the concentric squares we currently do, though I assume also a bit more of a pain to implement in a way that avoids updates all over the place
Scope
2021-09-15 07:30:26
Although, it would be better to also indicate the blocks that will be shown. And yes, this sort of technique is probably more complex to implement than the current rectangular one, but it has advantages at least for the corner blocks, since the more useful information is usually closer to the center of the circle
2021-09-15 07:40:11
Hmm, perhaps it would be easier to make something like a superellipse - a rectangle without the corner blocks? 🤔 Although this would also require calculating the number of missing blocks as the image size / total number of blocks grows
2021-09-15 11:05:36
Redrawn examples:
2021-09-16 07:50:25
Hmm, I guess if getting a minimal LQIP is impossible for regular modular and VarDCT, is it possible to somehow calculate the dominant colour directly from the bitstream to use as a solid-colour placeholder?
2021-09-16 08:04:24
Or maybe calculate it during encoding and store it as a few bytes of additional information to pass to the decoder later? That would also get rid of third-party generators and extra scripts - as far as I know some of them reduce the full image to 1x1 and use the resulting pixel as the placeholder colour. On the other hand, it's not so easy for browsers to partially load multiple images in parallel; it's easier to load completely separate placeholders 🤔
2021-09-16 08:08:58
Also, about solid colour: as far as I have read in studies and tests with people, for some a blurred or downsized image before the full image loads is more annoying than a solid colour, or even than only showing the full image. So this is also useful, and may even be better for some people than full progressive loading or blurhash
_wb_
2021-09-16 09:29:54
One option is to use a very small preview frame (say 8x8 pixels), which then can be used by webdev tools to make a blurhash, gradient or solid color to be inlined in the html as a placeholder (if the preview frame is missing, they can also do it by decoding the whole image and downscaling it, but the embedded preview would just be an optimization for those tools, as well as an embedded placeholder in cases where html cannot be produced)
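A sketch of the webdev-tool side of that idea (plain Python; the helper names are hypothetical): averaging a tiny preview frame down to one solid placeholder colour that could be inlined in the HTML:

```python
def solid_placeholder(preview):
    """Average a tiny preview frame (a list of (r, g, b) pixels, e.g.
    8x8 = 64 of them) into a single solid colour."""
    n = len(preview)
    sums = [0, 0, 0]
    for r, g, b in preview:
        sums[0] += r
        sums[1] += g
        sums[2] += b
    return tuple(round(s / n) for s in sums)

def to_css(rgb):
    """Format the colour as a CSS hex string for an inline placeholder."""
    return "#%02x%02x%02x" % rgb

# Half red, half blue pixels average to purple: (128, 0, 128).
color = solid_placeholder([(255, 0, 0)] * 32 + [(0, 0, 255)] * 32)
```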
2021-09-16 09:30:59
which reminds me that we need to add an option to cjxl to write a preview frame (currently I think it just never does), and we need to make sure that e.g. browsers actually subscribe to the preview frame event and show it
Scope
2021-09-16 09:34:13
Or maybe even something like a dominant color for each block 🤔
2021-09-16 09:36:40
However, since there are no studies and comparisons of what would be easier and cheaper to use and network requests, these are just my thoughts
2021-09-16 09:45:01
Also, can't splines create color-filled shapes so that it would be possible to make something like a SQIP? Or even if they could, they wouldn't be as light and easy to decode?
fab
2021-09-16 09:47:51
i would do even
2021-09-16 09:47:52
-q 80.79 --patches=0 --use_new_heuristics -s 1 -I 0.323 --keep_invisible=1
2021-09-16 09:48:10
i care about micro details
_wb_
2021-09-16 09:48:17
splines cannot do filled shapes, but they can do curves that are thick or have a variable thickness
fab
2021-09-16 09:48:42
i need two new heuristics
2021-09-16 09:49:20
i want a setting good for bad photos
_wb_
2021-09-16 09:49:27
nothing prevents you in principle from making a preview frame that is high resolution but very blurry and contains some splines to create funky artistic effects
fab
2021-09-16 09:49:40
like samsung gn1 photos or jpegs
2021-09-16 09:49:43
or screenshots
2021-09-16 09:49:48
an all purpose settings
2021-09-16 09:49:57
i would care about an encoder that do that
2021-09-16 09:50:16
even if generation loss is similar to rav1e
2021-09-16 09:51:11
just to save memory
_wb_
2021-09-16 09:52:00
an encoder cannot have an all purpose setting that at the same time keeps good images good and does smoothing/denoising on bad images to make them compress better by not bothering to preserve their artifacts
2021-09-16 09:52:40
it's also not an encoder's job to make bad images look better
2021-09-16 09:53:34
if you have bad images, please use other software to try to improve them
fab
2021-09-16 09:54:35
i will test when is available
2021-09-16 09:54:59
2021-09-16 09:55:11
that's a classical image i want to compress
2021-09-16 09:55:18
this type of quality
2021-09-16 09:55:24
already damaged
2021-09-16 09:55:40
not higher, the samsung gn1 sensor does not produce higher quality
2021-09-16 09:55:46
it has lot of mpx but lot of noise
2021-09-16 09:56:18
i know libjxl cannot preserve more than what is in the image
2021-09-16 09:57:12
basically lots of blue and green
2021-09-16 09:57:21
the one that remains
2021-09-16 09:57:30
image that already look poor
2021-09-16 09:57:40
like not too much colours
2021-09-16 09:58:03
i know any encoder isn't good for this
2021-09-16 09:58:26
but even if generation loss is similar to rav1e is not bad to me
2021-09-16 09:59:47
i will use worst settings for quality
2021-09-16 09:59:59
-q 80.79 --patches=0 --use_new_heuristics -s 1 -I 0.323 --keep_invisible=1
2021-09-16 10:00:08
do not know if keep invisible 1 changes lossy file
2021-09-16 10:00:14
guess no, right?
Fraetor
2021-09-16 11:28:46
You probably want to do some preprocessing such as deblocking or denoising with GIMP (or ImageMagick if you are trying to script it) before you encode with cjxl.
fab
2021-09-16 12:01:51
I know how to do the best deblocking ever in cjxl
2021-09-16 12:02:21
But anyway it's not useful for screenshots like the image i posted
Fraetor You probably want to do some preprocessing such as deblocking or denoising with GIMP (or imagemagik if you are trying to script it) before you encode with cjxl.
2021-09-16 12:13:13
think if you store explicit images do you want that?
2021-09-16 12:16:06
honestly a new heuristics not based on concept of distance would be interesting even with worst settings
2021-09-16 12:16:17
even if i know this is out of the scope of libjxl
2021-09-16 12:16:30
libjxl is meant for using butteraugli and high quality photos
2021-09-16 12:18:08
the problem is that video codecs delete the red colour through aggressive quantization or saturation, and they are not that efficient unless you use slow speed settings
2021-09-16 12:18:18
it doesn't work that way
2021-09-16 12:18:31
probably i should get more documented
2021-09-16 12:18:38
as bluesword said
2021-09-16 12:19:52
probably i got brainwashed by jxl presentations
Cool Doggo
2021-09-16 12:20:02
afaik video codecs don't preserve color very well in general
fab
2021-09-16 12:20:25
in a video rav1e speed 10 looks good
2021-09-16 12:20:33
but on a still i see artifacts
2021-09-16 12:20:47
for the content i use
2021-09-16 12:21:22
31082021-3 build
2021-09-16 12:22:20
svt av1 wasn't even released the new build
2021-09-16 12:23:07
isn't useful to do settings like these
2021-09-16 12:23:09
can svt av1 with libavif do qp 18 min qp 65 mx speed 3 sharpness 2
2021-09-16 12:23:16
because there isn't any new encoder
2021-09-16 12:23:46
avif has been stable, not change in video artifacts in months
2021-09-16 12:26:03
in a new heuristics i want perfect preservation even at low speeds
2021-09-16 12:26:10
but is not garanteeed
2021-09-16 12:26:28
new heuristics scope is devs that need to test the encoder
2021-09-16 12:28:49
jxl focus is high quality and neatness or precision at higher speeds s 7 s 8 s 9
2021-09-16 12:28:59
but is not guaranteed
2021-09-16 12:38:17
honestly i'm joking
2021-09-16 12:45:43
probably -s 8 -q 83.329 --patches=0 is good for my usage
2021-09-16 12:45:53
and i don't need special settings
Scope
2021-09-16 12:53:57
Progressive encoding-decoding
Justin
2021-09-19 01:10:55
Do you prefer the distance or quality parameter for conversion? No idea what to set as default for the converter we're building.
2021-09-19 01:12:36
I'd think that distance is easier to understand and more important than a fixed quality value
Scope
2021-09-19 01:20:45
I think quality will be more understandable and familiar to most people, especially if they use other formats as well. For JXL alone, distance may be more convenient, but it maps to the quality values anyway, so apart from familiarity there's no real difference
_wb_
2021-09-19 01:22:05
From a convenience pov, 'quality' is what people are typically used to
2021-09-19 01:22:38
From a didactical pov, 'distance' is a better way to think about lossy compression
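For reference, a sketch of the kind of quality-to-distance mapping involved (plain Python; based on the commonly cited rule of thumb that quality 90 corresponds to distance 1.0 - treat the exact constants as an assumption, the real mapping in cjxl may differ and special-cases the extremes):

```python
def quality_to_distance(quality):
    """Approximate quality -> Butteraugli distance mapping for the
    mid/high range, assumed linear here: q=100 -> d~0.1, q=90 -> d=1.0.
    Real encoders special-case q=100 and very low qualities."""
    if not 30 <= quality <= 100:
        raise ValueError("this sketch only covers quality 30..100")
    return 0.1 + (100 - quality) * 0.09

# quality 90 maps to distance 1.0, the usual visually-lossless target
```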
Justin
2021-09-19 01:36:20
Nice, thanks!
fab
2021-09-19 06:10:05
when imageready event presentation
_wb_
2021-09-19 06:31:16
I don't know if <@519159731529580554> is still here
fab
2021-09-20 08:13:33
https://twitter.com/towebperf
2021-09-20 08:13:51
you need to download zoom app in your phone and access
2021-09-20 08:14:02
don't know if i remember my data or if i can log in with google
2021-09-20 08:14:39
unless you have windows ten
2021-09-20 08:15:00
i know everyone isn't interested in a smartphone
2021-09-20 08:15:07
neither I
2021-09-20 08:22:54
ok i registered
2021-09-20 08:23:00
google account didn't work on the phone
2021-09-20 08:24:57
it's difficult, you need to sign in with google at meetup.com
2021-09-20 08:25:35
the hour is late 19:00 22:00 for me
2021-09-20 08:26:14
i don't remember if it was the same last year
2021-09-20 08:26:21
but the date was 1 october also
Traneptora
2021-09-20 05:59:15
nice, apparently I discovered that with the gdk pixbuf loader I can set my desktop background to a JXL
veluca
2021-09-20 06:01:57
good to know, although not super surprising 😄
Traneptora
2021-09-20 06:03:08
Yea, I was like "I wonder if this works"
2021-09-20 06:03:09
and it did
2021-09-20 06:03:19
it's nice cause my HDD is slow as all hell
2021-09-20 06:03:35
so even cutting down on decoding a 4 MB png from HDD can save me some effort
diskorduser
2021-09-20 06:20:58
All my wallpapers even lockscreen, login screen are in jxl. Also user profile picture is in jxl too.
veluca
2021-09-20 06:21:26
😄
2021-09-20 06:21:31
which kind of pictures?
diskorduser
2021-09-20 06:21:59
Hmmm they are normal. Like nature and abstract background
2021-09-20 06:24:44
Can jxl do multi-resolution images? Like 16x16 to 256x256 of same image. That kind of feature would work great for icon packs.
veluca
diskorduser Hmmm they are normal. Like nature and abstract background
2021-09-20 06:26:50
so photographic?
diskorduser Can jxl do multi-resolution images? Like 16x16 to 256x256 of same image. That kind of feature would work great for icon packs.
2021-09-20 06:27:04
it can, up to a point
_wb_
2021-09-20 06:32:13
VarDCT with squeezed DC and progressive AC scans can kind of give you something like 8x8, 16x16, 32x32, 64x64, 128x128, 256x256, 512x512, 1024x1024 etc versions of the same image
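The resolution ladder mentioned here can be sketched like this (plain Python; the round-up halving is my assumption for odd dimensions):

```python
def squeeze_pyramid(width, height, smallest=8):
    """Sizes reconstructible from a squeezed image: halve (rounding up)
    from the full resolution down to roughly `smallest` pixels."""
    sizes = [(width, height)]
    while max(sizes[-1]) > smallest:
        w, h = sizes[-1]
        sizes.append(((w + 1) // 2, (h + 1) // 2))
    return list(reversed(sizes))

# A 1024x1024 image yields 8x8, 16x16, ... up to the full 1024x1024.
pyramid = squeeze_pyramid(1024, 1024)
```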
diskorduser
2021-09-20 06:33:29
But icons need lossless though 🤔
_wb_
2021-09-20 06:33:48
Then you can do just squeezed lossless
2021-09-20 06:34:12
It's typically not as dense as non-squeezed lossless
veluca
2021-09-20 06:34:42
... to the point that often it's actually better to store all the downsamples separately
_wb_
2021-09-20 06:35:47
It depends on the image though
diskorduser
2021-09-20 06:35:56
Windows .ico , mac os icns and Linux old icon formats have multi-resolution support
_wb_
2021-09-20 06:36:06
There are also cases when squeeze is denser than non-squeeze
2021-09-20 06:36:59
Storing downsamples separately is not something that is really supported in the jxl codestream
2021-09-20 06:37:57
Though you could do multi-layer with four layers where the layers are 8x upsampled, 4x upsampled, 2x upsampled and 1:1
2021-09-20 06:38:16
Where the first 3 layers are completely redundant
2021-09-20 06:39:27
But then you would also need an application that understands that for a specific resolution it only needs to decode one layer
2021-09-20 06:39:51
The current decode api doesn't allow that
diskorduser
2021-09-20 06:40:47
So it would be better to make a new icon format which uses jxl for lossless storage...
2021-09-20 06:41:24
Currently android uses webp for icons on some apps. Things may change in future
_wb_
_wb_ The current decode api doesn't allow that
2021-09-20 06:41:35
For that we would need something like a JXL_DEC_LAYER event (for all kRegular frames that are not triggering JXL_DEC_FRAME), allow layer skipping, expose the upsampling factor of the frameheader, and have an option to skip the upsampling
2021-09-20 06:42:25
Not sure if multi-size icons are still that useful/relevant
2021-09-20 06:42:54
I'd use svg or if it has to be raster, just have the largest size only
2021-09-20 06:43:49
It's some wasted effort to decode more pixels than needed and to downscale, but meh
diskorduser
2021-09-20 06:45:00
Icons don't look good / sharp when downscaled from high resolution.
2021-09-20 06:48:54
Even SVGs need a separate image for smaller resolutions because they don't align properly to the pixel grid
_wb_
2021-09-20 06:50:30
Hmyeah for really small icons like 16x16 the pixel grid does indeed become a big factor, but are such small icons still needed with today's screen densities?
diskorduser
2021-09-20 06:51:00
Yeah for notification tray and symbolic icon stuffs.
OkyDooky
2021-09-20 07:03:39
Hi
improver
2021-09-20 07:05:44
why cant i block bot users ;_;
_wb_
2021-09-20 07:06:13
Is that the Matrix integration?
improver
2021-09-20 07:06:26
no clue
Scope
2021-09-20 07:06:47
Yep
2021-09-20 07:07:02
Matrix users
_wb_
2021-09-20 07:07:16
Yes, <@309408702530846730>
2021-09-20 07:07:41
It changes its name to the nickname of someone saying something on Matrix
2021-09-20 07:07:52
So we can see it here too
2021-09-20 07:08:25
Those people are not bots :)
improver
2021-09-20 07:08:41
you can never know
_wb_
2021-09-20 07:08:41
Just not using discord but matrix
2021-09-20 07:09:01
Ok yes they could be bots
improver
2021-09-20 07:09:55
even if someone looks human, they may be bot level simpleminded
diskorduser
2021-09-20 07:22:26
Interesting
spider-mario
Traneptora it's nice cause my HDD is slow as all hell
2021-09-20 07:38:20
I was almost going to suggest trying something like systemd-readahead but I forgot that it’s not supported anymore
2021-09-20 07:38:24
is there no more alternative to it?
diskorduser
2021-09-20 07:42:08
e4rat preload. If using ext4
spider-mario
2021-09-20 07:44:21
oh yes, e4rat
2021-09-20 07:44:25
I think I used that one too
diskorduser
2021-09-20 07:44:58
Works great on HDD. Halved my boot time.
Traneptora
spider-mario I was almost going to suggest trying something like systemd-readahead but I forgot that it’s not supported anymore
2021-09-20 11:09:44
wdym
2021-09-20 11:09:48
what's that
diskorduser e4rat preload. If using ext4
2021-09-20 11:09:58
what's that
spider-mario
2021-09-20 11:20:18
it makes it so that the files that are usually read on startup are rearranged on the disk and read in advance
diskorduser
Traneptora what's that
2021-09-21 08:55:20
https://github.com/ShyPixie/e4rat-lite
2021-09-21 08:55:57
Using e4rat benefits on faster HDD too.
Traneptora
2021-09-21 09:43:30
very very old
fab
2021-09-21 10:19:49
2021-09-21 10:19:58
a backup of 12:19
w
2021-09-21 10:28:51
<:thinkraging:427900399027093506>
diskorduser
fab a backup of 12:19
2021-09-21 10:52:26
Why do you need backups?
fab
2021-09-21 11:45:58
Because if they ban my comments or dev delete my post
2021-09-21 11:46:18
Also is interesting to me
w
2021-09-21 11:58:01
my laptop from 9 years ago with a 112gb drive performs at 10mb/s max throughput
fab
2021-09-21 11:58:57
Can you reply
w
2021-09-21 11:59:12
old spinning drive
fab
2021-09-21 12:01:03
Who knows what Muscles from Brussels means?