|
VcSaJen
|
2024-02-07 12:14:02
|
Would that re-encode image pixels? AFAIR you need to pass a special parameter to turn off lossless jpeg recompression.
|
|
|
lonjil
|
2024-02-07 12:16:37
|
they decoded to pixels and then did lossless encoding, I think
|
|
|
Traneptora
|
|
VcSaJen
Would that re-encode image pixels? AFAIR you need to pass a special parameter to turn off lossless jpeg recompression.
|
|
2024-02-07 12:20:31
|
`cjxl -d 0 -e 3 input.jpg output.jxl` would, but they may have taken the PNG and fed it to cjxl
|
|
2024-02-07 12:20:50
|
if they had done lossless recompression of the JPEG it is extremely unlikely they would have inflated the JPEGs by a factor of 4 or something, which is what happened
|
|
2024-02-07 12:21:18
|
so my conclusion is they must have done decode to pixels first
|
|
2024-02-07 12:21:25
|
possibly by using the PNG as an input
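For reference, the two paths being contrasted here look roughly like this (file names are placeholders, and exact defaults can vary between cjxl versions):
```
# Default: lossless transcoding of the JPEG bitstream (no pixel re-encode)
cjxl input.jpg transcoded.jxl

# Force a decode to pixels and a mathematically lossless re-encode,
# which is effectively what feeding a PNG (or passing --lossless_jpeg=0) does
cjxl --lossless_jpeg=0 -d 0 -e 3 input.jpg pixels.jxl
```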
|
|
|
lonjil
|
2024-02-07 12:22:13
|
> Then THAT defaults to “lossless transcoding” for old JPEG’s, which is not something the other codecs are capable of doing, so I had to set --lossless_jpeg=0 to make it actually convert apples to apples.
|
|
2024-02-07 12:22:41
|
actually, wait
|
|
2024-02-07 12:22:47
|
that table doesn't make any sense
|
|
|
VcSaJen
|
2024-02-07 12:34:44
|
just checked, yes, it does a lossless transcode with `-d 0 -e 3`
|
|
|
lonjil
> Then THAT defaults to “lossless transcoding” for old JPEG’s, which is not something the other codecs are capable of doing, so I had to set --lossless_jpeg=0 to make it actually convert apples to apples.
|
|
2024-02-07 12:43:03
|
Wait, he explicitly turned off jpeg transcoding, then used lossless encoding, to compare "apples-to-apples"? Shouldn't he have used JPEG-LS, then?
|
|
|
lonjil
|
2024-02-07 12:43:43
|
idk honestly https://wiki.alopex.li/LossyImageFormats#performance
|
|
|
Quackdoc
|
|
lonjil
> Then THAT defaults to “lossless transcoding” for old JPEG’s, which is not something the other codecs are capable of doing, so I had to set --lossless_jpeg=0 to make it actually convert apples to apples.
|
|
2024-02-07 12:46:23
|
what is this in reference to? if they are lossy transcoding other jpegs they should be testing both
|
|
2024-02-07 12:46:54
|
I miss when people wouldn't disregard actual real-world use cases in their "apples to apples" comparisons
|
|
|
VcSaJen
|
2024-02-07 12:47:15
|
Ah, that's a different person from the one who used `-d 0 -e 3`?
|
|
|
lonjil
|
2024-02-07 12:47:39
|
I think so
|
|
|
Quackdoc
|
2024-02-07 12:49:08
|
when using JPEGs as a source for benchmarking JXL, I can rarely think of a reason not to include lossless transcoding in your tests, unless you are really crushing quality ofc
|
|
2024-02-07 12:49:38
|
if it's a lossless test, disabling it is just weird. I can understand benching with it both enabled and disabled, however
|
|
|
gb82
|
|
lonjil
if anyone here uses lobsters: https://lobste.rs/s/jfjjcp/jpeg_xl_rejected_for_interop_2024
please do not brigade, but if anyone has something useful to add to their discussion, I can provide an invitation link if needed.
|
|
2024-02-08 09:47:21
|
that's my post :D
|
|
|
lonjil
|
2024-02-08 09:51:02
|
I noticed!
|
|
|
Quackdoc
|
2024-02-11 02:43:29
|
I think a lot of the simple stuff could be a page on the reddit
|
|
|
jonnyawsom3
|
2024-02-11 03:51:09
|
The Resources channel strikes again ;P
|
|
|
damian101
|
2024-02-11 12:02:36
|
There's this...
https://wiki.x266.mov/docs/images/JXL
I think that wiki is missing a "best practices" section or something like that.
|
|
2024-02-11 12:03:13
|
batch encoding images is almost the same for all image formats after all...
|
|
2024-02-11 12:03:43
|
and currently there is also no straightforward guidance on which image/video/audio formats are actually worth using, and with which tools
|
|
2024-02-11 12:04:09
|
Haven't contributed to that wiki yet, have been wanting to do that for many months...
|
|
|
There's this...
https://wiki.x266.mov/docs/images/JXL
I think that wiki is missing a "best practices" section or something like that.
|
|
2024-02-11 12:08:17
|
Actually, most topic sections should probably have an overview/comparison entry, where all the different tools or formats of that section are compared, including benchmarks, and a quick-guide section where people who aren't already very knowledgeable and just want to get stuff done can find good information on how to compress or filter images/video/audio.
|
|
|
w
|
2024-02-11 01:19:34
|
why is it called wiki when it's not a wiki
|
|
|
damian101
|
|
w
why is it called wiki when it's not a wiki
|
|
2024-02-11 02:01:57
|
how do you mean <:Hmmm:654081052108652544>
|
|
|
Foxtrot
|
2024-02-11 03:25:47
|
yeah... people call wiki what should be called docs
|
|
2024-02-11 03:26:19
|
> A wiki (/ˈwɪki/ WI-kee) is a form of online hypertext publication that is collaboratively edited and managed by its own audience directly through a web browser.
|
|
2024-02-11 03:28:05
|
it says here it accepts contributions, but if you can't make edits as an anonymous user I wouldn't call it a wiki https://github.com/av1-community-contributors/codec-wiki/tree/main
|
|
|
VcSaJen
|
2024-02-11 03:43:21
|
Yea, anything that requires making pull requests isn't a wiki.
|
|
|
Traneptora
|
|
how do you mean <:Hmmm:654081052108652544>
|
|
2024-02-11 04:43:52
|
wiki implies anyone can edit or anyone with an account can edit
|
|
2024-02-11 04:43:56
|
which is not the case here
|
|
|
VcSaJen
|
2024-02-19 12:45:50
|
JPEG XL is one of the most-voted feature requests last week on Mozilla Connect:
https://connect.mozilla.org/t5/discussions/mozilla-connect-weekly-recap-top-voted-ideas-2-9-2-16/td-p/51631
|
|
|
spider-mario
|
2024-02-19 09:28:44
|
with 777 thumbs-up
|
|
|
yoochan
|
2024-02-19 12:01:44
|
https://tenor.com/view/jago33-slot-machine-slot-online-casino-medan-gif-25082594
|
|
|
HCrikki
|
2024-02-19 10:19:45
|
maybe we should ensure firefox Developer edition gets the support updated and enabled initially
|
|
2024-02-19 10:21:24
|
compared to regular builds, it's a lot safer pushing and enabling new tech out of the box in preparation for eventual enabling in regular stable and ESR
|
|
|
Traneptora
|
2024-02-20 05:20:57
|
It's already in nightly, they just won't merge the patch fix for no particular reason
|
|
|
yoochan
|
2024-02-20 07:50:33
|
the comment is from 2021, we are in 2024, let them change their mind 🙂
|
|
|
Quackdoc
|
2024-02-20 08:15:30
|
I guess the real reason should have been specified lol
|
|
|
novomesk
|
2024-02-22 02:54:45
|
French article:
https://korben.info/jxl-avif-nouveaux-formats-image-efficients.html
|
|
|
yoochan
|
2024-02-22 03:16:09
|
I don't know why avif has a better reputation for mangas/comics/anime, is it supported by subjective or objective reviews?
|
|
|
damian101
|
|
yoochan
I don't know why avif has a better reputation for mangas/comics/anime, is it supported by subjective or objective reviews?
|
|
2024-02-22 03:17:34
|
Anime can be compressed a lot further than other content without decreasing appeal much. And avif handles that quality range a lot better.
|
|
|
lonjil
|
2024-02-22 03:18:47
|
People have reported in this Discord that AVIF is better for manga even at high quality
|
|
2024-02-22 03:19:08
|
IIRC, the theory is that the directional predictor is pretty good for all those lines
|
|
|
Tirr
|
2024-02-22 03:19:52
|
sometimes lossless compresses better than lossy for manga
|
|
2024-02-22 03:20:20
|
(in jxl)
|
|
|
yoochan
|
2024-02-22 03:24:46
|
well, I love mangas AND jxl. I'll do some more tests. Even if it's not easy to find original PNGs, straight out of the drawing app
|
|
|
HCrikki
|
|
yoochan
I don't know why avif has a better reputation for mangas/comics/anime, is it supported by subjective or objective reviews?
|
|
2024-02-22 03:35:09
|
afaik in compares jxl preserves more detail at all comparable qualities. the numbers shared in the past to promote use of avif for low-filesize images were misleading and corresponded to incredibly low-quality, jpeg-tier results no one would ever use either then or now, no matter how starved of storage/bandwidth they are
|
|
|
damian101
|
|
yoochan
well, I love mangas AND jxl. I'll do some more tests. Even if it's not easy to find original PNGs, straight out of the drawing app
|
|
2024-02-22 03:35:22
|
use `-a tune=ssim -a quant-b-adapt=1 -s 1` for best performance
|
|
2024-02-22 03:35:34
|
and `-a enable-chroma-deltaq=1` which is beneficial for yuv 4:4:4
|
|
2024-02-22 03:35:56
|
and `-a sb-size=64` at high quality
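Put together, the suggestion would look something like this avifenc call (file names and the quality value are placeholders, and `-q` needs a reasonably recent libavif):
```
avifenc -s 1 -y 444 -q 80 \
  -a tune=ssim -a quant-b-adapt=1 \
  -a enable-chroma-deltaq=1 \
  -a sb-size=64 \
  page.png page.avif
```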
|
|
|
yoochan
|
2024-02-22 03:40:12
|
I bet I can't activate this from benchmark_xl 😄
|
|
2024-02-22 03:40:18
|
I'll have a look
|
|
|
jonnyawsom3
|
|
yoochan
well, I love mangas AND jxl. I'll do some more tests. Even if it's not easy to find original PNGs, straight out of the drawing app
|
|
2024-02-22 05:46:54
|
If you can't find the original PNG, I'd suggest giving quantization a go beforehand. Not an ideal solution but when the content is only a few shades, it can naturally massively help
|
|
|
_wb_
|
2024-02-22 06:06:02
|
Avif works well for lossy manga with clean lines. I'm still somewhat hesitant to use lossy on nonphoto images though, since it can be a bit unpredictable. E.g. maybe the lines will be fine but some more subtle things may be smoothed in ways that are not acceptable.
|
|
|
jonnyawsom3
|
2024-02-22 07:24:59
|
Just imagine if we had a reasonable way of encoding splines, doubt you'd beat that for manga
|
|
2024-02-22 07:26:34
|
Hmm... Wonder how svg2jxl combined with encoding the difference (from non-filled shapes and missed areas) would hold up... Assuming you turned an image into an svg in the first place
|
|
|
_wb_
|
2024-02-22 08:36:48
|
Or even without splines, splitting the image in a part that compresses well with lossless/near-lossless and residuals that are left for VarDCT would be nice. Anything with low color count and hard edges is better left for modular. But doing that separation is pretty nontrivial...
|
|
|
Oleksii Matiash
|
2024-02-22 09:03:19
|
Sounds like DjVu
|
|
|
yoochan
|
|
_wb_
Or even without splines, splitting the image in a part that compresses well with lossless/near-lossless and residuals that are left for VarDCT would be nice. Anything with low color count and hard edges is better left for modular. But doing that separation is pretty nontrivial...
|
|
2024-02-22 09:06:00
|
You mean that a very clean, black and white, line-based drawing compressed with modular will leave almost no residuals?
|
|
|
_wb_
|
2024-02-22 09:11:29
|
Colored too. Doing a good near-lossless modular followed by a vardct frame to catch the rest could be great. Just hard to make a good encoder for it. But the jxl bitstream has all the ingredients for it.
|
|
|
Quackdoc
|
|
yoochan
I don't know why avif has a better reputation for mangas/comics/anime, is it supported by subjective or objective reviews?
|
|
2024-02-22 09:24:20
|
Manga typically behaves really well with yuv420 and avif is really well optimized for that, ofc this can decimate colored manga, and even some black and white manga due to how scanners encode.
I typically never do 420 because I batch encode everything, so I typically just use `-d 1.5 -e 6 -m 1` when encoding since this is pretty great for me
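In shell form, that batch setting would look roughly like this (the loop and file names are just illustrative):
```
for f in *.png; do
  cjxl -d 1.5 -e 6 -m 1 "$f" "${f%.png}.jxl"
done
```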
|
|
2024-02-22 09:25:25
|
don't forget a lot of scanned manga goes jpeg -> png -> you, so you often need to make sure you don't compound the artifacting too much
|
|
|
jonnyawsom3
|
2024-02-22 09:27:04
|
Once again giving me flashbacks to the "More modular patches" PR that mixed lossy and lossless
|
|
|
Quackdoc
|
2024-02-22 09:29:19
|
I also plan on re-running my tests with a higher efficiency now thanks to 0.10
|
|
|
Nyao-chan
|
|
yoochan
well, I love mangas AND jxl. I'll do some more tests. Even if it's not easy to find original PNGs, straight out of the drawing app
|
|
2024-02-23 04:08:19
|
look for releases from kodansha and sometimes viz manga on nyaa.
I have a bunch compressed with -e 9 lossless if you want to compare too
|
|
2024-02-23 04:12:25
|
they sometimes sell pdfs on humble bundle. on nyaa there are sometimes lossless png rips, like witch hat atelier
|
|
2024-02-23 04:13:59
|
the fable is also lossless
1r0n always releases pngs and says if they are edited, like denoised, or not
|
|
2024-02-23 04:14:16
|
unedited ones compress a lot better too
|
|
2024-02-23 04:14:52
|
Hunter X Hunter
|
|
2024-02-23 04:31:31
|
yes, English.
finding raws in good quality is difficult, at least on English sites. I wonder if it's better in Japanese.
|
|
|
Quackdoc
|
2024-02-23 04:41:04
|
it used to be, itazuraneko had a good amount of png scans before it went bust
|
|
|
yoochan
|
2024-02-23 07:57:08
|
sometimes, even ebooks bought from the publisher bear some jpeg blocking artifacts 😄 despite being in png
|
|
|
Traneptora
|
|
_wb_
Or even without splines, splitting the image in a part that compresses well with lossless/near-lossless and residuals that are left for VarDCT would be nice. Anything with low color count and hard edges is better left for modular. But doing that separation is pretty nontrivial...
|
|
2024-02-23 03:22:08
|
is this true even of some lossy modular?
|
|
|
_wb_
|
2024-02-23 03:23:45
|
lossy modular with squeeze doesn't play very well with typical nonphoto, since it cannot really be combined with things like palette and lz77
|
|
2024-02-23 03:24:12
|
but there are other lossy things you can do in modular
|
|
2024-02-23 03:24:44
|
using delta palette, for example
|
|
|
HCrikki
|
2024-02-23 03:27:16
|
many publishers release digital 'raws' on their publisher subscriptions
|
|
2024-02-23 03:28:01
|
if you need an official source with huge variance, humble regularly sells manga bundles. last one was like 120 volumes for $30
|
|
2024-02-23 03:28:53
|
redistribution would be a no-no, but individual pages for representative benchmarks and compares displayed in a non-profit capacity should be fine.
good scanlations should be preferable though for a number of reasons: highres, pngs, and they're the lossless source of the compressed versions floating around on all online readers. any compares would help those sites make up their mind about whether they want in on jxl sooner
|
|
|
lonjil
|
2024-02-23 03:31:29
|
Hm, if you use splines, you could have a smart encoder consider the pixels that will be covered by splines to be unconstrained, when trying to find a good prediction tree. Would be a fun niche algorithm to implement.
|
|
|
yoochan
|
|
lonjil
Hm, if you use splines, you could have a smart encoder consider the pixels that will be covered by splines to be unconstrained, when trying to find a good prediction tree. Would be a fun niche algorithm to implement.
|
|
2024-02-23 03:38:51
|
I was thinking of the same thing! It's not the case yet?
|
|
|
_wb_
|
2024-02-23 03:39:07
|
easiest way to do it would be to subtract the splines before encoding the rest, and the spline would have to be a perfect fit because as long as there is some faint residual, it will have basically the same entropy as before...
|
|
2024-02-23 03:39:38
|
(of course when doing lossy you could just eliminate the faint residuals)
|
|
|
lonjil
|
|
yoochan
I was thinking of the same thing! It's not the case yet?
|
|
2024-02-23 03:40:26
|
well, libjxl doesn't use splines, right? So anything involving them is hypothetical.
|
|
|
jonnyawsom3
|
2024-02-23 03:41:19
|
The computing cost is too high without extensive work on heuristics for them
|
|
|
lonjil
|
|
_wb_
easiest way to do it would be to subtract the splines before encoding the rest, and the spline would have to be a perfect fit because as long as there is some faint residual, it will have basically the same entropy as before...
|
|
2024-02-23 03:42:26
|
How do splines get added? If you can use them to simply replace what pixels would've been there otherwise, then there is no residual to worry about. Those pixels can be given whatever value makes it easiest to predict the pixels immediately surrounding the spline. Though ofc easier said than done etc.
|
|
|
yoochan
|
|
lonjil
well, libjxl doesn't use splines, right? So anything involving them is hypothetical.
|
|
2024-02-23 03:42:56
|
but they use dots, don't they? the issue is the same
|
|
|
lonjil
|
|
_wb_
|
2024-02-23 03:43:54
|
splines get added with + 🙂
|
|
|
lonjil
|
2024-02-23 03:43:55
|
I know patches are used and the same would apply to them.
|
|
|
_wb_
|
2024-02-23 03:44:26
|
so they don't replace what's underneath, they add stuff (which can be negative, if you want to make things darker)
|
|
|
lonjil
|
|
Traneptora
|
|
lonjil
How do splines get added? If you can use them to simply replace what pixels would've been there otherwise, then there is no residual to worry about. Those pixels can be given whatever value makes it easiest to predict the pixels immediately surrounding the spline. Though ofc easier said than done etc.
|
|
2024-02-23 03:44:49
|
spline XYB values are just added to the original ones yea
|
|
2024-02-23 03:45:02
|
the idea was that hair and stuff is better done additively so they can have the low-frequency stuff handled by VarDCT
|
|
|
lonjil
|
2024-02-23 03:45:48
|
My intuition is that it would be easier to make a good encoder if splines replaced rather than added.
|
|
|
Traneptora
|
2024-02-23 03:46:20
|
oh, cause you could put whatever you wanted underneath them
|
|
|
lonjil
|
2024-02-23 03:46:51
|
yeah.
|
|
|
jonnyawsom3
|
2024-02-23 03:47:30
|
I don't suppose splines are stored in another layer similar to patches?
|
|
|
_wb_
|
2024-02-23 03:47:57
|
it's not so clear what "replace" means considering splines don't have hard edges in jxl
|
|
|
lonjil
|
2024-02-23 03:48:11
|
good point
|
|
|
yoochan
|
2024-02-23 03:49:03
|
but if they add in a way which will always make 'black', the pixels below are 'discarded' somehow, hence they could be whatever is easiest to encode
|
|
|
lonjil
|
2024-02-23 03:50:30
|
Could you draw splines on an otherwise empty frame, and have pixels that are definitely inside the spline have alpha=1, pixels that are definitely outside the spline have alpha=0, and partial pixels get an in-between alpha depending on percentage coverage?
|
|
|
jonnyawsom3
|
|
lonjil
Could you draw splines on an otherwise empty frame, and have pixels that are definitely inside the spline have alpha=1, pixels that are definitely outside the spline have alpha=0, and partial pixels get an in-between alpha depending on percentage coverage?
|
|
2024-02-23 03:51:11
|
That would seem more based on the spline's alpha than the pixel's alpha
|
|
|
_wb_
|
2024-02-23 03:52:20
|
basically what you need is something like a vectorizing algorithm that traces paths and represents them as splines, then subtract the splines, and then have some heuristic to clean up the small mismatches you'll get because the anti-aliasing and spline width fluctuation will not be precise enough to be exact
|
|
|
lonjil
|
2024-02-23 03:52:42
|
mm
|
|
|
That would seem more based on the spline's alpha than the pixel's alpha
|
|
2024-02-23 03:53:38
|
anti-aliasing on the alpha channel, is what I'm thinking.
|
|
|
jonnyawsom3
|
|
Hmm... Wonder how svg2jxl combined with encoding the difference (from non-filled shapes and missed areas) would hold up... Assuming you turned an image into an svg in the first place
|
|
2024-02-23 03:54:39
|
That was my thinking in this message. There was a (naturally) AI vectorizing tool that left beta recently, up until then it was free and had good results. Was thinking of getting some splines out of it, overlaying them on the source image and playing with methods of subtracting/mixing them
|
|
2024-02-23 03:55:52
|
https://vectorizer.ai/
|
|
|
Traneptora
|
2024-02-23 07:37:41
|
JPG -> SVG -> svg_to_jxl <:kek:857018203640561677>
|
|
|
MSLP
|
2024-02-23 09:04:07
|
that could be an interesting development for the zune-jxl encoder, they already have modular mode
|
|
|
jonnyawsom3
|
|
Traneptora
JPG -> SVG -> svg_to_jxl <:kek:857018203640561677>
|
|
2024-02-24 08:20:10
|
This is exactly what I did multiple times xD
|
|
|
VcSaJen
|
|
use `-a tune=ssim -a quant-b-adapt=1 -s 1` for best performance
|
|
2024-02-24 12:08:53
|
This highlights that some presets are needed. If not in CLI, then at least in GUI (GIMP, Krita, etc).
|
|
|
spider-mario
|
2024-02-24 12:37:26
|
yeah, “how to compress to avif optimally” reeks of arcane knowledge and incantations
|
|
|
_wb_
|
2024-02-24 01:25:58
|
I got a blogpost in the pipeline on libjxl 0.10 — probably to be published on Tuesday
|
|
|
damian101
|
2024-02-24 01:26:04
|
I also really like `-a deltaq-mode=2` for higher consistency and some extra sharpness at very high quality.
|
|
|
spider-mario
yeah, “how to compress to avif optimally” reeks of arcane knowledge and incantations
|
|
2024-02-24 01:29:42
|
It's nothing compared to the normal inter video coding with aomenc 💀
|
|
|
yoochan
|
2024-02-25 03:30:31
|
Read on https://issues.chromium.org/issues/40168998 : ```IMO Alphabet should sue Reddit users who make completely pointless threads and comments which 1. doesn't help anyone 2. defame Alphabet as monopoly... 3. have very few arguments if any 4. repeated
I hope r/jpegxl should have less threads and comments like these. Don't be fanatics. People try to work despite these unproductive messages.```
|
|
|
VcSaJen
|
2024-02-25 03:58:39
|
"Don't be fanatics" "Alphabet should sue Reddit users" - quite ironic
|
|
|
jonnyawsom3
|
2024-02-26 11:06:38
|
I've been out all weekend, saw the first email about a comment come through and told my friend "This is gonna cause another dozen by tomorrow"
Lo and behold within 2 hours another 5 emails came through so I had to tell everyone to stop yammering again Dx
|
|
|
_wb_
|
2024-02-29 07:54:14
|
https://cloudinary.com/blog/jpeg-xl-and-the-pareto-front
|
|
|
yoochan
|
2024-02-29 08:09:59
|
you did an amazing job speeding up the encoding for this 0.10!
|
|
|
_wb_
|
2024-02-29 08:13:07
|
I don't say it in so many words, but what the results boil down to: if you have libjxl and jpegli, then you basically have the Pareto front covered.
For lossless, jxl makes webp/png (and avif/heic/qoi) redundant.
For lossy, across the (usable) quality spectrum, jxl covers most of the Pareto front, except at superfast encode speeds where jpegli can go faster than libjxl (also for compatibility, jpegli can be useful). WebP, AVIF, HEIC, mozjpeg: they're not on the Pareto front anymore, except possibly AVIF at the very lowest qualities (lower than what social media consider acceptable) and slowest encode speeds (much slower than what any image cdn considers acceptable), where it is competitive with/slightly better than jxl.
|
|
|
yoochan
|
2024-02-29 08:13:50
|
funny how AVIF q50 can slide right against the Pareto frontier at the farthest bottom of the plot for medium-high quality
|
|
2024-02-29 08:14:18
|
I'm sure it's just a matter of tweaking for medium quality (if you deem it interesting) to take back the crown
|
|
|
_wb_
|
|
yoochan
you did an amazing job speeding up the encoding for this 0.10!
|
|
2024-02-29 08:17:39
|
Credits for that should go mostly to <@179701849576833024> and <@987371399867945100>. It took some quite nontrivial effort to make it work, but it was clearly well worth it.
|
|
|
|
veluca
|
2024-02-29 08:18:05
|
the blog post was pretty nice work too 😛
|
|
|
yoochan
|
2024-02-29 08:18:19
|
szabadka is a member of the google zurich team?
|
|
|
|
veluca
|
2024-02-29 08:20:38
|
well he's not in zurich
|
|
2024-02-29 08:20:40
|
but yes
|
|
|
_wb_
|
|
yoochan
funny how AVIF q50 can slide against the pareto frontier at the farthest bottom of the plot for medium high quality
|
|
2024-02-29 08:23:48
|
For lossy, anything slower than e7 hasn't received much attention yet. Most focus went into making e6/e7 good. I'm sure it's possible to do better things at slower settings; currently it's mostly just spending time on computing butteraugli and tweaking adaptive quant weights with it, but there are many other things a slow jxl encoder could try (different block selections, using more patches, detecting splines, estimating noise, etc) that could be more effective. But generally speaking: few use cases actually want such very slow encoders...
|
|
|
yoochan
|
2024-02-29 08:31:16
|
agreed, but many testers don't take the time to have a good methodology for benchmarking lossy codecs yet; that doesn't stop them from writing articles about how avif can reach lower bpp 😄
|
|
|
_wb_
|
2024-02-29 08:54:31
|
I hope this blogpost will help to make it clear that speed matters and you cannot just make a bitrate-distortion plot comparing avif s0 to libjpeg-turbo without huffman optimization (ignoring the 3 orders of magnitude speed gap) and then zoom in on the ridiculously low qualities nobody uses and compute a BD rate that would show that your codec is 50-60% better than jpeg, like was done e.g. here: https://storage.googleapis.com/avif-comparison/images/subset1/rateplots/subset1_avif-trb_t-1_avif-s0_trb-s0_ssimulacra2__bpp-0.1-3.0.png
|
|
|
|
afed
|
|
_wb_
https://cloudinary.com/blog/jpeg-xl-and-the-pareto-front
|
|
2024-02-29 08:57:12
|
is it without the `-e 9` changes from 0.10.1?
|
|
|
yoochan
|
|
_wb_
I hope this blogpost will help to make it clear that speed matters and you cannot just make a bitrate-distortion plot comparing avif s0 to libjpeg-turbo without huffman optimization (ignoring the 3 orders of magnitude speed gap) and then zoom in on the ridiculously low qualities nobody uses and compute a BD rate that would show that your codec is 50-60% better than jpeg, like was done e.g. here: https://storage.googleapis.com/avif-comparison/images/subset1/rateplots/subset1_avif-trb_t-1_avif-s0_trb-s0_ssimulacra2__bpp-0.1-3.0.png
|
|
2024-02-29 09:00:37
|
a fearless review
|
|
|
_wb_
|
|
afed
is it without the `-e 9` changes from 0.10.1?
|
|
2024-02-29 09:03:12
|
I think I included some of the fixes that were made between the 0.10 and 0.10.1 release. Plots like these are a bit of a moving target anyway, any code changes can make the points wiggle a bit. But the overall message of the blogpost will probably remain true.
|
|
|
|
afed
|
2024-02-29 09:07:27
|
because the e9 changes are a pretty big difference in encoding speed with some compression loss, but for a pareto front I think it will be much better
|
|
|
fab
|
2024-02-29 09:11:00
|
The comparison isn't fair because avif q77 2mt spends 45% of the time decoding and has less time to encode
|
|
|
_wb_
|
|
afed
because the e9 changes are a pretty big difference in encoding speed with some compression loss, but for a pareto front I think it will be much better
|
|
2024-02-29 09:15:14
|
it's not just at e9 that things change, it's at all effort settings
|
|
2024-02-29 09:15:18
|
|
|
|
|
afed
|
|
_wb_
I think I included some of the fixes that were made between the 0.10 and 0.10.1 release. Plots like these are a bit of a moving target anyway, any code changes can make the points wiggle a bit. But the overall message of the blogpost will probably remain true.
|
|
2024-02-29 09:16:20
|
also it would be useful to mention memory consumption for other encoders, without charts, but at least for some similar settings for a single image
|
|
|
_wb_
|
2024-02-29 09:16:20
|
the new e8 is about as fast as the old e5 (at least on these images, on this machine), the new e7 as fast as the old e4
|
|
2024-02-29 09:17:34
|
good point. I did look at those numbers but I forgot to mention them in the blogpost
|
|
2024-02-29 09:18:13
|
basically it went from "quite a bit worse than other tools" to "quite a bit better"
|
|
2024-02-29 09:18:21
|
in terms of memory
|
|
|
|
afed
|
|
_wb_
it's not just at e9 that things change, it's at all effort settings
|
|
2024-02-29 09:18:30
|
i mean this, these changes were after 10.0
https://github.com/libjxl/libjxl/pull/3337
|
|
2024-02-29 09:18:33
|
https://github.com/libjxl/libjxl/issues/3323
|
|
2024-02-29 09:18:47
|
|
|
|
_wb_
|
|
afed
i mean this, these changes were after 10.0
https://github.com/libjxl/libjxl/pull/3337
|
|
2024-02-29 09:19:30
|
yeah I did include those fixes in the plots already, otherwise e9 would look slower and for the photographic images actually worse in bpp
|
|
|
|
afed
|
2024-02-29 09:20:13
|
so it's basically 0.10.1
|
|
|
_wb_
|
2024-02-29 09:21:58
|
yeah basically, or something in between 0.10.0 and 0.10.1 — I tested using my local dev version
|
|
|
|
afed
|
2024-02-29 11:08:03
|
a mode with the fastest compression, like jpeg-turbo, would also be useful
and also a slower but improved visually-lossless mode, like using slower lossless modes and less filtering/aq, which is not needed for high bpp or is even worse because it causes more distortions
jxl can still lose to some simpler formats at very high bpp and may not be quite visually lossless when compared closely
|
|
|
lonjil
|
|
_wb_
https://cloudinary.com/blog/jpeg-xl-and-the-pareto-front
|
|
2024-02-29 11:08:45
|
In the first animated comparison, I think JPEG looks the best
|
|
|
_wb_
https://cloudinary.com/blog/jpeg-xl-and-the-pareto-front
|
|
2024-02-29 11:14:48
|
and in the second animated comparison, the JPEG XL column has more artefacting than JPEG or AVIF. In the high quality row, JPEG and AVIF are almost visually lossless, but JXL isn't. And in the medium quality row, JXL has a lot of skin smoothing compared to AVIF. This seems odd to me.
|
|
|
_wb_
|
2024-02-29 11:21:09
|
Things are different in different parts of the image. To me, in the medium quality row, avif looks best when you look at the forehead and worst when you look at the region between the eyes.
|
|
2024-02-29 11:26:24
|
Also the regions below the eyes are worse in avif than in the rest (still talking about the medium quality row). Overall, the avif gives me a rather inconsistent experience where some macroblocks get smoothed a lot and others look OK, while the jxl has a more uniform behavior of smoothing a bit everywhere. And in jpeg the macroblocking without any deblocking filters is quite bad around the nose.
|
|
2024-02-29 11:29:44
|
They're all at ssimulacra2=60 overall, but they get there in quite different ways, with avif and jpeg being sharper in the high-contrast parts but worse in the low-contrast parts, and jxl spreading the distortion more evenly over all parts.
|
|
|
lonjil
|
|
fab
|
2024-02-29 11:42:07
|
Seeing from the table, it seems like JPEG XL after e7 is better, but avif is an 8-year-old codec
|
|
|
lonjil
|
2024-02-29 11:44:36
|
I still claim that for "high quality", across the entire image, AVIF is nearly visually lossless, while JXL is not. Even zoomed out, I can notice the change in the JXL image, while the changes in the AVIF image become impossible to see.
|
|
|
fab
|
2024-02-29 11:45:22
|
Show some image even d1.2
|
|
2024-02-29 11:45:40
|
Max 3000x2000
|
|
2024-02-29 11:46:07
|
Do you have some files in .jxl
|
|
|
|
afed
|
|
_wb_
https://cloudinary.com/blog/jpeg-xl-and-the-pareto-front
|
|
2024-02-29 11:58:54
|
might also be useful to make another speed graph with real scaling, because this graph doesn't really show the actual encoding speed gap and how much faster some modes might be
|
|
2024-02-29 12:00:45
|
or at least something more simplified, like this
https://canary.discord.com/channels/794206087879852103/803574970180829194/1044547734239203329
|
|
|
lonjil
|
|
_wb_
https://cloudinary.com/blog/jpeg-xl-and-the-pareto-front
|
|
2024-02-29 12:05:55
|
> The Pareto front consists of mostly JPEG XL but at the fastests speeds again also includes JPEG.
hm, JXL is more or less a superset of JPEG in terms of compression features, right? I wonder if it would be worth trying to make a JXL encoder that covers that speed range of jpegli and libjpeg-turbo.
|
|
|
|
afed
|
|
afed
a mode with the fastest compression, like jpeg-turbo, would also be useful
and also a slower but improved visually-lossless mode, like using slower lossless modes and less filtering/aq, which is not needed for high bpp or is even worse because it causes more distortions
jxl can still lose to some simpler formats at very high bpp and may not be quite visually lossless when compared closely
|
|
2024-02-29 12:12:01
|
yeah, though it's a somewhat niche use case because even higher speeds aren't needed that often, though jpeg-turbo is still pretty widely used
|
|
|
jonnyawsom3
|
2024-02-29 02:03:37
|
I just read the blog thinking "Huh, wonder why Jon didn't post it on Discord..."
This is what I get for sleeping in for once xD
|
|
|
diskorduser
|
2024-02-29 02:16:20
|
I read it as pareto font.
|
|
|
_wb_
|
|
lonjil
> The Pareto front consists of mostly JPEG XL but at the fastests speeds again also includes JPEG.
hm, JXL is more or less a superset of JPEG in terms of compression features, right? I wonder if it would be worth trying to make a JXL encoder that covers that speed range of jpegli and libjpeg-turbo.
|
|
2024-02-29 02:27:48
|
We could at some point add an e2 and e1 for lossy that uses huffman instead of ANS and does basically the same thing as jpegli but in jxl syntax. Hasn't really been a priority because it wouldn't really beat jpeg, but yes, it should be possible to get those jpegli speeds in jxl too.
|
|
|
damian101
|
2024-02-29 02:42:09
|
Using a perceptually uniform colorspace for lossy compression is such a no-brainer... no issues with compatibility when simply done internally, and extremely fast, as well as comparatively extremely easy to implement in hardware, and that with a huge boost in efficiency and consistency.
|
|
|
|
veluca
|
|
_wb_
We could at some point add an e2 and e1 for lossy that uses huffman instead of ANS and does basically the same thing as jpegli but in jxl syntax. Hasn't really been a priority because it wouldn't really beat jpeg, but yes, it should be possible to get those jpegli speeds in jxl too.
|
|
2024-02-29 04:17:14
|
wouldn't it? 😛
|
|
|
_wb_
|
2024-02-29 04:18:27
|
maybe by a little, the numzero entropy coding should be a bit better than what jpeg does
|
|
2024-02-29 04:22:05
|
and the DC prediction would also be better
|
|
|
|
veluca
|
2024-02-29 04:22:40
|
it can also be faster, you can pick a custom coefficient order and don't need to escape 0xFF bytes
|
|
|
lonjil
|
2024-02-29 04:22:48
|
Maybe being able to use XYB would make a small difference? I know you can do it in JPEG with an ICC profile, but many things do not support that, so it's an unreliable solution.
|
|
|
_wb_
|
2024-02-29 04:25:15
|
yes, XYB should help
|
|
|
jonnyawsom3
|
|
_wb_
We could at some point add an e2 and e1 for lossy that uses huffman instead of ANS and does basically the same thing as jpegli but in jxl syntax. Hasn't really been a priority because it wouldn't really beat jpeg, but yes, it should be possible to get those jpegli speeds in jxl too.
|
|
2024-02-29 05:20:25
|
Sounds like the middle option may become reality
https://res.cloudinary.com/cloudinary-marketing/image/upload/w_700,c_fill,f_auto,q_auto,dpr_2.0/Web_Assets/blog/Encoder_diagram.png
|
|
2024-02-29 05:20:36
|
Original JPEG-compatible JXL codestreams
|
|
|
|
afed
|
2024-02-29 05:20:52
|
is libjxl-tiny something like e3 at least?
or is it only useful for hw compatibility and there's nothing interesting to reuse for fast modes?
|
|
|
jonnyawsom3
|
2024-02-29 05:24:17
|
libjxl-tiny is what the e1 lossless mode is based on, if I recall. Not sure what it does for lossy if at all
|
|
|
_wb_
|
2024-02-29 05:27:45
|
no that's fjxl
|
|
2024-02-29 05:28:55
|
libjxl-tiny does lossy in a simple, limited way. It is being used as a model for hw encoders. It does both more and less than e3: it does use more block types than just 8x8, but it doesn't use ANS.
|
|
|
lonjil
|
2024-02-29 05:31:51
|
Speaking of HW encoders, how are those going? 😄
|
|
|
_wb_
|
2024-02-29 05:34:24
|
I don't really know. In summer I'll be in Japan to talk with the hardware devs face-to-face, I think I'll have a better idea of timelines then. Also most of what I do know is confidential so I cannot say much about it anyway 🙂
|
|
|
lonjil
|
2024-02-29 05:46:04
|
fair!
|
|
|
damian101
|
|
Sounds like the middle option may become reality
https://res.cloudinary.com/cloudinary-marketing/image/upload/w_700,c_fill,f_auto,q_auto,dpr_2.0/Web_Assets/blog/Encoder_diagram.png
|
|
2024-02-29 06:42:31
|
what's the difference between the rightmost and second-from-right options supposed to be?
|
|
|
_wb_
|
2024-02-29 07:03:07
|
The second one from the right would be basically a jxl encoder that uses only dct8x8 (and some other constraints) so the result can be directly transcoded as a jpeg, but it could perhaps still use things like gaborish or EPF too, so the jxl would look best but the derived jpeg would be ok.
|
|
|
jonnyawsom3
|
|
_wb_
no that's fjxl
|
|
2024-02-29 07:03:45
|
Ahh right, I thought fjxl was the prior name of libjxl-tiny, my bad
|
|
|
lonjil
|
|
_wb_
https://cloudinary.com/blog/jpeg-xl-and-the-pareto-front
|
|
2024-02-29 08:13:18
|
What about JXL gives it such a large lead over everything else with the large images?
|
|
|
damian101
|
|
lonjil
What about JXL gives it such a large lead over everything else with the large images?
|
|
2024-02-29 08:17:17
|
I think size is not very relevant here as the metric used is size-agnostic.
The large images are probably just much more likely to be photographs, especially nature photographs, where JXL has a significant advantage over AVIF.
|
|
|
lonjil
|
2024-02-29 08:20:09
|
pretty sure the daala test set is mostly photos
|
|
|
jonnyawsom3
|
2024-02-29 08:21:07
|
I'd assume it's because JXL can use larger block sizes while also being designed with block encoding in mind. For AVIF multithreading was an afterthought with sacrifices made
|
|
|
damian101
|
2024-02-29 08:28:27
|
Yes, threading is extremely important when normalizing to encoding speed...
|
|
2024-02-29 08:29:16
|
but for parallel image encoding or systems with few CPU threads it's also not significant
|
|
2024-02-29 08:29:49
|
also threading increases resource consumption obviously
|
|
2024-02-29 08:30:08
|
so there should always be a benchmark with no threading as well imo
|
|
2024-02-29 08:30:34
|
also to be able to roughly interpolate for systems with less available threads
|
|
|
lonjil
|
|
I'd assume it's because JXL can use larger block sizes while also being designed with block encoding in mind. For AVIF multithreading was an afterthought with sacrifices made
|
|
2024-02-29 08:31:21
|
there's almost no difference for AVIF w.r.t. threading in this case:
|
|
|
|
afed
|
2024-02-29 08:33:04
|
mt is just using tiles, it's all multithreaded
it just means that tiles don't help speed in this case
|
|
|
damian101
|
|
lonjil
there's almost no difference for AVIF w.r.t. threading in this case:
|
|
2024-02-29 08:33:40
|
but threading doesn't change here?
|
|
2024-02-29 08:33:50
|
always 8 threads...
|
|
|
lonjil
|
2024-02-29 08:34:25
|
> For AVIF, the darker points indicate a faster but slightly less dense tiled encoder setting (using --tilerowslog2 2 --tilecolslog2 2), which is faster because it can make better use of multi-threading, while the lighter points indicate the default non-tiled setting.
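The tiled setting quoted there corresponds to an avifenc invocation along these lines (other options omitted; file names are placeholders):
```
avifenc --tilerowslog2 2 --tilecolslog2 2 -j 8 input.png tiled.avif
```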
|
|
|
damian101
|
2024-02-29 08:35:43
|
that's 4*4 tiles...
|
|
|
lonjil
|
2024-02-29 08:36:31
|
actually, I just realized, 11 mpix is bigger than what avif supports without doing container-level tiling, right?
|
|
|
damian101
|
|
lonjil
actually, I just realized, 11 mpix is bigger than what avif supports without doing container-level tiling, right?
|
|
2024-02-29 08:37:50
|
nope, that's what I wondered, too, and I'm quite sure that it fits within one tile
|
|
|
|
afed
|
2024-02-29 08:38:14
|
4096x2304 max res by specs
|
|
|
damian101
|
2024-02-29 08:38:22
|
ah yes
|
|
2024-02-29 08:38:25
|
two tiles then
|
|
2024-02-29 08:40:43
|
that is a measly threading improvement for tiling then...
|
|
2024-02-29 08:41:03
|
pretty sure tiling helps decoding more than encoding then
|
|
2024-02-29 08:41:23
|
but dav1d is also extremely well optimized
|
|
|
|
afed
|
2024-02-29 08:45:17
|
yeah, pure asm with C only as a wrapper, basically there is not much to optimize for typical use, just for other architectures and higher bit depths
|
|
|
_wb_
|
2024-02-29 09:38:09
|
I think it does 4x4 tiles so should be enough for 8 threads
|
|
2024-02-29 09:38:27
|
With both log2 params set to 2
|
|
2024-02-29 09:40:16
|
I could make plots for single-threaded too, I picked 8 threads for the "saving image from an image editor" scenario since most machines have 8 cores, but single thread is indeed also important
|
|
|
damian101
|
|
_wb_
I think it does 4x4 tiles so should be enough for 8 threads
|
|
2024-02-29 10:59:58
|
threading by far does not scale linearly with number of tiles for some reason...
|
|
2024-02-29 11:00:27
|
even with restoration, cdef and everything disabled, which it is for allintra anyway
|
|
2024-02-29 11:00:54
|
Actually, that was for decoding.
|
|
2024-02-29 11:01:12
|
But for encoding, the threading improvement seems to be worse
|
|
2024-02-29 11:02:06
|
Before I did testing I always assumed tiling would allow for almost linear threading improvement...
|
|
|
|
afed
|
2024-02-29 11:13:58
|
<https://encode.su/threads/3397-JPEG-XL-vs-AVIF/page11>
|
|
2024-02-29 11:14:02
|
|
|
2024-02-29 11:18:32
|
so that is why some option to disable streaming mode is useful for people who want the old behavior
not true about screenshots or just a few picked samples
for lossless there are always some cases where some codec or compressor will be better, but on a wider number of samples the results are quite different
|
|
|
jonnyawsom3
|
2024-02-29 11:31:03
|
`-e 10` disables streaming for that exact use case
|
|
2024-02-29 11:31:22
|
But the info was only updated in 0.10.1
|
|
|
|
afed
|
2024-02-29 11:39:09
|
yeah, but it's only for the slowest effort
sometimes, especially for parallel encoding in single-threaded mode, it's useful for faster efforts
|
|
|
monad
|
|
afed
so that is why some option to disable streaming mode is useful for people who want the old behavior
not true about screenshots or just a few picked samples
for lossless there are always some cases where some codec or compressor will be better, but on a wider number of samples the results are quite different
|
|
2024-03-01 09:34:42
|
if 'screenshots' means mostly UI/text content (no significant photo/film or 3D scene), it seems true
|
|
2024-03-01 10:42:02
|
|
|
|
_wb_
|
2024-03-01 10:54:19
|
currently #1 on hackernews: https://news.ycombinator.com/item?id=39559281
|
|
|
|
afed
|
|
monad
if 'screenshots' means mostly UI/text content (no significant photo/film or 3D scene), it seems true
|
|
2024-03-01 10:55:37
|
only if it's not a very complex ui and there are a lot of single-colored areas
<@794205442175402004><@179701849576833024> I thought it would be worth adding some heuristics, or at least trying `-I 1 -P 0 -g 3` for `-e 10`, so a larger group size just for this case and the zero predictor; on average, at least on my samples, it gave better results, `-I 0` usually worse but sometimes better
at least it will be better for a lot of such cases
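Spelled out as a full command, that suggestion would be roughly (file names are placeholders):
```
cjxl -e 10 -I 1 -P 0 -g 3 screenshot.png screenshot.jxl
```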
|
|
|
_wb_
|
2024-03-01 11:00:05
|
We should really try to find a way to combine the power of MA trees and lz77. We're currently basically doing one or the other but not really both, since it seemed too hard at the time.
|
|
|
jonnyawsom3
|
2024-03-01 11:06:23
|
When there's so many toys to play with it's hard to decide what to pick up first
|
|
|
lonjil
|
|
_wb_
currently #1 on hackernews: https://news.ycombinator.com/item?id=39559281
|
|
2024-03-01 11:11:59
|
literally every time there are people making stuff up about patents
|
|
|
|
afed
|
|
_wb_
currently #1 on hackernews: https://news.ycombinator.com/item?id=39559281
|
|
2024-03-01 11:22:05
|
still, notes on memory consumption for other codecs, especially avif (lossless <:kekw:808717074305122316> ) would be entertaining
|
|
2024-03-01 11:24:45
|
though webp has `-low_memory`, but with slower encoding and perhaps some other downsides
|
|
|
|
Squid Baron
|
2024-03-01 11:31:34
|
On the internet, if you write a long enough confident-sounding comment, you're right
|
|
2024-03-01 11:31:51
|
it doesn't matter if you're pulling everything out of your ass
|
|
2024-03-01 11:31:58
|
that's why that comment about patents on HN is at the top
|
|
2024-03-01 11:32:35
|
the funniest part is the author didn't even try to soften his claim with "I think", "maybe"
|
|
2024-03-01 11:32:40
|
no, he just knows that the lawyers are involved
|
|
|
|
afed
|
2024-03-01 11:34:49
|
yeah, this patent is mentioned every time in every public jxl discussion
|
|
|
|
paperboyo
|
|
_wb_
https://cloudinary.com/blog/jpeg-xl-and-the-pareto-front
|
|
2024-03-01 11:53:36
|
Very, very cool article and useful too! Thank you. One would hope the Chrome team will respond 😉
I do wonder about encoder consistency: from my (extremely limited!) experience I like JXL because one-quality-for-everything will produce images of **less** varying **filesize** (not quality). Which I prefer, because with AVIF (and WebP) some images, eg. those with a lot of noise (yeah, the same ones that go magenta), will balloon 8x in filesize (unlike in JXL and mozJPEG where filesizes are more consistent). And the article suggests JXL is more consistent at keeping the same quality (so I would expect filesize to balloon?). But my tests are very, very limited in breadth, so I probably got something wrong…
|
|
|
_wb_
|
2024-03-01 11:59:59
|
libjxl is consistent in quality, not size. But it may suffer less from the "wildly varying file sizes" phenomenon than other encoders, since it does a better job at modeling phenomena like contrast masking so it can avoid spending as many bits on very "busy" high-contrast regions as some of the simpler encoders do, while it also avoids spending too little bits on subtle textures that will get completely smoothed out by simpler encoders (leading to very small files in those cases, but also a less consistent visual quality).
|
|
|
yoochan
|
|
lonjil
literally every time there are people making stuff up about patents
|
|
2024-03-01 12:05:39
|
bullshit is easier to produce than documented knowledge
|
|
|
sklwmp
|
|
lonjil
literally every time there are people making stuff up about patents
|
|
2024-03-01 12:14:03
|
this is so absurdly annoying, even more so that people still believe this 🤦
|
|
2024-03-01 12:15:55
|
also, is this you?
> > you want Chrome to re-add jxl behind a feature flag? Doesn't seem very useful.
>
> Chrome has a neat feature where some flags can be enabled by websites, so that websites can choose to cooperate in testing. They never did this for JXL, but if they re-added JXL behind a flag, they could do so but with such testing enabled. Then they could get real data from websites actually using it, without committing to supporting it if it isn't useful.
i never knew this! super interesting feature ngl
|
|
|
lonjil
|
2024-03-01 12:16:08
|
that is me
|
|
2024-03-01 12:16:39
|
when they removed JXL, some people asked why they didn't try to do such an opt-in trial with JXL, but I don't think they ever responded
|
|
|
_wb_
|
2024-03-01 12:34:00
|
With Cloudinary we really wanted to do an origin trial, but somehow we never were allowed to do it. It would have been a good way to get real data, but somehow I think getting real data was not a priority for the decision makers at Chrome...
|
|
|
|
afed
|
|
_wb_
https://cloudinary.com/blog/jpeg-xl-and-the-pareto-front
|
|
2024-03-01 02:02:43
|
jpegli xyb I think is also worth adding, as promotion for the encoder and to show that old jpeg can do something even more
even if it's not supported everywhere, at least it works in browsers
or does xyb not give much at higher qualities in this test?
|
|
|
_wb_
|
2024-03-01 02:22:42
|
I tested default jpegli, which is not using xyb
|
|
2024-03-01 02:22:58
|
I also used default jpeg decoding, which is a bit worse than jpegli decoding
|
|
2024-03-01 02:23:54
|
with xyb and with jpegli decoding, it gets even better but I haven't actually tested that recently.
|
|
|
Olav
|
2024-03-01 02:48:34
|
https://www.phoronix.com/news/Ubuntu-24.04-No-JPEG-XL
|
|
|
VcSaJen
|
2024-03-01 02:59:58
|
But there's a chance for 26.04
|
|
|
HCrikki
|
2024-03-01 03:01:06
|
it'd be moot if popular apps in snapcraft/flathub integrated jxl directly
|
|
2024-03-01 03:01:33
|
any adoption by those goes completely untracked too btw
|
|
|
VcSaJen
|
2024-03-01 03:02:32
|
That would be app support, not OS support. Different things.
|
|
|
HCrikki
|
2024-03-01 03:05:10
|
ubuntu made the minimal install the new default. afaik that's why libjxl is not part of the default build but still is in the full one
|
|
|
yoochan
|
2024-03-01 03:06:42
|
soon libpng will not need to be installed by default anymore! 😄
|
|
|
_wb_
|
2024-03-01 03:43:42
|
as long as it's easy to install or will be installed anyway as a dependency as soon as the user installs any image-related software, it doesn't really matter imo
|
|
|
Traneptora
|
2024-03-01 04:51:17
|
they might have optional deps though
|
|
2024-03-01 04:51:23
|
like installing gimp doesn't automatically install libjxl
|
|
|
_wb_
|
2024-03-01 05:06:40
|
oh... yeah at some point I would assume tools like Gimp will want to make it a non-optional dependency. We're probably not at that point yet, but I hope at some point "Gimp cannot load/save jxl" will sound as silly as "Gimp cannot load/save jpeg or png".
|
|
|
VcSaJen
|
2024-03-01 05:07:17
|
Default OS support is important to eliminate "annoyance factor" of a new format. See: users hating webp when they download images and can't do anything with them.
|
|
|
HCrikki
|
2024-03-01 05:09:50
|
in a gnome-backgrounds 46 discussion I recall someone mentioning the possibility of making something have libjxl as a required dependency
|
|
|
Traneptora
|
2024-03-01 05:17:39
|
you also have the issue of websites, not just software
|
|
2024-03-01 05:17:45
|
many websites limit uploads to PNG, JPG, and GIF
|
|
2024-03-01 05:18:06
|
even if their backend is magick or vips or whatever that can handle most images, the frontend rejects them
|
|
|
HCrikki
|
2024-03-01 05:38:39
|
web services that deliver mainly or exclusively to their own apps (mobile or desktop) can switch just fine. browser isn't the only delivery vector
|
|
|
Traneptora
|
2024-03-01 06:18:49
|
I'm not talking about browser limitations here
|
|
2024-03-01 06:19:37
|
I'm talking about websites themselves not accepting any image upload beyond those three formats
|
|
2024-03-01 06:19:54
|
even if their backend could support it
|
|
2024-03-01 06:20:59
|
for example, gfycat (while it existed) actually supported matroska files but you had to pick the "all files" option in the upload menu
|
|
2024-03-01 06:21:31
|
the JS on the website very easily could have rejected it
|
|
2024-03-01 06:21:44
|
even though the backend supported it fine
|
|
|
HCrikki
|
2024-03-01 08:19:11
|
on the hn coverage, funny how some diminish adobe's weight. acrobat reader alone has almost a billion installs on *android*, 2+ billion on windows, and is now the pdf reader shipped in Edge (was pushed to all consumer installs last year, soon to all enterprise ones unless they opt out).
if/when pdf readers (including browsers' integrated readers, hopefully) adopt jxl, the installed userbase should be equal to or even higher than browsers' (the avg user would have a browser *and* a separate pdf reading app)
|
|
|
|
afed
|
2024-03-02 01:56:36
|
not FLIP, but <:KekDog:805390049033191445>
https://ffmpeg.org/pipermail/ffmpeg-devel/2024-March/322473.html
|
|
|
sklwmp
|
2024-03-02 03:49:49
|
do they mean FLIF?
|
|
|
_wb_
|
2024-03-02 07:00:33
|
Probably
|
|
|
spider-mario
|
2024-03-02 08:03:30
|
flic is slang for policeman in French
|
|
2024-03-02 08:04:18
|
(it's basically “cop”)
|
|
2024-03-02 08:04:54
|
https://fr.wikipedia.org/wiki/22,_v%27l%C3%A0_les_flics_!
|
|
|
|
Posi832
|
2024-03-02 09:08:29
|
Maybe they misremembered it as "free lossless image codec"
|
|
|
|
afed
|
2024-03-02 09:50:05
|
yeah, but at least FLIC is also an image codec <:KekDog:805390049033191445>
```The compression technology behind JPEG XL, based on Google's PIK [1] and Cloudinary's FLIP [2], officially standardized the file format and coding system in October 2021 and March 2022, respectively.```
https://canary.discord.com/channels/794206087879852103/803574970180829194/1197493901997658132
|
|
|
_wb_
|
|
spider-mario
flic is slang for policeman in French
|
|
2024-03-02 10:54:46
|
In Belgium we use the word in both French and Dutch. "Flik", plural "flikken".
|
|
2024-03-02 10:55:29
|
https://en.wikipedia.org/wiki/Flikken
|
|
2024-03-02 10:56:17
|
That was a popular TV series here, about cops, obviously
|
|
|
fab
|
2024-03-02 12:27:32
|
Apparently full av1 support was restored
|
|
2024-03-02 01:01:33
|
|
|
2024-03-02 01:01:34
|
Now is full, full support
|
|
2024-03-02 01:01:38
|
But not global
|
|
2024-03-02 01:01:50
|
For some territories outside usa and china
|
|
2024-03-02 01:03:07
|
|
|
2024-03-02 01:03:11
|
Even fb shows some av1
|
|
2024-03-02 01:03:19
|
Interesting
|
|
2024-03-02 01:03:41
|
Won't share links, it's useless and not useful in the server
|
|
2024-03-02 01:03:52
|
And purissimo of rules
|
|
|
username
|
2024-03-02 01:04:55
|
<@416586441058025472> this channel is only for coverage about JPEG XL. <#805176455658733570> is probably the channel this should go in
|
|
|
fab
|
2024-03-02 01:14:09
|
Yokes
|
|
|
MSLP
|
2024-03-05 04:41:37
|
JXL is on the IANA list!
https://www.iana.org/assignments/media-types/media-types.xhtml#image
|
|
2024-03-05 04:42:01
|
https://www.iana.org/assignments/media-types/image/jxl
|
|
|
VcSaJen
|
2024-03-06 12:31:21
|
Someone should update the Wikipedia page
|
|
|
spider-mario
|
2024-03-06 08:55:56
|
done, thanks
|
|
|
VcSaJen
|
2024-03-06 12:14:33
|
Any holdouts among servers, etc? Or is the MIME type added everywhere already?
|
|
|
_wb_
|
2024-03-06 01:20:48
|
afaik most mimedb type things should already have it (though servers can be slow in updating), but now it's on the official list, anything remaining should get it automatically sooner or later
|
|
|
MSLP
|
2024-03-06 05:22:40
|
Having a mime type now officially assigned, it would be good to have an entry in the default Nginx and Apache mime types files - they aren't updated automatically AFAIK.
https://github.com/nginx/nginx/blob/master/conf/mime.types#L18
https://github.com/apache/httpd/blob/trunk/docs/conf/mime.types#L1541
They aren't merging github pull requests tho, those are read-only repo mirrors
And possibly (but not necessarily) entries in the Apache mime-magic file, for file type autodetection
https://github.com/apache/httpd/blob/trunk/docs/conf/magic#L251
which could be eg:
```
0 beshort 0xff0a image/jxl
0 string \x00\x00\x00\x0cJXL\x20\x0d\x0a\x87\x0a image/jxl
```
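Presumably the corresponding mime.types entries would just follow each file's existing format, something like:
```
# nginx conf/mime.types (inside the types { } block)
image/jxl    jxl;

# Apache docs/conf/mime.types
image/jxl    jxl
```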
|
|
|
190n
|
2024-03-07 07:46:25
|
https://blog.jimmac.eu/2024/gnome46-wallpapers/
|
|
2024-03-07 07:46:43
|
using JPEG XL noise to mitigate banding
|
|
|
lonjil
|
2024-03-07 07:47:49
|
why not make it higher bit depth and then dither?
|
|
|
190n
|
2024-03-07 07:49:04
|
maybe there was already banding in the source
|
|
|
username
|
2024-03-07 07:59:35
|
seems like some of them might be in a higher bit depth? unsure https://gitlab.gnome.org/GNOME/gnome-backgrounds/-/commit/ead74c78fee4baa27f90dfa1c7433074c00d0bd1
|
|
|
|
veluca
|
2024-03-07 08:15:12
|
> the only available path for now is 16bpp PNG > JXL, but still better than 8bpp.
I'd love to see the image on which 16bpp are not enough xD
|
|
|
190n
|
2024-03-07 08:15:55
|
maybe they want floats
|
|
|
lonjil
|
|
lonjil
why not make it higher bit depth and then dither?
|
|
2024-03-07 08:17:45
|
actually, I suppose noise is added before any color space conversions and bit depth reductions that might occur? In which case it's basically random dither already.
|
|
2024-03-07 08:18:00
|
(for any files that are high bit depth)
|
|
|
|
veluca
|
2024-03-07 08:19:21
|
noise is going to behave like high quality dithering, yeah
|
|
2024-03-07 08:19:42
|
now, could you have achieved the same with lower computational effort? probably
|
|
2024-03-07 08:19:44
|
but oh well
|
|
|
sklwmp
|
2024-03-09 07:41:01
|
https://www.mark-pekala.dev/posts/jpeg-xl
found via AI-generated tweets, but hey, the article itself isn't AI
|
|
|
username
|
|
sklwmp
https://www.mark-pekala.dev/posts/jpeg-xl
found via AI-generated tweets, but hey, the article itself isn't AI
|
|
2024-03-09 07:42:00
|
https://news.ycombinator.com/item?id=39600158
|
|
|
sklwmp
|
2024-03-09 07:42:25
|
ah, that's why I felt this was familiar, thanks!
|
|
|
yoochan
|
2024-03-09 07:46:32
|
I thought it meant extra lovely
|
|
|
Demiurge
|
|
username
seems like some of them might be in a higher bit depth? unsure https://gitlab.gnome.org/GNOME/gnome-backgrounds/-/commit/ead74c78fee4baa27f90dfa1c7433074c00d0bd1
|
|
2024-03-09 08:34:31
|
Weird. The photon noise feature isn't supposed to be used to reduce banding. It's supposed to be used to restore grain in the decoding process after a grain-removal filter in the encoding process.
|
|
2024-03-09 08:37:40
|
If you see banding, then you have a problem with either the image data itself, or how it's being decoded and presented.
|
|
|
Traneptora
|
2024-03-09 08:37:59
|
I have wanted to backronym it as eXtended Life for a while
|
|
2024-03-09 08:38:04
|
it doesn't stand for that, but I wish it did
|
|
|
Demiurge
|
2024-03-09 08:38:36
|
Extended length :)
|
|
2024-03-09 08:39:44
|
Extra lasagna
|
|
|
username
|
|
Demiurge
Weird. The photon noise feature isn't supposed to be used to reduce banding. It's supposed to be used to restore grain in the decoding process after grain-removal filter in the encoding process.
|
|
2024-03-09 08:41:00
|
while there are more proper and computationally cheaper ways to reduce/remove banding, photon noise does work pretty well as a "hacky" solution.
https://saklistudio.com/whyjxl/
|
|
|
Traneptora
|
|
username
while there are more proper and computational cheaper ways to reduce/remove banding, photon noise does work pretty well as a "hacky" solution.
https://saklistudio.com/whyjxl/
|
|
2024-03-09 08:41:44
|
big difference between using photon noise and using debanding/dithering before coding is that dithering inflates the file size and photon noise does not
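(For reference, applying it is a one-liner; this assumes the current cjxl spelling of the option, where the ISO number just scales the noise strength:)
```
# sketch: add synthetic photon noise at encode time; it is signalled in the header,
# not baked into the pixels, so it costs roughly nothing in file size
cjxl --photon_noise=ISO3200 -d 1 input.png output.jxl
```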
|
|
|
Demiurge
|
2024-03-09 08:42:31
|
Yeah, it works as a weird solution, but if you notice banding then there is a deeper inherent problem that needs to be solved.
|
|
|
Traneptora
|
2024-03-09 08:42:42
|
sometimes you don't have access to the source
|
|
2024-03-09 08:42:48
|
so you can't solve that problem
|
|
|
jonnyawsom3
|
|
Traneptora
I have wanted to backronym it as eXtended Life for a while
|
|
2024-03-09 08:43:01
|
Could've sworn that is the actual meaning
|
|
|
username
|
|
Traneptora
big difference between using photon noise and using debanding/dithering before coding is that dithering inflates the file size and photon noise does not
|
|
2024-03-09 08:43:02
|
I meant more proper as in something like adding a step after decoding and not doing anything with encoding
|
|
|
Demiurge
|
2024-03-09 08:43:02
|
The noise just helps to mask the problem which is still there and still a problem
|
|
|
Traneptora
|
2024-03-09 08:43:19
|
like, if someone sends you a photograph that is banded cause it's 8-bit, and you don't have access to the sensor data, there's not much you can do about that
|
|
|
jonnyawsom3
|
2024-03-09 08:44:04
|
Plus side is since photon noise is metadata, the original file is still preserved too
|
|
|
Traneptora
|
2024-03-09 08:44:19
|
yea, a decoder could choose not to render the noise
|
|
|
Demiurge
|
2024-03-09 08:45:13
|
JXL is supposed to be extremely efficient at compressing smooth gradients
|
|
|
Traneptora
|
2024-03-09 08:45:37
|
it is
|
|
|
Demiurge
|
2024-03-09 08:45:43
|
If there's banding then it's probably a problem with the decoder. Only recently was dithering added to libjxl decoder
|
|
|
Traneptora
|
2024-03-09 08:45:58
|
well, no, if there's banding in the source that's not a problem with the decoder
|
|
|
Demiurge
|
2024-03-09 08:46:05
|
Before that libjxl decoder produced output with banding. That's a bug in the decoder
|
|
|
Traneptora
|
2024-03-09 08:46:15
|
not if the source had banding
|
|
|
Demiurge
|
2024-03-09 08:46:28
|
Obviously, but it produced banding where there was none before
|
|
|
Traneptora
|
2024-03-09 08:46:40
|
well that's just a bug, that's not something you work around, that's something you just fix
|
|
|
Demiurge
|
|
Traneptora
|
2024-03-09 08:46:54
|
we're discussing solutions when the source has banding
|
|
|
Demiurge
|
2024-03-09 08:47:04
|
I don't think we are, no
|
|
2024-03-09 08:47:28
|
Gnome wallpapers are computer generated from non-raster source
|
|
2024-03-09 08:47:43
|
The source should not have banding
|
|
2024-03-09 08:48:01
|
They use blender and inkscape
|
|
2024-03-09 08:48:36
|
If there is banding then there is a bug somewhere
|
|
|
lonjil
|
|
Demiurge
If you see banding, then you have a problem with either the image data itself, or how it's being decoded and presented.
|
|
2024-03-09 08:50:06
|
it's actually a very common technique among artists and photographers to add noise to prevent banding. Except they add it before compression 😄
|
|
|
username
|
2024-03-09 08:50:58
|
ok, are we talking about 8bpc images being displayed on 8bpc displays or higher bit depth images like 16bpc being displayed on 8bpc displays?
|
|
|
lonjil
|
|
Demiurge
The source should not have banding
|
|
2024-03-09 08:51:34
|
huh? The images have gradients that are as smooth as possible given the bit depth. Banding is simply inevitable in some cases depending on the display bit depth and brightness. The only way to prevent it is to make the gradient less smooth (with dither), but random noise works well enough.
|
|
|
Demiurge
|
2024-03-09 08:51:49
|
But if you want to hide banding from an existing source image that already has banding, then noise can help to hide that, or better yet just reduce the bit depth by 1.5 bits maybe during decoding, or start with a different source image that isn't banded :)
|
|
|
lonjil
|
2024-03-09 08:52:09
|
we aren't talking about massive bands here
|
|
2024-03-09 08:52:45
|
just that a 1/255 difference in sRGB can be bigger than the Just Noticeable Difference
|
|
|
Demiurge
|
|
lonjil
huh? The images have gradiants that are as smooth as possible given the bit depth. Banding is simply inevitable in some cases depending on the display bit depth and brightness. The only way to prevent it is to make the gradient less smooth (with dither), but random noise works well enough.
|
|
2024-03-09 08:52:57
|
Not if you have a decoder that applies dithering
|
|
|
lonjil
|
2024-03-09 08:53:44
|
If the encoded data has a higher bit depth than what you wish to display, yes, dither helps. And random noise can act as quite decent dither.
|
|
2024-03-09 08:54:10
|
I quote:
> noise is going to behave like high quality dithering, yeah
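(To make that concrete, a minimal numpy sketch, purely illustrative: quantizing a smooth 16-bit ramp to 8 bits with and without a little random noise. The noisy version trades visible bands for fine grain, which is essentially the trade photon noise makes at display time.)
```
import numpy as np

# a smooth 16-bit gradient that spans only a handful of 8-bit code values
ramp16 = np.linspace(30000, 30800, 4096)

# plain rounding to 8 bits: long flat runs -> visible bands on an 8-bit display
banded = np.round(ramp16 / 257).astype(np.uint8)

# add ~1 LSB (8-bit scale) of uniform noise before rounding: cheap random dither
rng = np.random.default_rng(0)
dithered = np.clip(np.round(ramp16 / 257 + rng.uniform(-0.5, 0.5, ramp16.shape)), 0, 255).astype(np.uint8)

# same average level, but the dithered version breaks up the flat runs
print("mean (banded):  ", banded.mean())
print("mean (dithered):", dithered.mean())
print("value changes along the ramp (banded):  ", int(np.count_nonzero(np.diff(banded))))
print("value changes along the ramp (dithered):", int(np.count_nonzero(np.diff(dithered))))
```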
|
|
|
Demiurge
|
2024-03-09 08:54:21
|
Which should always, always happen when quantizing down to 8 bits from a higher precision
|
|
2024-03-09 08:54:28
|
Not doing so is a bug
|
|
2024-03-09 08:55:27
|
Not applying dither when reducing bit depth is wrong
|
|
|
lonjil
|
2024-03-09 08:55:38
|
I reckon that essentially zero image displaying programs on Linux do that
|
|
2024-03-09 08:55:57
|
libjxl ofc does apply dither if you ask it to decode to a lower bit depth
|
|
2024-03-09 08:56:14
|
but that doesn't help if you, say, resize the image after that
|
|
|
Demiurge
|
2024-03-09 08:58:57
|
Resizing an image is usually done at a high bit depth. Like all image editing.
|
|
|
lonjil
|
2024-03-09 09:00:01
|
not if you have 8-bit buffers and ask libjxl to decode to such
|
|
|
Demiurge
|
2024-03-09 09:03:51
|
I don't think libjxl has an API for resizing or decoding images at a different size. If it did, reducing bit depth and dithering should obviously be the final step. If someone decides to decode an image to 8 bits and then resize the image, most of the time resizing is done at a higher bit depth and then you still need to reduce the bit depth back down to 8 bits again which means you need to dither again.
|
|
|
jonnyawsom3
|
|
Demiurge
They use blender and inkscape
|
|
2024-03-09 10:06:05
|
> Thanks to the lovely JPEG-XL grain described earlier, it’s not just Inkscape and Blender that were used.
https://gitlab.gnome.org/Teams/Design/wallpaper-assets/-/blob/master/46/experiments/py/geo.py?ref_type=heads
|
|
|
Demiurge
Resizing an image is usually done at a high bit depth. Like all image editing.
|
|
2024-03-09 10:07:39
|
Assuming it's a normal person sending/saving an image and not a massive nerd like all of us, and that the image is then 8 bit with banding for x or y or z reason, then a sprinkling of photon noise can help without modifying the rest of the image at all
|
|
|
Jim
|
2024-03-12 11:25:31
|
Jon's JPEG XL and the Pareto Front article showed up in the latest Web Performance Newsletter: https://mailchi.mp/perf.email/147?e=08bc2739cc
|
|
|
jonnyawsom3
|
2024-03-12 03:00:14
|
> Now, imagine a situation where your mobile plan allows 300 MB data a month and is of maximum 4G speed.
Making me feel old just because I don't have 5G
|
|
|
yoochan
|
|
Jim
Jon's JPEG XL and the Pareto Front article showed up in the latest Web Performance Newsletter: https://mailchi.mp/perf.email/147?e=08bc2739cc
|
|
2024-03-12 04:33:09
|
the next article talks about _Supercharge compression efficiency with shared dictionaries_. Isn't that exactly what brotli already does in text mode?
|
|
|
|
afed
|
2024-03-12 04:36:04
|
yeah, but it's been reworked and improved
|
|
|
yoochan
|
|
Traneptora
|
|
yoochan
next article speaks about _Supercharge compression efficiency with shared dictionaries_. Isn't exactly what brotli already does in text mode ?
|
|
2024-03-12 04:37:55
|
no, shared dictionaries are when a custom dict is stored in a browser cache based on the last version sent
|
|
2024-03-12 04:38:35
|
so like if I ship a webpage, I can then ship an updated version compressed with a dictionary tuned to the previous version
|
|
2024-03-12 04:39:31
|
relies on browser cache and http headers ofc
|
|
2024-03-12 04:39:56
|
the browser sends an http header that declares what version it has cached
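(Roughly, as I understand the WICG draft - the header and encoding names have moved around between revisions, so treat these as placeholders rather than the final spelling:)
```
# 1. the first response marks the asset as usable as a dictionary for later requests
HTTP/1.1 200 OK
Content-Type: application/javascript
Use-As-Dictionary: match="/js/app-*.js"

# 2. a later request for the updated asset advertises the cached dictionary by hash
GET /js/app-v2.js HTTP/1.1
Accept-Encoding: gzip, br, zstd, dcb, dcz
Available-Dictionary: :pZGm1Av0IEBKARczz7exkNYsZb8LzaMrV7J32a2fFG4=:

# 3. the server responds delta-compressed against that dictionary
HTTP/1.1 200 OK
Content-Encoding: dcb
```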
|
|
|
yoochan
|
2024-03-12 04:40:07
|
I took the time to read the article and understand a bit 😅 it seems complicated but if it works
|
|
|
Traneptora
|
2024-03-12 04:40:19
|
it seems largely unnecessary imo
|
|
2024-03-12 04:40:54
|
shaving 50k off a page load has much lower impact than reducing round trips
|
|
|
|
afed
|
2024-03-12 04:44:34
|
the first proposals were from 2008, then there were some changes or even deprecations
https://lists.w3.org/Archives/Public/ietf-http-wg/2008JulSep/att-0441/Shared_Dictionary_Compression_over_HTTP.pdf
<https://groups.google.com/a/chromium.org/g/blink-dev/c/nQl0ORHy7sw>
and now it's
https://github.com/WICG/compression-dictionary-transport/
|
|
2024-03-12 04:47:09
|
but, it's still useful only for certain use cases
|
|
2024-03-12 05:07:30
|
though delta compression looks interesting
|
|
|
_wb_
|
|
Traneptora
it seems largely unnecessary imo
|
|
2024-03-12 05:20:45
|
This could also be used to do things like downloading Android package updates, if there isn't already some other mechanism for delta compression for that...
|
|
|
Traneptora
|
|
_wb_
This could also be used to do things like downloading Android package updates, if there isn't already some other mechanism for delta compression for that...
|
|
2024-03-12 05:28:58
|
well this is at the protocol level. package update deltas are already a solved problem, fedora for example serves `.drpm`
|
|
|
_wb_
|
2024-03-12 05:50:45
|
Doing things at the protocol level could simplify some systems though, you don't need application-specific delta mechanisms...
|
|
|
Traneptora
|
2024-03-12 05:53:59
|
perhaps, but this is about using the previous version as a dictionary
|
|
2024-03-12 05:55:30
|
which works well with text assets like JS libraries
|
|
2024-03-12 05:55:45
|
but for package deltas, they simply don't include files that are unchanged, so it's a much better solution
|
|
|
_wb_
|
2024-03-12 06:36:46
|
Well an unchanged file would be just one long lz77 match, that's kind of just as good, no?
|
|
|
Traneptora
|
|
_wb_
Well an unchanged file would be just one long lz77 match, that's kind of just as good, no?
|
|
2024-03-12 09:01:38
|
not really, cause these packages are typically solid archives compressed with some kind of thing like gzip, xz, zstd, etc.
|
|
2024-03-12 09:01:59
|
so a file that is unchanged between two packages may not actually be unchanged between the various compressed packages
|
|
2024-03-12 09:02:36
|
for example, rpm files are a cpio archive that is solid compressed (plus a header). arch uses .tar.zst, and debian packages have a control.tar.xz and a data.tar.xz inside a standard `ar` archive
|
|
2024-03-12 09:03:26
|
in order to correctly create a delta-package, you need to know what the package looks like before being compressed for distribution
|
|
2024-03-12 09:03:59
|
this google proposal is designed for web stuff where the user is likely to have the uncompressed version cached. e.g. an older version of a JS library
|
|
|
lonjil
|
2024-03-12 09:10:33
|
Presumably you have all the older files, though not packaged up. You'd just need a stable ordering for dictionary construction.
|
|
2024-03-12 09:11:10
|
Though I don't know how well standard compressors like zstd and Brotli would do with such huge dictionaries.
|
|
|
Traneptora
|
|
lonjil
Presumably you have all the older files, though not packaged up. You'd just need a stable ordering for dictionary construction.
|
|
2024-03-12 09:38:21
|
presumably, but it doesn't make sense to construct an entropy dictionary based on the previous data for this purpose
|
|
2024-03-12 09:38:30
|
if it isn't changed you can just signal that
|
|
|
lonjil
|
2024-03-12 09:41:28
|
indeed
|
|
|
Orum
|
2024-03-12 11:45:58
|
well, this aged like milk: https://blobfolio.com/2021/jpeg-xl/
|
|
|
HCrikki
|
2024-03-13 01:58:55
|
negative hit pieces get attention as they're at least posted on the public web at diverse locations from multiple authors
|
|
2024-03-13 01:59:53
|
take discord, anything (interesting) posted here might as well not exist for the rest of the web unless mentioned or screenshotted
|
|
2024-03-13 02:04:25
|
caniuse is also misleading by default cuz no representative numbers are mentioned anywhere
|
|
2024-03-13 02:06:37
|
take its **tracked mobile** numbers in the most important countries - jxl support is almost at 33% in the us, uk, canada, japan, and that's without chromium, which would've overnight moved the needle up an extra 60%. for a huge number of websites (especially local services and sites like news), that's their *main* target readerbase - you can check against your own analytics
|
|
2024-03-13 02:11:10
|
notable as the way caniuse works appears to tally actively used browsers and versions, not unused preinstalls (samsung browser has half as many preinstalls as mobile chrome and like 8x more installs than all firefox but barely tallied)
|
|
|
VcSaJen
|
|
HCrikki
take discord, anything (interesting) posted here might as well not exist for rest of web unless mentioned or screenshotted
|
|
2024-03-13 08:36:47
|
Discord overall is essentially a "secret club". No option of public access or indexing. Even archival or a public mirror is impossible due to the ToS.
Matrix is only marginally better, public access is an afterthought: not default, not implemented almost anywhere, etc.
At least IRC has tooling to allow it, without any problems, but again, not by default
|
|
|
w
|
2024-03-13 09:21:45
|
irc doesn't even have chat logs
|
|
2024-03-13 09:21:52
|
weird comparison but i guess everything sucks
|
|
|
spider-mario
|
2024-03-13 09:28:02
|
on irc, if you’re not connected to the server when a message is sent, you just don’t receive it, ever
|
|
2024-03-13 09:28:11
|
neither Discord nor Matrix has that issue
|
|
|
lonjil
|
2024-03-13 09:37:51
|
IRCv3 has a history feature but I guess almost no one uses it
|
|
|
spider-mario
|
2024-03-13 09:54:10
|
yeah, from afar, IRCv3 kind of looks like a theoretical exercise: “if we fixed IRC, what would that look like?”
|
|
2024-03-13 09:54:30
|
on its own, doesn’t really make “IRC” not have those problems
|
|
|
VcSaJen
|
2024-03-13 09:57:38
|
There's also Jabber/XMPP
|
|
|
lonjil
|
2024-03-13 10:04:25
|
*Some* IRCv3 specs are widely adopted
|
|
2024-03-13 10:04:51
|
But most of the more ambitious ones kinda died
|
|
|
yurume
|
2024-03-13 10:20:39
|
IRCv3 was late by more than 1 decade IMO
|
|
|
yoochan
|
2024-03-13 10:22:15
|
IRC! that's when we didn't shut down our PC at night 😄 so as not to miss an anime download
|
|
|
fab
|
2024-03-26 04:02:27
|
|
|
2024-03-26 04:02:35
|
I made 4chan seems like Jon Sneyers is speaking that truth
|
|
2024-03-26 04:03:02
|
The concepts in this PDF could confuse you
|
|
2024-03-26 04:03:44
|
As i used history and chronological hour willing and usage
|
|
2024-03-26 04:04:03
|
I accepted youtube to use it on my Google account
|
|
2024-03-26 04:04:39
|
And personalize 4chan and all the site in the world according to Mines youtube autism
|
|
2024-03-26 04:05:25
|
Keep in my mind that is fan-fiction and this opinion isn't used
|
|
2024-03-26 04:13:40
|
Improper use of ai though is bad, bad of situation
|
|
2024-03-26 04:17:01
|
I deleted JXL artifacts on purpose on ytt pipeline
|
|
2024-03-26 04:18:48
|
And i train avif to do a good job, but this is the jpeg version
|
|
|
Fox Wizard
|
2024-03-26 04:20:46
|
<:KittyThink:1126564678835904553>
|
|
|
fab
|
2024-03-26 06:11:36
|
|
|
2024-03-26 06:11:49
|
New file of deepfake
|
|
|
_wb_
|
2024-03-26 06:34:20
|
https://boards.4chan.org/g/thread/99659945/jpeg-xl-won
4chan makes my eyes hurt
|
|
|
lonjil
|
2024-03-26 06:35:23
|
Makes brain hurt
|
|
2024-03-26 06:36:18
|
Makes my sensibilities as a regular, everyday, not terrible person hurt
|
|
|
jonnyawsom3
|
2024-03-26 06:36:22
|
The internet is certainly... A place... I do wonder if the frequent Reddit posts linking to it are the best idea
|
|
2024-03-26 06:37:18
|
After all, nothing screams "Mature Image Format" like a 4chan thread of everyone calling each other an idiot
|
|
|
username
|
2024-03-26 06:37:39
|
maybe a spoiler tag for the image would have been a good idea lol
|
|
|
jonnyawsom3
|
2024-03-26 06:38:22
|
Yeahh
|
|
|
Quackdoc
|
|
The internet is certainly... A place... I do wonder if the frequent Reddit posts linking to it are the best idea
|
|
2024-03-26 06:38:27
|
I see 4chan is rather tame today
|
|
|
fab
|
|
_wb_
https://boards.4chan.org/g/thread/99659945/jpeg-xl-won
4chan makes my eyes hurt
|
|
2024-03-26 07:19:00
|
Because you're seeing new37 and is a deepfake i made for fun to make the joke of JXL spec and format behind AVIFs
|
|
2024-03-26 07:19:35
|
So it is extremely closed minded and bot duplicated with a purpose
|
|
2024-03-26 07:20:32
|
4chan texts are fakes;
|
|
|
Nyao-chan
|
2024-03-26 07:34:28
|
why do they want cb7? compressing jxl does nothing. makes me wonder why though.
|
|
|
fab
|
2024-03-26 07:38:04
|
All 4chan gemini 1.0 pro insults made for the libjxl devs, first part Alphabet, second part me
|
|
|
Quackdoc
|
|
Nyao-chan
why do they want cb7? compressing jxl does nothing. makes me wonder why though.
|
|
2024-03-26 07:38:48
|
cb7 can be uncompressed
|
|
|
fab
|
|
The internet is certainly... A place... I do wonder if the frequent Reddit posts linking to it are the best idea
|
|
2024-03-26 07:39:12
|
I don't post too much on reddit i just write something that could give an advice
|
|
|
HCrikki
take its **tracked mobile** numbers in the most important countries - jxl support is almost at 33% in the us, uk, canada, japan and thats without chromium that wouldve overnight moved the needle up an extra 60%. for a huge number of websites (especially local services and sites like news), thats their *main* target readerbase - you can check against your own analytics
|
|
2024-03-26 07:40:04
|
I know only this aspects about Discord
|
|
|
Nyao-chan
|
|
Quackdoc
cb7 can be uncompressed
|
|
2024-03-26 07:41:00
|
but if you want uncompressed just use cbz. cb7 is better at compressing jpg. But for some reason similar jxl files are compressed so differently that additional compression on top of that is useless
|
|
|
fab
|
2024-03-26 07:42:42
|
Honestly making gg sans too readable bothers me as there are under 18 people on this site
|
|
|
uis
Anglo-
|
|
2024-03-26 07:44:01
|
Throwback
|
|
|
Quackdoc
|
|
Nyao-chan
but if you you want uncompressed just use cbz. cb7 is better at compressing jpg. But for some reason similar jxl are compressed so differently that additional compression on top of that is useless
|
|
2024-03-26 07:47:52
|
there is no real benefit to using cbz now since the majority of apps also support cb7. so it's not like there are major benefits either way, ofc with cb7 if you are compressing a hundred or two jxl files there may be some file savings over cbz, though marginal at best ofc.
|
|
2024-03-26 07:48:41
|
7z also allegedly has better seeking perf than zip so that could be a factor to use cb7 over cbz too
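(e.g. with p7zip, store mode makes sense since the JXL pages are already compressed and recompressing them mostly burns CPU:)
```
# a .cb7 is just a renamed 7z archive; -mx=0 stores the already-compressed JXL pages as-is
7z a -mx=0 "Volume 01.cb7" ./pages/*.jxl
```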
|
|
|
Nyao-chan
|
2024-03-26 08:05:17
|
mihon unfortunately doesn't
there is a pull request it seems
as for compression, it really seems useless
I just checked hunter x hunter and on strongest 7z compression, the difference is 144592K vs 144228K
I see it does indeed decompress faster without compression, but does it matter? The difference is not much and images can be preloaded
|
|
|
Quackdoc
|
2024-03-26 08:06:43
|
mihon doesn't? that's strange, I thought it did, or maybe it was a fork I used
|
|
|
Nyao-chan
|
2024-03-26 08:08:53
|
https://github.com/mihonapp/mihon/pull/46
I did not test though
|
|
2024-03-26 08:14:08
|
Yeah, "no pages found". It does see it though
|
|
|
HCrikki
|
2024-03-26 11:55:30
|
https://youtu.be/qB9L-ZYM1_0
|
|
|
jonnyawsom3
|
2024-03-27 03:12:00
|
It's a little old now, and they got some facts wrong, but at least they tried
|
|
|
fab
|
2024-03-27 09:53:52
|
Perfect youtube gave me a ban
|
|
2024-03-27 09:55:02
|
I use it like nsfw and also those medias i used them to change the search engine results
|
|
|
Demiurge
|
2024-04-03 06:48:03
|
https://quackdoc.github.io/blog/hidden-jxl-benefits/
|
|
2024-04-03 06:48:14
|
https://quackdoc.github.io/blog/whynotusejxl/
|
|
|
Quackdoc
|
2024-04-03 07:03:25
|
how did you even find this? I posted it in like, 1 place
|
|
|
HCrikki
|
2024-04-03 07:10:46
|
people focus on web browsers but the actual ecosystem changer is forced updates and getting rid of middlemen. web services can directly serve the most optimal version of images to their own apps (desktop and mobile), so why aren't they doing that already? from just storage/bw savings, a tight budgetary balance or even a deficit could turn into a surplus
|
|
2024-04-03 07:13:16
|
adoption doesn't need to cover 100% of their content either. if they recompress original pngs with jpegli they could already get rid of the need to store/serve webp/avif and have a stronger base for an eventual pipeline migration to jxl, whether it's from recompressing originals or jpg->jxl
|
|
|
Quackdoc
|
|
HCrikki
people focus on web browsers but the actual ecosystem changer is forced updates and getting rid of middlemen. web services can directly serve the most optimal version of images to their own apps (desktop and mobile) so why arent they doing that already? from just storage/bw savings, tight budgetary balance or even deficit could turn into surplus
|
|
2024-04-03 07:24:49
|
well considering 90% of apps either rely on web browsers or on platform support for image decoding, they don't really have many options lmao.
as a dedicated service provider, it's a hard sell to swap to jxl unless image distribution is a significant cost - enough of a cost to warrant custom solutions. In the end, you really do pretty much need android or chrome to pick it up. thankfully android media codecs is a project mainline thing now since like android 10, so it's possible for those to get backported I guess. if someone does wind up doing it
|
|
|
VcSaJen
|
|
Quackdoc
well considering 90% of apps either rely on web browsers or otherwise platform support for image decoding, they don't really have many options lmao.
as a dedicated service provider, it's a hard sell to swap to jxl unless image distribution is a significant cost. enough of a cost to warrant custom solutions. In the end, you really do pretty much need android or chrome to pick it up. thankfully android media codecs is a project mainline thing now since like android 10, so it's possible for those to get backported I guess. if someone does wind up doing it
|
|
2024-04-04 03:08:48
|
A lot of the time apps specifically whitelist image formats, excluding any formats the platform/library provides that are not in the whitelist.
|
|
|
HCrikki
|
2024-04-04 03:26:00
|
an image decoder is barely 3 entire megabytes and support for extra ones never depended on browser support - you want a lib, you just add one more library to your app
|
|
2024-04-04 03:27:05
|
even with platforms granting support using no additional lib in your app, devs would still ship libs and update them as they fancy
|
|
|
VcSaJen
|
2024-04-04 03:29:05
|
For Windows, it's trivial. For Android, there's a surprising lack of image libraries. They're all for 'allocating' or 'modifying' images, and never for 'image formats'.
|
|
|
Quackdoc
|
2024-04-04 03:29:18
|
3mb plus development and debugging man hours
|
|
|
HCrikki
|
2024-04-04 03:36:18
|
obtaining the support from the libraries you already use like vips outsourced all the hard labour early. on android there are plugins for picasso, glide, coil, and of course ones needing more effort to integrate in different stacks (rust, go, swift, npm)
|
|
|
Quackdoc
|
2024-04-04 04:05:45
|
just because the libraries exist doesn't mean using them is "free". It can also complicate things a lot more when things do go wrong.
|
|
|
VcSaJen
|
|
HCrikki
obtaining the support from the libraries you already use like vips outsourced all the hard labour early. on android theres plugins for picasso, glide, coil and of course ones needing more effort in different stacks integrating (rust, go, swift, npm)
|
|
2024-04-04 04:17:43
|
Interesting. It's nearly impossible to find, I found it only by using keywords in your post (glide, coil):
https://github.com/awxkee/jxl-coder
|
|
|
username
|
2024-04-04 04:21:34
|
I remember someone in here mentioned that repo along with the new LineageOS gallery app repo, with the idea of possibly having the new version of LineageOS support JXL, but they deleted their messages like a day later
|
|
|
HCrikki
|
2024-04-05 05:01:31
|
seems coverage of and comments on jpegli sometimes erroneously assume jpegli is *not* from the jpegxl authors but an underhanded effort from those who tried to slow jpegxl's adoption
|
|
2024-04-05 05:12:07
|
given how common use of libjpeg-turbo and mozjpeg is, a ton of software could swap libraries and encode/decode at higher quality overnight
|
|
2024-04-05 05:12:56
|
afaik steam still uses a bad isc jpg encoder for lossy screenshots of mediocre quality and color accuracy with banding - perhaps they could evaluate switching since they're tinkering with the screenshot function lately anyway
|
|
|
Quackdoc
|
2024-04-05 05:15:39
|
doesn't seem like arch has an aur package for it yet
|
|
|
|
afed
|
2024-04-05 05:18:35
|
libjpeg-turbo was also sponsored by google for a long time
|
|
|
Quackdoc
|
2024-04-05 05:19:10
|
oh maybe libjxl-git builds jpegli now
|
|
2024-04-05 05:19:46
|
oh nvm im an idiot, ill make one up
|
|
2024-04-05 06:35:40
|
I was thinking about it, and still am.
|
|
|
Demiurge
|
2024-04-06 12:20:57
|
cjpegli is already included in official arch package for libjxl.
|
|
2024-04-06 12:22:06
|
What would be more useful and cool is a libjpeg package that replaces libjpeg-turbo
|
|
2024-04-06 12:22:27
|
maybe libjpeg6-libjxl or libjpeg6-jpegli
|
|
2024-04-06 12:22:45
|
And a variant that is compiled using the newer libjpeg API
|
|