JPEG XL

other-codecs

improver
2021-09-30 11:10:25
i mean for fallback formats
Scope
2021-09-30 11:10:41
And embedding a low-quality preview by bloating the size of the original image is not a good method. Even with something like this, I would prefer BlurHash or a similar implementation, like the ~200-byte JPEG previews from FB
The_Decryptor
2021-09-30 11:10:47
You'd need a new CSS property like "background-image-placeholder" that takes something like a colour
improver
2021-09-30 11:10:50
iirc in case of css background that can only be done at server side
Jim
2021-09-30 11:11:04
Not fully supported yet, but `image-set()` will allow the same as picture. https://developer.mozilla.org/en-US/docs/Web/CSS/image/image-set()
improver
2021-09-30 11:11:32
oh yes this will help
2021-09-30 11:11:56
but then... you gotta have a fallback for stuff that doesn't support image-set()
2021-09-30 11:12:33
> There is no inbuilt fallback for image-set(); therefore to include a background-image for those browsers that do not support the function, a separate declaration is required before the line using image-set().
heh, the usual hacky CSS way
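A minimal sketch of the workaround quoted above, with hypothetical file names (a plain declaration first, then the `image-set()` declaration that overrides it where supported):

```css
.hero {
  /* Fallback: used by browsers without image-set() support */
  background-image: url("preview.jpg");
  /* Overrides the line above where image-set() (and, in newer
     engines, the type() argument) is supported */
  background-image: image-set(
    url("preview.avif") type("image/avif"),
    url("preview.jpg") type("image/jpeg")
  );
}
```

In practice a `-webkit-image-set()` variant may also be needed, since Blink still ships the prefixed form.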
Jim
2021-09-30 11:14:12
That's the way of CSS. Can't do anything about that. It will be supported soon enough.
improver
2021-09-30 11:14:33
> Before Firefox 89, the type() function is not supported as an argument to image-set(). ooof
The_Decryptor
2021-09-30 11:15:12
Doesn't work in Edge either
improver
2021-09-30 11:15:38
guess it's a bit early still but future looks bright
2021-09-30 11:15:46
when blink removes -webkit- prefix
Scope
2021-10-02 05:10:34
Added Intra Bi-Predictor <https://chromium.googlesource.com/codecs/libwebp2/+/0611e5f8fb1e0da341eab91f00401159c82d99b2%5E%21/> 🤔
> // The DoubleDcPredictor sets all pixels to the mean of top and/or left
> // context first. Then for some pixels it calculates a linear combination of
> // precalculated value and the pixel above or on the left in the context. The
> // predictor is based on AV2's Intra Bi-Predictor.
lithium
2021-10-02 06:08:22
libjpeg-turbo's jpegtran has a useful new feature: `jpegtran -copy icc` can strip Exif while retaining the ICC profile, though I haven't tested how it handles gamma profiles.
> https://github.com/libjpeg-turbo/libjpeg-turbo/issues/533
The_Decryptor
2021-10-03 01:56:22
That's one of the nicer aspects of JXL to me, it doesn't consider things like colour profiles as extra metadata that tools can just strip by default
_wb_
2021-10-03 06:19:29
https://github.com/libjxl/libjxl/blob/main/doc/format_overview.md#metadata-versus-image-data
The_Decryptor
2021-10-03 06:28:43
It's a nice quality of life improvement, I ran into the jpegtran issue recently when batch processing some images, accidentally stripped all their colour profiles
Scope
2021-10-03 01:03:11
Hmm, continuing the discussion about formats: as I said about some other companies, and including the recent update at unsplash.com, it looks like they mostly need new formats for previews that would look good under very strong compression, for maximum bandwidth reduction and faster loading. For full size they still serve the source JPEG (and when the images total hundreds of petabytes, it is not always efficient to store them in multiple formats), so for them the benefit of WebP and AVIF is obvious. Although I wonder how well JPEG XL can compete with at least WebP for such previews. Also, if some companies use FPGAs to convert to WebP, it is theoretically possible to use something like that to store all JPEGs recompressed to JXL, for fast reverse conversion
2021-10-03 01:13:21
Also, JPEG XL may have advantages with progressive loading. But since the most useful stage is the 60% HQIP, when it is almost like the full image, I wonder if it is possible to make a non-progressive mode where only the HQIP was available instead of the MQIP (as it is now for VarDCT), or would that have the same efficiency as the current full progressive encoding?
_wb_
2021-10-03 02:24:10
For HQIP you need AC coefficients so that does require progressive encoding
Scope
2021-10-03 02:45:25
Hmm, I compared the AVIF preview on unsplash.com to JXL with the same sizes (default settings) and: Source
2021-10-03 02:45:34
AVIF
2021-10-03 02:45:42
JXL
2021-10-03 02:45:47
2021-10-03 02:45:52
2021-10-03 02:46:13
Source
2021-10-03 02:46:24
AVIF
2021-10-03 02:46:31
JXL
2021-10-03 02:46:36
2021-10-03 02:46:41
2021-10-03 02:50:15
Considering that, according to Unsplash, AVIF previews are 30% smaller than WebP (which is itself 30% smaller than JPEG), it seems that at these encoding settings and sizes AVIF does not have any advantages over JXL 🤔 https://twitter.com/letsgoheadless/status/1441123832392126473
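A quick sanity check on those stacked percentages, just arithmetic on the figures quoted above:

```python
# Unsplash's claim: AVIF previews ~30% smaller than WebP,
# and WebP ~30% smaller than JPEG. Compounded relative to JPEG:
jpeg = 1.0
webp = jpeg * (1 - 0.30)   # 0.70 of the JPEG size
avif = webp * (1 - 0.30)   # 0.49 of the JPEG size

print(f"AVIF ends up ~{(1 - avif) * 100:.0f}% smaller than JPEG")
```

So "half the size of JPEG" is the ballpark both compounded claims land in.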
_wb_
2021-10-03 02:52:09
in the second example, the jxl is a lot better imo
2021-10-03 02:52:49
I see bad banding in the background in the avif, and smoothing of the fabric in the foreground
2021-10-03 02:53:53
in the first example, the difference is smaller but the jxl is also better
2021-10-03 02:54:10
in particular: the red shoe suffers from chroma subsampling in avif
Scope
2021-10-03 02:59:25
Yep, but I also take into account that AVIF is still in test mode and may use better settings/encoders after some time. Still, at the moment, even for these previews JXL is pretty competitive; before making comparisons I thought AVIF would win without any contest in such cases
_wb_
2021-10-03 03:01:12
Encoder improvements are certainly possible for both avif and jxl
Scope
2021-10-03 03:04:54
Yes, but for AVIF there are already a lot of encoders and more "heavy" settings that improve quality (and judging by the images, they haven't been used yet), while for lossy JXL there is no noticeable quality improvement yet even at slower speeds; -s 8/9 does not make such a difference
2021-10-03 03:09:51
Also, having done tests on more random images, the size lands somewhere between `-d 1` and `-d 3`, which is pretty suitable for good-quality encoding with JXL 🤔
_wb_
2021-10-03 03:10:08
in vardct mode, current -s 8/9 is mostly spending a lot of time tweaking the adaptive quant weights based on butteraugli iterations, but it's still far from exhaustive in e.g. trying different block splits
2021-10-03 03:11:10
while afaik libaom can be very exhaustive at its slower speeds (which is why it is so slow), but doesn't really optimize for perceptually very relevant metrics (psnr and ssim are rather weak compared to butteraugli)
Scope
Scope Considering that according to unsplash AVIF for previews is 30% smaller than WebP (which is also 30% smaller than Jpeg), and it seems that at these encoding settings and sizes AVIF does not have any advantages over JXL 🤔 https://twitter.com/letsgoheadless/status/1441123832392126473
2021-10-03 03:16:21
So, I think it's pretty much possible for JXL to be 50% smaller than Jpeg on such previews and small images at a good encoding speed (I think faster than even the settings of the current AVIF in use)
BlueSwordM
Scope Yes, but for AVIF there are already a lot of encoders and more "heavy" settings that improve quality (and judging by the images they haven't been used yet), and for lossy JXL there is no noticeable quality improvement yet even when using slower speeds, -s 8/9 does not give such a difference
2021-10-03 03:16:40
I think it's more that the avifenc devs haven't made any of the new settings default, for fear of breaking compatibility with stable aomenc.
Scope
2021-10-03 03:23:29
That too, and most image delivery providers use their tested and stable settings and encoders, because serving randomly broken images is very bad and not worth any extra efficiency
_wb_
2021-10-03 04:09:43
I can tell you that at Cloudinary, we never even considered using libaom to encode avif because it is way too slow for on-the-fly encoding (first serve before things get cached would just be too slow).
2021-10-03 04:10:11
We first used SVT-AV1 to do our avif encoding, at speed 7, and even that was too slow
2021-10-03 04:11:09
we now use Visionular's Aurora encoder (which is proprietary), at a speed setting that is about twice as fast as SVT speed 7 and seems to perform somewhat better than SVT at that speed
2021-10-03 04:11:49
(though we're still evaluating and the situation is still changing since all encoders are in active development)
Scope
2021-10-03 04:17:38
And for JXL, does Cloudinary use the default setting with distance or faster speeds with auto quality like for other formats?
_wb_
2021-10-03 04:19:05
At the moment it just uses default
2021-10-03 04:19:31
That might change when adoption becomes more significant
2021-10-03 04:19:45
Probably not though
2021-10-03 04:20:13
For lossless we use s3 for now
2021-10-03 04:20:27
For lossy the default s7
2021-10-03 04:21:31
Q_auto:good just translates to -q 80
2021-10-03 04:21:47
In jxl that is
2021-10-03 04:22:00
In other formats you cannot just assume q80 is ok
Scope
2021-10-03 04:22:21
Unsplash (imgix) uses libavif as far as I can see (but I don't know how to identify the encoder and speed), with these properties:
```
 * Resolution     : 500x625
 * Bit Depth      : 8
 * Format         : YUV420
 * Chroma Sam. Pos: 0
 * Alpha          : Absent
 * Range          : Full
 * Color Primaries: 1
 * Transfer Char. : 13
 * Matrix Coeffs. : 6
 * ICC Profile    : Present (3144 bytes)
```
_wb_
2021-10-03 04:24:15
They should use tinysrgb; it's silly to waste 3144 bytes on a default sRGB ICC profile
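To put that 3144-byte profile in perspective: for a small preview it is a noticeable share of the whole file. The preview size below is a hypothetical round number, not a measured one:

```python
icc_bytes = 3144        # ICC profile size reported in the avif info above
preview_bytes = 20_000  # hypothetical ~20 KB preview, for illustration

overhead = icc_bytes / preview_bytes
print(f"ICC profile is {overhead:.1%} of the file")  # ~15.7% at this size
```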
Scope
2021-10-03 04:29:43
Yeah, it doesn't look optimal. Too bad that JXL support is so delayed in browsers, otherwise it would be possible to compare them in practice. And as I've said before, if possible it would be nice for `-s 3` to have some quick detection of non-photo images and switch to something like `-s 2`, because most lossless images are not photos
_wb_
2021-10-03 04:34:54
Yes, I am planning to do such tweaking later
Scope
2021-10-03 04:44:07
And for **web** lossless I think it is a good idea to set `--keep_invisible=0` by default, or maybe even `--premultiply` (not always more efficient, but in general yes). This allows some additional savings according to my comparison, and I do not think these options are dangerous for the web (since WebP has the same behavior and the source image will still be available)
2021-10-03 04:49:16
Also for small previews like the above with `-q 80` some ringing artifacts are sometimes noticeable (but the image size is also smaller than AVIF)
_wb_
2021-10-03 04:56:45
Yes
2021-10-03 04:57:13
The customers that care about fidelity will use q_auto:best, which maps to cjxl -q 90 atm
2021-10-03 04:58:32
Those that care more about bandwidth will use q_auto:eco (which maps to -q 70 atm) or :low (which maps to -q 60)
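The preset-to-flag mapping described above, collected in one place. This is a sketch reconstructed from the chat; the values are explicitly "at the moment" and apply to JXL only, and the function name is my own:

```python
# Cloudinary q_auto preset -> cjxl -q value, per the discussion above.
Q_AUTO_TO_CJXL_Q = {
    "best": 90,  # fidelity-focused customers
    "good": 80,  # default
    "eco":  70,  # bandwidth-focused
    "low":  60,
}

def cjxl_quality_args(preset: str) -> list[str]:
    """Return the cjxl quality flag for a q_auto preset (JXL only)."""
    return ["-q", str(Q_AUTO_TO_CJXL_Q[preset])]

print(cjxl_quality_args("good"))  # ['-q', '80']
```

As noted right above, in other formats you cannot just assume the same numeric quality is OK, so the table would not transfer as-is.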
nathanielcwm
2021-10-04 02:29:13
<@!794205442175402004> wtf would cause this sort of issue?
2021-10-04 02:29:13
https://media.discordapp.net/attachments/869655213315878983/894591145122861096/Pog.png?width=537&height=450
Fox Wizard
2021-10-04 02:29:28
~~Maybe tell him to open it in browser :p~~
nathanielcwm
2021-10-04 02:29:35
shutttttttttttt
2021-10-04 02:29:42
yeah u gotta open the original in browser lol
2021-10-04 02:30:36
Fox Wizard
nathanielcwm
2021-10-04 02:30:53
You said that before when it wasn't <:KekDog:884736660376535040>
2021-10-04 02:31:02
Also: https://cdn.discordapp.com/attachments/869655213315878983/894591342745899028/unknown.png
nathanielcwm
2021-10-04 02:31:13
oh now it's super scuffed in discord
Fox Wizard You said that before when it wasn't <:KekDog:884736660376535040>
2021-10-04 02:31:28
https://tenor.com/view/shut-up-shush-shh-ok-bird-gif-17679708
Fox Wizard
2021-10-04 02:31:34
Apparently it breaks with the link, but not original image <:SovietThinking:821039067843133462>
nathanielcwm
2021-10-04 02:31:40
weird
2021-10-04 02:31:52
well the jxl version is fine for me
2021-10-04 02:31:58
2021-10-04 02:32:02
prolly best to post it here too <:kekw:808717074305122316>
Fox Wizard
2021-10-04 02:32:06
2021-10-04 02:32:08
2021-10-04 02:32:26
The original png. Not the weird one you resized <:KekDog:884736660376535040>
nathanielcwm
2021-10-04 02:32:40
Not if you open it in browser <:KekDog:884736660376535040>
nathanielcwm
Fox Wizard The original png. Not the weird one you resized <:KekDog:884736660376535040>
2021-10-04 02:32:49
shut that screenshot was showing that it was fine in my image viewer
2021-10-04 02:32:50
reeeeee
_wb_
2021-10-04 02:33:07
it's a grayscale+alpha image
Fox Wizard
2021-10-04 02:33:07
<a:FrogStab:821038589345136682>
_wb_
2021-10-04 02:33:21
the grayscale info is fine
Fox Wizard
2021-10-04 02:33:27
How come some decoders get it right and others completely mess it up?
_wb_
2021-10-04 02:33:29
but some of the black is transparent
nathanielcwm
2021-10-04 02:33:45
weird
_wb_
2021-10-04 02:33:46
well it all depends on what they show as a background for the transparent pixels
2021-10-04 02:34:02
if they mess up and don't support transparency, then it will look fine
nathanielcwm
2021-10-04 02:34:03
well my browser should be using white as the background
2021-10-04 02:34:11
so makes sense that it looks white
_wb_
2021-10-04 02:34:12
if they show a black or dark background, then it will look fine
nathanielcwm
2021-10-04 02:34:16
hmm
_wb_
2021-10-04 02:34:26
if they use a bright background, then it will not look fine
nathanielcwm
2021-10-04 02:34:43
<@!219525188818042881> oh ye it's scuffed in dsicord light theme
Fox Wizard
2021-10-04 02:34:50
Lol
nathanielcwm
2021-10-04 02:34:58
2021-10-04 02:35:05
oh my fucking god how do ppl live with this
Fox Wizard
2021-10-04 02:35:31
Lol
nathanielcwm
2021-10-04 02:35:47
it's so fucking ugly
Fox Wizard
2021-10-04 02:35:50
Wonder how some optimizer broke black in that image
nathanielcwm
2021-10-04 02:35:53
and the contrast is absolute shiteeeeeee
2021-10-04 02:36:30
i bet it doesn't pass accessibility requirements lmao
2021-10-04 02:37:01
discord's css is a pain and a half tho
nathanielcwm i bet it doesn't pass accessibility requirements lmao
2021-10-04 02:37:12
so ig i'm not gonna bother looking into it lmao
2021-10-04 02:37:40
2021-10-04 02:37:40
oh nvm
2021-10-04 02:40:21
2021-10-04 02:41:30
idk why discord light theme is legit painful <:kekw:808717074305122316>
2021-10-04 02:41:45
unless i turn the saturation slider down in accessibility
Fraetor
nathanielcwm
2021-10-04 03:35:10
And this is their attempt apparently targeting accessibility... https://discord.com/blog/light-theme-redeemed
Scope
2021-10-07 08:48:24
https://spectrum.ieee.org/forget-jpeg-how-would-a-person-compress-a-picture
yurume
2021-10-07 08:52:19
should have measured (or estimated) the human brain's Kolmogorov complexity for a fair comparison 😄
monad
2021-10-07 09:55:17
An improvement: upload the target image to the Internet first, then send its address to the reconstructor.
Scope
2021-10-09 02:52:04
https://twitter.com/richgel999/status/1446684631865122820
spider-mario
2021-10-09 03:49:46
https://www.dpreview.com/articles/8980170510/how-hdr-tvs-could-change-your-photography-forever > HEIF/HEIC is a broad standard, and the files from Canon and Apple are not cross-compatible with one another
2021-10-09 03:49:50
sounds like a lot of fun
190n
2021-10-09 03:54:55
<:kekw:808717074305122316>
_wb_
2021-10-09 04:07:51
Apple doesn't seem to support 4:4:4 HEIC, nor more than 10-bit
2021-10-09 04:08:11
At least that's how it was on my macbook pro last time I checked
2021-10-09 04:08:54
I assume this is a consequence of using hardware decode (apparently without software fallback)
spider-mario
2021-10-09 04:11:04
sounds plausible
2021-10-09 04:11:22
I was looking for an open-source way to generate HEIF files that Apple devices recognize as HDR
2021-10-09 04:11:54
I thought I could perhaps look at what my Canon camera puts in its PQ HEIFs, but sounds like it’s not going to work even if I manage to figure it out
2021-10-09 04:13:06
and those that my iPhone generates don’t seem to be HDR
2021-10-09 04:18:24
apparently `heif-enc --matrix_coefficients=9 --colour_primaries=9 --transfer_characteristic=18` (18 -> 16 for PQ instead of HLG) is not enough
_wb_
2021-10-09 04:45:28
I think they might use an 8-bit tone mapped HEVC payload plus some auxiliary image with exponents or something
spider-mario
2021-10-09 04:49:04
that sounds painful but you might be right
2021-10-09 04:49:57
AFAIK this is also how Dolby’s “JPEG-HDR” works
_wb_
2021-10-09 04:52:27
Also JPEG XT does that iirc
spider-mario
2021-10-09 04:53:29
yeah, Wikipedia says this: > JPEG XT Part 2 HDR coding is based on Dolby JPEG-HDR format,[5]
Kornel
2021-10-12 07:36:23
<@794205442175402004> I saw you asking if aom could adopt jpeg xl
2021-10-12 07:36:42
AV2 is currently in development.
2021-10-12 07:37:07
Maybe you could sneak in JPEG xl features into AV2?
_wb_
2021-10-12 07:37:57
Yes, maybe some
2021-10-12 07:38:31
Though I think trade-offs are different for a video codec aiming at hw decode than for a still image codec
2021-10-12 07:39:33
Something like MA tree entropy coding is just not feasible in hw, unless you add strong restrictions (which is ok for encode, but not for decode)
2021-10-12 07:41:23
Also things like progressive decoding are not particularly hw-friendly
2021-10-12 07:43:36
(or at least, it would require significant effort to design hw around a bitstream that is organized in a way that decodes progressively, and that effort will likely not be spent if the primary use case is video)
2021-10-12 07:46:28
According to my gut feeling, hw requires relatively simple entropy coding / ctx modeling
2021-10-12 07:48:40
So it 'naturally' works better at low fidelity operating points, where you cannot really use fancy entropy coding since the bits you end up encoding have high entropy anyway (i.e., sending them as raw bits is not so far from optimal)
2021-10-12 07:49:53
Whereas for high fidelity operating points, much more stuff gets signalled and fancy entropy coding (that might be hard to implement in hw) becomes more important
2021-10-12 07:55:01
In other words: I think AVIF and JXL are complementary, and probably will remain complementary (even if encoder improvements might make the difference smaller). I think that's a consequence of different design criteria/decisions. And I think it will not be fundamentally different with AV2IF.
BlueSwordM
2021-10-12 08:01:26
The two things I'd like AV2 to have are no LBD profile (already the case, as evidenced by the HBD-only build switch) and finally moving on from YCbCr to XYB, since darker blue tones are a slight problem.
_wb_
2021-10-12 08:10:51
I hope they stop using psnr/ssim as a way to evaluate proposed coding tools, because that's a known recipe for designing a codec that is great at psnr/ssim and bad at actual perceptual quality.
2021-10-12 08:12:02
(e.g. jpeg 2000 and wdp are also examples of this: they look great when you look at psnr plots, not so much when you look at the images)
BlueSwordM
2021-10-12 08:14:10
I vote for adding `ssimulacra` into their test suite.
_wb_
2021-10-12 08:14:15
Nah
2021-10-12 08:14:25
Ssimulacra is not very good
2021-10-12 08:14:52
With hindsight, I made it way too tolerant for smoothing stuff
2021-10-12 08:17:21
It was made with jpeg, j2k, webp in mind
2021-10-12 08:18:24
It's good at penalizing obvious blockiness, ringing, banding
2021-10-12 08:19:30
But it by design doesn't care so much about lost details or blurred away texture
2021-10-12 08:20:49
(to be fair, it was designed to measure appeal, more than fidelity - it tries to estimate if there would be noticeable artifacts if you cannot compare with the original)
veluca
BlueSwordM The two things I'd like for AV2 to have is no LBD profile(already the case, as evident by the HBD only build switch) and finally moving on from YCbCr to XYB since darker blue tones are a slight problem.
2021-10-12 08:38:25
I *believe* that XYB is considered as a not-AV1/2-problem, i.e. something you'd specify in say the CICP box or whatnot
BlueSwordM
veluca I *believe* that XYB is considered as a not-AV1/2-problem, i.e. something you'd specify in say the CICP box or whatnot
2021-10-12 08:38:53
We did talk about it a few months ago 🤔
2021-10-12 08:39:27
Yes we did: https://discord.com/channels/794206087879852103/794206087879852106/850842147375939585
veluca
2021-10-12 08:40:39
so I ran a bunch of metrics (with the help of <@!604964375924834314> ) on the data here: http://compression.cc/leaderboard/perceptual/test/
2021-10-12 08:41:26
results can be summarized as "all metrics suck" xD but some more numbers...
```
PSNR:     50.5%
SSIM:     49%
VMAF:     57.1%
VMAF-NEG: 57.6%
BT-6:     60.7%
BT-MAX:   60.5%
MS-SSIM:  53.5%
```
Scope
2021-10-12 08:46:46
What about <https://arxiv.org/abs/1910.13646>? Although perhaps all DL metrics are quite slow and can be "hacked" to significantly improve results by various cheating methods
veluca
2021-10-12 10:18:45
DL metrics are *super* slow
fab
2021-10-14 05:08:22
Why does every commit to libavif, webp2, or jxl change at most ~27 lines of code?
_wb_
2021-10-14 05:19:41
10 lines of code can make a huge difference, and changing 2000 lines of code can be just a minor refactoring that doesn't change any behavior. It's not a very useful thing to look at.
Scope
2021-10-18 07:14:28
Also, PNG is the most used format and its share is increasing, which shows how widely used and important lossless is, even on the Web https://w3techs.com/technologies/history_overview/image_format/all/y
_wb_
2021-10-18 07:47:56
I guess there are mostly just a lot of pages that have a png logo / icon somewhere
2021-10-18 07:48:13
even if they have no other images
2021-10-18 07:49:06
e.g. many wikipedia pages just have the wikipedia logo (a png) and no other images
2021-10-18 07:49:52
that makes plots that show "% of pages that use format X" a bit misleading
2021-10-18 07:50:26
as in, it makes it look like PNG is more important than JPEG
2021-10-18 07:51:07
while if you go by bandwidth, or by number of pixels, or whatever other metric, probably JPEG is more important than PNG on the web
2021-10-18 07:52:42
(also there's a bit of a difference between the public web and the part of the web that is behind login screens and not necessarily indexed or accessible, like a big part of facebook/twitter/etc)
Scope
2021-10-18 08:13:33
Yes, the percentage alone says nothing about traffic consumption, and it also depends on the sites. For example, fb/youtube as far as I know do not use PNG for previews or any big images, only for some parts of the UI or icons. But on other sites PNG traffic can be quite prevalent or substantial: a lot of imageboards, art/manga/comics sites, even reddit or discord. Even for small images it is still a significant amount of use; as I remember, Firefox also showed similar statistics (though PNG was probably used less than JPEG). For UI and graphs, SVG is also growing
The_Decryptor
2021-10-18 11:44:27
Some sites (like Github) use SVG for their logo and embed it directly into the page source, wonder if they count things like that
fab
2021-10-24 07:13:15
`--min 16 --max 32 --deltaq-mode=5 -sharpness=1 -s 3 aq-mode=2` — is this how it should be written? I'm using avifenc, please help
lithium
2021-10-25 08:33:34
# High quality lossy anime/manga content dilemma
I'm reposting this information in this channel; maybe someone is interested.
1. General anime/manga content has a lot of high-contrast sharp edges. On AVIF, AV1's in-loop filtering can clear up some annoying tiny artifacts; JXL also gets great results, but some areas keep annoying tiny artifacts. I understand the filter also reduces some detail, but I think for lossy anime/manga content a slight filter is worth it.
2. Specific black-and-white or grayscale manga content can't use the DCT method, so a non-DCT lossy method is needed for this situation. On AVIF, AV1's palette prediction mode handles this very well, but in default JXL the DCT method inflates the file size; the JXL option --separate can handle this, but still has some issues.
> aomenc palette prediction mode - manga image test
> - 962,047
> q15 enable-palette=0 899,496
> q15 enable-palette=1 314,465
> --enable-palette=<arg> Enable palette prediction mode (0: false, 1: true (default))
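For scale, the palette-mode saving in the quoted aomenc test works out to roughly two thirds:

```python
# aomenc manga test file sizes quoted above (bytes)
no_palette = 899_496  # q15, enable-palette=0
palette = 314_465     # q15, enable-palette=1

saving = 1 - palette / no_palette
print(f"palette mode is {saving:.0%} smaller")  # 65%
```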
Fraetor
2021-10-25 08:34:40
How does lossy modular fare?
lithium
2021-10-25 08:41:54
I think jxl lossy modular doesn't have adaptive quantization, so I compared av1 lossy and jxl lossy (VarDCT) in this case.
lithium # High quality lossy anime,manga content dilemma [...]
2021-10-27 07:15:27
Interesting: using --epf=3 with jxl VarDCT -d 0.5 -e 8 can slightly reduce the tiny artifacts on high-contrast sharp edges; it looks slightly better than the default epf value. If we had --epf=4 or 5, could those tiny artifacts be reduced further?
fab
2021-10-27 08:10:28
there are no quality improvements with new build
2021-10-27 08:11:16
for a quality
2021-10-27 08:11:17
for %i in (D:\pictures\DACONVERTIRE\insta25october\*.jpg) do cjxl -s 8 -q 80.55 --patches=0 --epf=1 --use_new_heuristics %i %i.jxl
2021-10-27 08:11:22
can get better
2021-10-27 08:11:34
but don't know if patches 0 works with new heuristics
2021-10-27 08:11:43
<@!794205442175402004> does it work
2021-10-27 08:12:08
and what is the epf value of new heuristics?
_wb_
2021-10-27 08:30:31
epf is not related to use_new_heuristics afaik
fab
2021-10-27 08:45:45
and is patches 0
2021-10-27 08:45:54
does new heuristics use patches at s 8
2021-10-27 08:46:21
the new build you are making, how will it be
lithium
lithium Interesting, use --epf=3 on jxl vardct -d 0.5 -e 8 can slight reduce high contrast sharp edges tiny artifacts, look slightly better than default epf value, if we have --epf=4 or 5 probably can reduce those tiny artifacts?
2021-10-27 09:14:18
Sorry for deleting my first comment; I guess posting on the jxl server is better. I don't know which is the better idea for the high-contrast sharp-edge feature: implementing higher epf, or implementing the --separate option? For now, default av1 lossy uses some filtering to clear up the tiny artifacts on high-contrast sharp edges, but it also reduces detail. And for some grayscale and black-and-white manga content, av1 auto-switches to palette prediction, a non-DCT mode, so in that mode high-contrast sharp edges also don't have tiny artifacts. It's really hard to keep grain/detail and keep high-contrast sharp edges clean at the same time. I guess maybe the --separate option is better? But the border issue is really hard to solve...
fab
2021-10-27 09:15:42
https://discord.com/channels/794206087879852103/794206087879852106/902846589272481792
2021-10-27 09:17:46
jxl is doing great, you just need to wait. and if you want image quality don't use s 7; s 7 is meant for high-appeal 69.8 kb photos to share on the internet at d 1, or photographic images to share on the internet
_wb_
2021-10-27 10:00:30
Epf is good to get rid of subtle ringing in dct mode, separate (once I have time to actually develop it) will be good to avoid dct in areas where it is not a good idea
diskorduser
2021-10-27 01:54:52
Avif not working on Android 12 😭
fab
2021-10-27 01:55:29
do you have a modded Android 12?
diskorduser
2021-10-27 01:55:48
What does modded Android 12 mean?
fab
2021-10-27 01:55:59
custom rom
diskorduser
2021-10-27 01:56:03
Yeah
fab
2021-10-27 01:56:11
flashed with magisk
diskorduser
2021-10-27 01:56:46
You are confusing me
fab
2021-10-27 01:56:53
doesn't matter
2021-10-27 01:57:10
it is a custom ROM
2021-10-27 01:57:29
so probably your ROM doesn't have AVIF
diskorduser
2021-10-27 01:58:29
No. My rom has av1 decoder
fab
2021-10-27 01:58:43
ok
2021-10-27 01:58:56
so all of the most used have avif
2021-10-27 01:59:05
or many of them
2021-10-27 01:59:15
is not disabled
2021-10-27 01:59:29
or not included/supported
diskorduser
2021-10-27 01:59:38
See av1 decoder in Apex folder
fab
2021-10-27 02:00:35
and why can't the gallery, even the ROM's own one, use it
2021-10-27 02:00:37
file a bug
2021-10-27 02:01:05
when it's ready
diskorduser
2021-10-27 02:04:28
Google photos app recognises it as image file but when it opens, image is blank
lithium
2021-10-27 03:30:21
A little curious: is it possible to overcome the separate option's border issue? I mean, mixing VarDCT-algorithm and modular-algorithm areas probably creates a big error at the border, and max_butteraugli will be very upset with that area?
2021-10-27 05:42:41
Tested PR 403 on my issue image; it doesn't seem to change much for -d 0.5 -e 8 --epf 3.
> https://github.com/libjxl/libjxl/pull/403
For now I think epf 3 can slightly reduce the tiny artifacts but can't clear them entirely. I suspect the gradient-color tiny-artifact issue is probably related to high-contrast sharp edges (lines): the red plane has a gradient color and both sides have high-contrast sharp edges, so maybe the sharp-edge artifacts spread error into the adjacent (gradient-color) area?
2021-10-27 06:50:34
About manga that necessarily uses non-DCT mode: it looks like jxl lossy-palette can also process those images, but it still needs some improvement.
2021-10-27 06:51:32
2021-10-28 05:00:09
Tested some drawing-content images. I can find some tiny artifacts on red, wine-red, and dark-red planes or gradient colors where a nearby area has high-contrast sharp edges (lines). The AVIF in-loop filter reduces those tiny artifacts but also risks losing detail. I guess maybe red needs more bitrate for drawing content (planes, gradient colors)? Or the high-contrast sharp edges (lines) spread artifacts to nearby areas; I don't know which is right.
> cjxl -d 0.5 -e 8 --epf=3
2021-10-28 07:11:01
In previous jxl versions, red and blue colors had some tiny artifacts in drawing content; in current jxl blue looks good, but I guess red still has some issues. Maybe the current red-color weight is good for photos, but for drawings it still needs some adjustment.
2021-10-30 10:16:08
What's the Identity transform?
> 4 Transform types:
> DCT, ADST, Flipped (reverse) ADST, Identity
> Up to 16 transform combinations with independent horizontal & vertical selection
> Reduce available combinations based on block size (for efficiency)
> Identity transform useful for screen content coding
>
> page 26, https://parisvideotech.com/wp-content/uploads/2017/07/AOM-AV1-Video-Tech-meet-up.pdf
_wb_
2021-10-30 10:51:57
Not doing anything, i.e. storing pixels (or predictor residuals) in the spatial domain instead of as DCT coeffs
lithium
_wb_ Not doing anything, i.e. storing pixels (or predictor residuals) in the spatial domain instead of as DCT coeffs
2021-10-30 12:50:45
I understand, thank you 🙂 If we don't consider color space, is it a lossless transform?
_wb_
2021-10-30 01:14:08
I don't know how exactly av1 implements it; maybe it's not actually just the identity but also does some quantization or whatever
lithium
2021-10-30 04:33:07
For now I think jxl is great at preserving grain and detail. I understand Jyrki Alakuijala's integral-transform improvements are already merged to the main branch, but for high-contrast sharp-edge (line) content I think it still needs some improvement. // Edit: BlueSwordM: CDEF is CLPF + DDRF. I guess av1, besides the CDEF filter, also has some strong filters from other codecs?
**Directional de-ringing filter**
> AV1 inherits this filter from the Daala codec.
> The de-ringing filter helps you suppress ringing artifacts caused by quantization of transform coefficients.
> It also conditionally replaces pixels affected by ringing artifacts.
**Constrained Low Pass Filter (CLPF)**
> AV1 inherits this filter from the Thor codec.
> By using CLPF, you can correct the artifacts introduced by quantization errors and interpolation filters.
BlueSwordM
lithium For now I think jxl is great for preserve grain and detail, […]
2021-10-30 09:04:49
CDEF is CLPF + DDRF.
lithium
2021-10-31 06:01:30
I understand, thank you 🙂
BlueSwordM
2021-11-06 06:23:46
So, I've decided to stop procrastinating and finish my article by finalizing my encode tests once and for all. <@!532010383041363969> As for your question: https://discord.com/channels/794206087879852103/840831132009365514/900643385617043466
The range of qualities that I've determined to be accurate is (these are mostly for photographic images; artificial images are a bit different):
Minimum quality (<0,1 bpp)
Very low quality (around 0,1 bpp)
Low quality (around 0,3 bpp)
Medium quality (around 0,6 bpp)
Web average (around 1 bpp)
Big (1,5-2,5 bpp)
Near-lossless (around 50% of the lossless encode)
Lossless
Does this seem like a good range?
_wb_
2021-11-06 06:29:09
Those bpps look about right to me for jxl/avif. For jpeg/webp they're a bit on the low end.
BlueSwordM
2021-11-06 06:30:57
Ok then. Thanks for the feedback. I will of course try and manually adjust the quality if images fall too out of range in terms of metric/visual quality, but I guess it should be fine.
lithium
2021-11-06 06:49:52
If describing it with butteraugli distance, probably like this?
d8 = Minimum quality (<0,1 bpp)
d6 = Very low quality (around 0,1 bpp)
d4 = Low quality (around 0,3 bpp)
d2.5 = Medium quality (around 0,6 bpp)
d2 = Web average (around 1 bpp)
d1 = Big (1,5-2,5 bpp)
d0.5 = Near-lossless (around 50% of the lossless encode)
d0 = Lossless
Jyrki Alakuijala
2021-11-07 02:37:47
I like that
2021-11-07 02:38:19
Current web average is about 2 bpp
2021-11-07 02:38:43
depends on how to calculate it 😛
2021-11-07 02:39:06
excluding smallest images makes sense
2021-11-07 02:39:16
weighting the average on what kind of image people are waiting for makes sense
lithium For now I think jxl is great for preserve grain and detail, […]
2021-11-07 02:48:55
I still have pending improvements that I have not been able to prioritize, mostly to reduce ringing. The best improvements are to use more dct2x2 in certain areas (instead of dct8x4, dct8x8 and bigger). I believe that will eventually solve the random infrequent quality degradation in anime at around d1.
BlueSwordM So, I've decided to stop procrastinating and finish my article by finalizing my encode tests once and for all. […]
2021-11-07 02:50:45
I consider that the minimum relevant quality is about q70 in mozjpeg, and q75 is likely the 10th percentile on the internet
2021-11-07 02:51:22
The super-low-bpp images that exist on the internet are there mostly because they have a lot of white background
2021-11-07 02:51:45
Compressing normal photographs without those white backgrounds at those low bpps is not a representative use
2021-11-07 02:53:26
For a corpus specific bitrate for photographs, I'd recommend compressing that corpus with mozjpeg (with quality 75, 85 and 95), looking at the resulting bitrate, and then taking a 50 % lower bitrate for libjxl and 30 % lower bitrate for AVIF
2021-11-07 02:54:09
For WebP, I don't know, possibly 10 % lower bitrate
2021-11-07 02:54:36
The bitrate should be for the whole corpus, i.e., the relevant encoder should decide how to spread the bits between images (not mozjpeg)
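The procedure described above can be sketched in a few lines (the filenames and byte sizes here are made up; a real run would take them from an actual mozjpeg pass over the corpus):

```python
# hypothetical output sizes (bytes) from one mozjpeg pass (e.g. quality 85)
mozjpeg_sizes = {"img1.jpg": 120_000, "img2.jpg": 80_000}
pixels = {"img1.jpg": 1920 * 1080, "img2.jpg": 1280 * 720}

# whole-corpus bitrate, not per-image: total bits over total pixels,
# so the target encoder decides how to spread bits between images
corpus_bpp = 8 * sum(mozjpeg_sizes.values()) / sum(pixels.values())

target_jxl_bpp = corpus_bpp * 0.5   # "50 % lower bitrate for libjxl"
target_avif_bpp = corpus_bpp * 0.7  # "30 % lower bitrate for AVIF"
target_webp_bpp = corpus_bpp * 0.9  # "possibly 10 % lower" for WebP

print(corpus_bpp, target_jxl_bpp, target_avif_bpp, target_webp_bpp)
```

The point of computing one corpus-wide bpp (rather than per-image targets) is exactly what the last message says: the encoder under test, not mozjpeg, gets to allocate bits between images.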
BlueSwordM
Jyrki Alakuijala For a corpus specific bitrate for photographs, I'd recommend compressing that corpus with mozjpeg […]
2021-11-07 04:43:29
I didn't even think of doing it that way 🤔 That is a much simpler way than just using straight bpp 🙂 I'll use mozjpeg output as a reference.
lithium
Jyrki Alakuijala I still have pending improvements that I have not been able to prioritize, mostly to reduce ringing […]
2021-11-07 04:46:51
Thank you for your work 🙂 Also looking forward to jxl implementing automatic non-DCT lossy features (lossy palette or --separate); I hope we can get those in the near future.
Scope
2021-11-07 07:30:09
https://godotengine.org/article/godot-3-4-is-released#lossless-webp
2021-11-07 07:30:47
> Lossless WebP encoding > WebP is a new image format that can replace both JPEG and PNG. It has both lossy and lossless modes which are much more optimized than JPEG and PNG, it has a faster decompression time and a smaller file size compared to PNG. Godot already used WebP for lossy compression, and Godot 3.4 now defaults to using it for lossless texture compression instead of PNGs, thanks to work from Morris Arroad (mortarroad). The importer time might be slightly increased, but the imported file sizes and loading times are significantly reduced.
Jyrki Alakuijala
2021-11-08 08:52:52
Lossless WebP is great, but near-lossless is a true unfound gem in that area 🙂
Scope
2021-11-08 09:29:23
As far as I understand, some texture compressors use similar techniques: when zooming in, a lot of differences become noticeable, but at the typical distance where they are used it's hard to see them
Jyrki Alakuijala
2021-11-08 11:27:28
the way near-lossless WebP works doesn't bring the errors into visibility even with 5x zooming, at least not with the two highest levels (80 and 60), and the errors at 40 are still relatively subtle -- much higher quality than what texture compressors do
Scope
2021-11-08 11:40:58
Yep, texture formats are much simpler, more limited and less efficient because they need GPU hardware support, but they also use something like lossy and limited palette per block, grouping near-color pixels, etc.
Jyrki Alakuijala
2021-11-08 01:53:51
yes, it is surprising how texture compression can be as good as it is while maintaining the direct addressing property
Deleted User
Jyrki Alakuijala Lossless WebP is great, but near-lossless is a true unfound gem in that area 🙂
2021-11-08 01:55:02
Is there any public information on how near-lossless WebP roughly works / what it does?
Jyrki Alakuijala
2021-11-08 01:55:21
the code
Deleted User
2021-11-08 01:55:37
I mean something that is easy to understand. ^^
Jyrki Alakuijala
2021-11-08 01:55:49
it has two separate algorithms, one for non-predicted compression and other for predicted
2021-11-08 01:56:29
it does not allow changing peaks or valleys, i.e., no contrast reduction at all
2021-11-08 01:56:58
it allows changing values that are part of a slope, and within that slope
2021-11-08 01:57:44
for example if you have values 12, 77 and 131 in three consecutive pixels, it would be able to change the 77 to, say, 64
2021-11-08 01:58:14
by changing those values it can reduce entropy
2021-11-08 01:58:27
setting 80 modifies values only by +-1
2021-11-08 01:58:36
setting 60 modifies values only by +-2
2021-11-08 01:58:52
setting 40 modifies values by +-4
2021-11-08 01:59:34
but they are only modified on the slopes or noisy areas, so they are not easy to notice
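The rules above can be sketched in a few lines (a hypothetical illustration of the idea, not the actual libwebp code): only values strictly between their two neighbors, i.e. on a slope, are nudged, never past the slope's endpoints, and never by more than the level-dependent delta.

```python
def near_lossless_row(row, level):
    """Sketch of slope-constrained value adjustment as described above.

    Per the chat: level 80 -> max delta 1, 60 -> 2, 40 -> 4.
    Peaks and valleys are never touched, so contrast is never reduced.
    """
    max_delta = {80: 1, 60: 2, 40: 4}[level]
    out = list(row)
    for i in range(1, len(row) - 1):
        lo, hi = sorted((row[i - 1], row[i + 1]))
        if lo < row[i] < hi:  # strictly on a slope, not a peak or valley
            # snap to a coarser grid (fewer distinct values -> lower entropy),
            # clamped so the value stays strictly inside the slope
            step = max_delta * 2
            q = (row[i] // step) * step + max_delta
            candidate = max(lo + 1, min(hi - 1, q))
            if abs(candidate - row[i]) <= max_delta:
                out[i] = candidate
    return out
```

Snapping slope values to a coarser grid reduces the number of distinct values (and thus entropy) without ever flattening a peak or valley, which matches the "no contrast reduction at all" property.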
Deleted User
Jyrki Alakuijala it has two separate algorithms, one for non-predicted compression and other for predicted
2021-11-08 02:09:39
Does the encoder do this as a kind of preprocessing and then encode the same way as fully lossless would? Because I think it was possible to decode to, say, PNG and back to lossless WebP, and both WebP files would be the same size, right?
Jyrki Alakuijala
2021-11-08 02:38:47
no, it is possible that complications emerge
2021-11-08 02:39:05
for non-predicted everything works in a sane way
2021-11-08 02:39:31
predicted near-lossless is more interesting as the additional quantization needs to be done to prediction residuals
2021-11-08 02:40:23
and that additional quantization may affect which predictors are used the next time (or even possibly make the non-predicted image more interesting (theoretical possibility))
2021-11-08 02:42:10
in the stock encoder I choose filters based on their entropy estimate and rechoosing the same filter is thus very likely -- but not guaranteed
2021-11-08 02:43:30
and there are many more things that can happen -- like there could be 257 colors before the near-lossless, but the color count gets reduced to 256 during the process, too deep inside the encoder to redo it with a palette ... but the next time the image is compressed, it might happen
2021-11-08 02:44:36
the non-predicted near-lossless is stable, i.e., several runs don't degrade further
2021-11-08 02:44:49
the predicted near-lossless is stable, i.e., several runs don't degrade further
2021-11-08 02:45:00
but the mix of them is more interesting
2021-11-08 02:45:12
in practice it is unlikely an issue
Scope
2021-11-08 02:46:27
For JXL, could WebP-style near-lossless also be useful, or will the lossy palette and other methods be more promising and efficient?
_wb_
2021-11-08 02:58:43
it could also be useful, but there are a lot more degrees of freedom to do near-lossless things in jxl - delta palette is one of them, squeeze with selective residual zeroing is another approach (not yet tried), engineering an MA tree and doing different quantization of the residuals depending on the branch (e.g. more quantization in regions where abs(WGH) is larger, i.e. likely contrast masked areas) is another approach (also not yet tried)
2021-11-08 03:00:23
any technique that has been used in the past to do near-lossless webp or near-lossless png could be brought to jxl and be put "on steroids"
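One of the approaches mentioned above, squeeze with selective residual zeroing, could in principle be as simple as this (a hypothetical sketch of the idea, not libjxl code):

```python
def zero_small_residuals(residuals, threshold):
    # drop squeeze residuals whose magnitude is below a visibility
    # threshold (zeros are cheap to entropy-code), keep larger ones exact
    return [0 if abs(r) < threshold else r for r in residuals]

print(zero_small_residuals([0, 1, -3, 5, -1], 2))
```

Because the squeeze transform is hierarchical, zeroing small high-frequency residuals bounds the per-pixel error while leaving large-scale structure exactly intact.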
Jyrki Alakuijala
2021-11-08 04:27:22
I consider that delta palette reflects my best attempt to do the same thing near-lossless tries to achieve. I consider delta palette about 2x more efficient at achieving that.
2021-11-08 04:27:49
If I had the opportunity to build a near-lossless again, it would be the libjxl delta-palette once again 🙂
2021-11-08 04:30:16
we had several attempts within that domain during pik and jpeg xl development
2021-11-08 04:30:46
the other attempts were so far behind vardct efficiency that we decided to just remove them
2021-11-08 04:31:06
delta palette can occasionally beat VarDCT while looking great
2021-11-08 04:31:50
there are still some artefacts here and there, but they are not format driven
Scope
2021-11-09 08:32:22
https://netflixtechblog.com/bringing-av1-streaming-to-netflix-members-tvs-b7fc88e42320
2021-11-09 08:32:47
> All AV1 streams are encoded with 10 bit-depth even if AV1 Main Profile allows both 8 and 10 bit-depth. Almost all movies and TV shows are delivered to Netflix at 10 or higher bit-depth. Using 10-bit encoding can better preserve the creative intent and reduce the chances of artifacts (e.g., banding)
_wb_
2021-11-09 08:36:13
Basically with YCbCr 420 and then some dc quantization, 8-bit is just not really good enough for SDR, if you're going to view it on a relatively bright and good screen. You need grain synthesis just to hide the banding...
2021-11-09 08:37:04
So I think av1/avif might not really be great for HDR
2021-11-09 08:38:48
If 10-bit is needed to do SDR decently, and 10-bit is the max of the Main profile (which is likely what hw decode will support), then there doesn't seem to be a lot of precision left to do HDR well
Scope
2021-11-09 08:42:15
Yep, I think this also applies to AVIF; even according to my own tests, 10-bit helps avoid many quality problems, even for 8-bit source images. Perhaps it's a good idea to start using 10-bit encoding for AVIF in Cloudinary as well, even for 8-bit images?
BlueSwordM
_wb_ If 10-bit is needed to do SDR decently, and 10-bit is the max of the Main profile (which is likely what hw decode will support), then there doesn't seem to be a lot of precision left to do HDR well
2021-11-09 08:43:29
This issue has also been seen with previous codecs, like HEVC for HDR, and even newer standards like EVC/VVC don't seem to want to fix it. Internally though, AV1 encoders do most of their stuff at 16bpc. It is possible to do all calculations in 16bpc and output to 8bpc YCbCr.
_wb_
2021-11-09 08:43:57
8-bit sRGB already can have (subtle) banding issues by itself. Make the gamut larger and take away a bit from R and B — which is effectively the main thing YCbCr does, it's essentially the same thing as G (B-G)/2 (R-G)/2 — and it's not hard to see that precision can become a problem. I wonder how the old JPEG manages to do reasonably well, and I think it might be because it uses 12-bit internally.
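The precision argument can be illustrated with a toy integer version of that decorrelation (this is not the real BT.601 YCbCr matrix, just the G, (B-G)/2, (R-G)/2 analogue mentioned above):

```python
def to_gbr_halved(r, g, b):
    # keep G at full precision, halve the precision of the two
    # colour differences -- roughly what 8-bit YCbCr does to R and B
    return g, (b - g) // 2, (r - g) // 2

# a subtle 8-bit blue ramp: 256 distinct input colours...
ramp = [(0, 0, v) for v in range(256)]
encoded = {to_gbr_halved(*px) for px in ramp}
print(len(encoded))  # ...collapse onto far fewer codes: banding risk
```

Half the distinct levels on a smooth ramp is exactly the kind of loss that shows up as banding on a bright, good screen, before any DCT quantization even happens.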
Scope Yep, i think this also applies to AVIF and even according to my own tests, 10-bit helps avoid many quality problems, even for 8-bit source images Perhaps it is a good idea to start using 10-bit encoding for AVIF in Cloudinary as well, even for 8-bit images?
2021-11-09 08:46:17
Yes that might be a good idea. We would have to check the cpu cost / latency for that though, and also the decode speed penalty on mobile. In the long run, I think avif will only be useful for low-fidelity encoding, and there 8-bit is probably fine.
BlueSwordM This issue has also been seen with previous codecs like with HEVC for HDR, and even newer standards like EVC/VVC don't seem to want to fix this issue. Internally though, AV1 encoders do most of their stuff at 16bpc. It is possible to do all calculations in 16bpc, and output to 8bpc YCbCr.
2021-11-09 08:47:41
Question is mainly what decoders do, and if they could even do things at higher precision while conforming to the spec.
Scope
2021-11-09 08:49:15
Even for low bpp I've often noticed the benefit of 10-bit, 8-bit might only be useful for extremely low bpp (which I don't think is usable even for Web previews)
BlueSwordM
_wb_ Question is mainly what decoders do, and if they could even do things at higher precision while conforming to the spec.
2021-11-09 08:52:22
In this case, it would be:
1. Hybrid 16b+8b coding with an 8b output. What most AVIF streams will likely end up as.
2. a) Full 16b coding (HBD) for 8b output. Possible to do with `--use-16bit-internal` in aomenc, and probably avifenc as well. You get slightly higher coding efficiency, but you still get 8b YUV problems on the decode side.
   b) 16b coding (HBD) for 10b output. 10b AVIF streams, still supported on the HW side.
3. 16b coding (HBD) for most functions, 32b coding (Extended Range High Bit Depth coding) for some other functions, giving a 12b output. Currently not supported by any consumer-level HW decoder.
_wb_
2021-11-09 08:57:18
The thing is, for av1 video, option 2b sounds like a good idea (because hw decode is becoming a thing), but for avif, which will likely remain sw decoded, the decode speed penalty might be a bit much, so option 2a sounds best to me for now
Scope
2021-11-09 08:59:10
With the latest dav1d, 10-bit decoding is also much faster
_wb_
2021-11-09 08:59:30
Yes, maybe it's ok
BlueSwordM
2021-11-09 08:59:37
Also, the problem might be that libgav1 is the decoder used on Android 12, and not dav1d.
2021-11-09 09:00:07
Even the latest libgav1 is quite slow vs dav1d.
_wb_
2021-11-09 09:00:32
Is there any app using the Android system decoder?
BlueSwordM
2021-11-09 09:01:08
Photo Galleries might be the one? Firefox and Chrome use dav1d, so that's not an issue.
_wb_
2021-11-09 09:01:08
Browsers use dav1d
2021-11-09 09:01:45
Yeah if you save an avif on Android 12 and view it with default system viewers it will be slow (but at least it will work)
2021-11-09 09:02:40
We need good non-default image viewers for Android that have dav1d and libjxl in them
2021-11-09 09:02:51
Or does such a thing already exist?
2021-11-09 09:03:22
(also just for all the people who are not on Android 12 yet)
BlueSwordM
2021-11-09 09:04:07
There is one I know about, but it hasn't been updated in a very long time: https://github.com/programmingisart/avif_viewer_android
2021-11-09 09:05:52
Wow, I can't even update the decoder separately.
diskorduser
2021-11-10 03:11:58
The Google Photos app uses the system decoder, I think. I can open avif images with it.
2021-11-10 03:52:52
I can view only 420 8b and 10b avif images. It doesn't even decode 8b 444. Doesn't open 12b 420. (Google Photos app)
BlueSwordM
diskorduser I can view only 420 8b and 10b avif images. It doesn't decode even 8b 444. Doesn't open 12b 420 (Google photos app)
2021-11-10 04:10:46
Understandable for 12b, but libgav1 is supposed to support 4:4:4.
diskorduser
2021-11-10 04:24:45
Not working
2021-11-10 04:25:09
Only 420 8b and 10b works
2021-11-10 04:28:50
Not even supporting 8b 444. Whose lame decision is this -_-
Scope
2021-11-10 04:30:48
Probably intentional limitations for hardware support compatibility
diskorduser
2021-11-10 04:31:31
If someone downloads 8b 444 avif from a website, it is useless on phone.
BlueSwordM
diskorduser Not even supporting 8b 444. Whose lame decision is this -_-
2021-11-10 04:44:40
As Scope has said, this is likely an intentional limitation: https://chromium.googlesource.com/codecs/libgav1/ "libgav1 is a Main profile (0) & High profile (1) compliant AV1 decoder."
diskorduser
2021-11-10 04:50:28
So it is compliant with 8/10b 444?
2021-11-10 04:52:55
2021-11-10 04:58:49
Also if HW decoder support high profile, it should include 10b 444. No?
BlueSwordM
diskorduser Also if HW decoder support high profile, it should include 10b 444. No?
2021-11-10 05:00:36
Yeah, but that does not exist yet, and will likely not exist for a very long time on mobile devices.
Scope
2021-11-10 05:01:26
I think these restrictions purposely force using the Main profile (so as not to create compatibility differences between decoders), because hardware decoders mostly support only the Main profile
_wb_
2021-11-10 06:48:13
It might also be that they want to keep things in yuv420 to give it like that to the GPU
lithium
2021-11-10 07:07:48
> DZgas:
> After so much time, I can say for sure that JPEG XL is a promising codec,
> and the last codec that humanity will have until we make a new technological "Jump Of Power".
> but AVIF is an excellent solution for the Internet,
> and especially high hopes to see it on sites where there is a large number of arts and paint.
> Considering all the tests, I would even say that avif is almost a vector format,
> in terms of advantages, raster vectors hah but because vector functions is the basis of the format,
> it suffers from parallelization problems and progressive,
> while jpeg makes noise more real than real noise in photos, compressing photos perfectly.
>
> https://encode.su/threads/3397-JPEG-XL-vs-AVIF?p=71427&viewfull=1#post71427

This comment is from an encode.su member (DZgas). I'm a little curious why this member says AVIF is almost a vector format? As far as I know, vector formats are for data sheets or simple graphics, like SVG. And why would art and painting content choose AVIF? As far as I know, jxl's XYB color space is better than AVIF's YCbCr 4:4:4, jxl's butteraugli is better than AVIF's default PSNR-HVS, and jxl is better than AVIF at preserving detail and grain. But I think jxl still needs features to reduce tiny artifacts on sharp edges and an automatic color palette; if it implements those, I think jxl can also gain an advantage for art and painting content.
Scope
2021-11-10 07:25:03
I think it's because AVIF is good at preserving lines and edges in palette mode, even with heavy compression. Judging from past posts, the comparisons were most likely made at low bpp, where AVIF is very strong compared to other formats on art and vector-like images
_wb_
2021-11-10 07:45:53
Low bpp avif tends to look a bit like the result of vectorization: you get an oil painting like effect (smudging and smoothing), but hard lines are preserved quite nicely, without ringing/blockiness/jaggies.
2021-11-10 07:48:03
If the image is already like that, then the artifacts of avif will tend to be ok
lithium
2021-11-11 04:42:17
I understand, that makes sense. 🙂 About lines and edges, I think current jxl still has some issues at high bpp (d0.5~1.0). And I think AV1 does not always use palette mode for non-photo content; in my tests, AV1 has some heuristics to decide between palette mode and DCT. For some drawing images, enabling or disabling palette mode gives the same result, but for some manga images, disabling palette mode gives a much bigger file size.
2021-11-14 04:10:53
# aomenc disable cdef and palette for drawing test
Looks like with AV1's cdef, palette, and intra-edge-filter features disabled, AV1 still looks fine. 🤔 And I have a question: when I use nomacs and XnView MP, I can see some tiny artifacts on this test png, but in Chrome I can't see as many tiny artifacts on it. Does Chrome do some pre-processing when decoding png?
> avifenc --min 0 --max 63 -d 12 -s 3 -j 6 -a end-usage=q -a cq-level=7
> -a color:enable-dnl-denoising=0 -a color:enable-chroma-deltaq=1
> -a color:sharpness=2 -a color:qm-min=0 -a color:enable-qm=1 -a color:deltaq-mode=4
> -a enable-cdef=0 -a enable-palette=0 -a enable-intra-edge-filter=0
>
> cjxl -d 0.5 -e 8 -j --epf=3 --num_threads=12
diskorduser
lithium # aomenc disable cdef and palette for drawing test […]
2021-11-15 06:07:26
Could be xnview introduces artefacts or something?
lithium
2021-11-15 06:57:41
I don't know. In my tests, nomacs and XnView MP both give me the same result, and when I test the uncut compressed jxl image (d0.5-e8-epf3) I can also see those tiny artifacts in nomacs and XnView MP, but Chrome's decode result is slightly different. I already checked the original jpg, the compressed jxl, and the comparison png; those images don't have an ICC profile or gamma.
2021-11-15 07:00:44
jpg q100 444 is original image
2021-11-17 07:58:54
wb, thank you for your reply on the AV1 server 🙂 I plan to spread jxl information to Chinese users. For now, I think jxl has a great lossless mode for photo and non-photo content, and a great lossy mode (VarDCT) for photo content. But for lossy non-photo content, I still have some quality and feature concerns: in my tests of AVIF (aomenc) and jxl lossy mode (VarDCT) at high quality settings and very similar file sizes on non-photo content, I think AVIF is more suitable for lossy non-photo content right now. jxl is also good, but for visually lossless use, fewer artifacts is better. (I already tested jxl lossy modular, but I can't control quality and file size very well in that mode.)
Both codecs produce good results at high quality settings and similar file sizes. AVIF's in-loop filters clean up tiny artifacts on sharp edges and lines, and AVIF has an automatic color palette feature to avoid the DCT worst case. jxl has tiny artifacts on sharp edges and lines, but I think it correctly preserves detail and noise; for now, jxl doesn't have an automatic color palette feature to avoid the DCT worst case. I think AVIF prefers preserving object edges and lines and reducing artifacts, while jxl prefers preserving surface detail and noise. Both are strong image encoders, but for still images I prefer jxl.
I'm delaying my plan for now and waiting for jxl to improve non-photo quality and features, because I worry that if I spread the information that AVIF is more suitable for lossy non-photo content, it will reduce jxl's advantage in the new still-image adoption fight. I really hope jxl can win this fight. Please consider increasing the priority of improving non-photo content quality and features. 🙂
2021-11-20 11:40:20
Why does Cloudflare not support JPEG XL? 😢
> https://blog.cloudflare.com/images-avif-blur-bundle/
diskorduser
2021-11-20 11:47:49
Probably browsers don't support it by default.
_wb_
2021-11-20 12:32:04
Browser support is one reason, worries about security of a 0.x libjxl that is not implemented in Rust is another reason.
Scope
2021-11-21 02:48:03
https://news.ycombinator.com/item?id=29281360
2021-11-21 03:01:53
https://blog.benjojo.co.uk/post/not-all-jpegs-are-the-same
novomesk
2021-11-21 11:47:13
According to the blogger, lack of browser support is the obstacle: https://blog.hellmar-becker.de/2021/11/15/getting-my-graphics-smaller-experimenting-with-avif/
The_Decryptor
2021-11-22 12:36:40
For whatever reason Edge doesn't support it either, even if the platform supplies the necessary bits. I'm sure that'll trip people up if they assume it's exactly the same as Chrome
lithium
2021-11-24 09:49:48
Some av1 still image quality improvement. (From av1 server BlueSwordM) * Add tune=image_perceptual_quality > https://aomedia-review.googlesource.com/c/aom/+/147022 * Auto select noise synthesis level for all intra > https://aomedia-review.googlesource.com/c/aom/+/147641
nathanielcwm
novomesk According the blogger, lack of the browser support is obstacle: https://blog.hellmar-becker.de/2021/11/15/getting-my-graphics-smaller-experimenting-with-avif/
2021-11-24 11:18:42
I personally find PNG to often be smaller than JPEG for screenshots <:kekw:808717074305122316> but my screenshots are normally a lot smaller and less busy than his
Fraetor
nathanielcwm i personally find png to oftentimes be smaller than jpeg for screenshots <:kekw:808717074305122316> but my screenshots are normally a lot smaller and less busy than his
2021-11-24 01:11:16
PNG compresses flat colours and lines, as well as repeating content, a lot better, which is the bulk of most screenshots' content. JPEG would work better for screenshots with lots of photographic-like content, such as 3D video game screenshots.
nathanielcwm
2021-11-24 01:12:53
and his screenshots don't fit into that category
2021-11-24 01:13:15
https://blog.hellmar-becker.de/assets/2021-11-07-3-filter.avif
2021-11-24 01:13:25
oh forgot that avifs don't embed
Fraetor
2021-11-24 01:33:25
So their point about gradients not compressing amazingly in PNG is valid, but their test image doesn't have any gradients. It's hard to predict what PNG would get, as we don't have the lossless source image.
Scope
2021-11-24 05:00:52
https://phoboslab.org/log/2021/11/qoi-fast-lossless-image-compression > Introducing QOI — the Quite OK Image format. It losslessly compresses RGB and RGBA images to a similar size of PNG, while offering a 20x-50x speedup in compression and 3x-4x speedup in decompression. All single-threaded, no SIMD. It's also stupidly simple
2021-11-24 05:01:49
https://phoboslab.org/files/qoibench/
190n
2021-11-24 06:39:49
good it's already posted
Scope
2021-11-24 08:44:06
https://news.ycombinator.com/item?id=29328750
2021-11-24 09:26:38
2021-11-24 09:29:07
Maybe for JXL it is also possible to make an even faster and simpler mode, one that encodes/decodes faster than PNG but compresses not much worse
_wb_
2021-11-24 10:13:48
Yes, maybe it's even possible to do something similar to that QOI format within jxl, actually
2021-11-24 10:14:26
Using delta palette, prefix coding and lz77
diskorduser
2021-11-25 04:27:23
Does it compress better for screenshots?
yurume
diskorduser Does it compress better for screenshots?
2021-11-25 04:34:18
according to qoibench QOI is comparable or slightly inferior to libpng in size for that use case
Scope
2021-11-25 06:31:07
🤔 http://www.gfwx.org/
2021-11-25 06:31:23
diskorduser
2021-11-25 06:39:40
Nice
Scope
Scope https://news.ycombinator.com/item?id=29328750
2021-11-25 04:42:26
It's interesting that such a quick and simple format concept got so much discussion and more than 1k stars on GitHub on the first day (for example, libjxl currently has less than 300)
_wb_
2021-11-25 04:43:30
the virality of stuff is always hard to predict
2021-11-25 04:44:10
I think well-explained simple stuff does attract more attention than more complicated stuff
yurume
2021-11-26 02:06:06
simple enough that you can actually chime in
Scope
2021-11-26 02:18:05
🤔
yurume
2021-11-26 02:23:00
zpng as in PNG but using zstd?
Scope
2021-11-26 02:25:02
Yeah, something like that <https://github.com/catid/Zpng>
_wb_
2021-11-26 06:26:03
The thing with png encode time is that you can massively adjust it by how you do filtering and most of all what kind of deflate effort you use
Kleis Auke
Scope 🤔
2021-11-26 12:48:39
FWIW, here's a bit more info about `t0rakka` and the (now removed) `mango` library: <https://github.com/libvips/libvips/issues/1537>. Curiously, all linked Reddit posts (benchmarks, copyright issues) in that issue were removed.
_wb_ The thing with png encode time is that you can massively adjust it by how you do filtering and most of all what kind of deflate effort you use
2021-11-26 12:58:04
Ah yes, I remember a conversation about how changing the default `Z_FILTERED` strategy to `Z_RLE` in libpng provides a 10% speed improvement. <https://github.com/lovell/sharp/commit/3917efdebd91ddaf3878a276a6c7b79edce2d2fd#r41720871>
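The knobs being discussed map directly onto zlib's API; a quick way to see their effect (the byte pattern below is a toy stand-in for filtered scanlines, not a real PNG):

```python
import zlib

data = bytes(range(256)) * 1024  # stand-in for filtered PNG scanline data

for level, strategy, name in [
    (1, zlib.Z_DEFAULT_STRATEGY, "fast default"),
    (9, zlib.Z_DEFAULT_STRATEGY, "max effort"),
    (6, zlib.Z_FILTERED, "libpng's default strategy"),
    (6, zlib.Z_RLE, "run-length only, much cheaper to encode"),
]:
    c = zlib.compressobj(level=level, strategy=strategy)
    out = c.compress(data) + c.flush()
    print(f"{name}: {len(out)} bytes")
```

On real PNG data the filter choice per scanline adds a second dimension on top of this, which is why PNG encode times can vary by an order of magnitude between encoders.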
Scope
2021-11-26 02:36:35
Yep, there are still some blogposts: <https://t0rakka.silvrback.com/png-benchmark> <https://t0rakka.silvrback.com/jpeg-decoding-benchmark>
2021-11-26 02:43:43
And also Wuffs/PNG decoder: https://github.com/phoboslab/qoi/pull/11
yurume
2021-11-26 02:51:00
Wuffs is interesting, but the Wuffs PNG implementation is very complex, to the point that I don't necessarily believe it is correct (though it will definitely be memory-safe)
Scope
2021-11-26 06:08:46
<https://www.reddit.com/r/programming/comments/r1amo0/lossless_image_compression_in_on_time/hm5t1iq/>
2021-11-26 06:36:31
https://twitter.com/richgel999/status/1464236893495828486
monad
2021-11-27 08:48:55
3. tested with Lenna
yurume
2021-11-27 10:03:21
Lenna is IMHO not a good representative sample image because its color was essentially washed out
monad
2021-11-27 10:44:22
It's only lasted because of tradition.
Scope
2021-11-27 03:06:55
Yep, the quite popular Kodak set is also outdated and does not match modern image quality. Maybe it's okay for testing some basic encoder functionality, but for benchmarking lossy/lossless formats these are not very good images
_wb_
2021-11-27 04:05:36
They are 422 too, iirc
Fraetor
2021-11-27 04:25:12
I guess a good test case would be a load of people's social media photos. The only issue is that there is usually no copy without prior lossy compression.
grey-torch
2021-11-29 09:05:27
somebody's already making a qoi image viewer https://github.com/floooh/qoiview
Scope
2021-11-29 08:39:13
<:Thonk:805904896879493180> https://twitter.com/richgel999/status/1465221038200004611
2021-11-29 08:41:34
https://twitter.com/ProgramMax/status/1465249323202060288
2021-11-29 11:32:09
https://encode.su/threads/3756-FAPEC-library
2021-11-29 11:32:52
🤔 > **A new image compression algorithm: Hierarchical Pixel Averaging (HPA)** > FAPEC is the fully adaptive version of PEC (Prediction Error Coder), an entropy coding algorithm that requires an adequate pre-processing stage. It can be as simple as a differentiator, just predicting each new sample as being equal to the previous sample. Nevertheless, more elaborated pre-processing stages providing better predictions will obviously generate smaller prediction errors and, thus, FAPEC will obtain better compression ratios. In this work we have developed a new pre-processing stage specific for images. We called it HPA, which stands for “Hierarchical Pixel Averaging”. The goal was to obtain an efficient image compression algorithm allowing to progressively move from lossless compression to lossy compression with different quality levels
_wb_
2021-11-30 06:35:26
Sounds a lot like the Squeeze of jxl modular...
Jim
2021-12-01 03:28:25
https://hackaday.com/2021/11/30/a-super-speedy-lightweight-lossless-compression-algorithm/
190n
2021-12-01 05:27:28
https://discord.com/channels/794206087879852103/805176455658733570/913111946209918987
Scope
2021-12-01 05:47:07
🤔 https://github.com/nigeltao/qoi2-bikeshed/issues/21
190n
2021-12-01 05:48:38
<:banding:804346788982030337>
2021-12-01 05:48:45
4 bits per channel <:monkaMega:809252622900789269>
Scope
2021-12-01 06:03:00
The current QOI implementation is good only compared to the fastest compression options of not-very-efficient PNG encoders. But its simplicity and popularity have brought a lot of people with new ideas and experiments
_wb_
2021-12-01 06:14:05
That's good. Maybe one or two of them will learn about jxl and come up with an encoder improvement.
Scope
2021-12-01 06:25:21
Some interesting ideas might eventually be possible to implement in JXL as `--effort=0` or `--effort=-1` or something like that
2021-12-01 06:49:29
PNG plans to be the format that contains everything? https://twitter.com/richgel999/status/1465804369782362114
yurume
2021-12-01 06:50:04
there was also a preliminary proposal to make zpng kind of official
2021-12-01 06:50:37
personally I think any further extension to PNG should use a different file extension, a la .zip vs. .zipx
2021-12-01 06:53:53
also I believe PNG should not be a kitchen sink (so to say) of future compression formats
2021-12-01 06:54:17
it should remain as a lossless format, and its decompression performance should be at least on par with the current zlib
2021-12-01 06:55:00
because otherwise the PNG format minus zlib is fundamentally lacking in my opinion
Scope
2021-12-01 06:55:16
In my opinion it is a very bad decision to add some modifications or extensions to an existing format, instead of developing a more efficient and faster one from scratch
yurume
2021-12-01 06:55:30
well we have APNG for a reason
2021-12-01 06:56:06
in most cases it is a bad decision to do so, but not universally
Scope
2021-12-01 06:58:55
But APNG is arguably a separate format with different functionality, because plain PNG had no animation support at all; it's like making yet another PNG extension that adds audio
The_Decryptor
2021-12-01 07:02:52
https://twitter.com/richgel999/status/1465837973954572295 < Wish I still had a Twitter account to ask questions, like how that works with colour management or resampling
_wb_
2021-12-01 07:12:34
APNG took ages to get support, and still isn't really there - plenty of viewers still show just the fallback still
yurume
The_Decryptor https://twitter.com/richgel999/status/1465837973954572295 < Wish I still had a Twitter account to ask questions, like how that works with colour management or resampling
2021-12-01 07:13:43
they are specifically designed for GPU, where you can leverage a lot of mini-cores that do the lookup by themselves
_wb_
2021-12-01 07:13:46
The kitchen sink approach is basically what we have TIFF for already
yurume
2021-12-01 07:14:13
by comparison, "traditional" image formats are meant to be decompressed into memory first
2021-12-01 07:14:51
those kinds of texture compression formats are very different from typical image formats because they need to be fully random-accessible
_wb_
2021-12-01 07:15:08
GPU formats are useful, but fixed bpp per block is quite a strong constraint to get good compression, especially for non-photo
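The "fixed bpp per block" constraint _wb_ mentions can be made concrete with the BCn family: every 4x4 texel block occupies a fixed byte budget (8 bytes for BC1, 16 for BC7), so compressed size depends only on resolution, never on content. A quick back-of-the-envelope calculation:

```python
def bcn_size(width, height, bytes_per_block):
    # BCn formats store each 4x4 texel block in a fixed number of bytes,
    # which is what makes them random-accessible on the GPU but also
    # caps how well they can compress easy content.
    blocks = ((width + 3) // 4) * ((height + 3) // 4)
    return blocks * bytes_per_block

print(bcn_size(1024, 1024, 8))   # BC1: 8 bytes/block -> 4 bits per pixel
print(bcn_size(1024, 1024, 16))  # BC7: 16 bytes/block -> 8 bits per pixel
```

A flat single-color 1024x1024 texture still costs the full 512 KiB in BC1, whereas PNG or JXL would compress it to almost nothing; that is the trade-off being discussed.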
yurume
2021-12-01 07:15:32
Binomial's Basis Universal is somewhere in between the full texture compression format and the full image compression format
_wb_
2021-12-01 07:16:09
YUV420 is in a way the first GPU format
yurume
2021-12-01 07:16:25
yeah
2021-12-01 07:16:45
chroma subsampling is definitely a compression algorithm
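The point that chroma subsampling is itself compression is easy to quantify: 4:2:0 keeps the full-resolution luma plane but only one chroma sample per 2x2 block, i.e. 1.5 bytes per pixel instead of 3 at 8 bits per sample. A small sketch of that byte count:

```python
def yuv420_bytes(width, height, bytes_per_sample=1):
    # Full-resolution luma plane plus two quarter-resolution chroma planes.
    luma = width * height
    chroma = 2 * ((width + 1) // 2) * ((height + 1) // 2)
    return (luma + chroma) * bytes_per_sample

w, h = 1920, 1080
print(yuv420_bytes(w, h))  # 3110400 bytes, 1.5 bytes/pixel
print(w * h * 3)           # 6220800 bytes for 4:4:4, 3 bytes/pixel
```

A fixed 2:1 reduction before any entropy coding even runs, which is why it counts as a (lossy) compression algorithm in its own right.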
190n
Scope PNG plans to be the format that contains everything? https://twitter.com/richgel999/status/1465804369782362114
2021-12-01 07:48:37
at that point what's the point of the png format if it's incompatible? plus doesn't tiff already sorta contain everything?
_wb_
2021-12-01 07:56:22
I wonder if some simple subset of modular jxl could be used as a GPU format (assuming of course that GPUs would support it). Say using the smallest group size (128x128): you'd have roughly 128 independent tiles for a 2K screenful, 512 tiles for a 4K screenful, and 2K tiles for an 8K screenful. Use only simple entropy coding and predictors, and you should have something that decodes very quickly if you can use one core per group.
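Taking the 128x128 group size literally, the tile counts mentioned above work out roughly as follows (a back-of-the-envelope calculation by the editor, not anything from the jxl spec):

```python
import math

def groups(width, height, group_size=128):
    # Number of independently decodable tiles at a given group size,
    # rounding partial tiles up at the right and bottom edges.
    return math.ceil(width / group_size) * math.ceil(height / group_size)

for name, (w, h) in {"2K": (1920, 1080),
                     "4K": (3840, 2160),
                     "8K": (7680, 4320)}.items():
    print(name, groups(w, h))  # ~128, ~512, ~2K tiles respectively
```

The exact counts (135, 510, 2040) land close to the round figures quoted, and each tile could in principle be handed to its own core or shader invocation.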
Jim
Scope Some interesting ideas in the future might be possible to implement in JXL to `--effort=0` or `--effort=-1` or like that
2021-12-01 11:12:52
--effort=-500 👀
VEG
_wb_ APNG took ages to get support, and still isn't really there - plenty of viewers still show just the fallback still
2021-12-01 03:39:02
GIF animation isn't supported by all image viewers either, so that's fine. What matters is that all modern browsers support it.
_wb_
2021-12-01 03:41:55
yes, but it took until 2017, and there are still browsers left that don't support apng (https://caniuse.com/apng)
Fox Wizard
2021-12-01 03:42:28
Lol, of course IE doesn't support it <:KekDog:884736660376535040>
VEG
2021-12-01 03:42:31
Yeah, I know, it took a long time to persuade the Chrome developers to support it.
2021-12-01 03:43:23
But now it is safe to use it on the web.
_wb_
2021-12-01 03:43:27
APNG was created in 2004, and was itself a compromise / dumbed down version of MNG, which dates back to 2001
VEG
2021-12-01 03:45:09
Steam uses APNG a lot 🙂 Basically, all the animated stickers and images are APNG: https://store.steampowered.com/points/shop/c/avatar/cluster/1
_wb_ APNG was created in 2004, and was itself a compromise / dumbed down version of MNG, which dates back to 2001
2021-12-01 03:46:15
It was created later, in 2008
2021-12-01 03:46:22
It is not based on MNG
_wb_
2021-12-01 03:46:23
so if the past is an indication of the future, if you would make a new PNG extensions with better compression / gpu support / whatever, if you finalize the spec in 2022, you can expect all major browsers to support it in 2039
2021-12-01 03:46:52
well it also does animation, like MNG, just in a way simpler way
VEG
2021-12-01 03:47:57
MNG was too complex, so it was removed from Firefox. But they needed a simple format with alpha support for small animations like "loading" for their GUI, and APNG was created for this.
2021-12-01 03:49:14
https://bugzilla.mozilla.org/show_bug.cgi?id=204520
2021-12-01 03:49:16
https://bugzilla.mozilla.org/show_bug.cgi?id=195280
2021-12-01 03:49:20
The MNG drama
2021-12-01 03:53:19
JNG was JPEG + alpha channel in PNG container
2021-12-01 03:53:49
If they hadn't removed it in 2003, it would probably have become popular
_wb_
2021-12-01 04:15:39
AJNG could have existed in 1998 or so if someone had proposed it and it had caught on, and it would have killed GIF
VEG
2021-12-01 04:23:40
Yeah, but JNG was already supported by Mozilla 🙂
2021-12-01 04:24:02
They just removed it with MNG to save 100-200KB of space
Fraetor
2021-12-01 11:23:04
Even if PNG was originally created to be extensible, IMO it would be annoying if they broke decoder compatibility. Reusing large parts of PNG makes sense to reuse existing good code, but it should have a new MIME type/extension.
VEG
2021-12-04 07:38:43
I wouldn't object to a `Content-Encoding: png2` that would work for PNG like `Content-Encoding: jxl` does for usual JPEG files.
2021-12-04 07:39:15
Browser accepts new recompressed data, but the user gets usual PNG which is compatible with everything.
_wb_
2021-12-04 07:39:37
Content encodings require byte-identical roundtrips though, not just pixel-identical
VEG
2021-12-04 07:39:54
Yeah, but JPEG XL allows that for JPEG 🙂
2021-12-04 07:40:05
Probably it could be done for PNG also?
_wb_
2021-12-04 07:40:23
It can, but it comes at a bigger cost in the bitstream reconstruction data
2021-12-04 07:40:51
PNG has more choices in the entropy coding itself than JPEG
VEG
2021-12-04 07:41:02
Oh... I see.
2021-12-04 07:42:17
Hope that JPEG XL will eventually be included into release versions of browsers.
yurume
2021-12-04 07:46:11
if you are interested in PNG reconstruction with JPEG XL, I once wrote an experimental compressor based on that: https://gist.github.com/lifthrasiir/5c24058f21ce6fba231cf1bfba45bf28
2021-12-04 07:47:07
...because there actually is a tool (in fact, a multitude of them) that gives you those "choices in the entropy coding" in a compact form so you can recompress the bitstream
2021-12-04 07:47:30
but it is very slow and would be unacceptable for production uses
_wb_
2021-12-04 07:48:10
It could also make sense to have special cases for common encoders like "whatever libpng does by default"
yurume
2021-12-04 07:48:21
it might be possible to write an alternative to preflate, but I haven't investigated further
_wb_ It could also make sense to have special cases for common encoders like "whatever libpng does by default"
2021-12-04 07:48:34
this is actually the case for preflate, where it tries multiple common options for zlib
_wb_
2021-12-04 07:48:43
Ah ok
yurume
2021-12-04 07:48:48
and without that the reconstruction data would bloat a lot (cause you have to record where you have chosen to put length-distance codes as opposed to literal codes)
_wb_
2021-12-04 07:49:05
Yes, I imagine
2021-12-04 07:50:30
So for very optimized pngs (ones where very exhaustive lz77 search was done), it would ironically have the biggest problem compressing the reconstruction data
monad
2021-12-08 05:40:42
Certainly not.
2021-12-08 05:43:22
(in b4 Jon proves me wrong)
_wb_
2021-12-08 06:01:54
Unlikely to happen anytime soon, but could in principle be done, I think
2021-12-08 06:03:31
I don't know how big the gap is atm, feel free to do your own testing with e1 lossless jxl vs QOI
2021-12-08 06:04:43
We do support > 8-bit and other channel counts than 4, so we cannot specialize things as much as QOI can
BlueSwordM
2021-12-08 04:18:27
It is also important to add that there are a lot of ways to speed up encoding and decoding on the PNG side as well. For compression and decode performance, just utilizing libdeflate's better SIMD support can help a lot in that regard. It also doesn't help that QOI is much faster for simple screen content.
2021-12-08 04:18:37
It isn't nearly as fast for photographic style content.
diskorduser
2021-12-08 06:02:03
It's useless then
_wb_
2021-12-08 07:49:19
Well it's useful to make people interested in image compression
BlueSwordM
diskorduser It's useless then
2021-12-08 08:39:57
Not really.
2021-12-08 08:40:17
I don't know of any PNG encoders using the fast deflate library, which is the main problem.
diskorduser
2021-12-09 04:13:40
Someone said it's not good for screenshots too.
Scope
2021-12-10 10:49:12
QOI is more about simplicity; it is good for those who want to understand how the format works or to quickly make their own implementation, but it is not always good for speed (for example, it is not designed for SIMD) or efficiency (improving efficiency adds complexity)
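To show just how simple QOI is: the whole format is a handful of byte-level ops. Below is an editor's sketch of two of them, QOI_OP_RUN and QOI_OP_RGB, with tag values taken from the public QOI spec; it omits the header, the 64-entry index, and the DIFF/LUMA ops, so it is an illustration of the idea rather than a complete encoder.

```python
QOI_OP_RUN = 0xC0  # 2-bit tag 11, 6-bit run length (bias -1, max 62)
QOI_OP_RGB = 0xFE  # full RGB literal: tag byte followed by r, g, b

def encode_rgb_subset(pixels):
    # pixels: list of (r, g, b) tuples; emits only RUN and RGB ops.
    out = bytearray()
    prev = (0, 0, 0)  # QOI starts with a black previous pixel
    run = 0
    for px in pixels:
        if px == prev:
            run += 1
            if run == 62:          # max run length per op
                out.append(QOI_OP_RUN | (run - 1))
                run = 0
            continue
        if run:
            out.append(QOI_OP_RUN | (run - 1))
            run = 0
        out.append(QOI_OP_RGB)
        out.extend(px)
        prev = px
    if run:
        out.append(QOI_OP_RUN | (run - 1))
    return bytes(out)

print(encode_rgb_subset([(1, 2, 3)] * 3).hex())  # fe010203c1
```

Even with only these two ops the single-pass, byte-oriented structure is visible, and also why it resists SIMD: each op's length depends on the previous pixel's value.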
_wb_
2021-12-11 12:01:11
https://twitter.com/jonsneyers/status/1469636856052604937?s=20
2021-12-11 12:02:48
such trickery can also be done for jxl (but not as easily using just a hex editor) by adjusting the global X and B quantweights, right <@!179701849576833024> ?
2021-12-11 12:06:44
I think it would be useful to have a tool to modify orientation, saturation, sharpness and crop of a jxl file in a reversible way: i.e. add a box in the container to store the original values of orientation, X/B quantweights, gaborish and image size + frame offset and you can basically very cheaply adjust these things without doing recompression, with a way to undo the changes
veluca
2021-12-11 12:15:51
you can also modify the cfl base factors
2021-12-11 12:15:56
probably the easiest way
2021-12-11 12:16:04
or the opsin matrix
_wb_
2021-12-11 12:27:08
Opsin matrix is probably the most versatile way to do color adjustments without re-encoding
2021-12-11 12:27:49
And would affect the whole image, including splines and noise and modular patches
lithium
2021-12-11 02:34:29
IKEA web pages for Norway started to use AVIF.🤔 > Yesterday we enabled AVIF for the majority of larger product images on product pages in Norway. > Pages are on http://ikea.com/no/no/p/* > > https://twitter.com/twittmdl/status/1469205796051423236 > https://www.reddit.com/r/AV1/comments/rdwxuv/ikea_web_pages_for_norway_started_to_use_avif/
_wb_
2021-12-11 04:21:18
Sounds like a bad plan to me
2021-12-11 04:22:02
Product images benefit a lot from high-fidelity preservation of fine texture etc
2021-12-11 04:22:46
When buying a couch or whatever online, you want to know how it actually looks, not how avif smoothed it out
fab
2021-12-11 04:28:20
some pages aren't accessible to me
2021-12-11 04:28:56
ikea even in italy uses avif
2021-12-11 04:30:18
but not the products
2021-12-11 04:32:23
they are done on speed five
spider-mario
2021-12-11 08:21:02
> The findings so far indicate a drop in image transfer size of 10-20% (70-150k) for those pages. I am a bit confused by the fact that this is a “finding” that comes after it’s rolled out to production
2021-12-11 08:21:33
did they just say “let’s compress with AVIF with whatever settings and see what size comes out”, or?