JPEG XL

Info

rules 57
github 35276
reddit 647

JPEG XL

tools 4225
website 1655
adoption 20712
image-compression-forum 0

General chat

welcome 3810
introduce-yourself 291
color 1414
photography 3435
other-codecs 23765
on-topic 24923
off-topic 22701

Voice Channels

General 2147

Archived

bot-spam 4380

other-codecs

_wb_
2021-06-16 07:20:08
What about prediction modes? Can in principle all modes allowed by the bitstream be used in av1 encoders, or are there some that they do not even try so will never be used?
BlueSwordM
_wb_ What about prediction modes? Can in principle all modes allowed by the bitstream be used in av1 encoders, or are there some that they do not even try so will never be used?
2021-06-16 07:26:26
That I do not know. I have not analyzed any of the speed features/specifications to know the extent of what prediction modes can be used.
_wb_
2021-06-16 07:30:09
It's quite hard to estimate a bitstream's potential
2021-06-16 07:31:03
Which is why usually reference implementations are very slow and exhaustive, to make sure they're as close as possible to the real potential of the bitstream
2021-06-16 07:31:19
(then later trade-offs and shortcuts are made)
2021-06-16 07:32:49
With cjxl that is not really the case: even tortoise is quite far from exhaustive, doing things exhaustively would just be infeasible
2021-06-16 07:33:17
I wonder to what extent it is the case with libaom at slowest setting
Deleted User
2021-06-17 11:53:44
How would I go about creating a yuva444p AV1 video (from RGBA PNGs)? (<@!321486891079696385> ?)
_wb_
2021-06-17 01:25:55
I don't think the av1 bitstream itself supports alpha (otherwise why wouldn't they do that in avif?)
2021-06-17 01:27:21
so likely you encode a yuv444p av1 for the color, then a yuv400p av1 for the alpha, and then do some container muxing magic at the mp4/mkv level
Deleted User
2021-06-17 01:32:36
Well, guess I'll keep using animated WebP then. Even if that's an unpopular choice here. 😛
Scientia
2021-06-18 04:18:17
avif is better because it doesn't force 420 sampling but no firefox support
rappet
2021-06-18 02:41:42
I like the fake noise feature
2021-06-18 02:42:32
But that's more for Videos.
2021-06-18 02:42:54
I hate how an anime Blu-ray is 2 gigabytes of anime and 18 gigabytes of noise.
2021-06-18 02:44:45
Ok depends on the Anime...
2021-06-18 02:45:07
Most old animes that you get in Europe have bad quality.
2021-06-18 02:45:28
Better than DVD, but I can still get them to 2 gigabytes compressed with VP9...
2021-06-18 02:45:50
That does not work for sharp anime...
2021-06-18 02:46:00
Yes, they do.
2021-06-18 02:47:32
The best I've seen was 70mbit/s H.265
2021-06-18 02:47:50
But in most cases you really don't need it.
BlueSwordM
2021-06-18 02:48:20
Nope they don't.
2021-06-18 02:48:40
Anime blu-rays are still massive, and yet, most of the time, they are full of artifacts and aren't native 1080p for example.
rappet
2021-06-18 02:49:05
You still have to waifu2x them sometimes....
BlueSwordM
2021-06-18 02:49:55
540p, 720p, 768p, 810p, 900p are the most common resolutions.
2021-06-18 02:50:07
Most of the time, even today(mainly for shows, movies are a bit less problematic) mastered anime is not drawn natively at 1080p.
2021-06-18 02:50:43
For example, My Hero Academia is drawn at 720p.
2021-06-18 02:51:23
Also, most of the best anime BDs actually come from Italy funnily enough.
Deleted User
rappet I like the fake noise feature
2021-06-18 02:54:25
https://thumbs.gfycat.com/TartWeakBigmouthbass-size_restricted.gif
rappet
2021-06-18 02:55:04
Yeah it's nice.
2021-06-18 02:55:15
Just encode it to really low bitrate
2021-06-18 02:55:20
Add fake noise.
2021-06-18 02:55:23
Done
2021-06-18 02:55:31
*sarcasm*
spider-mario
2021-06-18 04:05:27
this one seems to be at ~10-15 Mbps for the video stream
2021-06-18 04:05:41
h.264, 4:2:0, 1920×1080
rappet
2021-06-18 05:00:26
Yeah but they take 90s Anime (made for analog TV), and encode it with about 40 Mbps
Crixis
2021-06-19 04:16:28
Not madoka magica?
Ringo
2021-06-19 04:33:01
that's Madoka Magica
lithium
2021-06-19 08:27:01
Puella Magi Madoka Magica (Suitable for children 😛 )
veluca
2021-06-19 09:46:47
well, the first couple of episodes, at least...
_wb_
2021-06-19 07:26:49
https://www.reddit.com/r/AV1/comments/o3ips4/a_fine_balance_of_compressing/
2021-06-19 07:28:03
Anyone feel like doing a variant of that where in the first panel it's avif doing the compression, and in the last panel you see a 0.02 bpp avif?
190n
2021-06-19 07:28:18
lemme try
2021-06-19 07:28:26
btw doesn't twitter not compress images as much anymore?
2021-06-19 07:28:32
or at least you can get the original
_wb_
2021-06-19 07:28:51
Twitter changed some things, in some cases they don't convert png to jpg anymore
2021-06-19 07:29:50
I don't think you can get the original, I don't think they keep those around
2021-06-19 07:31:14
I think they also don't recompress jpegs anymore
2021-06-19 07:35:50
As long as they're under 5MB and under 4096x4096 pixels, they will still show recompressed previews on timelines but you can click on them to get the original.
2021-06-19 07:36:00
If it's a jpeg.
2021-06-19 07:36:39
For png, I think it converts to jpeg unless it's a png8 (palette colors) or has alpha.
2021-06-19 07:39:13
This has led to people making images with one transparent pixel just to prevent twitter recompression (which is kind of annoying to deal with; I have occasionally seen such images outside twitter and it's not so great to have such fake alpha channels imo)
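A hypothetical sketch of the upload rules described above (the 5 MB / 4096x4096 limits and the PNG alpha/palette exception come from this thread; the function and its name are made up for illustration, not Twitter's actual logic):

```python
# Hypothetical sketch (not Twitter's real code) of the recompression rules
# described in the chat: a JPEG under 5 MB and under 4096x4096 keeps its
# original file; a PNG survives conversion to JPEG only if it is
# palette-based (png8) or has an alpha channel.
def keeps_original(fmt, size_bytes, width, height, has_alpha=False, palette=False):
    if size_bytes > 5 * 1024 * 1024 or width > 4096 or height > 4096:
        return False
    if fmt == "jpeg":
        return True
    if fmt == "png":
        return palette or has_alpha
    return False

print(keeps_original("jpeg", 2_000_000, 1920, 1080))                 # True
print(keeps_original("png", 2_000_000, 1920, 1080))                  # False: converted to JPEG
print(keeps_original("png", 2_000_000, 1920, 1080, has_alpha=True))  # True: the one-transparent-pixel trick
```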
190n
2021-06-19 07:47:47
how's this
_wb_
2021-06-19 07:53:45
Excellent!
2021-06-19 07:53:58
Mind if I tweet that?
2021-06-19 07:54:35
Can credit you of course
190n
2021-06-19 07:55:30
sure thing
190n how's this
2021-06-19 07:55:55
CC BY-SA 4.0
_wb_
2021-06-19 07:56:04
How should I credit you? "190n"?
190n
2021-06-19 07:56:58
i'm @_190n on twitter
_wb_
2021-06-19 07:57:56
Ah maybe you want to tweet it then, I retweet?
190n
2021-06-19 07:59:37
ah you can tweet it idc
2021-06-19 07:59:45
don't use twitter that much
BlueSwordM
2021-06-21 04:29:44
So, it seems like someone's been working on improving FLAC further: https://hydrogenaud.io/index.php?PHPSESSID=rfkklc5eh13e71h7j35s7eb7u1&topic=120158.msg999524;topicseen#new
Deleted User
2021-06-21 05:31:18
<@!321486891079696385> I've got an idea for AV2, wanna read?
BlueSwordM
2021-06-21 05:31:24
Yes.
Deleted User
2021-06-21 05:32:00
Nice.
2021-06-21 05:32:16
Okay... how much do you like chroma subsampling?
BlueSwordM
2021-06-21 05:33:51
I don't.
Deleted User
2021-06-21 05:35:13
Nice.
2021-06-21 05:37:46
Just like Opus ditched the idea of keeping the original sampling rate (it saves it in a tag that can, but doesn't have to be respected by decoder) and does everything internally in 48 kHz, AV2 should ditch the idea of keeping chroma subsampling (it could save it in a tag) and do everything internally in 4:4:4.
2021-06-21 05:38:29
From what I understand, AV1 has (just like previous codecs) special coding "modes" for 4:4:4, 4:2:0 etc., am I right?
2021-06-21 05:45:29
Are you alive Blue?
BlueSwordM
Are you alive Blue?
2021-06-21 05:48:02
Yes to both questions 😛
2021-06-21 05:48:22
Anyway, I can't always answer stuff on time because I have work to do <:kekw:808717074305122316>
Deleted User
BlueSwordM Anyway, I can't always answer stuff on time because I have work to do <:kekw:808717074305122316>
2021-06-21 05:51:27
I'll write a bit, you'll read when you have time 🙂
BlueSwordM Yes to both questions 😛
2021-06-21 05:54:04
So AV1 has special coding modes for different subsamplings, AV2 will have none (in my project). The only thing subsampling of the source material will do is change the "source subsampling" tag I've mentioned before and affect encoder decisions a bit.
improver
2021-06-21 06:01:20
makes sense. just encode that stuff in lesser quality. subsampling is a very crude way to get that
Deleted User
improver makes sense. just encode that stuff in lesser quality. subsampling is very crude way to get that
2021-06-21 06:01:31
I know, I'll get to that.
2021-06-21 06:06:07
There will be no more nearest-neighbour upsampled chroma (like in the image below, particularly visible in reds) since squares are hard to encode with DCT and it just looks awful.
2021-06-21 06:06:13
http://www.johncostella.com/magic/replicate.png
2021-06-21 06:10:00
The reference encoder should first upsample it with a better upsampling algo; something like Magic Kernel Sharp+ from the original article (http://www.johncostella.com/magic/) should probably do the job.
2021-06-21 06:10:07
http://www.johncostella.com/magic/fancy.png
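To make the difference concrete, here is a small sketch of the two upsampling approaches being discussed (plain 2x replication for nearest-neighbour, and a 9/3/3/1 bilinear kernel as a stand-in for a smoother filter; this is illustrative, not libavif's or any decoder's actual code):

```python
import numpy as np

def upsample_nn(c):
    # nearest-neighbour: each chroma sample becomes a 2x2 block of identical
    # values -- the "squares" visible in the reds of the linked image
    return np.repeat(np.repeat(c, 2, axis=0), 2, axis=1)

def upsample_bilinear(c):
    # weights 9/3/3/1 (/16): each output sample blends the nearest chroma
    # sample with its vertical, horizontal, and diagonal neighbours
    h, w = c.shape
    p = np.pad(c.astype(float), 1, mode="edge")
    out = np.empty((2 * h, 2 * w))
    for dy in (0, 1):
        for dx in (0, 1):
            sy = 2 * dy - 1  # -1 or +1: which neighbour this sub-position leans toward
            sx = 2 * dx - 1
            centre = p[1:h + 1, 1:w + 1]
            vert   = p[1 + sy:h + 1 + sy, 1:w + 1]
            horiz  = p[1:h + 1, 1 + sx:w + 1 + sx]
            diag   = p[1 + sy:h + 1 + sy, 1 + sx:w + 1 + sx]
            out[dy::2, dx::2] = (9 * centre + 3 * vert + 3 * horiz + diag) / 16
    return out

c = np.array([[1.0, 2.0], [3.0, 4.0]])
print(upsample_nn(c))        # blocky 2x2 squares
print(upsample_bilinear(c))  # smooth gradient between samples
```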
spider-mario
2021-06-21 06:10:25
yeah, YUView allows to choose the method for chroma subsampling (iirc between nearest neighbor and bilinear), and it makes quite a difference
Deleted User
2021-06-21 06:12:06
Yup, but there's even more to my proposed AV2 project than just a smarter chroma upsampling algorithm 🙂
spider-mario
2021-06-21 06:14:24
of course 😄 it was just a small aside
Deleted User
2021-06-21 06:19:21
AV1 already has the second part: it's CfL prediction. If the AV2 encoder knows that the source had subsampled chroma, it should completely ignore the error between the upsampled chroma and the CfL prediction in the highest DCT coefficients and assume that CfL predicted them perfectly (like in <#824000991891554375>). It'll probably result in sharper chroma in sharp luma areas, wouldn't it?
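As a rough illustration of the CfL idea being invoked here, a toy per-block least-squares fit of chroma against luma (AV1's real CfL signals the scaling factor explicitly in the bitstream; this sketch just shows why a sharp luma edge can carry a sharp chroma edge):

```python
import numpy as np

def cfl_predict(luma, chroma):
    # toy CfL: fit chroma ~ alpha * luma + beta over the block,
    # then predict chroma purely from the (full-resolution) luma plane
    alpha, beta = np.polyfit(luma.ravel().astype(float),
                             chroma.ravel().astype(float), 1)
    return alpha * luma + beta

luma = np.linspace(0.0, 255.0, 64).reshape(8, 8)
chroma = 0.5 * luma + 10.0   # chroma perfectly correlated with luma
pred = cfl_predict(luma, chroma)
print(np.max(np.abs(pred - chroma)))  # ~0: the luma detail fully reconstructs the chroma detail
```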
_wb_
2021-06-21 06:22:44
This is something an AV1 decoder could actually do already if it would decode to yuv444, right?
Deleted User
_wb_ This is something an AV1 decoder could actually do already if it would decode to yuv444, right?
2021-06-21 06:23:23
Actually yes, damn
2021-06-21 06:42:17
I've just noticed in Squoosh that AVIF indeed does NN upsampling (whyyyyy <:NotLikeThis:805132742819053610>), while MozJPEG chroma was smooth both with and without chroma subsampling.
2021-06-21 06:48:03
Fortunately it'll be simple to implement better chroma upsampling in AV1 decoders.
2021-06-21 06:54:50
And thanks to Jon I've just realised that those things that I've planned for AV2 encoder can be applied to AV1 decoder (with a bit worse performance, but still).
2021-06-21 06:56:52
Decent chroma upsampling should be a no-brainer in any decoder, but I wonder how well CfL-based "fake hi-res" chroma would work...
raysar
2021-06-21 07:20:05
I have 4k smartphone video, if i want to encode it in 10mb fullhd, it's better to do it in av1 444 or 420?
190n
2021-06-21 07:21:02
420 10-bit
diskorduser
2021-06-21 07:23:54
Why 420? <@164917458182864897>
raysar
2021-06-21 07:25:44
i think there is a quality threshold where 444 is better than 420, or maybe nearly all the time like jxl? 😄
2021-06-21 07:27:19
For 10 bits we know it's better in all cases in all video formats 🙂
190n
2021-06-21 07:28:56
444 is unnecessary for most video content (at least video recorded irl, which this is), and it's less compatible with hardware decoders
Deleted User
2021-06-21 08:09:16
https://discord.com/channels/794206087879852103/803574970180829194/852206410615881728
190n
2021-06-21 08:15:14
looks like that's for images?
BlueSwordM
raysar I have 4k smartphone video, if i want to encode it in 10mb fullhd, it's better to do it in av1 444 or 420?
2021-06-21 09:41:10
4:2:0, since the video is already in 4:2:0.
raysar
2021-06-21 09:57:01
4k 4:2:0 video is equal in color to 4:4:4 fullhd. I have no way to flip between videos to test visual quality. That's why i ask the question. I hate color subsampling.
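The arithmetic behind raysar's point, sketched out (just subsampling factors, nothing codec-specific):

```python
# A 3840x2160 4:2:0 source stores chroma at half resolution per axis,
# which is exactly the full chroma resolution of a 1920x1080 4:4:4 frame.
def chroma_resolution(width, height, subsampling):
    sx, sy = {"444": (1, 1), "422": (2, 1), "420": (2, 2)}[subsampling]
    return width // sx, height // sy

print(chroma_resolution(3840, 2160, "420"))  # (1920, 1080)
print(chroma_resolution(1920, 1080, "444"))  # (1920, 1080) -- identical chroma resolution
```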
Deleted User
2021-06-22 12:51:33
https://www.youtube.com/watch?v=1aKwFVwGoUE
diskorduser
2021-06-22 01:13:41
Downscale only luma and keep chroma as it is.
BlueSwordM
2021-06-22 03:45:39
So, AVIF lossless has improved a lot these days, but it's still not enough to beat WebP, let alone JXL. Funnily enough, for best encode performance, single-threading actually makes lossless **worse** in many scenarios. I wonder why actually. Also, from my decoder testing for the article, using AVIF lossless is basically a death sentence for decoding performance.
2021-06-22 03:46:00
Example encodes
2021-06-22 03:46:01
2021-06-22 03:46:06
improver
2021-06-22 03:48:10
also does AVIF lossless use 8bit YUV here? (how can i check?)
2021-06-22 03:48:52
ah. found `avifdec -i`
```
 * Bit Depth : 8
 * Format    : YUV444
```
2021-06-22 03:49:24
so I guess this will reduce colors unlike with webp/jxl
2021-06-22 03:56:52
huh it doesn't? interesting
2021-06-22 03:58:20
depends whether source image used actual full RGB range or not
2021-06-22 04:06:14
i kinda always assumed that full RGB->YUV is sorta lossy but does color loss actually happen in practice?
Deleted User
BlueSwordM
2021-06-22 04:31:37
nice image 😄
improver i kinda always assumed that full RGB->YUV is sorta lossy but does color loss actually happen in practice?
2021-06-22 04:32:03
Yes, YUV444 has less precision than RGB24
BlueSwordM
improver i kinda always assumed that full RGB->YUV is sorta lossy but does color loss actually happen in practice?
2021-06-22 04:32:07
Not if lossless mode is activated.
2021-06-22 04:32:43
Lossless mode does transformations at a much higher bit-depth to prevent this sort of YUV loss, and as such, is mathematically lossless.
2021-06-22 04:32:57
Of course, because of the huge decoding speed penalty, I would only recommend using lossless AV1 in video.
Deleted User
2021-06-22 04:33:39
So it is using 10 bit YUV for RGB24?
BlueSwordM
So it is using 10 bit YUV for RGB24?
2021-06-22 04:34:09
Lossless does everything in 16bpc.
2021-06-22 04:35:19
~~Oh no, I don't think it's mathematically lossless for RGB source <:monkaMega:809252622900789269>~~
improver
2021-06-22 04:36:14
huh so it's actually lossless or not idgi
BlueSwordM
improver huh so it's actually lossless or not idgi
2021-06-22 04:36:35
I don't know.
2021-06-22 04:36:49
However, I used a bad example 😔
2021-06-22 04:36:56
I just checked MD5 checksums <:kekw:808717074305122316>
2021-06-22 04:37:08
How would I actually verify if it's actually mathematically lossless? <:Thonk:805904896879493180>
improver
2021-06-22 04:40:33
`compare -metric AE` apparently can do this but im unsure, should look up what it does exactly
lithium
2021-06-22 04:40:47
avif-lossless --cicp 2/2/0 or 1/13/0
> Setting the matrix coefficients to 0 (Identity) says that you don't want to use YUV, but that you just want to pack the RGB values directly into the planes (in the order GBR)
> It is currently the only way to truly achieve lossless encoding at the same bit depth as the source image.
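A quick numerical illustration of both points in this exchange: an 8-bit full-range BT.601 RGB->YCbCr round trip (a stand-in for what any same-bit-depth YUV pipeline does, not libavif's exact math) is generally not bit-exact, while the identity/GBR packing lithium quotes is exact by construction:

```python
import numpy as np

def rgb_to_ycbcr_8bit(rgb):
    # full-range BT.601, quantized to 8 bits -- the lossy step
    r, g, b = rgb.astype(float)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 + (b - y) / 1.772
    cr = 128.0 + (r - y) / 1.402
    return np.clip(np.round(np.array([y, cb, cr])), 0, 255).astype(np.uint8)

def ycbcr_to_rgb(ycc):
    y, cb, cr = ycc.astype(float)
    r = y + 1.402 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.clip(np.round(np.array([r, g, b])), 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(3, 10000), dtype=np.uint8)
roundtrip = ycbcr_to_rgb(rgb_to_ycbcr_8bit(rgb))
print(np.max(np.abs(roundtrip.astype(int) - rgb.astype(int))))  # nonzero: 8-bit YUV round trip is lossy

# "matrix coefficients 0" just reorders the planes to G,B,R: bit-exact
gbr = rgb[[1, 2, 0]]
print(np.array_equal(gbr[[2, 0, 1]], rgb))  # True
```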
improver
2021-06-22 04:41:14
in my experiments with it it does actually return zero but idk if that's proof or im just lucky with my images
BlueSwordM
improver `compare -metric AE` apparently can do this but im unsure should look it up what it does exactly
2021-06-22 04:41:47
I checked.
2021-06-22 04:41:57
All images are lossless, with AVIF, JXL, and WebP.
2021-06-22 04:42:11
All the same size, not a single bit difference.
BlueSwordM All images are lossless, with AVIF, JXL, and WebP.
2021-06-22 04:42:45
Actually, there's a problem.
2021-06-22 04:42:54
The JXL image is actually a few bytes bigger.
lithium
2021-06-22 04:44:35
djxl png will include srgb icc profile?
retr0id
2021-06-22 06:13:48
Does anyone know of any other discord servers dedicated to media codecs?
2021-06-22 06:26:35
ooh I just found the AV1 discord
_wb_
BlueSwordM All the same size, not a single bit difference.
2021-06-22 07:37:00
best way to check lossless is not filesize but doing something like `compare -metric pae orig.png decoded.png null:`
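In Python the same check looks like this (plain arrays standing in for the decoded pixel data; a peak absolute error of 0 is exactly the "mathematically lossless" condition):

```python
import numpy as np

# Equivalent of ImageMagick's `compare -metric PAE` on decoded samples:
# two encodes are lossless copies of the original iff every sample matches.
def peak_absolute_error(a, b):
    return int(np.max(np.abs(a.astype(np.int64) - b.astype(np.int64))))

orig = np.array([[10, 20], [30, 40]], dtype=np.uint8)
decoded = orig.copy()
print(peak_absolute_error(orig, decoded))  # 0 -> lossless
decoded[0, 0] += 1
print(peak_absolute_error(orig, decoded))  # 1 -> not bit-exact
```

Unlike comparing sizes or MD5s of the *compressed* files, as earlier in the thread, this compares what actually matters: the decoded samples.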
spider-mario
lithium djxl png will include srgb icc profile?
2021-06-22 11:14:18
normally, only the sRGB chunk
2021-06-22 11:14:59
we can use something like http://entropymine.com/jason/tweakpng/ to look at exactly what is there
2021-06-22 11:15:17
this is how I noticed the imagemagick oddity with the missing sRGB chunk
2021-06-22 11:15:45
(iirc it works well in wine)
paperboyo
spider-mario we can use something like http://entropymine.com/jason/tweakpng/ to look at exactly what is there
2021-06-23 09:43:31
For simple stuff, I find this one good too: http://www.libpng.org/pub/png/apps/pngcheck.html
Scope
2021-06-23 04:35:00
https://twitter.com/kornelski/status/1407732715433119746
_wb_
2021-06-23 05:19:34
Good that FOSS avif encoders are getting improved
lithium
2021-06-24 10:22:29
libavif 0.9.2 > https://github.com/AOMediaCodec/libavif/blob/master/CHANGELOG.md
2021-06-24 10:30:14
Yes, if you don't specify -j N, it only uses 1 core.
_wb_
2021-06-24 10:31:02
I think you get worse compression though with avifenc when using more threads
2021-06-24 10:31:28
you certainly do not get the same result with different -j values
2021-06-24 10:31:57
(it's different from cjxl --num_threads in that respect, which does give you the same result regardless of how many threads you use)
2021-06-24 10:32:33
I think they create more independent tiles when using more threads
BlueSwordM
_wb_ I think they create more independent tiles when using more threads
2021-06-24 02:52:41
Not consistently true, funnily enough, especially in lossless. Also, threading in aomenc just uses row threading, which works really well in intra coding compared to normal video coding, so there's no need for tiling until you hit >4096x2160, and purely independent tiles only when you hit >35MP, and that's only if you actually care about HW decode in the future, which you shouldn't really outside of camera apps.
2021-06-24 02:57:02
Of course, once you get an image above 4096x2160, avifenc will swiftly switch to using 2 tiles and up, again to conform to tile size specs.
_wb_
2021-06-24 03:00:15
even with a small image, the output I get for `avifenc in.png out.avif -j 1` is not the same as the output I get with `-j 2`
BlueSwordM
_wb_ even with a small image, the output I get for `avifenc in.png out.avif -j 1` is not the same as the output I get with `-j 2`
2021-06-24 03:15:16
Normal. Row threading (or WPP threading) is not fully deterministic.
_wb_
2021-06-24 03:22:55
kind of weird that the encoder is that nondeterministic though
BlueSwordM
_wb_ kind of weird that the encoder is that nondeterministic though
2021-06-24 03:31:44
Yeah, intra-only is more affected by this than normal video coding. Anyway, thanks for telling me yesterday about IM's Compare tool.
2021-06-24 03:32:00
I was able to verify that way that even the very slightly bigger JXL file was lossless.
_wb_
2021-06-24 03:32:48
Did anyone try to measure the density/quality impact of using multithreaded avif encode?
2021-06-24 03:33:07
Is it only libaom that behaves like that or also the other encoders?
BlueSwordM
_wb_ Is it only libaom that behaves like that or also the other encoders?
2021-06-24 03:35:26
Only SVT and aom behave like this. rav1e is the one that gets affected the most, since it only has access to tile threading.
2021-06-24 03:36:13
Anyway, you just made me doubt myself about the threading stuff <:Thonk:805904896879493180>
_wb_
2021-06-24 03:57:59
testing default avifenc on some random image:
1 thread: 1330327 bytes, 3-norm: 1.226357
2 threads: 1334347 bytes, 3-norm: 1.227158
2021-06-24 03:58:54
not a huge difference, but not really insignificant either
improver
2021-06-24 03:59:10
so, bigger and worse rip
_wb_
2021-06-24 04:05:27
Well, if there is a difference in result when encoding in parallel, it's always going to be doing less global optimization to get more speed, i.e. getting bigger and/or worse.
improver
2021-06-24 04:05:58
true. not very surprising
_wb_
2021-06-24 04:06:24
In cjxl we avoid that and only parallelize what can be safely parallelized (but that means less than linear speedup)
spider-mario
2021-06-24 04:46:24
especially at slower speeds
2021-06-24 04:49:47
I measured `cjxl`'s speed with 0/4/8 threads at various speeds the other day and got something that looked like this:
```
                 -s 1   -s 3   -s 5   -s 7   -s 9
--num_threads=0  34.5   29.68  20.39   5.56  0.26
--num_threads=4  68.55  62.88  41.75  10.36  0.28
--num_threads=8  81.39  78.39  46.98  11.81  0.28
```
2021-06-24 04:50:37
(unit is MP/s)
_wb_
2021-06-24 04:53:25
You need a large image to get speedup from many threads too - at least one dc group per thread or you cannot really use all threads for the dc part
2021-06-24 04:53:43
Dc groups are 4 megapixels big
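A back-of-envelope version of that constraint (assuming DC groups are 2048x2048 pixels, which matches the "4 megapixels" figure above; the exact group dimension is an assumption here, not taken from this chat):

```python
import math

# The DC pass can use at most one thread per DC group, so small images
# cannot feed many threads during that pass.
def dc_groups(width, height, group_dim=2048):  # group_dim is an assumed value
    return math.ceil(width / group_dim) * math.ceil(height / group_dim)

print(dc_groups(1920, 1080))  # 1  -> extra threads sit idle during the DC pass
print(dc_groups(8192, 8192))  # 16 -> enough DC work for up to 16 threads
```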
2021-06-24 05:01:23
The histogram / context map is I guess the main thing that doesn't parallelize at the faster speeds, right?
2021-06-24 05:02:19
I suppose we _could_ do that less globally and sacrifice some density for speed. At least it's entropy coding only, not quality-changing.
2021-06-25 01:39:28
wow I had this older libavif/libaom lying around: Version: 0.9.0 (aom [enc/dec]:2.0.2
2021-06-25 01:39:58
now using latest from git: Version: 0.9.2 (aom [enc/dec]:v3.1.1
2021-06-25 01:41:14
did the same command with both of them: `avifenc -s 6 -j 1 -y 420 --min 0 --max 63 -a end-usage=q -a tune=ssim -a cq-level=24`
2021-06-25 01:42:33
old version took 48 seconds to give me a 2.1 MB avif file
2021-06-25 01:43:16
new version took 11 seconds to give me a 977 KB avif file
2021-06-25 01:45:46
yeah but the 977 KB avif file does look a lot worse too
2021-06-25 01:50:41
if I want to reach the same butteraugli score, I now have to go all the way to `cq-level=9`, and that gives me a 2.2 MB avif file
2021-06-25 01:50:55
but only in 10 seconds
2021-06-25 01:51:33
so it is 5 times as fast as before, but a bit worse in compression on that example
2021-06-25 01:52:03
I'm mostly curious how the scale changed so dramatically, that what used to be cq-level 24 is now cq-level 9
2021-06-25 01:57:07
anyway, default cjxl using single-core takes 3.5 seconds on that image to produce something with a better butteraugli score than the 2.2 MB avif, but in 937 KB
Scope
2021-06-25 01:57:14
The quality may be higher or the same without `-y 420` (at least in my tests and not at the lowest bitrate), also min=max can increase the encoding speed, but it is harder to achieve the required size
_wb_
2021-06-25 01:58:28
i'm using 420 mostly because I'm not sure that 444 will remain supported once hardware starts to get used
2021-06-25 01:58:42
at least that's what happened with heic
2021-06-25 01:59:05
444 heic files worked fine in Apple devices, until they enabled hw decode and now only 420 works
improver
2021-06-25 01:59:27
sounds v rude way to "support" stuff
_wb_
2021-06-25 02:00:02
well we'll see, maybe I am too pessimistic here
improver
2021-06-25 02:00:38
i doubt non-apple stuff have as tight integration w hw so i wouldn't be so pessimistic
_wb_
2021-06-25 02:03:39
yeah, they'll need software fallback anyway so it'll probably work
2021-06-25 02:04:06
might become a "bad practice" though to use av1 that requires software fallback
improver
2021-06-25 02:04:40
i wonder if it could end up other way around, hw decoders supporting 444
Scope
2021-06-25 02:04:41
Is AVIF hardware decoding already used in browsers or is it for the future? And I think to decode AVIF correctly, it would require some sort of tile splitting like in HEIC (unless it's only a single large image)
improver
2021-06-25 02:05:45
im not aware of any use so far
_wb_
2021-06-25 02:11:24
hw decoders currently announced are all 420-only afaik
2021-06-25 02:12:07
I don't think browsers are doing hardware av1 video decode yet even
Scope
2021-06-25 02:12:32
Yep, mostly Main profile (but at least with 10-bit support, which may also increase quality even for an 8-bit source, but is slower for software decoding)
Deleted User
2021-06-25 02:41:03
444-only AV2!
_wb_
_wb_ anyway, default cjxl using single-core takes 3.5 seconds on that image to produce something with a better butteraugli score than the 2.2 MB avif, but in 937 KB
2021-06-25 03:11:31
Just one data point of course, but at least in this case (the p06 image, at around q75), we have jxl being 3x as fast as avif and 50% smaller. <@532010383041363969>
BlueSwordM
_wb_ Just one data point of course, but at least in this case (the p06 image, at around q75), we have jxl being 3x as fast as avif and 50% smaller. <@532010383041363969>
2021-06-25 03:39:49
BTW, I actually do not recommend tune=ssim.
2021-06-25 03:40:00
It may increase detail retention in some spots, but it hurts color quite a bit.
_wb_
2021-06-25 04:27:22
What are the settings you would recommend?
BlueSwordM
_wb_ What are the settings you would recommend?
2021-06-25 04:29:06
Did you compile aomenc with butteraugli tune support?
2021-06-25 04:29:19
Let me write my usual settings in <#840831132009365514>
_wb_
2021-06-25 04:29:25
Ok thx
BlueSwordM Did you compile aomenc with butteraugli tune support?
2021-06-25 04:30:05
I didn't. Is it fast enough for 'reasonable time' encoding?
BlueSwordM
2021-06-25 04:30:09
Ye.
spider-mario
2021-06-25 05:24:33
when I tried tune=butteraugli (not on many images, admittedly), I didn't actually find it better than tune=ssim, it was slightly worse
2021-06-25 05:24:42
(and also produced a worse butteraugli p-norm…)
BlueSwordM
spider-mario when I tried tune=butteraugli (not on many images, admittedly), I didn't actually find it better than tune=ssim, it was slightly worse
2021-06-25 05:26:57
Odd, I find it improves stuff on most images. Granted, it has different tradeoffs vs `--tune=ssim` (mostly better color preservation with the butteraugli tune in exchange for less detail retention), but I feel the tradeoffs are in a better direction.
2021-06-25 05:38:40
Pasted something something about AVIF encoding: https://old.reddit.com/r/AV1/comments/o7s8hk/high_quality_encoding_of_avif_images_using/ Criticism, discussion, and corrections welcome <:Stonks:806137886726553651>
Jyrki Alakuijala
2021-06-25 07:38:47
how does tune butteraugli compare with tune vmaf?
spider-mario when I tried tune=butteraugli (not on many images, admittedly), I didn't actually find it better than tune=ssim, it was slightly worse
2021-06-25 07:39:54
Supposedly there was a terrible bug in tune butteraugli until the most recent version of aomenc
spider-mario
2021-06-25 07:40:26
oh, yes, that might explain it
BlueSwordM
Jyrki Alakuijala how does tune butteraugli compare with tune vmaf?
2021-06-26 09:16:17
I have not tested that yet. It would be an interesting test to do on a few images/test set <:Thonk:805904896879493180>
2021-06-26 09:27:27
I still need to compile VMAF on my production ready system, since my bleeding edge OM install broke...
2021-06-27 05:59:53
The WebP file is 330kB. The JPG file is 3.3MB 👀
2021-06-27 06:00:22
So yeah, much lower bandwidth costs.
2021-06-27 06:00:55
This is an extreme example though: most WebP images on the web are a bit smaller than the JPG ones.
2021-06-27 06:05:04
Heck, I'd say the smart thing would be to run mozjpegtran on every JPG that comes in.
2021-06-27 06:05:37
It's pretty fast, and gives a free efficiency boost.
2021-06-27 06:10:02
Actually, why doesn't every website that accepts raw JPEGs do this <:Thonk:805904896879493180>
lithium
2021-06-27 08:15:25
Both images look so bad... 😢
improver
2021-06-27 12:10:19
tbh i prefer the dithered one but that's just personal preference, they both look bad
2021-06-27 12:12:57
anime stuff just looks better with clean lines even if colors are dotty
spider-mario
2021-06-27 01:30:22
do they not give the original at all?
2021-06-27 01:30:43
surely we can do much better, even for a color-reduced PNG
2021-06-27 01:30:49
(how did they choose the palette?)
BlueSwordM
Jyrki Alakuijala how does tune butteraugli compare with tune vmaf?
2021-06-27 06:38:28
That is a good question. However, I do not currently like using tune vmaf_without_preprocessing, as VMAF stuff is currently single-threaded in aomenc, which means it's a massive latency bottleneck in encoding <:monkaMega:809252622900789269>
Jyrki Alakuijala how does tune butteraugli compare with tune vmaf?
2021-06-27 07:01:35
tune vmaf_without_preprocessing is so much slower than butteraugli that it's not even fair to compare them (tune vmaf_without_preprocessing is a pure quality-based RD tune, with absolutely no shortcuts taken), but here's my image set (the large resolution image set with vmaf is still not done <:monkaMega:809252622900789269>).
2021-06-27 07:02:36
Photographic image sets
2021-06-27 07:02:36
2021-06-27 07:02:58
Command line for each setting(except for the tune obviously) `avifenc -s 3 -j 2 --min 0 --max 63 -a end-usage=q -a cq-level=18 -a tune=vmaf_without_preprocessing -a color:aq-mode=1 -a color:enable-chroma-deltaq=1 -a color:qm-min=0 -a color:deltaq-mode=3`
improver
2021-06-28 12:23:23
that just looks like a screenshot tbh so you could probably find it by going thru the anime. i think. unless it's special material which ended up not included
2021-06-28 12:24:42
which reminds me i started watching this one and havent gone thru the rest of it because i got busy with things
2021-06-28 12:26:08
i think it may b still airing
2021-06-28 12:29:04
also tbh for stuff like anime things, idk if theres much interest, id rather poke things which do big on images like fb (already interested), maybe twitter, maybe tumblr, maybe uhh how was that one for photos called
2021-06-28 12:29:45
it kinda makes sense for them to be enthusiastic about it
2021-06-28 12:29:55
as they may gain big savings from jxl
2021-06-28 12:33:45
i suspect BDs may not be yet out for this one but approach makes sense in general
fab
2021-06-28 06:27:48
jxl can't do any generation loss
2021-06-28 06:27:52
it's all marketing
2021-06-28 06:28:06
it's all anticipated marketing
2021-06-28 06:28:39
jon advised random people that are always online to make wikipedia articles and many things
2021-06-28 06:29:21
webp2 is what you better use for the web.
2021-06-28 06:31:39
EVEN jon and cloudinary marketing have said that unless you aim at bitrates lower than 0.636 bpp, so less than visually lossless in a flip test (basically what you get with this command: -s 9 -d 1 --use_new_heuristics with libjxl v0.3.7-169-g1f7445a win_x64), you shouldn't use it
2021-06-28 06:32:30
if you think this command isn't good, wait for final release before judging generation loss or web capacity.
2021-06-28 06:32:39
or get a better source
2021-06-28 06:32:47
don't rely on ac strategy
Crixis
2021-06-28 06:32:55
Webp2 is good at 0.636 bpp?
fab
2021-06-28 06:33:28
2021-06-28 06:33:36
2021-06-28 06:33:44
i'm using images of jyrki to illustrate it
Crixis
2021-06-28 06:35:25
Which is webp2?
_wb_
fab EVEN jon and cloudinary marketing have said that unless you aim at bitrates lower than 0.636 bpp, so less than visually lossless in a flip test (basically what you get with this command: -s 9 -d 1 --use_new_heuristics with libjxl v0.3.7-169-g1f7445a win_x64), you shouldn't use it
2021-06-28 06:45:24
That's not what I said. I think below 0.3 bpp AVIF is better, between 0.3 and 0.5 bpp they are similar, and above 0.5 bpp JXL is significantly better than AVIF. Or at least current encoders behave like that.
fab
Crixis Which is webp2?
2021-06-28 06:50:14
this is d 3 epf 0
Jyrki Alakuijala
2021-06-28 06:53:47
PNG looks better
Cool Doggo
2021-06-28 07:02:30
webp completely blurs all of the grass 😄
Jyrki Alakuijala
2021-06-28 07:03:33
they used too low quality
2021-06-28 07:03:53
while WebP has this tendency to blur green and red objects, it is not usually this bad
2021-06-28 08:05:50
highly saturated colors get too few bits in YUV color modeling (this is solved much better in XYB, and also in Lab)
2021-06-28 08:06:05
that's why reds and greens look weird on video
2021-06-28 08:06:26
perhaps particularly so for magenta and red
2021-06-28 08:06:54
(with webp I also observe that dark greens and browns get rather blurry)
2021-06-28 08:08:11
I tried to get the colors right in JPEG XL, to have a balanced amount of quantization in different base colors
2021-06-28 08:08:46
it started by first having a corpus of images with all the possible base colors, each with some subtle details -- both in color and intensity
2021-06-28 08:09:13
what I didn't have, I pseudocolored...
2021-06-28 08:10:11
like this rose doesn't exist
2021-06-28 08:10:52
IIRC, I swapped the red and blue channels for it
2021-06-28 08:26:32
> But to verify this claim, is it still accurate if you use a test data of lossy images (and many of them may not be -q 99 high quality JPGs)?
2021-06-28 08:26:41
no one knows...
2021-06-28 08:27:13
we don't know what the correct switch point is from VarDCT to lossless jpeg recompression, as a function of the original jpeg quality
2021-06-28 08:28:00
I anticipate that if the original is quality 90, we'd likely want to VarDCT-code it -- and if the original is quality 50, we'd want to use lossless jpeg recompression
2021-06-28 08:28:20
where the cross-over point exactly is, I don't think anyone looked into it
2021-06-28 08:28:43
can also be different for yuv420 and yuv444
2021-06-28 08:29:20
if VarDCT coding, it might make sense to 'restore' the quality of the original using knusperli-like approaches
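The crossover question above presupposes knowing the original JPEG's quality. One common heuristic (a sketch of the classic libjpeg/IJG scaling, not anything cjxl actually does; `quant_table` and `estimate_quality` are hypothetical helpers) is to invert the quality-to-scale mapping from the file's luminance quantization table:

```python
# Sketch: libjpeg's quality setting linearly scales a standard quantization
# table, so a JPEG's original quality can be estimated back from its tables.

# Standard luminance quantization table from the JPEG spec (ITU-T T.81, Annex K).
STD_LUMA = [
    16, 11, 10, 16, 24, 40, 51, 61,
    12, 12, 14, 19, 26, 58, 60, 55,
    14, 13, 16, 24, 40, 57, 69, 56,
    14, 17, 22, 29, 51, 87, 80, 62,
    18, 22, 37, 56, 68, 109, 103, 77,
    24, 35, 55, 64, 81, 104, 113, 92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103, 99,
]

def quant_table(quality):
    """IJG scaling: build the luminance table for a quality in 1..100."""
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    return [min(max((v * scale + 50) // 100, 1), 255) for v in STD_LUMA]

def estimate_quality(table):
    """Invert the mapping: average the per-entry scale, map back to quality."""
    scale = sum(100.0 * t / s for t, s in zip(table, STD_LUMA)) / len(table)
    return round(5000.0 / scale) if scale > 100 else round((200.0 - scale) / 2)

print(estimate_quality(quant_table(90)))  # close to the original 90
```

Rounding in the scaled tables makes the inverse approximate, and custom tables (e.g. mozjpeg's) break the assumption entirely, which is part of why picking the crossover automatically is hard.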
fab
2021-06-28 08:30:03
one thing is sure about the 169-g build (i don't remember the exact build)
2021-06-28 08:30:16
that one doesn't look good with explicit images, let alone anime
2021-06-28 08:30:23
the ac strategy is too flat
2021-06-28 08:30:42
those images also are usually low quality sources
2021-06-28 08:31:31
but we want the encoder to advance and the images to not look like flooring.
Jyrki Alakuijala
2021-06-28 08:44:57
improvement comes in small steps
2021-06-28 08:46:06
share the most problematic images with their results (only d and epf settings, please)?
fab
Jyrki Alakuijala like this rose doesn't exist
2021-06-28 09:10:10
good
Scope
2021-06-28 06:59:28
So, there may be new interesting or improved compressors from past challenges (including image compression and open source) <https://gdcc.tech/>
190n
2021-06-29 06:07:44
> `JXL_FAILURE: APNG with dispose-to-0 is not supported for non-full or blended frames` is there a way in ffmpeg to produce APNG files that don't use dispose-to-0?
_wb_
2021-06-29 06:42:06
We could also just implement that in the apng loader, I guess
Deleted User
2021-06-30 11:47:14
<@!321486891079696385> how does local palette work in AV1?
_wb_
2021-06-30 01:13:35
As far as I understand, you can use a palette of up to 8 colors per channel, but I have no idea how the pixel data itself is encoded in palette blocks
fab
2021-07-01 01:52:49
the problem with audio is that i want appeal like 177 kbps Nero ABR at lower quality, and that doesn't happen with any encoder; encoders aren't tuned for that
improver
2021-07-01 01:53:55
use opus
fab
2021-07-01 01:56:16
i just wanted to try exhale at 64 kbps
2021-07-01 01:56:18
preset 1
2021-07-01 01:56:39
maybe it's better with SBR
2021-07-01 02:00:10
exhale much better with sbr
improver
2021-07-01 02:00:19
please don't encode at such awful bitrates
fab
2021-07-01 02:01:14
at all bitrates exhale is much better with sbr
spider-mario
2021-07-01 02:01:16
afaik libfdk_aac is the best open-source AAC encoder
2021-07-01 02:01:32
(old-style AAC, not the new extension)
improver
2021-07-01 02:01:43
id just use vorbis or opus, aac is sucky
spider-mario
2021-07-01 02:01:54
(also not high-efficiency)
improver id just use vorbis or opus, aac is sucky
2021-07-01 02:02:07
vorbis is actually worse iirc
2021-07-01 02:02:16
than fdk
improver
2021-07-01 02:02:36
aren't they pretty similar? i kinda forgot when i read about that stuff
spider-mario
2021-07-01 02:03:20
yeah, I need to look again
improver
2021-07-01 02:05:14
there's also aoTuV patchset for vorbis which I have installed idk how that compares
spider-mario
2021-07-01 02:05:52
in https://listening-test.coresv.net/results.htm, at 96 kbps, aoTuV lost to Apple's AAC encoder
2021-07-01 02:06:46
but at other bitrates, or to fdk, possibly not
2021-07-01 02:06:54
https://hydrogenaud.io/index.php?topic=120062.0
2021-07-01 02:07:01
> • Apple's AAC encoder (QuickTime, iTunes) really plays in a different league. Quality is outstanding and it outperforms the competition.
improver
2021-07-01 02:08:20
https://hydrogenaud.io/index.php?topic=117247.0 and there someone saying that they prefer vorbis lol
spider-mario
2021-07-01 02:10:03
in any case, it indeed seems that roughly everybody agrees that Opus is the grand winner
2021-07-01 02:10:39
my one annoyance with opus is that opusenc only has bitrate-based quality control
2021-07-01 02:10:45
no `-d 1` equivalent
improver
2021-07-01 02:11:27
looking at https://listening-test.coresv.net/results.htm they come pretty close too. but yeah opus looks better everywhere
BlueSwordM
2021-07-05 04:06:29
Follow-up to this: https://discord.com/channels/794206087879852103/794206170445119489/861635017348218890
The trick to making noise synthesis as good as possible consists of doing 4 things, if you want the highest-quality implementation, in addition to all of the other things that have to be done normally:
1. Variance-based analysis. Areas with higher-frequency blocks need less noise generation than those with lower-frequency blocks, but require more bits/block (such an analysis can be decently substituted just by using variance-based adaptive quantization during the normal coding process).*
2. Good edge detection. Edges need to be detected so as to preserve them as well as possible and to not apply noise to them (otherwise it might look artifacty or, if denoising is activated, cause line removal).
3. Good, powerful temporal 3D denoising for the noise estimation, but **not** applying it to the source image. Any kind of denoising of the source frame is not very good, which is why the `--enable-dnl-denoising=0` option in aomenc was introduced by yours truly with the help of some friends: it does not denoise the source input, but denoises during the calculations.
4. Film grain size estimation. Intensity and type analysis is already done, but size is one of those things that's rather important.
Currently, only SVT-AV1 gets close to a "great" implementation by doing 1-2 and a bit of 4, while aomenc only does the 3rd one (without denoising the source picture), but not #4, and nothing for #1 and #2.
* During the normative analysis of the grain synthesis parameters, an operation similar to that is done to estimate the grain parameters, but it is only done at the 32x32 block level, not at the SB (superblock)/frame level, resulting in some weaknesses for aomenc on animation-type content with grain synthesis.
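Step 1 above (variance-based analysis) can be sketched in a few lines. This is an illustrative toy, not encoder code; the block size and threshold are arbitrary choices, and `flat_blocks` is a hypothetical helper:

```python
# Toy sketch of variance-based block analysis: compute per-block variance on a
# grayscale image (list of rows) and flag low-variance (flat) blocks, which are
# the ones that benefit most from synthesized noise.

def block_variance(img, bx, by, size=8):
    vals = [img[by + y][bx + x] for y in range(size) for x in range(size)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def flat_blocks(img, size=8, threshold=25.0):
    """Return (x, y) of blocks whose variance is below the threshold."""
    h, w = len(img), len(img[0])
    return [(bx, by)
            for by in range(0, h - size + 1, size)
            for bx in range(0, w - size + 1, size)
            if block_variance(img, bx, by, size) < threshold]

# Toy 8x16 image: the left 8x8 block is flat, the right one alternates 0/255.
img = [[10] * 8 + [0, 255] * 4 for _ in range(8)]
print(flat_blocks(img))  # only the flat left block qualifies
```

A real encoder would do this per superblock alongside edge detection (step 2), so that noise is steered away from edges as well as from busy texture.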
eddie.zato
2021-07-06 05:10:54
Does this command decode to pixels or not? `avifdec -i test.avif` I need the same thing this does, without the output file: `djxl test.jxl` Trying to compare the decoding speed of the avif/jxl files.
BlueSwordM
eddie.zato Does this command decode to pixels or not? `avifdec -i test.avif` I need the same thing this does, without the output file: `djxl test.jxl` Trying to compare the decoding speed of the avif/jxl files.
2021-07-06 05:20:23
To decode to pixels, you need to do: `avifdec --info input.avif`
2021-07-06 05:21:35
Also, for optimal decoding speed, a binary built with libyuv (fast YUV path) is recommended.
2021-07-06 05:21:45
It makes a non-negligible difference in this regard.
eddie.zato
2021-07-06 05:23:05
`-i` and `--info` are the same, according to the help 😄 So it does decode. Thanks.
BlueSwordM
BlueSwordM It makes a non-negligible difference in this regard.
2021-07-06 05:24:11
Fast libyuv also makes it so that you can decode JPEGs directly to raw data without any independent step, skipping chroma subsampling to 4:4:4/4:2:0 again.
Petr
2021-07-09 07:28:43
Could someone pls share AV1 Discord invitation?
Petr Could someone pls share AV1 Discord invitation?
2021-07-09 07:32:45
Found itโ€ฆ
Scope
2021-07-09 05:07:04
Updated PNG Optimizers Benchmark from the Pingo creator https://css-ig.net/benchmark/png-lossless
fab
2021-07-09 05:29:22
is impressive it reduces to 47% from webp reddit
2021-07-09 05:29:37
at lowest quality
2021-07-09 05:29:45
if you convert to jpg quality 100 with xnview
2021-07-09 05:38:08
to lossless png at maximum effort it only reduces by ~10%
2021-07-09 05:38:16
compared to xnview
2021-07-09 05:38:47
14.845 kb to 13.632 kb
lithium
2021-07-09 06:37:14
Great! pingo -pngcolor is back in the command manual 🙂 but it looks like pingo still forcibly converts 16-bit-depth PNGs to 8-bit depth...
bonnibel
2021-07-17 02:36:07
got a JPEG1 question that's above my pay grade: For higher pixel density versions of images (eg what you'd serve to many phone screens) you can set the quality a bit lower without as much perceived loss. Some old discussions on the MozJpeg issue tracker (<https://github.com/mozilla/mozjpeg/issues/76#issuecomment-68163040>) suggest that when doing this you should use the quantization table tuned for MS-SSIM. Is this (still) true? Any other settings you should use when doing this (e.g. also tuning trellis optimization for ms-ssim rather than the default of psnr-hvs)?
BlueSwordM
bonnibel got a JPEG1 question that's above my pay grade: For higher pixel density versions of images (eg what you'd serve to many phone screens) you can set the quality a bit lower without as much perceived loss. Some old discussions on the MozJpeg issue tracker (<https://github.com/mozilla/mozjpeg/issues/76#issuecomment-68163040>) suggest that when doing this you should use the quantization table tuned for MS-SSIM. Is this (still) true? Any other settings you should use when doing this (e.g. also tuning trellis optimization for ms-ssim rather than the default of psnr-hvs)?
2021-07-17 02:55:27
That is true. For photographic images, MS-SSIM is superior as a trellis RD tune; PSNR-HVS is better for everything else.
bonnibel
2021-07-17 03:03:25
Okay, thanks 🖤!
_wb_
2021-07-17 03:12:25
Generally you need relatively less aggressive low freq quantization, because banding becomes a bigger issue than dct noise
bonnibel
2021-07-17 05:41:35
Oh followup question: is there a decent way to use butteraugli in this case? comparing as-is but using a higher target number? downscaling both and comparing that?
2021-07-17 05:43:35
i'd like to have some perceptional target number to hit
BlueSwordM
bonnibel Oh followup question: is there a decent way to use butteraugli in this case? comparing as-is but using a higher target number? downscaling both and comparing that?
2021-07-17 08:21:33
Yeah, that works but it is a bit of a bruteforce approach :p
bonnibel
2021-07-17 08:45:08
got bored and made a graph from a grand library of 1 (one) test image, so absolutely no statistical significance: size of image compressed with various quantization tables vs butteraugli score of the half-size version of it vs the half-size version of the original
veluca
2021-07-17 10:17:12
interesting jump at 700kb, is that the q90 threshold? (420 vs 444)
bonnibel
2021-07-18 06:17:55
Yup!
2021-07-19 10:44:39
https://matthorn.s3.amazonaws.com/JQF/qtbl_vis.html
2021-07-19 10:45:04
demo of https://arxiv.org/abs/2008.05672
veluca
2021-07-19 11:01:03
hah, "visually indistinguishable"
2021-07-19 11:01:46
on eg. parrots it creates a nontrivial amount of blocking from what I can see
2021-07-19 11:01:58
(with their own side-by-side comparison)
bonnibel
BlueSwordM Yeah, that works but it is a bit of a bruteforce approach :p
2021-07-20 12:04:52
speaking of brute-forcing, the best results (as in, lowest file size with a 3-norm at or below 1) i'm getting thus far are by disabling chroma subsampling and then using different quality settings for each channel
2021-07-20 12:06:09
takes a fair while to calculate those though as i'm currently just doing it all via a very hacky script. there's probably a way better way to do all of this but _shrugs_ this ain't really my area
raysar
2021-07-22 09:26:47
https://eclipseo.github.io/image-comparison-web/#reykjavik*2:1&JXL_20210715=l&AOM_20210715=l&subset1 Look at this picture: avif, mozjpeg, heif, webp, webp2 seem to have a gamma problem in lossy, do you know why? jxl is good 😄
improver
2021-07-22 09:59:22
wow avif absolutely destroys grain even at large preset
2021-07-22 10:00:54
also yes it makes stuff whiter i wonder why
veluca
2021-07-22 10:06:06
something weird in the PNG original
spider-mario
2021-07-23 07:38:47
maybe that gAMA thing again
Scope
2021-07-23 11:22:36
https://news.ycombinator.com/item?id=27925393
2021-07-23 11:23:13
<https://games.greggman.com/game/zip-rant/>
bonnibel
2021-07-23 07:51:42
its a shame arithmetic coding isnt implemented anywhere given how many pages of the spec it takes up _has been reading it to try to broaden her understanding and is currently regretting not ripping all those pages out a day ago with how much she's had to scroll past them_
improver
2021-07-23 07:53:25
it's cool that it's being utilized nowadays, and not in a form that is slow like it used to be back then
2021-07-23 07:53:32
in new formats i mean
_wb_
2021-07-23 07:58:42
The AC in the 1992 jpeg spec is not that great anyway, without context modeling etc it's basically just like huffman but with more accurate distributions...
bonnibel
2021-07-23 08:24:45
also interesting that 16-bit quantization tables appear to be a thing that's used even when 12-bit samples aren't, and the spec says they should only be used for those
2021-07-23 08:50:59
the bit that's hardest to wrap my head around thus far is the encoding for successive approximation of AC components. the initial scan is easy, just divide the coefficient by 2^n. on subsequent scans, if it's the 1st scan where the coefficient isn't 0, you write huffman(zero run length + a magnitude fixed to 1) + a sign bit + a correction bit (which does... something), and if it _isn't_ the 1st scan where the coefficient isn't 0, then... i don't know
2021-07-23 09:31:15
ah okay, the correction bit is just the bit thats in that place
2021-07-23 09:32:18
and for every coefficient thats had a non-0 encoded in a previous scan, you just append its current bit to the end of the scan data?
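The mechanics being worked out above can be modeled for a single coefficient. This is a toy sketch of just the bit arithmetic (it ignores the Huffman coding, EOB runs, and the newly-nonzero vs. correction-bit bookkeeping of real scans; the helper names are made up):

```python
# Toy model of JPEG successive approximation for one AC coefficient: the first
# scan sends the coefficient truncated to precision Al, and each refinement
# scan appends exactly one more bit of |coefficient|.

def approximation_bits(coeff, al_first):
    """Return (scan_kind, value) pairs from the first scan down to Al = 0."""
    mag = abs(coeff)
    out = [("first", (1 if coeff >= 0 else -1) * (mag >> al_first))]
    for al in range(al_first - 1, -1, -1):
        out.append(("refine", (mag >> al) & 1))  # next lower-order bit
    return out

def reconstruct(bits):
    """Decoder side: take the first-scan value, then shift in each new bit."""
    mag = abs(bits[0][1])
    sign = -1 if bits[0][1] < 0 else 1
    for _, bit in bits[1:]:
        mag = (mag << 1) | bit
    return sign * mag

bits = approximation_bits(-53, 3)   # -53 = -0b110101, first scan at Al=3
print(bits)                          # first scan sends -6, then bits 1, 0, 1
print(reconstruct(bits))             # -53
```

So yes: once a coefficient has gone nonzero in some scan, later scans just contribute its next bit, which is exactly the "append its current bit" behavior described above.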
Scope
2021-07-27 08:14:33
<https://chromium.googlesource.com/codecs/libwebp2/+/1101a08f70f5da82e67a9ffd890d11b0bdb1f11b> > Add minimal AVIF parsing and writing > > The goal is to demonstrate the possibility of storing necessary image > features into a lightweight container/header, which is easier to parse > and write than AVIF/MIAF. The contained AV1 OBU bitstream is kept as > is for now, but may be reduced too in the future. 🤔
190n
2021-07-27 08:16:03
interesting
2021-07-27 08:16:23
isn't webp2 based somewhat on av1? maybe their encoder could be extended into a better avif encoder
paperboyo
2021-07-30 06:22:55
I guess more interesting as far as input to JXL encoder is concerned than as a competing HDR-ready web delivery format: https://lists.w3.org/Archives/Public/public-colorweb/2021Jul/0003.html
Scope
2021-08-02 10:58:35
<https://encode.su/threads/3422-GDC-Competition-Discussions?p=70468&viewfull=1#post70468>
2021-08-02 10:58:46
Interesting review article on the evolution of lossless compression (images may not load via GTranslate and it is better to use browser translation) https://translate.google.com/translate?sl=ru&tl=en&u=https://habr.com/ru/post/570694/
2021-08-02 11:03:01
> And about 10 years ago Asymmetric numeral systems (ANS) appeared, which made it possible to achieve the compression ratio of arithmetic coding at the speed and computational cost of the fast Huffman algorithm, which since 2013 has led to a small revolution in fast practical algorithms. Note that it took 30 years to create a fast version of the efficient algorithm. ANS is now used in the libraries Facebook Zstandard, Google Draco, JPEG XL and many other recent developments. Note that ANS author Jarek Duda is a member of the expert council of the new compression competition, which will be discussed below.
2021-08-02 01:13:47
> ... > In this regard, it is very gratifying that commercial companies (and primarily Google) in the last 10 years have begun to regularly publish suitable implementations of lossless compression libraries in open source (see Google Snappy - 2011, Zopfli - 2013, brotli - 2013, PIK - 2017, Draco - 2017, etc.). > That is, development proceeds by steps, first a new idea appears, giving another leap in benchmarks, and then its industrial implementations are gradually being pulled up (more often - closed, but sometimes open).
_wb_
2021-08-02 03:05:34
http://www.jbkempf.com/blog/post/2021/dav1d-0.9.1-a-ton-of-asm
Scope
2021-08-02 03:08:06
> This is very large for handwritten assembly, and for comparison, this is more assembly than what there is in FFmpeg (for all codecs)
_wb_
2021-08-02 03:08:43
It's as fast as it gets now
2021-08-02 03:09:24
Handwritten asm for all important code paths and platforms, it doesn't get any faster in software
spider-mario
2021-08-02 03:13:18
they mention remaining improvements for the threading model, not sure how much it matters for that
veluca
2021-08-02 03:18:31
pretty impressive LOC...
_wb_
2021-08-02 03:31:44
I wonder how fast jxl decode would get if effort like that is spent on handwritten asm using fixedpoint
Scope
2021-08-02 03:42:54
And how much smaller the decoder will become with C+asm and also further development may be in the direction of using the GPU to get more power efficiency (not sure about speed)
_wb_
2021-08-02 03:45:40
It can only get smaller if you can distribute for a single target, if you need to have the separate code for each cpu target it gets larger, not smaller
veluca
2021-08-02 04:03:59
my guess is "not that much"
2021-08-02 04:04:12
maybe 25-50% depending on the platform
2021-08-02 04:04:14
but it's a guess
2021-08-02 04:04:43
and likely it's not very different from what you'd get with fixpoint using hwy
Scope
2021-08-02 04:07:35
But this is from 2019 (and perhaps a big difference because libaom was originally not very optimized for this purpose)
fab
2021-08-02 04:38:07
https://chromium.googlesource.com/codecs/libwebp2/
2021-08-02 04:38:21
two commits
improver
2021-08-02 05:06:20
not important ones
raysar
2021-08-03 05:48:32
https://tenor.com/view/destroy-undertaker-tongue-out-wwe-gif-15650864
2021-08-03 05:49:03
I did some tests with the latest avif encoder on animated images; it destroys jxl in terms of size, it is indecent.
_wb_
2021-08-03 06:04:31
For animation, av1 or even h264 will beat jxl any day
2021-08-03 06:04:48
If interframe tools are useful
veluca
2021-08-03 06:04:49
not very surprising, the JXL encoder nowadays doesn't even try to do frame prediction xD
_wb_
2021-08-03 06:08:40
cjxl is bad at using whatever coding tools jxl has for animation (doing even less than an apng encoder like apngasm or imagemagick convert), and those tools are limited compared to real video codecs
veluca
2021-08-03 06:15:20
more than "bad", it doesn't even try...
spider-mario
2021-08-03 06:17:00
yeah, if starting from a GIF, running `gifsicle -bO3` on it beforehand is useful
2021-08-03 06:17:29
it will make the GIF update the minimal rectangle necessary in each frame
2021-08-03 06:17:35
and `cjxl` will reuse that (iirc)
_wb_
2021-08-03 06:39:10
Yes, same with apng, it just reuses the frame rects from the input
Scope
2021-08-04 01:49:44
https://www.reddit.com/r/compression/comments/onbb8q/project_image_compression_with_loss_based_on_the/
2021-08-04 01:50:07
https://raw.githubusercontent.com/giacomo-ascari/dichotomic-compression/main/docs/examples.gif
Deleted User
2021-08-04 06:52:23
Interesting at least.
Scope
2021-08-04 07:01:47
As a complete codec it may not be enough, but as part of some algorithms it may be interesting for images with large solid color areas, although perhaps modern formats use something more efficient
_wb_
2021-08-04 07:16:56
VarDCT block selection is kind of the same concept
Scope
2021-08-04 07:21:55
Also, 256x256 VarDCT blocks can improve efficiency mostly only at very high resolutions?
_wb_
2021-08-04 08:03:45
Currently the encoder only produces up to 64x64, bigger blocks would make sense even for not so huge images but it's a bit tricky to cross chroma-from-luma macroblocks in the current encoder heuristics (CfL multipliers are given by a 1:64 aux image)
2021-08-04 08:06:40
Note that for a 256x256 block, we already encode 32x32 of the low frequency DCT coefficients as part of the 'DC' image. So it's a DCT with 65536 coefficients but 1024 of the lowest frequency ones are signaled already.
2021-08-04 08:07:32
That's still a huge amount of coeffs left, and it only makes sense to use such a big block if they're mostly zeroes
2021-08-04 08:09:58
(but they don't all have to be zeroes, e.g. you could have something like a stripe pattern represented by a relatively small amount of nonzero coeffs)
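The coefficient bookkeeping in the messages above works out as follows (just the arithmetic restated, wrapped in a hypothetical helper; `vardct_coeff_budget` is not a libjxl function):

```python
# For an NxN VarDCT block, the lowest (N/8)x(N/8) DCT coefficients are already
# carried by the 1:8 downsampled 'DC' image, so only the remainder has to be
# coded in the block itself.

def vardct_coeff_budget(n):
    total = n * n             # all DCT coefficients of an NxN block
    in_dc = (n // 8) ** 2     # low-frequency coeffs signaled via the DC image
    return total, in_dc, total - in_dc

print(vardct_coeff_budget(256))  # (65536, 1024, 64512)
print(vardct_coeff_budget(64))   # (4096, 64, 4032)
```

The 64512 leftover coefficients for a 256x256 block show why such a block only pays off when almost all of them quantize to zero, as with large smooth gradients or a sparse stripe pattern.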
fab
2021-08-05 04:52:35
As of Nov. 2020, WebP 2 is only partially optimized and, roughly speaking 5x slower than WebP for lossy compression. It still compresses 2x faster than AVIF, but takes 3x more time to decompress. The goal is to reach decompression speed parity. Side-by-side codec comparisons can be found at: 2021-07-13 2021-07-01 2021-06-08 2021-04-20
Reddit โ€ข YAGPDB
2021-08-05 04:52:55
fab
2021-08-05 04:56:03
1,4 mb screenshot about AV2
2021-08-05 04:56:04
2021-08-05 04:56:11
only gudea font
2021-08-05 05:01:33
2021-08-05 05:01:40
better compound prediction, better delta, + tx partition mlr non-intra non-directional (disabled) vvc on in like only single left boundary ORIP IST only on the low frequency we capture more directionality but maintaining see VVC install VQProbe create a project (for each stream) for AI scenario for RA scenario?
improver
2021-08-05 05:03:54
people can read article on their own
fab
2021-08-05 05:11:45
yes
2021-08-05 05:12:20
do you think there is a lot of news
2021-08-05 05:12:32
or that AV3 will succeed
2021-08-05 05:14:41
probably that bother you, to me bothers that image contains big font and few information
Deleted User
2021-08-05 05:26:18
For me no images = no comparison. I don't trust metrics at all. *(Dark Shikari intensifies)*
fab
2021-08-05 07:13:01
Fabian — Today at 20:58 http://repository.essex.ac.uk/27250/1/adaptive_precoding_accepted.pdf https://discovery.ucl.ac.uk/id/eprint/10113692/1/115100C.pdf https://docplayer.net/164911616-Deep-neural-network-based-frame-reconstruction-for-optimized-video-coding-an-av2-approach-dandan-ding-hangzhou-normal-university.html who is that person https://wp.ufpel.edu.br/dpvsa/chia-yang-tsai/ 4th DPVSA gcorrea Chia-Yang Tsai don't watch Thor, the text written is silly, too little A lot of YouTube content is not great quality at source; content may be shaky, poorly processed, or repeatedly re-uploaded. Such content may not look perceptively better above 480p. 5: Big Buck Bunny is traumatizing video developers everywhere twitch not working for me https://boardgamestips.com/faq/which-is-better-720p-or-720p-x265/ copied by google good resource
2021-08-05 07:13:12
2021-08-05 07:13:15
2021-08-05 07:13:39
2021-08-05 07:13:52
the third link is basically the pdf i'm uploading now
2021-08-05 07:14:00
nek i don't care about bandwidth
2021-08-05 07:14:13
https://www.reddit.com/r/AV1/comments/e2lee3/looks_like_facebook_started_with_the_av1_adoption
2021-08-05 07:14:23
2021-08-05 07:14:27
2021-08-05 07:14:38
2021-08-05 07:15:22
Fabian — Today at 21:11 At only 49KB, AVIF offers an exceptional result for the image used. WebP in turn offers much better compression than the venerable JPG. The results may differ depending on the original picture, but the potential save in picture sizes can help load websites much faster without sacrificing compatibility in most cases. But as noted above, there are situations where AVIF still cannot be used, such as uploading images through plugins where you cannot add alternatives and you are forced to use WebP or JPG. https://www.inoveo.ro/dezvoltare-website/cum-sa-folosesti-noile-formate-de-imagini-pentru-web/ if you want to watch comparison and have time
190n
2021-08-05 07:35:52
<:frogstudy:872914282772299796>
fab
For me no images = no comparison. I don't trust metrics at all. *(Dark Shikari intensifies)*
2021-08-05 07:37:31
I agree.
2021-08-05 07:58:40
https://www.youtube.com/watch?v=xiBNxcM4538&t=22s
2021-08-05 07:59:23
Depending on the application-specific frequency of random access points, the experimental results show averaged bit-rate savings of about 10–15% for AV1 and 36–37% for the VVC reference encoder implementation (VTM), both relative to the HEVC reference encoder implementation (HM) and by using a test set of video sequences with different characteristics regarding content and resolution. A direct comparison between VTM and AV1 reveals averaged bit-rate savings of about 25–29% for VTM, while the averaged encoding and decoding run times of VTM relative to those of AV1 are around 300% and 270%, respectively.
2021-08-05 07:59:50
so av2 is 23%
2021-08-05 08:00:22
https://spin-digital.com/tech-blog/8k-vvc-encoding-and-playback/
_wb_
2021-08-05 08:02:04
VVC is going to be the same patent mess as HEVC, I don't really care too much about how it performs because it will only be a usable codec in 2040.
2021-08-05 08:03:31
(or later, if they also patent encoder techniques in the coming years)
Scope
2021-08-05 08:09:48
For the Web, maybe, but for TV and most likely smartphones and as a standard for physical media I think all companies will stay with MPEG formats as it was before
_wb_
2021-08-05 08:43:24
Yes, that seems likely, but even there av1/av2 might at some point become more attractive than figuring out who to pay and risking that more has to be paid when the hardware investments are already made.
raysar
2021-08-06 12:54:25
avif is enabled by default now in firefox: https://www.reddit.com/r/AV1/comments/oz5dul/firefox_92_has_fixed_avif_colour_space_support/?utm_medium=android_app&utm_source=share
Crixis
raysar avif is enable by default now in firefox: https://www.reddit.com/r/AV1/comments/oz5dul/firefox_92_has_fixed_avif_colour_space_support/?utm_medium=android_app&utm_source=share
2021-08-06 01:21:24
Sad
_wb_
2021-08-06 01:25:01
No, it's progress. But now jxl too, please!
improver
2021-08-06 01:28:36
animation still doesn't work and they are enabling it...
_wb_
2021-08-06 01:33:08
Oh. That's annoying.
190n
2021-08-06 07:55:37
leaving a comment to that effect
2021-08-06 07:55:40
this is super annoying
2021-08-07 05:08:52
from a discussion in av1: what if animated avif had a separate mime type?
_wb_
2021-08-07 05:52:39
That's how it was 3 years ago, there was image/avif and image/avifs.
2021-08-07 05:53:06
Then they decided to drop image/avifs and use the same media type for both.
190n
2021-08-07 06:35:25
oh what could've been 😔
fab
2021-08-07 08:19:48
2021-08-07 08:20:06
https://css-ig.net/benchmark/png-lossless
2021-08-07 08:20:43
all PNG are 8 bits/sample — or 1, 2, 4, 8 bits/pixel for paletted
2021-08-07 08:20:53
machine: Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz — 12.0 GB RAM — Windows 10
Scope
2021-08-07 04:20:32
🤔 <https://www.reddit.com/r/compression/comments/ozup3w/weird_facebook_compression_artifacts_in_memes/>
paperboyo
Scope ๐Ÿค” <https://www.reddit.com/r/compression/comments/ozup3w/weird_facebook_compression_artifacts_in_memes/>
2021-08-07 04:23:18
? https://github.com/mozilla/mozjpeg/issues/328
Scope
2021-08-07 04:25:09
Yeah and looks like it's still not fixed
improver
2021-08-07 04:28:30
interesting artifact
_wb_
2021-08-07 05:08:16
Looks like a bug in the decoder-clamping based deringing mozjpeg has for black and white hard edges
2021-08-07 05:08:51
(a trick we are not exploiting yet in cjxl btw)
190n
_wb_ Then they decided to drop image/avifs and use the same media type for both.
2021-08-07 05:33:26
...any idea why?
Scope
2021-08-07 05:37:18
I think it's on purpose, to not make it more complicated and not increase the number of MIME types
_wb_
2021-08-07 05:45:09
Probably that, and also the anticipation that anything that decodes one av1 frame can also decode more than one.
2021-08-07 05:45:51
The more surprising thing to me was that originally avif was av1 intra only, and then they changed it to allow full av1 bitstreams
Deleted User
2021-08-07 05:47:38
Why even differentiate then? Just use the same MIME for video and image.
Scope
2021-08-07 05:52:34
And also animated WebP and PNG did not use other extensions and types, and for example GIF is just a separate format and not an indication of animation. Splitting video and images I think makes sense: animated images are always looped and without sound, and it's usually something lighter than video
_wb_
Why even differentiate then? Just use the same MIME for video and image.
2021-08-07 05:54:45
It's a different container though, and avif cannot contain an audio track.
190n
Scope I think in purpose not to make it more complicated and not to increase the number of MIME types
2021-08-07 05:55:30
Then Firefox should support all avif files <:whatisthis:853506460679405588>
BlueSwordM
2021-08-07 05:56:01
Indeed.
Scope
2021-08-07 05:56:03
Also they are usually different formats, for example a decoder that can decode AV1 cannot necessarily decode AVIF, just like a decoder for VP8 cannot decode WebP (even considering it is based on it)
190n
2021-08-07 05:56:40
avif also supports rgb, and is lossless avif just lossless av1 or some different format entirely? (not that lossless avif is good)
Scope
2021-08-07 05:58:35
No, unlike WebP, lossless AVIF is not another format (which is why it is not as efficient)
190n
2021-08-07 05:59:10
i see
diskorduser
Scope ๐Ÿค” <https://www.reddit.com/r/compression/comments/ozup3w/weird_facebook_compression_artifacts_in_memes/>
2021-08-07 07:27:26
I got this bug when sending text images / screenshots on whatsapp too.
Stan
2021-08-08 09:04:55
https://globalcompetition.compression.ru/#leaderboards
2021-08-08 09:05:35
https://www.gdcc.tech
2021-08-08 09:10:06
Does compression ratio/decompression time depend on the libjxl implementation or on the JPEG XL algorithms?
_wb_
2021-08-08 09:12:36
They didn't test best compression? Default speed is the best they tried
Scope
2021-08-08 12:03:34
Because at GDC the encoding/decoding speed had certain limits, in addition Jpeg XL and other open formats were only as a reference for comparison
2021-08-08 12:07:25
Sad that in 2021 at GDC they don't compare the usual images, but something like this:
2021-08-08 12:08:40
Not sure if JXL can encode this kind of data without changing the encoder (as far as I understand it's a mix of data and images)
2021-08-08 12:16:11
I'd be interested in having even more efficient codecs to compare, good thing this time it's not just photographic images, but now it's a pretty niche data set
veluca
2021-08-08 12:54:38
IIRC anything that you can fit in a 3d array you can compress with jxl lossless
2021-08-08 12:55:00
well, up to some limits in the size of that 3d array, and one dimension doesn't get as good prediction as the others, but...
_wb_
2021-08-08 12:55:21
Jxl can encode 2D n-channel data in those data types, but "content may lack precise matching" is a bit of a problem - you need to feed the encoder an image, not raw bytes that may be too much or too little to fit an image
veluca IIRC anything that you can fit in a 3d array you can compress with jxl lossless
2021-08-08 12:56:33
If you include animation, it's a 4d array where one dimension (the channels) is limited to 4k.
veluca
2021-08-08 12:56:57
right... and the time dimension is also a bit weird 😛