JPEG XL

other-codecs

BlueSwordM
2021-04-06 04:10:39
I see.
2021-04-06 04:10:52
I'll see if I can easily fix it. If I can, I'll just submit a PR to the mozjpeg guys.
spider-mario you might be able to get around it by setting PNG_LIBRARY manually in CMakeCache.txt, not sure
2021-04-06 05:33:39
That doesn't seem like it fixed it.
2021-04-06 05:33:47
It didn't accept my change. 😛
2021-04-06 05:33:52
I'll probably find another way to fix it.
Scope
2021-04-06 09:32:03
Some comparisons on photographic sets (I didn't add JXL because it's in a different category, not storage formats) <https://old.reddit.com/r/jpegxl/comments/mepoun/modern_lossless_jpegxl_still_defeated_by_old/gtm8geq/>
2021-04-06 10:38:46
The Fastest, Safest PNG Decoder in the World <:Thonk:805904896879493180> <https://nigeltao.github.io/blog/2021/fastest-safest-png-decoder.html>
2021-04-06 10:40:46
<https://github.com/google/wuffs> ```Summary: Wuffs’ PNG image decoder is memory-safe but can also clock between 1.22x and 2.75x faster than libpng, the widely used open source C implementation. It’s also faster than the libspng, lodepng and stb_image C libraries as well as the most popular Go and Rust PNG libraries. High performance is achieved by SIMD-acceleration, 8-byte wide input and copies when bit-twiddling and zlib-decompressing the entire image all-at-once (into one large intermediate buffer) instead of one row at a time (into smaller, re-usable buffers). All-at-once requires more intermediate memory but allows substantially more of the image to be decoded in the zlib-decompressor’s fastest code paths.```
improver
2021-04-06 10:50:55
i liked their "puffs" name better
Scope
2021-04-06 10:54:25
🤔
Master Of Zen
2021-04-06 11:28:06
No JXL or AV1 :(
BlueSwordM
2021-04-07 12:04:54
Wait, it's that Nigel Tao?
2021-04-07 12:05:02
Man, it's been so long since I've heard of him.
2021-04-07 12:05:09
He's extremely good at mathematics.
veluca
2021-04-07 08:55:51
it would take a long while to write a JXL decoder in that language... also with SIMD, probably going to be very "fun"
BlueSwordM He's extremely good at mathematics.
2021-04-07 08:56:37
given the surname, I'm not too surprised (jk, I don't think he's related to Terence Tao :P)
BlueSwordM
veluca given the surname, I'm not too surprised (jk, I don't think he's related to Terence Tao :P)
2021-04-07 03:57:10
Oh you are correct.
2021-04-07 03:57:18
I often mistake Nigel Tao for his brother. 😛
veluca
2021-04-07 03:58:03
I didn't know they were actually related xD
2021-04-07 03:58:51
both went to IMO too, so there's some genetic factors there 😛
Crixis
2021-04-07 04:31:27
more like same education
veluca
2021-04-07 04:32:38
I doubt that's enough for IMO 😛
2021-04-07 04:34:33
but maybe things were different ~30 years ago, who knows
_wb_
2021-04-09 03:02:52
http://gailly.net/05533051.html haha
2021-04-09 03:03:13
"Making abstraction of the legalese, claim 7 says in short that you can compress an arbitrary sequence of two bits down to one bit. It took three years to the patent office to ascertain the validity of such a patent. A person with basic knowledge in mathematics and data compression can find the flaws immediately upon first reading."
Crixis
_wb_ http://gailly.net/05533051.html haha
2021-04-09 03:28:03
Sad
monad
2021-04-10 01:29:33
http://mattmahoney.net/dc/barf.html
Crixis
monad http://mattmahoney.net/dc/barf.html
2021-04-10 06:48:46
Sounds like a very bad scam
_wb_
2021-04-10 07:02:04
Or a good joke
Deleted User
2021-04-10 07:04:10
Someone actually tried to patent "circular transportation facilitation device". Any guesses what that could be?
_wb_
2021-04-10 07:07:58
Haha yes, it was in Australia iirc that they patented the wheel
Scientia
monad http://mattmahoney.net/dc/barf.html
2021-04-11 03:08:32
this seems to work but there's some catch i'm missing
2021-04-11 03:08:43
obviously it's a joke but
2021-04-11 03:08:56
someone did some good research for this joke
2021-04-11 03:12:55
no i'm absolutely stupid i just realized
2021-04-11 03:13:04
the bytes are in the filename
2021-04-11 03:13:11
haha
2021-04-11 03:25:06
this is pretty good
monad
Scientia someone did some good research for this joke
2021-04-11 04:48:56
Matt Mahoney of PAQ fame
BlueSwordM
spider-mario you might be able to get around it by setting PNG_LIBRARY manually in CMakeCache.txt, not sure
2021-04-13 12:04:00
I managed to fix the issue by messing with the CMake flags.
2021-04-13 12:08:48
I specifically looked at the set of conditions used, and then I found line 726, which was set to only look for libpng 1.6.
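For anyone hitting the same thing, the override spider-mario suggested would look roughly like this (library paths here are hypothetical examples; PNG_LIBRARY and PNG_PNG_INCLUDE_DIR are the standard FindPNG cache variables):
```
# point CMake's FindPNG at an explicit libpng instead of relying on its version check
cmake .. -DPNG_LIBRARY=/usr/lib/x86_64-linux-gnu/libpng.so \
         -DPNG_PNG_INCLUDE_DIR=/usr/include
```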
Deleted User
2021-04-13 06:40:20
I'm revisiting my multi-codec comparison: https://discord.com/channels/794206087879852103/805176455658733570/828588867065217064
2021-04-13 06:41:08
And I need your help. Which JPEG XR transform overlapping do you visually like the most?
2021-04-13 06:41:31
0
2021-04-13 06:41:39
1
2021-04-13 06:41:45
2
Nova Aurora
2021-04-13 06:44:10
2 I guess
2021-04-13 06:44:21
all three are unappealing
Deleted User
Nova Aurora all three are unappealing
2021-04-13 07:02:16
YOU DON'T SAY <:kekw:808717074305122316>
2021-04-13 07:02:37
You have to pick your poison
Nova Aurora
YOU DON'T SAY <:kekw:808717074305122316>
2021-04-13 07:05:55
I mean I would honestly prefer VVC smoothing all the detail until it's an anime over that, and also vote
Deleted User
2021-04-13 07:12:37
0 and 1 have that kinda smoothing effect
Scientia
2021-04-13 08:16:11
I might be wrong but 0 looks like it has the most detail
Scope
2021-04-13 09:10:40
Scientia
2021-04-13 09:11:36
Don't blame him, blame the guy who made it default for chrome to request webp from servers that can serve it
2021-04-13 09:12:11
It's annoying as hell to have a .jpg link but the server serves webp too so you download it and get a webp
Pieter
2021-04-13 09:18:54
why is that a problem?
_wb_
2021-04-13 09:34:42
That doesn't happen anymore anyway
2021-04-13 09:35:26
Chrome pretends not to know what webp is and re-requests the image when you right-click save a webp image
2021-04-13 09:36:35
Which is also annoying btw, I usually want to save the image I am seeing, not a fallback jpeg...
Scientia
2021-04-13 09:42:15
I'm just interested in downloading the original image
2021-04-13 09:42:28
Not anything derived from the original image
Master Of Zen
_wb_ Chrome pretends not to know what webp is and re-requests the image when you right-click save a webp image
2021-04-14 05:53:40
most web thing ever `Chrome pretends not to know what webp is`
Scope
2021-04-15 11:57:14
https://twitter.com/R1CH_TL/status/1382738910091042823
Jim
2021-04-16 01:19:27
Google often violates their own recommendations. I remember they worked with the IAB years ago to produce guidelines for advertising. There are many cases of them not following those, along with their own web recommendations. It's always a case of "do as I say, not as I do."
190n
2021-04-16 01:20:30
almost like the people working on the blog are different from the people who make web performance recommendations 🤔
Jim
2021-04-16 01:24:26
Not about who works where, it's the idea that if Google makes broad-targeted recommendations you would think that all parts of Google would follow them.
190n
2021-04-16 01:26:11
well good luck getting 100k+ employees to follow the same recommendations. what _would_ be nicer would be if they used (or developed internally) a blogging platform that would take care of this stuff automatically. convert all gifs to videos, convert images into a bunch of formats, etc etc.
monad
2021-04-16 01:27:50
Yeah, software mitigates this, so one wonders why they don't utilize it.
Jim
2021-04-16 01:28:59
Good luck getting the millions of developers outside Google to follow them as well.
monad
2021-04-16 01:38:05
Realistically, it's probably that below some threshold MB is deemed good enough for already "large" media anyway.
fab
2021-04-16 04:59:45
which encoders does it use
2021-04-16 04:59:46
https://deploy-preview-987--squoosh.netlify.app/editor
2021-04-16 04:59:54
so new draft in squoosh github
2021-04-16 05:00:10
with avif grain synthesis demo
2021-04-16 05:00:21
what avif version and commit is it?
2021-04-16 05:00:35
can you check please i'm interested
190n
2021-04-16 05:02:33
version 0.5.4 i think
fab
2021-04-16 05:03:02
the commit
2021-04-16 05:03:10
from today or yesterday?
2021-04-16 05:03:14
what specific date?
2021-04-16 05:03:19
the aajsjskl
190n
2021-04-16 05:04:47
the commit is here https://github.com/GoogleChromeLabs/squoosh/tree/fca5f0860b503952655f004f00fbb60ba88fbe6e
2021-04-16 05:04:48
4 hours ago
fab
2021-04-16 05:11:00
wp2 codec
2021-04-16 05:11:04
ah the branch
2021-04-16 05:12:42
2021-04-16 05:13:28
avif is at the level wp2 was at 3 months ago
2021-04-16 05:14:17
2021-04-16 05:14:21
i used those options
2021-04-16 05:19:29
Master Of Zen
2021-04-17 07:20:58
I found my old avif screenshots
2021-04-17 07:21:07
some are quite small
2021-04-17 07:21:16
2021-04-17 07:21:26
2021-04-17 07:22:15
^ 493 bytes for 🙂
raysar
2021-04-17 09:23:45
273 bytes, noob ! (-m -s 9 -E 3)
fab
2021-04-19 03:56:35
use vp9 28-12-2020, it's the best quality for 720p. av1 is still a new technology; it can reach great reductions if you use 1080p and a bit more than youtube's bitrate
2021-04-19 03:56:56
this is from an r/av1 comment by me about a video server
2021-04-19 03:57:25
i don't know the settings for that video
Scope
2021-04-20 05:43:14
Also <https://encode.su/threads/342-paq8px?p=68562&viewfull=1#post68562> > My GDCC entries were prototypes of ideas I had for some codecs for it, the main goal for me was to have a framework that handled detection and clustering so that with specialized codecs one could get compression ratios close to paq8px but at 2-3 MB/s on contemporary hardware. mpais is the EMMA author
fab
2021-04-20 07:01:38
so do you think jpeg xl can compress more than vp9 28-12-2020?
2021-04-20 07:01:46
or is it only marketing?
2021-04-20 07:03:41
is the image quality really better, or is even this pure marketing?
NeRd
fab so do you think jpeg xl can compress more than vp9 28-12-2020?
2021-04-20 09:13:11
Why are you using VP9 (the video codec) to compress still images? Or alternatively, why are you trying to use JPEG-XL to compress video?
fab
2021-04-20 09:16:26
i know
2021-04-20 09:16:51
but i don't think jpeg xl can compress a file 3x more
NeRd
2021-04-20 09:21:31
I've had some .PNG files that JPEG-XL has been able to reduce by more than 3 times over in lossless mode, although I would agree that the majority of images are not that reducible. I would expect lossy reductions to reduce the size of files by a fairly large factor however, which JPEG-XL has been able to do quite well. Are you comparing single-frame VP9 videos to lossy JPEG-XL? I'd assume that would be the most fair comparison, although I may be wrong.
_wb_
2021-04-20 09:30:09
VP9 should be somewhere between lossy WebP (VP8) and AVIF, probably closer to lossy WebP (or even worse in practice, since VP9 doesn't have an encoder tuned for still images)
raysar
2021-04-20 05:53:44
I see nobody testing vp9 i frame (and x264 iframe), only x265 iframe with BPG and heif, and it's a good denoiser image format 😄
Scope
2021-04-22 05:52:23
🤔 <https://www.khronos.org/news/press/khronos-ktx-2-0-textures-enable-compact-visually-rich-gltf-3d-assets> > Today, The Khronos®️ Group announces the ratification of KTX™️ 2.0, adding support for Basis Universal supercompression to this container format for reliable, ubiquitous distribution of GPU textures. > Additionally, Khronos has released the KHR_texture_basisu extension enabling glTF to contain KTX 2.0 textures, resulting in universally distributable glTF assets that reduce download size and use natively supported texture formats to reduce GPU memory size and boost rendering speed on diverse devices and platforms.
2021-04-22 05:52:28
<https://playground.babylonjs.com/full.html#YD2TXP#22>
2021-04-22 05:56:54
<https://www.khronos.org/ktx/> https://www.khronos.org/assets/uploads/apis/2021-ktx-comparing-ktx-with-basis-universal-to-image-formats_2.jpg
_wb_
2021-04-22 06:09:59
GPU texture formats still need to be fixed bpp, right? (in memory, not on disk)
Scope
2021-04-22 06:17:41
As far as I know, yes
_wb_
2021-04-22 06:26:51
I wonder if jxl groupdim might be small enough to use a subset of jxl with padded groups as a gpu texture format
Scientia
2021-04-22 06:31:09
Isn't decoding speed especially important for GPU textures?
2021-04-22 06:31:34
I know jxl is fast but is it fast enough for being a GPU texture format?
_wb_
2021-04-22 06:32:54
Not in general, but a subset might be
190n
2021-04-22 06:50:33
so do most games store textures on disk in gpu formats? or do they store in some other format and convert to something the gpu can use upon loading?
2021-04-22 06:51:03
since afaik gpu texture formats aren't very efficient, i wonder if they could be losslessly compressed as in jpeg → jxl transcoding
_wb_
2021-04-22 06:51:39
they already do that I think
190n
2021-04-22 06:53:00
ooh interesting
Scope
2021-04-22 06:55:18
Yep https://github.com/BinomialLLC/basis_universal
2021-04-22 06:58:39
<http://renderingpipeline.com/2012/07/texture-compression/>
2021-04-22 09:55:34
https://blog.youtube/inside-youtube/new-era-video-infrastructure
fab
2021-04-22 03:49:33
https://arxiv.org/pdf/2104.10544.pdf
2021-04-22 03:49:36
what is this
Pieter
fab what is this
2021-04-22 05:04:32
It is a doctoral dissertation.
Scientia
2021-04-22 06:19:52
AV1 on YouTube is great
2021-04-22 06:20:07
But their bitrates are like 10% lower than you need to get as good an image as their avc and vp9 encodes
BlueSwordM
2021-04-22 06:20:57
There's also another problem: their stream quality with AV1 is not consistent.
2021-04-22 06:21:11
At times, with a bitrate only slightly lower than VP9, I get top notch quality.
2021-04-22 06:21:27
At other times, I get 40% lower bitrate than VP9 and it's garbage.
Scientia
2021-04-22 06:21:37
AV1 still can outdo vp9 at lower bitrates but it's not magic
2021-04-22 06:22:05
YouTube can maybe squeeze 10 or 20% of the size of their vp9 encodes out
2021-04-22 06:22:23
But not 30 or 40% that's when it begins to be an issue
fab
2021-04-22 06:22:37
Vp9 at 580 kbps 720p looks better, 28-12-2020 build
BlueSwordM
Scientia But not 30 or 40% that's when it begins to be an issue
2021-04-22 06:22:43
I've been thinking about it actually: Is it possible that the reason video quality varies so widely per AV1-encoded video is exactly because they might be using CPU encoding for some videos and an FPGA-based solution for others? It's been over a year since Google talked about having FPGA-based encoding, and we know CPU-based encoding, especially with more complex codecs, is quite a bit superior.
fab
2021-04-22 06:22:50
I guess they used their chip
BlueSwordM
2021-04-22 06:23:07
Like, I can get AV1 to look better than VP9 at 50% of the bitrate, but it requires using more advanced encoder techniques and actually capable settings.
2021-04-22 06:23:33
Techniques that YT would not want to use as it would make their audiences used to higher quality encodes <:kekw:808717074305122316>
Scientia
2021-04-22 06:23:50
They should be able to use the best encode settings; they usually wait until a video gets a certain number of views and is recent enough
2021-04-22 06:24:06
So they have the time and ofc the hardware to make a good encode
BlueSwordM
2021-04-22 06:24:20
But do they care enough to spend the time on it?
2021-04-22 06:24:37
We'll never get the true answer to that question sadly, since we will never know Youtube's true encoding settings.
Scientia
2021-04-22 06:24:38
They care enough to start using it in the first place
2021-04-22 06:25:07
I heard YouTube used something like a CRF of 30 or 33
2021-04-22 06:25:37
But it's probably a lot more complex
2021-04-22 06:25:50
Probably extremely long
BlueSwordM
2021-04-22 06:26:23
Eh, encode times aren't much of an issue. I encode AV1 daily and it's not slow if you know what to do.
Scientia
2021-04-22 06:26:29
No
2021-04-22 06:26:35
I mean the arguments and cli
2021-04-22 06:26:42
Are probably really long
BlueSwordM
2021-04-22 06:26:51
True. Their encoding pipeline must be insane <:kekw:808717074305122316>
Scientia
2021-04-22 06:27:28
YouTube could make really good encodes considering
2021-04-22 06:27:34
Even on avc
2021-04-22 06:27:59
Since they have hundreds of millions of videos and giant machine learning infrastructure
2021-04-22 06:28:27
If they wanted to make some system for using machine learning to best tune videos they could
fab
2021-04-22 06:32:34
https://www.youtube.com/watch?v=5ChlMMh1mJ0
2021-04-22 06:32:42
watch this at the start at 360p
2021-04-22 06:33:04
you see how the algorithm deletes the details you can't see
2021-04-22 06:34:30
this is very intelligent
diskorduser
fab you see how the algorithm deletes the details you can't see
2021-04-22 06:35:37
Explain
fab
2021-04-22 06:38:03
2021-04-22 06:38:41
new vp9 quality
2021-04-22 06:38:45
at 720p
2021-04-22 06:38:52
it becomes less
2021-04-22 06:38:59
this is maybe with the new chip
2021-04-22 06:39:06
or some variation
2021-04-22 06:39:16
but i do not think italy has gotten it yet
2021-04-22 06:39:39
he doesn't even look like himself
2021-04-22 06:40:01
too much filtering of the image
2021-04-22 06:41:03
355 KBPS
2021-04-22 06:41:05
WTF
2021-04-22 06:41:08
AHAHAH
2021-04-22 06:41:09
AHAHHA
2021-04-22 06:41:10
AHAH
Pieter
fab you see how the algorithm deletes the details you can't see
2021-04-22 06:46:16
how can you see something you can't see?
Jim
2021-04-22 06:47:03
Observing its effects on other things you can see.
fab
2021-04-22 06:51:30
palette is evident in this
2021-04-22 06:51:41
this is not worse than what i did today with jpeg xl
2021-04-22 06:51:57
but sure, youtube, i hope the algorithm improves sometimes
2021-04-22 06:52:02
but i don't care
2021-04-22 06:52:05
music is music
2021-04-22 06:54:42
beautiful banding at 720p vp9
2021-04-22 06:54:44
2021-04-22 06:57:25
level 3
2021-04-22 06:57:26
Petr
BlueSwordM Eh, encode times aren't much of an issue. I encode AV1 daily and it's not slow if you know what to do.
2021-04-23 05:42:18
What about encoding to AV1 lossless?
2021-04-23 05:42:22
I use `ffmpeg -hide_banner -i input.avi -f yuv4mpegpipe -pix_fmt yuv444p uncompressed.y4m`
2021-04-23 05:42:26
and then `aomenc --lossless=1 --threads=4 -o output.webm uncompressed.y4m`
2021-04-23 05:42:30
And it's damn slow. Can you speed it up somehow?
2021-04-23 05:42:41
The input is a lossless screen recording of an old game that uses only 4 colors. 😜
BlueSwordM
Petr And it's damn slow. Can you speed it up somehow?
2021-04-23 05:42:44
That's easy. Don't use lossless AV1 <:kekw:808717074305122316>
Petr
2021-04-23 05:44:47
Oops, that's not the kind of reply I was hoping for… 🙂
2021-04-23 05:46:33
AV1 lossless gives the best compression results out of all lossless formats I've seen so I'm kind of stuck to it.
_wb_
2021-04-23 06:06:30
Tried ffv1?
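Something like this would be a starting point (a sketch using ffmpeg's standard FFV1 options; tweak -slices and -context for the speed/density trade-off):
```
# FFV1 version 3 with per-slice CRCs; -context 1 selects the larger context model
ffmpeg -i input.avi -c:v ffv1 -level 3 -context 1 -slices 4 -slicecrc 1 output.mkv
```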
Petr
2021-04-23 06:10:59
Not yet. Will try, thanks.
fab
2021-04-24 03:38:01
exhale 1.1.5 rc1
2021-04-24 03:38:03
https://hydrogenaud.io/index.php?topic=118888.msg996679;topicseen#new
Master Of Zen
Petr Not yet. Will try, thanks.
2021-04-27 02:21:54
As far as I know, x264 has the best lossless performance, outperforming ffv1 and x265
Petr
2021-04-27 03:05:22
H.264 lossless is fast but the compression ratio is far from best. ffv1 disillusioned me with huge output files. AV1 lossless is bloody slow but gives the best compression ratios.
2021-04-27 03:06:15
If anyone is interested, I can provide my test results when I'm back to the computer.
fab
2021-04-27 03:09:07
ffv1, but is it the standardized version
2021-04-27 03:09:15
or the one from 4 years ago
Master Of Zen
Petr H.264 lossless is fast but the compression ratio is far from best. ffv1 disillusioned me with huge output files. AV1 lossless is bloody slow but gives the best compression ratios.
2021-04-27 07:08:08
btw as I remember, AV1 is not actually lossless in lossless mode
2021-04-27 07:08:11
it's broken
veluca
2021-04-27 07:09:43
it's lossless if you don't do YCbCr IIRC
spider-mario
2021-04-27 07:16:56
does it still compress that well in that case?
2021-04-27 07:19:24
also, last time I tried lossless x264, I think I might have encountered the same issue
2021-04-27 07:19:29
but I could have done it wrong
2021-04-27 07:19:33
I should try again one of these days
Master Of Zen
spider-mario also, last time I tried lossless x264, I think I might have encountered the same issue
2021-04-27 07:21:14
just running `ffmpeg -i riv.mkv -c:v libx264 -crf 0 riv264.mkv` gives `[Parsed_psnr_0 @ 0x563e89be0180] PSNR y:inf u:inf v:inf average:inf min:inf max:inf`
improver
2021-04-27 07:21:51
i think one needed 10bit ycbcr for 8bit rgb to not lose data or something like that
Master Of Zen
2021-04-27 07:22:38
assuming your source is rgb?
Pieter
2021-04-27 07:23:20
most video source material is some flavor of YUV (really YCbCr) already no?
Master Of Zen
Pieter most video source material is some flavor of YUV (really YCbCr) already no?
2021-04-27 07:23:35
yes, it is)
2021-04-27 07:24:02
it's actually quite hard to find anything rgb outside of test samples
improver
2021-04-27 07:24:55
pretty easy if you want to encode rgb avif actually losslessly though
Master Of Zen
2021-04-27 07:25:09
`[Parsed_psnr_0 @ 0x559f985b3e40] PSNR y:inf u:52.872374 v:53.360745 average:57.880911 min:57.706772 max:58.177437`
2021-04-27 07:25:14
It's aomenc lossless
2021-04-27 07:25:28
`--lossless=1 --threads=64 --cpu-used=6`
2021-04-27 07:25:47
and it weighs more than x264 lossless
spider-mario
Master Of Zen just running `ffmpeg -i riv.mkv -c:v libx264 -crf 0 riv264.mkv` gives `[Parsed_psnr_0 @ 0x563e89be0180] PSNR y:inf u:inf v:inf average:inf min:inf max:inf`
2021-04-27 07:31:49
iirc, that wasn’t my experience with crf 0
2021-04-27 07:32:09
but I don’t even remember for sure what it was that I was encoding
2021-04-27 07:32:14
maybe my Prince of Persia 2 playthrough
Master Of Zen
2021-04-27 07:33:06
IIRC there was an issue with ffmpeg where crf 0 wasn't mapping to -qp 0, or something like that
2021-04-27 07:33:25
So x264 wasn't running in lossless mode with crf 0
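The unambiguous way to get x264's lossless mode is -qp 0, which bypasses any crf-to-qp mapping (same file names as the command above):
```
# -qp 0 always selects x264 lossless, regardless of how crf is mapped
ffmpeg -i riv.mkv -c:v libx264 -qp 0 -preset medium riv264_lossless.mkv
```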
spider-mario
2021-04-27 07:33:38
that might have been it, yeah
2021-04-27 07:34:00
fwiw it was last year, if it was the POP2 playthrough
2021-04-27 07:34:02
April 2020
_wb_
2021-04-27 08:02:27
I think they're going to add YCoCg or some other RCT to AVIF or AV1 to better do lossless compression
2021-04-27 08:03:10
It's a bit worrying though that they can still add things like that to the spec at such a late stage while expecting interoperability.
Petr
Master Of Zen btw as I remember, AV1 is not actually lossless in lossless mode
2021-04-28 05:54:06
When I started learning lossless video transcoding, I found this article from 2018: https://www.texpion.com/2018/07/av1-vs-vp9-vs-avc-h264-vs-hevc-h265-1-lossless.html
2021-04-28 05:54:09
At that time, AV1 lossless wasn't lossless. But the current versions of aomenc do lossless really losslessly and the compression ratios are much better than the competitors.
2021-04-28 05:54:55
So you remember it correctly, <@!258670228819410944>, but things have changed. 🙂 And it's good.
2021-04-28 05:57:15
So if people want the lightest lossless videos and to grow old while waiting for the results, 🙂 they can do AV1.
_wb_
2021-04-28 06:09:52
Is it still good if you take encode speed into account?
2021-04-28 06:11:15
I think for many lossless video use cases, encode speed is as important as decode speed (since it's usually part of workflows that are basically encode-once, decode-once)
Scientia
2021-04-28 06:14:33
I think av1 lossless is good for archival
2021-04-28 06:14:56
Think, public library, or video archive
2021-04-28 06:15:52
For workloads that require encode once, decode once, I'd think a faster but less efficient codec would be ok
2021-04-28 06:16:22
Especially since you're presumably getting rid of the video after you decode it
Petr
_wb_ I think for many lossless video use cases, encode speed is as important as decode speed (since it's usually part of workflows that are basically encode-once, decode-once)
2021-04-28 06:20:10
Agreed, AV1 is a no-go in such cases.
_wb_
2021-04-28 06:50:26
I think AV1 is mostly useful for encode-once, decode-many-times use cases
2021-04-28 06:52:18
Archival is typically more like encode-once, decode-never (i.e. you don't care much about decode speed, the file is probably stored in some very high latency storage anyway so that will negate any advantage of fast decode)
2021-04-28 06:54:04
So there I think more symmetrical compression makes more sense. Also things like error resilience are probably more important (not much point in doing very strong compression if it forces you to bump up the redundancy of the storage because a single rotten bit can spoil the entire thing)
Pieter
2021-04-28 07:01:46
i've used FFV1 before when doing several processing steps on some raw video material
2021-04-28 07:02:12
just because i didn't have enough storage for the whole thing uncompressed
2021-04-28 07:02:33
it was indeed mostly encode once / decode once (or maybe a few times)
veluca
2021-04-28 08:14:37
well, we certainly hope so, at least 😛
190n
2021-04-28 08:15:28
it seems likely as JXL should be getting browser support soon™️ while afaik none of those other standards were ever supported in browsers (except maaaaybe jpeg xr in ie/edge?)
Pieter
2021-04-28 08:18:27
Do any of them have features that JPEG-XL lacks?
Petr
Pieter Do any of them have features that JPEG-XL lacks?
2021-04-28 08:19:10
Nasty patents? 🤣
190n
2021-04-28 08:19:29
jpeg xt, if i'm understanding correctly, is about adding more data to a jpeg file that jpeg xt decoders use to improve quality while the files are still compatible with jpeg decoders
Pieter
Petr Nasty patents? 🤣
2021-04-28 08:19:58
I misread that as "parents". Was confused.
Scientia
190n jpeg xt, if i'm understanding correctly, is about adding more data to a jpeg file that jpeg xt decoders use to improve quality while the files are still compatible with jpeg decoders
2021-04-28 08:38:02
doesn't jxl do something similar with vardct? where it can provide a jxl with the vardct info or a worse jpeg?
Crixis
Scientia doesn't jxl do something similar with vardct? where it can provide a jxl with the vardct info or a worse jpeg?
2021-04-28 08:54:12
no, jxl is not a directly readable jpg. a jpg decoder can't do anything with a jxl
Pieter
Scientia doesn't jxl do something similar with vardct? where it can provide a jxl with the vardct info or a worse jpeg?
2021-04-28 08:58:18
You may be thinking of the JPEG lossless recompression feature, but that's something very different. It's not compatible with existing JPEG software; it just permits reconstructing the input .jpeg file exactly (but you still need a JPEG-XL decoder to do so).
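A sketch of that round trip with the reference tools (recent cjxl does lossless JPEG recompression by default for JPEG input; older builds may need an explicit flag, so treat the exact flags as version-dependent):
```
cjxl input.jpg recompressed.jxl      # stores the JPEG's data losslessly, with reconstruction info
djxl recompressed.jxl roundtrip.jpg  # reconstructs the original .jpeg bit-exactly
```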
_wb_
2021-04-28 08:59:05
XT is indeed backwards-compatible — but that approach doesn't really work imo, e.g. you do have "graceful degradation" in the sense that the image decodes with existing jpeg decoders, and you can add an alpha channel that a jxt decoder can understand, but that still doesn't give you alpha with the old jpeg decoder, so in practice you will just see a non-alpha image (until people upgrade their jpeg decoder to understand XT, but that just doesn't happen because "it works with the old decoder")
2021-04-28 09:02:12
For less essential extensions like adding HDR data while having a tone mapped image in the jpeg, it does kind of work, but still it is hard to get adoption of the new things if old decoders also "work" (poorly, but end-users don't know that)
Pieter Do any of them have features that JPEG-XL lacks?
2021-04-28 09:07:08
I don't think so. JPEG 2000 is a bit more flexible in terms of bitstream organization and reordering, but it is not clear to me if there are any use cases where it would actually make a difference.
2021-04-28 09:11:01
LS and HTJ2K have a different trade-off between density and speed (encodes faster and less dense) which I suppose makes them more suitable than XL for some use cases - though it's not clear to me if jxl encoding can also be faster and worse — I suspect LS does not have an advantage over XL, while HTJ2K can probably offer better latency/throughput than XL.
2021-04-28 09:17:29
So assuming jxl gets widespread support, I think it supersedes all existing JPEG codecs for most use cases, with the exceptions of: - XS and HTJ2K for professional video production (and possibly camera raw) - JPEG Pleno for non-2D images (pointcloud, lightfield, holography)
2021-04-28 09:18:38
Meaning the others are basically in "maintenance mode", only relevant for legacy reasons.
2021-04-28 09:23:36
Intra only, yes
2021-04-28 09:23:59
For production / digital cinema, not for end-user delivery
2021-04-28 09:24:41
Maybe medical too
2021-04-28 09:30:25
We have <@553324745240608773> here, who will be asleep right now (US timezone) but he knows everything about (HT) J2K and can explain better than me what the use cases are.
Scope
2021-04-28 09:34:49
https://youtu.be/Jvb3mUCyHH0
Master Of Zen
Petr So you remember it correctly, <@!258670228819410944>, but things have changed. 🙂 And it's good.
2021-04-28 11:00:01
The proof that aomenc doesn't do lossless I made just now; it's from now, not from 2018
Petr
Master Of Zen The proof that aomenc doesn't do lossless I made just now; it's from now, not from 2018
2021-04-28 11:38:55
OK, I could be wrong.
2021-04-28 11:38:56
Do you have any examples?
2021-04-28 11:39:02
What I do is that I take input videos which have exactly 4 colours in every frame, encode them with `-pix_fmt yuv444p` to AV1 lossless, do a snapshot of a random frame and count the colours. If the result is 4, I consider it lossless. I could also do PSNR but I'm too lazy for it. 🙂
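For reference, that check could look like this (frame index and file names are arbitrary; assumes ffmpeg plus ImageMagick):
```
# dump one frame from the encode, then count unique colours (4 = passes the test)
ffmpeg -i output.webm -vf "select=eq(n\,100)" -vframes 1 frame.png
identify -format %k frame.png
```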
Pieter
2021-04-28 11:42:56
So you wouldn't detect any color space transformations?
Master Of Zen
Petr What I do is that I take input videos which have exactly 4 colours in every frame, encode them with `-pix_fmt yuv444p` to AV1 lossless, do a snapshot of a random frame and count the colours. If the result is 4, I consider it lossless. I could also do PSNR but I'm too lazy for it. 🙂
2021-04-28 11:47:51
Ok, you shouldn't do it like that
2021-04-28 11:47:56
I managed to make lossless av1
2021-04-28 11:48:02
2021-04-28 11:48:05
2021-04-28 11:48:08
2021-04-28 11:48:23
` ffmpeg -i riverbed_source.mkv -c:v libaom-av1 -cpu-used 6 -b:v 0 -aom_options lossless=1 riverbed_av1.mkv`
2021-04-28 11:50:31
it's still bigger than x264 medium, even with cpu-used 0
raysar
2021-04-28 11:52:50
Is an mjxl, like mjpeg, possible? For example teamviewer uses mjpeg and refreshes only the moving parts of the image. And stupid security cams too.
Petr
Pieter So you wouldn't detect any color space transformations?
2021-04-28 12:08:26
How do you mean it? Like the encoder does something that I don't expect? Or like I use some param?
Pieter
Petr How do you mean it? Like the encoder does something that I don't expect? Or like I use some param?
2021-04-28 12:53:28
If the encoder applies a non-reversible color transformation, you would incorrectly treat that as lossless.
Petr
2021-04-28 01:01:11
AFAIK it doesn't.
_wb_
2021-04-28 01:04:53
It does YCbCr by default
2021-04-28 01:05:38
which is basically the same thing as converting 8-bit RGB to 7-bit RGB with somewhat more precise G
2021-04-28 01:07:02
so it loses ~2 bits per pixel, which are typically very high entropy bits (since they're the least significant ones, so they tend to behave like random noise, i.e. not compressible)
2021-04-28 01:08:34
if you want true lossless, you need to either encode in RGB, or do only reversible color transforms, like YCoCg
2021-04-28 01:08:54
RGB can be done in AVIF, but it has poor compression (since no color decorrelation at all happens)
2021-04-28 01:09:13
YCoCg is being added, I don't know if it can be done via ffmpeg already
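For reference, the YCoCg-R lifting steps look like this; every operation is exact integer arithmetic on signed values, so undoing them in reverse order reproduces RGB bit-for-bit (a generic sketch, not any particular codec's code):
```
# forward (RGB -> YCoCg-R)
Co = R - B
t  = B + (Co >> 1)
Cg = G - t
Y  = t + (Cg >> 1)

# inverse (YCoCg-R -> RGB): each lifting step undone in reverse
t = Y - (Cg >> 1)
G = Cg + t
B = t - (Co >> 1)
R = B + Co
```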
Pierre-Anthony Lemieux
2021-04-28 03:35:30
<@456226577798135808> Happy to discuss J2K in more detail. In summary, I would say that J2K has pretty much unmatched scalability: the same codec can handle an arbitrary number of channels, integer and floating point pixel color values, lossy and lossless coding (important for archival applications), partial codestream decoding (by pixel bits, position, resolution, etc.), low and high-latency (few lines vs entire frame). Here's a demo of J2K doing partial streaming/decoding of 16-bit lossless 4K RGB frames (20 MB+ each). http://sandflow.com/public/wow/index.html. Do not hesitate to DM me.
Deleted User
raysar Is an mjxl, like mjpeg, possible? For example teamviewer uses mjpeg and refreshes only the moving parts of the image. And stupid security cams too.
2021-04-28 06:35:47
Oh God, JPEG XL would be soooo powerful here... Patches could act as a rudimentary form of motion compensation, the cursor being the easiest one because it almost never changes 😃 For this use case lossy Modular would probably be better than VarDCT.
Scope
2021-04-28 06:40:26
I think the latency will be a problem
_wb_
2021-04-28 06:46:04
mjpeg with refresh of only moving part could be directly translated to recompressed jpeg frames
2021-04-28 06:46:25
(but you're still going to be better off with something like h264)
BlueSwordM
2021-04-28 07:03:08
MJXL would be a very nice update for those cheap ass capture cards <:kekw:808717074305122316>
Scientia
2021-04-29 01:20:55
Using chromium 89 and same behavior, interesting
Pierre-Anthony Lemieux
2021-04-29 01:22:36
Yes, here's what is happening.
2021-04-29 01:28:50
There is an MXF file stored in an S3 bucket. The MXF file contains a sequence of lossless J2K frames. Each frame is a 4K image with 3-component pixels, and each component is a 16-bit integer.
2021-04-29 01:29:33
Each frame is therefore approximately 20 Megabytes
2021-04-29 01:30:03
so the bit rate at 24 fps is approx. 8 Gbps
2021-04-29 01:30:45
... so it is evidently impossible to stream the video in realtime in its entirety
2021-04-29 01:32:40
Each J2K frame is however organized in resolution progressive order (a 20-year old J2K feature)
2021-04-29 01:33:27
It is therefore possible for the client to read the first N bytes of each frame (based on the available bandwidth) and decode whatever resolution is present in those N bytes.
2021-04-29 01:34:31
So the low resolution video that you see is a low-resolution extraction of each frame
2021-04-29 01:34:50
When you pause, the remaining resolution information is downloaded and you get the full lossless frame
2021-04-29 01:35:25
*the key idea is that no proxy is needed to view this 4K video*
2021-04-29 01:37:27
This of course works with lower resolution lossy files (in which case the video would look much better)
2021-04-29 01:38:01
(the MXF parsing and J2K decoding is done in-browser)
2021-04-29 01:40:23
4K lossless 48-bit files are routinely used for professional audio-visual masters/archives
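The byte-range part of that is plain HTTP; a hypothetical equivalent with curl (URL and prefix size made up for illustration):
```
# fetch only the first 256 KiB of a resolution-progressive J2K codestream;
# a decoder can reconstruct the lower resolution levels from this prefix alone
curl -r 0-262143 https://example.com/frames/frame0001.j2c -o frame0001_prefix.j2c
```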
Deleted User
2021-04-29 01:54:22
When you pause in a perfect moment <:kekw:808717074305122316>
Pierre-Anthony Lemieux
2021-04-29 02:28:02
Yes, the demo uses HT, which offers an alternative to the original J2K arithmetic block coder -- HT is much faster. No change to the wavelet transform and codestream structure, however. There is an open source implementation at https://openjph.org/
2021-04-29 02:28:42
The same principle would work with the original J2K arithmetic block coder, minus the increased performance/lower latency
Pieter
2021-04-29 02:28:49
What does HT stand for?
Pierre-Anthony Lemieux
2021-04-29 02:28:55
High-throughput
Pieter
2021-04-29 02:28:58
Ah.
Pierre-Anthony Lemieux
2021-04-29 02:30:07
the HT block coder just plays much better with modern highly-parallel CPU/GPU architectures
2021-04-29 02:48:26
HT was published as an common standard between ITU and ISO in 2019: https://www.itu.int/rec/T-REC-T.814/en
fab
2021-04-29 06:32:15
av1 new hw chip by youtube
2021-04-29 06:32:17
_wb_
2021-04-29 09:14:51
Regarding WebP2: I am still wondering which (planned) feature is supposed to make it especially suitable for the web, besides its name. I can see how its design choices make it _not_ suitable for some non-web use cases (e.g. limited to 10-bit, which is not enough for authoring workflows), but I don't see what features it has / will have that make it more suitable for the web than, say, jxl.
improver
2021-04-29 09:16:11
b-but polygon previews
2021-04-29 09:16:38
(not serious feature, but a bit interesting one)
2021-04-29 09:19:27
in general I agree though. right now I'm not seeing it either
_wb_
2021-04-29 09:21:27
Yes the triangular preview thing is fun but it's basically a builtin blurhash-type placeholder of a specific kind, in principle it could be added to most formats as a small 'metadata' chunk that goes early in the bitstream, if it's considered useful. I personally think a preview needs to be a bit closer to the final image to be useful, though.
improver
2021-04-29 09:25:24
I think avif people were considering adding something of that sort too (chunk for preview, I don't think it was with polygons though). in general previews are inferior to progressive decoding imo, as they take up additional bandwidth and information gets discarded once image gets fully loaded. only really small ones like in webp2 makes sense to me
Scope
2021-04-29 09:26:00
As far as I understood, it's mostly a priority on low bpp (like previews in YouTube and other applications, UI elements), with faster encoding speed than AVIF, minimal headers, incremental decoding and higher efficiency than lossless WebP at comparable speed. The current implementation is slow, but it's not optimized yet
improver
2021-04-29 09:26:54
https://github.com/AOMediaCodec/av1-avif/issues/102 ah this got closed
2021-04-29 09:27:58
closed with reference to https://github.com/AOMediaCodec/libavif/issues/605 ..?
2021-04-29 09:28:09
so it seems they gonna do it heh
_wb_
2021-04-29 09:29:37
Yes, I see the potential improvements of webp2 compared to avif: smaller headers, better lossless, faster encode, maybe faster decode. But I think jxl already has those things, or am I missing something?
2021-04-29 09:29:47
Maybe the focus on low bpp
improver
2021-04-29 09:30:32
I personally dislike webp's and webp2's "web" limitations
Scope
2021-04-29 09:33:27
For medium and high bpp I do not see any particular advantage, but at low bpp I think WebP2 may be better
improver
2021-04-29 09:34:16
is it going to support progressive decoding, though?
_wb_
2021-04-29 09:34:20
No
improver
2021-04-29 09:34:29
because without that I can't see how it's better at all
_wb_
2021-04-29 09:34:33
Incremental but not progressive
improver
2021-04-29 09:35:05
incremental? you mean that poly preview thing, or something else?
_wb_
2021-04-29 09:35:13
No just top down
improver
2021-04-29 09:35:17
oh
_wb_
2021-04-29 09:35:23
Sequential I usually call it
2021-04-29 09:35:32
But they call it incremental
improver
2021-04-29 09:35:36
well, that's at least better than avif, I think
_wb_
2021-04-29 09:36:16
Avif could probably also do that, it's just that in a video decoder api you don't tend to bother with the hooks for that
2021-04-29 09:38:30
I don't really see a large use case for low bpp but not progressive - usually low bpp is good when network speed is an issue, but that's also when progressive is most useful
2021-04-29 09:40:01
From the server pov to save bandwidth/storage cost it still makes sense, but from the end-user pov not so much imo
improver
2021-04-29 09:40:37
if the image is small enough, it wouldn't need much progression to load on medium-to-high speed networks, but it would still save bandwidth for the server side
2021-04-29 09:40:49
ah yeah you just said the same
Scope
2021-04-29 09:41:40
The triangular preview or LQIP can be roughly equivalent to progressivity in the perception of faster loading (as long as the network speed is not extremely slow). Like 2-3 steps of progression, but not much more
_wb_
2021-04-29 09:42:20
It can help to reduce flashing
2021-04-29 09:43:56
But that's only when it all loads within a second or so - if it's slower, then you're not really going to get perceived faster loading since those previews are still quite far from a 'usable' image
2021-04-29 09:44:34
(e.g. if it contains text you won't be able to read it, if it contains faces you won't be able to recognize them)
improver
2021-04-29 09:47:13
format not being finalized yet allows them to add something better, maybe. not sure at what stage they're at, and if they have plans for anything else
Scope
2021-04-29 09:49:52
As far as I understood from Pascal Massimino's messages, no changes are planned and these are intentional limitations, because such features are not needed in practical use. Like, their benefits are not proven by any practical research (for web use)
improver
2021-04-29 09:53:54
goal of being on par with avif on decoding speed also sounds kinda ehhh as avif is pretty slow compared to others
2021-04-29 09:56:19
btw, how good is wp2 lossless?
2021-04-29 09:56:40
should look into your benchmarks, maybe you had column for that
2021-04-29 09:58:25
jxl winning that
Crixis
2021-04-29 10:07:23
It's a little better than webp
Scope
2021-04-29 10:18:15
Yes, lossless WebP2 is an improved WebP; it is also consistently more efficient, and considering that the encoding speed will remain almost the same (after optimizations), this is a good result
_wb_
2021-04-30 05:15:06
It is a good and useful result indeed, but it would also be good to work on making lossless jxl better and faster. The use cases for lossless WebP2 are limited to the web (small non-photo images) because with the size and bit depth limitations it will not be suitable for image authoring.
2021-04-30 06:59:08
It already does imo, but the encoder could be optimized more to get better speed/density trade-offs.
2021-04-30 07:00:10
Basically the current speed 3 is a fixed MA tree that we know works well for photos, so tree learning is skipped completely, which makes it quite fast.
2021-04-30 07:01:57
We could do more fixed trees like that for different kinds of images and heuristically choose one of them, to get fast encode (even slightly faster than s3 in some cases) that is still as good as anything else.
2021-04-30 07:03:08
At the slower speed end there is room for improvements in the tree learning algorithm and more exhaustive / less greedy search could lead to better densities than what we get now.
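In cjxl terms that fast path is something like the following (flag spelling varies across versions; -s is the speed setting discussed above):
```
cjxl input.png output.jxl -d 0 -s 3   # -d 0 = lossless modular, -s 3 = the fast fixed-tree speed
```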
Petr
_wb_ At the slower speed end there is room for improvements in the tree learning algorithm and more exhaustive / less greedy search could lead to better densities than what we get now.
2021-04-30 07:12:56
This explains why cjxl currently doesn't beat the size of optimized 2bpp PNG. Looking forward to the bright future. 🙂
_wb_ Sequential I usually call it
2021-04-30 07:28:54
I remember a funny naming of PNG's non-interlaced: "straight-laced". Which normally means something else: https://en.wiktionary.org/wiki/straight-laced 🙂
Pieter
2021-04-30 07:29:47
The antonym of inter- should be extra- or intra- I'd say :)
_wb_
2021-04-30 07:31:13
extralaced, hm
2021-04-30 07:32:04
interlacing is a confusing word
2021-04-30 07:32:13
for still images interlaced is a kind of progressive
2021-04-30 07:32:25
while for video, interlaced is the exact opposite of progressive
2021-04-30 07:32:32
progressive is a confusing word too
2021-04-30 07:41:42
maybe we should start calling sequential/incremental/top-down/non-progressive/non-interlaced/scanline-order images **conservative**? 😜
Petr
2021-04-30 07:44:01
This kind of confusion is what happens with natural languages all the time. Let's switch e.g. to Loglan: https://en.wikipedia.org/wiki/Loglan 😄
_wb_ maybe we should start calling sequential/incremental/top-down/non-progressive/non-interlaced/scanline-order images **conservative**? 😜
2021-04-30 07:44:50
Let's not be so conservative. 😉
_wb_
2021-04-30 07:46:09
in JPEG they usually call progressive "Scalable", which is also quite a confusing word
2021-04-30 07:49:07
there's scalable vector graphics, there's parallelization which is also a kind of 'scalable' (with the number of available cores), there's scalability in the sense of "can be used at large scale", i.e. the computational complexity allows deploying it for use cases like facebook, twitter, google photos, etc.
2021-04-30 07:49:51
but then there's also 'resolution-scalable' and 'quality-scalable' which basically mean progressive
Scope
2021-04-30 01:45:05
https://github.com/fraunhoferhhi/vvenc/issues/59
2021-04-30 01:45:11
_wb_
2021-04-30 02:17:44
it's amazing how obligatory chroma subsampling is still a thing
2021-04-30 02:18:05
even in next-gen projects
2021-04-30 02:20:15
eliminating non-visible chroma details is fine, but just removing 75% of the chroma data as a preprocessing step before even starting compression is quite a blunt way to do it
Deleted User
2021-04-30 02:43:47
You can do it in the DCT domain anyway by simply dumb-discarding the 3/4 highest-frequency coefficients
2021-04-30 02:45:24
But you've been mentioning it enough times so I'll not repeat that in order not to go crazy...
BlueSwordM
_wb_ even in next-gen projects
2021-04-30 02:45:45
That is actually quite funny in that regard.
2021-04-30 02:46:09
Most projects and devs don't seem to care much about color information for some reason <:Thonk:805904896879493180>
_wb_
2021-04-30 02:46:35
still too many Y-only metrics are being used to measure success
Deleted User
But you've been mentioning it enough times so I'll not repeat that in order not to go crazy...
2021-04-30 02:47:01
Oh, found it! https://discord.com/channels/794206087879852103/794206170445119489/807523533705052170
spider-mario
_wb_ it's amazing how obligatory chroma subsampling is still a thing
2021-04-30 02:47:12
iirc, HandBrake’s pipeline is still 8-bit 4:2:0 only
BlueSwordM
spider-mario iirc, HandBrake’s pipeline is still 8-bit 4:2:0 only
2021-04-30 02:47:48
In February of this year, 10-bit was added for software encoders which is nice at least.
spider-mario
2021-04-30 02:47:58
oh! great news, thanks
2021-04-30 02:48:23
not yet in an official release, right?
BlueSwordM
2021-04-30 02:48:47
I'm not sure, as that depends on when stable releases are done. Nightly does have it though.
2021-04-30 02:49:38
In regards to color, one of the things I want to get in the AV2 spec(since it is currently being drafted) is native XYB color space support in the encoder pipeline.
Deleted User
2021-04-30 02:54:21
And not being *so goddamn slow*
2021-04-30 02:54:30
Moore's law is slowing down
2021-04-30 02:55:02
And IMHO I think that basing AV1 on VP10 instead of Daala was wrong
2021-04-30 02:55:24
Daala was quite good
BlueSwordM
BlueSwordM In regards to color, one of the things I want to get in the AV2 spec(since it is currently being drafted) is native XYB color space support in the encoder pipeline.
2021-04-30 02:56:44
Of course, that does require I write a document discussing the implementation as well as provide some proof, so when I get some purely free time, it should be doable.
spider-mario
2021-04-30 03:28:40
wasn’t Daala primarily for very low bitrates?
Deleted User
2021-04-30 03:49:44
It did pretty well at 16 KiB <:Thonk:805904896879493180> But the best thing is that despite HIGHEST complexity setting... ```real 0m1.168s user 0m1.152s sys 0m0.001s``` Good luck with AVIF on `-cpu-used 0` <:kekw:808717074305122316>
2021-04-30 03:50:09
JPEG XL at a similar filesize for comparison
2021-04-30 03:51:03
Guys at Xiph definitely knew how to optimize things...
fab
2021-04-30 05:06:05
can you add XPSNR
2021-04-30 05:06:07
in avif
2021-04-30 05:08:11
what is avif progressive
2021-04-30 05:08:17
avif doesn't support progressive
improver
2021-04-30 05:08:59
im trying to figure it out too
fab
2021-04-30 05:21:59
quantizer scaling seems more efficient
2021-04-30 05:22:10
i can see details if i focus more
2021-04-30 05:22:14
on the third image
improver
2021-04-30 05:36:06
oh. alright
Master Of Zen
2021-04-30 05:36:21
<@!260412131034267649> thank you)
fab
2021-04-30 05:36:47
add XPSNR
sklwmp
2021-04-30 07:23:46
idrk where to put this but does anyone know how to use `butteraugli`'s command line tool? by that i mean, which of `image1` and `image2` is the reference image? I've run the tool two times, just reversing the order of the images and got different results: ``` ❯ butteraugli prof-reconv.jpg prof.png 0.386337 ❯ butteraugli prof.png prof-reconv.jpg 0.390707 ``` The `reconv` file is the one converted via JPEG quality 100 from the png file. Which distance is more accurate? Additionally, is the `butteraugli` project even being updated anymore? The GitHub repo had its last commit a year ago.
BlueSwordM
sklwmp idrk where to put this but does anyone know how to use `butteraugli`'s command line tool? …
2021-04-30 07:26:37
When compiling JXL, you can use ` -DJPEGXL_ENABLE_PLUGINS=ON -DJPEGXL_ENABLE_GIMP_SAVING=1 -DJPEGXL_ENABLE_DEVTOOLS=ON` to activate a bunch of options which activate the plugins as well as some of the tools like butteraugli_main(JXL one) which is better and more up to date.
spider-mario
sklwmp idrk where to put this but does anyone know how to use `butteraugli`'s command line tool? …
2021-04-30 07:31:16
the command-line tool expects the reference image first
fab
2021-04-30 07:31:41
yes but ewout doesn't compile butteraugli
spider-mario
2021-04-30 07:31:42
and yes, as BlueSwordM points out, the most up-to-date Butteraugli is the one in JPEG XL
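i.e., with the tool from the JPEG XL tree (file names here are just placeholders):
```
butteraugli_main reference.png distorted.jpg   # reference (original) first, distorted second
```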
sklwmp
2021-04-30 07:32:06
Great, thank you for the info! Appreciate it.
fab
2021-04-30 07:32:31
butteraugli 2021-03-16 13:24 butteraugli_main.exe 3.91 MB
Scientia
2021-04-30 07:44:17
wait vvc reference encoder is incapable of lossless OR anything other than 420 chroma subsampling?
Scope
2021-04-30 07:46:13
> The Fraunhofer Versatile Video Encoder (VVenC) is a fast and efficient "real-world" VVC encoder implementation
Scientia
2021-04-30 07:46:43
can it handle more than 8 bit either? <:kekw:808717074305122316>
Scope
2021-04-30 07:48:32
Currently, no
Scientia
2021-04-30 07:49:00
seems a bit rushed to me
2021-04-30 07:49:28
to push it out with just the most basic features for actually encoding any video
2021-04-30 07:49:45
i guess maybe those aren't their priorities
2021-04-30 07:50:18
those being 422 and 444 chroma subsampling, lossless, and 10 bit
BlueSwordM
Scientia can it handle more than 8 bit either? <:kekw:808717074305122316>
2021-04-30 08:04:13
Yes.
Scope
2021-04-30 08:26:45
It accepts 10-bit as input, but I don't know how to set the bit depth of the output format (maybe it will be the same as the input or always 8/10-bit 🤔 , cannot remember which encoder does what)
BlueSwordM
Scope It accepts 10-bit as input, but I don't know how to set the bit depth of the output format (maybe it will be the same as the input or always 8/10-bit 🤔 , cannot remember which encoder does what)
2021-04-30 09:08:40
You need to set: `--internal-bitdepth XX(8/10)`. The default is 10-bit.
cucumber
JPEG XL at a similar filesize for comparison
2021-04-30 09:09:17
pardon if this is some famous test image, but do you have the original image for comparison?
Deleted User
2021-04-30 09:10:04
It's the one from Xiph, the makers of Daala codec.
improver
2021-04-30 09:10:45
looks like it has a lot of blurred background and i think av1 excels at encoding that
Deleted User
2021-04-30 09:25:32
We'll talk about it when AV1 reaches Daala's speeds <:kekw:808717074305122316> Quick reminder: ```real 0m1.168s user 0m1.152s sys 0m0.001s```
BlueSwordM
It's the one from Xiph, the makers of Daala codec.
2021-04-30 09:27:15
Here's the absolute best I was able to do with avifenc using aomenc-av1.
Pieter
2021-04-30 09:27:19
AV1 is in part designed by Daala people.
Deleted User
Pieter AV1 is in part designed by Daala people.
2021-04-30 09:28:51
I know, but the main principles ("core") were based on VP10, the never-released internal upgrade of VP9. https://discord.com/channels/794206087879852103/805176455658733570/837703603958579261
2021-04-30 09:29:26
I feel like there was a lot of VP10, sprinkled a bit with some novelties from Daala and Thor.
Pieter
2021-04-30 09:29:39
I see. I'm not familiar with the details.
2021-04-30 09:29:49
I just used to play board games with some of them 🙂
BlueSwordM
BlueSwordM Here's the absolute best I was able to do with avifenc using aomenc-av1.
2021-04-30 09:29:52
So, ziemek, what do you think of my best effort attempt at the image? `avifenc -j 2 -d 10 --min 39 --max 44 -s 2 -a aq-mode=1 -a enable-chroma-deltaq=1 -a enable-dnl-denoising=0 -a denoise-noise-level=10 -a tune=ssim input.png output.avif`
Scope
2021-04-30 09:30:42
Yep, but some very costly things from Daala were rejected in AV1 (although they may have found a more effective replacement)
Deleted User
BlueSwordM So, ziemek, what do you think of my best effort attempt at the image? `avifenc -j 2 -d 10 --min 39 --max 44 -s 2 -a aq-mode=1 -a enable-chroma-deltaq=1 -a enable-dnl-denoising=0 -a denoise-noise-level=10 -a tune=ssim input.png output.avif`
2021-04-30 09:32:39
But you know that the target is 16 KiB (16,384 bytes)? 😜
BlueSwordM
But you know that the target is 16 KiB (16,384 bytes)? 😜
2021-04-30 09:32:45
Indeed.
2021-04-30 09:33:00
Not lying 😛
Deleted User
BlueSwordM Not lying 😛
2021-04-30 09:36:48
Damn, quite good...
2021-04-30 09:37:58
> looks like it has a lot of blur background and i think av1 excels at encoding that I wish AV1 wasn't blurring *everything* instead of just the background <:kekw:808717074305122316>
2021-04-30 09:40:14
That blur is soooo VP9-ey... The VPx codec line is quite blurry compared to x264 and unfortunately we can see that AV1 was based *mostly* on VP10.
BlueSwordM
That blur is soooo VP9-ey... The VPx codec line is quite blurry compared to x264 and unfortunately we can see that AV1 was based *mostly* on VP10.
2021-04-30 09:43:08
The encoder matters much more than the standard.
Deleted User
2021-04-30 09:50:28
It just so happens that publicly available VPx encoders are crappy 😒
spider-mario
2021-04-30 09:51:22
I wonder how much of x264 could be brought over to an avif encoder
2021-04-30 09:51:39
(disclaimer: I know very, extremely little about how either of them works)
BlueSwordM
It just so happens that publicly available VPx encoders are crappy 😒
2021-04-30 09:53:34
To be fair, libvpx-vp9 has improved a lot, especially recently.
2021-04-30 09:54:06
VP8 is still bad though. Only use it when your alternative choice is h.262(MPEG-2) <:kekw:808717074305122316>
Deleted User
2021-04-30 09:57:36
Does libvpx-vp9 have psy tuning like x264?
2021-04-30 09:57:59
And even if it has, why is EVE so much better? DAMN
Scope
2021-04-30 09:58:54
I think eve-av1/vp9 has many ideas from x264 (and not the blurry psycho-visual tuning), since Ronald Bultje also worked on x264
2021-04-30 10:00:20
But it's hard for the average person to get the encoder to compare it
Deleted User
2021-04-30 10:01:20
EVE-VP9
2021-04-30 10:01:23
https://images.squarespace-cdn.com/content/v1/582b36daf7e0abc54f0695c2/1568057791099-V4HFNRU3N3EMWF5TQAH4/ke17ZwdGBToddI8pDm48kNvT88LknE-K9M4pGNO0Iqd7gQa3H78H3Y0txjaiv_0fDoOvxcdMmMKkDsyUqMSsMWxHk725yiiHCCLfrh8O1z5QPOohDIaIeljMHgDF5CVlOqpeNLcJ80NK65_fV7S1UbeDbaZv1s3QfpIA4TYnL5Qao8BosUKjCVjCf8TKewJIH3bqxw7fF48mhrq5Ulr0Hg/image-asset.png?format=1500w
2021-04-30 10:01:33
libvpx-vp9
2021-04-30 10:01:38
https://images.squarespace-cdn.com/content/v1/582b36daf7e0abc54f0695c2/1568057821508-H60T56J0AXY3QOUT9Q3Q/ke17ZwdGBToddI8pDm48kNvT88LknE-K9M4pGNO0Iqd7gQa3H78H3Y0txjaiv_0fDoOvxcdMmMKkDsyUqMSsMWxHk725yiiHCCLfrh8O1z5QPOohDIaIeljMHgDF5CVlOqpeNLcJ80NK65_fV7S1UbeDbaZv1s3QfpIA4TYnL5Qao8BosUKjCVjCf8TKewJIH3bqxw7fF48mhrq5Ulr0Hg/image-asset.png?format=1500w
2021-04-30 10:01:56
It's so blurry that even x264 beats it lmao
2021-04-30 10:01:59
https://images.squarespace-cdn.com/content/v1/582b36daf7e0abc54f0695c2/1568058225591-VPKQGT5K20Z6U922N4VL/ke17ZwdGBToddI8pDm48kNvT88LknE-K9M4pGNO0Iqd7gQa3H78H3Y0txjaiv_0fDoOvxcdMmMKkDsyUqMSsMWxHk725yiiHCCLfrh8O1z5QPOohDIaIeljMHgDF5CVlOqpeNLcJ80NK65_fV7S1UbeDbaZv1s3QfpIA4TYnL5Qao8BosUKjCVjCf8TKewJIH3bqxw7fF48mhrq5Ulr0Hg/x264-282.png?format=1500w
BlueSwordM
EVE-VP9
2021-04-30 10:36:01
Man, if only it was easier to create still images with libvpx-vp9 :/
2021-04-30 10:36:09
With avifenc, it is so easy.
2021-04-30 10:38:10
To be fair, we don't know what settings they used/what version they used.
Scientia
Scope I think eve-av1/vp9 has many ideas from x264 (and not the blurry psycho-visual tuning), since Ronald Bultje also worked on x264
2021-05-01 02:21:30
Wait is EVE av1 actually superior to libaom?
2021-05-01 02:27:52
Well, is it more efficient and/or better looking at the same rate?
raysar
EVE-VP9
2021-05-01 02:40:35
i only see old 2016 info about eve vp9. I'm not sure it's better than libvpx 2021 :D
2021-05-01 02:41:54
i'm always searching for a big vmaf benchmark covering all the vp9, av1 and x264 encoders (8-bit and 10-bit) to see the actual level of the encoders.
Scientia
2021-05-01 02:43:37
If it wasn't better at least for lower bitrates then Netflix wouldn't be using it.
BlueSwordM
raysar i'm always searching for a big vmaf benchmark covering all the vp9, av1 and x264 encoders (8-bit and 10-bit) to see the actual level of the encoders.
2021-05-01 03:04:04
No worries, one is coming soon™️
fab
2021-05-01 05:38:50
<@!111445179587624960>
2021-05-01 05:39:10
wp2 palette at effort 2, how do i do it?
Deleted User
2021-05-01 05:40:03
You have to delete your posts from coverage first. Otherwise he won't help.
Scope
2021-05-01 05:40:42
I don't know the option to force palette, maybe it happens automatically at some point
fab
2021-05-01 05:41:28
lossy palette wp2
2021-05-01 05:41:33
does it exist
2021-05-01 05:41:37
effort 2
_wb_
2021-05-01 05:41:54
Lossy palette is generally left to tools like pngquant
fab
2021-05-01 05:42:19
how do i do it
2021-05-01 05:42:25
which quality should i set
2021-05-01 05:42:36
i have wp2 downloaded from jamaika builds
2021-05-01 05:42:39
doom9 builds
Scope
2021-05-01 05:45:38
Perhaps to some extent it is used for 96-99 quality
fab
2021-05-01 05:45:43
cwp2.exe -mt 8 -progress -q 100 -z 6 -lossless 1.png -o 1.wp2
2021-05-01 05:45:52
what -z should i set
2021-05-01 05:46:45
-q 94,6 -X 22.4 -Y 38 -I 0.68 -s 3 -m --palette --effort 2 -q 78,079 -s 5 -epf=1 --noise=0 --dots=1 --gaborish=0 --patches=0
2021-05-01 05:46:52
i want to do that to another image
_wb_
2021-05-01 05:46:57
I don't know about wp2, but in jxl we don't have a real lossy palette yet - there's lossless palette (if the image has few enough colors, the encoder will make a palette for it), and we do have something that uses delta palette in a lossy way with the default (empty) palette, but not something that does something like pngquant (find the most important colors and reduce the number of colors to make palette possible)