JPEG XL

other-codecs

Nova Aurora
2021-01-31 01:23:01
What codec is google planning on using?
2021-01-31 01:23:48
JPEG XL is partly a google project, but there's also webpv2 and they were fast to put AVIF in chrome
lonjil
2021-01-31 01:30:37
Google has no unified front.
_wb_
2021-01-31 07:16:30
I think they are betting on all horses in this race
2021-01-31 06:01:54
I do wonder what the plan is with wp2 though. It sounds a lot like a lightweight version of av1-intra plus a next version of lossless webp. Which would be quite great, if avif and jxl didn't already exist.
2021-01-31 06:03:09
In terms of added value compared to jxl and avif, the only thing I can think of is "it has webp in its name so maybe that will trick people into adopting it"
2021-01-31 06:05:53
It feels like WebP2 will be to AVIF what JPEG XR was to JPEG 2000.
Nova Aurora
2021-01-31 06:06:23
Kind of useless and obsolete on arrival?
_wb_
2021-01-31 06:07:20
In short, yes.
Jim
2021-01-31 06:11:30
In testing, putting a WebP2 next to an AVIF with the same/similar settings, it is usually difficult to tell a difference. Yet the creator told me on Twitter that it is not based on a video codec? If not, they definitely took a lot of the same techniques, since the output looks very much the same as AVIF but with an even slower encoder as of now. I do agree that if AVIF wasn't developed it would probably have taken its place, but given it does exist, I don't see much adoption on the developer side given it looks & smells like AVIF but with fewer features.
_wb_
2021-01-31 06:12:24
The main motivation for WDP/JXR was to make something that can do high bit depth and alpha like j2k, but computationally cheaper and simpler to implement, because j2k was widely seen as too slow and too complex
2021-01-31 06:14:53
If I understand skal's intentions correctly, he wants WebP2 to be a simplified avif, more suitable for software decode, without losing too much compression efficiency.
2021-01-31 06:16:16
That was exactly how jxr was positioning itself: somewhere in between jpeg and jpeg 2000 in both compression results and complexity/speed.
Jim
2021-01-31 06:18:39
Even in the graphics next webinar, the AVIF creators were seeking to drop a number of less-likely-to-be-used & redundant features. I am just not sure how in the long run it will be different enough from AVIF that people would opt for WebP2 over AVIF.
_wb_
2021-01-31 06:20:36
Defining a low complexity profile in avif seems like a much better approach than making a fresh codec
Jim
2021-01-31 06:20:58
What I see happening is AVIF being used for video thumbnails (like on YouTube, Vimeo, etc), JPEG XL being used for the vast majority of all other images, and JPG used for lower-quality (at least until the lower quality portion of XL is made better).
_wb_
2021-01-31 06:24:42
JPEG as fallback will be there for a while, but jxl is already a lot better than jpeg for low-quality
Jim
2021-01-31 06:39:40
I haven't actually tested it since before 0.1, but when I did, I noticed that between quality 40-55 there seemed to be a switch where JPG would produce a smaller file size and look about the same, if not slightly better, in most cases than the JXL. Maybe something has changed? I haven't tested that in a while.
_wb_
2021-01-31 06:58:37
Yes, things have changed, low bitrate is looking a bit better every new version now
Nova Aurora
2021-02-01 06:46:01
What's happening with MPEG?
_wb_
2021-02-01 06:46:55
You mean after Leonardo ragequit?
Nova Aurora
2021-02-01 06:47:09
yeah
_wb_
2021-02-01 06:49:12
I don't go to MPEG meetings, only JPEG ones, so I don't really know
2021-02-01 06:51:13
I got some LinkedIn spam from Leonardo at some point, where he seemed to make the point that patents are a Good Thing, or something
Nova Aurora
2021-02-01 06:52:34
His resignation creates a lot of confusion about MPEG
_wb_
2021-02-01 07:04:45
I guess in Leonardo's world, MPEG == Leonardo
2021-02-01 07:31:37
Anyway
2021-02-01 07:32:05
Looks like Apple broke something in WebP
2021-02-01 07:32:07
https://ericportis.com/etc/broken-webps/
Nova Aurora
2021-02-01 07:32:54
Yeah, well they don't get any of the benefit from chromium's or firefox's implementation experience, right?
Master Of Zen
2021-02-01 07:33:42
Hey that's my meme)
_wb_
2021-02-01 07:33:50
I have no clue how they messed it up
2021-02-01 07:34:26
It's very random
2021-02-01 07:34:33
Many WebP images work
2021-02-01 07:34:38
Some break
2021-02-01 07:35:01
No way to predict what will work and what will break
2021-02-01 07:35:16
VP8-only webps can break
2021-02-01 07:35:23
Lossless-only can break
2021-02-01 07:35:34
VP8+alpha can break
2021-02-01 07:36:22
And obviously no way to check the code to try to debug it
Nova Aurora
2021-02-01 07:36:55
Is the bug in webkit? cause then it could be debugged in the open source
_wb_
2021-02-01 07:37:08
Safari/webkit team blames the underlying system
2021-02-01 07:37:18
Says they cannot fix
2021-02-01 07:38:45
I have no way to contact the CoreImage team
2021-02-01 07:38:53
Afaik
Nova Aurora
2021-02-01 07:38:54
I used the webkit-based gnome web and got pretty much the same thing as firefox
2021-02-01 07:40:13
I don't have a mac to test any further
fab
2021-02-02 06:49:32
https://chromium.googlesource.com/codecs/libwebp2/+/cf5abfa3c304cf9f18254ca374c5d55693394ae3
Nova Aurora
2021-02-02 06:51:58
This just raises more questions; it uses a different file extension, and its goals seem to be catching up with avif...
Crixis
2021-02-02 06:53:06
It's in early development, same avif problems (details)
2021-02-02 06:53:33
You can also test it on squoosh
Nova Aurora
2021-02-02 06:55:06
Yeah, I know it's still early days for webpv2, I'm just questioning the effort when your goals are to get something that is equivalent to something that's already in production and free
fab
2021-02-02 06:56:29
the squoosh version doesn't have the half-block distortion
2021-02-02 06:56:39
new experiment introduced 24th january
Crixis
2021-02-02 06:56:56
Simply, avif is a bad general-purpose image codec and a lot of people are realizing it
fab new experiment introduced 24th january
2021-02-02 06:58:08
Is it a strong improvement?
fab
2021-02-02 06:58:12
no
2021-02-02 06:58:48
just better quality at q61 q74
2021-02-02 06:58:55
maybe
2021-02-02 06:59:09
but anyway is the same
2021-02-02 06:59:12
is the same
Crixis
2021-02-02 07:00:41
It is so slow at encoding and so focused on low bpp, I have low expectations for it
Nova Aurora
2021-02-02 07:00:58
Is it slower than AVIF?
2021-02-02 07:01:05
That would be an achievement
fab
2021-02-02 07:02:39
even effort 5 doesn't achieve anything
Crixis
2021-02-02 07:02:50
On decoding I think yes
2021-02-02 07:05:34
And jxl has a lot more vardct dimensions
_wb_
2021-02-02 07:10:19
"compresses 2x faster than AVIF, but takes 3x more time to decompress. The goal is to reach decompression speed parity."
2021-02-02 07:11:42
"more efficient lossy compression (~30% better than WebP, as close to AVIF as possible)"
2021-02-02 07:12:24
So the goal is to be as good as avif in density, with the same decode speed
2021-02-02 07:13:04
🤔
Nova Aurora
2021-02-02 07:13:21
but not be avif
_wb_
2021-02-02 07:13:32
Isn't there a rather simple way to achieve that goal?
2021-02-02 07:13:51
Call it navif
2021-02-02 07:14:07
Ditch the container overhead of heif
2021-02-02 07:14:12
Done
Nova Aurora
2021-02-02 07:17:04
is heif catching on anywhere outside apple? or has avif killed it?
_wb_
2021-02-02 07:17:49
Avif hasn't killed heif
2021-02-02 07:17:56
Avif IS heif
Nova Aurora
2021-02-02 07:18:05
heic
2021-02-02 07:18:16
the one based on hevc rather than av1
_wb_
2021-02-02 07:18:40
We do have some customers using heic for iOS apps and stuff
2021-02-02 07:19:44
But it will never be a web standard, with the hevc patent mess
2021-02-02 07:21:36
Unless we can wait until 2036, when the patents should have expired
Nova Aurora
2021-02-02 07:22:39
but by that point VVC will have been a distant memory
Crixis
2021-02-02 07:24:06
Does heic have some supercool patented tools, or is it always dct?
190n
2021-02-02 07:24:54
heic is just an i-frame from HEVC which is patented
Crixis
2021-02-02 07:28:22
What is the strongest idea in hevc?
_wb_
2021-02-02 07:36:18
I don't really know tbh
2021-02-02 07:37:46
"The majority of active patent contributions towards the development of the HEVC format came from five organizations: Samsung Electronics (4,249 patents), General Electric (1,127 patents),[10] M&K Holdings[11] (907 patents), NTT (878 patents), and JVC Kenwood (628 patents).[12] Other patent holders include Fujitsu, Apple, Canon, Columbia University, KAIST, Kwangwoon University, MIT, Sungkyunkwan University, Funai, Hikvision, KBS, KT and NEC"
2021-02-02 07:40:03
https://www.mpegla.com/wp-content/uploads/hevc-att1.pdf
2021-02-02 07:40:45
That's a list of all the hevc patents just in the MPEG-LA pool alone
2021-02-02 07:41:07
That's just the patent numbers, in 3 columns
2021-02-02 07:41:29
That document is almost as large as the jxl spec, lol
2021-02-02 07:42:08
I honestly don't know how there can be thousands of patentable things in a single codec
2021-02-02 07:43:29
I think the development of hevc was 10% writing code and inventing coding tools and 90% inventing and writing patents
Nova Aurora
2021-02-02 07:43:55
MPEG is a special place
_wb_
2021-02-02 07:44:29
Someone once described it to me as 1/3 codec engineers, 2/3 patent trolls
Nova Aurora
2021-02-02 07:44:32
case in point: their website is still controlled by someone who split from them 9ish months ago
_wb_
2021-02-02 07:45:31
I stay away from mpeg as far as I can
Nova Aurora
2021-02-02 07:46:21
MPEG LA /=/ MPEG
_wb_
2021-02-02 07:46:22
But some people in jpeg also go to mpeg, and both are part of SC 29 of ISO/IEC JTC1
Nova Aurora
2021-02-02 07:47:09
MPEG LA is a vampire/parasite that finds all the patents that apply to the codec the engineers were forced to add
2021-02-02 07:47:40
Then extorts people trying to comply with the international standard
2021-02-02 07:48:04
but they don't even do that properly
_wb_
2021-02-02 07:48:32
The whole ISO process is kind of broken imo
Nova Aurora
2021-02-02 07:48:48
No clear pool of people you have to pay
_wb_
2021-02-02 07:49:14
Yes a complete mess
2021-02-02 07:49:45
The thing with the ISO process is that working groups are not allowed to talk about IP
2021-02-02 07:50:21
You're supposed to judge contributions by technical merit only, without considering patents
Nova Aurora
2021-02-02 07:50:31
cool
_wb_
2021-02-02 07:50:50
Well it makes sense in a way
Nova Aurora
2021-02-02 07:50:57
I'd like to be able to legally use it over being 0.1% more efficient
_wb_
2021-02-02 07:51:22
But it's mainly very stupid
Crixis
_wb_ You're supposed to judge contributions by technical merit only, without considering patents
2021-02-02 07:51:30
Smart thinking, with other people's money
Nova Aurora
_wb_ You're supposed to judge contributions by technical merit only, without considering patents
2021-02-03 03:54:45
It makes sense to get the best standard possible, but it also forces vendor lock-in
_wb_
2021-02-03 06:17:47
ISO still operates under the assumption that a standard will have a lifespan of > 100 years, so the first 20 years being patent-encumbered is not a big deal in the overall picture.
2021-02-03 06:20:34
But when standards become obsolete after ~15 years, as is the case with video codecs...
Pieter
2021-02-03 07:02:07
I was going to say 15 years sounds long even, but then I looked and learned that H.264 was standardized in 2003, and is still very much alive.
_wb_
2021-02-03 07:03:59
JPEG and h264 are exceptionally long-living examples
2021-02-03 07:04:26
VP8 was made and became obsolete during the lifetime of h264
Nova Aurora
2021-02-03 07:04:30
The codecs that were meant to replace it died
2021-02-03 07:04:59
https://en.wikipedia.org/wiki/Internet_Video_Coding
2021-02-03 07:06:46
https://en.wikipedia.org/wiki/VC-1
2021-02-03 07:09:12
VP8 was slower to encode for at best equivalent quality, and didn't get much hardware acceleration
2021-02-03 07:10:41
In the early 2010s, google was also looking to have a rapid development model on their video codecs, getting a new one every 18 months or something insane like that
2021-02-03 07:12:41
I think that ended when aomedia got so frustrated with MPEG they formed a new standards organization
lithium
2021-02-03 07:17:25
I think VVC will fail... https://www.streamingmedia.com/Articles/ReadArticle.aspx?ArticleID=144949
Nova Aurora
2021-02-03 07:29:29
Just when you thought HEVC was a mess, they go and make a Michael Bay-style sequel
Master Of Zen
2021-02-03 12:23:04
Long live AV1
_wb_
2021-02-03 08:54:54
Did anyone make a lossless converter between 🅱️🅿️🇬 (BPG) and HEIC? The payload is <:H265_HEVC:805856045347242016> for both so that should just be a container thing, right?
2021-02-03 08:56:20
(at least if you don't use fancy heif features, like Apple does with its independent hevc bitstreams in 512x512 tiles)
Deleted User
2021-02-03 11:54:07
By the way, which is better quality-wise, efficiency-wise and complexity-wise, H.264 or VC-1?
BlueSwordM
By the way, which is better quality-wise, efficiency-wise and complexity-wise, H.264 or VC-1?
2021-02-04 12:12:50
h.264
Deleted User
2021-02-04 12:16:49
How much is VC-1 worse? And are there *any* of those 3 mentioned aspects that VC-1 may be better?
BlueSwordM
How much is VC-1 worse? And are there *any* of those 3 mentioned aspects that VC-1 may be better?
2021-02-04 12:29:51
Slower vs, say, the x264 encoder, less efficient, and less supported overall, especially in terms of hardware decoding.
Deleted User
2021-02-04 12:31:00
VC-1 is less efficient AND slower? No wonder it was obsoleted
190n
2021-02-04 12:32:12
well since VC-1's less efficient i imagine it hasn't seen as much optimization effort as H.264
Deleted User
2021-02-04 02:27:49
Especially when some weebs realized that the encoders are crappy and decided to tune the fuck out of them just to make their encodes of *Perfect Cherry Blossom* gameplays good-looking
2021-02-04 02:28:08
And that's how (roughly) x264 was born IIRC
Nova Aurora
2021-02-04 05:41:36
You can also use h.264 in HEIF as AVCI for no particular reason
Orum
2021-02-04 07:46:34
VC-1 has some things that are better than H.264
2021-02-04 07:46:59
but I don't think any of the encoders got the love or attention that H.264 encoders got
raysar
2021-02-04 08:32:33
x264 tune veryslow is amazingly powerful 😄 and "more amazing" in 10bits 😄 Better than x265 at very high bitrates for many people.
Fox Wizard
2021-02-04 08:33:23
<:H264_AVC:805854162079842314>
Crixis
2021-02-04 09:20:18
mozX264
_wb_
2021-02-04 09:22:39
X264tzli
Nova Aurora
2021-02-05 06:43:23
X264 XL, bring it full circle
lonjil
2021-02-05 06:47:59
make a new video codec that uses jxl for intracoding <a:tapshead:482628289941340200>
Nova Aurora
2021-02-05 06:50:27
well, there's already MJPEG 2000, but wouldn't that be too agonizingly slow and inefficient for anything but digital cinema?
_wb_
2021-02-05 07:10:34
I guess jxl could be used to replace j2k there too, but maybe we should let them have their niches
2021-02-05 07:11:27
J2K is still quite active in JPEG, with their HTJ2K now and all
Deleted User
2021-02-05 01:12:30
I was about to mention those things
2021-02-05 01:12:59
JXL can have better quality than J2K in DCP
2021-02-05 01:13:28
If they're planning to create the next version of the DCP standard, they definitely should include JXL
2021-02-05 01:14:17
Better compression efficiency- and quality-wise
2021-02-05 01:14:31
Grain synthesis (similar to AV1's)
2021-02-05 01:17:15
And I'd also be curious to see a video codec with JXL's intra and e.g. AV1's inter
_wb_
2021-02-05 01:33:46
not sure if grain synthesis in jxl is useful for multi-frame, it might look weird
2021-02-05 01:34:25
but for digital cinema you basically don't want to use any of the video codec trickery with grain synthesis and inter prediction and all that
2021-02-05 01:34:48
because you can afford much higher bitrates
2021-02-05 01:34:57
and the screen is huge
2021-02-05 01:35:06
so you really don't want artifacts
2021-02-05 01:36:34
motion estimation is good for video on the web, but for action scenes at the very high fidelity you want in a cinema, I don't think it helps much at all
fab
2021-02-05 03:33:33
https://www.reddit.com/r/AV1/comments/l8k2ed/is_youtube_phasing_out_vp9_in_favor_of_av1/
2021-02-05 03:33:48
Steven Robertson, YouTube engineer
2021-02-05 03:33:54
read his answer
Scope
2021-02-05 09:52:16
https://twitter.com/boutell/status/1357707533612441604
2021-02-05 09:53:45
https://twitter.com/boutell/status/1357707536770678790
_wb_
2021-02-06 07:48:00
444 h264 still doesn't play in many environments
2021-02-06 07:48:37
Probably never will, at this point
2021-02-06 07:51:05
That's one of the reasons why still image codecs based on video codecs can have a problem with 444
Fox Wizard
2021-02-06 07:51:43
It's a 4:2:0 video though, but I guess it might not have worked because of the 4:4:4 profile
_wb_
2021-02-06 07:52:05
Video codecs seem to assume that end-users only ever need 420, and 422/444 are only for professionals
2021-02-06 07:52:35
So you get WebP that only does 420
2021-02-06 07:53:10
HEIC that on MacOS doesn't do 444
lonjil
2021-02-06 07:53:17
wow
2021-02-06 07:53:44
so technically speaking HEIC is incapable of producing images of the same quality as jpeg... 😬
2021-02-06 07:53:55
at least on macos
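The 4:2:0-versus-4:4:4 point can be made concrete with a few lines of code. This is a toy sketch of my own (not from the chat, and not any codec's actual implementation): subsample the chroma of a hard red-to-blue edge and measure the damage, while pixels away from the edge survive the round trip almost exactly.

```python
# Toy illustration of why 4:2:0-style chroma subsampling is lossy
# even before any transform coding happens.

def rgb_to_ycbcr(r, g, b):
    # BT.601 full-range conversion, as used by classic JPEG.
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402    * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772    * (cb - 128)
    return r, g, b

# A sharp red-to-blue edge: a worst case for chroma subsampling.
row = [(255, 0, 0)] * 3 + [(0, 0, 255)] * 5

ycc = [rgb_to_ycbcr(*px) for px in row]

# "Subsample": average each horizontal pair of chroma samples, then
# duplicate them back, which is roughly what a 4:2:x decoder sees.
# Luma is kept at full resolution throughout.
sub = []
for i in range(0, len(ycc), 2):
    (y0, cb0, cr0), (y1, cb1, cr1) = ycc[i], ycc[i + 1]
    cb, cr = (cb0 + cb1) / 2, (cr0 + cr1) / 2
    sub += [(y0, cb, cr), (y1, cb, cr)]

decoded = [ycbcr_to_rgb(*px) for px in sub]

max_err = max(abs(a - b)
              for orig, dec in zip(row, decoded)
              for a, b in zip(orig, dec))
print(f"peak channel error after the 4:2:x round trip: {max_err:.0f}")
```

The pair of pixels straddling the edge gets averaged chroma, producing a per-channel error of well over half the full 0-255 range, which no amount of bitrate can buy back in a 4:2:0-only codec.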
_wb_
2021-02-06 07:54:13
Yes it is quite crappy actually
Pieter
2021-02-06 07:54:34
Just double the resolution :D
_wb_
2021-02-06 07:55:04
They do independent 512x512 tiles, causing tile boundary artifacts because there is no cross-tile deblocking or anything
lonjil
2021-02-06 07:55:36
That's one reason I want jxl to succeed, it is better *on all fronts* than legacy codecs, not just selectively better for some use cases. sure, those are maybe the most common usecases, but there is no inherent reason to compromise in modern codecs.
_wb_
2021-02-06 07:56:42
If you do "export as lossless HEIC" in Preview, you get something that has a Butteraugli distance of 2-3 and peak pixel errors of 30%
2021-02-06 07:57:11
Apple has a weird definition of 'lossless' it seems
lonjil
2021-02-06 07:57:40
oof
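For anyone wanting to reproduce that kind of check, a minimal hypothetical sketch (the helper name and sample data are mine, not from any real tool): a "peak pixel error of 30%" just means the worst per-channel difference is about 76 out of 255.

```python
# Sanity-check a "lossless" export by measuring the worst-case
# per-channel difference between original and decoded samples.
# Inputs here are flat lists of 8-bit channel values.

def peak_error_pct(original, decoded):
    """Largest per-channel absolute difference, as a % of 255."""
    worst = max(abs(a - b) for a, b in zip(original, decoded))
    return 100.0 * worst / 255.0

original = [0, 64, 128, 200, 255]
decoded  = [0, 64, 131, 124, 255]   # one badly damaged sample

# A truly lossless round trip would report exactly 0.0 here.
print(f"peak pixel error: {peak_error_pct(original, decoded):.1f}%")
```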
_wb_
2021-02-06 02:49:28
Nostalgia: this was 5 years ago: https://encode.su/threads/2321-FLIF-Free-Lossless-Image-Format?p=46705&viewfull=1#post46705
2021-02-06 02:50:32
<@532010383041363969> asked "Do you believe that some interesting algorithmic middle ground exists between FLIF and WebP lossless?"
2021-02-06 02:52:48
I think for lossless, we found not just a middle ground, but something way better than any of the three original ingredients (FLIF/FUIF, lossless WebP, lossless Pik)
Nova Aurora
2021-02-06 10:30:42
Yeah, the reason I didn't use flif was slow decodes
_wb_ Apple has a weird definition of 'lossless' it seems
2021-02-06 10:34:15
https://tenor.com/view/the-princess-bride-inigo-montoya-meaning-are-you-sure-of-that-gif-7536490
Master Of Zen
_wb_ Apple has a weird definition of 'lossless' it seems
2021-02-06 10:39:28
Loss less, not no loss
Nova Aurora
2021-02-06 10:41:39
So it only allows what every other codec would call 'slight loss'?
Diamondragon
Master Of Zen Loss less, not no loss
2021-02-07 01:43:14
Less in that context means minus the loss, or free of loss. Not the way people normally use that word, admittedly.
Scope
2021-02-07 06:19:46
https://twitter.com/jyzg/status/1358018700977328129
2021-02-07 06:19:49
https://twitter.com/jyzg/status/1358023796247175169
2021-02-07 06:20:29
Btw, the compression speed of LEA in single-threaded mode is about the same as `JXL -s 5`, and EMMA is faster than `JXL -s 9`, but the main problem is the decompression speed; it is sometimes even slightly slower than compression (therefore they cannot be used, for example, as a Web standard or as a format for viewing photos/images, but they may be suitable for storage). Also, EMMA has more efficient compression than BMF on everything except Pixel Art content
2021-02-07 06:22:51
They are also based on this Context Mixing Compressor <https://encode.su/threads/2459-EMMA-Context-Mixing-Compressor>
_wb_
2021-02-07 06:43:30
Encode speed can always be improved later. Decode speed typically not really.
Nova Aurora
2021-02-08 12:12:14
What's EMMA's concept?
2021-02-08 01:48:02
How does it improve upon current techniques?
2021-02-08 06:12:01
Is it just an evolution of PAQ?
VEG
_wb_ Encode speed can always be improved later. Decode speed typically not really.
2021-02-08 09:58:55
libjpeg-turbo significantly improves decode speed in comparison to usual libjpeg (using assembly and SIMD instructions, e.g. https://github.com/libjpeg-turbo/libjpeg-turbo/tree/master/simd/i386)
_wb_
2021-02-08 10:44:03
Yes. Modern codecs typically have already done most of the "-turbo" thing in their decoder implementation. Original libjpeg didn't because SIMD wasn't really much of a thing yet when it was written...
Nova Aurora
2021-02-08 04:51:55
I found this helpful to know what happened to MPEG: https://www.streamingmedia.com/Articles/ReadArticle.aspx?ArticleID=141678
Jyrki Alakuijala
2021-02-09 01:37:11
I'm making progress on low bpp vardct jpeg xl 😄
2021-02-09 01:39:25
work-in-progress
2021-02-09 01:39:30
baseline
2021-02-09 01:39:50
0.1 bpp
2021-02-09 01:40:37
it is playing partial catch-up with avif and webp2 which are very good with this particular image
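For scale, a quick back-of-the-envelope of my own (not from the chat) showing what 0.1 bpp means in file size:

```python
# "bpp" = bits per pixel; total bytes = pixels * bpp / 8.
def bytes_at_bpp(width, height, bpp):
    return width * height * bpp / 8

# At 0.1 bpp, even a 4K frame fits in roughly 100 KiB.
for w, h in [(1920, 1080), (3840, 2160)]:
    kib = bytes_at_bpp(w, h, 0.1) / 1024
    print(f"{w}x{h} at 0.1 bpp -> about {kib:.0f} KiB")
```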
Nova Aurora I found this helpful to know what happened to MPEG: https://www.streamingmedia.com/Articles/ReadArticle.aspx?ArticleID=141678
2021-02-09 01:43:48
You can read similar 'JPEG XL is closed' stories in 30 years, when Jon and I start quarrelling. When we start publishing dirt on each other, don't believe a word from either side 😄
Deleted User
2021-02-09 01:44:45
Wait, the first one is optimized and the second one is before optimizations?
Jyrki Alakuijala
2021-02-09 01:44:53
I'm still working on reducing the ringing -- it will need some more advances in masking
2021-02-09 01:44:56
correct
BlueSwordM
2021-02-09 01:45:27
How does forced modular mode stack up? IIRC, it does better at extremely low bpp (0.03-0.1 bpp).
Jyrki Alakuijala
2021-02-09 01:46:04
I'm not testing it in my work, just trying to improve vardct -- to reach lower bpp (and to improve in a wide range)
Deleted User
2021-02-09 01:46:18
I can finally see the stairs and the narrower texture on the back of the church that got squashed before
2021-02-09 01:46:59
And less color banding
Jyrki Alakuijala
2021-02-09 01:47:12
this is distance 23 :-----D
Deleted User
2021-02-09 01:47:27
How low for `-q`?
Jyrki Alakuijala
2021-02-09 01:47:43
No idea. I don't use -q
Deleted User
2021-02-09 01:47:49
But still, HOLY SHIT
Jyrki Alakuijala
2021-02-09 01:49:11
the optimizations are still running and I have a few more ideas for algorithmic improvements
Deleted User
2021-02-09 01:49:23
I see more ringing near the upper spikes of the church, but for me that's a good deal for optimizing the rest of the image
Jyrki Alakuijala
2021-02-09 01:49:25
and looks like I can delete more code than add for this improvement
2021-02-09 01:49:37
yes, I'll take that ringing away, I promise
2021-02-09 01:49:58
it is already mostly gone for distances lower than 23
Deleted User
2021-02-09 01:50:05
Still better than without optimizations 😃
Jyrki Alakuijala
2021-02-09 01:50:08
(I'm testing 1, 3, 5, 7, 11 and 23 in this patch)
2021-02-09 01:50:36
this photo is the main weakness that I know of -- vs. AVIF and WebP2
2021-02-09 01:51:00
on most photos JPEG XL is far better already
2021-02-09 01:51:50
I believe I can make it within 20% in density, i.e., no longer embarrassing
2021-02-09 01:52:00
with some luck better
Deleted User
2021-02-09 01:52:01
Let's move this discussion to <#794206170445119489>, okay? I seriously don't know why it appeared in <#805176455658733570> 😜
Jyrki Alakuijala
2021-02-09 01:52:35
I'm doing this mostly to make jxl competitive with other codecs
Deleted User
2021-02-09 01:52:49
Ah, ok
Jyrki Alakuijala
2021-02-09 01:52:54
but I don't mind
2021-02-09 01:53:01
I'm just learning to use this chat program
2021-02-09 01:53:09
I used IRC the first day it was on
Deleted User
Jyrki Alakuijala I used IRC the first day it was on
2021-02-09 01:53:26
A man of culture
Jyrki Alakuijala
2021-02-09 01:53:30
but haven't used these for 25 years or so
2021-02-09 01:53:39
my guess is that I was the fifth user on IRC
2021-02-09 01:53:49
also, I wrote the first bot 😄
2021-02-09 01:54:46
I didn't realize that chat bots would become a business one day
Deleted User
2021-02-09 01:55:09
Ahead of time, I guess
2021-02-09 01:55:12
Me too
Jyrki Alakuijala
2021-02-09 01:55:32
I was just a young fearless hacker with more energy than competence 😄
Deleted User
2021-02-09 01:56:11
I was a social outsider in school, and getting notes and homework was kinda problematic for me bc it required social interaction, so I wrote a quick e-Notebook website in pure HTML
2021-02-09 01:57:28
With a pre-made ODT template for OpenOffice (yeah, I didn't have MS Office back then), I manually retyped every day's notes into the computer, exported them to PDF and uploaded them onto the site
2021-02-09 01:57:35
It was just a simple directory
2021-02-09 01:58:03
Yeah... it sank bc of lack of any interest from others
2021-02-09 01:58:42
That was in 2012-2014... I guess? I really don't remember exactly
2021-02-09 01:59:11
But now 2020 came and the age of e-learning began
2021-02-09 01:59:41
So I kinda know what being ahead of time is like 😉
Jyrki Alakuijala
2021-02-09 01:59:48
I lived in a small village and didn't belong -- then my family moved to a university town (in 1988) and everything was suddenly normal
2021-02-09 02:00:45
yes, looks like I gave up with chat programs 30 years ago, not 25 ๐Ÿ˜„
Deleted User
2021-02-09 02:02:04
Not only compression, but also IRC chatbots?
2021-02-09 02:02:20
Damn, I wish I knew as much as you all do...
2021-02-09 02:02:53
I'm really struggling at maths, we collectively hate them as a whole
2021-02-09 02:03:17
It's hard and I barely understand anything
2021-02-09 02:04:56
I can only play with already existing things (like various codecs); it's a shame that I'm too stupid to create something of my own (or at least participate in a bigger project, bc JPEG XL definitely isn't the work of one man)
Scope
2021-02-09 02:05:45
Also (~86 500 bytes each image) <https://slow.pics/c/CGOCjUbz>
2021-02-09 02:07:00
Source
2021-02-09 02:07:08
AVIF
2021-02-09 02:07:15
JXL
Deleted User
2021-02-09 02:08:30
Look at the ribbon on the heart-shaped box, JPEG XL is the only codec that fully preserves one of the lines
2021-02-09 02:08:59
AVIF cuts it prematurely and WebP 2 completely omits it
2021-02-09 02:10:43
Same with some other lines with same color on both sides
Scope
2021-02-09 02:11:40
Yep, WebP v2 is also worse than AVIF in this image (although with the new builds on photographic content it is often better)
Jyrki Alakuijala
2021-02-09 02:27:51
I'm delighted that WebP team is also looking into photographic quality -- not just psnr etc.
2021-02-09 02:29:09
red colors are very difficult to model
2021-02-09 02:29:27
the same for blue sky with white clouds
2021-02-09 02:29:46
both are underestimated with primitive color models
2021-02-09 02:30:00
the edge of the round cup is crappy with jxl
2021-02-09 02:30:11
(I'll promise to fix it soon :-D)
_wb_
2021-02-09 08:59:50
updated the flif website: https://flif.info/
Deleted User
2021-02-09 12:37:30
I remember that website without the first and last paragraph...
2021-02-09 12:38:41
I still remember playing with FLIF lossless and lossy
2021-02-09 12:38:55
And Xiph's Daala encoder
2021-02-09 12:40:00
Back then I had to use Ubuntu VirtualBox and compile all the stuff there, because WSL wasn't even in our dreams
2021-02-09 12:40:12
Ah, those were the times...
2021-02-09 12:41:21
After FLIF I didn't have time, so I missed the birth and death of standalone FUIF, I've also heard of Pik, but didn't play with it, only watched visual comparisons by others
2021-02-09 12:41:51
And now here I am, jumped straight into JPEG XL after merging Pik and FUIF
2021-02-09 12:42:55
Seems great to finally be back after a break
_wb_
2021-02-09 12:50:49
Yes, good timing 🙂
Deleted User
_wb_ updated the flif website: https://flif.info/
2021-02-09 12:51:05
typo: FLIF **development development** has stopped since FLIF is superceded by FUIF and then again by JPEG XL
_wb_
2021-02-09 12:52:20
oops, thanks! fixed
Deleted User
2021-02-09 12:55:01
no problem
Scope
2021-02-09 02:23:08
I've been following FLIF since the beginning; it had interesting features and was open source, but slow decoding was among its disadvantages. I was always interested in dense compression, and there were formats which are strong even now: for example BMF already existed in 1998/1999 (and in 2010 only some tweaks for a slow mode were added), but it also has slow decoding and, in addition, closed source code; also various encoders from Alexander Rhatushnyak (who also helped in developing JPEG XL). I was also interested in PIK (where I think lossy compression and visual tuning was the right direction of development, unlike video-based formats), and I was happy that FLIF/FUIF and PIK merged and got the best features of both, as well as improvements in lossless decoding speed; now it is fast enough for practical use in very dense compression. Also, one of the important things is openness and communication between developers and the community, which is rare these days when format development is very closed (from what I remember, before there was communication from Jyrki and Pascal about WebP development, or sometimes from Thomas Richter about JPEG formats)
2021-02-09 02:29:56
For example, the development of AVIF was and still is quite closed from the community and only some third-party developers like Joe Drago, Kornel, Derek communicate.
2021-02-09 02:42:30
Btw, in my opinion, more news and more discussions were about FLIF when it appeared than about Jpeg XL now, perhaps people have a worse attitude when the format is behind the big companies and organizations like Jpeg, and FLIF felt more open and independent.
_wb_
2021-02-09 03:17:14
Yes, the JPEG "brand" has positive and negative effects
2021-02-09 03:18:32
Same with Google and Cloudinary, I guess, probably mostly negative though – nobody really likes corporations
Pieter
_wb_ updated the flif website: https://flif.info/
2021-02-09 03:34:08
superceded -> superseded
Deleted User
2021-02-09 03:59:18
<@&803357352664891472> y'all probably know about AV1's CDEF. Was it possible to include it in JPEG XL and if not, was it because of technical limitations (it would break the format) or practical (e.g. being AVIF-slow)? Was CDEF even considered/discussed?
veluca
2021-02-09 04:11:34
AFAIU CDEF operates better with low bitrates, which we didn't really consider a priority - so we have our own filter that works better for detail preservation
_wb_
2021-02-09 04:20:15
I assume directional filtering works well in tandem with directional prediction, which we also do not have.
2021-02-09 04:23:31
Intuitively, for low-bpp high-appeal encoding, directional stuff is nice to get sharp edges and a smooth 'vectorized' look. It is not very useful for higher bpp, high-fidelity encoding. At high-fidelity, context modeling / entropy coding matters more.
2021-02-09 04:25:24
Video codecs have to keep context modeling / entropy coding simple enough (hw-friendly), and they can afford to, because at the low bpps they are targeting, most of the stuff they are signaling has high entropy anyway, so entropy coding would not help much.
2021-02-09 04:26:02
At least that's my gut feeling.
Deleted User
_wb_ Intuitively, for low-bpp high-appeal encoding, directional stuff is nice to get sharp edges and a smooth 'vectorized' look. It is not very useful for higher bpp, high-fidelity encoding. At high-fidelity, context modeling / entropy coding matters more.
2021-02-09 06:24:45
> At high-fidelity, context modeling / entropy coding matters more. ...and seems like you've MASTERED it 😃
Nova Aurora
2021-02-11 08:22:25
What's MPEG's numbering scheme? Right now they are working on MPEG-1 part 3 and MPEG 5 EVC at the same time.
_wb_
2021-02-11 08:34:04
As far as I understand, MPEG-1 part 3 is audio (that's why MP3 is called MP3). Otherwise, MPEG-1 is quite outdated.
Pieter
2021-02-11 08:34:56
And MPEG-3 was skipped for exactly that reason I think (confusion with MPEG-1 layer 3)
_wb_
2021-02-11 08:35:36
MPEG-2 is h.262, which was used for DVD
Pieter
2021-02-11 08:35:39
It would appear that I am wrong: https://en.wikipedia.org/wiki/MPEG-3
_wb_
2021-02-11 08:36:01
MPEG-3 was kind of skipped
Pieter
2021-02-11 08:37:20
And h.263 is not MPEG. h.264 is MPEG-4 part 10.
_wb_
2021-02-11 08:37:56
MPEG-4 contains lots of parts, a.o. h264 and the .mp4 file format
2021-02-11 08:38:35
it's quite bizarre how many weird parts they have in MPEG-4: https://en.wikipedia.org/wiki/MPEG-4
Pieter
2021-02-11 08:38:48
Yes, and MPEG-4 also contains part 2, which is ASP, more widely known as DivX
Nova Aurora
2021-02-11 08:38:54
VVC is mpeg-I part 3
Pieter
2021-02-11 08:39:11
h.266 = VVC = MPEG-I part 3. Indeed. I, not 1.
2021-02-11 08:39:15
Not confusing at all.
_wb_
2021-02-11 08:40:43
h265 is in MPEG-H part 2
2021-02-11 08:41:00
(and HEIF is MPEG-H part 12)
2021-02-11 08:41:55
and yes, h266 is MPEG-I part 3
2021-02-11 08:42:14
in short
Nova Aurora What's MPEG's numbering scheme? Right now they are working on MPEG-1 part 3 and MPEG 5 EVC at the same time.
2021-02-11 08:42:35
There is only one rule: there are no rules
Nova Aurora
2021-02-11 08:43:13
How much ambiguity, confusion and bureaucracy can we get?
2021-02-11 08:43:26
MPEG and ISO: yes
_wb_
2021-02-11 08:44:53
ITU-T H.266 MPEG-I part 3 VVC ISO/IEC 23090-3
2021-02-11 08:44:57
these are all the same thing
2021-02-11 08:45:29
JPEG XL is ISO/IEC 18181 btw
lonjil
2021-02-11 09:25:30
That's easy to remember at least
Deleted User
2021-02-11 01:26:48
> ISO/IEC 18181 We've got quite lucky number, wow
Scope
2021-02-11 01:50:15
_wb_
2021-02-11 03:37:41
missed that DIS submission date by a few days, not that it matters
Jyrki Alakuijala
_wb_ I assume directional filtering works well in tandem with directional prediction, which we also do not have.
2021-02-11 09:30:17
We tried three different ways of directional filtering and transforms. None of them produced significant improvements in quality d1-d8. At d12-d16 we started to see benefits, but that was not in the quality range we were so interested in the end.
2021-02-11 09:32:25
I studied AVIF performance for turning on and off the directional filtering on a larger corpus and saw the best gains to be 20 % at a relatively low quality
2021-02-11 09:33:02
Our team was not able to make it work in a way that was satisfactory, so we left that out
2021-02-11 09:33:58
we tried directional dcts (three different forms), complex perspective-like tiled geometry transforms, directional filtering (transforming gaborish into directional lines with a guidance field)
2021-02-11 09:34:56
none of it worked as I'd like it to work -- over a large quality range, be fast to encode for, be easy to decide the heuristics to use for -- the directional stuff just didn't meet the quality bar we have
2021-02-11 09:35:17
we didn't copy over the implementation from AVIF, but tried everything independently
2021-02-11 09:35:36
also, we didn't try exactly the same approach as in AVIF
2021-02-11 09:36:08
also, we didn't look into comfort at any stage but measured success through fidelity rather than comfort
2021-02-11 09:36:54
it is highly likely that there is a way to do directional coding, intuitively should be possible, but we were not able to find good solutions for it
2021-02-11 09:38:22
Two bright senior software engineers tried looking for these solutions, and one high-performing junior dev. I rotated the task so that we would find new perspectives to it. Still, it didn't make it.
2021-02-11 09:38:33
sorryyyyyy
Master Of Zen
2021-02-11 09:42:06
Here is some data for av1
2021-02-11 09:42:16
x265 PLACEBO compared to aomenc cpu-used 6 (red color - needs more %, relatively, to get the same quality)
2021-02-11 09:42:25
https://cdn.discordapp.com/attachments/738825198534656100/809527697705271316/2021_02_11_22_52_54.webp
2021-02-11 09:42:40
cpu-used 6 -> cpu-used 0
2021-02-11 09:42:49
https://cdn.discordapp.com/attachments/738825198534656100/809528902875480134/2021_02_11_22_58_35.webp
Jyrki Alakuijala
2021-02-11 09:44:21
what are the values? it is the first time I see such reports
Master Of Zen
Jyrki Alakuijala what are the values? it is the first time I see such reports
2021-02-11 09:52:29
it's from AWCY (Are We Compressed Yet), a benchmarking tool for video codecs. Values are relative %: positive percentages under the time category mean that the codec we compare to takes more time, and values under the metric category are the BD-rate difference (how much more bit rate is needed to get the same quality on average; negative means we need less bit rate (more efficient, green), positive means we need to spend more bit rate (less efficient, red)). In the second example, by switching from cpu-used 6 to cpu-used 0 (from the fastest to the slowest aomenc setting), our encoding time increased by 7361% for the same set of videos, ~73 times slower, and coding efficiency increased by ~25%
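A quick sketch of how those relative percentages map to speed factors (illustrative Python only, not part of AWCY; the helper name is made up):

```python
# Hypothetical helper, not AWCY code: a relative time delta of +P percent
# over the baseline corresponds to a multiplicative factor of (1 + P/100).
def percent_to_factor(percent_increase: float) -> float:
    return 1.0 + percent_increase / 100.0

# +100% means 2x the baseline time;
# +7361% means ~74.6x, which the chat rounds to "~73 times slower".
slowdown = percent_to_factor(7361)
```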
2021-02-11 09:53:28
Here is x265 placebo vs aomenc cpu-used 6 https://beta.arewecompressedyet.com/?job=aom-master%402021-02-10T20%3A15%3A36.010Z&job=x265_preset_placebo_reduced_probe_set%402021-02-11T18%3A40%3A03.490Z
2021-02-11 09:53:59
Here is aomenc cpu-used 6 vs aomenc cpu-used 0 https://beta.arewecompressedyet.com/?job=aom-master%402021-02-10T20%3A15%3A36.010Z&job=aom-master-cpu-used%3D0%402021-02-10T20%3A53%3A10.688Z
Jyrki Alakuijala
2021-02-11 09:54:06
cool
2021-02-11 09:54:14
can you turn on/off directional filtering?
2021-02-11 09:54:27
and directional prediction?
BlueSwordM
2021-02-11 09:54:55
Yes, in some parts. `--enable-cdef=0 --enable-restoration=0` are the most important ones in aomenc.
Master Of Zen
2021-02-11 09:55:13
Yes, I can make any set of options/encoder cli
Jyrki Alakuijala
2021-02-11 09:55:40
perhaps you can demonstrate how much we lost in jpeg xl by our inability to make directional stuff work 😄
Master Of Zen
Jyrki Alakuijala can you turn on/off directional filtering?
2021-02-11 09:55:41
for cpu-used 6?
Jyrki Alakuijala
2021-02-11 09:55:52
sounds good to me
Master Of Zen
Jyrki Alakuijala sounds good to me
2021-02-11 09:56:09
One second, will start benchmark and notify you when it's done 👍
Jyrki Alakuijala
2021-02-11 09:56:20
run all four combos, please
2021-02-11 09:56:32
off+off, on+off, off+on, on+on
Master Of Zen
Jyrki Alakuijala off+off, on+off, off+on, on+on
2021-02-11 09:59:09
on+on is default, so I started 3 benchmarks
2021-02-11 09:59:27
Jyrki Alakuijala
2021-02-11 10:07:29
oioioi, I'm so excited 😛
2021-02-11 10:07:51
is there a specific quality setting it goes to
2021-02-11 10:08:24
(what we saw in jxl was worse performance at mid to high quality, and better at low)
2021-02-11 10:12:59
https://distill.pub/selforg/2021/textures/
2021-02-11 10:13:34
Now I have texture envy 🙂
2021-02-11 10:22:10
Alex's tweet on this https://twitter.com/zzznah/status/1359971771030601731
Master Of Zen
2021-02-11 11:01:24
<@!532010383041363969> Both off https://beta.arewecompressedyet.com/?job=aom-master%402021-02-10T20%3A15%3A36.010Z&job=aomenc_cdef_off_loop_off%402021-02-11T21%3A56%3A30.571Z Only cdef off https://beta.arewecompressedyet.com/?job=aom-master%402021-02-10T20%3A15%3A36.010Z&job=aomenc_cdef_off_loop_on%402021-02-11T21%3A57%3A16.869Z Only loop off https://beta.arewecompressedyet.com/?job=aom-master%402021-02-10T20%3A15%3A36.010Z&job=aomenc_cdef_on_loop_off%402021-02-11T21%3A58%3A07.393Z
BlueSwordM
2021-02-11 11:04:39
CDEF is clearly pulling its weight all alone quite well actually.
Master Of Zen
2021-02-12 12:15:04
<@!532010383041363969> what you think about results?
Jyrki Alakuijala
2021-02-12 07:14:10
cdef looks like a small net positive without impacting encoding or decoding time
2021-02-12 07:14:37
it can be that there are two things that are different in this eval and in the eval we did
2021-02-12 07:15:28
1. our material had likely more high frequency components -- cdef can interpolate more effectively if the features are not super-sharp
2021-02-12 07:16:17
2. these metrics use L2 aggregation, having some higher errors here and there doesn't matter so much
2021-02-12 07:17:35
perhaps also 3. this comparison is 0.01 to 0.1 bpp, and our comparisons were around 0.3 to 1.5 bpp
_wb_
Jyrki Alakuijala Now I have texture envy 🙂
2021-02-12 08:27:09
Modular jxl can also do quite interesting classes of cellular automata
2021-02-12 08:27:26
probably even more than flif
2021-02-12 08:27:27
https://cloudinary.com/blog/compressing_cellular_automata
2021-02-12 08:27:41
2021-02-12 08:27:49
2021-02-12 08:31:45
2021-02-12 08:31:52
2021-02-12 08:34:48
it would be a fun artistic exercise to see what kind of cellular automata can get their rules encoded in an MA tree. If the full rules are in the MA tree, the entropy coding gets a perfect context model: all histograms are singletons, so zero entropy left.
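The "all histograms are singletons, so zero entropy left" remark can be illustrated with a toy entropy calculation (illustrative only, not libjxl code):

```python
import math

# Shannon entropy of a symbol histogram, in bits per symbol.
def entropy(hist):
    total = sum(hist.values())
    return -sum(c / total * math.log2(c / total) for c in hist.values())

# If the MA tree's context fully determines each pixel value, every
# context sees exactly one symbol: a singleton histogram, zero bits left.
print(entropy({5: 128}))        # 0.0 bits/symbol
# When the model is uninformative over two equally likely symbols,
# a full bit per symbol remains to be coded.
print(entropy({0: 64, 1: 64}))  # 1.0 bits/symbol
```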
2021-02-12 08:35:31
There are many, many very beautiful synthetic images expressible as jxl files under 1 kb.
2021-02-12 08:40:25
2021-02-12 08:40:32
lonjil
2021-02-12 08:53:15
new golfing game: designing codecs that make as many different CAs compress as small as possible.
_wb_
2021-02-12 09:01:01
the class of CAs that compress well in jxl is quite large
2021-02-12 09:01:11
2021-02-12 09:01:17
2021-02-12 09:02:25
just exploring some variations on a simple CA that is based on only the value of `North - NorthWest`
2021-02-12 09:03:23
someone should make a program that lets you define a CA by defining a jxl MA tree, visually
2021-02-12 09:03:54
such CAs will compress to just a few hundred bytes
2021-02-12 09:04:10
might be useful for some kinds of game textures, for example
lonjil
2021-02-12 09:05:08
another fun game: finding MA trees that result in the most visually complex images and then making people unfamiliar be like "whoa" when they see such a complex image compressed so small 😄
_wb_
2021-02-12 09:06:27
the above 222 byte jxl file will already be quite challenging for other codecs to not do 100x worse, even lossy
lonjil
2021-02-12 09:06:38
aye!
_wb_
2021-02-12 09:26:09
what options did you use to cjxl for that one?
_wb_
2021-02-12 09:32:55
`-q 100 -g 3 -s 9 -I 1`
2021-02-12 09:33:07
2021-02-12 09:33:50
I wonder why this one doesn't compress better, 1525 bytes is probably not optimal
2021-02-12 09:34:01
2021-02-12 09:36:24
a `-quality 50` jpeg is 1.7 MB so maybe I shouldn't complain though
lonjil
2021-02-12 09:38:45
ohhh so I got a worse size on foo.jxl because I used png as the intermediary format and cjxl stored the ICC profile >.<
2021-02-12 09:39:00
using ppm instead got me 217 bytes
2021-02-12 09:39:29
bless the existence of jxlinfo
_wb_
2021-02-12 09:44:29
anyway, this is just the result of CA based on a single MA tree property, and probably not the coolest ones but just the most interesting ones I could find after looking at a dozen random ones. The space of things that can be expressed is quite huge, and I wonder what kind of stuff is possible with it
lonjil
2021-02-12 09:47:54
Hm, you wouldn't happen to have handy any papers or anything else about how this MA stuff works?
_wb_
2021-02-12 09:51:56
MA trees can express at least any CA where the cell value depends on some combination of the neighboring cell values of W, NW, N, NE, WW and NN. Probably more.
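A toy sketch of such a causal CA (the XOR rule and seed are invented for illustration and are not an actual MA tree): each pixel depends only on already-decoded neighbours, so the grid fills row by row.

```python
import numpy as np

def causal_ca(width=64, height=64):
    # Each cell depends only on neighbours above it (N, NW), so the grid
    # can be filled row by row, like causal context modelling in a decoder.
    img = np.zeros((height, width), dtype=np.uint8)
    img[0, 0] = 1  # arbitrary seed in the first row
    for y in range(1, height):
        for x in range(width):
            n = img[y - 1, x]
            nw = img[y - 1, x - 1] if x > 0 else 0
            img[y, x] = n ^ nw  # toy rule: XOR of North and NorthWest
    return img

# This particular rule draws Pascal's triangle mod 2 (a Sierpinski
# pattern): img[y, x] == C(y, x) mod 2 for x <= y.
```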
2021-02-12 09:52:56
cell values can have whatever range, and of course you can use whatever color palette to show them
lonjil
2021-02-12 09:53:19
Yeah it seems pretty cool. I mean like, can I read about how they work technically somewhere, or is the stuff in JXL so fresh I'll have to read the code to find out? 😃
_wb_
2021-02-12 09:55:27
uh
2021-02-12 09:55:35
there's the old flif paper: https://flif.info/papers/FLIF_ICIP16.pdf
2021-02-12 09:57:14
modular jxl has the MA trees like flif โ€“ not exactly the same but not hugely different
2021-02-12 09:57:44
it's no longer MANIAC like flif, it's now MAANS
lonjil
2021-02-12 09:58:11
alright 👍 thanks
veluca
2021-02-12 11:45:02
probably the code is your best bet if you want to know more 😄 once the lib is production-ready, we can spend some time documenting the fancier things...
BlueSwordM
Jyrki Alakuijala cdef looks like a small net positive without impacting encoding or decoding time
2021-02-12 02:13:20
Uh no <:kekw:808717074305122316> CDEF is one of the most demanding features of AV1 in terms of decoding performance.
2021-02-12 02:13:38
When properly SIMD optimized, disabling it makes the decoding around 5% faster, which is negligible in the real world.
2021-02-12 02:14:32
Without any form of optimizations(including O3), disabling it makes the decoding around 2x faster.
veluca
2021-02-12 02:40:09
5% is not that much of a slowdown, at least for images
BlueSwordM
2021-02-12 02:41:52
True, but it has to be properly SIMD optimized to massively speed up decoding performance.
Fox Wizard
2021-02-12 02:48:40
<a:speedy:714294670985003018>
veluca
2021-02-12 02:49:13
as is the case for most image processing algorithms - try to do DCT without SIMD...
BlueSwordM
2021-02-12 02:49:52
True.
Deleted User
2021-02-12 02:50:34
Well... they tried, see: `libjpeg` (before the inception of `libjpeg-turbo`).
veluca
2021-02-12 02:51:20
for 8x8 it's still more or less reasonable, but for things like 32x32...
Master Of Zen
BlueSwordM Uh no <:kekw:808717074305122316> CDEF is one of the most demanding features of AV1 in terms of decoding performance.
2021-02-12 02:51:43
well, that's not true actually
2021-02-12 02:52:13
Issue with AV1 decoding is that filters atm decoded sequentially, and in single thread
veluca
2021-02-12 02:52:17
I'd expect entropy decoding to be the worst offender, if things translate between codecs
Master Of Zen
2021-02-12 02:53:03
I wouldn't say that "no, don't use CDEF because it's computationally expensive"
BlueSwordM
2021-02-12 02:53:48
Since when did I say that? In fact, I always use CDEF. Its benefits are very nice, especially with some content (aliasing in games comes to mind... 😄) Also, single-threaded performance is the one that matters the most when profiling stuff that is hard to decode without SIMD by design.
_wb_
Master Of Zen <@!532010383041363969> Both off https://beta.arewecompressedyet.com/?job=aom-master%402021-02-10T20%3A15%3A36.010Z&job=aomenc_cdef_off_loop_off%402021-02-11T21%3A56%3A30.571Z Only cdef off https://beta.arewecompressedyet.com/?job=aom-master%402021-02-10T20%3A15%3A36.010Z&job=aomenc_cdef_off_loop_on%402021-02-11T21%3A57%3A16.869Z Only loop off https://beta.arewecompressedyet.com/?job=aom-master%402021-02-10T20%3A15%3A36.010Z&job=aomenc_cdef_on_loop_off%402021-02-11T21%3A58%3A07.393Z
2021-02-12 03:20:39
How to interpret these plots? Encode and decode time: is the vertical axis made so higher is better? frames per second or something?
2021-02-12 03:22:47
And do I see it correctly that CDEF is good according to PSNR but bad according to VMAF?
2021-02-12 03:23:21
(is the horizontal axis bpp?)
Master Of Zen
_wb_ How to interpret these plots? Encode and decode time: is the vertical axis made so higher is better? frames per second or something?
2021-02-12 04:09:31
In "both off", the direction of comparison is from default to both loop restoration and CDEF disabled. Positive numbers under encoding/decoding mean that we need more time (red - more, green - less time). Positive numbers under metrics mean that we need to spend N% more bit-rate on average to get the same metric result; negative means we need less = more efficient. So from this comparison, disabling CDEF and loop restoration hit the majority of metrics by 1-2% and gave VMAF a 2% and 6% increase
_wb_
2021-02-12 04:20:11
so disabling CDEF/loop restoration makes encoding and decoding *faster*? I don't understand
Master Of Zen
_wb_ so disabling CDEF/loop restoration makes encoding and decoding *faster*? I don't understand
2021-02-12 04:25:37
opposite, encoding 2% slower, decoding 5% slower, 1-2% decrease to majority of metrics, 2% and 6% increase for VMAF
2021-02-12 04:29:02
<@!794205442175402004> here is more clear example of comparing fastest aomenc preset to slowest https://beta.arewecompressedyet.com/?job=aom-master%402021-02-10T20%3A15%3A36.010Z&job=aom-master-cpu-used%3D0%402021-02-10T20%3A53%3A10.688Z
2021-02-12 04:29:51
encoding time increase 7361%, ~25% improvement on all metrics
_wb_
2021-02-12 04:31:28
sorry I meant slower
2021-02-12 04:31:49
I don't understand why disabling stuff makes things slower
Master Of Zen
2021-02-12 04:32:06
*this benchmark tool runs everything in a single thread
2021-02-12 04:32:51
(to make accurate measurements with as little hardware "noise" as possible)
_wb_ I don't understand why disabling stuff makes things slower
2021-02-12 04:33:41
I will ask one of the devs and report back
_wb_
2021-02-12 04:34:41
ah thanks for the clearer example. so encoding time is actual time in seconds or something
Master Of Zen
_wb_ ah thanks for the clearer example. so encoding time is actual time in seconds or something
2021-02-12 04:35:13
A to B relative %
_wb_
2021-02-12 04:36:32
I don't trust any perceptual metric, but I certainly don't trust any of the metrics listed there
Master Of Zen
_wb_ I don't trust any perceptual metric, but I certainly don't trust any of the metrics listed there
2021-02-12 04:39:09
I don't even trust myself 😄
_wb_ I don't trust any perceptual metric, but I certainly don't trust any of the metrics listed there
2021-02-12 04:39:55
Seriously, they have a decent battery of metrics, from PSNR to CIEDE2000 to VMAF
_wb_
2021-02-12 04:40:54
you can have a lot of bad metrics
2021-02-12 04:41:13
quantity does not compensate for quality
2021-02-12 04:41:46
CIEDE2000 is the only one of all those methods that operates in a somewhat perceptually relevant space
Master Of Zen
_wb_ you can have a lot of bad metrics
2021-02-12 04:42:09
yep, can be the case, what to use when?
_wb_
2021-02-12 04:43:10
that's a very good question, and I think we need more research to find out
2021-02-12 04:44:38
my gut feeling for still images:
- dssim or ssimulacra at low-medium fidelity
- hdr-vdp or butteraugli at high fidelity
2021-02-12 04:45:36
for ultra-low fidelity maybe vmaf might somewhat work, but you probably need an AI-based metric for that โ€“ at that point you're measuring appeal and that is a very semantic and tricky thing
Nova Aurora
Master Of Zen I don't even trust myself 😄
2021-02-12 08:35:25
I fall for appeal too often 😄
Jyrki Alakuijala
Master Of Zen yep, can be the case, what to use when?
2021-02-12 11:27:17
Y-PSNR is somewhat ok, and so is Y-only MS-SSIM; VMAF doesn't work for JPEG XL's distortions at all -- better practical metrics with C++ implementations are dssim, ssimulacra and butteraugli (also NLPD, which I have seen give good results against test corpora; don't know if it works well with JPEG XL distortions)
2021-02-12 11:28:57
I think a commonality in dssim, ssimulacra and butteraugli is that they do the gamma correction in approximately correct direction whereas most metrics do it in wrong direction or just don't care about color at all
2021-02-12 11:29:23
dssim and ssimulacra work in Lab, which is an ok colorspace
2021-02-12 11:29:56
butteraugli uses two different colorspaces: XYB for high frequency, and another ad hoc colorspace for low-frequency colors
2021-02-12 11:30:40
how to normalize diffs in low frequency color and high frequency color is a big thing
2021-02-12 11:31:10
if you normalize it for good performance in flip tests, the metric will suck at side-by-side testing (or the other way around)
2021-02-12 11:31:49
butteraugli is more normalized for flips, i.e., more importance for maintaining constant LF intensity and colors
2021-02-12 11:32:27
I suspect (from observed discussions) that dssim is more calibrated with something like TID2013, which is (AFAIK) a side-by-side thing
2021-02-12 11:33:03
in the usual image compression I'd just rather overprovision the DC if possible -- it is less information overall in typical images
2021-02-12 11:33:50
... in side by side comparison the luma has more adaption than red-green
2021-02-12 11:34:11
... in flips the other way around, very small LF luma differences create perceptible flicker
2021-02-12 11:34:57
supposedly the neural metrics are also getting better in the low quality area (lpips)
2021-02-12 11:35:23
I suspect that they are mostly for side-by-side comparison (in normalization between LF/HF)
2021-02-12 11:40:46
practically all of the ~2500 butteraugli calibration images are from butteraugli scores of 0.6 - 1.5, with about 500 images between ~0.9 and ~1.1, i.e., it is only calibrated around d1, and the rest is linear extrapolation
2021-02-12 11:42:06
our initial plan with pik was to deliver a system that could give decent compression in the range of d0.5 to d2 with hard guarantees of butteraugli scores, i.e., visually lossless or close
2021-02-12 11:43:10
when we joined the jpeg xl effort we needed to expand the scope to low quality and complicate the codec a lot (for example with larger transforms and filtering)
2021-02-12 11:44:31
JPEG XL's requirements included operation down to 0.06 bpp, which is something like distance 77 -- just not achievable with what pik was designed for
2021-02-12 11:46:14
later the committee understood that just the act of writing low bpp requirements in the codec's requirements spec doesn't necessarily make the codec more useful
2021-02-12 11:47:54
(0.06 bpp for a full 4k image in 60 kB, whereas today the median website sends 999 kB of images)
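The bracketed arithmetic checks out (a quick sanity calculation, assuming "full 4k" means a 3840×2160 frame):

```python
# 0.06 bits per pixel over a 3840x2160 frame:
width, height, bpp = 3840, 2160, 0.06
size_bytes = width * height * bpp / 8  # bits -> bytes
size_kb = size_bytes / 1000            # ~62 kB, i.e. "something like 60 kB"
```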
Scope
2021-02-13 02:33:09
https://chromium.googlesource.com/codecs/libwebp2/+/3610526508602f568ce97e8e8587603aee58b0aa
Master Of Zen
2021-02-13 04:49:58
av1 vs x264 web rtc at 250Kbps
2021-02-13 04:50:03
Deleted User
2021-02-13 06:17:40
Haha laptop battery goes brrrrr
fab
2021-02-13 07:26:47
webp2 q89 good
_wb_
2021-02-13 07:29:14
How image-dependent are q-settings in webp2?
2021-02-13 07:30:09
In jxl, -q actually means quality, in the sense that -q N gives perceptually relatively consistent results.
fab
2021-02-13 07:31:03
135 kb i got 145 kb
2021-02-13 07:31:08
from the original jpg
2021-02-13 07:31:16
not good with the image i used
_wb_
2021-02-13 07:31:41
In cjpeg and cwebp, the -q means quantization, and it gives consistent results in terms of quantization tables, but in terms of perceptual results your mileage can vary a lot
Master Of Zen
2021-02-13 07:32:06
. jxl -d 1 is `VMAF score: 96.473198` jxl -d 3 is `VMAF score: 93.050041` 🤔
_wb_
2021-02-13 07:32:21
VMAF is garbage
Master Of Zen
2021-02-13 07:32:25
On this
_wb_
2021-02-13 07:33:18
Higher distance giving lower quality is normal though
2021-02-13 07:33:36
Or lower vmaf score
Master Of Zen
_wb_ VMAF is garbage
2021-02-13 07:35:03
<:SadCat:805389277247701002>
fab
2021-02-13 07:35:05
on screenshots you can do distortions to the color to get more savings like in 0.3.1