JPEG XL

other-codecs

DZgas Ж
2023-04-28 08:04:12
Why is that? It is turned off. Is this ANOTHER piece of software that millions use but no one has checked? 🥹
2023-04-28 08:11:00
I also can't understand this: it says "Tune trellis optimization for ****". But if I just write -notrellis and nothing else, then it still uses "Tune trellis optimization for PSNR-HVS (default)". For what reason is it used if I passed -notrellis? What I mean is: if I write -notrellis -tune-psnr, what does that mean? Have I started using the PSNR metric to optimize the trellis that I disabled? I wouldn't have any questions, but -notrellis -tune-psnr and -notrellis -tune-hvs-psnr give DIFFERENT results. That means that by setting -tune-psnr I am not changing trellis optimization but exactly which metric is used... BUT -notrellis -tune-ssim makes a literally identical file to -notrellis -tune-psnr. What does that mean? It's broken.
2023-04-28 08:11:05
<:Thonk:805904896879493180>
DZgas Ж
DZgas Ж Casually studying mozjpeg in order to make previews for my site (guetzli is too slow), I was surprised to find that "Enable trellis optimization of DC coefficients (default)" just kills pixels.
2023-04-28 10:40:01
Now I see that even guetzli cannot do what I want
2023-04-28 10:40:57
I especially want to draw attention to the grid in the very first block. All algorithms do it
2023-04-28 10:41:42
but -notrellis -tune-psnr did it perfectly.
2023-04-28 10:42:50
It's a pity that I don't know how to get more quality on the color channels, because I can see they are the worst even with yuv444
2023-04-28 10:50:37
Interestingly, these actually go from 0 to 8; it will be necessary to check each one
2023-04-28 11:29:06
1 and 7 preserve pixels best, but 1 (flat) is better
2023-04-28 11:39:26
and yet -baseline and -optimize don't work together<:Thonk:805904896879493180>
2023-04-28 11:40:17
Or maybe I didn't understand how to check it, because I couldn't find a no-optimize parameter
2023-04-28 11:40:57
Because if -optimize is enabled by default, then why write it?
fab
2023-04-28 02:22:23
https://www.cufonfonts.com/font/nevermind-nr
Traneptora
2023-04-28 02:23:07
403
fab
2023-04-28 05:50:20
The font already has a copyright
2023-04-28 05:50:39
I'm using the same name as a project by other people
2023-04-28 05:50:56
At the moment it hasn't been published
DZgas Ж
2023-05-01 07:53:29
💀
Foxtrot
2023-05-03 01:40:50
https://nikonrumors.com/2023/05/02/nikon-z8-vs-nikon-z9-specifications-comparison.aspx/#more-180701 The Nikon Z8 will support 10-bit HEIF. I wonder if they will be willing to add JPEG XL in the future or just stick with HEIF.
_wb_
2023-05-03 01:56:42
Once hw jxl encoders are there, they could support jxl at enough precision to use it for most raw use cases...
fab
2023-05-03 06:33:06
https://research.nvidia.com/labs/rtr/neural_texture_compression/
BlueSwordM
fab https://research.nvidia.com/labs/rtr/neural_texture_compression/
2023-05-03 08:25:30
Interesting. They're using an advanced form of psy grain synthesis.
2023-05-03 08:25:35
Thank you fab for finding this.
diskorduser
2023-05-04 02:43:09
Nice they have included jpegxl and avif 😄
username
2023-05-04 02:45:05
they only mention jpeg xl
2023-05-04 02:45:28
while with avif they mention it and include it in comparisons
diskorduser
2023-05-04 03:09:06
JXL is in the comparison
2023-05-04 03:10:36
username
2023-05-04 03:22:12
I was looking here
yoochan
diskorduser JXL is in the comparison
2023-05-04 07:24:00
do you have a link for this ?
diskorduser
2023-05-04 07:29:56
See the video (bottom of the page)
2023-05-04 07:30:27
https://research.nvidia.com/labs/rtr/neural_texture_compression/assets/ntc_video.mp4
yoochan
2023-05-04 07:56:59
thanks
_wb_
2023-05-05 03:49:06
Ugh, av2 now already? And they plan to just make libavif do av2 payloads too?
2023-05-05 03:49:45
That is an interoperability nightmare... Expect "this avif image doesn't work!" soon...
MSLP
2023-05-05 03:52:40
Well, if they intend it as a web-delivery format, browsers get frequent updates and auto-update, so in that case it should work
fab
2023-05-05 04:01:31
2023-05-05 04:01:53
Honestly, I like the quality of YouTube's VP9 encoder
2023-05-05 04:02:02
It's pleasant to me
2023-05-05 04:02:46
Av1 is too complex
2023-05-05 04:02:59
Like few frame few movements for
2023-05-05 04:03:18
It uses small movements of each frame
2023-05-05 04:03:42
That gets incredibly slow to encode well and efficiently
2023-05-05 04:04:24
And if you want it to look pleasant at medium speed, it gets blurry instead of blocky
2023-05-05 04:04:35
But that is an artifact
2023-05-05 04:05:00
The video has 49k views and no av01
Foxtrot
2023-05-05 04:05:00
I understand frequent new formats for final web delivery... Your CMS or framework or CDN can just re-encode the image to all formats and use the best supported one... But for local storage on your PC or in your CMS? That's awful... You want a long-term solution: some format that will be supported and improved for decades to come, not something that will just make version 2.0 and leave the original behind.
fab
2023-05-05 04:06:02
At the moment there is nothing at 1700 kbps encoded with recent techniques at 167k views
MSLP
2023-05-05 04:08:52
Time to hack in libavif output support to djxl I guess 🤪
fab
2023-05-05 04:15:28
Likely we expect av2 to have 500 kbps and that appeal
2023-05-05 04:15:30
https://www.instagram.com/p/Cr3iPc6oN7_/?igshid=YmMyMTA2M2Y=
2023-05-05 04:15:43
But you know it won't happen
2023-05-05 04:16:00
Compression artifacts will still be visible
_wb_
MSLP Well, if they intend it as a web-delivery format, browsers get frequent updates and auto-update, so in that case it should work
2023-05-05 04:17:11
sure, if you're Chrome and you can just push a new format every 2-3 years, and breaking other browsers is a feature not a bug, then this model kind of works... but it's not a nice way to advance the web platform imo
fab
2023-05-05 04:21:22
AV2, as far as I know, will be at qualities lower than this
2023-05-05 04:30:36
JPEG XL has failed
DZgas Ж
2023-05-05 04:50:00
?
jonnyawsom3
2023-05-05 06:19:53
Ended up having to use ZBrush for a friend who wants a model put into Blender... Turns out it only supports TGA, TIFF, PSD or proprietary formats with EXR as an addon for only displacement maps. The entire thing is stuck in the 90s
DZgas Ж
DZgas Ж Now I see that even guetzli cannot do what I want
2023-05-05 06:46:57
I never thought I would be able to make a personal discovery in the JPEG field in 2023. But I found that these MozJpeg parameters: ```cjpeg.exe -baseline -notrellis -sample 1x1 -quality 90 -tune-psnr -quant-table 1``` give surprisingly good quality. I didn't know JPEG could work this well at all. For the last 7 years I have been using JPEG XR q95 to archive my photos, and I was about to switch to JPEG XL. But now I'll have to reconsider everything, and also make a comparison with WebP and all the other codecs
_wb_
2023-05-05 08:58:39
Good old jpeg is not to be underestimated.
2023-05-05 08:59:45
It's a good thing that jxl is guaranteed to be at least as good as good old jpeg. You can always losslessly recompress a jpeg and it will be smaller.
MSLP
_wb_ sure, if you're Chrome and you can just push a new format every 2-3 years, and breaking other browsers is a feature not a bug, then this model kind of works... but it's not a nice way to advance the web platform imo
2023-05-05 10:58:56
That's sadly how the ecosystem looks at this time, with Chrome having a major share on both PC and mobile, reminiscent of when IE was the dominant browser. At least, since CDN image delivery can serve a flexible format to the end user, there will be fallbacks for minor browsers.
2023-05-05 11:04:58
But in the bigger picture, regarding other web technologies, things may not go as smoothly
DZgas Ж
_wb_ It's a good thing that jxl is guaranteed to be at least as good as good old jpeg. You can always losslessly recompress a jpeg and it will be smaller.
2023-05-05 11:13:18
This is true. It's just weird to see that... This is my guess, but JPEG was spoiled in pursuit of the ability to compress more pixels at worse quality. It just seems strange to me that in 2023 I literally see that JPEG turns out to be not so bad.
DZgas Ж I never thought I would be able to make a personal discovery in the JPEG field in 2023. But I found that these MozJpeg parameters: ```cjpeg.exe -baseline -notrellis -sample 1x1 -quality 90 -tune-psnr -quant-table 1``` give surprisingly good quality. I didn't know JPEG could work this well at all. For the last 7 years I have been using JPEG XR q95 to archive my photos, and I was about to switch to JPEG XL. But now I'll have to reconsider everything, and also make a comparison with WebP and all the other codecs
2023-05-06 09:24:38
Well, nothing unusual happened; JPEG XL has much better pixel accuracy
2023-05-06 09:25:07
on q98
afed
2023-05-06 01:11:05
https://videocardz.com/newz/nvidia-neural-texture-compression-offers-4-times-higher-resolution-than-standard-compression-with-30-less-memory
2023-05-06 01:11:13
https://research.nvidia.com/labs/rtr/neural_texture_compression/
2023-05-06 01:12:00
psnr <:Thonk:805904896879493180>
2023-05-06 01:18:46
seems to be sort of predicting some texture layers from others and ai upscaling, so not surprisingly it will be better than traditional codecs for such cases
2023-05-06 01:19:25
https://research.nvidia.com/labs/rtr/neural_texture_compression/assets/ntc_video.mp4
jonnyawsom3
2023-05-06 01:37:28
Only AVIF once again https://research.nvidia.com/labs/rtr/neural_texture_compression/assets/Results/results.html
jonnyawsom3
2023-05-09 02:25:52
Someone's a bit out of the loop
zamfofex
2023-05-09 07:11:35
Doesn’t seem like it would be difficult to construct such an image. Even for lossless JPEG XL, I’d imagine. Compression is about compromise in some way. Any kind of lossless compression algorithm can be exploited to produce a larger output.
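The counting argument above can be demonstrated with any general-purpose lossless compressor. A minimal sketch, using Python's zlib on random (hence essentially incompressible) bytes; this illustrates the pigeonhole principle in general, not any specific image codec:

```python
import os
import zlib

# By the pigeonhole principle, no lossless scheme can shrink every
# input. On random bytes the compressor finds nothing to exploit, so
# it can only add framing overhead and the output comes out larger.
data = os.urandom(100_000)
compressed = zlib.compress(data, level=9)

print(len(data), len(compressed))
assert len(compressed) > len(data)          # output grew
assert zlib.decompress(compressed) == data  # still perfectly lossless
```

The same holds for PNG, lossless WebP, or lossless JXL: some inputs must expand, which is why the formats keep an escape hatch for storing data nearly uncompressed.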
username
2023-05-09 07:17:25
I don't have a reddit account but if someone is able to it would probably be good to bring these results to that post
2023-05-09 07:18:14
I optimized the PNG and made it around 30% smaller, and I also made an optimized lossless WebP and JXL
2023-05-09 07:18:30
the WebP actually ends up smaller than both with current encoders
2023-05-09 07:20:29
original PNG = 66.3 KB optimized PNG = 45.7 KB lossless JXL = 42.2 KB lossless WebP = 36.8 KB
2023-05-09 07:20:47
all of which are in the file I posted above
yoochan
2023-05-09 07:28:55
Do you remember the options you used for jxl?
afed
2023-05-09 07:29:30
username
yoochan Do you remember the options you used for jxl?
2023-05-09 07:30:28
-q 100 -e 9
afed
2023-05-09 07:33:07
did you do this with effort 3‽ I haven't really explored how the options in cjxl affect things so this is interesting
afed
2023-05-09 07:34:01
nope, options in the filename
Fraetor
2023-05-09 07:34:29
E3 is different from e3
username
2023-05-09 07:34:40
oh I had no clue
Fraetor
2023-05-09 07:35:56
2023-05-09 07:36:19
(From cjxl --help -v -v)
derberg🛘
username the WebP actually ends up smaller than both with current encoders
2023-05-09 07:47:23
With which settings?
username -q 100 -e 9
2023-05-09 07:47:33
Ah, okay
2023-05-09 07:47:46
Then wait a moment~
username
derberg🛘 With which settings?
2023-05-09 07:51:39
I did the WebP like this ```cwebp -mt -lossless -noalpha -z 9 -metadata icc -progress -v```
derberg🛘
2023-05-09 07:51:43
I get way bigger results with the options I usually use, huh.
username
username I did the WebP like this ```cwebp -mt -lossless -noalpha -z 9 -metadata icc -progress -v```
2023-05-09 07:52:22
I probably don't need to add "-noalpha" but I do it just in case, otherwise I would just do "-alpha_filter best"
derberg🛘
2023-05-09 07:52:29
(When converting <https://i.redd.it/1h1vrc800uya1.png>)
2023-05-09 07:56:20
It's also interesting to see how long it takes for this one to convert
afed
2023-05-09 08:04:17
You can go down from 8530 to 8485 with those: `cjxl -q 100 -m 1 -e 9 --brotli_effort 11 -E 3 -I 100 -g 0 -j 1`
2023-05-09 08:04:51
(not all parameters are necessary, I just took what I usually use and modified it a bit)
2023-05-09 08:05:20
I guess -I 100 makes the difference
afed
2023-05-09 08:10:38
yeah, it's just a fast bruteforce, I think -e 10 can reduce the size even more, though it will take a significant amount of time, sadly we can't check which options have been applied
derberg🛘
2023-05-09 09:11:51
Yeah, -e 10 just finished a few seconds ago
2023-05-09 09:12:41
8053 `jxl --allow_expert_options -q 100 -m 1 -e 10 --brotli_effort 11 -E 3 -I 100 -g 3 -j 1`
2023-05-09 09:14:13
Maybe I should have only used -q 100 -e 10 if it really just tries every combination (edit: made no difference for the size)
2023-05-09 10:10:54
I wonder how small this could go if JXL art was used.
2023-05-09 10:12:36
Pattern is pretty interesting; colored a bit of it
yoochan
2023-05-09 10:16:15
IIRC a Sierpinski triangle already appeared as JXL art... Can't find it though
derberg🛘
derberg🛘 Pattern is pretty interesting; colored a bit of it
2023-05-10 09:10:26
https://www.reddit.com/r/AV1/comments/13cy9f2/comment/jjkpyqc/ Hm, copying the whole image and inverting 1/4 of it... I don't think that can be easily done with what is written here: https://jxl-art.surma.technology/wtf.html. Any idea?
2023-05-10 09:13:19
Ah, they just shared code: https://www.reddit.com/r/AV1/comments/13cy9f2/comment/jjn1s04/
The Bull's Eyed Womprat
2023-05-11 08:32:46
https://developer.android.com/guide/topics/media/hdr-image-format
sklwmp
2023-05-12 02:39:18
> **This document defines the behavior of a new file format that encodes a logarithmic range gain map image in a JPEG image file.** Legacy readers that don't support the new format read and display the conventional low dynamic range image from the image file. Readers that support the format combine the primary image with the gain map and render a high dynamic range image on compatible displays.
2023-05-12 02:39:28
they're *really* squeezing every inch out of JPEG
_wb_
2023-05-12 04:46:06
Isn't this just JPEG XT reinvented?
derberg🛘
2023-05-12 09:32:22
Inb4 Google adding JXL features over time to new backwards-compatible JPEG formats they come up with
gb82
The Bull's Eyed Womprat https://developer.android.com/guide/topics/media/hdr-image-format
2023-05-13 03:05:05
They'll do anything but implement JXL support, lmao
veluca
gb82 They'll do anything but implement JXL support, lmao
2023-05-13 09:30:37
to be as generous as possible with this decision, gain maps do have the advantage that you get to decide what the SDR image looks like
2023-05-13 09:30:45
I still don't like them, but there's that
DZgas Ж
2023-05-15 11:25:19
No fps no scale
2023-05-15 11:26:12
Maybe av1 aom speed 0 -- 20%
Oleksii Matiash
2023-05-17 08:24:15
Hi, I'm not sure this is the correct place to ask; I just don't know where else to go with it. I have a bunch of videos compressed with h.264, but in CAVLC mode. So I'm curious: is it possible to 'recompress' them without decoding to pixels and compressing again? Something like unpacking the CAVLC entropy coding and re-coding the existing data with CABAC? There are some threads around the internet asking about this, but with no solution.
Traneptora
2023-05-17 10:34:38
I believe it is technically possible but I don't know any tools to do so
Oleksii Matiash
2023-05-17 11:01:40
In one of those threads there was a suggestion to implement it as an ffmpeg bitstream filter, but unfortunately my knowledge of C and (more importantly) the h.264 spec is not enough to do it
DZgas Ж
DZgas Ж I never thought I would be able to make a personal discovery in the JPEG field in 2023. But I found that these MozJpeg parameters: ```cjpeg.exe -baseline -notrellis -sample 1x1 -quality 90 -tune-psnr -quant-table 1``` give surprisingly good quality. I didn't know JPEG could work this well at all. For the last 7 years I have been using JPEG XR q95 to archive my photos, and I was about to switch to JPEG XL. But now I'll have to reconsider everything, and also make a comparison with WebP and all the other codecs
2023-05-20 09:03:09
original png my jpeg yuv444 mozjpeg yuv444 telegram yuv420
2023-05-20 09:04:04
even at quality 70 it looks amazing when comparing images with the same file size
2023-05-20 09:05:03
the only problem is gradients; it does them poorly, so it is not suitable for quality lower than 85.
2023-05-20 09:16:15
I also want to say that JPEG XL turned out to be unusable for some of my tasks, because the amount of data that needs to be downloaded to display the image, even with multi-layered progressive, is too much for it *to be felt visually*. In some cases, I preferred to use JPEG because it loads line by line and shows up instantly... I hope someday JPEG XL will make it possible to progressively decode 256x256 blocks line by line and show them immediately...
2023-05-20 09:17:46
For compression at -d 6 or more, you need to load 50% of the entire image to see the first layer of progressive decoding.
yoochan
DZgas Ж For compression at -d 6 or more, you need to load 50% of the entire image to see the first layer of progressive decoding.
2023-05-21 10:04:37
Really?! I expected less indeed. Can't some encoding options change that?
jonnyawsom3
2023-05-21 10:22:35
The issue is that when the overall image is that small, suddenly the '20KB' preview of a 1MB image is now half of the entire file
DZgas Ж
yoochan Really?! I expected less indeed. Can't some encoding options change that?
2023-05-21 11:30:05
Yes, JPEG XL itself is made to be a good codec for large and medium-compression images, but if you compress very heavily, the large size of the internal structures starts to matter: there are all sorts of block distribution tables and a lot of bookkeeping information, and it turns out that on heavily compressed images this information can be half of the entire file
2023-05-21 11:30:28
But I think any of the developers can say this more accurately ☝️
_wb_
2023-05-21 12:27:16
the lower the quality, the larger the proportion of the file that is LF, indeed
2023-05-21 12:29:45
with an LF frame, the LF image itself can also be done progressively... but I don't think libjxl will currently show those previews (it will probably wait until the entire LF is available)
2023-05-21 12:31:57
and yes, libjxl shows 256x256 groups only when they're fully available — probably that could be changed
DZgas Ж
2023-05-21 01:09:18
I saw how 256x256 groups were shown as the image was progressively loaded. It would be nice to have the same mechanism for standard images without progressive decoding
_wb_ the lower the quality, the larger the proportion of the file that is LF, indeed
2023-05-21 01:12:00
But it still seems like a problem. Simply put: if 256x256 is the largest and most independent block, then why not structure a JPEG XL file so that all the data for the first 256x256 block is loaded first, then for the second 256x256 block, and so on? But I have a feeling such a permutation would completely break format compatibility
_wb_
2023-05-21 01:15:04
The LF is encoded in 256x256 groups too, but since that data is 1:8 it corresponds to 2048x2048
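The geometry described above is easy to check with back-of-envelope arithmetic (only the 256x256 group size and the 1:8 LF scale come from the message; the rest is derived):

```python
HF_GROUP = 256   # HF groups: 256x256 pixels at full (1:1) resolution
LF_SCALE = 8     # LF data is stored downsampled 1:8

# One 256x256 LF group therefore covers a 2048x2048-pixel region,
# i.e. the same area as an 8x8 grid (64) of HF groups.
lf_region = HF_GROUP * LF_SCALE
hf_groups_per_lf_group = (lf_region // HF_GROUP) ** 2

print(lf_region, hf_groups_per_lf_group)  # 2048 64
assert lf_region == 2048
assert hf_groups_per_lf_group == 64
```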
DZgas Ж
2023-05-21 01:15:09
In fact, I don't know the structure of JPEG XL, but since I work with it a lot, I can guess at it. It would be nice to have some kind of diagram that completely disassembles the entire structure of a single JPEG XL file. I know there are people here who write decoders; maybe I should ask for help
_wb_
2023-05-21 01:16:13
You can permute the groups to do a full 2048x2048 region before anything else, mixing LF and HF. But the current libjxl encoder will always first send all LF, and only then start sending HF groups
DZgas Ж
_wb_ You can permute the groups to do a full 2048x2048 region before anything else, mixing LF and HF. But the current libjxl encoder will always first send all LF, and only then start sending HF groups
2023-05-21 01:18:36
This is really a problem for progressive loading of large but highly compressed images, for transmission over a low-speed network.
_wb_
2023-05-21 01:22:43
For progressive loading (as opposed to sequential), it's better to do an LF frame so you can get a 1:128 preview fast, then 1:64, then 1:32, etc
2023-05-21 01:24:04
But then we probably need to make libjxl actually render those stages of the LF frame. Currently it will wait until the whole LF frame is available
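The preview ladder sketched above halves the downsampling factor at each stage until it reaches the 1:8 LF. For a hypothetical 8192x8192 image (the size is made up for illustration) the stages would surface previews like this:

```python
width = height = 8192  # hypothetical image dimensions

# Each progressive pass of an LF frame halves the downsampling
# factor, ending at the 1:8 LF resolution.
for factor in (128, 64, 32, 16, 8):
    print(f"1:{factor} preview -> {width // factor}x{height // factor}")
# 1:128 gives a 64x64 thumbnail; the final 1:8 pass is 1024x1024.
```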
DZgas Ж
2023-05-21 01:24:20
it would be nice if libjxl could display them
2023-05-21 01:24:24
but yea
2023-05-21 01:25:45
Even despite all this, the amount of structural data is too large when the image is large but has little real data
DZgas Ж This is really a problem for progressive loading of large but highly compressed images, for transmission over a low-speed network.
2023-05-21 01:27:49
I wanted to say that it would be better to have a different data structure that solves this narrowly focused task: a JPEG XL in which each 256x256 block is a completely independent image with its own structure that can be loaded separately and displayed immediately
_wb_
2023-05-21 01:29:30
For lossless it can be like that
DZgas Ж
_wb_ For lossless it can be like that
2023-05-21 01:30:07
only lossless or the modular mode?
2023-05-21 01:30:47
ah the -g
_wb_
2023-05-21 01:30:59
For lossy, there are the LF groups, which contain data at 1:8 resolution (the LF itself plus the HF metadata such as block type selection and adaptive quantization), and then the HF groups at 1:1 resolution
2023-05-21 01:31:33
Modular mode uses only HF groups, unless Squeeze is used
DZgas Ж
2023-05-22 11:25:43
If you disable all the WebP filters and the image turns out to be of very low quality, it does not greatly surpass plain JPEG. It is the filters, as well as the new experimental option -af, that greatly improve the situation. I also wanted to mention the decoding speed gap that exists between webp/jpeg and avif/jxl; it is a very strong difference in decoding. In tests on 8000x8000 images it is noticeable right before your eyes: WebP really does decode almost instantly, while JPEG XL takes almost half a minute, and support for multithreaded decoding only makes this problem slightly less noticeable
diskorduser
DZgas Ж If you disable all the WebP filters and the image turns out to be of very low quality, it does not greatly surpass plain JPEG. It is the filters, as well as the new experimental option -af, that greatly improve the situation. I also wanted to mention the decoding speed gap that exists between webp/jpeg and avif/jxl; it is a very strong difference in decoding. In tests on 8000x8000 images it is noticeable right before your eyes: WebP really does decode almost instantly, while JPEG XL takes almost half a minute, and support for multithreaded decoding only makes this problem slightly less noticeable
2023-05-22 11:28:11
Are you testing with a non-AVX CPU?
DZgas Ж
diskorduser Are you testing with a non-AVX CPU?
2023-05-22 11:29:12
I don't think there can be any difference here, since the test is happening on the same architecture in all cases
diskorduser Are you testing with a non-AVX CPU?
2023-05-22 11:31:20
I see what you want to tell me. But before you continue, I have to remind you that working fast on AVX is not an advantage, while working slowly on SSE is a disadvantage.
2023-05-22 11:33:35
Because we live in a world where AVX is not a mandatory standard for anything at all, anywhere
2023-05-22 11:36:51
If WebP could do yuv444 and dimensions larger than 16k x 16k, and supported anything other than PSNR (or at least had other encoders that would do much better compression, like mozjpeg for jpeg), then I would consider it as a replacement for JPEG XL. But webp is just some shit.
username
2023-05-22 11:57:16
<@226977230121598977> I haven't tested this myself, but do you know what the trellis quantization is like in WebP and how it compares to MozJPEG? "-m 5" enables partial trellis while "-m 6" enables full trellis in cwebp
DZgas Ж
username <@226977230121598977> I haven't tested this myself, but do you know what the trellis quantization is like in WebP and how it compares to MozJPEG? "-m 5" enables partial trellis while "-m 6" enables full trellis in cwebp
2023-05-22 12:24:21
For strong compression, WebP prefers to use 4x4 blocks instead of quantization... On closer inspection, it seems to destroy more information than JPEG, but if the adaptive filter -af is activated in WebP, then viewed at a normal 1:1 distance, webp seems to contain more detail. I ran WebP quality tests, and in fact I only saw that the highest setting, 6, gives the best quality, nothing more. To be honest, I don't even know which sets of algorithms are used for each quality parameter. But I can say that in my last discovery I found that mozjpeg's trellis does much worse on any parameter https://discord.com/channels/794206087879852103/805176455658733570/1104117048499519548 . Meanwhile, I think... I would like someone to build guetzli with the restrictions turned off so that I could make a comparison between guetzli and WebP at qualities around 10-40. I have a feeling that WebP is not so good.
2023-05-22 12:28:30
It seems to me that in the accuracy of transmitting some information, "mozjpeg without trellis" works even better than guetzli, especially on small pixel images. But I can't deny the fact that guetzli is the JPEG solution that creates the best gradients of all JPEG encoders, and also preserves colors. That can be a very weighty argument when compressing
2023-05-22 12:31:26
But WebP, for the obvious reason of using PSNR, has just disgusting and unnecessary color artifacts with strong compression. It creates colors where they shouldn't be; the artifacts look exactly the same as on a classic JPEG with strong compression. And only the -af filter smooths it out enough that it isn't so noticeable.
_wb_
2023-05-26 08:14:39
https://web.dev/avif-updates-2023/
2023-05-26 08:15:03
It's getting silly how they keep spreading deceptive information
2023-05-26 08:15:27
> Compared to other modern still image formats, AVIF produces smaller files with similar visual quality (see the following graph, lower is better) but is also faster to encode.
Traneptora
2023-05-26 09:23:32
they're citing the chromium team's claim that avif is faster to encode
2023-05-26 09:23:48
which is propagating pretty weirdly
jonnyawsom3
2023-05-27 11:56:43
I might as well ask here in case anyone has some advice. Yesterday I spent about 5 hours messing with HEVC to save some storage for archiving video with my 1070 Ti. I know 10-bit is meant to increase the compression since it reduces banding and dithering, which it did... But when I tested OBS set to record in it, it seemed to add far more ringing and compression artefacts than just the base 8-bit. I ended up setting both to 8-bit in case the ringing happens on the ffmpeg hardware encode too in certain cases, but it'd be nice to figure out why, or whether it's just a common issue no one has clocked yet
Oleksii Matiash
I might as well ask here in case anyone has some advice. Yesterday I spent about 5 hours messing with HEVC to save some storage for archiving video with my 1070 Ti. I know 10-bit is meant to increase the compression since it reduces banding and dithering, which it did... But when I tested OBS set to record in it, it seemed to add far more ringing and compression artefacts than just the base 8-bit. I ended up setting both to 8-bit in case the ringing happens on the ffmpeg hardware encode too in certain cases, but it'd be nice to figure out why, or whether it's just a common issue no one has clocked yet
2023-05-27 03:55:59
I also ended up with 8 bits rather than 10 because at least for synthetic videos (i.e. game recordings) it produces better looking results with the same bitrate. My routine is to record using lossless 4:4:4 h.265, and then convert it to h.264 using ffmpeg with `-c:v libx264 -preset:v veryslow -profile:v high444 -crf 18`
jonnyawsom3
2023-05-27 04:04:23
I'm mostly just aiming for low file size because of my single SSD and my apparent curse with any HDD I buy; under 200GB left, so it's entered its 'conservative' mode that lowers speeds
Demez
2023-05-27 05:11:47
personally I just encode x265 with my CPU to get the best quality, but the videos I usually encode are shorter too
spider-mario
I might as well ask here in case anyone has some advice. Yesterday I spent about 5 hours messing with HEVC to save some storage for archiving video with my 1070 Ti. I know 10-bit is meant to increase the compression since it reduces banding and dithering, which it did... But when I tested OBS set to record in it, it seemed to add far more ringing and compression artefacts than just the base 8-bit. I ended up setting both to 8-bit in case the ringing happens on the ffmpeg hardware encode too in certain cases, but it'd be nice to figure out why, or whether it's just a common issue no one has clocked yet
2023-05-27 08:19:40
do I understand correctly that you are first recording (to 8 bits because OBS produces worse quality otherwise) and then reencoding that? I think I have seen people argue that 10-bit encoding can be advantageous even for 8-bit original material so have you tried 8-bit recording -> 10-bit encoding?
jonnyawsom3
2023-05-27 11:44:39
I started with setting up an ffmpeg command to do hardware re-encoding from 8-bit to HEVC 10-bit, then thought I'd try the same in OBS directly since it seemed to work well enough, but recording directly to 10-bit resulted in far worse quality than the re-encode. Bear in mind, I was still set to Rec 709 so it *should* have been SDR encoded by main10 from what I understand
yoochan
_wb_ https://web.dev/avif-updates-2023/
2023-05-30 07:47:29
it's so enraging !!!
_wb_
2023-05-30 02:35:44
This comparison is new — is this lossless GIF recompression with jxl versus lossy recompression with AVIF?
2023-05-30 02:40:02
I mean, there's no doubt that AV1 is better for animation than JPEG XL, since it's a video codec after all. But why do they prove that point by comparing lossless to lossy?
spider-mario
2023-05-30 02:41:17
plus, it’s so weird, for a lossy codec, to just say “look, we can make small files”
2023-05-30 02:41:24
well, okay? how do they look?
jonnyawsom3
2023-05-30 02:45:15
Try it on some pixel art and it'd probably still be a lot closer with lossless
_wb_
2023-05-30 02:50:22
https://www.blue-dot.io/avif-speed-quality-benchmark/ is interesting data
2023-05-30 02:52:26
though BD rates are deceiving; aggregating gains over a large range of qualities can be extremely misleading, especially if a large part of that range is useless
2023-05-30 02:59:39
as expected, the hardware (or rather, hw-accelerated) compression results are worse and faster than the software results
2023-05-30 03:00:25
a bit strange that they only compare to default speed software though
2023-05-30 03:03:08
it's "7-23" times as fast as libaom s6 with 8 threads
2023-05-30 03:03:51
at the cost of compressing 4% worse than that or so
2023-05-30 03:04:28
(according to bad metrics and aggregated over a probably mostly meaningless range)
2023-05-30 03:05:30
the thing is, libaom s9 is also about an order of magnitude faster than libaom s6
2023-05-30 03:06:38
s6 with 8 threads is about 4% worse than s6 with 1 thread
2023-05-30 03:08:13
also they're using tv range yuv420 for some reason, which basically has the effect of using 7.5 bit instead of 8 bit, i.e. the qualities will all be a bit lower and the cap of maximum achievable quality will be a bit lower too
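The effective-bit-depth claim can be sanity-checked with a quick calculation: tv (limited) range uses luma codes 16-235 and chroma codes 16-240 out of 0-255, so the usable level counts shrink accordingly. This is just a back-of-the-envelope sketch; the exact results land near the ~7.5-bit figure mentioned above:

```python
import math

FULL_LEVELS = 256                # 8-bit full range: codes 0..255
TV_LUMA_LEVELS = 235 - 16 + 1    # tv-range luma: codes 16..235 -> 220 levels
TV_CHROMA_LEVELS = 240 - 16 + 1  # tv-range chroma: codes 16..240 -> 225 levels

# Effective bit depth = log2(number of usable code levels)
full_bits = math.log2(FULL_LEVELS)         # 8.0
luma_bits = math.log2(TV_LUMA_LEVELS)      # ~7.78 bits instead of 8
chroma_bits = math.log2(TV_CHROMA_LEVELS)  # ~7.81 bits
```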
2023-05-30 03:13:09
the thing with lossy hardware compression is this: if it can do very high quality (like camera jpeg files), then the suboptimal compression of hw (as opposed to sw) doesn't matter that much because for the actual delivery on the web, you will probably want to reduce resolution and quality anyway and use the best software encoder you can find
2023-05-30 03:13:27
but hardware compression that cannot reach very high quality, what's the use case for that?
2023-05-30 03:16:22
sacrificing ~10% compression (compared to, say, aom s6 1-thread full range 444) for some speedup that saves you some CPU costs but requires you to buy special hardware (which will cost something too, of course) is questionable imo
2023-05-30 03:19:52
what is the use case where you don't care about 10% compression and you are also fine with relatively low quality (capped by tv range yuv420)?
2023-05-30 03:21:48
also they probably didn't do the speed measurement of the sw encoders in a very fair way. They used a 32-core machine and then either encoded 24 images simultaneously with -j 4, or 32 images simultaneously with -j 8
2023-05-30 03:22:56
that sounds to me like putting way too much load on that poor machine, which will result in worse timings than if you keep the load reasonable and e.g. have only one job at a time per core
BlueSwordM
_wb_ also they're using tv range yuv420 for some reason, which basically has the effect of using 7.5 bit instead of 8 bit, i.e. the qualities will all be a bit lower and the cap of maximum achievable quality will be a bit lower too
2023-05-30 03:42:50
I mean, it is obvious why: speed. It's faster than 4:4:4 simply because there's less data to process, and more optimizations are currently available for 4:2:0 than for 4:4:4.
spider-mario
2023-05-30 03:45:27
what about the tv range? (luma 16-235, chroma 16-240)
jonnyawsom3
2023-05-30 03:51:17
The limits on hardware acceleration always seemed odd to me. Surely it's not *that* much more costly to allocate a few more bits. I'm also surprised you can't accelerate the slowest operations and then fall back to the CPU for the rest, but I know most of the time it all runs simultaneously so that's not possible
_wb_
BlueSwordM I mean, it is obvious why: speed. Faster than 4:4:4 just because of less to process and more optimizations available in 4:2:0 than 4:4:4 currently.
2023-05-30 04:38:30
Yes, it's clear why for 4:2:0, that processes only half the data so of course it's faster. I wonder why tv range though — can they somehow exploit that to get cheaper transforms or something?
2023-05-30 04:40:04
Generally for stupid metrics like psnr and ssim that don't see banding anyway, I would assume that lowering the effective bitdepth doesn't really hurt that much, while the range reduction does reduce the amount of entropy in the signal so it should help to improve compression...
BlueSwordM
spider-mario what about the tv range? (luma 16-235, chroma 16-240)
2023-05-30 05:10:08
Well, for TV range, that's because officially speaking, full range BT709 YCbCr isn't a thing 😛
novomesk
2023-05-30 05:26:33
https://www.nokia.com/blog/high-five-to-10-years-of-heif/
_wb_
BlueSwordM Well, for TV range, that's because officially speaking, full range BT709 YCbCr isn't a thing 😛
2023-05-30 05:27:43
Officially according to what authority?
BlueSwordM
2023-05-30 05:29:35
Let me look in my docs to see where I found this statement. It might take a while though 😅
2023-05-30 05:34:50
Hmm, I can't seem to find the original statement that I based my reply on. I guess I was wrong, or just can't find the original statement.
Reddit • YAGPDB
2023-05-30 05:38:33
_wb_
2023-05-30 05:38:47
Just curious why tv range is so ubiquitous still, I don't see any advantage to it
2023-05-30 05:39:32
Of course all else being equal, files compressed with tv range will be smaller than with full range
2023-05-30 05:39:39
Maybe it's that?
Tirr
2023-05-30 05:41:08
`cjxl -d 0 input.gif output.jxl` and `ffmpeg -i input.gif -crf 0 -b:v 0 output.avif` using <https://commons.wikimedia.org/wiki/File:(-)-menthol-3D-qutemol.anim.gif> as input
w
2023-05-30 05:41:13
is tv range so that there's a range for wtw (whiter-than-white) and btb (blacker-than-black)?
Tirr
2023-05-30 05:41:36
is libaom in ffmpeg performing badly, or did I get something wrong? ffmpeg took 30 seconds and gave me that
2023-05-30 05:45:48
it's even bigger than the original gif
MSLP
_wb_ Just curious why tv range is so ubiquitous still, I don't see any advantage to it
2023-05-30 05:56:23
Citing from this site: <https://www.tvtechnology.com/miscellaneous/composite-digital-video> ``` The sync tip is assigned the value 16 decimal or 010 hexadecimal. The highest signal level, corresponding to yellow and cyan, is assigned the value of 972 decimal or 3CC hexadecimal. The standard provides for a small amount of bottom headroom (some call it foot-room), levels four to 16 decimal or 004 to 010 hexadecimal, and top headroom, levels 972 to 1019 decimal or 3CC to 3FB hexadecimal. The total headroom is on the order of 1dB and allows for mis-adjusted or drifting analog input signal levels. This reduces the S/QRMS (signal-to-RMS quantizing error) ratio by the same amount. The theoretical S/QRMS of a 4fSC product featuring analog in/out interfaces is 68.10dB for a 10-bit system and 56.06dB for an eight-bit system. This is considerably higher than any composite analog or component analog VTR. ``` So looks like it's a relic for analog equipment compatibility.
Tirr
Tirr `cjxl -d 0 input.gif output.jxl` and `ffmpeg -i input.gif -crf 0 -b:v 0 output.avif` using <https://commons.wikimedia.org/wiki/File:(-)-menthol-3D-qutemol.anim.gif> as input
2023-05-30 05:57:45
hmm for jxl `-e 3` creates smaller image than `-e 6` and `-e 7`
jonnyawsom3
2023-05-30 06:06:45
I've found `e 3` and `e 4` can make files smaller than anything below `e 8` in some cases
_wb_
2023-05-30 06:08:32
For lossless or lossy?
DZgas Ж
2023-05-30 07:28:54
hm
_wb_ This comparison is new — is this lossless GIF recompression with jxl versus lossy recompression with AVIF?
2023-05-30 07:35:38
it looks as expected. I don't see the point in these tests, but where is webp?
2023-05-30 07:37:49
as expected: it doesn't make sense to use lossless compression with AVIF. In addition, unlike WEBP, AVIF is a full AV1
2023-05-30 07:38:53
In fact, the other day I tried animated WEBP and it's terrible: every frame is a keyframe, it's not VP8 as I thought
_wb_
2023-05-30 07:39:07
Yeah it's not
2023-05-30 07:39:19
Avif originally was intra only too
DZgas Ж
2023-05-30 07:39:29
This also explains why WEBP animated decodes longer than AVIF animated
_wb_
2023-05-30 07:39:36
Then they decided to allow full av1 after all
DZgas Ж
_wb_ Avif originally was intra only too
2023-05-30 07:39:37
lol
_wb_
2023-05-30 07:40:26
If you have av1 for video anyway, it makes a lot of sense
DZgas Ж
_wb_ Avif originally was intra only too
2023-05-30 07:41:03
well, in my opinion this was a bad idea from the start. Why do animation in a video-based codec WITHOUT using video technology?
_wb_
2023-05-30 07:41:09
If you just want to support avif for still images, it might be a bit annoying though, since you have to implement the full av1
2023-05-30 07:42:15
Why not just allow video formats in an img tag so you don't have to wrap your av1 in an avif but you can just use mp4 or mkv or whatever you want
DZgas Ж
_wb_ If you just want to support avif for still images, it might be a bit annoying though, since you have to implement the full av1
2023-05-30 07:42:44
But isn't that why firefox didn't support animated AVIF?
_wb_
2023-05-30 07:43:04
I dunno, but it could be, yes
DZgas Ж
2023-05-30 07:44:24
Okay, actually, given the scope of the entire AV1 format and the number of companies behind it, it doesn't look like a problem right now, because, well, look: all browsers support full AV1, so there's no reason to cut down the AVIF codec. And I think that was taken into account at the development stage
_wb_ Why not just allow video formats in an img tag so you don't have to wrap your av1 in an avif but you can just use mp4 or mkv or whatever you want
2023-05-30 07:47:50
I agree that AV1 should not work in the img tag unless it is packaged as AVIF. After all, AV1 always comes in some container like mp4, webm or mkv, but AVIF is a cleaner AV1 in the HEIF format
2023-05-30 07:48:22
true image format
_wb_
2023-05-30 08:14:43
Meh, safari allows mp4 in img tags, and why not?
2023-05-30 08:15:04
Insisting on a specific container is kind of silly imo
gb82
_wb_ Meh, safari allows mp4 in img tags, and why not?
2023-05-30 08:15:11
That's nuts, lol
_wb_
2023-05-30 08:15:59
It's just the same as a video tag with looping, autoplay, no controls and muted
2023-05-30 08:16:26
But more convenient since you can put both "gif" and still images in the same tag
2023-05-30 08:17:12
Of course historically it was probably a mistake to allow non-still images in an img tag
2023-05-30 08:17:50
But that ship has sailed 30 years ago or so, thanks to gif89
2023-05-30 08:18:34
If animated gif is allowed, then why not mp4 or any other video container with any supported video codec payload in it
2023-05-30 08:19:43
Making a still image container to contain a video codec and then forcing to use it whenever you want to use the video codec for video, I don't understand the logic in that
gb82
_wb_ If animated gif is allowed, then why not mp4 or any other video container with any supported video codec payload in it
2023-05-30 08:19:57
That makes sense
_wb_
2023-05-30 08:20:36
One of the only reasons for all that overhead of heif is so you can do alpha and icc color profiles
2023-05-30 08:21:52
But it cannot even do it in a very good way afaik... I don't think it really has support for muxing the two bitstreams so you can do video-with-alpha in an interleaved way...
2023-05-30 08:22:19
After all, it is made for still images where you have one frame of color and one frame of alpha
2023-05-30 08:22:48
Video containers have solved the problem of muxing several long streams a long time ago
2023-05-30 08:23:40
Just use that, and take the bonus points for allowing anyone to just use any old video software to create "gifs"
jonnyawsom3
_wb_ For lossless or lossy?
2023-05-30 08:34:28
Usually lossy but there was an occasion recently where lossless was better at lower effort too. Unfortunately I don't remember the file though...
Eugene Vert
Usually lossy but there was an occasion recently where lossless was better at lower effort too. Unfortunately I don't remember the file though...
2023-05-30 08:56:20
IIRC, lower effort on VarDCT => fewer quality checks => the result may miss the distance target and be of lower quality (and smaller) than the image encoded with higher effort.
jonnyawsom3
2023-05-30 08:59:09
Yeah
DZgas Ж
_wb_ Making a still image container to contain a video codec and then forcing to use it whenever you want to use the video codec for video, I don't understand the logic in that
2023-05-30 10:03:48
waiting for controls panel for animated pictures.
2023-05-30 10:13:43
Today I did a lot of tests with very compressed WebP; it really is a unique format that lets you compress this much. I made a small algorithm: the picture is compressed, and if its size is more than 4096 bytes, it is compressed again with q lowered by 10, and so on until the size fits. Then I compress it once more using -pass 10 -af, which in practice, when working with very strong compression, turned out to be extremely inefficient, but still makes the image better by a couple of percent. Well, that's it. q0 is a real meme; I don't know what parameters are set inside the codec, but in some cases an image at q0 can be 3 times smaller than at q1, it's very hilarious. So I was thinking about how to break the compression loop, and then I realized that it doesn't matter at all, because no image ever reaches q0 while still being >4096 bytes. hah. ||But perhaps after reaching q10 it would be necessary to make a separate function to compress at q1, because q0 is really funny.||
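The step-down loop described in that message can be sketched like this. `encode` here stands in for a hypothetical WebP encoder callback (e.g. a cwebp wrapper), which is not part of the original message:

```python
def compress_to_budget(image, encode, budget=4096, q_start=100, q_step=10):
    """Lower the quality in steps of q_step until the encoded size fits budget.

    encode(image, q) is any callable returning the compressed bytes for
    quality q (0..100); this sketch does not depend on a specific encoder.
    Returns the final quality and the encoded bytes.
    """
    q = q_start
    data = encode(image, q)
    while len(data) > budget and q > 0:
        q = max(q - q_step, 0)   # q0 behaves oddly in libwebp, per the message
        data = encode(image, q)
    return q, data
```

Per the message, the q0 corner case never triggers in practice, since no image still exceeds 4096 bytes by the time q gets that low.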
Reddit • YAGPDB
2023-05-31 03:42:03
DZgas Ж
2023-06-05 07:58:34
<:VP8:973222137202610226>
Traneptora
2023-06-05 07:59:44
those are massive distances tho
DZgas Ж
Traneptora those are massive distances tho
2023-06-05 08:03:41
in such conditions, JXL does not demonstrate an advantage
Traneptora
2023-06-05 08:04:04
JXL is not designed for extremely low qualities
DZgas Ж
2023-06-05 08:05:12
WEBP, just like JXL, has very little container overhead of its own
spider-mario
2023-06-05 08:05:34
this PNG has EXIF data that `cjxl` is including
2023-06-05 08:05:42
ah, just 14 bytes
username
spider-mario this PNG has EXIF data that `cjxl` is including
2023-06-05 08:05:57
that's discord doing that
2023-06-05 08:06:03
it adds that to all PNGs
Traneptora
2023-06-05 08:06:16
it adds eXIf after IDAT iirc
2023-06-05 08:06:24
that's how you can tell
spider-mario
2023-06-05 08:06:47
what’s in it?
DZgas Ж
Traneptora JXL is not designed for extremely low qualities
2023-06-05 08:06:57
and yet, among the listed advantages over AVIF, I saw bragging about the container size, and look how it turns out here
username
spider-mario what’s in it?
2023-06-05 08:08:29
they add this to every PNG
2023-06-05 08:09:04
I assume it's unintended behavior of whatever they shove PNGs through to strip stuff from them
Traneptora
DZgas Ж and yet, in the texts of advantages over AVIF, I saw bragging about the size of the container, and what does it turn out here
2023-06-05 08:09:07
that matters for progressive previews more than for actual filesize
_wb_
2023-06-05 08:10:07
It mostly matters for small files like icons, where people now resort to css sprite sheets to avoid the header overhead
DZgas Ж
DZgas Ж <:VP8:973222137202610226>
2023-06-05 08:14:37
I would like to say that this is quite a tangible example where JPEG XL is worse than WebP. But again, I want to note that JPEG XL uses very few of the possible block sizes, and also that at the current stage of development I am not satisfied with the reliability/quality of the VarDCT block placement, so maybe it's all temporary, but that's how it is.
Cool Doggo
2023-06-05 08:23:42
imo they are both equally terrible quality, they just have better/worse quality in different parts of the image
Reddit • YAGPDB
2023-06-05 08:34:58
DZgas Ж
2023-06-05 09:35:01
<:monkaMega:809252622900789269>
2023-06-05 09:36:29
so powerful that they decided not to spend money on hardware decoding
Reddit • YAGPDB
2023-06-06 03:54:28
The Bull's Eyed Womprat
2023-06-06 06:13:40
Another thing of note in Apple news is Safari will now support HEIC/HEIF. Seems strange to me they would finally put it in their browser after 6 years. https://developer.apple.com/documentation/safari-release-notes/safari-17-release-notes
190n
2023-06-06 06:14:37
hilarious
_wb_
2023-06-06 06:19:47
It kind of makes sense, they want to embrace web apps, and then it could be useful to handle iPhone camera pictures directly...
2023-06-06 06:21:32
They probably don't expect this to be a codec available in the broader web platform, just something that would help web-based photo apps to support their internal camera format
username
_wb_ It kind of makes sense, they want to embrace web apps, and then it could be useful to handle iPhone camera pictures directly...
2023-06-06 06:21:49
yeah that seems to be one of the main reasons
2023-06-06 06:22:13
screenshot from here: https://webkit.org/blog/14205/news-from-wwdc23-webkit-features-in-safari-17-beta/
afed
2023-06-06 06:30:44
but considering that hevc has also started to be supported by chrome and other browsers when hw decoding is available, adding heic even in other browsers is theoretically not that far off from reality
_wb_
2023-06-06 06:54:02
The patent mess will make it take at least until 2033 before heic can conceivably get universal support
2023-06-06 06:54:24
Remember, chrome only has hevc support if your hardware happens to support it
2023-06-06 06:55:08
Meaning you paid for the license as part of the hardware cost.
afed
2023-06-06 06:59:20
yeah, but hw hevc decoding is pretty common now, and it's already ubiquitous on mobiles
_wb_
2023-06-06 07:13:12
Do you have statistics on what kind of percentage?
afed
2023-06-06 07:24:12
like most gpus/igpus/apus from ~2015-2016 and also mobiles (even the very cheap ones on mediatek); older ones probably have no support, but especially for mobiles that means they are officially unsupported in general (unless there are critical vulnerabilities), and some new formats will also be a problem for them
190n
2023-06-06 07:24:31
nvidia has it since 10 series
jonnyawsom3
2023-06-06 07:25:28
It's generally just sites/applications not tying into the hardware already available
lonjil
2023-06-06 07:40:47
I find it unlikely that Chrome will go thru the trouble to do HW decode of images just to support HEIC.
_wb_
2023-06-06 07:44:12
Also note that hw generally only supports 4:2:0
spider-mario
2023-06-06 08:14:19
so does Apple anyway, though
jonnyawsom3
2023-06-06 08:44:26
Hardware in, hardware out. Or rather, the other way around
Reddit • YAGPDB
2023-06-07 05:16:03
Fox Wizard
2023-06-07 08:44:38
Lmao that article
2023-06-07 08:44:53
SVT-AV1 preset 0, no wonder it's extremely slow <:KekDog:884736660376535040>
2023-06-07 08:49:09
And the VMAF results look... off. In my experience speeds that slow should definitely get better VMAF scores compared with x265
Reddit • YAGPDB
2023-06-08 08:02:58
Dzuk
2023-06-11 04:25:10
I've heard AVIF has a lossless setting but when I tried using it via a particular program (can't remember off the top of my head), the results were definitively not lossless. Is there an actual lossless AVIF or did someone just wrongly label it as such?
diskorduser
2023-06-11 04:33:48
Avif lossless is useless. Don't use it.
Dzuk
2023-06-11 04:35:52
i was thinking as much but also i wanted to try it lol
_wb_
2023-06-11 04:43:39
Have to also set the colorspace to RGB if you want lossless. But then it doesn't do any color decorrelation, so it kind of sucks. I think they want to add YCoCg but that would not work on existing deployments, so it's a bit late for that...
2023-06-11 04:44:36
But even with YCoCg it's probably not very good at lossless. Its entropy coding is designed for having lots of zero residuals, it's not great for high entropy stuff like lossless...
Dzuk
2023-06-11 06:58:15
okay, maybe i wont bother then
2023-06-11 06:58:26
it's clearly not a lossless format
2023-06-11 06:58:54
(one of the major reasons I'm so miffed at JXL getting snubbed in places... would be really nice if we had like a really solid widespread alternative to PNG)
spider-mario
2023-06-11 07:04:37
is “AVIF also does lossless” sometimes used as an argument for why JXL would be unnecessary?
yoochan
2023-06-11 08:14:13
yes
spider-mario
2023-06-11 09:31:58
so bizarre
Reddit • YAGPDB
2023-06-12 12:59:23
afed
2023-06-14 05:18:59
https://github.com/AOMediaCodec/libavif/pull/1432
_wb_
2023-06-14 05:37:07
> Warning: This feature is experimental and forbidden by the current AVIF specification.
2023-06-14 05:37:50
I don't like the sound of that
2023-06-14 05:38:25
The point of standardization is that you don't change the spec all the time
2023-06-14 05:39:53
I get it that they want to reduce that header overhead, but it's a bit late for that now...
Reddit • YAGPDB
2023-06-21 10:38:14
2023-06-21 12:14:35
The Bull's Eyed Womprat
_wb_ I get it that they want to reduce that header overhead, but it's a bit late for that now...
2023-06-21 02:25:56
maybe it could be used in AV2F
jonnyawsom3
2023-06-21 02:27:09
Ohh, they missed out on `AV1F`
Reddit • YAGPDB
2023-06-21 03:35:54
Jim
2023-06-21 09:39:38
AV1 isn't exactly a massive subreddit, wonder why they are going after them...
Quackdoc
2023-06-21 09:40:40
im pretty sure it's an automated bot going after any subreddit with `x amount of days` private at this point
Jim
2023-06-21 09:43:11
Did jpegxl get a message?
_wb_
2023-06-22 06:23:11
not afaik
Jim
2023-06-22 09:14:29
On AV1's Discord they announced they are moving from Reddit to Lemmy.
Reddit • YAGPDB
2023-06-23 02:58:14
Ulric
2023-06-25 12:17:18
Hello! I'm not sure if this is the right place, if there's a better one, let me know! I'm trying to implement a compression algorithm (first time doing this!:) ), but I'm struggling a bit. The algorithm is LZSS, but the server's implementation is very different from what I've found online! I've followed this guide to understand how it works and to get an idea of how an implementation should be done: https://go-compression.github.io/algorithms/lzss/ However, the reference server varies *very* much: https://gitlab.freedesktop.org/spice/spice-common/-/blob/master/common/lz_decompress_tmpl.c They reference this paper, but I don't know enough maths to understand it: https://dl.acm.org/doi/10.1145/322344.322346 And other client implementations do as well: https://github.com/Shells-com/spice/blob/master/lz.go https://github.com/eyeos/spice-web-client/blob/ddf5ea92a6dfb1434bc5d79f6a2f8c9bed53ae97/lib/images/lz.js#L25 https://gitlab.freedesktop.org/spice/spice-html5/-/blob/master/src/lz.js I'm really lost, could anyone point me in the right direction? Thanks! :)
yurume
2023-06-25 01:48:43
LZSS is a primary example of "asymmetric" compression algorithms, where you have one way (in principle) to decompress the data but you can compress in so many ways with different characteristics
2023-06-25 01:49:33
also LZSS itself is a concept, while all of them implement a concrete compression file format which *will* greatly differ in details (and they won't use just LZSS, but combine it with other algorithms)
Ulric
2023-06-25 01:49:50
Hmmmm
2023-06-25 01:50:06
I think I get it
yurume
2023-06-25 01:50:38
have you ever implemented a *decompressor*? I think you should do that first, because compression is more difficult than decompression in general
Ulric
2023-06-25 01:51:00
I just want to decompress, there doesn't need to be a compression part
yurume
2023-06-25 01:51:01
back in time I implemented DEFLATE in Python and that gave many (but not all) insights
2023-06-25 01:51:18
try DEFLATE then, I think it's most accessible
Ulric
2023-06-25 01:51:25
What's that?
yurume
2023-06-25 01:51:37
one used by zip, png and so on
2023-06-25 01:51:44
it's a standardized compression format
Ulric
2023-06-25 01:51:53
The compression algorithm has to be this one, as I'm writing a client for a protocol that uses this compression algo
yurume
2023-06-25 01:52:06
https://www.rfc-editor.org/rfc/rfc1951.html
2023-06-25 01:52:11
wait
2023-06-25 01:52:24
that means you should have a compression format description
Traneptora
2023-06-25 01:52:32
Can you link to an external library?
Ulric
2023-06-25 01:52:38
But I haven't started writing a decompressor, because I wanted to understand it correctly first
yurume
2023-06-25 01:52:41
otherwise you should figure out gory details yourself, which is very hard
Ulric
2023-06-25 01:52:47
I'm writing it in TS, so no
yurume
2023-06-25 01:53:13
can you reveal which protocol is in use? is it standardized?
Ulric
2023-06-25 01:53:26
https://www.spice-space.org/spice-protocol.html
2023-06-25 01:53:35
The protocol itself is standardised
Traneptora
Ulric But I haven't started writting a decompressor, because I wanted to understand it correctly before
2023-06-25 01:53:37
Often writing a decompressor and debugging it is the best way to understand it
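As a concrete starting point, here is a minimal generic LZSS decompressor in Python. The token layout (a flag byte followed by literals or 12-bit-offset / 4-bit-length back-references) is an assumed textbook layout, not SPICE's actual bitstream, which has its own format; only the control flow carries over:

```python
def lzss_decompress(data: bytes) -> bytes:
    """Decode a simple LZSS stream: each flag byte controls the next 8 tokens.

    Flag bit set   -> the token is one literal byte, copied to the output.
    Flag bit clear -> the token is a 2-byte back-reference:
                      a 12-bit offset (distance back from the end of the
                      output) and a 4-bit length with a minimum match of 3.
    """
    out = bytearray()
    i = 0
    while i < len(data):
        flags = data[i]
        i += 1
        for bit in range(8):
            if i >= len(data):
                break
            if flags & (1 << bit):          # literal
                out.append(data[i])
                i += 1
            else:                           # back-reference
                b1, b2 = data[i], data[i + 1]
                i += 2
                offset = (b1 << 4) | (b2 >> 4)
                length = (b2 & 0x0F) + 3
                start = len(out) - offset
                for k in range(length):     # byte-wise copy handles overlaps
                    out.append(out[start + k])
    return bytes(out)
```

Run-length-style repetition falls out naturally: a literal `a` followed by a back-reference with offset 1 and length 5 decodes to six `a`s, because the copy reads bytes it has just written.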
Ulric
2023-06-25 01:53:49
But the compression parts doesn't seem to be
yurume
2023-06-25 01:53:54
oh it's a custom compression format...
Ulric
2023-06-25 01:54:11
(There's also their custom "QUIC" compression algorithm, that doesn't seem to be documented anywhere?)
2023-06-25 01:55:06
(Check section 8 of the display channel, `Spice Image`
2023-06-25 01:55:28
There are some of the compression formats that need to be implemented
2023-06-25 01:56:25
And those are other client implementations, but there are 0 comments on the code, so I don't get how everything works: ``` https://gitlab.freedesktop.org/spice/spice-common/-/blob/master/common/lz_decompress_tmpl.c https://github.com/Shells-com/spice/blob/master/lz.go https://github.com/eyeos/spice-web-client/blob/ddf5ea92a6dfb1434bc5d79f6a2f8c9bed53ae97/lib/images/lz.js#L25 https://gitlab.freedesktop.org/spice/spice-html5/-/blob/master/src/lz.js ```
yurume
2023-06-25 01:57:06
https://gitlab.freedesktop.org/spice/spice-common/-/blob/master/common/quic.c seems the definitive code
2023-06-25 01:57:22
I agree it's really daunting for beginners
Ulric
2023-06-25 01:57:26
I guess I could link the `spice-common` library, but it would severely impact the application's performance, because passing whole arrays between JS and WASM is very slow
yurume
2023-06-25 01:57:43
but at least it's open source, so in principle you can mindlessly translate the C into JS
Ulric
yurume https://gitlab.freedesktop.org/spice/spice-common/-/blob/master/common/quic.c seems the definitive code
2023-06-25 01:58:30
That's for the QUIC algorithm, but there's also their LZSS implementation
yurume
2023-06-25 01:58:53
in my understanding you don't need an actual source code to reconstruct their LZSS specifics, while you need one for QUIC
Ulric
2023-06-25 01:59:13
Why not for LZSS?
yurume
2023-06-25 01:59:27
ah wait
2023-06-25 01:59:29
hmm
2023-06-25 01:59:43
yeah, you're correct, I thought LZPalette or similar gives much information about that
2023-06-25 01:59:48
it didn't
Ulric
2023-06-25 01:59:50
They seem to switch between image type and do different things
yurume
2023-06-25 02:00:09
anyway... yeah, you need to translate C to JS, there seems no other way if you don't want wasm
2023-06-25 02:00:39
actually I would want to ask why you want to avoid wasm
Ulric
Ulric I guess I could link the `spice-common` library, but it would impact severely the application performance, because passing the whole arrays between JS and WASM is very slow
2023-06-25 02:00:53
This ^
yurume
2023-06-25 02:01:00
did you measure?
Ulric
2023-06-25 02:01:05
Nope
yurume
2023-06-25 02:01:23
while there is indeed some overhead, it might or might not matter for your use case
Ulric
2023-06-25 02:01:39
But if there needs to be an allocation and a copy for each frame I receive, I'm guessing it's going to be way slower
yurume
2023-06-25 02:01:41
you can make a simple proof of concept to prove if it's indeed the case or not
2023-06-25 02:01:49
computer is really fast 🙂
Ulric
2023-06-25 02:02:06
Another question I had is whether it makes sense / is possible to implement those protocols using WebGL?
2023-06-25 02:02:15
So the computations are on the GPU part?
2023-06-25 02:02:25
(WebGL -> OpenGL on the browser)
yurume
Ulric But if there needs to be allocation and copying for each frame I recieve, I'm guessing it's going to be way slower
2023-06-25 02:02:43
and in fact, if you export the wasm memory you can directly access it as ArrayBuffer
2023-06-25 02:02:48
https://www.w3.org/TR/wasm-js-api-1/#memories
Ulric
2023-06-25 02:03:09
You can have shared memory??????
yurume
2023-06-25 02:03:11
while accessing *individual* elements from JS may be slow, you would want to copy it to another ArrayBuffer
2023-06-25 02:03:19
and that wouldn't incur much overhead
2023-06-25 02:04:08
note: I don't know if stock solutions like emscripten do support memory exports, but wasm proper does allow for that
Ulric
2023-06-25 02:05:40
Going back to the compression part, the only way to do it is copying it from C?
Ulric Another question I had is if it makes sense / can be done implement those protocols using WebGL?
2023-06-25 02:05:50
And also this
yurume
2023-06-25 02:15:32
yes, you can directly send ArrayBuffer into webgl texture (https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/texImage2D) so it's not that hard
2023-06-25 02:16:00
if you have other compositions going on, webgl would be desirable; otherwise a 2d canvas context might be enough
2023-06-25 02:16:45
compositions like color correction, palette indexing, or general affine transforms
Ulric
2023-06-25 02:17:12
Nice, thank you very much!
Reddit • YAGPDB
2023-06-26 12:54:29
VcSaJen
2023-06-26 01:09:00
It seems MS Edge now supports AVIF, but behind a flag, only on Win11 and only on Canary channel (it was on Canary 114, but it's not on Release 114).
Reddit • YAGPDB
2023-06-26 07:04:14
fab
2023-06-28 08:13:35
AVM has a good quality like e9d1 already with cpu 8
2023-06-28 08:13:48
Inter takes only 15 seconds
2023-06-28 08:14:09
Like JPEG XL for Google you have to admit there's no need
fab AVM has a good quality like e9d1 already with cpu 8
2023-06-28 08:15:12
Like with libjxl 0.7.0
2023-06-28 08:15:59
You may say cjxl quality improved but how many people can see the difference?
2023-06-28 08:22:00
The efficiency is too far from VVC
2023-06-28 08:22:04
Vvenc
2023-06-28 08:26:32
Current cjpegli at d 0.479 should match AVM
_wb_
2023-06-28 08:34:32
Does vvc even work well at such high qualities?
RockPolish
2023-06-28 08:39:58
not sure if this is the right place to ask, but does something similar to JPG -> JXL lossless transcoding exist for older video codecs?
jonnyawsom3
2023-06-28 08:47:10
Closest would probably be JpegXT, but that's just backwards compatible to old Jpeg
RockPolish
2023-06-28 08:55:07
I see, I was hoping more that something might exist for h264
_wb_
2023-06-28 08:57:06
Well mpeg1 is just motion jpeg so that would be convertible to jxl too. I think some medical stuff used that so it might be relevant for archives or something...
RockPolish
2023-06-28 09:00:36
would that be converting the keyframes to jxl and keeping the rest the same then?
lonjil
2023-06-28 09:15:34
I don't think MJPEG has any interframe anything
derberg🛘
2023-06-28 09:41:26
Extend cjxl to take MJPEG input 🤔
afed
2023-06-28 09:43:17
does VVC have good lossless compression? or is it yuv lossless and a weird source for this example?
2023-06-28 09:43:23
<https://encode.su/threads/3397-JPEG-XL-vs-AVIF/page10>
Traneptora
2023-06-28 10:27:33
wait what? mpeg-1 is just motion jpeg? that doesn't sound right
jonnyawsom3
2023-06-28 11:04:14
Yeah, mpeg-1 was the first intra-coded one, mjpeg was before that
Traneptora
2023-06-28 11:08:45
do you mean inter-coded?
jonnyawsom3
2023-06-28 11:19:20
Yeah, sorry
BlueSwordM
_wb_ Does vvc even work well at such high qualities?
2023-06-29 12:35:00
No, not currently.