JPEG XL

other-codecs

BarryCap
2021-09-03 09:08:30
What I thought was that it was first developed to replace WebP, and that it was abandoned for JXL, which suited the use cases best. I made some tests and found that WP2 has a compression method quite similar to JXL, except it’s a bit heavier (sometimes even heavier than WebP). And support for WP2 is, as far as I’ve seen, almost non-existent (not in GIMP or IrfanView, not a single browser has the option, and it’s not even mentioned on Can I Use), as it’s a nearly discontinued recent format. > WebP 2 is the successor of the WebP image format, currently in development. It is not ready for general use, and the format is not finalized so changes to the library can break compatibility with images encoded with previous versions.
2021-09-03 09:08:38
—https://chromium.googlesource.com/codecs/libwebp2/
2021-09-03 09:11:14
They say it’s still in active development, but I can’t see it ever becoming a widely used format with JXL around.
Scope
2021-09-04 12:47:43
https://twitter.com/pandoras_foxo/status/1433856228011110404
_wb_
2021-09-04 01:32:59
I think this is why ecosystem support outside the browser is so important imo, and why image codecs specifically made "for the web" are a bad idea, even if it is easier for companies that control both ends of the web to push adoption on the web.
2021-09-04 01:34:04
Images have a life cycle that is not (always) just from server serving it to client viewing it.
improver
2021-09-04 01:37:17
people are allowed to download images nowadays? **shocked**
_wb_
2021-09-04 01:38:12
For video, it's kind of inevitable that different things are used for production than for end-user delivery. I think it's video people who have this concept of an image format that is "good enough for the web", as if the flow is always that an image is 'produced' once and then served a million times (basically the youtube/netflix/broadcasting model of distribution)
2021-09-04 01:39:10
For images we can now afford to use the same format for both authoring and delivery, and the line between those two is way more blurry than for video
improver
2021-09-04 01:40:47
are you saying that people are able to make their own images, and can encode them and even distribute them without paying someone to do that by watching ads?
2021-09-04 01:41:08
where is economy in that?
2021-09-04 01:42:38
i mean that probably isn't the case, but sometimes i wonder what goes on in the heads of the people who are pushing for avif so hard
_wb_
2021-09-04 01:43:28
Lol when I was trying to convince professional photographers that generation loss is a problem, some of them considered it a feature, not a bug, since it acts like a rudimentary DRM that prevents unlimited unauthorized sharing of their photos
improver
2021-09-04 01:46:34
gotta be careful about advertising liberty-enabling features
Scope
_wb_ https://github.com/godotengine/godot/issues/31568 does someone feel like opening an issue like that but for jxl?
2021-09-04 02:30:08
I think for now the lossless JPEG XL encoder still needs improvement in speed/efficiency for game content, and lossless WebP is still more universal and preferable for that purpose (unless more than 8 bits per channel are required). It might be better to start implementing earlier anyway, even if the current JPEG XL implementation might not show any noticeable advantage yet
2021-09-04 02:43:15
However: <https://github.com/godotengine/godot/pull/47835>
2021-09-04 02:43:16
https://user-images.githubusercontent.com/32321/120538887-dcff5400-c39b-11eb-8b68-8a65726413e9.png
2021-09-04 02:46:32
_wb_
Scope I think for now the lossless Jpeg XL encoder still needs improvement in speed/efficiency for game content and I think that lossless WebP is still more universal and preferable for that purpose (unless more than 8 bits per channel are required), although it might be better to start implementing earlier, but it might not have any noticeable advantages when comparing the current Jpeg XL implementation
2021-09-04 02:50:30
I think we need to come up with encoder heuristics for e4-6 that do something more than just using a fixed MA tree and RCT, and something less than the full MA tree learning algorithm (which is what it does now)
2021-09-04 02:51:38
That and making the tree learning algorithm more parallel/SIMD'd (for e7-9) should help a lot to get better speed/density trade-offs
veluca
2021-09-04 02:53:31
webp mode
2021-09-04 02:53:32
😛
2021-09-04 02:54:06
(I suspect we can make a "webp mode" that works up to e8 by default, and at e9 try both that and what we do today)
_wb_
2021-09-04 02:55:33
Yes, but also e.g. atm modular basically does global MA trees all the time (where group id is just a property it can use). Wouldn't it make sense to use local trees/histograms and possibly lose a bit in density but gain a lot in parallelizability (and possibly getting better / more locally adapted trees?)
veluca
2021-09-04 02:56:31
why not parallelize the tree learning?
2021-09-04 02:56:33
can be done...
Scope
2021-09-04 02:58:58
Btw, WebP2, as far as I understand, should have almost the same decoding speed (after the final optimization), but at the expense of slow encoding speed, because it basically keeps the same model as WebP1 with some parts changed to more advanced ones; maybe it would be worth doing something like that too? And for games the most important things are decoding speed (mostly in single-threaded mode, because there are a lot of images and they are loaded in parallel) and moderate memory consumption
_wb_
2021-09-04 03:01:21
In the case of (parametrized) fixed trees and no weighted predictor, I think we could make decode speed very fast
2021-09-04 03:03:38
Like -e 2 now, but without the unnecessary conversions from int to float and back to int that we're doing now
BlueSwordM
2021-09-04 03:27:58
unlord, unlord, this was certainly unexpected from you: https://old.reddit.com/r/AV1/comments/phhhhk/any_way_to_improve_the_quality_of_dark_areas_of/hbkg0ta/
Scope
2021-09-04 03:35:39
Quite expected, since this is not the first such statement
BlueSwordM
2021-09-04 03:36:11
I know, but that's not what unlord and I have been discussing before in a more private setting <:Thonk:805904896879493180>
w
2021-09-04 03:37:58
web images are one thing, but i am quite happy with how jxl performs as an offline store (lossless only)
fab
2021-09-04 03:38:38
i use for only lossy
w
2021-09-04 03:38:40
and i do get the expected average -35% over png and jpg
fab
2021-09-04 03:38:52
no i want 70% lower at least
2021-09-04 03:39:07
with rav1e i get 30% lower than mp4 on video
2021-09-04 03:39:31
for screenshot i want 80 kb from 420-1000 kb
2021-09-04 03:39:36
500-1000 kb
2021-09-04 03:40:18
for some images i want a similar file to the original even when specifying d 1
_wb_
2021-09-04 04:26:40
Nathan seems so bitter about jxl
2021-09-04 04:27:48
Does anyone know what he's doing nowadays (after mozilla did that big sacking thing)?
veluca
2021-09-04 04:30:43
tbf, if you use JPEG recompression on mozjpeg files... it is much closer to 7% than to 20%
fab
2021-09-04 04:37:25
waiting for -d 0.339 -I 0.574 -s 2 --patches=1 --epf=2 --use_new_heuristics 08092021
2021-09-04 04:37:33
to ruin some images
2021-09-04 04:37:34
with it
2021-09-04 04:37:53
and destroy completely picture quality
_wb_
veluca tbf, if you use JPEG recompression on mozjpeg files... it is much closer to 7% than to 20%
2021-09-04 04:38:32
Maybe we should run some benchmarks to see how it performs on mozjpeg, libjpeg-turbo, 420/444, various qualities.
fab
2021-09-04 04:38:44
hope the bpp choice will be good
BlueSwordM
_wb_ Nathan seems so bitter about jxl
2021-09-04 04:38:45
That's what is weird 🤔
fab
2021-09-04 04:38:46
for photos
2021-09-04 04:38:57
especially some photos same size
2021-09-04 04:38:59
some photo way less
2021-09-04 04:39:05
like 80 less
2021-09-04 04:39:09
i want miracles
BlueSwordM
BlueSwordM That's what is weird 🤔
2021-09-04 04:39:21
He seems to have a different public facing opinion about JXL and AVIF than what I've seen in "private"
veluca
2021-09-04 04:39:23
another thing is that what "internet quality" means is not quite clear - is it mozjpeg -q 85? -q 65? -q 95?
BlueSwordM
veluca another thing is that what "internet quality" means is not quite clear - is it mozjpeg -q 85? -q 65? -q 95?
2021-09-04 04:39:31
mozjpeg -q 10 <:kekw:808717074305122316>
veluca
2021-09-04 04:40:59
😛
2021-09-04 04:41:16
we're trying to figure that out, but "nontrivial" doesn't even begin to describe that
2021-09-04 04:42:01
I know some big websites use 65-75
2021-09-04 04:42:13
but other places use higher
2021-09-04 04:42:30
depending how you sample and weight, you can get medians around 80-85
fab
2021-09-04 04:42:36
i want that settings
veluca
2021-09-04 04:42:36
wikipedia uses 88 I believe?
fab
2021-09-04 04:42:37
https://discord.com/channels/794206087879852103/805176455658733570/883752627744169995
2021-09-04 04:43:01
don't care for more -d
_wb_
2021-09-04 04:43:03
Wordpress uses 82 by default iirc
veluca
2021-09-04 04:43:36
Jon can probably tell us what *they* use
2021-09-04 04:43:37
😛
_wb_
2021-09-04 04:43:49
We don't use fixed q most of the time
fab
2021-09-04 04:43:50
rav1e at speed 10 with 01092021 build from releases do all
2021-09-04 04:43:55
with notenoughav1encodes
2021-09-04 04:43:59
the quality is blurred
veluca
2021-09-04 04:44:00
yes I mean what's the distribution
fab
2021-09-04 04:44:05
but the bitrate is automatic
2021-09-04 04:44:17
it doesn't delete any detail
_wb_
2021-09-04 04:44:54
I don't have super great statistics on it, but the bulk of what we do is in the q65-q90 range
lithium
2021-09-04 04:45:35
mozjpeg has different quant table presets and trellis quantization; I think libjpeg q and mozjpeg q define quality differently.
_wb_
2021-09-04 04:45:48
Yes, that's correct
2021-09-04 04:46:38
It's not hugely different, but we do a mapping to convert libjpeg q to mozjpeg (or webp for that matter) q
veluca
2021-09-04 04:46:59
(can you share that, and how you obtained it?)
2021-09-04 04:47:10
also -- any such knowledge about photoshop? 😛
2021-09-04 04:47:23
(I mean by email, just to clarify :P)
_wb_
2021-09-04 04:47:27
Dunno if I can share that, it's not public info
2021-09-04 04:47:47
We basically just aligned using ssimulacra on some corpus, iirc
2021-09-04 04:47:57
And made a lookup table
2021-09-04 04:48:44
Photoshop: we don't encode with photoshop, only libjpeg-turbo and mozjpeg :)
Scope
BlueSwordM mozjpeg -q 10 <:kekw:808717074305122316>
2021-09-04 04:48:53
Judging by some people, yes, although not that quality; some people want to hit something like mozjpeg -q 10-30 file sizes when encoding in AVIF
veluca
2021-09-04 04:48:53
fair enough
2021-09-04 04:49:21
well, some images in AVIF at sizes of mozjpeg q30 are not *too* bad
_wb_
2021-09-04 04:50:12
Mozjpeg q30 size is something very different from mozjpeg q30 fidelity
lithium
2021-09-04 04:56:33
A JPEG quality test of ImageMagick, mozjpeg and Guetzli, compared with PSNR, SSIM and butteraugli; this comparison suggests that for quality above q90 there is no reason to choose mozjpeg. > japanese site > Comparing file size, and PSNR, SSIM and butteraugli against the input image, for JPEG images generated with ImageMagick, mozjpeg and Guetzli at varying quality settings > https://qiita.com/kazinoue/items/38b34f9d798400c0d129
2021-09-04 04:57:04
https://qiita-user-contents.imgix.net/https%3A%2F%2Fqiita-image-store.s3.amazonaws.com%2F0%2F149350%2Fa9c2c1e9-8fbe-eaca-3335-98b8642e5475.png?ixlib=rb-4.0.0&auto=format&gif-q=60&q=75&w=1400&fit=max&s=213cc0933504ceb3a536438902b4055f
2021-09-04 04:59:13
I get a similar result for mozjpeg from my drawing test.
_wb_
2021-09-04 05:00:48
We usually use mozjpeg when 420 is ok and for the lower qualities (q60-85), and just libjpeg-turbo (but of course with huffman optimization and using progressive) for the higher qualities (q85-95)
2021-09-04 05:00:56
Guetzli is too slow for us
lithium
2021-09-04 05:01:40
yes, agree, Guetzli is too slow...
_wb_
2021-09-04 05:01:53
At the higher qualities we didn't really see a big enough benefit for mozjpeg to justify its speed
veluca
2021-09-04 05:10:23
as in, "it isn't better", or as in, "it isn't better enough"?
_wb_
2021-09-04 05:15:01
Iirc the difference became smaller or nonexistent as q goes up
2021-09-04 05:15:21
But the speed difference remained the same
2021-09-04 05:16:29
(but that's not default mozjpeg vs default libjpeg-turbo, it's our specific mozjpeg settings vs our specific libjpeg-turbo settings, so ymmv)
2021-09-04 05:18:31
I think things like trellis quantization and overshoot-clamping-based deringing and whatever else mozjpeg has that libjpeg-turbo doesn't, just becomes less effective / important as you quantize less aggressively
lithium
2021-09-04 05:23:59
Sometimes mozjpeg needs a higher q to reach maxButteraugli 1.0 quality (libjpeg q95 444 vs mozjpeg q98 444)
_wb_
2021-09-04 05:26:00
Yes, mozjpeg uses different quant tables and does more sophisticated quantization
lithium
2021-09-04 05:32:06
mozjpeg by default tunes for PSNR-HVS; I think for high-quality quantization choosing libjpeg is the better option. (sjpeg optimizes for PSNR and also can't get a better maxButteraugli at high-quality quantization.)
_wb_
2021-09-04 05:34:24
Before we switched to the lookup table, we used the following to map "libjpeg q" to a mozjpeg q:
2021-09-04 05:35:11
`mozjpeg_q = libjpeg_q + 4 + (70-libjpeg_q)/8`
2021-09-04 05:36:29
That mapping probably only makes sense above q50 since we don't care much about lower than that
veluca
2021-09-04 05:56:16
aka 12.75 + 0.875 * libjpeg_q ?
_wb_
2021-09-04 06:05:10
I guess :)
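(For reference, a quick sanity check of that mapping as a toy Python sketch; this is just the formula quoted above, not Cloudinary's actual lookup table.)
```python
def libjpeg_to_mozjpeg_q(libjpeg_q: float) -> float:
    """The old formula quoted above: q + 4 + (70 - q)/8, i.e. 12.75 + 0.875*q."""
    return libjpeg_q + 4 + (70 - libjpeg_q) / 8

# Probably only meaningful above ~q50, per the discussion above.
for q in (50, 65, 75, 85, 95):
    print(q, round(libjpeg_to_mozjpeg_q(q), 2))
# 50 -> 56.5, 65 -> 69.62, 75 -> 78.38, 85 -> 87.12, 95 -> 95.88
```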
Traneptora
2021-09-05 05:32:22
rav1e doesn't have good ratio
2021-09-05 05:32:30
compared to libaom-av1
2021-09-05 05:32:31
fwiw
2021-09-05 05:32:40
it's generally not better, it's just, written in rust
2021-09-05 05:32:45
so rustaceans will use it for no other reason
Scope
2021-09-05 05:37:54
Rav1e is also a real community encoder, while the others are mostly developed by companies; it usually has better quality at high bitrates than aom, but for now it is slower and has fewer features
_wb_
2021-09-05 05:49:50
Isn't rav1e only not "developed by a company" because Mozilla fired 25% of their staff including all their people who were working on rav1e?
2021-09-05 05:53:14
Also, I think it was originally intended to be faster (but more limited in the coding tools it uses) than libaom. Now it seems to be aiming to be slower-but-better. It's a bit unclear to me what the overall goal of the rav1e project is, tbh.
Scope
2021-09-05 05:54:36
As far as I know now Xiph exists and is funded separately, also mostly now Rav1e developers under VideoLAN and Vimeo funding
_wb_
2021-09-05 06:01:49
Yes, xiph is separate, I guess it's good that they put it under that umbrella and not the mozilla one
Traneptora
Scope Rav1e is also a real community encoder, while others are mostly developed by companies, and it usually has better quality at high bitrates than aom, but for now it is slower and has less features
2021-09-05 08:38:20
this is actually not true
2021-09-05 08:38:37
libaom-av1 outperforms rav1e at higher bitrates
2021-09-05 08:39:14
the biggest weakness of libaom at this point for an end user is that it multithreads fairly poorly
2021-09-05 08:39:17
although av1an can fix this
2021-09-05 08:39:45
just encode each GOP in its own single-threaded process
2021-09-05 08:41:54
as for "developed by a company" it depends on what you mean
2021-09-05 08:42:07
funded by a company and all the decisions made by a company providing funding are not the same thing
Scope
2021-09-05 08:42:23
Depends on the content, but with the latest builds and at visually lossless quality I mostly like the result of Rav1e more; the same applies to AVIF, although before that I did not consider Rav1e a real competitor
Traneptora
2021-09-05 08:42:40
define "visually lossless quality"
2021-09-05 08:43:03
you can't like the result of rav1e more if it's visually lossless, you should like it exactly the same
2021-09-05 08:43:07
and just look at the filesize/speed
2021-09-05 08:43:27
if you're comparing the actual files visually then it is not visually lossless
_wb_
2021-09-05 08:43:38
We don't care much about multithreading in production, tbh. For video, we parallelize in the way you describe (split in segments, encode them separately), which is needed anyway if you want to do streaming. For still images, we do everything in single-threaded processes since that's the most effective if you need to encode thousands of images per second anyway.
Traneptora
2021-09-05 08:43:56
yea, that's what I do
2021-09-05 08:44:10
I figure cjxl multithreads by default because the average joe wants that
2021-09-05 08:44:23
but anyone batch processing will just disable threading
_wb_
2021-09-05 08:44:42
For an end-user who just needs to encode/decode a single image, multithreading is nice to have
2021-09-05 08:45:43
For those who want to use it in more demanding use cases, they can figure out how to add `--num_threads=0` 😄
Traneptora
2021-09-05 08:45:51
exactly
Scope
2021-09-05 08:47:41
I take size into account (i.e. quality at the same size), but I don't take speed; Rav1e is still slower than Aom. Also, for discussions about video codecs and AV1 there is a quite active Discord community: <https://discord.gg/HSBxne3>
Traneptora
2021-09-05 08:49:15
also it really depends on what you define visually lossless to be
2021-09-05 08:49:29
I've seen people claim x264 on preset veryslow is visually lossless at CRF=18
2021-09-05 08:49:36
which it just isn't
_wb_
2021-09-05 08:49:59
It depends a lot on testing methodology
2021-09-05 08:51:19
AIC-2 still image flicker test (what they used to test JPEG XS, which targets video production workflows) is very strict, corresponds to cjxl -d 0.3 or so
Traneptora
2021-09-05 08:51:54
I mean what testing methodology is there beyond "a viewer can fairly easily tell the difference on most content"
2021-09-05 08:52:10
If your metric marks something as "visually lossless" when a viewer can fairly easily tell the difference on most content, then your metric is wrong.
_wb_
2021-09-05 08:52:28
BT.500 side-by-side eval is much more forgiving, you can get within 'indistinguishable from the original' probably at -d 2 or so in most cases
Scope
2021-09-05 08:53:01
Yes, but I just clarified what I mean by high bitrate: it's where the difference is quite difficult to see even when carefully comparing frames with each other (although for video, comparing frames is not always the best approach)
_wb_
2021-09-05 08:53:18
Video eval methodologies are even more forgiving, since you don't even compare frames, you first watch one clip and then watch another.
Traneptora
2021-09-05 08:53:34
the quality settings of rav1e btw don't necessarily map to those of libaom
_wb_
2021-09-05 08:53:55
Libaom current version also doesn't map to libaom older version
Traneptora
2021-09-05 08:54:37
and since constant quality will give you better ratios than ABR, to truly compare them you either have to do ABR anyway and compare visual quality at a specific bitrate, or you have to tweak the quality settings until you figure out what's roughly comparable on most content
2021-09-05 08:54:48
this second one is very hard cause there's a lot of variance
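(One practical middle ground, sketched here as an assumption rather than anything from an existing tool: keep both encoders in constant-quality mode, but bisect one encoder's quality knob until its output size matches the reference file, then compare the two visually at equal size. `encode_size_bytes` is a hypothetical callback that encodes once at a given cq-level and returns the file size.)
```python
def match_size(encode_size_bytes, target_bytes, lo=1, hi=63, tol=0.02):
    """Bisect an integer quality knob (AV1-style cq-level: higher value = smaller file)
    until the encoded size is within tol of target_bytes. Purely illustrative."""
    best_q, best_err = hi, float("inf")
    while lo <= hi:
        mid = (lo + hi) // 2
        size = encode_size_bytes(mid)          # hypothetical: encode once, return bytes
        err = abs(size - target_bytes) / target_bytes
        if err < best_err:
            best_q, best_err = mid, err
        if err <= tol:
            break
        if size > target_bytes:
            lo = mid + 1   # file too big: quantize harder
        else:
            hi = mid - 1   # file too small: spend more bits
    return best_q
```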
_wb_
2021-09-05 08:55:28
On a random test image, I got a 2 MB avif, then I updated my libaom/libavif to most recent version, and with the exact same command line it now gave me a 800 KB avif (which looked much worse too, of course)
Traneptora
2021-09-05 08:56:00
yea, and some viewers notice certain things more than others
2021-09-05 08:56:23
`cjxl -d 1 lenna.png lenna.jxl` to me is noticeably different but because I tend to look at grain
2021-09-05 08:56:29
it's something that catches my eye
2021-09-05 08:56:38
I'm guessing most viewers don't notice it as much
2021-09-05 08:57:19
like if you showed me lenna.jxl without context would I know? probably not. but side-by-side it's pretty apparent to me
_wb_
2021-09-05 08:57:38
Also -e 8 -d 1 might be a bit different visually from lower effort -d 1
Traneptora
2021-09-05 08:57:50
well I tried -e 9 and got a higher filesize
2021-09-05 08:57:51
so
2021-09-05 08:57:55
I'm not entirely sure why that happened
_wb_
2021-09-05 08:58:10
At e8+ we do iterations of Butteraugli
2021-09-05 08:58:28
So likely the e7 one is not actually reaching d1
Traneptora
2021-09-05 08:58:36
so you're saying at e9 the "d=1" metric is different-ish
2021-09-05 08:58:44
or rather, it's more correct
_wb_
2021-09-05 08:58:59
At e8+ you get more guarantees that it actually reaches the target
Traneptora
2021-09-05 08:59:00
but at e7 and lower, it's only approximately matching d=1?
2021-09-05 08:59:18
that makes sense, trading accuracy for speed
_wb_
2021-09-05 08:59:24
At faster settings it is more heuristics-based so it might be a bit over or under target
Traneptora
2021-09-05 08:59:36
I suppose in this case d=1 is really d<=1
_wb_
2021-09-05 08:59:51
Well BA is a maxnorm metric so
2021-09-05 09:00:15
It's the worst spot that counts, not the average spot
Traneptora
2021-09-05 09:00:44
I mean, if you run cjxl d=1 and you end up getting an image with d=0.5 from the original I'm guessing the encoder goes 'ok'
2021-09-05 09:01:06
or are your heuristics good enough that this is uncommon
_wb_
2021-09-05 09:01:11
Well at e8+ the iterations are not per image, they are per block
2021-09-05 09:01:21
Since we do adaptive quantization
2021-09-05 09:02:30
So it will adjust quantization (in both directions) until it's close enough to the target
2021-09-05 09:03:11
At e7 and faster, it selects adaptive quantization heuristically without actually verifying the result
2021-09-05 09:03:57
It works well enough in practice, but in general: the lower the encode effort, the more perceptual variation in the result
2021-09-05 09:04:46
To the point that at -e 3 it's basically just a quantizer setting you're specifying, not a perceptual target
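(A purely conceptual sketch of what that per-block iteration toward a Butteraugli max-norm target could look like; this is not libjxl's actual code, and `encode_block` / `butteraugli_maxnorm` are hypothetical helpers.)
```python
def tune_block_quant(block, target_distance, encode_block, butteraugli_maxnorm,
                     q_init=1.0, iters=5):
    """Conceptual only: nudge one block's quantization scale until the worst-case
    (max-norm) Butteraugli distance of its reconstruction is near the target."""
    q = q_init
    for _ in range(iters):
        recon = encode_block(block, q)             # hypothetical helper
        d = butteraugli_maxnorm(block, recon)      # hypothetical helper
        if abs(d - target_distance) < 0.05 * target_distance:
            break
        # too much distortion -> quantize less; too little -> quantize more
        q *= (target_distance / d) ** 0.5
    return q
```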
Traneptora
2021-09-05 09:06:39
huh
_wb_
2021-09-05 09:06:45
(so it becomes more like any other encoder when you dial down the effort, setting quantization, not quality)
Traneptora
2021-09-05 09:07:04
that sounds very different from the way x264 handles CRF
Scope
2021-09-05 09:07:21
Yes, I don't mean quality in the sense of the -q/-crf/-cq option, I mean visual quality at the best settings of both encoders at the same image or video size. To put it simply, Aom has very noticeable detail loss and blurring even at high bitrates (although this can be somewhat improved with some settings), while Rav1e works better at high bitrates; but the lower the bitrate, the more visually appealing Aom becomes, and at medium and low bitrates it almost always looks more pleasant than Rav1e
Traneptora
2021-09-05 09:07:38
best settings for quality is lossless
2021-09-05 09:07:43
you have to make some compromise
Scope
2021-09-05 09:08:46
I don't mean the bitrate or lossless settings, Aom by default has far from optimal settings and some things can be improved to the same bitrate
Traneptora
2021-09-05 09:09:01
you mean slowest/highest effort, ah
_wb_
Traneptora best settings for quality is lossless
2021-09-05 09:09:02
Well some people like <@416586441058025472> might disagree on that - some people like the compression artifacts and it does happen in subjective evals that distorted images get higher scores than the original
Traneptora
2021-09-05 09:09:24
liking artifacts is not higher quality
2021-09-05 09:09:36
quality in the visual sense is based on distance from the original
2021-09-05 09:09:57
a compression algorithm that sharpens the output after decoding might appear "better" cause people like sharp things
fab
_wb_ Well some people like <@416586441058025472> might disagree on that - some people like the compression artifacts and it does happen in subjective evals that distorted images get higher scores than the original
2021-09-05 09:10:00
i like the simplifying of the original image
Traneptora
2021-09-05 09:10:02
but it's not really better
2021-09-05 09:10:15
fab is also not a good metric
fab
2021-09-05 09:10:19
even if it retains some of the noise or compression artifacts
_wb_
2021-09-05 09:10:22
I think that's not the job of an image codec, but some people don't mind it if an encoder "denoises" an image
fab
2021-09-05 09:10:46
but an hdr image, shot better, is best, especially for a phone
Traneptora
2021-09-05 09:10:50
but that's just placing a higher subjective emphasis on the damage caused by certain types of artifacts
fab
2021-09-05 09:10:52
even if compresses more
_wb_
2021-09-05 09:11:07
I think an image codec should be neutral and transparent, image in = image out
Traneptora
2021-09-05 09:11:16
by design, yea
fab
2021-09-05 09:11:19
a simplified image compared to the original, or an image that looks the same as the original down to the bytes
2021-09-05 09:11:21
looks stupid
2021-09-05 09:11:25
is not the encoder's job
2021-09-05 09:11:35
at least it is the encoder's job, but there is a limit
2021-09-05 09:11:46
just not to the extreme
Traneptora
2021-09-05 09:11:47
some people don't care if it encoder denoises but that's ultimately just a different subjective weight to certain artifact types
fab
2021-09-05 09:12:00
but i like even at d 0.339 new heuristics
2021-09-05 09:12:10
i like to see the image simplified
_wb_
2021-09-05 09:12:25
Other tools can do denoising and sharpening and whatnot. If I want to have a noisy image, I want to be able to keep it that way.
Traneptora
2021-09-05 09:12:44
exactly
2021-09-05 09:12:57
that's why I say quality is not "how nice it looks" but distance to the original
Scope
Traneptora you mean slowest/highest effort, ah
2021-09-05 09:13:22
Not only the speed, but something like `--tune` from x26x (though unlike x26x, Aomenc does not even have optimal default settings in general, not just for certain content). Perhaps the Aom developers considered something like YouTube quality the default
Traneptora
2021-09-05 09:13:56
well x264 doesn't either. it has sane defaults but not optimal defaults for all content
2021-09-05 09:14:01
x265 is a piece of work and I won't use it
2021-09-05 09:14:17
it's not really like x264, it's just got the same CLI user interface and the same "name"
2021-09-05 09:14:24
but it's a different thing that works much more poorly
2021-09-05 09:15:30
choosing the default for youtube makes no sense cause they're going to set all those settings manually anyway
2021-09-05 09:17:27
The way x264 handles its constant quality setting is basically by making a (psychovisual) decision on how necessary the extra bits are at a specific quantization parameter setting. This is based on the qcomp value curve, which defaults to 0.6 (scale between 0 and 1). At qcomp=1, it considers them *always* necessary and it works like CQP (constant quantization); at qcomp=0 it considers them *never* necessary and it works like CBR (or very close). The qcomp determines how aggressive it is at throwing away data that it thinks is probably unnecessary for that target quality
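(Roughly, and stated as my reading of x264's rate control rather than a quote of its source: each frame's quantizer scale is tied to its blurred complexity raised to the power (1 - qcomp), which is why qcomp=1 behaves like constant quantizer and qcomp=0 like constant bitrate.)
```python
def frame_qscale(blurred_complexity: float, rate_factor: float, qcomp: float = 0.6) -> float:
    # qcomp = 1.0 -> complexity term vanishes -> constant quantizer (CQP-like)
    # qcomp = 0.0 -> qscale fully tracks complexity -> roughly constant bitrate
    return blurred_complexity ** (1.0 - qcomp) / rate_factor
```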
_wb_
Traneptora that's why I say quality is nto "how nice it looks" but distance to the original
2021-09-05 09:17:48
I know, and I agree. It's just a complication when doing subjective evaluation, and it's one of the reasons why "visually lossless" is sometimes (incorrectly imo) defined to occur at very low bitrates. On BT.500 MOS scales (1=worst, 5=best) we often see original images get 3.5 or 4 when compared side-by-side against themselves, and then if you look at confidence intervals around MOS scores of distorted images, you can claim images to be 'visually lossless' at 0.5 bpp, while in a fliptest anyone would see clear differences.
Traneptora
2021-09-05 09:18:51
yea, my personal take is that objective metrics are a tool to make sense of what is ultimately human subjectiveness
2021-09-05 09:19:11
since the true goal is human visual quality
2021-09-05 09:19:36
and if the metric is inconsistent with human interpretation, then IMO the metric is wrong
fab
2021-09-05 09:19:37
so simpler images isn't the ultimate goal
Traneptora
2021-09-05 09:19:41
no
fab
2021-09-05 09:19:45
the goal is entropy compression
Traneptora
2021-09-05 09:19:59
also no
_wb_
2021-09-05 09:21:07
I think Butteraugli is a good metric to estimate whether some artifact can be seen or not. It's not a very good metric to estimate which artifacted image will look better, if both of them are clearly artifacted
Traneptora
2021-09-05 09:21:28
well that's a much harder problem
2021-09-05 09:21:42
cause some people care more about different artifact types
_wb_
2021-09-05 09:21:57
Yes, it's more about image semantics than about the visual system itself
Scope
Traneptora well x264 doesn't either. it has sane defaults but not optimal defaults for all content
2021-09-05 09:22:06
Yes, but the default settings in x264 are still quite good if there is no way to adjust them for known content and you have to encode blindly, while for Aom there are some things that, when enabled, give better quality at about the same speed and size
Traneptora
2021-09-05 09:22:19
I call that "sane defaults"
_wb_
2021-09-05 09:22:36
People don't care if the grass is artifacted, but they do care if the face is artifacted.
2021-09-05 09:26:44
Also they tend to assume background blur is intentional/artistic, while foreground blur is more annoying. Everyone will hate blockiness, but for smoothing or ringing there is more variation in what people think about it. Some are fine with some ringing because it can improve apparent sharpness, others hate it. Same with smoothing: some like how it denoises stuff, others hate the plastic look
Scope
2021-09-05 09:27:35
And for example for AVIF there are even more such settings, and they almost always give better results than the defaults, something like this: `-a color:denoise-noise-level=5 -a color:enable-dnl-denoising=0 -a color:sharpness=2 -a color:enable-chroma-deltaq=1 -a color:deltaq-mode=3 -a color:enable-qm=1` (although this may vary with changes in Aom from build to build). I mean Aom still doesn't have optimal defaults, while x264 mostly doesn't need anything like that to give better results on any content (not considering speed), and its defaults are pretty ok
_wb_
2021-09-05 09:29:45
It would be really convenient if they would make whatever they think is the best general-purpose setting the default
2021-09-05 09:30:39
Command line tools that require <@416586441058025472>-level parameter tweaking are just user hostile imo
Scope
2021-09-05 09:34:35
Also, for lossy JXL, maybe besides enabling noise in some cases, there are no settings that could noticeably improve the quality (so the default settings are ~optimal, at least among those available to the user)
_wb_
2021-09-05 09:39:30
That's the point. If there is something that is a good idea to do by default, then I consider it a bug if we don't do it by default.
Scope
2021-09-05 09:48:44
Also, this discussion reminded me that it would be a good idea to add rav1e to this comparison: <https://storage.googleapis.com/demos.webmproject.org/webp/cmp/2021_08_10/index.html> (it was useless before because old rav1e builds were always worse than aom, but now it makes some sense)
_wb_
2021-09-05 09:52:51
That comparison is done by the webp2 folks, I dunno if they will keep doing it
2021-09-05 09:53:21
Doesn't hurt to ask for it
2021-09-05 09:54:50
It might be good if someone would fork that comparison project and add a bunch more things to it, e.g. besides rav1e I am also interested in multiple encoder speed settings
2021-09-05 09:56:54
I think it's all default speeds in the webp2 comparisons, which makes sense if you test only one thing, but it's easier to do apples-to-apples comparisons if multiple speeds are tried
veluca
Scope And for example for AVIF, there will be even more such settings and they almost always give better results than the defaults, something like that: `-a color:denoise-noise-level=5 -a color:enable-dnl-denoising=0 -a color:sharpness=2 -a color:enable-chroma-deltaq=1 -a color:deltaq-mode=3 -a color:enable-qm=1` (although this may vary with changes in Aom from build to build) I mean Aom still has not optimal defaults, but x264 mostly doesn't need something like that, which gives better results on any content (not considering speed) and defaults are pretty ok
2021-09-05 10:03:15
together with the usual `-min 0 -max 63 --end-usage=q --tune=ssim`?
Scope
2021-09-05 10:08:00
`--min 0 --max 63 -a end-usage=q -a cq-level=` and I think `--tune=ssim` is not always better, so I don't change `--tune` unless there is a way to use `--tune=butteraugli`, but butteraugli doesn't work with 10 bits
_wb_ That comparison is done by the webp2 folks, I dunno if they will keep doing it
2021-09-05 10:10:25
I think it updates when Jyrki asks for it, also in the last comparison the methodology for adjusting the size of images has changed
_wb_
2021-09-05 10:27:20
Yes, that was a good thing, the way they did it before gave rather variable perceptual results for the same size setting. Would probably be even better if size was just made to match jxl size at various d settings, but at least it's better now than how it was.
spider-mario
Traneptora yea, my personal take is that objective metrics are a tool to make sense of what is ultimately human subjectiveness
2021-09-05 10:44:55
yes, it’s all too easy to lose track of that fact
2021-09-05 10:45:00
tip #3 of https://thecooperreview.com/10-tricks-appear-smart-meetings/ is relevant here
2021-09-05 10:45:35
_wb_
2021-09-05 10:46:28
Non-ironically though
spider-mario
2021-09-05 10:46:30
I saw someone on the dpreview forum with a nice signature, along the lines of: “Beware correct answers to the wrong question.”
_wb_
2021-09-05 10:54:54
Most of humanity's problems are caused by accepting neat solutions that are solving the wrong problem. Whole fields of study are based on doing that, like most of economics, all of theology, and big chunks of medicine or law.
2021-09-05 10:56:31
E.g. the invisible hand of the market certainly is an elegant solution, but unfortunately not for the right problem.
Scope
2021-09-05 10:56:59
Also about metrics https://videoprocessing.ai/metrics/ways-of-cheating-on-popular-objective-metrics.html
2021-09-05 10:57:08
https://videoprocessing.ai/assets/img/metrics/psnr_and_ssim/pic2.png
spider-mario
2021-09-05 10:57:14
modern economists don’t believe that the free market solves everything, quite the opposite
2021-09-05 10:57:16
https://noahpinion.substack.com/p/is-economics-an-excuse-for-inaction
2021-09-05 11:12:11
something that is arguably a correct answer to a generally irrelevant question is confidence intervals
2021-09-05 11:12:14
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4742505/
2021-09-05 11:12:55
and then it is interpreted _as if_ it were the answer to another question
_wb_
2021-09-05 11:16:06
I love the reading material I get here, thanks for the interesting pointers, <@604964375924834314> and <@111445179587624960> !
spider-mario https://noahpinion.substack.com/p/is-economics-an-excuse-for-inaction
2021-09-05 01:36:58
That was an interesting read. And yes, academic economists are (fortunately!) no longer the high priests justifying laissez-faire for the neoliberals of the 80s and 90s, though as the article says, the "economists" you typically see on TV or other popular media are often still stuck in the 80s.
2021-09-05 01:38:00
It's in particular good that they're turning more empirical now.
2021-09-05 01:40:44
In my opinion, after Marx there was basically a century of non-empirical economics where they studied mathematical toy problems instead of reality because studying reality did not lead to the desired political conclusions.
2021-09-05 01:42:18
Those mathematical toy problems (though fun and interesting) are what I meant with "neat solutions that are solving the wrong problem".
2021-09-05 02:01:11
Anyway, sorry for derailing this channel
2021-09-05 02:01:13
https://www.reddit.com/r/AV1/comments/picypn/the_av1_encoding_situation_sucks/
2021-09-05 02:01:45
unlord: "If you are complaining the open-source tooling that others are generously investing in just isn't good enough, then you should ask for a refund."
2021-09-05 02:03:09
I don't like that attitude - basically he says FOSS is allowed to suck because it's gratis?
2021-09-05 02:06:59
I don't think libaom/rav1e/svt suck, btw. But it is a concern to me that if you want best encode performance, the proprietary encoders are better (maybe? I don't really know)
2021-09-05 02:08:06
What's the point of making a royalty-free codec if you still need to get (expensive) closed-source proprietary software to make good use of it?
2021-09-05 02:10:38
Not saying that av1 is in that situation atm, but it is a risk imo
Scope
2021-09-05 02:12:46
Also, as I wrote in the AV1 discord, about the significant quality improvement of even real-time software AV1 encoders over AVC that some companies were showing a couple of years ago: judging by this comparison, I have big doubts <https://www.reddit.com/r/AV1/comments/pi411a/av1_still_the_current_future_of_video/> <https://blog.xaymar.com/2021/08/19/av1-still-the-current-future-of-video/>
2021-09-05 02:18:18
Yes, it is possible that some closed encoders have better speed and quality, but I don't think it is that significant, and judging by these comparisons, software AV1 encoders (in real time) are still not much better quality than existing AVC, with a huge increase in computing requirements
2021-09-05 02:24:31
Even many real-time streaming platforms still haven't switched to AV1 even though they wanted to do it years ago; most likely software encoding doesn't make enough of a difference in efficiency, and maybe they are waiting for good hardware encoders to become available
diskorduser
2021-09-05 02:44:23
Afaik only Google Meet on Android uses av1? And that only at very low resolution and on slow networks.
Scope
2021-09-05 02:45:09
Yep, maybe for a very low bitrate it is reasonable
_wb_
2021-09-08 03:37:30
you might want to look into using `jpegtran` to do lossless crops
2021-09-08 03:38:07
also you can try `identify -verbose` to see what quality imagemagick estimates the jpeg has, but that's a heuristic thing, since different jpeg encoders do different things
2021-09-08 03:43:15
jpegtran is part of libjpeg-turbo/mozjpeg, identify is part of ImageMagick
2021-09-09 08:00:45
avif in firefox postponed again: https://bugzilla.mozilla.org/show_bug.cgi?id=1682995#c70
Scope
2021-09-09 09:27:48
And it will also once again postpone the review of JXL patches
_wb_
2021-09-09 10:15:59
it's somewhat frustrating from my point of view: the biggest advantage AVIF has compared to JXL, is that they "were first" - they finalized the bitstream first, they got browser support first, so they "won the race" in that sense: they were the first to beat what was there before (JPEG/WebP) and now according to some people, the game has changed and JXL doesn't just have to beat JPEG/WebP, it also has to beat AVIF. But on the other hand, AVIF support is still lacking in firefox and safari, and probably a big reason of why jxl support is not advancing more rapidly in browsers is because browser devs are still busy with AVIF.
doncanjas
2021-09-09 10:26:36
I find it funny how the AOM is constituted by the same entities that own the internet and still can't impose their codecs due to lack of support, etc
xiota
_wb_ it's somewhat frustrating from my point of view: the biggest advantage AVIF has compared to JXL, is that they "were first" - they finalized the bitstream first, they got browser support first, so they "won the race" in that sense: they were the first to beat what was there before (JPEG/WebP) and now according to some people, the game has changed and JXL doesn't just have to beat JPEG/WebP, it also has to beat AVIF. But on the other hand, AVIF support is still lacking in firefox and safari, and probably a big reason of why jxl support is not advancing more rapidly in browsers is because browser devs are still busy with AVIF.
2021-09-09 04:50:14
AVIF is also first in slowness.
kb
2021-09-09 10:49:45
hey guys, 👋 I'm having trouble trying to find / remember a website / web app... months ago, while perusing some comparisons between codecs (still images and video, like JXL, AVIF, HEIC, etc) I came upon a webapp which had a few images (the first and default one was a car racing track) which allowed the user to choose:
1. whether chroma subsampling was applied to the image,
2. whether interpolation was used on chroma,
3. and some sliders to lower the resolution of the chroma components (it could go REALLY low);
now I need it to explain chroma subsampling and just can't find it anywhere 😦 it was a very nice demonstration of it (of course, it transformed the image using js) but I can't remember it. does it ring a bell?
190n
2021-09-10 12:26:14
lol i've gone through this same thing with the same site... i might be able to find it in my history
2021-09-10 12:27:59
https://goo.gle/yuv <@!818626963639238668>
_wb_
2021-09-10 05:26:01
Note that in that demo, the effects of lossy compression after subsampling are not taken into account
kb
190n lol i've gone through this same thing with the same site... i might be able to find it in my history
2021-09-10 12:20:54
thank you so much!! that's the one (saving it)
_wb_ Note that in that demo, the effects of lossy compression after subsampling are not taken into account
2021-09-10 12:31:25
this is actually just to show "you could severely subsample chroma in most images without anyone noticing", so I guess "works for me"
fab
2021-09-10 12:48:10
_wb_
kb this is actually just to show "you could severely subsample chroma in most images without anyone noticing", so I guess "works for me"
2021-09-10 01:00:56
I know, and it's a neat demo to show that, but in practice you also get artifacts from doing lossy on the subsampled chroma and they do make problems easier to notice - any ringing, blur or blockiness will get magnified by the upsampling factor and become more problematic than what it would be with nonsubsampled chroma
kb
2021-09-10 01:36:56
yeah, I see what you mean
_wb_ I know, and it's a neat demo to show that, but in practice you also get artifacts from doing lossy on the subsampled chroma and they do make problems easier to notice - any ringing, blur or blockiness will get magnified by the upsampling factor and become more problematic than what it would be with nonsubsampled chroma
2021-09-10 02:14:02
I don't fully understand the intersection of chroma subsampling and Chroma from Luma. In a way, CfL could basically undo a lot of chroma subsampling's artifacts, to some extent? (if a good prediction can be found for a given block) CfL works on the full resolution image, after all
_wb_
2021-09-10 02:20:28
are you talking about av1's CfL or jxl's ?
2021-09-10 02:20:45
we both have a concept of CfL but it's not quite the same
2021-09-10 02:23:17
I don't know the details of CfL in av1 - but I assume that it doesn't work on the full resolution image, because that would mean you are returning a yuv444 image when using yuv420 mode
kb
2021-09-10 03:15:55
you're right, it's not quite the same... in jxl, can djxl use CfL to reconstruct a full-resolution Chroma plane?
2021-09-10 03:16:31
or does it downsample luma when predicting? that would make it less complex I assume
_wb_
2021-09-10 03:23:54
CfL in jxl is only really meaningful when doing 444 (but we only do 420 when recompressing 420 jpegs)
kb
2021-09-10 03:25:19
oh
_wb_
2021-09-10 03:25:49
in jxl, CfL is signaled as two local multipliers (per 64x64 region) that get multiplied with the luma AC coeffs and added to/subtracted from the two chroma AC coeffs
2021-09-10 03:26:12
for DC there's a fixed multiplier for the whole image
2021-09-10 03:26:21
or the whole DC group, I don't remember
2021-09-10 03:27:45
for that to work, you need the DCT transforms to be the same for the 3 channels - so it doesn't work when chroma is subsampled
2021-09-10 03:29:39
in av1 I think they do things in the pixel domain, and I think they have two local constants per chroma channel (an additive term and a multiplier). I assume they downscale luma to make it match chroma in case of 420
2021-09-10 03:30:04
not sure though about av1
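(A toy sketch of the jxl-style CfL just described, for illustration only; the real signaling, quantization and per-region handling differ. Each chroma AC coefficient is predicted as a multiple of the co-located luma AC coefficient and only the residual is coded, which is why the DCT sizes of the three channels must match, i.e. no chroma subsampling.)
```python
import numpy as np

def cfl_residual(luma_ac: np.ndarray, chroma_ac: np.ndarray, multiplier: float) -> np.ndarray:
    """Chroma AC residual after subtracting multiplier * luma AC.
    Same-shape coefficient blocks are assumed, which is why this needs 4:4:4."""
    return chroma_ac - multiplier * luma_ac

def cfl_reconstruct(luma_ac: np.ndarray, residual: np.ndarray, multiplier: float) -> np.ndarray:
    return residual + multiplier * luma_ac
```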
kb
2021-09-10 03:42:59
yeah, so it's in the frequency domain, so no impact on chroma subsampling... I'd guess, in general, chroma subsampling has much better compression efficiency than 444 + CfL, right?
_wb_
2021-09-10 04:02:55
nah not really
2021-09-10 04:03:56
chroma subsampling is basically exactly the same thing as zeroing 3/4 of the chroma coeffs
2021-09-10 04:04:14
you don't need chroma subsampling to zero those coeffs if you want to zero them
2021-09-10 04:05:20
it's just a bit of a memory optimization if you can skip the upsampling and give yuv420 buffers to the gpu
2021-09-10 04:06:57
and in codecs with poor entropy coding, like old jpeg, it does help because you have less chroma DC to encode and chroma AC can also be encoded more efficiently because you're using 16x16 blocks for chroma instead of 8x8 ones
2021-09-10 04:07:42
but in codecs with good entropy coding, the compression advantage of chroma subsampling is quite small
2021-09-10 04:08:59
while the disadvantage of chroma subsampling is quite large: you HAVE to zero 3/4 of the chroma coeffs, even in spots where that's a really bad idea
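(A rough numpy/scipy illustration of that equivalence; it is a hand-wavy sketch, not bit-exact, since real 4:2:0 runs a separate DCT on the downsampled plane rather than literally zeroing coefficients of a 16x16 block.)
```python
import numpy as np
from scipy.fft import dctn, idctn

def force_chroma_like_420(chroma_block_16x16: np.ndarray) -> np.ndarray:
    """Zero the high-frequency 3/4 of a 16x16 chroma block's DCT coefficients,
    which is roughly what 4:2:0 subsampling imposes on every block, whether or
    not those coefficients mattered in that spot."""
    coeffs = dctn(chroma_block_16x16, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:8, :8] = 1.0          # keep only the lowest-frequency quarter
    return idctn(coeffs * mask, norm="ortho")
```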
diskorduser
fab
2021-09-10 05:24:23
what is this
fab
2021-09-10 05:25:18
Av1 wallpaper i do not have the lgbt one and the Halloween one
2021-09-10 05:25:25
They exist
2021-09-10 05:26:10
The font i used is elektra text pro
2021-09-10 05:26:58
The one with the snow the author is Alex of notenoughav1encodes
2021-09-10 05:27:36
Lgbt was not made by him
2021-09-10 08:20:46
can svt av1 with libavif do min qp 18, max qp 65, speed 3, sharpness 2
2021-09-10 08:21:22
is it any efficient, and is there any recent build (09 september / 10 september)
2021-09-10 08:23:21
or i should wait for libwebp2 and libjxl
nathanielcwm
fab Av1 wallpaper i do not have the lgbt one and the Halloween one
2021-09-11 02:25:40
they look square so probably a bad wallpaper (unless u want a massive white or black border)
yurume
2021-09-11 02:27:57
good for a square monitor though
fab
nathanielcwm they look square so probably a bad wallpaper (unless u want a massive white or black border)
2021-09-11 02:38:07
Right
nathanielcwm
yurume good for a square monitor though
2021-09-11 02:38:49
well most ppl don't have square monitors lmao
yurume
2021-09-11 02:39:09
my coworker actually has one in the workplace, I was amazed
nathanielcwm
2021-09-11 05:07:42
damn
2021-09-11 05:07:42
what res?
yurume
2021-09-11 08:11:18
I don't know
2021-09-12 12:24:08
yes it is really a square (1:1), the coworker complained about its resolution so I guess its resolution is not too high (1600x1600?)
2021-09-12 12:25:01
the complaint was that, the coworker really wants to use a square display but it is super uncommon so one with higher resolution or dimension is non-existent or very expensive
nathanielcwm
2021-09-12 02:03:06
found an eizo one on amazon
2021-09-12 02:03:13
looks like it's only a singular seller tho
2021-09-12 02:04:10
https://www.eizo.com/products/flexscan/ev2730q/
2021-09-12 02:04:36
eizo is generally only used in enterprise tho afaik?
2021-09-12 02:04:42
well makes sense
2021-09-12 02:06:44
cdw sells it for $1400 <:monkaMega:809252622900789269>
2021-09-12 02:07:03
it only has 1 dp and 1 dvid input
2021-09-12 02:07:17
60hz
2021-09-12 02:07:50
300cd/m^2 but advertised as 100% srgb?
_wb_
2021-09-12 06:00:11
There are also monitors you can rotate, might be useful I guess when working with portrait stuff
improver
2021-09-12 09:47:06
rotating messes up subpixel font stuff though unless its grayscale
_wb_
2021-09-12 10:48:40
Subpixel rendering is a neat trick but it's very fragile, you need to be sure of the display RGB subpixel layout
2021-09-12 10:50:13
Phones and tablets have rotating screens anyway, so I think subpixel rendering is on its way out
improver
2021-09-12 11:07:15
the grayscale one is not, but for rgb i'd say yes
_wb_
2021-09-12 11:31:17
Grayscale is not really subpixel rendering, but just doing anti-aliasing properly
spider-mario
2021-09-12 12:15:51
subpixel rendering with rotation would be possible but it would be a hassle
2021-09-12 12:16:04
phones have such dense displays that it’s not worth the trouble
2021-09-12 12:16:47
regular antialiasing is good enough
improver
2021-09-12 12:20:49
oh yeah was abt to say that myself its not subpixel if its grayscale
_wb_
2021-09-12 12:20:51
To me, subpixel rendering is a neat hack, but on high density screens you don't need it, and on low density screens it can cause visible color fringing imo. There's a sweet spot of screen density where it's a good idea, but there are disadvantages like screenshots no longer being portable to other screens.
improver
2021-09-12 12:21:49
generally it is portable enough except when things are rotated
2021-09-12 12:22:14
at least looking at majority of displays sold rn
_wb_
2021-09-12 12:22:47
At some point I considered adding a Modular transform that basically converts subpixel rendered text to 3*width grayscale text and back
2021-09-12 12:23:00
It helped a bit for compression
improver
2021-09-12 12:23:53
interesting. how exactly does it work?
_wb_
2021-09-12 12:24:07
But it's a bitstream complication, and I am happy we don't have to deal with it
2021-09-12 12:24:39
Well basically just take the RGB data and reinterpret it as 3 grayscale pixels per pixel
2021-09-12 12:24:44
Or the other way around
2021-09-12 12:25:11
That basically seems to be the way most subpixel font renderers work
2021-09-12 12:27:04
If we would define the transform at a higher level in the bitstream (not in modular), then patches would also benefit, because now we're missing lots of opportunities in case of subpixel rendering since you get 3 totally different pixel patterns depending on where the letter starts exactly
2021-09-12 12:27:27
But it would be a big bitstream complication
2021-09-12 12:27:54
And it would be hard to make it work well with mixed content that is not just subpixel rendered text
improver
2021-09-12 12:28:07
hmm essentially inlined rgb channels
_wb_
2021-09-12 12:33:52
You can literally take a ppm file with black text that uses subpixel rendering, change the header to that of a pgm with 3x the width, and it will make more sense for compression
2021-09-12 12:35:16
PNG doesn't suffer much from subpixel rendering because it is basically gzipped ppm, and for gzip it looks exactly the same as the 3x width pgm
2021-09-12 12:36:02
jxl and flif do suffer from it because they encode in a planar way, not an interleaved way like png
2021-09-12 12:37:51
And subpixel rendering causes weird colored edges that make no sense for neither lossy or lossless compression
2021-09-12 12:38:23
(it messes up color decorrelation attempts completely)
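(The reinterpretation described above is literally just a reshape of the interleaved buffer; a small numpy sketch follows. The ppm-to-pgm header trick mentioned earlier works because both files carry the same interleaved bytes.)
```python
import numpy as np

def rgb_to_triple_width_gray(rgb: np.ndarray) -> np.ndarray:
    """(h, w, 3) interleaved RGB -> (h, 3*w) grayscale, one sample per subpixel."""
    h, w, _ = rgb.shape
    return rgb.reshape(h, 3 * w)

def triple_width_gray_to_rgb(gray: np.ndarray) -> np.ndarray:
    h, w3 = gray.shape
    return gray.reshape(h, w3 // 3, 3)
```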
2021-09-12 02:42:03
Somewhat funny to me that someone encoding a movie in av1 is considered reddit-worthy news
2021-09-12 02:42:05
https://www.reddit.com/r/AV1/comments/pmpzok/big_bucky_bunny_2160p_60fps_is_now_av1_10bit/
Scope
2021-09-12 03:29:49
Also about the resolution and that 16k is enough for everything, after some experimentation with random Manga/Manhua/Manhwa images
Fraetor
2021-09-12 04:17:23
Yeah, I've often seen, for example, guides which are greater than 16K vertically.
2021-09-12 04:18:36
2021-09-12 04:18:47
OK, not quite actually.
2021-09-12 04:20:31
This image is 21.6K:
2021-09-12 04:21:02
Discord does not like that, lol
nathanielcwm
_wb_ Somewhat funny to me that someone encoding a movie in av1 is considered reddit-worthy news
2021-09-13 03:05:24
looks like op is self advertising a video he encoded <:kekw:808717074305122316>
Fraetor
2021-09-13 09:26:43
If it is a transcode that isn't very interesting. If, on the other hand, they got Blender outputting AV1 directly then that is more interesting.
nathanielcwm
2021-09-13 10:35:07
looks like it's just a transcode
2021-09-13 10:35:17
oh wait
2021-09-13 10:35:26
blender can use ffmpeg to output video apparently
2021-09-13 12:23:24
nope appears to just be a transcode
2021-09-13 12:24:18
anyway afaik to get the big buck bunny files u have to pay the blender foundation?
2021-09-13 12:24:59
oh nvm
2021-09-13 12:25:05
u can download the blend files for bbb
2021-09-13 12:25:17
it's 800mb <:kekw:808717074305122316>
Fraetor
2021-09-13 04:26:40
That's some very good compression for a lossless version.
2021-09-13 04:26:56
Though I think the decode complexity is a little high.
_wb_
2021-09-13 04:54:44
Lol
Scope
2021-09-17 07:02:00
https://encode.su/threads/3701-GDCC-21-T2-Multispectral-16-bit-image-compression
2021-09-17 07:03:21
```
--------- pngcrush ------ flif ----- ccsds -- pik-gray --- pik-rgb ------ lzma --- tpnib
C -------- 301,032,449 257,964,169 296,274,782 278,858,839 260,087,773 278,943,805 272,547,287
```
2021-09-17 07:11:07
So these are images and can be compared with Jpeg XL 🤔
_wb_
2021-09-17 07:49:25
interesting
2021-09-17 07:49:52
I wonder how they passed 32-bit floats to those image codecs, because none of them can do lossless float
2021-09-17 07:50:18
probably just as two 16-bit ints per sample, or something
fab
2021-09-19 08:40:22
is there a way to do
2021-09-19 08:40:24
for %i in (C:\Users\Utente\Documents\Facebook\*.jpg) do avifenc -q 93.13 -sharpness=3 -enable-chroma-deltaq=1 -s 5 "%i" -o "%i.avif"
2021-09-19 08:40:35
or av1enc
2021-09-19 08:40:47
why it fails
2021-09-19 08:41:14
i copied parameters not listed by the encoder, such as quality, sharpness and delta quantization
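(For what it's worth, a hedged Python sketch of the same batch loop; the path and the cq-level value are placeholders, and the codec-specific options are passed through `-a` the way Scope showed earlier in this channel, so whether each one is accepted depends on the avifenc/libaom build.)
```python
import subprocess
from pathlib import Path

# Path and cq-level are placeholders; the -a options mirror the ones posted above,
# and whether each one is accepted depends on the avifenc/libaom build in use.
src = Path(r"C:\Users\Utente\Documents\Facebook")
for jpg in src.glob("*.jpg"):
    out = jpg.with_suffix(".avif")
    subprocess.run(
        ["avifenc", "-s", "5",
         "--min", "0", "--max", "63",
         "-a", "end-usage=q", "-a", "cq-level=18",
         "-a", "color:sharpness=2", "-a", "color:enable-chroma-deltaq=1",
         str(jpg), str(out)],
        check=True)
```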
Scope
2021-09-20 06:51:37
Hmm, it was sad to discover that some sites have even started using FPGA WebP encoders (with something like 50% better compression than JPEG, and sometimes transcoding only on the fly), with a future move to FPGA AVIF, but they have never heard of JPEG XL <:SadOrange:806131742636507177>
_wb_
2021-09-20 07:11:12
Fpga vp8/av1, I wonder how that performs compared to software
2021-09-20 07:11:39
For the web, I suspect software encode is a better idea than hw encode
2021-09-20 07:12:26
For camera, you don't care too much about density, so stupider hw encode is ok
Scope
2021-09-20 07:17:07
Usually it's some kind of hybrid FPGA which also does cropping, resizing and other operations, and as I understand it, on very high-load sites the speed of software encoding/processing is not always enough, or it is quite expensive in CPU terms
2021-09-20 07:19:25
And until recently I also hadn't heard much about FPGA WebP encoders and also didn't think anyone used them, but they do exist and are used 🤔
_wb_
2021-09-20 07:22:38
I wonder what kind of site that is then
2021-09-20 07:23:26
Cloudinary does do quite a lot of encodes but so far cpu works well for us
2021-09-20 07:23:38
Afaik it's the same for twitter and facebook
2021-09-20 07:24:50
Youtube does hw video encode for low traffic / quick encodes, and switches to sw encode when there are more views
2021-09-20 07:24:53
Afaik
Scope
2021-09-20 07:27:39
It's kind of like photo sites or Instagram with a lot of mobile users, maybe it's somehow more cost-effective
_wb_
2021-09-20 07:28:27
Instagram is facebook, they also don't do that
2021-09-20 07:31:29
I think hw transcode is done by some who do it on the fly on every request
Scope
2021-09-20 07:31:54
Yep, I also don't know what FPGA solution they use, the first thing I found is <https://www.inspursystems.com/fpga/use-case/webp> but as far as I understand they use something else that was developed with Intel
_wb_
2021-09-20 07:33:34
I still doubt it really makes sense to try to save on storage/cache so badly that you need hw to just repeat the encode every time you need the image
diskorduser
2021-09-20 07:35:16
45 watts 😧
Scope
2021-09-20 07:35:42
Maybe they have a cache of the most used images, but don't want to transcode all available images, and instead do it in real time for rarer content (and software encoders can't do that with the needed quality, speed and latency)
_wb_
2021-09-20 07:36:45
You can do that in cpu too
2021-09-20 07:37:56
The thing with custom hw is it's a pain to deploy in a decentralized way (which is essential if you want it to work well not only close to your serverfarm)
Scope
2021-09-20 07:39:04
As far as I know they only use their own servers and do not use third-party CDNs
_wb_
2021-09-20 07:40:17
That is a really bad idea unless you only target a geographic region around your servers
Scope
2021-09-20 07:44:20
Hmm, probably; perhaps this is also one of the main reasons why custom FPGAs are not used that often, other than for certain very specific purposes (for example offline encoding of some content long before distribution) 🤔
2021-09-20 08:00:48
Hmm, as far as I understand it is something based on Intel Arria, perhaps with some custom features/designs 🤔 <https://www.intel.com/content/www/us/en/products/details/fpga/arria/10.html> So maybe it is something more flexible to program, and Jpeg XL could also be "accelerated" in a similar way if needed
_wb_
2021-09-20 08:09:30
Likely. Also 'generic' DSP chips are available on many SoCs and could probably be used to accelerate stuff like DCT or color transforms
Scope
2021-09-21 01:10:03
🤔 https://www.ipol.im/pub/art/2021/325/
2021-09-23 08:03:40
https://twitter.com/OliverJAsh/status/1441095716227543042
2021-09-23 08:03:57
🤔 https://twitter.com/OliverJAsh/status/1441107335171043334
fab
2021-09-23 08:15:45
AMAZING
2021-09-23 08:18:44
Nova Aurora
2021-09-23 08:38:38
So when can we get JPEG XL in unsplash? <:FeelsReadingMan:808827102278451241>
fab
2021-09-23 08:42:50
at least for higher resolution not the preview
_wb_
2021-09-25 07:48:58
https://twitter.com/alyssarzg/status/1441595107615064065?s=19
diskorduser
2021-09-26 03:39:29
I get 38.2 KB with jxl modular
2021-09-26 04:09:52
cjxl -d 0 chroma-444.png test4444.jxl
2021-09-26 04:09:59
>stat -c "%s" test4444.jxl → 39151
2021-09-26 04:10:53
JPEG XL encoder v0.5.0
The_Decryptor
2021-09-26 04:11:24
Disabling patches takes it down to 24 KB
diskorduser
2021-09-26 04:23:54
Size 38917 - JPEG XL encoder v0.7.0 cdd3ae6
eddie.zato
2021-09-26 04:25:41
```
cjxl -m -E 3 -I 1 -e 8 --patches=0
JPEG XL encoder v0.7.0 cdd3ae6 [SSE4]
Compressed to 17165 bytes
```
diskorduser
2021-09-26 04:27:40
Still bigger than webp
BlueSwordM
2021-09-26 04:33:17
```
cjxl -s 9 -d 0.0 -g 1 --patches=0
JPEG XL encoder v0.7.0 cdd3ae6 [AVX2]
Compressed to 16227 bytes (0.141 bpp)
```
2021-09-26 04:35:16
```
cjxl -s 9 -d 0.0 -g 1 --patches=0 -P 8 -I 0.65
JPEG XL encoder v0.7.0 cdd3ae6 [AVX2]
Compressed to 15622 bytes (0.136 bpp).
```
eddie.zato
2021-09-26 05:13:50
Yeah, lossless webp is too good. 😄
```
cjxl -m -g 1 -I 1 -P 8 --patches=0 -e 9 -Y 0 --palette=9
JPEG XL encoder v0.7.0 cdd3ae6 [SSE4]
Compressed to 15477 bytes (0.134 bpp).
```
_wb_
2021-09-26 07:06:25
```
$ ../build/tools/cjxl chroma-444.png chroma-444.png.jxl -q 100 --patches 0 -e 9 -P 0 -X 0 -Y 0 -I 0 -g 3
JPEG XL encoder v0.7.0 59851dc [SSE4,Scalar]
Read 1280x720 image, 62.7 MP/s
Encoding [Modular, lossless, tortoise], 4 threads.
Compressed to 9580 bytes (0.083 bpp).
```
2021-09-26 07:08:25
it's one of those cases where lz77 matching helps a lot, which is the core thing png/lossless webp do, but in cjxl it only seriously attempts to use lz77 matching with `-e 9 -I 0`
2021-09-26 07:14:14
yes, not to mention having to use very non-default options 🙂
2021-09-26 07:14:47
just shows there's room for improvement in the default options
2021-09-26 07:15:58
lossless is always a very tricky thing if you have these specific images that can be losslessly compressed to 0.1 bpp
eddie.zato
2021-09-26 07:16:20
```
cjxl -e 8 -m --patches=0 -P 0 -Y 0 -I 0 -g 3
JPEG XL encoder v0.7.0 cdd3ae6 [SSE4]
Compressed to 12863 bytes (0.112 bpp)
TotalSeconds : 0,2933563

cwebp -mt -m 6 -lossless -z 9
Lossless-ARGB compressed size: 12238 bytes
TotalSeconds : 0,3493875
```
2021-09-26 07:19:26
It could be smaller 😄
```
cjxl -e 9 -m --patches=0 -P 1 -Y 0 -I 0 -g 3
Compressed to 9472 bytes (0.082 bpp).
TotalSeconds : 14,3653558
```
_wb_
2021-09-26 07:21:32
a webp-like encode mode could be made that is faster and better for such images - basically skip tree learning and instead use fixed trees with one context for every 16x16 block or whatever
w
2021-09-26 07:28:19
so e 9 is 50x slower than e 8?
_wb_
2021-09-28 08:28:23
does anyone know if and how you can get a partial webp decoded?
2021-09-28 08:30:15
or otherwise emulate progressive loading of webp
Scope
2021-09-28 10:16:07
<https://developers.google.com/speed/webp/docs/api>
2021-09-28 10:17:34
2021-09-28 10:25:21
Seems like there are no easy methods
Jim
2021-09-30 01:21:06
Doubtful. Can't do it with AVIF either. They are considering including a low res version of the image in the metadata to use as a placeholder while the image downloads. I created a demo that uses a small separate image with a blur filter as an example: https://thumb.avif.techie-jim.net/
2021-09-30 01:22:46
Could do the same with WebP
Nova Aurora
2021-09-30 07:09:38
What kind of efficiency hit would that take though?
2021-09-30 07:11:12
The point of progressive is that the efficiency loss is pretty slight, since it simply rearranges data that already has to be encoded so that it can be decoded partially
2021-09-30 07:12:48
Would it even be worth it to use two separate AVIFs/webps over jxl or even legacy jpeg in situations where progressive is something you want?
_wb_
2021-09-30 08:59:28
Probably, but it only makes sense if the preview is low enough quality to not make the total size too large
2021-09-30 09:00:04
And of course it's a very limited kind of progressive, only one intermediate preview...
Jim
2021-09-30 10:51:35
It's really not efficient at all; it adds a few extra kilobytes to the payload. But for larger images it at least displays a preview early on, so you don't just have an empty space or content being pushed around later (when `height` and `width` are not defined). Also, web speed tools are considering image "time to first render" as part of the page speed metrics... so AVIF may tank a site's metrics. Having a small preview show up early is a [slightly gimmicky] solution.
improver
2021-09-30 10:53:52
i wonder if same approach could be used for lossless non-progressive jxl, because ll progressive is hella inefficient
Jim
2021-09-30 10:57:11
Yes, you can. See my preview link above. I wrote on it that this can be used with any image format, including jxl (though I would favor progressive or responsive loading). All you need is to produce your regular image and a small, low-resolution preview image, plus a bit of extra HTML (and JavaScript if you want a smoother transition), and voilà! As for embedding the preview in the metadata, the encoder & decoder would have to be written to handle that.
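For what it's worth, a command-line sketch of producing the two assets described here, the real image plus a tiny blurred placeholder, assuming ImageMagick 7 and cjxl are installed; the filenames, sizes and distances are arbitrary:
```
# Hypothetical sketch: full image plus a tiny blurred preview to show first.
# Assumes ImageMagick 7 ("magick") and cjxl are on PATH; names/sizes are arbitrary.
cjxl -d 1 photo.png photo.jxl                                   # the image actually served
magick photo.png -resize 32x32 -gaussian-blur 0x2 preview.png   # tiny blurred placeholder
cjxl -d 2 preview.png preview.jxl                               # small file to show early
```
The placeholder is then swapped for the full image once it has loaded, which is where the optional JavaScript for a smooth transition comes in.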
improver
2021-09-30 10:57:47
extra js is eww
Jim
2021-09-30 10:58:49
As I said, the JS is only for a smooth transition - there is no way for CSS to detect when the image is loaded. Without the JS the larger image still "pops in" when loaded, just not smoothly.
Scope
2021-09-30 11:00:20
As I said before, a pretty large percentage of people don't like the very low-quality blurry preview image and would prefer a solid color even more, but incremental loading of the image is well accepted by most people, and it is already there in lossless non-progressive jxl
Jim
2021-09-30 11:02:06
True, but when your image format doesn't support that... gotta go with something less desirable. You can go with a solid color but if you want it to be the primary color of the image there needs to be some sort of backend/server processing to find it first.
improver
2021-09-30 11:02:07
i actually kinda like decoding square by square, it looks less boring than jpeg/png's smooth line by line
2021-09-30 11:02:57
as for avif, eh stuff should be just tiled, will improve encoding/decoding perf too
Jim
2021-09-30 11:03:27
For me it will be jarring at first but I will get used to it.
2021-09-30 11:04:18
I believe AVIF is looking to possibly do that in the long term but as their codec is written now it would take a lot of rewriting to support progressive loading.
2021-09-30 11:05:10
They wrote it to be part of the video codec - they were not really thinking about how the keyframes could be displayed on the web.
Scope
2021-09-30 11:05:58
In my opinion the main advantages of progressive decoding are the near-full-quality intermediate stage and the ability to use lower resolutions in jxl from the same image
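As a rough illustration of that second point, assuming a djxl build that exposes a `--downsampling` option (the flag name may differ between libjxl versions), the same file can be decoded at a reduced resolution without storing a separate asset:
```
# Hypothetical sketch: decode only the early passes of a progressive .jxl
# at roughly 1:8 scale, then the full-resolution image from the same file.
# Assumes djxl supports --downsampling; check "djxl --help" in your build.
djxl --downsampling=8 photo.jxl photo_preview.png
djxl photo.jxl photo_full.png
```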
improver
2021-09-30 11:06:06
it's kinda a mess, i'm not surprised some people just like jpegs and pngs
Jim
2021-09-30 11:08:05
Yeah, it will remain a niche thing that fewer sites will support. It should work fine for video capture thumbnails, but not for larger images. It should also work fine as a lower-resolution background. I don't think CSS backgrounds support progressive rendering.
improver
2021-09-30 11:09:42
less likely to be usable for css backgrounds as iirc there's no such thing as <picture> for that
Jim
2021-09-30 11:10:14
What do you need picture for?