|
_wb_
|
|
fab
from the original jpg
|
|
2021-02-13 07:35:16
|
If your original is already lossy, better not destroy it further by recompressing in a different lossy way
|
|
|
fab
|
2021-02-13 07:35:39
|
so webp2 q89 is garbage
|
|
2021-02-13 07:35:47
|
even for screenshots
|
|
2021-02-13 07:35:53
|
should not be used
|
|
2021-02-13 07:36:27
|
i prefer jxl because the bitstream is finalized
|
|
|
_wb_
|
2021-02-13 07:37:20
|
VMAF does not correlate well with human perception, at least not with modern codecs. It sometimes says jxl is worse than anything else (including old jpeg) while if you look at the images the opposite is true.
|
|
|
Master Of Zen
|
|
_wb_
VMAF does not correlate well with human perception, at least not with modern codecs. It sometimes says jxl is worse than anything else (including old jpeg) while if you look at the images the opposite is true.
|
|
2021-02-13 07:40:23
|
What was the usage of VMAF? Asking because it's explicitly said that if you have a less-than-1080p image you need to scale it up to fit 1080p, as the default model is trained on people watching a 1080p screen from 3 screen heights. (also VMAF type and model)
|
|
|
_wb_
|
2021-02-13 07:41:06
|
Scaling up?
|
|
2021-02-13 07:41:17
|
That sounds like a bad idea
|
|
|
Master Of Zen
|
|
_wb_
Scaling up?
|
|
2021-02-13 07:46:12
|
The main idea of VMAF as a metric is that it tries to simulate an actual human being seeing the video.
The default v0.6.1 model is trained on DMOS scores (Differential Mean Opinion Score) from people watching video on a 1080p TV, at a certain distance, under certain conditions.
There are 4K, default, and phone models publicly available
|
|
|
_wb_
|
2021-02-13 07:46:56
|
People watching video is not a good model for still images
|
|
2021-02-13 07:47:19
|
You tend to look at still images for more than 40 ms
|
|
|
Master Of Zen
|
|
_wb_
People watching video is not a good model for still images
|
|
2021-02-13 07:47:26
|
That's for sure, I'm not making a point for it))
|
|
2021-02-13 07:47:50
|
btw you might find this interesting
|
|
2021-02-13 07:47:51
|
https://netflixtechblog.com/toward-a-practical-perceptual-video-quality-metric-653f208b9652
|
|
|
lithium
|
2021-02-13 07:51:58
|
Just my opinion,
I think Butteraugli_jxl 3-norm is more suitable for video,
maxButteraugli is suitable for high-quality still images.
|
|
|
_wb_
|
2021-02-13 07:55:01
|
It depends on the use case and what fidelity target you have in mind. Being "visually lossless" means very different things if you do a zoomed-in fliptest, a regular-size fliptest, a side-by-side comparison, or even context-switching comparisons like when you let people watch video clips
|
|
2021-02-13 07:56:06
|
For some news article with a stock photo just to avoid dry text, probably low-fidelity high-appeal is good enough
|
|
|
Scope
|
2021-02-14 06:20:09
|
Btw, for animations and animated emoji, vector JSON-based frameworks like Lottie are also well suited (although vector formats have their own limitations and drawbacks) https://lottiefiles.com/lottie
|
|
|
_wb_
|
2021-02-14 07:06:38
|
Yes, svg and co are nice if you can use them
|
|
2021-02-14 07:07:19
|
https://sneyers.info/jxl/ has no raster images on it, I just noticed
|
|
|
|
Deleted User
|
2021-02-14 07:10:00
|
Unfortunately I've just noticed that the test animation (car) in Lottie Web Player lags when I zoom in, even though I'm on ex-flagship Samsung Galaxy Note 9
|
|
2021-02-14 07:11:02
|
|
|
|
Scope
|
2021-02-14 07:12:14
|
Yes, this is one of the disadvantages of vector formats - unpredictable performance
|
|
2021-02-14 07:17:50
|
<https://encode.su/threads/3143-New-Telegram-animated-image-format?p=60825&viewfull=1#post60825>
Not Brotli <:PepeHands:808829977608323112>
|
|
2021-02-14 07:18:35
|
|
|
2021-02-14 07:18:38
|
|
|
|
_wb_
|
2021-02-14 07:21:38
|
Brotli is nice
|
|
|
Scope
|
2021-02-14 07:58:54
|
Also, Steam uses APNG for animated stickers <:Thonk:805904896879493180> https://store.steampowered.com/points/shop/c/stickers
|
|
2021-02-14 08:00:07
|
Which is also not supported in discord
https://cdn.akamai.steamstatic.com/steamcommunity/public/images/items/637310/979bb2133de5a89a3f7a0524f8a1d27163301f6d.png
|
|
|
_wb_
|
2021-02-14 08:08:08
|
That's the problem with APNG (and JPEG XT for that matter). Adding new functionality to something that already exists, with a 'graceful degradation', means most implementations will just settle for the graceful degradation.
|
|
2021-02-14 08:08:49
|
You see an image, so it 'works'. Doesn't move? Too bad.
|
|
2021-02-14 08:09:33
|
Like a JPEG XT with HDR: only see the tone mapped SDR image? Too bad.
|
|
2021-02-14 08:10:10
|
The transparency is gone? Too bad. At least you still see an image.
|
|
2021-02-14 08:11:20
|
Most users don't even realize they're not getting the intended experience. They don't get a broken image icon. Just a 'graceful degradation'.
|
|
2021-02-14 08:12:15
|
So the implementers don't get a lot of bug reports and angry users.
|
|
|
BlueSwordM
|
2021-02-14 08:13:11
|
So, what you are saying is that if we notice unintended bugs, we should report them quickly? Blasphemy. <:kekw:808717074305122316>
|
|
|
Scope
|
2021-02-14 08:19:57
|
<:FeelsReadingMan:808827102278451241> <https://www.reddit.com/r/programming/comments/livw57/svg_the_good_the_bad_and_the_ugly/>
https://news.ycombinator.com/item?id=26114863
|
|
|
_wb_
|
2021-02-14 08:29:46
|
I would love to at some point take a sensible subset of svg, define a special purpose compressed representation for it, and define a jxl file format box for it so you can combine raster and vector (or of course just one of the two) in a well-compressed way.
|
|
2021-02-14 08:30:23
|
Call it SXL - Scalable XL
|
|
2021-02-14 08:31:30
|
All the vector formats somehow have the problem of feature bloat and unnecessarily Turing complete stuff
|
|
2021-02-14 08:31:49
|
PostScript could already make printers go in infinite loops
|
|
2021-02-14 08:32:00
|
PDF extended that
|
|
2021-02-14 08:32:33
|
SVG also basically requires a full browser to render
|
|
2021-02-14 08:39:03
|
A simple vector representation with limited functionality but enough to do editable text and all the vector drawing needed to do fonts (bezier curves, polygons, etc), plus maybe gradients and blur, and that's it. No javascript, no hatchings or whatever weird fillings, no macro bomb stuff
|
|
|
|
Deleted User
|
2021-02-14 08:59:50
|
WAIT SVG CAN RUN JAVASCRIPT
|
|
|
_wb_
|
2021-02-14 09:07:41
|
Yes, and you also need to implement CSS to make an SVG renderer
|
|
2021-02-14 09:08:35
|
At Cloudinary we have to render SVG using a sandboxed headless browser, basically
|
|
|
spider-mario
|
|
_wb_
PDF extended that
|
|
2021-02-14 11:56:41
|
I thought PDF only used a non-Turing-complete subset of PostScript, is that not the case?
|
|
|
BlueSwordM
|
2021-02-15 02:52:09
|
<@!794205442175402004> https://aomedia-review.googlesource.com/c/aom/+/130141 π€
```Add tune=butteraugli
Kodak Image Dataset with baseline tune=psnr
avg_psnr ovr_pnsr ssim butteraugli
4.295 4.295 -1.739 -16.660
TODO:
1) replace the Butteraugli interface with the official release
2) use a test dataset with high resolution images
3) there are 7 out of 150+ test images report BD-rate loss
4) fine tune the parameters of the RD model```
|
|
|
Scope
|
|
lonjil
|
|
spider-mario
I thought PDF only used a non-Turing-complete subset of PostScript, is that not the case?
|
|
2021-02-15 05:16:38
|
Should be noted that PDF can also contain arbitrary JavaScript.
|
|
|
Pieter
|
2021-02-15 05:23:36
|
As in: as formatted text in it? :D
|
|
|
lonjil
|
2021-02-15 05:28:10
|
Nope, as code to be executed
|
|
|
Pieter
|
2021-02-15 05:42:15
|
Who comes up with that?
|
|
|
diskorduser
|
2021-02-16 01:15:08
|
Anyone here use webp2 on arch linux? I get this error:
|
|
2021-02-16 01:15:15
|
`cwp2: error while loading shared libraries: libimageio.so: cannot open shared object file: No such file or directory`
|
|
|
Crixis
|
2021-02-16 01:22:24
|
just install libimageio
|
|
2021-02-16 01:24:57
|
I expect some `sudo pacman`, sorry I'm an Ubuntu user
|
|
|
|
veluca
|
2021-02-16 01:29:07
|
`sudo pacman -S openimageio` would likely do it
|
|
|
|
Deleted User
|
2021-02-16 01:32:29
|
Linux packaging needs some kind of standardization across distributions
|
|
|
|
veluca
|
2021-02-16 01:36:33
|
that'd be flatpak basically
|
|
2021-02-16 01:36:41
|
or snap
|
|
|
|
Deleted User
|
2021-02-16 01:37:17
|
yeah, it doesn't look like traditional distribution packages will come to a common ground anytime soon
|
|
2021-02-16 01:38:06
|
snap still had some performance issues the last time I tried Ubuntu
|
|
2021-02-16 01:38:21
|
But I've seen there was some work to fix that
|
|
2021-02-16 01:39:36
|
and flatpak, I tried one but it didn't work so I left it at that
|
|
2021-02-16 01:40:13
|
AppImage was cool as an idea
|
|
2021-02-16 01:40:49
|
offered the best of flatpak and Windows installers
|
|
2021-02-16 01:46:29
|
when I was using Ubuntu years ago everything that I needed I was finding in rpm format. Now that I'm using fedora everything is in deb format π€£
|
|
|
diskorduser
|
|
veluca
`sudo pacman -S openimageio` would likely do it
|
|
2021-02-16 02:52:23
|
I have installed that, but it still shows the same error
|
|
2021-02-16 02:53:53
|
it doesn't have libimageio.so
|
|
|
Crixis
|
|
snap still had some performance issues the last time I tried Ubuntu
|
|
2021-02-16 03:09:13
|
now it is better
|
|
2021-02-16 03:12:18
|
also flatpak now is solid
|
|
2021-02-16 03:13:33
|
deb is more widespread in my experience, only Oracle goes rpm-only
|
|
|
|
Deleted User
|
2021-02-16 03:20:25
|
ughhh Oracle...I'll never forgive them for destroying Sun
|
|
|
Crixis
|
2021-02-16 03:33:28
|
and torturing Java
|
|
|
Scope
|
2021-02-16 04:13:05
|
Hmm, does the jxl encoder also use multi-pass for some things?
https://youtu.be/MBVBfLdh984?list=LL&t=1239
|
|
2021-02-16 04:13:10
|
|
|
|
_wb_
|
2021-02-16 04:27:26
|
define multi-pass
|
|
2021-02-18 05:10:46
|
https://cloudinary.com/blog/how_to_adopt_avif_for_images_with_cloudinary
|
|
|
BlueSwordM
|
2021-02-18 05:24:04
|
I don't think that speed comparison is very fair though.
|
|
2021-02-18 05:24:25
|
By that logic, JPEG-XL with Effort=3 is much slower than mozjpeg in Squoosh.
|
|
|
|
paperboyo
|
|
_wb_
https://cloudinary.com/blog/how_to_adopt_avif_for_images_with_cloudinary
|
|
2021-02-18 05:27:09
|
This is cool! I wonder if any resizing provider allows specifying your own `q_auto:custom` (by eg. encoding all formats via `f_auto` to the given metric like DSSIM) and choosing the slimmest file supported by the client? A lot of compute, that, but I guess giving eg. `q_auto:good` already entails multiple encodings of the same file to hit a certain metric?
|
|
|
lonjil
|
2021-02-18 05:57:17
|
>don't really need lossless for the web
maybe not for photographic content, but I beg to differ for all other stuff π
|
|
|
_wb_
|
|
BlueSwordM
By that logic, JPEG-XL with Effort=3 is much slower than mozjpeg in Squoosh.
|
|
2021-02-18 05:59:25
|
Uh no, cjxl -s 3 is a lot faster than default mozjpeg.
|
|
|
BlueSwordM
|
|
_wb_
Uh no, cjxl -s 3 is a lot faster than default mozjpeg.
|
|
2021-02-18 05:59:46
|
I was talking about using Squoosh as a comparison. π
|
|
2021-02-18 06:00:00
|
Squoosh seems to have a different scale.
|
|
|
_wb_
|
2021-02-18 06:00:56
|
Ah, yes, but even native libavif encoding with the fastest foss encoder (SVT-AV1) is pretty slow, unless you set it to fastest speed but then it is worse than webp
|
|
|
lonjil
>don't really need lossless for the web
maybe not for photographic content, but I beg to differ for all other stuff π
|
|
2021-02-18 06:01:59
|
For nonphoto you also don't need lossless, you just need visually lossless, which is not exactly the same thing anymore
|
|
2021-02-18 06:04:07
|
E.g. png8 made by pngquant or a lossless webp version of that is lossy, but often good enough for the web (depending on the image of course)
|
|
|
paperboyo
This is cool! I wonder if any resizing provider allows specifying your own `q_auto:custom` (by eg. encoding all formats via `f_auto` to the given metric like DSSIM) and choosing the slimmest file supported by the client? A lot of compute, that, but I guess giving eg. `q_auto:good` already entails multiple encodings of the same file to hit a certain metric?
|
|
2021-02-18 06:06:20
|
That's what f_auto,q_auto does: for each client, it picks the best supported format for that image
|
|
2021-02-18 06:08:18
|
So e.g. currently on a recent Chrome, that will often be avif, but for some logos with alpha it can e.g. be best to do lossless webp.
|
|
2021-02-18 06:09:53
|
While on Safari you will often get jpeg 2000, but sometimes old jpeg or png are better. WebP is kind of semi-broken atm in Safari, but once it reliably works again we will again serve webp to safari versions where it works
|
|
2021-02-18 06:10:52
|
Once jxl support lands in browsers, I suspect usually jxl will be best, and all the other codecs will just be for fallback
|
|
|
|
paperboyo
|
|
_wb_
That's what f_auto,q_auto does: for each client, it picks the best supported format for that image
|
|
2021-02-18 06:11:55
|
> it picks the best supported format for that image
Best = smallest at the same quality metric? What I would love is to set this metric goal myself. For two reasons: a) I suspect `q_auto:low` will be much too good for me, b) I would want to vary that by dpr (lower for higher density).
|
|
|
_wb_
|
2021-02-18 06:14:02
|
We don't do a huge amount of metric tweaking, that would give too much latency on first requests. So things are based on heuristics, some trial encoding, and lots of assumptions and offline calibration
|
|
2021-02-18 06:14:49
|
We are probably going to add a target below q_auto:low, with avif that becomes useful
|
|
2021-02-18 06:20:07
|
Varying by dpr: that kind of already happens naturally because of the metric we use (ssimulacra)
|
|
2021-02-18 06:21:01
|
You automatically tend to get lower encode settings for higher-res versions of the image, I mean
|
|
2021-02-18 06:23:19
|
Default q_auto also varies based on the `Save-data` client hint (in chrome you have an option to save bandwidth, I think it is now called Lite mode): it gives q_auto:good by default, and q_auto:eco to those who want to save data
|
|
|
|
paperboyo
|
2021-02-18 06:27:53
|
> So things are based on heuristics, some trial encoding, and lots of assumptions and offline calibration
Thank you, this is very useful. One more question, not to derail even more: suppose, in theory, a client of yours would be able to run something on all the images, and provide you with an output of this thingie, prior to requesting renders. Does a thingie like that exist, that would, ideally, shift some of the compute burden from you and allow you to make better decisions, faster? I have no idea what I'm talking about, but a lot happens to images before they end up being requested (so the _latency_ could shift my way, I mean).
Interesting regarding higher res imagery shifting the metric: I would only want it to shift with an explicit `dpr` param, because that way I can distinguish _higher res coz displayed huge_ from _higher res coz displayed very small but densely_ (I know the "physical" dimensions of the images, as much as one can on the web, that is).
|
|
|
_wb_
|
2021-02-18 06:31:57
|
For now we don't do anything different for a dpr_2, w_500 (so 1000px wide) image than for a dpr_1, w_1000 image, but we might at some point change that. We would probably not really change the quality target as such, but rather the relative bit allocation (more bits to low frequencies for higher dpr).
|
|
2021-02-18 06:33:19
|
External compute is not impossible (we do it for some addons, for example) but it's generally tricky to actually offload much burden - sending images around for processing also comes at a cost
|
|
|
|
paperboyo
|
2021-02-18 06:33:42
|
> dpr_2, w_500 vs dpr_1, w_1000
Yeah: so we differ target quality **dramatically** between those two.
|
|
|
_wb_
|
2021-02-18 06:35:11
|
It's a bit risky to make assumptions on physical dimensions or viewing distance based on dpr though
|
|
2021-02-18 06:35:32
|
We need better web platform level infrastructure for these things, imo
|
|
2021-02-18 06:35:55
|
Which gets complicated by privacy concerns (fingerprinting potential)
|
|
|
|
paperboyo
|
2021-02-18 06:43:55
|
> External compute … sending images around for processing
So in me little brain, I was thinking about running some compute on the source image that I would later send to the resizer together with an output of that compute (I don't know: that much high-freq detail, these many colours, etc.), and that would somehow help you immensely when making decisions around `auto`s. No extra images would fly around between us: I'm not thinking about requesting every image from a resizer at all possible qualities and formats and making the decision myself π . Possibly impossible, just a thought from an ignoramus. If that would be easy/possible, you would be doing it your-side, I suppose.
True, obviously, about risks around "physical" dimensions assumptions. Less for statically sized images vs. "fluid" ones (hero etc.), but yeah, risky still. We decided to do that, because tests on HiDPI devices proved we don't have to send 4 times the data over the wire, inconveniencing users more than trying to help them. And we've stuck with that.
|
|
|
Anthony
|
2021-02-18 06:49:39
|
I'd love to see a word about decoding times in articles saying "go ahead, use AVIF!"
|
|
|
_wb_
|
|
paperboyo
> External compute … sending images around for processing
So in me little brain, I was thinking about running some compute on the source image that I would later send to the resizer together with an output of that compute (I don't know: that much high-freq detail, these many colours, etc.), and that would somehow help you immensely when making decisions around `auto`s. No extra images would fly around between us: I'm not thinking about requesting every image from a resizer at all possible qualities and formats and making the decision myself π . Possibly impossible, just a thought from an ignoramus. If that would be easy/possible, you would be doing it your-side, I suppose.
True, obviously, about risks around "physical" dimensions assumptions. Less for statically sized images vs. "fluid" ones (hero etc.), but yeah, risky still. We decided to do that, because tests on HiDPI devices proved we don't have to send 4 times the data over the wire, inconveniencing users more than trying to help them. And we've stuck with that.
|
|
2021-02-18 07:00:41
|
What we see happening is that customers do different things depending on the context of the image: they might e.g. use q_auto:eco for the small thumbnails in product galleries, but q_auto:best for the large images on the product page, when people are considering to actually buy the thing and fidelity is important.
|
|
|
paperboyo
> External compute … sending images around for processing
So in me little brain, I was thinking about running some compute on the source image that I would later send to the resizer together with an output of that compute (I don't know: that much high-freq detail, these many colours, etc.), and that would somehow help you immensely when making decisions around `auto`s. No extra images would fly around between us: I'm not thinking about requesting every image from a resizer at all possible qualities and formats and making the decision myself π . Possibly impossible, just a thought from an ignoramus. If that would be easy/possible, you would be doing it your-side, I suppose.
True, obviously, about risks around "physical" dimensions assumptions. Less for statically sized images vs. "fluid" ones (hero etc.), but yeah, risky still. We decided to do that, because tests on HiDPI devices proved we don't have to send 4 times the data over the wire, inconveniencing users more than trying to help them. And we've stuck with that.
|
|
2021-02-18 07:03:34
|
Yes, it is now even possible to send a 2x image and use a new exif trick to tell browsers that its intrinsic dimensions are actually 3x. For the very high dpr devices that makes sense - no point sending 4K images to a smartphone, people just don't see the difference if you send something lower res instead
|
|
|
Anthony
I'd love to see a word about decoding times in articles saying "go ahead, use AVIF!"
|
|
2021-02-18 07:04:31
|
I think avif decode time with dav1d is OK, except of course that avif decode is not progressive and not even incremental.
|
|
|
Scope
|
2021-02-18 07:49:02
|
<:PepeHands:808829977608323112>
https://discord.com/channels/794206087879852103/806898911091753051/810300639065407499
|
|
2021-02-18 07:52:19
|
|
|
|
raysar
|
2021-02-19 03:40:13
|
Is there free tool to repair jpeg? And jxl?
For example i have a jpeg broken at the middle of the picture
|
|
|
_wb_
|
2021-02-19 07:13:34
|
Most of the broken images I see, are just truncated (e.g. partially downloaded)
|
|
2021-02-19 07:14:02
|
Not much you can do in terms of repairing
|
|
|
|
Deleted User
|
|
lonjil
>don't really need lossless for the web
maybe not for photographic content, but I beg to differ for all other stuff π
|
|
2021-02-19 08:33:41
|
Maybe in the future we'll use just lossless π€
|
|
|
_wb_
|
2021-02-19 08:45:08
|
Maybe. I doubt it though. Bitdepths will go up, and the fidelity we want from lossy compression will go up, but for anything that comes from noisy sensors, going full lossless is kind of wasteful since most of the entropy is in the noise that you don't see anyway (and with higher bit depths, there will be more of that).
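A quick illustration of that point, with purely synthetic data (no real sensor involved): the same smooth signal stops compressing well once you sprinkle a few bits of noise per sample on it.

```python
import random
import zlib

random.seed(0)
# a perfectly smooth repeating 8-bit ramp: almost no entropy
smooth = bytes(x % 256 for x in range(4096))
# the same signal with 3 bits of per-sample noise, like a noisy sensor
noisy = bytes(min(255, b + random.randrange(8)) for b in smooth)

# the noisy version compresses far worse, even though it looks the same
print(len(zlib.compress(smooth)), len(zlib.compress(noisy)))
```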
|
|
|
|
Deleted User
|
2021-02-19 08:47:10
|
You're right. Maybe visually lossless will be the better option for images with noise.
|
|
2021-02-19 08:47:34
|
Or maybe a filter could be applied before saving to remove the noise
|
|
2021-02-19 08:48:04
|
How much bit depth does jxl support?
|
|
|
Pieter
|
2021-02-19 08:49:38
|
as resolutions go up, maybe there is a place for algorithms that encode a lower-resolution version losslessly, and high-frequency corrections on top of that lossily
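A toy sketch of that idea (not how any real codec does it; all names here are made up): keep a 2x-downsampled base exactly, and store only coarsely quantized corrections for the high frequencies.

```python
# Toy "lossless base + lossy detail" scheme on an 8x8 grayscale image.
def encode(img, step=16):
    h, w = len(img), len(img[0])
    # lossless low-resolution base: 2x2 block averages
    base = [[sum(img[y*2 + dy][x*2 + dx] for dy in (0, 1) for dx in (0, 1)) // 4
             for x in range(w // 2)] for y in range(h // 2)]
    # lossy residual: nearest-neighbour upsample, coarsely quantize the diff
    resid = [[round((img[y][x] - base[y // 2][x // 2]) / step)
              for x in range(w)] for y in range(h)]
    return base, resid

def decode(base, resid, step=16):
    h, w = len(resid), len(resid[0])
    return [[base[y // 2][x // 2] + resid[y][x] * step for x in range(w)]
            for y in range(h)]

img = [[(x * 13 + y * 7) % 256 for x in range(8)] for y in range(8)]
base, resid = encode(img)
out = decode(base, resid)
# reconstruction error is bounded by half the quantization step
err = max(abs(out[y][x] - img[y][x]) for y in range(8) for x in range(8))
```

The `step` parameter is the lossy knob: larger steps mean cheaper residuals and larger (but bounded) error.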
|
|
|
_wb_
|
2021-02-19 09:41:10
|
The current implementation of jxl supports up to 24-bit integer bit depth losslessly, or 32-bit float. We could extend it to 64-bit float and ~60-bit integer if needed (but I don't see many use cases that can actually meaningfully use that kind of precision).
|
|
2021-02-19 09:44:41
|
<@799692065771749416>: the VarDCT approach is to encode the 1:8 image in whatever way you want, which then gets translated into "DC" coefficients (actual DC in case of 8x8 blocks, DC + some low frequency coefficients in case of bigger blocks).
|
|
2021-02-19 09:45:54
|
you could do the 1:8 image losslessly (well, up to color transforms I suppose, you don't want to encode things in RGB)
|
|
|
|
veluca
|
2021-02-19 09:49:40
|
between RCTs and chroma-from-luma, even RGB is likely not *that* bad of an idea
|
|
|
_wb_
|
2021-02-19 10:27:35
|
Yes, it's doable - but the encoder is currently optimized for XYB and it would take some effort to make it do something sensible for RGB
|
|
|
|
Deleted User
|
|
_wb_
The current implementation of jxl supports up to 24-bit integer bit depth losslessly, or 32-bit float. We could extend it to 64-bit float and ~60-bit integer if needed (but I don't see many use cases that can actually meaningfully use that kind of precision).
|
|
2021-02-19 10:51:26
|
I have some friends that do photography on film. I know they prefer to scan the pictures in TIFF format because it has 48 bit depth.
|
|
|
_wb_
|
2021-02-19 12:26:27
|
no it has not
|
|
2021-02-19 12:26:46
|
you are confusing 3 x 16 bit with 48-bit
|
|
2021-02-19 12:28:53
|
it is confusing, people often multiply the bit depth by the number of channels, and e.g. talk about PNG8, PNG24, PNG32, PNG48 and PNG64
|
|
2021-02-19 12:29:56
|
I prefer not to do that because it is very confusing: 32-bit can then mean either e.g. 32-bit float grayscale, or 16-bit grayscale with alpha, or 8-bit RGBA.
|
|
2021-02-19 12:31:26
|
so when I say jxl supports up to 24-bit integer bit depth, that is **per channel**, not in total. In total that could be up to "98376 bit", since you can have up to 4099 channels in jxl
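Checking the arithmetic in that message (the 3 + 4096 split in the comment is my reading of where 4099 comes from, not stated above):

```python
# "98376 bit" total: 24-bit integer depth per channel, times the
# maximum of 4099 channels (presumably 3 color + 4096 extra channels).
bits_per_channel = 24
max_channels = 4099
print(bits_per_channel * max_channels)  # 98376
```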
|
|
|
|
Deleted User
|
2021-02-19 01:21:47
|
Oh, OK. thanks for the info
|
|
|
diskorduser
|
2021-02-19 04:05:09
|
so, jxl supports 4722366482869645213696 colors?
|
|
|
raysar
|
|
_wb_
Most of the broken images I see, are just truncated (e.g. partially downloaded)
|
|
2021-02-19 04:23:57
|
So you are using a tool to scan the structure of the jpeg? What is its name?
|
|
|
_wb_
|
|
diskorduser
so, jxl supports 4722366482869645213696 colors?
|
|
2021-02-19 04:28:19
|
or more if you use floats, or if you use more than 3 channels to represent colors (e.g. you can use spot colors to reach otherwise out-of-gamut colors)
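For the curious, the 4722366482869645213696 figure works out as 24-bit integer samples across the 3 color channels:

```python
# 24-bit integer depth gives 2**24 codes per channel; with 3 color
# channels that is (2**24)**3 = 2**72 distinct colors.
values_per_channel = 2 ** 24
colors = values_per_channel ** 3
print(colors)  # 4722366482869645213696
```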
|
|
|
raysar
So you are using a tool to scan the structure of jpeg? What is that name?
|
|
2021-02-19 04:29:29
|
nah I usually just go by the errors libjpeg gives
|
|
|
Nova Aurora
|
|
_wb_
so when I say jxl supports up to 24-bit integer bit depth, that is **per channel**, not in total. In total that could be up to "98376 bit", since you can have up to 4099 channels in jxl
|
|
2021-02-19 05:03:23
|
Are you storing the entire electromagnetic spectrum?
|
|
|
_wb_
|
2021-02-19 05:28:44
|
I suppose you could, though XYB is designed for visible light
|
|
2021-02-19 05:29:27
|
but I guess you could go into very out-of-gamut colors that are no longer visible light? not sure actually how this works
|
|
|
lonjil
|
2021-02-19 06:23:28
|
Idk how XYB works in this regard, but in Lab, most of the out-of-gamut range is simply non-physical.
|
|
|
_wb_
|
2021-02-19 06:26:36
|
I guess it's the same in XYB
|
|
2021-02-19 06:26:51
|
Or in RGB for that matter
|
|
|
Pieter
|
2021-02-19 06:27:51
|
What are the constraints on XYB values?
|
|
2021-02-19 06:28:11
|
Are they all positive?
|
|
|
_wb_
|
2021-02-19 06:30:26
|
No, X is basically cube root of L-M so it gets negative
|
|
|
Pieter
|
2021-02-19 06:31:07
|
Well, anything that corresponds to a negative L, M, or S value is inherently non-physical, I think.
|
|
|
_wb_
|
2021-02-19 06:31:15
|
Y and B are basically cube root of L+M and S so should be positive
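A much-simplified sketch of the structure being described (real XYB in libjxl adds bias terms before the cube root and uses a specific LMS matrix; this only mirrors the opponent-channel shape from the messages above):

```python
def cbrt(v):
    # signed cube root, so opponent differences can go negative
    return v ** (1 / 3) if v >= 0 else -((-v) ** (1 / 3))

def lms_to_xyb(L, M, S):
    Lp, Mp, Sp = cbrt(L), cbrt(M), cbrt(S)
    X = (Lp - Mp) / 2   # red-green opponent: negative when M > L
    Y = (Lp + Mp) / 2   # luma-like: positive for physical light
    B = Sp              # blue channel from the S cone response
    return X, Y, B

print(lms_to_xyb(0.5, 0.5, 0.5))  # gray: X is exactly 0
```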
|
|
2021-02-19 06:33:21
|
I guess it depends on how you translate LMS to physical light
|
|
2021-02-19 06:34:28
|
I could imagine something that turns negative L into ultraviolet and negative S into infrared
|
|
|
spider-mario
|
2021-02-19 07:24:34
|
light outside of the visible spectrum would simply be projected to (0, 0, 0)
|
|
2021-02-19 07:24:46
|
since the color matching functions are 0 outside of the visible spectrum
|
|
2021-02-19 07:24:51
|
in LMS and XYZ, I mean
|
|
|
_wb_
|
2021-02-19 07:28:35
|
Yes, in that direction. But in the other direction, what do you map something with "impossible" LMS values to? Like L=big number, M=zero, S=some negative number
|
|
|
Pieter
|
2021-02-19 07:29:29
|
Actually, you don't need to go that extreme. Any mix where some of L, M, S are zero and the others nonzero is non-physical.
|
|
|
spider-mario
|
2021-02-19 07:30:13
|
ah, right, but maybe this can be achieved by exploiting chromatic adaptation by habituating the receptors
|
|
2021-02-19 07:30:27
|
see for example: https://upload.wikimedia.org/wikipedia/commons/5/56/Chimerical-color-demo.svg
|
|
|
_wb_
|
2021-02-19 07:30:34
|
Well I suppose you could make a very precise display that targets individual cone cells and stimulates them in arbitrary ways
|
|
|
spider-mario
|
2021-02-19 07:30:52
|
so we could have an animation that changes color to trigger this effect :p
|
|
2021-02-19 07:31:34
|
in context: https://en.wikipedia.org/wiki/Impossible_color#Chimerical_colors
|
|
|
_wb_
|
2021-02-19 07:33:37
|
Those can be represented in LMS/XYB but just aren't caused by any combination of light frequencies, right?
|
|
|
spider-mario
|
2021-02-19 07:33:49
|
that's my understanding of it
|
|
2021-02-19 07:34:15
|
it's also why the CIE chromaticity diagram is horseshoe-shaped in the first place
|
|
2021-02-19 07:34:47
|
(forgot the "horse" part)
|
|
|
_wb_
|
2021-02-19 07:38:44
|
It is quite fascinating how there is this line of purples of non-spectral colors that turns color into a circle
|
|
2021-02-19 07:38:46
|
https://en.m.wikipedia.org/wiki/Line_of_purples
|
|
|
Anthony
|
|
_wb_
I think avif decode time with dav1d is OK, except of course that avif decode is not progressive and not even incremental.
|
|
2021-02-20 06:26:28
|
Is that the decoder used by Chrome and Firefox? and what does OK mean?
|
|
|
_wb_
|
2021-02-20 06:28:23
|
OK means it is fast enough in practice, at least for 8-bit yuv420 avif decoding.
|
|
|
Scope
|
2021-02-21 03:47:48
|
4chan ||https://boards.4channel.org/g/thread/80290429||
|
|
|
190n
|
2021-02-21 03:49:08
|
lmao
|
|
2021-02-21 03:50:06
|
> 14 KB, 758x644
https://i.4cdn.org/g/1613829104548.jpg <:banding:804346788982030337><:banding:804346788982030337><:banding:804346788982030337>
|
|
|
Scope
|
2021-02-21 03:55:02
|
https://i.4cdn.org/g/1613851758062.png
|
|
|
190n
|
2021-02-21 03:55:48
|
correct answer "never"
|
|
2021-02-21 03:55:56
|
well assuming new codecs are allowed
|
|
|
Scope
|
|
Master Of Zen
|
2021-02-21 06:31:51
|
<@!111445179587624960> can you drop me the post?
|
|
|
_wb_
|
2021-02-21 07:22:41
|
Try that image with `-d 0 -g 3 -I 0 -s 9 -P 0`
|
|
2021-02-21 07:25:09
|
Also: my compliments to the chef for testing on such a representative corpus of a whopping N=1 images
|
|
2021-02-21 07:27:52
|
4chan is such an uncivilized, puerile place. It's kind of funny that even there, people are discussing image codecs
|
|
2021-02-21 07:41:05
|
https://boards.4channel.org/g/thread/80290429#p80300225 haha this made me chuckle
|
|
2021-02-21 07:43:06
|
"alwayspng.jpg.png
3.99 MB PNG"
|
|
|
190n
|
2021-02-21 07:44:06
|
i thought the images looked way worse downscaled for some reason
|
|
2021-02-21 07:44:15
|
turns out 4chan has low res jpeg previews
|
|
2021-02-21 07:44:18
|
https://i.4cdn.org/g/1613860519557s.jpg
|
|
|
Master Of Zen
|
|
_wb_
4chan is such an uncivilized, puberal place. It's kind of funny that even there, people are discussing image codecs
|
|
2021-02-21 09:19:59
|
go on 4chan and call them names π
|
|
2021-02-21 09:27:52
|
<@!794205442175402004> what you think about that blogpost? https://cloudinary.com/blog/how_to_adopt_avif_for_images_with_cloudinary
|
|
|
_wb_
|
2021-02-21 09:39:49
|
Main takeaway is that you can now try Visionular's AV1 encoder, Aurora, which is supposed to be the best one (but also a proprietary one and not easy to license either)
|
|
2021-02-21 09:44:39
|
At the moment, avif is the best available option in Chrome. Hopefully soon that will be jxl, but for now (since jxl is not yet supported) avif it is.
|
|
|
Master Of Zen
|
|
_wb_
At the moment, avif is the best available option in Chrome. Hopefully soon that will be jxl, but for now (since jxl is not yet supported) avif it is.
|
|
2021-02-21 10:58:01
|
I feel a little bit of irritation in that one π, am I wrong?
|
|
|
_wb_
|
2021-02-21 11:23:48
|
Well every step forward is nice, I am not complaining. Maybe with hindsight we'll later say that it would have been better to skip WebP and/or AVIF and immediately go for jxl, but we'll have to see what the future will bring...
|
|
|
Master Of Zen
|
2021-02-21 11:26:27
|
JPEG ULTRA waiting room
|
|
|
Scope
|
2021-02-21 05:31:31
|
Sometimes it's a pity that wavelet methods were not as effective and promising as everyone thought before
https://i.redd.it/h3fgeti8rqh61.png
|
|
2021-02-22 11:57:41
|
4chan ||https://boards.4channel.org/g/thread/80307415||
|
|
2021-02-22 11:57:45
|
|
|
2021-02-22 11:58:51
|
|
|
2021-02-22 11:59:42
|
|
|
2021-02-22 12:02:10
|
Patches <:Thonk:805904896879493180>
|
|
2021-02-22 12:06:42
|
<:Thonk:805904896879493180>
|
|
2021-02-22 12:17:58
|
π€
https://twitter.com/cramforce/status/1363182846731382790
|
|
|
_wb_
|
2021-02-22 12:19:09
|
π€¦
|
|
2021-02-22 12:19:48
|
I think I have hit my stupid-people-on-the-internet quota for this week now, thanks
|
|
2021-02-22 12:21:04
|
"This post is advice for tuning your automatic image optimization process where if you're anything like me you'll configure a global value once and then never look back. "
|
|
2021-02-22 12:22:06
|
are there really people encoding JPEG with a global setting of q50 or q60 and never looking back? my eyes hurt just thinking about what that means
|
|
2021-02-22 12:24:04
|
also, you could at least mention what JPEG encoder that is — there is quite some gap between libjpeg-turbo and mozjpeg
|
|
|
|
paperboyo
|
2021-02-22 01:23:34
|
> also, you could at least mention what JPEG encoder that is — there is quite some gap between libjpeg-turbo and mozjpeg
Yep, that! Took me a while to find it in https://www.industrialempathy.com/posts/avif-webp-quality-settings/#caveats. Still, I don't know, but I suppose libvips can use any encoder you want…
|
|
2021-02-22 01:44:56
|
To be honest, though, there isn't a lot of info on the interwebz on how to do multi-encoder quality normalisation for an unknown and varied image corpus… And Cloudinary's `q_auto`, imgIX's `auto=compress`, fastly's `optimize` etc. are just black boxes, too, with not enough dials to play with.
Easy to add a dancing dog to an image, and create 500b WebGL placeholders made of chocolate, but normalising quality… errrm… _that's a bit hard, let's not talk about it, I'm gonna sell my own box_.
At least his post mentions stuff like dssim and he mentions low sample size. I must admit my own sample size was of similar size. But instead of dssim I used my eyes 😱.
|
|
|
Master Of Zen
|
|
_wb_
Main takeaway is that you can now try Visionular's AV1 encoder, Aurora, which is supposed to be the best one (but also a proprietary one and not easy to license either)
|
|
2021-02-22 02:55:56
|
<@!794205442175402004>
|
|
|
_wb_
|
2021-02-22 03:21:44
|
just make a free cloudinary account
|
|
2021-02-22 03:22:44
|
or if you're too lazy for that, do `wget res.cloudinary.com/jon/w_500,f_avif,q_70/http://blabla/orig/image.png`
|
|
2021-02-22 03:23:24
|
where you can drop the `w_` resizing if the original is good, and you can change `q_` to something else
|
|
2021-02-22 03:23:58
|
(note we don't implement lossless avif because it's not good anyway, so like in jpeg, `q_100` is also lossy)
|
|
|
Nova Aurora
|
2021-02-22 04:18:55
|
Seems like JPEG2000 was the JXL of its day
|
|
2021-02-22 04:19:07
|
Futureproofed to hell
|
|
2021-02-22 04:19:39
|
Essentially removing the arbitrary restrictions of JPEG
|
|
|
|
Deleted User
|
2021-02-22 04:22:17
|
But unfortunately no open implementation and too slow for machines of the day. JPEG XL retains the speed, but you can trade it off for better encoding.
|
|
|
_wb_
|
2021-02-22 04:55:43
|
J2K failed to replace JPEG for several reasons imo.
- not sufficiently better compression (especially with early encoders it was barely better than jpeg)
- too slow for the hardware of its days
- too complicated to implement
- no good foss encoders available (even today)
- royalty-free status not clear in the beginning
- still worse than png for non-photo
|
|
|
Nova Aurora
|
2021-02-22 05:00:06
|
Can someone explain mozjpeg's trellis quantization to me?
|
|
|
Orum
|
2021-02-22 05:26:36
|
AVIF is 10-bit max? https://cloudinary.com/blog/time_for_next_gen_codecs_to_dethrone_jpeg?utm_source=reddit&utm_medium=social&utm_campaign=time-for-next-gen-codecs-to-dethrone-jpeg
|
|
2021-02-22 05:26:49
|
12-bit AVIFs work fine for me (even if there's not much point to them unless you're doing HDR) π€·
|
|
2021-02-22 05:37:16
|
I also wonder who is scoring things in those tables...
|
|
2021-02-22 05:43:41
|
and maybe add a "supports inter-frame compression" in addition to the "supports animation"
|
|
|
lonjil
|
2021-02-22 06:05:31
|
It's a scoring of image formats, I don't see why being good at video would be a plus.
|
|
|
_wb_
|
2021-02-22 06:08:27
|
AVIF is limited to 10-bit, yes.
|
|
2021-02-22 06:09:54
|
It's not clear to me to what extent inter frame is supposed to be supported in AVIF
|
|
|
lonjil
|
2021-02-22 06:12:14
|
From what I've read, avif sequence is a completely normal full on AV1 bitstream.
|
|
|
_wb_
|
2021-02-22 06:15:06
|
I think so too, but it wasn't always that way
|
|
2021-02-22 06:16:00
|
AVIF was supposed to be implementable without requiring a full AV1 decoder at some point. I guess they abandoned that idea though.
|
|
2021-02-22 06:59:20
|
If you feel like sharing my blogpost on various reddits or hackernews, please go ahead. When I do it myself it usually doesn't work well because people don't like self-promotion...
|
|
2021-02-22 07:00:22
|
I got banned from r/webdev/ for sharing my blogpost on progressive decoding there
|
|
|
Scope
|
2021-02-22 07:02:52
|
<https://www.reddit.com/r/AV1/comments/lpwpga/time_to_dethrone_jpeg_a_comparison_of_nextgen/>
|
|
|
Reddit β’ YAGPDB
|
|
lonjil
|
2021-02-22 07:49:52
|
I posted it on Tildes, and left a comment I hope will give correct background info to users there, let me know if I got anything wrong: https://tildes.net/~tech/vau/time_for_next_gen_codecs_to_dethrone_jpeg_comparison_with_newer_image_formats_by_co_creator_of_jpeg
|
|
|
_wb_
|
2021-02-22 07:51:58
|
"who one of the creators " there's an "is" missing
|
|
|
lonjil
|
|
_wb_
|
2021-02-22 07:57:17
|
Background info looks good to me, nice summary 👍
|
|
|
lonjil
|
2021-02-22 08:04:31
|
Thanks :)
|
|
|
fab
|
2021-02-22 08:15:13
|
can webp2 do 4:4:4 ?
|
|
|
_wb_
|
2021-02-22 08:41:33
|
Yes, unless skal changes his mind
|
|
2021-02-22 09:07:53
|
Gonna try on /r/programming too: https://www.reddit.com/r/programming/comments/lpzxnc/time_to_dethrone_jpeg_a_comparison_of_nextgen/
|
|
|
lonjil
|
2021-02-22 09:19:07
|
updooted
|
|
|
|
Deleted User
|
2021-02-22 09:20:34
|
Same here
|
|
|
Nova Aurora
|
|
Orum
|
2021-02-22 11:27:14
|
well, 12-bit may be out-of-spec for AVIF, but it works in practice <:Thonk:805904896879493180>
|
|
|
bonnibel
|
2021-02-22 11:29:31
|
"12-bit" just wish someone made ternary computers again π
|
|
|
Pieter
|
2021-02-22 11:34:15
|
12 bits = 7.571157 trits
|
|
|
lonjil
|
2021-02-23 12:16:01
|
12 bits = 8.317766167 nats
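The trit and nat figures above follow from a change of logarithm base; a quick stdlib check:

```python
import math

# 12 bits expressed in trits (base-3 digits) and nats (base-e units):
# divide by log2 of the target base, or multiply by ln(2) for nats.
bits = 12
trits = bits / math.log2(3)  # ≈ 7.571157
nats = bits * math.log(2)    # ≈ 8.317766
```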
|
|
|
_wb_
Gonna try on /r/programming too: https://www.reddit.com/r/programming/comments/lpzxnc/time_to_dethrone_jpeg_a_comparison_of_nextgen/
|
|
2021-02-23 12:19:13
|
Not nearly as bad as 4chan but I'm still getting cancer from that thread.
|
|
|
190n
|
2021-02-23 12:25:40
|
oh god
|
|
2021-02-23 12:25:55
|
someone says BPG has the "best license"??
|
|
|
BlueSwordM
|
|
lonjil
Not nearly as bad as 4chan but I'm still getting cancer from that thread.
|
|
2021-02-23 12:27:43
|
You're right.
|
|
2021-02-23 12:27:57
|
You also have this lovely part from the 1st guy. 😄
|
|
2021-02-23 12:27:59
|
`The fact that you are trying to claim dominance on the whole market instead of finding a niche (like, idk, focus on computer generated images, or viceversa) speaks of the contrast between your ambicious scope and your accomplishment so far. `
|
|
|
Nova Aurora
|
2021-02-23 12:28:25
|
But... we **Don't** want to be J2k
|
|
|
lonjil
|
2021-02-23 12:28:56
|
> You're not going to dethrone JPEG or MPEG before Intel CPUs have instructions dedicated to doing your decoding. :-)
|
|
|
Nova Aurora
|
2021-02-23 12:29:40
|
With video formats... ok hardware is important
|
|
2021-02-23 12:30:07
|
But are any image formats that hardware-dependent?
|
|
|
bonnibel
|
2021-02-23 12:30:36
|
darn we need intel to make some dedicated jxl instructions before i can view it on my phone
|
|
|
lonjil
|
|
Nova Aurora
But are any image formats that hardware-dependent?
|
|
2021-02-23 12:30:47
|
only for stuff like hardware encoding in cameras
|
|
2021-02-23 12:30:54
|
nothing on general purpose cpus
|
|
|
|
Deleted User
|
2021-02-23 12:32:25
|
Smartphones can encode in software
|
|
2021-02-23 12:33:48
|
I'm curious if there are any camera JPEG chips that can be repurposed for simple 8x8 VarDCT JXL backwards-compatible encoding...
|
|
|
Nova Aurora
|
2021-02-23 12:34:52
|
To what extent do hardware video coders reuse hardware between formats? like x264 x265 vp9 and av1
|
|
|
lonjil
|
2021-02-23 12:45:09
|
I think someone in the reddit thread is downvoting everyone with positive opinions of new image formats 🤔
|
|
|
BlueSwordM
|
2021-02-23 12:49:43
|
Honestly, who cares? <:kekw:808717074305122316>
|
|
2021-02-23 12:50:05
|
As long as you are reasonable, intelligent and thoughtful in your responses, downvotes do not matter.
|
|
|
lonjil
|
2021-02-23 01:35:32
|
> Spreadsheets are a niche of images.
|
|
2021-02-23 01:37:42
|
oh lord someone is saying that new formats should have to prove themselves for 10 to 15 years before being considered for the web
|
|
|
Nova Aurora
|
2021-02-23 01:39:27
|
> someone is saying that new formats should have to prove themselves for 10 to 15 years before being considered for the web
|
|
2021-02-23 01:39:44
|
How? How do you prove your format without the web?
|
|
2021-02-23 01:40:10
|
otherwise it gets stuck in a niche based on what the early adopters were
|
|
|
BlueSwordM
|
2021-02-23 01:55:16
|
They probably think JPEG came out in 1980. 😄
|
|
|
Pieter
|
2021-02-23 04:31:53
|
So your only option is probably being lucky enough to have your image format designed before the web emerged.
|
|
|
Nova Aurora
|
2021-02-23 04:32:39
|
Sooooooo.......
|
|
2021-02-23 04:32:51
|
Back to GIF, BMP, and JPEG
|
|
2021-02-23 04:33:04
|
any takers?
|
|
2021-02-23 04:33:45
|
I'm sorry but png didn't wait until it was put on the web, therefore it is unproven
|
|
|
Pieter
|
2021-02-23 04:34:21
|
TIFF.
|
|
|
Nova Aurora
|
2021-02-23 04:34:28
|
J2k might skirt in since its success has been outside the web
|
|
|
_wb_
|
|
lonjil
Not nearly as bad as 4chan but I'm still getting cancer from that thread.
|
|
2021-02-23 06:15:36
|
Oh boy
|
|
2021-02-23 06:16:13
|
https://c.tenor.com/-5jtyxuFnmwAAAAM/shit-jurassic-part.gif
|
|
2021-02-23 06:37:24
|
It's not that bad actually
|
|
2021-02-23 06:38:26
|
I already got the highlights of worst takes here before I read the thread there
|
|
|
Nova Aurora
|
2021-02-23 06:38:41
|
Just a lot of skeptics that want PNG and JPEG to be around forever
|
|
2021-02-23 06:39:14
|
And a few HEIC shills that think it's the best because apple uses it
|
|
|
_wb_
|
2021-02-23 06:42:28
|
That lake example is not contrived, it's what I get if I open the image in Apple Preview and do Save As HEIC with medium quality settings. I do not understand why anyone would consider such artifacts acceptable...
|
|
|
Nova Aurora
|
|
_wb_
That lake example is not contrived, it's what I get if I open the image in Apple Preview and do Save As HEIC with medium quality settings. I do not understand why anyone would consider such artifacts acceptable...
|
|
2021-02-23 06:44:07
|
They don't notice it
|
|
2021-02-23 06:44:31
|
The true tragedy of learning about image compression
|
|
2021-02-23 06:44:51
|
Acceptable becomes horrifyingly terrible
|
|
|
|
paperboyo
|
2021-02-23 08:46:25
|
I agree with someone there that the comparison images are not doing justice to the codec differences (even after a lot of zooming; WebP's blocky sky is hardly pronounced). A link to https://eclipseo.github.io/image-comparison-web/#buenos-aires&MOZJPEG=t&JXL=t&subset1 is more telling (although not all old codecs there, sadly).
|
|
|
_wb_
|
2021-02-23 09:29:25
|
https://sneyers.info/jxl/comparison/ is that better?
|
|
2021-02-23 09:31:02
|
when I look at the water, I can see the differences quite clearly
|
|
2021-02-23 09:31:30
|
but the limited width of the blogpost is a bit of a problem indeed
|
|
|
|
paperboyo
|
2021-02-23 09:40:43
|
> but the limited width of the blogpost is a bit of a problem indeed
True dat.
> is that better?
So on my macbook pro "retina" in a bright room, JP2k and WebP are acceptable and almost indistinguishable from better codecs. I guess what I would find more useful is not a comparison at "sensible" bpp (the realistic one, showing only slight differences). I would crank it down by 33%, so that only JPXL and AVIF look acceptable. Or — even further, so even the newest codecs are on the verge of breaking up ;-).
As soon as one understands that this is the same amount of bytes and sees the dramatic difference in image quality — that's the message.
|
|
|
_wb_
|
2021-02-23 09:48:11
|
maybe my standards are too high, but I consider even the JXL not really acceptable in the current example 🙂
|
|
|
|
paperboyo
|
2021-02-23 09:48:34
|
At least that's what I'm looking for when I want to get excited about JXL & AVIF. I always use Tiny on those comparison sites like eclipseo. To be perfectly honest, my production environment may also be closer to Tiny than it is to Small, too (yes, we are a bit radical, but it's only a small subset of images that are kinda unacceptable and I deem the compromise worth it. That's why I'm so excited about new formats).
|
|
|
_wb_
|
2021-02-23 09:49:13
|
https://tenor.com/view/faltas-de-ortograf%C3%ADa-eyes-eye-wash-gif-13468132
|
|
2021-02-23 09:50:05
|
is bandwidth that much of a cost for your use case?
|
|
2021-02-23 09:50:44
|
eclipseo tiny is really yucky if you ask me
|
|
|
|
paperboyo
|
2021-02-23 09:51:41
|
It's not about the cost of bandwidth. More about the democratic accessibility, if I may be allowed to sound a bit aloof :-D.
|
|
|
_wb_
|
2021-02-23 09:54:01
|
ok, but then I would ruin images when `save-data` is used, but not in the general case
|
|
2021-02-23 09:54:25
|
also, progressive helps more for that imo than low bitrates
|
|
2021-02-23 09:54:57
|
at least if network speed is the main bottleneck for accessibility
|
|
2021-02-23 09:55:31
|
if the network is nice and fast but just expensive per MB, then progressive doesn't help and bitrate is all that matters
|
|
|
|
paperboyo
|
2021-02-23 09:58:38
|
Hush, hush, but switch WebP off in the browser, make sure you are on dpr>1.3 (we reduce quality further) and look at the sky https://www.theguardian.com/news/gallery/2021/feb/19/floral-eye-shadow-and-newborn-foals-fridays-best-photos
|
|
2021-02-23 09:59:07
|
This pic:
|
|
|
_wb_
|
2021-02-23 10:00:28
|
<:banding:804346788982030337>
|
|
|
|
paperboyo
|
2021-02-23 10:00:47
|
Yep.
|
|
2021-02-23 10:25:22
|
Lack of quality normalisation between encoders (AFAIU, it cannot really be done perfectly and statically, because they differ depending on the image makeup in different parts of the quality spectrum) and lack of flexibility from resizing service's automagic blackboxes is a big problem for us. Not sure DSSIM can take physical image density into account. But I do when ~~pulling me quality numbers out of a hat~~ using a panel of trained scientists to set quality.
I am very much interested in claims that the quality scale for JXL is based on some perceptual metric (would love to read more about that!).
And I would love to see the best automagick blackbox dissected and reverse engineered, so that we can look into doing it on our side (most of the time, we have plenty of spare time before the image must be ready and even if it's seconds, we could replace it later). I see Malte Ubl's article useful for all its flaws. I get that CDNs/resizers won't ever be offering a real per-render perfect quality setup or they would go out of business. But how I can get close to it without creating multiple renders for every image is something I'm most interested in.
Also, having occasional poor quality helps for when photographers are upset we are sharing high-resolution versions of their pics. Sometimes it's still Napster vs Metallica 🙂.
|
|
|
_wb_
|
2021-02-23 10:49:30
|
Yes, I have had discussions with photographers about generation loss resiliency, where some of them actually have the opinion that generation loss is a feature, not a bug, because it acts as a copyright protection mechanism when a photo gets shared a lot
|
|
2021-02-23 10:51:32
|
The cjxl encoder sets quality based on targets that are expressed in "just-noticeable difference" units
|
|
2021-02-23 10:51:38
|
distance 0 is mathematically lossless
|
|
2021-02-23 10:51:56
|
distance 1 is just-noticeable difference (in a careful flip-test)
|
|
2021-02-23 10:52:18
|
distance 2 is probably no noticeable difference in a side by side comparison
|
|
2021-02-23 10:52:42
|
distance 3 is probably acceptable if you don't see the original (like on the web)
|
|
2021-02-23 10:53:14
|
distance 4 is probably acceptable if you can live with some artifacts in return for smaller/faster images
|
|
2021-02-23 10:53:27
|
I personally wouldn't go higher than that, but you can
|
|
|
Fox Wizard
|
2021-02-23 10:58:12
|
Nah, we all know this is the best quality you can possibly want and get ;)
|
|
|
_wb_
|
2021-02-23 11:00:11
|
For Cloudinary, I implemented the q_auto feature which selects a good encode setting based on the image. For JPEG that can mean anything between q60 4:2:0 (or lower, if you don't do default q_auto) and q92 4:4:4 (or higher, with q_auto:best)
|
|
2021-02-23 11:00:47
|
Operationally that is nontrivial, and even if you have such a system, it is somewhat error prone
|
|
2021-02-23 11:01:12
|
With jxl, I can just set the distance to 2 and call it a day
|
|
2021-02-23 11:01:35
|
Or to 3 for q_auto:eco and 4 for q_auto:low
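The distance scale described above maps tidily onto those q_auto tiers. A hypothetical helper sketching that mapping — the tier names and distance values are taken from the messages above, not from any real cjxl or Cloudinary API (cjxl's actual knob is just `-d <distance>`):

```python
# Hypothetical q_auto tier -> cjxl distance mapping, per the scale above.
QAUTO_TO_DISTANCE = {
    "best": 1.0,  # just-noticeable difference in a careful flip test
    "good": 2.0,  # probably no noticeable difference side by side
    "eco": 3.0,   # probably acceptable if you don't see the original
    "low": 4.0,   # some artifacts in return for smaller/faster images
}

def cjxl_args(src: str, dst: str, tier: str = "good") -> list:
    """Build an argument list for a cjxl invocation at the given tier."""
    return ["cjxl", src, dst, "-d", str(QAUTO_TO_DISTANCE[tier])]
```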
|
|
|
|
paperboyo
|
2021-02-23 11:01:39
|
> it is somewhat error prone
Never-ever would I expect perfection! As you could see above, we have our share of errors already ;-).
|
|
|
_wb_
|
2021-02-23 11:02:21
|
JXL is going to make one of Cloudinary's killer features, q_auto, completely redundant
|
|
2021-02-23 11:02:38
|
Eventually
|
|
|
Scope
|
2021-02-23 01:38:34
|
<https://www.reddit.com/r/webdev/comments/lqi1q6/time_to_dethrone_jpeg_a_comparison_of_nextgen/>
|
|
2021-02-23 01:38:43
|
https://tenor.com/view/sweat-sweating-nervous-wet-sad-gif-5796001
|
|
|
_wb_
|
|
|
Deleted User
|
|
_wb_
https://sneyers.info/jxl/comparison/ is that better?
|
|
2021-02-23 03:00:35
|
JPEG: awful banding/blocking but still some detail [0.5/5]
J2K: quite smeary/blurry, loss of details [1/5]
WebP: sky banding, selectively blurred water, desaturation of small details [1.5/5]
HEIC: selectively blurred water, slight chroma/luma shift [3/5]
AVIF: stars disappear, extremely blurry water, desaturation of small details [2/5]
JXL: smaller amounts of ringing or blur [3.5/5]
|
|
2021-02-23 03:01:43
|
<@!794205442175402004> Btw, do you want to add your blog link to https://sneyers.info/jxl ?
|
|
|
_wb_
|
2021-02-23 03:12:36
|
Yes, still have to do that
|
|
2021-02-23 03:13:27
|
The luminance issue you're seeing is most likely a chrome png rendering bug
|
|
2021-02-23 03:15:00
|
https://res.cloudinary.com/jon/image/fetch/f_png,q_100,cs_tinysrgb/https://sneyers.info/jxl/comparison/full-jxl.png
|
|
2021-02-23 03:15:14
|
Does that look better?
|
|
2021-02-23 03:15:35
|
https://sneyers.info/jxl/comparison/full-jxl.png
|
|
2021-02-23 03:15:52
|
These two images should look the same
|
|
2021-02-23 03:16:33
|
But in mobile chrome the second one looks desaturated/brighter for some reason
|
|
2021-02-23 03:21:59
|
I don't know why mobile chrome is rendering that png differently from desktop chrome
|
|
|
|
Deleted User
|
2021-02-23 03:23:46
|
No, they look the same for me (Firefox).
|
|
|
_wb_
|
2021-02-23 03:23:54
|
<@768090355546587137> do you know? Is mobile chrome somehow applying both the gAMA and the ICC profile?
|
|
2021-02-23 03:24:12
|
Firefox?
|
|
2021-02-23 03:24:27
|
Oh firefox likely renders the other pngs incorrectly
|
|
|
|
Deleted User
|
2021-02-23 03:24:36
|
What settings did you use to encode the JXL?
|
|
|
_wb_
|
2021-02-23 03:25:30
|
I don't remember, just picked a -d to match the size
|
|
2021-02-23 03:25:53
|
-d 4.5 or so iirc
|
|
|
|
Deleted User
|
2021-02-23 03:26:04
|
So we are at a stage where no browser can render PNG correctly, a format from the last century. ^^
|
|
|
_wb_
|
2021-02-23 03:27:10
|
https://res.cloudinary.com/cloudinary-marketing/image/upload/cs_tinysrgb/v1613766717/Web_Assets/blog/original.png
|
|
2021-02-23 03:27:21
|
Does that look the same in firefox?
|
|
|
Nova Aurora
|
2021-02-23 03:27:33
|
looks same
|
|
|
_wb_
|
2021-02-23 03:27:47
|
Same as https://res.cloudinary.com/cloudinary-marketing/image/upload/v1613766717/Web_Assets/blog/original.png I mean
|
|
|
Nova Aurora
|
2021-02-23 03:28:57
|
I think so?
|
|
|
_wb_
|
2021-02-23 03:29:04
|
Firefox by default does not do color management on images without explicit icc profile
|
|
2021-02-23 03:29:18
|
Which is fine if you happen to have an sRGB screen
|
|
2021-02-23 03:31:45
|
On my phone, the comparison looks ok in firefox
|
|
2021-02-23 03:31:50
|
But not in chrome
|
|
|
|
Deleted User
|
2021-02-23 03:31:58
|
ok, so the full-jxl.png looks fine when downloaded
|
|
2021-02-23 03:33:22
|
but firefox darkens the image
|
|
2021-02-23 03:35:50
|
Final rating for low bpp: JXL > HEIC >> AVIF > WebP > J2K > JPEG
|
|
|
Crixis
|
2021-02-23 03:36:59
|
i prefer AVIF over HEIC, trees on HEIC are completely flat
|
|
|
|
Deleted User
|
2021-02-23 03:42:43
|
Yeah, we should keep in mind that this comparison is only based on one image and other photos might perform differently across the codecs.
|
|
|
_wb_
|
2021-02-23 03:43:02
|
It depends a bit on where I look, heic and avif are roughly equivalent to me
|
|
|
|
paperboyo
|
2021-02-23 03:47:51
|
> Is mobile chrome somehow applying both the gAMA and the ICC profile?
Can't remember which browser is wrong where anymore… This is useful: https://kornel.ski/en/color for testing (doesn't have DCF EXIF for JPEGs, though).
|
|
|
_wb_
|
2021-02-23 03:52:28
|
Firefox is wrong on that test, chrome is right
|
|
2021-02-23 03:53:29
|
But somehow on the png output of djxl, mobile chrome renders it differently than desktop chrome
|
|
|
|
paperboyo
|
2021-02-23 03:58:10
|
Even with `gfx.color_management.mode` set to `1` from the default, Firefox isn't perfect. But it is much, much better. Incidentally, outdated colour management just got AVIF delayed in Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=1682995#c32. I'm kinda happy, maybe they will feel the pressure to finally fix it (the default got changed in one release recently, but they chickened out after reports of some smaller bugs. I have mine set to `1` since I can remember).
|
|
|
_wb_
Does that the same in firefox?
|
|
2021-02-23 04:02:43
|
First image has (`pngcheck`):
```
chunk iCCP at offset 0x00025, length 297
  profile name = icc, compression method = 0 (deflate)
  compressed profile = 292 bytes
```
Second one has extra `gAMA`:
```
chunk iCCP at offset 0x00025, length 453
  profile name = 1, compression method = 0 (deflate)
  compressed profile = 450 bytes
chunk gAMA at offset 0x001f6, length 4: 0.45455
```
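The `gAMA` value pngcheck prints is stored on the wire as gamma × 100000 in a big-endian uint32 (per the PNG spec), so the "0.45455" above is the integer 45455. A minimal single-chunk round trip, not a full PNG parser:

```python
import struct
import zlib

# Build and parse a standalone gAMA chunk: 4-byte length, 4-byte type,
# 4-byte payload (gamma * 100000), 4-byte CRC over type + payload.
def gama_chunk(gamma):
    data = struct.pack(">I", round(gamma * 100000))
    crc = zlib.crc32(b"gAMA" + data)
    return struct.pack(">I", len(data)) + b"gAMA" + data + struct.pack(">I", crc)

def parse_gama(chunk):
    length, ctype = struct.unpack(">I4s", chunk[:8])
    assert ctype == b"gAMA" and length == 4
    return struct.unpack(">I", chunk[8:12])[0] / 100000
```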
|
|
2021-02-23 04:50:43
|
The ICC profiles also look different. First one is Facebook's simplified sRGB equivalent. The second is described thus in Photoshop (which also displays differences between pics):
|
|
|
_wb_
|
2021-02-23 05:02:37
|
Yes, I know they are different, but they should both be basically sRGB
|
|
2021-02-23 05:03:05
|
It must be the gAMA chunk that is upsetting mobile chrome
|
|
2021-02-23 05:08:04
|
|
|
2021-02-23 05:08:14
|
3 more likes...
|
|
2021-02-23 05:08:40
|
https://c.tenor.com/LG4GkCe_p-gAAAAM/puss-in-boots-cat.gif
|
|
|
|
paperboyo
|
2021-02-23 05:13:36
|
I would expect `gAMA` to be discarded if there is an ICC profile to use. But Photoshop displays those pics differently even if I assign standard sRGB to both. Suggesting the pixel values differ:
|
|
|
_wb_
|
2021-02-23 05:15:46
|
There could be some rounding differences, there shouldn't be huge pixel differences though
|
|
|
|
paperboyo
|
|
_wb_
|
2021-02-23 05:43:54
|
https://twitter.com/jonsneyers/status/1363889625702543367?s=19
|
|
2021-02-23 05:44:02
|
One more like to get 4:4:4
|
|
|
Nova Aurora
|
2021-02-23 05:45:18
|
I don't have a twitter
|
|
|
_wb_
|
2021-02-23 05:47:26
|
It's at 444 now
|
|
2021-02-23 05:47:39
|
Now stop liking it please 🙏
|
|
|
|
Deleted User
|
|
_wb_
Now stop liking it please π
|
|
2021-02-23 05:49:20
|
https://tenor.com/view/no-bugs-bunny-nope-gif-14359850
|
|
|
Nova Aurora
I don't have a twitter
|
|
2021-02-23 05:49:55
|
You don't need Twitter account if you want to only read the post
|
|
|
Pieter
|
2021-02-23 05:54:24
|
<@794205442175402004> Max 4099 channels? Who comes up with such a number?
|
|
|
Nova Aurora
|
|
You don't need Twitter account if you want to only read the post
|
|
2021-02-23 05:54:39
|
I was talking about liking it
|
|
|
|
veluca
|
|
Pieter
<@794205442175402004> Max 4099 channels? Who comes up with such a number?
|
|
2021-02-23 05:55:42
|
4096 + 3 color channels π
|
|
|
lonjil
|
2021-02-23 05:57:11
|
How do you signal having only say 2 channels?
|
|
|
_wb_
|
2021-02-23 05:59:25
|
Can signal the main image to be grayscale
|
|
2021-02-23 05:59:43
|
And then one extra channel for e.g. alpha
|
|
|
Pieter
<@794205442175402004> Max 4099 channels? Who comes up with such a number?
|
|
2021-02-23 06:04:38
|
The 3 main channels are special. The VarDCT part is hardcoded for 3 channels, and many things are shared between those 3 channels (they use the same block type selection, adaptive quant weights, epf filter strength, etc).
|
|
2021-02-23 06:05:37
|
Any other channels are encoded with Modular, which can deal with an arbitrary number of channels
|
|
|
Pieter
|
2021-02-23 06:06:12
|
but you can choose to use modular for the main channels too?
|
|
|
_wb_
|
2021-02-23 06:06:58
|
The limit of 4096 extra channels is just because that's the highest number the header can signal. We could allow more but we don't know applications that could have a use for more than a few dozen
|
|
2021-02-23 06:07:07
|
Yes, modular can also do the main channels
|
|
|
Pieter
|
2021-02-23 06:07:31
|
modular is the maniac successor?
|
|
2021-02-23 06:07:39
|
or alternative
|
|
|
_wb_
|
2021-02-23 06:07:50
|
Yes, it's based on maans
|
|
2021-02-23 06:10:18
|
Also contains a cool new predictor made by Alexander Rhatushnyak
|
|
2021-02-23 06:11:03
|
It's a self-correcting weighted predictor that feeds past errors back in its subpredictors
|
|
2021-02-23 06:11:23
|
Very good for photographic images
|
|
2021-02-23 06:12:13
|
Can use it with a simple context model based on only the quantized max error it has to maintain anyway
|
|
2021-02-23 06:12:31
|
That's what we do at lossless speed 3 at the moment
|
|
2021-02-23 06:13:05
|
But it is also just available as a predictor, and this error can be used as a property in a MA tree
|
|
2021-02-23 06:13:32
|
(for speed 3 we just use a fixed tree that is special cased to avoid branching)
|
|
2021-02-23 06:14:26
|
Also new compared to maniac is that the predictor is not signalled per channel, but it is signalled per MA leaf node
|
|
2021-02-23 06:14:52
|
which means you can use a predictor depending on context, which is quite cool
|
|
|
Pieter
|
|
_wb_
|
2021-02-23 06:16:01
|
Besides the predictor, leaf nodes can also have a multiplier and an offset
|
|
2021-02-23 06:21:02
|
The multipliers allow you to do something similar to quantization, except it's not the values themselves that get quantized, but the predictor residuals, which is not the same thing: first quantizing and then predicting gives color banding, while doing it the other way around gives smoother gradients (with the right predictors)
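That difference can be seen on a toy 1-D ramp — a sketch under the stated assumption of a simple linear-extrapolation predictor, not JXL's actual scheme:

```python
# Step-16 quantization applied to values vs. to predictor residuals.
def quantize(v, step=16):
    return round(v / step) * step

def quantize_values(samples, step=16):
    # quantizing the values turns a smooth ramp into a staircase (banding)
    return [quantize(v, step) for v in samples]

def quantize_residuals(samples, step=16):
    # predict by linear extrapolation, quantize only the residual:
    # on a smooth ramp the residuals are ~0, so the gradient survives
    recon = list(samples[:2])
    for v in samples[2:]:
        pred = 2 * recon[-1] - recon[-2]
        recon.append(pred + quantize(v - pred, step))
    return recon
```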
|
|
|
|
Deleted User
|
|
_wb_
The 3 main channels are special. The VarDCT part is hardcoded for 3 channels, and many things are shared between those 3 channels (they use the same block type selection, adaptive quant weights, epf filter strength, etc).
|
|
2021-02-23 06:30:21
|
> The VarDCT part is hardcoded for 3 channels
So you can't use VarDCT for "true" (single-channel) grayscale?
|
|
|
_wb_
|
2021-02-23 06:33:30
|
You can, the current code will probably allocate buffers for chroma though (which is just all zeroes in the case of grayscale)
|
|
|
spider-mario
|
2021-02-23 07:16:30
|
just created an HDR avif, curious how Discord and Chrome render it
|
|
2021-02-23 07:16:39
|
okay, they don't
|
|
2021-02-23 07:16:52
|
yet it showed me a preview when uploading
|
|
2021-02-23 07:18:54
|
when opened in its own tab, it seems fine on desktop Chrome, albeit with the highlights clipped quite soon when the desktop is not in HDR mode
|
|
|
Nova Aurora
|
2021-02-23 07:19:30
|
firefox requires you to download it if you have the flag enabled
|
|
|
|
Deleted User
|
|
spider-mario
yet it showed me a preview when uploading
|
|
2021-02-23 08:20:44
|
For me it didn't
|
|
|
spider-mario
|
|
For me it didn't
|
|
2021-02-23 08:29:56
|
do you mean if you download it and then try to upload it yourself?
|
|
|
|
Deleted User
|
|
spider-mario
do you mean if you download it and then try to upload it yourself?
|
|
2021-02-23 08:34:50
|
Exactly (I'm talking about Windows app, not the website)
|
|
|
spider-mario
|
2021-02-23 08:36:01
|
ah, that could explain it
|
|
2021-02-23 08:36:12
|
for me, in the web version in Chrome, it looks like this:
|
|
2021-02-23 08:36:20
|
|
|
|
lonjil
|
2021-02-23 08:36:38
|
App is probably an ancient version of electron from before avif support..
|
|
|
VEG
|
2021-02-24 08:51:53
|
https://storage.googleapis.com/demos.webmproject.org/webp/cmp/2021_02_15/index.html#kopenhagen-tivoli-9120&Original=s&AVIF-AOM=b&subset1
|
|
2021-02-24 08:52:12
|
Something is wrong. AVIF is much brighter than the original.
|
|
|
spider-mario
|
2021-02-24 08:53:51
|
for me, it displays a bright overlay while the UI is loading, but then the two images are equally light
|
|
|
VEG
|
2021-02-24 08:58:42
|
Interesting. On another machine it is displayed properly 🤔
|
|
|
Master Of Zen
|
2021-02-24 09:22:56
|
<@!794205442175402004> what do you think about AQ modes? Imo some are quite controversial and can make your image worse
|
|
|
_wb_
|
2021-02-24 09:38:49
|
AQ modes?
|
|
|
Master Of Zen
|
|
_wb_
AQ modes?
|
|
2021-02-24 09:45:19
|
Adaptive quantization
|
|
|
_wb_
|
2021-02-24 09:49:03
|
in what codec?
|
|
|
Master Of Zen
|
|
_wb_
in what codec?
|
|
2021-02-24 09:50:50
|
AV1, HEVC, etc
|
|
2021-02-24 09:51:16
|
for example in this comparison https://discord.com/channels/794206087879852103/803645746661425173/814003600698900490
|
|
2021-02-24 09:51:30
|
used aq-mode 1, which is variance adaptive quantization
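Roughly, variance AQ measures how busy each block is and quantizes flat blocks more finely, since errors hide better in texture than in smooth areas. A sketch under that assumption — the log-variance offset formula here is illustrative, not aomenc's actual one:

```python
import math

# Per-block quantizer adjustment from block variance (toy variance AQ).
def block_variance(block):
    n = len(block)
    mean = sum(block) / n
    return sum((v - mean) ** 2 for v in block) / n

def adaptive_q(base_q, block, strength=1.0):
    var = block_variance(block)
    # zero offset near a "reference" variance of 255; flat blocks get a
    # lower (finer) q, busy blocks a higher (coarser) one
    offset = strength * math.log2((var + 1) / 256)
    return max(1, round(base_q + offset))
```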
|
|
|
_wb_
|
2021-02-24 09:52:13
|
is there a description somewhere of how these AQ modes work?
|
|
2021-02-24 09:53:00
|
adaptive quantization can only be as good as the metric you're using to select the quantweights
|
|
2021-02-24 09:55:01
|
that's the problem when you optimize for e.g. PSNR in YCbCr: that's a poor metric in a perceptually not very relevant color space. Optimizing for that will naturally lead to nice results in PSNR-measuring benchmarks, and poor visual results (blurry, color banding in the darks, etc)
|
|
|
Master Of Zen
|
|
_wb_
is there a description somewhere of how these AQ modes work?
|
|
2021-02-24 09:55:39
|
Not a lot, for example this paper http://www.diva-portal.org/smash/get/diva2:906171/FULLTEXT01.pdf
|
|
2021-02-24 12:21:19
|
AV1
|
|
2021-02-24 12:21:27
|
Source
|
|
2021-02-24 12:22:28
|
AV1 `--threads=8 -b 10 --cpu-used=6 --end-usage=q --cq-level=30 --tile-columns=2 --tile-rows=1`
|
|
2021-02-24 12:22:29
|
|
|
2021-02-24 12:22:50
|
Butteraugli score `10.519846`
|
|
2021-02-24 12:22:53
|
<:kekw:808717074305122316>
|
|
2021-02-24 12:23:28
|
|
|
2021-02-24 12:31:15
|
Original
|
|
2021-02-24 12:31:25
|
aomenc cq20 cpu1
|
|
2021-02-24 12:31:31
|
|
|
2021-02-24 12:31:36
|
`3.674355`
|
|
2021-02-24 12:41:06
|
Same image
|
|
2021-02-24 12:41:17
|
Source
|
|
2021-02-24 12:41:22
|
Encoded
|
|
2021-02-24 12:41:43
|
|
|
2021-02-24 12:41:59
|
`1.385405`
|
|
2021-02-24 12:45:34
|
https://tenor.com/view/breakingbad-science-yeah-science-gif-5954775
|
|
|
_wb_
|
2021-02-24 01:03:53
|
https://tenor.com/view/spit-take-laugh-lmao-gif-9271200
|
|
|
Scope
|
2021-02-24 03:15:21
|
https://twitter.com/dericed/status/1364343123271376898
|
|
|
fab
|
2021-02-24 03:35:52
|
and with butteraugli tune zen
|
|
2021-02-25 02:45:26
|
does AV1 aomenc look better with butteraugli
|
|
2021-02-25 02:45:32
|
like the options i'm doing
|
|
2021-02-25 02:45:43
|
i'm not asking like 0.3.0 -s 4 -q 99.2
|
|
2021-02-25 02:46:02
|
but at least like the vardct and modular custom settings i'm using
|
|
2021-02-25 02:46:10
|
the first went from 13 MB to 2 MB
|
|
2021-02-25 02:46:40
|
or is only a bit more tuned
|
|
2021-02-25 02:46:44
|
nothing interesting
|
|
|
Scope
|
2021-02-25 02:49:23
|
<@!794205442175402004> Also, about other Discord servers discussing compression and formats: I haven't heard of any (maybe some run by enthusiasts), but mostly everyone stayed on IRC. Maybe it would be convenient, for those who use a single client and for the sake of history, to add a bridge here at least to read the freenode #daala channel (but I don't know of reliable Discord bots for that)
|
|
|
_wb_
|
2021-02-25 03:32:47
|
it would be nice to have such a bridge
|
|
|
Scope
|
2021-02-25 04:28:58
|
For example, the IRC bridge exists in Matrix, it looks like this: <https://app.element.io/#/room/#freenode_#daala:matrix.org>
but for Discord I found only bots that need to be installed on the server <:SadCat:805389277247701002>
|
|
|
_wb_
|
2021-02-25 04:40:52
|
Nathan does sound a bit angry in that reddit comment; I wonder what those many details are that I got wrong
|
|
2021-02-25 04:46:20
|
It is also somewhat surprising to me that AVIF is apparently not yet finalized
|
|
2021-02-25 04:46:33
|
There is AVIF 1.0 from February 2019
|
|
2021-02-25 04:46:48
|
Then there is the current "working draft"
|
|
2021-02-25 04:46:51
|
https://aomediacodec.github.io/av1-avif/#change-list
|
|
2021-02-25 04:47:36
|
Is there a schedule for when AVIF 2.0 (or 1.1 or whatever) will be released?
|
|
|
Scope
|
2021-02-25 04:49:02
|
Yep, it would be better to specify what is wrong. I've also noticed that the developers of other formats are more unfriendly when someone compares or points out the flaws of their formats; from the JXL side it is gentler
|
|
|
_wb_
|
2021-02-25 04:49:58
|
When we say Chrome and Firefox support AVIF, what exactly does that mean? Do they support AVIF 1.0, or the current AVIF working draft, or is it a commitment to staying up to date with the latest version of the spec?
|
|
2021-02-25 04:50:29
|
Will there be a different media type than `image/avif` when there is a new version of the spec?
|
|
2021-02-25 04:51:30
|
Someone pointed out to me that I shouldn't say that AVIF is limited to 10-bit, because there exists a pull request to the AVIF spec that wants to bump it up to 12.
|
|
|
lonjil
|
2021-02-25 04:51:38
|
10 minutes ago:
> Hi Jon. Your article has actually started a good conversation on the AVIF standards list about the need for more clarity around 12-bit AVIF. The spec authors were not aware that the wording in the document was ambiguous and lead outsiders to believe that 12-bit was not supported, which of course is not the case as the AOM official libavif implementation supports 12-bit just fine.
>
> Hopefully that will become explicit in an updated spec document, so stay tuned.
|
|
2021-02-25 04:51:54
|
So apparently it is supposed to be 12 bit, they just didn't spec it right.
|
|
|
_wb_
|
2021-02-25 04:53:59
|
Seems like a stretch to say it was ambiguous. The spec states quite clearly that you have to declare a profile, and then it lists two profiles: Baseline and Advanced, which correspond to AV1 Main and AV1 High respectively.
|
|
2021-02-25 04:55:25
|
But OK, great that 12-bit is officially supported then
|
|
2021-02-25 04:55:30
|
or will be
|
|
2021-02-25 04:58:00
|
It still bugs me a bit that the AVIF spec remains something that can be modified just like that. Maybe someone was writing an AVIF decoder and was somehow counting on that 10-bit limit (I dunno, using a compact representation of 4 bytes per pixel, which can be done if 10-bit is the limit but not if 12-bit is the limit), and now this "clarification" can mean that they need to start from scratch
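A hedged illustration of the kind of decoder assumption described above (the 4-bytes-per-pixel example is hypothetical, not a known implementation): three 10-bit channels fit exactly in one 32-bit word, but three 12-bit channels need 36 bits, so a "clarified" 12-bit limit breaks the layout.

```python
def pack_rgb30(r, g, b):
    """Pack three 10-bit channels into one 32-bit word (2 bits spare).

    Hypothetical decoder-internal layout: viable only if the spec caps
    samples at 10 bits. Three 12-bit channels would need 36 bits and
    cannot use this representation at all.
    """
    for c in (r, g, b):
        if not 0 <= c < (1 << 10):
            raise ValueError("channel exceeds 10 bits")
    return (r << 20) | (g << 10) | b

def unpack_rgb30(word):
    """Recover the three 10-bit channels from the packed word."""
    return (word >> 20) & 0x3FF, (word >> 10) & 0x3FF, word & 0x3FF

# Round-trips fine at 10 bits; a 12-bit sample (e.g. 4095) is rejected
assert unpack_rgb30(pack_rgb30(1023, 512, 0)) == (1023, 512, 0)
```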
|
|
|
Scope
|
2021-02-25 04:59:57
|
AVIF looks to me like a format created on demand in a big rush, with support quickly implemented in browsers and wherever else possible, and further development left for later when it is already in use (without proper time to discuss and develop the specification and needed features beforehand)
Perhaps this was a quick response aiming to replace HEIC, or a request from Netflix since they urgently needed HDR
|
|
|
_wb_
|
|
lonjil
10 minutes ago:
> Hi Jon. Your article has actually started a good conversation on the AVIF standards list about the need for more clarity around 12-bit AVIF. The spec authors were not aware that the wording in the document was ambiguous and lead outsiders to believe that 12-bit was not supported, which of course is not the case as the AOM official libavif implementation supports 12-bit just fine.
>
> Hopefully that will become explicit in an updated spec document, so stay tuned.
|
|
2021-02-25 05:01:45
|
where does that come from?
|
|
|
lonjil
|
2021-02-25 05:01:58
|
Reddit
|
|
2021-02-25 05:02:17
|
Unlord replied to you
|
|
|
BlueSwordM
|
|
_wb_
Nathan does sound a bit angry in that reddit comment; I wonder what those many details are that I got wrong
|
|
2021-02-25 05:02:31
|
I think it's that:
1. You said 10-bit support max, while AVIF actually supports 12-bit natively just fine; it just needs better tagging support. Of course, it's odd that the AVIF spec didn't make that explicit.
2. Decoding speed isn't actually much of an issue if you use, say, dav1d. Not only does it scale very well with resolution, but single-threaded decoding performance is quite good (I still haven't found a way to test decoding speed with avifdec yet, though).
3. AVIF doesn't actually use purely independent tiles, unlike HEIF. It's actually an AV1 feature that tiles can still access a lot of information from other tiles, IIRC. That means that at higher resolutions it doesn't actually pose much of a problem (Scope provides a better explanation in this regard).
4. You only tested AVIF using the reference encoder. That's not much of a problem when there's only one encoder (as in the case of JPEG XL, for example... for now <:Stonks:806137886726553651> ), but in the case of AV1 there are encoders that actually do a better job in some scenarios.
|
|
2021-02-25 05:02:46
|
Other than that, your article is quite accurate.
|
|
2021-02-25 05:04:47
|
It's excellent.
unlord must have replied rather quickly and not provided that information.
|
|
|
Scope
|
2021-02-25 05:14:39
|
2. For me it is noticeably slower than VarDCT JXL on large images (also taking into account that dav1d is already a very optimized decoder, while the JXL decoder still has a lot of room for further optimization)
3. As far as I know, a single-image grid will be used for fast hardware encoding/decoding, or when the image is very large; then everything is exactly the same as with HEIC. That is not comparable to regular tiles
|
|
2021-02-25 05:20:58
|
Tiles:
```--tilerowslog2 R : Set log2 of number of tile rows (0-6, default: 0)
--tilecolslog2 C : Set log2 of number of tile columns (0-6, default: 0)```
Grid:
```-g,--grid MxN : Encode a single-image grid AVIF with M cols & N rows. Either supply MxN identical W/H/D images, or a single image that can be evenly split into the MxN grid and follow AVIF grid image restrictions. The grid will adopt the color profile of the first image supplied.```
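Note that both the avifenc flags quoted above and the `--tile-columns`/`--tile-rows` flags in the aomenc command earlier take log2 values, not raw counts. A tiny helper makes the conversion explicit:

```python
def tile_grid(cols_log2, rows_log2):
    """Convert log2-style tile flags to actual tile counts.

    E.g. --tilecolslog2 2 --tilerowslog2 1 requests a 4x2 tile grid,
    as does aomenc's --tile-columns=2 --tile-rows=1.
    """
    if not (0 <= cols_log2 <= 6 and 0 <= rows_log2 <= 6):
        raise ValueError("log2 tile values must be in 0-6")
    return 1 << cols_log2, 1 << rows_log2

# The aomenc example above (--tile-columns=2 --tile-rows=1) -> 4x2 tiles
assert tile_grid(2, 1) == (4, 2)
```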
|
|
|
BlueSwordM
|
2021-02-25 05:21:54
|
What would be the advantage of using a grid image?
I've mainly been using --tiles for better threading at higher resolutions, since it didn't create the same discontinuities.
|
|
|
_wb_
|
2021-02-25 05:22:50
|
There's av1 level tiling and then there's heif level tiling, no?
|
|
2021-02-25 05:23:01
|
At least that's how it is in heic
|
|
|
Scope
|
2021-02-25 05:23:21
|
I'm not sure about hardware encoding (as far as I know the grid implementation is also much simpler), but tiles cannot be used to increase the resolution, for example if the image needs to be larger than 8K
|
|
|
lonjil
|
2021-02-25 05:23:35
|
<@!321486891079696385> a standard grid pattern makes hardware encoding and decoding actually worthwhile
|
|
2021-02-25 05:25:03
|
that's why HEICs produced on iPhones are actually 256x256 (IIRC) images tiled in HEIF
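A quick sketch of the grid sizing Scope and lonjil are describing: split an image into an MxN grid so that each cell stays within some per-frame limit. The 8192 limit used here is only an illustrative assumption based on the "more than 8k" remark above, not a value taken from the AVIF spec.

```python
import math

def grid_dims(width, height, max_dim=8192):
    """Smallest MxN grid such that each cell's width and height stay
    within max_dim (assumed 8192 here purely for illustration)."""
    cols = math.ceil(width / max_dim)
    rows = math.ceil(height / max_dim)
    return cols, rows

# 8K UHD fits in a single cell; a 16384x16384 image needs a 2x2 grid
assert grid_dims(7680, 4320) == (1, 1)
assert grid_dims(16384, 16384) == (2, 2)
```

The same arithmetic with `max_dim=256` explains the iPhone HEIC layout: a 4032x3024 photo becomes a 16x12 grid of small, hardware-friendly tiles.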
|
|