JPEG XL


other-codecs

_wb_
2021-06-03 05:22:28
"quality level" and psnr are such meaningless things...
Petr
zebefree Vimeo is now using AVIF https://medium.com/vimeo-engineering-blog/upgrading-images-on-vimeo-620f79da8605
2021-06-03 05:32:25
"faster loading times"? Can they accelerate time? 😜
2021-06-03 07:17:23
Sure. Or "faster loading". I'm just making fun of it. 🙂
Scope
2021-06-03 04:57:47
https://twitter.com/PascalMassimino/status/1400473040203436036
2021-06-03 04:57:50
https://discord.com/channels/794206087879852103/805176455658733570/849787979698143282
Jim
2021-06-03 05:02:16
As I predicted, video sites will use it.
2021-06-03 05:02:19
Not sure how they got WebP faster than JPEG.
_wb_
2021-06-03 05:15:44
Default cwebp vs default mozjpeg?
2021-06-03 05:16:51
Also avif only being 3x as slow is kind of suspect, must be a pretty fast encode setting
Scope
2021-06-03 05:17:39
https://twitter.com/daemon404/status/1400474108987678723
2021-06-03 05:18:37
And webp seems to be default > I also went and checked our code for reference, in lossy webp mode, we're using cfg.method == 4 in libwebp. `-m 4` (default)
_wb_
2021-06-03 05:18:39
Assumed so. Default mozjpeg is pretty slow - with some options it is still good and quite a bit faster
2021-06-03 05:19:11
That fast aom setting is probably not much better than webp though
2021-06-03 05:19:41
If you would use decent metrics or actual subjective evaluation, I mean
Scope
2021-06-03 05:20:44
Recent libaom builds gave a significant increase in speed (but it's still not very fast)
_wb_
2021-06-03 05:21:06
I wonder what cpu-used setting they are using
Scope
2021-06-03 05:21:11
https://miro.medium.com/max/2400/1*FeDdawlRhreqeJh1mVKytg.png
_wb_
2021-06-03 05:22:26
I see pretty ugly banding in all of those skies
Jim
2021-06-03 05:23:06
Question is - is the JPEG the original and did they start with a banded image?
Scope
2021-06-03 05:23:16
But this is VMAF, and since it is a video service, the sources for these previews will be lossy-encoded video (which is usually not very good quality). So perhaps a very low bpp and AVIF for something like this is not such a bad solution
Jim
2021-06-03 05:23:47
That too <:This:805404376658739230>
_wb_
2021-06-03 05:24:18
If they downscale the video for those thumbnails, it shouldn't be _that_ lossy
2021-06-03 05:25:17
They start from full HD or 4K, and scale that down to what? Something quite a bit smaller I bet
Jim
2021-06-03 05:25:44
I think the idea is that even if the original uploaded video was very high quality, the videos that are produced by the service are typically far lower quality and could have caused the banding prior to the thumbnails being generated.
Scope
2021-06-03 05:26:13
Yes, but it's probably a similar marketing test to Netflix's, trying to show the difference at maximum compression (and the one with fewer artifacts wins)
Jim
2021-06-03 05:28:09
I feel Netflix compressed those images more than they needed to.
_wb_
2021-06-03 05:28:20
I mostly see the block size of the banding getting larger from jpeg to webp to avif
2021-06-03 05:31:49
It reminds me a bit of the days of High Color
2021-06-03 05:32:06
https://en.wikipedia.org/wiki/High_color
2021-06-03 05:34:10
It feels so wrong that now, when we finally have displays capable of pretty good color reproduction (and even HDR etc.), compression is quantizing colors and making ugly 64x64 blocks of banding
Scope
2021-06-03 05:35:17
Btw https://twitter.com/MikeTiVo/status/1400484825312727041
2021-06-03 05:37:52
A test gallery with HDR JXL images would be interesting
veluca
2021-06-03 05:38:33
just need to find the source images 😛
_wb_
2021-06-03 05:40:00
<@604964375924834314> do you have some you could release with a CC BY-SA license? Could add some to the conformance repo, and also add a page to jpegxl.info with a HDR test gallery
2021-06-03 05:41:03
Can also make some HDR <#824000991891554375>, I still need to get me an HDR screen though, otherwise I cannot really see what I am doing.
Kleis Auke
2021-06-03 05:54:14
fwiw, http://www.anyhere.com/gward/pixformat/tiffluvrend.html also contains some nice HDR rendered images in TIFF format (LogLuv encoding). This was the last image of that gallery (Pete Shirley's photopic RGBE picture of the Snellen and Macbeth CC charts).
_wb_
2021-06-03 05:57:39
We can do better than RGBE though
2021-06-03 05:58:44
RGBE is a bit of a hack imo
Kleis Auke
2021-06-03 06:00:36
Wasn't JPEG XT based on the RGBE format? Yeah, it's a bit of a hack
_wb_
2021-06-03 06:01:59
Well, XT aims to be backwards compatible, and for that RGBE is not so bad, if you put the tone-mapped SDR image in RGB so it gracefully degrades to that
fab
2021-06-03 07:01:36
AAC 139 KBPS ABR AAC 240 KBPS VBR EXHALE 1.1.6 STABLE PRESET G -S -V 66.6817 -h - %d GXLAME
2021-06-03 07:01:41
.....
2021-06-03 07:01:43
.....
2021-06-03 07:01:45
NERO ENCODER
spider-mario
fab NERO ENCODER
2021-06-04 11:05:02
FYI the hydrogenaudio wiki recommends FDK AAC over Nero: https://wiki.hydrogenaud.io/index.php?title=Advanced_Audio_Coding#Encoders_.2F_Decoders_.28Supported_Platforms.29
2021-06-04 11:05:31
ffmpeg can be built with support for fdk although prebuilt binaries don't have that because of license incompatibilities which prevent redistribution of such binaries
2021-06-04 11:06:24
(ffmpeg's `configure` script needs to be called with `--enable-libfdk-aac --enable-nonfree`)
2021-06-04 11:07:09
then one can encode with: `ffmpeg -i input.flac -c:a libfdk_aac -vbr 5 output.m4a`
lithium
2021-06-04 11:38:02
QAAC, FDK AAC, OPUS which is better?
Deleted User
2021-06-04 11:48:29
OPUS or, if you need lossless, FLAC
spider-mario
2021-06-04 01:06:08
right, the main advantage of AAC is probably compatibility
2021-06-04 01:06:45
the Nintendo DSi can read it for example :p
lithium
2021-06-04 01:34:12
ok, thank you 🙂
spider-mario
2021-06-04 01:47:24
<@!461421345302118401> if you would like some data on the subject, there was this listening test: https://listening-test.coresv.net/results.htm
2021-06-04 01:47:56
which found that around 100 kbps, Opus was better than Apple AAC which in turn was better than Vorbis which was similar to {MP3 with 30% more bits}
2021-06-04 01:48:25
so qaac is rather decent too
diskorduser
lithium QAAC, FDK AAC, OPUS which is better?
2021-06-04 01:48:49
Opus
Scope
2021-06-04 02:02:09
The best AAC encoder is now generally considered to be CoreAAC (the one in iTunes), but that was before the most modern xHE-AAC (USAC) profile, for which the only complete, high-quality encoder is from Fraunhofer. Also, xHE-AAC is better than Opus at very low bitrates and about the same as past AAC profiles at higher bitrates, but it has no backward compatibility
lithium
2021-06-04 02:07:36
<@!604964375924834314> <@!263309374775230465> <@!111445179587624960> Thank you for your help 🙂
diskorduser
2021-06-04 02:10:41
Imo, bass sounds more realistic on Opus than FDK AAC (256 kbps)
Scope
2021-06-04 02:18:53
I think there are very few people who can hear the difference between AAC and Opus in blind tests at such bitrates; even at 128 kbps it is difficult enough, unless you use especially complex songs
improver
2021-06-04 02:33:48
idk how special, but on some complex tracks it's indeed pretty noticeable, even at higher bitrates
Scope
2021-06-04 02:36:08
http://abx.digitalfeed.net/opus.html
spider-mario
2021-06-04 05:39:56
at home, I personally have little reason to use lossy audio anyway, in part because there's also an archival aspect to it
2021-06-04 05:40:10
and on-the-go, I'll encode to whatever my portable player can play
2021-06-04 05:40:40
(or, let's say, what seems to be the best codec among those that it can)
2021-06-04 05:41:14
which for my iPod Touch seems to be AAC
2021-06-04 05:42:53
but if it changes, still having the lossless source means that it is possible to encode to a better codec without generation loss
2021-06-04 05:43:04
so keeping lossiness to where it matters has advantages
improver
2021-06-04 05:58:11
reasonable approach. listening conditions not-at-home usually aren't ideal anyway
Kleis Auke
Scope No, nothing has changed
2021-06-04 09:00:46
fwiw, libvips 8.11.0-rc1 is now available which should fix that.
2021-06-04 09:02:22
The what's new post has some more details https://libvips.github.io/libvips/2021/06/04/What's-new-in-8.11.html
Scope
2021-06-04 09:05:26
Are there any compiled Windows builds?
Kleis Auke
Scope Are there any compiled Windows builds?
2021-06-04 09:05:58
Just uploaded those to https://github.com/libvips/build-win64-mxe/releases/tag/v8.11.0-rc1
2021-06-04 09:08:21
(you'll need the `-all` variant for PPM support)
Scope
2021-06-04 09:21:34
Yep, PPM[strip] works, but JXL: *vips-dev-w64-all-8.11.0-rc1.zip*
```
vips.exe copy 1.png 1.jxl
VipsForeignSave: "1.jxl" is not a known file format
```
2021-06-04 09:27:15
Or it's not ready yet?
2021-06-04 09:32:15
Oh 🤔 > JPEG-XL is still a little immature so it's not enabled by default in libvips 8.11. Hopefully the missing features (metadata, progressive encode and decode, animation, etc.) will arrive soon, and the remaining bugs will be squeezed out.
Pieter
2021-06-04 09:41:29
*squeezed* out, I see what you did there
Scope
2021-06-04 11:40:06
Hmm, for some PNGs the converted PPM differs between VIPS and IM (as do the encoded sizes after other encoders)
2021-06-04 11:40:14
[image attachments]
2021-06-04 11:41:42
PPM sizes are the same, but inside they are different (and this also affects compression efficiency for some encoders)
_wb_
2021-06-05 05:50:52
Maybe a color conversion thing?
2021-06-05 05:53:20
Maybe one of them just dumps the values, stripping the color profile, while the other one might be doing something else? (converting to sRGB first?)
Kleis Auke
2021-06-05 11:25:30
Looks like vips flattens the transparency, whereas ImageMagick drops the alpha channel when saving to PPM. Reproducer:
```bash
$ vips getpoint CQ2nhMO.png 0 0
120 82 41 230
$ vips ppmsave CQ2nhMO.png x.ppm --strip --vips-cache-trace
vips cache : pngload filename="CQ2nhMO.png" out=((VipsImage*) 000001f7c4720020) flags=((VipsForeignFlags) VIPS_FOREIGN_SEQUENTIAL) access=((VipsAccess) VIPS_ACCESS_SEQUENTIAL) -
vips cache : copy in=((VipsImage*) 000001f7c4720020) out=((VipsImage*) 000001f7c4720340) -
vips cache+: flatten in=((VipsImage*) 000001f7c4720020) background=0 out=((VipsImage*) 000001f7c47201b0) -
vips cache+: cast in=((VipsImage*) 000001f7c47201b0) out=((VipsImage*) 000001f7c47204d0) format=((VipsBandFormat) VIPS_FORMAT_UCHAR) -
vips cache+: linecache in=((VipsImage*) 000001f7c47207f0) out=((VipsImage*) 000001f7c4720b10) tile-height=16 access=((VipsAccess) VIPS_ACCESS_SEQUENTIAL) persistent=TRUE -
vips cache+: sequential in=((VipsImage*) 000001f7c47207f0) out=((VipsImage*) 000001f7c4720980) tile-height=16 -
vips cache : ppmsave in=((VipsImage*) 000001f7c4720020) filename="x.ppm" strip=TRUE -
$ vips flatten CQ2nhMO.png x.png
$ vips getpoint x.ppm 0 0
108 73 36
$ vips getpoint x.png 0 0
108 73 36
$ convert CQ2nhMO.png -strip x2.ppm
$ vips getpoint x2.ppm 0 0
120 82 41
```
/cc <@!310374889540550660>
2021-06-05 11:25:52
If you expect a similar output as that produced by ImageMagick, you can try this:
```bash
$ vips extract_band CQ2nhMO.png x.ppm[strip] 0 -n 3
$ convert CQ2nhMO.png -strip x2.ppm
$ sha256sum x*.ppm
72096cb14d7f9119a726dbdfb55c62452551557c906d2fcae1872245f5bd3121  x2.ppm
72096cb14d7f9119a726dbdfb55c62452551557c906d2fcae1872245f5bd3121  x.ppm
```
Scope
2021-06-05 11:37:02
🤔 Hmm, OK, it's not that I need the same result; rather, I need to know which behavior is more correct (ImageMagick dropping transparency is not good, although transparency is not really needed in this image)
_wb_
2021-06-05 11:39:16
IM just gives the RGB (which is nice if you want to see invisible pixels); if you want it to flatten you have to do `-background white -flatten`
2021-06-05 11:39:35
Or whatever other background color
2021-06-05 11:41:06
What is more correct is a matter of taste, I'd say, but not flattening does show stuff you otherwise wouldn't see so in most cases flattening is the 'right thing' to do
2021-06-05 11:42:00
(keeping the alpha is the real 'right thing' of course, but ppm cannot do that. You can use pam though)
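For reference, the compositing that "flatten" refers to above can be sketched in a few lines of numpy. This is a toy model of the `alpha * fg + (1 - alpha) * bg` blend, not the exact rounding vips or ImageMagick use (the function name `flatten` here is just illustrative):

```python
import numpy as np

def flatten(rgba, background=(255, 255, 255)):
    """Composite RGBA pixels over an opaque background colour:
    out = alpha * fg + (1 - alpha) * bg. A toy model of what
    `vips flatten` / IM's `-background ... -flatten` compute;
    rounding may differ from either tool."""
    rgba = rgba.astype(np.float64)
    a = rgba[..., 3:4] / 255.0
    bg = np.asarray(background, dtype=np.float64)
    rgb = rgba[..., :3] * a + bg * (1.0 - a)
    return np.round(rgb).astype(np.uint8)

# The pixel from the reproducer above: RGB 120/82/41 with alpha 230.
px = np.array([[[120, 82, 41, 230]]], dtype=np.uint8)
print(flatten(px, background=(0, 0, 0))[0, 0])        # over black
print(flatten(px, background=(255, 255, 255))[0, 0])  # over white
```

Flattening over black lands close to the `108 73 36` that vips printed; off-by-one differences come down to floor vs round.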
Scope
2021-06-05 11:43:28
Yep, but not all formats/encoders accept pam
_wb_
2021-06-05 11:43:48
(which reminds me, we should maybe add pam as in/out formats in jxl)
Scope
2021-06-05 01:40:02
Source with transparency
2021-06-05 01:40:49
[image attachments]
2021-06-05 01:45:00
Also not sure what the more correct behavior is in these examples (for my case the experimental encoders mostly didn't support transparency anyway, but it seems that after VIPS they get fewer "useful" pixels, so the image compresses better)
2021-06-05 04:07:34
[image attachments]
2021-06-05 04:11:56
However, on these images, the vips result is more suitable for my purposes
improver
2021-06-05 04:14:19
The IM one seems like it just strips the alpha channel and leaves whatever color was underneath
Scope
2021-06-05 05:22:12
Also about junk alpha pixels
2021-06-05 05:22:26
[image attachments]
2021-06-05 05:27:36
<@!794205442175402004> Btw, JXL's premultiply option in that PR can't remove invisible pixels yet?
2021-06-05 05:28:59
[image attachment]
_wb_
2021-06-05 06:18:43
Premultiply implies removing invisible pixels. But we could also do something to make invisible pixels compress better in the non-premultiplied case, that's not implemented yet
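The point that premultiplying implies removing invisible pixels can be illustrated with a toy numpy sketch (pixel values and the integer-floor rounding are made up for illustration, not what any particular codec does):

```python
import numpy as np

# Toy RGBA data: one visible pixel and one fully transparent pixel
# still carrying "junk" RGB values.
rgba = np.array([[[120, 82, 41, 230],
                  [200, 10, 99,   0]]], dtype=np.uint8)

# Premultiply: RGB := RGB * A / 255 (integer floor here). Wherever
# A == 0 the RGB channels become 0, so invisible junk pixels collapse
# to a uniform, highly compressible value -- which is why
# premultiplying removes them as a side effect.
alpha = rgba[..., 3:4].astype(np.uint32)
premul = rgba.copy()
premul[..., :3] = (rgba[..., :3].astype(np.uint32) * alpha // 255).astype(np.uint8)

print(premul[0, 0])  # [108  73  36 230]
print(premul[0, 1])  # [0 0 0 0]
```

Note the visible pixel becomes `108 73 36`, the same numbers the vips flatten-onto-black reproducer printed earlier: premultiplying is compositing onto black while keeping the alpha.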
tufty
Scope Or it's not ready yet?
2021-06-05 08:01:37
libvips ships two win binaries: one tagged "web", which only enables trusted file formats (regular jpeg, etc.), and one tagged "all", which also includes support for weird things that are probably not yet safe on untrusted data (nifti, fits, etc.). The "all" binary for libvips-8.11-rc1 doesn't yet have jpeg-xl support built in, but only because we've not had time; we expect to add jpeg-xl before 8.11 final, assuming we can wrangle the cross-compiler into submission etc.
Scope
2021-06-05 09:36:11
cwp2 ```-av1 ................... use lossy AV1 internally instead of lossy WP2``` 🤔
rappet
2021-06-05 10:45:10
Is that how JXLs that can be transcoded to JPEG manage to look better than the corresponding JPEGs? https://imsc.uni-graz.at/hollerm/papers/tgv_jpeg_visapp.pdf
2021-06-05 10:45:44
So JXLs which are JPEG 'compatible' are just better because they use a nicer decoder?
Scope
2021-06-05 10:51:08
I think it would be something like this (but I'm not sure if it's already implemented): https://github.com/google/knusperli
rappet
2021-06-05 10:52:40
Yeah
2021-06-05 10:52:47
I came around this: https://github.com/victorvde/jpeg2png
2021-06-05 10:55:13
jpeg2png has a nice short explanation about how it works in the README
2021-06-05 10:56:00
I'm wondering if it is possible to just add the -p parameter for each block as another color channel in JPEG.
2021-06-05 10:56:18
And if decoders that don't know about it would just ignore it.
Scope
2021-06-05 10:58:45
Also <https://github.com/ilyakurdyukov/jpeg-quantsmooth>
veluca
2021-06-05 11:06:04
it's very hard to do this stuff properly without oversmoothing
2021-06-05 11:06:30
JXL does have a nice restoration filter but JPEG recompression leaves that off by default
rappet
2021-06-05 11:06:46
That's why I'm thinking about adding the -p tuning parameter per block basis.
2021-06-05 11:07:02
So you get a crappy JPEG that everybody can load.
veluca
2021-06-05 11:07:14
if you notice any quality improvement it's likely just due to float DCTs
rappet
2021-06-05 11:07:15
And a slightly less crappy JPEG if you have the right decoder.
veluca
2021-06-05 11:07:34
I have no idea what libjpeg will do with >3 components... probably assume it's CMYK
rappet
2021-06-05 11:08:02
So, stuff it in an APP segment then?
veluca
2021-06-05 11:09:17
that could work
2021-06-05 11:09:54
(I checked, you can do CMYK and YCCK with 4 channels)
rappet
2021-06-05 11:10:01
The 4th segment has the nice advantage that I can just encode it at the very end if I use a progressive JPEG.
veluca
2021-06-05 11:10:35
what smoothing algorithm do you plan to use?
rappet
2021-06-05 11:10:55
(the idea is that I can have a lot of sharpening for text and less for faces...)
veluca
2021-06-05 11:10:57
(note that jxl does allow to do what you're saying, i.e. controlling the smoothing per-block :P)
rappet
2021-06-05 11:11:23
Total variation
veluca
2021-06-05 11:12:07
mhhhh, I suspect it will be rather oversmoothed
rappet
2021-06-05 11:12:12
Yeah, but I like stuffing more fancy things in old things so that it works everywhere 😉
2021-06-05 11:12:27
Why would it?
2021-06-05 11:12:57
The result would still encode the same.
veluca
2021-06-05 11:13:07
because minimizing total variation is not the correct thing to do near edges
rappet
2021-06-05 11:13:28
Yes, that's why I have the tuning parameter.
veluca
2021-06-05 11:13:29
also, it would be very, very slow
2021-06-05 11:14:07
but that's what you get with anything that uses optimization to find the unquantized DCT coefficients 😄
2021-06-05 11:16:02
I spent about six months trying to find something for JPEG XL that decodes in a reasonable amount of time and doesn't oversmooth. Initially I was trying to guess what to dequantize coefficients to (but that didn't work); in the end I went for something like a selective gaussian filter
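The selective gaussian idea can be sketched as follows. This is a toy version of the general principle only, not JXL's actual restoration filter; the threshold and 4-neighbour window are arbitrary illustrative choices:

```python
import numpy as np

def selective_gaussian(img, threshold=8.0):
    """Toy 'selective' smoother: average each pixel with its 4
    neighbours, but only those whose value is within `threshold`.
    Neighbours across a strong edge are excluded, so edges stay
    sharp while near-flat (banded) areas get smoothed. np.roll's
    wraparound at the borders is ignored for simplicity."""
    img = img.astype(np.float64)
    acc = img.copy()
    cnt = np.ones_like(img)
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        nb = np.roll(img, (dy, dx), axis=(0, 1))
        mask = np.abs(nb - img) <= threshold
        acc += np.where(mask, nb, 0.0)
        cnt += mask
    return acc / cnt

banded = np.full((4, 4), 100.0)
banded[:, 2:] = 104.0   # a mild banding step: gets averaged out
edged = np.full((4, 4), 100.0)
edged[:, 2:] = 200.0    # a real edge: neighbours excluded, kept sharp
print(selective_gaussian(banded)[1, 1])  # pulled toward 104
print(selective_gaussian(edged)[1, 1])   # stays 100.0
```

The value-difference gate is what distinguishes this from a plain gaussian blur: banding steps fall under the threshold and get smoothed, while genuine edges exceed it and are left alone.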
rappet
2021-06-05 11:16:09
Yeah... it doesn't have to be too nice... optimizing the first coefficients a bit might still minimize most of the typical artefacts and still be somewhat fast.
2021-06-05 11:16:59
So, then I use your experience and look into selective gaussian filters..
veluca
2021-06-05 11:17:11
one thing I would try if compute was not an issue would be trying to find DCT coefficients that result in a fixpoint for that kind of smoothing
2021-06-05 11:17:59
i.e. use as a loss the delta caused by a selective gaussian
2021-06-05 11:18:36
might not even need parameters in the JPEG itself
rappet
2021-06-05 11:19:19
Well, you could have sharp ringing as an JPEG artefact that you want to smooth out...
2021-06-05 11:19:29
And at the other point you have sharp text.
2021-06-05 11:19:39
Or am I missing something here?
veluca
2021-06-05 11:19:55
that's correct
2021-06-05 11:20:22
I suspect you can use the amount of nonzero DCT coefficients to get a guess of how much smoothing should happen
rappet
2021-06-05 11:20:35
So, I might still need a parameter (1-2 bit per block) in the JPEG?
2021-06-05 11:20:44
Oh, good idea.
veluca
2021-06-05 11:20:49
but if it's too complicated to figure out, parameters are a solution 😛
veluca I suspect you can use the amount of nonzero DCT coefficients to get a guess of how much smoothing should happen
2021-06-05 11:21:29
I don't remember if I tried *that* one, it's probably not perfect even if it works somewhat (or it could be that it was just too complicated implementation-wise in JXL...)
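The nonzero-coefficient heuristic suggested above could look something like this (the linear mapping and the function name `smoothing_strength` are arbitrary illustrative choices, not anything from JXL):

```python
import numpy as np

def smoothing_strength(block_coeffs, max_strength=1.0):
    """Heuristic sketch: an 8x8 block whose DCT has many nonzero AC
    coefficients is 'busy' (edges, text) and should get little
    smoothing; a nearly-flat block (few nonzero ACs) is where banding
    lives and can tolerate more. Linear falloff is just one choice."""
    coeffs = np.asarray(block_coeffs)
    nonzero_ac = np.count_nonzero(coeffs) - int(coeffs.flat[0] != 0)  # ignore DC
    return max_strength * (1.0 - nonzero_ac / 63.0)  # 63 AC slots per block

flat_block = np.zeros((8, 8))
flat_block[0, 0] = 50.0            # DC only: smooth this one a lot
busy_block = np.full((8, 8), 3.0)  # every coefficient nonzero: hands off
print(smoothing_strength(flat_block))  # 1.0
print(smoothing_strength(busy_block))  # 0.0
```

The appeal is that the signal is free: the decoder already knows the coefficient counts, so no extra per-block parameters need to be stored.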
rappet
2021-06-05 11:21:43
It should be fast enough that a bit of WebAssembly can read all the JPEGs in a page and transcode them to blobs of JPEGs with less quantization.
veluca
2021-06-05 11:22:21
eh, good luck with that 😛
2021-06-05 11:22:50
you have about a 5x margin over libjpeg-turbo I believe
rappet
2021-06-05 11:22:52
Least effort for user. Just add a JS xD
veluca
2021-06-05 11:23:08
maybe 10x
rappet
2021-06-05 11:23:57
Yeah, I bet that should be no problem if you limit it only to small images (those where everybody can see the artefacts)
veluca
2021-06-05 11:25:09
without SIMD on wasm, I foresee problems in your future 😛 even with SIMD, I'd expect something that does more than 3 optimization steps to not be an option
rappet
2021-06-05 11:26:52
Well, I think at least removing banding would be possible without actually doing the IDCT.
2021-06-05 11:27:35
Do a bit DC smoothing and add 3 low frequency AC values.
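The "DC smoothing plus a few low-frequency ACs" idea can be sanity-checked numerically: a linear ramp between neighbouring DC levels puts nearly all of its energy into the DC slot and the lowest AC coefficient, so synthesizing just a few ACs really is enough to encode a smooth gradient across a block. A plain-numpy sketch (no scipy):

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of an NxN block, built from the DCT
    basis matrix directly."""
    n = block.shape[0]
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ block @ c.T

# A linear horizontal ramp across one block -- roughly what you would
# reconstruct between a lower DC on the left and a higher DC on the
# right neighbour.
ramp = np.tile(np.arange(8, dtype=np.float64), (8, 1))
coef = dct2(ramp)

# The energy sits in DC and the odd horizontal ACs, dominated by
# AC(0,1); every vertical frequency is exactly zero.
print(round(coef[0, 0], 3))          # DC = 8 * mean = 28.0
print(np.allclose(coef[1:, :], 0))   # True
```

So a debanding pass that smooths the DC plane and injects the two or three lowest ACs per block can remove the staircase without ever touching the high frequencies.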
veluca
2021-06-05 11:28:53
right, progressive decoding for DC does that in libjpeg-turbo now
2021-06-05 11:29:43
it might be interesting to see what happens if one leaves those AC coefficients there if they're inside the quantization range
2021-06-05 11:30:00
JXL had something like that, but it created *other* artefacts
rappet
2021-06-05 11:32:56
So, I might just try that on blocks that don't have any AC values?
2021-06-05 11:33:21
And the neighbours don't have them either?
veluca
2021-06-05 11:34:29
my attempts at that in JXL ended up in overshooting
2021-06-05 11:34:32
😄
2021-06-05 11:34:47
anyway, it's sufficiently late here that I probably should be sleeping 😛
rappet
2021-06-05 11:37:16
Good night!
_wb_
rappet Yeah, but I like stuffing more fancy things in old things so that it works everywhere 😉
2021-06-06 06:07:16
You may want to look into JPEG XT then.
2021-06-06 06:09:23
Generally the problem with gracefully degrading backwards compatible improvements is that nobody even notices that there are potential improvements they're missing out on, because "it already loads just fine, don't need a new decoder".
rappet
2021-06-06 08:46:22
Is there a way to get the specification? It looks a bit expensive on the ISO webpage.
_wb_
2021-06-06 09:10:48
Not afaik, but you can look at Thomas Richter's libjpeg, which implements it
2021-06-06 09:24:37
https://github.com/thorfdbg/libjpeg
2021-06-06 09:28:33
That's also the only complete implementation of JPEG afaik, including arithmetic coding, hierarchical, lossless, 12-bit, etc.: all the stuff that was in the original 1992 JPEG spec but isn't usually implemented, so it's not usable in the 'de facto' JPEG standard...
Scope
2021-06-06 10:34:16
<@!208917283693789185> <@!310374889540550660> Hmm, one more thing with VIPS: how can I force saving only as binary color PPM (P6 magic number)? Sometimes images are automatically converted to grayscale PGM (P5) etc., which is good for saving space, but the problem is that there are applications that only accept binary PPM (and don't support PBM, PGM, PAM). In ImageMagick it depends on the output file extension, and by default it is something like PPM (P6), PGM (P5), PBM (P4), PAM (P7)
tufty
Scope <@!208917283693789185> <@!310374889540550660> Hmm, one more thing with VIPS, how can I force saving only in binary color PPM (P6 magic number), because sometimes images are automatically converted to gray scale PGM (P5) etc., that's good to save space, but the problem is that there are applications that only accept binary PPM (but not support PBM, PGM, PAM). In ImageMagick it depends on output file extension and by default it is something like PPM (P6), PGM (P5), PBM (P4), PAM (P7)
2021-06-06 03:36:53
In python you could write:
```python
image.colourspace("srgb").write_to_file("x.ppm", strip=True)
```
and you'll always get a 3-band (channel?) binary PPM. libvips doesn't use the file extension to decide the PPM type, it picks the Pn from the image colourspace
2021-06-06 03:39:02
or at the command-line:
```
vips colourspace input-file t1.v srgb
vips copy t1.v output.ppm[strip]
```
depending on how you name your temporary intermediate files
Scope
tufty In python you could write: image.colourspace("srgb").write_to_file("x.ppm", strip=True) and you'll always get a 3-band (channel?) binary PPM libvips doesn't use the file extension to decide the PPM type, it picks the Pn from the image colourspace
2021-06-06 04:02:01
I see, I also tried the `--interpretation` option (but it doesn't seem to be for this). Also I noticed that the all-build for Windows has `libMagickCore-6.Q16-7.dll`, but options like `magicksave` don't work
tufty
2021-06-06 04:08:34
`vips copy x y --interpretation zzz` just changes the "how should I display this?" hint on an image, it doesn't change any pixels. You need `colourspace` if you want to transform pixel values.
2021-06-06 04:09:11
`magicksave` ought to work on the "all" windows build, what error are you seeing?
Scope
2021-06-06 04:09:56
```vips.exe: unknown action "magicksave"```
2021-06-06 04:10:54
<https://github.com/libvips/build-win64-mxe/releases/tag/v8.11.0-rc1> `vips-dev-w64-all-8.11.0-rc1.zip`
tufty
2021-06-06 04:11:32
I see:
```
$ ./vips.exe magicksave ~/Pictures/before.png ~/x.bmp --format=bmp
$
```
i.e. I could use magicksave to write a BMP file
2021-06-06 04:12:19
oh, I don't know, I didn't make the 8.11-rc1 win binary, I'll check; that was with 8.10.6
Scope
2021-06-06 04:13:24
```
vips.exe magicksave 1.png 1.bmp --format=bmp
vips.exe: unknown action "magicksave"
```
tufty
2021-06-06 04:16:04
It's working for me. I see:
```
$ unzip -qq vips-dev-w64-all-8.11.0-rc1.zip
$ cd vips-dev-8.11/bin/
$ ./vips.exe magicksave
save file with ImageMagick
usage:
   magicksave in filename [--option-name option-value ...]
where:
... etc.
```
That's on win10. Perhaps it's picking up a different version? What's your PATH set to?
Scope
2021-06-06 04:18:16
I checked in the same directory, hmm, maybe I should specify Path
tufty It's working for me. I see: ``` $ unzip -qq vips-dev-w64-all-8.11.0-rc1.zip $ cd vips-dev-8.11/bin/ $ ./vips.exe magicksave save file with ImageMagick usage: magicksave in filename [--option-name option-value ...] where: ... etc. ``` That's on win10. Perhaps it's picking up a different version? What's your PATH set to?
2021-06-06 04:25:30
Hmm, yep, that works now, thanks
2021-06-06 05:54:56
It seems that IM convert with `-background black -alpha remove` does something similar with transparency to what VIPS does by default (at least visually) 🤔
Deleted User
Scope I think it would be something like this (but I'm not sure if it's already implemented): https://github.com/google/knusperli
2021-06-06 08:22:52
Knusperli works quite well at removing painfully obvious banding and block borders while not removing sharpness excessively. That "fake sharpness" is sometimes actually kinda helpful; someone wrote about it in a comment under the Hacker News post about Knusperli. https://news.ycombinator.com/item?id=16616025
rappet Is that how JXL that can be transcoded to JPEG look better than their JPEG work? https://imsc.uni-graz.at/hollerm/papers/tgv_jpeg_visapp.pdf
2021-06-06 08:27:09
The paper looks interesting, but just like Luca said before, it's blurry af because of TV/TGV. In Figure 7 in that PDF I actually prefer the "standard decompression" instead of their reconstruction because lots of actual detail got removed together with artifacts.
Scope
2021-06-06 08:29:05
Yes, I prefer something like Knusperli; other similar decoders are too smooth and blurry. That may be good for very bad quality, but otherwise I would even prefer some artifacts and blockiness
Deleted User
2021-06-06 08:29:59
You can see in the image below (again, from Hacker News post, but this time about jpeg2png) that the "fake sharpness" phenomenon is retained perfectly by Knusperli.
2021-06-06 08:30:03
https://mod.ifies.com/f/200710_lena_decode_comparison.png
2021-06-06 08:32:07
By the way <@!179701849576833024>, is something Knusperli-like used in JPEG XL?
veluca
2021-06-06 08:32:15
nope
2021-06-06 08:32:32
too slow, it requires an image-global optimization step...
2021-06-06 08:34:09
the restoration filter probably does a bit worse on block boundaries, but a bit better on other artefacts, and it's local and reasonably fast
Scope
2021-06-06 08:34:18
In JPEG XL I noticed some banding fixes when decoding (although I don't remember if I enabled any options or if it was the default; it was a long time ago)
veluca
2021-06-06 08:34:34
you mean for JPEG input?
Scope
2021-06-06 08:34:38
Yes (when decoding a losslessly transcoded JPEG to PNG / higher-quality JPEG)
veluca
2021-06-06 08:35:12
I'm not sure *why* xD the only explanation I can come up with is that we use float DCTs and only convert to int in RGB while IIRC JPEG does that in YCbCr
Deleted User
veluca too slow, it requires an image-global optimization step...
2021-06-06 08:35:59
Why? Idk if the Hacker News guy is wrong, but they wrote: > Jpeg2png minimizes the total variation over the whole image, and knusperly only along the boundaries of the blocks.
veluca
2021-06-06 08:36:18
ah that's likely the case
2021-06-06 08:36:28
but still, your choices in a block will affect other blocks
2021-06-06 08:36:54
and that has a domino effect where IIRC you can't be really sure that block (0, 0) won't affect block (1000, 1000)
Scope
2021-06-06 08:40:14
Btw, were there any experiments with Wavelets during the PIK/Jpeg XL development?
veluca
2021-06-06 08:45:05
not as far as I know
2021-06-06 08:45:19
well, Squeeze is a wavelet AFAIU
rappet
2021-06-06 10:25:22
Does AV1 do similar filtering?
2021-06-06 10:25:34
It sometimes looks quite blurry to me.
veluca
2021-06-06 10:52:33
av1 does a lot of filtering
2021-06-06 10:52:57
it has cross-block filtering, directional filtering, and one more filter still
2021-06-06 10:54:28
page 4 here https://www.jmvalin.ca/papers/AV1_tools.pdf gives some explanation
Scope
2021-06-07 03:17:28
https://twitter.com/daemon404/status/1401914442921791496
2021-06-07 05:12:31
I'm also very interested in a very fast and efficient mode in JXL. I wonder what Qlic2 by Alex Rhatushnyak is based on, and whether it's something that went into lossless PIK and JXL -s 3? Qlic2 is much faster though, and lossless PIK is also faster and more efficient than the current JXL -s 3
Deleted User
2021-06-07 05:15:15
Related question: how did lossless mode work in PIK and how is it different from JXL?
_wb_
2021-06-07 05:30:47
Lossless pik was basically the same thing as `cjxl -m -s 3`
2021-06-07 05:31:46
That is, Weighted Predictor, and a fixed ctx model that uses only the weighted predictor max err (WGH in jxl art terms)
2021-06-07 05:34:24
Lossless pik was hardcoded for that, and for 8-bit or 16-bit only, and possibly was a bit faster and more efficient than current cjxl -s 3, but it's basically the same thing
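As a rough intuition for what the Weighted Predictor does, here is a toy sketch. This is only the general idea: blend several simple sub-predictors, trusting each in proportion to how well it predicted recently. The real JXL/PIK WP uses different sub-predictors, per-position error propagation, and feeds its max-error signal into the context model, none of which is captured here:

```python
import numpy as np

def toy_weighted_predict(img):
    """Toy 'weighted predictor': blend left / top / average / gradient
    sub-predictors, each weighted by the inverse of its running
    absolute error. A sketch of the idea only, not the actual JXL WP."""
    h, w = img.shape
    pred = np.zeros((h, w))
    err = np.ones(4)  # running abs error per sub-predictor
    for y in range(h):
        for x in range(w):
            left = img[y, x - 1] if x else 0.0
            top = img[y - 1, x] if y else 0.0
            tl = img[y - 1, x - 1] if x and y else 0.0
            subs = np.array([left, top, (left + top) / 2.0, left + top - tl])
            wts = 1.0 / (err + 1.0)  # trust predictors that were right recently
            pred[y, x] = float(np.dot(wts, subs) / wts.sum())
            err = 0.9 * err + np.abs(subs - img[y, x])  # decay old errors
    return pred

flat = np.full((8, 8), 100.0)
residual = flat - toy_weighted_predict(flat)
# Once context exists, all sub-predictors agree and the residual is 0,
# which is what makes smooth regions nearly free to encode.
print(residual[1:, 1:].max())
```

The encoder then entropy-codes the residuals; the better the blended prediction, the closer the residuals sit to zero.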
Scope
2021-06-07 11:01:47
<:Thonk:805904896879493180> https://twitter.com/richgel999/status/1402033891930738695
Scientia
Scope https://twitter.com/daemon404/status/1401914442921791496
2021-06-07 11:43:37
PNG with ZSTD instead of zlib would be so much better
2021-06-07 11:44:06
Notwithstanding existing lossless codecs that are more optimized, like JXL and WebP lossless, it would still be a massive improvement
Scope
2021-06-07 11:45:24
As already discussed, switching to a new codec would be even easier and cause less confusion and fewer problems than incompatible changes to the old formats
Scientia
2021-06-07 11:45:40
It's a thought experiment more than a useful codec
Scope
2021-06-07 11:52:27
And it's not just only zlib inefficiency https://twitter.com/jyzg/status/1253714896270868481
2021-06-07 11:58:17
Btw, in some mobile games I found modified incompatible PNGs with LZ4PNG header
Scientia
2021-06-08 05:30:50
Ah yes
2021-06-08 05:31:00
Combine EVEN WORSE compression
2021-06-08 05:31:04
With PNG
2021-06-08 05:31:17
For incompatibility galore and massive files
_wb_
2021-06-08 07:17:25
I suspect it's for faster loading? But I wonder why, regular png decode is pretty fast
Scope
2021-06-08 07:25:56
Probably (LZ4 is 10-12x faster); there may also have been some other modifications for better compression of specific types of images/resources
Kleis Auke
tufty libvips ships two win binaries: one tagged as "web" which only enables trusted file formats (regular jpeg, etc), and one tagged "all" which also includes support for weird things that are probably not yet safe on untrusted data (nifti, fits, etc.) the "all" binary for libvips-8.11-rc1 doesn't yet have jpeg-xl support built in, but only because we've not had time --- we expect to add jpeg-xl before 8.11 final assuming we can wrangle the cross-compiler into submission etc.
2021-06-08 11:52:18
I managed to build libvips with JPEG XL support on Windows, see commit https://github.com/libvips/build-win64-mxe/commit/b7926cd212044a4ed4db6297a459e2297b2368d5.
2021-06-08 11:53:29
(It's built as a dynamic module so users can delete `lib/vips-modules-8.11/vips-jxl.dll` if they want to)
2021-06-08 11:57:17
So far only tested on Windows x64, I still need to test the i686, armv7 (i.e. Windows 10 IoT) and arm64 builds.
_wb_
2021-06-09 11:21:51
Leonardo wrote a book: https://www.amazon.com/Even-stars-die-history-digital-ebook/dp/B096G6TSF9/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=&sr=
2021-06-09 11:22:28
The story of how the Moving Picture Experts Group (MPEG) was conceived and established. This book is structured in six sections each with a different number of chapters and parts of the story: Section 1 describes the world of media before MPEG. Section 2 recounts the birth of MPEG and the story of its first 3 standards. Section 3 explores some of the media-related areas in which MPEG operated. Section 4 assesses some of MPEGโ€™s diverse characteristics. Section 5 tells the causes of and mourns the death of MPEG. Section 6 describes the constitution, the work MPAI did so far and future plans.
zebefree
_wb_ Leonardo wrote a book: https://www.amazon.com/Even-stars-die-history-digital-ebook/dp/B096G6TSF9/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=&sr=
2021-06-09 03:02:13
How much of this applies to JPEG?
veluca
2021-06-09 03:15:11
I have never been in MPEG but it does resonate with me quite a bit...
_wb_
2021-06-09 03:56:29
I haven't been to MPEG either, but the ISO bureaucracy is a real thing, also in JPEG. I don't think that's the main thing that was wrong with MPEG though. I think the main issue is that 2/3 of the MPEG attendees are patent trolls.
2021-06-09 04:01:58
Building a codec with a committee where most people are just trying to get their patented stuff in it, I can only begin to imagine how tiring and annoying that must be as a codec engineer who just wants to make a good codec.
Nova Aurora
2021-06-09 04:20:11
Fox Wizard
2021-06-09 04:26:01
``.png``
Deleted User
Nova Aurora
2021-06-09 04:35:17
GIMME THE ORIGINAL `.JPG`, I WANNA KNUSPERLI IT
Crixis
GIMME THE ORIGINAL `.JPG`, I WANNA KNUSPERLI IT
2021-06-09 04:38:28
I want to see this
Nova Aurora
2021-06-09 04:45:36
I'll have to find it
Fox Wizard
2021-06-09 04:46:32
This is the only true jpeg
Deleted User
Fox Wizard This is the only true jpeg
2021-06-09 04:48:12
Fox Wizard
2021-06-09 04:48:16
At least this is somewhat readable <:kekw:808717074305122316>
Deleted User
Fox Wizard At least this is somewhat readable <:kekw:808717074305122316>
2021-06-09 04:49:05
Fox Wizard
2021-06-09 04:49:29
No idea what you're doing, but it looks cursed
Deleted User
2021-06-09 04:50:23
I think it's too much blockiness even for Knusperli...
2021-06-09 04:50:48
And are you sure those deep-fried JPEGs are from originals?
Fox Wizard
2021-06-09 04:51:18
I edited it myself
2021-06-09 04:51:45
Well, it took like 2 minutes, because compression hides all imperfections anyways ~~yes, this is my level of boredom~~
2021-06-09 04:56:44
Nice
2021-06-09 04:58:05
Avif pog
Deleted User
2021-06-09 04:59:34
Now do the same comparison with Knusperli version, it won't be as blocky
Fox Wizard
2021-06-09 05:00:55
Not home anymore sadly <:cheems:720670067091570719>
Scope
2021-06-09 05:14:16
```
Encoding [Modular, Q-500.00, kitten]
Compressed to 17930 bytes (0.016 bpp).
3000 x 3000, 1.15 MP/s [1.15, 1.15]
```
2021-06-09 05:15:30
2021-06-09 05:16:24
JXL modular, Quality negative 500
lithium
2021-06-09 05:19:03
Modular Q-500? cool
veluca
2021-06-09 05:28:11
that's wrong on so many levels...
_wb_
2021-06-09 05:38:47
https://c.tenor.com/jP0CrJmmeVQAAAAM/cant-decide.gif
2021-06-09 05:40:53
Luca trying to figure out if he should be most offended by the word "pizza" on something that is clearly not even remotely related to actual pizza, or by the Q-500 modular ๐Ÿ˜‚
veluca
_wb_ Luca trying to figure out if he should be most offended by the word "pizza" on something that is clearly not even remotely related to actual pizza, or by the Q-500 modular ๐Ÿ˜‚
2021-06-09 05:50:55
you know me well ๐Ÿ˜œ
Crixis
2021-06-09 06:27:32
Stop torturing Italians with bad pizzas, we feel emotions
Scope
2021-06-09 06:33:59
Nova Aurora
2021-06-09 07:16:16
Opinions on pineapple and pizza?
_wb_
2021-06-09 07:35:44
My opinion: sure, why not. Many great pizzas don't have pineapple, and if you're going to use good mozzarella then probably pineapple is not the best combination (the traditional tomato and basil is pretty much unbeatable), but pineapple can be a nice thing to put on a pizza. I like it most in combination with bell peppers, spicy peppers, mushrooms and strong cheeses (old ones, that are strong enough to compete with the acidity and flavor of pineapple).
lithium
2021-06-09 07:37:46
Italianโ€™s reaction to Hawaiian Pizza ๐Ÿ˜› https://www.youtube.com/watch?v=EDUy3Y_w9Tk
veluca
_wb_ My opinion: sure, why not. Many great pizzas don't have pineapple, and if you're going to use good mozarella then probably pineapple is not the best combination (the traditional tomato and basil is pretty much unbeatable), but pineapple can be a nice thing to put on a pizza. I like it most in combination with bell peppers, spicy peppers, mushrooms and strong cheeses (old ones, that are strong enough to compete with the acidity and flavor of pineapple).
2021-06-09 07:40:51
we're not friends anymore ๐Ÿ˜›
lithium Italianโ€™s reaction to Hawaiian Pizza ๐Ÿ˜› https://www.youtube.com/watch?v=EDUy3Y_w9Tk
2021-06-09 07:41:08
yup, I've seen that - I wish I understood the dialect though! ๐Ÿ˜„
_wb_
2021-06-09 07:43:54
I put anything on a pizza (well, anything vegetarian)
2021-06-09 07:44:18
I also put anything in a spaghetti sauce or in a lasagna
Nova Aurora
_wb_ My opinion: sure, why not. Many great pizzas don't have pineapple, and if you're going to use good mozarella then probably pineapple is not the best combination (the traditional tomato and basil is pretty much unbeatable), but pineapple can be a nice thing to put on a pizza. I like it most in combination with bell peppers, spicy peppers, mushrooms and strong cheeses (old ones, that are strong enough to compete with the acidity and flavor of pineapple).
2021-06-09 07:44:39
I like it but then I have to fend off pitchforks and torches from people
veluca
lithium Italianโ€™s reaction to Hawaiian Pizza ๐Ÿ˜› https://www.youtube.com/watch?v=EDUy3Y_w9Tk
2021-06-09 07:44:44
<@!424295816929345538> <@!416586441058025472>
_wb_ My opinion: sure, why not. Many great pizzas don't have pineapple, and if you're going to use good mozarella then probably pineapple is not the best combination (the traditional tomato and basil is pretty much unbeatable), but pineapple can be a nice thing to put on a pizza. I like it most in combination with bell peppers, spicy peppers, mushrooms and strong cheeses (old ones, that are strong enough to compete with the acidity and flavor of pineapple).
2021-06-09 07:46:12
that sentence would be more acceptable if it was "on flatbread with tomato" instead of "on pizza" ๐Ÿ˜›
_wb_
2021-06-09 07:46:51
Ok, fair enough
2021-06-09 07:48:18
I abuse the words pizza, spaghetti, lasagna to mean a lot of things that have a vaguely similar shape
veluca
2021-06-09 07:48:31
that said, I know a person from Sicily that likes fries on pizza...
2021-06-09 07:48:43
sometimes I wonder where the world is heading xD
_wb_
2021-06-09 07:50:29
Fries on pizza, hm I wouldn't do that, I consider the bread to be enough of the starchy stuff not to require potatoes too
2021-06-09 07:51:07
Are potatoes considered a vegetable in Italy?
2021-06-09 07:51:31
In France they are kind of treated like a vegetable
2021-06-09 07:52:35
While here it's more like the bread/pasta/rice part of a dish, not considered a vegetable
fab
2021-06-09 07:58:47
2021-06-09 07:59:29
2021-06-09 07:59:42
photos not mine
2021-06-09 07:59:59
they're from the internet
2021-06-09 08:00:53
only poor people eat those things
2021-06-09 08:01:13
because spending 4 euros for a pizza is considered poor
2021-06-09 08:01:46
people also like vegetables in italy
veluca
_wb_ Fries on pizza, hm I wouldn't do that, I consider the bread to be enough of the starchy stuff not to require potatoes too
2021-06-09 08:04:25
it's more for the *fried* part ๐Ÿ˜›
2021-06-09 08:04:55
we call it "americana", and it also splits opinions ๐Ÿ˜›
fab
2021-06-09 08:07:47
2021-06-09 08:07:59
other codecs becoming food
2021-06-09 08:08:03
we stop here
2021-06-09 08:08:13
i shouldn't post images
veluca
fab because spending 4 euros for a pizza is considered poor
2021-06-09 08:11:14
btw, here in Zurich, a reasonable pizza is about 20 chf ๐Ÿ˜› (~ as many euros)
fab
2021-06-09 08:14:12
this channel is called other-codecs, not screenshots-and-youtube-videos
_wb_
2021-06-09 08:14:40
One thing that has changed here in the past decade or two, is the way Turkish kebab or durum is usually prepared
2021-06-09 08:15:42
Inserting fries between the bread or rolled inside the durum is now kind of standard as far as I can tell
2021-06-09 08:16:26
Belgians do like fries a lot
2021-06-09 08:17:39
Our most famous dishes are: moules-frites (mussels with fries), biefstuk-friet (steak with fries), stoofvlees met frieten (stew with fries)
Crixis
veluca <@!424295816929345538> <@!416586441058025472>
2021-06-09 10:30:40
Seriously, I can somehow accept a bad pizza, but please stop America from putting ketchup on pasta, it's gross, it's a crime
veluca
2021-06-09 10:32:47
what about ketchup on pizza? ๐Ÿ˜›
2021-06-09 10:32:55
I've seen that, and not even in the US!
Crixis
2021-06-09 10:43:11
Please, why, stop
2021-06-09 10:44:56
Ketchup has a strong flavor (also acidic?), it's good on salty things, but why use it on pizza or pasta?
2021-06-09 10:46:11
It completely replaces the flavor of the pasta and the other ingredients
2021-06-09 10:47:23
It's a stereotype that Italians can't appreciate other cultures' food
2021-06-09 10:48:33
But please stop mixing ketchup and wheat
Deleted User
Crixis Ketchup as a strong flavor (also acid?), is good on salty things but why use it on pizza or pasta?
2021-06-09 10:59:07
What else to use on pizza?
Crixis
What else to use on pizza?
2021-06-09 11:01:24
Tomato, all sorts of cheese and meat, also egg; special pizzas even have sardines. Lots of vegetables
2021-06-09 11:01:57
Just don't use fruit or strong sauces
Deleted User
2021-06-09 11:04:42
but I like pineapples ๐Ÿ˜„
2021-06-09 11:05:39
I always put them with as little space as possible between each piece on my pizza. ^^
Crixis
2021-06-09 11:18:45
I like chocolate but I don't put it on pizzas 😆
Deleted User
2021-06-09 11:42:12
https://www.reddit.com/r/StupidFood/comments/bh5awz/chocolate_pizza_is_getting_mainstream/
https://www.reddit.com/r/StupidFood/comments/bh5awz/chocolate_pizza_is_getting_mainstream/
2021-06-10 12:27:45
I've seen this monstrosity in Poland... NOOOOOO
fab
2021-06-10 07:04:37
do you have link with the new webp2 comparison
2021-06-10 07:25:02
https://storage.googleapis.com/demos.webmproject.org/webp/cmp/2021_06_08/index.html#meg-20110701-japan-expo-02&AVIF-AOM=s&WEBP2=s&subset1
veluca
Crixis I like choccolate but i don't put it on pizzas ๐Ÿ˜†
2021-06-10 07:36:54
pizza alla nutella? ๐Ÿ˜›
fab
2021-06-10 07:37:29
veluca is from north or south
2021-06-10 07:37:37
or center
veluca
2021-06-10 07:38:16
~~~ Genoa
2021-06-10 07:38:44
but I went to university in Pisa
Crixis
veluca pizza alla nutella? ๐Ÿ˜›
2021-06-10 07:38:46
nutella != chocolate, also it's a dessert, not a tomato/nutella pizza
veluca
2021-06-10 07:39:07
ok, ok ๐Ÿ˜› (yeah, chocolate + tomato = ?????)
2021-06-10 07:39:20
nutella is not *that* different from chocolate...
Crixis
2021-06-10 07:40:26
but not equal either
_wb_
2021-06-10 07:46:40
chocolate + tomato can be nice, some Mexican mole sauces do that
fab
2021-06-10 08:39:43
current codecs are expensive
2021-06-10 08:40:01
the only good codec is cavif-rs, which can encode good explicit content
2021-06-10 08:40:16
thanks to a good sharpening filter that covers the full color range
2021-06-10 08:40:50
new approaches with faster speeds, better compression, and better quality per size are still being studied
2021-06-10 08:46:20
I remember a metric called IQA
2021-06-10 09:04:50
cavif-rs has blurriness, the image looks only red and white
2021-06-10 09:05:13
it has its artifacts, it should be used only with high-quality images
2021-06-10 09:40:53
for normal images the lack of adaptive quantization is evident
2021-06-10 09:41:05
making some images seem too fake
2021-06-10 09:41:45
and flaccid body
2021-06-10 09:44:52
eddiezato's builds of jxl don't work on Windows 7 (SSE4.1) <@!387462082418704387>
2021-06-10 11:00:26
The integration is fake: no release, no builds, no automatic conversion to JPG when you copy into an old version of Word. No full integration in any text applications.
2021-06-10 11:00:52
You can't say people use JPEG XL
eddie.zato
2021-06-10 11:33:52
I experimented and dropped 'Scalar'. I'll bring it back next time.
Crixis
fab The integration is fake no release no builds no automatic conversion to jpg when You copy to old version of word. No full integration in any text applications.
2021-06-10 11:38:58
We don't have a good Windows viewer plugin and you want multi-application, OS-level automatic conversion?
veluca
Crixis We don't have a good windows viewer plugin and you want a multi-application OS level automatic coversion?
2021-06-10 11:48:21
remind me again what's wrong with the mirillis one?
Scope
2021-06-10 11:50:43
It's on a very old version of libjxl, crashes on some images and doesn't support animation
veluca
2021-06-10 11:53:12
animation I care less about
2021-06-10 11:53:21
have you tried filing an issue?
2021-06-10 11:54:18
possibly even asking to move it to libjxl, I dunno
Scope
2021-06-10 11:58:08
They don't seem to be very active and don't read issues
Crixis
2021-06-10 12:18:52
crashes on a lot of images
eddie.zato
2021-06-10 01:03:24
I guess it's just a one-time thing with no further support.
spider-mario
2021-06-10 02:14:23
> With standard uncompressed file formats, such as RGB TIFF, whew, 2004 was a different time, wasnโ€™t it
_wb_
2021-06-10 02:25:25
where is that quote from?
2021-06-10 02:27:09
(btw TIFF is still a thing in 2021, for multilayer/multipage, for float24, for lossless CMYK, etc it's still the only option, kind of)
spider-mario
2021-06-10 02:58:22
itโ€™s from the manual of the Canon PowerShot Pro1
2021-06-10 02:58:35
an old model from 2004, discontinued in 2006
2021-06-10 02:59:26
true, TIFF is still sometimes used, though itโ€™s rare for it to be uncompressed nowadays, isnโ€™t it?
_wb_
2021-06-10 03:06:40
typical TIFFs I see are either uncompressed or RLE 'compressed'
2021-06-10 03:07:08
there are TIFF extensions for all sorts of compressed payloads but those tend to be not well supported
spider-mario
2021-06-10 04:11:17
oh, good to know; when I export tiff from darktable, itโ€™s generally with deflate
2021-06-10 04:11:29
havenโ€™t checked what Hugin and DxO PhotoLab do exactly
2021-06-10 04:11:49
I did see limited compatibility but I assumed it was either general incompatibility with tiff, or because I exported as floating point
2021-06-10 04:12:02
(my main reason for using tiff in the first place)
_wb_
2021-06-10 04:30:49
Deflate is also not very effective in tiff, iirc it is either without prediction or with Left prediction only (and I think there's only prediction for uint8 and uint16)
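For reference, TIFF's Predictor tag (value 2, horizontal differencing) is exactly that left-only prediction; a toy sketch of how much it matters to deflate, on a made-up smooth 16-bit scanline (stdlib only, illustrative):

```python
import math
import struct
import zlib

def left_predict(samples):
    # TIFF Predictor 2 (horizontal differencing) on 16-bit samples:
    # store each sample's delta from its left neighbor, modulo 2^16.
    out, prev = [], 0
    for s in samples:
        out.append((s - prev) & 0xFFFF)
        prev = s
    return out

# One smooth 16-bit scanline, the kind of data TIFF often carries.
samples = [int(1000 * math.sqrt(i)) for i in range(4096)]

raw = struct.pack("<4096H", *samples)
diff = struct.pack("<4096H", *left_predict(samples))

c_raw = len(zlib.compress(raw, 9))
c_diff = len(zlib.compress(diff, 9))
print(c_raw, c_diff)  # deflate does much better after differencing
```

Which is why deflate-in-TIFF without the predictor (or with the predictor unsupported for a given sample type) leaves a lot of compression on the table.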
spider-mario
2021-06-10 05:22:46
the full sentence in the manual was:
2021-06-10 05:22:56
> In addition, although a RAW file is larger than an equivalent JPEG file, it is still only approximately one-quarter the size* of an uncompressed RGB TIFF format file, making it relatively compact.
2021-06-10 05:23:38
> * As measured by Canonโ€™s testing standard.
2021-06-10 05:23:58
https://files.canon-europe.com/files/soft24019/Manual/Pro1_CUG_EN.pdf
Scope
2021-06-10 11:31:27
https://twitter.com/miltinh0c/status/1392944896760238080
2021-06-12 01:00:28
๐Ÿค” <https://chromium.googlesource.com/codecs/libwebp2/+/bcb5089b06912771d0af46742abca489e55f7da4>
2021-06-14 06:19:56
<https://forum.doom9.org/showthread.php?p=1945019#post1945019>
Deleted User
2021-06-14 06:21:54
<@!321486891079696385> continuing our discussion from <#794206170445119489> here in order not to derail that channel.
2021-06-14 06:22:02
I've identified 3 major Dark Shikari's contributions that helped x264 become a quality king (and their respective Doom9 threads): - Variance Adaptive Quantization (VAQ) https://forum.doom9.org/showthread.php?t=135093 - Psychovisually optimized rate-distortion optimization (Psy-RDO) https://forum.doom9.org/showthread.php?t=138293 - Macroblock Tree Ratecontrol (MB-tree) https://forum.doom9.org/showthread.php?t=148686
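A rough sketch of the VAQ idea from the first thread: offset each block's QP by its log-energy relative to a typical value, so flat blocks get more bits and busy blocks fewer. The constant and strength here are illustrative guesses at x264-like behavior, not its actual code:

```python
import math

def vaq_offset(variance, strength=1.0, bias=14.427):
    # Hypothetical x264-style adaptive quantization: offset the block QP by
    # its log2 energy relative to a "typical" energy (bias).
    # Negative offset = lower QP = spend more bits on this block.
    return strength * (math.log2(max(variance, 1.0)) - bias)

flat_qp_adj = vaq_offset(50)        # flat sky: QP lowered, detail preserved
busy_qp_adj = vaq_offset(500_000)   # noisy grass: QP raised, bits saved
print(round(flat_qp_adj, 2), round(busy_qp_adj, 2))
```

The psychovisual rationale: quantization error is far more visible in flat regions than in textured ones, so a PSNR-neutral reallocation of bits toward flat blocks reads as a large subjective quality win.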
2021-06-14 06:24:30
May be helpful to AOM devs and rav1e devs (including you ofc)
2021-06-14 06:25:23
libaom currently does something like this (x264 with PSNR tuning):
2021-06-14 06:25:42
https://web.archive.org/web/20150119170631if_/http://x264.nl/developers/Dark_Shikari/imagecoding/x264_psnr.png
2021-06-14 06:27:32
I want it to create "fake detail", something like this (x264 at its usual settings):
2021-06-14 06:27:49
https://web.archive.org/web/20150514184757if_/http://x264.nl/developers/Dark_Shikari/imagecoding/x264.png
2021-06-14 06:28:30
Photos taken from this article: https://web.archive.org/web/20150319214453/http://x264dev.multimedia.cx/archives/541
Scope
2021-06-14 06:29:29
Variance AQ Megathread (AQ v0.48 update--defaults changed) <https://forum.doom9.org/showthread.php?t=132760> VAQ 2.0 <https://forum.doom9.org/showthread.php?t=136445> New B-pyramid testing thread <https://forum.doom9.org/showthread.php?t=152761>
2021-06-14 06:32:54
Introducing the next generation in video encoders: x264 Revolution! <https://forum.doom9.org/showthread.php?t=146064>
2021-06-14 06:39:33
Also, I've been on the x264 IRC since its creation and followed the development; it's a pity there isn't such an active and open community around the new video codecs, with a leader like DS. Maybe daala and rav1e could qualify, but they are much less open and active (just like in the old days with Theora and other open formats)
Deleted User
Scope Also, I've been on x264 IRC since its creation and followed the development, it's a pity there isn't such an active and open community of new video codecs with a leader like DS now, maybe daala and rav1e could do, but they are much less open and active (just like in the old days with Theora and other open formats)
2021-06-14 06:41:12
> I've been on x264 IRC since its creation and followed the development Wow, a veteran... DS was indeed a damn good leader.
BlueSwordM
I've identified 3 major Dark Shikari's contributions that helped x264 become a quality king (and their respective Doom9 threads): - Variance Adaptive Quantization (VAQ) https://forum.doom9.org/showthread.php?t=135093 - Psychovisually optimized rate-distortion optimization (Psy-RDO) https://forum.doom9.org/showthread.php?t=138293 - Macroblock Tree Ratecontrol (MB-tree) https://forum.doom9.org/showthread.php?t=148686
2021-06-14 06:43:00
1. That one is currently being worked on by 2 rav1e folks. Anyway, I'm lucky to have my own personal laptop at work, but look at the difference prioritizing sharpness a bit over blurriness. Only difference? `sharpness=0` vs `sharpness=2` `avifenc -s 2 -j 4 --min 0 --max 63 -a end-usage=q -a cq-level=61 -a color:sharpness=X -a tune=butteraugli -a color:enable-chroma-deltaq=1 -a color:aq-mode=1 -a color:deltaq-mode=3` What do you think? https://slow.pics/c/FttY4yxx 32kB end file sizes.
2021-06-14 06:44:20
I think `sharpness=1` would therefore be our best compromise ๐Ÿ™
Scope
2021-06-14 06:49:42
I think it would be better to resize to 840x1120 to match the images in the article (because this is overly strong compression where it is hard to say what is better)
Deleted User
BlueSwordM 1. That one is currently being worked on by 2 rav1e folks. Anyway, I'm lucky to have my own personal laptop at work, but look at the difference prioritizing sharpness a bit over blurriness. Only difference? `sharpness=0` vs `sharpness=2` `avifenc -s 2 -j 4 --min 0 --max 63 -a end-usage=q -a cq-level=61 -a color:sharpness=X -a tune=butteraugli -a color:enable-chroma-deltaq=1 -a color:aq-mode=1 -a color:deltaq-mode=3` What do you think? https://slow.pics/c/FttY4yxx 32kB end file sizes.
2021-06-14 06:55:56
I agree with <@!111445179587624960>, but I can already tell from the full-size (non-downscaled) encode that `sharpness=2` is *kinda* better. There are some nasty artifacts though, e.g. coding noise in the center of the dog's face, I see too many DCT components. Look again at the x264-tuned images I linked above, the fake detail in them is *directional*-looking. Maybe DS fiddled a bit with directional predictors? It's a topic to research.
BlueSwordM
I agree with <@!111445179587624960>, but I can already tell from the full-size (non-downscaled) encode that `sharpness=2` is *kinda* better. There are some nasty artifacts though, e.g. coding noise in the center of dog's face, I see too much DCT components. Look again at x264 tuned images I linked above, the fake detail in them is *directional*-looking. Maybe DS fiddled a bit with directional predictors? It's a topic to research.
2021-06-14 07:17:33
To be fair, the image went from 3MB to 30kB, which is a massive decrease.
2021-06-14 07:17:43
x264 intra would likely look a lot worse.
2021-06-14 07:18:10
Of course, for such high compression, you are better off scaling the image down.
lithium
I've identified 3 major Dark Shikari's contributions that helped x264 become a quality king (and their respective Doom9 threads): - Variance Adaptive Quantization (VAQ) https://forum.doom9.org/showthread.php?t=135093 - Psychovisually optimized rate-distortion optimization (Psy-RDO) https://forum.doom9.org/showthread.php?t=138293 - Macroblock Tree Ratecontrol (MB-tree) https://forum.doom9.org/showthread.php?t=148686
2021-06-15 09:31:50
Hello <@456226577798135808>, thank you for providing this information 🙂 A little curious: will the AV1 devs implement those amazing features in an AV1 encoder? Or are some alternative features already implemented in an AV1 encoder? > BlueSwordM > 1. That one is currently being worked on by 2 rav1e folks.
2021-06-15 05:24:09
I read some AV1 technology articles; it looks like for now AV1 encoders don't have a feature similar to --psy-rd. (Maybe tune=butteraugli gives some psychovisual optimization? I'm not sure.)
_wb_
2021-06-15 05:52:59
Everything a lossy codec does is in a way a form of psychovisual optimization - the goal of lossy is always to lose stuff you don't see.
2021-06-15 05:53:32
Some encoders are of course better at it than others, and some codecs are more suitable for it than others.
2021-06-15 05:54:28
But all lossy coding tools are designed (or should have been designed) specifically for that goal.
2021-06-15 05:57:07
If they're designed for something else, it's because an engineer somewhere was confusing some easily computable number like psnr with the actual goal of the codec/encoder.
lithium
2021-06-15 06:20:07
Looks like AV1 uses Just Noticeable Distortion (JND) for some psychovisual optimization. I don't really understand this feature, <@!321486891079696385> could you teach me a bit about JND? 🙂 > https://ieeexplore.ieee.org/document/8954513
Scope
2021-06-15 07:01:36
<https://encode.su/threads/3557-Low-Complexity-Enhancement-Video-Coding-(LCEVC)?p=69923&viewfull=1#post69923>
diskorduser
2021-06-15 07:02:50
Scope has ๐Ÿ”— for every topic.
Scope
2021-06-15 07:08:45
LCEVC has very interesting ideas: encode the small details and high-frequency part separately, and the rest at reduced resolution with any of the available codecs (moreover, encoding becomes much faster)
2021-06-15 07:09:19
However, it is still unclear how much better this is in practice
_wb_
2021-06-15 07:28:51
I have heard claims of great results, but I am somewhat sceptical - I think it was mostly designed to have something with a royalty-free baseline (to compete with AOM) and options for patented extensions (because MPEG is gonna MPEG)
2021-06-15 07:32:44
Claims I have seen so far talk about PSNR, which I think is a particularly bad metric for approaches like this, because even just encoding a downscaled image and upscaling it again (without any added detail) is something that is amazingly good for PSNR - after all, PSNR measures an average error and the average error will be quite good automatically with hierarchical approaches.
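That failure mode is easy to reproduce on a toy 1-D "image": discard all the fine texture via 2x average-downscale plus nearest-neighbor upsample, and PSNR still comes out looking respectable (pure stdlib sketch, signal and numbers illustrative):

```python
import math

def psnr(a, b, peak=255.0):
    # PSNR is just log-scaled mean squared error over the whole signal.
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)

# Smooth ramp plus fine high-frequency texture (the "detail").
n = 1024
signal = [128 + 100 * math.sin(x / 200) + 5 * math.sin(3.0 * x) for x in range(n)]

# 2x downscale by averaging, then nearest-neighbor upsample: the texture is
# essentially gone, only the smooth part survives.
down = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(n // 2)]
up = [down[i // 2] for i in range(n)]

print(round(psnr(signal, up), 1), "dB despite losing all the fine detail")
```

Because the smooth component dominates the signal's energy, the averaged error stays small and the score stays high, even though every bit of texture is lost - exactly why average-error metrics flatter hierarchical/upscaling schemes.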
190n
_wb_ I have heard claims of great results, but I am somewhat sceptic - I think it was mostly designed to have something with a royalty-free baseline (to compete with aom) and options for patented extensions (because mpeg is gonna mpeg)
2021-06-15 07:37:38
isn't that EVC, not LCEVC?
_wb_
2021-06-15 07:58:48
EVC is the royalty-free codec, LCEVC is the mechanism to layer two codecs hierarchically, I think intended thing is to do EVC for the base layer and VVC for the 'extra details' layer
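On that reading, the mechanism is classic pyramid coding: a lossy base at half resolution, a normative upsample, and a coded residual on top. A toy 1-D sketch with scalar quantizers standing in for the two codecs (everything here is hypothetical, not LCEVC's actual syntax):

```python
def quantize(xs, step):
    # Stand-in for a lossy codec: coarse scalar quantization.
    return [round(x / step) * step for x in xs]

signal = [float((i * 7) % 50 + i // 3) for i in range(512)]

# Base layer: 2x downscaled, coarsely coded.
down = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
base = quantize(down, 8.0)

# Normative upsample (nearest), then an enhancement layer of coded residuals.
predicted = [base[i // 2] for i in range(len(signal))]
residual = quantize([s - p for s, p in zip(signal, predicted)], 2.0)
recon = [p + r for p, r in zip(predicted, residual)]

base_err = max(abs(s - p) for s, p in zip(signal, predicted))
full_err = max(abs(s - r) for s, r in zip(signal, recon))
print(base_err, full_err)  # the enhancement layer bounds the error by its step
```

The open question from the discussion is exactly the one this sketch hides: whether spending bits on `residual` is ever cheaper than spending the same bits on a single full-resolution codec.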
2021-06-15 07:59:41
and make videos that way that use a 'royalty-free' codec, but to view the high-res version you need to have the patent-encumbered codec
2021-06-15 08:00:16
I dunno, I'm just speculating based on impressions from the sideline
Scope
2021-06-15 08:03:51
https://youtu.be/B_d9C-I6kX0?t=231
2021-06-15 08:07:00
_wb_
2021-06-15 08:12:24
https://www.lcevc.org/
2021-06-15 08:13:02
> In particular, the key performance requirements were defined as follows: > > When enhancing an n-th generation codec (e.g., AVC), compression efficiency for the aggregate stream is appreciably higher than that of the n-th generation MPEG codec used at full resolution and as close as possible to that of the (n+1)-th generation codec (e.g., HEVC) used at full resolution, at bandwidths and operating conditions relevant to mass market distribution;
Scope
2021-06-15 08:16:11
I've seen that slide before (I had PSNR in my head but it did indeed say VMAF โ€” not that the difference between those two metrics is huge)
2021-06-15 08:16:48
I am curious what kind of encoders they used there โ€” do you see that x axis?
2021-06-15 08:17:08
a logscale that goes from 1000x realtime to 100,000x realtime
2021-06-15 08:18:32
are they comparing a somewhat optimized superslow reference encoder with a totally not optimized superslow reference encoder here, or what?
2021-06-15 08:19:35
only Netflix can afford to spend _that_ kind of compute on video
Scope
2021-06-15 08:21:33
For AVC as far as I remember they used x264, for more modern ones like VVC - reference encoders, for AV1 - libaom
_wb_
2021-06-15 08:23:10
I have no time to watch that video, but I think the more realistic thing is that they have something that isn't actually better than the codecs it uses, and LCEVC(1:2 video in h264, 1:1 residuals in h265) produces something that is a bit worse (slightly slower, slightly less dense) than just doing 1:1 video in h265, but still significantly better than doing 1:1 video in h264
2021-06-15 08:24:27
and same if you do it with h265 and h266
Scope
2021-06-15 08:24:30
But LCEVC gives more benefit to older codecs; in modern codecs, as far as I understand, some similar methods are already used internally, so for VVC the benefit is only about 7% or less (but there is still an increase in encoding speed, due to the lower resolution)
_wb_
2021-06-15 08:25:13
yes well older codecs have smaller macroblocks etc so they are less good at high res
2021-06-15 08:26:59
it's a nice idea and a good way to make gracefully degrading upgrade/adoption paths for new codecs, but I don't think it can be better than just using the best codec โ€” I might be wrong though
Scope
2021-06-15 08:28:53
The interesting thing is that this high-frequency layer does not require a lot of resources to decode and can be done in JS or supported in the player (while the main stream stays in a normal format)
_wb_
2021-06-15 08:29:59
Wait they have a codec for the high res layer itself?
Scope
2021-06-15 08:30:09
Yep
2021-06-15 08:30:13
2021-06-15 08:30:38
_wb_
2021-06-15 08:30:55
I thought they only standardized the upsampling method and how to apply the residual layer to it, not how to encode the residual layer itself
Scope
2021-06-15 08:31:18
_wb_
2021-06-15 08:32:56
Oh so they also specify a specific residual encoding then, that is fixed and cannot be upgraded/improved?
Scope
2021-06-15 08:33:57
๐Ÿค”
2021-06-15 08:34:29
_wb_
2021-06-15 08:34:32
How does this not mean you can recursively make an n+5th generation codec right now by doing lcevc five times with an n-th generation codec at the base?
Scope
2021-06-15 08:35:30
2021-06-15 08:37:04
_wb_
2021-06-15 08:37:51
If the claims are true, why not just make a recursive lcevc codec that applies residuals 12 times on a 1:4096 video which is just a single pixel? Should be ultra fast and 12 generations ahead of its time
BlueSwordM
_wb_ If the claims are true, why not just make a recursive lcevc codec that applies residuals 12 times on a 1:4096 video which is just a single pixel? Should be ultra fast and 12 generations ahead of its time
2021-06-15 08:38:23
There's likely a downside we haven't seen with LCEVC yet, since not many people have tested it.
_wb_
2021-06-15 08:39:20
Well it could be it is great but then why isn't it a coding tool already in all the other modern codecs?
BlueSwordM
_wb_ Well it could be it is great but then why isn't it a coding tool already in all the other modern codecs?
2021-06-15 08:39:58
Actually, it is already in some ways. Grain synthesis/noise synthesis is "just" an enhancement layer designed to bolster psycho-visual quality significantly without huge computational/data rate increases.
_wb_
2021-06-15 08:40:05
The idea itself goes back at least to the 1992 jpeg spec, which describes exactly that: normative upsampling, residual encoding. It was called hierarchical jpeg.
2021-06-15 08:40:51
"scalable layers" is what it is usually called in video codecs iirc, and I think x265 and av1 do support that
Scope
2021-06-15 08:41:39
I think this only works well for half resolutions and such; moreover, the LCEVC layer also consumes resources and bitrate, and it always stays at full resolution
2021-06-15 08:43:36
It is also unknown how well this enhancement sub-layer compresses
_wb_
2021-06-15 08:44:39
So they have residuals at half res first to correct artifacts, then at full res to add detail?
2021-06-15 08:44:50
No way this is a good idea for density
2021-06-15 08:45:20
Encoding artifacts and then encoding residuals to remove artifacts, no way that that's a good idea
2021-06-15 08:45:36
But we'll see where it goes
Scope
2021-06-15 08:46:51
It's also bad that all this is not royalty-free (though, as they say, very cheap to license)
lithium
2021-06-16 08:44:52
Continuing the Just Noticeable Distortion (JND) discussion, https://discord.com/channels/794206087879852103/805176455658733570/854425055224135720 I tried to find more JND information, but I couldn't find much. I guess AV1's Just Noticeable Distortion (JND) and JXL's just-noticeable error (Butteraugli distance) probably have some similarities?
2021-06-16 09:25:20
Some detail about JND 01
2021-06-16 09:25:36
Some detail about JND 02
2021-06-16 09:25:41
Some detail about JND 03
_wb_
2021-06-16 03:38:24
I wonder if it would make sense to design a video codec that is not designed for hw decoding, but designed for sw decode of intra frames and hw decode of inter frames. So e.g. something like jxl for the keyframes (one every 5 or 10 seconds or whatever) and av1 for the interframe part.
Deleted User
2021-06-16 03:54:07
AV1 would at least have to support XYB. Color accuracy quickly gets bad with video codecs, and you'd notice slight changes easily since your idea would be like a flicker test.
2021-06-16 03:59:54
It would be nice if cjxl supported adaptive quality. E.g. -d 1 for frames shown at least 1s, -d 2 for frame times between 500ms and 999ms, and -d 3 for everything below.
lithium
2021-06-16 04:13:55
For now the AV1 encoder follows this standard; probably AV2 will implement XYB. > https://www.itu.int/rec/T-REC-H.273-201612-I/en
2021-06-16 04:26:51
I don't understand why JND still includes SSE (the sum of squared errors)? In theory MSE, RMSE, and SSE have no psychovisual optimization.
_wb_
2021-06-16 04:30:37
A JND should be something psychovisual, but what kind of aggregation/pooling you do is an orthogonal question: mse is the 2-norm, so basically it's an average where good pixels can compensate for bad pixels, while max-butteraugli is a 'worst case' pooling, where a single bad spot can spoil the whole image.
2021-06-16 04:31:20
Ssimulacra and p-norm butteraugli are somewhere in between those two extremes.
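The pooling point can be illustrated numerically. A toy sketch (plain Python, not Butteraugli itself) of p-norm pooling over a per-pixel error map: a single bad spot mostly averages away in a 2-norm, weighs much more in a 6-norm, and completely dominates worst-case (max) pooling:

```python
def pnorm_pool(errors, p):
    """p-norm pooling of per-pixel errors: (mean(e^p))^(1/p).
    p=2 behaves like RMSE; as p grows it approaches max pooling."""
    return (sum(e ** p for e in errors) / len(errors)) ** (1.0 / p)

# Toy error map: 99 nearly-perfect pixels and one bad spot.
errors = [0.1] * 99 + [5.0]

mse_like = pnorm_pool(errors, 2)  # ~0.51: bad spot mostly averaged away
p6 = pnorm_pool(errors, 6)        # ~2.3: bad spot weighs much more
worst = max(errors)               # 5.0: worst-case pooling, spot dominates
```

So the choice of p is exactly the dial between "good pixels compensate for bad ones" and "one bad spot spoils the whole image".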
It would be nice if cjxl supported adaptive quality. E.g. -d 1 for frames shown at least 1s, -d 2 for frame times between 500ms and 999ms, and -d 3 for everything below.
2021-06-16 04:34:43
Yes, not just for animation (which I don't think is a strong point of jxl anyway) but also for single frames it would be nice to have local control over the fidelity target to reach.
AV1 would at least have to support XYB. Color accuracy quickly gets bad in video codecs, and you notice slight changes easily, since your idea would effectively be a flicker test.
2021-06-16 04:36:32
Well inter frames would still be based on decoded intra frames, but yes, it would help for that if they would use the same color space.
lithium
_wb_ Ssimulacra and p-norm butteraugli are somewhere in between those two extremes.
2021-06-16 04:41:53
Maybe we should convince the AV1 devs to use Butteraugli 3-norm or 6-norm for the JND features? (This slide is from the AOM Summit 2021.)
_wb_
2021-06-16 04:49:36
Maybe such a hybrid codec with strong intra, not restricted to hw limitations, and classical inter (restricted to hw implementability) could combine the benefits of hw (battery life, offload from cpu) with the benefits of sw (stronger entropy coding possible etc), and make a codec that is truly suitable for both video and still.
BlueSwordM
I've identified 3 major contributions by Dark Shikari that helped x264 become a quality king (and their respective Doom9 threads): - Variance Adaptive Quantization (VAQ) https://forum.doom9.org/showthread.php?t=135093 - Psychovisually optimized rate-distortion optimization (Psy-RDO) https://forum.doom9.org/showthread.php?t=138293 - Macroblock Tree Ratecontrol (MB-tree) https://forum.doom9.org/showthread.php?t=148686
2021-06-16 05:14:39
1. Following my point: aomenc already has variance AQ. However, by default it is not well tuned and does not work super well for video, but there are ways to improve it. Two rav1e people are working on making the x264 AQ implementations even stronger and more temporally aware.
2. Already present in rav1e; I'm not sure about aomenc (it has psy-rd already in the form of `--sharpness`, but I'm not sure about pure psychovisual psy-RDO).
3. Macroblock Tree Ratecontrol = temporal RDO. Already present in rav1e, aomenc, and SVT-AV1. rav1e's implementation is currently the strongest I've seen within encoders, aomenc's is currently being improved nicely, and SVT-AV1's implementation works, although it is of lower quality in order to achieve higher throughput (lower quality in this case = more bpp spent).
lithium
2021-06-16 05:18:54
Thank you very much for your help <@!321486891079696385> ๐Ÿ™‚
Deleted User
BlueSwordM 1. Following my point: aomenc already has variance AQ. However, by default it is not well tuned and does not work super well for video, but there are ways to improve it. Two rav1e people are working on making the x264 AQ implementations even stronger and more temporally aware. 2. Already present in rav1e; I'm not sure about aomenc (it has psy-rd already in the form of `--sharpness`, but I'm not sure about pure psychovisual psy-RDO). 3. Macroblock Tree Ratecontrol = temporal RDO. Already present in rav1e, aomenc, and SVT-AV1. rav1e's implementation is currently the strongest I've seen within encoders, aomenc's is currently being improved nicely, and SVT-AV1's implementation works, although it is of lower quality in order to achieve higher throughput (lower quality in this case = more bpp spent).
2021-06-16 06:05:47
Nice to know! I've got somewhat different ideas, but still: you're working on that and that's great.
_wb_
2021-06-16 06:15:28
I wonder how exhaustive libaom is at slowest speed. Is it exploring most of the space the bitstream can express?
veluca
2021-06-16 07:01:45
the more interesting question imho is: is it optimizing for the right thing in its exploration?
_wb_
2021-06-16 07:03:05
Probably not. I don't think anyone really knows what exactly the right thing is, let alone has an algorithm for it, let alone one with reasonable speed
2021-06-16 07:03:23
But assuming you really want to optimize for psnr, say
2021-06-16 07:05:01
Is it close to trying everything it can to do that, or are there encoder choices where it just picks something without really trying to optimize close to exhaustively?
BlueSwordM
_wb_ I wonder how exhaustive libaom is at slowest speed. Is it exploring most of the space the bitstream can express?
2021-06-16 07:13:59
Yes, but there are places where it still lacks some... compute destruction.
2021-06-16 07:14:11
For example, only rav1e does a bottom-up partition search at its slower speeds similar to this: https://discord.com/channels/794206087879852103/794206170445119489/854279720312242227
2021-06-16 07:15:02
All other open AV1 encoders do top-down partition search, leading to some... dubious block choices in edge cases.
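For intuition, an exhaustive bottom-up quadtree search can be sketched in a few lines: fully cost each subtree, then keep a split only if the children's best total beats coding the block whole. `rd_cost` is a stand-in for a real rate-distortion estimate, and real encoders prune this search heavily instead of running it in full:

```python
def bottom_up(block, rd_cost, min_size=8):
    """Exhaustive bottom-up quadtree partition search (toy model).
    block = (x, y, size); returns (best_cost, list_of_leaf_blocks).
    A split is kept only if the children's combined best cost is
    lower than coding the block unsplit."""
    whole = rd_cost(block)
    x, y, size = block
    if size <= min_size:
        return whole, [block]
    half = size // 2
    children = [(x, y, half), (x + half, y, half),
                (x, y + half, half), (x + half, y + half, half)]
    split_cost, split_blocks = 0.0, []
    for c in children:
        cost, blocks = bottom_up(c, rd_cost, min_size)
        split_cost += cost
        split_blocks += blocks
    if split_cost < whole:
        return split_cost, split_blocks
    return whole, [block]
```

A greedy top-down search instead decides whether to recurse from a cheap estimate at each level, so it can lock in a bad early split and never see the partitioning the bottom-up search would have found.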
_wb_
2021-06-16 07:16:11
Partitioning is one of those things where I guess exhaustive is not feasible
2021-06-16 07:16:57
But at least av1 encoders can in principle do any partitioning the bitstream allows, right?
BlueSwordM
2021-06-16 07:17:03
Yes.
_wb_
2021-06-16 07:18:16
This is not the case with cjxl, which only does naturally aligned partitionings that do not cross 64x64 boundaries, while the bitstream allows floating blocks as long as they do not cross 256x256 boundaries.
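A toy check of the encoder-side constraint described here (power-of-two sizes assumed; this illustrates the stated rule, not cjxl's actual code):

```python
def naturally_aligned_in_group(x, y, size, group=64):
    """The restricted placements the cjxl encoder considers, per the
    description above: block position is a multiple of the block size,
    and the block stays inside a single 64x64 group. The bitstream
    itself would also allow 'floating' (unaligned) blocks within a
    256x256 boundary, which this check rejects."""
    return (x % size == 0 and y % size == 0
            and x // group == (x + size - 1) // group
            and y // group == (y + size - 1) // group)
```

So an aligned 32x32 block at (32, 0) passes, while the same block floated to (8, 8) is rejected even though the bitstream could express it.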