|
|
veluca
|
2021-05-26 06:00:42
|
yeah
|
|
|
diskorduser
```
[ 93%] Linking CXX executable djxl
/usr/bin/ld: attempted static link of dynamic object `/usr/lib/libgif.so'
collect2: error: ld returned 1 exit status
make[2]: *** [tools/CMakeFiles/djxl.dir/build.make:136: tools/djxl] Error 1
make[1]: *** [CMakeFiles/Makefile2:1044: tools/CMakeFiles/djxl.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[100%] Linking CXX executable benchmark_xl
/usr/bin/ld: attempted static link of dynamic object `/usr/lib/libjpeg.so'
collect2: error: ld returned 1 exit status
```
|
|
2021-05-26 06:00:57
|
you're on arch, right?
|
|
|
diskorduser
|
|
|
veluca
|
2021-05-26 06:01:15
|
the thing is that static builds ain't gonna work if your OS doesn't have the static libs
|
|
2021-05-26 06:01:23
|
and arch doesn't do that IIRC
|
|
|
diskorduser
|
2021-05-26 06:01:27
|
Yeah you're right
|
|
|
|
Deleted User
|
2021-05-26 06:06:46
|
https://github.com/libjxl/libjxl/commit/96e1f432595669ab9912934ec4d8b4d24d3f8eab
Why no ASCII art tho <:PepeHands:808829977608323112>
|
|
2021-05-26 06:19:53
|
Seems like I have to update my one-liner compiling script with `git revert`...
|
|
|
monad
|
2021-05-26 06:37:43
|
It's probably to make room for a nicer logo with custom console pixel manipulation.
|
|
2021-05-26 06:38:36
|
On a different note, is there a tool that can simply strip metadata from WebPs?
|
|
|
diskorduser
|
2021-05-26 06:43:24
|
Exiv2 or exiftool?
|
|
|
monad
|
2021-05-26 06:44:19
|
exiftool doesn't write WebP, I'll look at Exiv2
|
|
|
diskorduser
|
2021-05-26 06:44:49
|
https://developers.google.com/speed/webp/docs/webpmux
|
|
|
_wb_
|
|
monad
It's probably to make room for a nicer logo with custom console pixel manipulation.
|
|
2021-05-26 06:45:37
|
<:kekw:808717074305122316>
|
|
2021-05-26 06:45:55
|
Good luck doing that in a portable way! 😅
|
|
2021-05-26 06:47:00
|
I wanted to do ANSI colors originally, turns out that doesn't work in Windows
|
|
|
|
Deleted User
|
2021-05-26 06:49:03
|
ffmpeg somehow manages to color its cmd output on Windows
|
|
|
|
veluca
|
|
_wb_
I wanted to do ANSI colors originally, turns out that doesn't work in Windows
|
|
2021-05-26 06:50:47
|
it does actually 😛
|
|
|
_wb_
|
2021-05-26 06:56:11
|
Well it depends on the version I guess
|
|
2021-05-26 06:57:17
|
Old enough: it works. New enough: it works. In the middle: it doesn't. At least that's what People On The Internet say, I am not going to touch Windows to try it out
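The back-and-forth above matches how Windows consoles actually behave: on recent Windows 10+ a program has to opt in to VT escape handling before ANSI colors render. A minimal sketch using the standard Win32 console API (`enable_ansi` is a hypothetical helper name, not libjxl code):

```python
import os

# Win32 console mode flag that turns on VT escape-sequence handling
# (Windows 10 and later; in-between console versions mangle escapes).
ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004

def enable_ansi() -> bool:
    # Hypothetical helper: on non-Windows platforms terminals handle
    # ANSI escapes natively, so there is nothing to do.
    if os.name != "nt":
        return True
    import ctypes
    kernel32 = ctypes.windll.kernel32
    handle = kernel32.GetStdHandle(-11)  # STD_OUTPUT_HANDLE
    mode = ctypes.c_uint32()
    if not kernel32.GetConsoleMode(handle, ctypes.byref(mode)):
        return False
    new_mode = mode.value | ENABLE_VIRTUAL_TERMINAL_PROCESSING
    return bool(kernel32.SetConsoleMode(handle, new_mode))

if enable_ansi():
    print("\x1b[32mcolored output works\x1b[0m")
```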
|
|
|
|
veluca
|
2021-05-26 06:59:13
|
hahaha
|
|
|
fab
|
2021-05-26 07:01:54
|
|
|
2021-05-26 07:02:01
|
see how jxl looks good
|
|
|
_wb_
|
2021-05-27 03:45:51
|
Good points, could you open a GitHub issue for them? 😛
|
|
2021-05-27 03:52:11
|
The gitlab repo is not going away, it is still the official repo like it always was.
Development was never done there though, it was done in a different, private repo. That is what has changed and is now public, exactly to make it easier for people to not only report issues but also contribute fixes and improvements.
|
|
2021-05-27 03:54:13
|
Migrating the gitlab issues to github would be nice but I doubt that can be easily done. So I think we'll just have to solve all the gitlab issues 😅
|
|
|
|
veluca
|
2021-05-27 06:22:54
|
I think we're slowly going through the gitlab issues and either closing, migrating or asking to migrate them
|
|
2021-05-27 06:23:03
|
then we'll close the gitlab issues
|
|
2021-05-27 06:24:28
|
as for confidential issues: this exists https://github.com/libjxl/libjxl/security/advisories
|
|
2021-05-27 06:25:17
|
we'll update the URLs soon 🙂 (well, not the one of the official website, that needs to wait for an ISO meeting)
|
|
|
_wb_
Good points, could you open a GitHub issue for them? 😛
|
|
2021-05-27 06:25:41
|
but that's a good point 😛
|
|
2021-05-27 06:26:59
|
ah, that's disappointing
|
|
2021-05-27 06:27:37
|
although, even if you could create confidential issues - with development in the open, you'd have the problem of the pull request being public, no?
|
|
2021-05-27 06:31:58
|
ah, fair enough
|
|
2021-05-27 06:32:19
|
I guess we should document that security issues should be sent to some email address
|
|
|
Jake Archibald
|
2021-05-27 08:30:31
|
We looked at progressive image loading in the latest HTTP203. Thanks to folks here for helping me get a JXL decoding demo in there https://www.youtube.com/watch?v=-7k3H2GxE5E&list=PLNYkxOF6rcIAKIQFsNbV0JDws_G_bnNo9&index=1
|
|
2021-05-27 08:30:49
|
(the JXL bit is at 20:50)
|
|
|
|
veluca
|
|
Jake Archibald
We looked at progressive image loading in the latest HTTP203. Thanks to folks here for helping me get a JXL decoding demo in there https://www.youtube.com/watch?v=-7k3H2GxE5E&list=PLNYkxOF6rcIAKIQFsNbV0JDws_G_bnNo9&index=1
|
|
2021-05-27 08:32:55
|
you can't compete with our army of Internet watchers: https://discord.com/channels/794206087879852103/822105409312653333/847145002105700403 😆
|
|
2021-05-27 08:33:01
|
thanks for the video!
|
|
|
Petr
|
2021-05-27 09:47:28
|
Level design of video games is kinda art.
|
|
2021-05-27 09:47:49
|
So does this qualify as <#824000991891554375>? 😉
|
|
2021-05-27 09:48:00
|
|
|
|
_wb_
|
2021-05-27 10:01:41
|
Haha why not
|
|
|
Petr
|
2021-05-27 10:23:29
|
Here's the actual game play: https://youtu.be/cQAuH25hAao
The jxl logo is more clearly visible at the end, after crushing almost all the stones.
|
|
|
Jake Archibald
|
2021-05-27 01:32:03
|
<@!179701849576833024> hah! Looks like you folks found it pretty soon after it came out. Nice!
|
|
|
|
Deleted User
|
|
Jake Archibald
<@!179701849576833024> hah! Looks like you folks found it pretty soon after it came out. Nice!
|
|
2021-05-27 01:35:13
|
WE ARE ~~ON~~ SPEED
|
|
|
Fox Wizard
|
2021-05-27 02:10:21
|
<a:FurretSpeed:832307660035850310>
|
|
|
diskorduser
|
2021-05-27 02:38:18
|
left: jxl d1 s7, right: source png. I see banding artifacts. Is that normal?
|
|
2021-05-27 02:42:24
|
source png is a jpeg resized to 25% with imagemagick.
|
|
|
_wb_
|
2021-05-27 02:46:26
|
Try decoding to a 16-bit png. `djxl --bits_per_sample 16`
|
|
|
improver
|
2021-05-27 02:46:49
|
also resizing may mess up dithering if it was in source
|
|
|
_wb_
|
2021-05-27 02:48:14
|
wow that banding is extreme though
|
|
2021-05-27 02:48:21
|
can you share the source png?
|
|
|
diskorduser
|
|
_wb_
Try decoding to a 16-bit png. `djxl --bits_per_sample 16`
|
|
2021-05-27 02:48:39
|
this fixed it. now looks fine.
|
|
|
_wb_
|
2021-05-27 02:49:21
|
yes but I'm still curious how a default 8-bit png can look that ugly
|
|
2021-05-27 02:50:01
|
maybe ProPhoto or some other icc profile like that?
|
|
|
diskorduser
|
2021-05-27 02:51:15
|
icc:description: sRGB IEC61966-2.1
|
|
|
_wb_
|
2021-05-27 02:55:26
|
this banding is strange and much more than just quantization to 8-bit
|
|
2021-05-27 02:56:03
|
can you share the source png so I can try to see what is going on?
|
|
|
|
veluca
|
2021-05-27 03:10:11
|
that's some really bad banding...
|
|
|
diskorduser
|
2021-05-27 03:22:54
|
probably there's too much noise, I think, but those are demosaicing artifacts.
|
|
|
_wb_
|
2021-05-27 03:28:15
|
as I suspected, that ICC profile is not actually sRGB
|
|
2021-05-27 03:29:53
|
or something weird is going on with it at least
|
|
|
diskorduser
|
2021-05-27 03:36:25
|
I think sRGB IEC61966-2.1 is not compatible.
|
|
2021-05-27 03:37:04
|
It is embedded from camera app. (google camera)
|
|
2021-05-27 03:38:59
|
I think I should stick to display p3 or dci p3 when taking photos. (On my phone)
|
|
2021-05-27 03:48:28
|
Converting sRGB IEC61966-2.1 to sRGB_v4_ICC_preference.icc fixed the banding.
|
|
2021-05-27 03:49:00
|
I downloaded the icc from color.org. The 61966 icc works fine in other formats (jpeg, PNG, webp)
|
|
|
_wb_
|
2021-05-27 04:01:12
|
it ends up getting encoded using sRGB primaries with a linear transfer curve
|
|
2021-05-27 04:02:17
|
../lib/extras/codec_png.cc:87: Unknown type in 'Raw format type' text chunk: icc: 596 bytes
|
|
|
diskorduser
|
2021-05-27 04:02:55
|
But I have this problem on source jpeg too. Jpg from camera app
|
|
|
_wb_
|
2021-05-27 04:03:44
|
yes somehow something is going wrong causing things to get encoded using linear sRGB
|
|
2021-05-27 04:04:07
|
which is a problem when reconstructing to 8-bit
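A rough way to see why a linear transfer curve bands at 8 bits while sRGB-encoded data doesn't (a self-contained illustration, not libjxl code): count how many 8-bit codes cover the darkest 5% of linear light under each encoding.

```python
def srgb_encode(x: float) -> float:
    # sRGB transfer function (IEC 61966-2.1), linear light -> encoded value.
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

# 8-bit codes available for the darkest 5% of linear light.
samples = [i / 1000 * 0.05 for i in range(1001)]
linear_codes = {round(x * 255) for x in samples}
srgb_codes = {round(srgb_encode(x) * 255) for x in samples}

# A linear curve spends far fewer codes on the shadows than sRGB does,
# so an image stored with a linear transfer curve shows coarse bands in
# dark areas once it is rounded to 8 bits.
print(len(linear_codes), len(srgb_codes))
```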
|
|
|
diskorduser
|
2021-05-27 04:32:09
|
Right now I find Google camera embeds icc profile. Other camera apps don't. People with pixel phones will be affected after android jxl adoption I think.
|
|
|
spider-mario
|
2021-05-28 12:29:00
|
should be reasonable, we can simply close if we can reproduce with 0.3.7 and not head
|
|
2021-05-28 12:29:14
|
as long as it’s not too difficult to reproduce
|
|
2021-05-28 12:34:19
|
```
-rw-r--r-- 1 47431412 28 mai 14:33 0.3.7-s3.jxl
-rw-r--r-- 1 7920251 28 mai 14:33 0.3.7-s4.jxl
-rw-r--r-- 1 7464122 28 mai 14:33 0.3.7-s5.jxl
-rw-r--r-- 1 7449433 28 mai 14:33 0.3.7-s6.jxl
-rw-r--r-- 1 7154828 28 mai 14:33 0.3.7-s7.jxl
```
|
|
2021-05-28 12:34:23
|
are those the sizes you get?
|
|
2021-05-28 12:39:59
|
oh, indeed I forgot to check the size of the original file 😅
|
|
|
_wb_
|
2021-05-28 12:51:35
|
probably -g 3 could help a bit
|
|
2021-05-28 12:56:32
|
recompressing GIF losslessly is kind of tricky to do in a consistently-better way, since some GIF encoders are quite clever in how they do the color reduction and associated dithering in a way that compresses well in GIF but might be hard for cjxl to do a good job on
|
|
2021-05-28 12:59:03
|
yeah no -s 3 is going to be horrible on GIFs no matter what you do, it uses the predictor that works well on photos but for these palette indexed images it will be horrible
|
|
2021-05-28 01:06:58
|
hm strangely adding -q 100 in cjxl changes what it does, I wonder what's up with that
|
|
2021-05-28 01:10:26
|
yes, it is
|
|
2021-05-28 01:10:36
|
with gif input it does lossless by default
|
|
2021-05-28 01:10:50
|
but adding -q 100 still makes it encode it in a different way
|
|
2021-05-28 01:11:55
|
```
5897751 slime_7cover.gif
36884914 slime_7cover.gif.jxls1
29426864 slime_7cover.gif.jxls1q100
31391478 slime_7cover.gif.jxls2
23240224 slime_7cover.gif.jxls2q100
47431412 slime_7cover.gif.jxls3
26518874 slime_7cover.gif.jxls3q100
7156799 slime_7cover.gif.jxls7
7184973 slime_7cover.gif.jxls7q100
5915585 slime_7cover.gif.jxls9g3I1
5934902 slime_7cover.gif.jxls9g3patches0
5911795 slime_7cover.gif.jxls9g3patches0I1
5442127 slime_7cover.gif.jxls9g3patches0I1P1q100
```
|
|
2021-05-28 01:13:52
|
for non-photo stuff like this (which includes all GIFs because they are by definition palettized), -s 2 works better than -s 3
|
|
2021-05-28 01:14:12
|
(still poorly, but better)
|
|
2021-05-28 01:14:47
|
Now I wonder why the -q 100 makes a difference, <@!179701849576833024> any idea?
|
|
2021-05-28 01:19:13
|
5046479 bytes for just `-s 9 -q 100`
|
|
2021-05-28 01:20:07
|
which should be the same thing as `-s 9` on GIF input but it somehow isn't...
|
|
2021-05-28 01:24:02
|
-q 100 is mathematically pixel exact lossless, not GIF-bitstream-reconstructible lossless (only for JPEG we have such a thing)
|
|
2021-05-28 01:25:54
|
how to verify that it is lossless: one way is to djxl to frames and compare each frame with what you get by doing a coalescing decode of the gif (`convert bla.gif -coalesce frame.png` iirc)
|
|
|
|
veluca
|
|
_wb_
Now I wonder why the -q 100 makes a difference, <@!179701849576833024> any idea?
|
|
2021-05-28 01:38:52
|
what 😮
|
|
|
raysar
|
2021-05-28 01:46:09
|
Yes, but gif is a lossy format against the source file. If you give us the original video file, jxl will produce better quality and file size :p
You need to encode the gif in lossy mode. (with and without --patches)
|
|
2021-05-28 02:05:17
|
Ok so with this gif all DCT options are shit 😄 and lossy modular too 😄
|
|
2021-05-28 02:13:11
|
even cjxl.exe .\slime_7cover.gif ./gif.jxl -s 8 -m -q 30
6.5 MB 😄
|
|
2021-05-28 02:16:26
|
cjxl.exe .\slime_7cover.gif ./gif.jxl -d 14 --progressive_dc=0
5.8 MB 😄
|
|
2021-05-28 02:19:22
|
cjxl.exe .\slime_7cover.gif ./gif.jxl -d 14 --progressive_dc=0 --dots=0 --patches=0
5.1 MB! Yes!
|
|
2021-05-28 02:20:12
|
It's an awful result 😄
|
|
2021-05-28 02:21:03
|
gif is so optimised for this kind of picture, so we should not convert it
|
|
|
improver
|
2021-05-28 02:21:05
|
compresses pretty well here with modular plus some extra flags
|
|
|
raysar
|
2021-05-28 02:21:34
|
apng conversion is even worse 😄
|
|
|
|
Deleted User
|
2021-05-28 02:21:47
|
Try using lossy Modular (`-Q`)
|
|
|
improver
|
2021-05-28 02:22:07
|
doesn't even need lossy to compress to quite lower size than original gif
|
|
2021-05-28 02:23:00
|
just `-m -q 100 -s 9 -E 3 -I 1`
|
|
2021-05-28 02:23:10
|
basically the usual "stronkest lossless" params
|
|
2021-05-28 02:23:25
|
also git version
|
|
|
_wb_
|
2021-05-28 02:24:09
|
for this type of images (palette, sharp lines, text), lossy compression is not very useful and will often result in bigger files than lossless
|
|
|
improver
|
2021-05-28 02:24:49
|
it depends on your use case
|
|
2021-05-28 02:25:44
|
you don't really know how much it took to compress the original if you haven't compressed it yourself
|
|
2021-05-28 02:26:20
|
also it probably lost some details during the compression process, so it's not quite fair to say that
|
|
|
raysar
|
2021-05-28 02:26:21
|
apng with imagemagick takes 7.8 MB 😄
|
|
2021-05-28 02:28:49
|
but if I convert the gif into multiple PNGs it's 5.5 MB, but then it's impossible to watch 😄
|
|
|
improver
|
2021-05-28 02:30:22
|
```
% wc -c slime_7cover.gif*
5897751 slime_7cover.gif
5014182 slime_7cover.gif.jxl
```
|
|
|
raysar
|
2021-05-28 02:37:14
|
it's funny to see inside a GIF 😄
|
|
2021-05-28 02:38:18
|
But we understand why it's so hard to compress it.
|
|
|
improver
|
2021-05-28 02:44:09
|
tbh iirc cjxl currently doesn't attempt that sort of blending
|
|
2021-05-28 02:44:42
|
it just renders every frame independently and compresses each separately
|
|
2021-05-28 02:44:55
|
so there are a lot of optimizations possible there
|
|
2021-05-28 02:47:03
|
which aren't there not because they're not possible to do, but because there wasn't much dev attention spent on that
|
|
2021-05-28 02:49:47
|
I think that's individual gif frames, which are normally intended to be blended with previous frames, but in this case, applied to gray background for visualization purposes.
I'd sorta like to know how to do such visualization as well
|
|
|
|
veluca
|
2021-05-28 02:54:29
|
yeah, trying to compress those delta frames with lossy is not going to do anything useful
|
|
|
improver
|
2021-05-28 02:56:53
|
but i think it doesn't compress delta frames at the moment at all, no?
|
|
|
|
veluca
|
2021-05-28 02:57:11
|
for gif input it does
|
|
2021-05-28 02:57:29
|
or it should at least
|
|
|
monad
|
2021-05-28 02:57:29
|
```
5897751 slime_7cover.gif
5897505 slime_7cover_gifsicle.gif
```
|
|
|
improver
|
|
veluca
for gif input it does
|
|
2021-05-28 02:58:20
|
then I guess my info is sorta outdated :P
|
|
|
|
veluca
|
2021-05-28 02:58:58
|
still, optimized-for-gif files are likely hard to recompress with other stuff
|
|
|
raysar
|
2021-05-28 03:12:41
|
it's easy: `magick.exe .\slime_7cover.gif .\slime_7cover.png`
It extracts all gif frames 😄 then I record my screen
|
|
|
monad
|
2021-05-28 03:22:51
|
Now do it with the mp4.
|
|
|
raysar
|
2021-05-28 04:07:07
|
3.15 MB in AV1 video, but that's a pretty high CRF of 35
|
|
|
monad
|
2021-05-29 01:23:18
|
just `--no-conserve-memory --no-ignore-errors`
|
|
2021-05-29 01:30:30
|
My Gifsicle 1.91 doesn't seem to have an `-s` (in any case, I didn't use one). `--no-ignore-errors`: "Exit with status 1 when encountering a very erroneous GIF. Default is to muddle on."
|
|
2021-05-29 01:37:13
|
Uses about 10 MB memory on this image.
|
|
|
Dirk
|
2021-05-29 07:39:42
|
The latest release of ImageMagick supports setting the effort with `-define jxl:effort=<number>`
|
|
|
_wb_
|
2021-05-29 08:09:14
|
Might be the same thing that causes gif input to do something different with or without -q 100. Something funny is going on in the encode parameter passing I guess
|
|
2021-05-29 08:13:00
|
Yes please
|
|
2021-05-29 08:18:08
|
Interesting
|
|
|
|
veluca
|
2021-05-29 01:13:43
|
good question, maybe different header?
|
|
|
improver
|
2021-05-29 01:14:05
|
comparing the files themselves instead of the pixels they hold is a bad idea
|
|
|
|
veluca
|
2021-05-29 01:14:44
|
yep, there's many different ways to get the same ppm...
|
|
|
_wb_
|
2021-05-29 01:15:46
|
Any whitespace can separate the header fields in ppm, and you can add comments
|
|
|
|
veluca
|
2021-05-29 01:20:30
|
could be a change in bit depth
|
|
2021-05-29 01:20:42
|
although the error looks a tad too big for that
|
|
|
_wb_
|
2021-05-29 01:20:53
|
This looks like a bug
|
|
2021-05-29 01:21:22
|
How do they compare to original?
|
|
2021-05-29 01:21:52
|
And what do you get if you first djxl them to png/ppm?
|
|
2021-05-29 01:29:41
|
I mean what does compare -metric pae say when comparing cjxl's jxl to the original image
|
|
2021-05-29 01:29:51
|
And IM's jxl
|
|
|
improver
|
2021-05-29 01:41:49
|
there's someone doing IM jxl support here in discord iirc...
|
|
|
_wb_
|
2021-05-29 01:46:47
|
Yes, <@693227044061839360>
|
|
2021-05-29 02:24:49
|
Could be it is somehow doing things in XYB or something
|
|
2021-05-29 02:25:13
|
Encode api has not been well tested, and cjxl doesn't use it...
|
|
|
fab
|
2021-05-29 02:50:02
|
how good noto sans lisu is?
|
|
2021-05-29 02:50:05
|
best font ever
|
|
2021-05-29 02:58:08
|
|
|
|
diskorduser
|
2021-05-29 03:00:34
|
Noto sans lisu very relevant on topic
|
|
2021-05-29 06:18:24
|
Does Android chrome canary support jxl?
|
|
|
|
veluca
|
2021-05-29 06:22:00
|
not unless you build it yourself
|
|
|
_wb_
|
2021-05-29 06:26:06
|
Android chrome doesn't like extra package size for only an experimental thing, I guess
|
|
|
improver
|
2021-05-29 07:44:57
|
works on firefox nighty android though
|
|
|
|
veluca
|
2021-05-29 07:47:14
|
we'll ask what we should do to enable it on canary builds on android 😛
|
|
2021-05-29 07:47:24
|
well, not just canary
|
|
|
|
Deleted User
|
2021-06-01 02:42:50
|
<@!794205442175402004> I'm not putting this question in <#805176455658733570> because it'll be kinda related to JPEG XL. How does lossy FLIF work?
|
|
|
_wb_
|
2021-06-01 02:43:20
|
quite different from lossy FUIF, actually
|
|
2021-06-01 02:45:56
|
lossy FLIF is basically zeroing least significant bits of predictor residuals in the weird interlacing that FLIF does - it has the neat property of idempotence which is great for generation loss (as long as you don't crop/move pixels, that is)
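A toy version of that scheme (assumptions: a 1-D signal and a plain left-neighbor predictor, where real FLIF uses 2-D context modelling and its interlacing) shows the idempotence property: re-encoding the decoded output reproduces it exactly, because quantized residuals are already fixed points of the quantizer.

```python
def zero_low_bits(r: int, bits: int = 2) -> int:
    # Lossy step: zero the least significant bits of a residual,
    # symmetrically around zero so the sign survives.
    mask = ~((1 << bits) - 1)
    return r & mask if r >= 0 else -((-r) & mask)

def encode_decode(pixels, bits: int = 2):
    # Left-neighbor predictor; both encoder and decoder predict from the
    # *reconstructed* previous pixel, so they never drift apart.
    out, prev = [], 0
    for p in pixels:
        res = zero_low_bits(p - prev, bits)
        prev += res
        out.append(prev)
    return out

once = encode_decode([10, 13, 12, 200, 198, 5])
twice = encode_decode(once)
assert once == twice  # idempotent: re-compressing adds no generation loss
```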
|
|
|
|
Deleted User
|
2021-06-01 02:50:15
|
Is it possible to do lossy FLIF without interlacing?
|
|
|
_wb_
|
2021-06-01 02:51:39
|
lossy FUIF (and lossy modular jxl) is quantizing Squeeze residuals (more quantization for higher-frequency residuals), which is a somewhat stronger way to do lossy, and also more progressive (partial bitstreams with Squeeze are actual downscaled versions of the image, while partial bitstreams with FLIF's interlacing are basically NN-subsampled versions of the image, with all the aliasing artifacts that has)
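The core of Squeeze is an invertible average/difference transform; quantize (or drop) the differences and you get a lossy but still decodable stream whose averages form a genuine downscale. A bare-bones 1-D sketch (the real libjxl Squeeze additionally predicts each difference from neighbouring averages, which is what avoids the aliasing mentioned above):

```python
def squeeze(pixels):
    # One Squeeze-like step: pairs -> (rounded averages, differences).
    # Assumes an even-length input for brevity.
    avgs, diffs = [], []
    for i in range(0, len(pixels), 2):
        a, b = pixels[i], pixels[i + 1]
        avgs.append((a + b) // 2)
        diffs.append(a - b)
    return avgs, diffs

def unsqueeze(avgs, diffs):
    # Exact inverse: the rounding lost in (a + b) // 2 is recovered
    # from the parity of the difference.
    out = []
    for avg, d in zip(avgs, diffs):
        a = avg + (d + 1) // 2
        out.extend([a, a - d])
    return out

pixels = [10, 7, 5, 5, 4, 9]
avgs, diffs = squeeze(pixels)
assert unsqueeze(avgs, diffs) == pixels  # lossless round trip
# Dropping the diffs entirely leaves a blocky preview built from averages.
```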
|
|
2021-06-01 02:53:06
|
lossy FLIF without interlacing - not as good; you can basically do the same things you can do with PNG then: reduce colors to take advantage of palette, or quantize the predictor residuals to get better entropy coding, but you can't do a large amount of that because the artifacts get too strong
|
|
2021-06-01 02:53:25
|
(with the interlacing you can avoid the artifacts better)
|
|
|
|
Deleted User
|
|
_wb_
lossy FLIF without interlacing - not as good; you can basically do the same things you can do with PNG then: reduce colors to take advantage of palette, or quantize the predictor residuals to get better entropy coding, but you can't do a large amount of that because the artifacts get too strong
|
|
2021-06-01 02:58:43
|
OK, so here's the actual question:
We've seen in <#824000991891554375> that Modular predictors can generate pretty natural-looking, convincing images. How viable would it be to create lossy Modular *without* Squeeze, just with carefully picked lossless predictors and quantizing their residuals?
|
|
2021-06-01 03:00:09
|
(Optionally the encoder could use lossy palette for even better compression, but it's not the main requirement.)
|
|
|
_wb_
|
2021-06-01 03:00:17
|
sounds hard for a practical encoder
|
|
2021-06-01 03:01:05
|
I mean, my first idea would be to make some kind of genetic algorithm or something AI-based, but it would likely be insanely slow
|
|
2021-06-01 03:04:39
|
it's a bit like printing a photo by carefully stuffing a lot of colored paintball capsules in a tennis cannon and then carefully aiming it at the sky hoping that it will rain down in a pattern that will magically look like the photo
|
|
|
|
veluca
|
2021-06-01 03:06:53
|
I actually think there are a few reasonable options
|
|
2021-06-01 03:06:56
|
like what we do for DC
|
|
2021-06-01 03:08:02
|
use a fixed-Gradient predictor, then drop the lowest 2 bits from the residual, except for small values
|
|
2021-06-01 03:08:34
|
or also, use a fixed-gradient predictor, and in leaves that correspond to low errors have a multiplier of 1, in leaves with a high error have a multiplier of 2/4
|
|
2021-06-01 03:08:39
|
(similar, but different)
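A sketch of the two-ingredient idea above (a hypothetical simplification with a crude zero border, not real libjxl edge handling): a fixed clamped-gradient predictor, with residuals kept exact when small and coarsened when large.

```python
def gradient_predict(w: int, n: int, nw: int) -> int:
    # Fixed gradient predictor W + N - NW, clamped to the neighbor range.
    return max(min(w + n - nw, max(w, n)), min(w, n))

def quantize_residual(res: int, threshold: int = 4, bits: int = 2) -> int:
    # Keep small residuals exact; drop the low bits of large ones.
    if abs(res) < threshold:
        return res
    mask = ~((1 << bits) - 1)
    return res & mask if res >= 0 else -((-res) & mask)

def lossy_encode(img):
    # Closed loop: predict from already-reconstructed pixels, so the
    # per-pixel error never exceeds the residual quantization error.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            W = out[y][x - 1] if x else 0  # crude border rule
            N = out[y - 1][x] if y else W
            NW = out[y - 1][x - 1] if x and y else N
            pred = gradient_predict(W, N, NW)
            out[y][x] = pred + quantize_residual(img[y][x] - pred)
    return out

img = [[100, 100, 104, 110], [100, 101, 130, 133]]
rec = lossy_encode(img)
```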
|
|
2021-06-01 03:08:57
|
<@!794205442175402004> want to try playing with that?
|
|
|
_wb_
|
2021-06-01 03:09:58
|
yes, something like that could give good near-lossless results and also be fast
|
|
2021-06-01 03:11:37
|
I meant something that does <#824000991891554375> style cellular automata to produce different but perceptually similar textures of things like clouds, smoke, grass, etc, that would be insanely hard and slow
|
|
|
|
veluca
|
2021-06-01 03:12:37
|
yeah, no
|
|
2021-06-01 03:13:19
|
but I do wonder if a good mix of fixed tree + predictor choice + quantization levels wouldn't give pretty good results
|
|
|
_wb_
|
|
_wb_
I meant something that does <#824000991891554375> style cellular automata to produce different but perceptually similar textures of things like clouds, smoke, grass, etc, that would be insanely hard and slow
|
|
2021-06-01 03:14:09
|
that would be an interesting phd project, to explore that and see what can be done (it's highly nontrivial)
|
|
|
Scope
|
2021-06-01 03:14:25
|
Btw, can the XYB color space give any advantages for lossless/near-lossless, like -q 100 but with XYB or it doesn't matter?
|
|
|
|
veluca
|
2021-06-01 03:14:36
|
for near lossless yes
|
|
2021-06-01 03:14:42
|
for lossless likely no
|
|
|
_wb_
|
2021-06-01 03:15:16
|
for lossless it's not going to help, you need way too many extra bits of precision in XYB to do things losslessly at the RGB level
|
|
2021-06-01 03:17:19
|
(it's not like YCbCr which basically just trims one bit from R and B so if you add two bits you're probably close to OK; the transfer curve is different in XYB so to make it mathematically lossless everywhere requires a lot of extra precision)
|
|
2021-06-01 03:17:43
|
for near lossless it's likely a good idea
|
|
|
|
Deleted User
|
2021-06-01 04:12:42
|
Thanks for your answers, <@!794205442175402004> and <@!179701849576833024>. I've got another question, this time related entirely to JPEG XL.
IIRC in VarDCT mode DC is compressed with Modular and progressive DC is made by applying Squeeze to that DC frame, right?
|
|
|
|
veluca
|
|
|
Deleted User
|
|
veluca
yep
|
|
2021-06-01 04:26:13
|
So is it possible to do lossy Modular on the DC frame? If it is, how do you think it'd affect visual quality and compression performance?
|
|
|
|
veluca
|
2021-06-01 04:26:28
|
we do that
|
|
2021-06-01 04:26:35
|
with --progressive_dc
|
|
2021-06-01 04:26:46
|
and by default at low enough qualities
|
|
2021-06-01 04:27:02
|
the selection of lossy modular quality might not be the most well tuned thing ever though xD
|
|
2021-06-01 04:27:27
|
(read: I eyeballed it)
|
|
|
|
Deleted User
|
|
veluca
and by default at low enough qualities
|
|
2021-06-01 04:29:50
|
Are we talking about the same thing? I know that below `-q 7` Modular is used exclusively, but are you using lossy Modular in tandem with VarDCT on its DC frames?
|
|
|
|
veluca
|
2021-06-01 04:30:29
|
yep
|
|
2021-06-01 04:30:51
|
and above -d 3 or so, progressive dc is enabled by default
|
|
|
|
Deleted User
|
|
veluca
and above -d 3 or so, progressive dc is enabled by default
|
|
2021-06-01 04:31:50
|
Nice to know
|
|
|
|
veluca
|
2021-06-01 04:32:19
|
maybe 4.5
|
|
2021-06-01 04:32:26
|
anyway above some point 😄
|
|
|
|
Deleted User
|
2021-06-01 04:32:46
|
One last question (for now): lossy alpha. Is it allowed by JPEG XL?
|
|
|
Scope
|
2021-06-01 04:32:50
|
https://discord.com/channels/794206087879852103/803645746661425173/817734220167774239
|
|
|
|
Deleted User
|
|
Scope
https://discord.com/channels/794206087879852103/803645746661425173/817734220167774239
|
|
2021-06-01 04:33:31
|
Damn, you've got good memory...
|
|
|
|
veluca
|
|
Scope
https://discord.com/channels/794206087879852103/803645746661425173/817734220167774239
|
|
2021-06-01 04:34:08
|
eh, I chose that constant 😛 I am pretty sure both me and Jyrki are eyeballing it by memory in both messages xD
|
|
|
lithium
|
|
One last question (for now): lossy alpha. Is it allowed by JPEG XL?
|
|
2021-06-01 04:34:21
|
https://github.com/libjxl/libjxl/issues/76
|
|
|
Scope
|
|
Damn, you've got good memory...
|
|
2021-06-01 04:34:25
|
Because this is one of the things that improved the quality of art content at low bpp
|
|
|
|
veluca
|
2021-06-01 04:34:39
|
though us both mentioning 4.5 makes it somewhat more likely
|
|
2021-06-01 04:34:59
|
anyway, alpha is encoded with modular, so anything modular can do, it can do it to alpha channels
|
|
|
|
Deleted User
|
|
lithium
https://github.com/libjxl/libjxl/issues/76
|
|
2021-06-01 04:38:23
|
I wasn't talking about *lossless alpha* lossily influencing invisible pixels, but about *lossily compressing alpha itself* no matter if you change invisible pixels.
|
|
|
|
veluca
|
|
I wasn't talking about *lossless alpha* lossily influencing invisible pixels, but about *lossily compressing alpha itself* no matter if you change invisible pixels.
|
|
2021-06-01 04:39:37
|
see my previous msg 😄
|
|
|
|
Deleted User
|
|
veluca
see my previous msg 😄
|
|
2021-06-01 04:43:15
|
I was replying to <@!461421345302118401>.
Don't worry, I've seen your message:
https://discord.com/channels/794206087879852103/794206087879852106/849325168119250995
|
|
|
_wb_
|
2021-06-01 04:45:49
|
We are already doing slightly lossy alpha in lossy mode, iirc
|
|
2021-06-01 04:47:16
|
At least I remember I did add that at some point, pretty mild but slightly lossy alpha. Not sure if cjxl still does it by default
|
|
2021-06-01 04:48:59
|
Lossy squeezed alpha shouldn't suffer from ringing at least (the main concern when doing lossy alpha: you don't want ripples of visible pixels near hard edges in the alpha, which is what DCT is likely to produce)
|
|
2021-06-01 04:49:27
|
It might get a bit blocky though
|
|
2021-06-01 04:49:36
|
Or rather pixelated
|
|
2021-06-01 04:49:51
|
If you crank up the loss in alpha that is
|
|
2021-06-01 04:51:56
|
The mild settings it uses now should be fine and help a lot for typical alpha masks used on photo+alpha, where it's mostly a binary mask with feathering.
|
|
2021-06-01 04:59:17
|
It can probably be tuned though, maybe doing alpha a bit more lossily than what it is doing now. Need a good representative corpus of images that have alpha and where lossy makes sense though, which is something that is a bit hard to find. Most images with alpha you find in the wild are those where lossy doesn't really make that much sense, things like logos, icons, ui elements, game graphics. Actual photographic images with alpha are not that common atm in the wild because until recently, the only universally supported way to deliver images with alpha was PNG. People rather use JPEGs with a background color matching the website style than to serve huge PNGs...
|
|
|
lithium
|
2021-06-01 05:07:50
|
I also have some questions I want to ask, is that OK?
|
|
2021-06-01 05:14:15
|
> previously in guetzli we had d1.0 = quality 94 or so --- if today quality 90, we may have slipped a bit in quality
About this comment: why does jxl d 1.0 map to quality 90?
Will the lower quality (90) affect adaptive quantization and the butteraugli assessment?
https://discord.com/channels/794206087879852103/794206170445119489/832524993966899210
|
|
|
_wb_
|
2021-06-01 05:20:35
|
I think guetzli may aim a bit too much at maxnorm BA optimization, I dunno though
|
|
2021-06-01 05:21:25
|
In jxl we usually looked at bpp*pnorm to judge if a change is good or not
|
|
|
lithium
|
2021-06-01 05:26:32
|
non-photographic images have a lot of entropy; is there a chance adaptive quantization will compress high-entropy areas too aggressively (choosing a lower quality)?
(with libjpeg, some non-photographic images need q99 4:4:4 to keep quality)
|
|
|
_wb_
|
2021-06-01 05:41:03
|
Well if things go wrong, it can either be the metric not accurately measuring human perception, or the encoder not accurately optimizing for the metric.
|
|
|
lithium
|
2021-06-01 05:43:57
|
I'm using butteraugli and butteraugli-jxl, but sometimes they give different assessments of the same compressed image.
|
|
2021-06-01 05:48:18
|
Maybe cjxl's target distance has some limit on the maximum distance?
(like target distance -d 1.0 -s 8, maximum distance -d 1.5, with the result landing around d 0.9 ~ d 1.55)
|
|
2021-06-02 09:32:50
|
https://ai.googleblog.com/2021/06/a-browsable-petascale-reconstruction-of.html
|
|
2021-06-02 09:33:56
|
I can't understand this... 😢
|
|
2021-06-02 09:34:12
|
https://1.bp.blogspot.com/-TpKb0Ycw72Q/YLaHKBDDERI/AAAAAAAAHrM/Om5ioZmrnVYgVKaKRu4d3LkdSn-GrNphwCLcBGAsYHQ/s1999/image5.png
|
|
|
Artoria2e5
|
|
lithium
https://1.bp.blogspot.com/-TpKb0Ycw72Q/YLaHKBDDERI/AAAAAAAAHrM/Om5ioZmrnVYgVKaKRu4d3LkdSn-GrNphwCLcBGAsYHQ/s1999/image5.png
|
|
2021-06-02 11:14:02
|
it's comparing compression ratio to the fidelity of... neural connections preserved i think
|
|
2021-06-02 11:16:23
|
denoising making compression easier isn't new -- both jxl and avif have some noise model to basically take it out during compression and add something like it back in during decompression. but i guess brain pics can use some specialized magical denoiser
|
|
|
lithium
|
|
Artoria2e5
denoising making compression easier isn't new -- both jxl and avif has some noise model to basically take it out during compression and add someting like it back in during decompression. but i guess brain pics can use some specialized magical denoiser
|
|
2021-06-02 11:20:07
|
I understand, thank you 🙂
|
|
|
Crixis
|
2021-06-02 11:25:12
|
Cool, avif and jpeg xl are the only choices in recent studies
|
|
|
_wb_
|
2021-06-02 12:39:56
|
HEIC combines all the disadvantages of AVIF with some of its advantages, while being a patent-encumbered mess that nobody except Apple can actually deploy.
|
|
|
|
Deleted User
|
2021-06-02 05:58:40
|
Solution for H.265 images: drop HEIC and use Bellard's BPG instead 🙂
https://bellard.org/bpg/
|
|
|
_wb_
|
2021-06-02 06:00:37
|
That is a better container, yes.
|
|
2021-06-02 06:00:47
|
Still patent-encumbered mess though.
|
|
|
|
Deleted User
|
2021-06-02 06:01:51
|
At least it's one "patent mess" less...
|
|
2021-06-02 06:02:36
|
Now you only have to deal with HEVC's patents instead of HEIC's *and* HEVC's...
|
|
|
_wb_
|
2021-06-02 06:04:12
|
Afaik only Nokia claims patents on HEIF. I don't know how many they claim to be HEIF related but I cannot imagine it being more than a handful.
|
|
2021-06-02 06:04:54
|
HEVC has _thousands_ of patents, held by many different companies, some in patent pools, others not
|
|
2021-06-02 06:14:14
|
All of this applies to BPG too: https://en.wikipedia.org/wiki/High_Efficiency_Video_Coding#Patent_licensing
|
|
|
BlueSwordM
|
2021-06-02 07:50:08
|
I've got a great idea for a new container for AVIF <:megapog:816773962884972565>
|
|
2021-06-02 07:50:35
|
.j(av1)xl
|
|
|
lithium
|
2021-06-02 07:54:46
|
.javxl ?
|
|
|
improver
|
|
_wb_
|
2021-06-02 07:55:16
|
https://fr.wikipedia.org/wiki/Eau_de_Javel
|
|
2021-06-02 07:55:26
|
.javl
|
|
2021-06-02 07:56:48
|
Bleaches your images nice and clean, all subtle textures gone! 😂
|
|
|
Scope
|
2021-06-02 08:15:02
|
jax1v
|
|
|
_wb_
|
2021-06-03 10:57:32
|
yep
|
|
2021-06-03 10:57:50
|
I'm still waiting for the new CNET article to get published
|
|
2021-06-03 11:27:19
|
technically yes
|
|
2021-06-03 11:27:45
|
but that will only happen if camera vendors and Adobe want it to happen, is my guess
|
|
2021-06-03 11:33:07
|
that would be a useful convention, yes
|
|
2021-06-03 11:34:01
|
the technical capabilities of the codec/file format are one thing, getting applications to implement it is something completely different
|
|
|
|
veluca
|
2021-06-03 05:37:20
|
some banding for you 😛 (but only if it's not 100% rescale...)
|
|
|
_wb_
|
2021-06-03 05:43:06
|
|
|
2021-06-03 05:43:34
|
That's what my phone does to that when making an icon
|
|
2021-06-03 05:43:38
|
<:banding:804346788982030337>
|
|
|
Jim
|
2021-06-03 05:46:35
|
<:banding:804346788982030337>
|
|
|
diskorduser
|
2021-06-03 07:16:09
|
I don't see banding on Google photos on all scales. It shows banding only when loading the image.
|
|
|
|
veluca
|
2021-06-03 08:47:29
|
fun thing is, it's just a gradient
|
|
2021-06-03 08:47:52
|
each band is two nearby colors in 8-bit sRGB I believe
|
|
|
Scope
|
2021-06-03 10:11:21
|
<@!794205442175402004> <https://www.twitch.tv/videos/1036703639?t=19h46m7s>
|
|
|
_wb_
|
2021-06-04 07:10:19
|
interesting - gonna watch that
|
|
|
Scope
|
2021-06-04 03:13:47
|
https://github.com/libjxl/libjxl/pull/106
|
|
2021-06-04 03:13:55
|
🤔
https://twitter.com/9550pro/status/1397812947049533445
|
|
2021-06-04 03:14:26
|
and
> Intel Alder Lake's Hybrid big.LITTLE design means that each chip has a number of Golden Cove traditional cores and a number of Gracemont Atom cores as well.
|
|
|
_wb_
|
2021-06-04 03:19:29
|
yep, heterogeneous cores are becoming ubiquitous
|
|
2021-06-04 03:20:44
|
some phones have 3 different cpu types in them now - e.g. 1 really fast one, 3 fast ones, and 4 slow ones
|
|
2021-06-04 03:21:59
|
which is cool but it requires somewhat trickier scheduling than when you can assume all cores are the same
|
|
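The scheduling wrinkle can be sketched with a generic dynamic-dispatch pattern (an illustration, not libjxl's actual thread pool): instead of statically splitting the work evenly, workers claim small chunks from a shared atomic counter, so a fast core simply ends up processing more chunks than a slow one, and nobody waits on a straggler holding a big fixed slice:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

/// Dynamic scheduling sketch: each worker repeatedly claims the next chunk
/// from a shared atomic counter until the input is exhausted. Fast cores
/// naturally process more chunks than slow ones -- no up-front split needed.
pub fn parallel_sum(data: &[u64], workers: usize, chunk: usize) -> u64 {
    let next = AtomicUsize::new(0);
    let total = AtomicUsize::new(0);
    thread::scope(|s| {
        for _ in 0..workers {
            s.spawn(|| loop {
                // Claim the next chunk of `chunk` elements.
                let start = next.fetch_add(chunk, Ordering::Relaxed);
                if start >= data.len() {
                    break;
                }
                let end = (start + chunk).min(data.len());
                let partial: u64 = data[start..end].iter().sum();
                total.fetch_add(partial as usize, Ordering::Relaxed);
            });
        }
    });
    total.load(Ordering::Relaxed) as u64
}
```

With a static even split, the slow cores would determine the total runtime; with chunk claiming, the load balances itself at the cost of a little atomic contention.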
|
sn99
|
2021-06-04 03:29:50
|
Is the GitHub one the mirror, or the GitLab one?
|
|
|
_wb_
|
2021-06-04 03:31:58
|
Github is where stuff happens
|
|
2021-06-04 03:32:10
|
Gitlab is the mirror
|
|
|
|
veluca
|
|
Scope
https://github.com/libjxl/libjxl/pull/106
|
|
2021-06-04 03:32:25
|
you people are stalkers! 😛
|
|
|
_wb_
|
2021-06-04 05:21:11
|
https://twitter.com/SteveStuWill/status/1400859899702038530?s=19
|
|
|
diskorduser
|
|
Scope
https://github.com/libjxl/libjxl/pull/106
|
|
2021-06-04 05:38:56
|
What does it do? Improves performance?
|
|
|
|
Deleted User
|
|
_wb_
https://twitter.com/SteveStuWill/status/1400859899702038530?s=19
|
|
2021-06-04 05:39:15
|
We live in a ~~society~~ **simulation**
|
|
2021-06-04 05:40:02
|
https://tenor.com/view/joaquin-phoenix-joker-smile-sad-smirk-gif-16148806
|
|
|
Scope
|
|
diskorduser
What does it do? Improves performance?
|
|
2021-06-04 05:45:08
|
Better load balancing for combinations where some cores are faster and others slower (now it is mostly mobile processors, but the next generations of desktop processors are also planned to be similar)
|
|
|
|
veluca
|
2021-06-04 05:59:16
|
also you can decide # threads afterwards, say based on image dimensions
|
|
|
lithium
|
2021-06-04 08:25:35
|
I think this comment can explain why current jxl vardct has some smoothing issues for my use case.
> From Jyrki Alakuijala
> Two rather speculative considerations on this:
>
> Smoothing the input is a double edged sword for JPEG XL. If there are smooth areas in an image, the current encoder of JPEG XL considers that there is no visual masking there and can end up putting more bits there than without smoothing. Even a small smoothed area (such as three near-by 4x4 pixel areas that are smooth) within the same integral transform can reduce quantization opportunities significantly in the current encoder. Likely when and if we would like to introduce smoothing as a way to achieve higher compression ratios, we should reduce the effect of visual masking in the respective areas. (Sup. Fig. 1 supports this hypothesis -- JPEG XL is not benefiting from the smoothing the same way AVIF is in compression ratio, and the most likely culprit is the modeling of visual masking.)
>
> AVIF's filtering is more compatible with producing smooth geometry (such as photos of plastic or metallic objects and outline drawings), while JPEG XL's filtering is more compatible with preserving low contrast textures (photography of real life objects, textures like clouds, skin, marble, wood). In video you don't necessarily need to get the low contrast textures right in the keyframe -- if the scenery is static, consequent P or B frames can fill in the low contrast texturing rather quickly and maintain a more stable use of bandwidth for video.
>
> https://encode.su/threads/3397-JPEG-XL-vs-AVIF?p=69868&viewfull=1#post69868
|
|
|
Scope
|
2021-06-04 10:35:31
|
https://twitter.com/ID_AA_Carmack/status/1400930510671601666
|
|
|
lithium
|
2021-06-05 01:12:29
|
I have a vardct question, could someone teach me please?
How does visual masking work in jxl vardct?
|
|
|
|
veluca
|
2021-06-05 01:14:10
|
#nobodyknows
|
|
2021-06-05 01:14:13
|
(kidding)
|
|
2021-06-05 01:14:51
|
... but I don't know really
|
|
2021-06-05 01:15:16
|
probably only <@!532010383041363969> truly does
|
|
|
lithium
|
2021-06-05 01:16:21
|
I'm trying to understand this Jyrki Alakuijala comment, but visual masking is beyond my knowledge... 😢
https://discord.com/channels/794206087879852103/794206087879852106/850470364609249341
|
|
|
|
veluca
|
2021-06-05 01:18:13
|
well, I do know the general idea - visual masking tries to estimate whether an area is more "active" (as a bush may be, as opposed to the sky) - artefacts in more active areas are less visible, so we can quantize more
|
|
|
lithium
|
2021-06-05 01:21:39
|
But how do you assess the importance of an active area?
|
|
|
|
veluca
|
2021-06-05 01:25:08
|
eh, good question - it's done in https://github.com/libjxl/libjxl/blob/main/lib/jxl/enc_adaptive_quantization.cc and it uses a few heuristics involving the Laplacian of the image and other things like local deltas in a block
|
|
|
_wb_
|
2021-06-05 01:27:44
|
just guessing, but I think the general idea is: few edges -> smooth area -> less quantization because you're more likely to spot issues like banding etc. Lots of edges -> busy area -> can get away with more quantization
|
|
|
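That rough idea ("busy area, coarser quantization") can be shown with a toy activity measure. This is an assumption for illustration, a mean-absolute-Laplacian over 1-D samples; the real heuristics in enc_adaptive_quantization.cc are considerably more involved:

```rust
/// Toy activity measure: mean absolute 1-D Laplacian over a row of samples.
/// A flat or linear region scores ~0; a high-frequency region scores high.
pub fn activity(block: &[f32]) -> f32 {
    if block.len() < 3 {
        return 0.0;
    }
    let lap_sum: f32 = block
        .windows(3)
        .map(|w| (w[0] - 2.0 * w[1] + w[2]).abs())
        .sum();
    lap_sum / (block.len() - 2) as f32
}

/// More activity -> artefacts are visually masked -> a coarser quantization
/// step is acceptable. `strength` is a made-up tuning knob.
pub fn quant_step(base: f32, act: f32, strength: f32) -> f32 {
    base * (1.0 + strength * act)
}
```

A smooth gradient gets activity near zero (fine quantization, to avoid banding), while an alternating "busy" pattern gets a large activity and hence a coarser step.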
|
veluca
|
2021-06-05 01:31:15
|
very very roughly 😄
|
|
|
_wb_
|
2021-06-05 01:32:40
|
point is, it's encoder heuristics, and encoder heuristics can always be improved if they don't give the desired result
|
|
2021-06-05 01:33:10
|
what cannot be changed is what the bitstream can express, but I don't think jxl has an issue there
|
|
|
lithium
|
2021-06-05 01:34:50
|
So vardct will distribute lossy error over complex (busy) areas, making the error hard to see?
But I still don't understand why drawn images get some issues...
|
|
|
|
veluca
|
2021-06-05 01:38:36
|
I think the real answer there is that we never did that much work on non-photo content 😛
|
|
|
lithium
|
2021-06-05 01:46:35
|
I just expected vardct (-d 1.0, 0.5) to be like webp near-lossless (40, 60) quality, but using DCT (smaller size).
|
|
2021-06-05 01:47:36
|
Thank you for teaching me about visual masking 🙂
|
|
|
_wb_
|
|
veluca
I think the real answer there is that we never did that much work on non-photo content 😛
|
|
2021-06-05 01:56:22
|
Patches for text (and even small icons) work pretty well, but that's about the only real use of nonphoto-specific heuristics we have atm in the vardct encoder
|
|
2021-06-05 01:57:39
|
I suspect avif uses palette blocks for those, but maybe also in other cases of hard edges where dct isn't that great
|
|
2021-06-05 01:58:53
|
For nonphoto, likely an encoder that uses patches for the hard-with-dct stuff and dct anywhere else would be good
|
|
|
|
veluca
|
2021-06-05 02:05:06
|
avif has CDEF, very powerful (deblocking and not) filters, and can also do directional prediction instead of DCT
|
|
2021-06-05 02:05:18
|
all of those very likely help a lot
|
|
|
_wb_
|
2021-06-05 02:05:40
|
I think there's quite some room for cross-pollination of encoder techniques between av1 and jxl
|
|
2021-06-05 02:06:45
|
Directional prediction is good for straight lines but I don't think it helps much for curved ones
|
|
2021-06-05 02:07:13
|
Same with CDEF
|
|
2021-06-05 02:08:29
|
Splines and paletty patches could help a lot, detecting them is the hard part
|
|
|
Scope
|
|
Scope
https://twitter.com/ID_AA_Carmack/status/1400930510671601666
|
|
2021-06-05 05:55:39
|
Also, discussion on HN, including mention of Jpeg XL and BPG (again):
https://news.ycombinator.com/item?id=27399416
|
|
|
_wb_
|
2021-06-05 07:29:16
|
It's ironic that people want higher resolutions, more dynamic range, wider gamut, and then you have your nice 10-bit wide-gamut high-res HDR screen and, to save some memory, you end up using 8-bit YCbCr 4:2:0: effectively only using your green subpixels at full resolution (for red and blue you could just as well have only 1/4th of the subpixels), and effectively reducing your 1 billion 10-bit RGB colors to only 4 million 8-bit YCbCr colors, or even fewer if the 'tv range' variant of YCbCr is used, like in WebP.
|
|
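The "4 million colors" figure can be sanity-checked by brute force, assuming the full-range BT.601 transform (the exact matrix in use is an assumption here): iterate over all 2^24 8-bit YCbCr triplets and count how many land inside the RGB cube; the rest are code points no display color ever uses. The 'tv range' variant shrinks the code space further before that loss even applies:

```rust
/// Count full-range 8-bit YCbCr (BT.601) triplets that map into the RGB cube.
/// Triplets outside it are wasted code points.
pub fn valid_ycbcr_count() -> u64 {
    let mut count = 0u64;
    for y in 0..256 {
        for cb in 0..256 {
            for cr in 0..256 {
                let yf = y as f64;
                let cbf = cb as f64 - 128.0;
                let crf = cr as f64 - 128.0;
                // Full-range BT.601 YCbCr -> RGB
                let r = yf + 1.402 * crf;
                let g = yf - 0.344136 * cbf - 0.714136 * crf;
                let b = yf + 1.772 * cbf;
                let inside = |v: f64| (-0.5..255.5).contains(&v);
                if inside(r) && inside(g) && inside(b) {
                    count += 1;
                }
            }
        }
    }
    count
}

/// 'tv range' shrinks the cube before conversion: Y in 16..=235 (220 values),
/// Cb/Cr in 16..=240 (225 values each).
pub fn tv_range_code_points() -> u64 {
    220 * 225 * 225
}
```

The count comes out around 4 million, roughly the 16.7M-entry cube times the determinant (~0.236) of the RGB-to-YCbCr matrix, which matches the figure above.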
|
|
veluca
|
2021-06-05 07:31:54
|
(honest question: why the heck is tv range still a thing??)
|
|
|
_wb_
|
2021-06-05 07:47:54
|
My theory is that it's only still a thing because it's quite good for PSNR: reducing the range of input and output can only help for compression, and by redefining the uncompressed input to be tv range YCbCr 420, it's even a lossless operation!
|
|
2021-06-05 07:48:32
|
Maybe I am too cynical
|
|
|
lithium
|
2021-06-05 07:51:40
|
Why does the modern video format AV1 still use PSNR as its default setting?
(vvc uses XPSNR)
|
|
|
_wb_
|
2021-06-05 07:54:30
|
What is XPSNR? Not familiar with that name
|
|
|
lithium
|
2021-06-05 07:54:59
|
https://ieeexplore.ieee.org/document/9054089
|
|
|
_wb_
|
2021-06-05 07:55:37
|
I dunno, I guess because it's easy to measure and easy to optimize for? That plus tradition and inertia.
|
|
2021-06-05 08:00:58
|
Mostly I think because there is no uncontroversial metric that just correlates perfectly with human opinion across the fidelity spectrum, so since that doesn't exist, many throw their hands in the air and say "well we don't know what metric to optimize for, so let's just optimize for a simple one"
|
|
|
lithium
|
2021-06-05 08:01:56
|
Maybe we need a new image quality assessment method?
mix Butteraugli + Butteraugli p-norm + dssim + ssimulacra + XL
|
|
|
_wb_
|
2021-06-05 08:03:20
|
Yes, there's interesting work being done and to be done on making better metrics. AI-based metrics can also be an interesting approach...
|
|
|
|
veluca
|
2021-06-05 08:17:37
|
I have my own random ideas for a metric that takes some inspiration from Butteraugli and NLPD
|
|
2021-06-05 08:17:44
|
but, as always, time...
|
|
|
|
Deleted User
|
|
lithium
Maybe we need a new image quality assessment method?
mix Butteraugli + Butteraugli p-norm + dssim + ssimulacra + XL
|
|
2021-06-05 08:52:44
|
XYB-based (instead of current L\*a\*b\*-based) SSIMULACRA should be quite good
|
|
2021-06-05 08:53:03
|
But I'm waiting for AI-based metrics
|
|
2021-06-05 08:53:39
|
That'd basically mean free psy-tuning
|
|
|
BlueSwordM
|
|
_wb_
My theory is that it's only still a thing because it's quite good for PSNR: reducing the range of input and output can only help for compression, and by redefining the uncompressed input to be tv range YCbCr 420, it's even a lossless operation!
|
|
2021-06-05 08:54:10
|
As said before by some AV1 devs when I asked if I could submit a document detailing why XYB should be the color space used in AV2 and what advantages it would pose:
"HW ASIC manufacturers often pose some problems in the ratification of some features in the standard, such as color handling and chroma subsampling due to higher bandwidth requirements. Other times, another reason is that nobody actually pitched the idea and proved the large benefit of doing something novel."
|
|
|
|
veluca
|
2021-06-05 09:02:55
|
there's no particular reason why xyb, or some variant, shouldn't work well as a ycbcr replacement - in particular subsampling B should be almost free, psychovisually speaking
|
|
|
BlueSwordM
|
|
veluca
there's no particular reason why xyb, or some variant, shouldn't work well as a ycbcr replacement - in particular subsampling B should be almost free, psychovisually speaking
|
|
2021-06-05 09:05:58
|
Well, that's why I've been writing a small XYB requirement paper for AV2 so that it gets included in the main spec and we can leave YCbCr behind after all this time.
|
|
|
|
veluca
|
2021-06-05 09:06:27
|
I expect you're going to have one heck of a time with it
|
|
|
BlueSwordM
|
|
veluca
I expect you're going to have one heck of a time with it
|
|
2021-06-05 09:07:57
|
Hey, as you've shown before, the benefit from going with XYB psycho-visually speaking is rather large as a single coding tool, so convincing them with a well written paper and experiments with JPEG YCbCr>XYB that you've shown earlier this year should not be too difficult.
|
|
|
|
veluca
|
2021-06-05 09:08:55
|
IIRC there is a more fundamental problem: the decision process is entirely metric-driven, and the metrics are either computed in ycbcr space (which couldn't possibly help) or are VMAF, which IIRC hates XYB with a passion
|
|
2021-06-05 09:09:17
|
I'm not saying you *shouldn't* do it
|
|
2021-06-05 09:09:29
|
but it might take some extra work
|
|
|
BlueSwordM
|
2021-06-05 09:10:00
|
I see. So this is an uphill battle 😛
I mean, one of my arguments is that JPEG-XL uses it, so that must mean it's good xD
I can also use butteraugli as a metric which will help.
|
|
|
|
veluca
|
2021-06-05 09:10:29
|
in fact, I recommend you write to me (veluca@google.com) and possibly Jyrki (jyrki@google.com) to set this up 😛 at least I would like to try
|
|
2021-06-05 09:11:00
|
(I was asked by a couple of people to try to add butteraugli as a metric in AV2, that could be useful too)
|
|
|
veluca
I have my own random ideas for a metric that takes some inspiration from Butteraugli and NLPD
|
|
2021-06-05 09:11:43
|
or maybe that one if they want something easier to explain.. (and if I can manage to prepare something that works in time)
|
|
2021-06-05 09:12:17
|
(I've had a few thoughts on how to do XYB in other codecs too)
|
|
|
BlueSwordM
|
|
veluca
(I was asked by a couple of people to try to add butteraugli as a metric in AV2, that could be useful too)
|
|
2021-06-05 09:13:35
|
Some rav1e/daala people are working on integrating butteraugli as an AWCY metric (Are We Compressed Yet, a scalable cloud testing system we use to test encoder improvements: https://medium.com/vimeo-engineering-blog/scalable-codec-testing-with-are-we-compressed-yet-c3a64003f67b), so that is in fact something that should come this year.
|
|
|
|
veluca
|
2021-06-05 09:14:39
|
reeeeally? why did I not hear of that 😄 do you know what exact version?
|
|
|
|
Deleted User
|
2021-06-05 09:14:59
|
<@!321486891079696385> can you use Dolby's ICtCp in AV1? It's kinda in the same spirit as XYB (perceptually-driven) so it probably should perform in the same class as XYB...
And if you *can* use it, how?
|
|
|
|
veluca
|
2021-06-05 09:15:21
|
I know you can, probably through the CICP box
|
|
2021-06-05 09:15:26
|
but I don't know much more
|
|
2021-06-05 09:16:00
|
otoh you're not going to get many benefits without some serious reworking of quantization tables (IIRC AV1 uses flat ones)
|
|
|
BlueSwordM
|
2021-06-05 09:16:16
|
Perceptual vector quantization then?
|
|
|
|
veluca
|
2021-06-05 09:17:15
|
(I'm not familiar with the term)
|
|
|
BlueSwordM
|
|
veluca
(I'm not familiar with the term)
|
|
2021-06-05 09:17:45
|
It's a daala thing mainly
https://jmvalin.ca/daala/pvq_demo/
|
|
|
veluca
reeeeally? why did I not hear of that 😄 do you know what exact version?
|
|
2021-06-05 09:19:10
|
I actually suggested it as a Google Summer of Code project in rav1e, and someone took it up to implement it over the summer
https://summerofcode.withgoogle.com/projects/#6634888326807552
https://summerofcode.withgoogle.com/projects/#6634888326807552
|
|
|
|
veluca
|
2021-06-05 09:22:19
|
pretty cool
|
|
2021-06-05 09:22:46
|
the activity masking thingy is pretty similar to what JXL does (although computed differently)
|
|
|
|
Deleted User
|
|
veluca
the activity masking thingy is pretty similar to what JXL does (although computed differently)
|
|
2021-06-05 09:23:28
|
How do they compare?
|
|
|
|
veluca
|
2021-06-05 09:23:35
|
no clue 😄
|
|
2021-06-05 09:23:42
|
the DAALA thingy is DCT-based
|
|
2021-06-05 09:23:47
|
from what I can see
|
|
2021-06-05 09:24:01
|
we had a piece of masking estimation that was DCT-based, but got rid of it
|
|
|
|
Deleted User
|
|
|
veluca
|
2021-06-05 09:24:31
|
it didn't help
|
|
2021-06-05 09:25:19
|
one reason for that could be that DCT is not particularly good with step functions
|
|
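That point about step functions can be illustrated with a naive 1-D DCT-II (a textbook orthonormal transform, not libjxl's actual implementation): a hard edge spreads its energy across many coefficients, decaying only slowly with frequency, while a linear ramp concentrates almost everything in the first few:

```rust
use std::f64::consts::PI;

/// Naive orthonormal 1-D DCT-II. O(n^2), fine for illustration;
/// real codecs use fast transforms.
pub fn dct(x: &[f64]) -> Vec<f64> {
    let n = x.len() as f64;
    (0..x.len())
        .map(|k| {
            let scale = if k == 0 { (1.0 / n).sqrt() } else { (2.0 / n).sqrt() };
            scale
                * x.iter()
                    .enumerate()
                    .map(|(i, &v)| v * (PI / n * (i as f64 + 0.5) * k as f64).cos())
                    .sum::<f64>()
        })
        .collect()
}

/// Fraction of the signal's energy held in coefficients k >= cutoff.
pub fn high_freq_energy(x: &[f64], cutoff: usize) -> f64 {
    let c = dct(x);
    let total: f64 = c.iter().map(|v| v * v).sum();
    let high: f64 = c[cutoff..].iter().map(|v| v * v).sum();
    high / total
}
```

Comparing a step edge with a smooth ramp of the same length shows the step keeping much more of its energy in the high-frequency coefficients, which is exactly what makes hard edges expensive (and ringing-prone) under coarse DCT quantization.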
|
Jyrki Alakuijala
|
|
veluca
avif has CDEF, very powerful (deblocking and not) filters, and can also do directional prediction instead of DCT
|
|
2021-06-07 07:30:09
|
CDEF was turned on/off for the anime images where it was anticipated by some to be the reason for JXL/AVIF differences. Nothing happened there; it was a 1 % thing in objective metrics. Directional prediction is a bit in the same category, but can be more effective -- just not on all images. Corpus-level effect is likely 1.5-2 %, and on '1 % best case' images for directional prediction you can see 10-15 %.
|
|
|
|
veluca
|
2021-06-07 07:31:38
|
It could be one of the other two filters though
|
|
|
Jyrki Alakuijala
|
|
BlueSwordM
Well, that's why I've been writing a small XYB requirement paper for AV2 so that it gets included in the main spec and we can leave YCbCr behind after all this time.
|
|
2021-06-07 07:33:25
|
I suspect, but don't know, that Dolby's ICtCp is still CIE 1931 based -- XYB does things differently so as not to be based on psychovisual experiments on slabs of color; it also relates to the eye's ability to receive high-frequency information, which is very different for S receptors than for M and L
|
|
2021-06-07 07:34:10
|
someone could look at the before compression colorspace rotation and how it relates to those of XYB
|
|
2021-06-07 07:35:12
|
XYB is a great colorspace for image compression, but not very scientific -- no publication, and I only used my own eyes to come up with it, ... We did verify it with many eyes, though
|
|
2021-06-07 07:36:40
|
if my suspicion is correct about ICtCp vs. XYB, it should give a 3-5 % compression benefit for XYB over ICtCp for normal images
|
|
2021-06-07 07:37:12
|
ICtCp has logarithmic behaviour for high values, making it likely more efficient in HDR than JPEG XL's XYB
|
|
2021-06-07 07:37:27
|
... butteraugli's XYB would likely work better there
|
|
2021-06-07 07:38:30
|
with JPEG XL we largely compensate for it in adjusting adaptive quantization respectively, but if one adds XYB to another codec without thinking about the cube vs. exp (or cubic root/log) difference, then it is not going to be that amazing for HDR
|
|
|
<@!321486891079696385> can you use Dolby's ICtCp in AV1? It's kinda in the same spirit as XYB (perceptually-driven) so it probably should perform in the same class as XYB...
And if you *can* use it, how?
|
|
2021-06-07 07:40:38
|
In AV1 colors are someone else's problem, the codec transports channels. There are eight(?) supported short codes for colorspaces, one of them is ICtCp. I tried to convince them to add XYB back in the day, but they were not enthusiastic at that time.
|
|
|
But I'm waiting for AI-based metrics
|
|
2021-06-07 07:43:24
|
AI doesn't know the intrinsic compromises in human vision. When such a system is trained, the training material should cover these compromises -- no effort that I know is trying to tackle this area.
|
|
|
lithium
I have a vardct question, could someone teach me please?
How does visual masking work in jxl vardct?
|
|
2021-06-07 07:56:15
|
Visual masking works in many different ways in different algorithms (for different speeds and for different uses). The reasons for diversity/complexity are half speed related, half code archeology. Luca made an attempt to replace that mess with 'new heuristics', but I/we haven't followed up on it yet. One of the most important ways is in initial quantization. ...
|
|
2021-06-07 07:56:42
|
There we compute a 4x4 subsampled view of visual activity (measured by a laplacian or similar)
|
|
2021-06-07 07:57:09
|
each locality looks at a 3x3 neighbourhood of such 4x4 samples, i.e., a 12x12 area is considered
|
|
2021-06-07 07:58:13
|
we take the three 4x4 areas with the lowest visual activity (in that 3x3/12x12 neighbourhood), weighted-average the visual activities in them, and that works as a visual activity for that area
|
|
2021-06-07 07:58:42
|
in Butteraugli we do roughly the same but with Gaussians and pixel accuracy instead of 4x4 and subsampling
|
|
2021-06-07 07:59:30
|
enc_ac_strategy.cc needs something similar but computes it on its own in a slightly different way (which is just stupid, but that is how it is)
|
|
2021-06-07 08:00:19
|
often heuristics like this include more compromises (i.e., modeling other things than expected) than one would think, and clean reuse may be more difficult to tune than having specialized code
|
|
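The initial-quantization steps described above (4x4 activity samples, a 3x3 tile neighbourhood, average of the lowest activities) can be sketched in toy form. The weighting is simplified to a plain mean of the three lowest values; the actual libjxl code weights them and computes activity differently:

```rust
/// Toy version of the masking step described above: `grid` holds one activity
/// value per 4x4 tile. For each tile we look at its 3x3 tile neighbourhood
/// (a 12x12 pixel area), keep the three lowest activities found there, and
/// average them. A single smooth tile inside a busy neighbourhood therefore
/// drags the masking estimate down for the whole locality.
pub fn masked_activity(grid: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let h = grid.len();
    let w = grid[0].len();
    let mut out = vec![vec![0.0; w]; h];
    for y in 0..h {
        for x in 0..w {
            let mut neigh: Vec<f32> = Vec::new();
            for dy in -1i32..=1 {
                for dx in -1i32..=1 {
                    let ny = y as i32 + dy;
                    let nx = x as i32 + dx;
                    if ny >= 0 && ny < h as i32 && nx >= 0 && nx < w as i32 {
                        neigh.push(grid[ny as usize][nx as usize]);
                    }
                }
            }
            neigh.sort_by(|a, b| a.partial_cmp(b).unwrap());
            let k = neigh.len().min(3);
            out[y][x] = neigh[..k].iter().sum::<f32>() / k as f32;
        }
    }
    out
}
```

This also makes the earlier point concrete: one smooth 4x4 tile among busy neighbours lowers the estimated masking (and hence the allowed quantization) for everything around it.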
|
killerwhale
|
2021-06-07 11:48:53
|
|
|
2021-06-07 11:49:31
|
check out this pixel upscaling filter from russell kirsch (inventor of the pixel)
|
|
2021-06-07 11:50:23
|
here's the white paper
|
|
|
_wb_
|
2021-06-07 12:08:35
|
it's not an upscaling filter, it's a proposal to use a non-square pixel representation
|
|
2021-06-07 12:09:50
|
which is an interesting idea but I don't really see how you would apply it in practice
|
|
2021-06-07 12:11:27
|
for display technology, it seems unlikely that something like that is practical
|
|
2021-06-07 12:11:37
|
for capturing technology: the same
|
|
2021-06-07 12:13:10
|
for compression, this is a bit like directional prediction built into the representation, but I think it complicates things more than that it helps
|
|
2021-06-07 12:13:50
|
for uncompressed representations: yes, this might be useful; maybe for use in a texture format?
|
|
|
killerwhale
|
2021-06-07 12:17:56
|
yes this is most similar to texture filters like hqx xbr etc
|
|
|
_wb_
|
2021-06-07 12:20:48
|
those are pixel art upscalers
|
|
2021-06-07 12:21:13
|
this is not an upscaler afaiu, that image on the bottom was not generated from the image above
|
|
2021-06-07 12:21:55
|
both were created from a higher res original, just one is using square pixels and the other is using the same number of variable-sized pixels
|
|
2021-06-07 12:22:28
|
or a slightly lower number, but the same number of bits needed taking into account the 3 bits per 2 pixels needed to signal the pixel shape
|
|
|
Señor Orcoso
|
2021-06-07 04:38:06
|
JPG XL is the future
|
|
2021-06-07 04:38:08
|
like if you agree
|
|
|
spider-mario
|
2021-06-07 05:17:17
|
pixels are generally not considered to have a shape in the imaging community
|
|
2021-06-07 05:17:19
|
they are point samples
|
|
2021-06-07 05:18:28
|
the image can be reconstructed with squares, and it effectively corresponds to nearest-neighbour upsampling
|
|
2021-06-07 05:18:42
|
but that’s not really what they “are”
|
|
2021-06-07 05:23:34
|
in a sense, what he seems to be proposing is kind of a vector graphics approach with a specific set of primitives
|
|
2021-06-07 05:25:49
|
but for a fair comparison with more “traditional” bitmap storage, I think he should have used better upsampling than nearest-neighbour
|
|
2021-06-07 05:28:18
|
(and made sure that capture aliasing was properly avoided, that’s not really clear to me either)
|
|
|
sn99
|
2021-06-07 05:52:25
|
Is there a lot of difference between google's butteraugli vs libjxl ? https://github.com/google/butteraugli/blob/master/butteraugli/butteraugli.cc
|
|
|
_wb_
|
2021-06-07 05:56:09
|
Not a _lot_, I think, but <@532010383041363969> will know better
|
|
|
lithium
|
2021-06-07 07:04:56
|
good question 🙂
|
|
2021-06-07 07:05:30
|
> Butteraugli vs Butteraugli(jpeg xl)
>
> From Jyrki Alakuijala
> butteraugli's xyb is likely more accurate,
> because of asymptotic log behaviour for high intensity values (instead of raising to a power),
>
> jpeg xl's xyb modeling is going to be substantially faster to compute,
> because gamma is exactly three there.
>
> Shouldn't matter in the end. All butteraugli's have been calibrated to the same reference corpus with pretty good results.
> The reference corpus contains 2500 image pairs in the area of 0.6--1.3 max butteraugli values.
> Earlier butterauglis may be slightly more accurate, later butterauglis more practical and compatible with compression.
> Later butterauglis can be slightly better at detecting faint larger scale degradations as
> they do recursive 2x multiscaling of top of the Laplacian pyramid, not just a single run of
> four levels of Laplacian pyramid.
>
> I, as the author of butteraugli, use the latest butteraugli in jpeg xl for my use.
|
|
|
Scope
|
2021-06-07 07:07:58
|
Also, it's not just a question
<https://summerofcode.withgoogle.com/projects/#6634888326807552>
|
|
|
lithium
|
2021-06-07 07:09:15
|
cool 🙂 Rust Butteraugli
|
|
|
|
veluca
|
2021-06-07 07:24:26
|
eh, that's one case where I don't really think it's a good expense of effort xD
|
|
|
_wb_
|
2021-06-07 07:25:40
|
Perceptual metrics is not immediately the first thing that springs to mind where security/robustness is hugely important
|
|
2021-06-07 07:26:34
|
Could be nice for the slow settings of a hypothetical Rust jxl encoder though, and to integrate in rav1e
|
|
|
|
veluca
|
2021-06-07 07:27:18
|
meh, you can use FFI and the C++ butteraugli 😛
|
|
|
|
Deleted User
|
|
Scope
Also, it's not just a question
<https://summerofcode.withgoogle.com/projects/#6634888326807552>
|
|
2021-06-07 07:33:04
|
From what I've heard Rust is better in terms of security/robustness. Does it come with a performance hit compared to pure C with assembly (like in x264 or libjpeg-turbo)?
|
|
|
sn99
|
|
Scope
Also, it's not just a question
<https://summerofcode.withgoogle.com/projects/#6634888326807552>
|
|
2021-06-07 07:35:18
|
I am the one working on it 🤣
|
|
2021-06-07 07:36:15
|
We are mainly looking at maintainability and how well it will fare against C++
|
|
|
_wb_
|
|
From what I've heard Rust is better in terms of security/robustness. Does it come with a performance hit compared to pure C with assembly (like in x264 or libjpeg-turbo)?
|
|
2021-06-07 07:36:35
|
I think the claim is that you don't really pay a performance price - it's compile time robustness, not runtime checks or something
|
|
2021-06-07 07:37:13
|
But it might be trickier to write some things in a way that Rust likes
|
|
2021-06-07 07:39:27
|
In C you can basically do any dirty trick, whether it means you shoot yourself in the foot and write fragile code or you make something very fast in a robust way, C just lets you do whatever (it's basically just some syntactic sugar over portable assembly)
|
|
|
|
Deleted User
|
2021-06-07 07:41:31
|
I'm too used to C and "dirty tricks", do you think Rust will work for me? Because I don't think so 🙁 (despite the fact that Rust seems like a fun tool).
|
|
2021-06-07 07:41:49
|
I've read <@!826537092669767691>'s article:
https://kornel.ski/rust-c-speed
|
|
2021-06-07 07:44:40
|
> "Clever" memory use is frowned upon in Rust. In C, anything goes. For example, in C I'd be tempted to reuse a buffer allocated for one purpose for another purpose later
That one hit me. On the one side there are MS Teams devs that programmed this shitty app and called it a day. On the other side, I still program like it's the '80s and I still carefully think whether to add another variable or not, sometimes re-using them **exactly** like Kornel described.
|
|
|
|
veluca
|
2021-06-07 07:47:26
|
you do get *some* performance penalty
|
|
2021-06-07 07:47:35
|
otoh, probably not much
|
|
2021-06-07 07:48:45
|
butteraugli is not SIMDfied, so it shouldn't be too bad
|
|
|
sn99
|
2021-06-07 07:48:54
|
There is also the angle of "maintainability": let's say you come back after months, or you're working with other people; it might work now, but there's no guarantee it still will 2-3 months from now
|
|
|
veluca
butteraugli is not SIMDfied, so it shouldn't be too bad
|
|
2021-06-07 07:49:15
|
Why is that ?
|
|
|
|
Deleted User
|
|
sn99
There is also the angle of "maintainability": let's say you come back after months, or you're working with other people; it might work now, but there's no guarantee it still will 2-3 months from now
|
|
2021-06-07 07:49:55
|
Somehow mozjpeg and x264 get maintained despite literally having *assembly* code in there
|
|
|
|
veluca
|
2021-06-07 07:50:14
|
I think we just never had time to SIMDfy it
|
|
2021-06-07 07:51:31
|
(btw, if you'd be interested in more rust stuff, I have my nice pet jxl reimplementation in Rust that could use some love by somebody that actually knows rust... :P)
|
|
|
sn99
|
|
Somehow mozjpeg and x264 get maintained despite literally having *assembly* code in there
|
|
2021-06-07 07:52:58
|
When I said other people, I meant others who are not maintainers and have not worked on it for a long time: basically 3rd-party people who just want to modify it slightly and can shoot themselves in the foot. A good C dev won't, but a normal one might
|
|
|
veluca
(btw, if you'd be interested in more rust stuff, I have my nice pet jxl reimplementation in Rust that could use some love by somebody that actually knows rust... :P)
|
|
2021-06-07 07:53:32
|
I would like to take a look at it
|
|
|
|
Deleted User
|
2021-06-07 07:53:42
|
By the way: is it simple to SIMDify C (not C++) code and is it possible without assembly (which I don't know *yet*)? I've read that Rust supports SIMD out of the box, which is awesome...
|
|
|
|
veluca
|
2021-06-07 07:53:54
|
intrinsics exist
|
|
2021-06-07 07:54:00
|
is that easy? I dunno...
|
|
|
sn99
I would like to take a look at it
|
|
2021-06-07 07:54:20
|
https://github.com/libjxl/jxl-rs it never got even a review though
|
|
2021-06-07 07:54:31
|
and it doesn't do *that* much yet
|
|
|
|
Deleted User
|
2021-06-07 07:55:13
|
https://doc.rust-lang.org/edition-guide/rust-2018/simd-for-faster-computing.html
|
|
2021-06-07 07:55:42
|
SIMD in Rust looks like this:
```rust
pub fn foo(a: &[u8], b: &[u8], c: &mut [u8]) {
for ((a, b), c) in a.iter().zip(b).zip(c) {
*c = a.wrapping_add(*b); // wrapping avoids a debug-mode overflow panic on u8
}
}```
|
|
|
|
veluca
|
2021-06-07 07:56:06
|
IIRC that's not necessarily SIMDfied
|
|
|
sn99
|
2021-06-07 07:56:28
|
Making binding would be the best way if you want it as of as soon as possible, as long as cpp code does not break neither should bindings
|
|
|
|
veluca
|
2021-06-07 07:57:13
|
there's already a couple of people that made bindings 😄
|
|
2021-06-07 07:57:29
|
it's more of a mix of learning rust for myself and having a real Rust impl
|
|
2021-06-07 07:57:42
|
(of a subset, possibly)
|
|
|
Ringo
|
|
SIMD in Rust looks like this:
```rust
pub fn foo(a: &[u8], b: &[u8], c: &mut [u8]) {
for ((a, b), c) in a.iter().zip(b).zip(c) {
*c = a.wrapping_add(*b); // wrapping avoids a debug-mode overflow panic on u8
}
}```
|
|
2021-06-07 07:57:46
|
SIMD in Rust is basically when LLVM decides to vectorize code
|
|
|
sn99
|
|
SIMD in Rust looks like this:
```rust
pub fn foo(a: &[u8], b: &[u8], c: &mut [u8]) {
for ((a, b), c) in a.iter().zip(b).zip(c) {
*c = a.wrapping_add(*b); // wrapping avoids a debug-mode overflow panic on u8
}
}```
|
|
2021-06-07 07:57:53
|
Most of it gets taken care of by LLVM, until it doesn't
|
|
|
veluca
(of a subset, possibly)
|
|
2021-06-07 07:58:37
|
Don't take butteraugli, I am doing that <:PeepoDiamondSword:805394101340078092>
|
|
|
|
veluca
|
|
Ringo
SIMD in Rust is basically when LLVM decides to vectorize code
|
|
2021-06-07 07:58:56
|
stdsimd is being done 😛
|
|
|
sn99
Don't take butteraugli, I am doing that <:PeepoDiamondSword:805394101340078092>
|
|
2021-06-07 07:59:02
|
wasn't going to
|
|
2021-06-07 07:59:33
|
I was thinking of JPEG to JXL transcoding to start with
|
|
|
Ringo
|
|
veluca
stdsimd is being done 😛
|
|
2021-06-07 07:59:44
|
oh, nice
|
|
|
sn99
|
2021-06-07 08:05:21
|
I remember them closing stdsimd https://github.com/rust-lang/rfcs/pull/2366#issuecomment-511724227
|
|
2021-06-07 08:05:45
|
There is standalone crate now though that is being maintained https://github.com/rust-lang/stdsimd
|
|
|
|
Deleted User
|
2021-06-07 08:15:11
|
Another issue I have with Rust:
> Rust's lack of implicit type conversion and indexing only with `usize` nudges users to use just this type, even where smaller types would suffice.
Seems like Rust is not for me, I was so hyped about it 🙁
|
|
|
sn99
|
2021-06-07 08:25:05
|
Yep, you cannot index with u8, u16 or anything like that, but you can do `<any u8> as usize` for a one-off use. I think C++ also converts them implicitly; the only difference is that in Rust you have to state explicitly that you want a usize, which helps if you accidentally try to do something like this with an f64 or i16
|
|
2021-06-07 08:27:12
|
```rust
...
let x: u8 = 3;
let q = arr[x]; // will error out during compilation
let q = arr[x as usize]; // will work
...
```
|
|
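For indexing specifically, `as` is not the only option; a short illustrative sketch (not from the chat) of the lossless alternative:

```rust
fn main() {
    let arr = [10u32, 20, 30, 40];
    let x: u8 = 3;
    // `x as usize` always compiles, even where it could truncate
    // (e.g. u64 -> usize on a 32-bit target). `usize::from` only
    // exists for conversions that cannot fail, so it is the safer
    // habit for index types.
    assert_eq!(arr[x as usize], 40);
    assert_eq!(arr[usize::from(x)], 40);
}
```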
|
|
Deleted User
|
|
sn99
Yep, you cannot index with u8, u16 or anything like that, but you can do `<any u8> as usize` for a one-off use. I think C++ also converts them implicitly; the only difference is that in Rust you have to state explicitly that you want a usize, which helps if you accidentally try to do something like this with an f64 or i16
|
|
2021-06-07 08:58:48
|
But will such a cast actually use less memory and/or CPU?
|
|
|
sn99
|
|
But will such a cast actually use less memory and/or CPU?
|
|
2021-06-07 09:22:38
|
It will be the same as C++ in all regards, you just have to state it explicitly in Rust, ~~in terms of speed vs memory it mostly depends on whether the number `x` is a compile-time constant and what optimisations are being applied for speed vs storage~~
|
|
|
Kornel
|
2021-06-07 09:35:04
|
these casts are free.
|
|
2021-06-07 09:35:32
|
Check with rust.godbolt.org
|
|
2021-06-07 09:35:54
|
In the end everything gets inlined and crunched by LLVM
|
|
2021-06-07 09:36:18
|
For simd there are intrinsics 1:1 copied from Intel's C intrinsics
|
|
2021-06-07 09:36:33
|
And hook into the same thing in LLVM
|
|
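Kornel's point about the 1:1 Intel intrinsics can be sketched like this (a hedged example: SSE2 is part of the x86_64 baseline, so the intrinsics are safe to call there; a plain scalar fallback covers other targets):

```rust
#[cfg(target_arch = "x86_64")]
fn add4(a: [i32; 4], b: [i32; 4]) -> [i32; 4] {
    use core::arch::x86_64::*;
    // SSE2 is guaranteed on x86_64, so no runtime feature check is needed.
    unsafe {
        let va = _mm_loadu_si128(a.as_ptr() as *const __m128i);
        let vb = _mm_loadu_si128(b.as_ptr() as *const __m128i);
        let sum = _mm_add_epi32(va, vb); // same name as Intel's C intrinsic
        let mut out = [0i32; 4];
        _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, sum);
        out
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn add4(a: [i32; 4], b: [i32; 4]) -> [i32; 4] {
    // Scalar fallback for non-x86 targets.
    let mut out = [0i32; 4];
    for i in 0..4 {
        out[i] = a[i] + b[i];
    }
    out
}

fn main() {
    assert_eq!(add4([1, 2, 3, 4], [10, 20, 30, 40]), [11, 22, 33, 44]);
}
```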
|
|
veluca
|
2021-06-07 09:39:38
|
yeah, those explicit casts can be a little annoying in the beginning but after a while you realize they're lifesavers
|
|
|
improver
|
2021-06-07 10:35:41
|
what can a forced array-index cast save you from? using the wrong variable to index stuff?
|
|
|
|
veluca
|
2021-06-07 10:36:23
|
for array index, maybe not so much, but for other things, absolutely useful
|
|
2021-06-07 10:37:26
|
unless you can tell me if this is true or false in C/C++: `((uint8_t)8*(uint8_t)8) - (uint8_t)70 > 0`
|
|
|
Pieter
|
2021-06-07 10:41:47
|
promotion to int applies, no?
|
|
|
improver
|
2021-06-07 10:41:59
|
oh you meant automatic integer cast rules. but does array indexing even use that?
|
|
|
Pieter
|
2021-06-07 10:42:21
|
the '-' operator itself causes promotion, I believe
|
|
|
improver
|
2021-06-07 10:42:47
|
i think `*` could too (to int, i think)
|
|
2021-06-07 10:43:33
|
so it would probably end up being false
|
|
|
Pieter
|
2021-06-07 10:43:48
|
oh, yes
|
|
|
improver
|
2021-06-07 10:44:19
|
they get more complex when it goes above 32-bit ints too.. it's kind of a mess
|
|
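For contrast with the C promotion rules being discussed, a runnable Rust sketch (illustrative, not from the chat): in C the `uint8_t` operands are promoted to `int`, so the expression evaluates to `-6 > 0`, i.e. false, while Rust refuses to promote and makes the overflow explicit:

```rust
fn main() {
    let (a, b, c): (u8, u8, u8) = (8, 8, 70);
    // In Rust, u8 * u8 stays u8: 64 - 70 would overflow, and
    // checked_sub reports that instead of silently promoting to int.
    assert_eq!((a * b).checked_sub(c), None);
    // Widening explicitly reproduces the C result:
    let r = i32::from(a) * i32::from(b) - i32::from(c);
    assert_eq!(r, -6);
    assert!(!(r > 0)); // so the C expression is false
}
```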
|
|
veluca
|
2021-06-07 10:44:28
|
yup
|
|
2021-06-07 10:44:44
|
for array indexing, I believe it simply comes from the "no automatic conversions" logic
|
|
2021-06-07 10:45:01
|
also - what should `a[-1]` be? 😛
|
|
|
improver
|
2021-06-07 10:45:48
|
but i think array indexing operators could define stuff for all unsigned kinds without much issue
|
|
|
|
veluca
|
2021-06-07 10:46:14
|
probably - but rust has no function overloading, so...
|
|
|
improver
|
2021-06-07 10:47:22
|
no automatic casts is a nice property, i like it in golang too
|
|
2021-06-07 10:47:47
|
but i think i can index slices and arrays with whatever integer types there
|
|
|
Pieter
|
2021-06-07 10:48:16
|
If `a` has a pointer or array type, `a[b]` is identical to `*(a + b)`.
|
|
|
improver
|
2021-06-07 10:49:09
|
both rust and go bounds-check if they can't prove it's within bounds
|
|
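A small illustration of those bounds checks (editorial sketch, not from the chat): Rust's `[]` panics out of bounds at runtime, while `.get()` returns an `Option` and lets the caller decide:

```rust
fn main() {
    let arr = [1, 2, 3];
    // `arr[i]` is bounds-checked and panics if i >= arr.len();
    // `arr.get(i)` returns None instead of panicking.
    assert_eq!(arr.get(2), Some(&3));
    assert_eq!(arr.get(5), None);
}
```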
|
|
veluca
|
|
Pieter
If `a` has a pointer or array type, `a[b]` is identical to `*(a + b)`.
|
|
2021-06-07 10:52:33
|
*in C*, not in Rust...
|
|
|
Pieter
|
2021-06-07 10:52:45
|
Oh, sure. I have no clue about Rust.
|
|
2021-06-07 10:52:59
|
Apologies, I missed context.
|
|
|
improver
|
2021-06-07 10:57:54
|
I'd say `a[-1]` is just the usual bounds-check failure at runtime (because the compile-time proof fails), regardless of whether you interpret that as a signed int or reinterpret it as unsigned
|
|
|
|
veluca
|
|
Pieter
If `a` has a pointer or array type, `a[b]` is identical to `*(a + b)`.
|
|
2021-06-07 10:57:59
|
this statement is too restrictive btw, `1["abc"] == 'b'`
|
|
|
improver
|
2021-06-07 11:00:12
|
(not meant to imply a C-style int type here, rather the isize/usize equivalents in rust)
|
|
|
Pieter
|
2021-06-07 11:00:27
|
```c
#include <stdio.h>

int main() {
    char x = *("abc" + 1);
    printf("%c\n", x);
}
```
works
|
|
2021-06-07 11:00:39
|
(prints `b`)
|
|
|
improver
|
2021-06-07 11:00:54
|
well yea obv
|
|
|
|
veluca
|
2021-06-07 11:01:19
|
it's more worrying that `1["abc"]` does xD
|
|