|
Jyrki Alakuijala
|
2021-02-27 08:14:15
|
I'm fixing it in the next version
|
|
2021-02-27 08:14:31
|
(it will still not be optimal, but much already better)
|
|
|
|
Deleted User
|
2021-02-28 01:07:24
|
I wonder how well splines would deal with this image
|
|
2021-02-28 01:08:12
|
Images like this one will probably use them more than real-life photos
|
|
|
lithium
|
2021-02-28 09:14:35
|
A little curious: if this issue is fixed,
will high bpp (-d 0.5, 1.0) and medium bpp (-q 85, 80)
also benefit?
|
|
|
Crixis
|
|
Jyrki Alakuijala
we know that they don't work at low bit rates
|
|
2021-02-28 09:52:35
|
Is this expected to reduce boundary artifacts and high-frequency ringing?
|
|
|
|
veluca
|
|
Scope
JXL (33 558)
|
|
2021-02-28 12:04:05
|
could you use the script to show block type selection on this? (or share the original, I guess :))
|
|
|
lithium
|
|
veluca
could you use the script to show block type selection on this? (or share the original, I guess :))
|
|
2021-02-28 12:35:59
|
春場ねぎ @negi_haruba
https://twitter.com/negi_haruba
https://twitter.com/negi_haruba/status/1263054094232518658
https://pbs.twimg.com/media/EYdEuKaUcAA-tnZ?format=png&name=large
|
|
2021-02-28 12:45:11
|
望月けい
https://www.pixiv.net/artworks/85779247
LAM
https://www.pixiv.net/artworks/76098998
lack
https://www.pixiv.net/artworks/77817992
Mika Pikazo
https://www.pixiv.net/artworks/78511602
|
|
2021-02-28 12:55:37
|
original png
|
|
|
fab
|
2021-02-28 03:02:43
|
-j -q 68.97 -s 4 --intensity_target=700
|
|
|
_wb_
|
2021-02-28 03:03:13
|
Why so bright?
|
|
|
fab
|
2021-02-28 03:03:28
|
this was an old setting I had that I never tested
|
|
2021-02-28 03:03:42
|
but with the new jxl commit it works well at that quality
|
|
|
_wb_
|
2021-02-28 03:03:59
|
Intensity target is saying: please show white at 700 nits
|
|
|
fab
|
2021-02-28 03:04:19
|
so it doesn't change the quality of the image?
|
|
2021-02-28 03:04:25
|
or the bits?
|
|
|
_wb_
|
2021-02-28 03:04:43
|
It does, because that does mean you will see the darks better
|
|
|
fab
|
2021-02-28 03:05:00
|
on a normal display
|
|
|
_wb_
|
2021-02-28 03:05:04
|
But it will also cause your image to be very bright on an HDR screen
|
|
|
fab
|
2021-02-28 03:05:06
|
lcd 250 nits
|
|
|
_wb_
|
2021-02-28 03:05:51
|
So better just use a higher quality target instead
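To illustrate the point, a minimal sketch with placeholder file names and an example distance (not a prescribed setting):
```
# raise fidelity by lowering the Butteraugli distance, leaving the SDR intensity default alone
cjxl input.png out.jxl -d 1.0

# avoid this: it does not add quality, it asks the display to show white at 700 nits
# cjxl input.png out.jxl -q 68.97 --intensity_target=700
```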
|
|
|
fab
|
2021-02-28 03:07:09
|
I converted the Galaxy S21 Ultra demo image (JPG), 4.5 MB, to 900 kB with it
|
|
2021-02-28 03:10:40
|
those are the settings I liked the most
|
|
2021-02-28 03:10:49
|
the images are probably seriously damaged
|
|
2021-02-28 03:11:20
|
but compression was good (the first one 5x smaller, 85% less size), or 0.100 bpp with the second
|
|
2021-02-28 03:11:34
|
with the third, PNG sometimes inflates 2x or more, but in some cases it is usually 60% smaller
|
|
2021-02-28 03:13:04
|
with -s 7 -q 78.1 I have already encoded wallpapers
|
|
2021-02-28 03:13:39
|
I agree that for medium/useful qualities we should wait until the encoder becomes better
|
|
2021-02-28 03:14:23
|
but 0.3.2 commit 5175d117, even if its lossless is slow (even speed 4 takes 30 seconds on a dual-core i3), is amazing at lossless
|
|
|
Jyrki Alakuijala
|
2021-02-28 03:27:56
|
😄
|
|
2021-02-28 03:28:56
|
Thank you Fabian for all the feedback
|
|
2021-02-28 03:29:09
|
I still remember the chromaticity feedback you gave earlier
|
|
2021-02-28 03:29:50
|
I haven't checked if I fixed that, but I have changed the code and in other circumstances it behaves better now
|
|
2021-02-28 03:30:44
|
if you want me to check something, it is easier for me if you can crop a small tile, like 128x128 to 256x256, and show the command line that creates a problem -- then file a bug in the jxl repo on it
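For example, a hypothetical repro recipe along those lines (ImageMagick for the crop; file names, coordinates, and settings are placeholders):
```
# cut a 256x256 tile out of the problem area
convert original.png -crop 256x256+1024+512 +repage tile.png
# the exact command that shows the artifact
cjxl tile.png tile.jxl -d 1.0 -s 7
```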
|
|
2021-02-28 03:31:22
|
... also what Jon says is true
|
|
2021-02-28 03:31:41
|
I got a huge improvement in equal-bpp quality when I dropped the default intensity target from 250 to 80
|
|
|
lithium
A little curious: if this issue is fixed,
will high bpp (-d 0.5, 1.0) and medium bpp (-q 85, 80)
also benefit?
|
|
2021-02-28 03:34:40
|
Typical fixes to these things bring quality benefits across a wide range of images. There can be shifts in where it works best, however; like you might need 1 % more bits at quality d1, but get a 7 % improvement at d6.
|
|
2021-02-28 03:35:28
|
Currently I'm tracing quality impact between d1.0 and d23 when I'm improving the low bpp range
|
|
2021-02-28 03:37:40
|
I could start tracing better qualities, say d0.6 in it, too, to be safer
|
|
2021-02-28 03:38:21
|
for example, one fix was that the cost of an integral transform was evaluated differently in two code paths
|
|
2021-02-28 03:38:33
|
I had forgotten one multiplier in the other
|
|
2021-02-28 03:39:02
|
that is just a weirdness and inefficiency and causes 99.9999 % pure silliness to happen
|
|
2021-02-28 03:39:30
|
fixing it gave something like a 0.18 % improvement, but it is very likely that that kind of improvement is consistently available in all qualities
|
|
|
lithium
|
2021-02-28 03:44:54
|
In my test, some non-photographic images (anime type)
still have some issues in specific areas with -d 0.5
(compared to libavif min 7 max 7 and cavif q94 at similar file size / bpp)
|
|
|
Jyrki Alakuijala
|
2021-02-28 03:52:41
|
Lee, I think that when I reverse the integral transform search from small to big there will be a nice improvement to that
|
|
2021-02-28 03:53:23
|
the current system is a bit haphazard
|
|
2021-02-28 03:53:50
|
of course I mostly have guesses at this time
|
|
|
_wb_
|
2021-02-28 03:59:42
|
When you merge small blocks into larger ones, you could also consider doing it in non-naturally aligned ways
|
|
2021-02-28 04:00:20
|
Bigger search space but maybe worth it
|
|
|
lithium
|
2021-02-28 04:12:01
|
Probably there is some issue in the loop filter (for non-photographic image content)?
In avif, aq-mode defaults to off.
aq-mode=<M> : Adaptive quantization mode (0: off (default), 1: variance, 2: complexity, 3: cyclic refresh)
|
|
|
fab
|
|
Jyrki Alakuijala
I still remember the chromaticity feedback you gave earlier
|
|
2021-02-28 04:15:52
|
i don't remember
|
|
2021-02-28 04:21:22
|
i don't have HDR displays
|
|
|
lithium
|
2021-02-28 04:30:28
|
In libjpeg q95 4:4:4, some non-photographic image content
gets some tiny ringing and noise, and you need q99 to avoid this issue;
in JPEG XL, -d 0.5 gives a similar situation and you need -d 0.3.
In libavif (min 7 max 7) at high bpp, non-photographic image content compresses very well;
if you increase the quantizer (min 15 max 15 or min 20 max 20), the image still looks fine and only loses some detail
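Roughly the kind of commands such a comparison would use (a sketch only; exact flags depend on the builds, file names are placeholders):
```
cjpeg -quality 95 -sample 1x1 -outfile q95.jpg input.ppm   # libjpeg, 4:4:4
cjxl input.png d05.jxl -d 0.5                              # JPEG XL, high quality
avifenc --min 7 --max 7 input.png min7.avif                # libavif, low quantizer
avifenc --min 20 --max 20 input.png min20.avif             # higher quantizer, less detail
```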
|
|
|
Jyrki Alakuijala
|
2021-02-28 06:15:04
|
I'm now trying to develop an understanding of what goes wrong with JXL and anime -- starting with those 4 images, tiling them into 256x256 tiles, throwing away the low-entropy tiles, and using the rest in an optimization process to find the optimal settings (and what changes can cause improvements, if any)
|
|
2021-02-28 06:15:33
|
Luca tried uniform quantization already and that wasn't an answer to this problem
|
|
|
Crixis
|
2021-02-28 06:25:55
|
<@!532010383041363969> If you find a way to smooth out the jxl colourful high-frequency ringing on ink lines and flat or slow-gradient areas, you are my new favorite jxl core dev (sorry <@!794205442175402004> )
|
|
|
Scope
|
2021-02-28 06:31:40
|
|
|
|
veluca
could you use the script to show block type selection on this? (or share the original, I guess :))
|
|
2021-02-28 06:55:31
|
Where can I find that script? And the source images are in the beginning of these tests, I just continued further experiments with the same
|
|
|
|
veluca
|
2021-02-28 06:55:54
|
It's in tools/
|
|
2021-02-28 06:57:59
|
demo_vardct_select.sh
|
|
|
Scope
|
2021-02-28 07:02:28
|
I see, hmm, the problem is that it's much more difficult on Windows <:Thonk:805904896879493180>
|
|
|
Jyrki Alakuijala
|
|
Crixis
<@!532010383041363969> If you find a way to smooth out the jxl colourful high-frequency ringing on ink lines and flat or slow-gradient areas, you are my new favorite jxl core dev (sorry <@!794205442175402004> )
|
|
2021-02-28 07:59:28
|
What if I do it by using Jon's algorithm? :------D
|
|
2021-02-28 08:00:04
|
I just posted some evidence of my eternal state of confusion into the bot-spam channel
|
|
2021-02-28 08:01:10
|
looks like half the improvement in the anime ringing can be gained with --progressive_dc (Luca imported Jon's algos into DC modeling) and the other half with my changes to integral transform selection
|
|
|
spider-mario
|
|
veluca
demo_vardct_select.sh
|
|
2021-02-28 08:23:36
|
that script seems locale-sensitive 😅
|
|
2021-02-28 08:23:47
|
```
+ benchmark_xl --input=/tmp/tmp.2G7QSQd55hvardctdemo/orig --codec=jxl:d0,2 […]
```
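Presumably the script builds its distance list with seq, whose decimal separator follows the numeric locale; a minimal sketch of the issue and a common workaround (locale name is just an example):
```
# under a comma-decimal locale (e.g. de_DE), seq prints "0,2" instead of "0.2"
seq 0 0.2 1
# forcing the C numeric locale restores the dot
LC_NUMERIC=C seq 0 0.2 1
```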
|
|
2021-02-28 08:24:29
|
I had no idea that `seq` behaved like that
|
|
2021-02-28 08:24:55
|
in fact, I don’t recall it doing so in the past
|
|
2021-02-28 08:25:29
|
https://stackoverflow.com/a/23885058 hm, apparently it’s been a while
|
|
|
|
veluca
|
2021-02-28 08:29:15
|
What, why would you do that 🤣
|
|
|
Jyrki Alakuijala
|
2021-02-28 08:37:37
|
excel does that
|
|
2021-02-28 08:37:56
|
floating comma %-)
|
|
2021-02-28 08:38:50
|
in our first version of brotli experimentation Zoltan used toupper
|
|
2021-02-28 08:39:22
|
I quickly advised him to flip some bits instead 😄
|
|
2021-02-28 08:39:58
|
also I flip some other bits when it is not a symbol with upper/lowercase meaning
|
|
2021-02-28 08:40:19
|
some reviewer of the RFC got worried that we called it changing the case
|
|
2021-02-28 08:41:15
|
as I had always wanted to, I saw it as a perfect opportunity to use the word ferment, and we changed the case-related words to discuss the fermentation process (which is implemented by flipping certain bits)
|
|
2021-02-28 08:42:42
|
usa.csv ending should be british.scsv -- from comma-separated values to semi-colon separated values
|
|
|
Crixis
|
|
Jyrki Alakuijala
looks like half the improvement in the anime ringing can be gained with --progressive_dc (Luca imported Jon's algos into DC modeling) and the other half with my changes to integral transform selection
|
|
2021-02-28 09:04:20
|
it is seriously a step in the right direction
|
|
|
Jyrki Alakuijala
|
2021-02-28 09:18:03
|
yes, I think two more such steps and it is already good
|
|
|
lithium
|
2021-03-01 07:13:35
|
Non-photographic image medium-quality (bpp) test:
in cjxl q80 s7, the smooth fill area of the sky-blue Chinese text (second character) has some noise (ringing?);
this situation also happens on other images at -d 0.5~1.0 (high quality) in smooth areas (deep purple, blue, red colors);
if I use s3 (reduced adaptive quantization), smooth areas look better than with s7.
|
|
2021-03-01 07:13:56
|
13_original_jpeg q99 444
|
|
2021-03-01 07:14:11
|
13_libavif_min 24_max 26_s3_13.3KB
|
|
2021-03-01 07:14:22
|
13_jxl_q80_s7_20.6KB
|
|
2021-03-01 07:14:28
|
13_jxl_q80_s3_26.8KB
|
|
|
_wb_
|
2021-03-01 08:01:41
|
For such an image, I suspect delta palette could be great, with those 4 main colors in the palette, and the fifth 'color' being delta=0,0,0 which will produce interpolated colors for the anti-aliasing/blur at the edges.
|
|
2021-03-01 08:58:07
|
Time for an adhoc group meeting together with JPEG XS to discuss HDR experiments!
|
|
|
Scope
|
2021-03-01 06:37:00
|
``` --progressive_dc=num_dc_frames
Use progressive mode for DC.```
`--progressive_dc=1` is enough?
A little less convenient because it doesn't work with `--target_size` <:PepeSad:815718285877444619>
|
|
|
Jyrki Alakuijala
|
2021-03-01 06:40:38
|
--progressive_dc 1 works at distance greater than d2.5 (--progressive_dc 2 is garbage right now, don't use it)
|
|
2021-03-01 06:41:36
|
--progressive_dc 1 is a bit wobbly with butteraugli scores below d5.0, but I have reviewed manually and I consider butteraugli overly concerned about DC there (between d2.5 and d5.0)
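Putting that guidance into a hypothetical pair of commands (file names and distances are placeholders; the `--progressive_dc=1` syntax is per the help text quoted above):
```
# medium/low fidelity (distance above ~2.5): progressive DC tends to help
cjxl input.png out.jxl -d 4.0 --progressive_dc=1
# high fidelity (d0.5 to d1): leave it at 0
cjxl input.png out.jxl -d 1.0 --progressive_dc=0
```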
|
|
2021-03-01 06:43:15
|
file a bug that it doesn't work with --target_size
|
|
|
BlueSwordM
|
2021-03-01 06:50:43
|
So, about -d XX, does using a higher number of --num_reps=X allow for better targeting of the metric?
|
|
|
Scope
|
2021-03-01 07:09:16
|
Hmm, or maybe it was some typo I made in my scripts, but now `--target_size` works <:Hypers:808826266060193874>
Btw, it would be nice to have it for lossy modular mode as well
|
|
|
_wb_
|
|
BlueSwordM
So, about -d XX, does using a higher number of --num_reps=X allow for better targeting of the metric?
|
|
2021-03-01 07:25:17
|
No, that is just for benchmarking speed.
|
|
|
Scope
Hmm, or maybe it was some typo I made in my scripts, but now `--target_size` works <:Hypers:808826266060193874>
Btw, it would be nice to have it for lossy modular mode as well
|
|
2021-03-01 07:26:31
|
Doesn't that work? -m --target_size?
|
|
|
Scope
|
2021-03-01 07:30:58
|
Last time I tried it on previous builds, it gave an error, but now it seems to be working <:Hypers:808826266060193874>
|
|
2021-03-01 07:34:30
|
Ah, I see what my mistake was with VarDCT, it didn't work with `--resampling`
|
|
|
lithium
|
|
Jyrki Alakuijala
--progressive_dc 1 works at distance greater than d2.5 (--progressive_dc 2 is garbage right now, don't use it)
|
|
2021-03-01 07:51:12
|
If I'm using -d 0.5~1.0 and q85~80,
which value should --progressive_dc be set to?
|
|
|
Jyrki Alakuijala
|
2021-03-01 07:51:29
|
0
|
|
2021-03-01 07:51:54
|
(you can try with 1 if you are curious, but it should be worse at d0.5 to d1)
|
|
|
lithium
|
2021-03-01 07:52:21
|
ok, thank you 🙂
|
|
|
Jyrki Alakuijala
|
|
lithium
ok, thank you 🙂
|
|
2021-03-01 07:52:48
|
Nooo, thank *you* for trying it out!!
|
|
|
lithium
|
2021-03-01 08:07:06
|
Using -d 0.5 -s 7 --progressive_dc=0 there is still some noise;
how do I use the --adaptive_reconstruction flag in cjxl 0.3.2?
cjxl -d 0.5 -s 7 --adaptive_reconstruction 1 // like this?
|
|
|
yllan
|
|
lithium
Non-photographic image medium-quality (bpp) test:
in cjxl q80 s7, the smooth fill area of the sky-blue Chinese text (second character) has some noise (ringing?);
this situation also happens on other images at -d 0.5~1.0 (high quality) in smooth areas (deep purple, blue, red colors);
if I use s3 (reduced adaptive quantization), smooth areas look better than with s7.
|
|
2021-03-02 03:29:40
|
Just noticed the sample images... do you work for TONGLI ebook? 😮
|
|
2021-03-02 03:34:17
|
I just compiled jxl on my new Apple Silicon. This is not a benchmark of jxl itself, but the compilation speed of jxl, `time cmake --build . -j`:
2019 MBP 16", 2.6 GHz i7 (6c12t, cost around $2700): 53.8s
2020 MBA 13", Apple Silicon M1 ($1450): 20s
|
|
|
|
Deleted User
|
2021-03-02 06:36:14
|
https://discord.com/channels/794206087879852103/794206087879852106/816133029402640426
|
|
2021-03-02 06:37:00
|
> 640k pixels ought to be enough for everyone
Not really <:kekw:808717074305122316>, but 640k? With JPEG XL... why not? But there are still some artifacts.
|
|
2021-03-02 06:38:13
|
I've just tried my first "640k challenge" from the hardest image: stitched panorama from Hugin. Here's original JPEG's lossless transcode.
|
|
2021-03-02 06:40:54
|
And here is 640k lossy version. I deliberately encoded it with `--noise=0`, because I saw some artifacts in the sky that noise synth didn't manage to cover up in my first noise-enabled encode.
|
|
2021-03-02 06:44:11
|
Look at the sky. Why are there lots of small VarDCT blocks and only some big 64x64 ones? The encoder should be able to detect low activity in that area, encode the sky with big transforms, and regenerate the remaining noise with noise synthesis.
|
|
2021-03-02 01:28:35
|
<@532010383041363969> you might be interested
|
|
|
Jyrki Alakuijala
|
2021-03-02 02:08:12
|
looking
|
|
|
spider-mario
|
2021-03-02 03:39:46
|
out of curiosity, was it encoded from Hugin’s JPEG, or did you reexport from Hugin to a lossless format?
|
|
2021-03-02 03:39:53
|
and in what format were the images making up the panorama?
|
|
|
|
Deleted User
|
|
spider-mario
and in what format were the images making up the panorama?
|
|
2021-03-02 03:54:39
|
I stitched the panorama from straight-out-of-camera JPGs (Samsung Galaxy Note 9)
|
|
|
spider-mario
out of curiosity, was it encoded from Hugin’s JPEG, or did you reexport from Hugin to a lossless format?
|
|
2021-03-02 03:55:54
|
And the lossy .jxl file was sourced from the exact same .jpg that I losslessly transcoded to .jpg.jxl
|
|
|
lithium
|
|
yllan
Just noticed the sample images... do you work for TONGLI ebook? 😮
|
|
2021-03-02 03:59:55
|
I don't work for TONG LI;
I only use JPEG XL for public-interest purposes and scientific research.
|
|
|
Scope
|
2021-03-02 05:33:37
|
<@!532010383041363969> <@!794205442175402004> Also AVIF (aom) with 10 bit encoding and `enable-chroma-deltaq=1` can go even further at low bpp (no noticeable banding, better detail preservation)
<:monkaMega:809252622900789269> (22 450 bytes) <https://slow.pics/c/8edqBMMT>
JXL with `--progressive_dc` (but, doesn't help much with this image)
|
|
|
_wb_
|
2021-03-02 05:44:35
|
What does chroma deltaq mean?
|
|
|
Scope
|
2021-03-02 05:46:55
|
```--enable-chroma-deltaq=<arg>
Enable chroma delta quant (0: false (default), 1: true)```
https://aomedia.googlesource.com/aom/+/c5806c20d1d24741db4d9bb47ddcbe9c7075056a%5E%21/
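For reference, a sketch of how that flag might be passed when producing an AVIF. This assumes an avifenc build that forwards advanced aom options via -a; otherwise the flag goes to aomenc directly. Quantizer values and file names are placeholders:
```
# 10-bit AVIF with chroma delta quant enabled (flag name per the aom change linked above)
avifenc --depth 10 --min 30 --max 40 -a enable-chroma-deltaq=1 input.png out.avif
```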
|
|
|
_wb_
|
2021-03-02 05:55:05
|
Amazing result for avif
|
|
|
Jyrki Alakuijala
|
2021-03-02 05:59:30
|
more work to do 😄
|
|
2021-03-02 05:59:59
|
we have ~7 % improvement in the pipeline that comes just when we push the new version out for this use case
|
|
2021-03-02 06:00:05
|
but it is not enough
|
|
2021-03-02 06:00:23
|
we need another 40 % or so to match I think
|
|
2021-03-02 06:01:09
|
(I anticipate that we can deliver 25 % within vardct, i.e., not quite parity for this material)
|
|
2021-03-02 06:02:46
|
twice a year we have a bureaucratic month for us managers and unfortunately it is right now, so I'm a bit stuck for making progress with quality now
|
|
2021-03-02 06:06:23
|
Scope, what happens if you choose an AVIF level where the quality is pristine, i.e., not this low bpp -- how do we then fare with JPEG XL and progressive dc?
|
|
|
Look at the sky. Why are there lots of small VarDCT blocks and only some big 64x64 ones? The encoder should be able to detect low activity in that area, encode the sky with big transforms, and regenerate the remaining noise with noise synthesis.
|
|
2021-03-02 06:07:27
|
it is because I do it in a very stupid way for low bpp -- I have a plan to correct it
|
|
|
Scope
|
2021-03-02 06:18:51
|
<@!532010383041363969> ~69 540 `(Butteraugli distance 2.218 yields 69553 bytes, 0.350 bpp)`
<https://slow.pics/c/NHO2gepT>
|
|
2021-03-02 06:24:31
|
For AVIF (or rather aom with the current tuning and settings), there is a problem that it still removes some details, even with a high bitrate increase (and it turns out that after certain values the size increases, but the quality improves imperceptibly and still does not exactly match the original)
|
|
2021-03-02 06:30:04
|
And JXL still has noticeable artifacts near sharp edges
|
|
|
Scope
<@!532010383041363969> <@!794205442175402004> Also AVIF (aom) with 10 bit encoding and `enable-chroma-deltaq=1` can go even further at low bpp (no noticeable banding, better detail preservation)
<:monkaMega:809252622900789269> (22 450 bytes) <https://slow.pics/c/8edqBMMT>
JXL with `--progressive_dc` (but, doesn't help much with this image)
|
|
2021-03-02 08:47:53
|
WebP (also not as bad as I thought)
|
|
2021-03-02 08:48:05
|
Jpeg (MozJpeg)
|
|
|
Dr. Taco
|
2021-03-02 09:05:33
|
I'd be more interested in saving the image in each format so it looks good (artifacts not noticeable), and then comparing file sizes
|
|
2021-03-02 09:07:39
|
you guys keep using these over-compressed anime pictures, and avif looks "best" when over-compressed, but I am still noticing avif compression artifacts; they just happen to be something that works well stylistically with those images' art style.
|
|
2021-03-02 09:10:38
|
realistically, I rarely run into over compressed images on the internet anymore. People would rather see higher quality.
|
|
|
|
paperboyo
|
2021-03-02 09:19:34
|
So here is the opposite view: I find the above methodology more useful. I would be exaggerating just a bit if I said that serving "high quality" on the internet is a crime against humanity ;-). My choice (for serving on the web only!) would always be the codec that provides the best quality at the smallest filesize.
I agree, obviously, that the style of anime is not representative of most imagery in the wild and may indeed lend itself particularly well to certain compression techniques. That said, I secretly hope any improvements in this area will translate well to photographic imagery as well.
|
|
|
Scope
|
2021-03-02 09:22:35
|
Yes, and it's been discussed many times before, comparing on low quality or a certain type of image does not make one format/codec better in everything else.
But I'm doing this test specifically for low bpp and similar type of art images, as this is currently one of the weakest points of JXL where the differences with AVIF are very large and people also have a need for heavy compression of such content (for example for previews, where details are not that necessary, but keeping a good overall image look is important).
All this is to see these weaknesses and possibly improve JXL, to reduce the gap.
|
|
|
|
Deleted User
|
2021-03-02 09:28:13
|
> People would rather see higher quality.
Facebook: *let's pretend we didn't see that*
|
|
|
BlueSwordM
|
2021-03-02 09:28:49
|
So, I've got a big brain idea.
|
|
2021-03-02 09:29:10
|
I become a Google Intern as part of the Pixel team, and convince the Pixel 6 team to support native JPEG-XL encoding and decoding as a current Pixel exclusive that will roll out to the mainline Android build as time goes on.
|
|
|
Dr. Taco
|
2021-03-02 09:32:24
|
"Our phone only takes pictures in a format no one supports". That is the most Apple statement ever
|
|
|
Scope
|
2021-03-02 09:34:12
|
Also comparing when every image looks good is possible, but usually less accurate, because then we need to trust the metrics or visual perception of a particular person(s)
|
|
|
Dr. Taco
|
2021-03-02 09:58:16
|
I'm pretty trustworthy, I'll do it, just gimme that quality slider
|
|
|
Scope
|
|
Scope
<@!532010383041363969> <@!794205442175402004> Also AVIF (aom) with 10 bit encoding and `enable-chroma-deltaq=1` can go even further at low bpp (no noticeable banding, better detail preservation)
<:monkaMega:809252622900789269> (22 450 bytes) <https://slow.pics/c/8edqBMMT>
JXL with `--progressive_dc` (but, doesn't help much with this image)
|
|
2021-03-02 10:02:47
|
WP2
|
|
|
Jyrki Alakuijala
|
|
BlueSwordM
I become a Google Intern as part of the Pixel team, and convince the Pixel 6 team to support native JPEG-XL encoding and decoding as a current Pixel exclusive that will roll out to the mainline Android build as time goes on.
|
|
2021-03-03 01:04:32
|
The pixel camera team and the JPEG XL team are in the same product area in Google's organization. It has always been easy to connect with them. Also, they have people who really deeply understand image quality.
|
|
|
Dr. Taco
I'd be more interested in saving the image in each format so it looks good (artifacts not noticeable), and then comparing file sizes
|
|
2021-03-03 01:12:26
|
I'm interested in these images because it shows a weakness in JPEG XL. I'm hoping to fix it in a way that makes every (or many) image compress better.
|
|
|
_wb_
|
2021-03-03 06:14:40
|
Ideally, a codec is Pareto-optimal at the entire fidelity range. Both <@139947707975467009> and <@810102077895344159> are right: high fidelity is what people want, but fast loading pages or in some cases even just a usable internet is also something people want, and that can imply low fidelity.
|
|
|
|
Deleted User
|
|
Dr. Taco
"Our phone only takes pictures in a format no one supports". That is the most Apple statement ever
|
|
2021-03-03 07:30:02
|
> "Our phone only takes pictures in a format no one supports".
I hope that it'll become irrelevant the very moment Google Photos enable JXL both in their apps and web service.
|
|
|
lithium
|
|
Dr. Taco
you guys keep using these over compressed anime pictures, and avif looks "best" when over compressed, but I am still noticing avif compression artifacts, they just happen to be something that works well stylistically with those image's art style.
|
|
2021-03-03 09:19:26
|
I always use -d 0.5 (q95) and 1.0 (q90) (high bpp) and -q 85, 80 (medium bpp),
but at those quality targets there are still some issues.
Maybe increasing the priority of fixing high-bpp and medium-bpp quality
would be a good idea?
|
|
|
Crixis
|
2021-03-03 11:09:58
|
If jxl is insufficient in some areas, the probability of winning the format war is lower
|
|
|
Jyrki Alakuijala
|
|
Crixis
If jxl is insufficient in some areas, the probability of winning the format war is lower
|
|
2021-03-03 11:26:17
|
yes, it would be great to have 'balanced' features rather than superb-but-narrow-application-field features
|
|
2021-03-03 11:26:50
|
and a boring user experience: 'if you do this, it always just works'
|
|
2021-03-03 11:27:35
|
making JXL better for drawings will widen the application field
|
|
|
Scope
|
2021-03-03 05:47:35
|
Some of the improvements for low bpp also apply to higher bpp, and it is interesting to understand which methods are most effective in AVIF at keeping lines sharp without artifacts. Also, anime art is only one example for the test; similar content can appear in UI elements or former SVG images, such as shown here: https://jakearchibald.com/2020/avif-has-landed/
|
|
2021-03-03 05:47:48
|
|
|
2021-03-03 05:48:12
|
And AVIF is strong in this kind of thing
|
|
|
|
Deleted User
|
2021-03-03 05:48:27
|
Truly a man of culture, I've also read this article 😉
|
|
|
Dr. Taco
|
2021-03-03 05:53:32
|
yeah, it does well where smoothing out gradients works in favor of the image. Can we make JXL handle smooth-gradient images better?
|
|
|
Crixis
|
2021-03-03 05:53:59
|
How the f*** does avif always have 0 boundary artifacts and 0 ringing?
|
|
|
|
Deleted User
|
2021-03-03 05:54:42
|
But what exactly makes smooth gradients better? <@!111445179587624960> already ruled out CDEF.
|
|
2021-03-03 05:54:43
|
https://discord.com/channels/794206087879852103/803645746661425173/815245565780557854
|
|
2021-03-03 05:55:35
|
I was surprised by that result, I *extremely* overestimated CDEF's impact. Seems like that's not it... but what?
|
|
|
Dr. Taco
|
2021-03-03 05:56:45
|
maybe it's just a side effect of blurring stuff for motion estimation
|
|
2021-03-03 05:56:53
|
unrelated to CDEF
|
|
|
Scope
|
2021-03-03 05:57:18
|
I don't know, I had only guesses, I don't have enough time or knowledge yet to parse the impact of each method in AVIF encoders
|
|
|
|
Deleted User
|
2021-03-03 05:57:45
|
Maybe we have to do the same thing as with CDEF: turn every single coding enhancement off and then enable them back on one by one.
|
|
|
Scope
|
2021-03-03 05:58:38
|
My guess is that CDEF affects video more than static images
|
|
|
|
Deleted User
|
2021-03-03 05:58:39
|
That's gonna be really interesting if, even with all of that funky stuff turned off, AVIF is still better at low bpp...
|
|
2021-03-03 05:59:03
|
If that happens, then it's probably something transform-related
|
|
2021-03-03 06:00:20
|
Maybe we'll invite someone from libaom or rav1e encoder team?
|
|
|
Scope
|
2021-03-03 06:01:46
|
It's interesting that AVIF keeps lines sharp even at very low bpp and fastest encoding speed (so these are some of the tools that do not turn off at fast encoding too)
|
|
|
|
Deleted User
|
|
Scope
It's interesting that AVIF keeps lines sharp even at very low bpp and fastest encoding speed (so these are some of the tools that do not turn off at fast encoding too)
|
|
2021-03-03 06:02:25
|
Ok, that's an important thing to know
|
|
|
Scope
|
2021-03-03 06:05:48
|
Speed 8 (this image encodes in 0.1 second on my CPU)
|
|
2021-03-03 06:07:29
|
Same size JXL (with much slower encoding speed)
|
|
|
|
Deleted User
|
2021-03-03 06:07:58
|
Speed 8 AVIF is the fastest?
|
|
|
Scope
|
2021-03-03 06:08:25
|
10 in Avifenc (but Aomenc has RT speed and normal)
|
|
|
|
Deleted User
|
2021-03-03 06:08:48
|
Ok, now try similarly fast speed in cjxl
|
|
|
Scope
|
2021-03-03 06:10:17
|
Speed 3
|
|
2021-03-03 06:10:52
|
Speed 4
|
|
2021-03-03 06:11:47
|
Btw, more ringing artifacts
|
|
|
|
Deleted User
|
2021-03-03 06:12:01
|
Yep, definitely more ringing
|
|
2021-03-03 06:12:37
|
But edges look more... natural? Close to original?
|
|
|
Scope
|
2021-03-03 06:13:02
|
Source
|
|
|
|
Deleted User
|
2021-03-03 06:13:15
|
Speed 3 looks quite as blocky as Modular
|
|
|
Scope
|
2021-03-03 06:16:07
|
As far as I know JXL VarDCT is mostly tuned at default speed (so there might be some strange things at other speeds)
|
|
2021-03-03 06:27:01
|
Although I also prefer visually lossless and (near-)lossless images and would like even the web to go in the direction of increasing quality (with a reasonable reduction of bpp) over legacy formats rather than reducing bpp as much as possible, people have very different needs and uses
|
|
|
lithium
|
2021-03-03 06:28:06
|
aomenc --tune-content=<arg> Tune content type (default, screen, film)
probably av1 uses some heuristic to choose the right tune?
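For what it's worth, a hypothetical single-frame aomenc run with that flag (input/output names and rate settings are placeholders, not a recommended configuration):
```
# still-image style encode of a one-frame y4m, forcing the screen-content tuning
aomenc --cpu-used=6 --end-usage=q --cq-level=30 --tune-content=screen \
  -o drawing.ivf drawing.y4m
```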
|
|
|
Scope
|
2021-03-03 06:30:03
|
It's rather like `--tune` in x264/x265/...
|
|
|
lithium
|
2021-03-03 06:31:59
|
cwebp -preset string
Specify a set of pre-defined parameters to suit a particular type of source material. Possible values are: default, photo, picture, drawing, icon, text.
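e.g., a minimal example of those presets (file names are placeholders):
```
# the 'drawing' preset tweaks internal parameters for line art before -q is applied
cwebp -preset drawing -q 85 lineart.png -o lineart.webp
```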
|
|
|
Scope
|
2021-03-03 06:33:35
|
As far as I remember in cwebp these are different AQ matrix presets and such
|
|
|
lithium
|
2021-03-03 06:37:24
|
I have just a little suspicion: probably JPEG XL uses some method that is good for compressing photographic images,
but that method is not suitable for non-photographic images?
|
|
|
Scope
|
2021-03-03 06:42:01
|
Perhaps because there has not yet been any significant optimization and tuning for non-photographic images
|
|
|
lithium
|
2021-03-03 06:48:37
|
Hope Jyrki Alakuijala can bring some good news (a big improvement for non-photographic images) 🙂
|
|
|
Scope
|
2021-03-03 06:50:13
|
I think the only limitation is the frozen bitstream, but it's also very flexible
|
|
|
|
Deleted User
|
2021-03-03 06:51:40
|
The original 1992 JPEG was already so flexible that it took us over 25 years to master its bitstream
|
|
2021-03-03 06:53:41
|
Getting everything out of JPEG XL's bitstream will probably take us even longer because there are even more things to exploit...
|
|
2021-03-03 06:54:03
|
This thing has RIDICULOUS potential
|
|
|
_wb_
|
2021-03-03 07:05:44
|
We may need AI-based encoders to really make full use of the bitstream possibilities...
|
|
|
Master Of Zen
|
|
_wb_
We may need AI-based encoders to really make full use of the bitstream possibilities...
|
|
2021-03-03 07:06:28
|
<:pogchimp:796576991012585502>
|
|
|
|
Deleted User
|
|
_wb_
We may need AI-based encoders to really make full use of the bitstream possibilities...
|
|
2021-03-03 07:29:11
|
It's not gonna be real AI if it couldn't do this:
https://discord.com/channels/794206087879852103/794206170445119489/813494463263014962
|
|
2021-03-03 07:34:29
|
Or this:
https://discord.com/channels/794206087879852103/803645746661425173/813985579208540201
|
|
|
Pieter
|
|
_wb_
We may need AI-based encoders to really make full use of the bitstream possibilities...
|
|
2021-03-03 07:34:41
|
Or a superencoder. Try literally every bitstream and see which ones decode to something close to the input.
|
|
|
|
Deleted User
|
2021-03-03 07:37:25
|
Seems like Marcus Hutter (http://prize.hutter1.net) was right about lossless compression being similar to AI. Only human or human-level intelligent AI could pull off stuff that I did in two links mentioned above.
|
|
|
_wb_
|
|
Pieter
Or a superencoder. Try literally every bitstream and see which ones decode to something close to the input.
|
|
2021-03-03 07:38:51
|
That would be ridiculously expensive. Probably an interesting way though to find weaknesses in whatever perceptual metric you use to determine what "close to the input" means.
|
|
|
|
Deleted User
|
2021-03-03 07:39:10
|
Ok, maybe the second link (with the bush) could be done manually by some kind of auto-alignment, but that tree from the first link? No way current "simple" non-AI encoders could do that. That was crazy, assembling the patch from multiple sources in the same image in an intelligent way.
|
|
|
Crixis
|
|
Pieter
Or a superencoder. Try literally every bitstream and see which ones decode to something close to the input.
|
|
2021-03-03 08:00:46
|
bongo encoder
|
|
|
|
veluca
|
2021-03-03 09:27:54
|
I'm not entirely sure I'd call it impossible without AI 😄
|
|
|
_wb_
|
2021-03-03 09:29:03
|
There is certainly still a lot of room for just better classical algorithms and heuristics
|
|
2021-03-03 09:30:16
|
But I do think the search space of possible bitstreams is very large and AI might beat handcrafted heuristics at some point
|
|
|
|
Deleted User
|
2021-03-03 09:31:33
|
It's already been done, e.g. RNNoise, it's been ML-trained, but the algorithm itself is still good old "normal" DSP
|
|
2021-03-03 09:32:02
|
ML only helped training noise detection and removal parameters
|
|
2021-03-03 09:33:26
|
Same here, AI/ML can help with determining VarDCT block sizes, better adaptive quantization, perfect lossy palette, smart patch finding, etc.
|
|
2021-03-03 09:33:44
|
It could help with both lossy and lossless heuristics
|
|
|
BlueSwordM
|
2021-03-03 09:59:35
|
I think before integrating more ML stuff, we should look at multi-layer enhancements.
|
|
|
fab
|
2021-03-04 02:36:28
|
it destroys small text
|
|
2021-03-04 02:36:49
|
maybe even the resolution
|
|
|
_wb_
|
2021-03-04 02:37:02
|
Stop setting arbitrary intensity targets, lol
|
|
2021-03-04 02:37:59
|
And yes, if you downscale everything 2x, you obviously lose resolution. It only makes sense as a last resort if you want to hit really low file size targets
|
|
|
fab
|
2021-03-04 02:38:35
|
if you open the site and zoom you see heavy loss in texture
|
|
|
_wb_
|
2021-03-04 02:38:47
|
It is kind of silly to use such a low distance and at the same time do --resampling=2
|
|
2021-03-04 02:39:14
|
Better to not resample and use a higher distance then
|
|
2021-03-04 02:39:44
|
Patches do not destroy anything, if they do it's a bug
|
|
|
fab
|
2021-03-04 02:40:01
|
ok thanks
|
|
2021-03-04 02:40:14
|
i do not see higher lux in the images
|
|
2021-03-04 02:40:23
|
looks even better than what this person encodes
|
|
2021-03-04 02:40:36
|
at least for that type of images of female journalist
|
|
2021-03-04 02:40:52
|
this girl at 68 kB gets airbrushed by normal jpg
|
|
2021-03-04 02:41:19
|
the encoder is not definitive
|
|
2021-03-04 02:41:42
|
the new encoder obviously gives higher quality
|
|
|
_wb_
|
2021-03-04 02:41:53
|
you cannot see more brightness than what your screen can produce
|
|
|
fab
|
2021-03-04 02:43:03
|
jpeg xl masks the artifacts better while preserving texture
|
|
|
_wb_
|
2021-03-04 02:43:04
|
but `--intensity_target=458` means "the max brightness is not the usual SDR max brightness, but it is 458 nits"
|
|
|
fab
|
2021-03-04 02:43:33
|
if you have like a 427 kB jpeg, it is already good with some photos
|
|
|
_wb_
|
2021-03-04 02:44:25
|
`-d 0.528 --resampling=2` means "I want a very very high quality encode of this image that is first butchered by doing 1:2 downsampling"
|
|
|
fab
|
2021-03-04 02:44:35
|
I'm not overcompressing, I'm using the same bpp as the images have now
|
|
2021-03-04 02:44:54
|
people will not switch if they can't make 30 kb images
|
|
|
_wb_
|
2021-03-04 02:45:16
|
it makes more sense to do `-d 2` without the resampling, or whatever distance you want
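In other words, a sketch of the two approaches being compared (file names are placeholders):
```
# what was done: downscale 2x first, then encode the small image at very high fidelity
cjxl photo.png out.jxl -d 0.528 --resampling=2
# usually better: keep full resolution and accept a larger distance
cjxl photo.png out.jxl -d 2
```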
|
|
|
fab
|
2021-03-04 02:45:29
|
ok thanks
|
|
2021-03-04 02:53:43
|
i got
|
|
2021-03-04 02:54:24
|
it's simply doing it with less effort and blurring it; it is worse
|
|
|
Scope
|
2021-03-05 11:55:24
|
Hmm, is `--progressive_dc` now automatically enabled on low bpp?
|
|
2021-03-06 12:00:20
|
Also, one of the old JXL builds
|
|
2021-03-06 12:01:03
|
0.3.3
|
|
2021-03-06 12:07:31
|
Old
|
|
2021-03-06 12:07:41
|
New
|
|
|
BlueSwordM
|
2021-03-06 12:12:33
|
Woah.
|
|
2021-03-06 12:12:36
|
That is a lot better.
|
|
|
Scope
|
2021-03-06 12:16:53
|
Old
|
|
2021-03-06 12:17:00
|
New
|
|
|
BlueSwordM
|
2021-03-06 12:17:33
|
"7% better"
*Image looks like a completely different one*
|
|
|
Scope
|
2021-03-06 12:20:24
|
7% is compared to recent builds, and I'm comparing to older ones (which are about a year old) to see the improvement over time
|
|
|
BlueSwordM
|
|
_wb_
|
2021-03-06 06:54:40
|
While designing the bitstream, we focused on d1-d4, making sure at all times that we have an encoder that can get good results in that medium to high fidelity range. That way we ensured that the bitstream is capable of getting good results in that range.
|
|
2021-03-06 06:57:25
|
The lower fidelity range was not a focus during the bitstream design, it's mostly only after the bitstream was frozen that we (mostly Jyrki) have been looking at encoder improvements to see what the bitstream can do in terms of high-appeal, low-bitrate compression.
|
|
2021-03-06 07:03:26
|
I think with AVIF (or video-derived codecs in general) the opposite happened: during bitstream design, the focus was on being able to get good appeal at low bpp. Encoder improvements to do better at high-fidelity are happening there.
|
|
2021-03-06 07:04:44
|
So there will be some convergence, I think, where AVIF gets better at high-fidelity and <:JXL:805850130203934781> gets better at low-fidelity.
|
|
2021-03-06 07:08:23
|
And maybe some of the work can be shared between the different codecs: e.g. perceptual metrics and segmentation heuristics can be used in different codecs (jxl, avif, wp2)
|
|
|
|
veluca
|
|
Scope
Hmm, is `--progressive_dc` now automatically enabled on low bpp?
|
|
2021-03-06 10:14:10
|
yup
|
|
|
Scope
7% is compared to recent builds, and I'm comparing to older ones (which are about a year old) to see the improvement over time
|
|
2021-03-06 10:14:31
|
heh, a lot happened in one year...
|
|
|
Jyrki Alakuijala
|
|
_wb_
but `--intensity_target=458` means "the max brightness is not the usual SDR max brightness, but it is 458 nits"
|
|
2021-03-06 10:47:32
|
increasing the maximum brightness can reduce the relative amount of bits used in saturation vs. used in intensity -- when light is bright enough the color saturation is perceived less
|
|
|
BlueSwordM
Woah.
|
|
2021-03-06 10:49:54
|
wonderful to see that I wasn't just wasting my time -- I reviewed each increment carefully, but lost touch with the whole
|
|
|
_wb_
|
2021-03-06 10:51:57
|
yes, I guess it alters the relative importance of Y and X,B — I suppose we could expose options to tweak that balance, maybe we should. But modifying `--intensity_target` may cause the image to actually be displayed brighter, which will cause weird surprises when more people get HDR screens
|
|
|
Jyrki Alakuijala
|
|
Scope
7% is compared to recent builds, and I'm comparing to older ones (which are about a year old) to see the improvement over time
|
|
2021-03-06 10:52:03
|
(I just made up the 7 % guesswork -- I believe the improvement is less on high quality, more on low quality -- even there can be some degradation at the highest quality, but shouldn't be significant)
|
|
2021-03-06 10:57:05
|
Looks like the low bit rate improvement 0.3.2 to 0.3.3 is surprisingly good -- but I believe we still don't necessarily match quality with AVIF and WebP2 in these low bitrates, I think
|
|
2021-03-06 10:57:50
|
I know how to make a roughly similar amount of improvement, which will affect the highest quality and mid quality likely more than the lowest quality, but bring some improvement into the lowest quality, too
|
|
2021-03-06 11:05:29
|
I worked unsustainably hard to get this done -- often late hours until 2-4 AM to have some peace from the family -- and now I'm quite exhausted
|
|
|
_wb_
|
2021-03-06 11:06:56
|
Time to take a break 💆
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:07:09
|
Even if we are still playing catch-up with AVIF and WebP2 at low bitrates, we are pushing down the cross-over point in density where it is better to choose JPEG XL over AVIF
|
|
2021-03-06 11:07:31
|
I need to push through our performance evaluation work -- then I will take a couple of days off
|
|
2021-03-06 11:07:52
|
(internal people performance evaluation in the company, not performance evaluation of the codec)
|
|
|
_wb_
|
2021-03-06 11:08:18
|
Performance is a heavily overloaded word
|
|
2021-03-06 11:10:15
|
It can mean speed, density, fidelity, and human labor productivity
|
|
2021-03-06 11:11:54
|
Some of the things it means are incompatible with one another: more speed usually means less density, more density usually means less speed, yet both things are 'performance'
|
|
2021-03-06 11:13:11
|
'complexity' is also one of those words: e.g. more specialized code paths can reduce the computational complexity while they increase the code complexity...
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:13:13
|
encoding speed, decoding speed, memory use at decoding, memory use at encoding, density, memory access patterns, scalability with multi-processing, various objective metrics, subjective performance, bd-rate performance, performance at low bit rates, performance at high bitrates, average performance, comfort of the results, fidelity of the results, and probably 100 other things
|
|
2021-03-06 11:14:10
|
we should establish a new term 'performance complexity' that would only mean one single thing (also should have nothing to do with any of the performance or complexity topics)
|
|
|
_wb_
|
2021-03-06 11:14:28
|
Haha
|
|
2021-03-06 11:16:00
|
'progressive' is also a confusing word: in the video world it just means 'non interlaced', while in the image world it means something else.
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:16:48
|
progression -- the process of developing or moving gradually towards a more advanced state.
|
|
2021-03-06 11:17:06
|
progression -- a number of things in a series.
|
|
|
_wb_
|
2021-03-06 11:18:08
|
I've heard 'scalable' as a term for progressive decodable too. Which is also confusing: scalable can also mean vector graphics (as in SVG), or parallelizable, or deployable at a large scale
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:18:22
|
heh
|
|
2021-03-06 11:18:30
|
so much confusion
|
|
|
bonnibel
|
2021-03-06 11:18:33
|
"progressive" image codec (takes less bits the less capitalist an image is)
|
|
|
_wb_
|
2021-03-06 11:19:23
|
Progressive is indeed also the opposite of conservative (or maybe it's the opposite of reactionary, with conservative in the middle, I dunno)
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:19:37
|
I have read people use 'progressive' in the context of image codecs to just mean 'advanced' or 'modern'
|
|
|
bonnibel
|
2021-03-06 11:19:50
|
(usually defined as an m:r ratio of at least 40:1)
(m:r is measured in bits an image of marx takes vs bits an image of raegan takes)
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:20:11
|
Some bloggers are able to write about topics with confidence and try to give direction to the field even if they know nothing of the subject
|
|
|
_wb_
|
2021-03-06 11:20:29
|
Ah yes, the codec that represents technological progress could be called progressive I guess
|
|
|
fab
|
2021-03-06 11:21:09
|
for %i in (C:\Users\User\Documents\d\*.jpg) do cjxl2 "%i" "%i.jxl" -s 4 -q 56 --epf=2 --num_reps=5 --num_threads=2
|
|
2021-03-06 11:21:19
|
does numreps 5 do anything?
|
|
2021-03-06 11:21:42
|
does it encode the image 5 times?
|
|
2021-03-06 11:22:01
|
|
|
2021-03-06 11:22:10
|
i see 5 reps
|
|
|
_wb_
|
2021-03-06 11:22:37
|
Images of Marx of course need more bits than images of movie stars: a beard just has more entropy than botoxed skin.
|
|
|
bonnibel
|
2021-03-06 11:23:30
|
there's no reason you can't make an encoding that is more efficient at storing large beards than storing smooth surfaces
|
|
|
_wb_
|
|
fab
does numreps 5 do anything?
|
|
2021-03-06 11:23:45
|
It is just for benchmarking: it does the exact same thing N times so it is easier to accurately measure the time
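So, presumably, for plain batch conversion the loop would simply drop that flag, e.g.:
```
for %i in (C:\Users\User\Documents\d\*.jpg) do cjxl2 "%i" "%i.jxl" -s 4 -q 56 --epf=2 --num_threads=2
```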
|
|
|
Jyrki Alakuijala
|
|
bonnibel
"progressive" image codec (takes less bits the less capitalist an image is)
|
|
2021-03-06 11:24:13
|
Q: How to capitalize the proletariat? A: Proletariat.
|
|
|
fab
|
2021-03-06 11:25:20
|
doesn't do multipass like webp2?
|
|
|
Jyrki Alakuijala
|
|
fab
doesn't do multipass like webp2?
|
|
2021-03-06 11:26:01
|
https://www.youtube.com/watch?v=NVPLqbWXdDA Multi-Pass!!
|
|
|
fab
|
2021-03-06 11:26:29
|
seriously
|
|
|
_wb_
|
2021-03-06 11:26:48
|
Basically speed 8 and up does a kind of multipass
|
|
2021-03-06 11:26:56
|
Not really
|
|
|
fab
|
2021-03-06 11:27:01
|
and speed 4
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:27:09
|
what is webp2 multipass?
|
|
|
fab
|
2021-03-06 11:27:27
|
re encoding
|
|
2021-03-06 11:27:29
|
basically
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:28:01
|
VarDCT level 8 does one round of error evaluation, level 9 does 3 rounds or so
|
|
|
_wb_
|
2021-03-06 11:28:02
|
Multipass is mostly a video concept I think, where you have the issue that an encoder cannot look far into the future frames so recording some info on them can help to make better decisions in a second pass
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:28:49
|
ac-strategy selection tries to fit different integral transforms to a locality and may try 15 different codings to a single area
|
|
|
fab
|
2021-03-06 11:28:53
|
no it has same file
|
|
|
_wb_
|
2021-03-06 11:29:22
|
For still images it might also make sense if you have an encoder that goes top to bottom or something, and has to reach some bpp target
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:29:24
|
we don't do full encoding with all the errors and feed it back to the encoder -- this could further improve the coding if we did it
|
|
|
fab
|
2021-03-06 11:29:27
|
so what can I set to 5?
|
|
2021-03-06 11:29:35
|
for vardct?
|
|
2021-03-06 11:29:43
|
for %i in (C:\Users\User\Documents\d*.jpg) do cjxl2 "%i" "%i.jxl" -s 4 -q 56 --epf=2 --num_reps=5 --num_threads=2
|
|
|
_wb_
|
2021-03-06 11:30:32
|
Num reps is something you should only use if you're trying to measure encode speed. It has no other purpose.
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:30:58
|
there might be a flag called butteraugli_iters
|
|
|
fab
|
2021-03-06 11:31:16
|
and is there another flag for vardct I can set to 5 in the encoder?
|
|
|
_wb_
|
2021-03-06 11:31:40
|
I don't think it's exposed in cjxl, maybe in benchmark_xl
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:31:53
|
Fabian, are you able to recompile from source?
|
|
|
_wb_
|
2021-03-06 11:31:57
|
Why do you want to set a flag to 5, specifically?
|
|
|
fab
|
2021-03-06 11:32:27
|
OK
|
|
2021-03-06 11:32:31
|
thanks
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:36:36
|
https://gitlab.com/wg1/jpeg-xl/-/blob/master/lib/jxl/enc_params.h#L124
|
|
|
fab
|
2021-03-06 11:36:40
|
the build of jamaika says jpegxl 0.2.0
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:36:56
|
I suspect that changing this number helps
|
|
2021-03-06 11:37:01
|
I'm checking from the use
|
|
|
fab
|
2021-03-06 11:37:10
|
why
|
|
2021-03-06 11:37:19
|
but he said it's a new build
|
|
2021-03-06 11:37:37
|
i changed the name to cjxl2
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:38:12
|
https://gitlab.com/wg1/jpeg-xl/-/blob/master/lib/jxl/enc_adaptive_quantization.cc#L896
|
|
2021-03-06 11:38:16
|
this is where it is used
|
|
2021-03-06 11:38:47
|
it is set to 4 by quality 9 and to 2 in quality 8, otherwise no butteraugli iterations
|
|
|
_wb_
|
2021-03-06 11:38:57
|
Do cjxl -V to see your version
|
|
|
fab
|
2021-03-06 11:40:15
|
It says the 0.2.0
|
|
2021-03-06 11:40:26
|
i commented on doom9
|
|
|
Jyrki Alakuijala
|
2021-03-06 11:44:35
|
best tweak to 0.2.0 is to not use it
|
|
2021-03-06 12:07:03
|
https://forum.doom9.org/showpost.php?p=1929782&postcount=348
|
|
2021-03-06 12:07:25
|
perhaps we should try to have convergence in the flags definition with other codecs; it would perhaps make multi-format environments easier to set up
|
|
2021-03-06 12:12:20
|
also less cognitive load for the users
|
|
|
Scope
Hmm, is `--progressive_dc` now automatically enabled on low bpp?
|
|
2021-03-06 12:23:50
|
yes, in 0.3.3 progressive_dc is enabled by default, I think above d3.5 (or 4.5)
|
|
|
fab
|
2021-03-06 12:54:38
|
ah i used wrong build for older system
|
|
|
|
Deleted User
|
|
Jyrki Alakuijala
I know how to make a roughly similar amount of improvement, which will affect the highest quality and mid quality likely more than the lowest quality, but bring some improvement into the lowest quality, too
|
|
2021-03-06 02:14:32
|
Can you estimate when the encoder will be mature enough to only get marginal improvements from that point onwards?
|
|
|
_wb_
|
2021-03-06 02:18:13
|
January 2038
|
|
|
Jim
|
2021-03-06 02:22:03
|
But what about {insert distant feature here}? That should set it back to at least June.
|
|
|
_wb_
|
2021-03-06 02:23:48
|
Yes, I think it will actually be January 2039
|
|
2021-03-06 02:23:59
|
That's when the codec will be 18 years old
|
|
2021-03-06 02:24:23
|
So by many definitions, it's considered mature then
|
|
2021-03-06 02:24:31
|
At least legally
|
|
|
bonnibel
|
2021-03-06 02:25:14
|
darn and i just finished printing out the entire libjxl source code to pour alcohol onto it
|
|
|
_wb_
|
2021-03-06 02:26:02
|
In the US it will still not be allowed to drink alcohol even in 2039, but it's an international standard so can just as well use Belgian drinking age
|
|
|
Jim
|
2021-03-06 02:26:44
|
... but it can still go to war.
|
|
|
_wb_
|
2021-03-06 02:32:38
|
It's funny how different countries have different orders in which you can do these things:
- drive a car
- drink alcohol
- vote
- be a candidate in elections
- fire a gun
- go to war
|
|
2021-03-06 02:36:46
|
In Belgium, this is the order:
- drink alcohol (16)
- drive a car (17-18)
- vote (18)
- be a candidate in elections (18-21)
- guns and wars we don't really do much at all lately
|
|
2021-03-06 02:43:29
|
In the US:
- fire a gun (no real minimum age)
- drive a car (16-17)
- go to war (17-18)
- vote (18)
- drink alcohol (21)
- be a candidate in elections (25-35)
|
|
|
|
Deleted User
|
2021-03-06 05:09:37
|
You forgot p0rn and s€x!
|
|
|
Jyrki Alakuijala
|
2021-03-06 06:46:47
|
I find it interesting that man's nipple is ok to show in TV but woman's not. In USA they can show murder, male nipple, torture, all kinds of crimes, but not female nipple.
|
|
|
Pieter
|
2021-03-06 06:50:43
|
Sense, it makes none.
|
|
|
Crixis
|
|
Jyrki Alakuijala
I find it interesting that man's nipple is ok to show in TV but woman's not. In USA they can show murder, male nipple, torture, all kinds of crimes, but not female nipple.
|
|
2021-03-06 06:59:24
|
just censor the female nipple with a male nipple
|
|
|
Scope
|
|
Jyrki Alakuijala
Looks like the low bit rate improvement 0.3.2 to 0.3.3 is surprisingly good -- but I believe we still don't necessarily match quality with AVIF and WebP2 in these low bitrates, I think
|
|
2021-03-06 07:01:11
|
Yes, especially for art and anime images the gap is still big
|
|
|
BlueSwordM
|
2021-03-06 07:07:51
|
So, I found an interesting case of why YCbCr might have some issues vs RGB/XYB...
rav1e avif YCbCr vs rav1e avif RGB
|
|
2021-03-06 07:07:53
|
https://slow.pics/c/h8zNf9rY
|
|
|
Scope
|
|
Jyrki Alakuijala
(I just made up the 7 % guesswork -- I believe the improvement is less on high quality, more on low quality -- even there can be some degradation at the highest quality, but shouldn't be significant)
|
|
2021-03-06 07:08:06
|
Perhaps sometime in the future, it will be necessary to increasingly split the encoding for low and high quality, so that they do not affect each other (and for example for low quality use more filtering and other things that are bad for high quality), but on the other hand it will make an encoder even more complex 🤔
|
|
|
Crixis
|
2021-03-06 07:08:40
|
At medium-low bpp it seems like magic to me that avif always has 0 ringing and 0 pixelation (no boundaries on the DCT squares), but at very low bpp I see jxl fall apart inside the squares too, not only at the boundaries
|
|
2021-03-06 07:10:12
|
I speculate avif has some big constraint in the quantization table to avoid ringing and pixelation
|
|
|
Scope
|
2021-03-06 07:12:07
|
Yes, but the opposite problem with AVIF (at least aom), even at very high quality and enough bitrate it still filters and removes a lot of small details
|
|
|
|
Deleted User
|
|
Jyrki Alakuijala
I find it interesting that man's nipple is ok to show in TV but woman's not. In USA they can show murder, male nipple, torture, all kinds of crimes, but not female nipple.
|
|
2021-03-06 07:15:09
|
r/FreeTheNipple
|
|
|
Crixis
|
|
Scope
Yes, but the opposite problem with AVIF (at least aom), even at very high quality and enough bitrate it still filters and removes a lot of small details
|
|
2021-03-06 07:18:16
|
Yes, I don't want to say that AVIF is better, I simply can't explain to myself avif's DCT performance at low bpp
|
|
|
BlueSwordM
|
2021-03-06 07:19:29
|
It might be the powerful wiener filtering that is too aggressive because of how the RDO is tuned prior to the filtering.
|
|
|
|
Deleted User
|
|
Can you estimate when the encoder will be mature enough to only get marginal improvements from that point onwards?
|
|
2021-03-06 07:20:12
|
It took us almost 30 years to master a simpler codec (JPEG) with mozjpeg and Guetzli; JPEG XL has even more powerful tools, so despite being able to reuse some older tweaks, I don't think we'll master this beast quickly.
|
|
|
BlueSwordM
|
2021-03-06 07:20:44
|
Currently, aom and SVT-AV1 feature wiener filtering that makes them particularly good for some types of images, which includes drawings.
Apparently, rav1e doesn't have it... yet.
|
|
2021-03-06 07:22:33
|
However, rav1e is already very good at retaining details; artifact prevention is one of its weaknesses, more so in inter than intra frames, but it is still a weakness at low bpp.
|
|
|
Crixis
|
|
BlueSwordM
Currently, aom and SVT-AV1 feature wiener filtering that makes them particularly good for some types of images, which includes drawings.
Apparently, rav1e doesn't have it... yet.
|
|
2021-03-06 07:22:37
|
tell me more about this wiener filtering please
|
|
|
BlueSwordM
|
|
Crixis
tell me more about this wiener filtering please
|
|
2021-03-06 07:25:34
|
Wiener filtering is basically a general-purpose denoising filter that removes DCT basis noise via a configurable amount of blurring.
|
|
|
Scope
|
2021-03-06 07:25:49
|
But even rav1e is also good at keeping lines sharp without artifacts at low bpp
|
|
|
BlueSwordM
|
2021-03-06 07:26:45
|
It does explain why aom does better than rav1e in low-bandwidth scenarios when loop filtering is active (some images look disgusting when loop filtering is disabled in aom, unlike rav1e which still does decently), but it doesn't explain why aom still does well on drawings with sharp lines and whatnot without loop filtering enabled.
|
|
|
Jim
|
2021-03-06 07:31:54
|
Compared to JXL, AVIF seems to put way more of its budget into the lines (sharp transitions and text), but in doing so gives up A LOT of detail within the smoother/shaded areas. JXL seems to level out and give a more equal budget to lines and fine details at low bpp. That ends up with greater fine detail preserved but causes more artifacting around sharp lines.
|
|
|
Crixis
|
|
Jim
Compared to JXL, AVIF seems to put way more of its budget into the lines (sharp transitions and text), but in doing so gives up A LOT of detail within the smoother/shaded areas. JXL seems to level out and give a more equal budget to lines and fine details at low bpp. That ends up with greater fine detail preserved but causes more artifacting around sharp lines.
|
|
2021-03-06 07:34:38
|
For avif it is better to forget the original than to make a boundary artifact; can it do that via a constraint in the quantization table? ("smart" use of AC)
|
|
|
Jim
|
2021-03-06 07:39:40
|
For its purpose, yes. Since it is designed as a video encoder, getting rid of fine detail and focusing on sharp lines is better when frames are flying by at 20+ per second; people are not really looking for all the detail to exist, just enough to understand what is going on in the video.
For still images, where people can take a longer look and see the loss of detail, that may not be the best way to handle it. For drawings it may be a more acceptable way to use the budget for some, but replace that with a photograph of people and the same loss of detail and blurring can make it appear as if photo editing has occurred, which may not be desirable or even morally acceptable.
|
|
|
Scope
|
2021-03-06 07:54:58
|
Rav1e
|
|
2021-03-06 07:55:06
|
JXL
|
|
|
Jyrki Alakuijala
|
|
Scope
Perhaps because there has not yet been any significant optimization and tuning for non-photographic images
|
|
2021-03-06 10:21:20
|
I keep images like this in the test set
|
|
2021-03-06 10:22:35
|
|
|
2021-03-06 10:23:24
|
|
|
2021-03-06 10:24:13
|
but I measure success only at the corpus level
|
|
2021-03-06 10:24:46
|
if the algorithm becomes weaker with synthetic images and stronger with photographic ones, then optimization can bring it there
|
|
2021-03-06 10:25:26
|
also, higher p-norms are ineffective for guiding the rate-distortion optimization at low bit rates
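For context, a toy illustration of what a p-norm does as an error-aggregation target (a stand-in, not libjxl's actual butteraugli p-norm code): the larger p is, the more the score is dominated by the worst local errors, which stops giving useful guidance once almost every block is badly distorted.

```python
import numpy as np

def pnorm(error_map, p):
    # Aggregate a per-pixel distortion map into a single score.
    # p = 1 is the mean error; large p approaches the maximum error.
    e = np.abs(error_map).ravel()
    return np.mean(e ** p) ** (1.0 / p)

errors = np.abs(np.random.randn(256, 256))  # stand-in distortion map
for p in (1, 2, 3, 6, 12):
    print(f"p={p}: {pnorm(errors, p):.3f}")
```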
|
|
|
Jim
Compared to JXL, AVIF seems to put way more of it's budget into the lines (sharp transitions and text) but in doing so gives up A LOT of detail within the more smooth/shaded areas. JXL seems to level out and give a more equal budget to lines and fine details at low bpp. That ends up with greater fine detail preserved but causes more artifacting to occur around sharp lines.
|
|
2021-03-06 10:30:14
|
my (possibly flawed) understanding is that AVIF has relatively flat quantization matrices, whereas JXL's are sloping
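As a toy illustration of the distinction (invented numbers, not the real AVIF or JXL tables): a "flat" matrix uses roughly the same quantization step for every coefficient, while a "sloping" one grows the step with spatial frequency.

```python
import numpy as np

base_step = 16

# Flat: every DCT coefficient gets the same step.
flat = np.full((8, 8), base_step)

# Sloping: the step grows with horizontal + vertical frequency,
# so high frequencies are quantized much more coarsely.
fy, fx = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
sloping = base_step * (1 + 0.5 * (fy + fx))
```

A sloping matrix preserves smooth gradients cheaply but concentrates visible error near sharp edges, which would fit the behaviour described above.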
|
|
|
Crixis
at medium-low bpp it seems like magic to me that AVIF always has 0 ringing and 0 pixelation (no boundary on the DCT squares), but at very low bpp I see JXL fall apart inside the squares too, not only at the boundaries
|
|
2021-03-06 10:34:28
|
My thinking is that there are a couple of things there: directional prediction (not directional filtering), non rectangular blocks, local palette blocks, stronger deblocking and ringing filtering (which occasionally seems to affect texture quality, too)
|
|
|
Scope
|
2021-03-06 10:36:33
|
I think that optimization for low bpp is very different from medium and high; or rather, it is about making the picture look appealing (maybe this is even an optimization option for the distant future, when more important things will be ready and low quality will still be needed).
Because when artifacts are clearly visible in an image, people usually don't care as much about how accurately the rest of the details are preserved; it automatically qualifies as bad.
|
|
2021-03-06 10:41:51
|
And this also applies to sharp lines: the artifacts there are very noticeable and usually annoying, in JXL even at higher bpp. It would be interesting to understand how AVIF deals with this (and, even better, without removing so much detail from the rest of the image at the same time).
|
|
2021-03-06 10:54:56
|
For example here:
Source
|
|
2021-03-06 10:55:08
|
AVIF (83 893)
|
|
2021-03-06 10:55:18
|
JXL (~distance 4-4.5)
|
|
2021-03-06 11:01:11
|
(some blurring of details is not as noticeable as artifacts at the lines)
|
|
2021-03-06 11:04:44
|
🤔
|
|
2021-03-06 11:04:49
|
AVIF
|
|
2021-03-06 11:05:12
|
JXL
|
|
2021-03-06 11:34:34
|
In general, I think this image is suitable for the art test set, or maybe additionally add areas of solid color divided by a dark bold line (this is quite common in art), like this:
|
|
2021-03-06 11:35:55
|
(where artifacts are clearly visible with heavy compression)
|
|
2021-03-06 11:41:42
|
|
|
|
Jyrki Alakuijala
|
|
Scope
JXL
|
|
2021-03-07 07:57:23
|
it looks like we could have learned more from AVIF for this kind of content, too
|
|
2021-03-07 08:01:01
|
my suspicion is that AVIF has a pretty good largish-block palette mode with some RLE-ing (or sufficiently powerful prediction and context modeling) that becomes beneficial on this kind of content
|
|
|
Scope
And this also applies to sharp lines, they are very noticeable artifacts that are usually annoying, in JXL even at higher bpp, it would be interesting to understand how AVIF deals with this (and even better at the same time without removing so much detail from the rest of the image)
|
|
2021-03-07 08:04:56
|
This is true to our philosophy -- if an image was considered spoiled, we never made an attempt to make it look great nonetheless; this wasn't our priority
|
|
2021-03-07 08:06:01
|
5) Color Palette as a Predictor: Sometimes, especially for artificial videos like screen capture and games, blocks can be approximated by a small number of unique colors. Therefore, AV1 introduces palette modes to the intra coder as a general extra coding tool. The palette predictor for each plane of a block is specified by (i) a color palette, with 2 to 8 colors, and (ii) color indices for all pixels in the block. The number of base colors determines the trade-off between fidelity and compactness. The color indices are entropy coded using the neighborhood-based context.
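A minimal sketch of the palette-block idea (illustration only; the real AV1 encoder chooses and predicts the palette from neighbours, and entropy-codes the index map with spatial context):

```python
import numpy as np

def palettize(block, max_colors=8):
    # Pick up to max_colors representative values and map every pixel
    # to the index of its nearest palette entry.  Reconstruction is a
    # table lookup, so there is no transform and hence no ringing.
    values = np.unique(block)
    picks = np.linspace(0, len(values) - 1, min(max_colors, len(values)))
    palette = values[picks.astype(int)]
    indices = np.abs(block[..., None] - palette).argmin(axis=-1)
    return palette, indices

block = np.random.randint(0, 256, (16, 16))
palette, indices = palettize(block)
reconstruction = palette[indices]
```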
|
|
2021-03-07 08:06:57
|
there is no ringing in palette mode
|
|
2021-03-07 08:07:12
|
snip from https://www.jmvalin.ca/papers/AV1_tools.pdf
|
|
2021-03-07 08:08:04
|
JXL's palette mode is likely more expressive than AVIF's, but it comes with many buts
|
|
2021-03-07 08:09:22
|
if one wants to combine it with DCT mode -- one needs to have layers
|
|
2021-03-07 08:11:49
|
layers will reduce/remove our ability to do progression
|
|
2021-03-07 08:12:10
|
the same for delta palette mode
|
|
2021-03-07 08:18:04
|
yet another possibility would be to do a better job in chroma-from-luma and to rotate the colors at a higher resolution; JXL does that using 64x64 tiles (in WebP lossless I went down to 4x4 in the format, don't know right now if it is configurable in JXL)
|
|
2021-03-07 08:24:17
|
one thing that might be interesting would be to run pngquant on one of the anime images and see how many colors we can keep to have decent compression in JXL (lossless)
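A quick way to try that experiment (a sketch using Pillow's median-cut quantizer as a stand-in for pngquant; file names are made up):

```python
from PIL import Image

# Reduce an anime-style image to progressively fewer colors, save each
# version as PNG, then compress the results losslessly with cjxl and
# compare sizes to see how few colors still give decent density.
src = Image.open("anime.png").convert("RGB")
for n_colors in (256, 64, 16, 8):
    reduced = src.quantize(colors=n_colors)   # median-cut palette
    reduced.save(f"anime_{n_colors}c.png")
    # then e.g.:  cjxl anime_16c.png anime_16c.jxl -q 100
```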
|
|
2021-03-07 08:25:41
|
JXL's palette mode can be made equivalent to local palettes (with a very large palette and context modeling that picks out the effective sub-palettes from it), but it requires a new encoder
|
|
2021-03-07 08:41:56
|
our context modeling is likely slightly weaker for high-density images than AVIF's -- we need to record the entropy codes explicitly, with some memory traffic cost, while AVIF calculates everything dynamically with arithmetic coding (with a constant small coding speed penalty)
|
|
2021-03-07 08:50:22
|
in VarDCT we have a forced semantic model of an 8x8 image as an intermediate representation (for progression)
|
|
2021-03-07 08:50:42
|
AVIF doesn't have this constraint
|
|
2021-03-07 08:51:18
|
when combined with a pixel-to-pixel coding mode (like the identity transform in JXL), vs. palette coding in AVIF, the intermediate representation likely costs a significant amount of entropy
|
|
2021-03-07 08:52:35
|
roughly related to log(n) of pixels, but when n is 64, it can still amount to significant values
|
|
2021-03-07 08:53:26
|
if someone is capable enough to make an ablation study by short-circuiting avif's palette mode -- that would be really interesting and guide my efforts on improvement
|
|
|
lithium
|
2021-03-07 09:41:42
|
I'm very much looking forward to using JPEG XL on my drawing-content images,
but I think I should wait for the JPEG XL 0.3.5 release (March 31st maybe?).
And how would one make an ablation study by short-circuiting AVIF's palette mode?
Could you give me more detail?
|
|
|
_wb_
|
|
Jyrki Alakuijala
if one wants to combine it with dct mode -- one needs to have layers
|
|
2021-03-07 10:48:01
|
Can also do it with patches. Then it does not really interfere with progression: patches are signalled together with DC (or just after DC in case of progressive DC), so the 'palette blocks' would be done before the AC, which is likely a feature, not a bug (e.g. text can become readable early)
|
|
|
|
veluca
|
|
Jyrki Alakuijala
yet another possibility would be to do a better job in chroma-from-luma, and to rotate the colors with a higher resolution image, JXL does that using 64x64 tiles (in WebP lossless I wend down to 4x4 in the format, don't know right nw if it is configurable in JXL)
|
|
2021-03-07 11:02:27
|
(it's not configurable)
|
|
|
Jyrki Alakuijala
|
|
_wb_
Can also do it with patches. Then it does not really interfere with progression: patches are signalled together with DC (or just after DC in case of progressive DC), so the 'palette blocks' would be done before the AC, which is likely a feature, not a bug (e.g. text can become readable early)
|
|
2021-03-07 12:30:11
|
that is correct, with patches we can very likely do a similar job to what a local palette transform does
|
|
|
veluca
(it's not configurable)
|
|
2021-03-07 12:31:11
|
too many configurable things will make a decoder implementation a nightmare 🙂
|
|
|
lithium
I'm very much look forward to use jpeg xl on my drawing content image,
but i think i should wait jpeg xl 0.3.5 release(march 31st maybe?).
And how to make ablation study by short-circuiting avif's palette mode?
could you give me more detail?
|
|
2021-03-07 12:33:21
|
I don't know how to do it. If I were to try, I'd probably add 'total_bits += 123456;' just before the return in the av1_palette_color_cost_y function in https://aomedia.googlesource.com/aom/+/refs/heads/av1-normative/av1/encoder/palette.c
|
|
2021-03-07 12:33:37
|
I never changed code there so I could be terribly wrong
|
|
2021-03-07 12:38:06
|
it seems to me that AVIF's palette coding later participates in filtering, and filtering can remove the problems that a very low color count (up to 8 colors) causes
|
|
2021-03-07 12:38:30
|
JPEG XL's patches do not participate in filtering, I believe, which may make them less applicable to very low quality
|
|
|
lithium
I'm very much look forward to use jpeg xl on my drawing content image,
but i think i should wait jpeg xl 0.3.5 release(march 31st maybe?).
And how to make ablation study by short-circuiting avif's palette mode?
could you give me more detail?
|
|
2021-03-07 12:42:47
|
I promise that I will fix all the high quality issues (d0.5 - d1.0).
|
|
|
fab
|
2021-03-07 12:56:06
|
i used that latest version
|
|
2021-03-07 12:58:35
|
new jpeg xl is good enough to not use resampling
|
|
2021-03-07 12:58:45
|
even at q20 it's amazing
|
|
2021-03-07 01:00:10
|
and to not have security bugs, so the best would be libjxl 1.0
|
|
|
_wb_
|
2021-03-07 01:03:57
|
If time is a problem, then do not do -s 9 but just leave it at default speed...
|
|
2021-03-07 01:04:54
|
I don't understand what you think I don't find acceptable?
|
|
|
fab
|
2021-03-07 01:12:10
|
if you do less than speed 9 you'll see too much ringing
|
|
|
_wb_
|
2021-03-07 01:17:46
|
We have the biggest part of that already. For typical photos it works great already. There is room for improvement for less typical image content, mostly to improve density by making better use of patches, splines, etc.
|
|
|
|
veluca
|
2021-03-07 01:27:56
|
please don't use the new heuristics xD
|
|
|
fab
|
2021-03-07 01:28:22
|
making the encoder automatic is about more than just file size
|
|
2021-03-07 01:28:47
|
yes, if you use more than q60 and less than q60 or less than s9
|
|
2021-03-07 01:28:50
|
i had a bug
|
|
2021-03-07 01:28:52
|
wait
|
|
|
_wb_
|
2021-03-07 01:29:16
|
I've been hiding that new heuristics option behind -v -v -h, but Fabian will try every option 😅
|
|
|
|
veluca
|
2021-03-07 01:29:41
|
I *thought* I wrote some form of "DON'T USE THIS" in the documentation of the flag 😛
|
|
|
fab
|
|
_wb_
|
2021-03-07 01:30:46
|
That looks like you took an image with alpha and stripped the alpha?
|
|
|
fab
|
2021-03-07 01:31:44
|
that's the file
|
|
|
_wb_
|
2021-03-07 01:31:45
|
Did you decode to jpg or ppm or something?
|
|
|
fab
|
2021-03-07 01:32:04
|
mirillis wic
|
|
2021-03-07 01:32:12
|
the JPEG XL decoder doesn't work for me
|
|
2021-03-07 01:32:20
|
maybe i have to rewrite the decoder
|
|
2021-03-07 01:32:41
|
the file i published
|
|
2021-03-07 01:32:56
|
i don't remember original codec
|
|
|
_wb_
|
2021-03-07 01:32:57
|
Well looks like you are viewing it with something that ignores alpha
|
|
2021-03-07 01:33:24
|
So that's a bug in that application, it should not ignore alpha
|
|
|
fab
|
2021-03-07 01:33:30
|
so mirillis bugged
|
|
2021-03-07 01:33:35
|
xnview bugged
|
|
2021-03-07 01:33:42
|
what do they use, do they use ImageMagick?
|
|
2021-03-07 01:33:49
|
qjpegdll
|
|
2021-03-07 01:33:58
|
because nomacs uses qjpegdll and it works
|
|
|
_wb_
|
2021-03-07 01:34:23
|
Could be they just didn't implement checking for an alpha channel
|
|
|
fab
|
|
Jyrki Alakuijala
|
2021-03-07 01:39:08
|
0.3.3 should be better on image quality, especially at low bpp (edit: ah, this is modular mode -- no matter)
|
|
|
veluca
please don't use the new heuristics xD
|
|
2021-03-07 01:40:29
|
new heuristics -- the architecture is better, more extensible, future proof, more beautiful, will be more efficient
|
|
2021-03-07 01:40:32
|
old heuristics -- terrible code, but actually works
|
|
|
fab
|
2021-03-07 01:41:15
|
for modular encoding I used q 94 in that case, but with jpeg-xl-mingw64-0.3-35ad23dd
|
|
2021-03-07 01:41:42
|
if I use higher than q94 I notice artifacts and even moiré
|
|
2021-03-07 01:41:45
|
not worth
|
|
2021-03-07 01:41:51
|
jpeg should not be re encoded
|
|
2021-03-07 01:43:50
|
maybe i will write an article
|
|
2021-03-07 01:44:13
|
Jon said it doesn't change anything for lossy modular
|
|
2021-03-07 01:44:21
|
but I should always test, I think
|
|
2021-03-07 01:45:52
|
i think it was that
|
|
2021-03-07 01:46:19
|
but anyway the improvement on video isn't that good
|
|
2021-03-07 01:46:30
|
because the image is already compressed in video
|
|
2021-03-07 01:46:45
|
it can go with a denoiser
|
|
2021-03-07 01:46:56
|
but not with a video codec at low bitrate
|
|
2021-03-07 01:47:13
|
it's better for images
|
|
|
_wb_
|
2021-03-07 01:47:22
|
-I 5 is the same as -I 1, it's a number between 0 and 1. No idea how it interacts with -s 3.
|
|
|
fab
|
2021-03-07 01:47:24
|
i had 72 kb images with it
|
|
2021-03-07 01:47:36
|
it's iterations
|
|
2021-03-07 01:47:41
|
in modular
|
|
|
_wb_
|
2021-03-07 01:48:04
|
Yes, historically it is iterations of mock encodes to train MA trees
|
|
|
fab
|
2021-03-07 01:48:08
|
also these settings are not automatic and they work for a single encoder version (commit)
|
|
|
_wb_
|
2021-03-07 01:48:48
|
Now it is just the fraction of data used for MA tree learning, where 1 = 100% is the maximum
|
|
|
fab
|
2021-03-07 01:49:14
|
you are talking about iterations?
|
|
|
_wb_
|
2021-03-07 01:49:21
|
Yes, -I
|
|
|
fab
|
2021-03-07 01:49:31
|
so 100/5= 20% DATA USED
|
|
2021-03-07 01:49:35
|
FOR LEARNING
|
|
|
_wb_
|
|
fab
|
2021-03-07 01:49:48
|
Ok i don't know math
|
|
|
_wb_
|
2021-03-07 01:49:51
|
-I 0.2 would be 20%
|
|
2021-03-07 01:50:05
|
Default is -I 0.5 which is 50%
|
|
|
lithium
|
2021-03-07 02:22:45
|
<@!532010383041363969>
I understand, thank you very much 🙂
and AV1 has a different palette mode, so complex...
Page 5:
In AV1 [4], luma palette mode and chroma palette mode are determined independently, so separate palettes are used. Up to 8 entries are allowed for each palette coded mode, which can be either predicted from neighbouring used palette colors or signalled for the delta part from the predictor. The index map coding follows a diagonal scan order as shown in Fig. 8. For each index, it is coded using its top and left neighbouring indices (when available) as context. Unlike in HEVC SCC and VVC, where no residue coding is applied to a palette coded block, transform coding and quantization is applied to the residue block in AV1 palette mode, just like the other intra prediction modes.
https://arxiv.org/ftp/arxiv/papers/2011/2011.14068.pdf
|
|
|
_wb_
|
2021-03-07 02:28:55
|
We can also apply residue coding with patches. Patches from a modular image are more expressive than palette-mode blocks: no need to align the patches with the blocks, can use more colors, can use a delta palette, can use stronger entropy coding, etc. But we currently don't have an encoder that detects good candidates for patch-based encoding; the current heuristic only detects repetitive elements near flat-color regions (which mostly detects letters in text).
|
|