|
fab
|
2021-08-08 03:48:04
|
|
|
2021-08-08 03:48:05
|
|
|
2021-08-08 03:48:09
|
|
|
2021-08-08 03:48:12
|
test fonts, for devs of jpeg xl
|
|
2021-08-12 04:19:08
|
D 1 s 9 e 2 1.443 gaborish 0 dots 1 epf 1 faster decoding 4 use new heuristics 22082021 2 build
|
|
2021-08-12 04:22:57
|
Is quality the only important thing?
|
|
2021-08-12 04:23:45
|
Jxl will evolve
|
|
2021-08-12 04:24:02
|
Will become more affordable
|
|
2021-08-12 04:24:35
|
And it's worthwhile to reduce the sizes of screenshots 4-5x compared to png
|
|
2021-08-12 04:25:03
|
People who don't like it will stay the same
|
|
2021-08-13 05:29:30
|
|
|
2021-08-13 05:30:03
|
People need to have patience
|
|
2021-08-13 05:30:39
|
The encoder is still in an early ISO stage
|
|
|
_wb_
|
2021-08-13 06:27:39
|
ISO has nothing to do with encoders.
|
|
2021-08-13 06:28:20
|
Standards only define what a decoder does, for encoders the only spec is "produce something that decodes".
|
|
|
fab
|
2021-08-13 08:25:51
|
Now intensity target 872 s 6 use new heuristics d 0.213
|
|
2021-08-13 08:26:03
|
It produces better quality than before
|
|
2021-08-13 08:26:22
|
So HDR is starting to make sense
|
|
|
_wb_
|
2021-08-13 08:37:04
|
It already made sense before. Just default settings should be fine, if you give the input image with a PQ or HLG color profile.
|
|
|
fab
|
2021-08-24 07:31:36
|
-s 2 -d 0.79 -I 0.49 --use_new_heuristics --faster_decoding=2
|
|
2021-08-24 07:36:02
|
|
|
2021-08-24 07:36:03
|
|
|
2021-08-24 07:36:10
|
|
|
2021-08-24 07:40:44
|
benchmarks on five images
|
|
2021-08-24 07:40:45
|
359 KB (368,175 bytes)
|
|
2021-08-24 07:40:59
|
old s9 322 KB (329,762 bytes)
|
|
2021-08-24 07:41:10
|
old s7 336 KB (344,438 bytes)
|
|
2021-08-24 08:42:38
|
for %i in (D:\Images\INT2\Insta\2021d\*.jpg) do cjxl -j -s 3 -d 3.5 -I 0.549 --use_new_heuristics --faster_decoding=3 --gaborish=0 %i %i.jxl
|
|
|
Fraetor
|
2021-09-12 11:35:27
|
For a simple case with lossless compression, is there any way of beating PNG at compressing this QR code?
|
|
2021-09-12 11:37:43
|
That PNG is 670 bytes.
`cjxl -q 0 -e 9` gives 4,547 bytes.
I assume PNG is winning here because it is indexed.
|
|
|
Cool Doggo
|
2021-09-13 12:08:49
|
``-d 0 -E 3 -g 2 -I 1 -e 9 --patches=0`` 637 bytes
|
|
2021-09-13 12:09:35
|
can probably beat it with a less convoluted command, but i just tried stuff to slowly get it lower
|
|
2021-09-13 12:36:35
|
``--patches=0 --override_bitdepth=1 -d 0 -e 9 -g 2`` this gets 610 bytes (why does override bitdepth matter here?)
|
|
|
eddie.zato
|
2021-09-13 02:44:38
|
```
PS > cjxl qr.png qr.jxl -m -e 9 -g 2 --patches=0
PS > cwebp -m 6 -mt -z 9 -lossless -o qr.webp qr.png
Name Length
---- ------
qr.png 670
qr.jxl 624
qr.webp 550
```
|
|
|
_wb_
|
2021-09-13 06:50:02
|
this kind of image is a best-case scenario for lz77
|
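_wb_'s point about lz77 can be sketched with Python's `zlib` (whose DEFLATE compression has an LZ77 stage): a repetitive two-level image compresses to a tiny fraction of its size, while noise of the same length barely compresses at all. The byte patterns below are synthetic stand-ins, not real QR data.

```python
import os
import zlib

# Synthetic stand-ins: a QR-like two-level image with long repeated
# row patterns vs. incompressible noise of the same size.
row = bytes([0, 255] * 16 + [255] * 32)   # one 64-byte "module row"
qr_like = row * 64                         # 4096 bytes, highly repetitive
noise = os.urandom(len(qr_like))

qr_size = len(zlib.compress(qr_like, 9))
noise_size = len(zlib.compress(noise, 9))
print(qr_size, noise_size)  # the repetitive input shrinks to a tiny fraction
```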
|
2021-09-13 06:50:23
|
`-m -g 2 -I 0 -e 9 -P 0 --override_bitdepth 1 --patches=0`
|
|
2021-09-13 06:50:38
|
is 540 bytes
|
|
2021-09-13 06:54:39
|
png has the advantage of doing bit packing: for 1-bit images, it stuffs 8 pixels in a byte and gives that to lz77
|
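The bit packing _wb_ describes can be sketched in a few lines of Python (a hand-rolled illustration, not PNG's actual implementation): eight 1-bit pixels go into one byte, MSB first, before the scanline is handed to lz77.

```python
def pack_1bit(pixels):
    """Pack a row of 0/1 pixels into bytes, MSB first, the way PNG
    stores 1-bit grayscale scanlines (last byte zero-padded)."""
    out = bytearray()
    for i in range(0, len(pixels), 8):
        byte = 0
        for bit, p in enumerate(pixels[i:i + 8]):
            byte |= (p & 1) << (7 - bit)
        out.append(byte)
    return bytes(out)

# 16 black/white pixels -> 2 bytes instead of 16
print(pack_1bit([1, 0] * 8).hex())  # 'aaaa'
```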
|
|
Cool Doggo
``--patches=0 --override_bitdepth=1 -d 0 -e 9 -g 2`` this gets 610 bytes (why does override bitdepth matter here?)
|
|
2021-09-13 06:57:33
|
it matters because the png input only does 8-bit or 16-bit, it doesn't do 1-bit. So it passes the image as an 8-bit image to the encoder, which will then use channel palette to encode it effectively as 1-bit, but it takes some extra bytes to signal the channel palette
|
|
|
Scope
|
2021-09-13 08:02:53
|
So, some kind of automatic bit depth reduction would be useful and as I said before something similar for lossy palette, but for grayscale and b/w images
|
|
2021-09-13 08:12:28
|
Also, is it possible to select a limited set of predictors, not just a single one, or the combinations like 13, 14, 15 already defined in the options?
And it would be interesting to find some optimal encoding options for lossless grayscale manga content that would more often be more efficient than png and webp, but not as slow as for general content like `-s 9 -E 3 -I 1`
|
|
|
_wb_
|
|
Scope
So, some kind of automatic bit depth reduction would be useful and as I said before something similar for lossy palette, but for grayscale and b/w images
|
|
2021-09-13 09:06:01
|
The complication there is that strictly speaking bit depth reduction is never really lossless, e.g. if you have a 3-bit image, the pixel values are 0, 1/7, 2/7, 3/7, 4/7, 5/7, 6/7, 1, but you cannot express that exactly in 8-bit where values have a denominator of 255.
|
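The divisibility argument can be checked with exact fractions (a quick illustration, not libjxl code): a b-bit code k represents the value k/(2^b − 1), and re-expressing it over denominator 255 is exact only when 2^b − 1 divides 255.

```python
from fractions import Fraction

# A b-bit code k represents the value k / (2**b - 1).  Re-expressing it
# in 8 bits needs denominator 255; that is exact only when
# (2**b - 1) divides 255.
for b in (1, 2, 3, 4):
    lo_den = 2**b - 1
    exact = all((Fraction(k, lo_den) * 255).denominator == 1
                for k in range(lo_den + 1))
    print(b, exact)
# 1 True, 2 True, 3 False (7 does not divide 255), 4 True
```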
|
2021-09-13 09:10:05
|
E.g. it is common to encode 10-bit or 12-bit images as 16-bit PNG, but strictly speaking there's a difference between the 16-bit image and the 10/12-bit image you get by ignoring the padding bits and treating it as 10/12-bit data
|
|
|
|
Deleted User
|
2021-09-13 10:28:26
|
|
|
2021-09-13 06:54:11
|
I always wondered why 8-bit --> 16 --> 8 wasn't lossless in GIMP. But this seems kind of unavoidable when wanting exactly 0 as well as exactly 1.
<@!794205442175402004> Does this also mean that libjxl will handle 2- and 4-bit grayscale PNGs incorrectly?
|
|
|
_wb_
|
2021-09-13 06:58:51
|
libjxl does not treat png files
|
|
2021-09-13 06:59:09
|
cjxl does read them as 8-bit, yes
|
|
2021-09-13 06:59:17
|
I think
|
|
2021-09-13 06:59:44
|
Should probably change that
|
|
|
spider-mario
|
2021-09-13 08:30:27
|
for what itโs worth, if you *255/15 or *255/3 the values, round them, then do the inverse and round them again, they seem to roundtrip properly
|
|
|
_wb_
|
2021-09-13 08:39:37
|
Well 255 is divisible by 15 and by 3
|
|
2021-09-13 08:39:54
|
So 2-bit and 4-bit are fine
|
|
2021-09-13 08:40:04
|
And 1-bit
|
|
2021-09-13 08:40:43
|
3-bit isn't though
|
|
2021-09-13 08:46:50
|
Also 1023 and 4095 do not divide 65535, so 10 or 12 bit does not exactly map to 16 bit
|
|
2021-09-13 08:50:05
|
Fun fact btw: you cannot represent exactly 50% gray in any bitdepth
|
|
2021-09-13 08:50:20
|
Since 2^n - 1 is always odd
|
|
2021-09-13 08:54:31
|
Always something annoyingly unsatisfying about having to choose between 127 and 128
|
|
2021-09-13 09:42:20
|
Oh well, at least in jxl you can get real 50% by using lossless float.
|
|
|
spider-mario
|
|
_wb_
Well 255 is divisible by 15 and by 3
|
|
2021-09-13 09:48:40
|
sorry, I meant 7 (2³ − 1), not 3
|
|
2021-09-13 09:48:48
|
so it does seem to work for 3 bits
|
|
2021-09-13 09:48:51
|
also 10 and 12 for 16
|
|
2021-09-13 09:49:18
|
of course, you have to actually do the proper scaling in both directions, not just truncate
|
|
2021-09-13 09:50:20
|
```pycon
>>> all(k == round((2**3 - 1) / (2**8 - 1) * round((2**8 - 1) / (2**3 - 1) * k)) for k in range(2**3))
True
```
|
|
2021-09-13 09:53:12
|
ugh so many thinkos in there
|
|
2021-09-13 09:53:19
|
but I think Iโve got it at last
|
|
2021-09-13 09:53:38
|
(long day)
|
|
|
_wb_
|
2021-09-13 09:58:07
|
Good to know that 12 to 16 to 12 rounds back correctly
|
|
2021-09-13 10:00:02
|
But it's still not technically the same to have 16/65535 instead of 1/4095.
|
|
|
|
Deleted User
|
2021-09-18 04:12:48
|
<@!794205442175402004> Is there any bit depth format that goes from -128/65280 to 65407/65280 or -2/1020 to 1021/1020 or -8/4080 to 4087/4080 with negative numbers behaving like full black and values above 1 as white? This would waste a minor bit of precision range but conversions from 8 Bit to 10 Bit to 12 Bit and then to 16 Bit would remain perfectly accurate.
|
|
|
_wb_
|
2021-09-18 04:22:59
|
Not that I know of
|
|
2021-09-18 04:25:09
|
I suppose there can be metadata saying what is the black level, or something
|
|
|
fab
|
2021-09-22 12:37:03
|
-q 93.182 -s 6 -I 1.221 --patches=0 --epf=2 --gaborish=0 --intensity_target=255 --photon_noise=ISO4917 --faster_decoding=4 --dots=1
|
|
2021-09-22 12:37:54
|
like, seriously, i watched a link posted in other codecs
|
|
2021-09-22 12:38:05
|
https://discord.com/channels/794206087879852103/805176455658733570/889861035958366208
|
|
2021-09-22 12:38:13
|
and this reminds me of jpeg xl
|
|
2021-09-22 12:38:21
|
it looks like a better jpeg xl
|
|
2021-09-22 12:38:36
|
the concept of patches is not new i know
|
|
2021-09-22 12:38:52
|
but it is impressive
|
|
2021-09-22 12:39:21
|
how they can get the right balance of quality and fidelity
|
|
2021-09-22 12:42:45
|
like, it's not incredible or even fine in terms of image fidelity
|
|
2021-09-22 12:43:04
|
it tries to be good at every minuscule difference
|
|
2021-09-22 12:43:40
|
from 0.588-0.600 bpp to 0.384 bpp, and even lower where that's right
|
|
2021-09-22 12:44:07
|
it's not that simple to know if an image is worth it
|
|
2021-09-22 12:45:11
|
it's optimized for a 400-nit display
|
|
2021-09-22 12:45:15
|
for all
|
|
2021-09-22 12:45:57
|
it should look very similar in many devices
|
|
2021-09-22 12:45:58
|
not simple
|
|
2021-09-22 12:46:07
|
have decent speeds
|
|
2021-09-22 12:48:41
|
from a 1.200 bpp an 1.800 bpp image and this
|
|
2021-09-22 12:49:11
|
simplifying the delta
|
|
2021-09-22 12:49:14
|
is not that simple
|
|
2021-09-22 12:49:25
|
while good fidelity
|
|
2021-09-22 12:52:48
|
simplifying the pixels
|
|
2021-09-22 12:53:36
|
quantization and squeeze
|
|
2021-09-22 12:53:56
|
noise
|
|
2021-09-22 12:56:55
|
saliency based compression
|
|
2021-09-22 12:56:57
|
detail
|
|
2021-09-22 01:00:24
|
|
|
2021-09-22 01:07:53
|
|
|
2021-09-22 01:10:41
|
|
|
2021-09-22 01:10:43
|
|
|
2021-09-22 01:10:43
|
|
|
2021-09-22 01:10:45
|
|
|
2021-09-22 01:10:45
|
|
|
2021-09-22 01:10:46
|
|
|
2021-09-22 01:10:46
|
|
|
2021-09-22 01:10:47
|
|
|
2021-09-22 01:13:21
|
<@456226577798135808> this is the point of lossy; i don't think wb makes encoders just to encode some images at q 95 and some in lossless. many users in the jxl or av1 servers do that, but that's not what i want
|
|
2021-09-22 01:13:25
|
i want to lose colour
|
|
2021-09-22 01:14:17
|
to me a 19:9 photo that weighs 854 kb is perfect to post on the internet
|
|
2021-09-22 01:14:53
|
for manga though, it's not fb with its 4 mb per-image limit, so the lossless original is better
|
|
2021-09-22 01:15:15
|
i don't know; all people say the lossless original is always more than 40% better
|
|
2021-09-22 01:15:24
|
is it really true, or is it stupid?
|
|
2021-09-22 01:15:33
|
i do not have the experience for that
|
|
2021-09-22 01:16:02
|
and neither the abilities
|
|
2021-09-22 01:25:38
|
Sorry for bad English
|
|
|
w
|
2021-09-22 01:40:02
|
<:linusretire:707375442226708542>
|
|
|
fab
|
|
diskorduser
|
|
fab
|
|
2021-09-25 02:39:52
|
Are the comments encoding parameters? <:HaDog:805390049033191445>
|
|
|
eddie.zato
|
2021-09-28 05:10:38
|
What else can I tune up to more reduce the file size of such an image?
```
cjxl -m -e 9 -P 0 -Y 0 -I 1 -g 3 --palette=8
Compressed to 583578 bytes (0.191 bpp)
```
|
|
2021-09-28 05:10:48
|
|
|
|
_wb_
|
2021-09-28 05:26:58
|
Can try --patches 0 and/or -I 0
|
|
|
eddie.zato
|
2021-09-28 05:34:50
|
`--patches=0` has no effect.
`-I 0` increases the size.
`-g 2` slightly reduces the size.
|
|
2021-09-28 05:35:00
|
```
cjxl -m -e 9 -P 0 -Y 0 -I 1 -g 2 --palette=8
Compressed to 583411 bytes (0.191 bpp)
```
|
|
|
_wb_
|
2021-09-28 06:07:01
|
Does --palette=8 do anything?
|
|
2021-09-28 06:07:16
|
Can try --palette=-8 too, iirc
|
|
2021-09-28 06:08:28
|
Negative number for palette means it makes the palette in (sub)image order, not sorted lexicographically (i.e. by luma in case of YCoCg) which is the default behavior.
|
|
|
eddie.zato
|
2021-09-28 06:11:18
|
`magick identify` reports that the image has 8 colors, so I tried `--palette=8` and that reduced the size a bit.
`--palette=-8` slightly increases the size.
|
|
2021-09-28 06:15:16
|
What's interesting, in this case `cjxl` with the default settings easily beats the tuned up `cwebp`:
```
cjxl -m 0.gif 1.jxl
Compressed to 738165 bytes (0.242 bpp)
cwebp -mt -m 6 -lossless -z 9 -o 2.webp 0.gif
Output: 1203588 bytes (0.39 bpp)
```
|
|
|
_wb_
|
2021-09-28 06:41:22
|
lossless is a weird game
|
|
|
|
haaaaah
|
|
eddie.zato
|
|
2021-09-28 09:14:21
|
Nausica!
|
|
|
BlueSwordM
|
|
BlueSwordM
9. `color:deltaq-mode=3` Default is objective Q variation per superblock (1), which is not optimal for intra-only psycho-visual quality. Very recently, an intra Q variation mode made for psycho-visual quality was introduced, and it actually works well.
**Extra**: for banding prevention without using grain synthesis, you can add in `-d 10`. Prevents the use of `-a tune=butteraugli` though. Combined with `-d 10` and grain synthesis (`-a color:enable-dnl-denoising=0 -a color:denoise-noise-level=5`), it makes it a strong encoder, even in challenging scenarios.
|
|
2021-10-09 05:51:02
|
An updated settings guide:
`avifenc -s X -j X --min 0 --max 63 -a end-usage=q -a cq-level=XX -a color:sharpness=2 -a tune=butteraugli -a color:enable-chroma-deltaq=1 -a color:enable-qm=1 -a color:deltaq-mode=4`
|
|
|
BlueSwordM
An updated settings guide:
`avifenc -s X -j X --min 0 --max 63 -a end-usage=q -a cq-level=XX -a color:sharpness=2 -a tune=butteraugli -a color:enable-chroma-deltaq=1 -a color:enable-qm=1 -a color:deltaq-mode=4`
|
|
2021-10-09 05:52:04
|
Also, if you're on the absolute bleeding edge (like, unofficial patched aomenc...) you can try this thing out for 10bpc:
`avifenc -s X -j X -d 10 --min 0 --max 63 -a end-usage=q -a cq-level=XX -a color:sharpness=2 -a tune=image_perceptual_quality -a color:enable-chroma-deltaq=1 -a color:enable-qm=1 -a color:deltaq-mode=4`
|
|
|
lithium
|
2021-10-09 06:30:49
|
Thank you very much BlueSwordM
|
|
|
|
veluca
|
|
BlueSwordM
An updated settings guide:
`avifenc -s X -j X --min 0 --max 63 -a end-usage=q -a cq-level=XX -a color:sharpness=2 -a tune=butteraugli -a color:enable-chroma-deltaq=1 -a color:enable-qm=1 -a color:deltaq-mode=4`
|
|
2021-10-09 07:35:20
|
no enable_qm?
|
|
|
BlueSwordM
|
|
|
veluca
|
|
BlueSwordM
Also, if you're on the absolute bleeding edge(like, unofficial patched aomenc...) you can try this thing out for 10bpc:
`avifenc -s X -j X -d 10 --min 0 --max 63 -a end-usage=q -a cq-level=XX -a color:sharpness=2 -a tune=image_perceptual_quality -a color:enable-chroma-deltaq=1 -a color:enable-qm=1 -a color:deltaq-mode=4`
|
|
2021-10-09 07:35:47
|
do you know what is the "image_perceptual_quality" tuning mode exactly?
|
|
2021-10-09 07:40:26
|
by default I'd assume it's the mode that was added somewhere around end of June
|
|
2021-10-09 07:40:30
|
but I'm not sure it is
|
|
|
BlueSwordM
|
|
veluca
by default I'd assume it's the mode that was added somewhere around end of June
|
|
2021-10-09 07:41:14
|
No, it's very very new. In fact, it's a WIP patch. That is why I mentioned "unofficial patched aomenc".
|
|
|
|
veluca
|
2021-10-09 07:41:32
|
link to the commit?
|
|
|
BlueSwordM
|
|
veluca
link to the commit?
|
|
2021-10-09 07:42:30
|
I don't understand it 100%, so you are better off reading the commit itself directly so I don't make any mistakes:
https://aomedia-review.googlesource.com/c/aom/+/147022
It's actually similar to `--tune=ssim` in the way it calculates and redistributes bits, but without hurting color performance or flat shading (animated).
|
|
|
|
veluca
|
2021-10-09 07:43:27
|
deltaq-mode=4 <- why are you not trying that?
|
|
|
BlueSwordM
|
2021-10-09 07:43:55
|
What do you mean Veluca?
|
|
|
|
veluca
|
2021-10-09 07:45:59
|
huh
|
|
2021-10-09 07:46:08
|
either I'm blind or you changed it and it used to be 2
|
|
2021-10-09 07:46:13
|
(probably I am blind)
|
|
|
BlueSwordM
|
2021-10-09 07:46:29
|
2 is currently borked and a WIP, so I never touched that
|
|
|
|
veluca
|
2021-10-09 07:47:07
|
does enable_qm not work with this patch?
|
|
|
BlueSwordM
|
|
veluca
does enable_qm not work with this patch?
|
|
2021-10-09 07:47:20
|
No, I just forgot to enable it in the 1st place.
|
|
|
|
veluca
|
2021-10-09 07:47:27
|
ok
|
|
2021-10-09 07:47:38
|
admittedly the cli options are nontrivial
|
|
|
BlueSwordM
|
2021-10-09 07:48:00
|
In fact, in my original "guide", I forgot to enable the default quantization matrices in the 1st place.
|
|
|
|
veluca
|
2021-10-09 07:48:41
|
from what I can see of the patch it's probably a nice improvement, although admittedly I'd say the perceptual optimization still leaves a bit to be desired
|
|
|
BlueSwordM
|
|
veluca
from what I can see of the patch it's probably a nice improvement, although admittedly I'd say the perceptual optimization still leaves a bit to be desired
|
|
2021-10-09 07:49:04
|
Butteraugli is still better indeed as an RD tune, but it doesn't work in 10-12bpc, so yeah...
|
|
2021-10-09 07:49:11
|
It's our best option for now
|
|
|
|
veluca
|
2021-10-09 07:49:35
|
tbh from what I understand of how butteraugli optimization is implemented, it's not great either
|
|
2021-10-09 07:50:05
|
(better than function-of-variance, but still there's some corners being cut)
|
|
|
BlueSwordM
|
2021-10-09 07:51:49
|
Indeed. I don't get why some AOM members don't want JXL to really be a thing.
|
|
2021-10-09 07:52:21
|
AV1 encoders have already benefited nicely from working with the JXL folks
|
|
2021-10-09 07:52:54
|
We just need JXL to become an AOM standard, and we are all set.
|
|
|
|
veluca
|
2021-10-09 07:53:14
|
ah that would be nice!
|
|
2021-10-09 07:53:37
|
one would need to show that it solves a problem that avif doesn't solve, I think
|
|
2021-10-09 07:54:03
|
but I would be happy if that happened
|
|
|
_wb_
|
2021-10-09 07:55:57
|
Me too
|
|
2021-10-09 07:58:58
|
We could even define a new superformat, .jaxif or whatever, which would be compact headers with a jxl or av1 payload
|
|
|
Scope
|
2021-10-09 07:59:33
|
Btw, how compatible are the AOM rules with JPEG/ISO, or is it enough that the format is open and royalty-free?
|
|
|
_wb_
|
2021-10-09 08:04:48
|
No idea tbh
|
|
2021-10-09 08:06:31
|
But if ISO is a problem, we can do it like so many other standards do it (e.g. Unicode): let ISO just be a mirror
|
|
|
|
veluca
|
2021-10-09 08:13:59
|
I'd expect the difficulties to be mostly with AOM rather than with ISO for this
|
|
|
BlueSwordM
It's our best option for now
|
|
2021-10-09 08:14:42
|
do you have a good idea of what's the gap between best-avif and jxl btw?
|
|
2021-10-09 08:15:11
|
(since you certainly explored more encode options than most people...)
|
|
|
BlueSwordM
|
|
veluca
do you have a good idea of what's the gap between best-avif and jxl btw?
|
|
2021-10-09 08:19:44
|
Lossy: It's now actually decently competitive at all qualities, even at higher bpp. 8b performance has been improved, but 8b YUV being what it is, it is still a problem in some images. 10b is where it's at, at a cost to encode speed.
Lossless: Absolute demolition by JPEG-XL. Never use AV1 lossless for images.
JPEG compression: JXL obviously wins, but now, if your avifenc binary is built with libyuv, you get no generational chroma subsampling loss from aomenc, which is good.
Speed: At S6 and above with JXL, aomenc is sort of competitive on average in terms of CPU time. At S5 and below however, all open AV1 encoders get absolutely destroyed.
Memory use: Now that's where aomenc has improved massively with lots of recent work: its memory usage in intra-only/all-intra is now much smaller than before and, in most cases, superior to JXL's.
Decode: dav1d is very fast, usually on par with djxl per CPU-time, and scales quite well.
|
|
|
|
veluca
|
2021-10-09 08:20:54
|
I'm not surprised by memory use - JXL uses floats, avif doesn't
|
|
2021-10-09 08:21:16
|
I was mostly wondering about lossy: what does "decently competitive" mean?
|
|
|
BlueSwordM
|
|
veluca
I was mostly wondering about lossy: what does "decently competitive" mean?
|
|
2021-10-09 08:22:05
|
It means that in images where aomenc had trouble at higher qualities, it can now get decently close to what cjxl produces.
|
|
|
|
veluca
|
2021-10-09 08:22:58
|
no I get that, but if you had to give a % to "decently close", what would that % be? 1%? 5%? 10%? 25%?
|
|
|
BlueSwordM
|
2021-10-09 08:24:07
|
No objective evidence for now. I'm still gathering a much wider image corpus for my article, and as such I haven't tested anything objectively speaking.
I've also modified my analysis metrics: single-frame VMAF, butteraugli, SSIMULACRA, and MS-SSIM just in case.
|
|
2021-10-09 08:24:22
|
Any more metrics that I could use?
|
|
|
|
veluca
|
2021-10-09 08:24:30
|
fair enough!
|
|
2021-10-09 08:24:41
|
do you have a "gut feeling" number?
|
|
2021-10-09 08:25:03
|
you could also use dssim, I think (assuming you're OK with AGPL)
|
|
2021-10-09 08:25:18
|
also, is it max-butteraugli or 3-norm-butteraugli?
|
|
|
Lastrosade
|
|
BlueSwordM
No objective evidence for now. I'm still gathering a much wider image corpus for my article, and as such I haven't tested anything objectively speaking.
I've also modified my analysis metrics: single-frame VMAF, butteraugli, SSIMULACRA, and MS-SSIM just in case.
|
|
2021-10-09 08:26:25
|
Do you want some images for your corpus? I have some good fresh photos (assume CC0)
|
|
|
BlueSwordM
|
|
veluca
do you have a "gut feeling" number?
|
|
2021-10-09 08:27:00
|
If I gave cjxl an output score of 4.5/5, I'd give old aomenc 3.5/5, and current aomenc 4.0-4.2 (per-image performance is still a problem...).
As for butteraugli, I'll be posting both scores, since a higher average picture quality is important, but having higher low-percentile quality is even more so.
|
|
|
|
veluca
|
2021-10-09 08:29:32
|
so it cut the gap in half, in some sense. ok, thanks for your thoughts!
|
|
|
novomesk
|
2021-10-09 08:59:32
|
Did you know that the libavif decoder is fastest in 8-bit when it uses libyuv (and of course with dav1d too)? Many people use libavif without libyuv, so they get a slow YUV->RGB conversion.
On my system, an 8-bit AVIF decodes approximately 30% faster than a similar JXL image.
|
|
|
_wb_
|
2021-10-09 09:07:15
|
How is 10bit?
|
|
|
novomesk
|
|
_wb_
How is 10bit?
|
|
2021-10-10 10:44:45
|
I don't have recent data. However, in the past libaom decoded 10-bit AV1 faster than dav1d. YUV decoding is probably still quite fast; it depends on whether the quality or the fast conversion to RGB is used.
|
|
|
_wb_
|
2021-10-10 10:51:06
|
8-bit ycbcr has about 4 million different colors
|
|
2021-10-10 10:51:22
|
8-bit rgb has 16 million colors
|
|
|
|
veluca
|
2021-10-10 12:07:11
|
from what I can see on openbenchmarking, the difference is about 20%
|
|
2021-10-10 12:07:24
|
(on x86)
|
|
2021-10-10 12:08:01
|
and similar on arm, perhaps a bit more
|
|
2021-10-10 12:08:27
|
although which arm cpu it actually is is kinda hard to say
|
|
2021-10-10 12:10:35
|
either way 4-core A72 decode seems to be able to do ~60MP/s or thereabouts with 10-bit, which is I think rather close to what djxl does (IIRC on a samsung a51 it is ~20MP/s without filters on one core, and 10~15MP/s with filters)
|
|
|
Jyrki Alakuijala
|
|
BlueSwordM
No objective evidence for now. I'm still gathering a much wider image corpus for my article and as such, I haven't tested anything objectively speaking ๐
I've also modified my analysis metrics: ๐, single frame VMAF, butteraugli, SSIMULACRA, and MS-SSIM just in case.
|
|
2021-10-21 07:07:08
|
PSNR
|
|
|
veluca
also, is it max-butteraugli or 3-norm-butteraugli?
|
|
2021-10-21 07:08:01
|
3-norm is more of optimization tactics (smoother surface for optimization algorithms) and 6-norm would be a better default for humans
|
|
|
BlueSwordM
Lossy: It's now actually decently competitive at all qualities, even at higher bpp. 8b performance has been improved, but 8b YUV being what it is, still is a problem in some images. 10b is where it's at, at a cost to encode speed.
Lossless: Absolute demolition by JPEG-XL. Never use AV1 lossless for images.
JPEG compression: JXL obviously wins, but now, if you have libyuv installed on your system/binary is built with avifenc, you get no generational chroma subsampling loss from aomenc, which is good.
Speed: At S6 and above with JXL, aomenc is sort of competitive on average in terms of CPU time. At S5 and below however, all open AV1 encoders get absolutely destroyed.
Memory use: Now that's where aomenc has improved massively with lots of recent work: its memory usage in intra-only/all intra is now much smaller than before, and in most cases, is superior than JXL.
Decode: dav1d is very fast, usually on par with DJXL per CPU-time, and scales quite well.
|
|
2021-10-21 07:15:16
|
What range is all qualities?
|
|
|
eddie.zato
What's interesting, in this case `cjxl` with the default settings easily beats the tuned up `cwebp`:
```
cjxl -m 0.gif 1.jxl
Compressed to 738165 bytes (0.242 bpp)
cwebp -mt -m 6 -lossless -z 9 -o 2.webp 0.gif
Output: 1203588 bytes (0.39 bpp)
```
|
|
2021-10-21 07:22:57
|
WebP lossless has great single-frame compression. The animation however doesn't use the same compression engine; it was added by other people in a way that considers WebP lossy and lossless techniques a black box.
|
|
2021-10-21 07:24:12
|
As a consequence, WebP lossless wins ~30 % over PNG, but often slightly loses against APNG.
|
|
2021-10-21 07:26:02
|
JPEG XL has a lot more expressive methods for using the previous frame(s) in animations, and we haven't spent much time figuring out how to use them in encoding
|
|
2021-10-21 07:27:24
|
If we think of high-quality pixels vs. high-quality photography compression, I consider delta palettization a 'dark horse' offering a 50% improvement over the classic palette reduction approach that makes GIF, PNG and WebP lossless practical in many cases
|
|
2021-10-21 07:29:20
|
Delta palettization has not been implemented in other formats, it is a unique value brought by JPEG XL. Its encoder in JPEG XL is somewhat experimental still (single speed, and slower than what is practical).
|
|
2021-10-21 07:31:42
|
More guarantees on degradation in quality can be given for pixel based compression (PNG, WebP lossless, JXL lossless, JXL delta palettization) than for block based (JPEG, JXL, AVIF, WebP lossy) compression.
|
|
2021-10-21 07:34:02
|
when comparing butteraugli, some people have used the open-sourced butteraugli from github.com/google/butteraugli, the version with the fewest compromises made to assist compression; some even have their favorite release of it (often the first one)
|
|
2021-10-21 07:36:01
|
the butterauglis that are integrated in JPEG and JPEG XL compression (guetzli and libjxl) have adjustments that make them perform slightly better in that context, and JPEG XL's butteraugli includes changes that make it perform better when used as a differential version in neural compression (which make both the image-quality use and the JPEG XL use slightly worse)
|
|
|
eddie.zato
|
|
Jyrki Alakuijala
WebP lossless has great single-frame compression. The animation however doesn't use the same compression engine; it was added by other people in a way that considers WebP lossy and lossless techniques a black box.
|
|
2021-10-21 07:40:12
|
In my case, the image isn't animated; it's a static gif made from a grayscale scan.
|
|
2021-10-21 07:40:17
|
https://discord.com/channels/794206087879852103/840831132009365514/892277140895182859
|
|
2021-10-21 07:41:11
|
I'm used to lossless webp always being good, but in this particular case it is noticeably losing to jxl.
|
|
|
Jyrki Alakuijala
|
2021-11-06 06:36:00
|
This loss can be explained by lack of pixel-based context modeling in WebP
|
|
2021-11-06 06:36:30
|
in WebP the entropy code is chosen by a subresolution image and is the same for, say, a 16x16 or 32x32 area
|
|
2021-11-06 06:36:44
|
in JPEG XL the entropy code can depend on the neighboring pixels
|
|
2021-11-06 06:37:09
|
for example if T and L pixels are white, then use one entropy code, if T and L are black then another, otherwise a third
|
|
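The idea can be illustrated with a toy context function (purely illustrative; libjxl's actual context modeling is far more elaborate than just looking at T and L): the entropy-code index is chosen from the Top and Left neighbors, so runs of white, runs of black, and pixels near edges each get their own code.

```python
# Toy sketch of pixel-based context modeling: pick an entropy-code
# index from the Top (T) and Left (L) neighbors of each pixel.
def context(top, left):
    if top == 255 and left == 255:
        return 0  # inside a white area
    if top == 0 and left == 0:
        return 1  # inside a black area
    return 2      # near an edge

def contexts(image):
    """Context index per pixel; out-of-image neighbors count as white."""
    h, w = len(image), len(image[0])
    return [[context(image[y - 1][x] if y else 255,
                     image[y][x - 1] if x else 255)
             for x in range(w)] for y in range(h)]

img = [[255, 255, 0, 0],
       [255, 255, 0, 0]]
print(contexts(img))  # [[0, 0, 0, 2], [0, 0, 2, 1]]
```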
2021-11-06 06:37:45
|
when there are very high possible pixel-to-pixel decorrelations, like such high-entropy line drawings and some fractals, the pixel-based context is far more dense
|
|
2021-11-06 06:37:59
|
but it is always necessarily slower to decode
|
|
|
|
necros
|
2022-01-29 07:27:48
|
Do I understand correctly that some flags were dropped in the latest version, like choosing the number of threads?
|
|
|
|
Morpholemew
|
2022-01-29 02:42:39
|
Irfanview has a jpeg-xl plugin: https://www.irfanview.com/plugins.htm
|
|
|
monad
|
2022-01-29 06:49:32
|
<@!285240447922733056> No flags dropped between v6 and current. According to help, cjxl accepts "PNG, APNG, GIF, JPEG, EXR, PPM, PFM, or PGX" (it can also accept PSD) and djxl outputs "PNG with ICC, JPG, or PPM/PFM" (it can also output APNG).
|
|
2022-01-29 06:52:49
|
For testing lossless, you can do `compare -metric pae <original> <decoded> null:`
|
|
|
|
Hello71
|
2022-01-29 07:42:05
|
although that will return non-zero for (almost all) jpeg reconstructed images
|
|
2022-01-29 07:42:28
|
if "count pic colors" means counting palette size, that seems... unreliable at best
|
|
|
_wb_
|
2022-01-29 07:44:50
|
For testing jpeg recompression, you can just reconstruct the jpeg and use `cmp` to check that it's the exact same file
|
|
|
|
necros
|
2022-01-29 10:56:57
|
<@!794205442175402004> for now I stick to packjpg because it's smaller; a small difference, but still (on my sample image, 2,234 kb vs jxl 2,377 kb). I hope it will change in the future
|
|
2022-01-29 10:57:45
|
<@!263300458888691714> in the latest version I don't see the multi-thread option in the built-in help, hence my question
|
|
2022-01-29 11:09:57
|
<@!411361798696992798> thanx
|
|
|
monad
|
2022-01-29 11:12:19
|
`--num_threads` is hidden behind one level of verbosity (`cjxl -h -v`)
|
|
2022-01-29 11:13:20
|
But that was the same as before
|
|
|
|
necros
|
2022-01-29 11:18:03
|
ok thanx, missed it
|
|
2022-02-05 01:23:22
|
Hi, any way to losslessly convert webp to jxl?
|
|
|
_wb_
|
2022-02-05 02:02:32
|
Lossy webp? I doubt it.
|
|
|
DZgas ะ
|
|
necros
Hi, any way to losslessly convert webp to jxl?
|
|
2022-02-05 02:30:50
|
lossless webp is the same picture as png
|
|
2022-02-05 02:31:55
|
but if you are talking about lossy webp, then no, it's impossible, the algorithms are too different
|
|
|
_wb_
|
2022-02-05 02:42:13
|
Well vp8 is doing DCT, in block sizes that are probably a subset of what jxl can do. But jxl doesn't have directional prediction, so that's a problem. Also webp always uses tv range YCbCr while jxl can only do full range (like it is in jpeg)
|
|
2022-02-05 02:42:53
|
Ah and jxl doesn't actually support variable block sizes when it's 4:2:0
|
|
2022-02-05 02:43:09
|
Yeah no, it's not possible
|
|
|
|
JendaLinda
|
2022-02-05 04:32:03
|
This is interesting, would it be possible to losslessly recompress (doesn't need to be reversible) other JPEG "X" flavors?
|
|
2022-02-05 04:41:28
|
I'm talking about JPEG XR, XS, XT, if anybody actually uses them.
|
|
|
Fraetor
|
|
JendaLinda
I'm talking about JPEG XR, XS, XT, if anybody actually uses them.
|
|
2022-02-05 04:52:17
|
Only lossless for the original JPEG.
|
|
2022-02-05 04:52:41
|
At least with the existing encoder.
|
|
2022-02-05 04:53:28
|
In principle, other lossless format conversions could be done if the internal structures of the codecs are similar enough, but that would require significant development effort.
|
|
|
_wb_
|
2022-02-05 05:00:14
|
XT is basically old jpeg plus extra stuff, the old jpeg can be done with jxl but the extra stuff I dunno
|
|
2022-02-05 05:01:14
|
XR and 2000 use different transforms, they cannot be represented in jxl
|
|
2022-02-05 05:02:01
|
XS is not really meant as a file format so I don't think it would be useful. But also probably cannot be done.
|
|
|
|
JendaLinda
|
2022-02-05 05:03:19
|
These formats don't seem to be very popular. I discovered them only while researching JXL. So the actual question was whether these formats would "fit" into the JXL codestream. JPEG 2000 seems to be somewhat popular, but as far as I know it doesn't use DCT, so that wouldn't work at all.
|
|
|
_wb_
|
2022-02-05 05:04:12
|
One of the use cases for j2k is lossless for medical, that of course can be converted to jxl losslessly
|
|
|
|
JendaLinda
|
2022-02-05 05:05:50
|
That's right. Lossless formats are not an issue; they can be converted freely.
|
|
|
_wb_
|
2022-02-05 05:11:11
|
The only way to effectively do lossless recompression of a lossy codec, is to be a superset of it (as far as the lossy coding tools go, the bitstream organization, entropy coding, and lossless parts can be different).
|
|
2022-02-05 05:12:16
|
For jpeg, it made sense to do that, because jpeg is relatively simple, its coding tools are very battle-tested and known to be useful, and there is a huge number of existing jpeg files out there.
|
|
2022-02-05 05:12:59
|
For any other lossy codec, it makes a lot less sense, because it's more spec complication for less practical value.
|
|
|
|
Hello71
|
2022-02-05 09:48:46
|
doesn't it depend on what exactly you're missing? like, if your codec is much much better but doesn't support some feature, then you can code it in a "residual error" block
|
|
2022-02-05 09:49:35
|
the problem with this theory i think is that image compression hasn't gotten *that* much better over the last decade
|
|
|
_wb_
|
2022-02-05 09:55:01
|
If lossless pixel values is all you need, then "residual error" works, but it's likely not going to be more effective than just encoding the decoded image losslessly (so it will probably end up larger than the other format)
|
|
|
|
Hello71
|
2022-02-05 10:48:27
|
mmhmm
|
|
|
study169
|
2022-02-07 04:02:07
|
Hello, I just started using JPEG XL. I'm trying to compress some uint16 images with JPEG XL lossless. I could achieve lossless compression with cjxl, but I cannot achieve lossless compression with the interface used in the encode_oneshot example. I tried all the lossless frame controls; it didn't work for me. It seems that the in-progress cjxl_ng uses the same interface as encode_oneshot, but cjxl_ng crashed on my single-channel 4001x4001 uint16 image. Any suggestions? Thanks a lot.
|
|
2022-02-07 04:02:35
|
The lossless frame settings that I have used so far.
|
|
2022-02-07 04:02:43
|
float distance = 0.0;  // mathematically lossless
if (JXL_ENC_SUCCESS != JxlEncoderSetFrameDistance(frame_settings, distance)) {
  fprintf(stderr, "JxlEncoderSetFrameDistance failed\n");
  return false;
}
if (JXL_ENC_SUCCESS != JxlEncoderFrameSettingsSetOption(frame_settings, JXL_ENC_FRAME_SETTING_EFFORT, 7)) {
  fprintf(stderr, "JxlEncoderFrameSettingsSetOption JXL_ENC_FRAME_SETTING_EFFORT failed\n");
  return false;
}
if (JXL_ENC_SUCCESS != JxlEncoderFrameSettingsSetOption(frame_settings, JXL_ENC_FRAME_SETTING_MODULAR, 1)) {
  fprintf(stderr, "JxlEncoderFrameSettingsSetOption JXL_ENC_FRAME_SETTING_MODULAR failed\n");
  return false;
}
if (JXL_ENC_SUCCESS != JxlEncoderFrameSettingsSetOption(frame_settings, JXL_ENC_FRAME_SETTING_KEEP_INVISIBLE, 1)) {
  fprintf(stderr, "JxlEncoderFrameSettingsSetOption JXL_ENC_FRAME_SETTING_KEEP_INVISIBLE failed\n");
  return false;
}
if (JXL_ENC_SUCCESS != JxlEncoderSetFrameLossless(frame_settings, JXL_TRUE)) {
  fprintf(stderr, "JxlEncoderSetFrameLossless failed\n");
  return false;
}
|
|
|
_wb_
|
2022-02-07 04:38:58
|
You have to set `basic_info.uses_original_profile = JXL_TRUE`
|
|
2022-02-07 04:40:02
|
Otherwise it converts to XYB, which is not lossless, and it's also a bad idea for lossless compression
|
|
2022-02-07 04:41:16
|
This is such a common API usage pitfall that I feel like changing JxlEncoderSetFrameLossless to fail if the basic info is not compatible with lossless
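A minimal sketch of that fix, as a fragment in the style of the encode_oneshot example (the `enc` handle and the image dimensions from the question are assumed; this has to happen before the frame settings are applied):

```c
/* Sketch of the fix described above (fragment; assumes the `enc` handle
 * from the encode_oneshot example). Keeping the original color profile
 * prevents the conversion to XYB, which would break lossless coding. */
JxlBasicInfo basic_info;
JxlEncoderInitBasicInfo(&basic_info);
basic_info.xsize = 4001;                      /* image dimensions */
basic_info.ysize = 4001;
basic_info.bits_per_sample = 16;              /* uint16 samples */
basic_info.uses_original_profile = JXL_TRUE;  /* required for lossless */
if (JXL_ENC_SUCCESS != JxlEncoderSetBasicInfo(enc.get(), &basic_info)) {
  fprintf(stderr, "JxlEncoderSetBasicInfo failed\n");
  return false;
}
```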
|
|
|
study169
|
2022-02-07 05:24:42
|
It works perfectly now. Thanks a lot. It bothered me for almost two days.
|
|
|
cucumber
|
2022-02-11 04:50:26
|
are there any suggested parameters for lossy encoding of lined, flat-colored digital art? i.e. there's a small color palette and all color transitions are sharp, especially along edges
|
|
2022-02-11 04:53:38
|
unless i drop the quality level *really* low, lossy encoding does way worse than lossless in terms of file size, and of course in terms of quality
|
|
|
monad
|
2022-02-11 08:55:35
|
<@!461421345302118401> has been studying drawing/manga content encoding, but targeting distances 1 to 0.5. If lossless just performs much better, then I'm not sure current JXL parameter tweaking will be enough. Based on the description, a smart lossy palette + dither might be an option (e.g. pngquant). Maybe Lee has some insights anyway.
|
|
|
cucumber
|
2022-02-12 01:20:50
|
thanks! Looking forward to their insights, if they have any.
|
|
|
lithium
|
2022-02-12 01:38:43
|
In my opinion, I don't recommend applying the current JXL lossy mode (VarDCT) to non-photo content.
For some types of drawing content, VarDCT works very well at `-d 0.5~1.0 -e 8`,
but for other types I think you still need to wait for the next quality and feature improvements
(non-photo quality improvements and automatic palette features).
For now you can try the different lossy modes in JXL:
lossy modular (XYB, Haar-like (Squeeze) transform, PSNR-tuned),
lossy palette (--lossy-palette, still very experimental).
But I guess the best practice is to wait for JXL to complete the next quality and feature improvements for non-photo content.
Lossy palette + dither (pngquant) is also a good method,
but sometimes it produces very noticeable errors.
(You can also try AV1 palette prediction, but I can't talk about that too much in this channel.)
|
|
|
cucumber
|
|
lithium
In my opinion, I don't recommend applying the current JXL lossy mode (VarDCT) to non-photo content.
For some types of drawing content, VarDCT works very well at `-d 0.5~1.0 -e 8`,
but for other types I think you still need to wait for the next quality and feature improvements
(non-photo quality improvements and automatic palette features).
For now you can try the different lossy modes in JXL:
lossy modular (XYB, Haar-like (Squeeze) transform, PSNR-tuned),
lossy palette (--lossy-palette, still very experimental).
But I guess the best practice is to wait for JXL to complete the next quality and feature improvements for non-photo content.
Lossy palette + dither (pngquant) is also a good method,
but sometimes it produces very noticeable errors.
(You can also try AV1 palette prediction, but I can't talk about that too much in this channel.)
|
|
2022-02-18 06:39:09
|
sorry for the delayed reply, but I wanted to say thanks!
|
|
2022-02-18 06:39:28
|
that's a very helpful reply and in a lot more depth than I could've asked for, so I really appreciate it
|
|
|
BlueSwordM
|
2022-02-18 06:53:55
|
As expected from Lee.
|
|
|
lithium
|
2022-02-19 07:43:07
|
I opened a new libjxl issue about lossy palette (delta palette).
I think it would be great if you could switch between VarDCT mode and lossy palette mode
for different non-photo content.
|
|
|
|
necros
|
2022-03-13 02:06:15
|
are these the correct params for lossless conversion of jpg to jxl? "-d 0.0 -e 9"
|
|
|
BlueSwordM
|
|
necros
are these the correct params for lossless conversion of jpg to jxl? "-d 0.0 -e 9"
|
|
2022-03-13 02:08:36
|
Yes.
|
|
|
|
necros
|
2022-03-13 02:12:53
|
Strange, but IrfanView shows the result file has double the number of unique colors in the picture. Plugin bug?
|
|
|
The_Decryptor
|
2022-03-13 02:14:03
|
cjxl does lossless JPEG transcoding by default; you only need `-d 0` to get lossless encoding of PNG (or similar) input
|
|
|
_wb_
|
|
necros
Strange, but IrfanView shows the result file has double the number of unique colors in the picture. Plugin bug?
|
|
2022-03-13 07:12:27
|
No, those are just the different results you get with different JPEG decoders (libjxl and, I assume, libjpeg-turbo). The JPEG format allows decoders some freedom, so it is not guaranteed to produce identical pixels when different decoders are used.
|
|
|
|
Deleted User
|
2022-05-13 01:41:28
|
<@794205442175402004> I wonder if this is possible with Cloudinary (and if you know whether it is possible):
Assume a lossless source image uploaded to the media library. Can I deliver it as
- JXL (q93) if the browser supports it
- JPEG (q93) if the image has no transparency
- WebP (q95) otherwise
|
|
|
_wb_
|
2022-05-13 01:46:19
|
yes, `f_auto,q_93` would have that behavior (more or less; I'm not sure what cwebp quality q_93 maps to but probably something like 95) but you'd need your f_auto configuration to be set to include jxl and webp but not avif, and we don't have a self-service way of doing f_auto configuration yet, I think, so you'd need to open a ticket with customer support to do that at the moment.
|
|
|
|
Deleted User
|
2022-05-13 01:48:51
|
ok, thx! I'll open a ticket once I actually need that. ^^
|
|
|
Fraetor
|
2022-05-15 03:11:49
|
Is there a maximum distance that can be used? I'm getting an error about an invalidly freed pointer.
|
|
2022-05-15 03:18:18
|
I get the same error with any distance of 20 or more.
|
|
2022-05-15 03:35:22
|
OK, so it encodes if the effort is 7 or less, and a different image encodes fine at e 9.
|
|
2022-05-15 03:37:48
|
Hmm, I think I'll open an issue about this.
|
|
|
_wb_
|
2022-05-15 03:43:34
|
The issue is that e>7 combined with resampling doesn't work, and at very high distance it does 2x resampling by default
|
|
|
Fraetor
|
2022-05-15 03:48:58
|
Ah
|
|
|
_wb_
The issue is that e>7 combined with resampling doesn't work, and at very high distance it does 2x resampling by default
|
|
2022-05-15 03:49:36
|
So it is this issue then?
https://github.com/libjxl/libjxl/issues/330
|
|
|
_wb_
The issue is that e>7 combined with resampling doesn't work, and at very high distance it does 2x resampling by default
|
|
2022-05-15 03:51:34
|
Is there any heuristic other than distance that decides if it does resampling? I've got some images encoding, and some not, with the same settings.
|
|
2022-05-15 04:09:24
|
The one that works is much higher resolution though.
|
|
|
_wb_
|
2022-05-15 04:15:39
|
Interesting. Could be an indication of where the bug is
|
|
|
Fraetor
|
2022-05-15 04:23:15
|
I'm seeing if I can produce a simpler image that encodes, then I'll attach it to the bug report.
|
|
|
|
Deleted User
|
|
_wb_
yes, `f_auto,q_93` would have that behavior (more or less; I'm not sure what cwebp quality q_93 maps to but probably something like 95) but you'd need your f_auto configuration to be set to include jxl and webp but not avif, and we don't have a self-service way of doing f_auto configuration yet, I think, so you'd need to open a ticket with customer support to do that at the moment.
|
|
2022-05-18 01:26:30
|
I guess Cloudinary cannot do something like: if supported use /image/a.jxl else /image/b.jpg as one URL?
|
|
|
_wb_
|
2022-05-18 02:29:15
|
it can, that's what f_auto does: send different responses based on accept header and UA
|
|
|
|
Deleted User
|
2022-05-18 02:52:55
|
yes, but I mean you cannot manually specify two images like how it is possible with a picture tag, right?
|
|
|
_wb_
|
2022-05-18 04:10:29
|
ah, no, f_auto is not meant for that
|
|
2022-05-18 04:11:46
|
I think we will soon expose a way to do two (or more) really different images behind the same url, where what you get can depend on request headers including e.g. geolocation estimated from ip
|
|
2022-05-18 04:12:35
|
so you can have an image that shows someone with a thick coat when you view it in Alaska, but someone in a swimsuit when you view it in Hawaii
|
|
2022-05-18 04:13:31
|
(or some other kinds of funky localization / personalization)
|
|
2022-05-18 04:15:14
|
main use case for that is A/B testing, where marketers want to see which image "works best", so one way to do that would be to use such a url that can end up showing different images (e.g. randomly based on ip) and then keep track of conversion metrics for each variant
|
|
|
|
Deleted User
|
2022-05-18 04:19:31
|
sounds interesting. Then I hope it will support the accept header. Please ping me if this releases and you still remember.
|
|
|
_wb_
|
2022-05-18 05:21:55
|
Will do - if I remember :)
|
|
|
DZgas ะ
|
2022-05-22 09:50:57
|
``-e 9 --brotli_effort 11``
are these really the maximum-effort parameters I can enter?
|
|
|
_wb_
|
2022-05-22 10:02:33
|
No
|
|
2022-05-22 10:02:48
|
Add -E 3 -I 10 for more
|
|
|
DZgas ะ
|
|
_wb_
Add -E 3 -I 10 for more
|
|
2022-05-22 10:24:03
|
What is -I? I don't see it in the -v help
|
|
|
_wb_
|
2022-05-22 10:30:51
|
How much data it looks at for MA tree learning
|
|
|
DZgas ะ
|
|
_wb_
How much data it looks at for MA tree learning
|
|
2022-05-22 10:32:53
|
surprisingly it is not as much slower as I expected; alas, it can't squeeze any more out of the picture
|
|
|
Traneptora
|
2022-05-26 09:06:59
|
does `--brotli_effort` do anything to the codestream or only the brob boxes
|
|
|
_wb_
|
2022-05-26 09:44:54
|
Iirc it's brob and jbrd
|
|
|
study169
|
2022-05-26 10:08:26
|
I have some RGB24 images. If I use cjxl to do lossy compression, it works fine. But if I call the JPEG XL API to do the compression, the color changes a little bit with the same distance setting. The color controls that I used were from the example encode_oneshot.cc.
|
|
2022-05-26 10:08:35
|
JxlColorEncoding color_encoding = {};
JxlColorEncodingSetToSRGB(&color_encoding, /*is_gray=*/pixel_format.num_channels < 3);
if (JXL_ENC_SUCCESS != JxlEncoderSetColorEncoding(enc.get(), &color_encoding))
{
fprintf(stderr, "JxlEncoderSetColorEncoding failed\n");
return false;
}
|
|
2022-05-26 10:10:16
|
I also tried cjxl_ng, which I cloned and built a few months ago. cjxl_ng can encode the image without changing the colors, but it scales them to RGB48 (16 bits per channel).
|
|
2022-05-26 10:11:15
|
Any suggestions for the color encoding controls for RGB24 lossy compression? Thanks a lot for any suggestions.
|
|
|
_wb_
|
2022-05-27 05:29:50
|
What colorspace are your images in?
|
|
2022-05-27 05:29:59
|
Probably not sRGB then
|
|
|
DZgas ะ
|
2022-05-27 12:28:05
|
I have problems compressing this image. It has 3 colors; I use -m -g 3 -P 15 --palette 3,
but the PNG turns out half the size
|
|
2022-05-27 12:32:45
|
is there any function that I don't know about that would brute-force modular compression?
|
|
|
yurume
|
2022-05-27 12:38:18
|
I think you are missing `-d 0`
|
|
2022-05-27 12:39:12
|
in fact `-e 9 -d 0` is enough for me to get 5096 bytes (which is not much different from 5089 bytes for `-e 9 -d 0 -m -g 3 -P 15 --palette 3`)
|
|
|
DZgas ะ
|
|
yurume
in fact `-e 9 -d 0` is enough for me to get 5096 bytes (which is not much different from 5089 bytes for `-e 9 -d 0 -m -g 3 -P 15 --palette 3`)
|
|
2022-05-27 12:49:41
|
thanks, and i used -I 1 and got 5060 bytes
|
|
|
monad
|
2022-05-27 01:28:03
|
`-d 0 -e 9 -I 1 -P 0` : 4977B
|
|
|
study169
|
|
I have some RGB24 images. If I use cjxl to do lossy compression, it works fine. But if I call the JPEG XL API to do the compression, the color changes a little bit with the same distance setting. The color controls that I used were from the example encode_oneshot.cc.
|
|
2022-05-27 01:31:59
|
It turned out to be my own issue. I used OpenCV to write the RGB24 image extracted from a point cloud for cjxl, and forgot that OpenCV's imwrite always assumes BGR. There is nothing wrong with the JPEG XL encoder controls. Sorry for the false alarm.
|
|
|
_wb_
|
2022-05-27 02:19:49
|
Yeah, OpenCV assumes BGR
|
|
|
Traneptora
|
2022-05-27 04:01:00
|
this is one of those use cases where it would be nice to request planar output in any order
|
|
2022-05-27 04:01:17
|
so you could request planar GBR or BGR or RGB
|
|
|
_wb_
|
2022-05-27 04:08:52
|
You mean interleaved?
|
|
|
Traneptora
|
2022-05-27 05:43:59
|
no, I mean planar
|
|
2022-05-27 05:44:16
|
I figure it would be part of the patchset
|
|
|
_wb_
|
2022-05-27 07:55:47
|
In planar you'd probably just get one buffer per plane, order doesn't really matter then
|
|
|
Traneptora
|
2022-05-28 01:14:48
|
I guess, yea
|
|
2022-05-28 01:14:55
|
order wouldn't matter since you'd just get a pointer
|
|
|
DZgas ะ
|
2022-08-08 11:40:55
|
-e 9 is the maximum for VarDCT? Can't I brute-force the compression? Maybe there is some debug function for this?
|
|
|
|
JendaLinda
|
2022-08-08 11:46:13
|
It looks like more effort in VarDCT improves quality at the same file size.
|
|
|
monad
|
2022-08-09 07:19:38
|
The encoder always targets the quality/distance you request. With higher effort it can be more accurate. The relationship between effort and file size is not guaranteed.
|
|
2022-08-09 07:21:00
|
`-E 3` can also benefit VarDCT mode.
|
|
|
Traneptora
|
2022-08-14 05:29:05
|
Keep in mind that higher effort also means more *consistent* quality to bpp
|
|
2022-08-14 05:29:15
|
that is, you're more likely to achieve the target you desired
|
|
2022-08-14 05:29:39
|
when low efforts produce smaller files at the same distance it typically means that the achieved quality is lower than the requested one
|
|
|
|
gurpreetgill
|
2022-09-01 02:59:39
|
For encoding progressive JXL images, do I only need to set the `JXL_ENC_FRAME_SETTING_RESPONSIVE` param to `1`? There are other params, `JXL_ENC_FRAME_SETTING_PROGRESSIVE_AC`, `JXL_ENC_FRAME_SETTING_QPROGRESSIVE_AC`... but I am not sure what they are for! I am a newbie to this and still gaining context, so thanks in advance.
I did notice that when the `cjxl --progressive` param is passed, it sets multiple progressive encoding params: https://github.com/libjxl/libjxl/blob/0103e5a901b776c46b55081fd6b647fbdd618472/tools/cjxl_main.cc#L891-L894
|
|
|
yurume
|
2022-09-01 03:32:15
|
technically speaking there is no single "progressive" mode in jxl, it provides a way to chop different bits of data and merge them at the decoding end. those options determine how to chop them.
|
|
2022-09-01 03:33:13
|
I guess options set by `--progressive` are fine defaults.
|
|
|
_wb_
|
2022-09-01 03:43:44
|
even just the options set by doing nothing at all are not too bad: you always get DC first, which for most use cases I think is actually "progressive enough"
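As an illustration of why DC-first makes a usable preview (a sketch, not libjxl code): the DC coefficient of an 8x8 DCT block is proportional to the block mean, so the DC image is roughly a 1:8 box downscale of the full image:

```c
/* Illustration only, not libjxl code: averaging each 8x8 block
 * approximates the DC image a progressive decoder can show first.
 * Assumes w and h are multiples of 8; img is grayscale, row-major. */
void dc_preview(const unsigned char *img, int w, int h, double *dc) {
    int bw = w / 8, bh = h / 8;
    for (int by = 0; by < bh; by++) {
        for (int bx = 0; bx < bw; bx++) {
            long sum = 0;
            for (int y = 0; y < 8; y++)
                for (int x = 0; x < 8; x++)
                    sum += img[(by * 8 + y) * w + bx * 8 + x];
            dc[by * bw + bx] = sum / 64.0; /* block mean ~ DC term */
        }
    }
}
```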
|
|
|
|
gurpreetgill
|
2022-09-01 04:39:31
|
so setting `JXL_ENC_FRAME_SETTING_RESPONSIVE` to `1` should be good defaults then?
|
|
|
_wb_
|
2022-09-01 04:57:14
|
uh no, that forces modular to use squeeze
|
|
2022-09-01 04:57:28
|
the best default is not setting anything
|
|
|
Fraetor
|
2022-09-01 09:41:42
|
Is there a good way to view progressive images at different load amounts? I tried just truncating the files then viewing with a gdk-pixbuf viewer, but they only show black with an incomplete progressive file.
|
|
2022-09-01 09:42:33
|
(Interestingly a non-progressive file does show the blocks loading sequentially.)
|
|
|
_wb_
|
2022-09-01 10:09:09
|
https://github.com/libjxl/libjxl/blob/main/examples/decode_progressive.cc
|
|
|
Fraetor
|
2022-09-01 10:55:06
|
Thanks.
|
|
|
Moritz Firsching
|
|
Fraetor
Is there a good way to view progressive images at different load amounts? I tried just truncating the files then viewing with a gdk-pixbuf viewer, but they only show black with in incomplete progressive file.
|
|
2022-09-02 06:16:00
|
Should also work in the very latest chrome when you pass a truncated file. The patch for that was merged yesterday
|
|
|
Traneptora
|
|
Moritz Firsching
Should also work in the very latest chrome when you pass a truncated file. The patch for that was merged yesterday
|
|
2022-09-03 07:24:17
|
does this have anything to do with whether chrome decided to go with JXL or not?
|
|
|
_wb_
|
2022-09-03 07:50:06
|
Afaik that decision still has not been communicated.
|
|
|
DZgas ะ
|
2022-11-03 11:41:37
|
so, six months have passed and I want to ask again: are the *-e 9 -E 3* parameters the maximum possible for VarDCT compression?
|
|
|
_wb_
|
2022-11-03 12:10:35
|
For VarDCT -E 3 does not really make a difference
|
|
2022-11-03 12:11:18
|
Also -e 9 does not seem to be better than -e 8 for vardct mode, just slower, according to my testing
|
|
2022-11-03 12:14:07
|
We don't really have an exhaustive and very slow encoder for jxl, like the slow settings of avif and heic reference software.
|
|
|
DZgas ะ
|
|
_wb_
For VarDCT -E 3 does not really make a difference
|
|
2022-11-03 12:14:24
|
0.5 %
|
|
|
_wb_
|
2022-11-03 12:15:14
|
Really? I wonder how, that would be just on DC then I guess...
|
|
|
DZgas ะ
|
|
_wb_
We don't really have an exhaustive and very slow encoder for jxl, like the slow settings of avif and heic reference software.
|
|
2022-11-03 12:15:30
|
I would like some kind of brute-force quality algorithm
|
|
|
_wb_
|
2022-11-03 12:15:35
|
Then you can also try -I 100, I suppose
|
|
|
DZgas ะ
|
2022-11-03 12:16:13
|
I just wanted to do a little test for the forum, so I asked <#805176455658733570>
|
|
|
_wb_
|
2022-11-03 12:16:20
|
Bruteforce full exhaustive would be extremely slow, even slower than slowest avif. The bitstream is very expressive so the search space is huge.
|
|
|
DZgas ะ
|
|
_wb_
Bruteforce full exhaustive would be extremely slow, even slower than slowest avif. The bitstream is very expressive so the search space is huge.
|
|
2022-11-03 12:16:59
|
how many days per 512x512 image?
|
|
|
_wb_
|
2022-11-03 12:18:10
|
Nobody tried making something like that but it could be more than the lifetime of the universe to really find the optimal jxl encoding (according to some metric, obviously) for a single image
|
|
|
DZgas ะ
|
2022-11-03 12:19:16
|
it would be very interesting to me to use an exhaustive search of all the options at the lowest quality setting, because for AVIF that really works well at extremely low quality, when the codec can spend tens of minutes searching for the right prediction angles and the like
|
|
2022-11-03 12:20:22
|
I would just like some parameter like "--compress as much as possible", so that there is warmth in the soul
|
|
|
_wb_
|
2022-11-03 12:23:39
|
This would be interesting indeed, though maybe not a priority for practical use cases
|
|
|
DZgas ะ
|
|
_wb_
This would be interesting indeed, though maybe not a priority for practical use cases
|
|
2022-11-03 12:24:28
|
but it gives false confidence that you are *getting the best you could get*
|
|
|
_wb_
|
2022-11-03 12:55:30
|
For lossy, that is kind of inevitable, since the perfect metric to optimize for does not exist
|
|
|
Traneptora
|
2022-11-03 01:06:08
|
it makes more sense for lossless, as it's literally only a file-size / encode-time tradeoff
|
|
2022-11-03 01:06:13
|
the metric is the file size
|
|
|
DZgas ะ
|
2022-11-03 01:14:18
|
for lossless it is not interesting
|
|
|
|
JendaLinda
|
2022-11-03 05:24:17
|
Lossless is interesting. Right now I have to use PNG because I have no choice. Letting PNGOUT chew on a bunch of PNGs overnight is silly.
|
|
|
ziemek.z
|
|
JendaLinda
Lossless is interesting. Right now I have to use PNG because I have no choice. Letting PNGOUT chew on a bunch of PNGs overnight is silly.
|
|
2022-11-04 06:37:07
|
just use pingo™
|
|
|
Traneptora
|
2022-11-04 10:02:32
|
same thing applies, letting a png optimizer squeeze the last few bits out of a PNG is kind of wasteful when you could just use lossless modular
|
|
|
DZgas ะ
|
2022-11-10 09:52:58
|
is it true that if I limit the modular palette to 64 colors, indices will be 6 bits, or 256 colors, 8 bits? Is there any additional efficiency if I choose exactly 256 colors rather than 258? Any overhead?
|
|
|
_wb_
|
2022-11-10 10:05:40
|
You don't need palette sizes to be a power of two to make entropy coding efficient. It is not based on bit packing, so the cost of more colors smoothly increases, not in a step function where there's a bigger penalty to expand the range by one bit than to expand it within the same bit width.
|
|
|
DZgas ะ
|
2022-11-10 10:06:37
|
Sounds great
|
|
|
_wb_
|
2022-11-10 10:09:23
|
Note that libjxl does not perform any color reduction itself when using the regular palette, it only applies it if it can be done in a lossless way. With delta palette it does apply loss, but that kind of palette works differently than a simple gif/png palette (palette entries can be either a color or a set of deltas w.r.t. a predictor, so gradients can be done cheaply without needing dithering or many palette entries).
|
|
|
DZgas ะ
|
2022-11-10 10:21:30
|
according to my tests, cheetah (-e 3) is now the best choice for compressing images at quality 90-95, mainly due to the lack of complex anti-aliasing filters that spoil and blur the image at such high quality, while there is already effective splitting into VarDCT blocks
|
|
|
|
JendaLinda
|
2022-11-10 10:26:08
|
For pixel art, it's worth trying to increase the number of palette colors, so the whole image is encoded just using indexed color. It may help a lot.
|
|
|
DZgas ะ
|
|
JendaLinda
For pixel art, it's worth trying to increase the number of palette colors, so the whole image is encoded just using indexed color. It may help a lot.
|
|
2022-11-10 10:28:19
|
the number of colors is automatically determined by the encoder, based on all the colors in the picture
|
|
|
|
JendaLinda
|
2022-11-10 10:32:13
|
By default, the number of colors in palette is limited. I think the default limit was 1024.
|
|
|
DZgas ะ
|
|
DZgas ะ
according to my tests, cheetah (-e 3) is now the best choice for compressing images at quality 90-95, mainly due to the lack of complex anti-aliasing filters that spoil and blur the image at such high quality, while there is already effective splitting into VarDCT blocks
|
|
2022-11-10 10:32:25
|
at e5, some formulas start being used to generate the color channels, and in some of my examples this is noticeable, not for the better; so at quality 95+ it is better to use e 3, completely without any transformations, though I would not refuse variable blocks... in fact, I will have to start testing JPEG XL at speed 3 against JPEG XR. It is important.
|
|
|
JendaLinda
By default, the number of colors in palette is limited. I think the default limit was 1024.
|
|
2022-11-10 10:32:46
|
no limit
|
|
|
|
JendaLinda
|
2022-11-10 10:33:42
|
There used to be a limit. It made a difference when I tested it.
|
|
2022-11-10 10:39:14
|
Anyway, I suppose there has to be some limit. It doesn't make much sense to create a palette of millions of colors.
|
|
|
DZgas ะ
|
|
JendaLinda
There used to be a limit. It made a difference when I tested it.
|
|
2022-11-10 10:42:17
|
before is before; the codec is in active development. As far as I know, the lossless compression module is now completely ready, and I'm already using it. But the more massive VarDCT module is still, of course... well, you can use it. But I don't use it
|
|
|
JendaLinda
Anyway, I suppose there has to be some limit. It doesn't make much sense to create a palette of millions of colors.
|
|
2022-11-10 10:42:53
|
the limit depends on how many colors were in the source
|
|
|
_wb_
|
|
JendaLinda
Anyway, I suppose there has to be some limit. It doesn't make much sense to create a palette of millions of colors.
|
|
2022-11-10 10:44:52
|
Iirc default limit is still 1024. Maybe it was lowered to 512. Bigger palettes can be useful, but the current heuristic just applies the palette when below the limit, so we need a conservative bound to avoid cases where palette is counterproductive.
|
|
|
DZgas ะ
|
|
_wb_
Iirc default limit is still 1024. Maybe it was lowered to 512. Bigger palettes can be useful, but the current heuristic just applies the palette when below the limit, so we need a conservative bound to avoid cases where palette is counterproductive.
|
|
2022-11-10 10:46:50
|
Are you talking about VarDCT now?
|
|
|
_wb_
|
2022-11-10 10:47:04
|
No, palette is a modular thing
|
|
|
DZgas ะ
|
|
_wb_
No, palette is a modular thing
|
|
2022-11-10 10:47:46
|
What if my lossless image has more than 1024 colors?
|
|
|
|
JendaLinda
|
2022-11-10 10:48:44
|
So I recall it correctly. The default limit is not listed in the help at the moment.
|
|
|
DZgas ะ
|
2022-11-10 10:49:23
|
I don't understand how a default limit can exist at all
|
|
|
|
JendaLinda
|
2022-11-10 10:50:53
|
I suppose other compression strategies are usually more efficient in pictures with more colors.
|
|
2022-11-10 10:57:28
|
I tried VarDCT mostly on scanned documents and analog video captures. Near lossless is okay for such content. It's not worth wasting bits in lossless on the noise introduced by analog media.
|
|
|
DZgas ะ
|
|
JendaLinda
I tried VarDCT mostly on scanned documents and analog video captures. Near lossless is okay for such content. It's not worth wasting bits in lossless on the noise introduced by analog media.
|
|
2022-11-10 11:03:44
|
I used JPEGXR with the same result
|
|
2022-11-10 11:04:20
|
now I would most likely use AVIF for documents. because of all its vector-like functions
|
|
|
|
JendaLinda
|
2022-11-10 11:09:37
|
The JXL images are indistinguishable from the original, so it worked well.
|
|
|
DZgas ะ
at e5, some formulas start being used to generate the color channels, and in some of my examples this is noticeable, not for the better; so at quality 95+ it is better to use e 3, completely without any transformations, though I would not refuse variable blocks... in fact, I will have to start testing JPEG XL at speed 3 against JPEG XR. It is important.
|
|
2022-11-10 10:21:45
|
and all in all, e6 turned out to be the best compromise between speed and quality for me; better than e7 due to the lack of more complex functions that give almost zero gain... but the encoding speed at e3 is really amazing. That's fast...
|
|
2022-11-17 02:00:14
|
To compress this pic, will the ```-q 100 -e 9 -g 3``` command be the most efficient possible?
Or are there any hidden flags I missed?
|
|
2022-11-17 02:06:16
|
no wait, I found that -g 2 compressed much better
|
|
2022-11-17 02:08:23
|
and new bug
|
|
2022-11-17 02:09:39
|
Or maybe I didn't understand what it was... why does it work like this?
cjxl.exe -m 1 -e 9
JPEG XL encoder v0.8.0 72af6c7 [Unknown]
Read 698x365 image, 60449 bytes, 107.6 MP/s
Encoding [Modular, d1.000, effort: 9],
Compressed to 25337 bytes (0.796 bpp).
698 x 365, 0.12 MP/s [0.12, 0.12], 1 reps, 4 threads.
cjxl.exe -q 100 -e 9
JPEG XL encoder v0.8.0 72af6c7 [Unknown]
Read 698x365 image, 60449 bytes, 127.5 MP/s
Encoding [Modular, lossless, effort: 9],
Compressed to 22037 bytes (0.692 bpp).
698 x 365, 0.03 MP/s [0.03, 0.03], 1 reps, 4 threads.
|
|
|
Demiurge
|
2022-11-17 04:06:12
|
Is there a need to specify group size with -g flag?
|
|
|
_wb_
|
2022-11-17 04:11:26
|
The default should be reasonable, sometimes different values give better compression (but bigger group sizes also means less opportunity for parallel decode)
|
|
|
DZgas ะ
|
|
_wb_
The default should be reasonable, sometimes different values give better compression (but bigger group sizes also means less opportunity for parallel decode)
|
|
2022-11-17 07:12:43
|
-g 2 in my cases compresses better by 5-10%, which is a huge difference
|
|
|
DZgas ะ
Or maybe I didn't understand what it was... why does it work like this?
cjxl.exe -m 1 -e 9
JPEG XL encoder v0.8.0 72af6c7 [Unknown]
Read 698x365 image, 60449 bytes, 107.6 MP/s
Encoding [Modular, d1.000, effort: 9],
Compressed to 25337 bytes (0.796 bpp).
698 x 365, 0.12 MP/s [0.12, 0.12], 1 reps, 4 threads.
cjxl.exe -q 100 -e 9
JPEG XL encoder v0.8.0 72af6c7 [Unknown]
Read 698x365 image, 60449 bytes, 127.5 MP/s
Encoding [Modular, lossless, effort: 9],
Compressed to 22037 bytes (0.692 bpp).
698 x 365, 0.03 MP/s [0.03, 0.03], 1 reps, 4 threads.
|
|
2022-11-17 07:13:19
|
<@794205442175402004> I wanted to ask why -m 1 does not imply -q 100
|
|
|
_wb_
The default should be reasonable, sometimes different values give better compression (but bigger group sizes also means less opportunity for parallel decode)
|
|
2022-11-17 07:15:31
|
and I would like to ask: why is there no algorithm (other than brute force) for finding the best group size?
|
|
|
_wb_
|
|
DZgas ะ
<@794205442175402004> I wanted to ask why -m 1 does not imply -q 100
|
|
2022-11-17 07:17:10
|
Would it make more sense if modular implied lossless by default?
|
|
|
DZgas ะ
and I would like to ask: why is there no algorithm (other than brute force) for finding the best group size?
|
|
2022-11-17 07:18:29
|
I haven't found a heuristic yet for that, but also changing it impacts parallelization potential so maybe shouldn't be done automatically...
|
|
|
DZgas ะ
|
|
_wb_
Would it make more sense if modular implied lossless by default?
|
|
2022-11-17 07:19:08
|
Of course, that's what it was created for... in fact, in my particular case, the problem is that the lossy result is larger in size than the lossless one
|
|
|
_wb_
I haven't found a heuristic yet for that, but also changing it impacts parallelization potential so maybe shouldn't be done automatically...
|
|
2022-11-17 07:20:56
|
okay, for my specific pictures I have settled on -g 2, but in theory it can compress even better if you choose the exact value per image
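The brute-force search being described could be sketched in a few lines of shell. This is only an illustration: it assumes `cjxl` is on PATH, the `-g` range and the `/tmp` scratch path are arbitrary choices, and it only compares file sizes for one image.

```shell
# Sketch: try each group-size setting and report the one that
# produced the smallest file. Assumes cjxl is on PATH; the -g
# range and the /tmp scratch path are illustrative.
best_group_size() {
    local input="$1" best_g= best_size=
    for g in 0 1 2 3; do
        # encode losslessly in modular mode with this group size
        cjxl -m 1 -d 0 -g "$g" "$input" /tmp/gtest.jxl 2>/dev/null || continue
        local size
        size=$(wc -c < /tmp/gtest.jxl)
        # keep the setting that gave the smallest output so far
        if [ -z "$best_size" ] || [ "$size" -lt "$best_size" ]; then
            best_size=$size best_g=$g
        fi
    done
    echo "$best_g"
}
```

Usage would then be along the lines of `cjxl -m 1 -d 0 -g "$(best_group_size in.png)" in.png out.jxl`.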
|
|
2022-12-11 09:07:09
|
on what qualities is recommended to use lossy modular?
|
|
2022-12-11 09:16:37
|
in lossy modular mode there are such big artifacts at q 85
|
|
|
_wb_
|
2022-12-12 06:18:10
|
I don't know if lossy modular should be recommended at all, tbh.
|
|
|
username
|
2022-12-12 10:01:13
|
modular is designed for lossless while vardct is designed for lossy. trying to use lossy modular is like trying to use a png to store an image in a lossy manner
|
|
|
DZgas ะ
|
2022-12-12 10:51:48
|
In fact, lossy modular always loses to varDCT on any quality parameter
|
|
|
_wb_
|
2022-12-12 12:48:57
|
Lossy modular effectively just optimizes for psnr atm (in xyb space). Vardct has better heuristics, and also dct is just better for photographic images.
|
|
|
Demiurge
|
|
_wb_
I don't know if lossy modular should be recommended at all, tbh.
|
|
2022-12-27 09:40:20
|
I don't think there is any situation where it performs well either, at least in the encoder's current state.
|
|
2022-12-27 09:40:43
|
It should never be used until it's improved
|
|
2022-12-27 09:42:49
|
It would be nice if there was a lossy mode that did this. Making smart little edits to make the codestream more predictable and compressible.
|
|
2022-12-27 09:46:34
|
Right now lossy modular doesn't do anything like that at all. From what I understand, and I don't know that much about the encoder, really... but it seems that it basically does some sort of wavelet-based transform called Squeeze or something that doesn't seem that different in principle from varDCT mode?
|
|
|
Jyrki Alakuijala
|
|
DZgas ะ
in lossy modular mode there are such big artifacts at q 85
|
|
2022-12-29 07:17:57
|
looks like blue quant needs to be tuned
|
|
|
DZgas ะ
|
|
Jyrki Alakuijala
looks like blue quant needs to be tuned
|
|
2022-12-29 11:45:39
|
I don't think that's the only problem. modular itself just looks disgusting at this quality and below. Much worse than vardct. In fact, I'm starting to doubt why lossy even exists for modular. But until vardct has solved all its problems, modular is an option.
|
|
|
_wb_
|
2022-12-30 05:53:24
|
Lossy modular is not really designed for perception, at least the current encoder basically just optimizes psnr, the only perceptual thing being that it does that in XYB with channel weights that make some amount of sense.
|
|
2022-12-30 05:54:14
|
It exists because it's the only way to do lossy extra channels; vardct is only defined for the 3 color channels.
|
|
|
Jyrki Alakuijala
|
|
DZgas ะ
I don't think that's the only problem. modular itself just looks disgusting at this quality and below. Much worse than vardct. In fact, I'm starting to doubt why lossy even exists for modular. But until vardct has solved all its problems, modular is an option.
|
|
2022-12-30 10:16:54
|
we don't know the full potential of lossy modular -- it is likely that it will not be that competitive ever in the high loss area -- it's pretty good for lossless and it may have more potential in near-lossless and the delta-palettization approaches
|
|
|
DZgas ะ
|
|
Jyrki Alakuijala
we don't know the full potential of lossy modular -- it is likely that it will not be that competitive ever in the high loss area -- it's pretty good for lossless and it may have more potential in near-lossless and the delta-palettization approaches
|
|
2022-12-30 11:40:25
|
Unfortunately, even at quality 99 it shows worse results than VarDCT, both visually and in relative size. Modular only works if the task is to keep all the information unchanged, every pixel, i.e. lossless.
|
|
|
diskorduser
|
2022-12-30 12:59:51
|
Just don't use lossy modular. Pretend it doesn't exist.
|
|
|
Jyrki Alakuijala
|
|
DZgas ะ
Unfortunately, even at quality 99 it shows worse results than VarDCT, both visually and in relative size. Modular only works if the task is to keep all the information unchanged, every pixel, i.e. lossless.
|
|
2022-12-30 01:25:39
|
you are correct -- that is the current state, VarDCT is our workhorse for lossy
|
|
|
DZgas ะ
|
|
diskorduser
Just don't use lossy modular. Pretend it doesn't exist.
|
|
2022-12-30 04:38:53
|
I want to note that the -m flag that activates modular mode actually encodes lossy by default
|
|
|
Demiurge
|
|
diskorduser
Just don't use lossy modular. Pretend it doesn't exist.
|
|
2023-01-01 09:21:33
|
honestly it should be harder for people to use bad settings like this so that people who don't know what they are doing (commonly referred to as idiots) don't go around creating and distributing poorly encoded images
|
|
2023-01-01 09:31:43
|
Maybe at the very least some sort of warning to alert any potential idiots that they are needlessly and avoidably degrading image quality.
|
|
|
Traneptora
|
|
Demiurge
honestly it should be harder for people to use bad settings like this so that people who don't know what they are doing (commonly referred to as idiots) don't go around creating and distributing poorly encoded images
|
|
2023-01-02 12:20:45
|
don't conflate uneducated people with stupid people
|
|
2023-01-02 12:21:04
|
someone isn't an idiot cause they don't know how to encode a jxl
|
|
|
Demiurge
|
2023-01-02 12:48:03
|
That doesn't stop intelligent yet uninformed people from being perceived as and called idiots by others. Likewise, it doesn't stop unintelligent people from being perceived as smarter than they really are due to their acquired wisdom.
|
|
2023-01-02 12:48:51
|
Intelligence and wisdom are two separate things: intelligence is more subtle, wisdom more obvious and practical. Unwise people will still be called fools no matter how capable of learning they are.
|
|
|
diskorduser
|
|
Demiurge
honestly it should be harder for people to use bad settings like this so that people who don't know what they are doing (commonly referred to as idiots) don't go around creating and distributing poorly encoded images
|
|
2023-01-02 12:51:14
|
Those people are not going to use cjxl/djxl. So that's not a problem.
|
|
|
Demiurge
|
2023-01-02 01:46:26
|
They will.
|
|
|
_wb_
|
2023-01-02 01:53:43
|
If you use cjxl, just use the options you see in the default help screen
|
|
2023-01-02 01:55:02
|
If you use options hidden behind one or more --verbose --help, it's to be expected that many choices will be worse than default
|
|
2023-01-02 01:55:41
|
This is the same in any tool that has advanced settings
|
|
|
Demiurge
|
2023-01-04 06:32:42
|
I just noticed yesterday while at a friend's house, he was using irfanview, and there was a checkbox "use modular"
|
|
2023-01-04 06:33:37
|
For some reason that option is extremely inexplicably attractive to a certain kind of person
|
|
2023-01-04 06:34:12
|
Even though it should never be used, CERTAIN people are somehow inexplicably drawn to it
|
|
|
Jyrki Alakuijala
|
|
DZgas ะ
i have problems compressing this image, it has 3 colors, i use -m -g 3 -P 15 --palette 3
but the PNG turns out 2 times smaller
|
|
2023-01-09 10:33:58
|
we could perhaps use larger context trees for very small color counts
PNG does that through pixel packing, 3 colors would be packed in PNG to 2 bits, i.e., entropy coding would compress a joint-distribution of 4 pixels -- a competing entry would need a context tree of 3 neighbouring pixels
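The packing arithmetic above is easy to check. This is only back-of-envelope shell arithmetic; the 698×365 dimensions are reused from the earlier encoder log purely as an illustration, not the actual dimensions of the 3-color image in question.

```shell
# Back-of-envelope: a 3-colour image stored as a 2-bit palette PNG
# packs 4 pixels per byte, per scanline, before DEFLATE even runs.
# The 698x365 dimensions are illustrative, reused from the log above.
width=698 height=365
bits_per_pixel=2                                  # 3 colours fit in 2 bits
row_bytes=$(( (width * bits_per_pixel + 7) / 8 )) # PNG pads each scanline to a whole byte
packed_bytes=$(( row_bytes * height ))
echo "$packed_bytes bytes packed vs $(( width * height )) bytes at 8 bits per pixel"
```

So before entropy coding, the palette representation is already a quarter of the one-byte-per-pixel layout, which is the head start a context tree over neighbouring pixels would have to match.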
|
|
|
|
JendaLinda
|
|
Demiurge
I just noticed yesterday while at a friend's house, he was using irfanview, and there was a checkbox "use modular"
|
|
2023-01-09 04:09:04
|
Good news, the modular option was removed in the recent version of IrfanView. It was replaced by progressive encoding.
|
|
|
Demiurge
|
2023-01-09 04:11:20
|
Haha, wow, I wonder if it's somehow related to how I complain way too much about little things like that...
|
|
2023-01-09 04:12:06
|
But that's definitely a nice improvement!
|
|
|
fab
|
2024-04-02 07:01:12
|
yes my messages read badly, like gsehewuwebufbuy, because I'm not native. anyway I finished contributions. bye all
|
|
2024-04-02 07:03:42
|
also words in talk aren't in order, like byefwhfqbu
|
|
2024-04-02 07:39:34
|
I discovered that mental brain encoding in some disability is a disability also
|
|
2024-04-02 07:39:35
|
https://www.threads.net/@rehabscience/post/C5OH45oruJg/
|
|
2024-04-02 02:48:58
|
|
|
2024-04-02 02:49:01
|
Effort 9 at fast speed is starting to be good
|
|
2024-04-02 02:49:16
|
VMAF 66, amazingly high
|
|
|
|
necros
|
2024-07-21 12:21:51
|
re all, can you suggest a command line to convert HEIC files to visually lossless JXL or AVIF?
|
|
|
Quackdoc
|
2024-07-21 12:24:33
|
visually lossless is kinda a bad term since it depends on the image and the viewer of said image, I typically encode using `-e 7 -d 0.5 --progressive_dc=1 --brotli_effort=10`
|
|
2024-07-21 12:25:32
|
but first you need to decode the HEIC file to an intermediate, or just use ffmpeg and lose the `progressive_dc` and `brotli_effort` options
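The intermediate-file route could be sketched like this. It is only a sketch under assumptions: `ffmpeg` and `cjxl` are on PATH, the distance/effort values follow the suggestion above, and the `.heic` suffix handling and temp path are simplistic.

```shell
# Sketch: HEIC -> JXL via a PNG intermediate, so cjxl keeps its
# progressive_dc / brotli_effort options. Assumes ffmpeg and cjxl
# are on PATH; settings follow the suggestion above, adjust -d to taste.
convert_heic() {
    local src="$1"
    local tmp
    tmp="$(mktemp -u).png"            # temp PNG intermediate
    ffmpeg -y -i "$src" "$tmp" &&
        cjxl -e 7 -d 0.5 --progressive_dc=1 --brotli_effort=10 \
            "$tmp" "${src%.heic}.jxl" # output next to the source
    rm -f "$tmp"                      # clean up the intermediate
}
```

Then something like `for f in *.heic; do convert_heic "$f"; done` would process a folder.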
|
|
|
jonnyawsom3
|
2024-07-21 01:07:29
|
You can set effort and distance in ffmpeg
|
|
2024-07-21 01:07:31
|
With `-effort` and `-distance` surprisingly
|
|
|
_wb_
|
2024-07-21 01:17:00
|
Everything depends on how lossy the HEIC files are to begin with. Transcoding in general is tricky. It only makes sense if you pick a quality setting that produces a smaller file and doesn't waste bits on getting the compression artifacts of the first codec exactly replicated. But you probably also don't want to set it too low and introduce too much generation loss.
|
|
|
|
SwollowChewingGum
|
2024-07-21 04:06:56
|
Since when has this channel existed? I've never noticed it
|
|
|
๐๐ฆ๐๐ฃ๐ธ๐ฅ๐ฉ๐ฏ๐ฆ | ๆไธ่ชฟๅใฎไผๆญ่ | ็ฐ่ญฐใฎๅ็ด
|
|
necros
re all, can you suggest a command line to convert HEIC files to visually lossless JXL or AVIF?
|
|
2024-07-22 05:23:27
|
https://kb.ltgc.cc/ltgc/encode.html#lossy-still
might prove helpful to you, though I forgot to update it with the "fake lossless" portion
depending on the input, a distance between `0.1` and `0.5` should work, and modular mode could be applied for lower distance values
|
|
|
Quackdoc
|
2024-07-22 06:21:31
|
the mental deterioration of fab "coincidentally" almost perfectly lines up with his apparent addiction to social media in the av1 server
|
|
|
๐๐ฆ๐๐ฃ๐ธ๐ฅ๐ฉ๐ฏ๐ฆ | ๆไธ่ชฟๅใฎไผๆญ่ | ็ฐ่ญฐใฎๅ็ด
|
2024-07-30 06:35:29
|
ppl may find this useful I guess
|
|
|
A homosapien
|
2024-07-30 09:59:58
|
Lossy webp should NOT be used for archival purposes. TV range 4:2:0 images should be delivery only.
|
|
|
|
JendaLinda
|
|
A homosapien
Lossy webp should NOT be used for archival purposes. TV range 4:2:0 images should be delivery only.
|
|
2024-07-30 10:15:20
|
Unless you want to archive screenshots from VHS tapes
|
|
|
๐๐ฆ๐๐ฃ๐ธ๐ฅ๐ฉ๐ฏ๐ฆ | ๆไธ่ชฟๅใฎไผๆญ่ | ็ฐ่ญฐใฎๅ็ด
|
|
A homosapien
Lossy webp should NOT be used for archival purposes. TV range 4:2:0 images should be delivery only.
|
|
2024-07-31 02:28:26
|
on that front it's better to just go with near-lossless
I just didn't know which word to use for that balance point
|
|
2024-07-31 02:29:39
|
though for near-lossless, WebP is again out of the question; even with its dedicated near-lossless mode it's too inefficient
|
|
|
yoochan
|
2024-07-31 07:13:31
|
Lossless webp is sometimes better than lossless jxl, especially on screenshots of old games
|
|
|
๐๐ฆ๐๐ฃ๐ธ๐ฅ๐ฉ๐ฏ๐ฆ | ๆไธ่ชฟๅใฎไผๆญ่ | ็ฐ่ญฐใฎๅ็ด
|
|
yoochan
Lossless webp is sometimes better than lossless jxl, especially on screenshots of old games
|
|
2024-07-31 09:28:02
|
it excels at stuff with low entropy I guess, I'm observing the same trend with a lot of my screenshots
|
|
|
|
JendaLinda
|
2024-07-31 09:36:02
|
Lossless WebP gives good and consistent results.
|
|
2024-07-31 09:40:27
|
Lossy WebP is lacking in fidelity, but if the source material is already in 4:2:0 TV range, lossy WebP actually wouldn't do any harm. It's quite a niche use case, but it's possible.
|
|
|
CrushedAsian255
|
|
JendaLinda
Lossy WebP is lacking in fidelity, but if the source material is already in 4:2:0 TV range, lossy WebP actually wouldn't do any harm. It's quite a niche use case, but it's possible.
|
|
2024-07-31 09:47:31
|
Movie screenshots?
|
|
|
|
JendaLinda
|
2024-07-31 09:55:33
|
Possibly. DVD uses TV range AFAIK. Not sure about Blu-ray. However, it only makes sense if the source material also uses the same conversion matrix as WebP.
|
|
|
jonnyawsom3
|
|
yoochan
Lossless webp is sometimes better than lossless jxl, especially on screenshots of old games
|
|
2024-07-31 04:38:16
|
It's another thing I feel like I say every week, but I know WebP has a much more advanced palette sorting method which was going to be ported to libjxl at some point
I can't remember the exact details, but this is what's currently used https://github.com/libjxl/libjxl/pull/2523 https://github.com/libjxl/libjxl/pull/3420
|
|
|
TPS
|
2024-08-07 11:33:51
|
Since [AllRGB.com](https://AllRGB.com) has been mentioned before, has anyone looked into recreating any of the highly compressed images there using `tree_to_jxl`?
_(Reposting from https://discord.com/channels/794206087879852103/824000991891554375/1269696743654686763 since maybe a better fit here?)_
|
|
|
jonnyawsom3
|
2024-08-08 09:40:42
|
Turns out FileOptimizer doesn't optimize JXL very well... https://www.reddit.com/r/jpegxl/comments/1efrls4/comment/lh6ep6s/
|
|
2024-08-08 09:43:36
|
Seems to just set ImageMagick to effort 9 and that's it; not even sure if it's lossless as the optimizer claims
|
|
|
Fox Wizard
|
2024-08-09 05:37:26
|
Last time I checked it was lossy. But it has some other issues too, like removing AV1 video streams when optimizing mkv ~~I've been too lazy to open an issue~~
|
|
|
Demiurge
|
2024-08-09 12:57:11
|
I guess the best file optimizer is `rm` then
|
|
2024-08-09 12:57:57
|
Why only delete crucial data when you could just delete the whole file?
|
|
|
CrushedAsian255
|
2024-08-09 12:58:13
|
mkfs.ext4 for the win
|
|
|
Demiurge
|
2024-08-09 12:58:47
|
that's the hard drive optimizer/defragmenter
|
|
2024-08-09 12:59:59
|
`blkdiscard` is also good if you want it really optimized
|
|
|
CrushedAsian255
|
2024-08-09 01:02:08
|
Best optimisation tool is obviously the hammer
|
|
|
TPS
|
2024-08-10 02:10:03
|
Javier (FO's dev) is a good guy, & appreciates PRs & issues. Give him a chance & some notice.
|
|
|
|
ChrisX4Ever
|
2024-08-25 05:17:34
|
How do I encode multiple images with the same file extension that are in a folder in the main directory? What I mean is: I create a new folder (let's say "Images"), put all my image files (let's say they are all PNG files) into it, and I want a loop (for/while/etc.) that enters the Images folder, encodes the images one by one, where each encoded image keeps its original name (but with .jxl instead of .png), and saves the encoded files in the Images folder.
I know it's very specific but if anyone can help me, I would greatly appreciate it. For now I have been using this command with FFmpeg: for %f in (*.png) do ffmpeg -i "%f" -c:v libjxl -distance 0.0 "%~nf.jxl"
but not only does it not work for libjxl, it also reads images that are in the main folder
|
|
2024-08-25 08:40:15
|
Thanks for the reply. Yes, I would love to losslessly save some bytes on my HDD in the case of lossy JPEG files.
|
|
|
KKT
|
2024-08-26 06:02:17
|
Something like:
`#!/bin/bash
# Directory containing the PNG files
input_dir="/path/to/your/png/folder"
# Loop through all PNG files in the directory
for file in "$input_dir"/*.png; do
# Extract the file name without the extension
filename=$(basename "$file" .png)
# Convert the PNG file to JPEG XL
cjxl -d 0 -e 9 "$file" "$input_dir/$filename.jxl"
echo "Converted $file to $filename.jxl"
done
echo "All PNG files have been converted to JPEG XL format."`
|
|
2024-08-26 06:05:02
|
Save the script to `convert_png_to_jxl.sh`
Make it executable: `chmod +x convert_png_to_jxl.sh`
Run : `./convert_png_to_jxl.sh`
|
|
2024-08-26 06:05:42
|
And change the input folder to where ever your PNGs are
|
|
|
|
ChrisX4Ever
|
2024-08-27 04:59:25
|
So impressive, thank you very much.
|
|
2024-08-27 04:59:44
|
I genuinely appreciate it
|
|
|
TheBigBadBoy - ๐ธ๐
|
2024-08-27 01:12:27
|
`parallel cjxl --num_threads=0 -d 0 -e 9 {} {.}.jxl ::: /dir/*.png`
my beloved <:FeelsAmazingMan:808826295768449054>
|
|
|
CrushedAsian255
|
2024-08-27 01:14:31
|
Personally a
find * -iname "*.png" -print0 | xargs -0 -n1 -P$(nproc) cjxl
person
|
|
|
TheBigBadBoy - ๐ธ๐
|
2024-08-27 01:15:24
|
I used to, but since I discovered `parallel` the syntax is much simpler
|
|