|
190n
|
2021-02-20 07:58:35
|
speed 9 was 60x slower than speed 8 and produced a larger file
|
|
|
BlueSwordM
|
2021-02-20 08:01:34
|
If you want to increase speed, use --num_threads=1 and use GNU-Parallel.
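For example (a minimal sketch; the filenames and the GNU Parallel invocation are illustrative):
```
# one single-threaded cjxl per input file, scheduled across cores by GNU Parallel
parallel cjxl {} {.}.jxl --num_threads=1 ::: *.png
```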
|
|
|
190n
|
2021-02-20 08:04:24
|
yeah that makes sense for a batch encode, i probably won't use that for this benchmark though since i don't want one encode to be affected by how much cpu another is using
|
|
|
|
member102
|
2021-02-20 08:06:23
|
Glad to hear the acknowledgement that brotli's compressor needs some love. It is true that FB has been pushing improvements periodically with ZSTD while brotli ... not so much. It is not like Google lacks the talent or resources to do it. So I am looking forward to new and enhanced releases.
|
|
|
_wb_
|
2021-02-20 08:30:16
|
First you make an encoder that is good enough to develop and demonstrate the capabilities of the bitstream. Then you make a robust and fast decoder. Once there is sufficient adoption, it makes sense to make better and better encoders.
|
|
2021-02-20 08:55:01
|
I hope there can be some llvm-style convergence in encoder improvements, where different improvements can be applied to different 'target' bitstreams. E.g. lz77 matching is done in gzip, brotli, zstd, and even jxl. Adaptive quantization and perceptual optimization are done (or can be done) in one way or another in any modern image and video codec.
|
|
2021-02-20 09:03:04
|
Bitstreams are like instruction sets. We now have very advanced ones, and we are mostly still using primitive compilers (encoders). There are various types of program analysis and compiler optimization, e.g. loop vectorization, that can be applied for various cpu architectures. For compression, we still need to come up with suitable intermediate languages/representations to properly express such things, though, so it becomes easier/automatic to "port" an encoder optimization from one codec to another.
|
|
|
spider-mario
|
|
If you hard-code them before compilation (e.g. for testing purposes), feed spline parameters via command-line parameters and write the code to receive those parameters, or simply write spline heuristics, then *everything else* is ready. The decoder can decode bitstreams with spline data, too.
|
|
2021-02-20 09:04:00
|
correct, in fact we did that to check that they did have some potential to improve quality/bitrate
|
|
2021-02-20 09:04:22
|
it was tricky to hand-tune those splines 😄
|
|
|
_wb_
|
2021-02-20 09:08:23
|
We should make a spline drawing program at some point, maybe a fork of inkscape or something that only lets you draw catmull-rom splines of varying thickness and color and nothing else 😅
|
|
2021-02-20 09:09:05
|
Turn it into an art-form to make spline-only jxls
|
|
2021-02-20 09:09:32
|
Interesting constraints make great art, is my motto
|
|
|
|
Deleted User
|
|
_wb_
Interesting constraints make great art, is my motto
|
|
2021-02-20 09:12:26
|
That's right tho
|
|
|
_wb_
|
2021-02-20 09:21:47
|
Constraints trigger creativity: a 32x32 4-color pixel art piece can be more interesting than a full-color high-res piece, and those 3-channel chiptunes on the C64 are often more brilliant and interesting compositions than the unconstrained but often kinda bland game music of today
|
|
2021-02-20 09:25:21
|
I think it's because constraints create a problem, which for a problem-solving mind is an inspiration to find "solutions". An empty canvas with no constraints whatsoever only inspires writer's block.
|
|
2021-02-20 09:26:29
|
Philosophy of art on a Saturday night in <#794206170445119489>, why not
|
|
|
|
Deleted User
|
|
_wb_
I think it's because constraints create a problem, which for a problem-solving mind is an inspiration to find "solutions". An empty canvas with no constraints whatsoever only inspires writer's block.
|
|
2021-02-20 09:27:09
|
Creativity is made for solving problems. If you've got none, you have to invent some.
|
|
|
_wb_
|
2021-02-20 09:30:49
|
The other approach of course is being a very troubled artist who is crazy and depressed and has sufficient sources of problems that way to solve/express with art
|
|
|
BlueSwordM
|
2021-02-21 02:22:32
|
<@456226577798135808> I actually disagree with making lossless the default behavior for cjxl:
https://gitlab.com/wg1/jpeg-xl/-/issues/133#note_513678812
|
|
2021-02-21 02:26:59
|
The way it's done right now is actually good: lossless inputs should get visually lossless outputs, but not mathematically lossless ones, while lossy inputs (JPEGs) should stay lossless to prevent generational loss.
|
|
|
|
Deleted User
|
2021-02-21 02:30:56
|
Ok, you may be right, but my point about the `-j` switch always meaning lossy (and changing its current behavior in the decoder to reflect that) still stands.
|
|
2021-02-21 02:34:26
|
I can give up on the encoder's "always lossless by default" behavior, but not on making the `-j` switch consistent in a safe way ("lossless JPEG by default"), and it's really nice we agree on that 🙂
|
|
2021-02-21 02:37:58
|
Maybe I'll file another bug just for the decoder...
|
|
2021-02-21 04:33:26
|
“Command: cjxl -q 100 -p in.jpg out.jxl
Result: Same as Example 1, VarDCT, d1.000 is used, i.e. not the lossless compression as the user expected.”
I'd naturally expect -q 100 to mean lossless… at least in this example it really should be using lossless
|
|
2021-02-21 04:34:22
|
regardless, the current progressive flag behavior isn't consistent with everything else
|
|
2021-02-21 04:37:28
|
Without specifying any flags, the default really depends on the application.. however, lossless transcodes and encodes can always be reverted in case the defaults aren't what the user expected. The same cannot be said about lossy if the user didn't keep the originals
|
|
|
BlueSwordM
|
|
“Command: cjxl -q 100 -p in.jpg out.jxl
Result: Same as Example 1, VarDCT, d1.000 is used, i.e. not the lossless compression as the user expected.”
I’d naturally expect -q 100 would mean lossless… at least in this example it really should be using lossless
|
|
2021-02-21 04:40:47
|
Something is wrong.
|
|
2021-02-21 04:41:28
|
Here's what I get with that command:
```cjxl /home/bluezakm/Documents/index.jpeg /home/bluezakm/Documents/index.jxl -q 100 -p
`J P E G \/ |
/\ |_ e n c o d e r [v0.3.2 | SIMD supported: AVX2]
Read 313x161 image, 35.4 MP/s
Encoding [Modular, lossless, squirrel], 8 threads.```
|
|
2021-02-21 04:41:52
|
I don't see the VarDCT path being used at all. <:Thonk:805904896879493180>
|
|
2021-02-21 04:42:29
|
Something must have changed from 2 weeks ago to now. 😛
|
|
|
|
Deleted User
|
2021-02-21 04:42:33
|
<:Thonk:805904896879493180> maybe jxl fixed it since then yeah
|
|
|
BlueSwordM
|
|
<:Thonk:805904896879493180> maybe jxl fixed it since then yeah
|
|
2021-02-21 04:43:36
|
Of course, there's a little problem now.
The final image size is 80kB.
The original is 11kB. 😛
|
|
|
_wb_
|
2021-02-21 07:04:14
|
Also it is decoding the jpeg to pixels, that's not really a lossless thing to do
|
|
2021-02-21 07:04:38
|
(some info is lost on iDCT and YCbCr to RGB)
|
|
2021-02-21 07:07:41
|
We also need to make cjxl/djxl use the library, so things will have to change anyway when it uses the API instead of directly calling stuff. Not so much point in making it do the best possible things now, when we'll have to mostly reimplement it anyway.
|
|
|
fab
|
2021-02-21 01:59:11
|
do you have a link to the comparison with other codecs? the one with all the dates
|
|
|
_wb_
|
2021-02-21 02:00:22
|
All dates?
|
|
|
fab
|
2021-02-21 02:10:42
|
wp2
|
|
2021-02-21 02:11:58
|
https://storage.googleapis.com/demos.webmproject.org/webp/cmp/index.html
|
|
|
|
paperboyo
|
|
fab
https://storage.googleapis.com/demos.webmproject.org/webp/cmp/index.html
|
|
2021-02-21 03:41:34
|
Is there a similar comparison (with newest jxl etc.) that also includes oldgen codecs (libjpeg, mozjpeg)?
|
|
|
_wb_
|
2021-02-21 03:51:11
|
There's the eclipseo one, it's not the most recent jxl encoder though
|
|
|
fab
|
2021-02-21 05:28:31
|
is there a way to do an average
|
|
|
_wb_
|
2021-02-21 05:38:24
|
<:WhatThe:806133036059197491>
|
|
|
lonjil
|
2021-02-21 05:44:11
|
Here's an interesting target to optimise for https://www.reddit.com/r/AV1/comments/lox7be/what_is_perseverance_mars_rover_using_as_an_image
|
|
|
fab
|
2021-02-21 05:53:41
|
of the photos in jxl lossless jpg
|
|
2021-02-21 05:53:48
|
and the ones lossy jpg -j
|
|
2021-02-21 05:54:02
|
how to recognize the bitstream
|
|
2021-02-21 05:54:05
|
that's something i'm also interested in
|
|
2021-02-21 05:54:52
|
also, what changes between jpeg xl's optimization of a jpg image
|
|
2021-02-21 05:55:03
|
and antialiasing of nomacs
|
|
2021-02-21 05:55:56
|
probably i used nomacs, antialiasing is on by default
|
|
2021-02-21 05:56:02
|
and i lied using djxl
|
|
2021-02-21 05:58:11
|
|
|
2021-02-21 05:58:28
|
how is that image quality, from an expert in artifacts
|
|
2021-02-21 05:58:33
|
without reference
|
|
|
BlueSwordM
|
|
fab
without reference
|
|
2021-02-21 06:12:04
|
It looks fine.
|
|
|
_wb_
|
|
lonjil
Here's an interesting target to optimise for https://www.reddit.com/r/AV1/comments/lox7be/what_is_perseverance_mars_rover_using_as_an_image
|
|
2021-02-21 06:24:22
|
Something like `-m -p -I 0` could be interesting for such a use case. It's lossless, but any prefix gives some more detail. The -I 0 is just to avoid spending initial bytes on MA trees and to start quickly with the actual data.
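i.e. roughly something like this (just a sketch; the filename is made up, and the flag meanings are my reading of the discussion above):
```
# -m: modular (lossless), -p: progressive/responsive, -I 0: don't spend initial bytes on an MA tree
cjxl rover.pgm rover.jxl -m -p -I 0
```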
|
|
|
lonjil
|
2021-02-21 06:35:07
|
Is sending actual data quickly actually important?
|
|
|
_wb_
|
2021-02-21 06:39:34
|
No idea, but it might be
|
|
|
|
Deleted User
|
|
lonjil
Here's an interesting target to optimise for https://www.reddit.com/r/AV1/comments/lox7be/what_is_perseverance_mars_rover_using_as_an_image
|
|
2021-02-21 06:46:01
|
Have y'all seen my comment? Some useful research is in there
|
|
|
lonjil
|
2021-02-21 06:46:45
|
Yes, that's what sparked my interest
|
|
|
|
Deleted User
|
2021-02-21 06:46:57
|
Oh, nice
|
|
2021-02-21 06:47:27
|
I've tried encoding in multiple ways and apparently nothing can beat wavelets in that matter
|
|
2021-02-21 06:47:39
|
What the hell did NASA do that we can't?
|
|
|
_wb_
|
2021-02-21 06:53:28
|
Do you have a sample image?
|
|
2021-02-21 06:53:45
|
As far as I understand, they basically use jpeg 2000, no?
|
|
|
|
Deleted User
|
|
_wb_
As far as I understand, they basically use jpeg 2000, no?
|
|
2021-02-21 06:54:59
|
ICER is *kinda* similar to JPEG 2000 (wavelets being the biggest similarity), but there are some differences
|
|
2021-02-21 06:55:57
|
I posted a link to a paper on ICER in that thread, but I can upload it directly here
|
|
2021-02-21 06:56:05
|
|
|
|
_wb_
|
2021-02-21 06:56:53
|
I think it's basically a simplified j2k, plus something like jpeg-ls for lossless
|
|
|
|
Deleted User
|
2021-02-21 06:57:16
|
Then I used an online image extractor and extracted all image files from that PDF
|
|
2021-02-21 06:57:38
|
This is the original...
|
|
|
_wb_
|
2021-02-21 06:57:53
|
Details from a larger image
|
|
|
|
Deleted User
|
2021-02-21 06:57:54
|
...this is ICER-encoded...
|
|
2021-02-21 06:58:06
|
...and this is JPEG-encoded.
|
|
|
Scope
|
2021-02-21 06:58:34
|
> - JPEG 2000 uses floating point math, where ICER uses only integer math
> - ICER reverts to a separate internal LOCO (Low Complexity Lossless Compression) compressor for lossless image compression
> - JPEG 2000 implements a low complexity symmetrical wavelet lossless compressor, but ICER uses an integer only non-wavelet lossless compressor
🤔
|
|
|
|
Deleted User
|
|
...and this is JPEG-encoded.
|
|
2021-02-21 06:59:20
|
I checked the last one in Paint, blocks are 8x8 in size, so they're all 1:1 sized in terms of pixels
|
|
|
...this is ICER-encoded...
|
|
2021-02-21 07:00:59
|
The paper claims the ICER version to be encoded at 0.125 bpp, which (for that image's resolution) translates to a 784-byte file size.
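(Quick sanity check of that number, assuming the 224x224 crop used in the cjxl runs below:)
```
224 * 224 pixels * 0.125 bits/pixel = 6272 bits = 784 bytes
```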
|
|
|
_wb_
|
2021-02-21 07:02:08
|
The large image was 0.125 bpp
|
|
2021-02-21 07:02:17
|
Doesn't mean that crop was also 0.125
|
|
2021-02-21 07:02:51
|
There are things like headers that add overhead
|
|
|
|
Deleted User
|
|
_wb_
Doesn't mean that crop was also 0.125
|
|
2021-02-21 07:04:12
|
Okay, you may be right
|
|
2021-02-21 07:04:26
|
It'd be nice to get the original image
|
|
|
_wb_
|
2021-02-21 07:07:34
|
Could be that the original had some regions that are easy to compress, and they only show the most interesting part
|
|
|
|
Deleted User
|
2021-02-21 07:09:03
|
Because JPEG XL at *this* high a compression clearly isn't doing well...
`cjxl -p -s 9 -q -25 .\image--004.png .\image--004.jxl`
Size-matched, 785 B.
|
|
|
...and this is JPEG-encoded.
|
|
2021-02-21 07:10:31
|
By the way, JPEG was encoded at higher bpp in order to demonstrate it as being less efficient than ICER.
|
|
|
_wb_
|
2021-02-21 07:10:52
|
Can you add -v to see where bits are spent?
|
|
|
|
Deleted User
|
2021-02-21 07:11:32
|
```C:\Users\zieme\Downloads\155J> ..\jpeg-xl-mingw64-e2b1e60f\cjxl -p -s 9 -q -25 -v .\image--004.png .\image--004.jxl
J P E G \/ |
/\ |_ e n c o d e r [v0.3.2 | SIMD supported: SSE4,Scalar]
Read 224x224 image, 11.0 MP/s
Encoding [Modular, Q-61.00, tortoise], 2 threads.
Compressed to 785 bytes (0.125 bpp).
224 x 224, 0.41 MP/s [0.41, 0.41], 1 reps, 2 threads.
Average butteraugli iters: 0.00
Total layer bits headers 1.071143% 67
Total layer bits TOC 1.278977% 80
Total layer bits quant tables 0.783373% 49
Total layer bits modularGlobal 81.550759% 5101 [c/i: 9.00 | hst: 46 | ex: 55 | h+c+e: 622.984]
Total layer bits modularTree 15.315747% 958 [c/i: 4.00 | hst: 13 | ex: 17 | h+c+e: 86.333]
Total image size 6255 [c/i: 13.00 | hst: 60 | ex: 73 | h+c+e: 709.316]
Allocations: 152 (max bytes in use: 2.979270E+06)```
|
|
|
_wb_
|
2021-02-21 07:12:25
|
Hm, try with -c 1 -I 0 maybe?
|
|
|
|
Deleted User
|
2021-02-21 07:13:28
|
You probably mean `-C 1` (big C, not small)?
|
|
|
_wb_
|
2021-02-21 07:13:36
|
No, small c
|
|
2021-02-21 07:13:53
|
I don't know why it is doing XYB here, but it is not going to be useful
|
|
|
|
Deleted User
|
2021-02-21 07:14:32
|
I thought that `-c` is just for VarDCT and one should use `-C` as an equivalent in Modular...
|
|
|
_wb_
|
2021-02-21 07:14:55
|
-c are the lossy color transforms, -C the reversible ones
|
|
|
YAGPDB.xyz
|
2021-02-21 07:14:56
|
Unrecognized token IDENT (`are`) in expression
|
|
|
_wb_
|
2021-02-21 07:16:14
|
48 bits are wasted on XYB scaling factors that are not useful since it is greyscale anyway so XYB just gives you a somewhat different gamma
|
|
|
|
Deleted User
|
2021-02-21 07:16:27
|
OOPS...
|
|
2021-02-21 07:16:29
|
```C:\Users\zieme\Downloads\155J> ..\jpeg-xl-mingw64-e2b1e60f\cjxl -p -c 0 -I 0 -s 9 -q -25 -v .\image--004.png .\image--004.jxl
J P E G \/ |
/\ |_ e n c o d e r [v0.3.2 | SIMD supported: SSE4,Scalar]
Read 224x224 image, 12.2 MP/s
Encoding [Modular, Q-61.00, tortoise], 2 threads.
Failed to compress to Modular.```
|
|
2021-02-21 07:16:47
|
And no, removing `-p` didn't help.
|
|
|
_wb_
|
2021-02-21 07:17:21
|
Ah yes we didn't implement lossy squeeze without trees
|
|
2021-02-21 07:17:24
|
Uh
|
|
2021-02-21 07:17:39
|
Make it -I 0.1
|
|
2021-02-21 07:18:12
|
Or just various values for -I
|
|
|
|
Deleted User
|
2021-02-21 07:18:27
|
Ok, now it didn't fail, now I have to re-adjust the quality to match the filesize again
|
|
2021-02-21 07:19:35
|
```C:\Users\zieme\Downloads\155J> ..\jpeg-xl-mingw64-e2b1e60f\cjxl -p -c 0 -I 0.1 -s 9 -q -26.4 -v .\image--004.png .\image--004.jxl
J P E G \/ |
/\ |_ e n c o d e r [v0.3.2 | SIMD supported: SSE4,Scalar]
Read 224x224 image, 11.2 MP/s
Encoding [Modular, Q-65.20, tortoise], 2 threads.
Compressed to 785 bytes (0.125 bpp).
224 x 224, 0.60 MP/s [0.60, 0.60], 1 reps, 2 threads.
Average butteraugli iters: 0.00
Total layer bits headers 1.070801% 67
Total layer bits TOC 1.278568% 80
Total layer bits quant tables 0.783123% 49
Total layer bits modularGlobal 79.798626% 4993 [c/i: 10.00 | hst: 56 | ex: 51 | h+c+e: 608.083]
Total layer bits modularTree 17.068883% 1068 [c/i: 4.00 | hst: 13 | ex: 21 | h+c+e: 97.586]
Total image size 6257 [c/i: 14.00 | hst: 69 | ex: 72 | h+c+e: 705.669]
Allocations: 152 (max bytes in use: 2.984430E+06)```
|
|
2021-02-21 07:19:53
|
|
|
|
_wb_
|
2021-02-21 07:20:36
|
Ugh, still spending quite a few bits on the trees themselves
|
|
2021-02-21 07:20:50
|
This is one of the things that get better if you have the full image
|
|
2021-02-21 07:21:36
|
And it is -c 1, not -c 0
|
|
2021-02-21 07:23:33
|
Not very intuitive, but 0 is xyb, 1 is do-nothing (rgb), 2 is ycbcr (and the encoder assumes your data is already in ycbcr)
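To pull the flags from this exchange together (a rough cheat sheet for cjxl v0.3.x as I read the messages above; treat the "default" note as an assumption):
```
-c 0   XYB (lossy color transform; appears to be the default)
-c 1   do nothing, keep RGB
-c 2   YCbCr (the encoder assumes the input is already YCbCr)
-C n   selects the reversible (modular) color transforms instead, e.g. -C 0
```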
|
|
|
|
Deleted User
|
|
_wb_
And it is -c 1, not -c 0
|
|
2021-02-21 07:23:40
|
Oops, sorry
|
|
2021-02-21 07:25:15
|
what
|
|
2021-02-21 07:25:27
|
`Compressed to 1290 bytes (0.206 bpp).`
|
|
2021-02-21 07:25:44
|
That's a huge bump, have to re-adjust again
|
|
|
_wb_
|
2021-02-21 07:26:34
|
Yes, the scales for rgb and xyb are different
|
|
|
|
Deleted User
|
2021-02-21 07:27:05
|
``` C:\Users\zieme\Downloads\155J> ..\jpeg-xl-mingw64-e2b1e60f\cjxl -p -c 1 -I 0.1 -s 9 -q -174 -v .\image--004.png .\image--004.jxl
J P E G \/ |
/\ |_ e n c o d e r [v0.3.2 | SIMD supported: SSE4,Scalar]
Read 224x224 image, 11.9 MP/s
Encoding [Modular, Q-508.00, tortoise], 2 threads.
Compressed to 785 bytes (0.125 bpp).
224 x 224, 0.55 MP/s [0.55, 0.55], 1 reps, 2 threads.
Average butteraugli iters: 0.00
Total layer bits headers 1.262788% 79
Total layer bits TOC 1.262788% 79
Total layer bits quant tables 0.015985% 1
Total layer bits modularGlobal 83.711637% 5237 [c/i: 10.00 | hst: 72 | ex: 323 | h+c+e: 623.528]
Total layer bits modularTree 13.746803% 860 [c/i: 4.00 | hst: 13 | ex: 26 | h+c+e: 89.861]
Total image size 6256 [c/i: 14.00 | hst: 85 | ex: 349 | h+c+e: 713.389]
Allocations: 155 (max bytes in use: 2.990678E+06)```
|
|
|
_wb_
|
2021-02-21 07:27:06
|
Will have to go more negative
|
|
|
|
Deleted User
|
|
_wb_
|
2021-02-21 07:27:29
|
Ugh
|
|
2021-02-21 07:29:11
|
It may be doing something silly, such combinations of lots of flags are not tested well
|
|
|
|
Deleted User
|
2021-02-21 07:29:13
|
Oh, and by the way: AVIF. 718 B.
|
|
|
_wb_
|
2021-02-21 07:29:30
|
Can you try with pgm input?
|
|
2021-02-21 07:29:58
|
It may not properly be doing things as grayscale, doing everything 3 times or so
|
|
2021-02-21 07:30:53
|
Grayscale needs more testing in general
|
|
2021-02-21 07:33:01
|
Ultra low bitrate grayscale encoding is something we haven't been focusing on, to put it mildly, lol
|
|
|
|
Deleted User
|
|
_wb_
Can you try with pgm input?
|
|
2021-02-21 07:33:01
|
You mean PGX? Because `cjxl` doesn't support PGM.
|
|
|
_wb_
|
2021-02-21 07:33:11
|
It doesn't? Hm
|
|
2021-02-21 07:33:44
|
It only does ppm? No pgm?
|
|
|
|
Deleted User
|
2021-02-21 07:34:16
|
``` INPUT
the input can be PNG, APNG, GIF, JPEG, PPM, PFM, or PGX```
|
|
|
|
member102
|
2021-02-21 07:35:28
|
pgm and ppm are very similar, changing the file suffix works with some tools
|
|
|
_wb_
|
2021-02-21 07:36:21
|
It should read pgm
|
|
|
|
Deleted User
|
|
member102
pgm and ppm are very similar, changing the file suffix works with some tools
|
|
2021-02-21 07:36:24
|
PPM can have 3 channels, PGM must have only one if I'm not wrong
|
|
|
_wb_
|
|
|
Deleted User
|
2021-02-21 07:38:03
|
So we have to remove two unnecessary channels
|
|
2021-02-21 07:38:18
|
Simply changing suffix won't work
|
|
|
_wb_
|
2021-02-21 07:38:55
|
Just do `convert image--004.png bla.pgm`
|
|
|
|
Deleted User
|
|
_wb_
It should read pgm
|
|
2021-02-21 07:41:43
|
```C:\Users\zieme\Downloads\155J> ..\jpeg-xl-mingw64-e2b1e60f\cjxl -p -I 0.1 -s 9 -q -174 -v .\image--004.pgm .\image--004.jxl
J P E G \/ |
/\ |_ e n c o d e r [v0.3.2 | SIMD supported: SSE4,Scalar]
Failed to read image .\image--004.pgm.```
|
|
2021-02-21 07:42:01
|
I used an online image converter since I don't have ImageMagick installed
|
|
|
_wb_
|
2021-02-21 07:46:30
|
Hm, weird
|
|
|
|
member102
|
2021-02-21 07:46:42
|
```compression/jpeg-xl/build/cjxl -p -I 0.1 -s 9 -q -174 -v /tmp/big_tree2.pgm /tmp/big_tree2.jxl
J P E G \/ |
/\ |_ e n c o d e r [v0.2.0 | SIMD supported: AVX2,SSE4,Scalar]
Read 6088x4550 image, 148.4 MP/s
Encoding [Modular, Q-508.00, tortoise], 4 threads.
Compressed to 50125 bytes (0.014 bpp).
6088 x 4550, 1.09 MP/s [1.09, 1.09], 1 reps, 4 threads.
Average butteraugli iters: 0.00
Total layer bits headers 0.029438% 117
Total layer bits TOC 3.957287% 15728
Total layer bits quant tables 0.012329% 49
Total layer bits modularGlobal 44.120430% 175354 [c/i: 41.00 | hst: 578 | ex: 3586 | h+c+e: 44209.724]
Total layer bits modularDcGroup 42.750174% 169908
Total layer bits modularAcGroup 7.826008% 31104
Total layer bits modularTree 1.304335% 5184 [c/i: 5.00 | hst: 45 | ex: 148 | h+c+e: 697.284]
Total image size 397444 [c/i: 46.00 | hst: 623 | ex: 3770 | h+c+e: 44943.008]
Allocations: 10321 (max bytes in use: 1.483667E+09)
wc -c /tmp/big_tree2.jxl
50125 /tmp/big_tree2.jxl```
|
|
|
_wb_
|
2021-02-21 07:47:00
|
https://gitlab.com/wg1/jpeg-xl/-/blob/master/lib/extras/codec_pnm.cc#L72
|
|
|
|
Deleted User
|
2021-02-21 07:50:14
|
I'm using Scope's Windows build from encode.su, if that's important
|
|
|
_wb_
|
2021-02-21 07:55:19
|
Maybe the pgm converter is doing something weird
|
|
|
|
Deleted User
|
2021-02-21 08:06:20
|
Ok, whatever
|
|
2021-02-21 08:06:45
|
I've been looking for the original image that was used in that paper
|
|
|
|
member102
|
2021-02-21 08:09:53
|
FWIW, I am getting the same encoding error after turning image--004.png to pgm with Paint Shop Pro ... 🤨
|
|
|
|
Deleted User
|
2021-02-21 08:10:51
|
I don't know if it's the best quality and if it's the correct photo, but we've got color and a bigger picture!
|
|
2021-02-21 08:10:52
|
http://www.xtec.cat/~aparra1/astronom/mars/mpf/mpf4/yogi-pres-col-2.jpg
|
|
2021-02-21 08:17:04
|
After a quick check in GIMP it's not *exactly* the same picture, the search continues...
|
|
|
|
member102
|
2021-02-21 08:28:21
|
OK the PNM header parsing code looks suspicious: in jpeg-xl/lib/extras/codec_pnm.cc, the parsing fails when the type is 2 (image--04.pgm) but works when the type is 6 (big_tree2.pgm). P2 is a valid type, not properly handled by the parser
|
|
|
|
Deleted User
|
|
http://www.xtec.cat/~aparra1/astronom/mars/mpf/mpf4/yogi-pres-col-2.jpg
|
|
2021-02-21 08:38:54
|
Come on, I can't find any better photo than that...
|
|
2021-02-21 08:39:31
|
Fortunately though I fiddled a bit with the first color photo from Perseverance
|
|
2021-02-21 08:40:05
|
I downloaded the original photo from NASA's website and upscaled it 2x in Topaz Gigapixel
|
|
|
_wb_
|
2021-02-21 08:41:05
|
P2 is human readable grayscale, right? With pixel values in ascii text, very very uncompressed 😅
|
|
|
|
Deleted User
|
2021-02-21 08:43:43
|
And *that's* why I had two options, `Raw` and `ASCII`, when exporting to PPM in GIMP...
|
|
|
_wb_
|
2021-02-21 08:43:51
|
we only read Raw
|
|
2021-02-21 08:44:39
|
ascii is really silly, I don't know why you would want to do that except if you want to send something over a medium that only accepts ascii text maybe
|
|
2021-02-21 08:45:48
|
it's 2-4 times as wasteful as uncompressed
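For reference, a sketch of the two PGM flavours in question (header values are illustrative, not taken from the actual file):
```
P5            <- binary grayscale: ASCII header, then raw pixel bytes (the flavour the reader handles)
224 224
255
<raw pixel bytes>

P2            <- ASCII grayscale: every pixel value spelled out as text (currently rejected)
224 224
255
27 30 28 29 31 30 ...
```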
|
|
|
eustas
|
2021-02-21 08:47:06
|
twim 768 bytes
|
|
|
|
member102
|
2021-02-21 08:47:33
|
maybe a better error message would avoid the confusion
|
|
|
_wb_
|
2021-02-21 08:47:48
|
twim?
|
|
|
eustas
|
2021-02-21 08:48:11
|
my pet project for previews
|
|
|
|
Deleted User
|
|
I downloaded the original photo from NASA's website and upscaled it 2x in Topaz Gigapixel
|
|
2021-02-21 08:55:45
|
I'm trying to encode this image and suddenly 0.125 bpp isn't that bad
|
|
|
_wb_
|
2021-02-21 08:58:47
|
|
|
2021-02-21 08:58:54
|
|
|
2021-02-21 09:00:23
|
|
|
2021-02-21 09:00:31
|
|
|
|
|
Deleted User
|
2021-02-21 09:00:46
|
WOW
|
|
2021-02-21 09:00:53
|
How did you do that?
|
|
|
_wb_
|
2021-02-21 09:01:07
|
that's the best I could do with some manual tweaking 🙂
|
|
2021-02-21 09:01:21
|
`cjxl image--004.png.pgm image--004.png.pgm.jxl -v -m -c 1 -Q 35 -C 0 -I 1 -s 9 -X 0 -Y 0 --palette 0 --resampling=2`
|
|
2021-02-21 09:03:23
|
apparently the original isn't quite grayscale
|
|
2021-02-21 09:04:43
|
it only uses 234 different colors, but none of them are exactly gray
|
|
|
|
Deleted User
|
|
_wb_
apparently the original isn't quite grayscale
|
|
2021-02-21 09:05:40
|
I also noticed on previews that on JXL-decoded PNG the tint is kinda different compared to other pictures. Gray on JXL-to-PNG and ...green-blue-ish on the rest?
|
|
|
_wb_
|
2021-02-21 09:05:55
|
that causes jxl to waste a lot of bits on trying to preserve those exact slightly colored grays
|
|
2021-02-21 09:06:43
|
it does look like the actual data is only grayscale, but things have been tinted by something
|
|
2021-02-21 09:08:23
|
```Histogram:
333: ( 27, 30, 28) #1B1E1C srgb(27,30,28)
270: ( 28, 30, 29) #1C1E1D srgb(28,30,29)
308: ( 29, 31, 30) #1D1F1E srgb(29,31,30)
315: ( 30, 30, 29) #1E1E1D srgb(30,30,29)
323: ( 30, 33, 31) #1E211F srgb(30,33,31)
293: ( 31, 34, 33) #1F2221 srgb(31,34,33)
320: ( 32, 31, 30) #201F1E srgb(32,31,30)
299: ( 32, 35, 34) #202322 srgb(32,35,34)
343: ( 33, 36, 35) #212423 srgb(33,36,35)
305: ( 34, 32, 31) #22201F srgb(34,32,31)
328: ( 34, 37, 36) #222524 srgb(34,37,36)
324: ( 35, 39, 38) #232726 srgb(35,39,38)
```
|
|
2021-02-21 09:08:27
|
etc
|
|
2021-02-21 09:08:42
|
```$ identify -format "%k\n" image--004.png
234
```
|
|
2021-02-21 09:10:33
|
so converting to grayscale does eliminate some of that – probably AVIF already eliminates that by just converting to YCbCr and rounding all the chroma to zero
|
|
|
|
Deleted User
|
2021-02-21 09:21:54
|
My `-p --middleout -d 2` encode of Perseverance photo fails to decode at 40 KB truncation (out of 338 KB)...
|
|
2021-02-21 09:22:09
|
```C:\Users\zieme\Downloads\jpeg-xl-mingw64-e2b1e60f> ..\jpeg-xl-mingw64-e2b1e60f\djxl --allow_partial_files --allow_more_progressive_steps '.\PIA24430-Perseverance''s_first_full-color_look_at_Mars-gigapixel-scale-2_00x.jxl' '.\PIA24430-Perseverance''s_first_full-color_look_at_Mars-gigapixel-scale-2_00x.jxl.png'
Read 39689 compressed bytes [v0.3.2 | SIMD supported: SSE4,Scalar]
../lib/jxl/dec_frame.cc:612: JXL_ASSERT: num > 0```
|
|
|
_wb_
|
2021-02-21 09:24:15
|
Yes, I think some truncations do not work yet
|
|
2021-02-21 09:25:03
|
We are atm mostly aiming at what a browser decoder would do, not trying to decode as much as possible from a partial bitstream
|
|
2021-02-21 09:26:03
|
In a browser, you don't want to re-render at every byte. Just two or three renders should be good, maybe a few more on a really slow network
|
|
2021-02-21 09:27:08
|
Otherwise the extra cost of progressive becomes too much
|
|
2021-02-21 09:27:21
|
(cost in cpu/battery I mean)
|
|
2021-02-21 09:29:18
|
Also we are reorganizing decode code to better fit a progressive decoding api, but that atm breaks the progressive decoding of DC
|
|
|
Jyrki Alakuijala
|
|
_wb_
|
|
2021-02-22 12:13:51
|
is this using JPEG XL's internal upsampling system?
|
|
|
_wb_
|
|
Jyrki Alakuijala
|
2021-02-22 12:32:01
|
I love JPEG XL's internal upsampling system 😄
|
|
2021-02-22 12:33:02
|
but it is like XYB -- I meticulously tuned them for my eyes (not some super-principled math), so they both necessarily work well for my eyes
|
|
2021-02-22 12:34:52
|
During the development of PIK I had the same idea as Jon had with squeeze -- to deliver a lower resolution image first and then increase the resolution gradually
|
|
|
_wb_
|
2021-02-22 12:34:53
|
To reach super low bitrates it's quite useful to just subsample the whole thing
|
|
|
Jyrki Alakuijala
|
2021-02-22 12:35:14
|
back then I thought that great supersampling was the key to success
|
|
2021-02-22 12:35:35
|
so I spent about six weeks developing a 4x4 supersampler
|
|
2021-02-22 12:36:04
|
but I couldn't make it work well in that context, the density was always much worse than just our basic 8x8 dct at that time
|
|
2021-02-22 12:36:18
|
so, I dumped it and deleted it from the codebase
|
|
2021-02-22 12:37:05
|
now, 3-4 years later we took it back, put it in JPEG XL for progression support and back-ported it to libjpeg-turbo, too
|
|
2021-02-22 12:38:04
|
it has some nasty properties -- like non-iterative encoding to it is likely not generation-loss resistant
|
|
2021-02-22 12:38:43
|
that doesn't matter much for progressive viewing and for very low bpp
|
|
2021-02-22 12:38:59
|
... I consider the supersampling only the last option for low bpp
|
|
2021-02-22 12:39:30
|
I have many more ideas for it -- most related to more effective dct size/shape selection
|
|
|
|
Deleted User
|
2021-02-22 04:06:30
|
I'm looking at the code on GitLab, maybe I'll be able to "hack" it in order to show noise params and maybe even encode noise params from a hard-coded `noise.csv` file (if it exists, get noise from there, if it doesn't, do stuff as usual)
|
|
2021-02-22 04:07:49
|
And patches, I hope I can add patch overriding (getting patch info from a hard-coded `patches.csv` so I can manually prepare patches in a smarter way than the current encoder)
|
|
2021-02-22 04:09:33
|
For example this image, I can't get it out of my head:
https://discord.com/channels/794206087879852103/803645746661425173/811331562385702973
|
|
2021-02-22 04:14:48
|
I bet 99% of it can be patched so the only thing that remains is a nice, easy to encode, flat green surface (and maybe some rebellious pixels that don't follow any pattern, but maybe I'd still be able to somehow deal with them)
|
|
|
_wb_
|
2021-02-22 04:16:07
|
Probably better and easier to try to make better/different patch detection heuristics than to do things manually
|
|
2021-02-22 04:16:31
|
Noise params might be fun to be able to set manually
|
|
|
BlueSwordM
|
2021-02-22 04:16:42
|
Now that would be very interesting.
|
|
2021-02-22 04:16:47
|
Adjustable grain synthesis.
|
|
2021-02-22 04:17:05
|
That reminds me: you can do grain synthesis in avifenc now. <:Thonk:805904896879493180>
Also, CAVIF-RS(rav1e) seems to have better mid-high bpp quality than libaom-av1 for some reason.
|
|
|
|
Deleted User
|
|
_wb_
Probably better and easier to try to make better/different patch detection heuristics than to do things manually
|
|
2021-02-22 04:17:20
|
Humans can see patterns that some algorithms won't be able to see, ever.
|
|
|
Nova Aurora
|
2021-02-22 04:17:27
|
It will be cool to see what happens when the patching code is fleshed out even further
|
|
|
|
Deleted User
|
2021-02-22 04:17:31
|
So... AI-based patch detection? 😄
|
|
|
BlueSwordM
That reminds me: you can do grain synthesis in avifenc now. <:Thonk:805904896879493180>
Also, CAVIF-RS(rav1e) seems to have better mid-high bpp quality than libaom-av1 for some reason.
|
|
2021-02-22 04:19:24
|
I was reading a paper about AV1's grain synthesis
|
|
2021-02-22 04:19:47
|
And it's using some autoregressive model
|
|
2021-02-22 04:20:28
|
Is JPEG XL's noise model simpler or is it as smart as in AV1?
|
|
|
BlueSwordM
|
2021-02-22 04:20:37
|
I have no idea. I've only researched and tested AV1's grain synth, and a bit of h.264 grain synth. Not much else other than setting --noise=1 in JXL.
|
|
2021-02-22 04:22:33
|
Funnily enough, using --noise=1 and modular at very low BPP actually helps more than using a higher BPP and no --noise=1, which is nice with photographs.
|
|
|
|
Deleted User
|
|
BlueSwordM
Funnily enough, using --noise=1 and modular at very low BPP actually helps more than using a higher BPP and no --noise=1, which is nice with photographs.
|
|
2021-02-22 04:23:44
|
Oh God, Guetzli JPEG encoder *butchers* sensor noise, that's why I'm now keeping the originals even after uploading to Google Photos.
|
|
|
BlueSwordM
I have no idea. I've only researched and tested AV1's grain synth, and a bit about h.264 grain synth. Not much else other than settings --noise=1 in JXL.
|
|
2021-02-22 04:23:57
|
H.264 had grain synth?
|
|
|
BlueSwordM
|
2021-02-22 04:24:32
|
Yes, in fact. There was some talk of actually including grain synthesis in the AVC standard when HD-DVD was being developed, but that never came to fruition in the end.
|
|
|
|
Deleted User
|
2021-02-22 04:28:30
|
x264 is still the best and sharpest encoder out there, it would be *perfect* with grain synth available
|
|
2021-02-22 04:29:34
|
All that sharp tuning thanks to some weebs who wanted perfect encodes of their *Perfect Cherry Blossom* gameplays
|
|
|
spider-mario
|
|
Oh God, Guetzli JPEG encoder *butchers* sensor noise, that's why I'm now keeping the originals even after uploading to Google Photos.
|
|
2021-02-22 04:49:36
|
is this based on the assumption that Photos uses guetzli?
|
|
|
|
Deleted User
|
2021-02-22 04:54:02
|
They've got resources to actually do that
|
|
2021-02-22 04:55:08
|
And lots of behaviors exhibited by Guetzli (e.g. aggressive snapping of AC coeffs to 0) are present in photos I've been downloading from Google Photos
|
|
2021-02-22 04:56:22
|
Jon was actually describing that AC zeroing in his article about Guetzli vs Cloudinary's q_auto
|
|
|
_wb_
|
2021-02-22 04:57:51
|
I don't think Photos uses Guetzli
|
|
2021-02-22 04:58:15
|
Guetzli is too slow for that
|
|
|
|
Deleted User
|
2021-02-22 04:59:07
|
They've got servers that are not only powerful on their own, but also can be made to work in parallel
|
|
|
_wb_
|
2021-02-22 04:59:11
|
No idea what they do use, maybe mozjpeg? Or just jpegtran requantization
|
|
2021-02-22 04:59:34
|
Yeah but why would they spend that kind of compute on low bandwidth stuff
|
|
|
Scope
|
2021-02-22 04:59:38
|
<https://github.com/webmproject/sjpeg> ?
|
|
|
_wb_
|
2021-02-22 05:00:17
|
Could be
|
|
2021-02-22 05:01:41
|
The Googlers here are probably not at liberty to disclose such internal info, and I can only guess. But it would surprise me if they would do anything that requires significant compute. Compute at scale gets expensive quickly.
|
|
|
|
Deleted User
|
2021-02-22 05:01:54
|
And by the way, I've tried downloading the infographic from your article today with `f_jxl` and then tried decoding it with `djxl -s 8`. Patches performed... poorly, to say the least.
|
|
|
_wb_
|
2021-02-22 05:02:09
|
Try from the original image
|
|
2021-02-22 05:03:04
|
https://res.cloudinary.com/cloudinary-marketing/image/upload/Web_Assets/blog/Battle-of-the-Codecs_fnl.png
|
|
|
|
Deleted User
|
2021-02-22 05:04:37
|
But I wanted to test patch performance on Cloudinary's production JXL encoder
|
|
|
_wb_
|
2021-02-22 05:05:33
|
Ah
|
|
2021-02-22 05:05:44
|
Not sure what that will do
|
|
2021-02-22 05:06:27
|
https://res.cloudinary.com/cloudinary-marketing/image/upload/f_jxl/Web_Assets/blog/Battle-of-the-Codecs_fnl.png
|
|
2021-02-22 05:06:36
|
Did you try that?
|
|
|
|
Deleted User
|
2021-02-22 05:07:53
|
I've tried https://res.cloudinary.com/cloudinary-marketing/image/upload/w_700,c_fill,f_jxl,q_auto,dpr_2.0/Web_Assets/blog/Battle-of-the-Codecs_fnl.png, the only change is `f_auto` -> `f_jxl`
|
|
|
Nova Aurora
|
|
_wb_
Did you try that?
|
|
2021-02-22 05:08:12
|
|
|
|
|
Deleted User
|
|
I've tried https://res.cloudinary.com/cloudinary-marketing/image/upload/w_700,c_fill,f_jxl,q_auto,dpr_2.0/Web_Assets/blog/Battle-of-the-Codecs_fnl.png, the only change is `f_auto` -> `f_jxl`
|
|
2021-02-22 05:08:27
|
That's how I got that JXL file, as it would be viewed on the Web
|
|
|
_wb_
|
2021-02-22 05:11:33
|
Yes, the downscaling is not good for patches...
|
|
|
|
veluca
|
2021-02-22 05:12:54
|
I guess you want this? https://res.cloudinary.com/cloudinary-marketing/image/upload/w_auto,c_fill,f_jxl,q_auto,dpr_2.0/Web_Assets/blog/Battle-of-the-Codecs_fnl.png
|
|
|
lonjil
|
2021-02-22 05:20:04
|
What exactly is are "spot-color channels"?
|
|
|
_wb_
|
2021-02-22 05:39:43
|
It's a Photoshop feature parity thing
|
|
2021-02-22 05:40:12
|
Spot colors are just extra channels that represent some special ink
|
|
|
fab
|
2021-02-22 05:43:19
|
which encoder was used? dev version of which day?
|
|
2021-02-22 05:43:39
|
for jxl? settings?
|
|
|
|
Deleted User
|
2021-02-22 06:05:35
|
<@!794205442175402004> from what I see from the code (`jpeg-xl/lib/jxl/patch_dictionary.h`), patches do support alpha, am I right?
|
|
|
_wb_
|
2021-02-22 06:07:32
|
Yes, they do, but the current encoder isn't using that yet iirc
|
|
|
|
Deleted User
|
2021-02-22 06:10:41
|
Ok, one last, really important question: does the bitstream allow putting patches partially "off-canvas"?
|
|
|
_wb_
|
2021-02-22 06:11:11
|
Yes
|
|
2021-02-22 06:11:20
|
I think so
|
|
|
|
Deleted User
|
2021-02-22 06:11:21
|
YAAAAAY
|
|
|
_wb_
|
2021-02-22 06:11:56
|
Or worst case you have to select a different crop from the patch spritesheet frame to do it
|
|
2021-02-22 06:12:42
|
The spritesheet can be used to source patches in overlapping ways
|
|
2021-02-22 06:13:09
|
So it might make sense to order fonts in a way that groups common sequences like "the"
|
|
2021-02-22 06:14:17
|
(which would allow you to use single patches for "the" and "he" besides having the individual letters, possibly saving some signaling cost)
|
|
|
fab
|
2021-02-22 06:16:24
|
which encoder was used? dev version of which day?
for jxl? settings?
|
|
2021-02-22 06:16:33
|
for the low quality fidelity
|
|
2021-02-22 06:16:47
|
in the latest cloudinary article
|
|
|
_wb_
|
2021-02-22 06:23:36
|
Uh, I think that was just whatever I had on my machine last week
|
|
|
fab
|
2021-02-22 06:23:55
|
so more updated
|
|
|
_wb_
|
2021-02-22 06:23:58
|
I think I used -s 9, not sure though
|
|
|
fab
|
2021-02-22 06:24:03
|
it implements more things
|
|
|
_wb_
|
2021-02-22 06:24:06
|
Maybe just default settings
|
|
|
fab
|
2021-02-22 06:24:12
|
or in a different way
|
|
|
_wb_
|
2021-02-22 06:24:17
|
Shouldn't be a big difference
|
|
|
fab
|
2021-02-22 06:24:20
|
ok
|
|
2021-02-22 06:25:35
|
if you did low fidelity maybe you used less than q 90?
|
|
|
_wb_
|
2021-02-22 06:43:49
|
Oh yes
|
|
2021-02-22 06:44:15
|
I think it was around q55 or so
|
|
2021-02-22 06:47:04
|
For high fidelity, things become a bit more subtle. I wanted to emphasize that it's a lot better than JPEG, and when you do high fidelity comparisons, there will be too many people that will say the JPEG is good enough
|
|
|
|
veluca
|
|
_wb_
Yes, they do, but the current encoder isn't using that yet iirc
|
|
2021-02-22 06:47:34
|
the current *decoder* isn't doing that either
|
|
|
_wb_
|
2021-02-22 06:51:57
|
Right. One of the things we need to do before we can declare 0.4
|
|
|
|
Deleted User
|
|
veluca
the current *decoder* isn't doing that either
|
|
2021-02-22 06:55:25
|
OUCH
|
|
2021-02-22 06:55:51
|
Which patch would be more efficiently encoded:
|
|
2021-02-22 06:56:16
|
without alpha, just cropped from the original image...
|
|
2021-02-22 06:56:43
|
...or after removing the "grass" and leaving just the "plant"?
|
|
|
_wb_
|
2021-02-22 06:56:58
|
Just the crop
|
|
2021-02-22 06:57:36
|
The alpha is more info. Without alpha there are 4 colors, with alpha there are 5.
|
|
|
|
Deleted User
|
2021-02-22 06:59:34
|
The green inside the "plant" is the same color as the "grass". So even if I remove that green from *inside* the "plant", it won't be encoded more efficiently than the cropped-only original, because there are gonna be 4 colors in both cases?
|
|
2021-02-22 07:04:27
|
So this will get encoded as efficiently as the first patch I uploaded above (without alpha), if I understand correctly?
|
|
|
VEG
|
2021-02-22 07:21:30
|
JPEG can store the orientation of an image in its metadata, and image viewers (and browsers) can apply this information to displayed images. Can we have something like this for scaling? Currently, when you embed a screenshot from a HighDPI monitor (e.g. with 200% scaling) into a web page, it appears as a very huge image unless you specify the height and width of the img tag at half the real image resolution in pixels. Can an image store some metadata which would tell a browser how to scale this image by default? So, if a web page is opened on a monitor with standard DPI, it would just show the reduced image, but if somebody opens the page on a HighDPI display with 200% scaling, it would be displayed as is.
|
|
2021-02-22 07:21:41
|
Or this use case is already handled somehow?
|
|
|
_wb_
|
2021-02-22 07:21:53
|
Yes, both will be a 4-color palette image. The one without alpha might still be slightly smaller because the palette has an extra channel
|
|
|
VEG
|
2021-02-22 07:22:05
|
I frequently see enormous screenshots on the internet from HiDPI displays, and it seems like it is a common problem.
|
|
|
_wb_
|
2021-02-22 07:23:41
|
We have a header field to denote intrinsic dimensions, which is the way to tell a browser how to scale it by default.
|
|
2021-02-22 07:24:15
|
Also we have 2x, 4x and 8x upsampling options built in in the codec
|
|
|
|
veluca
|
|
So this will get encoded as efficiently as the first patch I uploaded above (without alpha), if I understand correctly?
|
|
2021-02-22 07:24:19
|
probably what you actually want to encode is `plant - green` - you get a patch with "impossible colors", but it still works, and then you don't really need alpha anyway
|
|
|
VEG
|
2021-02-22 07:24:23
|
Oh, great!
|
|
|
bonnibel
|
2021-02-22 07:24:45
|
~~i hate rotation done via metadata, half the programs around dont implement it and i end up sending sideways pictures to friends~~
|
|
|
_wb_
|
2021-02-22 07:25:26
|
(reminds me that we'll have to make sure chromium respects the signalled intrinsic dimensions)
|
|
2021-02-22 07:26:01
|
Orientation is done by the jxl decoder by default so applications don't have to bother with it
|
|
2021-02-22 07:26:19
|
Because over half of them will forget it anyway
|
|
|
bonnibel
|
2021-02-22 07:26:54
|
oh thank god
|
|
2021-02-22 07:28:28
|
🥱 better compression or whatever (/j)
😍 no more 12 different versions of every image on each website for different scales
😍 no more sideways selfies
|
|
|
|
Deleted User
|
|
veluca
probably what you actually want to encode is `plant - green` - you get a patch with "impossible colors", but it still works, and then you don't really need alpha anyway
|
|
2021-02-22 07:28:57
|
I don't know if that's necessary at all, because I can simply blend my first patch posted here in `kReplace` mode. It'll do for that image, I linked the big original above.
|
|
|
_wb_
|
2021-02-22 07:29:01
|
Basically we want to have all render impacting data in the bitstream, and only have metadata for things you can actually ignore if you just want to see the image
|
|
|
bonnibel
|
2021-02-22 07:29:09
|
_nods_
|
|
|
_wb_
|
2021-02-22 07:31:02
|
So color space info, orientation, intrinsic dimensions: those are in the bitstream. You can also do those in Exif but the spec says you can ignore what the Exif says, if info is duplicated.
|
|
|
|
Deleted User
|
|
I don't know if that's necessary at all, because I can simply blend my first patch posted here in `kReplace` mode. It'll do for that image, I linked the big original above.
|
|
2021-02-22 07:32:00
|
Here's the original I was talking about, because it got lost in the message flood:
https://discord.com/channels/794206087879852103/803645746661425173/811331562385702973
|
|
2021-02-22 07:36:34
|
And here, after a bit of work, here's the tree patch. I had to do what probably no encoder could *ever* do, which is: **assemble the full tree from multiple sources in GIMP**. Fortunately it works, because I checked and the patch matched pixel-perfectly. Unfortunately the use of alpha is unavoidable in this case, because it would break the intersection of bushes and trees at the bottom of the linked image.
|
|
2021-02-22 07:36:54
|
Yes, you can call me insane 😉
|
|
2021-02-22 07:45:33
|
Again: that tree would be *impossible* for even the *smartest* encoder to figure out fully. You'd probably need human-level artificial general intelligence for that to reach the level of optimization that we, humans, can do.
|
|
|
_wb_
|
2021-02-22 07:47:05
|
Now you just need to patch the patch encoder to allow you to use a handcrafted spritesheet and a csv file with crop/location coords
|
|
2021-02-22 07:47:41
|
Alpha may not be needed if you use kAdd: adding zero is the same as being transparent
|
|
|
|
Deleted User
|
|
_wb_
Now you just need to patch the patch encoder to allow you to use a handcrafted spritesheet and a csv file with crop/location coords
|
|
2021-02-22 07:48:48
|
Unfortunately I can't do much with limited knowledge about the project and with only `nano` and `gcc` on WSL2 at my disposal
|
|
|
_wb_
Alpha may not be needed if you use kAdd: adding zero is the same as being transparent
|
|
2021-02-22 07:49:12
|
So I simply need to subtract the green and then alpha is out?
|
|
|
_wb_
|
2021-02-22 07:50:14
|
Yes, but you will get negative colors so it becomes a bit tricky - no formats can easily represent that
|
|
|
|
Deleted User
|
|
_wb_
Yes, but you will get negative colors so it becomes a bit tricky - no formats can easily represent that
|
|
2021-02-22 07:51:30
|
> no formats can easily represent that
But the JXL patch spritesheet supports it?
|
|
|
_wb_
|
2021-02-22 07:52:08
|
Yes
|
|
2021-02-22 07:53:02
|
Negative is no problem for modular mode, it is used to it - after RCTs or Squeeze you also get negative values
|
|
2021-02-22 07:54:57
|
You could also make jxl images with negative colors in them. Would get clamped to black when you convert it to the usual uint8 or uint16 buffers, but the internal data could be negative
|
|
2021-02-22 07:55:14
|
That would be a fun kind of steganography
|
|
2021-02-22 07:55:37
|
Hidden images in the deep blacks
|
|
|
|
Deleted User
|
2021-02-22 07:56:03
|
Deeper than black
|
|
2021-02-22 07:56:31
|
And it seems like JPEG XL is Vantablack-ready 😉
|
|
|
veluca
probably what you actually want to encode is `plant - green` - you get a patch with "impossible colors", but it still works, and then you don't really need alpha anyway
|
|
2021-02-22 08:29:39
|
Ok, now encoding everything in `kAdd` mode makes sense. If I decided to encode the tree patch with `kAdd` and the plant patch with `kReplace`, they'd have to be encoded independently, but if I encode them both with the `kAdd` blending mode, they can reuse those "impossible" colors and thus reduce the palette significantly. Wow.
|
|
|
|
paperboyo
|
|
_wb_
So color space info, orientation, intrinsic dimensions: those are in the bitstream. You can also do those in Exif but the spec says you can ignore what the Exif says, if info is duplicated.
|
|
2021-02-22 09:28:23
|
> So color space info, orientation, intrinsic dimensions: those are in the bitstream. You can also do those in Exif but the spec says you can ignore what the Exif says, if info is duplicated.
Interesting! Is it possible to set the decoder to not apply (some) of them? Thinking about systems that deal with many encoders and make decisions based on metadata on what to do with resulting pixels…
|
|
|
_wb_
|
2021-02-22 09:30:51
|
You can ask to get the unrotated image plus orientation info, I think. And you can choose to ignore intrinsic dimensions or color space. But at least we want to make it as easy as possible to do the right thing.
|
|
|
|
paperboyo
|
2021-02-22 09:33:04
|
Am I right that the spec isn’t published yet (so I can feel a bit less shitty about asking all those questions :-)?
|
|
|
Nova Aurora
|
2021-02-22 09:34:46
|
Spec is in FDIS
|
|
2021-02-22 09:34:58
|
Frozen, but not ratified yet
|
|
2021-02-22 09:39:13
|
available for ***only*** 118 CHF or 166 CAD, or 133 USD (150 from ANSI)
|
|
2021-02-22 09:39:40
|
Thanks ISO!
|
|
|
|
veluca
|
2021-02-22 09:52:00
|
for what it's worth, I doubt you'd understand more from the spec than from the code 😛
|
|
|
Nova Aurora
|
2021-02-22 09:55:08
|
Include the entire spec as comments in the reference encoder!
|
|
|
Jyrki Alakuijala
|
|
Nova Aurora
Frozen, but not ratified yet
|
|
2021-02-22 10:37:21
|
free-zed
|
|
2021-02-22 10:37:56
|
no one wants to see the spec, it is for bureaucrats only 😛
|
|
|
veluca
for what it's worth, I doubt you'd understand more from the spec than from the code 😛
|
|
2021-02-22 10:39:10
|
I'm thinking that we will eventually write a book about JPEG XL
|
|
|
|
Deleted User
|
2021-02-22 10:39:52
|
Bad news <@794205442175402004> <@179701849576833024>, sadly I can't use `kAdd` mode and I *have to* use `kReplace` with alpha. I simulated `kAdd` patches with GIMP (`Subtract` and `Addition` layer modes) and unfortunately plants (which obviously are patched, too) "shine" through the trees, as shown in this GIF (it switches every 1000 ms between the original and the `kAdd` simulation; I only tested this on the 2 trees from the left, the 3rd one was left untouched as a reference).
|
|
|
Jyrki Alakuijala
|
2021-02-22 10:40:03
|
I have an awesome plan to make JPEG XL vardct stronger
|
|
|
|
Deleted User
|
2021-02-22 10:40:16
|
If they didn't overlap, the `kAdd` blending mode would be a perfect idea...
|
|
|
Jyrki Alakuijala
|
2021-02-22 10:40:27
|
to separate visual masking into a visual activity field and a visual masking field, and to apply erosion only in the masking field
|
|
2021-02-22 10:40:48
|
that should hugely improve the dct size/shape selection and improve quality and reduce ringing
|
|
2021-02-22 10:41:11
|
it is all annoyingly complicated, but not impossible
|
|
2021-02-22 10:41:33
|
for Luca it would be just a dance, but not everyone can be at his level 😄
|
|
|
Nova Aurora
|
2021-02-22 10:41:44
|
You have my moral support!
|
|
|
Jyrki Alakuijala
|
2021-02-22 10:45:10
|
I'm expecting a 3.777777 % improvement
|
|
2021-02-22 11:33:20
|
the stupid computer gives me segfault instead of 3.77777777777777777 % gains
|
|
2021-02-22 11:33:22
|
grrrrrrrr
|
|
|
Nova Aurora
|
|
Jyrki Alakuijala
the stupid computer gives me segfault instead of 3.77777777777777777 % gains
|
|
2021-02-23 12:02:14
|
https://tenor.com/view/monkey-pissed-annoyed-irritated-furious-gif-4908411
|
|
|
|
Deleted User
|
|
Jyrki Alakuijala
the stupid computer gives me segfault instead of 3.77777777777777777 % gains
|
|
2021-02-23 12:03:28
|
https://tenor.com/view/guaton-computadora-enojado-computer-rage-gif-14480338
|
|
|
doajc_blogger
|
2021-02-23 02:27:55
|
Does JPEG XL properly store EXIF data? I tried a lossless conversion and converted the result back to JPEG and the EXIF data was missing. There's also an issue from 3 weeks ago about how EXIF data is handled so is it not finished yet or is there a command-line flag I should be using?
|
|
|
Nova Aurora
|
|
Does JPEG XL properly store EXIF data? I tried a lossless conversion and converted the result back to JPEG and the EXIF data was missing. There's also an issue from 3 weeks ago about how EXIF data is handled so is it not finished yet or is there a command-line flag I should be using?
|
|
2021-02-23 02:37:14
|
did you pass `-j ` when decoding?
|
|
|
doajc_blogger
|
2021-02-23 02:37:50
|
No, that does a lossy conversion. Is that the only way to preserve EXIF data?
|
|
2021-02-23 02:38:02
|
Oh wait, you said decoding.
|
|
2021-02-23 02:38:13
|
I'll try that.
|
|
|
Nova Aurora
|
2021-02-23 02:39:00
|
It gave me checksum-exact compression doing a lossless conversion then passing `-j` on decode
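i.e. roughly this round trip (a sketch of the workflow being described; filenames are made up):
```
cjxl photo.jpg photo.jxl         # lossless JPEG recompression (the default for JPEG input)
djxl photo.jxl restored.jpg -j   # reconstruct the original JPEG bit-exactly, Exif included
```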
|
|
|
doajc_blogger
|
2021-02-23 02:46:35
|
It worked. Thank you. I didn't realize it was a decode option.
|
|
|
|
Deleted User
|
2021-02-23 03:00:25
|
Oh, you've just reminded me to file a decoder bug in order to make the `-j` switch behavior consistent. `-j` should *always* mean lossy.
|
|
|
Nova Aurora
|
|
Does JPEG XL properly store EXIF data? I tried a lossless conversion and converted the result back to JPEG and the EXIF data was missing. There's also an issue from 3 weeks ago about how EXIF data is handled so is it not finished yet or is there a command-line flag I should be using?
|
|
2021-02-23 03:32:19
|
To answer your question literally, the spec includes it, but currently the encoder doesn't include it unless doing lossless jpeg recompression
|
|
|
doajc_blogger
|
2021-02-23 03:32:35
|
Oh okay
|
|
|
Nova Aurora
|
2021-02-23 03:40:10
|
And even then I don't think the current libjpegxl recognizes EXIF for what it is, I think it just treats it as a blob to be compressed
|
|
|
|
Deleted User
|
|
Nova Aurora
And even then I don't think the current libjpegxl recognizes EXIF for what it is, I think it just treats it as a blob to be compressed
|
|
2021-02-23 04:00:52
|
oh actually we can test that
take two identical jpgs one with exif one without
|
|
|
Nova Aurora
|
2021-02-23 04:08:26
|
Gwenview can't see the metadata when viewing it as a jxl
|
|
2021-02-23 04:09:11
|
but it shows back up when decoded with `-j`
|
|
2021-02-23 04:09:42
|
So I think it's currently being treated as a blob to be compressed
|
|
|
Crixis
|
|
Jyrki Alakuijala
for Luca it would be just a dance, but not everyone can be at his level 😄
|
|
2021-02-23 06:04:41
|
Show the images when you can! It's aesthetic to see artifacts vanish day after day, like with the avif/webp2 possibilities
|
|
|
Jyrki Alakuijala
|
|
Crixis
Show the images when you can! It is aestetic see artifact vanish day after day, as avif/webp2 possibilities
|
|
2021-02-23 07:16:39
|
I'm still at a level of empty talk about a good idea 🙂 I think images will be here in two days...
|
|
2021-02-23 07:18:53
|
ok, not just empty talk -- I made a change and now my encoder segfaults 😅
|
|
2021-02-23 07:19:46
|
I'm too old for segfaults... I used to do a lot of them in my 20s
|
|
2021-02-23 07:27:08
|
too many different cases of padding for my simple brain -- a 4x4 padding for the visual activity/masking computation, an 8x8 padding for dct selection, some padding for simd
|
|
|
_wb_
|
2021-02-23 08:02:30
|
Padding and alignment always complicate things
|
|
|
bonnibel
|
2021-02-23 09:57:32
|
~~time to start over and rewrite the entire thing in rust~~
|
|
|
_wb_
|
2021-02-23 11:19:39
|
Since when do we have our own wikipedia page? https://en.wikipedia.org/wiki/JPEG_XL
|
|
2021-02-23 11:19:47
|
anyone here did that? 🙂
|
|
|
fab
|
2021-02-23 11:57:17
|
Number of page watchers: fewer than 30
|
|
2021-02-23 11:57:35
|
Page views in the past 30 days: 1,230
|
|
2021-02-23 11:57:53
|
https://en.wikipedia.org/w/index.php?title=JPEG_XL&action=info
|
|
2021-02-23 11:59:26
|
recent number of distinct authors: 5
|
|
|
|
veluca
|
|
oh actually we can test that
take two identical jpgs one with exif one without
|
|
2021-02-23 01:07:28
|
no need to test, that's definitely the case 😛
|
|
|
Jyrki Alakuijala
|
|
_wb_
Since when do we have our own wikipedia page? https://en.wikipedia.org/wiki/JPEG_XL
|
|
2021-02-23 05:27:27
|
I noticed the Japanese version today, and was surprised it had the English link, too
|
|
|
fab
|
2021-02-23 06:35:02
|
new build of jpeg xl <@!111445179587624960> please, also for old computers
|
|
2021-02-23 06:35:27
|
i'm too lazy to rename jamaika's builds
|
|
|
|
Deleted User
|
2021-02-23 06:46:14
|
I'm using Scope's builds for Windows
|
|
|
fab
|
|
I'm using Scope's builds for Windows
|
|
2021-02-23 06:46:51
|
where did you found it
|
|
|
|
Deleted User
|
|
fab
where did you found it
|
|
2021-02-23 06:47:41
|
https://encode.su/threads/3564-JXL-version-0-3-released#post68594
|
|
2021-02-23 06:48:04
|
But that's an older build, there was an update to the repository ~2 hrs ago
|
|
2021-02-23 06:48:59
|
So I've just compiled my own Linux version on WSL2 from source while waiting for Scope's build for Windows
|
|
2021-02-23 06:50:36
|
I've got a laptop from 2013 and it runs Win10 Pro fairly well
|
|
2021-02-23 06:50:53
|
But I'm on SSD, not HDD, otherwise I'd go *nuts*
|
|
|
fab
|
2021-02-23 06:51:08
|
that's what they said on the av1 server
|
|
2021-02-23 06:51:17
|
but on xnview i can't type my commands
|
|
2021-02-23 06:51:29
|
and i have to wait for the developer to update the encoder
|
|
2021-02-23 06:51:33
|
the patched dll
|
|
2021-02-23 06:51:39
|
so it's even worse
|
|
2021-02-23 06:51:47
|
there is no support at the moment
|
|
2021-02-23 06:52:01
|
there is megapixel alex software but i can't encode
|
|
|
Scope
|
|
doajc_blogger
|
2021-02-24 12:12:54
|
<@111445179587624960>: Why does it only use 1 CPU core even when it says it's using 4 threads?
|
|
2021-02-24 12:13:32
|
I had the same problem with version 0.3.1 (jpeg-xl-mingw64-ef3f7a62).
|
|
|
Scope
|
2021-02-24 12:16:23
|
Not everything is multi-threaded yet and also as far as I know, there are some bugs with this in the Windows version
|
|
|
doajc_blogger
|
2021-02-24 12:22:12
|
Oh okay
|
|
|
fab
|
2021-02-24 08:14:14
|
ok, encoded some galaxy s4 screens at -s 7 -q 8, no artifact problems, file size too big, all images weigh 39.28 kb
|
|
2021-02-24 08:16:22
|
how do i extract pngs from a pdf without imagemagick
|
|
2021-02-24 08:16:35
|
is there an open source alternative without python 3.9.1
|
|
2021-02-24 08:16:45
|
which software do you recommend
|
|
|
spider-mario
|
2021-02-24 08:39:46
|
poppler has the `pdfimages` binary
|
|
2021-02-24 08:41:52
|
by default, it converts the images it finds into pbm or ppm, but `-png` will convert them to PNG instead, while `-all` will keep them in their original format if it’s jpeg, jpeg2000, jbig2 or ccitt (didn’t know that last one)
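i.e. something like this (the output prefix is arbitrary):
```
pdfimages -png input.pdf img   # convert every embedded image to PNG
pdfimages -all input.pdf img   # keep jpeg, jpeg2000, jbig2 and ccitt images in their original format
```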
|
|
|
OkyDooky
|
2021-02-24 09:38:08
|
btw: what does it mean to be "LZ77-enabled"? :)
(<@794205442175402004>)
|
|
|
_wb_
|
2021-02-24 09:44:41
|
it means you don't just get the usual context modeling + (ANS or prefix code), but the entropy decoder can also keep a sliding window of decoded symbols which can be referenced with lz77-style (distance, length) pairs
|
|
2021-02-24 09:46:03
|
so in a way it's like DEFLATE, except instead of doing LZ+Huffman, we do LZ+(MAANS or MAHuffman)
|
|
2021-02-24 09:48:05
|
for some kinds of repetitive images, especially if the repetition cannot be effectively eliminated with Patches, the lz77 can be quite useful
|
|
2021-02-24 09:48:53
|
usually just MAANS without using lz77 is good enough though
|
|
|
diskorduser
|
2021-02-24 10:16:41
|
jxl images from jpegs are rotated in Gwenview. Is it a bug?
|
|
|
Master Of Zen
|
|
diskorduser
jxl images from jpegs are rotated in Gwenview. Is it a bug?
|
|
2021-02-24 10:19:28
|
Gwenview can rotate images based on metadata, so it probably rotated the original JPEG on view using its metadata, and that metadata wasn't copied/used after transcoding
|
|
|
_wb_
|
2021-02-24 11:03:24
|
it's a bug / missing feature in our jpeg recompression: we should parse the exif to see the orientation, and put it in the jxl header too, otherwise you will only see it correctly oriented after decompressing back to jpeg
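(as a quick sanity check — using exiftool here purely as an example, it's not part of libjxl — something like `exiftool -Orientation source.jpg` shows whether the source JPEG relies on the Exif orientation tag, which you can then compare against how the recompressed .jxl is displayed)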
|
|
|
fab
|
2021-02-24 11:38:17
|
60% reduction with -s 7 -q 8
|
|
2021-02-24 11:38:25
|
maybe i need to use a slower speed
|
|
2021-02-24 11:38:42
|
like -s 9 -q 10
|
|
2021-02-24 11:38:52
|
is it really that important?
|
|
2021-02-24 11:39:13
|
pdf was 13 MB, png 38 MB, jxl 6 MB
|
|
|
Jim
|
|
_wb_
Since when do we have our own wikipedia page? https://en.wikipedia.org/wiki/JPEG_XL
|
|
2021-02-24 11:48:42
|
History shows it was created in 2018 but was empty until Feb 19, 2021. User Avayak created a basic page for it.
|
|
|
_wb_
|
2021-02-24 11:52:12
|
-q 8 ? That is really low quality
|
|
|
fab
|
2021-02-24 11:54:10
|
-s 4 -g 2 -q 49
|
|
2021-02-24 11:54:14
|
or -d 4
|
|
2021-02-24 11:54:17
|
i prefer d 4
|
|
2021-02-24 11:54:21
|
wb
|
|
2021-02-24 11:54:35
|
-d 4, i use it at speed 9
|
|
2021-02-24 11:54:55
|
what -q does -d 4 correspond to?
|
|
2021-02-24 11:55:23
|
what does -g 2 mean?
|
|
2021-02-24 12:01:30
|
yes it's good to see the patches
|
|
|
_wb_
|
2021-02-24 12:11:00
|
Patches may use different criteria depending on distance/quality. Decoding with -s 8 is a better way to see them
|
|
|
fab
|
2021-02-24 12:17:14
|
just encode everything at -s 4 -g 2 -q 49 with latest build
|
|
2021-02-24 12:17:20
|
who cares about artifact quality
|
|
2021-02-24 12:17:32
|
joking obv
|
|
2021-02-24 12:18:13
|
i think quality changes over time
|
|
|
_wb_
|
2021-02-24 12:37:56
|
-g 2 will not do anything unless you do -m (or pick a quality that maps to modular, like -q 100 or -q -10)
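for example (illustrative command, placeholder filenames): `cjxl input.png output.jxl -m -q 100 -g 2` forces the Modular path, so the -g setting actually takes effect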
|
|
|
|
Deleted User
|
|
_wb_
-g 2 will not do anything unless you do -m (or pick a quality that maps to modular, like -q 100 or -q -10)
|
|
2021-02-24 02:38:53
|
I thought that `-q 7` and above use VarDCT, so `-q 6.999` and below use Modular.
|
|
|
_wb_
|
2021-02-24 02:39:33
|
yes, I think that's true. don't count on it though, those defaults might change
|
|
2021-02-24 02:39:45
|
(e.g. if we get better ways to do low-fidelity vardct)
|
|
|
fab
|
2021-02-24 03:29:19
|
but what does -g 2 mean
|
|
2021-02-24 03:30:02
|
with avif you encode anime, but with jxl you have real photos
|
|
2021-02-24 03:30:14
|
i prefer real photos
|
|
|
_wb_
|
2021-02-24 03:49:38
|
-g 2 means "use 512x512 groups in Modular"
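(if I read the -g option right, the group size is 128×2^N, so -g 0/1/2/3 → 128/256/512/1024; e.g. `cjxl input.png output.jxl -m -g 3` — placeholder filenames — would use 1024x1024 groups)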
|
|
|
BlueSwordM
|
2021-02-24 05:49:16
|
Is it possible to use the JXL butteraugli tool separately?
|
|
|
_wb_
|
2021-02-24 05:57:23
|
build/tools/butteraugli_main ?
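(roughly `build/tools/butteraugli_main original.png distorted.png` — placeholder filenames — which prints a butteraugli distance for the pair; exact options may differ between builds, so check its usage output)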
|
|
|
BlueSwordM
|
2021-02-24 06:02:57
|
Thank you.
|
|
|
|
il1kesonic
|
|
BlueSwordM
|
|
NO
|
|
2021-02-25 01:53:01
|
No about what?
|
|
|
|
il1kesonic
|
|
BlueSwordM
|
|
pees
|
|
2021-02-25 01:53:20
|
What about pees?
|
|
|
doajc_blogger
|
2021-02-25 01:53:22
|
Sometimes she writes random things.
|
|
|
BlueSwordM
|
2021-02-25 01:53:26
|
Are you sure you didn't mean patches?
|
|
|
|
il1kesonic
|
|
doajc_blogger
|
2021-02-25 01:53:37
|
No, she's not a developer or anything. She's just a friend that I invited.
|
|
|
|
il1kesonic
|
2021-02-25 01:53:38
|
*pees cutely*
|
|
|
doajc_blogger
|
2021-02-25 01:53:51
|
I didn't realize she'd write weird things here.
|
|
2021-02-25 01:54:05
|
I told her they might ban her if she doesn't stop.
|
|
|
|
il1kesonic
|
2021-02-25 01:54:21
|
how is typing the word "no" weird?
|
|
|
BlueSwordM
|
|
how is typing the word "no" weird?
|
|
2021-02-25 01:54:59
|
Because there was no context to that no, that's all.
|
|
|
|
il1kesonic
|
2021-02-25 01:55:26
|
y would u ban me for that though-
|
|
|
BlueSwordM
|
2021-02-25 01:56:24
|
Because you are in <#794206170445119489>, and not <#806898911091753051> ?
|
|
2021-02-25 01:56:27
|
That's all.
|
|
2021-02-25 01:56:49
|
Anyway, I've been comparing single-threaded JPEG-XL vs single-threaded rav1e and aomenc-av1. The speed difference when threading is out of the question is staggering.
|
|
|
Pieter
|
2021-02-25 01:56:50
|
Yes, please keep it on topic about JPEG-XL here. There are other channels for random banter.
|
|
|
doajc_blogger
|
2021-02-25 01:58:45
|
Since JPEG XL compresses better and is much faster, it doesn't look like AV1 has any advantages for still pictures, at least with current implementations.
|
|
|
BlueSwordM
|
2021-02-25 01:59:00
|
It's actually the opposite depending on your goals.
|
|
|
|
il1kesonic
|
2021-02-25 01:59:08
|
i literally wrote the word no LMAO
|
|
|
BlueSwordM
|
2021-02-25 01:59:38
|
rav1e actually does better at detail retention than aomenc-av1 at similar mid-high BPP where JPEG-XL is stronger overall, but speed is lacking obviously. rav1e is still decent though in that regard, and proposes an approach that is rather interesting.
|
|
|
doajc_blogger
|
2021-02-25 02:00:46
|
What do you mean by "where JPEG-XL is stronger" if rav1e is better at detail retention?
|
|
|
BlueSwordM
|
|
What do you mean by "where JPEG-XL is stronger" if rav1e is better at detail retention?
|
|
2021-02-25 02:01:06
|
*rav1e actually does better at detail retention than aomenc-av1 where JPEG-XL is stronger overall*
|
|
2021-02-25 02:01:12
|
I should've written the sentence better.
|
|
2021-02-25 02:01:17
|
I was comparing rav1e against aomenc-av1. 😛
|
|
|
doajc_blogger
|
2021-02-25 02:02:04
|
Oh okay
|
|
2021-02-25 02:02:42
|
I was wondering if there was any scenario in which I would want to use AV1 instead of JPEG XL.
|
|
2021-02-25 02:02:55
|
It sounded like you meant that there were a few.
|
|
|
Scope
|
2021-02-25 02:03:08
|
I've shown examples where AVIF is stronger, it's medium-low bpp and art images with sharp lines and large single-color areas <https://slow.pics/c/NvJvg7o9>
https://discord.com/channels/794206087879852103/803645746661425173/814001495753228320
|
|
|
doajc_blogger
|
2021-02-25 02:05:04
|
Interesting. I hadn't done enough low BPP tests to notice that.
|
|
2021-02-25 02:06:13
|
I was mainly interested in lossless but you're right, AVIF does look better in this case.
|
|
|
_wb_
|
2021-02-25 07:00:45
|
For lossy non-photo, AVIF is quite good indeed
|
|
2021-02-25 07:02:14
|
I _think_ JXL could be even better than AVIF for that, but we still need to write an encoder that proves it so at the moment it is just my opinion
|
|
|
fab
|
2021-02-25 07:27:44
|
-s 4 -q 93.1 --resampling=2 --epf=1 --dots=0
|
|
2021-02-25 07:37:00
|
13 mb to 2 mb
|
|
2021-02-25 07:37:05
|
this works better
|
|
2021-02-25 01:07:27
|
for %i in (C:\Users\User\Documents\png3\*.png) do cjxl "%i" "%i.jxl" --mquality=94 -s 4 --lossy-palette --palette=0 -C=1 -g=1 --predictor=8 --num_threads=2
|
|
2021-02-25 01:07:34
|
i have errors with it
|
|
2021-02-25 01:07:55
|
unknown argument -c=1
|
|
2021-02-25 01:08:16
|
with a capital letter
|
|
2021-02-25 01:10:49
|
it encodes in vardct
|
|
2021-02-25 01:10:56
|
for %i in (C:\Users\User\Documents\png3\*.png) do cjxl "%i" "%i.jxl" --mquality=94 -s 4 --lossy-palette --palette=0 --colorspace=1 --group-size=1 --predictor=8 --num_threads=2
|
|
2021-02-25 01:11:25
|
i want to set something to 0 and use the same command
|
|
2021-02-25 01:11:27
|
how do i do that
|
|
2021-02-25 01:13:30
|
Is it -c without the '='? or a capital letter?
|
|
2021-02-25 01:13:48
|
like -c 0
|
|
2021-02-25 01:14:20
|
colorspace: i want to set it to 1, but Jon said it has no effect.
|
|
2021-02-25 01:14:29
|
Does it have an effect with modular?
|
|
|
_wb_
|
2021-02-25 01:31:43
|
for VarDCT, you shouldn't use -c nor -C
|
|
2021-02-25 01:34:46
|
for Modular, `-c 0` (XYB) is default for lossy while `-c 1` (RGB) is default for lossless. You can change the default but it is likely to be a bad idea. For lossless modular, `-C` can be used to specify a reversible color transform: `-C 0` is none, `-C 1` is YCoCg, and there are more. Default is to use heuristics to try to find the RCT that works best in each group (how many are tried depends on the speed setting), so at slower speeds, default will probably be better than any specific `-C` setting. At faster speeds, there might be specific `-C` settings that are better than default.
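So as an illustration (placeholder filenames): plain `cjxl input.png output.jxl -q 100` lets the encoder search for a good RCT per group, while `cjxl input.png output.jxl -q 100 -C 1` forces YCoCg everywhere, which may or may not win at faster speeds.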
|
|
|
fab
|
2021-02-25 01:35:55
|
but why does it encode in vardct
|
|
2021-02-25 01:36:05
|
i used the second command
|
|