|
_wb_
|
2023-06-02 05:53:11
|
Well if you create such a comparison and put it online, I think it's quite clear that the person is ignorant and over-confident. I mean, of course things don't have to be perfect, but why write about lossy compression if you don't even understand that encoders have a configurable quality setting and can obtain various trade-offs between filesize and quality.
|
|
|
username
there is a high volume of online articles and blogposts whose authors don't know what they are talking about; for example, recently I found one recommending that people re-encode/recompress pre-existing jpeg files to make them progressive
|
|
2023-06-02 05:54:17
|
With jpegtran you can do that. But I assume they were talking about decoding to pixels and recompressing from that?
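For reference, the lossless route is a one-liner (a sketch; filenames are hypothetical). jpegtran rewrites only the entropy-coded data without decoding to pixels, so there is no generation loss:

```shell
# Lossless baseline-to-progressive conversion with jpegtran (filenames hypothetical).
# The DCT coefficients are kept as-is; only the entropy coding is redone,
# so the decoded pixels are bit-identical, unlike a Photoshop re-export.
jpegtran -progressive -optimize -copy all input.jpg > progressive.jpg
```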
|
|
|
username
|
|
_wb_
With jpegtran you can do that. But I assume they were talking about decoding to pixels and recompressing from that?
|
|
2023-06-02 05:55:08
|
yes
|
|
2023-06-02 05:55:58
|
they do not know about things such as jpegtran and were telling people to open their JPEGs in photoshop and then export them again
|
|
|
_wb_
|
2023-06-02 05:56:21
|
Ouch.
|
|
|
username
|
2023-06-02 05:57:34
|
I forgot where the article/blogpost for that is, but it was something that was showing up on the first page of search results
|
|
2023-06-02 06:01:34
|
I found it: https://medium.com/hd-pro/jpeg-formats-progressive-vs-baseline-73b3938c2339
|
|
2023-06-02 06:02:47
|
the first result just advertises a CDN when it gets to the part about converting, while the second one is the one I mentioned before
|
|
|
Dzuk
|
2023-06-11 03:14:55
|
ran my entire emoji set (~8000 images, mostly flat colour) through my exporter (just edited it so it can use libjxl), and compared Lossless JXL to Lossless WebP, PNG and crushed PNG
|
|
2023-06-11 03:15:23
|
at normal effort it loses to WebP but at max effort it beats it
|
|
|
_wb_
|
2023-06-11 03:19:41
|
If you have plenty of time you could try e10 😅
|
|
|
Dzuk
|
2023-06-11 03:23:30
|
looool, I think e9 was long enough
|
|
2023-06-11 03:31:39
|
|
|
2023-06-11 08:53:29
|
was experimenting to see if the size increase from progressive could be outweighed by the potential benefit of browsers choosing to download only part of a JXL depending on resolution; for me the answer seems to be 'no', but also on the e7 512px output the filesize massively spiked :S
|
|
|
jonnyawsom3
|
2023-06-11 08:58:54
|
Progressive lossless is a bit iffy at the moment
|
|
|
Dzuk
|
2023-06-11 09:36:30
|
ahhh okay
|
|
|
username
|
2023-06-11 10:26:59
|
<@162568168693432320> for lossless webp you should try ```cwebp -mt -lossless -z 9 -alpha_filter best -metadata icc``` which will result in max effort and also make sure icc profiles are properly copied over
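For a whole set like that emoji pack, the same invocation can be wrapped in a loop (a sketch; filenames and extensions are hypothetical):

```shell
# Batch version of the cwebp command above (paths are hypothetical).
# -z 9 is max lossless effort; -metadata icc keeps the ICC profile.
for f in *.png; do
  cwebp -mt -lossless -z 9 -alpha_filter best -metadata icc "$f" -o "${f%.png}.webp"
done
```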
|
|
|
Dzuk
|
2023-06-11 10:47:00
|
ooo thank you
|
|
|
username
|
2023-06-11 10:52:25
|
I'm unsure if "-z 9" also includes what "alpha_filter best" does but I included both just in case
|
|
|
Dzuk
|
2023-06-12 12:18:03
|
okay ^^, I'll probably avoid the -mt flag for my export application because it already multithreads export operations
|
|
2023-06-12 12:18:33
|
(well, the operations themselves aren't multithreaded, but it runs multiple export operations on individual threads)
|
|
2023-06-12 09:31:32
|
okay, maxing out WebP quality was not worth it but it was interesting to see lol
|
|
|
yoochan
|
2023-06-12 02:19:52
|
jxl works really well it seems 😄
|
|
|
_wb_
|
2023-06-12 02:22:09
|
well we could improve speed/density trade-offs, e1-e3 are OK but e4+ is slow and could be faster
|
|
2023-06-12 02:22:56
|
(and possibly there's also room for better heuristics to improve density without losing speed)
|
|
|
|
Squid Baron
|
2023-06-16 11:31:42
|
I have an image that jxl struggles with. With `cjxl -e 9 -q 100 ` I get a 195KB output
|
|
2023-06-16 11:32:06
|
the original (GIF) is 94 KB and an optipng-optimized png is 123KB
|
|
2023-06-16 11:32:31
|
|
|
|
username
|
|
Squid Baron
I have an image that jxl struggles with. With `cjxl -e 9 -q 100 ` I get a 195KB output
|
|
2023-06-16 11:39:41
|
adding `-g 3` results in the output getting down to 165KB
|
|
2023-06-16 11:40:08
|
also I did a max effort lossless WebP and it ended up being 114KB
|
|
|
|
Squid Baron
|
2023-06-16 12:03:54
|
Hmm, thanks, I didn't know about that flag. Apparently, it isn't documented.
|
|
2023-06-16 12:04:13
|
I did some googling, I found this thread https://github.com/libjxl/libjxl/issues/426
|
|
2023-06-16 12:04:27
|
and I played with some of the flags mentioned there
|
|
2023-06-16 12:05:14
|
and now my best result is 119933 bytes: ` -I 0.3 -P 0 -e 9 -q 100 -g 3`
|
|
|
|
veluca
|
2023-06-16 12:49:34
|
If you have time, try --allow_expert_options -e 10
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
Squid Baron
I have an image that jxl struggles with. With `cjxl -e 9 -q 100 ` I get a 195KB output
|
|
2023-06-16 12:56:34
|
Interestingly, effort 10 provides far better compression than effort 9, but is still larger than the GIF
- effort 10: 118799 bytes, 4153.316 seconds of compression `cjxl -e 10 -d 0 --brotli_effort=11 --allow_expert_options --num_threads=0 -I 100 -g 3 -E 11`
- effort 9: 165153 bytes, 14.802 seconds `cjxl -e 9 -d 0 --brotli_effort=11 --num_threads=0 -I 100 -g 3 -E 11`
|
|
|
|
veluca
|
2023-06-16 12:59:56
|
Don't use other flags with e10 (other than d0) - it will try everything 😛
|
|
2023-06-16 01:00:13
|
Also brotli effort doesn't matter here
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
veluca
Don't use other flags with e10 (other than d0) - it will try everything 😛
|
|
2023-06-16 01:04:00
|
<:frogstudy:872898333239279666>
and since I specified the different params, is the result different from the "vanilla" -e 10 ?
also, will cjxl bruteforce settings using parallelization ?
|
|
2023-06-16 01:09:10
|
oh yeah, full cpu load <:2_pepeShy:721833660457943151>
But I wonder if using multiple threads with JXL could increase output size ? <:Hmmm:654081052108652544>
|
|
|
|
veluca
|
|
TheBigBadBoy - 𝙸𝚛
oh yeah, full cpu load <:2_pepeShy:721833660457943151>
But I wonder if using multiple threads with JXL could increase output size ? <:Hmmm:654081052108652544>
|
|
2023-06-16 01:15:09
|
no no
|
|
|
TheBigBadBoy - 𝙸𝚛
<:frogstudy:872898333239279666>
and since I specified the different params, is the result different from the "vanilla" -e 10 ?
also, will cjxl bruteforce settings using parallelization ?
|
|
2023-06-16 01:15:40
|
it might be, I don't actually remember 😆
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
veluca
no no
|
|
2023-06-16 01:19:26
|
That's really nice to read, I really thought it would somewhat split the input image in (let's say) 4 rectangles and then compress them independently (at any level of effort)
|
|
2023-06-16 01:20:35
|
I got the exact same output size with only `cjxl -e 10 --allow_expert_options`
Compressed in 659.222 seconds using 16 threads
|
|
|
|
veluca
|
2023-06-16 01:21:22
|
huh not as slow as I thought 😆
|
|
|
_wb_
|
|
TheBigBadBoy - 𝙸𝚛
Interestingly, effort 10 provides far better compression than effort 9, but is still larger than the GIF
- effort 10: 118799 bytes, 4153.316 seconds of compression `cjxl -e 10 -d 0 --brotli_effort=11 --allow_expert_options --num_threads=0 -I 100 -g 3 -E 11`
- effort 9: 165153 bytes, 14.802 seconds `cjxl -e 9 -d 0 --brotli_effort=11 --num_threads=0 -I 100 -g 3 -E 11`
|
|
2023-06-16 02:19:28
|
Is this with current git version or with 0.8.x?
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
_wb_
Is this with current git version or with 0.8.x?
|
|
2023-06-16 02:20:44
|
0.8.2
|
|
2023-06-16 02:21:32
|
I guess with https://github.com/libjxl/libjxl/pull/2523
it might produce smaller output now ?
|
|
|
_wb_
|
2023-06-16 02:22:58
|
Maybe, maybe not
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2023-06-16 02:23:18
|
eh, before compressing I always forget that I use the latest version from my package manager <:dogelol:867794291652558888>
and not the nightly builds
|
|
|
_wb_
Maybe, maybe not
|
|
2023-06-16 02:24:00
|
gonna try it then <:thinkies:895863009820414004>
|
|
|
_wb_
|
2023-06-16 02:24:29
|
It will certainly be different, if it's an ex-gif
|
|
2023-06-16 02:24:56
|
I hope better but there's no guarantee
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
TheBigBadBoy - 𝙸𝚛
gonna try it then <:thinkies:895863009820414004>
|
|
2023-06-16 02:25:26
|
oh no nevermind, the latest nightly build is from the 10th of June, while your PR was merged on the 13th
|
|
2023-06-16 02:25:42
|
is it easy to build cjxl ourselves?
|
|
|
diskorduser
|
2023-06-16 02:26:14
|
Yes it's easy
|
|
2023-06-16 02:26:40
|
On linux*
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2023-06-16 02:27:10
|
ofc on Linux <:chad:862625638238257183>
|
|
2023-06-16 02:27:12
|
gonna try it then
|
|
|
_wb_
|
2023-06-16 02:27:57
|
https://github.com/libjxl/libjxl/blob/main/BUILDING.md
|
|
2023-06-16 02:29:36
|
```
sudo apt install cmake pkg-config libbrotli-dev clang ninja-build
export CC=clang CXX=clang++
git clone https://github.com/libjxl/libjxl.git --recursive --shallow-submodules
cd libjxl
./ci.sh opt
```
|
|
2023-06-16 02:30:01
|
That should do the trick on debian or ubuntu
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2023-06-16 02:35:39
|
thanks, I already saw that "how to build" file
I was mostly worried about finding the equivalent packages on Arch, but it appears I already had everything installed <:thinkies:895863009820414004>
|
|
|
diskorduser
|
2023-06-16 02:43:11
|
Just use makepkg from aur if you're on arch
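The AUR flow is short (a sketch; the exact package name `libjxl-git` is an assumption, check the AUR for the current one):

```shell
# Typical manual AUR build-and-install on Arch (package name hypothetical).
git clone https://aur.archlinux.org/libjxl-git.git
cd libjxl-git
makepkg -si   # build the package and install it with pacman, pulling dependencies
```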
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2023-06-16 02:51:07
|
Ahem <@794205442175402004> I think you'll be disappointed <:dogelol:867794291652558888>
Current git produces a bigger file (by 0.1%) <:kekw:758892021191934033>
118938 bytes, in 562.977s with 16 threads (this time compiled by me)
118799 bytes from 0.8.2
|
|
2023-06-16 02:53:40
|
`jxlinfo` showing the exact same stuff for the 2 files
|
|
2023-06-16 02:56:50
|
well that size increase is quite negligible
|
|
2023-06-16 02:56:59
|
but I wonder where it comes from <:Hmmm:654081052108652544>
|
|
2023-06-16 02:58:12
|
verbose version of `jxlinfo` still the same
|
|
|
_wb_
|
2023-06-16 03:03:48
|
What does it do at default effort or at e9?
|
|
2023-06-16 03:05:02
|
The size difference comes from using a different palette order
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2023-06-16 03:12:47
|
I'll do that in a few hours, need to leave my place sorry
|
|
2023-06-16 03:35:16
|
If I use `-fprofile-generate` when compiling, is it a problem to also use `-O3 -flto -static -mtune=generic` when doing that (and use them also with `-fprofile-use`) ?
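For reference, the usual two-pass PGO shape keeps the optimization flags identical in both passes and only swaps the profile flag, so the profile data matches the code it was gathered on. A sketch using the flag set from the question (the build/run helper scripts are hypothetical):

```shell
# Pass 1: instrumented build, then run a representative encoding workload.
export CFLAGS="-O3 -flto -static -mtune=generic -fprofile-generate"
export CXXFLAGS="$CFLAGS"
./build.sh && ./run_workload.sh   # hypothetical helpers

# Pass 2: rebuild with the same optimization flags, swapping in -fprofile-use.
export CFLAGS="-O3 -flto -static -mtune=generic -fprofile-use"
export CXXFLAGS="$CFLAGS"
./build.sh
```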
|
|
|
_wb_
What does it do at default effort or at e9?
|
|
2023-06-16 05:23:34
|
current git at default and -e 9 provides 1~2% smaller output than 0.8.2, while being 1.46x faster
```
96651 16 jun 16:06 dilbert.gif
254624 16 jun 19:19 dilbertDef.jxl
252811 16 jun 19:20 dilbertDefGit.jxl
200550 16 jun 19:19 dilbertE9.jxl
195689 16 jun 19:20 dilbertE9Git.jxl
```
|
|
2023-06-16 05:24:12
|
so on this side it's a really good improvement
|
|
2023-06-16 05:25:13
|
only -e 10 has the problem <:Hmmm:654081052108652544>
Should I try -e 10 on another file?
|
|
|
_wb_
|
2023-06-16 06:49:13
|
Meh, I mostly want to make e1-e7 get good trade-offs. If speed is not a concern, we can even make an e11 that is even more exhaustive than e10, but I think in practice you generally want something that is fast enough to be convenient.
|
|
|
lonjil
|
2023-06-16 06:58:29
|
Really depends on how many times each image might get served
|
|
2023-06-16 06:58:54
|
I can imagine some very large scale web sites being interested in ultra slow encoding
|
|
2023-06-16 06:59:15
|
But they'll probably tell you that themselves if they ever need it 🙂
|
|
|
_wb_
|
2023-06-16 07:03:10
|
Yeah if something will be served a lot it can be worth it, but many lossless workflows are basically save-once load-once...
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
lonjil
I can imagine some very large scale web sites being interested in ultra slow encoding
|
|
2023-06-16 07:12:14
|
and some nerds like me too <:2_pepeShy:721833660457943151>
|
|
2023-06-16 07:13:20
|
effort 11 ? <:woag:852007419474608208>
I thought effort 10 already bruteforced all the settings, what more would effort 11 bring?
|
|
2023-06-16 07:18:29
|
Also, I am wondering: if I manually bruteforced all the different arguments given to cjxl, could I reach the -e 10 size?
If I read correctly, the most "random" one is the `-I` option
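A manual sweep over a few e9 options might look like this (a sketch; the input name and the flag grid are hypothetical, and this covers only a small subset of what e10 tries internally):

```shell
# Hypothetical brute-force over a couple of cjxl e9 options, keeping every output.
for I in 1 50 100; do
  for g in 0 1 2 3; do
    cjxl -e 9 -d 0 -I "$I" -g "$g" in.png "out_I${I}_g${g}.jxl"
  done
done
# -S sorts largest first, so the last entry is the smallest candidate.
ls -S out_*.jxl | tail -n 1
```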
|
|
|
_wb_
|
2023-06-16 07:29:06
|
e10 tries many things, but as you can see it is not exhaustive, otherwise how could it get worse?
|
|
2023-06-16 07:29:32
|
e10 is basically trying lots of combinations of e9 options
|
|
2023-06-16 07:30:09
|
Truly exhaustive jxl is not feasible, the bitstream is just too expressive
|
|
|
derberg
|
2023-06-17 01:05:47
|
Well, with -e 10 it took hours in the worst case for me (Ryzen 7 3700X) when I tested on screenshots and smaller graphics
|
|
2023-06-17 01:05:53
|
And seconds in the best cases
|
|
2023-06-17 01:06:12
|
I would be open for something that could take days or weeks on my hardware
|
|
|
jonnyawsom3
|
2023-06-27 11:57:13
|
Was having a look on here and noticed the benchmark image took quite a while to load, decided to check how a JXL would fare
<https://www.cpuagent.com/cpu-compare/amd-ryzen-7-1700-vs-amd-ryzen-7-5800x3d/benchmarks/passmark-cpu-single-mark/nvidia-geforce-gtx-1070-ti?res=1&quality=ultra>
721 KB to 59.3 KB Not bad for a 12x lossless size reduction
Also, wouldn't recommend running `-e 10` on a 10K image... Let's just say I now have over half my RAM free after Ctrl + C
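Quick sanity check on that ratio with shell arithmetic:

```shell
# 721 KB down to 59.3 KB is roughly a 12x reduction.
awk 'BEGIN { printf "%.1fx\n", 721 / 59.3 }'   # prints 12.2x
```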
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2023-06-27 12:14:57
|
wouldn't even recommend `-e 10` on 1080p <:dogelol:867794291652558888>
|
|
|
jonnyawsom3
|
2023-06-27 12:16:36
|
The original filesize has a big impact on total memory usage, so I thought 700KB might not be too bad.... Yeahhhhhh, nah
|
|
|
|
veluca
|
|
Was having a look on here and noticed the benchmark image took quite a while to load, decided to check how a JXL would fare
<https://www.cpuagent.com/cpu-compare/amd-ryzen-7-1700-vs-amd-ryzen-7-5800x3d/benchmarks/passmark-cpu-single-mark/nvidia-geforce-gtx-1070-ti?res=1&quality=ultra>
721 KB to 59.3 KB Not bad for a 12x lossless size reduction
Also, wouldn't recommend running `-e 10` on a 10K image... Let's just say I now have over half my RAM free after Ctrl + C
|
|
2023-06-27 02:47:27
|
yep that would take a while
|
|
|
monad
|
|
The original filesize has a big impact on total memory usage, so I thought 700KB might not be too bad.... Yeahhhhhh, nah
|
|
2023-06-27 04:52:56
|
Not file size, _image_ size, measured in pixels.
|
|
|
jonnyawsom3
|
2023-06-27 05:58:22
|
Did two tests, both 4096x4096 PNG files from Krita, one with just a simple scribble on and the other a texture for a 3d model
234KB was 750MB (2GB peak) of RAM
34.4MB was 4.4GB (4.9GB peak) of RAM (Didn't finish due to taking so long)
|
|
|
spider-mario
|
2023-06-27 07:05:39
|
likely, the original filesize is not the direct factor, but correlates with it, downstream of it causally
|
|
|
monad
|
|
Did two tests, both 4096x4096 PNG files from Krita, one with just a simple scribble on and the other a texture for a 3d model
234KB was 750MB (2GB peak) of RAM
34.4MB was 4.4GB (4.9GB peak) of RAM (Didn't finish due to taking so long)
|
|
2023-06-27 08:02:15
|
This demonstrates memory usage is related to complexity of image content (which, yes, correlates with _compressed_ file size).
|
|
|
_wb_
|
2023-06-27 08:16:53
|
Kind of depends on the kind of encoder heuristics
|
|
2023-06-27 08:17:19
|
At e1-3 I am pretty sure memory is just linear in the number of pixels
|
|
2023-06-27 08:18:23
|
But when things get more complicated, e.g. patches are used, memory usage also starts depending on image content
|
|
|
jonnyawsom3
|
2023-06-28 01:30:14
|
Yeah, I know there was an issue open about large filesize PNGs using up dozens of gigs of memory too
<https://github.com/libjxl/libjxl/issues/464>
|
|
2023-06-28 01:34:18
|
But yeah, resolution, complexity and filesize all have an effect on the memory usage, although they are all generally intertwined too
|
|
|
derberg
|
|
TheBigBadBoy - 𝙸𝚛
wouldn't even recommend `-e 10` on 1080p <:dogelol:867794291652558888>
|
|
2023-06-28 01:44:47
|
Should be okay with 64 GB.
When I use -e 10 on screenshots of two monitors I have seen RAM usage of up to 21.5 GiB for cjxl
|
|
|
_wb_
|
2023-06-28 05:21:52
|
With e10 you probably want to limit the number of threads if you use it on larger images
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2023-06-28 05:46:29
|
I mean, the problem is not really the RAM usage but rather the time to encode <:dogelol:867794291652558888>
|
|
|
_wb_
|
2023-06-28 06:19:46
|
sure, but generally there is a hard cap on available RAM while time is only limited by patience 🙂
|
|
|
derberg
|
|
_wb_
With e10 you probably want to limit the number of threads if you use it on larger images
|
|
2023-06-28 01:21:01
|
Ah, so it does influence how much RAM is taken?
|
|
2023-06-28 01:21:32
|
More threads = more RAM usage?
|
|
|
jonnyawsom3
|
2023-06-28 01:27:01
|
If I recall e10 is essentially just brute forcing e9 options, so each thread could be trying combinations in parallel
|
|
|
derberg
|
2023-06-28 01:28:14
|
Ah, that could be what is going on
|
|
|
_wb_
|
2023-06-28 01:49:35
|
yeah, e10 just does one thread per setting, as opposed to e9 and below which do just one setting where some steps in the process are multithreaded (but most of the high-effort modular encode stuff is still sequential)
|
|
|
monad
|
|
But yeah, resolution, complexity and filesize all have an effect on the memory usage, although they are all generally intertwined too
|
|
2023-06-28 06:47:04
|
Here's a test for filesize using images with same content:
```v0.9.0 141c48f5 [AVX2,SSE4,SSSE3,SSE2]
cjxl -d 0 -e 9 --num_threads 0 --disable_output
cjxl memory usage, median of 5 trials
pixels file(b) mem(kb)
sv.00.png 3840x2160 24925484 710972
sv.11.png 3840x2160 959928 687412
succ.00.png 600x900 1623567 84132
succ.11.png 600x900 756728 83500```
The difference in memory is less than the difference in filesize. Not dramatic.
|
|
2023-06-28 06:48:53
|
Surely this is not related to the actual encoding process.
|
|
|
jonnyawsom3
|
2023-06-28 06:54:21
|
I assume you just used worse PNG compression to get the larger size?
|
|
|
Traneptora
|
2023-06-28 07:13:10
|
guessing it's not freeing zlib stuff aggressively enough
|
|
|
nec
|
2023-07-04 08:22:26
|
Can I trust ssimulacra2.1 scores with avif? Usually a score around 85 is hard to distinguish without zooming, but it's different in the case of avif, and sometimes there are easily perceivable differences. If I compare 2 pictures at something like 85 jxl and 85 avif, they can be completely different.
|
|
2023-07-04 08:27:39
|
I feel like jxl has smoother scoring, in the sense that if we split the whole image into 100-1000 fragments, all fragments would be close to each other in quality. But in the case of avif it might sometimes score well on the majority of such fragments, then have several bad fragments that are easily perceivable but at the same time barely affect the overall score. Does this make sense?
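That per-fragment idea is easy to test by hand (a sketch; it assumes ImageMagick and a built `ssimulacra2` binary are on the PATH, and the filenames are hypothetical):

```shell
# Tile the original and the decoded image into a 10x10 grid, then score
# each tile pair separately to expose localized errors the global score hides.
magick original.png -crop 10x10@ +repage orig_tile_%03d.png
magick decoded.png  -crop 10x10@ +repage dec_tile_%03d.png
for t in orig_tile_*.png; do
  printf '%s ' "$t"
  ssimulacra2 "$t" "dec_tile_${t#orig_tile_}"
done
```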
|
|
|
monad
|
2023-07-04 08:46:18
|
You can't trust it absolutely, but it is one of the least bad perceptual metrics. butteraugli max-norm might be more discriminative about local error.
|
|
|
nec
|
2023-07-04 10:13:42
|
I've tried to check a specific picture, and I was wrong. Fragmenting doesn't help much. Here is a 3x zoom of the problematic place. Avif completely blurs it, while jpeg xl tries to preserve even the shape of that artifact in the corner. Banding can probably be solved with different settings, but those check-like patterns are important. It's not artifacts, but a dot-like transition. While these lower quality settings do not preserve it properly, jpeg xl tries to create something similar and it works at normal viewing distance, while in the case of avif it's easily perceivable that the dot-like shadowing turned into a gradient. This is an 82.2 score for the showcase, but the problem stays the same until an 89-90 score. The score of a smaller 150x200 pixel fragment is also similar, so I suppose from the metric's point of view both encodings are lacking, but psychovisually jpeg xl produced more fidelity. This is why I was pondering whether it's even accurate to compare. While at 90 both pictures are similar, I feel that there are probably more situations where jpeg xl at 80-85 is more acceptable than 80-85 avif.
https://i.imgur.com/pTxjBKq.gif
|
|
2023-07-04 10:48:31
|
Btw, the size of the avif in this case was 30% smaller, that's why I decided to check in detail.
|
|
|
Jim
|
2023-07-04 10:51:06
|
That's the point of AVIF - it's a video codec. When frames are whizzing by at 30-60 per second, you're not likely to even notice the finer details. So it gets blurred out to save bandwidth. For a still image that needs the finer detail, it's more of a problem than a solution.
|
|
|
|
veluca
|
2023-07-04 11:38:25
|
at the end of the day, the only accurate way to evaluate image quality is to use human eyes 😉
|
|
|
spider-mario
|
2023-07-04 12:14:53
|
and brain
|
|
2023-07-04 12:14:57
|
people sometimes forget that part
|
|
|
jonnyawsom3
|
2023-07-04 01:37:17
|
Assuming they have one these days
|
|
|
DZgas Ж
|
|
nec
I've tried to check a specific picture, and I was wrong. Fragmenting doesn't help much. Here is a 3x zoom of the problematic place. Avif completely blurs it, while jpeg xl tries to preserve even the shape of that artifact in the corner. Banding can probably be solved with different settings, but those check-like patterns are important. It's not artifacts, but a dot-like transition. While these lower quality settings do not preserve it properly, jpeg xl tries to create something similar and it works at normal viewing distance, while in the case of avif it's easily perceivable that the dot-like shadowing turned into a gradient. This is an 82.2 score for the showcase, but the problem stays the same until an 89-90 score. The score of a smaller 150x200 pixel fragment is also similar, so I suppose from the metric's point of view both encodings are lacking, but psychovisually jpeg xl produced more fidelity. This is why I was pondering whether it's even accurate to compare. While at 90 both pictures are similar, I feel that there are probably more situations where jpeg xl at 80-85 is more acceptable than 80-85 avif.
https://i.imgur.com/pTxjBKq.gif
|
|
2023-07-04 02:29:17
|
Hmm. A very unrealistic use case... hmm.. personally, if I see that my original image has jpeg artifacts, I use the "cunet" model in waifu2x-caffe, a neural network that is specially trained to destroy jpeg artifacts https://discord.com/channels/794206087879852103/794206087879852106/1104649413831438376 , which is also suitable for photos.
But doing the same thing in the image codec... that is strange
|
|
2023-07-04 02:34:10
|
we want it from codecs that are designed to convey the details of the picture as much as possible...
Logically, JXL is better here because it really preserves the details that were there.
|
|
|
jonnyawsom3
|
2023-07-04 02:53:53
|
That's the point
|
|
|
DZgas Ж
|
|
Jim
That's the point of AVIF - it's a video codec. When frames are whizzing by at 30-60 per second, you're not likely to even notice the finer details. So it gets blurred out to save bandwidth. For a still image that needs the finer detail, it's more of a problem than a solution.
|
|
2023-07-04 03:31:28
|
that's not the case here; the AVIF is based on a keyframe, a frame with no motion applied to it
|
|
|
jonnyawsom3
|
2023-07-04 03:41:59
|
And which is still displayed for 12ms
|
|
|
Foxtrot
|
2023-07-04 04:01:05
|
now it makes sense why AVIF is bad for science and medical use... would be bad if it decided to blur tumor or something
|
|
|
jonnyawsom3
|
2023-07-04 04:06:36
|
It's bad for any use outside of video or tiny filesizes, you lose all the fine detail
|
|
|
Traneptora
|
|
Foxtrot
now it makes sense why AVIF is bad for science and medical use... would be bad if it decided to blur tumor or something
|
|
2023-07-04 04:27:23
|
science and medical use typically is lossless, so it's only inferior to JXL in that lossless JXL outperforms it in speed and compression density
|
|
2023-07-04 04:27:26
|
quality is identical, cause lossless
|
|
|
jonnyawsom3
|
2023-07-04 04:29:56
|
And that it can only do 12-bit or whatever it was
|
|
|
BlueSwordM
|
|
nec
Can I trust ssimulacra2.1 scores with avif? Usually a score around 85 is hard to distinguish without zooming, but it's different in the case of avif, and sometimes there are easily perceivable differences. If I compare 2 pictures at something like 85 jxl and 85 avif, they can be completely different.
|
|
2023-07-04 06:21:44
|
What are your avifenc settings?
|
|
|
nec
|
|
BlueSwordM
What are your avifenc settings?
|
|
2023-07-04 07:10:37
|
By default I use "-s 2 --min 0 --max 63 -a tune=ssim -a end-usage=q -a cq-level=24" and simply adjust the cq to get a specific ssimulacra score. Sometimes I play around with -d 10-12, --sharpyuv or aq-mode, but haven't decided on a final result yet.
|
|
2023-07-04 07:18:08
|
Also that blurring is quite common. Here is another example with skin color. Bottom-left is the crop, and the right side is a 2x zoom. Avif has an 84.8 score, while jpeg xl is at 81.8 with a slightly smaller size, but in my opinion it retains the overall idea better here. Avif not only blurs, but also adds something of its own, and its visibility slightly depends on the color profile. For example, there are 2 slightly lighter lines a bit below center, so it looks like a kind of m wave.
https://i.imgur.com/HePZf32.png
Ok, Discord doesn't seem to like animations. It's actually an apng and animated at the link.
|
|
2023-07-04 07:44:20
|
At an equal 85 ssimu score jxl is just better at retaining all the details here. So I'm not so sure about it. Avif is decent at 90. Also such blurring should be good for something like a 60-70 score, when people want a very small size and are fine with some kinds of distortions. But I'm not sure about this in-between range. At least it's obvious that we can't simply take a pack of images, encode with both, and directly compare based on metrics alone. Out of ~30 different images, I've seen several where avif had easily perceivable changes that often stayed until an ~88 ssimulacra score, while for jpeg xl it was hard to distinguish after ~84.
|
|
|
jonnyawsom3
|
2023-07-04 11:09:41
|
Heh, JXL even conserves the original jpeg group borders
|
|
|
fab
|
|
nec
Also that blurring is quite common. Here is another example with skin color. Bottom-left is the crop, and the right side is a 2x zoom. Avif has an 84.8 score, while jpeg xl is at 81.8 with a slightly smaller size, but in my opinion it retains the overall idea better here. Avif not only blurs, but also adds something of its own, and its visibility slightly depends on the color profile. For example, there are 2 slightly lighter lines a bit below center, so it looks like a kind of m wave.
https://i.imgur.com/HePZf32.png
Ok, Discord doesn't seem to like animations. It's actually an apng and animated at the link.
|
|
2023-07-05 06:38:04
|
avm is better than that
|
|
2023-07-05 06:38:23
|
update your jxl to have fewer issues
|
|
2023-07-05 06:38:24
|
also
|
|
2023-07-05 06:38:39
|
for %i in (C:\Users\Use\Documents\a\*.png) do cjxl -e 8 -d 0.647 -I 0.154 --lossless_jpeg=0 "%i" "%i.jxl"
|
|
2023-07-05 06:38:41
|
use this
|
|
2023-07-05 06:38:51
|
don't go to blocking
|
|
2023-07-05 06:39:24
|
anyway with e nine d one
|
|
2023-07-05 06:40:52
|
if you're palermitan and there is a fat woman with you, your face will be noticeably puffier
|
|
2023-07-05 06:44:00
|
s 8 d 0.805 i 0.758 would suit your use case a lot better
|
|
2023-07-05 06:44:21
|
and won't cause the puffier effect
|
|
2023-07-05 06:45:02
|
but screenshot will have only an acceptable quality, not good for print
|
|
|
DZgas Ж
|
2023-07-05 06:45:53
|
<:PepeGlasses:878298516965982308>
|
|
|
fab
|
|
DZgas Ж
<:PepeGlasses:878298516965982308>
|
|
2023-07-05 06:46:09
|
you want a comparison
|
|
2023-07-05 06:46:17
|
avm i don't have it now
|
|
2023-07-05 06:46:32
|
because cpu 5 crashed
|
|
2023-07-05 06:46:48
|
warp rachel barker
|
|
2023-07-05 06:47:55
|
|
|
|
DZgas Ж
|
|
fab
you want a comparison
|
|
2023-07-05 06:48:07
|
No I won't do absolutely nothing
|
|
|
fab
|
2023-07-05 06:48:37
|
|
|
2023-07-05 06:49:49
|
|
|
|
fab
|
|
2023-07-05 06:49:59
|
that's the new jxl
|
|
2023-07-05 06:50:52
|
a bit of blocking and undersaturation
|
|
|
DZgas Ж
|
|
nec
I've tried to check specific picture, and I was wrong. Fragmenting doesn't help much. Here is 3x zoom of problematic place. Avif completely blurs it, while jpeg xl tries to preserve even the shape of that artifact in the corner. Banding probably can be solved with different settings, but those check-like patterns are important. It's not artifacts, but dot-like transition. While these lower quality settings do not preserve it properly, jpeg xl tries to create something similar and it works at normal viewing distance, while in case of avif it's easily perceiavable that dot-like shadowing turned into gradient. This is 82.2 score for a showcase, but the problem stays the same until 89-90 score. Score of smaller 150x200 pixels fragment is also similar, so I suppose from metrics point of view, both encodings are lacking, but psychovisually jpeg produced more fidelity. This is why I was pondering if it's even accurate to compare. While at 90 both pictures are similar, I feel that there are probably more situations when jpeg xl 80-85 is more accaptable than 80-85 avif.
https://i.imgur.com/pTxjBKq.gif
|
|
2023-07-05 06:50:55
|
The topic is not clear to me in general. JXL saves details better than avif. Proof was shown in gif. Well, that's it, closed the topic....
|
|
|
fab
|
|
fab
|
|
2023-07-05 06:52:18
|
this webp3
|
|
|
DZgas Ж
|
2023-07-05 06:52:33
|
<:PepeGlasses:878298516965982308> 3
|
|
|
fab
|
2023-07-05 06:52:45
|
avm
|
|
2023-07-05 06:53:06
|
125,1kb
|
|
2023-07-05 06:54:01
|
old build no warp
|
|
|
DZgas Ж
|
|
fab
avm
|
|
2023-07-05 06:54:56
|
real who
|
|
|
fab
|
|
DZgas Ж
real who
|
|
2023-07-05 06:55:30
|
thanks, it's useful
|
|
2023-07-05 06:55:37
|
anyway i have an old blog about jxl
|
|
|
DZgas Ж
|
2023-07-05 06:55:51
|
https://discord.com/channels/794206087879852103/794206170445119489/1080900107278504038
|
|
|
fab
|
2023-07-05 06:55:56
|
plus the lemmy.world of bitflag
|
|
2023-07-05 06:56:59
|
cpu seven works with warp
|
|
2023-07-05 06:57:12
|
i need only six
|
|
2023-07-05 06:57:42
|
the file is super big
|
|
2023-07-05 06:57:46
|
like super
|
|
|
DZgas Ж
|
|
fab
thanks, it's useful
|
|
2023-07-05 06:58:21
|
But you say your words uselessly. The main developer of jxl does not know what the fuck this "avm" is that you keep mentioning, but even after that you continue to do it. Enough, say the full name every time so that I could at least Google it!
|
|
|
fab
|
|
DZgas Ж
But you say your words uselessly. The main developer of jxl does not know what the fuck this "avm" is that you keep mentioning, but even after that you continue to do it. Enough, say the full name every time so that I could at least Google it!
|
|
2023-07-05 06:58:55
|
i'm not that advanced in english
|
|
2023-07-05 06:59:19
|
cpu six is same as seven
|
|
2023-07-05 06:59:43
|
aomenc.exe C:\Users\Use\Pictures\a\lo.y4m --cpu-used=6 --threads=4 --min-qp=30 --width=1366 --height=768 --max-qp=68 -o C:\Users\Use\Videos\single.ivf -
|
|
2023-07-05 07:00:03
|
i will do higher
|
|
2023-07-05 07:00:55
|
i did this aomenc.exe C:\Users\Use\Pictures\a\lo.y4m --cpu-used=6 --threads=4 --min-qp=60 --width=1366 --height=768 --max-qp=136 -o C:\Users\Use\Videos\single.ivf -
|
|
2023-07-05 07:01:19
|
aomdec C:\Users\Use\Videos\single.ivf -o C:\Users\Use\Videos\csao.y4m
|
|
2023-07-05 07:01:27
|
ffmpegold.exe -i C:\Users\Use\Videos\csao.y4m C:\Users\Use\Videos\sqfa.png
|
|
2023-07-05 07:01:58
|
a bit contrasty
|
|
2023-07-05 07:02:46
|
|
|
2023-07-05 07:03:17
|
according to yiang wu av2 comes this year
|
|
2023-07-05 07:03:28
|
and it's already at 25 gains
|
|
2023-07-05 07:03:56
|
the interview published on livevideostack.cn on the twenty-ninth of june
|
|
2023-07-05 07:04:53
|
distortion modelling of the noise
|
|
2023-07-05 07:05:30
|
this has taken 5.1s on i3 330m
|
|
2023-07-05 07:05:53
|
it was even faster than webp2
|
|
2023-07-05 07:06:10
|
like 198s for cpu four
|
|
2023-07-05 07:06:27
|
and quality is not better than jpeg xl
|
|
|
fab
|
|
2023-07-05 07:14:15
|
The photo is clipped on my phone
|
|
2023-07-05 07:14:30
|
And doesn't look better
|
|
2023-07-05 07:15:18
|
Anyway it has the best font retention
|
|
2023-07-05 07:15:28
|
Out of all photos
|
|
2023-07-05 07:20:15
|
AVM retains the saturation without being too reflective
|
|
2023-07-05 07:20:25
|
Even at High speed
|
|
2023-07-05 07:20:50
|
It's not a codec yet, JPEG XL is
|
|
|
nec
|
|
fab
for %i in (C:\Users\Use\Documents\a\*.png) do cjxl -e 8 -d 0.647 -I 0.154 --lossless_jpeg=0 "%i" "%i.jxl"
|
|
2023-07-05 07:59:55
|
Any particular reason to use non-default -I parameter?
|
|
|
fab
|
|
fab
s 8 d 0.805 i 0.758 would be better suited to your use case like a lot
|
|
2023-07-05 08:03:48
|
This is the best
|
|
2023-07-05 08:04:20
|
For font retention and avoiding blocking and puffing issues
|
|
|
jonnyawsom3
|
2023-07-05 07:20:04
|
Generally I just try `-I 1, -I 50 (Default) and -I 100`
|
|
|
fab
|
|
Generally I just try `-I 1, -I 50 (Default) and -I 100`
|
|
2023-07-06 07:55:26
|
Those are not new settings I just invented now
|
|
|
monad
|
2023-07-06 08:02:13
|
fabian invents the most precise settings for each context
|
|
|
gb82
|
|
monad
fabian invents the most precise settings for each context
|
|
2023-07-06 01:54:00
|
Yes
|
|
|
nec
|
2023-07-11 04:17:34
|
I got curious about upscaling. Quite often there is no difference between different modes at a normal viewing distance, but if we look at pixels, we can see small changes. And it's not even so rare, we can zoom pictures or view 480-720p video in fullscreen. Here is a small comparison between jxl and avif. I took pictures with 88-89 ssimulacra scores and losslessly upscaled them x2 and x3 to check how much the ssimulacra score would change. Average scores are similar. The median for jxl at x2 was 0.9327 and for avif 0.9305, at x3 0.8995 and 0.8971, thus typically an 89 score becomes around 80 at x3 zoom. But we can see some differences in the 95th percentile, where jxl quite consistently provides slightly higher values. The avif line's jaggedness is due to sorting values by jxl for easier comparison; in practice both vary due to the picture's content.
|
|
|
jonnyawsom3
|
2023-07-11 04:21:28
|
Odd timing with the video I put in <#806898911091753051> about scaling methods. Although at first I thought you were benchmarking cjxl's `--resampling`
|
|
|
nec
|
2023-07-11 04:22:05
|
I will check that. My scaling was done simply by repeating values 2-3 times on both axes. I had seen how the avif and jxl approaches to lossy differ quite significantly; similarly -e6 and -e9 for jxl apply different strategies, which differ on individual pixels but are not always visible at a normal viewing distance. This scaled approach was an attempt to automate comparison a bit. Like, what if both pictures would have a similar score at a normal viewing distance, but differ significantly after zooming a bit.
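A minimal sketch of that repeat-based lossless upscale in pure Python (toy input for illustration, not the actual benchmark code):

```python
def nn_upscale(img, scale):
    """Upscale by repeating each pixel `scale` times on both axes (nearest-neighbor)."""
    # repeat each pixel horizontally, and each resulting row vertically
    return [[px for px in row for _ in range(scale)]
            for row in img for _ in range(scale)]

img = [[1, 2],
       [3, 4]]
assert nn_upscale(img, 2) == [[1, 1, 2, 2],
                              [1, 1, 2, 2],
                              [3, 3, 4, 4],
                              [3, 3, 4, 4]]
```

Since no interpolation happens, the upscaled image is pixel-exact, so any metric change between codecs comes purely from how their artifacts look when magnified.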
|
|
|
jonnyawsom3
|
2023-07-11 05:07:40
|
Yeah, photos you're almost expected to zoom in, videos it's almost impossible without specific software
|
|
|
Traneptora
|
2023-07-17 03:34:51
|
Is there any way we can add `ssimulacra2` to the installed CLI tools when debug tools are disabled?
|
|
2023-07-17 03:35:14
|
I feel like that one is worth installing even without jxl_from_tree and hgl_to_pq or whatever
|
|
|
acedent
|
2023-07-23 05:01:16
|
Hi 👋 ... what's the status of `cjxl` for small (<200 bytes), lossless images? I was surprised that WebP comes close, and is beaten by WebPv2 ... Is there still some tuning planned for lossless with the core encoder?
|
|
|
jonnyawsom3
|
2023-07-23 05:44:30
|
Right now bit packing (two 4 bit pixels per byte) and palette ordering (webp is currently much better) are two pitfalls JXL has; if/once those are implemented/improved there should be a larger gap over webp, especially for small images
|
|
|
acedent
|
|
Right now bit packing (two 4 bit pixels per byte) and palette ordering (webp is currently much better) are two pitfalls JXL has; if/once those are implemented/improved there should be a larger gap over webp, especially for small images
|
|
2023-07-23 05:47:28
|
So bitdepths 1,2 & 4 (from png) are not currently supported by `cjxl`?
|
|
|
jonnyawsom3
|
2023-07-23 05:49:16
|
They are supported, but they can be larger than the original png due to using 8 bits with the other 4 zeroed out (If I recall, feel free to correct me). Usually the zeros are compressed down in the final file so it has less of an impact though
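For illustration, this is the general nibble-packing idea being discussed (two 4-bit pixels per byte instead of one padded byte each); a quick Python sketch, not libjxl's actual code:

```python
def pack4(pixels):
    """Pack pairs of 4-bit values (0-15) into single bytes, high nibble first."""
    if len(pixels) % 2:
        pixels = pixels + [0]  # pad odd-length input with a zero nibble
    return bytes((a << 4) | b for a, b in zip(pixels[::2], pixels[1::2]))

def unpack4(data, count):
    """Inverse of pack4: split each byte back into two 4-bit values."""
    out = []
    for byte in data:
        out.append(byte >> 4)
        out.append(byte & 0x0F)
    return out[:count]

px = [1, 15, 7, 0, 9]
packed = pack4(px)
assert len(packed) == 3          # 5 pixels fit in 3 bytes instead of 5
assert unpack4(packed, 5) == px  # round-trips losslessly
```

In practice the entropy coder already squeezes out most of the zeroed high nibbles, which is why the missing packing hurts less than the 2x raw size suggests.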
|
|
|
acedent
|
|
They are supported, but they can be larger than the original png due to using 8 bits with the other 4 zeroed out (If I recall, feel free to correct me). Usually the zeros are compressed down in the final file so it has less of an impact though
|
|
2023-07-23 05:50:32
|
Wow. Ok. So there's definitely some low hanging fruit 🙂
|
|
|
jonnyawsom3
|
2023-07-23 06:09:41
|
If I recall wb said they might be able to do bitpacking using some modular encoding wizardry, but it'll take some work
|
|
2023-07-23 06:23:47
|
Here's an old explanation of how it could work https://discord.com/channels/794206087879852103/794206170445119489/935096155354832906
|
|
|
|
afed
|
2023-07-25 06:04:20
|
<https://encode.su/threads/4025-HALIC-(High-Availability-Lossless-Image-Compression)?p=80326&viewfull=1#post80326>
|
|
2023-07-25 06:04:24
|
|
|
2023-07-25 06:04:31
|
|
|
2023-07-25 06:12:04
|
is jxl e1 decoding still the same after the recent improvements?
although as far as i remember it's a slower version of jxl with msvc compiler, without avx2
jxl can be much faster with clang
|
|
2023-07-25 06:16:51
|
> Test Machine: i7 3770K
and 3770K doesn't support avx2 😔
|
|
|
BlueSwordM
|
|
afed
> Test Machine: i7 3770K
and 3770K doesn't support avx2 😔
|
|
2023-07-25 06:19:06
|
I have the impression he did it on purpose...
|
|
|
_wb_
|
2023-07-25 06:38:25
|
e1 encode should be way faster than that I think
|
|
|
w
|
2023-07-25 06:42:11
|
yeah but what about halic with avx2
|
|
|
|
afed
|
2023-07-25 06:50:10
|
i don't think there will be any noticeable difference, at least until some other entropy encoders are used or implementations optimized for avx2+
> SIMD was not used. FSE was preferred as entropy encoder considering the old systems.
>
> The results are the same as the compression ratio as V.0.6. In my tests, good results can be obtained with various alternatives such as rANS -o1, Deflate and Error Correction. However, because the process speed fell slightly, I left it later. And AVX2 alternatives.
|
|
2023-07-25 06:55:22
|
good speed without using SIMD is great, but I think for a future-proof codec maximizing speed with SIMD is something that should be a highest priority
|
|
|
jonnyawsom3
|
2023-07-25 11:38:14
|
e1 usually takes less than a second, must have some pretty big bottlenecks to hit 13 (seconds?)
|
|
|
_wb_
|
2023-07-25 01:36:09
|
I don't think this is the fast_lossless e1 codepath that ends up being used. Maybe he's calling libjxl with input in the wrong pixel format so it doesn't trigger the fast_lossless codepath, or something like that
|
|
2023-07-25 01:39:27
|
fast lossless encoding is an order of magnitude faster than decoding, so this is the generic fallback e1 codepath (the one that is used whenever the fast path cannot be used, like for > 16-bit, > 4 channels, etc)
|
|
|
|
afed
|
2023-07-25 01:53:02
|
i don't think it's that slow, because as i understand it's not a single image, but 16 large 46 megapixel photos
but still e1 should be faster, it seems to be an issue with the git binary version (with msvc) and testing without avx2
|
|
2023-07-25 02:08:26
|
|
|
|
afed
<https://encode.su/threads/4025-HALIC-(High-Availability-Lossless-Image-Compression)?p=80326&viewfull=1#post80326>
|
|
2023-07-31 01:02:41
|
is it possible to make at least e2 consume less memory or is it strictly required, maybe only for single threaded mode?
and again lots of people make benchmarks based on official github binary versions for windows, which are much slower than they could be and make wrong statements about jxl speed
|
|
|
jonnyawsom3
|
2023-07-31 01:12:43
|
Streaming encoder should help with memory use, but since e1 runs a specific 'fast lossless' mode optimizing e2 may not be as easy as it sounds
|
|
|
_wb_
|
2023-07-31 01:31:50
|
Currently e2+ is doing global histograms which means it's buffering all entropy coded tokens, then making histograms and doing context clustering, and only then writing bitstream. Likely with local histograms, compression would not be too different (<@179701849576833024> have you tried this at some point?). There would be more signaling cost since histograms would have to be signaled per group instead of globally, but possibly compression could still be better since the histograms are tuned to each group. It would reduce the memory consumption to a fixed amount per thread (one group worth of tokens) and remove the sequential bottleneck in encoding (since each group can be encoded independently).
|
|
2023-07-31 01:34:11
|
I mean for lossless. For lossy, I suppose signaling the histograms+clustering for every group is too much signaling overhead and the effect on compression would be bad. For lossless, especially with small fixed MA trees like we have in e2/e3, the histograms+clustering signaling should be small compared to the data itself
|
|
|
|
veluca
|
2023-07-31 02:11:43
|
We're working on making a (more) streaming encoder right now 😄
|
|
|
|
afed
|
2023-07-31 02:21:19
|
for lossless as well?
and it would be nice to have some modes for cli encoder
|
|
2023-07-31 04:55:28
|
my quick comparison
`8192x5456 image, 134086673 bytes`
```md
# clang v0.9.0 [cjxl e1 lossless MT 8T]
134,086,673 -> 60,600,350: real 586 mb/s (0.218 sec) = 221%. ram 439084 KB, vmem 626304 KB
# clang v0.9.0 [cjxl e1 lossless]
134,086,673 -> 60,600,350: real 304 mb/s (0.420 sec) = 92%. ram 438872 KB, vmem 625960 KB
# Git release build v0.8.1 [cjxl e1 lossless]
134,086,673 -> 60,590,767: real 73 mb/s (1.746 sec) = 98%. ram 440368 KB, vmem 626292 KB
# Halic 0.7
134,086,673 -> 42,934,087: real 207 mb/s (0.615 sec) = 99%. ram 14668 KB, vmem 17704 KB
# Halic 0.7 MT
134,086,673 -> 42,934,087: real 455 mb/s (0.281 sec) = 171%. ram 203820 KB, vmem 667420 KB
# QLIC2
134,086,673 -> 55,560,218: real 133 mb/s (0.956 sec) = 99%. ram 309988 KB, vmem 307196 KB
# KVICK
134,086,673 -> 50,718,469: real 95 mb/s (1.333 sec) = 99%. ram 184640 KB, vmem 289856 KB
# QIC
134,086,673 -> 59,306,145: real 223 mb/s (0.573 sec) = 97%. ram 235868 KB, vmem 328976 KB
# WEBP -z 0
134,381,603 -> 61,230,557: real 49 mb/s (2.596 sec) = 97%. ram 939072 KB, vmem 1029708 KB
# clang v0.9.0 [cjxl e2 lossless]
134,086,673 -> 57,074,430: real 33 mb/s (3.826 sec) = 97%. ram 3082080 KB, vmem 3578156 KB
# clang v0.9.0 [cjxl e3 lossless]
134,086,673 -> 54,192,098: real 17 mb/s (7.516 sec) = 95%. ram 3076816 KB, vmem 3575676 KB
# clang v0.9.0 [cjxl e4 lossless]
134,086,673 -> 53,533,646: real 5 mb/s (25.388 sec) = 98%. ram 3076456 KB, vmem 3575728 KB```
|
|
|
|
veluca
|
|
afed
for lossless as well?
and it would be nice to have some modes for cli encoder
|
|
2023-07-31 05:02:28
|
yep
|
|
|
uis
|
|
afed
|
|
2023-08-02 05:19:42
|
Can't find HALIC source code
|
|
|
jonnyawsom3
|
2023-08-02 06:33:40
|
That's because there isn't any so far
|
|
|
|
afed
|
2023-08-08 06:10:13
|
<:FeelsReadingMan:808827102278451241>
|
|
|
yoochan
|
2023-08-26 08:45:31
|
I was looking for documentation of benchmark_xl; doc/benchmarking.md seems outdated. And I'm specifically looking for the syntax of the --codec parameter, do you have some tips for me? 😅
|
|
|
_wb_
|
2023-08-26 08:52:27
|
It's poorly documented
|
|
2023-08-26 08:52:42
|
I check the code when in doubt lol
|
|
2023-08-26 08:54:01
|
--codec=jxl:d1:6,jxl:d2:3 will do a distance 1 encode at effort 6 and a distance 2 encode at effort 3
|
|
|
yoochan
|
2023-08-26 08:54:08
|
ok 😄 like here tools/benchmark/benchmark_args.h ?
|
|
2023-08-26 08:54:27
|
nice, which letter could I use to set quality instead of distance (if possible ?)
|
|
2023-08-26 08:54:59
|
your example should be enough for what I want to do 😄 thanks
|
|
|
_wb_
|
2023-08-26 08:55:08
|
I don't think quality is implemented in benchmark_xl, you can use cjxl to translate q to d since it says what d it uses
|
|
|
yoochan
|
2023-08-26 08:55:38
|
oki, perfect, that's why it didn't work 😄 i'll do with d
|
|
2023-08-26 05:22:07
|
the formula found in cjxl_main should be the right one, isn't it ?
``` double distance = args->quality >= 100 ? 0.0
: args->quality >= 30
? 0.1 + (100 - args->quality) * 0.09
: 53.0 / 3000.0 * args->quality * args->quality -
23.0 / 20.0 * args->quality + 25.0;```
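A direct Python transcription of that ternary, handy for poking at values (just a transcription for experimenting, not an official mapping API):

```python
def cjxl_quality_to_distance(q):
    """Transcription of the quality-to-distance formula quoted from cjxl_main."""
    if q >= 100:
        return 0.0
    if q >= 30:
        return 0.1 + (100 - q) * 0.09
    return 53.0 / 3000.0 * q * q - 23.0 / 20.0 * q + 25.0

assert cjxl_quality_to_distance(100) == 0.0
assert abs(cjxl_quality_to_distance(90) - 1.0) < 1e-9
# note the jump at q=100: just below it the linear branch gives ~0.1, not 0
assert cjxl_quality_to_distance(99.999) > 0.1
```

The linear branch would only reach distance 0 at q ≈ 101.1, which is the small discontinuity discussed further down.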
|
|
2023-08-26 05:30:17
|
https://www.desmos.com/calculator/5onywrdyau
|
|
|
_wb_
|
2023-08-26 05:30:59
|
Yeah. There is no real right or wrong here though, it's just a parameter where higher d or lower q means more distortion
|
|
|
yoochan
|
2023-08-26 05:31:46
|
yep, I just wanted something similar to what's used, but I understand your point, only distance is used internally
|
|
2023-08-26 05:40:55
|
funny, the 30 -> 100 part crosses distance 0 at quality = 101 it seems 😄
|
|
|
Traneptora
|
|
yoochan
the formula found in cjxl_main should be the right one, isn't it ?
``` double distance = args->quality >= 100 ? 0.0
: args->quality >= 30
? 0.1 + (100 - args->quality) * 0.09
: 53.0 / 3000.0 * args->quality * args->quality -
23.0 / 20.0 * args->quality + 25.0;```
|
|
2023-08-27 11:49:57
|
depends on what you mean by "correct"
|
|
2023-08-27 11:50:04
|
pretty sure FFmpeg has a slightly different method
|
|
2023-08-27 11:51:19
|
```c
/**
* Map a quality setting for -qscale roughly from libjpeg
* quality numbers to libjxl's butteraugli distance for
* photographic content.
*
* Setting distance explicitly is preferred, but this will
* allow qscale to be used as a fallback.
*
* This function is continuous and injective on [0, 100] which
* makes it monotonic.
*
* @param quality 0.0 to 100.0 quality setting, libjpeg quality
* @return Butteraugli distance between 0.0 and 15.0
*/
static float quality_to_distance(float quality)
{
if (quality >= 100.0)
return 0.0;
else if (quality >= 90.0)
return (100.0 - quality) * 0.10;
else if (quality >= 30.0)
return 0.1 + (100.0 - quality) * 0.09;
else if (quality > 0.0)
return 15.0 + (59.0 * quality - 4350.0) * quality / 9000.0;
else
return 15.0;
}
```
|
|
2023-08-27 11:51:22
|
This is what FFmpeg uses
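A Python transcription of the function above, checking the continuity claim from its doc comment at each piece boundary (a sketch for verification, not FFmpeg code):

```python
def ffmpeg_quality_to_distance(q):
    """Transcription of FFmpeg's quality_to_distance shown above."""
    if q >= 100.0:
        return 0.0
    elif q >= 90.0:
        return (100.0 - q) * 0.10
    elif q >= 30.0:
        return 0.1 + (100.0 - q) * 0.09
    elif q > 0.0:
        return 15.0 + (59.0 * q - 4350.0) * q / 9000.0
    else:
        return 15.0

# the pieces meet at their boundaries, so the mapping is continuous
for q in (100.0, 90.0, 30.0):
    lo, hi = ffmpeg_quality_to_distance(q - 1e-7), ffmpeg_quality_to_distance(q)
    assert abs(lo - hi) < 1e-5
assert ffmpeg_quality_to_distance(0.0) == 15.0  # caps at the sane maximum distance
```

Unlike the cjxl formula, this one hits distance 0 exactly at q=100 with no jump, which is the difference visible in the graphs below q=100 and below q30.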
|
|
|
yoochan
|
2023-08-27 01:23:24
|
Damned! I'll compare the two
|
|
|
Traneptora
|
2023-08-27 06:29:15
|
tbf FFmpeg's relies on the -qscale feature
|
|
2023-08-27 06:29:29
|
it's not documented that it does this and it's strongly recommended that -distance be used instead
|
|
|
yoochan
|
2023-08-28 11:33:54
|
which gives the following graph (red for cjxl and blue for ffmpeg) https://www.desmos.com/calculator/budsvf2qkz. They are almost identical above q30 (the jpeg xl version crosses d=0 at q=101 instead of q=100 for the ffmpeg one) and really different below q30
|
|
2023-08-29 11:49:08
|
`BPP*pnorm` is the result given by benchmark_xl which seems closest to a "best quality / size tradeoff" score, but why is pnorm used instead of the ssimulacra2 score? And what is QABPP?
|
|
2023-08-29 11:52:48
|
QABPP is the same as BPP*pnorm but with the 4-norm?
|
|
|
|
veluca
|
2023-08-29 11:57:51
|
qabpp is pretty much `bpp * max(1, butteraugli max-norm)` IIRC
|
|
|
yoochan
|
|
Traneptora
|
|
yoochan
which gives the following graph (red for cjxl and blue for ffmpeg) https://www.desmos.com/calculator/budsvf2qkz. They are almost identical above q30 (the jpeg xl version cross d=0 at q=101 instead of q=100 for the ffmpeg one) and really different below q30
|
|
2023-08-30 12:46:33
|
if libjxl isn't mapping q=100 to d=0 that's probably a bug
|
|
|
yoochan
|
2023-08-30 12:47:43
|
maybe yes 😄 affine functions are tricky beasts
|
|
|
Traneptora
|
2023-08-30 12:48:02
|
even if it is mapping >=100 to 0, it's still a discontinuity
|
|
2023-08-30 12:48:35
|
when I chose the FFmpeg mapping, I tried to map q=0 to distance=15
|
|
2023-08-30 12:48:43
|
as distances above 15 aren't sane
|
|
2023-08-30 12:48:54
|
however, I strongly recommend against setting -qscale for ffmpeg at all
|
|
2023-08-30 12:48:59
|
and instead just use -distance
|
|
|
yoochan
|
2023-08-30 12:50:03
|
oki, you did the ffmpeg integration of jxl ?
|
|
2023-08-30 12:50:26
|
right now, I'm just playing with benchmark_xl
|
|
|
Traneptora
|
|
yoochan
oki, you did the ffmpeg integration of jxl ?
|
|
2023-08-30 05:29:54
|
that was me, yes
|
|
|
Jyrki Alakuijala
|
|
Traneptora
if libjxl isn't mapping q=100 to d=0 that's probably a bug
|
|
2023-09-08 09:05:26
|
guetzli goes to quality 110
|
|
2023-09-08 09:05:45
|
"110 is more than 100"
|
|
2023-09-08 09:06:24
|
https://www.youtube.com/watch?v=4xgx4k83zzc
|
|
|
yoochan
|
2023-09-11 08:00:14
|
I tried to shave some bytes off this image : https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2F0xda9546x6g61.png (for which webp outperform a bit jxl)
|
|
2023-09-11 08:00:32
|
```cjxl -d 0 -e 9 -I 100 -E 3 --container=0 0xda9546x6g61.png 0xda9546x6g61.jxl
JPEG XL encoder v0.9.0 4caa4209 [AVX2]
Encoding [Modular, lossless, effort: 9]
Compressed to 577 bytes (0.018 bpp).
512 x 512, 0.140 MP/s [0.14, 0.14], 1 reps, 4 threads.```
|
|
2023-09-11 08:01:56
|
(not bad, but a bit behind the 520 bytes of the previous lossless encoding attempt shown line 56 of the tab PixelArt of https://docs.google.com/spreadsheets/d/1ju4q1WkaXT7WoxZINmQpf4ElgMD2VMlqeDN2DuZ6yJ8/edit?pli=1#gid=0)
|
|
2023-09-11 08:03:58
|
So I wondered if it was better (since each art pixel is made of 8x8 = 64 image pixels) to compress the same image downscaled by a ratio of 8... but by default, it uses a container, and the result is bad
|
|
2023-09-11 08:04:16
|
```cjxl -d 0 -e 9 -I 100 -E 3 0xda9546x6g61_1px.png 0xda9546x6g61_1px.jxl
JPEG XL encoder v0.9.0 4caa4209 [AVX2]
Encoding [Container | Modular, lossless, effort: 9 | 208-byte Exif | 3426-byte XMP]
Compressed to 1363 bytes including container (2.662 bpp).
64 x 64, 0.054 MP/s [0.05, 0.05], 1 reps, 4 threads.```
|
|
2023-09-11 08:04:31
|
without the container it is better though:
|
|
2023-09-11 08:04:46
|
```cjxl -d 0 -e 9 -I 100 -E 3 --container=0 0xda9546x6g61_1px.png 0xda9546x6g61_1px.jxl
JPEG XL encoder v0.9.0 4caa4209 [AVX2]
Encoding [Modular, lossless, effort: 9]
Compressed to 463 bytes (0.904 bpp).
64 x 64, 0.043 MP/s [0.04, 0.04], 1 reps, 4 threads.```
|
|
2023-09-11 08:05:17
|
but I don't know if the jxl format could upscale the reduced image by a factor of 8 at decoding ?!
|
|
|
_wb_
|
2023-09-11 08:15:42
|
yes it can, see also https://github.com/libjxl/libjxl/pull/2571
|
|
|
yoochan
|
2023-09-11 08:30:49
|
nice ! I played a bit, is the container required to store all the relevant information ? ```cjxl -d 0 -e 9 -I 100 -E 3 --already_downsampled --resampling 8 --upsampling_mode 0 0xda9546x6g61_1px.png 0xda9546x6g61_1px.jxl
JPEG XL encoder v0.9.0 4caa4209 [AVX2]
Encoding [Container | Modular, lossless, effort: 9 | 208-byte Exif | 3426-byte XMP]
Compressed to 1784 bytes including container (3.484 bpp).
64 x 64, 0.049 MP/s [0.05, 0.05], 1 reps, 4 threads.```
|
|
2023-09-11 08:31:32
|
(the containerized version crashes on my viewer... and without the container, there is no upsampling, I'm digging)
|
|
2023-09-11 08:32:40
|
it works perfectly under gimp ! thank you !
|
|
2023-09-11 08:33:15
|
but I don't gain any byte compared to the original, compressed with cjxl at 577 bytes
|
|
2023-09-11 08:48:49
|
It seems to work without container, for 884 bytes
|
|
|
_wb_
|
2023-09-11 09:31:24
|
That Exif and XMP metadata is taking a lot of bytes
|
|
2023-09-11 09:32:15
|
Custom upsampling weights for doing 8x NN upsample do add too many bytes if the image is small like that
|
|
2023-09-11 09:33:05
|
It would be worth it if it's a larger sprite or say a full screenshot of an old game or something
|
|
2023-09-11 09:33:29
|
2x and 4x also have less signaling cost than 8x for the custom weights
|
|
|
Tirr
|
2023-09-11 10:06:50
|
maybe 8x NN upsampling can be done using modular LF frame and vardct frame with empty HF coeffs
|
|
|
_wb_
|
2023-09-11 11:58:21
|
Ah yes that would also be a way to do it
|
|
|
jonnyawsom3
|
|
yoochan
```cjxl -d 0 -e 9 -I 100 -E 3 --container=0 0xda9546x6g61_1px.png 0xda9546x6g61_1px.jxl
JPEG XL encoder v0.9.0 4caa4209 [AVX2]
Encoding [Modular, lossless, effort: 9]
Compressed to 463 bytes (0.904 bpp).
64 x 64, 0.043 MP/s [0.04, 0.04], 1 reps, 4 threads.```
|
|
2023-09-11 12:42:40
|
For downsized pixel art (around 256x256 or under) you can actually use `-e 10 --allow_expert_options` without the `-I` or `-E 3` to have it find the best settings automatically. Much bigger than that and you'll likely run out of RAM or it'll take too long to compress
179 bytes (Unfortunately wb is right about the upsampling bloating the size, it's 600 bytes then)
|
|
|
yoochan
|
|
For downsized pixel art (around 256x256 or under) you can actually use `-e 10 --allow_expert_options` without the `-I` or `-E 3` to have it find the best settings automatically. Much bigger than that and you'll likely run out of RAM or it'll take too long to compress
179 bytes (Unfortunately wb is right about the upsampling bloating the size, it's 600 bytes then)
|
|
2023-09-11 12:46:42
|
thank you ! with -e 10 I miss the knowledge of which settings were used to compress best ! but 179 bytes is neat !
|
|
2023-09-11 12:49:43
|
damn ! ```cjxl -d 0 -e 10 --allow_expert_options --container=0 0xda9546x6g61_1px.png 0xda9546x6g61_1px_e10.jxl
JPEG XL encoder v0.9.0 4caa4209 [AVX2]
Encoding [Modular, lossless, effort: 10]
Compressed to 433 bytes (0.846 bpp).
64 x 64, 0.001 MP/s [0.00, 0.00], 1 reps, 4 threads.``` it gives me 433 bytes... I'm jealous
|
|
2023-09-11 12:50:11
|
0xda9546x6g61_1px.png is the downscaled version (with 1px per pixel)
|
|
|
jonnyawsom3
|
2023-09-11 01:11:01
|
Huh, what's the filesize of your 1 pixel input image?
|
|
2023-09-11 01:14:11
|
Here's what I've got
|
|
|
yoochan
|
2023-09-11 01:22:02
|
weird, I reopened the downscaled png under gimp to rewrite it again (without the metadata or the color profile) and the new png is 350 bytes, compressed to ```cjxl -d 0 -e 10 --allow_expert_options --container=0 0xda9546x6g61_1px.png 0xda9546x6g61_1px_e10.jxl
JPEG XL encoder v0.9.0 4caa4209 [AVX2]
Encoding [Modular, lossless, effort: 10]
Compressed to 186 bytes (0.363 bpp).
64 x 64, 0.001 MP/s [0.00, 0.00], 1 reps, 4 threads.```
|
|
2023-09-11 01:22:39
|
|
|
2023-09-11 01:23:42
|
Not exactly what you got but much closer
|
|
2023-09-11 01:27:07
|
(the png I used just before was this one, 4.7kb of png with metadata, which compresses to 433 bytes in jxl)
|
|
|
jonnyawsom3
|
2023-09-11 01:42:41
|
The large one was at full 32 bitdepth and had a lot of GIMP metadata inside listing the original file, version, OS, etc.
The one near mine has the correct 4 bit palette, although it totalled 8 colors and mine 7, weirdly
|
|
|
yoochan
|
2023-09-11 01:44:29
|
😄
|
|
2023-09-11 01:44:59
|
is there an argument for jxl to discard ALL metadata ? except container=0 ?
|
|
2023-09-11 01:46:14
|
I note that I have to be careful with the original image...
|
|
|
jonnyawsom3
|
2023-09-11 01:47:49
|
`-x strip=exif` was added fairly recently, might work with `-x strip=all` but not sure, could try exif and xmp at the same time too
|
|
|
yoochan
|
|
jonnyawsom3
|
2023-09-11 01:50:21
|
I might be on a newer version than you, but container doesn't quite do what you'd expect either
```--container=0|1
0 = Avoid the container format unless it is needed (default)
1 = Force using the container format even if it is not needed.```
|
|
2023-09-11 01:50:43
|
It's either sometimes on, or always on
|
|
|
yoochan
|
2023-09-11 01:53:44
|
Yep, I noticed the subtleties of container... Indeed, it looks like there is no way to discard it at all times. I'm pulling the last version and compiling, then I'll test once more 🙂
|
|
|
Fraetor
|
2023-09-11 06:32:18
|
If you are just testing I've got a script that strips the container.
https://github.com/Fraetor/jxl_decode/blob/main/src/jxl-strip.py
|
|
|
monad
|
|
yoochan
thank you ! with -e 10 I miss the knowledge of which settings were used to compress best ! but 179 bytes is neat !
|
|
2023-09-12 02:24:03
|
Just use `-e 9 -I 0 -P 0 --patches 0` for pixel art. e10 is a terrible waste of cycles, it's unjustifiable.
|
|
|
jonnyawsom3
|
2023-09-12 02:32:24
|
`-I 0` isn't always best, it was a month or two ago so I don't recall specifics, but I was testing on pixel art and stepping up the tree percentage in steps of 5 to see what worked best, and it ended up being `-I 75`. 74 and 76 were even worse than just 0, but for some reason 75 took off about a quarter of the filesize as far as I recall
|
|
2023-09-12 02:34:02
|
And that's why I specifically said 256x256 or under, when you're in the realm of 'raw' pixel art it only takes a second even on `-e 10`, as soon as you hit anything at a regular size the time and compute increase exponentially
|
|
|
monad
|
2023-09-12 03:02:55
|
If it only takes a second, then you've wasted an entire second.
|
|
|
jonnyawsom3
|
2023-09-12 03:10:09
|
And saved many more if you plan to actually distribute the file
|
|
|
monad
|
2023-09-12 03:12:20
|
Have an example?
|
|
2023-09-12 03:14:54
|
I imagine high efforts are not touched in practical cases, e9 is insanely resource intensive itself.
|
|
2023-09-12 03:16:40
|
If you're distributing a lot of pixel art, better go with WebP.
|
|
|
jonnyawsom3
|
2023-09-12 03:50:52
|
I am curious how much the WebP palette ordering will improve JXL when it's ported over, it seems like that's the main reason JXL doesn't win every time
|
|
|
yoochan
|
2023-09-12 07:24:16
|
I'll have a look at your feedback and do some more tests on this pixelart sword 🙂
|
|
|
monad
|
2023-09-12 08:19:12
|
```
e10_phone_home.txt
B s
0xda9546x6g61.png 2503
(various PNG) 720
cjxl d0e9I100E3g3 497 0.33
cjxl d0e8I0P0g3 412 0.04
cjxl d0e10 388 690.27
cjxl d0e9I0P0g3 388 0.44
cwebp z2 374 0.00 *```
|
|
2023-09-12 08:22:03
|
I'm pretty sure e10 sends the file to Jon to encode by hand.
|
|
|
yoochan
|
2023-09-12 08:31:06
|
what is g3 ?
|
|
|
monad
|
2023-09-12 08:32:28
|
group size of 1024x1024
|
|
2023-09-12 08:36:16
|
much of the encoding context applies to groups so things can be parallelized. larger groups allow more context, possibly yielding smaller output
|
|
|
yoochan
|
2023-09-12 08:38:16
|
ok
|
|
2023-09-12 08:45:26
|
g2 should be enough then, given the fact the original pic is 512x512
|
|
|
monad
|
2023-09-12 08:47:22
|
yes, and sometimes g2 is simply better than g3, but as image dimensions increase that tends to happen less
|
|
|
yoochan
|
2023-09-12 08:52:58
|
I don't understand why -I 0 works better than -I 100... ```Higher values use
more encoder memory and can result in better compression```
|
|
|
monad
|
2023-09-12 08:58:34
|
I think in general the context modeling is implemented in a way which performs well on photographic content. Here, the point is to avoid a high-error/large tree and utilize other tools (LZ77).
|
|
|
yoochan
|
2023-09-12 09:01:58
|
this plot shows the resulting file size for `-d 0 -e 9 -I xxx -P 0 --patches 0 -g 2` for each values of xxx from 0 to 100
|
|
2023-09-12 09:02:31
|
the size is not very well correlated 😄
|
|
|
|
veluca
|
2023-09-12 09:53:17
|
that's heuristics for you 😛
|
|
|
yoochan
|
|
Tirr
maybe 8x NN upsampling can be done using modular LF frame and vardct frame with empty HF coeffs
|
|
2023-09-12 11:36:52
|
could this mode be forced with the existing options of cjxl ?
|
|
|
_wb_
|
2023-09-12 11:58:15
|
nope
|
|
2023-09-12 11:58:28
|
also not sure if it would be less signaling overhead than custom upsampling weights
|
|
|
yoochan
|
2023-09-12 01:14:14
|
IMHO, pixel art should be only 1 pixel per pixel, and only rendered upsampled in the display... but it's not what pixel art does, most of the time
|
|
|
jonnyawsom3
|
2023-09-12 01:48:59
|
1 pixel per pixel is perfect, NN upsampling beforehand is okay in most cases, but then occasionally you find NN upsampling which has either been resized incorrectly so it can't be downsampled to 1:1 again, or... They save it as a jpeg....
|
|
|
monad
|
2023-09-16 07:57:54
|
```
irresponsible_computing.txt
input count
1:1 pixel art 109
(optimized and stripped PNG)
mean
mean B user s wins best of
17755.60 .04 0 cjxl d0e7
* 17149.56 .00 4 cwebp z3
* 16748.95 .04 1 cwebp z8
16379.49 1.02 5 cwebp z9
* 15913.63 .24 11 cjxl d0e9P0I0g3patches0
* 14846.37 .63 9 cjxl d0e9E3I100g3
14414.88 .87 20 cjxl d0e9E3I100g3, d0e9P0I0g3patches0
* 14413.90 .85 18 cjxl d0e9E3I94g3, d0e9P0I0g3patches0
* 14383.66 6.95 61 all cjxl except e10
* 14332.60 8.05 74 all except e10
* 14154.25 501.16 55 cjxl d0e10
* 14116.77 509.21 109 all
'all' commands (wins):
cjxl --disable_output -d 0 -e 9 -E 3 -I 100 -g 3
cjxl --disable_output -d 0 -e 9 -E 3 -I 99 -g 3
cjxl --disable_output -d 0 -e 9 -E 3 -I 98 -g 3
cjxl --disable_output -d 0 -e 9 -E 3 -I 97 -g 3
cjxl --disable_output -d 0 -e 9 -E 3 -I 96 -g 3
cjxl --disable_output -d 0 -e 9 -E 3 -I 95 -g 3
cjxl --disable_output -d 0 -e 9 -E 3 -I 94 -g 3
cjxl --disable_output -d 0 -e 9 -E 3 -I 93 -g 3
cjxl --disable_output -d 0 -e 9 -E 3 -I 92 -g 3
cjxl --disable_output -d 0 -e 9 -E 3 -I 91 -g 3
cjxl --disable_output -d 0 -e 9 -E 3 -I 90 -g 3
cjxl --disable_output -d 0 -e 9 -P 0 -I 0 -g 3 --patches 0
cjxl --disable_output --allow_expert_options -d 0 -e 10
cwebp -mt -z 9
cwebp -mt -z 8
cwebp -mt -z 7
cwebp -mt -z 6
cwebp -mt -z 5
cwebp -mt -z 4
cwebp -mt -z 3
cwebp -mt -z 2
reference (no wins):
cjxl --disable_output -d 0 -e 7
environment:
cjxl v0.9.0 97e5ab09 (clang 15.0.7 x86_64-pc-linux-gnu)
cwebp 1.2.4 (1.2.4-0.1ubuntu0.23.04.2 amd64)
i5-13600K, 16GBx2 RAM, SSD
timings were median of 3 or 5 runs, 1 run for e10```
|
|
2023-09-16 08:04:24
|
I did it to free the rest of you.
|
|
|
jonnyawsom3
|
2023-09-16 10:46:43
|
The numbers don't seem to add up, 123 wins total instead of the 109 listed as all (Ignoring the 'all except' entries)
|
|
|
monad
|
2023-09-16 06:29:23
|
That can happen when summing random numbers. Notice you don't have individual results for every command, and the results for some commands are represented multiple times.
|
|
2023-09-16 06:30:44
|
Differently, if you consider the wins for **cjxl d0e10** and **all except e10** sum to 129, you'll find there are 20 cases where both strategies found the same solution; thus, e10 found 35 better solutions than everything else, and everything else found 54 better solutions than e10.
|
|
2023-09-16 06:34:30
|
Even just the cjxl strategies found more wins than e10. One might speculate that a curated selection of cjxl strategies would outperform e10 quite comfortably.
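That win accounting can be checked directly with inclusion-exclusion on the numbers from the table above:

```python
files = 109      # total 1:1 pixel art images benchmarked
e10_wins = 55    # 'best of' count for cjxl d0e10
rest_wins = 74   # 'best of' count for 'all except e10'

# files where both sides found the same best size are counted in both win columns
ties = e10_wins + rest_wins - files
assert ties == 20
assert e10_wins - ties == 35   # e10 strictly better than everything else
assert rest_wins - ties == 54  # something other than e10 strictly better
```

This is only arithmetic on the reported counts; the per-file results aren't in the table, so the tie count is inferred, not observed.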
|
|
|
jonnyawsom3
|
2023-09-16 08:36:24
|
I see what you mean now, curate a selection of e9 parameters to run successively and select the best result, rather than brute forcing all options with e10
|
|
|
yoochan
|
|
monad
Even just the cjxl strategies found more wins than e10. One might speculate that a curated selection of cjxl strategies would outperform e10 quite comfortably.
|
|
2023-09-16 08:44:45
|
Agreed. Do you think e10 can sometimes activate some options which are not directly available via the command line in e9?
|
|
|
jonnyawsom3
|
2023-09-16 08:47:16
|
I doubt it, even e10 doesn't brute force every option, it's why a few want an e11 for maximum savings for mass distribution
|
|
|
monad
|
2023-09-16 08:49:52
|
I don't know, I glanced at the code when the e10 commit was made but didn't understand it.
|
|
|
I doubt it, even e10 doesn't brute force every option, it's why a few want an e11 for maximum savings for mass distribution
|
|
2023-09-16 08:50:22
|
Nobody wants that.
|
|
|
yoochan
|
2023-09-16 08:50:41
|
We'll ask the chef
|
|
|
lonjil
|
2023-09-16 08:53:50
|
if an option that tries literally everything is ever added, it should be e99, not something like e11
|
|
|
jonnyawsom3
|
2023-09-16 09:02:52
|
Surely it should be e42
|
|
|
monad
|
2023-09-16 10:26:32
|
When numbers fail to inspire
|
|
|
yoochan
|
2023-09-17 10:07:29
|
I'm convinced a limited number of settings should be searched for, depending on the type of source: screenshots, manga, pixel art, etc. In every case, an experienced jxler pulls out a string of options that works well. We should just aggregate this knowledge
|
|
|
Demiurge
|
2023-09-19 07:51:12
|
According to my calculations, it is time for JXL to achieve world domination.
|
|
|
spider-mario
|
2023-09-19 07:56:21
|
16 September 2023: pick up car from garage
18 September 2023: world domination
19 September 2023: buy potatoes for raclette
|
|
|
Traneptora
|
2023-09-20 01:46:16
|
unlimited breadsticks
|
|
|
_wb_
|
2023-09-20 05:24:18
|
Mmm raclette yummy
|
|
|
yoochan
|
2023-09-20 02:56:39
|
raclette in september ! are you crazy ! what will you eat in the winter !?
|
|
|
_wb_
|
2023-09-20 05:17:01
|
Where I live, it is winter from September to June.
|
|
|
spider-mario
|
|
yoochan
raclette in september ! are you crazy ! what will you eat in the winter !?
|
|
2023-09-20 08:27:02
|
also raclette
|
|
2023-09-20 08:27:22
|
I’ve had raclette in August (2018, I believe)
|
|
2023-09-20 08:27:34
|
why not, right?
|
|
|
Nova Aurora
|
2023-09-20 08:45:38
|
What is the traditional cheese for raclette?
|
|
|
spider-mario
|
2023-09-20 09:04:02
|
raclette cheese https://www.raclette-suisse.ch/en/raclette/our-cheese https://www.raclette-du-valais.ch/fr-ch/raclette-du-valais/fabrication https://en.wikipedia.org/wiki/Raclette_du_Valais
|
|
|
monad
|
|
diskorduser
|
2023-10-15 04:17:55
|
Are you using avx2 cpu
|
|
|
jonnyawsom3
|
2023-10-15 05:36:16
|
> JPEG XL encoder v0.8.2 954b4607 [AVX2,SSE4,SSSE3,SSE2]
Looks like it (Unless I've been misunderstanding that printout the entire time)
|
|
|
_wb_
|
2023-10-15 07:50:20
|
Lossless above effort 1 is not going to benefit much from avx2...
|
|
|
diskorduser
|
|
> JPEG XL encoder v0.8.2 954b4607 [AVX2,SSE4,SSSE3,SSE2]
Looks like it (Unless I've been misunderstanding that printout the entire time)
|
|
2023-10-16 01:41:03
|
Oh I thought it was showing the instruction sets it would support based on build options. 😄
|
|
2023-10-16 01:43:00
|
Do you use avx2 optimized Firefox or the regular one?
|
|
|
monad
|
2023-10-22 08:59:03
|
|
|
2023-10-22 09:12:29
|
could be optimized more, but it took me long enough to compile this as-is
|
|
2023-10-23 08:51:04
|
ok, optimized a bit
|
|
|
bonnibel
|
2023-10-31 05:23:48
|
Ran a few different Butteraugli norms on the CID22 validation set
Butteraugli scores obtained using butteraugli_main.exe from GH Actions artefacts of a few days ago, correlations with MOS scores calculated using SciPy's implementation
```
Norm Kendall Spearman Pearson
6 -0.5894 -0.7729 -0.7044
5 -0.5966 -0.7795 -0.7128
4 -0.6057 -0.7880 -0.7260
3 -0.6162 -0.7985 -0.7472
2.5 -0.6218 -0.8048 -0.7623
2 -0.6273 -0.8120 -0.7809
1.5 -0.6328 -0.8211 -0.8015
1.25 -0.6360 -0.8269 -0.8111
1 -0.6399 -0.8335 -0.8182
4/5 -0.6419 -0.8373 -0.8203
3/4 -0.6419 -0.8376 -0.8201
2/3 -0.6406 -0.8368 -0.8188
1/2 -0.6327 -0.8297 -0.8125
```
Norms tested: the usual 2-6 range, plus half and quarter those, plus the 4/5-norm and 2/3-norm, because I saw the 3/4-norm scored well so I picked 2 nice fractions slightly above and below it
|
|
|
_wb_
|
2023-10-31 06:33:30
|
interesting that 0.75-norm seems to be optimal here — I should check if that's also true for the full set
|
|
|
bonnibel
|
2023-10-31 10:01:29
|
I did note the 3-norm result is different from the one listed in the validation set table on github (mine is lower by -0.060, -0.020, -0.022). Not sure whether that's due to a change in Butteraugli, ~~reduced precision (the command line tool only prints 6 digits after the dot for pnorm)~~ (tested w patched tool to print full precision, not it), or a difference in SciPy's implementation of the correlation metrics
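fwiw the three correlations are easy to sanity-check without SciPy; minimal pure-python versions (no tie handling, so only valid for distinct scores; SciPy's kendalltau/spearmanr/pearsonr agree on tie-free data):
```python
import math
from itertools import combinations

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rank(v):
    # assign ranks 1..n; assumes distinct values (no tie averaging)
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman(x, y):
    # Spearman = Pearson on the ranks
    return pearson(rank(x), rank(y))

def kendall(x, y):
    # tau-a: (concordant - discordant) / all pairs; assumes no ties
    n = len(x)
    s = sum(1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
            for i, j in combinations(range(n), 2))
    return s / (n * (n - 1) / 2)
```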
|
|
2023-11-01 04:07:20
|
Some silly norms
```
Norm Kendall Spearman Pearson
π -0.6147 -0.7969 -0.7436
π/2 -0.6319 -0.8196 -0.7986
π/4 -0.6419 -0.8374 -0.8203
e -0.6193 -0.8020 -0.7553
e/2 -0.6345 -0.8243 -0.8071
e/4 -0.6409 -0.8370 -0.8191
φ -0.6313 -0.8187 -0.7966
φ/2 -0.6419 -0.8372 -0.8203
φ/4 -0.6240 -0.8213 -0.8060
```
|
|
|
|
veluca
|
2023-11-01 12:41:50
|
"3-norm" likely means a mix of 3, 6 and 9 norm
|
|
|
bonnibel
|
2023-11-01 01:25:56
|
yes, the listed norm is the argument to --pnorm, and from looking at the code butteraugli uses an even mix of pnorm, 2pnorm, and 4pnorm
|
|
2023-11-01 01:27:49
|
that's why i picked pnorm arguments ½ and ¼ the usual ones
|
|
2023-11-01 01:29:53
|
3-norm = 3 norm + 6 norm + 12 norm
1.5-norm = 1.5 norm + 3 norm + 6 norm
0.75-norm = 0.75 norm + 1.5 norm + 3 norm
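in code, roughly (the equal weighting is my assumption of what "even mix" means, not butteraugli's exact arithmetic):
```python
def mean_pnorm(xs, p):
    # normalized p-norm: (mean |x|^p)^(1/p)
    return (sum(abs(x) ** p for x in xs) / len(xs)) ** (1 / p)

def blended_pnorm(errors, p):
    # even mix of the p-, 2p- and 4p-norms, as described above;
    # the plain average is an assumption, not butteraugli's exact code
    return sum(mean_pnorm(errors, q) for q in (p, 2 * p, 4 * p)) / 3
```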
|
|
|
_wb_
|
2023-11-01 03:27:39
|
Ssimulacra2 used a mix of 1-norm and 4-norm
|
|
|
damian101
|
2023-11-01 03:38:34
|
4-norm is closer to max-norm?
|
|
2023-11-01 03:38:45
|
don't quite understand what this whole thing means mathematically...
|
|
2023-11-01 03:39:26
|
Wikipedia link would be appreciated <:dorime:1075610532620541952> so I can learn <:frogstudy:1077511463264071701>
|
|
2023-11-01 03:40:12
|
found it
https://en.wikipedia.org/wiki/Norm_(mathematics)
<:frogstudy:1077511463264071701>
|
|
|
_wb_
|
2023-11-01 10:25:17
|
higher norm is closer to max, yes.
|
|
2023-11-01 10:25:38
|
1-norm is the same as average, 2-norm is root mean square
|
|
|
Traneptora
|
|
don't quite understand what this whole thing means mathematically...
|
|
2023-11-01 10:32:37
|
basically, you take the components of a vector and take their absolute values. Then you raise each to the power of `p`, add up the results, and take the sum to the power of `1/p`, which is the pth root
|
|
2023-11-01 10:32:57
|
If p is 2, then you have the standard distance formula
|
|
2023-11-01 10:33:06
|
but 2 is not always ideal
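as a sketch, both flavours: the plain vector p-norm (which sums), and the normalized one mentioned above, where the 1-norm is the average and the 2-norm is root mean square:
```python
def pnorm(xs, p):
    # plain vector p-norm: (sum |x|^p)^(1/p); p=2 is the standard distance
    return sum(abs(x) ** p for x in xs) ** (1 / p)

def mean_pnorm(xs, p):
    # normalized flavour: (mean |x|^p)^(1/p); p=1 is the average of |x|,
    # p=2 is root mean square
    return (sum(abs(x) ** p for x in xs) / len(xs)) ** (1 / p)
```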
|
|
|
bonnibel
|
|
_wb_
interesting that 0.75-norm seems to be optimal here — I should check if that's also true for the full set
|
|
2023-11-02 12:38:51
|
run a big simplex search on it to find the optimal norm ;P
|
|
|
|
veluca
|
2023-11-02 12:47:24
|
don't need simplex search, it's 1d 😛
|
|
|
bonnibel
|
2023-11-02 12:48:31
|
you could determine the 3 norms to use rather than just using p 2p and 4p
|
|
2023-11-02 12:48:47
|
then it'd be 3d
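for the 1d case nothing fancy is needed; e.g. a golden-section search, sketched here on a toy objective rather than the actual correlation pipeline:
```python
import math

def golden_section_min(f, lo, hi, tol=1e-6):
    """1-D derivative-free minimization: shrink [lo, hi] by the golden ratio."""
    invphi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):      # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2
```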
|
|
|
|
veluca
|
|
bonnibel
|
2023-11-09 02:52:37
|
here's a few using actually a single norm (no blending of 3 norms)
```
Norm Kendall Spearman Pearson
1.833 -0.6435 -0.8388 -0.8213 # Found via search
2 -0.6430 -0.8378 -0.8210
1.5 -0.6416 -0.8378 -0.8195
2.5 -0.6393 -0.8318 -0.8163
1.25 -0.6367 -0.8335 -0.8157
3 -0.6354 -0.8250 -0.8076
1 -0.6283 -0.8255 -0.8094
4 -0.6299 -0.8148 -0.7849
0.75 -0.6163 -0.8134 -0.7999
5 -0.6257 -0.8078 -0.7624
0.5 -0.6012 -0.7969 -0.7860
6 -0.6211 -0.8015 -0.7433
0.25 -0.5830 -0.7754 -0.7647
```
|
|
|
monad
|
2023-11-21 10:21:55
|
|
|
2023-11-21 10:25:01
|
Alas, I have not found how to beat e10 on digital paintings.
|
|
2023-11-21 10:48:27
|
|
|
|
yoochan
|
2023-11-21 12:38:48
|
do you have a tool (or a methodology) to do subjective assessment (especially to compare 2 settings, not 1 encoded vs the original)
|
|
|
Demiurge
|
2023-11-22 06:35:24
|
You're asking if there's a double blind ABX test program for images
|
|
2023-11-22 06:35:47
|
maybe
|
|
|
yoochan
|
2023-11-22 07:44:22
|
yep, kind of 😄 except ABX attempts to assess whether you can tell the original and the encoded version apart (when they are almost impossible to distinguish); here I would like to compare A (encoded with one setting) vs B (encoded with another setting), with the reference R available
|
|
|
|
veluca
|
2023-11-22 08:04:55
|
I do have some subjective evaluation software, internal only unfortunately ☹️
|
|
|
yoochan
|
2023-11-22 08:48:22
|
I'll do some quick n dirty stuff for my needs 🙂 no worries
|
|
|
spider-mario
|
|
yoochan
yep, kind of 😄 except ABX attempts to assess whether you can tell the original and the encoded version apart (when they are almost impossible to distinguish); here I would like to compare A (encoded with one setting) vs B (encoded with another setting), with the reference R available
|
|
2023-11-22 09:56:18
|
for quick viewing, libjxl’s “compare_images” can do that
|
|
2023-11-22 09:56:37
|
```
$ compare_images a.png b.png ref.png
```
|
|
2023-11-22 09:57:27
|
it will split the view into `a.png` on the left, `b.png` on the right, and `ref.png` in the middle, following the mouse cursor, with configurable width for the middle
|
|
2023-11-22 09:57:44
|
one can also use the arrow keys to display just one of the images at a time
|
|
2023-11-22 09:57:59
|
(left arrow: `a.png`, right arrow: `b.png`, up or down arrow: `ref.png`)
|
|
2023-11-22 09:58:13
|
so you can quickly switch between all three of them
|
|
2023-11-22 10:00:25
|
to have that tool, libjxl must be built with `cmake -DJPEGXL_ENABLE_VIEWERS=ON`, and with Qt6 available
|
|
2023-11-22 10:01:36
|
we haven’t quite made it compatible with macOS, though (the most straightforward way to do that would restrict colour gamut to sRGB)
|
|
|
yoochan
|
2023-11-22 10:16:36
|
thank you for the info, I'll recompile with the flag to test this
|
|
|
|
afed
|
2023-11-22 10:02:34
|
`8192x5456 image, 134086673 bytes`
```md
# clang [cjxl e1 lossless MT 8T]
134,086,673 -> 60,600,350: real 513 mb/s (0.249 sec) = 175%. ram 425332 KB, vmem 586100 KB
# clang [cjxl e1 lossless]
134,086,673 -> 60,600,350: real 303 mb/s (0.422 sec) = 92%. ram 424580 KB, vmem 585744 KB
# clang [cjxl e1 streaming lossless MT 8T]
357,838,336 -> 60,601,114: real 179 mb/s (1.906 sec) = 52%. ram 7836 KB, vmem 9972 KB
# clang [cjxl e1 streaming lossless]
357,838,336 -> 60,601,114: real 165 mb/s (2.063 sec) = 77%. ram 6348 KB, vmem 7836 KB
# Halic 0.6.4
134,086,673 -> 42,927,166: real 199 mb/s (0.640 sec) = 87%. ram 10840 KB, vmem 8236 KB
# Halic 0.7.1
134,086,673 -> 42,934,087: real 234 mb/s (0.546 sec) = 82%. ram 18900 KB, vmem 17356 KB
# Halic 0.7.1 ST
134,086,673 -> 42,934,087: real 215 mb/s (0.593 sec) = 76%. ram 181564 KB, vmem 419452 KB
# Halic 0.7.1 MT
134,086,673 -> 42,934,087: real 747 mb/s (0.171 sec) = 566%. ram 185896 KB, vmem 524380 KB
# QLIC2
134,086,673 -> 55,560,218: real 136 mb/s (0.939 sec) = 81%. ram 310036 KB, vmem 307192 KB
# KVICK
134,086,673 -> 50,718,469: real 95 mb/s (1.343 sec) = 86%. ram 184668 KB, vmem 289828 KB
# QIC
134,086,673 -> 59,306,145: real 221 mb/s (0.578 sec) = 80%. ram 235932 KB, vmem 329008 KB```
|
|
|
|
veluca
|
2023-11-22 11:17:31
|
is the % CPU time over real time?
|
|
|
|
afed
|
2023-11-22 11:34:54
|
basically only the total time matters and is correct; the percentages can be ignored
|
|
|
|
veluca
|
2023-11-22 11:43:27
|
regardless this definitely needs some extra profiling
|
|
|
|
afed
|
2023-11-22 11:59:13
|
```[cjxl e1 lossless]
Compressed to 60600.1 kB (10.847 bpp).
8192 x 5456, 145.595 MP/s [145.59, 145.59], 1 reps, 0 threads.
[cjxl e1 streaming lossless]
Compressed to 60601.0 kB (10.847 bpp).
8192 x 5456, 21.669 MP/s [21.67, 21.67], 1 reps, 0 threads.```
|
|
|
|
veluca
|
2023-11-23 12:09:07
|
what disk are you reading data from?
|
|
|
|
afed
|
2023-11-23 12:15:24
|
NVMe SSD ~3 GB/s
|
|
|
|
veluca
|
2023-11-23 06:57:52
|
I saw the same slowdown too
|
|
2023-11-23 06:58:00
|
I'll try to figure it out tomorrow
|
|
|
|
afed
|
2023-12-13 09:58:25
|
```[cjxl e1 lossless]
ram 424304 KB, vmem 585720 KB
8192 x 5456, 141.233 MP/s [141.23, 141.23], 1 reps, 0 threads.
[cjxl e1 streaming lossless]
ram 135896 KB, vmem 7980 KB
8192 x 5456, 143.065 MP/s [143.07, 143.07], 1 reps, 0 threads.
[cjxl e1 lossless 8t]
ram 425024 KB, vmem 586168 KB
8192 x 5456, 364.837 MP/s [364.84, 364.84], 1 reps, 8 threads.
[cjxl e1 streaming lossless 8t]
ram 136144 KB, vmem 8268 KB
8192 x 5456, 398.885 MP/s [398.89, 398.89], 1 reps, 8 threads.```
|
|
2023-12-13 10:17:47
|
**500 reps 1t**
```[cjxl e1 lossless]
8192 x 5456, geomean: 156.968 MP/s [135.47, 161.38], 500 reps, 0 threads.
[cjxl e1 streaming lossless]
8192 x 5456, geomean: 180.074 MP/s [141.18, 183.05], 500 reps, 0 threads.
```
|
|
|
Traneptora
|
2023-12-15 12:24:55
|
136 MB of ram is still pretty high
|
|
2023-12-15 12:25:28
|
Though if you need ultra low hydrium still works
|
|
|
|
afed
|
2023-12-15 12:40:11
|
in the previous version it was 6-8 MB (but slower), and that's for lossless
https://discord.com/channels/794206087879852103/803645746661425173/1177006237913718944
or does hydrium already support lossless?
i wonder if now (with mmap changes) it will work faster for the smallest blocks
|
|
|
|
veluca
|
|
Traneptora
136 MB of ram is still pretty high
|
|
2023-12-15 07:51:52
|
meh, it's memory mapped memory, it's not "true" memory usage
|
|
2023-12-15 07:54:08
|
(input file is 130mb, afaiu - encoder memory usage excluding the mmapped input is not affected by this change)
|
|
|
Traneptora
|
|
afed
in the previous version it was 6-8 MB (but slower), and that's for lossless
https://discord.com/channels/794206087879852103/803645746661425173/1177006237913718944
or does hydrium already support lossless?
i wonder if now (with mmap changes) it will work faster for the smallest blocks
|
|
2023-12-15 04:38:57
|
hydrium's VarDCT and lossy only atm
|
|
|
monad
|
|
yoochan
I'm convinced a limited number of settings should be searched for, depending on the type of source: screenshots, manga, pixel art, etc. In every case, an experienced jxler pulls out a string of options that works well. We should just aggregate this knowledge
|
|
2023-12-19 01:08:00
|
So, depending on content, you can outperform d0e10 at 90-330x efficiency running a few commands.
|
|
2023-12-19 01:09:32
|
|
|
|
|
veluca
|
2023-12-19 04:17:11
|
mhhh weird, e10 should explicitly try those flags...
|
|
|
jonnyawsom3
|
2023-12-19 04:38:58
|
I still love how this came from me trying to defend e10's existence for low res pixel art
|
|
|
yoochan
|
|
monad
|
|
2023-12-19 08:52:14
|
how do you get this detailed output?
|
|
|
monad
|
2023-12-19 05:44:11
|
Please elaborate, I'm not sure what you're asking about specifically.
|
|
|
yoochan
|
2023-12-20 08:31:00
|
I was wondering about this tool which seems to measure the size and bpp of different cjxl options and plot such a beautiful benchmark
|
|
|
monad
|
2023-12-20 09:38:30
|
It's some glue code I wrote, interfacing with `cjxl`, `time` and `identify`. That's why it has that stupid mem bpp column giving the max memory allocated divided by total pixels for all images. <:YEP:808828808127971399>
|
|
|
_wb_
|
2023-12-21 07:24:02
|
this type of investigation would be useful to improve the speed/density curve for e2-e9 by setting defaults differently — but you need a representative corpus for that, and I still don't really know how to make a representative test corpus for lossless image compression...
|
|
|
monad
|
2023-12-26 09:57:21
|
RE: https://discord.com/channels/794206087879852103/848189884614705192/1188264295323156511 🦐🌬️
|
|
|
spider-mario
|
2023-12-26 10:05:40
|
what’s (u) vs. (r)?
|
|
|
|
veluca
|
2023-12-26 10:11:38
|
user/real, I guess
|
|
|
monad
|
|
Jyrki Alakuijala
|
|
veluca
"3-norm" likely means a mix of 3, 6 and 9 norm
|
|
2024-01-03 10:40:03
|
I'm guilty of this (3, 6, and 12 norm IIRC) -- it seemed to me that a mix of norms worked better than one, and it felt too complicated to be exposed to usual users
|
|
|
|
afed
|
2024-01-06 10:43:17
|
unsplash png
sqz
progressive lossless jxl (center-first, -p, e9)
|
|
2024-01-06 10:44:26
|
truncated to 272557
sqz and jxl
|
|
2024-01-06 11:10:37
|
<@179701849576833024> <@794205442175402004> <@532010383041363969>
maybe this codec will be interesting; the author welcomes any suggestions and possible improvements (if someone has free time and interest in this, maybe something useful for research)
also, xyb may have some advantages over Oklab (or logl1) for faster/better compression at the same subjective quality?
or maybe something can be done better for lossless with the same concept of scalability?
<https://encode.su/threads/4183-SQZ-Low-complexity-scalable-lossless-and-lossy-image-compression-library>
I find the results of this experiment pretty interesting for a fairly simple implementation with this much flexibility, and it's open source (MIT license)
<https://github.com/MarcioPais/SQZ>
|
|
2024-01-06 11:22:43
|
normal compression for jpeg (jpegli) and vardct jxl (e8) and truncated for sqz (target size ~172557 bytes)
|
|
2024-01-06 11:22:55
|
sqz is worse than jxl subjectively, but not by much, given the infinite byte by byte scaling and lossless-lossy concept
|
|
|
jonnyawsom3
|
2024-01-06 01:33:29
|
I know progressive lossless JXL currently has a massive size penalty (worse at higher efforts, if I recall), so if that weren't a factor I think the results would be a lot closer
|
|
|
|
afed
|
2024-01-06 01:47:28
|
but then these are very different contexts, and the second comparison is normal lossy jxl
if there are no requirements for progressive compression, then different formats come into play; from the same author there is a non-progressive lossless codec with very strong compression (but not open source)
|
|
|
|
veluca
|
2024-01-06 02:15:28
|
I wonder how it compares to just a lossy vardct frame followed by a lossless frame
|
|
|
|
afed
|
2024-01-06 02:19:02
|
for photos lossless sqz has almost the same sizes as lossless jxl (e7)
but worse for non-photos
though I need more tests
|
|
2024-01-06 02:22:42
|
maybe i also need to compare fuif as lossless-lossy
or is jxl lossy modular the same concept but improved?
|
|
|
|
veluca
|
2024-01-06 03:27:07
|
it's at least very similar
|
|
|
_wb_
|
2024-01-06 04:14:08
|
Note that progressive lossless is not something we have been seriously trying to make a great encoder for. We have been assuming that usually when you want progressive, you also want lossy. It is possible that better compression is possible.
|
|
|
yoochan
|
2024-01-06 05:25:24
|
Interesting to read the rationale behind the design choice. For me progressiveness is also a way to display thumbnail from partially loaded files, and in this case, lossless files could benefit from this. Yet perfect progressivity is not required for this usecase
|
|
|
lonjil
|
2024-01-06 05:33:29
|
I use lossless files whenever I have images that compress well using lossless algorithms, which definitely isn't exclusive with getting lower quality previews while on a slow connection or something.
|
|
2024-01-06 05:34:33
|
If the design of JXL doesn't preclude efficient progressive lossless files, maybe I'll try to look into it. Are there any modular encoders other than the one in libjxl?
|
|
|
|
afed
|
2024-01-06 05:55:06
|
as a web format or for some previews, using just some of the stages is completely enough
but a lossless format which can be encoded only once and then used as lossy, truncated to the needed compression level, is also interesting
while still having decent compression and scalable quality (up to lossless from the same file)
e.g. j2k was used as a texture format in some games because of its flexible quality scaling (although not anymore, because there are more specialized formats for gpus)
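and the "encode once, serve any size" part is mechanically trivial for a scalable bitstream: keeping a prefix is the whole operation (sketch):
```python
def truncate_stream(src: str, dst: str, target_bytes: int) -> int:
    """Keep only the first target_bytes of a scalable bitstream (e.g. an SQZ
    or progressive file); a conforming decoder reconstructs a lower-quality
    image from the prefix. Returns the number of bytes written."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        data = fin.read(target_bytes)
        fout.write(data)
    return len(data)
```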
|
|
2024-01-06 06:21:32
|
i think the problem with this type of scalability is the uniform quality across the entire image,
when other normal codecs can prioritize the perceptually more important areas and give them more bits
|
|
2024-01-07 03:30:14
|
at such low bitrates, which jxl is not optimized for, sqz performs better
|
|
2024-01-07 03:30:30
|
|
|
|
yoochan
|
2024-01-07 03:44:05
|
what does ssimulacra says ?
|
|
2024-01-07 03:44:34
|
is your assessment based on fidelity or appeal ? https://cloudinary.com/blog/what_to_focus_on_in_image_compression_fidelity_or_appeal
|
|
|
username
|
2024-01-07 03:58:36
|
with the comparison posted in [here](https://discord.com/channels/794206087879852103/803645746661425173/1193577411267272834) it's obvious to see that sqz does better than jxl just by looking at the images from any viewing distance; however, this kinda makes sense because libjxl is known to currently not perform that well at low bitrates
|
|
|
|
afed
|
2024-01-07 04:04:35
|
i've also tested at very low bitrates and sqz mostly wins there, but basically jxl below d4 is not very good in general in the current implementation
low bitrates are mostly avif's field
|
|
2024-01-07 04:08:39
|
sqz is similar to other wavelet codecs and has smoother quality degradation at very low bitrates
more appealing quality at very low bitrates is not that useful a feature for normal codecs, but for progressive quality scaling it makes sense
|
|
|
fab
|
|
afed
|
|
2024-01-07 04:17:48
|
Great like 0.293 14,41 0,256 0,239 19.24 improvements
|
|
|
afed
i've also tested at very low bitrates and sqz mostly wins there, but basically jxl below d4 is not very good in general in the current implementation
low bitrates are mostly avif's field
|
|
2024-01-07 04:18:46
|
Yes, I agree with every single thing you commented.
|
|
|
Traneptora
|
|
yoochan
is your assessment based on fidelity or appeal ? https://cloudinary.com/blog/what_to_focus_on_in_image_compression_fidelity_or_appeal
|
|
2024-01-07 05:06:13
|
Fidelity is how close it matches the original
|
|
2024-01-07 05:06:31
|
Appeal can be cheated by increasing brightness or saturation
|
|
2024-01-07 05:06:54
|
which causes people to say something "looks better"
|
|
|
jonnyawsom3
|
2024-01-07 06:59:05
|
JXL will always be at a disadvantage compared to video formats at low bitrates, although eventually the larger VarDCT block sizes and overlapping blocks could help close the gap
|
|
|
Jyrki Alakuijala
|
2024-01-08 07:45:20
|
I have more encoding ideas to improve jpeg xl at lower distance, but likely will not be magical -- I expect these to surface in mid 2024
|
|
2024-01-08 07:45:41
|
perhaps 5-10 % improvement
|
|
2024-01-08 07:46:29
|
it is to have a less heuristic approach to edge-preserving smoothing, a more quantitative/disciplined approach
|
|
|
fab
|
2024-01-11 06:34:11
|
Jyrki
|
|
2024-01-11 06:52:16
|
This is HD now
|
|