|
CrushedAsian255
|
2024-08-27 01:16:28
|
I prefer find's ability to recursively find
|
|
2024-08-27 01:16:40
|
Although I could probably pipe find into parallel
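A minimal sketch of that pipe, assuming GNU find and GNU parallel; the directory and cjxl flags are placeholders:
```
# Recurse with find, convert in parallel with cjxl.
# -print0 / -0 keep awkward filenames intact; {.} is the input path minus its extension.
find /photos -type f -iname '*.png' -print0 |
  parallel -0 cjxl -d 0 -e 9 {} {.}.jxl
```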
|
|
|
TheBigBadBoy - πΈπ
|
|
CrushedAsian255
|
2024-08-27 01:16:56
|
Parallel's --bar is kind of enticing for me tbh
|
|
|
TheBigBadBoy - πΈπ
|
2024-08-27 01:17:40
|
https://cdn.discordapp.com/emojis/806636752952754186.gif?size=48&quality=lossless&name=CatYes
|
|
|
KKT
|
|
TheBigBadBoy - πΈπ
`parallel cjxl --num_threads=0 -d 0 -e 9 {} {.}.jxl ::: /dir/*.png`
my beloved <:FeelsAmazingMan:808826295768449054>
|
|
2024-08-27 06:47:47
|
Fun! parallel's brew formula on the Mac seems busted though.
|
|
|
CrushedAsian255
|
|
KKT
Fun! parallel's brew formula on the Mac seems busted though.
|
|
2024-08-27 09:04:34
|
Works fine for me
|
|
|
KKT
|
|
CrushedAsian255
Works fine for me
|
|
2024-08-27 09:35:39
|
That's weird. Just tried again and it worked.
|
|
|
CrushedAsian255
|
|
KKT
That's weird. Just tried again and it worked.
|
|
2024-08-27 09:39:24
|
Internet issues?
|
|
|
TheBigBadBoy - πΈπ
|
2024-08-28 09:44:20
|
[Skill Issue sticker](https://media.discordapp.net/stickers/1052898883971338291.png?size=160&name=Skill+Issue)
|
|
2024-08-28 09:44:29
|
<:KekDog:805390049033191445>
|
|
|
DZgas Π
|
2024-08-28 04:27:18
|
cjxl -g 3 -e 10 -I 100 --patches 1 -q 100 <:JXL:805850130203934781> lossless bruteforce <:JPEG_XL:805860709039865937>
|
|
|
|
Just me
|
2024-09-18 10:05:30
|
`q` should be rational because medium-range cameras are 10+ Mpixels...
|
|
|
monad
|
2024-09-18 10:40:22
|
my position on this should be obvious
|
|
|
jonnyawsom3
|
2024-09-19 09:13:24
|
I don't understand what that means
|
|
|
Demiurge
|
2024-09-20 04:03:20
|
I don't understand what any of this means
|
|
|
ChrisX4Ever
Thanks for the reply. Yes, I would love to losslessly save some bytes on my HDD in the case of lossy JPEG files.
|
|
2024-09-20 04:08:40
|
try xl converter
https://github.com/JacobDev1/xl-converter/releases
|
|
2024-09-20 04:09:28
|
Your ffmpeg example looks like DOS syntax so you probably aren't using a bash shell
|
|
2024-09-20 04:12:07
|
Also, the script doesn't check whether the output filename already exists before clobbering it (a minor issue in this case), whether the input filename is a symlink or an actual file, or whether the output is fewer bytes than the input.
|
|
2024-09-20 04:14:29
|
Would be cool to keep a running total of how much space was saved too, for extra credit
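A minimal bash sketch of the checks suggested above, assuming GNU coreutils (`stat -c%s`); the filenames and cjxl flags are placeholders:
```
#!/usr/bin/env bash
total_saved=0
for f in ./*.jpg; do
  out="${f%.jpg}.jxl"
  [ -e "$out" ] && { echo "skip (output exists): $out"; continue; }           # no clobbering
  { [ -f "$f" ] && [ ! -L "$f" ]; } || { echo "skip (not a regular file): $f"; continue; }
  cjxl --quiet --lossless_jpeg=1 "$f" "$out" || continue
  in_size=$(stat -c%s "$f"); out_size=$(stat -c%s "$out")
  if [ "$out_size" -ge "$in_size" ]; then
    rm -- "$out"                                   # keep the JXL only if it is actually smaller
  else
    total_saved=$(( total_saved + in_size - out_size ))
  fi
done
echo "Total bytes saved: $total_saved"
```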
|
|
2024-09-20 04:14:50
|
All of this will be on the test
|
|
|
AccessViolation_
|
|
Demiurge
try xl converter
https://github.com/JacobDev1/xl-converter/releases
|
|
2024-09-21 08:17:09
|
Ooo, I was hoping something like this existed. Otherwise I would have gone janky bash script
|
|
|
CrushedAsian255
|
2024-09-21 10:27:01
|
I love my janky bash scripts
|
|
|
Demiurge
|
|
CrushedAsian255
I love my janky bash scripts
|
|
2024-09-22 10:24:41
|
I do too.
|
|
2024-09-22 10:24:53
|
:)
|
|
|
Nova Aurora
|
|
CrushedAsian255
I love my janky bash scripts
|
|
2024-10-11 10:03:04
|
What about build scripts that decode test cases and inject them into ssh keys?
|
|
|
CrushedAsian255
|
|
Nova Aurora
What about build scripts that decode test cases and inject them into ssh keys?
|
|
2024-10-11 10:04:52
|
My favourite ❤️
|
|
|
yoochan
|
2024-10-20 12:47:07
|
I want to losslessly recompress jpegs but strip any useless byte (including stripping the jbrd box), how can I remove all meta data ? I guess this command is not enough : `cjxl --quiet --lossless_jpeg=1 --allow_jpeg_reconstruction=0`
|
|
2024-10-20 12:48:45
|
did I strip everything if I add `-x strip=exif -x strip=xmp -x strip=jumbf`?
|
|
|
RaveSteel
|
2024-10-20 12:50:46
|
I think it may be enough to specify --container 0?
|
|
2024-10-20 12:51:18
|
No, wrong, --container 0 just leaves it up to the encoder
|
|
|
_wb_
|
2024-10-20 01:12:13
|
Some people will be very upset if you say metadata is useless π
|
|
|
yoochan
|
2024-10-20 02:27:15
|
π
let's say I just want undocumented and untraceable arrays of pixels
|
|
|
Meow
|
2024-10-20 03:54:58
|
For photographers it's essential
|
|
|
Dejay
|
2024-10-30 02:49:58
|
With cjxl, is quality 80 mapped to distance = 2?
|
|
|
Meow
|
|
Dejay
With cjxl, is quality 80 mapped to distance = 2?
|
|
2024-10-30 02:57:30
|
1.9 according to a table made by someone
|
|
2024-10-30 03:04:34
|
For decreasing each 1 from q99 to q30, it increases each 0.9 from d0.19 to d6.4
|
|
|
Dejay
|
|
Meow
For decreasing each 1 from q99 to q30, it increases each 0.9 from d0.19 to d6.4
|
|
2024-10-30 03:17:21
|
Thanks, although I don't get the math, I don't need a precise answer anyway π
|
|
|
Meow
|
2024-10-30 03:17:51
|
Then 1.9 is the answer
|
|
|
jonnyawsom3
|
2024-10-30 03:59:43
|
It'll say what distance when you run the command
|
|
|
CrushedAsian255
|
|
Meow
For decreasing each 1 from q99 to q30, it increases each 0.9 from d0.19 to d6.4
|
|
2024-10-30 08:12:08
|
Do you mean it decreases by 0.09 per 1 ?
|
|
|
Meow
|
2024-10-30 08:12:31
|
Oops that's 0.09
|
|
2024-10-30 08:13:23
|
Below q30 the curve of distance becomes sharper
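For reference, the numbers above (q99 → d0.19, q80 → d1.9, q30 → d6.4) imply a simple linear mapping in that range; a small sketch, leaving out the steeper curve used below q30:
```
# distance = 0.1 + (100 - quality) * 0.09   for roughly q30..q99
# q100 maps to distance 0 (lossless); below q30 the curve steepens.
q=80; awk -v q="$q" 'BEGIN { print 0.1 + (100 - q) * 0.09 }'   # prints 1.9
```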
|
|
|
CrushedAsian255
|
2024-11-07 10:22:26
|
`-d 0 -e 11 --allow_expert_options`
|
|
2024-11-07 10:22:44
|
haters need to get a faster computer
|
|
|
jonnyawsom3
|
2024-11-07 12:00:54
|
-e 12
|
|
2024-11-07 12:01:48
|
Although, I did beat e11 last night just by using e10 with -P 0, so it seems like a few options were missed since Monad's PR
|
|
|
CrushedAsian255
|
2024-11-07 12:05:35
|
`-e 13` time
|
|
|
A homosapien
|
2024-11-07 04:02:34
|
v0.8 e10 beats Monad's e12 sometimes... <:kekw:808717074305122316>
|
|
2024-11-07 04:32:55
|
Found it, zig is too powerful for e12 π https://discord.com/channels/794206087879852103/794206087879852106/1292927543904571432
|
|
|
monad
|
|
Although, I did beat e11 last night just by using e10 with -P 0, so it seems like a few options were missed since Monad's PR
|
|
2024-11-07 05:01:07
|
It does try P0, but with additional configuration. Old e11 tried P0 and P15; new tries P0, P4, P15 or P0, P6, P14, P15 depending on the image.
|
|
|
DZgas Π
|
|
CrushedAsian255
`-e 13` time
|
|
2024-11-07 05:13:47
|
Let's gooo
|
|
|
jonnyawsom3
|
|
monad
It does try P0, but with additional configuration. Old e11 tried P0 and P15; new tries P0, P4, P15 or P0, P6, P14, P15 depending on the image.
|
|
2024-11-07 06:04:30
|
I think in this case it was `-P 0 -I 0`
|
|
|
|
JendaLinda
|
2024-11-09 10:31:10
|
I wonder why `cjpegli` doesn't have an `--effort` option. JPEG encoding is so fast, there must still be room for improvement.
|
|
|
TheBigBadBoy - πΈπ
|
|
JendaLinda
I wonder why `cjpegli` doesn't have an `--effort` option. JPEG encoding is so fast, there must still be room for improvement.
|
|
2024-11-09 10:49:59
|
`cjpegli` already has good heuristics
if you want better compression (lossless quality), check IJG's jpegtran, jpegultrascan and/or ect
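An illustrative jpegtran pass over a cjpegli output (standard jpegtran flags; filenames are placeholders):
```
# Losslessly re-optimize the entropy coding; -copy all keeps metadata, -copy none drops it.
jpegtran -optimize -progressive -copy all cjpegli_out.jpg > smaller.jpg
```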
|
|
|
|
JendaLinda
|
2024-11-09 11:00:19
|
`jpegultrascan` does further compress JPEGs encoded by `cjpegli`, I've tried that.
However, I thought adaptive quantization could be improved as well with more effort.
|
|
|
jonnyawsom3
|
2024-11-09 11:45:29
|
<#1301682361502531594>
|
|
|
Demiurge
|
|
JendaLinda
I wonder why `cjpegli` doesn't have an `--effort` option. JPEG encoding is so fast, there must still be room for improvement.
|
|
2024-11-11 04:38:48
|
Not necessarily.
|
|
|
|
JendaLinda
|
2024-11-11 05:12:11
|
How could we know if we don't try?
|
|
|
Demiurge
|
2024-11-11 11:18:53
|
It depends on what you try. Time doesn't have inherent value if it's not used correctly.
|
|
2024-11-11 11:19:10
|
Spending extra time on something doesn't always mean better results.
|
|
2024-11-11 11:20:03
|
That's why jpegli is much faster than mozjpeg and much better looking at the same time. It uses time more effectively.
|
|
2024-11-11 11:20:33
|
It doesn't necessarily need more time, and spending extra time is not necessarily productive
|
|
|
|
JendaLinda
|
2024-11-12 02:31:11
|
For example, performing a more extensive search for the optimal quantization to avoid the unwanted smoothing.
|
|
2024-11-12 02:32:13
|
I've also noticed jpegli seems to smooth chroma too much, resulting in an effect similar to chroma subsampling, even at very low distances under 0.5
|
|
|
Code Poems
|
2025-01-02 09:58:02
|
Hello everyone! I think I found something interesting.
Could a libjxl developer please take a look at this issue? https://github.com/libjxl/libjxl/issues/3746#issuecomment-2567331492
This majorly reduces RAM usage and increases encoding speed for cjxl users. I'm not sure if this is officially supported for non-PPM inputs, though. It does seem to work great so far.
|
|
|
CrushedAsian255
|
2025-01-02 10:17:08
|
It should be relatively easy to implement on a row group basis for png input
|
|
|
0xC0000054
|
|
Code Poems
Hello everyone! I think I found something interesting.
Could a libjxl developer please take a look at this issue? https://github.com/libjxl/libjxl/issues/3746#issuecomment-2567331492
This majorly reduces RAM usage and increases encoding speed for cjxl users. I'm not sure if this is officially supported for non-PPM inputs, though. It does seem to work great so far.
|
|
2025-01-02 10:55:40
|
Streaming encoding is currently only implemented for PPM inputs.
|
|
2025-01-02 10:58:35
|
But more examples of how to use JxlChunkedFrameInputSource would be welcome.
|
|
|
A homosapien
|
2025-01-02 10:58:43
|
The documentation could be wrong/outdated, I've also pointed it out here: https://github.com/libjxl/libjxl/issues/3517#issuecomment-2513479999
|
|
2025-01-02 10:59:33
|
It seems like you can force streaming for formats other than just PPM, I've gotten it to work on lossless jxl and jpeg (`-j 0`) inputs. I don't know if this behavior is intentional or not.
|
|
|
Code Poems
|
2025-01-02 11:28:36
|
Adding `--streaming_input` enables streaming encoding in all cases I tested (yes, PNG input, not PPM). The pixel output seems to be the same as if a PPM proxy were used.
My question is, is this ready to be used? Is this a result of an oversight or an undocumented feature? I could really use an opinion from a libjxl dev. I want to include this in XL Converter.
Encoding an 8K image (PNG, `-e 9`, VarDCT) requires 8 GB of RAM. With `--streaming_input`, that PNG now requires 16 times less RAM and processes multiple times faster.
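For reference, the invocation under discussion would look something like this (filenames illustrative):
```
cjxl big_8k_input.png out.jxl -e 9 --streaming_input
```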
|
|
2025-01-02 11:31:12
|
I think this is the commit that introduced this behavior: https://github.com/libjxl/libjxl/commit/0e3e08f89305e821e8467df7e058fc7889fc913e
|
|
2025-01-02 02:54:07
|
After reviewing the source code, the `--streaming_input` argument most likely enables this unintentionally, by appending `JXL_ENC_FRAME_SETTING_BUFFERING` to the `params.options` vector.
```c
if (args.streaming_input) {
params.options.emplace_back(JXL_ENC_FRAME_SETTING_BUFFERING,
static_cast<int64_t>(3), 0);
}
```
The speed / RAM usage improvements are incredible, but I'm unfamiliar with the code base, so I can't tell if this is safe to use all the time.
|
|
|
|
veluca
|
2025-01-02 03:00:53
|
As far as I know that should be fine, the one thing that is not supported is not decoding the PNG into ram
|
|
|
Code Poems
|
2025-01-02 03:04:55
|
Hi <@179701849576833024> thanks for the answer. I tried doing something like this. Do you think this is a sound idea?
```c
diff --git a/tools/cjxl_main.cc b/tools/cjxl_main.cc
index 4b01e18..7031a10 100644
--- a/tools/cjxl_main.cc
+++ b/tools/cjxl_main.cc
@@ -1163,10 +1163,7 @@ int main(int argc, char** argv) {
params.runner = JxlThreadParallelRunner;
params.runner_opaque = runner.get();
- if (args.streaming_input) {
- params.options.emplace_back(JXL_ENC_FRAME_SETTING_BUFFERING,
- static_cast<int64_t>(3), 0);
- }
+ params.options.emplace_back(JXL_ENC_FRAME_SETTING_BUFFERING, static_cast<int64_t>(3), 0);
jpegxl::tools::SpeedStats stats;
jpegxl::tools::JxlOutputProcessor output_processor;
```
|
|
|
|
veluca
|
2025-01-02 03:06:37
|
Answering that would require me remembering what the code does π€£
|
|
2025-01-02 03:06:59
|
I'll try to remember to look into it next week when we're back from the holidays
|
|
|
Code Poems
|
2025-01-02 03:09:59
|
Hah, yes the code base is certainly extensive. If you do work on it, here is the associated libjxl issue: https://github.com/libjxl/libjxl/issues/3746
|
|
|
0xC0000054
|
|
veluca
Answering that would require me remembering what the code does π€£
|
|
2025-01-02 03:43:42
|
The header documentation says:
```
/** Control what kind of buffering is used, when using chunked image frames.
* -1 = default (let the encoder decide)
* 0 = buffers everything, basically the same as non-streamed code path
(mainly for testing)
* 1 = buffers everything for images that are smaller than 2048 x 2048, and
* uses streaming input and output for larger images
* 2 = uses streaming input and output for all images that are larger than
* one group, i.e. 256 x 256 pixels by default
* 3 = currently same as 2
*
* When using streaming input and output the encoder minimizes memory usage at
* the cost of compression density. Also note that images produced with
* streaming mode might not be progressively decodeable.
*/
JXL_ENC_FRAME_SETTING_BUFFERING = 34,
```
|
|
2025-01-02 03:46:19
|
IIRC libjxl uses the chunked API internally when you call `JxlEncoderAddImageFrame`.
|
|
2025-01-02 03:47:34
|
I went digging through that code a while back while trying to figure out what the intended usage pattern is for JxlChunkedFrameInputSource.
|
|
|
jonnyawsom3
|
2025-01-02 03:52:44
|
Huh, oh yeah, this has been known for a few months now. The difference is between streamed input and chunked encoding.
Streamed input implicitly enables chunked encoding, even if the input format doesn't support streaming, thereby disabling the full-frame tools that cause most of the memory consumption (effort 8 and 9 used to enable it, but I thought it was moved to 10 for images above 2048 width/height)
|
|
|
Lilli
|
2025-08-18 09:29:33
|
It's strange that when I increase the effort, the filesize is bigger (I guess it's because my images aren't "perceptual"). I thought I'd share though! It's quite consistent.
All inputs are 300MB 16-bit TIFF files.
Rows are: filename, distance, effort, output filesize
(I know you'll say my distance makes no sense, but I'm not sure it should behave this way anyway)
|
|
2025-08-18 02:05:17
|
I just tested with intensity_target=60000 and distance=0.5, the behavior stays the same, image quality is approx the same as before
|
|
|
Tirr
|
2025-08-18 02:11:24
|
for lossy, higher effort means consistent quality throughout the image. it might not produce a smaller file
|
|
2025-08-18 02:12:08
|
so the encoder used more bits to preserve details
|
|
|
Lilli
|
2025-08-18 02:12:21
|
Interesting !
|
|
2025-08-18 02:12:59
|
Okay, I see, I might want to use a low effort then! Thank you π
|
|
|
Tirr
|
2025-08-18 02:14:26
|
or maybe find a good balance between distance and effort? I'm not an expert on encoding params though
|
|
|
jonnyawsom3
|
2025-08-18 02:39:21
|
Higher efforts are more accurate to the quality you set, but also *try* to be more dense too. So either use a low effort with less consistent results, or a higher effort with a lower quality
|
|
|
monad
|
2025-08-19 01:50:13
|
<:Agree:805404376658608138> ideally, you should adjust effort to moderate compute, distance to moderate quality. but it is possible some intersection of digital image content and encoding behavior reveals pathological cases.
|
|
|
Lilli
|
2025-08-19 07:11:22
|
I see... I'd like to have a consistent filesize but that might be hard to achieve
|
|
2025-08-19 07:13:34
|
All my files will be 300MB in 16bits TIFF format, and I want them to be 20~30MB, so I'm trying to find parameters that do just that but of course the images are different and some compress to 15MB and some to 60MB for the same parameters :/
|
|
|
A homosapien
|
2025-08-19 07:48:33
|
That's due to the perceptual nature of libjxl. To reach the same target quality, some files need more or fewer bits. There was a target file size option, but that was deprecated quite some time ago. Why do you want to target file size specifically?
> Intensity_target=60000 and distance=0.5
I have a feeling your images aren't perceptual at all. Is it raw data or something like that?
|
|
|
Lilli
|
2025-08-19 10:21:03
|
Yes, it is raw camera output, so no gamma, only linear; the only processing is debayering. The goal is to transfer this data for preview/processing. The filesize must be relatively consistent as the transfer time is quite long, so I need to be in control of that to a degree
|
|
|
jonnyawsom3
|
2025-08-19 12:32:18
|
I assume lossless doesn't get close?
|
|
|
Lilli
|
2025-08-19 12:52:56
|
Input is 192.3MB PNG
Using v0.11.1:
```
cjxl -e X -d 0 PNGs/inp.PNG out_lossless_X.jxl
```
With X the effort value
```
| Effort | Size (MB) | MP/s |
|--------|-----------|----------|
| 1 | 134.7 | 354.607 |
| 2 | 130.8 | 72.732 |
| 3 | 128.6 | 41.254 |
| 4 | 128.4 | 19.268 |
| 5 | 128.4 | 10.978 |
| 6 | 128.7 | 6.916 |
| 7 | 130.3 | 3.209 |
| 8 | 131.6 | 1.097 |
| 9 | 131.2 | 0.412 |
| 10 | 134.7 | 0.050 |
```
And with the settings for lossy described above I get around 25MB
|
|
2025-08-19 12:56:11
|
It's interesting that e=4 yields the smallest size
So it's definitely not near enough :/
|
|
|
Kupitman
|
|
Lilli
Input is 192.3MB PNG
Using v0.11.1:
```
cjxl -e X -d 0 PNGs/inp.PNG out_lossless_X.jxl
```
With X the effort value
```
| Effort | Size (MB) | MP/s |
|--------|-----------|----------|
| 1 | 134.7 | 354.607 |
| 2 | 130.8 | 72.732 |
| 3 | 128.6 | 41.254 |
| 4 | 128.4 | 19.268 |
| 5 | 128.4 | 10.978 |
| 6 | 128.7 | 6.916 |
| 7 | 130.3 | 3.209 |
| 8 | 131.6 | 1.097 |
| 9 | 131.2 | 0.412 |
| 10 | 134.7 | 0.050 |
```
And with the settings for lossy described above I get around 25MB
|
|
2025-08-19 01:08:49
|
old problem, I made a report about it
|
|
2025-08-19 01:09:05
|
but I don't remember if it was solved or not
|
|
|
Lilli
Input is 192.3MB PNG
Using v0.11.1:
```
cjxl -e X -d 0 PNGs/inp.PNG out_lossless_X.jxl
```
With X the effort value
```
| Effort | Size (MB) | MP/s |
|--------|-----------|----------|
| 1 | 134.7 | 354.607 |
| 2 | 130.8 | 72.732 |
| 3 | 128.6 | 41.254 |
| 4 | 128.4 | 19.268 |
| 5 | 128.4 | 10.978 |
| 6 | 128.7 | 6.916 |
| 7 | 130.3 | 3.209 |
| 8 | 131.6 | 1.097 |
| 9 | 131.2 | 0.412 |
| 10 | 134.7 | 0.050 |
```
And with the settings for lossy described above I get around 25MB
|
|
2025-08-19 01:09:30
|
can you share the file?
|
|
|
Lilli
|
2025-08-19 01:09:40
|
the PNG?
|
|
|
Kupitman
|
|
Lilli
|
2025-08-19 01:12:14
|
https://we.tl/t-4Y8dzRlYYG
|
|
|
Kupitman
|
|
π°πππ
|
|
Lilli
Input is 192.3MB PNG
Using v0.11.1:
```
cjxl -e X -d 0 PNGs/inp.PNG out_lossless_X.jxl
```
With X the effort value
```
| Effort | Size (MB) | MP/s |
|--------|-----------|----------|
| 1 | 134.7 | 354.607 |
| 2 | 130.8 | 72.732 |
| 3 | 128.6 | 41.254 |
| 4 | 128.4 | 19.268 |
| 5 | 128.4 | 10.978 |
| 6 | 128.7 | 6.916 |
| 7 | 130.3 | 3.209 |
| 8 | 131.6 | 1.097 |
| 9 | 131.2 | 0.412 |
| 10 | 134.7 | 0.050 |
```
And with the settings for lossy described above I get around 25MB
|
|
2025-08-19 01:29:33
|
quality should also be assessed
|
|
2025-08-19 01:29:37
|
otherwise it's pointless
|
|
2025-08-19 01:30:08
|
I have encountered preset 10 (fastest svt-av1 preset) producing the smallest videos given the same quantizer
|
|
|
Kupitman
|
|
π°πππ
quality should also be assessed
|
|
2025-08-19 01:30:12
|
what
|
|
|
π°πππ
|
2025-08-19 01:30:26
|
oh this is lossless
|
|
2025-08-19 01:30:29
|
π
|
|
|
Kupitman
|
2025-08-19 01:30:34
|
it's lossless...
|
|
|
π°πππ
|
|
Lilli
Input is 192.3MB PNG
Using v0.11.1:
```
cjxl -e X -d 0 PNGs/inp.PNG out_lossless_X.jxl
```
With X the effort value
```
| Effort | Size (MB) | MP/s |
|--------|-----------|----------|
| 1 | 134.7 | 354.607 |
| 2 | 130.8 | 72.732 |
| 3 | 128.6 | 41.254 |
| 4 | 128.4 | 19.268 |
| 5 | 128.4 | 10.978 |
| 6 | 128.7 | 6.916 |
| 7 | 130.3 | 3.209 |
| 8 | 131.6 | 1.097 |
| 9 | 131.2 | 0.412 |
| 10 | 134.7 | 0.050 |
```
And with the settings for lossy described above I get around 25MB
|
|
2025-08-19 01:30:42
|
this message confused me
|
|
2025-08-19 01:30:45
|
my bad π
|
|
2025-08-19 01:31:07
|
And my JXL lossless experience is 8 being the sweet spot
|
|
|
Kupitman
|
2025-08-19 01:31:32
|
i use 9 for big and 10 for small/gray images
|
|
|
π°πππ
|
|
Lilli
Input is 192.3MB PNG
Using v0.11.1:
```
cjxl -e X -d 0 PNGs/inp.PNG out_lossless_X.jxl
```
With X the effort value
```
| Effort | Size (MB) | MP/s |
|--------|-----------|----------|
| 1 | 134.7 | 354.607 |
| 2 | 130.8 | 72.732 |
| 3 | 128.6 | 41.254 |
| 4 | 128.4 | 19.268 |
| 5 | 128.4 | 10.978 |
| 6 | 128.7 | 6.916 |
| 7 | 130.3 | 3.209 |
| 8 | 131.6 | 1.097 |
| 9 | 131.2 | 0.412 |
| 10 | 134.7 | 0.050 |
```
And with the settings for lossy described above I get around 25MB
|
|
2025-08-19 01:31:38
|
this is so interesting though.
Effort 1 and 10 give the same size?
|
|
|
Kupitman
|
2025-08-19 01:31:57
|
source image is 48 bit
|
|
2025-08-19 01:32:10
|
i think that's because jxl has a problem with it
|
|
|
π°πππ
|
2025-08-19 01:32:17
|
Hmm, that's probably why
|
|
|
Kupitman
|
2025-08-19 01:32:22
|
webp 24 compress to 3.2 mb
|
|
2025-08-19 01:32:27
|
|
|
|
jonnyawsom3
|
2025-08-19 01:34:23
|
For bayer DNG files effort 2 is usually best, so it really depends on the image content and bitdepth. I'll do a bit of testing too and see if there's a specific flag that's going wrong
|
|
|
Kupitman
|
2025-08-19 01:35:36
|
i tested that image, but the error did not occur
|
|
|
Lilli
|
|
Kupitman
webp 24 compress to 3.2 mb
|
|
2025-08-19 01:36:23
|
I looked at the image and well there's nothing left in terms of, well anything really
|
|
|
jonnyawsom3
|
2025-08-19 01:37:13
|
Yeah... Don't do that
|
|
|
Lilli
Input is 192.3MB PNG
Using v0.11.1:
```
cjxl -e X -d 0 PNGs/inp.PNG out_lossless_X.jxl
```
With X the effort value
```
| Effort | Size (MB) | MP/s |
|--------|-----------|----------|
| 1 | 134.7 | 354.607 |
| 2 | 130.8 | 72.732 |
| 3 | 128.6 | 41.254 |
| 4 | 128.4 | 19.268 |
| 5 | 128.4 | 10.978 |
| 6 | 128.7 | 6.916 |
| 7 | 130.3 | 3.209 |
| 8 | 131.6 | 1.097 |
| 9 | 131.2 | 0.412 |
| 10 | 134.7 | 0.050 |
```
And with the settings for lossy described above I get around 25MB
|
|
2025-08-19 01:38:28
|
Well, that's interesting
```
cjxl M31-1280.tif.PNG M31-1280.jxl -d 0 -e 2
JPEG XL encoder v0.12.0 03e432d3 [_AVX2_,SSE4,SSE2] {Clang 20.1.8}
Encoding [Modular, lossless, effort: 2]
Compressed to 130839.8 kB (20.936 bpp).
7779 x 6427, 26.780 MP/s, 16 threads.
```
|
|
2025-08-19 01:38:58
|
I guess we did something right for v0.12, because now effort 2 beats all your results. 124MB
|
|
|
Kupitman
|
|
Lilli
I looked at the image and well there's nothing left in terms of, well anything really
|
|
2025-08-19 01:40:24
|
wdym
|
|
|
Lilli
|
2025-08-19 01:40:32
|
hmmm I'm not so sure, what I reported in the table is the size given by cjxl
You have 130.89 which is what I got with e2
|
|
|
jonnyawsom3
|
2025-08-19 01:40:59
|
Ah right, MiB vs MB
|
|
|
Lilli
|
|
Kupitman
wdym
|
|
2025-08-19 01:42:31
|
When you stretch it, it should look more like this (35MB jxl)
|
|
2025-08-19 01:42:34
|
|
|
2025-08-19 01:46:40
|
I appreciate you both taking the time and having the curiosity to try out my input data π
|
|
|
jonnyawsom3
|
|
Lilli
It's strange that when I increase the effort, the filesize is bigger (I guess it's because my images aren't "perceptual"). I thought I'd share though! It's quite consistent.
All inputs are 300MB 16-bit TIFF files.
Rows are: filename, distance, effort, output filesize
(I know you'll say my distance makes no sense, but I'm not sure it should behave this way anyway)
|
|
2025-08-19 01:49:11
|
Also... How does this file look?
|
|
|
Lilli
|
2025-08-19 01:51:52
|
I was thinking, would it work to compress a few 256x256 patches and quickly converge to a good distance parameter?
|
|
|
Also... How does this file look?
|
|
2025-08-19 01:56:09
|
I have checked against the reference:
|
|
2025-08-19 01:57:07
|
It's quite good, but it creates some "holes", some quite dark pixels, and the blocks are very visible, with colors behaving a little strangely
|
|
2025-08-19 01:58:12
|
But what you have is consistent with my findings so far; for these noisy images, 25MB is just about enough to remove most of the egregious blockiness
|
|
2025-08-19 01:58:50
|
What parameters did you use?
|
|
|
jonnyawsom3
|
2025-08-19 02:01:31
|
`-d 0.01 -m 1`
It uses Modular encoding, which is normally for lossless, in a lossy way. It should perform better than VarDCT at such low distances, but it runs at a speed and memory cost similar to effort 10, so might not be worth it
|
|
|
Lilli
|
2025-08-19 02:01:46
|
Interesting
|
|
2025-08-19 02:02:01
|
the result is indeed quite impressive for this filesize
|
|
2025-08-19 02:02:46
|
I made a 7.2 MB jxl with -e3 -d 0.04
|
|
2025-08-19 02:02:55
|
I'll compare
|
|
2025-08-19 02:04:41
|
Oh wow ! the difference is huge, yours is much much better
When I said the blocks are very visible, well, no they're quite discreet compared to that one
|
|
|
jonnyawsom3
|
2025-08-19 02:04:50
|
I think I know how it could be even more efficient, possibly for lossless too, but we'd need to plug some new wires into libjxl. Right now `-R` is a bool to enable the Squeeze transform, which recursively halves the resolution and stores the difference. If we changed it to be the levels of halving instead, then we could do something like `-R 2` to hopefully separate the noise from the real image
|
|
|
Lilli
|
2025-08-19 02:06:04
|
What would that entail ?
|
|
|
jonnyawsom3
|
|
Lilli
I made a 7.2 MB jxl with -e3 -d 0.04
|
|
2025-08-19 02:06:53
|
effort 3 is functionally the same as a JPEG, so if you can I'd aim for effort 5 <https://github.com/libjxl/libjxl/blob/main/doc/encode_effort.md>
|
|
|
Lilli
|
2025-08-19 02:07:38
|
Ah that's very interesting
|
|
|
jonnyawsom3
|
|
Lilli
What would that entail ?
|
|
2025-08-19 02:07:53
|
Work on our part mostly, it'd be an update to the library so unfortunately we can't try it yet
|
|
|
Lilli
|
2025-08-19 02:08:19
|
I mean what would be the expected result?
|
|
|
jonnyawsom3
|
2025-08-19 02:09:52
|
Ah, in an ideal world it would only compress the noise, and then we could use photon noise to add it back in
|
|
|
Lilli
|
2025-08-19 02:10:20
|
So, if I change the distance so that -e 3 and -e 5 give the same filesize, the -e 5 should be significantly better?
|
|
|
jonnyawsom3
|
2025-08-19 02:11:04
|
Hopefully π
|
|
|
Lilli
|
2025-08-19 02:12:00
|
I have to see if the added compute time is worth it
|
|
2025-08-19 02:13:58
|
It is better indeed with equally sized outputs! But absolutely no match for the modular one you showed
|
|
|
jonnyawsom3
|
2025-08-19 02:16:44
|
Really, I should do more testing and just enable modular automatically at such low distances, but there's always a risk that something explodes
|
|
2025-08-19 02:17:25
|
Especially since it goes singlethreaded
|
|
|
Lilli
|
2025-08-19 02:20:45
|
Something happens when I use my encoder vs cjxl... Maybe it's because of the intensity target
Edit: yes it's the intensity target that makes the quality much higher for the same filesize
|
|
|
jonnyawsom3
|
2025-08-19 02:24:33
|
Makes sense. Intensity target effectively brightens the image internally, so it sees the result you want rather than the raw data, and can separate the image from the noise more effectively
|
|
|
Lilli
|
2025-08-19 02:28:41
|
Okay, e3 35MB vs e5 26MB with same distance (0.5 intensity target 60000) and the quality is really really close
|
|
2025-08-19 02:31:02
|
I guess I'll go with e5, do a few passes of encoding on 256x256 patches to find the correct distance and aim for 25MB π
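A rough sketch of that patch-based search, assuming ImageMagick for cropping and GNU coreutils; the crop offsets, step count, and target are placeholders, and the 7779 x 6427 size is taken from the cjxl output quoted above:
```
target_mb=25
full_px=$((7779 * 6427))            # full-image pixel count
patch_px=$((3 * 256 * 256))         # three 256x256 sample patches
magick input.png -crop 256x256+1000+1000 +repage p1.png
magick input.png -crop 256x256+3000+2000 +repage p2.png
magick input.png -crop 256x256+5000+4000 +repage p3.png
lo=0.1; hi=2.0
for _ in 1 2 3 4 5 6; do            # ~6 bisection steps on distance
  d=$(awk -v a="$lo" -v b="$hi" 'BEGIN { printf "%.3f", (a + b) / 2 }')
  bytes=0
  for p in p1 p2 p3; do
    cjxl --quiet -e 5 -d "$d" --intensity_target=60000 "$p.png" "$p.jxl"
    bytes=$(( bytes + $(stat -c%s "$p.jxl") ))
  done
  est_mb=$(awk -v b="$bytes" -v f="$full_px" -v p="$patch_px" \
           'BEGIN { printf "%.1f", b * f / p / 1e6 }')
  # projected size too big -> raise distance, otherwise lower it
  if awk -v e="$est_mb" -v t="$target_mb" 'BEGIN { exit !(e + 0 > t + 0) }'; then
    lo=$d
  else
    hi=$d
  fi
done
echo "use -d $d (projected ~${est_mb} MB for the full image)"
```
The projection is only as good as the patches, so pick them from representative regions of the frame.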
|
|
2025-08-19 02:32:35
|
Now that I see it, e5 takes more than twice the time of e3...
|
|
|
A homosapien
|
|
Lilli
When you stretch it, it should look more like this (35MB jxl)
|
|
2025-08-19 10:23:33
|
Then why not stretch it to begin with? Since it's more perceptual libjxl will preserve more details but I guess that might not fix the consistent file size problem...
|
|
2025-08-19 10:25:18
|
Is lossless at least somewhat consistent?
|
|
|
Lilli
|
2025-08-20 08:13:52
|
Yes, that's a good suggestion, I have tried it before. But if I stretch it before export, I lose control over the stretching after export (which is the whole point), or I need to unstretch it. All of that costs time...
|
|
2025-08-20 08:15:38
|
And in all cases, it won't change the issue that I need to find the relevant distance for each file so that it compresses to ~25MB
|
|
2025-08-20 08:18:57
|
In the end, I also needed to find the correct stretch so that it looks good for jxl, but then it was hard to manage the quality of the dark regions, so this approach doesn't work that well for me
|
|
|
jonnyawsom3
|
2025-08-20 08:28:38
|
You basically want near-lossless, but lossy modular is veeeery slow
|
|
|
Lilli
|
2025-08-20 09:29:26
|
yes π near lossless, and as much "non lossless" as needed to get x10-x12 compression
|
|
|
JaitinPrakash
|
2025-08-20 05:04:24
|
I think that using jpeg2000 might be better here, as opj_compress can set the compression ratio, and you can just set it to 12, and it will hit 25MB perfectly.
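An illustrative opj_compress invocation; `-r 12` requests a 12:1 compression ratio (roughly 300 MB → 25 MB), with filenames as placeholders:
```
opj_compress -i input.tif -o output.jp2 -r 12
```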
|
|
|
Lilli
|
2025-09-01 09:14:27
|
I see... that was our initial choice <:AngryCry:805396146322145301>
|
|
|
spider-mario
|
2025-09-01 01:26:43
|
the compression kind of sucks, though
|
|
|
Gositi
|
2025-09-01 08:36:35
|
Hi! I need some clarification on how the encoding parameters work. My understanding was that modular encoding meant lossless encoding and that distance was no longer relevant, but I observe (using cjxl) that `-m 1` is lossy while `-d 0` is lossless. So how does modular/varDCT and distance actually play together?
I also read in the libjxl API docs that setting a distance of 0 will create a "mathematically lossless" image but that it is not enough to produce a "true lossless" image. Could someone explain whatever is going on there, since I observe lossless encoding using that in cjxl?
|
|
|
Kupitman
|
|
Gositi
Hi! I need some clarification on how the encoding parameters work. My understanding was that modular encoding meant lossless encoding and that distance was no longer relevant, but I observe (using cjxl) that `-m 1` is lossy while `-d 0` is lossless. So how does modular/varDCT and distance actually play together?
I also read in the libjxl API docs that setting a distance of 0 will create a "mathematically lossless" image but that it is not enough to produce a "true lossless" image. Could someone explain whatever is going on there, since I observe lossless encoding using that in cjxl?
|
|
2025-09-01 10:06:13
|
Math lossless is 1.0 SSIM; visually lossless means lossless to the human eye. d 0 is true lossless
|
|
|
spider-mario
|
|
Gositi
Hi! I need some clarification on how the encoding parameters work. My understanding was that modular encoding meant lossless encoding and that distance was no longer relevant, but I observe (using cjxl) that `-m 1` is lossy while `-d 0` is lossless. So how does modular/varDCT and distance actually play together?
I also read in the libjxl API docs that setting a distance of 0 will create a "mathematically lossless" image but that it is not enough to produce a "true lossless" image. Could someone explain whatever is going on there, since I observe lossless encoding using that in cjxl?
|
|
2025-09-01 10:33:21
|
the documentation says that when using the API, setting the distance to 0 is not the only requirement for lossless, but that doesn't apply to `cjxl`, which does whatever is needed for lossless if it detects that it's given a distance of 0
|
|
2025-09-01 10:33:29
|
it doesn't *just* set the distance
|
|
2025-09-01 10:34:12
|
https://github.com/libjxl/libjxl/blob/1bb7f02535a006afbc753282f37a7ec348afe3d1/lib/extras/enc/jxl.cc#L249
https://github.com/libjxl/libjxl/blob/1bb7f02535a006afbc753282f37a7ec348afe3d1/lib/extras/enc/jxl.cc#L289-L293
|
|
|
jonnyawsom3
|
2025-09-01 11:25:57
|
Lossless has to be modular, but modular doesn't have to be lossless
|
|
|
Gositi
|
2025-09-02 07:45:19
|
Alright, thanks!
|
|
|
_wb_
|
2025-09-02 08:52:47
|
mathematically lossless means not doing the XYB color transform and storing the RGB values exactly. VarDCT mode cannot store values exactly (it's like the old JPEG in that regard) and in libjxl, it always uses the XYB color transform. Modular mode can store values exactly but it also has various mechanisms to do lossy encoding, generally not as effective as VarDCT but sometimes relatively competitive in compression (though slower). When you do `-m 1` with a distance > 0 (like the default `-d 1`) it will use XYB and do something lossy. When you set the distance to 0, cjxl will both disable XYB and use modular in a lossless way (or if the input is a JPEG, it will apply lossless recompression of the JPEG data, which can be done since VarDCT can represent JPEG's quantized DCT coefficients directly).
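Restated as cjxl invocations (illustrative filenames; the flags are the ones discussed in this thread):
```
cjxl in.png out.jxl -d 1         # VarDCT, XYB, lossy (the default)
cjxl in.png out.jxl -m 1 -d 1    # Modular, but still XYB and lossy
cjxl in.png out.jxl -d 0         # Modular, no XYB, mathematically lossless
cjxl in.jpg out.jxl              # JPEG input: lossless JPEG recompression by default
```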
|
|
|
Gositi
|
|
_wb_
mathematically lossless means not doing the XYB color transform and storing the RGB values exactly. VarDCT mode cannot store values exactly (it's like the old JPEG in that regard) and in libjxl, it always uses the XYB color transform. Modular mode can store values exactly but it also has various mechanisms to do lossy encoding, generally not as effective as VarDCT but sometimes relatively competitive in compression (though slower). When you do `-m 1` with a distance > 0 (like the default `-d 1`) it will use XYB and do something lossy. When you set the distance to 0, cjxl will both disable XYB and use modular in a lossless way (or if the input is a JPEG, it will apply lossless recompression of the JPEG data, which can be done since VarDCT can represent JPEG's quantized DCT coefficients directly).
|
|
2025-09-02 12:11:45
|
Thanks for the breakdown! Now I understand better how the options work.
|
|