|
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
|
2025-01-31 07:56:34
|
Thank you for the replies so far c;
Okay;
When I export from Krita or Gimp in lossless, would that be more safe?
Or would it just be best to test each individual image?
> insane in the membrane.
I am insane.
But besides.
> If you're going for maximum compression, you might want to add --compress_boxes=1 --brotli_effort=11 to compress the metadata more.
I do try to go to maximum compression for now.
I do not need metadata, unless this refers to jxl needed data.
~~*currently looking into finding info about -E -I -g*~~ found it: `cjxl.exe -v -v -v -v --help`
|
|
|
Laserhosen
|
|
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
Thank you for the replies so far c;
Okay;
When I export from Krita or Gimp in lossless, would that be more safe?
Or would it just be best to test each individual image?
> insane in the membrane.
I am insane.
But besides.
> If you're going for maximum compression, you might want to add --compress_boxes=1 --brotli_effort=11 to compress the metadata more.
I do try to go to maximum compression for now.
I do not need metadata, unless this refers to jxl needed data.
~~*currently looking into finding info about -E -I -g*~~ found it: `cjxl.exe -v -v -v -v --help`
|
|
2025-01-31 08:07:30
|
Gimp and Krita both just use libjxl, so it shouldn't be any better or worse.
|
|
2025-01-31 08:09:50
|
Testing each image is a good idea if you can script it. e.g. with ImageMagick, you can do
`magick compare -metric pae original.png converted.jxl null:`, which will print `0 (0)` if all pixels are identical.
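If it helps to run that over a whole folder, here's a rough Python wrapper around the same command (a sketch; `compare_cmd`, `is_lossless`, and `check_folder` are made-up helper names, and it assumes `magick` is on your PATH):

```python
import subprocess
from pathlib import Path

def compare_cmd(original: Path, converted: Path) -> list[str]:
    # `pae` (peak absolute error) prints "0 (0)" when every pixel matches.
    return ["magick", "compare", "-metric", "pae",
            str(original), str(converted), "null:"]

def is_lossless(original: Path, converted: Path) -> bool:
    # As far as I know, `compare` exits 0 when the images match,
    # 1 when they differ, and 2 on error (unreadable file, size mismatch...).
    result = subprocess.run(compare_cmd(original, converted),
                            capture_output=True, text=True)
    return result.returncode == 0

def check_folder(folder: Path) -> list[Path]:
    # Return every PNG whose sibling .jxl does not decode to identical pixels.
    return [png for png in sorted(folder.glob("*.png"))
            if png.with_suffix(".jxl").exists()
            and not is_lossless(png, png.with_suffix(".jxl"))]
```

The exit-code check avoids string parsing entirely, which matters given that the stderr output can have warnings appended to it.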
|
|
2025-01-31 08:12:46
|
I think `-E 3 -I 100 -g 3` is typically a good combination for maximum compression at the expense of encode/decode time, but it will depend on the image.
|
|
2025-01-31 08:15:36
|
The metadata that gets lost would be (non-Exif) comments and embedded timestamps and things. Nothing that JXL cares about.
|
|
|
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
|
2025-01-31 10:20:38
|
Ahh, that took me longer than expected;
I am in dire need of coffee now.
(They started at webp, this was before I knew jxl)
|
|
|
monad
|
|
Laserhosen
Testing each image is a good idea if you can script it. e.g. with ImageMagick, you can do
`magick compare -metric pae original.png converted.jxl null:`, which will print `0 (0)` if all pixels are identical.
|
|
2025-01-31 10:24:29
|
Note that `compare` can concatenate warnings to its result, so you might not get the outcome you expect if you naively compare the string.
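A defensive way to handle that in a script (a Python sketch; the exact warning text varies between ImageMagick versions, so this only trusts the leading number on stderr and the `parse_pae` name is made up):

```python
import re
from typing import Optional

def parse_pae(compare_stderr: str) -> Optional[float]:
    # `magick compare -metric pae` writes something like "0 (0)" to stderr,
    # possibly followed by warnings; grab only the leading numeric token.
    match = re.match(r"\s*([0-9.eE+-]+)", compare_stderr)
    return float(match.group(1)) if match else None
```

Then test `parse_pae(output) == 0.0` instead of comparing the raw string.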
|
|
|
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
|
2025-01-31 10:27:11
|
Was about to ask about the comparison anyway:
When it fails to do lossless compression, does it then corrupt, or does it "just" change a few values that are hard to perceive visually?
And, when I decode it back to a `.png`, would loading it with Python's `PIL.Image` be good enough for comparison (in case you know by chance), since that is what I would know how to do?
|
|
|
Laserhosen
|
2025-01-31 10:59:04
|
Well if you find a new encoder bug... it could do anything or nothing. In the past it's been anywhere between "completely fails to decode" and "decodes with a weird rectangle in the corner". FWIW I've encoded thousands of pictures with zero issues, so you'd be incredibly unlucky.
|
|
2025-01-31 11:01:40
|
PIL should work fine for comparing the pixel values as long as it can read JXLs (or you convert them back to PNGs first).
|
|
|
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
|
|
Laserhosen
PIL should work fine for comparing the pixel values as long as it can read JXLs (or you convert them back to PNGs first).
|
|
2025-01-31 11:03:10
|
I'd need to convert back first.
--
Good to know; Those chances are fine for me.
I've looked through all of them using ImageGlass right now, and visually they all loaded correctly;
--
So far, thank you all c;
|
|
|
RaveSteel
|
2025-01-31 11:03:55
|
A PNG-to-JXL conversion will show the same image hash between both images, no need to convert the JXL back to PNG
|
|
2025-01-31 11:04:39
|
Simply use ImageMagick's `identify` to compare hashes: `/usr/bin/identify -format "%m %#\n" FILES`
|
|
|
monad
|
2025-01-31 11:08:02
|
PIL should work as well as IM, but neither consider color profiles as far as I know. ssimulacra2 would for example if you care to catch that.
|
|
|
RaveSteel
|
2025-01-31 11:08:31
|
Would colour profiles not stay the same if not explicitly stripped during conversion?
|
|
|
monad
|
2025-01-31 11:08:51
|
Bugs don't typically follow rules.
|
|
|
RaveSteel
|
2025-01-31 11:09:00
|
True
|
|
|
Laserhosen
Gimp and Krita both just use libjxl, so it shouldn't be any better or worse.
|
|
2025-01-31 11:10:20
|
Krita's implementation for JXL exports is more mature, GIMP for example does not allow exporting JXL with layers
|
|
|
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
Thank you for the replies so far c;
Okay;
When I export from Krita or Gimp in lossless, would that be more safe?
Or would it just be best to test each individual image?
> insane in the membrane.
I am insane.
But besides.
> If you're going for maximum compression, you might want to add --compress_boxes=1 --brotli_effort=11 to compress the metadata more.
I do try to go to maximum compression for now.
I do not need metadata, unless this refers to jxl needed data.
~~*currently looking into finding info about -E -I -g*~~ found it: `cjxl.exe -v -v -v -v --help`
|
|
2025-01-31 11:13:20
|
You actually don't even need to specify --help lol
|
|
|
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
|
|
monad
PIL should work as well as IM, but neither consider color profiles as far as I know. ssimulacra2 would for example if you care to catch that.
|
|
2025-01-31 11:13:41
|
Hmm, good to know.
-# Wondered if PIL did or didn't, but it's hard for me to find this info...
Good thing tho is, they all have been only sRGB / D65 grey.
They are scans that I work over anyway, so I do not need the profile to be 100% correct, but I do not want to throw value information out due to "accidental lossiness"
>> Okay, did do the conversion thing, and checked: they converted back to the original PNG information. <<
Thank you guys again for the help 🦦 💖
|
|
|
Demiurge
|
2025-02-01 04:41:40
|
I'm looking closer at the jpegli source and I'm even more confused why it was put in a separate repo. It depends on a lot of shared code from libjxl and it's properly a part of the same library as libjxl. Also the cjpegli/djpegli tools share a lot of code with cjxl/djxl. Isn't it a burden to have two separate, duplicate branches of the same library code and tool code? Since they share a lot of duplicate code, when fixes and improvements are made to one, they don't automatically apply to both anymore like they used to since they are separated now.
Wouldn't it make way more sense to merge them back together again, and instead put the cjxl/jpegli/djxl/djpegli code all together in a separate repo, so they can all benefit from the shared code, while at the same time making the library-only build much lighter?
|
|
2025-02-01 04:41:52
|
Honestly this seems like a serious issue.
|
|
2025-02-01 04:42:38
|
Having them separate I mean, with two branches of the same duplicated code in two separate repositories. That's really bad.
|
|
2025-02-01 04:45:02
|
jpegli shares a VERY substantial amount of code from libjxl that's redundantly copied over and having two separate branches of the same code maintained separately seems like a massive error needlessly increasing maintenance burden.
|
|
2025-02-01 04:46:49
|
Would it be better if I just opened an issue on the github bug tracker or something instead of lurking on discord?
|
|
|
jonnyawsom3
|
|
Laserhosen
2. Others might give a more detailed answer, but bit-rot is probably more damaging to JXL than PNG in general just because of its density. There has been discussion of an option for djxl for attempting to decode broken inputs, but it hasn't been implemented yet.
|
|
2025-02-01 07:31:05
|
It was actually implemented a while ago https://github.com/libjxl/libjxl/pull/4062
|
|
2025-02-01 07:31:52
|
The header info is essentially non-recoverable, but it will skip any corrupted (HF/LF) groups
|
|
|
Traneptora
|
|
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
I got a few questions about the libjxl .. well lib.
I'm currently working on making a backup of my drawing folder, and in there are quite a few scans.
Now they all are scanned in lossless (`.tif`), and currently are stored as `.png` (some RGB, some gray).
- My question revolves mainly around the following line from the repo:
> Note: This release is for evaluation purposes and may contain bugs
I assume this only applies to the newer features being implemented,
and less so to the general *encoding* / *decoding* in *lossless* / *lossy* quality, right?
Like it would be safe to use, for the task of lossless encoding, right?
- My second question would be a bit less on-topic, but still is fairly important.
How "vulnerable" is `.jxl` encoding? Like when the *body* and/or *header* is damaged, how well can parts of it still be recovered?
I sadly do not know how other formats like `.png, .jpg, .tif (LZMA and None)` themselves hold up regarding that,
but it would be interesting to know, if you guys know anything about that?
-# I ask this out of interest, less to skip the 1-2-3 principle, which I just can't fulfill financially.
-# I have yet to read into testing such things as redundancy and recoverability myself, but there's not enough time and I do not know how to apportion resources for that yet, since I want to do "realistic" and varied tests.
- And my last question is as follows:
Since I'm going to convert multiple images by command,
-# better said using python tkinter filedialog to select files and running the cjxl.exe a bunch of time with popen or the alike,
is the command correct for lossless encoding? :
`cjxl.exe sample.png sample_jxl.jxl -d 0.0 -e 10`
Thank you for your time
-# and also the Jxl implementation btw
|
|
2025-02-01 08:08:35
|
1. Lossless encoding for the most part is pretty mature. You can always decode back with the software and compare if you wish, just in case. If the software decodes it to the original now, it'll always be recoverable.
2. Not at all error-resilient, like most things. You'll want to back the file up, or use some sort of physical redundancy (e.g. RAID) to protect it - those will both be far more reliable than anything in the format.
3. Except for the `-e 10` part, yes. e10 is unnecessarily slow, you're looking at a massive massive waste of cpu cycles for fractions of a percent of bpp saved. Not worth it.
|
|
|
Quackdoc
|
2025-02-01 08:10:59
|
I *always* verify when lossless encoding with a separate tool; it's cheap to do, so there is no real reason not to.
|
|
|
Traneptora
|
2025-02-01 08:13:35
|
I wrote an image-checksum feature which dumps both images to `RGBA:-` with imagemagick and then feeds it into a hasher
|
|
2025-02-01 08:13:54
|
I use sha3-384-base64 as the checksum
|
|
2025-02-01 08:14:08
|
but there's probably something more useful that's noncryptographic
|
|
|
Quackdoc
|
2025-02-01 08:14:56
|
I just use `magick identify -format "%# "` and compare the outputs in bash lol
|
|
|
Traneptora
|
2025-02-01 08:15:20
|
what does `"%# "` do?
|
|
|
_wb_
|
2025-02-01 08:16:51
|
I would assume it prints a hash
|
|
|
Traneptora
|
2025-02-01 08:17:33
|
based on context yea but I was looking for more specifics
|
|
|
Quackdoc
|
|
Traneptora
what does `"%# "` do?
|
|
2025-02-01 08:18:04
|
prints magick's signature hash
|
|
|
RaveSteel
|
2025-02-01 08:19:18
|
It prints the image hash, ignoring metadata etc.
|
|
|
Quackdoc
|
2025-02-01 08:19:21
|
https://web.mit.edu/GRAPHICs/src/ImageMagick-6.0.6/www/api/magick_wand.html#magickgetimagesignature
|
|
2025-02-01 08:19:30
|
```
MagickGetImageSignature
MagickGetImageSignature() generates an SHA-256 message digest for the image pixel stream.
The format of the MagickGetImageSignature method is:
const char MagickGetImageSignature ( MagickWand *wand );
A description of each parameter follows:
```
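The same idea works without ImageMagick if your decoder can hand you raw pixel bytes; a stdlib Python sketch (the `pixel_signature` name is hypothetical, and unlike magick's signature this does no pixel-depth normalization):

```python
import hashlib

def pixel_signature(pixel_bytes: bytes) -> str:
    # SHA-256 over the decoded pixel stream only, so metadata
    # differences between files cannot affect the result.
    return hashlib.sha256(pixel_bytes).hexdigest()
```

Two files then compare equal when their decoded pixel streams hash identically, assuming the same dimensions, channel order, and bit depth on both sides.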
|
|
2025-02-01 08:19:51
|
it's not the fastest thing in the world, but it does the job
|
|
|
Traneptora
|
2025-02-01 08:20:07
|
I do something very similar
|
|
|
Quackdoc
|
2025-02-01 08:20:21
|
if I need faster I will setup a decoder and pipe to xxhash now or b3sum depending on system
|
|
2025-02-01 08:21:12
|
I guess I could just make a simple hash tool using image-rs now that I think about it if I need something faster
|
|
|
RaveSteel
|
2025-02-01 08:21:21
|
I feel that decoding instead of just calculating the hash will probably not be faster, I may be wrong though
|
|
|
Quackdoc
|
2025-02-01 08:21:32
|
you need to decode to generate the hash
|
|
2025-02-01 08:21:42
|
if you don't it's not meaningful
|
|
|
RaveSteel
|
2025-02-01 08:21:48
|
Yes, but it decodes "raw", if you know what I mean
|
|
|
Traneptora
|
2025-02-01 08:21:50
|
we're comparing pixel data so you have to decode to pixels first
|
|
2025-02-01 08:21:59
|
`magick convert input.png 'RGBA:-' | sha384-base64` is what I use
|
|
|
RaveSteel
|
2025-02-01 08:22:08
|
except if the decode were to bmp or something like that, which should be pretty instantaneous
|
|
|
Traneptora
|
2025-02-01 08:22:17
|
sha384-base64 runs sha384sum (or equivalent), encodes the hexdump to binary then decodes it to base64
|
|
2025-02-01 08:22:22
|
just to make the representation a little more compact
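That compaction works out nicely for SHA-384: the 48-byte digest is divisible by 3, so the base64 form is 64 characters with no padding, versus 96 hex characters. A stdlib Python equivalent of that pipeline (hypothetical helper name):

```python
import base64
import hashlib

def sha384_base64(data: bytes) -> str:
    # 48-byte SHA-384 digest -> 64 base64 chars, vs 96 hex chars.
    return base64.b64encode(hashlib.sha384(data).digest()).decode("ascii")
```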
|
|
|
Quackdoc
|
|
RaveSteel
except if the decode were to bmp or something like that, which should be pretty instantaneous
|
|
2025-02-01 08:22:22
|
it doesn't really matter, pipes are uber fast
|
|
|
RaveSteel
|
|
_wb_
|
2025-02-01 08:22:53
|
In general it's a quite tricky thing to even define what real lossless means. Same pixel data sure, though maybe in some cases it is fine to convert RGBA to RGB or RGB to Gray if the data allows it. But also colorspace info should be kept (though does it need to be the exact ICC profile or is an equivalent one also ok?). Then there is Exif and XMP, which I guess should just be kept as is. What about other kinds of metadata? And where exactly is the boundary between image data, metadata and format-specific stuff? E.g. is the palette of a PNG part of the data that has to be preserved?
|
|
|
Quackdoc
|
2025-02-01 08:22:56
|
~~unless powershell on windows~~
|
|
|
Traneptora
|
2025-02-01 08:23:14
|
in this case it only compares pixel data and not metadata associated with it
|
|
2025-02-01 08:23:34
|
you can define lossless if you're talking about a rectangle of integers
|
|
|
Quackdoc
|
|
_wb_
In general it's a quite tricky thing to even define what real lossless means. Same pixel data sure, though maybe in some cases it is fine to convert RGBA to RGB or RGB to Gray if the data allows it. But also colorspace info should be kept (though does it need to be the exact ICC profile or is an equivalent one also ok?). Then there is Exif and XMP, which I guess should just be kept as is. What about other kinds of metadata? And where exactly is the boundary between image data, metadata and format-specific stuff? E.g. is the palette of a PNG part of the data that has to be preserved?
|
|
2025-02-01 08:23:40
|
personally, I am fine as long as the pixel data matches, this is the only thing I *really* care about. Yeah keeping the metadata is nice, but for me, as long as the picture renders accurately, I can mess with metadata if needed
|
|
|
Traneptora
|
2025-02-01 08:23:42
|
which is, I believe, what most people mean when they say lossless
|
|
2025-02-01 08:24:04
|
the same rectangle of integers, if mistagged, is the same pixel data but a different image
|
|
|
_wb_
|
2025-02-01 08:24:26
|
If I give you a Rec2100PQ image and you store the data exactly but tag it as sRGB, I would say what you did is not lossless since it would ruin my image.
|
|
|
Quackdoc
|
2025-02-01 08:25:19
|
true, but I can fix that myself so IMO it's not really a major issue, ofc that is unless you are using some really wacky gamut/transfer
|
|
2025-02-01 08:25:47
|
I guess you could do a best effort render to sRGB for testing
|
|
|
Traneptora
|
|
_wb_
If I give you a Rec2100PQ image and you store the data exactly but tag it as sRGB, I would say what you did is not lossless since it would ruin my image.
|
|
2025-02-01 08:25:57
|
yea, that would be a bug in the encoder
|
|
2025-02-01 08:26:12
|
but yea, if you're trying to figure out whether, say, JXL mangled your image for the purpose of archival: if you can recreate the rectangle of integers bit-exactly, then that's usually good enough for most people, since the metadata tags can be fixed
|
|
|
_wb_
|
2025-02-01 08:26:24
|
You can fix it but if it's a subtle difference in colorspace you might not even know that the image was supposed to look different
|
|
|
Quackdoc
|
|
Quackdoc
I guess you could do a best effort render to sRGB for testing
|
|
2025-02-01 08:26:25
|
would be hard to validate for higher bitdepth images
|
|
2025-02-01 08:27:03
|
well, unless you handle all the transform stuff
|
|
|
jonnyawsom3
|
2025-02-01 08:27:21
|
By that logic, XYB JPEGs are lossless even if color management is disabled
|
|
|
Traneptora
|
2025-02-01 08:27:28
|
most people when they say "lossless" refer to the preservation of the rectangle of pixels. ideally the preservation of metadata happens as well
|
|
2025-02-01 08:27:38
|
but it's generally possible to recover that if that is destroyed
|
|
2025-02-01 08:27:41
|
unlike the pixel rectangle
|
|
|
Quackdoc
|
2025-02-01 08:29:15
|
I wonder if there is a good way to validate "crucial" metadata. I guess you would have to use some kind of framework that has a "decode to srgb" or "decode to linear" or whatever
|
|
|
jonnyawsom3
|
2025-02-01 08:29:15
|
I mean, I wouldn't call cjxl 'lossless' myself because it discards PNG text chunks unless explicitly XMP or EXIF, so I'd lose all my screenshot metadata from my 5K VRChat photos over the past few years.
I asked the relevant people about using XMP/EXIF instead, but they cited Discord stripping them as the reason not to
|
|
|
Quackdoc
|
2025-02-01 08:29:56
|
does jxl have a sort of "generic" data payload thing that stuff can be punted into?
|
|
|
jonnyawsom3
|
2025-02-01 08:31:05
|
An old idea was just storing it *as* an XMP tag, which djxl could re-insert into decoded PNGs
|
|
|
_wb_
|
2025-02-01 08:33:26
|
The three standardized metadata boxes in jxl are Exif, XMP, and JUMBF. But of course you could add your own boxes if you want, spec says unknown boxes should just be ignored by a decoder.
|
|
|
Demiurge
|
|
_wb_
In general it's a quite tricky thing to even define what real lossless means. Same pixel data sure, though maybe in some cases it is fine to convert RGBA to RGB or RGB to Gray if the data allows it. But also colorspace info should be kept (though does it need to be the exact ICC profile or is an equivalent one also ok?). Then there is Exif and XMP, which I guess should just be kept as is. What about other kinds of metadata? And where exactly is the boundary between image data, metadata and format-specific stuff? E.g. is the palette of a PNG part of the data that has to be preserved?
|
|
2025-02-01 08:35:38
|
There should be an option for simplifying/converting ICC data to a more compact lossy representation
|
|
2025-02-01 08:36:02
|
Since a exact copy of icc data is usually not necessary at all
|
|
2025-02-01 08:36:24
|
Or maybe it should be an option to preserve the exact icc profile and simplifying it should be the default
|
|
|
Quackdoc
|
2025-02-01 08:36:33
|
perhaps a recommendation could be made on how to handle metadata that doesn't map well, by doing it in a somewhat generic way?
|
|
|
Demiurge
|
|
I mean, I wouldn't call cjxl 'lossless' myself because it discards PNG text chunks unless explicitly XMP or EXIF, so I'd lose all my screenshot metadata from my 5K VRChat photos over the past few years.
I asked the relevant people about using XMP/EXIF instead, but they cited Discord stripping them as the reason not to
|
|
2025-02-01 08:38:08
|
Well obviously cjxl and other conversion tools should just convert tEXt to XMP
|
|
|
_wb_
|
|
Demiurge
Or maybe it should be an option to preserve the exact icc profile and simplifying it should be the default
|
|
2025-02-01 08:38:18
|
Currently it does simplify by default when doing lossy and it preserves when doing lossless.
|
|
|
Demiurge
|
|
_wb_
Currently it does simplify by default when doing lossy and it preserves when doing lossless.
|
|
2025-02-01 08:40:41
|
Yes, I know. There should be an option to opt into simplification during lossless. Or maybe it should just be the default with an option NOT to simplify, since lossless already discards some metadata during conversion
|
|
2025-02-01 08:42:36
|
Is there any theoretical advantage to preserving the exact icc profile instead of replacing it with an equivalent?
|
|
2025-02-01 08:43:22
|
Aside from possible bugs where it fails to recognize a profile is not equivalent
|
|
2025-02-01 08:58:42
|
Speaking of bugs in libjxl color, I wonder if it still has that horrible bug where it decodes images to 8 bit int linear by default...
|
|
|
jonnyawsom3
|
2025-02-01 09:35:57
|
I think it does
|
|
|
Demiurge
|
2025-02-01 09:40:32
|
It looks like it would be simple to solve by simplifying the logic in that file that decides on the output format. So it's more...logical.
|
|
2025-02-01 09:41:33
|
The way it's written, it's difficult to figure out what it actually does and how it actually makes the final decision, because there are several different places that can affect it
|
|
2025-02-01 09:43:16
|
That style of spaghetti code is kinda the norm throughout libjxl unfortunately
|
|
2025-02-01 09:44:14
|
At least it's all in one file though. I think.
|
|
2025-02-01 09:44:44
|
So it shouldn't be that difficult to change and make it simpler.
|
|
|
CrushedAsian255
|
2025-02-02 01:19:36
|
I think it would be helpful for there to be a standard “unknown metadata” box like “mbox” or something
|
|
|
RaveSteel
|
2025-02-02 01:23:47
|
I think there should be an option to copy any metadata regardless of type. Quite important for archival
|
|
2025-02-02 01:24:20
|
Also an option to copy the timestamps of the source file would be nice
|
|
|
CrushedAsian255
|
2025-02-02 01:25:08
|
Bit-exact PNG transcoding would be nice but probably more difficult. What could probably be done is bit-exact PNG header reconstruction (the IDAT chunks will be different, but all others are preserved)
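The container side of that is straightforward, since PNG chunks are just length/type/data/CRC records; a stdlib Python sketch of walking (and rebuilding) the non-IDAT chunks such a reconstruction would need to preserve (`list_chunks` and `chunk` are made-up names, and this is not what cjxl does today):

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def list_chunks(png: bytes):
    # Yield (type, data) for each chunk, verifying the CRC as we go.
    assert png[:8] == PNG_SIG, "not a PNG"
    pos = 8
    while pos < len(png):
        length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
        data = png[pos + 8:pos + 8 + length]
        crc, = struct.unpack(">I", png[pos + 8 + length:pos + 12 + length])
        assert crc == zlib.crc32(ctype + data) & 0xFFFFFFFF, "bad CRC"
        yield ctype.decode("ascii"), data
        pos += 12 + length

def chunk(ctype: bytes, data: bytes) -> bytes:
    # Build a chunk back up: length, type, data, CRC over type+data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))
```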
|
|
|
icymiguel420
|
2025-02-02 02:21:40
|
Is there an even more verbose build of jxlinfo? Dealing with an image extracted from a game and getting `Decoder Error`; I'd like to see which part is returning `JXL_DEC_ERROR`, but I guess that will require rebuilding the code (can do, but don't have VS installed rn)?
|
|
2025-02-02 02:22:47
|
If it's not against the rules I can upload it
|
|
|
monad
|
|
icymiguel420
If it's not against the rules I can upload it
|
|
2025-02-02 03:44:53
|
it is okay in general
|
|
|
icymiguel420
|
2025-02-02 03:46:42
|
Seems like an issue with the extra channel in the header, trying to figure it out (i used wsl instead of vs)
|
|
2025-02-02 03:46:48
|
but i think there are some other issues too
|
|
2025-02-02 03:55:44
|
```JXL_QUIET_RETURN_IF_ERROR(visitor->Enum(ExtraChannel::kAlpha, &type));``` this line in image_metadata.cc is problematic. Seems like they might be using an extra channel for something that's not alpha?
|
|
2025-02-02 04:40:35
|
thought i could simply skip over it, but no, i have to figure out how to parse it, because skipping over it causes it to read the frame header at the wrong place
|
|
|
jonnyawsom3
|
2025-02-02 06:56:04
|
Try using jxl-oxide with `-I --show-offsets --show-all`
|
|
|
Tirr
|
2025-02-02 10:31:59
|
`jxl-oxide -Iv --all-frames image.jxl`
|
|
|
icymiguel420
|
2025-02-02 12:58:08
|
```
2025-02-02T12:56:54.581731Z DEBUG jxl_threadpool: Initialized Rayon thread pool num_threads=12
2025-02-02T12:56:54.581929Z  WARN jxl_image: Unknown extension extension_idx=0
2025-02-02T12:56:54.582344Z  WARN jxl_image: Unknown extension extension_idx=1
2025-02-02T12:56:54.582686Z  WARN jxl_image: Unknown extension extension_idx=2
2025-02-02T12:56:54.583066Z  WARN jxl_image: Unknown extension extension_idx=4
2025-02-02T12:56:54.583529Z ERROR jxl_oxide: failed reading JPEG XL image: bitstream error: Enum(BlendMode) read invalid enum value of 5
```
second image I have
```
2025-02-02T12:55:28.160078Z DEBUG jxl_threadpool: Initialized Rayon thread pool num_threads=12
2025-02-02T12:55:28.160287Z  WARN jxl_image: Unknown extension extension_idx=0
2025-02-02T12:55:28.161242Z  WARN jxl_image: Unknown extension extension_idx=1
2025-02-02T12:55:28.161616Z  WARN jxl_image: Unknown extension extension_idx=2
2025-02-02T12:55:28.161981Z  WARN jxl_image: Unknown extension extension_idx=4
2025-02-02T12:55:28.162442Z ERROR jxl_oxide: failed reading JPEG XL image: F16() read NaN or Infinity
```
|
|
2025-02-02 01:01:59
|
jxlinfo says these are the unknown extensions if that matters
```
unknown box: type: "exif" size: 64
box: type: "pdfl" size: 24, contents size: 16
unknown box: type: "pdfl" size: 24
```
|
|
|
_wb_
|
2025-02-02 01:17:28
|
Which encoder produced that jxl?
|
|
|
icymiguel420
|
2025-02-02 02:22:38
|
no idea, it's ripped from a game. It's not truncated, because the length matches that in the header correctly and the next byte after that is the start of the next filename in the capsule (or whatever it's called) anyway.
I was able to extract working JXR files from the same capsule using the same method (checking for the JXR/JXL headers and extracting the bytes). It's just these JXL files that they have encoded weirdly
|
|
|
RaveSteel
|
2025-02-02 02:30:34
|
Can you upload the original file the JXL was derived from?
|
|
|
icymiguel420
|
|
RaveSteel
Can you upload the original file the JXL was derived from?
|
|
2025-02-02 02:32:45
|
|
|
|
RaveSteel
|
2025-02-02 02:33:46
|
Is this really an image?
|
|
|
jonnyawsom3
|
|
icymiguel420
no idea it's ripped from a game. it's not truncated because the length matches that in the header correctly and the next byte after that is the start of the next filename in the capsule (or whatever it's called) anyways.
i was able to extract working JXR files from the same capsule using the same method (checking for headers of JXR/JXL and extracting the bytes). it's just these JXL files that they have encoded weirdly
|
|
2025-02-02 02:33:54
|
Final Fantasy?
|
|
|
icymiguel420
|
2025-02-02 02:34:08
|
Gran Turismo
|
|
|
RaveSteel
Is this really an image?
|
|
2025-02-02 02:34:55
|
yes, I have working JXRs extracted from it
|
|
2025-02-02 02:35:09
|
they add up to 2048x1024 when combined together, just in chunks for some reason
|
|
|
jonnyawsom3
|
2025-02-02 02:35:31
|
How do you know it's a JXL and not just random bytes?
|
|
|
icymiguel420
|
2025-02-02 02:35:40
|
it does look like the thumbnail preview (the JXR).
|
|
2025-02-02 02:36:27
|
the JXL has a half valid header
|
|
2025-02-02 02:36:43
|
it shows the resolution, bit-depth and stuff like that with correct looking values
|
|
|
RaveSteel
|
2025-02-02 02:37:11
|
How did you extract the JXRs?
|
|
|
Tirr
|
2025-02-02 02:37:20
|
it has somewhat valid jxl container format (except having `exif` and `pdfl` box which is undefined), and `jxlc` box starts with `ff 0a`
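For anyone poking at files like this, the container is simple enough to walk by hand: each box is a 4-byte big-endian size (which includes the 8-byte header) followed by a 4-byte type. A minimal Python sketch (`walk_boxes` is a made-up name; extended and "to end of file" sizes are not handled):

```python
import struct

def walk_boxes(data: bytes):
    # Yield (type, payload) for each top-level box in the container.
    pos = 0
    while pos + 8 <= len(data):
        size, btype = struct.unpack(">I4s", data[pos:pos + 8])
        if size < 8:  # size 0 ("extends to EOF") and size 1 (extended) not handled
            break
        yield btype.decode("latin-1"), data[pos + 8:pos + size]
        pos += size
```

Against a well-formed file this should yield `JXL `, `ftyp`, then any metadata boxes and the `jxlc` codestream box.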
|
|
|
icymiguel420
|
|
RaveSteel
How did you extract the JXRs?
|
|
2025-02-02 02:38:37
|
|
|
|
RaveSteel
|
|
icymiguel420
|
2025-02-02 02:42:29
|
i do need to create a proper extraction script which preserves the filenames inside it too
|
|
|
Tirr
|
|
icymiguel420
Seems like an issue with the extra channel in the header, trying to figure it out (i used wsl instead of vs)
|
|
2025-02-02 02:48:22
|
this file has some repetitive patterns which don't look very jxl...
|
|
|
jonnyawsom3
|
2025-02-02 03:07:59
|
```box: type: "JXL " size: 12, contents size: 4
JPEG XL file format container (ISO/IEC 18181-2)
box: type: "ftyp" size: 20, contents size: 12
box: type: "exif" size: 64, contents size: 56
unknown box: type: "exif" size: 64
box: type: "pdfl" size: 24, contents size: 16
unknown box: type: "pdfl" size: 24
box: type: "jxlc" size: 608300, contents size: 608292
Decoder error
Animation length: 0.000 seconds (looping)
Error reading file: JXL1.jxl```
|
|
2025-02-02 03:08:08
|
Definitely *funky*
|
|
2025-02-02 03:09:16
|
ExifTool says it's `4096x2048`
|
|
|
icymiguel420
|
2025-02-02 03:15:22
|
i got jxlinfo to say that and a lot of other info like bit depth by forcing the extra channels count to be 0
|
|
2025-02-02 03:15:52
|
got invalid frame header or some error after that tho, but I thought it was because it would be skipping over the extra channels and reading at the wrong place?
|
|
2025-02-02 03:33:50
|
```
box: type: "JXL " size: 12, contents size: 4
JPEG XL file format container (ISO/IEC 18181-2)
box: type: "ftyp" size: 20, contents size: 12
box: type: "exif" size: 64, contents size: 56
unknown box: type: "exif" size: 64
box: type: "pdfl" size: 24, contents size: 16
unknown box: type: "pdfl" size: 24
box: type: "jxlc" size: 499275, contents size: 499267
JPEG XL image, 4096x2048, (possibly) lossless, 10-bit RGB
num_color_channels: 3
num_extra_channels: 0
have_preview: 1
Preview image: 68x51
have_animation: 0
Intrinsic dimensions: 4096x2048
Orientation: 2 (Flipped horizontally)
Color space: RGB, D65, sRGB primaries, sRGB transfer function, rendering intent: Relative
Decoder error
```
|
|
2025-02-02 03:34:03
|
so my guess is that I have to make a new parser for the extra channel info so it reads starting from the correct place
|
|
2025-02-02 04:22:44
|
Is there a PDF or something documenting the file structure? Can't seem to find anything
|
|
|
_wb_
|
2025-02-02 07:20:19
|
The header syntax of jxl is quite compact and avoids redundancy so any random sequence of bits has a chance of corresponding to something that looks like the beginning of a header
|
|
2025-02-02 07:23:37
|
It has a 68x51 preview frame and an unusual orientation — I would guess that it does not actually contain a jxl codestream but this is just some arbitrary bytes interpreted as jxl by making it start with 0xff0a
|
|
2025-02-02 07:24:02
|
Very strange though
|
|
|
jonnyawsom3
|
2025-02-02 07:25:29
|
It does seem oddly specific with the correct `JXL` and `ftyp` sizes, with common power of 2 dimensions. I'd be more impressed if they actually used JXL than if some random data made an almost-valid file though
|
|
|
_wb_
|
2025-02-02 07:28:11
|
I am not aware of any encoder writing `exif` boxes (the standard box name is `Exif`) and I wonder what `pdfl` could be
|
|
2025-02-02 07:29:45
|
Could it be something that was made based on a super old version of libjxl from before the bitstream was finalized?
|
|
2025-02-02 07:31:53
|
Very mysterious 🙂
|
|
|
jonnyawsom3
|
2025-02-02 07:39:46
|
A leaked test of JXL PDF support? xD
Whatever it is, it's very strange to be in Gran Turismo
|
|
|
icymiguel420
|
2025-02-02 08:00:00
|
There is also a frame header with the correct dimensions (at pos 26 (dec) after removing everything before ``FF 0A`` codestream), but everything else about it seems messed up.
|
|
|
CrushedAsian255
|
2025-02-02 11:17:00
|
So it is definitely meant to be a JXL, as it has a fully compliant 18181-2 container (other than the misnamed `exif` box); it would be weird for it to be anything other than a JXL
|
|
|
icymiguel420
|
2025-02-03 01:40:04
|
used the container with their own format i guess
|
|
2025-02-03 01:40:19
|
was hoping the issue would just be with the extra channel parsing. if it helps it's likely an HDR or high bit depth image
|
|
|
CrushedAsian255
|
2025-02-03 02:29:47
|
Maybe it’s a fork of JXL
|
|
2025-02-03 02:30:02
|
Like removed certain things to improve decode speed
|
|
|
HCrikki
|
2025-02-03 02:49:14
|
what does that file correspond to in that game?
|
|
|
icymiguel420
|
|
HCrikki
what does that file correspond to in that game?
|
|
2025-02-03 03:46:31
|
it's a texture on a mesh used in a photo mode environment
|
|
|
Meow
|
2025-02-04 01:49:07
|
Found that `cjxl -d 0.05` produces larger files than `-d 0` and even than the original PNG
|
|
2025-02-04 01:51:10
|
`-d 0.1` can already be larger than `-d 0`
|
|
|
jonnyawsom3
|
2025-02-04 02:40:25
|
Depends on the image. Adobe's lossy JXL DNGs are fairly often larger than lossless if the RAW is Bayer instead of linear
|
|
|
Meow
|
2025-02-04 03:03:48
|
Is that preventable?
|
|
|
jonnyawsom3
|
2025-02-04 03:28:20
|
If you lower the quality
|
|
|
CrushedAsian255
|
2025-02-05 04:02:21
|
It's because VarDCT isn't particularly efficient at storing almost-lossless images
|
|
2025-02-05 04:02:25
|
that's why Modular mode exists
|
|
2025-02-05 04:02:38
|
that's why lossless AVIF sucks so much
|
|
|
Meow
|
2025-02-05 04:25:43
|
Seems libjxl v0.11 sometimes compresses worse than v0.10 and before
|
|
|
monad
|
2025-02-05 08:59:12
|
in lossless? yes, it is natural consequence of performance tradeoffs, bug fixing, whatever. lossy? much more challenging to measure, but similarly unsurprising. in general, the encoder improves.
|
|
|
Demiurge
|
2025-02-06 12:52:53
|
Except when it doesn't
|
|
|
|
JendaLinda
|
2025-02-06 09:16:04
|
Lossless JPEG transcoding compression got worse after version 0.9.2. Version 0.10.0 gives worse results than 0.9.2, and it's that way in the current builds as well. Lossless JPEG transcoding seems to be slightly broken in the current version; -e 9 gives better results than -e 10. Version 0.9.2 still does better lossless JPEG transcoding compression than current builds.
|
|
|
jonnyawsom3
|
2025-02-06 09:39:38
|
I think we got it the first time, but makes sense. Effort 10 didn't exist, so I'm surprised it does anything different to effort 9
|
|
|
Meow
|
2025-02-06 09:49:15
|
Inflation of compression?
|
|
|
|
JendaLinda
|
2025-02-06 09:53:50
|
For lossless JPEG transcoding, 0.9.2 -e 9 produces smaller files than either -e 9 or -e 10 in the current version.
|
|
|
Meow
|
|
JendaLinda
For lossless JPEG transcoding, 0.9.2 -e 9 produces smaller files than either -e 9 or -e 10 in the current version.
|
|
2025-02-06 09:56:53
|
Pure `-e 10` or the so-called incantation `-e 10 -E 4 -I 100 -g 3`?
|
|
|
|
JendaLinda
|
2025-02-06 09:59:07
|
I was only trying the effort options alone.
|
|
2025-02-06 10:00:19
|
I will give it a try.
|
|
2025-02-06 10:21:32
|
The additional options are definitely helping, but they help 0.9.2 too.
|
|
2025-02-06 10:21:38
|
|
|
2025-02-06 10:25:08
|
And yes, all these jxl files do decode to the same pixels.
|
|
|
Quackdoc
|
2025-02-06 10:52:18
|
have any benchmarks been done on a size optimized libjxl?
> Ren'Py now has decent support for C++.
>
> Right now, my main issue is adding 4mb per platform. Can someone get the size of a compiled libjxl down significantly?
|
|
|
spider-mario
|
2025-02-06 11:11:08
|
if I change `CMAKE_BUILD_TYPE` from `Release` to `MinSizeRel`, it goes from 4.0MB (3.6MB stripped) to 2.9MB (2.4MB stripped)
|
|
2025-02-06 11:12:50
|
(upx can then compress those 2.4MB down to slightly less than 1MB, but not sure they’d want to use upx)
|
|
2025-02-06 11:13:06
|
(https://upx.github.io/)
|
|
|
Quackdoc
|
2025-02-06 11:13:49
|
is it possible to just compile decoder stuff?
|
|
|
spider-mario
|
2025-02-06 11:20:02
|
ah, right, we already do that, forgot about it
|
|
2025-02-06 11:20:04
|
libjxl_dec
|
|
2025-02-06 11:21:58
|
Release: 1.63MB, strips to 1.42MB (upxes to 570kB)
MinSizeRel: 1.27MB, strips to 1.01MB (upxes to 412kB)
|
|
|
Quackdoc
|
2025-02-06 12:05:23
|
interesting, I forgot libjxl_dec exists since arch doesn't package it. I wonder how hard it will be to get renpy's build system working on my PC then
|
|
|
spider-mario
|
2025-02-06 12:09:21
|
even just lossless recompression of JPEGs would probably more than make up for those 4MB anyway
|
|
|
Quackdoc
|
2025-02-06 12:10:39
|
does libjxl_dec also get made when compiling for static?
|
|
|
spider-mario
|
2025-02-06 12:13:47
|
ah, right, that’s what I wanted to look at (since the sizes I gave were with dynamic brotli)
|
|
|
Quackdoc
|
2025-02-06 12:14:20
|
I have a feeling that compiling static + LTO should be small enough
|
|
|
spider-mario
|
2025-02-06 12:14:55
|
Release: 2.8MB `.a`
MinSizeRel: 2.3MB `.a`
but it’s difficult to draw conclusions on the size difference of an executable that would link to it vs not
|
|
|
jonnyawsom3
|
2025-02-06 02:28:58
|
This never got merged either https://github.com/libjxl/libjxl/pull/2766
|
|
2025-02-06 02:29:55
|
Oh wait, I should've scrolled down a bit more :P
|
|
2025-02-06 02:38:16
|
There's a lot of old threads about binary size fluctuating
https://github.com/libjxl/libjxl/issues/3841
|
|
2025-02-06 02:38:24
|
Also reminded me of this https://forum.zdoom.org/viewtopic.php?style=5&p=1244398&sid=b853fcc65268fff4e7d13902db3a6451#p1244398
|
|
|
|
JendaLinda
|
2025-02-06 03:08:33
|
So, lossless jpeg transcoding in version 0.9.2 seems to be consistently better than in 0.12.0. I've tried different combinations of the options. Group size doesn't do anything. -I and -E improve compression in both 0.12.0 and 0.9.2, so 0.9.2 is still performing better.
|
|
|
Demiurge
|
2025-02-06 09:53:28
|
The size also depends on the C runtime, right? glibc tends to bloat the size of executables.
|
|
2025-02-06 09:54:05
|
And there's also the C++ runtime, the Clang one vs the GNU one
|
|
|
Quackdoc
|
2025-02-06 09:55:03
|
oh right, context of what I was talking about for people who don't want to try and find it https://github.com/renpy/renpy/issues/3865
|
|
2025-02-06 09:55:15
|
basically the current blocker for renpy support seems to be size increase
|
|
2025-02-06 09:55:48
|
I gave the three kinds of size optimization
```
* Use libjxl_dec.{a,so} instead of libjxl.{a,so}
* use CMAKE_BUILD_TYPE="MinSizeRel" (this can hurt performance, but libjxl is fast enough that this is likely not an issue except for ultra low end hardware; my N3050 laptop has no problems and neither does my LG G7 ThinQ)
* if building static using LTO
```
|
|
2025-02-06 09:56:02
|
if anyone else has other ideas post them there
|
|
|
JendaLinda
So, lossless jpeg transcoding in version 0.9.2 seems to be consistently better than in 0.12.0. I've tried different combinations of the options. Group size doesn't do anything. -I and -E improve compression in both 0.12.0 and 0.9.2, so 0.9.2 is still performing better.
|
|
2025-02-06 09:56:24
|
interesting
|
|
2025-02-06 09:56:33
|
I need to try that
|
|
|
Demiurge
|
|
JendaLinda
So, lossless jpeg transcoding in version 0.9.2 seems to be consistently better than in 0.12.0. I've tried different combinations of the options. Group size doesn't do anything. -I and -E improve compression in both 0.12.0 and 0.9.2, so 0.9.2 is still performing better.
|
|
2025-02-06 09:58:55
|
That's weird. How big is the difference?
|
|
|
|
JendaLinda
|
|
JendaLinda
|
|
2025-02-06 11:09:41
|
Look at the comparison. It's not a big difference, but it's there. I would at least expect improvements during development.
|
|
2025-02-06 11:14:36
|
Brotli effort didn't make a difference but that's probably because my jpegs are bare, so jbrd box is very small.
|
|
|
Demiurge
|
2025-02-07 12:49:16
|
Lossless recompression seems a lot harder to screw up. It's surprising that there would be regressions even there.
|
|
2025-02-07 12:49:35
|
Regressions in regular lossless or lossy mode are more understandable.
|
|
2025-02-07 12:50:58
|
Those are a lot more complex
|
|
|
CrushedAsian255
|
2025-02-07 06:12:10
|
It’s probably different optimisation in the Modular encoding
|
|
|
|
JendaLinda
|
2025-02-07 10:17:43
|
Additionally, -e 11 is equivalent to -e 10 in jpeg transcoding. And -e 10 is consistently worse than -e 9.
|
|
2025-02-07 10:21:15
|
I was also surprised that some options that are seemingly meant for Modular do apply to jpeg transcoding too.
|
|
|
jonnyawsom3
|
2025-02-07 11:06:13
|
Modular sub-bitstream for the DC encoding
|
|
|
Meow
|
2025-02-07 02:34:11
|
> Compressed to 3728.8 kB (6.069 bpp).
> 1920 x 2560, 0.012 MP/s [0.01, 0.01], , 1 reps, 6 threads.
> JPEG XL encoder v0.11.1 0.11.1 [AVX2,SSE4,SSE2]
> Read 1920x2560 image, 5529683 bytes, 227.8 MP/s
> Encoding [Modular, lossless, effort: 10]
It says using 6 threads, but `-e 10 -E 4 -I 100 -g 3` actually uses one thread only
|
|
|
jonnyawsom3
|
2025-02-07 02:37:40
|
Yeah, that's because chunked encoding is disabled at effort 10 for Patches, etc.
|
|
|
Meow
|
|
Yeah, that's because chunked encoding is disabled at effort 10 for Patches, etc.
|
|
2025-02-07 02:45:50
|
Thus `--num_threads=N` wouldn't work either for it, correct?
|
|
2025-02-07 02:46:47
|
But it seems I can let multiple workflows run at the same time
|
|
2025-02-07 02:49:21
|
Each would consume 100% of CPU, aka one thread each
|
|
2025-02-07 02:53:42
|
Genius
|
|
|
jonnyawsom3
|
2025-02-07 02:58:50
|
Yeah, the threading *might* improve with `--patches 0 --dots 0` but I think it still does full-frame heuristics
|
|
|
Meow
|
2025-02-07 05:09:05
|
Maybe it's not that smart: RAM becomes the new bottleneck
|
|
2025-02-07 05:10:46
|
Together they're now running much slower than I expected, as they've been consuming almost all of the RAM
|
|
|
jonnyawsom3
|
2025-02-07 05:29:10
|
Oh yeah, `-g 3` hugely increases memory usage, especially with chunked encoding disabled
|
|
|
Oleksii Matiash
|
2025-02-07 05:59:24
|
When I bought my current pc, I thought that ~~640KB~~ 64 GB would be enough for a long time, but then I discovered -e 10/11
|
|
|
RaveSteel
|
2025-02-07 06:10:00
|
we were promised 64/128GB DIMMs, but those exist only for the professional segment smh
|
|
|
|
JendaLinda
|
2025-02-07 06:23:19
|
I also have a few images where `cjxl -e 11` just gave up and terminated without a word after a minute.
|
|
|
Cacodemon345
|
|
Quackdoc
basically the current blocker for renpy support seems to be size increase
|
|
2025-02-07 07:19:11
|
Same thing that blocked off JPEG XL from GZDoom.
|
|
|
CrushedAsian255
|
|
Meow
Genius
|
|
2025-02-07 10:33:27
|
`ls *.png | parallel --bar cjxl {} {.}.jxl -d 0 -E 10 -g 3 -E 4 -I 100`
|
|
|
jonnyawsom3
|
2025-02-07 10:34:05
|
Double `-E`
|
|
2025-02-07 10:34:30
|
Also `-E 11` is max IIRC, not that even 4 should do anything since there's only 3 channels before Alpha
|
|
|
HCrikki
|
|
Quackdoc
basically the current blocker for renpy support seems to be size increase
|
|
2025-02-07 10:47:28
|
extra functionality like jxl support could very well be handled as post-install downloads (required, optional, or selectable during install) since they'd have their own update pace. decoding-only should be barely over 1 megabyte for games shipping it
|
|
|
Meow
|
2025-02-08 01:43:19
|
Two `cjxl -d 0 -e 10 -E 4 -I 100 -g 3` processes have done several huge images (over 100M pixels) overnight
|
|
2025-02-08 01:43:53
|
To finish more when I'm not at home
|
|
|
MSLP
|
|
Cacodemon345
Same thing that blocked off JPEG XL from GZDoom.
|
|
2025-02-08 02:39:42
|
Hmm... I wonder if, in time, effort-limited decoders (I mean decoders which can only decode files encoded up to a certain cjxl effort) with smaller binary sizes will appear.
|
|
|
|
JendaLinda
|
2025-02-08 10:06:28
|
This repository stopped updating.
https://artifacts.lucaversari.it/libjxl/libjxl/latest/
|
|
|
A homosapien
|
2025-02-08 10:14:47
|
It happens from time to time
|
|
|
|
veluca
|
|
JendaLinda
This repository stopped updating.
https://artifacts.lucaversari.it/libjxl/libjxl/latest/
|
|
2025-02-08 11:33:51
|
... Not the best timing, I doubt I'll be able to give it a look in the next week or two
|
|
2025-02-08 11:34:01
|
It might be because the CI is failing though (?)
|
|
|
jonnyawsom3
|
2025-02-08 01:02:42
|
Seems to be passing from what I can see
|
|
|
|
veluca
|
2025-02-08 04:37:19
|
... in which way did it stop updating? libjxl doesn't seem to have any commits after 30-01
|
|
|
Meow
|
2025-02-08 05:22:31
|
10500 x 10500, 0.006 MP/s
|
|
|
Demiurge
|
|
Oleksii Matiash
When I bought my current pc, I thought that ~~640KB~~ 64 GB would be enough for a long time, but then I discovered -e 10\11
|
|
2025-02-08 06:12:47
|
e10/e11 was added as a trap to get naive people to waste tons of time and electricity saving 2 bytes on a 2,000,000 byte image
|
|
2025-02-08 06:13:37
|
--allow-moron-settings
|
|
|
Meow
|
2025-02-08 06:45:50
|
I've already done 16 images of 10500 x 10500 pixels
|
|
|
jonnyawsom3
|
|
Meow
10500 x 10500, 0.006 MP/s
|
|
2025-02-08 07:02:26
|
I knew it was a good idea to go to 3 decimal places
|
|
|
Meow
|
2025-02-08 07:03:59
|
That's 6 KP/s, not that bad
|
|
|
Demiurge
|
2025-02-08 07:18:17
|
That's horrible
|
|
2025-02-08 07:18:36
|
That's like a full second to process a postage stamp
|
|
2025-02-08 07:19:19
|
No, not even
|
|
2025-02-08 07:19:36
|
It's even slower than that. Several times slower
|
|
2025-02-08 07:20:06
|
That stuff really needs to be locked behind --allow-moron-options
|
|
|
Meow
|
2025-02-09 04:02:42
|
10500 x 10500, 0.004 MP/s [0.00, 0.00]
|
|
2025-02-09 04:03:44
|
What do these 0.00 and 0.00 mean?
|
|
|
Tirr
|
2025-02-09 04:06:11
|
I guess it's min/max throughput but truncated
|
|
|
Demiurge
|
2025-02-09 05:22:05
|
It's glitchy and probably meaningless
|
|
2025-02-09 05:22:25
|
Last time I read the source code it was pretty bad and didn't actually calculate anything
|
|
2025-02-09 05:22:32
|
Unless they fixed it
|
|
2025-02-09 05:23:09
|
The quality of the cjxl/djxl tools is much worse than the library code. Another reason I wish they were separate
|
|
|
A homosapien
|
|
Demiurge
It's glitchy and probably meaningless
|
|
2025-02-09 05:28:42
|
No it's not: when used with `--num_reps x`, it shows the lowest and highest MP/s within x repetitions
|
|
2025-02-09 05:31:40
|
In terms of bugginess, benchmark_xl takes the cake, I personally consider it borderline unusable
|
|
|
jonnyawsom3
|
2025-02-09 12:48:41
|
Most functions simply don't work for me in benchmark_xl
|
|
|
CrushedAsian255
|
2025-02-09 01:00:06
|
is it even being actively maintained?
|
|
|
Meow
|
2025-02-09 04:04:50
|
For normal-sized images running multiple `cjxl -d 0 -e 10 -E 4 -I 100 -g 3` processes is fine
|
|
2025-02-09 04:05:34
|
Memory usage won't skyrocket
|
|
|
Traneptora
|
|
Meow
Found that `cjxl -d 0.05` produces larger than `-d 0` and even the original PNG
|
|
2025-02-10 07:55:05
|
modular outperforms VarDCT for very low distances
|
|
2025-02-10 07:55:10
|
try `-d 0.05 -m 1`
|
|
|
Meow
|
2025-02-11 09:54:20
|
I'll see later today if the dozens of gigantic images I'm running through `cjxl -d 0 -e 10 -E 4 -I 100 -g 3` are finally done
|
|
2025-02-11 09:55:29
|
The process was re-initiated about 42 hours ago
|
|
2025-02-11 09:56:59
|
With the different approach this time, the speed is now faster than 10 KP/s
|
|
|
RaveSteel
|
2025-02-11 10:39:07
|
Post savings when finished
|
|
|
Meow
|
|
RaveSteel
Post savings when finished
|
|
2025-02-11 12:06:36
|
What kind of layouts do you want to see?
|
|
|
RaveSteel
|
2025-02-11 12:19:53
|
Just a before and after
|
|
|
Meow
|
2025-02-11 02:19:19
|
Seems it needs one more day
|
|
|
CrushedAsian255
|
|
Meow
I'll see if the dozens of my gigantic images by `cjxl -d 0 -e 10 -E 4 -I 100 -g 3` are finally done later today
|
|
2025-02-11 02:19:44
|
These settings?
|
|
2025-02-11 02:19:52
|
What KP/s are you getting?
|
|
|
Meow
|
2025-02-11 02:20:46
|
It's now processing some stubborn ones and falling below 10 KP/s
|
|
|
CrushedAsian255
|
2025-02-11 02:21:15
|
Since e10+ is single-threaded, I could probably experiment with running long cjxl sessions on my server. How much RAM does it use?
|
|
2025-02-11 02:21:33
|
It would be nice for cjxl to output progress info for longer encodes
|
|
2025-02-11 02:21:59
|
Like: processing group 4/160: testing MA tree 4/32
|
|
|
Meow
|
2025-02-11 02:23:19
|
Mine has 32 GB and the memory usage is at Level 4 (highest)
|
|
|
CrushedAsian255
|
2025-02-11 02:23:51
|
I have 16 GB of RAM on my system, around 10 GB free.
|
|
|
Meow
|
2025-02-11 02:24:14
|
I'm running two cjxl processes
|
|
|
CrushedAsian255
|
2025-02-11 02:24:33
|
We should all set up some distributed cjxl server.
|
|
2025-02-11 02:25:06
|
Like you register your device, and when people send in image transcode requests it would calculate approximate memory usage and allocate each job to the best system for it
|
|
|
Meow
|
2025-02-11 02:25:07
|
CPU is just slightly warmer
|
|
|
CrushedAsian255
|
2025-02-11 02:26:05
|
Having encodes run on your device earns you points and those points can then be used on “buying” encodes.
|
|
2025-02-11 02:27:11
|
Is there a way to calculate the max bounds of cjxl memory usage given a certain parameter set and image size?
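I don't know of a published upper-bound formula, but a floor is easy: the encoder at least has to hold the decoded frame as float planes. A back-of-the-envelope sketch (my own helper, not a libjxl formula; observed `-e 10` usage is several times this):

```python
def min_encode_memory_bytes(width: int, height: int,
                            channels: int = 4,
                            bytes_per_sample: int = 4) -> int:
    """Lower bound: one float32 plane per channel for the whole frame.
    Actual cjxl -e 10 usage is observed to be far higher (extra copies,
    context modeling, MA tree search)."""
    return width * height * channels * bytes_per_sample

# A 10500 x 10500 RGBA image needs at least ~1.8 GB of float planes:
gb = min_encode_memory_bytes(10500, 10500) / 1e9
print(f"{gb:.2f} GB")  # 1.76 GB
```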
|
|
|
Meow
|
2025-02-11 02:28:56
|
I'm not even sure about how much it really uses now as macOS enables swapping for it
|
|
|
CrushedAsian255
|
2025-02-11 02:29:35
|
Does swapping hurt performance for it too much or is it usually only reading parts of the memory at a time
|
|
|
Meow
|
2025-02-11 02:30:15
|
Each uses over 20 GB now
|
|
2025-02-11 02:31:07
|
The system isn't significantly slowed down however
|
|
|
CrushedAsian255
|
2025-02-11 02:31:22
|
Probably not great for your ssd though 🤷♂️
|
|
|
Meow
|
2025-02-11 02:32:19
|
When I checked last month the lifespan was at 96%
|
|
|
jonnyawsom3
|
2025-02-11 03:40:43
|
Found the old 'first test' of chunked encoding that Veluca posted https://discord.com/channels/794206087879852103/804324493420920833/1204090760148942910 just for context as to what e10 is sacrificing
|
|
|
CrushedAsian255
Does swapping hurt performance for it too much or is it usually only reading parts of the memory at a time
|
|
2025-02-11 04:47:10
|
It *should* cause massive slowdowns, but I don't think anyone has specifically compared adding more RAM to a system. Here's a throwback to a 300GB JXL encode though :P https://discord.com/channels/794206087879852103/794206170445119489/1204734729522515999
|
|
|
Meow
|
2025-02-12 11:08:45
|
The slowest one it recorded
> `10500 x 10500, 0.002 MP/s [0.00, 0.00]`
|
|
2025-02-12 11:09:11
|
aka 2 KP/s
|
|
2025-02-12 11:11:01
|
The whole process finished 6 hours ago
|
|
2025-02-12 11:19:29
|
I'll test -e 7 (default) again
|
|
|
jonnyawsom3
|
2025-02-13 05:49:17
|
I wonder what kind of impact using the fixed MA tree on higher effort levels would have
|
|
2025-02-15 09:37:57
|
I was about to update the encoding effort docs again after finding this, but then I realised... Is that meant to be `>` instead of `>=`?
<https://github.com/libjxl/libjxl/blob/f73fc6f045268e2d12c17490ac490e0721116657/lib/jxl/enc_ac_strategy.cc#L1128>
|
|
2025-02-15 09:41:32
|
Seems it was changed back in v0.5 but the comment wasn't updated <https://github.com/libjxl/libjxl/commit/a8740af7f889292c606ba143e5996247b9f4765f#diff-4bcf87d9b41e940d20a14b016f4f8c7316cfbb9a2bf43142ebbeb0e84a1526dfL1365>
|
|
|
Traneptora
|
|
I was about to update the encoding effort docs again after finding this, but then I realised... Is that meant to be `>` instead of `>=`?
<https://github.com/libjxl/libjxl/blob/f73fc6f045268e2d12c17490ac490e0721116657/lib/jxl/enc_ac_strategy.cc#L1128>
|
|
2025-02-17 06:16:49
|
perhaps. it used to be the case that effort=4 would allow for custom varblocks
|
|
2025-02-17 06:18:12
|
effort=4
|
|
2025-02-17 06:18:20
|
effort=5
|
|
2025-02-17 06:18:47
|
appears that's enabled on e=5 now
|
|
|
Meow
|
2025-02-18 04:08:31
|
So is fjxl now simply `cjxl -e 1`? I can't find the exact answer about that
|
|
|
jonnyawsom3
|
2025-02-18 04:12:48
|
Pretty much, yeah
|
|
|
Meow
|
2025-02-18 04:13:35
|
Do people really use -e 1?
|
|
|
RaveSteel
|
2025-02-18 04:14:38
|
Can be useful for large images
|
|
|
Meow
|
2025-02-18 04:16:28
|
I found `-e 10 -E 4 -I 100 -g 3` very useful for gigantic images
|
|
2025-02-18 04:19:01
|
Yes new idea: how fast is -e 1 for tiny images
|
|
|
RaveSteel
|
|
Meow
I found `-e 10 -E 4 -I 100 -g 3` very useful for gigantic images
|
|
2025-02-18 04:21:19
|
It will lead to better compression, yes, but also to vastly longer waits before an image opens, due to higher decode times
|
|
|
jonnyawsom3
|
2025-02-18 04:21:50
|
Extreme memory usage and many orders of magnitude longer encode times, along with rather slow decode times, usually 2x that of PNG
|
|
|
RaveSteel
|
2025-02-18 04:22:07
|
Exactly
|
|
|
jonnyawsom3
|
2025-02-18 04:22:36
|
e 1 uses effectively the same memory as the raw image plus a dozen MB or so and is 2x **faster** than PNG decoding
|
|
|
Meow
I found `-e 10 -E 4 -I 100 -g 3` very useful for gigantic images
|
|
2025-02-18 04:41:53
|
For some context, I did some very rough tests and e1 was 10% larger than max Oxipng settings and 40,000x faster at 6ms per 1080p image and 26ms for 4K
|
|
2025-02-18 04:42:24
|
e2 has the same decode speed and slightly better size, but is 10x slower
|
|
|
Meow
|
|
For some context, I did some very rough tests and e1 was 10% larger than max Oxipng settings and 40,000x faster at 6ms per 1080p image and 26ms for 4K
|
|
2025-02-18 05:10:10
|
Is that OxiPNG with Zopfli?
|
|
|
jonnyawsom3
|
2025-02-18 05:17:47
|
Actually, I don't think it was... Please hold for a 2,000,000x speed difference :P
|
|
|
Meow
Is that OxiPNG with Zopfli?
|
|
2025-02-18 06:21:56
|
```wintime -- oxipng -o max --fast -Z -vv Oxipng.png
2921960 bytes (20.82% smaller): Oxipng.png
PeakWorkingSetSize: 121 MiB
PeakPagefileUsage: 184.5 MiB
Wall time: 0 days, 00:00:58.028 (58.03 seconds)
User time: 0 days, 00:00:00.484 (0.48 seconds)
Kernel time: 0 days, 00:01:00.437 (60.44 seconds)```
1000 reps to warm the cache and make measuring easier
```wintime -- cjxlClang --disable_output -d 0 Oxipng.png -e 1 --num_reps 1000
JPEG XL encoder v0.12.0 [_AVX2_,SSE4,SSE2]
Encoding [Modular, lossless, effort: 1]
Compressed to 2826.1 kB (10.903 bpp).
1920 x 1080, median: 282.788 MP/s [118.54, 309.26] (stdev 67.773), , 1000 reps, 16 threads.
PeakWorkingSetSize: 32.35 MiB
PeakPagefileUsage: 30.28 MiB
Wall time: 0 days, 00:00:07.899 (7.90 seconds)
User time: 0 days, 00:00:01.171 (1.17 seconds)
Kernel time: 0 days, 00:01:08.828 (68.83 seconds)```
8 ms for 1080p compared to 60,000 ms for Oxipng: over 7,000x faster with 80% less memory
```
Original: 3.51 MB (3,690,257 bytes)
Oxipng Zopfli: 2.78 MB (2,921,960 bytes)
Effort 1 JXL: 2.69 MB (2,826,103 bytes)
```
And in this instance, effort 1 is actually 3% smaller too
|
|
|
Actually, I don't think it was... Please hold for a 2,000,000x speed difference :P
|
|
2025-02-18 06:24:43
|
Depends on the image, since apparently I *did* use Zopfli, and cjxl was 60 MP/s faster than this time (ignore Brotli)
|
|
|
Meow
|
2025-02-19 02:10:09
|
Does libjxl still offer Jpegli now?
|
|
2025-02-19 02:10:44
|
I'm editing the libjxl part on Wikipedia
|
|
|
A homosapien
|
2025-02-19 02:22:44
|
I'm pretty sure jpegli is still included with the latest nightly builds
|
|
2025-02-19 02:23:19
|
Are you talking about jpegli being in a separate repo?
|
|
|
Meow
|
2025-02-19 02:25:48
|
Then it should say both
|
|
2025-02-19 02:44:57
|
Is https://github.com/libjxl/libjxl/tree/main/tools/fast_lossless different from `fjxl`?
|
|
|
```wintime -- oxipng -o max --fast -Z -vv Oxipng.png
2921960 bytes (20.82% smaller): Oxipng.png
PeakWorkingSetSize: 121 MiB
PeakPagefileUsage: 184.5 MiB
Wall time: 0 days, 00:00:58.028 (58.03 seconds)
User time: 0 days, 00:00:00.484 (0.48 seconds)
Kernel time: 0 days, 00:01:00.437 (60.44 seconds)```
1000 reps to warm the cache and make measuring easier
```wintime -- cjxlClang --disable_output -d 0 Oxipng.png -e 1 --num_reps 1000
JPEG XL encoder v0.12.0 [_AVX2_,SSE4,SSE2]
Encoding [Modular, lossless, effort: 1]
Compressed to 2826.1 kB (10.903 bpp).
1920 x 1080, median: 282.788 MP/s [118.54, 309.26] (stdev 67.773), , 1000 reps, 16 threads.
PeakWorkingSetSize: 32.35 MiB
PeakPagefileUsage: 30.28 MiB
Wall time: 0 days, 00:00:07.899 (7.90 seconds)
User time: 0 days, 00:00:01.171 (1.17 seconds)
Kernel time: 0 days, 00:01:08.828 (68.83 seconds)```
8 ms for 1080p compared to 60,000 ms for Oxipng: over 7,000x faster with 80% less memory
```
Original: 3.51 MB (3,690,257 bytes)
Oxipng Zopfli: 2.78 MB (2,921,960 bytes)
Effort 1 JXL: 2.69 MB (2,826,103 bytes)
```
And in this instance, effort 1 is actually 3% smaller too
|
|
2025-02-19 05:27:59
|
Maybe comparing to `oxipng -o 0` would be fairer?
|
|
2025-02-19 06:45:35
|
xnconvert can select "effort 0" and I thought it would be something like cjxl -e 0 (?), but it's actually -e 7
|
|
|
CrushedAsian255
|
2025-02-19 09:31:11
|
Probably just means “default effort”
|
|
|
Meow
|
2025-02-19 09:54:58
|
Good that jxl-x64-windows-static.zip allows me to use `cjpegli` on Windows successfully
|
|
2025-02-19 10:18:02
|
Also nice to learn `wintime` on Windows
|
|
2025-02-19 10:31:53
|
Counting width * height / pixels per second isn't a pleasant experience with libjxl
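A tiny helper takes the pain out of that arithmetic; this is just the definition behind the MP/s figure cjxl prints (the helper names are mine):

```python
def megapixels_per_second(width: int, height: int, seconds: float) -> float:
    """Throughput in MP/s, the same figure cjxl reports."""
    return width * height / seconds / 1e6

def eta_hours(width: int, height: int, mps: float) -> float:
    """Estimated encode time in hours at a given MP/s."""
    return width * height / (mps * 1e6) / 3600

# A 10500 x 10500 image at 0.002 MP/s works out to over 15 hours:
print(round(eta_hours(10500, 10500, 0.002), 1))  # 15.3
```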
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
Meow
Is https://github.com/libjxl/libjxl/tree/main/tools/fast_lossless different from `fjxl`?
|
|
2025-02-19 12:49:28
|
anyway, `fjxl` was merged into `cjxl -e 1`
|
|
|
Meow
|
2025-02-19 03:50:19
|
Looks good comparing `cjxl -d 0 -e 1` with `oxipng -o 0 -a`
```byte type sec
67,283,398 original PNG
57,563,195 oxipng -o 0 -a 8.433
53,017,186 cjxl -d 0 -e 1 1.799
byte type sec
44,123,818 original PNG
38,332,388 oxipng -o 0 -a 13.910
29,846,465 cjxl -d 0 -e 1 2.157```
|
|
2025-02-19 03:50:56
|
The same image with/without the background
|
|
|
jonnyawsom3
|
2025-02-19 03:51:06
|
Using alpha optimization?
|
|
|
Meow
|
2025-02-19 03:51:33
|
The second image has alpha
|
|
|
jonnyawsom3
|
2025-02-19 03:52:07
|
`--keep_invisible 0` is the cjxl equivalent
|
|
2025-02-19 03:52:32
|
May compress slightly better
|
|
|
Meow
|
|
`--keep_invisible 0` is the cjxl equivalent
|
|
2025-02-19 03:55:20
|
As `oxipng -a` didn't make any difference, `--keep_invisible 0` didn't either
|
|
2025-02-19 03:55:44
|
The same sizes as well as very similar speed
|
|
|
jonnyawsom3
|
2025-02-19 03:55:58
|
Ah right
|
|
|
Meow
|
2025-02-19 03:57:22
|
Yeah I did check ssimulacra2 after I used `oxipng -a` for my personally owned artworks
|
|
2025-02-19 03:57:29
|
All 100
|
|
2025-02-19 04:01:57
|
That post above was just a quick test of the minimum efforts of `cjxl` and `oxipng`. The `-e 10` spell can take almost half a day for the same two images
|
|
2025-02-19 04:05:11
|
```-e 10 -E 4 -I 100 -g 3
29,236,724
17,997,945```
Just for the reference
|
|
|
_wb_
|
2025-02-21 04:31:16
|
nice! what aspect are you most interested in?
|
|
|
Quackdoc
|
2025-02-21 09:41:12
|
perhaps issues detailing everything that still needs to be done would be warranted? Then one could pick an issue and express interest in working on it, to avoid overlapping with existing efforts
|
|
|
jonnyawsom3
|
2025-02-22 02:08:21
|
That was quite the speed hit https://github.com/libjxl/libjxl/pull/471
|
|
2025-02-22 02:09:26
|
When I'm home I might do some resampling benchmarks. I think it could trigger at distance 15 instead of 20 for equal quality, but the encode speed gets hurt a lot by that
|
|
2025-02-27 09:29:09
|
Made a fork and a start on updating the effort docs (again). This time looking at code instead of just comments, but my head hurts just from looking at this <https://github.com/search?q=repo%3Alibjxl%2Flibjxl+%22if+%28cparams.speed_tier+%22&type=code>
So any help digging out the decisions would be appreciated (I messed up the first commit name too :P) <https://github.com/jonnyawsom3/enc-effort>
|
|
2025-02-27 09:45:19
|
I'm realising effort 5 does a *lot* more than effort 4... Maybe we should spread things out more if some aren't a big speed hit
|
|
|
0xC0000054
|
2025-03-01 11:58:54
|
Does libjxl guarantee that the alpha channel has the same bit depth as the color channel(s)? I.e. when an alpha channel is present, `basicInfo.alpha_bits/alpha_exponent_bits` is the same as `basicInfo.bits_per_sample/exponent_bits_per_sample`.
|
|
2025-03-02 12:01:30
|
The libjxl GIMP plugin and Krita appear to be making that assumption, along with my Paint.NET plugin. AVIF allows the color and alpha images to use different bit depths, but I was not sure if that is also true for libjxl/JPEG XL.
|
|
|
_wb_
|
2025-03-02 08:38:00
|
The bit depth can be different, but when you decode to uint8 or uint16 RGBA, it will always produce 8/16 bit buffers regardless of what the actual bit depth is
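So e.g. a 10-bit sample decoded into a uint16 buffer gets rescaled to the full output range. A sketch of that rescaling (my own illustration of the usual full-range mapping, not libjxl code):

```python
def rescale_sample(value: int, src_bits: int, dst_bits: int) -> int:
    """Map a sample from src_bits to dst_bits, preserving full range
    (max in -> max out), i.e. v * (2^dst - 1) / (2^src - 1), rounded."""
    src_max = (1 << src_bits) - 1
    dst_max = (1 << dst_bits) - 1
    return round(value * dst_max / src_max)

print(rescale_sample(1023, 10, 16))  # 65535: 10-bit white -> 16-bit white
print(rescale_sample(512, 10, 16))   # mid-grey stays mid-grey
```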
|
|
|
Melirius
|
2025-03-04 12:43:12
|
Hi, everyone! Does anybody know why the `total_pixels` value is overestimated in the `ModularEncode` function if its initial value is 0? I tried to figure it out, but to no avail
|
|
|
Traneptora
|
|
Melirius
Hi, tutti! Does anybody know why `total_pixels` value is overestimated in `ModularEncode` function if its initial value is 0? I tried to figure it out, but to no avail
|
|
2025-03-04 12:53:49
|
can you provide line numbers so I can know where to look? ty
|
|
|
monad
|
2025-03-04 12:54:26
|
talking about here: <https://github.com/libjxl/libjxl/pull/4135/commits/72b2299bae2f42f6191b21ea771e29db7b915e66>
|
|
2025-03-04 12:55:52
|
i guess the claim is that elsewhere it counts differently?
|
|
|
Melirius
|
|
monad
talking about here: <https://github.com/libjxl/libjxl/pull/4135/commits/72b2299bae2f42f6191b21ea771e29db7b915e66>
|
|
2025-03-04 12:56:48
|
Yeah, exactly
|
|
|
Traneptora
can you provide line numbers so I can know where to look? ty
|
|
2025-03-04 12:57:55
|
The link is here, and my initial thoughts are in the PR description
|
|
|
monad
|
2025-03-04 01:00:34
|
at a glance it seems correct to me, because tree_samples iterates all channels
|
|
|
Melirius
|
|
monad
at a glance it seems correct to me, because tree_samples iterates all channels
|
|
2025-03-04 01:01:16
|
Yes, but it increases `total_pixels` by itself!
|
|
|
monad
at a glance it seems correct to me, because tree_samples iterates all channels
|
|
2025-03-04 01:03:02
|
https://github.com/libjxl/libjxl/blob/main/lib/jxl/modular/encoding/enc_encoding.cc#L163 and so on
|
|
|
Traneptora
|
2025-03-04 01:03:18
|
not sure it makes sense to call `std::max()` for this
|
|
2025-03-04 01:03:36
|
`if (!*total_pixels) *total_pixels = 1;` or something
|
|
2025-03-04 01:03:41
|
seems cleaner
|
|
2025-03-04 01:04:04
|
maybe the compiler will optimize it out tho
|
|
|
Melirius
|
2025-03-04 01:04:32
|
So here is what we end up with: if initially `total_pixels = 0`, then after the function `total_pixels = 2 * (number of pixels in the image)`, incremented both by this part and by `GatherTreeData`
|
|
2025-03-04 01:05:02
|
In any other case we end up with `total_pixels += (number of pixels in the image)`, incremented by `GatherTreeData` only
|
|
2025-03-04 01:13:41
|
And I don't get if it is intentional or a bug 🙂
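The bookkeeping described above can be written as a toy model (just the arithmetic of the two code paths as I understand them, not libjxl code):

```python
def total_pixels_after(initial: int, image_pixels: int) -> int:
    """Toy model: GatherTreeData always adds the image's pixel count,
    and when the counter starts at 0, the tree_samples path adds it
    a second time (the suspected double count)."""
    total = initial
    if initial == 0:
        total += image_pixels  # tree_samples iterating all channels
    total += image_pixels      # GatherTreeData
    return total

print(total_pixels_after(0, 1_000_000))  # 2000000: double-counted
print(total_pixels_after(1, 1_000_000))  # 1000001: counted once
```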
|
|
|
_wb_
|
2025-03-04 01:14:50
|
looks like that number only really matters for this: https://github.com/libjxl/libjxl/blob/main/lib/jxl/modular/encoding/enc_encoding.cc#L318 unless I missed something
|
|
|
Melirius
|
|
_wb_
looks like that number only really matters for this: https://github.com/libjxl/libjxl/blob/main/lib/jxl/modular/encoding/enc_encoding.cc#L318 unless I missed something
|
|
2025-03-04 01:15:37
|
Yeah, I've come to the same conclusion
|
|
|
_wb_
|
2025-03-04 01:17:21
|
it scales the threshold needed to decide whether adding a node to the tree is worth it; if fewer pixels were sampled then it more or less assumes the cost saving in the sample will be representative (but not quite)
|
|
|
Melirius
|
|
_wb_
looks like that number only really matters for this: https://github.com/libjxl/libjxl/blob/main/lib/jxl/modular/encoding/enc_encoding.cc#L318 unless I missed something
|
|
2025-03-04 01:18:06
|
Ah, also in the `PredefinedTree` - zero pb was fixed in https://github.com/libjxl/libjxl/pull/4134
|
|
|
_wb_
|
2025-03-04 01:19:04
|
if the total_pixels is wrong by a factor of two then that more or less boils down to `options.splitting_heuristics_node_threshold` being different
|
|
|
Traneptora
|
2025-03-04 01:19:51
|
`git diff --trans`
|
|
|
Melirius
|
|
_wb_
it scales the threshold needed to decide whether adding a node to the tree is worth it; if fewer pixels were sampled then it more or less assumes the cost saving in the sample will be representative (but not quite)
|
|
2025-03-04 01:21:14
|
So overestimation increases the threshold to accept a node, right?
|
|
|
_wb_
|
2025-03-04 01:22:57
|
decreases it, since the threshold is basically multiplied by nb_samples / total_pixels
|
|
2025-03-04 01:24:41
|
the base threshold is set here: https://github.com/libjxl/libjxl/blob/main/lib/jxl/enc_modular.cc#L519
|
|
2025-03-04 01:25:00
|
```
cparams_.options.splitting_heuristics_node_threshold =
82 + 14 * static_cast<int>(cparams_.speed_tier);
```
|
|
2025-03-04 01:25:45
|
which means higher threshold at faster settings (lower effort)
|
|
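To make the direction concrete, here is the quoted formula as a standalone function. The tier integers below are placeholders, not libjxl's actual `SpeedTier` enum values; the point from the discussion is only that faster (lower-effort) tiers cast to larger integers:

```cpp
#include <cassert>

// The base threshold formula quoted above from enc_modular.cc:
// 82 + 14 * static_cast<int>(speed_tier). Faster tiers cast to larger
// ints, so faster settings demand a larger estimated bit saving before
// a tree node is added.
int NodeThreshold(int speed_tier) { return 82 + 14 * speed_tier; }
```

So a hypothetical slowest tier of 0 would get a threshold of 82 bits, while a hypothetical fastest tier of 7 would get 180.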
|
Melirius
|
|
_wb_
decreases it, since the threshold is basically multiplied by nb_samples / total_pixels
|
|
2025-03-04 01:25:46
|
Hmm, I think I'm missing something: if fewer pixels are sampled, then any resulting decision should be more informed - by a larger threshold, not the other way around
|
|
|
_wb_
|
2025-03-04 01:30:53
|
the threshold is expressed in number of bits estimated to be saved by adding the node, in the sampled data. But if the amount of sampled data is smaller, the absolute number of bits that is saved is also smaller
|
|
2025-03-04 01:31:51
|
and yes, it should be more informed, I think that's why it's not quite just multiplied but there's this * 0.9 + 0.1 thing
|
|
2025-03-04 01:32:21
|
obviously there is room for improvement in how these thresholds are determined
|
|
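A rough sketch of the scaling being described, as an assumption based on the discussion rather than the exact expression in `enc_encoding.cc`: the base threshold is scaled by the sampled fraction, softened by the `* 0.9 + 0.1` term, so an inflated `total_pixels` lowers the effective threshold:

```cpp
#include <cassert>
#include <cmath>

// Sketch only; the real code in enc_encoding.cc may differ in detail.
// fraction = nb_samples / total_pixels, so doubling total_pixels roughly
// halves the effective threshold, damped by the + 0.1 floor.
double EffectiveThreshold(double base, double nb_samples,
                          double total_pixels) {
  double fraction = nb_samples / total_pixels;
  return base * (fraction * 0.9 + 0.1);
}
```

With `base = 82`: sampling everything leaves the threshold at 82, while a `total_pixels` inflated to twice the sample count gives `82 * 0.55 = 45.1`, which is the factor-of-two effect Melirius points out above.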
|
Melirius
|
|
_wb_
the threshold is expressed in number of bits estimated to be saved by adding the node, in the sampled data. But if the amount of sampled data is smaller, the absolute number of bits that is saved is also smaller
|
|
2025-03-04 01:32:49
|
Aha, got it then
|
|
|
_wb_
|
2025-03-04 01:33:59
|
`82 + 14 * static_cast<int>(cparams_.speed_tier)` is a pretty arbitrary formula that is not really based on anything
|
|
2025-03-04 01:35:39
|
basically it makes higher effort settings get a lower threshold so more likely to produce larger trees, but there is no fundamental reason to link that threshold to the speed setting
|
|
2025-03-04 01:38:31
|
also there are some additional heuristics around https://github.com/libjxl/libjxl/blob/main/lib/jxl/modular/encoding/enc_ma.cc#L431 that apply some creative interpretations of the threshold
|
|
|
Melirius
|
2025-03-04 01:39:23
|
So for fast encoding `libjxl` effectively uses half the threshold (since each of the `stream_images_` is encoded separately)
|
|
|
_wb_
|
2025-03-04 01:40:47
|
from the compression point of view, the threshold should correspond to the signaling cost of the extra node, which is a bit hard to estimate since you don't know the histograms for the MA tree signaling yet, but probably 82 bits is a safe overestimate of that
|
|
2025-03-04 01:42:31
|
in the bigger picture, it's also beneficial for speed to keep trees small, avoid the weighted predictor, and encourage static tree properties since they do not need to be evaluated per-pixel. So having a somewhat higher threshold helps with that
|
|
2025-03-04 01:44:00
|
probably it would make sense to rethink that whole threshold computation since I don't think it has been touched in ages and it was just a kind of placeholder in the first place anyway
|
|
|
Traneptora
|
|
Traneptora
`git diff --trans`
|
|
2025-03-04 02:01:29
|
<@167023260574154752>
|
|
|
lonjil
|
|
Traneptora
|
2025-03-04 02:01:55
|
it's not perfect (pink and white inverted) but it did make me smile a bit when I saw it
|
|
|
lonjil
|
|
Melirius
|
2025-03-04 05:22:00
|
There is a mess around the use of the `responsive` parameter - what was the intention behind it? For example, it is set inside the `ModularFrameEncoder` copy, but then the generally uninitialized value from `cparams_orig` is used further on
|
|
|
_wb_
|
2025-03-04 06:59:04
|
Intention is for it to be disabled by default unless it gets enabled explicitly
|
|
|
Melirius
|
|
_wb_
Intention is for it to be disabled by default unless it gets enabled explicitly
|
|
2025-03-04 07:47:48
|
I will then try to follow the data flow and fix the problem
|
|
|
RaveSteel
|
2025-03-04 10:28:33
|
I have a few GIFs that encode properly to lossless JXL, but checksums do not match. Does anyone have an idea why that might be?
|
|
2025-03-04 10:28:39
|
|
|
|
Demiurge
|
2025-03-04 11:18:23
|
Pretty
|
|
|
monad
|
2025-03-04 11:31:28
|
could it have to do with alpha?
|
|
|
A homosapien
|
|
RaveSteel
I have a few GIFs that encode properly to lossless JXL, but checksums do not match. Does anyone have an idea why that might be?
|
|
2025-03-04 11:37:36
|
Checksums are not a reliable way to check whether two images are the same. Small differences in the header, like a tiny bit of Exif metadata, will change the checksum even though the pixel data is identical.
|
|
2025-03-04 11:38:29
|
Well, you could decode the gif and jxl into individual ppm images and use ssimulacra2 to compare them
|
|
2025-03-04 11:38:46
|
or maybe there is a special ImageMagick command that does that
|
|
|
monad
|
2025-03-04 11:39:14
|
presumably RaveSteel is using imagemagick, which presents this issue
|
|
|
Laserhosen
|
2025-03-04 11:39:57
|
`magick identify -format '%#\n'` will give you a hash for the pixel values of each frame.
|
|
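The underlying point is worth spelling out: a file-level checksum hashes the container bytes (headers, metadata, compression choices), while a pixel hash only sees the decoded samples. A toy sketch, with a simple FNV-1a hash standing in for whatever digest the tools actually use, and the "files" being made-up byte strings rather than real GIF/JXL data:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy FNV-1a hash, standing in for md5 or ImageMagick's pixel signature.
uint64_t Fnv1a(const std::vector<uint8_t>& bytes) {
  uint64_t h = 1469598103934665603ull;
  for (uint8_t b : bytes) {
    h ^= b;
    h *= 1099511628211ull;
  }
  return h;
}

// Prepend a hypothetical container header to a pixel payload.
std::vector<uint8_t> MakeFile(const std::vector<uint8_t>& header,
                              const std::vector<uint8_t>& pixels) {
  std::vector<uint8_t> out = header;
  out.insert(out.end(), pixels.begin(), pixels.end());
  return out;
}
```

Two containers around the same payload hash differently at the file level, while hashing just the payload always matches, which is why `magick identify -format '%#\n'` (pixel values only) agrees even when whole-file checksums do not.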
|
A homosapien
|
2025-03-04 11:42:41
|
Also, I brute-forced it: I decoded every frame, stitched them back together into a GIF, and used `gifdiff.exe` to compare the pixel values. They are 100% identical.
|
|
|
RaveSteel
|
|
monad
presumably RaveSteel is using imagemagick, which presents this issue
|
|
2025-03-04 11:43:51
|
I am, yes
|
|
|
Laserhosen
|
2025-03-04 11:43:58
|
magick does give different results, but yeah, it could be related to alpha.
|
|
|
RaveSteel
|
2025-03-04 11:44:45
|
ImageMagick normally outputs matching image hashes
|
|
2025-03-04 11:44:57
|
For GIF as well, I have encoded hundreds of GIFs in a dataset and it always matched, of course only if nothing went wrong during the encode
|
|
2025-03-04 11:45:09
|
But yes, I assume it has something to do with alpha
|
|
2025-03-04 11:52:56
|
Decoding the GIF and the converted JXL to their frames indeed shows the same hashes for all of the frames, but it is still weird, or rather unexpected, that the file hashes would differ - not just with ImageMagick but with ffmpeg's framemd5 as well
|
|
|
A homosapien
|
2025-03-04 11:53:37
|
ooh, ssimulacra2 accepts .gif input
`ssimulacra2.exe original.gif djxl.gif` = 100
|
|
|
RaveSteel
|
2025-03-04 11:54:08
|
Does it actually decode the entire GIF, or does it just diff the first frame?
|
|
|
A homosapien
|
2025-03-04 11:54:23
|
Testing rn
|
|
2025-03-04 11:58:34
|
only first frame 🥲
|
|
|
RaveSteel
|
|
A homosapien
|
2025-03-04 11:59:14
|
I feel like it wouldn't be hard to implement tbh
|
|
|
RaveSteel
|
2025-03-05 12:00:15
|
Oh cool, I didn't know that djxl decodes animated JXL to APNG
|
|
|
A homosapien
|
2025-03-05 12:00:46
|
I guess my main gripe is that djxl does not output natively to .gif. I had to output as apng then reconvert to gif carefully without ruining the per-frame palette the original gif had
|
|
|
RaveSteel
|
2025-03-05 12:01:11
|
I just extracted the frames to PNG using ffmpeg
|
|
2025-03-05 12:01:19
|
Works perfectly fine
|
|
2025-03-05 12:01:43
|
except you get many more frames for the JXL, but deduplicate those and you're golden lol
|
|
|
A homosapien
|
2025-03-05 12:03:34
|
I just use `apngdis.exe` to split it into frames
|
|
|
RaveSteel
|
2025-03-05 12:03:59
|
I skipped decoding via djxl entirely because ffmpeg can handle it just fine
|
|
|
A homosapien
|
2025-03-05 12:04:43
|
makes sense, idk what setting to pass to ffmpeg for deduplication tho
|
|
|
monad
|
2025-03-05 12:05:10
|
convert can also give you the frames ...
|
|
|
RaveSteel
|
|
A homosapien
makes sense, idk what setting to pass to ffmpeg for deduplication tho
|
|
2025-03-05 12:05:45
|
There are params for that I believe but I just used `rclone dedupe --dedupe-mode newest --by-hash .` lmao
|
|
2025-03-05 12:06:15
|
A very brute force approach I must admit, but it works
|
|
2025-03-05 12:12:33
|
Looking at ImageMagick's identify with -verbose for both files, the GIF does not have an alpha layer but some of the colours in the ColourMap are "srgba". I assume this could be the reason why the JXL gains an alpha channel during conversion?
|
|
2025-03-05 12:12:38
|
I'll test some more
|
|
|
Traneptora
|
|
RaveSteel
except you get many more frames for the JXL, but deduplicate those and you're golden lol
|
|
2025-03-05 12:33:59
|
how so? the JXL animation has 12 frames and ffmpeg outputs 12 frames
|
|
|
RaveSteel
|
2025-03-05 12:35:46
|
Right, but ffmpeg outputs over 100 frames, with all except the real 12 frames being duplicates
|
|
2025-03-05 12:35:50
|
Don't ask me why
|
|
|
Traneptora
|
|
RaveSteel
Right, but ffmpeg outputs over 100 frames, with all except the real 12 frames being duplicates
|
|
2025-03-05 12:35:59
|
what version are you using?
|
|
|
RaveSteel
|
|
Traneptora
|
2025-03-05 12:36:30
|
interesting. I just tested it on the JXL you uploaded and it outputs 12 frames
|
|
2025-03-05 12:36:48
|
```
ffmpeg -i 61472748_ugoira1920x1080\ -\ 神社と星.jxl -c rawvideo -f framemd5 -
```
|
|
2025-03-05 12:37:18
|
you're not doing something like setting `-r` are you?
|
|
2025-03-05 12:37:41
|
I tested it with ffmpeg 7.1 and got a similar result to git master
|
|
|
RaveSteel
|
2025-03-05 12:38:25
|
That's the interesting part: ffmpeg correctly detects the JXL as having 12 frames, but outputting to PNG via `ffmpeg -i JXL img%04d.png` yields the many frames
|
|
|
Traneptora
|
2025-03-05 12:38:44
|
oh, that's the image2 muxer
|
|
|
RaveSteel
|
2025-03-05 12:39:14
|
It's not JXL specific, I have seen this behaviour with other animated files as well
|
|
|
Traneptora
|
2025-03-05 12:39:43
|
problem in this case is the image2 muxer is setting the framerate to 100 based on the timebase, and then just duplicating frames
|
|
|
RaveSteel
|
2025-03-05 12:40:11
|
Which is why I then have to deduplicate to get the real 12 frames in this case
|
|
|
Traneptora
|
2025-03-05 12:40:28
|
just add `-fps_mode:v passthrough` as an output option
|
|
2025-03-05 12:40:59
|
that prevents it from duping or dropping frames based on "framerate"
|
|
2025-03-05 12:41:03
|
it just passes them through
|
|
2025-03-05 12:41:25
|
for ex `ffmpeg -i input.jxl -fps_mode:v passthrough img%04d.png`
|
|
|
RaveSteel
|
2025-03-05 12:41:58
|
I did use that once but couldn't remember it earlier, and by then I had already deduplicated, thanks!
|
|
2025-03-05 12:42:05
|
I will add it to my personal documentation xd
|
|
|
Traneptora
|
2025-03-05 12:42:23
|
it's basically the old vsync option but vsync is deprecated
|
|
|
RaveSteel
|
2025-03-05 12:43:20
|
Good to know
|
|
2025-03-05 12:43:46
|
I know a project where -vsync is still used, I will open a PR to update the setting
|
|
|
BlueSwordM
|
|
A homosapien
I feel like it wouldn't be hard to implement tbh
|
|
2025-03-05 01:37:19
|
vship.
|
|
|
_wb_
|
2025-03-05 10:04:48
|
likely imagemagick hashes are based on the non-coalesced frames so they will be different since libjxl decoding coalesces by default
|
|
|
jonnyawsom3
|
2025-03-05 04:05:37
|
I know libjxl doesn't, or at least didn't, handle unspecified disposal types in GIF. So it would error or have slight artifacts
|
|
|
Lumen
|
|
BlueSwordM
vship.
|
|
2025-03-07 09:50:49
|
I am getting some ads here x) thank you blue
|
|
|
Meow
|
2025-03-08 12:36:21
|
What PNG library does `djxl` use?
|
|
|
Traneptora
|
|
Meow
What PNG library does `djxl` use?
|
|
2025-03-08 01:07:27
|
libpng
|
|
|
Meow
|
2025-03-08 01:18:40
|
Hmm so my procedure of quickly "resetting" PNG works: `cjxl -d 0 -e 1` -> `djxl`
|
|
|
spider-mario
|
2025-03-08 03:56:12
|
`decode_and_encode` from the dev tools might also work
|
|
2025-03-08 03:56:28
|
oh, but I think it requires a colorspace hint
|
|
2025-03-08 03:56:45
|
though I also think it will ignore it with PNG input
|
|
2025-03-08 03:57:22
|
yeah, maybe cjxl/djxl isn’t so bad
|
|
|
Traneptora
|
|
Meow
Hmm so my procedure of quickly "resetting" PNG works: `cjxl -d 0 -e 1` -> `djxl`
|
|
2025-03-08 06:52:29
|
what is "resetting" a PNG supposed to do, exactly?
|
|
|
Meow
|
2025-03-08 07:03:47
|
As some may be optimised either really well or badly
|
|
|
Traneptora
|
2025-03-08 07:04:20
|
what I do is I typically run it through `umbrielpng`, then `optipng`, then `zopflipng` in that order
|
|
|
Meow
|
2025-03-08 07:04:20
|
This resets them to the same pre-optimisation status
|
|
|
Traneptora
|
2025-03-08 07:05:07
|
you can also use `optipng -zc6 -zm8 -zs1 -f5 foo.png` to force zlib compression level 6, zlib memory 8, zlib strategy 1, filter strat 5
|
|
2025-03-08 07:05:10
|
or something of that form
|
|
2025-03-08 07:05:39
|
`umbrielpng` to strip a bunch of unnecessary chunks too
|
|
2025-03-08 07:05:41
|
https://github.com/Traneptora/umbrielpng
|
|
|
Meow
|
2025-03-08 07:07:45
|
If this is an optimisation it's not what I want to do
|
|
2025-03-08 07:11:52
|
I wanted them to be big and at the same rate of compression, and cjxl d0 e1 + djxl can do it well and rapidly
|
|
|
AccessViolation_
|
2025-03-08 07:24:45
|
thoughts on `-e 11` not attempting to override parameters which were explicitly specified? for example, so that if I pass a group size like `-g 3`, it doesn't include the group size in its brute-forcing efforts and just always goes with the specified size instead
|
|
2025-03-08 07:38:21
|
the reason I'd want this personally is because Jon's code for visualizing predictors and the MA tree will only output images for a single group (which should be 1024x1024), but I think it might also be nice that you could lock down or focus on specific parameters when you're testing their effects on compression
|
|
|
A homosapien
|
|
Meow
This resets them to the same pre-optimisation status
|
|
2025-03-08 07:52:24
|
`oxipng --force --zc 1 -f 0 input.png`
There ya go, an unoptimized png.
|
|
|
monad
|
|
AccessViolation_
thoughts on `-e 11` not attempting to override parameters which were explicitly specified? for example, so that if I pass a group size like `-g 3`, it doesn't include the group size in its brute-forcing efforts and just always goes with the specified size instead
|
|
2025-03-08 07:54:03
|
just try your own e10 runs in that case
|
|
|
AccessViolation_
|
2025-03-08 07:55:38
|
is there a way I can emulate e11's "e10 + previous-channel MA tree properties" in e10?
|
|
2025-03-08 07:56:04
|
Oh that's what `-E` does!
|
|
|
monad
|
2025-03-08 07:56:35
|
e11 does `-E 4`, but it is not useful on single-channel stuff
|
|
|
AccessViolation_
|
2025-03-08 07:57:35
|
yeah for the visualizations i'm using single channel grayscale anyway
|
|
|
Traneptora
|
|
Meow
If this is an optimisation it's not what I want to do
|
|
2025-03-08 07:59:04
|
then yea just force specific parameters with optipng or similar
|
|
|
Meow
|
|
A homosapien
`oxipng --force --zc 1 -f 0 input.png`
There ya go, an unoptimized png.
|
|
2025-03-08 08:00:00
|
Hmm may try later
|
|
|
monad
|
|
AccessViolation_
thoughts on `-e 11` not attempting to override parameters which were explicitly specified? for example, so that if I pass a group size like `-g 3`, it doesn't include the group size in its brute-forcing efforts and just always goes with the specified size instead
|
|
2025-03-08 08:16:19
|
btw, it's not a brute force search over parameters, it uses pre-selected configurations which were proven useful. as such, it doesn't really make sense to integrate user choice as implemented.
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
Traneptora
`umbrielpng` to strip a bunch of unnecessary chunks too
|
|
2025-03-08 08:16:39
|
is it done by default?
|
|
|
AccessViolation_
|
|
monad
btw, it's not a brute force search over parameters, it uses pre-selected configurations which were proven useful. as such, it doesn't really make sense to integrate user choice as implemented.
|
|
2025-03-08 08:16:54
|
ah i see, yeah that's fair
|
|
|
Traneptora
|
|
TheBigBadBoy - 𝙸𝚛
is it done by default?
|
|
2025-03-08 08:25:13
|
yes
|
|
2025-03-08 08:25:17
|
that's kind of the point
|
|
2025-03-08 08:25:34
|
I could add a setting "--no-strip" but I haven't yet
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2025-03-08 08:55:53
|
and that's perfect <:YEP:808828808127971399>
|
|
|
Meow
|
2025-03-09 06:37:48
|
Yeah, `-f 0 --zc 1 --force` results in larger ones
|
|
2025-03-09 06:38:11
|
Tried `--zc 0` for just one initially and it's too large
|
|
2025-03-09 06:44:18
|
Much faster than the `cjxl -d 0 -e 1` + `djxl` approach
|
|