|
RaveSteel
|
2024-11-27 05:55:02
|
jpegli does not support lossless compression, does it? Why are you using it for this project?
|
|
|
jonnyawsom3
|
2024-11-27 07:33:55
|
It could be that you're using invalid JPEG files whose errors other decoders ignore. Jpegli is currently a lot stricter about some files that work elsewhere
|
|
|
Traneptora
|
2024-12-01 03:20:16
|
I've been exchanging emails with w3c's PNG spec team, and `cLLi` and `mDCv` are going to be changed to `cLLI` and `mDCV` (note difference in capitalization). The change landed in the repo already: https://w3c.github.io/png/ and will land in the published spec once it's released. For this reason I suggest that we start writing `cLLI` immediately instead of `cLLi`. Chris Blume of W3C PNG spec team said that he's adding the new chunk conventions to the next release of Chrome, and I've already sent a list to the ML to patch FFmpeg, and Mediainfo already supports both (the three known implementations of `cLLI`). I believe <@179701849576833024> you're the author of this?
|
|
|
|
veluca
|
2024-12-01 03:36:55
|
I'm certainly not the author of the spec, but I'm the author of the chrome code IIRC
|
|
|
Traneptora
|
|
veluca
I'm certainly not the author of the spec, but I'm the author of the chrome code IIRC
|
|
2024-12-01 04:08:01
|
I meant the author of the PNG writer that writes `cLLi` chunks in libjxl
|
|
2024-12-01 04:08:10
|
we should probably change it to write `cLLI` instead
|
|
|
spider-mario
|
2024-12-01 07:56:15
|
apparently, we wrote it `cLLI` in the original PR description https://github.com/libjxl/libjxl/pull/2711
|
|
2024-12-01 07:56:25
|
so maybe that points to that spelling being more intuitive
|
|
|
DZgas Ж
|
2024-12-03 01:54:39
|
does anyone know jxl viewer with animation support on android
|
|
|
A homosapien
|
|
DZgas Ж
|
|
A homosapien
|
|
2024-12-03 03:31:54
|
It's working. The only problem is that it decodes everything before showing anything; you have to wait about 5 minutes.
Maybe someone else knows some other kind?
|
|
|
Quackdoc
|
2024-12-03 03:38:45
|
has anyone tried updating the pix-jxl thing to the latest version? maybe that would work better.
|
|
|
A homosapien
|
2024-12-03 05:16:20
|
Progressively decoding animated JXLs frame by frame? Does such a program even exist yet?
|
|
|
jonnyawsom3
|
2024-12-03 05:36:57
|
ImageToolbox on Android does at least, unless it's decoding the entirety of Shrek in less than 0.2 seconds
|
|
|
A homosapien
|
2024-12-03 06:14:55
|
Does ImageToolbox support animated JXL? It could just be decoding the first frame and that's it.
|
|
|
DZgas Ж
It's working. The only problem is that it decodes everything before showing anything; you have to wait about 5 minutes.
Maybe someone else knows some other kind?
|
|
2024-12-03 06:20:54
|
I didn't have to wait 5 minutes. My Pixel 8 opens and starts playing shrek.jxl within 8-10 seconds.
|
|
|
jonnyawsom3
|
|
A homosapien
Does ImageToolbox support animated JXL? It could just be decoding the first frame and that's it.
|
|
2024-12-03 06:49:49
|
Editing is a single frame, but the file preview correctly animates in both the thumbnail and 'full' view, starting immediately on my 8 year old phone instead of preloading the entire file
|
|
|
DZgas Ж
|
|
jonnyawsom3
ImageToolbox on Android does at least, unless it's decoding the entirety of Shrek in less than 0.2 seconds
|
|
2024-12-03 07:52:21
|
The application is overcomplicated and not viewer specific, but it works faster
|
|
|
A homosapien
|
2024-12-03 08:12:38
|
Neat, it decoded all frames without crashing or lagging.
|
|
|
oupson
|
|
DZgas Ж
It's working. The only problem is that it decodes everything before showing anything; you have to wait about 5 minutes.
Maybe someone else knows some other kind?
|
|
2024-12-04 09:54:11
|
There's a new version on the way that should be faster. Do you have any images I can use to test performance?
|
|
|
DZgas Ж
|
|
oupson
There's a new version on the way that should be faster. Do you have any images I can use to test performance?
|
|
2024-12-04 01:29:00
|
|
|
|
oupson
|
2024-12-04 01:36:01
|
it takes only a few seconds to load oO.
If you want to try it, I made a pre-release; it is signed with the same key used to sign the releases on IzzyOnDroid / GitHub.
|
|
|
DZgas Ж
|
|
oupson
it takes only a few seconds to load oO.
If you want to try it, I made a pre-release; it is signed with the same key used to sign the releases on IzzyOnDroid / GitHub.
|
|
2024-12-04 02:08:00
|
40 seconds of "pre-decode" before it shows
gamma is dead
Samsung J4 (2018)
|
|
2024-12-04 02:13:35
|
ImageToolbox shows it after a 2 second wait, with normal gamma.
The speed of your smartphone just didn't let you see how unoptimized this playback is,
so this is clearly not my problem and not a smartphone problem. However old it is, this was not AV1 1080p CPU decoding.
|
|
|
bonnibel
|
|
_wb_
no, but doing `/ 255.f` does give more accurate results than doing `* (1.f / 255)`
|
|
2024-12-07 03:10:25
|
late reply but at least in my quick and dirty test, either dividing by maxval rather than multiplying with its reciprocal _or_ multiplying with the reciprocal as a double and then casting back to a float makes an 8-bit image and an identical 16-bit image convert to the same float
<https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=34bb65dc370152dcc2f27beda9f68687>
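for the record, here's a minimal C++ sketch of the two conversion paths (helper names are made up, this is not libjxl code): the plain division is a single correctly rounded operation, so an 8-bit value `v` and its 16-bit equivalent `v * 257` land on the same float.

```cpp
#include <cassert>
#include <cstdint>

// Map an n-bit sample to [0, 1] by dividing by maxval: one rounding step.
float ToFloatDiv(uint32_t v, uint32_t maxval) {
  return static_cast<float>(v) / static_cast<float>(maxval);
}

// Multiply by a reciprocal computed in double, then round once to float.
float ToFloatRecipDouble(uint32_t v, uint32_t maxval) {
  return static_cast<float>(v * (1.0 / maxval));
}
```

multiplying by a float reciprocal (`v * (1.f / maxval)`) rounds twice in single precision, which is where the 8-bit/16-bit mismatches can creep in.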
|
|
|
DZgas Ж
|
2024-12-08 04:19:54
|
When 4x4 blocks?
|
|
|
CrushedAsian255
|
2024-12-08 07:57:07
|
When 256x256 blocks (could be useful for compressing a plain colour)
|
|
|
jonnyawsom3
|
2024-12-09 01:19:50
|
A modular patch would be better for that
|
|
|
CrushedAsian255
|
2024-12-09 01:25:15
|
probably yeah
|
|
|
A homosapien
|
2024-12-10 07:05:50
|
why hasn't this been merged?
https://github.com/libjxl/libjxl/pull/3905
is it because <@263300458888691714> hasn't approved it yet?
|
|
|
monad
|
2024-12-10 03:39:12
|
Nothing to do with me, I'm not on the dev team ...
|
|
|
A homosapien
|
2024-12-10 05:58:22
|
Odd, maybe it's just being overlooked I guess
|
|
2024-12-10 06:14:37
|
<@&803357352664891472> can this be merged? The documentation is super old, like 0.6 old
|
|
|
_wb_
|
|
CrushedAsian255
|
2024-12-17 03:25:23
|
Does djxl return an error code when using `--disable_output`? If so, can I use that to detect whether a JXL file is malformed?
|
|
|
pedromarques
|
2024-12-17 10:43:25
|
WebP 2.0 was never officially released. I wonder what it will do about their 4x4 square pixel prediction model that I hate (destroying image quality to accommodate small file savings)? <@794205442175402004> ?
|
|
2024-12-17 11:19:58
|
If JPEG XL permits layers, can these layers have transparency masks?
|
|
|
jonnyawsom3
|
2024-12-17 11:40:04
|
Layers can have extra channels, yes. Assuming that's what you mean
|
|
|
_wb_
|
2024-12-18 02:05:52
|
The typical way to use layers is by having an alpha channel in each layer and alpha-blending (kBlend) them on top of one another. It's not obligatory to do it that way: you can also have layers without alpha channel that are just stacked opaque layers (kReplace blend mode), which is fine if it's just a composite image consisting of non-overlapping layers; alternatively you can use kAdd or kMul blend mode which can be useful for overlapping layers without requiring an extra channel.
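As a rough single-channel sketch (values in [0, 1]; an illustration of the modes described above, not libjxl code), the three blend modes come down to:

```cpp
#include <cassert>

// kBlend: standard alpha compositing of the foreground layer over the
// background, using the foreground layer's alpha channel.
float BlendAlpha(float bg, float fg, float fg_alpha) {
  return fg * fg_alpha + bg * (1.0f - fg_alpha);
}
// kAdd: sum the layers; no alpha channel needed.
float BlendAdd(float bg, float fg) { return bg + fg; }
// kMul: multiply the layers; no alpha channel needed.
float BlendMul(float bg, float fg) { return bg * fg; }
```

(kReplace just discards the background, so it isn't shown.)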
|
|
2024-12-18 02:09:30
|
You can even have multiple alpha channels and do weird kinds of blending that cannot be represented as a simple chain, i.e. a linearly ordered blend stack (like how it always is in image editors), but is a more complicated DAG of dependencies between layers.
|
|
|
0xC0000054
|
2024-12-18 02:11:16
|
Are layers the same as multi-frame/animation?
|
|
|
_wb_
|
2024-12-18 02:21:26
|
yes, frames are either:
- a layer if it has zero duration (which is the only possible duration if the image is marked as a still image and not an animation)
- an animation frame if it has nonzero duration
- a page if it has the maximum duration value, which is a special value
|
|
2024-12-18 02:23:40
|
and frame blending works the same way regardless of duration. But of course doing a full-sized kReplace blend doesn't really make sense for a layered image (it makes all previous layers irrelevant) while for an animation frame or a new page this makes more sense.
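in pseudocode-ish C++ the classification is just a switch on the duration (the sentinel name and value here are my assumption for illustration, not taken from the spec):

```cpp
#include <cassert>
#include <cstdint>

enum class FrameKind { kLayer, kAnimationFrame, kPage };

// Assumed sentinel standing in for the spec's special maximum duration.
constexpr uint32_t kPageDuration = 0xFFFFFFFFu;

FrameKind ClassifyFrame(uint32_t duration) {
  if (duration == 0) return FrameKind::kLayer;          // still-image layer
  if (duration == kPageDuration) return FrameKind::kPage;
  return FrameKind::kAnimationFrame;                    // nonzero duration
}
```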
|
|
|
0xC0000054
|
|
_wb_
yes, frames are either:
- a layer if it has zero duration (which is the only possible duration if the image is marked as a still image and not an animation)
- an animation frame if it has nonzero duration
- a page if it has the maximum duration value, which is a special value
|
|
2024-12-18 03:14:23
|
Thanks. Layer support is on the long-term plans for the Paint.NET Jpeg XL plugin, but the only reference implementation of that feature I am aware of is the libjxl GIMP plugin.
|
|
2024-12-18 03:17:30
|
I am somewhat confused by what actions are required for an app when disabling coalescing to read layers. The [documentation](<https://github.com/libjxl/libjxl/blob/main/doc/format_overview.md#frames>) appears to imply that you would need to handle cropping frames that are outside the bounds of the main image in addition to repositioning smaller frames that are offset.
|
|
2024-12-18 03:19:14
|
I may wait to implement that feature until there are more libjxl consumers which handle layers that I can use as a reference for my implementation.
|
|
|
_wb_
|
2024-12-18 03:26:22
|
yes, when disabling coalescing you get the frames as they are, which means they can be smaller or larger than the canvas (which is also how it typically is in image editors), and which also means you need to blend them yourself if you want to show a merged image (which is something image editors can already do).
|
|
|
CrushedAsian255
|
2024-12-22 11:23:07
|
Can someone link me to a resource to help me understand how range ANS works in JPEG XL?
|
|
|
Jarek
|
|
CrushedAsian255
Can someone link me to a resource to help me understand how range ANS works in JPEG XL
|
|
2024-12-24 05:44:41
|
there are lots of materials, videos, e.g. https://m.youtube.com/watch?v=RFWJM8JMXBs
|
|
|
CrushedAsian255
|
|
Jarek
there are lots of materials, videos, e.g. https://m.youtube.com/watch?v=RFWJM8JMXBs
|
|
2024-12-24 05:49:33
|
I have seen that one and it is very good; I meant range ANS more specifically
|
|
|
Jarek
|
2024-12-24 05:51:38
|
https://community.wolfram.com/groups/-/m/t/3023601 do you have some specific question?
|
|
|
CrushedAsian255
|
|
Jarek
https://community.wolfram.com/groups/-/m/t/3023601 do you have some specific question?
|
|
2024-12-24 07:42:26
|
If possible can you export it as PDF? It’s crashing my web browser!
|
|
|
Jarek
|
2024-12-24 07:43:24
|
https://encode.su/threads/2078-List-of-Asymmetric-Numeral-Systems-implementations dozens of explanations
|
|
|
Tirr
|
2024-12-24 08:02:01
|
jxl uses alias mapping rANS. the idea is, first start with evenly divided range for all possible symbols, and more likely symbols "borrow" ranges from less likely symbols by creating an alias pointing from less likely to more likely (so more likely symbols have more effective ranges).
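here's a float sketch of that borrowing step (Vose's construction; not libjxl's integer `AliasTable`, just the idea): each of the n symbols starts with one bucket of mass 1/n, and the loop moves the unused part of a rare symbol's bucket over to a likely symbol.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct AliasTable {
  std::vector<double> keep;  // probability of keeping the bucket's own symbol
  std::vector<int> alias;    // symbol that borrowed the rest of the bucket
};

AliasTable BuildAlias(const std::vector<double>& p) {
  const int n = static_cast<int>(p.size());
  AliasTable t{std::vector<double>(n, 1.0), std::vector<int>(n)};
  std::vector<double> scaled(n);
  std::vector<int> small, large;
  for (int i = 0; i < n; i++) {
    t.alias[i] = i;
    scaled[i] = p[i] * n;  // average bucket mass is exactly 1
    (scaled[i] < 1.0 ? small : large).push_back(i);
  }
  while (!small.empty() && !large.empty()) {
    int s = small.back(); small.pop_back();
    int l = large.back(); large.pop_back();
    t.keep[s] = scaled[s];  // bucket s keeps its own symbol this often
    t.alias[s] = l;         // the remainder is borrowed by a likely symbol
    scaled[l] -= 1.0 - scaled[s];
    (scaled[l] < 1.0 ? small : large).push_back(l);
  }
  return t;
}
```

decoding then roughly picks a bucket from the low bits of the state and compares the offset within it against `keep` to choose between the bucket's own symbol and its alias.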
|
|
|
Jarek
|
2024-12-24 08:35:26
|
Alias trick was proposed by Fabian Giesen: https://fgiesen.wordpress.com/2014/02/18/rans-with-static-probability-distributions/
|
|
|
DZgas Ж
|
2024-12-29 02:01:33
|
does libjxl have a roadmap?
|
|
|
spider-mario
|
|
inflation
|
2025-01-02 04:08:05
|
Hi there, I would love to contribute to https://github.com/libjxl/jxl-rs in my free time. I noticed that there is a tracking issue https://github.com/libjxl/jxl-rs/issues/58 and it seems like a good point to start. How could I collaborate on this? Thank you.
|
|
|
|
veluca
|
|
inflation
Hi there, I would love to contribute to https://github.com/libjxl/jxl-rs in my free time. I noticed that there is a tracking issue https://github.com/libjxl/jxl-rs/issues/58 and it seems like a good point to start. How could I collaborate on this? Thank you.
|
|
2025-01-02 09:30:51
|
other than the stages, I'm fairly close to having things ready enough for starting to implement the actual processing of modular transforms (and that whole logic will also need about a billion tests, at some point or another...)
I sent you a DM 😉
|
|
|
CrushedAsian255
|
|
veluca
other than the stages, I'm fairly close to having things ready enough for starting to implement the actual processing of modular transforms (and that whole logic will also need about a billion tests, at some point or another...)
I sent you a DM 😉
|
|
2025-01-02 10:43:48
|
Is basic image decoding functional yet?
Like a VarDCT image with no restoration filters or patches
|
|
|
|
veluca
|
|
CrushedAsian255
Is basic image decoding functional yet?
Like a VarDCT image with no restoration filters or patches
|
|
2025-01-02 10:44:51
|
nope, we're hoping to get to that point in a month or so
|
|
|
username
|
2025-01-02 01:27:46
|
what are the plans for when jxl-rs gets to the optimization stage, if Rust still doesn't have portable SIMD by then?
|
|
|
|
veluca
|
|
username
what are the plans for when jxl-rs gets to the optimization stage, if Rust still doesn't have portable SIMD by then?
|
|
2025-01-02 01:37:54
|
oh there's options, https://docs.rs/pulp/latest/pulp/index.html is one, and otherwise macros 🙂
|
|
|
AccessViolation_
|
2025-01-02 01:50:20
|
That reminds me, I recently read a post from the image-rs folks about how removing manual SIMD and instead writing the Rust code in such a way that it's easy for the compiler to autovectorize resulted in better SIMD code and better performance in general, leading to it outperforming the C PNG decoders. I'm not really an expert on this and I don't want to backseat-develop this software, but it seemed very promising, and I was wondering if this approach has been considered
[The post](<https://redlib.nl/r/rust/comments/1ha7uyi/memorysafe_png_decoders_now_vastly_outperform_c/>)
relevant things from the post:
[writing code so the compiler knows it can vectorize it](<https://matklad.github.io/2023/04/09/can-you-trust-a-compiler-to-optimize-your-code.html>)
> The drawback of automatic vectorization is that it's not guaranteed to happen, but in Rust we've found that once it starts working, it tends to keep working across compiler versions with no issues. When I talked to an LLVM developer about this, they mentioned that it's easier for LLVM to vectorize Rust than C because Rust emits noalias annotations almost everywhere.
|
|
|
Quackdoc
|
2025-01-02 01:50:51
|
it's hard to do; it works sometimes, but compilers are finicky beasts
|
|
|
Tirr
|
2025-01-02 01:54:05
|
autovec is limited in cpu features decided at compile-time, so we need some kind of dynamic dispatch on a set of functions compiled with various cpu features (which pulp seems to support). and something like DCT doesn't get autovectorized very well so we need manual SIMD for those
|
|
|
|
veluca
|
2025-01-02 02:01:12
|
yeah autovec is... limited, we're pretty far away from being able to get optimal performance out of it in a lot of very relevant cases
|
|
|
AccessViolation_
|
2025-01-02 02:05:34
|
Would it be preferable to write good-for-autovectorization code (like code that works on chunks of statically known sizes), see if the compiler can autovectorize it, and only do manual SIMD for cases where it can't or where the code turns out to be suboptimal? I feel like there might be some value in autovectorized code over manual SIMD where possible, since I imagine it's a lot easier to maintain. Or would that not be worth it given how rarely it actually works well
|
|
|
Quackdoc
|
2025-01-02 02:16:40
|
doubtful, `good-for-autovectorization` is very limited. it's really good when it works, but it hardly ever works and you almost always just end up fighting with it, aside from some basic stuff
|
|
|
|
veluca
|
2025-01-02 02:19:50
|
yep, and vectorized code *can* be quite reasonable to maintain with good library support - not so much if you write asm or intrinsics, but IMO you shouldn't be doing too much of that if you can avoid it
|
|
|
lonjil
|
2025-01-02 03:56:04
|
just don't let whoever runs the ffmpeg twitter account hear you say that
|
|
|
CrushedAsian255
|
|
veluca
yep, and vectorized code *can* be quite reasonable to maintain with good library support - not so much if you write asm or intrinsics, but IMO you shouldn't be doing too much of that if you can avoid it
|
|
2025-01-02 05:33:03
|
Write the entire library in pure ASM for maximum efficiency
|
|
|
_wb_
|
2025-01-02 08:53:58
|
cjxl with argument parsing written in handcrafted AVX-512
|
|
|
spider-mario
|
|
AccessViolation_
Would it be preferable to write good-for-autovectorization code (like code that works on chunks of statically known sizes), see if the compiler can autovectorize it, and only do manual SIMD for cases where it can't or where the code turns out to be suboptimal? I feel like there might be some value in autovectorized code over manual SIMD where possible, since I imagine it's a lot easier to maintain. Or would that not be worth it given how rarely it actually works well
|
|
2025-01-02 09:20:51
|
depends on what you mean by “easier to maintain” – maintaining correctness, sure; maintaining autovectorisation, maybe not so much
|
|
2025-01-02 09:21:05
|
I wouldn’t fancy having to periodically check that the assembly is still vectorised
|
|
|
Demiurge
|
2025-01-02 11:38:34
|
I propose that in addition to lossy and lossless encoding modes, the codec should also be capable of foxy and foxless encoding.
|
|
2025-01-03 12:54:36
|
Otherwise it will never be practical for real-world applications, in my view.
|
|
|
CrushedAsian255
|
2025-01-03 02:16:23
|
`cjxl --foxes=1`
|
|
|
Demiurge
|
2025-01-03 02:32:29
|
In foxy mode, a fox nibbles up unnecessary bits to make your images lighter. Foxless mode doesn't have any foxes.
|
|
|
DZgas Ж
|
2025-01-03 03:35:33
|
cjxl --foxes=0 👍
|
|
|
novomesk
|
2025-01-06 12:08:22
|
During JXL_DEC_BOX_NEED_MORE_OUTPUT event, can I skip further decoding of the current box (for example the decompressed data is too big) just by using JxlDecoderReleaseBoxBuffer only and not setting a new buffer?
|
|
|
_wb_
|
2025-01-06 12:11:58
|
I doubt it, but that would be a nice addition. Probably requires an extra function to do the actual skipping, I think...
|
|
|
Jarek
Alias trick was proposed by Fabian Giesen: https://fgiesen.wordpress.com/2014/02/18/rans-with-static-probability-distributions/
|
|
2025-01-06 02:33:58
|
Is this alias trick by Fabian Giesen the same thing as the AliasTable we have in jxl? <@179701849576833024>
|
|
2025-01-06 02:40:18
|
If I understand correctly then the idea actually dates back to 1974? https://en.wikipedia.org/wiki/Alias_method
|
|
|
|
veluca
|
|
_wb_
Is this alias trick by Fabian Giesen the same thing as the AliasTable we have in jxl? <@179701849576833024>
|
|
2025-01-06 03:04:12
|
yep
|
|
|
Jarek Duda
|
2025-01-06 03:14:56
|
its application to rANS was Fabian Giesen's idea
|
|
|
_wb_
|
2025-01-14 07:48:28
|
I started getting this:
```
error: Scalable vectors function return values not yet supported on Darwin for function: _ZN3jxl5N_SVE14InterpolateVecEu13__SVFloat32_tPKf. Consider always_inline.
fatal error: error in backend: Unsupported feature
clang++: error: clang frontend command failed with exit code 70 (use -v to see invocation)
Apple clang version 16.0.0 (clang-1600.0.26.4)
Target: arm64-apple-darwin23.6.0
Thread model: posix
```
|
|
2025-01-14 07:48:56
|
is highway doing cutting edge stuff again or something?
|
|
2025-01-14 07:59:40
|
oh wait why is it using Apple clang 16 and not the clang 19 I installed with homebrew
|
|
2025-01-14 08:07:43
|
still getting issues with clang 19 though
|
|
2025-01-14 08:08:52
|
```
FAILED: lib/CMakeFiles/jxl_dec-obj.dir/jxl/dec_group.cc.o
/opt/homebrew/Cellar/llvm/19.1.6/bin/clang++ -DFJXL_ENABLE_AVX512=0 -DJXL_INTERNAL_LIBRARY_BUILD -D__DATE__=\"redacted\" -D__TIMESTAMP__=\"redacted\" -D__TIME__=\"redacted\" -I/Users/jonsneyers/dev/libjxl -I/Users/jonsneyers/dev/libjxl/third_party/highway -I/Users/jonsneyers/dev/libjxl/third_party/brotli/c/include -isystem /Users/jonsneyers/dev/libjxl/build/lib/include -DJXL_IS_DEBUG_BUILD -fno-rtti -O2 -g -DNDEBUG -std=c++17 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX15.1.sdk -mmacosx-version-min=14.6 -fPIC -fvisibility=hidden -fvisibility-inlines-hidden -fmacro-prefix-map=/Users/jonsneyers/dev/libjxl=. "-DHWY_DISABLED_TARGETS=(HWY_SSSE3|HWY_AVX3|HWY_AVX3_SPR|HWY_AVX3_ZEN4|HWY_RVV)" -funwind-tables -Xclang -mrelax-all -fno-omit-frame-pointer -Wno-builtin-macro-redefined -Wall -Werror [...] -DJPEGXL_ENABLE_SKCMS=1 -DJPEGXL_ENABLE_TRANSCODE_JPEG=1 -DJPEGXL_ENABLE_BOXES=1 -MD -MT lib/CMakeFiles/jxl_dec-obj.dir/jxl/dec_group.cc.o -MF lib/CMakeFiles/jxl_dec-obj.dir/jxl/dec_group.cc.o.d -o lib/CMakeFiles/jxl_dec-obj.dir/jxl/dec_group.cc.o -c /Users/jonsneyers/dev/libjxl/lib/jxl/dec_group.cc
fatal error: error in backend: Invalid size request on a scalable vector.
```
|
|
|
jonnyawsom3
|
2025-01-14 08:18:42
|
Not related to this, is it?
https://github.com/libjxl/libjxl/pull/4060
|
|
|
_wb_
|
2025-01-14 08:34:42
|
ah, yep, that's the culprit
|
|
|
monad
|
2025-01-14 08:44:36
|
synonym for undisable: enable
|
|
|
_wb_
|
2025-01-14 08:46:04
|
re-enable, to be more exact
|
|
2025-01-14 08:47:20
|
but now we will have to re-disable I think, at least I had to do it to get it to compile without making clang crash
|
|
|
A homosapien
|
2025-01-14 08:53:44
|
Then you have to re-re-enable in the future 😄
|
|
|
_wb_
|
2025-01-14 08:59:39
|
unredisable?
|
|
|
Kleis Auke
|
2025-01-15 09:49:20
|
Updating the highway submodule to <https://github.com/google/highway/commit/1d5cc073467fe38c79de229797e90d07ea287cf7> ought to fix that.
|
|
|
spider-mario
|
2025-01-15 10:51:18
|
I thought we already did that
|
|
2025-01-15 10:51:33
|
<@794205442175402004> did you `git submodule update`?
|
|
|
_wb_
|
2025-01-15 10:55:30
|
Yes I did but I still get clang crashes when trying to compile current git head. If I add back the disabling of `HWY_SVE|HWY_SVE2|HWY_SVE_256|HWY_SVE2_128|HWY_RVV` then it compiles fine.
|
|
2025-01-15 11:04:01
|
I get the same thing as https://github.com/libjxl/libjxl/actions/runs/12786076550/job/35642428301?pr=4066
|
|
|
Kleis Auke
|
|
spider-mario
I thought we already did that
|
|
2025-01-15 11:04:29
|
Unfortunately, that commit is not part of any released version of Highway. The submodule currently points to version 1.2.0 (<https://github.com/libjxl/libjxl/commit/4ba6e969025fba2b7487c1b7214af1591fb83b25>).
|
|
|
_wb_
|
2025-01-15 11:06:03
|
I suppose we can for now just copy the condition in https://github.com/google/highway/commit/1d5cc073467fe38c79de229797e90d07ea287cf7 to libjxl and then drop it again when there's a new Highway release
|
|
|
Kleis Auke
|
2025-01-15 11:16:33
|
It looks like PR https://github.com/libjxl/libjxl/pull/4065 is going to fix this. You could also try to build with `-DJPEGXL_FORCE_SYSTEM_HWY=ON` and install Highway via Homebrew as a potential workaround (see e.g. <https://github.com/Homebrew/homebrew-core/pull/187722>).
|
|
|
Lumen
|
2025-01-15 03:06:57
|
Hi, I am here to ask about licensing 👀
about butteraugli in jxl and butteraugli from google
I've seen that google's butteraugli is under Apache 2.0 and jxl's is BSD-3, I think
And I am trying to rewrite butteraugli (mainly from the google one) as a GPU VapourSynth plugin in vship
https://github.com/Line-fr/Vship/tree/inprogress (ongoing)
I put my code under the MIT license
will it be an issue?
if it is, do you have the contact information of the people I should ask for permission?
|
|
|
novomesk
|
2025-01-21 03:54:11
|
I observed that people who work with CMYK are confused because zero means no ink (white) in one piece of software and full ink (black) in another.
I suggest adding a short note in the headers that in libjxl, zero is full ink and 255 is no ink.
|
|
|
jonnyawsom3
|
2025-01-21 04:07:03
|
You mean for people implementing libjxl?
|
|
|
novomesk
|
|
jonnyawsom3
You mean for people implementing libjxl?
|
|
2025-01-21 04:33:45
|
I mean it is useful information for people using libjxl - to know how to interpret values decoded by libjxl.
|
|
|
Me
|
2025-01-21 04:33:46
|
do jxl-oxide developers hang out here?
|
|
|
jonnyawsom3
|
2025-01-21 04:34:04
|
<#1065165415598272582>
|
|
|
Quackdoc
|
|
novomesk
I observed that people who work with CMYK are confused because zero means no ink (white) in one piece of software and full ink (black) in another.
I suggest adding a short note in the headers that in libjxl, zero is full ink and 255 is no ink.
|
|
2025-01-21 05:05:40
|
this is something that is confusing in general for a lot of people; many different things use the reverse convention of others.
|
|
|
_wb_
|
2025-01-21 08:04:01
|
It all began with 1-bit images (black&white) where the convention was often 1=black for some reason, I guess because 1=ink, 0=no ink kind of makes sense.
|
|
2025-01-21 08:04:40
|
For screen display, 1=light, 0=no light makes more sense though.
|
|
2025-01-21 08:06:05
|
In the light-based models like RGB, having 0=black, 1=white makes sense.
|
|
2025-01-21 08:10:44
|
For CMYK, if you define it as 0=ink, 1=no ink, it maybe makes less sense semantically, but it does kind of work nicely since you get the same 0=black, 1=white as in RGB, and in fact you can even misinterpret the CMY as RGB and still get a reasonable picture (it will not do proper color management of course, but it will be roughly correct since Cyan ink absorbs red light, Magenta ink absorbs green light and Yellow ink absorbs blue light).
|
|
2025-01-21 08:12:04
|
This is also the convention used in CMYK jpegs, and a lot of software actually just shows the CMY misinterpreted as RGB.
|
|
2025-01-21 08:13:20
|
So it seemed natural to follow the same convention. That way at least in principle the CMY channels of a CMYK jpeg can be losslessly transcoded (the K still has to be encoded as pixels though).
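a naive illustration in C++ (made-up helper, not libjxl code) of why the inverted convention lets CMY be misread as RGB: with 0 = full ink and 255 = no ink, each stored value already approximates the light that ink lets through, and black ink just attenuates everything.

```cpp
#include <cassert>
#include <cstdint>

struct RGB { uint8_t r, g, b; };

// Inverted CMYK (0 = full ink, 255 = no ink) naively composited to RGB.
// Each channel is the light passed by the colored ink, attenuated by black.
RGB InvertedCmykToRgb(uint8_t c, uint8_t m, uint8_t y, uint8_t k) {
  auto mul = [](uint8_t a, uint8_t b) -> uint8_t {
    return static_cast<uint8_t>((a * b + 127) / 255);  // rounded a*b/255
  };
  return {mul(c, k), mul(m, k), mul(y, k)};
}
```

dropping the `mul` by K is exactly the "misinterpret CMY as RGB" shortcut: roughly right, just without color management or the black channel.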
|
|
|
lucius
|
2025-01-22 05:44:21
|
While investigating why my lossily compressed float images with float extra channels become significantly larger than expected, I noticed some potentially problematic code in method ModularFrameEncoder::ComputeEncodingData(), file enc_modular.cc at line 924:
```cpp
for (size_t i = 0; i < extra_channels.size(); i++) {
int ec_bitdepth =
metadata.extra_channel_info[i].bit_depth.bits_per_sample;
pixel_type ec_maxval = ec_bitdepth < 32 ? (1u << ec_bitdepth) - 1 : 0;
bitdepth_correction = ec_maxval / 255.f;
float dist = 0;
if (i < cparams_.ec_distance.size()) dist = cparams_.ec_distance[i];
if (dist < 0) dist = cparams_.butteraugli_distance;
quantizers.push_back(quantizer * dist * bitdepth_correction);
}
```
I find this behavior suspicious because the value of ec_distance is effectively ignored (multiplied by zero) if the extra channel is float. Since ec_distance is set by JxlEncoderSetExtraChannelDistance, it should be relevant to my use case. This function is also relevant because it is called for my float images with extra channels, but not for my float images without extra channels.
Interestingly, I observed that lossy float images with uint8 or uint16 extra channels have sizes proportional to their components, as expected. However, when I hard-code bitdepth_correction to 0 in this function, their sizes increase to a level similar to images with float extra channels.
Could you clarify whether this behavior is intentional or if it might indicate an issue with how ec_distance is applied to float extra channels?
|
|
|
_wb_
|
2025-01-22 07:18:38
|
Good question. The lossy approach for extra channels kind of assumes integer data; applying it to float data probably would behave a bit weirdly (though maybe it would kinda work), which I guess is why that code does something weird
|
|
|
lucius
|
2025-01-22 07:33:10
|
I get results like this with float extra channel: Image alone (lossy): 32 169 bytes, extra channel alone (lossy): 36 080 bytes, Image plus extra channel (both lossy): 715 466 bytes. In this example, the extra channel would have been 511 935 bytes if compressed losslessly, so it looks like the extra channel has been expanded, not compressed.
|
|
|
jonnyawsom3
|
2025-01-22 07:40:46
|
I would've assumed it's doing the same as `cjxl -m 1 -d 1` on the float data. What's the source of your float images? Is it lower bitdepth scaled up by any chance?
|
|
|
lucius
|
2025-01-22 07:50:29
|
The images are photos I have shot, exported as jxl from Lightroom Classic. In the last example, they were exported as lossless jxls, with small dimensions (512x384). The sizes I quote were determined by saving the component files from the same program that created the composite. The extra channel was created by extracting the green channel from an image with the same dimensions as the main image.
The photos were stored as raw files, i.e. 16 bits, and exported as jxl 16 bits. Lightroom does not have an option to save as float.
|
|
|
jonnyawsom3
|
2025-01-22 07:52:58
|
So 16-bit ints converted to float?
I always think of this whenever floats and density is involved, but it might be worth checking if they're float16 or float32 being fed to libjxl https://github.com/libjxl/libjxl/issues/3511
(I forgot this is <#848189884614705192> so slowmode is enabled)
|
|
|
lucius
|
2025-01-22 07:56:40
|
They were converted to float in my program by DecodeJpegXlOneShot() from the test programs
I've now submitted a bug report about float extra channel problem: https://github.com/libjxl/libjxl/issues/4082
|
|
|
CrushedAsian255
|
2025-01-25 12:33:03
|
What happens if a modular predictor predicts a value outside of the bounds of the channel's bitdepth? Does it wrap around? Does it clip? What happens to the next pixel?
|
|
|
jonnyawsom3
|
2025-01-25 12:38:27
|
(As far as I can tell) none of the predictors can go above existing pixel values; that's where the MA tree comes in
|
|
|
AccessViolation_
|
|
CrushedAsian255
What happens if a modular predictor predicts a value outside of the bounds of the channel's bitdepth? Does it wrap around? Does it clip? What happens to the next pixel?
|
|
2025-01-25 01:06:12
|
Since it inherits whatever C++ does, unsigned integer arithmetic overflow is wrapping, and reversible:
```rust
fn main() {
    let mut px: u8 = 70;
    let delta: u8 = 232; // prediction delta outside the unsigned 8-bit range (1000 mod 256)
    println!("pre-prediction: {px}");
    // plain `-=`/`+=` would panic on overflow in debug builds
    px = px.wrapping_sub(delta);
    println!("post-prediction: {px}");
    px = px.wrapping_add(delta);
    println!("reconstructed: {px}");
}
```
```
pre-prediction: 70
post-prediction: 94
reconstructed: 70
```
(the code is Rust, but unsigned wrapping arithmetic behaves the same as C++'s; this is just for demonstration)
I didn't see any code that attempts to detect overflows in the predictors, but I don't know anything about what it does regarding this in MA trees like jonny mentioned.
|
|
2025-01-25 01:13:44
|
Note that I am not a jxl dev, I just know predictor arithmetic is C++ arithmetic from looking at the code
|
|
|
CrushedAsian255
|
2025-01-25 01:29:44
|
no, as in it goes outside the bounds of the *pixel's* max and min. The modular arithmetic is handled using 32-bit integers, even if the frame is only 8 bits
|
|
|
AccessViolation_
|
2025-01-25 01:40:40
|
Oh, I didn't know that 👀
That explains why some of the more complex predictors internally overflowing isn't an issue. I think what jonny said about predictors never evaluating to something larger than the inputs is right, like things are averaged or clamped to the max value of the observed values (but I didn't check the self-correcting predictor) so it shouldn't occur
|
|
2025-01-25 01:54:59
|
```cpp
// Clamps gradient to the min/max of n, w (and l, implicitly).
static JXL_INLINE int32_t ClampedGradient(const int32_t n, const int32_t w,
const int32_t l) {
const int32_t m = std::min(n, w);
const int32_t M = std::max(n, w);
// The end result of this operation doesn't overflow or underflow if the
// result is between m and M, but the intermediate value may overflow, so we
// do the intermediate operations in uint32_t and check later if we had an
// overflow or underflow condition comparing m, M and l directly.
// grad = M + m - l = n + w - l
const int32_t grad =
static_cast<int32_t>(static_cast<uint32_t>(n) + static_cast<uint32_t>(w) -
static_cast<uint32_t>(l));
// We use two sets of ternary operators to force the evaluation of them in
// any case, allowing the compiler to avoid branches and use cmovl/cmovg in
// x86.
const int32_t grad_clamp_M = (l < m) ? M : grad;
return (l > M) ? m : grad_clamp_M;
}
```
<https://github.com/libjxl/libjxl/blob/main/lib/jxl/modular/encoding/context_predict.h>
|
|
|
_wb_
|
2025-01-25 07:55:07
|
Also note that sample values are allowed to be outside the nominal range, e.g. 8-bit is nominally [0,255] but if you want to have negative values or values > 255, this is allowed. You can expect decoders to clamp it to the nominal range when representing the image in a uint buffer, but only at the end. So you can have layers with negative samples (or even negative alpha) and they do not get clamped before blending, etc.
|
|
|
CrushedAsian255
|
|
_wb_
Also note that sample values are allowed to be outside the nominal range, e.g. 8-bit is nominally [0,255] but if you want to have negative values or values > 255, this is allowed. You can expect decoders to clamp it to the nominal range when representing the image in a uint buffer, but only at the end. So you can have layers with negative samples (or even negative alpha) and they do not get clamped before blending, etc.
|
|
2025-01-25 11:49:24
|
so it only wraps at the 32 bit integer limit? or is that undefined behaviour
|
|
|
_wb_
|
2025-01-26 07:20:12
|
It's not a conforming bitstream if it overflows int32_t (in the sample values or in any intermediate modular buffer; though predictors like (W+N)/2 have to be computed without overflow in their arithmetic).
If the header field modular_16_bit is true, then it is not a conforming bitstream if it overflows int16_t.
|
|
|
CrushedAsian255
|
|
_wb_
It's not a conforming bitstream if it overflows int32_t (in the sample values or in any intermediate modular buffer; though predictors like (W+N)/2 have to be computed without overflow in their arithmetic).
If the header field modular_16_bit is true, then it is not a conforming bitstream if it overflows int16_t.
|
|
2025-01-26 07:22:36
|
so things like `(W+N)/2` have to be done either in 64 bit, or something like ```(W>>1)+(N>>1)+(W&N&1)``` to stay within int32_t ?
|
|
|
_wb_
|
2025-01-26 07:26:00
|
Yes, local arithmetic has to be done to avoid overflow in intermediate results (assuming inputs fit in int32), but any values stored in buffers can be assumed to fit in int32 / int16 (if it doesn't, it's a non-conforming bitstream so decoder behavior is undefined).
|
|
|
Traneptora
|
|
CrushedAsian255
What happens if a modular predictor predicts a value outside of the bounds of the channel's bitdepth? Does it wrap around? Does it clip? What happens to the next pixel?
|
|
2025-01-30 07:55:09
|
it just predicts it outside of the bounds. modular allows you to have negative, etc. values. modular is specified as an array of integers
|
|
2025-01-30 07:55:18
|
clamping is done on render, not on decode
|
|
|
BlueSwordM
|
|
Butteraugli apparently decodes Bt.709 using the inverse scene-referred OETF. But most Bt.709 content is encoded using inverse gamma 2.4, and decoded as such, so the scores are way off in dark regions...
But even worse, I think all Butteraugli scores might technically be off, because my perception of an image decoded with gamma 2.2 aligns better with the Butteraugli distmap when Butteraugli decodes it using the inverse sRGB OETF than when it does so using gamma 2.2. Which makes me wonder: might the original "sRGB" training data have been displayed on gamma 2.2 monitors but incorrectly decoded and mapped using the inverse sRGB OETF? However, correcting this would actually reduce perceptual alignment for most people, since many if not most "sRGB" images are meant to be decoded as gamma 2.2, and are usually viewed that way anyway, since most displays use gamma 2.2.
|
|
2025-02-20 06:53:38
|
I wouldn't be so sure. Better to ask Jyrki and _wb_ directly.
|
|
|
damian101
|
|
BlueSwordM
I wouldn't be so sure. Better to ask Jyrki and _wb_ directly.
|
|
2025-02-20 07:29:07
|
Btw, while I think butteraugli "sees" sRGB-tagged images as gamma 2.2 (and generally decodes gamma-2.2-tagged images incorrectly), this doesn't seem to apply to ssimulacra2.
|
|
|
Quackdoc
|
|
Btw, while I think butteraugli "sees" sRGB-tagged images as gamma 2.2 (and generally decodes gamma-2.2-tagged images incorrectly), this doesn't seem to apply to ssimulacra2.
|
|
2025-02-20 09:42:58
|
at least there is no consensus on how to decode sRGB images (or even encode them, for that matter), so that is at least not too bad
|
|
|
damian101
|
|
Quackdoc
at least there is no consensus on how to decode sRGB images (or even encode them, for that matter), so that is at least not too bad
|
|
2025-02-20 09:45:39
|
If you encode using gamma 2.2 and tag as sRGB anyway, it's actually an improvement.
But this error, if it exists, and if it is of the nature I assumed, should affect everything, any transfer curve.
|
|
|
|
Jeremy
|
2025-02-25 04:18:07
|
Hi, I'm an Ubuntu Desktop Developer. We would like to include JPEG XL in Ubuntu 25.04 but this is blocked on finishing a review from the Ubuntu Security Team. Could someone look into https://github.com/libjxl/libjxl/pull/4111 ?
|
|
|
|
veluca
|
|
Jeremy
Hi, I'm an Ubuntu Desktop Developer. We would like to include JPEG XL in Ubuntu 25.04 but this is blocked on finishing a review from the Ubuntu Security Team. Could someone look into https://github.com/libjxl/libjxl/pull/4111 ?
|
|
2025-02-25 04:58:48
|
pinged this to someone that can probably give it a look soon!
|
|
|
Demiurge
|
2025-02-25 11:27:12
|
looks like it's been merged today.
|
|
|
|
Lucas Chollet
|
2025-03-13 08:27:08
|
Is there a way to take a container-encoded JPEG XL and get the raw codestream from it?
|
|
|
Laserhosen
|
|
Lucas Chollet
Is there a way to take a container-encoded JPEG XL and get the raw codestream from it?
|
|
2025-03-13 08:33:43
|
I made a python script that does exactly that (and can also pack a codestream back into a container)... it's not public yet but I'll put it on github soon-ish if it's any use to people. I'm not aware of any other tool that does it.
|
|
|
|
Lucas Chollet
|
2025-03-13 08:35:43
|
Nice, I was thinking about writing one too 😅. Would you send it to me now?
|
|
|
Laserhosen
|
2025-03-13 08:41:54
|
Yeah, let me tidy it up a little bit...
|
|
2025-03-13 09:29:37
|
https://github.com/alistair7/boxcutter
There you go. It's way more complicated than you need, but it should do the trick.
```
$ ./boxcutter.py list fog.jxl
seq off len type
----------------------
[0] 0x000 12 JXL
[1] 0x00c 20 ftyp
[2] 0x020 269 jxlp
[3] 0x12d 146 brob : Compressed Exif box.
[4] 0x1bf 696 brob : Compressed xml box.
[5] 0x477 2117651 jxlp
$ ./boxcutter.py extract-jxl-codestream fog.jxl fog-raw.jxl
$ ls -l fog*.jxl
-rw------- 1 al al 2118794 Mar 20 2023 fog.jxl
-rw-rw-r-- 1 al al 2117896 Mar 13 21:25 fog-raw.jxl
```
|
|
|
|
Lucas Chollet
|
2025-03-13 10:06:04
|
Thanks!
Is that something that devs want in a main tool, like cjxl? I can submit a PR
|
|
|
Laserhosen
|
2025-03-13 10:08:07
|
It probably belongs in the mythical jxltran utility 🙂
https://github.com/libjxl/libjxl/issues/871
|
|
|
RaveSteel
|
2025-03-13 10:10:58
|
Metadata remains one of the pain points for me, I have dozens of TIFFs with extensive metadata which does not stay the same when the conversion to JXL takes place. Even if exiftool is used to copy over metadata afterwards
Which is a real shame, since there are massive savings to be had. One 3.9GB File was reduced to 2.5GB at e2. I would have used e7, but the difference in opening the file at e7 is 20-30 seconds longer than with e2
|
|
|
|
Lucas Chollet
|
|
Laserhosen
It probably belongs in the mythical jxltran utility 🙂
https://github.com/libjxl/libjxl/issues/871
|
|
2025-03-13 10:11:56
|
I can give that a shot 🤔
|
|
|
Laserhosen
|
|
Lucas Chollet
I can give that a shot 🤔
|
|
2025-03-13 10:17:24
|
They'd probably want a C++ implementation for the official tool rather than [my questionable] python. I'm intending to add a few more container-level features to the script, but I hope jxltran will make it redundant.
|
|
|
|
Lucas Chollet
|
2025-03-13 10:18:44
|
Sure, I was thinking of doing that in C++ too
|
|
|
Demiurge
|
|
RaveSteel
Metadata remains one of the pain points for me, I have dozens of TIFFs with extensive metadata which does not stay the same when the conversion to JXL takes place. Even if exiftool is used to copy over metadata afterwards
Which is a real shame, since there are massive savings to be had. One 3.9GB File was reduced to 2.5GB at e2. I would have used e7, but the difference in opening the file at e7 is 20-30 seconds longer than with e2
|
|
2025-03-14 12:35:40
|
What kind of metadata? It's surprising that even exiftool, the holy grail of metadata hacking, isn't up to the task.
|
|
|
CrushedAsian255
|
|
RaveSteel
Metadata remains one of the pain points for me, I have dozens of TIFFs with extensive metadata which does not stay the same when the conversion to JXL takes place. Even if exiftool is used to copy over metadata afterwards
Which is a real shame, since there are massive savings to be had. One 3.9GB File was reduced to 2.5GB at e2. I would have used e7, but the difference in opening the file at e7 is 20-30 seconds longer than with e2
|
|
2025-03-14 05:02:18
|
You could always just make up a new box to stuff it all in
|
|
|
Demiurge
|
2025-03-14 04:41:44
|
Just use xml
|
|
|
AccessViolation_
|
2025-03-14 05:13:12
|
no use RON <:Stonks:806137886726553651>
edit: oops this is the contact-devs channel, sorry for meming
|
|
|
Quackdoc
|
2025-03-14 05:43:40
|
kdl [av1_chad](https://cdn.discordapp.com/emojis/862625638238257183.webp?size=48&name=av1_chad)
|
|
|
jonnyawsom3
|
2025-03-22 06:15:34
|
Could someone double check I'm giving the right information with regards to jpegli's API version?
There's no documentation anywhere and [the repo](<https://github.com/google/jpegli?tab=readme-ov-file#usage>) says it's only compatible with `so.62.3.0`
https://github.com/RawTherapee/RawTherapee/issues/7125#issuecomment-2745395327
|
|
2025-03-22 07:27:56
|
I'm... *Pretty* sure libjpegli can do `so.8.3.2`, right?
|
|
|
_wb_
|
2025-03-23 07:52:37
|
It's a compile flag that selects which libjpeg API to use.
|
|
|
Demiurge
|
|
jonnyawsom3
|
2025-03-23 02:12:10
|
Would you mind leaving a comment on the [RawTherapee issue](<https://github.com/RawTherapee/RawTherapee/issues/7125#issuecomment-2745404354>)?
They seem to think it's only a version number change and not an actual API change. Without documentation they have no reason to believe me
|
|
|
_wb_
|
2025-03-23 06:14:20
|
Major version number changes in libraries are generally a sign of breaking API changes
|
|
|
spider-mario
|
|
Would you mind leaving a comment on the [RawTherapee issue](<https://github.com/RawTherapee/RawTherapee/issues/7125#issuecomment-2745404354>)?
They seem to think it's only a version number change and not an actual API change. Without documentation they have no reason to believe me
|
|
2025-03-23 06:19:34
|
could you maybe link to https://github.com/google/jpegli/blob/bc19ca2393f79bfe0a4a9518f77e4ad33ce1ab7a/lib/jpegli/decode.h#L75-L77 as evidence that the variable is used to control what API jpegli exposes?
|
|
2025-03-23 06:20:04
|
(+ https://github.com/google/jpegli/blob/bc19ca2393f79bfe0a4a9518f77e4ad33ce1ab7a/lib/jpegli.cmake#L19-L25 for the connection from the variable you mentioned to the one that is used in that header)
|
|
|
jonnyawsom3
|
|
spider-mario
could you maybe link to https://github.com/google/jpegli/blob/bc19ca2393f79bfe0a4a9518f77e4ad33ce1ab7a/lib/jpegli/decode.h#L75-L77 as evidence that the variable is used to control what API jpegli exposes?
|
|
2025-03-23 06:36:31
|
Linked to both, hopefully they don't just say 'that's just changing the version number' again. Thanks for the help
|
|
|
spider-mario
could you maybe link to https://github.com/google/jpegli/blob/bc19ca2393f79bfe0a4a9518f77e4ad33ce1ab7a/lib/jpegli/decode.h#L75-L77 as evidence that the variable is used to control what API jpegli exposes?
|
|
2025-03-23 10:17:13
|
Still nope
|
|
|
username
|
|
Still nope
|
|
2025-03-23 10:22:48
|
oh god, is this about system-compiled libs 😭. literally one of the main reasons I hate working with a lot of Linux projects is that a bunch of them rely on system-provided libs rather than bundling/compiling their own, which means that if something needs to change in one of them, there's just no solution and you're screwed.
|
|
|
spider-mario
|
2025-03-23 10:23:30
|
“as previously stated”, but back when they stated that, they appeared unaware that it could be configured for a different API version at all
|
|
|
jonnyawsom3
|
|
username
oh god, is this about system-compiled libs 😭. literally one of the main reasons I hate working with a lot of Linux projects is that a bunch of them rely on system-provided libs rather than bundling/compiling their own, which means that if something needs to change in one of them, there's just no solution and you're screwed.
|
|
2025-03-23 10:27:42
|
The issue is about replacing libjpeg-turbo with libjpegli in RawTherapee for higher decode quality using int16/float32 decoding (It allows decent exposure adjustment without RAWs).
I pointed out they already build libjxl and libjpegli is a drop-in replacement, so it should be simple. They then said the Turbo API is SO 8 not 6, so it's incompatible.
I told them how to set jpegli to 8 using the cmake header, but they said that's just changing the version number, not the API. So then I linked to the API being changed, but they still say it's incompatible until I show them a Linux system using the swapped library...
I'm on Windows 10 and have never touched Linux in my life, hence asking for help here. I suppose I could link to [this](<https://github.com/libvips/libvips/discussions/4177#discussioncomment-10810561>) but I think they'd take an actual developer's word over my 7th link in the past 2 days...
|
|
|
spider-mario
“as previously stated”, but back when they stated that, they appeared unaware that it could be configured for a different API version at all
|
|
2025-03-23 10:33:03
|
I think they mean when they tried with it set to the SO 6 API, and still don't believe the cmake flag changes it... Maybe they think those commits automatically change API version based on the replaced library, or handle both versions at once or something? (5 minute slow mode my beloved)
|
|
|
spider-mario
|
2025-03-23 10:33:21
|
ah, right, I’m immune
|
|
|
jonnyawsom3
|
2025-03-23 10:38:56
|
<@877735395020927046> I assume that's you? Could you clarify what you mean? (Maybe move to <#805176455658733570> for longer discussion)
Hmm, last message here was 2021, so maybe not...
|
|
|
username
|
|
<@877735395020927046> I assume that's you? Could you clarify what you mean? (Maybe move to <#805176455658733570> for longer discussion)
Hmm, last message here was 2021, so maybe not...
|
|
2025-03-23 10:47:08
|
might be them, idk 🤷. I will say though that the account isn't dead, because it was last active at least 3 days ago
|
|
|
jonnyawsom3
|
|
_wb_
Major version number changes in libraries are generally a sign of breaking API changes
|
|
2025-03-23 10:48:31
|
I only just realised why you said that. As long as it's `SO.8.x.x`, the API should be compatible regardless of the other version numbers. I'll try to explain that with a link to the libVIPS example, but I feel like I'm talking to a rock
|
|
2025-03-24 03:30:35
|
[Hmm](<https://github.com/RawTherapee/RawTherapee/issues/7125#issuecomment-2746670684>)
|
|
|
Demiurge
|
2025-03-24 07:33:25
|
It can be used as a drop-in replacement either by selecting the correct API version at compile time, or by source code changes (replacing the `jpeg_` prefix with `jpegli_` in the API)
|
|
|
spider-mario
could you maybe link to https://github.com/google/jpegli/blob/bc19ca2393f79bfe0a4a9518f77e4ad33ce1ab7a/lib/jpegli/decode.h#L75-L77 as evidence that the variable is used to control what API jpegli exposes?
|
|
2025-03-24 07:39:00
|
Wait huh?! This is dumb, the libjpeg api version needs to be set too even for the new `jpegli_` api for some reason...
|
|
2025-03-24 07:44:37
|
It only needs to be here in this file
https://github.com/google/jpegli/blob/main/lib/jpegli/libjpeg_wrapper.cc
|
|
2025-03-24 12:06:37
|
libjpegli would have to get rid of those conditionals in the `jpegli_` namespace and expose all of the v8 functionality, so that software like RawTherapee can switch over by just find/replacing `jpeg_` with `jpegli_` and using either a statically or dynamically linked jpegli. The wrapper would also have to expose the jpeg12 API, so that libtiff/RawTherapee and a bunch of other software can work with the 12-bit API.
|
|
|
|
Lucas Chollet
|
2025-03-26 04:35:01
|
Hey, two days ago, I made a [PR](https://github.com/libjxl/libjxl/pull/4161) for `libjxl`. How can I get it reviewed?
|
|
|
monad
|
|
_wb_
|
2025-03-26 07:35:09
|
I will take a look later today
|
|
|
|
Lucas Chollet
|
2025-03-26 02:14:34
|
Thanks!
|
|
|
Traneptora
|
|
Lucas Chollet
Hey, two days ago, I made a [PR](https://github.com/libjxl/libjxl/pull/4161) for `libjxl`. How can I get it reviewed?
|
|
2025-03-29 06:57:17
|
This is great and I'll try to put some work into it too
|
|
|
|
Lucas Chollet
|
2025-03-29 08:51:20
|
Thanks, I'm really going step by step on this one
|
|
|
damian101
|
2025-03-30 02:15:55
|
I wonder, how would I turn an ssimulacra2 score into a distance score?
I assume a piece-wise function is needed, as scores beyond transparency behave differently?
Asking here in case this was already done somewhere, or such a relationship was considered in the design of ssimulacra2.
|
|
|
_wb_
|
2025-03-30 02:45:27
|
Mapping ssimu2 to JND units you mean?
|
|
|
damian101
|
|
_wb_
Mapping ssimu2 to JND units you mean?
|
|
2025-03-30 09:55:07
|
Not sure in this context...
I was thinking of butteraugli-style distance values, the distance relative to reference distance at which the error in an image becomes imperceptible.
What I'm looking for is basically a rough formula to predict Butteraugli (3-norm, I guess) distance scores from ssimulacra2 scores.
|
|
|
monad
|
2025-03-31 01:37:49
|
Maybe check the data at https://discord.com/channels/794206087879852103/803645746661425173/1067108316221816962 or generate some data based on <https://cloudinary.com/labs/cid22>. Predicting precisely per-image seems impossible, but you could correlate ranges of scores.
|
|
|
Traneptora
|
|
I wonder, how would I turn an ssimulacra2 score into a distance score?
I assume a piece-wise function is needed, as scores beyond transparency behave differently?
Asking here in case this was already done somewhere, or such a relationship was considered in the design of ssimulacra2.
|
|
2025-03-31 06:17:51
|
the goal of ssimulacra2 is to roughly match JPEG quality, so you could use the same function used to map quality to distance (q2d). but this isn't guaranteed to actually work; you may need to tweak it.
```c
static float quality_to_distance(float quality)
{
if (quality >= 100.0)
return 0.0;
else if (quality >= 90.0)
return (100.0 - quality) * 0.10;
else if (quality >= 30.0)
return 0.1 + (100.0 - quality) * 0.09;
else if (quality > 0.0)
return 15.0 + (59.0 * quality - 4350.0) * quality / 9000.0;
else
return 15.0;
}
```
|
|
2025-03-31 06:18:30
|
Since ssimulacra2 can have wildly negative values for uncorrelated images this isn't perfect. but a start
|
|
|
Lilli
|
2025-04-01 08:53:21
|
Hi there. Do you have some pointers on how to use the chunked API? I couldn't make it work so far :/ Is there an available example I can use?
|
|
2025-04-01 09:14:58
|
Because right now I'm just chunking the image manually and making 4x4 smaller jxl files, that I have to then decompress and recombine when I want to use them.
|
|
2025-04-01 03:44:08
|
I swear it's not an april fool's joke =_=
|
|
|
_wb_
|
2025-04-01 05:03:38
|
https://github.com/libjxl/libjxl/blob/0c1aba1d51ed32f61be4de638f075f2b199082d0/lib/extras/enc/jxl.cc#L368 maybe this helps?
|
|
|
|
Lucas Chollet
|
2025-04-02 08:43:22
|
Small bump for <https://github.com/libjxl/libjxl/pull/4165>
|
|
|
Lilli
|
|
_wb_
https://github.com/libjxl/libjxl/blob/0c1aba1d51ed32f61be4de638f075f2b199082d0/lib/extras/enc/jxl.cc#L368 maybe this helps?
|
|
2025-04-03 09:36:43
|
Thanks to that I noticed I had swapped two arguments, (channels and offset) which of course broke everything, it now works ! I will check if the amount of chunking satisfies the memory requirements of the raspi 🙂 great stuff!
|
|
|
0xC0000054
|
|
Lilli
Thanks to that I noticed I had swapped two arguments, (channels and offset) which of course broke everything, it now works ! I will check if the amount of chunking satisfies the memory requirements of the raspi 🙂 great stuff!
|
|
2025-04-03 09:47:23
|
Is your chunked frame code open-source? I have previously attempted to use the chunked frame API, but I was never able to figure out the offset system.
|
|
|
Lilli
|
2025-04-03 09:48:39
|
It's not online, it's just a POC for me, but I can give you what I have
|
|
2025-04-03 09:59:34
|
From what I've understood, `*row_offset = width*bytesPerPixel`, so with 3 channels in 16 bits it's `width*6`
Similarly, when computing the offset in the callback, you have to be careful to count bytes per pixel instead of the number of pixels
`offset = image.data() + ypos*width*bytesPerPixel + xpos*bytesPerPixel;`
the `*row_offset` doesn't change unless your image has some exotic configuration
|
|
|
spider-mario
|
|
Lilli
Thanks to that I noticed I had swapped two arguments, (channels and offset) which of course broke everything, it now works ! I will check if the amount of chunking satisfies the memory requirements of the raspi 🙂 great stuff!
|
|
2025-04-03 11:54:52
|
parameter comments + https://clang.llvm.org/extra/clang-tidy/checks/bugprone/argument-comment.html can help find such issues
|
|
|
Lilli
|
2025-04-03 12:02:31
|
Good to keep in mind, thanks 🙂
|
|
|
A homosapien
|
2025-04-06 01:41:24
|
What does the underscore indicate, for example in `cparams_.IsLossless()` vs `cparams.IsLossless()`?
|
|
|
spider-mario
|
2025-04-06 08:02:12
|
a private class member
|
|
2025-04-06 08:03:45
|
https://google.github.io/styleguide/cppguide.html#Variable_Names
|
|
|
username
|
2025-04-19 06:46:48
|
Would there be any downsides to making the default group ordering middle-out instead of "top down" in libjxl when encoding? In most images, wouldn't the center of attention be closer to the middle than the top left, so why should the default be the top left?
|
|
|
CrushedAsian255
|
|
username
Would there be any downsides to making the default group ordering middle-out instead of "top down" in libjxl when encoding? In most images, wouldn't the center of attention be closer to the middle than the top left, so why should the default be the top left?
|
|
2025-04-20 06:22:47
|
Not sure how that would work for streaming input
|
|
|
jonnyawsom3
|
|
CrushedAsian255
Not sure how that would work for streaming input
|
|
2025-04-20 07:47:46
|
Not to mention buffering has already broken progressive https://github.com/libjxl/libjxl/issues/3823
|
|
|
DZgas Ж
|
2025-05-31 10:44:04
|
How significant is the issue of JPEG XL decoding speed? How much does it affect the adoption of the format? **Has anyone conducted such research?**
-
I am concerned that instead of transitioning to new formats, there is continuous improvement of old ones. The problems with the transition to AVIF and AV1 are undoubtedly related to performance, as are Google's rejection of AVIF for YouTube video previews and its use of JPEG previews on images.google.com
-
I also noticed that there are practically no tests of decoding speed, because, as I understand it, this question was never on the agenda; after all, it is quite expected that time passes and technology advances. But that time has passed.
-
Looking at the actual difference in JPEG XL decoding speed, I could conclude that the switch to it is not happening simply because it is faster to transmit 5 times as much JPEG data, which opens instantly on a smartphone, than to wait for JXL to decode on the same smartphone. This gives a complete advantage to WebP and JPEG. -- And it also kills my hope for the future, because no one is making competitors for those formats; everyone is making "codecs of the future"
|
|
|
jonnyawsom3
|
|
DZgas Ж
How significant is the issue of JPEG XL decoding speed? How much does it affect the adoption of the format? **Has anyone conducted such research?**
-
I am concerned that instead of transitioning to new formats, there is continuous improvement of old ones. The problems with the transition to AVIF and AV1 are undoubtedly related to performance, as are Google's rejection of AVIF for YouTube video previews and its use of JPEG previews on images.google.com
-
I also noticed that there are practically no tests of decoding speed, because, as I understand it, this question was never on the agenda; after all, it is quite expected that time passes and technology advances. But that time has passed.
-
Looking at the actual difference in JPEG XL decoding speed, I could conclude that the switch to it is not happening simply because it is faster to transmit 5 times as much JPEG data, which opens instantly on a smartphone, than to wait for JXL to decode on the same smartphone. This gives a complete advantage to WebP and JPEG. -- And it also kills my hope for the future, because no one is making competitors for those formats; everyone is making "codecs of the future"
|
|
2025-06-01 12:06:26
|
Have you tried --faster_decoding? Especially for lossless
|
|
|
A homosapien
|
|
DZgas Ж
How significant is the issue of JPEG XL decoding speed? How much does it affect the adoption of the format? **Has anyone conducted such research?**
-
I am concerned that instead of transitioning to new formats, there is continuous improvement of old ones. The problems with the transition to AVIF and AV1 are undoubtedly related to performance, as are Google's rejection of AVIF for YouTube video previews and its use of JPEG previews on images.google.com
-
I also noticed that there are practically no tests of decoding speed, because, as I understand it, this question was never on the agenda; after all, it is quite expected that time passes and technology advances. But that time has passed.
-
Looking at the actual difference in JPEG XL decoding speed, I could conclude that the switch to it is not happening simply because it is faster to transmit 5 times as much JPEG data, which opens instantly on a smartphone, than to wait for JXL to decode on the same smartphone. This gives a complete advantage to WebP and JPEG. -- And it also kills my hope for the future, because no one is making competitors for those formats; everyone is making "codecs of the future"
|
|
2025-06-01 12:07:23
|
I've done some benchmarks here https://discord.com/channels/794206087879852103/1358733203619319858 (lossless) and here https://discord.com/channels/794206087879852103/803645746661425173/1321342478376505365 (lossless & lossy). Lossy JXL is quite fast, on my machine it's just as fast as AVIF if not faster. And with faster_decoding, it blows AVIF out of the water, so I don't think decoding speed is the main issue here.
|
|
|
damian101
|
2025-06-01 12:07:42
|
high resolution images on the web won't be lossless
|
|
|
Quackdoc
|
|
high resolution images on the web won't be lossless
|
|
2025-06-01 12:48:38
|
unless you have a gallery
|
|
|
DZgas Ж
|
|
A homosapien
I've done some benchmarks here https://discord.com/channels/794206087879852103/1358733203619319858 (lossless) and here https://discord.com/channels/794206087879852103/803645746661425173/1321342478376505365 (lossless & lossy). Lossy JXL is quite fast, on my machine it's just as fast as AVIF if not faster. And with faster_decoding, it blows AVIF out of the water, so I don't think decoding speed is the main issue here.
|
|
2025-06-01 09:32:08
|
I think faster_decoding gives only a trivial difference of tens of percent, when for lossy a several-times speedup would be needed. The lossless problem is a separate one, and I think the lossy problem is much more important.
Thanks for your answers, but in the developers' chat I would like to get a full answer from the developers
|
|
|
Quackdoc
|
2025-06-01 09:45:06
|
how are you generating and decoding the image? those numbers look super off
|
|
|
DZgas Ж
|
2025-06-01 09:46:15
|
because decoding was done in 1 thread for everything
-
in fact, I measured not the speed but how much more complex the different codecs really are for the decoding device. and yes, in the time (amount of power) it takes to decode JPEG XL, you can decode 26 times more JPEG pixels
-
it is important to see that uncompressed data takes even longer to decode, because of the delays in reading from disk and subsequently moving it through memory
-
I was surprised by BMP myself. but that's what we have; maybe the decoder is very bad, maybe it hasn't been updated in 30 years
|
|
|
Quackdoc
|
2025-06-01 09:52:09
|
even when using 1 core, these numbers are still wild. what's the command line used to decode and encode the images? jxl is for sure on the slower side, but these numbers are way out there
|
|
|
DZgas Ж
|
|
Quackdoc
even when using 1 core, these numbers are still wild. whats the cmdline used to decode and encode the image? jxl is for sure on the slower side, but these numbers are way out there
|
|
2025-06-01 09:57:12
|
here we are getting close to my original question: where are the real, big, objective tests and research? I did this for myself, to roughly understand what is what. One thread, one computer, one set of pictures; I think 100 random pictures in a folder, with the total decoding time divided by their number. I used ffmpeg for everything. I did the tests half a year ago and didn't save any of the flags
|
|
|
Quackdoc
|
2025-06-01 10:04:49
|
Personally, as an ex-developer of various programs and websites (I hardly do much coding any more), whether I would choose jxl would depend on the clientele, but I have yet to find a case where decode speed is a significant issue, aside from video editing, which is served perfectly well by faster_decoding.
everyone has likely done their own testing; personally I found that jxl trounced avif in decode speed on a set of 20k images when decoded on my phone, by nearly 20% in speed and 10% in battery life, and temperatures were far better with jxl in a purely sequential test loop. But it's really hard to comment on the numbers you got when you haven't provided even a basic replication case. I also didn't bother limiting cores, since that's just not applicable to my use case.
EDIT: for context on speed, I have an older and slightly low-RAM/CPU LG G7 ThinQ (SDM845, 4 GB RAM), primarily use JXL for image storage (16 GB worth), and have no issues with decode speed; using a gallery app is far faster than when I tested with avif, both at the time using a sample size of 10k images
|
|
|
A homosapien
|
2025-06-01 10:14:08
|
I'm doing some benchmarking of my own following dzgas's settings in ffmpeg, and I am getting numbers nowhere near what is shown here. JXL is around 3-5x slower than JPEG, not 26x slower. Also I normalized for filesize, so JXL is achieving much higher quality than a simple progressive 420 JPEG.
|
|
|
_wb_
|
2025-06-01 12:46:30
|
It will depend on the specific CPU too. But something like 5x slower than JPEG seems reasonable (for lossy jxl). Using 8 cores, jxl is probably about as fast as jpeg decoding. Using more cores it can in principle be faster.
|
|
|
CrushedAsian255
|
2025-06-01 12:53:44
|
But then the issue is that if you’re doing a batch convert job, JXL would take longer: with JPEG you can do one image on each thread, but with JXL you can either do one image on multiple threads (but then you have less concurrency) or give each process a single thread (but now each image takes 5x as long)
|
|
|
_wb_
|
2025-06-01 01:07:46
|
Yes, better compression does come at a cost in CPU. Then again jpeg is super fast on current machines, it was designed for the 1990s...
|
|
|
DZgas Ж
|
|
A homosapien
I'm doing some benchmarking of my own following dzgas's settings in ffmpeg, and I am getting numbers nowhere near what is shown here. JXL is around 3-5x slower than JPEG, not 26x slower. Also I normalized for filesize, so JXL is achieving much higher quality than a simple progressive 420 JPEG.
|
|
2025-06-01 08:19:55
|
decode 8 times slower when encoding -e 7
-
in single-threaded comparison. I probably did the initial tests on -9.
-
I don't have the opportunity to check this again now, because 24 GB of RAM is still not enough to encode with jpeg xl -e 9 the 12k x 12k picture I do the new tests on, since that is more convenient than having scripts and folders full of pictures
|
|
|
jonnyawsom3
|
2025-06-02 11:30:19
|
I know on Jon's Mac the gradient predictor is significantly faster than on my AMD system, so that could be slowing down lossy somewhat
|
|
|
CrushedAsian255
|
|
_wb_
Yes, better compression does come at a cost in CPU. Then again jpeg is super fast on current machines, it was designed for the 1990s...
|
|
2025-06-02 11:39:26
|
is there any JXL mode that is as fast as JPEG with better compression?
|
|
|
_wb_
|
2025-06-02 11:44:43
|
VarDCT e3 gets close to just JPEG. But it still does XYB and uses ANS entropy coding. It could be sped up by doing YCbCr with Huffman instead — at which point it basically just becomes JPEG with somewhat better compression.
|
|
|
CrushedAsian255
|
|
_wb_
VarDCT e3 gets close to just JPEG. But it still does XYB and uses ANS entropy coding. It could be sped up by doing YCbCr with Huffman instead — at which point it basically just becomes JPEG with somewhat better compression.
|
|
2025-06-02 11:46:17
|
i'm thinking like how there would be parts of JPEG XL that lend themselves better to SIMDing
|
|
|
damian101
|
|
_wb_
VarDCT e3 gets close to just JPEG. But it still does XYB and uses ANS entropy coding. It could be sped up by doing YCbCr with Huffman instead — at which point it basically just becomes JPEG with somewhat better compression.
|
|
2025-06-02 11:58:20
|
Are there better alternatives for YCbCr that have the same computational complexity coming from R'G'B'?
Are there good options in-between YCbCr and all those expensive perceptually uniform colorspaces where you have to go through LMS first?
|
|
|
_wb_
|
2025-06-02 04:05:39
|
imo YCoCg is at least as good as YCbCr and it has the advantage of not needing any multiplications and being reversible so also suitable for lossless. But in jxl you cannot really do anything except XYB and YCbCr when using VarDCT.
|
|
|
damian101
|
2025-06-02 09:13:33
|
yeah...
btw, might a faster, lower-precision R'G'B' <-> XYB be feasible? maybe even using a high-precision 3D LUT? It would need to be precomputed at compile time to be faster, though, so it couldn't support everything; when converting directly between R'G'B' and XYB anyway. Also not good for memory consumption.
|
|
|
Kupitman
|
|
DZgas Ж
because decoding in 1 thread for all
-
in fact, i measured not the speed, but how much more complex the different codecs really are for the decoding device. and yes, in the time (amount of power) it takes to decode jpeg xl, you can decode 26 times more jpeg pixels
-
it is important to see that uncompressed data takes even longer to "decode", because of the delays in reading from disk and the subsequent movement through memory
-
I was surprised by bmp myself. but that's what we have; maybe the decoder is very bad, maybe it hasn't been updated for 30 years
|
|
2025-06-03 04:35:51
|
Why 1 thread when any smartphone has 8? 24 GB of RAM is enough for 12k x 12k lossless; for lossy, use effort 7 VarDCT
|
|
|
gb82
|
2025-06-10 02:29:26
|
if any devs could answer this, it'd be much appreciated. As far as I understand, the way BD-rate is currently measured on an image dataset is:
1. Aggregate size and metrics from each image in the dataset for a lossy encoder
2. Create RD curves from aggregate image data
3. Compute BD-rate on the final curves
To me this feels like treating the dataset as if it were one large image. I don't know how good or bad that is, but an alternative approach could be:
1. Build RD curves for each image in a dataset individually, and compute BD-rate
2. Create a running list of the BD-rate for each image
3. Compute a final average BD-rate by averaging the BD-rates in your list
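For concreteness, a rough numpy sketch of the per-image variant. The `bd_rate` helper and all data here are illustrative only (the cubic fit of log-bitrate against the metric score is one common convention, not necessarily what any of the tools above do):

```python
import numpy as np

def bd_rate(rates_a, scores_a, rates_b, scores_b):
    """Approximate Bjøntegaard delta rate of codec B vs. A, in percent.

    Fits log-bitrate as a cubic polynomial of the metric score and
    integrates the gap over the overlapping score range.
    """
    fit_a = np.polyfit(scores_a, np.log(rates_a), 3)
    fit_b = np.polyfit(scores_b, np.log(rates_b), 3)
    lo = max(scores_a.min(), scores_b.min())
    hi = min(scores_a.max(), scores_b.max())
    int_a = np.diff(np.polyval(np.polyint(fit_a), [lo, hi]))[0]
    int_b = np.diff(np.polyval(np.polyint(fit_b), [lo, hi]))[0]
    return (np.exp((int_b - int_a) / (hi - lo)) - 1) * 100

# "Average BD-rate": one curve pair per image, then a plain mean,
# instead of building a single aggregate curve for the whole corpus.
scores = np.array([50.0, 60.0, 70.0, 80.0, 90.0])  # e.g. SSIMULACRA2
rates_img1 = np.array([0.4, 0.7, 1.2, 2.0, 3.5])   # bits per pixel
rates_img2 = np.array([0.3, 0.5, 0.9, 1.6, 2.8])
per_image = [bd_rate(r, scores, 0.9 * r, scores)   # B is 10% smaller
             for r in (rates_img1, rates_img2)]
print(np.mean(per_image))  # ~ -10.0 for a uniform 10% rate saving
```

The commonly used method would instead sum sizes over the corpus per quality setting and run `bd_rate` once on the two aggregate curves.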
|
|
2025-06-10 02:34:34
|
An example on the CID22 Validation Set with `cjxl -e 6`, avif (aom 3.12.1) speed 8, and libwebp m6:
Final Overall BD-rate (commonly used method):
```
libwebp vs libjxl:
Final Overall SSIMULACRA2 BD-rate: -12.301
Final Overall Butteraugli BD-rate (3-norm, 80 nits): -11.511
Final Overall Butteraugli BD-rate (3-norm, 203 nits): -13.522
libavif vs libjxl:
Final Overall SSIMULACRA2 BD-rate: 4.904
Final Overall Butteraugli BD-rate (3-norm, 80 nits): -0.826
Final Overall Butteraugli BD-rate (3-norm, 203 nits): -2.415
jpegli vs libjxl:
Final Overall SSIMULACRA2 BD-rate: -29.672
Final Overall Butteraugli BD-rate (3-norm, 80 nits): -25.973
Final Overall Butteraugli BD-rate (3-norm, 203 nits): -26.462
```
Average BD-rate:
```
libwebp vs libjxl:
Mean SSIMULACRA2 BD-rate: -5.816
Mean Butteraugli BD-rate (3-norm, 80 nits): -0.901
Mean Butteraugli BD-rate (3-norm, 203 nits): -3.059
libavif vs libjxl:
Mean SSIMULACRA2 BD-rate: 12.095
Mean Butteraugli BD-rate (3-norm, 80 nits): 6.942
Mean Butteraugli BD-rate (3-norm, 203 nits): 5.383
jpegli vs libjxl:
Mean SSIMULACRA2 BD-rate: -29.410
Mean Butteraugli BD-rate (3-norm, 80 nits): -24.637
Mean Butteraugli BD-rate (3-norm, 203 nits): -25.289
```
To me, it seems like average BD-rate is probably more representative of how good the encoder is most of the time; not sure though
|
|
|
_wb_
|
2025-06-10 07:37:55
|
I think BD-rate is not a great way to compare encoders, see also https://arxiv.org/abs/2401.04039 and https://cloudinary.com/blog/contemplating-codec-comparisons
|
|
2025-06-10 07:46:11
|
BD-rate can be OK if three conditions hold (that rarely hold in the way it is often used in practice):
- The metric actually correlates strongly with subjective quality and does not suffer from codec bias
- The overlap in distortion range covered by the various codecs and source images corresponds to the range of interest
- The metric scores are on a scale that has equal spacing for equal jumps in visual quality (e.g., not like most metrics that have a saturating behavior where e.g. the jump from ssim=0.8 to ssim=0.9 can be visually about equivalent to the jump from ssim=0.96 to ssim=0.98)
|
|
2025-06-10 07:59:42
|
I think a better way to do it, instead of integrating over the curve, is to just compute the bitrate gain per-image at a few quality points of interest (say ssimu2=50, 70, 85) and report the average and stdev of that (aggregated over various source images).
Then additionally it can be interesting to consider encoder consistency: ideally you don't just want an encoder to have good curves, you also want an encoder setting to produce reliable results, i.e. the same setting shouldn't produce a great image for one source image and a crappy one for another (that would be poor consistency).
One way to look at that is to plot curves where every point is an encoder setting, the bitrate is computed as the total corpus size / total pixels, and the metric score is not the average but the p10 or p5 or whatever worst score in the corpus.
That plot then basically corresponds to "if I would encode the whole corpus with that setting, how good would the overall compression be and what is the worst-case quality I will get" — which I think is the question that really matters in most use cases. If you're aiming at a certain quality target, it's not because some images will have a much better quality than your target that it becomes OK if some other images look like crap.
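A hedged numpy sketch of both ideas (the `rate_at_quality` helper and all numbers are made up for illustration; log-domain interpolation between measured RD points is just one reasonable choice):

```python
import numpy as np

def rate_at_quality(scores, rates, target):
    """Interpolate log-bitrate at a target metric score.
    Assumes scores are sorted ascending, with rates matched to them."""
    return float(np.exp(np.interp(target, scores, np.log(rates))))

# Bitrate gain at a few quality points of interest, for one image.
scores_a = [50.0, 70.0, 90.0]; rates_a = [0.5, 1.0, 2.4]  # codec A (bpp)
scores_b = [50.0, 70.0, 90.0]; rates_b = [0.4, 0.9, 2.0]  # codec B (bpp)
gains = [(rate_at_quality(scores_b, rates_b, t) /
          rate_at_quality(scores_a, rates_a, t) - 1) * 100
         for t in (50, 70, 85)]
print(np.mean(gains), np.std(gains))  # mean and stdev of rate gain, %

# Consistency view: for one encoder setting over a corpus, pair the
# corpus-level bitrate with a low percentile of the per-image scores.
corpus_scores = np.array([82.0, 77.5, 85.1, 60.2, 79.9, 74.3])
total_bits, total_pixels = 3.2e6, 4.0e6
point = (total_bits / total_pixels, np.percentile(corpus_scores, 10))
print(point)  # (bpp, near-worst-case quality) for this setting
```

In practice the per-image gains would be aggregated over all source images, and one `(bpp, p10 score)` point plotted per encoder setting.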
|
|
|
gb82
|
2025-06-10 05:32:07
|
That’s super interesting, and makes a lot of sense … in particular, the method of picking a couple of quality points and computing avg +stddev bitrate gain is very interesting
|
|
2025-06-13 05:15:46
|
what is everyone's favorite image dataset? I particularly like Daala's subset1 and subset2, as well as the Kodak Lossless True Color Image Suite, even though it's lower resolution and smaller
|
|
|
damian101
|
2025-06-13 08:57:58
|
I think CID22 is very good
|
|
|
_wb_
|
2025-06-13 09:21:15
|
all of these are quite small images, "web resolution" rather than "camera resolution". Which is fine, but it's a bit different from images like https://imagecompression.info/test_images/ where there is photon noise and the higher resolution makes the characteristics at the pixel level a bit different than what you get in downscaled images.
|
|
|
|
afed
|
2025-06-13 09:21:16
|
the newer, with more variety and respecting modern requirements on codecs, the better
better if it's a completely unknown, recently created one
the older and better known, the worse, because most encoders are heavily tuned and achieve their best results on the most popular datasets
it's sort of the same as with metrics: if a metric is popular, then encoders get tuned for it and the results automatically become worse, more synthetically inflated than honest
and this is even more important in the current AI world, where known datasets are basically useless
|
|
|
_wb_
|
2025-06-13 09:24:39
|
The best image dataset is created by taking a modern camera and shooting some new pictures in RAW, processing them in a "typical" way, and storing the result in 16-bit RGB. Completely new images will ensure they weren't used in encoder tuning.
|
|
|
|
afed
|
|
afed
the newer, with more variety and respecting modern requirements on codecs, the better
better if it's a completely unknown, recently created one
the older and better known, the worse, because most encoders are heavily tuned and achieve their best results on the most popular datasets
it's sort of the same as with metrics: if a metric is popular, then encoders get tuned for it and the results automatically become worse, more synthetically inflated than honest
and this is even more important in the current AI world, where known datasets are basically useless
|
|
2025-06-13 09:32:58
|
and also as the much-quoted
<https://en.wikipedia.org/wiki/Goodhart%27s_law>
|
|
|
CrushedAsian255
|
2025-06-13 09:53:29
|
do higher resolution images benefit more from noise synthesis?
|
|
|
damian101
|
|
_wb_
all of these are quite small images, "web resolution" rather than "camera resolution". Which is fine, but it's a bit different from images like https://imagecompression.info/test_images/ where there is photon noise and the higher resolution makes the characteristics at the pixel level a bit different than what you get in downscaled images.
|
|
2025-06-13 11:14:56
|
What mostly matters is relative viewing distance, rather than resolution... For most purposes, small images are fine I think. But yes, downscaling can change the nature of images... for which cropping instead of scaling could also be an option.
|
|
|
_wb_
|
2025-06-13 11:41:49
|
for subjective testing, I'd even argue that small images are better than large ones, to reduce the influence of saliency. But you need both crops and downscales, not just one or the other, imo.
|
|
|
Demiurge
|
|
afed
and also as the much-quoted
<https://en.wikipedia.org/wiki/Goodhart%27s_law>
|
|
2025-06-13 03:14:56
|
That's a good one. I didn't know this idea had a name.
|
|
|
gb82
|
|
_wb_
all of these are quite small images, "web resolution" rather than "camera resolution". Which is fine, but it's a bit different from images like https://imagecompression.info/test_images/ where there is photon noise and the higher resolution makes the characteristics at the pixel level a bit different than what you get in downscaled images.
|
|
2025-06-14 12:39:48
|
Does Daala’s subset1 fall into a similar category here?
|
|
|
_wb_
|
2025-06-14 06:22:13
|
Yes, they are downscales from large jpegs
|
|
|
Kupitman
|
2025-06-16 05:33:33
|
What the fuck is BD-rate
|
|
|
_wb_
|
2025-06-16 06:14:07
|
Bjøntegaard Delta computed along the (bit)rate axis on a Bitrate/Distortion plot, i.e. the so-called "average bitrate savings". You can also look at BD-[metric], e.g. BD-PSNR, which is the aggregated change in metric score.
|
|
|
spider-mario
|
2025-06-16 06:29:51
|
~~Blu-ray Disc rate~~
|
|
|
diskorduser
|
2025-06-17 11:21:24
|
Big Data rate
|
|
|
gb82
|
2025-06-17 05:52:01
|
How do you measure consistency for an image codec's performance? I'd probably pick a quality level, measure SSIMU2 scores, and then compute std dev, but is that really the best way to do it?
|
|
|
Quackdoc
|
2025-06-17 05:52:55
|
video sequences :D
|
|
|
_wb_
|
2025-06-17 08:05:56
|
For video, ssimu2 does not really measure quality that well, since in video there is motion masking or whatever it's called (in fast-moving scenes it is harder to see artifacts than in slow/static scenes) and ssimu2 does not take that into account at all
|
|
2025-06-17 08:16:03
|
For still images, most metrics including ssimu2 are not super consistent themselves; that is, the scale of the scores depends a bit on the image content, so for one image you may have 1 JND at ssimu2=80 while for another it may be at 75 or at 85
|
|
|
Lumen
|
2025-06-23 01:26:19
|
Hi _wb\_, Lyra and myself were wondering if an SSIMULACRA2 distortion map would be possible
we were theorizing this idea
instead of applying weights to the aggregated 128 scores, we could apply them locally for each pixel for each scale, and then combine scales by upscaling
|
|
|
_wb_
|
2025-06-23 03:18:38
|
yes this would be possible, though the maps are aggregated with the 1-norm and 4-norm so you'd probably need to include both weights
|
|
2025-06-23 03:20:16
|
also there are 3 different error maps that you could map to RGB channels of the distortion map instead of aggregating them into a single value
|
|
2025-06-23 03:21:12
|
I never got around to doing this but it would be interesting to add an option to the tool to produce such a distortion heatmap
|
|
|
pedromarques
|
2025-06-25 09:25:04
|
PNG reacting??
https://www.programmax.net/articles/png-is-back/
|
|
|
lonjil
|
2025-06-25 09:37:34
|
still no associated alpha smh
|
|
|
jonnyawsom3
|
2025-06-25 09:55:09
|
We've been discussing it in <#805176455658733570>
|
|
|
Meow
|
|
pedromarques
PNG reacting??
https://www.programmax.net/articles/png-is-back/
|
|
2025-06-26 03:40:37
|
Just a spec adding features that a lot of software already uses with PNG
|
|
|
Kupitman
|
2025-06-26 06:32:31
|
Can djxl decode an image made with effort 9 (chunks?) or effort 10 in lossless? And can it show metadata without decoding?
|
|
|
_wb_
|
2025-06-26 07:25:40
|
djxl can decode any valid jxl file, or at least if it doesn't then that's a bug
|
|
2025-06-26 07:25:56
|
for metadata without decode, you can use `jxlinfo`
|
|
|
Ignacio Castano
|
2025-06-26 10:55:57
|
I've been working on a small standalone implementation of the ssimulacra2 metric without dependencies on JXL code, or any other third party library (libhwy or liblcms2). The goal was to keep it small and easy to build on any environment. It's currently under 1000 lines of code. Performance is much lower than the official implementation, but this is acceptable for my use case. I may spend some time adding some most critical optimizations, but before doing that I'd like to understand better how the "RecursiveGaussian" is implemented, in particular the derivation of the mul_in, mul_prev, and mul_prev2 factors. The code seems to reference some external formulas, is there a paper or documentation explaining these terms and their motivation?
|
|
|
_wb_
|
2025-06-27 08:53:14
|
I didn't implement that function, I just used it and assumed it does an approximate gaussian blurring.
|
|
2025-06-27 08:53:56
|
If you don't rely on a CMS library, are you then assuming input is in a fixed colorspace?
|
|
|
Lumen
|
2025-06-27 09:34:06
|
Weirdly enough, on my hardware it seems like the libjxl ssimu2 is slower than vszip (a vapoursynth zig plugin with ssimu2 using a real gaussian blur) by quite a lot
And yet, it seems like using a true gaussian blur leads to better MOS correlation.
So I was wondering where the problem would be, especially since my cpu does support AVX instructions. How would the less accurate approximate version be slower? (Ryzen 7900X, non-overclocked DDR5 3600 MHz, about 13 fps with jxl and 23 fps with vszip)
It doesn't really bother me much since I use the gpu version to get around 413 fps now
|
|
|
𝕰𝖒𝖗𝖊
|
|
Lumen
Weirdly enough, on my hardware it seems like the libjxl ssimu2 is slower than vszip (a vapoursynth zig plugin with ssimu2 using a real gaussian blur) by quite a lot
And yet, it seems like using a true gaussian blur leads to better MOS correlation.
So I was wondering where the problem would be, especially since my cpu does support AVX instructions. How would the less accurate approximate version be slower? (Ryzen 7900X, non-overclocked DDR5 3600 MHz, about 13 fps with jxl and 23 fps with vszip)
It doesn't really bother me much since I use the gpu version to get around 413 fps now
|
|
2025-06-27 12:13:11
|
VSZIP (CPU) and VSHIP (GPU) are optimized for videos as you know.
`libjxl ssimulacra2` doesn't even have the concept of "FPS". You call it externally over different images (including converting the frames to images before). So, huge bottlenecks apply.
And for images; no external implementations are faster than libjxl ssimu2/butter as far as I can tell.
|
|
|
Quackdoc
|
2025-06-27 12:19:15
|
well, ssimu2rs can be faster on some images due to the image crate being good, :D
|
|
|
jonnyawsom3
|
2025-06-27 12:28:27
|
That reminds me, libjxl isn't using zlib-ng for PNG decoding. When <@207980494892040194> and I tested it, encode wall time was 30% faster with less CPU time for a 200 MP PNG
|
|
|
username
|
|
That reminds me, libjxl isn't using zlib-ng for PNG decoding. When <@207980494892040194> and I tested it, encode wall time was 30% faster with less CPU time for a 200 MP PNG
|
|
2025-06-27 12:46:09
|
also helps PNG encoding as well when using djxl and such
|
|
|
𝕰𝖒𝖗𝖊
|
|
Quackdoc
well, ssimu2rs can be faster on some images due to the image crate being good, :D
|
|
2025-06-27 12:46:52
|
Oh ssimu2rs is less accurate. VSHIP on the other hand can even have better MOS correlation.
And the native binary can work on different image types and extensions as a bonus 🙂
|
|
|
Quackdoc
|
|
𝕰𝖒𝖗𝖊
Oh ssimu2rs is less accurate. VSHIP on the other hand can even have better MOS correlation.
And the native binary can work on different image types and extensions as a bonus 🙂
|
|
2025-06-27 01:10:29
|
is it? I found ssimu2rs to more or less be within margin of error. but yeah, more images more better :D
|
|
|
Lumen
|
2025-06-27 01:12:07
|
Accuracy of different implem
|
|
|
𝕰𝖒𝖗𝖊
|
|
Quackdoc
is it? I found ssimu2rs to more or less be within margin of error. but yeah, more images more better :D
|
|
2025-06-27 01:12:21
|
Especially you get a different result when you need to convert the images.
jxl ssimu2 can work directly on jxl images, for example.
So if you want to stay in line, it's better to use the native one unless you test videos.
And for that, the beloved FFVship from Lumen will be unbeatable for the near future (I get around 500 FPS). And it's also accurate, with a true gaussian blur.
|
|
|
Ignacio Castano
|
|
_wb_
If you don't rely on a CMS library, are you then assuming input is in a fixed colorspace?
|
|
2025-06-27 03:10:55
|
Yes, input is expected to be in sRGB or linear RGB color space. Users are free to do color transform on their end. So, this keeps things simpler and is enough for my needs.
|
|
|
_wb_
|
2025-06-27 03:12:53
|
Makes sense. Probably need to have a way to deal with HDR though, but that probably needs some calibration anyway. Ssimu2 works well for HDR but the scale is not quite the same as for SDR, the scores are lower for the same JND value in similarish viewing conditions.
|
|
|
Ignacio Castano
|
|
_wb_
I didn't implement that function, I just used it and assumed it does an approximate gaussian blurring.
|
|
2025-06-27 03:18:04
|
Ah, thanks, I'll dig into the libjxl git history to find where that came from and see if I can find some reference to the source material.
|
|
2025-06-27 03:26:05
|
I haven't looked into using it for HDR data yet, but I think the same strategy of assuming a specific color space and offloading the color transform responsibility to the user should work also.
|
|
2025-06-27 04:45:38
|
Ah, looks like the code refers to "Recursive Implementation of the Gaussian Filter Using Truncated Cosine Functions". I had stripped that comment while pulling the code out of libjxl. However, git history does not reveal the original author (it was committed by jxl-bot). Do you recall who did that work?
|
|
|
username
|
|
Ignacio Castano
Ah, looks like the code refers to "Recursive Implementation of the Gaussian Filter Using Truncated Cosine Functions". I had stripped that comment while pulling the code out of libjxl. However, git history does not reveal the original author (it was committed by jxl-bot). Do you recall who did that work?
|
|
2025-06-27 04:50:55
|
iirc most old commits from "jxl-bot" are ones that were transferred from this repo: https://gitlab.com/wg1/jpeg-xl
|
|
|
Ignacio Castano
|
2025-06-27 06:08:20
|
The commit on that repo is the same: https://gitlab.com/wg1/jpeg-xl/-/commit/50bbf27d90ca938a5ab8818b870c7f238380708e?page=2
|
|
2025-06-27 06:37:30
|
The paper is actually quite readable. The idea appears to be very similar to the "sliding window" or "moving averages" approach to implement a box filter, but using 3 or 5 terms instead of one. Very neat. I'd love to know who contributed that code.
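The one-term version of that idea is the classic running-sum box filter; a minimal sketch (not the libjxl code) of why the cost is independent of the kernel radius:

```python
import numpy as np

def box_filter(x, radius):
    """O(n) moving-average box filter with edge padding: a running
    (prefix) sum makes each output a constant-time window mean, no
    matter how large the radius. Repeating this 3x approximates a
    gaussian (central limit theorem); the recursive scheme in the
    paper gets a closer gaussian fit with a few extra terms."""
    padded = np.pad(np.asarray(x, dtype=float), radius, mode='edge')
    csum = np.concatenate(([0.0], np.cumsum(padded)))
    width = 2 * radius + 1
    return (csum[width:] - csum[:-width]) / width

print(box_filter([1, 2, 3, 4, 5], 1))
# window means with edge padding, e.g. the first output is (1+1+2)/3
```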
|
|
|
_wb_
|
2025-06-27 07:16:51
|
Uh, that commit would be in the private repo where we initially did development because for some reason JPEG wanted it to stay private initially (I think to allow in theory that people could contribute stuff they didn't want to open source or something)
|
|
2025-06-27 07:19:05
|
Perhaps someone who is good at git magic should try to get that commit history imported into the main libjxl repo so it has a full git history. That would be useful to figure out who wrote what, because at this point even the person who did might no longer remember that they did 🙂
|
|
|
Ignacio Castano
|
2025-06-27 07:33:29
|
When I find some time I'll compare the recursive sliding filter against the regular convolution. I'm curious to see how much faster it is and whether it's worth the additional complexity. It looks like the kernel footprint is just 5, so it may not be that much faster. I'll report what I find!
|
|
|
DZgas Ж
|
2025-07-09 08:13:29
|
jpeg xl does not support PSNR or SSIM encoding, replacing Butteraugli?
|
|
|
_wb_
|
2025-07-09 08:28:39
|
libjxl has a mode to disable perceptual optimization but it disables everything, also XYB.
|
|
|
DZgas Ж
|
|
_wb_
libjxl has a mode to disable perceptual optimization but it disables everything, also XYB.
|
|
2025-07-09 10:41:30
|
Is psnr in xyb implemented?
|
|
|
jonnyawsom3
|
2025-07-09 10:44:59
|
JPEGLI has it, at least in the code
|
|
|
DZgas Ж
|
2025-07-09 11:04:38
|
I'm interested if anyone could encode jpeg xl using something other than Butteraugli
|
|
|
_wb_
|
2025-07-10 05:50:18
|
Well libjxl only really uses butteraugli at effort 8+
|
|
|
Demiurge
|
|
DZgas Ж
jpeg xl does not support PSNR or SSIM encoding, replacing Butteraugli?
|
|
2025-07-10 07:59:50
|
cjxl --ugly-mode --tune=psnr in.png ugly.jxl
|
|
|
DZgas Ж
|
|
_wb_
Well libjxl only really uses butteraugli at effort 8+
|
|
2025-07-10 10:45:14
|
this slightly contradicts all my knowledge about jpeg xl
|
|
|
Kupitman
|
|
DZgas Ж
this slightly contradicts all my knowledge about jpeg xl
|
|
2025-07-10 12:15:29
|
https://github.com/libjxl/libjxl/blob/main/doc/encode_effort.md
|
|
|
|
afed
|
2025-07-10 02:57:05
|
I think <#822120855449894942> needs to be cleaned of everything that doesn't relate to faq
|
|
|
Demiurge
|
|
DZgas Ж
this slightly contradicts all my knowledge about jpeg xl
|
|
2025-07-11 10:41:21
|
libjxl uses its own psychovisual heuristics that are tuned by hand with some help from butter and ssimu2
|
|
2025-07-12 12:30:59
|
At effort 8+ it actually uses butteraugli but otherwise it's just making quantization decisions by its own fast heuristics
|
|
2025-07-12 03:38:48
|
Afaik
|
|
|
RaveSteel
|
2025-07-16 10:50:04
|
https://github.com/libjxl/libjxl/issues/4097
layers are now not coalesced upon reencoding JXL but the metadata for the layers is lost.
The layers have no names in the reencoded JXL
Original JXL:
```
layer: full image size, name: "Auswahlmaske"
layer: full image size, name: "Background"
layer: 405x196 at position (48,149), name: "Text"
```
reencoded JXL:
```
layer: full image size
layer: full image size
layer: 405x196 at position (48,149)
```
|
|
|
jonnyawsom3
|
|
RaveSteel
https://github.com/libjxl/libjxl/issues/4097
layers are now not coalesced upon reencoding JXL but the metadata for the layers is lost.
The layers have no names in the reencoded JXL
Original JXL:
```
layer: full image size, name: "Auswahlmaske"
layer: full image size, name: "Background"
layer: 405x196 at position (48,149), name: "Text"
```
reencoded JXL:
```
layer: full image size
layer: full image size
layer: 405x196 at position (48,149)
```
|
|
2025-07-16 11:04:01
|
With this? https://github.com/libjxl/libjxl/pull/4327
|
|
|
RaveSteel
|
2025-07-16 11:06:04
|
My libjxl is built from current master, if this PR fixed it, then it would keep the layer names
|
|
|
Ignacio Castano
|
2025-07-22 11:00:54
|
Playing around with distortion heatmaps in ssimulacra2:
|
|
|
jonnyawsom3
|
2025-07-22 11:03:04
|
Oh, you managed to get it outputting them?
|
|
|
Ignacio Castano
|
2025-07-22 11:15:06
|
I'm still experimenting with different ways to combine the weights and map the colors, but it's starting to look useful. As <@794205442175402004> pointed out we could output multiple error maps. Here I'm combining them all in a single one. I'll publish my implementation once I tune things a bit more, probably after siggraph.
|
|
2025-07-23 01:00:29
|
And now integrated in my regression tests.
|
|
2025-07-23 01:09:32
|
It's not clear to me what to do with the L4 weights when accumulating the per-pixel errors. For now I'm simply scaling the per-pixel error by both weights, but I'm not sure that's the right thing to do.
|
|
|
_wb_
|
2025-07-23 07:57:45
|
I guess you could scale the error linearly for the L1 norm and proportional to x^4 for the L4 norm
|
|
|
Ignacio Castano
|
2025-07-23 05:10:47
|
That’s the first thing I tried, but the output didn’t look as good; it attenuates many of the errors that I think need to be highlighted. The score uses the pow 1/4 of the sum of the L4 errors, so simply raising the error value to the 4th power is not accurate either. If you think about it as a single-pixel score, the two cancel each other.
|
|
|
_wb_
|
2025-07-23 05:52:06
|
Yeah it cancels out per-pixel but still the impact of larger errors is much larger in the L4 norm than in the L1 norm, while the impact of smaller errors is much weaker in the L4 norm than in the L1 norm. I am not sure how to reflect that in a heat map. Maybe do ^4 to the error value, then slightly blur the heatmap, and only then do pow1/4?
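A rough sketch of that last suggestion (one possible interpretation, with a cheap box blur via a 2D integral image standing in for the "slight blur"):

```python
import numpy as np

def l4_style_heatmap(err, radius=2):
    """Emphasize large errors the way an L4 norm does: raise the
    per-pixel error map to the 4th power, spread it over a small
    neighbourhood, then take the 1/4 power to return to the original
    scale. The blur here is a box filter done with an integral image."""
    e4 = np.asarray(err, dtype=float) ** 4
    p = np.pad(e4, radius, mode='edge')
    s = np.cumsum(np.cumsum(p, axis=0), axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))     # integral image with a zero row/col
    k = 2 * radius + 1
    box = (s[k:, k:] - s[k:, :-k] - s[:-k, k:] + s[:-k, :-k]) / (k * k)
    return box ** 0.25

# A flat error map passes through unchanged; a lone large error
# bleeds into its neighbourhood instead of washing out in the mean.
flat = np.full((6, 6), 0.5)
print(np.allclose(l4_style_heatmap(flat), flat))
```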
|
|
|
Kupitman
|
2025-08-03 09:47:17
|
What algorithms are used at efforts 1-10, with the jpeg-transcoding option?
|
|
|
_wb_
|
2025-08-03 02:12:47
|
When transcoding jpeg, there's not that much the encoder can choose since all lossy choices are already made — only the entropy coding makes a difference. Basically the DC coefficients are just a small lossless image to be encoded however you want; for the AC the entropy coding has less freedom but you can still spend some more or less effort on things like coefficient reordering, context clustering, lz77, etc.
But generally there is not a huge difference between low effort jpeg recompression and higher effort.
|
|
|
Kupitman
|
|
_wb_
When transcoding jpeg, there's not that much the encoder can choose since all lossy choices are already made — only the entropy coding makes a difference. Basically the DC coefficients are just a small lossless image to be encoded however you want; for the AC the entropy coding has less freedom but you can still spend some more or less effort on things like coefficient reordering, context clustering, lz77, etc.
But generally there is not a huge difference between low effort jpeg recompression and higher effort.
|
|
2025-08-03 02:47:18
|
So what exactly is used for compression?
|
|
|
Exorcist
|
|
Kupitman
So what exactly is used for compression?
|
|
2025-08-03 02:51:00
|
https://gist.github.com/LukeNewNew/b55a3e87037b0e951095e3fed9ec4aec
|
|
|
_wb_
|
|
Kupitman
So what exactly is used for compression?
|
|
2025-08-03 03:03:19
|
The entropy coding itself is ANS or Prefix coding, with a configurable HybridUint tokenization and optional lz77.
In case of AC coefficients, they are first ordered in a configurable order (not necessarily the zigzag order of jpeg), then the number of nonzeroes per block is signaled (which is similar to the "end of block" symbol in jpeg which makes tail zeroes cheap, but even better in terms of compression), and then the coefficients themselves are coded with a configurable but relatively fixed context model.
|
|
|
LMP88959
|
2025-09-21 04:17:42
|
I haven’t looked at the source code, but I am wondering if the squeeze transform needs the tendency term to be computed both during encoding and decoding or only encoding?
|
|
|
_wb_
|
|
LMP88959
|
2025-09-22 12:24:29
|
thank you
|
|
|
MSLP
|
2025-10-14 05:54:28
|
A question about `jxl-rs` - is it planned to backport it to rustc v1.87, the version the Firefox build system is targeting (but has not yet updated to; it's currently on v1.86)?
Currently jxl-rs doesn't build on stable rustc 1.87.0 (17067e9ac 2025-05-09), because of some language constructs from beyond 1.87 and instances of `error[E0658]: use of unstable library feature 'stdarch_x86_avx512'` and `error[E0658]: the target feature 'avx512f' is currently unstable`.
Or is the general plan to wait for Firefox to go past v1.87? But IMO that would require an MSRV freeze anyway, since the Firefox build system seems to be at times up to a year behind rustc releases.
|
|
|
lonjil
|
2025-10-14 05:57:47
|
Firefox is built allowing unstable features anyhow.
So as long as the feature works the same in 1.87, it should be fine.
|
|
|
MSLP
|
2025-10-14 06:00:39
|
Ah, cool, so only a few language constructs then.
Also seems like I overshot it with "up to a year" - more like "up to 6 months", typically 2-3 months behind.
Though v1.87 may be a distant target too, due to <https://bugzilla.mozilla.org/show_bug.cgi?id=1923255>
But I see Martin is doing some titanic work over there.
|
|
|
|
veluca
|
2025-10-14 08:38:09
|
rustc will likely get updated all the way to 1.89 after this
|
|
2025-10-14 08:38:15
|
the main blocker are llvm changes
|
|
2025-10-14 08:38:31
|
as long as MSRV for jxl-rs does not go past a rustc version that uses llvm 21 we ought to be fine
|
|
|
_wb_
|
2025-10-17 03:01:47
|
`Build/Test MSYS2 / Windows MSYS2 / mingw64 (pull_request)` on the libjxl repo keeps failing, does anyone feel like taking a look at that?
|
|
|
awxkee
|
2025-10-23 02:43:31
|
<@794205442175402004> I'm wondering whether `JXL_HIGH_PRECISION=0` is considered safe to use, since it's neither documented nor mentioned in the CMake configuration. Could I rebuild and publish [jxl-coder](https://github.com/awxkee/jxl-coder) with this flag?
|
|
|
_wb_
|
2025-10-23 04:05:05
|
I think it's what we used in the Chromium build back when we were still in Chromium, right <@179701849576833024> ? To make decoding a bit faster especially on older phones. So yes I would consider this safe to use. It should still conform to Level 5 conformance which is more than enough precision for viewing (even for HDR).
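For reference, a hedged build sketch: since `JXL_HIGH_PRECISION` appears to be a preprocessor define rather than a CMake option (an assumption, based on it not showing up in the CMake configuration), it can be passed through the compiler flags:

```shell
# Sketch, not a documented interface: define JXL_HIGH_PRECISION=0 via the
# C++ flags when configuring libjxl, then build as usual.
cmake -B build -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_CXX_FLAGS="-DJXL_HIGH_PRECISION=0"
cmake --build build
```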
|
|
|
|
veluca
|
2025-10-23 04:06:11
|
I honestly don't remember but yeah it's probably more than fine
|
|
|
novomesk
|
2025-10-25 09:42:30
|
When I install libjxl-0.11.1 on Gentoo, the following message is displayed:
```
 * QA Notice: Compatibility w/ CMake < 3.16 will be removed in future ECM release.
 * If not fixed in upstream's code repository, we should make sure they are aware.
 * See also tracker bug #964407; check existing or file a new bug for this package.
 *
 * The following CMakeLists.txt files are causing warnings:
 *   examples/CMakeLists.txt:3.10
 *
 * An upstreamable patch should take any resulting CMake policy changes
 * into account. See also:
 *   https://cmake.org/cmake/help/latest/manual/cmake-policies.7.html
```
|
|
|
jonnyawsom3
|
2025-10-25 09:50:10
|
It should just be changing this to 3.16, right? <https://github.com/libjxl/libjxl/blob/main/examples/CMakeLists.txt#L8>
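Presumably the fix is a one-line bump (a sketch, assuming the rest of the repo already requires 3.16 so no other policy changes are needed):

```cmake
# examples/CMakeLists.txt - raise the floor to match the ECM deprecation notice.
cmake_minimum_required(VERSION 3.16)
```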
|
|
|
MSLP
|
2025-11-17 06:58:32
|
Question regarding the jxl-rs Firefox inclusion: Firefox may finally be done with the clang-20 migration, and even managed to bump the rustc builders to 1.89 <https://bugzilla.mozilla.org/show_bug.cgi?id=1958726>.
I also checked that the current jxl-rs compiles on rustc 1.89.
Are there any plans to speed up the first upstream inclusion tests by reducing the rustc 1.90 target currently set for "land initial jpegxl rust code pref disabled" to 1.89?
Also, would it be possible to freeze the MSRV for jxl-rs at 1.90 for ongoing development, at least until Firefox moves to rustc 1.91 (which requires LLVM 21), in case the migration from clang 20 to 21 again turns out to be complicated for the Firefox project?
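If jxl-rs wanted to make such a freeze explicit, one hedged option (assuming the crate doesn't already declare it) is Cargo's `rust-version` field, which makes Cargo refuse to build with an older toolchain:

```toml
# Hypothetical Cargo.toml fragment pinning the MSRV; Cargo errors out when
# the active toolchain is older than this version.
[package]
rust-version = "1.90"
```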
|
|
|
jonnyawsom3
|
2025-11-17 07:16:29
|
1.87 is the minimum for the SIMD that jxl-rs uses <https://bugzilla.mozilla.org/show_bug.cgi?id=1986626>; after that it can be vendored and vetted.
That's Mozilla's decision though, everyone here only works on the JXL side of things (AFAIK)
|
|
|
MSLP
|
2025-11-17 07:52:37
|
Yes, but some time after <https://bugzilla.mozilla.org/show_bug.cgi?id=1986393> was posted, the 1.87 requirement was bumped to 1.90 (by Martin from team-veluca 😉, due to some minor post-1.87 features used in the jxl-rs code). But this could possibly be reduced to 1.89 (to which the Firefox builders were updated recently).
Also, I'm asking about a freeze at MSRV 1.90 because of the previous difficulties with the LLVM 19 to 20 upgrade in Firefox; 1.91 will require a similar upgrade, and it's hard to say how difficult that will be.
|
|