JPEG XL

Info

rules 57
github 35276
reddit 647

JPEG XL

tools 4225
website 1655
adoption 20712
image-compression-forum 0

General chat

welcome 3810
introduce-yourself 291
color 1414
photography 3435
other-codecs 23765
on-topic 24923
off-topic 22701

Voice Channels

General 2147

Archived

bot-spam 4380

on-topic

Whatever else

Quackdoc
A homosapien Similar to this? https://github.com/libjxl/libjxl/issues/3973
2025-01-27 04:52:33
similar but the only warning was `Warning : [minor] Text/EXIF chunk(s) found after PNG IDAT (may be ignored by some readers)`
2025-01-27 04:53:14
I am rebuilding libjxl now to get some more data
2025-01-27 04:57:47
well building it like that didn't work
2025-01-27 04:58:09
```
build() {
  export CFLAGS+=' -ffat-lto-objects'
  export CXXFLAGS+=' -ffat-lto-objects'
  cmake -B build -S libjxl \
    -G 'Unix Makefiles' \
    -DCMAKE_BUILD_TYPE:STRING='Debug' \
    -DCMAKE_INSTALL_PREFIX:PATH='/usr' \
    -DJPEGXL_ENABLE_BENCHMARK:BOOL='false' \
    -DJPEGXL_ENABLE_EXAMPLES:BOOL='false' \
    -DJPEGXL_ENABLE_FUZZERS:BOOL='false' \
    -DJPEGXL_ENABLE_OPENEXR:BOOL='false' \
    -DJPEGXL_ENABLE_PLUGINS:BOOL='true' \
    -DJPEGXL_ENABLE_VIEWERS:BOOL='false' \
    -DJPEGXL_FORCE_SYSTEM_BROTLI:BOOL='true' \
    -DJPEGXL_FORCE_SYSTEM_GTEST:BOOL='true' \
    -DJPEGXL_FORCE_SYSTEM_HWY:BOOL='true' \
    -DJPEGXL_BUNDLE_LIBPNG:BOOL='NO' \
    -DJPEGXL_INSTALL_JARDIR='/usr/share/java' \
    -Wno-dev
  cmake --build build
  make -C build doc
}
```
2025-01-27 04:58:14
oh well, ill try ci.sh
2025-01-27 05:03:32
interesting
```
libpng warning: iCCP: known incorrect sRGB profile
lib/extras/dec/apng.cc:1079: JXL_FAILURE: Repeated iCCP / sRGB chunk
lib/extras/dec/decode.cc:151: JXL_FAILURE: Codecs failed to decode
Getting pixel data failed.
```
Tirr
2025-01-27 05:15:33
sRGB and iCCP are mutually exclusive according to the spec, but some decoders/apps are permissive I guess
Quackdoc
Tirr sRGB and iCCP are mutually exclusive according to the spec, but some decoders/apps are permissive I guess
2025-01-27 08:11:06
it doesn't seem like it, at least in https://www.w3.org/TR/2003/REC-PNG-20031110/#11sRGB. Rather, the spec "implies" it with `11.3.3.3 iCCP Embedded ICC profile`:
> A PNG datastream should contain at most one embedded profile, whether specified explicitly with an iCCP chunk or implicitly with an sRGB chunk.
but then *contradicts that* in `11.3.3.5 sRGB Standard RGB colour space` by merely *recommending* that they not both appear, without prohibiting it:
> It is recommended that the sRGB and iCCP chunks do not both appear in a PNG datastream.
2025-01-27 08:12:56
this seems like a mistake in the spec: they meant to prohibit it but failed to do so
2025-01-27 11:56:07
https://github.com/libjxl/libjxl/issues/4087#issuecomment-2615216732
2025-01-27 11:56:37
so apparently 3rd edition resolves the ambiguity
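The check libjxl tripped on above boils down to walking the PNG chunk list and noticing both colour chunks at once. A minimal Python sketch of that chunk walk (illustrative only, not libjxl's actual apng.cc logic):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def chunk_types(data: bytes):
    """Yield the type of each chunk in a PNG byte stream, stopping at IEND."""
    assert data[:8] == PNG_SIG, "not a PNG"
    pos = 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        yield ctype.decode("ascii")
        pos += 12 + length  # 4 length + 4 type + payload + 4 CRC
        if ctype == b"IEND":
            break

def has_conflicting_color_chunks(data: bytes) -> bool:
    """True when both sRGB and iCCP are present (what libjxl rejects)."""
    types = set(chunk_types(data))
    return "sRGB" in types and "iCCP" in types
```

Note this only looks at chunk types; a real validator would also verify CRCs and chunk ordering.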
jonnyawsom3
2025-01-29 11:46:13
Tried to render a quick test image in Blender to get float files for benchmarking. Turns out Blender renders to a float16 image, even though it shows 32-bit export options. No clue if I can increase it to 32-bit anywhere, but the file hashes of PFM and JXL matched, with the EXR being double the size
AccessViolation_
2025-01-29 03:12:29
the multithreading in the encoder makes me wish I had a threadripper
2025-01-29 03:13:12
imagine it just zooming through the image<:BlobYay:806132268186861619>
Demiurge
2025-01-30 03:25:48
libjxl has extremely poor sub-linear scaling across threads
2025-01-30 03:26:07
After about 3 or 4 threads it just stops getting faster
jonnyawsom3
2025-01-30 04:43:31
Depends on image resolution, group size and other options
_wb_
2025-01-30 08:31:54
Decoding and lossy and e1-e3 lossless encoding should scale quite well, lossless e4+ encoding scales poorly.
jonnyawsom3
2025-01-31 01:01:38
Huh, didn't know the game had a dithering setting
Demiurge
2025-02-01 05:22:09
Nah. Even large 30-megapixel images with lossless e2 do not scale above 3 threads.
2025-02-01 05:22:28
at compression that is
2025-02-01 05:22:33
Haven't tried decompression
2025-02-01 05:23:34
Someone should test on different hardware than mine and measure multi core scaling
2025-02-01 05:24:11
On my system it does well up to 3 threads but just doesn't go any higher than that, aside from wasting power
2025-02-01 05:24:56
But it might do better on a different system, who knows?
A homosapien
Demiurge Someone should test on different hardware than mine and measure multi core scaling
2025-02-01 07:22:37
What kinda old hardware are you using? I see diminishing returns around 8 threads for a 45 MP image on an Intel Core i5-12400.
2025-02-01 07:23:29
A possible bottleneck I can think of is memory throughput, due to the RAM-heavy nature of libjxl.
_wb_
2025-02-01 08:01:37
Yes, likely things get bottlenecked by the memory bus.
2025-02-01 08:30:41
https://almanac.httparchive.org/en/2024/media
Demiurge
2025-02-01 10:27:04
Aaah, maybe that's it. I'm on an old old system
2025-02-01 10:27:21
Must be the memory bus
JendaLinda
2025-02-03 07:15:07
That Bayer dithering gives jxl real retro vibes, considering it's being applied to most vardct images.
jonnyawsom3
2025-02-03 08:23:36
Sometimes it hurts dark flat areas
JendaLinda
2025-02-03 08:27:00
It appears even in losslessly transcoded regular jpegs.
jonnyawsom3
2025-02-03 08:29:03
Yeah, I did a lot of testing around that https://discord.com/channels/794206087879852103/804324493420920833/1301648402664263730
JendaLinda
2025-02-03 08:31:04
Anyway, the Bayer matrix is an interesting choice, it reminds me of old Windows running with 16-color graphics.
spider-mario
2025-02-03 08:42:23
https://www.dpreview.com/news/8799733255/canon-shows-why-optical-and-in-body-stabilization-both-have-a-place
> Thanks to forum member spider-mario for bringing this diagram to our attention.

😎
jonnyawsom3
JendaLinda Anyway, the Bayer matrix is an interesting choice, it reminds me of old Windows running with 16 color graphics.
2025-02-03 10:04:10
When a few of us were in the voice channel, <@207980494892040194> made a version that used blue noise instead. It worked well, but needed tuning
2025-02-03 10:04:11
https://cdn.discordapp.com/attachments/804324493420920833/1313028719656435753/noise-comp.webp?ex=67a25c09&is=67a10a89&hm=436c2797b4e3515836bdefc3f708fffad58411f3256ef198387cb227544a2d32&
A homosapien
2025-02-03 10:20:05
I should make a draft PR for that
JendaLinda
2025-02-03 11:58:26
If I need to do anything with images, I usually reconstruct the original jpegs and use those, so I don't get any dithering.
2025-02-04 12:06:25
I'm not sure if the jxl decoder is the right place to perform dithering. I think dithering should be done as the last step, when the pixels are being rendered to the screen.
damian101
JendaLinda I'm not sure if jxl decoder is the right place to perform dithering. I think dithering should be done as the last step when the pixels are being rendered to the screen.
2025-02-04 12:09:49
Well, considering that JXL supports 8-bit RGB, that is not necessarily an option at all.
2025-02-04 12:10:18
When reducing bit depth, you want to dither, and that's what's happening in the decoder.
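What that decoder-side step amounts to: quantizing the high-precision buffer down to 8 bits through a threshold matrix. A toy NumPy sketch using the classic 4×4 Bayer matrix (illustrative only; libjxl's actual matrix and pipeline differ):

```python
import numpy as np

# Classic 4x4 Bayer matrix, normalized to thresholds in [0, 1)
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def quantize_with_ordered_dither(img: np.ndarray, bits: int = 8) -> np.ndarray:
    """Quantize a float image in [0, 1] to `bits` using a tiled Bayer threshold."""
    levels = (1 << bits) - 1
    h, w = img.shape[:2]
    threshold = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    if img.ndim == 3:  # broadcast the same threshold over all channels
        threshold = threshold[..., None]
    return np.clip(np.floor(img * levels + threshold), 0, levels).astype(np.uint8)
```

On a flat mid-gray the output alternates between the two nearest 8-bit levels so the block average is preserved, which is exactly the repeating pattern being discussed here.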
jonnyawsom3
2025-02-04 12:10:37
Because it decodes the internal jpeg as float32
2025-02-04 12:11:03
Not actually sure if you can disable it in libjxl, but it'd be nice to have a flag for djxl at least
JendaLinda
2025-02-04 12:18:09
In the case of 8-bit source material, it seems that jxl is trying to add something that wasn't there.
2025-02-04 12:27:46
If dithering is desirable, what about more advanced techniques, like Floyd-Steinberg?
A homosapien
2025-02-04 12:28:34
Floyd-Steinberg is a bit too slow as far as I know
2025-02-04 12:28:59
That's why we chose blue noise, it's a good balance of speed and quality
JendaLinda
2025-02-04 12:30:25
Floyd-Steinberg has been used for decades and it seems to be the most popular dithering method in general.
jonnyawsom3
2025-02-04 12:35:28
The key is avoiding repetition. This is floyd and a poor attempt at blue noise
2025-02-04 12:36:21
Or, I just squint and see what turns into a gradient first
2025-02-04 12:37:23
Also, because JXL does animation, error diffusion could be distracting, so a fixed LUT is better
JendaLinda
2025-02-04 12:40:02
Hmmm would it be possible to use the photon noise feature to do dithering as well?
2025-02-04 12:40:16
It's already in the codec.
jonnyawsom3
2025-02-04 12:43:22
https://surma.dev/lab/ditherpunk/lab
JendaLinda Hmmm would it be possible to use the photon noise feature to do dithering as well?
2025-02-04 12:43:56
Yes, there was actually a blog post about it which I've never been able to find again
A homosapien
Not actually sure if you can disable it in libjxl, but it'd be nice to have a flag for djxl at least
2025-02-04 12:48:05
A flag to disable dithering would be relatively easy to add. I should probably work on that instead of trying to compile libjxl on Windows 😅
JendaLinda
2025-02-04 12:49:01
Another option is using an old version of the decoder. 😄
Demiurge
A homosapien A flag to disable dithering would be relatively easy to add. I should probably work on that instead of trying to compile libjxl on Windows 😅
2025-02-04 01:54:05
why would you ever disable dithering? You can just decode to a higher bit depth...
The key is avoiding repetition. This is floyd and a poor attempt at blue noise
2025-02-04 01:55:34
that doesn't look like true blue noise, there's lots of LF energy...
jonnyawsom3
Demiurge that doesn't look like true blue noise, there's lots of LF energy...
2025-02-04 02:31:16
Hence "a poor attempt", it was using the site I linked
Demiurge why would you ever disable dithering? You can just decode to a higher bit depth...
2025-02-04 02:32:05
When the source was 8-bit to begin with, decoding to higher means either PPM (Too niche to share) or PNG (16-bit means too large to share)
CrushedAsian255
The key is avoiding repetition. This is floyd and a poor attempt at blue noise
2025-02-04 02:54:45
argh, moiré patterns
jonnyawsom3
CrushedAsian255 argh, moiré patterns
2025-02-04 03:36:33
Yeah, that's why we want to move away from Bayer
2025-02-04 03:53:08
Bayer, Floyd, 'Blue Noise'
2025-02-04 03:53:47
Spent the past half hour trying to make a new blue noise image, but can't find any online tool to convert and only GIMP came up with a filter for it
Demiurge
When the source was 8-bit to begin with, decoding to higher means either PPM (Too niche to share) or PNG (16-bit means too large to share)
2025-02-04 03:54:49
Why would you ever NOT dither 8 bit output? That's just plain wrong
2025-02-04 03:55:24
That's like putting toothpaste in your coffee levels of wrong
jonnyawsom3
2025-02-04 03:55:41
Because the original image never had dithering?
Demiurge
2025-02-04 03:56:17
Yeah but we are talking about lossy right? The original is gone and doesn't matter
2025-02-04 03:56:28
In lossless there is no dither
jonnyawsom3
2025-02-04 03:56:45
In that case just use AVIF if you don't care about the original :P
2025-02-04 03:57:25
The dithering adds bayer patterns to dark flat areas that never had a gradient in the first place
Demiurge
2025-02-04 03:57:44
No. The original doesn't factor into this. It's not that I don't care about the original. It's actually more true fidelity to the original if you dither 8 bit output even if the original wasn't dithered
2025-02-04 03:59:19
Because the image is being processed and converted into a different format and back again. It would be like if you converted an sRGB image into ProPhoto and back. You are applying a transformation to the image at high precision and when going back to low precision you need to dither again every time
2025-02-04 04:00:41
Any time you do some kind of transformation and edit to the pixels at high precision those pixels need to be redithered when going back to the original low precision after the lossy transformation
2025-02-04 04:01:50
The closest analogy I can think of is, that would be like resampling without doing a lowpass filter first
2025-02-04 04:05:37
And getting really strong moire patterns in your output that weren't in the original
2025-02-04 04:06:35
Dithering is necessary, otherwise you get stripes and bands and other weird lines and creases that weren't in the original
A homosapien
2025-02-04 04:08:09
What if banding was the original? What if you want to *preserve* the banding?? <:KekDog:805390049033191445>
jonnyawsom3
2025-02-04 04:08:34
Original, djxl, jxl-oxide. It's a small example, but I can't be bothered to find another real-world example when I already linked to it above
A homosapien
2025-02-04 04:08:57
Don't get us wrong, by default dithering is usually better and it will stay that way.
Demiurge
2025-02-04 04:09:26
The banding will probably still be there if it's in the original. Jxl is not supposed to smooth over visually noticeable image features.
2025-02-04 04:09:52
Dithering cannot smooth things either
jonnyawsom3
Demiurge And getting really strong moire patterns in your output that weren't in the original
2025-02-04 04:11:16
That's exactly what it is doing https://discord.com/channels/794206087879852103/804324493420920833/1301649129541337148
2025-02-04 04:11:58
At more reasonable zoom levels, it causes patterns and draws the eye to what's meant to be flat/smooth areas
2025-02-04 04:12:13
Hence wanting blue noise so there is no pattern to notice
A homosapien
2025-02-04 04:14:08
There have been times where I didn't want dithering, and there have been times I wanted gamma *incorrect* scaling.
2025-02-04 04:16:22
Those cases are very rare, but having it as an optional flag comes at no cost
JendaLinda
2025-02-04 07:30:46
I still think all color management, scaling etc. should be done before dithering.
spider-mario
2025-02-04 01:46:05
> Effectiveness of audio noise reduction varies by lens.
(https://cam.start.canon/en/C005/manual/html/UG-04_Shooting-2_0070.html)

… what
AccessViolation_
2025-02-04 01:58:40
Ahh, they're talking about mechanical lens sounds
2025-02-04 01:58:54
> When recording with the built-in microphone, this feature reduces mechanical lens sounds caused by autofocusing, as well as white noise.
spider-mario
2025-02-04 02:27:12
ah, that makes more sense
Quackdoc
2025-02-04 03:42:13
i've always found built-in mics terrible and honestly pointless at this point. if you are spending money on a higher quality camera for video recording, it's worth it to just look for one with mic-in and attach even a cheap shotgun mic. they are "good enough" for most cases and small enough to fit on even GoPros now
spider-mario
2025-02-04 04:02:52
or a separate recorder so you don't even need to connect it to the camera
2025-02-04 04:03:06
(to the point that you can film with your phone if you like)
2025-02-04 04:03:30
DaVinci Resolve can then sync it with the camera's built-in audio in two clicks in post
2025-02-04 04:03:39
(and then you can delete the camera's audio track altogether)
Quackdoc
2025-02-04 06:42:41
yup, recorders are for sure nice. I had a camera body mount that my phone could go into, so I could use it as a recorder and a screen window. worked well
Demiurge
2025-02-05 07:33:51
But the quartz clock in the recorder is going to be out of sync with the camera if they are separate...
2025-02-05 07:34:29
You'll have to manually sync and account for drift if it's a long enough recording
spider-mario
2025-02-05 07:51:07
I have not found this to be an issue in practice
2025-02-05 08:04:42
> In the early days of portable digital recorders, Samson launched the "Zoom" line of recorders, but used pretty cheap commodity clock crystals like you might find in a PC, maybe 100–200 ppm. What you would find is that, in a few minutes, the camera sound and the recorder sound would be pretty clearly walking away from one another. They fixed this in subsequent models like their H4n, and all of a sudden became the go-to recorder for video sync use.
Demiurge
2025-02-05 02:47:20
It's rare for two separate clocks to not have drift
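The magnitude is easy to estimate: drift grows linearly with take length and crystal error. A quick sketch of the arithmetic:

```python
def drift_ms(duration_s: float, ppm: float) -> float:
    """Worst-case drift between two free-running clocks that differ by `ppm`."""
    return duration_s * ppm * 1e-6 * 1000.0

# A 100-200 ppm commodity crystal vs. an accurate clock drifts
# 60-120 ms over a 10-minute take: well past noticeable lip-sync error.
# A good 1-2 ppm TCXO stays under ~1 ms over the same take.
```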
Quackdoc
2025-02-05 03:03:06
*some* modern devices can avoid clock drift by knowing it's an issue, and re syncing video according to synchronized time, it isn't super perfect and won't ever be used scientifically in place of genlock, but is typically "good enough"
2025-02-06 05:57:35
ugh, more issues with decoding PNGs in cjxl, hitting this issue https://github.com/libjxl/libjxl/issues/3973 T.T
TheBigBadBoy - 𝙸𝚛
A homosapien Fant/box downscaling doesn't cause ringing that I see with bicubic and lanczos while also retaining the same level of sharpness if not greater
2025-02-06 07:51:55
which tool do you use for Fant downscaling? Even GIMP doesn't seem to have it
2025-02-06 08:13:35
oh, it's a thing from paint.NET. Any tool that I can use on Linux with Fant downscaling?
A homosapien
2025-02-06 08:14:34
Gimp also works
2025-02-06 08:14:49
Let me double check real quick
2025-02-06 08:26:27
Yup, downscaling the image using bicubic or linear algorithms with integer divisors (1/2, 1/3, 1/4, so on) results in the same scaling as fant/box.
2025-02-06 08:35:05
It looks even better than libplacebo's special "ewa" algorithms, I'll be honest
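For what it's worth, box/Fant downscaling at an integer factor is just the per-block average, which is why integer divisors make the methods line up as observed above. A minimal NumPy sketch (hypothetical helper, not any of the tools mentioned):

```python
import numpy as np

def box_downscale(img: np.ndarray, n: int) -> np.ndarray:
    """Box-filter (average) downscale by an integer factor n."""
    h, w = img.shape[:2]
    assert h % n == 0 and w % n == 0, "factor must evenly divide the image"
    # Split each spatial axis into (blocks, n) and average over each n*n block.
    return img.reshape(h // n, n, w // n, n, *img.shape[2:]).mean(axis=(1, 3))
```

Note the averaging here happens in whatever space the values are in; the gamma-correct variant would linearize first, which ties into the "gamma *incorrect* scaling" point below.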
TheBigBadBoy - 𝙸𝚛
2025-02-06 08:35:32
oh nice, thank you for testing ❤️
Quackdoc
2025-02-06 08:43:03
libplacebo's scalers really aren't that special, what is special is using glsl scalers
A homosapien
2025-02-06 08:49:54
That makes sense
2025-02-06 08:50:04
Here is another comparison for fun
2025-02-06 08:51:10
For some reason libplacebo can't handle RGB with alpha
Quackdoc
2025-02-06 08:54:11
libplacebo itself can, but iirc its ffmpeg stuff is not wired up?
jonnyawsom3
2025-02-06 10:02:41
What the fuuuuuuuuck? If I zoom on one particular WebP in Irfanview, trying to pan makes the antimalware service start chugging and make the panning stutter
2025-02-06 10:10:46
Even stranger, it's location dependent, but putting it into an excluded folder doesn't help
Quackdoc
What the fuuuuuuuuck? If I zoom on one particular WebP in Irfanview, trying to pan makes the antimalware service start chugging and make the panning stutter
2025-02-06 10:17:20
gpu skill issue?
2025-02-06 10:17:35
5.0% really isn't much
jonnyawsom3
Quackdoc gpu skill issue?
2025-02-06 10:17:48
It doesn't use the GPU, it's image decoding :P
2025-02-06 10:18:08
Well, not even decoding, panning rather
Quackdoc
2025-02-06 10:18:19
it uses it for rendering though doesn't it? panning would be rendering domain
jonnyawsom3
2025-02-06 10:19:14
It works fine in my Downloads folder, no stutter. Trying it in the original folder, or a Defender excluded one, causes stuttering and Antimalware CPU usage
RaveSteel
2025-02-06 10:36:23
Does it do this persistently? Or does it stop after a while?
jonnyawsom3
2025-02-06 11:38:21
Every time I pan, CPU usage stops when I stop dragging
2025-02-06 11:47:23
Very strange... I made a PNG version to see if it was specifically WebP, but now both work fine
2025-02-06 11:54:29
And now it's stuttering again with the PNG deleted
2025-02-07 09:29:02
https://youtu.be/EzjWBmhO_1E
jjrv
2025-02-13 07:27:22
Here's bit stream and image header parsing in Zig: https://gist.github.com/jjrv/cb373bcbde3f5d6287977f5ad0c510f8 Not much verified or tested, but a bit of comptime made it nice to write. Super easy to pretty-print resulting structs as JSON. Bit stream avoids unaligned access, I got the impression it might be a bad idea on wasm. Super unsafe, reads past the input buffer at both ends. Targeting wasm, I just don't care. Less checks in code means less to download and execute. getVar64 should be the only function in BitReader with branches executed at run-time, the rest are comptime. I do have a wasm-build of a recent libjxl but it's about a megabyte-sized .wasm, hoping for smaller.
2025-02-13 09:18:49
Would it be useful to produce an annotated view of header contents and corresponding bits (for investigating some jxl file)? Trying to get some early benefit out of this in case continuing doesn't end up making enough sense. Close wasm integration might have the benefit of less work getting debug output out of it visualized.
CrushedAsian255
jjrv Would it be useful to produce an annotated view of header contents and corresponding bits (for investigating some jxl file)? Trying to get some early benefit out of this in case continuing doesn't end up making enough sense. Close wasm integration might have the benefit of less work getting debug output out of it visualized.
2025-02-13 11:42:11
A spec is available in https://discord.com/channels/794206087879852103/1021189485960114198
Demiurge
2025-02-13 12:10:50
Zig syntax seems kinda nice and clean...
2025-02-13 12:11:47
And the static comptime stuff seems pretty nice too...
2025-02-13 12:12:26
I'm kinda skeptical, there has to be something terribly wrong with it because there's always a catch of some kind
2025-02-13 12:12:34
Nice things aren't allowed to exist after all
2025-02-13 12:12:39
😂
jjrv
CrushedAsian255 A spec is available in https://discord.com/channels/794206087879852103/1021189485960114198
2025-02-13 12:17:46
I mean, you drag a file to a box in a web app, and it shows the bits in the file, which bits correspond to which things and what they mean. Might be helpful for the next hapless soul who decides to write a decoder or encoder.
2025-02-13 12:19:44
Haven't found a catch yet, except that the Zig spec is in flux, so half-a-year-old code always needs some changes to compile on latest nightly 😅
CrushedAsian255
jjrv I mean, you drag a file to a box in a web app, and it shows the bits in the file, which bits correspond to which things and what they mean. Might be helpful for the next hapless soul who decides to write a decoder or encoder.
2025-02-13 12:26:00
Oh, I thought you wanted to code that yourself
jjrv
2025-02-13 12:26:22
Looks like the code wouldn't become too much uglier to support a comptime flag that records in memory which bit ranges ended up in which struct fields. And yes, I was planning to code that. A kind of meta-JXL-viewer. For now, just for the headers, but if I end up writing a full decoder, then a meta-viewer can for example show which MA tree predictor was used for which pixel, and what intermediate images at different steps might look like.
2025-02-13 12:30:31
That would seem to constantly produce useful visible results during development, instead of only at the end when it properly decodes an image completely, since I don't know if I'll get there.
2025-02-13 12:33:45
I'm interested in a more complete understanding of the format, to figure out anything non-obvious relevant to bitmap and vector tiles for online maps. Currently polylines are usually compressed using variable length integer coordinates, delta to previous point, sign moved to LSB. But maybe more could be done, looking for inspiration.
Demiurge
jjrv Haven't found a catch yet, except that the Zig spec is in flux so half a year old code always needs some changes to compile on latest nightly 😅
2025-02-14 01:42:57
If I can call zig libraries from C and generate/verify C headers that work with zig code then it might be a great language to write a codec with. And have a lot of LUTs generated at comp time
2025-02-14 01:43:24
I like self-documenting/self-explanatory code
jjrv
2025-02-14 05:50:38
C is easier to call from Zig than from C, believe it or not. You can just include a C header and use it, but it gets imported into a struct of your choice instead of polluting the global namespace.
2025-02-14 05:54:43
You can set C defines only that header can see, and I guess you can set their values based on any comptime computation. So the limited C preprocessor also gets replaced by Zig comptime.
2025-02-14 07:01:25
Never tried to call the other way around, but StackOverflow says it can generate C headers during build https://stackoverflow.com/questions/77062533/how-to-make-the-zig-build-system-emit-a-c-header-along-with-a-generated-library
Demiurge
2025-02-14 11:53:23
In C, you could evaluate code at comp time too. You just put a rule in a makefile for how to generate a header file containing data tables, for example.
2025-02-14 11:54:59
You just write code in any language you want that can generate the data table and put it in a makefile rule
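As a concrete sketch of that makefile-style "comptime in C" approach: a small generator script (hypothetical, Python here) emits a C header containing an sRGB-to-linear LUT, and a rule like `lut.h: gen_lut.py` regenerates it at build time:

```python
# gen_lut.py -- hypothetical build-time generator, invoked from a makefile rule:
#   lut.h: gen_lut.py
#           python gen_lut.py > lut.h

def srgb_to_linear_lut() -> str:
    """Emit a C header with a 256-entry sRGB-to-linear lookup table."""
    lines = ["static const float srgb_to_linear[256] = {"]
    for i in range(256):
        c = i / 255.0
        # Standard sRGB EOTF: linear segment below the knee, power curve above.
        lin = c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
        lines.append(f"    {lin:.8f}f,")
    lines.append("};")
    return "\n".join(lines)

if __name__ == "__main__":
    print(srgb_to_linear_lut())
```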
AccessViolation_
2025-02-14 02:16:35
I don't really get the comptime thing in zig
2025-02-14 02:17:44
it seems like there should be more to it than just "this is evaluated at compile time instead of runtime", because that's not really a unique thing?
2025-02-14 02:21:36
in other languages, ordinary functions can also be comptime evaluated and have their logic replaced with the inlined result
lonjil
2025-02-14 02:53:43
optimizations are not semantics
2025-02-14 02:54:25
during comptime in zig, types are available as values that can be passed around
spider-mario
2025-02-14 02:55:39
what does that look like? it sounds somewhat more compelling than "erm, we have `constexpr` too"
lonjil
2025-02-14 03:14:36
At its most basic it looks like this:
```zig
fn max(comptime T: type, a: T, b: T) T {
    return if (a > b) a else b;
}
```
2025-02-14 03:23:05
you can also do stuff like use the built-in function `@typeInfo(T)` to get a list of the fields in a struct, and then use `inline for` to do stuff on each field (the loop is semantically unrolled at comptime, so if the fields have different types, that's fine; it's just like writing a visitor that goes to each field manually)
2025-02-14 03:23:40
Or you can dynamically (at comptime) construct a type info struct and turn it into a type with `@Type`.
2025-02-14 03:25:12
(you can write `anytype` as the argument or return type if you don't have a type parameter available to use like in the `max` example above)
AccessViolation_
2025-02-14 03:46:08
ah, so compile-time code evaluation, generics (which look like they might be similar to Rust's monomorphization), and metaprogramming are all handled with `comptime`
2025-02-14 03:46:13
that's pretty neat
lonjil
2025-02-14 03:49:31
Also, while all code has to be syntactically correct, dead code elimination happens *before* checking whether the code makes semantic sense. So you can do `if (@hasField(@TypeOf(foo), "bar")) { foo.bar; }` and it won't complain about the case where foo doesn't have such a field.
AccessViolation_ that's pretty neat
2025-02-14 03:53:02
downside is, if you do anything non-trivial it easily becomes spaghetti, and unlike interfaces or traits, you can't just look at a function signature and know what it needs. Like, there's stuff in the Zig standard library that takes a `writer`, but that's just duck-typed with `anytype`.
2025-02-14 03:54:05
This also makes it a lot harder for library authors to ensure that they don't accidentally break backwards compat.
2025-02-14 03:54:45
And if you write generic code, unless you write a lot of error handling boilerplate to give the user nice errors when they use an improper type, all the errors will be terrible and useless.
AccessViolation_
2025-02-14 04:03:56
I see
2025-02-14 04:06:07
i feel like trying Zig properly some day, but its approach to memory safety doesn't appeal to me
2025-02-14 04:10:14
if I primarily wrote C id probably be very interested, but it doesn't look like i have much to gain coming from Rust, and it gives up memory and thread safety
lonjil
AccessViolation_ i feel like trying Zig properly some day, but its approach to memory safety doesn't appeal to me
2025-02-14 04:19:50
what approach to memory safety? lol
AccessViolation_
2025-02-14 04:27:40
lol, for a language where comptime is a selling point, it's pretty funny how its memory safety is runtime checks instead of compile time proofs
RaveSteel
2025-02-14 09:34:41
Maybe someone can test on their device, but both my Samsung S23 and S24 produce invalid PNG screenshots that cannot be converted using libjxl
2025-02-14 09:35:12
exiftool `Warning : [minor] Trailer data after PNG IEND chunk`
Quackdoc
2025-02-14 09:36:13
~~time to overhaul png loading in cjxl totally~~
RaveSteel
2025-02-14 09:37:07
A much older Samsung using LineageOS outputs valid PNGs
jonnyawsom3
RaveSteel Maybe someone can test on their device, but both my Samsung S23 and S24 produce invalid PNG screenshots that cannot be converted using libjxl
2025-02-14 09:40:13
Can you zip and send one?
RaveSteel
2025-02-14 09:41:21
here
2025-02-14 09:41:55
This is from the S23
jonnyawsom3
2025-02-14 10:00:43
They're storing metadata as raw text after the `IEND` chunk:
```
Samsung_Capture_Info Screenshot
Captured_App_Info
SEFT
```
2025-02-14 10:01:01
RaveSteel
2025-02-14 10:01:26
lmao
jonnyawsom3
2025-02-14 10:01:35
ExifTool seems to read it correctly:
> Warning : [minor] Trailer data after PNG IEND chunk
> Samsung Capture Info : Screenshot
RaveSteel
2025-02-14 10:14:11
I wonder why they handle it that way, since the phone itself doesn't even display this metadata, none of the stock apps do
Demiurge
2025-02-15 01:26:47
It would be a trivial one liner patch in cjxl to not have a mental breakdown when it encounters a trailer
2025-02-15 01:27:37
And just print a warning instead
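The PNG-side half of such a patch is small: find where the IEND chunk ends and treat everything after it as a trailer to warn about rather than fail on. A hypothetical Python sketch (not libjxl code):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def split_png_trailer(data: bytes):
    """Return (png_bytes, trailer_bytes); trailer is anything after IEND."""
    assert data[:8] == PNG_SIG, "not a PNG"
    pos = 8  # skip the 8-byte signature
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        pos += 12 + length  # 4 length + 4 type + payload + 4 CRC
        if ctype == b"IEND":
            return data[:pos], data[pos:]
    raise ValueError("no IEND chunk found")
```

A caller could then warn when the trailer is non-empty and feed only the PNG part to the decoder, which matches the "warn instead of failing" behaviour suggested above.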
CrushedAsian255
2025-02-15 01:42:07
Could even store the trailing data as a "tail" box or something
2025-02-15 01:42:14
Or just discard it
monad
2025-02-15 10:23:26
shouldn't discard by default. unexpected data loss is worse than unexpected encoding failure.
RaveSteel
2025-02-15 10:25:11
Agreed
jonnyawsom3
2025-02-15 10:27:07
Not arbitrary data, but I wanted to use XMP tags to store unsupported PNG text chunks <https://github.com/libjxl/libjxl/issues/2641#issuecomment-2268101325>
2025-02-15 10:27:38
Then either end software can check for the same tag names, or djxl could write the chunks back even if it's not the same order, etc.
Demiurge
2025-02-15 10:58:59
It should preserve it in a metadata box or, failing that, cjxl should just print a warning and exit with a non zero status after ignoring the tail data and transcoding the damn PNG
2025-02-15 10:59:14
:)
CrushedAsian255
2025-02-16 02:30:59
There should be a standardised "unknown" metadata box
2025-02-16 02:31:07
Like to store the tEXt from png
A homosapien
2025-02-16 04:01:27
What about garbage/junk metadata from aCropalypse?
username
2025-02-16 04:04:08
well if it's a format with a dedicated end chunk or marker then aCropalypse can be ignored because none of it is even valid data (valid in the sense of following said format's spec and being considered an actual legit part of the image)
A homosapien
2025-02-16 04:05:30
That makes sense
Demiurge
2025-02-16 04:07:58
Why would tEXt be stored in an unknown box and not some standardized Exif or XMP "comment" tag
jonnyawsom3
2025-02-16 04:14:16
Hence my comment. Invalid data should be invalid and the cause resolved. tEXt chunks are valid but ignored unless already EXIF or XMP currently
CrushedAsian255
2025-02-16 04:26:38
I don't know much about PNG metadata
gb82
AccessViolation_ if I primarily wrote C id probably be very interested, but it doesn't look like i have much to gain coming from Rust, and it gives up memory and thread safety
2025-02-16 05:34:08
if ur coming from rust, idk why you'd try zig
2025-02-16 05:34:15
oops sorry necro
Quackdoc
2025-02-17 03:25:46
magick's hashing probably doesn't include things like metadata or ICC, does it
CrushedAsian255
2025-02-17 04:41:56
No it does not handle metadata
spider-mario
2025-02-24 10:49:51
is my pseudo-quadraphonic setup overkill?
2025-02-24 10:50:12
(the headtracker hasn't arrived yet, but I'm not sure whether it will improve the experience)
Quackdoc
2025-02-24 10:56:07
never :D
AccessViolation_
2025-02-25 09:58:26
i have no clue what i'm looking at (i guess some sort of tool to optimize audio coming from speakers for your head position?) but it looks very interesting
w
2025-02-25 10:10:15
i run my mic through reaper, are you saying I should also be running my desktop audio through it
spider-mario
AccessViolation_ i have no clue what i'm looking at (i guess some sort of tool top optimize audio coming from speakers for your head position?) but it looks very interesting
2025-02-25 10:27:05
first plugin takes the quad audio track and turns it into 3rd-order Ambisonics; the plugin in the screenshot visualises the resulting Ambisonics signal and where the sound comes from (but since it's just from a quad recording, there is no elevation information), and then the last plugin renders the Ambisonics signal to binaural audio to be played on headphones
2025-02-25 10:27:35
so in some sense, it effectively simulates quadraphonic sound on headphones
2025-02-25 10:28:29
the headtracker I refer to is a device that tracks how the user (in this case, me) turns their head, which makes it possible to rotate the Ambisonics signal accordingly in real time before rendering it
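a minimal sketch of that rotation for first-order Ambisonics (the real setup is 3rd order; the channel naming and sign convention here are assumptions, they differ between libraries):

```python
# Yaw-rotate a first-order Ambisonics (B-format W/X/Y/Z) sample, the way a
# head tracker would drive it in real time. Assumes X = front, Y = left.
import math

def rotate_yaw(w, x, y, z, theta):
    """Rotate the sound field by `theta` radians about the vertical axis.

    W (omnidirectional) and Z (height) are unaffected by yaw;
    only X and Y mix together.
    """
    c, s = math.cos(theta), math.sin(theta)
    return w, c * x - s * y, s * x + c * y, z

# A source directly in front (all energy in X) rotated 90 degrees
# ends up entirely on the Y (side) channel:
w, x, y, z = rotate_yaw(1.0, 1.0, 0.0, 0.0, math.pi / 2)
```

higher orders rotate the same way, just with bigger per-order rotation matrices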
AccessViolation_
2025-02-25 10:29:43
that's really cool
2025-02-25 10:31:11
does it only work with rotation or is there also a location aspect to it? i imagine a quad audio track isn't really enough to infer what it would sound like if you were like a meter to the left
2025-02-25 10:32:18
i guess it would be enough if you knew the location of the four microphones and did some sort of trilateration but that sounds like it would be very computationally expensive and timing sensitive
spider-mario
2025-02-25 10:32:39
indeed, a limitation of Ambisonics is that it just models spherical sound
2025-02-25 10:33:54
so this approach wouldn't even let me simulate “walking one metre to the left in an actual room with quad speakers”
AccessViolation_
2025-02-25 10:34:18
makes sense, still really cool though 👀
spider-mario
2025-02-25 10:34:50
I guess I'll see how strongly I miss this once the tracker arrives
2025-02-25 10:35:49
for now, I can turn my “virtual head” by clicking on knobs, but I mostly don't bother 😂
AccessViolation_
2025-02-25 10:36:26
just strap some joycons to your head <:galaxybrain:821831336372338729>
spider-mario
2025-02-25 10:39:28
hm… https://www.digitaltrends.com/gaming/how-to-use-nintendo-switch-joy-cons-on-pc/#dt-heading-frequently-asked-questions
2025-02-25 10:41:03
(for the sake of balance, I will point out that this person seems to have strong opinions against Ambisonics: https://www.reddit.com/r/SpatialAudio/comments/18s2adr/does_ambisonics_format_lower_sound_quality/ki433po/?context=2)
AccessViolation_
2025-02-25 10:43:56
they seem to mostly have practicality issues with the approach that uses speakers though. using headphones, like you are, probably removes a lot of interference from external factors
w
2025-02-25 10:44:20
at some point the answer is to just use speakers
AccessViolation_
2025-02-25 10:46:01
also, if it works for you and you like it, that's all that matters 🙂
spider-mario
2025-02-25 10:49:39
they do say: > dont even get me started on the stupidity of using ambisonics to decode to binaural.
2025-02-25 10:49:46
I'd be curious to get them started
AccessViolation_
2025-02-25 10:50:51
so true, you just need the four or eight audio outputs to satisfy the four or eight ears we have
spider-mario
2025-02-25 10:51:08
maybe they mean when creating an ambisonics mix from scratch as opposed to e.g. capturing with an ambisonics microphone
AccessViolation_
2025-02-25 10:52:47
this reminds me of those "surround sound" headphones that have multiple drivers per side
spider-mario
2025-02-25 10:52:49
(I guess upmixing quad to ambisonics would still fall under that criticism then, but, eh)
AccessViolation_ this reminds me of those "surround sound" headphones that have multiple drivers per side
2025-02-25 10:53:20
oh, yeah, I'm not sure I ever really understood the point
2025-02-25 10:54:09
but then I've never tried them, so who knows what I'm missing out on
AccessViolation_
2025-02-25 10:55:17
i guess it's true that the shape of our ear helps with distinguishing directions too, so maybe they play into that
spider-mario
2025-02-25 10:58:06
apparently, they've stopped using multiple drivers in recent times?
2025-02-25 10:58:27
> Surround Sound headphones are a gimmick. The early ones actually tried to shove a full 7.1 speaker system in, that failed because they had to use tiny little speakers - thereby wrecking the audio quality. The modern ones are a gimmick because at core they're using the same positional audio trick as any other stereo headset. > > I've owned a set of both. One of the best of the original multi-speaker cans, and a more recent "7.1" "surround" headset. They both delivered worse audio and less immersion than high-quality modern Stereo headsets.
2025-02-25 10:59:19
------ > In the past there were multi-driver headphones. Didn't really work out. > Now the majority have 1 driver. (what is okay, considering the improvements.)
AccessViolation_
2025-02-25 11:02:58
ah I see, that makes sense
2025-02-25 11:05:15
spatial audio also works with those head/ear-shaped microphones. then you only need two channels to capture what sounds like basically exactly what you would have heard if you were there, if you listen to it with headphones (or earbuds work even better, i imagine)
2025-02-25 11:05:32
there are many YouTube videos that were made this way, it's pretty cool
CrushedAsian255
2025-02-25 11:09:53
My card doesn't charge international transaction fees it seems, so adding DeepSeek tokens using CNY & WeChat Pay saves me 6 cents.
jonnyawsom3
spider-mario so in some sense, it effectively simulates quadraphonic sound on headphones
2025-02-25 11:45:02
Reminded me of this https://github.com/google/tabuli
spider-mario
AccessViolation_ spatial audio also works with those head/ear-shaped microphones. then you only need two channels to capture what sounds like basically exactly what you would have heard if you were there, if you listen to it with headphones (or earbuds work even better, i imagine)
2025-02-25 11:48:37
yeah, it's quite neat – the main potential drawback is the fact that the listener's orientation is effectively hardcoded in the recording
2025-02-25 11:48:42
(which can be fine depending on the purpose)
Reminded me of this https://github.com/google/tabuli
2025-02-25 11:49:41
it's not a _pure_ coincidence 😏
2025-02-25 11:49:48
e.g. I wrote large parts of this https://github.com/google/tabuli/blob/main/speaker_experiments/angular.cc
2025-02-25 11:50:08
it uses a similar idea as https://github.com/sboukortt/intersect-lv2
2025-02-25 11:55:02
https://github.com/google/tabuli/blob/main/speaker_experiments/two_to_three.cc was a more direct “port”
AccessViolation_
spider-mario yeah, it's quite neat – the main potential drawback is the fact that the listener's orientation is effectively hardcoded in the recording
2025-02-25 07:00:24
ahh yeah that's a good point
Traneptora
AccessViolation_ spatial audio also works with those head/ear-shaped microphones. then you only need two channels to capture what sounds like basically exactly what you would have heard if you were there, if you listen to it with headphones (or earbuds work even better, i imagine)
2025-02-25 10:30:16
3d sound studio my beloved
RaveSteel
2025-02-26 10:32:13
https://gitlab.gnome.org/GNOME/gnome-backgrounds/-/commit/85c63cb0c231570ad2017c3b75b58fe79bad6792
2025-02-26 10:32:29
Unfortunate
AccessViolation_
2025-02-26 10:50:44
weird, i thought decoding performance was about the same for both formats...
RaveSteel
2025-02-26 10:53:54
https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/6886#note_2360630
AccessViolation_
2025-02-26 10:53:57
maybe it can be and libjxl just hasn't had the same optimization efforts as common jpeg decoders
RaveSteel
2025-02-26 10:54:10
The comment that led to this commit talks about 17 seconds of stalling before the desktop loads
2025-02-26 10:54:24
Which indeed does sound weirdly long
2025-02-26 10:54:43
``` 2015 X1 Carbon gen3 i7 with 8gb ram. Not super old, but not super young either. Has a 600mb/sec SATA SSD. Samples for login land roughly 60% in gnome-shell, of which about 85% of that is in JXL decoding, presumably for our new file format for backgrounds. I'm a less concerned that it's 85% and more concerned that it's about a 17 second stall on login. ```
2025-02-26 10:56:01
Certainly an interesting thread
AccessViolation_
2025-02-26 10:57:59
...is it a blocking task?
2025-02-26 10:59:35
yes it is https://gitlab.gnome.org/GNOME/gnome-shell/-/merge_requests/2205
2025-02-26 11:00:53
this doesn't excuse the long decode time, but those should definitely be asynchronous
RaveSteel
2025-02-26 11:04:47
Well, easier for them to just switch back to JPEG
2025-02-26 11:04:58
Sad, but it is what it is
2025-02-26 11:06:11
https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/6886#note_1812522
2025-02-26 11:06:39
The benchmark in this comment using gdk-pixbuf-thumbnailer also doesn't give a favourable result
2025-02-26 11:06:53
In fact, it is pretty much downright unusable, compared to JPEG
2025-02-26 11:07:42
I had wanted to try switching KDE's thumbnailing to WebP and JXL to compare loading times, but haven't gotten around to setting up an environment
veluca
2025-02-26 11:10:00
it's a bit suspicious to me, might be something off in the gdk-pixbuf integration code
RaveSteel
2025-02-26 11:10:19
Seems like a reasonable assumption
jonnyawsom3
veluca it's a bit suspicious to me, might be something off in the gdk-pixbuf integration code
2025-02-26 11:27:27
According to yourself, there was but it was fixed <https://github.com/libjxl/libjxl/pull/2942>
2025-02-26 11:27:32
Maybe something broke again?
2025-02-26 11:28:02
Though, I don't think the OP ever actually said it was fixed, only said it was still slow a few days ago, so it could be a hardware quirk
RaveSteel
2025-02-26 11:33:39
I have no old CPU to test with, but with my current setup get these results ``` $ /usr/bin/time -f "%E seconds" gdk-pixbuf-thumbnailer -s 1440 adwaita-l.jxl foo.png 0:00.77 seconds $ /usr/bin/time -f "%E seconds" gdk-pixbuf-thumbnailer -s 2160 adwaita-l.jxl foo.png 0:01.66 seconds ```
2025-02-26 11:34:21
Which is still slower than the JPG decode in [this comment](https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/6886#note_1812522)
CrushedAsian255
2025-02-27 02:40:37
what format are the wallpapers in? Lossy or lossless?
2025-02-27 02:40:47
If they're high effort lossless that could explain it
juliobbv
veluca it's a bit suspicious to me, might be something off in the gdk-pixbuf integration code
2025-02-27 03:47:14
I was wondering, did they ever test JXLs recompressed from JPEG?
2025-02-27 03:48:05
worst case perf scenario, they'd just end up using those files, and still get some compression benefit over just regular JPEG
veluca
RaveSteel I have no old CPU to test with, but with my current setup get these results ``` $ /usr/bin/time -f "%E seconds" gdk-pixbuf-thumbnailer -s 1440 adwaita-l.jxl foo.png 0:00.77 seconds $ /usr/bin/time -f "%E seconds" gdk-pixbuf-thumbnailer -s 2160 adwaita-l.jxl foo.png 0:01.66 seconds ```
2025-02-27 07:46:26
right, but not *too much* slower
_wb_
2025-02-27 07:46:48
Are they maybe compiling libjxl without any simd or something like that?
veluca
2025-02-27 07:46:58
especially if they have noise generation, I could buy it being slower
2025-02-27 07:47:47
(they don't)
RaveSteel
CrushedAsian255 what format are the wallpapers in? Lossy or lossless?
2025-02-27 07:47:49
Lossy
veluca
2025-02-27 07:48:45
ok, on my pc adwaita-l.jxl takes ~150ms to decode
2025-02-27 07:48:55
I wonder if the thumbnailer uses multithreading
2025-02-27 07:49:21
it's still not *too* slow even in that case (~250ms), but still
RaveSteel I have no old CPU to test with, but with my current setup get these results ``` $ /usr/bin/time -f "%E seconds" gdk-pixbuf-thumbnailer -s 1440 adwaita-l.jxl foo.png 0:00.77 seconds $ /usr/bin/time -f "%E seconds" gdk-pixbuf-thumbnailer -s 2160 adwaita-l.jxl foo.png 0:01.66 seconds ```
2025-02-27 07:49:46
what does `time ~/libjxl/build/tools/djxl -v --disable_output adwaita-l.jxl --num_reps 10 --num_threads 0` tell you?
Tirr
2025-02-27 08:34:20
adwaita-l.jxl has noise, I remember optimizing jxl-oxide for that image
RaveSteel
veluca what does `time ~/libjxl/build/tools/djxl -v --disable_output adwaita-l.jxl --num_reps 10 --num_threads 0` tell you?
2025-02-27 08:51:35
Latest stable version of libjxl ``` $ time djxl -v --disable_output adwaita-l.jxl --num_reps 10 --num_threads 0 JPEG XL decoder v0.11.1 794a5dcf [AVX2,SSE4,SSE2] Read 757379 compressed bytes. Decoded to pixels. 4096 x 4096, geomean: 56.299 MP/s [55.39, 56.76], , 10 reps, 0 threads. real 0m2,997s user 0m2,707s sys 0m0,265s ```
veluca
2025-02-27 08:57:37
So 300ms
2025-02-27 08:57:54
That seems to point at the gdk pixbuf loader being slower than djxl
2025-02-27 08:58:14
Can you try the thumbnailer with jpeg?
RaveSteel
2025-02-27 09:22:27
It's not a direct comparison since both images look slightly different, but running `/usr/bin/time -f "%E seconds" gdk-pixbuf-thumbnailer -s 2160 adwaita-l.jpg foo.png` 10 times always took around 0:01.84 seconds ± 2 milliseconds
veluca
2025-02-27 09:28:54
isn't that pretty much the same as the jxl file?
2025-02-27 09:29:19
(indicating most of the time goes, well, not in the reader)
RaveSteel
2025-02-27 09:30:12
Here are the cjxl settings they've used, pretty standard stuff https://gitlab.gnome.org/Teams/Design/wallpaper-assets/-/blob/master/48/00-tojxl.sh?ref_type=heads ``` cjxl --photon_noise_iso=800 -p -q 85 $1-l.png $1-l.jxl cjxl --photon_noise_iso=800 -p -q 85 $1-d.png $1-d.jxl ```
2025-02-27 09:30:36
Not the exact image, but I want to render the original image to get a better base for comparing
veluca isn't that pretty much the same as the jxl file?
2025-02-27 09:31:11
Yes, slightly longer even
veluca
2025-02-27 09:32:49
maybe they're on some ancient libjxl
RaveSteel
2025-02-27 09:46:19
They are on 10.3
2025-02-27 09:46:25
https://packages.ubuntu.com/search?keywords=libjxl&searchon=names&suite=oracular&section=all
2025-02-27 09:46:36
https://packages.ubuntu.com/oracular/libjxl-tools
_wb_
2025-02-27 09:48:39
maybe there is low available memory and they end up using swap space or something?
RaveSteel
2025-02-27 09:55:22
I don't think so, this problem was reproduced by multiple people in the issue
2025-02-27 09:58:49
libpixbufloader-jxl.so, which the thumbnailer seems to use for JXL is on libjxl version 0.11, but I cannot speak for Ubuntu/GNOME OS
veluca
2025-02-27 10:02:49
guess I should try that out at some point in the future
RaveSteel
2025-02-27 10:03:11
I am installing Ubuntu 24 lts and GNOME OS in VMs currently to test further
jonnyawsom3
2025-02-27 10:11:10
If it helps, they could remove the noise and use 0.11's dithered output instead
RaveSteel Here are the cjxl settings they've used, pretty standard stuff https://gitlab.gnome.org/Teams/Design/wallpaper-assets/-/blob/master/48/00-tojxl.sh?ref_type=heads ``` cjxl --photon_noise_iso=800 -p -q 85 $1-l.png $1-l.jxl cjxl --photon_noise_iso=800 -p -q 85 $1-d.png $1-d.jxl ```
2025-02-27 10:14:38
Does progressive actually do anything in this scenario? I'm fairly sure they're not doing progressive loading during startup, ignoring the fact progressive isn't actually progressive since 0.10's chunked encoding
Traneptora
RaveSteel I don't think so, this problem was reproduced by multiple people in the issue
2025-02-27 12:17:19
the problem was reproduced by multiple people *before* my patch
2025-02-27 12:17:32
so far it's only one person
RaveSteel
Traneptora the problem was reproduced by multiple people *before* my patch
2025-02-27 12:51:24
Then maybe a person here with older hardware may be able to test, my hardware is far too new. I will still test of course, but it's not the same
Traneptora
RaveSteel Then maybe a person here with older hardware may be able to test, my hardware is far too new. I will still test of course, but it's not the same
2025-02-27 12:54:59
you can try compiling it without simd, which is probably the key here
2025-02-27 12:55:21
there's a cmake option to set highway to scalar only
2025-02-27 12:55:26
so nothing will be vectorized
RaveSteel
2025-02-27 03:29:09
At least it doesn't seem to be a general issue with GNOME OS, the results are almost identical to my earlier runs on bare metal
2025-02-27 03:29:51
boot times were also normal, nothing out of the ordinary
2025-02-27 03:30:54
gdk-pixbuf-thumbnailer is slower than djxl, around a factor of 4
2025-02-27 03:31:34
At least on my hardware and in this VM
2025-02-27 03:41:24
gdk-pixbuf uses libjxl 0.10 in GNOME OS
2025-02-27 03:46:04
This background image that I've tested on loads in around a second in the VM
2025-02-27 03:46:34
Takes less than 100ms on baremetal in KDE Plasma
jonnyawsom3
2025-02-27 05:04:26
Loads in under 2 seconds on my 8 year old Android. So something is very wrong if they're getting more than that
RaveSteel
2025-02-27 05:16:04
It also takes around one second using GNOME's image viewer, Loupe, on bare metal
2025-02-27 05:16:27
So it's not an issue of only GNOME OS, but GNOME's implementation is slow for some reason
2025-02-27 05:19:35
It seems to affect other GNOME applications as well
jonnyawsom3
2025-02-27 05:32:28
System time is around the same as djxl, right? So maybe it's something other than the decoding itself
RaveSteel
2025-02-27 05:35:07
djxl decode takes around 0.3 seconds on bare metal, so roughly three times faster than loupe
Tirr
2025-02-27 05:42:17
loupe uses glycin, which sandboxes decoders in separate processes and communicates with them over a unix socket. maybe that's the reason
2025-02-27 05:45:41
and... glycin jxl loader doesn't seem to use parallel runner?
jonnyawsom3
2025-02-27 06:01:08
The results above are all singlethreaded, djxl is still 4x faster
veluca
Tirr loupe uses glycin, which sandboxes decoders in separate process and communicates with those using unix socket. maybe that's the reason
2025-02-27 06:07:36
that likely doesn't help
Tirr
2025-02-27 06:10:22
...and bwrap and seccomp filters and setrlimit. anyway that slowdown on single thread is very suspicious, maybe glue code is slow?
RaveSteel
2025-02-27 06:28:44
I swapped the wallpaper between JXL and JPEG to see if it had any impact on boot time, but no. Maybe my hardware really is just too new for it to have a significant impact ``` --- JXL Wallpaper graphical.target reached after 4.694s in userspace. graphical.target reached after 3.860s in userspace. graphical.target reached after 3.777s in userspace. graphical.target reached after 3.675s in userspace. graphical.target reached after 3.783s in userspace. graphical.target reached after 3.850s in userspace. --- JPEG Wallpaper graphical.target reached after 3.900s in userspace. graphical.target reached after 3.816s in userspace. graphical.target reached after 3.718s in userspace. graphical.target reached after 3.858s in userspace. graphical.target reached after 3.774s in userspace. graphical.target reached after 3.762s in userspace. ```
Demiurge
Tirr loupe uses glycin, which sandboxes decoders in separate process and communicates with those using unix socket. maybe that's the reason
2025-02-27 11:23:14
This is the way... If only dynamic linking was a thing of the past
2025-02-27 11:25:02
Dynamic linking is so ugly and fragile by comparison, and gives the library too much access...
RaveSteel I swapped the wallpaper between JXL and JPEG to see if it had any impact on boot time, but no. Maybe my hardware really is just too new for it to have a significant impact ``` --- JXL Wallpaper graphical.target reached after 4.694s in userspace. graphical.target reached after 3.860s in userspace. graphical.target reached after 3.777s in userspace. graphical.target reached after 3.675s in userspace. graphical.target reached after 3.783s in userspace. graphical.target reached after 3.850s in userspace. --- JPEG Wallpaper graphical.target reached after 3.900s in userspace. graphical.target reached after 3.816s in userspace. graphical.target reached after 3.718s in userspace. graphical.target reached after 3.858s in userspace. graphical.target reached after 3.774s in userspace. graphical.target reached after 3.762s in userspace. ```
2025-02-27 11:26:18
Wouldn't you just boot to a black desktop if it took a long time to decode?
RaveSteel
2025-02-27 11:44:19
No, because they apparently haven't implemented asynchronous decode for the wallpaper, so the login doesn't finish until the decode has finished
Demiurge
2025-02-28 03:02:38
Who hasn't? On X11 the wallpaper is usually drawn by its own dedicated process
2025-02-28 03:03:48
And x11 is not known for being well made
Quackdoc
2025-02-28 03:04:50
its likely an issue with how they have services setup
Demiurge
2025-02-28 03:05:28
I don't know if KDE/GNOME use a separate process or not
2025-02-28 03:06:07
Those have an even worse reputation than x11 itself
Quackdoc
2025-02-28 03:07:56
yeah they do use separate processes
2025-02-28 03:08:33
well KDE at least, gnome I dunno
_wb_
2025-02-28 07:00:24
Without dynamic linking, it would take ages before new libjxl versions percolate to applications. Sure it's messy and complicated, but besides avoiding a bunch of duplication of compiled code, it does have advantages. Plus it doesn't hurt to have another incentive to keep the API stable...
Demiurge
2025-02-28 09:27:47
I mean in general, if libraries were replaced by a multiprocess server/daemon system instead.
2025-02-28 09:28:16
That would be a better design, like in plan9
2025-02-28 09:28:46
A better convention for operating systems beyond the usual Linux-style stagnation
lonjil
_wb_ Without dynamic linking, it would take ages before new libjxl versions percolate to applications. Sure it's messy and complicated, but besides avoiding a bunch of duplication of compiled code, it does have advantages. Plus it doesn't hurt to have another incentive to keep the API stable...
2025-02-28 01:21:14
dynamically linked libraries having updates pushed out and then used by already installed applications is basically only a thing on Linux, and Linux distros could just as easily recompile applications when libraries are updated, rather than rely on dynamic linking. Which they actually have to do with lots of C++ libraries due to the use of templates. And ofc for the 95% of users who are on Windows and macOS, application updates are needed anyway. And the most widely used Linux distros try to be "stable", and thus don't update libraries anyway.
spider-mario
2025-02-28 01:31:43
example: Arch Linux rebuilding a bunch of stuff for flac 1.5 https://archlinux.org/todo/flac-150/
_wb_
2025-02-28 03:27:27
well I think it's not a great idea to ship statically linked copies of image codecs like libjxl with all possible applications that may want to encode/decode images... not just because it duplicates a few megabytes of binary size, but mostly because it means that many applications are not going to bother to push an update for every new libjxl version, so bugfixes and improvements will end up taking longer to get deployed than they already do...
lonjil
2025-02-28 03:47:19
I don't see why applications would be more likely to push updates with new libjxl versions just because it's dynamically linked. Dynamic library updates are always bundled with updates to the application as a whole, so there's always a rebuild of the application anyway.
_wb_
2025-02-28 04:00:48
Well not in Linux distros, or e.g. in things like Homebrew afaik
2025-02-28 04:01:32
Of course if you're going to ship a copy of the library with the application anyway, then you can just as well (or better) link statically
lonjil
2025-02-28 04:18:38
but applications don't push updates on Linux, the distro maintainers choose what to do. i don't really care about static vs dynamic linking but I think this argument is kinda weak.
2025-02-28 04:20:04
and distros like debian and red hat don't ship major library updates much anyway, just bug fix backports
_wb_
2025-02-28 04:35:42
It makes life a lot easier for a distro (and saves a lot of storage/bandwidth) if minor/patch version updates and bugfixes can be done just by updating the library itself instead of having to upgrade anything that depends on it too.
spider-mario
2025-02-28 09:58:07
> Quad binaural is an alternative system [in which] four binaural stereo images are used, corresponding to different head orientations (front, left, back and right). These are mixed together live depending on where the user is looking. This system only supports one axis of rotation, unlike ambisonics which supports all three.
2025-02-28 09:58:07
fun
2025-02-28 10:09:28
I wonder how well it works in practice
2025-02-28 10:10:14
if I look 45° to the side, does mixing 0° and 90° binaural recordings together really work?
2025-02-28 10:13:08
I also wonder how many implementations of this actually exist
2025-02-28 10:13:32
ah, Samsung uses it, apparently
2025-02-28 10:14:06
> As the viewer turns within the “space” of the video, the Gear VR crossfades between the binaural streams. You never hear more than two files in the output. If you're careful with how you construct and monitor your mixes, you can do this with little to no noticeable phase issues as the boundaries are crossed.
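a toy sketch of how that crossfade between the two nearest binaural streams might work (the equal-power fade and the per-sample mixing here are assumptions, not Samsung's actual implementation):

```python
# Quad-binaural mixing: pick the two recordings whose head orientations
# (0/90/180/270 degrees) bracket the current yaw, then equal-power
# crossfade between them so at most two streams are ever audible.
import math

def mix_quad_binaural(streams, yaw_deg):
    """streams: 4 sample values for yaw 0/90/180/270; returns the mix."""
    yaw = yaw_deg % 360.0
    i = int(yaw // 90) % 4            # nearest orientation at or below yaw
    j = (i + 1) % 4                   # nearest orientation above yaw
    t = (yaw - 90.0 * i) / 90.0       # position between the two, 0..1
    # Equal-power crossfade keeps perceived loudness roughly constant.
    gi, gj = math.cos(t * math.pi / 2), math.sin(t * math.pi / 2)
    return gi * streams[i] + gj * streams[j]
```

in practice you'd apply this per stereo frame, and the phase issues mentioned above come from the two recordings not being sample-aligned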
Demiurge
2025-03-01 10:53:18
On Windows and Mac and other non Linux platforms each app vendor ships their own redundant copies of the dynamic linked object files.
2025-03-01 10:53:54
They never get updated until the app vendor decides to update and the app vendor decides what version to use, not the OS vendor. That is just static linking with extra steps!
2025-03-01 10:54:44
Dynamic linking is a cool technology and is useful but the way it's usually used is not appropriate
2025-03-01 10:55:07
And it's a lot more dangerous than a server daemon model
dogelition
Demiurge They never get updated until the app vendor decides to update and the app vendor decides what version to use, not the OS vendor. That is just static linking with extra steps!
2025-03-01 10:58:40
+ lgpl compliance i think
Demiurge
2025-03-01 10:58:44
If you want to share and re-use code as a common shared service then it makes more sense to communicate with a separate process than to link it into the same process, where it can hijack control away from the programmer (take a long time to hand control back to the caller, or just never give control back and hang or abort())
2025-03-01 10:59:47
I think there are cases where it makes sense to link (either statically or dynamically) and a lot of cases where it makes more sense to have it in a separate process space.
2025-03-01 11:00:17
And it's not often considered when to use which
dogelition + lgpl compliance i think
2025-03-01 11:00:55
lgpl is a clown license honestly
2025-03-01 11:01:31
MPL achieves a similar goal more elegantly and permissively
2025-03-01 11:01:57
But licenses and copyright in general is clownish overall so eh
2025-03-01 11:03:00
Patents are kind of interesting because they encourage people to publish and explain their inventions but they also operate in a pretty clownish way
spider-mario
spider-mario (the headtracker hasn't arrived yet but I'm not sure whether it will improve the experience)
2025-03-01 01:05:21
yeah, head tracking works but I'm not sure it feels that mindblowing
2025-03-01 01:06:24
and the zero point drifts a lot
Traneptora
Demiurge They never get updated until the app vendor decides to update and the app vendor decides what version to use, not the OS vendor. That is just static linking with extra steps!
2025-03-01 03:41:37
yes and no, because if you ship dynamic libraries you can use dlopen() or equivalent to load plugins on the fly
2025-03-01 03:41:42
but yes I agree that this is kinda silly
2025-03-01 03:41:52
every GTK app for windows ships a copy of GTK
2025-03-01 03:41:58
for ex.
spider-mario
Traneptora yes and no, because if you ship dynamic libraries you can use dlopen() or equivalent to load plugins on the fly
2025-03-01 05:17:30
VST plugins are a prime example of that (and as you might have guessed, came to my mind from recent experience)
Traneptora
2025-03-01 05:17:55
not sure what VST plugins are
2025-03-01 05:18:14
but I know that a few things use these strats to load plugins on the fly without taking too much ram
2025-03-01 05:18:26
for ex. gdk-pixbuf uses dlopen on libpixbufloader-jxl.so only if a JXL file is found, iirc
spider-mario
2025-03-01 05:24:23
they're audio processing plugins that are typically loaded in DAWs or video editors
2025-03-01 05:24:44
for example, one could buy this audio denoiser from Waves https://www.waves.com/plugins/clarity-vx-pro
2025-03-01 05:24:54
and then use it in Audacity or whatever
2025-03-01 05:24:59
(it's not in the list because I have not bought that plugin)
2025-03-01 05:27:02
https://www.productionmusiclive.com/blogs/news/top-30-best-vst-audio-plugins-in-2022-for-music-production
2025-03-01 05:28:21
LV2 (https://lv2plug.in/) is an API with a similar purpose, and somewhat more popular than VST on Linux due to its licencing
2025-03-01 05:28:41
(but both VST and LV2 exist on Linux and elsewhere)
2025-03-01 05:33:14
ffmpeg has a filter to apply an LV2 or LADSPA (LV2's predecessor) plugin https://ffmpeg.org/ffmpeg-filters.html#lv2 https://ffmpeg.org/ffmpeg-filters.html#ladspa
Traneptora
2025-03-01 05:38:12
ah that's what that is for
2025-03-01 05:38:14
nice
lonjil
Demiurge MPL achieves a similar goal more elegantly and permissively
2025-03-01 06:54:52
And EUPL does an even better job, giving you the best of the MPL, the LGPL, and the AGPL.
Demiurge
Traneptora yes and no, because if you ship dynamic libraries you can use dlopen() or equivalent to load plugins on the fly
2025-03-01 07:30:42
Sometimes it's better to use IPC than to link objects
2025-03-01 07:31:36
Often linking is not what you actually want, it's just used by convention even when it's better served by another method
AccessViolation_
2025-03-03 09:51:18
speaking of OpenStreetMap, a while ago I tried transcoding the PNGs they serve to lossless JXL, and to my surprise they barely got any smaller. I guess they spend a lot on compression to save on bandwidth
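a hedged sketch of that kind of experiment (`-d 0` is cjxl's lossless mode; the `tiles/` directory of downloaded tiles is hypothetical):

```shell
# Losslessly recompress a batch of PNG map tiles to JXL and compare
# the total sizes of the two sets.
for f in tiles/*.png; do
    cjxl --quiet -d 0 -e 7 "$f" "${f%.png}.jxl"
done
du -ch tiles/*.png | tail -1   # total PNG size
du -ch tiles/*.jxl | tail -1   # total JXL size
```

if the PNGs were already squeezed with something like zopflipng, the JXL savings on this kind of flat-color content can indeed be modest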
2025-03-03 10:24:19
oh damn, you could probably use splines really well for that. and you wouldn't even need to use spline heuristics in an encoder, you could just directly convert line-like features from the OpenStreetMap data to splines
2025-03-03 10:56:31
I do like how the quality 1 JPEG has like, idk 16 colors total, and is 200 KB, while the quality 1 WebP looks fairly reasonable and is like half the size <:KekDog:805390049033191445>
Traneptora
AccessViolation_
2025-03-03 08:20:17
pretty sure this is how the ICC varints work?
2025-03-03 08:20:54
annex E specifies the following
2025-03-03 08:20:55
AccessViolation_
Traneptora pretty sure this is is how the ICC Varints work?
2025-03-03 08:24:30
I think so yeah, but i just forwarded that message because openstreetmap was mentioned. what you said was brought up in the discussion surrounding it, you can go there if you click the redirect at the bottom of the forwarded segment 🙂
Traneptora
2025-03-03 08:28:26
ah okay
Demiurge
2025-03-03 08:53:13
I thought they use svg
Vlad (Kuzmin) Erium
2025-03-06 02:51:08
Compression of Spectral Images using Spectral JPEG XL
Authors: Alban Fichet, Christoph Peters (Intel Corporation)
Editor: Brent Burley (Walt Disney Animation Studios); Editor-in-Chief: Eric Haines (NVIDIA)

Abstract: The advantages of spectral rendering are increasingly well known, and corresponding rendering algorithms have matured. In this context, spectral images are used as input (e.g., reflectance and emission textures) and output of a renderer. Their large memory footprint is one of the big remaining issues with spectral rendering. Our method applies a cosine transform in the wavelength domain. We then reduce the dynamic range of higher-frequency Fourier coefficients by dividing them by the mean brightness, i.e., the Fourier coefficient for frequency zero. Then we store all coefficient images using JPEG XL. The mean brightness is perceptually most important and we store it with high quality. At higher frequencies, we use higher compression ratios and optionally lower resolutions. Our format supports the full feature set of spectral OpenEXR, but compared to this lossless compression, we achieve file sizes that are 10 to 60 times smaller than their ZIP compressed counterparts.

https://jcgt.org/published/0014/01/04/
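a toy sketch of the abstract's core idea, not the authors' code: DCT one pixel's spectrum, keep the frequency-0 coefficient (proportional to mean brightness) at high quality, and divide the higher-frequency coefficients by it to shrink their dynamic range before lossy coding

```python
# Range-reduce a spectral sample the way the paper describes: cosine
# transform along the wavelength axis, then normalize AC coefficients
# by the DC (mean-brightness) coefficient.
import math

def dct_ii(samples):
    """Orthonormal DCT-II of one pixel's spectrum (a list of floats)."""
    n = len(samples)
    out = []
    for f in range(n):
        s = sum(x * math.cos(math.pi * (2 * i + 1) * f / (2 * n))
                for i, x in enumerate(samples))
        out.append(s * math.sqrt((1.0 if f == 0 else 2.0) / n))
    return out

def range_reduce(spectrum):
    coeffs = dct_ii(spectrum)
    dc = coeffs[0]
    # AC coefficients become ratios relative to mean brightness, so they
    # can be compressed harder / stored at lower resolution.
    return dc, [c / max(abs(dc), 1e-6) for c in coeffs[1:]]
```

for a spectrally flat pixel all the AC coefficients vanish, which is the best case for this scheme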
jonnyawsom3
2025-03-06 02:54:34
Discussion in https://discord.com/channels/794206087879852103/822105409312653333/1346799113840103455
2025-03-06 06:34:32
Does anyone know of a relatively recent resampling method, called HQIP or something similar, made default in an image processing tool with the description 'a paper will be published on it soon'. I've tried to find it a few times but it seems to have vanished, it quoted better quality and speed than any other method. It's a long shot, but if anyone would know they'd be in here
2025-03-06 06:39:09
I'm also wondering how the downsampling used in libjxl compares to other methods. I know the paper Jon made recently shows a few examples, but it seems like the best of any method and should be implemented elsewhere too
2025-03-06 06:41:54
I considered if jpegli should just resample the image at low quality too, but obviously different dimensions could be a big issue for some applications... Not that they should go that low anyway
Traneptora
I'm also wondering how the downsampling used in libjxl compares to other methods. I know the paper Jon made recently shows a few examples, but it seems like the best of any method and should implemented elsewhere too
2025-03-06 11:11:02
on decode, it just uses the pixel data that's given in multiple passes. do you mean on encode?
jonnyawsom3
2025-03-07 10:41:19
Yeah
spider-mario
2025-03-08 11:46:33
https://www.reddit.com/r/zerotomasteryio/comments/1j5rrsr/a_student_just_disproved_a_40yearold_hash_table/mgkhz61/
Traneptora
spider-mario https://www.reddit.com/r/zerotomasteryio/comments/1j5rrsr/a_student_just_disproved_a_40yearold_hash_table/mgkhz61/
2025-03-08 06:50:22
ohey I know Bill Kuszmaul
2025-03-08 06:50:29
I went to summer camp with him
2025-03-08 06:50:48
https://arxiv.org/abs/2501.02305
_wb_
2025-03-08 06:53:38
it's a small world ๐Ÿ™‚
AccessViolation_
2025-03-09 07:50:23
there is a method of text topic classification that works by compressing the text and comparing the similarity of the Huffman data to some target. the method turned out to be faster than and about as accurate as using word embeddings. now i'm wondering if you could create some sort of whole-image classification thing that uses just the MA trees for 1024x1024 images (one whole group)
2025-03-09 07:55:57
I think it might work. many similar scenes will have similar MA trees, but you might even be able to tell the difference between a pine forest and broadleaf forest because pine trees are easier to predict with NE and NW, and as such that predictor usage would give it away
2025-03-09 07:57:11
tomorrow after work I'll see if I can use Jon's code to encode visually similar images to see if they create similar MA trees
2025-03-09 08:01:02
If it works I doubt it'd be useful at all in practice, because encoding an image losslessly at a high effort setting is probably less resource efficient than using other image classification methods haha
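The compression-based text trick being described is usually formalized as normalized compression distance (NCD); a minimal sketch with zlib standing in for the Huffman-data comparison (the MA-tree variant would substitute tree serializations for the raw bytes):

```python
import zlib

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: near 0 for similar inputs, near 1 for unrelated."""
    ca = len(zlib.compress(a))
    cb = len(zlib.compress(b))
    cab = len(zlib.compress(a + b))  # shared structure makes the concatenation compress better
    return (cab - min(ca, cb)) / max(ca, cb)

x = b"the pine forest stretched north" * 20
y = b"the pine forest stretched north, dark and deep" * 20
z = b"quarterly revenue grew by twelve percent" * 20
assert ncd(x, y) < ncd(x, z)  # similar texts compress together better
```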
derberg๐Ÿ›˜
2025-03-12 10:34:34
The AVIF displayed by the browsers on the left looks awesome /s
jonnyawsom3
2025-03-12 10:46:48
I miss the ocean image with the filesize ๐Ÿ˜”
derberg๐Ÿ›˜
2025-03-12 10:47:01
Looks like it's just very noticeable on my desktop (on both very different screens showing pretty much the same) that is running X11 and using a 3700X & a RX 560 (less shader variant).
I miss the ocean image with the filesize ๐Ÿ˜”
2025-03-12 10:51:40
_petition to add it back_
2025-03-12 10:51:42
https://web.archive.org/web/20230417171231im_/https://jpeg.org/images/jpegxl-home.jpg
TheBigBadBoy - ๐™ธ๐š›
2025-03-13 12:16:24
anyone has a good corpus with JPG images for benchmarking compression?
Meow
2025-03-13 01:08:34
A tag added with filetype:jpg order:score ||https://danbooru.donmai.us||
damian101
TheBigBadBoy - ๐™ธ๐š› anyone has a good corpus with JPG images for benchmarking compression?
2025-03-13 06:44:55
Why JPG images? Wouldn't you want lossless for that?
HCrikki
2025-03-13 06:53:46
if you acquired digital versions of mangas, it's a thoroughly diverse source (humble sometimes offers 20+ volumes for multiple series for barely 20$ in its ebook bundles)
2025-03-13 06:54:53
also handy since many online readers recompress and rehost those so you get a reference for both originals and lousy recompresses to bench against
2025-03-13 06:58:08
reversible jpg transcoding is lightning fast for this kind of content (consistently sub-40 milliseconds; effort 7 is an excellent default, but even 3 still guarantees smaller filesizes at even shorter durations)
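The reversible transcode being described is cjxl's default behavior for JPEG input; a quick sanity check of the round trip (filenames are placeholders):

```shell
# Reversible JPEG -> JXL transcode (lossless JPEG recompression is the default for JPEG input)
cjxl page001.jpg page001.jxl -e 7

# Reconstruct the original JPEG from the JXL container
djxl page001.jxl restored.jpg

# The reconstruction is bit-identical to the input
cmp page001.jpg restored.jpg && echo "bit-identical"
```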
damian101
2025-03-13 06:58:38
Wouldn't you ideally still want to compare against the original in that case? So it's probably better to create the intermediate JPEGs yourself
HCrikki
2025-03-13 06:59:26
scanlators releasing using pngs are a different source of truth
2025-03-13 07:02:37
as for official originals, itd have to be either raws or the png originals in japanese (few paid japanese services offer that even for their own apps)
TheBigBadBoy - ๐™ธ๐š›
Why JPG images? Wouldn't you want lossless for that?
2025-03-13 09:21:32
for benchmarking optimizers once again
damian101
2025-03-13 09:21:50
ah
TheBigBadBoy - ๐™ธ๐š›
2025-03-13 09:28:03
I tested all of these (order: first = smallest output):
- JpegUltraScan.pl
- `ect`
- MozJPEG's `jpegtran`
- IJG's `jpegtran`
- libjpeg-turbo's `jpegtran`, leanify, pingo, jpegoptim, curtail, jpgcrush, jhead (but they were useless compared to those above)
2025-03-13 09:28:50
but I would need a quite big corpus with a lot of variety to be sure it is like that
A homosapien
2025-03-13 09:58:36
pingo was useless compared to ect?
2025-03-13 09:58:47
pingo should perform about the same, if not better
TheBigBadBoy - ๐™ธ๐š›
2025-03-13 10:33:02
mmmmh then I need to test on more files indeed but my point was: I found nothing better than a mix of ect, jpegtran and JpegUltraScan.pl
2025-03-13 10:37:33
oh I remember why I said pingo was useless: > Looking at pingo's result, it looks like it either replicates or directly copies what ECT is doing, in addition to stripping the 18-byte JPEG header: I don't like that at all, since it decreases support https://encode.su/threads/3987-Jpeg-Optimizer-testing?p=77383&viewfull=1#post77383
A homosapien
2025-03-13 11:05:47
I've never seen a situation where stripping the JPEG header mattered
TheBigBadBoy - ๐™ธ๐š›
2025-03-13 11:33:42
perhaps, but I guess it is an invalid JPG file without that header
2025-03-13 11:33:45
(?)
A homosapien
2025-03-13 11:59:28
All programs/websites I have used new and old process it just fine.
2025-03-14 12:08:14
And if it works, then I find pingo to be better than ect
CrushedAsian255
2025-03-14 05:05:43
Is JPEG Ultra scan just brute force ?
Meow
2025-03-14 05:30:50
What does Caesium Image Compressor use for optimising JPEG?
A homosapien
CrushedAsian255 Is JPEG Ultra scan just brute force ?
2025-03-14 05:33:05
yes, it's very slow
TheBigBadBoy - ๐™ธ๐š›
2025-03-14 12:47:38
and the savings are perhaps not so important compared to the exec time but hey, who doesn't want 0.1% save? <:KekDog:805390049033191445>
Fox Wizard
2025-03-14 01:05:32
I often get better savings though <:KittyThink:1126564678835904553>
2025-03-14 01:06:17
And also use up to -b 3 since more never resulted in a smaller file for me
TheBigBadBoy - ๐™ธ๐š›
Fox Wizard I often get better savings though <:KittyThink:1126564678835904553>
2025-03-14 01:55:35
I remember `jhead` saving 6 bytes over jpegultrascan once but that was bc it used arithmetic encoding
Fox Wizard I often get better savings though <:KittyThink:1126564678835904553>
2025-03-14 01:56:27
proof [โ €](https://cdn.discordapp.com/emojis/659513552033415188.webp?size=48&name=av1_pepegun)
Fox Wizard
2025-03-14 02:22:55
Not home XD
TheBigBadBoy - ๐™ธ๐š›
2025-03-14 02:53:23
I never thought I would win so easily https://cdn.discordapp.com/emojis/1126563616343216268.webp?size=48
Fox Wizard
2025-03-14 05:46:15
Won't even be home for 10 more days <:KekDog:884736660376535040>
TheBigBadBoy - ๐™ธ๐š›
2025-03-16 04:42:58
well, I just remember this: https://github.com/MegaByte/jpegultrascan/issues/9 but uncommon iirc and well... it's 4 bytes <:KekDog:805390049033191445>
derberg๐Ÿ›˜
2025-03-17 08:12:09
Could it be that GIMP and krita do not display zero duration multi-frame JXL?
jonnyawsom3
2025-03-17 01:05:56
That's layers, isn't it?
username
2025-03-17 01:37:46
<@693503208726986763> have you ever thought about adding .ogg support to your project [RHEFO](https://github.com/T-3B/rhefo) by making use of [OptiVorbis](https://github.com/OptiVorbis/OptiVorbis)?
2025-03-17 01:38:09
wtf why did only the first URL embed
2025-03-17 01:38:31
test: [OptiVorbis](https://github.com/OptiVorbis/OptiVorbis)
TheBigBadBoy - ๐™ธ๐š›
username <@693503208726986763> have you ever thought about adding .ogg support to your project [RHEFO](https://github.com/T-3B/rhefo) by making use of [OptiVorbis](https://github.com/OptiVorbis/OptiVorbis)?
2025-03-17 01:55:43
Yes of course! It is planned, but as you can see in the sh script there are dozens and dozens of TODOs <:KekDog:805390049033191445>
2025-03-17 01:56:08
I really want to "finish" asap a first version so I can make a release with embedded optimizers
2025-03-17 01:58:26
and that first version should have all the already supported formats completely done, so it takes time (as well as benchmarking/bruteforcing numerous optimizers)
2025-03-17 01:59:35
And once I release it I'll share it on encode.su too
Laserhosen
derberg๐Ÿ›˜ Could it be that GIMP and krita do not display zero duration multi-frame JXL?
2025-03-17 05:43:04
The current Gimp plugin (the one in the libjxl repository and the independent one in Gimp's source) always coalesces zero-length frames, so the layers that appear in Gimp when opening a JXL file are always blended animation frames. See https://github.com/libjxl/libjxl/discussions/3900 and the discussion in https://github.com/libjxl/libjxl/pull/1863 .
jonnyawsom3
2025-03-17 06:29:40
Krita does either animation or layers, but not animated layers
2025-03-17 08:20:28
<https://www.nasa.gov/blogs/missions/2025/03/14/nasa-science-data-received-blue-ghost-captures-eclipse-from-moon/> If only it was a JXL
2025-03-17 08:20:30
https://www.nasa.gov/wp-content/uploads/2025/03/eclipse-shot-from-blue-ghost-mission-1.jpg
derberg๐Ÿ›˜
Krita does either animation or layers, but not animated layers
2025-03-17 09:12:51
Hm... a bit unfortunate but well. I guess it will still take a bit until I can replace ora for layered screenshots with jxl. (Krita shows OpenRaster very well btw.) ~~(I set it to frame duration of 1 ms for the jxl for now. Ignore the small weird things like mergedimage not properly generated in this ora or Floorp positioned a bit too far right in the jxl)~~
2025-03-17 09:13:14
(Also reason on why to replace ora: I doubt tesseract will ever support it & jxl inside ora has zero support as of now anyways)
jonnyawsom3
derberg๐Ÿ›˜ Hm... a bit unfortunate but well. I guess it will still take a bit until I can replace ora for layered screenshots with jxl. (Krita shows OpenRaster very well btw.) ~~(I set it to frame duration of 1 ms for the jxl for now. Ignore the small weird things like mergedimage not properly generated in this ora or Floorp positioned a bit too far right in the jxl)~~
2025-03-17 11:28:33
This might interest you https://bugs.kde.org/show_bug.cgi?id=500877#c4
derberg๐Ÿ›˜
2025-03-17 11:36:35
Yeah, thing is just that ideally it should be a jxl since that has a good chance to be supported by the library used by tesseract and I want to be able to use that to quickly find images with certain content. Imagine being able to filter for files containing specific windows (window names in files) & then further narrowing it down by looking for e.g. some text that was displayed in a certain application. ~~(Ideally everything regarding windows should be vector based cause I want vector screenshots to work outside of GTK3 windows and I just want to be able to grep for text from any window... but that's faaar away)~~
jonnyawsom3
derberg๐Ÿ›˜ Hm... a bit unfortunate but well. I guess it will still take a bit until I can replace ora for layered screenshots with jxl. (Krita shows OpenRaster very well btw.) ~~(I set it to frame duration of 1 ms for the jxl for now. Ignore the small weird things like mergedimage not properly generated in this ora or Floorp positioned a bit too far right in the jxl)~~
2025-03-17 11:43:55
Do you still have the 0 duration JXL?
derberg๐Ÿ›˜
Do you still have the 0 duration JXL?
2025-03-17 11:51:58
screenshot2.jxl has duration: 0.0 ms displayed by `jxlinfo` screenshot.jxl has no duration displayed. Not sure if those files are actually conforming...
jonnyawsom3
derberg๐Ÿ›˜ screenshot2.jxl has duration: 0.0 ms displayed by `jxlinfo` screenshot.jxl has no duration displayed. Not sure if those files are actually conforming...
2025-03-17 11:55:38
screenshot.jxl opens perfectly fine as a layered image in Krita, though the layers weren't named (Which is possible in JXL)
derberg๐Ÿ›˜
2025-03-17 11:55:52
Yeah, just noticed this as well
2025-03-17 11:56:10
Did not try it earlier cause for the others there were issues
2025-03-17 11:56:24
But funnily GIMP does have issues with this one and only shows the top layer
2025-03-17 11:57:29
Also same for Floorp
Quackdoc
2025-03-18 12:00:51
tried gimp with a processing test; no idea if I did something wrong, but had very clear 8-bit processing artifacts when I applied a gaussian blur to a red star on a cyan background
2025-03-18 12:00:58
-_-
2025-03-18 12:02:19
will we ever get a foss photo editor that can handle jxl natively that doesn't suck <:PepeHands:808829977608323112>