|
yoochan
|
|
Demiurge
What do you guys have to gain exactly from their organization? I understand they held a contest and brought the FUIF and PIK teams together, but after that, it sounds like all they did was boss you around, take your money (so you pay for the privilege!), and tell you to focus on meaningless bitrates (0.06 bpp) at the expense of high-fidelity optimizations.
|
|
2024-10-18 05:09:55
|
And iirc it was not a choice by cloudinary, google or John but a mandate by the jpeg committee to create an ISO norm
|
|
|
Demiurge
|
2024-10-18 05:10:42
|
same thing
|
|
2024-10-18 05:11:05
|
The JPEG committee is part of ISO right?
|
|
|
Nova Aurora
|
|
Demiurge
The JPEG committee is part of ISO right?
|
|
2024-10-18 07:01:12
|
ISO/IEC joint working group 1
|
|
|
Demiurge
|
2024-10-18 07:04:32
|
The International Authority Union of Serious Global People Task Force Division 14 Working Group 6 Subcommittee B
|
|
|
Nova Aurora
|
2024-10-18 07:07:38
|
Don't be ridiculous, subcommittee B deals with freight container labeling
|
|
|
_wb_
|
2024-10-18 07:38:35
|
JPEG XL was created by an adhoc group of the 'image coding and quality' subgroup of working group number 1 of subcommittee number 29 of the joint technical committee number 1 of the international organization for standardization and the international electrotechnical commission.
|
|
2024-10-18 07:39:58
|
Things were kept simple, the international telecommunication union was left out of it.
|
|
|
AccessViolation_
|
2024-10-18 07:48:05
|
special containment procedures:
|
|
2024-10-18 07:49:54
|
Honestly you probably could transform it into something that can pass as an SCP if you're not paying attention
|
|
2024-10-18 07:55:12
|
SCP-18181 concerns the containment procedure of any digital representation of electromagnetic radiation with a frequency of 750-420 terahertz. This representation is to be contained in a two-dimensional matrix no larger than 1,073,741,823 x 1,073,741,823
|
|
|
jonnyawsom3
|
|
AccessViolation_
SCP-18181 concerns the containment procedure of any digital representation of electromagnetic radiation with a frequency of 750-420 terahertz. This representation is to be contained in a two-dimensional matrix no larger than 1,073,741,823 x 1,073,741,823
|
|
2024-10-18 08:01:04
|
Oh, good timing, I was about to ping you and ask about your plans with Space Engine some more, since my friend just posted some photos from it themselves
|
|
|
AccessViolation_
|
2024-10-18 08:04:36
|
Oh, yeah photography in space engine is a lot of fun
My plans are currently on hold since I bought a camera last weekend and that's what I've been focused on
|
|
2024-10-18 08:08:19
|
But in general my plan was to take images at different exposures with the game's camera tool, combine them in a way similar to HDR stacking, and create a viewer that's able to resolve the incredible brightness of the objects, utilizing the high bit depth of JXL
|
|
2024-10-18 08:09:35
|
Like the game has a slider for adjusting the exposure for very dark asteroids to very bright stars, and I want a viewer that can do that with a single JXL file produced in that way
|
|
|
jonnyawsom3
|
2024-10-18 08:12:17
|
Ahh fair, I don't own it myself and am having trouble finding much info about the camera. From what I can tell it has tonemapping options going up to Rec2020, but whether there's actual HDR data or screenshots above 8-bit, I have no clue
|
|
|
AccessViolation_
|
2024-10-18 08:19:35
|
I haven't looked into how the game produces images yet, but it doesn't really matter for my use case. I'd just be taking like 10 images over the whole exposure range, from the darkest object in the scene to the brightest object in the scene, and then combining them in some way such that a program can take in the image and the user can selectively look at a given exposure range. The ultimate goal was to be able to take REAL pictures that have this feature by stacking normal pictures and then some with solar filters or whatever, but doing it with Space Engine is actually approachable lol
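A minimal sketch of that kind of exposure merge (my own toy version, nothing Space Engine or libjxl provides; the `gamma=2.2` linearization and the triangle weighting are assumptions about what the captures look like):

```python
import numpy as np

def merge_exposures(images, exposures, gamma=2.2):
    """Merge 8-bit captures taken at different exposures into one
    high-dynamic-range radiance map (simple weighted-average fusion).

    images:    list of uint8 arrays, all the same shape
    exposures: relative exposure factor for each capture (e.g. 1, 4, 16)
    """
    acc = np.zeros(images[0].shape, dtype=np.float64)
    weight = np.zeros(images[0].shape, dtype=np.float64)
    for img, ev in zip(images, exposures):
        lin = (img.astype(np.float64) / 255.0) ** gamma  # undo display gamma
        # trust mid-tones most; clipped highlights/shadows get low weight
        w = 1.0 - np.abs(img.astype(np.float64) / 255.0 - 0.5) * 2.0
        acc += w * lin / ev      # rescale back to a common scene-referred scale
        weight += w
    return acc / np.maximum(weight, 1e-9)
```

Each capture is linearized, divided by its relative exposure to bring it onto a common scene-referred scale, and averaged with weights that distrust clipped pixels; the resulting float radiance map could then be quantized into a high-bit-depth JXL.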
|
|
2024-10-18 08:27:12
|
Anyway, if you like photography and space, flying around to random planets in Space Engine and seeing if there are any nice scenes you can capture is sort of a surreal experience. You know that the procedural generation (in addition to Real Objects from a DB) is based on our knowledge of how the universe works, the atmosphere rendering is based on the composition and pressure, etc., again realistic. So when you go to a planet it sort of strikes you that those scenes might as well be real, like you could be standing on that planet. Not that exact planet unless it's a Real Object, but yeah
|
|
2024-10-18 08:33:22
|
Put on some of your favorite music, turn off the lights and get lost in space, to scale, for a few hours. It's amazing
|
|
|
RaveSteel
|
2024-10-18 09:27:21
|
SpaceEngine looks pretty nice
|
|
|
AccessViolation_
|
|
Ahh fair, I don't own it myself and am having trouble finding much info about the camera. From what I can tell it has tonemapping options going up to Rec2020, but whether there's actual HDR data or screenshots above 8-bit, I have no clue
|
|
2024-10-18 10:42:35
|
Oh I just found this blog, they revamped the screenshot tool, allowing you to take screenshots without a limit on resolution, the only limit being that of the file format. Not sure if that's what you were reading as well
https://spaceengine.org/news/blog240611
|
|
2024-10-18 10:43:41
|
This was a while ago apparently. I haven't played it in quite some time
|
|
|
jonnyawsom3
|
2024-10-18 10:47:58
|
Yeah, I mentioned that to my friend just now and had them set the tonemapping to Rec2020 with the hopes of slightly higher range, but still just 24bit PNGs
|
|
|
A homosapien
|
|
Unoptimized Greyscale images getting extra colors from lossy isn't ideal... https://github.com/libjxl/libjxl/issues/3896
|
|
2024-10-19 12:14:11
|
I responded explaining what png optimizers are, but I still don't understand what this guy is talking about. Why do people insist on focusing on file sizes to determine if something is done "properly"? Where does this mentality come from?
Also, the insistence to not use XYB is strange when converting it to RGB/Grayscale should be lossless, unless there is a bug present?
|
|
|
jonnyawsom3
|
|
A homosapien
I responded explaining what png optimizers are, but I still don't understand what this guy is talking about. Why do people insist on focusing on file sizes to determine if something is done "properly"? Where does this mentality come from?
Also, the insistence to not use XYB is strange when converting it to RGB/Grayscale should be lossless, unless there is a bug present?
|
|
2024-10-19 12:18:22
|
The actual reason they were using it was for greyscale encoding, where for some reason XYB in VarDCT causes extra colors to appear in the greyscale image, totalling around 270 instead of 256
|
|
|
A homosapien
|
2024-10-19 12:28:39
|
Weird, idk if it's a bug or limitation with VarDCT. I feel like cjxl should recognize images with no color information and just use one channel for XYB. It's been bothering me for a while now that empty alpha channels are not discarded as well.
|
|
2024-10-19 12:30:32
|
Or maybe have a parameter like `-x color_space Gray_D65_SRG_Rel_SRG` to force grayscale idk ¯\_(ツ)_/¯
|
|
|
VcSaJen
|
2024-10-19 10:08:02
|
Image with empty alpha in => Image with empty alpha out. It should be present, unless you have metadata that would restore an empty channel on decode.
With that logic an encoded indexed image should also be decoded to an indexed image with no modifications to index numbers, I dunno if that's the case or not.
|
|
|
jonnyawsom3
|
2024-10-19 10:34:17
|
Yeah, empty alpha has very little size overhead, but causes quite a large speed and memory hit for being empty
|
|
|
spider-mario
|
|
A homosapien
Or maybe have a parameter like `-x color_space Gray_D65_SRG_Rel_SRG` to force grayscale idk ¯\_(ツ)_/¯
|
|
2024-10-19 10:49:21
|
we could maybe try to make `-x color_space=Gra_D65_Rel_SRG` work with nominally-but-actually-not RGB images
|
|
2024-10-19 10:49:31
|
it seems it works for PGM input but neither PNG nor PPM
|
|
2024-10-19 10:49:42
|
(but for PGM input, it's already the default anyway)
|
|
2024-10-19 10:50:28
|
```console
$ tools/cjxl -x color_space=Gra_D65_Rel_PeQ test.pgm test.jxl
JPEG XL encoder v0.12.0 095f259b0 [AVX2,SSE4,SSE2]
Encoding [VarDCT, d1.000, effort: 7]
Compressed to 143.3 kB (2.916 bpp).
768 x 512, 7.689 MP/s [7.69, 7.69], , 1 reps, 32 threads.
$ tools/jxlinfo test.jxl
JPEG XL image, 768x512, lossy, 8-bit Grayscale
intensity_target: 10000.000000 nits
min_nits: 0.000000
relative_to_max_display: 0
linear_below: 0.000000
Color space: Grayscale, D65, PQ transfer function, rendering intent: Relative
```
|
|
|
jonnyawsom3
|
2024-10-19 03:59:31
|
Main issue is the OP thought the PNG *was* greyscale, but it's actually RGB with equal values. Having to manually specify the color space for every file is... Less than ideal
|
|
|
lonjil
|
2024-10-19 04:09:57
|
If all the channels are the same values, shouldn't the X and B channels be equal to 0?
|
|
|
AccessViolation_
|
2024-10-19 04:25:56
|
Yeah, wouldn't they get decorrelated perfectly with a residual of 0?
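That intuition can be checked with a toy version of the transform. The matrix below only approximates libjxl's opsin coefficients (illustrative values, not copied from the library; the real pipeline also adds a bias before the cube root), but the relevant property holds: each row sums to 1, so equal R=G=B gives L=M and therefore X exactly 0.

```python
import numpy as np

# Approximate opsin absorbance matrix (rows: L, M, S). Illustrative values,
# NOT libjxl's actual coefficients; what matters is each row summing to 1.
OPSIN = np.array([
    [0.300, 0.622, 0.078],  # L
    [0.230, 0.692, 0.078],  # M
    [0.243, 0.205, 0.552],  # S
])

def xyb_like(rgb):
    """Toy XYB-style transform: mix to LMS, cube-root, form opponent channels."""
    lms = np.cbrt(OPSIN @ rgb)
    x = (lms[0] - lms[1]) / 2.0   # red-green opponent channel
    y = (lms[0] + lms[1]) / 2.0   # luma-ish channel
    b = lms[2]                    # blue-ish channel
    return x, y, b
```

So any extra colors after decode would presumably come from quantization later in the VarDCT pipeline, not from the color transform itself.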
|
|
|
jonnyawsom3
|
2024-10-19 04:39:13
|
That would make too much sense :P
|
|
2024-10-19 04:40:04
|
Original, Optimized
JXL Original, JXL Optimized
|
|
2024-10-19 04:40:44
|
The JXL Original has 270 colors
|
|
|
AccessViolation_
I haven't looked into how the game produces images yet, but it doesn't really matter for my use case. I'd just be taking like 10 images over the whole exposure range, from the darkest object in the scene to the brightest object in the scene, and then combining them in some way such that a program can take in the image and the user can selectively look at a given exposure range. The ultimate goal was to be able to take REAL pictures that have this feature by stacking normal pictures and then some with solar filters or whatever, but doing it with Space Engine is actually approachable lol
|
|
2024-10-19 05:32:34
|
Wondered why the images coming out were so large despite being mostly empty space, turns out the game has dithering enabled by default to avoid banding. So if someone somehow made a mod that allows HDR capture, they'd probably be even smaller than normal while having more detail
|
|
2024-10-19 05:33:35
|
The Pro version allows HDR DDS captures for skyboxes, so it *should* be possible...
|
|
|
AccessViolation_
|
2024-10-19 05:35:55
|
Could you get around dithering by capturing 2x the resolution in both axes and scaling the image down, merging 2x2 pixels into one pixel of a higher bit depth image? Like is that a common operation some software can do
|
|
2024-10-19 05:37:21
|
that's assuming it does dithering in 2x2 pixel tiles, you could go with an even larger original resolution if it uses some other distribution
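The 2x2 merge itself is just box summing at higher precision; a sketch (assuming the dither really does spread the quantization error within each 2x2 tile, which is the uncertain part):

```python
import numpy as np

def depth_boost_downscale(img8):
    """Sum each 2x2 block of an 8-bit image into one pixel with ~2 extra
    bits of precision (result fits a 10-bit range, returned as uint16).

    Only recovers real precision if the source dithering distributes the
    quantization error within each 2x2 tile.
    """
    h, w = img8.shape[:2]
    img = img8[: h - h % 2, : w - w % 2].astype(np.uint16)  # crop to even size
    # sum of four 8-bit values: range 0..1020, i.e. ~10 bits
    return img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]
```

Most image tools can do the equivalent (downscale 50% with a box filter in a 16-bit working space), as long as they don't round back to 8 bits afterwards.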
|
|
2024-10-19 05:38:30
|
I'm really hoping I can just use captures from the camera tool without needing to mod the game lol
|
|
|
jonnyawsom3
|
|
AccessViolation_
Could you get around dithering by capturing 2x the resolution in both axes and scaling the image down, merging 2x2 pixels into one pixel of a higher bit depth image? Like is that a common operation some software can do
|
|
2024-10-19 05:56:20
|
Like the 'HDR' button in the camera, I think it might be misnamed and is actually more like ISO noise
|
|
2024-10-19 05:57:38
|
Without it there *is* banding, but most 'empty' space looks completely fine without the 'dithering'
|
|
2024-10-19 05:58:32
|
(These are from different images, the top has a persistent 'glow' across the image due to atmosphere, the lower is the outer bloom of a planet)
|
|
|
AccessViolation_
I'm really hoping I can just use captures from the camera tool without needing to mod the game lol
|
|
2024-10-19 06:03:48
|
> With regards to SpaceEngine itself, our plan for 2024 is to complete the transition to Vulkan, which will open a lot of new doors. We simply could not build the features we'd wanted (and that existing users have asked for) in the current version of SpaceEngine; OpenGL wasn't up to the task, even if it's maintained.
So it sounds like they're going from OpenGL to Vulkan soon, maybe then they'll have HDR output since it was mentioned on Reddit when someone asked how to enable it too
|
|
|
Demiurge
|
|
_wb_
Things were kept simple, the international telecommunication union was left out of it.
|
|
2024-10-19 07:23:38
|
but the ITU probably has the highest engineer-to-bureaucrat ratio
|
|
|
_wb_
|
2024-10-19 07:32:11
|
ITU seems a bit nicer than ISO, at least it does not put their stuff behind a paywall
|
|
|
|
JKUser4592
|
2024-10-20 03:15:29
|
Can ImageMagick make lossless JXL files (still and animated)? If so, how?
|
|
|
Axiom
|
2024-10-20 06:12:49
|
`magick 1.png -define jxl:effort=7 -quality 100 1.jxl`
|
|
2024-10-20 06:13:16
|
I don't know about animated, but you probably shouldn't use imagemagick for it.
|
|
|
jonnyawsom3
|
2024-10-20 06:15:54
|
You don't have to post it in both places at once ;P
|
|
|
Axiom
|
2024-10-20 06:24:52
|
It would be nice if `cjxl` accepted `.y4m` as an input to make animation, but unlocking more of the encoder's capability for still images is the priority now.
|
|
|
CrushedAsian255
|
|
Axiom
It would be nice if `cjxl` accepted `.y4m` as an input to make animation, but unlocking more of the encoder's capability for still images is the priority now.
|
|
2024-10-20 06:53:04
|
y4m is more for videos, it would be easier to convert to APNG using external software
|
|
2024-10-20 06:53:10
|
also y4m has some weird oddities
|
|
|
w
|
2024-10-20 06:56:16
|
animation is video
|
|
2024-10-20 06:56:20
|
it does make sense to take in y4m
|
|
2024-10-20 06:56:25
|
but jxl SUCKS for animation
|
|
|
CrushedAsian255
|
2024-10-20 07:00:58
|
remember: jxl is an image format not a video format
|
|
|
Orum
|
2024-10-20 07:55:21
|
intra-only codecs are a thing, but generally they're only useful for very high quality and lossless
|
|
|
CrushedAsian255
|
2024-10-20 07:56:04
|
ProRes my beloved (my hard drive has differing opinions)
|
|
|
Orum
|
2024-10-20 07:59:46
|
I would like ProRes a lot more if the encoder wasn't so damn limited
|
|
2024-10-20 08:00:09
|
even on the lowest bitrate setting it's too high for my uses
|
|
|
CrushedAsian255
|
2024-10-20 08:01:04
|
Even Proxy?
|
|
|
Orum
|
2024-10-20 08:01:47
|
But, that's why I would love to get JXL video working in ffmpeg, but first the containers need support for it. This is trivial to patch in (it's like 1 line of code), but recompiling ffmpeg and all things that link to it is such a hassle...
|
|
|
CrushedAsian255
|
|
Orum
even on the lowest bitrate setting it's too high for my uses
|
|
2024-10-20 08:02:11
|
It makes sense though, ProRes is designed for high-end workflows where quality is of the utmost importance and storage space is just another tax write-off
|
|
|
Orum
|
2024-10-20 08:02:28
|
sure, but I only want to use it for proxy clips
|
|
2024-10-20 08:02:38
|
and the proxy bitrate is still stupidly high
|
|
|
CrushedAsian255
|
2024-10-20 08:02:41
|
H.264 all-intra is probably more for you then
|
|
|
Orum
|
2024-10-20 08:02:59
|
H.264 is not very good at intra though
|
|
|
CrushedAsian255
|
2024-10-20 08:03:18
|
Motion JPEG?
|
|
2024-10-20 08:03:31
|
Or is what youโre wanting is Motion JPEG XL
|
|
|
Orum
|
2024-10-20 08:03:57
|
yeah, I basically want MJXL, but for now I'm stuck with mjpeg
|
|
2024-10-20 08:04:21
|
Cineform HD is close, but doesn't decode quickly enough
|
|
|
CrushedAsian255
|
2024-10-20 08:04:24
|
Is jpeg xl decoding fast enough?
|
|
2024-10-20 08:04:39
|
On my Mac, 4k30 encode was 0.2x
|
|
|
Orum
|
2024-10-20 08:04:41
|
with the right options when encoding, I think so
|
|
|
CrushedAsian255
|
2024-10-20 08:04:44
|
Decode was 1.1x
|
|
2024-10-20 08:04:58
|
M3 max 14c
|
|
|
Orum
|
2024-10-20 08:07:34
|
the thing that's nice about JXL is even though the entropy encoding is more expensive computationally, the bitrate is so much less than jpeg (for the same quality) that it's not actually that much worse to decode
|
|
|
CrushedAsian255
|
2024-10-20 08:09:17
|
What settings do you use
|
|
2024-10-20 08:09:26
|
Distance, effort, faster decoding?
|
|
|
Orum
|
2024-10-20 08:14:35
|
yeah, faster decoding is a *must*
|
|
2024-10-20 08:14:58
|
distance depends, as JXL still has major issues with dark areas
|
|
2024-10-20 08:15:13
|
effort is usually 7
|
|
2024-10-20 08:15:41
|
but again, haven't really incorporated this into my workflow as I need container support in ffmpeg
|
|
|
CrushedAsian255
|
2024-10-20 08:17:38
|
Does ffmpeg's current JXL encoder support. Fiddlers
|
|
2024-10-20 08:17:41
|
Faster decoding?
|
|
|
Orum
|
2024-10-20 08:18:55
|
🤷‍♂️ I've just been testing with cjxl for now
|
|
|
lonjil
|
2024-10-20 08:41:39
|
How about using modular e1 instead? Libjxl has special fast decoding for it.
|
|
|
CrushedAsian255
|
2024-10-20 08:44:29
|
That works well for me but lossless sizes are bigger than the ProRes source
|
|
2024-10-20 08:45:45
|
JPEG xl hardware decoder anyone?
|
|
|
Orum
|
|
CrushedAsian255
That works well for me but lossless sizes are bigger than the ProRes source
|
|
2024-10-20 09:14:27
|
doesn't surprise me as ProRes is lossy
|
|
|
A homosapien
|
2024-10-20 09:30:45
|
lossy e1 then
|
|
2024-10-20 09:31:22
|
should provide even faster decoding
|
|
|
Orum
|
2024-10-20 10:30:50
|
I will give that a try when I get home
|
|
2024-10-20 10:31:28
|
you mean VarDCT e1 or lossy modular e1?
|
|
|
CrushedAsian255
|
2024-10-20 12:09:27
|
I thought VarDCT only went to e3
|
|
2024-10-20 12:09:40
|
I'm personally more worried about encode time
|
|
|
A homosapien
|
|
Orum
you mean VarDCT e1 or lossy modular e1?
|
|
2024-10-20 04:40:44
|
try both, see which provides better results
|
|
|
Demiurge
|
|
_wb_
ITU seems a bit nicer than ISO, at least it does not put their stuff behind a paywall
|
|
2024-10-20 04:44:47
|
Yeah, they're probably the nicest one with the least self-fellating bureaucracy
|
|
|
jonnyawsom3
|
2024-10-20 04:57:57
|
I thought the fast paths were only lossless e1
|
|
|
_wb_
|
2024-10-20 05:14:25
|
We don't really have a superfast lossy encode mode yet. Something with fixed prefix codes instead of ANS would be faster...
|
|
|
|
veluca
|
2024-10-20 05:15:52
|
I had hacked something together a while ago, but it wasn't general purpose
|
|
2024-10-20 05:16:04
|
could be resurrected and adapted though
|
|
|
_wb_
|
2024-10-20 05:52:04
|
Would be nice to have something for lossy e1/e2...
|
|
|
lonjil
|
2024-10-20 06:03:27
|
How hard would it be to make a fast lossy modular encoder?
|
|
|
|
veluca
|
2024-10-20 06:04:16
|
probably harder than a fast vardct encoder
|
|
2024-10-20 06:04:35
|
although if you go for "near-lossless style" lossy it's probably not too hard
|
|
2024-10-20 06:05:09
|
wouldn't be as fast as lossless or -- I assume -- as vardct though
|
|
|
_wb_
|
2024-10-20 06:17:19
|
A near-lossless modular could maybe be done almost as fast as lossless, if you do something like just quantizing the prediction residuals. You need an averaging predictor though, Gradient doesn't work for that since you'll end up just quantizing the colors...
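A toy version of that scheme (my own sketch, not libjxl code), using an AvgW+N-style averaging predictor, just to show the error guarantee: quantizing only the residual against the *reconstructed* neighbours bounds the per-pixel error by delta/2, no matter how badly the predictor does.

```python
import numpy as np

def near_lossless(img, delta):
    """Toy near-lossless modular-style coder: predict each pixel from its
    reconstructed West/North neighbours, quantize only the residual to
    multiples of `delta`. Encoder and decoder share the same reconstruction,
    so the max absolute error is delta/2."""
    img = img.astype(np.int32)
    h, w = img.shape
    rec = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            wv = rec[y, x - 1] if x > 0 else 0
            nv = rec[y - 1, x] if y > 0 else 0
            if x > 0 and y > 0:
                pred = (wv + nv) // 2        # averaging predictor (AvgW+N)
            elif x > 0:
                pred = wv
            else:
                pred = nv
            resid = img[y, x] - pred
            q = int(np.rint(resid / delta))  # this is what would be entropy-coded
            rec[y, x] = pred + q * delta
    return rec
```

With `delta=1` it degenerates to exact lossless; and the serial dependence on reconstructed West/North neighbours is precisely what makes SIMD across pixels hard.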
|
|
|
|
veluca
|
2024-10-20 06:18:33
|
yeah, but that breaks SIMDfication of prediction
|
|
|
_wb_
|
2024-10-20 06:18:40
|
With AvgW+N you can do quite good near-lossless, but that won't be fast
|
|
2024-10-20 06:19:01
|
Maybe AvgN+NW or AvgN+NE also works though
|
|
|
lonjil
|
|
veluca
wouldn't be as fast as lossless or -- I assume -- as vardct though
|
|
2024-10-20 06:19:08
|
It'd still be able to make use of the dedicated fast decode paths though, yeah?
|
|
|
|
veluca
|
2024-10-20 06:19:13
|
ah, right, you could use those
|
|
|
lonjil
It'd still be able to make use of the dedicated fast decode paths though, yeah?
|
|
2024-10-20 06:19:20
|
not really
|
|
2024-10-20 06:19:29
|
you could make some new ones though
|
|
|
_wb_
Maybe AvgN+NW or AvgN+NE also works though
|
|
2024-10-20 06:19:56
|
not sure how good it would be tbh, I don't expect too much
|
|
|
_wb_
|
2024-10-20 06:21:44
|
Only way to know is to try, I guess.
|
|
2024-10-20 06:22:23
|
Even if prediction simd cannot be done, it might still be pretty fast compared to, say, e2 lossless
|
|
|
|
veluca
|
2024-10-20 06:24:36
|
not sure why you'd want this over vardct tbh
|
|
|
_wb_
|
2024-10-20 06:29:45
|
You can have stricter guarantees on peak abs error
|
|
2024-10-20 06:30:29
|
Maybe makes sense in the d < 0.05 range.
|
|
2024-10-20 06:30:37
|
Or if the data is not visual.
|
|
|
|
veluca
|
2024-10-20 06:31:10
|
yes, but it seems fairly niche, no?
|
|
|
_wb_
|
2024-10-20 06:32:08
|
Yeah
|
|
2024-10-20 06:33:55
|
Maybe for something like screenshots it could be useful to have fast near-lossless.
|
|
|
jonnyawsom3
|
2024-10-20 06:53:12
|
Even then, lossless to lossy modular still has huge differences in filesize where lossless is best over VarDCT, not entirely sure why though...
|
|
|
Orum
|
2024-10-20 06:55:53
|
how consistent is the distance over the entire effort scale?
|
|
|
_wb_
|
2024-10-20 06:56:56
|
Hard to tell since we don't really have subjective experiment data for all effort settings
|
|
|
spider-mario
|
2024-10-20 07:23:16
|
running `benchmark_xl` at a bunch of efforts and looking at the butteraugli column might give some insight
|
|
|
Orum
|
2024-10-20 07:25:17
|
I've found butteraugli doesn't correlate well with quality for my eyes 🤷‍♂️
|
|
2024-10-20 07:25:52
|
ssimul2 is better, but still far from perfect
|
|
|
spider-mario
|
2024-10-20 07:27:16
|
it's also in benchmark_xl's output
|
|
|
A homosapien
|
|
A homosapien
I agree with Pashifox here, I think there was a change in the encoder itself. I'll do some extra testing with EPF and gaborish disabled.
|
|
2024-10-20 07:31:58
|
Did some preliminary testing: https://discord.com/channels/794206087879852103/1278292301038227489/1297641128333410404
|
|
2024-10-20 07:36:06
|
Doing it by hand is tiring, a program or script would be nice. I want to do this on a dataset rather than a single image
|
|
|
_wb_
|
2024-10-20 07:48:27
|
I have no idea how accurate either butteraugli or ssimu2 really are at estimating encoder consistency at each effort level. They might very well have some subtle biases that are hard to know about without doing fine-grained subjective tests on a large corpus of images and at each effort level. It would be quite costly to do that.
|
|
|
|
veluca
|
2024-10-20 07:57:00
|
I mean, it could be a community effort, possibly
|
|
2024-10-20 07:57:10
|
but it'd still take some work to set the thing up
|
|
|
_wb_
|
2024-10-20 08:24:48
|
The AIC-3 test framework is likely going to be open sourced, if all goes well then at some point we should have something that only requires pushing one button to generate the test infrastructure and then when the responses are in, pushing another button to compute a JND scale from the responses.
|
|
2024-10-20 08:25:05
|
It's still a bit WIP atm though
|
|
|
Demiurge
|
2024-10-20 08:42:20
|
Should "fast decode" option essentially never ever be used? It never makes much difference (other than an extremely bad difference)
|
|
2024-10-20 08:42:32
|
Maybe it should be moved to "expert/useless options mode"
|
|
|
jonnyawsom3
|
2024-10-20 08:45:18
|
It disables expensive codec tools, essentially lowering the effort setting, but does especially badly on lossless
|
|
|
Quackdoc
|
2024-10-20 08:47:11
|
faster_decoding is necessary to get good performance for stuff like allintra video
|
|
2024-10-20 08:51:23
|
though iirc it was broken now
|
|
|
CrushedAsian255
|
2024-10-20 08:58:53
|
slower_decoding when
|
|
|
jonnyawsom3
|
2024-10-20 09:13:01
|
We already have that, it's called `-E 11`
|
|
|
CrushedAsian255
|
2024-10-20 09:13:49
|
And -g 3
|
|
|
spider-mario
|
2024-10-20 09:16:04
|
`-E 11` at home: `avifenc`
|
|
|
|
veluca
|
|
_wb_
The AIC-3 test framework is likely going to be open sourced, if all goes well then at some point we should have something that only requires pushing one button to generate the test infrastructure and then when the responses are in, pushing another button to compute a JND scale from the responses.
|
|
2024-10-20 09:17:00
|
does that use ELO for question selection?
|
|
|
CrushedAsian255
|
2024-10-20 09:18:26
|
If AVIF is av1 how can it play real time?
|
|
2024-10-20 09:18:33
|
The raw power of hardware decoding?
|
|
|
jonnyawsom3
|
|
spider-mario
`-E 11` at home: `avifenc`
|
|
2024-10-20 09:18:58
|
"Honey, we have `-E 11` at home"
`-E 11` at home: `--allow_expert_options -e 11 -d 0 --num_reps=100`
|
|
|
_wb_
|
|
veluca
does that use ELO for question selection?
|
|
2024-10-20 09:31:14
|
No, there is no dynamic question selection. Just predetermined batches and then Thurstonian JND scale reconstruction.
|
|
|
|
veluca
|
2024-10-20 09:32:06
|
I like how ISO can take 3 years to standardise something that is 20% less efficient than what has been used in the meantime
|
|
|
CrushedAsian255
|
|
veluca
does that use ELO for question selection?
|
|
2024-10-20 09:32:18
|
https://en.wikipedia.org/wiki/Electric_Light_Orchestra ?
|
|
|
veluca
I like how ISO can take 3 years to standardise something that is 20% less efficient than what has been used in the meantime
|
|
2024-10-20 09:32:30
|
What did they standardise?
|
|
|
|
veluca
|
2024-10-20 09:33:29
|
pairwise comparisons with static batches
|
|
2024-10-20 09:33:34
|
ELO = the chess thingy
|
|
|
CrushedAsian255
|
2024-10-20 09:37:06
|
JPEG XL could be nice to store chess games as the pieces could be moved around with patches
|
|
|
_wb_
|
2024-10-20 10:08:40
|
The main issue with ELO is that it only gives you a relative ranking. Cannot use it to assess encoder consistency, for example.
|
|
2024-10-20 10:09:44
|
That is, consistency across sources to reach similar JND when using same encode settings.
|
|
|
|
veluca
|
2024-10-20 10:29:02
|
that's somewhat debatable, and it doesn't actually preclude you from using it for question selection but using something else for final scoring, but sure
|
|
|
dogelition
|
|
veluca
ELO = the chess thingy
|
|
2024-10-20 10:45:15
|
nitpick: it's `Elo`, which is just the last name of its creator
|
|
|
|
afed
|
|
CrushedAsian255
https://en.wikipedia.org/wiki/Electric_Light_Orchestra ?
|
|
2024-10-20 10:52:32
|
https://en.wikipedia.org/wiki/Elo_hell
|
|
|
yoochan
|
|
veluca
I like how ISO can take 3 years to standardise something that is 20% less efficient than what has been used in the meantime
|
|
2024-10-21 05:32:16
|
You speak for which standard?
|
|
|
|
veluca
|
2024-10-21 05:33:13
|
AIC-3
|
|
|
_wb_
|
|
veluca
that's somewhat debatable, and it doesn't actually preclude you from using it for question selection but using something else for final scoring, but sure
|
|
2024-10-21 05:42:37
|
Ah, yes, you could do that. The main reason we did static batches is that it's more convenient to implement on typical crowdsourcing platforms. Though multi-round batches is something we did consider.
|
|
|
yks
|
2024-10-21 06:52:31
|
Hello. Is it possible to force loading only a sufficient part of a JXL to create a placeholder? For example in games with low texture quality selection, or with different level-of-detail models which are far away?
|
|
|
CrushedAsian255
|
2024-10-21 06:53:44
|
The format supports that as a use case but I don't know if the libjxl API does though (yet)
|
|
2024-10-21 06:54:23
|
Progressive DC
|
|
|
_wb_
|
2024-10-21 06:54:57
|
The API does support progressive loading, but at the moment it always returns previews at full resolution. There's no API to get lower-res previews.
|
|
|
yks
|
2024-10-21 06:57:12
|
Thanks. That would make a useful feature when you're working with limited resources. If I'm correct, only 15-20% of the file needs to be loaded to create a placeholder?
|
|
|
Oleksii Matiash
|
|
Demiurge
Should "fast decode" option essentially never ever be used? It never makes much difference (other than an extremely bad difference)
|
|
2024-10-21 07:06:54
|
It was mentioned somewhere here that adobe uses fast_decode in dngs, not sure if it is true
|
|
|
CrushedAsian255
|
2024-10-21 07:49:04
|
Yeah i remember that conversation
|
|
2024-10-21 07:49:11
|
They also use a stupid block size
|
|
|
jonnyawsom3
|
|
yks
Thanks. That would make a useful feature when you're working with limited resources. If I'm correct, only 15-20% of the file needs to be loaded to create a placeholder?
|
|
2024-10-21 07:49:23
|
Depending on the file, you can get *something* at 0.15% loaded https://discord.com/channels/794206087879852103/794206170445119489/1285247523807690896
|
|
|
CrushedAsian255
|
2024-10-21 07:50:35
|
Should progressive dc be on by default?
|
|
2024-10-21 07:50:46
|
How much does it impact density?
|
|
|
jonnyawsom3
|
2024-10-21 07:51:09
|
It disables chunked/streaming encoding so memory use gets multiplied
|
|
|
CrushedAsian255
|
2024-10-21 07:51:49
|
If I have plenty of memory, does it affect file size?
|
|
2024-10-21 07:52:24
|
Also, with streaming / non-progressive, is it just the LF group for that 2048x2048, then the HF groups, then the LF for the next 2048x2048?
|
|
|
jonnyawsom3
|
2024-10-21 07:52:30
|
Very slightly. Maybe 1% or even 0.1% larger if I recall
|
|
|
CrushedAsian255
|
2024-10-21 07:54:29
|
Does lossless JPEG reconstruction work with lossless JPEG?
|
|
2024-10-21 07:54:36
|
Would guess no, but just to be sure
|
|
2024-10-21 07:55:25
|
Would be nice to convert all my current DNGs to JXL, could save around 300 GB of storage
|
|
|
_wb_
|
2024-10-21 07:57:58
|
No, but lossless is lossless so if you don't care about reconstructing the files exactly, just the data, you could convert lossless jpeg to lossless jxl...
|
|
|
CrushedAsian255
|
2024-10-21 07:58:37
|
Is there an (offline, cli) tool that does that while keeping ProRAW metadata?
|
|
|
Oleksii Matiash
|
2024-10-21 08:38:37
|
Btw, Adobe made some weird decisions on JXL settings in their DNG usage, but I'm curious whether wisely compressed DNGs (created by some imagined 3rd-party utility) would be decodable by Adobe's software?
|
|
|
lonjil
|
2024-10-21 08:48:59
|
Most likely they just hand the images to libjxl for decoding. At most you may have to match DNG-level visible stuff like tiling.
|
|
|
CrushedAsian255
|
2024-10-21 09:31:02
|
Apple's tiling is different to Adobe's default and afaik it works fine
|
|
2024-10-21 09:31:22
|
Also Adobe made the DNG spec so I would think / hope their software would be compliant
|
|
2024-10-21 09:31:35
|
And so should open any DNG file that is within spec
|
|
|
Oleksii Matiash
|
|
CrushedAsian255
And so should open any DNG file that is within spec
|
|
2024-10-21 11:41:13
|
Yes, my question was about dng specs, do they allow more than adobe uses in their dng converter
|
|
|
CrushedAsian255
|
2024-10-21 11:43:46
|
I can't be bothered looking through it right now but here is a link to it
|
|
2024-10-21 11:43:46
|
https://helpx.adobe.com/content/dam/help/en/photoshop/pdf/DNG_Spec_1_7_1_0.pdf
|
|
|
spider-mario
|
2024-10-21 11:49:02
|
> This optional tag specifies the decode speed parameter used to encode the JPEG XL data in this IFD. Values range from 1 (slow) to 4 (fast).
|
|
|
RaveSteel
|
2024-10-21 11:58:01
|
At least it's optional
|
|
|
lonjil
|
2024-10-21 12:00:45
|
Note that that's just a tag indicating which libjxl encoding setting you used. It's just informational metadata that they decided to make tags for, for some reason.
|
|
|
CrushedAsian255
|
2024-10-21 12:03:57
|
I can just not include it then
|
|
|
Oleksii Matiash
|
|
CrushedAsian255
i can't be bothered looking through it right now but here is a link to it
|
|
2024-10-21 12:06:10
|
Thank you
|
|
|
spider-mario
|
|
CrushedAsian255
I can just not include it then
|
|
2024-10-21 12:11:38
|
yep, by quoting it, I just meant to point out that at least, the possibility of using a different setting from theirs is hinted at
|
|
2024-10-21 12:11:49
|
whether one actually signals it or not
|
|
|
jonnyawsom3
|
2024-10-21 03:07:31
|
I was going to try directly inserting a JXL into a donor DNG and then editing the metadata to match
|
|
|
|
JKUser4592
|
2024-10-21 06:15:47
|
what do you think of this? https://www.reddit.com/r/jpegxl/comments/1g8coud/webp_jxl/
|
|
|
Axiom
I don't know about animated, but you probably shouldn't use imagemagick for it.
|
|
2024-10-21 06:15:56
|
why?
|
|
|
Axiom
|
2024-10-21 06:19:32
|
Easier to use ffmpeg
>Note, you must use an explicit image format specifier to read an APNG (apng:myImage.apng) image sequence, otherwise it assumes a PNG image and only reads the first frame. You must have ffmpeg in your execution path to process APNG images.
This is taken from imagemagick.
|
|
|
RaveSteel
|
|
JKUser4592
what do you think of this? https://www.reddit.com/r/jpegxl/comments/1g8coud/webp_jxl/
|
|
2024-10-21 06:37:49
|
libjxl is currently focusing on still images. Animated lossless JXL still works perfectly fine but often delivers worse results in terms of filesize compared to lossless WebP.
For lossy animations JXL is better due to lossy WebP's 4:2:0 limitation. While this does not matter if the source is a video (those are often yuv420p), JXL will be much better if the input is RGB or 4:4:4
|
|
2024-10-21 06:38:28
|
Lossy JXL based on 4:2:0 will also probably look better but at the cost of much higher filesize
|
|
|
JKUser4592
why?
|
|
2024-10-21 06:39:54
|
The easiest way to pass an APNG to cjxl is by piping with ffmpeg
`ffmpeg -i INPUT -f apng - | cjxl - -d 0 OUTPUT.jxl`
|
|
2024-10-21 06:40:54
|
Passing via imagemagick is also possible. Like so:
`magick INPUT apng:- | cjxl [...]`
|
|
2024-10-21 06:41:17
|
But since IM depends on ffmpeg for this, why not use it directly
|
|
2024-10-21 06:42:10
|
You may be able to get a better result for the JXL filesize by using a PNG optimizer with that APNG, but I haven't tried that personally
|
|
|
jonnyawsom3
|
2024-10-21 07:17:00
|
Yeah, cjxl doesn't try to optimize the image at all, it just stores whatever it's given, so using `apngopti` or similar first lets cjxl take advantage of cropped frames, difference masking via transparency, etc
|
|
2024-10-21 07:17:45
|
It's also why it's sometimes a good idea to turn GIFs into APNG first too, since then it can use cropped frames to shave a few extra bytes off
|
|
|
TheBigBadBoy - ๐ธ๐
|
2024-10-21 07:18:57
|
but GIF supports cropped frames too <:KekDog:805390049033191445>
|
|
|
spider-mario
|
2024-10-21 07:22:13
|
true, one can use e.g. `gifsicle` to perform those optimisations
|
|
2024-10-21 07:22:51
|
`gifsicle -bO3 image.gif` to do it in place, `gifsicle -O3 -o output.gif input.gif` to save the optimised version to another file
|
|
2024-10-21 07:24:01
|
(`--unoptimize` or `-U` to expand to a non-optimised GIF that only uses full frames)
|
|
|
_wb_
|
2024-10-21 07:27:30
|
Currently the libjxl API doesn't allow optimizing animations, since frames are encoded as they are given.
|
|
2024-10-21 07:28:10
|
But maybe we could have an encode option and even enable it by default to allow the encoder to modify stuff as it sees fit
|
|
2024-10-21 07:29:07
|
Like cropping the frame, changing the blend mode from kReplace to kAdd, or whatever else that doesn't change the final result but can improve compression
|
|
2024-10-21 07:30:59
|
Same with dropping trivial alpha channels, changing RGB to grayscale if the content is actually grayscale, or using lossless even though d0.2 was asked, if lossless happens to be smaller.
|
|
2024-10-21 07:34:26
|
Some things can only be done if we know we have all input data before we write any output. E.g. dropping alpha or using lossless instead of XYB requires adjusting the image header, so it can only be decided once all input is known; it's not something that can be done when encoding frame by frame...
|
|
|
RaveSteel
|
|
Yeah, cjxl doesn't try to optimize the image at all, it just stores whatever it's given, so using `apngopti` or similar first lets cjxl take advantage of cropped frames, difference masking via transparency, etc
|
|
2024-10-21 07:35:16
|
Do you have a link to that? There are literally no search results on DDG and Google for it but many different for apng optimiser
|
|
|
jonnyawsom3
|
|
_wb_
Same with dropping trivial alpha channels, changing RGB to grayscale if the content is actually grayscale, or using lossless even though d0.2 was asked, if lossless happens to be smaller.
|
|
2024-10-21 07:35:27
|
I think that last point would actually make Adobe DNG files lossless instead of lossy
|
|
|
RaveSteel
Do you have a link to that? There are literally no search results on DDG and Google for it but many different for apng optimiser
|
|
2024-10-21 07:36:54
|
Right, I forgot I renamed it to be in-line with the other 'apng tools' https://sourceforge.net/projects/apng/files/APNG_Optimizer/
|
|
|
RaveSteel
|
2024-10-21 07:37:01
|
Thanks
|
|
2024-10-21 07:37:26
|
Still, weird that there are zero search results if it was once known as apngopti
|
|
|
jonnyawsom3
|
|
TheBigBadBoy - ๐ธ๐
but GIF supports cropped frames too <:KekDog:805390049033191445>
|
|
2024-10-21 07:38:04
|
*Actual* cropped frames, or just making areas transparent?
|
|
|
RaveSteel
Still, weird that there are zero search results if it was once known as apngopti
|
|
2024-10-21 07:38:47
|
It wasn't, it just felt weird having the name backwards compared to the rest haha
|
|
|
_wb_
|
|
*Actual* cropped frames, or just making areas transparent?
|
|
2024-10-21 07:40:24
|
Gif has actual cropped frames, just like apng and awebp
|
|
|
jonnyawsom3
|
2024-10-21 07:41:09
|
Huh, ya learn something new every day. Guess I've just not seen it without something like TweakPNG which shows it
|
|
|
_wb_
Some things can only be done if we know we have all input data before we write any output. E.g. dropping alpha or using lossless instead of XYB requires adjusting the image header, so it can only be decided once all input is known; it's not something that can be done when encoding frame by frame...
|
|
2024-10-21 07:41:37
|
Couldn't the header be edited after writing or while still in memory? Naturally disabled when streaming output is specified
|
|
|
_wb_
|
2024-10-21 07:45:49
|
It's not just a header problem, imagine you have a layered image with a background layer and then an alpha-blended overlay. The background may very well have a trivial alpha channel. But of course it cannot be stripped, because then the overlay will no longer have an alpha channel.
From the point of view of the libjxl encode api, this is what it sees:
- image basic info says it is 8-bit RGBA
- here is a frame. The alpha channel is all opaque.
- encode the frame please
- here is another frame. The alpha channel is nontrivial.
- close the input, finalize encode please.
|
|
2024-10-21 07:46:44
|
But in another case it could be an animation where the alpha can actually be removed.
|
|
|
Axiom
|
2024-10-21 07:47:40
|
Can't wait to see the encoder fully mature.
`cjpegli` appeared... 30 years after the JPEG format was standardized.
|
|
|
_wb_
|
2024-10-21 07:48:41
|
If you pass all frames to the encoder before asking it to start encoding, then it can know whether or not it is safe to strip alpha. But that's not a great thing to do, memory-wise
|
|
|
jonnyawsom3
|
2024-10-21 07:48:41
|
Then there's multithreading to consider, since encoding a single frame/layer at a time massively slows down encode speed for smaller files.
So optimizing would probably be a higher effort settings, that 'scans' each frame/layer, checks what's needed/redundant, and *then* encodes
|
|
|
_wb_
|
2024-10-21 07:49:42
|
But at least for the case of a single frame still image, we could in principle do some things we currently aren't doing
|
|
|
A homosapien
|
2024-10-21 07:52:23
|
Discarding empty alpha channels and converting RGB to grayscale comes to mind
|
|
|
jonnyawsom3
|
|
_wb_
If you pass all frames to the encoder before asking it to start encoding, then it can know whether or not it is safe to strip alpha. But that's not a great thing to do, memory-wise
|
|
2024-10-21 07:52:44
|
Yeah, unless you store the learned info about each frame separately and reference that instead, but that's a lot more complexity to essentially do
`Frame1Alpha: n Frame2Alpha: n DiscardAlpha` or `Frame1Alpha: y Frame2Alpha: n KeepAlpha`
|
|
|
_wb_
|
2024-10-21 07:54:38
|
Extra channels are global for the whole image/animation
|
|
2024-10-21 07:54:52
|
So either all frames have alpha or none have it
|
|
|
jonnyawsom3
|
2024-10-21 07:56:13
|
Yeah, what I meant is instead of holding all frames in memory, just remember if it has alpha or not, then if all frames have returned no, remove it
|
|
2024-10-21 07:57:18
|
Sorry for making it confusing
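A minimal sketch of that per-frame bookkeeping idea, with hypothetical helper names rather than anything from the actual libjxl API:

```python
def alpha_is_trivial(frame):
    # frame: iterable of (r, g, b, a) pixels; "trivial" = fully opaque
    return all(a == 255 for (_, _, _, a) in frame)

def can_strip_alpha(frames):
    # One boolean of learned state per frame instead of buffering pixels
    # just for the decision (encoding still needs the frames, of course).
    return all(alpha_is_trivial(f) for f in frames)

opaque = [(10, 20, 30, 255)] * 4
translucent = [(10, 20, 30, 128)] * 4
print(can_strip_alpha([opaque, opaque]))       # True: safe to drop alpha
print(can_strip_alpha([opaque, translucent]))  # False: must keep alpha
```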
|
|
|
_wb_
|
2024-10-21 07:57:45
|
Yeah but it would need to keep the frames in memory because the application still expects it to encode them too
|
|
|
jonnyawsom3
|
2024-10-21 07:59:04
|
Ah yes, the encoder needs to encode... Forgot about that part
|
|
|
_wb_
|
2024-10-21 07:59:40
|
I don't think there's a way to avoid it unless we would make some kind of two-pass API
|
|
2024-10-21 08:01:01
|
But animations and layered images aside: for a single frame image where the whole frame is passed at once (which I suppose many applications will do), we could do alpha/grayscale detection and optimization
|
|
2024-10-21 08:04:53
|
And maybe have some heuristic where it tries e1 lossless and if that's not much larger than what lossy produces, then probably it is worth trying e2/e3 lossless to see if it's smaller than lossy.
|
|
2024-10-21 08:08:20
|
Say at lossy e5+ d<0.5 we could do that, and it would not make encoding much slower on average since e1 lossless is very fast and the e2/e3 encode would only be done on images where it's a good idea
|
|
|
jonnyawsom3
|
2024-10-21 08:10:42
|
For a moment I thought about scaling the size increase with distance, so if an image is 10% larger lossless than `-d 2`, it will still do lossless because of the quality gain, but that might result in every file becoming lossless if it's not set properly
|
|
|
RaveSteel
|
2024-10-21 08:12:07
|
How would the encoder know pre-encode that lossless may be only marginally bigger, if at all? Is libjxl currently able to guess the final size before encoding?
|
|
|
_wb_
|
2024-10-21 08:12:47
|
No, cannot guess it. But you can just do both and then compare.
|
|
|
CrushedAsian255
|
2024-10-21 08:14:04
|
Heuristics ?
|
|
|
_wb_
|
2024-10-21 08:16:21
|
Or maybe do e1 lossless first, just to know the size of that, and then start doing lossy and abort as soon as the tentative output is larger than the e1 lossless size (maybe times 0.8 or whatever). Do e2 or e3 lossless if lossy was aborted.
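That try-lossless-first strategy can be sketched as pseudologic; the callables below are hypothetical stand-ins for the actual encoder passes, not libjxl API:

```python
def choose_encoding(encode_lossless_fast, encode_lossy_abortable,
                    encode_lossless_slow, margin=0.8):
    lossless_fast = encode_lossless_fast()       # cheap e1 lossless pass
    budget = int(len(lossless_fast) * margin)    # abort threshold for lossy
    lossy = encode_lossy_abortable(budget)       # returns None if aborted
    if lossy is not None and len(lossy) < len(lossless_fast):
        return "lossy", lossy
    # lossy aborted (or didn't win): spend more effort on lossless (e2/e3)
    slow = encode_lossless_slow()
    return "lossless", min(slow, lossless_fast, key=len)

mode, data = choose_encoding(
    lambda: b"L" * 100,                                  # e1 lossless: 100 B
    lambda budget: b"y" * 50 if 50 <= budget else None,  # lossy: 50 B
    lambda: b"Z" * 90,                                   # e3 lossless: 90 B
)
print(mode, len(data))  # lossy 50
```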
|
|
|
RaveSteel
|
2024-10-21 08:35:12
|
I definitely like the idea, because lossless is always nicer to have than lossy
|
|
|
|
JKUser4592
|
|
RaveSteel
libjxl is currently focusing on still images. Animated lossless JXL still works perfectly fine but often delivers worse results in terms of filesize compared to lossless WebP.
For lossy animations JXL is better due to lossy WebP's 4:2:0 limitation. While this does not matter if the source is a video (those are often yuv420p), JXL will be much better if the input is RGB or 4:4:4
|
|
2024-10-21 09:17:49
|
Did you download the source files? Which ones had the higher quality? Did they look the same, or how were they different?
|
|
|
RaveSteel
|
2024-10-21 09:18:51
|
You did not link your source files in that thread
|
|
|
jonnyawsom3
|
2024-10-21 09:27:22
|
They just made another post, it was a HEVC video as the source, so in other words it's not lossless
|
|
|
|
JKUser4592
|
|
RaveSteel
You did not link your source files in that thread
|
|
2024-10-21 09:46:56
|
It is in the comments section
|
|
|
They just made another post, it was a HEVC video as the source, so in other words it's not lossless
|
|
2024-10-21 09:47:38
|
I converted the HEVC video to an APNG first with the command mentioned
|
|
2024-10-21 10:04:24
|
I'm trying to figure out if the webp has the same quality as JXL with lower file size
|
|
|
jonnyawsom3
|
|
JKUser4592
I converted the HEVC video to an APNG first when the command mentioned
|
|
2024-10-21 10:13:18
|
Yeah, turning a lossy video into a lossless format doesn't make it any less lossy
|
|
|
Demiurge
|
|
JKUser4592
what do you think of this? https://www.reddit.com/r/jpegxl/comments/1g8coud/webp_jxl/
|
|
2024-10-21 11:23:46
|
Not surprising. WebP lossless is really well optimized code from Jyrki, but libjxl has none of those optimizations ported over yet.
|
|
|
RaveSteel
|
2024-10-21 11:24:44
|
There is a lot of potential, so I am excited for the future
|
|
|
Demiurge
|
2024-10-21 11:28:29
|
Yeah, but currently the encoder has hardly any special paths for animated images (no clever stuff like gifsicle) or for low bit depth and limited color images
|
|
|
CrushedAsian255
|
|
_wb_
I don't think there's a way to avoid it unless we would make some kind of two-pass API
|
|
2024-10-21 11:28:41
|
maybe you can add an optional parameter StripAlpha
|
|
|
Demiurge
|
2024-10-21 11:29:10
|
The encoder can't even tell there's an empty/unused alpha channel and strip it a la oxipng
|
|
|
RaveSteel
|
|
Demiurge
Yeah, but currently the encoder has hardly any special paths for animated images (no clever stuff like gifsicle) or for low bit depth and limited color images
|
|
2024-10-21 11:44:15
|
I remain optimistic for the future
|
|
2024-10-21 11:44:20
|
We have time
|
|
|
A homosapien
|
2024-10-21 11:44:44
|
Priority number one is adoption right now
|
|
|
RaveSteel
|
|
Demiurge
|
2024-10-21 11:46:07
|
Well the best way to speed up adoption is to give people good tools to use it
|
|
2024-10-21 11:46:43
|
tools that are better than the alternatives, like gifsicle, oxipng, and webp-lossless
|
|
|
RaveSteel
|
2024-10-21 11:47:09
|
Let's not forget that creators also need to be aware of those tools
|
|
|
Demiurge
|
2024-10-21 11:47:11
|
Otherwise people will rightly continue to use the tools that work better
|
|
|
RaveSteel
|
2024-10-21 11:47:18
|
99% of artists still only create GIFs
|
|
|
CrushedAsian255
|
2024-10-21 11:49:04
|
GIFs
|
|
2024-10-21 11:49:06
|
why
|
|
|
Demiurge
|
2024-10-21 11:49:23
|
because gifs still outperform libjxl
|
|
|
RaveSteel
|
2024-10-21 11:49:53
|
And WebP as well as AVIF have low adoption among sites where creators share their art
|
|
|
Demiurge
|
|
RaveSteel
Let's not forget that creators also need to be aware of those tools
|
|
2024-10-21 11:50:19
|
Not necessarily. libjxl can be integrated already into the tools they use.
|
|
|
RaveSteel
|
2024-10-21 11:50:35
|
While true, they need to be aware of those tools
|
|
2024-10-21 11:50:47
|
And that they offer advantages
|
|
|
Demiurge
|
2024-10-21 11:50:52
|
So it's equally important to develop plugins and integration into other programs and libraries as it is to develop libjxl
|
|
2024-10-21 11:51:57
|
working with and double checking the code from other projects that integrate libjxl
|
|
2024-10-21 11:52:17
|
like vips, ffmpeg, and other libraries and third party software
|
|
2024-10-21 11:52:33
|
graphicsmagick too
|
|
2024-10-21 11:53:26
|
It's hard to develop a good library if you don't look at how people are actually using it in the wild.
|
|
|
RaveSteel
|
2024-10-21 11:54:27
|
And then we have companies like Adobe that want JXL but only offer a pretty limited, if not outright bad, implementation in their commercial offerings
|
|
2024-10-21 11:55:21
|
Samsung creating improper JXLs with their Expert RAW camera app
|
|
2024-10-21 11:55:22
|
etc.
|
|
2024-10-21 11:56:09
|
We should have a thread under adoption, but for bad examples of adoption lol
|
|
|
Demiurge
|
2024-10-22 12:02:08
|
what do you mean? What's samsung do?
|
|
|
CrushedAsian255
|
2024-10-22 12:07:51
|
yes the guardian
|
|
|
RaveSteel
|
|
Demiurge
what do you mean? What's samsung do?
|
|
2024-10-22 12:10:06
|
When they first introduced JXL into Expert RAW their implemented settings were so borked that you would get 1-2MB DNGs
|
|
2024-10-22 12:10:22
|
Utterly unusable
|
|
|
Demiurge
|
2024-10-22 12:10:44
|
nah that's just the efficiency
|
|
|
RaveSteel
|
2024-10-22 12:10:51
|
heck yeah
|
|
2024-10-22 12:11:11
|
It took them weeks to fix it btw
|
|
|
CrushedAsian255
|
2024-10-22 12:11:15
|
jpeg xl is just that good
|
|
|
RaveSteel
|
2024-10-22 12:12:04
|
Expert RAW is now pretty good, except that JXL DNGs still cannot be edited in RawTherapee or Darktable
|
|
2024-10-22 12:12:21
|
So still kinda unusable, but the DNGs are pretty good at least
|
|
|
CrushedAsian255
|
2024-10-22 12:13:56
|
is that an issue with Expert RAW or the other software?
|
|
|
RaveSteel
|
2024-10-22 12:14:22
|
It was an issue with Samsung's settings
|
|
2024-10-22 12:14:25
|
|
|
|
CrushedAsian255
|
2024-10-22 12:14:40
|
i meant the new
|
|
|
RaveSteel
Expert RAW is now pretty good, except that JXL DNGs still cannot be edited in RawTherapee or Darktable
|
|
2024-10-22 12:14:44
|
this message
|
|
|
RaveSteel
|
|
2024-10-22 12:14:48
|
ok that's silly
|
|
|
RaveSteel
|
2024-10-22 12:15:08
|
The DNGs are perfectly fine, but darktable and RT do not have support for DNG 1.7 yet
|
|
2024-10-22 12:15:20
|
Both are able to open JXL files otherwise
|
|
2024-10-22 12:15:42
|
So the solution is to extract the JXL from the DNG and then develop like normal, sans DNG metadata though
|
|
2024-10-22 12:16:45
|
I must admit i was pretty surprised how well this worked when I first tried it
|
|
|
jonnyawsom3
|
|
RaveSteel
When they first introduced JXL into Expert RAW their implemented settings were so borked that you would get 1-2MB DNGs
|
|
2024-10-22 12:30:53
|
Adobe did the same thing... Wonder if it was the same issue in the Adobe SDK
|
|
|
RaveSteel
So the solution is to extract the JXL from the DNG and then develop like normal, sans DNG metadata though
|
|
2024-10-22 12:32:24
|
Depends on the implementation. Adobe splits it into dozens of tiles, TinyDNG does a single JXL, and Apple does 2046 x 2046 (roughly)
|
|
2024-10-22 12:32:42
|
But yeah, everything relies on libraw, which only updates once a year
|
|
|
RaveSteel
|
2024-10-22 12:52:17
|
The issue with Rawtherapee and darktable is that both use their "own" libraries based on libraw. darktable has rawspeed and while Rawtherapee directly integrated libraw, they do not compile with the DNG SDK due to the license
|
|
2024-10-22 12:54:47
|
I think i am wrong regarding rawspeed actually, but darktable does not have support for 1.7 DNGs nonetheless
|
|
|
CrushedAsian255
|
2024-10-22 01:15:01
|
is JXL required for DNG1.7 compliance?
|
|
|
RaveSteel
|
2024-10-22 01:21:31
|
I assume no, but most apps that produce DNGs following the 1.7 spec seem to use JXL
|
|
|
CrushedAsian255
|
2024-10-22 01:22:58
|
not creation, i meant do you need to be able to decode jxl to be dng 1.7 compliant
|
|
|
RaveSteel
|
2024-10-22 01:24:13
|
Good question
|
|
|
_wb_
|
2024-10-22 05:29:32
|
Yes
|
|
2024-10-22 05:30:22
|
If you cannot decode all payloads then you cannot decode DNG 1.7
|
|
|
Oleksii Matiash
|
|
Depends on implimentation. Adobe split it into dozens of tiles, TinyDNG does a single JXL and Apple does 2046 x 2046 (Roughly)
|
|
2024-10-22 06:51:48
|
Thank you for mentioning tinydng. Maybe one day I will be able to support more than just uncompressed DNGs, without the pain of the Adobe DNG SDK
|
|
|
CrushedAsian255
|
|
Oleksii Matiash
Thank you for mentioning tinydng. Maybe one day I will be able to support not only uncompressed dngs without pain with adobe dng sdk
|
|
2024-10-22 07:10:58
|
are you the developer of tiny dng
|
|
|
Oleksii Matiash
|
|
CrushedAsian255
are you the developer of tiny dng
|
|
2024-10-22 07:11:45
|
No, I mean that with tinydng my app would be able to read and write more than just uncompressed DNGs
|
|
|
CrushedAsian255
|
2024-10-22 08:03:24
|
Is there any reason JXL used ANS instead of Arithmetic?
|
|
|
A homosapien
|
2024-10-22 08:06:24
|
It's more flexible I think.
|
|
2024-10-22 08:07:01
|
It's in the name, asymmetric, so faster encoding times with slower decoding times and vice versa.
|
|
2024-10-22 08:08:18
|
My knowledge of the subject ends there. I'm sure there are more reasons why ANS is better than Arithmetic.
|
|
|
CrushedAsian255
|
2024-10-22 08:12:51
|
i thought it was Asymmetric because the numeral system probabilities were different
|
|
2024-10-22 08:13:03
|
i think what you're talking about is internet speeds
|
|
2024-10-22 08:13:21
|
also wouldn't you want faster decoding even at the cost of slightly slower encoding?
|
|
|
lonjil
|
|
CrushedAsian255
Is there any reason JXL used ANS instead of Arithmetic?
|
|
2024-10-22 09:57:32
|
faster
|
|
2024-10-22 09:57:46
|
arithmetic coding is slow
|
|
|
CrushedAsian255
|
2024-10-22 10:00:35
|
makes sense
|
|
2024-10-22 10:01:06
|
is there anywhere where range-ans is explained? i have a rough understanding of the concept of ANS but don't quite get the range version
|
|
|
_wb_
|
2024-10-22 10:44:01
|
https://en.wikipedia.org/wiki/Asymmetric_numeral_systems#Basic_concepts_of_ANS
|
|
2024-10-22 10:48:02
|
In essence, arithmetic coding and ANS are quite similar, main difference being that AC ends up being basically FIFO and encoding/decoding can both be done in order but require maintaining state as two numbers, while ANS is LIFO and decoding is easier than encoding (encoding has to be done in reverse order) and requires only maintaining one number as state.
|
|
2024-10-22 10:53:10
|
Arithmetic coding is mostly slow if you do the CABAC thing like FLIF does (and FFV1, and I suppose many video codecs), which also means coding one bit at a time (with context so you know what kind of bit it is) and updating chances every time, so to read one token with a larger alphabet, you need several AC decodes + chance updates, which is a lot slower than doing a single ANS decode of an entire token at a time and using a static histogram (no chance update).
|
|
2024-10-22 10:57:38
|
Doing things bit-by-bit is OK if most of the stuff are actual bits or low-amplitude stuff, as is typically the case for low-quality lossy. For higher quality it gets substantially slower though, as you can see if you look at for example how AVIF or HEIC encode/decode speeds get lower as quality goes up.
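A toy range-ANS illustrating the LIFO property described above: encoding walks the symbols in reverse so decoding runs forward, one whole token per decode step against a static histogram. A Python big-integer state stands in for the renormalized state of a real coder; the alphabet and frequencies are made up for the example.

```python
FREQ = {"a": 3, "b": 1}       # assumed static symbol frequencies
M = sum(FREQ.values())        # total (real coders use a power of two)
CUM, acc = {}, 0
for s, f in FREQ.items():     # cumulative frequencies
    CUM[s] = acc
    acc += f

def encode(symbols):
    x = 1
    for s in reversed(symbols):   # LIFO: encode in reverse ...
        f = FREQ[s]
        x = (x // f) * M + CUM[s] + (x % f)
    return x

def decode(x, n):
    out = []
    for _ in range(n):            # ... so decoding runs forward
        slot = x % M
        s = next(t for t in FREQ if CUM[t] <= slot < CUM[t] + FREQ[t])
        out.append(s)
        x = FREQ[s] * (x // M) + slot - CUM[s]
    return "".join(out)

msg = "abaab"
assert decode(encode(msg), len(msg)) == msg  # lossless round trip
```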
|
|
|
CrushedAsian255
|
2024-10-22 12:04:50
|
So CABAC works at a bit level, not a symbol level? Why would that be helpful... ever?
|
|
|
_wb_
Arithmetic coding is mostly slow if you do the CABAC thing like FLIF does (and FFV1, and I suppose many video codecs), which also means coding one bit at a time (with context so you know what kind of bit it is) and updating chances every time, so to read one token with a larger alphabet, you need several AC decodes + chance updates, which is a lot slower than doing a single ANS decode of an entire token at a time and using a static histogram (no chance update).
|
|
2024-10-22 12:07:30
|
Isn't the whole point of these non-prefix codes to support larger non-uniform alphabets with more precise weights?
|
|
|
_wb_
|
2024-10-22 12:11:51
|
you use different contexts per bit, e.g. in FLIF there's one ctx for the sign bit, another ctx for the "zero or nonzero" bit, then it actually has the exponent in unary notation with a different ctx for each position, and then it has mantissa bits with a ctx for each position. So that way of encoding can be seen as an overall way to encode a large alphabet with non uniform distribution, but the building blocks are just single bits
|
|
2024-10-22 12:12:37
|
the main reason to do it with bits is that it is easy to do dynamic probability updates
|
|
2024-10-22 12:14:30
|
since you just maintain one chance per bit and updating it can be done with a lookup table with two output chances for every input chance
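A toy version of that per-bit adaptive model: one probability per context, nudged after every observed bit. The shift-based update below approximates the lookup-table chance update; the 12-bit precision and rate constant are illustrative, not FLIF's actual values.

```python
P_ONE_MAX = 4096  # 12-bit probability precision (illustrative)

def update(p_one, bit, rate=5):
    # Exponential-decay nudge of the context's probability-of-one
    # toward the bit that was just seen.
    if bit:
        return p_one + ((P_ONE_MAX - p_one) >> rate)
    return p_one - (p_one >> rate)

p = P_ONE_MAX // 2            # fresh context: starts at 50 %
for b in [1, 1, 1, 0, 1, 1]:
    p = update(p, b)
print(p)  # 2283: leans above 2048 because mostly ones were seen
```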
|
|
|
Jarek Duda
|
|
CrushedAsian255
is there anywhere where range-ans is explained? i have a rough understanding of the concept of ANS but don't quite get the range version
|
|
2024-10-22 02:25:58
|
Here is a good introduction: https://www.youtube.com/watch?v=RFWJM8JMXBs do you have a specific question?
|
|
|
CrushedAsian255
|
|
_wb_
since you just maintain one chance per bit and updating it can be done with a lookup table with two output chances for every input chance
|
|
2024-10-22 07:46:07
|
Does JXL have variable contexts ?
|
|
|
_wb_
|
2024-10-22 08:09:59
|
Variable in what way?
|
|
2024-10-22 08:11:01
|
MA trees in modular mode allow basically arbitrary ctx modeling
|
|
2024-10-22 08:12:16
|
In VarDCT mode, the ctx model is a bit parametrized but mostly static and big, and relying on the ctx map to trim it down to a limited set of histograms
|
|
|
CrushedAsian255
|
2024-10-22 08:28:43
|
As in after a context starts decoding do the probabilities change
|
|
|
_wb_
MA trees in modular mode allow basically arbitrary ctx modeling
|
|
2024-10-22 08:29:06
|
As in the individual probabilities within each MA leaf
|
|
|
_wb_
In VarDCT mode, the ctx model is a bit parametrized but mostly static and big, and relying on the ctx map to trim it down to a limited set of histograms
|
|
2024-10-22 08:29:45
|
Is ctx map like the Huffman tree selector meta image from WebP/VP8L ?
|
|
2024-10-23 05:04:11
|
can extra channels have different resolutions?
|
|
2024-10-23 05:04:26
|
like my depth maps are 1/4 the size of the actual image
|
|
2024-10-23 05:04:28
|
does jxl work?
|
|
|
jonnyawsom3
|
2024-10-23 05:05:17
|
You can do it with upsampling/resampling yeah
|
|
|
_wb_
|
2024-10-23 05:13:04
|
Every extra channel has its own upsampling factor, which can be 1x, 2x, 4x or 8x
|
|
|
CrushedAsian255
|
2024-10-23 09:00:21
|
what upscaling does it use?
|
|
|
_wb_
|
2024-10-23 11:45:56
|
the fancy non-separable one, as signaled in the image header so you can make it do nearest neighbor or whatever if you want that
|
|
|
CrushedAsian255
|
2024-10-24 11:31:18
|
are these the best options so far?
`-d 0 -e 10 -E 4 -I 100 -g 3`
|
|
|
monad
|
2024-10-24 11:37:47
|
If your only concern is density at the expense of compute for general content, yes those are very good options (although you can probably safely make it E11).
|
|
|
CrushedAsian255
|
2024-10-24 11:38:14
|
im not in a "storage is cheap" situation, so I really only care about density
|
|
|
monad
|
2024-10-24 11:49:05
|
on a 783 image set, the best of d0e10E3I100g0modular_palette_colors70000 and d0e10E3I100g3modular_palette_colors0Y0 achieves about 1% better density
|
|
|
jonnyawsom3
|
2024-10-24 11:51:43
|
The first would be much lower memory usage because of g0
|
|
|
monad
|
2024-10-24 11:53:00
|
("best of" meaning you try both. the first alone is not good in general)
|
|
|
CrushedAsian255
|
2024-10-25 12:01:05
|
lol can't really be bothered so im just gonna do d0e10E3I100g3
|
|
2024-10-25 12:01:14
|
or maybe d11
|
|
|
jonnyawsom3
|
2024-10-26 05:49:25
|
<@446428281630097408> Sorry for the ping again, but since you already have all the tooling required, do you think you could create a cropped version of your 21 channel JXLs?
Want some rough numbers on the different `-E` values from none to max to request the option in GDAL, but with it being 20 images in one, 20MP is being pretty rough to my RAM haha. 2048 x 2048 would be a full DC block at least, although if you already have the numbers laying around, that would help too
|
|
2024-10-26 06:04:13
|
https://github.com/OSGeo/gdal/issues/11127
|
|
2024-10-26 06:05:12
|
<@781543780629217321> after seeing your issue for the extra channel distances, I thought I should open one for this too. If all goes well you should be able to just use GDAL without needing intermediaries
|
|
|
|
sglidq
|
|
<@781543780629217321> after seeing your issue for the extra channel distances, I thought I should open one for this too. If all goes well you should be able to just use GDAL without needing intermediaries
|
|
2024-10-26 08:07:46
|
<@238552565619359744> Thanks for the help! If GDAL JPEG XL Driver supports the `-E` option, I can get a simple solution since the default distance of extra channels has been revised to be the same as the distance of main channels.<:BlobYay:806132268186861619>
|
|
2024-10-26 08:12:23
|
<@238552565619359744> I have another small question about the quality and the distance. The current libjxl source supports setting d to 0.01, but setting any q in the range [0,100] makes d larger than 0.1 (no matter whether using the new mapping function since v0.8 or the old one from v0.6). If I want to understand this, what references can I read?
|
|
|
jonnyawsom3
|
2024-10-26 08:18:39
|
I noticed you saying that before, but realistically those values shouldn't be required anyway. It's only due to the 16bit padding essentially shifting the image into orders of magnitude lower quality that you want them at the moment.
Currently libjxl can't tell the difference between a dark image, and an image that's been pushed into being dark due to higher bitdepths. If you already tried fixing the input by moving the padding to the lowest bits instead of the highest, and the quality is still degraded, then there may be bigger issues at play
|
|
|
|
sglidq
|
2024-10-26 08:26:16
|
Thanks for your patient explanation. I got it: Usually, d=0.1 or q = 99.99999 should produce a high-enough quality of the reconstructed image if the input image is correctly padded.
What I am concerned about is that the allowed `d` and `q` ranges produce different PSNR/MSE ranges. So why not let min(d) = 0.1 or let the quality2distance mapping reach a distance approaching 0.01?
|
|
|
CrushedAsian255
|
2024-10-26 08:27:03
|
The real problem is the dark image artifacting
|
|
2024-10-26 08:27:11
|
Your problem is just a side effect
|
|
|
|
sglidq
|
|
I noticed you saying that before, but realistically those values shouldn't be required anyway. It's only due to the 16bit padding essentially shifting the image into orders of magnitude lower quality that you want them at the moment.
Currently libjxl can't tell the difference between a dark image, and an image that's been pushed into being dark due to higher bitdepths. If you already tried fixing the input by moving the padding to the lowest bits instead of the highest, and the quality is still degraded, then there may be bigger issues at play
|
|
2024-10-26 08:28:28
|
Do we need a padding option for cjxl instead of the indirect `--intensity_target` option?
|
|
|
CrushedAsian255
The real problem is the dark image artifacting
|
|
2024-10-26 08:30:37
|
Yes, the dark image artifacting is the core issue. It seems HEVC and VVC handle high bit-depth with smaller QP, e.g., the smallest QP for an NBITS image is `-6 * (NBITS - 8)`. And JPEG XL handles this by padding to MSB.
|
|
|
jonnyawsom3
|
|
CrushedAsian255
The real problem is the dark image artifacting
|
|
2024-10-26 08:31:30
|
Kinda, it's a 10bit image in 16bit padding, but the padding is on the high bits instead of the low, so the perceptual encoding sees black, and encodes it as black
|
|
2024-10-26 08:32:20
|
If you're a human, that looks fine, but converting it back to 10bit afterwards pulls it back into the visible range
|
|
|
|
sglidq
|
2024-10-26 08:37:44
|
Thanks. So I can just do 1. pre-padding to MSB to cover the full 16-bit range; 2. cjxl + djxl; 3. post-dropping the LSB to the true bit depth.
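The pre-pad / post-drop round trip for a 10-bit sample in a 16-bit container can be sketched like this (a toy illustration of the MSB-vs-LSB padding point, not cjxl code; bit replication into the low bits keeps 1023 mapping to the full 65535 range):

```python
def store_lsb_dark(v10):
    # 10-bit value left in the low bits, zeros padded into the MSBs:
    # a perceptual encoder sees a nearly black image
    return v10

def store_msb_fullrange(v10):
    # value shifted into the high bits, with the top bits replicated
    # into the low bits so 1023 maps to 65535 (full 16-bit range)
    return (v10 << 6) | (v10 >> 4)

def drop_to_10bit(v16):
    return v16 >> 6  # truncate back to the true bit depth

print(store_lsb_dark(1023))                     # 1023 of 65535: "black"
print(store_msb_fullrange(1023))                # 65535: full-range white
print(drop_to_10bit(store_msb_fullrange(513)))  # 513: round trip is exact
```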
|
|
|
jonnyawsom3
|
2024-10-26 08:43:10
|
Ideally the input would already be flagged as 10bit, then cjxl could just encode a 10bit image instead, but I think only PPM supports that and it can't carry metadata...
|
|
|
CrushedAsian255
|
2024-10-26 08:44:20
|
Maybe a "bit depth" argument to cjxl?
|
|
|
jonnyawsom3
|
2024-10-26 08:44:52
|
There already is, but it doesn't just truncate the bits as far as I'm aware, it actually downsamples to the set depth
|
|
2024-10-26 08:45:37
|
> in case of PNG, when storing 10-bit images, the padding should be in the lsb, not in the msb (that is, you still need to use the full range of 0 to 65535). If the PNG does not look correct when viewed in, say, a web browser, then you're doing something wrong.
A quote from Jon on one of the old Github issues
|
|
2024-10-26 08:47:40
|
What was the original format of the images again?
|
|
|
|
sglidq
|
|
CrushedAsian255
Maybe a "bit depth" argument to cjxl?
|
|
2024-10-26 08:49:51
|
GDAL's JPEG XL driver supports `NBITS`, i.e., the "bit depth" argument as mentioned.
|
|
|
jonnyawsom3
|
2024-10-26 08:53:49
|
Hmm... Assuming I'm right, maybe the bitdepth setting should first check for padding, if it fits then remove it, otherwise *then* downsample like it does currently
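That heuristic could look something like this sketch (hypothetical, not libjxl code): if every stored sample already fits in the target depth, the padding is in the MSBs and the values can be reinterpreted losslessly; otherwise fall back to truncation as today:

```python
def to_bitdepth(samples, target_bits, container_bits=16):
    shift = container_bits - target_bits
    if all(s < (1 << target_bits) for s in samples):
        # MSB-padded: the stored values *are* the target-depth values,
        # so removing the padding is a lossless no-op.
        return list(samples)
    # Genuine high-depth data: downsample by dropping the low bits.
    return [s >> shift for s in samples]

assert to_bitdepth([0, 512, 1023], 10) == [0, 512, 1023]     # padded case
assert to_bitdepth([0, 32768, 65535], 10) == [0, 512, 1023]  # true 16-bit case
```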
|
|
|
|
sglidq
|
|
> in case of PNG, when storing 10-bit images, the padding should be in the lsb, not in the msb (that is, you still need to use the full range of 0 to 65535). If the PNG does not look correct when viewed in, say, a web browser, then you're doing something wrong.
A quote from Jon on one of the old Github issues
|
|
2024-10-26 08:56:15
|
Which option is the "bit depth" argument to cJXL? Is it `--intensity_target=N`?
Upper bound on the intensity level present in the image, in nits.
Default is 0, which means 'choose a sensible default value based on the color encoding'.
|
|
|
jonnyawsom3
|
2024-10-26 08:56:52
|
```--override_bitdepth=BITDEPTH
Default is zero (use the input image bit depth); if nonzero, override the bit depth```
|
|
|
|
sglidq
|
|
Hmm... Assuming I'm right, maybe the bitdepth setting should first check for padding, if it fits then remove it, otherwise *then* downsample like it does currently
|
|
2024-10-26 08:56:59
|
So it will be a safe way for me to do pre-padding + post-dropping for cjxl/djxl, right?
|
|
|
```--override_bitdepth=BITDEPTH
Default is zero (use the input image bit depth); if nonzero, override the bit depth```
|
|
2024-10-26 08:58:43
|
Got it. The default uses the true bit depth. I did not test cjxl on this option, but I tested GDAL JPEG XL's `NBITS` option. It must be the true bit depth; otherwise, the compression performance becomes worse.
|
|
|
jonnyawsom3
|
2024-10-26 08:59:54
|
When you say "the compression performance becomes worse" how is the quality score for the resulting JXL?
|
|
2024-10-26 09:00:56
|
It's likely just massively overshooting your quality target now that the image is 'fixed', so `-d`/`-q` could be lowered
|
|
2024-10-26 09:02:43
|
I probably should've just asked for a cropped version of the file so I could do some testing myself, thinking about it now
|
|
|
|
sglidq
|
|
When you say "the compression performance becomes worse" how is the quality score for the resulting JXL?
|
|
2024-10-26 09:14:27
|
For the lossless case (d=0), ARAD_1K_0950.tif uploaded previously. PSNR calculation uses a peak of 10000.
NBITS=12 (True Bit Depth)
PSNR: inf, bpsp: 4.075046429527506
NBITS=11 (<)
PSNR: 67.84136199951172 (lossy), bpsp: 4.075027606746085
NBITS=13 (>)
PSNR: inf, bpsp: 4.075070480859322 (larger)
|
|
|
jonnyawsom3
|
2024-10-26 09:17:10
|
Right, I see now
|
|
|
|
sglidq
|
2024-10-26 09:18:08
|
Lossy, d=0.1
NBITS=12 (True Bit Depth)
PSNR: 69.22028350830078, bpsp: 1.5923111029313346
NBITS=11 (<) not OK
PSNR: 66.53080749511719 (lower), bpsp: 2.1877478332887166 (higher)
NBITS=13 (>) unsure
PSNR: 65.31294250488281 (lower), bpsp: 1.1273727161691875 (lower)
|
|
2024-10-26 09:18:47
|
For the lossy case, setting NBITS smaller than the input image bit depth results in a worse R-D performance.
|
|
2024-10-26 09:19:59
|
Setting a larger NBITS results in both lower PSNR and lower bpsp. It's hard to say which is better from a single rate-distortion point.
|
|
|
sglidq
Lossy, d=0.1
NBITS=12 (True Bit Depth)
PSNR: 69.22028350830078, bpsp: 1.5923111029313346
NBITS=11 (<) not OK
PSNR: 66.53080749511719 (lower), bpsp: 2.1877478332887166 (higher)
NBITS=13 (>) unsure
PSNR: 65.31294250488281 (lower), bpsp: 1.1273727161691875 (lower)
|
|
2024-10-26 09:23:25
|
The effect of `NBITS` is not similar to the `intensity_target` in cjxl. cjxl results in both higher PSNR and bpsp with larger `intensity_target` given a fixed `d`. Maybe the effect of `NBITS` is similar to the `--override_bitdepth` option.
|
|
|
|
tufty
|
2024-10-28 10:41:29
|
silly sample JXL image someone made with layers and animation: https://gbr.kampidh.com/kampidh/jxl-layers-e7.jxl
might be handy for testing
seems to work well with libvips 8.16 fwiw
|
|
|
jonnyawsom3
|
2024-10-28 11:32:06
|
Oh, so it was you on Bluesky
|
|
2024-10-28 12:07:48
|
Some backstory here https://discord.com/channels/794206087879852103/803574970180829194/1299039655345524810
|
|
|
|
tufty
|
2024-10-28 12:30:54
|
ah, cool!
|
|
|
jonnyawsom3
|
2024-10-28 02:06:52
|
This is what happens when everyone's watching the Reddit channel xD
|
|
|
Quackdoc
|
2024-10-29 07:52:10
|
Does anyone know what happened with zamfofex? Seems the github account is gone and jxl-crx repo is dead
|
|
2024-10-29 07:52:54
|
wondering if anyone had the jxl-oxide branch saved, I may have it somewhere
|
|
|
yoochan
|
2024-10-29 08:01:45
|
Dunno... I'll check if I forked it. He was a member of this discord
|
|
|
|
JKUser4592
|
2024-10-29 10:17:03
|
Did anyone see my test videos and files from my two Reddit posts last week about this? What were the results? Did they all have the same colors? Were there any quality differences?
|
|
|
jonnyawsom3
|
2024-10-30 12:49:06
|
Again, you don't need to repeat everything here that you've already asked on Reddit, and you still never uploaded the videos or files....
|
|
|
|
JKUser4592
|
2024-10-30 03:21:34
|
I sent links to them
|
|
2024-10-30 03:21:49
|
they're in my other posts
|
|
2024-10-30 03:22:20
|
in the comments sections
|
|
|
jonnyawsom3
|
2024-10-30 04:02:50
|
You haven't, you just keep linking to your own posts instead of a file
|
|
|
|
JKUser4592
|
2024-10-30 04:36:21
|
here https://mega.nz/file/NEByCYDb#JVZZedqh7evGLDZTP0j4fj_TZ5kq-2VxbIAOJ-Ub5zw
|
|
2024-10-30 04:36:42
|
https://mega.nz/file/cU41VCJY#qzSine3Es-kDdQ5vaWZ6xKfMtS9txjv0uzrydYPzQWE
|
|
2024-10-30 04:38:02
|
You also have to open the 7z file
|
|
2024-10-30 01:36:29
|
<@238552565619359744> I get it now: my comments with the links to the files kept getting auto deleted by the sub's spam filter.
|
|
|
jonnyawsom3
|
2024-10-30 01:36:53
|
Ahh right, Reddit is very stingy about Mega links...
|
|
|
Dejay
|
2024-10-31 01:55:43
|
This might be a stupid question, but a JXL file can use both VarDCT and image features like patches at the same time, right? For normal color images... The two modes are not exclusive right?
|
|
|
jonnyawsom3
|
2024-10-31 02:15:18
|
Yeah, patches are even still enabled on images that don't exceed 2048 in either dimension. Used to be a PR that would pick a modular block if it compressed better
|
|
|
CrushedAsian255
|
|
Yeah, patches are even still enabled on images that don't exceed 2048 in either dimension. Used to be a PR that would pick a modular block if it compressed better
|
|
2024-10-31 06:39:44
|
Can I use patches on top of other patches?
|
|
|
jonnyawsom3
|
2024-10-31 08:23:17
|
Yes... I think
|
|
|
_wb_
|
2024-10-31 08:59:05
|
yes, you can use multiple patch frames and use patches as a coding tool in patch frames, just like any other coding tool
|
|
|
Traneptora
|
2024-10-31 09:44:32
|
yup, currently WIP: I'm having Hydrium assemble patch frames to get the final frame
|
|
2024-10-31 09:44:38
|
dunno if it's better than just sending frames as tiles
|
|
|
DZgas ะ
|
2024-11-02 07:41:00
|
<:pancakexl:1283670260209156128>
|
|
2024-11-02 07:42:56
|
can VP8L encode only the frame difference? I have not found any information about this, and after looking through the source code, I didn't notice anything like that either <:VP8:973222137202610226> <:WebP:1279637593868210368>
|
|
|
_wb_
|
2024-11-02 08:11:45
|
Both webp and jxl have cropped frames and alpha-blended frames as a basic inter mechanism. Jxl also has other blend modes. But cjxl does not use any of that, it just encodes the input literally (doesn't try to do its own cropping or conversion of replace to other blend modes).
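For reference, the alpha-blended inter mechanism boils down to ordinary source-over compositing; a minimal per-sample sketch (generic compositing math, not libjxl's actual blend-mode code):

```python
def blend_over(base: float, frame: float, alpha: float) -> float:
    # Source-over: the new frame replaces the base only where it is
    # opaque, which lets an animation update a small region per frame.
    return alpha * frame + (1.0 - alpha) * base

assert blend_over(100.0, 200.0, 0.0) == 100.0  # transparent: base shows through
assert blend_over(100.0, 200.0, 1.0) == 200.0  # opaque: frame replaces base
```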
|
|
|
A homosapien
|
2024-11-02 08:14:00
|
cjxl is kinda dumb not gonna lie
|
|
|
CrushedAsian255
|
|
A homosapien
cjxl is kinda dumb not gonna lie
|
|
2024-11-02 08:25:17
|
it's not mature like libx26x
|
|
2024-11-02 08:25:25
|
eventually it will be smart
|
|
2024-11-02 08:25:40
|
(and probably 80% hand written in asm lol)
|
|
|
_wb_
|
2024-11-02 10:26:24
|
Focus has been and still is on still images. For animation there is stuff that can be done, low hanging fruits to improve compression, but my general feeling is that we shouldn't really encourage using still image formats for gif-like animation since video codecs are more suitable for that. Sure, jxl could consistently beat gif/apng/webp for animations if we do some basic improvements like detecting crops, but compared to av1 or even h264 it would still be pretty bad.
|
|
|
CrushedAsian255
|
2024-11-02 10:27:12
|
you can also put the png into a png optimiser before giving it to libjxl if you want things like crop detection
|
|
|
_wb_
Focus has been and still is on still images. For animation there is stuff that can be done, low hanging fruits to improve compression, but my general feeling is that we shouldn't really encourage using still image formats for gif-like animation since video codecs are more suitable for that. Sure, jxl could consistently beat gif/apng/webp for animations if we do some basic improvements like detecting crops, but compared to av1 or even h264 it would still be pretty bad.
|
|
2024-11-02 10:28:11
|
> Focus has been and still is on still images
The pages idea from <#803574970180829194> feels like something that would be worth improving since it's kind of a standout feature; improvements there could be transferred over to animated images
|
|
|
_wb_
|
2024-11-02 10:30:37
|
Yes, I agree, for something like a rasterized slide deck probably jxl would be quite nice, and detecting things like header/footers not changing (much) between pages is the same thing as detecting crops in animation. Reusing patches across frames would also be nice, e.g. encode font once and reuse in many slides.
|
|
|
CrushedAsian255
|
2024-11-02 10:39:30
|
i feel like this would be difficult to implement but something like this would be really clever
|
|
2024-11-02 10:39:36
|
|
|
2024-11-02 10:39:45
|
cjxl detecting that it's all a background and encoding it as a separate frame
|
|
|
_wb_
|
2024-11-02 10:43:43
|
Yeah, the bitstream allows representing all of that kind of stuff, but making an encoder that exploits it well is something else. Same with splines etc
|
|
|
jonnyawsom3
|
2024-11-02 10:47:37
|
I know Veluca(?) had the idea of using an LZ77 pass to detect patch candidates, doing that across frames could be one solution
|
|
|
CrushedAsian255
|
|
I know Veluca(?) had the idea of using an LZ77 pass to detect patch candidates, doing that across frames could be one solution
|
|
2024-11-02 10:50:20
|
yeah, as patches are effectively 2d lz77
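A toy version of that idea, just to make the analogy concrete (purely illustrative, nothing like the actual PR or encoder): hash fixed-size tiles across all frames and report tiles that occur more than once as patch candidates:

```python
def patch_candidates(frames, tile=8):
    """frames: list of 2D lists of pixel values, all the same size.
    Returns {tile_contents: [(frame, y, x), ...]} for repeated tiles."""
    seen = {}
    for f, img in enumerate(frames):
        for y in range(0, len(img) - tile + 1, tile):
            for x in range(0, len(img[0]) - tile + 1, tile):
                # Use the tile contents themselves as the dictionary key.
                key = tuple(tuple(img[y + dy][x + dx] for dx in range(tile))
                            for dy in range(tile))
                seen.setdefault(key, []).append((f, y, x))
    # A tile seen in more than one place is a candidate for a patch
    # (store once, reference everywhere else) -- 2D LZ77 in spirit.
    return {k: v for k, v in seen.items() if len(v) > 1}
```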
|
|
|
A homosapien
|
2024-11-02 11:58:01
|
If cjxl's focus is primarily on still images, then at some point there's going to be an img2cjxl? Like WebP's tools
|
|
|
|
JKUser4592
|
2024-11-02 02:45:17
|
Well, what were the results? Did they all have the same colors? Were there any quality differences? https://www.reddit.com/r/jpegxl/comments/1gfoo4u/webp_jxl_reupload/
|
|
|
RaveSteel
|
2024-11-02 02:52:42
|
Lossless to lossless will not have any differences
|
|
2024-11-02 02:53:20
|
you can compare yourself by using imagemagick's `identify` to compare image signatures
|
|
|
|
JKUser4592
|
2024-11-02 04:56:17
|
How would that work?
|
|
|
RaveSteel
|
2024-11-02 05:00:20
|
do `/usr/bin/identify -format "%m %#\n" INPUT` to get the image signature. for apng you need to prefix the input with apng:INPUT
|
|
|
|
JKUser4592
|
2024-11-02 05:06:51
|
What's "%m %#\n" for?
|
|
|
RaveSteel
|
2024-11-02 05:15:06
|
%m prints out the filetype and %# prints out the hash
|
|
|
Demiurge
|
|
_wb_
Focus has been and still is on still images. For animation there is stuff that can be done, low hanging fruits to improve compression, but my general feeling is that we shouldn't really encourage using still image formats for gif-like animation since video codecs are more suitable for that. Sure, jxl could consistently beat gif/apng/webp for animations if we do some basic improvements like detecting crops, but compared to av1 or even h264 it would still be pretty bad.
|
|
2024-11-02 06:54:24
|
Someone ought to just port gifsicle to output jxl...
|
|
|
|
JKUser4592
|
|
RaveSteel
do `/usr/bin/identify -format "%m %#\n" INPUT` to get the image signature. for apng you need to prefix the input with apng:INPUT
|
|
2024-11-03 01:09:07
|
what's `/usr/bin/` for?
|
|
|
RaveSteel
|
2024-11-03 01:09:41
|
that's the absolute path, you can leave that out
|
|
|
|
JKUser4592
|
2024-11-03 01:13:23
|
How would I know if the quality is either the same or different?
|
|
|
RaveSteel
|
2024-11-03 01:16:31
|
Assuming that nothing went wrong during the lossless encoding, the output will have the same checksum as the input file
|
|
2024-11-03 01:17:04
|
comparing the hashes of the files guarantees that there is no difference
|
|
|
|
JKUser4592
|
2024-11-03 01:20:16
|
This is what I got when I ran the identify command for one of my jxl files.
|
|
|
RaveSteel
|
2024-11-03 01:20:36
|
now compare this to your input, which is the apng
|
|
|
|
JKUser4592
|
2024-11-03 02:35:56
|
APNG
|
|
2024-11-03 02:36:12
|
WEBP
|
|
|
Oleksii Matiash
|
2024-11-03 06:54:13
|
Somebody has to use Google or ask ChatGPT, and not this thread
|
|
|
๐๐ฆ๐๐ฃ๐ธ๐ฅ๐ฉ๐ฏ๐ฆ | ๆไธ่ชฟๅใฎไผๆญ่
| ็ฐ่ญฐใฎๅ
็ด
|
2024-11-03 09:13:00
|
is it normal to have `cjxl` error out reading APNG data?
|
|
2024-11-03 09:13:13
|
|
|
2024-11-03 09:14:54
|
```
$ cjxl -j 0 -d 1 -e 7 -p --progressive_dc 1 lurk-googly.apng lurk-googly.jxl
JPEG XL encoder v0.10.2 [AVX2,SSE4,SSE2]
Getting pixel data failed.
```
|
|
2024-11-03 09:18:35
|
the APNG file was emitted from ImageMagick
|
|
|
DZgas ะ
|
|
A homosapien
cjxl is kinda dumb not gonna lie
|
|
2024-11-03 09:38:18
|
Yeah, I can't even compress the whole of Shrek... ...Wait a minute though. <:megapog:816773962884972565>
|
|
|
CrushedAsian255
|
2024-11-03 10:12:46
|
Proof of concept but the first 2 minutes of shrek
|
|
|
VcSaJen
|
|
CrushedAsian255
Proof of concept but the first 2 minutes of shrek
|
|
2024-11-03 12:28:56
|
IrfanView plays it just fine, but XnView MP is having trouble, plays in choppy slow-motion.
|
|
|
Oleksii Matiash
|
|
VcSaJen
IrfanView plays it just fine, but XnView MP is having trouble, plays in choppy slow-motion.
|
|
2024-11-03 12:37:53
|
How did you get IV to play it? Oo I'm getting this error. I have the latest jpeg_xl plugin installed
|
|
|
jonnyawsom3
|
|
๐๐ฆ๐๐ฃ๐ธ๐ฅ๐ฉ๐ฏ๐ฆ | ๆไธ่ชฟๅใฎไผๆญ่
| ็ฐ่ญฐใฎๅ
็ด
```
$ cjxl -j 0 -d 1 -e 7 -p --progressive_dc 1 lurk-googly.apng lurk-googly.jxl
JPEG XL encoder v0.10.2 [AVX2,SSE4,SSE2]
Getting pixel data failed.
```
|
|
2024-11-03 12:58:17
|
Try updating your libjxl version to 0.11, there were a lot of issues with the APNG loader before
|
|
|
VcSaJen
|
|
Oleksii Matiash
How did you get IV to play it? Oo I'm getting this error. I have the latest jpeg_xl plugin installed
|
|
2024-11-03 01:00:24
|
Mine is not latest, program version 4.60, JPEG_XL.DLL version 4.59.1.
|
|
|
jonnyawsom3
|
2024-11-03 01:01:03
|
I was going to ask, how does the decode speed differ if you update... But apparently it's -infinity :P
|
|
2024-11-03 01:03:27
|
Does it still decode with djxl?
|
|
|
Oleksii Matiash
|
|
VcSaJen
Mine is not latest, program version 4.60, JPEG_XL.DLL version 4.59.1.
|
|
2024-11-03 02:07:24
|
Lol. After replacing 4.68 with 4.60.1 it started loading. There is a 4 second delay on loading, but then it plays correctly
|
|
2024-11-03 02:09:23
|
4.63 can't decode it too
|
|
|
jonnyawsom3
|
2024-11-03 02:26:31
|
ImageToolbox can play around a minute, then hits OOM and restarts the Android UI
|
|
|
VcSaJen
|
2024-11-03 02:36:24
|
ffplay plays it fine. mpv, too.
|
|
|
jonnyawsom3
|
2024-11-03 11:50:46
|
Waterfox plays it fine
|
|
|
๐๐ฆ๐๐ฃ๐ธ๐ฅ๐ฉ๐ฏ๐ฆ | ๆไธ่ชฟๅใฎไผๆญ่
| ็ฐ่ญฐใฎๅ
็ด
|
|
2024-11-04 12:34:11
|
Unfortunately Discord is stripping the APNG chunks on download so I can't test myself
|
|
|
A homosapien
|
|
๐๐ฆ๐๐ฃ๐ธ๐ฅ๐ฉ๐ฏ๐ฆ | ๆไธ่ชฟๅใฎไผๆญ่
| ็ฐ่ญฐใฎๅ
็ด
|
|
2024-11-04 12:57:12
|
upload it within a zip file so that discord doesn't strip anything out
|
|
|
Mine18
|
2024-11-04 04:24:07
|
assuming that software implementations use libjxl, what would disallow them from saving or viewing animated jxl files? I've tried exporting a gif as an animated jxl file through Image Toolbox, which partially works when viewing it through the same app (it plays when unfocused but not when focused), but Imageglass and Photopea only display the first frame, and Fossify gallery doesn't show the file at all
|
|
|
jonnyawsom3
|
2024-11-04 05:25:23
|
I think that's related to the issues we mentioned yesterday with `shrek.jxl`: the latest IrfanView errors but the old one works, browsers load it but Fossify doesn't, etc
|
|
|
Mine18
|
2024-11-04 05:33:21
|
so could this be a fairly old issue with libjxl?
|
|
|
jonnyawsom3
|
2024-11-04 05:37:08
|
Either a recent issue or developers not updating properly
|
|
2024-11-04 05:37:36
|
Since `djxl` still decodes things fine
|
|
|
_wb_
|
2024-11-04 05:55:09
|
Applications that don't take into account that jxl can be animated might just not have the code to deal with it at all.
|
|
|
Mine18
|
|
_wb_
Applications that don't take into account that jxl can be animated might just not have the code to deal with it at all.
|
|
2024-11-04 05:55:29
|
that's the most likely scenario
|
|
|
Quackdoc
|
|
Mine18
assuming that software implementations use libjxl, what would disallow them from saving or viewing animated jxl files? I've tried exporting a gif as an animated jxl file through Image Toolbox, which partially works when viewing it through the same app (when unfocused it plays but doesnt when focused), but when trying it on Imageglass and Photopea only displays the first frame, and Fossify gallery doesn't show the file at all
|
|
2024-11-04 10:28:53
|
you have to update the frame, which in some cases can be really hard without doing a complete overhaul of the system
|
|
|
CrushedAsian255
|
2024-11-05 12:17:07
|
is there any way to generate animated JXL without using APNG?
|
|