JPEG XL
jxl

Anything JPEG XL related

jonnyawsom3
CrushedAsian255 The encoder should detect areas that are darker than the rest of the image and not near high light and increase quantisation quality
2024-09-28 07:21:45
Do it based on relative darkness instead of absolute
Demiurge
2024-09-28 07:21:59
And on a really good screen like that, it would be even more noticeable when jxl encoding destroys all those beautiful details that you paid all that HDR-screen money for in the first place.
2024-09-28 07:22:31
But that's just a funny hypothetical situation I'm imagining. 😂
CrushedAsian255
Do it based on relative darkness instead of absolute
2024-09-28 07:22:33
Darker than the rest of the image means relative
jonnyawsom3
2024-09-28 07:23:49
Yeah, which is why a completely dark image with darker areas shouldn't all be bad
2024-09-28 07:25:00
(I have been up all night long so my wording may be poor and my reasoning made up)
Demiurge
2024-09-28 07:26:48
The more your eye is able to adjust to it, the more you notice small variations...
2024-09-28 07:27:03
Also, from what I understand, there really shouldn't be any reason for a DCT codec to BLUR so much.
2024-09-28 07:27:23
DCT is extremely good at preserving texture.
2024-09-28 07:30:43
The excessive blurring and smoothing and utter obliteration of image features seems like a really lousy and perplexing choice: between adding some extra DCT noise but preserving the detail underneath it, or utterly obliterating and smoothing out all noise and detail.
2024-09-28 07:31:16
With DCT it's actually more work and more effort to smooth things
2024-09-28 07:31:37
So why spend more effort on doing something that's just so much worse?
2024-09-28 07:33:16
I'm sorry if I'm being too critical here. I'm only critiquing it because I feel invested and excited about this codec.
2024-09-28 07:40:26
I have a huge amount of respect for everyone who made this software possible and available to the public, but I am worried that if people compare it to other existing lossy encoders, it could hurt the reputation and future of a format with really really high potential and really thoughtful and good design.
2024-09-28 07:45:29
Worst case scenario is, someone publishes a very memorable and visual comparison that shows lossy jxl in a very bad light compared to competitors. And from that point on, whenever someone suggests implementing support for JXL, people think, "isn't that the format that blurs everything?" and reject it out of hand, despite the fact that the format has lots of untapped potential still on the table.
HCrikki
2024-09-28 07:48:55
common issue in discussions is how everything is intertwined by confused participants. jxl itself doesn't have issues with quality, libjxl is just a reference implementation, and the release from the jxl authors is tuned for quality at all target sizes - one could do a rebuild that discards more detail for a few more savings
Demiurge
2024-09-28 07:49:13
Also, DCT noise (like mosquito noise etc) only looks ugly because it's non-uniform and highly-correlated to the original image signal. The more uniform and uncorrelated the noise is, the better and less noticeable it looks.
HCrikki common issue in discussions is how everything is intertwined by confused participants. jxl itself doesn't have issues with quality, libjxl is just a reference implementation, and the release from the jxl authors is tuned for quality at all target sizes - one could do a rebuild that discards more detail for a few more savings
2024-09-28 07:51:39
Yeah, that's the problem. libjxl is just an example encoder. The format itself is extremely well thought out and has immense power and potential that libjxl leaves on the table.
2024-09-28 07:52:23
But libjxl is the face of the format, and the only publicly-available encoder, so if libjxl produces ugly looking output most people will think that it's a problem with jxl, the format.
2024-09-28 07:52:39
And they won't know that it's a problem that could be fixed and tweaked and tuned.
2024-09-28 07:53:03
Kind of like how there's a huge difference between libtheora 1.1 and 1.2
CrushedAsian255
Demiurge The excessive blurring and smoothing and utter obliteration of image features seems like a really lousy and perplexing choice to make, between adding some extra DCT noise but preserving detail underneath it, or utterly obliterating and smoothing out all noise and detail.
2024-09-28 07:53:54
I think the blurring is from overuse of EPF and Gaborish
2024-09-28 07:54:03
Not just DCT
Demiurge
2024-09-28 07:58:05
I can see why you would think that, but I believe you're wrong.
2024-09-28 07:58:45
I think the high-frequency coefficients are just being completely trashed or zeroed.
2024-09-28 07:58:58
That's what it looks like.
CrushedAsian255
2024-09-28 07:59:32
It’s probably both
Demiurge
2024-09-28 07:59:40
No quantization, just zeroing.
CrushedAsian255
Demiurge No quantization, just zeroing.
2024-09-28 07:59:54
Quantisation weight of infinity?
2024-09-28 08:00:00
That would set it to 0
Demiurge
2024-09-28 08:00:02
Like there's no attempt to even quantize it
CrushedAsian255
2024-09-28 08:00:14
Truncation ?
Demiurge
2024-09-28 08:00:36
That's just what it looks like to me. I don't think epf and gab filters are capable of... complete obliteration.
2024-09-28 08:01:43
They're designed to simulate the effect of a lapped transform and eliminate macroblocking and sharpening effects
_wb_
2024-09-28 08:02:07
Maybe we should just bump up the default intensity target. If libjxl assumes 255 nits but the actual display is 500 nits, it will crush the darks more than it should
Demiurge
_wb_ Maybe we should just bump up the default intensity target. If libjxl assumes 255 nits but the actual display is 500 nits, it will crush the darks more than it should
2024-09-28 08:06:05
I don't have a screen that bright, and I think the brightness of the screen is a bit of a red herring... Maybe it will be a good workaround, but I think the real problem is that the encoder doesn't take into account how the human eye becomes more sensitive and linear in the shadows, when it's allowed to adjust to the low contrast region.
2024-09-28 08:06:44
Maybe that's the real solution though
2024-09-28 08:07:11
The eye is a lot, lot more sensitive to small changes in dark regions than large changes in bright regions...
2024-09-28 08:07:49
That's the whole reason why we have gamma curves instead of linear brightness. That's a form of perceptual quantization.
CrushedAsian255
2024-09-28 08:09:19
Is jxl linear light?
2024-09-28 08:09:34
I think I remember someone saying something about cubics so gamma 3?
_wb_
2024-09-28 08:09:36
XYB is a biased cubic
CrushedAsian255
2024-09-28 08:12:11
So is it left or right wing
Demiurge
2024-09-28 08:15:17
It listens to Joe Rogan
damian101
2024-09-28 08:17:00
btw, does someone here have the math for the luma function of XYB from linear at hand?
2024-09-28 08:17:19
any form is fine, code as well...
_wb_
2024-09-28 08:17:38
```
bias = -0.00379307325527544933;
Lmix = 0.3 * R + 0.622 * G + 0.078 * B - bias;
Mmix = 0.23 * R + 0.692 * G + 0.078 * B - bias;
Smix = 0.24342268924547819 * R + 0.20476744424496821 * G + 0.55180986650955360 * B - bias;
Lgamma = cbrt(Lmix) + cbrt(bias);
Mgamma = cbrt(Mmix) + cbrt(bias);
Sgamma = cbrt(Smix) + cbrt(bias);
X = (Lgamma - Mgamma) / 2;
Y = (Lgamma + Mgamma) / 2;
B = Sgamma;
```
2024-09-28 08:18:09
This is starting with linear RGB with the sRGB primaries
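A quick runnable sketch of the formulas above (my own Python translation for illustration, not libjxl code), starting from linear sRGB in [0, 1]:

```python
import numpy as np

# Sketch of the XYB forward transform posted above (not libjxl code).
BIAS = -0.00379307325527544933
OPSIN = np.array([
    [0.3,                 0.622,               0.078],                 # L weights
    [0.23,                0.692,               0.078],                 # M weights
    [0.24342268924547819, 0.20476744424496821, 0.55180986650955360],  # S weights
])

def linear_srgb_to_xyb(rgb):
    """Map 3 linear sRGB values in [0, 1] to (X, Y, B)."""
    lms = OPSIN @ np.asarray(rgb, dtype=np.float64) - BIAS  # BIAS is negative
    l, m, s = np.cbrt(lms) + np.cbrt(BIAS)
    return (l - m) / 2, (l + m) / 2, s

# Grayscale sanity check: each row of OPSIN sums to 1, so R=G=B gives
# L=M=S, hence X == 0 and B == Y (the chroma channels X and B-Y vanish).
x, y, b = linear_srgb_to_xyb([0.5, 0.5, 0.5])
```

This also makes _wb_'s later point concrete: for gray input the two chroma channels come out as zeroes.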
damian101
2024-09-28 08:19:09
Just to be certain, Y is luma, yes?
_wb_
2024-09-28 08:20:46
Yes
damian101
2024-09-28 08:22:10
Thanks! that'll work...
Demiurge
2024-09-28 08:22:26
Hmm...
damian101
2024-09-28 08:24:02
that is some very simple math compared to ICtCp...
Demiurge
2024-09-28 08:24:09
How did you come up with those multiplication factors?
2024-09-28 08:24:20
Is that from some perceptual study?
2024-09-28 08:24:47
those really long float constants
2024-09-28 08:26:33
It looks like a really lossy transformation too. I'm surprised.
CrushedAsian255
2024-09-28 08:26:37
Don’t the float constants get truncated?
2024-09-28 08:26:55
Can you get more precision using doubles?
Demiurge
2024-09-28 08:27:13
doubles are floats 👀
CrushedAsian255
2024-09-28 08:27:44
I meant float=f32, double=f64
Demiurge
2024-09-28 08:28:23
he never specified how many bits of storage they use, he just wrote a bunch of arithmetic.
CrushedAsian255
2024-09-28 08:28:40
I think I remember it using f32
Tirr
2024-09-28 08:28:57
it's indeed lossy, that's why jxlinfo says it's lossy if xyb is used
Demiurge
2024-09-28 08:28:59
I'm just curious where those constants were derived from...
CrushedAsian255
2024-09-28 08:29:16
Probably psycho visual analysis
_wb_
2024-09-28 08:29:39
It's based on LMS, as you might have guessed from the variable names
Demiurge
2024-09-28 08:29:44
And surprised that such a lossy-looking transformation can be so good at preserving the original image.
2024-09-28 08:30:01
do the constants come from LMS too?
damian101
Demiurge And surprised that such a lossy-looking transformation can be so good at preserving the original image.
2024-09-28 08:30:37
How do you mean lossy-looking? with infinite precision, there shouldn't be any loss, no?
_wb_
2024-09-28 08:30:43
There are several variants of LMS, I dunno which one was used here. XYB comes from PIK 🙂
CrushedAsian255
2024-09-28 08:30:44
Weird that L and M have short constants but S has really long constants
How do you mean lossy-looking? with infinite precision, there shouldn't be any loss, no?
2024-09-28 08:31:08
Probably mean lots of chances of float imprecision
2024-09-28 08:31:53
Cube root sounds like something that could cause float imprecision
_wb_
2024-09-28 08:32:42
I think S was adjusted to satisfy the desirable property that for grayscale, the chroma channels end up being all zeroes
2024-09-28 08:33:11
The chroma channels are X and (B-Y), by the way
Demiurge
How do you mean lossy-looking? with infinite precision, there shouldn't be any loss, no?
2024-09-28 08:33:20
I dunno...? Maybe? It looks like all three RGB channels are being mixed together in a way that is not guaranteed to be reversible at first glance.
CrushedAsian255
_wb_ I think S was adjusted to satisfy the desirable property that for grayscale, the chroma channels end up being all zeroes
2024-09-28 08:34:13
With chroma do you mean B or B-Y?
Demiurge
2024-09-28 08:35:18
and X is L-S right?
CrushedAsian255
Demiurge I dunno...? Maybe? It looks like all three RGB channels are being mixed together in a way that is not guaranteed to be reversible at first glance.
2024-09-28 08:35:20
3 inputs, 3 outputs, should be reversible
Demiurge and X is L-S right?
2024-09-28 08:35:45
Is that also applied in CfL?
Demiurge
2024-09-28 08:36:54
I didn't do the math yet, but... just because there's the same number of outputs, doesn't mean the transformations in between are all mathematically reversible. Especially when addition is involved.
2024-09-28 08:37:43
Adding the weighted values of all three channels together into one sum seems potentially lossy.
CrushedAsian255
2024-09-28 08:38:19
It just looks like YCbCr with extra steps
2024-09-28 08:38:30
And different parameters
Demiurge
2024-09-28 08:38:48
no, I think YCbCr is more directly related to RGB
CrushedAsian255
2024-09-28 08:39:06
This kind of addition looks like matrix vector multiplication
2024-09-28 08:39:19
As long as the matrix is invertible it should be fine
2024-09-28 08:39:35
(excluding the bias thing)
Tirr
2024-09-28 08:40:55
it's lossy with f32, but not that much I think. loss can be reduced further by using FMA operations
CrushedAsian255
2024-09-28 08:41:03
It looks like RGB * matrix = LMS
Then apply bias and gamma to LMS
Then do mid/side to L and M to get X and Y
Each step sounds reversible
Tirr it's lossy with f32, but not that much I think. loss can be reduced further by using FMA operations
2024-09-28 08:41:19
Is that just due to float imprecision?
Demiurge
2024-09-28 08:41:48
I haven't really thought about this type of thing too much. Maybe it is similar to YCbCr
Tirr
2024-09-28 08:42:22
yeah, it's *mathematically* reversible (where you have infinite precision)
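The linear step is easy to check numerically; here is a small sketch (my own code, not libjxl's) using the L/M/S weights from _wb_'s message above:

```python
import numpy as np

# The RGB -> LMS mix in the XYB formulas above is a plain 3x3 matrix;
# a nonzero determinant means that step is mathematically lossless.
OPSIN = np.array([
    [0.3,                 0.622,               0.078],
    [0.23,                0.692,               0.078],
    [0.24342268924547819, 0.20476744424496821, 0.55180986650955360],
])

det = np.linalg.det(OPSIN)       # nonzero -> invertible
OPSIN_INV = np.linalg.inv(OPSIN)

rgb = np.array([0.2, 0.5, 0.7])
roundtrip = OPSIN_INV @ (OPSIN @ rgb)
# The cube root, the bias shifts, and the mid/side step are bijections
# too, so the whole chain is reversible with exact arithmetic; only
# finite float precision loses bits.
```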
CrushedAsian255
2024-09-28 08:43:29
Only thing that confuses me is why isn’t S included in Y calculation?
Tirr
2024-09-28 08:43:57
I think it's because S barely contributes to perceived luminance
CrushedAsian255
2024-09-28 08:44:15
It still does a bit, is it just to make calculation easier ?
2024-09-28 08:44:27
I heard somewhere around 7%
Demiurge
2024-09-28 08:45:51
Keep in mind that when calculating L and M, they weigh all 3 RGB channels together.
CrushedAsian255
2024-09-28 08:46:19
So L and M aren’t exactly the same as human L and M?
Demiurge
2024-09-28 08:46:42
Anyways I guess the reason why I'm impressed is because I haven't looked deeply enough into color conversion matrices.
CrushedAsian255 So L and M aren’t exactly the same as human L and M?
2024-09-28 08:47:01
Human L and M are activated by blue light as well.
2024-09-28 08:47:08
Just not as much
CrushedAsian255
2024-09-28 08:47:33
So it does end up modelling brightness well?
Demiurge
2024-09-28 08:49:14
I assume so? Honestly I am not sure... Deeply saturated blue lights don't look very bright, you're saying?
damian101
Demiurge I dunno...? Maybe? It looks like all three RGB channels are being mixed together in a way that is not guaranteed to be reversible at first glance.
2024-09-28 08:49:19
With enough precision, it should be exactly reversible by simple rounding. But 8-bit to 8-bit will mean information loss. Same is true for almost all colorspaces that aim to be more perceptually relevant. One exception is YCoCg-R, which is used by monitors to be able to perform chroma subsampling with no luma precision loss, and in Display Stream Compression (DSC).
CrushedAsian255
2024-09-28 08:51:00
Is YCoCg-R based on integer math only?
damian101
2024-09-28 08:53:16
YCoCg is defined by a simple matrix conversion from RGB, quite similar to the common YCbCr. How the reversible YCoCg-R implementation is done, idk...
2024-09-28 08:57:54
Btw, if you convert 8-bit RGB to 8-bit full-range YCbCr, then back to 8-bit RGB, by simply rounding to the closest values, you have lost some information in the process. But the same with 10-bit YCbCr as the intermediary, even with limited range, will exactly reverse all the slight color shifting, resulting in no information lost. Tested that experimentally once.
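The first half of that experiment is easy to reproduce; a sketch using full-range BT.601 coefficients and plain rounding (my own code, assumptions as stated):

```python
# Demonstrates the point above: 8-bit RGB -> 8-bit full-range BT.601
# YCbCr -> 8-bit RGB with plain rounding is not a perfect round trip.
def rgb_to_ycbcr(r, g, b):
    y  =         0.299    * r + 0.587    * g + 0.114    * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128.0 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402    * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772    * (cb - 128.0)
    return r, g, b

def clamp8(v):
    return min(255, max(0, round(v)))

def roundtrip8(rgb):
    ycc = tuple(clamp8(v) for v in rgb_to_ycbcr(*rgb))  # quantize to 8 bits
    return tuple(clamp8(v) for v in ycbcr_to_rgb(*ycc))

# Pure gray survives exactly, but saturated colors like (0, 255, 0)
# come back slightly shifted after the 8-bit intermediate.
```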
_wb_
2024-09-28 09:08:47
YCoCg-R is available as an RCT in modular mode. It's RCT 6.
CrushedAsian255
_wb_ YCoCg-R is available as an RCT in modular mode. It's RCT 6.
2024-09-28 09:10:28
Does RCT%6 define the permutation and RCT//6 the actual RCT?
_wb_
2024-09-28 09:11:35
%7 defines the actual RCT, /7 defines the permutation
2024-09-28 09:12:41
> For every pixel position in these channels, the values (A, B, C) are replaced by (V[0], V[1], V[2]) values, as follows:
```
/* rct_type < 42 */
permutation = rct_type Idiv 7;
type = rct_type Umod 7;
if (type == 6) { // YCoCg
  tmp = A - (C >> 1);
  E = C + tmp;
  F = tmp - (B >> 1);
  D = F + B;
} else {
  if (type & 1) C = C + A;
  if ((type >> 1) == 1) B = B + A;
  if ((type >> 1) == 2) B = B + ((A + C) >> 1);
  D = A;
  E = B;
  F = C;
}
V[permutation Umod 3] = D;
V[(permutation+1+(permutation Idiv 3)) Umod 3] = E;
V[(permutation+2-(permutation Idiv 3)) Umod 3] = F;
```
2024-09-28 09:12:54
(this is the decode side, from the spec)
CrushedAsian255
2024-09-28 09:13:36
So 13 is the same as 6 but with permutation ?
_wb_
2024-09-28 09:15:03
Yes, 13, 20, 27, 34, 41 are variants of YCoCg where the channels are shuffled
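The spec snippet above only shows the decode side; pairing it with the standard YCoCg-R lifting forward gives an exact integer round trip, which also answers the earlier question: it's integer math only. A sketch (my own translation, mapping A=Y, B=Co, C=Cg in the spec's type == 6 branch):

```python
# Integer-only YCoCg-R lifting scheme. The inverse mirrors the RCT 6
# decode quoted from the spec above; the round trip is exact.
def ycocg_r_forward(r, g, b):
    co = r - b
    tmp = b + (co >> 1)
    cg = g - tmp
    y = tmp + (cg >> 1)
    return y, co, cg

def ycocg_r_inverse(y, co, cg):
    tmp = y - (cg >> 1)   # same tmp the forward computed
    g = cg + tmp
    b = tmp - (co >> 1)
    r = b + co
    return r, g, b

# Exhaustive check on a coarse grid of 8-bit triples:
for r in range(0, 256, 17):
    for g in range(0, 256, 17):
        for b in range(0, 256, 17):
            assert ycocg_r_inverse(*ycocg_r_forward(r, g, b)) == (r, g, b)
```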
jonnyawsom3
Demiurge I can see why you would think that, but I believe you're wrong.
2024-09-28 09:21:24
This was the best lead so far <https://github.com/libjxl/libjxl/issues/3530#issuecomment-2277884901>
A homosapien
2024-09-28 09:59:47
I agree with Pashifox here, I think there was a change in the encoder itself. I'll do some extra testing with EPF and gaborish disabled.
damian101
_wb_ ``` bias = −0.00379307325527544933; Lmix = 0.3 * R + 0.622 * G + 0.078 * B − bias; Mmix = 0.23 * R + 0.692 * G + 0.078 * B − bias; Smix = 0.24342268924547819 * R + 0.20476744424496821 * G + 0.55180986650955360 * B − bias; Lgamma = cbrt(Lmix) + cbrt(bias); Mgamma = cbrt(Mmix) + cbrt(bias); Sgamma = cbrt(Smix) + cbrt(bias); X = (Lgamma − Mgamma) / 2; Y = (Lgamma + Mgamma) / 2; B = Sgamma; ```
2024-09-28 10:39:19
Looks to me like ``` Y = cbrt(Ylinear - bias) + cbrt(bias); ``` then. That's some surprisingly simple math.
_wb_
2024-09-28 10:44:07
Yes, we wanted to keep the decode side simple (cubing is easy)
Demiurge
2024-09-28 10:55:52
It's a really, really well designed format.
damian101
2024-09-28 10:56:34
the only well-designed video or image format for the modern world, really...
2024-09-28 10:57:37
of those that are in a polished state and have the potential to reach the mainstream, anyway
Demiurge
2024-09-28 10:58:51
If anyone finds any really good images that really showcase regressions or other major problems with lossy quality in jxl, it's good to post them so Jyrki can use them when he's tuning encoder parameters. :)
CrushedAsian255
the only well-designed video or image format for the modern world, really...
2024-09-28 10:59:24
Could jxl technology be extended to a video format?
damian101
CrushedAsian255 Could jxl technology be extended to a video format?
2024-09-28 10:59:44
Some of it, very easily.
Demiurge
2024-09-28 11:00:04
It's in an MP4 container and supports animation, it's already video, what more do you want?
damian101
Some of it, very easily.
2024-09-28 11:00:09
I think I misunderstood the question.
2024-09-28 11:00:36
I meant that some jxl coding technologies could definitely be used in the video world.
CrushedAsian255
2024-09-28 11:00:41
Don’t make me go down a rabbit hole of using JXL as a mezzanine again lol
damian101
2024-09-28 11:00:53
if the JXL image format could be extended to a video format? idk...
CrushedAsian255
I meant that some jxl coding technologies could definitely be used in the video world.
2024-09-28 11:01:03
Like using JXL for I frames ?
_wb_
2024-09-28 11:01:08
JXL for intra frames should work quite well, just add some inter stuff from av1 or whatever other video codec.
CrushedAsian255
_wb_ JXL for intra frames should work quite well, just add some inter stuff from av1 or whatever other video codec.
2024-09-28 11:01:22
This is what I meant
damian101
CrushedAsian255 Like using JXL for I frames ?
2024-09-28 11:01:23
I-frame coding is not what holds video formats back
CrushedAsian255
I-frame coding is not what holds video formats back
2024-09-28 11:01:43
Good inter prediction is more important I’m guessing
_wb_
2024-09-28 11:03:05
Maybe using something like jxl modular for encoding the inter vectors and stuff could help...
damian101
2024-09-28 11:03:20
perceptually optimizing all those coding tools video formats have, in combination with inter coding, is just terribly hard... there is huge potential left
CrushedAsian255
2024-09-28 11:04:09
MPEG XL?
damian101
2024-09-28 11:04:23
modern video encoders slowly get better in small increments, over years, because you don't know how to optimize one component before most of the others are optimized...
2024-09-28 11:08:56
Also, way better metrics are possible I think... and if you have good metrics, you could actually automate a lot of the optimizations...
2024-09-28 11:09:43
even with the pretty good metrics we have today, that should be possible...
Demiurge
2024-09-28 11:16:09
No, from what I've seen, modern metrics are absolutely terrible. It's important to develop improved metrics.
CrushedAsian255
2024-09-28 11:17:08
Don't hate me for this, but maybe AI / ML could help?
2024-09-28 11:18:43
Aren’t modern metrics why JXL is blurring ?
Demiurge
2024-09-28 11:20:09
Yes
damian101
Looks to me like ``` Y = cbrt(Ylinear − bias) + cbrt(bias); ``` then. That's some surprisingly simple math.
2024-09-28 11:20:43
<@794205442175402004> why do I have to add multiplier 1.1829999659281363 for this to work like a regular OETF for range 0.0 to 1.0? I'm pretty sure my math is all correct...
Demiurge
2024-09-28 11:20:56
But you don't need AI to make better metrics.
damian101
CrushedAsian255 Don’t hate me for this but Maybe AI / ML could help?
2024-09-28 11:21:03
Absolutely.
Demiurge
2024-09-28 11:21:42
People have already tried using ML based metrics and they aren't any less complicated to develop and they aren't any better either.
damian101
Demiurge But you don't need AI to make better metrics.
2024-09-28 11:22:39
It sure helps. Even if just to get the right weights to use in a simple metric.
Demiurge
2024-09-28 11:23:16
I think better metrics could be developed through very simple, incremental improvements.
damian101
Demiurge People have already tried using ML based metrics and they aren't any less complicated to develop and they aren't any better either.
2024-09-28 11:23:25
I think the best approach is to use ML to build algorithmically simpler metrics
Demiurge
2024-09-28 11:23:27
Without any over-engineered solutions
2024-09-28 11:23:51
I think frequency analysis could help a lot
damian101
I think the best approach is to use ML tu build algorithmically simpler metrics
2024-09-28 11:24:00
only works if you know what you're doing in the first place, of course
Demiurge
2024-09-28 11:25:01
For the same reasons that DCT codecs look really good perceptually to human vision, I think the same algorithms can be used to compare images in the frequency domain and improve psycho-quality metrics.
damian101
2024-09-28 11:26:16
I think visual energy is a quite relevant but rarely measured thing...
Demiurge
2024-09-28 11:26:50
Instead of starting with a good metric (which doesn't exist) and trying to make a codec from it, how about start with a good codec (which does exist) and build a metric from that.
damian101
2024-09-28 11:27:28
those encoders are just good because they use good hand-tuned heuristics
2024-09-28 11:27:47
but that method doesn't work as well for high complexity formats
Demiurge
2024-09-28 11:28:15
I heard the x264 project had some of the best psychovisual tuning and models in the whole field of lossy images
damian101
2024-09-28 11:28:53
yes, and some of it carried over quite nicely to x265...
Demiurge
2024-09-28 11:29:02
Maybe some of it can be adapted into an image comparison/fidelity metric
damian101
yes, and some of it carried over quite nicely to x265...
2024-09-28 11:29:16
but I doubt it will to x266 not that easily anyway
Demiurge
2024-09-28 11:30:21
Really, there's a lot of value in being able to automatically detect whether a change is ugly and destructive, or hard to notice.
_wb_
<@794205442175402004> why do I have to add multiplier 1.1829999659281363 for this to work like a regular OETF for range 0.0 to 1.0? I'm pretty sure my math is all correct...
2024-09-28 11:30:58
Generally in the design of XYB, constant multipliers were not considered important since those can be assimilated into the quantization anyway. So the range is kind of meaningless.
damian101
_wb_ Generally in the design of XYB, constant multipliers were not considered important since those can be assimilated into the quantization anyway. So the range is kind of meaningless.
2024-09-28 11:31:15
ah, good to know
Demiurge
but that method doesn't work as well for high complexity formats
2024-09-28 11:32:16
jxl is not high complexity compared to x264 :)
damian101
2024-09-28 11:32:51
yes, because it doesn't have inter coding
_wb_
2024-09-28 11:34:33
Inherently, inter tools make things a lot harder because not all frames are encoded in the same way.
lonjil
_wb_ Maybe we should just bump up the default intensity target. If libjxl assumes 255 nits but the actual display is 500 nits, it will crush the darks more than it should
2024-09-28 11:39:07
My monitor has a maximum brightness of 250 nits, the brightness slider is set to 60%, and I thought the dark area artefacts in the image posted above were quite noticeable.
damian101
2024-09-28 11:41:27
<@794205442175402004> Ok, this surprises me a lot: https://www.desmos.com/calculator/rcgeqgq9lv To me it looks like XYB would be more perceptually uniform if more of its luma range covered the lower luminance range...
2024-09-28 11:45:51
updated the link to change the colors
2024-09-28 11:46:03
red is XYB luma now
Demiurge
lonjil My monitor has a maximum brightness of 250 nits, the brightness slider is set to 60%, and I thought the dark area artefacts in the image posted above were quite noticeable.
2024-09-28 11:47:20
exactly
2024-09-28 11:47:53
something's wrong here
damian101
2024-09-28 11:48:08
lol, these posts go together very well
Demiurge
2024-09-28 11:48:10
and you don't need a high intensity monitor to see it
2024-09-28 11:49:05
something's been very very wrong with the way libjxl destroys dark regions of images, for a long time now
lonjil
2024-09-28 11:49:32
Does intensity target even make sense? If I display an image at 1000 nits, but I'm outdoors and my eyes are daylight adjusted, I won't see any dark area details. But if I'm indoors and display the image at 150 nits, I'll easily be able to see dark area details.
Demiurge
2024-09-28 11:49:44
Maybe setting intensity target will fix it, but it has nothing to do with people with bright monitors.
damian101
lonjil Does intensity target even make sense? If I display an image at 1000 nits, but I'm outdoors and my eyes are daylight adjusted, I won't see any dark area details. But if I'm indoors and display the image at 150 nits, I'll easily be able to see dark area details.
2024-09-28 11:52:40
Yeah, contrast is most important...
2024-09-28 11:52:59
Bright areas in a frame mask dark areas.
2024-09-28 11:57:37
That's why it's mostly in videos where people complain about dark detail being destroyed. Still images usually use the full dynamic range available, while movies often do not.
2024-09-28 12:00:13
Does this mean that the best approach would be to use a perceptually uniform colorspace, but *lower* quality of dark areas if bright areas are present? Or is it better to just use a colorspace that is not quite perceptually uniform for luma, to compensate for this effect?
2024-09-28 12:01:54
I definitely was often surprised that my JXL images had lower quality in the dark regions, which I assumed would be a thing of the past with encoding in a perceptually uniform colorspace...
Demiurge
Does this mean that the best approach would be to use a perceptually uniform colorspace, but *lower* quality of dark areas if bright areas are present? Or is it better to just use a colorspace that is not quite perceptually uniform for luma, to compensate for this effect?
2024-09-28 12:02:27
Bright areas being present doesn't stop the eye from adjusting to dark areas when looking at the dark area.
2024-09-28 12:02:44
If the bright area is not overlapping the dark area.
damian101
2024-09-28 12:03:03
the effect of the bright area decreases with its distance on the retina to the focus area...
_wb_
2024-09-28 12:03:39
Maybe we should adjust things to take into account adaptation to local average brightness
Demiurge
2024-09-28 12:04:27
In a picture, where you can crop, or just look at a different part of the screen, your eye will still adjust to the dark regions as long as they aren't overlapping with bright regions in the same area. As long as there's enough dark pixels in one area.
_wb_
2024-09-28 12:04:29
Which effort settings are affected by the problem? All?
Demiurge
2024-09-28 12:04:36
in a contiguous region
_wb_ Which effort settings are affected by the problem? All?
2024-09-28 12:05:18
Well, definitely d1 and d2
lonjil
2024-09-28 12:05:31
those are distance settings :p
Demiurge
2024-09-28 12:05:38
Oh lol
2024-09-28 12:05:41
e7
2024-09-28 12:05:47
I always use the default effort for lossy
2024-09-28 12:06:19
I don't bother testing other effort settings for lossy
2024-09-28 12:06:29
effort is really a lossless thing
_wb_
2024-09-28 12:06:35
Could be e6 or e8 are better or worse in this regard, that would help give a clue what to change
Demiurge
2024-09-28 12:07:00
good idea
_wb_
2024-09-28 12:07:05
Effort makes it use different heuristics so it has big impact on quality
Demiurge
2024-09-28 12:07:59
I thought only e8 and above uses different heuristics
2024-09-28 12:08:26
But not in a significant way
2024-09-28 12:12:12
I didn't know. Sounds like it would be a good thing to test...
username
Demiurge I thought only e8 and above uses different heuristics
2024-09-28 12:15:10
no? lossy e8 and above are just e7 but with way more butteraugli iterations. wait what did you think the lower values did then? also this is relevant even though it's a little outdated https://github.com/libjxl/libjxl/blob/main/doc/encode_effort.md
Demiurge
username no? lossy e8 and above are just e7 but with way more butteraugli iterations. wait what did you think the lower values did then? also this is relevant even though it's a little outdated https://github.com/libjxl/libjxl/blob/main/doc/encode_effort.md
2024-09-28 12:16:14
I thought the lower values just used less features like patches and less other fancy things like epf and gab
2024-09-28 12:16:54
And if you go low enough it uses fixed size dct but idk if that really actually matters at all as far as psy heuristics go
2024-09-28 12:38:25
I can say that there is a slight change when going from e4 to e5
2024-09-28 12:38:43
And there is another slight change when going from e7 to e8
lonjil
_wb_ Which effort settings are affected by the problem? All?
2024-09-28 12:41:42
I've never had this problem myself, though I've seen examples others have posted, so I don't have a good test image handy, but I did try it with a random image I have.
d2 e3: whole image denoised. light areas don't have any obvious artifacts. dark areas have blockiness.
d2 e7: light area noise mostly retained. dark area noise mostly removed. light areas are close to visually lossless even with my face pressed to the screen. dark areas are not blocky, but *some* textures have been severely smoothed.
Both files were about 1.2 MiB, the e3 one 3% smaller than the e7 one. More testing needed to figure out how large the dark areas need to be for the artifacting to be noticeable.
Incidentally, I noticed that at d1 e3, it was quite good at removing most sensor noise without affecting any real details.
Demiurge
2024-09-28 12:44:03
That doesn't sound perceptually uniform if it's so obvious that light and dark areas are affected differently.
lonjil
2024-09-28 12:44:10
here is my test image if someone else wants to evaluate it
Demiurge
2024-09-28 12:44:19
wow that's a great image
2024-09-28 12:46:32
Might be better if it were resized too
lonjil
Demiurge That doesn't sound perceptually uniform if it's so obvious how light and dark areas are affected so differently.
2024-09-28 12:47:00
well, I don't know how obvious it is. I was using the jxl repo's image compare tool at 1:1 zoom, which since the image is large, meant that I only saw portions of it at the same time. But if this means that an image that is *all* dark, or mostly dark, or something, would get the same artifacting, then that is a problem yeah.
Demiurge wow that's a great image
2024-09-28 12:47:31
I took it on vacation a few years ago. Seemed good since it has a larger dark area on the left, and a smaller dark area on the right with light on both sides of it.
Demiurge
2024-09-28 12:47:57
I wish I had some good images of statues, especially statues made out of dark materials.
RaveSteel
Demiurge I wish I had some good images of statues, especially statues made out of dark materials.
2024-09-28 12:51:52
What sort of statues are you looking for? I have a few images
2024-09-28 12:52:35
Darkest material is stone though
damian101
<@794205442175402004> Ok, this surprises me a lot: https://www.desmos.com/calculator/rcgeqgq9lv To me it looks like XYB would be more perceptually uniform if more of its luma range would cover the lower luminance range...
2024-09-28 01:07:15
better to look at the derivatives to compare biases for different brightness levels: https://www.desmos.com/calculator/fkrquovbut
Demiurge
2024-09-28 01:08:07
Preferably pictures of statues that do a good job capturing their grit and texture.
CrushedAsian255
2024-09-28 01:13:20
I’m in Türkiye so might be able to get some
RaveSteel
Demiurge Preferably pictures of statues that do a good job capturing their grit and texture.
2024-09-28 01:13:43
I have a lot of images showing texture etc, but not of statues. Does it need to be one?
CrushedAsian255
2024-09-28 01:14:56
Not really what you’re thinking but this image might be good for testing
2024-09-28 01:15:00
2024-09-28 01:15:27
I can send DNG if it’s good
Demiurge
RaveSteel I have a lot of images showing texture etc, but not of statues. Does it need to be one?
2024-09-28 01:24:07
As long as there's a lot of fine texture and the image does a good job conveying it, like it makes you want to reach out and touch :)
CrushedAsian255
2024-09-28 01:25:05
Maybe...? A lot of smooth uniform surfaces though
CrushedAsian255
2024-09-28 01:25:31
What exactly are you looking for so I can keep my eye out
Demiurge As long as there's a lot of fine texture and the image does a good job conveying it, like it makes you want to reach out and touch :)
2024-09-28 02:25:54
This?
Demiurge
2024-09-28 03:06:10
Yes, actually. Exactly. But it has EXTREME blockiness from jpeg compression
2024-09-28 03:07:03
A version without the JPEG artifacts would be good
2024-09-28 03:08:17
I heard that stone is great for stress testing codecs. Especially different colors and textures and shading.
2024-09-28 03:08:46
As long as it's a very clear photo with no egregious pre-existing artifacts...
CrushedAsian255
2024-09-28 03:08:49
It is a DNG
2024-09-28 03:09:08
Shouldn’t have artifacts
2024-09-28 03:09:18
I took it on iPhone 14 Pro Max
2024-09-28 03:09:28
Maybe discord squished it
username
2024-09-28 03:09:58
Discord mobile reencodes stuff at a lower quality when you send files
2024-09-28 03:10:04
there might be a way to turn that off
Demiurge
2024-09-28 03:11:05
Yeah, send as lossless JXL to avoid getting mangled :)
2024-09-28 03:11:16
discord doesn't know how to mangle jxl files
RaveSteel
2024-09-28 03:11:44
Discord limit is 10MB, a bit low for lossless 😞
CrushedAsian255
2024-09-28 03:12:34
We transfer?
Demiurge
2024-09-28 03:14:51
Resize it until it's 10mb
2024-09-28 03:14:59
it doesn't need to be super large.
2024-09-28 03:15:47
Just resize the image using a linear light resampler
RaveSteel
2024-09-28 03:16:05
Maybe open a thread in <#803645746661425173> so that we are not spamming images into this channel
CrushedAsian255
2024-09-28 03:16:32
https://we.tl/t-rlh5jDTAfJ
Demiurge
2024-09-28 03:16:46
lanczos is supposed to be good
CrushedAsian255
2024-09-28 03:16:57
I’m on my phone so can’t do fancy transforms
Demiurge
RaveSteel Maybe open a thread in <#803645746661425173> so that we are not spamming images into this channel
2024-09-28 03:17:05
good idea
CrushedAsian255
2024-09-28 03:17:16
Once done someone ping me
damian101
better to look at the derivatives to compare biases for different brightness levels: https://www.desmos.com/calculator/fkrquovbut
2024-09-28 05:26:53
added OKLab https://www.desmos.com/calculator/ivybwuhfhw
<@794205442175402004> Ok, this surprises me a lot: https://www.desmos.com/calculator/rcgeqgq9lv To me it looks like XYB would be more perceptually uniform if more of its luma range would cover the lower luminance range...
2024-09-28 08:58:52
<@794205442175402004> Might the bias value be variable? Does it change with intensity target? (if it doesn't it should) With the given `bias = −0.00379307325527544933`, XYB seems only good for something with very low peak brightness...
2024-09-28 08:59:53
If it does change with peak brightness, I would be very interested in the math for the relationship between intensity target and bias variable...
_wb_
2024-09-28 09:24:00
These constants can be signaled but current libjxl doesn't use anything but the defaults
Quackdoc
added OKLab https://www.desmos.com/calculator/ivybwuhfhw
2024-09-28 10:07:00
now do EOTFs [av1_dogelol](https://cdn.discordapp.com/emojis/867794291652558888.webp?size=48&quality=lossless&name=av1_dogelol)
lonjil
_wb_ ``` bias = −0.00379307325527544933; Lmix = 0.3 * R + 0.622 * G + 0.078 * B − bias; Mmix = 0.23 * R + 0.692 * G + 0.078 * B − bias; Smix = 0.24342268924547819 * R + 0.20476744424496821 * G + 0.55180986650955360 * B − bias; Lgamma = cbrt(Lmix) + cbrt(bias); Mgamma = cbrt(Mmix) + cbrt(bias); Sgamma = cbrt(Smix) + cbrt(bias); X = (Lgamma − Mgamma) / 2; Y = (Lgamma + Mgamma) / 2; B = Sgamma; ```
2024-09-28 10:29:08
What exactly does it mean that the chroma channels are X and (B-Y)? Elsewhere I've seen you say something like "B is stored as S - Y". Wouldn't it make more sense to define `B = Sgamma - (Lgamma + Mgamma) / 2`? Or is this subtraction done quite separately from the XYB conversion for some reason?
_wb_
2024-09-29 06:44:07
For some reason it is done as part of chroma from luma...
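For anyone following along, the pseudocode quoted above transcribes almost directly. A minimal sketch, assuming linear sRGB input in [0, 1] and ignoring the B − Y subtraction just discussed; this is not libjxl's actual code:

```rust
// Sketch of the XYB transform from the pseudocode quoted above.
// Assumes linear sRGB input in [0, 1]; not libjxl's actual code path.
fn linear_srgb_to_xyb(r: f64, g: f64, b: f64) -> (f64, f64, f64) {
    let bias: f64 = -0.00379307325527544933;
    let lmix = 0.3 * r + 0.622 * g + 0.078 * b - bias;
    let mmix = 0.23 * r + 0.692 * g + 0.078 * b - bias;
    let smix = 0.24342268924547819 * r + 0.20476744424496821 * g + 0.55180986650955360 * b - bias;
    let lgamma = lmix.cbrt() + bias.cbrt();
    let mgamma = mmix.cbrt() + bias.cbrt();
    let sgamma = smix.cbrt() + bias.cbrt();
    let x = (lgamma - mgamma) / 2.0;
    let y = (lgamma + mgamma) / 2.0;
    // As discussed above, the *stored* B channel ends up as Sgamma - Y via
    // chroma-from-luma; the bare transform just yields Sgamma.
    (x, y, sgamma)
}

fn main() {
    // Grays land on X ~ 0 because the L and M rows both sum to 1.
    let (x, y, _) = linear_srgb_to_xyb(0.5, 0.5, 0.5);
    assert!(x.abs() < 1e-12);
    // The "+ cbrt(bias)" term pins black to exactly Y = 0.
    let (_, y0, _) = linear_srgb_to_xyb(0.0, 0.0, 0.0);
    assert!(y0.abs() < 1e-12);
    println!("gray Y = {y:.6}");
}
```

Because the L and M rows both sum to 1, neutral grays map to X = 0, and the bias term anchors black at Y = 0, which is where the low-luminance behavior being graphed in those Desmos links comes from.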
Demiurge
2024-09-29 08:57:48
idk if libjxl supports a shrink-on-load feature in the API, but theoretically, during progressive decoding for example, when a DC image or lower-resolution image is loaded, is it the same as linear light resampling? Or nonlinear?
_wb_
2024-09-29 09:33:43
Nonlinear. It basically averages values in the colorspace used for encoding, so typically XYB.
Demiurge
2024-09-29 09:42:48
and XYB is cubic right?
jonnyawsom3
Demiurge idk if libjxl supports a shrink-on-load feature in the API, but theoretically, or during progressive decoding for example, and a DC image or lower-resolution image is loaded, is it the same as linear light resampling? Or nonlinear?
2024-09-29 09:54:38
You mean an API option to, for example, progressive load the DC and output a 1:8 resolution image without upsampling?
Demiurge
2024-09-29 10:30:37
Yep. A lot of image codec libraries, even libwebp, have an option to load a smaller version of the image.
2024-09-29 10:31:08
vipsthumbnail uses it
2024-09-29 10:32:30
If libjxl doesn't have it yet it would be a cool thing to add and work into libvips, as well as region-of-interest decoding.
_wb_
You mean an API option to, for example, progressive load the DC and output a 1:8 resolution image without upsampling?
2024-09-29 11:34:39
We originally had that in the API, but removed it in favor of a more generic progressive loading API. But I guess it would be useful to have an option to get passes without upsampling...
lonjil
2024-09-29 11:49:37
Personally if I were making a thumbnailer, I would prefer to get the full picture, convert to linear light, apply a Gaussian blur to prevent moiré, and then downscale. Though if the image is quite large, then getting a smaller pass would be useful just for efficiency. E.g. if I want a 128x128 thumbnail, and the image is 8Ki x 8Ki, then the 1:8 would be plenty.
_wb_
2024-09-29 12:12:16
Doing Gaussian blur with radius corresponding to half a downscaled pixel and then downscaling in linear light is in some way the 'correct' way to do it, but it's also a bit expensive
spider-mario
2024-09-29 12:47:34
with a proper low-pass filter, “downscaling” can be just subsampling, but failing that, averaging (which is equivalent to a box filter) is usually decent enough
_wb_
2024-09-29 12:53:34
Averaging without first blurring a bit can still result in some funky Moiré in some cases
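The gamma-space vs. linear-light averaging difference being discussed can be shown with a toy 1-D example. A sketch assuming a plain 2.2 power curve rather than the exact piecewise sRGB transfer function, and a bare box filter with no anti-moiré pre-blur:

```rust
// Toy demo: box-filter downscaling in linear light vs. directly on
// gamma-encoded values. Uses a plain 2.2 gamma as an approximation of
// sRGB, not the exact piecewise sRGB transfer function.
fn to_linear(v: f64) -> f64 { v.powf(2.2) }
fn to_gamma(v: f64) -> f64 { v.powf(1.0 / 2.2) }

// "Correct" 2:1 downscale: decode to linear, average, re-encode.
fn downscale_linear(px: &[f64]) -> Vec<f64> {
    px.chunks(2)
        .map(|c| to_gamma(c.iter().map(|&v| to_linear(v)).sum::<f64>() / c.len() as f64))
        .collect()
}

// Naive 2:1 downscale: average the gamma-encoded values directly.
fn downscale_gamma(px: &[f64]) -> Vec<f64> {
    px.chunks(2).map(|c| c.iter().sum::<f64>() / c.len() as f64).collect()
}

fn main() {
    // Alternating black/white stripes: the classic case where naive
    // averaging comes out too dark.
    let stripes = [0.0, 1.0, 0.0, 1.0];
    let naive = downscale_gamma(&stripes);    // 0.5 in gamma space
    let correct = downscale_linear(&stripes); // ~0.73: half the *light*
    assert!(correct[0] > naive[0]);
    println!("naive = {:.3}, linear-light = {:.3}", naive[0], correct[0]);
}
```

On the stripes, the naive average lands at code value 0.5 while the linear-light average lands near 0.73, which is what the pattern actually looks like from a distance. A real downscaler would also pre-blur a bit first, as noted, to avoid moiré.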
Demiurge
2024-09-29 01:59:17
That's a moire!
2024-09-29 02:01:02
When the lines on the screen make more lines in between that's a moire!
Quackdoc
2024-09-29 04:12:46
will there be a rust CMS made for jxl-rs or just use LCMS?
CrushedAsian255
2024-09-29 04:19:17
What is Clap?
2024-09-29 04:19:20
https://github.com/libjxl/jxl-rs/commit/d806b8820253ab639525dc526e473d68422ff540
Quackdoc
2024-09-29 04:22:05
arg parsing
Tirr
2024-09-29 04:22:52
it's CLI arg parsing crate also used by jxl-oxide-cli, it's quite handy
Quackdoc
2024-09-29 04:24:35
i've always thought clap was kinda stupid, it's really not that hard to hand roll your own argument parsing, but when I actually started to use it, I never not use clap now lmao
CrushedAsian255
2024-09-29 04:27:17
Is it just rust argparse?
Quackdoc
2024-09-29 04:29:14
it's fairly basic but it's super ergonomic https://docs.rs/clap/latest/clap/
CrushedAsian255
2024-09-29 04:35:44
Is that a good documentation site ?
Quackdoc
2024-09-29 04:36:08
it's where most Rust crates' documentation is
2024-09-29 04:36:23
some have their own site, but docs.rs is fairly standard
CrushedAsian255
2024-09-29 05:29:08
Does ARM even support SIMD?
lonjil
2024-09-29 05:29:45
of course
Quackdoc
2024-09-29 05:30:29
will jxl-rs be nostd compatible [Hmm](https://cdn.discordapp.com/emojis/1113499891314991275.webp?size=48&quality=lossless&name=Hmm)
2024-09-29 05:31:08
it could be neat to pull that off since it would enable putting JXL decode on embedded stuff, not that it would be super duper useful that I know of xD
CrushedAsian255
2024-09-29 05:32:04
Does nostd mean not requiring linking to the Rust standard library?
Quackdoc it could be neat to pull that off since it would enable putting JXL decode on embedded stuff, not that it would be super duper useful that I know of xD
2024-09-29 05:32:22
JXL decode on 6502 anyone?
Quackdoc
CrushedAsian255 Is nostd mean not requiring linking to the rust standard library?
2024-09-29 05:33:49
yeah, it doesn't mean everything has to be nostd; things like CMS, CLI, etc. aren't needed for embedded, so those can rely on std. But it would be nice if the core decode stuff was nostd. It would work on stuff like Arduinos, Pi Picos, etc.
CrushedAsian255
2024-09-29 05:34:19
What is required for nostd ?
Quackdoc
2024-09-29 05:34:54
rather than that question, it's what stuff requires std; you lose a number of built-in functions by using nostd
2024-09-29 05:35:09
https://docs.rust-embedded.org/book/intro/no-std.html
CrushedAsian255
2024-09-29 05:37:13
Pure ASM jxl decoder anyone?
Quackdoc
2024-09-29 05:37:35
also for a nostd CMS, that would be dope, I was playing around with stuff like basic matrix multip and doing stuff like EOTFs and OETFs in nostd rust because of https://rust-gpu.github.io/
veluca
2024-09-29 05:43:17
eh, it would be a bit of a pain, all the threading stuff is in std iirc
CrushedAsian255
2024-09-29 05:45:10
Maybe a specific nostd feature that disables threading
2024-09-29 05:45:27
Is there any way to thread without std ?
Tirr
2024-09-29 05:51:50
threading is a platform feature, and std is a bunch of abstractions over platform features. so you can *kinda* do threading without std, but you should write all the tedious and unsafe things (e.g. link to pthreads and use it, or maybe call `clone(2)` directly if you're on Linux) to do it
2024-09-29 06:00:40
the feature is always there even without std, you just don't have battle-tested safe abstraction over that
Demiurge
CrushedAsian255 Pure ASM jxl decoder anyone?
2024-09-29 06:21:46
that's what jxl-rs is probably gunna be :)
CrushedAsian255
2024-09-29 06:22:37
Can rust use inline ASM? Wouldn’t that violate safety principles?
2024-09-29 06:22:47
I’m guessing unsafe block?
Quackdoc
2024-09-29 06:23:42
if you can abstract threading away from the functions themselves that would be workable. I believe rayon can handle falling back to no threading in the case of nostd, and iirc that's what jxl-oxide does right?
lonjil
Demiurge that's what jxl-rs is probably gunna be :)
2024-09-29 06:24:06
unlikely
2024-09-29 06:24:55
Presumably it'll either use Rust's WIP portable-simd, or a hypothetical highway-rs
Demiurge
2024-09-29 06:28:21
If it's anything like https://github.com/google/rbrotli-enc then it's going to be crazy fast and efficient and basically only use Rust as a wrapper/glue/meme
2024-09-29 06:32:27
Rust has inline intrinsics. It isn't portable, but you can use CPU instructions from rust code just like it was assembly with a rust wrapper.
2024-09-29 06:33:10
this brotli encoder is even faster than the reference encoder.
2024-09-29 06:33:25
I'm surprised it's impossible to find by searching for it
veluca
lonjil Presumably it'll either use Rust's WIP portable-simd, or a hypothetical highway-rs
2024-09-29 06:40:55
probably the second one 😉
lonjil
2024-09-29 06:41:38
yes 😄
Demiurge
2024-09-29 06:44:14
Hmm. I just learned that the rust standard library, including println, can panic/abort and that it's considered normal and not a bad practice in the Rust community apparently.
2024-09-29 06:46:35
And it also aborts whenever a malloc fails too.
yoochan
veluca probably the second one 😉
2024-09-29 06:48:33
how many stoves are you cooking on at the same time! you have commits everywhere 😄
Quackdoc
Demiurge Hmm. I just learned that the rust standard library, including println, can panic/abort and that it's considered normal and not a bad practice in the Rust community apparently.
2024-09-29 06:52:17
what's normal? if it panics something is very wrong
CrushedAsian255
2024-09-29 06:52:48
As in it’s not considered weird that it can do that
2024-09-29 06:53:18
Only thing I don’t particularly like about rust is its panic system / no way to tell if a function panics by signature
Demiurge
2024-09-29 06:53:32
All the merits/advantages of rust seem kinda pointless to me if it's difficult or impossible to respond to error conditions without crashing, even in library code.
CrushedAsian255
2024-09-29 06:53:38
It would be nice to “catch” panics but that’s kinda against the whole philosophy
lonjil
2024-09-29 06:54:17
but you can catch panics
2024-09-29 06:55:12
if a request handler in my web server panics I get a neat little error message on stderr and the server continues to run
Demiurge
lonjil but you can catch panics
2024-09-29 06:55:37
You mean with a separate watchdog process?
lonjil
2024-09-29 06:55:41
no
Quackdoc
2024-09-29 06:56:24
I extremely rarely see rust panic unless I explicitly make it
lonjil
Demiurge You mean with a separate watchdog process?
2024-09-29 06:57:19
https://doc.rust-lang.org/stable/std/panic/fn.catch_unwind.html
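A minimal sketch of that pattern, with a hypothetical `handler` standing in for a request handler:

```rust
use std::panic;

// A hypothetical request handler that panics, here via an
// out-of-bounds index on an empty Vec.
fn handler() -> usize {
    let headers: Vec<usize> = Vec::new();
    headers[3] // index out of bounds: panics at runtime
}

fn main() {
    // catch_unwind stops the unwind at this boundary, so the "server"
    // can log the failure and keep serving other requests.
    let result = panic::catch_unwind(|| handler());
    assert!(result.is_err());
    println!("handler panicked, but we're still running");
}
```

Two caveats: the panic message still goes to stderr unless you install a custom hook with `std::panic::set_hook`, and none of this works when the crate is compiled with `panic = "abort"`, which turns every panic into a process abort.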
Demiurge
2024-09-29 06:59:41
Is this new? I heard that there's no way to stop a panic.
2024-09-29 07:00:08
Maybe it's a recent change in response to Linus
lonjil
2024-09-29 07:01:23
2016
2024-09-29 07:01:59
but, as noted, if you need real error handling, it's better to return a Result, than to panic
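For contrast, the Result-first style looks like this (a sketch; `parse_port` is a made-up example function):

```rust
use std::num::ParseIntError;

// Returning a Result puts the failure mode in the signature and lets
// the caller decide; a panicking version would hide it.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    // Handle the error explicitly...
    match parse_port("8080") {
        Ok(p) => println!("port {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
    // ...or treat failure as unrecoverable (or a TODO) with unwrap(),
    // which panics if the result is Err.
    assert_eq!(parse_port("443").unwrap(), 443);
    assert!(parse_port("not-a-port").is_err());
}
```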
Quackdoc
2024-09-29 07:03:45
the panics that rust was talking about were more or less placeholders, Many people, myself included, treat stuff like unwrap() and whatnot as TODOs
Foxtrot
2024-09-29 07:09:37
From what I read rust program should only panic if it's unrecoverable.
Demiurge
2024-09-29 07:23:50
or if there's floating point math, or hidden memory allocations...?
lonjil
2024-09-29 07:24:32
99.99% of programs treat allocation failures as unrecoverable
2024-09-29 07:27:43
and I'm not sure what you're referring to w.r.t. floating point math?
veluca
2024-09-29 07:28:01
panics are closer to exceptions than to aborts 😉
Quackdoc
2024-09-29 07:33:32
anything can panic, rust just makes it far less likely to happen, but if std is panicking, something is very wrong with either stdlib, or you are abusing it in a way that could make it into the next book of shades of whatever
Demiurge
lonjil 99.99% of programs treat allocation failures are unrecoverable
2024-09-29 07:37:22
you're supposed to check the return value of malloc in C, when it makes sense to handle allocation failure. Like in library code.
AccessViolation_
2024-09-29 07:38:34
panics aren't the error handling primitive, the `Result` type is. If the `Result` is `Err` rather than `Ok`, the error is handled immediately or propagated to be handled elsewhere. One way of "handling" errors is calling `unwrap()` on `Result` types, which panics if the result is `Err`. There are also lints against using `Result` in ways that can panic. So not knowing whether a function panics isn't really an issue, since usually that function would return a `Result`
Demiurge
2024-09-29 07:39:04
It's nice that C doesn't prevent you from making your code as safe as you want it to be.
lonjil and I'm not sure what you're referring to w.r.t. floating point math?
2024-09-29 07:39:58
I was just reading this...
> With the main point of Rust being safety, there is no way I will ever accept "panic dynamically" (whether due to out-of-memory or due to anything else - I also reacted to the "floating point use causes dynamic panics") as a feature in the Rust model.
>
> Linus
lonjil
2024-09-29 07:40:40
the only thing I can find is that the clamp function panics if max < min
AccessViolation_
2024-09-29 07:42:42
Iirc that post was very specific to kernel development, e.g. when something fucks up in the kernel you often don't want to panic, you want to continue on with the incorrect state. I also imagine Rust for Linux doesn't use `std` but `core` instead? I'm not actually sure about that though
lonjil
2024-09-29 07:42:42
as for situations where you need to handle allocation failure, that's why Rust has the `try_` family of allocating functions these days
AccessViolation_ Iirc that post was very specific to kernel development, e.g. when something fucks up in the kernel you often don't want to panic, you want to continue on with the incorrect state. I also imagine Rust for Linux doesn't use `std` but `core` instead? I'm not actually sure about that though
2024-09-29 07:43:06
they use the `alloc` crate
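The `try_` family is on stable for some of the basics, e.g. `Vec::try_reserve`. A sketch:

```rust
fn main() {
    let mut buf: Vec<u8> = Vec::new();
    // try_reserve returns Err(TryReserveError) instead of aborting the
    // process when the allocator can't satisfy the request.
    match buf.try_reserve(4096) {
        Ok(()) => println!("reserved {} bytes", buf.capacity()),
        Err(e) => eprintln!("allocation failed: {e}"),
    }
    // An absurdly large request fails gracefully rather than aborting:
    assert!(buf.try_reserve(usize::MAX / 2).is_err());
}
```

This is the escape hatch for library code that genuinely needs to survive allocation failure, which is exactly the kernel use case being discussed.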
Demiurge
2024-09-29 07:43:19
I think the whole point, or the whole virtue, of Rust is to make it more convenient to write modern software with static guarantees of thread and memory safety, compared to C. I don't think Rust is inherently safer than C; it just makes certain types of safety more convenient, at the cost of other things.
2024-09-29 07:45:18
You can make C as safe as you want, after all. What C lacks is the convenience.
AccessViolation_
2024-09-29 07:46:39
I think Rust is more safe in practice because safe code is guaranteed to be safe. You can write perfectly safe code in C in theory, but in practice you often don't know for sure if your code is safe. So it doesn't make Rust and C equal. If it did, there basically wouldn't have been a reason to create Rust, and there clearly was
Demiurge
2024-09-29 07:49:45
It's possible, but not convenient, to have static safety guarantees in a C project. I think Rust improves on C in some ways by making certain types of safety more convenient, but not all types of safety... I also think the virtues of Rust are overstated and the weaknesses downplayed, like the extremely slow compile times... For some reason a lot of novice programmers get excited when hearing about the safety guarantees and treat the static analyzer as some kind of magical force that will "prevent bugs from being possible".
2024-09-29 07:50:09
Static analysis is cool and definitely the correct direction to be going in, but that doesn't mean Rust does it the best way.
2024-09-29 07:52:10
All that a programming language can do is make it more convenient to develop software. Including understanding possible error conditions and minimizing the amount of time it takes to read and understand and debug where a problem occurs.
2024-09-29 07:52:23
Does rust do the best job of that? Probably not.
AccessViolation_
2024-09-29 07:52:42
Rust is a bit more than static analysis; its memory model is more akin to a proof checker in the sense that it makes certain kinds of unsafety impossible. Compile times are about equal to C++ from what I remember reading.
veluca
2024-09-29 07:53:48
(what's the difference between static analysis and a proof checker? ;))
AccessViolation_
2024-09-29 07:54:05
Proof, mostly :p
2024-09-29 07:54:29
I guess a proof checker is a form of static analysis, but not necessarily the other way around
Demiurge
2024-09-29 07:54:33
I think mozilla's demand to have a rust version of libjxl is an example of this magical thinking with rust.
2024-09-29 07:55:27
I don't think mozilla gets anything out of it being in rust instead of C++
AccessViolation_
Demiurge All that a programming language can do is make it more convenient to develop software. Including understanding possible error conditions and minimizing the amount of time it takes to read and understand and debug where a problem occurs.
2024-09-29 07:55:36
These are all things that Rust is particularly good at, though
Demiurge
2024-09-29 07:56:00
the only real benefit is that it's fresh code with less bitrot
2024-09-29 07:56:35
But C++ vs Rust hardly matters.
AccessViolation_ These are all things that Rust is particularly good at, though
2024-09-29 07:57:15
Well, it probably does a better job than C++.
2024-09-29 07:57:36
Maybe.
2024-09-29 07:58:40
C++ has some advantages being based off C, but Rust has advantages by not being based off anything else.
2024-09-29 07:59:15
C++ is really terrible at reducing the amount of time it takes to read and debug >:(
AccessViolation_
2024-09-29 07:59:21
> All that a programming language can do is make it more convenient to develop software

First-party build system, linting engine, strict and expressive type system, etc.

> Including understanding possible error conditions

If you mean logic errors, the static type system lets you model state accurately, turning would-be runtime errors into compile-time errors. If you mean compiler error messages, rustc treats poor error messages as compiler bugs, which is a real breath of fresh air after using C++.

> and minimizing the amount of time it takes to read and understand and debug where a problem occurs

Once again the expressive type system helps with this. Also, if you notice memory safety issues, you know they must come from one of your unsafe blocks. And Miri can interpret Rust's intermediate representation to detect unsafety or UB in unsafe blocks
Demiurge
2024-09-29 07:59:24
That's probably it's worst sore point
lonjil
Demiurge the only real benefit is that it's fresh code with less bitrot
2024-09-29 07:59:25
new code tends to be buggier than old code, at least in actively maintained codebases
Demiurge
lonjil new code tends to be buggier than old code, at least in actively maintained codebases
2024-09-29 08:00:11
sure, but it also tends to be much easier to understand and work with
lonjil
2024-09-29 08:00:23
The Android team has found that the number of exploitable bugs in their C++ code decays exponentially over time, so most such bugs are in new code.
2024-09-29 08:01:12
When they started writing most new code in Rust, they stopped having memory bugs in (most) new code.
2024-09-29 08:01:39
So after a few years, the rates of exploitable bugs being discovered had gone way down.
AccessViolation_
2024-09-29 08:03:16
Also the Asahi Linux GPU driver written in Rust which has not once had a crash or panic or unsafety that wasn't caused by broken promises from the C code it was bound to - promises which could have been expressed entirely in Rust's type system, causing compile time errors when broken.
Demiurge
AccessViolation_ > All that a programming language can do is make it more convenient to develop software first party build system, linting engine, strict and expressive type system, etc > Including understanding possible error conditions If you mean logic errors then the static type system allows you to model the state accurately leading to compile time errors which would otherwise have been runtime errors. If you mean in terms of compiler error messages, rustc treats poor error messages as compiler bugs, which is a real breath of fresh air after using C++ > and minimizing the amount of time it takes to read and understand and debug where a problem occurs Once again the expressive type system can help with this. Also if you notice memory safety issues, you know it must come from one of your unsafe blocks. Also miri can interpret Rust's intermediate representation to detect unsafety or UB in unsafe blocks
2024-09-29 08:04:08
Hmm... Well the main complaint, other than compile times, is the syntax itself being hard to read and compose.
lonjil So after a few years, the rates of exploitable bugs being discovered had gone way down.
2024-09-29 08:05:11
Yeah, the problem is that C and C++ makes it extremely convenient to make memory errors.
2024-09-29 08:05:34
It's more convenient than ensuring there are no errors
AccessViolation_
Demiurge Hmm... Well the main complaint, other than compile times, is the syntax itself being hard to read and compose.
2024-09-29 08:05:57
Completely fair, it's a personal preference after all
afed
2024-09-29 08:07:33
https://cppalliance.org/vinnie/2024/09/12/Safe-Cpp-Partnership.html
veluca
2024-09-29 08:08:40
as with most things related to C++ and especially C++ language evolution, I'll believe it when I see it...
Demiurge
2024-09-29 08:08:46
Basically a good programming language should make it more convenient and easy to ensure there are no errors, and Rust is a lot better than C for that.
2024-09-29 08:09:02
At least for memory and threading errors.
2024-09-29 08:09:16
And most other errors
2024-09-29 08:09:29
C doesn't really make error handling that easy and convenient.
2024-09-29 08:09:56
It's kind of an afterthought that you need to do extra work for
2024-09-29 08:10:09
A lot of extra work that just adds up
2024-09-29 08:10:24
That's the main fault with modern C
2024-09-29 08:10:42
Yikes I should probably move to off-topic huh?
2024-09-29 08:10:46
Sorry for the rant.
2024-09-29 08:10:55
:)
AccessViolation_
2024-09-29 08:15:36
I think a lot of people who don't write Rust and are just interested observers underestimate how much of a joy it is to write. They seem to see the restrictions as a necessary evil that Rust users accept in exchange for guaranteed memory and thread safety, but the "restrictions" aren't experienced that way by me; I just see them as the language model. It's what writing Rust *is* like, and it's not really that bad. The borrow checker is confusing at first if you expect to be able to throw memory around like in C or C++, but C and C++ *also* have a language model that you need to get used to, even if it doesn't feel like it once you've been working in it for years and years. Tell a Rust user to learn and write C for a month and they'll be begging for proper error messages, and for the compiler to just tell them in advance instead of going `segmentation fault` randomly every so often
2024-09-29 08:20:56
Then there's also people that *really* dislike Rust, to the point where if you address their main criticisms which are based on incorrect knowledge of the language, they will eventually tell you that at least C++ isn't made by "gay furry femboys" (I've actually seen two separate discussions that went pretty much exactly like that, though the insults at the end were different but both equally intolerant). I don't get that type of thinking, if you're making up things so you can dislike it then it might be healthier to just accept that it's not as bad as you want it to be. But I don't think I understand exactly where that way of thinking comes from
2024-09-29 08:22:44
Oh, this isn't <#806898911091753051>
2024-09-29 08:24:18
Just imagine I said JPEG XL instead of Rust and AVIF instead of C <:PepeOK:805388754545934396>
Demiurge
AccessViolation_ Just imagine I said JPEG XL instead of Rust and AVIF instead of C <:PepeOK:805388754545934396>
2024-09-29 08:28:59
Yeah! Me too
Quackdoc
2024-09-29 08:39:06
all of this because I asked about nostd [av1_monkastop](https://cdn.discordapp.com/emojis/720662879367332001.webp?size=48&quality=lossless&name=av1_monkastop)
lonjil
afed https://cppalliance.org/vinnie/2024/09/12/Safe-Cpp-Partnership.html
2024-09-29 08:53:50
Sean Baxter's work is quite good and he's very serious about bringing safety to C++, but unfortunately the C++ committee leadership is hostile to his work. continued at https://discord.com/channels/794206087879852103/806898911091753051/1290056621506039858
CrushedAsian255
2024-09-30 09:22:36
Has anyone tested iPhone HEIF to JXL?
2024-09-30 09:23:02
I know you shouldn’t convert lossy to lossy but a sufficiently high quality setting should minimise generational losses
_wb_
2024-09-30 11:48:37
a sufficiently high quality setting will give you recompressed images that are larger than what you started with 🙂
2024-09-30 11:49:33
lossy to lossy only makes sense if the original is very high quality and the recompressed one is substantially lower quality
CrushedAsian255
2024-09-30 12:21:27
Is iPhone HEIF high quality ?
Meow
2024-09-30 12:33:45
Doubtful. You can convert images to HEIF via the Files app too
2024-09-30 12:34:30
Then you can estimate the quality value by observing the resulting file size
lonjil
2024-09-30 12:40:28
didn't someone report visible seams in an image converted at the maximum quality setting?
CrushedAsian255
lonjil didn't someone report visible seams in an image converted at the maximum quality setting?
2024-09-30 01:42:36
Seams as in tiling?
lonjil
2024-09-30 01:44:26
yes
Demiurge
CrushedAsian255 Is iPhone HEIF high quality ?
2024-09-30 07:28:34
In my experience iphone heic is high bitrate
CrushedAsian255
2024-09-30 07:28:53
Can I convert to d1 without too much problem?
2024-09-30 07:29:10
I gave my phone to someone to take picture and they bumped the ProRAW button
Demiurge
2024-09-30 07:30:53
d=1 is much lower bitrate in my experience
2024-09-30 07:33:10
Yes you can but I'm not that impressed with libjxl lossy in general especially the color degradation. Also I don't know if the "resistance to generation loss" feature is still true anymore in modern versions of libjxl
2024-09-30 07:34:46
Plus for some reason libjxl lossy mangles color profiles? Idk, seems like there's way too many weird problems with current state of lossy in libjxl to me...
2024-09-30 07:36:12
The sad thing is none of these problems are inherent to jxl but just implementation errors in the current libjxl encoder
_wb_
2024-09-30 08:15:17
I am not sure what you're seeing but cjxl is pretty good at handling color space signaling correctly. It's better at it than many viewers, which can lead to colors looking wrong not because cjxl does something wrong but because you were seeing the wrong colors all along...
lonjil
2024-09-30 08:20:44
The only time I've seen color weirdness is when decoding to pfm and then encoding with libjxl-tiny
2024-09-30 08:21:05
Would be convenient if pfm had color information in its minimal header 😄
Demiurge
2024-09-30 08:26:13
Do you have any idea what's going on in this image then when encoding lossless jxl to lossy jxl? https://discord.com/channels/794206087879852103/803645746661425173/1290109913175035905
2024-09-30 08:28:39
Something clearly wrong is happening and it reminds me of existing open issues on the issue tracker about lossy mode making images darker.
2024-09-30 08:29:28
It also reminds me of the bug where djxl decodes lossy images to a linear int format, causing severe clipping
lonjil
2024-09-30 08:34:23
I can't reproduce
2024-09-30 08:34:34
What were the exact steps you took?
2024-09-30 08:34:58
<@1028567873007927297>
Demiurge
lonjil I can't reproduce
2024-09-30 08:41:56
You can't reproduce the weird looking clipping in the wood texture image?
2024-09-30 08:42:06
Or you can't reproduce djxl decoding to linear int format?
2024-09-30 08:43:18
https://github.com/libjxl/libjxl/issues/2147
2024-09-30 08:46:14
Here's a duplicate of the same bug: https://github.com/libjxl/libjxl/issues/2289
lonjil
Demiurge You can't reproduce the weird looking clipping in the wood texture image?
2024-09-30 08:46:18
oh, since you said something about making images darker, and since the last time I checked one of your test images it had a weird brightness shift, I thought that was the issue. I just tried to get weird-looking clipping in the wood texture image and couldn't, but I don't know exactly what you did, so idk if I can reproduce it or not.
damian101
2024-09-30 08:46:22
sRGB was a mistake.
lonjil
2024-09-30 08:47:10
when I looked at your encodings of my test image, one was brighter and the other darker. I haven't experienced that.
Demiurge
lonjil oh, since you said something about making images darker, and since the last time I checked one of your test images, it had a weird brightness shift, I thought that was the issue. I just tried getting a weird looking clipping in the wood texture image and I can't, but I don't know what exactly you did so idk if I can reproduce it or not.
2024-09-30 08:47:12
Well the brightness shift is what I'm referring to when I said clipping. It looks kind of like clipping artifacts.
2024-09-30 08:47:27
I'm using waterfox to view the images
lonjil
2024-09-30 08:48:31
ok, when I use gwenview in *browse* mode, there is a weird difference
2024-09-30 08:48:38
but when I open the images properly, they look identical
2024-09-30 08:48:48
Waterfox probably does something wrong
2024-09-30 08:49:06
And Gwenview in thumbnail browsing mode probably doesn't color manage properly either
Demiurge
lonjil when I looked at your encodings of my test image, one was brighter and the other darker. I haven't experienced that.
2024-09-30 08:49:23
Your test image looks the same brightness to me actually, but the darkest parts of the shadows look blown out, like the shadows are clipping.
2024-09-30 08:49:33
on the lossy version
2024-09-30 08:50:17
In the lossless version I see the texture of the tree bark in the darkest shadows in the image.
2024-09-30 08:50:40
In the lossy version that texture looks posterized
lonjil
Demiurge Your test image looks the same brightness to me actually, but the darkest parts of the shadows look blown out, like the shadows are clipping.
2024-09-30 08:52:43
here is how your files `source_DSC_0282.jxl` and `d1_DSC_0282.jxl` look in the compare_images tool on my computer. Note that *both* of them are too light, the original is darker than both of them.
2024-09-30 08:56:06
here is how the stained planks image looks, original on the left and jpg->lossless jxl->lossy jxl on the right. Quite visible difference.
2024-09-30 08:56:33
but when I fully open the images, they look identical
2024-09-30 08:56:53
So I think it's a color management issue in how Gwenview renders previews
damian101
lonjil here is how the stained planks image looks, original on the left and jpg->lossless jxl->lossy jxl on the right. Quite visible difference.
2024-09-30 08:57:42
I can make the lossy JXL image on the right look like the one on the left by converting from gamma 2.2 transfer to sRGB OETF.
2024-09-30 08:58:45
The source however looks correct when simply decoded as gamma 2.2.
lonjil
2024-09-30 09:00:42
oh that's interesting. the original when properly opened in gwenview, doesn't look like either preview. I wonder what's up with that.
damian101
2024-09-30 09:02:48
huh, right... that screenshot has higher contrast, darker dark areas than it looks on my display
2024-09-30 09:03:26
~~maybe that player decodes using the inverse sRGB OETF~~
Demiurge
2024-09-30 09:03:31
I downscaled the original JPEG and used the iccv4 srgb profile from color.org to convert it to srgb.
2024-09-30 09:03:47
And used that as source_wood.jxl
2024-09-30 09:04:12
Then I used cjxl to convert source_wood.jxl to d1_wood.jxl
2024-09-30 09:04:42
and I'm only comparing those two files, not the original
damian101
~~maybe that player decodes using the inverse sRGB OETF~~
2024-09-30 09:05:01
no, then it would look the opposite, less contrasty, brighter dark areas
2024-09-30 09:06:35
it would, in fact, look like what the lossy JXL decodes to...
2024-09-30 09:10:03
Yes, I can make each look like the other with these Vapoursynth scripts:
```py
import vapoursynth as vs
core = vs.core
c = core.bs.VideoSource("d1_wood.jxl")
c = core.std.SetFrameProps(c, _Transfer=4)
c = core.resize.Point(c, transfer=13, dither_type="error_diffusion", format=vs.RGB24)
c.set_output()
```
```py
import vapoursynth as vs
core = vs.core
c = core.bs.VideoSource("source_wood.jxl")
c = core.std.SetFrameProps(c, _Transfer=13)
c = core.resize.Point(c, transfer=4, dither_type="error_diffusion", format=vs.RGB24)
c.set_output()
```
2024-09-30 09:10:20
transfer 13 is sRGB, transfer 4 is gamma 2.2
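A minimal sketch of the distinction behind those two transfer values (helper names are illustrative, not from VapourSynth or libjxl): the sRGB EOTF defined in IEC 61966-2-1 is piecewise, a linear segment near black followed by a 2.4-exponent curve, while most displays approximate a pure 2.2 power curve. The two diverge most in the shadows, which is exactly where the brightness shifts in these images show up.

```python
def srgb_eotf(v: float) -> float:
    """Piecewise sRGB decode (IEC 61966-2-1): linear toe, then exponent 2.4."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def gamma22_eotf(v: float) -> float:
    """Pure power-law decode, as a typical gamma 2.2 display applies."""
    return v ** 2.2

# Compare decoded (linear) light for a few 8-bit code values.
# Near black the piecewise sRGB curve decodes several times brighter
# than pure 2.2; by mid-grey the two are nearly identical.
for code in (8, 32, 128, 255):
    v = code / 255
    s, g = srgb_eotf(v), gamma22_eotf(v)
    print(f"code {code:3d}: sRGB={s:.5f}  gamma2.2={g:.5f}")
```

So tagging a gamma-2.2-mastered image as sRGB (or vice versa) mostly perturbs the darkest tones, consistent with the shadow posterization described above.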
2024-09-30 09:12:06
Ok, I solved the reason for the discrepancy.
2024-09-30 09:14:03
The lossless JXL decodes with undefined transfer curve tag. The lossy JXL decodes with sRGB transfer curve tag.
2024-09-30 09:14:42
Assuming you have a gamma 2.2 display, which is most likely, the image with undefined transfer curve will always decode correctly.
2024-09-30 09:15:09
While the srgb-tagged version will potentially decode incorrectly if color-managed.
2024-09-30 09:15:33
As I said before, I hate hate hate the sRGB standard for this nonsense.
2024-09-30 09:19:31
I actually wonder whether more "sRGB" images look correct decoded with gamma 2.2 or decoded using the inverse sRGB OETF...
Demiurge
2024-09-30 09:21:12
but why...?
2024-09-30 09:21:34
isn't srgb transfer curve the same as gamma 2.2?
damian101
Demiurge isn't srgb transfer curve the same as gamma 2.2?
2024-09-30 09:21:45
😭
Demiurge
2024-09-30 09:21:53
:(
2024-09-30 09:21:57
I'm confused
2024-09-30 09:22:12
also I think I found another duplicate issue https://github.com/libjxl/libjxl/issues/2550
lonjil
2024-09-30 09:22:19
now explain this. from left to right: d1_wood.jxl, original downscaled with libplacebo and written with ffmpeg to a png file, source_wood.jxl
Demiurge
2024-09-30 09:22:57
Look at the clipping/posterize artifacts in the dark jacket
2024-09-30 09:23:06
It's pretty reminiscent of this
damian101
lonjil now explain this. from left to right: d1_wood.jxl, original downscaled with libplacebo and written with ffmpeg to a png file, source_wood.jxl
2024-09-30 09:24:33
wtf, why does your source_wood.jxl look like that
lonjil
2024-09-30 09:25:17
beats me
2024-09-30 09:25:33
My own personal attempt at encoding it into a lossless jxl doesn't look like that at all
Demiurge
2024-09-30 09:25:53
first time I heard of libplacebo, looks cool
lonjil
2024-09-30 09:25:57
But I didn't do Pashifox's step of downloading the sRGB ICC file from color.org and converting or whatever he did
2024-09-30 09:26:34
aren't untagged jpegs usually interpreted as sRGB after YCbCr -> RGB conversion anyway?
damian101
lonjil aren't untagged jpegs usually interpreted as sRGB after YCbCr -> RGB conversion anyway?
2024-09-30 09:27:37
Yes, but the bigger question is: How is sRGB interpreted after YCbCr to RGB conversion?
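For context on that question, a sketch (function name is illustrative) of the full-range BT.601 conversion that JFIF/JPEG decoders apply: the matrix only maps YCbCr code values to nonlinear R'G'B' code values. Whether those R'G'B' values are then treated as piecewise sRGB or pure gamma 2.2 is a separate, unresolved interpretation step, which is the ambiguity being discussed.

```python
def ycbcr_to_rgb(y: float, cb: float, cr: float) -> tuple[float, float, float]:
    """Full-range BT.601 (JFIF) YCbCr to 8-bit nonlinear R'G'B'."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    # Clamp to the 8-bit range, as decoders do after the matrix.
    clamp = lambda x: max(0.0, min(255.0, x))
    return clamp(r), clamp(g), clamp(b)

print(ycbcr_to_rgb(128, 128, 128))  # neutral grey maps to (128.0, 128.0, 128.0)
```

Note that no transfer curve appears anywhere in this step, so "interpreted as sRGB" is purely a convention applied afterwards.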
Demiurge
2024-09-30 09:27:48
If source_wood.jpg looks funny, that's because I used vips to (linear light) downscale and transform/convert color profile.
2024-09-30 09:28:27
I don't really care how weird or funny source_wood looks
2024-09-30 09:28:33
since I'm using it as a source
damian101
lonjil now explain this. from left to right: d1_wood.jxl, original downscaled with libplacebo and written with ffmpeg to a png file, source_wood.jxl
2024-09-30 09:28:37
oh, is the one on the right source_wood.jpg?
lonjil
Yes, but the bigger question is: How is sRGB interpreted after YCbCr to RGB conversion?
2024-09-30 09:28:43
good question!
oh, is the one on the right source_wood.jpg?
2024-09-30 09:28:46
yeah
damian101
2024-09-30 09:28:51
ah
2024-09-30 09:28:55
then it all makes sense
Demiurge
lonjil now explain this. from left to right: d1_wood.jxl, original downscaled with libplacebo and written with ffmpeg to a png file, source_wood.jxl
2024-09-30 09:29:03
source_wood does not look that washed out to me on waterfox
damian101
2024-09-30 09:30:01
the middle one is source_wood.jxl
lonjil
Yes, but the bigger question is: How is sRGB interpreted after YCbCr to RGB conversion?
2024-09-30 09:30:03
I have no color management and my monitors are not calibrated and I haven't the faintest idea what curves they use
the middle one is source_wood.jxl
2024-09-30 09:30:42
no, the middle one in my screenshot is the actual original file downscaled and converted to png with ffmpeg
damian101
lonjil I have no color management and my monitors are not calibrated and I haven't the faintest idea what curves they use
2024-09-30 09:31:31
It's easy to test, actually. If d1_wood.jxl looks wrong to you, but source_wood.jxl does not, your monitor probably uses gamma 2.2 transfer. Or potentially gamma 2.4, bt.1886.
lonjil
2024-09-30 09:31:54
they both obviously look wrong
damian101
2024-09-30 09:31:59
wut
lonjil
2024-09-30 09:32:10
I mean I can't know what the original intent was
2024-09-30 09:32:20
But they're both quite different from the original file in any viewer