|
Quackdoc
|
2025-01-24 06:52:24
|
python is such a mess. I hate it and everything that uses it
|
|
2025-01-24 06:52:27
|
aur time
|
|
|
Demiurge
|
2025-01-24 07:00:48
|
Python2 is gone
|
|
|
Quackdoc
|
2025-01-24 07:06:03
|
python2 is in the aur still :D
|
|
|
Demiurge
|
2025-01-25 05:22:09
|
The last version is 5 years ago
|
|
2025-01-25 05:24:42
|
It's dead Jim
|
|
|
Quackdoc
|
2025-01-25 05:53:34
|
well, there is a reason why python is detested for making applications
|
|
|
Demiurge
|
2025-01-25 10:14:02
|
I thought it was loved
|
|
2025-01-25 10:14:18
|
It's Python after all
|
|
|
Quackdoc
|
2025-01-25 10:45:18
|
https://tenor.com/view/wheeze-wheeze-laugh-nikkal-kundhal-nikkalkundhal-gif-23223780
|
|
|
Traneptora
|
|
bonnibel
thank you!
|
|
2025-01-30 07:28:30
|
update: it's a known bug still (haasn is busy) but now it has this: https://code.videolan.org/videolan/libplacebo/-/issues/331
|
|
|
gb82
|
|
𐑛𐑦𐑕𐑣𐑸𐑥𐑩𐑯𐑦 | 最不調和の伝播者 | 異議の元素
is there a WASM build of SSIMULACRA2 for use in web browsers and Deno?
|
|
2025-02-16 05:56:16
|
u might be able to compile the C implementation using a wasm target? idk
|
|
|
𐑛𐑦𐑕𐑣𐑸𐑥𐑩𐑯𐑦 | 最不調和の伝播者 | 異議の元素
|
|
gb82
u might be able to compile the C implementation using a wasm target? idk
|
|
2025-02-16 07:03:52
|
I don't want to have Node.js bindings, only vanilla ESM ones
|
|
|
spider-mario
|
2025-02-20 01:47:48
|
(why Élevé and not Élevée?)
|
|
|
_wb_
|
|
spider-mario
(why Élevé and not Élevée?)
|
|
2025-02-20 04:26:34
|
what's the context for this question?
|
|
|
spider-mario
|
2025-02-20 04:27:03
|
<@814115494021300285> 's name
|
|
|
_wb_
|
2025-02-20 04:29:06
|
oh. Right, it's la lumière...
|
|
2025-02-20 04:30:42
|
I misread it as élève tbh
|
|
|
𐑛𐑦𐑕𐑣𐑸𐑥𐑩𐑯𐑦 | 最不調和の伝播者 | 異議の元素
|
|
spider-mario
<@814115494021300285> 's name
|
|
2025-02-20 04:30:55
|
well I'm not a girl, so I purposefully chose that...
|
|
2025-02-20 04:31:26
|
~~honestly you're the first person to ask that at all, wonder why others hadn't yet~~
|
|
|
spider-mario
|
2025-02-20 04:34:35
|
right, but adjectives in French agree with the grammatical gender of the noun they apply to (here, lumière, which is feminine), regardless of the gender of the underlying person that the noun might be referring to
|
|
2025-02-20 04:35:22
|
« une grande personne » refers to a tall person, who might be a man
|
|
2025-02-20 04:36:39
|
(using this example reminds me of https://youtu.be/nc5LaNcTAfU?t=22s)
|
|
|
_wb_
|
2025-02-20 04:39:11
|
This is easy for me to get confused about, since in Dutch and English it's different (although Dutch/English have less grammatical gender than French).
In French you say "sa barbe" even though the owner of the beard is male, while in English you say "his beard" (and in Dutch "zijn baard").
|
|
|
spider-mario
|
2025-02-20 04:40:07
|
oh, yes, only the gender of the possessed thing and not the possessor
|
|
2025-02-20 04:40:15
|
(in German, you need to take _both_ into account…)
|
|
2025-02-20 04:41:07
|
although as far as adjectives are concerned, I believe it’s the same in German as in French
|
|
2025-02-20 04:41:15
|
ein grosses Mädchen
|
|
|
𐑛𐑦𐑕𐑣𐑸𐑥𐑩𐑯𐑦 | 最不調和の伝播者 | 異議の元素
|
|
spider-mario
right, but adjectives in French agree with the grammatical gender of the noun they apply to (here, lumière, which is feminine), regardless of the gender of the underlying person that the noun might be referring to
|
|
2025-02-21 09:21:03
|
~~I'm too tired to correct pronoun usage XD~~
|
|
2025-02-21 09:22:10
|
hence purposefully breaking the grammar to perhaps make the assumption go away, which worked mostly flawlessly so far
|
|
|
gb82
u might be able to compile the C implementation using a wasm target? idk
|
|
2025-02-21 09:24:12
|
wonder if it's viable to use Emscripten, then write my own glue layer to get rid of any Node.js dependency
~~SSIMULACRA2 feels like the current best to both consider speed and result neutrality~~
|
|
|
gb82
|
2025-02-21 10:09:17
|
Woah ur in the fosstoast server lol
|
|
|
AccessViolation_
|
|
_wb_
This is easy for me to get confused about, since in Dutch and English it's different (although Dutch/English have less grammatical gender than French).
In French you say "sa barbe" even though the owner of the beard is male, while in English you say "his beard" (and in Dutch "zijn baard").
|
|
2025-02-21 11:40:32
|
i wish the singular 'hen' in dutch was more commonly used as a gender neutral pronoun (e.g. "hen loopt")
<https://nl.wiktionary.org/wiki/hen>
|
|
2025-02-21 11:42:32
|
it sounds wrong, but i've slowly been adopting it into my vocabulary and i'm getting used to saying it now
|
|
2025-02-21 11:45:38
|
now i'm using it when the gender of the person that's talked about isn't clear. it already sounds passable in cases like "ik ken hen niet" but in "hen is onbekend" it still sounds fairly awkward to me
|
|
2025-02-21 11:47:46
|
gotta love the wrong and the right pronouns both feeling wrong but for different reasons 🥲
|
|
2025-02-21 11:48:24
|
at least in english it works out with 'they'
|
|
|
_wb_
|
2025-02-22 07:54:39
|
"die" is a good gender-neutral or nonbinary pronoun in Dutch. It works without requiring grammatically awkward plural-for-singular constructions. "Ik ken die niet" and "die is onbekend" both sound just fine.
|
|
|
damian101
|
2025-02-22 12:53:20
|
Fucking Romans had to ruin everything with their grammatical genders.
|
|
|
AccessViolation_
|
|
_wb_
"die" is a good gender-neutral or nonbinary pronoun in Dutch. It works without requiring grammatically awkward plural-for-singular constructions. "Ik ken die niet" and "die is onbekend" both sound just fine.
|
|
2025-02-22 02:56:49
|
ohh yeah that's a good one
|
|
2025-02-22 02:56:52
|
never thought of that
|
|
|
Traneptora
|
2025-02-25 10:13:59
|
Hebrew is my favorite because verbs also have gender conjugation
|
|
2025-02-25 10:14:22
|
So past and future tense actually care about gender, number, and person
|
|
2025-02-25 10:15:03
|
and present tense cares about gender and number, like adjectives, but not person
|
|
|
Traneptora
So past and future tense actually care about gender, number, and person
|
|
2025-02-25 10:15:28
|
Unless it's first person. Then gender isn't a thing
|
|
2025-02-25 10:16:07
|
still is in present tho
|
|
|
𐑛𐑦𐑕𐑣𐑸𐑥𐑩𐑯𐑦 | 最不調和の伝播者 | 異議の元素
|
2025-02-26 11:13:52
|
JPEG X Language XD
|
|
2025-02-26 11:14:11
|
btw is jpegli still in a separate repo?
|
|
|
RaveSteel
|
|
damian101
|
|
𐑛𐑦𐑕𐑣𐑸𐑥𐑩𐑯𐑦 | 最不調和の伝播者 | 異議の元素
btw is jpegli still in a separate repo?
|
|
2025-02-26 01:19:00
|
Still?
It's probably gonna stay there, and it still needs to be properly separated from the JXL project...
|
|
|
AccessViolation_
|
|
𐑛𐑦𐑕𐑣𐑸𐑥𐑩𐑯𐑦 | 最不調和の伝播者 | 異議の元素
btw is jpegli still in a separate repo?
|
|
2025-02-26 08:45:20
|
in case you think it was always like that and are suggesting it be merged into the libjxl one: it was originally part of the libjxl repo and was separated into its own thing a little while ago. if you knew this and i misinterpreted your message feel free to ignore :p
|
|
|
𐑛𐑦𐑕𐑣𐑸𐑥𐑩𐑯𐑦 | 最不調和の伝播者 | 異議の元素
|
|
AccessViolation_
in case you think it was always like that and are suggesting it be merged into the libjxl one: it was originally part of the libjxl repo and was separated into its own thing a little while ago. if you knew this and i misinterpreted your message feel free to ignore :p
|
|
2025-02-26 08:46:59
|
~~I know that it was split into a separate repo a while ago~~
|
|
2025-02-26 08:47:23
|
Nixpkgs still hasn't gotten round to packaging `jpegli` as a standalone package yet
|
|
|
jonnyawsom3
|
2025-02-26 09:25:31
|
When encoding floats to JXL, is `--intensity_target` the only way to avoid crushed blacks with cjxl? I suppose there's also `--disable_perceptual_optimizations` but from what I recall it severely increases filesize as expected
|
|
|
Quackdoc
|
2025-02-26 09:28:30
|
messing with intensity_target can royally bung up decoding for users so that would be not great
|
|
|
jonnyawsom3
|
|
damian101
|
|
Quackdoc
messing with intensity_target can royally bung up decoding for users so that would be not great
|
|
2025-02-26 10:48:39
|
Really? Are there broken JXL decoders that don't consider peak intensity metadata and just assume some intensity target when decoding XYB?
|
|
2025-02-26 10:48:54
|
Because otherwise that should be no issue at all as long as source colorspace is SDR.
|
|
2025-02-26 10:49:23
|
Or did something change across decoder versions?
|
|
|
Meow
|
2025-02-27 01:50:37
|
Yet it's much easier to use cjpegli and djpegli via the libjxl release on Windows
|
|
2025-02-27 01:51:44
|
I can't build successfully from the jpegli repo on Windows
|
|
|
Quackdoc
|
|
Really? Are there broken JXL decoders that don't consider peak intensity metadata and just assume some intensity target when decoding XYB?
|
|
2025-02-27 03:46:30
|
im not sure I follow, `intensity_target` in jxl specifies how bright the luminance value of `1.0` is. I don't know how decoders handle this, but each colorspace reasonably expects a general value for it.
if you encode an sRGB image with an intensity_target of `1000`, the decoder should try to make this reasonable when tonemapping during decoding
|
|
2025-02-27 03:46:37
|
imo anyways
|
|
2025-02-27 03:47:42
|
this is doubly important when doing sdr->hdr or hdr->sdr
|
|
2025-02-27 03:49:06
|
i just realized `intensity_target` could have interesting uses when targeting colorspaces like scRGB
|
|
2025-02-27 03:54:15
|
for instance this is jxl-oxide when decoding an image normally, and one with a target_intensity of 2000
|
|
2025-02-27 03:56:18
|
note in Oculante the image is explicitly being requested to tonemap to sRGB
|
|
2025-02-27 03:59:10
|
tho this isn't what I thought it would look like
|
|
|
Tirr
|
2025-02-27 04:00:56
|
maybe jxl-oxide should ignore intensity_target when decoding sdr to sdr (I thought it already skips it though), not sure around intensity_target of sdr transfer
|
|
|
Quackdoc
|
2025-02-27 04:04:45
|
it's hard to say. realistically, SDR and HDR aren't per se all that different here. In fact I would argue using terms like HDR and SDR here is overcomplicating the situation by using poorly defined terms
|
|
2025-02-27 04:05:27
|
it would be ideal for a program to know the "intended reference brightness" of various colorspaces
|
|
2025-02-27 04:06:30
|
that being said, doesn't jxl have a boolean that specifies whether a transfer is relative to display max?
|
|
|
Tirr
|
2025-02-27 04:32:26
|
I vaguely remember there's something like that
|
|
|
_wb_
|
|
When encoding floats to JXL, is `--intensity_target` the only way to avoid crushed blacks with cjxl? I suppose there's also `--disable_perceptual_optimizations` but from what I recall it severely increases filesize as expected
|
|
2025-02-27 07:40:17
|
What's your float workflow? One thing to be careful with is assumptions about transfer functions, e.g. if you just use ImageMagick to convert between PPM and PFM, it will use the sRGB transfer curve for both, while libjxl interprets PPM as sRGB by default and PFM as Linear sRGB by default...
|
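The mismatch described above can be sketched numerically. This toy Python snippet (not libjxl or ImageMagick code, just the IEC 61966-2-1 sRGB curves) shows why linear-light samples that get re-interpreted as sRGB-encoded come out crushed toward black:

```python
def srgb_encode(v: float) -> float:
    """Linear light -> sRGB signal (IEC 61966-2-1)."""
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def srgb_decode(v: float) -> float:
    """sRGB signal -> linear light."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

lin = 0.18  # mid-gray in linear light, as stored in a linear PFM

# properly labelled: encode to sRGB first, decoder linearises it back to ~0.18
correct = srgb_decode(srgb_encode(lin))

# mislabelled: linear data treated as if it were sRGB-encoded gets
# "linearised" a second time and lands near 0.027 -- visibly much darker
too_dark = srgb_decode(lin)
```

The same double-application of a transfer curve is what happens when a PFM written with sRGB-encoded values is read back as linear (or vice versa).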
|
|
jonnyawsom3
|
2025-02-27 10:07:27
|
That would explain why I was getting much darker images in Irfanview. I'm using a 16bit EXR Blender render, converted to PFM with FFMPEG.
Though, I had seen the low quality dark areas on a sample EXR and an 8bit PNG input too. I'll try setting it to sRGB tonight and see what happens
|
|
|
Quackdoc
|
2025-02-27 10:43:40
|
when working with exr I use oiiotool
|
|
2025-02-27 10:43:54
|
or really whenever I need more accurate conversions
|
|
|
Tirr
I vaguely remember there's something like that
|
|
2025-02-27 10:45:16
|
I think if it's true, don't do tone mapping; if false, then do it, since the pixel values would logically be nits
|
|
2025-02-27 10:45:32
|
unless I am misinterpreting it ofc
|
|
|
jonnyawsom3
|
|
Quackdoc
or really whenever I need more accurate conversions
|
|
2025-02-27 10:50:26
|
I need to open an issue for Blender, the render buffer is only f16, so none of the 32bit export options actually work properly. Just increase filesize for 16bit content
|
|
|
Quackdoc
|
|
Traneptora
|
|
That would explain why I was getting much darker images in Irfanview. I'm using a 16bit EXR Blender render, converted to PFM with FFMPEG.
Though, I had seen the low quality dark areas on a sample EXR and an 8bit PNG input too. I'll try setting it to sRGB tonight and see what happens
|
|
2025-03-03 09:47:09
|
ffmpeg doesn't default to linear for pfm, but you can always linearize first
|
|
|
jonnyawsom3
|
2025-03-07 06:28:23
|
Used FFMPEG directly but still got the same issue. As soon as it's a JXL, the image decodes far darker and using lossy doesn't seem to take into account the wider range, so anything dark gets set to a single value instead
|
|
|
Traneptora
|
|
Used FFMPEG directly but still got the same issue. As soon as it's a JXL, the image decodes far darker and using lossy doesn't seem to take into account the wider range, so anything dark gets set to a single value instead
|
|
2025-03-07 10:16:50
|
if the image is darker it may be a TRC issue
|
|
2025-03-07 10:17:02
|
like linear light is being interpreted as sRGB or something
|
|
|
jonnyawsom3
|
2025-03-07 10:56:32
|
It seems to be a few issues compounding. FFMPEG is encoding the EXR files to 16bit int JXLs and IrfanView doesn't color manage above 8bit. Importing the EXR to Krita and saving it as a lossless JXL, then viewing in either Krita or Waterfox works
|
|
2025-03-07 11:01:22
|
Ah... The dithering is a little... Eager, too
|
|
2025-03-07 11:07:09
|
```Input #0, exr_pipe, from 'C:\Users\jonat\Downloads\16F.exr':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: exr, gbrapf32le(unknown/unknown/linear), 2048x2048 [SAR 1:1 DAR 1:1], 25 fps, 25 tbr, 25 tbn
Stream mapping:
Stream #0:0 -> #0:0 (exr (native) -> jpegxl (libjxl))
Press [q] to stop, [?] for help
Output #0, image2, to 'test.jxl':
Metadata:
encoder : Lavf61.9.100
Stream #0:0: Video: jpegxl, rgba64le(pc, gbr/unknown/linear, progressive), 2048x2048 [SAR 1:1 DAR 1:1], q=2-31, 200 kb/s, 25 fps, 25 tbn
JPEG XL image, 2048x2048, lossy, 16-bit RGB+Alpha```
|
|
2025-03-07 11:09:25
|
Main annoyance is PFM doesn't support Alpha, so I can't use the 0.9 cjxl to get the correct bpp
|
|
2025-03-07 11:13:10
|
```JPEG XL encoder v0.11.0 0185fcd [AVX2,SSE2]
Encoding [Modular, lossless, effort: 5]
Compressed to 19394.5 kB (36.992 bpp).
2048 x 2048, 1.274 MP/s [1.27, 1.27], , 1 reps, 16 threads.
PeakPagefileUsage: 448.5 MiB
Wall time: 0 days, 00:00:03.365 (3.37 seconds)
User time: 0 days, 00:00:00.796 (0.80 seconds)
Kernel time: 0 days, 00:00:05.531 (5.53 seconds)
JPEG XL encoder v0.9.1 b8ceae3 [AVX2,SSE4,SSSE3,SSE2]
Encoding [Modular, lossless, effort: 5]
Compressed to 12349.3 kB (23.554 bpp).
2048 x 2048, 0.757 MP/s [0.76, 0.76], 1 reps, 16 threads.
PeakPagefileUsage: 416.6 MiB
Wall time: 0 days, 00:00:05.609 (5.61 seconds)
User time: 0 days, 00:00:01.031 (1.03 seconds)
Kernel time: 0 days, 00:00:24.765 (24.77 seconds)```
|
|
2025-03-07 11:46:47
|
```JPEG XL encoder v0.9.1 b8ceae3 [AVX2,SSE4,SSSE3,SSE2]
Encoding [Modular, lossless, effort: 9]
Compressed to 9010.5 kB (17.186 bpp).
2048 x 2048, 0.006 MP/s [0.01, 0.01], 1 reps, 16 threads.
PeakPagefileUsage: 989.4 MiB
Wall time: 0 days, 00:12:31.794 (751.79 seconds)
User time: 0 days, 00:01:14.406 (74.41 seconds)
Kernel time: 0 days, 00:11:39.718 (699.72 seconds)```
Seems like a bit of tuning for floats could be done. `-g 0` performs best because of group palette
|
|
|
couleur
|
2025-03-27 12:34:28
|
are there tools that try out different image encoders to find out which is best at losslessly compressing a specific image
|
|
2025-03-27 12:34:40
|
from my experience it's not always the same
|
|
2025-03-27 12:34:51
|
when testing around with jxl avif png webp
|
|
|
HCrikki
|
2025-03-27 01:28:24
|
not the same according to what factors, do specific encoders not generate actually lossless images ?
|
|
2025-03-27 01:33:18
|
this? https://github.com/WangXuan95/Image-Compression-Benchmark
|
|
2025-03-27 01:34:17
|
do note it seems to not have the latest versions of codecs (libjxl at 0.9 - 0.10 was a massive improvement), but you could swap in your own
|
|
|
juliobbv
|
2025-04-04 04:43:16
|
so, random question: why is the standalone 🧈 tool called `butteraugli_main` and not just `butteraugli`?
|
|
2025-04-04 04:43:48
|
is it to distinguish google's from libjxl's implementations?
|
|
|
A homosapien
|
2025-04-04 04:55:08
|
afaik libjxl's butteraugli is more updated than google's butteraugli
|
|
|
𝕰𝖒𝖗𝖊
|
2025-04-05 03:35:14
|
google's butteraugli was problematic on my last try
|
|
|
Crite Spranberry
|
|
juliobbv
|
|
A homosapien
afaik libjxl's butteraugli is more updated than google's butteraugli
|
|
2025-04-05 05:07:13
|
Yeah, but I mean, where did the `_main` suffix come from? Any interesting backing story? That's my question basically.
|
|
|
Crite Spranberry
|
2025-04-05 05:07:46
|
_main could just be the branch name
|
|
2025-04-05 05:07:49
|
or well main
|
|
2025-04-05 05:07:52
|
_(branch)
|
|
|
A homosapien
|
2025-04-05 05:11:50
|
Yeah that's probably the most likely story
|
|
|
_wb_
|
2025-04-05 06:21:38
|
I think it's just because it has a main() function, i.e. a standalone command line tool, as opposed to a function used internally in an encoder or benchmarking tool, or a library API.
|
|
|
juliobbv
|
2025-04-06 07:19:44
|
interesting, that makes more sense
|
|
2025-04-06 07:19:59
|
IIRC butter predates the master -> main branch renaming effort
|
|
|
Demiurge
|
2025-04-07 08:43:33
|
Glad effort is being spent where it really matters
|
|
|
Crite Spranberry
|
|
Crite Spranberry
hi
|
|
2025-04-10 05:37:25
|
few steps away from XP
|
|
|
BlueSwordM
|
2025-04-10 03:11:03
|
<@794205442175402004> Have you ever thought of adding intensity_target to ssimulacra2?
I believe it would be a great way to make the metric a lot more flexible in harsher viewing conditions.
|
|
|
damian101
|
|
BlueSwordM
<@794205442175402004> Have you ever thought of adding intensity_target to ssimulacra2?
I believe it would be a great way to make the metric a lot more flexible in harsher viewing conditions.
|
|
2025-04-10 05:51:17
|
if you want to increase intensity_target beyond default, it would need to extrapolate, though, like butteraugli
|
|
2025-04-10 05:51:48
|
for lower you could also just lower exposure...
|
|
2025-04-10 06:10:17
|
I assume the goal is to increase the weight for distortion in dark areas of ssimu2?
|
|
2025-04-10 06:11:13
|
Actually, does ssimu2 even take luminance directly into account? Or is all luminance weighting done by the XYB transfer?
|
|
2025-04-10 06:11:24
|
I think that XYB transfer is modifiable, though.
|
|
2025-04-10 06:20:04
|
XYB luminance is perceptually much closer to sRGB and other traditional SDR transfer functions than to PQ, or Oklab luma:
https://www.desmos.com/calculator/jsseulfqad
|
|
2025-04-10 06:22:25
|
PQ overdoes it, however, because it assumes no environment light.
But still, it is much more perceptually uniform than sRGB or Gamma 2.4 in my experience, I use it for a lot of processing.
|
|
2025-04-10 06:35:12
|
Like, 1% vs 100% luminance means factor ~50 for 100 nits PQ for contrast perception relative to linear light, while for XYB it's only factor ~17.
|
|
2025-04-10 06:42:53
|
Here relative to Gamma 2.2 instead of linear:
https://www.desmos.com/calculator/yanochba5n
|
|
|
_wb_
|
|
I think that XYB transfer is modifiable, though.
|
|
2025-04-10 07:31:09
|
Yes, changing the intensity target has an effect on the effective tf
|
|
2025-04-10 07:33:53
|
The way it is now, it looks like ssimu2 is giving lower scores than it should on HDR images. Also, cjxl has a similar behavior where d1 gives higher quality on HDR input than on SDR images, at least that is how it looks to me.
|
|
2025-04-10 07:34:59
|
I am not sure what ssimu2 does on PQ input, it may be that just like cjxl it sets the intensity target to 10000 automatically
|
|
|
Lumen
|
2025-04-10 07:35:00
|
people complain that ssimu2 gives good score on low luminosity when the image looks really bad
|
|
|
_wb_
|
2025-04-10 07:36:26
|
You mean dark images?
|
|
|
Lumen
|
2025-04-10 07:37:02
|
pretty much yes
|
|
2025-04-10 07:37:08
|
that is why blue was asking I believe
|
|
|
_wb_
|
2025-04-10 07:38:12
|
I think it may assume 255 nits for SDR white, so if you have a dark image but view it on a 500 nits display in a dark room, there will be more artifacts visible than what ssimu2 assumes will be visible
|
|
2025-04-10 07:39:07
|
It's also quite possible that there just weren't enough dark images in the corpora I used to tune ssimu2.
|
|
|
Quackdoc
|
2025-04-10 07:43:46
|
~~ssimu2 custom tunes for luminance when~~
|
|
|
Lumen
|
2025-04-10 07:46:53
|
if someone gives me a formula to plug somewhere
|
|
2025-04-10 07:46:56
|
there can be a vship version fast
|
|
|
damian101
|
|
_wb_
It's also quite possible that there just weren't enough dark images in the corpora I used to tune ssimu2.
|
|
2025-04-10 08:06:55
|
Well-mastered still images usually utilize the full dynamic range, while dark scenes in movies don't. And bright areas make detail in neighbouring dark areas harder to see.
|
|
|
_wb_
|
2025-04-10 08:09:55
|
Right, effectively in a dark movie scene, if it takes long enough, your eyes readapt to a lower overall brightness which makes artifacts much more visible than if the dark frame would be only a part of a larger image that goes brighter in other parts.
|
|
|
Demiurge
|
2025-04-10 09:27:28
|
libjxl also has the same problem of disproportionately blurring and messing up darker regions. https://afontenot.github.io/image-formats-comparison/#pont-de-quebec&AV1=m&JPEGXL=m
|
|
|
spider-mario
|
2025-04-10 10:38:45
|
for a laugh, check out JPEG XR on that image
|
|
2025-04-10 10:39:01
|
(any bitrate)
|
|
|
CrushedAsian255
|
|
_wb_
Yes, changing the intensity target has an effect on the effective tf
|
|
2025-04-11 06:43:19
|
does intensity target just modify the psychovisual modelling?
|
|
|
damian101
|
|
CrushedAsian255
does intensity target just modify the psychovisual modelling?
|
|
2025-04-11 10:40:51
|
I believe for butteraugli it just changes at what peak brightness content is mapped into LMS colorspace.
|
|
|
spider-mario
|
2025-04-11 01:45:34
|
right, an sRGB image with `[(0, 0, 0), (118, 118, 118), (255, 255, 255)]` will be mapped to luminances of `[0, 18, 100]` if intensity_target is 100, or to `[0, 36, 200]` if it’s 200
|
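That mapping is easy to reproduce. A small Python sketch (assuming the standard sRGB EOTF, and that for gray pixels the linearised value is the relative luminance):

```python
def srgb_to_linear(v: float) -> float:
    """sRGB signal in [0, 1] -> linear light (IEC 61966-2-1)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def gray_luminance_nits(code: int, intensity_target: float = 100.0) -> float:
    # for R=G=B pixels, relative luminance equals the linearised value,
    # and intensity_target sets how many nits (1, 1, 1) maps to
    return srgb_to_linear(code / 255.0) * intensity_target

print([round(gray_luminance_nits(c)) for c in (0, 118, 255)])         # -> [0, 18, 100]
print([round(gray_luminance_nits(c, 200.0)) for c in (0, 118, 255)])  # -> [0, 36, 200]
```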
|
2025-04-11 01:45:48
|
for PQ input, it makes no difference (it’s always the absolute luminance that the PQ signal implies)
|
|
|
jonnyawsom3
|
2025-04-11 01:59:00
|
That makes sense why setting it to 1000 stops crushed blacks. The values are mapped from the 0-20 range to 0-200, so they get quantized more like the bright areas without intensity_target.... I think
|
|
|
CrushedAsian255
|
|
spider-mario
right, an sRGB image with `[(0, 0, 0), (118, 118, 118), (255, 255, 255)]` will be mapped to luminances of `[0, 18, 100]` if intensity_target is 100, or to `[0, 36, 200]` if it’s 200
|
|
2025-04-11 02:21:26
|
Oh so it’s how many nits full-scale translates into?
|
|
|
spider-mario
|
2025-04-11 02:21:55
|
strictly speaking, in the jxl spec, it’s just “an upper bound of the luminances found in the image”, to guide tone mapping
|
|
2025-04-11 02:22:02
|
but yes, cjxl also uses it to interpret SDR input, mapping (1, 1, 1) to that
|
|
|
CrushedAsian255
|
2025-04-11 02:22:40
|
Why doesn’t it just do max(image) to find the upper bound?
|
|
2025-04-11 02:22:56
|
Or is it like the upper bound of what *could* be in the image?
|
|
|
spider-mario
|
2025-04-11 02:23:52
|
it’s ideally the actual maximum in the image (so that the viewer doesn’t have to compute it on the fly), but it’s allowed to just be an upper bound estimate (so that the encoding process doesn’t have to involve computing it either)
|
|
|
CrushedAsian255
|
2025-04-11 02:25:47
|
So a SDR image that only goes to 0.5 should ideally have it set to 127 nits?
|
|
|
spider-mario
|
2025-04-11 02:37:11
|
yes, theoretically
|
|
2025-04-11 02:37:33
|
although current djxl, when decoding this to sRGB, would then remap such an image to use the 0-1 range in the output
|
|
|
damian101
|
|
CrushedAsian255
Why doesn’t it just do max(image) to find the upper bound?
|
|
2025-04-11 03:14:10
|
SDR transfer functions are relative...
|
|
|
CrushedAsian255
Or is it like the upper bound of what *could* be in the image?
|
|
2025-04-11 03:14:45
|
I think so
|
|
|
spider-mario
|
|
SDR transfer functions are relative...
|
|
2025-04-11 04:28:17
|
right, the exact function would be max(luminance(linearise(rgb)) for rgb in image) – is that what you mean?
|
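Spelled out as code (a sketch only: `image` here is assumed to be an iterable of linear-light RGB triples in [0, 1], and the 255-nit SDR white is the assumption mentioned earlier in the thread, not something the spec mandates):

```python
def intensity_target_nits(image, sdr_white_nits: float = 255.0) -> float:
    """Tightest upper bound on image luminance, in nits."""
    # Rec. 709 luma weights applied to linearised RGB; the max over all
    # pixels is the smallest value the intensity_target field could carry
    return max(
        (0.2126 * r + 0.7152 * g + 0.0722 * b) * sdr_white_nits
        for r, g, b in image
    )

# an image whose brightest pixel is linear 0.5 gray -> 127.5 nits,
# matching the "~127 nits" figure from the discussion above
intensity_target_nits([(0.1, 0.2, 0.1), (0.5, 0.5, 0.5)])
```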
|
2025-04-11 04:28:30
|
I understood max(image) as a figurative simplification
|
|
|
Jyrki Alakuijala
|
|
Like, 1% vs 100% luminance means factor ~50 for 100 nits PQ for contrast perception relative to linear light, while for XYB it's only factor ~17.
|
|
2025-05-12 05:06:25
|
there is the cubic root compression in JPEG XL XYB and log compression in Butteraugli XYB, depends which one you use
|
|
|
damian101
|
2025-05-12 05:12:25
|
Didn't know that...
|
|
|
Jyrki Alakuijala
|
2025-05-12 05:13:41
|
both are biased: cbrt(energy + C) and log(energy + C)
|
|
2025-05-12 05:13:57
|
they fit not so badly until a few hundred nits
|
|
2025-05-12 05:14:14
|
then JPEG XL is kludged to follow log through manipulation of the adaptive quantization
|
|
2025-05-12 05:14:29
|
but computation is cbrt (or two multiplications in expansion)
|
|
2025-05-12 05:15:46
|
https://github.com/libjxl/libjxl/blob/52aaea503e5e8ec25094472377aab4b2cde96652/lib/jxl/enc_adaptive_quantization.cc#L148C7-L148C49 shows this kludge that makes cbrt and log the same
|
|
2025-05-12 05:16:12
|
it is some additional entropy in the adaptive quantization map, but speeds up the decoding
|
|
|
Traneptora
|
|
Jyrki Alakuijala
then JPEG XL is kludged to follow log through manipulation of the adaptive quantization
|
|
2025-05-13 01:41:09
|
what part of the encode process would you say is "adaptive quantization"? are you referring to the HFMult?
|
|
2025-05-13 01:41:41
|
or rather, determining it?
|
|
|
Jyrki Alakuijala
|
2025-05-13 08:25:59
|
The code in libjxl that resides in enc_adaptive_quantization.cc
|
|
|
bonnibel
|
2025-05-30 05:20:27
|
would baugli or ssimu2 be appropriate metrics for comparing color quantization + dithering algorithms on a dataset?
|
|
|
A homosapien
|
2025-05-30 05:46:06
|
PNGquant uses MSE internally I think
|
|
|
_wb_
|
|
bonnibel
would baugli or ssimu2 be appropriate metrics for comparing color quantization + dithering algorithms on a dataset?
|
|
2025-05-30 08:58:35
|
Worth trying but at least for ssimu2 it was not tuned based on datasets including such images (or not much at least, iirc TID2013 does have some of that)
|
|
|
bonnibel
|
2025-05-30 10:59:10
|
I'll be comparing the original images with versions quantized down to 256 colors. MSE/PSNR is the most common metric in the literature, but I figure probably not the most appropriate to test error diffusion dithering. You're trying to create the illusion of a higher color resolution, but the per-pixel error will always be worse than just mapping every pixel to the nearest colour in the palette
|
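The per-pixel-error point is easy to demonstrate. A toy 1-D error-diffusion sketch (purely illustrative, not any real quantizer): on a flat 30% gray field with a two-colour palette {0, 1}, dithering loses badly on MSE but reproduces the average level that nearest-colour mapping destroys:

```python
src = [0.3] * 1000  # flat field at 30% gray, palette is just {0, 1}

# nearest-colour mapping: every pixel snaps to 0
nearest = [float(round(v)) for v in src]

# naive 1-D error diffusion: carry the full quantization error to the next pixel
dithered, err = [], 0.0
for v in src:
    q = float(round(v + err))
    err = (v + err) - q
    dithered.append(q)

mse = lambda out: sum((o - s) ** 2 for o, s in zip(out, src)) / len(src)

# dithered MSE is higher (~0.21 vs 0.09), yet its mean lands on 0.3
# while nearest-colour collapses the whole field to mean 0.0
print(mse(nearest), mse(dithered))
print(sum(nearest) / len(src), sum(dithered) / len(src))
```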
|
|
_wb_
|
2025-05-31 06:36:29
|
Dithering and noise are notoriously difficult things for any metric...
|
|
|
Quackdoc
|
2025-05-31 06:38:00
|
AI metrics when
|
|
|
_wb_
|
2025-05-31 06:38:34
|
There already are quite a few
|
|
|
Quackdoc
|
2025-05-31 06:39:27
|
~~ok yeah but good ones~~ [av1_dogelol](https://cdn.discordapp.com/emojis/867794291652558888.webp?size=48&name=av1_dogelol)
|
|
|
Kupitman
|
2025-05-31 09:46:17
|
av1_dogelol...
|
|
|
𝕰𝖒𝖗𝖊
|
2025-07-25 03:17:04
|
I've just rebuilt `ssimulacra2` and `butteraugli_main` (from `libjxl`) and they don't work with JPG (XYB) images and I am 100% sure that I have been using them with XYB images.
|
|
2025-07-25 03:17:10
|
What can be my potential problem?
|
|
2025-07-25 03:17:49
|
Tried building everything statically.
Tried using `libjpeg.so` from `google/jpegli`
Tried `libjpeg.so` from `libjxl`
|
|
2025-07-25 03:18:02
|
Tried `lcms2` or `skcms`
|
|
2025-07-25 04:03:33
|
Maybe git upstream problem. Tests fail:
```
99% tests passed, 5 tests failed out of 8169
The following tests FAILED:
2819 - CodecTest.TestRoundTrip (Failed)
2833 - JpegliTest.JpegliSRGBDecodeTest (Failed)
2834 - JpegliTest.JpegliGrayscaleDecodeTest (Failed)
2835 - JpegliTest.JpegliXYBEncodeTest (Failed)
2836 - JpegliTest.JpegliDecodeTestLargeSmoothArea (Failed)
```
|
|
2025-07-25 06:49:14
|
Yes, another version worked.
|
|
2025-07-25 06:49:32
|
Hope this info is helpful for the devs
|
|
2025-07-25 06:56:12
|
I will try to pinpoint the exact commit with the problem
|
|
2025-07-26 12:32:53
|
Detect codec by signature. ([#4268](https://github.com/libjxl/libjxl/pull/4268))
This is the problematic commit
|
|
2025-07-26 12:33:37
|
```
ssimulacra2 ref.jpg dis.jpg
-5.08811768
```
```
ssimulacra2 ref.jpg dis.jpg
Could not decode distorted image: temp.jpg
```
|
|
2025-07-26 12:34:01
|
Only breaks JPG with XYB, standard JPG is not affected.
|
|
2025-07-26 01:06:48
|
I'll try to fix it. I guess it doesn't check:
```C
{0xFF, 0xD8, 0xFF, 0xE2}, // JPEG with XYB ICC
```
|
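A sketch of that kind of prefix check in Python (the actual fix belongs in libjxl's C++ codec detection; the signature list here is illustrative):

```python
JPEG_SIGNATURES = (
    b"\xff\xd8\xff\xe0",  # SOI + APP0 (JFIF)
    b"\xff\xd8\xff\xe1",  # SOI + APP1 (Exif)
    b"\xff\xd8\xff\xe2",  # SOI + APP2 (ICC profile, e.g. XYB JPEGs)
)

def looks_like_jpeg(header: bytes) -> bool:
    """Match the file header against known JPEG start-of-stream signatures."""
    return any(header.startswith(sig) for sig in JPEG_SIGNATURES)

# first bytes of the XYB JPEG from the xxd dump below: ffd8 ffe2 02e0 4943...
looks_like_jpeg(bytes.fromhex("ffd8ffe202e04943"))  # -> True
```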
|
2025-07-26 01:07:11
|
all XYB files I checked, show this signature.
|
|
2025-07-26 01:11:54
|
I am trying RN. I'll open a PR
|
|
|
jonnyawsom3
|
2025-07-26 01:12:21
|
The last two values signify an APP2 marker/ICC profile
|
|
|
𝕰𝖒𝖗𝖊
|
2025-07-26 01:12:26
|
Yes
|
|
2025-07-26 01:12:56
|
I learned it, and it's indicated in the `xxd` output
|
|
2025-07-26 01:13:14
|
```
xxd jpg_q99.jpg | head -n 4
00000000: ffd8 ffe2 02e0 4943 435f 5052 4f46 494c ......ICC_PROFIL
00000010: 4500 0101 0000 02d0 6a78 6c20 0440 0000 E.......jxl .@..
```
|
|
2025-07-26 01:14:31
|
```
ssimulacra2 test.jpg temp.jpg
-5.08811768
```
Yes, fixed it
|
|
|
Jyrki Alakuijala
|
|
bonnibel
I'll be comparing the original images with versions quantized down to 256 colors. MSE/PSNR is the most common metric in the literature, but I figure probably not the most appropriate to test error diffusion dithering. You're trying to create the illusion of a higher color resolution, but the per-pixel error will always be worse than just mapping every pixel to the nearest colour in the palette
|
|
2025-09-28 05:05:42
|
I used butteraugli for helping to compare dithering of the delta palette design, and to automatically find candidate solutions that produced low pnorm*bpp values
|
|
|
gb82
|
2025-10-12 11:23:35
|
https://giannirosato.com/blog/post/oavif/
|
|
2025-10-12 11:24:14
|
let me know what u all think – esp <@794205442175402004> , I’d be interested to hear how Cloudinary does TQ (I’m sure u have models that are much better)
|
|
|
_wb_
|
2025-10-12 11:35:17
|
Looks interesting, I think we did similar things (though probably a bit dumber, and some things differently) before we started switching to an approach where a simple AI model is directly producing the q setting that is predicted to hit target metric scores (that is, it produces predicted metric scores for all q settings given an image as input)
|
|
2025-10-12 11:37:41
|
In any of these approaches though, the weak spot remains having a metric that avoids any content bias. With AIC-4 we want to try to make such a metric or at least understand better which of the existing metrics is best at that, but it is hard and we just started.
|
|
|
jonnyawsom3
|
2025-10-12 11:40:03
|
I didn't know there was a SSIMU2 fork with half the memory usage, wonder if we could port any of those improvements back to the libjxl version
|
|
|
gb82
|
|
_wb_
Looks interesting, I think we did similar things (though probably a bit dumber, and some things differently) before we started switching to an approach where a simple AI model is directly producing the q setting that is predicted to hit target metric scores (that is, it produces predicted metric scores for all q settings given an image as input)
|
|
2025-10-13 12:13:12
|
that sounds super smart to me, I'm sure that model is giving very good results
|
|
|
I didn't know there was a SSIMU2 fork with half the memory usage, wonder if we could port any of those improvements back to the libjxl version
|
|
2025-10-13 12:13:45
|
yeah I wrote it based on another older (slower) Zig implementation so let me know if you have any questions
|
|
2025-10-13 12:14:14
|
it ditches some ssimu2 features but it is 99.97% accurate according to my testing so I think it is decent for TQ
|
|
|
_wb_
|
2025-10-13 06:25:42
|
The error will be much smaller than the error between ssimu2 and actual subjective quality, so I wouldn't worry about it.
|
|
|
Lumen
|
2025-10-13 06:39:45
|
Honestly the zig implementation is far from what's achievable in memory usage
|
|
2025-10-13 06:40:25
|
Using some tricks in my HIP implementation I was able to reduce VRAM usage from 32 planes to 12 by changing to a tile-based computation
|
|
2025-10-13 06:40:50
|
Reducing the memory bandwidth needed at the same time which sped up computation
|
|
2025-10-13 06:41:24
|
But it's a nice start already to ditch useless things and leads to great savings already
|
|
2025-10-13 06:45:16
|
Basically the 12 planes represent the source, the distorted image, and a single temporary image (3 planes each), all multiplied by 4/3 due to preloading the multiscale
|
|
|
gb82
|
|
Lumen
Honestly the zig implementation is far from what s achievable in memory usage
|
|
2025-10-13 07:08:42
|
what you've done with vship is truly amazing; im really inspired by it. if you have documentation for any of this outside of the code, I'd be very interested in implementing pretty much everything you've done
|
|
2025-10-13 07:12:59
|
honestly it might just be time for me to look at the vship code
|
|
|
Lumen
|
2025-10-13 07:14:20
|
The new code with this optimization is quite convoluted, but not unreadable either
It would be great if I could create a research paper detailing everything that I did for vship
|
|
|
gb82
|
2025-10-13 07:15:04
|
that would be genuinely awesome
|
|
2025-10-13 07:15:21
|
I've defaulted to blog posts for everything, I think even that may suffice
|
|
|
Lumen
|
2025-10-13 07:15:47
|
Truue though I dont have a fancy website like yours ^^
|
|
|
gb82
|
2025-10-13 07:16:15
|
zola was really easy and fun to set up, highly recommend
|
|
2025-10-13 07:16:23
|
now I can just write Markdown and press publish
|
|
|
spider-mario
|
2025-10-13 08:46:30
|
same here, I set up a blog with Hugo and push it to GitHub Pages
|
|
|
|
veluca
|
|
gb82
zola was really easy and fun to set up, highly recommend
|
|
2025-10-13 11:10:16
|
same, use it for www.olimpiadi-informatica.it and it's a pretty good experience
|
|
|
lonjil
|
2025-10-14 09:55:55
|
I think I'll be writing my new blog using Typst's improved HTML output in v0.14.
|
|
|
|
veluca
|
2025-10-14 10:38:53
|
(feels like we are going a bit off topic, but I am very excited about typst's html output too)
|
|
|
|
Mia
|
|
lonjil
I think I'll be writing my new blog using Typst's improved HTML output in v0.14.
|
|
2025-10-24 12:13:12
|
May be interested in <https://github.com/Glomzzz/typsite>
|
|
|
lonjil
|
2025-10-24 12:13:36
|
indeed!
|
|