|
yoochan
|
2024-02-14 08:57:42
|
perhaps the question could also be reformulated as : how to describe the modular encoding in a few words... where DCT based could be described as "Fourier transform on 8x8 pixels with high frequencies discarded and the result zipped" 😄
|
|
|
Oleksii Matiash
Well, idk why I didn't try it myself before asking 🤦♂️ Tried -d 10 with -m 0 and 1, and got the idea
|
|
2024-02-14 08:58:02
|
can we see ?
|
|
|
Oleksii Matiash
|
|
yoochan
can we see ?
|
|
2024-02-14 09:00:19
|
Sure, but keep in mind that original file was in ProPhoto space (don't ask why 😅).
|
|
2024-02-14 09:04:19
|
And the result moves me to another question - am I right that vardct can only be used for the rgb channels, and all others can only be compressed with modular? If yes, it is a bit disappointing, because vardct would suit, for example, a depth or thermal map better, imo
|
|
|
jonnyawsom3
|
2024-02-14 09:08:59
|
Did a quick example too, `-d 25 -e 4` with VarDCT on the left and Modular on the right
|
|
2024-02-14 09:10:20
|
Might just be because it's a 20MP photo, but at `-d 10` it was very hard to notice differences, hence cranking it up to max
|
|
2024-02-14 09:13:52
|
At first it appears like Modular preserves details better, but it also has a larger filesize, so it isn't directly comparable
|
|
|
lonjil
|
2024-02-14 09:14:39
|
the flaws are very easy to notice at 1:1 zoom
|
|
|
yoochan
|
2024-02-14 09:19:48
|
(is it possible to compile ssimulacra2 from libjxl as a standalone tool ? with a cmake flag ?)
|
|
|
lonjil
|
2024-02-14 09:23:13
|
`-DJPEGXL_ENABLE_DEVTOOLS=ON` should give you an executable called `ssimulacra2` in `build/tools/`.
|
|
|
_wb_
|
2024-02-14 09:55:19
|
Modular's Squeeze basically splits an image in two images of half the size, where every two neighboring pixels A,B are averaged to (A+B)/2 in the "squeezed" image while the difference A-B is stored in the "residual" image. This is done recursively so in the end you only have a tiny squeezed image and a pyramid of residuals — this is what allows you to get progressive previews at all power-of-two downscale factors.
However it's not just the difference that is stored in the residuals, first a predicted difference (the "tendency term") is subtracted from it; this tendency term boils down to linear interpolation in regions that are monotonic (like gradients) while it is just zero in regions that are nonmonotonic (like edges), which means it avoids the ringing artifacts of the DCT when residuals get quantized, which is what you do to make it lossy. Everything is reversible so it can be done in a lossless way. It's typically counterproductive for compression when used in a lossless way though.
Lossy modular might be useful at the very high end of the quality spectrum, since if you just slightly quantize the highest freq residuals, you get something near-lossless with strong guarantees (you can guarantee a per-pixel max error) while saving a lot compared to lossless.
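A minimal toy sketch of that idea (Python, 1D only, omitting the tendency-term prediction and the 2D details of the real transform), just to make the recursive half/residual split concrete:
```python
# Toy 1D "squeeze": recursively split a row into a half-size "squeezed" image
# of pair averages and a "residual" image of pair differences. Real JXL also
# subtracts a predicted tendency term from the residual and works in 2D.

def squeeze_once(row):
    avg = [(a + b) // 2 for a, b in zip(row[0::2], row[1::2])]  # squeezed half
    res = [a - b for a, b in zip(row[0::2], row[1::2])]         # residuals
    return avg, res

def squeeze_pyramid(row):
    residuals = []
    while len(row) > 1:
        row, res = squeeze_once(row)
        residuals.append(res)
    return row, residuals  # tiny "squeezed" image plus a pyramid of residuals

# An 8-sample gradient collapses to one average and three residual levels.
print(squeeze_pyramid([10, 12, 14, 16, 18, 20, 22, 24]))
# ([17], [[-2, -2, -2, -2], [-4, -4], [-8]])
```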
|
|
|
yoochan
|
2024-02-14 09:57:43
|
nice introduction ! thank you very much for taking the time to type it ! I understand better when one speaks about residuals 😄
|
|
2024-02-14 09:58:29
|
don't you have issues of accuracy when doing repeated "divided by 2" ? or maybe you work with floats...
|
|
|
Oleksii Matiash
|
|
_wb_
Modular's Squeeze basically splits an image in two images of half the size, where every two neighboring pixels A,B are averaged to (A+B)/2 in the "squeezed" image while the difference A-B is stored in the "residual" image. This is done recursively so in the end you only have a tiny squeezed image and a pyramid of residuals — this is what allows you to get progressive previews at all power-of-two downscale factors.
However it's not just the difference that is stored in the residuals, first a predicted difference (the "tendency term") is subtracted from it; this tendency term boils down to linear interpolation in regions that are monotonic (like gradients) while it is just zero in regions that are nonmonotonic (like edges), which means it avoids the ringing artifacts of the DCT when residuals get quantized, which is what you do to make it lossy. Everything is reversible so it can be done in a lossless way. It's typically counterproductive for compression when used in a lossless way though.
Lossy modular might be useful at the very high end of the quality spectrum, since if you just slightly quantize the highest freq residuals, you get something near-lossless with strong guarantees (you can guarantee a per-pixel max error) while saving a lot compared to lossless.
|
|
2024-02-14 09:59:15
|
Thank you!
|
|
|
_wb_
|
|
yoochan
don't you have issues of accuracy when doing repeated "divided by 2" ? or maybe you work with floats...
|
|
2024-02-14 10:03:30
|
it's all integer arithmetic. If you know `(A+B)/2` (in integer arithmetic so rounded down) and `A-B`, you can reconstruct A and B exactly once you realize that the least significant bit of A+B must be equal to the least significant bit of A-B (since the difference between those two numbers is even), so you can 'undo' the rounding that happened in `(A+B)/2` and actually get A and B exactly.
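A tiny sketch of that reconstruction trick (assuming the same avg = (A+B)//2, diff = A-B convention as above): since A+B and A-B have the same parity, the bit lost by the floored division is just the low bit of the difference.
```python
# Exact integer reconstruction of A and B from floor((A+B)/2) and A-B.

def unsqueeze_pair(avg, diff):
    total = 2 * avg + (diff & 1)   # recover A+B exactly (parity comes from diff)
    a = (total + diff) // 2
    b = a - diff
    return a, b

assert unsqueeze_pair((7 + 4) // 2, 7 - 4) == (7, 4)
assert unsqueeze_pair((3 + 8) // 2, 3 - 8) == (3, 8)
```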
|
|
|
yoochan
|
2024-02-14 10:03:54
|
oh, smart
|
|
|
_wb_
|
2024-02-14 10:04:50
|
```
nightshot_iso_100.ppm.png
Encoding kPixels Bytes BPP E MP/s D MP/s Max norm SSIMULACRA2 PSNR pnorm BPP*pnorm QABPP Bugs
-----------------------------------------------------------------------------------------------------------------------------------------
jxl:m:d0 7375 6047836 6.5595889 3.259 27.468 nan 100.00000000 99.99 0.00000000 0.000000000000 6.560 0
jxl:m:d0.06 7375 4219563 4.5766120 2.047 33.303 0.26415491 95.28152196 58.49 0.08403327 0.384587659491 4.577 0
jxl:d0.1 7375 4870393 5.2825136 7.776 99.562 0.29657105 95.59548796 60.56 0.08135912 0.429780645959 5.283 0
Aggregate: 7375 4990497 5.4127809 3.730 44.992 0.27989408 96.93531517 70.75 0.08268538 0.406556678366 5.413 0
```
at these kind of very high qualities, depending on the image, lossy modular might be better than vardct (if the slower encode/decode is not a blocker)
|
|
2024-02-14 10:07:09
|
generally though, vardct is better for lossy than modular — at higher distances the squeeze transform is a bit too simplistic to really give nice-looking results (it gets blocky/macropixelated)
|
|
|
CrushedAsian255
|
2024-02-14 09:19:15
|
Only thing with modular is, at least on my ARM MacBook, it doesn't seem to be multi-threading: modular uses 100% CPU where vardct uses around 1000% to 1200%
|
|
|
|
veluca
|
2024-02-14 09:28:44
|
what version are you using? also, I assume you're talking about encoding?
|
|
|
|
runr855
|
2024-02-14 11:55:49
|
Will there ever be official MacOS builds? Or is there somewhere trusted to get them from which isn't a package manager?
|
|
|
Demiurge
|
2024-02-14 11:58:18
|
Brew.sh
|
|
2024-02-14 11:58:29
|
Also apple has built in support.
|
|
|
|
runr855
|
2024-02-15 12:08:02
|
I was thinking about encoding from command line, is that included? That would be wild.
|
|
|
Demiurge
|
2024-02-15 01:01:40
|
No, the best way would be to install homebrew and then install libjxl from there, that will get you the command line tools.
|
|
|
CrushedAsian255
|
|
veluca
what version are you using? also, I assume you're talking about encoding?
|
|
2024-02-15 07:00:04
|
```
cjxl -v 212885.png test.jxl -m 1
JPEG XL encoder v0.9.1 0.9.1 [NEON]
Read 3999x2424 image, 4189891 bytes, 691.9 MP/s
Encoding [Modular, d1.000, effort: 7]
Compressed to 2600.3 kB (2.146 bpp).
3999 x 2424, 2.494 MP/s [2.49, 2.49], 1 reps, 14 threads.
```
|
|
2024-02-15 07:00:33
|
|
|
2024-02-15 07:00:58
|
vs VarDCT
|
|
2024-02-15 07:01:00
|
```
cjxl -v 212885.png test.jxl
JPEG XL encoder v0.9.1 0.9.1 [NEON]
Read 3999x2424 image, 4189891 bytes, 670.7 MP/s
Encoding [VarDCT, d1.000, effort: 7]
Compressed to 2118.6 kB (1.748 bpp).
3999 x 2424, 5.482 MP/s [5.48, 5.48], 1 reps, 14 threads.
```
|
|
2024-02-15 07:01:13
|
|
|
|
Tirr
|
2024-02-15 07:02:06
|
what happens if you set `-d 0` instead of `-m 1`?
|
|
|
CrushedAsian255
|
2024-02-15 07:04:29
|
`-d 0` implies `-m 1` as VarDCT doesn't support Lossless
|
|
2024-02-15 07:04:37
|
`-m 1` is modular force switch
|
|
|
Tirr
|
2024-02-15 07:04:46
|
but lossy modular may be different from lossless modular
|
|
|
CrushedAsian255
|
2024-02-15 07:05:01
|
hang on, I'll try again with a bigger image for a longer test
|
|
2024-02-15 07:06:26
|
nope, lossless and lossy modular both only use 1 core
|
|
2024-02-15 07:06:38
|
while vardct uses like 10-12 cores
|
|
2024-02-15 07:07:25
|
does modular split into Tiles?
|
|
|
Tirr
|
2024-02-15 07:10:16
|
ah you're using v0.9.1, it would be slow
|
|
2024-02-15 07:10:55
|
recent git version has streaming encoding, which will actually multithread
|
|
2024-02-15 07:13:56
|
with 23.7 MP image, wall time goes from 82 seconds to 4 seconds and CPU utilization increases to 800%:
```console
$ time cjxl -d 0 --disable_output ~/Pictures/bg/3_orig.png
JPEG XL encoder v0.9.1 0.9.1 [NEON]
Encoding [Modular, lossless, effort: 7]
Compressed to 20336.7 kB (6.869 bpp).
5787 x 4093, 0.292 MP/s [0.29, 0.29], 1 reps, 10 threads.
cjxl -d 0 --disable_output ~/Pictures/bg/3_orig.png 81.81s user 9.26s system 110% cpu 1:22.75 total
$ time ./cjxl -d 0 --disable_output ~/Pictures/bg/3_orig.png
JPEG XL encoder v0.10.0 fbf36b27 [NEON]
Encoding [Modular, lossless, effort: 7]
Compressed to 20488.6 kB (6.920 bpp).
5787 x 4093, 6.033 MP/s [6.03, 6.03], 1 reps, 10 threads.
./cjxl -d 0 --disable_output ~/Pictures/bg/3_orig.png 33.02s user 0.43s system 802% cpu 4.169 total
```
|
|
|
CrushedAsian255
|
2024-02-15 07:15:37
|
when will 0.10.0 go for mac Homebrew?
|
|
2024-02-15 07:16:26
|
cause that seems really helpful
|
|
|
Tirr
|
2024-02-15 07:18:08
|
v0.10.0 is not released yet, and afaik there's no set date
|
|
|
CrushedAsian255
|
2024-02-15 07:18:29
|
so just build from Source
|
|
2024-02-15 07:18:32
|
?
|
|
|
Tirr
|
2024-02-15 07:18:36
|
yes, for now
|
|
|
|
veluca
|
|
Tirr
v0.10.0 is not released yet, and afaik there's no set date
|
|
2024-02-15 07:24:48
|
we can hopefully release 0.10 by end of next week
|
|
|
CrushedAsian255
|
|
_wb_
|
2024-02-15 09:07:10
|
Lossy modular is still slow since it cannot be made streaming (at least that would be a lot more complicated if it is possible at all)
|
|
|
|
veluca
|
2024-02-15 09:53:58
|
it should be less slow though, I believe I made it so that it'd learn local trees
|
|
2024-02-15 09:54:06
|
if not we probably ought to do that
|
|
|
_wb_
|
2024-02-15 10:53:09
|
I'll benchmark it. We could btw also do local squeeze instead of global, which makes it non-progressive but makes it streaming (and also allows cropped decode).
|
|
|
lonjil
|
2024-02-15 10:54:52
|
no way to interleave the local squeezes rather than putting them in order or whatever?
|
|
2024-02-15 10:55:45
|
I guess that would make it not streaming, oop
|
|
|
CrushedAsian255
|
|
_wb_
Lossy modular is still slow since it cannot be made streaming (at least that would be a lot more complicated if it is possible at all)
|
|
2024-02-15 11:18:19
|
Is there any reason you would use lossy modular over vardct?
|
|
2024-02-15 11:18:29
|
I am still learning the specifics on how all this works
|
|
2024-02-15 11:18:45
|
Is there a good place for a technical deep dive on how the codec works?
|
|
|
_wb_
|
|
CrushedAsian255
Is there any reason you would use lossy modular over vardct?
|
|
2024-02-15 11:42:52
|
Not really. For anything visual, for the useful quality range, VarDCT will do a better job. Lossy modular might be useful for weird edge cases like extremely high or low quality, and non-visual data.
|
|
|
CrushedAsian255
Is there a good place for a technical deep dive on how the codec works?
|
|
2024-02-15 11:49:15
|
Not really, there are some slides here: https://docs.google.com/presentation/d/1LlmUR0Uoh4dgT3DjanLjhlXrk_5W2nJBDqDAMbhe8v8/edit?usp=drivesdk but that's not super deep
|
|
|
lonjil
|
|
_wb_
Not really. For anything visual, for the useful quality range, VarDCT will do a better job. Lossy modular might be useful for weird edge cases like extremely high or low quality, and non-visual data.
|
|
2024-02-15 11:50:13
|
something like webp's "near lossless"?
|
|
|
CrushedAsian255
|
2024-02-15 11:55:27
|
So modular tries to make a logical (if-else) model and then entropy encodes residuals?
|
|
|
_wb_
|
2024-02-15 11:55:28
|
Yeah, I think jxl has three approaches that could be used to do a good near-lossless:
- squeeze with light quantization
- lossy delta palette
- the same approach some lossy PNG encoders use: quantize prediction residuals (where in jxl we have cool predictors to choose from)
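A bare-bones sketch of that last approach (hypothetical quantization step q and a simple left-neighbour predictor, not what libjxl actually does), just to show how quantizing prediction residuals gives a per-pixel max-error guarantee:
```python
# Near-lossless via quantized prediction residuals: with step q and
# round-to-nearest, every reconstructed pixel is within q//2 of the original.

def near_lossless(row, q=2):
    out, prev = [], 0
    for x in row:
        r = x - prev                 # residual against a left-neighbour predictor
        rq = (r + q // 2) // q * q   # quantize the residual
        prev = prev + rq             # decoder-side reconstruction
        out.append(prev)
    return out

row = [10, 11, 13, 200, 201, 199]
rec = near_lossless(row)
assert all(abs(a - b) <= 1 for a, b in zip(row, rec))  # max error is q//2 = 1
```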
|
|
|
CrushedAsian255
So modular tries to make a logical (if-else) model and then entropy encodes residuals?
|
|
2024-02-15 11:56:40
|
The MA tree is just used as context for the entropy coding. Good context can make a big difference.
|
|
|
CrushedAsian255
|
|
_wb_
The MA tree is just used as context for the entropy coding. Good context can make a big difference.
|
|
2024-02-15 11:57:13
|
Wdym context?
|
|
2024-02-15 12:00:32
|
the 'jxl art' thing just looks like a bunch of nested if-elses in a tree
```
if y > 150
if c > 0
- N 0
if x > 500
if WGH > 5
- AvgN+NW + 2
- AvgN+NE - 2
if x > 470
- AvgW+NW -2
if WGH > 0
- AvgN+NW +1
- AvgN+NE - 1
if y > 136
if c > 0
if c > 1
if x > 500
- Set -20
- Set 40
if x > 501
- W - 1
- Set 150
if x > 500
- N + 5
- N - 15
if W > -50
- Weighted -1
- Set 320
```
|
|
2024-02-15 12:05:32
|
also i think i'm doing something wrong, why is my avif like 10x smaller than my jxl, both from the same hires png
|
|
2024-02-15 12:05:45
|
is `-d 1` just too highres
|
|
|
_wb_
|
2024-02-15 12:11:27
|
Entropy coding is based on a probability distribution of the values that get encoded (e.g. typically zero has a high probability and low-amplitude residuals have a higher probability than high-amplitude one). That's what allows it to save bits: e.g. instead of using 8 bits for all samples, you can use 1 bit for zeroes (and 12 bits for the rare cases where the value is large).
Context modeling allows it to use different distributions depending on the local properties of the image. In modular, the context doesn't only determine the distribution but also the predictor to be added to the residual (and also an offset and multiplier), which makes it possible to tune everything to the specific behavior of the pixels in the image.
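A rough toy illustration of that (made-up contexts and distributions, nothing like the actual ANS coder in jxl), just to show why picking a distribution per context saves bits:
```python
import math

# Cost in bits of coding a symbol under a given probability distribution.
def bits(symbol, dist):
    return -math.log2(dist[symbol])

# Two hypothetical contexts: "smooth" neighbourhoods where residuals are
# almost always zero, and "edge" neighbourhoods where large values are common.
smooth_ctx = {0: 0.7, 1: 0.1, -1: 0.1, 5: 0.05, -5: 0.05}
edge_ctx   = {0: 0.2, 1: 0.2, -1: 0.2, 5: 0.2, -5: 0.2}

residuals_in_smooth_area = [0, 0, 1, 0, -1, 0]
with_context    = sum(bits(r, smooth_ctx) for r in residuals_in_smooth_area)
without_context = sum(bits(r, edge_ctx) for r in residuals_in_smooth_area)
print(with_context, "<", without_context)  # ~8.7 bits vs ~13.9 bits
```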
|
|
|
CrushedAsian255
also i think im doing somethign wrong why is my avif like 10x smaller than my jxl both from the same hires png
|
|
2024-02-15 12:13:08
|
cjxl default quality is way higher than avifenc default quality. If you compare lossy compression you have to use either your eyes or a good perceptual metric and compare filesizes at the same visual quality (or visual quality at the same filesize).
|
|
|
CrushedAsian255
|
|
_wb_
Entropy coding is based on a probability distribution of the values that get encoded (e.g. typically zero has a high probability and low-amplitude residuals have a higher probability than high-amplitude one). That's what allows it to save bits: e.g. instead of using 8 bits for all samples, you can use 1 bit for zeroes (and 12 bits for the rare cases where the value is large).
Context modeling allows it to use different distributions depending on the local properties of the image. In modular, the context doesn't only determine the distribution but also the predictor to be added to the residual (and also an offset and multiplier), which makes it possible to tune everything to the specific behavior of the pixels in the image.
|
|
2024-02-15 12:23:15
|
seems like maybe i should read the spec
|
|
|
_wb_
cjxl default quality is way higher than avifenc default quality. If you compare lossy compression you have to use either your eyes or a good perceptual metric and compare filesizes at the same visual quality (or visual quality at the same filesize).
|
|
2024-02-15 12:24:14
|
that makes sense, AV1 is a **video** format after all, although this does raise the question of *why* are we using video codecs for images again?
|
|
2024-02-15 12:24:42
|
video frame = shown for anywhere from ~40ms down to less than 10ms
|
|
2024-02-15 12:24:52
|
photo = meant to be looked at closely for long time
|
|
|
_wb_
|
2024-02-15 12:37:45
|
Yes, and also in video subsequent inter frames can add more detail in case it's a static image. Except in avif those refinements never come :/
|
|
|
CrushedAsian255
|
|
_wb_
Yes, and also in video subsequent inter frames can add more detail in case it's a static image. Except in avif those refinements never come :/
|
|
2024-02-15 12:38:12
|
yep... :/
|
|
2024-02-15 12:38:18
|
whose idea was this anyway?
|
|
|
_wb_
|
2024-02-15 12:39:07
|
It's a straightforward "quick win" idea, lossy webp, HEIC, bpg are also doing this.
|
|
|
CrushedAsian255
|
|
_wb_
It's a straightforward "quick win" idea, lossy webp, HEIC, bpg are also doing this.
|
|
2024-02-15 12:39:45
|
but we all know, the first idea that comes to you that sounds easy and reasonable is **NEVER** the best idea
|
|
|
_wb_
|
2024-02-15 12:40:15
|
But it's a fundamentally bad idea imo. At least until we can afford video at much better fidelity than what we have to do now to keep bandwidth reasonable.
|
|
|
jonnyawsom3
|
2024-02-15 12:40:30
|
Scroll up in <#803645746661425173> and see what real formats do
|
|
2024-02-15 12:40:52
|
I especially love Jpegli beating WebP
|
|
|
CrushedAsian255
|
2024-02-15 12:42:12
|
also video codecs love saying bye-bye to anything other than yuv420p
|
|
|
_wb_
|
2024-02-15 12:44:41
|
Basically the whole chrome debacle is video codec folks, instead of admitting that maybe image codecs are better for still images than video codecs, doubling down on the bad idea by moving the goalposts and pretending that the web would be better if we all used low quality images instead of what we're doing now.
|
|
2024-02-15 12:45:19
|
Because that's what video codecs are good at: making bit starved images look somewhat passable.
|
|
|
CrushedAsian255
|
2024-02-15 12:45:46
|
is it like saying we should just use a speech codec at high bitrates for high quality music?
|
|
|
_wb_
|
2024-02-15 12:46:01
|
Yep, more or less.
|
|
2024-02-15 12:46:15
|
Or even at low bitrates.
|
|
2024-02-15 12:47:39
|
All the web metrics are promoting pages that just load fast no matter how they look. Because quality is hard to measure while speed is easy to measure.
|
|
|
CrushedAsian255
|
2024-02-15 12:48:19
|
so basically saying "oh we should all use 32kbps MP3 because it loads really fast"
|
|
|
jonnyawsom3
|
2024-02-15 12:49:04
|
Reminded me of storing FLACs in JXL files
|
|
|
CrushedAsian255
|
2024-02-15 12:49:46
|
oh yeah i tried something like that but not quite
|
|
2024-02-15 12:50:07
|
i tried encoding each audio sample as an int16 and putting that into a jpeg xl and then compressing
|
|
2024-02-15 12:50:26
|
the DCT artifacts added some really funky frequencies (probs aliasing artifacts)
|
|
|
jonnyawsom3
|
2024-02-15 12:51:31
|
I did some high quality lossy and surprisingly it worked well. Could only notice in silent areas where the artefacts introduced noise
|
|
|
CrushedAsian255
|
|
I did some high quality lossy and surprisingly it worked well. Could only notice in silent areas where the artefacts introduced noise
|
|
2024-02-15 12:52:08
|
hq lossy but still in vardct?
|
|
2024-02-15 12:52:12
|
or did u use lossy-modular?
|
|
|
jonnyawsom3
|
2024-02-15 12:52:58
|
It was -d 0.3 -e 8
|
|
2024-02-15 12:53:05
|
So yeah, VarDCT
|
|
|
CrushedAsian255
|
2024-02-15 12:54:27
|
nice
|
|
2024-02-15 12:54:34
|
it's night time in my timezone now so ttyl
|
|
2024-02-15 12:54:35
|
gn
|
|
|
jonnyawsom3
|
2024-02-15 12:54:42
|
FLAC to JXL
ffmpeg -hide_banner -i input.flac -f u8 -ac 1 -ar (Sample Rate) - | ffmpeg -f rawvideo -pix_fmt gray -s:v (Seconds x 10)x(Sample Rate / 10) -i - -distance 0.3 -effort 8 -y output.jxl
JXL to FLAC
ffmpeg -i input.jxl -f rawvideo -pix_fmt gray - | ffmpeg -f u8 -ac 1 -ar (Sample Rate) -i - output.flac
|
|
|
CrushedAsian255
|
|
FLAC to JXL
ffmpeg -hide_banner -i input.flac -f u8 -ac 1 -ar (Sample Rate) - | ffmpeg -f rawvideo -pix_fmt gray -s:v (Seconds x 10)x(Sample Rate / 10) -i - -distance 0.3 -effort 8 -y output.jxl
JXL to FLAC
ffmpeg -i input.jxl -f rawvideo -pix_fmt gray - | ffmpeg -f u8 -ac 1 -ar (Sample Rate) -i - output.flac
|
|
2024-02-15 12:54:51
|
ooh nice looks similar to what i did
|
|
2024-02-15 12:55:04
|
(also the big problem that i see is that you're using u8)
|
|
|
jonnyawsom3
|
2024-02-15 12:55:09
|
Moments after saying how we should keep formats to what they were made for haha
|
|
|
CrushedAsian255
|
2024-02-15 12:55:11
|
that's what's probably adding the noise
|
|
|
Moments after saying how we should keep formats to what they were made for haha
|
|
2024-02-15 12:55:17
|
lol
|
|
2024-02-15 12:55:20
|
but this is fun
|
|
2024-02-15 12:55:24
|
not actually for production
|
|
2024-02-15 12:55:35
|
# **DON'T YOU DARE USE THIS IN PRODUCTION**
|
|
2024-02-15 12:56:38
|
i know wasm exists and you could hack something together to get this to load .wav.jxl files or something, don't do it
|
|
|
jonnyawsom3
|
2024-02-15 12:57:44
|
Yeah, next up using splines to represent audio
|
|
|
CrushedAsian255
|
2024-02-15 12:58:05
|
lol
|
|
|
FLAC to JXL
ffmpeg -hide_banner -i input.flac -f u8 -ac 1 -ar (Sample Rate) - | ffmpeg -f rawvideo -pix_fmt gray -s:v (Seconds x 10)x(Sample Rate / 10) -i - -distance 0.3 -effort 8 -y output.jxl
JXL to FLAC
ffmpeg -i input.jxl -f rawvideo -pix_fmt gray - | ffmpeg -f u8 -ac 1 -ar (Sample Rate) -i - output.flac
|
|
2024-02-15 12:58:56
|
wait i recognise that code, did I send that to you?
|
|
2024-02-15 12:59:18
|
also i think the reason it kinda doesn't suck is because audio codecs also use dct
|
|
|
jonnyawsom3
|
2024-02-15 12:59:33
|
Probably, I know someone gave me the command and then I tweaked it for an evening
|
|
|
CrushedAsian255
|
2024-02-15 01:00:20
|
oh i remember this lol
|
|
2024-02-15 01:01:14
|
i switched to int16's and it improved the quality significantly
|
|
|
_wb_
|
|
veluca
if not we probably ought to do that
|
|
2024-02-15 01:03:20
|
looks like currently cjxl -d 1 -m 1 is essentially still single-threaded, I'm getting this on a big image:
```
7216 x 5412, 2.356 MP/s [2.36, 2.36], 1 reps, 12 threads.
17.03 real 18.73 user 0.94 sys
```
|
|
|
CrushedAsian255
|
2024-02-15 01:03:47
|
i get some multithreading but yeah mostly single threading
|
|
|
_wb_
|
2024-02-15 01:04:36
|
the global squeeze itself was already using threads but the rest is still bottlenecked by global trees I assume
|
|
|
|
veluca
|
|
CrushedAsian255
|
2024-02-15 01:05:21
|
wait am i using the right branch?
|
|
|
|
veluca
|
2024-02-15 01:05:24
|
ah, I think I know why
|
|
|
CrushedAsian255
|
2024-02-15 01:05:33
|
i couldn't find a `master` branch so i just used `main`
|
|
|
Tirr
|
2024-02-15 01:05:45
|
that's the right one
|
|
|
CrushedAsian255
|
|
CrushedAsian255
it's night time in my timezone now so ttyl
|
|
2024-02-15 01:06:52
|
i really should go now, gn ttyl
|
|
|
Traneptora
|
|
CrushedAsian255
So modular tries to make a logical (if-else) model and then entropy encodes residuals?
|
|
2024-02-15 04:32:49
|
that's *kinda* how it is
|
|
2024-02-15 04:33:06
|
it walks down a decision tree based on properties
|
|
2024-02-15 04:33:14
|
and then at the end it arrives upon a predictor
|
|
2024-02-15 04:33:41
|
that predictor is used to generate that residual, which is entropy-coded
|
|
|
Demiurge
|
|
CrushedAsian255
?
|
|
2024-02-15 09:05:27
|
homebrew has a --HEAD option
|
|
2024-02-15 09:05:42
|
that might be what you want to get the latest unreleased version from git
|
|
|
CrushedAsian255
|
2024-02-15 10:20:02
|
lmao i converted a (lossy) heic into png and then into lossless jpeg xl and it's smaller
|
|
|
Demiurge
homebrew has a --HEAD option
|
|
2024-02-15 10:20:21
|
`Error: No head is defined for jpeg-xl`
|
|
2024-02-15 10:30:04
|
cjxl version 0.10.0 seems to make bigger files
|
|
2024-02-15 10:30:18
|
not by much 3.08bpp vs 3.11bpp
|
|
2024-02-15 10:30:20
|
at d1
|
|
2024-02-15 10:30:28
|
might just be a tradeoff with the higher-speed encoding
|
|
2024-02-15 10:30:33
|
9MP/s vs 7MP/s
|
|
|
jonnyawsom3
|
2024-02-15 11:12:16
|
Yeah, the difference is most noticeable on lossless
|
|
|
Demiurge
|
|
CrushedAsian255
`Error: No head is defined for jpeg-xl`
|
|
2024-02-15 11:15:45
|
https://github.com/Homebrew/homebrew-core/blob/e387641ecc81aa66b751285c1751235701f510e8/Formula/j/jpeg-xl.rb
Yeah, for some reason no one defined it in the formula. It would be extremely simple to add, however.
|
|
2024-02-15 11:17:45
|
```ruby
head do
url "https://github.com/libjxl/libjxl.git", branch: "main"
end
```
|
|
2024-02-15 11:17:54
|
That's all that needs to be added to the formula...
|
|
2024-02-15 11:20:56
|
For some reason, whoever wrote this formula didn't add that, and also for some reason, they disabled libjpegli with `-DJPEGXL_ENABLE_JPEGLI=OFF`
|
|
2024-02-15 11:21:07
|
Pretty annoying.
|
|
2024-02-15 11:21:16
|
It's enabled by default in recent versions
|
|
2024-02-15 11:21:25
|
unless explicitly disabled like that
|
|
2024-02-15 11:21:33
|
lame.
|
|
|
CrushedAsian255
|
2024-02-15 11:28:24
|
What’s jpegli do?
|
|
2024-02-15 11:28:34
|
I ended up just building from sauce
|
|
|
jonnyawsom3
|
2024-02-15 11:38:03
|
Old Jpeg encoder that beats WebP, and can also do more than 8bit
|
|
|
username
|
2024-02-15 11:42:57
|
probably better to word it as "JPEG 1" or "original JPEG" encoder rather than "old JPEG" encoder because it kinda sounds like the encoder is being called old rather than the format
|
|
|
|
afed
|
2024-02-15 11:46:38
|
jpegli is a new modern encoder for the old jpeg format
|
|
|
Demiurge
|
2024-02-15 11:49:38
|
It also has some pretty big bugs when using XYB mode
|
|
2024-02-15 11:52:56
|
Like making the image noticeably darker, not working correctly on images with an alpha channel, and by default using chroma subsampling that is incompatible with libjxl lossless transcoding
|
|
|
|
afed
|
2024-02-15 11:53:39
|
but blockiness on some colors and images still hasn't been fixed, and for low bpp mozjpeg is also mostly better
|
|
|
CrushedAsian255
|
|
afed
jpegli is a new modern encoder for the old jpeg format
|
|
2024-02-15 11:58:51
|
"JPEG 1" as in the thing you get as a '.jpg' file?
|
|
|
Demiurge
|
2024-02-15 11:59:04
|
It is a really amazing idea, and tool. Especially the XYB mode has a lot of potential if not for the huge bugs with it
|
|
|
CrushedAsian255
|
|
Old Jpeg encoder that beats WebP, and can also do more than 8bit
|
|
2024-02-15 11:59:24
|
who's surprised that it beats webp
|
|
2024-02-15 11:59:38
|
it's *WEBP*
|
|
|
Demiurge
|
2024-02-16 12:00:43
|
After forcing webp down our throats Google wants to block jpeg-xl from being used on the web. Unbelievable arrogance...
|
|
2024-02-16 12:03:09
|
Yeah, JPEG 1 is also known as JPEG from the year 1992
|
|
2024-02-16 12:03:47
|
Still better than webp
|
|
2024-02-16 12:07:41
|
But it was really important for Jim Bankowski to force everyone to permanently adopt webp as an essential part of the browser. Mandatory for using discord now even.
|
|
2024-02-16 12:09:37
|
But he spoke with "some people" for us and made the decision for us that apparently jpeg-xl is unwanted. According to "some hardware guys I spoke to, trust me."
|
|
2024-02-16 12:10:08
|
So it's totally different than webp, which was totally essential.
|
|
2024-02-16 12:12:16
|
So luckily all the useless jpeg-xl code that no one wanted or had any interest in was thankfully removed from Chromium source tree that all other browsers are based off of, thanks to Jim looking out for everyone and thinking of all our best interests for us.
|
|
2024-02-16 12:13:22
|
Can't make this shit up, it's just Google is stranger than fiction
|
|
|
Traneptora
|
|
Demiurge
Like making the image noticeably darker, not working correctly on images with an alpha channel, and by default using chroma subsampling that is incompatible with libjxl lossless transcoding
|
|
2024-02-16 06:24:42
|
it doesn't subsample the B channel anymore
|
|
|
Demiurge
|
2024-02-16 08:31:18
|
Does it still output nonsense when the input image has an alpha channel?
|
|
2024-02-16 08:32:20
|
But only in XYB mode, in regular mode it correctly throws away the alpha channel
|
|
|
Traneptora
it doesn't subsample the B channel anymore
|
|
2024-02-16 08:37:25
|
You don't need to type `--chroma_subsampling=444` every single time you use XYB mode now?
|
|
|
jonnyawsom3
|
|
Demiurge
You don't need to type `--chroma_subsampling=444` every single time you use XYB mode now?
|
|
2024-02-16 09:57:07
|
That still happens, and it seems using `-q` doesn't change the distance displayed in the output
|
|
|
Insanity
|
2024-02-16 10:43:49
|
Recently, I was studying methods on how to do chroma upsampling.
I read that AV1 uses a technique that reconstructs the channels as accurately as possible (Predicting Chroma from Luma in AV1. Luc Trudeau et al.) called CfL.
I was wondering if JXL uses the same technique or something similar.
If so, if JPEG were to only do simple upsampling without considering the luma (I guess, I don't know JPEG in such detail), we should get slightly better images from a lossless conversion from JPEG to JXL, right?
|
|
|
Oleksii Matiash
|
|
Insanity
Recently, I was studying methods on how to do chroma upsampling.
I read that AV1 uses a technique that reconstructs the channels as accurately as possible (Predicting Chroma from Luma in AV1. Luc Trudeau et al.) called CfL.
I was wondering if JXL uses the same technique or something similar.
If so, if JPEG were to only do simple upsampling without considering the luma (I guess, I don't know JPEG in such detail), we should get slightly better images from a lossless conversion from JPEG to JXL, right?
|
|
2024-02-16 10:48:14
|
Lossless means no changes at all, not doing better, not worse
|
|
|
jonnyawsom3
|
|
Insanity
Recently, I was studying methods on how to do chroma upsampling.
I read that AV1 uses a technique that reconstructs the channels as accurately as possible (Predicting Chroma from Luma in AV1. Luc Trudeau et al.) called CfL.
I was wondering if JXL uses the same technique or something similar.
If so, if JPEG were to only do simple upsampling without considering the luma (I guess, I don't know JPEG in such detail), we should get slightly better images from a lossless conversion from JPEG to JXL, right?
|
|
2024-02-16 10:50:12
|
JXL uses CfL
|
|
2024-02-16 10:51:18
|
Or rather, it does for lossy JXL encoding, for transcoding it only uses what's in the original jpeg
|
|
2024-02-16 10:51:25
|
AFAIK
|
|
|
Tirr
|
2024-02-16 10:51:44
|
jxl explicitly forbids CfL if it's chroma subsampled
|
|
|
Insanity
|
|
Or rather, it does for lossy JXL encoding, for transcoding it only uses what's in the original jpeg
|
|
2024-02-16 10:55:42
|
Yeah, but the chroma remains subsampled in the transcoded image.... So... When the image is reconstructed (for example in an image viewer app) the chroma should now be reconstructed with CfL, right?
|
|
|
Tirr
jxl explicitly forbids CfL if it's chroma subsampled
|
|
2024-02-16 10:56:21
|
So when is it used then? I only see the use in upsampling the chroma
|
|
|
jonnyawsom3
|
2024-02-16 10:59:12
|
I could be wrong, but I'm fairly sure CfL is done during encoding, and we can't decode data which doesn't exist
|
|
|
Tirr
|
2024-02-16 11:00:07
|
I'm not familiar with JPEG 1 bitstream, but I think it will require requantization if CfL transform is to be applied, which causes loss of data and cannot be reconstructed back to original jpeg
|
|
2024-02-16 11:00:42
|
CfL is mainly for encoding to jxl from pixels
|
|
|
Insanity
|
|
I could be wrong, but I'm fairly sure CfL is done during encoding, and we can't decode data which doesn't exist
|
|
2024-02-16 11:12:14
|
Probably I've confused things.
I've read about CfL only from this blog post (which talks about other topics) https://artoriuz.github.io/blog/mpv_upscaling.html as a technique to upsample chroma to the luma res
|
|
2024-02-16 11:13:52
|
Also in the paper's abstract https://arxiv.org/abs/1711.03951 the thing being discussed is the decoder part.
|
|
|
Tirr
|
2024-02-16 11:17:22
|
CfL (on the decoder side) is about recovering chroma data from luma data. it implies some chroma data is removed during encoding
|
|
2024-02-16 11:26:03
|
well, more like chroma data is "extracted" to somewhere else in a form that can be compressed better
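Very roughly, that looks something like this simplified per-block sketch (a least-squares fit, not the exact AV1 or JXL formulation): model chroma as a scaled, shifted copy of luma and only code the scale, the offset, and a (hopefully tiny) correction.
```python
# Toy CfL: predict a chroma block as alpha * (luma - mean(luma)) + offset and
# leave only small correction residuals to entropy-code.

def cfl_predict(luma, chroma):
    n = len(luma)
    lm = sum(luma) / n
    cm = sum(chroma) / n
    var = sum((l - lm) ** 2 for l in luma) or 1.0
    alpha = sum((l - lm) * (c - cm) for l, c in zip(luma, chroma)) / var
    pred = [alpha * (l - lm) + cm for l in luma]
    residual = [c - p for c, p in zip(chroma, pred)]
    return alpha, cm, residual

# Chroma that closely follows luma leaves near-zero residuals.
print(cfl_predict([10, 20, 30, 40], [5, 10, 15, 20]))  # (0.5, 12.5, [0.0, 0.0, 0.0, 0.0])
```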
|
|
|
Insanity
|
|
Tirr
CfL (on the decoder side) is about recovering chroma data from luma data. it implies some chroma data is removed during encoding
|
|
2024-02-16 11:38:52
|
Yeah, I think that happens when the chroma is subsampled, right?
|
|
|
Demiurge
|
2024-02-16 11:39:25
|
Edge detection from luma can help sharpen low res chroma channel
|
|
|
Insanity
|
2024-02-16 11:39:45
|
So, I was wondering: when the chroma is subsampled in the bitstream, does JPEG XL reconstruct the full-resolution chroma using that method, or does it perform bicubic (or similar) interpolation?
|
|
2024-02-16 11:41:50
|
What about JPEG?
|
|
|
Demiurge
|
2024-02-16 11:43:00
|
jpeg with a low res chroma channel normally looks blurry or blocky when upsampled by most decoders
|
|
2024-02-16 11:43:48
|
But a better decoder could use the luma data to help upsample the chroma channels
|
|
2024-02-16 11:45:18
|
There's no reason a jpeg decoder couldn't use a better upsampling algorithm that takes the full res luma channel into account when upscaling the crappy subsampled chroma channel
|
|
|
Tirr
|
2024-02-16 01:06:50
|
downsampled chroma is restored using rather simple linear interpolation, it has nothing to do with CfL
|
|
|
damian101
|
|
Insanity
Recently, I was studying methods on how to do chroma upsampling.
I read that AV1 uses a technique that reconstructs the channels as accurately as possible (Predicting Chroma from Luma in AV1. Luc Trudeau et al.) called CfL.
I was wondering if JXL uses the same technique or something similar.
If so, if JPEG were to only do simple upsampling without considering the luma (I guess, I don't know JPEG in such detail), we should get slightly better images from a lossless conversion from JPEG to JXL, right?
|
|
2024-02-16 03:20:04
|
Reconstructing chroma from luma when upsampling sounds like a great idea.
But that's not what's done in AV1 or JXL.
|
|
|
Traneptora
|
|
Insanity
So, I was wondering: when the chroma is subsampled in the bitstream, does JPEG XL reconstruct the full-resolution chroma using that method, or does it perform bicubic (or similar) interpolation?
|
|
2024-02-16 04:42:02
|
chroma subsampling is never used at the same time as CFL
|
|
|
damian101
|
2024-02-16 07:31:40
|
yes, not in JXL
|
|
|
Traneptora
|
|
Insanity
So, I was wondering: when the chroma is subsampled in the bitstream, does JPEG XL reconstruct the full-resolution chroma using that method, or does it perform bicubic (or similar) interpolation?
|
|
2024-02-16 07:32:49
|
when you transcode a 4:2:0 JPEG to JXL, and then transcode it back to jpeg, it'll produce the same file, so the resulting jpeg will be bit-identical to the original
|
|
2024-02-16 07:32:51
|
i.e. 4:2:0
|
|
2024-02-16 07:33:10
|
if you decode it *as* a JXL to pixels, it uses linear upsampling
|
|
2024-02-16 07:33:42
|
> Horizontal (vertical) upsampling is defined as follows: the channel width (height) is multiplied by two (but limited to the image dimensions), and every subsampled pixel B with left (top) neighbour A and right (bottom) neighbour C is replaced by two upsampled pixels with values 0.25 * A + 0.75 * B and 0.75 * B + 0.25 * C. At the image edges, the missing neighbors are replicated from the edge pixel, i.e. the mirroring procedure of 5.2 is used.
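In code, the horizontal rule quoted above looks roughly like this (a direct sketch of that text, with the missing neighbours replicated at the edges):
```python
# 2x horizontal chroma upsampling: each subsampled pixel B with left neighbour A
# and right neighbour C becomes the pair 0.25*A + 0.75*B and 0.75*B + 0.25*C.

def upsample_row_2x(row):
    out = []
    for i, b in enumerate(row):
        a = row[i - 1] if i > 0 else b              # mirror at the left edge
        c = row[i + 1] if i + 1 < len(row) else b   # mirror at the right edge
        out.append(0.25 * a + 0.75 * b)
        out.append(0.75 * b + 0.25 * c)
    return out

print(upsample_row_2x([0, 100, 200]))  # [0.0, 25.0, 75.0, 125.0, 175.0, 200.0]
```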
|
|
|
_wb_
|
2024-02-16 08:17:08
|
That's also how libjpeg-turbo does it, and how it is informatively defined in the spec.
|
|
|
Traneptora
|
2024-02-16 08:42:53
|
does the JPEG1 spec require linear upsampling?
|
|
2024-02-16 08:43:04
|
oh, informatively defined
|
|
2024-02-16 08:43:08
|
as in, suggested
|
|
|
_wb_
|
2024-02-16 11:02:52
|
The JPEG 1 spec doesn't say much at all about it iirc, it was first formalized in the JPEG XT part 1 spec
|
|
|
CrushedAsian255
|
2024-02-16 11:08:17
|
Are these specs available for free?
|
|
|
lonjil
|
2024-02-16 11:11:52
|
here is JPEG1 https://www.w3.org/Graphics/JPEG/itu-t81.pdf
|
|
|
CrushedAsian255
|
2024-02-17 12:01:04
|
is this spec good enough for use with making a decoder?
|
|
|
Traneptora
|
|
CrushedAsian255
is this spec good enough for use with making a decoder?
|
|
2024-02-17 04:09:52
|
this is an old draft pre 1st edition
|
|
2024-02-17 04:10:15
|
<#1021189485960114198> has a more recent draft to comment on
|
|
|
CrushedAsian255
|
|
Traneptora
<#1021189485960114198> has a more recent draft to comment on
|
|
2024-02-17 04:12:10
|
Do they have a GitHub?
|
|
|
Traneptora
|
2024-02-17 04:13:24
|
no, there's no public github repo of the spec
|
|
2024-02-17 04:13:30
|
you'll have to work with the drafts in that channel
|
|
2024-02-17 04:13:47
|
it was created so those of us who write decoders can comment on places where the spec is either confusing, or disagrees with libjxl
|
|
2024-02-17 04:14:11
|
libjxl was created first, so sometimes things in the spec, which attempt to document the format, don't always agree with libjxl, so in that case the spec needs to be changed
|
|
|
CrushedAsian255
|
|
Traneptora
it was created so those of us who write decoders can comment on places where the spec is either confusing, or disagrees with libjxl
|
|
2024-02-17 05:15:52
|
so just read the spec, and if there's a weird glitch cross-reference with the thread?
|
|
|
Traneptora
|
|
CrushedAsian255
so just read the spec, and if there's a weird glitch cross-reference with the thread?
|
|
2024-02-17 06:56:27
|
report spec issues in <#1021189485960114198>
|
|
|
Demiurge
|
2024-02-17 11:49:24
|
Is there any valid reason not to use a more advanced upsampling algorithm for the chroma channel when decoding JPEG with chroma subsampling?
|
|
2024-02-17 11:50:58
|
Linear interpolation seems like one of the ugliest methods aside from nearest neighbor.
|
|
|
spider-mario
|
2024-02-17 11:51:15
|
it’s not that bad
|
|
2024-02-17 11:51:22
|
nearest neighbour is _much, much_ worse
|
|
|
Demiurge
|
2024-02-17 11:52:38
|
Since a high resolution luma channel is available it makes sense to have an algorithm that takes that data into account during the upsample operation
|
|
2024-02-17 11:53:38
|
But I am not sure if there is a "correct" way since upsampling is a very subjective operation
|
|
2024-02-17 11:54:07
|
Lanczos usually looks good
|
|
|
|
veluca
|
|
Demiurge
Since a high resolution luma channel is available it makes sense to have an algorithm that takes that data into account during the upsample operation
|
|
2024-02-17 12:15:53
|
you could do something like a weighted sum of the neighbouring chroma pixels depending on how similar the target luma pixel is to the downsampled luma pixels, but not sure how slow that might be
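A rough sketch of what that could look like (purely hypothetical weighting, 1D only, not something any decoder currently does): weight each low-res chroma neighbour by how close its co-sited low-res luma is to the full-res target luma, so chroma doesn't bleed across luma edges.
```python
# Toy luma-guided chroma upsampling.

def guided_upsample(target_luma, lowres_luma, lowres_chroma, neighbours=2):
    out = []
    n = len(lowres_luma)
    for i, yl in enumerate(target_luma):
        j = min(i // 2, n - 1)                        # nearest low-res position
        lo, hi = max(0, j - neighbours), min(n, j + neighbours + 1)
        ws = [1.0 / (1.0 + abs(yl - lowres_luma[k])) for k in range(lo, hi)]
        out.append(sum(w * lowres_chroma[k] for w, k in zip(ws, range(lo, hi))) / sum(ws))
    return out

# Chroma stays near 50 on the dark side and near 120 on the bright side,
# instead of smearing across the luma edge.
print(guided_upsample([10, 10, 10, 10, 200, 200, 200, 200],
                      [10, 10, 200, 200], [50, 50, 120, 120]))
```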
|
|
|
_wb_
|
|
Demiurge
Is there any valid reason not to use a more advanced upsampling algorithm for the chroma channel when decoding JPEG with chroma subsampling?
|
|
2024-02-17 01:37:24
|
Fancy encoders need to know what the decoder will do. Allowing decoders to do it in whatever fancy way they want is worse than defining exactly what they do (even if it is something relatively simple), imo.
|
|
|
Demiurge
|
2024-02-17 05:45:44
|
But there's no way to know what encoder was used
|
|
2024-02-17 05:45:52
|
Most of the time it's far from fancy
|
|
|
Traneptora
|
2024-02-17 06:34:29
|
the point is that specifying how it's upsampled is better than just having it be a wild west of "decoder chooses"
|
|
|
Demiurge
|
2024-02-17 06:38:56
|
Well, in the context of the current state of JPEG, it's the wild west and quality is usually the last afterthought
|
|
|
lonjil
|
2024-02-17 07:07:44
|
Pretty sure all jpeg decoders use linear interpolation
|
|
|
Traneptora
|
2024-02-17 07:23:23
|
libavcodec returns 4:2:0 as-is, it linearly interpolates 4x subsampling to 2x and then returns 4:2:0
|
|
2024-02-17 07:23:30
|
dunno what it does with 3x, I'd have to double check the source
|
|
|
_wb_
|
2024-02-17 07:42:15
|
All major JPEG decoders do it the libjpeg-turbo way, so we specced that to be the way to do it.
|
|
2024-02-17 07:43:56
|
We didn't bother to allow 3x/4x upsampling since it's not something you will encounter in the wild, and it's not very useful anyway.
|
|
2024-02-17 07:48:02
|
420 is already a pretty blunt tool to do lossy compression. Subsampling even more is like knitting an intricate pattern using two baseball bats.
|
|
|
Jyrki Alakuijala
|
|
Insanity
Also in the paper's abstract https://arxiv.org/abs/1711.03951 the thing being discussed is the decoder part.
|
|
2024-02-19 01:16:20
|
I think overall CfL is a bit of a disappointment at high and medium quality -- not sure if it is much better at lower qualities, but could be -- think of a 1% density improvement with quite a bit of additional complexity
|
|
|
jonnyawsom3
|
2024-02-19 01:20:10
|
From the papers I found it was cited as 5% improvement max, 2% in video/motion
|
|
|
Jyrki Alakuijala
|
2024-02-19 01:24:57
|
that feels plausible
|
|
2024-02-19 01:25:17
|
pretty much every modeling/prediction is more efficient at lower qualities
|
|
|
fab
|
2024-02-19 02:41:29
|
i join the evergreen JPEG XL server
|
|
|
lonjil
|
2024-02-19 02:42:14
|
hello, welcome back
|
|
|
fab
|
2024-02-19 03:14:23
|
I'm reproducing complex textures with jpg Meta stock encoder
|
|
2024-02-19 03:14:47
|
This wasn't possible even with libjxl 0.9.0 effort 8
|
|
|
_wb_
|
|
fab
This wasn't possible even with libjxl 0.9.0 effort 8
|
|
2024-02-19 03:20:15
|
I don't understand what these two sentences could possibly mean. This tends to happen quite often with the statements you make. Please try to make more sense.
|
|
|
fab
|
2024-02-19 03:21:34
|
in the last eleven days on facebook this texture was difficult at least in mine test
|
|
2024-02-19 03:21:49
|
now stock jpg has it good.
|
|
2024-02-19 03:24:12
|
I made Fb JPG encoder more psychovidjal
|
|
2024-02-19 03:26:00
|
Finally i get it right
|
|
|
jonnyawsom3
|
2024-02-19 03:26:13
|
Hmmmm
|
|
|
fab
|
2024-02-19 03:28:02
|
I got even more roght I think it is sufficient
|
|
|
jonnyawsom3
|
2024-02-19 03:29:42
|
This is not a promising start
|
|
|
fab
|
|
_wb_
|
2024-02-19 03:33:10
|
Fab, I don't think anyone understands what you're trying to do here. It looks like you are just spamming random images.
|
|
|
fab
|
2024-02-19 03:33:21
|
No i finished
|
|
|
Jyrki Alakuijala
|
2024-02-19 03:33:28
|
Fab, did you try jpegli yet
|
|
|
fab
|
2024-02-19 03:36:49
|
No, but fixed now the QPs in instagram, again
|
|
2024-02-19 03:38:00
|
|
|
2024-02-19 03:38:15
|
4 seconds to show this photo on Facebook is it normal,
|
|
2024-02-19 03:38:34
|
Is very slow:
|
|
2024-02-19 03:39:55
|
I tried to imitate the jpegli texture with JPG
|
|
2024-02-19 03:43:46
|
|
|
2024-02-19 03:43:58
|
The textures I'm, getting are interesting.
|
|
2024-02-19 03:45:37
|
|
|
2024-02-19 03:45:40
|
|
|
2024-02-19 03:45:58
|
I managed to get lowe qality #,benchmarks #offtopic
|
|
2024-02-19 03:48:51
|
Is this low enough?
|
|
|
w
|
2024-02-19 04:32:58
|
who let the dogs out
|
|
|
fab
|
2024-02-19 05:19:59
|
<@379365257157804044> <@794205442175402004> file:///C:/Users/User/Desktop/2402.04760.pdf implemented this
|
|
2024-02-19 05:20:17
|
in avc, coding instagram
|
|
2024-02-19 05:20:40
|
please refresh your page as child could get'shocked
|
|
2024-02-19 05:20:55
|
Lazzarotto et al.
RESEARCH
Subjective performance evaluation of bitrate
allocation strategies for MPEG and JPEG Pleno
point cloud compression
|
|
2024-02-19 05:29:24
|
<@167023260574154752> https://www.instagram.com/p/C3gKpdDLttz/?igsh=cXZraWF3NXBzY25x
|
|
2024-02-19 05:29:29
|
I'm a boss
|
|
2024-02-19 05:29:40
|
Sueper encode made By me
|
|
2024-02-19 05:30:02
|
Hope in a hour will get an jxl copy of this
|
|
|
lonjil
|
|
w
|
2024-02-19 05:31:48
|
this is biden's america
|
|
|
fab
|
2024-02-19 05:47:51
|
|
|
2024-02-19 05:47:59
|
Instagram bullying myself
|
|
|
_wb_
|
2024-02-19 05:51:01
|
Ok @fab I think you might need a bit more mute time, if you cannot start making more sense or being less verbose in your nonsense. I don't know how you manage to be _this_ random. Does it involve psychedelic drugs?
|
|
|
jonnyawsom3
|
2024-02-19 06:00:58
|
Seems like Autism and ADHD based on others I've known in the past
|
|
|
w
|
2024-02-19 06:02:44
|
perma mute or single containment channel. make a decision already
|
|
|
Kremzli
|
2024-02-19 06:06:44
|
Might be a good randomness source for cloudflare
|
|
|
lonjil
|
|
Seems like Autsism and ADHD based on others I've known in the past
|
|
2024-02-19 06:11:01
|
I know many autistic and/or adhd people. None of them are like that. He may have such diagnoses, but they do not on their own explain his behavior.
|
|
|
fab
|
2024-02-19 06:15:18
|
No i know jon is Senior developer
|
|
|
_wb_
|
2024-02-19 07:08:20
|
<@416586441058025472> I put you on one last 1-week timeout, if the spammy randomness still continues after this one, it will be a ban. It's just too annoying to see all channels getting diluted with random stuff that makes no sense whatsoever.
|
|
|
spider-mario
|
|
lonjil
I know many autistic and/or adhd people. None of them are like that. He may have such diagnoses, but they do not on their own explain his behavior.
|
|
2024-02-19 07:28:51
|
(I might also get one next week! I managed to get a referral for an autism assessment)
|
|
|
lonjil
|
2024-02-19 07:37:01
|
Oh! Pretty rare for adults to be diagnosed.
|
|
|
_wb_
|
2024-02-19 07:46:12
|
My brother is a psychologist who specializes in adult autism.
|
|
2024-02-19 07:48:37
|
I think most comp sci geeks are somewhere on the spectrum, tbh. Myself included, probably.
|
|
|
lonjil
|
2024-02-19 07:50:33
|
I was diagnosed when I was a kid
|
|
|
Traneptora
|
2024-02-19 08:16:51
|
ye getting diagnosed as a kid is very helpful
|
|
2024-02-19 08:17:01
|
for me it was back when they were really starting to do that
|
|
|
Wolfbeast
|
|
fab
<@379365257157804044> <@794205442175402004> file:///C:/Users/User/Desktop/2402.04760.pdf implemented this
|
|
2024-02-19 09:09:03
|
Not sure what you're going on about, but please refrain from @ mentioning me in the future about random subjective evaluation papers.
|
|
|
w
|
2024-02-19 09:59:23
|
so that's the one that does it huh
|
|
|
CrushedAsian255
|
2024-02-19 10:42:07
|
does using progressive encoding decrease coding efficiency significantly? or is it negligible
|
|
|
HCrikki
|
2024-02-19 10:51:00
|
fast decoding? i did a few test encodes before but results seemed odd
|
|
2024-02-19 10:52:36
|
using a folder of alpha pngs from a game (pngs at 679mb)
fastdecoding0 = 360mb
fastdecoding1 1390mb
fastdecoding4 614mb
|
|
|
CrushedAsian255
|
2024-02-19 10:53:26
|
fast decoding?
|
|
2024-02-19 10:53:48
|
is that the thing where it starts decoding a 1:32 then 1:8 then 1:4 etc?
|
|
|
HCrikki
|
2024-02-19 10:57:15
|
afaik jxl does progressive only, fast decoding is a tradeoff on top of that
|
|
|
damian101
|
|
HCrikki
afaik jxl does progressive only, fast decoding is a tradeoff on top of that
|
|
2024-02-20 12:21:45
|
you can make the decoding more progressive, comes with an efficiency penalty
|
|
|
HCrikki
|
2024-02-20 12:45:33
|
is there a predictable deviation in the penalty? the numbers i got in that one test then just seemed off
|
|
|
damian101
|
|
HCrikki
using a folder of alpha pngs from a game (pngs at 679mb)
fastdecoding0 = 360mb
fastdecoding1 1390mb
fastdecoding4 614mb
|
|
2024-02-20 01:05:21
|
what are these numbers?
|
|
2024-02-20 01:05:28
|
what is fastdecoding in the context of JXL
|
|
2024-02-20 01:05:33
|
there is no such option
|
|
|
HCrikki
|
2024-02-20 01:11:58
|
i used jxl batch converter
|
|
|
Quackdoc
|
|
there is no such option
|
|
2024-02-20 01:16:03
|
faster_decode in cjxl, it makes decoding the final jxl significantly faster in exchange for a larger filesize
|
|
|
damian101
|
|
Quackdoc
faster_decode in cjxl, it makes decoding the final jxl significantly faster in exchange for a larger filesize
|
|
2024-02-20 01:16:42
|
`--help --verbose` does not list that option
|
|
|
Quackdoc
|
2024-02-20 01:16:48
|
-v -v -v -v -v life
|
|
2024-02-20 01:16:55
|
i dunno how much you actually need
|
|
2024-02-20 01:17:09
|
2
|
|
2024-02-20 01:17:12
|
you need 2
|
|
|
damian101
|
|
Quackdoc
|
|
WTF
|
|
2024-02-20 01:18:23
|
it is uh... kinda excessive
```ps
➜ ~ cjxl -h -v | wc -l
JPEG XL encoder v0.9.2 41b8cdab [AVX2,SSE4,SSE2]
74
➜ ~ cjxl -h -v -v | wc -l
JPEG XL encoder v0.9.2 41b8cdab [AVX2,SSE4,SSE2]
114
➜ ~ cjxl -h -v -v -v | wc -l
JPEG XL encoder v0.9.2 41b8cdab [AVX2,SSE4,SSE2]
138
➜ ~ cjxl -h -v -v -v -v | wc -l
JPEG XL encoder v0.9.2 41b8cdab [AVX2,SSE4,SSE2]
170
➜ ~ cjxl -h -v -v -v -v -v | wc -l
JPEG XL encoder v0.9.2 41b8cdab [AVX2,SSE4,SSE2]
170
```
|
|
|
damian101
|
|
Quackdoc
-v -v -v -v -v life
|
|
2024-02-20 01:18:32
|
double verbose is the max apparently
|
|
|
Quackdoc
|
2024-02-20 01:18:38
|
no
|
|
2024-02-20 01:18:48
|
quad verbose
|
|
|
damian101
|
|
Quackdoc
quad verbose
|
|
2024-02-20 01:19:06
|
same output as double
|
|
|
Quackdoc
|
2024-02-20 01:19:22
|
I just showed the wordcount...
|
|
2024-02-20 01:19:37
|
well line count
|
|
|
damian101
|
2024-02-20 01:20:07
|
then it's different for me?
|
|
|
Quackdoc
|
2024-02-20 01:20:19
|
might be version diff
|
|
|
damian101
|
2024-02-20 01:20:26
|
must be...
|
|
|
Demiurge
|
2024-02-20 01:20:29
|
Psychedelics don't cause people to act like that either
|
|
|
Quackdoc
|
2024-02-20 01:20:59
|
|
|
2024-02-20 01:21:16
|
double verbose vs quad verbose
|
|
|
damian101
|
|
Quackdoc
it is uh... kinda excessive
```ps
➜ ~ cjxl -h -v | wc -l
JPEG XL encoder v0.9.2 41b8cdab [AVX2,SSE4,SSE2]
74
➜ ~ cjxl -h -v -v | wc -l
JPEG XL encoder v0.9.2 41b8cdab [AVX2,SSE4,SSE2]
114
➜ ~ cjxl -h -v -v -v | wc -l
JPEG XL encoder v0.9.2 41b8cdab [AVX2,SSE4,SSE2]
138
➜ ~ cjxl -h -v -v -v -v | wc -l
JPEG XL encoder v0.9.2 41b8cdab [AVX2,SSE4,SSE2]
170
➜ ~ cjxl -h -v -v -v -v -v | wc -l
JPEG XL encoder v0.9.2 41b8cdab [AVX2,SSE4,SSE2]
170
```
|
|
2024-02-20 01:21:40
|
126 lines with double verbose for me
|
|
|
Quackdoc
|
2024-02-20 01:21:57
|
version diff I guess
|
|
|
gb82
|
|
_wb_
Ok @fab I think you might need a bit more mute time, if you cannot start making more sense or being less verbose in your nonsense. I don't know how you manage to be _this_ random. Does it involve psychedelic drugs?
|
|
2024-02-20 08:25:41
|
he was just banned from the AV1 server, FYI. seems people are cracking down
|
|
2024-02-20 08:35:08
|
the mod log is public
|
|
|
Jyrki Alakuijala
|
|
lonjil
I know many autistic and/or adhd people. None of them are like that. He may have such diagnoses, but they do not on their own explain his behavior.
|
|
2024-02-20 09:22:30
|
let's not diagnose people here -- let's try to tolerate all people, but also have borders to protect our ability to communicate
|
|
|
_wb_
|
2024-02-20 09:34:24
|
Yes. In one week fab can speak again. If he stops spamming, he can stay. Otherwise, ban.
|
|
|
lonjil
|
|
Jyrki Alakuijala
let's not diagnose people here -- let's try to tolerate all people, but also have borders to protect our ability to communicate
|
|
2024-02-20 09:41:18
|
I did not diagnose anyone, did I?
|
|
|
spider-mario
|
2024-02-20 11:27:32
|
I understood your comment as being more about P(this behaviour|diagnoses) than about P(diagnoses|this behaviour)
|
|
|
jonnyawsom3
|
2024-02-20 11:31:23
|
Whatever the case, we've given him plenty of chances to learn
|
|
|
lonjil
|
|
spider-mario
I understood your comment as being more about P(this behaviour|diagnoses) than about P(diagnoses|this behaviour)
|
|
2024-02-20 11:38:20
|
yes
|
|
|
Jyrki Alakuijala
|
|
lonjil
I did not diagnose anyone, did I?
|
|
2024-02-20 12:48:19
|
What I tried to say is: 'let's not try to associate or evaluate neurodevelopmental disorders in relation to any of the chat participants'.
If Jon is banning non-topic posters too frequently or too gently, possibly communicate in private with him about it so that he can consider everyone's needs. From my own viewpoint he has been doing this role well so far.
|
|
|
DZgas Ж
|
|
DZgas Ж
Unfortunately, the wave-type artifacts remained identical, are caused directly by a red object that is nearby
|
|
2024-02-20 12:57:13
|
I'm a little out of the loop here. Can someone tell me if anyone has found interesting artifacts and bugs in JXL that still haven't been fixed, or that weren't shown to the JXL developers properly? I would be interested to see and recreate them.
(today I plan to return, take the latest JXL build and compress my good old pictures to search for artifacts that shouldn't be there)
|
|
|
Jyrki Alakuijala
|
2024-02-20 12:57:45
|
gray images with red-green noise turn pink -- I haven't fixed it yet
|
|
|
DZgas Ж
|
2024-02-20 12:58:21
|
I remember this artifact
|
|
|
Jyrki Alakuijala
|
2024-02-20 12:58:31
|
it is unforgettable 😄
|
|
|
DZgas Ж
|
2024-02-20 01:01:27
|
I find my best image for searching for artifacts ...... to tell the truth, it's already very old, and I've got a lot of new music. 😉 so I'm going to rebuild it with new covers. And then I'll do pixel hunting
|
|
|
Jyrki Alakuijala
|
2024-02-20 01:02:49
|
This has proven to be very helpful in the past! You are one of the unsung heroes of JPEG XL image quality 😄
|
|
|
DZgas Ж
|
2024-02-20 01:03:53
|
In fact, since then I have been able to compare hundreds of thousands of images. But here there is a problem - separately encoded images may not have the artifacts that appear in such giant groups at all... And I can only operate with such a large image exclusively using JPEG Baseline - otherwise it's the death of RAM and CPU
|
|
|
Jyrki Alakuijala
This has proven to be very helpful in the past! You are one of the unsung heroes of JPEG XL image quality 😄
|
|
2024-02-20 01:05:47
|
I am excited to make JXL even better, so that I can proudly write everywhere that it is the best codec
|
|
|
yoochan
|
2024-02-20 01:51:29
|
you can't yet ? 😄
|
|
|
DZgas Ж
|
2024-02-20 02:44:22
|
well
|
|
2024-02-20 02:47:29
|
which parameter is the most relevant and popular for Jpeg XL compression? -e 7 -d 0.5 ?
|
|
2024-02-20 02:52:02
|
Wow! Where are the artifacts <:monkaMega:809252622900789269>
|
|
2024-02-20 02:56:46
|
It's okay, I found artifacts . 👍
|
|
2024-02-20 03:02:38
|
there are no artifacts on the Luma plane at all, everything is perfect, everything I observe is related to colors
|
|
|
yoochan
|
2024-02-20 03:08:23
|
how do you tell them apart? do you split planes? or do you have an experienced eye?
|
|
|
DZgas Ж
|
|
yoochan
how do you tell them apart? do you split planes? or do you have an experienced eye?
|
|
2024-02-20 03:12:16
|
Very experienced eye. But in order not to compare all 352 photos, I make myself a field based on the pixel difference and then look at all the images in it
|
|
|
yoochan
|
2024-02-20 03:13:26
|
you compute the pixel difference in XYB or RGB?
|
|
|
DZgas Ж
|
2024-02-20 03:13:32
|
If I see that the artifact is caused by things that are not related to compression, I will put a check mark on the image. Then I will work with it in more detail to correctly describe the artifact
|
|
|
yoochan
you compute the pixel difference in XYB or RGB?
|
|
2024-02-20 03:13:44
|
RGB
|
|
|
yoochan
|
|
DZgas Ж
If I see that the artifact is caused by things that are not related to compression, I will put a check mark on the image. Then I will work with it in more detail to correctly describe the artifact
|
|
2024-02-20 03:14:19
|
cool ! I can't wait to see improvements 🙂
|
|
|
DZgas Ж
|
2024-02-20 04:24:25
|
new line
|
|
2024-02-20 04:30:57
|
orig d0.5 1.0 1.5 2.0
|
|
2024-02-20 04:39:07
|
The most noticeable thing: these are the same artifacts that I observed a few years ago. But! In the images where they used to be, they are not there now - I can only applaud... so here are the new images where they do appear.
It is unclear why there are areas with transitions to colors that are not there
|
|
2024-02-20 04:40:17
|
although I would say that this is an incomprehensible discoloration. And in picture number 2 from the top, there was obviously no need to remove identical adjacent colors
|
|
2024-02-20 04:41:03
|
|
|
2024-02-20 04:44:06
|
In image number 8, even at quality d0.5, the artifact is very noticeable even without pointing it out
|
|
2024-02-20 04:45:30
|
bottom & left
*ghostly* red color
|
|
2024-02-20 04:49:58
|
In the last image, the most noticeable effect is the fading of the red color, while it stays very sharply separated at the pixel level; there should be a reduction so that it becomes smoother, more blurred, more compressed, but here....
|
|
2024-02-20 04:57:15
|
<:JXL:805850130203934781>
|
|
|
Soni
|
2024-02-20 05:07:59
|
JXL supports FIR?
|
|
2024-02-20 05:12:06
|
finite impulse response
|
|
|
lonjil
|
|
Soni
|
|
Demiurge
|
2024-02-21 08:39:49
|
What would that even mean in this context
|
|
|
Jyrki Alakuijala
|
|
Soni
JXL supports FIR?
|
|
2024-02-21 09:27:51
|
what would it do if it was supported?
|
|
|
yoochan
|
2024-02-21 09:31:19
|
wouldn't a 2d FIR be a simple convolution filter ?
|
|
|
Jyrki Alakuijala
|
|
DZgas Ж
In the last image, the most noticeable effect is the fading of the red color, yet it is cut off very sharply at pixel boundaries, although any reduction should make it more smoothed, blurred, compressed - but here....
|
|
2024-02-21 09:31:25
|
what is the worst thing (or worst 5-10 things) that is happening around d1.0 ?
|
|
|
DZgas Ж
|
|
Jyrki Alakuijala
what is the worst thing (or worst 5-10 things) that is happening around d1.0 ?
|
|
2024-02-21 11:17:04
|
Well, this is the most visible and the worst one. The color disappears where it shouldn't. No other codec has a similar problem. It's even hard for me to say what causes it. Instead of carrying the color coefficient over, it just erases it.
If you subtract the luma channel (Y), all the color artifacts become very noticeable. Everything is just about perfect for the brightness channel, not a single artifact, distortion is minimal -- on the color channels (here in a (Y)CrCb representation, but it is clearly visible either way) there is some big problem with the color blocks
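As a rough illustration of looking at luma and chroma separately, here is a minimal sketch that splits a decoded image into Y/Cb/Cr planes with Pillow; the file names are placeholders, and this is not the exact tooling used above.

```python
# Minimal sketch: split an image into Y/Cb/Cr planes so the luma and
# chroma channels can be inspected separately (file name is a placeholder).
from PIL import Image

img = Image.open("decoded.png").convert("YCbCr")
y, cb, cr = img.split()
y.save("luma.png")        # the plane reported as near-perfect in these tests
cb.save("chroma_cb.png")
cr.save("chroma_cr.png")
```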
|
|
2024-02-21 11:17:21
|
|
|
2024-02-21 11:19:43
|
The luma channel (Y) is just perfect. Not a single visible block seam, not a single artifact.
|
|
2024-02-21 11:45:02
|
> No other codec has a similar problem.
the funny thing is that I found similar problems in the original JPEG - this may be important
|
|
2024-02-21 11:56:54
|
only unlike JXL, JPEG has the same problems on the luma channel too - artifacts...
for JXL I can assume that in this case the problem lies in the "weights" used to allocate information between color and luma
|
|
2024-02-21 11:57:45
|
Perfect luma and artifacted chroma
|
|
|
Soni
|
2024-02-21 12:02:56
|
yes, convolution filter for image generation purposes
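Just to illustrate that point (nothing JXL-specific): a 2D FIR filter is a convolution with a finite kernel, so a minimal sketch could look like this, assuming NumPy, Pillow and SciPy, a placeholder file name and a simple 3x3 box kernel.

```python
# Minimal sketch: a 2D FIR filter is just a convolution with a finite kernel.
# Here a 3x3 box blur is applied to a grayscale image (file name is a placeholder).
import numpy as np
from PIL import Image
from scipy.signal import convolve2d

img = np.asarray(Image.open("input.png").convert("L"), dtype=np.float32)
kernel = np.full((3, 3), 1.0 / 9.0)               # finite impulse response: 9 taps
out = convolve2d(img, kernel, mode="same", boundary="symm")
Image.fromarray(np.clip(out, 0, 255).astype(np.uint8)).save("filtered.png")
```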
|
|
|
Jyrki Alakuijala
|
|
DZgas Ж
> No other codec has a similar problem.
the funny thing is that I found similar problems in the original JPEG - this may be important
|
|
2024-02-21 12:11:04
|
I can add more data on the color channels easily -- but then typical photographs compress worse, perhaps up to 5% worse
|
|
2024-02-21 12:11:41
|
perhaps there could be a flag in the compression options for when viewing is intended from closer than a 1000-pixel distance
|
|
|
DZgas Ж
|
|
Jyrki Alakuijala
I can add more data on the color channels easily -- but then typical photographs compress worse, perhaps up to 5% worse
|
|
2024-02-21 12:19:12
|
the problem is rare ... too rare to change all the images over ... everything else looks fine. Perhaps it's about some kind of quality determination mechanism
|
|
2024-02-21 12:20:39
|
A distinctive feature of the artifact is that the affected area is surrounded by very similar colors
|
|
|
jonnyawsom3
|
|
Jyrki Alakuijala
perhaps there could be a flag in the compression options for when viewing is intended from closer than a 1000-pixel distance
|
|
2024-02-21 12:23:13
|
I assumed that would be equivalent to lowering the distance parameter, no? I usually go with `-d 0.3` for 'truly' visually lossless even at pixel-level inspection, not counting a flicker test, which reveals shifts in colour shades
|
|
|
DZgas Ж
|
|
Jyrki Alakuijala
I can add more data on the color channels easily -- but then typical photographs compress worse, perhaps up to 5% worse
|
|
2024-02-21 12:23:50
|
in addition, increasing the quality does not make the artifact disappear.
|
|
|
Jyrki Alakuijala
|
|
DZgas Ж
in addition, increasing the quality does not make the artifact disappear.
|
|
2024-02-22 09:01:08
|
do you mean that the artifacts go away and don't appear any more?
|
|
|
diskorduser
|
2024-02-22 09:03:04
|
I think he meant that increasing the quality does not remove the artifact.
|
|
|
yoochan
|
2024-02-22 09:30:47
|
I read it the same way as <@263309374775230465>
|
|
|
DZgas Ж
|
|
diskorduser
I think he meant that increasing the quality does not remove the artifact.
|
|
2024-02-22 10:55:14
|
Yes
|
|
|
Jyrki Alakuijala
|
2024-02-22 11:54:51
|
the artefacts are present even with d0.5 ? -- what about d0.1 ?
(I'm trying to learn about this with an open mind, not just being nihilist)
|
|
|
DZgas Ж
|
|
Jyrki Alakuijala
the artefacts are present even with d0.5 ? -- what about d0.1 ?
(I'm trying to learn about this with an open mind, not just being nihilist)
|
|
2024-02-23 08:43:23
|
🧐
|
|
2024-02-23 08:45:48
|
I should explain how exactly I make these images:
I take the original RGB,
I take the Y (luma) component,
and I compute the difference of the two images.
In some cases you can use GIMP or Paint.NET for this (as I did in the images above https://discord.com/channels/794206087879852103/794206170445119489/1209828125450702908 )
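In case anyone wants to try the same recipe without an image editor, here is a minimal sketch of the idea in Python, assuming BT.601 luma weights and placeholder file names (the exact weights GIMP or Paint.NET use may differ).

```python
# Minimal sketch of the "RGB minus luma" trick: subtract the Y (luma)
# component from each RGB channel so mostly chroma information remains.
# Assumes BT.601 luma weights; the file name is a placeholder.
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("decoded.png").convert("RGB"), dtype=np.float32)
y = rgb @ np.array([0.299, 0.587, 0.114], dtype=np.float32)   # per-pixel luma

chroma = rgb - y[..., None]                            # remove brightness, keep color offsets
vis = np.clip(chroma + 128, 0, 255).astype(np.uint8)   # recenter around mid-gray for viewing
Image.fromarray(vis).save("rgb_minus_luma.png")
```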
|
|
2024-02-23 08:48:36
|
it's funny, but at very strong compression there are no such artifacts; everything looks clean
|
|
2024-02-23 08:49:38
|
although some "collapses" from one quality level to the next are visible
|
|
|
yoochan
|
2024-02-23 08:52:58
|
<@226977230121598977>, could you show the exact same process applied to the original image, I struggle to see what I should see 😄
|
|
2024-02-23 08:53:22
|
interesting animation by the way
|
|
|
DZgas Ж
|
|
yoochan
<@226977230121598977>, could you show the exact same process applied to the original image, I struggle to see what I should see 😄
|
|
2024-02-23 09:03:57
|
|
|
|
yoochan
|
2024-02-23 09:05:17
|
I meant, a still picture at d=0.0 ? to compare side by side with the animation
|
|
|
DZgas Ж
|
|
yoochan
I meant, a still picture at d=0.0 ? to compare side by side with the animation
|
|
2024-02-23 09:06:37
|
I think the jump back at the end of the animation from d2 to d0.01 is very illustrative
|
|
|
yoochan
|
2024-02-23 09:07:35
|
yes, but d0.01 is not the original
|
|
|
DZgas Ж
|
2024-02-23 09:07:50
|
take the original
|
|
|
yoochan
|
2024-02-23 09:08:14
|
cool perfect 🙂
|
|
|
DZgas Ж
|
2024-02-23 09:08:29
|
<:PepeOK:805388754545934396> 👍
|
|
|
yoochan
|
2024-02-23 09:24:11
|
I definitely don't have enough visual acuity to exactly spot the issue... I see differences but I'm unable to state exactly what changed 😄
|
|
|
Tirr
|
2024-02-24 01:48:50
|
jxl-oxide WASM demo is updated with the release of v0.7.0 <https://jxl-oxide.tirr.dev/demo/index.html>
1x zoom mode is added, and HDR images are tone mapped to sRGB in Firefox.
|
|
|
Oleksii Matiash
|
|
Tirr
jxl-oxide WASM demo is updated with the release of v0.7.0 <https://jxl-oxide.tirr.dev/demo/index.html>
1x zoom mode is added, and HDR images are tone mapped to sRGB in Firefox.
|
|
2024-02-24 02:30:41
|
I'm not a fan of HDR, so just curious, why to sRGB?
|
|
|
Tirr
|
2024-02-24 02:31:33
|
sRGB is guaranteed to be supported, that's all.
|
|
|
damian101
|
2024-02-24 02:33:21
|
sRGB is guaranteed to be ambiguous
|
|
2024-02-24 02:34:17
|
Firefox should tonemap by default to gamma 2.2 IMO, it's the most widely used EOTF in displays.
|
|
|
Oleksii Matiash
|
2024-02-24 02:43:02
|
Thank you
|
|
|
Tirr
|
2024-02-24 02:45:49
|
to be clear: jxl-oxide-wasm maps to the BT.709 gamut, D65 white point, sRGB transfer function, and relative rendering intent, then adds an iCCP chunk describing the color encoding to the resulting PNG
|
|
2024-02-24 02:46:45
|
then it's up to Firefox whether it maps to display color space
|
|
|
Quackdoc
|
|
sRGB is guaranteed to be ambiguous
|
|
2024-02-24 02:47:32
|
wel "sRGB" whether or not its srgb or 2.2 who knows xD
|
|
|
Firefox should tonemap by default to gamma 2.2 IMO, it's the most widely used EOTF in displays.
|
|
2024-02-24 02:48:01
|
2.2 vs sRGB is 50/50, there is no real majority one way or the other; if there was, sRGB would be much less of a pain
|
|
|
spider-mario
|
2024-02-24 02:51:51
|
I think I’ve seen it argued that 2.2 interpreted as sRGB (dark tones slightly too light) is better than sRGB interpreted as 2.2 (dark tones crushed)
|
|
2024-02-24 02:52:42
|
and that 2.2 tagged as sRGB is therefore the winning combo
|
|
2024-02-24 02:52:46
|
or something like that
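For a concrete feel of that argument, here is a minimal sketch comparing the standard sRGB EOTF with a pure 2.2 power curve at a few dark code values; the specific values are only illustrative.

```python
# Minimal sketch: compare the sRGB EOTF with a pure 2.2 power curve on dark
# code values, to see why mixing them up mostly affects the shadows.
import numpy as np

def srgb_to_linear(v):
    # standard sRGB EOTF (piecewise: linear toe + 2.4 power segment)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def gamma22_to_linear(v):
    return v ** 2.2

for code in (0.02, 0.05, 0.10, 0.50):
    s, g = srgb_to_linear(code), gamma22_to_linear(code)
    print(f"code {code:.2f}: sRGB -> {s:.5f}, gamma 2.2 -> {g:.5f}")

# Near black, the 2.2 curve yields less linear light than sRGB, so content
# mastered for a 2.2 display but decoded as sRGB comes out slightly lifted,
# while sRGB content shown on a 2.2 display gets its darks crushed.
```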
|
|
|
Quackdoc
|
2024-02-24 02:55:12
|
I suppose that's true, but realistically making something like this configurable shouldn't be that hard, hmmm
|
|
|
damian101
|
|
Quackdoc
2.2 vs sRGB is 50/50, there is no real majority one way or the other; if there was, sRGB would be much less of a pain
|
|
2024-02-24 03:14:46
|
I doubt that, actually...
All 5 smartphones I tried used gamma 2.2, and of ~8 PC monitors (some very old, many newer), only one used sRGB gamma...
|
|
|
spider-mario
I think I’ve seen it argued that 2.2 interpreted as sRGB (dark tones slightly too light) is better than sRGB interpreted as 2.2 (dark tones crushed)
|
|
2024-02-24 03:15:05
|
true
|
|
|
spider-mario
and that 2.2 tagged as sRGB is therefore the winning combo
|
|
2024-02-24 03:15:26
|
why not just tag gamma 2.2 as gamma 2.2 💀
|
|
2024-02-24 03:15:38
|
BT.470 System M
|
|
2024-02-24 03:15:52
|
most applications don't convert gamma anyway...
|
|
|
spider-mario
|
|
why not just tag gamma 2.2 as gamma 2.2 💀
|
|
2024-02-24 03:16:27
|
because then, if the app converts to sRGB because it thinks that’s what the display is using, but it’s actually 2.2, you’re back to the exact same problem as with sRGB-tagged sRGB
|
|
|
damian101
|
|
spider-mario
because then, if the app converts to sRGB because it thinks that’s what the display is using, but it’s actually 2.2, you’re back to the exact same problem as with sRGB-tagged sRGB
|
|
2024-02-24 03:18:08
|
that would make not tagging transfer at all a better option compared to tagging incorrectly
|
|
|
spider-mario
|
2024-02-24 04:40:54
|
I don’t know, it’s still in the right ballpark, just with a small safety margin
|
|
2024-02-24 04:41:18
|
and if untagged is anyway interpreted as sRGB, I’m not sure what it achieves
|
|
2024-02-24 04:41:48
|
if it’s not interpreted as sRGB, it seems like it might be interpreted as something worse
|
|
|
Oleksii Matiash
|
|
spider-mario
and if untagged is anyway interpreted as sRGB, I’m not sure what it achieves
|
|
2024-02-24 04:53:02
|
Correct display on wide gamut monitors, if I understood the question
|
|
|
damian101
|
|
spider-mario
and if untagged is anyway interpreted as sRGB, I’m not sure what it achieves
|
|
2024-02-24 05:02:53
|
only untagged transfer
|
|
|
spider-mario
if it’s not interpreted as sRGB, it seems like it might be interpreted as something worse
|
|
2024-02-24 05:03:43
|
if untagged, it's interpreted as targeting the display transfer, so it's left unmodified
|
|
2024-02-24 05:04:46
|
The thing is, system-wide color management will come.
|
|
2024-02-24 05:04:52
|
It already exists on some platforms.
|
|
2024-02-24 05:05:47
|
And browsers might not always convert to sRGB transfer by default anyway
|
|
2024-02-24 05:07:10
|
If Firefox started targeting gamma 2.2 instead of sRGB, untagged gamma 2.2 would be the far better choice compared to sRGB-tagged gamma 2.2
|
|
|
Oleksii Matiash
|
2024-02-24 05:09:43
|
Blurred to avoid moire
|
|
|
w
|
2024-02-24 05:10:46
|
isn't the point of the image profile such that the target display gamma doesn't matter
|
|
2024-02-24 05:10:58
|
you don't want it to do nothing
|
|
2024-02-24 05:11:23
|
otherwise you end up with oversaturation and too much contrast and so on
|
|
2024-02-24 05:13:18
|
and if the display isn't profiled then it's all out the window anyway
|
|
2024-02-24 05:15:16
|
no idea what I walked into <:trollface:834012731710505011>
|
|
|
damian101
|
|
w
isn't the point of the image profile such that the target display gamma doesn't matter
|
|
2024-02-24 05:26:47
|
Yes, but on most systems out there currently, the browser does not know the display EOTF, nothing in the system does, nor is there a way for the user to set that system-wide.
|
|
2024-02-24 05:26:57
|
Will change, though.
|
|
|
w
|
2024-02-24 05:29:55
|
the content should stay sRGB though
|
|
2024-02-24 05:30:08
|
sounds like you want it to default target to something else
|
|
|
damian101
|
|
w
sounds like you want it to default target to something else
|
|
2024-02-24 05:46:08
|
I want color management to simply target the actual display EOTF.
|
|
2024-02-24 05:46:14
|
Which is not possible on most systems.
|
|
|
w
|
2024-02-24 05:46:49
|
just calibrate your monitor
|
|
2024-02-24 05:46:57
|
or get a mac
|
|
2024-02-24 05:47:19
|
gamma is subjective anyway
|
|
|
damian101
|
|
w
just calibrate your monitor
|
|
2024-02-24 05:53:10
|
to what? sRGB?
|
|
|
w
|
2024-02-24 05:53:19
|
to anything you want, you just need to profile it
|
|
|
damian101
|
|
w
to anything you want, you just need to profile it
|
|
2024-02-24 05:53:46
|
Most applications do not convert transfer curves
|
|
2024-02-24 05:53:51
|
Firefox converts to sRGB
|
|
2024-02-24 05:53:56
|
Most monitors are Gamma 2.2
|
|
|
w
|
2024-02-24 05:54:16
|
and most video displays are 2.4
|
|
2024-02-24 05:54:27
|
mine are all 2.4
|
|
|
damian101
|
2024-02-24 05:54:39
|
cool
|
|
2024-02-24 05:54:47
|
but crushes blacks on most web images
|
|
2024-02-24 05:55:01
|
especially if you use Firefox
|
|
|
w
|
2024-02-24 05:55:08
|
black point scaling
|
|
|
damian101
|
2024-02-24 05:55:11
|
sRGB to gamma 2.2 is already terrible
|
|
|
w
|
2024-02-24 05:57:20
|
where does it convert to sRGB
|
|
2024-02-24 05:58:02
|
actually what do you even mean by that
|
|
2024-02-24 05:58:23
|
that makes no sense
|
|
2024-02-24 06:00:54
|
I can notice the sRGB image appearing lighter than in a non-color-managed app
|
|
2024-02-24 06:01:08
|
it's not much so idk how crushed blacks comes to be
|
|
|
damian101
|
|
w
where does it convert to sRGB
|
|
2024-02-24 06:01:13
|
if an image is tagged as using gamma 2.2, Firefox will not simply display it, but will adjust the transfer curve for display on an sRGB display instead of a gamma 2.2 one
|
|
|
w
|
2024-02-24 06:01:52
|
yeah, that's right, since it's assuming the display is sRGB
|
|
|
damian101
|
|
w
it's not much so idk how crushed blacks comes to be
|
|
2024-02-24 06:02:08
|
For images that use the full SDR dynamic range it's often not very noticeable.
|
|
|
w
it's not much so idk how crushed blacks comes to be
|
|
2024-02-24 06:03:58
|
this is just sRGB vs gamma 2.2
|
|
|
w
|
2024-02-24 06:04:24
|
I still don't understand what you mean
|
|
|
damian101
|
|
this is just sRGB vs gamma 2.2
|
|
2024-02-24 06:05:03
|
if you view these images on a gamma 2.2 display, the sRGB one will have very crushed blacks
|
|
2024-02-24 06:05:12
|
while the gamma 2.2 one looks natural
|
|
2024-02-24 06:06:15
|
a gamma 2.2 image on a gamma 2.4 display will have crushed blacks as well, though not as severely; the difference will be quite noticeable throughout the luminance range, whereas sRGB and gamma 2.2 are basically identical except in the very dark range
|
|
|
w
|
2024-02-24 06:08:37
|
if the image is originally sRGB it should show up as sRGB on whatever monitor
|
|
|
damian101
|
|
w
if the image is originally sRGB it should show up as sRGB on whatever monitor
|
|
2024-02-24 06:09:13
|
it needs to be transcoded to a different transfer if the display is not sRGB
|
|
|
w
|
2024-02-24 06:09:31
|
yeah?
|
|
2024-02-24 06:10:02
|
hey that's the basis for hdr
|
|
|
damian101
|
|
Quackdoc
|
|
I doubt that, actually...
All 5 smartphones I tried used gamma 2.2, and of ~8 PC monitors (some very old, many newer), only one used sRGB gamma...
|
|
2024-02-24 11:28:13
|
both my S9+ and Axon 7 are sRGB
|
|
|
damian101
|
2024-02-24 11:28:27
|
my S10 is gamma 2.2
|
|
|
Quackdoc
|
2024-02-24 11:28:29
|
all of my monitors but 1 are too
|
|
|
my S10 is gamma 2.2
|
|
2024-02-24 11:29:39
|
yeah, my brother's S7 was too, but his Fold iirc is sRGB; not sure, since it doesn't have the best color/curve tracking to begin with
|
|
|
kkourin
|
|
Firefox converts to sRGB
|
|
2024-02-24 11:36:38
|
it will use the display profile's TRC if available
|
|
2024-02-24 11:42:02
|
but I assume you're referring to when there is no display profile?
|
|
|
damian101
|
|
kkourin
but I assume you're referring to when there is no display profile?
|
|
2024-02-24 11:45:04
|
yes...
Where would the display transfer be read from?
System-wide color management? I think that's a thing under macOS, and soon under Windows, but not yet.
|
|
|
kkourin
|
|
yes...
Where would the display transfer be read from?
System-wide color management? I think that's a thing under macOS, and soon under Windows, but not yet.
|
|
2024-02-24 11:49:56
|
oh I just meant that when you were responding to w it seemed like they were assuming that you had a display profile installed
|
|
|
damian101
|
2024-02-24 11:50:10
|
oh
|
|
2024-02-24 11:50:22
|
well, I don't even know how that would work...
|
|
2024-02-24 11:50:47
|
How can I set what kind of display Firefox targets?
|
|