|
Scope
|
|
_wb_
|
2021-06-07 07:42:25
|
That's a good improvement looking at pnorm, not sure to what extent pnorm is reliable for that kind of image.
|
|
2021-06-07 07:43:38
|
Btw, it might also be good to use the delta palette for lossless; atm in lossless, delta entries aren't used at all, but presumably in some cases it could be beneficial to use them
|
|
2021-06-07 08:17:21
|
Just one image though, so doesn't say that much
|
|
2021-06-07 08:18:16
|
(but I am quite sure that kind of change is unlikely to be harmful, and can basically only help)
|
|
|
Scope
|
2021-06-07 08:20:19
|
Also, why exactly 1%?
But when I have time, it would be interesting to compare these changes in other images, and <@!461421345302118401> might also be interested
|
|
|
_wb_
|
2021-06-07 08:24:07
|
Yeah I think probably with 0.1% it works even better
|
|
2021-06-07 08:24:27
|
But need a large corpus to test then
|
|
2021-06-07 08:25:37
|
(also would be fun to make a heuristic to add custom deltas, instead of relying on the default ones...)
|
|
|
Scope
|
|
_wb_
(also would be fun to make a heuristic to add custom deltas, instead of relying on the default ones...)
|
|
2021-06-07 08:28:25
|
Yep
|
|
2021-06-07 08:49:46
|
New lossy-palette build
|
|
|
Jyrki Alakuijala
|
2021-06-07 09:23:59
|
yay
|
|
2021-06-07 09:24:17
|
it only adds colors that have constant 4-connected neighbours
|
|
2021-06-07 09:24:33
|
for some images it is a great strategy, but it is nothing universal
|
|
2021-06-07 09:24:45
|
luckily I have more ideas than that 🙂
|
|
2021-06-07 09:30:05
|
i.e., if there is a huge slab of constant color in a pic, then add that exact color into the palette
|
|
|
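The strategy described above -- only adding colors whose 4-connected neighbours are constant, i.e. colors backing large flat slabs -- can be sketched roughly as follows. The function name and the `min_slab_pixels` threshold are illustrative, not libjxl's actual code.

```python
from collections import Counter

def slab_colors(img, min_slab_pixels=64):
    """Return colors that occur in 'slabs': pixels whose 4-connected
    neighbours all have the same color as the pixel itself.
    `img` is a list of rows of hashable color values."""
    h, w = len(img), len(img[0])
    counts = Counter()
    for y in range(h):
        for x in range(w):
            c = img[y][x]
            # Check the 4-connected neighbourhood (clamped at borders).
            if all(img[ny][nx] == c
                   for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                   if 0 <= ny < h and 0 <= nx < w):
                counts[c] += 1
    # Only promote colors that back a sufficiently large constant area.
    return {c for c, n in counts.items() if n >= min_slab_pixels}
```

A lone noisy pixel never qualifies (its neighbours differ from it), which is exactly why the heuristic helps on some images and does nothing on others.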
Scope
|
2021-06-07 09:31:32
|
Source
|
|
2021-06-07 09:31:47
|
New LP (156 396)
|
|
2021-06-07 09:31:57
|
Old LP (183 274)
|
|
|
Jyrki Alakuijala
|
2021-06-07 09:32:22
|
did you like it?
|
|
|
Scope
|
2021-06-07 09:32:43
|
VarDCT = New LP Size
|
|
|
Jyrki Alakuijala
|
2021-06-07 09:33:03
|
I believe I can make it still 17% better for images with slight gradients like this one
|
|
|
Scope
Old LP (183 274)
|
|
2021-06-07 09:34:31
|
The Old LP has dithering artefacts in the middle gray area -- I hypothesized that they would go away with the new change, and that seems to be confirmed -- working as intended
|
|
|
Scope
|
2021-06-07 09:35:48
|
Yep (New - Old)
|
|
|
Jyrki Alakuijala
|
2021-06-07 09:36:35
|
big improvement 🙂
|
|
2021-06-07 09:37:12
|
I'm so thankful for you showing this, made my evening!
|
|
2021-06-07 09:37:47
|
I created the dithering algorithm 'blind', didn't look at many of the images
|
|
|
Scope
|
2021-06-07 09:37:48
|
But these are only the images where I found a difference (otherwise everything is almost the same including the size as in the old LP), although I'll look for more examples
|
|
|
Jyrki Alakuijala
|
2021-06-07 09:37:56
|
now that I see the results I'm less happy with it :----D
|
|
2021-06-07 09:38:14
|
yes, it should only trigger on special images
|
|
2021-06-07 09:38:39
|
the next change I plan is to deal with exotic or slow gradients with low noise
|
|
2021-06-07 09:38:50
|
that would likely be a more universal win
|
|
|
Scope
|
2021-06-07 09:40:28
|
New LP
|
|
2021-06-07 09:40:42
|
Old LP
|
|
2021-06-07 09:41:49
|
Also size (New - 76,707, Old - 202,954)
|
|
|
|
veluca
|
|
Scope
VarDCT = New LP Size
|
|
2021-06-07 09:41:59
|
to me this one still looks better than the LP image though
|
|
|
Scope
|
2021-06-07 09:43:33
|
Yes, currently in other images this is also the same (VarDCT better/same), but perhaps I can find counterexamples
It is also all -s 8 (perhaps -s 9 and other options could be better)
|
|
|
|
veluca
|
2021-06-07 09:47:24
|
-s 9 likely yes
|
|
2021-06-07 09:47:29
|
-E changes nothing
|
|
2021-06-07 09:47:43
|
you want -I 0 -P (0? probably?)
|
|
|
Scope
|
2021-06-07 10:00:28
|
Yep, that might help, but for now I'm testing with something relatively universal for all kinds of images and not very slow
|
|
|
|
veluca
|
2021-06-07 10:11:11
|
`--lossy-palette -s 9 -I 0 -P 0` is much faster than the usual `-s 9`s
|
|
2021-06-07 10:11:17
|
and should be reasonably universal
|
|
|
Scope
|
2021-06-07 10:19:43
|
On lossless comparisons it wasn't always better on photos, I test on those too (although LP would be more useful on artificial images)
|
|
|
|
veluca
|
2021-06-07 10:35:59
|
it should always be better after LP
|
|
|
Scope
|
2021-06-07 10:42:24
|
<@!179701849576833024><@!794205442175402004> Then maybe enable `-I 0 -P 0 --palette=0` by default when the `--lossy-palette` option is selected and change them only if they are specified?
|
|
|
|
veluca
|
2021-06-07 10:44:16
|
probably a good idea yes 🙂
|
|
|
_wb_
|
2021-06-08 07:30:45
|
Never seen that one, sounds like a weird icc profile?
|
|
2021-06-08 07:30:56
|
<@604964375924834314> ?
|
|
|
spider-mario
|
2021-06-08 07:37:36
|
interesting, could you please share the icc profile or the image so that we can take a look?
|
|
2021-06-08 07:37:48
|
I suppose it might be a LUT profile
|
|
2021-06-08 08:19:48
|
thanks, it seems the red and green primaries have negative Z, and the blue primary has negative Y
|
|
2021-06-08 08:21:25
|
profile
|
|
2021-06-08 08:21:35
|
I think
|
|
2021-06-08 08:22:02
|
not 100% sure
|
|
|
Jyrki Alakuijala
|
|
Jake Archibald
50kB frame
|
|
2021-06-08 08:37:15
|
One thing that may be relevant to progressive analysis is the resolution ratios. Progression is not important for small images of say 500x500 -- they already load fast enough and Chrome will not show progressive stages. It becomes much more important for full screen or near full screen content
|
|
2021-06-08 08:43:39
|
the 8x8 subsampled overview image is a much better approximate for a near full screen photograph than for a thumbnail of the same photograph -- i.e., more utility in that use case
|
|
2021-06-08 08:45:23
|
also, Chrome would be much more likely to actually show progressive stages for a large image (more bits, longer transfer) than for a smaller image -- Chrome seems to decide it by timing a few updates (actually I don't know what is happening, just fuzzy observations) during the load if the load is taking a significant amount of time.
|
|
2021-06-08 08:47:22
|
the 'spiraling' middle-out/center-first progression is relatively calm when observed from Chrome (by Chrome giving it a small number like five of updates during the load), but looks like a bumble bee with attention seeking disorder if every 256x256 square is painted separately (like possibly in some videos demonstrating progressive loading)
|
|
|
Jake Archibald
|
2021-06-08 09:06:45
|
That image was 1920x1080 fwiw
|
|
|
_wb_
|
2021-06-08 09:14:26
|
that's 8x5 groups, 7x4 full ones
|
|
|
spider-mario
|
|
spider-mario
profile
|
|
2021-06-08 09:59:47
|
yep: "Tristimulus values must be non-negative." (ICC spec)
|
|
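The check being discussed (the ICC spec's requirement that tristimulus values be non-negative) amounts to validating the XYZ colorant values; a minimal sketch, with the profile parsing omitted and the function name invented:

```python
def colorants_valid(primaries):
    """primaries: mapping of channel name to its (X, Y, Z) colorant
    tuple, e.g. as read from an ICC profile's rXYZ/gXYZ/bXYZ tags.
    The ICC spec requires tristimulus values to be non-negative."""
    return all(component >= 0.0
               for xyz in primaries.values()
               for component in xyz)
```

(As the later messages note, rejecting such profiles outright turned out to be overzealous; a validator might warn rather than fail.)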
2021-06-08 10:13:42
|
something might be off about the measurements
|
|
2021-06-08 10:14:00
|
(but I don't think I would be able to tell)
|
|
2021-06-08 10:14:40
|
yours is an ICCv2 profile
|
|
2021-06-08 10:16:18
|
also, I'm not sure that skcms supports ICCv4 anyway
|
|
2021-06-08 10:17:46
|
it might be that we have an overzealous check somewhere, actually
|
|
2021-06-08 10:18:06
|
in fact the line numbers don't seem to match in the current version, so we might have changed it already
|
|
2021-06-08 10:18:10
|
which JXL version are you using?
|
|
|
Jyrki Alakuijala
|
2021-06-08 10:22:40
|
... a better comparison of how different progressive modes, blurry-preview and fully non-progressive fare against each other would be to record full screen experience on Chrome rather than small images
|
|
|
spider-mario
|
|
spider-mario
it might be that we have an overzealous check somewhere, actually
|
|
2021-06-08 10:23:58
|
sorry yes, the check is ours, nothing to do with skcms
|
|
|
Jyrki Alakuijala
|
2021-06-08 10:24:00
|
when showing things side by side, it would be best if the side-by-side images are not crops of the big image, but for example full screen captures, or even shots including the monitor showing the whole setup -- then users can relate (likely) most effectively and authentically to how they would feel with the different progression experiences
|
|
|
spider-mario
|
2021-06-08 10:26:34
|
seems that itβs our fault after all, I think we can maybe remove the check
|
|
2021-06-08 10:26:38
|
thanks for finding this out
|
|
|
Jyrki Alakuijala
|
2021-06-08 10:27:01
|
Thank you spider-mario for bringing clarity to this
|
|
|
spider-mario
|
2021-06-08 11:42:45
|
done, should be in the next release
|
|
2021-06-08 12:28:42
|
if this is what I think it is, we are battling an internal compiler error in gcc 7
|
|
2021-06-08 12:29:14
|
the rest of the pipeline, with different compilers, is fine
|
|
|
_wb_
|
2021-06-08 12:51:17
|
can we do a 0.3.8 release? or what's the plan?
|
|
2021-06-08 12:53:00
|
we haven't bumped the version number in a while, and we are in this weird situation where the main branch says it is 0.3.7, the 0.4.x branch is missing some of the new stuff, and the 0.3.x branch is even more outdated
|
|
2021-06-08 12:55:03
|
wouldn't it be better to call 0.3.x and 0.4.x what they really are (I think): the versions that are in M91 and M92?
|
|
2021-06-08 12:58:02
|
I want to update the jxl in Cloudinary but with the current version tagging it's kind of a mess
|
|
2021-06-08 12:59:21
|
chrome milestones
|
|
2021-06-08 12:59:41
|
(which is a fancy word for version numbers)
|
|
2021-06-08 01:00:18
|
yes, that's the current stable chrome
|
|
2021-06-08 01:00:55
|
https://chromiumdash.appspot.com/schedule
|
|
|
diskorduser
|
2021-06-08 01:24:39
|
Are you saying there is a jxl support on chrome stable?
|
|
|
_wb_
|
2021-06-08 01:33:38
|
behind a flag, yes
|
|
2021-06-08 01:33:55
|
and only on desktop, not mobile
|
|
2021-06-08 01:34:15
|
https://chromium-review.googlesource.com/c/chromium/src/+/2932498
|
|
2021-06-08 01:35:07
|
mobile and animated jxl (still behind a flag) will be for the next stable chrome version, M92
|
|
|
diskorduser
|
2021-06-08 02:17:25
|
Waiting for Firefox mobile ( beta ) jxl support
|
|
|
_wb_
|
|
_wb_
we haven't bumped the version number in a while, and we are in this weird situation where the main branch says it is 0.3.7, the 0.4.x branch is missing some of the new stuff, and the 0.3.x branch is even more outdated
|
|
2021-06-08 02:19:14
|
<@&803357352664891472> WDYT?
|
|
|
lithium
|
2021-06-08 02:49:53
|
Will the libjxl 0.3.8 release include some VarDCT improvements? I'm very much looking forward to seeing this 🙂
|
|
|
Scope
|
|
Scope
<@!179701849576833024><@!794205442175402004> Then maybe enable `-I 0 -P 0 --palette=0` by default when the `--lossy-palette` option is selected and change them only if they are specified?
|
|
2021-06-08 02:52:16
|
Hmm, yes, on a large varied set `--lossy-palette` with `-I 0 -P 0` is less effective in general
<@!532010383041363969><@!179701849576833024><@!794205442175402004> Also, enabling `--lossy-palette` discards alpha
|
|
|
|
veluca
|
2021-06-08 02:52:38
|
*less* effective?
|
|
|
Scope
|
2021-06-08 02:53:20
|
Yes, only `-s 9` is more effective
|
|
2021-06-08 02:54:52
|
But, it's on a random set of images, I didn't choose which might be better with `--lossy-palette`
|
|
|
_wb_
|
2021-06-08 02:55:43
|
at s < 9, I think -I 0 is a bad idea since there's no or weak lz77 search then
|
|
|
Scope
|
2021-06-08 03:07:18
|
Also, is it possible to make `--lossy-palette` mode very effective for almost black-and-white or almost grayscale images?
Sometimes there are comics, manga or document scans with slightly off-color pixels; theoretically the palette could be reduced to true black and white (or limited grayscale) without noticeable visual differences.
Yes, such an image can be converted manually, but I mean something like automatic detection for when the user does not know what kind of image it is; that would require some clever detector, say for when an image is 99.9% black-and-white or grayscale and the colored pixels are more like random digital noise or visually insignificant (say, on the edges)
|
|
|
_wb_
|
2021-06-08 03:14:33
|
probably for actual grayscale, it will either not be effective or will require custom deltas to make it effective (defaults are kind of designed for 3 color channels)
|
|
|
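The automatic detection suggested above could look something like this hypothetical heuristic, which counts pixels that deviate from pure grayscale by more than a small tolerance; the name and both thresholds are invented for illustration:

```python
def is_effectively_grayscale(pixels, tol=8, max_color_fraction=0.001):
    """pixels: iterable of (r, g, b) tuples.
    A pixel counts as 'colored' if its channels differ by more than
    `tol`; the image is treated as effectively grayscale if at most
    `max_color_fraction` of pixels are colored (e.g. scanner noise)."""
    total = colored = 0
    for r, g, b in pixels:
        total += 1
        if max(r, g, b) - min(r, g, b) > tol:
            colored += 1
    return total > 0 and colored / total <= max_color_fraction
```

An encoder could then quantize the detected image to grayscale before palettization; as noted below, the default deltas are designed for 3 color channels, so this alone wouldn't make the delta palette effective on grayscale.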
Jyrki Alakuijala
|
|
Scope
Hmm, yes, on a large varied set `--lossy-palette` with `-I 0 -P 0` is less effective in general
<@!532010383041363969><@!179701849576833024><@!794205442175402004> Also, enabling `--lossy-palette` discards alpha
|
|
2021-06-08 05:30:26
|
yes, that is by design
|
|
|
Scope
Also, is it possible to make `--lossy-palette` mode very effective for almost black-and-white or almost grayscale images?
Sometimes there are comics, manga or document scans with slightly off-color pixels; theoretically the palette could be reduced to true black and white (or limited grayscale) without noticeable visual differences.
Yes, such an image can be converted manually, but I mean something like automatic detection for when the user does not know what kind of image it is; that would require some clever detector, say for when an image is 99.9% black-and-white or grayscale and the colored pixels are more like random digital noise or visually insignificant (say, on the edges)
|
|
2021-06-08 05:31:01
|
more context modeling will help there
|
|
2021-06-08 05:31:20
|
next planned step for improvements to delta palettization is to cluster deltas
|
|
2021-06-08 05:31:44
|
I anticipate a 15-20 % improvement for images with gradients (and low noise)
|
|
2021-06-08 05:33:24
|
graphics mostly
|
|
|
Scope
|
2021-06-08 05:37:07
|
So transparency is not possible with `--lossy-palette`, or is it just not implemented now?
Theoretically, it might be possible to get more improvements with more efficient lossy encoding methods for transparency
|
|
|
|
veluca
|
2021-06-08 05:40:35
|
possible, it is
|
|
2021-06-08 05:40:42
|
implemented, no way xD
|
|
|
Scope
|
|
Scope
Hmm, yes, on a large varied set `--lossy-palette` with `-I 0 -P 0` is less effective in general
<@!532010383041363969><@!179701849576833024><@!794205442175402004> Also, enabling `--lossy-palette` discards alpha
|
|
2021-06-08 05:47:47
|
Also, more precisely what I meant was that `-s 9 -I 0 -P 0` with `--lossy-palette` is less efficient than just `-s 9`, but mostly better and faster than just `-s 8`, so `-I 0 -P 0` is not very suitable as the default option
|
|
|
|
veluca
|
2021-06-08 05:48:43
|
weird, that
|
|
2021-06-08 05:48:56
|
I wonder what predictor it ends up using...
|
|
|
Scope
|
2021-06-08 05:51:43
|
Also, does it differ from WebP only in that there is no automatic predictor selection and limited group size or is the LZ77 implementation at this point also less efficient?
|
|
|
|
veluca
|
2021-06-08 05:55:04
|
it should be basically equivalent
|
|
2021-06-08 05:55:25
|
although *probably* webp has the advantage of LZ77-ing all the channels at the same time
|
|
|
Jyrki Alakuijala
|
2021-06-08 06:05:17
|
in my experience LZ77ing per channel is more efficient in density
|
|
2021-06-08 06:06:11
|
I chose WebP lossless' approach because the loss is not big and it allows faster coding
|
|
2021-06-08 06:06:55
|
I had a very capable intern who looked into different LZ77 strategies (like brotlifying per channel with WebP lossless' predictors)
|
|
|
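The layout trade-off discussed here can be experimented with using a generic LZ77-family compressor as a stand-in; zlib below is only a proxy for what either codec actually does, and which layout wins depends on the image:

```python
import zlib

def interleaved_vs_planar(pixels):
    """pixels: list of (r, g, b) tuples. Returns the compressed sizes of
    the interleaved stream (RGBRGB...) and of the three per-channel
    (planar) streams compressed separately, as a rough proxy for how
    LZ77 matching behaves in each layout."""
    interleaved = bytes(c for px in pixels for c in px)
    planes = [bytes(px[i] for px in pixels) for i in range(3)]
    size_interleaved = len(zlib.compress(interleaved, 9))
    size_planar = sum(len(zlib.compress(p, 9)) for p in planes)
    return size_interleaved, size_planar
```

Interleaving lets one match cover correlated runs across all channels at once (the WebP-lossless advantage mentioned above), while per-channel streams tend to find longer matches within a channel, which is the density advantage Jyrki describes.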
Scope
|
2021-06-08 06:13:01
|
Interesting recent commit from WebP2 🤔 (as far as I know they want to significantly improve alpha compression compared to WebP, not in this particular commit, but in general)
|
|
|
_wb_
|
2021-06-08 06:19:44
|
Alpha-from-luma?
|
|
2021-06-08 06:28:38
|
Reminds me that we should take a look at using RCTs involving alpha. It might sometimes make sense. E.g. use a permutation RCT to move alpha to the first channel, so the other channels can use it as context - e.g. often semitransparent pixels are all the same color (black or a dark gray, for a shadow effect), and that could be exploited by basically starting the MA tree for luma with something like
```
if Prev < 255
- Set 10
```
|
|
2021-06-08 06:30:11
|
So you'd get a kind of luma-from-alpha from that
|
|
|
Jyrki Alakuijala
|
2021-06-08 09:08:38
|
it seems like a context modeling thing would be appropriate, not a prediction thing
|
|
|
_wb_
|
2021-06-08 09:13:10
|
Well we can do either and both, since MA trees work both as ctx model and as predictor
|
|
|
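The point that an MA tree works as both context model and predictor can be sketched with a toy decision node: the same test on the co-located sample of the previous channel (alpha, after a permutation RCT moves it first) selects both a predictor and an entropy-coding context. Names and values here are illustrative, not libjxl's:

```python
def predict_luma(alpha, left, top, topleft):
    """Toy MA-tree leaf: if the co-located alpha sample (the 'Prev'
    channel) is semi-transparent, predict a fixed dark value (like the
    'Set 10' leaf in the example above); otherwise fall back to a
    clamped-gradient predictor. Returns (prediction, context_id):
    the same test also selects the entropy-coding context."""
    if alpha < 255:
        return 10, 0                      # leaf: Set 10, context 0
    grad = left + top - topleft           # gradient predictor
    clipped = min(max(grad, min(left, top)), max(left, top))
    return clipped, 1                     # leaf: gradient, context 1
```

The decoder then codes only the residual against the prediction, in the context the leaf selected, so the alpha test helps both ways at once.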
|
Deleted User
|
2021-06-09 10:46:46
|
I've seen something that seems to contradict each other in cjxl...
|
|
2021-06-09 10:47:34
|
From `cjxl -h -v`:
``` -P K, --predictor=K
[modular encoding] predictor(s) to use: 0=zero, 1=left, 2=top, 3=avg0, 4=select, 5=gradient, 6=weighted, 7=topright, 8=topleft, 9=leftleft, 10=avg1, 11=avg2, 12=avg3, 13=toptop predictive average 14=mix 5 and 6, 15=mix everything. Default 14, at slowest speed default 15```
|
|
2021-06-09 10:48:43
|
But at the same time...
|
|
2021-06-09 10:49:43
|
```./cjxl image.png -P 14
Invalid predictor value 14, must be less than 14.
Error parsing flag -P
Use './cjxl -h' for more information```
|
|
2021-06-09 10:50:31
|
Could you explain it to me?
|
|
2021-06-09 10:52:16
|
Because I don't understand this...
|
|
|
_wb_
|
2021-06-09 10:56:42
|
Looks like we document the internal default values but don't actually allow setting them manually. Open an issue maybe? Or a PR to just fix the bounds check in cjxl.
|
|
2021-06-09 12:13:43
|
|
|
2021-06-09 12:13:58
|
|
|
2021-06-09 12:14:19
|
|
|
2021-06-09 12:15:12
|
|
|
|
|
Deleted User
|
2021-06-09 12:56:02
|
https://tenor.com/view/captcha-thug-robot-i-see-what-you-did-there-thuglife-gif-7721399
|
|
2021-06-09 01:34:35
|
Which build are you using and on which platform?
|
|
2021-06-09 02:00:54
|
You mean `v0.3.x` branch from GitHub?
|
|
2021-06-09 02:06:22
|
Ah, ok. Could you recompile from the `main` GitHub branch? It's got a couple of improvements and it should be stable. I've (correctly, it seems) suspected that you've got an older version because of lack of line break after `[...] less than 14.` string. I fixed that in `tools/args.h` in https://github.com/libjxl/libjxl/pull/80/commits/e8915c56bc1082d0ab6b17cf8afb4e21022c57bd.
|
|
|
|
veluca
|
2021-06-09 02:13:19
|
I don't think it will happen, we'll rather have a v0.4.0
|
|
|
|
Deleted User
|
2021-06-09 02:34:35
|
You don't have to, just like I said before: current git version is stable enough.
|
|
2021-06-09 02:57:02
|
https://gitlab.com/wg1/jpeg-xl/-/issues/228#note_593247388 🙂
|
|
|
Scope
|
2021-06-10 02:50:50
|
Also, can checks by some static analyzers be useful for vulnerabilities, bugs, etc.?
For example, PVS reports on zlib-ng code
|
|
2021-06-10 02:51:15
|
|
|
|
_wb_
|
2021-06-10 03:05:46
|
could be, we already do some linting but maybe fancier analyzers can find more
|
|
|
|
veluca
|
2021-06-10 03:15:10
|
I often find that clang-tidy finds way more false positives than actual bugs xD
|
|
|
_wb_
|
2021-06-10 03:20:50
|
Oh yes. It's often "maybe you meant to do something else?" and the answer is "nope"
|
|
|
|
veluca
|
2021-06-10 03:22:43
|
or "X could be null!" "no, it can't"
|
|
|
BlueSwordM
|
2021-06-11 06:19:26
|
So, for faster encoding purposes on limited hardware, would it be possible to limit the computational bit depth to 16-bit instead of 32-bit?
|
|
|
|
veluca
|
2021-06-11 06:28:11
|
in theory yes
|
|
2021-06-11 06:28:27
|
but I wouldn't enjoy writing that
|
|
|
_wb_
|
2021-06-12 06:04:30
|
Current jxl implementation does everything with 32-bit float (for lossy) and 32-bit int (for lossless)
|
|
2021-06-12 06:04:59
|
The main impact of that is memory usage
|
|
2021-06-12 06:08:04
|
For speed it probably doesn't matter much, maybe using 16-bit ints would be slightly faster but I don't think it would make a huge difference
|
|
2021-06-12 06:11:09
|
Modular transforms can increase the integer width needed, so using 8-bit buffers for 8-bit images would only work if the transforms are not used
|
|
2021-06-12 06:14:34
|
Theoretically you could make an encoder that uses only constant memory (if you can give it the image in tiles instead of in one big buffer), and a decoder that for images encoded in that way also uses constant memory
|
|
2021-06-12 06:17:17
|
But for now we have the whole image in memory in the encoder, and in the decoder we are working on avoiding having the full image in memory, returning the decoded image in tiles instead
|
|
2021-06-12 06:18:32
|
Encoder memory needs can still be improved a lot, we are focusing more on the decoder atm
|
|
|
fab
|
2021-06-12 07:27:08
|
it was 4x bigger at speed 9
|
|
2021-06-12 07:27:36
|
the version before 0.3.7-65
|
|
2021-06-12 07:27:56
|
jpeg xl needs ram
|
|
2021-06-12 07:28:20
|
if you want jxl, buy a good computer, even if you're planning to encode 16 Mpx images
|
|
2021-06-12 07:28:30
|
decoder is light
|
|
|
_wb_
|
2021-06-12 07:38:56
|
Lossy default speed should be ok-ish in memory needs
|
|
|
|
Deleted User
|
|
_wb_
Modular transforms can increase the integer width needed, so using 8-bit buffers for 8-bit images would only work if the transforms are not used
|
|
2021-06-12 01:00:36
|
So `uint16_t` should be enough for Modular transforms for 8-bit inputs, I guess? How many additional bits do those transforms require?
|
|
|
_wb_
|
2021-06-12 01:13:20
|
well we use signed int so `int16_t`, but yes, that should be enough for up to 12-bit inputs
|
|
2021-06-12 01:13:42
|
maybe 14-bit if you don't do both squeeze and RCTs
|
|
|
|
Deleted User
|
|
_wb_
well we use signed int so `int16_t`, but yes, that should be enough for up to 12-bit inputs
|
|
2021-06-12 01:23:03
|
Ah, I forgot that you have to use signed int since transforms can result in negative values...
Does lossless use any RCTs by default?
|
|
|
_wb_
|
2021-06-12 01:23:54
|
yes
|
|
2021-06-12 01:24:21
|
it selects one per group, using some heuristic to determine which one will likely work best
|
|
2021-06-12 01:24:36
|
at faster speeds it just always does YCoCg
|
|
2021-06-12 01:25:05
|
Squeeze and RCT can result in negative values, also delta palette uses < 0 values for implicit deltas.
|
|
|
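The width growth from transforms can be made concrete with the reversible YCoCg RCT (a lifting-style sketch, not libjxl's exact code): Y stays in the input range, but Co and Cg become signed and one bit wider, which is why `int16_t` covers roughly 12- to 14-bit inputs but not 16-bit ones once transforms are applied.

```python
def forward_ycocg_r(r, g, b):
    """Reversible YCoCg-R via lifting: Y stays in [0, 2^n - 1], but Co
    and Cg are signed and need n+1 bits (9-bit signed for 8-bit input).
    Python's >> floors on negatives, matching the lifting scheme."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def inverse_ycocg_r(y, co, cg):
    """Exact inverse of the lifting steps, so the transform is lossless."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b
```

Squeeze adds roughly one more bit of range on top of this, which is where the "12-bit with both, maybe 14-bit with only one" arithmetic comes from.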
|
Deleted User
|
2021-06-14 01:30:14
|
Any updates on https://github.com/libjxl/libjxl/issues/99 ?
|
|
|
_wb_
|
2021-06-14 02:11:19
|
here you go 🙂 https://github.com/libjxl/libjxl/pull/166
|
|
|
eric.portis
|
2021-06-14 09:53:01
|
Does libjxl do HDR, yet? Does it do it well?
I took this HDR source image https://xn--g08h.img.rodeo/raw/upload/v1623700809/LuxoDoubleChecker.exr and did a simple
`cjxl LuxoDoubleChecker.exr luxo.jxl `
The result had visible banding.
Then I tried `cjxl LuxoDoubleChecker.exr luxo.jxl -q 100`. The result looked smooth/good.
My display can do very limited HDR (looking at the OpenEXR original in Preview.app, brighter whites and lighter highlights are visible, especially if I dim my screen slightly), but I assume Chrome (my JXL viewer...) cannot, so I wasn't able to "see" whether values >1.0 are preserved in the output JXLs. Are they?
|
|
2021-06-14 09:54:31
|
(this experimentation was inspired by this excellent WWDC session on how Apple does HDR https://developer.apple.com/videos/play/wwdc2021/10161/)
|
|
|
_wb_
|
2021-06-14 09:55:08
|
Current stable chrome does not do hdr jxl, current dev/canary chrome does, iirc
|
|
2021-06-14 09:56:02
|
No idea if that's true on macos though, haven't been able to try yet (need a hdr screen)
|
|
2021-06-14 09:56:22
|
In Windows it should work
|
|
|
improver
|
2021-06-14 09:56:38
|
btw does current canary chrome do progressive loading
|
|
|
_wb_
|
2021-06-14 09:56:54
|
No, not yet
|
|
|
eric.portis
|
|
_wb_
No idea if that's true on macos though, haven't been able to try yet (need a hdr screen)
|
|
2021-06-14 09:57:39
|
What Mac do you have? I have a 2017 Macbook Pro and it can do SDR*2 when I reduce brightness a tad...
|
|
|
improver
|
2021-06-14 09:58:07
|
that's sorta cheating tbh lol
|
|
|
_wb_
|
2021-06-14 09:58:13
|
How did you view the result? If it's decoding to 8-bit then banding is inevitable, need to decode to 16-bit (float or int)
|
|
2021-06-14 09:59:23
|
`djxl --bits_per_sample 16` when decoding to png, or use .exr as output
|
|
|
eric.portis
|
2021-06-14 10:00:22
|
Compare:
https://xn--g08h.img.rodeo/image/upload/v1623707935/luxo.jxl in Canary
https://xn--g08h.img.rodeo/raw/upload/v1623700809/LuxoDoubleChecker.exr in Preview.app
Preview.app looks way brighter and shows more highlight detail with display brightness at around 80%, even on my old mac.
|
|
2021-06-14 10:00:43
|
I'll try djxl-ing...
|
|
|
_wb_
|
2021-06-14 10:01:17
|
I have a 2019 macbook pro, afaik its builtin screen is natively 8-bit display P3 but I have no idea how to actually check that
|
|
|
improver
|
2021-06-14 10:02:02
|
you gotta take into account additional dimming though
|
|
2021-06-14 10:02:54
|
as in that's probably implemented by turning pixels on and off real fast
|
|
|
eric.portis
|
|
_wb_
I have a 2019 macbook pro, afaik its builtin screen is natively 8-bit display P3 but I have no idea how to actually check that
|
|
2021-06-14 10:08:59
|
What do you see when you run this thing I cobbled together this afternoon to check my own screen? (you can trust me...)
I'm assuming you'll see something like
% ./screengetter
Hello, World!
Optional(2.0)
Optional(2.0)
0.87890625
And your screen, just like mine, can do just a little bit of HDR (SDR x 2)
|
|
|
_wb_
|
|
improver
as in that's probably implemented by turning pixels on and off real fast
|
|
2021-06-14 10:09:17
|
I usually set brightness to max hoping to avoid that (also to see more)
|
|
|
eric.portis
What do you see when you run this thing I cobbled together this afternoon to check my own screen? (you can trust me...)
I'm assuming you'll see something like
% ./screengetter
Hello, World!
Optional(2.0)
Optional(2.0)
0.87890625
And your screen, just like mine, can do just a little bit of HDR (SDR x 2)
|
|
2021-06-14 10:11:26
|
Will try tomorrow, it's bedtime for me :)
|
|
2021-06-14 10:11:53
|
I'm Jon btw :)
|
|
|
eric.portis
|
2021-06-14 10:25:49
|
Update: taking the original OpenEXR and roundtripping through cjxl + djxl back to EXR appears to preserve all HDR data (for the small sliver of HDR contained within the file that my screen can actually display). Huzzah.
|
|
2021-06-14 10:28:26
|
Too bad you can't see it in Canary...
|
|
2021-06-15 12:43:37
|
Like this: https://github.com/eeeps/screengetter/blob/main/screengetter/main.swift#L29
Just reads/prints the "Extended Dynamic Range" values from here https://developer.apple.com/documentation/appkit/nsscreen#topics which I learned about in the WWDC session video, above.
(third value is the display brightness, via some complicated-seeming code I copied and pasted from stack overflow)
|
|
2021-06-15 12:44:29
|
Weirdly, it displays different values when I have an HDR image open in Preview, vs when I don't... ha
|
|
2021-06-15 12:44:43
|
Maybe the true answer to "how does it work" is "I don't know" (;
|
|
|
_wb_
|
|
eric.portis
What do you see when you run this thing I cobbled together this afternoon to check my own screen? (you can trust me...)
I'm assuming you'll see something like
% ./screengetter
Hello, World!
Optional(2.0)
Optional(2.0)
0.87890625
And your screen, just like mine, can do just a little bit of HDR (SDR x 2)
|
|
2021-06-15 06:48:48
|
"screengetter" can't be opened because Apple cannot check it for malicious software.
|
|
2021-06-15 06:50:36
|
ok, found how to get around that -- the default dialog box doesn't give the option to say "run it anyway"
|
|
2021-06-15 06:50:41
|
```
Hello, World!
Optional(1.0)
Optional(2.0)
0.74316406
```
|
|
|
eric.portis
|
2021-06-15 07:02:43
|
So you, like me, should be able to see just a little bit of HDR. Second value (2.0) is https://developer.apple.com/documentation/appkit/nsscreen/3180381-maximumpotentialextendeddynamicr
|
|
|
_wb_
|
2021-06-15 07:37:01
|
Interesting
|
|
2021-06-15 07:37:52
|
when I open that EXR in preview, and dim the screen, then basically the brightest parts in the image remain at the brightest the screen can do, even though it's dimmed
|
|
2021-06-15 07:38:06
|
so they become brighter than what is nominally white
|
|
2021-06-15 07:38:36
|
but when I set the screen to max brightness, nominal white and the brightest parts of the EXR look the same
|
|
2021-06-15 07:43:48
|
I do see some funky artifacts when Preview is not showing the image 1:1 btw. Smells like its downscaler is doing things not quite right
|
|
2021-06-15 07:44:19
|
Chrome Canary does indeed not display the image in 'HDR' like Preview does
|
|
2021-06-15 07:44:38
|
the jxl does decode to an exr that Preview does show in 'HDR' though
|
|
2021-06-15 07:59:47
|
https://prolost.com/blog/edr
|
|
2021-06-15 08:11:16
|
|
|
2021-06-15 08:11:49
|
ugh the screenshot tool is conspiring too
|
|
2021-06-15 08:14:03
|
|
|
2021-06-15 08:15:08
|
photo of my screen isn't that clear either, but the point is: the .exr files are brighter-than-white in Preview, the .png isn't
|
|
2021-06-15 10:17:36
|
well it's the built-in macbook pro screen, just using default calibration
|
|
2021-06-15 10:18:32
|
it looks like the "brighter than white" thing is something Preview does with .exr files but not with .png files
|
|
2021-06-15 10:27:21
|
yes I need to buy a colorimeter, I have a fancy BenQ monitor that is calibrated very well out of the box but it's time to check its calibration
|
|
2021-06-15 10:28:05
|
(I don't care hugely about absolute accuracy though, I only need relative accuracy)
|
|
|
eric.portis
|
2021-06-15 02:07:54
|
Yes and I still could never know what it really represented without documentation (even the documentation isn't that clear; the WWDC video is really what taught me I suppose)
|
|
|
spider-mario
|
2021-06-15 06:27:58
|
yeah, an i1Display Studio or Pro or a used ColorMunki Display is likely the best quality/affordability compromise in this range
|
|
2021-06-15 06:30:06
|
(one can also get a https://www.kleininstruments.com/k10-a for $7000, but it's more targeted at manufacturers who calibrate many displays)
|
|
|
raysar
|
|
_wb_
(I don't care hugely about absolute accuracy though, I only need relative accuracy)
|
|
2021-06-15 08:01:00
|
And I can say, even with a €300 spectro like the X-Rite i1 Display Pro, getting good visual relative accuracy is pretty hard.
The default ICC profile for the MacBook Pro is excellent.
|
|
|
spider-mario
|
2021-06-15 09:06:07
|
the X-Rite naming scheme is a bit confusing but the i1Display Pro is not a spectrometer, the i1Pro is
|
|
2021-06-15 09:06:15
|
the i1Display Pro is a colorimeter
|
|
2021-06-15 09:07:17
|
a spectrometer measures each wavelength separately and then the software can compute the integral of the product with the color matching functions to get the X, Y, Z measurements
|
|
2021-06-15 09:07:46
|
a colorimeter has three filters that approximate the color matching functions and measures the whole filtered spectrum at once to get a tristimulus value directly
|
|
2021-06-15 09:09:20
|
a colorimeter may need a correction matrix if the filters are not quite those desired, but on the plus side it gives better dark measurements and is also generally more thermally stable
|
|
2021-06-15 09:09:52
|
so the best approach, if possible, is to use both: first a spectrometer to create a correction matrix for the colorimeter being used, and then the colorimeter for all the remaining measurements
|
|
2021-06-15 09:11:03
|
if one wants to use only one measuring device, then it should likely be a colorimeter by default, or a spectrometer if one plans to profile a printer as well
|
|
|
spider-mario
so the best approach, if possible, is to use both: first a spectrometer to create a correction matrix for the colorimeter being used, and then the colorimeter for all the remaining measurements
|
|
2021-06-15 09:13:34
|
(this is less needed with recent colorimeters with more accurate filters)
|
|
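The spectrometer-plus-colorimeter workflow described above boils down to fitting a 3x3 correction matrix by least squares between the colorimeter's raw readings and the spectrometer's XYZ measurements of the same patches; a sketch (real tools such as ArgyllCMS are far more careful about patch selection and weighting):

```python
import numpy as np

def fit_correction_matrix(colorimeter_raw, spectro_xyz):
    """Fit a 3x3 matrix M minimizing ||raw @ M.T - xyz||^2, so that
    corrected = M @ raw_reading maps the colorimeter's filtered
    readings onto the spectrometer's XYZ ground truth.
    Both arguments are N x 3 arrays of measurements of the same patches."""
    raw = np.asarray(colorimeter_raw, dtype=float)
    ref = np.asarray(spectro_xyz, dtype=float)
    m_t, *_ = np.linalg.lstsq(raw, ref, rcond=None)
    return m_t.T
```

Once M is fitted, the (faster, more dark-stable) colorimeter does all the remaining measurements through that matrix.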
|
raysar
|
2021-06-15 09:59:03
|
Yes, you're right, I was speaking about the i1Studio; it's a spectro.
The software doesn't seem to be smart, and the report after calibration is not good; it's really hard to get a good visual result on a good 6-bit+FRC IPS display, even with 1 million patches.
I now use my display only with the hardware RGB calibration in the display menu (I hate banding and bad gradients).
For me, absolute color deviation is not a big problem (only in certain colors like purple and orange).
|
|
2021-06-15 10:14:40
|
Also with an old MacBook Pro Retina (8-bit display), the color profile produces ugly banding in some colors, and grey is not perfectly grey.
|
|
|
spider-mario
|
2021-06-15 10:24:47
|
yeah, Argyll CMS and its GUI DisplayCAL have a reputation of being better than OEM software
|
|
2021-06-15 10:25:06
|
and they are indeed very good
|
|
2021-06-15 10:25:31
|
DisplayCAL's 3D LUT generator has been quite useful in helping us experiment a bit with HDR on non-HDR monitors
|
|
2021-06-15 10:44:18
|
seeing that it implemented tone mapping using the technique described in BT.2390, we implemented that in `djxl` too (https://github.com/libjxl/libjxl/blob/13649d2b29ca7c4e96795c1ec5ccfeb19524236c/lib/extras/tone_mapping.cc)
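For the curious, a minimal sketch of the BT.2390 highlight roll-off (only the Hermite-spline knee, assuming the input and the target peak are already normalized to PQ [0,1]; the real procedure in `tone_mapping.cc` also handles black-level lifting and PQ (de)normalization, so this is an illustration, not the libjxl implementation):

```cpp
#include <cassert>
#include <cmath>

// Simplified sketch of the BT.2390 EETF highlight roll-off. Both the input
// e1 and the target peak max_lum are normalized PQ values in [0, 1]. Below
// the knee ks the mapping is the identity; above it, a Hermite spline rolls
// the signal off so that an input of 1.0 lands exactly on max_lum.
double Bt2390RollOff(double e1, double max_lum) {
  const double ks = 1.5 * max_lum - 0.5;  // knee start
  if (e1 < ks) return e1;
  const double t = (e1 - ks) / (1.0 - ks);
  const double t2 = t * t, t3 = t2 * t;
  return (2 * t3 - 3 * t2 + 1) * ks + (t3 - 2 * t2 + t) * (1 - ks) +
         (-2 * t3 + 3 * t2) * max_lum;
}
```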
|
|
2021-06-15 10:45:17
|
and so, here is the checker image from above, encoded to jxl assuming that `1` in the EXR should have a brightness of ~30 cd/m², and then tone-mapped by `djxl` for displays with a peak brightness of 80 cd/m²:
|
|
2021-06-15 10:46:07
|
(er, one second, something seems to have gone wrong when converting the png to jpg for discord)
|
|
2021-06-15 10:52:18
|
yeah, something strange seems to happen when converting to sRGB
|
|
2021-06-15 10:55:35
|
|
|
2021-06-15 10:55:40
|
it should probably not look like that.
|
|
2021-06-15 10:56:00
|
this is with lossless jxl, so most likely it's the tone mapping that is behaving strangely here
|
|
2021-06-15 10:56:27
|
the shadows are quite hard to get right, with everything approaching zero
|
|
|
|
veluca
|
2021-06-15 10:58:04
|
yeah, those shadows are a bit bad xD
|
|
|
spider-mario
|
2021-06-15 11:03:37
|
ah, it could be that part of the problem is in how I prepared the EXR for compression by JXL using `exrnormalize`
|
|
2021-06-15 11:04:17
|
there is a very bright spot in the image (1847), and dividing all the rest by that may have been a bit problematic
|
|
2021-06-15 11:08:46
|
it may also put more stress on the tone mapping to map such a wide range
|
|
2021-06-15 11:08:59
|
let me try when normalizing a little less
|
|
2021-06-15 11:10:06
|
I would say that the problem-free area does look a bit larger, doesn't it?
|
|
2021-06-15 11:10:55
|
still trying to map from up to 800× the original `1` value which is probably not really necessary
|
|
2021-06-15 11:18:18
|
50: finally, seemingly artifact-free shadows, although we are not retaining quite as much highlight detail anymore
|
|
2021-06-15 11:18:59
|
this makes me want to investigate what could be improved in the tone mapping to prevent this sort of thing
|
|
2021-06-15 11:19:14
|
but not right now, it's getting late.
|
|
2021-06-15 11:19:34
|
(or early depending on the point of view, but you get what I mean.)
|
|
2021-06-15 11:53:07
|
out of curiosity, why not DisplayCAL for both steps?
|
|
2021-06-15 11:53:43
|
with
|
|
2021-06-16 05:44:55
|
hm, so it seems the problem may not be in the tonemapping after all
|
|
2021-06-16 05:45:01
|
possibly instead in color conversions
|
|
2021-06-16 05:45:22
|
I get similar artifacts if I just ask djxl to decode to a Rec. 2020 EXR
|
|
2021-06-16 05:45:57
|
and converting to linear Rec. 2020 is the first step of the tonemapping
|
|
|
|
Deleted User
|
2021-06-17 09:34:01
|
How do I make the alpha channel lossy?
|
|
|
_wb_
|
2021-06-17 09:35:49
|
it is done by default if the color image is lossy
|
|
2021-06-17 09:37:29
|
that only does a mild amount of lossy to alpha though, to keep things safe
|
|
2021-06-17 09:38:06
|
since alpha is modular-encoded, you can adjust alpha quality by using `-Q`
|
|
2021-06-17 09:38:56
|
there's currently no way to set different modular quality for alpha than for color though, if you do color with modular too
|
|
2021-06-18 11:58:26
|
Getting a nice decode speedup for the fast lossless modes like thunder, by just skipping the unnecessary copying from group to full modular image and going directly to the decoded float buffer instead
|
|
2021-06-18 11:58:44
|
<@768090355546587137>
|
|
2021-06-18 11:59:14
|
Makes -m -e 2 decoding go from 50 MP/s to 60 MP/s
|
|
2021-06-18 12:01:40
|
Also reduces decode memory by 30% or so
|
|
2021-06-18 12:02:08
|
Can only be done in RCT-only cases though, atm
|
|
2021-06-18 12:03:03
|
(could add channel and no-delta palette later, but for delta palette and squeeze it's not going to work)
|
|
|
|
Deleted User
|
2021-06-23 02:49:12
|
Is there a parameter to automatically drop alpha if it's completely opaque? (maybe this should also be the default for lossy)
|
|
|
_wb_
|
2021-06-23 02:54:38
|
ah, no, and I actually don't see any reason not to just do that always
|
|
2021-06-23 02:55:17
|
even for lossless I think it's pretty much clear that no alpha channel is the same thing as full-opaque alpha
|
|
|
|
Deleted User
|
2021-06-23 03:38:49
|
Do I need to open a GitHub issue?
|
|
|
diskorduser
|
|
Is there a parameter to automatically drop alpha if it's completely opaque? (maybe this should also be the default for lossy)
|
|
2021-06-23 04:04:10
|
Why? Does it save space?
|
|
|
improver
|
2021-06-23 04:06:03
|
i suspect it would. also complexity
|
|
|
|
Deleted User
|
|
diskorduser
Why? Does it save space?
|
|
2021-06-23 04:37:51
|
A little, also less RAM but mainly decode (probably also encode) time.
|
|
|
Do I need to open a GitHub issue?
|
|
2021-06-24 11:57:17
|
<@!794205442175402004>
|
|
|
_wb_
|
2021-06-24 11:57:56
|
yes pls 🙂
|
|
|
|
Deleted User
|
2021-06-24 12:07:40
|
I will.
|
|
2021-06-24 12:08:11
|
Could someone explain what is going (wr)on(g) with noise=1 here?
|
|
2021-06-24 12:08:18
|
|
|
2021-06-24 12:08:19
|
|
|
2021-06-24 12:08:19
|
|
|
2021-06-24 12:08:20
|
|
|
2021-06-24 12:08:21
|
|
|
2021-06-24 12:08:21
|
|
|
2021-06-24 12:10:21
|
white background: way too much noise
black background: a bit much noise
both together: no visible noise
???
|
|
|
_wb_
|
2021-06-24 12:37:51
|
I think there's a reason noise is disabled by default 🙂
|
|
2021-06-24 12:38:31
|
encoder decisions regarding noise are currently likely not very good
|
|
|
|
Deleted User
|
2021-06-24 12:40:52
|
I guess it would be nice being able to specify a maximum noise intensity or something then.
|
|
2021-06-24 01:16:11
|
it works fine if the image contains a dithered gradient
|
|
2021-06-24 01:16:12
|
|
|
|
improver
|
2021-06-24 08:08:27
|
can current djxl decode jpg to pixels or do i need to first cjxl it to make it do that
|
|
2021-06-24 08:09:18
|
kinda curious whether ill be able to notice any difference between how libjpeg and libjxl do decoding
|
|
|
_wb_
|
2021-06-24 08:20:26
|
Djxl only takes jxl as input atm
|
|
2021-06-24 08:20:54
|
So have to do the `cjxl bla.jpg tmp.jxl; djxl tmp.jxl decoded.png` thing
|
|
|
|
Deleted User
|
2021-06-24 08:21:44
|
<@!260412131034267649> please upvote: https://gitlab.com/wg1/jpeg-xl/-/issues/232
|
|
|
_wb_
|
2021-06-24 08:22:16
|
Difference should be small with libjpeg, but we do have more accurate processing and maybe slightly nicer dequantization
|
|
|
|
Deleted User
|
2021-06-24 08:22:38
|
Nothing will beat Knusperli though
|
|
|
Jyrki Alakuijala
|
|
white background: way too much noise
black background: a bit much noise
both together: no visible noise
???
|
|
2021-06-24 09:52:41
|
<@!604964375924834314> more noise issues
|
|
|
white background: way too much noise
black background: a bit much noise
both together: no visible noise
???
|
|
2021-06-24 09:55:02
|
This is partially related to the fact that we have accidentally reversed the intensity of the noise model -- often there is more noise in white areas than in dark areas, while the physics are the opposite (not in linear space, but after the gamma compression)
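A quick way to see the effect being described: a perturbation of fixed linear-light amplitude moves the gamma-encoded value much further near black than near white, because the slope of x^(1/2.2) is steep near 0 (a simplified sketch using a plain 2.2 power curve, not the exact sRGB transfer function):

```cpp
#include <cassert>
#include <cmath>

// How far does a small linear-light perturbation move the gamma-encoded
// value? The derivative of x^(1/2.2) blows up near black, so photon noise
// of equal linear amplitude is far more visible in shadows after encoding.
double EncodedDelta(double linear, double noise, double gamma = 2.2) {
  return std::pow(linear + noise, 1.0 / gamma) -
         std::pow(linear, 1.0 / gamma);
}
```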
|
|
2021-06-24 09:55:47
|
we are aware of this, we just were not able to prioritize it when we had a lot of work with Chrome integration
|
|
|
Nothing will beat Knusperli though
|
|
2021-06-24 09:57:00
|
At the time when we developed Knusperli, I also had a Knusperli 2.0 that also used Gaborish and other tricks 🙂
|
|
2021-06-24 09:58:15
|
unsurprisingly, Gaborish makes low quality jpegs look better even if the opposite transform was not run at encoding time -- the opposite transform can be approximated by changing the quantization matrix weights (by increasing the highest frequency weights by about 30% in the experiment I had)
|
|
2021-06-24 09:59:08
|
Knusperli's main idea comes from my context modeling idea in brunsli -- extrapolation of the dct values all the way to the outer edge of the border pixels
|
|
2021-06-24 10:00:05
|
if you extrapolate the dct all the way to the edges, one edge is a 1d idct of the sum of all coefficients row- or column-wise, and the other edge is of the alternating sum of all coefficients row- or column-wise
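This is easy to check numerically: evaluating the (unnormalized) 8-point DCT-II basis cos(πk(2x+1)/16) at the extrapolated positions x = -0.5 and x = 7.5 gives weights 1 and (-1)^k, so the extrapolated edges are the plain and alternating coefficient sums (ignoring normalization factors):

```cpp
#include <cassert>
#include <cmath>

// The (unnormalized) 8-point DCT-II basis function k, evaluated at a
// continuous pixel position x. At x = -0.5 (half a pixel outside the block)
// every basis function equals cos(0) = 1; at x = 7.5 it equals
// cos(pi*k) = (-1)^k.
double DctBasis(int k, double x) {
  const double kPi = 3.14159265358979323846;
  return std::cos(kPi * k * (2.0 * x + 1.0) / 16.0);
}
```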
|
|
2021-06-24 10:00:32
|
it makes many things fun and simple, and a bit weird, too
|
|
2021-06-24 10:01:57
|
in knusperli we don't do much nicer dequantization of dc for very low quality jpegs -- that is something we developed for brunsli (but didn't make it into jpeg xl in the end)
|
|
2021-06-24 10:02:12
|
that combined with knusperli 2.0 would make a great jpeg decoder
|
|
|
YAGPDB.xyz
|
2021-06-25 01:06:48
|
Don't let the bastards grind you down.
|
|
2021-06-25 02:11:09
|
When something goes wrong in life, just shout "plot twist!" and carry on.
|
|
|
|
guesst
|
2021-06-26 08:28:46
|
How can we use JxlEncoderSetColorEncoding to make no color transformation? So when using fp values they are not transformed in encoding / decoding ?
|
|
|
_wb_
|
2021-06-26 08:33:38
|
When you set it to do lossless it should automatically do that iirc. <@768090355546587137> ?
|
|
|
|
guesst
|
2021-06-26 08:35:24
|
Using JxlColorEncodingSetToLinearSRGB appears to work
|
|
|
Eugene Vert
|
2021-06-26 12:15:03
|
Issue 299 "GIMP plugin fails to open JXLs which were transcoded from JPEG"
Sometimes GIMP returns a blank canvas / error even for non-transcoded images
Some examples for this issue (jxl&orig) https://drive.google.com/file/d/1T5X-Cp02JaC50jdKrGhUSIrAJIdHGj8X/view
|
|
|
retr0id
|
2021-06-27 09:52:07
|
I think the TOC structure could be abused to create ambiguously parseable files. i.e. if the offsets in the TOC don't actually correspond to the real offsets in the bitstream.
To mitigate this, when the bitstream is being parsed sequentially (i.e. not using the TOC to skip ahead), checks could/should be added after each section has been parsed, to check that it actually corresponds to the size/offset declared in the TOC.
|
|
2021-06-27 09:59:48
|
(It's possible that there are already sufficient checks in place, I haven't explored the codebase enough yet tbh)
|
|
|
_wb_
|
2021-06-27 06:01:51
|
I think we already do that, not sure though. I think we do allow sections to have padding at the end, but we don't allow them to be longer than what the TOC says. Not sure though.
|
|
|
ishitatsuyuki
|
2021-06-28 04:43:27
|
any chance that CodecInOut and related APIs would become public? These APIs feel like a good "advanced mode" API where greater control over the codec metadata is possible, but they are currently internal only...
|
|
|
_wb_
|
2021-06-28 07:32:28
|
not sure if that's a good idea, we're not trying to be a substitute for imagemagick, libvips and co
|
|
|
ishitatsuyuki
|
2021-06-28 08:52:46
|
fair, some of the CodecIO APIs do something similar to libvips so those would be redundant to expose. probably the current API would work well with some additional metadata supported (like the frame one I proposed before)
|
|
|
_wb_
|
2021-06-28 09:22:54
|
I think we need an api option to return zero duration frames separately and without blending, with the info about crop coords and blendmode returned too somehow
|
|
2021-06-28 09:23:48
|
Not needed for viewers or browsers, but necessary for things like gimp or imagemagick integration if we want to properly do layers there
|
|
2021-06-29 08:33:14
|
sigh
|
|
2021-06-29 08:34:34
|
for some ridiculous reason, when using floats, apparently lcms2 uses the range 0..100 for CMYK, where 100 means max-ink (which is not the convention in CMYK, usually 0 means max-ink there)
|
|
2021-06-29 08:34:58
|
for RGB it's 0..1 which is the conventional thing to do
|
|
2021-06-29 08:35:08
|
but for CMYK they don't
|
|
2021-06-29 08:35:50
|
skcms just does the sensible thing, which is also how we store it in the bitstream: 0..1 where 0 means max-ink in the case of CMYK (and 1 means max-light in the case of RGB)
|
|
2021-06-29 08:37:04
|
it would have been less annoying if this was mentioned anywhere in the documentation or something
|
|
2021-06-29 08:38:23
|
I just had to guess - 0..1 was way too faint, scaling to 0..255 was inverted, 255 * (1 - val) was too dark, 100 * (1 - val) looked perfect
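The mapping that turned out to work, written out as hypothetical helpers (the 0..1, 0-means-max-ink convention is the jxl bitstream one described above; the 0..100, 100-means-max-ink one is what lcms2 apparently expects for float CMYK):

```cpp
#include <cassert>

// Hypothetical helpers for the scaling described above: the jxl bitstream
// stores CMYK as 0..1 with 0 = maximum ink, while lcms2's float CMYK wants
// 0..100 with 100 = maximum ink.
double BitstreamToLcms(double v) { return 100.0 * (1.0 - v); }
double LcmsToBitstream(double v) { return 1.0 - v / 100.0; }
```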
|
|
|
|
paperboyo
|
|
_wb_
for some ridiculous reason, when using floats, apparently lcms2 uses the range 0..100 for CMYK, where 100 means max-ink (which is not the convention in CMYK, usually 0 means max-ink there)
|
|
2021-06-29 12:20:08
|
In all the tools I know, 100 is max ink in CMYK.
|
|
|
_wb_
|
2021-06-29 01:27:56
|
yes, in a UI that makes sense
|
|
2021-06-29 01:28:07
|
not so much in a bitstream
|
|
|
|
paperboyo
|
2021-06-29 01:28:37
|
Clear!
|
|
|
_wb_
|
2021-06-29 01:30:55
|
the number 100 is not a very round number for binary arithmetic
|
|
2021-06-29 01:33:57
|
PSD also has some instances where it uses a 0..100 scale, it is very confusing. For things like transparency, it sometimes does 0 = transparent, 255 = opaque, sometimes 0 = opaque, 255 = transparent, and sometimes 0 = transparent, 100 = opaque
|
|
2021-06-29 01:36:31
|
PSD spec is madness
|
|
2021-06-29 01:38:46
|
|
|
2021-06-29 01:39:14
|
do you smell the elegance of it all?
|
|
|
|
paperboyo
|
2021-06-29 01:45:04
|
https://github.com/gco/xee/blob/master/XeePhotoshopLoader.m#L108
(and I just ventured there once, to figure out if I can sniff easily when layered images contain fancy stuff that will prevent me from merging them via ImageMagick… still waking up in the middle of the night…)
|
|
2021-06-29 01:49:13
|
So I strongly recommend not looking into supporting Photoshop and/or TIFF layers in JXL until anno domini 2177…
|
|
2021-06-29 01:54:49
|
Even just figuring out when it's safe to treat a composite layer as a 1:1 proxy for the flattened file is far from straightforward (probs that's why not even Adobe's own InDesign does that…): https://github.com/guardian/grid/issues/2395#issuecomment-569743194
|
|
|
_wb_
|
2021-06-29 02:03:01
|
TIFF layers are basically a regular TIFF for the "compatibility" merged image, and then a huge "metadata" blob that contains a PSD file
|
|
2021-06-29 02:04:12
|
I discovered that the hard way because libtiff assumes metadata is small so loads it into memory even when you just want to know the basic image info
|
|
2021-06-29 02:05:29
|
PSD is an ugly beast that changes every year
|
|
2021-06-29 02:07:08
|
I hope for the basic purpose of multi-layer images (without fancy blend modes and layer effects, just alpha-blended layers), jxl can fill a gap so we finally have a standardized interchange format for that
|
|
|
|
paperboyo
|
|
_wb_
TIFF layers are basically a regular TIFF for the "compatibility" merged image, and then a huge "metadata" blob that contains a PSD file
|
|
2021-06-29 02:08:03
|
> regular TIFF for the "compatibility" merged image
Worth knowing that **by default** this composite layer doesn't contain transparency for transparent images. So it's basically useless… (https://feedback.photoshop.com/conversations/photoshop/photoshop-default-tiff-save-options-to-be-save-transparency-on/5f5f46104b561a3d42719a47). PSD has this composite correct by default; one needs to actively change the preference to not save it.
|
|
|
_wb_
|
2021-06-29 02:08:41
|
the confusion about transparency in Photoshop land is enormous
|
|
2021-06-29 02:09:08
|
doesn't help that they have about 5 different notions of it
|
|
2021-06-29 02:09:36
|
alpha, opacity, masks, selections, clipping paths
|
|
|
|
paperboyo
|
|
_wb_
I hope for the basic purpose of multi-layer images (without fancy blend modes and layer effects, just alpha-blended layers), jxl can fill a gap so we finally have a standardized interchange format for that
|
|
2021-06-29 02:09:53
|
> for the basic purpose of multi-layer images
For a converter/encoder sitting outside Photoshop's guts (probably even as a plugin), distinguishing those safe _basic_ ones from the ones that won't work may be a major pain (if possible at all).
|
|
2021-06-29 02:10:22
|
6th: clipping group 🙂
|
|
|
_wb_
|
2021-06-29 02:16:16
|
I think it's possible to come up with a criterion of what would work:
- no layer masks
- no layer fx
- blendmode has to be default (I suppose we could also support Add and Multiply technically, but meh)
- no text layers (have to rasterize them)
- no vector layers or whatever other weird stuff they have
- layer groups get lost (I suppose we could support it in the same way they are implemented in PSD: empty layers with a naming convention for "start group" and "end group", but we'd have to signal it a bit differently, 0x0 is not a valid layer dimension in jxl)
- image has to be RGB, Grayscale or CMYK (I suppose we could also do Lab via a trivial ICC profile, but not stuff like DuoTone and whatever)
|
|
|
|
paperboyo
|
|
_wb_
I think it's possible to come up with a criterion of what would work:
- no layer masks
- no layer fx
- blendmode has to be default (I suppose we could also support Add and Multiply technically, but meh)
- no text layers (have to rasterize them)
- no vector layers or whatever other weird stuff they have
- layer groups get lost (I suppose we could support it in the same way they are implemented in PSD: empty layers with a naming convention for "start group" and "end group", but we'd have to signal it a bit differently, 0x0 is not a valid layer dimension in jxl)
- image has to be RGB, Grayscale or CMYK (I suppose we could also do Lab via a trivial ICC profile, but not stuff like DuoTone and whatever)
|
|
2021-06-29 03:08:30
|
Hmm… This list can never be fully exhaustive (what about future features?). I'm not sure the effort here is worth it. I can see two potential uses for saving/converting layered content to JXL:
1. a full "lossless" replacement that can be turned back into a PSD
2. a "lossy" subset of the layered image
Even forgetting the problems with automatically distinguishing between the two, 1 is not possible most of the time: I don't think many working layered files use such a subset of features that they could be saved in JXL with full interoperability in mind.
2 can be useful for specific **export**-to-web tasks: a lossless "txt" layer over a lossy background etc. JXL properly integrated into Photoshop's Export for Screens could be useful, but would be niche, I think. And I would count on Adobe doing it (since they already signalled interest). Hopefully it will be done better than bundling an outdated pngquant without proper colour support…
|
|
|
_wb_
|
2021-06-29 05:00:23
|
1. I think is technically possible, basically have the merged image as a jxl, then in the container have a box (`psdr`, for psd reconstruction data) with another jxl containing all the layer pixel data, plus a brotli-compressed chunk that contains all the 8BIM blobs (it doesn't need to know what they mean). This could then be reconstructed to the exact same PSD in a conceptually similar way as what we do with JPEG bitstreams.
2. This is probably more useful, it's not aiming to replace PSD but can be useful for the web (modular text over a vardct photo is one example), and also for having a way to interchange basic multi-layer images in an interoperable way without having to merge the layers. It's not as complete as PSD (or XCF), but it's a lot more editable than a merged image, and for some use cases you don't really need more than this -- e.g. a typical example I often see are product images consisting of a background layer, a shadows layer, a layer for the product itself and maybe a few text overlays, and this one asset is then used as a source in multiple contexts using different selections of layers.
|
|
|
Cagelight
|
2021-06-29 10:41:04
|
I'm trying to write a JXL screenshot function in a game, but I'm having some issues with the color encoding. The image data is taken from OpenGL buffers, and when I use JXL's `JxlColorEncodingSetToSRGB` with `JxlEncoderSetColorEncoding`, the image comes out very dark. I'm fairly out of my depth here as far as "color encoding" and that kind of stuff goes, as neither libpng nor libjpeg require me to worry about it. Any advice?
|
|
|
spider-mario
|
2021-06-29 10:49:44
|
it could be that the data that you are getting from OpenGL is in linear light, without the sRGB transfer function applied
|
|
|
Cagelight
|
2021-06-29 10:50:48
|
I thought it might be something like that, but libjxl only has `JxlColorEncodingSetToSRGB` and `JxlColorEncodingSetToLinearSRGB`, both of which are sRGB and produce a dark image
|
|
2021-06-29 10:50:53
|
so I'm unsure where to go from here
|
|
|
spider-mario
|
2021-06-29 10:51:36
|
both use the sRGB primaries but `JxlColorEncodingSetToLinearSRGB` does not use the sRGB transfer function
|
|
2021-06-29 10:51:41
|
I think it's worth a try
|
|
2021-06-29 10:51:58
|
"full" sRGB has both parts
|
|
2021-06-29 10:55:42
|
primaries = "(1, 0, 0) is specifically this red, (0, 1, 0) is that green, and (0, 0, 1) is this particular blue"
sRGB transfer function = "(0.46, 0.46, 0.46) actually represents 18% of the maximum light output, not 46%"
|
|
2021-06-29 10:56:24
|
"linear sRGB" = sRGB primaries (red, green, blue), but linear transfer function ((0.18, 0.18, 0.18) = 18% of maximum output)
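For reference, the standard sRGB transfer function (linear light to encoded value) that distinguishes "full" sRGB from linear sRGB; 0.18 linear encodes to roughly 0.46:

```cpp
#include <cassert>
#include <cmath>

// The standard sRGB transfer function: linear light in [0, 1] -> encoded
// value, with a short linear segment near black and a 2.4 power curve above.
double LinearToSrgb(double x) {
  return x <= 0.0031308 ? 12.92 * x
                        : 1.055 * std::pow(x, 1.0 / 2.4) - 0.055;
}
```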
|
|
|
Cagelight
|
2021-06-29 10:57:21
|
another curious possibly related issue is that `djxl` errors out trying to decode the produced JXL, but the Qt JXL plugin I'm using to load it in image viewers and thumbnails has no issues
|
|
|
spider-mario
|
2021-06-29 11:02:46
|
would you perhaps happen to have such a jxl file at hand?
|
|
|
Cagelight
|
2021-06-29 11:03:14
|
|
|
2021-06-29 11:03:35
|
same JXL for both issues, too dark and doesn't work in `djxl`
|
|
2021-06-29 11:06:32
|
ah wait, silly me, I write the entire buffer instead of just the consumed buffer, so the file has lots of extraneous data at the end
|
|
2021-06-29 11:06:51
|
I guess that breaks `djxl`
|
|
|
spider-mario
|
2021-06-29 11:14:20
|
does `JxlColorEncodingSetToLinearSRGB` seem to fix the darkness issue?
|
|
2021-06-29 11:15:14
|
actually, I guess I can check whether it should do so by decoding (after relaxing the decoder) and assigning a different profile
|
|
|
Cagelight
|
2021-06-29 11:16:06
|
it did not for me
|
|
2021-06-29 11:16:15
|
the two encodings seemed to produce largely identical images
|
|
|
spider-mario
|
2021-06-29 11:19:39
|
my test suggests that it should have made it look more like this:
|
|
|
Cagelight
|
2021-06-29 11:19:47
|
oh actually, now that extraneous data was removed, `JxlColorEncodingSetToSRGB` produces a correct image, so maybe it was just the buffer issue on my end
|
|
2021-06-29 11:21:23
|
and yeah about that, it's upside down because of the way OpenGL buffers are oriented, so I tried setting JxlBasicInfo's orientation to `JXL_ORIENT_FLIP_VERTICAL` but nothing changed, that's another thing I'm unsure about
|
|
2021-06-29 11:23:26
|
I could of course just flip the lines as I do with libjpeg and libpng, but thought that was worth mentioning
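The manual workaround is simple enough; a sketch of an in-place row flip for an interleaved buffer (the function name and signature here are made up for illustration, not a libjxl API):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// In-place vertical flip of an interleaved pixel buffer, e.g. to turn the
// bottom-up row order of glReadPixels output into top-down before encoding.
void FlipRowsVertically(std::vector<uint8_t>& buf, int width, int height,
                        int channels) {
  const int stride = width * channels;
  for (int y = 0; y < height / 2; ++y) {
    std::swap_ranges(buf.begin() + y * stride,
                     buf.begin() + (y + 1) * stride,
                     buf.begin() + (height - 1 - y) * stride);
  }
}
```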
|
|
|
ishitatsuyuki
|
2021-06-30 06:29:11
|
opengl buffers are flipped? didn't know that
|
|
|
_wb_
|
2021-06-30 06:49:37
|
BMP does that too
|
|
2021-06-30 06:49:48
|
I wonder why, it seems inconvenient
|
|
|
Cagelight
|
2021-06-30 06:17:19
|
hate to be such a bother, but in addition to the `JXL_ORIENT_FLIP_VERTICAL` issue above, there seems to be another discrepancy...
|
|
2021-06-30 06:17:46
|
the JXL produced by `libjxl` is almost twice as big as the one produced by `cjxl` with identical distance and effort settings
|
|
|
_wb_
|
2021-06-30 06:19:05
|
that doesn't seem right
|
|
|
Cagelight
|
2021-06-30 06:19:32
|
I took the one produced by `libjxl` (`JxlEncoderOptionsSetDistance(m_opt, 0)` and `JxlEncoderOptionsSetEffort(m_opt, 3)`), decoded it to a png, and re-encoded with `cjxl -d 0 -e 3`
|
|
2021-06-30 06:19:53
|
4.7 MiB vs 2.6 MiB
|
|
|
_wb_
|
2021-06-30 06:20:57
|
uh
|
|
2021-06-30 06:21:09
|
probably different bitdepth?
|
|
|
Cagelight
|
2021-06-30 06:21:54
|
they should both be 8 unless I messed something up, `m_info.bits_per_sample = 8;`
|
|
2021-06-30 06:22:40
|
I'll just post this:
```CPP
m_enc = JxlEncoderCreate(nullptr);
assert(m_enc);
m_opt = JxlEncoderOptionsCreate(m_enc, nullptr);
if (quality < 0) quality = 0;
JXLEE(JxlEncoderOptionsSetDistance(m_opt, quality));
JXLEE(JxlEncoderOptionsSetLossless(m_opt, quality == 0 ? JXL_TRUE : JXL_FALSE));
JXLEE(JxlEncoderOptionsSetEffort(m_opt, 3));
static constexpr JxlPixelFormat PFMT {
.num_channels = 3,
.data_type = JXL_TYPE_UINT8,
.endianness = JXL_NATIVE_ENDIAN,
.align = 0
};
m_info.xsize = width;
m_info.ysize = height;
m_info.bits_per_sample = 8;
m_info.num_color_channels = 3;
m_info.intensity_target = 255;
m_info.orientation = JXL_ORIENT_IDENTITY;
JXLEE(JxlEncoderSetBasicInfo(m_enc, &m_info));
JxlColorEncoding color_profile;
JxlColorEncodingSetToSRGB(&color_profile, JXL_FALSE);
JXLEE(JxlEncoderSetColorEncoding(m_enc, &color_profile));
JXLEE(JxlEncoderAddImageFrame(m_opt, &PFMT, buf, width * height * 3));
JxlEncoderCloseInput(m_enc);
```
|
|
|
_wb_
|
2021-06-30 06:25:33
|
🤔
|
|
2021-06-30 06:27:44
|
<@!768090355546587137> any idea what this could be? I haven't tried the encode api yet but something seems weird here
|
|
|
retr0id
|
2021-06-30 07:22:52
|
<@!78807526392475648> could you share your output files?
|
|
|
Cagelight
|
2021-06-30 07:23:17
|
|
|
2021-06-30 07:23:19
|
|
|
|
retr0id
|
2021-06-30 07:27:16
|
```diff
- xyb_encoded = True,
+ xyb_encoded = False,
```
|
|
2021-06-30 07:27:20
|
only difference I can see
|
|
2021-06-30 07:27:28
|
looking at the headers, that is
|
|
|
Cagelight
|
2021-06-30 07:28:53
|
yeah unfortunately I don't know what "xyb" means here, and "xyb" doesn't appear anywhere in `encode.h` so I'm not sure if it's something I even have control over from my end
|
|
|
retr0id
|
2021-06-30 07:29:52
|
It's a colourspace - as for how to control it, or why it's resulting in such different file sizes, idk
|
|
2021-06-30 07:30:12
|
(I'm not a libjxl dev, hopefully someone knows)
|
|
|
Cagelight
|
2021-06-30 07:31:09
|
if it is indeed related to colorspaces, and resulting in the drastic file size issues, then perhaps it might be some sort of bug with `JxlColorEncodingSetToSRGB`?
|
|
2021-06-30 07:31:12
|
just a shot in the dark of course
|
|
|
retr0id
|
2021-06-30 07:33:34
|
btw, it's the libjxl one that's using xyb
|
|
2021-06-30 07:37:14
|
fwiw, they both have identical FrameHeaders
|
|
|
Cagelight
|
2021-06-30 07:39:21
|
another thing perhaps worth mentioning is that both `cjxl` and `libjxl` take roughly the same amount of time on their respective effort settings
|
|
2021-06-30 07:41:54
|
and also just to clarify, I also tried effort 5 and 7, and this issue of the libjxl file being larger persisted on every effort setting
|
|
|
_wb_
|
2021-06-30 08:14:19
|
Why is it using xyb when doing lossless? That doesn't make sense
|
|
2021-06-30 08:14:32
|
Can you open an issue? It's an api bug
|
|
2021-06-30 08:15:17
|
Xyb will not be lossless, but it will have a lot of precision, so it will also not be far from lossless while using way too many bits
|
|
2021-06-30 08:16:20
|
Basically in modular it would convert 8-bit rgb to something like 14-bit xyb
|
|
2021-06-30 08:17:05
|
That should normally only happen if you're going to do lossy modular and the xyb will get quantized anyway
|
|
|
Cagelight
|
2021-06-30 08:17:31
|
that must be it, because the cjxl output is roughly 8/14 the size of the libjxl output
|
|
|
_wb_
|
2021-06-30 08:18:26
|
For lossless it doesn't make sense because even with that precision, it's not perfectly reversible, the result of rgb -> xyb -> rgb will have some rounding errors
|
|
|
|
Deleted User
|
2021-06-30 08:19:41
|
Even with 14 bits of XYB precision?
|
|
2021-06-30 08:19:49
|
For 8 bit source?
|
|
|
_wb_
|
2021-06-30 08:20:03
|
I would expect that when you SetLossless it also changes the encoding to rgb instead of xyb
|
|
2021-06-30 08:20:56
|
Yes, after the gamma changes and all, it's not reversible
|
|
2021-06-30 08:21:04
|
Not that you will see any difference
|
|
2021-06-30 08:22:16
|
Main errors will be where RGB is a bit redundant, I assume
|
|
2021-06-30 08:23:32
|
It's quite impossible to do color conversions that are not extremely simple like YCoCg in a reversible way
|
|
2021-06-30 08:23:39
|
In integer arithmetic I mean
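For contrast with XYB, here is YCoCg-R, one of those "extremely simple" integer transforms that is exactly reversible, because the inverse undoes the same truncating shifts in reverse order (a standalone sketch, not libjxl's RCT code):

```cpp
#include <cassert>

// YCoCg-R: a lossless integer color transform. The forward and inverse
// directions use identical (co >> 1) and (cg >> 1) terms, so the truncation
// cancels out exactly on the round trip -- unlike RGB -> XYB, which involves
// a matrix and a cube root and cannot round-trip exactly in integers.
struct YCoCg { int y, co, cg; };

YCoCg ForwardYCoCgR(int r, int g, int b) {
  const int co = r - b;
  const int t = b + (co >> 1);
  const int cg = g - t;
  return {t + (cg >> 1), co, cg};
}

void InverseYCoCgR(const YCoCg& c, int* r, int* g, int* b) {
  const int t = c.y - (c.cg >> 1);
  *g = c.cg + t;
  *b = t - (c.co >> 1);
  *r = *b + c.co;
}
```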
|
|
2021-06-30 08:24:42
|
Everything is reversible with mathematical real numbers, but of course that is a very imaginary concept
|
|
2021-06-30 08:27:50
|
I have never seen a real number implemented in practice that isn't a relatively simple rational number (say something of the form a/b where a and b are smaller than 2^2^100)
|
|
|
Cagelight
|
2021-06-30 08:27:59
|
I hope I did the issue right
|
|
|
_wb_
|
2021-06-30 08:28:41
|
It's time the encode api gets some attention, we have been focusing a lot on decode (for browsers)
|
|
2021-06-30 08:29:34
|
Thanks for finding bugs!
|
|
|
|
veluca
|
|
_wb_
Yes, after the gamma changes and all, it's not reversible
|
|
2021-06-30 08:48:14
|
are you actually sure it wouldn't be reversible with floats and std::round? it seems like it should be precise enough...
|
|
|
_wb_
|
2021-06-30 09:04:51
|
with 32-bit floats? yes then it's probably precise enough for 8-bit ints
|
|
|
Pieter
|
2021-06-30 10:57:08
|
32-bit floats have 24 bits of precision, so I would think that integers up to 2^24 can be represented exactly in it
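This is easy to confirm: casting through `float` round-trips every integer up to 2^24 but not 2^24 + 1 (a small standalone check, not libjxl code):

```cpp
#include <cassert>

// A 32-bit float has a 24-bit significand, so every integer up to 2^24 is
// exactly representable; 2^24 + 1 is the first one that is not (it rounds
// back to 2^24).
bool ExactInFloat(long long n) {
  const float f = static_cast<float>(n);
  return static_cast<long long>(f) == n;
}
```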
|
|
|
Cagelight
|
2021-07-01 02:09:20
|
after investigating the libjxl code, I figured out that I can explicitly set `uses_original_profile = JXL_TRUE;` to force it to use RGB encoding because of `enc->metadata.m.xyb_encoded = !info->uses_original_profile;`, is this intended behavior?
|
|
|
_wb_
|
|
Pieter
32-bit floats have 24 bits of precision, so I would think that integers up to 2^24 can be represented exactly in it
|
|
2021-07-01 06:06:22
|
Correct. I was talking about 8-bit sRGB, convert to float linear RGB, then matrix multiply to get something like LMS, then cube root, to get XYB, and then doing all that in reverse. Do you get the same 8-bit sRGB values out? I said probably but I am not sure.
|
|
|
Pieter
|
2021-07-01 06:10:04
|
Right, I see.
|
|
|
_wb_
|
2021-07-01 07:06:03
|
Probably best to make -j implicit when target_bpp is used
|
|
|
|
Deleted User
|
2021-07-01 12:25:34
|
https://github.com/libjxl/libjxl/blob/9d50ae080734/lib/jxl/enc_frame.cc#L1370
> lies **closet** to
Maybe make a list of typos in comments and, when enough of them accumulate, make a single commit that fixes them?
|
|
2021-07-01 12:31:43
|
I may be wrong, but I don't think that codespell would detect *that* particular typo because "closet" is actually a valid English word, e.g. used in the famous phrase "come out of the closet".
|
|
|
spider-mario
|
2021-07-01 12:33:42
|
something like LanguageTool might detect it if we somehow pre-digest the data appropriately for its consumption
|
|
2021-07-01 12:34:05
|
(removing `//`, this sort of thing)
|
|
|
|
Deleted User
|
2021-07-01 01:26:26
|
https://github.com/libjxl/libjxl/blob/fc564718416da689629/deps.sh#L62
Isn't that supposed to be a comment?
|
|
|
improver
|
2021-07-01 01:33:05
|
looks like an informational message explaining what is happening and what command is getting run
|
|
|
|
Deleted User
|
2021-07-01 01:33:53
|
So why is there no `echo`?
|
|
2021-07-01 01:34:06
|
Is that even allowed?
|
|
|
spider-mario
|
2021-07-01 01:34:25
|
it's a heredoc
|
|
2021-07-01 01:35:44
|
the relevant feature here is actually from the shell
|
|
|
|
Deleted User
|
2021-07-01 01:36:02
|
Damn, I didn't know you can do such things
|
|
|
improver
|
2021-07-01 01:36:47
|
cat echoes stdin, and stdin in this case comes from bash using special syntax
|
|
|
spider-mario
|
2021-07-01 01:36:49
|
I mean the very syntax to pass a big string as stdin
|
|
|
improver
|
2021-07-01 01:37:15
|
or not an experienced one
|
|
|
Damn, I didn't know you can do such things
|
|
2021-07-01 01:38:23
|
https://stackoverflow.com/questions/2500436/how-does-cat-eof-work-in-bash — an explanation in case you're interested
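For a minimal illustration of the feature being discussed (the message text here is made up, not from `deps.sh`): everything between `<<EOF` and the line containing only `EOF` is fed to the command's stdin.

```shell
# A heredoc: the shell feeds the lines between <<EOF and the lone EOF
# line to cat's stdin, so this prints a multi-line message with no echo.
cat <<EOF
About to configure the build with:
  cmake -B build -G Ninja
EOF
```

This is why the script can print a block of informational text without quoting or multiple `echo` calls.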
|
|
|
|
Deleted User
|
2021-07-01 01:39:00
|
<@456226577798135808>
> You are clearly a non-Linux user then
I use Linux, just not that often (I'm on WSL2 on Win10)
|
|
|
improver
|
2021-07-01 01:39:35
|
doesn't count, so the comment is on-point :v)
|
|
2021-07-01 01:40:40
|
tbh it's more of a shell wizardry thing than a Linux thing; I'm sure many Linux users who aren't into that probably don't know all its tricks
|
|
|
monad
|
2021-07-01 02:44:17
|
Meh, I've been using Linux nearly exclusively for over 10 years and only used a heredoc once.
|
|
|
spider-mario
|
2021-07-01 02:58:00
|
I think I've used them once or twice, but actually more often in perl than in *sh
|
|
|
|
Deleted User
|
2021-07-01 03:07:29
|
Big thanks for that tool, it's awesome! My PR has just gone live
|
|
|
_wb_
|
2021-07-02 05:08:47
|
I've been adding a lot of test cases to https://github.com/libjxl/conformance
|
|
2021-07-02 05:08:54
|
it's JPEG meeting next week
|
|
2021-07-02 05:09:07
|
we need to work on Part 3 - conformance
|
|
|
|
Deleted User
|
2021-07-02 06:48:33
|
> ```cpp
> // TODO(deymo): Remove "gradient" once all clients stop setting this value.
> // This flag is already deprecated and is unused in the encoder.
> Override gradient = Override::kOff;
> ```
Can it be removed now?
https://github.com/libjxl/libjxl/blob/f9bab66674d270fd6a0e73759f9887a668c6d287/lib/jxl/enc_params.h#L156-L158
|
|
|
_wb_
|
2021-07-02 06:51:42
|
I think so
|
|
2021-07-02 06:51:47
|
This is something ancient now
|
|
|
|
Deleted User
|
2021-07-02 07:34:55
|
What was it even supposed to do?
|
|
|
|
veluca
|
2021-07-02 10:12:05
|
it was a banding mitigation system, there's a different one now
|
|
|
eclipseo
|
2021-07-04 07:58:20
|
Is there a new release planned for 2021?
|
|
|
eddie.zato
|
2021-07-09 02:54:54
|
No more temp files, I've already checked <:FeelsAmazingMan:808826295768449054>
https://github.com/libjxl/libjxl/pull/300
|
|
2021-07-09 07:05:42
|
The `tools/CMakeLists.txt` file has a `JPEGXL_VERSION` flag. But if you try to set `-DJPEGXL_VERSION` to some value when building `libjxl`, it will be ignored.
|
|
2021-07-09 07:05:53
|
In `build.ninja` this flag is always set to `(unknown)`. So I started replacing it with the last commit hash, using `sed` after `cmake` and before `ninja`. Now my binaries can show which commit they were built from.
|
|
2021-07-09 07:06:01
|
```
PS > cjxl -V
cjxl v0.3.7 8119fe5 [SSE4]
Copyright (c) the JPEG XL Project
```
|
|
2021-07-09 07:06:31
|
The question is, can the devs make it so that building doesn't ignore the `-DJPEGXL_VERSION` flag? Or is this flag used for something else?
|
|
|
|
Deleted User
|
|
eddie.zato
In `build.ninja` this flag is always set to `(unknown)`. So I started replacing it with the last commit hash, using `sed` after `cmake` and before `ninja`. Now my binaries can show which commit they were built from.
|
|
2021-07-09 08:54:39
|
What's your exact command?
|
|
|
eddie.zato
```
PS > cjxl -V
cjxl v0.3.7 8119fe5 [SSE4]
Copyright (c) the JPEG XL Project
```
|
|
2021-07-09 08:55:43
|
Nice! Would it be possible to change the commit hash to an 8-character one?
|
|
|
eddie.zato
|
2021-07-09 09:01:32
|
```Bash
#!/bin/bash
export CC=clang CXX=clang++
cd libjxl
git pull
git submodule update --init --recursive
version=$(git describe --always)
rm -rf build
cmake -B build -G Ninja -S ./ \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_TESTING=OFF \
-DJPEGXL_ENABLE_DEVTOOLS=ON \
-DJPEGXL_ENABLE_OPENEXR=OFF \
-DJPEGXL_ENABLE_SKCMS=OFF \
-DJPEGXL_ENABLE_BENCHMARK=ON \
-DJPEGXL_WARNINGS_AS_ERRORS=OFF \
-DJPEGXL_STATIC=ON \
-DCMAKE_C_FLAGS='-O3 -march=native' \
-DCMAKE_CXX_FLAGS='-ffunction-sections -fdata-sections -O3 -march=native' \
-DCMAKE_EXE_LINKER_FLAGS='-Wl,--gc-sections -Wl,--no-export-dynamic'
sed -i -e 's#(unknown)#'"$version"'#' build/build.ninja
ninja -C build
strip -s build/tools/{cjxl.exe,djxl.exe,butteraugli_main.exe,ssimulacra_main.exe}
```
|
|
2021-07-09 09:02:19
|
To get the commit hash, I just use `version=$(git describe --always)` without any modifications.
|
|
2021-07-09 09:18:02
|
In fact, you can write anything you want:
```
$ libjxl/build/tools/cjxl -V
cjxl v0.3.7 8119fe5_super_puper_duper_build [SSE4]
Copyright (c) the JPEG XL Project
```
|
|
2021-07-10 03:48:07
|
To use the `-DJPEGXL_VERSION` flag when building, simply define the `JPEGXL_VERSION` variable in `CMakeLists.txt`.
Something like this:
```
set(JPEGXL_VERSION "" CACHE STRING
"JPEGXL version.")
```
Then use it in your building script:
```Bash
version=$(git describe --always)
cmake -DJPEGXL_VERSION="$version"
```
|
|
2021-07-10 05:03:55
|
This looks like a bug. The flag is used in `ci.sh`, but it will be ignored because the variable isn't declared in `CMakeLists.txt`.
|
|
|
|
Deleted User
|
2021-07-12 01:41:02
|
<@!179701849576833024> What percentage of the decoding cost does noise currently account for?
|
|
|
|
veluca
|
2021-07-12 01:41:49
|
after my PR that enables SIMD, I think it makes things about 30% slower or so
|
|
2021-07-12 01:42:02
|
(as opposed to ~2x from before)
|
|
2021-07-12 01:42:13
|
but there's still improvements to be done
|
|
|
|
Deleted User
|
|
BlueSwordM
|
2021-07-12 03:51:24
|
So, from what I've seen in the recent patches that Jyrki has introduced, I'm guessing the main improvements are being done toward anything related to edges, correct?
|
|
2021-07-12 03:52:00
|
So, why not force a split into smaller blocks where the currently analyzed partition contains at least one sub-block with very low variance and at least one with much higher variance, to actually prevent ringing propagation?
|
|
|
|
veluca
|
2021-07-12 06:38:59
|
I did want to try something like that, haven't had time yet
|
|
|
yurume
|
2021-07-13 01:03:01
|
IIUC libjxl doesn't directly expose any interface to retrieve TOC, right? (I was thinking of calculating the prefix size required for given downsampled images and/or LQIP.)
|
|
|
_wb_
|
2021-07-13 04:34:58
|
not yet but it should have something like that imo
|
|
|
|
Deleted User
|
2021-07-16 09:06:30
|
Will there be more noise types other than photon_noise? I took a somewhat closer look and noticed that the setting produces a bit too little noise in dark areas and a bit too much noise in bright areas for my liking.
|
|
|
spider-mario
|
|
Will there be more noise types other than photon_noise? I took a somewhat closer look and noticed that the setting produces a bit too little noise in dark areas and a bit too much noise in bright areas for my liking.
|
|
2021-07-16 09:49:42
|
you may want to increase `kInputReferredReadNoise` and decrease `kPhotoResponseNonUniformity`
|
|
2021-07-16 09:52:35
|
the noise modeled by `enc_photon_noise.cc` is a quadrature sum of three noise regimes: read noise (constant standard deviation, so most noticeable in the shadows), photon shot noise (proportional to the square root of the signal, dominant over a large range), and photo response non-uniformity (linear in the signal, dominant in the bright tones)
|
|
2021-07-16 09:53:40
|
I decided that 3 electrons rms of read noise and 0.5% PRNU were reasonable defaults but they are not the only possibility
|
|
2021-07-16 09:54:09
|
(although PRNU in most cameras is more often higher than that rather than lower)
|
|
2021-07-16 09:55:50
|
(over more pixels than typical internet images, though, so it averages out)
|
|
2021-07-16 10:05:01
|
the read noise for a given camera at a given ISO setting can be found here: https://www.photonstophotos.net/Charts/RN_e.htm
|
|
2021-07-16 10:05:27
|
and PRNU here: https://www.photonstophotos.net/Charts/PTC.htm
|
|
2021-07-16 10:07:59
|
in fact, that last link also illustrates the model with its three regimes (the three grey lines)
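The three-regime model described above can be sketched numerically. This is only an illustration of the quadrature sum as explained in this discussion, using the 3 e⁻ read noise and 0.5% PRNU defaults mentioned; the function and constant names are made up for the example, not the libjxl API.

```python
import math

READ_NOISE_E = 3.0   # read noise in electrons rms (constant floor)
PRNU = 0.005         # photo response non-uniformity, as a fraction

def noise_stddev_e(signal_e):
    """Total noise (in electrons) for a mean signal of signal_e electrons."""
    read_var = READ_NOISE_E ** 2          # dominant in the shadows
    shot_var = signal_e                   # Poisson shot noise: variance == mean
    prnu_var = (PRNU * signal_e) ** 2     # dominant in the highlights
    return math.sqrt(read_var + shot_var + prnu_var)

for s in (1, 1_000, 1_000_000):          # shadows, midtones, highlights
    print(f"{s:>9} e- signal -> {noise_stddev_e(s):10.1f} e- noise")
```

Raising the read-noise constant adds noise in the shadows, while lowering the PRNU constant removes it from the highlights, which matches the tuning advice given above.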
|
|
|
|
Deleted User
|
|
spider-mario
you may want to increase `kInputReferredReadNoise` and decrease `kPhotoResponseNonUniformity`
|
|
2021-07-16 11:27:17
|
Can I simply do a pull request with my preferred values?
|
|
|
diskorduser
|
2021-07-17 05:25:52
|
Is it better?
|
|