JPEG XL

contact-devs

Bring stuff to the attention of jxl devs here

Demiurge
2023-01-19 07:31:53
Maybe the 16-bit depth is also somehow confusing the encoder.
afed
2023-01-19 07:53:09
Yes, perhaps it reduced the bpc to 8 after editing, but it still looks like a screenshot bloated to 16 bpc, taken from lossy video that doesn't have 16 bpc in the first place. As already mentioned, it would be good to have a direct path to yuv420 for such cases.
Fox Wizard
2023-01-19 10:30:44
Guess at least ``-e X -q 100 -I 100 -E 3 -g 3 --brotli_effort=11 `` works as expected with ``v0.9.0 bcacd51`` <:KekDog:805390049033191445> (but with just ``-e X -q 100`` I get the same weird results)
veluca
2023-01-19 12:34:47
try -e 10 without -I / -E / -g options
2023-01-19 12:35:26
if you are patient 😛
JendaLinda
2023-01-19 03:05:08
-E can go up to 11 but it doesn't seem to make a difference above 3.
veluca
2023-01-19 03:08:33
it will not for images with 3 channels.
JendaLinda
2023-01-19 03:24:03
That makes sense. So for RGBA it would be 4, I guess.
_wb_
2023-01-19 09:15:15
For RGB it should not make a difference above 2, for RGBA it should not make a difference above 3. It's the number of "previous channels" taken into account in the MA context model, it corresponds to Prev* (going one channel back) and PPrev* (going two channels back) in jxl art (jxl_from_tree). -E 3 is the most you need when inputs are PNGs, which have at most 4 channels.
afed
2023-01-20 02:14:25
`--override_bitdepth=10`
```
Encoding    kPixels    Bytes      BPP
-------------------------------------------
jxl:d0:1    2073       2495994    9.6296065
jxl:d0:2    2073       2164298    8.3499151
jxl:d0:3    2073       2203015    8.4992863
jxl:d0:4    2073       1830932    7.0637809
jxl:d0:5    2073       1823912    7.0366975
jxl:d0:6    2073       1819105    7.0181520
jxl:d0:7    2073       1787242    6.8952238
jxl:d0:8    2073       1764037    6.8056983
jxl:d0:9    2073       1749531    6.7497338
```
Mrtt
2023-01-24 09:09:34
<@179701849576833024> : Hello, I just did some testing on the lossless fast encoder (thanks for the 16-bit depth support, btw) with my images, and I'm a bit confused by the effort option: I tried the range 0 to 127, as I've seen in the code comments, but the compression ratio does not change, only the pixel throughput gets slower. Also (I need to do better analysis on my reference dataset, on some other storage/cluster), my first test shows a 40% average compression ratio on 654 single-channel 16-bit images, whereas from some earlier results (pre-0.7 libjxl, fully lossless) I was expecting closer to 30%-32% on a similar dataset. I'll move my test code to another server, but in case you have ideas on why the effort parameter has no effect, that would be interesting. Also, here we achieve compression that is no better than lzma on png.
veluca
Mrtt <@179701849576833024> : Hello, i just went through some testing on the lossless fast encode (thanks for the 16bit depth support btw) and i did some test with my images, and i'm a bit confused by the effort option i tried the range 0 to 127 as i've seen through the code comment, but the compression rate do not change only the pixel throughput get slower. Also, (i need to do better analysis on my reference dataset (on some other storage/cluster) but my first test show 40% average compression ratio on 654 single channel 16 bit images, i was expecting from some earlier results ( earlier (pre-0.7) libjxl totally lossless ) to be closer to 30%-32% on a similar dataset. I'll move my test code to another server, but i case you have ideas on why the effort parameter has no effect that may be interesting. Also here we achieve compression that is no better than lzma on png.
2023-01-24 09:13:00
I'm not entirely surprised that on 16-bit data compression doesn't improve much with higher effort... Wdym 40% compression ratio exactly? Re lzma on png, that also doesn't surprise me too much (1-channel images don't have as much benefits from fast lossless as color images); also lzma+png is a lot slower, no?
Mrtt
2023-01-24 09:14:37
compression ratio: encoded_size/initial size, yes the lzma+png is very slow indeed, but the jxl (non fast) yielded best compression ratio (in lossless) with the same type of images (very nice job btw)
veluca
2023-01-24 10:05:06
yeah thing is that doing fast-lossless on 16 bit content is not *that* easy, the current code is cutting a lot of corners to gain speed - in principle I could modify it to be a bit better at the cost of a bit of speed, but I haven't tried to do that yet
_wb_
2023-01-24 10:48:48
One thing e2+ does that e1 doesn't, is channel palette. If you have 16-bit data that is actually lower bitdepth (as is common since the source material could very well be only 8-bit or e.g. 10/12-bit but saved as 16-bit for whatever reason, e.g. because PNG forces it), then channel palette can help quite a bit. Perhaps it would make sense to "port" channel palette (or a simplified version of it) to fast-lossless too.
2023-01-24 10:54:33
The simplified form of channel palette could be this: specifically on 16-bit input, for i in 1..8, check if the i lsb are equal to the i msb (only continue checking as long as there is no counterexample). If there is an i for which this is true, then write such a channel palette in the header, right shift all samples and encode the image as if it was a (16-i)-bit input.
2023-01-24 11:03:36
(if the check fails for one value of i, no need to check higher values of i since they'll fail too; if the image is "true 16-bit" then it will fail quickly and shouldn't bring much extra cost in the typical case — of course there are pathological cases where only the bottom-right pixel makes it fail; if the check succeeds it does add some overhead but you can also use a lower-bitdepth codepath for the actual encode and that might make up for it in terms of speed, compression should in any case be better)
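As a rough sketch of the check described above (this is not the libjxl implementation, just an illustration of the heuristic, assuming 16-bit samples in a flat buffer):
```
#include <cstdint>
#include <vector>

// Returns the number of bits i (0..8) by which every 16-bit sample can be
// right-shifted because its i least significant bits repeat its i most
// significant bits -- the usual pattern when lower-bitdepth data was expanded
// to 16 bits by bit replication. 0 means "true 16-bit", keep as-is.
int RedundantLowBits(const std::vector<uint16_t>& samples) {
  int best = 0;
  for (int i = 1; i <= 8; ++i) {
    const uint16_t mask = (1u << i) - 1;
    bool ok = true;
    for (uint16_t s : samples) {
      if ((s & mask) != (s >> (16 - i))) { ok = false; break; }
    }
    if (!ok) break;  // per the argument above, larger i would fail too
    best = i;
  }
  return best;
}
```
If the returned value is nonzero, the idea is to signal the channel palette in the header, right-shift all samples by that amount, and encode the result as a lower-bitdepth image.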
afed
_wb_ One thing e2+ does that e1 doesn't, is channel palette. If you have 16-bit data that is actually lower bitdepth (as is common since the source material could very well be only 8-bit or e.g. 10/12-bit but saved as 16-bit for whatever reason, e.g. because PNG forces it), then channel palette can help quite a bit. Perhaps it would make sense to "port" channel palette (or a simplified version of it) to fast-lossless too.
2023-01-24 07:33:31
yeah, that would be good, because now e1 also works worse for palette png images than fjxl, but it would be better if palette detection did not depend on the format, because e1 can also be used for ppm or direct screenshots
Mrtt
2023-01-25 07:29:05
Regarding the paletting: on our data this is clearly not the solution. Maybe some of our images have a depth of 12 bits, but most of them really go up to 16-bit depth (or very close), so I don't think the effect will be very strong. Also, sometimes our noise level is not 0, and I tried shifting the signal to be close to 0 (initial signal - min(signal)), but this was not a game changer (though I left the bitdepth in the code at 16; I can try adjusting this if it makes sense). I should be able to do my larger-scale test today on the data I talked about yesterday, for which I have the "normal libjxl" (non-fast) losslessly compressed dataset as a ground-truth reference (12000+ image files will be compressed in this test; I'm also checking decompression to confirm I can reconstruct my images without problems, so it will take a few hours).
Jyrki Alakuijala
afed without black bars
```
Encoding    kPixels    Bytes      BPP
-------------------------------------------
jxl:d0:1    1533       1518306    7.9202191
jxl:d0:2    1533       1295790    6.7594679
jxl:d0:3    1533       1282971    6.6925978
jxl:d0:4    1533       1138388    5.9383829
jxl:d0:5    1533       1133039    5.9104799
jxl:d0:6    1533       1131081    5.9002660
jxl:d0:7    1533       1110128    5.7909650
jxl:d0:8    1533       1093744    5.7054982
jxl:d0:9    1533       1087268    5.6717162
```
2023-01-25 10:33:57
[edit: ignore this -- it is just based on a misunderstanding -- taking : as a .]
These are high BPP values; I'd anticipate around 1.7 BPP for jxl:d0.9, not 5.6717162.
If caused by the 16-bit input, then it looks like a bug that we should fix with a high priority.
Would you consider filing an issue with input data and ways to reproduce?
_wb_
2023-01-25 10:59:10
it's d0 at effort 9, not d0.9 🙂
Demiurge
veluca yeah thing is that doing fast-lossless on 16 bit content is not *that* easy, the current code is cutting a lot of corners to gain speed - in principle I could modify it to be a bit better at the cost of a bit of speed, but I haven't tried to do that yet
2023-01-25 11:27:46
If you could beat lossless-webp density then jxl will be the perfect pareto-ideal lossless format
Jyrki Alakuijala
2023-01-25 04:10:16
what a nice battle to have ❤️ I feel perfectly pareto-ideal before and after 😄
afed
2023-01-25 04:19:30
cwebp z0 is almost as fast as cjxl e2, but with cjxl e5-e6 compression on average (except for photos) so fjxl is not yet comparable to cwebp in compression, at least for 8 bpc, but webp doesn't support higher than 8-bit at all
DZgas Ж
2023-01-27 10:56:11
I have a 64x64 image of 8-bit color; it is a single color, FF9375. I save it with the parameters -q 95 -e3, and its color becomes FF937**4**. Why? JPEG XL works in 38-bit per color mode, yet it can't exactly preserve 8-bit color at the same time?
_wb_
2023-01-27 12:39:37
What's the color value when you decode to 16-bit?
2023-01-27 12:56:57
Libjxl uses 32-bit float internally, but the jxl bitstream does contain quantized XYB colors, where the precision depends on the quality setting. How much precision that preserves in terms of RGB is a bit hard to say: e.g. I expect G to be higher precision than B, but it's also not uniform across the range, due to different transfer curves...
2023-01-27 01:12:43
If keeping the errors in RGB low is what you want, then skipping color conversion is better - but perceptually, RGB error is not the best thing to optimize for...
DZgas Ж
_wb_ Libjxl uses 32-bit float internally, but the jxl bitstream does contain quantized XYB colors, where the precision depends on the quality setting. How much precision that preserves in terms of RGB is a bit hard to say: e.g. I expect G to be higher precision than B, but it's also not uniform across the range, due to different transfer curves...
2023-01-27 08:46:45
As I understand it, the color in JPEG XL is a float, and the more accurate the decoding program I use, the more accurate the float color value I get. Can I set the color quality separately (per color component)?
_wb_
2023-01-27 08:51:12
Yes, quantization can be adjusted per component. I don't think that's exposed in the current encode api though...
DZgas Ж
_wb_ Libjxl uses 32-bit float internally, but the jxl bitstream does contain quantized XYB colors, where the precision depends on the quality setting. How much precision that preserves in terms of RGB is a bit hard to say: e.g. I expect G to be higher precision than B, but it's also not uniform across the range, due to different transfer curves...
2023-01-27 09:04:40
So from all of this it turns out that, because of its float nature, pixel images using VarDCT simply start to drift in color, although there is absolutely no reason for it: some colors shift into similar colors, and the problem is that these are quite strong fluctuations. But this problem is not observed at all at e3. Is that because there each block is not connected to the others? Are they processed separately? Is there any parameter for efforts above e3 that also keeps blocks processed separately?
_wb_
2023-01-27 09:08:29
This image is just very unsuitable for dct. Maybe e3 is lucky because it only does 8x8 blocks and they align well here, but try cropping a few pixels away from the top or left of the image...
DZgas Ж
_wb_ This image is just very unsuitable for dct. Maybe e3 is lucky because it only does 8x8 blocks and they align well here, but try cropping a few pixels away from the top or left of the image...
2023-01-27 09:22:43
I made the image specifically to experiment with blocks. In this case, of course, e3 will look terrible, but I'm talking about something else entirely: the influence of anything in the image does NOT go beyond the boundaries of its 8x8 block, unlike in all the other modes. I have already talked about this problem with another image, and it has been partially solved; the influence of one part of the image, with residual artifacts, goes very far beyond its own block https://github.com/libjxl/libjxl/pull/2024
2023-01-27 09:30:08
I just drew colored pixels, but look how far the artifacts diverge, and this is despite the fact that I have truncation to 8 bit color. What is the real length with 16 bits of color?
2023-01-27 10:17:53
so I decoded it into 32-bit PFM to look at the real state of affairs, and a group of 8x7 pixels produces an artifact effect over an area of 106x74 pixels around itself
2023-01-27 10:36:31
original | e7 artifact long | e3 artifact long
DZgas Ж so I decoded it into 32-bit PFM to look at the Real state of affairs, and - group of 8x7 pixels makes an artifact effect on the size of 106x74 pixels around itself
2023-01-28 09:52:03
Is this a problem? Will it be solved? What are the priorities?
Jyrki Alakuijala
DZgas Ж I have an image 64x64 of 8 bit color, its color is one and is FF9375, I save it with the parameters -q 95 -e3, and its color becomes FF937**4**. Why? JPEG XL works in 38-bit per color mode, but it can not exactly save 8-bit color at the same time?
2023-01-30 01:22:26
looks like a bug, this should intuitively work
DZgas Ж I have an image 64x64 of 8 bit color, its color is one and is FF9375, I save it with the parameters -q 95 -e3, and its color becomes FF937**4**. Why? JPEG XL works in 38-bit per color mode, but it can not exactly save 8-bit color at the same time?
2023-01-30 04:09:36
I'm working on this right now, will make an improvement on it soon -- don't know if it will actually fix this case exactly, but it will reduce the likelihood of this happening in general
2023-01-30 04:23:22
https://github.com/libjxl/libjxl/pull/2122
afed
Jyrki Alakuijala https://github.com/libjxl/libjxl/pull/2122
2023-01-31 12:32:33
there are some red color bleeding issues on the image from the av1 discord (even on d1, but on low qualities it's highly noticeable) https://cdn.discordapp.com/attachments/673202643916816384/1068635452292223047/DSC_3202_02.png
2023-01-31 12:37:56
d1 (2 752 732) d2 (1 547 191) jpegli q68 (1 570 506) jpeg q71 (1 571 905) jpeg q84 (2 774 370)
DZgas Ж
DZgas Ж Is this problem? will this problem be solved? what are the priorities?
2023-01-31 06:33:39
???
Demiurge
DZgas Ж I made the image to experiment specifically with blocks. in this case, of course, e3 will show itself disgusting, but I'm talking about something else entirely - the influence of something on the image does NOT go beyond the boundaries of its 8x8 block, unlike all other modes. I have already talked about this problem with another image, and it has been partially solved. the influence of one part of the image with residual artifacts goes very far beyond self block https://github.com/libjxl/libjxl/pull/2024
2023-01-31 08:07:20
I'm curious how the problem might look in a realistic example image. VarDCT uses variable-size and variable-shape DCT macroblocks at higher effort settings. Also there is the Gaborish filter and the Edge-Preserving Filter (EPF), which I think get enabled at lower quality settings.
DZgas Ж
Demiurge I'm curious how the problem might look like in a realistic example image. VarDCT uses variable size and shape DCT macroblocks at higher effort settings. Also there is Gaborish filter and Edge-Preserving Filter (EPF) which I think get enabled at lower quality settings.
2023-01-31 08:45:39
It seems to me that the artifacts that I discussed earlier are directly related to the fact that with e4+, the length of the artifacts is ten times greater than the place of the artifact
2023-01-31 09:02:58
What concerns me is that the developers have not responded to this for 3 days... Did I really find such a big bug or fundamental flaw?...<:Thonk:805904896879493180>
Demiurge
DZgas Ж It seems to me that the artifacts that I discussed earlier are directly related to the fact that with e4+, the length of the artifacts is ten times greater than the place of the artifact
2023-01-31 09:09:06
Is this with the latest encoder?
_wb_
2023-01-31 09:10:30
I don't understand how exactly you are measuring artifacts
DZgas Ж original | e7 artifact long | e3 artifact long
2023-01-31 09:12:49
I mean an image like this, what does it mean? Did you magnify the pixel-wise differences by some huge amount or how did you create these images?
2023-01-31 09:16:14
in general, sure, e3 only uses 8x8 and e4+ uses bigger blocks, and bigger blocks implies artifacts can propagate further, but what we care about are visible artifacts (i.e. when viewing at 1:1 and without artificially exaggerating the difference).
DZgas Ж
_wb_ in general, sure, e3 only uses 8x8 and e4+ uses bigger blocks, and bigger blocks implies artifacts can propagate further, but what we care about are visible artifacts (i.e. when viewing at 1:1 and without artificially exaggerating the difference).
2023-01-31 09:44:42
But you are wrong, and I am trying to show you this. What is the point of saying "uses bigger blocks" if the artifact goes much, much **further** than the block itself, in some cases passing **through** 10 other blocks in length?
> how exactly you are measuring artifacts
I scale up the pixel values of a 32-bit image and see these artifacts, and I don't understand why I see them. After all, all artifacts should end in THEIR block, but I observe that only at e1-3.
Demiurge
2023-01-31 09:46:48
Are you using Gaborish or EPF?
DZgas Ж
Demiurge Are you using Gaborish or EPF?
2023-01-31 09:49:52
All my tests have Gaborish disabled. What is EPF? Going to <#794206170445119489>
DZgas Ж But you are wrong, and I am trying to show you this, what is the point of saying that "uses bigger blocks"?? if the artifact goes much, much **further **than the block itself, in some cases 10 other blocks are **through **the length > how exactly you are measuring artifacts I up the pixel values of a 32-bit image and see these artifacts, and I don't understand why I see them. After all, all artifacts should end in THEIR block, but I observe this exclusively on e1-3.
2023-01-31 10:11:37
> we care about are visible artifacts
Here are the visible 8-bit artifacts if I fill the white color with black, due to the fact that every color block produces artifacts much further out than the block it is located in.
DZgas Ж I just drew colored pixels, but look how far the artifacts diverge, and this is despite the fact that I have truncation to 8 bit color. What is the real length with 16 bits of color?
2023-01-31 10:19:54
This can be seen with 8-bit color: a rainbow that spreads onto other blocks. I made https://discord.com/channels/794206087879852103/848189884614705192/1068656120018771998 where the 32-bit artifact looks just monstrously gigantic.
_wb_
DZgas Ж > we care about are visible artifacts here are the visible 8-bit artifacts if I fill the white color with black. due to the fact that all color block give artifacts much further than the block in which they are located
2023-01-31 10:38:22
"if I fill the white color with black" is a strange way to judge how visible artifacts are. It is of course much easier to see the difference between #000000 and #FEFFFF than the difference between #FFFFFF and #FEFFFF
DZgas Ж
_wb_ "if I fill the white color with black" is a strange way to judge how visible artifacts are. It is of course much easier to see the difference between #000000 and #FEFFFF than the difference between #FFFFFF and #FEFFFF
2023-01-31 10:40:36
It is of course much easier to see the difference in E3 because there is not a single such artifact
Jyrki Alakuijala
DZgas Ж It seems to me that the artifacts that I discussed earlier are directly related to the fact that with e4+, the length of the artifacts is ten times greater than the place of the artifact
2023-01-31 03:34:33
which setting did you compress it with? d1 ?! looks like I have to do more work to fix this... 😕 ... this doesn't look acceptable
DZgas Ж
Jyrki Alakuijala which setting did you compress it with? d1 ?! looks like I have to do more work to fix this... 😕 ... this doesn't look acceptable
2023-01-31 03:36:42
these are all old images, you have already worked with them; I just want to say that this problem may be directly related to how far the artifacts spread, because e3 did not have this
username
2023-02-04 12:07:17
is the whole idea of a few devs from here supporting chromium forks that want to continue to support jpeg-xl still a thing? I am planning on making a github discussion or issue page for a small chromium fork called "Thorium" (they currently have jxl enabled by default) asking if they plan to attempt to support jxl after version 109, so I want to know if I should mention devs from here wanting to provide code support for jxl post chromium 109
_wb_
2023-02-04 12:11:05
I don't think a lot of support would be needed, all the code is already there if you just revert the commit that removed it. The only thing that would need to be done is a periodic update of the libjxl version, in case decoder improvements or security fixes are made upstream.
2023-02-04 12:13:03
But should anything pop up that needs support, they are welcome to point us to it and I am sure there will be people ready to help
veluca
2023-02-06 12:51:53
nice, although I suspect it's not particularly better than Lepton
2023-02-06 12:52:24
(as a general rule, it's not *super* hard to get a bit more compression than JXL with extra compute)
Quikee
2023-02-06 05:40:19
is it possible to get more compression with extra compute also with JXL?
_wb_
2023-02-06 06:27:50
In jxl we have dc first (for progressive decode) and tiled ac (for parallel decode). If you relax such design constraints, you can get somewhat better compression (like lepton, which uses AC in the ctx model of DC)
2023-02-06 06:39:22
There is some room for better jpeg recompression in jxl: the DC compression is just like any lossless image, and for AC we currently have some heuristic for doing the ctx modeling but it can probably be improved.
Jarek
2023-02-06 07:12:50
prediction of local noise levels gives large improvement of compression ratio - not only of values, but also e.g. widths of Laplace distributions: https://arxiv.org/abs/2007.12055
Jim
2023-02-15 11:33:56
<@794205442175402004> Could you send me the 140kb, 100kb, and 60kb avifs of the water picture on the overview site?
gb82
Jim <@794205442175402004> Could you send me the 140kb, 100kb, and 60kb avifs of the water picture on the overview site?
2023-02-16 03:38:32
do u want them full res or a lower res more similar to the resolution on the site?
_wb_
2023-02-16 07:19:49
here's the uncompressed version of the jxl image on the jpeg.org site
2023-02-16 07:20:09
2023-02-16 07:20:16
2023-02-16 07:20:25
2023-02-16 07:23:12
obviously you can get different avifs depending on what encoder and settings you use
2023-02-16 07:29:57
this is what a recent default avifenc/aom produces, the 140kb one (actually 138.8kb but it's that or 147.9kb, you don't get very fine-grained control over quality since cq-level is an integer with a small range) is what you get with completely default `avifenc in.png out.avif` on a recent avifenc/aom
2023-02-16 07:46:11
To me the 60kb avif looks like pretty low quality (there's quite some details lost, the water on the right becomes a smeary mess that doesn't look like water anymore) but for some web use cases it may be 'good enough'. The 100kb avif looks like medium-low quality to me — ok-ish but still kind of blurry and smoothed. The 140kb avif looks like medium-high quality to me — "good enough" for most web use cases. The 140kb jxl looks like high to very high quality to me — not fully visually lossless but quite close (I can see tiny differences when flipping with the original, but I need to look carefully). That's about as high quality as is typically needed on the web (assuming the image is only sent for the purpose of displaying it on a page, not for further editing or printing etc). That's also roughly where "camera quality" starts, but it goes a lot higher than that, especially if the option of further editing has to be there.
2023-02-16 08:11:01
at all these points from low to very high, jxl is performing better than avif on this image
2023-02-16 08:11:27
60kb jxl on the left, original in the middle, 60kb avif on the right
gb82
_wb_ 60kb jxl on the left, original in the middle, 60kb avif on the right
2023-02-16 08:27:14
this lines up with my results
HLBG007
_wb_ here's the uncompressed version of the jxl image on the jpeg.org site
2023-02-16 08:46:42
The original image is a jpg? https://pxhere.com/en/photo/1583399 Is this not a problem, given that the source has already been "cleaned up" by jpg compression?
gb82
2023-02-16 09:02:52
That's the photo I tested, downscaled to 2240x1260. I posted my results on the Twitter thread, but I'll post them here again. It is not a good showing for AVIF.
_wb_
2023-02-16 09:09:04
the image on the website is 980x500 (the css layout on jpeg.org makes that image 490 css pixels wide, and nowadays you can kind of assume that a css pixel is typically 2 display pixels)
gb82
2023-02-16 09:12:23
I can retest at that resolution, but ultimately I believe results are going to be rather similar
_wb_
2023-02-16 09:57:33
Yes. For most photographic images you get behavior like that: jxl beats avif everywhere except in the unusably low qualities.
Jim
gb82 do u want them full res or a lower res more similar to the resolution on the site?
2023-02-16 10:11:18
Thanks Jon & Gb. I am making the same as the site. Going to make a video since I noticed something else that the other main image didn't have. <:JXL:805850130203934781>
gb82
_wb_ Yes. For most photographic images you get behavior like that: jxl beats avif everywhere except in the unusably low qualities.
2023-02-16 10:12:59
and here, anything below ssimu30 is terrible, awful quality that is truly not worth putting anywhere
_wb_
2023-02-16 10:29:03
I calibrated ssimulacra2 so 10 is very low, 30 is low, 50 is medium, 70 is high, 90 is very high quality. Scores above 0 make sense (though for most use cases you'll want 50+). Negative scores are just silly low quality that is irrelevant — I'm not clamping it to zero so you can still use the metric there, but you can consider the sign change to be a warning that you're entering stupid territory
2023-02-16 10:34:05
I should perhaps have been a bit more extreme and scale it so 0 is low quality, 100 is very high quality (visually lossless, with mathematically lossless at 150 or so instead of at 100) and scale scores below 0 so they use a smaller range (e.g. what is -100 now could become -10). Then BD rates would become more meaningful since they wouldn't put all the emphasis on the extremely low qualities and barely count what happens at the higher end...
afed
2023-02-16 01:38:34
yeah, some special value for mathematically lossless would be cool, it's useful to know when it's true lossless using the same metric and so it doesn't cause confusion with visually lossless
_wb_
2023-02-16 01:47:54
yeah currently mathematically lossless is always 100 but images that are very close to lossless can also get 100. I'm considering to change the scale to avoid that, the only thing stopping me is that it's annoying when metrics change...
gb82
_wb_ I should perhaps have been a bit more extreme and scale it so 0 is low quality, 100 is very high quality (visually lossless, with mathematically lossless at 150 or so instead of at 100) and scale scores below 0 so they use a smaller range (e.g. what is -100 now could become -10). Then BD rates would become more meaningful since they wouldn't put all the emphasis on the extremely low qualities and barely count what happens at the higher end...
2023-02-16 07:27:46
that would be really cool, although currently I think it is telling when a codec dips *so far* into the negatives. That's why I think useful graphs come in pairs; one with the <0 values, one without. Personally, if I was just casually browsing the web, I agree that 50+ would be preferable & anything below 30 would be noticeably degraded
Mściwój
2023-02-17 08:06:34
I am trying to get the latest one on macOS with brew install jpeg-xl, getting an error on the `cmake --build build` step
2023-02-17 09:50:48
can someone help?
```
Last 15 lines from /Users/maciek/Library/Logs/Homebrew/jpeg-xl/02.cmake:
[ 76%] Linking CXX executable encode_oneshot
/usr/local/Cellar/cmake/3.25.2/bin/cmake -E cmake_link_script CMakeFiles/encode_oneshot.dir/link.txt --verbose=1
/usr/local/Homebrew/Library/Homebrew/shims/mac/super/clang++ -fno-rtti -funwind-tables -Xclang -mrelax-all -Xclang -mconstructor-aliases -fno-omit-frame-pointer -O3 -DNDEBUG -O2 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -Wl,-search_paths_first -Wl,-headerpad_max_install_names -fPIE -Xlinker -pie CMakeFiles/encode_oneshot.dir/examples/encode_oneshot.cc.o -o encode_oneshot -Wl,-rpath,/tmp/jpeg-xl-20230217-53266-1tllc8u/libjxl-0.8.1/build libjxl.0.8.1.dylib libjxl_threads.0.8.1.dylib
[ 76%] Built target encode_oneshot
make[1]: *** [lib/CMakeFiles/jpegli-static.dir/all] Error 2
⋮
make[1]: *** Waiting for unfinished jobs....
[ 77%] Linking CXX executable djxl
cd /tmp/jpeg-xl-20230217-53266-1tllc8u/libjxl-0.8.1/build/tools && /usr/local/Cellar/cmake/3.25.2/bin/cmake -E cmake_link_script CMakeFiles/djxl.dir/link.txt --verbose=1
/usr/local/Homebrew/Library/Homebrew/shims/mac/super/clang++ -fno-rtti -funwind-tables -Xclang -mrelax-all -Xclang -mconstructor-aliases -fno-omit-frame-pointer -O3 -DNDEBUG -O2 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -Wl,-search_paths_first -Wl,-headerpad_max_install_names -fPIE -Xlinker -pie CMakeFiles/djxl.dir/djxl_main.cc.o -o djxl -Wl,-rpath,/tmp/jpeg-xl-20230217-53266-1tllc8u/libjxl-0.8.1/build
…
⋮
```
Has anyone run into something like this? Any idea how to fix it?
_wb_
2023-02-17 10:58:58
when I do brew install jpeg-xl, it doesn't do any compilation, it just fetches a .tar.gz with binaries
2023-02-17 12:42:33
No idea, my macbook is intel, but I dunno what Homebrew is doing. I thought they always shipped binaries but apparently they can also compile...
2023-02-17 12:42:58
Libjxl itself should work fine on ARM
Mściwój
2023-02-17 12:54:59
I confirm, brew chose to compile automatically for me; the full log from brew is attached
_wb_
2023-02-17 12:56:05
Can you show what's in /Users/maciek/Library/Logs/Homebrew/jpeg-xl/02.cmake ?
2023-02-17 12:56:44
Any particular reason you're using macOS 10.15 and not a more recent version?
Mściwój
2023-02-17 01:03:25
My macbook is from 2013, 10.15 (Catalina) is as far as I can go
_wb_
2023-02-17 02:08:12
Oh, sucks, I didn't know they limited OS upgrades for older laptops
2023-02-17 02:10:02
```
libjxl-0.8.1/lib/jpegli/decode_marker.cc:65:29: error: assigning to 'boolean' from incompatible type 'bool'
[ 75%] Built target decode_oneshot
  cinfo->progressive_mode = (data[1] == 0xc2);
                            ^~~~~~~~~~~~~~~~~
```
2023-02-17 02:10:33
looks like that's what is causing trouble here
2023-02-17 02:25:04
sounds like something stupid
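For context, this kind of error typically appears when libjpeg's `boolean` type is defined as an enum rather than an int in the headers being used, so a C++ `bool` can't be assigned to it implicitly. A common workaround is an explicit ternary or cast; the sketch below only illustrates the pattern (the struct and function names are made up), and is not necessarily the fix that landed in libjxl:
```
// Minimal illustration of the boolean-vs-bool clash and a typical workaround.
typedef enum { FALSE = 0, TRUE = 1 } boolean;  // as in some jmorecfg.h variants

struct DecompressInfo { boolean progressive_mode; };  // stand-in for the real jpeg struct

void MarkSOF(DecompressInfo* cinfo, const unsigned char* data) {
  // cinfo->progressive_mode = (data[1] == 0xc2);            // error when boolean is an enum
  cinfo->progressive_mode = (data[1] == 0xc2) ? TRUE : FALSE;  // compiles either way
}
```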
Mściwój
2023-02-19 09:06:32
Maybe, I am not sure who prepares the Homebrew packages… Could it be something very recent (last 2-3 months), that Homebrew decided to stop supporting older OSs with prebuilt binaries and resorts to compiling? I noticed that it compiles new Python versions for me (successfully, though) during updates too…
DZgas Ж
2023-03-03 12:54:36
What parameters should be used to get the maximum number of steps in progressive decoding? I thought
```
-p --group_order=0 --progressive_ac --progressive_dc=0
```
but a file with these parameters has an identical hash to one made with just -p; it is not written anywhere that these are the default parameters. As I understand it, --progressive_ac and --qprogressive_ac can't be used together, but I also don't understand the difference between them.
```
--progressive_dc=num_dc_frames
    Progressive-DC setting. Valid values are: -1, 0, 1, 2.
```
What do these numbers mean? Where are the explanations?
_wb_
2023-03-03 01:45:28
-1 is default which is (atm) the same thing as 0: no dc frame
2023-03-03 01:45:45
1 means 1 dc frame, which is probably the only thing that actually works properly
2023-03-03 01:46:16
2 means 2 dc frames, so the dc frame itself also has a dc frame. So first frame is 1:64, second frame is 1:8, third frame is the actual 1:1 image
2023-03-03 01:48:22
regarding AC progression: the bitstream is rather flexible with that, and there currently are only two hardcoded scan scripts available (--progressive_ac and -qprogressive_ac), both with 4 scans or so iirc, but not quite the same
2023-03-03 01:50:05
bitstream-wise I think you can do up to 10 scans or so iirc, and there are lots of ways to decide what goes in each scan — there's way more encoder freedom in that regard with jxl than with jpeg, where the scan script dictates exactly which bitplanes get encoded and every bit of quantized coeffs belongs to a specific scan
DZgas Ж
_wb_ regarding AC progression: the bitstream is rather flexible with that, and there currently are only two hardcoded scan scripts available (--progressive_ac and -qprogressive_ac), both with 4 scans or so iirc, but not quite the same
2023-03-03 02:23:00
> not quite the same
Good, I can see that. So what's the difference between them?
> 10 scans
So which command should be used for this?
> 2 means 2 dc frames
Like 2 x 1:64 + 2 x 1:8 + 2 x 1:1?? 8, not 10 <:Thonk:805904896879493180>
> --progressive_dc=2
That is fine, but I can not test it now -- at the moment it is broken and does not work, I reported it here https://discord.com/channels/794206087879852103/804324493420920833/1081193044486934628
_wb_
2023-03-03 02:32:27
libjxl/cjxl doesn't have a way to do 10 scans at the moment. You'd need to change the code if you want to do that. Relevant code is here: https://github.com/libjxl/libjxl/blob/a741606839f6b5118dba7be0b67d2f496e225543/lib/jxl/enc_frame.cc#L73
DZgas Ж
_wb_ libjxl/cjxl doesn't have a way to do 10 scans at the moment. You'd need to change the code if you want to do that. Relevant code is here: https://github.com/libjxl/libjxl/blob/a741606839f6b5118dba7be0b67d2f496e225543/lib/jxl/enc_frame.cc#L73
2023-03-03 02:36:37
This code looks as if I can write any number there. Will it work?
_wb_
2023-03-03 02:41:30
very likely not everything will work or make sense, but you can experiment with it I suppose 🙂
DZgas Ж
_wb_ very likely not everything will work or make sense, but you can experiment with it I suppose 🙂
2023-03-03 04:33:31
I did not understand at all what the suitable_for_downsampling_of_at_least parameter does, but the code I wrote works very well and I get a very multi-layered load; it looks just great at a speed of about 2G. But in no case should you pass the -p parameter! Otherwise everything breaks and does not work; you need to pass only the --progressive_ac parameter and that's it.
_wb_
2023-03-03 04:37:04
the suitable_for_downsampling thing is basically only metadata, in a way. It signals when you consider the detail to be enough for a 1:8, 1:4, 1:2 or 1:1 rendering of the image — idea being that a clever viewer/browser would use this metadata in combination with the TOC to decide how much image data to fetch given the actual image dimensions it needs after layout
2023-03-03 04:39:01
that first argument is the number of coefficients N, which is actually the NxN topleft corner of an 8x8 dct (or equivalent for other block sizes), so in principle N=2 gives you enough for a 1:4 downscale, N=4 enough for a 1:2 downscale, and N=8 is just everything so it's enough for 1:1
2023-03-03 04:40:11
but if you want, you could put the 1:1 point a bit earlier (say for delivering a web quality image at 1:1, while there is still another scan left that turns it into a very high quality image), etc
DZgas Ж
_wb_ the suitable_for_downsampling thing is basically only metadata, in a way. It signals when you consider the detail to be enough for a 1:8, 1:4, 1:2 or 1:1 rendering of the image — idea being that a clever viewer/browser would use this metadata in combination with the TOC to decide how much image data to fetch given the actual image dimensions it needs after layout
2023-03-03 04:40:44
Why does this parameter exist? Why is it not equal to the sample size?
_wb_
2023-03-03 04:40:52
(also the actual bitstream is still more flexible than what this syntax allows)
2023-03-03 04:44:53
it exists to give the encoder some flexibility in where it wants the decoder to put the responsive offsets. You may want to say some pass is "good enough for 1:2" even if not all the relevant coefficients are available at full precision yet, or only when additional coefficients are also available. But mostly it is because passes do not have specific coefficients to cover, that's just an encoder choice. You can update all coefficients in all passes if you want. They just get added together. So you can keep some coefficients zeroed during one pass and only signal them in a later pass, allowing you to do e.g. salient regions first.
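To make those knobs concrete, here is a sketch of what an extended scan script could look like, modeled on the hardcoded pass tables in enc_frame.cc linked above. The field names (num_coefficients, shift, suitable_for_downsampling_of_at_least) follow this discussion, but the exact struct name and layout in libjxl may differ, so treat it as an illustration rather than a drop-in patch:
```
// Hypothetical extended scan script in the spirit of the tables in
// lib/jxl/enc_frame.cc. Each pass covers the NxN top-left corner of every
// 8x8 DCT block (num_coefficients = N); `shift` coarsens coefficient
// precision in early passes; suitable_for_downsampling_of_at_least tells a
// viewer which downscale factor a pass is already good enough for.
struct PassSketch {
  int num_coefficients;
  int shift;
  int suitable_for_downsampling_of_at_least;
};

constexpr PassSketch kMorePasses[] = {
    {2, 1, 4},  // coarse preview, good for a 1:4 rendering (DC is sent before this)
    {2, 0, 4},  // refine those coefficients to full precision
    {4, 1, 2},  // enough detail for 1:2
    {4, 0, 2},
    {8, 1, 1},  // all coefficients, still coarsened
    {8, 0, 1},  // final full-precision pass, 1:1
};
```
Each entry adds either more coefficients or the same count with a smaller shift, which is the ordering constraint the JXL_ASSERT in enc_progressive_split.h (quoted further down) checks.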
DZgas Ж
_wb_ that first argument is the number of coefficients N, which is actually the NxN topleft corner of an 8x8 dct (or equivalent for other block sizes), so in principle N=2 gives you enough for a 1:4 downscale, N=4 enough for a 1:2 downscale, and N=8 is just everything so it's enough for 1:1
2023-03-03 04:48:10
Why have the parameter 8 at all, if it is 1:1 and does not affect anything? I reduced it and the image became pixelated. It can even be increased to 10 (without decoding errors) and then the image becomes bad. The encoder does not allow you to write size 1; it outputs an error when encoding:
```
./lib/jxl/enc_progressive_split.h:67: JXL_ASSERT: p[i].num_coefficients > previous_pass.num_coefficients || (p[i].num_coefficients == previous_pass.num_coefficients && p[i].shift < previous_pass.shift)
```
Why is there no parameter that does literally what I wrote in the code? Some parameter like, literally, sample_steps=6 (max).
_wb_ 2 means 2 dc frames, so the dc frame itself also has a dc frame. So first frame is 1:64, second frame is 1:8, third frame is the actual 1:1 image
2023-03-03 05:04:10
I also don't understand at all why the --progressive_dc=1 parameter is needed. Its effect is literally impossible to notice, except that it reduces the file size by about ~0.01%; I was only able to see image differences at about the level of 1-2 pixel values with an 8-bit image (like FFFFFF -> FFEFFE)
ayumi
2023-03-03 05:11:26
It is needed, if you want to ensure the resulting file has exactly one DC frame. If you do not care about that, then it is not needed. In order to see the difference you would need to compare the file structure.
_wb_
2023-03-03 05:18:16
Progressive dc makes it possible to get 1:16, 1:32 etc previews. Without progressive dc, the first preview is 1:8
2023-03-03 05:19:06
But I dunno if libjxl actually renders pre-1:8 previews atm, my guess is that it doesn't
DZgas Ж
_wb_ Progressive dc makes it possible to get 1:16, 1:32 etc previews. Without progressive dc, the first preview is 1:8
2023-03-03 05:24:12
no. (they work identically)
_wb_ Progressive dc makes it possible to get 1:16, 1:32 etc previews. Without progressive dc, the first preview is 1:8
2023-03-03 05:31:28
this all works - because I can just count how many transformations happen during decoding
_wb_
2023-03-03 05:33:25
With 1:16 I mean 16x downscaled in both directions, so 1/256 of the pixels
DZgas Ж
DZgas Ж this all works - because I can just count how many transformation are going on during decoding
2023-03-03 05:36:55
This happens 7 times, from the first display to the original. But if what you say is true and 1:16 (8? 4??) is not shown, then what do I see in the very first frame? I want to note that only the first frame has Gaussian smoothing; do you mean that this is a frame that is not specified in this table?
_wb_
2023-03-03 05:38:22
The first thing you see is the dc alone, before any ac pass, and corresponds to 1:8
2023-03-03 05:39:30
There should be 8 updates total (dc plus 7 ac passes)
DZgas Ж
_wb_ The first thing you see is the dc alone, before any ac pass, and corresponds to 1:8
2023-03-03 05:42:22
That is, the parameter is similar to {1, 0, 0}? In that case it is clear why it cannot be written here; it is written somewhere deep in the code.
> 8 updates total
Yes, you're right.
_wb_
2023-03-03 05:51:11
The {1,0,0} is implicit because it is obligatory
DZgas Ж
2023-03-05 03:46:27
How do I encode a JPEG XL image using only one color component (luma) - so that the encoder really uses only one channel, and not just (or not only) me making the image black-and-white?
ayumi
2023-03-05 05:36:36
If you want single-channel output you need to give it single-channel input.
Traneptora
DZgas Ж How to encode Jpeg XL image using only one color component (LUMA) - exactly so that the encoder use only one channel, and not that (not only) I would make the image black-white
2023-03-05 07:39:19
You'd have to convert the image to grayscale and then encode a grayscale image
2023-03-05 07:39:42
there's no single unique way to convert an image to grayscale
DZgas Ж
Traneptora there's no single unique way to convert an image to grayscale
2023-03-05 08:43:09
You didn't get the gist of the question. How does the codec determine that there is only one channel? Does it read information about the image, or does it count the number of color pixels? At what level does the analysis take place? How do I force a certain number of channels? And if I make everything black and white but leave 1 colored pixel, will the codec create a whole color layer for that one pixel? In YUV systems there is such a thing as yuv400, which guarantees the presence of only one grayscale channel, but JPEG XL has its own system.
Traneptora
DZgas Ж You didn't get the gist of the question. How does the codec determine that there is only one channel? does it read information about the image? or counts the number of color pixels? at what level does the analysis take place? how do I make a certain number of channels? and if I make everything black and white but make 1 color pixel, then the codec will create a whole color layer for this one pixel? In YUV systems, there is such a thing as yuv400, which guarantees the presence of only one grayscale channel. but JPEG XL has its own system.
2023-03-05 08:45:26
in Jxl files the color encoding is signaled in the image header. for cjxl input it depends on whatever file you feed it.
2023-03-05 08:46:52
There is no analysis. If you feed it an RGB image that is all white it will create an RGB image. If you feed it a grayscale image that is all white it will be grayscale
2023-03-05 08:49:09
Note that internally for VarDCT, grayscale is treated as RGB with R=G=B, and the image header signals whether or not that image is actually rgb or actually grayscale
DZgas Ж
Traneptora in Jxl files the color encoding is signaled in the image header. for cjxl input it depends on whatever file you feed it.
2023-03-05 08:50:55
That all sounds unconvincing. I can create a PNG with a palette; how can it understand what kind of image it is without analysis?
_wb_
2023-03-05 08:53:05
A png with palette gets converted to rgb or rgba at decode time. Libjxl doesn't know it had a palette. When doing lossless, it will try if 1024 colors (or whatever the default is) are enough for the image, and use a palette if it is. But it doesn't matter if the input was a palette png or a full color png.
2023-03-05 08:56:39
For grayscale vs rgb vs rgba: this is something you have to select before encoding the image header, and libjxl will just do what you tell it. If you tell it the image is rgba but it's actually a fully opaque grayscale image, it will still encode it using 4 channels (but when doing lossless an RCT will turn it into 1 luma channel and 2 chroma channels that are all zeroes, and alpha is all-255, so those 3 channels will compress to some small constant size regardless of the image dimensions)
2023-03-05 08:59:15
We cannot really automatically reduce RGBA to RGB or GrayAlpha or Gray, because you may be encoding multiple frames and e.g. starting to encode the bottom layer of a multi-layer image which is fully opaque, but then in a later frame you will actually have nontrivial alpha.
2023-03-05 09:00:09
(I suppose we could do it if we know there are no more frames coming)
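For reference, a rough sketch of how that choice is made through the libjxl encode API: you declare one color channel in the basic info and feed single-channel samples (cjxl does the equivalent when the input file is grayscale). Error handling is omitted and the calls should be double-checked against the current API headers; this is an outline, not vetted code:
```
#include <jxl/encode.h>
#include <cstdint>
#include <vector>

// Sketch: encode a width x height 8-bit grayscale buffer as a 1-channel jxl.
std::vector<uint8_t> EncodeGray(const uint8_t* pixels, uint32_t width, uint32_t height) {
  JxlEncoder* enc = JxlEncoderCreate(nullptr);

  JxlBasicInfo info;
  JxlEncoderInitBasicInfo(&info);
  info.xsize = width;
  info.ysize = height;
  info.bits_per_sample = 8;
  info.num_color_channels = 1;  // grayscale: one color channel, no alpha
  JxlEncoderSetBasicInfo(enc, &info);

  JxlColorEncoding color;
  JxlColorEncodingSetToSRGB(&color, /*is_gray=*/JXL_TRUE);
  JxlEncoderSetColorEncoding(enc, &color);

  JxlEncoderFrameSettings* settings = JxlEncoderFrameSettingsCreate(enc, nullptr);
  JxlPixelFormat format = {1, JXL_TYPE_UINT8, JXL_NATIVE_ENDIAN, 0};  // 1 channel
  JxlEncoderAddImageFrame(settings, &format, pixels,
                          static_cast<size_t>(width) * height);
  JxlEncoderCloseInput(enc);

  // Drain the compressed output.
  std::vector<uint8_t> out(64 * 1024);
  uint8_t* next_out = out.data();
  size_t avail = out.size();
  while (JxlEncoderProcessOutput(enc, &next_out, &avail) == JXL_ENC_NEED_MORE_OUTPUT) {
    size_t written = next_out - out.data();
    out.resize(out.size() * 2);
    next_out = out.data() + written;
    avail = out.size() - written;
  }
  out.resize(next_out - out.data());
  JxlEncoderDestroy(enc);
  return out;
}
```
With num_color_channels = 1 the modular path stores a single luma channel; as noted above, VarDCT still works on three internal channels even for grayscale input.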
DZgas Ж
_wb_ (I suppose we could do it if we know there are no more frames coming)
2023-03-05 09:07:49
For this question I am interested in lossy (VarDCT) compression
_wb_ For grayscale vs rgb vs rgba: this is something you have to select before encoding the image header, and libjxl will just do what you tell it. If you tell it the image is rgba but it's actually a fully opaque grayscale image, it will still encode it using 4 channels (but when doing lossless an RCT will turn it into 1 luma channel and 2 chroma channels that are all zeroes, and alpha is all-255, so those 3 channels will compress to some small constant size regardless of the image dimensions)
2023-03-05 09:15:58
My goal is to compress an image whose size is slightly beyond reasonable. It would be logical to assume that encoding only one channel would greatly speed up compression and also reduce memory consumption, but I still don't understand: can JPEG XL encode only the Y channel? The input image is guaranteed to contain only 256 grayscale colors; how do I tell the encoder not to create unnecessary channels?
_wb_
2023-03-05 09:19:19
VarDCT is hardcoded for 3 channels. If 2 are all-zeroes, in principle it could be optimized but in libjxl that isn't done.
DZgas Ж
2023-03-05 09:23:23
Yeah. That's what I expected to hear: not supported.
2023-03-05 09:29:05
I want to point out that an empty white 8000x8000 image compresses to **5 kilobytes**, yet encoding it (VarDCT) uses 1 gigabyte of RAM and takes as much time as normal images
username
2023-03-10 10:16:39
<@794205442175402004> what were the parameters (such as quality level and format-specific options) that you used for each format on the jxl browser test page? And I guess also, what parameters do you usually use for those formats outside of the test site? One of the reasons I'm asking is that in the case of webp the default encoding settings exclude things such as trellis quantization and a more accurate RGB-to-YUV conversion, for the sake of encoding speed.
_wb_
2023-03-10 10:19:21
I don't remember but I probably used default settings, maybe changed the quality parameter
username
2023-03-10 10:37:58
usually default settings show a better example of the kind of quality you would see from a format out in the wild, since most people don't change the defaults much except for stuff like -q (quality): most image editors and programs expose a quality level but leave out most other settings, and if they do include other settings, those aren't likely to be touched by most people since they are more technical than a simple slider that says "quality" or "speed". And in the case of, say, a company setting up an automatic image converting/encoding system, they will leave out slower options since they might have to handle thousands if not millions of images a day. So in cases such as webp, the default settings may show what you would commonly see in the wild but don't show the format at its best, which is a shame.
_wb_
2023-03-10 02:32:31
imo encoder libraries should make the default good exactly for that reason — you should assume that 99.9% of encodings will be done with only the quality slider exposed to or used by the end-user
DZgas Ж
2023-03-12 10:00:17
Why does using lossy modular give a larger file size than lossless?? Shouldn't it simply be a reduction of coefficients?
-R 1 -m 1 -q 99 -e 5 | 392 943 bytes
-R 0 -m 1 -q 99 -e 5 | 275 074 bytes
-m 1 -q 100 -e 5 | 197 124 bytes
_wb_
2023-03-12 11:28:12
Lossy uses xyb, lossless stays in rgb. Lossy doesn't do palette nor local RCTs, lossless does. There are probably some other differences.
2023-03-12 11:29:57
Paradoxically, in lossless you can do more stuff than in lossy, because in lossless you never have to worry about getting artifacts, while in lossy you do...
DZgas Ж
_wb_ Lossy uses xyb, lossless stays in rgb. Lossy doesn't do palette nor local RCTs, lossless does. There are probably some other differences.
2023-03-12 12:10:49
Well, that sounds like a problem, because I need to reduce the accuracy of lossless compression, and it just doesn't work the way I wanted it to. Considering what you say, I would even say that modular lossless and modular lossy are 2 different codecs. In that case I don't see any other way to compress further than reducing the number of colors.
_wb_
2023-03-12 02:10:10
Lossy modular -R 0 is basically doing that (it makes things lossy by quantizing XYB colors). Delta palette is probably a better approach though.
DZgas Ж
2023-03-12 02:39:24
Why does **gaborish** not turn off when the quality is 95+? It seems quite obvious that using gaborish at, for example, q98 is useless and only makes the image worse
_wb_ Lossy modular -R 0 is basically doing that (it makes things lossy by quantizing XYB colors). Delta palette is probably a better approach though.
2023-03-12 02:54:52
That would be great, if only it worked. But modular, at any quality other than lossless, simply destroys colors; it is not usable. For example, here the image is compressed to half the size of the maximally compressed PNG (original on the left), in the center -R 0 -d 1 modular, on the right VarDCT q99 (they have identical file sizes)
_wb_
2023-03-12 04:31:03
Yeah, delta palette or squeeze (-R 1) will be much better at doing gradients
DZgas Ж
_wb_ Yeah, delta palette or squeeze (-R 1) will be much better at doing gradients
2023-03-12 04:36:37
-R 1 ruined the whole picture too much, much better to use VarDCT
2023-03-13 12:14:48
When will the codec start using larger block sizes (and other block shapes like 32x8)? What are the plans? Are there any dates for when this will happen?
Jyrki Alakuijala
DZgas Ж When will the codec start using large block sizes (and other types of blocks like 32X8) ? what are the plans? are there any dates when this will happen?
2023-03-17 10:44:05
I wasn't able to get improvements with them -- I have an idea how to do it, but it requires research -- we had a capable intern with a good host trying it out and it didn't work, so it might not be easy. It is a lot of work and helps most in the distance 8+ area, where the images are more or less useless already -- perhaps only needed for the 0.05 bpp photocompression competition, which is more of a (losing) academic battle with the neural compression guys than something that brings utility. I'd guess the current ETA for making that is 2026Q2.
DZgas Ж
2023-04-10 09:19:08
Well, LLVM (and MSVC) builds give a different JPEG XL compression result than MSYS2 (which is mingw) - what's wrong? The image is identical in pixels, but the size differs by 100 bytes. Identical source code was used.
2023-04-10 09:59:00
Together with my friend we found out that **GCC**, **CLANG**, **LLVM**, **MSVC** produce different JPEG XL file sizes (but not different image pixels) when compressing images *(possibly the same compiler also differs across operating systems, though what we noticed is that on Windows MinGW differs from MSVC/LLVM and on Linux GCC differs from CLANG)*. The difference is roughly 0 to 100+ bytes, on average 5-10 bytes; the bigger the image, the bigger the difference in bytes. For images up to 200kb (compressed) there may be no difference at all. Sample image attached, plus the compressed versions from LLVM and MSYS2 (GCC) -- 7 bytes difference
_wb_
2023-04-11 05:14:45
This is expected. Also different compiler versions and different platforms can give (slightly) different results, and in VarDCT mode the pixels can even be (slightly) different. The reason is floating point math and SIMD...
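A tiny illustration of why the same source can produce different bytes: float addition is not associative, so a compiler that vectorizes, reorders, or fuses operations differently changes low-order bits, and the entropy coder then turns those into slightly different files. This is a generic example, not libjxl code:
```
#include <cstdio>

int main() {
  float a = 1e8f, b = -1e8f, c = 1.0f;
  // The same mathematical sum, associated two ways:
  float left = (a + b) + c;   // 0 + 1 -> 1
  float right = a + (b + c);  // b + c rounds back to -1e8, so the sum is 0
  std::printf("%g vs %g\n", left, right);  // prints "1 vs 0"
  // SIMD lane order, FMA contraction, or different library math can
  // reassociate or re-round like this, so two builds may emit slightly
  // different quantized coefficients and therefore different bytes.
  return 0;
}
```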
DZgas Ж
2023-04-11 08:54:12
Since no one else uses **such** technology for compression, it was very surprising - the same code on the same processor does not give the same result (not in speed, but in data). It's unstable: absolutely identical images end up with different HASHes, which can be problematic for detecting duplicates
spider-mario
2023-04-11 08:56:49
is it, though? how often would it happen that different people would lossily compress the same source and one would miss the opportunity to detect that as duplicates compared to if they had used another codec?
2023-04-11 08:57:36
notably given that even with a codec that always gives strictly identical results with the same encoding parameters, they could have used different encoding parameters
DZgas Ж
spider-mario is it, though? how often would it happen that different people would lossily compress the same source and one would miss the opportunity to detect that as duplicates compared to if they had used another codec?
2023-04-11 09:01:41
It's true. It's just that, since I was modifying the source code, my friend suggested compiling it with LLVM because it should be faster. And after that, of course, we started checking identical images with identical parameters and saw that they were different! Of course, that might have meant we broke something, but it turned out we hadn't... By the way, LLVM builds run about 10-15% faster than MSYS (mingw), which can be quite a lot
jonnyawsom3
2023-04-11 09:21:52
If I recall we had an issue open about choosing a compiler with better performance for the release builds
_wb_
2023-04-11 09:48:26
btw this is not something specific or inherent to JPEG XL. You could make a jxl encoder that uses only integer math and produces identical results regardless of compiler/architecture. You could also make a jpeg/webp/avif/heic encoder that uses float math in some of its internals (transforms, or encoder heuristics, or both) and has (slightly) different results depending on compiler/arch
monad
2023-04-11 07:18:08
File hash is not enough in general as identically encoded image data can couple with differing metadata. Decoding and computing similarity of image data is a more robust second pass, especially when 'duplicates' could be cropped, resized, flipped, or lossily reencoded.
DZgas Ж
monad File hash is not enough in general as identically encoded image data can couple with differing metadata. Decoding and computing similarity of image data is a more robust second pass, especially when 'duplicates' could be cropped, resized, flipped, or lossily reencoded.
2023-04-12 11:57:04
Well, no: the same source code with the same compiler, with changes to components, should produce the same data; otherwise I can't tell whether it works correctly. A hash always helps. But now we also need to take into account who compiles with what. After running a series of tests, I came to the conclusion that LLVM is the best option.
derberg🛘
2023-04-13 09:15:49
https://github.com/libjxl/libjxl/blob/3610979fadbfa036eec3a5a792f16b493c876e03/plugins/gdk-pixbuf/pixbufloader-jxl.c#L567-L568 Is writing support going to be implemented anytime soon? It looks like there is no open issue about it that one can follow. This is kind of a serious adoption blocker IMO; I was trying to get an xfce dev to add JXL support to xfce4-screenshooter and they were open to it, since it shouldn't have been much work with a working gdk-pixbuf patch, but a bit later, after the dev looked into it, they replied with a link to that line stating that writing support is TODO... :/
circa
2023-04-17 11:06:05
Hello, may I ask where you stand on optimising encoding and decoding on the GPU? I read in a comment from a while ago that it might be complicated to do for JPEG XL, and I was wondering if that opinion had changed.
_wb_
2023-04-17 12:09:00
there are certainly parts of jxl that could be done on GPU (like transforms, filters, color conversions). Entropy decode is not very convenient to do on GPU and better done with a CPU, but that's the case with any image format except the GPU texture formats (which sacrifice some compression in exchange for being fully decodable on GPU)
circa
2023-04-17 01:22:03
Okay makes sense, thank you for the answer.
novomesk
derberg🛘 https://github.com/libjxl/libjxl/blob/3610979fadbfa036eec3a5a792f16b493c876e03/plugins/gdk-pixbuf/pixbufloader-jxl.c#L567-L568 Is writing support going to be implemented anytime soon? Looks like there is no open issue about that that one can follow. This is kinda a pretty serious adoption blocker IMO; was trying to get a xfce dev into adding JXL support for xfce4-screenshooter and they were open for it since it shouldn't have been much work with a working gdk-pixbuf patch, but a bit later after the dev looked into it, they replied with a link to that line stating that writing support is TODO... :/
2023-04-18 01:24:07
Write support for the gdk-pixbuf plugin is not difficult. The loader needs SKCMS, which is not packaged everywhere - that's also a blocker. It would be good to optionally use LCMS2. The loader loads all frames into memory. That's OK for normal animations, but if someone creates a large JXL with many frames, memory gets exhausted quickly.
DZgas Ж
2023-05-21 02:09:17
I have a 4134x2756 file encoded with 8 layers of progressive decoding, with parameters -e 7 -d 25 --resampling=1, and because of the gigantic internal structure the first preview can only be shown after loading 63.4% of the image's bytes. (https://google.github.io/attention-center/) In addition, due to its size, it requires a lot of computing power. But I can't understand why showing the very first layer at 63.4% loaded needs as much computation as decoding the fully loaded image. This leads to a situation where the processor simply does not have time to show any layer at all, and as loading continues it starts decoding new layers again and again, creating a delay of some ten seconds, so long that the complete image finishes loading first.
_wb_
2023-05-21 05:12:35
-d 25 is very low quality, I don't think there will be much HF data left at that kind of quality
2023-05-21 05:13:18
You might be better off with a lower resolution image at -d 6 or something like that
DZgas Ж
_wb_ -d 25 is very low quality, I don't think there will be much HF data left at that kind of quality
2023-05-21 08:40:02
I want to say that, due to the lack of a mechanism for instantly displaying the downloaded image line by line, JPEG XL loses to the one codec (really the only one) that may be the best solution for working over extremely slow connections (for example, i2p at 20 kb/s) -- WebP. And it is really the only codec that: 1. shows "acceptable" quality when compressed at q 1, and 2. displays line by line immediately during loading (in fact, none of the codecs currently in use meets both of these criteria except WebP). And even though JPEG XL turns out 20-30% smaller at the same quality, that's still not that significant, because, as you know, at very strong compression jxl does not show the best results, and WebP only needs to be a little better to match it in quality...
DZgas Ж I have a 4134x2756 file encoded with 8 layers of progressive decoding, with parameters -e 7 -d 25 --resampling=1, and because of the gigantic internal structure the first preview can only be shown after loading 63.4% of the image's bytes. (https://google.github.io/attention-center/) In addition, due to its size, it requires a lot of computing power. But I can't understand why showing the very first layer at 63.4% loaded needs as much computation as decoding the fully loaded image. This leads to a situation where the processor simply does not have time to show any layer at all, and as loading continues it starts decoding new layers again and again, creating a delay of some ten seconds, so long that the complete image finishes loading first.
2023-05-21 08:45:04
197 612 byte webp q1
_wb_
2023-05-21 09:45:58
Doesn't baseline jpeg also do mcu by mcu decoding? But this is more a property of implementations than of bitstreams. You could also make a jxl decoder that shows every decoded block...
DZgas Ж
_wb_ Doesn't baseline jpeg also do mcu by mcu decoding? But this is more a property of implementations than of bitstreams. You could also make a jxl decoder that shows every decoded block...
2023-05-21 09:51:23
JPEG is simply not suitable for strong compression; it just crumbles. I use JPEG yuv444 exclusively when WebP yuv420 gives a very bad result, for example for my album cover images. > You could also make a jxl decoder A tempting offer. 🙂 But I will leave it at having simply reported this highly specialized problem.
Jyrki Alakuijala
DZgas Ж Why does **gaborish** not turn off when the quality is 95+? It is quite obvious that using gaborish at, for example, q98 is useless and only makes the image worse
2023-05-22 08:48:36
I'll look into this
username
Jyrki Alakuijala I'll look into this
2023-05-22 08:55:22
here is an example/testcase that seems directly related: https://discord.com/channels/794206087879852103/804324493420920833/1092894594578010192
DZgas Ж
2023-05-22 02:45:31
What are the ways to create a structural grid of blocks manually? Is it even possible?
Traneptora
DZgas Ж What are the ways to create a structural grid of blocks manually? Is it even possible?
2023-05-22 03:59:02
of varblocks? At the moment you cannot create a varblock layout yourself and encode a JXL using it; you'd have to write your own encoder or modify libjxl to do that. No existing tools do that
TheBigBadBoy - 𝙸𝚛
2023-06-04 03:43:39
Is this server interested in having 2 more boosts? I got 2 free and won't use them otherwise
_wb_
2023-06-04 04:45:15
Not sure what we can do with those boosts but I don't think it would hurt...
TheBigBadBoy - 𝙸𝚛
2023-06-04 05:33:29
there we go <:2_pepeShy:721833660457943151>
yoochan
2023-06-04 07:32:47
I would like to thank eustas and mo271 for their contributions to the wasm support in libjxl (two recent commits). I don't know if they are here; if you know them, pat them on the back for me
_wb_
2023-06-05 05:20:39
<@811568887577444363> <@795684063032901642>
Moritz Firsching
2023-06-05 05:22:05
thanks, <@1051137277813870592>, let us know if there is anything you'd need to make the libjxl wasm more usable..
yoochan
2023-06-05 12:51:57
I'll have a look at it and try to give some feedback
2023-06-05 01:43:20
<@795684063032901642> well, no need to set NODE_JS again, it was already set in my .emscripten file, and $EMSDK is not known (even if its meaning is obvious). But ninja-build is mandatory; it could be added to the readme, since the error returned by cmake is not perfectly clear
Moritz Firsching
2023-06-05 01:45:34
makes sense, feel free to open a pull request with a clarification.
yoochan
2023-06-05 01:53:19
I will
DZgas Ж
2023-06-05 05:32:14
Where did this code go: https://github.com/libjxl/libjxl/commit/15a5cc2dc280ff1b1345f4377bd5be00cbb2cd26 ? When cjxl.cc was changed to cjxl_main it was not rewritten -- they simply did not carry it over.
spider-mario
2023-06-05 05:40:04
that was a deliberate choice
DZgas Ж
spider-mario that was a deliberate choice
2023-06-05 05:43:01
What are the arguments? The reasons? In the current version of the code:
afed
2023-06-05 05:44:44
https://github.com/libjxl/libjxl/pull/1110
DZgas Ж
afed https://github.com/libjxl/libjxl/pull/1110
2023-06-05 06:01:36
First, more than a year has passed since then - where is the result? What kind of process is this? They just proposed improving it, deleted the tool, and that's it?? Second, I don't see any obvious reasons here. All I see given as the reason is that *stupid people* use this parameter to do speed/size comparison tests. Third, deleting --target_size is not mentioned in any changelog - why? (I also wanted to find a more understandable reason for the deletion.) Fourth, the code that implements target_size wasn't erased, so what is the chance that if I add back the missing code, everything will just work? Fifth, --target_size and --target_bpp are different things for different cases: I would never need to use --target_bpp, but in the future I plan to use --target_size on some of my sites to limit the amount of information in an image without writing extra code (python/bash/anything). It was also a great tool for the quality/size comparison tests I've done before
jonnyawsom3
2023-06-05 11:50:43
Perhaps re-adding it under the expert options flag could do. Then there's little chance of misuse but still easily accessible for those who want it
derberg🛘
2023-06-06 12:27:19
If concern of misuse is high, just describe it better in the man and help pages
monad
2023-06-06 06:31:21
Or remove the expert flag and a level of help verbosity, assume people can manage to fire at least a single neuron.
jonnyawsom3
2023-06-06 10:35:57
That's a bold assumption
Moritz Firsching
2023-06-07 07:51:15
yes, we will update this
2023-06-07 03:11:52
done
jonnyawsom3
2023-06-07 03:14:18
Could've probably fixed the firefox signal too if there was a status more appropriate, since they said Neutral quite some time ago
Moritz Firsching
2023-06-07 03:22:08
one thing at a time...
Traneptora
2023-06-21 04:58:26
What's the license on the conformance samples? I'm considering putting one of them in the FFmpeg fate test repository and I want to make sure I can do that legally
_wb_
2023-06-21 05:35:41
I think we only used images where the license is not an issue (CC0 or similar). IIRC that was a design criterion.
2023-06-21 05:36:37
But regardless of license I would assume "fair use" would cover it anyway
lonjil
2023-06-21 05:42:52
I don't think it would, since they'd be part of a public repo that'd mean arbitrary distribution and copying.
Traneptora
2023-06-21 05:46:03
I'm looking at newton's cradle, and icos4d animation specifically
2023-06-21 05:46:13
I don't have my own animations I've personally created on hand, otherwise I'd just use those
_wb_
lonjil I don't think it would, since they'd be part of a public repo that'd mean arbitrary distribution and copying.
2023-06-21 05:54:33
They already are part of a public repo
lonjil
2023-06-21 05:55:32
Then it's just a license question. Fair use doesn't really apply to something like that.
Traneptora
2023-06-21 05:59:59
I'm sure it's a nonissue I just want to be able to tell FFmpeg that there's no risk of legal trouble
2023-06-21 06:00:20
There's just no license declaration in https://github.com/libjxl/conformance/
_wb_
2023-06-21 06:09:38
Yeah we probably should fix that.
Sayfer Kopak
2023-06-22 12:46:24
Hello. And sorry for using Google Translate. I am an ordinary person who sometimes likes to test different programs to determine the best in its class, as well as video and graphic formats. I recently found out about the existence of this format and was quite impressed with it. But there was a problem: as far as I know, there is currently no GUI batch converter that provides full support for all its features, primarily for Windows. I managed to find only three converters, but they were full of flaws: 1. **XNConvert** - only allows choosing between quality (0-99) or lossless compression. Can't transcode JPG. Very slow. 2. **JXLgui** - the quality can be specified by editing the settings file; that's all. Can't transcode JPG. About as slow. 3. **Ultra7z Images to JPEG XL (JXL) Converter & Optimizer** - has some settings, but does not save them; every time the program is launched they need to be specified manually. Finally, it can transcode JPG. Incredibly slow. Shows ads. All of these tools use very outdated versions of JXL and are very slow (albeit with almost no CPU load). I tried replacing **cjxl.exe** for the second and third programs, but it didn't work. Unfortunately, I don't have the programming skills to make this work. So, what is the reason for the lack of a full-fledged batch converter? It is quite difficult for ordinary people to use the command line and make lists of thousands of files to be converted. I'm wondering if there is some special reason why the creators of this format are not interested in it being used by ordinary people, or in promoting it in general. Maybe some hidden patent restrictions, like the BPG format?
2023-06-22 01:50:00
Then what is the reason for this situation? And is it worth hoping that one day this format will be able to be used not only by programmers?
jonnyawsom3
2023-06-22 02:06:10
Well, it's not actually finished yet for a start... We're at version 0.8, working on getting it ready for a 'full' release when it can properly be used. It's also just meant to be an example for others to make their own converters from, with time there will be better solutions. Steve Jobs wasn't making the music for people's iPods after all ;P If you just want to convert every image from one folder into another, we can write the command to do that for you here
sklwmp
Sayfer Kopak Then what is the reason for this situation? And is it worth hoping that one day this format will be able to be used not only by programmers?
2023-06-22 02:06:43
Well, to be fair, it's up to the community to make batch converter programs, not the original creators of the format. And, to be honest, JXL support is kind of niche and specialized to the point where it isn't ready for mass consumption yet. In short, most people who are interested in JXL right now are technically literate, so they tend to make tools for their fellow technically literate folk. But with Apple adopting JXL, I hope its support grows much larger and people start developing tools for the general public to use.
Sayfer Kopak
Well, it's not actually finished yet for a start... We're at version 0.8, working on getting it ready for a 'full' release when it can properly be used. It's also just meant to be an example for others to make their own converters from, with time there will be better solutions. Steve Jobs wasn't making the music for people's iPods after all ;P If you just want to convert every image from one folder into another, we can write the command to do that for you here
2023-06-22 02:23:59
Thanks. Please write what I need to do.
jonnyawsom3
Sayfer Kopak Thanks. Please write what I need to do.
2023-06-22 02:46:44
`for %i in (Folder\*.*) do cjxl.exe "%i" -d 0 -e 4 "New folder\%~ni.jxl"` Replace `Folder` with the path to the images, `cjxl.exe` with the path to the exe, and `New folder` with where you want the new images to go. `-d 0` is lossless and `-e 4` should be much faster, but change those to what you want. I just tested `-d 0.3` and it made a folder 90MB instead of 500MB. Then paste it into the command line and it should just work™ If you want to do it recursively, change the start to `for /R Folder %i in (*.*) do` Hopefully that helps
Sayfer Kopak
`for %i in (Folder\*.*) do cjxl.exe "%i" -d 0 -e 4 "New folder\%~ni.jxl"` Replace `Folder` with the path to the images, `cjxl.exe` with the path to the exe, and `New folder` with where you want the new images to go. `-d 0` is lossless and `-e 4` should be much faster, but change those to what you want. I just tested `-d 0.3` and it made a folder 90MB instead of 500MB. Then paste it into the command line and it should just work™ If you want to do it recursively, change the start to `for /R Folder %i in (*.*) do` Hopefully that helps
2023-06-22 02:49:36
Thank you very much!
yoochan
2023-06-22 07:07:36
Take care, there is a reason why developers prefer the command line: it works really well, and once you know a bit of it you can craft your own workflow... I agree it is not intuitive, but once you start you may quickly fall down the rabbit hole, where GUIs suck because of their inability to do what you want them to do 😅
DZgas Ж
2023-07-01 11:55:53
Since formulas and numbers are used when encoding JPEG XL art: can I, by converting binary to hex and hex to decimal, hide information of any size inside the formula numbers?
Traneptora
2023-07-04 02:14:44
you could try "hiding" information stegonographically with things like negative numbers that are darker than black but you're limited by the 16-bit size of the modular buffers, or the 32-bit size if that feature is disabled
Lucas Chollet
2023-07-06 04:00:16
Hey (<@794205442175402004>), would it be possible to host the latest spec draft somewhere? That would be a nice thing to do, I think; it would remove the Discord barrier to accessing the spec and avoid the need to search for the most up-to-date version.
username
Lucas Chollet Hey (<@794205442175402004>), would it be possible to host the latest spec draft somewhere? That would be a nice thing to do, I think; it would remove the Discord barrier to accessing the spec and avoid the need to search for the most up-to-date version.
2023-07-06 04:12:56
from what I understand it's kind of a gray area, hence probably why it's only in this Discord server. I don't think it's really meant to be widespread, just something that's in here so people can contribute and directly report issues with it and such. Sure, if it was posted online somewhere then more people could contribute to it, but then a lot of people wouldn't really have an incentive to actually buy it if they could just grab an up-to-date draft from a website. Discord servers aren't really indexed on the web, so stuff that ends up in them is sort of half public.
w
2023-07-06 04:14:49
there shouldn't be any incentive to buy it (from ISO)
Lucas Chollet
2023-07-06 04:18:50
Sharing drafts is not uncommon for ISO standards (e.g. C++ or PDF)
afed
2023-07-06 04:26:46
```ISO's publication of ISO 32000-2 in 2017 ended the 24-year tradition of the latest PDF specification being freely available from Adobe. Starting in April, 2023, to provide PDF developers and stakeholders with their accustomed level of access, the PDF Association and its sponsors made ISO 32000-2 available for download at no cost.```
lonjil
2023-07-06 04:29:47
C and C++ have exceptions which allow drafts to be shared legally, but almost no other ISO standards have been granted such an exception. And here's something funny: for C23, it seems the final freely available draft will have significant semantic differences from what will be standardized.
spider-mario
2023-07-06 05:20:10
I don’t know that “funny” is the word I would choose
veluca
2023-07-06 05:43:09
I am not entirely sure that those exceptions are actually there, and that it's not just a case of, y'know, people letting sleeping dogs lie
_wb_
2023-07-06 06:33:20
I have been told that putting even a draft on a public website is not allowed by ISO. I don't like it, but I think we have a reasonable gray area solution here on discord. Those who are sufficiently interested to find the discord and ask around can find a good draft. There are very few people who really need that spec anyway, it's not like a programming language...
Lucas Chollet
2023-07-06 06:38:56
Sure, thanks for your answer!
jonnyawsom3
2023-07-06 06:40:59
Oddly enough, just the other day I dug up the draft so a friend can try to make a decoder eventually
_wb_
2023-07-06 06:44:11
Factual information like tables with header fields is not copyrightable. So if anyone wants to take the spec and rewrite all the prose in their own words, you can actually do that and put it online for free.
Lucas Chollet
2023-07-06 06:49:46
I love the idea of a free specification but I don't have the courage to start that
w
2023-07-06 06:56:17
chatgpt jxl spec
jonnyawsom3
2023-07-06 07:12:44
"Write this but better"
derberg🛘
w chatgpt jxl spec
2023-07-06 08:57:07
Create a chat "AI" and feed it with specifications that should be open. And in reality it doesn't make stuff up (by merging together info from several things) but just quotes from the documents (at most with a bit of rephrasing)
_wb_ Factual information like tables with header fields is not copyrightable. So if anyone wants to take the spec and rewrite all the prose in their own words, you can actually do that and put it online for free.
2023-07-06 09:07:30
Throwback to how cryptography was exported from the US.
DZgas Ж
w chatgpt jxl spec
2023-07-08 04:42:17
"Write jxl best encoder in 10 kb C code"
gb82
2023-07-08 05:29:41
What's the plan for the Butteraugli library in libjxl? The AV1 people are wondering because aom-av1-lavish used butter quant & we're interested in the future of butteraugli for use in aom-av1-lavish. Is it going to be kept around as an external library or removed from libjxl with no future plans?
_wb_
2023-07-08 06:14:29
Probably would make sense to make a libvmaf-style library for the various metrics.
Torvid
Oddly enough, just the other day I dug up the draft so a friend can try to make a decoder eventually
2023-07-08 08:36:04
me? ovo#
jonnyawsom3
2023-07-08 08:38:30
Slowmode on here, so maybe move to <#803663417881395200>
Torvid
2023-07-08 08:45:30
If I get around to writing a decoder, it might be fun to also write an "easy mode" technical description of the file format. Since... yeah the spec itself is not the lightest of reads x'D
Jyrki Alakuijala
gb82 What's the plan for the Butteraugli library in libjxl? The AV1 people are wondering because aom-av1-lavish used butter quant & we're interested in the future of butteraugli for use in aom-av1-lavish. Is it going to be kept around as an external library or removed from libjxl with no future plans?
2023-07-09 06:07:05
you can rip it out of libjxl or use github.com/google/butteraugli -- or ask me for help: jyrki@google.com
Lucas Chollet
2023-07-28 09:49:17
Can someone shed some light on why there is a TOC in the stream? (As a heads-up, I haven't looked at the VarDCT mode yet, only the Modular mode.) I think I understand how it works but not why it was added. I really don't see how it helps with compression, so I guess it's only there to help the decoder. Is it related to parallelism?
veluca
2023-07-28 09:50:04
yup, makes it easier to decode sections in parallel
2023-07-28 09:50:36
we could also have prepended each section with its size, but then things like reading a subset of the sections from disk would be slower
Lucas Chollet
2023-07-28 09:55:01
Thank you! > reading a subset of the sections from disk would be slower Sure, that really makes sense. What about the permutation feature then?
Traneptora
Lucas Chollet Thank you! > reading a subset of the sections from disk would be slower Sure, that really makes sense. What about the permutation feature then?
2023-07-28 11:06:06
TOC permutation allows you to order the elements for a more optimized progressive decode experience. For example, it could allow you to put the center of the image first in the decode order
Lucas Chollet
2023-07-28 11:15:34
Oh okay, thanks for your answer
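A minimal sketch of why a TOC of section sizes enables this (an illustration only, not the actual JXL TOC encoding): a prefix sum over the sizes gives every section's byte offset up front, so decode workers can seek straight to their sections instead of parsing everything before them.
```
// Turn TOC section sizes into absolute byte offsets (illustration only).
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<size_t> SectionOffsets(const std::vector<uint32_t>& toc_sizes,
                                   size_t first_section_offset) {
  std::vector<size_t> offsets(toc_sizes.size());
  size_t pos = first_section_offset;
  for (size_t i = 0; i < toc_sizes.size(); ++i) {
    offsets[i] = pos;     // section i starts here
    pos += toc_sizes[i];  // the next section follows immediately after it
  }
  return offsets;  // each decode thread can now seek directly to its section
}
```
A permuted TOC only changes which section's data appears first in the file; the sizes still tell the decoder where everything is.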
Aryan Pingle
2023-08-04 06:53:45
My build keeps breaking for commits starting from https://github.com/libjxl/libjxl/commit/ef97fe07656e385bbd1af3801635b66bdbd413f2. Can someone help me understand what this commit does?
Moritz Firsching
2023-08-04 07:05:07
how does it break?
Aryan Pingle
2023-08-04 07:09:43
I get this error: ``` wasm-ld: /b/s/w/ir/cache/builder/emscripten-releases/llvm-project/llvm/include/llvm/Support/Casting.h:104: static bool llvm::isa_impl_cl<lld::wasm::DefinedGlobal, const lld::wasm::GlobalSymbol *>::doit(const From *) [To = lld::wasm::DefinedGlobal, From = const lld::wasm::GlobalSymbol *]: Assertion `Val && "isa<> used on a null pointer"' failed. ``` I've tried the build on many commits and this one was the first one where it started to break
Moritz Firsching
2023-08-07 06:06:06
Is it also broken at current head? What steps are you doing to get this error? Are you following the wasm build instructions? Perhaps it would be a good idea to open an issue on our github repo about this
Aryan Pingle
2023-08-11 04:06:17
Ahh man, sorry for replying so late. Luca (the commit author) helped me fix this; I just had to update the emscripten version from 2.0.23 to 2.0.34 (must've been a compiler bug)
_wb_
2023-08-20 06:30:47
CLA bot is for the whole repo
2023-08-20 06:31:23
We could make a separate subrepo for documentation I guess. Also so there is no CI for that
Aryan Pingle
2023-08-30 04:22:26
Does JXL support pure single threading when building with emscripten? I can't seem to build without passing the -pthread flag, even for the JXLDecoder API (version 0.7). https://github.com/libjxl/libjxl/blob/main/CMakeLists.txt#L290 implies that -pthread is enabled for all JXL builds when Emscripten is detected. I really hope this doesn't mean JXL WASM is unusable on lower-end devices
pshufb
2023-09-19 12:10:47
I saw a good number of places where it seems relatively straightforward to avoid using `new`. As something of a C++ snob, I’d like to do something about that: 1) Would there be interest in a PR/PRs to get rid of some of those `new`s, replacing them with smart pointers? (And/or replacing `unique_ptr<Foo>(new Foo())` with something like `make_unique<Foo>()`) 2) If so, and if you’d prefer multiple PRs, how would you like them to be split? - As there aren’t *that* many `new`s, and they’re generally quite compartmentalized into e.g. the benchmarking harness, demos, etc., if splitting by category into ~4 categories works for you, then it works for me.
veluca
2023-09-19 12:24:15
feel free to send those my way, either one or multiple PRs are fine
spider-mario
2023-09-19 02:26:45
note that so far, we have been targeting C++11; std::make_unique is C++14
2023-09-19 02:27:07
we have jxl::make_unique but maybe not all the codebase can depend on it (e.g. probably not jpegli)
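For reference, the C++11-compatible helper mentioned here is presumably along these lines (a sketch of the standard backport pattern for the non-array case; the actual jxl::make_unique may differ in detail):
```
#include <memory>
#include <utility>

// C++11 make_unique backport, non-array case only (illustration).
template <typename T, typename... Args>
std::unique_ptr<T> make_unique_compat(Args&&... args) {
  return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}

// Usage: instead of  std::unique_ptr<Foo> p(new Foo(a, b));
// write              auto p = make_unique_compat<Foo>(a, b);
// (or std::make_unique<Foo>(a, b) where C++14 is available)
```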
yoochan
2023-09-22 06:55:28
<@604964375924834314> , do you work at the same workplace as <@179701849576833024> ?
spider-mario
yoochan <@604964375924834314> , do you work at the same workplace as <@179701849576833024> ?
2023-09-22 07:35:21
yes, same building of the Google Zürich office (albeit on different floors)
yoochan
2023-09-22 07:36:54
you are not in the same department? or is the building really narrow?
monad
2023-09-22 08:11:41
Sami's office is right on top of Luca's. If they ever need to chat, they just pop open the joining door in their ceiling/floor. A rope ladder facilitates bodily traversal between planes. Before installing the door, Morse code or whispering through the vents were most opportune, and highly obnoxious.
yoochan
2023-09-22 08:13:44
haha, no tube based messaging system yet then
veluca
monad Sami's office is right on top of Luca's. If they ever need to chat, they just pop open the joining door in their ceiling/floor. A rope ladder facilitates bodily traversal between planes. Before installing the door, Morse code or whispering through the vents were most opportune, and highly obnoxious.
2023-09-22 08:14:01
this message made me wonder if you know more than you should 😛
2023-09-22 08:14:17
we are pretty much one above the other
2023-09-22 08:14:50
and not in our building, but in another building there are "quick connection" fireman poles between some floors
_wb_
2023-09-22 11:12:21
fireman poles? cool!
2023-09-22 11:13:24
emergency, day-zero exploit found in libjxl, USE THE FIREMAN POLE SAMI
yoochan
2023-09-22 11:47:07
it looks so much like how I envisioned startups of 2010, with sofas, slides and table soccer
AWM
2023-09-24 12:05:58
What does GatherTreeData() do in JPEG XL? As I understand, it tries to examine the image and make a tree structure like in the JPEG XL Art app. It also uses Xoroshiro128+ to make random decisions. But I still don't understand how it is done? Also I would like to know what exactly gets encoded with ANS in JPEG XL. Thanks in advance
Traneptora
2023-09-25 01:07:22
How often is https://github.com/libjxl/bench updated? I fixed a few bugs so I'm curious to see what changed
_wb_
2023-09-25 01:22:12
Ideally we should make it so it triggers an automatic update on every change to a decoder (maybe at most once per day to keep it sane) — if someone feels like setting up the appropriate github actions and hooks, feel free!
Traneptora
_wb_ Ideally we should make it so it triggers an automatic update on every change to a decoder (maybe at most once per day to keep it sane) — if someone feels like setting up the appropriate github actions and hooks, feel free!
2023-09-25 01:26:45
https://github.com/libjxl/bench/issues/8 that's already on the todo-list, but it looks like the answer is "only manually, at the moment"
_wb_
2023-09-25 01:29:58
I mean, we can accept pull requests on the bench repo too 🙂
DZgas Ж
2023-10-03 06:19:06
Is there a technical possibility of gluing images together without re-encoding, so that two images become one wide image? Is it possible to make an animation from pre-encoded images without recoding them? Considering that the maximum unit of a macroblock is 256x256, are they independent units? Is it possible to split an image into separate photos without recoding? Of course, there is no such functionality in libjxl; I'm asking about the technical/theoretical possibility.
_wb_
2023-10-03 07:36:20
Yes, technically it is possible to take two jxl images and turn them into a single composite jxl image without re-encoding (the only restriction is that the two inputs have to be both XYB or both the same non-xyb colorspace). You can also take a single jxl image and split it into separate images, but that would have some restrictions: - if no recoding at all is allowed, the unit is 2048x2048 (the size of a DC group) for VarDCT mode, and in Modular mode it is typically 256x256 - if some entropy recoding is allowed, the unit is 256x256 (the size of an AC group), but then you need to re-encode the DC if the crops are not aligned to 2048x2048 - if a lot of entropy recoding is allowed and you're only worried about preserving the vardct data losslessly, then for current libjxl-produced files, the unit is 64x64
Traneptora
2023-10-03 12:29:39
it gets more complicated though if the modular things like the HF metadata and LF coefficients were squeezed
a goat
2023-10-03 09:28:23
Krita's documentation for Jpeg XL says that reencoding with lossy compression results in continued quality loss. Is this true? I remember FLIF did not have this issue with its lossy encoder
2023-10-03 09:35:32
https://docs.krita.org/en/general_concepts/file_formats/file_jxl.html "For JPEG XL files using lossy compression, it is not advised to save over the same file multiple times. The lossy compression will cause the file to reduce in quality each time you save it. This is a fundamental problem with lossy compression methods. Instead you should use the lossless compression mode, or a working file format while you are working on the image."
jonnyawsom3
2023-10-03 10:54:31
Likely in the default VarDCT mode instead of Modular lossy
gb82
2023-10-04 12:39:37
https://youtu.be/w7UDJUCMTng?si=4nK0IFHmejvamQQO
_wb_
2023-10-04 05:10:47
The way FLIF did lossy was quite uniquely idempotent, which was only possible because it's effectively not doing a frequency transform at all (just an Adam7-style sample ordering). It was not very good at lossy though (worse than JPEG).
2023-10-04 05:11:40
FUIF and JXL do have generation loss — it's relatively mild though compared to video codecs.
Demiurge
2023-10-04 06:36:16
Mozjpeg seems to have no generation loss, is it just extremely mild like JXL?
_wb_
2023-10-04 06:42:04
JPEG has relatively mild generation loss in general since "what happens in the block stays in the block"
spider-mario
2023-10-04 06:54:21
I wonder how much that changes if we shift the image one pixel right, down, left, up in between iterations
username
spider-mario I wonder how much that changes if we shift the image one pixel right, down, left, up in between iterations
2023-10-04 06:56:58
like this? : https://www.youtube.com/watch?v=jQQ7epXoULU https://www.youtube.com/watch?v=ytI7Yn_ShXk
monad
gb82 https://youtu.be/w7UDJUCMTng?si=4nK0IFHmejvamQQO
2023-10-05 07:09:39
Defaults don't hold up well in more practical cases (https://discord.com/channels/794206087879852103/803645746661425173/1023189031011491921 or `in:benchmarks has:video`). MozJPEG is much better.
Jyrki Alakuijala
2023-10-05 09:30:48
MozJPEG and libjpeg-turbo seem roughly equivalent to me -- while jpegli (and even guetzli) is actually a substantial win over them
_wb_
2023-10-05 03:25:48
mozjpeg is better (but slower) at the lower qualities (below q80). At the higher qualities it is worse and slower than libjpeg-turbo...
damian101
2023-10-19 12:32:07
I think I have found a significant issue with JXL butteraugli, and that is that the default intensity_target always stays at 80, even though in cases where PQ or HLG transfer curve is specified, it should be 10'000.
2023-10-19 12:37:59
And if I specify something like `--colorspace RGB_D65_SRG_Rel_709`, I think default intensity_target also should not remain 80, but be 100, as specified as the reference screen luminance in BT.1886. Matter of principle...
w
2023-10-19 12:57:38
but srgb is 80 nits
Quackdoc
2023-10-19 12:57:59
~~dont you just love relative luminance~~ its also worth noting another common intensity would be 160
damian101
w but srgb is 80 nits
2023-10-19 01:02:11
Yes, so if a non-sRGB transfer function is specified it should use the non-sRGB reference screen luminance. It's especially relevant for PQ or HLG, because those do not specify relative SDR brightness but absolute brightness levels from 0-10'000 nits. With intensity_target 80 on HDR content you get butteraugli scores that are far removed from reality, as everything is more than 100 times less bright than it should be.
And if I specify something like `--colorspace RGB_D65_SRG_Rel_709`, I think default intensity_target also should not remain 80, but be 100, as specified as the reference screen luminance in BT.1886. Matter of principle...
2023-10-19 01:08:10
Another transfer curve I think would be relevant is DCI (`--colorspace RGB_DCI_DCI_Rel_DCI`). But apparently, the reference screen brightness for that is only 48 nits... or 52.37 nits to ensure screen brightness is at least 48 nits with different white points: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7290729 I assumed it would be especially high, considering the strong transfer function DCI uses (~2.6 gamma, if I remember correctly)... I don't think most cinema projectors nowadays are that dim... But it's the same with movies mastered on Bt.1886 screens especially for Web or Blu-ray release, which are also usually not mastered at 100 nits anymore, but more often somewhere between 150 and 200 nits afaik (which might not be such a good idea considering how often people complain about how dark movie scenes are nowadays; not everyone watches on a bright screen in a dark environment after all, but those who do of course benefit from it)...
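Relating to the intensity_target point above, a minimal sketch (assuming the public libjxl encoder API; error handling omitted) of declaring the nominal peak explicitly for PQ/HLG input instead of relying on the 80-nit default:
```
#include <cstdint>
#include <jxl/encode.h>

// Declare a 10,000-nit peak for PQ input in the basic info (sketch only).
void DeclareHdrPeak(JxlEncoder* enc, uint32_t xsize, uint32_t ysize) {
  JxlBasicInfo info;
  JxlEncoderInitBasicInfo(&info);
  info.xsize = xsize;
  info.ysize = ysize;
  info.bits_per_sample = 16;
  info.intensity_target = 10000.0f;  // PQ peak; ~1000 would suit typical HLG
  JxlEncoderSetBasicInfo(enc, &info);  // returns a status, ignored in this sketch
}
```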
w
2023-10-19 01:11:57
crazy how far light control goes
Quackdoc
2023-10-19 01:28:50
I think playing loosey goosey with the spec for defaults is probably not a good thing
Nourane
2023-10-19 04:02:01
hello, I'm trying to run these commands cd libjxl mkdir build cd build cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=OFF .. cmake --build . -- -j$(nproc) I got these errors jxl_cms.cc:(.text+0x2daf): undefined reference to `skcms_ParseWithA2BPriority' clang: error: linker command failed with exit code 1 (use -v to see invocation) tools/CMakeFiles/cjxl.dir/build.make:89: recipe for target 'tools/cjxl' failed make[2]: *** [tools/cjxl] Error 1 CMakeFiles/Makefile2:1847: recipe for target 'tools/CMakeFiles/cjxl.dir/all' failed make[1]: *** [tools/CMakeFiles/cjxl.dir/all] Error 2 Makefile:140: recipe for target 'all' failed make: *** [all] Error 2 (my_env) user@User:~/libjxl/build$
Moritz Firsching
2023-10-19 07:45:25
<@224363555074342912> Thanks for reporting this. There were recent changes in jxl_cms.cc, which might have caused this. It would be best to open an issue at https://github.com/libjxl/libjxl (Or I can just open an issue with what you wrote here, as you prefer...)
Nova Aurora
Moritz Firsching <@224363555074342912> Thanks for reporting this. There were recent changes in jxl_cms.cc, which might have caused this. It would be best to open an issue at https://github.com/libjxl/libjxl (Or I can just open an issue with what you wrote here, as you prefer...)
2023-10-19 07:52:39
You're looking for <@1153399370490855515>
Moritz Firsching
2023-10-19 07:53:09
sorry... autocompletion got me
spider-mario
2023-10-19 07:58:37
it is likely caused by https://github.com/libjxl/libjxl/pull/2884, on which I have already commented
2023-10-19 07:58:56
(albeit only mentioning lcms2)
novomesk
2023-10-24 12:29:49
I am not sure whether this issue is something serious, but I received a report about a timeout in kimgio_jxl_fuzzer (exceeds 60 secs). libjxl is built from the v0.8.x branch and -DHWY_COMPILE_ONLY_SCALAR is used.
yoochan
2023-10-31 12:55:45
I did a checkout of the main branch and got a compile error today : ```[ 98%] Building CXX object tools/CMakeFiles/benchmark_xl.dir/benchmark/benchmark_codec_avif.cc.o /media/forge/demo_jxl/repo/libjxl/tools/benchmark/benchmark_codec_avif.cc: In member function ‘virtual jxl::Status jpegxl::tools::AvifCodec::Compress(const std::string&, const jxl::CodecInOut*, jxl::ThreadPool*, std::vector<unsigned char>*, jpegxl::tools::SpeedStats*)’: /media/forge/demo_jxl/repo/libjxl/tools/benchmark/benchmark_codec_avif.cc:258:14: error: ‘void result’ has incomplete type 258 | auto result = avifEncoderSetCodecSpecificOption( | ^~~~~~ gmake[2]: *** [tools/CMakeFiles/benchmark_xl.dir/build.make:258: tools/CMakeFiles/benchmark_xl.dir/benchmark/benchmark_codec_avif.cc.o] Error 1 gmake[2]: *** Waiting for unfinished jobs.... gmake[1]: *** [CMakeFiles/Makefile2:1706: tools/CMakeFiles/benchmark_xl.dir/all] Error 2 gmake: *** [Makefile:146: all] Error 2 ```
spider-mario
2023-10-31 03:06:24
thanks, we’re on it
yoochan
2023-10-31 03:16:45
merci 🙂
Moritz Firsching
2023-10-31 09:12:14
That should be fixed already by https://github.com/libjxl/libjxl/pull/2920
username
2023-11-04 02:12:52
<@795684063032901642> This is probably a minor issue but in `cc/base/devtools_instrumentation.cc` for the chromium JXL implementation there seems to be a histogram name that is wrong. https://chromium-review.googlesource.com/c/chromium/src/+/4255409 someone in the <#794206170445119489> channel here https://discord.com/channels/794206087879852103/794206170445119489/1170169129555214357 found this typo.
damian101
2023-11-04 12:49:52
Is it possible to specify Gamma 2.2 transfer curve, same as BT.470 System M transfer curve, in butteraugli or cjxl? It is extremely common, the most common transfer curve across all types of consumer displays, including all Apple devices, and quite distinct from sRGB.
Quackdoc
2023-11-04 09:48:51
Gamma2.2 supremacy, it is recommended by filmlight to produce and use gamma2.2 instead of sRGB too
spider-mario
Is it possible to specify Gamma 2.2 transfer curve, same as BT.470 System M transfer curve, in butteraugli or cjxl? It is extremely common, the most common transfer curve across all types of consumer displays, including all Apple devices, and quite distinct from sRGB.
2023-11-04 11:20:07
yes: https://libjxl.readthedocs.io/en/latest/api_color.html#_CPPv4N16JxlColorEncoding5gammaE
2023-11-04 11:20:22
oh, wait, `cjxl`? more tricky, I believe
2023-11-04 11:21:12
I know that we can serialise such color encodings in the `RGB_D65_202_Rel_SRG` description format but I’m not sure whether we also parse that form
2023-11-04 11:22:05
but if the input has an ICC profile that encodes such a curve, we should detect it as such
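A minimal sketch (assuming the libjxl C API linked above) of tagging a stream with a pure power-law transfer via JxlColorEncoding; the gamma field is assumed here to hold the encode-direction exponent, so a display "gamma 2.2" curve becomes 1/2.2 — worth double-checking against the JxlColorEncoding docs:
```
#include <jxl/encode.h>

// sRGB primaries / D65 white point with a pure gamma curve (sketch only).
void TagGamma22(JxlEncoder* enc) {
  JxlColorEncoding c = {};
  c.color_space = JXL_COLOR_SPACE_RGB;
  c.white_point = JXL_WHITE_POINT_D65;
  c.primaries = JXL_PRIMARIES_SRGB;
  c.transfer_function = JXL_TRANSFER_FUNCTION_GAMMA;
  c.gamma = 1.0 / 2.2;  // assumed to be the encode-direction exponent
  c.rendering_intent = JXL_RENDERING_INTENT_RELATIVE;
  JxlEncoderSetColorEncoding(enc, &c);  // status ignored in this sketch
}
```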
damian101
2023-11-04 11:22:52
all video formats support it thanks to BT.470 System M...
spider-mario
2023-11-04 11:23:32
(lcms2: https://github.com/libjxl/libjxl/blob/7f9e7452a64b12a0ee2c6559b58b2308c3e215d4/lib/jxl/cms/jxl_cms.cc#L730-L765, skcms: https://github.com/libjxl/libjxl/blob/7f9e7452a64b12a0ee2c6559b58b2308c3e215d4/lib/jxl/cms/jxl_cms.cc#L494-L528)
Traneptora
Is it possible to specify Gamma 2.2 transfer curve, same as BT.470 System M transfer curve, in butteraugli or cjxl? It is extremely common, the most common transfer curve across all types of consumer displays, including all Apple devices, and quite distinct from sRGB.
2023-11-05 01:57:00
fwiw gamma22 is H.273, ID4
2023-11-05 01:57:41
so you can always tag a PNG with cICP and set TRC=4
2023-11-05 01:58:25
or something similar
damian101
Traneptora so you can always tag a PNG with cICP and set TRC=4
2023-11-05 03:46:15
but will it be preserved when converting to JXL?
spider-mario
2023-11-05 03:47:55
with a PNG, you can set the gAMA tag directly, and that’s preserved
Traneptora
but will it be preserved when converting to JXL?
2023-11-05 07:21:33
yes tho an XYB jxl can decode to any space
Moritz Firsching
username <@795684063032901642> This is probably a minor issue but in `cc/base/devtools_instrumentation.cc` for the chromium JXL implementation there seems to be a histogram name that is wrong. https://chromium-review.googlesource.com/c/chromium/src/+/4255409 someone in the <#794206170445119489> channel here https://discord.com/channels/794206087879852103/794206170445119489/1170169129555214357 found this typo.
2023-11-06 10:16:09
Thanks, <@245794734788837387>, yeah, I think the names of the histograms changed at some point... Will fix it whenever I rebase it again...
yoochan
2023-11-09 07:56:39
<@532010383041363969> I wanted to have a look at your recent tweaks on the vardct. What is the last commit before? Are the modifications on the main branch?
damian101
I think I have found a significant issue with JXL butteraugli, and that is that the default intensity_target always stays at 80, even though in cases where PQ or HLG transfer curve is specified, it should be 10'000.
2023-11-13 11:36:18
Impacts cjxl compression as well. PQ has peak of 10'000 nits, HLG (usually) of 1'000 nits, not 80 nits...
spider-mario
2023-11-13 12:29:30
cjxl used to automatically use a correct intensity target by default – did it stop doing that?
2023-11-13 12:29:45
or is that just when calling butteraugli?
Dexrn ZacAttack
2023-11-13 01:34:42
How can I make a website embed JXL?
damian101
spider-mario cjxl used to automatically use a correct intensity target by default – did it stop doing that?
2023-11-13 03:53:40
Jyrki Alakuijala
yoochan <@532010383041363969> I wanted to have a look at your recent tweaks on the vardct. What is the last commit before? Are the modifications on the main branch?
2023-11-13 04:33:10
yes, it is merged https://github.com/libjxl/libjxl/pull/2836 -- I don't know how to find the commit just before it -- I'm somewhat helpless with git -- the modifications are merged in the main branch
damian101
2023-11-13 04:55:27
Oh, there's a much bigger issue here, the files encoded with cjxl seem to miss any color metadata. Command used: ``` cjxl -e 9 -d 2.34 --dec-hints=color_space=RGB_D65_DCI_Rel_PeQ --intensity_target 10000 '/run/media/damian101/shared/CosmosLaundromat 2k24p HDR P3PQ 01700/CosmosLaundromat_2k24p_HDR_P3PQ_01700.png' '/run/media/damian101/shared/CosmosLaundromat 2k24p HDR P3PQ 01700/CosmosLaundromat_2k24p_HDR_P3PQ_01700-e9-d2.34-10000nits.jxl' ```
spider-mario
Oh, there's a much bigger issue here, the files encoded with cjxl seem to miss any color metadata. Command used: ``` cjxl -e 9 -d 2.34 --dec-hints=color_space=RGB_D65_DCI_Rel_PeQ --intensity_target 10000 '/run/media/damian101/shared/CosmosLaundromat 2k24p HDR P3PQ 01700/CosmosLaundromat_2k24p_HDR_P3PQ_01700.png' '/run/media/damian101/shared/CosmosLaundromat 2k24p HDR P3PQ 01700/CosmosLaundromat_2k24p_HDR_P3PQ_01700-e9-d2.34-10000nits.jxl' ```
2023-11-13 06:22:55
I’m not sure that the color_space dec-hint is respected for PNG input, does it work better if you first convert the input to PPM?
damian101
spider-mario I’m not sure that the color_space dec-hint is respected for PNG input, does it work better if you first convert the input to PPM?
2023-11-13 07:34:02
It definitely correctly influences how the JXL is encoded... Either way, I definitely want some way to specify primaries and transfer curve during conversion, overriding source metadata.
Moritz Firsching
Jyrki Alakuijala yes, it is merged https://github.com/libjxl/libjxl/pull/2836 -- I don't know how to find the commit just before it -- I'm somewhat helpless with git -- the modifications are merged in the main branch
2023-11-14 08:49:20
https://github.com/libjxl/libjxl/commit/136cd8107f1ed79f95dfb9ca3250fb7adc9d68a2 is the commit, the parent is https://github.com/libjxl/libjxl/commit/6b879656c2c5b6b9faeb58403483372c4a0b5522
damian101
spider-mario I’m not sure that the color_space dec-hint is respected for PNG input, does it work better if you first convert the input to PPM?
2023-11-14 03:51:47
Yes, first converting to ppm works, now the metadata is correct. When encoding from PNG, only intensity_target metadata is saved correctly, no color space stuff.
2023-11-14 06:56:16
so you can always tag a PNG with cICP
Traneptora so you can always tag a PNG with cICP and set TRC=4
2023-11-14 08:43:04
~~Cjxl ignores those tags (generally terribly supported), I assume cjxl and most other software only looks for the ICC profile in PNGs.~~
veluca
2023-11-14 09:14:00
I'm fairly sure cjxl does support cicp
damian101
veluca I'm fairly sure cjxl does support cicp
2023-11-14 09:28:05
~~This works indeed just fine, I must have done something wrong during the initial test...~~ Also, when metadata is taken from source metadata, default intensity_target is indeed correct.
~~This works indeed just fine, I must have done something wrong during the initial test...~~ Also, when metadata is taken from source metadata, default intensity_target is indeed correct.
2023-11-14 10:10:50
No, there is indeed something very wrong. While jxlinfo reports the correct intensity_target, neither is the image encoded targeting the correct intensity_target, nor is the decoder communicating the correct intensity_target. These two images show the same output for jxlinfo, but are displayed very differently in viewers supporting tonemapping:
spider-mario
2023-11-15 10:27:01
<@794205442175402004> I have been told that ssimulacra having no tagged version might jeopardise its AUR package https://aur.archlinux.org/packages/ssimulacra, since a package that tracks the HEAD of a git repository should be named ssimulacra-git; what would you think of tagging the current HEAD as 1.0? (I think the package is technically safe from that rule since it’s pinned to a specific commit that just happens to be the current HEAD, but might as well make it a little more than “technically safe” if it’s not too much hassle)
_wb_
2023-11-15 11:53:56
Makes sense
Jyrki Alakuijala
yoochan <@532010383041363969> I wanted to have a look at your recent tweaks on the vardct. What is the last commit before? Are the modifications on the main branch?
2023-11-17 11:27:52
any news on this -- I'm full of curiosity 😄
yoochan
2023-11-17 01:01:42
I compiled both versions and ran them on the Kodak image set. I'm just building some script/tool to assess them properly 🙂 I tried with d=3.0 in order to have something to assess, but I may go to 2.0 if needed
Demiurge
2023-11-18 05:20:50
this looks interesting... I wonder why it isn't easy and fast to generate a low res thumbnail without the thumbnailer timing out. Does the libjxl API not provide a way to decode a very small and fast thumbnail? https://www.reddit.com/r/jpegxl/comments/17xv0kq/best_way_to_quickly_browse_jxl_thumbnail_files_in/
Jyrki Alakuijala
yoochan I compiled both versions and ran them on the Kodak image set. I'm just building some script/tool to assess them properly 🙂 I tried with d=3.0 in order to have something to assess, but I may go to 2.0 if needed
2023-11-23 02:52:54
I'm curious about any results -- I think I looked more at 1.0-2.0 myself, and more at strange images like anime and pixel graphics than at photos. I think my conclusion on photos was that it might be slightly better, neutral, or even ever so slightly worse, but on pixel graphics there was a substantial improvement, possibly also on anime
yoochan
2023-11-23 03:48:44
<@532010383041363969> the first evaluation I did was ... mostly neutral 😄 but I'm not sure I looked at the right criteria. And I didn't test anime, only the Kodak image set. I did the comparison with a tool of mine, where images are magnified 200% and I can switch between one, the other, and the reference. But as a blind test, I don't know which one used the old or the new encoding (I can share the tool I made for blind testing). I tried to look at: 1. the loss of detail, 2. the addition of noise, to pick the picture which seemed closer to the original. But I was not very consistent and I would like to try a bit more (I did a dozen samples, I would like to reach 50). Results were almost 50/50. The distance used was d=3.0, and I don't know if I should focus on less noise or more detail 😄
2023-11-23 04:10:33
I did some more tests and I can see something! In the old version, the edges move a bit visually between the encoded version and the reference (by less than a pixel), but with the new version the edges stay in the same place... Once I found this, I got a perfect score at determining which version was the new one and which was the old... I can't blind test anymore 😄
Jyrki Alakuijala
2023-11-25 10:34:10
Thank you 🌞🌞🌞. Do you think it is an improvement now or still neutral?
fab
2023-11-25 02:48:47
Squoosh is good
yoochan
2023-11-25 02:50:12
well, I plan to look again with other kinds of images... for photographic content I lack experience, and I still don't know if I prefer loss of detail over extra noise 😆
a goat
_wb_ Makes sense
2023-11-30 11:05:52
What's the official conversion formula for LAB to XYB?
Silikone
Quackdoc Gamma2.2 supremacy, it is recommended by filmlight to produce and use gamma2.2 instead of sRGB too
2023-11-30 11:57:14
To be fair, 2.2 is horrible on average displays. So many near-blacks get crushed. Even on my professional Dell IPS, I struggle to tell the darkest shades apart. sRGB prevents this
Quackdoc
Silikone To be fair, 2.2 is horrible on average displays. So many near-blacks get crushed. Even on my professional Dell IPS, I struggle to tell the darkest shades apart. sRGB prevents this
2023-12-01 12:59:13
This sounds wrong. Gamma2.2 is very reliable and looks good on both sRGB and gamma2.2 displays, which make up roughly a 50:50 split of "sRGB" displays. Even filmlight / baselight recommends and defaults to using a pure gamma2.2 now.
Silikone
Quackdoc This sounds wrong. Gamma2.2 is very reliable and looks good on both sRGB and gamma2.2 displays, which make up roughly a 50:50 split of "sRGB" displays. Even filmlight / baselight recommends and defaults to using a pure gamma2.2 now.
2023-12-01 01:03:07
Displays using 2.2 profiles, that is. I can toggle between them on my Dell, and I strongly prefer sRGB so I can navigate caves in Minecraft safely. My CRT with 2.2 esque gamma looks insane, though. I can see so much just by dimming my room
Quackdoc
2023-12-01 01:17:59
I'm talking about setting 2.2 on images for distribution. Also, one needs to keep in mind that games are rather arbitrary in how they display; many output actual sRGB (not all, however), and that can look weird on a gamma2.2 display, but the inverse, a gamma2.2 image on an sRGB display, is a better situation.
Silikone
2023-12-01 03:04:59
Why not BT.1886? No ambiguity, and it behaves well with low-contrast displays.
Quackdoc
2023-12-01 03:38:57
different transfer. this could potentially cause some weirdness (not all viewers support ICC for instance and may not render properly because of this)
Jyrki Alakuijala
a goat What's the official conversion formula for LAB to XYB?
2023-12-02 06:37:14
There are several things we call XYB: one in JPEG XL, another in Butteraugli, differing from version to version. The one in Butteraugli I consider the best. It has a biased log as its response curve, whereas JPEG XL's has a cube root and the rest of the log is put into adaptive quantization... Roughly, the attempt in XYB is to take LMS spectral energy, compress each with a biased log, then do X=M-L, Y=L+M, B=S (often we use B=S-Y for practicality)
_wb_
2023-12-02 10:05:33
the JPEG XL one, but with X and B-Y shifted/rescaled so they can be represented in uint ranges
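A purely schematic sketch of the recipe described above for the butteraugli-style XYB; the mixing weights and bias below are placeholders, not the real butteraugli or libjxl constants:
```
#include <cmath>

// LMS-like responses -> biased log -> opponent axes (schematic only).
struct Xyb { double x, y, b; };

Xyb ToXybSchematic(double r, double g, double b_in) {
  const double bias = 0.01;  // placeholder bias, not the real value
  // Placeholder LMS-ish mix; the real matrices live in the libjxl sources.
  double L = 0.30 * r + 0.60 * g + 0.10 * b_in;
  double M = 0.20 * r + 0.70 * g + 0.10 * b_in;
  double S = 0.05 * r + 0.15 * g + 0.80 * b_in;
  L = std::log(L + bias);
  M = std::log(M + bias);
  S = std::log(S + bias);
  Xyb out;
  out.x = M - L;      // X = M - L
  out.y = L + M;      // Y = L + M
  out.b = S - out.y;  // the "B = S - Y" variant mentioned above
  return out;
}
```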
Traneptora
2023-12-03 04:17:07
Is there a way to build ssimulacra2 without building the majority of the test tools? I don't want to install them when running `make install` but I want to have access to ssimulacra2
damian101
Silikone To be fair, 2.2 is horrible on average displays. So many near-blacks get crushed. Even on my professional Dell IPS, I struggle to tell the darkest shades apart. sRGB prevents this
2023-12-03 11:40:53
But that completely depends on how the content is mastered. Also, you are not supposed to easily tell neighbouring shades apart on an 8-bit display? Ideally you'd want equal distance between all shades, like with the PQ transfer. IMO, Gamma 2.4 on a display would be better than gamma 2.2 and especially sRGB, but those displays completely crush the blacks of content mastered for sRGB. Raising the brightness of dark areas is preferable over crushing them.
Silikone
But that completely depends on how the content is mastered. Also, you are not supposed to easily tell neighbouring shades apart on an 8-bit display? Ideally you'd want equal distance between all shades, like with the PQ transfer. IMO, Gamma 2.4 on a display would be better than gamma 2.2 and especially sRGB, but those displays completely crush the blacks of content mastered for sRGB. Raising the brightness of dark areas is preferable over crushing them.
2023-12-04 02:19:59
Even if the content is mastered for 2.2, you're essentially wasting data on shades that can't be seen, leaving less to smooth out the shades you actually can discern. If displays did what BT.1886 does for 2.4, this wouldn't be an issue. So I agree, just use 2.4, or more specifically, BT.1886
damian101
Silikone Even if the content is mastered for 2.2, you're essentially wasting data on shades that can't be seen, leaving less to smooth out the shades you actually can discern. If displays did what BT.1886 does for 2.4, this wouldn't be an issue. So I agree, just use 2.4, or more specifically, BT.1886
2023-12-04 02:40:07
Yes, that's why Bt.1886 > gamma2.2 > sRGB in terms of perceptual uniformity. But almost no consumer displays follow Bt.1886, because they don't want users to complain about crushed blacks. The vast majority of displays either use gamma2.2 or, nowadays less commonly, the sRGB transfer curve. And a big practical issue with targeting a gamma 2.4 EOTF: how do I tag the content? Bt.709? Lots of software assumes the Bt.709 recommended encoding gamma for live broadcast instead of the gamma 2.4 EOTF, and converts the transfer curve incorrectly. And I think there is no other way of reliably tagging 2.4 gamma in CICP. Unlike for gamma 2.2, thanks to Bt.470 System M. Bt.709 is somewhat ambiguous due to the existence of scene-referred and display-referred Bt.709 transfer curves. Similarly for sRGB: in practice lots of things are named sRGB that actually target different transfer functions. That's the nice thing about gamma2.2, you can pretty much trust the metadata to be correct.
Quackdoc
Silikone Even if the content is mastered for 2.2, you're essentially wasting data on shades that can't be seen, leaving less to smooth out the shades you actually can discern. If displays did what BT.1886 does for 2.4, this wouldn't be an issue. So I agree, just use 2.4, or more specifically, BT.1886
2023-12-04 02:41:18
I'm curious as to why you think a 2.4 curve is better than a 2.2 curve
w
2023-12-04 03:01:46
bt1886 2.4 is the best. Florian said so
DZgas Ж
2023-12-04 01:29:11
difficult question: how much functionality(%) is implemented in libjxl, relative to what jpeg xl itself can do
veluca
2023-12-04 01:49:25
I propose sin(pi^100)
2023-12-04 01:49:37
I don't think anybody knows 😉
yoochan
2023-12-04 02:33:09
seems to be negative : https://www.wolframalpha.com/input?i=sin%28pi%5E100%29
DZgas Ж
veluca I don't think anybody knows 😉
2023-12-04 06:03:12
devs 🙂
Jyrki Alakuijala
DZgas Ж difficult question: how much functionality(%) is implemented in libjxl, relative to what jpeg xl itself can do
2023-12-04 08:50:54
Delta palettization is interesting for some inputs and quality ranges. Vardct can be made 7.777777 % better.
a goat
2023-12-04 09:13:14
Are there any further optimizations I can do to JPEGs that won't add anything to decompression times, after compressing with guetzli?
veluca
yoochan seems to be negative : https://www.wolframalpha.com/input?i=sin%28pi%5E100%29
2023-12-04 09:14:51
ugh, I tried... I'll go with `sin(pi^(pi^(pi^pi)))^2`, pretty sure nobody knows *that* one (beyond it being in [0, 1])...
DZgas Ж devs 🙂
2023-12-04 09:16:28
I can assure you we don't know all of "what jpeg xl can do" 😛
Traneptora
veluca ugh, I tried... I'll go with `sin(pi^(pi^(pi^pi)))^2`, pretty sure nobody knows *that* one (beyond it being in [0, 1])...
2023-12-05 02:45:52
you can do it numerically, but it's not a rational multiple of pi so it's probably not something interesting
veluca
Traneptora you can do it numerically, but it's not a rational multiple of pi so it's probably not something interesting
2023-12-05 05:10:35
let me know if you manage, it's a known open problem 😛
2023-12-05 05:10:55
(possibly something similar)
_wb_
DZgas Ж difficult question: how much functionality(%) is implemented in libjxl, relative to what jpeg xl itself can do
2023-12-05 10:30:09
Decode side: all of it, maybe modulo some discrepancies between some spec limits and implementation limits. Encode side: 80% or so of the coding tools are used in one way or another by the encoder (the 20% is splines and super large VarDCT blocks, and also the things that are not used by default without using special experimental options, such as delta palette and noise). But the coding tools that are used, are typically used in a specific, limited way that doesn't come anywhere close to exhausting the bitstream expressivity.
DZgas Ж
2023-12-05 10:34:34
😵‍💫<:This:805404376658739230>
jonnyawsom3
2023-12-05 10:34:57
The key is that it's purely feature coverage, not how well those features are used, so there's still plenty to improve in both density and speed in the future for those coding tools (hopefully)
_wb_
2023-12-05 11:20:57
Nearly all encoder decisions in libjxl are made mostly heuristically, with relatively little exhaustive search. Exhaustive search would be computationally infeasible with a bitstream as expressive as jxl — imagine zopflipng cubed or worse.
veluca
2023-12-05 11:54:49
I am 99% sure that many encoder choices are NP-hard, even if just optimizing for PSNR or doing lossless, imagine for perceptual-lossy...
Traneptora
2023-12-08 05:50:31
The docstring says the quality range for `JxlEncoderDistanceFromQuality` is between `0.0` and `100.0`, but this is not actually enforced. You can pass it any value outside that range, and it only clamps on the upper side, at 100.0. If you pass it a value less than 0 it doesn't error out and can give you a massive distance. Should I open an issue about this?
2023-12-08 05:52:47
(For example, you can pass it a quality of -15.0 and it returns a distance of 46.225. This is probably not desirable.)
Moritz Firsching
2023-12-08 06:13:15
We assumed that people would use it to set a distance, and it should be impossible to set a distance of 46.225 both in the API and with cjxl.
Traneptora
Moritz Firsching We assumed that people would use it to set a distance, and it should be impossible to set a distance of 46.225 both in the API and with cjxl.
2023-12-08 07:15:48
sure, but if you use `JxlEncoderSetDistance(JxlEncoderDistanceFromQuality(-15.0))`, it doesn't make sense that this fails while it succeeds if you pass `115.0`
2023-12-08 07:16:47
Basically, why is it the API user's responsibility to feed the quality setting and then check to see if the distance is sane in between? It's strange that the function clamps on one side and not the other.
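A caller-side workaround, as a minimal sketch: clamp the quality to the documented [0.0, 100.0] range before converting, so a stray negative value can't produce a distance the encoder would reject anyway. The wrapper name is made up; `JxlEncoderDistanceFromQuality` is the libjxl function discussed above:
```c
#include <jxl/encode.h>

/* Hypothetical caller-side guard: clamp quality to the documented range
   before converting, since (as described above) the library only clamps
   at the top end. */
static float distance_from_quality_checked(float quality) {
  if (quality < 0.0f) quality = 0.0f;      /* avoid huge distances from negative input */
  if (quality > 100.0f) quality = 100.0f;  /* library already clamps here; kept for symmetry */
  return JxlEncoderDistanceFromQuality(quality);
}
```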
Moritz Firsching
2023-12-09 09:05:28
I agree that there is some inconsistency here (of course distance 0.0 is pretty special, whereas cutting off at distance 25.0 is pretty arbitrary). I suppose this inconsistency could be resolved either by also clamping negative values of "quality" to a distance of 25.0, or by returning negative distances for qualities over 100. To be clear, I don't think that last idea would be good, because negative distances don't make any sense if 0.0 means lossless. Another way to fix this would be to return an error from JxlEncoderDistanceFromQuality when the quality passed is outside [0.0, 100.0]. Perhaps always clamping would be nicer, then we don't have to think about how to do the error handling. Thanks for the feedback in any case.
Demiurge
2023-12-22 12:37:53
`djxl --disable_output` causes an Illegal instruction error consistently on my machine. 0.8 doesn't do this, so it's a regression.
Moritz Firsching
2023-12-22 10:06:00
Is this a duplicate of this or a newer problem? https://github.com/libjxl/libjxl/issues/2994
Demiurge
2023-12-22 02:33:48
It could be, but I haven't tried with a debug build, so all it says is "Illegal instruction"
2023-12-22 02:40:36
Does JXL_ABORT **still** murder the calling application in non-debug build instead of allowing the application to gracefully detect and recover from library errors?!
2023-12-22 02:58:28
I just double checked, and it looks like JXL_ABORT calls abort() on MSVC and __builtin_trap() on other platforms... This should not be the default behavior. It's a showstopper for Paint.NET and all other software that reasonably refuses to use libraries that murder the caller instead of allowing the caller to handle errors.
2023-12-22 03:04:24
The bug is located here: https://github.com/libjxl/libjxl/blob/main/lib/jxl/base/status.h#L132
2023-12-22 03:20:02
I would propose eliminating the usage of __builtin_trap entirely and wrapping abort() with `#ifdef JXL_CRASH_ON_ERROR`. I'm really surprised this is still a problem today, given the severity of the problem and the approaching 0.9 release
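A rough sketch of that proposal (not the actual libjxl macro): only kill the process when the library is explicitly built with `JXL_CRASH_ON_ERROR`, and otherwise just log so the surrounding code can propagate an error status to the caller:
```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: abort() is reserved for builds that opt in via
   JXL_CRASH_ON_ERROR; regular builds log and fall through so the caller
   can handle the failure. */
#ifdef JXL_CRASH_ON_ERROR
#define JXL_ABORT(msg)                            \
  do {                                            \
    fprintf(stderr, "libjxl error: %s\n", (msg)); \
    abort();                                      \
  } while (0)
#else
#define JXL_ABORT(msg)                            \
  do {                                            \
    fprintf(stderr, "libjxl error: %s\n", (msg)); \
  } while (0)
#endif
```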
Moritz Firsching
2023-12-22 03:40:48
Thanks for the investigation. We plan to work on the abort handling soon. Does the problem with --disable_output appear at current head or for version 0.9.0?
Demiurge
Moritz Firsching Thanks for the investigation. We plan to work on the abort handling soon. Does the problem with --disable_output appear at current head or for version 0.9.0?
2023-12-22 04:05:37
Yes, both current head and the v0.9.x branch last I checked
Moritz Firsching
2023-12-22 04:23:26
Weird, that should have been fixed by https://github.com/libjxl/libjxl/pull/2996
Demiurge
2023-12-22 04:33:28
Maybe I should make a debug build...
2023-12-22 04:43:45
Why would it use __builtin_trap in the first place? That makes it harder to distinguish it from a genuine illegal instruction error.
Moritz Firsching
2023-12-22 08:38:33
I don't know why this was used in the first place, and we plan to fix it. Checking with a debug build is certainly a good idea
lonjil
2023-12-22 09:54:56
Is there no way to trap other than trying to run an illegal instruction? 🤔
MSLP
2023-12-22 11:18:51
there's <stdlib.h> `abort()`; the GCC implementation of `__builtin_trap()` is documented to generate an illegal instruction or to call abort(). I didn't dig into the condition on which it uses one or the other.
2023-12-23 01:37:02
ok, found it: <https://gcc.gnu.org/onlinedocs/gcc/language-standards-supported-by-gcc/c-language.html> ``` Finally, if __builtin_trap is used, and the target does not implement the trap pattern, then GCC emits a call to abort. ``` So probably on most systems `__builtin_trap()` will emit an illegal instruction and the process will terminate with SIGILL, and only on some will `abort()` be used
a goat
2023-12-23 06:57:03
<@794205442175402004> If I wanted to force users of my site to decode the JPEG XL files on it, how bad of an idea would it be to send the WASM implementation over (https://github.com/niutech/jxl.js), in terms of both its parity with upstream and each browser's general ability to use WebAssembly? I'm hoping to play the long game in terms of support for JXL and start encoding all my files with it from the start
veluca
2023-12-23 07:08:37
you know you can compile libjxl to wasm, right?
2023-12-23 07:09:22
it's slower, but not terrible... and most browsers can use wasm nowadays (as in, everything up to date but IE and Opera Mini - or 97.04% - according to CanIUse)
Quackdoc
2023-12-23 07:10:07
I can actually somewhat answer this since I wired up gb32's site for him as well as wiring up the jxl-oxide WASM implementation on my own testing site (which I actually recommend over jxl.js). It's fine if you don't need more than a handful of smaller images simultaneously. It's obviously kind of slow, but the real issues are: A) for some reason, Chrome and Firefox pause rendering images while another is decoding. Decoding speed itself is sufficiently fast, but since rendering pauses, it can take a long time before any images are actually usable/viewable if you have a lot of images or a couple of large ones. B) if you are loading multiple images at once, problem A) becomes even more exacerbated, to the point where images can just give up loading, or in the case of jxl.js cause OOM and/or crash the browser (lazy loading can help). But again, if you are intentional about how the website actually works then it's fine. gb's site still uses it for a couple of pages such as [this one](https://giannirosato.com/blog/post/nvenc-v-qsv/) and it works perfectly fine on phones, but pushing the limits of what you can do is for sure something to be avoided
veluca
Quackdoc I can actually somewhat answer this since I wired up gb32's site for him as well as wiring up the jxl-oxide WASM implementation on my own testing site (which I actually recommend over jxl.js). It's fine if you don't need more than a handful of smaller images simultaneously. It's obviously kind of slow, but the real issues are: A) for some reason, Chrome and Firefox pause rendering images while another is decoding. Decoding speed itself is sufficiently fast, but since rendering pauses, it can take a long time before any images are actually usable/viewable if you have a lot of images or a couple of large ones. B) if you are loading multiple images at once, problem A) becomes even more exacerbated, to the point where images can just give up loading, or in the case of jxl.js cause OOM and/or crash the browser (lazy loading can help). But again, if you are intentional about how the website actually works then it's fine. gb's site still uses it for a couple of pages such as [this one](https://giannirosato.com/blog/post/nvenc-v-qsv/) and it works perfectly fine on phones, but pushing the limits of what you can do is for sure something to be avoided
2023-12-23 07:11:37
does what you did use web workers?
Quackdoc
2023-12-23 07:19:51
jxl.js I believe does. it's been a while since I really looked into it, as for jxl-oxide I don't think it was wired up to properly utilize them, I should look into that properly, but js is for sure not my forte and I havent had the time, I do have an updated demo site [here](https://github.com/Quackdoc/jxl-wasm/tree/master/demo) and the original work [here](https://gitlab.com/nitroxis/jxl-wasm)
jonnyawsom3
2023-12-23 11:37:49
One other little snag with the WASM decoders: on Chrome, multithreaded workers are enabled out of the box, while Firefox follows the spec and requires extra cross-origin security HTTP headers. Can catch you out if you don't already know
Demiurge
2023-12-23 11:38:04
I suggest the default effort setting for lossless be changed to 3, while lossy stays at 7. Lossless mode is much slower at the current default, and changing it would make libjxl compare favorably with other image codecs: users would be impressed with both the speed and the efficiency rather than put off by how much slower it is than PNG and other formats. At effort 3, lossless JXL easily outperforms everything else in both speed and compression ratio, and the size difference between 3 and 7 is typically pretty small.
Moritz Firsching
2024-01-05 09:09:28
No, it means that we didn't put the note in yet.
2024-01-05 09:10:17
added it, thanks for catching it
w
2024-01-17 03:08:08
check out the other activity on the account
_wb_
2024-01-17 03:21:59
Looks like a mental breakdown of some sort, I hope he finds proper help.
w
2024-01-17 03:22:34
or he's having a lot of fun with an LLM
_wb_
2024-01-17 03:31:23
In any case we obviously cannot accept spam in our github project. Here on discord I'm willing to allow a lot of nonsense (though fab managed to be annoying enough to get muted), but github is a Serious And Important Work Thing and spamming that with useless noise is just harmful.
2024-01-18 07:17:53
I reported his account to github and blocked it from the libjxl repo.
spider-mario
2024-01-20 10:55:15
do we want to clean up the spam he left, or do we keep it, for one reason or another?
monad
2024-01-20 11:20:32
Here? There is no benefit in keeping it, but the impact seems negligible. It's possible there's some admin function to delete all the latest messages from a user. Happened to me when I was kicked from a server, but it may have been a bot function.
spider-mario
2024-01-20 11:43:38
I was thinking of github but I guess here is also worth discussing
monad
2024-01-21 12:39:06
It appeared to me that github was already resolved by hiding comments.
username
2024-01-21 02:04:38
For recent messages or all messages? I'm against purging everything, because he has been in this server for years and there are older conversations that won't make as much sense with gaps in them. As for the recent messages specifically, I don't really have much of an opinion, though I will say they cross into being actual spam rather than just hard to understand.
Quackdoc
2024-01-21 02:08:31
the new messages can go; it's been around what, half a year or so since he started making no sense?
monad
2024-01-21 02:16:34
The impact on search is the only consideration, and it's certainly minimal. Almost nobody searches, and they shouldn't be hitting the spam directly if they're savvy enough to search at all. If an admin wants to spend their time deleting the recent, obvious spam, just do it, it's not controversial.
w
2024-01-21 04:10:50
don't want to lose gems like https://discord.com/channels/794206087879852103/794206170445119489/912667993677385758 https://discord.com/channels/794206087879852103/794206170445119489/910932163799703602 https://discord.com/channels/794206087879852103/803574970180829194/1071716881888710766
spider-mario
monad It appeared to me that github was already resolved by hiding comments.
2024-01-21 08:25:13
depends on whether you call this resolved:
jonnyawsom3
2024-01-22 02:03:28
When banning a user there's a prompt to delete messages from the past day, week, month, etc.
lonjil
2024-01-29 03:48:24
Can we disable the bot that posts reddit posts to <#805176455658733570>?
_wb_
2024-01-29 04:18:11
I'll change it so it only posts stuff from r/av1 that gets at least 10 upvotes in the first 15 minutes (currently it just posts everything from over there)
Quackdoc
2024-01-29 04:25:05
so nothing xD
_wb_
2024-01-29 06:36:07
Well maybe some big news will still get through, but yeah, not your average r/av1 post
Traneptora
_wb_ I'll change it so it only posts stuff from r/av1 that gets at least 10 upvotes in the first 15 minutes (currently it just posts everything from over there)
2024-01-29 07:06:17
should have no limit on `r/jpegxl` imo though, that should stay as-is
_wb_
Traneptora should have no limit on `r/jpegxl` imo though, that should stay as-is
2024-01-29 07:07:52
Yes, did it like that. r/jpegxl is unfiltered and immediate, r/av1 waits 15 minutes and only lets stuff with 10 upvotes through
monad
2024-02-03 02:55:56
That's not the same question, not even close. Banning people for lack of knowledge, skill or intelligence is not helpful. If \_wb_ were to implement that, he'd have to ban the whole server.
Traneptora
2024-02-03 03:52:48
That's not the same question, not even
novomesk
2024-02-09 10:04:11
Hello, I am trying to use JxlDecoderSetCms and JxlDecoderSetOutputColorProfile. I was hoping it would automatically convert CMYK images to RGB/sRGB, but it doesn't work for me that way. Should it work that way, or is it not implemented?
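For reference, a minimal sketch of the call sequence being described, assuming the 0.9-era libjxl decoder API; error handling and the placement relative to the decoder's event loop are simplified, and whether this actually converts CMYK input is exactly the open question here:
```c
#include <jxl/cms.h>
#include <jxl/color_encoding.h>
#include <jxl/decode.h>

/* Attach the default CMS and request sRGB output before running the
   decoder loop (simplified; JxlDecoderSetOutputColorProfile is normally
   called around the color-encoding event). */
static int request_srgb_output(JxlDecoder* dec) {
  JxlColorEncoding srgb;
  JxlColorEncodingSetToSRGB(&srgb, /*is_gray=*/JXL_FALSE);
  if (JxlDecoderSetCms(dec, *JxlGetDefaultCms()) != JXL_DEC_SUCCESS) return 0;
  if (JxlDecoderSetOutputColorProfile(dec, &srgb, /*icc_data=*/NULL,
                                      /*icc_size=*/0) != JXL_DEC_SUCCESS) {
    return 0;
  }
  return 1;
}
```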
_wb_
2024-02-09 11:48:33
It should, but it wouldn't surprise me if that's broken. File a bug pls!
novomesk
_wb_ It should, but it wouldn't surprise me if that's broken. File a bug pls!
2024-02-12 07:24:22
https://github.com/libjxl/libjxl/issues/3280
2024-02-19 01:31:00
I am just curious, why is highway 1.1.0 now required by libjxl main?
Jyrki Alakuijala
novomesk I am just curious, why is highway 1.1.0 now required by libjxl main?
2024-02-19 01:39:53
perhaps just for reducing combinations to test?
lonjil
2024-02-20 01:01:54
Are there any plans for the release date of jpegli? Last year I heard that it would be released as its own standalone thing early this year.
Moritz Firsching
novomesk I am just curious, why is highway 1.1.0 now required by libjxl main?
2024-02-20 07:08:50
not required anymore, and we added some testing to make sure current main stays compatible with 1.0.7 until that becomes a problem
Orum
2024-02-25 05:12:22
Can someone reopen this, now that we're approaching 2 years since it was closed and it still hasn't happened yet? https://github.com/libjxl/libjxl/issues/524
monad
2024-02-25 06:37:07
Just use `-v`.
afed
2024-03-03 12:46:33
cjxl e1 streaming lossless v0.10.1 clang 17.0.6 windows `2480 x 3508, geomean: 363.940 MP/s [243.25, 413.52], 50 reps, 8 threads.` `8192 x 5456, geomean: 465.166 MP/s [349.12, 498.07], 100 reps, 8 threads.` cjxl e1 streaming lossless v0.10.1 git windows-static `2480 x 3508, geomean: 72.879 MP/s [66.27, 74.95], 50 reps, 8 threads.` `8192 x 5456, geomean: 83.324 MP/s [74.94, 84.72], 100 reps, 8 threads.`
Orum
2024-03-09 02:37:26
So, for whichever devs want it, here's a collection of images where lossless <:JXL:805850130203934781> (0.10.0) does poorly, at least compared to webp/xz (note that xz was only tested on the two images that are included in that format): https://drive.google.com/file/d/1uECuCMjzlaONe9qYfteP56D1_GOFBwwo/view?usp=sharing
190n
2024-03-11 06:14:43
<@&807636211489177661>
jonnyawsom3
2024-03-11 06:21:06
Huh... Well at least the permissions are right for pings
_wb_
2024-03-11 07:29:20
banned, reported, deleted. Was crypto spam, for those wondering
A homosapien
2024-03-21 07:45:23
Encode_effort.md is not accurate anymore with respect to libjxl 0.10.x. Outside of the "brute force" effort now being 11, does VarDCT e10 do anything special outside of more Butteraugli iterations than e9? What about e9 vs e10 lossless? https://github.com/libjxl/libjxl/blob/main/doc/encode_effort.md
Traneptora
A homosapien Encode_effort.md is not accurate anymore with respect to libjxl 0.10.x. Outside of the "brute force" effort now being 11, does VarDCT e10 do anything special outside of more Butteraugli iterations than e9? What about e9 vs e10 lossless? https://github.com/libjxl/libjxl/blob/main/doc/encode_effort.md
2024-03-21 09:49:18
please open an issue, it'll be easier to keep track of it
A homosapien
Traneptora please open an issue, it'll be easier to keep track of it
2024-03-22 03:43:19
Done https://github.com/libjxl/libjxl/issues/3436
fab
2024-03-28 07:33:31
Add a d20 settings in the querys. Is good startx
2024-03-28 07:42:34
My phone crashed you can allow, though i did 2000 flickering+ crash test so i can't continue any-more
Vlad (Kuzmin) Erium
2024-03-29 01:28:54
can someone check https://github.com/libjxl/libjxl/pull/3442 small fix from vcpkg
spider-mario
2024-03-29 08:51:13
we’ve been meaning to try to understand why this is needed at all
2024-03-29 08:52:36
if you ```cmake add_library(mylib mylib.cc mylib.h) add_executable(main main.cc) target_link_libraries(main mylib) ``` and `mylib` is built dynamically because `BUILD_SHARED_LIBS=ON`, then cmake should also try to link `main` dynamically to it and it should work
2024-03-29 08:53:04
it doesn’t make sense that it would build dynamically and then try to link it statically
2024-03-29 08:53:55
forcing `mylib` (jxl_extras) to be static seems like a somewhat hacky workaround that doesn’t quite get to the root of the issue
2024-03-29 08:59:23
(https://cmake.org/cmake/help/latest/variable/BUILD_SHARED_LIBS.html)
2024-03-29 09:00:38
> Note that if bringing external dependencies directly into the build, such as with `FetchContent` or a direct call to `add_subdirectory()`, and one of those dependencies has such a call to `option(BUILD_SHARED_LIBS ...)`, the top level project must also call `option(BUILD_SHARED_LIBS ...)` before bringing in its dependencies. Failure to do so can lead to different behavior between the first and subsequent CMake runs. <:Thonk:805904896879493180> could that be it
2024-03-29 09:03:05
``` third_party/brotli/CMakeLists.txt 13:option(BUILD_SHARED_LIBS "Build shared libraries" ON) third_party/gflags/CMakeLists.txt 59:## - GFLAGS_BUILD_SHARED_LIBS 171:gflags_define (BOOL BUILD_SHARED_LIBS "Request build of shared libraries." OFF OFF) ```
2024-03-29 09:03:07
hm…
2024-03-29 09:04:26
oh, but we already have this ourselves
2024-03-29 09:04:34
do we have it early enough, though
2024-03-29 09:07:20
we do, that doesn’t seem to be it
Vlad (Kuzmin) Erium
2024-04-07 05:01:24
don't know, but without this fix (it's from vcpkg) jxl does not build on Windows (Visual Studio; almost all dependencies are also dynamic libraries).
Demiurge
2024-04-11 11:32:21
Why does cmake require a team of geniuses with years of training in order to use or comprehend it?
kkourin
spider-mario if you ```cmake add_library(mylib mylib.cc mylib.h) add_executable(main main.cc) target_link_libraries(main mylib) ``` and `mylib` is built dynamically because `BUILD_SHARED_LIBS=ON`, then cmake should also try to link `main` dynamically to it and it should work
2024-04-13 02:41:55
it doesn't work automatically on Windows. CMake assumes there's a mylib.lib import (thunk) library generated for mylib, but if mylib doesn't export anything (either by putting dllexport in declarations or providing a .def file), that import library will not be generated
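In other words, a generic illustration (not the actual jxl_extras headers): on Windows an import library is only produced if the DLL exports at least one symbol, typically via an export macro like the hypothetical one below, a .def file, or CMake's `CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS`:
```c
/* Hypothetical export macro for a library called "mylib": without
   __declspec(dllexport) on at least one symbol (or a .def file), MSVC
   produces the DLL but no mylib.lib import library, and linking fails. */
#if defined(_WIN32) && defined(MYLIB_BUILDING_DLL)
#define MYLIB_API __declspec(dllexport)
#elif defined(_WIN32)
#define MYLIB_API __declspec(dllimport)
#else
#define MYLIB_API
#endif

MYLIB_API int mylib_do_something(void);
```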
p0nce
2024-04-18 04:53:32
so, I translated j40.h to another language (found two mem leaks); overall it works like the original. But it cannot decode to RGBA and instead sets alpha to 255 (aka full opacity). Any idea where the additional channels may be merged, or any idea how it's coded? I don't understand much about JXL
2024-04-19 06:30:18
yes