|
yurume
|
2022-09-14 06:14:07
|
in order to actually decide if the alpha channel is not required, it will be necessary to implement `j40_info_from_image`
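
As context, a minimal sketch of what such a helper could look like (the name is taken from the message above, but the struct fields, parameters, and scanning approach here are hypothetical; the eventual J40 API may differ):

```c
/* Hypothetical sketch only: j40_info_from_image does not exist yet, and the
 * real J40 API may expose this information very differently. */
#include <stddef.h>
#include <stdint.h>

typedef struct {
    int32_t width, height;
    int has_alpha;        /* an alpha channel is present in the image */
    int alpha_is_opaque;  /* every alpha sample is fully opaque, so the
                             channel could be dropped on output */
} j40_image_info;

static void j40_info_from_image(const uint16_t *alpha, int32_t width,
                                int32_t height, uint16_t opaque,
                                j40_image_info *out) {
    out->width = width;
    out->height = height;
    out->has_alpha = (alpha != NULL);
    out->alpha_is_opaque = 1;
    if (alpha) {
        for (int64_t i = 0; i < (int64_t)width * height; ++i) {
            if (alpha[i] != opaque) { out->alpha_is_opaque = 0; break; }
        }
    }
}
```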
|
|
|
190n
|
|
improver
enable all the sans
|
|
2022-09-14 06:56:18
|
https://tenor.com/view/sans-fight-gif-26158536
|
|
|
yurume
|
|
improver
enable all the sans
|
|
2022-09-14 07:01:00
|
actually I wanted to enable ubsan, asan and msan all together but it is impossible afaik
|
|
2022-09-14 07:01:09
|
so, just one sans
|
|
2022-09-14 07:01:14
|
san?
|
|
|
improver
|
2022-09-14 12:33:13
|
one of sans, or san
|
|
2022-09-14 12:34:07
|
https://stackoverflow.com/questions/62109255/what-is-a-good-way-to-apply-multiple-sanitizers-in-cmake-with-gcc hmmm
|
|
|
yurume
|
2022-09-15 01:47:12
|
I toyed with ninja as a replacement for make on Windows (I never had a good time with make on Windows), as ninja can be distributed as a single executable. unfortunately it requires a separate configure step, and every attempt to make it self-configurable has failed so far.
|
|
2022-09-15 01:49:27
|
my idea was to have a bootstrapping `build.ninja` which would rewrite itself after configure. this seemed to work, but I realized that it would break after `ninja -t clean`
|
|
2022-09-15 01:53:57
|
maybe the solution is to ditch make and force everyone to use a custom script (say, `./build` or `build.cmd`) that internally configures if needed and calls make or ninja
|
|
2022-09-15 01:54:28
|
not really comfortable though
|
|
2022-09-15 01:55:13
|
or using ninja *only* on Windows and make elsewhere might be a solution, but then I'd expect non-Windows people would try to run ninja anyway
|
|
|
BlueSwordM
|
2022-09-15 03:01:30
|
Yea, and I like it a lot more than CMake.
|
|
|
spider-mario
|
2022-09-15 07:50:04
|
one of my projects uses Meson (https://github.com/sboukortt/intersect-lv2/blob/master/meson.build) and tup (https://github.com/sboukortt/intersect-lv2/blob/master/Tupfile)
|
|
2022-09-15 07:50:21
|
tup = https://gittup.org/tup/
|
|
|
yurume
|
2022-09-15 08:34:11
|
cmake and meson are meta-build systems (i.e. they don't do the build themselves but generate metadata for actual build systems) and I think they are too much for small projects like J40
|
|
2022-09-15 08:34:35
|
tup has a great design that influenced ninja later
|
|
|
improver
|
2022-09-15 08:39:15
|
why is gnu make bad on windows
|
|
2022-09-15 08:40:04
|
if you do it with mingw/msys2, it works fine in my experience
|
|
|
yurume
|
2022-09-15 10:00:49
|
iirc gnu make on windows didn't work well with windows `$SHELL` (aka cmd.exe)
|
|
|
0xC0000054
|
2022-09-15 11:13:47
|
GNU make requires a `*nix` shell with all of the included utilities; the Windows command prompt does not provide that environment.
MinGW/MSYS2 fixes that by bundling its own `*nix` shell environment and compiler toolchain (gcc, make, etc.) that outputs native Windows programs.
|
|
|
improver
|
2022-09-15 11:37:32
|
WSL is an actual linux env, so it's like apples vs oranges
|
|
|
yurume
|
2022-09-15 07:47:17
|
FYI: future J40 discussion will be placed in https://discord.com/channels/794206087879852103/1020056709831594005
|
|
|
improver
|
2022-09-15 07:59:16
|
configure a higher inactivity timeout if u still haven't
|
|
|
yurume
|
2022-09-15 08:00:01
|
oh thank you for the tip, is 1 week the maximum?
|
|
|
improver
|
2022-09-15 08:09:01
|
seems so...
|
|
|
Fraetor
|
2022-09-15 08:16:16
|
I think that is just after inactivity though, so someone posting will reset it.
|
|
|
diskorduser
|
2022-09-16 04:09:13
|
https://www.reddit.com/r/linux/comments/xfrf45/what_are_the_open_formats_i_can_use_to_support/
|
|
2022-09-16 04:09:21
|
no jxl 😦
|
|
|
Brinkie Pie
|
|
diskorduser
no jxl 😦
|
|
2022-09-17 05:44:10
|
not sure if it was edited since then, but "JPEG-XL" is (now) listed
|
|
|
diskorduser
|
2022-09-17 05:44:52
|
Yeah user blueswordm added it.
|
|
|
Demez
|
2022-09-18 08:48:16
|
the paywalled spec?
|
|
|
username
|
2022-09-18 08:53:27
|
yeah jpeg xl is standardized through ISO and they seem to have a paywall for anyone that wants to read the spec
|
|
|
GlassGhost
|
2022-09-18 08:06:40
|
What about Flif?
|
|
|
_wb_
|
2022-09-18 08:07:17
|
what about it?
|
|
2022-09-18 08:09:27
|
the flif 'spec' is here: https://flif.info/spec.html
|
|
2022-09-18 08:14:52
|
It's quite incomplete and not very rigorous, but then again FLIF was never really meant to be a 'serious' format in the sense of really getting adopted. To me it was more of a fun exercise and a research project than anything else — at least with hindsight; maybe at some point back in the time, I was taking it a bit more seriously than I should have.
|
|
|
GlassGhost
|
2022-09-18 09:38:25
|
Wait you're the author????
|
|
2022-09-18 09:40:30
|
Anyways I was just saying I think flif isn't paywalled and has compression comparable to jpegxl
|
|
2022-09-18 09:42:10
|
https://youtu.be/jnxqHcObNK4
|
|
2022-09-18 09:43:23
|
Mad respect if you are the author, I just joined this discord cause I saw this youtube video on wavelets and thought parts of JXL were related to or inherited from flif and maybe I could find more documentation
|
|
|
_wb_
|
2022-09-18 09:43:36
|
flif isn't paywalled since it's not officially standardized either 🙂
|
|
2022-09-18 09:44:08
|
yes, I worked on both flif and jxl and part of jxl is based on ideas from flif
|
|
2022-09-18 09:45:23
|
flif was made for lossless only though, most of lossy jxl is based on Pik
|
|
2022-09-18 09:47:31
|
(though in jxl, lossy and lossless are not completely separate modes, e.g. all the lossy control fields and also the DC image are encoded with lossless jxl, aka modular mode — but the main lossy data (AC coefficients) is encoded in a different way)
|
|
|
GlassGhost
|
2022-09-18 09:48:44
|
really; was impressed with that artifactless demo though
|
|
|
_wb_
|
2022-09-18 09:54:47
|
keeping generation loss as low as possible is imo crucial for digital image preservation in the long term — otherwise we'll end up with photos that 'fade away' over time
|
|
2022-09-18 09:55:38
|
the lossless transcoding of jpeg into jxl helps a lot in that respect
|
|
2022-09-18 09:57:45
|
it's quite interesting that both the pik team and me had that same insight — we both submitted a proposal to the JPEG XL call for proposals that included a lossless jpeg transcoding option (brunsli in the case of the pik team, fuif having similar functionality in my case), even though that wasn't part of the original requirements of the JPEG XL project as it was originally envisioned by the JPEG committee.
|
|
2022-09-18 10:00:29
|
now it almost feels strange to me that of all the "next generation" codecs that were supposed to replace jpeg — and there are quite a few of those — none of them had the idea to be 'backwards compatible' with jpeg in order to allow lossless jpeg transcoding, until jxl.
|
|
|
GlassGhost
|
2022-09-18 10:00:52
|
the original jpg is based on DCT right
|
|
|
_wb_
|
|
GlassGhost
|
2022-09-18 10:01:08
|
and what was flif based on again?
|
|
|
_wb_
|
2022-09-18 10:01:22
|
and all image and video codecs kind of still are based on DCT, except j2k which is based on DWT
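
As a rough illustration of what "based on DCT" means here, a naive 1-D DCT-II, the transform JPEG applies along the rows and columns of each 8x8 block (normalization omitted; real codecs use fast factorizations rather than this O(N²) loop):

```c
#include <math.h>
#include <stddef.h>

/* Naive, unnormalized 1-D DCT-II: out[k] = sum_i in[i] * cos(pi/n * (i + 0.5) * k).
 * Illustrative only; production codecs use fast, fixed-point factorizations. */
static void dct_ii(const double *in, double *out, size_t n) {
    const double pi = 3.14159265358979323846;
    for (size_t k = 0; k < n; ++k) {
        double sum = 0.0;
        for (size_t i = 0; i < n; ++i)
            sum += in[i] * cos(pi / (double)n * ((double)i + 0.5) * (double)k);
        out[k] = sum;
    }
}
```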
|
|
2022-09-18 10:02:00
|
flif doesn't do lossy transforms so it isn't really based on any transform
|
|
|
GlassGhost
|
2022-09-18 10:02:24
|
is it its own algo or is it based on something?
|
|
|
_wb_
|
2022-09-18 10:03:06
|
it mostly does entropy coding in a much more complicated, image-specific, and better way than what was done before — e.g. PNG basically just uses general-purpose compression on image data, with only some basic filtering
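
To make "only some basic filtering" concrete, here is the Paeth filter from the PNG spec, one of PNG's five per-scanline filters; the filtered residuals are then handed to general-purpose DEFLATE:

```c
#include <stdlib.h>

/* PNG's Paeth filter: pick whichever of left (a), above (b) or upper-left (c)
 * is closest to the linear gradient a + b - c, then store the residual
 * current_byte - prediction for DEFLATE to compress. */
static unsigned char paeth_predictor(unsigned char a, unsigned char b,
                                     unsigned char c) {
    int p  = (int)a + (int)b - (int)c;
    int pa = abs(p - (int)a);
    int pb = abs(p - (int)b);
    int pc = abs(p - (int)c);
    if (pa <= pb && pa <= pc) return a;
    if (pb <= pc) return b;
    return c;
}
```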
|
|
2022-09-18 10:03:49
|
it does have some transforms like a generalized palette (which is also in jxl) and YCoCg (which is also in jxl)
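
A sketch of a reversible YCoCg-R-style transform of the kind referred to here; the exact integer rounding FLIF and JXL use may differ from this, but the round trip below is exactly lossless:

```c
/* Lossless YCoCg-R-style decorrelation: rough sketch of the reversible
 * color transform idea; not necessarily the exact variant in FLIF or JXL. */
typedef struct { int r, g, b; } rgb_t;
typedef struct { int y, co, cg; } ycocg_t;

static ycocg_t rgb_to_ycocg(rgb_t p) {
    ycocg_t o;
    o.co = p.r - p.b;
    int t = p.b + (o.co >> 1);
    o.cg = p.g - t;
    o.y  = t + (o.cg >> 1);
    return o;
}

static rgb_t ycocg_to_rgb(ycocg_t p) {
    rgb_t o;
    int t = p.y - (p.cg >> 1);
    o.g = p.cg + t;
    o.b = t - (p.co >> 1);
    o.r = o.b + p.co;
    return o;
}
```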
|
|
|
GlassGhost
|
2022-09-18 10:04:43
|
thank you so much <@794205442175402004>
|
|
|
_wb_
|
2022-09-18 10:07:21
|
the main difference between flif and fuif/jxl is that flif has this funky interlacing scheme where pixels are traversed in an Adam7-like pattern which is very bad for memory locality and thus decode speed, but it allows one very nice thing: you can do prediction from almost all directions! The final half of the data are basically pixels that can be predicted from NW, N, NE, W, SW, S, SE — so 7 of the 8 neighboring pixels. That means they can be predicted and context modeled very well.
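
A simplified illustration of that point (not FLIF's actual predictor or context model): once the interlaced traversal has already decoded the surrounding rows and the column to the left, an interior pixel can be predicted from neighbors on almost every side:

```c
/* Simplified illustration, not FLIF's actual predictor: assumes an interior
 * pixel whose N, S, W and NW neighbors have already been decoded by the
 * interlaced traversal.  FLIF's real scheme uses more of the 7 available
 * neighbors plus a much richer context model on top of the prediction. */
static int predict_from_neighbors(const int *img, int stride, int x, int y) {
    int n  = img[(y - 1) * stride + x];
    int s  = img[(y + 1) * stride + x];
    int w  = img[y * stride + (x - 1)];
    int nw = img[(y - 1) * stride + (x - 1)];
    return (n + s + w + nw + 2) >> 2;  /* rounded average of the four */
}
```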
|
|
|
GlassGhost
|
2022-09-18 10:09:20
|
Have you seen the nile language by bret victor and dan amelang?
|
|
2022-09-18 10:10:53
|
http://www.vpri.org/pdf/tr2010003_PEG.pdf
|
|
2022-09-18 10:11:39
|
I believe that is the backend for nile, I think I found a scheme r5rs or r7rs implementation for the ometa backend but I lost the link
|
|
2022-09-18 10:13:35
|
http://tinlizzie.org/dbjr/high_contrast.html
|
|
2022-09-18 10:14:02
|
there it is compiled to javascript but I believe how they compiled it wasn't very formalized
|
|
2022-09-18 10:14:40
|
I believe bret victor did do a talk about how they were making it very parallel but I lost that link as well
|
|
|
yurume
|
2022-09-18 10:15:16
|
I don't see how that is related to image compression, are you talking about parallelization?
|
|
|
GlassGhost
|
2022-09-18 10:16:18
|
but the nile code looks beautiful, considering how much shorter they made the code: 400 lines compared to the 400,000 of the cairo text rendering library that chrome or webkit is using
|
|
|
yurume
I don't see how that is related to image compression, are you talking about parallelization?
|
|
2022-09-18 10:16:55
|
well the nile code could be used to write a dct for compressing images. edit: *nile language
|
|
|
_wb_
|
2022-09-18 10:20:04
|
Rendering vector graphics is kind of something different, but I do hope that one day we can create a format that includes both vector and raster graphics (and text).
|
|
2022-09-18 10:20:35
|
JXL splines are a start for that, but they're not expressive enough for all vector stuff
|
|
|
GlassGhost
|
2022-09-18 10:20:48
|
I don't know about a format, I think SVG and html have all those bases covered
|
|
|
yurume
|
2022-09-18 10:21:22
|
I think you should have linked to https://raw.githubusercontent.com/wiki/damelang/nile/socal.pdf , I was a bit confused what it exactly was
|
|
|
GlassGhost
|
2022-09-18 10:21:31
|
oh sorry
|
|
|
_wb_
|
2022-09-18 10:21:33
|
I think jxl has the potential to bridge the gap between authoring formats and delivery formats, by supporting lossless and layers etc
|
|
2022-09-18 10:22:00
|
It doesn't bridge the gap between raster and vector though
|
|
|
GlassGhost
|
2022-09-18 10:22:30
|
I wouldn't want to, I think you can specify raster from svg
|
|
2022-09-18 10:22:46
|
and have it embed the raster file in it.
|
|
|
_wb_
|
2022-09-18 10:22:49
|
Svg can do everything but it's very bloated
|
|
2022-09-18 10:23:18
|
Pdf can too, but it's also very bloated
|
|
|
GlassGhost
|
2022-09-18 10:23:49
|
yeah I think there's some huge html compression that is bidirectional though so you can get back to the original svg without any errors
|
|
|
yurume
|
2022-09-18 10:24:05
|
stream formalism is great, but technically speaking this can actually be done without a special language, can't it? any language with inlined iterators seems enough to implement that formalism.
|
|
|
_wb_
|
2022-09-18 10:24:16
|
I mean bloated feature-wise
|
|
2022-09-18 10:25:01
|
Implementing full svg requires implementing a browser, basically
|
|
2022-09-18 10:25:17
|
Same with pdf
|
|
|
GlassGhost
|
2022-09-18 10:25:35
|
<@268284145820631040> I'm not sure, I was just trying to get "nile"'s back end "ometa" working and failed
|
|
|
yurume
|
2022-09-18 10:28:33
|
stream or pipeline formalism is kinda natural for PL and compiler researchers, pretty much everything is linear
|
|
2022-09-18 10:29:18
|
(until it no longer is, especially with incremental compilation and IDE support)
|
|
2022-09-18 10:30:02
|
so I guess it's natural that PL researchers may see everything else similarly
|
|
|
Jyrki Alakuijala
|
|
GlassGhost
Anyways I was just saying I think flif isn't paywalled and has compression comparable to jpegxl
|
|
2022-09-20 01:34:09
|
FLIF compression can be roughly experienced in JPEG XL as 'modular' -- with most things improved, but also some weakened
|
|
2022-09-20 01:35:53
|
to put it shortly, FLIF/FUIF is about 50 % more bytes than the VarDCT mode of JPEG XL, so it isn't exactly competitive for photographs
|
|
2022-09-20 01:38:54
|
In JPEG XL we extended FLIF/FUIF with delta palettization (WebP lossless inspiration), static context trees instead of adaptive for faster decoding (for speed), with Alexander Ratushnyak's gradient predictor (gralic/qlic inspiration possibly), and with LZ77 with 2d short codes (WebP lossless inspiration)
|
|
2022-09-20 01:40:01
|
I consider that JPEG XL's modular mode fully supersedes FLIF, FUIF and WebP lossless and represents the best practical pixel-lossless format today
|
|
2022-09-20 01:40:14
|
but VarDCT is still more powerful for lossy photographs
|
|
2022-09-20 01:41:09
|
VarDCT's legacy is in the butteraugli/guetzli/brunsli/pik trajectory, possibly some ideas from webp lossless and brotli, too
|
|
2022-09-20 01:48:39
|
and also the formats are 'happily married' as VarDCT uses modular for DC and control fields
|
|
2022-09-20 01:49:43
|
webp lossless is a good format, too, and more mature
|
|
2022-09-20 01:50:15
|
I believe it is possible to model webp lossless compression as a jpeg xl stream, too, so in the end there shouldn't be big wins ever for webp lossless
|
|
2022-09-20 01:51:09
|
I tried to guide our side of lossless work to be more towards better compression of lossless photographs -- a domain where webp lossless is not so great
|
|
2022-09-20 01:51:43
|
I don't know if I was successful in that in the end, I feel we ended up building a pretty generic solution
|
|
2022-09-20 01:52:05
|
I think that Ratushnyak's predictor should be quite strong for photographs
|
|
|
_wb_
|
2022-09-20 01:54:53
|
it is — modular e3 is using just that predictor with a simple fixed ctx model and it produces pretty dense results on photographic images
|
|
2022-09-20 02:04:56
|
For compressing text, which is mostly repetitive patterns, jxl has two main methods: patches and lz77 with 2d distance codes. The limitation of lz77 is that groups are encoded independently, so you can't make references outside each (by default) 256x256 group. WebP has no groups; just like PNG it's row-based, not tiled, which is bad for parallel decode but good for horizontally repetitive patterns like text. Another thing is that we never really figured out how to effectively combine MA tree learning with lz77, so the current encoder basically does one or the other but not both — which is obviously suboptimal. You can try if `cjxl -m 1 -I 0 -P 0 -e 9 -g 3 --patches=0` produces a better result: that sets the group size to 1024x1024 (to allow more matching to happen), disables MA tree learning so the encoder will go all the way for lz77, and disables prediction and patches since those can do more harm than good when lz77 matching is the main tactic.
|
|
2022-09-20 02:11:26
|
Patches don't have a group boundary limitation: they're encoded in a separate invisible frame and can then be referenced anywhere. The main problem there is that it's not easy to really detect all repeating patterns in an image in a way that is still reasonably fast. We currently have some heuristics that tend to detect quite a lot of text in screenshots, but the heuristics will certainly miss some opportunities. Better patch heuristics could lead to substantial improvements for images with lots of text — or other repetitive patterns like icons or avatars.
|
|
|
fab
|
2022-09-20 02:29:23
|
Thanks.
|
|
|
_wb_
|
2022-09-21 05:00:29
|
Lossy modular with no MA tree learning is a combination that isn't implemented. But we should do something else than just fail with a cryptic error. Could you open an issue?
|
|
|
yurume
|
2022-09-27 12:33:57
|
TIL: I learned that qcms has been completely translated to Rust months ago https://github.com/FirefoxGraphics/qcms/
|
|
|
|
paperboyo
|
|
yurume
TIL: I learned that qcms has been completely translated to Rust months ago https://github.com/FirefoxGraphics/qcms/
|
|
2022-09-27 12:50:17
|
Interesting. Because some of the arguments against being colour-correct were about performance…
|
|
|
The_Decryptor
|
2022-09-30 12:23:59
|
It's just a normal JPEG at the end
|
|
2022-09-30 12:24:15
|
It does all the bracketing and tone mapping and writes out a SDR result
|
|
|
_wb_
|
2022-09-30 05:10:07
|
It's basically capturing HDR to produce a better SDR image
|
|
2022-09-30 05:12:25
|
Though there also exists something like "HDR JPEG" which is based on JPEG XT. It's a tone mapped SDR jpeg that also contains extra info to reconstruct a HDR image.
|
|
|
Brinkie Pie
|
2022-09-30 07:24:54
|
to be fair, there's a benefit to mapping HDR to SDR: you theoretically get more detail in both light and dark spots.
|
|
|
_wb_
|
2022-09-30 08:25:38
|
yes, though tone mapping is also an artistic choice, so if it's done by an algorithm and then set in stone, that's a bit of a pity compared to getting a real HDR image where you can still adjust those artistic choices
|
|
2022-09-30 08:26:24
|
(not to mention that you may also want to display the image in actual HDR instead of tone-mapped SDR)
|
|
|
Jyrki Alakuijala
|
2022-10-04 02:21:08
|
github.com/libjxl/libjxl is more up-to-date
|
|
|
Traneptora
|
2022-10-04 02:27:59
|
SIGILL is how asserts fail in libjxl
that said that's a *very* old version
|
|
|
VcSaJen
|
2022-10-10 12:44:00
|
Is there any way to simulate Wide Gamut or HDR on a regular screen on Windows?.. Like, in this simulation SDR images would lack contrast, normal gamut images would look gray-ish, while HDR and wide-gamut ones would look normal.
Perhaps a virtual monitor in a virtual machine?
|
|
|
spider-mario
|
2022-10-10 12:57:09
|
while it is (unfortunately) not possible to extend the gamut and dynamic range of the screen, one possibility is to take an existing image, make it even more narrow-gamut / low-dynamic-range, and then compare that with its normal version to illustrate the “direction” of the improvement
|
|
2022-10-10 12:59:45
|
e.g. normal image:
|
|
2022-10-10 01:00:01
|
artificially lower dynamic range:
|
|
2022-10-10 01:00:25
|
flipping between the two gives an idea of the sort of improvement that HDR can bring
|
|
|
VcSaJen
|
2022-10-10 01:08:58
|
> narrow-gamut
Yes, that's the idea
> image
I'm not talking about an image file, but about debugging an application.
|
|
|
spider-mario
|
2022-10-10 02:45:32
|
ah, debugging is indeed another matter
|
|
2022-10-10 02:46:24
|
I have a suspicion that finding a cheap HDR monitor might be quite a bit easier than managing the various hacks to simulate one
|
|
|
Kagamiin~ Saphri
|
|
_wb_
For compressing text, which is mostly repetitive patterns, jxl has two main methods: patches and lz77 with 2d distance codes. The limitation of lz77 is that groups are encoded independently, so you can't make references outside each (by default) 256x256 group. WebP has no groups; just like PNG it's row-based, not tiled, which is bad for parallel decode but good for horizontally repetitive patterns like text. Another thing is that we never really figured out how to effectively combine MA tree learning with lz77, so the current encoder basically does one or the other but not both — which is obviously suboptimal. You can try if `cjxl -m 1 -I 0 -P 0 -e 9 -g 3 --patches=0` produces a better result: that sets the group size to 1024x1024 (to allow more matching to happen), disables MA tree learning so the encoder will go all the way for lz77, and disables prediction and patches since those can do more harm than good when lz77 matching is the main tactic.
|
|
2022-10-14 04:16:16
|
patches are very powerful... I tried compressing screenshots with lots of text using lossy modular and high butteraugli distance... and it's clear how stuff like text in titlebars becomes messy, but something like a wikipedia article stays quite good even at very high butteraugli distances
|
|
|
_wb_
|
2022-10-14 04:45:43
|
thanks to <@179701849576833024>'s clever patch detection heuristics — though of course there's always still room for improvement 🙂
|
|
|
Traneptora
|
2022-10-14 04:47:10
|
when did these get implemented?
|
|
2022-10-14 04:47:16
|
these heuristics
|
|
|
_wb_
|
2022-10-14 04:48:16
|
quite a while ago
|
|
2022-10-14 04:48:46
|
they work reasonably well for text on a solid background
|
|
|
Traneptora
|
2022-10-14 04:50:57
|
I wonder how well they work for synthetic images, like anime-screenshots
|
|
2022-10-14 04:51:10
|
are they only used for lossless and/or modular, or are they also used for VarDCT?
|
|
|
_wb_
|
2022-10-14 05:13:57
|
Also for vardct
|
|
2022-10-14 05:15:33
|
But the current heuristic requires at least two repetitions per patch, which happens for text but not for anime
|
|
|
Traneptora
|
2022-10-14 06:39:07
|
Is there any benefit to near-repetitions? like say patch and then encode difference on another layer with blend add
|
|
|
_wb_
|
2022-10-14 06:52:26
|
In lossy mode it does that, but the threshold for being considered close enough is quite strict
|
|
2022-10-14 06:53:19
|
The encoder uses kAdd blending for the patches and subtracts them, so no need for another layer, the residuals just remain on the main layer
|
|
2022-10-14 06:54:15
|
The patch itself gets encoded in modular mode, the residuals with vardct if that's the main layer mode
|
|
2022-10-14 06:55:25
|
So if the residuals are small, they might get quantized away, depending on the quality setting
|
|
|
|
KammutierSpule
|
2022-10-23 04:52:25
|
Hello all, I'm learning about JPEG XL to participate in the next Inercia Demoparty!
Where can I learn more about the properties and predictors?
|
|
|
_wb_
|
2022-10-23 05:01:23
|
Scroll up in <#824000991891554375> 🙂
|
|
2022-10-23 05:02:05
|
Also the help screen of https://jxl-art.surma.technology/
|
|
|
|
KammutierSpule
|
2022-10-23 05:06:32
|
oh thanks
|
|
|
Jyrki Alakuijala
|
|
KammutierSpule
Hello all, I'm learning about JPEG XL to participate in the next Inercia Demoparty!
Where can I learn more about the properties and predictors?
|
|
2022-10-26 07:42:49
|
there are many things in JPEG XL that are not part of the jxl-art tool, too
|
|
2022-10-26 07:44:10
|
predictors here: https://github.com/libjxl/libjxl/blob/main/lib/jxl/modular/encoding/context_predict.h#L199
|
|
|
spider-mario
|
2022-10-30 11:01:41
|
> I've said it before and I'll say it once more: JPEG-XL offers reversible recompression of JPEG, not lossless. If it were lossless, it would decode bit-identical to what the libjpeg decoder outputs, but this is not the case.
|
|
2022-10-30 11:01:42
|
what
|
|
|
_wb_
|
2022-10-30 11:09:19
|
this person is confused about different jpeg decoders producing different results and somehow thinks libjpeg-turbo (with default options, I suppose) is the only correct way to decode a jpeg
|
|
|
yurume
|
2022-10-30 11:33:15
|
i.e. this person is comparing original image -> jpeg -> rendered image vs. original image -> jpeg -> jxl -> rendered image
|
|
2022-10-30 11:33:43
|
the only thing jxl guarantees is that original image -> jpeg -> jxl -> jpeg is identical
|
|
2022-10-30 11:34:32
|
different jpeg implementations can result in different rendered images, even though the difference is imperceptible and probably +/- 1
|
|
|
|
JendaLinda
|
2022-10-30 11:49:12
|
Once the image is a JPEG, there's no exactly right way to decode it.
|
|
2022-10-30 12:01:44
|
Another common case is PNG with alpha semi-transparency. Different programs may produce slightly different results because the alpha blending is not always done exactly in the same way.
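
A sketch of the standard source-over blend for one 8-bit channel; the small choices hidden in it (rounding vs. truncation, blending gamma-encoded vs. linear values, straight vs. premultiplied alpha) are exactly where different programs diverge:

```c
/* Source-over blend of one 8-bit channel with straight (non-premultiplied)
 * alpha.  Whether a program rounds or truncates here, and whether it blends
 * gamma-encoded or linear values, is enough to make the same translucent
 * image render slightly differently across viewers. */
static unsigned char blend_over(unsigned char fg, unsigned char bg,
                                unsigned char alpha) {
    unsigned int num = (unsigned int)fg * alpha + (unsigned int)bg * (255u - alpha);
    return (unsigned char)((num + 127u) / 255u);  /* divide by 255 with rounding */
}
```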
|
|
2022-10-30 12:03:48
|
One program may even render the same partially transparent image in two different formats (for example PNG versus lossless WebP) slightly differently.
|
|
|
_wb_
|
2022-10-30 01:07:33
|
Ugh, must be something wrong with it indeed
|
|
2022-10-30 01:13:32
|
Looks like the bot was upgraded and somehow lost its reddit feeds, fixed it
|
|
|
spider-mario
what
|
|
2022-10-30 03:43:45
|
That discussion was getting silly, I hope my clarification will help: https://www.phoronix.com/forums/forum/phoronix/latest-phoronix-articles/1354717-google-chrome-is-already-preparing-to-deprecate-jpeg-xl?p=1354918#post1354918
|
|
|
fab
|
2022-10-30 03:47:56
|
google don't make you decide
|
|
2022-10-30 03:48:24
|
i was watching a song in vp9 4k on ytm on a 5.8 mbps connection
|
|
2022-10-30 03:48:42
|
it interrupted the song at 2:10
|
|
2022-10-30 03:48:47
|
with their ads
|
|
2022-10-30 03:48:53
|
two ads non skippable
|
|
2022-10-30 03:49:22
|
then they switched to 144p in the video i was watching
|
|
2022-10-30 03:49:34
|
then immediately 4k
|
|
2022-10-30 03:50:36
|
also google takeout includes nothing of your youtube music data, not even your liked songs
|
|
|
Jim
|
2022-10-30 04:55:21
|
> Is there a work-around via plugins? I wonder how fast a web assembly-written JPEG-XL decoder would be.
Might be time to create a standard Media Codec API so formats can be added via browser extensions as they are getting increasingly stingy on adding new ones and political gatekeepers for the browsers that can add them. I would also suggest browsers add some type of database of formats so that when they encounter these unknown formats they can recommend the verified extension that supports it - similar to what they did with Flash years ago.
|
|
|
diskorduser
|
2022-10-31 05:27:32
|
what's happening here. it does not encode.
|
|
2022-10-31 05:48:39
|
Something wrong with cjxl from opensuse. it does not encode any files.
|
|
2022-10-31 05:49:44
|
No. did not test nightly yet. going to download nightly.
|
|
2022-11-01 02:38:24
|
Should I try asking jxl support in this app? They have avif already.
https://github.com/SimpleMobileTools/Simple-Gallery/issues/1143
|
|
2022-11-01 02:38:50
|
It's an android gallery application
|
|
2022-11-01 02:57:28
|
Just installed that app. Avif doesn't work on Android 11 as expected. 😦
|
|
2022-11-01 02:58:50
|
I'm going to learn android development and implement jxl 😭
|
|
|
190n
|
2022-11-01 03:39:29
|
jxl in a gallery would be amazing since i could convert all my old jpegs losslessly
|
|
2022-11-01 03:39:41
|
but it would probably be worse if the rest of the system didn't support jxl
|
|
2022-11-01 03:40:02
|
now, if the gallery could also transparently decode jxl to jpeg and share the jpeg version to other apps... 👀
|
|
|
DZgas Ж
|
2022-11-01 01:07:33
|
wow jpeg XL now in firefox 108
|
|
2022-11-01 01:07:51
|
|
|
|
yurume
|
2022-11-01 01:47:24
|
nightly?
|
|
|
DZgas Ж
|
|
yurume
nightly?
|
|
2022-11-01 01:47:45
|
yes
|
|
|
Jyrki Alakuijala
|
2022-11-02 10:06:56
|
that's great news -- ah, nightly, it has been there for a while
|
|
2022-11-02 10:07:20
|
I noticed that the highest-starred chrome blink bug ever has about 2900 stars
|
|
2022-11-02 10:08:08
|
if we could get 3000 stars on the JPEG XL bug, it would be the most community interest ever on Chrome Blink subsystem
|
|
|
username
|
2022-11-02 10:09:13
|
with the rate that it's growing I really don't see it hitting that number sadly
|
|
|
Jyrki Alakuijala
|
2022-11-02 10:09:27
|
it would possibly reduce the impact of the '- There is not enough interest from the entire ecosystem to continue experimenting with JPEG XL' argumentation in comment 84 here: https://bugs.chromium.org/p/chromium/issues/detail?id=1178058#c84
|
|
2022-11-02 10:09:54
|
we would need to ask colleagues to ask colleagues to consider it
|
|
2022-11-02 10:11:34
|
top 10 would be about 1000 stars
|
|
2022-11-02 10:11:52
|
I think we got ~150 stars yesterday -- of course it was with quite a bit of media coverage
|
|
2022-11-02 10:12:32
|
they list four reasons in comment 84
|
|
2022-11-02 10:13:04
|
1. the flag is experimental (not really a reason for removal)
|
|
2022-11-02 10:13:57
|
2. not enough interest (we can improve here with stars -- in practice bugs above 32 stars are statistically likely to make it, and usually 100+ stars show interest, but we can make this more clear)
|
|
2022-11-02 10:14:56
|
3. JPEG XL is incremental -- I think they measure with PSNR and SSIM and similar, they don't have a process with evaluation by human viewing
|
|
2022-11-02 10:18:43
|
4. Maintenance burden (((seems inflated, so far JPEG XL code has produced quite a bit of free work in Chrome when we have been solving systemic HDR and colorspace issues in Chrome for free, and we solved them for AVIF and others, too -- if we look at mature externally maintained codecs like JPEG, PNG and GIF, the maintenance work is not a substantial effort)))
|
|
2022-11-02 10:19:32
|
if we can drop the 2nd concern in a way that it becomes somewhat ridiculous -- like having it the highest starred bug ever -- we can focus more on arguing on 3rd and 4th topic
|
|
2022-11-02 10:21:44
|
yeah, it would be great if stars and comments come from the industry
|
|
2022-11-02 10:22:07
|
of course it can be that there are not 3000 image compression experts in the world 🙂
|
|
2022-11-02 10:23:17
|
it is still our moment to express support -- staying silent doesn't seem responsible either and starring is not a huge attack on their system 😛
|
|
2022-11-02 10:26:08
|
Chrome people look at the stars and they have observed that bugs above 32 stars get fixed, their tools allow sorting by stars
|
|
2022-11-02 10:26:36
|
so they will see it in prioritization and they will keep scratching their heads with a higher star count -- at least until it is closed
|
|
|
fab
|
2022-11-02 10:27:25
|
So when improvements in red artifacts retention
|
|
|
Jyrki Alakuijala
|
2022-11-02 10:27:48
|
if they choose not to fix the highest (or say top 10 ever) starred bug, it will likely become a legend that they will talk about when talking about prioritization based on stars in general -- that can help in reviving it later
|
|
|
fab
|
2022-11-02 10:27:48
|
Can small improvements in ssimulacra2 help it?
|
|
2022-11-02 10:28:03
|
For low fidelity jpg images?
|
|
|
Jyrki Alakuijala
|
2022-11-02 10:28:40
|
chrome/aom does not use ssimulacra2, butteraugli or dssim. they use psnr, ssim, ms-ssim, and vmaf
|
|
|
fab
|
2022-11-02 10:29:01
|
Continue
|
|
|
Jyrki Alakuijala
|
2022-11-02 10:29:03
|
they don't have plans or capacity to take in new metrics
|
|
|
fab
|
2022-11-02 10:29:11
|
What's next
|
|
|
Jyrki Alakuijala
|
2022-11-02 10:29:21
|
I think three things:
|
|
|
fab
|
2022-11-02 10:29:46
|
For Stefania scordio image can it improve red artifacts
|
|
|
Jyrki Alakuijala
|
2022-11-02 10:30:05
|
1. port enough stuff from JPEG XL to old school JPEG so that it becomes slightly better than AVIF -- to show how good the stuff is and to avoid the internet to be spoiled by oversmooth photography
|
|
|
fab
|
2022-11-02 10:30:06
|
Why do some images get denoised in the reds by JPEG XL
|
|
2022-11-02 10:30:25
|
I know every image is different
|
|
2022-11-02 10:30:38
|
You can't do every image perfect
|
|
|
Jyrki Alakuijala
|
2022-11-02 10:30:38
|
2. wasm implementations (both HDR canvas and service worker thread versions)
|
|
2022-11-02 10:31:52
|
3. build more momentum: more awareness, build larger ecosystem, prepare for HW encoding, build a MJPEG XL possibly on libjxl-tiny, more visibility to the quality differences
|
|
2022-11-02 10:32:08
|
reds and magentas are difficult
|
|
2022-11-02 10:32:46
|
they can vary on different monitors
|
|
2022-11-02 10:33:01
|
making it really complicated
|
|
2022-11-02 10:33:24
|
matching the intensity of reds with the intensity of grays is important psychovisually, but poorly calibrated equipment make it challenging
|
|
2022-11-02 10:33:38
|
try to view the image with different equipment and see if that is the issue here
|
|
2022-11-02 10:34:03
|
compress with JPEG1 and AVIF to see if JPEG XL does better or worse at the same bit rate
|
|
2022-11-02 10:34:12
|
if worse, file a bug on libjxl
|
|
|
fab
|
|
Jyrki Alakuijala
compress with JPEG1 and AVIF to see if JPEG XL does better or worse at the same bit rate
|
|
2022-11-02 10:38:07
|
AVif has old encoder
|
|
2022-11-02 10:38:34
|
The libavif appveyor does not have 3.5.0-349
|
|
|
|
afed
|
2022-11-02 10:39:01
|
> 2. not enough interest
I think 3000 stars is unrealistic, but there is very active discussion in the news, blogs and various platforms, and also interest from big players such as Facebook, Instagram, WhatsApp (and some other messengers), some CDNs, Shopify, Adobe, etc. and that is already a big internet traffic and huge number of users of these products
> 3. JPEG XL is incremental -- I think they measure with PSNR and SSIM and similar
or the comparison was done mostly for low bpp, since there is a prevailing opinion that the web only needs the lowest bpp at very high compression
> 4. Maintenance burden
however, it is a pity that there is no full time team just for Jpeg XL, then bugs and vulnerabilities would be found and fixed faster and there would be more quality improvements in the meantime
and also the problem that could arise is that Google would ask the people who worked on Jpeg XL to drop work on it completely (although there is still free time, but maybe that wouldn't be welcome either)
|
|
|
Jyrki Alakuijala
|
2022-11-02 10:42:03
|
we have a full time team just for JPEG XL
|
|
2022-11-02 10:42:24
|
it is financed outside of Chrome org
|
|
2022-11-02 10:43:03
|
we can make a commitment to Chrome on it
|
|
|
Jim
|
|
Jyrki Alakuijala
1. port enough stuff from JPEG XL to old school JPEG so that it becomes slightly better than AVIF -- to show how good the stuff is and to avoid the internet to be spoiled by oversmooth photography
|
|
2022-11-02 10:43:32
|
I started working on a wasm polyfill for browsers now, may also work on node but there are plenty of node tools already so it's not my priority. How much would you actually be able to add to JPEG? I would think they wouldn't want a lot of breaking changes at this point.
|
|
|
Jyrki Alakuijala
|
|
Jim
I started working on a wasm polyfill for browsers now, may also work on node but there are plenty of node tools already so it's not my priority. How much would you actually be able to add to JPEG? I would think they wouldn't want a lot of breaking changes at this point.
|
|
2022-11-02 10:44:56
|
I'd recommend to start such a wasm polyfill together with Moritz, we have invested a bit into this and have quite clear ideas on how to proceed -- I'd love to maintain control over this since we were bitten by a volunteer effort in Brotli+nginx, where the volunteer became unresponsive for three years 😛
|
|
2022-11-02 10:45:35
|
it could be a libjxl project from my point of view, then we would have enough control over it
|
|
2022-11-02 10:46:02
|
I would feel a bit uncomfortable promoting a fully external solution for this as the only solution
|
|
|
Jim
I started working on a wasm polyfill for browsers now, may also work on node but there are plenty of node tools already so it's not my priority. How much would you actually be able to add to JPEG? I would think they wouldn't want a lot of breaking changes at this point.
|
|
2022-11-02 10:47:08
|
we are able to improve high end (q95) JPEGs by 8 % and q90 by 7 % or so in decoding only
|
|
2022-11-02 10:47:25
|
in a backward compatible way, and conforming to the JPEG1 spec
|
|
2022-11-02 10:48:01
|
decoding improvements are 25 % or so in q85 and above, but sharply declining for lower quality
|
|
|
Jim
|
2022-11-02 10:48:02
|
That's fine, didn't know if you wanted it to be under libjxl project anyway. <@795684063032901642> if you want to reach out, would love to go over that.
|
|
|
Jyrki Alakuijala
|
2022-11-02 10:48:54
|
the main challenge in wasm is the HDR
|
|
2022-11-02 10:49:02
|
there are two ways on how to get it done
|
|
2022-11-02 10:49:08
|
HDR canvas
|
|
2022-11-02 10:49:28
|
48-bit/64-bit PNG conversion
|
|
2022-11-02 10:51:20
|
HDR canvas is just emerging, and it is now behind a flag in Chrome -- some people are thinking that the first version of the spec will be ready this winter
|
|
2022-11-02 10:52:02
|
the canvas solution gives more opportunities for progressive rendering while the PNG solution requires less new features from the browser
|
|
2022-11-02 10:52:33
|
it does require that 16-bit per channel PNGs can be HDR rendered
|
|
|
Jim
|
2022-11-02 10:53:28
|
It looks like it's available in all but Firefox, or is this not what you are referring?
https://caniuse.com/?search=canvas%20colorspace
|
|
|
|
afed
|
2022-11-02 10:53:33
|
also, is there any way to further improve butteraugli where this metric is behind ssimulacra2?
|
|
|
Jyrki Alakuijala
|
2022-11-02 10:53:42
|
Overall the 7+25 % gives a 30 % improvement. This makes JPEG1 more powerful than AVIF in the q90+, possibly q85+ range
|
|
|
afed
also, is there any way to further improve butteraugli where this metric is behind ssimulacra2?
|
|
2022-11-02 10:54:51
|
this is possible -- but we don't know for sure, we don't have experience on this -- but it seems to me that image quality is not the big issue right now
|
|
2022-11-02 10:55:17
|
metrics and compression interact in weird ways, and butteraugli has some adjustments that make it less good for metrics and better for controlling image compression
|
|
|
Jim
|
2022-11-02 10:55:32
|
That's awesome that you can get that much improvement out of the current format.
|
|
|
Jyrki Alakuijala
|
2022-11-02 10:55:57
|
yes, it feels a bit bizarre that it is possible, but we have already published those tools in libjxl
|
|
2022-11-02 10:56:28
|
we are refining and packaging them now and considering to wrap them in a libjpeg-turbo compatible API
|
|
2022-11-02 10:56:47
|
if we could replace libjpeg/libjpeg-turbo then it would be faster to deploy
|
|
2022-11-02 10:57:21
|
those APIs are relatively leaky and have loads of features, so it needs some work that Zoltan is doing now
|
|
2022-11-02 10:57:55
|
if we can use ICCv4, we can even use the XYB colorspace there
|
|
2022-11-02 10:58:30
|
but it makes JPEGs look rather funny when there is no ICCv4 support or the support is failing in some way
|
|
2022-11-02 10:59:14
|
We use chroma subsampling only in B-channel, and have X and Y without subsampling -- that gives some more speed without the need to compromise quality
|
|
|
Jim
|
2022-11-02 10:59:41
|
Like the image that turns green on Firefox 😆
|
|
|
Jyrki Alakuijala
|
2022-11-02 11:00:26
|
It is bit of a coin toss, but I'd say we are three months away from it
|
|
2022-11-02 11:00:50
|
ah, decode only three months away
|
|
2022-11-02 11:01:16
|
we will very likely stage the launch so that decode is in the first phase and encode in the second
|
|
2022-11-02 11:02:09
|
I don't understand this question
|
|
2022-11-02 11:02:55
|
the newly encoded JPEGs are fully compatible -- we can do without ICC, with ICCv2, with ICCv4/XYB
|
|
2022-11-02 11:03:35
|
XYB gives likely an additional 5-10 % or so, but the savings are in the 20+ % range without it, too
|
|
2022-11-02 11:04:26
|
if we replicate the libjpeg API, then linux distribution maintainers can choose our libjpegli instead of another library, and all apps get it instantly
|
|
|
Jim
|
2022-11-02 11:05:07
|
Oh, you mean is the API the same as libjpeg?
|
|
2022-11-02 11:06:15
|
That was just answered 😆
|
|
2022-11-02 11:07:28
|
Regardless, even if it wasn't it would likely be rather trivial to write a middle-man package to translate just like the pulse -> pipewire packages do.
|
|
|
Jyrki Alakuijala
|
2022-11-02 11:08:04
|
we wouldn't be able to generate ICCv4 by default from the same API, I think, that would likely be observed as a violation of the API contract
|
|
2022-11-02 11:08:40
|
so, some additional steps are necessary
|
|
2022-11-02 11:08:48
|
getting all details right is going to be quite some effort
|
|
2022-11-02 11:09:09
|
of course it would be much easier to just launch JPEG XL and have all the good stuff in a simpler package 🙂
|
|
2022-11-02 11:09:29
|
but not all forces of the universe agree with that plan
|
|
2022-11-02 11:10:10
|
JPEG1 has its strengths, too, particularly the speed and memory bandwidth use of decoding are nice
|
|
2022-11-02 11:11:09
|
and the more we compress in encoding, the faster the decoding
|
|
|
Jim
|
2022-11-02 11:11:41
|
True, but also 30 years old. Compression has evolved since then.
|
|
|
Jyrki Alakuijala
|
2022-11-02 11:14:03
|
yes, like being able to recompress it 20-24 % is indicative that not all things went right 🙂
|
|
2022-11-02 11:15:26
|
also what were they thinking not to add adaptive quantization -- it was a huge mistake -- even just one bit per MCU (with RLE or similar) to double every AC coefficient would make the format ~10 % better
|
|
|
Jim
|
2022-11-02 11:20:02
|
I'm actually surprised there is not a polyfill yet. A few libraries get about 90% there but don't go the last step. I considered using vips, but that library supports a bunch of different formats and would just bloat the code. I think it's meant for cli tools. You want a polyfill to be as small and specific as possible so it downloads and runs fast. Just having the polyfill will incur overhead so you want to minimize it as much as possible.
|
|
|
fab
|
2022-11-02 11:41:04
|
No add anymore
|
|
2022-11-02 11:41:25
|
An interest in encoder such as cylbius new aom lavish commit
|
|
2022-11-02 11:41:42
|
Isn't interesting enough to detract Google
|
|
2022-11-02 11:41:51
|
Google is a big company
|
|
2022-11-02 11:42:27
|
Google knows what it's doing
|
|
|
The_Decryptor
|
2022-11-02 11:43:14
|
I've actually stopped receiving emails when somebody comments on the Chrome issue, not sure why
|
|
2022-11-02 11:43:25
|
Stopped sending them after it hit 100 comments
|
|
2022-11-02 11:44:25
|
Yeah, I got an email for comment 99, then 151 next, and no others
|
|
|
Jim
|
2022-11-02 11:53:00
|
I'm still getting the emails.
|
|
|
_wb_
|
2022-11-02 12:53:01
|
There could be a limit on how many mails it sends per day or something
|
|
|
Fraetor
|
2022-11-02 07:56:27
|
Yeah, I think it only sends one per day now.
|
|
|
Moritz Firsching
|
|
Jim
That's fine, didn't know if you wanted it to be under libjxl project anyway. <@795684063032901642> if you want to reach out, would love to go over that.
|
|
2022-11-02 08:27:58
|
We have a working wasm decoder from this summer, see https://github.com/eustas/jxl-demo for the code and https://eustas.github.io/jxl-demo/ to see it live.
Steps that could be improved are
- ease-of-use, perhaps using a service worker,
- wasm binary size, currently 216k (brotli) compressed with some opportunity for improvements
|
|
|
Kleis Auke
|
|
Moritz Firsching
We have a working wasm decoder from this summer, see https://github.com/eustas/jxl-demo for the code and https://eustas.github.io/jxl-demo/ to see it live.
Steps that could be improved are
- ease-of-use, perhaps using a service worker,
- wasm binary size, currently 216k (brotli) compressed with some opportunity for improvements
|
|
2022-11-02 08:34:09
|
FWIW, looking at the contents of `jxl_emcc.wasm`, I think Brotli was compiled for Wasm without `-DBROTLI_EXTERNAL_DICTIONARY_DATA` - see:
https://github.com/emscripten-core/emscripten/issues/9960
That would probably save some kilobytes.
|
|
|
Moritz Firsching
|
2022-11-02 08:35:36
|
yes, about 75 or so, a good first step
|
|
|
|
veluca
|
2022-11-02 08:44:42
|
but presumably the wasm decoder doesn't *actually* need a brotli decoder to just produce pixels
|
|
|
_wb_
|
2022-11-02 09:04:30
|
Yes, no brotli needed to get pixels, only to do jpeg bitstream reconstruction and compressed metadata boxes
|
|
|
Kleis Auke
|
2022-11-02 09:12:50
|
Ah, interesting. If it's not done already (I could not find the build script), `-DJPEGXL_ENABLE_TRANSCODE_JPEG=0 -DJXL_DEBUG_ON_ABORT=0` could also help to reduce binary size.
|
|
|
Jim
|
2022-11-02 10:57:42
|
If there is some way to break out various features that might not be needed with every image, that would make a smaller base file, which could then dynamically include the other libraries as necessary.
|
|
2022-11-03 12:25:50
|
If you are using Firefox, service workers don't currently work in private tabs.
|
|
2022-11-03 01:18:17
|
From just a cursory glance, it looks like there are some placeholders which might not work, and they don't appear to be using an actual service worker. That is what I will be looking into tomorrow. As far as Firefox's private tab service worker, I will probably just display an error message for now. Getting `img` and css `background-image` working without a service worker is a tricky proposition, will look into that later. Firefox has a bug out for it but no idea when it will be fixed.
|
|
|
Jyrki Alakuijala
|
2022-11-03 05:53:23
|
Wasm ecosystem is a bit immature with support and particularly HDR support, but creating strong use cases like JPEG XL will also help to close those gaps faster
|
|
|
lonjil
|
|
Jyrki Alakuijala
Overall the 7+25 % gives a 30 % improvement. This makes JPEG1 more powerful than AVIF in the q90+, possibly q85+ range
|
|
2022-11-03 11:34:27
|
is that compared to mozjpeg? I would assume so but want to be sure.
|
|
|
Jim
|
|
Jyrki Alakuijala
Wasm ecosystem is a bit immature with support and particularly HDR support, but creating strong use cases like JPEG XL will also help to close those gaps faster
|
|
2022-11-03 11:52:11
|
Currently, browsers only support the p3 color space (https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/getContext#colorspace) but there is a proposal to add rec (https://github.com/w3c/ColorWeb-CG/blob/master/hdr_html_canvas_element.md - no idea how long before they implement it). However, Firefox is behind and is still working on any HDR support.
Only Safari supports it in CSS so far: https://caniuse.com/?search=display-p3
|
|
2022-11-03 12:04:16
|
None of that has specifically to do with wasm though, but if the browsers don't support displaying a wider gamut, it won't matter much if wasm outputs in HDR.
|
|
|
kyza
|
2022-11-03 01:06:00
|
Pretty disappointing but understandable.
|
|
|
Kleis Auke
|
2022-11-03 01:27:44
|
Indeed, it's a bit disappointing. Note that libjxl itself is still fuzzed, see e.g.:
<https://github.com/google/oss-fuzz/tree/master/projects/libjxl>
(and I vaguely remembered that it is/was fuzzed internally by Google as well)
So, most fuzzer bugs would probably have already been caught by `djxl_fuzzer` and others (libvips' fuzzer only tested JXL decode).
Besides the support overhead, it also prevented new timeouts and OOMs from being found in other dependencies of libvips, since these type of fuzzer bugs are only reported once per fuzz target.
<https://google.github.io/oss-fuzz/faq/#how-do-you-handle-timeouts-and-ooms>
(I reported these in <#848189884614705192> and verified that it's also reproducible with `djxl_fuzzer`)
|
|
|
|
afed
|
2022-11-03 04:40:24
|
so this is the initial implementation of a Jpeg decoder with quality improvements?
https://github.com/libjxl/libjxl/pull/1858
|
|
2022-11-03 06:29:41
|
i think, what if Jpeg transcoding were also added together with the new decoder?
but the bad thing is that full Jpeg XL would lose one of its useful features and also some of its chances of getting full support
|
|
|
Jyrki Alakuijala
|
|
afed
so this is the initial implementation of a Jpeg decoder with quality improvements?
https://github.com/libjxl/libjxl/pull/1858
|
|
2022-11-03 08:20:49
|
yes
|
|
|
|
afed
|
2022-11-04 03:47:36
|
```I think JPEG XL's ability to be both non-Web editing format and Web format is actually working against it here, because the Web platform doesn't want non-Web features. Browsers haven't even implemented AVIF as specced, only a bare minimum subset. JPEG XL doesn't have a minimal subset```
https://news.ycombinator.com/item?id=33454038
|
|
2022-11-04 03:47:45
|
Code complexity and vulnerabilities are a serious thing, but I wonder what jxl features are completely unneeded for the web?
|
|
2022-11-04 03:48:01
|
High quality photo viewing, lossless, HDR, higher resolution (not even mentioning mostly web-specific features)?
|
|
2022-11-04 03:48:09
|
There's a lot of use of highly compressed images on the web, but are all the other things really that useless?
|
|
2022-11-04 04:04:13
|
I agree that supporting something bloated, with very different purposes like PDF or PSD is bad, but JXL doesn't have anything really useless for a web platform
|
|
|
_wb_
|
2022-11-04 04:05:14
|
Cmyk is useless for the web, but we don't allow that in Level 5 (which is what is intended for browsers)
|
|
2022-11-04 04:07:52
|
Multispectral imagery (like satellite images with 12 channels and stuff like that) is useless for the web, but we also don't allow that in Level 5
|
|
2022-11-04 04:10:13
|
Everything else I think could actually be useful for the web, e.g. layers can also be a good idea just for compression
|
|
|
lonjil
|
2022-11-04 04:10:15
|
Are the levels and their features listed somewhere, or is it in the spec?
|
|
|
_wb_
|
2022-11-04 04:10:26
|
It's in the spec, in 2nd edition
|
|
|
|
afed
|
2022-11-04 04:11:49
|
But even Cmyk is also present in Jpeg and this is rather a compatibility issue
|
|
|
Traneptora
|
|
_wb_
It's in the spec, in 2nd edition
|
|
2022-11-04 04:17:30
|
speaking of which, is it possible to cite the 2nd edition draft posted in <#1021189485960114198> or is that the sort of thing that for legal reasons best be left until it's published
|
|
|
_wb_
|
2022-11-04 04:18:05
|
Best left until published, I think
|
|
2022-11-04 04:19:34
|
I am not supposed to be sharing drafts, but I do it anyway because it is obviously the best way to improve them. But it would be wise not to draw ISO's attention to it.
|
|
|
BlueSwordM
|
|
afed
High quality photo viewing, lossless, HDR, higher resolution (not even mentioning mostly web-specific features)?
|
|
2022-11-04 04:59:54
|
Lossless is needed because PNG supports it.
|
|
|
_wb_
|
2022-11-04 05:18:35
|
Lossless for some images just compresses better than lossy
|
|
|
Traneptora
|
2022-11-04 10:00:36
|
lossless is also necessary for the web for other purposes like small icons
|
|
2022-11-04 10:00:53
|
remember you can losslessly recompress gif animations with jpegxl since it supports paletted color
|
|
|
Fraetor
|
2022-11-05 12:29:36
|
I guess the difference there is that while they have PDF support in the browser, it isn't really part of the web platform.
|
|
|
DZgas Ж
|
|
DZgas Ж
|
|
2022-11-05 06:06:40
|
2 and a half years have passed, I checked the problem again, and there are no more problems
|
|
2022-11-05 06:07:53
|
even at low quality
|
|
2022-11-05 06:09:36
|
q90 already, apparently, too much
|
|
2022-11-05 06:10:31
|
it's 2 times less than in the tests 2 years ago, so the problem is definitely solved
|
|
|
DZgas Ж
ok it's just a gradient of cooked chicken, the original
|
|
2022-11-05 06:16:18
|
the problems with the color blocks are also completely solved
|
|
2022-11-06 07:16:35
|
hmmmmm, why does saving a blank white 800x600 sheet give 457 bytes at speeds 1-3, but more than 520 bytes at speeds 4-9
|
|
2022-11-06 07:18:41
|
I said that about varDCT. But there is also an anomaly on Modular size at a speed of 3
|
|
2022-11-06 07:21:21
|
varDCT q0 e7 gives 597 bytes but q99 e2 gives 492 bytes
|
|
|
yurume
|
2022-11-06 07:31:58
|
each effort level and modular vs. vardct will use different sets of heuristics (cause you can't evaluate all of them for speedy encoding) so that's not that strange
|
|
|
DZgas Ж
|
|
yurume
each effort level and modular vs. vardct will use different sets of heuristics (cause you can't evaluate all of them for speedy encoding) so that's not that strange
|
|
2022-11-06 07:32:39
|
I said only about varDCT
|
|
|
yurume
|
2022-11-06 07:33:30
|
I thought you were comparing modular efforts 1--3 vs. 4--9 and (separately) vardct q0 e7 vs. q99 e2?
|
|
2022-11-06 07:33:37
|
both cases can be explained in the same way
|
|
|
DZgas Ж
|
|
yurume
I thought you were comparing modular efforts 1--3 vs. 4--9 and (separately) vardct q0 e7 vs. q99 e2?
|
|
2022-11-06 07:34:25
|
stop talking about modular
|
|
|
DZgas Ж
varDCT q0 e7 gives 597 bytes but q99 e2 gives 492 bytes
|
|
2022-11-06 07:34:56
|
I have even more questions when I saw that the size decreases with increasing quality
|
|
2022-11-06 07:35:33
|
why and how it works
|
|
|
yurume
I thought you were comparing modular efforts 1--3 vs. 4--9 and (separately) vardct q0 e7 vs. q99 e2?
|
|
2022-11-06 07:37:01
|
in modular mode, EVERYTHING is LOGICAL, the slower the speed, the more it compresses, except for parameter 3. But VarDCT has some kind of abnormal size
|
|
|
yurume
|
2022-11-06 07:37:40
|
not even a general-purpose compression algorithm is guaranteed to work in that way, contrary to what you want to believe
|
|
|
DZgas Ж
|
|
yurume
not even a general-purpose compression algorithm is guaranteed to work in that way, contrary to what you want to believe
|
|
2022-11-06 07:38:44
|
is that the answer? what,
|
|
|
yurume
|
2022-11-06 07:39:37
|
I think you are holding an incorrect belief about compression levels
|
|
2022-11-06 07:40:01
|
so if you are using that belief to judge cjxl or anything else, of course you will be disappointed
|
|
|
DZgas Ж
|
|
yurume
I think you are holding an incorrect belief about compression levels
|
|
2022-11-06 07:40:32
|
pyramid
|
|
|
yurume
|
2022-11-06 07:41:17
|
in reality your "belief" is a (good) guideline about compression levels and algorithm authors will strive to maintain that, but it is not guaranteed, and I have seen plenty of cases where the "belief" doesn't hold
|
|
2022-11-06 07:41:41
|
and that's especially visible when you are pushing those algorithms to edge cases
|
|
|
DZgas Ж
|
2022-11-06 07:42:23
|
it's just that in this test it turns out that the best is not the best? for what reasons? why don't you think it's a bug?
algorithms are algorithms, there should be a clear and understandable explanation for how they work
|
|
|
yurume
|
2022-11-06 07:42:59
|
it can be argued that, for example, higher compression levels should also run lower compression levels and pick the best result, and sometimes it is naturally possible, but sometimes it isn't and it's a waste of time to run lower compression levels which will lose most of the time
|
|
|
DZgas Ж
|
2022-11-06 07:43:41
|
<@268284145820631040> can you explain to me why VarDCT takes longer and produces a worse file size
|
|
|
yurume
|
2022-11-06 07:43:52
|
so as long as you don't frequently encounter that kind of cases, it's fine
|
|
|
DZgas Ж
<@268284145820631040> can you explain to me why VarDCT takes longer and produces a worse file size
|
|
2022-11-06 07:45:22
|
I'm not the best person to answer that (I do have some understanding of modular encoding in libjxl, but not VarDCT), but my guess is that at each effort level libjxl will try a *limited* set of AC strategies and other parameters, and sometimes those sets don't overlap
|
|
2022-11-06 07:45:54
|
so higher level may have missed a better set of parameters in this particular case
|
|
|
DZgas Ж
|
2022-11-06 07:46:23
|
<@794205442175402004> I need you to tell me who is working on VarDCT
|
|
|
yurume
so higher level may have missed a better set of parameters in this particular case
|
|
2022-11-06 07:47:35
|
If this is the case, then why is there no higher level that would compare all the best results, using all functions from all compression levels?
|
|
|
yurume
|
2022-11-06 07:49:31
|
if the highest level wins about 99% of time there is not much incentive left to run lower levels, I agree it *can* be done in principle, but it doesn't have to be absolutely done
|
|
2022-11-06 07:50:56
|
but, if you can demonstrate that, in most remaining cases lower levels can produce significantly better files than the highest level, then it might be worthwhile to run lower levels
|
|
2022-11-06 07:51:13
|
I believe libjxl developers have already done that calculation though
|
|
2022-11-06 07:52:20
|
to conclude my remark: compression, especially lossy one, is a balancing act between compression ratio, output quality and compression time
|
|
2022-11-06 07:53:06
|
it is very easy to result in a seemingly suboptimal result, but it might not be worthwhile to fix that suboptimal result if that suboptimality is not significant enough
|
|
|
DZgas Ж
|
|
yurume
if the highest level wins about 99% of time there is not much incentive left to run lower levels, I agree it *can* be done in principle, but it doesn't have to be absolutely done
|
|
2022-11-06 07:55:00
|
|
|
|
yurume
|
2022-11-06 07:55:02
|
I can imagine that in the future there would be jxl recompressors like DEFLATE, and they will try a lot of different parameters including those not even covered by libjxl at any level, but libjxl doesn't have to do so, much like many other DEFLATE implementations don't
|
|
|
DZgas Ж
|
|
yurume
|
2022-11-06 07:56:54
|
ah, and I forgot to mention that in VarDCT efforts are correlated with quality
|
|
|
DZgas Ж
|
2022-11-06 07:57:09
|
I want to show that in this case e7 gives better quality than e9
|
|
|
yurume
|
2022-11-06 07:58:08
|
maybe you are hitting the worst-case scenario for Butteraugli then? (I haven't calculated the actual metric, but it might be a possible explanation)
|
|
|
DZgas Ж
|
|
yurume
maybe you are hitting the worst-case scenario for Butteraugli then? (I haven't calculated the actual metric, but it might be a possible explanation)
|
|
2022-11-06 07:58:51
|
?
|
|
|
yurume
|
2022-11-06 07:58:53
|
so libjxl may have thought e9 is better than e7 in terms of Butteraugli, but it's not when you actually look at it
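One way to check which encode Butteraugli itself prefers, assuming the two encodes are named e7.jxl and e9.jxl and that the butteraugli_main dev tool from the libjxl tree is built; it prints the distance between a reference image and a distorted one:
```
# decode both encodes back to PNG, then measure each one's distance to the original
djxl e7.jxl e7.png
djxl e9.jxl e9.png
butteraugli_main original.png e7.png   # lower distance = closer to the original
butteraugli_main original.png e9.png
```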
|
|
|
DZgas Ж
|
|
yurume
maybe you are hitting the worst-case scenario for Butteraugli then? (I haven't calculated the actual metric, but it might be a possible explanation)
|
|
2022-11-06 07:59:05
|
No I do not know what it is
|
|
|
yurume
|
2022-11-06 07:59:12
|
(or you look at it *after zoom*)
|
|
|
DZgas Ж
|
|
yurume
so libjxl may have thought e9 is better than e7 in terms of Butteraugli, but it's not when you actually look at it
|
|
2022-11-06 07:59:49
|
can I have many quality metrics when creating a file?
|
|
|
yurume
|
2022-11-06 08:00:35
|
Butteraugli is an image distance metric used by libjxl, assuming that the image is seen from a certain viewing angle and distance from the screen and the screen has a specific intensity profile
|
|
2022-11-06 08:01:51
|
so if your viewing conditions don't match those assumptions, libjxl can give you a lower-quality image while thinking it performs better in Butteraugli
|
|
|
DZgas Ж
|
2022-11-06 08:02:35
|
okay, let's not discuss the problems of metrics yet, let's go back to my first question
|
|
|
yurume
|
2022-11-06 08:02:38
|
that kind of thing can easily complicate any lossy compression codec
|
|
|
DZgas Ж
|
2022-11-06 08:02:58
|
why is the white sheet at e9 15% larger in size than at e1
|
|
|
yurume
|
2022-11-06 08:03:53
|
keep in mind that e9 is still much smaller than the original uncompressed file 😉
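For scale, assuming the 800x600 white image discussed here is stored as plain 8-bit RGB: 800 × 600 × 3 = 1,440,000 bytes uncompressed, so even a few-hundred-byte .jxl is a reduction of several thousand to one.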
|
|
|
DZgas Ж
|
2022-11-06 08:05:25
|
and as far as I know JPEG XL does not have a function that determines which algorithm compresses better, VarDCT or Modular
|
|
|
yurume
|
2022-11-06 08:05:29
|
in this specific case the optimal encoder would have to determine how to efficiently encode a maximal-length RLE under the constraints of the JPEG XL format, and that's not a typical case I think?
|
|
|
DZgas Ж
|
|
yurume
keep in mind that e9 is still much smaller than the original uncompressed file 😉
|
|
2022-11-06 08:05:55
|
I want to point out that the PNG is 2 times smaller than EVEN modular e9
|
|
2022-11-06 08:06:03
|
202 bytes
|
|
|
yurume
|
|
DZgas Ж
and as far as I know JPEG XL does not have a function that determines which algorithm compresses better, VarDCT or Modular
|
|
2022-11-06 08:06:22
|
that's true, even when you specify a specific distance there is still a chance that modular is actually better than vardct at that distance
|
|
|
DZgas Ж
202 bytes
|
|
2022-11-06 08:06:53
|
well I believe a hand-crafted JXL file will easily reach ~50 bytes
|
|
|
DZgas Ж
|
2022-11-06 08:07:46
|
lossless WebP is also smaller than JPEG XL
|
|
|
yurume
well I believe a hand-crafted JXL file will easily reach ~50 bytes
|
|
2022-11-06 08:08:01
|
well how
|
|
|
yurume
|
2022-11-06 08:08:12
|
actually, I've checked that, it is 21 bytes long
|
|
2022-11-06 08:08:39
|
I've used jxl_from_tree https://jxl-art.surma.technology/ with the following spec:
```
Width 800
Height 600
- Set 255
```
|
|
|
DZgas Ж
|
2022-11-06 08:09:00
|
I note that for a white picture WebP gives an identical size at any quality, but JPEG XL does not
|
|
|
yurume
|
2022-11-06 08:09:02
|
so that's about the theoretical optimum if you have infinite time to optimize a particular JPEG XL file...
|
|
2022-11-06 08:09:33
|
it's an entirely different story whether an encoder can produce that file in a reasonable amount of time or not
|
|
|
DZgas Ж
|
|
yurume
actually, I've checked that, it is 21 bytes long
|
|
2022-11-06 08:09:43
|
of course, but has someone made a function to automatically detect this and create such an image directly in JPEG XL?
|
|
|
The_Decryptor
|
2022-11-06 08:10:50
|
https://jxl-art.surma.technology/?zcode=C89MKclQMDQwMuHySM1MzyiBsJ0yS1JSC0AyXLoKwalAUS4A < I got 20 bytes
|
|
|
yurume
|
2022-11-06 08:10:51
|
well JPEG XL can be directly used to simulate cellular automata with a small MA tree, but the encoder will have no idea whether a particular image can be produced by CA or not
|
|
2022-11-06 08:12:38
|
and some cellular automata can be used to simulate a Turing machine (keyword: universality of CA), so you are essentially asking for the shortest program that produces a given output, which is basically Kolmogorov complexity... and I think "extremely" is not even a good word to describe the computational complexity of doing that
|
|
|
DZgas Ж
|
2022-11-06 08:12:42
|
I'm not asking you to create a complex function for creating images, I only want "are all pixels in the picture the same color?"
|
|
|
yurume
|
2022-11-06 08:13:07
|
of course you can detect that case and produce a specific image for that
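For illustration, a trivial external pre-check along those lines, assuming ImageMagick is installed (its %k format specifier reports the number of unique colors in an image):
```
# if the whole image is one flat color, a tiny hand-written tree like the one above would suffice
if [ "$(identify -format '%k' input.png)" -eq 1 ]; then
  echo "flat single-color image"
fi
```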
|
|
2022-11-06 08:13:12
|
but there would be tons of those cases
|
|
2022-11-06 08:13:53
|
what if, for example, the image is a horizontal linear gradient? that can also be detected and described with an MA tree
|
|
|
DZgas Ж
|
2022-11-06 08:14:11
|
that would be very telling; we're only talking about a hundred bytes or so, but that's 20 times smaller than what standard JPEG XL produces
|
|
|
yurume
|
2022-11-06 08:14:31
|
but by doing so you may also have to detect vertical gradients, because it'd be strange if only some directions worked well
|
|
2022-11-06 08:14:58
|
and things can escalate from there quickly, and the time to detect those special cases will soon dominate the total compression time
|
|
|
DZgas Ж
|
2022-11-06 08:15:25
|
I'm also wondering if there are tools to look at the boundaries of the VarDCT blocks
|
|
|
yurume
|
2022-11-06 08:15:55
|
maybe there is a sweet spot for detecting those "simple" images, and you may propose one, but I would not hold my breath for that
|
|
|
DZgas Ж
|
|
yurume
and things can escalate from there quickly, and the time to detect those special cases will soon dominate the total compression time
|
|
2022-11-06 08:16:11
|
`not create complex function`
|
|
|
yurume
|
2022-11-06 08:16:45
|
a complex function is impossible to detect, but even simple images take time to detect
|
|
|
DZgas Ж
|
|
DZgas Ж
I'm also wondering if there are tools to look at the boundaries of the VarDCT blocks
|
|
2022-11-06 08:17:13
|
what is the maximum size of the VarDCT block at the moment?
|
|
|
yurume
|
2022-11-06 08:17:32
|
256x256
|
|
2022-11-06 08:18:21
|
J40 actually has an `if (false)` block for overlaying block boundaries, so it might be of use to you
|
|
|
DZgas Ж
|
|
yurume
256x256
|
|
2022-11-06 08:20:31
|
Well, I tried a 768x512 image, which should ideally be 6 blocks, so that they would all be zeros and shrink to a few bytes. But the image still weighs 450 bytes, so NOTHING works
|
|
|
yurume
|
2022-11-06 08:21:01
|
I'm not even sure if libjxl *does* produce an image with DCT256x256 blocks by itself though...
|
|
|
The_Decryptor
|
2022-11-06 08:21:40
|
https://jxl-art.surma.technology/?zcode=C89MKclQMDQwMuHySM1MzyiBsJ0yS1JSC8AyXFyZaQoVCnYKhlwKCroK4QraEEZwaomCARcA Took me way too long to figure out how to do a simple gradient
|
|
|
DZgas Ж
|
2022-11-06 08:21:44
|
in any case, I see this as a problem that no one has noticed; it needs to be solved
|
|
2022-11-06 08:21:58
|
and an explanation
|
|
|
yurume
|
|
The_Decryptor
https://jxl-art.surma.technology/?zcode=C89MKclQMDQwMuHySM1MzyiBsJ0yS1JSC8AyXFyZaQoVCnYKhlwKCroK4QraEEZwaomCARcA Took me way too long to figure out how to do a simple gradient
|
|
2022-11-06 08:22:50
|
it will be very interesting to implement all possible linear gradients in this way, but I'm not very sure if it's actually possible (because of truncation errors)
|
|
|
DZgas Ж
|
|
The_Decryptor
https://jxl-art.surma.technology/?zcode=C89MKclQMDQwMuHySM1MzyiBsJ0yS1JSC8AyXFyZaQoVCnYKhlwKCroK4QraEEZwaomCARcA Took me way too long to figure out how to do a simple gradient
|
|
2022-11-06 08:24:13
|
this is of course interesting, but I'm only concerned with the fact that an empty picture is not 20 bytes, as it could be, but 597 bytes
|
|
|
The_Decryptor
|
2022-11-06 08:25:13
|
I assume it's just a "simple" optimisation that's being missed, something to step in and override the normal encoding methods
|
|
2022-11-06 08:25:43
|
"This entire frame is a single colour, use modular+palette instead of VarDCT mode" etc.
|
|
|
diskorduser
|
2022-11-06 02:04:28
|
cjxl on openSUSE does not create JXL files. I'm filing a bug report. What severity should I choose?
|
|
2022-11-06 02:09:25
|
Looks like it only fails to encode from PNG sources. I'm choosing normal severity; it was probably built without PNG support or something.
|
|
|
DZgas Ж
|
2022-11-06 02:11:46
|
<@263309374775230465> do you have Linux?
|
|
2022-11-06 02:13:15
|
my friend on Linux just had problems with PNG yesterday when he compiled the latest build, while everything works for me on Windows
|
|
2022-11-06 02:14:26
|
|
|
2022-11-06 02:15:30
|
and then he just took a prebuilt binary and everything was fine
|
|
|
improver
|
2022-11-06 02:39:27
|
missing build dep?
|
|
|
diskorduser
|
2022-11-06 03:09:57
|
this works.
|
|
|
DZgas Ж
|
|
DZgas Ж
<@794205442175402004> I need you to tell me who is working on VarDCT
|
|
2022-11-06 04:46:46
|
<@794205442175402004>
|
|
|
_wb_
|
2022-11-06 04:51:27
|
how do you mean, working on vardct?
|
|
2022-11-06 04:51:49
|
vardct encoder improvements?
|
|
2022-11-06 04:52:15
|
I think atm encoder improvements are not the first priority
|
|
|
DZgas Ж
|
|
_wb_
how do you mean, working on vardct?
|
|
2022-11-06 04:56:40
|
so that you don't have to read our hundreds of discussion posts, I just want to ask why the encoder cannot encode a 768x512 white sheet as 6 blocks of 256x256
|
|
2022-11-06 04:58:13
|
I found out that, for a single-color input image, firstly, the size increases with increasing effort, and also that the quality settings somehow affect the size... do you have tools to look at the boundaries of the VarDCT blocks? <@794205442175402004>
|
|
|
_wb_
|
2022-11-06 04:59:26
|
iirc, current libjxl doesn't use blocks larger than 64x64
|
|
2022-11-06 04:59:58
|
or at least not by default, maybe there's a way to make it use the huge blocks too
|
|
|
DZgas Ж
|
2022-11-06 05:00:52
|
|
|
2022-11-06 05:01:22
|
<@794205442175402004> do you have tools to see the internal structure and the location of the blocks, and what kind of blocks they are
|
|
|
_wb_
|
2022-11-06 05:03:57
|
https://github.com/libjxl/libjxl/blob/main/tools/demo_vardct_select.sh
|
|
2022-11-06 05:04:43
|
in particular:
|
|
2022-11-06 05:05:35
|
```
benchmark_xl --input=$tmp/orig --codec=jxl:d$i --save_decompressed --save_compressed \
--debug_image_dir=$tmp --output_dir=$tmp
convert $tmp/orig \( $tmp/orig.jxl:d$i.dbg/ac_strategy.png \
-alpha set -channel A -evaluate set 66% \) \
-composite $tmp/blockselections.ppm
```
|
|
2022-11-06 05:06:59
|
if you pass `--debug_image_dir` to benchmark_xl, it will produce this file `ac_strategy.png` that represents the block selection, which you can then overlay on top of the image with that second imagemagick command
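A concrete single-image version of the above with fixed paths; the name of the .dbg subdirectory is an assumption based on the script's pattern, so check what benchmark_xl actually creates in the debug directory:
```
mkdir -p /tmp/dbg
benchmark_xl --input=white.png --codec=jxl:d1 --save_compressed \
  --debug_image_dir=/tmp/dbg --output_dir=/tmp/dbg
# overlay the block-selection map at ~66% opacity on top of the original
convert white.png \( "/tmp/dbg/white.png.jxl:d1.dbg/ac_strategy.png" \
  -alpha set -channel A -evaluate set 66% \) \
  -composite blockselections.png
```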
|
|
|
DZgas Ж
|
|
_wb_
```
benchmark_xl --input=$tmp/orig --codec=jxl:d$i --save_decompressed --save_compressed \
--debug_image_dir=$tmp --output_dir=$tmp
convert $tmp/orig \( $tmp/orig.jxl:d$i.dbg/ac_strategy.png \
-alpha set -channel A -evaluate set 66% \) \
-composite $tmp/blockselections.ppm
```
|
|
2022-11-06 05:07:42
|
why did you write it in such a complicated way
|
|
|
_wb_
```
benchmark_xl --input=$tmp/orig --codec=jxl:d$i --save_decompressed --save_compressed \
--debug_image_dir=$tmp --output_dir=$tmp
convert $tmp/orig \( $tmp/orig.jxl:d$i.dbg/ac_strategy.png \
-alpha set -channel A -evaluate set 66% \) \
-composite $tmp/blockselections.ppm
```
|
|
2022-11-06 05:09:24
|
I am absolutely...
|
|
|
_wb_
|
2022-11-06 05:09:25
|
i copypasted from that script
|
|
2022-11-06 05:09:40
|
no the input has to be a png or something
|
|
2022-11-06 05:10:08
|
the visualization thing is done encode-side, I don't think we can do it decode-side
|
|
|
DZgas Ж
|
2022-11-06 05:10:10
|
Error in jxl codec
D:\a\libjxl\libjxl\tools\benchmark\benchmark_xl.cc:129: JXL_CHECK: speed_stats.GetSummary(&summary)
|
|
2022-11-06 05:10:33
|
what about the paths?
|
|
|
_wb_
|
2022-11-06 05:10:53
|
oh you might need to compile a debug version, could be that the release version misses that visualization code
|
|
2022-11-06 05:11:09
|
and you need to make sure that the debug image dir already exists
|
|
|
DZgas Ж
|
2022-11-06 05:12:03
|
what is this?
D:\a\libjxl\libjxl\tools\benchmark\
|
|
2022-11-06 05:12:22
|
I run the program on the C drive and don't specify anything
|
|
|
_wb_
oh you might need to compile a debug version, could be that the release version misses that visualization code
|
|
2022-11-06 05:13:24
|
|
|
2022-11-06 05:14:59
|
and of course, what do I do about the convert commands if I'm on Windows
|
|
2022-11-06 05:19:11
|
I found out that the bug appears when passing the --debug_image_dir= option; it just doesn't work
|
|
|
Traneptora
|
|
DZgas Ж
<@268284145820631040> can you explain to me why VarDCT takes longer and produces a worse file size
|
|
2022-11-06 05:23:17
|
higher effort doesn't just produce *better* quality per bpp, it more accurately matches the target quality
|
|
2022-11-06 05:23:24
|
effort=3 tends to undershoot the quality target
|
|
2022-11-06 05:23:34
|
so while it produces smaller files, that's typically because the quality is lower than what you requested
|
|
2022-11-06 05:24:06
|
so at, say, effort=7, you're going to get pretty close to the targeted butteraugli distance of 1
|
|
2022-11-06 05:24:36
|
but at effort=3 you might ask for distance=1 and end up getting a file of lower quality than you asked for
|
|
|
DZgas Ж
|
|
Traneptora
so while it produces smaller files, that's typically because the quality is lower than what you requested
|
|
2022-11-06 05:25:12
|
I need to find out how and why the reduced quality affects the size of the picture. If it stayed the same, or if it weighed less, I would not have a question, but the SIZE is increasing, and that is not normal. Are you on Linux? Could you run the command above?
|
|
|
Traneptora
|
2022-11-06 05:26:49
|
because quality ultimately isn't something that the encoder gets to decide. the encoder gets to decide how much it *quantizes* the image
|
|
|
DZgas Ж
|
|
DZgas Ж
|
|
2022-11-06 05:26:53
|
I have an urgent need to see the structure of these 3 files because I don't understand
|
|
|
Traneptora
|
2022-11-06 05:27:01
|
i.e. it decides how much data it discards
|
|
2022-11-06 05:27:16
|
discarding more data results in lower quality images, but with smaller file sizes
|
|
2022-11-06 05:27:39
|
the encoder has to make an estimate of how much data to discard in order to achieve the desired quality
|
|
|
DZgas Ж
|
2022-11-06 05:27:43
|
we are talking about this image
|
|
2022-11-06 05:27:55
|
white 800x600
|
|
2022-11-06 05:29:22
|
and I discovered that, with VarDCT, the size goes up as the effort increases
|
|
|
Traneptora
|
2022-11-06 05:29:54
|
that ultimately depends on the heuristics
|
|
|
DZgas Ж
|
|
DZgas Ж
and I discovered that, with VarDCT, the size goes up as the effort increases
|
|
2022-11-06 05:30:30
|
And I want to know - why
|
|
|
Traneptora
|
2022-11-06 05:30:55
|
libjxl's effort=3 always produces lower file sizes than effort=7 (at the same distance), regardless of the input
if I had to guess, it's because effort=7 turns on some optional features that have some overhead to them
|
|
2022-11-06 05:31:05
|
it should probably have a heuristic to determine if they are necessary but it doesn't
|
|
|
DZgas Ж
|
2022-11-06 05:32:16
|
And a lot of time is spent on arranging the structural blocks, but this only makes it worse?
|
|
|
Traneptora
|
2022-11-06 05:32:37
|
I'm not entirely familiar with libjxl's specific heuristics
|
|
|
DZgas Ж
|
|
_wb_
https://github.com/libjxl/libjxl/blob/main/tools/demo_vardct_select.sh
|
|
2022-11-06 05:33:15
|
<@853026420792360980> can you see what's in here?
|
|
|
Traneptora
|
2022-11-06 05:33:48
|
oh god that shell script looks awful
|
|
2022-11-06 05:33:51
|
I need to fix it
|
|
|
DZgas Ж
|
|
Traneptora
oh god that shell script looks awful
|
|
2022-11-06 05:34:40
|
can you use it to show the structure of the blocks in the image?
|
|
|
Traneptora
|
|
DZgas Ж
can you use it to show the structure of the blocks in the image?
|
|
2022-11-06 06:17:58
|
no, the script doesn't work
|
|
2022-11-06 06:18:06
|
benchmark_xl --input="white.png" --save_decompressed --save_compressed --debug_image_dir="/tmp" --output_dir="/tmp" --codec=jxl:d1
|
|
|
DZgas Ж
|
|
Traneptora
benchmark_xl --input="white.png" --save_decompressed --save_compressed --debug_image_dir="/tmp" --output_dir="/tmp" --codec=jxl:d1
|
|
2022-11-06 06:19:17
|
--debug_image_dir does NOT work on Windows
|
|
|
Traneptora
|
2022-11-06 06:19:27
|
the issue is `--save_decompressed`
|
|
|
DZgas Ж
|
2022-11-06 06:21:20
|
well.
|
|
|
_wb_
|
2022-11-06 06:54:04
|
I assume it's a bug caused by the changes in the io codecs when cjxl/djxl were switched to use the api
|
|
|
diskorduser
|
2022-11-07 04:03:32
|
How do I make an animated JXL from an APNG?
|
|
2022-11-07 04:03:55
|
`❯ cjxl test.apng test.jxl -d 0
JPEG XL encoder v0.7.0 [AVX2,SSE4,SSSE3,Unknown]
Read 426x240 image, 15349882 bytes, 0.7 MP/s
Encoding [Modular, lossless, effort: 7],
JxlEncoderAddImageFrame() failed.`
|
|
|
_wb_
|
2022-11-07 04:26:51
|
can you send that apng or a smaller example so someone with a debug build can see why it's failing?
|
|
|
diskorduser
|
2022-11-07 04:40:46
|
I just created an APNG using ffmpeg. `ffmpeg -i 2022-11-07_20-03-33.mkv -vf scale=-2:240,fps=24 -ss 00:00 -to 00:10 test.apng`
|
|
2022-11-07 04:42:52
|
That APNG is valid; it works fine in Firefox. `❯ file test.apng
test.apng: PNG image data, 426 x 240, 8-bit/color RGB, non-interlaced
`
|
|
|
spider-mario
|
2022-11-07 07:22:06
|
rare shot of Chrome deprecating JXL support
|
|
|
Jyrki Alakuijala
|
|
_wb_
iirc, current libjxl doesn't use blocks larger than 64x64
|
|
2022-11-07 11:23:57
|
I wasn't able to make anything useful with them -- my thinking was to use optimization to find encodings, with layers having big transforms and selection masks
|
|
|
DZgas Ж
|
|
_wb_
```
benchmark_xl --input=$tmp/orig --codec=jxl:d$i --save_decompressed --save_compressed \
--debug_image_dir=$tmp --output_dir=$tmp
convert $tmp/orig \( $tmp/orig.jxl:d$i.dbg/ac_strategy.png \
-alpha set -channel A -evaluate set 66% \) \
-composite $tmp/blockselections.ppm
```
|
|
2022-11-08 11:54:01
|
🤔
|
|
2022-11-08 11:54:43
|
--save_decompressed bug
|
|
2022-11-08 12:20:57
|
well, is there some way to look at the internal structure of VarDCT blocks visually?
|
|
|
fab
|
2022-11-08 01:13:21
|
benchmark_xl --input c.png --output_dir . --codec=jpeg:libjxl:d1.0 --save_compressed
|
|
2022-11-08 01:14:05
|
$ ./ci.sh opt
$ ./build/tools/benchmark_xl --input input.png --output_dir . --codec=jpeg:libjxl:d1.0 --save_compressed
|
|
|
DZgas Ж
|
|
fab
benchmark_xl --input c.png --output_dir . --codec=jpeg:libjxl:d1.0 --save_compressed
|
|
2022-11-08 01:54:44
|
but it doesn't work; show me what you get as output
|
|
2022-11-08 02:32:24
|
<@416586441058025472>
|
|