JPEG XL

jxl

Anything JPEG XL related

diskorduser
2021-08-15 04:30:21
Oh. probably I understood something wrong
BlueSwordM
2021-08-15 04:37:25
With the current improvements, it's more like +5% on low quality photos and 20% quality on directional art <:kekw:808717074305122316>
Cool Doggo
2021-08-15 04:40:51
degradation of quality in one type of image shouldn't be much of a problem as long as it isn't worse by a large margin
2021-08-15 04:41:28
especially since there will most likely be more findings that can improve on the one that got worse without harming the other
2021-08-15 04:42:35
also, at least to me, it is much easier to notice quality loss in art
lithium
Jyrki Alakuijala I haven't been able to make progress there -- I'm stuck with the 'better handling of large transforms' at the moment
2021-08-15 06:47:16
I understand, Thank you for your work 🙂
improver
2021-08-15 02:03:45
gatekeeping encoding features because they apply to one kind of thing is silly imo. that kind of thing could be intelligently autodetected by the encoder, and possibly applied to some areas and not others, even if right now it needs a manual knob. also, even manual knobs can be handy for artist workflows
diskorduser
2021-08-15 02:06:36
Just add line art [✔️] to gui when saving jxl.
improver
2021-08-15 02:07:12
yeh exactly
diskorduser
2021-08-15 02:10:32
http://0x0.st/-yz9.jpg
fab
2021-08-15 04:30:09
2021-08-15 04:30:15
https://en.m.wikipedia.org/wiki/Joint_Photographic_Experts_Group
improver
2021-08-15 04:33:25
these fonts look awful
2021-08-15 04:34:16
wtf is with that rendering of p in some places
fab
2021-08-15 04:37:11
maybe it is a bit oblique
2021-08-15 04:37:22
but anyway it is awful
2021-08-15 04:37:45
not something i would read on a tablet
2021-08-15 04:39:09
Fraetor
2021-08-16 12:17:37
What's a Picture?
2021-08-16 12:20:48
As in, what differentiates it from a photo?
fab
2021-08-16 08:03:52
i didn't encode anything new
_wb_
2021-08-16 08:33:10
I wonder if it would make sense to try to convert fonts to splines, so in authoring tools you could use such a font and it would get encoded with splines (or with patches that come from splines)
Scope
2021-08-16 08:34:01
Wouldn't that be too expensive to decode?
_wb_
2021-08-16 08:35:20
For bold/large text, jxl splines are not that useful in my limited experience, because we don't have a stroke width, we have a blurry thing. But for thin/small fonts (where dct has the most trouble anyway), splines might be suitable
2021-08-16 08:37:11
Expensive to decode: things can probably be sped up quite a bit, especially for the small sigmas that are useful for text (and you only need to render each letter/point size once, thanks to patches)
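[Editor's note] _wb_'s "blurry thing" description matches how jxl spline ink is rendered: a Gaussian of some sigma splatted at closely spaced samples along the curve, so there is no stroke width, only a blur radius. A toy illustration of why small sigmas suit thin text while bold strokes just come out blurry; this is plain NumPy, max-composited for simplicity (the real renderer accumulates weighted samples), and every parameter here is made up:

```python
import numpy as np

def splat_spline(points, sigma, shape=(64, 64)):
    """Render a polyline as JXL-style spline ink: a Gaussian of width
    `sigma` splatted at dense samples along the curve. There is no
    stroke width; small sigma -> thin crisp line, large sigma -> a
    blurry ridge (the bold/large-text problem)."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    canvas = np.zeros(shape)
    pts = np.asarray(points, dtype=float)
    for (x0, y0), (x1, y1) in zip(pts[:-1], pts[1:]):
        n = max(2, int(np.hypot(x1 - x0, y1 - y0) * 2))  # ~2 samples/pixel
        for t in np.linspace(0.0, 1.0, n):
            cx, cy = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            # max instead of sum: avoids pileup where samples overlap
            canvas = np.maximum(
                canvas,
                np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2)))
    return canvas

thin = splat_spline([(8, 32), (56, 32)], sigma=0.8)  # thin-font regime
bold = splat_spline([(8, 32), (56, 32)], sigma=4.0)  # "bold" is just blur
```

With sigma 0.8 the >50% ink region is one pixel wide; with sigma 4 it is a wide soft band, which is why a crisp bold stroke cannot be represented this way.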
2021-08-16 08:46:54
https://jxl-art.surma.technology/?zcode=C89MKclQMDYw4PJIzUzPKAEzubiCC3Iy81K5FIBA18hAgSI46EwxpNQQsCmmBgoWUPNMDBQMjSBsI2O4sJGBgYKRmQFMsZGpAZdrXgo0ZLl0FYJTS4BOMTAAAA
2021-08-16 08:47:15
Why do I see a red outline here?
2021-08-16 08:47:26
I would expect this image to be grayscale
2021-08-16 08:49:51
(things are getting way out of the 0..1 comfort zone here of course)
veluca
2021-08-16 11:25:38
<@!604964375924834314>
spider-mario
2021-08-17 09:53:58
I don’t understand, when I save the source and run `jxl_from_tree` on it myself, the resulting JXL is also 70 bytes but does not display any spline
2021-08-17 09:54:51
wait, I tried again and now it does
2021-08-17 09:54:55
I have no idea what changed
2021-08-17 09:55:17
but it also looks different
2021-08-17 09:55:55
2021-08-17 09:56:04
oh 73 bytes now?
2021-08-17 09:56:26
ah, here is the blank one now
2021-08-17 09:57:12
so `jxl_from_tree` is not deterministic on that file
2021-08-17 09:57:14
that’s encouraging
2021-08-17 10:03:49
yep, uninitialized read
_wb_
2021-08-17 12:53:06
2021-08-17 12:53:31
experimenting with segmenting the image in photo and nonphoto parts, encoding photo with vardct and nonphoto with modular
2021-08-17 12:54:05
example above is decoded with -s 8 so you see the vardct part blurry
2021-08-17 12:54:37
here the resolution of the segmentation map is 1:64
2021-08-17 12:55:31
my heuristics are not great but they kind of work (with some false positives and negatives)
2021-08-17 12:56:59
the outlines you see around the photo part are not intended, I think it's because I just encode full images that get added, and the not-encoded-part is black (0,0,0)
2021-08-17 12:58:01
you see that black around the edges, I dunno why, probably gaborish or something
2021-08-17 12:58:34
it's a funky effect anyway 🙂
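[Editor's note] The black outlines described above are easy to reproduce: with kAdd-style compositing the two layers sum back to the original exactly, but as soon as the vardct layer is decoded blurrily (downscaled decoding, or DCT blocks straddling the hole boundary), the black (0,0,0) filler bleeds into the visible part. A sketch in plain NumPy with a made-up 1:64-style segmentation map; this is not libjxl code:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((128, 128, 3))          # stand-in for the mixed image

# low-res segmentation map (True = photo), upsampled to full resolution
seg = rng.random((2, 2)) > 0.5
mask = (np.kron(seg, np.ones((64, 64))) > 0)[..., None]

photo_layer = np.where(mask, img, 0.0)   # nonphoto holes filled with black
nonphoto_layer = np.where(mask, 0.0, img)

# kAdd: the layers sum back to the original exactly...
assert np.allclose(photo_layer + nonphoto_layer, img)

# ...but any blur on one layer (a crude stand-in for lossy vardct
# decoding) mixes the black filler with real pixels at the seam,
# darkening the edge of the photo region.
blurred = photo_layer.copy()
blurred[1:-1] = (photo_layer[:-2] + photo_layer[1:-1] + photo_layer[2:]) / 3
recon = blurred + nonphoto_layer
```

The last photo row next to the hole averages with zeros, which is exactly the dark fringe visible in the -s 8 decode.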
veluca
2021-08-17 01:02:36
what do you replace the nonphoto parts with in VarDCT?
2021-08-17 01:03:15
the black around the edges likely comes from DCT blocks going across the (black, I suppose) replacement border
2021-08-17 01:03:38
I assume it disappears without `-s 8`
_wb_
2021-08-17 01:21:37
yes it disappears
2021-08-17 01:23:12
replacing the nonphoto parts with some background color like in patches would make more sense I guess (but it's trickier to pick that color)
Scope
2021-08-17 01:25:06
Pretty good detection on this image, what about efficiency on such mixed images and how would that work with reducing the quality?
2021-08-17 01:38:50
Perhaps a lossy palette would also be useful here (because lossy modular creates noticeable pixelation after a certain quality), like AV1 does: <https://github.com/AOMediaCodec/SVT-AV1/blob/master/Docs/Appendix-Palette-Prediction.md>
veluca
_wb_ replacing the nonphoto parts with some background color like in patches would make more sense I guess (but it's trickier to pick that color)
2021-08-17 01:41:11
why is it trickier?
2021-08-17 01:41:33
you *are* using kReplace blending for the nonphoto parts, right? 😛
_wb_
2021-08-17 01:41:45
no
2021-08-17 01:42:03
it's kAdd
veluca
2021-08-17 01:42:03
probably a good idea to do so then
_wb_
2021-08-17 01:42:11
well both are just full frame sized
veluca
2021-08-17 01:42:27
you can still do multiple kReplace patches 🙂
2021-08-17 01:42:45
although it is a bit more painful
_wb_
2021-08-17 01:42:49
if it's in big chunks like 64x64, yes
2021-08-17 01:42:58
I want to do it more fine grained if possible though
veluca
2021-08-17 01:43:25
tbf you probably should detect big rectangles of photo/nonphoto (even if not 64-aligned)
_wb_
2021-08-17 01:44:05
also I'm doing the nonphoto in (quantized) XYB and adding it with a huge patch, not sure if that's the best approach
2021-08-17 01:44:47
(could also do it in sRGB and use frame blending)
veluca
2021-08-17 01:45:06
I think XYB is a better option but I dunno
_wb_
2021-08-17 01:46:08
I think so too but not sure about the quantization if the nonphoto stuff contains gradients
2021-08-17 01:47:08
big rectangles of photo will usually work but not all photo parts are rectangular
veluca
2021-08-17 01:55:24
well it will work in most cases - and when it doesn't, you can always use alpha-blend patches 😄
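[Editor's note] Detecting "big rectangles of photo" could start from the classic largest-all-True-rectangle primitive over the segmentation mask (carve out the best rectangle, emit it as a kReplace patch/frame, clear it, repeat). A sketch of that primitive in plain Python via the histogram method; this is an illustration, not libjxl's heuristic:

```python
def largest_true_rectangle(mask):
    """Largest axis-aligned all-True rectangle in a 2D boolean mask.
    Returns (top, left, height, width); (0, 0, 0, 0) if mask is empty.
    Classic O(rows*cols) largest-rectangle-in-histogram approach."""
    best = (0, 0, 0, 0)
    heights = [0] * len(mask[0])          # column heights of True runs
    for r, row in enumerate(mask):
        heights = [h + 1 if v else 0 for h, v in zip(heights, row)]
        stack = []                         # (start_col, height), increasing
        for c, h in enumerate(heights + [0]):  # sentinel flushes the stack
            start = c
            while stack and stack[-1][1] >= h:
                start, sh = stack.pop()
                if sh * (c - start) > best[2] * best[3]:
                    best = (r - sh + 1, start, sh, c - start)
            stack.append((start, h))
    return best
```

Run on a small photo mask, the call below finds the 3x2 block of True cells spanning rows 0-2, columns 1-2.

```python
mask = [[0, 1, 1, 0],
        [1, 1, 1, 0],
        [1, 1, 1, 1]]
print(largest_true_rectangle(mask))
```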
_wb_
2021-08-17 08:34:56
getting some improvements in compression/quality, though it's at a big cost in enc/dec speed (might be because huge patches are maybe not blended in the most efficient way?)
2021-08-17 08:35:00
```
ClassD_APPLE_BasketBallScreen_2560x1440p_60_8b_sRGB_444_000.ppm.png
Encoding         kPixels   Bytes      BPP      E MP/s  D MP/s   Max norm     pnorm       BPP*pnorm    Bugs
-----------------------------------------------------------------------------------------------------------------
jxl                 3686  539560  1.1709201   1.798   24.019   1.86591029  0.59592572  0.697781424948  0
jxl:separate        3686  484890  1.0522786   0.721    3.958   2.19656682  0.65291474  0.687048240555  0
jxl:d2              3686  378848  0.8221528   1.833   22.743   3.60270667  0.93827879  0.771408511975  0
jxl:d2:separate     3686  356936  0.7746007   0.737    3.974   3.22002673  0.92808257  0.718893404432  0
jxl:d3              3686  304257  0.6602799   1.910   23.743   4.17911577  1.23043588  0.812432138116  0
jxl:d3:separate     3686  278003  0.6033051   0.705    4.447   3.96620774  1.17577899  0.709353486654  0
jxl:d4              3686  260803  0.5659787   1.786   14.222   5.61551571  1.53005294  0.865977425136  0
jxl:d4:separate     3686  236307  0.5128190   0.731    3.444   4.62687683  1.42663903  0.731607616013  0
jxl:d5              3686  230743  0.5007444   1.865   13.427   9.02261162  1.94301577  0.972954183865  0
jxl:d5:separate     3686  211319  0.4585916   0.794    4.231   5.60950613  1.65590114  0.759382321131  0
jxl:d8              3686  175870  0.3816623   1.888   13.239  11.13273525  2.79994567  1.068633777045  0
jxl:d8:separate     3686  161431  0.3503277   0.820    4.210   8.94032955  2.34201284  0.820471951791  0
```
2021-08-17 08:44:40
```
nonphoto_JonScreenshot_1200x1400_8bit_sRGB.ppm
Encoding         kPixels   Bytes      BPP      E MP/s  D MP/s   Max norm     pnorm       BPP*pnorm    Bugs
-----------------------------------------------------------------------------------------------------------------
jxl                 1680  126861  0.6041000   2.656   23.792   1.45755124  0.48071558  0.290400280210  0
jxl:separate        1680  102180  0.4865714   1.230    5.079   1.73315644  0.48175152  0.234406527574  0
jxl:d2              1680  100106  0.4766952   2.680   21.553   2.04487586  0.59050652  0.281491644342  0
jxl:d2:separate     1680   87044  0.4144952   1.241    5.020   2.14929438  0.69423165  0.287755711316  0
jxl:d3              1680   84446  0.4021238   2.791   21.775   3.11525416  0.82836638  0.333105843060  0
jxl:d3:separate     1680   72008  0.3428952   1.229    5.046   3.28416300  0.96416079  0.330606144826  0
jxl:d4              1680   75510  0.3595714   2.945   12.832   3.51071453  0.96273206  0.346170943750  0
jxl:d4:separate     1680   65121  0.3101000   1.185    4.372   4.04519129  1.14461236  0.354944293284  0
jxl:d5              1680   68877  0.3279857   2.633   12.460   4.62320328  1.21067688  0.397084722550  0
jxl:d5:separate     1680   58110  0.2767143   1.124    4.540   5.59611368  1.46440364  0.405221407205  0
jxl:d8              1680   54622  0.2601048   2.502   12.429   7.50739479  1.96001851  0.509810148368  0
jxl:d8:separate     1680   47183  0.2246810   1.216    5.138   6.74870634  1.90634576  0.428319581608  0
Aggregate:          1680   75597  0.3599858   1.802    9.023   3.36590130  0.95190569  0.342672572549  0
```
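[Editor's note] For reading these benchmark_xl tables: the second-to-last column is literally BPP × pnorm (lower is better), so `separate` wins whenever its size saving outweighs its pnorm loss, as in the `jxl` vs `jxl:separate` rows. A quick sanity check with numbers copied from the first table:

```python
# (bytes, bpp, pnorm, bpp*pnorm) copied from the first table above
rows = {
    "jxl":          (539560, 1.1709201, 0.59592572, 0.697781424948),
    "jxl:separate": (484890, 1.0522786, 0.65291474, 0.687048240555),
}
for name, (nbytes, bpp, pnorm, product) in rows.items():
    # the BPP*pnorm column is just the product of the two columns
    # (tolerance is loose because the table values are rounded)
    assert abs(bpp * pnorm - product) < 1e-6, name

# separate: ~10% fewer bytes more than pays for the higher pnorm
assert rows["jxl:separate"][3] < rows["jxl"][3]
```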
OkyDooky
2021-08-17 09:06:45
👍🏻
Scope
2021-08-17 09:11:37
So, it could be something like `--patches=2` and maybe enabled with `-s 9` by default, also interesting how this would work not only on UI/text images but also on art content 🤔
veluca
2021-08-17 11:17:02
for dec speed I strongly suspect it's because modular is *a lot* slower - we probably need a "webp mode" for this to work well (and "stitching" together the nonphoto pieces in a smaller frame likely helps a lot too - also for i.e. memory...)
2021-08-17 11:17:09
having said that, pretty cool results 😄
frank cilantro
2021-08-18 06:33:21
any chance there's a handy guide to the adaptive quant + use of side info in predictions?
veluca
2021-08-18 08:47:22
nope! 😛
_wb_
2021-08-18 01:06:40
https://github.com/libjxl/libjxl/pull/466
Scope
2021-08-18 01:09:48
<@!794205442175402004> Also, it would be nice to change the options for the lossless mode segments; maybe a lossy palette, or `-i 0` for faster decoding, could be useful
_wb_
2021-08-18 01:11:04
yes - one thing that's a bit annoying is that the modular mode stuff is still done in XYB, which likely is not the best idea (but it is easiest to implement)
2021-08-18 01:12:46
doing the separation with multiple RegularFrames instead of with one big patch probably makes more sense - but then you need to detect good rectangles and stuff
2021-08-18 01:18:06
<@!179701849576833024> patches need to be in the same color space as what you're adding them to, right? so XYB if you want to add them to an XYB frame. But with frames I could have a modular sRGB frame followed by vardct frames that use XYB and that should work with any blend mode, right? (kReplace if you could detect photo rectangles and do multiple vardct frames, kAdd if you want to do one big vardct frame) as long as the color profile is representable by an Enum...
veluca
2021-08-18 01:20:49
yes, in theory - but I think XYB is OK anyway 😛
2021-08-18 01:21:04
(you *can* choose scaling factors...)
_wb_
2021-08-18 01:24:02
I suppose a palette in RGB or in XYB doesn't make a difference, but for synthetic gradients that were computed in RGB, things might work better in RGB
Scope
2021-08-19 03:50:28
<@!794205442175402004> What's a Flyby fix? Will it improve the patches logic? <https://github.com/libjxl/libjxl/pull/466>
_wb_
2021-08-19 04:00:21
it's a fix that happened because a bug was found in existing code; it's not the main purpose of the pull request, but it does fix something as a side effect
2021-08-19 04:01:02
and yes, it improves vardct mode patches
veluca
2021-08-19 04:01:20
but not the detection 😛
_wb_
2021-08-19 04:02:37
nothing changes in the detection, but the way they are encoded is more sensible now (it used to be too high quality, leading to unnecessary bytes used to represent the patch more accurately than what was useful)
Scope
2021-08-19 04:08:50
Also, sometimes the reduction of the palette is very strong in modular mode; perhaps at a low quality setting it is worth using VarDCT for more blocks, or doing something like smoothing gradients
2021-08-19 04:09:36
Source
2021-08-19 04:09:59
`-d 8`
2021-08-19 04:10:15
`-d 8 separate`
2021-08-19 04:19:33
And the borders are still visible <:PepeHands:808829977608323112>
_wb_
2021-08-19 04:57:36
I think the approach needs to be changed, kAdding a whole frame is not a great method for various reasons
2021-08-19 05:01:01
It might be better to do something like multiple kReplace Modular patches so the VarDCT part can avoid discontinuities (e.g. could apply strong blur to the holes or something like that)
Scope
2021-08-19 05:15:42
Btw, the usual already existing patches are fully lossless when used in VarDCT mode?
_wb_
2021-08-19 05:30:50
No, they are done in quantized XYB (but otherwise lossless, yes)
2021-08-19 05:32:37
Ah and lossy patch detection also allows different copies to be (very) slightly different
2021-08-19 05:33:56
Below d3, the patches get subtracted from the vardct image, so if it's a perfect match and there is no quantization error, there will be no residuals in the vardct part, but if there's some error, vardct can still correct it
Scope
2021-08-19 06:27:04
Because it is useful at high quality settings to use patches not only for better compression, but also for places where VarDCT can create noticeable artifacts, and it is important that the modular mode is then also guaranteed to be free of noticeable distortion
_wb_
2021-08-19 06:53:15
Yes, probably better things can be done than what we do now, which is the equivalent of something like `cjxl -m -R 0 -P 5 -Q 80`
2021-08-19 06:56:09
It might be worthwhile to use (delta) palette instead, for example
2021-08-19 06:57:25
(but the default delta palette is made for RGB, not XYB, and we don't have an encoder for that atm)
grey-torch
2021-08-20 06:21:29
Is there any plan to add jxl support to stb_image.h? It would drive the adoption rate 100x considering how popular the library is
2021-08-20 06:30:19
https://github.com/nothings/stb
2021-08-20 06:32:34
welp, they didn't accept new formats, too bad
> Will you add more image types to stb_image.h?
> No. As stb_image use has grown, it has become more important for us to focus on security of the codebase. Adding new image formats increases the amount of code we need to secure, so it is no longer worth adding new formats.
_wb_
2021-08-20 07:03:15
I suppose in principle you could just concatenate all files in libjxl in the right order, remove the internal #includes, and call it stb_jxl.h, right?
2021-08-20 07:03:31
(if you don't mind it being C++)
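[Editor's note] The concatenation idea is roughly what sqlite-style amalgamations do: paste sources in dependency order and drop the project-internal `#include "..."` lines while keeping system `<...>` includes. A hypothetical shell sketch; the file list, ordering, and feasibility for libjxl (generated headers, the highway dependency) are all assumptions:

```shell
# Toy amalgamation helper: $1 is the output header, the remaining
# arguments are source files already sorted in dependency order.
amalgamate() {
    out=$1; shift
    : > "$out"
    for f in "$@"; do
        printf '// ---- %s ----\n' "$f" >> "$out"  # keep provenance visible
        grep -v '^#include "' "$f" >> "$out"       # drop internal includes only
    done
}
# hypothetical invocation:
# amalgamate stb_jxl.h lib/jxl/base/*.h lib/jxl/*.h lib/jxl/*.cc
```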
2021-08-20 07:04:50
You can just as well just statically link with libjxl though
2021-08-20 07:06:29
In my opinion the nice thing about software libraries is that you can do dynamic linking, which means not only do you avoid duplicating binary code, also you only need to update one place to update all dependencies.
grey-torch
2021-08-20 08:20:44
People love single-header libraries; I see lots of recent libs released these days offering this as a feature https://github.com/randy408/libspng
_wb_
2021-08-20 08:21:28
Is that because it is easier than figuring out how to statically link?
grey-torch
2021-08-20 08:22:02
yeah
190n
grey-torch welp they didn't accept new formats, too bad > Will you add more image types to stb_image.h? > No. As stb_image use has grown, it has become more important for us to focus on security of the codebase. Adding new image formats increases the amount of code we need to secure, so it is no longer worth adding new formats.
2021-08-20 08:22:38
tbh i would use stb_image if i wanted something dead simple to integrate, not if i actually wanted performance or a variety of codecs. and i might switch from it to another library (depending on the image format) if my project grew, especially because i might end up with better build infrastructure that would make it easier to link against a more complicated library
2021-08-20 08:25:21
in [1] are they saying that that's their criteria for "no known security bugs" and stb_image just isn't fuzz tested? or is there actually a known vulnerability in stb_image?
grey-torch
2021-08-20 08:29:12
dunno, the definition is really vague about it
_wb_
2021-08-20 09:00:47
https://github.com/libjxl/libjxl/pull/473
2021-08-20 09:01:11
> On a set of 9 screenshots, this gives a ~10% improvement in bpp*pnorm, more on higher distances than on lower distances (3.3% at d1, 8.1% at d3, 18.5% at d10)
2021-08-20 09:04:43
that regression has been there for at least a year or so, so not sure if it should be called a regression or just an encoder bug that has basically always been there
2021-08-20 09:06:22
for photographic images it doesn't make any significant difference, but for nonphoto / mixed photo+nonphoto, this makes quite a big difference
veluca
2021-08-20 10:49:41
ah but stb_image implements all the formats it can decode... it's *never* going to add JXL or AVIF xD
2021-08-20 10:50:29
writing a JXL decoder in a single header is not going to happen any time soon
Jyrki Alakuijala
_wb_ For bold/large text, jxl splines are not that useful in my limited experience, because we don't have a stroke width, we have a blurry thing. But for thin/small fonts (where dct has the most trouble anyway), splines might be suitable
2021-08-20 02:20:30
The original ideas for splines are features like a single hair poking out from the head or a telephone line hanging in the sky -- today those become full DCT matrices when the line is oblique, and DCT is terrible with them. I was thinking that eventually we may be able to detect using ML where to model these with splines and then use simple optimization for fitting.
_wb_
2021-08-20 02:54:48
Maybe classic vectorization algorithms might also work
Jyrki Alakuijala
2021-08-20 03:04:36
for some artistic impressions possibly would be great -- I'm thinking we will have an intern project in two years to do hair splitting
_wb_
2021-08-20 03:10:13
Some heuristic to detect the "thin line on smooth background" case, then classic vectorization to find the line, then some optimization to find the best fitting spline params, or something like that
Jyrki Alakuijala
2021-08-20 03:37:26
yeah!
_wb_
2021-08-20 06:58:27
Yes, top message in <#822120855449894942> should help
fab
2021-08-20 07:08:56
https://artifacts.lucaversari.it/ https://imageglass.org/moon https://mega.nz/file/fA8CnAIT#FZiMDYy1tEqDhDFy6ZL5hixvSug1lxpEEk229L2THu4
2021-08-20 08:05:56
libjxl image quality is incredible, it feels like running webp2, the nhw codec and jxl itself at the same time
2021-08-20 08:06:11
some could argue the image has some blocking
2021-08-20 08:06:40
but honestly great quality
2021-08-20 08:07:48
people could dislike that artifacted look
2021-08-20 08:07:50
me not
2021-08-20 08:07:59
honestly i like luca versari work
2021-08-20 08:08:58
2021-08-20 08:10:03
2021-08-20 08:11:03
2021-08-20 08:11:16
the artifacts are very pleasing
2021-08-20 08:11:50
there's some work indeed
2021-08-20 08:18:06
https://storage.googleapis.com/downloads.webmproject.org/releases/webp/WebpCodecSetup.exe
2021-08-20 08:18:48
https://en.wikipedia.org/w/index.php?title=WebP&diff=prev&oldid=1037977338
2021-08-21 03:11:59
-q 77.837 -s 5 --use_new_heuristics --gaborish=1 -I 0.521
2021-08-21 03:12:26
trying this; probably -d can't get this accurate without compiling xiota's fix
2021-08-21 03:12:37
and probably even that doesn't fix it
2021-08-21 03:35:53
https://artifacts.videolan.org/x264/release-win64/
2021-08-21 03:36:10
x264-r3046-fa26446.exe 17M 10-Feb-2021 21:10
2021-08-21 03:36:36
It has a pleasing effect like this encoder on slowest speed and CRF 15
2021-08-21 03:36:58
don't know if still inflates instagram image sizes or is worth it
2021-08-21 03:37:22
but is fantastic stuff
2021-08-21 03:38:24
artifacts are definitely more noticeable
2021-08-21 03:38:35
it deletes text in grey automatically
2021-08-21 03:39:02
good rate control
2021-08-21 03:39:23
not good for gray photos
2021-08-21 03:39:30
because it doesn't preserve banding
2021-08-21 03:39:59
if a photo like that has gone through imagegonord, it automatically tries to reduce the palette rather than keep a more uniform banding
2021-08-21 03:40:11
so jpeg xl is going to a different direction
2021-08-21 03:40:28
the speed was fast, 4.3 MP/s on 1 core of an i3-330M
2021-08-21 03:40:40
ram usage was 780 MB-1.1 GB with bigger photos
2021-08-21 03:40:57
2021-08-21 03:42:54
anyway good encoder
2021-08-21 03:43:14
more than i expected at 4.3 MP/s
_wb_
2021-08-21 04:27:28
Anyone going to submit something to the <#824000991891554375> compo?
Deleted User
2021-08-21 04:31:04
No, we let you win. ;)
_wb_
2021-08-21 04:37:43
I don't think I will participate
2021-08-21 04:40:11
Not fair if the contest organizer who makes the rules also participates
fab
2021-08-21 06:23:47
Honestly a
2021-08-21 06:23:48
-q 84.498 -s 7 --epf=0 -I 0.93 --gaborish=0 --dots=0
2021-08-21 06:23:54
that looks better
2021-08-21 06:23:56
is better than
2021-08-21 06:24:09
a -q 77.837 -s 5 --use_new_heuristics --gaborish=1 -I 0.521 that looks good
2021-08-21 06:25:08
jxl is now only 10% better than webp
2021-08-21 06:25:29
cavif rs is crap
2021-08-21 06:29:32
but without the tools jxl looks worse
Scope
2021-08-22 04:30:25
Also about vectorization https://youtu.be/EvGA5qCfy9I?t=294
2021-08-22 04:30:50
2021-08-22 04:31:10
2021-08-22 04:31:32
diskorduser
2021-08-22 04:44:44
Maybe it works better on anime episodes.
Scope
2021-08-22 04:46:58
Many anime have quite complex backgrounds
diskorduser
2021-08-22 04:49:27
Yeah
fab
2021-08-23 10:35:35
big improvement in this build
ishitatsuyuki
2021-08-23 10:42:57
which commit dude
Jyrki Alakuijala
2021-08-23 01:00:05
I am not very optimistic of vectorization -- or perhaps just my eyes are not compatible with it
2021-08-23 01:00:23
it will of course work very well with vector content such as Peppa Pig
2021-08-23 01:00:43
but commonly those cases are not where the bulk of bits are -- focusing on it is focusing on a niche
nathanielcwm
2021-08-23 02:04:56
auto vectorisation is quite bad
2021-08-23 02:05:16
manually vectorising is quite good but very time consuming
2021-08-23 02:05:30
and ofc only really works if it's simple content
Scope
2021-08-23 02:19:41
Is there anywhere a list of all the options in benchmark_xl or their equivalents from jxl, webp, etc.?
Jyrki Alakuijala
2021-08-23 03:15:31
anything that can be done manually will find a neural solution at some stage
_wb_
Scope Is there anywhere a list of all the options in benchmark_xl or their equivalents from jxl, webp, etc.?
2021-08-23 03:17:29
I think only in the code. It's a bit of a mess tbh, we should at some point refactor both cjxl and benchmark_xl to use the encode api and to use a unified mechanism to set options
Jyrki Alakuijala
2021-08-23 03:41:18
we have technical debt in our codebase in relation to that
2021-08-23 03:41:39
until we froze the format we were only using code as a vehicle to define the format
2021-08-23 03:41:47
code quality was not a priority
2021-08-23 03:43:11
that started to change in early 2020 -- we planned to freeze in August 2020 (and ended up freezing in December, mostly because we got more ambitious in reducing decoder size for android apps)
2021-08-23 03:44:16
our early 2021 went into preparing for the practical world (with guidance from facebook), and to integrate the codec into Chrome
2021-08-23 03:44:47
and lots of security issues
2021-08-23 03:45:27
now we are at a stage where we can make further refinements in image quality and improve on production quality in the code base
2021-08-23 03:45:46
as an example cjxl doesn't use the API that the library provides
2021-08-23 03:46:20
when we fix these things, we will also improve on the related documentation
_wb_
2021-08-23 03:56:40
It's only natural that you have an encoder/decoder before you have a nice library api
fab
2021-08-25 01:46:10
i don't want the warning message for -d 0.3 / -d 8
lithium
2021-08-25 01:51:24
Hello, I have some questions about the cjxl --separate feature (PR 466); could you teach me some details about it? > https://github.com/libjxl/libjxl/pull/466
1. Which transforms will the non-photographic frame's Modular patch use?
2. If I input a pixel art image (a fully non-photographic image) in separate mode, will making one big Modular patch for the full image increase the file size?
3. If decoding time doesn't matter, I guess the --separate feature should be the best compression method for non-photographic image content (drawings)?
_wb_
2021-08-25 02:42:58
it's still very experimental, I dunno what the best way to proceed with it is
2021-08-25 02:43:50
two full frames is probably not the best approach, it's probably better to make smaller modular patches
lithium
2021-08-25 03:09:03
I understand, thank you very much 🙂
fab
2021-08-26 11:26:42
how do I convert a png photo to 12-bit jxl
2021-08-26 11:26:49
what is the command to specify 12 bits
2021-08-26 11:28:24
please help
_wb_
2021-08-26 11:54:16
`--override_bitdepth 12` iirc
fab
2021-08-26 03:39:12
i did
2021-08-26 03:39:13
for %i in (C:\Users\Utente\Documents\8\*.png) do cjxl -s 9 -d 0.787 -p --epf=2 --dots=0 -I 0.828 --faster_decoding=4 --photon_noise=ISO4780 --intensity_target=281 --override_bitdepth 12 %i %i.jxl
2021-08-26 03:40:08
https://artifacts.lucaversari.it/libjxl/libjxl/2021-08-25T13%3A38%3A37Z_a6867904bbb5e44191d22a27f850d2e8d5cdb005/
2021-08-26 03:41:47
for developing a raw file from a phone
2021-08-26 03:42:00
but some images are dark so i should fix exposure
2021-08-26 03:42:02
or use the jpg
2021-08-26 03:46:27
how does override_bitdepth work?
2021-08-26 03:46:39
because for me it creates the same file from a png without it
2021-08-26 03:58:22
anyway the result is good enough even without 12 bits
2021-08-26 03:58:24
.....
2021-08-26 03:58:25
-q 90.062 --gaborish=1 --epf=1 -I 0.94 --photon_noise=ISO424
2021-08-26 03:58:30
this with latest build
2021-08-26 03:58:44
i won't test generation loss because it is pointless for me
2021-08-26 04:00:07
also the newest build lists the current usage of photon noise
diskorduser
fab but some images are dark so i should fix exposure
2021-08-26 04:26:52
What are you doing with raw then
fab
2021-08-27 05:29:46
2021-08-27 05:30:53
I dont remember
2021-08-27 05:31:17
It should be 19.00 25 August 2021 libjxl
2021-08-27 05:31:35
Yes
2021-08-27 05:32:02
There is less block boundary
2021-08-27 05:32:05
Artifacts
2021-08-27 05:32:30
Hope that in 0.6 the standard settings will work well
diskorduser
2021-08-27 05:33:48
First learn to encode with default settings.
fab
2021-08-27 05:34:21
The 57 KB is too high quality for the current nightly encoder
2021-08-27 05:35:09
Ah i dont post anything
2021-08-27 05:35:37
Try for yourself if you want
2021-08-27 05:35:49
Is a nightly version
2021-08-28 08:31:39
2021-08-28 08:32:00
downloading those doesn't solve anything
2021-08-28 08:32:12
you need a stable version and that is far off
2021-08-28 08:32:26
https://ieeexplore.ieee.org/document/9505599
2021-08-28 08:32:39
this was not merged
2021-08-28 08:32:51
many things from the last 24 days have to be merged
2021-08-28 08:36:14
quality at various settings, especially lower than -q 83.428
2021-08-28 08:36:35
various things have to be improved to make a final version
2021-08-28 08:36:43
i guess final version is 1.0
_wb_
2021-08-28 10:36:46
3.14.15-9265 will be the final version
veluca
2021-08-28 10:38:57
taking a leaf out of TeX's book? 😛
fab
2021-08-28 11:24:58
for %i in (D:\DOCS\BACKUP\XIAOMI\1\Screenshots\*.jpg) do cjxl -j -s 7 -d 0.392 -I 0.283 --epf=2 --dots=1 --patches=1 --faster_decoding=3 %i %i.jxl
2021-08-28 11:25:50
s 8 blurs everything in current encoder
diskorduser
2021-08-28 11:28:26
Using jpg source with vardct -_-
_wb_
2021-08-28 11:29:51
Doing -d 0.392 on jpeg originals is silly, that is most likely higher quality than the original jpeg
fab
2021-08-28 11:31:23
_wb_
2021-08-28 11:31:34
Doing --epf 2 in combination with very high fidelity is also silly, those filters are made to make low fidelity look better, they likely do more harm than good at high fidelity
fab
2021-08-28 11:31:57
in fact the text is blurry
2021-08-28 11:32:03
it is almost a pain
_wb_
2021-08-28 11:32:49
Epf might blur some text, I dunno (it is supposed to preserve edges but I don't know how foolproof it is)
fab
2021-08-28 11:32:50
but the stars in the sky are good
2021-08-28 11:32:55
not even artifacts
2021-08-28 11:33:08
i don't think it is quality
2021-08-28 11:33:16
or decent enough
2021-08-28 11:33:29
because they are not default settings
2021-08-28 11:33:32
don't know
2021-08-28 11:35:15
the compression also isn't extraordinary for some photos
2021-08-28 11:35:22
is a nightly build
2021-08-28 01:34:30
or d 0.478 s 6 faster decoding 1 could be enough
2021-08-28 01:36:32
the current encoder uses the ram well
2021-08-28 01:36:37
it uses more ram
2021-08-28 01:37:25
i don't think it is optimized for print or real usage for people that don't have the ram
2021-08-28 01:37:49
but for my needs it is enough
2021-08-28 02:58:01
Not good enough
Jyrki Alakuijala
2021-08-29 09:03:57
subpixel-LCD-rendered text was not a huge priority, and I decided against weighting it in the benchmarks leading to JPEG XL's design decisions
2021-08-29 09:04:29
I consider that in the future the need for subpixel LCD text will be removed
2021-08-29 09:05:29
also, subpixel LCD stuff is commonly not something that can be shared from display to display -- the pixel order is different in different displays -- so it makes more sense as a graphics trick than a first class thing in an image format
2021-08-29 09:06:25
my (perhaps false) expectation is that 4k and 8k monitors will make lcd subpixels much less interesting, and it is already not used on (all?/most?) mobile phones
_wb_
2021-08-29 09:06:59
Subpixel rendering also doesn't work if you want to rotate your phone
Jyrki Alakuijala
2021-08-29 09:14:12
I just think it is not the right thing to focus a lot on for an experience
2021-08-29 09:14:29
butteraugli and the other modeling we do consider that all pixels are in line
2021-08-29 09:15:09
we could resample pixels for a display's subpixel style and orientation
2021-08-29 09:15:23
but there is not much benefit with the increasing pixel density
incredible_eagle_68175
2021-08-30 05:38:52
Next project of mine will be integrating jxl. I think it’s ready for prime time now.
_wb_
2021-08-30 05:54:40
integrating it in what?
diskorduser
Next project of mine will be integrating jxl. I think it’s ready for prime time now.
2021-08-30 05:54:47
Wow discordapp with jxl support
_wb_
2021-08-30 05:55:20
in discord? <:WhatThe:806133036059197491>
diskorduser
2021-08-30 05:55:32
His name discordapp
incredible_eagle_68175
2021-08-30 06:07:44
My new website
2021-08-30 06:08:28
When the part with jxl is deployed I'll post the link to the part that uses it. But I do hear that discord is also looking at it from my sources, but not “really”
2021-08-30 06:18:18
The numbers so far are impressive, about 500 KB to a megabyte for each archived photo converted from JPEG to jxl
Justin
2021-08-30 06:20:55
Hey there! Happy to share some progress on the JXL bulk converter using WASM for clientside conversions. Release sometime this year on jpegxl.io. Parameters and UI are WIP. 🙂
2021-08-30 06:21:36
Super excited to bring this to a live version
incredible_eagle_68175
Justin Hey there! Happy to share some progress for the JXL bulk converter using WASM for clientside conversions. Release sometime this year on jpegxl.io. Parameters and UI is WIP. 🙂
2021-08-30 06:28:14
Wonderful
lithium
2021-08-30 06:59:58
Hello <@!532010383041363969> sorry for the ping. About AVIF scoring better than JXL on DSSIM: this also happens on my drawing content. For some images at high quality (jxl d1.0~0.5) and similar file size, DSSIM says AVIF is better than JXL, but maxButteraugli says AVIF has big errors, while Butteraugli 3-norm or 9-norm say AVIF and JXL both have good quality (near 0.5). (And sometimes maxButteraugli_jxl and maxButteraugli_old give different assessments.) > https://encode.su/threads/3397-JPEG-XL-vs-AVIF?p=70690&viewfull=1#post70690 And about libjxl pr403: I don't understand why, after the 'opt2' version, jxl:d0.8:epf0 gets maxButteraugli 2.81184077. In theory a high maxButteraugli score means the compressed image has a big error, but this PR removes annoying ringing artefacts. I understand the target distance is just a target, but getting maxButteraugli 2.8 at d0.8 seems too risky to me. (VMAF also has some special issues: enabling some psy parameters on aomenc can increase visual quality but reduce the VMAF score.) > https://github.com/libjxl/libjxl/pull/403
Traneptora
2021-08-30 07:24:17
I remember back in the FLIF days, there was a flif.js that allowed you to render a FLIF image on a canvas
2021-08-30 07:24:34
but since FLIF was very slow to decode (unlike JXL), there was very little incentive to use it
2021-08-30 07:25:03
since the time you'd save from lower bandwidth consumption was usually eaten by the browser munching on JS
2021-08-30 07:25:25
I'm wondering if there's a similar jpegxl.js you can use now, since jpegxl is actually very fast to decode
_wb_
2021-08-30 07:27:38
haven't seen anything easy to use yet, but there certainly are js/wasm builds, e.g. squoosh.app (which has both encoder and decoder)
Traneptora
2021-08-30 07:30:21
By the way, if you do a lossless jpeg transcode, is there a way to revert losslessly?
_wb_
2021-08-30 07:30:24
some minimal wasm decoder would be nice to have - I don't think polyfilling is a very good solution in most production settings, but it could be good enough for some use cases, especially if users can just enable the flag to get native decoding
Traneptora
2021-08-30 07:30:57
i.e. take a jxl that was originally from a JPEG and losslessly turn it *back* into a jpeg, assuming it doesn't use any new features like true lossless compression
_wb_
Traneptora By the way, if you do a lossless jpeg transcode, is there a way to revert losslessly?
2021-08-30 07:31:08
yes, `cjxl in.jpg tmp.jxl; djxl tmp.jxl out.jpg` and in.jpg should be bit-identical to out.jpg
Traneptora
2021-08-30 07:31:24
bit-identical in image data or bit identical in file data?
_wb_
2021-08-30 07:31:29
bit identical in file data
2021-08-30 07:31:47
if you only care about image data, you can do `cjxl in.jpg tmp.jxl -strip` and save a few more bytes
Traneptora
2021-08-30 07:33:35
So, if I have a webserver, I can store jxl internally, and scan the user-agent header. If the user-agent header is known to support jxl then I can just provide the jxl, but if the header is not on a whitelist, I can decode to jpg on the fly. Not sure how that works with accept-ranges though.
2021-08-30 07:34:10
Although it might be more worth it in this case to just keep two copies. trade disk space for cpu cycles
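A minimal sketch of that negotiation (hypothetical helper, not from any real deployment; it keys off the Accept header rather than a User-Agent whitelist, since browsers with jxl enabled typically advertise `image/jxl` there):

```python
# Sketch: pick the stored .jxl when the client advertises support,
# otherwise fall back to a JPEG (decoded on the fly or served from cache).
# Names and logic are illustrative assumptions, not a real server's API.
def pick_variant(accept_header: str, have_jxl: bool) -> str:
    accepts_jxl = "image/jxl" in accept_header.lower()
    if have_jxl and accepts_jxl:
        return "image.jxl"
    return "image.jpg"  # fallback path: djxl tmp.jxl out.jpg, ideally cached
```

The two-copies trade-off discussed above then reduces to whether the `.jpg` branch hits a cache or a converter.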
_wb_
2021-08-30 07:34:27
yes, it all depends on storage vs cpu considerations
Traneptora
2021-08-30 07:34:37
but this way the browser itself would have no polyfill JS code
_wb_
2021-08-30 07:35:46
on heavy traffic sites you'll certainly want to keep both and not convert on the fly all the time, but maybe only in a memdisk cache, for example
Traneptora
2021-08-30 07:36:31
well in this case it would make sense to cache it
2021-08-30 07:36:39
if you're gonna convert on the fly
2021-08-30 07:36:40
no question
_wb_
2021-08-30 07:37:29
yes, mozjpeg + jxl could be a good transitional approach, save 20% in bandwidth for those who support it, save storage on the long tail of rarely requested images, no need to worry about consistency of quality or generation loss
Traneptora
2021-08-30 07:37:57
You mean, source(png, whatever) -> cjpeg -> cjxl?
_wb_
2021-08-30 07:38:04
though of course jxl from pixels does give better compression than jxl limited to dct8x8
2021-08-30 07:38:21
yes, or if you have jpegs already around
Traneptora
2021-08-30 07:38:37
well if I already have jpegs, mozjpeg won't help, will it?
2021-08-30 07:38:40
unless you mean jpegtran
_wb_
2021-08-30 07:38:56
yes, mozjpeg's jpegtran can still help a bit
2021-08-30 07:39:06
or maybe you already used mozjpeg in the past 🙂
Traneptora
2021-08-30 07:39:18
I use jpegtran a lot
2021-08-30 07:39:41
though, I thought encoding to jpeg using mozjpeg rather than with libturbo-jpeg was basically a given
2021-08-30 07:40:34
I actually swapped out my system libjpeg for mozjpeg even though they recommend against it, since for my purposes "your system libjpeg" is only for decoding, and mozjpeg is just libjpeg-turbo on the decode side
_wb_
2021-08-30 07:50:25
yes, libjpeg-turbo is kind of silly to use as an encoder, its defaults are very fast but quite bad
Traneptora
2021-08-30 07:51:02
in other news, has `libjxl` finally received a `make uninstall`
2021-08-30 07:51:42
it bothers me when makefiles don't provide an `uninstall` with an `install` because it can cause issues if I decide to move the installation from `/usr/local` to a different prefix or if I just want to start tracking it with my system package manager
2021-08-30 07:51:51
I have to go and manually clean up all the files installed
Fraetor
2021-08-30 07:52:54
~~rm --no-preserve-root -rf /~~
_wb_
2021-08-30 07:53:44
maybe open an issue about it?
Traneptora
2021-08-30 08:10:55
I don't actually know if it ever got one or not
2021-08-30 08:10:59
if not I can file an issue
fab
2021-08-30 08:16:06
no mozjpeg no gimp plugin
2021-08-30 08:16:11
i want just cjxl
Traneptora
2021-08-30 08:20:43
``` Quality setting (is remapped to --distance). Range: -inf .. 100. ``` how is it remapped? is it a log scale? linear? etc.
2021-08-30 08:23:33
`d = log2(101-Q)`? etc.
2021-08-30 08:26:59
`d = 15 * (1 - 1/(101-Q))`?
fab
2021-08-30 08:28:58
from -500 to 0 to 7 modular
2021-08-30 08:29:06
from 7 and up is vardct
2021-08-30 08:29:08
d 15
Traneptora
2021-08-30 08:29:14
that does not answer my question at all
fab
2021-08-30 08:29:26
from 0 to 7 q is modular
2021-08-30 08:29:31
after is vardct
Traneptora
2021-08-30 08:29:36
that does not answer my question at all
2021-08-30 08:29:45
and repeating your answer will still not make it answer my question
fab
2021-08-30 08:29:58
ah the algebra it uses
2021-08-30 08:30:03
i do not know
2021-08-30 08:30:06
ask developers
2021-08-30 08:30:10
dev of jxl
Traneptora
2021-08-30 08:30:10
that's literally what I'm doing
2021-08-30 08:30:17
by asking a question in <#794206170445119489>
2021-08-30 08:30:38
If I specify `-Q` instead of `-d`, the docs say it converts the value of `Q` to a corresponding value of `d` but it doesn't say how exactly
2021-08-30 08:30:54
as far as I'm aware, `-Q` is mostly there to be backwards compatible with people who know how jpeg quality settings work
improver
2021-08-30 08:34:37
i have simply blocked fab at this point lol
Traneptora
2021-08-30 08:35:03
he appears to be one of those
2021-08-30 08:35:25
unfortunately common in software spaces, the sort to just ramble about anything and everything regardless of what was actually asked
Scope
2021-08-30 08:39:21
https://discord.com/channels/794206087879852103/803645746661425173/818396672081788958
2021-08-30 08:42:06
Also `-q` and `-Q` ``` -q QUALITY, --quality=QUALITY Quality setting (is remapped to --distance). Range: -inf .. 100. 100 = mathematically lossless. Default for already-lossy input (JPEG/GIF). Positive quality values roughly match libjpeg quality. -Q luma_q[,chroma_q], --mquality=luma_q[,chroma_q] [modular encoding] lossy 'quality' (100=lossless, lower is more lossy)```
Traneptora
2021-08-30 08:43:14
ah, it's almost piecewise linear
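For reference, the mapping in current libjxl's API (`JxlEncoderDistanceFromQuality`) is indeed piecewise: linear down to q=30, quadratic below that. Note this is the present-day formula and may not exactly match the 2021-era cjxl being discussed here; a sketch in Python:

```python
def quality_to_distance(q: float) -> float:
    """Quality -> Butteraugli distance, per current libjxl
    (JxlEncoderDistanceFromQuality); may differ from 2021-era cjxl."""
    if q >= 100:
        return 0.0  # mathematically lossless
    if q >= 30:
        return 0.1 + (100 - q) * 0.09  # linear branch: q=90 -> d=1.0
    # quadratic branch below q=30; continuous with the linear branch at q=30 (d=6.4)
    return 53.0 / 3000.0 * q * q - 23.0 / 20.0 * q + 25.0
```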
2021-08-30 08:43:20
and yes I meant `q` not `Q`
2021-08-30 08:43:26
> - cjxl -q 90 (-d 1): high fidelity, can be used on the web if fidelity matters more than usually, e.g. product images of a webshop selling clothes with subtle textile textures
2021-08-30 08:43:38
normal cjpeg with quality 90, to me, is garbage
2021-08-30 08:44:01
which means that if `-d 1` is "visually lossless" then it *has* to be better than cjpeg quality 90 which is very obviously not visually lossless
2021-08-30 08:44:33
`q=90` jpegs will give you horribly obvious ringing
Cool Doggo
2021-08-30 08:44:50
it is never claimed to be an exact match with cjpeg
2021-08-30 08:45:06
it is just to give a similarity
Traneptora
2021-08-30 08:46:01
yea, but I would never touch q=90 jpegs with a 10-foot pole unless they're photographs
_wb_
2021-08-30 08:46:07
cjpeg quality 90 in recent mozjpeg is at least 444
2021-08-30 08:46:18
in libjpeg it always defaults to 420
Traneptora
2021-08-30 08:46:25
I'm talking mozjpeg, yea
2021-08-30 08:47:20
the problem here is that there's a lot of "visually lossless" png prefiltering going on that really is visually lossless, and I'm worried that someone will see "visually lossless" and won't realize that it simply does not work that way for synthetic images
2021-08-30 08:47:27
photographs, sure
_wb_
2021-08-30 08:48:33
d1 is aiming to be visually lossless at a viewing distance of 1000 pixels
2021-08-30 08:49:03
if you zoom in or get closer to the screen, everything that is not fully lossless at some point becomes visually lossy
2021-08-30 08:50:25
for synthetic images it's probably safer to do fully lossless anyway to be sure, or something like d0.3 if you need to do lossy
2021-08-30 08:52:14
results will also vary between encode speed settings, e.g. at speed 7 and up patches are used, which could help to avoid ringing at text, and at speeds 8-9 the encoder does butteraugli iterations to make sure the adaptive quantization is actually visually OK
Traneptora
2021-08-30 08:52:20
with testing, I've found that `cjxl -d 1` is much better than `cjpeg -q 90` for synthetic imagery
2021-08-30 08:52:27
which is the protest I'm making
2021-08-30 08:52:43
`cjpeg -q 90` is not visually lossless by any means on any content
_wb_
2021-08-30 08:53:13
there are too many different definitions of visually lossless
Traneptora
2021-08-30 08:53:50
that's the thing, but to me, an untrained eye looking at my monitor at an ordinary distance, `cjpeg q=90` is always very clearly different from the source
2021-08-30 08:53:56
which is not the case for `cjxl -d 1`
2021-08-30 08:54:43
maybe it's because it's more noticeable to me? but I see this kind of crud and cringe
2021-08-30 08:54:44
2021-08-30 08:54:54
the fuzziness in the blue
_wb_
2021-08-30 08:55:08
I think we prefer to err on the side of looking better than expected rather than worse than expected
Traneptora
2021-08-30 08:55:16
yea, that makes sense
2021-08-30 08:55:22
this is from cjpeg q=90
2021-08-30 08:55:52
I'm just pointing out that I think some other people might go "ew, `cjpeg q=90` sucks" and then think that it means `d=1` sucks as well, which is not the case
_wb_
2021-08-30 08:56:05
yeah colored details like that, even without 420, will get trouble in cjpeg because the luma and chroma quant tables don't match
Cool Doggo
2021-08-30 08:56:16
comparing qualities between two formats is silly
2021-08-30 08:56:25
they are different formats, they will work differently
Traneptora
Cool Doggo comparing qualities between two formats is silly
2021-08-30 08:56:40
I'm specifically discussing it since jon said "q=X roughly matches standard libjpeg"
2021-08-30 08:56:43
that's the issue
Cool Doggo
2021-08-30 08:56:53
many changes have been made
Traneptora
2021-08-30 08:57:05
I mean it still says that in `--help`
_wb_
2021-08-30 08:57:13
the -q version of the encoder config was added because people expect that to be there
2021-08-30 08:57:28
we are trying to popularize the -d way of thinking about it
Traneptora
2021-08-30 08:57:32
so it's basically just for backwards compat and if you know that `-d 1` works then I should just use that and just simply not worry about it?
_wb_
2021-08-30 08:58:09
the thing is that jpeg will give you wildly different perceptual qualities at any q=X setting
2021-08-30 08:58:18
while cjxl is more consistent
Traneptora
2021-08-30 08:58:27
yea, I suppose. I already like the `-d` since it roughly corresponds to the same concept in video encoding
Scope
2021-08-30 08:58:33
Moreover, `cjpeg -q` is not a quality indicator, because no visual metric for a particular image is used to determine quality; it is just a quantizer setting
_wb_
2021-08-30 08:58:47
for photos, cjpeg q90 can be just fine and correspond to d1
2021-08-30 08:59:07
for nonphoto, cjpeg q90 can be pretty lossy and be more like d3
Traneptora
2021-08-30 08:59:12
Yea, I prefer the `-d` anyway. For example, x264 provides `--crf` which is basically like the way we treat `-d` but on a different scale
2021-08-30 08:59:38
`--crf 0` is mathematically lossless\* but `--crf X` for positive X is a quality setting, higher number is lower quality
_wb_
2021-08-30 08:59:45
the video codecs don't really have perceptual targets though, it's just the quantizer setting
veluca
Traneptora So, if I have a webserver, I can store jxl internally, and scan the user-agent header. If the user-agent header is known to support jxl then I can just provide the jxl, but if the header is not on a whitelist, I can decode to jpg on the fly. Not sure how that works with accept-ranges though.
2021-08-30 08:59:50
https://github.com/libjxl/libjxl/pull/56 this might be interesting for you 😄 don't rely too much on it though! (I believe in my own website I ended up doing the simpler trick described in prepare_folder.sh)
Traneptora
2021-08-30 08:59:54
They absolutely do, if written well
2021-08-30 09:00:09
`--crf` is a perceptual quality setting in x264, which is based on the qcomp curve
_wb_
2021-08-30 09:01:11
well x264 does do perceptual optimization, but is there any guarantee that say --crf 20 will give a similar result on any content?
Traneptora
2021-08-30 09:01:24
that's the design
2021-08-30 09:01:42
--crf is content-aware
2021-08-30 09:01:56
You can use CQP (constant quantization parameter) in x264, which fixes a particular quantizer, but using CRF dynamically chooses the quantization parameter based on a psychovisual model (allowing it to do things like quantize darker areas less)
_wb_
2021-08-30 09:02:29
i don't know enough about x264 and how fancy its visual modeling is, maybe you're right
Traneptora
2021-08-30 09:02:33
this is affected by the qcomp curve. `qcomp` is a setting (defaulting to 0.6) which affects how `crf` behaves
_wb_
2021-08-30 09:03:19
I know that in avif, the perceptual quality I get from a given q setting is pretty image dependent
Scope
2021-08-30 09:03:25
Even `--crf` can be noticeably different in metrics or visually for different content; something similar for video exists in the form of `--target-quality` in a third-party utility, Av1an <https://github.com/master-of-zen/Av1an>
Traneptora
2021-08-30 09:03:57
the issue I've found is that nothing has managed to replicate x264's model very well
2021-08-30 09:03:59
x265 has not
2021-08-30 09:04:10
crf only makes promises at the exact same settings though
2021-08-30 09:04:26
different content encoded with identical settings should produce the same quality at the same crf
2021-08-30 09:04:40
but even minor setting changes can alter it
2021-08-30 09:07:00
`qcomp = 1.0` makes `crf` behave like `cqp`, spending bits even where its model considers them unnecessary. The way crf works, it starts from a baseline of constant qp and then raises qp where the extra bits are unnecessary according to its model: at `qcomp=1.0` it considers them always necessary, and at `qcomp=0.0` never necessary (essentially giving almost CBR)
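A toy model of what `qcomp` does to the quantizer curve (simplified from the complexity term in x264-style rate control; the real code also applies a rate factor and clipping, which this sketch omits):

```python
def qscale_from_complexity(blurred_complexity: float, qcomp: float = 0.6) -> float:
    # x264-style relation: qscale ~ complexity^(1 - qcomp).
    # qcomp=1.0 -> exponent 0: constant qscale regardless of complexity (CQP-like).
    # qcomp=0.0 -> qscale tracks complexity linearly: near-constant bitrate.
    return blurred_complexity ** (1.0 - qcomp)
```

At the default 0.6, complex frames get a higher quantizer than simple ones, but not proportionally so.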
Scope
2021-08-30 09:07:03
Yep, x264 had good devs and was hand-tuned for years for this purpose, but still, it's not the same visual quality for all content
Traneptora
Scope Yep, x264 had good devs and was hand-tuned for years for this purpose, but still, it's not the same visual quality for all content
2021-08-30 09:07:17
although that's mostly because you tend to use different settings
2021-08-30 09:07:24
crf makes no promise to work across presets and settings
2021-08-30 09:08:01
you can also tune it to psnr or ssim, which optimizes against those metrics, but you usually do not want to do that. I don't think tune vmaf was ever added
veluca
2021-08-30 09:08:09
in theory `-d` should do a good job at getting good, consistent quality - the corresponding `-q 90` parameter was chosen because on photographic content mozjpeg usually does a good enough job at that quality (on average), but it was never intended to mean "in all cases you're getting similar results"
Traneptora
2021-08-30 09:08:45
ye, that's what it sounds like, but that's more a problem with mozjpeg -q 90 not being consistent
2021-08-30 09:08:54
than cjxl not being consistent
_wb_
2021-08-30 09:09:18
basically any lossy image codec is not consistent
veluca
2021-08-30 09:09:32
I don't know how much tuning for vmaf is a good idea, as a metric it has a few... issues... like being able to return >100, not returning the same thing for comparing two identical images, and increasing scores by just pre-sharpening the input file before compression...
Traneptora
2021-08-30 09:09:37
something I am wondering is what the best default is for true lossless jxl
2021-08-30 09:09:52
does it actually use a separate mode (like, say, webp) or is it just setting the Q to 0
veluca
2021-08-30 09:09:58
separate mode
2021-08-30 09:10:10
*very* separate mode
_wb_
2021-08-30 09:10:11
it's a separate mode (though that mode is also used in parts of lossy coding)
Traneptora
2021-08-30 09:10:37
ah, so a lossy jxl could, say, take an allblack section and make it lossless on a block-by-block basis?
veluca
2021-08-30 09:10:44
among other things, it's a fair bit slower, which is something we should work on
Traneptora
2021-08-30 09:11:07
yea, that's the big thing I've noticed, is that cjxl -d 0 is very slow
veluca
2021-08-30 09:11:10
no, that cannot (easily) be done - but you could put the quantization so low that it's basically equivalent
Fraetor
2021-08-30 09:11:38
Are patches where it decides to use a different encoding setting/mode for a specific part of the image?
_wb_
2021-08-30 09:11:44
we can mix lossy and lossless using patches or multiple layers
veluca
2021-08-30 09:11:54
not necessarily *just* that
_wb_
2021-08-30 09:12:09
but every frame is either modular mode or vardct mode
Fraetor
2021-08-30 09:12:30
What is a frame in the context of JXL? Is that animation frames?
Scope
Traneptora ye, that's what it sounds like, but that's more a problem with mozjpeg -q 90 not being consistent
2021-08-30 09:12:43
Yes, but I mean consistent quality indicators, such as metrics can provide: for example, VMAF 95 is always good-enough quality, no matter the content, settings and other things (although VMAF is not perfect either and can be cheated, it is still more consistent than something simpler)
_wb_
2021-08-30 09:13:09
vardct still does adaptive quantization so you can effectively have different qualities in different regions
Traneptora
2021-08-30 09:13:13
for archival is there anything to do other than `cjxl -d 0 -e 9 input.png output.jxl`
2021-08-30 09:13:24
like beyond that would you recommend any settings beyond the default?
2021-08-30 09:14:00
I know for ffv1 video archival you need to use `-c:v ffv1 -level:v 3 -g:v 1 -slicecrc:v 1 -coder:v range_tab` in the ffmpeg CLI cause the defaults are not good
2021-08-30 09:14:24
for example it doesn't default to `-g 1`
_wb_
2021-08-30 09:15:44
if you really want best lossless and time is no concern, `-d 0 -e 9 -E 3 -I 1` and then choosing best of `-g {0,1,2,3}` and also try `-d 0 -e 9 -I 0 -P 0 -g 3 --patches 0` in case it's one of those where png is hard to beat
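The search described above can be scripted; a sketch with a hypothetical helper that only builds the command lines from that recipe (it does not invoke cjxl, and the "pick the smallest output" step is left to the caller):

```python
def lossless_cjxl_commands(inp: str, out: str) -> list[list[str]]:
    """Build the candidate cjxl invocations for best-effort lossless:
    -E 3 -I 1 with -g in {0,1,2,3}, plus the -I 0 -P 0 --patches 0
    variant for images where png is hard to beat."""
    base = ["cjxl", inp, out, "-d", "0", "-e", "9"]
    cmds = [base + ["-E", "3", "-I", "1", "-g", str(g)] for g in (0, 1, 2, 3)]
    cmds.append(base + ["-I", "0", "-P", "0", "-g", "3", "--patches", "0"])
    return cmds
```

Run each candidate to a temp file and keep the smallest result.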
Scope
_wb_ we can mix lossy and lossless using patches or multiple layers
2021-08-30 09:15:59
Also, what about the previously mentioned idea for that experiment: make the background not black but something average, closer to the dominant or real background color? Perhaps then these border artifacts would be unnoticeable
veluca
2021-08-30 09:17:13
if you're OK with "slightly lossy, but I challenge you to notice it", I suggest something like `cjxl -d 0.1` or `cjxl -d 0.1 --gaborish=0` (the first one will make smaller files, but has the disadvantage of approximating an inverse convolution, which is not done perfectly - so you'll occasionally be able to see one or two pixels that are different)
_wb_
Scope Also, what about the previously mentioned idea for that experiment to replace the background is not black, but something average, closer to the dominant or real background color, perhaps then these border artifacts will be unnoticeable?
2021-08-30 09:17:26
I tried something like that but it was bad for compression - I think it's probably better not to use two full frames but to just use more patches (also for non-repeated nonphoto parts)
Scope
2021-08-30 09:18:33
Will the individual patches also compress noticeably worse than one large patch, or is the difference not as significant?
_wb_
2021-08-30 09:19:57
The patches are all thrown together in a big sprite sheet anyway
2021-08-30 09:21:05
We should have a debug option to see the patch frame, it should be fun to see
2021-08-30 09:21:37
(well except that we usually encode white as 0 there and black as something negative, so that doesn't quite work in png)
Scope
_wb_ if you really want best lossless and time is no concern, `-d 0 -e 9 -E 3 -I 1` and then choosing best of `-g {0,1,2,3}` and also try `-d 0 -e 9 -I 0 -P 0 -g 3 --patches 0` in case it's one of those where png is hard to beat
2021-08-30 09:22:39
Also `--palette=0` / `--palette=10000`, because for now palette selection is not always accurate. And as I mentioned before, `-d 0 -e 9 -I 0 -P 0 -g 3 --patches 0` is rarely more effective than any other option without `-I 0`
Traneptora
_wb_ if you really want best lossless and time is no concern, `-d 0 -e 9 -E 3 -I 1` and then choosing best of `-g {0,1,2,3}` and also try `-d 0 -e 9 -I 0 -P 0 -g 3 --patches 0` in case it's one of those where png is hard to beat
2021-08-30 09:24:59
`-E` and `-I` are not documented in `cjxl --help --verbose`
Scope
2021-08-30 09:25:55
`--help -v -v`
Traneptora
2021-08-30 09:26:12
oh, thanks
2021-08-30 09:26:46
also what does `--gaborish` do?
2021-08-30 09:27:25
``` --gaborish=0|1 force disable/enable gaborish ```
2021-08-30 09:27:29
I don't know what that means
Scope
2021-08-30 09:28:58
Some kind of Loop filters
Traneptora
2021-08-30 09:29:24
btw, is the default single-thread encoding?
2021-08-30 09:29:31
unless I tell it otherwise
2021-08-30 09:30:02
or is the default cpucount
spider-mario
2021-08-30 09:30:40
it defaults to the cpu count, but at slower speed settings it doesn’t give that much of a speedup
Traneptora
2021-08-30 09:31:11
so I need to use `--num_threads=0` to disable threading?
spider-mario
2021-08-30 09:31:16
yes
Traneptora
2021-08-30 09:31:19
(I assume it'll produce an identical file)
spider-mario
2021-08-30 09:31:23
also yes
_wb_
2021-08-30 09:31:30
Yes, unlike avifenc
Traneptora
2021-08-30 09:31:41
In this case if I'm batch converting images I'd prefer to disable multithreading and just let each one run in a separate process
2021-08-30 09:31:51
since I trust my operating system to handle that better than I trust cjxl
2021-08-30 09:32:03
since lots and lots and lots of people care about optimizing the OS's ability to handle that
_wb_
2021-08-30 09:33:16
Sure, makes sense
Scope
2021-08-30 09:35:36
It's not even just the operating system: not everything can be fully threaded when encoding a single image, so if there are a lot of images it is usually more efficient to encode one image per thread (except for memory consumption)
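A sketch of that one-single-threaded-encode-per-worker batch approach (hypothetical helper names; actually running it assumes `cjxl` is on PATH, which this snippet does not require just to build the commands):

```python
from multiprocessing import Pool
import subprocess

def cjxl_cmd(path: str) -> list[str]:
    # --num_threads=0 disables cjxl's internal threading; per the discussion
    # above, the output file is identical either way.
    return ["cjxl", path, path + ".jxl", "--num_threads=0"]

def convert_one(path: str) -> int:
    return subprocess.run(cjxl_cmd(path)).returncode

def batch_convert(paths: list[str], workers: int = 8) -> list[int]:
    # One process per in-flight image; the OS scheduler balances cores.
    with Pool(workers) as pool:
        return pool.map(convert_one, paths)
```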
Traneptora
2021-08-30 09:38:44
entropy encoding is a biggie
veluca
2021-08-30 09:40:05
yup, especially in lossless mode
2021-08-30 09:40:16
in lossy things scale well enough... lossless, nah
Scope
2021-08-30 09:53:26
On Windows there are also problems with multithreading even for lossy, especially at slower speeds (although some older builds worked better)
veluca
2021-08-30 10:08:56
how so?
Scope
2021-08-30 10:14:09
I don't know, but on very old builds (maybe before FDIS) threads were fully loaded more often; after some changes it became worse, at least on Windows with the same content and speeds (I did not make careful comparisons, but it was noticeable). This also applies to lossless at fast speeds. Maybe there is some difference in Windows threading
2021-08-30 10:16:37
I have a mini-display on my keyboard that monitors core loads, CPU and other things and it's usually always noticeable to me
veluca
2021-08-30 11:42:23
mhhh... encoder or decoder?
2021-08-30 11:42:45
for lossless multithreading we definitely changed a lot in the encoder and it is very likely slower
Scope
2021-08-30 11:44:41
encoder