JPEG XL

on-topic

Whatever else

_wb_
2021-08-21 08:21:28
But the encoder will see that there are only 32 shades of R,G, and B, and it will make a channel palette
2021-08-21 08:22:27
I suppose you could also use `--override_bitdepth 5`, so it will store it as an actual 5-bit image and it only has to do channel palette on alpha then
2021-08-21 08:23:16
(in principle you can also do 5-bit rgb and 1-bit alpha but there is no cjxl option for that)
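The per-channel check _wb_ describes can be sketched from the outside; a minimal pure-Python illustration (not libjxl's actual heuristic) of detecting that an 8-bit channel really only uses 2^5 = 32 distinct values:

```python
def fits_channel_palette(channel, max_entries=32):
    """True if the channel's samples can be represented by a palette of
    at most `max_entries` distinct values -- e.g. 5-bit data (32 shades)
    stored in an 8-bit container."""
    return len(set(channel)) <= max_entries

# 5-bit data scaled into an 8-bit range: exactly 32 distinct shades
five_bit_channel = [round(v * 255 / 31) for v in range(32)] * 4
```

Such a channel is what `--override_bitdepth 5` would let the encoder store as genuine 5-bit samples instead of a palette.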
Scope
2021-08-21 06:50:19
https://bartwronski.com/2021/02/15/bilinear-down-upsampling-pixel-grids-and-that-half-pixel-offset/
2021-08-21 06:55:21
Hmm, maybe it is worth making a request to add this magic core to popular image manipulation utilities (although I don't know how good it is in practice) http://www.johncostella.com/magic/
ishitatsuyuki
2021-08-22 12:36:42
it's just an approximation of Gaussian I think
Jyrki Alakuijala
2021-08-23 08:49:41
Bart's blog post was actually a motivational factor to invest a bit more in downsampling right now
2021-08-23 08:50:53
Even though I used a similar system in my Ph.D. thesis in 2001 exactly for bilinear filtering and had been thinking about this for a long time, and my implementation in pik included an iterative refinement approach
2021-08-23 08:51:46
Bart is a great colleague -- he publishes a lot of creative and expressive stuff and is pleasant to work with
Scope
2021-08-23 03:24:11
Also an interesting upscaler based on RAISR (but only for GPU)
> RAVU (Rapid and Accurate Video Upscaling) is a set of prescalers inspired by RAISR (Rapid and Accurate Image Super Resolution).
<https://github.com/bjin/mpv-prescalers>
2021-08-23 03:24:14
https://artoriuz.github.io/blog/mpv_upscaling.html
2021-08-23 03:24:44
https://artoriuz.github.io/blog/images/mpv_upscaling/tables/fps.png
2021-08-23 03:24:49
https://artoriuz.github.io/blog/images/mpv_upscaling/fractional/tables/violet_frac_vif.png
diskorduser
2021-08-23 03:27:02
I've been using this for image upscaling for a while too. They are good
2021-08-23 03:27:36
FSRCNNX shaders look good for movies
Scope
2021-08-23 03:27:57
And https://github.com/JWSoh/NatSR
diskorduser
Scope And https://github.com/JWSoh/NatSR
2021-08-23 03:29:19
Have you tried it yet?
Scope
2021-08-23 03:30:36
No, just reading the paper. And it's probably not for use right now, but in the future I think something like this will be used more often, and neural upscalers will be more "natural"
diskorduser
2021-08-23 03:32:06
I see models and code in the readme
2021-08-23 03:32:14
They don't work?
2021-08-23 03:32:46
I see download links too
Scope
2021-08-23 03:33:56
It works, I mean, right now it's still not suitable to replace common upscalers (or rather, not everywhere)
diskorduser
2021-08-23 03:34:21
Ah. Okay.
Jyrki Alakuijala
2021-08-23 05:35:32
we played with neural image scalers in an 11x11 neighborhood (during pik development)
2021-08-23 05:35:56
in pik's requirements space it didn't work very well
2021-08-23 05:43:58
particularly we noticed that without side channel information a neural interpolator would occasionally create non-faithful weirdness
2021-08-23 05:44:54
when we finished that effort we thought that for fine quality target interpolation we would need to augment the neural stuff with side channel info, and it wasn't feasible to improve the resolution without it in the d1 category
ishitatsuyuki colorspace conversions involve transcendentals so they can easily become very slow
2021-08-23 05:49:35
Some of them do. XYB can be converted to linear light without complex stuff.
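For context on why XYB decoding is cheap: undoing the opponent mixing is linear, and undoing the "gamma" is a cube, which is a polynomial rather than a transcendental like pow/exp/log. A schematic sketch; the bias value and the mixing shown here are placeholders, not libjxl's real opsin parameters:

```python
# Illustrative only: the real matrix/bias live in libjxl's opsin params.
BIAS = 0.0038                     # placeholder absorbance bias
CBRT_BIAS = BIAS ** (1.0 / 3.0)

def xyb_to_mixed_linear(x, y, b):
    # Inverse opponent mix (schematic): X = (L'-M')/2, Y = (L'+M')/2, B = S'
    lp, mp, sp = y + x, y - x, b
    # Inverse "gamma" is just a cube -- no transcendental functions needed
    degamma = lambda v: (v + CBRT_BIAS) ** 3 - BIAS
    return degamma(lp), degamma(mp), degamma(sp)
```

By contrast, decoding a gamma-encoded space like sRGB needs a fractional `pow` per sample.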
_wb_
2021-08-24 07:54:18
https://youtu.be/aVwxzDHniEw
2021-08-24 08:32:24
2021-08-24 08:34:28
from https://link.springer.com/article/10.1007/s42979-021-00770-x
2021-08-24 08:36:06
We should really make a spline drawing tool that can eventually be integrated in authoring tools like photoshop and gimp
2021-08-24 08:38:12
Or rather inkscape and the like
2021-08-24 08:39:46
I think for line drawing, jxl splines (centripetal Catmull-Rom with varying sigma and color) are more powerful than what svg has to offer
2021-08-24 08:40:32
For filled shapes with gradients and stuff like that, of course we don't have that and svg does
2021-08-24 08:41:35
But a lot of vector art is mostly about the lines
2021-08-25 06:37:17
One thing I regret is that we don't have blend modes for splines
2021-08-25 06:37:24
It's always kAdd
2021-08-25 06:39:53
It would be useful to have kBlend (where alpha is given by the sigma thing)
2021-08-25 06:42:50
For self-intersecting curves, or just overlapping multiple curves, that could have been useful
2021-08-25 06:44:00
Also filled shapes could maybe have been painted with that (filling it with fat lines)
2021-08-25 06:45:09
Another thing is that splines only paint on the color image, not on extra channels
2021-08-25 06:45:37
For spot colors and alpha channels, splines could possibly have been useful too
2021-08-25 06:56:49
I guess I should have played with splines _before_ the bitstream was finalized 😅
2021-08-25 07:22:02
no idea - the encoder doesn't use splines at all atm
2021-08-25 07:23:16
but for authoring tools I imagine you'll often want curves to be drawn on top of the image in an opaque way, not arithmetically added to the image
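The kAdd vs. wished-for kBlend distinction above, as a toy per-sample sketch (names mirror the chat, not the spec):

```python
def k_add(dst, contribution):
    """What jxl splines actually do: the spline's contribution is added
    arithmetically to the image."""
    return dst + contribution

def k_blend(dst, color, alpha):
    """The wished-for mode: 'over' compositing, with the sigma-derived
    coverage acting as alpha."""
    return alpha * color + (1.0 - alpha) * dst
```

At a self-intersection of a full-coverage stroke of value 1.0, kAdd doubles to 2.0 while kBlend saturates at 1.0, which is why it would have helped overlapping curves.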
Deleted User
_wb_ I guess I should have played with splines _before_ the bitstream was finalized 😅
2021-08-25 12:38:53
JPEG X2L when?
w
2021-08-25 12:49:13
jpeg xxl
_wb_
2021-08-25 02:41:12
I don't really believe in "v2" versions of formats
diskorduser
2021-08-25 02:57:52
So jpeg-xxl then.
2021-08-25 02:59:02
😛
Jyrki Alakuijala
_wb_ For self-intersecting curves, or just overlapping multiple curves, that could have been useful
2021-08-26 06:27:14
I considered it carefully and decided against it
_wb_ Another thing is that splines only paint on the color image, not on extra channels
2021-08-26 06:28:15
That is something I don't remember thinking about -- consider it an oversight or at least not getting a lot of attention
_wb_ Also filled shapes could maybe have been painted with that (filling it with fat lines)
2021-08-26 06:29:34
I wanted to have the possibility to render text with web fonts on JPEG XL (to be able to cut-and-paste from images) -- I wasn't able to convince anyone else that that is a good idea so I dropped that fantasy
_wb_ I think for line drawing, jxl splines (centripetal Catmull-Rom with varying sigma and color) are more powerful than what svg has to offer
2021-08-26 06:31:48
I fell in love with Catmull-Rom splines in my second job where I built tools for radiation therapy, including the volume modeling and contouring tools
2021-08-26 06:32:25
Usually people use B-splines
_wb_
2021-08-26 06:32:41
Yes, centripetal Catmull-Rom is nicer
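For reference, evaluating a centripetal Catmull-Rom segment (the spline family jxl uses, per the chat) via the standard Barry-Goldman pyramid; a generic sketch, not libjxl's code:

```python
import math

def centripetal_catmull_rom(p0, p1, p2, p3, t, alpha=0.5):
    """Evaluate the centripetal (alpha=0.5) Catmull-Rom segment between
    p1 and p2 at t in [0, 1]."""
    knot = lambda ti, pa, pb: ti + math.dist(pa, pb) ** alpha
    t0 = 0.0
    t1 = knot(t0, p0, p1)
    t2 = knot(t1, p1, p2)
    t3 = knot(t2, p2, p3)
    u = t1 + t * (t2 - t1)  # map t onto the knot interval [t1, t2]

    def lerp(pa, pb, ta, tb):
        wa, wb = (tb - u) / (tb - ta), (u - ta) / (tb - ta)
        return tuple(wa * a + wb * b for a, b in zip(pa, pb))

    a1 = lerp(p0, p1, t0, t1)
    a2 = lerp(p1, p2, t1, t2)
    a3 = lerp(p2, p3, t2, t3)
    b1 = lerp(a1, a2, t0, t2)
    b2 = lerp(a2, a3, t1, t3)
    return lerp(b1, b2, t1, t2)
```

The centripetal parameterization (alpha = 0.5) is the variant that avoids cusps and self-intersections within a segment, which uniform Catmull-Rom and B-splines handle less gracefully.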
Jyrki Alakuijala
2021-08-26 06:32:57
We didn't measure but I intuitively thought that catmull-rom splines can be controlled with less entropy and with more prediction
_wb_
2021-08-26 06:33:13
Text rendering would be cool but it's a big can of worms and a big decoder/spec complication
Jyrki Alakuijala
2021-08-26 06:33:36
I think we made a mistake in not allowing for that
2021-08-26 06:33:53
it is weird for users that some text they can cut-and-paste and other text not
2021-08-26 06:34:16
and being able to cut-and-paste phone numbers, addresses etc is just nice
2021-08-26 06:34:25
also great for indexing and search
_wb_
2021-08-26 06:34:42
Cut and paste and rendering are two different things though
Jyrki Alakuijala
2021-08-26 06:34:49
of course machine learning will fix it eventually
_wb_
2021-08-26 06:34:58
Rendering is complicated, but just storing utf-8 strings is not
Jyrki Alakuijala
2021-08-26 06:35:30
depends on what is asked from rendering
_wb_
2021-08-26 06:35:33
We can have rasterized text layers where the layer name is the utf-8 string you can copy/paste
Jyrki Alakuijala
2021-08-26 06:35:52
or what is guaranteed
_wb_
2021-08-26 06:36:12
Layer names we have, the api even already exposes them
Jyrki Alakuijala
2021-08-26 06:36:38
and most computers showing images already know how to deal with text -- in a well-designed system it would not be additional complexity
_wb_
2021-08-26 06:37:05
It's only a matter of the UI inverting their alpha channel to simulate "you selected it" and you can do copy paste. Whole strings only though
Jyrki Alakuijala
2021-08-26 06:37:54
that would already be great
2021-08-26 06:38:31
of course a homogeneous UI experience is better than cut-and-pasting text from images working differently from cut-and-pasting from a text field -- the user doesn't necessarily see a difference
_wb_
2021-08-26 06:46:11
Other option would be some syntax convention to put some html+css in a layer name, which is supposed to look identical to the raster pixel data of that layer. Then viewers can just show the pixels, but browsers and smart viewers can act as if the html is there
Jyrki Alakuijala
2021-08-26 06:47:57
yeah, svg, html, or even just 'start a line of text from (x,y) with size 17 pts' without proper layout
_wb_
2021-08-26 06:49:01
We could also define a file format level box for such info, and use the brob mechanism to brotli it
Jyrki Alakuijala
2021-08-26 06:49:13
if the main use case would be to copy the address for navigation, phone number for calling, email address for contacting, then relatively simple things could have worked
_wb_
2021-08-26 06:49:47
If we define it as a jumbf box, we can also make it work for other formats like jpeg
eddie.zato
2021-08-26 06:49:51
DjVu with OCR layer 😄
Jyrki Alakuijala
2021-08-26 06:49:58
(-:
_wb_
2021-08-26 06:51:34
Images and video with hyperlinks on them are quite desired, "shoppable media" is what we call it
2021-08-26 06:52:09
Things are possible with js or other trickery, but having a native and standardized way of doing it would be neat
2021-08-26 06:53:14
E-commerce sites want to have images (or video) of a model wearing shoes, clothes, accessories, and you can click on those to get to their product page
2021-08-26 06:53:28
(that's why we call it "shoppable media")
2021-08-26 06:53:37
Of course the concept itself is more generic
2021-08-26 07:01:39
There are some security considerations (malicious links could be added to make literally viral images)
2021-08-26 07:02:13
Probably there are already existing approaches, I dunno if there's anything good
2021-08-26 07:03:02
But adding text and links to images and have it as embedded metadata as opposed to external side info, that's certainly cool and useful
2021-08-26 07:05:59
question is how expressive to make it - at some point it might be better to just use packaged html, or svg or pdf, with an embedded jxl, rather than jxl with embedded hypertext
Jyrki Alakuijala
2021-08-26 07:16:41
it would be nice if one could cut and paste text and click on urls from images and videos (including streaming, screensharing and presentations)
spider-mario
2021-08-26 08:15:29
mini-enigma: how easy is it to guess what this was an attempt at?
2021-08-26 08:17:23
I mean, the enigma is to guess it, not to estimate how easy it is to guess it
Sauerstoffdioxid
2021-08-26 08:18:01
Iso paywalls grouped by price?
spider-mario
2021-08-26 08:18:25
not quite, but nice attempt 😄 let’s say it’s a bit more directly jxl-related
veluca
spider-mario mini-enigma: how easy is it to guess what this was an attempt at?
2021-08-26 08:29:28
I probably know the answer 😛
_wb_
2021-08-26 08:30:59
a version of some plot drawing tool that produces a jxl with splines?
spider-mario
2021-08-26 08:31:30
uh, that would be an interesting project
2021-08-26 08:31:50
I hadn’t thought of this game as a potential idea generator
2021-08-26 08:35:04
what it was was an attempt at ||estimating the noise level in an image for use with `--photon_noise`||
2021-08-26 08:35:45
the graph is ||the computed probability density function for the parameter||
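One common quick-and-dirty way to estimate a noise sigma from pixels, shown purely for illustration (almost certainly not the estimator spider-mario actually built for `--photon_noise`):

```python
import statistics

def estimate_noise_sigma(img):
    """Rough Gaussian-noise sigma estimate: median absolute value of
    horizontal first differences, rescaled. MAD -> sigma uses the 0.6745
    normal quantile; sqrt(2) accounts for differencing two noisy samples.
    Assumes img is a list of rows of numbers."""
    diffs = [abs(row[i + 1] - row[i]) for row in img for i in range(len(row) - 1)]
    mad = statistics.median(diffs)
    return mad / (0.6745 * 2 ** 0.5)
```

Differencing suppresses smooth image content, so what survives is mostly noise; the median makes the estimate robust to edges.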
paperboyo
2021-08-26 09:37:28
Embedding an SVG (subset of spec for text) within JXL might have been pretty cool. Any other live text would seem to me a slippery slope, though (so much added complexity with fonts, shaping, rendering etc). I bet SVG part of JXL would be implemented by browsers in 2077 😜 .
_wb_
2021-08-26 10:12:25
SVG is basically html+css+javascript + some vector graphics
Jyrki Alakuijala
spider-mario mini-enigma: how easy is it to guess what this was an attempt at?
2021-08-26 10:31:52
(((if I had not known the answer I would have guessed 'visualizing the normal')))
Fraetor
_wb_ Images and video with hyperlinks on them are quite desired, "shoppable media" is how we call it
2021-08-26 02:20:30
Within a web context, that has been possible with the <map> element since HTML 3.2. https://developer.mozilla.org/en-US/docs/Web/HTML/Element/map
paperboyo Embedding an SVG (subset of spec for text) within JXL might have been pretty cool. Any other live text would seem to me a slippery slope, though (so much added complexity with fonts, shaping, rendering etc). I bet SVG part of JXL would be implemented by browsers in 2077 😜 .
2021-08-26 02:22:10
IMO this is out of scope for JXL. It would be better to do it the other way round and put a JXL inside of an SVG for this behaviour. https://developer.mozilla.org/en-US/docs/Web/SVG/Element/image
_wb_
Fraetor Within a web context, that has been possible with the <map> element since HTML 3.2. https://developer.mozilla.org/en-US/docs/Web/HTML/Element/map
2021-08-26 03:50:33
Sure, but that requires markup that is authored and maintained separately from the image resource. Doing this with just an img tag makes things more self-contained and easier to work with.
2021-08-26 03:52:12
jxl in svg sounds good to me, but then the svg spec needs to be updated to allow jxl (it currently only allows jpg, png and svg)
Scope
2021-08-26 03:53:35
Also https://svgwg.org/svg2-draft/
2021-08-27 05:26:55
https://twitter.com/zachleat/status/1430944481965182976
_wb_
2021-08-27 07:40:11
I wonder to what extent authoring tools could export such images to vardct+splines (+modular for non-spliny nonphotographic stuff)
improver
2021-08-28 12:37:47
i think it's really really old thing
2021-08-28 01:06:39
wait did i actually mess up that gdk pixbuf patch
2021-08-28 01:14:21
oh. empty pattern. because "\x00\x00\x00\x0CJXL \x0D\x0A\x87\x0A" starts with null character..
2021-08-28 01:14:48
now i see why they have designated flags for zero in their mask
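The pitfall above is that the container signature starts with NUL bytes, so anything treating the pattern as a C string sees it as empty. Byte-level sniffing sidesteps that; a small sketch (the container bytes are from the message above, and 0xFF0A is the bare-codestream signature):

```python
JXL_CONTAINER_SIG = b"\x00\x00\x00\x0cJXL \x0d\x0a\x87\x0a"
JXL_CODESTREAM_SIG = b"\xff\x0a"

def looks_like_jxl(header: bytes) -> bool:
    """Byte-wise signature check, immune to the NUL-terminates-the-pattern
    pitfall that bit the gdk-pixbuf and GIMP magic strings."""
    return (header.startswith(JXL_CONTAINER_SIG)
            or header.startswith(JXL_CODESTREAM_SIG))
```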
2021-08-28 01:25:39
https://github.com/libjxl/libjxl/pull/507 sorry lol
2021-08-28 01:30:41
https://github.com/GNOME/gimp/commit/2599d79292d04665569f958e4f46c288ff3f985b uhh wait what why
2021-08-28 01:34:19
nvm that's only for gimplegacy i think
2021-08-28 02:32:24
ok so anyway yeah gimp's loader stuff was also affected by the same thing..
2021-08-28 02:33:39
but gimp stuff is like undocumented as hell so i had to dig deep into source to figure out that they do their own escaping pass against pattern so that they can specify zeros (only octal though)
2021-08-28 02:34:08
why can't they just document this stuff properly :<
2021-08-28 02:46:49
but yeah should be fixed for real this time
2021-08-28 03:07:38
also yeah they will be changing API for gimp3 but i don't think it's worth caring about it now as whatever they have in git is subject to change still
2021-08-28 04:04:13
for some reason gimp still can't load jxl files when they're not named .jxl. i wonder if the patterns are not working right, or if they are only invoked when the input file ends with .jxl
2021-08-28 04:05:50
i think specifying extensions does that, yeah
2021-08-28 04:14:34
well, not really. but eh it works with extension so fine whatever
2021-08-28 05:09:13
ok i was trying to sleep (it's morning here already) and then i realized why it wasn't working
2021-08-28 05:10:18
gj brain. basically it interpreted a space the same way as `,` (it's also a separator) and probably ended up ignoring the whole signature
2021-08-28 05:19:11
yep. now it works with whatever extension
2021-08-28 05:19:49
so okay that PR now really totally fixes things and i can go to sleep peacefully
improver https://github.com/libjxl/libjxl/pull/507 sorry lol
2021-08-28 11:00:11
someone should maybe merge this or something.
veluca
2021-08-28 11:03:43
I approved it, I think you can merge it yourself now 😛
2021-08-28 11:03:54
if you can't... I'll do it
improver
2021-08-28 11:09:44
> Only those with write access to this repository can merge pull requests.
2021-08-28 11:09:48
so yeah
veluca
2021-08-28 11:14:00
ah, I thought you could once it was approved
2021-08-28 11:14:03
done
2021-08-28 11:14:17
(I generally don't blanket-merge before the author tells me it's ok :P)
fab
2021-08-29 10:49:47
Link not found
diskorduser
2021-08-29 10:59:13
https://github.com/danielgtaylor/jpeg-archive
spider-mario
2021-08-30 01:42:59
no, no need to refresh the package database so soon
2021-08-30 01:44:14
`-Syu` is just short for `-Sy` (refresh the database of the packages that are available in the repositories) + `-Su` (upgrade local packages to versions available in the repositories)
2021-08-30 01:44:39
ooh 😁 my bad
2021-08-30 01:44:50
then most likely yes, thanks for spotting
xiota
2021-08-30 05:14:30
Yes. "packman" is a typo. I'm not a native Arch user. I'll fix it soon...
veluca
2021-08-30 05:37:21
using the "native X user" expression in the same way you'd use "native X speaker" is a bit amusing 😄
Traneptora
2021-08-30 07:44:37
`-Syu` is a combination of `-Sy` followed by `-Su`
2021-08-30 07:45:26
`pacman -Sy` downloads the package list itself from the repos. `pacman -Su` runs a system upgrade based on the currently downloaded package list
2021-08-30 07:45:48
it's the equivalent of `apt-get update` and `apt-get upgrade` respectively, if you're familiar with APT.
2021-08-30 07:45:54
As for when to use them? that part is easy
2021-08-30 07:46:04
Never use `pacman -Sy` unless you're planning on immediately running `pacman -Su`
2021-08-30 07:46:24
`pacman -Syu` is usually what you want
2021-08-30 07:47:05
`pacman -Su` skips the `pacman -Sy` step, which can be handy if you, say, ran `pacman -Syu` and then hit Ctrl+C because you forgot something mid-package-download; then you can just run `pacman -Su` and skip that part since you did it already
2021-08-30 07:47:54
`pacman -Su` is almost only useful because you already did the `-Sy` part
2021-08-30 07:48:34
https://wiki.archlinux.org/title/System_maintenance#Partial_upgrades_are_unsupported
2021-08-31 01:00:24
if you want to see if updates are available without downloading them then yes you cannot use any form of `-Sy`, correct
2021-08-31 01:00:39
indeed
2021-08-31 01:01:16
but it appears you know the solution
2021-08-31 01:02:47
~~either that or you clicked the wiki link~~
2021-08-31 01:03:25
personally I do not ever do any package install stuff without calling `pacman -Syu` successfully first
Scope
2021-08-31 09:17:52
🤔 https://www.dpreview.com/news/0501469519/google-researchers-detail-new-method-upscaling-low-resolution-images-with-impressive-results
2021-08-31 09:19:14
<https://iterative-refinement.github.io/> <https://cascaded-diffusion.github.io/>
diskorduser
2021-09-01 05:14:53
Is it useful to build cjxl/djxl with PGO?
_wb_
Scope 🤔 https://www.dpreview.com/news/0501469519/google-researchers-detail-new-method-upscaling-low-resolution-images-with-impressive-results
2021-09-01 05:28:45
It is still laughable to use AI's plausible hallucinations for crime scene investigation. People who happened to be in the training set of that AI will be significantly more likely to have done the crime...
Scope
2021-09-01 05:34:15
Especially if this will be added to every smartphone and the resulting photos will be considered unprocessed
_wb_
2021-09-01 05:55:46
It feels so wrong that we've had 2-3 decades of megapixel races in digital cameras and display devices, and just now that we can achieve great fidelity, we also have appeal-prioritizing codecs and AI processing that can completely kill fidelity.
2021-09-01 06:02:10
Information that isn't there, isn't there. No amount of AI can magically create information out of thin air. It's only plausible interpolation.
2021-09-01 06:08:18
Suppose you could get convicted because an AI wrote incriminating text messages based on a log of your past texts.
2021-09-01 06:10:17
(btw that already kind of happens: people get kicked out of facebook and twitter all the time because AI translations of their non-English words violate the terms and conditions)
Scope
2021-09-01 06:29:31
Yes, and for smartphones these days the competition for photo quality is mostly about AI enhancements rather than improvements in actual hardware specs. For example, my friend's vacation photo with digital zoom was made on the new iPhone (and with each new generation the digital zoom gets bigger, and I think there is a strong urge by companies to make it more visually pleasing)
2021-09-01 06:30:57
Now it's like oil painting (but if there are chances to add details and make it look like a real resolution, I don't think any company will refuse it)
spider-mario
_wb_ It is still laughable to use AI's plausible hallucinations for crime scene investigation. People who happened to be in the training set of that AI will be significantly more likely to have done the crime...
2021-09-01 08:15:37
there was this gem in the comments:
> Why would a network trained to be the most accurate at detail reconstruction need to be diverse?
_wb_
2021-09-01 08:21:01
I think "detail reconstruction" is misleading wording to begin with. It's not reconstructing anything, it's inventing pixels in a way that is plausible w.r.t. the training set it saw.
Jyrki Alakuijala
2021-09-01 07:14:38
I think some things can be reconstructed plausibly
2021-09-01 07:15:06
for example one property of the world is that electromagnetic forces are required to keep atoms close to each other as objects
2021-09-01 07:15:25
objects have some homogeneity and occlusion properties
2021-09-01 07:16:00
if you can choose between a continuous line and a stream of pearls, perhaps the continuous line is a more feasible construct
2021-09-01 07:16:54
when the interpolator knows about the average teeth structure or the airspeed of an unladen African swallow, then it is too much for me
_wb_
2021-09-01 07:17:34
Sure. But making a face out of 5x5 pixels is never going to be "detail reconstruction", it will always be plausible invention, no matter how fancy the AI model is
2021-09-02 06:10:17
https://github.com/c2pa-org/public-draft
2021-09-02 06:10:37
Content Authenticity Initiative has a draft spec out
2021-09-02 06:10:58
It's a new metadata spec
2021-09-02 01:55:28
it's basically an attempt to counter fake media by making it possible to have cryptographically authenticated media, so e.g. websites can publish pictures where end-users can confirm the authenticity (i.e. linking them cryptographically to trusted sources)
2021-09-02 01:57:22
so e.g. if you see a picture of Trump doing something silly, you could check whether the source is some reputable press photographer, or someone/something less reputable (or more likely, the authenticity metadata is just missing)
2021-09-03 05:09:47
End users will be able to verify
2021-09-03 05:10:38
I think the idea is that it could become something like the lock icon on https pages
2021-09-03 05:11:41
You could check the certificate chain to see what the source of an image is
2021-09-03 05:12:40
It's then still up to you what you consider legitimate sources, that's of course a very interesting political question
2021-09-03 05:14:22
I hope the technology will not lead to more censorship. Like always, technology can both be used for good and for bad.
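The verify flow _wb_ sketches (hash the asset, check the signature against a source you trust) can be shown as a toy model. This uses HMAC as a stand-in for the X.509/COSE signatures real C2PA manifests use, and all names here are hypothetical:

```python
import hashlib
import hmac

def sign_asset(asset: bytes, signer_id: str, key: bytes) -> dict:
    """Produce a toy provenance manifest for an image's bytes."""
    digest = hashlib.sha256(asset).hexdigest()
    sig = hmac.new(key, digest.encode(), "sha256").hexdigest()
    return {"signer": signer_id, "sha256": digest, "sig": sig}

def verify_asset(asset: bytes, manifest: dict, trusted_keys: dict) -> bool:
    key = trusted_keys.get(manifest["signer"])
    if key is None:
        return False                 # unknown/untrusted signer
    if hashlib.sha256(asset).hexdigest() != manifest["sha256"]:
        return False                 # pixels changed after signing
    expected = hmac.new(key, manifest["sha256"].encode(), "sha256").hexdigest()
    return hmac.compare_digest(expected, manifest["sig"])
```

The political question from the chat lives entirely in `trusted_keys`: the crypto only tells you *who* signed, not whether to believe them.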
Scope
2021-09-04 03:49:56
https://twitter.com/h_okumura/status/1434008113594585092
BlueSwordM
2021-09-04 04:08:08
What is the issue Fabian 🤔
Traneptora
2021-09-05 05:33:13
the face depixelizer is great
2021-09-05 05:40:30
2021-09-05 05:40:36
Barack Obama -> Dan Smith
Fraetor
2021-09-05 10:35:13
I think there may have been some whitening in there as well. 😱
_wb_
2021-09-05 04:14:32
2021-09-05 04:15:16
It's probably just a colorspace thing and compression artifacts
2021-09-05 04:16:36
Like interpreting a P3 image in sRGB and then passing it through some generations of lossy compression
Jyrki Alakuijala
_wb_ It's probably just a colorspace thing and compression artifacts
2021-09-05 10:02:12
People like to attribute meaning and intent to artefacts of incompetence or random things. "Media wants X." where the only somewhat universal thing that journalists want is to drink coffee.
Fraetor
2021-09-05 10:35:51
Except for those that want to drink tea.
Daniel Smith
Traneptora Barack Obama -> Dan Smith
2021-09-06 03:32:35
Lmao I just joined and the first thing I read is about this AI tool... and my name is Dan Smith + I look like the output photo. Touché, stranger.
Traneptora
2021-09-06 03:34:17
looks to me like a full -> partial range issue
Daniel Smith Lmao I just joined and the first thing I read is about this AI tool... and my name is Dan Smith + I look like the output photo. Touche stranger.
2021-09-06 03:34:35
lmao. I got the name from a key and peele sketch
2021-09-06 03:34:54
https://www.youtube.com/watch?v=gODZzSOelss
Daniel Smith
2021-09-06 03:38:56
LMAO I can't believe I've never seen that
lithium
2021-09-07 02:37:47
Hello, I have a special case with manga (comic) content, could anyone teach me how to lossy-compress this image? This grayscale case doesn't compress very well in VarDCT mode or other DCT encoders; it looks like WebP lossless can produce a smaller file. If I want to do lossy compression with libjxl, how do I optimize the parameters for this case?
> original jpg grayscale 8 bit-depth q90-444 4441x6213 7,449,514
> jpg to png grayscale 8 bit-depth color-256 4,553,432
> jpg to jxl j-d1-e7 6,247,960
> jpg to jxl j-d1-e8 5,984,783
> jpg to jxl j-d2-e7 4,456,084
> jpg to jxl j-d2-e8 4,159,367
> png to jxl d1-e7 6,247,963
> png to jxl d1-e8 5,984,786
> png to jxl -m-q100-e7 5,921,998
> png to webp lossless 3,831,452
> png to webp near-lossless-40 3,914,916
> jpg to mozjpeg q90-444 7,294,452
> JPEG XL encoder v0.6.0 11e431d [AVX2,SSE4,Scalar] github build
2021-09-07 02:38:07
grayscale-png
2021-09-07 02:38:15
grayscale-jpg
Scope
2021-09-07 03:04:16
I don't think lossy will be better for this kind of content and if blind encoding with best mode selection is required, a good solution would be to wait for the improved `separate` option in JXL
lithium
2021-09-07 03:10:20
That makes sense, I agree the separate option would be very useful, but I worry it doesn't have high priority; we'll probably have to wait until 2022... 😢
mojovski
2021-09-07 03:11:24
are there python wrappers available for jpeg-xl?
lithium
mojovski are there python wrappers available for jpeg-xl?
2021-09-07 03:13:55
For now I still recommend using cjxl. > https://github.com/olokelo/jxlpy
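Shelling out to cjxl from Python is often enough while the wrappers mature; a thin sketch, assuming cjxl is on PATH (`-d` and `-e` are real cjxl flags, the rest is just convention):

```python
import subprocess

def cjxl_args(src, dst, distance=1.0, effort=7):
    """Build a cjxl command line: -d is Butteraugli distance
    (0 = lossless), -e is encoder effort."""
    return ["cjxl", str(src), str(dst), "-d", str(distance), "-e", str(effort)]

def encode_jxl(src, dst, **kwargs):
    """Run the encoder, raising CalledProcessError on failure."""
    subprocess.run(cjxl_args(src, dst, **kwargs), check=True)
```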
mojovski
lithium For now I still recommend using cjxl. > https://github.com/olokelo/jxlpy
2021-09-07 03:14:43
Thanks!
Deleted User
Scope I don't think lossy will be better for this kind of content and if blind encoding with best mode selection is required, a good solution would be to wait for the improved `separate` option in JXL
2021-09-07 03:24:19
That PR is unfortunately closed because apparently it needs to be reworked 🙁 https://github.com/libjxl/libjxl/pull/466
Scope
2021-09-07 03:27:07
Yep, but this experiment showed that it can work
lithium Hello, I have a special case with manga (comic) content, could anyone teach me how to lossy-compress this image? This grayscale case doesn't compress very well in VarDCT mode or other DCT encoders; it looks like WebP lossless can produce a smaller file. If I want to do lossy compression with libjxl, how do I optimize the parameters for this case? > original jpg grayscale 8 bit-depth q90-444 4441x6213 7,449,514 > jpg to png grayscale 8 bit-depth color-256 4,553,432 > jpg to jxl j-d1-e7 6,247,960 > jpg to jxl j-d1-e8 5,984,783 > jpg to jxl j-d2-e7 4,456,084 > jpg to jxl j-d2-e8 4,159,367 > png to jxl d1-e7 6,247,963 > png to jxl d1-e8 5,984,786 > png to jxl -m-q100-e7 5,921,998 > png to webp lossless 3,831,452 > png to webp near-lossless-40 3,914,916 > jpg to mozjpeg q90-444 7,294,452 > JPEG XL encoder v0.6.0 11e431d [AVX2,SSE4,Scalar] github build
2021-09-07 03:28:18
Also
fab
2021-09-07 03:29:15
i think with next build it will get good at
2021-09-07 03:29:34
2021-09-07 03:30:23
but do not know when next build will be out
2021-09-07 03:30:50
anyway it won't be a release
2021-09-07 03:30:59
a release before 2021-09-16 is unlikely
2021-09-07 03:31:20
and looking at the changelog i think we won't have a release in these 2 weeks
2021-09-07 03:31:46
libwebp2 got an important commit
lithium
Scope Also
2021-09-07 03:31:51
Cool, but e9 is too slow for big images for now, maybe we can convince the devs to increase the separate option's priority?
fab
2021-09-07 03:31:51
that improved psnr
2021-09-07 03:32:20
i can try when jamaika make a build
Scope
lithium Cool, but e9 is too slow for big images for now, maybe we can convince the devs to increase the separate option's priority?
2021-09-07 03:38:57
Yes, it's just the most effective compression for most black and white images (in my tests), something like `-s 9 -E 3 -I 1 -g 3` and maybe `--patches=0` (since they are not always effective on text, dots and stuff like that in manga/comics). I also noticed that `-g 3` is usually better on this content, unlike other image types. And, yes, a working separate option would be very useful
fab
2021-09-07 03:41:09
i care about compression
2021-09-07 03:41:11
not quality
2021-09-07 03:41:21
or compression for quality
2021-09-07 03:41:56
because i tend to improve images sometimes not upscale or change resolution but doing photon noise, sharpening
2021-09-07 03:42:19
so i want to be sure i don't inflate a lot of space
2021-09-07 03:42:36
while images don't look too much different
2021-09-07 03:43:15
2021-09-07 03:43:30
like this type of image i re encode and don't look worse
2021-09-07 03:43:34
i care about this
2021-09-07 03:43:47
not at pure absolute difference
2021-09-07 03:44:00
At compression
2021-09-07 03:44:23
for manga is different
2021-09-07 03:44:47
i have a lot of images where i can notice the artifacts, especially on a 10 bit xiaomi (SAMSUNG) display
2021-09-07 03:44:55
maybe they are not that revealing
2021-09-07 03:45:37
i do not use high brightness, they are not suited for HDR and even if i set the color to fidelity
2021-09-07 03:46:06
but at least i don't notice loss of quality going from 8 to 10 bit
2021-09-07 03:46:15
i do not care about all colors in the world
lithium
Scope Yes, it's just the most effective compression for most black and white images (in my tests), something like `-s 9 -E 3 -I 1 -g 3` and maybe `--patches=0` (since they are not always effective on text, dots and stuff like that in manga/comics). I also noticed that `-g 3` is usually better on this content, unlike other image types. And, yes, a working separate option would be very useful
2021-09-07 03:46:23
I haven't tried --patches=0 on pixel art compression, maybe disabling patches can compress better; for now I'm using pingo-pngcolor-100 + cjxl-lossless-e9-g3-E3-I1-C1.
fab
2021-09-07 03:46:37
because i can't distinguish and the size obviously will be higher
2021-09-07 03:46:52
or you use lossless or you use adequate settings
2021-09-07 03:47:23
there's no reason to create big files; even from raw i usually prefer an encoder that is smart
2021-09-07 03:47:35
so new heuristics are not adequate for that
Scope
lithium I haven't tried --patches=0 on pixel art compression, maybe disabling patches can compress better; for now I'm using pingo-pngcolor-100 + cjxl-lossless-e9-g3-E3-I1-C1.
2021-09-07 03:49:07
For images with a lot of very small similar elements on black and white images, it may be better to disable it, but for text it might be useful (there will be noticeably fewer artifacts around the letters)
lithium
Scope For images with a lot of very small similar elements on black and white images, it may be better to disable it; but for text it might be useful (there will be noticeably fewer artifacts around the letters)
2021-09-07 03:53:04
do patches also work on Chinese or Japanese text? maybe the cjxl separate option could handle other languages better.
Scope
2021-09-07 03:54:09
Language is not important, what matters is the repeatability of the same elements. However, it does not always find them very accurately
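Scope's point — that what matters is repetition of identical elements, not which script they come from — can be illustrated with a toy tile counter. This is a hypothetical sketch (`repeated_tile_count` is invented here); libjxl's real patch detection matches elements at arbitrary positions, not on a fixed grid:

```python
from collections import Counter

def repeated_tile_count(img_rows, tile=8):
    """Count how many tile-aligned blocks are exact repeats of an earlier block.

    A high count suggests patch-style coding could pay off; zero means every
    block is unique. Toy stand-in only, not libjxl's actual detector.
    """
    counts = Counter()
    h, w = len(img_rows), len(img_rows[0])
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            block = tuple(tuple(img_rows[y + dy][x:x + tile]) for dy in range(tile))
            counts[block] += 1
    return sum(c - 1 for c in counts.values())

# 16x16 checkerboard: four identical 8x8 tiles, i.e. three repeats
pattern = [[(x + y) % 2 for x in range(16)] for y in range(16)]
print(repeated_tile_count(pattern))  # → 3
```

The same function returns 0 for an image where every block differs, which is the "photographic" case where patches stop helping.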
lithium
2021-09-07 03:55:03
oh, I got it 🙂
fab
2021-09-07 03:56:10
2021-09-07 03:56:17
download this
2021-09-07 03:56:31
do not use those settings
2021-09-07 03:56:34
use only default
2021-09-07 03:57:02
photon noise can avoid repeatable elements, or patches could work less
2021-09-07 03:57:10
or in a way that is overstimulated
2021-09-07 03:57:31
and with the current state of jxl it doesn't work
2021-09-07 03:58:16
from freedng cam i use
2021-09-07 03:58:36
2021-09-07 03:58:54
there is no improvement of quality since that date
2021-09-07 03:59:18
but anyway i have the newest cjxl, and there's no improvement in quality
2021-09-07 04:00:22
normal heuristics are always better
2021-09-07 04:00:36
but jxl has a lot to do
lithium
2021-09-07 04:07:31
For pixel art content patches=0 doesn't help much; I guess manga (comic) content probably has more variability.
> 9,131,641
> -s9-g3-E3-I1-C1 6,150,164
> -s9-g3-E3-I1-C1-patches=0 6,149,090
Eugene Vert
2021-09-07 04:23:40
<@!461421345302118401> You can apply a bit of bilateral smoothing for noise/jpg artifact removal and then reduce the palette to 16/8 colors using pngquant
```bash
gmic '{}' to_gray repeat 2 bilateral 2,5 'done' o ./out/'{.}'.png
pngquant -o ./out/'{}' -f --nofs --strip 16 -- '{}'
cjxl '{}' -j -m -s 8 -g 3 --patches=0 ./out/'{.}'.jxl
```
> ./08colors-jms8g3.jxl 606,623
> ./16colors-jms8g3.jxl 1,096,422
BlueSwordM
lithium Hello, I have a special case of manga (comic) content; could anyone teach me how to lossy-compress this image? This grayscale case doesn't compress very well in VarDCT mode or other DCT encoders; it looks like webp lossless gets a smaller file. If I want to do lossy compression with libjxl, how do I optimize the parameters for this case?
> original jpg grayscale 8 bit-depth q90-444 4441x6213 7,449,514
> jpg to png grayscale 8 bit-depth color-256 4,553,432
> jpg to jxl j-d1-e7 6,247,960
> jpg to jxl j-d1-e8 5,984,783
> jpg to jxl j-d2-e7 4,456,084
> jpg to jxl j-d2-e8 4,159,367
> png to jxl d1-e7 6,247,963
> png to jxl d1-e8 5,984,786
> png to jxl -m-q100-e7 5,921,998
> png to webp lossless 3,831,452
> png to webp near-lossless-40 3,914,916
> jpg to mozjpeg q90-444 7,294,452
> JPEG XL encoder v0.6.0 11e431d [AVX2,SSE4,Scalar] github build
2021-09-07 04:40:20
Your best bet currently is using lossy modular.
2021-09-07 04:40:33
I got a 2MB something file with `-m Q 50`
2021-09-07 04:43:25
Anyway, I'm surprised `--palette=0` doesn't actually increase compression efficiency at all in BW images. Last time I tried it, it helped nicely.
_wb_
That PR is unfortunately closed because apparently it needs to be reworked 🙁 https://github.com/libjxl/libjxl/pull/466
2021-09-07 04:43:39
I'll work more on it later, it was mostly a POC to show that it can make sense, but it'll take time to find a good approach where decode speed doesn't suffer too much, the quality/density gains are reliable, and there are no potential additional artifacts or other 'surprises'
BlueSwordM Anyway, I'm surprised `--palette=0` doesn't actually increase compression efficiency at all in BW images. Last time I tried it, it helped nicely.
2021-09-07 04:44:01
Try --palette 0 -X 0 -Y 0
2021-09-07 04:44:18
For grayscale, channel palette might also be doing silly things
2021-09-07 04:45:38
The heuristics that decide whether or not to do global or local channel palettes are very simplistic: it's just done whenever the number of actually-used values is less than N percent of the range between min and max
2021-09-07 04:46:15
-X and -Y let you set that percentage for global and local channel palette
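As a rough illustration of the trigger _wb_ describes (palettize a channel when the count of distinct values actually used is small relative to the min..max range), here is a hypothetical sketch; the function name and the threshold passed in are invented for illustration, with the real global/local thresholds being what `-X` and `-Y` control:

```python
def should_channel_palette(samples, percent):
    """Sketch: palettize when used-value count < percent% of the value range."""
    used = sorted(set(samples))
    if len(used) < 2:
        return True  # constant channel is trivially palettizable
    value_range = used[-1] - used[0] + 1
    return len(used) * 100 < percent * value_range

# 32 shades spread over 0..248, like a 5-bit image stored in an 8-bit container
five_bit = [v * 8 for v in range(32)]
print(should_channel_palette(five_bit, 80))          # sparse values: palettize
print(should_channel_palette(list(range(256)), 80))  # dense values: don't
```

Setting the threshold to 0 (as with `-X 0 -Y 0`) makes the condition unsatisfiable, i.e. channel palette never triggers.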
lithium
2021-09-07 04:48:20
Not all grayscale manga have this issue, but probably no DCT method is better? I think I need to check the bilateral smoothing + pngquant quality and the lossy modular Q50 quality. Thank you very much 🙂
2021-09-07 04:53:47
<@!736666062879195236> I haven't tried bilateral smoothing before; is bilateral smoothing the best filter for manga (comics) or anime? I worry the filter will be like the avif encoder and filter out too much detail.
Eugene Vert
2021-09-07 04:57:33
From the ones I've been able to try, it retains details on anime content quite well. I usually experiment with the values for it in gmic-qt for gimp/krita
2021-09-07 05:00:55
I try to keep 'Spatial Variance' low (2-5) and adjust the 'Value Variance' to the noise level (5-20)
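For reference, gmic's `bilateral <spatial>,<value>` arguments correspond to the spatial and range (value) Gaussian widths. Below is a naive numpy sketch of a single bilateral pass, illustrative only — gmic uses a fast approximation, and the pipeline above applies it twice via `repeat 2`:

```python
import numpy as np

def bilateral(img, spatial_sigma=2.0, value_sigma=10.0, radius=4):
    """Naive O(n * r^2) bilateral filter on a 2-D grayscale array.

    Each output pixel is a weighted mean of its neighborhood, where weights
    fall off both with spatial distance and with difference in pixel value —
    which is why it smooths noise but preserves sharp edges.
    """
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial_w = np.exp(-(ys**2 + xs**2) / (2 * spatial_sigma**2))
    padded = np.pad(img.astype(np.float64), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            value_w = np.exp(-(window - img[y, x])**2 / (2 * value_sigma**2))
            weights = spatial_w * value_w
            out[y, x] = (weights * window).sum() / weights.sum()
    return out
```

On a hard black/white step edge, pixels on the other side of the edge get near-zero value weights, so the edge stays crisp while low-amplitude noise within each side is averaged away.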
lithium
2021-09-07 05:05:05
How do you judge whether an image needs denoising? I don't have much experience; for a case like this, how do you choose the denoise level?
Eugene Vert
2021-09-07 05:08:14
I just zoomed in (without a scaling filter in the image viewer) and saw the artefacts)
lithium
2021-09-07 05:11:34
I can see some tiny gray noise when zooming in; for this case is choosing a low denoise level better?
Eugene Vert
2021-09-07 05:13:29
Yes, select minimum values at which it disappears
lithium
2021-09-07 05:27:02
The bilateral filter `repeat 2 bilateral 2,5` is great, that tiny noise is gone. If I want to recompress some jpg or other already-DCT content to jxl, is applying a slight denoise filter to remove the tiny noise better?
Eugene Vert
2021-09-07 05:29:57
I prefer to keep it as it is if not doing any heavy filtering afterwards (like reducing the number of colors)
2021-09-07 05:33:17
Oh, for recompression
2021-09-07 05:40:47
It _may_ help, but there is avif for such things)
lithium
2021-09-07 05:44:24
avif denoise filtering is too strong; I guess a slight denoise filter might help recompression quality?
_wb_
2021-09-08 10:58:48
<@!179701849576833024> & <@!604964375924834314> : Apple Preview on MacOS does something funny to allow you to display HDR images even on SDR monitors: if you turn down the brightness of the monitor and then look at HDR images in Preview, it shows the bright parts brighter than the (255,255,255) white. So if you set brightness to 100%, HDR and SDR are the same, but if you set brightness to 50%, then it effectively only reduces display brightness when there's only SDR content on the screen; if there is HDR content, it keeps the display at 100% to show the HDR parts while dialing down the max brightness of the SDR parts.
2021-09-08 10:59:14
I wonder if Chrome on MacOS can be made to do that too
spider-mario
2021-09-08 11:00:08
oh, interesting
2021-09-08 11:00:29
would you happen to know if it also does that for HLG content? in my understanding, HLG should take into account the user’s preference on peak display luminance
_wb_
2021-09-08 11:01:25
I can try, do you have an example file? It has to be in a format that Preview supports though
spider-mario
2021-09-08 11:01:27
(the OOTF given in BT.2100 explicitly refers to α, “the variable for user gain in cd/m²”)
2021-09-08 11:02:20
I have a PNG but not sure Preview would interpret it as being HLG
2021-09-08 11:02:33
otherwise I can make an AVIF file, or I can look into making a HEIC file
_wb_
2021-09-08 11:03:08
HEIC and EXR work I think, I dunno about png and avif
2021-09-08 11:04:30
in png it doesn't seem to work, and Preview does not support avif
spider-mario
2021-09-08 11:04:47
thanks, then HEIC is probably our best bet
2021-09-08 11:05:22
HLG has been on my mind recently because so far we have mostly focused on PQ, with good success, but perhaps at the expense of good ol’ HLG
_wb_
2021-09-08 11:05:26
I think it only supports 10-bit 420 HEIC
spider-mario
2021-09-08 11:05:44
so I have written a small tool to convert PQ to HLG, and one to render HLG for a given display peak luminance (optionally writing the result back as PQ)
_wb_
2021-09-08 11:05:54
but if you can produce 12-bit and 444 files too (so all 4 variants) then I can see if the situation changed
spider-mario
2021-09-08 11:06:32
if I polish those tools a little (better command-line parsing and so on), maybe we can publish them as a sort of little “HDR toolkit”
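The core of such a PQ/HLG toolkit would be the BT.2100 transfer functions; here is a hedged Python sketch of the PQ EOTF and the HLG OETF, with the constants taken from BT.2100. A real converter (like spider-mario's tools) also needs the HLG OOTF/system gamma for a given peak display luminance, which this sketch skips:

```python
import math

# BT.2100 PQ constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
# BT.2100 HLG constants
A = 0.17883277
B = 1 - 4 * A
C = 0.5 - A * math.log(4 * A)

def pq_eotf(signal):
    """PQ non-linear signal (0..1) -> absolute linear luminance in cd/m^2 (0..10000)."""
    p = signal ** (1 / M2)
    return 10000 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def hlg_oetf(e):
    """Linear scene light E (0..1) -> HLG non-linear signal (0..1)."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C
```

A PQ-to-HLG converter would chain these: PQ signal → absolute luminance → normalize/apply inverse OOTF for the chosen peak → HLG OETF. The α user-gain term spider-mario quotes from BT.2100 enters in that middle step.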
_wb_
2021-09-08 11:06:44
at some point even 16-bit HEIC worked iirc, but then it stopped working, I assume because they switched to hw decode
spider-mario
_wb_ I think it only supports 10-bit 420 HEIC
2021-09-08 11:58:41
hopefully, this works
2021-09-08 11:58:47
and takes into account the colorspace information
2021-09-08 11:59:03
(which hopefully `heif-enc` wrote properly to begin with)
2021-09-08 12:00:34
the command was:
```bash
heif-enc -q 95 -p chroma=420 -b 10 --matrix_coefficients=9 --colour_primaries=9 --transfer_characteristic 18 507-hlg.png -o 507-hlg.heic
```
_wb_
2021-09-08 12:00:56
it shows the image, but not with the funky HDR thing
spider-mario
2021-09-08 12:02:25
and with this one? (just to make sure the problem is not with the method of producing the file)
_wb_
2021-09-08 12:07:28
2021-09-08 12:08:08
Here's a photo of how I see things when I dim my screen to about halfway the brightness slider
2021-09-08 12:08:56
the bottom right is a PDF of the jxl spec which has an SDR white background
2021-09-08 12:09:31
the bright rings EXR file clearly has brighter whites
2021-09-08 12:09:59
the HEIC files look different but neither of them go as bright as the EXR
2021-09-08 12:11:17
it could be that they only do that HDR thing when showing EXR images
2021-09-08 12:12:40
2021-09-08 12:12:48
That's what I see when I set display brightness to max
2021-09-08 12:14:12
in the PQ one, the car is a dull red and the sky is blue, while in the HLG one, the car is a saturated red and the sky is white
2021-09-08 12:15:30
I assume that white square in the EXR test image is SDR white, so as you can see with display brightness to max, that's the brightest it gets
2021-09-08 12:16:20
but with display brightness set to halfway, it becomes gray and you can see the brighter-than-SDR-white rings
2021-09-08 12:17:16
it's funny that I have to make literal photos of my screen to show this, screenshots don't work
2021-09-08 12:18:22
if I make a screenshot, it looks like in the second picture on that screenshot
2021-09-08 12:19:51
(things get clamped to SDR P3)
2021-09-09 07:30:18
https://www.w3.org/Graphics/Color/Workshop/talks.html
lithium
2021-09-09 01:27:01
<@!111445179587624960> <@!736666062879195236> sorry for the ping, I still feel confused about this manga case. Could you teach me why this case is so difficult to compress lossily? I already tried the pngquant and jxl lossy modular Q50 methods; both are good, but I still want to keep quality, so I'm using the denoised png for higher-quality quantization. But I don't really understand why avif is better in this case?
> original png to bilateral filter denoise + pingo-s9 4,050,574
> jxl Q80-e9 2,715,771
> (-m -Q 80 -e 9 -g 3 -E 3 -I 1 --palette=0 -X 0 -Y 0)
> jxl d1.0-e7 5,955,167
> jxl d1.0-e8 5,615,851
> avif q15-s4 1,021,725
> (--min 0 --max 45 --minalpha 0 --maxalpha 0 -d 10 -s 4 -j 12 -a end-usage=q -a cq-level=15
> -a color:enable-chroma-deltaq=1 -a color:enable-dnl-denoising=0 -a color:denoise-noise-level=5)
2021-09-09 01:27:27
denoise-png
Jyrki Alakuijala
2021-09-09 01:35:40
should we have a separate channel for manga problems?
2021-09-09 01:35:54
I'd like us to be able to fix them and having more focus on it might be justified
2021-09-09 01:36:16
I acknowledge it may seem a bit funny, but if it is effective -- why not?
lithium
2021-09-09 01:52:19
This issue happens for specific manga cases; other grayscale manga don't have it. I guess maybe a cjxl separate option could handle those cases?
2021-09-09 01:57:53
> avif q10-s4 1,372,891
> --min 10 --max 10 --minalpha 0 --maxalpha 0 -d 10 -s 4 -j 12 -a end-usage=q -a cq-level=10
> -a color:enable-chroma-deltaq=1 -a color:enable-dnl-denoising=0 -a color:denoise-noise-level=5
BlueSwordM
lithium > avif q10-s4 1,372,891
> --min 10 --max 10 --minalpha 0 --maxalpha 0 -d 10 -s 4 -j 12 -a end-usage=q -a cq-level=10
> -a color:enable-chroma-deltaq=1 -a color:enable-dnl-denoising=0 -a color:denoise-noise-level=5
2021-09-09 02:18:17
BTW, remove `--max 10 --min 10` from avifenc and replace it with `--min 0 --max 63`. Pinning min and max like that makes it use a static Q for all blocks, which is less efficient 🙂
diskorduser
2021-09-09 02:24:49
But last time when I used max 63 on avifenc I got more noise than source picture 🥲.
BlueSwordM
diskorduser But last time when I used max 63 on avifenc I got more noise than source picture 🥲.
2021-09-09 02:25:48
That should not happen <:Thonk:805904896879493180>
2021-09-09 02:25:54
In fact, I've never had it happen to me <:Thonk:805904896879493180>
2021-09-09 02:26:51
Anyway, the Gimp plugin is now really good.
diskorduser
2021-09-09 02:26:51
--min 10 --max 10 made noise closer to the original
Scope
lithium <@!111445179587624960> <@!736666062879195236> sorry for the ping, I still feel confused about this manga case. Could you teach me why this case is so difficult to compress lossily? I already tried the pngquant and jxl lossy modular Q50 methods; both are good, but I still want to keep quality, so I'm using the denoised png for higher-quality quantization. But I don't really understand why avif is better in this case?
> original png to bilateral filter denoise + pingo-s9 4,050,574
> jxl Q80-e9 2,715,771
> (-m -Q 80 -e 9 -g 3 -E 3 -I 1 --palette=0 -X 0 -Y 0)
> jxl d1.0-e7 5,955,167
> jxl d1.0-e8 5,615,851
> avif q15-s4 1,021,725
> (--min 0 --max 45 --minalpha 0 --maxalpha 0 -d 10 -s 4 -j 12 -a end-usage=q -a cq-level=15
> -a color:enable-chroma-deltaq=1 -a color:enable-dnl-denoising=0 -a color:denoise-noise-level=5)
2021-09-09 02:31:25
I'm not a big fan of using filtering before encoding, including denoising, because even black-and-white images have backgrounds and other artistic elements with noise and small dots. Also, as far as I've noticed, black-and-white images and manga, if they're relatively high quality, are mostly distributed in lossless formats (though color pages or covers can be lossy). So it would be very nice to find some options, or maybe even optimize the encoder, to give good results with decent speed on such black-and-white images; there are so many of these images that it would be in real demand. Also, maybe have a special mode like lossy palette, but more suitable for this.
2021-09-09 02:43:36
For good quality I would prefer improved lossless compression for B&W images, because it would guarantee the absence of artifacts and of losses that are not always noticeable; and maybe something like a near-lossless mode, where the losses are more predictable and free of artifacts. But for medium and low quality, yes, it's better to use lossy, and there JXL still needs more tweaks to reduce ringing and mosquito artifacts
2021-09-09 02:47:12
And also a working `separate` mode would also be useful, especially when there is no way to identify the type of image or the image itself is mixed
fab
2021-09-09 02:56:43
when is the next improvement of image quality?
BlueSwordM
lithium Hello, I have a special case of manga (comic) content; could anyone teach me how to lossy-compress this image? This grayscale case doesn't compress very well in VarDCT mode or other DCT encoders; it looks like webp lossless gets a smaller file. If I want to do lossy compression with libjxl, how do I optimize the parameters for this case?
> original jpg grayscale 8 bit-depth q90-444 4441x6213 7,449,514
> jpg to png grayscale 8 bit-depth color-256 4,553,432
> jpg to jxl j-d1-e7 6,247,960
> jpg to jxl j-d1-e8 5,984,783
> jpg to jxl j-d2-e7 4,456,084
> jpg to jxl j-d2-e8 4,159,367
> png to jxl d1-e7 6,247,963
> png to jxl d1-e8 5,984,786
> png to jxl -m-q100-e7 5,921,998
> png to webp lossless 3,831,452
> png to webp near-lossless-40 3,914,916
> jpg to mozjpeg q90-444 7,294,452
> JPEG XL encoder v0.6.0 11e431d [AVX2,SSE4,Scalar] github build
2021-09-09 03:04:20
I was finally able to get consistently smaller lossless BW images with this vs WebP: `cjxl input.png output.jxl -s 9 -I 1.0 -d 0.0 --palette=0 -X 0 -Y 0` 3,821,020
2021-09-09 03:04:26
Of course, it is quite slow...
Scope
2021-09-09 03:14:01
I have tested these settings on a lot of different B&W images and manga, and they give fairly stable, close-to-maximum results in general; but it is also slow, and maybe it is possible to speed up the encoding with fixed predictors and some other settings for B&W https://discord.com/channels/794206087879852103/794206087879852106/884825075461787699
2021-09-09 03:22:06
And also comics or manga may not necessarily be black and white, but instead of black there may be some other color (like blue and white)
doncanjas
2021-09-09 03:25:01
In lossless compression I'm having cases where jpeg -> jpeg-xl is bigger than jpeg -> png. And even cases where jpeg -> png -> jpeg-xl is smaller than jpeg -> png, or even smaller than jpeg -> jpeg-xl
lithium
2021-09-09 03:25:19
Thank you for your reply 🙂 I understand lossless is great, but it is really difficult for me to keep all images lossless; for now I want to find a method to keep quality and reduce file size, like near-lossless or visually lossless (but with some generation loss). The av1 options are really hard to optimize, thank you for teaching me 🙂 For now I look forward to the jxl separate mode reaching true visually lossless at d0.5 e8 for manga content.
BlueSwordM
lithium Thank you for your reply 🙂 I understand lossless is great, but it is really difficult for me to keep all images lossless; for now I want to find a method to keep quality and reduce file size, like near-lossless or visually lossless (but with some generation loss). The av1 options are really hard to optimize, thank you for teaching me 🙂 For now I look forward to the jxl separate mode reaching true visually lossless at d0.5 e8 for manga content.
2021-09-09 03:27:44
Here is what I was able to do with the Scope options: `-s 9 -I 1.0 -d 0.0 --patches=0 --palette=0 -g 3 -X 0 -Y 0` 3,369,369 It seems to work very well for monochrome manga.
Scope
2021-09-09 03:31:45
Yes, but on some images `--palette=0 -X 0 -Y 0` may give worse results
BlueSwordM
Scope Yes, but on some images `--palette=0 -X 0 -Y 0` may give worse results
2021-09-09 03:31:59
If it is grayscale, why would it hurt?
2021-09-09 03:32:11
🤔
Scope
2021-09-09 03:32:35
Idk, but I've compared these settings as well
_wb_
2021-09-09 03:33:22
channel palette can be useful, especially if you have a grayscale image that only uses a small amount of shades of gray
fab
2021-09-09 03:33:40
when next improvement in image quality
2021-09-09 03:33:50
or for advancements in ISO
lithium
2021-09-09 03:47:44
For manga or drawings: for now I notice some online shops using jpeg q99 444 or q94 444 (they also provide lossless png files), so I think high-quality lossy is an acceptable option. And it looks like some online shops are starting to use avif; I guess if jxl can get better lossy quality and encoding time for this content, those shops will probably choose jxl to deliver their products.
diskorduser
doncanjas In lossless compression I'm having cases where jpeg -> jpeg-xl is bigger than jpeg -> png And even where jpeg -> png -> jpeg-xl is smaller than jpeg -> png or even smaller than jpeg -> jpeg-xl
2021-09-09 03:47:59
Are you using jpeg recompression mode or modular lossless on jpg source?
doncanjas
diskorduser Are you using jpeg recompression mode or modular lossless on jpg source?
2021-09-09 03:48:27
<:monkaS:847592665293389835>
2021-09-09 03:49:46
```bash
#!/bin/bash
for file in *.jpg
do
    cjxl "$file" ouput-path/${file%.*}.jxl -q 100 -e 8 --strip -I 1.0 -E 3 --num_threads=12
done
```
lithium
2021-09-09 03:50:10
> cjxl -j -m -q 100 -e 9 // modular lossless
> cjxl -e 9 // jpeg recompression mode
doncanjas
2021-09-09 03:50:10
I did this, but I also did one test without --strip and the size difference was only in KB
diskorduser
doncanjas ```bash
#!/bin/bash
for file in *.jpg
do
    cjxl "$file" ouput-path/${file%.*}.jxl -q 100 -e 8 --strip -I 1.0 -E 3 --num_threads=12
done
```
2021-09-09 03:50:30
No. You are using it wrong
doncanjas
2021-09-09 03:50:47
I see, I do be dumb
diskorduser
2021-09-09 03:50:55
cjxl produces lossless jpg -> jxl by default.
2021-09-09 03:51:52
You should not use -q on jpg source for lossless compression
doncanjas
2021-09-09 03:52:25
I see, so what is the right way to reencode jpg losslessly?
_wb_
2021-09-09 03:52:51
cjxl in.jpg out.jxl
doncanjas
2021-09-09 03:53:12
so I just don't add the -q param?
_wb_
2021-09-09 03:53:55
I think it doesn't make a difference if you add -q 100, but it's not needed; on "lossy formats" the default is to do lossless recompression
2021-09-09 03:54:09
(where "lossy formats" are JPEG and GIF)
doncanjas
2021-09-09 03:55:21
I see, so can I do `cjxl img1.jpeg img2.jxl -e 8 --strip` ?
_wb_
2021-09-09 03:55:43
if you do --strip, then you cannot get the exact same jpeg file back again, and you lose any exif/xmp
2021-09-09 03:55:53
but the image data is there
2021-09-09 03:56:12
-e 8 likely is not worth the slowdown, I'd just do default
2021-09-09 03:56:28
but yes, you can do that
doncanjas
2021-09-09 03:57:49
actually, -e 8 isn't bothering me; I'm just not using -e 9 because I really didn't notice a significant difference in file size that would justify the time spent
Scope
2021-09-09 03:58:54
Yep, -e 8 is slightly better, with not much increase in encoding time
diskorduser
2021-09-09 03:59:09
do you still get bigger jxl than png?
doncanjas
doncanjas I see, so can I do `cjxl img1.jpeg img2.jxl -e 8 --strip` ?
2021-09-09 04:00:50
yeah, still bigger with this
2021-09-09 04:01:40
jpeg is 3.6MB, jxl is 3.3MB, and png is 1.8MB
diskorduser
2021-09-09 04:01:54
Weird
doncanjas
2021-09-09 04:01:58
I'm using pingo -s9 to optimize the pngs btw
_wb_
2021-09-09 04:02:25
Depends on the content, if it's repetitive, png can help a lot
doncanjas
2021-09-09 04:02:36
yeah, it's a grayscale image
diskorduser
2021-09-09 04:02:58
Try using -m with that image
doncanjas
2021-09-09 04:03:53
on the png?
diskorduser
2021-09-09 04:04:00
On jpg
2021-09-09 04:04:06
-j -m
doncanjas
2021-09-09 04:04:31
was missing the -j lol
2021-09-09 04:05:28
1.5MB
2021-09-09 04:05:42
now it's working
2021-09-09 04:06:13
oh yeah, these jpegs are bloated, I saved like 60% just from jpeg to png
diskorduser
2021-09-09 04:07:03
Also try using -m with the optimized png.
doncanjas
2021-09-09 04:09:42
1,5MB
2021-09-09 04:09:48
yeah, it's working
2021-09-09 04:11:37
thank you for the help
_wb_
2021-09-09 04:14:07
For lossless pixels (as opposed to lossless jpg), -e 9 might be worthwhile
Scope
BlueSwordM Here is what I was able to do with the Scope options: `-s 9 -I 1.0 -d 0.0 --patches=0 --palette=0 -g 3 -X 0 -Y 0` 3,369,369 It seems to work very well for monochrome manga.
2021-09-09 04:58:26
2021-09-09 04:58:34
2021-09-09 05:03:37
2021-09-09 05:03:41
lithium
2021-09-09 05:05:30
Looks like an indexed grayscale image is not suitable for lossy compression.
> Input file: 003.png | 441695 bytes
> Image: 1114x1600 pixels | 8 bits/sample | Indexed grayscale | 64 color(s)
2021-09-09 05:08:39
But for a png grayscale image, near-lossless compression is still a worthwhile option.
> png - 4,050,574
> avif q7 2,462,806
> --min 0 --max 63 --minalpha 0 --maxalpha 0 -d 10 -s 4 -j 12 -a end-usage=q -a cq-level=7
> -a color:enable-dnl-denoising=0 -a color:denoise-noise-level=5 -a color:sharpness=2 -a color:qm-min=0
Scope
2021-09-09 05:10:21
Also some unexpected result
2021-09-09 05:10:25
lithium
2021-09-09 05:14:09
If I don't use lossless compression, will -d 0.1 -e 8 --separate be very nearly lossless? (like webp near-lossless 80)
Scope
2021-09-09 05:16:14
2021-09-09 05:17:45
d1
2021-09-09 05:17:57
d1 separate
2021-09-09 05:20:09
Demonstration of how `separate` works, the blurred area is VarDCT, the rest is modular
2021-09-09 05:20:30
2021-09-09 05:22:20
Btw, I don't quite understand why exactly this area
_wb_
2021-09-09 05:24:38
No idea, the heuristic is a bit fragile I guess
lithium
2021-09-09 05:25:44
cool, the d1 separate version still looks very good.
fab
_wb_ No idea, the heuristic is a bit fragile I guess
2021-09-09 05:45:40
So when will the image quality improvements be committed
diskorduser
2021-09-09 05:55:22
I demand answers /s
_wb_
2021-09-09 05:55:22
Research doesn't work like that
2021-09-09 05:56:03
I can't plan when exactly I will have the next good idea
BlueSwordM
fab So when image quality improvements commits
2021-09-09 05:56:26
They will come soon™️
2021-09-09 05:56:36
The question is not if, just when 🤣
diskorduser
2021-09-09 06:08:46
I'm trying something like this. But it doesn't work. `wget 'http://res.cloudinary.com/dofnr4zc9/image/fetch/w_1920,f_jxl,d_1/https://w.wallhaven.cc/full/y8/wallhaven-y8gp9d.jpg'`
_wb_
2021-09-09 06:10:57
Ah lol yes we don't have distance in Cloudinary
2021-09-09 06:11:09
Try q_90
2021-09-09 06:11:29
Or q_auto:best
2021-09-09 06:11:42
Which are atm the same thing for jxl
diskorduser
2021-09-09 06:14:33
The problem is with the wallpaper link. If I use some other link, it works fine. But it does not work with the wallhaven link.
2021-09-09 06:16:59
I can download it using `wget 'https://w.wallhaven.cc/full/y8/wallhaven-y8gp9d.jpg'` but idk why it doesn't work with cloudinary.
_wb_
2021-09-09 06:21:01
It could be that it has policies against us fetching it
diskorduser
2021-09-09 06:21:35
I see. Thanks for the info
_wb_
2021-09-09 06:22:37
I'm not sure if we respect robots.txt, but we might
2021-09-09 06:22:58
Also they could just blacklist an ip range that includes us
lithium
2021-09-09 07:15:39
Which modular method does the cjxl separate option use to compress the non-photographic frames? It looks like it doesn't use the Haar-like transform?
_wb_
2021-09-09 07:26:03
No, just color quantization
lithium
2021-09-10 06:42:24
Maybe Jyrki Alakuijala has some color quantization ideas?
> https://encode.su/threads/3108-Google-s-compression-proje%D1%81ts?p=64872&viewfull=1#post64872
The pingo encoder dev implemented a different lossy mode: predictors, color transformation, palette (256c) for png lossy. Color transformation is really powerful for drawings, but for noise and color gradients, the predictor method is better.
_wb_
2021-09-10 06:52:32
yes, ideally we do something like delta-palette but in XYB instead of RGB
Scope
2021-09-10 11:24:35
Also, does XYB have any advantages for lossy modular mode, or is it redundant? Or maybe it allows more lossy tricks, including delta-palette-like ones?
_wb_
2021-09-10 12:17:16
It does have advantages, being a more perceptual space
2021-09-10 12:17:55
But it's also just that we need to use XYB for the modular patches if we use XYB for the lossy vardct parts
lithium
2021-09-10 12:35:09
For now the separate option uses a heuristic to segment the image; which method does this heuristic use to identify the content type? Is there a quick heuristic to identify whether an area is suitable for DCT or not?
_wb_
2021-09-10 01:02:46
yes, main research question is how to properly combine different (DCT and no-DCT) coding methods without creating boundary artifacts, making decoding a lot slower, or being counterproductive in terms of density savings
lithium
2021-09-10 01:05:28
I'm trying to find a method to identify pixel art vs non-pixel art, but for now I've just put this work on my TODO list 😛
BlueSwordM
_wb_ yes, main research question is how to properly combine different (DCT and no-DCT) coding methods without creating boundary artifacts, making decoding a lot slower, or being counterproductive in terms of density savings
2021-09-10 02:31:03
Perhaps edge detection could be performed on thin long edges to then do edge masking telling the encoder to use lossless/lossy modular on these lines.
2021-09-10 02:31:37
The only problem is that once noise is in play, that can cause some problems if additional steps are not taken.
_wb_
2021-09-10 02:33:13
the main problem is not so much detecting where to do what, but avoiding boundary artifacts on the edge between different techniques
2021-09-10 02:34:30
in current patches that's not an issue since patches are only detected when there's a solid background color, so you can subtract the background from the patch and get residuals for VarDCT that are clean
BlueSwordM
2021-09-10 02:36:42
Oh right, I completely forgot about that.
_wb_
2021-09-10 02:38:07
in what I want to do here, there is not necessarily a solid background on the "best to use non-DCT" parts, so that makes it a bit harder to find a way to avoid boundary artifacts (which could be bad for the final image but also for compression if they cause extra entropy)
lithium
2021-09-10 03:01:57
I understand, avoiding boundary artifacts is a difficult thing; I can see this issue in Scope's test.
2021-09-10 03:53:11
> 4,050,574 original png + denoise
> 3,317,755 -d 1.0 -e 7 --gaborish=0
> 5,955,167 -d 1.0 -e 7
> 3,136,700 -d 1.0 -e 8 --gaborish=0
> 2,462,806 avif q7
> (--min 0 --max 63 --minalpha 0 --maxalpha 0 -d 10 -s 4 -j 12 -a end-usage=q -a cq-level=7
> -a color:enable-dnl-denoising=0 -a color:denoise-noise-level=5 -a color:sharpness=2 -a color:qm-min=0)
2021-09-10 03:53:44
I think gaborish has some issue...🤔
_wb_
2021-09-10 04:10:28
that smells like it's a quite non-photographic image where patches are useful
lithium
2021-09-10 04:14:39
yes, that's a manga image, https://discord.com/channels/794206087879852103/794206087879852106/885516651657822219
_wb_
2021-09-10 04:16:40
right
2021-09-10 04:25:41
interesting that gaborish makes such a big difference in size
2021-09-10 04:26:51
must be that the sharpening it does at encode side is really messing up things
2021-09-10 04:27:26
it's an image with a lot of very sharp black/white edges
2021-09-10 04:32:33
for an image like this, I'd use something like `--override_bitdepth 3 -m -v --patches 0`
2021-09-10 04:33:01
because it's basically a black and white image, you don't really need 8-bit grays here
2021-09-10 04:34:01
by reducing the number of colors (maybe even all the way to 1-bit), compression gets a lot better
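The shade reduction _wb_ is describing can be sketched as a simple requantization of the gray values before encoding; `reduce_shades` is a hypothetical helper written for illustration, separate from what `--override_bitdepth` itself does:

```python
def reduce_shades(samples, bits):
    """Requantize 8-bit grayscale samples down to 2**bits evenly spaced shades.

    Each sample is snapped to the nearest of the remaining levels, then
    rescaled back to the 0..255 range so the image still displays correctly.
    """
    levels = (1 << bits) - 1
    return [round(round(s / 255 * levels) / levels * 255) for s in samples]

print(reduce_shades([0, 60, 128, 200, 255], 2))  # → [0, 85, 170, 170, 255]
```

With `bits=1` this collapses everything to pure black and white, which is the extreme case mentioned above; anti-aliased edges then carry no intermediate grays at all.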
Scope
2021-09-10 04:39:15
I wonder if it is possible to do something like lossy palette, but for grayscale and b/w images: make similar changes for better compression, with the number of colors decreased automatically until it becomes visually noticeable
_wb_
2021-09-10 04:39:18
`--override_bitdepth 2 -m --patches 0 -g 3 -e 9` looks quite OK to me
2021-09-10 04:39:28
2021-09-10 04:39:32
that's what you get with that setting
Scope I wonder if it is possible to do something like lossy palette, but for grayscale and b/w images to make some similar changes for better compression and the colors will be decreased automatically until it will not be noticeable visually
2021-09-10 04:41:07
possible: sure - and likely it would also be useful to do such things on alpha channels, btw
Scope
2021-09-10 04:42:52
Because the lossy palette does not work well on such images and can often be worse than normal lossless
lithium
2021-09-10 04:43:17
oh, with override_bitdepth 2 the background has a big error.
_wb_
2021-09-10 04:43:28
lossy palette is currently only implemented for RGB, I don't know what it does on grayscale
lithium oh, on override_bitdepth 2, background have a big error.
2021-09-10 04:45:25
oops, right
2021-09-10 04:45:41
didn't look at that part where it actually does use grayscale colors
2021-09-10 04:47:25
yeah some kind of lossy preprocessing would be useful here that keeps gradients intact but changes basically-black-and-white parts into reduced colors
2021-09-10 04:48:17
then in the groups where it's needed, we do real grayscale, but in the groups where we can, we use local channel palette in a lossy way to reduce it to fewer shades of gray
2021-09-10 04:48:40
if the shades of gray are only anti-aliasing around sharp edges, there's no need to have 256 of them
Scope
2021-09-10 04:49:08
Also, could something like this be changed on a per-block basis, like here, where it needs more colors for the background but fewer for everything else? Or perhaps it would be better to improve the lossy palette mode 🤔
_wb_
2021-09-10 04:50:34
modular can do palette/channel palette globally for the whole frame, but also locally for the 256x256 group (or whatever the modular group size is)
lithium
2021-09-10 05:15:29
<@!111445179587624960> sorry for the ping. For now I'm waiting for Jyrki Alakuijala to improve the integral transforms; I think that improvement can increase drawing quality and fix my issue with artifacts in gradient areas of drawings. I plan to lossily convert some manga and drawings with cjxl at d0.5~1.0 e8. Should I keep waiting until the separate option is implemented, or is waiting for this improvement enough? What's your opinion? > they are relatively rare and I know how to fix them, but haven't got up to it yet -- still more focused on reducing ringing/worst case behaviour with graphics > ... also they are rare because I'm heavily penalising 8x8 dcts currently because of the 8x8 constant color blocking artefacts it brings to gradients (and artificially favoring 8x16 and 16x16) > once I'll have better degradation for 8x8 dcts, I can have a better balance of integral transforms, too
Scope
2021-09-10 05:21:19
I think both will be needed, because even if separate is implemented and works, there may be cases where it is inconvenient to use, it can noticeably slow down encoding/decoding, and even separate uses a combination with VarDCT
2021-09-10 05:23:49
But separate is still closer to visually lossless quality and avoids artifacts on such content
fab
lithium <@!111445179587624960> sorry for the ping. For now I'm waiting for Jyrki Alakuijala to improve the integral transforms; I think that improvement can increase drawing quality and fix my issue with artifacts in gradient areas of drawings. I plan to lossily convert some manga and drawings with cjxl at d0.5~1.0 e8. Should I keep waiting until the separate option is implemented, or is waiting for this improvement enough? What's your opinion? > they are relatively rare and I know how to fix them, but haven't got up to it yet -- still more focused on reducing ringing/worst case behaviour with graphics > ... also they are rare because I'm heavily penalising 8x8 dcts currently because of the 8x8 constant color blocking artefacts it brings to gradients (and artificially favoring 8x16 and 16x16) > once I'll have better degradation for 8x8 dcts, I can have a better balance of integral transforms, too
2021-09-10 05:31:46
This is old, from 4 months ago
lithium
Scope I think both will be needed, because even if separate is implemented and works, there may be cases where it is inconvenient to use, it can noticeably slow down encoding/decoding, and even separate uses a combination with VarDCT
2021-09-10 05:40:32
Thank you for your reply 🙂 Actually I also plan to spread jxl to the Chinese community, but I still can't get stable lossy quality for drawings and manga, so for now I'm waiting for the jxl devs to improve quality... > Jyrki Alakuijala comment link https://discord.com/channels/794206087879852103/794206170445119489/882884730134093864
July
2021-09-11 12:06:04
Why is this image smaller as webp than as jxl? Is there any way to make it smaller? Losslessly, of course.
yurume
2021-09-11 01:21:20
oh, A Small Hike. it's a good game. </off-topic>
2021-09-11 01:21:53
is that webp file losslessly compressed?
July
2021-09-11 02:59:06
Yes
2021-09-11 02:59:13
From a PNG from the game's press kit which I converted to webp
2021-09-11 03:01:09
Also it's called A Short Hike lol
_wb_
2021-09-11 06:20:21
Is that the image, or is that a screenshot of the image?
Scope
July Why is this image smaller as webp than as jxl? Is there any way to make it smaller? Losslessly, of course.
2021-09-11 07:06:33
2021-09-11 07:14:55
So, group size can sometimes make a big difference, and as I said before it would be nice to have something like automatic selection of it in the future
_wb_
2021-09-11 07:36:42
128x128 works best here?
2021-09-11 07:37:19
Might be that tighter local palettes help on that image
Scope
2021-09-11 07:38:46
```
GiantMountainScreenshot.png.jxl_m_E3_I1_9                5,270,580
GiantMountainScreenshot.png.jxl_m_E3_I1_g0_9             4,649,223
GiantMountainScreenshot.png.jxl_m_E3_I1_g3_9             5,820,340
GiantMountainScreenshot.png.jxl_m_E3_I1_g3_patches0_9    5,817,299
GiantMountainScreenshot.png.jxl_m_E3_I1_p10000_g3_9      6,289,721
```
2021-09-11 07:41:53
But, yes, perhaps the choice of palette has an effect, and this is something that needs to be improved, primarily in lossless
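My reading of the filenames in that table (not an official format, just how Scope's benchmark_xl runs seem to be named): the underscore-separated tokens after `.jxl_` are the codec options, and the trailing number is the effort level, as Scope confirms later in the chat.

```python
# Hypothetical decoder for the suffix convention in the table above.
def decode_suffix(name: str) -> dict:
    tokens = name.split(".jxl_", 1)[1].split("_")
    return {"effort": int(tokens[-1]), "options": tokens[:-1]}

print(decode_suffix("GiantMountainScreenshot.png.jxl_m_E3_I1_g0_9"))
```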
Romao
2021-09-11 12:34:45
<@111445179587624960> what does `_9` at the end mean? --effort?
Scope
2021-09-11 12:35:02
Yep, these are options for benchmark_xl
Romao
2021-09-11 12:35:44
so -g 0 works the best for all lossless?
Scope
2021-09-11 12:36:32
No, just for this image
Romao
2021-09-11 12:36:45
I'm doing a test on a 99MB image, but it's taking forever lol