JPEG XL

jxl

Anything JPEG XL related

Jim
2022-12-03 02:36:47
I went from 14.4k to 56k to DSL to coax. Still on coax 300/20 now. The fiber node is in the backyard. There are faster speeds available but I don't need them. At businesses I heard of ISDN but never saw any place that had it. Most quickly went to bonded DSL lines, though some of them were not set up well. Pretty much every business is now either cable or direct fiber.
Wolfbeast
2022-12-03 02:38:20
Yeah i am on the slowest subscription here, pretty much (aside from one that just artificially clamps upload...) but I really don't need more. I can get Gbit. Like I'd ever get that kind of speed from servers XD
Jim
2022-12-03 02:40:34
Some university or HPC business servers can do that, but most struggle doing 100 Mb/s, so it will be a while before that is practical.
Wolfbeast
2022-12-03 02:41:36
Doesn't stop ISPs from trying to sell it though ;-)
2022-12-03 02:42:52
I guess if you do digital TV and have like a family of 5 all using the internet at the same time for everything... maybe you'll use more than 100 Mbit, but otherwise? unlikely.
Jim
2022-12-03 02:43:34
Mostly about capacity on the ISP side. They can pack more people on a single node with each generation. They then advertise the max speed the terminal can do but you might not actually see that speed. Granted, most terminals are fiber now so they can already do way more than coax/DSL can do.
Wolfbeast
2022-12-03 02:48:08
I'm actually getting pretty much what is advertised (and no I never test within the ISP's own network only ;-)) so I can't complain.
2022-12-03 02:48:30
82M down, 93M up right now.
pandakekok9
2022-12-03 02:50:23
I have an advertised 20 Mbit subscription right now in both directions but sometimes it can go up to 50 or even 100 Mbit lol
Wolfbeast
2022-12-03 02:52:36
Anyway... sorry for going waaaay off-topic for jxl
pandakekok9
2022-12-03 02:53:05
Tbh even just 10 Mbit is a godsend to me, maybe that's because I have been in 1-2 Mbit for so long and I just got a 20 Mbit recently (like last year) lol
Wolfbeast Anyway... sorry for going waaaay off-topic for jxl
2022-12-03 02:53:19
Yeah I'm gonna stop right here too lol
Traneptora
pandakekok9 I do think that the results of the progressive decoding can sometimes be distracting though, so I understand why Moonchild is kinda not a fan of it. It's still possible to bring back the progressive mode from JPEG to JPEG-XL right (i.e. the one where all parts are equally blurry)?
2022-12-03 05:44:00
indeed this can be done
2022-12-03 05:44:26
JXL has Passes, which are essentially progressively more detailed rendering passes of an image
monad
pandakekok9 Huh, looks like it does seem to know to prioritize text first, as shown by the textbox here:
2022-12-03 05:45:10
In this case the text was encoded with patches. cjxl above a certain effort looks for repeated high-contrast areas to encode them just once. This can improve density but also preserve sharp edges where VarDCT might degrade them.
yurume
2022-12-03 09:56:42
"technically" possible but not actually done by libjxl today, isn't it
sklwmp
2022-12-03 09:57:47
isn't this not much better than just encoding a PNG to JPEG and then transcoding to JXL?
_wb_
2022-12-03 09:59:00
There is no libjxl option for this, but it wouldn't be hard to do. Just using your favorite jpeg encoder and then recompressing would work too
2022-12-03 09:59:40
For mozjpeg, you can skip the progressive scan tweaking and the huffman code optimization because those won't matter for the jxl size
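A minimal sketch of that workflow on the command line, assuming mozjpeg's cjpeg and a recent cjxl are installed (filenames and the quality value are just placeholders):
```sh
# Hypothetical example of the workflow described above: lossy-encode with a
# legacy JPEG encoder, then losslessly recompress the JPEG bitstream into JXL.
cjpeg -quality 85 -outfile intermediate.jpg input.ppm
cjxl intermediate.jpg output.jxl   # JPEG data is kept, so the .jpg stays reconstructible
```
As noted above, mozjpeg's progressive-scan tweaking and Huffman optimization can be skipped here, since they won't matter for the resulting jxl size.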
OkyDooky
2022-12-03 11:54:10
It's kinda nice to see AV1 community too thinks google's report is deceptive (https://www.reddit.com/r/AV1/comments/zb0wnu/google_publishes_the_results_of_their_study/). But on the other hand, it looks even more like google is being *actively* malicious, which kinda sucks.
w
2022-12-03 12:02:34
av1 community is the jxl community
2022-12-03 12:02:40
it's only like 10 people
_wb_
2022-12-03 12:35:34
I need to add a tarball or something there, or at least links to the images
2022-12-03 12:37:45
https://jon-cld.s3.amazonaws.com/test/images/001.png they have urls like this
Moritz Firsching
Wolfbeast Hey no qualms about progressive loading as a concept, just think the blocks are too large to begin with (256x256 are pretty hunky chunks...) and find the focalpoint-to-edge really distracting and not really helping with making the image be clear before it's fully loaded. But maybe that's just me.
2022-12-03 12:38:17
There are of course many aspects to when/how progressive images are useful and helpful, and we also need to consider cognitive load and not being distracting. A few points on this:
- I think progression is especially useful for large (in byte size and image dimensions) images, and for those images 256x256 chunks will be relatively small.
- There might be some getting used to it: we are used to sequential legacy JPEGs loading from top to bottom (although it can happen that they load from the side or the bottom, depending on orientation) and also to progressive JPEGs getting increasingly sharper. Having some areas of the image sharp while other areas are not yet sharp is something new, and the first time one sees it, it might be more distracting than after having seen a few images load like that already.
- In the demo I included the "watch image" to make especially visible what is going on. It is a bit hard to prepare a demo to show something where the whole point is that one shouldn't notice what is going on (namely, the progressive loading should not be distracting).
- While the spec allows arbitrary tile order, in the current encoder (cjxl) we opted for only storing the tiles in an order that spirals out in squares around a starting tile you can choose freely.
- There are more progressiveness features specified by the JPEG XL format that are not yet implemented.
I hope the explanations help, all feedback is most welcome!
_wb_
2022-12-03 12:38:18
But besides the first 49 of them, the filenames are not guessable
2022-12-03 12:48:32
Any permutation of groups can be done
Traneptora
2022-12-03 03:12:47
the decoder reads a TOC permutation which can be any permutation of groups
_wb_
2022-12-03 03:24:27
Btw the original reason we added TOC permutation is because someone at JPEG asked for it for the use case of 360 images, where the default view is typically encoded in the center and top-down loading corresponds to ceiling-first, so having a way to get the center first seemed useful.
2022-12-03 03:26:45
So we added it, we already had a nice way to represent/compress permutations for the coefficient reordering, so it was a relatively easy addition that brings quite a lot of extra flexibility in bitstream ordering options.
Jyrki Alakuijala
2022-12-03 08:02:31
I'd say probably no. It works with natural photographs. With manga the passes approach could be more useful.
2022-12-03 08:03:39
the normal progression is relatively cheap for computation: in high-frequency 256x256 blocks pixels are final
2022-12-03 08:04:03
the passes approach needs to refresh the image many times, i.e., some more computation is needed, which may or may not be a big issue
Traneptora
2022-12-03 08:56:02
how does JXL work with manga color or printed color in general that uses dots rather than actual gray to create an appearance of gray
2022-12-03 08:56:34
consider something like this ((c) M. Lunsford)
Jyrki Alakuijala
2022-12-03 09:18:56
JPEG XL formalism allows for simple low frequency (e.g. supersampled) fields to modulate simple repetitive texture fields, and that can achieve these effects with very high compression densities -- however, there is no encoder for them, we had an internship to explore it, but the encoding problem seems more complicated than what is achievable in an internship
2022-12-03 09:20:38
(doing that would require use of layers and layers are not that compatible with progression)
_wb_
2022-12-03 09:23:16
If the halftoning patterns are not scanned but synthetic, likely the current MA tree learning can already basically learn these and save a lot. But that doesn't work very well for scanned halftoning patterns, or for lossy.
Jyrki Alakuijala
2022-12-04 12:01:50
yes, also the modular kind of progression might be better for anime than the VarDCT kind of progression (not sure)
Traneptora
2022-12-04 09:29:16
Squeeze in this case?
Jyrki Alakuijala JPEG XL formalism allows for simple low frequency (e.g. supersampled) fields to modulate simple repetitive texture fields, and that can achieve these effects with very high compression densities -- however, there is no encoder for them, we had an internship to explore it, but the encoding problem seems more complicated than what is achievable in an internship
2022-12-04 09:30:25
speaking of internships, I'm graduating with a math degree in several months, likely june. Are there any that might be worth applying for?
2022-12-04 09:31:03
I'm in the midwestern United States if that matters
Jyrki Alakuijala
2022-12-04 09:46:37
The AVIF/AV1 team is in Mountain View
2022-12-04 09:46:57
Some of AVIF team is in Paris
2022-12-04 09:47:13
JPEG XL is in Zurich
2022-12-04 09:47:48
contact me at jyrki@google.com
2022-12-04 09:48:40
(also I feel that you have the skills that can make you a fulltimer, and the compensation for a fulltime engineer can be double that of an intern)
Traneptora
2022-12-04 10:00:57
Thanks for the heads up 👍
2022-12-05 12:06:58
I noticed `cjxl` has options `--progressive_ac` and `--progressive_dc`
2022-12-05 12:07:11
instead of `--progressive_hf` and `--progressive_lf`
2022-12-05 12:07:45
since AC and DC have been "renamed" to HF and LF, is there any plan to rename these options (probably with a deprecation period)
_wb_
2022-12-05 12:29:41
I suppose we could just add aliases so both work
Traneptora
2022-12-05 04:09:58
also what do DC and AC stand for?
yurume
2022-12-05 04:13:18
Direct Current and Alternating Current, but the actual origin would be https://en.wikipedia.org/wiki/DC_bias
Traneptora
2022-12-05 04:14:39
I figured it wasn't the electric current <:kek:857018203640561677>
veluca
2022-12-05 04:15:42
don't ask me why (probably related to signal processing), but they ended up being used for DCT coefficients xD
Traneptora
2022-12-05 04:16:26
maybe it stands for DCT Coefficient <:KEKW:643601031040729099>
_wb_
2022-12-05 04:20:44
well you can kind of see how a cosine looks like alternating current
2022-12-05 04:22:03
all the dct coeffs except the DC are discretized cosines with various wavelengths
2022-12-05 04:23:29
the DC gets cos(0) in the dct formula so it's just the mean, which I suppose you can see as direct current
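For reference, the (unnormalized) 1D DCT-II being referred to, with $x_n$ the input samples:
$$X_k = \sum_{n=0}^{N-1} x_n \cos\!\left[\frac{\pi}{N}\left(n + \tfrac{1}{2}\right)k\right], \qquad k = 0,\dots,N-1$$
For $k = 0$ every cosine is $\cos(0) = 1$, so $X_0 = \sum_n x_n$, i.e. $N$ times the mean of the block (up to normalization): the DC coefficient. Every $k > 0$ basis function oscillates around zero, hence the "AC" analogy.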
Traneptora
2022-12-05 04:23:44
well arguably a constant is a degenerate cosine with zero frequency / infinite wavelength
_wb_
2022-12-05 04:24:10
I suppose this 'electrical' terminology comes from signal processing originally working with analog signals
Traneptora
2022-12-05 04:24:50
so AC is actually called that because it's the high-frequency analog of the DC coeffs
2022-12-05 04:24:51
huh
veluca
_wb_ well you can kind of see how a cosine looks like alternating current
2022-12-05 04:25:12
well, yes, but it's more that AC looks like a cosine really xD
Traneptora
2022-12-05 04:25:28
ideal alternating current is sinusoidal, isn't it?
veluca
2022-12-05 04:25:40
should be, yep
Traneptora
2022-12-05 04:25:43
so it doesn't just "look" like a cosine iirc
_wb_
2022-12-05 04:27:10
I meant "to look like" in a symmetrical way that includes "to be" as a special case 🙂
Traneptora
2022-12-05 04:47:56
is similarity a reflexive relation? and other philosophical questions
2022-12-05 05:04:32
how does Squeeze allow progressive renders?
2022-12-05 05:04:58
do you basically render the squeezed channels, ignore the residue, and upsample them?
_wb_
2022-12-05 05:09:08
yeah, or just do the full unsqueeze with all not-yet-decoded residues set to zeroes
Traneptora
2022-12-05 05:15:23
that requires you to unsqueeze twice, which doesn't really seem worth it performance-wise
_wb_
2022-12-05 05:32:22
Yeah normal upsampling will be faster
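A simplified sketch of the relationship being discussed, using a plain average/difference lifting pair; the actual JPEG XL Squeeze adds a smoothing/"tendency" term to the residual prediction, so this is only illustrative. For a pair of neighbouring samples $a$ and $b$:
$$\mathrm{avg} = \left\lfloor \tfrac{a+b}{2} \right\rfloor, \qquad r = a - b,$$
$$a = \mathrm{avg} + \left\lfloor \tfrac{r+1}{2} \right\rfloor, \qquad b = a - r.$$
Running the unsqueeze with a not-yet-decoded residual $r$ treated as 0 reproduces $\mathrm{avg}$ for both samples, i.e. a 2x-downsampled preview; decoding $r$ later makes the reconstruction exact.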
DZgas Ж
2022-12-11 09:17:15
Oh no it's been found again - but this time the quality is lower than 90
2022-12-11 10:01:46
I want to note that e1-e4 are completely meaningless for quality q96-99; they are completely unsuitable for storing accurate pixel information. But e5+ solve this task perfectly, the pixels are where they need to be... Original | e3 q99 | e5 q98
_wb_
2022-12-11 10:04:50
I think e5-8 are the useful settings, e3-4 is like libjpeg-turbo, not very reliable, and e9 has basically no advantage over e8 (for vardct)
DZgas Ж
_wb_ I think e5-8 are the useful settings, e3-4 is like libjpeg-turbo, not very reliable, and e9 has basically no advantage over e8 (for vardct)
2022-12-11 10:08:40
yes, it remains only to solve all the problems...
2022-12-11 10:09:46
strange, I thought that this problem was solved, because in the original photo where I found it, it was no longer there, but apparently it was not completely solved
2022-12-11 10:10:29
and yes, there are no such artifacts on the e3 parameter
2022-12-11 10:12:46
more precisely, we are still talking about quality 90; at 98 there is nothing anymore
Maiki3
2022-12-12 04:30:52
Is there any hope of getting JXL support back into Chrome?
BlueSwordM
Maiki3 Is there any hope of getting JXL support back into Chrome?
2022-12-12 04:46:49
Yes, but they will have to be forced.
Peter Samuels
2022-12-12 04:53:12
Like, at gunpoint? I don't understand what you mean by forced.
BlueSwordM
Peter Samuels Like, at gunpoint? I don't understand what you mean by forced.
2022-12-12 04:54:35
No, forced in the sense that they have no other choice, because of performance concerns or industry support.
Jarek
2022-12-12 05:08:49
maybe through Android support? https://issuetracker.google.com/u/0/issues/259900694?pli=1
Deleted User
2022-12-12 06:26:08
Maybe media coverage could still reverse this removal and stimulate wider discussion? Searching for "why should delete chrome" turns up lots of articles, and their authors might find it worth covering this pathological situation of an overpowered browser too. Maybe e.g. prepare a draft of such an email and suggest people send it to journalists?
pandakekok9
2022-12-12 01:00:29
2022-12-12 01:01:17
Inline JPEG-XL in Ambassador, a ChatZilla fork based on the Unified XUL Platform (i.e. same as Pale Moon's platform)
2022-12-12 01:02:07
Done with a simple ChatZilla plugin: https://github.com/jobbautista9/Chatzilla-Plugins/blob/master/inline-images/init.js
Jyrki Alakuijala
DZgas Ж yes, it remains only to solve all the problems...
2022-12-12 02:16:50
I'm working on this
DZgas Ж
Jyrki Alakuijala I'm working on this
2022-12-12 03:04:06
🙏
Jyrki Alakuijala
2022-12-12 08:10:07
2022-12-12 08:10:15
is the r2 getting good enough
2022-12-12 08:10:26
there are so many things wrong with this jxl compression 🙂
2022-12-12 08:10:53
original here once again
2022-12-12 08:12:03
compression is easy -- now just need to figure out git again to get this stuff in 🙂
OkyDooky
2022-12-12 08:17:50
https://github.com/niutech/jxl.js anyone think this could work for pushing jxl despite Google's actions?
DZgas Ж
Jyrki Alakuijala
2022-12-12 09:58:24
🙂 ..........there really is still a lot of work to do
2022-12-12 09:59:36
because I can see red artifacts on green without zoom
daniilmaks
DZgas Ж because I can see red artifacts on green without zoom
2022-12-13 04:41:50
same
fab
Jyrki Alakuijala
2022-12-13 10:53:35
good compression
Jyrki Alakuijala
2022-12-14 01:06:10
https://github.com/libjxl/libjxl/pull/1971 should improve things a bit
2022-12-14 01:06:31
it is not a full fix, just some of the wildest red-green artefacts disappear
afed
2022-12-14 01:26:10
a few updates for fast-lossless were because it is planned to be used somewhere, on mobile? https://github.com/libjxl/libjxl/pull/1966
veluca
afed a few updates for fast-lossless were because it is planned to be used somewhere, on mobile? https://github.com/libjxl/libjxl/pull/1966
2022-12-14 03:32:10
plan is to use it for `cjxl -e 1`
afed
2022-12-14 03:37:14
good, that would be useful
veluca
2022-12-14 11:06:32
have to say writing this PR description https://github.com/libjxl/libjxl/pull/1975 with a 15x speedup gave me some satisfaction xD
afed
2022-12-14 11:16:53
<:Poggers:805392625934663710> is it like using fast_lossless as a lib with exactly the same speed, instead of changing the existing libjxl code?
veluca
2022-12-14 11:17:11
yep, pretty much
2022-12-14 11:17:27
it can now do things like ICC
2022-12-14 11:17:45
potentially also animation, but 100% not tested
yurume
veluca have to say writing this PR description https://github.com/libjxl/libjxl/pull/1975 with a 15x speedup gave me some satisfaction xD
2022-12-14 11:20:03
oh great! I believe it is one of the highest priority tasks and am very glad to see this becoming real.
veluca
2022-12-14 11:21:04
you can still use fast_lossless directly, btw
afed
2022-12-14 11:32:40
maybe add effort 0, instead of replacing effort 1 (and rework it later for something with better speed/compression)? or is that too many numbers?
yurume
2022-12-14 11:39:59
I think the current e1 (for modular; it's the same as e2 for vardct) is doing more or less the same thing as fast_lossless, but more slowly
2022-12-14 11:40:09
```cpp
// VarDCT: same as kThunder.
// Modular: no tree, Gradient predictor, fast histograms
kLightning = 9
```
2022-12-14 11:40:23
this comment equally describes fast_lossless to my knowledge
afed
2022-12-14 11:49:43
for now maybe, but leave it for a future rework, also to be like webp (-z 0-9) or avif (-s 0-10, opposite, but)
veluca
yurume I think the current e1 (for modular; it's the same as e2 for vardct) is doing more or less the same thing as fast_lossless, but more slowly
2022-12-15 12:05:11
yep
2022-12-15 12:05:32
it was even *written* to be the same thing
yurume
2022-12-15 12:08:33
was the original e1 designed with SIMDification in mind then?
afed
2022-12-15 12:14:09
svt-av1 <:monkaMega:809252622900789269> `Encoder preset, presets < 0 are for debugging. Higher presets means faster encodes, but with a quality tradeoff, default is 10 [-2-13]`
2022-12-15 12:16:53
ah yeah, there's also vardct, but maybe something like turbo-jpeg mode (faster than libjpeg-turbo <:Poggers:805392625934663710> or like libjpeg-turbo with some jxl stuff)
veluca
2022-12-15 01:20:54
well, libjxl-tiny does exist
ElijahPepe
2022-12-15 01:21:35
How is JXL support in Firefox going? Is there a patch?
veluca
2022-12-15 01:21:50
although I do wonder what you could do with say mostly fixed entropy codes, 8x8 only, simple heuristics and 16-bit arithmetic...
afed
2022-12-15 01:29:44
though, lossy encoding speed is already fast enough and it's hard to find a use case that needs to be even faster, or maybe for MJXL?
ElijahPepe How is JXL support in Firefox going? Is there a patch?
2022-12-15 01:31:45
patches are ready but not accepted/reviewed <https://phabricator.services.mozilla.com/p/wwwwwwww/>
ElijahPepe
2022-12-15 01:32:38
Anything I can do to help out with this project? I'd like to expand support for JXL.
yurume
veluca have to say writing this PR description https://github.com/libjxl/libjxl/pull/1975 with a 15x speedup gave me some satisfaction xD
2022-12-15 01:55:06
Hot take: fjxl is so much faster that we can actually run it as a preprocessing step before the actual encoding. If the encoding doesn't seem to be significantly smaller than fjxl, give up quickly and return the fjxl-encoded image instead. (This of course can be done on a per-tile basis as well.)
2022-12-15 01:55:55
I think this approach also solves a larger & slower VarDCT issue (when the input is not suitable for lossy encoding)
veluca
2022-12-15 01:55:58
To be fair, it's not nearly as fast for photographic images
2022-12-15 01:56:24
Or for >8bit, at least not yet
yurume
2022-12-15 01:56:39
Ah, fair (for >8bit)
veluca
2022-12-15 01:56:48
Also I wouldn't really want to wire things up for that to work 🤣
yurume Ah, fair (for >8bit)
2022-12-15 01:57:11
I mean, one just needs to SIMDfy the other paths, but still
2022-12-15 01:57:28
(I am btw curious to see what would happen with avx512 on zen4...)
yurume
2022-12-15 01:58:08
AVIF team's "mistake" of encoding noto emoji with VarDCT did seem to me a worthy thing to elide, it is a genuine UX problem
2022-12-15 01:59:18
If we can cheaply decide if the image is better encoded in modular or vardct I'm all in, and fjxl seems a good candidate for that
veluca To be fair, it's not nearly as fast for photographic images
2022-12-15 02:01:11
I thought it just produces a larger image, and flat histogram should be easier to detect, is it false?
veluca
2022-12-15 02:01:52
well you can't RLE a photographic image
2022-12-15 02:01:59
and RLE is faster
2022-12-15 02:02:13
I mean, not like it is *slow* for photo
2022-12-15 02:08:02
`3008 x 2000, geomean: 185.24 MP/s [107.10, 187.40], 100 reps, 0 threads.`
afed
2022-12-15 02:09:20
decoding will also be slower, or we will need a similar fast_lossless_decoder <:KekDog:805390049033191445>
yurume
2022-12-15 02:11:50
libjxl does special-case fjxl for MA tree 😉
veluca and RLE is faster
2022-12-15 02:14:39
*encoding* RLE, right? I thought detecting RLE (or its lack) is more or less uniform.
veluca
2022-12-15 02:14:51
encoding it, yep
yurume
2022-12-15 02:15:14
if you have not much RLE and the histogram is flat, we can just signal that this is not fit for fjxl (so to say)
veluca
2022-12-15 02:16:31
still, decoding photo fjxl (single-threaded) is like 30MP/s
2022-12-15 02:16:41
I mean, it's OK, but not nearly as good
yurume
2022-12-15 02:17:34
ah I was talking about fjxl as a preprocessing step, not fjxl as a main encoder
veluca
2022-12-15 02:18:21
fair enough
yurume
2022-12-15 02:19:43
say we want to encode at e3; then we first encode at e1, but if that would result in too little RLE and too flat a histogram we simply skip coding. e3 will do the same thing as before, but if i) e1 succeeded and ii) e3 seems actually larger than e1, then discard e3 and switch to e1.
2022-12-15 02:20:14
and ii) can be done during the encoding step, and on a per-tile basis
2022-12-15 02:20:41
having something like fjxl gives many more options for libjxl
afed
2022-12-15 02:24:49
this speedup needs to be reworked? https://github.com/libjxl/libjxl/pull/1219
Jyrki Alakuijala
yurume say we want to encode at e3; then we first encode at e1, but if that would result in too little RLE and too flat a histogram we simply skip coding. e3 will do the same thing as before, but if i) e1 succeeded and ii) e3 seems actually larger than e1, then discard e3 and switch to e1.
2022-12-15 09:40:30
love that idea
joppuyo
2022-12-15 10:03:06
I was actually thinking about adding a feature in my JXL library called auto mode (https://github.com/joppuyo/jpeg-xl-encode) where it would encode the file first as lossless and then lossy, and then pick whichever has the lower file size, but that would basically double the encoding time. So I was thinking about performing the encoding at a very fast speed in both vardct and modular and, based on the result, encoding the file at a slower speed
2022-12-15 10:04:09
but I would need to test 1. if this is actually faster than just doing the encoding twice. 2. do the results even agree (does the same mode win both at fast and slow encoding speed)
veluca
2022-12-15 10:11:27
(for 2: probably not always)
_wb_
2022-12-15 10:31:25
fast lossless and slow lossless might be quite far apart in terms of compression results, since fast lossless doesn't do all the funky stuff like real context modeling with MA trees, picking the best RCTs and predictors, doing local palettes, using ANS, etc.
2022-12-15 10:34:43
But one thing that could work in practice is doing fast lossless to see what kind of size that gives, say it gives a size `FLL` = 300 kb. Then do lossy and see what kind of size you get, say it gives a size `Lossy` = 150 kb. Then if and only if `FLL / Lossy < some threshold`, you could try slow lossless and see what it produces.
2022-12-15 10:35:54
Doing only an extra fast lossless encode is very cheap — it will maybe add 1 or 2 percent to the total time.
2022-12-15 10:37:16
Doing the extra slow lossless encode is something you don't want to do all the time though, since slow lossless encoding is slower than lossy encoding. So you should pick the threshold in a way that makes it likely enough that the slow lossless encoding will actually be what you end up using as the result.
2022-12-15 10:39:14
(setting the threshold to 1 is certainly safe, but you could probably set it to 1.5 or 2 and still have a close to 100% rate of ending up using the lossless result — especially if you allow the lossless one to be a bit larger than the lossy one, since it is after all also higher quality)
2022-12-15 10:40:44
Ideally we would implement this kind of logic in libjxl itself btw, so all applications can automatically benefit from it.
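A rough sketch of that decision logic as a shell wrapper around cjxl. This is a hypothetical script, not an existing libjxl feature; the effort values, the 1.5 threshold, and GNU `stat -c%s` are all assumptions:
```sh
#!/bin/sh
# Sketch: fast lossless first, then lossy; only try slow lossless when
# FLL / Lossy is below a threshold (1.5 here, i.e. 2*FLL < 3*Lossy).
in="$1"; out="$2"

cjxl -d 0 -e 1 "$in" fll.jxl      # fast lossless (fjxl path), very cheap
cjxl -d 1 "$in" lossy.jxl         # regular lossy encode

fll=$(stat -c%s fll.jxl)
lossy=$(stat -c%s lossy.jxl)

if [ $((2 * fll)) -lt $((3 * lossy)) ]; then
    cjxl -d 0 -e 7 "$in" slow.jxl          # slow lossless, only when worth trying
    if [ "$(stat -c%s slow.jxl)" -lt "$fll" ]; then
        cp slow.jxl "$out"                 # keep the smaller lossless result
    else
        cp fll.jxl "$out"
    fi
else
    cp lossy.jxl "$out"                    # lossless unlikely to win, keep lossy
fi
```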
afed
2022-12-15 11:01:35
```
jonsneyers: Not for this PR, but I would eventually use fast lossless at both e1 and e2 (with different fast effort settings, e.g. no palette check and cheaper histogram sampling at e1), and then make e3 do something that also tries the fast path since it can very well give better results for non-photo with big run lengths. I'm not sure if we still need the current e2 except as a fallback option for cases not covered by the fast path.
```
maybe it is easier to make effort 0, for even faster fast_lossless mode, and effort 1 for the current settings? and maybe not replace effort 2, but make it as fast_lossless hybrid?
_wb_
2022-12-15 11:08:07
Yeah that's also an option. We did leave room for an effort 0...
afed
2022-12-15 11:15:39
because fast_lossless with internal efforts is not much different in speed and libjxl effort 2 compresses better than any fast_lossless modes, maybe there will also be a room for something like zpng mode
```
JXL_Lossless/f 0.860 RelCmpRatio 0.630 RelEncSpeed 0.120 RelDecSpeed (2)
ZPNG_Lossless  0.864 RelCmpRatio 0.747 RelEncSpeed 0.927 RelDecSpeed (3)
```
veluca
afed because fast_lossless with internal efforts is not much different in speed and libjxl effort 2 compresses better than any fast_lossless modes maybe there will also be a room for something like zpng mode ```JXL_Lossless/f 0.860 RelCmpRatio 0.630 RelEncSpeed 0.120 RelDecSpeed (2) ZPNG_Lossless 0.864 RelCmpRatio 0.747 RelEncSpeed 0.927 RelDecSpeed (3)```
2022-12-15 11:17:22
that seems a bit suspicious
_wb_
2022-12-15 11:21:33
not sure if libjxl e2 is always better than fast lossless — libjxl e2 does not do palette nor RLE, while fast lossless does. The only thing libjxl e2 has that fast lossless doesn't, is ANS...
afed
2022-12-15 11:22:55
maybe this is a wrong qoir benchmark, but my thought was about more different modes, or something like the fast filter search in fpnge. Or does using different filters not make much difference in jxl? <https://github.com/veluca93/fpnge/issues/16>
_wb_ not sure if libjxl e2 is always better than fast lossless — libjxl e2 does not do palette nor RLE, while fast lossless does. The only thing libjxl e2 has that fast lossless doesn't, is ANS...
2022-12-15 11:26:56
what if we combine fast lossless with ANS?
_wb_
2022-12-15 11:27:00
it could be interesting to extend the fast path with ANS — it would of course be slower because no SIMD in the entropy coding step itself, and it's trickier to do (need to do things in reverse order), but it could still be a lot faster than libjxl e2 and strictly better...
afed maybe this is a wrong qoir benchmark, but my thought was about more different modes or something like a fast filter search in fpnge or using different filters doesn't make much difference in jxl? <https://github.com/veluca93/fpnge/issues/16>
2022-12-15 11:33:30
Both libjxl e2 and the fast code path use a fixed predictor (the Gradient one) and a fixed RCT (YCoCg). One way to make it slower-but-better could be to use e.g. a different predictor and/or RCT (e.g. per frame or per group), based on either some heuristics or just trying multiple things.
veluca
2022-12-15 11:34:14
I don't think ANS would help that much for these cases 😄
yurume
2022-12-15 11:35:43
ANS doesn't leave any room for parallelism unless the format is specially designed for split streams, to my knowledge
afed
2022-12-15 11:35:45
and now some libjxl settings affect fast lossless, like num_threads, group-size?
veluca
2022-12-15 11:37:05
num_threads: yep
2022-12-15 11:37:09
group-size: nope
2022-12-15 11:37:20
would that be hard to implement? also nope
afed
2022-12-15 11:37:37
I think it would be useful
2022-12-15 11:42:17
and animation should work as far as I can see (but I have not tested it yet)
veluca
2022-12-15 11:42:38
(I don't believe it for a second)
afed
2022-12-15 11:52:45
also now able to completely drop libjpeg and use jpegli, maybe also fpnge for png <:KekDog:805390049033191445> (ah no, decoder is still needed <:SadCat:805389277247701002> )
_wb_
veluca I don't think ANS would help that much for these cases 😄
2022-12-15 11:53:18
I think it helps for images that are well-predicted by Gradient but without having lots of big zero runs, i.e. images where the zero token gets a big chance in the histogram that may deviate significantly from the chances that Huffman can do perfectly (which are of the form 1/2^k). If e.g. the zero token has a probability of 18% then Huffman forces it to be either 12.5% or 25%.
2022-12-15 11:56:29
Something like synthetic images with lots of smooth gradients can be like that, where there are too many +-1 errors in the residuals to get long enough runs of zeroes, but still the histogram of residuals makes ANS have an advantage over Huffman
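A concrete version of that granularity argument: a Huffman code can only assign an integer code length $\ell$, which corresponds to an implied probability of $2^{-\ell}$. For a zero token with true probability $p = 0.18$,
$$-\log_2 0.18 \approx 2.47 \text{ bits}, \qquad \text{while Huffman must spend } 2 \text{ bits } (2^{-2} = 25\%) \text{ or } 3 \text{ bits } (2^{-3} = 12.5\%),$$
so every zero token costs noticeably more than its entropy; ANS (like arithmetic coding) can approach the 2.47-bit cost.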
Jyrki Alakuijala
afed also now able to completely drop libjpeg and use jpegli, maybe also fpnge for png <:KekDog:805390049033191445> (ah no, decoder is still needed <:SadCat:805389277247701002> )
2022-12-15 11:56:43
I'm happy that you also see a nice future for Jpegli -- I think it is an amazing effort that hasn't yet been found by others
veluca
2022-12-15 11:57:30
I just realized - we're not really doing Huffman codes per-channel in fjxl, are we? <@794205442175402004>
_wb_
2022-12-15 11:59:18
I don't remember but if we don't, I suppose it should help to have separate histograms for Y than for Co and Cg — though I vaguely remember we tried that already at some point? Or was that something else?
2022-12-15 12:00:52
maybe getting the histograms sampled for all channels (up to 4) and then doing some simple clustering heuristic could be a good idea
veluca
2022-12-15 12:01:36
We can also just keep them all 😄
2022-12-15 12:02:31
Another easy win would be to change the starting counts depending on bitdepth
afed
Jyrki Alakuijala I'm happy that you also see a nice future for Jpegli -- I think it is an amazing effort that hasn't yet been found by others
2022-12-15 12:03:43
it will be easier for most users when there is a jpegli dll to seamlessly replace libjpeg/libjpeg-turbo
Jyrki Alakuijala
afed it will be easier for most users when there is a jpegli dll to seamlessly replace libjpeg/libjpeg-turbo
2022-12-15 02:17:14
feel free to participate in that -- I think it will be hilariously successful when it gets ready -- szabadka@ (Zoltan) is leading that effort and he is very approachable
HLBG007
2022-12-15 02:39:12
It would be good to have a JPEG XL encoder with which I can encode images in such a way that I can view them with libjpeg-turbo, but when I look at them with a JPEG XL decoder, I see the advantages.
afed
Jyrki Alakuijala feel free to participate in that -- I think it will be hilariously successful when it gets ready -- szabadka@ (Zoltan) is leading that effort and he is very approachable
2022-12-15 02:44:49
I think jpeg (li), jpeg-hdr, jpeg-xyb encoding through cjxl (or a separate tool), dll for windows with libjpeg compatibility and jpeg-xyb conversion by jpegli if there is no ICCv4 support, is all that is really needed
Traneptora
HLBG007 It would be good to have a JPEG XL encoder with which I can encode images in such a way that I can view them with libjpeg-turbo, but when I look at them with a JPEG XL decoder, I see the advantages.
2022-12-15 02:45:35
in order to be able to view an image with a legacy JPEG decoder, it has to be a legacy JPEG file
2022-12-15 02:46:06
so you won't be able to create an image file that can be viewed with TurboJPEG-based viewers out of the box, while also benefiting from JPEG XL
_wb_
2022-12-15 02:53:54
At some point we considered having legacy jpeg app markers to store some extra info like "do gaborish" or even an epf strength field. Probably not worth the effort but it's doable.
HLBG007
2022-12-15 02:59:28
I think this can work. The JPEGXL encoder creates a jpeg image with lower quality, the encoder stores supplementary information for creating a jpegxl image in an extra chunk. The previously created image in the jpeg should be the basic structure so that no byte is lost.
Traneptora
HLBG007 I think this can work. The JPEGXL encoder creates a jpeg image with lower quality, the encoder stores supplementary information for creating a jpegxl image in an extra chunk. The previously created image in the jpeg should be the basic structure so that no byte is lost.
2022-12-16 12:54:55
that would be hard to do as JPEG XL doesn't exactly have extra data that increases quality, but rather is more efficient at encoding the data it does encode
Jyrki Alakuijala
afed I think jpeg (li), jpeg-hdr, jpeg-xyb encoding through cjxl (or a separate tool), dll for windows with libjpeg compatibility and jpeg-xyb conversion by jpegli if there is no ICCv4 support, is all that is really needed
2022-12-16 01:38:30
I think JPEGLI needs to take some distance from JPEG XL for having the best reception for JPEGLI -- for example, I would be surprised if Chromium would allow a libjxl JPEGLI because they just removed libjxl thingy, but they might eventually allow a separate JPEGLI (at least once it is adopted everywhere elsewhere)
afed
2022-12-16 01:53:44
yeah, I meant just encoding (at least until browsers want to add an encoder as well), but it's better to have a separate decoder from the beginning
BlueSwordM
Traneptora that would be hard to do as JPEG XL doesn't exactly have extra data that increases quality, but rather is more efficient at encoding the data it does encode
2022-12-16 03:36:43
I mean, it could be done since stuff like Dolby Vision P7/P8 does work like that
Jyrki Alakuijala
afed yeah, I meant just encoding (at least until browsers want to add an encoder as well), but it's better to have a separate decoder from the beginning
2022-12-16 07:42:30
you will get substantial benefits from having only the decoder or only the encoder, the deployment doesn't need to be synchronized
pandakekok9
2022-12-16 10:45:43
I wonder if anyone has done the opposite of embed.moe: at the end of the address bar you type in a direct URL to a JPEG, PNG, or GIF, and you receive a JPEG-XL conversion almost instantly
2022-12-16 10:46:17
Well turns out that's very easy to do with some CGI and shell scripting: https://job.tilde.team/cgi-bin/cjxl.cgi
2022-12-16 10:46:33
Might be very insecure tho lol but it works
2022-12-16 10:46:51
improver
2022-12-16 11:24:22
VERY insecure
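For illustration, a minimal sketch of what such a cjxl CGI wrapper could look like. This is hypothetical, not the script linked above, and as the messages here point out it fetches an arbitrary caller-supplied URL, so it is unsafe without validation, size limits, and sandboxing; the `-q 90 -e 5` settings and the temp-file handling are assumptions:
```sh
#!/bin/sh
# Hypothetical CGI sketch: take an image URL from the query string,
# fetch it, convert it with cjxl, and return the .jxl to the client.
url="$QUERY_STRING"
tmp_in=$(mktemp) || exit 1
tmp_out="${tmp_in}.jxl"
trap 'rm -f "$tmp_in" "$tmp_out"' EXIT

curl -sL --max-filesize 20000000 -o "$tmp_in" "$url" || exit 1
cjxl -q 90 -e 5 "$tmp_in" "$tmp_out" >/dev/null 2>&1  || exit 1

printf 'Content-Type: image/jxl\r\n\r\n'
cat "$tmp_out"
```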
pandakekok9
2022-12-16 01:25:18
The imgur link doesn't seem to be a real JPEG?
2022-12-16 01:32:02
I mean the only non-default option I added is the effort parameter, which I set to 5 so it encodes faster while still not sacrificing quality. But other than that, all other parameters should be the default
2022-12-16 01:32:37
This one has JPEG reconstruction data present: https://job.tilde.team/cgi-bin/cjxl.cgi?https://i.redd.it/o4n5g77vrn0a1.jpg
_wb_
2022-12-16 02:07:50
extensions don't really matter
2022-12-16 02:08:26
next time someone asks for separate extensions for lossless and lossy jxl, I'll say they should just call their lossy jxl files `.jpg` and the lossless ones `.png` 🙂
Traneptora
pandakekok9
2022-12-16 06:27:16
if you want to do this it makes more sense to me to have the CGI accept a file as a POST request
pandakekok9
Traneptora if you want to do this it makes more sense to me to have the CGI accept a file as a POST request
2022-12-17 12:47:55
That kinda defeats what I'm going for with the CGI which is to make an opposite of embed.moe :P
2022-12-17 12:48:17
But if I'm serious, then yeah I would've gone with POST
2022-12-17 12:49:10
But then there are already other websites which convert images to JPEG-XL for you with a file upload, so mine would be redundant and just more insecure compared to them
2022-12-19 08:38:55
Does anyone have a test for ICC support like this page but for JPEG-XL? https://www.color.org/version4html.xalter
2022-12-19 08:39:53
I tried transcoding one of the images to JPEG-XL, but looks like cjxl can't recompress the JPEG (and apparently it's a known issue reported by someone who's doing exactly the same thing as I'm doing! https://github.com/libjxl/libjxl/issues/1810)
2022-12-19 08:40:44
Just wanted to make sure my ICC work in Pale Moon is done correctly, which is why I'm asking. :) https://repo.palemoon.org/MoonchildProductions/UXP/issues/2042
2022-12-19 09:18:30
Can't open the image, here's djxl's output: https://bhh.sh/6ln
2022-12-19 09:19:08
Outputting to PNG also doesn't work
2022-12-19 09:22:40
Yeah, that's strange. I can convert it to JPEG-XL just fine with cjxl
2022-12-19 09:23:21
And the output JPEG-XL can't be decoded by djxl...
Jyrki Alakuijala
2022-12-19 10:48:01
clever! please file an issue 🙂
pandakekok9
2022-12-19 12:23:13
Yep, that works for me, thanks! Looks like I did the ICC wrong...
2022-12-20 12:38:56
2022-12-20 12:39:20
Something funky going on with Chromium 108's color correction in JPEG-XL
2022-12-20 12:39:41
File in question if you want to test it yourself
_wb_
2022-12-20 03:50:30
I can reproduce in Chrome 108 on MacOS
2022-12-20 03:53:15
Usually (when flipping between tabs) it looks close to how it should look, but then sometimes I get something like this
veluca
2022-12-20 04:06:35
uh, that doesn't seem right
fab
2022-12-20 06:20:47
how to encode with jpegli?
Demiurge
2022-12-21 06:18:51
Doesn't everyone?
pandakekok9
2022-12-21 06:43:24
Well I'm currently in 33 servers, and I regularly look at only like 4 of them lol
improver
2022-12-21 08:17:37
i'm in even less (12) since i've cut down unused stuff a bit
Traneptora
Demiurge Doesn't everyone?
2022-12-22 12:12:01
no
BlueSwordM
Demiurge Doesn't everyone?
2022-12-22 12:51:42
No.
190n
Demiurge Doesn't everyone?
2022-12-22 01:02:49
no
Demiurge
2022-12-22 09:54:53
I also had somewhere around 1000 tabs open too at some point until they accidentally got blackholed
yurume
2022-12-22 10:00:38
well, 1000 tabs are different from 1000 pings
Jyrki Alakuijala
fab how to encode with jpegli?
2022-12-22 11:07:47
https://github.com/libjxl/libjxl/blob/main/lib/jpegli/README.md shows how to make jpegli your libjpeg -- then it should work with cjpeg and gimp etc.
afed
2022-12-22 11:16:01
does libjpeg.so work on windows?
Demiurge
2022-12-22 11:52:39
are there any quality or speed comparisons between jpegli and turbo/mozjpeg
Traneptora
afed does libjpeg.so work on windows?
2022-12-22 07:34:12
no, `lib*.so` is only for unix and unix-type operating systems
2022-12-22 07:34:41
windows doesn't have turbojpeg as its jpeg.dll iirc
2022-12-22 07:34:51
but rather implements jpeg itself
Jyrki Alakuijala
Demiurge are there any quality or speed comparisons between jpegli and turbo/mozjpeg
2022-12-22 08:59:27
some of Zoltan's PRs include comparisons -- they are just in tabulated form of BPP*pnorm rather than visualizations. Usually libjpeg-turbo is better than mozjpeg in the high quality range (q90+), and jpegli is about 36 % more dense there
2022-12-22 09:00:49
there are no user studies like what Jon did for JPEG XL/AVIF/WebP/Mozjpeg
2022-12-22 09:01:24
but it is relatively easy to observe a set of images at q80-q90 qualities and the difference in quality is quite dramatic between libjpeg (or mozjpeg) and jpegli
2022-12-22 09:29:09
did anyone try https://github.com/libjxl/libjxl/pull/1987 -- I'd be happy to receive any feedback on it before changing more things
2022-12-22 09:29:54
it is merged and should help many previously difficult images
2022-12-22 09:30:17
it is not 'magical' in any means, just allocates more bits where there are usually difficulties
2022-12-22 09:30:38
should make quality more robust in general
2022-12-22 09:31:24
I consider it a half-way fix, it seemed to fix things about 35-50 % or so rather than fully
2022-12-22 09:32:08
it doesn't help with DZgas' red-green image, but the improvement was inspired by that failure -- I still need to work for DZgas some more 😄
afed
2022-12-22 09:37:18
i tried it and yes, there are fewer ringing artifacts at the edges, though sometimes there is more blurring, but overall this is a positive change. Too bad there is no longer an option for encoding to an exact size, because visual comparisons are taking more time (since I need to find quality options to fit the same size or make some complex scripts)
Jyrki Alakuijala
2022-12-22 09:46:45
Thank you ❤️
2022-12-22 09:48:13
yes, I allocate somewhat fewer bits in smooth areas, leading to some more blurring of subtle textures
afed
2022-12-22 09:49:16
and also this pr is not ready for review? https://github.com/libjxl/libjxl/pull/1395
Jyrki Alakuijala
2022-12-22 09:53:14
I think it may have implications to 8x8 progression
afed
2022-12-22 09:57:32
sad, looks like it would also help a little more on such images
Demiurge
Jyrki Alakuijala but it is relatively easy to observe a set of images at q80-q90 qualities and the difference in quality is quite dramatic between libjpeg (or mozjpeg) and jpegli
2022-12-22 10:03:01
Well that's pretty impressive, if there's a new JPEG encoder that is dramatically better than mozjpeg in that range. There should definitely be some web page about it with visual comparisons to boast about such a big achievement.
afed
2022-12-22 10:07:26
the main problem is incorrect decoding without ICCv4 support
Jyrki Alakuijala
2022-12-22 10:07:29
https://twitter.com/jyzg/status/1569249920556888066
2022-12-22 10:07:35
here 🙂
2022-12-22 10:09:59
it is possible to remove the ICCv4 requirement, but then we lose the 6 from the 36 %, and get about 30 % savings
2022-12-22 10:10:17
it is time for people to make ICCv4 work -- it is already a 10-year-old spec
2022-12-22 10:10:25
even Mozilla enabled it now
2022-12-22 10:11:34
we'll create a comparison page when we are more complete with the implementation; likely the implementation will be complete in April 2023, and in May 2023 we will have a good comparison page
afed
2022-12-22 10:21:02
and if lower quality were also better than mozjpeg, I think jpegli would be the clear choice (because sometimes not only high quality is needed, and using more than one encoder is not very convenient)
Fraetor
Jyrki Alakuijala it is time for people to make ICCv4 work -- it is already a 10-year-old spec
2022-12-22 10:30:08
Hopefully this effort can have the side effect of improving the state of colour management.
Jyrki Alakuijala
2022-12-22 10:34:29
as I observe it, HDR and color management don't interest the right people
2022-12-22 10:35:26
very senior people can think that they are simple topics and assign not the best people to work on those topics, but they are rather complex to get right and have many subtleties
2022-12-22 10:35:44
It will likely take another 10 years for humanity to get HDR and color management somewhat right
Fraetor
2022-12-22 10:40:44
I'm keeping an eye on https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/14 but I feel it will be a long time yet.
Demiurge
Jyrki Alakuijala https://twitter.com/jyzg/status/1569249920556888066
2022-12-22 11:09:03
lol for some reason discord mangles the colors (I'm using discord in firefox)
2022-12-22 11:09:31
But the actual images look fine in firefox when I click on them
Jyrki Alakuijala
2022-12-22 11:09:31
discord is pre 2012 color management
2022-12-22 11:09:40
yeah, good stuff
Demiurge
2022-12-22 11:10:10
when I go to the original image I mean
Jyrki Alakuijala
2022-12-22 11:10:24
yeah
Demiurge
2022-12-22 11:17:31
How long has this worked in firefox?
Fraetor
Demiurge How long has this worked in firefox?
2022-12-22 11:18:32
Since 108, so like two weeks.
Demiurge
2022-12-22 11:19:50
lol wow
afed
2022-12-22 11:21:21
yeah https://discord.com/channels/794206087879852103/794206087879852106/1052310284204777553
pandakekok9
2022-12-23 02:51:02
Hi, can anyone on Firefox with <@288069412857315328>'s ICC patch test if https://job.tilde.team/jxl-icc crashes the tab?
2022-12-23 02:51:38
Make sure color management is enabled
sklwmp
Jyrki Alakuijala it is possible to remove the ICCv4 requirement, but then we lose the 6 from the 36 %, and get about 30 % savings
2022-12-23 03:07:43
If not ICCv4, what other "requirements" would an XYB JPEG need from a decoder?
2022-12-23 03:08:22
Also, I personally think that losing 6% savings for more compatibility is a decent compromise, though I would prefer it be an encoder option somehow.
Jyrki Alakuijala
sklwmp If not ICCv4, what other "requirements" would an XYB JPEG need from a decoder?
2022-12-23 06:06:56
an ideal decoder computes the DCT and color conversions accurately, uses the Laplacian-hypothesis dequantization bias, and uses a 16 bit frame buffer (16 bits is not possible through the usual ABI); having a stock decoder loses another 7 %
w
pandakekok9 Hi, can anyone on Firefox with <@288069412857315328>'s ICC patch test if https://job.tilde.team/jxl-icc crashes the tab?
2022-12-23 06:44:04
2022-12-23 06:44:18
when i djxl the upper-left into png it looks like that as well
pandakekok9
w
2022-12-23 08:54:10
Hmm, weird, I could've sworn Waterfox with your patches crashed the tab...
2022-12-23 08:54:38
I used this PR btw: https://github.com/WaterfoxCo/Waterfox/pull/2938
username
2022-12-23 09:02:07
hmm maybe the patches were combined wrong? I was the one who downloaded the diffs of each of the 3 patches and manually combined them together, so maybe I messed up?
pandakekok9
2022-12-23 09:05:43
It's usually a good idea to split your changes into multiple commits if they're tackling different issues, which I think should've been done here...
2022-12-23 09:06:23
That way it's easier to see what exactly got changed
username
2022-12-23 09:12:06
I just did a diff between the pull request and the 3 phabricator pages and I don't see a single difference
w
2022-12-23 09:12:08
there might also have been a change I made that I didn't upload to phab
username
2022-12-23 09:12:33
ah was just about to ask that
w
2022-12-23 09:13:11
you can try exporting the diff that you imported and comparing it to the last patch I uploaded (excluding the lut thing)
username
w
2022-12-23 09:14:12
this patch I assume?
2022-12-23 10:14:05
just checked against that patch and there is basically no difference
pandakekok9
2022-12-23 11:52:34
Btw the upper-left picture in the JPEG-XL part is intentionally colored like that because trying to transcode the original JPEG just crashes cjxl, so I removed the ICC profile
_wb_
2022-12-23 11:58:00
They are supposed to look the same
joppuyo
2022-12-23 12:49:16
Is there some kind of high level description what jpegli is and what is does? <@532010383041363969>
2022-12-23 12:50:18
As far as I understand it’s similar to mozjpeg but I would like to know what improvements it has. Presumably it brings some ideas from JPEG XL to classic JPEG?
afed
2022-12-23 12:53:16
it's like improved guetzli, but much faster and with xyb (like using all the jpeg xl features that can be applied to jpeg), as far as I understand https://github.com/google/guetzli
Demez
pandakekok9 It's usually a good idea to split your changes into multiple commits if they're tackling different issues, which I think should've been done here...
2022-12-23 05:57:57
with this one I had already made the changes all locally and didn't feel like redoing it, probably should have though
VcSaJen
2022-12-23 06:46:08
`git citool` is useful for that
andrew
2022-12-24 08:50:45
is there any way to get cjxl to preserve metadata about DPI/page size?
2022-12-24 08:56:08
huh, I just found this https://github.com/libjxl/libjxl/issues/817
_wb_
2022-12-24 08:56:36
You can put that in Exif or XMP
OkyDooky
2022-12-24 09:03:51
From the Github page:
> Note: Guetzli uses a large amount of memory. You should provide 300MB of memory per 1MPix of the input image.
> Note: Guetzli uses a significant amount of CPU time. You should count on using about 1 minute of CPU per 1 MPix of input image.
Holy shit, it better improve on that. I appreciate smaller image file sizes, but that sounds like a ridiculous cost for a "typically 20-30% smaller" file. Is Jpegli a separate library that existing applications could simply use in place of mozjpeg or similar? Or is it innately bundled with the JXL one?
_wb_
2022-12-24 09:05:19
Jpegli is much faster than guetzli
2022-12-24 09:05:47
And yes, it aims to be like mozjpeg, compatible with libjpeg
OkyDooky
2022-12-24 09:07:35
Nice. I'll keep that in mind to suggest to developers of image-related apps (like gallery ones).
andrew
_wb_ You can put that in Exif or XMP
2022-12-24 09:29:00
XNViewMP doesn't seem to be interpreting it correctly?
BlueSwordM
andrew XNViewMP doesn't seem to be interpreting it correctly?
2022-12-24 09:30:32
>JPEG2000 <:monkaMega:809252622900789269>
_wb_
2022-12-24 09:32:32
No idea what xnview does with exif info, jxl doesn't really care what you put in the exif as long as it looks like it's exif data
andrew
2022-12-24 09:33:08
do I just pray that applications in the future will be able to interpret the resolution data I put in EXIF
_wb_
2022-12-24 09:39:18
What is incorrect btw in what xnview says?
andrew
_wb_ What is incorrect btw in what xnview says?
2022-12-24 09:40:10
it seems to be assuming 72 DPI even though the EXIF says 600 DPI
_wb_
2022-12-24 09:41:06
236.23 pixels per cm is the same as 600 pixels per inch
andrew
_wb_ 236.23 pixels per cm is the same as 600 pixels per inch
2022-12-24 09:41:44
if you look at the bottom it says the image is 70.92x91.49 inches
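The arithmetic behind both readings, as a quick check:
$$\frac{600\ \text{px/in}}{2.54\ \text{cm/in}} \approx 236.2\ \text{px/cm}, \qquad \frac{5106\ \text{px}}{72\ \text{dpi}} \approx 70.9\ \text{in}, \quad \frac{6587\ \text{px}}{72\ \text{dpi}} \approx 91.5\ \text{in},$$
so the 236.23 px/cm figure is just the 600 DPI value in metric units, while the 70.92 x 91.49 inch print size only comes out if the viewer falls back to an assumed 72 DPI.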
2022-12-24 09:42:17
let me try setting PixelXDimension and PixelYDimension in the EXIF too
2022-12-24 09:43:07
```
exiftool -PixelXDimension=5106 -PixelYDimension=6587 '.\scan - 0001.jxl'
Warning: Tag 'PixelXDimension' is not defined
Warning: Tag 'PixelYDimension' is not defined
Nothing to do.
```
huh
2022-12-24 09:43:42
oh, exiftool calls it ExifImageWidth
_wb_
2022-12-24 09:45:13
I don't remember what the right way to do it is, I always found it a bit odd to store physical dimensions of an image.
andrew
2022-12-24 09:46:36
it's necessary to store the physical dimensions of a scanned document
_wb_
2022-12-24 09:51:36
Why? Who cares what the paper size was?
2022-12-24 09:52:48
I mean I guess in some cases this kind of info is useful, but I struggle to find many cases where you would do different things just because the declared dpi is different
2022-12-24 09:57:58
When I am going to print an image, the physical dimensions are 1) how big I want them to be, 2) within the constraint that I need to have enough pixels for a good print at that size. Metadata that says how big the image is supposed to be is not going to make any difference for either 1) or 2). Maybe it can be a suggestion for 1), but it would be one that I would usually ignore.
andrew
2022-12-24 09:59:18
ok cool GIMP 2.99.14 is able to read the correct physical dimensions I put in EXIF
_wb_
2022-12-24 09:59:44
For displaying an image on screen, I don't care about the physical dimensions at all: I generally want to view photos using as much screen area as possible, whether that's on a 5 inch phone or on a 100 inch beamer.
andrew
_wb_ For displaying an image on screen, I don't care about the physical dimensions at all: I generally want to view photos using as much screen area as possible, whether that's on a 5 inch phone or on a 100 inch beamer.
2022-12-24 10:00:21
there have been some situations where I needed to display the "actual size" of a document or picture on my screen, like to match it against some template I'm printing on
_wb_
2022-12-24 10:01:11
Yeah I can see how that can be useful
andrew
2022-12-24 10:01:33
and for scanned documents it's usually useful to know how big a certain page was in case you need to replicate it
_wb_
2022-12-24 10:02:05
Do image viewers even know the physical dimensions of the screen typically though?
2022-12-24 10:02:14
I know Apple Preview does
andrew
2022-12-24 10:02:18
I was going to mention macOS
_wb_
2022-12-24 10:02:52
But Apple has the unique situation of having both the hw and software so it's kind of easy to know
2022-12-24 10:03:38
In general, I don't think you can know the physical dimension of a display device.
andrew
2022-12-24 10:04:02
EDID is able to transmit the physical dimensions of the display
_wb_
2022-12-24 10:04:54
Especially something like a beamer which can have arbitrary screen dimensions depending on how far you put it
2022-12-24 10:05:46
I guess in case of normal screens you can know then
paperboyo
_wb_ Why? Who cares what the paper size was?
2022-12-24 10:10:57
I do. And everyone who wants to reproduce it in print (or, more in theory, on screen) at a size relative to the original size. Also, possibly, this thing I never really fully got: https://github.com/w3c/csswg-drafts/issues/5025 (? https://gitlab.com/wg1/jpeg-xl/-/issues/123 & https://github.com/libjxl/libjxl/issues/817)
joppuyo
2022-12-24 10:14:30
DPI and dimensions are one of those things that are very useful when working with print and make zero sense when working on the web/digital
2022-12-24 10:14:41
just like CMYK or spot colors
_wb_
2022-12-24 10:18:22
I mean, "original size" assumes it started out as something on paper or some other physical medium, which doesn't apply to most photos or digital artwork.
2022-12-24 10:20:42
The intrinsic dimensions thing is something we do have in the jxl header (but software should probably be updated to actually respect it). It is useful if you want to have non-square pixels or if you want to pretend that an image has some dimensions while what is encoded actually has different dimensions.
joppuyo
2022-12-24 10:21:52
I'm not gonna comment any further because I legit don't have that much experience working with print. but, can't you figure out the dimensions of a scanned image just by looking at the pixel size and DPI?
_wb_
joppuyo I'm not gonna comment any further because I legit don't have that much experience working with print. but, can't you figure out the dimensions of a scanned image just by looking at the pixel size and DPI?
2022-12-24 10:22:58
Yes you can, but DPI is not something that is stored in the jxl headers. You can still put it in exif or xmp though.
2022-12-24 10:26:47
Suppose you have a 1000 pixel wide image, but someone wants to have a 980 pixel wide version of it. Then one option is to just change the intrinsic dimensions (in exif, in case of jpeg/png/webp, or in the image header in case of jxl) to say it's supposed to be 980 pixels wide, while the encoded image is still 1000 pixels
2022-12-24 10:29:24
This avoids doing a downscale to just-a-little-smaller dimensions, which is not a very good idea as it tends to introduce blur.
2022-12-24 10:36:23
It's mostly useful to avoid aspect ratios not being exact when using low-res previews — e.g. you have a 1002x648 image, but the placeholder is 100x64, then just upscaling would make it a few pixels too short vertically, which could be ugly in a web layout
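Worked out for the numbers above:
$$648 \times \frac{100}{1002} \approx 64.67 \;\rightarrow\; 64\ \text{px}, \qquad 64 \times \frac{1002}{100} \approx 641\ \text{px} \neq 648\ \text{px},$$
so a 100x64 placeholder scaled back up to full width ends up about 7 pixels short vertically; signalling the intrinsic dimensions avoids that mismatch in the layout.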
2022-12-24 10:37:32
Anyway I don't think jxl should be used for low res placeholders since the whole point of progressive is that you don't need placeholders anymore
pandakekok9
2022-12-25 08:27:36
I could do that I guess
w
2022-12-25 08:47:37
isn't that something that cloudinary could do
2022-12-25 08:51:11
i'm already using a hack to have the embed.moe server not oom so i imagine a simple service to encode would be even worse
2022-12-25 08:51:20
would be nice if libjxl had an option to limit memory usage if it were even possible
pandakekok9
2022-12-25 08:54:41
Try this: https://job.tilde.team/cgi-bin/cjxl.cgi?distance=0&img=https://jp.yamaha.com/files/gt_illust_pc_b15482b91f5dc9621cfe57e0935d49c5.png
2022-12-25 09:24:19
Actually maybe I should default the distance to 0 instead of 1, since most PNGs are simpler bitmaps that benefit more from lossless than lossy compression?
2022-12-25 09:24:38
Or nah
_wb_
2022-12-25 09:35:32
Note that lossless encoding is slower than lossy at the same effort setting
2022-12-25 09:37:03
The range of speeds between lossless e1 (fjxl) and e9 is quite huge: it goes from like 1000 MP/s down to 0.01 MP/s
2022-12-25 09:38:20
While for lossy, it goes from like 50 MP/s at e3 to 0.1 MP/s at e9 or something like that.
2022-12-25 09:38:56
Maybe 3 orders of magnitude of speed range for lossy, versus 5 orders of magnitude or so for lossless
2022-12-25 09:49:17
(and even then, lossless e9 is nowhere near close to exhaustive)
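Taking the quoted numbers at face value, the spread works out roughly like this:
```python
import math

lossless_span = 1000 / 0.01       # ~1000 MP/s at e1 (fjxl) vs ~0.01 MP/s at e9
lossy_span    = 50 / 0.1          # ~50 MP/s at e3 vs ~0.1 MP/s at e9

print(math.log10(lossless_span))  # 5.0  -> about 5 orders of magnitude
print(math.log10(lossy_span))     # ~2.7 -> roughly 3 orders of magnitude
```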
pandakekok9
2022-12-25 10:07:17
I see
2022-12-25 10:07:40
Maybe I should allow the user to set the effort too, and limit it to either e6 or e7
_wb_
2022-12-25 11:10:23
Or maybe just set a timeout, 30 seconds per image or whatever
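A minimal sketch of that wrapper idea, assuming a service that shells out to cjxl (the `-d`/`-e` flags are real; the 30-second budget and the e7 cap are just the suggestions above, and the filenames are placeholders):
```python
import subprocess

def encode(src: str, dst: str, distance: float = 0.0, effort: int = 7) -> bool:
    effort = max(1, min(effort, 7))          # never let users request e8/e9
    cmd = ["cjxl", src, dst, "-d", str(distance), "-e", str(effort)]
    try:
        subprocess.run(cmd, check=True, timeout=30)   # kill runaway encodes
        return True
    except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
        return False

print(encode("input.png", "output.jxl"))
```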
HLBG007
2022-12-25 11:35:58
I encoded this image https://github.com/libjxl/testdata/blob/81c7e5247c57051bf077b879ee38efd4c309c9e2/jxl/splines.jxl at e3 / e9. It has dimensions of 320x320, so the e9 encode was fast. This is the resulting flow of function calls:
2022-12-25 11:37:58
<@794205442175402004> Do you plan to speed up LZ77 for e9?
Fraetor
2022-12-25 12:12:05
Arguably this is why image URLs shouldn't have a file extension, as the technology changes over time.
Jim
2022-12-25 12:13:26
Merry JXLmas!
Fraetor
2022-12-25 12:16:18
Merry JXLmas indeed!
_wb_
HLBG007 <@794205442175402004> Do you plan to speed up LZ77 for e9?
2022-12-25 12:28:15
IIRC, lz77 is only tried at e8+ or something
2022-12-25 12:31:39
embed
2022-12-25 12:31:45
https://embed.moe/https://cdn.discordapp.com/attachments/794206170445119489/1056549787886952619/jxlmas2.jxl
HLBG007
_wb_ IIRC, lz77 is only tried at e8+ or something
2022-12-25 12:53:32
I know, but a speed-up for LZ77 would be nice.
_wb_ IIRC, lz77 is only tried at e8+ or something
2022-12-25 12:59:27
joppuyo
Fraetor Arguably this is why image URLs shouldn't have a file extension, as the technology changes over time.
2022-12-25 12:59:46
or, you can use <picture> tag with multiple <source> elements instead of content negotiation 🙂
_wb_
HLBG007 I know, but a speed-up for LZ77 would be nice.
2022-12-25 01:50:01
Yes, or even better would be to figure out how to combine MA trees and lz77 effectively. Currently we're basically doing one or the other but not really both.
HLBG007
_wb_ Yes, or even better would be to figure out how to combine MA trees and lz77 effectively. Currently we're basically doing one or the other but not really both.
2022-12-25 01:54:08
Is there any way to turn off highway?
_wb_
2022-12-25 01:57:42
You can build for scalar only? Or what do you mean?
HLBG007
_wb_ You can build for scalar only? Or what do you mean?
2022-12-25 02:03:16
I want to execute the source code without SIMD.
_wb_
2022-12-25 02:04:45
There's a compile flag for that, -DHWY_SCALAR_ONLY or something, I don't remember
HLBG007
_wb_ There's a compile flag for that, -DHWY_SCALAR_ONLY or something, I don't remember
2022-12-25 03:03:14
-DJXL_HWY_DISABLED_TARGETS_FORCE isn't working with clang
_wb_
2022-12-25 03:18:08
-DHWY_COMPILE_ONLY_SCALAR
HLBG007
_wb_ -DHWY_COMPILE_ONLY_SCALAR
2022-12-25 03:45:33
ok the speed is 2x faster with SIMD
BlueSwordM
HLBG007 ok the speed is 2x faster with SIMD
2022-12-25 04:54:10
Only 2x faster for you? On my side, the difference is utterly massive, not just 2x <:kekw:808717074305122316>
_wb_
2022-12-25 05:00:53
Well modular encode is not very much simdified (except e1)
2022-12-25 05:01:53
And also avx2 or only sse3 makes a difference
BlueSwordM
_wb_ Well modular encode is not very much simdified (except e1)
2022-12-25 05:08:20
Oh, I was just testing varDCT.
Traneptora
https://embed.moe/https://cdn.discordapp.com/attachments/794206170445119489/1056549787886952619/jxlmas2.jxl
2022-12-25 05:08:40
ohey this crashed jxlatte
2022-12-25 05:08:49
samples are fun to decode :)
2022-12-25 05:08:53
test cases
2022-12-25 05:09:17
crashed is the wrong word, it rejected it as invalid
2022-12-25 05:09:18
but it's not
BlueSwordM
_wb_ And also avx2 or only sse3 makes a difference
2022-12-25 05:09:26
BTW, do you think there is a way for your employer (Cloudinary for you, and Google for veluca/Jyrki and the others) to get you ARM64 RK3588 boards for ARM SIMD development?
2022-12-25 05:10:23
They're decently powerful (4xA76 + 4xA55), representative of most ARM chips today actually, and have native Linux and Android support.
Traneptora
2022-12-25 05:15:59
oooo spec bug
2022-12-25 05:16:29
thanks for the sample!
_wb_
BlueSwordM BTW, do you think there is a way for your employer (Cloudinary for you, and Google for veluca/Jyrki and the others) to get you ARM64 RK3588 boards for ARM SIMD development?
2022-12-25 05:30:44
Is highway missing something for arm? If so, then probably better get <@456226577798135808> such a board 🙂
BlueSwordM
_wb_ Is highway missing something for arm? If so, then probably better get <@456226577798135808> such a board 🙂
2022-12-25 05:33:45
Oh no, it would just be for easier optimization testing on something akin to a mobile SOC :)
2022-12-25 05:34:16
Last I checked, Highway does have all the proper frameworks for modern ARM64.
_wb_
2022-12-25 06:10:37
I remember the Google folks talking about an arm board they used for testing, no idea if they still have it. Just Termux on an Android phone also kind of works...
2022-12-25 06:12:45
I think the arm64 performance of libjxl is not bad. It's the arm32 performance that is quite bad, which is not surprising since we use float all over the place, and arm32 kind of sucks at that.
2022-12-25 06:16:24
For better speed, we should at some point make a lower-precision code path that uses int16_t for all buffers and most of the arithmetic (with some int32_t temporary results during iDCT, I suppose). That should work for SDR and maybe also for HDR delivery if super high fidelity is not needed.
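A toy numpy illustration of that fixed-point idea (not libjxl code): keep sample buffers in int16, widen to int32 for intermediate sums, and check how much precision is lost against the float path.
```python
import numpy as np

SHIFT = 12                                    # 4.12 fixed point; values in [0, 1) fit easily in int16
rng = np.random.default_rng(0)
x = rng.random((8, 8)).astype(np.float32)     # pretend these are transform inputs

xq = np.round(x * (1 << SHIFT)).astype(np.int16)

f_sum = x[0] + x[1]                                        # float path
i_sum = xq[0].astype(np.int32) + xq[1].astype(np.int32)    # int16 buffers, int32 temporaries
back  = i_sum.astype(np.float32) / (1 << SHIFT)

print(np.max(np.abs(f_sum - back)))           # worst case ~2**-13 error per operand
```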
andrew
2022-12-25 07:47:36
I am happy to report that JPEG-XL appears to vastly outperform JPEG 2000 (at least the one in Adobe Acrobat) for scanned documents
BlueSwordM
_wb_ I think the arm64 performance of libjxl is not bad. It's the arm32 performance that is quite bad, which is not surprising since we use float all over the place, and arm32 kind of sucks at that.
2022-12-25 07:48:07
The problematic performance also exists for ARM64 small cores, like the A55/A510.
daniilmaks
2022-12-26 01:18:55
I'm not well versed, but AFAIK it would have to be supported by some new revision of the PDF standard
2022-12-26 01:19:17
unless I've been living in a bubble and they've already added it
andrew
2022-12-26 01:32:48
You might be able to insert it as a file attachment but that’s kinda useless
2022-12-26 01:33:24
There might be a way to embed a JavaScript decoder…
sklwmp
andrew There might be a way to embed a JavaScript decoder…
2022-12-26 04:14:03
that definitely doesn't scream security vulnerability
pandakekok9
sklwmp that definitely doesn't scream security vulnerability
2022-12-26 05:02:48
I wonder who thought of "hey wouldn't it be nice if we can use JavaScript in PDFs" lol
2022-12-26 05:02:58
Like were they drunk or smth
yurume
2022-12-26 05:21:34
"...so that we no longer have to deal with Acrobat vulnerabilities" "oh great"
Demiurge
2022-12-26 10:04:28
webp lossless is really, really nice. As <@794205442175402004> observed, it's even pareto optimal in his comparison of lossless codecs, with JXL taking the extreme ends of high speed and high compression and lossless webp sitting in the middle. Is it possible to get a lossless JXL encoder with performance as nice as webp's?
2022-12-26 10:05:01
Then JXL can be pareto optimal for both extremes AND the middle as well!
2022-12-26 10:05:21
And the middle is a good spot to be in
diskorduser
2022-12-26 10:06:58
Maybe when they code it specifically for 8-bit images
Demiurge
2022-12-26 10:08:18
Also it would be cool if there was a "near-lossless" mode that does a XYB color transform and other things that are "theoretically lossy"
2022-12-26 10:09:15
Or maybe different levels of near-losslessness so people can choose what level they're okay with
2022-12-26 10:09:45
But I can't think of anything off the top of my head other than color transform
2022-12-26 10:10:53
For a color transform that is only "technically lossy" in the most pedantic sense, I have a feeling that in practice most people really don't care about losslessness at such a needlessly technical level
2022-12-26 10:13:12
Maybe a mode where a certain level of fidelity is guaranteed on some technical level, or something.
2022-12-26 10:14:49
Like in the case of a color transform, the distortion that it will introduce, if any, is something that is mathematically defined and usually uniform in distribution.
_wb_
2022-12-26 10:28:02
I am sure it is possible to cover the whole range with jxl; it's mostly a matter of selecting a good subset of jxl and making a fast encoder that does that and does it well
2022-12-26 10:31:59
For photographic images, lossless libjxl e3 is quite good. Maybe something that tries that and also some cheap nonphoto stuff (e1 style, maybe with some non-rle lz77 too), heuristically per group, would be good.
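A sketch of that per-group dispatch, assuming a made-up classifier (distinct-colour count per 256x256 group) standing in for whatever heuristic an encoder would actually use:
```python
import numpy as np

def classify_groups(img: np.ndarray, group: int = 256, max_colors: int = 1024) -> dict:
    """Label each group 'nonphoto' (few distinct colours -> cheap e1-style path)
    or 'photo' (dense colours -> e3-style predictor path). Threshold is illustrative."""
    labels = {}
    h, w, _ = img.shape
    for y in range(0, h, group):
        for x in range(0, w, group):
            tile = img[y:y + group, x:x + group].reshape(-1, 3)
            ncolors = len(np.unique(tile, axis=0))
            labels[(y, x)] = "nonphoto" if ncolors <= max_colors else "photo"
    return labels

img = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # stand-in image
print(classify_groups(img))
```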
joppuyo
pandakekok9 I wonder who thought of "hey wouldn't it be nice if we can use JavaScript in PDFs" lol
2022-12-26 10:38:56
you can do pretty crazy stuff in a PDF, like embed a breakout-style game in it https://github.com/osnr/horrifying-pdf-experiments
daniilmaks
Demiurge Also it would be cool if there was a "near-lossless" mode that does a XYB color transform and other things that are "theoretically lossy"
2022-12-26 09:45:31
the issue with lossless XYB is that every editor out there works in RGB(A) space, meaning you have to do lossy back-and-forth conversions between edits, leading to generation loss, which defeats the point of making it lossless to begin with.
_wb_
2022-12-26 10:19:50
True - it would only make sense if an editor could work in XYB natively. Which would make some amount of sense btw.
Fraetor
2022-12-26 10:44:38
Would there be a compression benefit to lossless XYB? I thought the idea is that you can use lower qualities for less important colour components.
2022-12-26 10:45:46
Or would common patterns like gradients map more redundantly onto XYB than RGB?
daniilmaks
2022-12-26 10:50:04
it would make the quantization noise floor perceptually lower and more consistent, assuming all XYB shades are properly presented in output view.
Demiurge
2022-12-27 12:52:10
Is the back and forth XYB-RGB conversion really subject to generation loss though?
daniilmaks
2022-12-27 02:40:19
The following is just an informed guess: it should depend on the kind of edits. Color degradation across the whole frame (from rounding errors) might only occur on the first generation, then further degradations will only accumulate in the edited areas if we assume the edits are highly localised. This would definitely distance it from regular lossy generational degradation behavior.
Demiurge
2022-12-27 10:40:55
Well if your informed guess is true then it's truly a non-issue.
2022-12-27 10:42:01
Besides I would be very surprised if the colorspace conversion could not be performed in a predictable and repeatable way that minimizes generation loss.
Jyrki Alakuijala
Fraetor Would there be a compression benefit to lossless XYB? I thought the idea is that you can use lower qualities for less important colour components.
2022-12-27 01:53:44
for lossless it is best to find the transform that minimizes correlations in the resulting data -- XYB is just a psychovisual space, not a space that ideally decorrelates -- XYB decorrelates a bit, because the eye does that, too, but it doesn't decorrelate optimally
The following is just an informed guess: it should depend on the kind of edits. Color degradation across the whole frame (from rounding errors) might only occur on the first generation, then further degradations will only accumulate in the edited areas if we assume the edits are highly localised. This would definitely distance it from regular lossy generational degradation behavior.
2022-12-27 02:08:21
some degradations in JPEG1 happen through many iterations since it is not possible to store negative (and over-range) values in coefficients -- if those values occur they are truncated in coefficient space -- when a remapping of those truncated coefficients is done and we go back and forth between pixels and coefficients, we get a slow convergence until we are strongly inside the non-clamping space --- in JPEG XL we don't have that since we don't have a limited coefficient space
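A toy 1-D version of that feedback loop (quantize in coefficient space, clamp and round in pixel space, repeat); it is not JPEG1's exact coefficient-range truncation, just the same clamp-then-requantize loop in miniature, and the change usually drops to zero after a few generations:
```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
pixels = rng.integers(0, 256, 8).astype(np.float64)
q = 16.0                                    # crude uniform quantizer step

for gen in range(6):
    coeffs = np.round(dct(pixels, norm="ortho") / q) * q            # quantize coefficients
    new_pixels = np.clip(np.round(idct(coeffs, norm="ortho")), 0, 255)
    print(f"gen {gen}: max pixel change = {np.max(np.abs(new_pixels - pixels)):.1f}")
    pixels = new_pixels
```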
Traneptora
2022-12-27 02:16:31
finally got my PFM patch merged into FFmpeg <:poggers:853085814103474176>
2022-12-27 02:16:38
now it should no longer mirror them vertically
Demiurge
Jyrki Alakuijala for lossless it is best to find the transform that minimizes correlations in the resulting data -- XYB is just a psychovisual space, not a space that ideally decorrelates -- XYB decorrelates a bit, because the eye does that, too, but it doesn't decorrelate optimally
2022-12-27 02:55:03
I don't understand what you mean by minimizing correlations. Ideally you would want the data to contain a lot of the same numbers, such as a lot of leading or trailing zeros for example, or just a lot of predictability...
2022-12-27 02:56:51
It sounds like you put some thought into what sort of colorspace would be the best for lossless compression
veluca
2022-12-27 02:56:52
across channels you want to get rid of duplicated information
Demiurge
veluca across channels you want to get rid of duplicated information
2022-12-27 02:58:00
Well, duplicated information is usually very easy to compress.
veluca
2022-12-27 02:58:12
so e.g. if you have a grayscale image, you'd rather compress "R and B are equal to G" than the actual values of R and B
Demiurge
2022-12-27 02:58:59
But yeah I see what you're saying. If a lot of values are small numbers close to zero after getting rid of redundant information, that's also easy to compress.
veluca
2022-12-27 02:59:23
in general if an image is close to grayscale, (G, R-G, B-G) will compress better than directly compressing (R, G, B)
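A crude numpy check of that (G, R-G, B-G) point, using per-channel entropy as a rough proxy for how well each plane compresses:
```python
import numpy as np

def entropy(channel: np.ndarray) -> float:
    _, counts = np.unique(channel, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
g = rng.integers(0, 256, (256, 256)).astype(np.int16)
r = np.clip(g + rng.integers(-3, 4, g.shape), 0, 255)   # near-grayscale: R and B hug G
b = np.clip(g + rng.integers(-3, 4, g.shape), 0, 255)

print(entropy(r), entropy(b))           # ~8 bits/sample each
print(entropy(r - g), entropy(b - g))   # ~2-3 bits/sample each
```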
Demiurge
2022-12-27 02:59:24
It's just a strange way for me to think about decorrelation
2022-12-27 02:59:48
The use of the term seems unusual to me in that context
2022-12-27 03:00:11
But keep in mind I'm a noob at data compression :)
veluca
2022-12-27 03:00:55
iirc it comes from the fact that uncorrelated gaussian sources are also independent, so in a sense you already took out all the shared information you could have
joppuyo
veluca across channels you want to get rid of duplicated information
2022-12-27 03:01:08
Is this what “chroma from luma” does?
veluca
2022-12-27 03:01:20
pretty much, yeah
2022-12-27 03:01:47
well, CfL is usually spatially adaptive, so it will change the channel decorrelation depending on the image area
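A simplified stand-in for the spatially adaptive part: per 32x32 block, fit one scale factor predicting a chroma plane from luma and keep only the residual (real CfL signalling is more involved than this).
```python
import numpy as np

def cfl_residual(luma: np.ndarray, chroma: np.ndarray, block: int = 32) -> np.ndarray:
    resid = np.empty_like(chroma, dtype=np.float64)
    for y in range(0, luma.shape[0], block):
        for x in range(0, luma.shape[1], block):
            L = luma[y:y + block, x:x + block].astype(np.float64)
            C = chroma[y:y + block, x:x + block].astype(np.float64)
            a = (L * C).sum() / max((L * L).sum(), 1e-9)   # per-block least-squares scale
            resid[y:y + block, x:x + block] = C - a * L    # only the residual (and a) remain
    return resid

rng = np.random.default_rng(0)
luma = rng.integers(0, 256, (128, 128)).astype(np.float64)
chroma = 0.5 * luma + rng.normal(0, 2, luma.shape)         # chroma loosely follows luma
print(np.abs(cfl_residual(luma, chroma)).mean())           # small residuals are cheap to code
```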
joppuyo
2022-12-27 03:02:35
Ah, interesting!
HLBG007
2022-12-27 05:12:15
Happy third birthday JPEG XL GitLab release! https://gitlab.com/wg1/jpeg-xl/-/tree/ff09371267315c39fe0220082943c5834db04ab9 Dec 27, 2019 6:12pm GMT+0100
Traneptora
veluca iirc it comes from the fact that uncorrelated gaussian sources are also independent, so in a sense you already took out all the shared information you could have
2022-12-27 06:11:27
isn't that sort of by definition of uncorrelated?
2022-12-27 06:11:30
zero covariance
veluca
2022-12-27 06:19:33
What I mean is that for Gaussians 0 covariance is the same as being independent (I think), but for other distributions you only have one implication
BlueSwordM
2022-12-27 06:33:55
So, when is a 0.8 release planned?
Traneptora
veluca What I mean is that for Gaussians 0 covariance is the same as being independent (I think), but for other distributions you only have one implication
2022-12-27 06:37:33
as far as I'm aware the implication goes in both directions for any two random variables X and Y
2022-12-27 06:37:52
`Cov(X, Y) = E(XY) - E(X)E(Y)`
2022-12-27 06:38:19
er, wait, nvm
2022-12-27 06:38:49
those expectations could be equal but that doesn't mean `P(X <= a, Y <= b) = P(X <= a)P(Y <= b)`
2022-12-27 06:39:09
which I believe is an equivalent definition of independence
2022-12-27 06:39:59
(the usual definition I see is `P(A \cap B) = P(A)P(B)` iff A, B are independent events)
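The definitions being juggled here, in one place:
```latex
\[
  \operatorname{Cov}(X, Y) = \mathbb{E}[XY] - \mathbb{E}[X]\,\mathbb{E}[Y],
  \qquad
  X \perp Y \iff \Pr(X \le a,\, Y \le b) = \Pr(X \le a)\,\Pr(Y \le b)\ \text{for all } a, b.
\]
% Independence implies zero covariance (when the moments exist); the converse
% holds for jointly Gaussian (X, Y) but not for arbitrary distributions.
```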
veluca
2022-12-27 06:42:38
yup
2022-12-27 06:42:57
(probably the two definitions are only equivalent assuming the distributions are nice enough, but whatever)
Traneptora
2022-12-27 06:45:26
I think you only need the random variables to be nonnegative
2022-12-27 06:45:31
but I might be mistaken about that
2022-12-27 06:46:48
If the random variables are nonnegative then E(X) is just `\int_0^\infty 1 - F_X(t) dt` which is convenient
2022-12-27 06:46:57
and the reason I prefer to work with nonnegative random variables <:KEKW:643601031040729099>
2022-12-27 06:47:20
if the CDF is nonzero on some negative number you gotta work a little harder
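The tail formula mentioned above follows from swapping the expectation with an indicator integral (Tonelli, everything nonnegative):
```latex
\[
  \mathbb{E}[X]
  = \mathbb{E}\!\left[\int_0^\infty \mathbf{1}\{X > t\}\, dt\right]
  = \int_0^\infty \Pr(X > t)\, dt
  = \int_0^\infty \bigl(1 - F_X(t)\bigr)\, dt .
\]
```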
veluca
2022-12-27 06:49:16
what's F there?
2022-12-27 06:49:26
CDF?
yurume
2022-12-27 06:53:03
cumulative distribution function
veluca
2022-12-27 06:53:27
yeah I got that, I was asking whether F = CDF 🙂
Traneptora
2022-12-27 06:55:01
`F_X` is the CDF for X, yea
andrew
2022-12-27 07:09:14
at this point I've realized that it's not feasible for me to write a "from scratch" Rust implementation of JPEG-XL from the spec in a reasonable amount of time
2022-12-27 07:09:39
should I just do a straightforward port of jxlatte or something as a starting point?
improver
2022-12-27 08:04:13
yeah or j40
spider-mario
2022-12-27 08:05:14
my favourite form for the definition of independence is P(A | B) = P(A)
daniilmaks
Demiurge Also it would be cool if there was a "near-lossless" mode that does a XYB color transform and other things that are "theoretically lossy"
2022-12-28 11:44:12
I was just now thinking, XYB space could probably be useful as a perceptual bias for a lossy preprocessor for lossless encoders. So in that sense, if it does work and is viable, yes, it could become a legit use case.
Jyrki Alakuijala
Demiurge Well, duplicated information is usually very easy to compress.
2022-12-29 09:08:37
correlation can be thought of as a 3x3 matrix between the three channels, and the entries don't need to be just 1s and -1s -- the color conversions can still reduce those correlations
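A quick look at that 3x3 matrix on a synthetic near-grayscale image, before and after the simple (G, R-G, B-G) transform mentioned earlier:
```python
import numpy as np

rng = np.random.default_rng(42)
g = rng.integers(0, 256, 10000).astype(np.float64)
r = g + rng.normal(0, 4, g.shape)
b = g + rng.normal(0, 4, g.shape)

print(np.round(np.corrcoef(np.vstack([r, g, b])), 3))          # off-diagonals near 1
print(np.round(np.corrcoef(np.vstack([g, r - g, b - g])), 3))  # off-diagonals near 0
```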
2022-12-29 09:22:44
DZgas' image is almost ok already after: https://github.com/libjxl/libjxl/pull/2005
2022-12-29 09:23:01
I'll look for some more improvements
2022-12-29 09:26:56
there are some degradations in this change, too; in particular I saw that simple pixel graphics with a violet/green border would be slightly worse (in synthetic.png), whereas the red/green image from DZgas is massively better