JPEG XL

Info

rules 57
github 35276
reddit 647

JPEG XL

tools 4225
website 1655
adoption 20712
image-compression-forum 0

General chat

welcome 3810
introduce-yourself 291
color 1414
photography 3435
other-codecs 23765
on-topic 24923
off-topic 22701

Voice Channels

General 2147

Archived

bot-spam 4380

on-topic

Whatever else

Scope
2021-04-11 07:19:11
2021-04-11 07:19:19
<:Thonk:805904896879493180>
_wb_
2021-04-11 07:38:58
Where is that?
Scope
2021-04-11 07:42:36
https://news.knowledia.com/US/en/articles/jpeg-xl-art-f4054189f0d2b21315589fd44b37670741dc1b16
jzxf
2021-04-11 11:45:16
Q: I've noticed a number of postings (in `#jxl-art` and `#glitch-art`) that look like instructions for the encoder, almost like a shader, e.g.:
```
RCT 4
Orientation 1
Bitdepth 10
if W-NW > -1
if NW-N > -1
if N > 1021
if c > 0
- Set 1024
- Set 0
if y > 0
- W + 1
if x > 0
- W + 2
if c > 0
- Set -1024
- Set 0
- NW + 0
- Select + 2
```
However, perusing the docs and the reference implementation doesn't make it clear to me what these instructions actually are, how I could read more about them, what they're called, or how I'd learn more in order to make my own bonkers creations. Any pointers?
190n
2021-04-11 11:51:44
check out https://jxl-art.surma.technology/ and its help page
2021-04-11 11:52:07
some of the recent art pieces won't work there since it doesn't support custom orientations and such (yet)
2021-04-11 11:52:10
but you can make your own
monad
jzxf Q: I've noticed a number of postings (in `#jxl-art` and `#glitch-art`) that look like instructions for the encoder, almost like a shader, e.g., ``` RCT 4 Orientation 1 Bitdepth 10 if W-NW > -1 if NW-N > -1 if N > 1021 if c > 0 - Set 1024 - Set 0 if y > 0 - W + 1 if x > 0 - W + 2 if c > 0 - Set -1024 - Set 0 - NW + 0 - Select + 2 ``` However, perusing the docs and the reference implementation doesn't make it clear to me what these instructions actually are, how I could read more about them, what they're called, or how I'd learn more in order to make my own bonkers creations. Any pointers?
2021-04-12 12:43:40
Besides the web interface, if you've downloaded the JXL reference software you can compile the dev tools (run `ci.sh` or just build with `-DJPEGXL_ENABLE_DEVTOOLS=ON`) which include `jxl_from_tree`.
Deleted User
2021-04-12 01:30:17
`-DJPEGXL_ENABLE_DEVTOOLS=ON` is better, `ci.sh` was crashing when I was low on RAM.
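For anyone following along, here is a rough sketch of that workflow as a script; the build directory layout, the CMake target name and the tool path are assumptions and may differ between libjxl revisions:
```
# Configure libjxl with the devtools enabled, build jxl_from_tree, and run it on a
# textual tree description (usage per this thread: jxl_from_tree tree_in jxl_out).
import subprocess

subprocess.run(["cmake", "-B", "build", "-DCMAKE_BUILD_TYPE=Release",
                "-DJPEGXL_ENABLE_DEVTOOLS=ON", "."], check=True)
subprocess.run(["cmake", "--build", "build", "--target", "jxl_from_tree"], check=True)
subprocess.run(["./build/tools/jxl_from_tree", "my_tree.txt", "out.jxl"], check=True)
```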
Scope
2021-04-12 03:01:11
https://twitter.com/danlmarmot/status/1381415646286598144
Jyrki Alakuijala
jzxf Q: I've noticed a number of postings (in `#jxl-art` and `#glitch-art`) that look like instructions for the encoder, almost like a shader, e.g., ``` RCT 4 Orientation 1 Bitdepth 10 if W-NW > -1 if NW-N > -1 if N > 1021 if c > 0 - Set 1024 - Set 0 if y > 0 - W + 1 if x > 0 - W + 2 if c > 0 - Set -1024 - Set 0 - NW + 0 - Select + 2 ``` However, perusing the docs and the reference implementation doesn't make it clear to me what these instructions actually are, how I could read more about them, what they're called, or how I'd learn more in order to make my own bonkers creations. Any pointers?
2021-04-12 11:09:26
If you find out the syntax, write it up 🙂
2021-04-12 11:10:05
there are however 77 other options for jxl-image-golfing that are not part of this
NeRd
Scope https://twitter.com/danlmarmot/status/1381415646286598144
2021-04-12 11:11:05
Pretty sure that's JPEG 2000 haha
jzxf
monad Besides the web interface, if you've downloaded the JXL reference software you can compile the dev tools (run `ci.sh` or just build with `-DJPEGXL_ENABLE_DEVTOOLS=ON`) which include `jxl_from_tree`.
2021-04-12 11:46:07
Okay, so as I understand it, these are direct instructions for the JXL prediction tree using `jxl_from_tree`. My understanding is that JXL's tile size limits would prevent this from continuing indefinitely, but does this imply that the tree language is Turing-complete since it can represent rule-110 cellular automata?
2021-04-12 11:47:36
(Also does the "tree language" have a name or is that just considered part of JXL's Modular mode?)
veluca
2021-04-12 11:53:41
turing completeness: apparently yes 😛
2021-04-12 11:54:00
the trees themselves are called MA trees
2021-04-12 11:54:24
the syntax for the trees doesn't have a name that I know of 🙂
_wb_
2021-04-12 12:00:43
We could probably make the syntax more user friendly, e.g. also allow < in if nodes
jzxf
Jyrki Alakuijala If you find out the syntax, write it up 🙂
2021-04-12 12:01:29
Unless I'm missing something isn't this where the syntax is from? https://gitlab.com/wg1/jpeg-xl/-/blob/master/tools/jxl_from_tree.cc Or are you saying that you'd prefer an actual lexer/parser specification instead of a hand-tooled one?
_wb_
2021-04-12 12:09:33
It's kind of just a quick hack that is getting out of hand
jzxf
_wb_ It's kind of just a quick hack that is getting out of hand
2021-04-12 12:12:29
I mean, it seems like a lot of fun 😄
2021-04-12 12:17:50
the "slightly silly but it's easy to create something" use cases are often some of the most interesting imo
Miaourt 🍰 喵奶
2021-04-12 12:38:09
Is JXL format a turing complete language or is that a toy tool for generating JXL ??
jzxf
Miaourt 🍰 喵奶 Is JXL format a turing complete language or is that a toy tool for generating JXL ??
2021-04-12 01:04:45
Not exactly. As I understand it, there is a generation mechanism for JXL called MA trees which encodes decision trees into the bitstream, which can then be used to create an image. The file sizes are small because there is no image; it's being generated during decoding. The decision trees are complex enough that they can represent cellular automata, and in particular the Rule 110 automata, which is known to be Turing-complete. However, Turing-completeness requires unbounded input sizes and the trees don't operate on unbounded inputs but rather on tiles of a maximum fixed size. So it can't compute very much.
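For context, Rule 110 is a one-dimensional cellular automaton whose next row is a fixed function of each cell and its two neighbours; the sketch below only illustrates the automaton itself, not how an MA tree encodes it, and the wraparound boundary is an arbitrary choice for the demo:
```
# Rule 110: bit i of the number 110 gives the next state for neighbourhood
# pattern i = (left << 2) | (self << 1) | right.
def rule110_step(row):
    n = len(row)
    return [(110 >> ((row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 31 + [1]  # a single live cell
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = rule110_step(row)
```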
Jyrki Alakuijala
jzxf Not exactly. As I understand it, there is a generation mechanism for JXL called MA trees which encodes decision trees into the bitstream, which can then be used to create an image. The file sizes are small because there is no image; it's being generated during decoding. The decision trees are complex enough that they can represent cellular automata, and in particular the Rule 110 automata, which is known to be Turing-complete. However, Turing-completeness requires unbounded input sizes and the trees don't operate on unbounded inputs but rather on tiles of a maximum fixed size. So it can't compute very much.
2021-04-12 01:09:25
also good to know -- no computer realized in our universe is Turing-complete due to finite size
_wb_
2021-04-12 01:18:31
The observable universe is merely a finite state machine with a big state.
pogmoment
2021-04-12 01:18:42
discord doesn't handle jxl right
2021-04-12 01:18:53
like on canary with it enabled
2021-04-12 01:35:12
also, why doesn't Chrome copy jxl? It only copies image/png
jzxf
Jyrki Alakuijala also good to know -- no computer realized in our universe is Turing-complete due to finite size
2021-04-12 01:35:25
It's true! Turing-completeness is a lie perpetuated by Big CS.
pogmoment
2021-04-12 01:35:45
or maybe this is a thing with every image?
2021-04-12 01:38:57
hmm apparently it works
2021-04-12 01:38:59
or not
2021-04-12 01:39:09
the preview showed but not discord ¯\_(ツ)_/¯
_wb_
2021-04-12 01:40:43
Discord probably just doesn't recognize that it is an image
pogmoment
2021-04-12 01:40:52
it does, at least client side
2021-04-12 01:41:15
2021-04-12 01:42:03
veluca
_wb_ The observable universe is merely a finite state machine with a big state.
2021-04-12 01:56:15
I don't think we actually know that xD
2021-04-12 01:56:38
(possibly we even know the opposite of that)
_wb_
2021-04-12 02:22:42
Maybe quantum mechanics makes it something somewhat different, but still not Turing complete I think
veluca
2021-04-12 02:27:56
well, at least not in terms of "infinite memory"
jzxf
2021-04-12 02:55:47
it helps if you think about Turing-completeness as being about the language, not the computer
2021-04-12 02:56:34
can you express _ideas_ in the language that are as powerful as any other TC-language? if so, then it's TC
2021-04-12 02:57:07
can you actually _realize_ those ideas on a physical computer with that language? that depends πŸ˜„
_wb_
2021-04-12 02:57:42
I guess MA trees are as Turing complete as rule 110
jzxf
2021-04-12 02:57:50
they are, yeah
spider-mario
2021-04-12 02:58:15
intptr_t and size_t have a finite size, therefore C and C++ are not turing complete
2021-04-12 02:58:19
checkmate
_wb_
2021-04-12 02:58:53
Infinite memory is overrated
2021-04-12 02:59:00
We don't have infinite time anyway
spider-mario
2021-04-12 02:59:20
remember when Zed Shaw said that Python 3 wasn’t turing complete?
2021-04-12 02:59:27
(https://eev.ee/blog/2016/11/23/a-rebuttal-for-python-3/)
jzxf
spider-mario (https://eev.ee/blog/2016/11/23/a-rebuttal-for-python-3/)
2021-04-12 03:00:28
lol
2021-04-12 03:01:31
I mean, as far as actual languages you'd like to use, TC-ness isn't a very strong property anyway β€” it's like if a restaurant said "our food contains calories and is edible"
_wb_
2021-04-12 03:11:11
In some cases, non-Turing completeness can be desirable, e.g. to have decidable termination
Crixis
spider-mario remember when Zed Shaw said that Python 3 wasn’t turing complete?
2021-04-12 03:15:31
LOL
Pieter
2021-04-12 04:29:57
Obviously. Turing complete languages require access to arbitrarily much memory.
Crixis
Pieter Obviously. Turing complete languages require access to arbitrarily much memory.
2021-04-12 04:32:46
details
monad
2021-04-12 05:16:42
Remember when Jon claimed the JXL bitstream wasn't Turing-complete? https://discord.com/channels/794206087879852103/824000991891554375/824164440349212672
spider-mario
2021-04-12 05:29:29
I think it’s more common to discover that a language that was previously not thought to be Turing-complete actually is, than the other way around
2021-04-12 05:29:58
I mean: http://blog.boreas.ro/2008/07/sedtris-tetris-game-written-in-sed.html
_wb_
monad Remember when Jon claimed the JXL bitstream wasn't Turing-complete? https://discord.com/channels/794206087879852103/824000991891554375/824164440349212672
2021-04-12 05:31:32
It depends on how you define Turing completeness
2021-04-12 05:31:54
E.g. you cannot make a jxl bitstream that makes the decoder not terminate
2021-04-12 05:32:26
So in that sense, it is not Turing complete. The halting problem for jxl decoding is decidable 😅
spider-mario
2021-04-12 05:53:00
it is Turing-enough
_wb_
2021-04-12 07:46:30
Turing-liquid but not Turing-hard
Deleted User
_wb_ Turing-liquid but not Turing-hard
2021-04-12 07:48:55
Turing-cat
jzxf
spider-mario I think it’s more common to discover that a language that was previously not thought to be Turing-complete actually is, than the other way around
2021-04-13 08:20:37
another (in)famous example in this category: https://arxiv.org/abs/1904.09828
veluca
2021-04-13 09:09:15
heh, I have to admit I was more familiar with "game is NP-complete" kind of statements than "game is Turing-complete" xD
Petr
_wb_ I'm not sure what the threshold is to make a page about someone or how it gets decided, but it seems like it would make sense for jyrki to get a wikipedia article πŸ™‚
2021-04-14 11:52:26
Even though I admire the creativity of JXL authors, 😊 I'm not sure about your notability for Wikipedia articles (yet). I'd really like to see articles about you and I'd contribute to them. But I've also seen deleted articles about people who I considered notable but the Wikipedia community didn't. And that's what I don't need to see again. πŸ™‚
2021-04-14 11:52:48
Anyway, the rules are here: https://en.wikipedia.org/wiki/Wikipedia:Notability_(people)
2021-04-14 11:53:14
So anyone here can comment on it.
2021-04-14 12:11:12
Maybe we should start with adding the list of (main) authors into the JPEG XL article(s), including links to the respective biographical articles (that don't exist yet). I guess it's "safe" to take the list of authors from your JXL page, <@!794205442175402004> , right?
_wb_
2021-04-14 12:21:01
We'll probably soon add an ACKNOWLEDGEMENTS file to the public repo
2021-04-14 12:36:00
If you want to have main authors for the wikipedia article: these 3 I think we consider to be the main authors: - Jyrki Alakuijala - Jon Sneyers - Luca Versari
2021-04-14 12:37:28
If you want to add more, you can also add: - Sami Boukortt - Lode Vandevenne - Jan Wassenberg - Zoltan Szabadka
2021-04-14 12:38:08
There are more people who made significant contributions, but the above are the main ones
spider-mario
2021-04-14 12:41:45
also a few more (e.g. <@!811568887577444363> and <@!795684063032901642> here) but not sure if all of them want to be named explicitly
2021-04-14 12:41:55
but at least Moritz seems fine with it :p
_wb_
2021-04-14 12:48:20
There's also Alexander Rhatushnyak, Thomas Fischbacher, Robert Obryk, Alex Deymo
2021-04-14 12:50:31
Indirectly (through FLIF), <@799692065771749416> also contributed: e.g. the arbitrary chaining of (modular) transforms was his idea
eustas
spider-mario but at least Moritz seems fine with it :p
2021-04-14 02:29:50
me too 🙂
_wb_
2021-04-15 01:52:33
It is a bit tricky
machmsg
2021-04-15 10:53:16
We have switched from using avif to jpeg xl for our operations. We manage nearly 20 tb of videos/images, and growing, so a good compression algorithm is important for us.
Deleted User
machmsg We have switched from using avif to jpeg xl for our operations. We manage nearly 20 tb of videos/images, and growing, so a good compression algorithm is important for us.
2021-04-15 11:19:07
Nice 😃 What are you doing, if I can ask?
machmsg
2021-04-16 12:11:46
Web and multimedia archival
raysar
machmsg Web and multimedia archival
2021-04-16 03:16:49
you are the future :p But don't overdo it at the expense of visual quality!
machmsg
2021-04-16 03:17:31
When I get some more free time I will look into what I can do in terms of contributions to the project
Pieter
_wb_ Indirectly (through FLIF), <@799692065771749416> also contributed: e.g. the arbitrary chaining of (modular) transforms was his idea
2021-04-16 03:23:32
Really? I don't remember that specifically.
_wb_
Pieter Really? I don't remember that specifically.
2021-04-16 05:18:46
Yes, you made Transform an abstract factory, making it easier to signal arbitrary chains of transforms (in flif also with the complication of keeping track of how it influences the range of pixel values)
Pieter
2021-04-16 05:19:37
Oh, right!
Jyrki Alakuijala
_wb_ There are more people who made significant contributions, but the above are the main ones
2021-04-16 09:12:22
Jon's description of authorship agrees with mine and our team's opinion of authorship of the format
Petr Even though I admire the creativity of JXL authors, 😊 I'm not sure about your notability for Wikipedia articles (yet). I'd really like to see articles about you and I'd contribute to them. But I've also seen deleted articles about people who I considered notable but the Wikipedia community didn't. And that's what I don't need to see again. πŸ™‚
2021-04-16 09:14:53
I wonder if it would be better to start from the national wikipedia pages -- like Jon may be more notable for pages in Flemish?
machmsg We have switched from using avif to jpeg xl for our operations. We manage nearly 20 tb of videos/images, and growing, so a good compression algorithm is important for us.
2021-04-16 09:16:29
Could you write about your switch into <https://bugs.chromium.org/p/chromium/issues/detail?id=1178058>?
fab
2021-04-16 09:18:33
Jyrki Alakuijala
2021-04-16 09:18:36
when writing, consider that the decision makers are in the same org as the AVIF developers -- it may be best to avoid a strong polarization between the formats in the bug by writing something softer, like "while AVIF already brought us big benefits over previous systems, JPEG XL's feature set and image quality guarantees are an overall better fit for our use case"
2021-04-16 09:22:30
if you write "JPEG XL gives us 11 % better quality than AVIF" or "JPEG XL is 3x faster to encode", then it may polarize, and the AVIF developers may become defensive and find reasons why the test setup may not be optimal (for example due to using dav1d/etc instead of libaom)
monad
Jyrki Alakuijala Could you write about your switch into <https://bugs.chromium.org/p/chromium/issues/detail?id=1178058>?
2021-04-16 09:27:02
Consider helping Discord identify that link.
spider-mario
2021-04-16 09:31:32
ah, yes, it has trouble with the ending “?”
2021-04-16 09:31:35
here you go: https://crbug.com/1178058
monad
2021-04-16 09:36:01
I tend to use brackets `<>` mainly because I don't like large embeds, but they can help with formatting too.
Jyrki Alakuijala
2021-04-16 09:42:40
fixed 😄
eustas me too 🙂
2021-04-16 10:21:34
Currently we have 4 groups of contributors: codec architects who made the big design decisions, format developers who significantly influenced the codec design, libjxl developers who contributed to the implementation, and people who helped with the standardization (sat in the ISO meetings or prepared input documents)
Deleted User
machmsg Web and multimedia archival
2021-04-16 10:30:26
Where exactly, if you're allowed to tell?
machmsg
Where exactly, if you're allowed to tell?
2021-04-17 01:33:53
as in the name of the site?
Pieter
2021-04-17 01:43:07
I guess we could live with just an IP address.
machmsg
2021-04-17 03:05:13
The site is archivesort.org but the portion that uses jpeg xl is closed while we fix crucial security vulnerabilities (unrelated to jxl)
2021-04-17 03:05:46
We hope to open it back up later today or tomorrow
Jyrki Alakuijala when writing, consider that the decision makers are in the same org with the AVIF developers -- it may be best to avoid a strong polarization between the formats in the bug by writing like something softer like "while AVIF already brought us big benefits over previous systems, JPEG XL's feature set and image quality guarantees are an overall better fit to our use case"
2021-04-17 03:06:23
just saw this. Will keep this in mind!
2021-04-17 04:53:23
Here is what I will write: My site makes use of both AVIF and JPEG XL heavily, especially for the archival of multimedia-heavy sites…. As a content provider, we need to be able to balance having small file sizes while preserving content quality at the same time. While AVIF and JPEG XL produce arguably the same content quality, JPEG XL encodes significantly faster while AVIF provides smaller files. The AVIF encoder process (on libaom) is slow enough that we don't even need to put on rate limiting, as the images are transcoded soon after they are downloaded. Decoding is also pretty fast as well. The only reason I used AVIF in the first place is because I needed some way to reduce the size of the files, but once I found out I could do that while preserving image fidelity, I was immediately hooked. I believe that AVIF is trying to kill two birds with one stone - which is all fine and good, but most content providers may only want one of those birds to be killed. As an archive, we also care about preserving the quality of the photos. JPEG XL is able to losslessly compress images already in JPEG format. This is important for certain sites that already have low-quality images - we don't want to make their quality even worse. Considering the strengths and balances of each format, we decided on a two-pronged approach - while we use mostly JPEG XL, AVIF also has its place for "less important" images. For the software, we use the standard jxl library along with a custom in-house multimedia transcoder to dynamically convert to and from JPG and AVIF/JXL. Even though Chrome has native AVIF support, many browsers do not, and a great number of people are still on older versions of Chromium. The ecosystem for jxl is also more developed in my opinion, and me being a busy person, I would rather work with something that's already there. If you want benchmarks and hard numbers about my experience, I can make that available.
2021-04-17 04:57:51
Overall, I think jpegxl is a better fit for most organizations because of the fast encoding times and the lossless component. Who doesn't want to save free space if it means no reduction in quality? (Edit: JPEG XL is important for user-directed archiving, where people expect the archived pages they submit to be there fairly quickly. In this instance there is no time for avif)
2021-04-17 05:01:08
Don't get me wrong; AV1 has its place - however I think that place is in video. We do have an extensive video archive where we try to use as much av1 as possible. But for images, I think JXL takes the cake
Scientia
2021-04-17 05:17:59
jpeg xl is better for web excepting thumbnails afaik
2021-04-17 05:18:31
easy to encode, can losslessly compress old jpeg files, good generation loss tolerance
2021-04-17 05:20:16
the generation loss thing is really important for social apps like discord or facebook since people often screenshot an image and repost it often to the point of it being copied **several hundred times**
190n
2021-04-17 05:37:57
i think jpeg xl is also a lot faster to decode than avif (iirc even faster than jpeg sometimes?)
-=MO=-
190n i think jpeg xl is also a lot faster to decode than avif (iirc even faster than jpeg sometimes?)
2021-04-17 05:43:31
a lot, yes https://www.youtube.com/watch?v=UphN1_7nP8U
190n
2021-04-17 05:45:31
oh progressive for sure i was talking about cpu time
2021-04-17 05:45:56
but if cpu decoding is fast enough then network is the bottleneck so progressive decoding is helpful
fab
2021-04-17 06:15:39
progressive encode adds noise to the photo; also, files at parity quality are x20 bigger than a png
2021-04-17 06:15:49
also not every person will encode progressive
2021-04-17 06:16:05
jon sneyers has said that progressive for him isn't a priority
2021-04-17 06:16:19
he wants it to work in browsers and in GIMP
2021-04-17 06:16:23
with whatever options
2021-04-17 06:18:46
honestly, if those modular options worked with the newer encoder it would be a lot better
2021-04-17 06:18:47
https://discord.com/channels/794206087879852103/803645746661425173/832701921352613959
2021-04-17 06:18:47
those are very simple
_wb_
2021-04-17 06:24:09
? Progressive is a priority
Pieter
2021-04-17 06:25:30
what is "progressive encode"?
2021-04-17 06:26:29
<@794205442175402004> I think <@!416586441058025472> may be referring to a comment you or someone else made recently that it's important that browsers don't just support some subset of features (and that that's more important than having progressive decode work)
_wb_
2021-04-17 06:36:23
Ah
2021-04-17 07:59:58
The range for -g is 0 to 3
veluca
machmsg Here is what I will write: My site makes use of both AVIF and JPEG XL heavily, especially for the archival of multimedia heavy sites…. As a content provider, we need to be able to balance the ability to have small file sizes while preserve content quality at the same time. While AVIF and JPEG XL produce arguably same content quality, JPEG XL encodes signfiicantly faster while AVIF provides smaller files. The AVIF encoder process (on libaom) is slow enough that we don’t even need to put on rate limiting as the images are transcoded as soon after they are downloaded. Decoding is also pretty fast as well. The only reason I used AVIF in the first place is because I needed someway to reduce the size of the files, but once I found out I could do that while preserving image fidelity, I was immeadately hooked. I believe that AVIF is trying to kill two birds with one stone - which is all fine and good, but most content providers may only want one of those birds to be killed. As an archive, we also care about preserving the quality of the photos. JPEG XL is able to compress losslessly images already in JPEG format. This is important for certain sites that already have low quality images - we don’t want to make their quality even worse. Considering the strengths and balances of each format, we decided for a two pronged approach - while we use mostly JPEG XL, AVIF also has its place for "less important" images. For the software we use its the standard jxl library along with a custom in house multimedia transcoder to dynamically convert to and from JPG and AVIF/JXL. Even though Chrome has native AVIF support, many browsers do not, and a great number of people are still on older versions of Chromium. The ecosystem for jxl is also more developed in my opinion, and me being a busy person, I would rather work with something that's already there. If you want benchmarks and hard numbers about my experience that I can make that avaliable.
2021-04-17 08:40:28
<@!532010383041363969>
Jyrki Alakuijala
machmsg Don't get me wrong; AV1 has its place - however I think that place is in video. We do have an extensive video archive where we try to use as much av1 as possible. But for images, I think JXL takes the cake
2021-04-18 03:50:59
looks good to me
Fox Wizard
2021-04-19 05:51:47
My face while decoding this file:
raysar
2021-04-19 06:58:15
Artifacts are strange in this picture <:Thonk:805904896879493180> it looks like rainbows
Fox Wizard
2021-04-19 07:18:04
Was AI enhanced to the extreme and source was a jpg to make it worse
Crixis
2021-04-21 05:42:45
<:YEP:808828808127971399>
BlueSwordM
2021-04-21 05:46:33
https://cdn.discordapp.com/emojis/683380703958007852.gif?v=1
190n
2021-04-21 05:56:44
https://tenor.com/view/meme-among-us-discord-gif-19878487
jzxf
2021-04-22 12:04:44
Q: I just wanted to double-check something: given an artwork source in <#824000991891554375>, is it correct that I won't be able to use it without the super-secret dev branch of the repository, or can I still do it with the public branch?
veluca
2021-04-22 12:05:08
as of yesterday, also the public branch 🙂
jzxf
veluca as of yesterday, also the public branch 🙂
2021-04-22 12:05:19
Ooh!
veluca
2021-04-22 12:05:39
you can't crop the image IIRC though (needs changing the source code by hand :D)
jzxf
2021-04-22 12:06:06
But I can still generate a 1024x1024 tile?
2021-04-22 12:07:17
For example (just picking this at random from <#824000991891554375>):
```
Bitdepth 4
Width 1024
Height 1024
Squeeze
if c > 1
if |W| > 2
- Set 0
if N-NN > 0
- NW 1
- NW -1
- W 1
```
what's the canonical way to produce that image with the public repository?
veluca
2021-04-22 12:35:27
just use jxl_from_tree on this file
2021-04-22 12:35:30
it should work
2021-04-22 12:35:49
but if you want to have a larger image that is cropped to hide some parts, IIRC you can't
_wb_
2021-04-22 01:30:01
You can't yet, I need to make a merge request for it (it's not hard)
Scope
2021-04-22 03:17:45
New image codec comparison tests are consistently invalid and overestimate savings <https://encode.su/threads/3615-New-image-codec-comparison-tests-are-consistently-invalid-and-overestimate-savings> 🤔
BlueSwordM
Scope New image codec comparison tests are consistently invalid and overestimate savings <https://encode.su/threads/3615-New-image-codec-comparison-tests-are-consistently-invalid-and-overestimate-savings> 🤔
2021-04-22 03:32:38
Wait, is the guy lying or just uninformed?
2021-04-22 03:33:04
I just tried using ECT on that Huawei sample he linked, and it's already fully optimized.
Scope
2021-04-22 03:39:29
It's optimized to 7 kB, but I still don't think the article has the right conclusions, or that all the comparisons use only unoptimized images
BlueSwordM
2021-04-22 03:43:54
How did it get optimized down to 7kB in the 1st place? What does File Optimizer do that's different vs ECT? Anyway, I agree with your conclusion as well.
monad
2021-04-22 03:45:13
> I don't know if these files are used for comparison reporting
Okay, then how does this support the point?
Dr. Taco
2021-04-22 03:45:36
It's a bad argument, but the concern is valid: can the new stuff that hasn't had decades of optimization squeezing compete with the old stuff and the deeper advances made with it? My assumption is that over time the new stuff will get that same level of attention, eventually have its own tools and eventually a gauntlet-style program, and will easily outperform current-day state of the art (if it doesn't already).
Scope
2021-04-22 03:46:54
But, yes, sometimes there are comparisons with non-optimized images, including lossy comparisons with not the best encoders for older formats
_wb_
2021-04-22 04:10:14
The concern is certainly valid. Especially for lossy, actually. In lossy it happens often that claims are made w.r.t. the worst possible JPEG encoding (e.g. default libjpeg-turbo without even huffman optimization), or made by looking at not-very-meaningful metrics that are easy to optimize for (like psnr).
2021-04-22 04:12:45
Also, afaik, that test image is not used at all for comparing compression efficiency. It is there as part of the CI software tests to catch bugs, that's the only thing it is used for afaik.
Jyrki Alakuijala
2021-04-23 01:48:29
Chrome eng asked us to have at least contacted webkit so I wrote something on this bug: https://bugs.webkit.org/show_bug.cgi?id=208235
2021-04-23 01:49:36
Consider getting onto its cc list, or possibly even adding more useful data there
cucumber
2021-04-24 08:58:53
Hi, I recall there being a website where you could upload and compare the compression of various image codecs (including jxl) but I can't find it. Does it sound familiar to anyone?
monad
cucumber Hi, I recall there being a website where you could upload and compare the compression of various image codecs (including jxl) but I can't find it. Does it sound familiar to anyone?
2021-04-24 09:01:06
https://squoosh.app/ ?
cucumber
monad https://squoosh.app/ ?
2021-04-24 09:01:24
That's it. Thank you!
jzxf
veluca just use jxl_from_tree on this file
2021-04-25 03:11:11
any relevant parameters to add?
monad
2021-04-25 03:22:38
<@!264189852617146368> `jxl_from_tree tree_in jxl_out [tree_graph]`
veluca
2021-04-25 06:13:15
Yup
_wb_
2021-04-28 03:45:17
https://twitter.com/addyosmani/status/1387431906359185408?s=19
2021-04-28 03:45:45
Chapter 19 is on <:JXL:805850130203934781>
Jyrki Alakuijala
2021-04-28 05:10:34
it describes a lot of my work: guetzli, butteraugli, jpeg xl, webp lossless, zopfli, zopflipng :-]
Deleted User
2021-04-29 08:15:59
<@!792428046497611796> don't you think that `Developed by` section in enwiki article's infobox requires a little bit of cleanup? It's kinda cluttered atm
Scope
2021-04-29 08:30:01
I think it's more intended for companies/organizations than for names, like
Deleted User
2021-04-29 08:32:43
Me too.
Petr
<@!792428046497611796> don't you think that `Developed by` section in enwiki article's infobox requires a little bit of cleanup? It's kinda cluttered atm
2021-04-30 05:36:08
Someone moved the section "Authors" to the infobox. I don't like that edit but I didn't want to start an edit war so I didn't revert it. Good that you mention it here. Shall we revert it?
Deleted User
Petr Someone moved the section "Authors" to the infobox. I don't like that edit but I didn't want to start an edit war so I didn't revert it. Good that you mention it here. Shall we revert it?
2021-04-30 05:40:17
There are some other things I'd like to either change or remove, but I don't want to start an edit war either... Good that you've brought this issue up. At least I can control the Polish article; plwiki (unlike enwiki) requires reviewing every single edit, so articles are held to a somewhat higher standard IMHO.
2021-04-30 05:41:52
Chances are that someone else will also notice the same things that we want to change and just do it; I don't want to handle it ¯\_(ツ)_/¯
Petr
2021-04-30 05:43:25
That edit was made from Zurich (based on IP WHOIS). Who could that have been? 🙂
Deleted User
2021-04-30 05:43:37
I don't know.
_wb_
2021-04-30 05:43:45
Could make sense to keep it shorter in the infobox, e.g. only mention JPEG, Jyrki (Google), Luca (Google), Jon (Cloudinary)
2021-04-30 05:44:09
And in the main text have a larger list
2021-04-30 05:44:12
Or something
Petr
2021-04-30 05:45:25
We could also get some inspiration from Wikidata, where the individual fields (that can go to infoboxes and elsewhere) are described.
Deleted User
2021-04-30 05:46:42
Another change: someone removed the `(as of April 2021)` from the `Software` section title despite me linking to the relevant WP Manual of Style entry in the edit summary, and I don't have time to discuss changing it back with others. If I had, I'd ask someone else (Redactor, Admin etc.) what to do with it.
Petr
2021-04-30 05:49:11
Hmm, this isn't very helpful because it mentions both people and organisations: https://www.wikidata.org/wiki/Property:P178
_wb_
2021-04-30 05:53:58
I think both are useful information. Just have to be a bit more concise in an infobox, I don't think there's room there to add a long list of contributors.
lithium
2021-04-30 03:49:13
Hello, I have some questions about generation loss. I plan to transform some jpeg images and h264 video to new formats (jpeg xl and av1); I understand that transforming a lossy format to another lossy format will introduce some generation loss. Can choosing a near or higher quantizer on the jpeg xl and av1 encoders reduce generation loss? I hope the compressed files can keep good quality (near lossless) and reduce file size (using ffmpeg to transform h264 crf 21 to h265 crf 20 can cut the file size in half and still looks fine). Examples:
jpeg q99 yuv 444 => jpeg xl -d 0.5
jpeg q90 yuv 444 => jpeg xl -d 1.0
jpeg q80 yuv 444 => jpeg xl -q 80
h264 crf 21 => av1 --end-usage=q --cq-level=20 (Constant Quality)
h264 crf 24 => av1 --end-usage=q --cq-level=21 (Constant Quality)
_wb_
2021-04-30 04:03:03
If you worry about generation loss, best is to just losslessly recompress the jpegs
improver
2021-04-30 04:03:26
jpeg->jxl can do a transform which doesn't cause generation loss
2021-04-30 04:03:50
(different from usual lossless modular mode)
_wb_
2021-04-30 04:04:37
If you want to compress more, it probably does make sense to not do very high quality encodes of low-quality jpegs, since that can easily end up getting larger than the original
2021-04-30 04:06:01
I'd say do lossless jpeg recompression if the jpeg is below quality N (if you know that; can estimate by looking at quant tables), otherwise if it's a very high quality jpeg (at least q90), do -d 1.0
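As a rough sketch of that policy in script form (hedged: `estimate_jpeg_quality` is a hypothetical helper you would have to write, e.g. from the quantization tables, and cjxl's JPEG-transcoding flags and defaults have changed between versions, so check `cjxl --help` for your build):
```
import subprocess

def estimate_jpeg_quality(path):
    # Hypothetical helper: estimate a libjpeg-style quality from the quant tables
    # (for example via Pillow's img.quantization). Implementation not shown.
    raise NotImplementedError

def recompress(path, out, threshold=90):
    if estimate_jpeg_quality(path) < threshold:
        # Lower-quality JPEG: lossless JPEG recompression, no generation loss.
        # Recent cjxl builds expose this as --lossless_jpeg=1; in many versions it
        # is already the default when the input is a JPEG.
        subprocess.run(["cjxl", path, out], check=True)
    else:
        # Very high-quality JPEG (roughly q90 and above): a -d 1.0 re-encode
        # usually saves more than keeping the JPEG data.
        subprocess.run(["cjxl", path, out, "-d", "1.0"], check=True)
```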
fab
2021-04-30 04:17:17
look here if you want to find settings
2021-04-30 04:17:18
https://docs.google.com/document/d/1ru-J-6059QHtgYkrE1pB_T97O2sEGZYhDlEoiTYkbbo/edit
2021-04-30 04:17:56
if you need like
2021-04-30 04:17:58
jpeg xl will get good at -s 9 -d 2.21 will have good performance at -d 0.5 -s 9 -d 2.21 and up to -s 8 -d 4.45 i try to predict the future now the safe value is up to d 1.261 custom settings s9 and -s 6 -d 2.3 stock
2021-04-30 04:18:01
....,
2021-04-30 04:18:02
ETC
2021-04-30 04:18:10
wait for new build/commit
2021-04-30 04:19:26
i did a settings one time that causes image to shrink but i had few invalid files
2021-04-30 04:22:00
don't know why nobody told him it (NO JPG support in jxl ewout builds)
2021-04-30 04:22:58
new commit will be probably out when May will end
2021-04-30 04:23:07
so you need to wait in that case
2021-04-30 04:23:52
also chrome/FIREFOX will start integrating on August 31st (stable Chrome 93, feature freeze 17 June 2021) or something, so at least 2 months to have initial support in a browser
2021-04-30 04:24:53
I DID ALSO TEST WITH AVIF:
2021-04-30 04:24:54
https://discord.com/channels/794206087879852103/794206170445119489/837600695661625354
2021-04-30 04:28:11
here you can find the build:
2021-04-30 04:28:12
https://ci.appveyor.com/project/louquillio/libavif/builds/38598210/artifacts
2021-04-30 04:28:23
https://www.reddit.com/r/AV1/comments/l5drhd/fresh_libwebp_libwebp2_and_libavif_windows/
Pieter
fab also chrome/FIREFOX will start integrating on august 31th stable chome 93 feature freeze 17 june 2021 or something so at least 2 month to have initial support in browser
2021-04-30 04:29:09
Where does this information come from?
fab
2021-04-30 04:29:43
that i thought
2021-04-30 04:30:00
in this case search on discord
2021-04-30 04:30:08
spidermario or wb said it
2021-04-30 04:30:13
one of the dev
Pieter Where does this information come from?
2021-04-30 04:30:37
https://discord.com/channels/794206087879852103/822105409312653333/836512826469253130
Pieter
fab https://discord.com/channels/794206087879852103/822105409312653333/836512826469253130
2021-04-30 04:32:24
"in an optimistic scenario" <@794205442175402004> was talking about the earliest possible time; not about when it will actually happen.
lithium
_wb_ I'd say do lossless jpeg recompression if the jpeg is below quality N (if you know that; can estimate by looking at quant tables), otherwise if it's a very high quality jpeg (at least q90), do -d 1.0
2021-04-30 04:41:26
I can allow some generation loss if the new format brings a big size benefit (in my test, some non-photographic jpeg yuv 444 q80 to jxl -d 1.0 reduces size and quality still stays good; butteraugli is very useful). For non-photographic video and images, if I can allow some generation loss but don't want to get too much, should choosing a near quantizer be fine? (h264 crf 21 to av1 q 19~20, and jpeg q90~q99 to jxl d1.0~0.5; choosing too high a quality setting will give a bigger file)
fab look here if you want to find settings
2021-04-30 04:44:44
in my opinion, -d 2.21 is too risky; jxl -d 0.8 and 1.0 are in the safe zone
Pieter
2021-04-30 04:45:53
just don't change formats from one to another too often; that's the worst case for generation loss
2021-04-30 04:46:18
transcoding jxl to jxl generally doesn't cause much loss (but of course something)
2021-04-30 04:47:01
I wonder how jxl behaves w.r.t. generation loss if you alternate between lossy modular and vardct. I'd expect it to be a lot worse than staying within one.
fab
2021-04-30 04:47:38
between mozjpeg q80 and jxl q75 there's not much difference
2021-04-30 04:47:56
the new encoder don't make bigger change
2021-04-30 04:48:22
like first thing you notice is red saturation strong pinwheel effect
2021-04-30 04:48:38
but the image look good even with galaxy s4 screenshots
2021-04-30 04:48:54
at least with the jpg i've tested
2021-04-30 04:49:24
if you need d 1.0 use only s9
2021-04-30 04:50:29
i didn't get huge saving
2021-04-30 04:50:38
it also depends on the encoder
Deleted User
Pieter I wonder how jxl behaves w.r.t. generation loss if you alternate between lossy modular and vardct. I'd expect it to be a lot worse than staying within one.
2021-04-30 04:53:18
I wonder what would happen in that scenario if we restricted VarDCT to just 8x8 blocks, like JPEG does
Pieter
2021-04-30 04:53:54
I'd expect less variation in encoder choices to be better for generation loss.
lithium
2021-04-30 04:58:12
Can butteraugli constrain the VarDCT quantization process?
Deleted User
Pieter I'd expect less variation in encoder choices to be better for generation loss.
2021-04-30 04:59:00
I guess so, but I wonder how it would look visually. The transform is exactly the same as in JPEG, but there's also XYB, EPF, Gaborish, ...is that all?
fab
lithium in my opinion, -d 2.21 is too risk, jxl -d 0.8 and 1.0 will in safe zone
2021-04-30 05:00:44
at some distances i prefer s 8 as i said, s 8 in next build up to d 4.45 i will use maybe
2021-04-30 05:00:53
the problem is the user that don't know how to use jxl
2021-04-30 05:00:57
how he will do
2021-04-30 05:01:18
an automatic heuristic would be better to add on top
2021-04-30 05:02:03
2021-04-30 05:02:14
how to add png support
2021-04-30 05:03:04
how to contact ewout
2021-04-30 05:03:24
can i spam in that reddit link some question to him
Pieter
2021-04-30 05:03:33
open an issue in his repo?
fab
2021-04-30 05:03:42
appveyor is not a repo
2021-04-30 05:03:58
and also i don't use gitlab
2021-04-30 05:04:45
do you advice me to contact ewout replying to one of the comment in that reddit post
2021-04-30 05:05:00
do you think he will receive notification of an old reddit post
Pieter
2021-04-30 05:05:21
i don't know
fab
2021-04-30 05:05:54
ok
lithium
Pieter I wonder how jxl behaves w.r.t. generation loss if you alternate between lossy modular and vardct. I'd expect it to be a lot worse than staying within one.
2021-04-30 05:27:35
I want to find a balance point between size and quality: the compressed file has some generation loss, but still looks good and is much smaller. Earlier I created a python script that used butteraugli_old and libjpeg to recompress jpegs; butteraugli made sure the compressed file was still ok. I plan to use jpeg xl -d 1.0 and -d 0.5 to recompress non-photographic images, but I'm not sure whether non-photographic video can use the same method with av1. In my test, h264 crf 21 to h265 crf 20 can halve the size and quality is still good (crf 18 is visually lossless). I just worry that choosing a near quantizer is not a good idea, or maybe I made some mistakes?
Pieter
2021-04-30 05:29:38
<@461421345302118401> What kind of generation loss are you worried about? As in what sequence of transcodings would you expect your images to be subject to?
lithium
Pieter <@461421345302118401> What kind of generation loss are you worried about? As in what sequence of transcodings would you expect your images to be subject to?
2021-04-30 05:38:13
For non-photographic content, losing some little detail should be fine; I worry about artifacts and noise, and hope the compressed file still looks good (like the original).
[lossy] jpeg yuv 444 q80, 90, 99 => [lossy] jpeg xl -d 0.5 and -d 1.0
[lossy] h264 crf 21, 23, 24 => [lossy] av1 --end-usage=q --cq-level=19~20 (Constant Quality)
_wb_
2021-04-30 05:48:12
One re-encode should usually be fine
2021-04-30 05:49:18
Generation loss is mostly a problem when repeated re-encoding is happening, like when images go from facebook to twitter to whatsapp to instagram, and each of them does their own pretty lossy transcoding
fab
2021-04-30 06:06:04
Mozjpeg 81-87,4 q 79,1 --epf=2 --dots=0 --noise=1 JPEG -d 1.114 -s 7
Mozjpeg 92 -d 0.921 -q 88.61 -s 6 --noise=0 --patches=1 --dots=0 --epf=2
Mozjpeg 90 -q 90,9 -gaborish=1 -s 8 to -d 1.99 -s 6
Mozjpeg 82 -q 71.9 wp2 -effort 6 + 92,81 cavif rs 0.6.6 -effort 0
Mozjpeg 81 -q 74.4 jpeg xl
Mozjpeg 80 -q 61,95 -s 9 -q 65,41 -s 7 (depending)
Mozjpeg 70 -q 62,19 -s 8 -q 95,28 -effort 7 wp2b (definitive)
Mozjpeg 65 -q 87,95 -effort 8 wp2
Mozjpeg 60 -avif q 38 -effort 9 --sharpness 4 --more color extra chroma compression tune for vmaf noise synthesis 4 libavif d37ef741 https://ci.appveyor.com/project/louquillio/libavif/builds/38598210/artifacts
Mozjpeg 55 keep it lossless
Mozjpeg 50 keep it lossless
2021-04-30 06:06:55
this are totally invented values
2021-04-30 06:07:03
mozjpeg 60 you will laugh
2021-04-30 06:07:29
vmaf with noise syntesis sharpness and extra extra chroma compression don't exist
2021-04-30 06:07:42
do not know
2021-04-30 06:07:53
maybe bluesword is more informed of that
2021-04-30 06:08:00
of AVIF codec
2021-04-30 06:08:41
the problem we can't know if some photos are made with old jpeg encoder
2021-04-30 06:08:58
also those values are invented
2021-04-30 06:09:10
-q 87.1 with gaborish made a sense
2021-04-30 06:11:05
the best thing is evaluating the image and ignore those random values
2021-04-30 06:11:49
2021-04-30 06:11:59
like you choose any of that or different
2021-04-30 06:12:05
actually evaluating the image
2021-04-30 06:12:12
seeing the original source
2021-04-30 06:12:43
but jpg will never compress as well as a png if you encode anime, for example
2021-04-30 06:13:02
but that's a problem for who provide that content
2021-04-30 06:20:20
maybe i will try also with the butteraugli tool if i can download png images
lithium
2021-04-30 06:27:41
ok, I understand. Hope the next jpeg xl release can fix my vardct issue, <@!794205442175402004> <@!799692065771749416> <@!260412131034267649> <@!416586441058025472> Thank you for your help 🙂
fab
2021-04-30 06:28:01
i'm sending reply to ewout
2021-04-30 06:28:33
anyway new encoders will come
2021-04-30 06:29:06
if you can wait a month
Pieter
fab anyway new encoders will come
2021-04-30 06:32:07
Will those do anything that helps here?
fab
2021-04-30 06:32:21
no i'm not a developer
2021-04-30 06:32:28
and i can't predict the future
2021-04-30 06:32:37
i can ask bluesword and ewout
2021-04-30 06:32:45
3 thing i need
Pieter
2021-04-30 06:32:46
Then why do you mention it here?
2021-04-30 06:33:16
Yes, of course new encoders will come, but why does <@!461421345302118401> need to wait for them?
fab
2021-04-30 06:33:58
i don't know
2021-04-30 06:34:11
2021-04-30 06:34:19
i made first question
2021-04-30 06:45:32
https://www.reddit.com/r/AV1/comments/l5drhd/fresh_libwebp_libwebp2_and_libavif_windows/
2021-04-30 06:45:56
2021-04-30 06:48:46
bluesword is not online
lithium I want find a size and quality balance point, compressed file have some generation loss, but still look good and reduce much size, Before i create python script use butteraugli_old and libjpeg to recompression jpeg, butteraugli will make sure compressed file still ok, I plan use jpeg xl -d 1.0 and -d 0.5 to recompression non-photographic image, but i not sure non-photographic video can use same method on av1, In my test, h264 crf 21 to h265 crf 20 can reduce half size and quality still good(crf 18 visually lossless), I just worry choose near quantizer is not a good idea or maybe i made some mistakes?
2021-04-30 06:51:38
h265 is a video codec: it stores a frame and then some inter prediction; jpeg xl stores only intra, it is intra-only
lithium
fab h265 is a video codec is stores a frame then the some inter prediction, jpeg xl stores only intra, is only intra
2021-04-30 06:57:38
I don't use BPG on my still image, i will use jpeg xl
fab
2021-04-30 06:58:06
phone or camera files?
2021-04-30 06:58:22
or even scans?
lithium
2021-04-30 06:59:43
??? I think my target images are drawings (non-photographic).
Pieter Yes, of course new encoders will come, but why does <@!461421345302118401> need to wait for them?
2021-04-30 07:03:22
I'm waiting for some vardct encoder improvements; Jyrki Alakuijala said he needs three weeks.
fab
2021-04-30 07:03:45
jyrki on twitter
2021-04-30 07:03:54
said he needs a few weeks
2021-04-30 07:04:55
so if you are interested in those distances like those
2021-04-30 07:04:56
https://discord.com/channels/794206087879852103/794206087879852106/837724470914318337
2021-04-30 07:05:11
i would wait the month
Deleted User
lithium I wait some vardct encoder improvement, Jyrki Alakuijala say need three weeks.
2021-04-30 07:57:11
Please provide proof, <@!799692065771749416> always needs proof. 😁
lithium ??? I think my target image is drawing(non-photographic).
2021-04-30 09:22:13
For drawings try both Modular and VarDCT, the former might actually give nice results. But you shouldn't overdo it, because Modular tends to make square-ish artifacts, which is the last artifact you'd want to see on continuous lines (VarDCT makes more natural lines). Go figure.
2021-04-30 09:23:28
Would you want to send us some examples with your desired compression level, please? I'd like to tinker with it a bit.
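A quick way to try that advice on a drawing (a sketch only; the flag for forcing modular mode is spelled `--modular=1` or `-m` depending on the cjxl version, so treat it as an assumption): encode the same image both ways at the same distance, compare sizes, and look at the decodes for the square-ish vs. smoother artifacts described above.
```
import os
import subprocess

src = "drawing.png"
for name, extra in [("vardct", []), ("modular", ["--modular=1"])]:
    out = f"{name}.jxl"
    subprocess.run(["cjxl", src, out, "-d", "1.0", *extra], check=True)
    subprocess.run(["djxl", out, f"{name}_decoded.png"], check=True)  # inspect artifacts
    print(name, os.path.getsize(out), "bytes")
```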
lithium
Would you want to send us some examples with your desired compression level, please? I'd like to tinker with it a bit.
2021-05-01 06:10:56
https://discord.com/channels/794206087879852103/794206170445119489/827962210965782559 <@456226577798135808> <@456226577798135808> probably this improvement is already implemented? 🙂
Deleted User
2021-05-01 12:27:18
Should be, but you never know what "x amount of time" really means in the end.
2021-05-01 12:29:44
0.4 is also quite a bit behind schedule.
veluca
Should be, but you never know what "x amount of time" really means in the end.
2021-05-01 01:59:37
indeed, it's not 😛
2021-05-01 01:59:52
we're trying to get to 0.4 by the chrome 92 branch cut
2021-05-01 02:00:37
it should be possible, but you wouldn't *believe* how many things crop up (for me specifically, particularly busy ISO weeks and PhD defenses don't help)
Deleted User
2021-05-01 03:49:50
https://i.imgflip.com/1tdjtd.jpg
2021-05-01 03:50:27
☕
Scope
2021-05-01 03:53:50
https://tenor.com/view/mr-bean-waiting-still-waiting-gif-13052487
lithium
2021-05-01 04:34:40
That was really really sad... Chrome 92 Beta coming Jun 3 - Jun 10
Pieter
2021-05-01 05:06:13
?
jzxf
2021-05-01 06:23:31
Q: are there any educational materials targeted at technologists explaining the mechanics and modes of JXL?
2021-05-01 06:23:58
(other than the official docs, I mean)
Scope
2021-05-01 06:27:23
Except for some presentations (which are more targeted at a wider audience) and old papers, I don't think so
2021-05-01 06:29:58
And maybe something can be found about FLIF/FUIF and PIK as JPEG XL ancestors
_wb_
2021-05-01 06:31:03
We have work to do to explain all the stuff in jxl
2021-05-01 06:31:57
My slides here are a start, but it's quite superficial and partial: https://docs.google.com/presentation/d/1LlmUR0Uoh4dgT3DjanLjhlXrk_5W2nJBDqDAMbhe8v8/edit?usp=sharing
lithium
Pieter ?
2021-05-01 07:03:03
I think I need to wait another month (Chrome 92) until the next jpeg xl release 😢
fab
2021-05-01 07:06:12
what was that site where you check chrome stats?
_wb_
lithium I think i need wait another month(chrome 92) until next jpeg xl release 😒
2021-05-01 07:14:24
No, we will likely keep doing regular syncs / releases
2021-05-01 07:15:34
Also planning to move development to a public repo so you get changes immediately (and useful commit messages, and others can contribute)
lithium
fab what it was that site where check chrome stats
2021-05-01 07:32:16
https://www.chromestatus.com/features/schedule
Deleted User
2021-05-02 12:37:10
https://youtu.be/sAURUtjBGhg
2021-05-02 12:39:13
<@794205442175402004> suggestion: please redo this video, but this time with - MozJPEG - JPEG XL (VarDCT) - JPEG XL (Modular) - AVIF and PSNR together with Butteraugli as metrics. No hurry, of course 🙂
_wb_
<@794205442175402004> suggestion: please redo this video, but this time with - MozJPEG - JPEG XL (VarDCT) - JPEG XL (Modular) - AVIF and PSNR together with Butteraugli as metrics. No hurry, of course 🙂
2021-05-02 05:52:50
That would be interesting indeed. Do you want that specific image or one with color?
Scope
2021-05-02 10:26:08
I think it would be interesting to see both photographic and artificial images with clear lines, like art, comics or charts/UI
Scientia
2021-05-02 11:13:55
Can higher quality lossy modular outperform, in subjective quality or size, vardct on things like diagrams, vector illustrations, and cartoons with distinct lines?
_wb_
2021-05-02 11:22:22
Maybe, but it is mostly a question of encoder decisions. I suspect that for a lot of such material, a good delta-palette encoder with a smart palette choice would be better than either vardct or squeeze.
2021-05-02 11:23:56
(or even regular modular with just quantized predictor residuals, with quantization amount depending on `WGH`)
Deleted User
_wb_ That would be interesting indeed. Do you want that specific image or one with color?
2021-05-02 11:51:52
At first I wanted to see grayscale line-art (to exploit Kornel's famous mozjpeg patch: https://kornel.ski/deringing), but Scope's got a point. Maybe the first panel of xkcd from your original video (in order to save visual space) and a photo?
_wb_
2021-05-02 11:57:28
I can do multiple comparisons. Just need to find that script somewhere, lol
2021-05-02 11:58:01
(also not likely to happen today)
Deleted User
2021-05-02 12:00:31
Again: no hurry 🙂
2021-05-02 12:03:54
Maybe share it? I'd probably be able to rewrite it in Python (that I feel more comfortable programming in) and run it on my PC...
_wb_
2021-05-02 12:04:38
I think maybe one of the youtube videos has the script in the description or in a comment
2021-05-02 12:05:07
It's quite straightforward, just need to manually figure out settings that result in the same bpp for that image
2021-05-02 12:06:12
And need to alternate between at least two different q settings because otherwise some codecs like jpeg can converge and stop getting generation loss.
Deleted User
2021-05-02 12:20:44
I know. That one should be easy to do in Python: just do it with an RNG with hardcoded seed (for repeatability).
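A minimal Python sketch of such a loop (not the original script): re-encode the previous generation each time, picking one of two distance settings with a seeded RNG so the codec can't settle into a fixed point. The cjxl/djxl invocations and the particular -d values are assumptions you would tune to get roughly equal bpp.
```
import random
import subprocess

random.seed(42)                 # hardcoded seed for repeatability
distances = [1.0, 1.5]          # two settings tuned to give roughly the same bpp

src = "input.png"
for gen in range(50):
    jxl, png = f"gen{gen:03d}.jxl", f"gen{gen:03d}.png"
    subprocess.run(["cjxl", src, jxl, "-d", str(random.choice(distances))], check=True)
    subprocess.run(["djxl", jxl, png], check=True)
    src = png                   # the next generation starts from this decode
```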
lithium
2021-05-02 12:39:24
<@456226577798135808> A little curious: which Python module will you use to implement concurrency and async?
Deleted User
lithium <@456226577798135808> A little curious, you will use which python module to implement concurrent and async?
2021-05-02 12:42:21
...not using any currently, haven't learned that yet πŸ™
lithium
2021-05-02 12:46:19
my cjxl python script has some issues; I implemented concurrency, but sometimes when cjxl encodes too many big images at the same time, 16GB of ram is not enough...
_wb_
2021-05-02 12:48:46
Try faster speed settings
lithium
2021-05-02 12:48:53
I using ryzen 5 1600 16GB
_wb_
2021-05-02 12:48:56
Slower speed also eats more memory
lithium
2021-05-02 12:49:53
for big images i can only run 4 cjxl instances concurrently
2021-05-02 12:50:23
for small images i can run 10~12 cjxl instances concurrently
2021-05-02 01:08:24
Is it possible to roughly estimate the ram usage of slower speeds? Example: if an image is large, around 2099x3012 (2.27MB), do cjxl's slower speeds have a chance of using more ram?
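One common way around that (a sketch, not lithium's actual script): let a small pool limit how many cjxl subprocesses run at once, with fewer slots for large inputs; the size cutoff and worker counts here are arbitrary.
```
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

BIG = 8_000_000  # bytes; arbitrary cutoff for what counts as a "big" image

def encode(path):
    subprocess.run(["cjxl", path, path + ".jxl", "-d", "1.0"], check=True)

def encode_all(paths, big_workers=4, small_workers=10):
    big = [p for p in paths if os.path.getsize(p) > BIG]
    small = [p for p in paths if os.path.getsize(p) <= BIG]
    # Threads are enough here: they only gate how many cjxl processes exist at once.
    with ThreadPoolExecutor(max_workers=big_workers) as ex:
        list(ex.map(encode, big))
    with ThreadPoolExecutor(max_workers=small_workers) as ex:
        list(ex.map(encode, small))
```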
_wb_
2021-05-02 01:15:38
There might be encoder memory improvements in the future, so far the decoder has been a bigger priority
lithium
2021-05-02 01:20:41
Encoder memory is fine, i actually need vardct improvements for drawings 🙂
Scope
2021-05-02 01:30:06
<https://www.reddit.com/r/linux/comments/n2o9ha/call_to_action_for_developers_in_light_of_the/>
2021-05-02 01:31:25
People still keep mentioning ZPNG as a good alternative to new formats <:Thonk:805904896879493180>
2021-05-02 01:41:32
Yes, although there may be some other implementations, people cannot understand that this is not a simple migration; it is very little different from the adoption of new formats, and in addition it will add to the confusion and compatibility hell if the PNG extension remains the same
_wb_
2021-05-02 01:45:20
It is like adding alpha to jpeg. It is straightforward technically: you can make 4-component jpegs (it is quite common for CMYK), all you need is for software to interpret the 4th channel as alpha
2021-05-02 01:45:43
But doing that is as tricky as getting a whole new codec adopted
2021-05-02 01:46:54
Basically a format is always what it was initially, you cannot really do extensions
2021-05-02 01:47:24
APNG is nice but there is still a lot of software that will just show you the first frame
2021-05-02 01:59:31
JXL speed 3?
2021-05-02 02:01:35
(depending on what png compressor you compare to: png encoding can be as fast as `cp` or as slow as you want)
2021-05-02 02:24:13
It all depends how you measure
2021-05-02 02:25:05
What is the png encoder he compares against, on what images, etc
2021-05-02 02:29:57
I'm sure you can be a lot faster than zopflipng but also denser if you use zstd/brotli instead of deflate, especially for images that do not benefit much from lz77 matching but they do benefit from better entropy coding than just Huffman...
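That claim is easy to poke at on raw pixel data (a sketch; it skips PNG filtering entirely, so it only compares the entropy-coding back-ends, and it assumes the third-party `brotli` and `Pillow` packages):
```
import zlib
import brotli              # pip install brotli
from PIL import Image      # pip install pillow

pixels = Image.open("test.png").convert("RGB").tobytes()
print("raw bytes :", len(pixels))
print("deflate   :", len(zlib.compress(pixels, 9)))
print("brotli    :", len(brotli.compress(pixels, quality=9)))
```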
lithium
2021-05-02 04:17:07
Interesting, I see some anime sites starting to use avif; i think the image format war has started. Chrome 90 decode speed is fine.
2021-05-02 04:19:48
jpeg yuv 444 to avif yuv is fine, but i think png rgb to avif yuv is not a good idea; you need to use avif ycocg
2021-05-03 06:07:54
I think that is an english site (not sure); the images are avif (drawing type)
fab
2021-05-04 11:46:12
talking about fonts, what should a font be like to resist re-encodings and lossy image compression?
2021-05-04 11:47:20
should the shapes be less rigid?
2021-05-04 11:47:45
for example i worry that i can't compress graphs because there is my font suduwe in there
2021-05-04 11:48:47
2021-05-04 11:49:05
for example someone sent this in r/av1 server
jzxf
_wb_ My slides here are a start, but it's quite superficial and partial: https://docs.google.com/presentation/d/1LlmUR0Uoh4dgT3DjanLjhlXrk_5W2nJBDqDAMbhe8v8/edit?usp=sharing
2021-05-04 12:09:10
Thanks! This looks promising.
Deleted User
fab for example someone sent this in r/av1 server
2021-05-04 02:10:35
It was me 😉
fab
2021-05-04 02:10:53
ok
2021-05-04 02:11:45
menu 9 nouveaba plate
2021-05-04 02:11:48
suduwe icon 12
2021-05-04 02:11:52
firefox suduwe 12
2021-05-04 02:12:19
text suduwe (myfont) 10
2021-05-04 02:12:40
i don't know how they compare
2021-05-04 02:12:51
maybe with text only i i can readfd
2021-05-04 02:13:03
also using text as such lower sizes without serif
2021-05-04 02:13:16
even if the serif they are, there isn't monospace
2021-05-04 02:13:21
and they are too small
2021-05-04 02:14:02
and i'm noticing i have difficulty to read this font faster than 95 fps
2021-05-04 02:14:19
anyway i don't need
2021-05-04 02:15:09
so i maybe should revert to the old font i used
2021-05-04 02:16:16
guaruja grotesk and google sans aren't bad
2021-05-04 02:16:25
why use strange fonts?
2021-05-04 02:16:36
also as i said earlier there is a problem with re encoding
2021-05-04 02:17:07
even with JXL, the devs don't assume responsibility if I choose to use the font I forked from two others
2021-05-04 02:19:08
https://github.com/VanillaandCream/Palanquin
2021-05-04 02:19:23
this is the one that looks most like what I made
2021-05-04 02:20:23
this one I find legible at 100 fps and readable at 140 fps
2021-05-04 02:21:04
this is latest version
2021-05-04 02:22:39
also there isn't an equilibrium between the letters.
2021-05-04 02:23:07
so that makes reading long messages on Discord worse and reading my messages worse, because you have to think about the formatting.
diskorduser
2021-05-05 07:20:13
I want to compress dngs to jxl losslessly to save space. What is the right procedure?
_wb_
2021-05-05 07:33:10
There isn't one yet
2021-05-05 07:34:26
You could extract the pixel data with dcraw and encode it as a 4-channel image, but we still need something to preserve the metadata
2021-05-05 10:13:25
Question is if you want lossless DNG (before debayering) or just lossless (or even high-fidelity lossy) high-bitdepth RGB
2021-05-05 10:14:13
Both are in principle possible but the first has no tooling yet while the second could be done right now
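One way the second option could look in practice, as a rough sketch rather than an official recipe (it assumes dcraw and cjxl/djxl are on the PATH and that cjxl accepts PNM input here; filenames are made up, and none of the DNG metadata is carried over, as noted above):
```
# Demosaiced, linear 16-bit PPM from the raw (camera white balance).
dcraw -w -4 photo.dng            # writes photo.ppm

# Mathematically lossless JPEG XL of those RGB pixels.
cjxl -d 0 photo.ppm photo.jxl

# Round-trip to check the pixels survive.
djxl photo.jxl roundtrip.ppm
```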
diskorduser
2021-05-05 12:25:52
I just want to develop DNGs in Adobe Lightroom Classic in the future. Which method is good enough for that purpose?
2021-05-05 12:27:29
Could you send me your imagemagick pkg?
2021-05-05 12:38:56
Yeah, I am an Arch user
2021-05-05 12:41:19
Or send pkgbuild pls
2021-05-05 12:44:54
Thanks <@456226577798135808>
raysar
2021-05-05 04:15:49
I will test, but it's not clear; a DNG can contain 3 types of picture (an 8-bit JPEG, compressed TIFF, or losslessly compressed bayer data, plus a LOT of metadata). The magic feature would be to compress the bayer picture lossily (or losslessly), reducing size with a very small reduction in quality (a debayered picture is way bigger than the bayer picture). Some DSLRs can do that in their raw, but it's not tunable.
_wb_
2021-05-05 04:29:15
The new Apple ProRAW is basically (afaiu) a HDR RGB image (plus some auxiliary images for all kinds of other info the camera has, like depth maps etc, and a bunch of metadata)
2021-05-05 04:30:10
For the payload images (main and aux) of that, maybe jxl could be a good codec.
diskorduser
2021-05-05 05:14:44
How do I find the type of a DNG? `identify` output from a DNG from my phone - https://bpa.st/6NCA
spider-mario
2021-05-05 07:38:42
it has a **lot** of metadata
2021-05-05 07:40:10
https://www.adobe.com/content/dam/acom/en/products/photoshop/pdfs/dng_spec_1.5.0.0.pdf chapter 4
2021-05-05 07:40:34
I would be surprised if there is much at all that one can do with a DNG with the metadata stripped
raysar
diskorduser How do I find the type of dng? Identify output from a dng from my phone - https://bpa.st/6NCA
2021-05-05 09:42:58
If it's a DNG from an Android photo, it's a bayer picture; you can see the bayer matrix with the darktable software :p
2021-05-05 09:52:23
Look at the beauty of a classical bayer raw 😄
2021-05-05 10:00:38
I don't know what the best solution is for encoding it lossily, like one big 16-bit grayscale image or 4 layers, because there are big light differences between the colors.
2021-05-05 10:21:04
I did the test with "Adobe DNG Converter"; all these files are .dng:
- original DNG, 16 Mpx, from my phone: 30.6 MB (because Android phones are so stupid with no lossless compression ...)
- lossless DNG: 11.3 MB
- linear DNG (debayered): 69.1 MB
- lossy DNG (crap 8-bit JPEG): 8.6 MB
We need a "16-bit JXL DNG", bayered and unbayered, lossy and lossless 😄 The JPEG in DNG has a huge advantage: it keeps the white balance temperature (kelvin + tint), so I can batch-tune a series of photos.
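For the bayered variant ("one big 16-bit grayscale"), an untested sketch along the same lines, assuming dcraw's document mode and PGM input support in cjxl (the metadata, white balance included, would still have to travel separately):
```
# Undebayered sensor data as a single 16-bit grayscale image.
dcraw -D -4 photo.dng            # writes photo.pgm

# Lossless JPEG XL of the raw mosaic; a small -d would make it lossy.
cjxl -d 0 photo.pgm photo-bayer.jxl
```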
spider-mario
2021-05-05 10:34:22
the lossy compression in Adobe DNG Converter is so bad, I couldn’t believe it when I saw the artifacts
raysar
spider-mario the lossy compression in Adobe DNG Converter is so bad, I couldn’t believe it when I saw the artifacts
2021-05-05 10:36:57
It's not about the JPEG artifacts, it's about the stupid 8 bits that destroy the blacks and whites of the picture BEFORE retouching and adjusting "contrast". At 800% zoom I don't see artifacts from the JPEG in a DNG.
spider-mario
2021-05-05 10:37:54
I don’t really see what you are referring to, why are you mentioning jpeg artifacts?
2021-05-05 10:38:28
or 8 bits
raysar
spider-mario I don’t really see what you are referring to, why are you mentioning jpeg artifacts?
2021-05-05 10:41:03
You say that lossy DNG is bad, right? (Lossy DNG is just a stupid 8-bit JPEG inside a DNG.)
spider-mario
2021-05-05 10:41:11
I was referring to this setting myself:
raysar You say that lossy dng is bad,right? (lossy dng is only a stupid jpeg 8bit in a dng)
2021-05-05 10:41:16
ow, is it?
2021-05-05 10:41:28
the artifacts didn’t look very jpeg-like to me
raysar
2021-05-05 10:41:30
YES i read the dng spec
2021-05-05 10:41:49
it's their own custom internal JPEG encoder
spider-mario
2021-05-05 10:42:23
oh my, it’s true
2021-05-05 10:42:24
why
raysar
2021-05-05 10:42:38
Because they are stupid, it's adobe :/
2021-05-05 10:47:24
we need to spam adobe to add jxl in their dng πŸ˜„
spider-mario
2021-05-05 10:56:25
the main artifact I remember was posterization, especially in blue skies
raysar
spider-mario the main artifact I remember was posterization, especially in blue skies
2021-05-05 10:56:58
This is the magic of an 8-bit picture with sun 😄
diskorduser
2021-05-06 10:01:17
2021-05-06 10:01:58
Why does it crash when quality is 100 πŸ€”
veluca
2021-05-06 10:28:47
<@!794205442175402004> for this πŸ˜„ (but maybe filing an issue might be a good idea...)
_wb_
2021-05-06 10:36:00
looks like it somehow tries to do modular with XYB and lossless float at the same time, which doesn't really make sense
2021-05-06 10:43:11
jxl issue - we should do something different than having an assert fail when we get called in this way
cucumber
2021-05-06 03:43:24
Is there a good literature review on various image coding error functions? (As in MS-SSIM, PSNR, etc)
2021-05-06 03:45:01
I'm already pretty familiar with the above, but not particularly familiar with other algorithms, and I'm not *especially* familiar with how they might compare to each other.
2021-05-06 03:55:37
And is "just use butteraugli" a sane approach to evaluating error on photographs?
BlueSwordM
cucumber And is "just use butteraugli" a sane approach to evaluating error on photographs?
2021-05-06 03:58:47
It's a good approach, yes.
2021-05-06 04:00:00
For pure image quality, butteraugli is quite accurate from my testing on photographic images.
_wb_
2021-05-06 04:08:22
Butteraugli is good for medium to very high fidelity
2021-05-06 04:08:38
For ultra low to low fidelity, it is not worth much
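If you just want numbers to play with, a couple of reference metrics are easy to run from the command line; this sketch assumes a recent ImageMagick and a libjxl tree built with -DJPEGXL_ENABLE_DEVTOOLS=ON (the butteraugli tool name and path may differ between versions):
```
# PSNR and SSIM between an original and a decoded image (ImageMagick).
magick compare -metric PSNR original.png decoded.png null:
magick compare -metric SSIM original.png decoded.png null:

# Butteraugli distance from a libjxl devtools build.
./build/tools/butteraugli_main original.png decoded.png
```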
cucumber
2021-05-06 04:20:25
That's perfectly fine with me. Though, if there's no good literature review that would help answer these questions, is there anywhere where these shortcomings of the various functions are well documented?
_wb_
2021-05-06 05:06:52
This is something I think we need to work on in the future
2021-05-06 05:07:10
Better metrics and better validation/understanding of existing metrics
2021-05-06 05:08:45
Both butteraugli and ssimulacra were created rather pragmatically because we needed something better than what was already out there, just to do the things we wanted to do
2021-05-06 05:11:06
The scientific/academic publications around it still need to happen though (usually a low priority thing if there is other stuff to work on, and there always is)
fab
2021-05-06 05:12:58
-q 94,6 -X 22.4 -Y 38 -I 0.68 -s 3 -m -q 99,03 --effort 2 -q 78,079 -s 5 -epf=1 --noise=0 --dots=1 --gaborish=0 --patches=0
2021-05-06 05:13:34
This was fine with april 22 jxl and wp2 jamaika doom9 builds 01052021
2021-05-06 05:14:00
Don't know if it can be considered
2021-05-06 05:14:11
Fidelity
paperboyo
_wb_ For ultra low to low fidelity, it is not worth much
2021-05-06 05:17:14
Is there an existing one that performs well at these low bpps?
_wb_
2021-05-06 05:18:43
I think the new AI-based ones are most suitable there
2021-05-06 05:20:01
LPIPS and PieAPP, for example
2021-05-06 05:20:58
At some point you can just as well use no-reference metrics
cucumber
_wb_ The scientific/academic publications around it still need to happen though (usually a low priority thing if there is other stuff to work on, and there always is)
2021-05-06 07:25:59
Unfortunate but understandable. Thank you!
_wb_
2021-05-07 04:06:57
https://twitter.com/kamedo2/status/1390680189416349700?s=19
2021-05-07 04:08:03
Always funny to get tagged in a tweet in Japanese πŸ˜…
Scope
2021-05-07 04:13:39
At least now there are translators, and it is possible to understand any language (except some very exotic ones)
lithium
2021-05-07 04:42:03
But we don't have a JPEG XL Chinese wiki page?
Pieter
lithium but we don't have jpeg xl chinese wiki page?
2021-05-07 05:27:53
You can create one.
Scope
2021-05-08 12:20:35
<https://encode.su/threads/3615-New-image-codec-comparison-tests-are-consistently-invalid-and-overestimate-savings?p=69632&viewfull=1#post69632>
2021-05-08 12:20:55
> **Jyrki Alakuijala** > In JPEG XL lossless we copied over what worked very well (entropy clustering, 2d distance LUT for LZ77, non-byte-based LZ77, delta palette, recursive format definition -- i.e., side channel images, palette etc. use the image format itself in a limited way). We added stuff that needed more attention (patches for text, Alex Rhatushnyak's lossless photo compression predictor, context trees from FLIF/FUIF, larger support for prediction, mixed non-delta/delta palette). We did also remove stuff -- like the color cache and the pixel bundling in the color indexing transform. > > The entropy clustering idea from WebP lossless is also the main idea powering fast context modeling in Brotli, Brunsli, PIK, JPEG XL and WebP2 lossless. By clustering entropies, we combined different contexts and have overall less entropy codes. Having fewer entropy codes means a smaller working set, less pressure on the memory system. Having fewer entropy codes reduces the cost of sending over entropy codes. Having fewer entropy codes reduces entropy in explicit entropy coding control data such as blocks and context maps in Brotli, and the entropy image in WebP lossless. Having fewer entropy codes allows us to have the full, boring, LUT-based, fast and simple entropy codes to be defined for each context instead of the more traditional approach of taking a base entropy code and manipulating the entropy coding depending on the context -- through arithmetic coding, symbol ranking or other approaches.
2021-05-08 12:21:00
> The first version of PNG took about 4 months to design and implement. It took me 6.5 months to design WebP lossless, another half a year for the WebP team to productionize (security hardening, integrating WebP lossless as lossy alpha, streaming decoding, converting from C++ to C). JPEG XL (accounting its development trajectory from Butteraugli/Brunsli/FLIF/FUIF/Guetzli/PIK) has been about 7 years of "sous vide" design, and now about half a year in the productionizing.
_wb_
2021-05-08 05:41:41
> This is how I can decode now with Ubuntu linux... > wine cjxl.exe INPUT.jpg TEMP.jxl && wine djxl.exe TEMP.jxl OUTPUT.png > Using Windows binarie.
Scientia
2021-05-08 05:42:14
If you use Linux it's super easy to just use cmake
2021-05-08 05:42:28
Plus there are instructions on the GitLab for building
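Roughly, that build goes like this (repo URL and options as of the time of writing; the repo's own instructions are the authoritative version):
```
git clone --recursive https://gitlab.com/wg1/jpeg-xl.git
cd jpeg-xl
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=OFF ..
cmake --build . -- -j$(nproc)
# cjxl and djxl end up under build/tools/
```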
_wb_
2021-05-08 05:43:04
Using wine + windows binaries to run cjxl in linux <:kekw:808717074305122316>
2021-05-08 05:43:29
Linux users are not what they used to be
monad
2021-05-08 05:47:19
Which is a good thing
_wb_
2021-05-08 05:47:56
Yes
2021-05-08 05:48:09
Just have to get used to it
2021-05-08 05:49:03
We should make `sudo apt install jxl` work
2021-05-08 06:06:36
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=948862
2021-05-08 06:09:16
If someone knows a debian maintainer who can make stuff happen, that would be great
improver
2021-05-09 12:54:58
no that sounds absurd
monad
2021-05-09 02:00:41
It's good that Linux has become approachable enough that it has not-so-savvy users.
2021-05-09 02:02:06
Compiling source code is not exactly convenient even if you're familiar with it, and most people are not familiar with it.
improver
2021-05-09 02:05:55
it's a blessing and a curse
2021-05-09 02:06:27
but yeah if semi-official binary builds are provided then they should be for linux too
BlueSwordM
2021-05-09 02:14:32
I have the option of 2 and 4.
improver
2021-05-09 02:15:37
1 x 4 (archlinux aur using aur helper so stuff's effortless to update)
190n
2021-05-09 02:16:09
aur isn't official repo so 1
improver
2021-05-09 02:17:10
yeah i guess im too sleepy
Pieter
2021-05-09 02:17:20
2a. compile and don't install
2021-05-09 02:17:35
(you can just run cjxl and djxl directly)
190n
2021-05-09 02:18:15
iirc blue is running maybe openmandriva?
improver
2021-05-09 02:18:15
that's definitely reasonable, as tracking system binaries without a package manager isn't fault-free
Pieter
2021-05-09 02:18:33
This is true for almost all typical open source tools.
2021-05-09 02:19:16
It's different of course once you include libraries that other packages rely on. That's also possible without installing, but it gets hairy passing all the paths around.
BlueSwordM
2021-05-09 02:19:48
https://old.reddit.com/r/AV1/comments/mfhkro/openmandriva_linux_now_has_native_jpegxl_packages/ https://postimg.cc/dkRmQdNb
2021-05-09 02:20:38
I just asked the devs nicely, and they complied πŸ™‚
2021-05-09 02:21:59
In fact, JPEG XL is en route to being in the next stable release of OpenMandriva by default, including Qt support and native GIMP support, if I recall correctly.
Pieter
2021-05-09 02:22:18
It seems by default cjxl and djxl statically link libjxl, so that's no issue. In other libraries you sometimes see libtool trickery, where the produced "binary" is really a shell script that sets the LD library path correctly to find the libs, and then invokes the real binary.
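A quick way to check which case a given build falls into (assuming a build/ layout like the one above):
```
# If libjxl is statically linked into cjxl, it won't show up in ldd output.
ldd build/tools/cjxl | grep -i jxl || echo "libjxl linked statically"
```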