JPEG XL

jxl

Anything JPEG XL related

CrushedAsian255
2025-01-27 03:18:55
4 MB is the largest `output_size` for ICC at level 5
2025-01-27 03:20:15
the box format explicitly allows XLBoxes (boxes larger than 4 GB)
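For context, the XLBox mechanism comes from the ISO BMFF container layer: a 32-bit box size of 1 means a 64-bit size field follows the box type. A minimal Rust sketch of that header logic (names illustrative, not taken from libjxl):
```rust
use std::io::{self, Read};

/// Parsed ISO BMFF box header.
struct BoxHeader {
    box_type: [u8; 4],
    size: u64,
}

/// Reads one box header. A 32-bit size of 1 signals an XLBox: the real
/// size is a 64-bit "largesize" field after the type, which is what
/// allows boxes larger than 4 GB.
fn read_box_header<R: Read>(r: &mut R) -> io::Result<BoxHeader> {
    let mut head = [0u8; 8];
    r.read_exact(&mut head)?;
    let size32 = u32::from_be_bytes([head[0], head[1], head[2], head[3]]);
    let box_type = [head[4], head[5], head[6], head[7]];
    let size = if size32 == 1 {
        let mut large = [0u8; 8];
        r.read_exact(&mut large)?; // XLBox path
        u64::from_be_bytes(large)
    } else {
        size32 as u64 // 0 conventionally means "box extends to end of file"
    };
    Ok(BoxHeader { box_type, size })
}
```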
2025-01-27 03:44:59
actually i guess jbrd might have its own signalling limit
2025-01-27 03:45:58
yeah, `tail_data_length`
2025-01-27 05:15:13
i want to burn an image of an XL-sized t-shirt encoded as a JPEG XL onto a BD-RE XL disc
AccessViolation_
2025-01-28 09:03:54
> 1. HEIC and AVIF can handle larger images but not directly in a single code stream. You must decompose the image into a grid of independently encoded tiles, which could cause discontinuities at the grid boundaries. Illustration: grid boundary discontinuities in a HEIC-compressed image.
Is this really different from JPEG XL in terms of discontinuities since JXL splits images into 1 MP blocks?
Demiurge
AccessViolation_ > 1. HEIC and AVIF can handle larger images but not directly in a single code stream. You must decompose the image into a grid of independently encoded tiles, which could cause discontinuities at the grid boundaries. Illustration: grid boundary discontinuities in a HEIC-compressed image. Is this really different from JPEG XL in terms of discontinuities since JXL splits images into 1 MP blocks?
2025-01-28 09:04:49
For some reason yeah jxl doesn't have tile boundaries like heif does
2025-01-28 09:05:02
Somehow
2025-01-28 09:05:25
It was designed to be super smooth with no discontinuities and block boundaries
2025-01-28 09:05:42
Suitable for smooth gradients and skies
AccessViolation_
2025-01-28 09:05:43
Hmm
2025-01-28 09:07:48
Blocks can be independently encoded and decoded so I don't think it can actively compensate for discontinuities by looking at the pixels in surrounding blocks?
Demiurge
2025-01-28 09:07:54
For an explanation on how this works, please refer to the following diagram
2025-01-28 09:07:58
[attachment: diagram]
2025-01-28 09:08:38
As you can see, it all makes sense now
AccessViolation_
2025-01-28 09:08:39
Maybe the decoder just blurs the lines <:KekDog:805390049033191445>
2025-01-28 09:10:47
the old vaseline trick
Demiurge
2025-01-28 09:11:02
One format was actually designed to be suitable as an image codec for arbitrarily huge images
AccessViolation_
2025-01-28 09:11:25
~~PNG?~~
2025-01-28 09:12:02
oh 'suitable', no
Demiurge
2025-01-28 09:12:12
The other is just... a half-assed container format with a video codec frame inside and all of its inherent limitations
2025-01-28 09:12:45
Just as halfassed as webp before it
2025-01-28 09:12:52
And just as forced
2025-01-28 09:13:32
Because the guy pushing all these halfassed formats out just happens to be the sole decider of what codecs are enabled in chromium
2025-01-28 09:15:34
And this is the second time he's abused his influence over the chromium source tree to force an unpopular half baked format he invented onto everyone thanks to everyone's dependency on chromium
2025-01-28 09:16:43
While at the same time gaslighting everyone about how there's supposedly "not enough interest" for jxl in a bug tracker filled with reps from intel shopify nvidia facebook etc all very interested in having it enabled by default
AccessViolation_
2025-01-28 09:44:44
meh
2025-01-28 09:45:34
personally i think that kind of reasoning could use some hanlon's razor
2025-01-28 09:49:23
I would really like to see jxl integrated in chrome but I don't think this is some personal vendetta from sam google codecs team
2025-01-28 09:53:06
and unlike us they're probably not thinking about it on a daily basis, if at all
_wb_
2025-01-28 10:00:41
In jxl, the restoration filters (gaborish and EPF) run across group boundaries, while in any file format level tiling (tiff, DNG, HEIF etc), the tiles are independent so you may get visible seams when using lossy.
AccessViolation_
2025-01-28 10:14:31
Oh then I was completely wrong about what the edge preserving filter was, I thought that was for edges in the image subject itself, like to mitigate ringing
Demiurge
AccessViolation_ I would really like to see jxl integrated in chrome but I don't think this is some personal vendetta from sam google codecs team
2025-01-28 10:22:18
Jim Bankowski is the lead of the chrome codec team, the team that made webp and avif and fast-tracked its adoption in chrome and youtube, and the subject matter expert seemingly responsible for ultimately deciding what codecs are included in chrome. After fast tracking his own half baked and unpopular formats, he essentially told a room full of big players in the industry (on the bug tracker) that there just "isn't enough interest" for a less half-baked format like jxl. He uses vague language like "we" and "our partners" etc. to deflect his own responsibility as a key decision maker and leader of the chrome codec team. Last time I brought this up several people said it's probably not good to bring attention to such a thing on such a personal level, but I don't understand what's bad about holding leaders accountable for making absolutely absurd decisions that impact a lot of people negatively, especially when they seem to have transparently selfish motivations.
2025-01-28 10:24:09
It's so absurd and surreal that I'm really surprised it's even possible for something that ridiculous to happen
2025-01-28 10:24:34
And actually even more surprised more people aren't talking about it and recognizing how absurd it is
2025-01-28 10:26:38
Or questioning why he's some kind of god of the internet making decisions for everyone, beyond reproach
spider-mario
AccessViolation_ Oh then I was completely wrong about what the edge preserving filter was, I thought that was for edges in the image subject itself, like to mitigate ringing
2025-01-28 11:28:10
it is
CrushedAsian255
2025-01-29 12:06:36
it's for both
Demiurge
_wb_ In jxl, the restoration filters (gaborish and EPF) run across group boundaries, while in any file format level tiling (tiff, DNG, HEIF etc), the tiles are independent so you may get visible seams when using lossy.
2025-01-29 03:34:35
Aren't there also some DC related tricks to keep things nice and smoooooth?
2025-01-29 03:47:35
Beyond block boundaries too
Meow
Demiurge Jim Bankowski is the lead of the chrome codec team, the team that made webp and avif and fast-tracked its adoption in chrome and youtube, and the subject matter expert seemingly responsible for ultimately deciding what codecs are included in chrome. After fast tracking his own half baked and unpopular formats, he essentially told a room full of big players in the industry (on the bug tracker) that there just "isn't enough interest" for a less half-baked format like jxl. He uses vague language like "we" and "our partners" etc. to deflect his own responsibility as a key decision maker and leader of the chrome codec team. Last time I brought this up several people said it's probably not good to bring attention to such a thing on such a personal level, but I don't understand what's bad about holding leaders accountable for making absolutely absurd decisions that impact a lot of people negatively, especially when they seem to have transparently selfish motivations.
2025-01-29 04:24:43
The Voldemort of image formats
Tirr
Demiurge Aren't there also some DC related tricks to keep things nice and smoooooth?
2025-01-29 04:47:32
I think you meant adaptive LF smoothing
2025-01-29 04:49:26
LF image is encoded in Modular in most cases though, so LF doesn't affect group boundary discontinuity very much
_wb_
Demiurge Aren't there also some DC related tricks to keep things nice and smoooooth?
2025-01-29 07:28:54
Yes, there's adaptive DC dequant and extra_precision which both help to avoid DC-related banding in very slow gradients.
2025-01-29 07:31:13
EPF helps for ringing/DCT noise but also blocking. It has a funky weighting that causes it to smooth a little more around block/group boundaries than inside the blocks.
Demiurge
Meow The Voldemort of image formats
2025-01-29 01:18:55
Who appointed him as Lord though? That's what I wanna know 😂 Who appointed him as our web codec god and why isn't his selfishness being reined in?
2025-01-29 01:27:26
He's building up a legacy of using his own clout and influence at Google to force his own mediocre and half baked image formats onto the web time and time again, where he had the lead role in their creation and design, and at the expense of less half baked formats that have more industry support and demand
VcSaJen
2025-01-29 05:11:37
If you used witchcraft and turned him into a frog right now, literally nothing would change. This stuff operates on the level of multi-company groups, not on the individual level.
Demiurge
2025-01-29 08:48:46
I don't see any evidence of that. Calling the shots on this matter is in his job description.
2025-01-29 08:49:16
And he hasn't specified anyone else that was involved in the decision making process.
Traneptora
Demiurge Jim Bankowski is the lead of the chrome codec team, the team that made webp and avif and fast-tracked its adoption in chrome and youtube, and the subject matter expert seemingly responsible for ultimately deciding what codecs are included in chrome. After fast tracking his own half baked and unpopular formats, he essentially told a room full of big players in the industry (on the bug tracker) that there just "isn't enough interest" for a less half-baked format like jxl. He uses vague language like "we" and "our partners" etc. to deflect his own responsibility as a key decision maker and leader of the chrome codec team. Last time I brought this up several people said it's probably not good to bring attention to such a thing on such a personal level, but I don't understand what's bad about holding leaders accountable for making absolutely absurd decisions that impact a lot of people negatively, especially when they seem to have transparently selfish motivations.
2025-01-30 07:31:41
AVIF yes, webp I think is an unfair comparison as our own Jyrki here designed the lossless format and it's pretty cool.
2025-01-30 07:32:01
at the time webp lossless was state of the art, fantastic compared to PNG (and in some cases still beats jxl)
2025-01-30 07:32:10
nothing like webp lossy which is just worse jpeg
RaveSteel
2025-01-30 07:40:26
Lossless WebP is still quite good in some scenarios
2025-01-30 07:40:38
If only ffmpeg were able to decode it
Traneptora
RaveSteel If only ffmpeg were able to decode it
2025-01-30 07:56:12
it is via libwebp isn't it?
Demiurge
2025-01-30 07:56:26
The lossless webp compression from Jyrki still is state of the art and better than jxl half the time. But webp has crippling limitations thanks to the overarching design decisions and Jim was and still is the leader of the project that produced such a half baked and limited format
RaveSteel
Traneptora it is via libwebp isn't it?
2025-01-30 07:56:31
yes
Traneptora
RaveSteel yes
2025-01-30 07:58:08
I just checked, it can decode lossless webp natively
2025-01-30 07:58:36
```
leo@gauss ~/Pictures/test_images :) $ webpinfo compass-64.webp
File: compass-64.webp
RIFF HEADER:
  File size: 1796
Chunk VP8L at offset 12, length 1784
  Width: 64
  Height: 64
  Alpha: 0
  Animation: 0
  Format: Lossless (2)
No error detected.
leo@gauss ~/Pictures/test_images :) $ ffmpeg -i compass-64.webp -f null -
ffmpeg version N-97819-g0225fe857d Copyright (c) 2000-2025 the FFmpeg developers
  built with gcc 14.2.1 (GCC) 20240910
  configuration: --prefix=/home/leo/.local --enable-gpl --enable-version3 --enable-nonfree --enable-shared --disable-static --disable-htmlpages --enable-manpages --disable-podpages --disable-txtpages --enable-frei0r --enable-gcrypt --enable-gmp --enable-gnutls --enable-lcms2 --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcelt --enable-libcdio --enable-libdav1d --enable-libdc1394 --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libiec61883 --enable-libharfbuzz --enable-libjack --enable-libjxl --enable-libkvazaar --enable-libmodplug --enable-libmp3lame --enable-libopus --enable-libplacebo --enable-libpulse --enable-librav1e --enable-librsvg --enable-librtmp --enable-librubberband --enable-libsmbclient --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libtwolame --enable-libv4l2 --enable-libvidstab --enable-libvorbis --enable-libvpl --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxcb-shm --enable-libxcb-xfixes --enable-libxcb-shape --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-openal --enable-opengl --disable-openssl --enable-pic --enable-ffnvcodec --enable-libdrm --disable-outdev=sdl2 --enable-vapoursynth --enable-vulkan --enable-libdvdread --enable-libdvdnav --enable-demuxer=dvdvideo --extra-cflags='-Wno-format-truncation -Wno-stringop-overflow -Wno-array-bounds'
  libavutil      59. 56.100 / 59. 56.100
  libavcodec     61. 31.101 / 61. 31.101
  libavformat    61.  9.106 / 61.  9.106
  libavdevice    61.  4.100 / 61.  4.100
  libavfilter    10.  9.100 / 10.  9.100
  libswscale      8. 13.100 /  8. 13.100
  libswresample   5.  4.100 /  5.  4.100
  libpostproc    58.  4.100 / 58.  4.100
Input #0, webp_pipe, from 'compass-64.webp':
  Duration: N/A, bitrate: N/A
  Stream #0:0: Video: webp, argb, 64x64, 25 fps, 25 tbr, 25 tbn
Stream mapping:
  Stream #0:0 -> #0:0 (webp (native) -> wrapped_avframe (native))
Press [q] to stop, [?] for help
Output #0, null, to 'pipe:':
  Metadata:
    encoder         : Lavf61.9.106
  Stream #0:0: Video: wrapped_avframe, argb(pc, gbr/unknown/unknown, progressive), 64x64, q=2-31, 200 kb/s, 25 fps, 25 tbn
    Metadata:
      encoder         : Lavc61.31.101 wrapped_avframe
[out#0/null @ 0x5724982241c0] video:0KiB audio:0KiB subtitle:0KiB other streams:0KiB global headers:0KiB muxing overhead: unknown
frame=    1 fps=0.0 q=-0.0 Lsize=N/A time=00:00:00.04 bitrate=N/A speed=38.8x
```
RaveSteel
2025-01-30 08:00:35
Ah sorry, I should have specified animated WebP, my bad
Traneptora
2025-01-30 08:01:10
oh, there's a thing on the ML for that iirc
RaveSteel
2025-01-30 08:01:15
dwebp also does not support animated WebP
Traneptora
2025-01-30 08:01:38
isn't that the reference implementation
RaveSteel
2025-01-30 08:02:18
I think so, yes
```
Usage: dwebp in_file [options] [-o out_file]

Decodes the WebP image file to PNG format [Default].
Note: Animated WebP files are not supported.
```
Traneptora
2025-01-30 08:02:30
huh that is what it says
_wb_
2025-01-30 08:29:45
I think much of the limitations of webp (dimensions, bitdepth, 420) came not from Jim but from Skal, but I may be wrong. Hard to tell from the outside how decisions were actually made. But from the little interactions I have had with Skal, the general approach of "good enough for the web" seemed to play a big role in his design philosophy.
Demiurge
2025-01-30 08:36:26
Who came up with the goofy idea of calling it web pee
_wb_
2025-01-30 08:40:10
I guess WebM was first
2025-01-30 08:40:34
WebI would also sound silly I guess
Oleksii Matiash
RaveSteel dwebp also does not support animated WebP
2025-01-30 08:51:11
Even Android, which comes from the same Google as webp, can't decode awebp oob. A separate lib is required 🤦‍♂️ We faced this issue just now
Demiurge
2025-01-30 09:05:10
Webm sounds bad too
2025-01-30 09:05:24
Especially since it's literally just matroska
2025-01-30 09:05:48
People keep insisting it's different but it's not according to the spec...
_wb_
2025-01-30 09:06:44
WebP is RIFF at the container level...
RaveSteel
Oleksii Matiash Even Android, which comes from the same Google as webp, can't decode awebp oob. A separate lib is required 🤦‍♂️ We faced this issue just now
2025-01-30 09:10:28
Do correct me if I'm wrong, but doesn't Android not support animated AVIF either?
Oleksii Matiash
RaveSteel Do correct me if I'm wrong, but doesn't Android not support animated AVIF either?
2025-01-30 09:11:03
I'm not sure, we did not investigate it
Meow
_wb_ I think much of the limitations of webp (dimensions, bitdepth, 420) came not from Jim but from Skal, but I may be wrong. Hard to tell from the outside how decisions were actually made. But from the little interactions I have had with Skal, the general approach of "good enough for the web" seemed to play a big role in his design philosophy.
2025-01-31 04:39:22
Yet larger images have existed for a long time https://commons.wikimedia.org/wiki/File:Blue_Marble_2002.png
2025-01-31 04:40:12
The dimension of 43,200 × 21,600 pixels is far larger than WebP's limitation
CrushedAsian255
Demiurge Webm sounds bad too
2025-01-31 05:31:30
I kind of understand webm. It’s basically just Matroska but only supporting open source codecs (vp8 vp9 av1 Vorbis opus webvtt etc..)
2025-01-31 05:31:42
Anyone who says it’s not matroska is silly though
juliobbv
2025-01-31 05:39:03
what I dislike about webm is that the standard never got any meaningful updates after vp9
2025-01-31 05:40:15
so av1 support is "de jure" (every player/muxer/tool that has webm support understands av1 if they have support for the codec), but not "de facto" (so uploading an av1 webm video to a service that supports webm isn't guaranteed to work)
Quackdoc
juliobbv so av1 support is "de jure" (every player/muxer/tool that has webm support understands av1 if they have support for the codec), but not "de facto" (so uploading an av1 webm video to a service that supports webm isn't guaranteed to work)
2025-01-31 06:10:32
this is the case with any format that can be extended
2025-01-31 06:10:39
nothing really possible about that
juliobbv
2025-01-31 06:13:39
with a revision, software makers at least have some sort of formal reference to adhere to
2025-01-31 06:14:13
they can still reject implementing the revision of course, but doing so is an explicit action that can be documented
2025-01-31 06:15:54
which is at least better than "welp, webm doesn't really state av1 is supported anywhere in their docs so 🤷‍♂️"
Quackdoc
2025-01-31 06:28:13
has that ever actually happened though?
Demiurge
CrushedAsian255 I kind of understand webm. It’s basically just Matroska but only supporting open source codecs (vp8 vp9 av1 Vorbis opus webvtt etc..)
2025-01-31 07:22:08
Yeah but nobody expects matroska to mean "every possible codec combination ever"
2025-01-31 07:23:02
It's weird that they rebranded something that already had its own branding. You don't just scribble out someone else's brand and put your own cringe there instead
2025-01-31 07:23:32
matroska actually sounds cool :)
Traneptora
Demiurge Yeah but nobody expects matroska to mean "every possible codec combination ever"
2025-01-31 07:23:43
actually a lot of people expect this
2025-01-31 07:24:04
cause for the most part it does work
2025-01-31 07:24:55
WebM is a specific subset of matroska. Every webm file is matroska but not the other way around fwiw
Demiurge
2025-01-31 07:24:58
matroska is designed in a way that devices and decoders are not expected to support and recognize every feature ever made
Traneptora
2025-01-31 07:25:23
That is true, which is why a specific meaningful subset was standardized
Demiurge
2025-01-31 07:27:18
No, it's not meaningful... The webm spec is basically the ENTIRE matroska spec.
2025-01-31 07:27:29
Just with certain av codecs singled out
Traneptora
2025-01-31 07:27:32
That doesn't sound right
Demiurge
2025-01-31 07:27:50
https://www.matroska.org/technical/elements.html
Traneptora
2025-01-31 07:28:08
yea, are split files part of webm spec
2025-01-31 07:28:14
iirc they are not
Demiurge
2025-01-31 07:28:17
Notice here: `W — All elements available for use in WebM.`
2025-01-31 07:28:25
Pretty much the entire spec is in webm
2025-01-31 07:28:43
It's literally just matroska, in its entirety. It's not a meaningful subset or anything.
Traneptora
2025-01-31 07:28:46
that doesn't sound correct
Demiurge
2025-01-31 07:29:40
idk about split files, but basically the supported matroska elements in webm cover nearly every element in the spec
2025-01-31 07:30:09
all it really specifies is a certain combination of video/audio codecs
2025-01-31 07:30:27
and they slapped their ugly branding on top and scribbled away the cool matroska branding
Traneptora
2025-01-31 07:31:24
these are ebml entries. they do not correspond directly with features
Demiurge
2025-01-31 07:31:33
it seems uncool to rebrand someone else's project, but if webm didn't sound so cringe compared to the original name it wouldn't be so bad
Traneptora
2025-01-31 07:31:56
liking a name or not is subjective
2025-01-31 07:32:22
also matroska is a registered trademark
Demiurge
Traneptora these are ebml entries. they do not correspond directly with features
2025-01-31 07:32:32
Well that's basically all that matroska is, a specification on how to read and interpret ebml elements.
Traneptora
2025-01-31 07:33:00
That is not the same thing as a list of features
2025-01-31 07:33:46
most of these are simply metadata tags
2025-01-31 07:34:04
like chapter title language
Demiurge
2025-01-31 07:35:59
I know it's subjective but I feel there's an argument to be made that maybe webm is objectively worse sounding than matroska
2025-01-31 07:36:15
😂
Traneptora
2025-01-31 07:37:06
notable things like file referrals and laces are missing
Demiurge
Traneptora That is not the same thing as a list of features
2025-01-31 07:37:11
It's a container format, the only "feature" is containing/organizing metadata and payload
Traneptora
2025-01-31 07:38:19
I don't think you understand as much about the purpose of a container as you think you do
Demiurge
2025-01-31 07:38:39
Perhaps not!
_wb_
2025-01-31 07:38:51
AVIF uses the MIAF subset of HEIF, I guess it's similar to WebM being a subset of mkv
Traneptora
2025-01-31 07:40:10
I also think it's pointless to complain about you personally disliking the name
2025-01-31 07:40:39
when Matroska itself is trademarked
_wb_
2025-01-31 07:42:10
Who owns the trademark?
Demiurge
2025-01-31 07:42:14
https://www.matroska.org/legal.html
Traneptora
2025-01-31 07:42:46
a nonprofit in France
_wb_
2025-01-31 07:43:24
So they don't want people to use the name if they don't intend to fully implement it, kind of makes sense I guess
Demiurge
2025-01-31 07:43:32
Well I actually didn't know about it being trademarked but their website encourages commercial users to contact them to ensure compatibility.
2025-01-31 07:45:09
On this page it says they'll allow the usage of their branding in exchange for becoming a sponsor
2025-01-31 07:45:11
https://www.matroska.org/license.html
2025-01-31 07:46:06
It's clearly designed in a way that implementors can choose whatever subset of the spec they want to support, and you're clearly not meant to support or use every possible ebml box
_wb_
2025-01-31 07:46:14
It's a bit of a hurdle to do it that way though. With jxl we just define conformance and anyone can make an implementation and claim it is conformant. Whether or not you want to believe such claims is in principle just a matter of running the conformance tests.
Demiurge
2025-01-31 07:47:57
Yeah, they are kinda protective of their branding by comparison. But it's their way of trying to get sponsorship I guess. They don't have the backing of ISO 😂
2025-01-31 07:49:08
If a corporation is going to benefit and save money by using matroska it makes sense for them to give a little bit of the money they saved back to the project that made it possible.
2025-01-31 07:50:16
Rather than just take someone else's work, benefit from it and change the branding to avoid having to pay any money despite being one of the most wealthy and powerful corporations on earth 😂
Traneptora
2025-01-31 07:51:41
You keep acting like it was stolen without permission
Quackdoc
Traneptora actually a lot of people expect this
2025-01-31 07:51:47
I wish you didn't need to recompile programs to recognize other codecs in mkv. tho I dunno if that is an ffmpeg thing specifically or an mkv one
Traneptora
Quackdoc I wish you didn't need to recompile programs to recognize other codecs in mkv. tho I dunno if that is an ffmpeg thing specifically or an mkv one
2025-01-31 07:52:15
that's ffmpeg specifically
Demiurge
Traneptora You keep acting like it was stolen without permission
2025-01-31 07:52:19
honestly I have no idea if they reached out and asked
2025-01-31 07:52:40
or if they became a sponsor
Quackdoc
Traneptora that's ffmpeg specifically
2025-01-31 07:52:58
sadge T.T ~~yet another reason to migrate to gstreamer~~
Traneptora
2025-01-31 07:53:09
gstreamer is awful
Quackdoc
2025-01-31 07:53:46
yeah... I don't like the audio video library state of things right now
Traneptora
2025-01-31 07:53:48
you should expect that list to be done at compiletime
2025-01-31 07:54:10
why would you need or want that kind of runtime configurability
Quackdoc
2025-01-31 07:54:11
if only rustav didn't effectively die <:PepeHands:808829977608323112>
Traneptora why would you need or want that kind of runtime configurability
2025-01-31 07:55:16
in general, I don't see the need for it in the modern world. take the jxl video mkvs I use, for instance; in general it's really nice to put "whatever you want" into a container, especially for things like image sequences
Traneptora
Demiurge honestly I have no idea if they reached out and asked
2025-01-31 07:55:29
so then stop just talking as though it didn't happen
Quackdoc in general, I don't see the need for it in the modern world. take the jxl video mkvs I use, for instance; in general it's really nice to put "whatever you want" into a container, especially for things like image sequences
2025-01-31 07:56:07
Because jxl video mkv is a thing you invented and you shouldn't expect it to Just Work
Quackdoc
2025-01-31 07:56:54
I don't see why a user shouldn't have the capability in a modern world. It's a useful feature in general, and the cons of having such flexibility are small
Traneptora
2025-01-31 07:57:30
well for starters how do you expect ffmpeg to map a fourcc to a codec id #
2025-01-31 07:57:39
if you didn't compile that in
2025-01-31 07:57:57
matroska stores riff fourccs
Quackdoc
2025-01-31 07:58:34
I'm not saying that my hack is a good one, rather just that the features themselves are. I would rather see a new container without the limitations that caused this
Traneptora
2025-01-31 07:59:09
Except what you're asking for is for software to just implicitly know how to decode a codec for which it has no mapping
2025-01-31 07:59:18
which is an unrealistic request
2025-01-31 08:00:01
it sees a fourcc that it doesn't recognize; what do you think it *should* do?
2025-01-31 08:00:57
how is ffmpeg supposed to select the right decoder if it sees a file that has a codec tagged with something it doesn't recognize
2025-01-31 08:01:35
<@184373105588699137> like, what is your solution to this problem
2025-01-31 08:02:07
A new container format? Same question
2025-01-31 08:02:26
What do you expect the software to do
Quackdoc
2025-01-31 08:06:31
that's a good question, and not one I have an immediate answer for. However, I do know that fourcc itself can be rather limiting in this aspect, as you may have a bunch of custom fourcc stuff in the world. I think a better solution might be to leverage existing file parsing stuff, but this isn't something I can think of a good implementation for off the top of my head; it's easy to say "scan this range of bytes for the magic header stuff", but ofc that can always have issues too.
2025-01-31 08:06:52
that being said, I don't think it's an unfeasible task to do given adequate thought
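To make the mapping problem above concrete: a demuxer ultimately needs a compile-time table from codec tag to decoder, along the lines of this toy Rust sketch (tags and names are illustrative, not ffmpeg's actual code):
```rust
/// Toy fourcc-to-codec mapping; real demuxers bake a table like this
/// in at build time, which is why an unknown tag can't be decoded at
/// runtime, only reported.
#[derive(Debug)]
enum CodecId {
    Vp9,
    Av1,
    Ffv1,
}

fn codec_from_fourcc(tag: &[u8; 4]) -> Option<CodecId> {
    match tag {
        b"VP90" => Some(CodecId::Vp9),
        b"AV01" => Some(CodecId::Av1),
        b"FFV1" => Some(CodecId::Ffv1),
        _ => None, // nothing sensible to do with an unmapped tag
    }
}

fn main() {
    match codec_from_fourcc(b"JXL ") {
        Some(id) => println!("decode with {:?}", id),
        None => eprintln!("unknown codec tag; no decoder mapping"),
    }
}
```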
Demiurge
Traneptora so then stop just talking as though it didn't happen
2025-01-31 08:10:39
Fair, I'll stop complaining about how bad it looks 😂
Quackdoc
juliobbv what I dislike about webm is that the standard never got any meaningful updates after vp9
2025-01-31 08:12:24
oh right, also I forgot what are you talking about?
2025-01-31 08:12:44
webm spec is being maintaned here https://github.com/ietf-wg-cellar/matroska-specification
2025-01-31 08:13:36
https://github.com/ietf-wg-cellar/matroska-specification/blob/5ed5c4363cacf8a72d5e52182b36c5a61e8d5e42/codec/av1.md
2025-01-31 08:13:51
> This document specifies the storage format for AV1 bitstreams in Matroska video tracks. Every time Matroska is mentioned it applies equally to WebM.
Demiurge
2025-01-31 08:14:08
Quack, ffmpeg is great! I dunno why you're sad... The codecs it supports are configured at compile time because that's what ends up being the most convenient and likely time to decide.
Quackdoc
2025-01-31 08:15:15
in the end, one thing I like about gstreamer is that I almost never have to mess with something once I get it working (though getting it working can often be pure pain)
Demiurge
2025-01-31 08:15:40
It allows you to disable certain codecs for paranoid patent reasons for those distributing compiled binaries
Quackdoc
2025-01-31 08:16:27
gstreamer's plug in system for sure has issues, but it's strengths are hard to deny, I just wish we had something modern
Demiurge
2025-01-31 08:16:30
If something is disabled in ffmpeg blame whoever compiled it or wrote the compile script
2025-01-31 08:16:50
ffmpeg is modern and awesome
2025-01-31 08:16:53
Trust me
2025-01-31 08:16:59
It's great!
Quackdoc
2025-01-31 08:17:10
https://cdn.discordapp.com/emojis/867794291652558888?size=64
Demiurge
2025-01-31 08:17:53
It's basically the final word in av libraries
Quackdoc
2025-01-31 08:18:21
rip whip, maybe moq will have a better chance with ffmpeg
Demiurge
2025-01-31 08:18:41
Anything that starts with a g is suspect
Quackdoc
2025-01-31 08:18:55
actually true
juliobbv
Quackdoc webm spec is being maintaned here https://github.com/ietf-wg-cellar/matroska-specification
2025-01-31 08:30:42
oh, interesting
2025-01-31 08:31:25
ok, AV1 in webm is official then
2025-01-31 08:31:34
also JP2K in matroska as well, neat
2025-01-31 08:31:51
https://github.com/ietf-wg-cellar/matroska-specification/commit/1cd0a9be4b2d1e7c60184ec68404e00e46e3123e
AccessViolation_
Quackdoc that's a good question, and not one I have an immediate answer for. However, I do know that fourcc itself can be rather limiting in this aspect, as you may have a bunch of custom fourcc stuff in the world. I think a better solution might be to leverage existing file parsing stuff, but this isn't something I can think of a good implementation for off the top of my head; it's easy to say "scan this range of bytes for the magic header stuff", but ofc that can always have issues too.
2025-01-31 10:37:18
the obvious solution is my cursed patent-pending idea to make every media format a wasm bytecode file that has the decoder as the program and the media attached as static memory at the bottom. interested in commercial use? let's talk: pay-us@cursedformats.com
Quackdoc
2025-01-31 10:52:02
lmao
spider-mario
_wb_ WebP is RIFF at the container level...
2025-01-31 10:54:58
and DNG is TIFF
CrushedAsian255
2025-01-31 12:29:09
And everything is octet-stream
lonjil
2025-01-31 03:50:52
gstreamer is better than ffmpeg because gstreamer has a lot of Rust in it
Quackdoc
lonjil gstreamer is better than ffmpeg because gstreamer has a lot of Rust in it
2025-01-31 03:53:20
based, rewrite ffmpeg in rust time [av1_chad](https://cdn.discordapp.com/emojis/862625638238257183.webp?size=48&name=av1_chad)
AccessViolation_
2025-02-01 08:26:35
what are dots?
2025-02-01 08:27:15
it says nothing about them in the draft spec, iirc they were a feature similar to patches that may or may not have been removed?
Traneptora
AccessViolation_ what are dots?
2025-02-01 08:28:13
not a JXL feature if you're wondering
2025-02-01 08:28:25
patches, splines, and noise are the three overrendered features
2025-02-01 08:28:29
you may be thinking of Spot Colors
AccessViolation_
2025-02-01 08:29:53
> Currently patches are mostly for text in screenshots and dots are intended for something like a few stars in the sky in a regular photo https://discord.com/channels/794206087879852103/803645746661425173/1326667918662176850
2025-02-01 08:30:21
are they a libjxl specific thing then, making use of some other coding tool
jonnyawsom3
2025-02-01 08:43:07
IIRC, it's another Patches heuristic that searches for points of color instead of text
AccessViolation_
2025-02-01 08:43:40
Ahhh gotcha
2025-02-01 08:45:28
You know, that might be a good starting point for implementing splines in the encoder. You could maybe encode dots with a gradient falloff, like stars, as zero-length splines?
2025-02-01 08:47:40
I wonder if it'd be worth it, or if splines require so much data to define that a simple patch or nothing at all would be better
jonnyawsom3
2025-02-01 08:48:06
IIRC it does also take into account gradients
AccessViolation_
2025-02-01 08:48:13
Or beneficial rather, I doubt it'd be *worth it* to spend time on regardless <:KekDog:805390049033191445>
jonnyawsom3
2025-02-01 08:48:40
Hard to tell what's dots and what's patches in a normal encode
_wb_
2025-02-01 11:16:43
Dots were a separate coding tool in an early version of jxl, with quad trees to signal positions and quantized ellipses for the dots themselves. We removed it from the final spec since doing them with small patches also worked, and it simplified the spec quite a bit.
Simulping
2025-02-02 05:16:03
anyone have balanced `cjxl` parameters for serving lossy images on web?
JaitinPrakash
2025-02-02 05:44:45
the defaults are pretty good, I'd only add `--faster_decoding=2 --progressive`, and potentially look into jxl.js, but you can probably use a significantly higher distance, as even up to d2 i find it hard to see any loss without pixel-peeping. Do keep in mind this is my entirely "out my ass" opinion.
A homosapien
Simulping anyone have balanced `cjxl` parameters for serving lossy images on web?
2025-02-02 05:56:48
Well, the defaults are fine. If you care about progressive decoding `--progressive_dc=1` is recommended.
2025-02-02 05:58:51
If you are aiming for medium quality images I would say distances up to 2 look fine on the web.
2025-02-02 06:00:29
Unless you're loading hundreds of images on a web page, faster decoding is not really necessary imo
Quackdoc
JaitinPrakash the defaults are pretty good, I'd only add `--faster_decoding=2 --progressive`, and potentially look into jxl.js, but you can probably use a significantly higher distance, as even up to d2 i find it hard to see any loss without pixel-peeping. Do keep in mind this is my entirely "out my ass" opinion.
2025-02-02 06:30:01
I wouldn't, just use progressive_dc
2025-02-02 06:30:37
it is really nice and adds very little size
jonnyawsom3
2025-02-02 06:57:41
Mostly because current progressive is broken due to chunked encoding
_wb_
2025-02-02 07:43:42
For the web, I would say d2 would be the typical target. If fidelity is more important than usual, then maybe d1.5, or even d1. If bandwidth is more important than usual, then maybe d2.5, or even d3. The range d1-d3 probably covers 98% of what is used/needed on the web.
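Putting the advice in this thread together, a reasonable starting point might look like the following (assuming current cjxl flag names; adjust -d per the ranges above):
```
cjxl input.png output.jxl -d 2 --progressive_dc=1
# fidelity matters more:  -d 1 to -d 1.5
# bandwidth matters more: -d 2.5 to -d 3
```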
Meow
2025-02-02 07:50:28
Artworks available on websites mostly go to the two extremes — either bloated PNG or JPEG worse than d2 equivalent
Quackdoc
2025-02-02 07:52:32
I use d1 or d0.5 for gallery style applications, d2 for general use, and d5 for preview stuff
2025-02-02 07:52:55
preview stuff being "click to view full image" things
Simulping
A homosapien Unless you're loading hundreds of images on a web page, faster decoding is not really necessary imo
2025-02-02 09:36:41
i see
2025-02-02 09:36:43
alright then
2025-02-02 09:37:30
oh, and is there a jpegli binary anywhere for windows?
2025-02-02 09:37:41
the one in the libjxl release page seems outdated
A homosapien
Simulping oh, and is there a jpegli binary anywhere for windows?
2025-02-02 09:37:59
https://artifacts.lucaversari.it/libjxl/libjxl/latest/
2025-02-02 09:38:03
Here is a nightly build
Simulping
A homosapien Here is a nightly build
2025-02-02 09:38:08
great, thanks
2025-02-02 09:41:06
well that sucks
Quackdoc
Simulping well that sucks
2025-02-02 09:41:18
oof
Simulping
_wb_ For the web, I would say d2 would be the typical target. If fidelity is more important than usual, then maybe d1.5, or even d1. If bandwidth is more important than usual, then maybe d2.5, or even d3. The range d1-d3 probably covers 98% of what is used/needed on the web.
2025-02-02 09:45:33
and for jpegli? quality 95?
2025-02-02 09:45:47
or is 90 enough
2025-02-02 09:46:05
nevermind
2025-02-02 09:46:10
i saw `-d`
_wb_
2025-02-02 09:46:17
You can also use distance with jpegli
2025-02-02 09:47:15
So more like q80-85 typically, q65-90 covers most web applications.
2025-02-02 09:47:49
q95 is camera quality, too high for most websites
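As a concrete sketch of those numbers (cjpegli accepts both -q and -d; file names illustrative):
```
cjpegli input.png output.jpg -q 85   # typical web target
cjpegli input.png output.jpg -d 2    # roughly the same target, as a distance
```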
Simulping
2025-02-02 09:47:53
ic
2025-02-02 09:48:15
the default of progressive level 2 is fine right
_wb_
2025-02-02 09:54:29
Defaults should be sensible in cjxl
Simulping
2025-02-02 09:54:41
sorry i meant cjpegli
_wb_
2025-02-02 09:59:20
Same
Meow
2025-02-02 10:11:42
d0 for cjpegli isn't equivalent to d0 for cjxl however
_wb_
2025-02-02 10:19:28
Right. Cannot do lossless within de facto JPEG...
Meow
2025-02-02 11:06:42
I did try cjpegli -d0 yesterday and its ssimulacra2 score is just about 92, if I recall correctly
2025-02-02 02:21:51
Tested again: a ssimulacra2 score of about 91.5. Slightly higher than JXL d0.1 for the same source
2025-02-02 02:22:57
Also interesting that `cjpegli -q 100` achieved a slightly higher score than `cjpegli -d 0`
2025-02-02 02:25:25
I thought d0 and q100 should be equivalent
VcSaJen
2025-02-02 04:13:23
Did you try `-d 0.001`?
Meow
2025-02-02 04:46:17
Could it even reach higher quality?😧
monad
2025-02-02 07:53:15
did you try -q 101?
محمد
2025-02-02 11:15:41
Anyone know why Mozilla is dragging its feet on JXL?
2025-02-02 11:15:59
paid off by big tech (google)?
CrushedAsian255
Simulping well that sucks
2025-02-02 11:18:24
if this is Waterfox, you need to update
2025-02-02 11:18:31
they had a JXL-related decoding bug
AccessViolation_
محمد Anyone know why Mozilla is dragging its feet on JXL?
2025-02-02 11:18:55
they're waiting for jxl-rs, should be done in a couple months
محمد
2025-02-02 11:20:57
so is jxl just not finished yet? I noticed it's not at a 1.0 status
2025-02-02 11:23:09
im not really a programmer or a software guy
CrushedAsian255
2025-02-02 11:24:10
it's fully compliant and working, it's just not 100% stable yet
محمد
2025-02-02 11:25:01
what does that mean in layman's terms?
AccessViolation_
2025-02-02 11:28:32
In terms of image decoding, it's stable enough to use it in projects such as browsers or image viewers. But the decoder is written in C++, a memory unsafe language, and Mozilla is concerned that might introduce vulnerabilities into the browser so they are waiting for the decoder implementation in Rust (a memory safe language) to be finished. Here's the latest update on the progress regarding this: <https://github.com/web-platform-tests/interop/issues/700#issuecomment-2551623493>
محمد
2025-02-02 11:28:57
ah okay
Simulping
CrushedAsian255 if this is Waterfox, you need to update
2025-02-03 12:23:07
it's floorp
Meow
AccessViolation_ In terms of image decoding, it's stable enough to use it in projects such as browsers or image viewers. But the decoder is written in C++, a memory unsafe language, and Mozilla is concerned that might introduce vulnerabilities into the browser so they are waiting for the decoder implementation in Rust (a memory safe language) to be finished. Here's the latest update on the progress regarding this: <https://github.com/web-platform-tests/interop/issues/700#issuecomment-2551623493>
2025-02-03 01:36:21
Better late than never 🥹
Demiurge
محمد ah okay
2025-02-03 07:31:46
It means it's missing certain things the devs want in the 1.0 release. I forgot which features they wanted to include for 1.0. The ultimate fate of libjxl after the 1.0 release is... jxl-rs might just replace it.
Meow
2025-02-03 07:37:58
Isn't jxl-rs just a decoder?
_wb_
2025-02-03 07:57:44
Yes, at least for now
AccessViolation_
2025-02-03 08:29:55
i'd love it if there was a rust-based encoder, then i could actually try implementing my wacky encoder ideas in a fork instead of just talking about them <:galaxybrain:821831336372338729>
Quackdoc
2025-02-03 08:38:40
zune-image has one but it's just the fast jxl thing ported
Meow
2025-02-03 08:53:57
It would be interesting if so-called libjxl 1.0 is in fact the complete jxl-rs
CrushedAsian255
2025-02-03 09:38:40
Can C code include rust code as a library?
veluca
2025-02-03 09:40:55
yes
AccessViolation_
2025-02-03 09:43:55
```rust
// Rust
#[no_mangle]
pub extern "C" fn hello_from_rust() {
    println!("Hello from Rust!");
}
```
```c
// C
extern void hello_from_rust();

int main(void) {
    hello_from_rust();
    return 0;
}
```
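One way to actually build and link the two snippets above on Linux (file names illustrative; the extra system libraries a Rust staticlib needs can vary by platform):
```
rustc --crate-type=staticlib hello.rs        # emits libhello.a
cc main.c libhello.a -lpthread -ldl -o main  # link Rust's runtime deps
./main                                       # prints: Hello from Rust!
```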
Demiurge
2025-02-03 01:34:41
So rustc can produce C header files right?
Tirr
2025-02-03 01:50:44
not rustc itself, but cbindgen can
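For example, with cbindgen (the crate and file names here are hypothetical):
```
cargo install cbindgen
cbindgen --config cbindgen.toml --crate my_rust_library --output my_header.h
```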
AccessViolation_
2025-02-03 05:46:00
i'm curious, does anyone have a set of like, "i wish we did this differently" things about jxl? things that were a good idea then but you feel like they could have been done better in hindsight?
2025-02-03 05:56:21
i think my only one is 24-bit integer and 32-bit float being the limit for bit depth per channel. you're not very likely to be in a situation where more are very useful, but i don't think it would have hurt basically at all to have a possibility to have 32-bit int and 64-bit float and int. maybe even 128-bit float and int. i think they'd slow down encoding and decoding and increase memory usage, but the file size should stay relatively the same with good predictors, and after (or progressively during) decoding they'd probably be clamped
HCrikki
AccessViolation_ i'm curious, does anyone have a set of like, "i wish we did this differently" things about jxl? things that were a good idea then but you feel like they could have been done better in hindsight?
2025-02-03 06:33:32
default effort should be adaptive, correlated to the img resolution. like, if you're crunching webcam/avatar-sized images, default effort increases by 1-2 for smaller sizes with really negligible difference in resources consumed and time spent. opposite for huge images: default lowers by 1-2 depending on the megapixel count so as to maintain performance (jobs could take 20x less time than if they maintained the og default effort)
2025-02-03 06:35:22
not quite jxl or libjxl specific, but any encoder/gui with user-adjustable complexity levels could gain from this in place of an inflexible same effort for *all* the images of this encoding job
veluca
AccessViolation_ i'm curious, does anyone have a set of like, "i wish we did this differently" things about jxl? things that were a good idea then but you feel like they could have been done better in hindsight?
2025-02-03 06:39:47
I have a few regrets about being super efficient with headers and about the way the file format handles sections
2025-02-03 06:40:45
also I think we should have allowed the DCT context model to be expressed by a modular tree (and special-cased the current ctx model), and I think we should have let modular trees make decisions over linear functions of properties
RaveSteel
2025-02-03 06:56:53
Could this be added to the spec in a future revision?
veluca
2025-02-03 07:20:41
I guess so, but I'm not sure I want to
2025-02-03 07:21:07
ah, another regret is not interleaving channels in modular mode 🙂
_wb_
AccessViolation_ i think my only one is 24-bit integer and 32-bit float being the limit for bit depth per channel. you're not very likely to be in a situation where more are very useful, but i don't think it would have hurt basically at all to have a possibility to have 32-bit int and 64-bit float and int. maybe even 128-bit float and int. i think they'd slow down encoding and decoding and increase memory usage, but the file size should stay relatively the same with good predictors, and after (or progressively during) decoding they'd probably be clamped
2025-02-03 08:05:41
We left the option open to signal up to 64-bit in the bitdepth header, so it would be possible to make a Level 20 in the future that requires an implementation using double precision for lossy and int64_t for Modular. For now we saw no need to define or implement that, but if there is a need, it would be possible within the current header syntax.
2025-02-03 08:09:00
The TOC signaling would have been more convenient if there was an option to use fixed-size section length fields. The compressed ICC profile could have used a size field that is technically redundant but would allow skipping it if you just want to find the frame headers.
2025-02-03 08:09:37
Having a way to use VarDCT for extra channels would have been nice.
2025-02-03 08:11:32
And what <@179701849576833024> said about MA trees in the AC ctx model.
spider-mario
2025-02-03 08:13:06
the spreading of noise LUT values is also not ideal from what I remember
2025-02-03 08:13:56
if I recall correctly, it’s in the direction of the range extending unnecessarily high, such that there isn’t a lot of precision for the values that are likely going to be actually used
_wb_
2025-02-03 08:14:17
Also maybe the option of 128x128 or even 64x64 group sizes for VarDCT, and also smaller ones for modular, would have been useful for a GPU implementation.
2025-02-03 08:17:12
Maybe splines should have had the option for also working on extra channels (in particular alpha) and having an optional kReplace-like behavior instead of kAdd. Plus maybe the spline params could have been put in a modular subbitstream.
2025-02-03 08:19:07
The noise synthesis could also use some optional extra params for how to apply noise to the different channels, instead of assuming XYB.
Demiurge
2025-02-03 08:25:26
It's not really too late to add a new header format standard, at this early stage of adoption, right?
2025-02-03 08:25:38
In addition to the existing one
2025-02-03 08:26:48
I think it's still pretty early if someone wanted to add stuff like that to the spec and to libjxl
2025-02-03 08:27:31
It's only too late to remove stuff, not add stuff
jonnyawsom3
_wb_ Also maybe the option of 128x128 or even 64x64 group sizes for VarDCT, and also smaller ones for modular, would have been useful for a GPU implementation.
2025-02-03 08:27:52
I thought VarDCT goes all the way from 8x8 up to 256x256? Or do you specifically mean groups for DC, etc.
Demiurge
2025-02-03 08:45:43
jxl-ng v2
2025-02-03 08:47:56
Overall it's a really carefully thought out and thoughtfully considered bitstream format from what I can tell
2025-02-03 08:52:26
It sounds like there aren't any major regrets.
2025-02-03 08:57:07
I wonder what veluca would change about the compact header or section format, or what "linear functions of properties" are.
VcSaJen
HCrikki default effort should be adaptive, correlated to the img resolution. like, if you're crunching webcam/avatar-sized images, default effort increases by 1-2 for smaller sizes with really negligible difference in resources consumed and time spent. opposite for huge images: default lowers by 1-2 depending on the megapixel count so as to maintain performance (jobs could take 20x less time than if they maintained the og default effort)
2025-02-04 12:38:59
The encoder shouldn't try to be too smart. You assume "higher resolution => higher density", which isn't the case half of the time. It has no idea about the actual canvas size. Use the principle of least surprise
Demiurge
2025-02-04 01:37:13
Actually, to a certain extent, there are certain optimizations and compression techniques that work really really well on large images but not on small images, and vice versa.
2025-02-04 01:38:59
So it makes a lot of sense for the encoder to be smart about that to some extent and compress very large images differently. Not necessarily at a different pixel/distortion ratio. But definitely use techniques that make sense for the pixel quantity.
2025-02-04 01:41:16
Extremely large images with very little high frequency information is a special case that can be optimized a lot to have very good quality at extremely low bitrates.
2025-02-04 01:42:15
Also, recursively progressive images could make sense for ridiculously big images. But obviously not for thumbnail and icon size images
2025-02-04 01:43:55
The encoder SHOULD be smart enough to use the appropriate tools to match the content it's compressing.
2025-02-04 01:44:39
For starters, it would be nice if it was smart enough to detect if an image would have a better bitrate in lossless mode than lossy DCT mode.
2025-02-04 01:45:15
That way people don't accidentally bloat file sizes and degrade quality at the same time.
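Until encoders do that automatically, it is easy to approximate by hand; a minimal sketch, assuming cjxl and illustrative file names:
```
cjxl -d 0 in.png lossless.jxl  # lossless (Modular)
cjxl -d 1 in.png lossy.jxl     # lossy (VarDCT)
ls -l lossless.jxl lossy.jxl   # keep the smaller; flat or synthetic
                               # images often win outright with -d 0
```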
2025-02-04 01:46:46
It should be smart enough to recognize the image enough to select the appropriate compression tools without the user having intricate knowledge about image compression parameters.
2025-02-04 01:49:26
It's not gunna be easy, but it's The Dream. A universal, set-it-and-forget-it image encoder.
2025-02-04 01:49:37
No fiddling with flags. Just set it and forget.
2025-02-04 01:50:07
I think Cloudinary is using some kind of neural net to do something like that?
_wb_
jonnyawsom3 I thought VarDCT goes all the way from 8x8 up to 256x256? Or do you specifically mean groups for DC, etc.
2025-02-04 06:19:48
I mean group size, not block size. In VarDCT the group size is fixed to 256x256.
Demiurge I think Cloudinary is using some kind of neural net to do something like that?
2025-02-04 06:21:38
Correct. Though it also makes mistakes sometimes.
AccessViolation_
veluca I guess so, but I'm not sure I want to
2025-02-04 09:33:45
if you change your mind it would probably be good to do it soon ish while there is still basically one decoder in common use and before everything starts supporting it. otherwise it would probably end up like JPEG with arithmetic coding :p
2025-02-04 09:34:11
though we can just do a JPEG XXL in the future :)
2025-02-04 09:34:44
The Next Next-Generation Alien Technology from the Future Format
_wb_
2025-02-04 09:48:06
I'd prefer to avoid making extensions to the spec that will produce images that don't decode with current decoders. That prevents most of the things on the "with hindsight, maybe it would have been better if" list. Though some things could still be done, e.g. adding sub-section offsets (plus entropy coder state) that would allow decoding modular planes in parallel or vardct HF coeffs in stripes of 64 rows, if the encode was done in a way that makes that make sense (no prevchannel properties, no varblocks crossing stripe boundaries). This could be added in a frame header extension and would only be useful to speed up decoding but just ignoring it would be fine, so existing decoders still work.
Demiurge
_wb_ I'd prefer to avoid making extensions to the spec that will produce images that don't decode with current decoders. That prevents most of the things on the "with hindsight, maybe it would have been better if" list. Though some things could still be done, e.g. adding sub-section offsets (plus entropy coder state) that would allow decoding modular planes in parallel or vardct HF coeffs in stripes of 64 rows, if the encode was done in a way that makes that make sense (no prevchannel properties, no varblocks crossing stripe boundaries). This could be added in a frame header extension and would only be useful to speed up decoding but just ignoring it would be fine, so existing decoders still work.
2025-02-04 05:43:23
Well, you could add support for new bitstream formats to the current decoder software, but have the encoder continue to produce the same old bitstream until everyone upgrades their decoders.
2025-02-04 05:44:20
Apple is the main player here and I think they're currently using libjxl as the decoder support library
2025-02-04 05:45:22
If you want to make bitstream changes, just update the spec and update the decoders like libjxl but ensure the encoders are still producing the same old format until the day everyone is using a newer version of the decoder
2025-02-04 05:45:46
But it sounds like all the bitstream changes you guys were talking about were all extremely minor and low impact anyways
2025-02-04 05:46:08
It's already an astoundingly well designed format
jonnyawsom3
2025-02-04 05:49:16
There was already a week long discussion about the bug that caused lossless files to fail decoding outside of libjxl. Intentionally breaking backwards compatibility for anything non-critical seems unlikely at best
Demiurge
2025-02-04 05:49:30
Apple can and probably will upgrade their version of libjxl in the future. There aren't any other major commercial products with jxl support that won't have the capability to update their decoder software in the foreseeable future.
2025-02-04 05:51:15
No one is going to be using libjxl 0.7 anymore at a certain point in the future.
2025-02-04 05:51:50
So maintaining compatibility with old software is not necessary in the long term if it's done correctly.
2025-02-04 05:52:47
It's still possible to change things at this point in the life cycle of jxl
2025-02-04 05:53:47
Just add decode support to current software first, and don't enable it on the encode side until everyone is using the newer version of the library.
2025-02-04 05:54:13
And continue to support the old bitstream format forever.
A homosapien
Demiurge Well, you could add support for new bitstream formats to the current decoder software, but have the encoder continue to produce the same old bitstream until everyone upgrades their decoders.
2025-02-04 05:54:44
Libavif is currently doing that with lossless YCoCg support. I'm surprised they're still breaking things after ver 1.0
2025-02-04 05:55:02
If we're going to change things, we better do it now before libjxl becomes mainstream
Demiurge
2025-02-04 05:56:30
Assuming the changes are actually good enough 😂 like I said the format is already incredibly well designed and the current bitstream format must be supported forever regardless. A new bitstream format would have to bring concrete advantages and tangible coolness to the table.
2025-02-04 05:57:11
To be worth creating
2025-02-04 05:58:21
Then it would be added on a very experimental basis with no encode support unless --use-idiot-options enabled (they should rename it from "expert")
2025-02-04 05:59:48
It would be decode-only for a long time until everyone is satisfied and no one is using ancient libjxl
2025-02-04 06:00:34
Theoretically it shouldn't be a problem this way
2025-02-04 06:02:02
But it's a lot of ceremony for something with limited benefits and impact
2025-02-04 06:02:40
Unless the proposed new bitstream is actually really tangibly cool and awesome compared to the existing one
2025-02-04 06:08:50
Ease of 3rd party implementation should be a priority and ideally any changes to the spec should make it easier and not harder on people trying to create jxl decoders
2025-02-04 06:11:01
With all that in mind I think it's kind of unlikely there will be any changes
2025-02-04 06:13:52
Jon would probably be something of a ringleader and organizer if that sorta thing were to take place 😂 so if he doesn't think it's worth it then it probably won't happen
2025-02-04 06:15:08
He's like the most energetic promoter of jxl in general too from what I can tell
_wb_
2025-02-04 06:37:08
I think there's value in not making spec changes, it helps avoid creating the impression that supporting the format is a "moving target".
2025-02-04 06:39:13
So if we make changes, they should be worth it, or they should be things like decoder speedup hints that can be ignored without any impact on the decoded image.
2025-02-04 06:45:19
None of the things mentioned above that "with hindsight would have been done a bit differently" are major things, it's mostly things that would have been slightly more convenient for encoders, would further increase the bitstream expressivity and thus potential compression improvements (but it's already expressive enough to leave room for many decades of research on encoder improvements), or things that would allow faster decode (but decode is already more than fast enough for most use cases).
AccessViolation_
2025-02-04 06:54:49
in most other cases i personally think keeping things static to avoid the "moving target" problem is very valuable; at the same time, this is specifically intended to be a long-term format. if we're being drip-fed new formats then you can select one that has the nice-to-haves you desire, but since jpeg xl is specifically designed to stay viable for a very long time, that does put a bit more weight on those imo
2025-02-04 06:56:35
but if we implemented all our wishes now and said this was the *true* final version, it's not like we wouldn't have a similar situation with new "we should have done that" cases in the future again
2025-02-04 06:58:11
i also agree that jxl is more than good enough already, it blows everything else out of the water in terms of viability for many different use cases as far as i'm aware
juliobbv
_wb_ So if we make changes, they should be worth it, or they should be things like decoder speedup hints that can be ignored without any impact on the decoded image.
2025-02-04 08:26:24
jxl could go the opus 1.5 way, and have improvements be backwards compatible
2025-02-04 08:27:00
e.g. by adding a tiny bit of extra metadata that could help an enhancement layer increase the fidelity of an encoded image
2025-02-04 08:28:47
non-aware decoders would just deal with the base image
HCrikki
2025-02-04 08:29:19
imo improvements in libjxl should not be part of spec changes to jxl but passively obtained through software updates to decoders and encoders
2025-02-04 08:32:21
even now there's still holdouts stuck on the old 0.8 libs that have yet to experience it properly and update scripts for 0.10 and newer. unneeded changes could upset early adopters and diminish the position of those pushing for adoption
juliobbv
2025-02-04 08:36:14
are there any hardware decoders in existence right now?
2025-02-04 08:37:00
the answer will most likely make or break the argument
A homosapien
juliobbv are there any hardware decoders in existence right now?
2025-02-04 09:38:57
None afaik
AccessViolation_
2025-02-04 09:46:00
are there even hardware decoders for image formats, typically? images are pretty fast to decode compared to video anyway
2025-02-04 09:47:05
iirc AVIF decoders also don't attempt to make use of available AV1 decoding hardware
2025-02-04 09:48:44
maybe digital cameras have hardware JPEG decoding? but practically they'd only need to be able to decode the images they produce, so an extension to JXL wouldn't really affect that
A homosapien
2025-02-04 09:53:56
There are some for jpg
jonnyawsom3
AccessViolation_ maybe digital cameras have hardware JPEG decoding? but practically they'd only need to be able to decode the images they produce, so an extension to JXL wouldn't really affect that
2025-02-05 01:13:36
https://developer.nvidia.com/nvjpeg
Orum
2025-02-05 06:40:21
so I've noticed cjxl seems to do poorly with closely repeating patterns compared to cwebp
Meow
Orum so I've noticed cjxl seems to do poorly with closely repeating patterns compared to cwebp
2025-02-05 06:41:38
Talked a lot about it recently
Orum
2025-02-05 06:42:01
e.g. with dithering on (which appears to do ordered dithering) in Subnautica Below Zero, this is substantially larger in <:JXL:805850130203934781>
Meow
2025-02-05 06:42:25
Orum
2025-02-05 06:42:47
it's not just grayscale though
Meow
Orum e.g. with dithering on (which appears to do ordered dithering) in Subnautica Below Zero, this is substantially larger in <:JXL:805850130203934781>
2025-02-05 06:43:17
How much larger does it become?
Orum
2025-02-05 06:44:09
3,480,383B vs 3,036,662B, so almost 15%
Meow
2025-02-05 06:44:20
Not that bad
Orum
2025-02-05 06:44:33
15% is pretty bad in my book
Meow
2025-02-05 06:44:40
The sample I mentioned is 3 times larger
Orum
2025-02-05 06:44:55
well sure, there are definitely worse examples
Meow
2025-02-05 06:45:15
or saving -288.6%
2025-02-05 06:46:21
Corrected the number, slightly better
Orum
2025-02-05 06:50:16
my SMM one is the most egregious example I've found, where the JXL is ~3.47x as large
2025-02-05 06:51:42
I assume patches are automatically enabled/disabled at certain effort levels?
Meow
2025-02-05 06:55:48
I always use effort 7 (default) however
Orum
2025-02-05 06:56:44
ah, this was testing all efforts for both encoders (not counting things like e11) and taking the smallest of both
Demiurge
_wb_ I think there's value in not making spec changes, it helps avoid creating the impression that supporting the format is a "moving target".
2025-02-05 07:21:30
There is value, but there's also not that big of a cost in doing it at such an early stage either.
2025-02-05 07:21:56
Just my 2¢
2025-02-05 07:23:42
Especially if those changes are made quietly and non disruptively
2025-02-05 07:28:02
No one would even notice.
monad
Orum I assume patches are automatically enabled/disabled at certain effort levels?
2025-02-05 08:12:13
for lossless, patches are enabled at e10 in general and e5+ for single-chunk encoding. note that patches need to meet a relative value threshold and need to include at least one patch of 20+ pixels. also note libwebp is still far more efficient where libjxl patches work best, on text.
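If you want to test whether patches help below e10 without waiting for the heuristics, cjxl exposes an override; a minimal sketch, assuming a recent libjxl build where `--patches` is available (filenames are placeholders):
```
# force-enable patches at a moderate effort for lossless encoding
cjxl in.png out.jxl -d 0 -e 7 --patches=1
# and force-disable at e10 to measure what patches actually contribute
cjxl in.png out.jxl -d 0 -e 10 --patches=0
```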
A homosapien
Orum e.g. with dithering on (which appears to do ordered dithering) in Subnautica Below Zero, this is substantially larger in <:JXL:805850130203934781>
2025-02-05 08:31:32
```
cwebp -z9 ------ 3,036,662 bytes
libjxl -e9 ----- 3,484,727 bytes
-e8 with sauce - 2,774,145 bytes
```
The sauce in question is `-d 0 -e 8 -I 20 -g 3 -P 1 -E 4`
2025-02-05 08:31:58
No patches needed
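A rough annotation of that sauce, going by `cjxl --help` for 0.10-era builds (exact flag wording varies between versions; filenames are placeholders):
```
# -d 0   distance 0 = mathematically lossless
# -e 8   encoder effort 8
# -I 20  learn the modular MA trees from 20% of the pixels
# -g 3   modular group size 3 = 1024x1024
# -P 1   modular predictor 1 = "left"
# -E 4   use up to 4 previous channels as extra MA tree properties
cjxl in.png out.jxl -d 0 -e 8 -I 20 -g 3 -P 1 -E 4
```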
Orum
2025-02-05 08:32:01
well, I don't do that sort of tuning for webp, so I shouldn't have to for JXL
A homosapien
2025-02-05 08:32:52
I know, libjxl's heuristics just aren't as mature as WebP's yet.
Orum
2025-02-05 08:33:31
fair enough, but it seems like that is the area with biggest room for improvement now
A homosapien
2025-02-05 08:34:59
I consider the lossless mode the most impressive element of JPEG XL. I can't wait to see what the future holds.
Orum
2025-02-05 08:35:03
I wonder if tuning can redeem JXL here: https://discord.com/channels/794206087879852103/794206170445119489/1212025660831301692
A homosapien
2025-02-05 08:35:41
On it boss <:CatBlobPolice:805388337862279198>
Orum
2025-02-05 08:35:56
I'm going to try some things on my end too
2025-02-05 09:00:04
e11 is still quite a long way (in a bad way) from webp...
A homosapien
2025-02-05 09:33:51
what's the smallest size you got it down to?
Meow
2025-02-05 09:36:15
What's the cjxl command to produce the smallest possible lossless non-photographic image?
Orum
A homosapien what the smallest size you got it down to?
2025-02-05 09:38:42
333,941B
A homosapien
2025-02-05 09:40:24
I got it down to 333,933 bytes *Edit (11/03/25): many months later, using this command `cjxl in.ppm out.jxl -P 15 -g 3 -I 100 -E 4 -e 9 -d 0 -Y 0` I got it down to 323,107 bytes. Still a long ways to go.*
Orum
2025-02-05 09:40:26
.ppm.xz is 196,900B, which is better than either codec <:PepeHands:808829977608323112>
A homosapien
2025-02-05 09:42:15
".ppm.xz" is not a image format tho. That's kinda cheating
Quackdoc
2025-02-05 09:42:23
pfm supremacy
Orum
2025-02-05 09:42:31
doesn't make it any less embarrassing...
A homosapien
2025-02-05 09:43:12
To be fair, that source image is one of the best candidates for preprocessing I've seen
2025-02-05 09:43:47
Like downscaling with nearest neighbor then using JXL's upscaling
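A sketch of that preprocessing idea, assuming ImageMagick for the nearest-neighbour downscale and a cjxl build that has `--resampling` and `--already_downsampled` (filenames are placeholders; JXL's built-in upsampling filter is not a strict NN inverse, so this is lossy relative to the original):
```
# downscale 2x with nearest neighbour (point filter)
magick in.png -filter point -resize 50% small.png
# store the small image losslessly, but signal the decoder to upsample 2x
cjxl small.png out.jxl -d 0 --resampling=2 --already_downsampled
```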
Orum
2025-02-05 09:44:31
yeah, but the scaling from the 'native' res isn't simply NN; it's probably bicubic or even bilinear
2025-02-05 09:44:39
either way, not ideal
A homosapien
2025-02-05 09:44:49
The opposite of ideal
Orum
2025-02-05 09:46:51
well actually there are two resizers going on there
2025-02-05 09:47:14
first is the internal scaling using NN to whatever the console is rendering at
2025-02-05 09:47:29
then there's the additional scaling by the emulator to UHD, which is not NN
2025-02-05 09:49:50
anyway I will be running another round of compression benchmarks soon, and will compare not just WebP and <:JXL:805850130203934781> this time, but also ppm.xz for everything
A homosapien
2025-02-05 09:58:49
best attempt clean rescale
Orum
2025-02-05 10:14:25
looks pretty good <:PepeGlasses:878298516965982308>
A homosapien
Orum .ppm.xz is 196,900B, which is better than either codec <:PepeHands:808829977608323112>
2025-02-05 10:46:49
You know... technically you could store the .ppm in a metadata box and compress it via brotli <:KekDog:805390049033191445>
2025-02-05 10:47:00
At least then JXL would beat WebP
Quackdoc
2025-02-05 10:47:52
krokiet/czkawka is really nice, but you really quickly hit the limitations of jxl-oxide when you're comparing a load of images. EDIT: highly recommend recompiling czkawka with `target-cpu=native`, it helps a lot
2025-02-05 10:48:04
need jxl-rs I guess [pepehands](https://cdn.discordapp.com/emojis/1075509930502664302.webp?size=48&name=pepehands)
Quackdoc krokiet/czkawka is really nice but you kinda really quickly hit the limitations of jxl-oxide on it when you are comparing a load of images EDIT: highly reccomend recompiling czkawka with `target-cpu=native` it helps a lot
2025-02-05 10:53:30
for context
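For reference, a native-CPU rebuild is a one-liner if you have a Rust toolchain (crate name assumed from the czkawka repo; the krokiet GUI crate can be built the same way):
```
# build the CLI with CPU-specific optimizations enabled
RUSTFLAGS="-C target-cpu=native" cargo install czkawka_cli
```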
Orum
A homosapien You know... technically you could store the .ppm in a metadata box and compress it via brotli <:KekDog:805390049033191445>
2025-02-05 11:31:32
well that's even less likely to display in an image viewer than ppm.xz
2025-02-05 11:31:39
and probably going to be larger as well
Meow
2025-02-05 02:17:26
`cjxl -d 0.5` vs ssimulacra2 = 90, which is the better metric for the visually lossless quality?
jonnyawsom3
2025-02-05 02:17:39
We had similar results on Factorio screenshots. The issue is generally repeating tiles/sprites that don't have a clean background. So JXL can't create patches but regular compression can see the patterns
2025-02-05 02:18:37
Could be done using layers if JXL encoding were integrated into the game; we've seen up to 10x size reduction from that in testing, but obviously it's unlikely at best...
Orum
Meow `cjxl -d 0.5` vs ssimulacra2 = 90, which is the better metric for the visually lossless quality?
2025-02-05 02:18:47
distance is butteraugli distance, but it's really just an estimate
2025-02-05 02:19:24
in any case I don't think butteraugli is as good as ssimul2, but both have their issues
jonnyawsom3
2025-02-05 02:19:30
cjxl distance is a target, Ssimulacra2 is an actual measurement. Higher effort settings should make them align more closely. Or rather, give you a more accurate comparison
Meow
2025-02-05 02:19:52
cjxl says "Target visual distance in JND units"
jonnyawsom3
2025-02-05 04:38:22
I fear this may be another victim of image converters not warning about transcoding.... https://www.reddit.com/r/jpegxl/comments/1iidd37/
2025-02-05 04:38:53
> I archived all the jpegs into 7z for a smooth transfer and that made me regret EVERYTHING. The space-savings I got from hours spent on converting was nothing compared to what I got from archiving in minutes.
2025-02-05 04:39:44
Maybe we should explicitly ask other tools to warn if the input is JPEG and they don't pass it through to libjxl properly
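For reference, the lossless transcode path those converters should be taking; `--lossless_jpeg=1` has been cjxl's default for JPEG input for a while, so the flag only makes it explicit (filenames are placeholders):
```
cjxl in.jpg out.jxl --lossless_jpeg=1   # bit-exact JPEG recompression, no generation loss
djxl out.jxl restored.jpg               # reconstructs the original JPEG byte-for-byte
```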
Meow
2025-02-05 04:56:38
So is that JXL or JPEG?
2025-02-05 04:57:35
Read through the article several times and still can't get what has been done
Orum
2025-02-05 04:59:08
> I was using XL Converter from codepoems
sounds like the source of the issue right there
jonnyawsom3
Orum > I was using XL Converter from codepoems sounds like the source of the issue right there
2025-02-05 05:04:28
Perhaps not
> Its 5.2 gigs to 1.9 in 7z compared to just 4.4 gigs in jxl lossless transcode.
2025-02-05 05:05:02
That fits the transcoding savings, just seems like there was an absurd amount of redundancy between images
Orum
2025-02-05 05:05:11
well something is up if they're going from 5.2 -> 1.9 in 7z 😆
jonnyawsom3
2025-02-05 05:46:23
> The images are non-photographic so yes, there should be higher redundancy.
That explains it
Demiurge
2025-02-05 06:55:33
I have experienced the same thing this guy is experiencing.
2025-02-05 06:56:00
A bunch of large JPEGs in a 7z package is much much smaller than converting them to jxl
2025-02-05 06:56:16
7z is able to crunch multiple jpeg files well
2025-02-05 06:56:49
The only thing that doesn't make sense is saying that it was faster. LZMA is super slow
2025-02-05 07:01:43
7z is not able to do the same to multiple similar jxl files
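The cross-file crunching depends on a solid archive, where the LZMA dictionary spans all files and can match redundancy between them; a minimal sketch (archive and file names hypothetical; solid mode is the default for .7z, the flag just makes the point explicit):
```
# -mx=9 = max compression, -ms=on = solid archive spanning all inputs
7z a -mx=9 -ms=on jpegs.7z *.jpg
```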
Lumen
2025-02-05 09:13:03
I got butter on gpu 👀
2025-02-05 09:13:07
https://github.com/Line-fr/Vship
monad
Meow What's the command of cjxl to produce the lossless non-photographic image as small as possible?
2025-02-05 09:43:23
a rather broad category. even excluding content which typically aligns with photo performance (digital_painting, 3d, machine_learning, texture), it's still the general incantation e10E4I100g3 (excluding e11) that produces the lowest mean bpp, as spelled out below. (this is backed by, e.g., one test of 3840 commands against 1051 8-bpc images, which required 137 different commands to minimize at only 4% smaller)
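Spelled out as an actual command line, that incantation would be something like the following (lossless assumed, since the test labels are prefixed d0; filenames are placeholders):
```
cjxl in.png out.jxl -d 0 -e 10 -E 4 -I 100 -g 3
```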
AccessViolation_
2025-02-05 11:06:39
I didn't know lossless webp was that good
2025-02-05 11:08:04
I'll have to look into that later, I wonder if it's the format that gives it an edge or if it just has a particularly good encoder
jonnyawsom3
AccessViolation_ I'll have to look into that later, I wonder if it's the format that gives it an edge or if it just has a particularly good encoder
2025-02-05 11:41:08
IIRC JXL has all the features Lossless WebP does, the encoder just hasn't been fully ported over yet
Quackdoc
2025-02-05 11:42:36
lossless lossless webp transcoding when
AccessViolation_
IIRC JXL has all the features Lossless WebP does, the encoder just hasn't been fully ported over yet
2025-02-05 11:43:27
are they really that similar?
A homosapien
2025-02-05 11:52:50
Mostly, there are a few key differences as mentioned here: https://discord.com/channels/794206087879852103/822105409312653333/1300197653136670814 But a lot can be copied over from WebP
2025-02-05 11:53:12
Both lossless modes were spearheaded by Jyrki
jonnyawsom3
AccessViolation_ are they really that similar?
2025-02-05 11:53:53
Main difference I can think of is JXL using groups, so long repetitions can't be caught by the encoder as well, but that's a place for patches anyway. Jyrki mentioning encoder tuning https://discord.com/channels/794206087879852103/794206170445119489/1292778172265533441 and Jon discussing differences https://discord.com/channels/794206087879852103/794206087879852106/1247119135855611944
AccessViolation_
2025-02-05 11:56:39
"we just haven't build a great encoder for lossless" and it's already so good 👀
jonnyawsom3
2025-02-05 11:57:48
It may not always beat WebP in size, but in my experience it's always far more efficient in both time and memory
RaveSteel
2025-02-05 11:59:07
It decodes faster and is competitive with JXL at very small image sizes
2025-02-05 11:59:32
Although I have only tested this for lossless
2025-02-06 12:00:54
Also competitive in filesizes
jonnyawsom3
2025-02-06 12:01:00
Decode speed was about equal for me, or around 50% faster at effort 1 and 2 naturally. Both lossless
2025-02-06 12:01:44
Would be nice to optimise the fixed MA trees in there...
RaveSteel
2025-02-06 12:04:12
I used cwebp at max compression and cjxl at e7, but webp still decodes faster for me. Depending on the "benchmark" (nomacs, ffmpeg with -benchmark), WebP decodes around twice as fast
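For reproducibility, the ffmpeg-style decode benchmark would look something like this (a rough sketch; assumes ffmpeg was built with libjxl and libwebp support, and the filenames are placeholders):
```
# decode to nowhere and report the time spent
ffmpeg -benchmark -i image.webp -f null -
ffmpeg -benchmark -i image.jxl -f null -
```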
jonnyawsom3
2025-02-06 12:09:34
What resolution if you recall?
RaveSteel
2025-02-06 12:10:06
very small resolution, I retested again just now. up to 128x128
2025-02-06 12:10:41
WebP is around as fast as PNG while being a good bit smaller, around 40%
jonnyawsom3
2025-02-06 12:15:51
Ah, yeah. My latest image was 7500 x 7500, so the group decoding would've been multithreaded
RaveSteel
2025-02-06 12:18:35
E1 would be more conducive for thumbnails I think, but at that point it is handily beaten by WebP, while still being slower to decode, at least according to my aforementioned "benchmarks"
Demiurge
AccessViolation_ I'll have to look into that later, I wonder if it's the format that gives it an edge or if it just has a particularly good encoder
2025-02-06 12:51:58
Jyrki hit a home run when building the lossless webp encoder. A similar thing could be achieved with jxl but hasn't yet
Meow
monad a rather broad category. even excluding content which typically aligns with photo performance (digital_painting, 3d, machine_learning, texture) it's still the general incantation e10E4I100g3, excluding e11, producing lowest mean bpp. (this backed by e.g. one test of 3840 commands against 1051 8-bpc images requiring 137 different commands to minimize at 4% smaller)
2025-02-06 02:16:27
`-e 10 -E 4 -I 100 -g 3` Is this correct?
RaveSteel
2025-02-06 02:18:18
Do note that g3 will sometimes be larger than 0 to 2
Meow
RaveSteel Do note that g3 will sometimes be larger than 0 to 2
2025-02-06 02:51:32
What kind of images would sometimes let this happen?
RaveSteel
2025-02-06 02:54:35
It depends, I myself have not conducted any real tests regarding this, I just noticed it when testing parameters
Meow
2025-02-06 03:16:44
By the way, I'm planning to experiment with copying PNG birth time (no Exif) to JXL as Exif `DateTimeOriginal` and `CreateDate`. Is that proper behaviour?
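One way to do that copy, assuming a reasonably recent exiftool with JXL write support and a filesystem that exposes creation time (FileCreateDate is available on Windows/macOS; filenames are placeholders):
```
# copy the PNG's filesystem birth time into the JXL's Exif date fields
exiftool -tagsfromfile in.png "-DateTimeOriginal<FileCreateDate" "-CreateDate<FileCreateDate" out.jxl
```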
ohashi
2025-02-06 03:18:31
does anyone know if it's possible to add jxl as covers for audio files?
HCrikki
2025-02-06 03:28:07
i recall seeing foobar and another app reading jxl images as sidefiles (meaning not integrated inside music files, but as .jxl images with the usual common cover filenames, located in the same folder)
2025-02-06 03:30:08
jxl would be excellent for future revisions of audio metadata, but the market has been shifting away from that in favour of keeping covers separate so they're easier to manage by providers/apps
2025-02-06 03:34:30
that also gets around market inertia and unfavorable changes in specs. audio services can serve covers in any format they wish to browsers and especially their own apps
Meow
monad a rather broad category. even excluding content which typically aligns with photo performance (digital_painting, 3d, machine_learning, texture) it's still the general incantation e10E4I100g3, excluding e11, producing lowest mean bpp. (this backed by e.g. one test of 3840 commands against 1051 8-bpc images requiring 137 different commands to minimize at 4% smaller)
2025-02-06 05:34:02
I also wonder if that's more recommended than `-e 11 --allow_expert_options`
Orum
Demiurge A bunch of large JPEGs in a 7z package is much much smaller than converting them to jxl
2025-02-06 05:35:00
the real question is what if you convert them to JXL and *then* compress with LZMA? <:galaxybrain:821831336372338729>
monad
2025-02-06 06:40:13
both e10 and e11 are super compute inefficient, e11 being much worse. I wouldn't recommend either for practical scenarios. as far as answering the question of a single command to get the smallest size the encoder offers, that would be e11 by a few percent
RaveSteel Do note that g3 will sometimes be larger than 0 to 2
2025-02-06 06:44:20
it is not so significant or differentiating
```
       B   bpp (mean)   mins (in 'all')   unique mins   best of
24757629     2.589999              ····          ····   · all
24817297     2.592913            96.09%        17.64%   A cjxl_0.10.2-d_d0e10E4I100g3
25018605     2.598436            82.36%         3.91%   A cjxl_0.10.2-d_d0e10E4I100g2
```
these are more complementary
```
       B   bpp (mean)   mins (in 'all')   unique mins   best of
24503705     2.559757              ····          ····   · all
24817297     2.592913            73.01%        72.70%   A cjxl_0.10.2-d_d0e10E4I100g3
35807963     3.501005            27.30%        26.99%   A cjxl_0.10.2-d_d0e10P0E4I0g3X0patches0
```
A homosapien
2025-02-06 07:00:27
What does your testing corpus consist of?
monad
2025-02-06 07:06:15
this is roughly the non-photo or similarly performing content
2025-02-06 07:09:24
generally small images in this case since I ran thousands of combinations of settings
Meow
monad both e10 and e11 are super compute inefficient, e11 being much worse. I wouldn't recommend either for practical scenarios. as far as answering the question of a single command to get the smallest size the encoder offers, that would be e11 by a few percent
2025-02-06 07:50:40
I only want to do it for my personally owned works
2025-02-06 08:26:52
To try `-e 10 -E 4 -I 100 -g 3` instead if that's the more efficient option
2025-02-06 08:27:56
`-e 11` may take days to finish one artwork
Demiurge
Orum the real question is what if you convert them to JXL and *then* compress with LZMA? <:galaxybrain:821831336372338729>
2025-02-06 08:31:59
Worse than jpeg+lzma
2025-02-06 08:32:33
Less redundancy across files
2025-02-06 08:32:49
Sad face
AccessViolation_
2025-02-06 09:02:37
i'm guessing that'd be because of adaptive quantization introducing more possible variance
2025-02-06 09:05:19
I bet lossless jpeg transcoding would still lz77 about the same, it'd be interesting to test
monad
Meow `-e 11` may take days to finish one artwork
2025-02-06 09:10:46
depends on how many threads you can dedicate. e11 does three stages of encodes: 2 to differentiate content, then either 20 or 24 based on the outcome, then the final encode with the densest settings. So, you could expect it to take roughly 3x longer than e10 minimum for single images, with opportunity to parallelize more with multiple instances of the encoder across multiple files.
jonnyawsom3
AccessViolation_ i'm guessing that'd be because of adaptive quantization introducing more possible variance
2025-02-06 09:12:03
The jpeg transcoding introduces Chroma From Luma, arithmetic encoding (IIRC), and other tools that could mess with the 8x8 block patterns that 7z handles so well
monad
AccessViolation_ I bet lossless jpeg transcoding would still lz77 about the same, it'd be interesting to test
2025-02-06 09:25:20
At one point I tested archiving VNs transcoding JPEG to JXL before general compression and usually the JXL version was smaller, but in some cases it was much larger.
Meow
Meow To try `-e 10 -E 4 -I 100 -g 3` instead if that's the more efficient option
2025-02-06 01:08:12
Good, this is not that long a process
2025-02-06 01:10:37
Saving an additional 10% to 15% over the default (-e 7) is quite impressive
2025-02-06 01:11:15
Even more efficient than ZopfliPNG for me