JPEG XL

jxl

Anything JPEG XL related

Orum
2024-04-15 08:58:22
yeah, testing only it looks like
2024-04-15 09:20:41
seems the plugin won't load 16K JXLs any more for whatever reason
HCrikki
2024-04-15 09:36:56
aren't there 2 packages now for qt5 and qt6? afaik if you used the qt5 one, it got renamed and the qt6 build took its place
Orum
2024-04-15 09:37:29
maybe on arch
HCrikki
2024-04-15 09:38:08
manjaro handles big cross-generation updates poorly, unlike tumbleweed and plain arch
Orum
2024-04-15 09:39:22
I just don't get why there is only a Qt5 one on AUR and no Qt6 one
lonjil
Orum maybe on arch
2024-04-15 09:39:23
Manjaro is the same, just behind
Orum I just don't get why there is only a Qt5 one on AUR and no Qt6 one
2024-04-15 09:39:36
because it's ancient and no one cares to update it
lonjil Manjaro is the same, just behind
2024-04-15 09:40:41
on Manjaro, you have "kimageformats5" in stable and in testing, and "kimageformats" in testing, matching Arch
Orum
2024-04-15 09:42:35
I wonder how long things typically stay in testing before being pushed to stable <:Thonk:805904896879493180>
lonjil
2024-04-15 09:43:38
well, Arch pushed KDE6 a bit over a month ago. so at least that long I guess.
Orum
2024-04-15 09:44:32
but arch's push date and how long something stays in manjaro testing have no relevance to one another
lonjil
2024-04-15 09:46:17
it's based on Arch, isn't it?
Orum
2024-04-15 09:46:27
Manjaro's unstable branch is really the only thing that closely follows Arch
Demiurge
2024-04-15 10:50:19
https://archlinux.org/packages/extra/x86_64/kimageformats/
2024-04-15 10:50:51
It's qt6 and is linked with libjxl
2024-04-15 10:51:12
Sounds like a manjaro problem?
2024-04-15 10:52:35
Maybe consider switching to arch, or at least maybe endeavor? I haven't tried endeavor but a friend of mine uses it.
Orum
2024-04-16 12:06:32
if it weren't such a hassle to switch I might
2024-04-16 12:08:40
anyway, with kimageformats in testing, and the other plugin working for images smaller than 16K, I'll just wait for kimageformats for Qt6 to make it into stable
Demiurge
2024-04-16 01:22:31
Sounds like someone took arch and asked, "How can we make it like debian/ubuntu?" and thus was born manjaro :)
Orum
2024-04-16 01:44:56
literally the only reason I chose it over arch was because it had an installer and arch did not (at the time)
2024-04-16 01:45:14
also comparing it to debian isn't really fair, as debian is *way* more out of date
Demiurge
2024-04-16 03:18:37
I like being unfair sometimes :)
novomesk
2024-04-16 09:52:33
Qt6-based viewers should call QImageReader::setAllocationLimit(int mbLimit) to set a higher allocation limit. The default limit is quite small, and larger images are rejected by default. When an app is ported from Qt5 to Qt6, someone should add the setAllocationLimit call to allow larger images to load.
Demiurge
2024-04-16 09:56:42
The real problem is that decoding requires an infinite amount of memory
2024-04-16 09:57:52
if you try to decode a large image with djxl, you can't, because there isn't enough memory
jonnyawsom3
2024-04-16 10:10:44
Define 'large'
2024-04-16 10:11:32
Oh actually Orum said 16K
novomesk
2024-04-16 10:48:30
Qt6 uses the following default: "256 MB is enough for an 8K 64bpp image"
2024-04-16 11:04:24
The limit is for the size of the decoded image; libjxl can use more memory during the decoding process.
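That default is easy to sanity-check with back-of-envelope arithmetic (a sketch: the 256 MB limit and the 8K/64bpp example come from the Qt comment quoted above; `decoded_size_mb` is just an illustrative helper, not a Qt API):

```python
def decoded_size_mb(width, height, bytes_per_pixel=8):
    """Size of the decoded framebuffer alone (64bpp = 8 bytes per pixel);
    the codec's working memory during decode comes on top of this."""
    return width * height * bytes_per_pixel / (1024 * 1024)

LIMIT_MB = 256  # Qt 6 default allocation limit, per the comment quoted above

# 8K UHD at 64bpp fits just under the default limit...
print(decoded_size_mb(7680, 4320))   # 253.125 -> loads
# ...but a 16K image needs ~4x that, so Qt rejects it unless the viewer
# raises the limit with QImageReader::setAllocationLimit().
print(decoded_size_mb(15360, 8640))  # 1012.5 -> rejected by default
```

Which matches the reports above: the 16K JXLs fail in Qt6 viewers at the default limit even though the file itself is fine.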
Demiurge
2024-04-16 04:33:26
libjxl unfortunately requires infinite RAM still
2024-04-16 04:34:29
It shouldn't require an infinite amount of RAM...
_wb_
2024-04-16 05:00:32
there is a way to set memory limits in libjxl iirc. The fuzzers use that.
Demiurge
2024-04-16 05:02:35
But then you can't decode large files. It should be possible to successfully decode large files without requiring an exponentially increasing amount of RAM.
_wb_
2024-04-16 05:03:28
That is possible already, if you use the pixel callback api and not the full-frame api
Demiurge
2024-04-16 05:03:45
Hmm...
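The full-frame vs. pixel-callback distinction can be sketched abstractly. This is a toy illustration of the memory argument only, not the libjxl API (libjxl exposes the streaming style via an image-out callback on its decoder):

```python
# Toy contrast between a full-frame decode API and a scanline-callback API.
def decode_full_frame(width, height):
    """Full-frame style: the caller receives one buffer holding every pixel,
    so peak memory is O(width * height)."""
    return [[0] * width for _ in range(height)]

def decode_with_callback(width, height, on_row):
    """Callback style: the decoder reuses a single O(width) row buffer and
    hands each decoded scanline to the caller, who decides what to keep."""
    row = [0] * width
    for y in range(height):
        # ...decode scanline y into `row` here...
        on_row(y, row)

# Usage: consume an arbitrarily tall image while only ever holding one row.
seen = []
decode_with_callback(4, 3, lambda y, row: seen.append(y))
print(seen)  # [0, 1, 2]
```

So with the callback style, decoding a huge image no longer requires a framebuffer for the whole thing, which is the point being made above.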
Orum
2024-04-17 09:00:43
does cjpegli automatically select a chroma subsampling based on distance?
2024-04-17 09:15:40
it's wild how much better JXLs look at the same distance vs cjpegli's JPGs, while being smaller, but I suppose that's to be expected
2024-04-17 09:16:15
still has some annoying block edge artifacts, but not nearly as many
jonnyawsom3
Orum does cjpegli automatically select a chroma subsampling based on distance?
2024-04-17 09:25:04
Currently no, and even more confusingly it just says YUV on default encoding. When you tell it to use a specific subsampling, it displays YUV444, YUV422, YUV440 or YUV420, but never just YUV like on default
Orum
2024-04-17 09:25:32
so what is the default, then? 4:4:4?
jonnyawsom3
2024-04-17 09:31:03
Yeah, 4:4:4 by default, although on XYB the default subsampling doesn't match any of the listed options...
2024-04-17 09:31:52
Closest match is still 444 though
fab
2024-04-18 10:23:03
https://arxiv.org/html/2404.08399v1
2024-04-18 10:23:32
First paper about JPEG XL 2.0.0 WhitePaper 2024 ISO
2024-04-18 10:23:38
Not yet iso
2024-04-18 11:01:08
Because the design isn't ready
2024-04-18 11:01:31
They aren't saying the design but is likely using 0.10.2 nightly
2024-04-18 11:01:50
JPEG xl 2.0.0 is jpeg xe
2024-04-18 11:02:02
It isn't unusual and finished
2024-04-18 11:02:35
On cloudinary the same encoder is libjxl with a whitepaper that isn't iso at the moment
2024-04-18 11:03:11
And there isn't a date in the jpeg.org page of all those standards
2024-04-18 11:03:36
Only a mention to 3+1 documents relative to JPEG XL Standard 2024
2024-04-18 11:04:07
So I didn't mean to say Whitepaper
2024-04-18 11:04:22
Sorry for the misinformation
2024-04-18 11:04:43
2024-04-18 11:07:06
Yamada is jpegxl 2023 iso international or only the 2022 one is International?
2024-04-18 11:07:47
In Wikipedia Italy i don't find that information
Oleksii Matiash
2024-04-18 11:24:48
🤦‍♂️
fab
2024-04-18 11:27:36
2024-04-18 11:28:04
Is about how human and AI differently see construction
_wb_
2024-04-18 11:49:08
JPEG XE is actually a thing since recently, and it has nothing to do with JPEG XL: https://jpeg.org/jpegxe/index.html
fab
2024-04-18 11:54:29
Orum
2024-04-18 12:00:30
is there any way to get 'blurrier' (lower frequency) ISO noise in JXL?
2024-04-18 12:02:44
by default it's such high freq it's unreasonable for a bayer filtered sensor to have after demosaicing
fab
2024-04-18 01:09:58
New build fixes 0,500-0.-600 0.6640 bpp
2024-04-18 01:10:19
It's a big upgrade; the latest I tried was 9th April
Demiurge
fab New build fixes 0,500-0.-600 0.6640 bpp
2024-04-19 12:41:22
The new build isn't ready yet, you get bad results at 090113 bbp because the multiplication crystal is disoriented.
fab
Demiurge The new build isn't ready yet, you get bad results at 090113 bbp because the multiplication crystal is disoriented.
2024-04-19 05:02:57
The only newness is the rect and av2 has it from 5 months ago and also cctx
2024-04-19 05:04:12
The rest is similar to jpeg xl whitepaper 2023 in the requirements. In a month obviously not many big changes
Demiurge
fab The rest is similar to jpeg xl whitepaper 2023 in the requirements. In a month obviously not many big changes
2024-04-19 09:20:35
I just tested that on my laptop and it makes a big difference, especially when using high octane fuel.
fab
2024-04-19 10:37:51
2024-04-19 10:38:38
Jon I captured my father's metrics, he's 87 years old
2024-04-19 10:39:02
Better architecture geometry than what I see
2024-04-19 10:39:23
Though imprecision in colours
2024-04-19 10:45:45
In my document folder I have this fold and other 40 of tests and yesterday at 21.40pm I did one
Demiurge
fab In my document folder I have this fold and other 40 of tests and yesterday at 21.40pm I did one
2024-04-19 11:07:31
It's too late. It's all been formatted to pickle encoding. You'll need to restore the quantization field from a backup on the jpeg master server.
2024-04-19 11:10:04
Make sure to use the latest build of Iceweasel before you do, and replace your oil filters so it doesn't bias the results.
Fox Wizard
2024-04-19 11:23:49
Demiurge
2024-04-19 11:26:36
Only the most salubrius lipids can requantize the pickle matrix.
fab
2024-04-19 11:49:04
I did another round
_wb_
2024-04-19 11:55:29
Unfortunately <@416586441058025472> seems unable to stop spamming the various channels of this discord with incomprehensible messages and screenshots that are supposedly related to image compression but that don't make sense to anyone. I have given many, many warnings over the past months, including several one-week timeouts, hoping that it would improve. But it didn't, and I cannot keep warning.
fab
2024-04-19 11:56:09
I didn't screenshots
_wb_
2024-04-19 11:57:04
I assume <@416586441058025472> is not of ill will, but at this point I don't think I have any other choice than to proceed by banning.
fab
2024-04-19 11:58:50
Ok
_wb_
2024-04-19 11:59:21
Bye, <@416586441058025472>. I hope you understand that at some point my warnings have to stop and actions need to be taken. This is that point. Have a nice day, otherwise!
2024-04-19 12:01:22
<:BanHammer:805396864639565834>
2024-04-19 12:03:08
It is done.
Oleksii Matiash
2024-04-19 01:55:21
F
Demiurge
2024-04-19 09:47:49
Damn. I had just figured out how to speak his language too 😿
Traneptora
2024-04-19 11:27:42
Italian?
monad
2024-04-20 01:11:56
who should we ban next
VcSaJen
2024-04-20 04:09:58
Let's talk about thumbnail feature. Is it implemented yet, or you still have to do progressive load and then manually halt it? Does it support arbitrarily small thumbs with ×2ᴺ smaller size, or ×8 smaller is as far as it goes? I'm talking about arbitrary \*.jxl files, not specially constructed \*.jxl files.
Tirr
2024-04-20 04:31:16
vardct images always have a 1/8x progressive pass at least; modular images may or may not have a progressive pass
_wb_
2024-04-20 10:41:51
Smaller than 1:8 requires a DC frame in case of VarDCT; in squeezed Modular you get it by default. But iirc libjxl is not currently showing everything it in principle could show when given a partial bitstream: it plays it safe, avoids having to unsqueeze multiple times, and doesn't assume that a DC frame will be used by the next real frame (so it only starts showing it when it is completely decoded and referenced from the real frame).
2024-04-20 12:22:39
That is a special case of VarDCT
Quackdoc
2024-04-20 04:49:53
It looks like slow.pics broke JXL uploads and viewing. No idea why uploads are broken, but comparisons that used JXL are broken too now, because they report width=0 height=0; I'm assuming they're failing to probe something. So now JXL is more or less not usable on slow.pics. I wonder if there are any alternatives that work
Demiurge
2024-04-21 08:52:48
Yes, all of the losslessly transcoded images automatically have an 8x downsampled progressive image.
Jyrki Alakuijala
_wb_ Unfortunately <@416586441058025472> seems unable to stop spamming the various channels of this discord with incomprehensible messages and screenshots that are supposedly related to image compression but that don't make sense to anyone. I have given many, many warnings over the past months, including several one-week timeouts, hoping that it would improve. But it didn't, and I cannot keep warning.
2024-04-21 02:27:43
For what it is worth: I see huge value in inclusivity and having broad acceptance of different behaviours. However, @fab was too disruptive over a long time-period, received substantial guidance without alignment, and we need to protect our ability to communicate, too.
VcSaJen
2024-04-21 03:02:01
Is there a reason to do a ban instead of a bot-enforced mute with infinite duration? Did he abuse PMs, reaction emotes, leak docs, etc.? Read access should always be there, IMO.
_wb_
2024-04-21 03:17:27
The max mute duration is one week. Could of course manually extend it forever... which is what I have effectively been doing the past weeks and months, since every time the week was over he started again...
Demiurge
VcSaJen Is there a reason to do a ban instead of a bot-enforced mute with infinite duration? Did he abuse PMs, reaction emotes, leak docs, etc.? Read access should always be there, IMO.
2024-04-21 09:14:31
It might actually be healthy to take a break? He seemed to be a little obsessive.
Jyrki Alakuijala
2024-04-22 05:10:02
hey, let's not speculate on someone's health here
2024-04-22 05:10:24
good news: two quality improvement changes are coming to VarDCT
2024-04-22 05:10:53
https://github.com/libjxl/libjxl/pull/3520 is the first one
2024-04-22 05:11:23
this should mitigate color-related mistakes
2024-04-22 05:11:43
it is not a proper fix, just shifts the point between color vs. luma towards color
2024-04-22 05:12:08
It's something like -- 20 % more color info, 2 % less luma info
2024-04-22 05:13:00
next change after this will relate to edge-preserving filter and I anticipate it will be a major change in image quality for the better ((but I haven't seen the images yet))
2024-04-22 05:13:47
I'm anticipating it will feel like a 3-5 % improvement, but be much less in objective numbers -- it will be available at qualities where edge-preserving filter is used
2024-04-22 05:14:11
I hope we can tune it to address the ringing artefacts without losing the qualities of materials
yoochan
2024-04-22 07:52:31
Will it have an effect on the gaborish filter bug?
_wb_
2024-04-22 10:14:25
Gaborish is something else. I think it should be retuned for, say, 5 generations instead of just 1.
veluca
2024-04-22 10:47:19
it should also be tuned for the encoder's transform being closer to an inverse of the decoder's 😛
Jyrki Alakuijala
2024-04-22 11:17:48
the coming changes will improve generation loss, but I'm not planning to address gabor specifically
_wb_
veluca it should also be tuned for the encoder's transform being closer to an inverse of the decoder's 😛
2024-04-23 08:01:47
Well yes, that's the goal of optimizing for the Nth generation and not just the 1st. The goal is to make `dec(enc(dec(enc(dec(enc(....(image))))` as close as possible to `image`, at least perceptually. In the limit for distance -> 0 that means `dec_gaborish(enc_gaborish)` needs to be the identity, but for larger distance it could be beneficial to have an enc_gaborish that is not quite the inverse of dec_gaborish, if it can somehow compensate for other errors introduced by lossy encoding. Or maybe not. But the point is to tune for multi-generation and not single-generation, since it's too easy to overfit single-generation tuning to please a metric imo, without really bringing a real improvement.
2024-04-23 08:02:30
As things are now, gaborish is really bad for generation loss, see https://discord.com/channels/794206087879852103/803645746661425173/1230065259784572928
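The multi-generation point above can be sketched with a toy 1-D "codec". This is purely illustrative: a cyclic 3-tap blur stands in for the decoder-side filter, and the matched encoder-side filter is its approximate inverse found numerically; none of this is libjxl code or the actual gaborish kernel:

```python
def dec(x):
    """'Decoder-side' smoothing: cyclic 3-tap blur (stand-in for gaborish)."""
    n = len(x)
    return [0.1 * x[i - 1] + 0.8 * x[i] + 0.1 * x[(i + 1) % n] for i in range(n)]

def enc_matched(x, iters=50):
    """'Encoder-side' pre-filter approximating the inverse of dec,
    via the fixed-point iteration y <- y + (x - dec(y))."""
    y = list(x)
    for _ in range(iters):
        d = dec(y)
        y = [yi + (xi - di) for yi, xi, di in zip(y, x, d)]
    return y

def enc_mismatched(x):
    """Pre-filter that ignores dec entirely (identity)."""
    return list(x)

def generations(x, enc, n_gen):
    """Apply dec(enc(.)) n_gen times, i.e. re-encode the decoded output."""
    for _ in range(n_gen):
        x = dec(enc(x))
    return x

def max_err(a, b):
    return max(abs(ai - bi) for ai, bi in zip(a, b))

img = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0]  # toy 1-D "image" with edges
print(max_err(generations(img, enc_matched, 5), img))     # stays tiny
print(max_err(generations(img, enc_mismatched, 5), img))  # drifts every generation
```

With the matched pre-filter, dec(enc(.)) is close to the identity and five generations stay near the original; with the mismatched one, each generation blurs further, which is the generation-loss failure mode being discussed.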
Jyrki Alakuijala
2024-04-24 06:39:45
Re: [libjxl/libjxl] New AR control field heuristics (PR #3529): it is merged
2024-04-24 06:40:33
This improves things by ~1–2 % based on gut feel
2024-04-24 06:41:12
I will look a bit into further improvements, as this is now more disciplined, and likely unblocks other improvements
2024-04-24 06:41:35
Most likely it will allow slightly larger integral transforms
afed
2024-04-24 06:45:20
maybe make it even stronger for slower efforts? and more accurate, so slower efforts would be more useful, especially for lower qualities. also, disable it or make it much weaker for high qualities, so it would be better for generation loss
2024-04-24 07:15:27
also, looking at some independent benchmarks, jxl is completely destroyed at lower bpp, like below ~1.5-1 bpp, even compared to some old codecs like j2k. but that's also because most benchmarks still just use old and mostly video-oriented metrics; even the jpeg ai framework uses vmaf as its main and best metric (no dssim, no ssimulacra2, no butteraugli) <https://gitlab.com/wg1/jpeg-ai/jpeg-ai-qaf> so maybe slower efforts can better utilize the available higher computational resources, not just more butteraugli iterations? i think there is no hope to beat modern video-based or ai codecs at very low qualities, but at least in the ~0.8-1.5 bpp range jxl should be pretty competitive
Orum
2024-04-25 07:19:08
Is it possible to create YCbCr <:JXL:805850130203934781>s? *Without* losslessly recompressing jpegs, that is...
_wb_
2024-04-25 08:26:03
The bitstream supports it. Libjxl has no such encode option though.
Orum
2024-04-25 08:26:49
damn
_wb_
2024-04-25 08:28:25
For what use case do you want this?
Orum
2024-04-25 08:32:27
Several reasons, actually. <@693503208726986763> and I found some interesting stuff with cjpegli when comparing its encodes in XYB vs YCbCr, and was curious if the same held true for <:JXL:805850130203934781> as well. Also it could be quite useful when using JXL as a proxy/high quality, intra-only codec for video as the other codecs in this space aren't great (e.g. CineformHD, DNxHD, ProRes, etc.).
lonjil
2024-04-25 08:33:23
you could losslessly or near losslessly encode YCbCr data in modular, right?
Orum
2024-04-25 08:33:56
Sure but VarDCT is generally more useful for lossy I think?
lonjil
2024-04-25 08:35:51
yeah but if the use case is really high quality stuff, Modular might be a good option
2024-04-25 08:36:04
not sure where the threshold is for photographic stuff
Orum
2024-04-25 08:36:39
🤔 Well it's worth investigating, that's for sure, but it's probably quite content dependent.
lonjil
2024-04-25 08:39:16
getting some better "near lossless" options in libjxl would improve modular for this use case
afed
2024-04-25 08:41:34
i think it's just not enough xyb tuning for very high qualities; it shouldn't be any worse. also, don't know what this is for, but it looks useful for the future if the perceptual optimization is somehow wrong https://github.com/libjxl/libjxl/pull/3531
Quackdoc
Orum Sure but VarDCT is generally more useful for lossy I think?
2024-04-25 08:41:56
I prefer modular myself
afed
2024-04-25 08:42:13
modular is much slower
Quackdoc
2024-04-25 08:42:35
I dont use potato
2024-04-25 08:42:37
[chad](https://cdn.discordapp.com/emojis/1070464111114211439.webp?size=48&quality=lossless&name=chad)
Orum
2024-04-25 08:42:37
IIRC one was beating the other but it was because something wasn't turned on for encoding in one of them... let me try and find the message
2024-04-25 08:44:02
ah, here it is: https://discord.com/channels/794206087879852103/794206170445119489/1217689479612076094
2024-04-25 08:44:36
why no patch frames for VarDCT? <:FeelsSadMan:808221433243107338>
afed
2024-04-25 08:46:55
it'll come after streaming mode, most likely, because patches still haven't been optimized for it yet
Orum
2024-04-25 08:47:30
so do you get patch frames in VDCT when not using streaming?
afed
2024-04-25 08:48:55
maybe, but not sure, just saw some changes for that when streaming mode was added
Quackdoc
2024-04-25 08:50:19
all I know is that I much prefer modular's artifacting over dct
Orum
2024-04-25 08:50:32
it's very content dependent for me
2024-04-25 08:51:02
for photo VDCT basically always looks better to me, for everything else it depends™️
afed
2024-04-25 08:52:15
modular is just consistent blockiness/pixelization that looks like some sort of art filter
_wb_
2024-04-25 09:03:52
You can do either modular or vardct with YCbCr. In modular you can also do YCoCg, which I like better as a simple luma-chroma separation.
2024-04-25 09:06:05
For anything perceptual, XYB will be better (possibly with adjusted intensity target). But if you just want near-lossless RGB then YCoCg is nice since it's reversible in integer arithmetic.
Quackdoc
2024-04-25 09:11:03
I'm not sure what the practical difference between YCbCr and YCoCg is. I know some people still want chroma subsampling like yuv420 and yuv410
Orum
2024-04-25 09:15:02
yeah, or YCoCg-R
Quackdoc
2024-04-25 09:33:01
another thing thats nice about modular is it is a lot more... progressive I guess the term would be?
2024-04-25 09:37:40
for context, using tirr's script but with djxl instead: `cjxl ../anime/liora_walkyria_mendou_kusai.png input.jxl -d 0.5 -e 7 -m $TOGGLE`
```bash
#!/bin/bash
#set -euo pipefail
size=$(stat -c'%s' input.jxl)
step=32768
count=1
while (( $step * $count < $size )); do
  head -c $(( $step * $count )) input.jxl > partial.jxl
  djxl --allow_partial_files partial.jxl "frame${count}.png"
  count=$(( $count + 1 ))
done
djxl --allow_partial_files input.jxl "frame${count}.png"
```
modular starts decoding at frame 5 with a viewable frame; vardct doesn't create a frame until 18, and only the last one actually has an image. you do have to compensate distance a little bit to bring ssimu2 scores back up, I find, but the file size is still usually smaller than dct in many cases as well, so I just always use modular
TheBigBadBoy - 𝙸𝚛
2024-04-25 11:39:30
```bash
count=$(( $count + 1 ))
```
==
```bash
((count++))
```
<:KekDog:805390049033191445>
Quackdoc
Quackdoc another thing thats nice about modular is it is a lot more... progressive I guess the term would be?
2024-04-26 04:10:48
https://cdn.discordapp.com/attachments/673202643916816384/1233268459312251001/vardct-modular.mp4?ex=662c7a41&is=662b28c1&hm=affb5dc6cf0a41a12582a2c7a7912be2caa6e22f6e7b200a46b8292b92ece69d& video showcasing what I mean
afed
2024-04-26 04:19:46
for vardct it was just changed for browsers, as far as I recall, because it's cheaper to render and decode, and in most cases it doesn't need to be updated that often; just a few steps is enough. but vardct has the same capabilities for smooth progressiveness, and maybe some options to bring back this behavior are still available in the encoder
Quackdoc
2024-04-26 04:22:30
if there is a toggle I would like to test it, if it's the -p option that balloons the size
afed
2024-04-26 04:25:14
there's some options, like
```
--progressive_ac
    Use the progressive mode for AC.
--qprogressive_ac
    Use the progressive mode for AC with shift quantization.
--progressive_dc=num_dc_frames
    Progressive-DC setting. Valid values are: -1, 0, 1, 2.
```
but I'm not sure how this works with the "new" api
Tirr
2024-04-26 04:25:27
VarDCT can have separate Modular LF frame, and it can give great progressiveness https://discord.com/channels/794206087879852103/1065165415598272582/1231531047141769256
afed
2024-04-26 04:30:30
it can be like this, so it's just encoder settings/behavior, not a vardct limitation https://youtu.be/UphN1_7nP8U
2024-04-26 04:31:31
or (flif is close to jxl modular) https://youtu.be/A4GFzarcR3U
jonnyawsom3
afed it can be like this, so it's just encoder settings/behavior, not a vardct limitation https://youtu.be/UphN1_7nP8U
2024-04-26 04:32:14
It's actually the decoder. I asked Jon about it a few months ago, and it's due to the libjxl API and defaults changing to avoid showing corrupted/partial files too early (if I recall)
2024-04-26 04:33:20
https://discord.com/channels/794206087879852103/794206170445119489/1193190014151032964 > JPEG XL can either start showing something as early as possible (LQIP decode mode) or only show something when at least an 1:8 resolution is available
afed
2024-04-26 04:35:38
yeah, maybe just the decoder, but I believe the default settings for the encoder have also been changed (compared to the first old jxl versions)
Quackdoc
Tirr VarDCT can have separate Modular LF frame, and it can give great progressiveness https://discord.com/channels/794206087879852103/1065165415598272582/1231531047141769256
2024-04-26 04:36:00
i'll have to try it out. the big thing for me is that modular is more or less the same efficiency for what it gets. I assume the overhead is low for this, so I may go for this instead of just globally using modular
afed
2024-04-26 04:40:39
modular is much larger on average for close to visually lossless, also much slower to decode, and it looks worse for mid and some lower qualities. maybe it's not bad on some art images, but for varied images it's not that good
Quackdoc
2024-04-26 04:45:12
I generally found it to be more or less the same as VarDCT for quality:size across the spectrum I tested (-d 0.05..8.5), though needing to bump down the distance a bit. Slower to decode, for sure. But with my testing images being real photos, coloured comics and artwork, I found it more or less fine
afed
2024-04-26 04:49:03
for vardct on higher qualities, disabling gaborish or using faster/slower efforts may also help on some images
Quackdoc
2024-04-26 04:49:18
https://cdn.discordapp.com/emojis/935219720003649576.gif?size=64
Tirr
2024-04-26 04:50:55
in my experience fully progressive lossless modular tends to bloat in filesize. also progressive previews don't look so good because they're interpolated in RGB
Demiurge
2024-04-26 04:51:09
Gabor is automatically disabled for high quality settings afaik
afed
2024-04-26 04:54:16
not sure, also depends on what exactly high quality is needed
Tirr
Tirr in my experience fully progressive lossless modular tends to bloat in filesize. also progressive previews don't look so good because they're interpolated in RGB
2024-04-26 05:00:22
for lossy modular, they're in XYB so it might be okay. I think vardct with progressive LF frame combines strengths of both vardct and modular -- fully progressive until 1:8, and encodes fine details efficiently
_wb_
2024-04-26 05:31:18
I agree with <@206628065147748352>
afed yeah, maybe just the decoder, but I believe the default settings for the encoder have also been changed (compared to the first old jxl versions)
2024-04-26 05:32:40
Yes, DC frames were used by default at d>4.5 iirc, and now they're never used by default anymore.
Quackdoc
2024-04-26 05:38:21
what's the cmdline for something like this? hard to read on phone
_wb_
2024-04-26 05:44:18
cjxl -p should do the trick
Quackdoc
2024-04-26 05:47:36
ah, so `-p` is something that should really only be done with lossy. I had been avoiding it outright, since `-p` really blows up size on lossless
Tirr
2024-04-26 05:48:24
I needed to use --progressive_dc=1, but that might be changed recently
Quackdoc
2024-04-26 05:56:25
```
➜ jxl-p l
Permissions Size User  Date Modified Name
.rw-r--r--  446k quack 26 Apr 01:49  liora-e7-d0_75-m1.jxl
.rw-r--r--  463k quack 26 Apr 01:46  liora-e7-d1-p.jxl
.rw-r--r--  452k quack 26 Apr 01:46  liora-e7-d1.jxl
➜ jxl-p ssimulacra2_rs image ../anime/liora_walkyria_mendou_kusai.png liora-m.png
Score: 87.86607220
➜ jxl-p ssimulacra2_rs image ../anime/liora_walkyria_mendou_kusai.png liora-p.png
Score: 87.89087506
```
ahh I see, I can for sure see how this would be worth it in a lot of cases
Jyrki Alakuijala
afed also, looking at some independent benchmarks, jxl is completely destroyed at lower bpp, like below ~1.5-1 bpp, even compared to some old codecs like j2k. but that's also because most benchmarks still just use old and mostly video-oriented metrics; even the jpeg ai framework uses vmaf as its main and best metric (no dssim, no ssimulacra2, no butteraugli) <https://gitlab.com/wg1/jpeg-ai/jpeg-ai-qaf> so maybe slower efforts can better utilize the available higher computational resources, not just more butteraugli iterations? i think there is no hope to beat modern video-based or ai codecs at very low qualities, but at least in the ~0.8-1.5 bpp range jxl should be pretty competitive
2024-04-26 07:27:25
We cannot win in non-psychovisual or broken objective metrics by adding more psychovisual/human-viewing based tuning. We don't have resources to manage the complexity that would arise to build dedicated encoding heuristics for every faulty metric. Also, the images just wouldn't look good.
afed
Jyrki Alakuijala We cannot win in non-psychovisual or broken objective metrics by adding more psychovisual/human-viewing based tuning. We don't have resources to manage the complexity that would arise to build dedicated encoding heuristics for every faulty metric. Also, the images just wouldn't look good.
2024-04-26 07:44:15
yeah, but at least it would be nice to add or recommend other metrics in some popular frameworks. this framework is also used for other jpeg family codecs, and it's really strange when jxl looks bad even compared to very old formats, just because there are no other more relevant metrics in the test suite https://gitlab.com/wg1/jpeg-ai/jpeg-ai-qaf
2024-04-26 07:49:07
something like ssimulacra2.1 and dssim at least; butteraugli is maybe not quite fair, because jxl is tuned on that metric
_wb_
2024-04-26 08:17:08
I hope JPEG AIC-4 will make it clear that some metrics are just useless to evaluate modern codecs. Things move slowly though, AIC-4 is still not even at the draft call for proposals stage.
2024-04-26 08:19:37
There's a lot to be improved in the way codecs are being assessed, including how JPEG AI is currently doing things which I think has several flaws (but it's a big project with many contributors and they need to make decisions based on objective numbers to keep it manageable, same problem as what usually happens in video codec design).
2024-04-26 08:23:09
For example: comparing quality at fixed bitrates sounds fair enough but it's quite bad. This doesn't give you any information about encoder consistency, for example. And when aggregating over multiple images, it causes a bad mixing of results since for one image 0.25 bpp might be reasonable while for another it results in a really bad quality — and then you end up aggregating metric scores in a way that doesn't really make much sense.
2024-04-26 08:25:24
Also in JPEG AI they make the imo incorrect assumption that looking at the average of the BD rates according to multiple bad metrics will somehow result in a more reliable conclusion than looking at the BD rates according to just a single bad metric.
Orum
2024-04-26 08:28:06
BD rate itself is a problematic idea
_wb_
2024-04-26 08:33:53
Another overall problem is that academics generally only look at metrics which are validated in nice academic publications, and will e.g. consider SSIM a "better validated" metric than, say, DSSIM or Butteraugli, just because there are a bunch of papers about SSIM that "validate" it while DSSIM and Butteraugli are created by people who are not currently in academia and don't care much about getting papers published, so for academics they simply don't exist. But when you dig in a bit, basically SSIM was validated on images produced by jpeg encoders of the late 1990s, and maybe some early j2k encoders, but there's no reason whatsoever to assume that it also does a good job on modern image codecs that have different kinds of artifacts, let alone on AI-based codecs that have _very_ different kinds of artifacts.
2024-04-26 09:13:27
Bjøntegaard Delta is not a useless idea, but it makes some crucial assumptions that are often not true in practice:
1. The range of bitrates you look at is actually relevant for the use case, and uniformly so (in logspace). Often BD rates are computed for much lower bitrates than what is relevant for the use case, and since the integration is over log-bitrate, those low bitrates will have a large weight in the final result. E.g. if the typical bitrate is 1.5 bpp and you look at the range from 0.1 bpp to 3 bpp to compute BD rates, then that may seem like a reasonable thing to do (since 1.5 bpp looks like it is nicely in the middle of that interval), but it really isn't: in the BD rate computation, the range from 0.1 to 0.4 bpp will get the same weight as the range from 0.75 to 3 bpp, so your final number will be mostly determined by the lowest bitrates.
2. The metric outputs numbers on a scale that indicates quality in a "uniform" way (an additive change to a metric score by +x always has the same amount of "perceptual impact", so to speak). Most metrics don't work like that at all; e.g. going from VMAF 50 to VMAF 55 is not at all the same amount of "change in quality" as going from VMAF 90 to VMAF 95. Most metrics will tend to devote a much larger range to the heavy distortions than to the "reasonable quality" range; e.g. all reasonable-quality images will have SSIM > 0.95, so basically the whole range from 0 to 0.95 consists of various degrees of crappy images and only the top range of 0.95 to 1 actually matters. The scales of subjective quality are also not uniform (e.g. MOS scores, or even JNDs: going from 0.5 to 1.5 JND is not quite the same thing as going from 10 JND to 11 JND), and metrics are often validated only based on rank correlation anyway, which doesn't care about the numerical scale. Again, this tends to cause the lowest qualities to be over-represented in the final BD rate result.
2024-04-26 09:13:29
3. aggregating over multiple images by averaging metric scores for the same bpp makes sense. It doesn't, because of the previous problem but also because in still images, nearly nobody does constant-bitrate encoding (unless you include things like gpu texture formats), and nearly always encoders will be configured using some kind of "quality" parameter instead. 4. the bitrate-distortion curves don't cross, or not significantly; if they do, you're effectively assuming that better performance at one bitrate compensates for worse performance at another bitrate. Even if assumption 1 does hold, this is typically not the case: e.g. if you really do need low quality and high quality versions of your images, and codec A beats codec B in overall BD-rate but actually at low quality A beats B while at high quality B beats A, then if the thing you want to optimize is total bandwidth or storage, and you have to use a single codec, then codec B might still be the better choice since the low quality images use less bytes anyway and savings in the high quality images have much more impact on the total size.
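A toy numeric illustration of assumption 4, with made-up per-image byte sizes (hypothetical numbers, not measured from any real codec):

```python
# Codec A wins at low quality, codec B wins at high quality.
# Bytes per image at each quality tier (invented for illustration):
low_q  = {"A": 20_000,  "B": 25_000}   # A saves 5 KB per low-quality image
high_q = {"A": 400_000, "B": 350_000}  # B saves 50 KB per high-quality image

def total_bytes(codec, n_images=1000):
    # one low-quality and one high-quality version of every image
    return n_images * (low_q[codec] + high_q[codec])

print(total_bytes("A"))  # 420000000
print(total_bytes("B"))  # 375000000 -- B wins on total storage anyway
```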
veluca
2024-04-26 09:24:01
quite a few problems are improved by doing BD on the *inverse* function
2024-04-26 09:24:10
not all of them
2024-04-26 09:24:14
but it at least improves
spider-mario
_wb_ Also in JPEG AI they make the imo incorrect assumption that looking at the average of the BD rates according to multiple bad metrics will somehow result in a more reliable conclusion than looking at the BD rates according to just a single bad metric.
2024-04-26 09:26:16
brings this to mind:
_wb_
2024-04-26 09:45:52
That example of getting the height of the Emperor of China at an accuracy of ±0.03 mm by asking every person in China is a good one. Yes, most of the bad metrics are bad for similar reasons and because they're based on similar ingredients, thus their error in predicting image quality is certainly not "independent noise". It might also have some implications on how we do subjective testing btw: there might be a systematic error we don't know about, e.g. caused by people in 2024 being more familiar with jpeg artifacts than any other kinds of artifacts, and that might give us a false sense of confidence in the accuracy of test results.
yoochan
2024-04-26 09:55:28
And let's not forget Goodhart's law 😁 https://xkcd.com/2899/
Orum
_wb_ 3. aggregating over multiple images by averaging metric scores for the same bpp makes sense. It doesn't, because of the previous problem but also because in still images, nearly nobody does constant-bitrate encoding (unless you include things like gpu texture formats), and nearly always encoders will be configured using some kind of "quality" parameter instead. 4. the bitrate-distortion curves don't cross, or not significantly; if they do, you're effectively assuming that better performance at one bitrate compensates for worse performance at another bitrate. Even if assumption 1 does hold, this is typically not the case: e.g. if you really do need low quality and high quality versions of your images, and codec A beats codec B in overall BD-rate but actually at low quality A beats B while at high quality B beats A, then if the thing you want to optimize is total bandwidth or storage, and you have to use a single codec, then codec B might still be the better choice since the low quality images use less bytes anyway and savings in the high quality images have much more impact on the total size.
2024-04-26 10:01:22
CBR is used in low-latency streaming though (e.g. Twitch)
_wb_
2024-04-26 10:18:30
yes, for video it is much more relevant than for still images
2024-04-26 10:22:00
(and BD rates were originally introduced to assess video codecs, not still images — video concepts often get applied to still images a bit too "blindly"...)
2024-04-26 10:32:13
oh I forgot one other assumption of BD rates which is often not true in practice: 5. the codecs being compared all actually cover the full range of qualities of interest. Since only the region of overlap is considered for the BD rate computation, a codec can simply refuse to produce output for the quality range where it performs poorly and that will make it look better. For example, this is effectively what lossy webp does: since it has forced tv range 8-bit yuv420 and rather coarse minimum chroma quantization, it simply cannot reach visually lossless quality on some images — but from the point of view of BD rate, it doesn't get penalized for that: the highest qualities will just be ignored when computing the BD rate.
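A small sketch of how the overlap-only computation hides the missing range — the (quality, bpp) points here are hypothetical, purely to show the mechanism:

```python
# BD-rate is computed only over the overlap of the two codecs' quality
# ranges, so a codec can dodge its weak range entirely by refusing to
# produce output there.
codec_a = {60: 0.5, 70: 0.8, 80: 1.2, 90: 2.0, 98: 4.0}  # reaches visually lossless
codec_b = {60: 0.45, 70: 0.7, 80: 1.1}  # simply stops before the range where it's bad

overlap = sorted(set(codec_a) & set(codec_b))
print(overlap)  # [60, 70, 80] -- qualities 90 and 98, where A shines, are ignored
```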
Orum
2024-04-26 11:07:34
yeah that's why publishing BD rates is nonsensical
2024-04-26 11:07:56
at the very least show a graph over a range of BPP/bitrates
Nyao-chan
spider-mario brings this to mind:
2024-04-26 11:20:05
Probability theory the logic of science E. T. Jaynes in case anyone else was curious
spider-mario
2024-04-26 11:21:08
my idol and my bible
2024-04-26 11:21:22
have you also read it or did you somehow find that from the text?
Nyao-chan
2024-04-26 11:23:12
I've already admitted I did not know what the book was haha ("in case anyone else...") just put the "part 1 principles..." header into Google with exact match (quotation marks) and then a second exact match for "emperor"
2024-04-26 11:24:44
spider-mario
2024-04-26 11:25:38
oh, yeah, I somehow didn’t pick up on “else” meaning “another person who was curious” and not just “someone who is not me” 😁
2024-04-26 11:26:30
that second result is the full PDF (here is a link, with an anchor to the relevant page https://www.medicine.mcgill.ca/epidemiology/hanley/bios601/GaussianModel/JaynesProbabilityTheory.pdf#page=288)
2024-04-26 11:27:12
in the PDF on library genesis, the entries in the table of contents are links, unlike in this one – not sure why both PDFs exist
2024-04-26 11:27:41
(but this one still has the table of contents as PDF metadata as well, so it’s not bad either)
Nyao-chan
2024-04-26 11:27:47
does the book maybe touch on uncertainty? That's definitely a topic I'd like to brush up on
spider-mario
2024-04-26 11:27:56
very much so
2024-04-26 11:28:20
it’s arguably a central point
Nyao-chan
2024-04-26 11:29:31
glad to hear that. I did have the basic "type A / type B, some formulas" classes, but I feel it's a very overlooked and important field which I'd like to have an intuitive understanding of
2024-04-26 11:34:55
there's djvu on Anna's archive 👀 I do wonder why science/math books seem to use them
spider-mario
Nyao-chan glad to here that. I did have the basic type A type B, some formulas - classes, but I feel it's a very overlooked and important field which I'd like to have an intuitive understanding of
2024-04-26 11:36:19
ah, it seems you had a more specific meaning of “uncertainty” in mind than I did in that question, sorry
2024-04-26 11:36:59
although I do think you might find it interesting either way
Nyao-chan
2024-04-26 11:37:20
I meant the kind of thing they teach in physics classes, measurement errors, uncertainty analysis, that is using the measurements in calculations...
spider-mario
2024-04-26 11:37:27
his article on confidence intervals is another classic: https://sami.boo/jaynes/confidence-intervals-vs-bayesian-intervals/
Nyao-chan
2024-04-26 11:38:03
oh, I don't mean statistics
spider-mario
2024-04-26 11:39:22
even taking this into account? https://www.cambridge.org/core/books/probability-theory/9CA08E224FF30123304E6D8935CF1A99#fndtn-information > The standard rules of probability can be interpreted as uniquely valid principles in logic. In this book, E. T. Jaynes dispels the imaginary distinction between 'probability theory' and 'statistical inference', leaving a logical unity and simplicity, which provides greater technical power and flexibility in applications. […] The book will be of interest to scientists working in any area where inference from incomplete information is necessary.
Nyao-chan
2024-04-26 11:43:00
sorry, I did not see your message at the top and thought the link was a suggestion for the "uncertainty". I definitely plan to read the book; it seems interesting, and I like probability and statistics, they are incredibly useful
2024-04-26 11:55:58
I did find it strange that it's split this way. like splitting integration and differentiation into different fields
DZgas Ж
2024-04-26 12:32:04
👀
Nyao-chan
2024-04-26 12:59:13
hmm I don't think djvu supports text reflow? And it always looks so raster-y when zoomed in. Overall 6/10, quite decent. I mean it's better than just sticking images in for latex which is what they did in this book in the epub version. I do wish that and graphing was solved for epub already. and possibly without requiring javascript
2024-04-26 12:59:48
well it did not compress well. But if you squint, you get the idea, djvu is quite pleasant
spider-mario
2024-04-26 01:15:11
iirc, epub 3 supports math with mathml, and calibre’s reader seems to support rendering it
2024-04-26 01:15:40
I get decent-looking math, reading the second edition of “Physics of Digital Photography” in `ebook-viewer` from calibre
Nyao-chan
2024-04-26 01:32:56
I realise now that it's probably not djvu being raster-y but my viewer zooming in on an image. man that was dumb of me
spider-mario
2024-04-26 01:37:03
nah, AFAIK djvu is indeed raster-only
Nyao-chan
2024-04-26 01:37:12
wait. they use "pixmaps", which redirects to "bitmap (also called raster)" on wiki and you can select text in djvu
spider-mario
2024-04-26 01:37:24
yeah, raster with an invisible text layer
Nyao-chan
2024-04-26 01:38:41
I'm so confused. Why is it so small? I get that some decisions might have been designed for scanning documents
spider-mario
2024-04-26 01:38:48
> The JB2 encoding method identifies nearly identical shapes on the page, such as multiple occurrences of a particular character in a given font, style, and size. It compresses the bitmap of each unique shape separately, and then encodes the locations where each shape appears on the page. Thus, instead of compressing a letter "e" in a given font multiple times, it compresses the letter "e" once (as a compressed bit image) and then records every place on the page it occurs. > > Optionally, these shapes may be mapped to UTF-8 codes (either by hand or potentially by a text recognition system) and stored in the DjVu file. If this mapping exists, it is possible to select and copy text.
2024-04-26 01:39:07
(basically jxl patches)
Nyao-chan
spider-mario I get decent-looking math, reading the second edition of “Physics of Digital Photography” in `ebook-viewer` from calibre
2024-04-26 01:53:42
yeah, looks great with foliate too. that is so much more pleasant to read, when it makes effective use of the screen space
Demiurge
2024-04-27 02:57:49
Anyone can recommend a file extension to use specifically for JPEG XL files with JPEG reconstruction data?
2024-04-27 02:59:07
I want to distinguish them on my filesystem and I want something better than .jpg.jxl
eddie.zato
2024-04-27 05:23:51
2024-04-27 05:28:48
```
.t.jxl
.v.jxl
.m.jxl
```
Demiurge
2024-04-27 05:51:58
I was thinking maybe something with only 1 dot lol
2024-04-27 05:52:08
like .j1xl maybe
2024-04-27 05:52:14
.jbrd
2024-04-27 05:52:47
.jjxl
2024-04-27 05:53:17
Something like that... Any suggestions or opinions on a good convention to help identify these?
2024-04-27 05:54:29
Also it might be nice to have a filename convention to distinguish lossy files from lossless files.
2024-04-27 05:54:59
For that I use .png.jxl lmao, I would like to use something less silly
2024-04-27 05:55:25
Someone should help me think of a better looking suffix convention
VcSaJen
Demiurge I want to distinguish them on my filesystem and I want something better than .jpg.jxl
2024-04-27 05:57:16
.jpeg.jxl seems to be the best variant, as things stand right now. Linux was supposed to be extension-agnostic, but in practice file managers and image viewers use them.
eddie.zato
2024-04-27 05:58:54
I don't bother much and only use the exif comment “jpeg-transcode/vardct/modular”. Or `jxlinfo` for files without comment.
VcSaJen
2024-04-27 06:00:47
You can "extract" jpeg from jxl, so it seems the same convention as .extension.gz files
eddie.zato
2024-04-27 06:14:10
Since `jxlinfo` can give a good guess about the type of jxl I'm looking at, I've personally stopped using different extensions.
```
Name             Info
----             ----
1.transcode.jxl  JPEG XL image, 1920x1165, (possibly) lossless, 8-bit RGB, JPEG bitstream reconstruction data available
2.tr_nodata.jxl  JPEG XL image, 1920x1165, (possibly) lossless, 8-bit RGB
3.vardct.jxl     JPEG XL image, 1920x1165, lossy, 8-bit RGB
4.modular.jxl    JPEG XL image, 1920x1165, (possibly) lossless, 8-bit RGB
```
Demiurge
2024-04-27 06:14:23
I know... but I would prefer coming up with something that looks cooler with a single extension.
2024-04-27 06:14:51
Personally, my system doesn't care what the filename is, it's just for my own convenience and eyes
yoochan
Demiurge I want to distinguish them on my filesystem and I want something better than .jpg.jxl
2024-04-27 06:15:49
jjxl or gxl?
Demiurge
2024-04-27 06:16:19
Yeah, jjxl is kinda cool I think...
2024-04-27 06:16:24
What's gxl stand for?
yoochan
2024-04-27 06:17:41
jpeG jXL? 😅
Demiurge
2024-04-27 06:17:47
lol
eddie.zato
2024-04-27 06:18:02
Gif XL <:CatSmile:805382488293244929>
Demiurge
2024-04-27 06:18:25
any cool ideas for distinctive names for lossy jxl?
2024-04-27 06:19:31
maybe .djxl for "degraded"
2024-04-27 06:20:01
Or/and maybe .jxll for LOSSLESS!
2024-04-27 06:20:13
holy crap I'm a genius
2024-04-27 06:20:20
:)
LMP88959
2024-04-27 06:22:28
how do you guys pronounce JXL
2024-04-27 06:22:43
jay-exs-el? jexel?
eddie.zato
Demiurge Or/and maybe .jxll for LOSSLESS!
2024-04-27 06:23:03
.jxls for lossy? <:Hypers:808826266060193874>
Crite Spranberry
2024-04-27 06:42:37
It's one core api but still funny
username
2024-04-27 06:45:59
huh I forgot one core api was a thing. the last time me and my friends used it, the system time would just break and be unfixable, but it seems that's fixed now in newer versions, which is nice
Crite Spranberry
2024-04-27 06:46:27
Yeah it's gotten a lot more stable
2024-04-27 07:08:27
jxp
Oleksii Matiash
LMP88959 jay-exs-el? jexel?
2024-04-27 07:42:18
First one. Well, actually je-exs-el, where e is something between e and a in bed/bad
Demiurge
2024-04-27 07:43:57
what's one core?
username
Demiurge what's one core?
2024-04-27 07:50:12
https://github.com/Skulltrail192/One-Core-API-Binaries
spider-mario
Crite Spranberry It's one core api but still funny
2024-04-27 07:56:50
I thought XP was 5.1; that 6.1 was 7?
Crite Spranberry
spider-mario I thought XP was 5.1; that 6.1 was 7?
2024-04-27 07:57:33
Yeah, just fakes updated version so more programs work
2024-04-27 07:57:52
Or to be more exact this is XP x64/Server 2003 so it would be 5.2
Meow
LMP88959 how do you guys pronounce JXL
2024-04-27 08:33:42
Or J forty?
Crite Spranberry
2024-04-27 08:34:32
I just say J X L
LMP88959
Meow Or J forty?
2024-04-27 08:36:31
hahah
2024-04-27 08:36:36
J extra large
w
2024-04-27 08:37:25
It doesn't help that there's a jpeg xs
Demiurge
2024-04-27 09:37:42
I usually say jaypeg exel
2024-04-27 09:37:58
But I think jexel is becoming common
HCrikki
2024-04-27 09:41:29
jxl as one extension. jxls losslessly generated from a jpg can be instantly identified, and a utility can manage them for you if one's interested in moving them to separate folders or appending to their *name*
Quackdoc
2024-04-27 09:41:47
be a chad and always call it `joint photographic experts group X L`
Orum
2024-04-27 09:44:22
s/X L/Extra Large/
Meow
2024-04-27 09:48:10
JPEG 40
spider-mario
2024-04-27 09:50:13
as far as I can tell, the libjxl devs have been pronouncing it “jay excel”
VcSaJen
2024-04-27 10:00:34
JXL support in MS Excel when??
Quackdoc
2024-04-27 10:34:45
doesn't excel use platform decoder?
Demiurge
HCrikki jxl as one extension. jxls losslessly generated from a jpg can be instantly identified and a utility manage them for you if one's interested moving them to separate folders or append their *name*
2024-04-27 10:45:58
I think it would be cool if multiple extensions became trendy.
2024-04-27 10:47:30
I especially like `.jxll` for lossless lol
spider-mario
2024-04-27 10:52:18
I like `.jxlsx` for JXL serialised as zipped xml
Meow
spider-mario I like `.jxlsx` for JXL serialised as zipped xml
2024-04-27 11:25:51
JXL s**ks
Orum
2024-04-27 02:06:05
boy I would love to use JXL as a proxy codec, but ffmpeg doesn't seem to support it in any containers <:FeelsSadMan:808221433243107338>
2024-04-27 02:13:07
Don't they just need to define a tag to make it work? Is there any reason there isn't a FourCC for JXL?
2024-04-27 02:14:30
`JXL1` isn't taken AFAIK <:KekDog:805390049033191445>
Nyao-chan
2024-04-27 03:45:02
no love for jixel
lonjil
2024-04-27 03:50:41
jexel
2024-04-27 03:51:04
like texel but it's j
Quackdoc
Orum boy I would love to use JXL as a proxy codec, but ffmpeg doesn't seem to support it in any containers <:FeelsSadMan:808221433243107338>
2024-04-27 10:44:23
I have a patch for it, but iirc ffmpeg is coming up with one
Orum
2024-04-27 10:46:32
o rly? Got a link?
Quackdoc
2024-04-27 10:48:35
uhhhhh maybe? ill check later, but it's just adding a fourcc to riff.c
Orum o rly? Got a link?
2024-04-28 01:04:12
```patch
diff --git a/libavformat/riff.c b/libavformat/riff.c
index df7e9df31b..16e37fb557 100644
--- a/libavformat/riff.c
+++ b/libavformat/riff.c
@@ -34,6 +34,7 @@
  * files use it as well.
  */
 const AVCodecTag ff_codec_bmp_tags[] = {
+    { AV_CODEC_ID_JPEGXL, MKTAG('J', 'X', 'L', ' ') },
     { AV_CODEC_ID_H264, MKTAG('H', '2', '6', '4') },
     { AV_CODEC_ID_H264, MKTAG('h', '2', '6', '4') },
     { AV_CODEC_ID_H264, MKTAG('X', '2', '6', '4') },
```
2024-04-28 01:04:22
thats all thats needed
2024-04-28 01:05:00
just use `pkgctl repo clone ffmpeg` then add the patch and rebuild
Orum
2024-04-28 01:07:40
I meant a link to where it was going to be added into the official ffmpeg
2024-04-28 01:08:04
also shouldn't the tag not include spaces?
2024-04-28 01:08:21
JXL1 seems more appropriate
Quackdoc
2024-04-28 01:09:52
I dont think anyone has submitted a patch to upstream ffmpeg
Orum
2024-04-28 01:13:42
did you write the patch or did someone else?
Quackdoc
2024-04-28 01:14:22
I did, its a simple patch though
Orum
2024-04-28 01:15:28
sure, but why not submit it upstream?
2024-04-28 01:16:36
...also why is X264 a tag for H264?
Quackdoc
2024-04-28 01:19:45
there are a lot of tags https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/riff.c
Orum
2024-04-28 01:21:03
ah yes, standards are wonderful--everyone should have their own
Quackdoc
2024-04-28 01:21:47
these aren't really "standards"
Orum
2024-04-28 01:22:33
that's the problem
_wb_
Orum Don't they just need to define a tag to make it work? Is there any reason there isn't a FourCC for JXL?
2024-04-28 01:31:47
The jxl container format (18181-2) does define a file format based on ISOBMF-style boxes. The fourcc names are also on https://mp4ra.org/registered-types/boxes
2024-04-28 01:32:52
We have "JXL " for the signature box, and "jxlc" for a (full) payload codestream box.
_wb_ Bjöntegaard Delta is not a useless idea but it makes some crucial assumptions that are often not true in practice: 1. the range of bitrates you look at is actually relevant for the use case, and uniformly so (in logspace). Often BD rates are computed for much lower bitrates than what is relevant for the use case, and since the integration is over log-bitrate, those low bitrates will have a large weight in the final result — e.g. if the typical bitrate is 1.5 bpp and you look at the range from 0.1 bpp to 3 bpp to compute BD rates, then that may seem like a reasonable thing to do (since 1.5 bpp looks like it is nicely in the middle of that interval), but it really isn't: in the BD rate computation, the range from 0.1 to 0.4 bpp will get the same weight as the range from 0.75 to 3 bpp, so your final number will be mostly determined by the lowest bitrates. 2. the metric outputs numbers on a scale that indicates quality in a "uniform" way (an additive change to a metric score by +x always has the same amount of "perceptual impact", so to speak). Most metrics don't work like that at all, e.g. going from VMAF 50 to VMAF 55 is not at all the same amount of "change in quality" as going from VMAF 90 to VMAF 95. Most metrics will tend to devote a much larger range to the heavy distortions than to the "reasonable quality" range, e.g. all reasonable quality images will have SSIM > 0.95 so basically the whole range from 0 to 0.95 consists of various degrees of crappy images and only the top range of 0.95 to 1 actually matters. The scales of subjective quality (e.g. MOS scores, or even JNDs: going from 0.5 to 1.5 JND is not quite the same thing as going from 10 JND to 11 JND) are also not uniform scales, and metrics are often validated only based on rank correlation anyway, which doesn't care about the numerical scale. Again, this tends to cause the lowest qualities to be over-represented in the final BD rate result.
2024-04-29 12:09:10
See also: https://arxiv.org/abs/2401.04039
Jyrki Alakuijala
_wb_ We have "JXL " for the signature box, and "jxlc" for a (full) payload codestream box.
2024-04-29 01:57:42
The experts in the JPEG committee asked for ISOBMF -- I would have not added it on my own 😄 I think it is bloaty 😄
_wb_
2024-04-29 02:01:08
It's not as bloaty as PNG, to be fair. It's basically the same thing — chunks and boxes both have a 4-byte name and a 4-byte length, but in PNG they also added an obligatory 4-byte CRC to their chunks.
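The framing-overhead comparison works out like this — a trivial sketch, where the 12- and 8-byte figures follow directly from the 4-byte fields just described:

```python
# Per-unit framing overhead, in bytes:
PNG_CHUNK_OVERHEAD = 4 + 4 + 4   # length + type + mandatory CRC = 12
ISOBMFF_BOX_OVERHEAD = 4 + 4     # size + type = 8 (ignoring the 64-bit largesize case)

print(PNG_CHUNK_OVERHEAD - ISOBMFF_BOX_OVERHEAD)  # 4 extra bytes per PNG chunk
```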
2024-04-29 02:03:23
I think we made a good compromise by defining a raw codestream to be a valid jxl file on its own, without the ISOBMF container. The container adds 40 bytes of overhead iirc, which is still not too bad compared to HEIF/AVIF which have a lot more obligatory boxes with verbose nonsense in them.
Orum
2024-04-29 02:04:01
oh yeah, the AVIF header is huge <:FeelsSadMan:808221433243107338>
Quackdoc
2024-04-29 03:01:46
the tag would be nice to have in ffmpeg, but as far as I am aware, no one but me is actually using it in a video editor [av1_dogelol](https://cdn.discordapp.com/emojis/867794291652558888.webp?size=48&quality=lossless&name=av1_dogelol)
Jyrki Alakuijala
2024-04-30 08:15:14
there are about three kinds of compression people -- those who compress, those who'd like to wrap a compression stream into a thick wrapper because pure compression is 'dirty', and those who'd like to discuss the names of the compression algorithms
username
2024-04-30 09:20:31
huh it seems like newer versions of Firefox with the unmerged JXL patches now properly tonemap HDR images, I assume this is due to some change in newer versions of libjxl? I'm testing with [r3dfox](https://eclipse.cx/projects/r3dfox.htm) on this page: https://people.csail.mit.edu/ericchan/hdr/hdr-jxl.php
2024-04-30 01:03:49
<@549009613710819331> I was doing some comparisons between JXL patches for Firefox/Gecko and I noticed the updated version of the patch from LibreWolf *seems* to have a typo introduced via copy and pasting. does r3dfox still compile and work fine with this typo reverted?
2024-04-30 01:26:11
not 100% sure about the code implications but it seems like this typo might break caching in some form?
Crite Spranberry
username <@549009613710819331> I was doing some comparisons between JXL patches for Firefox/Gecko and I noticed the updated version of the patch from LibreWolf *seems* to have a typo introduced via copy and pasting. does r3dfox still compile and work fine with this typo reverted?
2024-04-30 11:08:53
Yeah I caught that and fixed it in a non-main branch https://github.com/Eclipse-Community/r3dfox/commit/f119f58a9af746165d28780e389470ee39e31a0c
Traneptora
Orum sure, but why not submit it upstream?
2024-05-02 01:46:45
it's unlikely to be merged
HCrikki
2024-05-02 02:07:28
cant it be merged by volunteers with commit access to nightly or a sympathetic mozco employee overseeing nightly?
Orum
2024-05-02 02:10:35
anyway it's sure to be unmerged if no one submits it...
2024-05-02 02:10:54
worst case is they simply reject it
HCrikki
2024-05-02 02:14:09
id say there needs to be sympathetic people staying in touch about jxl too so stuff doesnt get rejected all the time with zero review cuz noone understands this new thing
2024-05-02 02:15:01
merge doesnt depend on standard position responders' opinion
Orum
2024-05-02 02:18:38
well the fact that they added support for it means there is some amount of attention & understanding of it
HCrikki
2024-05-02 02:28:53
firefox still lacks proper color management enabled out of the box; they keep referencing passing an old flawed test they never updated since. odd how they claim to support stuff they in reality dont, or whose code they dont build, like jxl in stable
Traneptora
HCrikki cant it be merged by volunteers with commit access to nightly or a sympathetic mozco employee overseeeing nightly?
2024-05-02 02:40:19
yes, I could merge it, but I won't merge a commit that isn't going through review
2024-05-02 02:40:38
and I don't expect it'll get approved
HCrikki
2024-05-02 02:49:55
then it's reviewers we'd need to get this in front of. why wouldn't it be approved? is there an explicit total hard block based on irrelevant criteria, regardless of code quality or reviews?
2024-05-02 02:52:13
nightly gets tons of unimportant changes merged experimentally since thats where development happens so an updated version of the now ancient jxl patch seems like a no brainer refresh
username
2024-05-02 02:57:37
you both are talking about two different projects
2024-05-02 03:00:18
seems like one of you is talking about FFmpeg while the other is talking about Firefox
Quackdoc
2024-05-02 04:08:52
It would be nice to get both qoi and jpegxl to have bmp tags. but considering im the only one seemingly producing these files, I doubt it's a high priority [av1_dogelol](https://cdn.discordapp.com/emojis/867794291652558888.webp?size=48&quality=lossless&name=av1_dogelol)
Orum
2024-05-02 05:38:05
I doubt qoi would do better than other comparable lossless codecs (e.g. ffvhuff/utvideo)
2024-05-02 05:46:39
anyway, submit the patch <:PeepoDiamondSword:805394101340078092>
Quackdoc
2024-05-02 05:58:06
my mail client is broken and im not using gmail directly lol
Orum
2024-05-02 06:11:24
install thunderbird then <:Thonk:805904896879493180>
2024-05-02 06:11:39
don't they have some form of MR/PR that doesn't involve email?
Quackdoc
2024-05-02 06:30:32
im not touching thunderbird with a 20ft pole
2024-05-02 06:30:38
also none that I am aware of
Crite Spranberry
2024-05-02 09:52:49
https://o.rthost.win/boc-uxp/mailnews.win32-20240427-7bda12e7-uxp-92e1373e8c-xpmod.7z
2024-05-02 09:52:53
<:t_:1088450292095389706>
Traneptora
Orum I doubt qoi would do better than other comparable lossless codecs (e.g. ffvhuff/utvideo)
2024-05-04 12:12:04
qoi doesn't beat ffv1 in density or utvideo in speed, but it does sit in the middle on the pareto front. loses to lossless jxl easily though
Orum
2024-05-04 01:31:15
that doesn't surprise me, but all the more reason we need some way to use JXL for video
Meow
Orum that doesn't surprise me, but all the more reason we need some way to use JXL for video
2024-05-04 01:55:26
Like how JPEG 2000 evolved?
Orum
2024-05-04 01:56:04
<:WTF:805391680538148936> this is a container issue, not a codec issue
yoochan
2024-05-04 02:00:16
Like a MJXL format ? Which could do both lossy and lossless videos 🤔
spider-mario
2024-05-04 03:16:00
should be at least a decent lossless option, I imagine
2024-05-04 03:16:16
(maybe the main weak point would be decode speed unless optimised)
2024-05-04 03:16:29
possibly also a good lossy all-intra codec for editing and the like
2024-05-04 03:16:41
where not having interframe compression is actually a feature
Quackdoc
spider-mario possibly also a good lossy all-intra codec for editing and the like
2024-05-04 03:26:05
this is how i've been using it. `cjxl -e 6 -d 0.5 --faster_decoding=4` the graphs were made with the ssimulacra2rs tool, top is jxl, bottom is prores_ks with the hq profile https://files.catbox.moe/8m6glg.png https://files.catbox.moe/1zxjul.png
2024-05-04 03:26:38
this was created using the first bit of the chimera test clip iirc
2024-05-04 03:27:18
I was actually disk I/O bottlenecked so this was a large improvement for me
VcSaJen
Orum that doesn't surprise me, but all the more reason we need some way to use JXL for video
2024-05-05 04:55:33
PNG sequences are frequently used. There's no reason to not use JXL sequences.
Orum
2024-05-05 06:01:44
PNG is lossless only though, which is a problem for several reasons, namely speed and bitrate
2024-05-05 06:03:14
and even if I wanted to use it for lossless, JXL is both lower bitrate and faster decode:
```
convert lossless.png PPM:- > /dev/null
  Time (mean ± σ):     506.5 ms ±  19.1 ms    [User: 477.7 ms, System: 28.2 ms]
  Range (min … max):   477.2 ms … 548.7 ms    10 runs

convert lossless.jxl PPM:- > /dev/null
  Time (mean ± σ):     339.0 ms ±   7.3 ms    [User: 4831.4 ms, System: 165.6 ms]
  Range (min … max):   330.8 ms … 349.1 ms    10 runs

convert lossy.jxl PPM:- > /dev/null
  Time (mean ± σ):     242.6 ms ±   6.9 ms    [User: 793.2 ms, System: 120.6 ms]
  Range (min … max):   235.3 ms … 257.6 ms    11 runs
```
sizes:
```
31728053 ./lossless.png
22939929 ./lossless.jxl
 3133588 ./lossy.jxl
```
VcSaJen PNG sequences are frequently used. There's no reason to not use JXL sequences.
2024-05-05 06:07:36
or are you talking about sequences as in JXL's internal multi-frame mode where it just overlays a new delta over the previous image?
VcSaJen
2024-05-05 06:16:43
"There's no reason to not use JXL sequences" approximately means "there's a reason to use JXL sequences".
Orum
2024-05-05 06:20:29
what do you mean by "sequences" though?
2024-05-05 06:21:10
Separate images? The internal animation in JXL? Something else entirely?
Quackdoc
VcSaJen PNG sequences are frequently used. There's no reason to not use JXL sequences.
2024-05-05 08:06:05
few people actually use PNG seqs. Tiff and EXR are the most common.
2024-05-05 08:09:37
The most common use of PNG sequences I have seen is image processing stuff like upscalers and denoisers, and typically you convert the sequence when done
VcSaJen
2024-05-05 09:22:52
Blender folks use it
Quackdoc
2024-05-05 09:37:18
thats actually horrid, is that seriously still default?
2024-05-05 09:44:43
this hurts, massive kekw that they recommend png for img -> video. also png should be a ✓(*) or something xD
Orum
2024-05-05 02:37:18
the problem with image sequences is few if any editors support it, and even when they do, it's slow
Quackdoc
2024-05-05 02:55:52
thats why you mux them
2024-05-05 02:57:16
when you mux them into a container you are no longer performing a massive amount of opening new files, random io etc, so even just copying a png or tiff sequence into an MKV can lead to a massive performance increase
Orum
2024-05-05 03:23:21
yeah, hence the complaint about not being able to mux <:JXL:805850130203934781> ... we've come full circle
Quackdoc
2024-05-05 03:44:10
ffmpeg needs a tarball demuxer [chad](https://cdn.discordapp.com/emojis/1070464111114211439.webp?size=48&quality=lossless&name=chad)
spider-mario
2024-05-05 05:13:32
tar would seem like an awful choice for this, given it doesn’t have an index
2024-05-05 05:13:34
https://en.wikipedia.org/wiki/Tar_(computing)#Random_access
2024-05-05 05:14:08
right, what’s mentioned in the second paragraph should somewhat mitigate this
username
2024-05-05 05:21:49
I just got reminded of this: https://sozip.org/
Quackdoc
spider-mario right, what’s mentioned in the second paragraph should somewhat mitigate this
2024-05-05 06:27:08
its for sure not ideal, but it is better than spamming a whole bunch of new file opens. I did have a POC with ffmpeg at one point, and it was ok, not great, but usable
hungerf3
2024-05-06 01:39:51
If I've found jpeg images that don't decode with djpegli but do with djpeg, is there anywhere I should submit them as a sample for troubleshooting?
2024-05-06 01:44:24
Looks like I'm hitting "decode_marker.cc:108: Quantization table with index 0 not found"
2024-05-06 01:45:27
From EXIF data, it looks like they were generated by software for a Canon 8800F scanner back in 2017
jonnyawsom3
hungerf3 Looks like I'm hitting "decode_marker.cc:108: Quantization table with index 0 not found"
2024-05-06 02:08:42
https://github.com/libjxl/libjxl/issues/3486
Demiurge
Orum and even if I wanted to use it for lossless, JXL is both lower bitrate and faster decode: ``` convert lossless.png PPM:- > /dev/null Time (mean ± σ): 506.5 ms ± 19.1 ms [User: 477.7 ms, System: 28.2 ms] Range (min … max): 477.2 ms … 548.7 ms 10 runs convert lossless.jxl PPM:- > /dev/null Time (mean ± σ): 339.0 ms ± 7.3 ms [User: 4831.4 ms, System: 165.6 ms] Range (min … max): 330.8 ms … 349.1 ms 10 runs convert lossy.jxl PPM:- > /dev/null Time (mean ± σ): 242.6 ms ± 6.9 ms [User: 793.2 ms, System: 120.6 ms] Range (min … max): 235.3 ms … 257.6 ms 11 runs ``` sizes: ``` 31728053 ./lossless.png 22939929 ./lossless.jxl 3133588 ./lossy.jxl ```
2024-05-06 04:23:18
doesn't this show jxl being a lot slower?
2024-05-06 04:23:21
than png
Orum
2024-05-06 04:23:59
huh <:WhatThe:806133036059197491>
2024-05-06 04:24:05
look at it again
Demiurge
2024-05-06 04:24:24
User and System time show a lot more CPU time
Orum
2024-05-06 04:24:35
yeah, I don't care about CPU time, I care about real time
Demiurge
2024-05-06 04:25:10
You're comparing single core to multiple cores
2024-05-06 04:25:15
That's a bad comparison
Orum
2024-05-06 04:25:26
it is what the libraries provide 🤷‍♂️
Demiurge
2024-05-06 04:25:44
it actually uses less CPU time to decode JXL on 1 thread
Orum
2024-05-06 04:25:52
if there's a MT PNG decoder, I don't know about it
Demiurge
2024-05-06 04:25:59
meaning that you can decode multiple images at a time, one on each core, for the fastest speed
2024-05-06 04:26:05
That equals real time savings
2024-05-06 04:26:14
and even more savings if you are using PNG instead
2024-05-06 04:26:18
since PNG decodes faster
Orum
2024-05-06 04:26:23
I don't need to decode multiple at the same time, I need to decode one image as quickly as possible
Demiurge
2024-05-06 04:26:54
Uh aren't you talking about image sequences? Sounds like you DO in fact need to decode multiple images at the same time then.
Orum
2024-05-06 04:27:09
again, define what you mean by "image sequence"
Demiurge
2024-05-06 04:27:53
Usually it means a folder filled with .png files
2024-05-06 04:28:13
It's foolish of you to think CPU time doesn't matter. It matters a lot.
Orum
2024-05-06 04:28:59
well a folder filled with images won't work for my use case
w
2024-05-06 04:29:05
yeah it's the difference between being able to use your computer or not while it's decoding
Orum
2024-05-06 04:29:20
which is the whole reason I want to mux JXL into a container
w
2024-05-06 04:29:30
jxl being more than 20x slower sounds like a shitty experience
Demiurge
2024-05-06 04:29:48
multithreaded JXL decoding is not even the most efficient way to decode most of the time, since you can decode a folder full of JXL images faster if you decode multiple images in parallel instead of one image at a time in parallel.
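[Editor's note: the scheduling idea Demiurge describes — one single-threaded decode per core, rather than one multi-threaded decode at a time — can be sketched with the stdlib. `decode_one` is a placeholder; a real batch job would run e.g. one `djxl` subprocess per file.]

```python
import os
from concurrent.futures import ThreadPoolExecutor

def decode_one(path):
    # Placeholder for a single-threaded decode of one file, e.g.
    # subprocess.run(["djxl", path, path + ".ppm"]); each worker
    # owns a whole file, so cores never contend over one image.
    return path + ".ppm"

def decode_batch(paths):
    # N independent decodes in flight at once: better total (real-time)
    # throughput for a folder of images than decoding one image with
    # N threads, at the cost of latency for any single image.
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(decode_one, paths))
```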
2024-05-06 04:30:11
Faster in real time
Orum
2024-05-06 04:30:20
PNG also makes no sense, as it'd be far worse than what I'm using now
w
2024-05-06 04:30:41
that's why you should use webp
Orum
2024-05-06 04:30:56
can webp be muxed into a video container? <:Thonk:805904896879493180>
2024-05-06 04:31:31
I suppose at that point I would just use intra-only VP8/9
Demiurge
2024-05-06 04:31:32
yeah it's called VP8 lol
2024-05-06 04:31:40
but don't ever use webp
Orum
2024-05-06 04:31:52
it would work but the quality would be crap
w
2024-05-06 04:32:36
you hate webp so much but it's like the direct predecessor to jxl
Demiurge
2024-05-06 04:33:18
you mean avif
w
2024-05-06 04:33:23
nop
Demiurge
2024-05-06 04:34:01
orum, sounds like you want ffv1 maybe, or even x264 or x265 in lossless mode (very fast)
2024-05-06 04:34:29
But I think JXL also supports having multiple pages
Orum
2024-05-06 04:34:30
FFV1 is too slow and too large, and H.264/5 intra only kind of sucks (both in speed and efficiency)
Demiurge
2024-05-06 04:35:04
Last time I tried lossless x264/5 it was faster than ffv1
2024-05-06 04:35:08
and smaller
Orum
2024-05-06 04:35:12
again, not looking for lossless
2024-05-06 04:35:21
bitrate is too high then
Demiurge
2024-05-06 04:35:46
Oh... what are you looking for then, MJPEG?
w
2024-05-06 04:35:53
what even are you asking for
2024-05-06 04:35:56
a video format?
Demiurge
2024-05-06 04:36:06
what's your use case exactly
Orum
2024-05-06 04:36:39
I'm looking for high quality lossy + intra-only + "video" (i.e. a single file container, not multiple files, as I need to use it in an editor) + fast decoding
w
2024-05-06 04:36:51
just use h264
Orum
2024-05-06 04:37:19
h264 is not very good at intra-only or fast decoding (even when using CAVLC)
w
2024-05-06 04:37:25
youre comparing it to jxl
2024-05-06 04:37:37
jxl is measured in seconds per frame
2024-05-06 04:37:44
normal video formats are measured in frames per second
2024-05-06 04:37:46
get real
Orum
2024-05-06 04:38:34
in this terrible use case (i.e. using convert) it's faster than anything else: https://discord.com/channels/794206087879852103/794206170445119489/1236558809049206905
w
2024-05-06 04:38:47
i dont see the video formats
Orum
2024-05-06 04:38:57
because imagemagick doesn't handle them AFAIK
w
2024-05-06 04:39:02
<:retard:628773674769449000>
jonnyawsom3
2024-05-06 04:39:17
Let me dig up some messages
Orum
2024-05-06 04:39:38
I can test with ffmpeg but that's even more bloated
jonnyawsom3
Quackdoc finally got it, using jxl as an alternative to prores in olive, caching is disabled quality is d0.5 which I find to be roughly the same as prores_ks hq e3 ``` 2.9G quack 8 Jun 06:23 chim-jxk.mkv 7.3G quack 8 Jun 06:09 chim-pro.mov ``` https://cdn.discordapp.com/attachments/587033245061873759/1116322453786984458/olive-jxl.mp4
2024-05-06 04:40:51
The old JXL Video thread
Orum
2024-05-06 04:56:27
``` ffmpeg -i h264.mkv -f null - > /dev/null Time (mean ± σ): 633.2 ms ± 10.6 ms [User: 513.8 ms, System: 123.5 ms] Range (min … max): 620.6 ms … 649.7 ms 10 runs ffmpeg -i lossy.jxl -f null - > /dev/null Time (mean ± σ): 553.3 ms ± 13.8 ms [User: 934.9 ms, System: 695.6 ms] Range (min … max): 529.4 ms … 572.5 ms 10 runs ``` H.264 slower <:PepeHands:808829977608323112>
2024-05-06 04:57:03
that's with `-tune fastdecode` and roughly equal quality to JXL too (JXL actually looks a little better, but they're at least close)
2024-05-06 05:05:34
also this is with the default `--faster_decoding` (i.e. 0) in jxl
2024-05-06 05:07:39
also I should note that JXL encoded about 3 times faster than x264 here as well (though I could up the presets on either x264 or JXL)
2024-05-06 05:10:07
JXL is also 1/4th of the bitrate of H.264 😂
2024-05-06 05:12:11
I can try with CABAC if you want, which would help the bitrate but kill the decode speed
2024-05-06 06:09:47
faster_decoding 4 speeds it up some with almost no perceptible loss in quality (and bitrate is actually lower): ``` ffmpeg -i lossy.jxl -f null - > /dev/null Time (mean ± σ): 333.7 ms ± 15.8 ms [User: 678.3 ms, System: 93.6 ms] Range (min … max): 308.5 ms … 361.3 ms 10 runs ```
2024-05-06 06:10:12
almost twice as fast as H.264 now <:FeelsAmazingMan:808826295768449054>
2024-05-06 06:12:34
man I wish I could get this usable in ffmpeg with either nut or mkv
w
2024-05-06 06:13:48
how many frames is that?
Orum
2024-05-06 06:14:27
just 1 as I don't have any way of putting multiple frames in a single container with JXL (except for the built-in JXL animation, which is useless for this use case)
w
2024-05-06 06:14:33
what a terrible comparison
Orum
2024-05-06 06:15:01
well there's no way to compare multiple frames right now without using multiple JXL files
2024-05-06 06:15:06
which as I said, isn't an option
w
2024-05-06 06:15:24
so how can you come to a conclusion
2024-05-06 06:15:25
this is so stupid
2024-05-06 06:15:30
you deserve an award for that
Orum
2024-05-06 06:15:48
it's the same scenario: one frame of JXL vs one frame in H.264
2024-05-06 06:17:03
also, H.264 doesn't support HDR like JXL does 😮‍💨, so it's way more limited
w
2024-05-06 06:17:47
that's not even comparing the decode speeds
2024-05-06 06:17:59
i give up
Orum
2024-05-06 06:18:19
how is it not comparing decode speed?
2024-05-06 06:31:27
MJPEG decodes in about 2/3rds of the time of JXL, but is nearly double the bitrate for the same quality
2024-05-06 06:32:46
...and of course, no HDR support
_wb_
Orum just 1 as I don't have any way of putting multiple frames in a single container with JXL (except for the built-in JXL animation, which is useless for this use case)
2024-05-06 07:15:41
Why is jxl animation useless for this use case? There is no real tooling yet, but the spec does define an optional frame index box that can be used to do seeking...
Orum
2024-05-06 07:17:07
because it's for a proxy clip, where seeking speed is paramount
2024-05-06 07:17:31
and IIRC to use the JXL animation you need to decode all frames before the frame you're seeking to
2024-05-06 07:18:14
...or at least, that is what I was told. Correct me if I am wrong.
veluca
2024-05-06 07:34:46
you are wrong 😉 that's what the frame index box is for
Orum
2024-05-06 07:35:51
so what is a frame index box then?
2024-05-06 07:36:55
basically the equivalent of a keyframe interval?
Tirr
2024-05-06 07:38:49
jxl can do intra frame (`resets_canvas = true` condition in the spec), those frames can be signalled using frame index box
Orum
2024-05-06 07:40:11
🤔 okay, so in cjxl I set `--frame_indexing=0`? 1?
Tirr
Tirr jxl can do intra frame (`resets_canvas = true` condition in the spec), those frames can be signalled using frame index box
2024-05-06 07:40:53
well I think it's a bit more than `resets_canvas = true` but anyway it can do intra
2024-05-06 07:41:42
I haven't done encoding with frame index box, I guess it's `--frame_indexing=1`
Orum
2024-05-06 07:42:43
the help doesn't explain it well at all 😖
Tirr
2024-05-06 07:42:51
no, it's "mark 1 if you want that frame to be intra, 0 otherwise"
2024-05-06 07:43:02
... I guess
Orum
2024-05-06 07:43:15
so if I have 100 frames I need 100 `1`s in a row?!
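[Editor's note: if cjxl's `--frame_indexing` really takes one 0/1 flag per frame, as this exchange suggests (an assumption based on the discussion, not verified against the docs), then generating the string is trivial. `keyframe_flags` below is a hypothetical helper marking every Nth frame as a seek point.]

```python
def keyframe_flags(n_frames, interval):
    # "1" = frame goes in the frame index box (seekable intra frame),
    # "0" = not indexed; every `interval`-th frame is marked.
    return "".join(
        "1" if i % interval == 0 else "0" for i in range(n_frames)
    )

flags = keyframe_flags(10, 4)  # "1000100010"
# hypothetical use: cjxl --frame_indexing=1000100010 in.gif out.jxl
```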