JPEG XL

jxl

Anything JPEG XL related

JendaLinda
2024-01-19 08:25:33
Speaking of lossy/lossless, jxlinfo reports transcoded jpegs as "possibly lossless", that's kinda contradictory.
2024-01-19 08:27:24
I mean, the original jpegs were not lossless at all.
_wb_
2024-01-19 08:29:48
They can be the most lossless version of the image, e.g. when coming straight out of a camera.
JendaLinda
2024-01-19 09:11:56
Technically, taking a photo is a lossy operation.
Oleksii Matiash
JendaLinda Technically, taking a photo is a lossy operation.
2024-01-19 09:12:38
It is not in jxl's scope
JendaLinda
2024-01-19 09:12:58
That's fair.
2024-01-19 09:14:41
Anyway, I suppose cjxl cannot transcode those obscure lossless jpeg variants; those would be better to compress using regular modular anyway.
JKGamer69
2024-01-19 02:53:18
How do I batch convert multiple png files to lossless jxl files at once? Will it reduce the size to kbs?
Oleksii Matiash
JKGamer69 How do I batch convert multiple png files to lossless jxl files at once? Will it reduce the size to kbs?
2024-01-19 03:33:46
https://github.com/kampidh/jxl-batch-converter
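(or, if you'd rather skip the GUI, a plain shell loop does the same job; a minimal sketch, assuming cjxl is on your PATH:)
```bash
# convert every PNG in the current directory to lossless JXL
for f in *.png; do
  cjxl -d 0 "$f" "${f%.png}.jxl"
done
```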
JKGamer69
2024-01-19 03:43:29
And the other question?
Oleksii Matiash
JKGamer69 And the other question?
2024-01-19 03:46:47
Regarding size reduction? Who knows 🤷‍♂️
_wb_
JendaLinda Technically, taking a photo is a lossy operation.
2024-01-20 12:23:12
Would be funny if it wasn't, and photography was a reversible process where you could completely reconstruct reality from a picture. I guess that's a Star Trek teleporter or replicator then...
jonnyawsom3
2024-01-20 12:51:54
Lightfield cameras weren't far off ;P
JKGamer69
Oleksii Matiash https://github.com/kampidh/jxl-batch-converter
2024-01-20 01:26:11
What are the fastest settings for lossless?
Traneptora
JKGamer69 What are the fastest settings for lossless?
2024-01-20 01:31:19
fastest setting is the lowest effort, which is 1
2024-01-20 01:31:32
higher effort will be slower, but more efficient
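for example (a sketch; -e is short for --effort, -d 0 requests lossless):
```bash
cjxl -d 0 -e 1 input.png output.jxl   # fastest, usually larger files
cjxl -d 0 -e 9 input.png output.jxl   # much slower, smaller files
```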
JKGamer69
2024-01-20 01:32:18
For a single image, how long will it take at effort 1?
Traneptora
2024-01-20 01:32:32
depends on how big the image is
JKGamer69
2024-01-20 01:33:45
4.27 MB
Traneptora
2024-01-20 01:34:04
in pixel dimensions, not in megabytes
2024-01-20 01:34:27
but in either case
2024-01-20 01:34:31
it will be faster to just try it and see than it will be to ask how long it will take
Cool Doggo
2024-01-20 01:34:31
unless your image is massive it will likely be pretty quick anyways
JKGamer69
2024-01-20 01:34:37
1920x1080
Traneptora
2024-01-20 01:34:43
why don't you just try it and see
2024-01-20 01:34:56
you're looking at less than 1 second probably but it's not clear exactly how fast it'll be
2024-01-20 01:35:07
and if you are batch converting thousands of images the difference between 0.1s and 0.9s does matter
JKGamer69
2024-01-20 01:35:41
For Distance and Quality?
Traneptora
2024-01-20 01:36:02
distance is basically the quality setting; you have to set it to 0 if you want lossless
2024-01-20 01:36:10
anything nonzero is lossy
_wb_
2024-01-20 01:36:42
e1 lossless is very fast, should be 0.01 seconds or so for that image size
Traneptora
2024-01-20 01:36:58
0.01s? that doesn't sound right
_wb_
2024-01-20 01:37:18
200 Mpx/s in other words
Traneptora
2024-01-20 01:37:39
when I tested it I got 0.2s but I'm converting from PNG
2024-01-20 01:37:46
so probably most of that is spent in PNG decode
2024-01-20 01:37:53
that makes more sense
2024-01-20 01:38:20
e1 lossless *is* very fast but it's also not significantly better than PNG at efficiency so I find that it's usually not worth it
JKGamer69
2024-01-20 01:38:26
What if I set Quality to 100?
_wb_
2024-01-20 01:38:30
Yeah 200 Mpx would be for just the encode and assuming the binary is already loaded etc
Traneptora
2024-01-20 01:38:33
quality and distance are linked
_wb_
2024-01-20 01:38:39
q100 is the same as d0
Traneptora
2024-01-20 01:38:40
if you set the quality to 100 that does exactly the same thing as setting distance to 0
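i.e. these two are interchangeable (sketch):
```bash
cjxl -q 100 input.png output.jxl
cjxl -d 0   input.png output.jxl
```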
jonnyawsom3
JKGamer69 For a single image, how long will it take at effort 1?
2024-01-20 01:39:32
For 1080p, it's 0.1 seconds
2024-01-20 01:39:45
*For me at least on a fairly old CPU
HCrikki
2024-01-20 01:44:41
depends on the image's resolution, not the desired final quality. effort 1 is almost instant, but maybe only preferable for lossless results with no concern for filesize, like screenshots. for larger dimensions, lower effort gives almost instant output
JKGamer69
2024-01-20 01:46:38
Do I have to activate lossless jpeg transcoding?
jonnyawsom3
2024-01-20 01:46:53
No
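it's on by default for JPEG input; a sketch (--lossless_jpeg defaults to 1):
```bash
# reversible JPEG transcode, no extra flags needed
cjxl input.jpg output.jxl
# opt out and re-encode from decoded pixels instead (not reversible)
cjxl --lossless_jpeg=0 -d 1 input.jpg output.jxl
```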
Traneptora
2024-01-20 10:42:36
Just released Hydrium v0.4 :D
2024-01-20 10:42:43
switched to floating point internally
2024-01-20 10:42:55
since without hand-written simd, fixed-point stuff isn't really ideal
monad
JKGamer69 1920x1080
2024-01-20 10:58:25
```
encoders flags
cjxl --disable_output
oxipng -P
CPU: i5-13600K

2 2d
15 3d
8 algorithmic
1 blend
1 edit
6 film
17 game
1 glitch
8 larip
5 lossy
3 mixed
1 photo
43 source_imagemagick
2 source_gnome_screenshot
17 ui
45 total

2073600 pixels/image

bpp      MP/s (user+system)   command
4.68326  34.8                 cwebp_1.2.4_z0metadataall
5.14312  20.6                 oxipng_9.0.0_o0
5.28590  64.8                 cjxl_v0.9.0-a16a3e22_d0e1
5.29076                       original PNG
```
Traneptora
2024-01-20 11:00:00
yea, e1 is not particularly efficient
2024-01-20 11:00:02
but boy is it fast
afed
Traneptora switched to floating point internally
2024-01-20 11:02:28
any speed gains?
Traneptora
afed any speed gains?
2024-01-20 11:02:47
yea, it was actually faster
2024-01-20 11:06:26
the UHD-wallpaper-smile.png test image went from 2.7s to 2.3s
2024-01-20 11:06:37
not really sure how much of that is PNG decoding since I haven't added PFM input yet
2024-01-20 11:06:44
that's my next step
2024-01-20 11:10:33
one-frame mode is much faster but uses more memory
_wb_
2024-01-21 02:28:21
https://helpx.adobe.com/camera-raw/using/gain-map.html looks like Adobe has defined JPEG XL with gain maps like this:
2024-01-21 02:29:42
(UltraHDR is when you do this with JPEG, but it can also be done with HEIC, AVIF, TIFF and DNG)
2024-01-21 02:30:52
Gain maps can be used in two directions: either the main image is SDR and the gain map can be used to produce an HDR image, or the main image is HDR and the gain map can be used to do tone mapping to SDR.
2024-01-21 02:33:48
Gain maps can be 1-channel or 3-channel, and they can be downscaled but the upscaling is not specified except "not nearest neighbor". For using the gain map in the usual way (main image is SDR), Eric Chan is saying it's best to not downscale it too much (i.e. not at all, or 2x), and using 3-channel gain maps is needed for best results.
2024-01-21 02:35:14
When using gain maps in the other direction, main image is HDR and gain map is only used to basically do local tone mapping for SDR rendering, a more heavily downscaled single-channel gain map is probably OK.
2024-01-21 02:37:48
So I think it would make sense to recommend using JPEG XL with the main image in HDR, and only optionally add the gain map if the automatic global tone mapping isn't producing satisfactory results.
2024-01-21 02:39:03
<@604964375924834314> <@532010383041363969> what are your thoughts on this?
username
2024-01-21 02:45:13
I assume the reason for using gain maps on formats that natively support HDR and high bit depths is to accommodate poor software that has no tone mapping or no proper handling of HDR images. If that is the point, then I find the whole idea of gain maps in such cases to be the wrong solution to a wider problem.
Quackdoc
2024-01-21 02:47:14
typically gain maps are used so you can create a legitimately good looking SDR rendition of an image and then use the extra gain in the auxiliary image to give the HDR render more data than the SDR one
2024-01-21 02:47:57
so using an HDR base and an SDR... detraction? map would be a paradigm that would require some work to get used to
username I assume the reason for using gain maps on formats that natively support HDR and high bit depths is to accommodate poor software that has no tone mapping or no proper handling of HDR images. If that is the point, then I find the whole idea of gain maps in such cases to be the wrong solution to a wider problem.
2024-01-21 02:49:56
> no tone mapping
the issue with this is that there is no universally good tone mapping solution
2024-01-21 02:50:06
unless you bake it into the media itself
Traneptora
afed any speed gains?
2024-01-21 04:58:14
but in either case it's a bit surprising how much faster it is when I have one-frame mode enabled vs not enabled
2024-01-21 04:58:27
it's almost 60% more time when it's in small-tile mode
2024-01-21 04:58:34
I have to investigate the bottleneck tbh
afed
Traneptora but in either case it's a bit surprising how much faster it is when I have one-frame mode enabled vs not enabled
2024-01-21 09:03:31
maybe some sort of buffered mode, not just by tile, higher consumption, but still less than one frame
spider-mario
Quackdoc so using an HDR base and an SDR... detraction? map would be a paradigm that would require some work to get used to
2024-01-21 09:30:15
would it, though?
JKGamer69
2024-01-21 01:49:51
Which file format has the best and highest quality for visually lossless results (I mean, the quality difference is so undetectable that nobody will know the difference even if they zoomed in with an image editor), jpgs, regular and/or webps, webps with -lossless 0 on ffmpeg, or jxl?
spider-mario
2024-01-21 01:58:51
I think Iโ€™d eliminate JPEG and regular WebP right away
JKGamer69
2024-01-21 02:19:27
Is the difference for jxl really not that notable? What about webps with -lossless 0 on ffmpeg?
HCrikki
2024-01-21 02:58:26
jxl has a distance concept that seems to mesh really well with its compression method (where the areas of an image that look more important receive more bits and decode in priority)
2024-01-21 02:59:26
it also has the lowest generational loss, in case you do recompressions/resaves of an already compressed image
2024-01-21 03:00:36
https://www.youtube.com/watch?v=oYM9spW7VBQ these videos could use a 2024 version
spider-mario
JKGamer69 Is the difference for jxl really not that notable? What about webps with -lossless 0 on ffmpeg?
2024-01-21 03:03:25
at the default quality setting, it would be noticeable when zooming in, but the default quality is not the absolute highest
2024-01-21 03:03:46
regular WebP forces 4:2:0 chroma subsampling so you will always incur at least that loss
2024-01-21 03:04:20
not even JPEG has that limitation
JKGamer69
2024-01-21 03:08:16
So which one is best for visually lossless results?
lonjil
2024-01-21 03:12:37
for images which truly lossless algorithms are good at, either lossless webp or lossless jxl. For e.g. photographic images, which are harder to compress, I think lossy jxl with distance set to under 1 is a good start (how much below 1 depending on how zoomed in you intend to view it)
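as a rough sketch of those two starting points (filenames are placeholders):
```bash
# non-photographic (flat colors, sharp edges): truly lossless
cjxl -d 0 art.png art.jxl
# photographic: lossy, distance below 1
cjxl -d 0.5 photo.png photo.jxl
```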
JKGamer69
2024-01-21 03:28:26
How do I use ffmpeg to convert YUV videos to image sequences of jxl files?
Traneptora
afed maybe some sort of buffered mode, not just by tile, higher consumption, but still less than one frame
2024-01-21 03:58:19
--tile-size=3 is also faster too
JKGamer69 So which one is best for visually lossless results?
2024-01-21 04:00:10
That depends a lot on what you mean by "visually lossless" and also on how trained the eye looking at the image is
yoochan
JKGamer69 So which one is best for visually lossless results?
2024-01-21 04:23:54
jxl of course 🙂
JKGamer69
JKGamer69 How do I use ffmpeg to convert YUV videos to image sequences of jxl files?
2024-01-21 04:52:29
<@853026420792360980> <@1051137277813870592> <@604964375924834314> You know how to do this?
Traneptora
2024-01-21 04:55:06
Yes but please don't ping directly
2024-01-21 04:55:22
you ask a question you wait for an answer
2024-01-21 04:55:45
fwiw, I didn't just provide a command line for you to copy and paste because I still don't know what you're actually trying to do
2024-01-21 04:56:17
I don't know why you'd want to convert a video to a sequence of JXL frames
2024-01-21 04:56:26
since that's not a useful thing to do by itself
spider-mario
2024-01-21 05:00:20
https://tenor.com/view/youknow-you-gif-19056787
JKGamer69
2024-01-21 06:23:21
If you make images in a lossy format with the maximum possible bitrate, you won't be able to see the quality loss even if you zoom in?
yoochan
JKGamer69 If you make images in a lossy format with the maximum possible bitrate, you won't be able to see the quality loss even if you zoom in?
2024-01-21 06:48:03
just looking at the history showed me two attempts to encode the "Big Buck Bunny" animation into lossless jxl. One is: https://discord.com/channels/794206087879852103/794206087879852106/1064071807877001247 (which shows the command used), the other is: https://discord.com/channels/794206087879852103/794206170445119489/1191557988733956116. But I guess it was the lossless version of bbb. It would be meaningless to convert an already compressed movie to jxl
Quackdoc
spider-mario would it, though?
2024-01-21 06:49:59
I would say it is. Perhaps not if you have good tooling that will help you out, but at the very least, when I was working with video, going from SDR -> HDR creatively is a lot easier. I would say it's largely about the mentality of it. It's mentally easier to think along the lines of "making a good SDR image and making it better with HDR" than "making a great HDR image and making it worse for SDR". It for sure is possible, don't get me wrong, but at least for me going from SDR -> HDR is a much easier thing to do
spider-mario
2024-01-21 06:51:09
it seems to me that which version you create first is orthogonal to how you encode the pair
2024-01-21 06:51:44
whatever software creates the "Ultra HDR" (or whatever) image gets both versions, encodes one, and computes a map that produces the other
2024-01-21 06:53:18
(but incidentally, in a Dolby Vision workflow, don't you start with the HDR version and then do SDR "trims"?)
Quackdoc
2024-01-21 06:55:25
I've never had the pleasure or displeasure of doing so, nor have I looked into it, so I can't comment on that. but it is for sure possible, and thankfully with ocio it can be mostly managed fairly easily. but maybe I'm just still stuck in the LUT mindset
spider-mario
2024-01-21 06:58:21
in Lightroom as well, if you tick "HDR", you grade the HDR version and then you have a few controls on top for the SDR version:
Quackdoc
2024-01-21 07:03:01
hm, I wonder how well it works out, I'll have to try that. I don't have Lightroom anymore but I can always borrow my friend's PC
Traneptora
JKGamer69 If you make images in a lossy format with the maximum possible bitrate, you won't be able to see the quality loss even if you zoom in?
2024-01-21 07:51:56
if you encode losslessly you'll end up with the same pixel data
2024-01-21 07:52:00
but again, what are you actually trying to do
2024-01-21 07:52:19
if you encode lossily you will always be able to tell with sufficient zooming in
JKGamer69
2024-01-21 07:53:00
I'm trying to make images in kbs while maintaining the highest quality possible
Traneptora
2024-01-21 07:53:12
why?
2024-01-21 07:53:25
what does that have to do with converting a video to a sequence of frames
2024-01-21 07:53:54
what problem are you trying to solve
2024-01-21 07:54:47
"making images in kilobytes" is not a problem you want to solve
2024-01-21 07:55:45
it's not possible to provide a solution unless you present the problem you're having
2024-01-21 07:56:02
otherwise we end up with an XY problem
JKGamer69
2024-01-21 07:58:12
I want to put some of the frames in 7z files (like 30-50) and upload them to DeviantArt, but they get too large due to the file sizes of the images.
Traneptora
2024-01-21 07:58:36
DeviantArt doesn't support JXL
2024-01-21 07:58:41
so that won't help you
username
2024-01-21 07:59:21
DeviantArt allows people to upload anything, i've gotten EXEs from it before
Traneptora
2024-01-21 07:59:35
yes but that's not a useful thing to do
2024-01-21 07:59:54
DeviantArt processes images that you upload
2024-01-21 08:00:09
if you just want to upload a big archive full of images then sure but the website won't work nicely with it
JKGamer69 I want to put some of the frames in 7z files (like 30-50) and upload them to DeviantArt, but they get too large due to the file sizes of the images.
2024-01-21 08:00:24
Why don't you just upload a split archive?
2024-01-21 08:00:58
If you already have an archive full of images then it seems like the simplest way to do it
JKGamer69
Traneptora Why don't you just upload a split archive?
2024-01-21 08:03:43
How do I make a split archive?
Traneptora
JKGamer69 How do I make a split archive?
2024-01-21 08:05:46
7-zip has an option for it
2024-01-21 08:05:55
I forget exactly where in the user interface it is, but I assume you're using it cause you have a big 7z archive
2024-01-21 08:06:27
https://www.newsgroupreviews.com/7-zip-split-archive.html
2024-01-21 08:06:29
found this from google
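the CLI form, if that's easier (a sketch; -v sets the volume size):
```bash
# produces frames.7z.001, frames.7z.002, ... each at most 100 MB
7z a -v100m frames.7z frames/
```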
monad
HCrikki https://www.youtube.com/watch?v=oYM9spW7VBQ these videos could use a 2024 version
2024-01-21 08:44:09
Others have demonstrated much worse behavior from libjxl than what was represented in those videos, see https://discord.com/channels/794206087879852103/803645746661425173/1197491887393755136 or `in:benchmarks has:video` in general.
JKGamer69 Which file format has the best and highest quality for visually lossless results (I mean, the quality difference is so undetectable that nobody will know the difference even if they zoomed in with an image editor), jpgs, regular and/or webps, webps with -lossless 0 on ffmpeg, or jxl?
2024-01-21 09:09:48
JXL is safe https://discord.com/channels/794206087879852103/803645746661425173/1110176407931330600 or see the paper at <https://cloudinary.com/labs/cid22> for additional analysis.
JKGamer69
2024-01-21 11:38:47
How do I make zpaq files?
jonnyawsom3
2024-01-21 11:43:01
I just realised, JXL is likely the only image format that fits inside a Discord nickname `data:image/jxl;base64,/wr/BwiDBAwASyAY`
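(for reference, a string like that is just a base64 dump of a tiny .jxl file; a sketch, with tiny.jxl as a stand-in name:)
```bash
printf 'data:image/jxl;base64,%s\n' "$(base64 -w0 tiny.jxl)"
```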
Quackdoc
2024-01-22 12:13:34
lol
Traneptora
2024-01-22 12:36:26
So a few things I discovered
2024-01-22 12:36:46
linearizing in float is *SLOW*
2024-01-22 12:38:36
it takes about 30% more time to linearize the buffer than it does to start with a linearized one
2024-01-22 12:38:39
something about that is off
monad
JKGamer69 How do I make zpaq files?
2024-01-22 12:50:22
use this <https://github.com/fcorbelli/zpaqfranz>
Traneptora
2024-01-22 12:50:34
you could just use the zpaq cli
2024-01-22 12:50:39
but I don't know why you want to make zpaq files
2024-01-22 12:50:53
if your goal is to upload something to deviantart then people downloading them are unlikely to be able to do anything with it
JKGamer69
2024-01-22 12:54:47
Well, I made 1 MB PNG files with VirtualDub2, but it's INCREDIBLY slow. Is there a faster way to make them?
diskorduser
JKGamer69 Well, I made 1 MB PNG files with VirtualDub2, but it's INCREDIBLY slow. Is there a faster way to make them?
2024-01-22 01:00:34
<#805176455658733570>
yurume
JKGamer69 I want to put some of the frames in 7z files (like 30-50) and upload them to DeviantArt, but they get too large due to the file sizes of the images.
2024-01-22 01:28:48
is this your original question? please confirm or deny, since it is very important context to have.
JKGamer69
2024-01-22 01:29:39
confirm
yurume
2024-01-22 01:30:06
in that case forget everything you've seen in this channel, and use ECT https://github.com/fhanau/Efficient-Compression-Tool/ to optimize your files. (look at the "releases" section)
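typical usage is about as simple as it gets (a sketch; per the ECT README, -1 through -9 trade speed for density, and files are optimized in place):
```bash
ect -9 *.png
```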
2024-01-22 01:31:19
this server is about developing a new-generation image format, which may or may not be available in your favorite service or app. if you are just looking to make your files smaller to fit within the website limit, a new image format won't save you right now.
username
2024-01-22 01:32:12
the website in question allows arbitrary file uploads
yurume
2024-01-22 01:32:40
well but users are not likely to have djxl nor zpaq nor emma etc.
2024-01-22 01:33:23
and it's not like you can't split files into multiple parts.
2024-01-22 01:34:29
there are a lot of rabbit holes if you are really looking for the smallest files, and many of them actually involve custom tooling, not general compression tools like jxl etc.
2024-01-22 01:35:14
but I believe JKGamer69's ultimate goal is to show their files to other people, and such complications wouldn't help that goal.
2024-01-22 01:35:55
so use ECT to recompress what you have, but do not bother if you can't get them much smaller
2024-01-22 01:37:26
and do not try to individually recompress video frames; video codecs are designed to be very good at that, since consecutive frames are largely correlated.
2024-01-22 01:38:22
(but I can think of use cases where you do want an access to individual frames, so I wouldn't nitpick that much.)
Traneptora
2024-01-22 04:48:09
now that's interesting
2024-01-22 04:48:20
cbrtf is very slow, but this is not:
2024-01-22 04:49:37
```c
#include <stdint.h>

/* fast approximate cube root: bit-trick initial guess for 1/cbrt(x),
   two Newton-Raphson-style refinements, then a final reciprocal */
static inline float hyd_cbrtf(const float x)
{
    union { float f; uint32_t i; } z = { .f = x };
    z.i = 0x548c39cb - z.i / 3;
    z.f *= 1.5015480449f - 0.534850249f * x * z.f * z.f * z.f;
    z.f *= 1.333333985f - 0.33333333f * x * z.f * z.f * z.f;
    return 1.0f / z.f;
}
```
2024-01-22 04:49:51
it's a fast-inverse-cube-root followed by a reciprocal
2024-01-22 04:50:43
not really sure how it works but it does
2024-01-22 04:51:14
I spend a lot of time converting to XYB so this is a dramatic speed-up
JKGamer69
2024-01-22 04:53:07
Where does anime go to in here? https://docs.google.com/spreadsheets/d/1ju4q1WkaXT7WoxZINmQpf4ElgMD2VMlqeDN2DuZ6yJ8/edit#gid=174429822
Traneptora
2024-01-22 04:55:41
screenshots from animations tend to be closer to "digital 2D art" than anything else
yurume
Traneptora not really sure how it works but it does
2024-01-22 05:11:47
that's a classical approximation followed by newton-raphson iterations, similar to fast inverse sqrt
2024-01-22 05:12:24
note the final `1.0f / z.f`, since what it actually calculates is inverse cbrt up to that point
2024-01-22 05:13:18
I believe a dedicated optimization for XYB would be much faster if it's a bottleneck
Traneptora
yurume that's a classical approximation followed by newton-raphson iterations, similar to fast inverse sqrt
2024-01-22 05:43:38
I know that, I found a paper on fast-inverse-cube-root and added the 1.0f / z.f myself
2024-01-22 05:43:48
what I meant was, I have no idea *why* the magic constant works
2024-01-22 05:43:56
why you can get a fast initial approximation that way
yurume
2024-01-22 05:44:41
cause well, f32 cast to u32, assuming the original f32 was positive, is something like 0xEEEMMMMM where 0xEEE is the exponent and 0xMMMMM is the mantissa
2024-01-22 05:46:33
for the explanation let's assume that 0xEEE is 0 when the exponent is zero, it's actually biased but that's accounted for by the constant. then `cbrt(m * 2^e) = cbrt(m * 2^(e mod 3) * 2^(3 * floor(e/3))) = cbrt(m * 2^(e mod 3)) * 2^floor(e/3)`.
2024-01-22 05:47:22
now, `0xEEEMMMMM / 3 = 0xEEE00000 / 3 + 0xMMMMM / 3` has a similar effect, but the `2^(e mod 3)` part would now be bleeding into the mantissa, which means it's only a crude approximation
2024-01-22 05:48:02
but the approximation can be improved later, and the constant itself can be exhaustively optimized by accounting for the subsequent improvements
Traneptora
2024-01-22 05:49:05
huh
2024-01-22 05:49:17
that is a good explanation, thanks
yurume
2024-01-22 05:51:12
a concrete example: 90 is `0x42b40000` in f32, while 1 is `0x3f800000`. since we know that cbrt(1) = 1, we can approximate cbrt by subtracting `0x3f800000`, multiplying by 1/3 and then adding `0x3f800000` again. you'd get `0x40915555` in that way, which is 4.54166.. and not far from cbrt(90) = 4.481404..
2024-01-22 05:53:38
inverse cbrt would work the same but you'd multiply by -1/3 instead. if you collect the constant part together, you would get a simplified expression `0x54aaaaaa - x / 3`, which is not very different from above! the remainder is a mathematical and computational optimization 😉
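to spell that out (a sketch, writing $L = 2^{23}$ for the mantissa scale and $B = 127$ for the exponent bias): reading the bits of a positive float as an integer gives roughly $I(x) \approx L(\log_2 x + B)$, so for $y = x^{-1/3}$:

$$I(y) \approx L\Big(B - \tfrac{1}{3}\log_2 x\Big) = \tfrac{4}{3}LB - \tfrac{I(x)}{3}$$

and $\tfrac{4}{3}LB = \tfrac{4}{3}\cdot\texttt{0x3f800000} \approx \texttt{0x54aaaaaa}$; the `0x548c39cb` in the code is that constant nudged to reduce the error left after the refinement steps.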
2024-01-22 05:54:37
(by the way, inverse sqrt/cbrt is easier because the next step, i.e. newton methods, would be far easier to do.)
Traneptora
2024-01-22 05:57:13
I also ended up replacing `linearize` with this
2024-01-22 05:57:16
```c
/* approximate sRGB-to-linear transfer function */
static inline float linearize(const float x)
{
    /* linear segment of sRGB: x / 12.92 */
    if (x <= 0.0404482362771082f)
        return 0.07739938080495357f * x;
    /* polynomial approximation of the power segment, in nested form */
    const float y = 2.9476545525510693f * x - 1.5359539026402749f;
    const float z = y * 0.011138900187516445f + 0.13420198008445997f;
    return y * (y * z + 0.3411732297753588f) - z + 0.3679729953772884f;
}
```
2024-01-22 05:57:28
it's a fourth-degree Chebyshev polynomial approximation
2024-01-22 06:03:34
it's a very good approximation
2024-01-22 06:03:43
but significantly faster to compute
2024-01-22 06:04:54
the approximation ends up mapping 1.0 -> 0.99847 which is an error of 0.00153043 or 0.15%
yurume
2024-01-22 06:09:25
maybe we can devise multiple optimizations for different precisions
2024-01-22 06:09:40
libjxl has a special code for 8-bit XYB for example
Traneptora the approximation ends up mapping 1.0 -> 0.99847 which is an error of 0.00153043 or 0.15%
2024-01-22 06:16:51
it seems that libjxl uses a rational approximation for this, where the final division is done with approximate inversion + one newton-raphson iteration
Traneptora
2024-01-22 06:17:34
the final division is done with inverse square root(x ^2 )?
yurume
2024-01-22 06:17:54
nope, so you compute f(x) / g(x) where both f and g are fourth-degree polynomials
2024-01-22 06:18:04
ah wait
2024-01-22 06:18:24
yeah I see what you meant, it's just inversion, sorry
2024-01-22 06:19:44
and this approximate inversion is done via HW (e.g. RCP14 in x86, falls back to an ordinary inversion)
Traneptora
2024-01-22 06:41:27
I think I managed to cut away an additional operation
2024-01-22 06:41:32
```c
/* same linearize with one operation fewer: the power segment
   collapsed into a single polynomial in Horner form */
static inline float linearize(const float x)
{
    if (x <= 0.0404482362771082f)
        return 0.07739938080495357f * x;
    return 0.003094300919832f + x * (-0.009982599f + x * (0.72007737769f + 0.2852804880f * x));
}
```
yurume
2024-01-22 06:43:21
(consider using the Remez algorithm if you are really into that optimization)
Traneptora
yurume (consider using the Remez algorithm if you are really into that optimization)
2024-01-22 06:44:25
what is the remez algorithm? remind me
yurume
2024-01-22 06:44:48
too long to talk about in this channel, more in <#794206087879852106> ...
Traneptora
2024-01-22 06:45:04
aight, tho do note if it's a big explanation I gotta go to sleep
2024-01-22 06:45:06
it's almost 2 a.m. my time
yurume
2024-01-22 06:45:26
oh yeah sorry, sleep tight
yoochan
Traneptora what I meant was, I have no idea *why* the magic constant works
2024-01-22 07:31:07
About magic constants: it's a different function, but this explanation is a must-watch https://m.youtube.com/watch?v=p8u_k2LIZyo&t=1055s&pp=ygUbaW52ZXJzZSBzcXVhcmUgcm9vdCBxdWFrZSAz
Traneptora
2024-01-22 07:34:38
I'm familiar with FISR
2024-01-22 07:34:46
but yurume explained it pretty well
Jyrki Alakuijala
yurume I believe a dedicated optimization for XYB would be much faster if it's a bottleneck
2024-01-22 03:39:18
that's a fascinating idea
2024-01-22 03:40:53
Moritz knows very well how to synthesize fast approximate functions -- https://arxiv.org/abs/2312.08472
JKGamer69
2024-01-22 03:54:15
How do I losslessly compress lossless jxl files to lossless pdf files without losing any quality?
spider-mario
2024-01-22 04:04:43
if the JXLs are indeed lossless, then any intermediary lossless format should do
2024-01-22 04:04:53
e.g. you can decompress the JXLs to PNG and then embed those into a PDF
2024-01-22 04:05:14
(qpdf or pdfjam or img2pdf can do the embedding step)
2024-01-22 04:08:17
this will make a single PDF with one page per image:
```bash
parallel djxl '{}' '{.}.png' ::: *.jxl
img2pdf *.png -o output.pdf
```
JKGamer69
2024-01-22 08:25:48
How about this? https://www.coolutils.com/PDFCombinePro
2024-01-22 08:28:49
Is this img2pdf? https://pypi.org/project/img2pdf/
spider-mario
JKGamer69 Is this img2pdf? https://pypi.org/project/img2pdf/
2024-01-22 09:22:07
yes
JKGamer69
2024-01-22 09:23:13
Which one of qpdf should I download? https://github.com/qpdf/qpdf/releases/tag/v11.8.0
spider-mario
2024-01-22 10:09:10
depends on your platform, but there's also a good chance that you can install it through your package manager
JKGamer69
2024-01-22 10:18:16
Know any tools that are not command line based?
spider-mario
2024-01-22 10:26:32
I'm sure they exist, but I don't know of them
2024-01-22 10:27:03
possibly pdftk
2024-01-22 10:27:55
but if you know how to use `djxl`, then `img2pdf` shouldn't be much more difficult
2024-01-22 10:28:51
instead of `djxl input.jxl output.png`, it's e.g. `img2pdf input1.png input2.png input3.png -o output.pdf`
JKGamer69
spider-mario possibly pdftk
2024-01-22 10:53:29
Where do I add the images? It only said add pdf files.
spider-mario
2024-01-22 10:59:13
sorry, it seems I might have been wrong about pdftk being an option, then
JKGamer69
2024-01-23 12:53:48
Can PDFGear and/or Acrobat do it?
w
2024-01-23 12:54:18
I swear this person is an AI chatbot
2024-01-23 12:54:53
Or someone who thinks this chat is an AI chatbot
Traneptora
w I swear this person is an AI chatbot
2024-01-23 01:02:16
I think it's more likely that they don't really know anything about compression and are inventing problems to solve for no apparent reason
w
2024-01-23 01:03:06
Might actually be good to use an AI chatbot
Traneptora
2024-01-23 01:03:30
better this than nachtnebel inventing false statements
JKGamer69
2024-01-23 01:34:23
Where is the GUI for img2pdf? Does it support folders of images? The program itself, I mean, not the GUI.
jonnyawsom3
2024-01-23 09:18:06
I mean the username doesn't exactly look human either
yoochan
2024-01-23 09:26:58
It seems like he failed the Turing test 😅
Traneptora
2024-01-23 03:20:29
fwiw the role doesn't do anything
2024-01-23 03:21:38
I was the one that added it originally
2024-01-23 03:21:41
but I removed it too
2024-01-25 07:39:45
Why was `save_before_ct` chosen to require `kReferenceOnly` or `resets_canvas`?
2024-01-25 07:40:02
I assume there was some technological reason
2024-01-25 07:40:16
for it to be impossible to enable on tiled frames
veluca
2024-01-25 10:45:39
I would also assume so
2024-01-25 10:45:44
but I probably forgot the reason
_wb_
2024-01-25 10:56:00
I think we wanted to avoid needing both the XYB and the RGB version of the frame, so either it's XYB and you can only use it for patches or it's RGB and you can only use it for frame blending.
veluca
2024-01-25 11:08:16
that sounds like a reasonable reason
Traneptora
2024-01-26 04:50:18
why was frame blending decided to occur in srgb or whatever and not xyb?
veluca
2024-01-26 08:31:38
Two reasons:
- it matches what i.e. gif and apng do
- you can use patches to do blending in xyb
Traneptora
veluca Two reasons: - it matches what i.e. gif and apng do - you can use patches to do blending in xyb
2024-01-26 04:53:14
I looked at using patches to do blending in XYB but I couldn't figure out how to make it work with the tiled frame approach
2024-01-26 04:53:54
cause you're limited to four references for patches
2024-01-26 04:54:03
but not for frame blending or rather
veluca
2024-01-26 04:55:26
weeeelll, nobody is stopping you from patching the previous patch frame on top of the next patch frame
2024-01-26 04:55:27
😛
2024-01-26 04:55:47
maybe you need to alternate indices
Traneptora
2024-01-26 04:55:54
hm?
2024-01-26 04:56:05
I didn't think that you could do that
2024-01-26 04:56:09
that's... actually a good idea
veluca
Traneptora that's... actually a good idea
2024-01-26 04:56:35
you sound so surprised 😛
Traneptora
2024-01-26 04:56:41
I didn't think of it
2024-01-26 04:56:53
and now I have something to work on
2024-01-26 04:57:07
for hydrium one of the biggest downsides of files it produces is that frame blending has to occur in sRGB
2024-01-26 04:57:13
even though they're all kReplace and nonoverlapping
2024-01-26 04:57:40
if I can get blending to occur in XYB it would be much more ideal
2024-01-26 04:57:55
since one of the strengths of XYB and JXL is that you can just request the pixel data in whatever space you want
veluca weeeelll, nobody is stopping you from patching the previous patch frame on top of the next patch frame
2024-01-26 04:59:02
how do you do this without having the frames themselves each be the full image size
veluca
2024-01-26 05:00:08
well, first it's not really a problem to be full size, you can just put 0s
2024-01-26 05:00:22
but if you make a patches-only frame, you can grow it as you add groups to it
2024-01-26 05:00:35
i.e. first one is 1g x 1g, second one 2g x 1g, etc
Traneptora
veluca but if you make a patches-only frame, you can grow it as you add groups to it
2024-01-26 05:50:26
that's true but since they have to be rectangular eventually you'll need to fill with zeroes
2024-01-26 05:50:41
not sure how much coding efficiency you lose there though
2024-01-26 05:50:50
as it may mess up the frequency distribution
2024-01-26 05:52:17
tho I suppose I could do something like
2024-01-26 05:52:25
first frame -> 1g x 1g, saveAsReference = 0
2024-01-26 05:52:39
second frame -> 2g x 1g, saveAsReference = 0
2024-01-26 05:53:00
until the last group of the top row
2024-01-26 05:53:11
n groups x 1g -> saveAsReference = 1
2024-01-26 05:53:25
then I do the same thing
2024-01-26 05:54:00
but in the second-last group of the second row, I'd do `(n - 1)g x 1g`
2024-01-26 05:54:07
all using saveAsReference = 0
2024-01-26 05:54:19
and then the last group of the 2nd row would be `ng x 2g`, saveAsReference = 1
veluca
Traneptora not sure how much coding efficiency you lose there though
2024-01-26 05:54:48
I think not very much
Traneptora
2024-01-26 05:55:07
I would still like to avoid encoding zeroes if possible
2024-01-26 05:55:12
and the method I described does actually do that
veluca
2024-01-26 05:55:30
but yes, you can do that of course
2024-01-26 05:55:53
you can also save a few more groups of 0s by using 2/3 and merging things in groups of 4
Traneptora
2024-01-26 05:56:13
well the method I described above has no zeroes
2024-01-26 05:56:56
one of the issues though with this method is that hydrium allows you to send the tiles in any order
2024-01-26 05:57:06
I would have to break that freedom
2024-01-26 05:57:27
being allowed to send the tiles in any order is why pfm input was so easy to add
_wb_
veluca Two reasons: - it matches what i.e. gif and apng do - you can use patches to do blending in xyb
2024-01-26 05:59:29
It's not just for blending of animation frames (i.e. apng, since in gif it doesn't make a difference because alpha is only bilevel there), it's also for blending of layered images and there we also wanted to match what Gimp and Photoshop do: alpha blending is done in the image color space there.
Traneptora
veluca I think not very much
2024-01-26 06:00:27
really? because if you have a massive number of zeroes for HF coefficients, you end up dramatically inflating the frequency of zero
2024-01-26 06:00:35
which has a side-effect of costing more bits to encode anything nonzero
veluca
2024-01-26 06:00:50
no, AC encodes # of nonzeros with a context model
Traneptora
2024-01-26 06:00:56
... right, the non-zeroes
veluca
2024-01-26 06:01:10
and if you have 0 and 0 above you can make that a separate context I believe
2024-01-26 06:01:32
so most of #nnz would be very cheap and coded in a separate context entirely
Traneptora
2024-01-26 06:01:34
forgot about the "non-zeroes" however
2024-01-26 06:01:40
what about LF coefficients?
2024-01-26 06:01:42
those don't use that
veluca
2024-01-26 06:02:19
ah, you can have per-dc-group histograms there
Traneptora
2024-01-26 06:02:44
well, the scenario I have is that each frame is now the size of each image
2024-01-26 06:02:51
if you're looking at 256x256 of nonzero data for a large image
2024-01-26 06:03:16
then you inflate the frequency of zero even in an LF Group
veluca
2024-01-26 06:04:13
you can also have a modular tree that decides on x and y coordinates
Traneptora
2024-01-26 06:04:27
yea, I suppose that is true
2024-01-26 06:04:33
that would be the way to do it
veluca
2024-01-26 06:04:35
so it has a separate context for the all-0 part
Traneptora
2024-01-26 06:04:41
ye, that makes the most sense probably
2024-01-26 06:05:11
currently the tree is locked to gradient
2024-01-26 06:05:23
but having a different tree wouldn't be hard
_wb_
2024-01-26 09:28:03
It's funny how you can play with these Lego bricks of bitstream tricks 🙂
Traneptora
2024-01-26 09:32:52
100%
2024-01-26 10:53:38
something I wonder about using full-size patch frames with lots of zeroes is ballooning memory required to decode
2024-01-26 10:56:04
in either case, patches are rendered atop the frame after decode
2024-01-26 10:56:24
so I'd have to set the patch blend mode to kAdd if I want the tiles to continue to be able to be sent in any order
2024-01-26 10:56:51
otherwise, the zeroed-samples will replace the previously-decoded parts
2024-01-26 10:57:39
this may balloon decode time though, because it means for every tile there's a floating point addition
2024-01-26 10:57:44
granted, one of them is zero, but still
2024-01-26 10:59:19
if you wanted to use kReplace, you'd have to consider the previous amalgamation of the various tiles, and the new one, and somehow render the new one *onto* the old one, which patches don't let you do
2024-01-26 10:59:43
as far as I'm aware
2024-01-26 11:01:15
unless, hm. you could always alternate saveAsReference
2024-01-26 11:01:25
first one is zero, second one is one, third is zero, etc.
2024-01-26 11:01:44
that way the previous one is zero, and the new is one
2024-01-26 11:01:50
or vice versa
2024-01-26 11:03:56
You'd have to send two patches
veluca weeeelll, nobody is stopping you from patching the previous patch frame on top of the next patch frame
2024-01-26 11:07:53
regarding this:
2024-01-26 11:07:56
> If can_reference, then the samples of the decoded frame are recorded as Reference[save_as_reference] and may be referenced by subsequent frames. The decoded samples are recorded before any colour transform (XYB or YCbCr) if save_before_ct is true, and after colour transform otherwise (in which case they are converted to the RGB colour space signalled in the image header).
2024-01-26 11:08:10
(this is written in Annex F)
2024-01-26 11:08:27
it says it's performed before any color transform
2024-01-26 11:08:58
Section K.3.2 says the following
2024-01-26 11:09:01
> The sample values new_sample are in the colour space before the inverse colour transforms from L.2, L.3 and L.4 are applied, but after the upsampling from J.2 and K.2.
2024-01-26 11:09:16
Notably the spec *doesn't* say whether you save the frame as a reference frame before or after patches are calculated
2024-01-26 11:09:40
it just says you save it before color transforms
2024-01-26 11:09:55
and also it says patches are before color transform
2024-01-26 11:10:11
if patches are computed and then the frame is saved as a reference, IMO it should say that
2024-01-26 11:11:24
same with splines and noise
2024-01-26 11:11:29
> The decoder applies restoration filters as specified in Annex J.
>
> The presence/absence of additional image features (patches, splines and noise) is indicated in the frame header. The decoder draws these as specified in Annex K. Image features (if present) are rendered after restoration filters (if enabled), in the listed order.
>
> Finally, the decoder performs colour transforms as specified in Annex L.
2024-01-26 11:11:36
All of the above happen before color transforms
2024-01-26 11:14:24
If this is the case, you could always set blendMode = kReplace for the patch frame and then blendMode = kNone as a patch for the new frame
_wb_
Traneptora Notably the spec *doesn't* say whether you save the frame as a reference frame before or after patches are calculated
2024-01-26 11:49:24
There's this sentence in F.2: "Blending is performed before recording the reference frame."
veluca
Traneptora Notably the spec *doesn't* say whether you save the frame as a reference frame before or after patches are calculated
2024-01-26 11:49:31
You are entirely right, libjxl does exactly that, and I guess I meant "just before"
_wb_ There's this sentence in F.2: "Blending is performed before recording the reference frame."
2024-01-26 11:50:11
Ah xD doesn't mean that we should not be more explicit though
_wb_
2024-01-26 11:51:00
it's a bit ambiguous what kind of blending is meant there (frame blending or patch blending); I guess it's both but it wouldn't hurt to be more clear about that
Traneptora
2024-01-27 02:54:55
I missed that, thanks
_wb_ There's this sentence in F.2: "Blending is performed before recording the reference frame."
2024-01-27 02:55:12
that refers to frame blending, doesn't it?
2024-01-27 02:55:32
clarification would help
_wb_
2024-01-27 02:57:04
It refers to both, but yes, it should say that explicitly.
Tirr
2024-01-27 07:05:45
what happens if two or more different (color or extra) channels use the same single channel as alpha, and their blend modes or source frames conflict with each other? what sample value is used for the resulting alpha channel?
Traneptora
2024-01-27 07:45:06
hm? what do you mean?
2024-01-27 07:45:22
how would they conflict?
2024-01-27 07:45:42
blend mode is tied to the channel being blended
2024-01-27 07:46:00
alpha channel is used as a reference for this blending but it isn't overwritten unless the alpha channel itself is being blended
2024-01-27 07:47:36
and if it is then the rules are different
Tirr
2024-01-27 07:49:12
in Table F.8, it says when kBlend:
> The blending on the alpha channel itself always uses the following formula instead: `alpha = old_alpha + new_alpha * (1 − old_alpha)`
2024-01-27 07:49:37
doesn't this mean the channel is blended using a different formula when it's used as a kBlend alpha channel?
Traneptora
2024-01-27 07:49:46
Yes
Tirr
2024-01-27 07:50:02
and different extra channels can specify different source frames
Traneptora
2024-01-27 07:50:12
yea, but the channel layout is identical across all frames
2024-01-27 07:50:40
in the case where you have different source frames, `old_alpha` refers to the alpha value associated with `old_sample` and `new_alpha` refers to the alpha value associated with `new_sample`
2024-01-27 07:50:50
since each sample has a corresponding alpha value
Tirr
2024-01-27 07:51:14
since the source frame is different, extra channels can use different sample values as alpha
Traneptora
2024-01-27 07:51:27
yea, that's why there's `old_alpha` and `new_alpha`
2024-01-27 07:51:46
`old_alpha` refers to the alpha value from the source frame
2024-01-27 07:51:53
`new_alpha` refers to the alpha value from the current frame
Tirr
2024-01-27 07:52:00
ah wait, let me check the spec once more
Traneptora
2024-01-27 07:52:08
2024-01-27 07:52:23
you are correct that they may not agree
2024-01-27 07:52:43
but both are present in the actual formula
2024-01-27 07:53:10
the only thing that isn't clear to me is what `alpha` means after that division
2024-01-27 07:54:15
it may be the case that alpha is blended first
2024-01-27 07:54:21
and then `alpha` here is the blended alpha
2024-01-27 07:54:35
I'm not sure though
Tirr
2024-01-27 07:57:40
is BlendingInfo for the alpha channel ignored, because a different formula is specified?
2024-01-27 07:57:47
I'm not sure if I got it correctly
Traneptora
2024-01-27 07:58:12
I don't believe it's ignored
2024-01-27 07:58:27
it's not really clear to me what happens if the alpha channel has a different blend mode
2024-01-27 07:58:42
the only thing I'm confused about is what `alpha` means in the kBlend formula above
Tirr
2024-01-27 07:58:50
so `alpha` is only for computing the channel being blended?
2024-01-27 07:58:55
a bit confused
Traneptora
2024-01-27 07:58:55
I don't know
2024-01-27 07:59:10
`new_alpha` and `old_alpha` clearly refer to the current and source frames' alpha channels
2024-01-27 07:59:13
but `alpha` just isn't defined
2024-01-27 07:59:29
it may be the formula listed below
Tirr
2024-01-27 07:59:46
if it's used then there might be conflicts
Traneptora
2024-01-27 07:59:55
but it's not obvious to me if that is still the case if the alpha channel itself has a different blend mode
Tirr
2024-01-27 08:05:24
maybe samples are overwritten using that formula only if a channel specifies itself as alpha channel
Traneptora
2024-01-27 08:05:44
well that part is clear
2024-01-27 08:06:05
the second formula is used if the channel itself is an alpha channel
Tirr
2024-01-27 08:09:17
then I think it's basically: compute `alpha` as specified, then use `alpha` if the alpha channel is specified as itself, or else do regular blending
Traneptora
2024-01-27 08:10:36
sure, but what if the alpha channel has a different blend mode
2024-01-27 08:10:43
do you use the alpha computed when you blended it?
Tirr
2024-01-27 08:12:34
I'd follow the blend mode specified for that alpha channel
2024-01-27 08:15:09
if we use the alpha produced by another blending operation, there might be conflicting sample values from multiple channels. for example, if channel B has kBlend and channel C has kMulAdd, and both specify channel A as alpha, there are two conflicting values: `old_alpha + new_alpha * (1 − old_alpha)` and `old_alpha`
2024-01-27 08:15:45
using BlendingInfo for channel A kinda resolves this conflict
Traneptora
2024-01-27 08:16:32
I meant more of, for channel B
2024-01-27 08:16:43
the formula for `alpha`
2024-01-27 08:17:03
is it computed as specified in kBlend and then used there
2024-01-27 08:17:24
or do they grab the value computed for A, whatever mode it happened to have
2024-01-27 08:17:47
I think the former
2024-01-27 08:17:51
but I'm not sure
Tirr
2024-01-27 08:19:01
I don't think blending mutates input channels, so I think the former one
2024-01-27 08:20:42
ofc it can be mutated as an implementation detail, but semantically I think it creates new channels for output
2024-01-27 08:20:51
so blending mode for A doesn't affect B and C
Traneptora
2024-01-27 08:25:45
makes sense
Orum
2024-01-27 12:55:10
is the only way to produce animated/'video' JXLs by feeding cjxl PNG or GIF input?
Traneptora
2024-01-27 03:24:35
at the moment yes although there's a pending libjxl animated encoder in ffmpeg
Orum
2024-01-27 05:46:03
yeah I noticed that libjxl is already in ffmpeg, but it appears to be limited to images only
2024-01-27 05:46:26
would be very nice to have it work for video
Traneptora
2024-01-27 05:48:40
it does work with animated jxl on decode
2024-01-27 05:49:07
though tbh I'm not entirely sure why you'd create animated jxl files
2024-01-27 05:49:10
instead of just actual videos
diskorduser
Traneptora though tbh I'm not entirely sure why you'd create animated jxl files
2024-01-27 05:55:25
16bit video. So animated jxl. 🙃
Traneptora
diskorduser 16bit video. So animated jxl. 🙃
2024-01-27 05:58:45
just ffv1 then?
2024-01-27 05:58:55
if you're archiving 16-bit that makes more sense
2024-01-27 05:59:14
if you're doing lossy then prores
2024-01-27 05:59:28
not sure why you'd need 16-bpp lossy video though
2024-01-27 06:01:07
ngl I feel like using an image format as a video codec is just as silly as using a still frame from a video codec as an image format
2024-01-27 06:01:15
they're not designed for the same purpose
2024-01-27 06:01:23
animated JXL exists pmuch only for feature parity reasons
lonjil
2024-01-27 06:03:46
I mean many video formats just use one or more JPEG2000 stills for each frame.
Orum
Traneptora if you're doing lossy then prores
2024-01-27 06:31:44
prores's bitrate is too high
2024-01-27 06:32:53
cineform lets me go to lower bitrates, but then decoding is a challenge as it's a lot slower to decode (per bit) compared to prores
2024-01-27 06:33:27
was hoping to see if jxl could supersede both, but it's just too damn inconvenient to use right now
Traneptora
Orum prores's bitrate is too high
2024-01-27 06:34:25
what are you trying to accomplish then
2024-01-27 06:35:00
I ask because animated JXL is not seekable
Orum
2024-01-27 06:35:03
low bitrate (for an intermediate codec, so it's still high compared to codecs with inter) and fast decoding
2024-01-27 06:35:12
ah, that would be a problem too then
Traneptora
2024-01-27 06:35:15
for an intermediate decoding you explicitly don't want animated JXL
2024-01-27 06:35:18
because it's not easily seekable
Orum
2024-01-27 06:35:35
does the hack that lets you put it in other containers make it seekable though?
Traneptora
2024-01-27 06:35:37
Ideally you'd rather have sequences of JXL frames in a proper container
2024-01-27 06:35:48
each being individual frames
2024-01-27 06:35:49
not animated JXL
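(if plain numbered files are what you want, something like this does it; a sketch assuming an ffmpeg build with the libjxl encoder:)
```bash
# dump a video as a numbered sequence of lossless JXL stills
ffmpeg -i input.mkv -c:v libjxl -distance 0 frame%05d.jxl
```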
Orum
Traneptora each being individual frames
2024-01-27 06:35:58
right, that's what I want...
Traneptora
2024-01-27 06:36:04
yea, that makes more sense
2024-01-27 06:36:16
Quackdoc iirc was testing an ffmpeg patch for that
Orum
2024-01-27 06:36:24
yeah I looked at it
Traneptora
2024-01-27 06:36:34
basically you change `avformat/riff.c` to allow it to work inside NUT
2024-01-27 06:36:36
and then it was fine
Orum
2024-01-27 06:36:41
it's interesting but is very much non-standard for now <:FeelsSadMan:808221433243107338>
2024-01-27 06:37:36
would love to see more official adoption so we don't have to jump through so many hoops to make it work everywhere
Quackdoc
2024-01-28 01:44:46
maybe one day. it is a fairly simple patch, and other image formats have RIFF tags, so I'm not sure if there is any real reason not to add it. I myself just can't really be bothered to propose it, mailing lists are too much of a pain to deal with lol
Traneptora
2024-01-29 05:58:32
in other news, I'm starting work on better Exif handling in avcodec
2024-01-29 05:58:39
currently Exif is in a wonky spot
2024-01-29 05:58:58
the key-value pairs are parsed and attached as global metadata
2024-01-29 05:59:04
by every decoder that supports it
2024-01-29 05:59:08
which is very few
2024-01-29 05:59:30
my plan is to make it work similarly to ICC profiles, where a codec can declare that it supports Exif and then attach a buffer of Exif data
2024-01-29 05:59:39
and then generic code will handle what it did before
afed
2024-01-29 06:26:19
ffmpeg is the new imagemagick <:Hypers:808826266060193874>
Quackdoc
afed ffmpeg is the new imagemagick <:Hypers:808826266060193874>
2024-01-29 06:36:50
maybe when image resolution doesn't destroy us T.T
afed
2024-01-29 06:41:25
images not higher than video resolutions with no streaming support <:KekDog:805390049033191445>
Quackdoc
2024-01-29 06:46:07
you know, image-not higher than video resolutions with no streaming support-magick doesn't quite roll off the tongue very well xD
afed
Traneptora in other news, I'm starting work on better Exif handling in avcodec
2024-01-29 06:55:22
is brotli still being proposed for avtransport?
Traneptora
afed is brotli still being proposed for avtransport?
2024-01-29 06:56:52
already merged
2024-01-29 06:57:19
https://github.com/cyanreg/avtransport/commit/a918281c8e2eaaf487db80a66a2964fa3450c9ce
2024-01-29 06:57:33
I suggested it to lynne and she pointed out that browsers already support it anyway so it made sense
MSLP
2024-01-29 09:10:01
hmmm... i don't know shit about meson build system, but this line seems sus
```
brotli_dep = dependency('libzstd', required: false)
```
shouldn't it be `libbrotlidec` instead of `libzstd` ?
2024-01-29 09:11:19
or rather `libbrotlienc`
Traneptora
MSLP hmmm... i don't know shit about meson build system, but this line seems sus ``` brotli_dep = dependency('libzstd', required: false) ``` shouldn't it be `libbrotlidec` instead of `libzstd` ?
2024-01-29 09:11:40
looks like a copy paste typo
2024-01-29 09:13:44
https://github.com/cyanreg/avtransport/issues/10
afed
2024-01-29 09:16:42
yeah, there is libbrotlidec, libbrotlienc and libbrotlicommon
MSLP
2024-01-29 09:19:32
nice, fwiw both `BrotliEncoderCompress` and `BrotliEncoderMaxCompressedSize` are in `libbrotlienc.so`
Traneptora
2024-01-29 09:23:34
Speaking of orientation
2024-01-29 09:23:45
something I've been wondering is Exif orientation plus JXL codestream orientation
2024-01-29 09:24:08
Currently the way it works is Exif orientation is ignored in favor of the codestream's orientation flag
2024-01-29 09:24:20
I'm wondering if it makes sense to define it instead so the Exif orientation is applied after the codestream's orientation
2024-01-29 09:24:40
this will have no difference in effect if the Exif orientation is zero
2024-01-29 09:24:47
but it means that nonzero Exif orientation matters
2024-01-29 09:24:48
thoughts?
2024-01-29 09:39:18
might be too late in the game at this point to change how it works
MSLP
2024-01-29 09:43:34
that may be a very philosophical question. I overthink this and come to weird conclusions: that codestream orientation could mean the order in which the device sensor is feeding the pixels to the encoder (e.g. the device may have the sensor mounted in a non-standard orientation) while the exif orientation might mean the device orientation in regard to the gravity vector. Sorry for thinking of weird examples that make no sense.
Traneptora
2024-01-29 09:44:01
orientation in the codestream as defined by the spec is an instruction to the decoder
2024-01-29 09:44:10
like "after decoding, rotate 90 degrees clockwise" etc.
2024-01-29 09:44:53
you can think of it in terms of its inverse (it's the D4 group) if you wish
MSLP
2024-01-29 09:50:27
Let's say we have a device with 2 rectangular sensors, one mounted vertically, and one horizontally. We feed the encoder pixels from those 2 sensors (for some reason the sensors have the same interface, so we feed the pixels in long-row order). Then we'd set the image codestream orientation from the vertical sensor to "flip 90 deg", and no orientation change for the horizontal one. Then finally we could set the device orientation in regard to gravity in the exif of the images.
Traneptora
2024-01-29 09:50:57
jpeg xl stores an existing array of pixels, not sensor data
2024-01-29 09:51:02
so this doesn't make sense
2024-01-29 09:51:35
the question I had wasn't philosophical in nature at all. it was a practical question about composing the exif orientation with the codestream orientation as opposed to just ignoring the exif orientation
2024-01-29 09:52:03
it's not about the theoretical concept of what orientation means, it's about what we would like a decoder to do
MSLP
2024-01-29 09:53:21
I just made up an example that makes no sense, but in which it could be useful to have completely separate settings in the codestream and exif. Just treat sensor data as a stream of pixels, in long-row-major order
Traneptora
2024-01-29 09:53:45
the reason you'd want to have them be composed would be to allow you to re-orient the JXL file without having to modify the codestream
2024-01-29 09:54:12
it's a practical question wrt #3195
MSLP
2024-01-29 09:56:46
that hypothetical strange use-case would benefit from composing the orientation indices
Traneptora
2024-01-29 09:57:09
it's not a hypothetical strange use-case, it's an actual problem that an actual user asked
MSLP
2024-01-29 09:57:59
ye. this should be clarified
Traneptora
2024-01-29 09:58:14
well as written the spec explicitly says to ignore Exif orientation
2024-01-29 09:58:22
I'm wondering if we should change that behavior
2024-01-29 09:58:43
Currently the best way to prevent a viewer from rotating twice is to zero out the Exif orientation
MSLP
2024-01-29 10:04:42
I wonder what percentage of viewers even decode JXL Exif. I suspect the behaviour may also vary depending on whether the exif is brotli-compressed or not.
_wb_
Traneptora but it means that nonzero Exif orientation matters
2024-01-30 06:26:22
This will not work for recompressed jpegs, where we need the codestream and exif orientation to match to ensure that reconstructed jpegs and decoded-to-pixels look the same...
2024-01-30 06:28:30
The whole philosophy of jxl codestream/file format separation is that anything render-impacting goes into codestream and file format is only used for _optional_, ignorable metadata that can be stripped without changing the image appearance.
2024-01-30 06:29:45
(which is why colorspace and orientation are part of the codestream)
2024-01-30 06:31:57
But yes, we should provide some easy way to change the image header without changing the rest of the codestream, it is currently not convenient to change jxl orientation...
Traneptora
2024-01-30 06:33:26
easiest way is probably to define a padding extension
2024-01-30 06:33:32
just "this extension must be all zeroes"
_wb_
2024-01-30 06:34:02
How does that help?
Traneptora
2024-01-30 06:34:20
because the biggest issue with changing header flags is you affect the downstream alignment
_wb_
2024-01-30 06:34:28
You cannot assume that a file has such padding present
Traneptora
2024-01-30 06:34:37
sure, but you can add it if it isn't
2024-01-30 06:34:53
changing a header field requires you to shift all the bits until the first ZeroPadToByte
2024-01-30 06:34:57
which is after the ICC profile