|
gb82
|
2024-03-20 07:13:38
|
here's the input image
|
|
2024-03-20 07:14:05
|
which also seems wrong even though I used `exr_to_pq` the same way...
|
|
|
spider-mario
|
2024-03-20 07:34:28
|
at least the input image looks fine to me
|
|
2024-03-20 07:34:40
|
(except the discord preview which discards the ICC profile)
|
|
|
jonnyawsom3
|
2024-03-20 08:02:11
|
Filesize has halved
|
|
|
spider-mario
|
2024-03-20 09:02:12
|
for what it’s worth, based on pixel statistics from `identify -verbose` on djxl’s output, the new one is the one that matches the input PNG
|
|
2024-03-20 09:02:14
|
not sure what’s up with the old image
|
|
|
190n
|
|
gb82
here's the input image
|
|
2024-03-20 09:27:42
|
i'm honored you think this is blender lmao
|
|
|
Traneptora
|
2024-03-21 01:25:55
|
>final16
|
|
2024-03-21 01:25:56
|
yup
|
|
|
gb82
|
|
190n
i'm honored you think this is blender lmao
|
|
2024-03-21 02:01:30
|
ah... where's it from? I assumed it was yours :P
|
|
|
spider-mario
for what it’s worth, based on pixel statistics from `identify -verbose` on djxl’s output, the new one is the one that matches the input PNG
|
|
2024-03-21 02:01:55
|
ah, thanks. it has been long enough that I don't remember
|
|
|
190n
|
|
gb82
ah... where's it from? I assumed it was yours :P
|
|
2024-03-21 02:02:13
|
it's from my own raytracer not blender
|
|
2024-03-21 02:02:20
|
<https://github.com/190n/rust-raytracing/>
|
|
|
gb82
|
2024-03-21 02:03:06
|
:woag: that's sick!!
|
|
2024-03-21 02:04:45
|
`change: Calculate the charge (Proton installations minus Electron installations) of a Linux system` LMAO
|
|
|
Traneptora
|
2024-03-21 11:32:30
|
Interesting, I went back to https://libjxl.github.io/bench/ to take a look at places where jxlatte isn't compliant
|
|
2024-03-21 11:32:39
|
but it appears that the bench is misreporting some of them
|
|
2024-03-21 11:32:43
|
consider sunset_logo
|
|
2024-03-21 11:32:59
|
```
$ djxl sunset_logo.jxl ref.png
JPEG XL decoder v0.10.0 d57e4706 [AVX2]
Decoded to pixels.
924 x 1386, 7.798 MP/s [7.80, 7.80], , 1 reps, 8 threads.
leo@gauss ~/Development/Traneptora/jxlatte :) $ java -jar build/java/jxlatte.jar sunset_logo.jxl test.png
Decoded to pixels, writing PNG output.
leo@gauss ~/Development/Traneptora/jxlatte :) $ magick compare -verbose -metric pae ref.png test.png null:-
ref.png PNG 924x1386 924x1386+0+0 16-bit TrueColorAlpha sRGB 1.91744MiB 0.100u 0:00.047
test.png PNG 924x1386 924x1386+0+0 16-bit TrueColorAlpha sRGB 9.77358MiB 0.090u 0:00.023
Image: ref.png
Channel distortion: PAE
red: 1 (1.5259e-05)
green: 1 (1.5259e-05)
blue: 1 (1.5259e-05)
alpha: 0 (0)
all: 1 (1.5259e-05)
ref.png=>- PNG 924x1386 16-bit sRGB 1.91744MiB 0.280u 0:00.046
```
|
|
2024-03-21 11:33:20
|
according to imagemagick, the difference between djxl and jxlatte is literally 1 in 16-bit png space, for a 10-bit jxl file
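(for reference: 1 / 65535 ≈ 1.5259e-05, i.e. exactly one 16-bit code value, which matches the PAE reported above)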
|
|
2024-03-21 11:33:24
|
I believe that is conformant
|
|
2024-03-21 11:33:43
|
However, the conformance bench reports RMSE: 0.4445458145293334
|
|
2024-03-21 11:33:48
|
which means something somewhere messed up
|
|
|
_wb_
|
2024-03-21 11:39:29
|
that is very possible — the bench could use some improvement in how it is dealing with color conversions etc
|
|
|
Traneptora
|
2024-03-21 11:42:37
|
I agree, but both of these are tagged as sRGB so it's unlikely to be colors in this case
|
|
2024-03-21 11:43:02
|
when writing to PNG, both write an sRGB chunk
|
|
|
_wb_
that is very possible — the bench could use some improvement in how it is dealing with color conversions etc
|
|
2024-03-21 11:44:20
|
it's also possible I just fixed a bug that existed 4 months ago, when it was last run
|
|
2024-03-21 11:44:32
|
at the very least, can you rerun the bench? it will give us more up-to-date info
|
|
|
Orum
|
2024-03-22 02:08:42
|
cjxl handles 16-bit/channel pgm/ppms just fine, right?
|
|
|
Traneptora
|
|
Orum
cjxl handles 16-bit/channel pgm/ppms just fine, right?
|
|
2024-03-22 03:20:49
|
just tested it with rgb48 ppm, works fine
|
|
|
Orum
|
2024-03-22 03:21:38
|
oh, thank god... this will greatly speed up encoding times
|
|
|
Traneptora
|
2024-03-22 03:21:48
|
you may want to consider portable floatmap as well
|
|
|
Orum
|
2024-03-22 03:22:12
|
is that supported for --streaming_input?
|
|
|
Traneptora
|
2024-03-22 03:22:16
|
I don't know
|
|
2024-03-22 03:22:22
|
pfm is weird in that it's bottom to top
|
|
2024-03-22 03:22:37
|
hydrium handles that by just using a TOC permutation
|
|
2024-03-22 03:22:41
|
so it's definitely a thing you can do
|
|
2024-03-22 03:22:44
|
but idk if libjxl does that
|
|
|
Orum
|
2024-03-22 03:25:16
|
well, the whole goal here is just to be able to use streaming input, because that massively reduces the encoding latency for a single image
|
|
2024-03-22 03:25:44
|
but I had been leaving them in png because I wasn't sure if cjxl supported 16b ppm
|
|
|
|
JendaLinda
|
2024-03-25 10:22:14
|
Does Modular always use the same group size for the entire image?
|
|
|
_wb_
|
2024-03-25 11:00:38
|
The group size can be chosen per frame iirc.
|
|
|
|
JendaLinda
|
2024-03-25 11:09:29
|
So a simple image, one frame, uses one group size. That makes sense. Thank you.
|
|
|
jonnyawsom3
|
|
JendaLinda
Does Modular always use the same group size for the entire image?
|
|
2024-03-25 12:41:12
|
Hmm, that's an interesting point... Block selection for modular
|
|
|
Traneptora
|
2024-03-25 03:21:01
|
modular group size is in the frame header
|
|
2024-03-25 03:21:11
|
if you wanted to do smaller blocks you'd have to do it with the MA tree
|
|
|
Silikone
|
2024-03-26 01:00:40
|
What are some examples of images that heavily favor JXL over the competition?
|
|
|
Orum
|
2024-03-26 01:08:50
|
in what sense?
|
|
|
Silikone
|
2024-03-26 01:16:22
|
JXL looking visibly better at a given size
|
|
|
Traneptora
|
|
Silikone
What are some examples of images that heavily favor JXL over the competition?
|
|
2024-03-26 01:16:27
|
smooth gradients
|
|
2024-03-26 01:16:40
|
but generally anything photographic at high qualities
|
|
|
Orum
|
2024-03-26 01:17:15
|
okay, so lossy... that still depends on what we're comparing to, and at what sort of BPP
|
|
2024-03-26 01:17:30
|
but yes, in general photographic does well in JXL
|
|
|
_wb_
|
2024-03-26 02:15:56
|
As a rule of thumb: for anything 'natural', JXL does very well (and also the old JPEG is still quite OK). For anything with clean straight lines (like many man-made objects, or synthetic images), the directional prediction modes of modern video codecs (WebP, HEIC, AVIF) will allow them to be competitive with JXL or even beat it. Note that JXL doesn't have any directional prediction in its lossy mode, and that's also a major reason why it encodes fast — a big part of what makes video encoders so slow is the big search space induced by all those prediction modes. But in 'natural' scenes (nature / wildlife / portraits of humans / most fashion photography / most sports / clouds / ...), directional prediction is mostly useless, or at least way less useful than for things like, say, modern architecture.
Also, in general, directional prediction is mostly useful at low qualities where the encoder can often decide that the prediction is good enough with zero residuals. At higher qualities, directional prediction removes energy from the residuals but not really entropy, so it doesn't really help much.
|
|
|
jonnyawsom3
|
2024-03-26 04:03:21
|
If you want an ideal case scenario for JXL, then upscaled pixel art is very hard to beat.
You resize the art back down to 1:1 pixels, then with lossless you add
`--already_downsampled --resampling=(2, 4 or 8, whichever is closest to the original) --upsampling_mode=0`
All the benefits of a tiny image, with the readability of a file 8x the size. I *technically* hit 0.3 bpp with it losslessly
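For example, a rough sketch of that command (filenames are placeholders; pick the resampling factor closest to the original scale):
```
# art_1x.png = the pixel art resized back down to 1:1
cjxl art_1x.png art.jxl -d 0 --already_downsampled --resampling=8 --upsampling_mode=0
```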
|
|
2024-03-26 04:04:39
|
HDR/High bit depth also does very well
|
|
|
yoochan
|
2024-03-26 05:49:12
|
Amazing combination of arguments! How can I bookmark stuff on discord 😅
|
|
|
jonnyawsom3
|
2024-03-26 06:08:32
|
There's also upsampling_mode -1 (default bilinear or similar) and 1 (for Jon's homemade 'pixel' look)
|
|
|
Orum
|
2024-03-26 06:14:47
|
`DecompressJxlToPackedPixelFile failed` but why?!
|
|
2024-03-26 06:15:05
|
ooh, wait, I see
|
|
|
HCrikki
|
2024-03-26 06:15:56
|
something else that came as a surprise: photographs of dark-skinned people in particular
video-based codecs discard a lot of detail even at high quality, and colors in large blocks tend to shift with generational loss
|
|
|
Orum
|
2024-03-26 06:16:26
|
depends a lot on the AQ used
|
|
|
jonnyawsom3
|
2024-03-26 06:20:29
|
Dark areas are naturally compressed more since they tend to have more noise, especially in video codecs where you might not even have a full 8 bits to work with. JXL handling higher bit depths and HDR, and having more sensible defaults, helps (probably)
|
|
|
Orum
|
2024-03-26 06:23:12
|
I still have to set a high intensity target to have cjxl not obliterate the dark areas in images
|
|
|
jonnyawsom3
If you want an ideal case scenario for JXL, then upscaled pixel art is very hard to beat.
You resize the art back down to 1:1 pixels, then with lossless you add
`--already_downsampled --resampling=(2, 4 or 8, whichever is closest to the original) --upsampling_mode=0`
All the benefits of a tiny image, with the readability of a file 8x the size. I *technically* hit 0.3 bpp with it losslessly
|
|
2024-03-26 06:40:38
|
`Invalid flag value for --resampling: Valid values are {-1, 1, 2, 4, 8}.` <:PepeHands:808829977608323112>
|
|
2024-03-26 06:40:50
|
why can't I use any integer?
|
|
|
jonnyawsom3
|
2024-03-26 06:44:45
|
Only powers of 2 are defined in the spec, likely for squeeze or progressive frames
|
|
2024-03-26 06:55:37
|
In an ideal world, we'd just have a `--Pixel_Scale=x` flag that NN downsamples by that amount, and sets the flags above to reverse it on decode
|
|
2024-03-26 06:56:33
|
For now, I just resize externally and go for `--resampling=8` or 4 until it's no smaller than the original
|
|
|
Orum
|
2024-03-26 06:57:18
|
I have some UHD screenshots that are natively 720p, so without 3x scaling I have to leave them at full res
|
|
|
jonnyawsom3
|
2024-03-26 06:58:47
|
Would 4x not work well enough? Or do you need exact image size on decode
|
|
|
Orum
|
2024-03-26 06:59:15
|
trying to keep them at the res they were taken at, yes
|
|
|
jonnyawsom3
|
2024-03-26 07:01:12
|
I should clarify, the `--already_downsampled` flag overrides the `--resampling` flag, so that the image doesn't get reduced in size, only upsampled on decode
|
|
2024-03-26 07:01:32
|
Hence why I mentioned resizing externally and then using the closest matching value
|
|
|
Orum
|
2024-03-26 07:02:14
|
right, that's what I did, or tried to do anyway until I encountered the error
|
|
|
jonnyawsom3
|
2024-03-26 07:06:44
|
<@794205442175402004> would it be particularly hard to allow arbitrary sampling values in the spec? Or would that have been something to do when first drafted
|
|
|
_wb_
|
2024-03-26 07:31:02
|
The fancy resampling was originally designed just for upsampling the LF (1:8) when doing progressive decoding. We added 4x and 2x and made it possible to have it as a "coding tool". Main things we envisioned were very low bitrate encoding; a kind of layered encoding with e.g. most of the image at 1:2 resolution but some parts at 1:1; and subsampled extra channels (e.g. depth maps or thermal sensors are often only at 1:8 resolution).
I think we mostly only did power of two factors because otherwise you cannot align groups of the extra channels with those of the main channels. In the spec we only reserved two bits to signal the upsampling factor so adding something else would require header syntax change. But it would be quite inconvenient to implement arbitrary factors, and I think having just powers of two is good enough for most applications. Representing NN upsampled pixel art was not something we really had in mind when speccing this 🙂
|
|
|
spider-mario
|
2024-03-26 07:38:56
|
there would even be use cases for different horizontal and vertical factors
|
|
2024-03-26 07:39:22
|
(retro games with non-square pixels)
|
|
|
jonnyawsom3
|
2024-03-26 07:47:02
|
Intriguing, like I said usually just upsampling to the nearest value is close enough for pixel art, but no harm in asking after all
|
|
2024-03-26 07:47:57
|
I am surprised at the 2 bits though, I could've sworn I saw mention of higher upsampling values that just aren't in libjxl
|
|
|
_wb_
|
2024-03-26 09:22:49
|
Extra channels have a global (per-image) dimension shift and a per-frame upsampling factor, and these are cumulative, so you can have more upsampling than 8x in principle.
|
|
2024-03-26 09:23:12
|
The main color channels only have the per-frame upsampling factor though.
|
|
|
|
JendaLinda
|
2024-03-27 10:09:52
|
Pre-VGA and early VGA graphics used some weird pixel aspect ratios. Pixel doubling will help to see the picture closer to how it was displayed on a CRT screen, but the aspect ratio is not correct. I wonder if XMP metadata would be the right place to store pixel aspect ratio and display aspect ratio information.
|
|
2024-03-27 10:13:24
|
Anyhow, the viewer software should be responsible for displaying images in the correct aspect ratio, not the codec.
|
|
|
gb82
|
2024-03-27 08:14:56
|
https://youtu.be/qB9L-ZYM1_0?si=logXHic_KRPB7Ur4
|
|
2024-03-27 08:15:03
|
looks like this is on YouTube now
|
|
|
jonnyawsom3
|
2024-03-27 08:57:11
|
Yeah, got posted last night in <#803574970180829194>, saw it when it was on Apple's website
|
|
|
Traneptora
|
2024-03-29 09:00:44
|
Those of you who use XZ Utils, which is probably a good fraction of those here, need to downgrade to a version below 5.6.0, or if you are on Arch Linux, upgrade now.
A security vulnerability was discovered in XZ Utils 5.6.0 and 5.6.1 a few hours ago that allows SSH to be compromised. OpenSSH does not use liblzma, but many distributions (such as Debian) patch OpenSSH to use libsystemd, which itself uses liblzma.
|
|
2024-03-29 09:12:40
|
<@&807636211489177661> I don't know if it's worth it to make an announcement about this
|
|
2024-03-29 09:15:43
|
(I say upgrade, if you are on Arch, as it has been patched.)
|
|
|
Quackdoc
|
2024-03-29 09:26:34
|
out of all the CVEs we have had lately, kinda surprised how little news this one is getting
|
|
|
lonjil
|
2024-03-29 09:28:19
|
I've seen more discussion and news about it than anything else in a while (at least, anything else in the FOSS scene)
|
|
|
Quackdoc
|
2024-03-29 09:32:03
|
there have been a couple of exploits recently that were really active when they didn't need to be. this one is quite something, though what should be more popular is the process this dude went through, they really did it in quite the neat way
|
|
|
Traneptora
|
|
Quackdoc
out of all the CVEs we have had lately, kinda surprised how little news this one is getting
|
|
2024-03-29 09:37:41
|
tbf we're looking at like
|
|
2024-03-29 09:37:46
|
literally hours ago
|
|
2024-03-29 09:38:06
|
and it's on a friday afternoon of easter weekend, late in the evening in europe
|
|
2024-03-29 09:38:21
|
the timing of the discovery is such that I'm impressed by how much coverage it's getting
|
|
|
Quackdoc
there have been a couple exploits that have been really active when they really didn't need to be somewhat recently, this one is quite something, though whats should be more popular is the process this dude went through, they really did it in quite the neat way
|
|
2024-03-29 09:38:49
|
I admit the actual tech is neat. but the backdoor was added and discovered in less than a week
|
|
2024-03-29 09:39:15
|
which is pretty oof considering they tried to slow burn trust-gain for 2 years.
|
|
2024-03-29 09:39:20
|
that said there may be other vulns inserted
|
|
|
lonjil
|
2024-03-29 09:39:33
|
imagine being an evil org and spending 2 years trying to insert a backdoor only for it to be thwarted in a week
|
|
|
Traneptora
|
2024-03-29 09:39:40
|
yea, lol
|
|
2024-03-29 09:39:48
|
any commit in the last 2 years is gonna be super inspected too
|
|
|
lonjil
|
2024-03-29 09:39:55
|
kinda good PR for FOSS development
|
|
|
Traneptora
|
2024-03-29 09:40:16
|
people are gonna cry about "lol foss backdoored" but the reality is that if this were closed-source, the researcher would never have discovered it
|
|
2024-03-29 09:40:21
|
and it would have stayed there for years
|
|
2024-03-29 09:40:26
|
eyes on code is a big deal
|
|
|
Quackdoc
|
|
Traneptora
I admit the actual tech is neat. but the backdoor was added and discovered in less than a week
|
|
2024-03-29 09:46:18
|
about a month at least; the tarballs for v5.6.0 are infected, and that release was about a month ago.
|
|
|
Traneptora
|
2024-03-29 09:48:51
|
er, not two years tho
|
|
|
Quackdoc
|
2024-03-29 09:49:05
|
also note this doesn't affect anyone other than debian and rpm builds
```ps
Running as part of a debian or RPM package build:
if test -f "$srcdir/debian/rules" || test "x$RPM_ARCH" = "xx86_64";then
```
|
|
|
Traneptora
|
2024-03-29 09:49:38
|
tbf that's a lot of users
|
|
|
Quackdoc
|
2024-03-29 09:49:51
|
yeah but not me 😎
|
|
|
lonjil
|
2024-03-29 09:51:03
|
only debian unstable users since it was caught before that version was in any stable release
|
|
|
spider-mario
|
2024-03-29 10:09:18
|
I started a discussion on <#794206087879852106> on this subject, let’s take it there?
|
|
|
Demiurge
|
2024-03-30 01:03:46
|
That's pretty funny. XZ tools enjoyed a huge amount of popularity very fast and I don't understand why people didn't use lzip instead
|
|
2024-03-30 01:04:02
|
From what I heard XZ is not a good format
|
|
|
w
|
2024-03-30 06:24:24
|
proof microsoft windows is better
|
|
|
damian101
|
|
Demiurge
That's pretty funny. XZ tools enjoyed a huge amount of popularity very fast and I don't understand why people didn't use lzip instead
|
|
2024-03-30 06:24:26
|
because xz came first, which it did because it's a rudimentary flawed implementation of lzma, which people didn't care about...
|
|
|
w
|
2024-03-30 06:24:36
|
open source SUCKS!
|
|
|
Traneptora
I admit the actual tech is neat. but the backdoor was added and discovered in less than a week
|
|
2024-03-30 06:27:00
|
it doesn't matter that it was discovered, it made it into a release
|
|
2024-03-30 06:27:07
|
arch users owned
|
|
2024-03-30 06:27:17
|
should have stuck with debian 9
|
|
|
damian101
|
2024-03-30 07:57:58
|
My main experience with Ubuntu LTS was that the bugs were very stable.
|
|
2024-03-30 07:58:23
|
been using only rolling release distros ever since
|
|
|
yoochan
|
|
Traneptora
people are gonna cry about "lol foss backdoored" but the reality is that if this were closed-source, the researcher would never have discovered it
|
|
2024-03-30 08:01:24
|
This is a positive aspect of the FOSS model of course. But, in contrast with closed projects where the supply chain is also closed, FOSS relies on mutual trust and expects benevolence from its actors. A malicious injection like this feels even more like a betrayal 😑
|
|
|
Traneptora
|
|
w
arch users owned
|
|
2024-03-30 08:03:01
|
it actually doesn't affect Arch because Arch doesn't link OpenSSH with libsystemd
|
|
2024-03-30 08:03:05
|
so even then that meme doesn't work
|
|
|
w
|
2024-03-30 08:41:47
|
if it doesn't affect arch, why the call to action?
|
|
2024-03-30 08:41:48
|
logic fail
|
|
|
yurume
|
2024-03-30 09:22:32
|
running xz could have technically affected the arch system as well, if the attacker wanted
|
|
2024-03-30 09:22:51
|
that should be relevant enough for this incident
|
|
|
lonjil
|
2024-03-30 09:24:30
|
also it doesn't affect Arch because the build script for the backdoor was gated behind a check for whether it was part of an RPM or .deb build.
|
|
|
Demiurge
|
2024-03-30 11:11:36
|
https://lzip.nongnu.org/xz_inadequate.html
|
|
|
lonjil
|
2024-03-30 11:14:29
|
https://discord.com/channels/794206087879852103/794206087879852106/1223563232317542420
|
|
|
Demiurge
|
2024-03-30 11:14:31
|
Good thing arch pacman uses zstd now lol
|
|
|
yurume
|
2024-03-30 11:18:58
|
yeah, it is important to know, but everyone should have got the news by now 🙂
|
|
|
Tirr
|
2024-03-30 02:00:56
|
it doesn't need to be full ICC profile support, but yeah, decoders need to implement some color management operations
|
|
2024-03-30 02:09:55
|
the requirements from the spec are relatively simple though (with prior knowledge): conversion from XYB to linear sRGB (this one is fully specified in the spec), conversion between white points and RGB gamuts, and applying the forward and inverse transfer functions specified in the spec.
|
|
2024-03-30 02:11:40
|
jxl-oxide has a crate dedicated for this <https://crates.io/crates/jxl-color>
|
|
2024-03-30 02:28:04
|
ICC profile synthesis is an extra feature that isn't required to be spec conforming. Decoders may decode images in their original color space, which may (or may not) require an ICC profile. If one is required, it needs to be synthesized from the signaled color space.
|
|
2024-03-30 02:29:09
|
this is a piece of code that handles ICC profile synthesis in jxl-oxide <https://github.com/tirr-c/jxl-oxide/blob/0f8c5b6c9a7a668259ae113ca89ee0d480411ee2/crates/jxl-color/src/icc/synthesize.rs>
|
|
|
Traneptora
|
|
Tirr
requirements from the spec is relatively simple though (with prior knowledge): conversion from XYB to linear sRGB (this one is fully specified in the spec), conversion between white point and RGB color gamut, applying forward and inverse transfer functions specified in the spec.
|
|
2024-03-30 02:32:50
|
strictly speaking frame blending is performed in the tagged space
|
|
2024-03-30 02:32:58
|
so if the tagged space is an ICC space then you need a full CMS to decode
|
|
2024-03-30 02:33:28
|
frame blending and primary transforms are both linear, but TRC is not, so this can matter in that regard
|
|
|
Tirr
|
2024-03-30 02:35:28
|
iirc it's done that way so that the decoder doesn't need to have a full CMS to do blending
|
|
|
Orum
|
2024-03-31 02:05:18
|
interesting, I found a photo where the smallest JXL lossless is `e 4` <:Thonk:805904896879493180>
|
|
|
jonnyawsom3
|
2024-03-31 04:29:21
|
I actually just encountered a similar image, where e4 was smaller than e5 and e6, although there was a file months ago where a lower effort even beat e11 (It was due to g2 being the best out of all group sizes)
|
|
|
Orum
|
2024-03-31 01:07:49
|
yeah I had one other file where e4 was the smallest, but it wasn't a photo
|
|
|
monad
|
|
Orum
interesting, I found a photo where the smallest JXL lossless is `e 4` <:Thonk:805904896879493180>
|
|
2024-04-01 03:40:15
|
```
vs cjxl_0.10.1_d0e10:
time size command
0.78% 94.2% cjxl_0.10.1_d0e2
2.35% 80.9% cjxl_0.10.1_d0e4
mean Mpx/(real s)
| Pareto front for real time and size
mean bpp | | best of
2.95271812 5.78 R cjxl_0.10.1_d0e4
2.99067811 15 R cjxl_0.10.1_d0e3
3.43969780 17 R cjxl_0.10.1_d0e2
3.64973696 0.1359 · cjxl_0.10.1_d0e10
3.65891381 36 R cjxl_0.10.1_d0e1
3.65962581 0.692 · cjxl_0.10.1_d0e9
3.69968224 1.09 · cjxl_0.10.1_d0e8
3.71802276 2.54 · cjxl_0.10.1_d0e7
3.76017563 3.70 · cjxl_0.10.1_d0e6
3.82520470 4.60 · cjxl_0.10.1_d0e5```
|
|
|
A homosapien
|
2024-04-01 09:13:19
|
What about 0.9.2 and 0.8.2? Always good to do regression testing.
|
|
|
fab
|
2024-04-01 05:18:08
|
I can't divulge metrics, AOMedia prohibits it
|
|
2024-04-01 05:18:25
|
You can look at this e9 q12
|
|
2024-04-01 05:19:02
|
|
|
2024-04-01 05:32:21
|
The download was canceled because I already have the file
|
|
2024-04-01 11:35:31
|
|
|
2024-04-01 11:35:32
|
It's starting to work as a video codec
|
|
|
monad
|
|
A homosapien
What about 0.9.2 and 0.8.2? Always good to do regression testing.
|
|
2024-04-02 05:07:02
|
same relationship
|
|
|
fab
|
2024-04-02 05:14:47
|
But this isn't made by Jon Sneyers, it's tuned by me; I don't know anything about the release
|
|
2024-04-02 05:14:54
|
GitHub hasn't been updated
|
|
2024-04-02 05:14:54
|
For lossy there are big improvements, if it's real
|
|
|
monad
|
|
A homosapien
What about 0.9.2 and 0.8.2? Always good to do regression testing.
|
|
2024-04-02 06:38:45
|
0.6 e4 is strongest e
|
|
|
_wb_
|
2024-04-02 06:56:07
|
<@416586441058025472> please try to post things in the relevant channel. Also try to be less spammy and incomprehensible. Otherwise at some point I will have no choice but to ban you, since you already got multiple warning time-outs.
|
|
2024-04-02 06:58:01
|
I suggest <#840831132009365514> for most of your messages. Though they are too cryptic to be useful/actionable.
|
|
|
fab
|
2024-04-02 03:42:35
|
Spidermario is changing the spec
|
|
|
spider-mario
|
2024-04-02 05:26:19
|
I am?
|
|
|
Fox Wizard
|
2024-04-02 05:27:17
|
Maybe you have an evil identical twin you don't know about <:RaysShock:686219918030798921>
|
|
|
jonnyawsom3
|
2024-04-02 06:37:40
|
Sleepcoding
|
|
|
Demiurge
|
2024-04-02 06:59:27
|
Are we suuuure he’s not chatgpt?
|
|
|
Fox Wizard
|
2024-04-02 07:30:15
|
FabGPT™️
|
|
|
spider-mario
|
2024-04-02 08:23:27
|
as in “Fabulous”
|
|
|
fab
|
2024-04-02 08:34:58
|
Bluesword told me about the history of the AV1 community; I did something like the PSNR-XESR, as he calls it in his GitHub repo, but it isn't similar, or something like that
|
|
2024-04-02 08:35:31
|
I don't intend to improve it, as I have already used this site too much, even over the last year
|
|
2024-04-02 08:38:57
|
Apparently there's some extra work in the redo of aom-psy101; damian101 gathered suggestions, and I probably ruined the ssimulacra run by refreshing the page; Cloudinary uses 22000 cookies
|
|
2024-04-02 08:41:43
|
I'm not saying I did something more useful than ChatGPT; Jon Sneyers is the one who paved the way for JPEG XL and jpegli development
|
|
2024-04-02 08:42:01
|
I've only done some MCOS tests and flickering tests
|
|
|
Jyrki Alakuijala
|
2024-04-03 05:06:36
|
jpegli blog post https://opensource.googleblog.com/2024/04/introducing-jpegli-new-jpeg-coding-library.html
|
|
|
|
afed
|
2024-04-03 05:15:28
|
though, such blockiness is still there, also mozjpeg is still better at lower bpp <:PepeSad:815718285877444619>
<https://github.com/libjxl/libjxl/issues/2094#issuecomment-1402428342>
and it would be great to have more seamless integration in libjxl instead of libjpeg with full advantage of the extended api
|
|
|
lonjil
|
2024-04-03 10:11:44
|
I see that all the images in the mucped23 zip file are PNGs, but I can't find the information about how big they were as JPEGs anywhere. Can I request that someone with that data share it?
|
|
|
|
afed
|
2024-04-03 10:21:17
|
https://www.phoronix.com/news/Google-Jpegli-Library
|
|
|
fab
|
2024-04-04 12:31:49
|
I studied a way to make VP9 and AV1 on YouTube 4x better while not increasing complexity
|
|
2024-04-04 12:32:12
|
It was also applied to Jon Sneyers' all-of-shame page
|
|
2024-04-04 12:32:48
|
On some images it did better at e3 or q17
|
|
2024-04-04 12:32:55
|
I made a codec in 10 minutes
|
|
2024-04-04 12:33:04
|
It's 02:32
|
|
2024-04-04 12:44:53
|
|
|
2024-04-04 12:45:07
|
I also compared the fake jpegli from hdblog with the original jpegli
|
|
2024-04-04 12:45:19
|
A 14-minute test
|
|
|
jonnyawsom3
|
2024-04-04 12:57:17
|
What?
|
|
|
VcSaJen
|
|
yurume
IIRC wuffs code can't even dynamically allocate memory (which would simplify many things though)
|
|
2024-04-04 02:55:40
|
I looked it up, looks like it requires a lot of non-WUFFS "glue code" for allocation and parallelization. Meaning vulnerabilities are still theoretically possible, they're just delegated to outside of WUFFS.
|
|
|
yurume
|
|
VcSaJen
I looked it up, looks like it requires a lot of non-WUFFS "glue code" for allocation and parallelization. Meaning vulnerabilities are still theoretically possible, they're just delegated to outside of WUFFS.
|
|
2024-04-04 03:15:56
|
Yes, that's my concern as well.
|
|
|
Demiurge
|
|
jonnyawsom3
What?
|
|
2024-04-04 04:37:56
|
That's what I'm thinking all the time
|
|
|
monad
|
2024-04-04 08:59:20
|
that's the only question
|
|
|
yoochan
|
|
afed
https://www.phoronix.com/news/Google-Jpegli-Library
|
|
2024-04-04 09:48:15
|
now the dev team will be 100% available for the 1.0 !
|
|
|
Soni
|
2024-04-04 11:12:47
|
so how do you convince non-browser devs to ignore their toolkit and implement jxl themselves
|
|
2024-04-04 11:14:12
|
(think KVIrc or whatever)
|
|
|
yoochan
|
|
jonnyawsom3
If you want an ideal case scenario for JXL, then upscaled pixel art is very hard to beat.
You resize the art back down to 1:1 pixels, then with lossless you add
`--already_downsampled --resampling=(2, 4 or 8, whichever is closest to the original) --upsampling_mode=0`
All the benefits of a tiny image, with the readability of a file 8x the size. I *technically* hit 0.3 bpp with it losslessly
|
|
2024-04-04 03:02:26
|
I tried that, the overhead for the upsampling is not negligible !
* original image in png (on the left, pixel size is 19 pixels) : 4417 bytes
* same image reduced to 1 pixel per pixel (on the right) : 776 bytes
* reduced image encoded with `cjxl -d 0 -e 10` : 735 bytes (nice !)
* reduced image encoded with `cjxl -d 0 -e 10 --already_downsampled --resampling=4 --upsampling_mode=0` : 846 bytes (more than 15% increase)
* reduced image encoded with `cjxl -d 0 -e 10 --already_downsampled --resampling=8 --upsampling_mode=0` : 1156 bytes (more than 50% increase !)
|
|
2024-04-04 03:03:49
|
|
|
2024-04-04 03:11:38
|
conclusion: pixel art should always be stored at 1px per px, and the viewer should do the upscale 😄
|
|
|
jonnyawsom3
|
|
yoochan
I tried that, the overhead for the upsampling is not negligible !
* original image in png (on the left, pixel size is 19 pixels) : 4417 bytes
* same image reduced to 1 pixel per pixel (on the right) : 776 bytes
* reduced image encoded with `cjxl -d 0 -e 10` : 735 bytes (nice !)
* reduced image encoded with `cjxl -d 0 -e 10 --already_downsampled --resampling=4 --upsampling_mode=0` : 846 bytes (more than 15% increase)
* reduced image encoded with `cjxl -d 0 -e 10 --already_downsampled --resampling=8 --upsampling_mode=0` : 1156 bytes (more than 50% increase !)
|
|
2024-04-04 03:11:47
|
I should add, the bpp doesn't take into account the resampling, so you actually want to divide it by 4 to get an idea of what the command does
|
|
|
yoochan
|
2024-04-04 03:12:18
|
oki, but here I displayed the size of the resulting files
|
|
|
jonnyawsom3
|
2024-04-04 03:13:20
|
Yeah, for images below a hundred pixels the overhead is too much to be reasonable, but still 1/4 the original filesize
|
|
|
yoochan
|
2024-04-04 03:13:30
|
sure
|
|
2024-04-04 03:13:45
|
but it's not exactly the same file either, 19 pixels vs 8
|
|
|
jonnyawsom3
|
2024-04-04 03:14:04
|
Not to mention extra parameters to improve compression or even e11 since the image is now so tiny at 1:1
|
|
|
yoochan
but not exactly the same file neither 19 pixels vs 8
|
|
2024-04-04 03:14:51
|
Yeah, the power of 2 limit is a little annoying, but for something never meant as a feature it works surprisingly well
|
|
|
yoochan
|
2024-04-04 03:15:09
|
I agree, 19 is stupid 😄
|
|
2024-04-04 03:21:29
|
the solution might be serving the 1px version and using `image-rendering: pixelated`
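for example, something like this (the selector is just a placeholder):
```css
img.pixel-art {
  /* let the browser upscale with nearest-neighbour instead of smoothing */
  image-rendering: pixelated;
}
```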
|
|
|
jonnyawsom3
|
2024-04-04 03:26:59
|
Yeah, if you can do it viewer side then it's best, but when they have fixed scaling algorithms or it's an image site where you can't control it, the upsampling is a very nice compromise
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
yoochan
I tried that, the overhead for the upsampling is not negligible !
* original image in png (on the left, pixel size is 19 pixels) : 4417 bytes
* same image reduced to 1 pixel per pixel (on the right) : 776 bytes
* reduced image encoded with `cjxl -d 0 -e 10` : 735 bytes (nice !)
* reduced image encoded with `cjxl -d 0 -e 10 --already_downsampled --resampling=4 --upsampling_mode=0` : 846 bytes (more than 15% increase)
* reduced image encoded with `cjxl -d 0 -e 10 --already_downsampled --resampling=8 --upsampling_mode=0` : 1156 bytes (more than 50% increase !)
|
|
2024-04-04 08:19:36
|
just wanted to do a little more testing (I recompressed everything with smallest size I could reach, even the PNG)
```
770 a.png
659 a.jxl
770 a4.jxl
1080 a8.jxl
1153 a8_recompressed.jxl
```
Every file was using the reduced image (aka "one pixel for one pixel"), except `a8_recompressed.jxl` ("19 pixels for one pixel")
all done with `-e 11`
|
|
2024-04-04 08:20:30
|
a4 and a8 are `--already_downsampled --resampling=(4|8) --upsampling_mode=0`
|
|
2024-04-04 08:23:34
|
what's funny is the fact that it was the PNG that took the longest to compress to reach that filesize lol
|
|
|
yoochan
|
2024-04-04 08:25:44
|
I already used ect -10 on the png to reduce it, how did you gain some more bytes?
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2024-04-04 08:26:20
|
harder ECT'd <:KekDog:805390049033191445>
|
|
|
yoochan
|
|
TheBigBadBoy - 𝙸𝚛
|
2024-04-04 08:27:09
|
`ect -9999 --pal_sort=120 --allfilters-b`
|
|
|
yoochan
|
2024-04-04 08:27:30
|
9999! Damn
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2024-04-04 08:27:50
|
yeah <:kekw:808717074305122316>
|
|
2024-04-04 08:28:04
|
won 1 byte from `ect -999 --pal_sort=120 --allfilters-b`
|
|
|
yoochan
|
2024-04-04 08:29:00
|
Do you think the 8x upsampling overhead is constant? And should it always be a few hundred bytes whatever the image size?
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
TheBigBadBoy - 𝙸𝚛
just wanted to do a little more testing (I recompressed everything with smallest size I could reach, even the PNG)
```
770 a.png
659 a.jxl
770 a4.jxl
1080 a8.jxl
1153 a8_recompressed.jxl
```Every file was using the reduced image (aka "one pixel for one pixel"), except `a8_recompressed.jxl` ("19 pixels for one pixel")
all done with `-e 11`
|
|
2024-04-04 08:30:00
|
770 bytes PNG version
|
|
|
yoochan
|
2024-04-04 08:30:11
|
(going to bed, I'll test another day)
|
|
2024-04-04 08:30:17
|
Thanks!
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
yoochan
Do you think the 8x upsampling overhead is constant? And should always be a few hundreds of bytes whatever the image size?
|
|
2024-04-04 08:30:51
|
idk, I just learned about the resampling args in cjxl <:KekDog:805390049033191445>
|
|
|
jonnyawsom3
|
|
yoochan
Do you think the 8x upsampling overhead is constant? And should always be a few hundreds of bytes whatever the image size?
|
|
2024-04-04 08:42:29
|
Pretty much, it's 422 bytes overhead for my 250x140 image
|
|
2024-04-04 08:43:46
|
421 bytes in your 64x64 duck image
|
|
2024-04-04 08:48:52
|
My image is 70KB as a PNG vs 11KB as the upsampled JXL (Which is also higher res because of 8x upsampling being closer than 4x)
|
|
2024-04-04 08:51:22
|
So yeah, the fixed overhead very quickly gets worth it
|
|
2024-04-04 08:54:22
|
112 bytes overhead for 4x upsampling
|
|
|
Demiurge
|
2024-04-04 09:41:20
|
So apparently it's called jexel now.
|
|
2024-04-04 10:42:48
|
Now we all have to call them jexels
|
|
2024-04-04 10:42:55
|
So saith the lord
|
|
2024-04-04 10:43:30
|
At least that's how I hear people pronouncing it now in the wild lol
|
|
2024-04-04 10:44:04
|
Should have gone with the extension .jpeg-xl in the encoder ;)
|
|
|
spider-mario
|
2024-04-04 11:23:32
|
.xls .xlsx .xlj
|
|
|
Demiurge
|
2024-04-05 02:55:26
|
`Image.jpegxl` doesn’t look so bad...
|
|
2024-04-05 02:56:59
|
Or `IMG001.JPXL` maybe
|
|
2024-04-05 02:57:46
|
How do you guys come up with a suggested filename suffix?
|
|
|
yurume
|
2024-04-05 03:32:15
|
I think `.jpxl` is out of the question because it can be truncated to `.jpx`, which was used for JPEG 2000 (in addition to `.jp2`)
|
|
2024-04-05 03:32:52
|
`.jxr` was already used for JPEG XR, so `.jxl` was probably a natural extension
|
|
2024-04-05 03:33:56
|
I'm pretty sure that JPEG AI, if ever standardized, will have a hard time choosing a good extension for this reason
|
|
|
jonnyawsom3
|
2024-04-05 03:42:50
|
`.JAI` surely ;P
|
|
|
Fox Wizard
|
|
TheBigBadBoy - 𝙸𝚛
just wanted to do a little more testing (I recompressed everything with smallest size I could reach, even the PNG)
```
770 a.png
659 a.jxl
770 a4.jxl
1080 a8.jxl
1153 a8_recompressed.jxl
```Every file was using the reduced image (aka "one pixel for one pixel"), except `a8_recompressed.jxl` ("19 pixels for one pixel")
all done with `-e 11`
|
|
2024-04-05 07:22:21
|
Rookie numbers <:KittyUwU:1147753612529913938>
|
|
2024-04-05 07:24:03
|
|
|
2024-04-05 07:52:54
|
Funny, fast ECT parameters gave me better/the same (769 ~~nice~~ bytes) results as very slow parameters <:KekDog:884736660376535040>
|
|
|
yoochan
|
|
Fox Wizard
Rookie numbers <:KittyUwU:1147753612529913938>
|
|
2024-04-05 08:23:47
|
which parameters for this amazing 4 more bytes shaved ?
|
|
|
Fox Wizard
|
2024-04-05 08:25:36
|
Probably the same as what TheBigBadBoy used, but then with a newer version of cjxl <:KekDog:884736660376535040>
|
|
|
yoochan
|
2024-04-05 08:25:55
|
Damn!
|
|
2024-04-05 08:26:16
|
I'll build the last one
|
|
|
Fox Wizard
|
2024-04-05 08:26:22
|
``-e 11 -q 100 --allow_expert_options`` with cjxl v0.10.2 1eab001
|
|
2024-04-05 08:27:11
|
And for ECT, managed to get down to 768 bytes
|
|
2024-04-05 08:27:55
|
``-90099 --mt-deflate --pal_sort=120``. 769 bytes with ECT 0.9.5, but 768 bytes with an older version (I think 0.9.4 or 0.9.3)
|
|
2024-04-05 08:29:57
|
``.png`` removed in order for Discord to not add metadata
|
|
|
yoochan
|
2024-04-05 08:30:47
|
nice !
|
|
|
monad
|
2024-04-05 08:49:06
|
dude
|
|
2024-04-05 08:49:24
|
it's probably just d0e10P0I0
|
|
2024-04-05 08:50:32
|
which would be way smarter than running e11
|
|
|
Fox Wizard
|
2024-04-05 08:52:02
|
It is, but it's only 64x64, so speed doesn't matter anyways since it's near instant
|
|
|
yoochan
|
2024-04-05 08:53:13
|
agreed, but it's interesting to know which options end up equivalent to e11 for this image, so we could directly use similar settings for similar but bigger images
|
|
|
monad
|
2024-04-05 08:57:29
|
be aware that what works for 1:1 pixel art in general is not best for scaled up pixel art in general.
|
|
|
Demiurge
|
|
yurume
I think `.jpxl` is out of question because it can be truncated to `.jpx` which was used for JPEG 2000 (in addition to `.jp2`)
|
|
2024-04-05 09:55:51
|
How do filenames get truncated? Does any OS truncate them? Does DOS even do that?
|
|
2024-04-05 10:00:29
|
I’m not aware of any system that truncates filenames like that.
|
|
|
lonjil
|
2024-04-05 10:10:16
|
Extension is at most 3 on DOS
|
|
|
yurume
|
2024-04-05 10:12:05
|
yeah, its effect is still here today, `.html` got cropped to `.htm` for that reason too
|
|
|
Demiurge
|
2024-04-05 10:18:15
|
How often do you even see .htm instead of .html?
|
|
2024-04-05 10:19:35
|
Apparently some older versions of FAT, which aren't even used anymore, only supported 8.3 filenames
|
|
2024-04-05 10:20:34
|
But literally no one uses software anymore with those limitations
|
|
2024-04-05 10:21:35
|
Or at least there is zero overlap with people who are interested in viewing anything newer than a GIF
|
|
2024-04-05 10:28:22
|
Shit... I almost forgot that windows api is still limited to just 256 characters for any full pathname
|
|
|
lonjil
|
2024-04-05 10:37:24
|
Actually, fun fact: there are APIs in Windows without that limit, and if you create such a file then Explorer cannot delete it.
|
|
|
spider-mario
|
|
yurume
yeah, its effect is still here today, `.html` got cropped to `.htm` for that reason too
|
|
2024-04-05 10:54:03
|
.jpg is probably also a consequence of that
|
|
2024-04-05 10:54:12
|
as well as .tif
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
Fox Wizard
``-e 11 -q 100 --allow_expert_options`` with cjxl v0.10.2 1eab001
|
|
2024-04-05 11:01:51
|
same command but `cjxl v0.10.2 e1489592` gave me 659 bytes, and a build from today indeed gives 651 bytes.
Well, at least one time (when -e 11 wasn't a thing) -e 10 produced bigger output after a new release (can't remember which one tho)
I've already seen `-90099` with ECT a few times (or `-60033` etc), but never fucking understood what it does/means (nor is there any doc about it). Would you mind explaining this to me, *O great fox, skinner of bytes*? (amen)
|
|
|
Fox Wizard
|
|
TheBigBadBoy - 𝙸𝚛
same command but `cjxl v0.10.2 e1489592` gave me 659 bytes, and a build from today indeed gives 651 bytes.
Well at least one time (when -e 11 wasn't a thing) -e 10 produced bigger output after a new release (can't remember which one tho)
Already saw a few times `-90099` with ECT (or `-60033` etc), never fucking understood what that does/means (nor there is doc about it). Would you mind explain this to me, *O great fox, skinner of bytes*? (amen)
|
|
2024-04-05 11:02:49
|
<:KittyPizza:1147750033018605680><:FoxPizza:785013250914779176>
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2024-04-05 11:03:01
|
<:kekw:808717074305122316>
|
|
2024-04-05 11:03:10
|
that was quite expected <:KekDog:805390049033191445>
|
|
|
Fox Wizard
|
2024-04-05 11:05:29
|
I honestly don't know what it does, just that it can be an effective way to shave off some bytes and I often use ``-90059`` since it isn't unbearably slow ~~for my standards~~
|
|
|
username
|
|
Demiurge
Shit... I almost forgot that windows api is still limited to just 256 characters for any full pathname
|
|
2024-04-05 11:06:04
|
no and yes: https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
Fox Wizard
I honestly don't know what it does, just that it can be an effective way to shave off some bytes and I often use ``-90059`` since it isn't unbearably slow ~~for my standards~~
|
|
2024-04-05 11:07:48
|
and Ig `-90059` is slower than `-9999` ?
|
|
|
Fox Wizard
|
2024-04-05 11:09:34
|
And I always use ``--mt-deflate`` since it speeds it up by a big amount and the difference between it enabled or disabled usually isn't that big. And funnily enough it sometimes even results in a smaller file. I only don't use it when I go hyper autism mode and try out different params in parallel to get the absolute smallest file <:KekDog:884736660376535040>
|
|
|
TheBigBadBoy - 𝙸𝚛
and Ig `-90059` is slower than `-9999` ?
|
|
2024-04-05 11:09:52
|
It's waaaaaaay faster
|
|
2024-04-05 11:10:19
|
It's the slowest I tend to go on normal images that aren't emoji size <:KekDog:884736660376535040>
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
Fox Wizard
It's waaaaaaay faster
|
|
2024-04-05 11:10:47
|
yeah that's what I thought
and that is why I don't understand a fucking thing about `-90059` <:kekw:808717074305122316>
|
|
|
Fox Wizard
And I always use ``--mt-deflate`` since it speeds it up by a big amount and the difference between it enabled or disabled usually isn't that big. And funnily enough it sometimes even results in a smaller file. I only don't use it when I go hyper autism mode and try out different params in parallel to get the absolute smallest file <:KekDog:884736660376535040>
|
|
2024-04-05 11:11:07
|
yeah I saw that too, tho I think it's only useful for bigger images
|
|
|
Fox Wizard
|
2024-04-05 11:11:56
|
I prefer using -90059 with --mt-deflate over going with something faster without --mt-deflate
|
|
2024-04-05 11:12:56
|
But guess the most sane thing is to just go with ``-9 --mt-deflate`` by default since it's fast <a:FurretSpeed:1126523768941072465>
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2024-04-05 11:15:13
|
yup
but we know both of us don't use that ultra fast setting <:KekDog:805390049033191445>
|
|
|
Fox Wizard
|
2024-04-05 11:15:26
|
That's... true lmao
|
|
2024-04-05 11:15:56
|
I might often use -99999 when images aren't massive
|
|
2024-04-05 11:17:33
|
``-strip -99999 --mt-deflate --allfilters-b --pal_sort=120`` my beloved <:KittyUwU:1147753612529913938>
|
|
2024-04-05 11:18:35
|
~~I think I became Fabian levels of off topic lmao~~
|
|
|
yoochan
|
2024-04-05 11:19:58
|
does deflate always work better than gzip with ect?
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
Fox Wizard
``-strip -99999 --mt-deflate --allfilters-b --pal_sort=120`` my beloved <:KittyUwU:1147753612529913938>
|
|
2024-04-05 11:23:06
|
lol
never tried filters + palette with `-99999`
|
|
|
Fox Wizard
|
2024-04-05 11:23:14
|
I don't see a reason to use the gzip param tbh. Think it just outputs a .gz version of the PNG
|
|
|
TheBigBadBoy - 𝙸𝚛
lol
never tried filters + palette with `-99999`
|
|
2024-04-05 11:23:26
|
Do it on a big image <:trolldoge:1200049081880432660>
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2024-04-05 11:24:04
|
depends on how much time I want to heat my house
|
|
|
yoochan
|
2024-04-05 11:24:11
|
😄 all winter
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2024-04-05 11:24:40
|
winter is *not* coming <:PepeHands:808829977608323112>
|
|
|
yoochan
deflate works always better than gzip with ect ?
|
|
2024-04-05 11:26:00
|
I don't understand wdym, since both PNG and GZ use Deflate inside 🤔
|
|
|
yoochan
|
2024-04-05 11:27:23
|
yep, my bad, I missed something, I had in mind that --mt-deflate and --mt-gzip were both available, but they're not... it made no sense
|
|
2024-04-05 11:27:51
|
you can indeed ask ect to zip or gzip the resulting file... but it's useless
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2024-04-05 11:28:26
|
I used it for Discord to not add metadata
but changing the file extension is more effective lol
|
|
|
jonnyawsom3
|
|
yoochan
you can indeed ask ect to zip or gzip the resulting file... but it's useless
|
|
2024-04-05 12:41:11
|
It's actually for optimizing zip and gzip files too, since it already has everything to brute force deflate
|
|
|
lonjil
Actually fun fact there are APIs in Windows without that limit and if you create such a file then Explorer cannot delete it.
|
|
2024-04-05 12:42:00
|
I've got the registry key set to remove the limit. It will cause issues in 32-bit apps that try to load a long path, but other than that it's been working fine
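For reference, the key I mean (from memory of the MS doc linked above, so double-check there):
```
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"LongPathsEnabled"=dword:00000001
```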
|
|
|
Demiurge
|
|
spider-mario
as well as .tif
|
|
2024-04-05 08:52:28
|
I have only ever seen .tiff
|
|
|
spider-mario
|
2024-04-05 08:52:46
|
the Windows version of DxO PhotoLab exports to .tif
|
|
|
Demiurge
|
2024-04-05 08:52:48
|
Have you ever seen .flac get shortened?
|
|
|
spider-mario
|
2024-04-05 08:52:53
|
the Mac version to .tiff
|
|
|
Demiurge
|
2024-04-05 08:53:13
|
I never saw a .fla file lol
|
|
|
spider-mario
|
2024-04-05 08:53:24
|
that’s taken by Adobe Flash
|
|
2024-04-05 08:53:45
|
.swf is the “compiled” version, the “source” is .fla
|
|
2024-04-05 08:54:21
|
I wrote to DxO’s support to ask them whether they would consider changing the Windows extension to .tiff as well
|
|
2024-04-05 08:54:27
|
their response was:
> DxO PhotoLab follows the .tif naming convention used by all software on PC, with the classic three-letter extension, and .tiff on Mac, where the length of the extension is not limited.
>
> The 3-character extension is used for compatibility with old DOS-type systems, and changing it is not being considered at the moment.
|
|
2024-04-05 08:55:41
|
but I since found out that the Mac version actually lets you pick (or maybe they implemented it in response to my request), so I was able to set it to .tif so that it’s the same on both
|
|
2024-04-05 08:55:54
|
(so that my PTGui projects are portable between my machines without having to rename files when I export)
|
|
|
Demiurge
|
2024-04-05 08:57:13
|
I never saw flac's file extension hold it back from being decoded everywhere, even on embedded hardware
|
|
2024-04-05 08:57:50
|
And after all, a file extension is part of the filesystem, not the bitstream format
|
|
|
spider-mario
|
2024-04-05 08:59:21
|
it’s probably too new for people to have cared much about DOS compatibility
|
|
2024-04-05 08:59:29
|
(I’m very surprised DxO does)
|
|
2024-04-05 09:00:21
|
I replied to them with:
> Thank you for your reply, but I am nonetheless a little confused: what compatibility needs are we talking about? The last version of Windows still based on DOS was Windows Millennium, whose extended support expired back in 2006, more than fifteen years ago. Windows XP (2001) was already no longer based on DOS but on Windows NT, like every version released since. DOS emulation ("NTVDM") isn't even included in the 64-bit versions of Windows (https://learn.microsoft.com/en-us/windows/compatibility/ntvdm-and-16-bit-app-support), which PhotoLab itself requires ("Minimum configuration: […] Microsoft® Windows® 10 version 20H2 (64-bit, and still supported by Microsoft®)").
>
> Moreover, compatibility with DOS file names would also require the part of the file name before the extension to be under 8 characters, which PhotoLab does not control (and the default "_DxO" suffix alone consumes 4 of those 8 characters). And are there even any DOS applications capable of reading these TIFF files anyway?
>
> I am not sure the .tif extension is "used by all software on PC" either. http://www.differencebetween.net/technology/protocols-formats/difference-between-tif-and-tiff/ (2011) notes that "Newer programs […] are now using the TIFF extension as it is more widely accepted", and outside of TIFF images, extensions longer than 3 characters are not so rare either (for example .html, .docx/.xlsx/.pptx, .aspx).
|
|
|
govman
|
|
Sylv
|
|
spider-mario
.swf is the “compiled” version, the “source” is .fla
|
|
2024-04-06 02:54:28
|
what does swf stand for
|
|
2024-04-06 02:54:32
|
i always pronounced it "swiff"
|
|
|
sklwmp
|
2024-04-06 02:54:50
|
FWIW, my installed programs switch back and forth between ".tif" and ".tiff", I've never encountered a problem with either extension
|
|
|
Sylv
what does swf stand for
|
|
2024-04-06 02:55:05
|
i'm guessing shockwave flash?
|
|
|
Demiurge
|
2024-04-06 07:04:20
|
yeah
|
|
2024-04-06 07:04:35
|
sweet wizard format
|
|
|
yoochan
|
|
sklwmp
i'm guessing shockwave flash?
|
|
2024-04-06 07:43:12
|
Once upon a time, games could be played in the browser; then JavaScript, wasm and all were designed to replace it, Flash was thrown away, and now you have to install a new app for every small game you want to play
|
|
|
username
|
2024-04-06 09:13:03
|
I think you can still get Netflix to use Silverlight
|
|
|
spider-mario
|
|
yoochan
Once upon a time, games could be played in the browser, then Javascript, wasm and all were designed to replace it, flash was thrown away and now you have to install a new app for every new small game you want to play
|
|
2024-04-06 10:15:13
|
or use a flash implementation in javascript https://archive.org/details/flash_winrg2
|
|
2024-04-06 10:15:38
|
(or games targeting javascript to start with)
|
|
|
Demiurge
|
2024-04-06 11:45:53
|
It's ludicrous that it wasn't just... idk, code dumped under an MIT license or something somewhere
|
|
2024-04-06 11:46:57
|
And that an existing, established interchange file format, opaque and proprietary, with tons of existing content stored in that format, was just expected to be lost forever
|
|
2024-04-06 11:47:28
|
With no plan on how to decode existing content
|
|
|
sklwmp
|
|
spider-mario
or use a flash implementation in javascript https://archive.org/details/flash_winrg2
|
|
2024-04-06 12:26:59
|
or Ruffle https://ruffle.rs/
|
|
|
土豆
|
2024-04-06 01:52:53
|
I miss Yahoo! Games and those actual Java applets using the Java web start browser plugins
|
|
2024-04-06 01:53:42
|
sorry for being off topic, oldweb stuff is a point of fascination for me
|
|
|
spider-mario
|
|
sklwmp
or Ruffle https://ruffle.rs/
|
|
2024-04-06 02:42:28
|
that’s what archive.org is using – I guess I should have been more specific than “in javascript”
|
|
|
jonnyawsom3
|
2024-04-06 04:01:15
|
I just use Adobe's offline SWF player, a 5MB exe to solve all my problems
|
|
|
|
JKGamer69
|
2024-04-06 04:13:22
|
Can mpv make lossless jxl files? https://mpv.io/
|
|
|
lonjil
|
2024-04-06 04:14:38
|
`screenshot-jxl-distance=0` in your config file
|
|
|
|
JKGamer69
|
|
lonjil
`screenshot-jxl-distance=0` in your config file
|
|
2024-04-06 04:19:18
|
I put it in, but it only makes jpg files instead.
|
|
|
lonjil
|
2024-04-06 04:19:37
|
oh, you also need to set `screenshot-format=jxl`
|
|
2024-04-06 04:19:59
|
and you can use `screenshot-jxl-effort=3` to control which speed it uses
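so a minimal sketch of the relevant mpv.conf lines (the effort value is just an example):
```
# in mpv.conf
screenshot-format=jxl
# distance 0 = lossless
screenshot-jxl-distance=0
# effort trades encode speed for size
screenshot-jxl-effort=3
```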
|
|
|
monad
|
|
yoochan
agreed but it's interesting to know which options are equivalent to e11 at the end, for this image, so we could use similar settings for similar but bigger images, directly
|
|
2024-04-08 01:10:34
|
|
|
|
Orum
|
2024-04-08 02:10:11
|
are patches enabled automatically by anything else, like effort levels, or do they always have to be enabled manually & explicitly?
|
|
2024-04-08 02:13:22
|
because the help leads me to believe the former
|
|
|
monad
|
2024-04-08 02:32:28
|
for lossless, patches are available for e5+ and on by default
|
|
|
Orum
|
2024-04-08 02:33:41
|
do patches work with streaming input?
|
|
2024-04-08 02:35:58
|
looks like no...
|
|
2024-04-08 02:39:05
|
19.7 GB of RAM used while encoding <:FeelsSadMan:808221433243107338>
|
|
|
monad
|
2024-04-08 03:04:22
|
~~Streaming output~~ parallel encode path doesn't use patches
|
|
|
Orum
|
2024-04-08 03:07:36
|
ok, makes sense
|
|
|
monad
|
2024-04-08 03:16:32
|
so for large enough images, patches are disabled by default even with num_threads0
|
|
2024-04-08 03:17:11
|
at e<10
|
|
|
Orum
|
2024-04-08 03:22:11
|
well, I took another big factorio screenshot (this time without any texture compression used by the game), and tried out some more <:JXL:805850130203934781> options, but it's still awful compared to `xz`:
```805306387 factorio.ppm
361982021 factorio-8_vanilla.jxl
331325754 factorio-9_I0-P0-g3.jxl
310169085 factorio-ect9.png
280184274 factorio-9_I1-P0-g3.jxl
255738003 factorio-9_I1-P0-g3_patches.jxl
117113480 factorio.ppm.xz```
|
|
|
jonnyawsom3
|
2024-04-08 03:27:54
|
I'm pretty sure e10 is when 'full frame heuristics' are enabled, such as patches, dots, etc. Not sure if it would have much of an impact compared to forcing patches on at 9 though
|
|
|
Orum
|
2024-04-08 03:29:00
|
the reason forcing patches made a big difference here is that I always have `--streaming_input` and `--streaming_output` in the command line, but when I put `--patches=1` after that, it disabled whichever (or both?) of those that patches isn't compatible with
|
|
2024-04-08 03:29:27
|
but encoding was also very, very slow as a result
|
|
2024-04-08 03:30:20
|
...that xz is still < 50% of the best JXL I've made though <:PepeHands:808829977608323112>
|
|
|
MaryMajesty
|
2024-04-10 10:42:23
|
hello! I'm a hobby developer trying to learn more about the JPEG XL format and currently trying my hand at writing a decoder. My current problem is that I can't find any official specification that isn't locked behind a 300€ ISO standards paywall, and I simply don't have that much money lying around to invest in a mere hobby project. I've been trying to reverse engineer the format by reading other decoder implementations, but it gets pretty messy and there's a lot of information in the specification that I can't infer from decoder source code. Is there any way to obtain the specification without going bankrupt? All I've found so far is a non-official specification from 2019 that isn't compatible with the current format anymore
|
|
|
lonjil
|
|
MaryMajesty
hello! im a hobby developer trying to learn more about the jpeg xl format and currently trying my hand at writing a decoder. my current problem is that i cant find any official specification that isnt locked behind a 300€ iso standards paywall, and i simply dont have that much money lying around to invest into a mere hobby project. ive been trying to reverse engineer the format from reading other decoder implementations, but it gets pretty messy and theres a lot of information contained in the specification that i cant infer from decoder source code. is there any way to obtain the specification without going bankrupt? all ive found so far has been a non-official specification from 2019 that i found online that isnt compatible with the current format anymore
|
|
2024-04-10 10:44:57
|
https://discord.com/channels/794206087879852103/1021189485960114198/1169087534060556339
|
|
|
MaryMajesty
|
2024-04-10 10:47:46
|
thank you so much, youre a lifesaver!
|
|
|
Demiurge
|
|
yoochan
|
2024-04-10 12:46:00
|
Life saved 🎉
|
|
|
Nyao-chan
|
2024-04-10 10:15:33
|
If I have 2 copies of an image, one gray scale and one colored, could jxl store them and take advantage of the similarity between them? How would that look in practice?
I got the idea from the fact that some manga is available in both versions, but they are different enough that the grey scale version can not be obtained just by a filter. It would be cool if it were possible to have both versions in one file and reduce the file size as well.
Not that I expect it to ever happen.
|
|
2024-04-10 10:28:13
|
example
ignore localisation
|
|
2024-04-10 10:29:00
|
I guess the halftone is also problematic
|
|
|
Orum
|
|
Nyao-chan
If I have 2 copies of an image, one grayscale and one colored, could JXL store them and take advantage of the similarity between them? How would that look in practice?
I got the idea from the fact that some manga are available in both versions, but they are different enough that the grayscale version cannot be obtained just by applying a filter. It would be cool if it were possible to have both versions in one file and reduce file size as well.
Not that I expect it to ever happen.
|
|
2024-04-10 10:48:00
|
in theory this could just be done during decode
|
|
2024-04-10 10:48:24
|
i.e. encode only the color image, and during decode, convert it to grayscale
|
|
2024-04-10 10:57:46
|
alternatively you could store the grayscale image as another channel (as JXL offers 4099 of them), but it will take additional storage space
|
|
|
190n
|
|
Orum
i.e. encode only the color image, and during decode, convert it to grayscale
|
|
2024-04-10 11:07:55
|
they said the difference between color and grayscale can't be represented by a simple filter
|
|
2024-04-10 11:08:53
|
maybe you could do something with jxl layers
|
|
|
Orum
alternatively you could store the grayscale image as another channel (as JXL offers 4099 of them), but it will take additional storage space
|
|
2024-04-10 11:09:34
|
does jxl try to find correlations between channels beyond rgb/xyb?
|
|
|
Nyao-chan
example
ignore localisation
|
|
2024-04-10 11:13:40
|
do you have an example that's lossless and where the b&w and color pages can be lined up exactly?
|
|
|
Orum
|
|
190n
they said the difference between color and grayscale can't be represented by a simple filter
|
|
2024-04-10 11:20:24
|
yeah, the issue then is, how much could you really save by coupling channels?
|
|
2024-04-10 11:23:49
|
as an exercise one could try encoding individual X/Y/B channels, subtracting them from the grayscale, and then encoding the residual and seeing if it's any smaller than just encoding the grayscale image
|
|
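A rough way to try that experiment with off-the-shelf tools (a sketch only: file names are placeholders, `-compose difference` drops the sign of the residual, and a naive luma conversion stands in for working on individual XYB channels):
```
# naive grayscale prediction from the colour page
magick color.png -colorspace Gray predicted.png

# residual between the real grayscale page and the prediction (unsigned difference)
magick gray.png predicted.png -compose difference -composite residual.png

# compare lossless sizes: residual vs. just encoding the grayscale page directly
cjxl residual.png residual.jxl -d 0 -e 9
cjxl gray.png     gray.jxl     -d 0 -e 9
```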
|
jonnyawsom3
|
2024-04-11 02:04:15
|
The `-E` parameter *might* stretch to extra channels? But I'm not sure...
|
|
|
HCrikki
|
|
Nyao-chan
If I have 2 copies of an image, one grayscale and one colored, could JXL store them and take advantage of the similarity between them? How would that look in practice?
I got the idea from the fact that some manga are available in both versions, but they are different enough that the grayscale version cannot be obtained just by applying a filter. It would be cool if it were possible to have both versions in one file and reduce file size as well.
Not that I expect it to ever happen.
|
|
2024-04-11 04:00:47
|
even if they were pixel-perfect matches, it's the frontend (app, website) that should be doing that retoning to greyscale live from a colored image. The potential gain of such an unorthodox approach isn't going to be significant enough to outweigh the issues.
A frontend could display colored images as greyscale using simple CSS properties - supported in all browsers and Blink-based apps https://developer.mozilla.org/en-US/docs/Web/CSS/filter-function/grayscale
|
|
2024-04-11 04:08:28
|
saturate to 0 also works similarly
|
|
2024-04-11 04:19:19
|
not mentioned above: Vivaldi has those filters applicable by the browser as 'page actions' if a web app doesn't, but AFAIK they're applied to the whole page, not to specific images
|
|
|
Demiurge
|
|
Nyao-chan
If I have 2 copies of an image, one grayscale and one colored, could JXL store them and take advantage of the similarity between them? How would that look in practice?
I got the idea from the fact that some manga are available in both versions, but they are different enough that the grayscale version cannot be obtained just by applying a filter. It would be cool if it were possible to have both versions in one file and reduce file size as well.
Not that I expect it to ever happen.
|
|
2024-04-11 06:40:59
|
JXL images can have multiple layers and frames that reference each other, but they are meant to be displayed all at once by combining them, or as part of an animation.
|
|
2024-04-11 06:41:14
|
There aren't "pages" like in a TIFF file
|
|
2024-04-11 06:41:32
|
It's just one image, composed of multiple frames.
|
|
2024-04-11 06:41:50
|
At least, according to what I have heard.
|
|
2024-04-11 06:44:32
|
Each JXL file is assumed to be a single image. The image could be animated or it could be the result of compositing different layers on top of each other but the result is just 1 image, not multiple pages like CBZ or TIFF
|
|
|
Nyao-chan
|
|
jonnyawsom3
The `-E` parameter *might* stretch to extra channels? But I'm not sure...
|
|
2024-04-11 06:45:28
|
`-E` does take arguments up to 11
|
|
|
190n
do you have an example that's lossless and where the b&w and color pages can be lined up exactly?
|
|
2024-04-11 06:47:25
|
I don't, it was more just a fun idea. I did download a colour version once thinking I could just use tachiyomi to force grey scale and found out how different they were
|
|
|
Demiurge
|
2024-04-11 06:49:24
|
Also keep in mind that XYB and YUV have 2 color channels and 1 luma channel. So if you just... delete the 2 color channels you are left with a colorless, greyscale image with just 1 channel.
|
|
|
Quackdoc
|
2024-04-11 06:50:18
|
XYB or XYZ? iirc xyb was cone sensitivity, delete two and you are left with a single "cone"
|
|
|
Demiurge
|
2024-04-11 06:50:26
|
No.
|
|
2024-04-11 06:50:32
|
X and Z are the color channels.
|
|
2024-04-11 06:50:49
|
I mean B
|
|
2024-04-11 06:50:55
|
X and B
|
|
2024-04-11 06:51:39
|
X is the red-green spectrum, to grossly oversimplify
|
|
|
Quackdoc
|
2024-04-11 06:52:27
|
ah interesting
|
|
|
Demiurge
|
2024-04-11 06:52:52
|
And B is blue all by itself separated from his colorful friends
|
|
|
jonnyawsom3
|
|
Demiurge
Also keep in mind that XYB and YUV have 2 color channels and 1 luma channel. So if you just... delete the 2 color channels you are left with a colorless, greyscale image with just 1 channel.
|
|
2024-04-11 07:09:40
|
That's an interesting idea... Convert a color page to YUV or XYB, then strip the old luma and insert the greyscale page as the new luma channel... Assuming they were identical
|
|
|
Nyao-chan
|
|
Nyao-chan
example
ignore localisation
|
|
2024-04-11 08:14:21
|
but if you do that, in this example the sky would be left greyish, not white
the similarity comes mostly from the edges
|
|
|
Demiurge
|
2024-04-11 09:34:09
|
Yeah, in the case of comics, it's usually not just a greyscale image of the color version
|
|
2024-04-11 09:34:15
|
Usually the black and white one has higher contrast
|
|
2024-04-11 09:36:54
|
And sometimes with halftoning
|
|
2024-04-11 09:37:16
|
so it's not usually an exact match with a greyscale version of the color image
|
|
2024-04-11 09:37:56
|
If it was... then you already get it for free, just delete the 2 color channels
|
|
|
_wb_
|
2024-04-11 11:04:40
|
When using Modular, you could do something like representing it as a 4-channel RGB + Gray image (where Gray is an extra channel that gets ignored by regular viewers), and then do an RCT to turn it into Y,Co,Cg,Gray. Then the Y and Gray channels are expected to be quite similar (but not identical), and possibly also the chroma channels are useful context for Gray. The MA tree can represent such things (when encoding with `-E 3`).
|
|
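A crude way to approximate that idea with the current tools might be to smuggle the grayscale page into the fourth (alpha) channel of the colour page and let the MA tree use it as context. A hedged sketch (file names are placeholders, and abusing alpha is only a stand-in for a properly tagged extra channel):
```
# pack the grayscale page into the alpha channel of the colour page
magick color.png gray.png -compose CopyOpacity -composite rgb_plus_gray.png

# lossless modular encode; -E 3 lets the MA tree use previous channels as extra properties
cjxl rgb_plus_gray.png combined.jxl -d 0 -e 9 -E 3
```
A regular viewer would of course misread that fourth channel as transparency, so this is only a compression experiment, not a usable delivery format.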
|
Demiurge
|
2024-04-11 11:07:35
|
That's a neat and creative idea. The only thing to keep in mind is, that the grey channel will be a sort of hidden, secret thing that you won't be able to see unless you have some sort of special viewer that can let you see each channel.
|
|
|
lonjil
|
|
Demiurge
There aren't "pages" like in a TIFF file
|
|
2024-04-11 11:13:50
|
yes there are
|
|
2024-04-11 11:14:24
|
> If duration has the value 0xFFFFFFFF, the decoder presents the next frame as the next page in a multi-page image.
|
|
|
_wb_
|
|
Demiurge
That's a neat and creative idea. The only thing to keep in mind is, that the grey channel will be a sort of hidden, secret thing that you won't be able to see unless you have some sort of special viewer that can let you see each channel.
|
|
2024-04-11 11:24:14
|
Correct. But some viewer logic will be needed in any case for this use case, I think.
|
|
|
Demiurge
|
2024-04-11 11:26:09
|
Not if we are dealing with a multi-page format that can copy content from previous pages... I think DjVu might be capable of that?
|
|
2024-04-11 11:27:54
|
jpegxl files are designed to have only one image at a time, not multiple separate, independent pages or canvases
|
|
2024-04-11 11:28:34
|
But, that would make a great extension someday...?
|
|
2024-04-11 11:28:49
|
To replace DjVu and CBZ?
|
|
|
lonjil
|
|
Demiurge
jpegxl files are designed to have only one image at a time, not multiple separate, independent pages or canvases
|
|
2024-04-11 11:31:11
|
did you read what I said above
|
|
|
Demiurge
|
2024-04-11 11:31:15
|
Because I think jpegxl is very similar to djvu already, with its multiple layers, patches, and swiss army knife of tools
|
|
|
lonjil
> If duration has the value 0xFFFFFFFF, the decoder presents the next frame as the next page in a multi-page image.
|
|
2024-04-11 11:31:48
|
Oh, I didn't see that...
|
|
|
lonjil
|
2024-04-11 11:32:20
|
honestly, I'm not sure how many image viewers support pages, regardless of format
|
|
|
Demiurge
|
2024-04-11 11:32:47
|
Preview.app supports multi page tiff
|
|
|
lonjil
> If duration has the value 0xFFFFFFFF, the decoder presents the next frame as the next page in a multi-page image.
|
|
2024-04-11 11:33:25
|
Where's the context for this? I'd like to read more
|
|
|
lonjil
|
2024-04-11 11:33:39
|
it'd probably be easier to get jxl support added to pdf/djvu/etc readers than to get page support added to image viewers that lack it, tbh
|
|
|
Demiurge
Where's the context for this? I'd like to read more
|
|
2024-04-11 11:34:30
|
https://discord.com/channels/794206087879852103/1021189485960114198/1169087534060556339
first document, towards the end of section F.2
|
|
|
Demiurge
|
2024-04-11 11:34:44
|
I would like to know if it supports pages of different sizes and if each page can still have multiple layers...
|
|
2024-04-11 11:35:08
|
Also I'm kinda surprised Jon didn't know about it or mention it
|
|
|
lonjil
|
|
Demiurge
I would like to know if it supports pages of different sizes and if each page can still have multiple layers...
|
|
2024-04-11 11:36:13
|
I think if you had multiple frames in a row with a duration of 0 or marked as reference only, and then a final frame with a duration of 0xFFFFFFFF, and then more duration 0 frames, it would be like having multiple pages of multiple layers
|
|
2024-04-11 11:38:07
|
so say each page will have 3 layers, then the frames in the file will be
1. dur 0
2. dur 0
3. dur max
4. dur 0
5. dur 0
6. dur max
7. dur 0
8. dur 0
9. dur max
for a 3 page file
|
|
|
Demiurge
|
2024-04-11 11:38:13
|
Multi page jpegxl sounds cool...
|
|
2024-04-11 11:38:39
|
I think that means jpegxl can do pretty much everything djvu can do
|
|
2024-04-11 11:39:13
|
Their features are very similar
|
|
2024-04-11 11:47:19
|
Actually it's pretty sad that djvu didn't become more popular mostly because of a lack of accessible software
|
|
2024-04-11 11:50:00
|
Even today, it's very difficult to create good djvu files that take advantage of multiple layers. And it's GPL instead of MIT
|
|
2024-04-11 11:57:10
|
And C++
|
|
2024-04-11 11:57:16
|
:)
|
|
2024-04-11 11:58:58
|
Having a lightweight and portable, easy to embed and freely-idgaf-licensed reference library is pretty crucial to get a codec used everywhere
|
|
|
_wb_
|
|
Demiurge
Not if we are dealing with a multi-page format that can copy content from previous pages... I think DjVu might be capable of that?
|
|
2024-04-11 03:10:59
|
Just generic multi-page is probably not really what you want for this use case; the viewer would then show you the manga pages alternating between color and grayscale, while what you probably want is to see each page only once, either all in color or all in grayscale (e.g. depending on whether you're viewing on a phone/laptop or on an e-reader). That's what I meant by "some viewer logic will be needed".
|
|
2024-04-11 03:14:36
|
Doing it as an extra channel means the decorrelation can be done through RCTs and MA tree context; doing it as a multi-layer (multi-page) jxl image would allow decorrelation in a different way, e.g. you could subtract the grayscale image from the color one (using `kAdd` frame blending) and possibly the residuals would compress better (though halftoning patterns might just as well make things worse instead).
|
|
|
Demiurge
I would like to know if it supports pages of different sizes and if each page can still have multiple layers...
|
|
2024-04-11 03:16:26
|
Yes, each page can still have multiple layers. Basically a layer is a zero-duration frame, an animation frame is a nonzero-duration frame, and a new page is started with a maximum-duration frame. These can in principle all be mixed freely. The only thing you cannot do is looping animation within a page.
|
|
2024-04-11 03:23:41
|
Pages of different size are kind of supported and kind of not supported. The whole file has the same image dimensions, and when rendering the image/animation/pages, this is the "canvas" on which everything gets painted. Individual frames/layers can be smaller or larger than the canvas (and they can be positioned freely on the canvas) but what is supposed to be shown by a viewer is only the overlap with the canvas, i.e. any parts of the frame outside the canvas are not supposed to be shown and any parts of the canvas not covered by the frame are supposed to be still shown (and what to show there is determined by the frame blending info; it could be the previous frames/layers below, or if no previous reference frame is provided, it will just be all-zeroes, i.e. transparent black).
|
|
2024-04-11 03:27:18
|
So you can have one frame/page that is 1000x500 and a second frame/page that is 500x1000, but you'll still have to decide how you want to encode it: e.g. using a 1000x1000 canvas where you center each page, or using a 500x500 canvas where you only show a crop of each frame (while the remaining parts of the frame are still encoded in the file and could be shown in an image editor or advanced viewer), or anything else.
|
|
|
Demiurge
|
2024-04-11 04:18:50
|
When encountering a max duration frame, shouldn’t it use the size of the frame to determine the size of the next page's canvas?
|
|
2024-04-11 04:19:52
|
That just seems like the most obvious way for it to work
|
|
2024-04-11 04:23:19
|
As for looping animation, I don't completely understand why that would be an exceptional circumstance either
|
|
|
lonjil
|
|
Demiurge
As for looping animation, I don't completely understand why that would be an exceptional circumstance either
|
|
2024-04-11 05:20:40
|
when you reach the end of an animation, what happens?
|
|
|
Demiurge
|
2024-04-11 05:21:45
|
I'm not sure how it's specified in jxl.
|
|
2024-04-11 05:23:04
|
It either stops on the last frame, or rewinds to a previous frame, usually back to the beginning.
|
|
|
lonjil
|
2024-04-11 05:23:35
|
my expectation is that when the last frame in the image has been displayed for its duration, it goes back to the start
|
|
2024-04-11 05:24:21
|
if you get to a max duration frame, it'd just stop animating
|
|
|
Demiurge
|
2024-04-11 05:25:56
|
It depends on how it's specified... If max duration frames actually mean "next page" then it has nothing to do with animation anymore
|
|
2024-04-11 05:26:29
|
Because then it means something else that is completely unrelated to animation
|
|
|
Orum
|
|
Demiurge
X and B
|
|
2024-04-11 08:15:51
|
I don't think there are any luma channels in XYB
|
|
2024-04-11 08:16:31
|
https://en.wikipedia.org/wiki/LMS_color_space#Image_processing
|
|
|
lonjil
|
2024-04-11 08:41:43
|
this tree ```
XYB
Width 256
Height 128
if c > 0
if c > 1
- Set 0
- Set 0
- W 5
```
results in this image. So I think there is a lightness component.
|
|
|
Demiurge
|
2024-04-11 10:58:22
|
Y represents luma
|
|
2024-04-11 10:58:38
|
XYB has 2 color channels and 1 luma just like YUV
|
|
|
damian101
|
2024-04-11 11:01:00
|
Yes, luminance separation is essential for good encoding performance.
|
|
|
Demiurge
|
2024-04-11 11:08:12
|
The transform matrix implies that blue doesn't contribute to luma though.
|
|
2024-04-11 11:08:31
|
Which I don't completely understand
|
|
2024-04-11 11:09:34
|
Maybe you need both Y and B to get luma?
|
|
2024-04-11 11:10:24
|
Blue being completely disconnected from Y seems unexpected
|
|
|
spider-mario
|
|
Demiurge
The transform matrix implies that blue doesn't contribute to luma though.
|
|
2024-04-11 11:13:10
|
but that’s the “blue” after conversion to LMS, isn’t it?
|
|
2024-04-11 11:13:16
|
|
|
|
Demiurge
|
2024-04-11 11:13:35
|
That's what it looks like to me
|
|
2024-04-11 11:13:51
|
I'd say yes
|
|
2024-04-11 11:14:39
|
I'm not an expert though
|
|
|
spider-mario
|
2024-04-11 11:15:23
|
I’d assume that, say, sRGB’s blue would excite the M receptors (“G”) and end up in XYB’s Y in that way
|
|
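For reference, a sketch of the forward RGB-to-XYB transform as libjxl implements it (coefficients quoted from memory from libjxl's opsin absorbance constants, so treat the exact numbers as approximate): linear RGB is mixed into LMS-like cone responses, a small bias is added, a cube root is applied, and X/Y/B are formed from the results.
```latex
% approximate opsin absorbance matrix (from memory; see libjxl's opsin parameters)
\begin{pmatrix} L \\ M \\ S \end{pmatrix} \approx
\begin{pmatrix}
  0.300 & 0.622 & 0.078 \\
  0.230 & 0.692 & 0.078 \\
  0.243 & 0.205 & 0.552
\end{pmatrix}
\begin{pmatrix} R \\ G \\ B \end{pmatrix} + \text{bias},
\qquad
X = \tfrac{L' - M'}{2}, \quad Y = \tfrac{L' + M'}{2}, \quad B_{\mathrm{xyb}} = S'
```
where L', M', S' are the bias-adjusted cube roots of L, M, S. So sRGB blue does feed into Y, just with a small weight (about 0.078 via each of L and M); it's only XYB's own B channel (the S response) that stays out of Y.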
|
Demiurge
|
2024-04-11 11:29:14
|
Makes sense.
|
|
2024-04-11 11:29:49
|
So L and M are mostly responsible for perception of brightness I guess?
|
|
2024-04-11 11:30:33
|
But then there's also rod cells that don't care about color?
|
|
|
spider-mario
|
2024-04-11 11:33:33
|
rods are relevant below roughly 3 cd/m²
|
|
2024-04-11 11:33:38
|
above that, not so much
|
|
2024-04-11 11:33:51
|
and here is their sensitivity to light compared to cones
|
|
2024-04-11 11:33:59
|
(V: cones, V′: rods)
|
|
|
spider-mario
rods are relevant below roughly 3 cd/m²
|
|
2024-04-11 11:35:19
|
(and at that point, cones are still relevant too https://en.wikipedia.org/wiki/Mesopic_vision)
|
|
2024-04-11 11:36:18
|
scotopic = vision in low light, through rods
photopic = vision in bright light, through cones
mesopic = mixture of the previous two in the transition zone
|
|
2024-04-11 11:37:27
|
pure scotopic vision is below 0.01 cd/m² or so
|
|
2024-04-11 11:39:18
|
the above graphs are from:
|
|
|
spider-mario
|
|
2024-04-11 11:39:24
|
https://onlinelibrary.wiley.com/doi/book/10.1002/9781119021780
|
|
|
spider-mario
and here is their sensitivity to light compared to cones
|
|
2024-04-11 11:39:33
|
https://onlinelibrary.wiley.com/doi/book/10.1002/9781118653128
|
|
|
damian101
|
|
spider-mario
rods are relevant below roughly 3 cd/m²
|
|
2024-04-11 11:51:10
|
which is a much more relevant brightness range in images, especially movies, than people assume...
|
|
|
_wb_
|
2024-04-12 05:10:50
|
Yes, when viewing in a dark room (like a typical movie theater) and there is a sufficiently long dark scene (rods require more adaptation time than cones), mesopic vision and rods are probably relevant. For still images, I'd say it's not really relevant, such dark viewing conditions and image content that is globally dark is not typical. Also I assume that at the brighter end of mesopic vision (say, above 0.5 nits), the role of rods is pretty small compared to that of cones.
|
|
|
Demiurge
|
2024-04-12 10:45:11
|
I think it's really cool how much there is to learn about image encoding
|
|
|
VcSaJen
|
|
Orum
https://en.wikipedia.org/wiki/LMS_color_space#Image_processing
|
|
2024-04-12 02:25:28
|
It's interesting how cameras perceive colors differently from eyes. Yellow laser (extremely narrow wavelength range) is green on camera, infrared is visible pale light-blue, etc.
|
|
|
Orum
|
2024-04-12 03:45:42
|
well that depends much on the filtering within the sensor
|
|
|
spider-mario
|
2024-04-12 04:15:44
|
and the processing that follows https://doi.org/10.1117/1.OE.59.11.110801
|
|
|
Orum
|
2024-04-15 08:31:51
|
oh no, has this stopped working for anyone else? https://github.com/novomesk/qt-jpegxl-image-plugin/
|
|
2024-04-15 08:32:12
|
nomacs will no longer open JXLs for me <:PepeHands:808829977608323112>
|
|
2024-04-15 08:33:55
|
oh, I guess it's because that is a Qt5 plugin and nomacs is now using Qt6...
|
|
2024-04-15 08:34:18
|
ugh, what a headache <:SadOrange:806131742636507177>
|
|
2024-04-15 08:42:27
|
well at least that can be compiled for Qt6
|
|
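In case it helps anyone else hitting this: as far as I remember the plugin repo uses a CMake build, so rebuilding it against Qt6 should be roughly the usual dance (a hedged sketch; the exact way to select Qt6 depends on the project's CMake options):
```
git clone https://github.com/novomesk/qt-jpegxl-image-plugin
cd qt-jpegxl-image-plugin
cmake -B build -DCMAKE_BUILD_TYPE=Release   # make sure it picks up Qt6, not Qt5
cmake --build build
sudo cmake --install build
```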
|
lonjil
|
2024-04-15 08:51:21
|
why not use kimageformats?
|
|
|
Orum
|
2024-04-15 08:52:17
|
I only see kimageformats for Qt5 🤷♂️
|
|
|
lonjil
|
2024-04-15 08:52:58
|
is KDE6 not available on your distro?
|
|
|
Orum
|
2024-04-15 08:53:16
|
https://aur.archlinux.org/packages/kimageformats-git
|
|
|
lonjil
|
|
Orum
https://aur.archlinux.org/packages/kimageformats-git
|
|
2024-04-15 08:55:21
|
https://archlinux.org/packages/extra/x86_64/kimageformats/
|
|
2024-04-15 08:56:05
|
that "git" package is very out of date
|
|
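For the packaged route (assuming an up-to-date Arch-based system; package names as on the Arch repos linked above):
```
# KF6 build of kimageformats; includes the JXL plugin when built against libjxl
sudo pacman -S kimageformats

# the Qt5/KF5 variant is packaged separately, if an older viewer still needs it
sudo pacman -S kimageformats5
```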
|
Orum
|
2024-04-15 08:56:18
|
I'm on Manjaro, not arch
|
|
|
lonjil
|
2024-04-15 08:56:59
|
Manjaro is also on 6
|
|
|
Orum
|
2024-04-15 08:57:07
|
where are you seeing that?
|
|
|
lonjil
|
2024-04-15 08:57:24
|
ok only on testing I guess
|
|
2024-04-15 08:58:11
|
the "kimageformats" package
|
|