JPEG XL

jxl

Anything JPEG XL related

A homosapien
2025-02-06 04:17:52
ZopfliPNG is inefficient even compared to other PNG optimizers like Oxipng/ECT/Pingo, unless you are talking about Oxipng's Zopfli setting
Meow
2025-02-06 04:27:37
FileOptimizer seems to use it for PNG by default, and the speed is slower than that JXL spell
damian101
2025-02-07 10:53:34
`oxipng -Z --fast` is all you need
2025-02-07 11:30:12
<@207980494892040194> Btw, I had a lengthy discussion with the ect devs more than a year ago arguing that `-strip` should not remove essential color metadata by default. Simply because it is not a lossless operation. But they disagreed for some reason. Our discussion was interrupted by GitHub restricting my old GitHub account, which made the issue not openly visible anymore, not even to the owners of the repository.
2025-02-07 11:31:03
So, that's an even bigger reason why anything that ect does better should be integrated into oxipng, rather than the other way around.
A homosapien
2025-02-07 04:20:41
Good to know. I'm adding that to the list of reasons why I recommend Oxipng over ECT.
2025-02-07 04:21:31
I want to write a small blog post, or add a section about PNG optimization on the x266 wiki. I should also mention it's generally good practice to optimize your files before converting to jxl.
2025-02-07 04:31:42
2025-02-07 04:38:32
I forgot to add that Oxipng can strip junk metadata without touching color profiles. Which can be helpful, https://github.com/libjxl/libjxl/issues/3973
Quackdoc
2025-02-07 04:40:17
what does an empty alpha channel mean? all 0? if so then this is actually probably necessary since many people produce spec violating pngs by encoding associated alpha pngs
spider-mario
2025-02-07 04:40:59
all 1, rather, no?
2025-02-07 04:41:49
I find it dubious that the PNG's palette would change anything, though
Quackdoc
2025-02-07 04:42:06
I wonder if all 1 needs to be preserved? I don't think so but my brain isn't at 100% today
spider-mario
2025-02-07 04:42:08
as far as I know, libjxl does nothing with that information; it just retrieves the pixel data
2025-02-07 04:42:12
or is that not true anymore?
2025-02-07 04:43:14
yeah
```c++
png_set_palette_to_rgb(png_ptr);
```
Quackdoc
2025-02-07 04:46:50
does pngquanting an image beforehand also significantly help with lossy encoding?
A homosapien
2025-02-07 04:48:04
That's good to know. I hope one day cjxl will have better palette sorting.
Quackdoc
2025-02-07 04:48:24
~~add libquant to cjxl when~~
A homosapien
2025-02-07 04:52:15
I still remember that dude compressing a flat map with Avif and the lines got all smeary 🥲. Like just quantize it, the image compressed way better when paired with lossless jxl.
jonnyawsom3
Quackdoc does pngquanting an image beforehand also significantly help with lossy encoding?
2025-02-07 05:09:34
It'd help with the lossless mode, since quantizing after quantization is... counterproductive
jjrv
2025-02-08 12:07:04
For some reason I started writing a decoder in Zig. Not sure if going to continue, but been parsing the headers so far. Ran into this in both libjxl and the ISO standard:
```c++
if (visitor->Conditional(type == ExtraChannel::kCFA)) {
  JXL_QUIET_RETURN_IF_ERROR(visitor->U32(Val(1), Bits(2), BitsOffset(4, 3),
                                         BitsOffset(8, 19), 1, &cfa_channel));
}
```
Do I understand correctly that according to those distributions the values can be 1, 0-1, 3-18 or 19-274? What if I want to represent the value 2?
jonnyawsom3
2025-02-08 12:15:54
It does seem like a strange limitation, but CFA was made for Bayer images, so bit depth starts at 8 and only goes up to 14 in current cameras as far as I'm aware. Maybe it was done to make hardware implementations spec compliant?
Tirr
2025-02-08 12:16:12
it's 0-3, not 0-1
jjrv
2025-02-08 12:17:32
🙈
2025-02-08 12:21:09
Sorry, brain fart, the weirdness there is two ways to represent 3, started misinterpreting, better get some sleep 🙈
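With Tirr's correction, those U32 ranges are {1}, 0-3, 3-18, and 19-274. A toy Python sketch of the selector scheme quoted above (hypothetical helper names and a stand-in bit reader, not the real libjxl API):

```python
def u32(read_bits, dists):
    """Decode a JPEG XL-style U32 field: a 2-bit selector picks one of four
    distributions; Val(n) is the constant n, Bits(n) reads n bits, and
    BitsOffset(n, off) reads n bits and adds off."""
    kind, n, offset = dists[read_bits(2)]
    if kind == "val":
        return n
    return read_bits(n) + offset

# Distributions from the cfa_channel example:
# Val(1), Bits(2), BitsOffset(4, 3), BitsOffset(8, 19)
CFA = [("val", 1, 0), ("bits", 2, 0), ("bits", 4, 3), ("bits", 8, 19)]

def from_stream(values):
    """Tiny stand-in bit reader: pops pre-split (selector, payload) values."""
    it = iter(values)
    return lambda nbits: next(it)

# The value 2 is representable via the Bits(2) branch (selector 1, payload 2),
# and 3 indeed has two encodings (selector 1 payload 3, or selector 2 payload 0).
assert u32(from_stream([1, 2]), CFA) == 2
assert u32(from_stream([1, 3]), CFA) == u32(from_stream([2, 0]), CFA) == 3
```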
2025-02-08 12:34:34
Got curious today about the decoding and started this. Goal is to target wasm for decoding maps. Means the data can be trusted, all safety checks can be disabled (the browser enforces its own anyway) and there's access to GPU and a JIT compiler for generated code easily available. Not sure whether they help. If browsers later add JXL support, higher bit depths might still be an extra.
jonnyawsom3
jjrv Got curious today about the decoding and started this. Goal is to target wasm for decoding maps. Means the data can be trusted, all safety checks can be disabled (the browser enforces its own anyway) and there's access to GPU and a JIT compiler for generated code easily available. Not sure whether they help. If browsers later add JXL support, higher bit depths might still be an extra.
2025-02-08 12:46:29
Reminded me of this comment by Jon https://discord.com/channels/794206087879852103/1065165415598272582/1238399364561506315
jjrv
2025-02-08 12:47:49
I'd at least like to enable such experiments. Wasm is intrinsically cross-platform and secure 🤔 JS less so but maybe the generated code can't too easily include arbitrary browser API calls.
Tirr
2025-02-08 12:48:01
jonnyawsom3
2025-02-08 12:54:12
> changing MA subtree into a table seems doable
Efforts 2 and 3 use fixed MA trees, with lookup tables in the decoders. They haven't been tuned or tweaked in years, so it would be possible to make an optimised table for the maps, then embed the LUT into the decoder for blazing speed (hopefully)
2025-02-08 12:54:42
Unfortunately the MA tree saving/loading idea is still only an idea
Meow
`oxipng -Z --fast` is all you need
2025-02-08 02:15:30
Seems `-o 4` is already enough. Very slightly larger but much faster
VcSaJen
A homosapien
2025-02-08 06:36:22
This is mixing internal file structure and default decode output+metadata. Technically, if the format was designed that way, you could have grayscale stored internally, but it would output full RGB by default. Same with ARGB on a 100% solid image: internally stored without alpha, which is then reconstructed to solid alpha on decode.
_wb_
2025-02-08 06:39:14
libjxl should support that, you can get an rgba buffer from a grayscale image if you want iirc
CrushedAsian255
jjrv I'd at least like to enable such experiments. Wasm is intrinsically cross-platform and secure πŸ€” JS less so but maybe the generated code can't too easily include arbitrary browser API calls.
2025-02-10 05:37:10
Does wasm support JITting other wasm code?
2025-02-10 05:37:30
Like can one wasm function write wasm to another part of memory and call it?
jjrv
CrushedAsian255 Like can one wasm function write wasm to another part of memory and. Call it?
2025-02-10 05:43:00
AFAIK no. It has to go through JS (or whatever is calling the wasm), which would initialize a new wasm module. I'm considering generating JS; it can be quite fast when dealing with basic arithmetic on buffer contents, and it can directly access the wasm memory.
2025-02-10 05:46:03
Trying not to spend too much time on this project right now. Zig can easily call C, so maybe I can reuse J40 and then look into which parts can be rewritten specifically for a wasm target.
AccessViolation_
CrushedAsian255 Does wasm support JITting other wasm code?
2025-02-10 08:58:14
Wasm code is immutable, so no
CrushedAsian255
2025-02-10 08:58:33
Shame
AccessViolation_
2025-02-10 08:59:41
it's a part of the security considerations if you're interested in reading about them: <https://webassembly.org/docs/security/>
Demiurge
2025-02-10 09:01:28
Sounds like W^X enforcement?
AccessViolation_
2025-02-10 09:01:53
yup
2025-02-10 09:05:22
it also helps with simplicity. knowing Wasm has structured control flow (no random jumps all over the place) that's validated before it's run, a runtime won't have to 'bail out' of JIT when functions receive data it didn't expect them to, or deal with parts that can't be JITed because of weird jumps. functions are units which can always be JIT compiled regardless of what calls them or what they call. and thanks to strong typing the JIT compiler won't have to observe what types of data are passed into a function to optimize for it, it knows ahead of time
Demiurge
2025-02-10 09:06:06
Entire classes of security bugs would be fixed forever if sane defaults were enforced in the C runtime
2025-02-10 09:06:15
Like W^X
AccessViolation_
2025-02-10 09:06:42
from what you've said i think you'll really like some of the design decisions of Wasm
2025-02-10 09:06:45
they basically thought the same
Demiurge
2025-02-10 09:07:08
OpenBSD is on the right track too
2025-02-10 09:07:51
Especially adding library functions to make security more convenient
2025-02-10 09:08:51
And strictly enforcing a lot of no brainer security
2025-02-10 09:09:48
If security isn't convenient and obvious then it won't get used
AccessViolation_
2025-02-10 09:14:16
yeah i agree
2025-02-10 09:17:59
i'm really glad rust is seeing as much adoption as it is. memory safety is another one of those things where i just can't believe it isn't the default basically always. now that memory safety is no longer a performance concession this eliminates like, 80% of ALL vulnerabilities, just by virtue of code being memory safe by default
2025-02-10 09:18:09
and Wasm on the server is also an interesting development
2025-02-10 09:19:12
i don't remember the exact quote, but one of the designers of Docker said "if Wasm existed back then we wouldn't have needed to create Docker"
2025-02-10 09:25:58
i've been thinking about Wasm game mods too. the game devs provide the api, and modders could write the mods in any language and there is no possibility of the mods being malicious (so long as they're actual Wasm modules and not secretly executables, but the game can pretty easily include a checker that tells you if mods are safe just by checking if they're Wasm)
jonnyawsom3
AccessViolation_ i've been thinking about Wasm game mods too. the game devs provide the api, and modders could write the mods in any language and there is no possibility of the mods being malicious (so long as they're actual Wasm modules and not secretly executables, but the game can pretty easily include a checker that tells you if mods are safe just by checking if they're Wasm)
2025-02-10 12:46:07
The developer of Resonite (if VRChat and Garry's Mod had a baby) wants to add WASM support for actual programming in the game, with inputs and outputs to control its own node-based systems
AccessViolation_
2025-02-10 02:10:21
oh that's awesome
2025-02-10 02:31:50
honestly Wasm is amazing in so many ways
2025-02-10 02:33:34
it's the "we can have nice things" platform
Demiurge
2025-02-10 07:45:57
But C can be pretty safe too just by using some basic things that have existed for a long time. OpenBSD doesn't use rust, but it uses things like pledge and unveil to define and revoke access to unneeded privileges on a per process basis, high quality random numbers always instantly available to any process even at early boot time, and lots of sensible restrictions to avoid memory corruption bugs
2025-02-10 07:46:34
Memory corruption bugs can still happen but they will cause a program to crash
2025-02-10 07:47:12
Rather than execute attacker controlled code
2025-02-10 07:51:51
Pledge and unveil are pretty new but everything else is pretty old. And it would be nice if other platforms could make pledge and unveil available since they're another obvious win for literally everyone. Would be nice if they were added to the musl libc on Linux
2025-02-10 07:52:47
Linux has a similar thing called Landlock but it's not as convenient for application developers
2025-02-10 07:53:25
Whenever Linux adds a security feature they always make it as inconvenient as possible to ensure it won't ever be relevant or get used at all
2025-02-10 07:54:03
Except by the tech priests that give us weird crap like Docker
AccessViolation_
2025-02-10 10:24:58
there's probably a handful of low hanging fruit things you can do to make C safer. i know telling the compiler to emit bounds checks on array accesses has a minimal performance impact (of like 1-2% according to a study from Apple)
2025-02-10 10:26:51
other things are much harder to do, and many are practically impossible unless you want basically any code to be marked as "potentially unsafe" because the language design results in there being very very few situations where it can statically know with certainty that code is safe
2025-02-10 10:27:26
i should read more about what OpenBSD does, i know very little about it
lonjil
2025-02-10 10:28:25
They really don't do much
2025-02-10 10:29:02
I mean, they compile C code with a lot more known mitigations than most people, but it doesn't fix any fundamental problems.
AccessViolation_
2025-02-10 10:29:50
hmm i see
2025-02-10 10:32:00
it'll be interesting to see reports on the security of Redox OS in the future. Redox is an operating system and kernel written entirely in Rust, and probably the first real shot we will have at evaluating the effects of an entire operating system being written in Rust
2025-02-10 10:33:18
Not necessarily the OS itself, but more a look back on the security vulnerabilities found and patched, compared to a set of vulnerabilities in C kernels
2025-02-10 10:36:09
it's surprisingly usable too, it runs libcosmic applications and balatro for some reason <:galaxybrain:821831336372338729>
lonjil I mean, they compile C code with a lot more known mitigations than most people, but it doesn't fix any fundamental problems.
2025-02-10 10:41:17
i thought it also had a more secure architecture, but that's just what i've heard in passing
lonjil
2025-02-10 10:43:24
security in OpenBSD comes mostly from processes asking the kernel to remove their ability to do things they're not supposed to be able to do.
AccessViolation_
2025-02-10 10:44:37
as a person who should not be given any responsibilities ever i can get behind that methodology
lonjil
AccessViolation_ it's surprisingly usable too, it runs libcosmic applications and balatro for some reason <:galaxybrain:821831336372338729>
2025-02-10 10:46:04
Balatro is written entirely in Lua and runs on the LÖVE engine, which is open source and I guess has been ported to Redox.
AccessViolation_
2025-02-10 10:46:21
it has indeed
2025-02-10 10:46:42
https://fosstodon.org/@soller/113800311529264960
2025-02-10 10:49:09
> Jeremy Soller ported the LOVE game engine to Redox, to achieve this he did the following tasks: > - Ported the libmodplug, libtheora, and openal-soft libraries > - Fixed bugs and implemented some functions in relibc
jonnyawsom3
The developer of Resonite (if VRChat and Garry's Mod had a baby) wants to add WASM support for actual programming in the game, with inputs and outputs to control its own node-based systems
2025-02-10 11:35:41
https://github.com/Yellow-Dog-Man/Resonite-Issues/issues/1211
Demiurge
2025-02-11 02:00:07
Yeah, like I said, memory corruption is still possible but it typically will just crash immediately instead of doing something worse
2025-02-11 02:00:32
It doesn't fundamentally change the issues with C but just detects them at runtime
2025-02-11 02:01:07
And makes it a lot safer in the process... but it's different than static compile time validation
Orum
lonjil security in OpenBSD comes mostly from processes asking the kernel to remove their ability to do things they're not supposed to be able to do.
2025-02-11 02:31:57
if you're referring to their tagline of "Only two remote holes in the default install, in a heck of a long time!" that's incredibly misleading
2025-02-11 02:32:37
FreeBSD has capsicum as well which allows processes to revoke their own privileges
lonjil
Orum FreeBSD has capsicum as well which allows processes to revoke their own privileges
2025-02-11 02:37:40
And Capsicum is way better honestly.
Orum if you're referring to their tagline of "Only two remote holes in the default install, in a heck of a long time!" that's incredibly misleading
2025-02-11 02:38:10
Only 2 remote holes in an install with no features, how amazing!
Orum
2025-02-11 02:38:22
yeah, literally no one uses a default install
CrushedAsian255
2025-02-11 02:38:44
Can the process reclaim privileges after revoking them?
Orum
2025-02-11 02:39:12
not sure, but I don't think so as that would kind of defeat the purpose
CrushedAsian255
2025-02-11 02:39:30
Ah, so you only revoke permissions you KNOW you will NEVER need?
2025-02-11 02:40:04
Cause like what if the application is able to run in offline or online mode, you want to revoke network for offline mode but then what if the user wishes to switch to online mode?
Orum
2025-02-11 02:40:07
the main point of it is to secure things that cannot be secured via more traditional means
2025-02-11 02:40:59
e.g. I think ping requires root access as it's ICMP based, but you don't want some vuln in ping compromising the system
CrushedAsian255
2025-02-11 02:41:58
So like ACLs for each process's access to the kernel?
Orum
2025-02-11 02:42:23
well unlike ACLs it's not readily user-configurable (or at least, not intended to be)
CrushedAsian255
2025-02-11 02:42:24
And specific parameters for each syscall?
lonjil
CrushedAsian255 Can the process reclaim privileges after revoking them?
2025-02-11 02:42:30
Under OpenBSD, I'm pretty sure no. Under Capsicum, sort of. Capsicum is based on capabilities, which means a process can only access a resource it is given. It's mostly based on file descriptors. So you can't open files or folders, but if you have an FD from before you enabled removing access, or if someone sends an FD to you over a unix socket, you can access that FD. So e.g. another process could send you an FD for a folder, and now you can open any files in that folder.
CrushedAsian255
lonjil Under OpenBSD, I'm pretty sure no. Under Capsicum, sort of. Capsicum is based on capabilities, which means a process can only access a resource it is given. It's mostly based on file descriptors. So you can't open files or folders, but if you have an FD from before you enabled removing accesss, or if someone sends an FD to you over a unix socket, you can access that FD. So e.g. another process could send you an FD for a folder, and now you can open any files in that folder.
2025-02-11 02:43:27
Like the Capsicum process can call a native file picker dialog and that returns a fd to the selected file and now the process can access the file?
Orum
CrushedAsian255 And specific parameters for each syscall?
2025-02-11 02:43:30
probably better to just read the docs at this point: https://wiki.freebsd.org/Capsicum
lonjil
CrushedAsian255 Like the Capsicum process can call a native file picker dialog and that returns a fd to the selected file and now the process can access the file?
2025-02-11 02:44:29
Yeah
Hello71
CrushedAsian255 Cause like what if the application is able to run in offline or online mode, you want to revoke network for offline mode but then what if the user wishes to switch to online mode?
2025-02-11 11:19:21
the syscall firewall assumes that an attacker who can maliciously run the "access network" code can also run the "enable online mode" code
Just me
2025-02-12 04:13:30
Casual users and journalists need several test files per feature...
2025-02-12 04:14:18
Do you have plans for several levels of compliance?
2025-02-12 04:14:39
English Wikipedia gives just a rough idea about support levels. Maybe MDN is one of the best places to publish fine details about support among software...
2025-02-12 04:40:46
Tests for web browsers can help as well... https://web-platform-tests.org
CrushedAsian255
Just me English Wikipedia gives just a rough idea about support levels. Maybe MDN is one of the best places to publish fine details about support among software...
2025-02-12 08:51:50
Is this about JPEG XL or Capsicum?
lonjil
2025-02-12 10:47:10
JXL
Orum
2025-02-12 04:50:04
I've noticed recent (0.11.x) JXL is a lot better in dark areas than older versions
2025-02-12 04:50:22
still far from perfect but definitely better
jonnyawsom3
2025-02-12 05:11:00
Oh, I was actually about to complain about crushed blacks in an image I just tested on
Orum
2025-02-12 05:48:40
yeah, I can still find examples where it has problems, but at least I don't need to go to -d 0.2 as much as I used to in order to get transparent images
jonnyawsom3
2025-02-12 05:51:57
I tried lowering distance to 0.3 but it had hardly any effect. Ended up setting an intensity target of 2000 nits at just distance 1 which was good enough
Orum
2025-02-12 11:21:12
yeah but raising the intensity target fucks up displaying/decoding the images in some apps
2025-02-12 11:21:26
so I finally gave up doing that
derbergπŸ›˜
2025-02-13 10:39:27
interop results today?
Traneptora
lonjil security in OpenBSD comes mostly from processes asking the kernel to remove their ability to do things they're not supposed to be able to do.
2025-02-13 10:51:42
I was under the impression that they also aggressively randomized their address space more than most other OSes
2025-02-13 10:52:00
which means illegal accesses will crash more often
2025-02-13 10:52:04
(so bugs can be fixed)
lonjil
Traneptora I was under the impression that they also aggressively randomized their address space more than most other OSes
2025-02-13 10:53:25
They were the first to start doing that but I think everyone does it now.
Traneptora
2025-02-13 10:54:09
that sounds about right. I think a lot of the early security-conscious decisions were pioneered by them but now most of them are also done by Linux
2025-02-13 10:54:13
and most other OSes
Demiurge
2025-02-13 11:21:42
They are usually the first ones to adopt proactive/defensive programming techniques before they eventually get adopted by everyone else. They don't care about breaking compatibility or even removing functionality sometimes (like completely disabling CPU hyperthreading) if it is the simplest way to eliminate security flaws
2025-02-13 11:28:04
They removed their linux compatibility layer also, because allegedly it was too complicated and likely to have bugs that risk circumventing security or something.
Traneptora
2025-02-14 06:53:37
it's a different OS so having linux emulation seems like it's probably not worth maintaining
2025-02-14 06:54:03
especially considering that Linux promises abi-level kernel compat
2025-02-14 06:54:08
and bsd does not
Quackdoc
2025-02-14 05:48:27
it makes a lot more sense for BSD to invoke something like crosvm for seamless vms
spider-mario
2025-02-15 04:03:41
2025-02-15 04:03:46
who is this "Jonneke Bos"?
2025-02-15 04:05:08
<@794205442175402004> could that be you?
2025-02-15 04:05:18
(whatever it would mean for them to be you)
2025-02-15 04:06:54
ah, no, then it wouldn't be "(then at Google)"
_wb_
2025-02-15 04:17:42
Lol who is Jonneke Bos?
Orum
2025-02-15 04:23:58
AI hallucinating again
Mine18
spider-mario
2025-02-15 04:50:38
veluca
2025-02-15 04:52:17
not *entirely* wrong, we were part of the JPEG committee during development
Meow
2025-02-15 05:47:55
No result of "Jonneke Bos" on DDG
2025-02-15 05:50:15
JPEG XL was secretly developed by a mysterious legend
AccessViolation_
Meow No result of "Jonneke Bos" on DDG
2025-02-15 05:51:56
sounds dutch
2025-02-15 05:52:22
maybe one of jon's alternate identities
2025-02-15 05:52:33
it even starts with jon
jonnyawsom3
2025-02-15 06:04:34
Maybe it got confused with Jonathan Ross
Demiurge
2025-02-15 08:13:01
You're playing with a random number generator toy... obviously it's going to generate random nonsense, that's what it's programmed to do
2025-02-15 08:14:03
Its main purpose by design is to generate almost plausible sounding randomness
2025-02-15 08:14:31
Why would you ever ask it anything except just for lulz
Traneptora
2025-02-16 07:46:49
I don't know if there's any way to disable google's AI suggest when logged in
2025-02-16 07:46:52
I wish there was
A homosapien
Traneptora I don't know if there's any way to disable google's AI suggest when logged in
2025-02-16 09:56:22
Go into settings -> Search engine -> Manage search engines and site search -> Site search -> add:
```
Name: AI-free-google
Shortcut: @web
URL with %s in place of query: {google:baseURL}search?q=%s&udm=14
```
-> three dots "make default"
2025-02-16 09:58:41
I followed this article, seems to work for me https://www.androidauthority.com/how-to-turn-off-ai-overviews-google-3445771/
Quackdoc
2025-02-16 06:02:16
me: How bright is 20,000 nits?
Google AI: 20,000 nits is bright. 10,000 nits is like looking at a paper on a bright sunny day. 20,000 nits is how bright the sun is.
_wb_
2025-02-16 06:47:45
The sun is more like a billion nits
2025-02-16 06:49:49
If the sun was only 20k nits, a typical light bulb would look brighter than the sun 🙂
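The billion-nits figure can be sanity-checked with some round numbers: direct sunlight is on the order of 100,000 lux, and the solar disc subtends about half a degree. These are assumed approximations, not measured values:

```python
import math

illuminance_lux = 100_000              # direct sunlight at ground level, order of magnitude
half_angle = math.radians(0.53 / 2)    # angular radius of the solar disc
solid_angle = math.pi * half_angle**2  # small-angle approximation, in steradians

# Luminance of a small source = illuminance it produces / its solid angle
luminance_nits = illuminance_lux / solid_angle
print(f"sun: ~{luminance_nits:.1e} nits")
```

This lands around 1.5e9 cd/m^2, so "more like a billion nits" checks out, and 20,000 nits is off by almost five orders of magnitude.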
Quackdoc
2025-02-16 08:25:27
another one, but this time 30k nits
spider-mario
2025-02-17 08:59:11
if you are truly looking for those numbers, Wikipedia has a few: https://en.wikipedia.org/wiki/Orders_of_magnitude_(illuminance)#Luminance
Orum
2025-02-17 01:19:04
and the fun part is you can still overpower it
excatron
Demiurge For that particular scenario, you must compare the file hash. For other lossless scenarios (like png) you must compare the pixel hash.
2025-02-17 02:32:22
Thank you sm, can you help me understand why the pixel hash of a jpg and its losslessly transcoded jxl counterpart does not match? I mean that the file hash can't match is very obvious to me. But why is there a difference between comparing a jpg and the reconstructed jpg (same pixel hash) and just straight up comparing the jpg with the lossless jxl (different pixel hash), if the jxl contains the original jpg anyways. In my head magick should give the same pixel hash, right?
username
excatron Thank you sm, can you help me understand why the pixel hash of a jpg and its losslessly transcoded jxl counterpart does not match? I mean that the file hash can't match is very obvious to me. But why is there a difference between comparing a jpg and the reconstructed jpg (same pixel hash) and just straight up comparing the jpg with the lossless jxl (different pixel hash), if the jxl contains the original jpg anyways. In my head magick should give the same pixel hash, right?
2025-02-17 03:21:04
the JPEG spec has the decoding process as non-strict, so different JPEG decoders can and commonly do give slightly different results/pixels. In ImageMagick (and probably most programs that handle multiple formats), libjxl is used for JPEG decoding when decoding a JPEG that has been transcoded to a JXL, while decoding a non-JXL JPEG results in some other JPEG decoder being used, hence why the pixel hash ends up different between JPEG and JXL JPEG
Traneptora
A homosapien Go into settings -> Search engine -> Manage search engines and site search -> Site search -> add: ```Name: AI-free-google Shortcut: @web URL with %s in place of query: {google:baseURL}search?q=%s&udm=14``` -> three dots "make default"
2025-02-17 05:58:45
oh so it's udm=14
2025-02-17 05:58:52
as a GET parameter
2025-02-17 05:58:54
ty
2025-02-17 06:04:17
the instructions listed were for Chrome, but for anyone here who uses firefox:
2025-02-17 06:04:47
1. Go to `about:config` and find the option named `browser.urlbar.update2.engineAliasRefresh`. It may not exist; if it doesn't, create it with the `+` and set its value to `true`.
2. Go to `about:preferences#search`. At the bottom of "Search Shortcuts" there should be a new "Add" button that wasn't there before. Add a new engine called "Google Without AI" (or whatever you want) and set the URL to `https://www.google.com/search?udm=14&q=%s`
3. If you want it to be the default, it'll appear in the default search engine selection list above.
excatron Thank you sm, can you help me understand why the pixel hash of a jpg and its losslessly transcoded jxl counterpart does not match? I mean that the file hash can't match is very obvious to me. But why is there a difference between comparing a jpg and the reconstructed jpg (same pixel hash) and just straight up comparing the jpg with the lossless jxl (different pixel hash), if the jxl contains the original jpg anyways. In my head magick should give the same pixel hash, right?
2025-02-17 06:11:31
JPEG decoding isn't fully specified, which means that you can decode a JPEG file in two slightly different, but both legal ways.
2025-02-17 06:12:01
what's likely happening is that the JPEG decoder used by imagemagick (libjpeg) is producing a slightly different result than you'd get for jpeg -> jxl -> pixels
jonnyawsom3
2025-02-17 07:06:52
There's a long discussion about it on GitHub, but TLDR the JXL is higher quality than the original JPEG too
Demiurge
excatron Thank you sm, can you help me understand why the pixel hash of a jpg and its losslessly transcoded jxl counterpart does not match? I mean that the file hash can't match is very obvious to me. But why is there a difference between comparing a jpg and the reconstructed jpg (same pixel hash) and just straight up comparing the jpg with the lossless jxl (different pixel hash), if the jxl contains the original jpg anyways. In my head magick should give the same pixel hash, right?
2025-02-17 10:50:01
different jpeg decoders produce slightly different pixel output, even for the same exact JPEG data.
2025-02-17 10:51:32
The FILE hash WILL match if you decode it back to JPEG
2025-02-17 10:52:50
It should decode to the EXACT same file, but the PIXEL data will be slightly different when you compare different jpeg decoders. If you losslessly convert a jpeg to a jxl, and decode the resulting jxl to pixels, you're using the libjxl decoder. If you decode the original jpeg directly, you're using a different, older decoder.
2025-02-17 10:53:39
This only applies to JPEG files. For lossless pixel data like PNG, you want to compare the pixel hash, not the file hash.
2025-02-17 10:54:00
since it won't decode to the exact same file but it should decode to the same pixels
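The two comparisons described above can be sketched with plain hashes (synthetic byte strings, purely illustrative):

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Hash of the container bytes (what lossless JPEG->JXL->JPEG preserves)."""
    return hashlib.sha256(data).hexdigest()

def pixel_hash(pixels: bytes) -> str:
    """Hash of decoded pixel data (what lossless pixel formats like PNG preserve)."""
    return hashlib.sha256(pixels).hexdigest()

# Two decoders, same JPEG: one sample off by one step is still conformant,
# so the pixel hashes differ even though the input file is identical.
decoder_a = bytes([118, 200, 31])   # made-up pixel values
decoder_b = bytes([117, 200, 31])
assert pixel_hash(decoder_a) != pixel_hash(decoder_b)

# A lossless JPEG round trip restores the exact bytes, so file hashes match.
original = b"\xff\xd8 ...jpeg payload... \xff\xd9"   # placeholder bytes
restored = b"\xff\xd8 ...jpeg payload... \xff\xd9"
assert file_hash(original) == file_hash(restored)
```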
Orum
Traneptora JPEG decoding isn't fully specified, which means that you can decode a JPEG file in two slightly different, but both legal ways.
2025-02-18 01:30:41
just curious, do you know of something that will let you decode them in both ways?
_wb_
2025-02-18 03:29:18
```
djxl recompressed_jpeg.jxl tmp.jpg
djxl recompressed_jpeg.jxl decoded_by_libjxl.png
convert tmp.jpg decoded_by_libjpeg-turbo.png
```
Orum
2025-02-18 03:30:55
ah, so djxl always uses one method, and libjpeg-turbo the other?
_wb_
2025-02-18 06:22:34
Actually there are options in libjpeg-turbo that will produce slightly different outputs, try `djpeg --help`
Traneptora
Orum just curious, do you know of something that will let you decode them in both ways?
2025-02-18 06:22:36
"decode them both ways" it's not like there's two different ways
2025-02-18 06:22:42
it's just that there's some tolerance
jonnyawsom3
2025-02-18 06:26:08
Your wording was slightly confusing, making it sound like there's *only* two ways, instead of 2 different implementations of it
AccessViolation_
2025-02-18 06:27:42
is this about fast floating point behavior that's allowed to differ between architectures? Wasm forbids that, so you might be able to use a decoder compiled to Wasm? Apparently things can even differ between multiple runs on the same computer, huh
_wb_
2025-02-18 06:30:48
JPEG was designed when float arithmetic was still something pretty fancy and costly, so conformance was defined in a way that allows implementing stuff approximately with 16-bit integer arithmetic and 8-bit int intermediate quantization. Libjxl uses float32 all the way, which would be unthinkable in 1992 but is now actually pretty acceptable.
2025-02-18 06:31:25
In principle you could make a double precision JPEG decoder that is even more accurate. Though the differences would be extremely tiny.
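As a toy illustration of that tolerance, here is a single 1D IDCT sample computed with float arithmetic versus a 16-bit fixed-point cosine table; the coefficients and the fixed-point scheme are made up for the example, not the actual conformance math:

```python
import math

coeffs = [80, -35, 12, -7, 3, -2, 1, 0]  # made-up dequantized DCT coefficients

def idct_sample_float(coeffs, x):
    """Standard JPEG 1D inverse DCT of sample x, in floating point."""
    total = 0.0
    for u, c in enumerate(coeffs):
        cu = math.sqrt(0.5) if u == 0 else 1.0
        total += cu * c * math.cos((2 * x + 1) * u * math.pi / 16)
    return total / 2

def idct_sample_fixed(coeffs, x, scale=1 << 14):
    """Same sum, but with each cosine pre-rounded to 16-bit fixed point."""
    total = 0
    for u, c in enumerate(coeffs):
        cu = math.sqrt(0.5) if u == 0 else 1.0
        k = round(cu * math.cos((2 * x + 1) * u * math.pi / 16) * scale)
        total += c * k
    return (total + scale) // (2 * scale)  # divide by 2*scale with rounding

f = round(idct_sample_float(coeffs, 0))
i = idct_sample_fixed(coeffs, 0)
```

Accumulated over a whole 8x8 block, those off-by-one rounding steps are exactly why two legal JPEG decoders can produce slightly different pixels.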
AccessViolation_
2025-02-18 06:37:12
<https://doc.rust-lang.org/nightly/std/primitive.f128.html> >:3c
2025-02-18 06:37:50
introducing the slowest highest precision jpeg decoder
jonnyawsom3
2025-02-19 07:17:03
If anyone has an account, maybe they can suggest TinyDNG or Adobe's tool instead https://superuser.com/questions/1880519/encode-multiple-raw-photos-to-jpeg-xl
RaveSteel
2025-02-19 01:03:42
I think he just wants to straight up convert DNG to JXL, which neither tinydng nor adobe's DNG converter do
2025-02-19 01:04:13
But tinydng would be the better solution, yes
VcSaJen
AccessViolation_ <https://doc.rust-lang.org/nightly/std/primitive.f128.html> >:3c
2025-02-20 01:56:07
I miss Extended type from Delphi. Nowadays it's alias of Double, but it used to be true 80 bit float.
spider-mario
2025-02-20 02:09:46
like `long double` in C and C++?
2025-02-20 02:10:49
https://en.wikipedia.org/wiki/Long_double#Implementations
CrushedAsian255
2025-02-21 12:21:26
proposal: binary128 should be called `really long double`
spider-mario
2025-02-21 01:55:47
or `double double`
_wb_
2025-02-21 02:25:46
`quadruple`
Cacodemon345
2025-02-22 03:36:40
`long long double`.
2025-02-22 03:37:10
(Still no `short float` type in C post-Sapphire Rapids.)
jonnyawsom3
2025-02-22 05:29:19
<@526322998521888768> how was progress on the satellite by the way? I'm sure we'd all love to hear some updates haha
AccessViolation_
2025-02-22 08:25:49
yeah!
_wb_
2025-02-24 11:14:36
Just for fun, I'd like to make a map of where jxl devs are from — say one for birthplace and one for current place of residence (though the latter might be a bit more boring since it will be mostly around Zurich 🙂 So <@&803357352664891472> feel free to tell me because I don't actually know for most of you. I was born in Leuven, Belgium and currently live in Asse, Belgium.
veluca
_wb_ Just for fun, I'd like to make a map of where jxl devs are from — say one for birthplace and one for current place of residence (though the latter might be a bit more boring since it will be mostly around Zurich 🙂 So <@&803357352664891472> feel free to tell me because I don't actually know for most of you. I'm born in Leuven, Belgium and currently live in Asse, Belgium.
2025-02-24 11:16:04
Genoa! (If you also want to add uni places: Pisa)
spider-mario
2025-02-24 11:17:30
birth - 6 years old: Paris
6-17: Paris suburbs (Montigny-lès-Cormeilles)
17-22: studies in Rouen (but regularly visiting the parents in Montigny)
22-31 (now): Zürich
2025-02-24 11:19:47
I guess Montigny is probably the centre of the France era
_wb_
2025-02-24 11:21:32
also if you know for others who are maybe not on this discord or not regularly, let me know
2025-02-24 11:23:30
I know Jyrki is from Finland originally but I don't remember which city/town, Jan and Moritz are from Germany but I also don't remember from where exactly
spider-mario
2025-02-24 11:29:10
I know slightly, but not much, more precisely for Jyrki (Lapland)
_wb_
2025-02-24 02:54:47
how about <@987371399867945100> and <@811568887577444363> ? Judging by just the names I would guess Hungary and Russia but guessing origins by name alone is dangerous 🙂
2025-02-24 02:58:41
oh and <@768090355546587137> ? we both went to uni in Leuven but I don't actually know which city you're from
jonnyawsom3
2025-02-24 03:05:47
We'll need to make a JXL art spline map and put people on it using layers ;P
CrushedAsian255
2025-02-24 08:16:29
Is jpeg xl a real format?
2025-02-24 08:16:53
Or is it just propaganda made by the anti-AVIF political party
spider-mario
2025-02-24 08:19:55
like some sort of bogeyman?
2025-02-24 08:20:38
the Mallory to Alice and Bob?
Me
2025-02-24 08:34:41
<@238552565619359744> Hey! Thanks for asking. Not a lot of progress. We've measured the time to generate images on the pi zero2w we're flying, but we've not yet written our python code to generate the set of layers we'd like to better enable peeling smaller images off. Current step is students are working on xmodem code (lol) to download image artifacts from pi to main flight computer. Figured we should get some end to end testing before fine tuning.
Demiurge
2025-02-25 02:17:16
Double XL
eustas
_wb_ how about <@987371399867945100> and <@811568887577444363> ? Judging by just the names I would guess Hungary and Russia but guessing origins by name alone is dangerous 🙂
2025-02-25 03:46:24
birth - 25yo: Syktyvkar, Komi, Russia (the only city I know with two 'Ы' vowels; even news anchors have trouble spelling it 🙂
2025-02-25 03:46:59
25yo-31yo: Saint-Petersburg, Russia
Traneptora
_wb_ Just for fun, I'd like to make a map of where jxl devs are from — say one for birthplace and one for current place of residence (though the latter might be a bit more boring since it will be mostly around Zurich 🙂 So <@&803357352664891472> feel free to tell me because I don't actually know for most of you. I'm born in Leuven, Belgium and currently live in Asse, Belgium.
2025-02-25 10:11:27
If you're not just including the core devs, then I was born in Cleveland, Ohio, United States and I currently live in Ann Arbor, Michigan, United States
gb82
CrushedAsian255 Or is it just propaganda made by the anti-AVIF political party
2025-02-26 04:48:10
real AVIF enjoyers will know that we really do want JXL too
CrushedAsian255
gb82 real AVIF enjoyers will know that we really do want JXL too
2025-02-26 04:48:23
Yeah lol
gb82
2025-02-26 04:48:40
https://tenor.com/view/handshake-arthur-read-francine-alice-frensky-agree-shake-on-it-gif-19405532
CrushedAsian255
2025-02-26 04:48:47
AVIF for high efficiency and JXL for high quality 😄
gb82
2025-02-26 04:49:08
or even better: an open, unified web platform where we can use whatever we want!
A homosapien
2025-02-26 05:09:28
They complement each other really well.
Avif - low bitrate lossy
Jxl - high-fidelity lossy/best in class lossless
Meow
2025-02-26 05:47:46
> Comrades! The sinister forces of capitalist monopolies have once again conspired to strangle technological progress, pushing AVIF as a false savior while sabotaging the revolutionary potential of JPEG XL! Under the iron grip of profit-hungry corporations, AVIF is but a tool of bourgeois oppression, designed to maintain the status quo of fragmented, patent-entangled formats, ensuring that the workers of the digital world remain shackled to the whims of profit-driven overlords. Meanwhile, the people's champion, JPEG XL, stands as a testament to true innovation—open, efficient, and free from the exploitative chains of capitalist interests. But beware, comrades, for the enemies of progress seek to divide and deceive! We must resist their schemes and stand united for a future where technology serves the people, not the pockets of the capitalist elite!
jonnyawsom3
Me <@238552565619359744> Hey! Thanks for asking. Not a lot of progress. We've measured the time to generate images on the pi zero2w we're flying, but we've not yet written our python code to generate the set of layers we'd like to better enable peeling smaller images off. Current step is students are working on xmodem code (lol) to download image artifacts from pi to main flight computer. Figured we should get some end to end testing before fine tuning.
2025-02-26 12:43:24
Ahhh, if the encode time is tight or you want to bump up an effort level, building libjxl with Clang seems to give it a roughly 20% performance boost. Might save a few joules
A homosapien
2025-02-26 01:48:31
On Linux the difference between gcc and clang is more like 1-5%. Windows users are just shafted by MSVC. Unless the pi is running Windows for some godforsaken reason.
RaveSteel
2025-02-26 01:54:08
Now I am interested in running cjxl over wine
2025-02-26 01:54:55
Especially compared to native Windows
AccessViolation_
Meow > Comrades! The sinister forces of capitalist monopolies have once again conspired to strangle technological progress, pushing AVIF as a false savior while sabotaging the revolutionary potential of JPEG XL! Under the iron grip of profit-hungry corporations, AVIF is but a tool of bourgeois oppression, designed to maintain the status quo of fragmented, patent-entangled formats, ensuring that the workers of the digital world remain shackled to the whims of profit-driven overlords. Meanwhile, the people's champion, JPEG XL, stands as a testament to true innovation—open, efficient, and free from the exploitative chains of capitalist interests. But beware, comrades, for the enemies of progress seek to divide and deceive! We must resist their schemes and stand united for a future where technology serves the people, not the pockets of the capitalist elite!
2025-02-26 03:35:40
this was either generated by an LLM or written by someone that *really* likes adjectives and metaphors
Meow
AccessViolation_ this was either generated by an LLM or written by someone that *really* likes adjectives and metaphors
2025-02-26 03:38:43
> Write one paragraph with the tone of USSR propaganda criticizing AVIF being the plot of capitalists against JPEG XL
CrushedAsian255
AccessViolation_ this was either generated by an LLM or written by someone that *really* likes adjectives and metaphors
2025-02-26 08:15:17
Is LLM like a llama? 🦙
Traneptora
2025-02-26 08:16:39
Large Llama model
AccessViolation_
CrushedAsian255 Is LLM like a llama? 🦙
2025-02-26 08:23:28
a statistically probable llama
2025-02-26 08:24:10
e.g. it might have discontinuities like a fifth leg and no eyes
2025-02-26 08:24:57
and leaked in a torrent after which it was frantically open sourced
CrushedAsian255
2025-02-26 08:28:24
Temperature = 4; now it's an ostrich
AccessViolation_
2025-02-26 08:30:02
checkmate creationists
spider-mario
2025-02-26 10:07:44
but why do llamas still exist then???
2025-02-26 10:08:00
countercheckmate atheists
CrushedAsian255
2025-02-26 10:20:23
because some enterprise systems still rely on the legacy versions
AccessViolation_
spider-mario but why do llamas still exist then???
2025-02-26 10:49:16
calm down calm down, our top scientists are still working on the problem of monkeys
2025-02-26 10:49:18
one thing at a time!
Demiurge
2025-02-27 07:19:02
That doesn't look AI generated, AI has no sense of humor.
2025-02-27 07:22:04
> In the grand struggle against the insidious forces of capitalism, we must recognize the emergence of AVIF as a cunning plot orchestrated by bourgeois interests to undermine the revolutionary potential of JPEG XL. While JPEG XL stands as a beacon of progress, promising superior compression and quality for the collective good, AVIF seeks to distract and divide the working class with its flashy but ultimately hollow promises. This capitalist contrivance aims to entrench proprietary interests and stifle innovation, all while masquerading as a technological advancement. We must unite in our defense of JPEG XL, the true champion of the people's needs, and reject the manipulative tactics of those who would see us shackled to their profit-driven agendas!
2025-02-27 07:22:22
Okay maybe with the right prompt it can, lol
2025-02-27 07:23:58
I like how it's so ridiculous it actually makes AVIF sound good lol
Meow
2025-02-27 09:28:19
Not as _revolutionary_ as what ChatGPT gave me
screwball
2025-02-27 07:15:45
are there any recent updates (in the last 6 months) on Google's refusal to support JPEG XL?
username
screwball are there any recent updates (in the last 6 months) on Google's refusal to support JPEG XL?
2025-02-27 07:30:30
nope, seems like they are playing the long game of "ignore and hope it dies". alsoooo just to be clear Google isn't a hivemind and the refusal to support seems to just be coming from a team or two within Google that have higher decision making power
lonjil
2025-02-27 07:36:43
It's the Chrome team deciding to not have it in Chrome
Demiurge
2025-02-27 08:17:32
It's specifically the former On2 VPx video team that was promoted to the "Chrome codec team" after Google's acquisition of On2
2025-02-27 08:19:27
Specifically the team that wanted the whole world to adopt webp despite criticism from mozilla engineers and others about how flawed and half-baked it was
mincerafter42
lonjil paging <@281958058190700546>
2025-02-27 08:38:09
ah gosh sorry i didn't have notifications on in this group; am i still needed? :p
lonjil
2025-02-27 08:41:29
uhh I guess not but you made that crazy all colors gif right?
DZgas Ж
2025-02-28 06:15:22
Is `-q 100 -e 10 --patches 1 --dots 1 -I 100` the maximum compression possible at the moment? (modular lossless)
2025-02-28 06:19:02
I had a misunderstanding. In November 2022, I compressed a local archive of pictures from a site that has since been deleted, and put it in an archive. Now that I have a more powerful computer, I pulled them out so I could compress the 175 thousand pictures better than before. But I'm a little confused: whatever parameters I try, they all give a worse result. I honestly don't understand why, in 100% of cases, the pictures compress worse than what is already compressed, and I don't remember exactly which parameters I used in 2022. Maybe I'm missing some key
jonnyawsom3
2025-02-28 06:25:12
Generally `-e 10 -g 3 -E 3 -I 100 --brotli_effort 11` is the highest without specialising for specific image content
RaveSteel
2025-02-28 06:26:08
Still 1 KiB larger than the posted image
2025-02-28 06:26:25
Time to try out e11 <:galaxybrain:821831336372338729>
2025-02-28 06:27:40
e11 results in 27.4KiB, slightly smaller than the image's 28KiB
DZgas Ж
jonnyawsom3 Generally `-e 10 -g 3 -E 3 -I 100 --brotli_effort 11` is the highest without specialising for specific image content
2025-02-28 06:30:36
the bruh
2025-02-28 06:31:17
<:galaxybrain:821831336372338729> 3 years of work by the jpeg xl team
RaveSteel
2025-02-28 06:31:23
It happens
2025-02-28 06:32:00
Current release is three times faster after all
jonnyawsom3
2025-02-28 06:32:43
That's over twice as slow
DZgas Ж
RaveSteel Current release is three times faster after all
2025-02-28 06:33:14
-e 9 is much faster than -e 10 in this case. But -e 10 can't compress as well.
2025-02-28 06:33:43
literally, here is a fact: jpeg xl software from 3 years ago can compress better. why?
RaveSteel
2025-02-28 06:34:34
That may be so, but this is a 2% increase in file size and 200% increase in encode speed. That is completely acceptable in my book
DZgas Ж
2025-02-28 06:35:14
you know, I haven't been on the server for a long time, but this is a good reason to be... and to compare all the versions, since there is an objective metric here - file size
RaveSteel That may be so, but this is a 2% increase in file size and 200% increase in encode speed. That is completely acceptable in my book
2025-02-28 06:35:34
I don't care about that. I want to compress as much as possible without losing quality.
RaveSteel
2025-02-28 06:36:14
Could you please try e11 with your libjxl 0.7? I want to see the filesize and speed with that
DZgas Ж
RaveSteel Could you please try e11 with your libjxl 0.7? I want to see the filesize and speed with that
2025-02-28 06:36:46
I don't have libjxl 0.7
RaveSteel
2025-02-28 06:37:04
The bottom cjxl in the screenshot says libjxl 0.7.0
2025-02-28 06:37:33
Though the filename is cjxl_0.6.exe
DZgas Ж
2025-02-28 06:38:07
I meant
RaveSteel
2025-02-28 06:38:59
hm alright
DZgas Ж
2025-02-28 06:43:30
You see it too, right?
RaveSteel
2025-02-28 06:44:35
yes
DZgas Ж
2025-02-28 06:45:33
slightly off topic: this is similar to svt-av1, for which I made my own fork because the original developers managed to screw everything up, making it 2 times slower at the same quality. I don't know how they managed to do that...
2025-02-28 06:48:30
but in this case, maybe my images are just so specific (they are art pictures) that no one noticed something went wrong. I think I will test all major 0.x versions to give the developers a conclusion. If someone had a permanent archive of every git build, I would run tests on that too, but unfortunately some dev decided to delete all the old builds
A homosapien
DZgas Ж literally here is a fact. jpeg xl software 3 years ago can compress better. why?
2025-02-28 06:57:42
I've run into similar issues before, for this type of image, I think suboptimal palette sorting is the issue. Try libjxl 0.8 effort 10, it generally gives the smallest file size.
2025-02-28 06:59:43
Well, try effort 9 first 😅 idk if you are willing to spend a significant portion of your day waiting.
DZgas Ж
2025-02-28 07:00:40
give me 10 minutes
2025-02-28 07:03:28
what happened in the 2 years i was gone? what kind of shit is this?
2025-02-28 07:04:14
it looks like each subversion has its own legacy. but that's nonsense
RaveSteel
2025-02-28 07:05:29
https://github.com/libjxl/libjxl/releases/download/v0.6.1/jxl-x64-windows-static.zip
2025-02-28 07:05:36
https://github.com/libjxl/libjxl/releases/download/v0.7rc/jxl-x64-windows-static.zip
2025-02-28 07:05:45
https://github.com/libjxl/libjxl/releases/download/v0.7.0/jxl-x64-windows-static.zip
2025-02-28 07:05:59
https://github.com/libjxl/libjxl/releases/download/v0.8.0rc1/jxl-x64-windows-static.zip
2025-02-28 07:06:05
https://github.com/libjxl/libjxl/releases/download/v0.8.0/jxl-x64-windows-static.zip
2025-02-28 07:06:15
https://github.com/libjxl/libjxl/releases/download/v0.8.1/jxl-x64-windows-static.zip
2025-02-28 07:06:29
There are some more, but I don't want to spam too much
DZgas Ж
2025-02-28 07:08:39
I have already downloaded all versions
2025-02-28 07:21:46
```
27745 bytes cjxl_0.9  -d 0 -e 10 (--allow_expert_options, speed 0.001x)
27853 bytes cjxl_0.8  -d 0 -e 10 (--allow_expert_options, speed 0.001x)
30176 bytes cjxl_0.7  -d 0 -e 9
30318 bytes cjxl_0.8  -d 0 -e 9
30373 bytes cjxl_0.6  -d 0 -e 9
30819 bytes cjxl_0.11 -d 0 -e 10
31094 bytes cjxl_0.10 -d 0 -e 9
31094 bytes cjxl_0.9  -d 0 -e 9
31181 bytes cjxl_0.10 -d 0 -e 10
37181 bytes cjxl_0.6  -d 0 -e 8
37249 bytes cjxl_0.7  -d 0 -e 8
37391 bytes cjxl_0.8  -d 0 -e 8
37679 bytes cjxl_0.11 -d 0 -e 9
38832 bytes cjxl_0.10 -d 0 -e 8
38832 bytes cjxl_0.9  -d 0 -e 8
39248 bytes cjxl_0.11 -d 0 -e 8
```
2025-02-28 07:22:47
0.9 and 0.8 encoding speed at -e 10 is a joke, but a fact is a fact: if you want the smallest jxl in the world, the new versions can't do it
2025-02-28 07:23:45
and yet, the new versions are much worse even at -e 10. outsiders
2025-02-28 07:28:04
I think... just take the -e 10 code from 0.9 and put it in the new version as -e 13. that would be just right. (yes, exactly -e 13)
2025-02-28 07:31:16
Taking all this into account, perhaps it is possible to review all the images in <#1187313089612369991>
DZgas Ж I had a misunderstanding. In November 2022, I compressed a local archive with pictures from the site that are now deleted. And put it in the archive. Now I got them out, because I got a more powerful computer and I could compress 175 thousand pictures better than before. But I'm a little..., I try the parameters, and its all give a worse result, and I honestly don't understand why all the pictures are worse compress than what is already compressed, and in 100% of cases and I don't remember what parameters exactly I compressed in 2022. Maybe I'm missing some key
2025-02-28 07:34:18
cjxl_0.9.exe -d 0 --allow_expert_options -e 10 -g 2 -I 100 -E 3 --brotli_effort 11 --patches 1 --dots 1
2025-02-28 07:35:55
the most annoying thing is that this does not solve the problem, since it takes several minutes to compress each image. And according to my tests, the `cjxl_0.7 -d 0 -e 9` I used in 2022 is still a better choice than everything since.
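This kind of build/effort sweep can be scripted; a hypothetical harness (function name, output naming, and the injectable `runner` are all mine, not part of cjxl) that encodes one image with several cjxl builds at several efforts and sorts the results by size:

```python
import os
import subprocess


def sweep(input_path, builds, efforts, runner=None):
    """Encode input_path with every (build, effort) pair and return
    (build, effort, size) triples sorted by file size, smallest first.

    `builds` are paths to cjxl executables (e.g. different release builds);
    `runner` can be injected for testing without any encoder installed.
    """
    def default_runner(build, effort):
        out = f"{os.path.basename(build)}-e{effort}.jxl"
        subprocess.run([build, input_path, out, "-d", "0", "-e", str(effort)],
                       check=True, capture_output=True)
        return os.path.getsize(out)

    run = runner or default_runner
    results = [(b, e, run(b, e)) for b in builds for e in efforts]
    return sorted(results, key=lambda t: t[2])
```

With `runner` omitted it shells out to each build; injecting a fake runner lets you test the bookkeeping without any cjxl on the machine.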
A homosapien
2025-02-28 08:54:42
Hopefully, porting WebP's algorithms will solve these issues 🥲
DZgas Ж
A homosapien Hopefully, porting WebP's algorithms will solve these issues 🥲
2025-02-28 09:02:16
well, I actually found some images that compress better in webp, but if you compress them with the "new old" 0.9 e10, jpeg xl finally wins
jonnyawsom3
2025-02-28 09:04:05
Just had a random thought, I wonder what an image would look like if you disabled RCT on decode. The per-group transforms could make something interesting
Jyrki Alakuijala
_wb_ Just for fun, I'd like to make a map of where jxl devs are from — say one for birthplace and one for current place of residence (though the latter might be a bit more boring since it will be mostly around Zurich 🙂 So <@&803357352664891472> feel free to tell me because I don't actually know for most of you. I'm born in Leuven, Belgium and currently live in Asse, Belgium.
2025-02-28 11:07:38
I was born in Oulu, Finland. Our family moved further north and I lived as a child/teenager in Tornio, Kemi and Keminmaa. As a teenager I was the sysop of the world's northernmost BBS, in Keminmaa. I studied IT in Oulu. My friend wrote IRC, so I was among the first five users and wrote the first bot ever. I worked for a neurosurgery startup in Oulu (1990-1993), moved to Espoo in 1993 to work in radiation therapy, and in 2006 to Zurich for Google. For the last 11 years I have lived in Wollerau, canton Schwyz, and before that in various places in canton Zurich (Adliswil, Adliswil, Zurich, Zuriberg, Adliswil). Hmmm, feels like I liked living in Adliswil, given that I was there in three different homes.
2025-02-28 11:12:38
Funnily, On2 Technologies' VP8 codec was also partially built in Oulu. On2 had a lot of hardware-related people there, perhaps a team of 20 or so. I believe some of them may still be involved with AV1/AV2/AVIF etc.
CrushedAsian255 Is LLM like a llama? 🦙
2025-02-28 11:18:32
did Jeff Minter predict the large language models: https://en.wikipedia.org/wiki/Llamasoft:_The_Jeff_Minter_Story
Demiurge
DZgas Ж well i actually found some images that compress better in webp but if you compress it with the new old 0.9 e10 jpeg xl finally wins
2025-02-28 11:32:15
Webp lossless is top tier
jonnyawsom3
DZgas Ж the bruh
2025-02-28 11:40:08
If you want to see regression, try this on both current and v0.9 set to `-d 0 -e 2`
Traneptora
jonnyawsom3 Just had a random thought, I wonder what an image would look like if you disabled RCT on decode. The per-group transforms could make something interesting
2025-02-28 11:55:17
for the most part things would just be the wrong color in a relatively tiled way, if not uniform way
2025-02-28 11:55:25
doesn't strike me as that exciting
monad
DZgas Ж you know I haven't been on the server for a long time. but this is a good reason to be ... and compare all the versions. since there is an objective metric here - file size
2025-03-01 12:04:05
Some data for 0.6 through 0.10: <https://loathra.com/jxl/bench/20240506.html#cjxl-releases-single-1> At high effort, it is not so straightforward. 0.6 showed smallest size overall, but worst bpp. 0.10 found the most minimum sizes, but performed worse in mean size and density. It's also important to keep in context that more practical efforts may have improved even if high effort did not.
Orum
Demiurge Webp lossless is top tier
2025-03-01 12:15:13
until you need anything other than the standard 8-bit sRGB <:FeelsSadMan:808221433243107338>
2025-03-01 12:16:50
but yeah, it's pretty good for that, though the encoder isn't close to the pareto front when run on modern, multi-core CPUs
monad
2025-03-01 12:23:14
WebP is still on the Pareto front for UI/text/pixel art/some other non-photo.
Demiurge
2025-03-01 12:31:09
libjxl-ng 👀
Jyrki Alakuijala
DZgas Ж <:galaxybrain:821831336372338729> 3 years of work by the jpeg xl team
2025-03-01 01:01:51
there is also a possible degradation in quality in the 0.8 -> 0.10 versions that I will soon start debugging -- I believe much of it was caused by changes made to improve performance on DZgas's images (thumbnails with unusually thin features)
Orum
monad WebP is still on the Pareto front for UI/text/pixel art/some other non-photo.
2025-03-01 01:02:22
sure, in the sense it can reach compression levels others can't, but it's still very slow to encode, or at least, the reference encoder is
Jyrki Alakuijala
2025-03-01 01:04:25
WebP lossless gets some advantage in density by not having tiling, but in practice tiling is a huge practical blessing as it allows multi-threaded decoding
monad
Orum sure, in the sense it can reach compression levels others can't, but it's still very slow to encode, or at least, the reference encoder is
2025-03-01 01:15:25
In the benchmark I linked, WebP was faster in general in the practical range (bearing in mind this mix of content doesn't necessarily relate to any real dataset). For just the content it performed well on, it was wildly faster. <https://loathra.com/jxl/bench/20240506.html#multi-10>
Jyrki Alakuijala
2025-03-01 01:18:12
WebP lossless has my compromises; JPEG XL lossless has Luca's and Jon's compromises. Zoltan and I didn't touch it much
2025-03-01 01:18:34
except JPEG recompression lossless
Orum
monad In the benchmark I linked, WebP was faster in general in the practical range (bearing in mind this mix of content doesn't necessarily relate to any real dataset). For just the content it performed well on, it was wildly faster. <https://loathra.com/jxl/bench/20240506.html#multi-10>
2025-03-01 01:19:02
what options were used for cjxl?
monad
2025-03-01 01:19:46
each effort level, otherwise default
jonnyawsom3
Jyrki Alakuijala there is also a possible degradation in quality in the 0.8 -> 0.10 versions that I will start soon debugging -- I believe much of it was caused to make better performance on DZgas images (thumbnails with unusual thin features)
2025-03-01 01:19:57
I'm sure you're already aware, but we had a small breakthrough in https://discord.com/channels/794206087879852103/1278292301038227489, finding the original PR that caused degradation since v0.8. Someone also questioned if Butteraugli is accurate when isolated to groups of pixels for quantization, as opposed to the usual full image used for quality metrics
Orum
2025-03-01 01:21:08
ah, I would test again with `--streaming_input` and `--streaming_output` because they have a *massive* effect on speed with multi-core CPUs
Jyrki Alakuijala
I'm sure you're already aware, but we had a small breakthrough in https://discord.com/channels/794206087879852103/1278292301038227489, finding the original PR that caused degradation since v0.8. Someone also questioned if Butteraugli is accurate when isolated to groups of pixels for quantization, as opposed to the usual full image used for quality metrics
2025-03-01 01:25:43
That was very kind of you and makes it much easier for me to understand what is needed
monad
2025-03-01 01:27:04
I should clarify, output was disabled, and IIRC --streaming_output requires writing, so a bit different test.
A homosapien
2025-03-01 01:27:35
<@321486891079696385> can you please explain to Jyrki what you told me about butteraugli not being effective on small groups of pixels?
jonnyawsom3
2025-03-01 01:28:02
Strangely, I was getting faster encode times with streaming output set to `nul` than when disabling the output
Orum
2025-03-01 01:30:51
I think on the current cjxl they're being automatically enabled when possible
2025-03-01 01:31:07
but I'm fairly certain that wasn't always the case
BlueSwordM
Jyrki Alakuijala That was very kind of you and makes it much easier for me to understand what is needed
2025-03-01 01:37:46
Hello again Jyrki. It's been a while. When dealing with very small image features, most full-frame image metrics don't work as effectively for mode and transform decisions, since most haven't been trained/built to analyse those small features and blocks effectively. From empirical testing in svt-av1-psy, using a conventional image metric for full-frame/tiled decisions together with a classical HVS metric like PSNR-HVS/psy-rd works just as effectively. For this reason, I believe a tuned PSNR-HVS based on the HVS model used in JXL would work well as a generalized low-complexity RD metric. An even better RD metric at higher complexity levels would mix local variance analysis with an energy-preserving metric like PSNR-HVS.
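For reference, plain PSNR, the base on which PSNR-HVS adds its DCT-domain contrast-sensitivity weighting, is just log-scaled MSE; a minimal sketch (function name mine):

```python
import math


def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two equal-length 8-bit pixel
    sequences. PSNR-HVS additionally weights DCT-domain errors by a
    contrast-sensitivity model; this is only the unweighted base metric."""
    assert len(ref) == len(test)
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return math.inf if mse == 0 else 10.0 * math.log10(peak * peak / mse)
```

Identical inputs give infinite PSNR; a mean squared error of 1 against an 8-bit peak gives roughly 48 dB.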
Demiurge
2025-03-01 10:42:41
When you say energy preserving, you mean the energy in each DCT frequency bin?
2025-03-01 10:43:08
I am curious about psycho tuned rdo techniques
2025-03-01 10:44:54
And I notice higher-frequency information/energy and texture/grit often gets obliterated with prejudice compared to highly tuned rdo like in x264
DZgas Ж I don't care about that. I want to compress as much as possible without losing quality.
2025-03-01 10:49:43
Newer versions have streaming input/output instead of loading everything in memory all at once. Reduces memory usage and other goodness. Try disabling it in order to mimic old behavior?
DZgas Ж
Demiurge Newer versions have streaming input/output instead of loading everything in memory all at once. Reduces memory usage and other goodness. Try disabling it in order to mimic old behavior?
2025-03-01 11:03:02
Is it configurable by keys?
2025-03-01 11:04:17
there is some good news that with -e 10 -g 2 some images are compressed better than before, I'm still doing a little research on this
Demiurge
2025-03-01 11:06:34
Actually, idk. 0.12 has the following options: ``` --streaming_input Enable streaming processing of the input file, supported for PPM and PGM input. --streaming_output Enable incremental writing of the output file. ``` But that makes it look like it doesn't have an option to disable it, only to enable it?
DZgas Ж
2025-03-01 11:07:19
also, the compression is super slow: 0.9 -e 10 couldn't compress a 2-megapixel image in 10 hours on a Ryzen 5 7600... this is not an option at all, and it's clear why this functionality was cut back so much
Demiurge
2025-03-01 11:07:47
You should never use e=10
2025-03-01 11:07:55
Unless you have a brain worm
2025-03-01 11:08:11
personally idk why it isn't locked behind "expert options"
2025-03-01 11:08:31
And idk why it isn't renamed "moron options"
DZgas Ж
2025-03-01 11:08:35
and yet, for compressing game sprites around 128x128, for example, 0.9 e10 is by far the best solution
Demiurge
2025-03-01 11:08:56
Oh well for tiny images I can understand, maybe
DZgas Ж
2025-03-01 11:09:49
698x365
Demiurge
2025-03-01 11:09:53
But tiny images are going to be tiny anyways
DZgas Ж
2025-03-01 11:10:14
but there are 175 thousand pictures
Demiurge Actually, idk. 0.12 has the following options: ``` --streaming_input Enable streaming processing of the input file, supported for PPM and PGM input. --streaming_output Enable incremental writing of the output file. ``` But that makes it look like it doesn't have an option to disable it, only to enable it?
2025-03-01 11:11:10
this means that by default it is already disabled
Demiurge
2025-03-01 11:11:21
I don't expect anything more than a lot of wasted time chasing very marginal space savings.
DZgas Ж this means that by default it is already disabled
2025-03-01 11:12:03
That's what it would normally mean. but cjxl is a very poorly hacked up tool so you never know what to expect it to actually mean :)
jonnyawsom3
Demiurge Actually, idk. 0.12 has the following options: ``` --streaming_input Enable streaming processing of the input file, supported for PPM and PGM input. --streaming_output Enable incremental writing of the output file. ``` But that makes it look like it doesn't have an option to disable it, only to enable it?
2025-03-01 11:13:53
Chunked encoding is enabled by default, and only disabled at effort 10 or specific scenarios. Streamed input only works with PPM and isn't default. Streamed output has minimal effect but IIRC breaks things like progressive
Demiurge
2025-03-01 11:14:01
It uses some home-cooked flag parsing instead of a library with conventional behavior like getopt/getopt_long
2025-03-01 11:15:02
Ah, so enabled by default, and no obvious method to opt out
jonnyawsom3
2025-03-01 11:15:36
Check the bottom of the page <https://github.com/libjxl/libjxl/blob/main/doc/encode_effort.md>
DZgas Ж
monad Some data for 0.6 through 0.10: <https://loathra.com/jxl/bench/20240506.html#cjxl-releases-single-1> At high effort, it is not so straightforward. 0.6 showed smallest size overall, but worst bpp. 0.10 found the most minimum sizes, but performed worse in mean size and density. It's also important to keep in context that more practical efforts may have improved even if high effort did not.
2025-03-01 11:17:11
I don't quite understand why it looks like that. I'll be completely honest: I don't understand why this whole conversation even exists. It's all some kind of nonsense. Someone fixed the code and now it compresses worse. Why was this allowed into libjxl? Speed is the only reason I hear, but 0.7 is faster and compresses better than 0.11, so what? If my images are so specific, why hasn't anyone run tests on specific image categories, the way GOOGLE does for WebP's compression parameters (Image, Art, Icon, etc.)?
Jyrki Alakuijala WebP lossless gets some advantage in density by not having tiling, but in practice tiling is a huge practical blessing as it allows multi-threaded decoding
2025-03-01 11:21:57
webp has an advantage on about 5% of images, which is generally not a problem. And yet it means that for me personally it's worth additionally compressing all archives to webp, so I can then pick out of them the images that it compresses better
Demiurge
2025-03-01 11:47:44
The focus at the moment is on making the API easier for developers to use. So libjxl is more convenient for developers looking to integrate jxl support into software. Streaming encode is important to achieve that, and a lot of spaghetti had to be sloshed about.
2025-03-01 11:49:25
It increased file sizes very slightly, but it was an important milestone.
2025-03-01 11:49:55
The lossless encoder is still far from ideal or optimized.
2025-03-01 11:50:52
It could be made both faster and more extreme at the same time since it doesn't always make good time/efficiency tradeoffs
2025-03-01 11:51:48
Ideally it shouldn't lose to webp ever. Except in maybe very extreme and artificial examples
DZgas Π–
Jyrki Alakuijala there is also a possible degradation in quality in the 0.8 -> 0.10 versions that I will start soon debugging -- I believe much of it was caused to make better performance on DZgas images (thumbnails with unusual thin features)
2025-03-01 11:52:18
Here are some images that demonstrate that it is possible, but not achievable in the new libjxl
2025-03-01 11:52:53
...is what I would like to say. But when I started compressing, I suddenly noticed that out of the 7 images in the test, 4 could not be compressed better even with this brute-force 0.9 -e10. And that is really another complete surprise, which clearly indicates that some compression feature was completely removed between 0.8 and 0.9. Because all of this is somehow unthinkable.
2025-03-01 11:55:09
I can't just take e10 and be sure of everything. To do that, I'd need to take 0.6, 0.7, 0.8, 0.9, 0.10, 0.11, build each from git, and compress all 175 thousand images on every version to keep whichever compresses better. Why should that be necessary?
Demiurge
2025-03-01 11:55:16
That reminds me, speaking of thumbnails, I wonder if a single-pixel change has more psychovisual metric "weight" in a smaller image since a single pixel is a larger percentage of the image area compared to a single pixel in a larger image
2025-03-01 11:56:13
I don't think that's how it works but it would be interesting if relative image size was taken into account like that
DZgas Π–
Demiurge That reminds me, speaking of thumbnails, I wonder if a single-pixel change has more psychovisual metric "weight" in a smaller image since a single pixel is a larger percentage of the image area compared to a single pixel in a larger image
2025-03-01 11:56:24
yes, I already checked this with JPEG XL. For example, adding one random black pixel to an image reduced its size by ten bytes
CrushedAsian255
DZgas Π– I can't just take e10 and be sure of everything. To do that, I need to take 0.6 0.7 0.8 0.9 0.10 0.11 and git build and compress all 175 thousand images on each version to take those that compress better. Why is that?
2025-03-01 11:56:33
New effort: e14, run that e13 script someone wrote but with every libjxl build
DZgas Π–
Demiurge That reminds me, speaking of thumbnails, I wonder if a single-pixel change has more psychovisual metric "weight" in a smaller image since a single pixel is a larger percentage of the image area compared to a single pixel in a larger image
2025-03-01 11:57:00
https://discord.com/channels/794206087879852103/794206170445119489/1288275096502407268
Demiurge
2025-03-01 11:59:13
I was thinking about the lossy encoder. Whether it judges pixel size errors to be more psychovisually significant in smaller size images
2025-03-01 12:00:18
But I doubt it would. I think it assumes all images will be viewed from a healthy distance away from the screen at maybe 72 dpi
DZgas Π–
DZgas Π– I've written a lot about my algorithm in https://discord.com/channels/794206087879852103/806898911091753051/1288265312533352489 And I noticed a strange thing. Since my algorithm starts with a black pixel, if the background is white, it puts a dot, 1 black pixel. So here's about Jpeg XL: cjxl -d 0 -g 3 -e 10 -I 100 --patches 1 If the image is without a dot, then it weighs **206 bytes**. But if With a Just one black dot first, its for some reason it starts to size as **197 bytes** Why <:JXL:805850130203934781>
2025-03-01 12:01:34
Half a year has passed and the compression algorithm is already completely broken. How did they manage to do this? Now it's almost twice the size
CrushedAsian255
2025-03-01 12:02:46
-g 3 doesn’t actually do anything for that size of image, maybe try without patches?
Demiurge
2025-03-01 12:03:51
Yes
DZgas Π–
CrushedAsian255 -g 3 doesn’t actually do anything for that size of image, maybe try without patches?
2025-03-01 12:04:01
it solves almost nothing.
Demiurge
2025-03-01 12:04:06
Sometimes patches=1 screws things up
CrushedAsian255
2025-03-01 12:04:35
Also current JXL is not optimised for low BPP images, large improvements can be made
Demiurge
2025-03-01 12:04:43
It works well in some images and unexpectedly bad in others
DZgas Π–
2025-03-01 12:04:44
besides, I ran an identical command that only 6 months ago gave 206 bytes, and now it gives 487 bytes
CrushedAsian255
2025-03-01 12:05:08
Because uncompressed it's 292 bytes (73*32 pixels at 1 bit each)
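For reference, the raw-size arithmetic behind that 292-byte figure for a 73x32 bitonal image:

```python
# Raw size of a 73x32 image at 1 bit per pixel.
width, height, bits_per_pixel = 73, 32, 1

total_bits = width * height * bits_per_pixel
raw_bytes = total_bits // 8  # 2336 bits divide evenly into bytes here

print(total_bits, raw_bytes)  # 2336 292
```

Any lossless format whose output exceeds this is effectively expanding the image.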
DZgas Π–
2025-03-01 12:05:49
<:galaxybrain:821831336372338729>
Demiurge
2025-03-01 12:06:22
libjxl is not yet optimized with a 1 bit bitonal encoder
DZgas Π–
2025-03-01 12:06:43
jpeg xl now uses avif header? good <:megapog:816773962884972565>
Demiurge libjxl is not yet optimized with a 1 bit bitonal encoder
2025-03-01 12:07:30
Is this an argument? JXL could compress it. Now it doesn't. What is your argument?
Demiurge
2025-03-01 12:07:32
Have you tried with patches=0?
DZgas Π–
Demiurge Have you tried with patches=0?
2025-03-01 12:08:08
CrushedAsian255
2025-03-01 12:08:30
What is the source image
DZgas Π–
2025-03-01 12:08:45
Demiurge
DZgas Π– Is this an argument? Jxl can to compress. Now doesn't compress. What is your argument?
2025-03-01 12:10:18
I'm not arguing. I'm just adding that libjxl has not yet had bitonal optimizations added
2025-03-01 12:10:45
There's a lot of stuff on the to-do list essentially
2025-03-01 12:11:45
It's using a generic codepath designed for fullcolor images.
2025-03-01 12:12:04
A lot of optimization is just left on the table.
CrushedAsian255
2025-03-01 12:12:19
I got 208
DZgas Π–
CrushedAsian255 I got 208
2025-03-01 12:12:35
how
CrushedAsian255
2025-03-01 12:13:07
``` oxipng -vv -o max --strip all output_1bit.png cjxl output_1bit.png output_1bit.jxl -d 0 -e 9 -P 4 ```
Demiurge
2025-03-01 12:13:16
It's at a proof-of-concept stage, and at the moment the focus is on making it easier for developers to use it in their programs
2025-03-01 12:14:25
Hmm
2025-03-01 12:14:51
Good idea, using oxipng first...
CrushedAsian255
2025-03-01 12:16:29
well at least now it's beating `xz -9e`
2025-03-01 12:16:45
``` output_1bit.pbm (1/1) 100 % 244 B / 329 B = 0.742 ```
Demiurge
2025-03-01 12:17:03
<@226977230121598977> if you like using command line tools on windows I highly recommend scoop, if you don't already. And `scoop install oxipng` to get oxipng installed
CrushedAsian255
2025-03-01 12:18:22
Brotli still wins at a size of 161 bytes
Demiurge
2025-03-01 12:19:26
If libjxl had a bitonal mode it could probably be even better than that
CrushedAsian255
2025-03-01 12:19:59
Jyrki beats Jyrki by 47 bytes
Demiurge
2025-03-01 12:20:07
Possibly in the future. Assuming libjxl doesn't completely get forgotten about for jxl-rs
2025-03-01 12:21:29
I have a feeling when jxl-rs gets going, there will be less and less reason to work on old crusty spaghetti libjxl instead of clean new jxl-rs
CrushedAsian255
2025-03-01 12:23:07
https://tenor.com/view/rust-rust-lang-ferris-programming-jarvis-gif-7441835028842397805
DZgas Π–
CrushedAsian255 I got 208
2025-03-01 12:26:41
I got 204
2025-03-01 12:27:16
your turn
Demiurge That reminds me, speaking of thumbnails, I wonder if a single-pixel change has more psychovisual metric "weight" in a smaller image since a single pixel is a larger percentage of the image area compared to a single pixel in a larger image
2025-03-01 12:28:17
answering your question: here I added 1 pixel and it became smaller, which is illogical
CrushedAsian255 ``` oxipng -vv -o max --strip all output_1bit.png cjxl output_1bit.png output_1bit.jxl -d 0 -e 9 -P 4 ```
2025-03-01 12:29:44
I understand that something inside the image is tripping up the jxl encoder. But why? Doesn't it decode the PNG to raw 24-bit RGB data before compressing it?
TheBigBadBoy - π™Έπš›
DZgas Π– I got 204
2025-03-01 12:29:46
hello
2025-03-01 12:29:54
[β €](https://cdn.discordapp.com/emojis/721833660457943151.webp?size=48&name=av1_pepeShy)
DZgas Π–
TheBigBadBoy - π™Έπš› hello
2025-03-01 12:30:23
yes, I have almost achieved this before https://discord.com/channels/794206087879852103/794206170445119489/1288275096502407268
2025-03-01 12:30:58
and as I understand it, the new version of jxl adds another byte of service data for something
TheBigBadBoy - π™Έπš›
DZgas Π– yes i have achieved this before almost https://discord.com/channels/794206087879852103/794206170445119489/1288275096502407268
2025-03-01 12:31:38
I did not change anything at all in the pic, it's lossless
2025-03-01 12:32:06
so I reached 196B *without* the dot
DZgas Π–
TheBigBadBoy - π™Έπš› I did not chnge anything at all at the pic, it's lossless
2025-03-01 12:32:18
Well, write how you achieved this, point by point, because I don't understand
TheBigBadBoy - π™Έπš› so I reached 196B *without* the dot
2025-03-01 12:32:52
try adding a dot, maybe it will make it even better <:Stonks:806137886726553651>
TheBigBadBoy - π™Έπš›
DZgas Π–
2025-03-01 12:33:27
took that `ect -99999 -strip` `cjxl -d 0 -e 11 --allow_expert_options -E 4 -I 100 -g 3 --patches=0 --brotli_effort=11` πŸ˜›
CrushedAsian255
2025-03-01 12:33:44
`-e 11` 😭
DZgas Π–
TheBigBadBoy - π™Έπš› took that `ect -99999 -strip` `cjxl -d 0 -e 11 --allow_expert_options -E 4 -I 100 -g 3 --patches=0 --brotli_effort=11` πŸ˜›
2025-03-01 12:33:53
Last git build 0.12 ?
TheBigBadBoy - π™Έπš›
2025-03-01 12:33:56
I know -E -I are useless bc of e11 but whatever
2025-03-01 12:34:07
`JPEG XL encoder v0.11.1 794a5dcf`
CrushedAsian255
2025-03-01 12:34:39
I added a bunch of dots
embed
CrushedAsian255 I added a bunch of dots
2025-03-01 12:34:53
https://embed.moe/https://cdn.discordapp.com/attachments/794206170445119489/1345373673351151706/art.jxl?ex=67c4505f&is=67c2fedf&hm=66ed406742431cfeb1ceba6e65c9c40e493b717ef6c2dbae687eac9d0d91db6f&
Demiurge
DZgas Π– answering your question, here I added 1 pixel, and it became smaller. Which is illogical
2025-03-01 12:36:06
Probably has to do with the lack of optimization for bitonal images... since it's not optimized for bitonal the results are more silly
TheBigBadBoy - π™Έπš›
CrushedAsian255 `-e 11` 😭
2025-03-01 12:36:09
well for that image it's ok, only 0.6s
DZgas Π–
TheBigBadBoy - π™Έπš› took that `ect -99999 -strip` `cjxl -d 0 -e 11 --allow_expert_options -E 4 -I 100 -g 3 --patches=0 --brotli_effort=11` πŸ˜›
2025-03-01 12:37:15
What is `ect -99999 -strip`?
TheBigBadBoy - π™Έπš›
2025-03-01 12:37:23
optimizer
2025-03-01 12:37:32
from fhanau
DZgas Π–
2025-03-01 12:37:46
I still don't understand why jxl compresses identical content differently. The encoder shouldn't work like that.
Demiurge
CrushedAsian255 https://tenor.com/view/rust-rust-lang-ferris-programming-jarvis-gif-7441835028842397805
2025-03-01 12:38:08
It's not rewriting it in rust that's good, it's rewriting it at all, in general. libjxl is old and crusty spaghetti code that would benefit a lot from simplification.
TheBigBadBoy - π™Έπš›
2025-03-01 12:38:20
yeah if you try to recompress my JXL with the same command the new output is 335B lol
DZgas Π–
2025-03-01 12:38:56
<:SadCheems:890866831047417898>
TheBigBadBoy - π™Έπš›
2025-03-01 12:39:24
can someone send me the 208B PNG version from oxipng ?
CrushedAsian255
2025-03-01 12:43:22
The PNG itself wasn't 208 bytes
2025-03-01 12:43:26
2025-03-01 12:43:38
(gzipped to prevent discord from messing with it)
DZgas Π–
CrushedAsian255
2025-03-01 12:45:27
223 png
2025-03-01 12:46:45
Fun fact: the gzip documentation mentions a compression mode without compression, i.e. packing the raw data... but no one implemented it in everyday archivers. Apparently they thought there was no point in "not compressing" a file. And then Discord appeared
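That mode is reachable from Python's stdlib: `compresslevel=0` makes gzip emit stored (uncompressed) deflate blocks, which works as a "don't touch my bytes" wrapper. A minimal sketch:

```python
import gzip

data = b"any file contents that must survive byte-for-byte"

# compresslevel=0 stores the payload in raw (uncompressed) deflate blocks,
# adding only the gzip header/trailer around it -- no actual compression.
wrapped = gzip.compress(data, compresslevel=0)

assert gzip.decompress(wrapped) == data   # round-trips exactly
assert len(wrapped) > len(data)           # small fixed overhead instead of savings
```

Renaming the file extension (as suggested below) achieves the same goal with zero overhead, but the gzip wrapper also survives tools that sniff content rather than extensions.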
TheBigBadBoy - π™Έπš›
CrushedAsian255 The PNG itself wasn't 208 bytes
2025-03-01 12:49:37
then where does "208" come from?
CrushedAsian255 (gzipped to prevent discord from messing with it)
2025-03-01 12:50:15
for that you can simply change the extension, if you want something easier
DZgas Π–
TheBigBadBoy - π™Έπš› then were does "208" comes from?
2025-03-01 12:50:32
TheBigBadBoy - π™Έπš›
2025-03-01 12:50:46
oooh ok I thought the PNG was 208B
2025-03-01 12:51:10
Nobody can beat me [β €](https://cdn.discordapp.com/emojis/1088952745471516815.gif?size=48&animated=true&name=laugh)
DZgas Π–
Demiurge Probably has to do with the lack of optimization for bitonal images... since it's not optimized for bitonal the results are more silly
2025-03-01 12:53:57
in general, I initially set up these tests to see by how much my simplest idea surpasses all the current algorithms. Well, it turned out: by a lot. https://discord.com/channels/794206087879852103/806898911091753051/1288265312533352489 about 30% better than the strongest setting in paq8px, and much stronger than webp, png, jxl, jbig2 and the others https://encode.su/threads/4316-NewEra-RLE-1-bit-image-for-manual-human-decoding
2025-03-01 12:54:35
and yet even without those optimizations, according to you, JPEG XL is still the best binary image compression right now
jonnyawsom3
2025-03-01 01:18:49
Reminded me of this <https://github.com/libjxl/libjxl/issues/3775#issuecomment-2317324336>
2025-03-01 01:19:17
Using a single Squeeze step to effectively increase the bitdepth of the image, since libjxl doesn't do well at low bitdepths
Traneptora
2025-03-01 04:12:59
Does libjxl currently support regional decode?
2025-03-01 04:13:20
like suppose I have an 80k by 80k jxl
2025-03-01 04:13:57
I know JPEG XL theoretically supports this
jonnyawsom3
2025-03-01 04:14:56
Not yet https://github.com/libjxl/libjxl/issues/3519
BlueSwordM
Demiurge When you say energy preserving, you mean the energy in each DCT frequency bin?
2025-03-01 07:57:59
I just meant that you need to take into account both the detail inside of a block (energy) and the type of detail (variance) to have a good low complexity block level metric. Both are better than just computing differences (SAD/SSD), but only having one or the other leads to deficiencies in the other side.
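A toy, stdlib-only illustration of the energy-vs-variance distinction (hypothetical pixel blocks; real encoders measure energy on transform coefficients, not raw pixels as done here):

```python
import statistics

def block_stats(pixels):
    """Return (mean energy, variance) for a flat list of pixel values.
    Toy metric only -- illustrative, not how libjxl or svt-av1 compute it."""
    energy = sum(v * v for v in pixels) / len(pixels)
    return energy, statistics.pvariance(pixels)

flat  = [128] * 64        # uniform gray block
edges = [0, 255] * 32     # alternating black/white block

# Both blocks carry energy, but only the edge block has variance:
# energy alone says "there is detail", variance hints at what kind.
assert block_stats(flat)[1] == 0
assert block_stats(edges)[1] > block_stats(flat)[1]
```

This is why relying on only one of the two, as the message above argues, leaves a blind spot on the other side.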
A homosapien
DZgas Π– answering your question, here I added 1 pixel, and it became smaller. Which is illogical
2025-03-01 10:05:54
The original image was a 1-bit palette, not 1-bit grayscale, which makes cjxl treat it as RGB instead of grayscale, wasting a few bytes. I told you about this half a year ago. https://discord.com/channels/794206087879852103/794206087879852106/1292988896929644556
2025-03-01 10:07:42
``` cjxl grayscale.png grayscale.jxl -d 0 -e 11 --allow_expert_options && jxlinfo grayscale.jxl Compressed to 196 bytes (0.671 bpp). JPEG XL image, 73x32, (possibly) lossless, 1-bit Grayscale Color space: Grayscale, D65, sRGB transfer function, rendering intent: Relative cjxl palette.png palette.jxl -d 0 -e 11 --allow_expert_options && jxlinfo palette.jxl Compressed to 208 bytes (0.712 bpp). JPEG XL image, 73x32, (possibly) lossless, 8-bit RGB Color space: RGB, D65, sRGB primaries, sRGB transfer function, rendering intent: Relative ```
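A homosapien's point can be checked without jxlinfo: the PNG color type sits at a fixed offset in the IHDR chunk, which is always the first chunk after the signature. A minimal stdlib sketch (offsets per the PNG spec; the helper name is made up):

```python
import struct
import zlib

# PNG color types, per the PNG specification.
GRAYSCALE, RGB, PALETTE = 0, 2, 3

def png_color_type(data: bytes) -> int:
    """Read the color-type byte from a PNG's IHDR chunk."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    assert data[12:16] == b"IHDR", "IHDR must be the first chunk"
    # IHDR body layout: width(4) height(4) bit depth(1) color type(1) ...
    return data[25]

# Build a minimal 73x32, 1-bit, palette-type header to demonstrate:
body = struct.pack(">IIBBBBB", 73, 32, 1, PALETTE, 0, 0, 0)
ihdr = struct.pack(">I", 13) + b"IHDR" + body + struct.pack(">I", zlib.crc32(b"IHDR" + body))
assert png_color_type(b"\x89PNG\r\n\x1a\n" + ihdr) == PALETTE
```

If this reports color type 3 on a black-and-white image, converting it to true grayscale first (e.g. Pillow's `Image.convert("L")`) should let cjxl signal a grayscale image and save the extra bytes, per the jxlinfo output above.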
monad
DZgas Π– I don't quite understand why it looks like that. I'll be completely honest I don't understand why this whole conversation even exists. It's all some kind of nonsense. Someone fixed the code and now it compresses worse. Why was this allowed into libjxl? Because speed is the only thing I hear, but 0.7 is faster and compresses better than 0.11 so WHAT If my images are so specific why hasn't anyone done tests on specific images like GOOGLE webp does for their compression parameters Image, Art, Icon etc.
2025-03-02 12:39:16
btw, I found this other benchmark with more versions, and in this case v0.11 performed best overall at high efforts.
couleur
2025-03-02 12:45:49
hi, is there a twitter account / rss feed / some kind of aggregation of news regarding software/libraries adopting jxl?
username
couleur hi, is there a twitter account / rss feed / some kind of aggregation of news regarding software/libraries adopting jxl?
2025-03-02 12:52:00
sadly no, not really. I think the new website was supposed to have something like that, but I wouldn't count on it happening. The closest thing is the <#803574970180829194> channel, where people post about software getting support, but it also has people discussing things rather than purely announcements
Quackdoc
couleur hi, is there a twitter account / rss feed / some kind of aggregation of news regarding software/libraries adopting jxl?
2025-03-02 01:30:40
"most things but android and chrome" [av1_dogelol](https://cdn.discordapp.com/emojis/867794291652558888.webp?size=48&name=av1_dogelol) though android might be pulling libjxl libs soonish since dng sdk xD
RaveSteel
2025-03-02 01:31:15
God I hope so
DZgas Π–
A homosapien The original image was a 1-bit palette, not 1-bit grayscale, that makes cjxl think it's rgb instead of grayscale, wasting a few bytes. I told you about this half a year ago. https://discord.com/channels/794206087879852103/794206087879852106/1292988896929644556
2025-03-02 06:19:13
πŸ‘
monad btw, I found this other benchmark with more versions, and in this case v0.11 performed best overall at high efforts.
2025-03-02 06:20:32
I hope it will just be "fixed" soon
couleur
2025-03-02 01:21:18
is it possible to convert jpeg to jxl, and convert it back to jpeg bit perfectly?
HCrikki
2025-03-02 01:23:22
yes. use xl converter, specifically using lossless transcoding (a special mode that works hyperfast, losslessly and uniquely guarantees smaller filesize too)
2025-03-02 01:24:26
you can do the same using libjxl commands directly in the terminal; xl converter just simplifies conversions. The norm for converters is still doing old-school full re-encodes, which produce larger outputs or degraded quality and take a long time too, so it's best to stick to scripts/apps that properly leverage this new paradigm
couleur
2025-03-02 01:28:35
are these benefits still valid today?
HCrikki yes. use xl converter, specifically using lossless transcoding (a special mode that works hyperfast, losslessly and uniquely guarantees smaller filesize too)
2025-03-02 02:37:08
cool, but jpeg reconstruction fails :(
HCrikki
2025-03-02 02:40:30
odd. used improperly perhaps. did you 'normalize' and/or verify ?
couleur
2025-03-02 02:41:23
how do you go about doing that?
2025-03-02 02:41:33
yup im testing with a sample file
HCrikki
2025-03-02 02:42:46
are you on android? t8rin's imagetoolbox converts efficiently but is still subject to limitations from mobile
couleur
2025-03-02 02:43:16
i have that installed as well, im curious about doing that on xl converter though
HCrikki
2025-03-02 02:45:09
select the "lossless jpeg transcoding" mode, leave effort at 7, only enable normalize *on fail* (to deal only with problematic images), and keep verify enabled
jonnyawsom3
couleur are these benefits still valid today?
2025-03-02 02:50:40
I have no idea who that is or what repo that's on, but they likely mean the chunked encoding in v0.10. It made encoding orders of magnitude faster, assuming you don't just crank the settings to max
2025-03-02 02:50:46
So yes
monad
couleur are these benefits still valid today?
2025-03-02 04:11:43
The comment wasn't absolutely true to begin with. You could argue it was generally true. You could argue for some domains it was totally false. I expand on this here: <https://github.com/jxl-community/jxl-community.github.io/issues/41>
Jyrki Alakuijala
I'm sure you're already aware, but we had a small breakthrough in https://discord.com/channels/794206087879852103/1278292301038227489, finding the original PR that caused degradation since v0.8. Someone also questioned if Butteraugli is accurate when isolated to groups of pixels for quantization, as opposed to the usual full image used for quality metrics
2025-03-02 06:57:58
I think that nothing works for automated image quality; final decisions have to be made by human eyes, reviewing 100 corner cases
BlueSwordM Hello again Jyrki. It's been a while. When dealing with very small image features, most full frame dedicated image metrics don't seem to work as effectively for mode and transform decisions since most haven't been trained/built to analyse those small image features and blocks effectively. From empircal testing in svt-av1-psy, using a conventional image metric for full frame/tiled decisions, and a classical HVS metric like PSNR-HVS/psy-rd works just as effectively. For this reason, I believe a tuned PSNR-HVS based on the HVS model used in JXL would work well for a generalized low complexity RD metric. An even better RD metric would be to use local variance analysis mixed with an energy preserving metric like PSNR-HVS at higher complexity levels.
2025-03-02 06:59:21
I optimized butteraugli on relatively small samples, like 100x100 pixels
BlueSwordM
Jyrki Alakuijala I optimized butteraugli relatively small samples like 100x100 pixels
2025-03-02 07:00:33
Hmm, I wonder why the butteraugli opts you added back then resulted in worse high fidelity performance if that's the case πŸ€” I always assumed butteraugli was trained on >=512x512 images, so sorry about that misconception.
Jyrki Alakuijala
BlueSwordM Hmm, I wonder why the butteraugli opts you added back then resulted in worse high fidelity performance if that's the case πŸ€” I always assumed butteraugli was trained on >=512x512 images, so sorry about that misconception.
2025-03-02 07:04:30
all the butteraugli optimization corpus is ranked for max butteraugli scores of 0.6, 0.8, 1.0, 1.25 and 1.5 (or something very similar to that -- it was mostly 2015-2017 work and some details are getting fuzzy)
2025-03-02 07:05:21
there were some adjustments and verifications that I did outside the usual optimization, but I checked the degradations individually and didn't find anything significant
2025-03-02 07:05:54
there were ~2550 pairs of images in the reference corpus and if I identified a weakness I added a lot of image pairs related to the weakness
BlueSwordM Hmm, I wonder why the butteraugli opts you added back then resulted in worse high fidelity performance if that's the case πŸ€” I always assumed butteraugli was trained on >=512x512 images, so sorry about that misconception.
2025-03-02 07:06:53
the very first version of butteraugli I tuned with ordinary photographs (something like 768x512 images), but I quickly understood that this has substantial issues
BlueSwordM
Jyrki Alakuijala the very first version of butteraugli I tuned with usual photographs (something like 768x512 images), but I understood quickly that this has substantial issues
2025-03-02 07:12:58
Hmm, then being trained on >=100x100 samples should transfer fairly well down to 8x8-64x64 blocks. Maybe there's something about butteraugli-directed mode decision that led to picking blurrier transforms overall?
2025-03-02 07:13:33
Also, I think we should find a better place to talk about all of this since Discord is not search indexed; this means any of the conversations we have here are trapped here.
Jyrki Alakuijala
2025-03-02 07:15:54
all blurring related stuff is done by hand
2025-03-02 07:16:48
even the basic loop filters fail every metric in every video codec, but they are a must-have in video codecs
2025-03-02 07:17:10
I made that mistake, I will make an honest attempt to fix it soon
2025-03-02 07:17:51
previously I used ad hoc systems (basically very strange functions that appeared in my head) to decide the smoothing
2025-03-02 07:18:09
Zoltan coded for me a version that actually computes the smoothing results
2025-03-02 07:18:31
in the CL I started to use that system, which actually bases the selection on the actual smoothing results
2025-03-02 07:18:45
it is more robust and better than "inventing ad hoc crazy stuff" that I used to have