JPEG XL

on-topic

Whatever else

damian101
2024-02-17 03:01:24
but rename to png and it gets processed, lol
2024-02-17 03:01:45
but all actual image applications I work with support it now...
username
2024-02-17 03:06:00
there's a lot of people who use outdated versions of stuff like Photoshop (mostly pirates) from before native WebP support was added. However, this plugin exists and has existed for years (https://developers.google.com/speed/webp/docs/webpshop). But no, if something doesn't work out of the box with 0% effort, then the user is left slamming their head against the keyboard and blaming everything around them
2024-02-17 03:08:10
people don't think stuff like "how can I get Photoshop to open this?" they think stuff like "Photoshop won't open this grrrrrrr"
2024-02-17 03:09:08
apologies, I'm very tired at the moment
gb82
2024-02-19 01:37:26
I'm gonna start a thread about this, but I'm experiencing some weirdness when encoding from EXR with cjxl that I don't experience with PNG
Jyrki Alakuijala
2024-02-19 01:02:07
I made encoding vardct about 35 % slower at one phase -- to make it more robust for exotic images
2024-02-19 01:04:42
exotic meaning like pixelized checker patterns and the like
lonjil
2024-02-19 11:31:04
I only write the best C++ code ```c++ for (int i = 0; (lol>>s, i++); ) cparams.delta_cols.back()[i] = stoi(s); ```
2024-02-19 11:31:23
error handling, who needs it?
veluca
2024-02-20 12:00:01
I don't think that makes the top 10 of the sketchiest code I have seen
2024-02-20 12:00:01
😛
2024-02-20 12:00:31
(also, you don't need the `i`, you can just `cparams.delta_cols.back().push_back(stoi(s));`)
lonjil
2024-02-20 12:02:05
it's an std::array, not an std::vector
2024-02-20 12:03:02
for manual palette colors, I added two `std::vector<std::array<int32_t, 3>>` to cparams
2024-02-20 12:07:55
it'll look like this: ``` Palette[ 123,23,1 50,50,50 0,0,0 ] DeltaPalette[ Weighted -1,-1,2 1,1,1 -5,5,0 ] ```
2024-02-20 12:08:33
you can have any number of triples and it'll just make your palette bigger
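In case anyone wants the same thing with the error handling taken seriously: a minimal sketch of parsing those whitespace-separated triples into a `std::vector<std::array<int32_t, 3>>`. `ParseTriples` is just an illustrative helper, not the actual cparams code.
```c++
#include <array>
#include <cstdint>
#include <sstream>
#include <stdexcept>
#include <string>
#include <vector>

// Parse whitespace-separated "r,g,b" triples (e.g. "123,23,1 50,50,50 0,0,0")
// into int32_t triples, throwing on malformed input instead of silently
// misparsing it.
std::vector<std::array<int32_t, 3>> ParseTriples(const std::string& text) {
  std::vector<std::array<int32_t, 3>> out;
  std::istringstream stream(text);
  std::string token;
  while (stream >> token) {  // one "r,g,b" token at a time
    std::array<int32_t, 3> triple{};
    std::istringstream parts(token);
    std::string part;
    for (int c = 0; c < 3; ++c) {
      if (!std::getline(parts, part, ',') || part.empty()) {
        throw std::runtime_error("expected three comma-separated values: " + token);
      }
      triple[c] = static_cast<int32_t>(std::stoi(part));  // throws on non-numeric input
    }
    out.push_back(triple);
  }
  return out;
}
```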
Traneptora
lonjil error handling, who needs it?
2024-02-20 05:46:22
strtol based
_wb_
2024-02-22 06:02:27
https://ericportis.com/posts/2024/okay-color-spaces/
2024-02-22 06:12:46
(very nice blogpost by my teammate <@702298678333014087>)
lonjil
_wb_ https://ericportis.com/posts/2024/okay-color-spaces/
2024-02-23 02:04:15
nice article. The fact that it's impossible to make a color space that does what we want (because color perception is non-Euclidean) makes me wonder whether, for some applications, it'd be better to have the ability to just ad-hoc generate even gradients between any two given colors, rather than relying on a color space to do it for us. Since we have the ability to evaluate new color spaces without doing large trials, I assume the math of perception is well understood.
monad
2024-02-23 04:39:27
Parallelizing the encode process for multi-core systems.
sklwmp
2024-02-23 06:06:28
I believe it is related, yes
_wb_
lonjil nice article. The fact that it's impossible to make a color space that does what we want (because color perception is non-Euclidean) makes me wonder whether, for some applications, it'd be better to have the ability to just ad-hoc generate even gradients between any two given colors, rather than relying on a color space to do it for us. Since we have the ability to evaluate new color spaces without doing large trials, I assume the math of perception is well understood.
2024-02-23 06:57:54
What is good for gradients is also good for many other image manipulation operations, and image compression. For some image manipulation operations it makes sense to do things according to perception, for others it makes sense to do it according to the physics of light.
2024-02-23 07:01:27
(but "perception" depends on several things, e.g. what is perceptually uniform depends on the visual angle: if you compare large patches of color, S cones play a bigger role than if you are comparing tiny, pixel-sized patches of color)
monad
2024-02-23 07:03:46
<https://github.com/libjxl/libjxl/commit/87ec5d0cdf0319f6694f7c957cc933fbfe43df08> <https://github.com/libjxl/libjxl/commit/744575425705ca2ad276c6f377acc12493a9e8f9>
w
2024-02-23 11:31:07
is there a "default" web cmyk icc profile? like sRGB for rgb
paperboyo
w is there a "default" web cmyk icc profile? like sRGB for rgb
2024-02-23 01:17:04
Not really, no AFAIK. When I needed to answer that question, I decided on `U.S. Web Coated (SWOP) v2` as this was default in Photoshop for ages at the time (more than a decade ago).
_wb_
2024-02-23 03:40:37
yep, that one is also the one we (at cloudinary) assume to be the default for untagged CMYK images
2024-02-23 03:42:08
but it's not defined by W3C and most browsers render CMYK images incorrectly even if they are tagged — they'll typically show the CMY interpreted as RGB in sRGB, and just ignore the K channel
2024-02-23 03:43:01
(this kind of works since CMYK is stored with 0=full ink, so misinterpreting CMY as RGB is not a _horrible_ misinterpretation)
w
2024-02-23 04:49:52
now I have to figure out how to make that with lcms2 programmatically...
2024-02-23 05:31:19
actually I'm just confused on how jpeg ycck and cmyk should be converted
2024-02-23 05:31:49
libjpeg cmyk output is different for each
2024-02-23 05:34:34
and I can't seem to get a normal RGB output with us swop to srgb
_wb_
2024-02-23 05:39:30
It's funky - with lcms2 you may have to explicitly select chocolate instead of vanilla
w
2024-02-23 05:42:16
there's like nothing on ycck, what's up with that
2024-02-23 07:03:30
inverted cmyk... looks like it's the adobe marker
2024-02-23 07:03:43
now I also need to figure out how to create non inverted cmyk jpegs
2024-02-23 07:07:13
i can see why nobody wants to implement cmyk
_wb_
2024-02-23 07:35:34
ycck is just ycbcr applied on the first 3 channels
2024-02-23 07:36:01
but your jpeg decoder should already be able to undo that
2024-02-23 07:41:53
cmyk is always inverted in jpeg, that's how adobe defines cmyk, with 0=full ink, 1=no ink (which is `TYPE_CMYK_8_REV` in lcms2). At least that's how I remember it, I might be confusing things
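For reference, a minimal lcms2 sketch of the SWOP-CMYK-to-sRGB conversion w is after. The profile filename is an assumption (point it at whatever copy of U.S. Web Coated (SWOP) v2 you have), and whether you need `TYPE_CMYK_8_REV` or plain `TYPE_CMYK_8` depends on whether your JPEG decoder hands you the Adobe-style inverted values, which is exactly the chocolate/vanilla confusion above.
```c++
#include <lcms2.h>

#include <cstdint>
#include <vector>

// Convert 8-bit CMYK pixels to 8-bit sRGB using the SWOP profile as the
// assumed source space for untagged CMYK. TYPE_CMYK_8_REV assumes the buffer
// still holds Adobe-style inverted CMYK (0 = full ink); switch to TYPE_CMYK_8
// if your decoder already un-inverts.
std::vector<uint8_t> CmykToSrgb(const std::vector<uint8_t>& cmyk, size_t pixel_count) {
  std::vector<uint8_t> rgb(pixel_count * 3);
  cmsHPROFILE cmyk_profile = cmsOpenProfileFromFile("USWebCoatedSWOP.icc", "r");
  cmsHPROFILE srgb_profile = cmsCreate_sRGBProfile();
  cmsHTRANSFORM xform = cmsCreateTransform(cmyk_profile, TYPE_CMYK_8_REV,
                                           srgb_profile, TYPE_RGB_8,
                                           INTENT_PERCEPTUAL, 0);
  cmsDoTransform(xform, cmyk.data(), rgb.data(),
                 static_cast<cmsUInt32Number>(pixel_count));
  cmsDeleteTransform(xform);
  cmsCloseProfile(cmyk_profile);
  cmsCloseProfile(srgb_profile);
  return rgb;
}
```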
2024-02-23 07:43:35
adobe stuff tends to be very confusing, in the Photoshop format you'll find places where they use 0=full ink, 255=no ink, other places where it's 0=no ink, 100=full ink, yet others where 0=no ink, 255=full ink, etc.
2024-02-23 07:44:37
same with alpha channels in PSD where sometimes 0 is opaque and sometimes 0 is transparent, and then it's sometimes premultiplied and sometimes not
w
2024-02-24 02:53:15
<:dead:707868275366952981>
2024-02-24 03:00:28
thanks for the help
Quackdoc
_wb_ adobe stuff tends to be very confusing, in the Photoshop format you'll find places where they use 0=full ink, 255=no ink, other places where it's 0=no ink, 100=full ink, yet others where 0=no ink, 255=full ink, etc.
2024-02-24 03:05:29
as if cmyk wasn't annoying enough to deal with, they can't even be consistent? T.T
w
_wb_ It's funky - with lcms2 you may have to explicitly select chocolate instead of vanilla
2024-02-24 03:06:28
> // F: Flavor 0=MinIsBlack(Chocolate) 1=MinIsWhite(Vanilla)
I had no idea what you meant
Adrià
2024-02-24 01:36:27
Hi! I am trying to add JXL support to dlib (a C++ library). Everything works except calling: `JxlEncoderSetFrameLossless` I've also tried just adding this to: https://github.com/libjxl/libjxl/blob/main/examples/encode_oneshot.cc ```c++ JxlEncoderSetFrameDistance(frame_settings, 0.f); if (JXL_ENC_SUCCESS != JxlEncoderSetFrameLossless(frame_settings, JXL_TRUE)) { fprintf(stderr, "JxlEncoderSetFrameLossless failed\n"); return false; } ``` But I am still getting the same error. I am using libjxl 0.9.2. Am I missing something? I've read in the docs that setting distance to 0 is not enough to achieve true lossless. How should I do it?
Adrià Hi! I am trying to add JXL support to dlib (a C++ library). Everything works except calling: `JxlEncoderSetFrameLossless` I've also tried just adding this to: https://github.com/libjxl/libjxl/blob/main/examples/encode_oneshot.cc ```c++ JxlEncoderSetFrameDistance(frame_settings, 0.f); if (JXL_ENC_SUCCESS != JxlEncoderSetFrameLossless(frame_settings, JXL_TRUE)) { fprintf(stderr, "JxlEncoderSetFrameLossless failed\n"); return false; } ``` But I am still getting the same error. I am using libjxl 0.9.2. Am I missing something? I've read in the docs that setting distance to 0 is not enough to achieve true lossless. How should I do it?
2024-02-25 12:06:50
I figured out what I was doing wrong: I needed to set `basic_info.uses_original_profile` to true.
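For reference, a condensed sketch of the combination that ended up mattering, against the 0.9.x API; `ConfigureLossless` is just an illustrative helper, not part of dlib or libjxl.
```c++
#include <jxl/encode.h>

#include <cstdint>

// True lossless needs uses_original_profile in the basic info *and* the
// per-frame lossless flag / distance 0. Other basic-info fields and the color
// encoding still need to be filled in as in the libjxl examples.
bool ConfigureLossless(JxlEncoder* enc, JxlEncoderFrameSettings* frame_settings,
                       uint32_t width, uint32_t height) {
  JxlBasicInfo basic_info;
  JxlEncoderInitBasicInfo(&basic_info);
  basic_info.xsize = width;
  basic_info.ysize = height;
  basic_info.bits_per_sample = 8;
  basic_info.uses_original_profile = JXL_TRUE;  // keep pixels in the original color space
  if (JxlEncoderSetBasicInfo(enc, &basic_info) != JXL_ENC_SUCCESS) return false;

  if (JxlEncoderSetFrameDistance(frame_settings, 0.0f) != JXL_ENC_SUCCESS) return false;
  return JxlEncoderSetFrameLossless(frame_settings, JXL_TRUE) == JXL_ENC_SUCCESS;
}
```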
sklwmp
2024-02-25 02:16:25
Thanks for sharing! Far too often I see someone ask a question, then say "Hey, I solved my problem!" and never share how they did so... Sorry I couldn't help you fix the issue, though 😅
_wb_
2024-02-27 12:39:07
experimenting with some stuff
2024-02-27 12:39:29
ugh discord strips
2024-02-27 12:39:52
2024-02-27 12:40:39
open that jpeg in chrome
lonjil
2024-02-27 12:48:45
when I load it, it's completely blank, until I switch to another tab and back; then it shows up the same as it does here on discord
2024-02-27 12:49:17
mpv worked
_wb_
2024-02-27 12:55:37
chrome on the left, safari on the right
2024-02-27 12:56:15
(the screenshot doesn't really look correct, I guess apple does tone mapping when making screenshots of hdr stuff)
2024-02-27 12:57:41
lonjil
2024-02-27 01:06:36
mpv
yoochan
2024-02-27 01:07:33
which one should have been displayed ?
Tirr
2024-02-27 01:07:55
maybe both
2024-02-27 01:08:31
I guess it's some kind of hacky gain map
_wb_
2024-02-27 01:13:00
it's a sloppily handwritten UltraHDR image
Adrià
sklwmp Thanks for sharing! Far too often I see someone ask a question, then say "Hey, I solved my problem!" and never share how they did so... Sorry I couldn't help you fix the issue, though 😅
2024-02-27 01:13:14
Yes, I also hate it when that happens!
_wb_
2024-02-27 01:13:29
looks like mpv just shows the gain map image, maybe it thinks it's a two-frame video and plays it like that
lonjil
2024-02-27 01:18:02
so it seems
jonnyawsom3
2024-02-27 02:41:18
Noticed recently that Discord is 'progressively' loading images by requesting a 512 width version before the full size when clicked
Traneptora
_wb_ looks like mpv just shows the gain map image, maybe it thinks it's a two-frame video and plays it like that
2024-02-27 04:24:39
that's correct, if you treat it as a series of concatenated jpegs and start it paused, it renders the first one until you framestep forward
2024-02-27 04:25:05
`mpv --pause --demuxer-lavf-o=format=mjpeg test.jpg`
fab
2024-02-27 06:41:47
After 3 months of working very hard
2024-02-27 07:13:20
Demo Dragon ball xenoverse
2024-02-27 07:13:22
https://www.facebook.com/share/v/ynLgRk7Zso7oB35z/
Oleksii Matiash
2024-02-27 07:32:22
Oh no..
fab
2024-02-27 07:34:18
Me and damian101 have a lot of Instagram data
2024-02-27 07:35:06
Instagram isn't used very much in America but I don't care about that
2024-02-27 07:38:44
Traneptora you know BlueSworx
2024-02-27 07:38:51
2024-02-27 07:38:59
I find his comment very interesting
2024-02-27 07:40:07
Also inherent to+93 jpeg xl
2024-02-27 07:40:14
Danny
2024-02-27 07:47:30
daym
2024-02-27 07:48:02
glad to see zig in such a random place
fab
2024-02-27 07:49:27
Danyil
2024-02-27 07:49:48
I'm having strong AV1 artifact fidelity problems
2024-02-27 07:50:17
I don't know if it's an AV1 problem or if JPEG XL can fix it
2024-02-27 07:50:44
😌🤨
damian101
fab Me and damian101 have a lot of Instagram data
2024-02-27 07:51:11
I don't even use Instagram <:av1_thinkies:895863009820414004>
fab
2024-02-27 07:52:28
Seems like an amplified signal
_wb_
2024-02-27 07:52:35
Fab this stuff is incomprehensible, either stay in off-topic or start making sense please. Or just stay quiet.
fab
2024-02-27 07:52:45
Ok,ritt
_wb_
2024-02-27 07:53:14
You post random screenshots with random text and nobody understands what you mean with it. Either explain better or stop.
2024-02-27 07:54:42
Thanks in advance. I hope you understand, I am trying to keep this discord sane and not too spammy.
gb82
Danny glad to see zig in such a random place
2024-02-27 11:02:25
lots of Zig lovers around the AV1 servers, including myself :D
yoochan
2024-02-28 07:29:37
Zig? Is that the new C for millennials?
Quackdoc
2024-02-28 07:31:53
it's C that wants to be Rust
2024-02-28 07:32:06
[av1_omegalul](https://cdn.discordapp.com/emojis/885026577618980904.webp?size=48&quality=lossless&name=av1_omegalul)
yurume
2024-02-28 08:17:55
anything better than C/C++ deserves to exist
gb82
Quackdoc it's C that wants to be Rust
2024-02-28 08:47:46
quite the hot take ... lol
yurume anything better than C/C++ deserves to exist
2024-02-28 08:49:01
that's precisely Zig's philosophy. Rust is *different* from C/C++, so both are still used; Zig aims to be largely the same, but all around better. Zig can be used as a build system in existing C projects too, and integrates perfectly with existing C projects
Quackdoc
gb82 quite the hot take ... lol
2024-02-28 08:50:55
hot or not doesn't make it not true lol
gb82
2024-02-28 08:51:21
it isn't true imo
yurume
gb82 that's precisely Zig's philosophy. Rust is *different* from C/C++, so both are still used; Zig aims to be largely the same, but all around better. Zig can be used as a build system in existing C projects too, and integrates perfectly with existing C projects
2024-02-28 08:52:17
my personal take is that Rust is more or less a replacement for C++ while Zig is a replacement for C
gb82
2024-02-28 08:52:35
yeah that makes sense
yurume
2024-02-28 08:52:42
not to say that C++ and thus Rust is too complex or C and thus Zig is too simple
2024-02-28 08:53:13
they share roughly the same design philosophy within each group, but the newer ones do that job in a much better way
lonjil
2024-02-28 08:53:37
Rust is a replacement for C
2024-02-28 08:53:49
Proof: Linux is adopting Rust
yurume
2024-02-28 08:54:08
well Linux allowed a C++ code base for device drivers for a long time AFAIK
2024-02-28 08:54:26
the *current* usage of Rust in Linux is an alternative language for writing drivers
lonjil
2024-02-28 08:55:02
Linus Torvalds has violently rejected C++ every time it has been suggested
yurume
2024-02-28 08:55:03
in the future Rust can indeed spread to other subsystems of Linux, but that would rather prove that Linux was too complex to be written in C alone
lonjil
2024-02-28 08:55:50
Personally I don't understand C++ at all, but find Rust nice and easy
gb82
lonjil Rust is a replacement for C
2024-02-28 08:55:55
there are instances where you need to write unsafe Rust. it becomes hard to justify
yurume
lonjil Linus Torvalds has violently rejected C++ every time it has been suggested
2024-02-28 08:55:58
again, that only proves that C++ was not good for Linux, not that something *better* than C++ would also be bad for Linux
lonjil
gb82 there are instances where you need to write unsafe Rust. it becomes hard to justify
2024-02-28 08:56:12
Not really?
yurume
2024-02-28 08:57:05
another similar example is GCC, which of course started as a C-only project, but now it's a C++ project albeit with a heavy restriction
gb82
lonjil Not really?
2024-02-28 08:57:22
yes, absolutely
lonjil
2024-02-28 08:58:02
Wow had to use unsafe two times in a code base, whelp guess C++ would've been a better choice /s
yurume
2024-02-28 08:58:39
are you suggesting that Rust is better than what others suggest, or worse than that?
gb82
2024-02-28 08:59:02
https://zackoverflow.dev/writing/unsafe-rust-vs-zig/
lonjil Wow had to use unsafe two times in a code base, whelp guess C++ would've been a better choice /s
2024-02-28 08:59:13
not what im implying
lonjil
yurume again, that only proves that C++ was not good for Linux, not that something *better* than C++ would also be bad for Linux
2024-02-28 08:59:17
Well, yes, Rust is more complex than C, but honestly it's not just Linux; I've seen way more C stuff and C people move to Rust than anything C++. Rust is definitely less complex than C++.
yurume
gb82 https://zackoverflow.dev/writing/unsafe-rust-vs-zig/
2024-02-28 08:59:20
I personally disagree with that article
lonjil
2024-02-28 08:59:41
Been a while since I read it but as I recall I found it to be a bit nonsensical.
yurume
2024-02-28 08:59:43
I should mention that I have written Rust for more than a decade, including the current job
gb82
2024-02-28 08:59:47
people pretend Rust is the universal catch-all which isn't necessarily the reality
yurume
gb82 people pretend Rust is the universal catch-all which isn't necessarily the reality
2024-02-28 09:00:11
*some*
gb82
2024-02-28 09:00:21
yes, some people
lonjil
2024-02-28 09:00:52
The number of people who complain about people who exaggerate Rust is greater than the number of people who exaggerate Rust, in my experience.
yurume
2024-02-28 09:00:53
and I absolutely think that Rust's memory safety is extremely valuable, even with a presence of `unsafe`
Quackdoc
yurume my personal take is that Rust is more or less a replacement for C++ while Zig is a replacement for C
2024-02-28 09:01:04
I don't really see this myself, well when you pile a bunch of crates on maybe
gb82 there are instances where you need to write unsafe Rust. it becomes hard to justify
2024-02-28 09:01:16
not at all
2024-02-28 09:01:35
unsafe rust is **significantly** safer than C still, since you still have to interact with it
yurume
2024-02-28 09:01:43
because it gives you a modularized tool for managing an entire class of bugs which was never widely available before Rust
gb82
2024-02-28 09:01:48
and zig is significantly safer than C
Quackdoc
2024-02-28 09:01:54
even in general it's still safer than C
yurume
2024-02-28 09:02:30
but it's also true that Rust's memory safety has been a bit exaggerated by people who don't sufficiently understand it (both Rust users and outsiders)
2024-02-28 09:02:59
it is still early to quantify whether unsafe Rust turns out to be safer than (always-unsafe) C or not btw
lonjil
2024-02-28 09:03:14
Zig is safer than C mostly by making "the right thing to do" more ergonomic, rather than anything fundamental. And that's good! Good language design is good.
yurume
2024-02-28 09:03:58
mainly because C has now accumulated quite a few tools like sanitizers, and hardware support like the memory tagging extension may change the situation in the near future (not guaranteed though)
Quackdoc
yurume it is still early to quantify whether unsafe Rust turns out to be safer than (always-unsafe) C or not btw
2024-02-28 09:04:47
even if you have to write *entirely* unsafe rust, which you pretty much never need to do, I would bet on the rust tooling making it still more ergonomic and "safe" than C
yurume
2024-02-28 09:04:49
in comparison, safe Rust does grant more freedom to the compiler, which means that unsafe Rust has to follow the rules much more strictly
gb82
lonjil Zig is safer than C mostly by making "the right thing to do" more ergonomic, rather than anything fundamental. And that's good! Good language design is good.
2024-02-28 09:04:58
100%. even just `defer` makes a huge difference for memory deallocation
yurume
2024-02-28 09:05:27
but that's currently speculation, not fully quantified nor optimized on Rust's side
2024-02-28 09:06:15
Rust for example is looking for the formal notion of pointer provenance, which would essentially make any code not following that provenance rule an instant UB
2024-02-28 09:06:42
but it is also developing new APIs to make it far easier to follow
2024-02-28 09:07:26
so depending on the success of that API, the unsafe Rust in the future may be better than today, or worse than today, or just as (un)safe as today if that proposal gets scrapped
2024-02-28 09:08:33
Zig is conceptually simpler than Rust in that regard, and it *does* provide a good spatial memory safety (which is often all you want), while a temporal memory safety is left to users
2024-02-28 09:09:41
while pretty much everyone agrees that memory safety is a significant cause of security problems, spatial vs. temporal divide was not really quantified mainly because C had none
2024-02-28 09:10:22
so if Zig succeeds, it would be one of the first ever methods to quantify the importance of spatial memory safety relative to temporal
2024-02-28 09:10:39
until then, I would reserve my opinion about the safety story of Zig
veluca
2024-02-28 09:12:48
> good spatial memory safety (which is often all you want)
is it? IIRC most memory safety bugs are temporal
yurume
2024-02-28 09:13:13
I think many are actually a combination of both
2024-02-28 09:13:20
that's another reason why that is hard to quantify
veluca
2024-02-28 09:13:25
(and it's not like you couldn't add bounds checks in C++ by instrumenting `span` or stl containers and thereabouts)
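As a toy illustration of that parenthetical: a span-like wrapper whose only job is the bounds check. `checked_span` is a made-up name; libstdc++'s `_GLIBCXX_ASSERTIONS` and similar hardening modes get you much the same effect for the standard containers.
```c++
#include <cstddef>
#include <stdexcept>

// Non-owning view whose indexing operator always range-checks, trading a
// branch per access for spatial memory safety.
template <typename T>
class checked_span {
 public:
  checked_span(T* data, std::size_t size) : data_(data), size_(size) {}

  T& operator[](std::size_t i) const {
    if (i >= size_) throw std::out_of_range("checked_span: index out of range");
    return data_[i];
  }
  std::size_t size() const { return size_; }

 private:
  T* data_;
  std::size_t size_;
};
```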
yurume
2024-02-28 09:13:34
oh absolutely
2024-02-28 09:13:53
if we can significantly improve the memory safety story of C without any new language, that's also welcomed
2024-02-28 09:14:28
but it's evident that ISO C didn't try too hard in that regard
lonjil
yurume Rust for example is looking for the formal notion of pointer provenance, which would essentially make any code not following that provenance rule an instant UB
2024-02-28 09:14:47
All code not following pointer provenance in both C and Rust is already UB. We just don't really know what the rules are, for Rust and for C, so we don't know what is UB.
yurume
2024-02-28 09:15:37
Rust currently doesn't have a fully defined notion, which means there is some Rust code for which it's unknown whether it's indeed UB or not
2024-02-28 09:15:51
practically most cases can be sorted though
Quackdoc
2024-02-28 09:15:54
you could do a lot to make C and C++ safe, but in the end, the effort has been put into rust and it shows. rust is rocketing in popularity for a reason after all
yurume
2024-02-28 09:16:45
I think C and C++ failed to stop something like Rust or Zig because they thought of backward compatibility as something mechanical
2024-02-28 09:17:09
IMO it's something more organic
2024-02-28 09:18:09
you can remain backward compatible with an apparent breakage, provided that you can make the supposed breakage a non-issue or much less painful (e.g. `go fix`)
2024-02-28 09:18:56
there is a spectrum of technical solutions as well, like language epochs/editions
2024-02-28 09:19:12
none of them was properly considered by either party
2024-02-28 09:19:37
JavaScript, aka ECMAScript, fared much better
lonjil
2024-02-28 09:21:55
How to make C 100% memory safe: compile C to a GC'd language and ensure that every allocation is a separate object. Bam, you get both spatial and temporal safety at the object level.
yurume
2024-02-28 09:27:12
that doesn't solve Heartbleed 😉
lonjil
2024-02-28 09:29:18
No amount of language level protections can save you when devs do really silly things
2024-02-28 09:29:29
Alas
Quackdoc
2024-02-28 09:30:01
~~me allowing users to run commands defined in a text file~~
yurume
2024-02-28 09:33:04
you can still partly attribute Heartbleed to C because OpenSSL tried to use its own memory allocator to avoid C's wild memory management...
lonjil
2024-02-28 09:33:29
Well, they wanted to avoid "slow" memory allocators
2024-02-28 09:34:13
They had like, one platform, with a slow malloc
2024-02-28 09:34:19
So they made a caching layer
2024-02-28 09:36:05
Imagine if they used Zig
2024-02-28 09:36:38
"general purpose allocator is too slow, let's make a caching layer"
yoochan
2024-02-28 10:11:50
what's good with C is that it is easy to write shitty code, but it is also easy to write clean one 😄
yurume
2024-02-28 10:34:27
only for a very small one
Traneptora
2024-02-28 04:45:43
This is not the fault of C tho
2024-02-28 04:45:57
this is the fault of bad programming practices
2024-02-28 04:46:07
you can write shitty code in any language
CrushedAsian255
Traneptora you can write shitty code in any language
2024-02-29 10:53:37
Rust but it’s all in an Unsafe block
lonjil
2024-02-29 11:09:35
putting code in an unsafe block doesn't actually do anything
Traneptora
2024-02-29 08:01:30
anyone know any easy way to detect if an existing webp file is encoded with lossy or lossless?
2024-02-29 08:01:41
i.e. encoded with vp8 or jyrki encoding
2024-02-29 08:02:14
`file` reports it for this specific webp but idk if that's actually universal
2024-02-29 08:02:19
or if there's a dedicated tool for it
lonjil
2024-02-29 08:10:20
I believe `webpinfo` should tell you
Traneptora
2024-02-29 08:10:37
ah so there is a tool for that
2024-02-29 08:10:37
ty
afed
2024-02-29 08:26:04
ImageMagick's `identify` also shows quality as 100 for lossless, but doesn't detect it for lossy
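If you'd rather not shell out to `webpinfo`, the container makes this easy to check by hand; a rough sketch (`WebpFlavor` is a made-up helper) that walks the RIFF chunks and reports which bitstream chunk it finds:
```c++
#include <cstdint>
#include <cstring>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Rough check of whether a (still) WebP is lossy or lossless, based on which
// bitstream chunk the RIFF container holds: "VP8 " = lossy, "VP8L" = lossless.
// Extended files start with "VP8X" and keep the bitstream in a later chunk, so
// we walk the chunk list; animations (frames inside ANMF) are not handled here.
std::string WebpFlavor(const std::string& path) {
  std::ifstream f(path, std::ios::binary);
  std::vector<unsigned char> buf((std::istreambuf_iterator<char>(f)),
                                 std::istreambuf_iterator<char>());
  if (buf.size() < 16 || std::memcmp(buf.data(), "RIFF", 4) != 0 ||
      std::memcmp(buf.data() + 8, "WEBP", 4) != 0) {
    return "not a WebP";
  }
  size_t pos = 12;
  while (pos + 8 <= buf.size()) {
    // Chunk header: 4-byte tag, 4-byte little-endian payload size.
    uint32_t size = buf[pos + 4] | (buf[pos + 5] << 8) | (buf[pos + 6] << 16) |
                    (static_cast<uint32_t>(buf[pos + 7]) << 24);
    if (std::memcmp(&buf[pos], "VP8 ", 4) == 0) return "lossy";
    if (std::memcmp(&buf[pos], "VP8L", 4) == 0) return "lossless";
    pos += 8 + size + (size & 1);  // payloads are padded to even length
  }
  return "unknown";
}
```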
lonjil
2024-02-29 09:53:51
someone sent this to me and asked if they should download 0.10 and start over
Traneptora
2024-02-29 09:58:02
what the hell
CrushedAsian255
2024-03-01 02:36:42
Wtf
monad
2024-03-01 08:57:47
yes, 0.10 is faster
CrushedAsian255
monad yes, 0.10 is faster
2024-03-01 09:14:31
What’s -e 0 do?
2024-03-01 09:14:55
Also what do the extra 4 “-v” s do
monad
2024-03-01 09:18:37
make me more verbose, but lazy
2024-03-01 09:19:57
see, I'm not even going to correct that typo, but I'm going to point it out
CrushedAsian255
2024-03-01 12:38:55
I am a bit verbose, but at least I try to put effort in and be correct, though I might not be the most efficient
sklwmp
2024-03-01 02:33:17
has anyone considered making a standalone jpegli repo that can be compiled without the need to download all of libjxl? or is jpegli too dependent on libjxl to be decoupled like that? something like https://github.com/cloudinary/ssimulacra2
2024-03-03 11:06:22
What does `-DCMAKE_BUILD_TYPE=None` do exactly? It's not Release or Debug, so, I don't understand when you would use this. Arch builds libjxl with `-DCMAKE_BUILD_TYPE:STRING='None'` so I'm sure there's a good reason for it.
spider-mario
2024-03-03 11:45:44
I think it’s because they have their own set of optimisation flags and don’t want CMake’s interfering
2024-03-03 11:46:11
https://wiki.archlinux.org/title/CMake_package_guidelines#CMake_can_automatically_override_the_default_compiler_optimization_flag
2024-03-03 11:46:50
> Each build type causes CMake to automatically append a set of flags to `CFLAGS` and `CXXFLAGS`. When using the common `Release` build type, it automatically appends the `-O3`[1] compiler optimization flag, and this overrides the default Arch Linux flag which currently is `-O2` (defined in the makepkg configuration file). This is undesired, as it deviates from the Arch Linux targeted optimization level.
Nyao-chan
2024-03-03 12:15:35
Does None generate the debug code?
spider-mario
2024-03-03 12:33:32
it passes no specific flag, so no `-g` either
2024-03-03 12:33:47
but also no `-DNDEBUG`
Nyao-chan
2024-03-03 12:34:21
I mean like the libjxl debug macros
spider-mario
2024-03-03 12:36:37
on its own, it would, but Arch adds `-DNDEBUG` (https://gitlab.archlinux.org/archlinux/packaging/packages/libjxl/-/blob/1158595804b57f70769d4fa7e3dc8baeedc3428f/PKGBUILD#L51-52) so it doesn’t
2024-03-08 03:09:50
one of the authors of Pijul (pmeunier) seems a bit irritated in this thread https://www.reddit.com/r/programming/comments/1b98u8g/why_facebook_doesnt_use_git/
2024-03-08 03:09:53
> No chance of catching up, even though Google is definitely trying to claim some of our innovations as theirs on that one. > > The main issue is, the sequential model of Git, Mercurial, SVN, CVS, Perforce and Fossil, is actually quite naive and does not take the complexity of collaboration into account, especially around conflicts. > > The authors of Jj are trying to bullshit their way around that, claiming to take the "best of Pijul" without understanding it. Ask them what their algorithms are.
fab
2024-03-10 12:28:11
2024-03-10 12:28:37
What do you think about extreme compression for tgcom24 or dmove?
2024-03-10 12:29:27
Do you think I'm qualified enough to understand that?
HCrikki
2024-03-11 09:26:40
just noticed some big sites like DeviantArt seem to actually only have one real copy of images (the uploaded original) and dynamically serve a recompressed version on the fly even if it was already lossy - change the url a bit and you get almost instantly served any image format at any quality level desired
2024-03-11 09:27:17
no jxl but low qualities in lossy formats is a sight
2024-03-11 09:31:15
i wonder if smugmug could make good on their past support and add jxl to their pipeline for deviantart and flickr
fab
fab What do you think about extreme compression for tgcom24 or dmove?
2024-03-11 10:14:29
Those were crap, q 32.1 or about; I guess nobody would give indexes to the site
2024-03-11 10:15:48
2024-03-11 10:16:13
This quality is fine for what there is on the internet
2024-03-11 10:16:24
Avif can scale even higher
2024-03-11 06:15:20
HDblog is mondadori.it, a book publisher, but dday.it same thing perhaps
HCrikki
2024-03-17 11:20:00
storage and bandwidth savings would be massive even if they used jxl for only the top series (recompressing from the png originals to maximize quality for the same filesize as the old badly compressed jpgs)
2024-03-17 11:26:23
jpegli could do wonders there, stretching their use of jpg and generating higher quality jpgs they could eventually convert to jxl at extremely low resource cost (regenerating from pngs would guarantee higher quality/smaller files if available)
2024-03-17 11:33:50
afaik jpegli compresses at a higher internal precision and bit depth than the final output and, measured against ssimulacra, deviates less from the original image compared to other jpg encoders
2024-03-17 11:36:36
on dex's numbers, they push 4gb/sec all the time and 25+ terabytes traffic everyday. chopping that in half overnight would be huge, even transitionally with an inefficient jpg->jxl conversion (would still come at 0 extra quality loss)
lonjil
2024-03-17 10:23:28
I just noticed that all the cached thumbnails on my Linux desktop are PNG, which seems rather silly.
2024-03-17 10:24:11
Perhaps that makes sense for the very tiny ones, but some folders I have in large icon view, with 256 pixel wide or tall thumbnails
2024-03-17 10:25:05
Those would surely be a good deal smaller if encoded as JPEGs instead of PNGs
HCrikki
2024-03-17 10:27:31
pngs show transparency
lonjil
2024-03-17 10:28:33
the vast majority of these images have no transparency, though maybe they didn't want to have any logic, and just always do something safe
2024-03-17 10:28:46
clearly they should be jxl instead!
2024-03-17 10:31:02
> So dolphin created a lot of thumbnails over the last 2 years. They are all saved as png, so that makes some of them bigger than the jpg original. lololol
HCrikki
2024-03-17 10:31:13
imo png thumbs are a legacy decision that just wasn't revisited. when desktops ran at sub-hd resolutions and thumbs were small, jpg loses a lot of detail
2024-03-17 10:32:30
not quite an issue nowadays with fast drives, highres and large monitors, but linux is at a point where it's shaking off its old legacy compromises
2024-03-17 10:35:01
afaik DEs compliant with the freedesktop specs will read thumbs in whatever format they're in, even if they generate png by default
lonjil
2024-03-17 10:39:43
I tried replacing a png thumb with a jpg one and it just made a new png one >.>
HCrikki
2024-03-17 10:39:56
gnome?
lonjil
2024-03-17 10:40:00
kde
2024-03-17 10:41:34
Now I tried naming the jpg as .png, and it still overwrote it with a new png 😅
HCrikki
2024-03-17 10:58:37
afaik thumbs may be linked to the og file by hash, not name, so a swapped-in jpg would be considered a corrupt thumb
lonjil
2024-03-17 10:59:35
I wonder where that association is stored, then 🤔
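As far as I understand the freedesktop thumbnail spec, the association is encoded in the thumbnail's own filename; a sketch of how a thumbnailer derives it (`ThumbnailPath` is hypothetical, "large" being the 256px directory):
```c++
#include <openssl/evp.h>

#include <cstdio>
#include <cstdlib>
#include <string>

// Per my reading of the freedesktop.org thumbnail spec: the thumbnail file is
// named after the MD5 hex digest of the file's canonical URI and stored under
// ~/.cache/thumbnails/<size>/. The PNG additionally carries Thumb::URI and
// Thumb::MTime text chunks so stale thumbnails can be detected.
std::string ThumbnailPath(const std::string& uri, const std::string& size_dir) {
  unsigned char digest[EVP_MAX_MD_SIZE];
  unsigned int digest_len = 0;
  EVP_Digest(uri.data(), uri.size(), digest, &digest_len, EVP_md5(), nullptr);

  char hex[2 * EVP_MAX_MD_SIZE + 1];
  for (unsigned int i = 0; i < digest_len; ++i) {
    std::snprintf(hex + 2 * i, 3, "%02x", digest[i]);
  }
  // Assumes $HOME is set.
  return std::string(std::getenv("HOME")) + "/.cache/thumbnails/" + size_dir +
         "/" + hex + ".png";
}

// e.g. ThumbnailPath("file:///home/me/photo.jpg", "large")
```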
HCrikki
2024-03-17 11:07:49
is png an issue btw?
lonjil
2024-03-17 11:08:25
not really, I just thought they were a bit large
2024-03-17 11:08:46
but most of them are below 100 KiB so it isn't so bad anyway
HCrikki
2024-03-17 11:09:55
at least those formats generate quickly. someone did a comparison of thumbnail generation 2 months ago. everything generated in less than 1 second... except avif at 25 seconds
lonjil
2024-03-17 11:10:06
lol
HCrikki
2024-03-17 11:10:17
definitely not gonna be anyone's thumbnail format soon
2024-03-17 11:11:15
and lets not get into the carbon footprint
2024-03-17 11:12:24
jxl would be neat to not even need premade thumbnails in the first place (load as little as just 10% of a file to show a preview)
Lock
HCrikki is png an issue btw?
2024-03-17 11:12:37
yeah cause jxl is better
2024-03-17 11:12:39
o(≧▽≦)o
lonjil
HCrikki jxl would be neat to not even need premade thumbnails in the first place (load as little as just 10% of a file to show a preview)
2024-03-17 11:13:30
still want separate thumbnails, because the image files may be on spinning rust and the thumbnails on a fast SSD
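For what it's worth, a rough sketch of the partial-file preview idea against the libjxl decoding API (not how any desktop environment actually does it): feed the decoder a truncated buffer and flush whatever it can already show.
```c++
#include <jxl/decode.h>

#include <cstddef>
#include <cstdint>
#include <vector>

// Decode as much of a (possibly truncated) JXL codestream as the given bytes
// allow; on running out of input, flush the partially decoded image.
// Error handling is trimmed; output is 8-bit RGB for simplicity.
bool DecodePartial(const uint8_t* data, size_t size, std::vector<uint8_t>* rgb,
                   uint32_t* xsize, uint32_t* ysize) {
  JxlDecoder* dec = JxlDecoderCreate(nullptr);
  JxlDecoderSubscribeEvents(dec, JXL_DEC_BASIC_INFO | JXL_DEC_FULL_IMAGE);
  JxlDecoderSetInput(dec, data, size);

  JxlPixelFormat format = {3, JXL_TYPE_UINT8, JXL_NATIVE_ENDIAN, 0};
  bool have_pixels = false;
  for (;;) {
    JxlDecoderStatus status = JxlDecoderProcessInput(dec);
    if (status == JXL_DEC_BASIC_INFO) {
      JxlBasicInfo info;
      JxlDecoderGetBasicInfo(dec, &info);
      *xsize = info.xsize;
      *ysize = info.ysize;
      rgb->resize(static_cast<size_t>(info.xsize) * info.ysize * 3);
    } else if (status == JXL_DEC_NEED_IMAGE_OUT_BUFFER) {
      JxlDecoderSetImageOutBuffer(dec, &format, rgb->data(), rgb->size());
    } else if (status == JXL_DEC_NEED_MORE_INPUT) {
      // Ran out of bytes: flush whatever has been decoded so far (a lower
      // quality / lower resolution pass if the file is progressive).
      have_pixels = (JxlDecoderFlushImage(dec) == JXL_DEC_SUCCESS);
      break;
    } else if (status == JXL_DEC_FULL_IMAGE || status == JXL_DEC_SUCCESS) {
      have_pixels = true;
      break;
    } else {  // JXL_DEC_ERROR or anything unexpected
      break;
    }
  }
  JxlDecoderDestroy(dec);
  return have_pixels;
}
```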
HCrikki
2024-03-17 11:14:51
well, the spec is inflexible; outside that it'd depend on the DE's approach
2024-03-17 11:16:16
there are scripts to generate working hashes for URIs to jpg thumbs, but they're known to cause issues
gb82
HCrikki at least those formats generate quickly. someone did a comparison of thumbnail generation 2 months ago. everything generated in less than 1 second... except avif at 25 seconds
2024-03-18 12:50:50
do u have a link?
HCrikki
2024-03-18 12:58:10
looked before, couldn't find it again
2024-03-18 12:59:03
it was against current lib versions with large images so might be reproducible
jonnyawsom3
lonjil I just noticed that all the cached thumbnails on my Linux desktop are PNG, which seems rather silly.
2024-03-18 09:11:47
I had Windows bluescreen the other day and delete the thumbnail cache, thanks to the thumbnail generator being single threaded it's faster to just scroll through the photos app than it is to wait for the thumbnails Dx (Especially for WebP or RAW files)
HCrikki
2024-03-18 02:59:31
afaik fjxl was obsoleted in favour of low effort cjxl. effort 1 can be heavier though, e2 and e3 consistently guarantee lower size
username
2024-03-18 03:01:33
wasn't cjxl effort 1 replaced with fjxl?
HCrikki
2024-03-18 03:01:47
more like opposite iinm
2024-03-18 03:03:25
system thumbnailers can be inflexible, but apps like galleries can handle their own internal thumbs (using whatever format they allow, but it's usually either png or jpg)
veluca
2024-03-18 03:03:45
definitely the opposite 😛
2024-03-18 03:04:06
I mean, fjxl still exists, but cjxl -e 1 is pretty much equivalent
2024-03-18 03:04:29
you can use fjxl if you want a single-file version, I guess
HCrikki
2024-03-18 03:04:33
ideal for screenshots but still needs adoption (looks at sharex and steam)
yoochan
2024-03-18 03:05:10
can a jpeg file compressed losslessly to jxl be recompressed losslessly again to jxl but with a higher effort ?
HCrikki
2024-03-18 03:05:45
could, I guess, but it definitely doesn't make a big difference filesize-wise
2024-03-18 03:06:15
the pixel data doesn't change, mostly just the already small metadata
2024-03-18 03:06:57
not reconverted directly, but going back to jpg and then to a higher-effort jxl is still almost instant
yoochan
HCrikki not reconverted directly, but going back to jpg and then to a higher-effort jxl is still almost instant
2024-03-18 03:07:38
that's a possibility indeed 😄 but if it makes no difference, it may be useless
HCrikki
2024-03-18 03:08:13
if a specific filesize is needed and you've got a lossless/hq original, recompressing to that filesize will always give better quality than reusing an existing jpeg that might've used a bad/old jpg encoder in the first place
2024-03-18 03:09:28
not aware of any way to target a predetermined size other than trying efforts until you hit a close number
_wb_
2024-03-18 04:52:08
there are some differences in speed and compression when changing the effort of jpeg recompression, but generally default effort should be fast enough and higher effort than that will not save much
jonnyawsom3
2024-03-18 05:05:24
24x longer for a 0.5% reduction in the quick test I just did
2024-03-18 05:05:33
e7 compared to e9
HCrikki
2024-03-18 08:03:29
imo the default effort should adapt to resolution rather than being the same for everything (slightly higher for small images, a bit lower for large resolutions)
_wb_
2024-03-18 08:38:31
That's an interesting idea...
jonnyawsom3
2024-03-18 10:23:05
*Should* scale rather linearly regardless of content too
Jyrki Alakuijala
2024-03-20 02:10:20
any substantial amount of new heuristic intelligence tends to be useful in some contexts and needs to be inverted in other contexts where it is harmful 🙂
damian101
2024-03-20 04:30:48
Is there some tool out there for Linux that can analyze ICC profiles from PNG, and ideally also extract and inject them?
w
2024-03-20 05:24:20
exiftool to extract/attach
2024-03-20 05:26:45
And I use displaycal profile info on windows, don't know if it comes with it on linux
2024-03-20 05:28:15
to transform to and attach profile, magick works
190n
2024-03-20 10:19:24
https://vxtwitter.com/JohnPhamous/status/1770509175791215001 <:NotLikeThis:805132742819053610>
yurume
190n https://vxtwitter.com/JohnPhamous/status/1770509175791215001 <:NotLikeThis:805132742819053610>
2024-03-21 12:12:03
I was confused for a moment, and then realized the problem...
damian101
w And I use displaycal profile info on windows, don't know if it comes with it on linux
2024-03-21 04:38:12
Exiftool can actually display that too. There's also a very nice GUI for it, jExifToolGUI.
diskorduser
2024-03-21 04:41:48
https://hackaday.com/2024/03/19/your-text-needs-more-jpeg/
_wb_
diskorduser https://hackaday.com/2024/03/19/your-text-needs-more-jpeg/
2024-03-21 07:52:16
Would be fun to do an AVIF variant of it. Way fewer obvious artifacts, but "smoothes" the text by replacing some words with plausible different words (same grammatical role, semantical distance not too far, and more common — e.g. "chameleon" could become "frog" or "animal", or "ponder" could become "think").
yurume
2024-03-21 08:10:41
perhaps each MCU should correspond to each word's semantic vector, a la word2vec?
Traneptora
2024-03-21 08:26:35
as far as I understand the JPEG DCT text lossifier straight up takes text and treats it as pixels and feeds it to a jpeg-style quantizer
2024-03-21 08:26:40
and then inverts it
2024-03-21 08:29:48
they could add AQ where they quantize the start and end of a word less
2024-03-21 08:29:51
makes it more readable
yurume
2024-03-21 09:20:01
there is an additional remapping based on glyph similarity AFAIK
2024-03-21 09:20:16
which is referred to as a "perceptual charmap"
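Just to make the mechanics concrete, a toy sketch of the pipeline as described above, without the charmap step and with nothing taken from the actual lossifizer's code: character codes in, 8-point DCT, frequency-dependent quantization, inverse transform, clamp back to printable ASCII. A larger `harshness` should mangle progressively more, analogous to lowering JPEG quality.
```c++
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <string>

// Toy "text lossifier": treat every run of 8 characters as 8 pixels, take a
// 1-D DCT, quantize the coefficients (more coarsely at higher frequencies),
// then invert and clamp back to printable ASCII.
std::string LossifyText(const std::string& in, double harshness) {
  const int N = 8;
  const double kPi = 3.14159265358979323846;
  std::string out = in;
  for (std::size_t base = 0; base + N <= in.size(); base += N) {
    double coeff[N];
    for (int k = 0; k < N; ++k) {  // forward DCT-II over the character codes
      double sum = 0.0;
      for (int n = 0; n < N; ++n) {
        sum += static_cast<unsigned char>(in[base + n]) *
               std::cos(kPi / N * (n + 0.5) * k);
      }
      double step = 1.0 + k * harshness;         // coarser steps at high frequency
      coeff[k] = std::round(sum / step) * step;  // quantize, then dequantize
    }
    for (int n = 0; n < N; ++n) {  // inverse DCT (DCT-III with 1/N and 2/N scaling)
      double x = coeff[0] / N;
      for (int k = 1; k < N; ++k) {
        x += 2.0 / N * coeff[k] * std::cos(kPi / N * (n + 0.5) * k);
      }
      out[base + n] = static_cast<char>(
          std::clamp(static_cast<int>(std::lround(x)), 32, 126));
    }
  }
  return out;
}
```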
lonjil
2024-03-21 10:44:45
You could use GPT2 to lossily find semantically similar text, and then use GPT2's predicted probabilities for entropy coding 😄
Traneptora
lonjil You could use GPT2 to lossily find semantically similar text, and then use GPT2's predicted probabilities for entropy coding 😄
2024-03-21 10:47:46
yea, but then you have to include GPT2's dictionary or model data in every decoder
2024-03-21 10:47:53
probably not worth the overall size savings <:kek:857018203640561677>
lonjil You could use GPT2 to lossily find semantically similar text, and then use GPT2's predicted probabilities for entropy coding 😄
2024-03-21 10:48:25
amusingly though, the article says that the mangled text is still pretty interpretable by LLMs
lonjil
2024-03-21 10:48:32
Hah
Traneptora
2024-03-21 10:48:34
much past the point that humans can read it
2024-03-21 10:48:41
so you can use it like a reverse turing test
2024-03-21 10:49:11
2024-03-21 10:49:18
like this, a human would go "what?"
2024-03-21 10:49:44
if it gives an intelligible response, probably an LLM
lonjil
2024-03-21 10:49:54
Lmao
Traneptora yea, but then you have to include GPT2's dictionary or model data in every decoder
2024-03-21 10:50:35
GPT4 in every decoder!
Traneptora
2024-03-21 10:51:45
easy
CrushedAsian255
2024-03-21 11:03:41
Minimum required specs:
2024-03-21 11:03:49
24GB RAM
2024-03-21 11:03:59
RTX 4070 or better
DZgas Ж
diskorduser https://hackaday.com/2024/03/19/your-text-needs-more-jpeg/
2024-03-21 06:38:20
<:megapog:816773962884972565>
diskorduser
Traneptora
2024-03-22 01:50:35
How did you generate this text
Traneptora
diskorduser How did you generate this text
2024-03-22 01:51:51
through the jpeg dct text lossifizer linked above
2024-03-22 01:51:58
I didnt generate this, I grabbed it from the blog post
diskorduser
2024-03-22 01:52:09
Ok
Traneptora
2024-03-22 01:52:18
https://lcamtuf.substack.com/p/afternoon-project-jpeg-dct-text-lossifizer
2024-03-22 01:52:27
here's the blog post from the creator
DZgas Ж
2024-03-22 05:31:17
I wrote formulas to do "rgb to ycbcr to rgb" - in order to scale the color channel in advance so that when the picture gets compressed, no extra color BLUR appears in Telegram. Does anyone know where to find (and in general how to Google) a more accurate numerical rgb-to-YCbCr formula? The formula written in the JPEG documentation (on wikipedia 🧐) gives a small error. Also, I finally understood why it is impossible to create lossless JPEG
jonnyawsom3
2024-03-22 05:39:19
The default form is always lossy, YCoCg-R is the reversible version
DZgas Ж
The default form is always lossy, YCoCg-R is the reversible version
2024-03-22 05:47:00
No, wait, look, there are numbers in the ycbcr formula that have a certain precision.... why not calculate with an accuracy so high that there would be no artifacts even when working with 32 bits per pixel
2024-03-22 05:48:01
i found T-REC-T.871-201105-I!!PDF-E.pdf
w
2024-03-22 06:01:58
the error comes from storing the value as a same-size integer, not from precision in the calculation
DZgas Ж
The default form is always lossy, YCoCg-R is the reversible version
2024-03-22 06:28:27
original, ycbcr, YCoCg
w the error comes from storing the value as a same-size integer, not from precision in the calculation
2024-03-22 06:29:50
Yes, this is really a problem. the constants are just set in BT.601 and there's nothing I can do about it.... meh
jonnyawsom3
2024-03-22 06:30:06
I got mixed up between YCbCr and YCoCg, but my point still stands, YCoCg-R is the reversible form
DZgas Ж
2024-03-22 06:30:28
it is possible to reverse recreate the function by iterating over the accuracy of the colors after conversion
2024-03-22 06:31:16
<:PepeOK:805388754545934396> I'm too lazy, I'll leave it as it is, and I won't touch it
w
2024-03-22 06:32:37
what about keeping in floating point
DZgas Ж
2024-03-22 06:33:24
I'm a little tired.
w what about keeping in floating point
2024-03-22 06:35:13
I'm already doing it.
2024-03-22 06:35:28
w
2024-03-22 06:36:00
oh that's what you mean
2024-03-22 06:37:31
algebra I guess
DZgas Ж
2024-03-22 06:38:12
<:NotLikeThis:805132742819053610> oh. ok.
w algebra I guess
2024-03-22 06:46:07
that's it, I solved the problem. hell knows why, but instead of rounding the output FLOAT with ROUND, I used a damn INT
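For anyone following along, a minimal sketch of the full-range (T.871 / JFIF) BT.601 conversion with that fix applied: keep Y/Cb/Cr in floating point and only round-and-clamp at the very end. The round trip then comes back exact; it is only once Cb/Cr get stored as 8-bit integers, as JPEG does, that loss becomes unavoidable, which is the "impossible to create lossless JPEG" point above.
```c++
#include <algorithm>
#include <cmath>
#include <cstdint>

// Full-range (JFIF / T.871) BT.601 conversion. The key detail is std::lround
// plus a clamp on the way back to integers: truncating the float (what a
// plain int cast does) introduces exactly the small error discussed above.
struct YCbCr { double y, cb, cr; };

YCbCr RgbToYCbCr(uint8_t r, uint8_t g, uint8_t b) {
  return {0.299 * r + 0.587 * g + 0.114 * b,
          128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b,
          128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b};
}

void YCbCrToRgb(const YCbCr& c, uint8_t* r, uint8_t* g, uint8_t* b) {
  auto to_u8 = [](double v) {
    return static_cast<uint8_t>(std::clamp<long>(std::lround(v), 0, 255));
  };
  *r = to_u8(c.y + 1.402 * (c.cr - 128.0));
  *g = to_u8(c.y - 0.344136 * (c.cb - 128.0) - 0.714136 * (c.cr - 128.0));
  *b = to_u8(c.y + 1.772 * (c.cb - 128.0));
}
```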
2024-03-22 07:03:25
I wrote rgb to ycbcr manually, because Ycbcr convert in python pillow is bullshit
2024-03-22 07:07:32
<:PepeOK:805388754545934396> <:PepeOK:805388754545934396> <:PepeOK:805388754545934396> <:PepeOK:805388754545934396>
DZgas Ж I wrote formulas to do "rgb to ycbcr to rgb" - in order to scale the color channel in advance so that when the picture gets compressed, no extra color BLUR appears in Telegram. Does anyone know where to find (and in general how to Google) a more accurate numerical rgb-to-YCbCr formula? The formula written in the JPEG documentation (on wikipedia 🧐) gives a small error. Also, I finally understood why it is impossible to create lossless JPEG
2024-03-22 08:27:38
Traneptora
ProfPootis yep, that works
2024-03-23 11:37:22
the `--loop-file` issue should be fixed now
2024-03-23 11:37:32
in git master
ProfPootis
2024-03-23 05:06:09
nice!
sklwmp
2024-03-24 01:26:31
So annoyingly, when updating to Plasma 6 on Arch, I had to reinstall `kimageformats` and I only noticed since my screenshots were no longer in JXL and I couldn't open JXL files...
damian101
2024-03-24 01:39:51
maybe because old kimageformats was renamed to kimageformats5...
Arcane
2024-03-28 08:16:05
2024-03-28 08:16:22
_wb_
2024-03-28 02:04:35
Question for <@604964375924834314> about white points: when I take a screenshot in MacOS, the ICC profile it embeds is a DisplayP3 one, and it has the white point set at XYZ 0.95, 1, 1.089 which is D65. When encoding it to jxl and looking at the equivalent ICC profile libjxl generates, it is one with the same color primaries but with the white point set at a different value of XYZ 0.964, 1, 0.825, and there's a `chad` tag with a chromatic adaptation matrix that I guess maps that back to D65. Is there a reason for doing it like this as opposed to simply setting the white point to D65 and dropping the `chad` tag?
spider-mario
2024-03-28 02:06:01
yes, that’s what ICC wants https://www.color.org/whyd50.xalter > In ICC v4, the requirement was introduced that the media white point of a Display class profile shall be equal to D50 (i.e. [96.42, 100, 82.49]). This is in fact implied in v2, but many profile creators did not grasp the rationale for it and as a result some incorrect profiles were created and distributed. ICCv2 spec: > The mediaWhitePointTag contains CIE 1931 XYZ colorimetry of the white point of the actual medium. If the viewer completely adapts to the white point of the medium (as is often the case with monitors) this tag should be set to Xi, Yi, Zi (the PCS white point). If chromatic adaptation is being applied to the PCS values, the adaptation should be applied to the mediaWhitePointTag values as well.
_wb_
2024-03-28 02:09:04
By the way, for some context: I want to make a simple tool that takes an ICC profile as input and returns a more compact, equivalent (or near-equivalent) ICC profile as output. Some of the ICC profiles in the wild contain lots of garbage like description strings in 20 different languages or tabled versions of transfer curves where a parametric one would suffice, so it would be useful for us if we can simplify them and e.g. turn the Apple's 4 KB version of DisplayP3 into a 500 byte version.
spider-mario yes, that’s what ICC wants https://www.color.org/whyd50.xalter > In ICC v4, the requirement was introduced that the media white point of a Display class profile shall be equal to D50 (i.e. [96.42, 100, 82.49]). This is in fact implied in v2, but many profile creators did not grasp the rationale for it and as a result some incorrect profiles were created and distributed. ICCv2 spec: > The mediaWhitePointTag contains CIE 1931 XYZ colorimetry of the white point of the actual medium. If the viewer completely adapts to the white point of the medium (as is often the case with monitors) this tag should be set to Xi, Yi, Zi (the PCS white point). If chromatic adaptation is being applied to the PCS values, the adaptation should be applied to the mediaWhitePointTag values as well.
2024-03-28 02:11:42
Oh, so the profile Apple is putting in its screenshots is technically incorrect, and jxl is silently fixing it. Makes sense, cool.
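A small lcms2 sketch to see that behaviour directly (`InspectWhitePoint` is just for illustration): a v4-conformant display profile should report a D50-ish media white point here, with the actual D65 adaptation tucked away in the `chad` tag.
```c++
#include <lcms2.h>

#include <cstdio>

// Print the media white point a profile actually stores and whether it
// carries a 'chad' (chromatic adaptation) tag.
void InspectWhitePoint(const char* path) {
  cmsHPROFILE profile = cmsOpenProfileFromFile(path, "r");
  if (!profile) return;
  const cmsCIEXYZ* wp =
      static_cast<const cmsCIEXYZ*>(cmsReadTag(profile, cmsSigMediaWhitePointTag));
  if (wp) {
    std::printf("media white point: X=%.4f Y=%.4f Z=%.4f\n", wp->X, wp->Y, wp->Z);
  }
  std::printf("has chad tag: %s\n",
              cmsIsTag(profile, cmsSigChromaticAdaptationTag) ? "yes" : "no");
  cmsCloseProfile(profile);
}
```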
2024-03-28 02:26:28
Are the ICC profiles libjxl synthesizes as compact as possible, or are there still bytes that can be saved in principle? I assume the description/copyright strings could be shortened or removed (or are they obligatory?), and I guess the `cicp` tag could also be dropped for SDR profiles, but other than that it's as compact as it gets?
spider-mario
2024-03-28 03:06:13
I’m not sure that we save every last byte but they shouldn’t be _too_ inefficient
jonnyawsom3
2024-03-28 10:28:10
I assume they're brotli compressed too, which helps already
2024-03-28 10:28:55
Or at least, the metadata contained within... Maybe...
_wb_
2024-03-29 01:13:39
> 7.2.3 Preferred CMM type field (bytes 4 to 7) > This field may be used to identify the preferred CMM to be used. If used, it shall match a CMM type signature registered in the ICC Tag Registry (see Clause 5). If no preferred CMM is identified, this field shall be set to zero (00000000h). All CMMs should be registered with ICC.
2024-03-29 01:14:08
2024-03-29 01:14:13
https://www.color.org/registry/signature/TagRegistry-2021-03.pdf
2024-03-29 01:16:43
We are writing `jxl ` in that field...
2024-03-29 01:23:37
I don't really get what the purpose is of that field, is it just a way to say "this profile was generated by this software"?
2024-03-29 01:32:36
> 7.2.19 Reserved field (bytes 100 to 127) > This field of the profile header is reserved for future ICC definition and shall be set to zero. I can see the ICC is taking a different approach than we had in the JXL design when it comes to header overhead 🙂
2024-03-29 01:33:32
you can fit an entire <#824000991891554375> image in that obligatory padding in ICC profiles 🙂
2024-03-29 01:34:44
(well you can't because those bits *shall* be set to zero, but I bet most implementations will just ignore those bits, not check that they're zeroes)
yoochan
2024-03-29 03:00:28
because 100 bytes is not round enough 2**7 - 1 is much more pleasing... I guess
fab
2024-03-29 03:21:02
Ok i finished my review, i'm not aware of more search compression space
2024-03-29 03:21:08
Good
2024-03-29 03:21:26
I don't know if I infringed copyright
2024-03-29 04:10:19
In the papers jon said i'm not sensitive to long distance banding and the kakadu Media set
2024-03-29 04:10:46
I'm big fan of Berlusconi is normal i'm not sensitive to mediaset
2024-03-29 04:18:22
Study at high quality I prefer berlusconi AVIFs
2024-03-29 04:24:43
Jon immediately diagnosed what could be the problem
2024-03-29 04:25:06
How i vote is different from JPEG neurotypicals
2024-03-29 04:25:23
Because with Giuseppe Conte i like so much jpeg
2024-03-29 04:25:42
Jpeg xl seems to be bad for me
spider-mario
2024-03-29 05:01:28
https://www.openwall.com/lists/oss-security/2024/03/29/4 oops
190n
2024-03-29 05:31:06
<:monkaMega:809252622900789269><:monkaMega:809252622900789269><:monkaMega:809252622900789269><:monkaMega:809252622900789269><:monkaMega:809252622900789269>
Quackdoc
2024-03-29 06:04:02
that's a big yikes
spider-mario
2024-03-29 06:28:32
there’s a Phoronix article as well https://www.phoronix.com/news/XZ-CVE-2024-3094
yoochan
2024-03-29 07:06:44
After the supply chain attack on PyPI... Bad week for open source
yurume
2024-03-30 12:43:08
oh my
2024-03-30 12:43:51
that exploit is really crazy, the attacker in question was in fact a release manager of xz for 2 full years!
2024-03-30 12:44:47
so it is very likely that they intentionally made all those contributions (which should now be scrutinized after the fact) for this one exploit
2024-03-30 12:45:26
there is also some evidence of related attackers who tried to get the affected version of xz spread more widely
Fraetor
2024-03-30 01:10:21
In terms of impact, the exploit wouldn't have ended up running on many production systems. Only Debian sid and Fedora 40 beta actually shipped the versions of the package with the backdoor. It's still not good though.
yurume
2024-03-30 01:23:52
https://github.com/tukaani-project/xz/ oh well the repository has been disabled for now (as of 2024-03-30T01:23Z)
2024-03-30 01:25:58
seems to be related to the suspension of the attacker's account at 01:00Z according to other articles
Kremzli
2024-03-30 02:25:24
I think you should at least update that library to a known good version (latest). That attacker has been around for a long time
yurume
2024-03-30 02:45:17
for now, yes, but in the long term I think xz should be avoided, at least because the attacker controlled the repository for too long
2024-03-30 02:46:37
it might also be possible to revert to a version old enough not to include the attacker's contributions and selectively cherry-pick good commits instead
2024-03-30 02:47:16
the attacker seems to be very patient and sophisticated this time, so that level of caution is justifiable
2024-03-30 02:48:13
(I personally didn't buy the reason to make xz in the first place, but this concern is orthogonal)
2024-03-30 02:49:16
(and that doesn't make lzip a good alternative, as lzip author tried to claim: https://www.nongnu.org/lzip/xz_inadequate.html, either)
2024-03-30 04:33:14
indeed, and that eventually led to their exposure
2024-03-30 04:33:33
but that can't discount 1.5--2 years of the apparent preparation
Vlad (Kuzmin) Erium
2024-03-30 05:41:05
liblzma is affected? libtiff and libxml depend on liblzma
yurume
2024-03-30 06:33:01
yes, unfortunately
2024-03-30 06:33:17
> After observing a few odd symptoms around liblzma (part of the xz package) on Debian sid installations over the last weeks (logins with ssh taking a lot of CPU, valgrind errors) I figured out the answer:
2024-03-30 06:35:25
the current theory seems to be that the exploit is designed to link with openssh; that's normally not the case, but many distros link sshd with libsystemd, which depends on liblzma, so it ends up being a liblzma problem
2024-03-30 06:35:45
(apparently it abused ifunc in order to be executed much earlier than normal code)
2024-03-30 09:23:27
I was writing a too-long comment about this on HN, but I don't think lzip is good either
2024-03-30 09:25:03
the biggest red flag from the lzip format is that it thinks corrupted bits in the compressed data can be caught by the decompressor itself
lonjil
2024-03-30 09:25:45
lol
yurume
2024-03-30 09:26:00
no it isn't, cause it means that your compressed data still has some redundancy to spare!
2024-03-30 09:26:53
the author probably thinks that i) the decompressor is robust enough against untrusted and/or corrupt input streams and ii) the uncompressed checksum should be able to catch anything that slips through
2024-03-30 09:27:10
which is slightly better than the actual statement, but also much harder to prove
2024-03-30 09:27:46
and it doesn't seem wise to only find out about the eventual corruption after decompressing the entire data
afed
2024-03-30 10:42:04
https://arstechnica.com/information-technology/2024/03/playboy-image-from-1972-gets-ban-from-ieee-computer-journals/
w
2024-03-30 11:01:41
anybody have any idea who the IEEE Computer Society is? I'm getting jpeg group vibes
yurume
2024-03-30 11:02:14
good to have, but not necessary for the compression task proper.
2024-03-30 11:03:02
so I actually think it is worthy to have some very approachable archive format that also gives a good encryption and recovery feature (that is, unlike ZIP's bad encryption algorithm)
2024-03-30 11:03:11
but that shouldn't be the foremost concern
2024-03-30 11:04:05
partly because compression format designers are rarely good cryptographic format designers 🙂
2024-03-30 11:04:26
(for that reason, I once suggested that such format should just adopt something like `age` with a minimal change.)
Demiurge
2024-03-30 11:14:51
Ah neat, so this is where the chat is at
spider-mario
2024-03-30 11:46:03
https://www.reddit.com/r/programmingcirclejerk/comments/1br5ffb/package_xzutils_561really5451/kx77q7y/
yurume
2024-03-30 11:49:05
well, it's true that researchers did maliciously put in code like this attacker did, so...
2024-03-30 11:51:14
to be clear, I believe they have shown a worthy proof of concept, but it's also not that unreasonable that they would have been criticized as well
2024-03-30 12:10:35
indeed. there is no clear evidence about the nationality of the attacker(s).
Traneptora
2024-03-30 02:40:44
most likely they used a VPN so it'll be hard to find them
2024-03-30 02:41:00
people don't do this single-handedly, they're likely backed by a large criminal organization or a state
2024-03-30 02:41:08
which means support and VPNs and stuff
w
2024-03-30 03:08:19
probably some 13 year old in kansas
VcSaJen
afed https://arstechnica.com/information-technology/2024/03/playboy-image-from-1972-gets-ban-from-ieee-computer-journals/
2024-03-31 04:34:48
Just pay them money, problem solved. Instead they tried to twist it into something vague for the sake of empty PR.
lonjil
2024-03-31 07:25:39
https://cyberplace.social/@GossiTheDog/112191802878969983
spider-mario
2024-03-31 08:13:51
ah, it’s only meant to support zstd when it’s a tar inside?
2024-03-31 08:14:15
I guess that explains this weird error I got when trying to extract a bare zst from the explorer:
2024-03-31 08:14:43
I like the unterminated error message (“To close this Assistant, click on” – on what?)
lonjil
2024-03-31 08:24:34
probably machine translated with zero QA
spider-mario
2024-03-31 08:31:10
like this one? https://twitter.com/PR0GRAMMERHUM0R/status/1643961780027228160
lonjil
2024-03-31 08:32:02
lmao
Demiurge
2024-03-31 09:49:02
Why not just remove the injected shit?
2024-03-31 09:49:17
Why the need for debian to use such silly conventions
spider-mario
2024-03-31 09:50:00
I believe they’re playing it safe in case there were other backdoors in that timeframe
lonjil
2024-03-31 09:51:21
The attacker was the primary maintainer for over a year, so yeah...
Demiurge
2024-03-31 09:55:24
Ah
2024-03-31 09:56:18
Remind me when I'm writing my own package manager, to rely on DATES rather than making assumptions about version numbers, when determining what needs to be updated
2024-03-31 09:57:45
Even pacman has that same problem. I can't believe how bad the state of Linux is sometimes
spider-mario
2024-03-31 09:58:08
pacman works around this with epochs
Demiurge
2024-03-31 09:58:18
Yeah, I know
2024-03-31 09:58:22
It sucks
2024-03-31 09:58:55
Epochs are a bad workaround
lonjil
2024-03-31 09:59:08
idk how Alpine does it, but I think I've seen apk say "downgrading X" when I run `apk upgrade`
spider-mario
Demiurge Epochs are a bad workaround
2024-03-31 09:59:29
how so?
2024-03-31 09:59:51
I mean, aesthetically, sure, I don’t love this leading `1:`
Demiurge
2024-03-31 09:59:53
Why do you think debian tries to avoid using epochs with +really
spider-mario
2024-03-31 10:00:07
but it doesn’t seem like that much of a deal, all things considered
lonjil
2024-03-31 10:00:12
Part of the problem for Debian I believe is that other projects were already depending on features introduced after that guy became maintainer, so rolling back to a version from 1+ year ago broke stuff.
Demiurge
2024-03-31 10:01:03
Ideally there should not be any need for package managers to parse version strings for packages that have different rules for version names
2024-03-31 10:01:15
That's a stupid proposition from the beginning
2024-03-31 10:02:07
The package database should just use timestamps to determine if a system is up to date
2024-03-31 10:02:14
Because that's obvious
spider-mario
2024-03-31 10:02:36
you can have specific packages use timestamps as package versions if the upstream version numbers are silly enough not to parse as version numbers
Demiurge
2024-03-31 10:02:59
Comparing timestamps between the local system and remote system makes it easy to tell
spider-mario
2024-03-31 10:03:08
comparing package versions too
2024-03-31 10:03:14
I’m not sure what timestamps solve
Demiurge
2024-03-31 10:03:20
And just have the version string be whatever upstream calls it
2024-03-31 10:03:40
There's no need to parse the version string at all
2024-03-31 10:03:55
It's just informative
spider-mario
2024-03-31 10:04:20
it seems that you trade that for a need to either specify timestamps manually or generate them automatically (and therefore not have reproducible builds)
Demiurge
2024-03-31 10:05:10
The package database doesn't have anything to do with reproducible builds
2024-03-31 10:05:27
The database can be a plain filesystem
spider-mario
2024-03-31 10:05:41
ah, and have additional information that is not contained in the packages themselves?
Demiurge
2024-03-31 10:05:53
And if the remote server has a newer timestamp than you then that means updates are available
2024-03-31 10:06:06
Yes
2024-03-31 10:06:35
When I say database, I imply information outside of the package contents
2024-03-31 10:07:14
I still think there ought to be checks that the same exact version string isn't re-used twice
2024-03-31 10:07:39
But the only way to tell if one is newer is... look at the time stamp
spider-mario
2024-03-31 10:08:00
is it timestamp of “when the package was added to the database”?
Demiurge
2024-03-31 10:08:49
The database should be responsible for making sure the timestamps are working as intended. Whether it's a database file or the filesystem itself
2024-03-31 10:10:19
Ideally just an ordinary filesystem and just use the modification time
2024-03-31 10:10:42
But it can be more complex if it actually needs to be
2024-03-31 10:11:04
Most package managers use a database file
2024-03-31 10:11:16
Because they hate simplicity
2024-03-31 10:11:39
And they need to sell certifications and training courses to use their crappy Linux server OS
lonjil
2024-03-31 10:18:40
They probably use a file because POSIX file systems have basically zero guarantees about anything
Demiurge
2024-03-31 10:19:17
If you are in control of the update server you write and ensure your own guarantees
spider-mario
Demiurge Most package managers use a database file
2024-03-31 10:19:59
I mean, you have to get the database to the clients somehow
Demiurge
2024-03-31 10:20:03
Since it's your filesystem
lonjil
2024-03-31 10:20:19
What
2024-03-31 10:21:11
File systems have certain guarantees about the available operations, and it is incredibly difficult to actually implement anything reasonable on top of them
2024-03-31 10:22:07
You're adding tons of extra complexity if you want to do it right
2024-03-31 10:23:25
(also using tons of files would be way slower, especially on older hardware and HDDs)
Quackdoc
2024-03-31 10:33:49
I'm not sure I follow the benefits of using time over version. you are just replacing one check with another check, there is no net gain
Demiurge
2024-03-31 10:57:54
In practice a database file might be simpler. Or not.
Quackdoc I'm not sure I follow the benefits of using time over version. you are just replacing one check with another check, there is no net gain
2024-03-31 10:58:31
Time is at least expected to be consistent. Not different from project to project and changing arbitrarily over time
2024-03-31 10:59:19
If someone decides to make a version scheme that goes backwards instead of forwards. That's their prerogative too. Maybe I'll start doing that just to spite crappy linux distros.
2024-03-31 11:01:59
You're replacing an inconsistent check that's complicated and prone to fail with a consistent one.
yurume
2024-04-01 01:11:49
the vulnerability is strongly connected to ELF `GNU_IFUNC` feature, which doesn't apply to Windows.
HCrikki
2024-04-02 10:35:28
is there a way to deliberately corrupt images in a predictable way? curious as to how error resilience and correction works for some formats, notably jpg (mozjpeg and jpegli), webp, avif and jxl
username
HCrikki is there a way to deliberately corrupt images in a predictable way? curious as to how error resilience and correction works for some formats, notably jpg (mozjpeg and jpegli), webp, avif and jxl
2024-04-02 11:05:10
someone did some tests back in 2021 https://www.ctrl.blog/entry/bitrot-avif-jxl-comparison.html
2024-04-02 11:06:34
something important to consider is that this isn't just about the bitstreams/formats themselves, but also about what decoder implementations decide to do
2024-04-02 11:07:00
I think jxl-oxide has an option to decode even if there are errors
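One low-tech way to corrupt predictably, if repeatable bit rot is enough rather than format-aware damage: flip bits chosen by a seeded PRNG (`FlipRandomBits` is a made-up helper), so the same seed produces the same damage for every format you compare.
```c++
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Flip a fixed number of bits chosen by a seeded PRNG, optionally sparing the
// first bytes so the header (signature, dimensions) stays intact. Same seed
// means the same damage every run, so results are comparable across decoders.
void FlipRandomBits(std::vector<uint8_t>* data, int flips, uint64_t seed,
                    size_t skip_header_bytes = 64) {
  if (data->size() <= skip_header_bytes) return;
  std::mt19937_64 rng(seed);
  std::uniform_int_distribution<size_t> pos(skip_header_bytes, data->size() - 1);
  std::uniform_int_distribution<int> bit(0, 7);
  for (int i = 0; i < flips; ++i) {
    (*data)[pos(rng)] ^= static_cast<uint8_t>(1u << bit(rng));
  }
}
```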