JPEG XL

on-topic

Whatever else

Demiurge
CrushedAsian255 distance 4000 is amazing <@207980494892040194>
2024-12-23 02:08:22
So much space savings that the image actually is negative bytes and gives you extra storage back
2024-12-23 02:08:47
By deleting random things on your hard drive
CrushedAsian255
Demiurge By deleting random things on your hard drive
2024-12-23 02:09:22
I should really debloat my system, do I really need the French language pack
Demiurge
2024-12-23 02:09:48
Oui oui
A homosapien
CrushedAsian255 distance 4000 is amazing <@207980494892040194>
2024-12-23 02:10:33
Looks amazing
CrushedAsian255
2024-12-23 02:11:08
AVIF ain’t got nothing on libjxl
jonnyawsom3
https://embed.moe/https://cdn.discordapp.com/attachments/794206087879852106/1320572919084093491/out.jxl?ex=676a16de&is=6768c55e&hm=a305f3c6b0d4889a67ceee04874e6d9d753ac0e77b03b72c138a3dd1ac9c095d&
2024-12-23 02:13:10
I was actually using intensity target 1 and distance 5 earlier to help a friend with their art idea
CrushedAsian255
2024-12-23 02:14:01
Does intensity target effectively decrease the bit depth or something?
I was actually using intensity target 1 and distance 5 earlier to help a friend with their art idea
2024-12-23 02:14:22
New definition of “JXL art”
jonnyawsom3
CrushedAsian255 New definition of “JXL art”
2024-12-23 02:14:47
Already did that with my badge and artwork
2024-12-23 02:15:22
No, I don't know why I'm Italian either
2024-12-23 02:15:58
But someday I'll make a layered JXL that actually uses the art
damian101
Saying "No artifacts visible" is very subjective, especially when we'd be zooming in on distant stars and then sliding the exposure of the image after to make them bright
2024-12-23 02:37:50
Well, that's exactly not what's done.
Quackdoc
2024-12-23 03:45:59
tfw when you are encoding a folder of jpegs and you see this in your terminal
```
JPEG XL encoder v0.11.0 [NEON,NEON_WITHOUT_AES]
Encoding [Modular, lossless, effort: 7]
Compressed to 449.3 kB (6.766 bpp).
603 x 881, 0.229 MP/s [0.23, 0.23], , 1 reps, 8 threads.
```
2024-12-23 03:48:19
I wonder if there would be any point in getting cjxl to optionally throw an error if a file has the wrong extension
Demiurge
No, I don't know why I'm Italian either
2024-12-23 07:26:21
Like a pizza pie, a work of art
DZgas Ж
CrushedAsian255 ```--intensity_target 0.1 -m 1 -d 40```
2024-12-23 11:22:41
gread
CrushedAsian255
DZgas Ж gread
2024-12-23 11:23:08
what
DZgas Ж
CrushedAsian255 what
2024-12-23 11:23:42
what what? looks great I say
CrushedAsian255
DZgas Ж what what? looks great I say
2024-12-23 11:24:24
Ah makes sense
2024-12-23 11:24:36
Couldn’t work out what “gread” meant
username
2024-12-23 11:25:07
I thought it said "greed" for a bit
CrushedAsian255
username I thought it said "greed" for a bit
2024-12-23 11:25:19
same
DZgas Ж
CrushedAsian255 Couldn’t work out what “gread” meant
2024-12-23 11:28:21
somehow I combined great and grand together in the same meaning by mixing up the letters in. my bad
CrushedAsian255
DZgas Ж somehow I combined great and grand together in the same meaning by mixing up the letters in. my bad
2024-12-23 11:28:41
No problem mate
AccessViolation_
Maybe a cut down version of chunk gen for the 'higher level' properties like biome, to know what the most common blocks are
2024-12-23 04:05:01
I was thinking there are two approaches here: manually create the context models that use information like which biome it is, which height level, etc. Or take a meta-adaptive approach where it figures that out by itself, and stores those models on a per-world basis. The reason that might be good is because there are many sliders and settings you can use to modify the world gen, and instead of having to account for all of those manually it could learn them. For example if cave generation is disabled that could have quite some effect on predictors involving air at underground height levels, while if amplified mode is used predictors should expect more floating islands
2024-12-23 04:06:18
You could probably also do it on a per-chunk basis for better results, though you would need to store more MA trees
jonnyawsom3
2024-12-23 04:06:59
There's also modded content, so it would need to adapt and realise "Oh, the `Desert of fire` contains a lot of lava and soul sand"
AccessViolation_
2024-12-23 04:08:12
Oh right, yeah. I should probably try to make it as generic and adaptive from the start to account for mods and also weird custom maps that do not necessarily follow the rules of the default world gen
jonnyawsom3
2024-12-23 04:15:22
There's been a few different attempts at compressing or outright replacing the current Minecraft world format https://github.com/xymb-endcrystalme/LinearRegionFileFormatTools
AccessViolation_
2024-12-23 04:18:44
Aha interesting, it looks like the approach of that project is basically to use zstd (my beloved) and do more chunks per compression unit.
2024-12-23 04:19:10
Nothing too fancy, so there's a lot more fancy that could be added 👀
2024-12-23 04:22:53
You couldn't really compute any form of residuals here since block IDs are nominal data, so the predictors would either be correct and you don't store anything, or not, and you store the whole block. Unless you can like, create a hierarchy of blocks, as in "all of these are types of stone", etc, and you can use that redundancy between different kinds of stone, but that's too much for me to think about
jonnyawsom3
2024-12-23 04:23:49
I know for the Distant Horizons mod they already use/used ZSTD, so a few including me came to the conclusion that using a dictionary could help. In the end it didn't make much difference, maybe even slightly slower or bigger due to variance in chunks
AccessViolation_
2024-12-23 04:24:09
A dictionary as in a palette?
jonnyawsom3
2024-12-23 04:25:05
https://github.com/facebook/zstd?tab=readme-ov-file#the-case-for-small-data-compression
AccessViolation_
2024-12-23 04:25:17
Ohh a zstd dictionary
2024-12-23 04:26:32
I mean even Discord found that generating zstd dictionaries for their very small requests had minimal gains and wasn't worth the effort iirc, so I'm not surprised
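(Editor's note: a minimal sketch of the dictionary idea being discussed, assuming the Rust `zstd` crate's `zstd::dict::from_samples` and `zstd::bulk::Compressor::with_dictionary` APIs; the sample data and size caps are made up for illustration.)
```rust
use std::io;

/// Train a zstd dictionary on a set of serialized chunks, then compress one
/// chunk with and without it so the gain (or lack thereof) can be compared.
fn main() -> io::Result<()> {
    // Stand-in for real serialized chunk payloads; real chunk data would make
    // far better training input than this synthetic filler.
    let samples: Vec<Vec<u8>> = (0usize..256)
        .map(|i| (0..4096).map(|j| ((i * 31 + j) % 251) as u8).collect())
        .collect();

    // Train a small dictionary (the 16 KiB cap is arbitrary here).
    let dict = zstd::dict::from_samples(&samples, 16 * 1024)?;

    let chunk = &samples[0];
    let plain = zstd::bulk::compress(chunk, 3)?;
    let mut with_dict = zstd::bulk::Compressor::with_dictionary(3, &dict)?;
    let dictd = with_dict.compress(chunk)?;

    println!("no dict: {} B, with dict: {} B", plain.len(), dictd.len());
    Ok(())
}
```
As noted above, whether this actually wins depends on how much structure the chunks share; Discord's experience with tiny payloads suggests the gains can be marginal.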
jonnyawsom3
AccessViolation_ You couldn't really compute any form of residuals here since block IDs are nominal data, so the predictors would either be correct and you don't store anything, or not, and you store the whole block. Unless you can like, create a hierarchy of blocks, as in "all of these are types of stone", etc, and you can use that redundancy between different kinds of stone, but that's too much for me to think about
2024-12-23 04:53:48
Maybe still do palette, but for regions instead of chunks. Then you're using a few bits per block instead of a dozen bytes on `minecraft:oak_log`
2024-12-23 04:55:19
Though, that would work for modded. A modded biome contains that palette of modded blocks. Possibly could do something with material tags like you mentioned, but it's rather coarse
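(Editor's note: to illustrate the "few bits per block" point, a small std-only sketch of region-level palette packing; the block names and sizes are hypothetical.)
```rust
use std::collections::HashMap;

/// Build a palette for a region and pack each block as a fixed-width index.
/// Returns (palette, packed bitstream).
fn palette_pack(blocks: &[&str]) -> (Vec<String>, Vec<u8>) {
    // Assign indices in first-seen order.
    let mut index: HashMap<&str, u32> = HashMap::new();
    let mut palette: Vec<String> = Vec::new();
    for &b in blocks {
        index.entry(b).or_insert_with(|| {
            palette.push(b.to_string());
            (palette.len() - 1) as u32
        });
    }
    // ceil(log2(palette size)) bits per block, minimum 1.
    let bits = (32 - (palette.len() as u32).saturating_sub(1).leading_zeros()).max(1);

    let mut out = Vec::new();
    let (mut acc, mut filled) = (0u64, 0u32);
    for &b in blocks {
        acc |= (index[b] as u64) << filled;
        filled += bits;
        while filled >= 8 {
            out.push(acc as u8);
            acc >>= 8;
            filled -= 8;
        }
    }
    if filled > 0 {
        out.push(acc as u8);
    }
    (palette, out)
}

fn main() {
    let blocks = ["minecraft:stone", "minecraft:stone", "minecraft:oak_log",
                  "minecraft:air", "minecraft:stone", "minecraft:air"];
    let (palette, packed) = palette_pack(&blocks);
    println!("{} palette entries, {} packed bytes", palette.len(), packed.len());
}
```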
AccessViolation_
2024-12-23 04:56:33
Yeah that's what the game itself does too, so I'm considering just piggybacking on their palette encoding and working on indices instead (the predictors would see actual blocks for my own convenience but in the end it just stores back the indices)
2024-12-23 04:58:37
Like, I'm not going to go into histograms and ANS and all that, I'm just going to write as few blocks as possible and then zstd encode it all lol (or maybe lz4 encode it just like Minecraft does to see how much savings the predictors give)
2024-12-23 05:02:02
I'm probably just going to focus on per-chunk compression tbh. At least for starters. It's apparently pretty easy to just intercept the chunk read and write calls and transform them to and from my own format after those steps, and working on the redundancy between chunks would make it all more complex
jonnyawsom3
2024-12-23 05:09:35
Minecraft uses Deflate by default, but LZ4 was a recent addition for headless server use
_wb_
2024-12-23 05:24:50
I am not too familiar with Minecraft (know it mostly through my daughters, damn that makes me feel old) but it looks like it is basically a 3D image consisting of a palette of block types. Encoding it simply as layers (slices) of 2D palette images could work well. Possibly even better if adjacent slices are likely to be similar, in which case you could use kAdd blending between the layers to reduce entropy.
AccessViolation_
2024-12-23 05:47:07
Some basic information about Minecraft: it has a voxel world that's split up into vertical chunks of 16 x 16 x 384. The game has different biomes that affect world generation, and distributions of naturally generated blocks in chunks are usually the same per biome (with some exceptions like mountain biomes, where the height of the mountain in a given chunk would affect how much stone there is relative to other stuff). So a per-biome dictionary for entropy coding would probably go well, and if you did it per-chunk that would probably end up wasting a bit of space, but not a lot.

A format that stores block data as 16 x 16, 384-layer Modular mode Palette images would certainly be interesting. You would only care about the indices and not the colors they point to, so I assume using Palette is just for telling the encoder they're indices instead of colors directly, so it won't try to assume close 'pixels' will have similar values (they can be assumed to likely be exactly the same, but if they are different at all they can't be assumed to be close numerically)
2024-12-23 05:48:35
Doing that won't allow you to use game-specific knowledge in your predictors though, like that air will never be below water and grass will never be below dirt
2024-12-23 05:52:31
Huh, with boxes, you could even just store the chunks as JXLs entirely by putting the rest of the data (such as entities) in a brotli compressed text format in a box
2024-12-23 06:07:38
Here's a single chunk for reference (this is a forest biome and doesn't make up much of the 384-block height limit, though other biomes and player-created structures might)
2024-12-23 06:25:52
I've thought about it some more, here's some ideas I think might be worth exploring:
- Instead of using the game's own chunk palette, use one of or a combination of global (whole world), multiple-chunk or single-chunk context models that have their own palette (for example a decision for Y < 50 would contain a context model with a palette for minerals and that darker type of stone)
- Custom predictors which exploit game knowledge, like that blocks that work well as pillars or columns likely extend in both directions (this is good for tree stems but also other types of pillar-like decoration blocks).
- Generic predictors that don't use game-specific knowledge.
- Hand-picked per-biome context modeling.
- Meta-adaptive context modeling that looks at a bunch of chunks and their metadata like biomes and figures it out? Possibly can also learn by experimentation which predictors to use in which detected contexts, so it works well even in user-modified world gen or modded worlds. This sounds hard.
- Auto-generated predictors? I imagine something like a lookup table of `(input, input, input, input) -> (prediction)`, it keeps a finite number of the most successful ones. Good for large custom builds that use certain repeating patterns. These could be used in single-chunk MA trees, while the more general predictors are used in other chunks
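(Editor's note: a rough sketch of the "auto-generated predictors" bullet above — a lookup table keyed on a small neighborhood that keeps only its most successful entries. The neighborhood shape and the 4096-entry cap are arbitrary choices, not anything from the discussion.)
```rust
use std::collections::HashMap;

type BlockId = u16;
/// Context: (west, north, down, down-down) neighbors of the current block.
type Ctx = (BlockId, BlockId, BlockId, BlockId);

/// A learned table predictor: maps a neighborhood pattern to the block that
/// most recently followed it, plus a hit counter used for pruning.
struct TablePredictor {
    table: HashMap<Ctx, (BlockId, u32)>,
    cap: usize,
}

impl TablePredictor {
    fn new(cap: usize) -> Self {
        Self { table: HashMap::new(), cap }
    }

    fn predict(&self, ctx: Ctx) -> Option<BlockId> {
        self.table.get(&ctx).map(|&(block, _)| block)
    }

    /// Record what actually appeared after `ctx`; keep only frequently
    /// successful patterns once the table grows past its cap.
    fn learn(&mut self, ctx: Ctx, actual: BlockId) {
        let entry = self.table.entry(ctx).or_insert((actual, 0));
        if entry.0 == actual {
            entry.1 += 1;
        } else {
            *entry = (actual, 0);
        }
        if self.table.len() > self.cap {
            // Crude pruning: drop entries that have never predicted correctly.
            self.table.retain(|_, v| v.1 > 0);
        }
    }
}

fn main() {
    let mut p = TablePredictor::new(4096);
    // Hypothetical repeating pattern in a custom build.
    p.learn((1, 1, 2, 2), 3);
    assert_eq!(p.predict((1, 1, 2, 2)), Some(3));
}
```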
jonnyawsom3
AccessViolation_ Huh, with boxes, you could even just store the chunks as JXLs entirely by putting the rest of the data (such as entities) in a brotli compressed text format in a box
2024-12-23 06:38:46
Well... I did something horrible and put the raw block data from an old world's chunk file into the EXIF comment of a JPEG so cjxl would brotli compress it
AccessViolation_ I've thought about it some more, here's some ideas I think might be worth exploring: - Instead of using the game's own chunk palette, use one of or a combination of global (whole world), multiple-chunk or single-chunk context models that have their own palette (for example a decision for Y < 50 would contain a context model with a pallete for minerals and that darker type of stone) - Custom predictors which exploit game knowledge, like that blocks that work well as pillars or columns likely extend in both directions (this is good for tree stems but also other type of pillar-like decoration blocks). - Generic predictors that don't use game-specific knowledge. - Hand-picked per-biome context modeling. - Meta-adaptive context modeling that looks at a bunch of chunks and their metadata like biomes and figures it out? Possibly can also learn by experimentation which predictors to use in which detected contexts, so it works well even in user-modified world gen or modded worlds. This sounds hard. - Auto-generated predictors? I imagine something like a lookup table of `(input, input, input, input) -> (prediction)`, it keeps a finite number of the most successful ones. Good for large custom builds that use certain repeating patterns. These could be be used in single-chunk MA trees, while the more general predictors are used in other chunks
2024-12-23 06:41:38
This may only be part of Forge, but you can actually get a direct block distribution for a certain depth, which you could then assign the priority of bits towards
Demiurge
_wb_ I am not too familiar with Minecraft (know it mostly through my daughters, damn that makes me feel old) but it looks like it is basically a 3D image consisting of a palette of block types. Encoding it simply as layers (slices) of 2D palette images could work well. Possibly even better if adjacent slices are likely to be similar, in which case you could use kAdd blending between the layers to reduce entropy.
2024-12-23 08:42:03
Oddly, that makes you sound young...
_wb_
2024-12-23 09:11:43
Probably storing it as a 384x16 image with 16 layers, or even rolled out as a single 384x256 image, would be better to get some good use of prediction and lz77...
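(Editor's note: to make the "rolled out as a single 384x256 image" layout concrete, a small sketch of the index math, assuming a chunk indexed as blocks[y][z][x] with 16 x 384 x 16 dimensions.)
```rust
const X: usize = 16;
const Y: usize = 384;
const Z: usize = 16;

/// Roll a 16x384x16 chunk out into a single Y-by-(Z*X) image of palette
/// indices, so each image row is one horizontal slice of the chunk.
fn roll_out(blocks: &[[[u16; X]; Z]; Y]) -> Vec<Vec<u16>> {
    let mut image = vec![vec![0u16; Z * X]; Y];
    for y in 0..Y {
        for z in 0..Z {
            for x in 0..X {
                image[y][z * X + x] = blocks[y][z][x];
            }
        }
    }
    image
}

fn main() {
    let chunk = [[[0u16; X]; Z]; Y]; // all air, just for illustration
    let image = roll_out(&chunk);
    println!("{} rows x {} cols", image.len(), image[0].len());
}
```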
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
2024-12-23 10:06:19
this is not what I expected, joining this server
AccessViolation_
2024-12-23 10:16:21
Using JPEG XL in ways JPEG XL should not be used in is something we eventually all succumb to, the desire is too strong
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
2024-12-23 10:26:04
I do get this, and I'm "all in" on anything unconventional. But this is just amazing cx
AccessViolation_
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠ I do get this, and I'm "all in" on anything unconventional. But I this is just amazing cx
2024-12-23 11:04:53
*One of us! One of us!*
2024-12-23 11:05:05
Welcome, by the way!
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
AccessViolation_ *One of us! One of us!*
2024-12-23 11:10:23
hehe. -# Unconventional until there is an ISO
CrushedAsian255
2024-12-24 12:38:24
Could Squeeze help here?
_wb_
2024-12-24 08:40:23
Maybe, but unlikely. It works better for continuous stuff, palette indices are usually a bit too discrete for Squeeze to make much sense...
AccessViolation_
2024-12-24 10:26:38
This is pretty interesting. Voxel predictors getting one extra dimension to work with adds a lot more context. You're still just predicting one element, but now you also have the down direction to look to
2024-12-24 10:29:16
It sounds insignificant but you can do all the predictors you could already do for a 2D matrix, PLUS more, or enhance them, with knowledge about what's below. It's neat
2024-12-24 10:41:14
I'll have to think about how to model game-specific predictors in the code, like for the game's liquid spreading patterns, air always being above grass, etc. It would be *really* cool if it could figure those rules out itself and reuse them for other chunks, but that's easier said than done
2024-12-24 10:48:11
I could also make a DSL that has these rules defined in some text file, similarly to how some compilers have a DSL for optimization rewrite rules and instruction lowering rules, which makes it easy to reason about them and add more
2024-12-24 11:04:30
```rust
/// Predicts that `current` will be the block directly under it.
fn down();

/// Predicts that `current` will be the most common block in the 3x3 area under it.
/// In case of a draw, picks the block directly under it.
fn down_3x3();

/// Predicts that `current` will be the most common block in the 3x3 area under it,
/// with the block directly under it counting for two. In case of a draw, picks the
/// block directly under it.
fn down_3x3_weighted();

/// Predicts that `current` will be the most common block in the 3x3 area under it,
/// unless the *two* blocks directly under it are of the same type and aren't found
/// elsewhere in the 3x3 area. This predictor can predict 1x1 pillar structures, and
/// unlike the `down` predictor, only mispredicts stray different blocks in the
/// surrounding homogenous blocks once.
fn down_3x3_preserve_pillars();
```
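(Editor's note: a possible implementation sketch of `down_3x3` from the list above, assuming a `blocks[y][z][x]` layout and u16 block IDs; the tie-breaking follows the doc comment, falling back to the block straight down. This is not the project's actual code.)
```rust
const X: usize = 16;
const Y: usize = 384;
const Z: usize = 16;

type Chunk = [[[u16; X]; Z]; Y];

/// Predict blocks[y][z][x] as the most common block in the 3x3 area directly
/// below it; ties go to the block straight down.
fn down_3x3(blocks: &Chunk, y: usize, z: usize, x: usize) -> Option<u16> {
    if y == 0 {
        return None; // nothing below the bottom layer
    }
    let below = y - 1;
    let straight_down = blocks[below][z][x];
    let mut counts: Vec<(u16, usize)> = Vec::new();
    for dz in -1..=1i32 {
        for dx in -1..=1i32 {
            let (nz, nx) = (z as i32 + dz, x as i32 + dx);
            if nz < 0 || nx < 0 || nz >= Z as i32 || nx >= X as i32 {
                continue; // neighbor outside the chunk
            }
            let b = blocks[below][nz as usize][nx as usize];
            match counts.iter_mut().find(|(id, _)| *id == b) {
                Some(entry) => entry.1 += 1,
                None => counts.push((b, 1)),
            }
        }
    }
    // Most common block wins; on a draw, prefer the block straight down.
    let max = counts.iter().map(|&(_, c)| c).max()?;
    if counts.iter().any(|&(id, c)| id == straight_down && c == max) {
        Some(straight_down)
    } else {
        counts.iter().find(|&&(_, c)| c == max).map(|&(id, _)| id)
    }
}

fn main() {
    let chunk = Box::new([[[1u16; X]; Z]; Y]);
    println!("{:?}", down_3x3(&chunk, 10, 5, 5));
}
```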
jonnyawsom3
2024-12-24 11:04:49
This all depends on how fast you want it to be. Realtime or archival
AccessViolation_
2024-12-24 11:05:46
Reading has to be fast, saving doesn't have to be too fast. At worst you'd notice your autosave takes a little bit longer than usual
jonnyawsom3
2024-12-24 11:13:23
Saving could be sped up by not writing chunks to disk if they've not actually been entered, or at least doing less effort for the session before purging at the end. Then the complex chunks with bases and builds can have the time devoted to them instead
AccessViolation_
2024-12-24 11:16:24
> Saving could be sped up by not writing chunks to disk if they've not actually been entered

This is what I like to call the 'world generator predictor'. The problem is that chunk generation is *very* slow and CPU intensive, which might be why Minecraft goes with the "generate once, store forever" approach in the first place
2024-12-24 11:18:16
It's not a bad idea though, if you have a very good CPU and want to save on disk space, or just want to compress worlds to their theoretical minimum, like for backups
CrushedAsian255
2024-12-24 11:18:37
world generation may change so it would have to store the version it was generated with
AccessViolation_
2024-12-24 11:19:14
That too
jonnyawsom3
AccessViolation_ > Saving could be sped up by not writing chunks to disk if they've not actually been entered This is what I like to call the 'world generator predictor'. The problem is that chunk generation is *very* slow and CPU intensive, which might be why Minecraft goes with the "generate once, store forever" approach in the first place
2024-12-24 11:22:01
There's half a dozen mods that either speed up, multithread or both. So if we're already going this deep we could just apply those techniques too
AccessViolation_
2024-12-24 11:22:03
I feel like it would almost be a separate project to rely on knowledge of the seeded chunk generator to aid in compression. Besides, making very general predictors that work on any type of world gen, even modded, and on custom maps, is a fun challenge
There's half a dozen mods that either speed up, multithread or both. So if we're already going this deep we could just apply those techniques too
2024-12-24 11:22:42
Yeah that's true. I think it's out of scope for what I'm trying to do and interested in, but it would otherwise definitely be worth exploring
jonnyawsom3
2024-12-24 11:23:38
Depends how deep you go. If you start taking into account things like water flow, air above grass, etc. then you're practically re-writing the generation algorithm with player creations as residuals
2024-12-24 11:24:28
So having it 'learn the rules', even if not as accurate as manually stating them, would probably be a lot less work and faster
AccessViolation_
2024-12-24 11:26:44
Yeah
2024-12-24 11:27:09
That'd be really cool to get working.
2024-12-24 11:28:05
Call them self-predicting predictors <:KekDog:805390049033191445>
2024-12-24 11:29:59
Or meta-adaptive predictors
2024-12-24 11:30:37
Even if it won't work well, I can at least give it a fancy name
CrushedAsian255
2024-12-24 11:35:08
Predict the predictors using the predicting predictors
AccessViolation_
2024-12-24 11:47:01
No one predicts the spanish inprediction
CrushedAsian255
2024-12-24 11:50:13
```
if W+N-NW > 128
  if c > 1
    - Invade Poland
    - Invade Germany
    - NE +5
```
2024-12-25 11:02:23
I wonder if meta-adaptive prediction could work for audio
2024-12-25 11:47:47
Could be hard due to being 1D, maybe first do FFT then prediction?
2024-12-25 11:48:11
Could be helpful as transients would have a different energy distribution than a note being held out
AccessViolation_
2024-12-25 02:52:02
Depending on how you want to implement it, it might work well for music. I don't know how existing codecs work, but, you could take a 'patches' approach, where you first segment a song into its beats (as in 'beats per minute' beats), then you decorrelate those. Then you look for larger patterns like a chorus that repeats twice, and decorrelate those. Or another way of seeing it: detecting the 'structure' of the song, macro-blocking on the beats level, and then using the same residuals histogram in the two choruses, but different histograms for the rest of the song
2024-12-26 12:12:36
```rust
for y in 0..Y {
    for z in 0..Z {
        for x in 0..X {
            // fill the chunk with some pattern idk
            default.blocks[y][z][x] = ((x + 1) % (z + 1) % (x + z + 1));
        }
    }
}
```
```
predictor: `north`
chunk size: 16 x 384 x 16 (98304 blocks)
correctly predicted: 52223
saved: 53.124% (assumes correct predictions compress to nothing)

predictor: `west`
chunk size: 16 x 384 x 16 (98304 blocks)
correctly predicted: 11519
saved: 11.718% (assumes correct predictions compress to nothing)

predictor: `down`
chunk size: 16 x 384 x 16 (98304 blocks)
correctly predicted: 98077
saved: 99.769% (assumes correct predictions compress to nothing)

predictor: `north-north`
chunk size: 16 x 384 x 16 (98304 blocks)
correctly predicted: 51071
saved: 51.952% (assumes correct predictions compress to nothing)

predictor: `west-west`
chunk size: 16 x 384 x 16 (98304 blocks)
correctly predicted: 17279
saved: 17.577% (assumes correct predictions compress to nothing)

predictor: `down-down`
chunk size: 16 x 384 x 16 (98304 blocks)
correctly predicted: 98077
saved: 99.769% (assumes correct predictions compress to nothing)

predictor: `north-west`
chunk size: 16 x 384 x 16 (98304 blocks)
correctly predicted: 40703
saved: 41.405% (assumes correct predictions compress to nothing)
```
2024-12-26 12:17:09
Also, I added `north-west` because I thought it would do well with checkerboard patterns, but `north-north` and `west-west`, which fall back to `north-west` if their observed block is out of bounds, actually do slightly better at that it seems
2024-12-26 12:18:45
```rust
fn n(&self, c: Coords) -> bool {
    if c.2 > 0 {
        // Predict block north of current.
        self[c] == self[Coords(c.0, c.1, c.2 - 1)]
    } else {
        // If there is no block there, predict block west of current.
        self.w(c)
    }
}

fn w(&self, c: Coords) -> bool {
    if c.0 > 0 {
        // Predict block west of current.
        self[c] == self[Coords(c.0 - 1, c.1, c.2)]
    } else if c.2 > 0 {
        // If there is no block there, predict the block north of current.
        self[c] == self[Coords(c.0, c.1, c.2 - 1)]
    } else if c.1 > 0 {
        // If there is no block there, predict the block down of current.
        self[c] == self[Coords(c.0, c.1 - 1, c.2)]
    } else {
        // If there is no block there either, fail the prediction.
        false
    }
}

fn d(&self, c: Coords) -> bool {
    if c.1 > 0 {
        // Predict block down of current.
        self[c] == self[Coords(c.0, c.1 - 1, c.2)]
    } else {
        // If there is no block there, predict block west of current.
        self.w(c)
    }
}
```
The JXL draft spec inspired some of this logic, particularly the concept of falling back to other predictors
Traneptora
2024-12-26 12:41:02
I would like to point out that correct predictions aren't necessary
2024-12-26 12:41:26
getting *very close* to correct (e.g. off-by-1) still leaves a residue of 1, which will get encoded down via entropy quite well
AccessViolation_
2024-12-26 12:43:41
Unfortunately that's not the case here. The data are Minecraft block IDs, so they're nominal - numerically close doesn't necessarily mean similar in-game
2024-12-26 12:44:32
So either your block guess is correct, or it's not, and you have to store it -
2024-12-26 12:45:15
```
north        time: [44.167 µs 44.272 µs 44.369 µs]
west         time: [184.10 µs 184.61 µs 185.13 µs]
down         time: [58.242 µs 58.391 µs 58.537 µs]
north-north  time: [55.664 µs 55.783 µs 55.903 µs]
west-west    time: [23.351 µs 23.440 µs 23.541 µs]
down-down    time: [58.021 µs 58.188 µs 58.365 µs]
north-west   time: [188.68 µs 189.16 µs 189.65 µs]
north-east   time: [77.776 µs 77.910 µs 78.053 µs]
checkerboard time: [32.041 µs 32.185 µs 32.346 µs]
```
The predictors are already very fast on relatively hard to predict chunks and I haven't tried to vectorize them yet. Currently it should take about 2 ms to try each predictor on every Y level and go with the best one, and reading them out will probably take no time at all compared to that
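(Editor's note: a sketch of what the "try each predictor on every Y level and go with the best one" step could look like; the predictor signature and toy predictors here are placeholders, not the actual project code.)
```rust
/// A predictor here is just "given a Y slice and a position in it, was the
/// block predicted correctly?"; real predictors would also look at the slice
/// below, but that detail is omitted in this sketch.
type Predictor = fn(slice: &[u16], i: usize) -> bool;

fn predict_west(slice: &[u16], i: usize) -> bool {
    i > 0 && slice[i] == slice[i - 1]
}

fn predict_west_west(slice: &[u16], i: usize) -> bool {
    i > 1 && slice[i] == slice[i - 2]
}

/// For every Y slice, count correct predictions per predictor and keep the
/// index of the winner, which would then be signalled per slice.
fn pick_per_slice(slices: &[Vec<u16>], predictors: &[Predictor]) -> Vec<usize> {
    slices
        .iter()
        .map(|slice| {
            let s: &[u16] = slice;
            predictors
                .iter()
                .enumerate()
                .map(|(pi, p)| ((0..s.len()).filter(|&i| p(s, i)).count(), pi))
                .max()
                .map(|(_, pi)| pi)
                .unwrap_or(0)
        })
        .collect()
}

fn main() {
    let slices = vec![vec![1u16; 256], vec![2u16; 256]];
    let preds: Vec<Predictor> = vec![predict_west, predict_west_west];
    println!("{:?}", pick_per_slice(&slices, &preds));
}
```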
Traneptora
2024-12-26 12:46:51
that's a good point. plus you have the usual context model you could hardcode into an MA tree
2024-12-26 12:46:56
like grass is usually above dirt and below air
2024-12-26 12:46:57
etc.
2024-12-26 12:47:24
How do you plan on storing Tile Entities?
AccessViolation_
2024-12-26 02:45:29
I have no plans to do anything with anything other than block data for now, but if it ever gets to the point where I know I want to turn this into a mod, I might go through everything and see if there's any interesting storage optimizations that can be done on other data. If not, I'll just intercept the block data alone and leave the rest as is. Though switching from DEFLATE to Zstandard is something that would result in a lot of free gains from everything regardless
Traneptora that's a good point. plus you have the usual context model you could hardcode into an MA tree
2024-12-26 02:53:03
This is the most interesting part to me. If you're interested I talked a lot about possible approaches up above, including hard-coded knowledge like the air above grass you mentioned. After some deliberation I really liked the idea of not hard-coding anything and having powerful enough meta-adaptive context modeling to figure a lot of it out. Things like air above grass, water spreading patterns, powered redstone decreasing in power by every block, baked lighting calculations, could all be predicted with fairly simple rules, and if it could figure those out instead of me manually programming them in, it could even work on modded content that completely changes the world gen rules and introduces a bunch of new blocks with new behavior and invariants, which could all be learned
Traneptora
2024-12-26 03:02:04
agreed. ma tree learning needs to be a thing
_wb_
2024-12-26 03:07:40
Libjxl does learn ma trees. There's room for many different tree learning algorithms though, and it would also be nice to have a way to learn a tree once based on some corpus of images and then save the tree and reuse it as a fixed tree...
AccessViolation_
2024-12-26 03:09:38
I feel like that would be really good for people using a single JXL as an image set by using the layers or frames to store multiple images
2024-12-26 03:10:50
But also some hard-coded ones based on a corpus could be interesting yeah
2024-12-26 03:18:12
Minecraft has Anvil files which are regions of 32x32 chunks, and chunks themselves consist of vertically stacked 16x16x16 sub-chunks. So I could learn rules and create palettes and put them into the Anvil file so every chunk in it could reference them, but chunk-specific rules are also possible. I'm not actually limited to this artificial segmenting, but working on nothing larger than an Anvil region would greatly simplify dealing with the game's I/O. Working on single chunks would simplify it even more, but 16x384x16 pillars sort of increase the amount of mispredictions and limit how expressive you can get with MA tree node re-use, so I might work on those 512x384x512 regions instead
2024-12-26 03:26:48
Oh hmm, I didn't consider that using artificially larger chunks means that when the game wants just one it's gonna have to decompress the whole unit of multiple chunks sequentially
2024-12-26 03:28:35
Okay, then, maybe I'll make single chunks units of compression so that they can be individually decoded, but still let them share an MA tree. I just can't let the predictors themselves run on volumes larger than a single chunk or it's going to hurt decode speed a lot
jonnyawsom3
_wb_ Libjxl does learn ma trees. There's room for many different tree learning algorithms though, and it would also be nice to have a way to learn a tree once based on some corpus of images and then save the tree and reuse it as a fixed tree...
2024-12-26 04:35:35
Could even expand that to allow using the tree on decode too, since I know you mentioned JIT compiling could avoid computing all the MA properties and doing a branchy tree traversal
CrushedAsian255
AccessViolation_ Unfortunately that's not the case here. The data are Minecraft block IDs, so they're nominal - numerically close doesn't necessarily mean similar in-game
2024-12-26 11:04:21
You could use some algorithm to re-order the block palette where blocks commonly found next to each other are 1 away in index space
AccessViolation_
2024-12-26 11:09:59
That would be interesting but it would drive up the complexity of it all a lot for sure. I also considered something similar where blocks are categorized by their properties, like, block IDs that start with `110` are stone-like blocks, and then you can use more bits to further narrow down which block it is, and predictors could effectively shave off leading bits by being right about some but not all properties, but that would also be very complex relative to just "guess is correct or not"
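(Editor's note: a rough sketch of the reordering idea just mentioned — count how often block IDs appear next to each other, then greedily chain the palette so frequently adjacent blocks get nearby indices. This only illustrates the concept, with a naive greedy ordering.)
```rust
use std::collections::HashMap;

/// Count adjacent (a, b) pairs along a 1D strip of block IDs.
fn count_pairs(strip: &[u16], counts: &mut HashMap<(u16, u16), u64>) {
    for w in strip.windows(2) {
        let (a, b) = (w[0].min(w[1]), w[0].max(w[1]));
        *counts.entry((a, b)).or_insert(0) += 1;
    }
}

/// Greedy ordering: start from the most frequent pair and repeatedly append
/// the unused block most strongly connected to the current tail.
fn reorder(counts: &HashMap<(u16, u16), u64>) -> Vec<u16> {
    let mut order: Vec<u16> = Vec::new();
    let (&(a, b), _) = match counts.iter().max_by_key(|entry| *entry.1) {
        Some(best) => best,
        None => return order,
    };
    order.push(a);
    order.push(b);
    loop {
        let tail = *order.last().unwrap();
        let next = counts
            .iter()
            .filter_map(|(&(x, y), &c)| {
                if x == tail && y != tail {
                    Some((y, c))
                } else if y == tail && x != tail {
                    Some((x, c))
                } else {
                    None
                }
            })
            .filter(|(cand, _)| !order.contains(cand))
            .max_by_key(|&(_, c)| c)
            .map(|(cand, _)| cand);
        match next {
            Some(cand) => order.push(cand),
            None => break,
        }
    }
    order
}

fn main() {
    let strip = [1u16, 1, 2, 2, 2, 3, 1, 1, 2];
    let mut counts = HashMap::new();
    count_pairs(&strip, &mut counts);
    println!("suggested palette order: {:?}", reorder(&counts));
}
```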
2024-12-26 11:13:13
I think when I have some sort of MA tree thing in place, I can more easily experiment with funky shit without messing up the code base
2024-12-26 11:13:55
(I'm just a decent programmer, I easily get lost in complexity, it's honestly a miracle I even got this far instead of getting overwhelmed)
jonnyawsom3
2024-12-26 11:14:37
Again, Minecraft already has built-in block categories you could tap into. Not sure *how* useful they are though https://minecraft.wiki/w/Tag#Block_tags
AccessViolation_
2024-12-26 11:17:50
If they're not used in the predictors themselves, they could be useful as a signal in MA trees maybe?
2024-12-26 11:19:06
I could see all the blocks in a certain category being used in a similar way, so certain predictors would work better on them than others, and other categories have other predictors that work well on them
2024-12-26 11:20:21
y-level, biome, block category. I bet you decision trees with those alone could be used for some pretty effective context modeling
2024-12-26 11:34:03
```rust
if y < -63 {
    predict_block(bedrock);
} else {
    if down().block_tag == "logs" {
        predict(down());
    } else {
        ...
    }
    ...
}
```
Could even expand that to allow using the tree on decode too, since I know you mentioned JIT compiling could avoid computing all the MA properties and doing a branchy tree traversal
2024-12-26 11:35:34
JIT compiling MA trees sounds cursed but fun
2024-12-26 11:38:26
The CVEs will be plentiful and beautiful
CrushedAsian255
AccessViolation_ y-level, biome, block category. I bet you decision trees with those alone could be used for some pretty effective context modeling
2024-12-27 02:27:19
is the biome stored in the chunk data?
AccessViolation_
2024-12-27 08:39:03
Hmm yeah it is. If a 16x16 biome map is useful for predictor/palette selection, it could be worth it to decode that first, and then use it to decode the actual blocks of the chunk accordingly
CrushedAsian255
AccessViolation_ Hmm yeah it is. If a 16x16 biome map is useful for predictor/palette selection, it could be worth it to decode that first, and then use it to decode the actual blocks of the chunk accordingly
2024-12-27 11:43:32
it could probably be easily compressed as well
AccessViolation_
2024-12-27 11:44:00
Yeah I was thinking of just using one of the predictors for that layer too
CrushedAsian255
2024-12-27 11:45:57
possibly, although it is significantly differently distributed (usually only 1 or 2 different options, 2d, entry is almost always the same as one of its neighbours)
AccessViolation_
CrushedAsian255 possibly, although it is significantly differently distributed (usually only 1 or 2 different options, 2d, entry is almost always the same as one of its neighbours)
2024-12-27 11:57:51
Seems like they're already pretty good
```
== chunk size: 16 x 1 x 16 (256 blocks) ==
predictor: `north`        correctly predicted: 241 (94.141%)
predictor: `west`         correctly predicted: 241 (94.141%)
predictor: `down`         correctly predicted: 241 (94.141%)
predictor: `north-north`  correctly predicted: 227 (88.672%)
predictor: `west-west`    correctly predicted: 227 (88.672%)
predictor: `down-down`    correctly predicted: 241 (94.141%)
predictor: `north-west`   correctly predicted: 228 (89.062%)
predictor: `north-east`   correctly predicted: 242 (94.531%)
predictor: `checkerboard` correctly predicted: 228 (89.062%)
```
2024-12-27 12:03:07
I like my const generics for the chunk size. I have generic logic that works for chunks of any size, but because those values must be const, it means the compiler can optimize the code for the specific dimensions I give, which allows it to e.g. eliminate bounds checks if it can reason about the logic well. I might end up writing a function with identical logic but with runtime-configurable chunk sizes so that MA tree learning can create segments of whatever size it wants
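(Editor's note: a tiny sketch of the const-generic chunk shape being described, so the compiler sees the dimensions as compile-time constants; the names are illustrative.)
```rust
/// Chunk dimensions are type parameters, so indexing arithmetic and loop
/// bounds are known at compile time and bounds checks can often be elided.
struct Chunk<const X: usize, const Y: usize, const Z: usize> {
    blocks: Vec<u16>, // length X * Y * Z, flattened as [y][z][x]
}

impl<const X: usize, const Y: usize, const Z: usize> Chunk<X, Y, Z> {
    fn new() -> Self {
        Self { blocks: vec![0; X * Y * Z] }
    }

    #[inline]
    fn get(&self, x: usize, y: usize, z: usize) -> u16 {
        self.blocks[(y * Z + z) * X + x]
    }
}

fn main() {
    // A vanilla-sized chunk; a runtime-sized variant would use the same
    // logic but take the dimensions as ordinary function arguments.
    let chunk: Chunk<16, 384, 16> = Chunk::new();
    println!("{}", chunk.get(0, 0, 0));
}
```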
VcSaJen
2024-12-29 09:43:11
After I solved the interpolation of a color point based on mixing of four other color points, I stumbled upon Wikipedia article and realized I've been using barycentric coordinates all along... Would have been helpful to know when I started.
AccessViolation_
2024-12-30 09:21:07
Got encoding and decoding with predictors for minecraft chunk compression working!
sklwmp
2024-12-31 07:48:42
now that I finally have an HDR screen and switched to Zen browser, it's suddenly bugging me how Firefox still has no HDR support on Windows (or Linux, for that matter) after several years...
2024-12-31 07:49:38
i'd make a webextension to detect if a youtube video has HDR, then notify me so that i can switch to a chromium-based browser, but i'm not entirely sure if there's a way to detect that
w
2024-12-31 07:56:19
flashbang this guy
Quackdoc
2024-12-31 08:36:16
imagine if firefox just didn't suck lol
monad
AccessViolation_ Got encoding and decoding with predictors for minecraft chunk compression working!
2024-12-31 09:14:11
If you found something useful for archival, I'd be interested to try it.
AccessViolation_
2024-12-31 09:35:46
You could use it for archival for sure. It's also designed to be fast enough to intercept chunk load and save calls from the game, but at first I'm probably going to make it a standalone program that compresses saves
DZgas Ж
Quackdoc imagine if firefox just didn't suck lol
2024-12-31 11:53:19
🤣 can not
bonnibel
2024-12-31 01:55:28
do butteraugli values across a dataset naturally fit a specific distribution? in my case they seem to fit a lognormal or chi2 distribution but i don't know if that's like, coincidence
_wb_
2024-12-31 03:22:24
Interesting question...
veluca
2024-12-31 03:22:43
I can almost promise it's not done on purpose 🙂
spider-mario
2024-12-31 03:32:28
intuitively, log-normal seems to at least make some sense
2024-12-31 03:33:14
not necessarily to the point that one could have known in advance, but let’s say to the point where it’s not the most surprising thing ever
_wb_
2024-12-31 03:43:08
How was the dataset made? What image format?
bonnibel
2024-12-31 04:13:50
it's pairs of source and enhanced images (with multiple enhanced versions per source image), both 8-bit srgb png, filtered to only keep the pairs where observers have rated the enhanced as a subjective improvement
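(Editor's note: one cheap way to sanity-check the lognormal impression discussed above: if the butteraugli scores are lognormal, their logs should look roughly normal, so comparing the skewness of the raw scores against the log scores is a quick first test. A std-only sketch; the sample values are made up.)
```rust
fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

/// Sample skewness; roughly 0 for symmetric (normal-looking) data.
fn skewness(xs: &[f64]) -> f64 {
    let m = mean(xs);
    let n = xs.len() as f64;
    let m2 = xs.iter().map(|x| (x - m).powi(2)).sum::<f64>() / n;
    let m3 = xs.iter().map(|x| (x - m).powi(3)).sum::<f64>() / n;
    m3 / m2.powf(1.5)
}

fn main() {
    // Stand-in for real butteraugli distances across a dataset.
    let scores = [0.4f64, 0.6, 0.7, 0.9, 1.1, 1.3, 1.8, 2.5, 3.9, 6.0];
    let logs: Vec<f64> = scores.iter().map(|s| s.ln()).collect();
    println!("skew(raw) = {:.2}", skewness(&scores));
    println!("skew(log) = {:.2}", skewness(&logs));
}
```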
Traneptora
Quackdoc imagine if firefox just didn't suck lol
2025-01-05 12:31:30
chromium-based browsers are ahead of the game with image and display but far behind firefox when it comes to privacy. reminder that in june 2025, ublock origin will stop working on chrome
2025-01-05 12:31:56
like adblock on chrome won't be a thing mid 2025 because their "manifest v3" api doesn't support the necessary features that adblock extensions need
lonjil
2025-01-05 01:25:40
I mean adblock will still be a thing just not as good
HCrikki
2025-01-05 02:02:22
browsers will integrate adblocking functionality to override any limitations. issue is not just being served 'personalized' ads but constant datamining+tracking code bloating sites out of control
Quackdoc
Traneptora chromium-based browsers are ahead of the game with image and display but far behind firefox when it comes to privacy. reminder that in june 2025, ublock origin will stop working on chrome
2025-01-05 07:20:35
MV2 is being kept around for a while if you use the appropriate config, probably at least 2 or so years. Technology wise, firefox isn't massively ahead of chrom{e,ium} when it comes to privacy, they do have a few features that give them an edge, but those said features often break other websites so I can't really call that a real win
Traneptora
Quackdoc MV2 is being kept around for a while if you use the appropriate config, probably at least 2 or so years. Technology wise, firefox isn't massively ahead of chrom{e,ium} when it comes to privacy, they do have a few features that give them an edge, but those said features often break other websites so I can't really call that a real win
2025-01-05 07:25:24
https://developer.chrome.com/blog/resuming-the-transition-to-mv3/
Quackdoc
2025-01-05 07:27:00
yeah I am aware, until june, and internally chromium still uses a lot of MV2 stuff so we will only need small patches to keep MV2 around
VcSaJen
Traneptora chromium-based browsers are ahead of the game with image and display but far behind firefox when it comes to privacy. reminder that in june 2025, ublock origin will stop working on chrome
2025-01-05 02:07:29
Well now that Mozilla is an ad company, privacy is only going to end up worse
AccessViolation_
2025-01-05 06:32:54
Does chrome/ium even do things like per-site cookie and stream isolation
Quackdoc
2025-01-05 06:34:18
per site cookie is a fucking horrid idea and I wish it died, but yes they are doing it now, but fuck that
2025-01-05 06:34:49
for instance, a lot of mastodon's problems could be solved with shared cookies,
2025-01-05 06:36:01
the power of {yes or no} is too strong for people to handle
2025-01-05 06:36:09
so much as asking users is a crime against privacy
AccessViolation_
2025-01-05 06:36:28
Firefox's privacy features work fine, if they don't they patch it for specific websites, see `about:compat`. Users basically won't run into sites not working if they don't manually make the tracking protection more strict
Quackdoc
2025-01-05 06:37:26
the issue with firefox is that they are holding the web back. they constantly do this, webusb is another good example.
AccessViolation_
2025-01-05 06:38:15
what's the deal with webusb?
Quackdoc
2025-01-05 06:39:01
it allows web browsers to access USB devices, which is phenomenal for a lot of things such as music instruments, or my favourite usecase https://usevia.app/ https://launcher.keychron.com/#/keymap
2025-01-05 06:39:20
keyboard config software that works regardless of what operating system you use, as long as your browser supports it
2025-01-05 06:39:55
firefox, being firefox, refuses to implement it, because asking its users for permissions is a crime against them, because its users are apparently too stupid to answer a yes or no question
2025-01-05 06:41:23
well this in question is webhid iirc, but it's pretty much the same story
AccessViolation_
2025-01-05 06:41:56
Do they actually refuse to implement it period or is it on their todo and just taking a while
Quackdoc
2025-01-05 06:42:41
refusal
AccessViolation_
2025-01-05 06:43:10
link?
Quackdoc
2025-01-05 06:43:25
https://mozilla.github.io/standards-positions/
AccessViolation_
2025-01-05 06:44:56
I likewise don't agree with their stance
2025-01-05 06:45:43
But I don't think this is the case with mozilla in general
Quackdoc
2025-01-05 06:48:25
the issue is that they *constantly* do dumb stuff like this, third party cookies are another good example of this. third party cookies could be abused, sure, but they could *also* be used for really good things. An example of this is federated services, as federated services could use third party cookies to share important information so they could reliably redirect to "your" server with proper information and such.
2025-01-05 06:48:47
yes it can be used to track you, but basic "do you want this site to save third party cookies" could exist and do a good job
AccessViolation_
2025-01-05 07:13:39
I don't know if you know this, but you can disable this (set `Enhanced Tracking Protection` to `Custom`, and under cookies select the first option that doesn't contain `and isolate remaining cross-site cookies`). I don't really see an issue with this to be honest, it's a privacy preserving browser, it will by default restrict features that allow websites to abuse you unless you create an exception or disable the option entirely, which I think makes sense.

I think saying "if you don't like it, don't use it" is a bad way of dismissing someone's problems or concerns, but we're lucky to have a handful of different web browsers with different differentiating feature sets. Firefox is not The Browser where if you don't like it you're fucked, that's what the whole point of it was, and its existence helps maintain the already limited ability to pick something other than "comes for free with your macbook" and "king of surveillance capitalism".

Don't get me wrong, I do think Mozilla is mismanaged and it's unfortunate that the browser makers are in a position where their work is funded by an infinite stream of bad ideas, google's we're-not-a-monopoly subscription and sponsored shortcuts
2025-01-05 07:21:36
I think Firefox is incredibly important for the future of the web, and that if anything, it's holding back the web from whatever the fuck happens to it when all major browsers/engines are developed by three of the top five biggest corporations in the world
Quackdoc
2025-01-05 07:41:32
webtech would be a lot better if mozilla and firefox were to die and something new come along
AccessViolation_
2025-01-05 07:50:48
I wonder where Ladybird is going
2025-01-05 07:52:55
Oh yeah I forgot that they're going to be developing it in Swift in the future lmao. Wonder how that's gonna go
Quackdoc
2025-01-05 07:54:46
yeah it's dead to me, I am all on board the servo train now, ladybird has been deleted from my PC
AccessViolation_
2025-01-05 07:59:37
Servo is cool
2025-01-05 07:59:57
Did something happen to Ladybird?
2025-01-05 08:01:43
Or are you referring to the swift thing
Quackdoc
2025-01-05 08:05:37
swift thing
2025-01-05 08:05:41
swift is... cursed
AccessViolation_
2025-01-05 08:10:12
Yeah I remember most responses to the swift announcement were basically "why"
Quackdoc
2025-01-05 08:11:24
it's not like ladybird doesn't have promise, they do have mike "ten fucking days" shaver on board [av1_dogelol](https://cdn.discordapp.com/emojis/867794291652558888.webp?size=48&name=av1_dogelol)
AccessViolation_
2025-01-05 08:11:41
Mike who
veluca
2025-01-05 08:12:05
I would like to see servo do well too 😄
Quackdoc
AccessViolation_ Mike who
2025-01-05 08:13:18
he is one of the dudes who took netscape and turned it into firefox, and was with firefox for pretty much the entire time firefox was "the best" browser
2025-01-05 08:13:28
https://en.wikipedia.org/wiki/Mike_Shaver
2025-01-05 08:14:16
but man, i hate swift lol
AccessViolation_
2025-01-05 08:14:41
Wait, Mike "ten fucking days" as in "wrote javascript in 10 fucking days"
2025-01-05 08:14:58
Is that the guy
2025-01-05 08:15:25
I can see why they went with swift then lel
Quackdoc
veluca I would like to see servo do well too 😄
2025-01-05 08:16:19
we do have good news regarding this, most of the major css stuff is supported to some degree, android support is going on again, and in my tests, servo is actually usable from a speed point of view. I was working on adding jxl-oxide to servo, and did, but not to a degree I was happy with performance wise. but servo kinda... went willy nilly with threads, turns out decoding 200 jxl images with jxl-oxide is kinda brutal on system resources
AccessViolation_ Is that the guy
2025-01-05 08:16:44
nah that was brendan eich, mike shaver was the one who gloated that they had a 10-day fix turnaround for a massive zero day, which at the time was unheard of
2025-01-05 08:17:45
I predict that within the year, we will have an actually usable POC browser to at least the degree of gnome web.
AccessViolation_
2025-01-05 08:18:12
Ahh cool
2025-01-05 08:20:21
I think Servo's approach of *parallelize all the things* is really cool
2025-01-05 08:20:28
If they're still about that
2025-01-05 08:21:14
WebRender comes from servo too right? Also really cool
Quackdoc
2025-01-05 08:21:31
uh, they have gone from parallelize all the things to parallelize what makes sense [av1_dogelol](https://cdn.discordapp.com/emojis/867794291652558888.webp?size=48&name=av1_dogelol)
AccessViolation_
2025-01-05 08:21:31
parsing the web with shaders is mindblowing
Quackdoc
2025-01-05 08:21:41
and yes, webrender was/is a servo thing
AccessViolation_
Quackdoc uh, they have gone from parallelize all the things to parallelize what makes sense [av1_dogelol](https://cdn.discordapp.com/emojis/867794291652558888.webp?size=48&name=av1_dogelol)
2025-01-05 08:22:03
Hah, fair
Quackdoc
2025-01-05 08:24:24
Ill recompile servo and do some demos of stuff since the progress that has been made is truly amazing
2025-01-05 08:24:38
sadly the gstreamer stuff is holding back media a bit from what I can tell
Demiurge
AccessViolation_ Oh yeah I forgot that they're going to be developing it in Swift in the future lmao. Wonder how that's gonna go
2025-01-06 12:58:16
Is that some kind of joke? lol
2025-01-06 01:00:22
Does swift even run on non apple platforms?
spider-mario
2025-01-06 01:05:07
yes
2025-01-06 01:05:53
https://www.swift.org/getting-started/ > To kickstart your journey, install Swift to begin using it on macOS, Linux, or Windows.
2025-01-06 01:08:09
and https://www.swift.org/documentation/articles/static-linux-getting-started.html
Demiurge
2025-01-06 01:25:18
What about their Serenity OS?
2025-01-06 01:25:46
Do they have a swift UI library for Serenity too?
lonjil
2025-01-06 01:27:09
Kling doesn't work on SerenityOS anymore and I think they stopped supporting it in Ladybird.
Quackdoc
2025-01-06 01:27:11
ladybird has been split out of serenity
lonjil Kling doesn't work on SerenityOS anymore and I think they stopped supporting it in Ladybird.
2025-01-06 01:27:29
They didn't stop supporting it I don't think, but it is a separate project
Demiurge
2025-01-06 01:27:35
Huh. I had no idea
2025-01-06 01:28:22
I guess I haven't been keeping up with the news
lonjil
Demiurge Do they have a swift UI library for Serenity too?
2025-01-06 01:29:35
I don't think SwiftUI is available for Linux yet, or even Windows. You just have to use GTK or whatever.
Quackdoc
lonjil I don't think SwiftUI is available for Linux yet, or even Windows. You just have to use GTK or whatever.
2025-01-06 01:30:58
ewww
2025-01-06 01:31:00
at least use QT
lonjil
2025-01-06 01:33:18
hm, Swift did recently get really good C++ interop, didn't it? So yeah Qt would be a good option.
spider-mario
2025-01-06 02:19:27
I suspect Qt needs more than just really good C++ interop
2025-01-06 02:20:06
it has its infamous `moc` and special macros for signals and slots
Quackdoc
2025-01-06 02:54:06
Qt needs a lot, it's terrible to work with IMO. but it wins on the merit of actually decent cross platform support, way better performance than GTK, and also not needing to deal with GTK's upstream
AccessViolation_
2025-01-06 03:40:09
libcosmic my beloved
Demiurge Is that some kind of joke? lol
2025-01-06 03:41:41
Yeah it feels like a step backwards
2025-01-06 03:43:15
I mean it's memory safe so that eliminates a massive amount of bugs. I don't think this is something that would really impact the users too much compared to other memory safe languages, whereas to keep using C++ would result in more bugs for sure. I just don't expect they're going to get a lot of interested developers with Swift
Quackdoc
2025-01-06 04:04:41
well in the end, what matters is using a toolset they like, if they like swift they like swift, and it's really not like you need a ton of devs for a webbrowser, a decent team will work which won't be hard to get
lonjil
2025-01-06 04:04:51
I found it funny that Kling claimed that Rust is bad for long lived programs like browsers and only good for short lived programs like compilers and CLI tools. Meanwhile Servo and Firefox and uhhh this thing called the Linux kernel. And like the 50% of the Internet that Cloudflare owns lol.
Quackdoc
2025-01-06 04:05:21
personally, im not going to touch swift myself, but if they like it, they like it
lonjil
2025-01-06 04:05:40
Swift is neat and if I was writing a native app for macOS and iOS I might use it.
2025-01-06 04:05:56
It has a lot of good shit in it. But the ecosystem outside of Apple's OSs is just weak.
2025-01-06 04:07:22
the Linebender org (partially funded by Google) is making a UI toolkit for Rust (Xilem) that's heavily inspired by SwiftUI and aims to produce highly native results on all platforms, with good accessibility support.
Quackdoc
2025-01-06 04:10:36
well it's not like it needs a strong ecosystem though. this is for internal usage in ladybird, so UI stuff and ecosystem don't really matter
lonjil
2025-01-06 04:19:40
Of course
AccessViolation_
lonjil I found it funny that Kling claimed that Rust is bad for long lived programs like browsers and only good for short lived programs like compilers and CLI tools. Meanwhile Servo and Firefox and uhhh this thing called the Linux kernel. And like the 50% of the Internet that Cloudflare owns lol.
2025-01-06 06:42:55
Oh yeah I remember that, do you know if he elaborated on that? Makes no sense
lonjil
AccessViolation_ Oh yeah I remember that, do you know if he elaborated on that? Makes no sense
2025-01-06 07:03:42
I don't follow his postings, so idk.
AccessViolation_
2025-01-06 07:04:47
smart
2025-01-06 07:17:25
I think Jeremy Soller is the only person whose social media I occasionally check
2025-01-06 07:18:31
Andreas was bitching about pronouns at one point which immediately removed any interest from me
lonjil
2025-01-06 08:04:06
just the other day he quote tweeted (in seeming agreement) someone who claimed that white men are discriminated against in tech.
Quackdoc
AccessViolation_ Oh yeah I remember that, do you know if he elaborated on that? Makes no sense
2025-01-06 08:06:48
this is about all he said https://x.com/awesomekling/status/1822241531501162806
2025-01-06 08:07:04
he generally doesn't have strong opinions on rust otherwise
spider-mario
lonjil Swift is neat and if I was writing a native app for macOS and iOS I might use it.
2025-01-06 08:39:53
I was vaguely tempted to try and learn Swift at one point, but the rather narrow prospects of actually using it outside of the Apple ecosystem kind of held me back
2025-01-06 08:40:07
it seems like a nice language, but yeah, the ecosystem is not so vast
2025-01-06 08:40:57
although, if the C++ interop really is good, maybe I should revisit that
lonjil
2025-01-06 08:42:41
Yeah, so they say. Something new in the last few years. Also I think I heard something about good Java interop? Probably would be a very fine cross platform choice for anything non-UI, I would guess.
spider-mario
2025-01-06 08:48:13
yeah, although that’s potentially a big qualifier
2025-01-06 08:49:41
but maybe UIs are best made in a system-specific fashion anyway, even when a language has access to cross-platform options
2025-01-06 08:49:50
some people certainly argue along those lines – not sure to what extent I agree
2025-01-06 08:50:16
but in that case, something like https://compnerd.github.io/swift-win32/ could come in handy
Quackdoc
2025-01-06 08:57:56
I don't hate swift, but between C, dart and rust I don't see a need to learn another new language, perhaps motivated because i'm not fond of programming in the first place xD
spider-mario
2025-01-06 09:02:18
ah, yes, Dart was another one I wanted to try and learn, for Flutter
2025-01-06 09:02:36
but it kind of gives me the impression of having the opposite problem
2025-01-06 09:03:11
nice cross-platform UI capabilities but I had the impression that there wasn’t that much else in the ecosystem
AccessViolation_
Quackdoc this is about all he said https://x.com/awesomekling/status/1822241531501162806
2025-01-06 09:18:10
Lol I can see why he perceived the Rust community as toxic. It probably *is* toxic to people who block PRs for inclusive language and think white people are discriminated against in tech <:KekDog:805390049033191445>
Quackdoc
spider-mario nice cross-platform UI capabilities but I had the impression that there wasn’t that much else in the ecosystem
2025-01-06 09:37:10
yeah, aside from flutter it's mostly barren, that being said, flutter is amazing as long as you don't care about linux
2025-01-06 09:37:37
IMO flutter is worth learning because it really is "just that good" at least as far as UI stuff goes
spider-mario
2025-01-06 09:39:20
oh, is Linux support that bad?
AccessViolation_
2025-01-06 09:39:41
Linux support was alright for me
2025-01-06 09:40:08
I do remember there was one thing for Flutter dev that didn't work on Linux, but I can't remember what it was
Quackdoc
2025-01-06 09:49:16
Linux support is absolutely abysmal. It uses GTK3 for window embedding, which is already really slow. Flutter itself is more abstractions. It's not the fastest thing in the world, but it's not super slow either. With GTK3, you get truly abysmal performance. The performance of Flutter on Linux is so bad I legitimately have better performance running flutter apps in waydroid or an Android VM. There was an unofficial window embedder that uses winit, and there is some work porting from GTK3 to GTK4. GTK4 is still relatively slow but it is leagues faster than GTK3, so hopefully when that lands it will be a good performance increase, but until it does just completely write off Linux support.
AccessViolation_
2025-01-06 09:51:04
Oh interesting. I only dabbled in Dart/Flutter for an interview and it was generally a pleasant experience, so yeah I didn't know about technological shortcomings
spider-mario
Quackdoc Linux support is absolutely abysmal. It uses GTK3 for window embedding, which is already really slow. Flutter itself is more abstractions. It's not the fastest thing in the world, but it's not super slow either. With GTK3, you get truly abysmal performance. The performance of Flutter on Linux is so bad I legitimately have better performance running flutter apps in waydroid or an Android VM. There was an unofficial windowed in better that uses winit, and there is some work porting from GTK3 to GTK4. GTK4 is still relatively slow but it is leagues faster than GTK3 so hopefully when that lands it will be a good performance increase but until it does just completely write off Linux support.
2025-01-06 09:56:08
or the web backend?
Quackdoc
2025-01-06 09:57:25
the web backend is more or less fine, you are reliant on the browser of choice. I'm not super fond of flutter webapps, but they do perform fine. The issue is even if you wanted to use web on linux, your only option is CEF which is a pain, or other gtk based solutions which perform just as bad.
2025-01-06 09:58:01
a good example is the matrix app fluffychat, you can load a room up with images or videos for testing, it performs ok
lonjil
2025-01-06 10:09:04
you can't use Electron?
Quackdoc
2025-01-06 10:33:27
that's even worse xD, just barely though, but I suppose yes it is an option
AccessViolation_
2025-01-06 10:37:35
I have strong opinions, but generally I like to think I have some nuance
2025-01-06 10:37:48
I throw all nuance out the window the second someone mentions Electron
Quackdoc
2025-01-06 10:38:38
I like electron, it provides a nice utility, too bad everything outside that utility sucks. I really wish that tauri offered a chromium solution
AccessViolation_
2025-01-07 02:38:48
what happens when predictors are entirely correct in jxl? are the residuals encoded as 0 and then that matrix of zeroes is given to the compressor, or does it not encode them at all somehow? Or something else?
2025-01-07 03:12:59
my idea to use predictors for compression in minecraft doesn't currently help that much; it even seems to make it a bit worse when I compare original chunks and encoded chunks after sending them through zstandard. It works by incrementing every block ID, so that it makes space to encode correctly predicted values as zero. So on average the values are zero a lot more often, but it does add one extra value to every chunk and trades homogeneous areas of certain values for 'lines' of values where the predictor was wrong
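A minimal sketch of the transform described above, assuming a trivial "same as previous block" predictor standing in for the real one (the function names and the +1 offset are illustrative only):
```python
# Emit 0 when the prediction is right, otherwise block_id + 1
# (the "+1" is the extra value per chunk mentioned above).

def encode_chunk(block_ids):
    """block_ids: flat list of block IDs for one chunk."""
    out = []
    prev = 0  # assumed initial prediction
    for b in block_ids:
        prediction = prev          # toy "same as previous block" predictor
        out.append(0 if b == prediction else b + 1)
        prev = b
    return out

def decode_chunk(symbols):
    blocks = []
    prev = 0
    for s in symbols:
        b = prev if s == 0 else s - 1
        blocks.append(b)
        prev = b
    return blocks

chunk = [1, 1, 1, 5, 5, 1, 1]
assert decode_chunk(encode_chunk(chunk)) == chunk  # round-trips losslessly
```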
A homosapien
2025-01-07 03:57:58
Doesn't it take up 0 bytes? So I'm assuming nothing gets encoded.
AccessViolation_
2025-01-07 04:03:21
I assume that only works when every prediction is entirely correct. Once you need to encode some residuals into the mix, don't you also need a way to signal which pixel those residuals belong to? For example if you have 50 pixels in question and there are just 10 residual values, you'd need to inform the decoder about which 10 pixels those values belong to, and the rest can be inferred as 'correct prediction'
2025-01-07 04:08:19
I solve that by just encoding correct predictions as zero and everything else as non-zero, but this just seems to confuse the general purpose compression (which comes after the predictors) more, even when >70% of block ID predictions are correct
2025-01-07 04:10:23
Like you'd think that'd be a promising statistic, but it doesn't help compression much. I guess zstandard and others are just too good
2025-01-07 04:11:11
I might look at how JPEG XL handles lossless compression of Palette mode images specifically, since that's effectively what I'm doing as well. Until then if anyone has any interesting ideas, I'm all ears btw
_wb_
2025-01-07 04:13:36
In jxl, the MA tree defines both predictor and context. If you're lucky, then in some of the contexts the predictor is entirely correct, and those contexts get a singleton histogram, so they use 0 bits per symbol. But even if you have something like 90% correct, those zeroes will still be cheap, since ANS does not have a 1 bit per symbol minimum.
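A rough illustration of that sub-bit cost (this is just the information-theoretic bound that ANS and arithmetic coding approach, not libjxl code):
```python
# A symbol that occurs 90% of the time costs about -log2(0.9) ≈ 0.15 bits,
# well under the 1-bit-per-symbol floor of plain Huffman coding.
import math

def shannon_cost_bits(p):
    return -math.log2(p)

for p in (0.5, 0.9, 0.99, 0.999):
    print(f"p = {p:<6} cost ≈ {shannon_cost_bits(p):.4f} bits/symbol")
```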
AccessViolation_
2025-01-07 04:16:59
Hmm, that's good. So instead of throwing it all at zstandard, it might be better if I implement MA trees and roll my own compressor that uses ANS
2025-01-07 04:18:26
I haven't implemented MA trees in general, I'm hoping that's going to allow me to do better predictor selection and logic to turn more of those values into zeroes in the first place, but I will keep that in mind
veluca
AccessViolation_ Hmm, that's good. So instead of throwing it all at zstandard, it might be better if I implement MA trees and roll my own compressor that uses ANS
2025-01-07 07:25:59
the original version of the weighted predictor used to effectively throw all the data to zstd, after reshuffling it so that similar contexts would be contiguous 🙂
AccessViolation_
2025-01-07 08:00:56
Interesting
2025-01-07 08:02:59
Hm, something like a self-correcting predictor might work for minecraft worlds. In the sense of, just try them all simultaneously and use the one that did the best for the neighboring previous blocks
2025-01-07 08:07:20
Someone at work came up with an interesting idea to use second-choice predictor signaling in a similar way to how you'd use residuals, since residuals are not applicable here. Instead of just signaling the block ID if the predictor is wrong, you first try the other predictors, and if one of them gets it correct you signal that that predictor was used. But then instead of just 0, you'd have to reserve three values for predictors rather than one, possibly making it even worse to compress, but I'm definitely going to experiment with that
2025-01-07 08:08:13
I think it has some potential, since pretty often, if the prediction just used that other block it would have been right. And then you'd get the value `1`, instead of something more random like `15` or whatever the block ID would be
2025-01-07 08:08:32
And the set of predictors could be signaled per section or in the MA tree
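A hypothetical sketch of that second-choice signalling (the predictors here are toys and the symbol layout is made up): reserve the first few symbol values for "predictor i was correct" and escape to a shifted literal otherwise.
```python
# Toy predictors working only from already-decoded left/up neighbours, so a
# decoder can recompute the same guesses and the mapping is reversible.
PREDICTORS = [
    lambda left, up: left,           # same as the block to the left
    lambda left, up: up,             # same as the block above
    lambda left, up: max(left, up),  # arbitrary third guess
]

def encode(grid):
    """grid: 2D list of block IDs. Symbols 0..2 mean 'predictor i was right'."""
    out = []
    for y, row in enumerate(grid):
        for x, block in enumerate(row):
            left = row[x - 1] if x > 0 else 0
            up = grid[y - 1][x] if y > 0 else 0
            guesses = [p(left, up) for p in PREDICTORS]
            if block in guesses:
                out.append(guesses.index(block))       # small, frequent symbol
            else:
                out.append(block + len(PREDICTORS))    # escaped literal
    return out
```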
jonnyawsom3
2025-01-07 08:14:46
I was going to say it sounds like you need something similar to palette sorting. You have lots of 0 values, but they're randomly distributed so still causing too much entropy for ZSTD to do much better than the raw data
AccessViolation_
2025-01-07 08:15:34
The concept of palette sorting is new to me
_wb_
2025-01-07 08:17:25
There's always going to be some entropy left, the main thing is to segment the data into well-predicted and poorly-predicted parts without using explicit signaling (context modeling in general tries to do that, MA trees are just a specific way to do context modeling), so you can store the well-predicted stuff cheaply.
2025-01-07 08:18:29
E.g. for palette images probably a context model that separates the case where the neighbors are identical from the case where they are different will already be useful.
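A small sketch of that two-context idea, assuming a palette image as a 2D array of indices (the entropy coding of each context's histogram is left out):
```python
from collections import Counter

def split_contexts(img):
    """Route each pixel's symbol into one of two histograms depending on whether
    its left and top neighbours agree; flat, well-predicted regions land in the
    'same' context, edges and noise in the 'diff' context."""
    ctx_same, ctx_diff = Counter(), Counter()
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            left = row[x - 1] if x > 0 else 0
            up = img[y - 1][x] if y > 0 else 0
            (ctx_same if left == up else ctx_diff)[v] += 1
    return ctx_same, ctx_diff
```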
AccessViolation_
2025-01-07 08:21:36
So in that example, would one context model be specifically for the 'border' of failed predictions where two different blocks/pixels meet?
2025-01-07 08:22:43
I thought MA trees were effectively mostly "x > 12", "y < 10" type decisions
2025-01-07 08:27:36
I guess I should look at other types of context modeling first
veluca
AccessViolation_ Hm, something like a self-correcting predictor might work for minecraft worlds. In the sense of, just try them all simultaneously and use the one that did the best for the neighboring previous blocks
2025-01-07 08:33:14
we used to do that for DC prediction 🙂
AccessViolation_ I thought MA trees were effectively mostly "x > 12", "y < 10" type decisions
2025-01-07 08:34:35
yes, but "x" could be a relatively complicated function of neighbours 🙂
AccessViolation_
2025-01-07 08:34:35
Isn't that basically how it currently works too? More complex logic like max error, but still trying all of them and picking the previous best by some definition
veluca
2025-01-07 08:34:56
DC prediction ~never uses the weighted predictor, IIRC
2025-01-07 08:35:05
just gradient and context modeling in 99% of cases
AccessViolation_
2025-01-07 08:36:47
DC refers to some frequency component after VarDCT, right?
veluca
2025-01-07 08:37:07
yes, I guess it's called LF now but it's hard for me to switch to the new name 😛
AccessViolation_
2025-01-07 08:37:19
Ahh right, gotcha!
2025-01-07 08:37:31
All these acronyms... :p
2025-01-07 08:39:08
I never actually knew any prediction or context modeling happened after VarDCT was done, but since those are themselves encoded with modular mode as I understand, it makes sense that there would be
veluca
2025-01-07 08:39:45
DCT coefficients are themselves context-modeled, even if the context model is fixed
2025-01-07 08:40:01
I sometimes wish we'd made that flexible too, but it was too hard 🙂
2025-01-07 08:40:06
(and too slow)
AccessViolation_
2025-01-07 08:40:55
Did normal JPEG do something like that too, or did it just cut out high frequency coefficients, some chroma from luma and Huffman code it?
veluca
2025-01-07 08:40:59
in general, the tree-based model is very flexible but quite slow
AccessViolation_ Did normal JPEG do something like that too, or did it just cut out high frequency coefficients, some chroma from luma and Huffman code it?
2025-01-07 08:41:10
not even chroma from luma 🙂
AccessViolation_
2025-01-07 08:41:59
Oh? Huh. I thought that's where pixel-level color artifacts came from but maybe my camera is just bad <:galaxybrain:821831336372338729>
veluca
2025-01-07 08:42:46
(but you can, if you wish, exploit progressive passes to get something similar to context modeling - it's why progressive JPEGs can compress better than sequential JPEGs, which is entirely backwards and the opposite of what happens in pretty much any other image format)
AccessViolation_
2025-01-07 08:44:58
Amazing how much work people put into squeezing as much quality as they can out of this ancient format instead of putting that same effort into simply supporting something more modern, haha
2025-01-07 08:45:19
I can respect it, though
2025-01-07 08:52:22
Of course many of these improvements came from JPEG XL in the case of jpegli for example, so it's good that this knowledge is bestowed upon JPEG as well. Maybe hampers JXL adoption a bit, sure, but something in me just loves how humanity collectively is constantly working on making software better for everyone, bit by bit, and it's a good feeling being a part of the collective that's searching for knowledge and applying it in interesting or fun ways
2025-01-07 08:55:42
Maybe it's an oddly personal message, but I really appreciate all the time people here have put into explaining image compression concepts to me or just talking about this and other formats, it's been a very interesting journey
salrit
2025-01-14 06:21:16
Neural Network qn ahead: If I give some neighbours, suppose Left, Top and Top-Left as input to a MLP (a small one) for predicting the current pixel then : 1.) Does the MLP give more weight to features with higher correlations? 2.) Are there any new interactions that weren’t apparent in the correlation analysis? If there is no theoretical way to know what is happening (apart from saying that a non-linear function is spitting out an intensity value), what experimental procedures can be helpful to know what the MLP learns? Would be glad to get any suggestions or references... 🙂
damian101
salrit Neural Network qn ahead: If I give some neighbours, suppose Left, Top and Top-Left as input to a MLP (a small one) for predicting the current pixel then : 1.) Does the MLP give more weight to features with higher correlations? 2.) Are there any new interactions that weren’t apparent in the correlation analysis? If there is no theoretical way to know what is happening (apart from saying that a non-linear function is spitting out an intensity value), what experimental procedures can be helpful to know what the MLP learns? Would be glad to get any suggestions or references... 🙂
2025-01-14 07:47:54
1) only if it reduces prediction error 2) yes
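Not an answer from anyone in the thread, just a toy setup for probing such an MLP empirically: compare it against a linear least-squares baseline on the same (left, top, top-left) inputs and measure per-input sensitivity; the data and model here are synthetic stand-ins.
```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(10_000, 3))   # stand-in for (L, T, TL) neighbours
y = X @ np.array([0.6, 0.6, -0.2]) + 0.05 * rng.normal(size=10_000)  # synthetic pixel

# Linear baseline: if the MLP barely beats this, it isn't exploiting non-linear interactions.
A = np.c_[X, np.ones(len(X))]
w_lin, *_ = np.linalg.lstsq(A, y, rcond=None)

def sensitivity(predict, X, eps=1e-3):
    """Crude finite-difference saliency: how much the output moves per input channel."""
    base = predict(X)
    return [float(np.mean(np.abs(predict(X + eps * np.eye(3)[i]) - base)) / eps)
            for i in range(3)]

linear_model = lambda Z: np.c_[Z, np.ones(len(Z))] @ w_lin
print("linear weights:", w_lin[:3])
print("sensitivities:", sensitivity(linear_model, X))
```
The same `sensitivity` probe can be pointed at the trained MLP; comparing its per-input sensitivities and error against the linear baseline gives a rough empirical answer to both questions.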
Meow
2025-01-14 09:17:09
MLP: My Little Pony or Marine Le Pen?
AccessViolation_
2025-01-14 09:26:35
Machine Learning Predictor
2025-01-14 09:27:01
Quite possibly My Little Pony, I'm not entirely sure
2025-01-14 09:29:44
I've seen something similar for text compression where a generative LLM was used to predict what the next character would be
CrushedAsian255
2025-01-14 09:37:07
I thought it was Multilayer Perceptron
jonnyawsom3
2025-01-14 09:38:39
Multilayer Machine Learning Perceptual Pony Prediction
AccessViolation_
2025-01-14 09:39:51
Massively Multilayer Online Raster Predictor Guesser
Demiurge
2025-01-14 01:37:13
Someone needs to just start randomly sneaking the word "pony" into academic papers
2025-01-14 01:38:48
Historians will be scratching their heads wondering why all of our technology is centered around ponies
2025-01-14 01:40:02
I also believe in Vermin Supreme's mandatory pony policy with pony based identification
2025-01-14 01:40:28
Where everyone gets a free pony and it's mandatory
spider-mario
Demiurge Historians will be scratching their heads wondering why all of our technology is centered around ponies
2025-01-14 01:42:19
not if they have access to these Discord logs
AccessViolation_
2025-01-14 07:09:43
no witnesses
CrushedAsian255
2025-01-15 05:27:40
JPEG XL is a revolutionary new image compression format designed for compressing many types of images, including screenshots, digital art, photographs, and mixed images. It has been tuned for compression of images of a pony.
2025-01-15 05:28:37
JPEG: Joint Pony Encoding Group
Demiurge
2025-01-15 06:56:03
Pony encoding experts
_wb_
2025-01-15 10:56:53
Sometimes also Phony Experts
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
CrushedAsian255 JPEG: Joint Pony Encoding Group
2025-01-16 10:13:22
-# Tries really hard to not say something stupid.. Encoded horses usually are very small. -# ... well..
jonnyawsom3
2025-01-16 11:16:27
"Huh, nice graphic. Wonder if it's SVG..."
2025-01-16 11:17:18
Oh god
2025-01-16 11:17:44
https://www.spacex.com/launches/mission/?missionId=starship-flight-7
A homosapien
2025-01-17 12:35:54
I wonder how many more bytes jxl could save if splines were a thing``` cjxl SPACEX_INFOGRAPHIC.pgm SPACEX_INFOGRAPHIC.jxl -d 0 -e 9 -g 3 -E 4 -I 100 JPEG XL encoder v0.12.0 2368781 [AVX2,SSE2] Encoding [Modular, lossless, effort: 9] Compressed to 49447 bytes (0.006 bpp). 10834 x 5659, 4.496 MP/s [4.50, 4.50], , 1 reps, 12 threads. ```
jonnyawsom3
2025-01-17 12:55:18
One of the greys was 109,110,112, stopping it from being greyscale
A homosapien I wonder how many more bytes jxl could save if splines were a thing``` cjxl SPACEX_INFOGRAPHIC.pgm SPACEX_INFOGRAPHIC.jxl -d 0 -e 9 -g 3 -E 4 -I 100 JPEG XL encoder v0.12.0 2368781 [AVX2,SSE2] Encoding [Modular, lossless, effort: 9] Compressed to 49447 bytes (0.006 bpp). 10834 x 5659, 4.496 MP/s [4.50, 4.50], , 1 reps, 12 threads. ```
2025-01-17 01:01:24
A handful of bytes less converting it to PGM first :P
A homosapien
2025-01-17 01:12:05
I can do ya one better, `-e 10 -g 3 -E 4 -I 100`. I'm not brave enough to try effort 11 <:monkaMega:809252622900789269>
jonnyawsom3
2025-01-17 01:40:55
`I 100` was detrimental for me
A homosapien
2025-01-17 02:14:24
Interesting, `I 99` shaved off a few bytes. I wonder why that happens.
CrushedAsian255
2025-01-17 03:46:48
Possibly different palette ordering?
DZgas Ж
2025-01-18 11:57:39
bruh https://learn.microsoft.com/en-us/windows/win32/direct2d/bitmap-source https://github.com/microsoft/DirectXTex/wiki/WIC-I-O-Functions they just came up with a new term for "equivalent"
spider-mario
2025-01-19 09:34:20
as far as I can find, it’s just someone’s name https://ieeexplore.ieee.org/document/4056711 https://ieeexplore.ieee.org/document/5764000
2025-01-19 09:34:34
it happens to be equivalent to another method, but that doesn’t mean that “Fant” means “equivalent”
DZgas Ж
spider-mario it happens to be equivalent to another method, but that doesn’t mean that “Fant” means “equivalent”
2025-01-19 10:45:01
it amuses me that it isn't implemented anywhere else under the same name, at least among the things I know that deal with encoding; I only found it in paint.net. Moreover, this interpolation function turned out to be ideal for smooth downscaling: when reducing a render by 2x without anti-aliased geometry lines, Fant gives perfect line smoothing, without ringing artifacts like Lanczos and bicubic and without blur artifacts like bilinear
spider-mario
2025-01-19 11:08:35
a quick look at the paper seems to indicate that it is indeed box filtering, which would be equivalent to ImageMagick’s `-scale` operation for downscaling
2025-01-19 11:08:53
(_not_ `-filter box -resize`, as they seem to only use an integer number of pixels for that)
_wb_
2025-01-19 01:06:49
That kind of downsampling makes sense (if done on linear RGB) but it does tend to produce quite blurry results.
jonnyawsom3
2025-01-19 01:56:17
You reminded me of a tool I saw long ago boasting a new resizing method that supposedly beat all others, was set as default, and was going to have a paper made on the technique... Now if only I remembered the name :P
A homosapien
2025-01-19 05:13:50
Fant/Box is only blurry if you are downscaling using a non integer divisor. If you downscale using fractions like 1/2, 1/3, 1/4, etc. it actually produces the sharpest results.
2025-01-19 05:14:32
Gimp has similar behavior
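For reference, a minimal sketch of what integer-factor box downscaling amounts to (each output pixel is the mean of an N×N block), which is why it neither rings nor over-blurs for integer divisors; per _wb_'s earlier point, it's best applied to linear values:
```python
import numpy as np

def box_downscale(img, n):
    """img: 2D array whose height and width are divisible by n."""
    h, w = img.shape
    # Group pixels into n x n blocks and average each block.
    return img.reshape(h // n, n, w // n, n).mean(axis=(1, 3))
```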
_wb_
2025-01-19 07:35:23
Maybe blurry is not the best word. It smooths out fine small texture and noise, which is something most downscaling methods do.
DZgas Ж
A homosapien Fant/Box is only blurry if you are downscaling using a non integer divisor. If you downscale using fractions like 1/2, 1/3, 1/4, etc. it actually produces the sharpest results.
2025-01-19 09:49:19
fant - lanczos - bicubic - bilinear - original
2025-01-19 09:54:35
that sharpening is exactly why I never use Lanczos, because it creates something that does not exist. It's also there in bicubic, since it's essentially the same. But there's none of it at all in Fant, even though it's also absent in bilinear. Bilinear is just so blurry
2025-01-19 09:55:14
bicubic - fant
A homosapien
2025-01-19 10:01:05
Fant/box downscaling doesn't cause ringing that I see with bicubic and lanczos while also retaining the same level of sharpness if not greater
jonnyawsom3
2025-01-19 10:05:24
Though, this is a 2x downscale, which is when Fant/Box apparently works best
A homosapien
2025-01-20 01:37:33
That's why I love it, I do a lot of 2x, 3x, & 4x downscaling
2025-01-20 02:28:43
original image was 512x512 downscaled 4x then upscaled 2x via nearest neighbor
2025-01-20 02:29:01
notice the ringing on the eyes present in Lanczos and Bicubic
AccessViolation_
2025-01-20 12:51:12
If a lossless encoder's goal is to produce output visually identical to the input, does that imply it's valid for the encoder to zero the R,G,B values of input pixels if their alpha channel is 0?
2025-01-20 12:54:13
I'm probably going to allow users to choose whether it does that using a flag, but I'm wondering what the default should be
spider-mario
AccessViolation_ If a lossless encoder's goal is to produce output visually identical to the input, does that imply it's valid for the encoder to zero the R,G,B values of input pixels if their alpha channel is 0?
2025-01-20 01:00:25
1. only with unassociated alpha 2. some people want to preserve those pixel values even then 3. zeroing out is not necessarily what maximises compression
2025-01-20 01:01:31
re: 2, after all, if there are non-trivial non-zero values, it raises the question of how they got there
jonnyawsom3
2025-01-20 01:06:27
Cjxl zeroes in lossy and preserves with lossless currently, which can have a significant impact on compression from my results. But the only case where I've ever seen it matter was incorrect render layering causing noise in transparent areas
spider-mario
2025-01-20 01:12:47
we used to have an experiment where we allowed lossy to be more destructive to invisible pixels, not sure what we ended up concluding (or whether we even managed to make it work)
jonnyawsom3
2025-01-20 01:14:44
In some cases it seems to zero the pixels, in others it actually uses the predictors. It's strange
AccessViolation_
spider-mario 1. only with unassociated alpha 2. some people want to preserve those pixel values even then 3. zeroing out is not necessarily what maximises compression
2025-01-20 01:20:28
From the QOI spec: > The color channels are assumed to not be premultiplied with the alpha channel (“un-premultiplied alpha”). After checking the *Alpha compositing* Wikipedia article it seems that means it's unassociated. Point 3 is interesting, I didn't think about that. The cheapest coding tool in this format is run-length encoding, so yeah I would just start at the first fully transparent pixel and effectively make any consecutive transparent pixels have the same R,G,B values
2025-01-20 01:24:58
Though I'm *also* defying the gods by implementing a lossy mode in my QOI encoder to compress things more. I'll probably make it disregard fully transparent pixels optionally but by default in lossy mode and optionally but not by default in lossless mode, to be safe
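A sketch of that run-extension idea (not from the QOI spec): under unassociated alpha, give each fully transparent pixel the RGB of the previous pixel so a run-length coder sees longer runs; the output is visually identical, but the hidden RGB values are not preserved, hence keeping it opt-in.
```python
def smear_invisible(pixels):
    """pixels: list of (r, g, b, a) tuples; returns a new list."""
    out = []
    prev_rgb = (0, 0, 0)
    for r, g, b, a in pixels:
        if a == 0:
            r, g, b = prev_rgb      # reuse the previous pixel's colour to extend runs
        out.append((r, g, b, a))
        prev_rgb = (r, g, b)
    return out
```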
spider-mario we used to have an experiment where we allowed lossy to be more destructive to invisible pixels, not sure what we ended up concluding (or whether we even managed to make it work)
2025-01-20 01:31:45
It'd be interesting to see... If I was comfortable with C I might have tested that as an optional flag in lossless modular mode. My first thought is: let prediction residuals be zero for all color channels if the pixel is fully transparent. Though you'd have sudden large residuals as soon as pixels start becoming opaque again, and that might not be worth it if the predictors are already really good at predicting the color channels of the fully transparent pixels
_wb_
2025-01-20 03:38:28
There are two ways to interpret alpha channels: - transparency information, i.e. a pixel is an RGBA tuple - clipping mask, i.e. a pixel is an RGB triplet but there's an additional channel defining which pixels are in the selection/cutout If the interpretation is transparency (and unassociated) then the color of invisible pixels is irrelevant: any invisible pixel is conceptually the same. If the interpretation is a clipping mask, then you may want to adjust the mask, or not apply it at all.
2025-01-20 03:39:15
Generally for end-user delivery use cases (where the image will only be displayed, not edited) the first interpretation makes the most sense, while in authoring workflows the second interpretation often makes more sense...
lonjil
2025-01-20 03:52:42
Calling it a clipping mask doesn't make much sense to me since you often want transparency (associated)
_wb_
2025-01-20 04:12:48
a clipping mask in the sense of a photo collage where you cut out part of a photo but you still want to change your mind about which part
lonjil
2025-01-20 05:34:42
Sure that's a thing you sometimes want but your two cases really don't account for what people often want in production. You want transparency, and unassociated alpha just doesn't work. So there's a third case.
_wb_
2025-01-20 08:21:48
If it's associated then of course only the first interpretation makes sense.
Posi832
You reminded me of a tool I saw long ago boasting a new resizing method that supposedly beat all others, was set as default, and was going to have a paper made on the technique... Now if only I remembered the name :P
2025-01-20 10:34:24
Seam carving? (https://en.wikipedia.org/wiki/Seam_carving)
jonnyawsom3
2025-01-20 10:35:01
No, it wasn't content aware
juliobbv
AccessViolation_ If a lossless encoder's goal is to produce output visually identical to the input, does that imply it's valid for the encoder to zero the R,G,B values of input pixels if their alpha channel is 0?
2025-01-20 11:28:04
I expect lossless modes to be actually lossless, not just effectively in the visual domain by default
2025-01-20 11:29:15
e.g. if I write luma/chroma values in invisible pixels for steganography purposes, I'd expect the encoder to keep those values intact
2025-01-20 11:31:53
or if I convert non-image to image data by mapping bits to RGBA values (because of whatever wacky reason), the least surprising result as a user would be for the encoder to not corrupt those bits when converting back
2025-01-20 11:32:37
but I think it's fair game to provide an opt-in parameter to discard invisible data
jonnyawsom3
2025-01-20 11:33:10
That's how it is currently. Lossless preserves invisible pixels, lossy discards, with an argument for it naturally
juliobbv
2025-01-20 11:34:18
yeah, that's the most sane behavior IMO
Demiurge
2025-01-21 12:15:11
Yeah, it's better to give people the option to optimize further and maybe print some kind of warning like "transparent pixels detected, toggle foo switch to avoid wasting bytes on them."
2025-01-21 12:16:13
And give people the opportunity to choose what they want for lossless
jonnyawsom3
2025-01-21 12:27:10
I was thinking "It doesn't use much space, but does slow decode a lot. I wonder if squeeze would help redundant Alpha..." Then I remembered resampling exists
2025-01-21 12:42:21
Well, it made decode speed and encode speed worse, but did nudge bpp``` Compressed to 10357.9 kB (5.053 bpp). 4100 x 4000, 3.616 MP/s [3.62, 3.62], , 1 reps, 16 threads. Compressed to 10335.5 kB (5.042 bpp). 4100 x 4000, 0.399 MP/s [0.40, 0.40], , 1 reps, 16 threads.```
AccessViolation_
2025-01-21 09:28:01
<@238552565619359744> it does support 32-bit indeed
jonnyawsom3
2025-01-21 09:28:46
Ah, nice. Ew, TIFF
AccessViolation_
2025-01-21 09:31:39
it's the only option that allows specifying a bit depth
jonnyawsom3
AccessViolation_ <@238552565619359744> it does support 32-bit indeed
2025-01-21 11:53:08
So, did you get a chance to test it?
AccessViolation_
2025-01-21 11:57:20
Nah, got the camera out but it was way too foggy and things were blowing in the wind which gave me enough reason to try tomorrow instead
jonnyawsom3
2025-01-21 11:57:44
Ahh, didn't realise you were gonna give real photos a go
AccessViolation_
2025-01-21 11:58:17
Yeah, it would be easier than getting space engine to behave on the steam deck for sure
2025-01-21 11:59:16
If this works well with exposure bracketed raws, that's the next thing I'm going to try though
2025-01-21 11:59:41
I'll let you know how it goes tomorrow, it's my day off so I have plenty of time to experiment with stuff
spider-mario
2025-01-22 09:11:49
patches I submitted to SoX in May 2015 just got merged
2025-01-22 09:11:54
better late than never, I guess
2025-01-22 09:12:56
ah, merged in a fork (sox_ng), my bad
Quackdoc
2025-01-22 09:13:11
~~now that is what you call a stable project~~
spider-mario
2025-01-22 04:06:26
https://www.dpreview.com/forums/post/68109725
2025-01-22 04:06:32
> Some modern HDR formats (like JPEG) store the HDR image by splitting it into a regular SDR image + an additional gain map for the HDR conversion. This basically scales the SDR brightness by a factor (defined in the gain map) for each pixel. > > This immediately begs the question, why not do the same for wall prints? […]
AccessViolation_
2025-01-22 04:16:30
The first time I tried it with jpegs that seemed to work okay, so I'm gonna try that again
2025-01-22 04:19:30
And when I create a batch of several photos from -5/+5 EV it just loses all that
2025-01-22 04:20:27
Maybe I don't know how to use it correctly, but I don't get to decide the range, it just does whatever it wants and sometimes the histogram goes across several stops and sometimes it's all within one or two stops?
2025-01-22 04:29:52
Maybe the actual dynamic range you end up getting depends on the brightness of parts of the scene, which would explain why it varies. But the center of that light beam was definitely properly exposed in at least one of the images so I have no idea why it's clipping to black
Oleksii Matiash
spider-mario > Some modern HDR formats (like JPEG) store the HDR image by splitting it into a regular SDR image + an additional gain map for the HDR conversion. This basically scales the SDR brightness by a factor (defined in the gain map) for each pixel. > > This immediately begs the question, why not do the same for wall prints? […]
2025-01-22 05:05:21
This (similar, ok) technique was described in "Seabrook, William (1941). Doctor Wood, Modern Wizard of the Laboratory. New York: Harcourt, Brace and Co.". With the only difference that Wood shot a photo of a landscape painting, and then projected it to this painting
AccessViolation_
AccessViolation_ The first time I tried it with jpegs that seemed to work okay, so I'm gonna try that again
2025-01-22 05:24:42
this is a lie i'm too annoyed at it to try it again today <:KekDog:805390049033191445>
jonnyawsom3
AccessViolation_ And when I create a batch of several photos from -5/+5 EV it just loses all that
2025-01-22 09:15:40
Apparently there's a button to increase/decrease the range of the image, since it's also used for tonemapping HDR to SDR
Kremzli
2025-01-24 12:26:22
Is there any up to date good website to compress and convert images (to whatever formats)
jonnyawsom3
2025-01-24 01:00:35
We wanted to fork Squoosh and update the libraries on https://jpegxl.info, since as far as we know there's no suitable alternative
HCrikki
2025-01-24 01:00:49
for in-browser conversion (no uploading to a server), there's squish ( **github.com/addyosmani/squish** ), but it's basic - you only select a quality %. In-browser conversions are pretty useful to have in webapps/scripts to bypass the upload limitations of a server/site without hammering the server
2025-01-24 01:04:10
there are similar ones based on jsquash, but this one looks more trustworthy since it's apparently from a senior Chrome dev
2025-01-24 01:05:43
its jxl code is no newer than the now-ancient Squoosh, but if you're doing your own deployment, wasm magick covers conversions in-browser
2025-01-24 01:06:43
github.com/dlemstra/magick-wasm
CrushedAsian255
2025-01-24 01:45:52
does magick wasm support all the formats normal image magick supports
_wb_
2025-01-24 08:20:23
``` echo "$(identify -format "%w * %h +" *.ppm) 0 " | bc ```
Kleis Auke
2025-01-24 12:10:15
A fork of <https://github.com/addyosmani/squish> that leverages <https://github.com/kleisauke/wasm-vips> instead of jSquash is also available at: <https://github.com/CaptainCodeman/squash>.
Meow
2025-01-24 03:14:19
I haven't tried https://caesium.app
AccessViolation_
2025-01-24 04:37:46
I never realized how high quality distance 1 VarDCT actually is...
2025-01-24 04:43:06
I had a tendency to store even photographic images lossless because I have the storage space anyway, but it may have been cured
Demiurge
2025-01-24 04:44:31
It's very nice, but still needs to be a little nicer imo
2025-01-24 04:44:50
Especially when it comes to preserving yellow orange colors
2025-01-24 04:46:28
Jyrki mentioned something about chroma quality scaling when I mentioned it
2025-01-24 04:47:35
d=1 still noticeably shifts and desaturates colors in libjxl 0.11
jonnyawsom3
2025-01-24 04:47:39
There's also this https://github.com/libjxl/libjxl/issues/3754#issuecomment-2595552897
AccessViolation_
2025-01-24 04:52:18
Huh, I thought that was fixed already
RaveSteel
AccessViolation_ I had a tendency to store even photographic images lossless because I have the storage space anyway, but it may have been cured
2025-01-24 05:29:00
lossless 4 life!
jonnyawsom3
2025-01-24 10:31:26
https://youtu.be/HPqGaIMVuLs
AccessViolation_
https://youtu.be/HPqGaIMVuLs
2025-01-25 12:00:00
The most surprising 'feature' about this to me is that it compresses so well in video coding formats that use motion vectors. It was strange seeing the scene at 1:14 because this is probably the first time I've seen any sort of dithering of a moving scene that didn't max out the bitrate and turn the video into a crusty artifact soup
2025-01-25 12:09:01
Really interesting video. It taught me how Bayer matrix dithering works, and my mind is now stuck on an idea to use a patch layer consisting of common Bayer matrix threshold tiles, such that during encoding of a Bayer dithered image the bayer pattern is subtracted out of the base image and replaced by the patch blended in additive mode, resulting in compression as if the image didn't have a bayer dither pattern at all. I have no clue if that would work but it's just plausible enough that I know I'm going to be thinking about this the entire night and not be able to fall asleep because of it, man do I love autism and adhd
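A toy version of that patch-layer idea (purely hypothetical, and not how libjxl patches actually work): subtract a tiled 4×4 Bayer threshold pattern from the image, keep the pattern as an additive layer, and re-adding it reconstructs the original exactly.
```python
import numpy as np

# Standard 4x4 ordered-dithering threshold matrix.
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]], dtype=np.int16)

def split_layers(img):
    """img: 2D int array with dimensions divisible by 4."""
    h, w = img.shape
    pattern = np.tile(BAYER4, (h // 4, w // 4))
    return img.astype(np.int16) - pattern, pattern   # (base layer, additive layer)

img = np.random.default_rng(1).integers(0, 240, (8, 8), dtype=np.int16)
base, pattern = split_layers(img)
assert np.array_equal(base + pattern, img)           # lossless round trip
```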
jonnyawsom3
AccessViolation_ Really interesting video. It taught me how Bayer matrix dithering works, and my mind is now stuck on an idea to use a patch layer consisting of common Bayer matrix threshold tiles, such that during encoding of a Bayer dithered image the bayer pattern is subtracted out of the base image and replaced by the patch blended in additive mode, resulting in compression as if the image didn't have a bayer dither pattern at all. I have no clue if that would work but it's just plausible enough that I know I'm going to be thinking about this the entire night and not be able to fall asleep because of it, man do I love autism and adhd
2025-01-25 12:12:54
It gave me the idea to use the blue noise texture I pulled out of Half Life Alyx in Krita, which *almost* worked, but the image was always either overexposed or underexposed with red pixels being more prominent among the dithering
AccessViolation_
2025-01-25 12:24:29
Ahh that's a shame
2025-01-25 12:27:38
Honestly Krita is an amazing tool, I installed it a few days ago after you were talking about its JXL support and I've been toying around with it
2025-01-25 12:28:04
In fact I'm actually somewhat of an artist myself now
2025-01-25 12:28:16
i'll make it
jonnyawsom3
AccessViolation_ i'll make it
2025-01-25 12:34:52
You can create and save custom gradients/patterns in that second top left box. I added the blue noise texture so now I can add it to any image
AccessViolation_
2025-01-25 12:36:45
OH I can just do that to test the dither tile palette thing by using the blend modes that jxl would
jonnyawsom3
AccessViolation_ OH I can just do that to test the dither tile palette thing by using the blend modes that jxl would
2025-01-25 12:41:12
<https://docs.krita.org/en/general_concepts/file_formats/file_jxl.html>
AccessViolation_
2025-01-25 02:09:55
I was confused because I thought that was referring to JXL limitations, but I guess they are limitations to Krita's implementation?
<https://docs.krita.org/en/general_concepts/file_formats/file_jxl.html>
2025-01-25 02:12:39
But this is fine, I should just need kAdd for this which it supports
CrushedAsian255
AccessViolation_ I was confused because I thought that was referring to JXL limitations, but I guess they are limitations to Krita's implementation?
2025-01-25 02:16:33
Could more blend modes be added in an extension?
Demiurge
2025-01-25 05:52:27
If you wanna add more blend modes, better to do it now while there's still time to slip it in without anyone noticing.
2025-01-25 05:53:56
Between libjxl, oxide, and latte, it should take no time at all before every single codec implementation supports the new blend mode... just as if it was always there all along
2025-01-25 05:54:19
Oh, and j40 and tiny
CrushedAsian255
2025-01-25 05:58:18
i have a small list of a few that should be added
2025-01-25 05:59:01
_wb_
2025-01-25 08:08:09
We didn't want to put all kinds of blend modes in jxl, that would complicate a decoder implementation and it's a moving target to have everything any image editor supports. Normal alpha blending is by far the most important one to have. The others are mostly there because they could be useful for advanced encoders that use layers as a coding tool.
2025-01-25 08:16:36
It was not the goal to make jxl a full replacement for the internal formats of image editors. That would make it way too complex, since that requires not just all those blend modes but also stuff like layer effects (drop shadows and whatnot), text layers, paths, etc etc
2025-01-25 08:17:16
It's also a moving target since every new version of Photoshop / Krita / Gimp brings new features that may require changes to its internal format
Tirr
2025-01-25 08:18:08
I feel like MulAdd can simulate some of the blending modes with alpha channel data
_wb_
2025-01-25 08:22:14
What jxl can be, is a useful interchange format that can represent some useful common intersection of image editor internal formats, that is, not just an export format for flattened images, but some things like normal layers (with names), selection masks, spot colors etc can be preserved.
Demiurge
2025-01-25 12:27:23
As long as it's good for layered codecs and image decomposition algorithms that no one seems to use except for proprietary djvu codecs
AccessViolation_
Demiurge As long as it's good for layered codecs and image decomposition algorithms that no one seems to use except for proprietary djvu codecs
2025-01-25 02:48:16
Since you're talking about image decomposition: we recently did some tests to see if screenshots from games that have a CRT screen effect would compress better if the CRT effect was its own layer, and at effort 7 that took a test image down from 7.5 MB to 1.5 MB https://discord.com/channels/794206087879852103/803645746661425173/1331749968578613352
2025-01-25 02:59:34
Actually decomposition is basically what DCT does, but for frequencies. I wonder if in the future a Modular encoder could be smart enough to try to extract easy to predict additive or multiplicative patterns so that the layers individually compress better than all of them together would. Basically what we tested, except the encoder figures it out itself instead of the software/user manually needing to specify 'this is its own layer' to benefit from that better compression
2025-01-25 03:02:53
Maybe that's something where tiny neural networks could have a use in an encoder: trying to decompose lossless images into certain repeating patterns if they are present, and putting them into their own layer
Demiurge
2025-01-25 09:20:25
Lossless and neural network are not usually found in the same sentence
RaveSteel
2025-01-25 09:29:05
There is possible potential in having patterns recognised by an LLM to derive the best encoding options for an image
2025-01-25 09:29:16
"AI" is great at pattern recognition after all
pshufb
Demiurge Lossless and neural network are not usually found in the same sentence
2025-01-25 09:47:54
They work well w/arithmetic coding.
AccessViolation_
Demiurge Lossless and neural network are not usually found in the same sentence
2025-01-25 09:49:16
It sounds counter intuitive indeed, but it would be completely lossless. Even if you replaced the pattern the neural network would predict with random noise, it would still be lossless. The operation is creating a (random noise, in this case) pattern, putting it in a layer and *also* destructively subtracting the pattern from the base image, and using that as the new base image. Then if you define the noise layer to be additive-blended with the base image again, you're adding up the exact same value you subtracted in the first step, resulting in the same image again. The better the neural network is able to separate the easily-compressible pattern from the otherwise noisy/detailed image, the better the file as a whole would compress. But no matter what pattern the neural network predicts, it would always be lossless
Demiurge
2025-01-25 10:01:38
Opus uses a nn to detect speech vs music. Would be nice if cjxl could use a simple, fast, entropy statistic similar to oxipng, to determine if an image would be smaller using lossless compression instead of vardct.
2025-01-25 10:03:25
Images with a low amount of entropy per pixel should avoid using vardct
2025-01-25 10:06:42
And automatically use more appropriate coding tools like webp near-lossless mode
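A sketch of that kind of heuristic (not something cjxl actually does): estimate bits per pixel from the entropy of horizontal pixel differences and route low-entropy images away from VarDCT.
```python
import numpy as np

def residual_entropy_bpp(gray):
    """gray: 2D uint8 array. Shannon entropy of left-difference residuals, in bits/pixel."""
    diff = np.diff(gray.astype(np.int16), axis=1).ravel() % 256
    counts = np.bincount(diff, minlength=256).astype(np.float64)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# e.g. prefer lossless / near-lossless when the estimate is well below ~3 bits/pixel
# (the threshold here is made up for illustration).
```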
Meow
2025-01-26 04:03:14
AI used to be Adobe Illustrator for most artists
Quackdoc
2025-01-27 04:19:26
does anyone know if there are drop-in replacements for libpng? libpng is rather slow and I have hit more than a few pngs that cjxl will fail to decode. I can't share this image, but I will when I come across another; I know others have hit similar issues. Currently I use magick to re-encode such images, but I would rather not need to in the first place. ``` JPEG XL encoder v0.11.1 794a5dcf [AVX2,SSE4,SSE2] libpng warning: iCCP: known incorrect sRGB profile Getting pixel data failed. ```
Tirr
2025-01-27 04:29:51
maybe it's not a problem with libpng. Using a libjxl debug build would help find the actual problem, I think
Quackdoc
2025-01-27 04:34:43
hmm, indeed, I tested one other known one and png2pnm failed, but I tested on this one and it worked
jonnyawsom3
2025-01-27 04:34:53
It would be nice if we could move some error reporting from debug builds to the verbosity settings instead. Right now all they do is show the input decode speed IIRC
Quackdoc
2025-01-27 04:36:39
well, I will rebuild libjxl for this one at least; is there a specific build flag to make debug builds or just cflag stuff?
Tirr
2025-01-27 04:37:53
cmake debug build will do
Quackdoc
2025-01-27 04:38:16
kk
2025-01-27 04:38:54
but yeah, interesting, I just chalked up all of these issues to libpng but I guess only some of them are
CrushedAsian255
Quackdoc does anyone know if there are drop-in replacements for libpng? libpng is rather slow and I have hit more than a few pngs that cjxl will fail to decode. I can't share this image, but I will when I come across another; I know others have hit similar issues. Currently I use magick to re-encode such images, but I would rather not need to in the first place. ``` JPEG XL encoder v0.11.1 794a5dcf [AVX2,SSE4,SSE2] libpng warning: iCCP: known incorrect sRGB profile Getting pixel data failed. ```
2025-01-27 04:41:40
Can you share a list of the png chinks ?
2025-01-27 04:41:42
Chunks*
2025-01-27 04:41:52
Maybe they’re ordered wrong
2025-01-27 04:41:58
Or could be a renamed JPEG or something
Quackdoc
2025-01-27 04:42:50
turns out this specific image isn't a libpng issue one, but when I come across another one I will, I am doing a batch encode on my phone right now so good chance I will come across one
Tirr cmake debug build will do
2025-01-27 04:44:24
is `RelWithDebInfo` good enough or does it have to be `Debug`?
Tirr
2025-01-27 04:44:43
needs to be Debug
2025-01-27 04:44:57
it enables debug logs
Quackdoc
2025-01-27 04:45:05
kk
2025-01-27 04:45:49
oh arch also specifies -DNDEBUG guess I need to remove that too
Tirr
2025-01-27 04:46:07
ah yeah maybe that
2025-01-27 04:46:35
(I always use `./ci.sh debug` so I'm not sure on the actual flags)
Quackdoc
2025-01-27 04:49:01
one day I hope to escape cmake and use only newer systems lol
CrushedAsian255
2025-01-27 04:49:25
Cargo
Quackdoc
2025-01-27 04:49:48
too specific, though I would love to have a build system that supports cargo like dependency handling
2025-01-27 04:50:04
~~meson wrap~~ [av1_pepegun](https://cdn.discordapp.com/emojis/659513552033415188.webp?size=48&name=av1_pepegun)
A homosapien
Quackdoc does anyone know if there are drop-in replacements for libpng? libpng is rather slow and I have hit more than a few pngs that cjxl will fail to decode. I can't share this image, but I will when I come across another; I know others have hit similar issues. Currently I use magick to re-encode such images, but I would rather not need to in the first place. ``` JPEG XL encoder v0.11.1 794a5dcf [AVX2,SSE4,SSE2] libpng warning: iCCP: known incorrect sRGB profile Getting pixel data failed. ```
2025-01-27 04:51:06
Similar to this? https://github.com/libjxl/libjxl/issues/3973