JPEG XL

other-codecs

JendaLinda
2024-05-30 08:04:54
It's not perfect but I found it useful to clean up images before downscaling.
username
2024-05-30 08:06:02
I would recommend trying jpeg2png as well
2024-05-30 08:06:16
they both have their use cases
JendaLinda
2024-05-30 08:13:50
QuantSmooth is included in IrfanView, that's how I discovered it.
damian101
2024-05-30 11:37:43
Should work excellently on text, graphics
2024-05-30 11:38:35
probably smoothes much more than ideal on photographic content
JendaLinda
2024-05-31 06:25:42
Seems to work best for non-photo content in jpegs of quality above 75. It can also decrease the impact of chroma subsampling.
2024-05-31 08:16:15
Has anybody tried using zopfli compression in TIFF images? Might be useful for image types unsupported by PNG, like CMYK.
TheBigBadBoy - 𝙸𝚛
2024-06-04 03:36:52
<@219525188818042881> you won't beat me this time <:PeepoDiamondSword:805394101340078092>
Fox Wizard
TheBigBadBoy - 𝙸𝚛 <@219525188818042881> you won't beat me this time <:PeepoDiamondSword:805394101340078092>
2024-06-04 03:38:19
What if I do? <:KittyThink:1126564678835904553>
TheBigBadBoy - 𝙸𝚛
2024-06-04 03:39:20
🍕
2024-06-04 03:39:24
but you won't
Fox Wizard
2024-06-04 03:40:19
Still waiting for all the other pizzas smh
TheBigBadBoy - 𝙸𝚛
2024-06-04 03:41:46
mmmmh i don't remember from when I owe you one [⠀](https://cdn.discordapp.com/emojis/1088952745471516815.gif?size=48&quality=lossless&name=laugh)
Fox Wizard
2024-06-04 03:42:05
I do and not just 1 <:KittyLaugh:1126563616343216268>
TheBigBadBoy - 𝙸𝚛
2024-06-04 03:42:16
I mean, you always said "pizza if you want to know how" but I still don't know lmao
Fox Wizard
2024-06-04 03:42:42
Maybe it's time to buy that knowledge then <:KittyUwU:1147753612529913938>
2024-06-04 03:43:27
What if I gift you a 2 week Nitro trial thing if I can't beat you? <:KekDog:884736660376535040>
jonnyawsom3
2024-06-04 03:44:41
The two jpeg foxes fighting in my head
Fox Wizard
2024-06-04 03:45:26
<@693503208726986763> made it smaller
2024-06-04 03:45:46
TheBigBadBoy - 𝙸𝚛
Fox Wizard What if I gift you a 2 week Nitro trial thing if I can't beat you? <:KekDog:884736660376535040>
2024-06-04 03:46:15
I don't need Nitro [⠀](https://cdn.discordapp.com/emojis/862625638238257183.webp?size=48&quality=lossless&name=av1_chad)
Fox Wizard
2024-06-04 03:46:26
# F
2024-06-04 03:46:42
[⠀](https://cdn.discordapp.com/emojis/654081051768913941.webp?size=48&quality=lossless&name=av1_PepeHands)
Fox Wizard
2024-06-04 03:47:10
Why did you think I couldn't make it smaller though? <:KittyThink:1126564678835904553>
TheBigBadBoy - 𝙸𝚛
2024-06-04 03:48:24
because I managed to shrink just a little more after jpegultrascan -b 13
2024-06-04 03:48:34
but apparently not enough
2024-06-04 03:49:16
shit
Fox Wizard
2024-06-04 03:50:14
Can't beat me, I'm very experienced with small things™️
JendaLinda
2024-06-04 05:01:06
Am I missing something? Is there anything better than ECT?
2024-06-04 05:09:28
I've found a PNG where ECT just couldn't beat PNGOut. Maybe it could beat it eventually, but it ran for more than 24 hours with no success. PNGOut is just too good at compressing flat-colored lineart.
TheBigBadBoy - 𝙸𝚛
JendaLinda Am I missing something? Is there anything better than ECT?
2024-06-04 05:46:21
for JPGs ? yeah https://encode.su/threads/2489-jpegultrascan-an-exhaustive-JPEG-scan-optimizer
JendaLinda
2024-06-04 06:00:12
Looks like a wrapper for jpegtran. ECT uses mozjpeg jpegtran as well.
2024-06-04 06:01:06
Are there precompiled Windows binaries for mozjpeg tools somewhere?
TheBigBadBoy - 𝙸𝚛
JendaLinda Looks like a wrapper for jpegtran. ECT uses mozjpeg jpegtran as well.
2024-06-04 06:42:37
yeah but it's a bruteforcing script, with exhaustive scan search so really slow but always compresses better than ECT
JendaLinda
2024-06-04 06:59:57
I see. I thought the options for jpeg in ECT were quite sparse compared to the options for zopfli/png.
A homosapien
JendaLinda I've found a PNG, where ECT just couldn't beat PNGOut. Maybe it could beat it eventually, but it ran for more than 24 hours with no success. PNGOut is just too good in compressing flat colored lineart.
2024-06-04 09:43:31
Can you send the png? Pingo and oxipng are also very good for optimizing pngs; they don't offer brute-forcing capabilities, but their lossless reductions are top notch. ECT falls behind in that area.
2024-06-04 09:44:06
Here is a png benchmark showcasing which lossless reductions are possible: https://css-ig.net/benchmark/png-lossless
TheBigBadBoy - 𝙸𝚛
2024-06-04 09:47:16
again, you can go further than just `-9` with ECT, and ECT is the best Deflate optimizer I know. The only way to beat it in PNG filesize is to somehow have a better filter than the ones it tries
JendaLinda
2024-06-05 06:12:41
I haven't tried -99999 yet, but I don't feel like crunching a single png for a week. I've tried some lower values, though. PNGOut excels on images with solid colors, where the zero filter is the optimal choice for the entire image, as the png predictors won't do any better.
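For context on why the zero filter can win on flat art: PNG filters transform each scanline before deflate, and on constant-color rows the None filter already yields maximally repetitive data. A rough sketch (not any tool's actual code), assuming 8-bit grayscale with one byte per pixel and only the None and Sub filters:

```python
import zlib

def apply_filter(rows, ftype):
    """Apply a PNG scanline filter (0 = None, 1 = Sub) and prepend the
    per-row filter-type byte, as the PNG spec does before deflate."""
    out = bytearray()
    for row in rows:
        out.append(ftype)
        if ftype == 0:          # None: raw bytes pass through
            out += row
        else:                   # Sub: delta vs previous byte (1 byte/px)
            prev = 0
            for b in row:
                out.append((b - prev) & 0xFF)
                prev = b
    return bytes(out)

# 100x200 flat grey image, 8-bit grayscale
rows = [bytes([0x42]) * 200 for _ in range(100)]
none_size = len(zlib.compress(apply_filter(rows, 0), 9))
sub_size = len(zlib.compress(apply_filter(rows, 1), 9))
# Both filtered streams compress to almost nothing here; on real flat
# lineart the None filter often preserves longer exact repeats for LZ77.
```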
A homosapien
2024-06-05 07:16:18
-99999 is overkill, usually `-30069` works fine with `--mt-deflate`
TheBigBadBoy - 𝙸𝚛
2024-06-05 09:38:18
`69` [⠀](https://cdn.discordapp.com/emojis/586100936544616450.webp?size=48&quality=lossless&name=sataniasmirk)
Fox Wizard
2024-06-05 09:56:04
<a:RaysNut:737691574221406299>
JendaLinda
A homosapien -99999 is overkill, usually `-30069` works fine with `--mt-deflate`
2024-06-05 01:10:18
I've tried -100, -30060, -30100 and -90100
A homosapien
JendaLinda I've tried -100, -30060, -30100 and -90100
2024-06-05 03:41:27
What are the results?
JendaLinda
A homosapien What are the results?
2024-06-05 05:43:50
Saved 0B out of the file.
2024-06-05 05:44:49
The file was previously compressed using `pngout /f0`
2024-06-05 05:51:36
Here is the file in question if somebody would like to tinker with it.
TheBigBadBoy - 𝙸𝚛
2024-06-05 10:11:37
anyone already tested this ? https://github.com/thorfdbg/libjpeg/
jonnyawsom3
2024-06-05 10:50:52
Tested what about it, visuals, performance, features?
TheBigBadBoy - 𝙸𝚛
Tested what about it, visuals, performance, features?
2024-06-05 11:46:45
quality/filesize ratio (but I doubt anyone has tested it yet tho)
Fox Wizard
2024-06-05 11:48:49
`jpegtran ect leanify pingo jpegoptim curtail jpgcrush jhead` you didn't use any of them. I hate you, I wasted 4 hours searching and testing to save 0 bytes <:KekDog:805390049033191445>
2024-06-05 11:52:35
I don't even know a good JPG analyser (to see the difference between your file and mine)
JendaLinda
TheBigBadBoy - 𝙸𝚛 anyone already tested this ? https://github.com/thorfdbg/libjpeg/
2024-06-06 01:45:30
It has an overwhelming assortment of features. I wonder which viewers can decode those fancy jpegs.
TheBigBadBoy - 𝙸𝚛
TheBigBadBoy - 𝙸𝚛 anyone already tested this ? https://github.com/thorfdbg/libjpeg/
2024-06-06 10:17:27
just tested it and the lossless compression it provides is [⠀](https://cdn.discordapp.com/emojis/852007419474608208.webp?size=48&quality=lossless&name=av1_woag)
2024-06-06 10:21:02
```
$ cjxl -d 0 -e 10 --brotli_effort=11 -x strip=all -I 100 -g 3 -E 11
1094077 Bytes  1m16.536s  ~3 threads
$ jpeg -c -ls 1 -cls
1289537 Bytes  0.868s  1 thread
$ ect -60199 --mt-deflate
1642880 Bytes  49.927s  ~3 threads
```
This is really good lossless compression, and more importantly speed 0.o
2024-06-06 10:23:00
but I feel like this lossless mode isn't supported by anything, while a worse lossless JPG compression (just `-p`) is still supported by e.g. FFmpeg
2024-06-06 10:24:43
also supports a "normal" JPG encoding using XYZ colorspace
2024-06-06 10:24:55
this encoder is really full of features
2024-06-06 10:25:16
but at the same time it comes with so many arguments for the CLI <:KekDog:805390049033191445>
_wb_
2024-06-06 10:26:47
It's the reference implementation of JPEG, made by long-time JPEG member Thomas Richter. It's the first implementation that is actually implementing the full JPEG spec 🙂
2024-06-06 10:27:28
(and also it implements JPEG XT, if I remember correctly)
2024-06-06 10:28:02
(and JPEG LS too, I think)
TheBigBadBoy - 𝙸𝚛
2024-06-06 10:31:24
[⠀](https://cdn.discordapp.com/emojis/872898333239279666.webp?size=48&quality=lossless&name=av1_frogstudy)
2024-06-06 10:31:33
and yeah it implements XT and LS
Fox Wizard
TheBigBadBoy - 𝙸𝚛 `jpegtran ect leanify pingo jpegoptim curtail jpgcrush jhead` you didn't use any of them I hate you, I wasted 4 hours searching and testing, to save 0 byte <:KekDog:805390049033191445>
2024-06-06 10:33:07
Good, I also hate myself <:KittyUwU:1147753612529913938>
2024-06-06 10:33:16
Gotta waste some more hours :D
TheBigBadBoy - 𝙸𝚛
2024-06-06 10:33:25
<:KekDog:805390049033191445>
JendaLinda
2024-06-06 10:44:58
I gave up trying to compress my png file further. It's attached above if anybody is interested. A Ryzen 5 1600 is not the fastest CPU in the world, although I've just recently upgraded from a Core i5 3570 😄
TheBigBadBoy - 𝙸𝚛
2024-06-06 10:52:35
gonna try it later, currently optimizing an 8GB FLAC album <:KekDog:805390049033191445>
JendaLinda
2024-06-06 10:53:34
Ah FLAC, that sounds interesting too.
TheBigBadBoy - 𝙸𝚛
2024-06-06 10:54:16
well, I could use something else for better compression but FLAC support is so good
Fox Wizard
2024-06-06 10:54:37
SAC my beloved <:trolldoge:1200049081880432660>
TheBigBadBoy - 𝙸𝚛
2024-06-06 10:54:50
no never again
2024-06-06 10:55:24
I mean it's nice and all, but compile it with different compilers and you won't get the same encoded (or decoded) file <:KekDog:805390049033191445>
2024-06-06 10:56:06
If I had to pick something else than FLAC I would happily go with LA
JendaLinda
2024-06-06 10:56:15
ECT was able to optimize mp3s, although it seems it just tries to optimize jpeg and png images embedded inside the mp3. My mp3s don't have any images in them, so no use for me.
TheBigBadBoy - 𝙸𝚛
JendaLinda ECT was able to optimize mp3s, although it seems it just tried to optimize jpeg and png images embedded inside mp3. My mp3s don't have any images in them, so no use for me.
2024-06-06 10:57:15
so you want an MP3 lossless optimizer? [⠀](https://cdn.discordapp.com/emojis/1042213453189873716.webp?size=48&quality=lossless&name=esmirk)
Fox Wizard
2024-06-06 10:57:23
MP3packer <:KittyUwU:1147753612529913938>
TheBigBadBoy - 𝙸𝚛
2024-06-06 10:57:43
nooooo you just threw away my chance of getting a pizza
JendaLinda
2024-06-06 10:58:29
I thought mp3 uses a fixed set of bitrates.
TheBigBadBoy - 𝙸𝚛
Fox Wizard MP3packer <:KittyUwU:1147753612529913938>
2024-06-06 10:59:25
b...b...but, you discovered it through me, so doesn't that reset the pizza count? <:KekDog:805390049033191445> same for FLACcid xD
Fox Wizard
TheBigBadBoy - 𝙸𝚛 b...b...but, you discovered through me so doesn't that reset the pizza count ? <:KekDog:805390049033191445> same for FLACcid xD
2024-06-06 10:59:36
I didn't <:KekDog:884736660376535040>
TheBigBadBoy - 𝙸𝚛
2024-06-06 10:59:45
shit
JendaLinda I thought mp3 uses set of fixed bitrates.
2024-06-06 11:00:23
well, all MP3Packer does is minimize the frames' padding, so it can losslessly convert CBR to VBR (and vice versa)
2024-06-06 11:01:12
but even with VBR input it manages to gain some kB
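The padding MP3Packer squeezes out comes from MPEG-1 Layer III frame sizing: a frame is floor(144 × bitrate / samplerate) bytes, plus one optional padding byte so the average rate matches the nominal one. A sketch of just that arithmetic (the real per-frame padding decision in an encoder is driven by a fractional accumulator; this only shows the two possible sizes):

```python
def frame_length(bitrate: int, sample_rate: int, padded: bool) -> int:
    """Byte length of an MPEG-1 Layer III frame (ISO/IEC 11172-3)."""
    return 144 * bitrate // sample_rate + (1 if padded else 0)

# 128 kbps @ 44.1 kHz: frames alternate between 417 and 418 bytes,
# so every needlessly padded frame carries one wasted byte.
unpadded = frame_length(128_000, 44_100, padded=False)  # 417
padded = frame_length(128_000, 44_100, padded=True)     # 418
```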
JendaLinda
2024-06-06 11:02:17
I see, it can be useful to clean up mp3s
TheBigBadBoy - 𝙸𝚛
Fox Wizard I didn't <:KekDog:884736660376535040>
2024-06-06 11:02:30
well then for FLACcid I'm sure lol this was the first battle we had, good times <:CatSmile:805382488293244929>
JendaLinda
2024-06-06 11:03:44
I was just curious what ECT does with mp3, but it only optimizes embedded images, which I've already thrown away.
TheBigBadBoy - 𝙸𝚛
JendaLinda I see it can be useful to clean up mp3s
2024-06-06 11:04:35
are you on Linux or Windows?
JendaLinda
TheBigBadBoy - 𝙸𝚛 you're on Linux or Windows
2024-06-06 11:06:10
Windows
TheBigBadBoy - 𝙸𝚛
JendaLinda Windows
2024-06-06 11:10:55
you might want to give it a try <https://hydrogenaud.io/index.php/topic,32379.0.html>
CrushedAsian255
JendaLinda I see it can be useful to clean up mp3s
2024-06-06 11:11:23
Is there anything that does that on Mac? It’s exactly what I need, but I only have Macs
TheBigBadBoy - 𝙸𝚛
2024-06-06 11:12:04
do Linux executables work on Mac?
2024-06-06 11:12:56
if not, you'll have to compile it yourself, and it's really a hassle because it's OCaml, with unsafe-string
CrushedAsian255
2024-06-06 11:12:57
No, as Mac uses Mach-O instead of ELF
2024-06-06 11:13:06
I could try WINE
TheBigBadBoy - 𝙸𝚛
2024-06-06 11:13:14
yeah works with wine
CrushedAsian255
2024-06-06 11:13:33
I’ll try that when I get home
spider-mario
TheBigBadBoy - 𝙸𝚛 just tested it and the lossless compression it provides is [⠀](https://cdn.discordapp.com/emojis/852007419474608208.webp?size=48&quality=lossless&name=av1_woag)
2024-06-06 11:15:49
how dense is the jxl effort with the closest speed?
TheBigBadBoy - 𝙸𝚛
spider-mario how dense is the jxl effort with the closest speed?
2024-06-06 11:22:06
```
$ cjxl -d 0 -e 4
1208525 Bytes  0.878s  ~3 threads
$ cjxl -d 0 -e 3 --num_threads=0
1210889 Bytes  0.404s
$ cjxl -d 0 -e 4 --num_threads=0
1208525 Bytes  1.134s
```
orig benchmark:
```
$ cjxl -d 0 -e 10 --brotli_effort=11 -x strip=all -I 100 -g 3 -E 11
1094077 Bytes  1m16.536s  ~3 threads
$ jpeg -c -ls 1 -cls
1289537 Bytes  0.868s  1 thread
$ ect -60199 --mt-deflate
1642880 Bytes  49.927s  ~3 threads
```
Fox Wizard
TheBigBadBoy - 𝙸𝚛 well then for FLACcid I'm sure lol this was the first battle we had, good times <:CatSmile:805382488293244929>
2024-06-06 11:23:45
Yes, but that encoder was broken lmao
TheBigBadBoy - 𝙸𝚛
2024-06-06 11:25:08
I filed an issue and the author solved it lmao
spider-mario
2024-06-06 11:25:29
for flac, I recently played a little with http://cue.tools/wiki/FLACCL
2024-06-06 11:25:39
it certainly is fast
2024-06-06 11:25:57
density is fine but only occasionally better than the official encoder
TheBigBadBoy - 𝙸𝚛
spider-mario it certainly is fast
2024-06-06 11:26:29
yeah, as expected for a GPU encoder
Fox Wizard
TheBigBadBoy - 𝙸𝚛 I filed an issue and the author solved it lmao
2024-06-06 11:32:10
Yes, but as far as I know there isn't a compiled version of the fixed encoder and I'm too lazy to learn how to compile <:KekDog:884736660376535040>
TheBigBadBoy - 𝙸𝚛
Fox Wizard Yes, but as far as I know there isn't a compiled version of the fixed encoder and I'm too lazy to learn how to compile <:KekDog:884736660376535040>
2024-06-06 11:32:49
there's one I made but for Linux <:kekw:808717074305122316>
Fox Wizard
2024-06-06 11:33:09
I know. Which is why I'm not using FLACCID anymore XD
TheBigBadBoy - 𝙸𝚛
Fox Wizard I know. Which is why I'm not using FLACCID anymore XD
2024-06-06 11:33:54
https://hydrogenaud.io/index.php/topic,123248.msg1043482.html#msg1043482 just do a little search [⠀](https://cdn.discordapp.com/emojis/895863009820414004.webp?size=48&quality=lossless&name=av1_thinkies)
Fox Wizard
2024-06-06 11:34:07
What about... no <:KittyUwU:1147753612529913938>
TheBigBadBoy - 𝙸𝚛
2024-06-06 11:34:43
well since I gave you the link it's not a search anymore for you <:KekDog:805390049033191445>
Fox Wizard
2024-06-06 11:34:57
That's why I don't have to search anymore :D
2024-06-06 11:35:19
But sadly won't be using FLACCID anymore anyways since some applications can't decode it
TheBigBadBoy - 𝙸𝚛
2024-06-06 11:36:51
true but at least all my apps support it
TheBigBadBoy - 𝙸𝚛 https://hydrogenaud.io/index.php/topic,123248.msg1043482.html#msg1043482 just do a little search [⠀](https://cdn.discordapp.com/emojis/895863009820414004.webp?size=48&quality=lossless&name=av1_thinkies)
2024-06-06 11:37:32
~~shit I should have asked for a pizza~~
JendaLinda
TheBigBadBoy - 𝙸𝚛 `jpegtran ect leanify pingo jpegoptim curtail jpgcrush jhead` you didn't use any of them I hate you, I wasted 4 hours searching and testing, to save 0 byte <:KekDog:805390049033191445>
2024-06-06 11:37:36
Can any of these tools merge duplicate copies of quantization tables?
TheBigBadBoy - 𝙸𝚛
2024-06-06 11:38:02
[⠀](https://cdn.discordapp.com/emojis/586168843781668876.webp?size=48&quality=lossless&name=AkkoShrug)
2024-06-06 11:38:18
the only thing I know is that these can't improve compression over jpegultrascan
Fox Wizard
TheBigBadBoy - 𝙸𝚛 ~~shit I should have asked for a pizza~~
2024-06-06 11:39:23
But it would only be worth a pizza if it's valuable information to me <:RaysEvilUglyClone:1126526202178441277>
TheBigBadBoy - 𝙸𝚛
2024-06-06 11:40:28
right <:PepeHands:808829977608323112>
JendaLinda
2024-06-06 11:45:45
Some people use regular jpeg at q=100 in the hope it would be lossless. It's not the best idea, but these jpegs do exist. In these files, the luma and chroma quantization tables are identical: both contain only 1s. So using just one copy for both luma and chroma easily saves 65 bytes.
2024-06-06 04:32:11
Most jpeg optimizers seem to be based on jpegtran. jpegtran will discard unused quantization tables, so the SOF marker can be modified so that both luma and chroma use one copy of the quantization table, and jpegtran will remove the redundant copy. I edited the SOF using a hex editor, but it shouldn't be too hard to add a check for multiple identical quantization tables to the optimizer programs.
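The duplicate-table check described above is straightforward to script. A minimal sketch (hypothetical helper, not from any of the tools mentioned) that walks JPEG marker segments up to the scan data, collects DQT tables, and reports table ids whose coefficients match:

```python
import struct

def find_duplicate_dqt(data: bytes):
    """Walk JPEG marker segments and report pairs of quantization
    tables whose coefficients are identical (merge candidates)."""
    tables = {}                      # table id (Tq) -> coefficient bytes
    i = 2                            # skip the SOI marker
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):   # EOI / SOS: stop before scan data
            break
        seglen = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + seglen]
        if marker == 0xDB:           # DQT segment: may hold several tables
            j = 0
            while j < len(payload):
                precision, tq = payload[j] >> 4, payload[j] & 0x0F
                size = 128 if precision else 64
                tables[tq] = payload[j + 1:j + 1 + size]
                j += 1 + size
        i += 2 + seglen
    ids = sorted(tables)
    return [(a, b) for ai, a in enumerate(ids) for b in ids[ai + 1:]
            if tables[a] == tables[b]]
```

With two all-1s tables (the q=100 case above), it flags ids 0 and 1 as mergeable.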
2024-06-07 07:01:04
Meanwhile, I discovered even more pngs compressed by pngout where ECT is failing.
TheBigBadBoy - 𝙸𝚛
2024-06-07 09:56:46
indeed, but I won't try `-99999 --palsort=120 --allfilters-b` on the file you sent yesterday <:KekDog:805390049033191445>
2024-06-07 09:57:46
I wonder: how many bytes smaller is the pingo output compared to something like `ect -90099 --mt-deflate`?
JendaLinda
2024-06-07 10:05:30
I haven't tried pingo, the sparse documentation is not convincing.
TheBigBadBoy - 𝙸𝚛
2024-06-07 10:09:07
I meant pngout lol
JendaLinda
2024-06-07 10:10:50
Also, ECT doesn't say what the file size would be when it fails.
2024-06-07 10:11:48
But I can do the comparison by starting with an "unoptimized" file.
TheBigBadBoy - 𝙸𝚛
JendaLinda Also ECT doesn't say how much the file size would be when it fails.
2024-06-07 10:13:09
yeah, it only shows the number of saved bytes
JendaLinda
TheBigBadBoy - 𝙸𝚛 indeed, but I won't try `-99999 --palsort=120 --allfilters-b` on the file you send yesterday <:KekDog:805390049033191445>
2024-06-07 10:15:07
According to my tests, `--mt-deflate` actually increases the file size by several bytes quite often.
2024-06-07 10:15:14
It takes much longer though.
TheBigBadBoy - 𝙸𝚛
2024-06-07 10:16:12
how can mt-deflate take longer lmao
JendaLinda
TheBigBadBoy - 𝙸𝚛 how can mt-deflate take longer lmao
2024-06-07 10:19:45
Sorry, omitting --mt-deflate takes longer.
2024-06-07 10:24:28
So yeah, `--mt-deflate` is sometimes better and sometimes worse. It depends on the image.
2024-06-07 10:26:47
Alright. I will do more tests. I suppose the result will be tomorrow 😄
TheBigBadBoy - 𝙸𝚛
2024-06-07 10:27:22
[⠀](https://cdn.discordapp.com/emojis/674256399412363284.webp?size=48&quality=lossless&name=av1_pepelove)
JendaLinda
2024-06-07 10:31:17
If there are already optimizers for audio files, the rabbit hole can go much deeper. What about optimizing video files?
2024-06-07 10:39:35
I can imagine hardware video encoders do quite a sloppy job.
TheBigBadBoy - 𝙸𝚛 I wonder: how much bytes less is the pingo output compared to something like `ect -90099 --mt-deflate` ?
2024-06-07 12:10:42
Alright, didn't take too long. `pngout /f0` gives 336683 bytes, `ect -90099 --mt-deflate` gives 336869 bytes
TheBigBadBoy - 𝙸𝚛
JendaLinda If there are already optimizers for audio files, the rabbit hole can go much deeper. What about optimizing video files?
2024-06-07 12:12:02
I know some audio optimizers but never heard of any video ones [⠀](https://cdn.discordapp.com/emojis/654081052108652544.webp?size=48&quality=lossless&name=av1_Hmmm)
JendaLinda Alright, didn't take too long. `pngout /f0` does 336683 bytes `ect -90099 --mt-deflate` does 336869 bytes
2024-06-07 12:12:28
<:FeelsReadingMan:808827102278451241>
jonnyawsom3
TheBigBadBoy - 𝙸𝚛 I know some audio optimizers but never heard of any video ones [⠀](https://cdn.discordapp.com/emojis/654081052108652544.webp?size=48&quality=lossless&name=av1_Hmmm)
2024-06-07 06:19:43
https://github.com/danielrh/losslessh264
TheBigBadBoy - 𝙸𝚛
2024-06-07 06:21:16
isn't that just `-crf 0` ?
jonnyawsom3
2024-06-07 06:21:38
The name is misleading, it's a recoder/recompressor
2024-06-07 06:21:55
Made by dropbox along with their jpeg compression
2024-06-07 06:22:29
https://github.com/dropbox/avrecode
spider-mario
2024-06-07 06:22:45
ah, a sort of brunsli for h.264?
TheBigBadBoy - 𝙸𝚛
https://github.com/dropbox/avrecode
2024-06-07 06:23:35
avrecode acts like an archive (only avrecode can decompress the encoded files)
The name is misleading, it's a recoder/recompressor
2024-06-07 06:23:40
nice, thanks !
jonnyawsom3
2024-06-07 06:23:59
Or at least that's what it says
2024-06-07 06:25:49
I just remember finding it a few months ago then wanting this after https://elecard.com/products/video-analysis/streameye
TheBigBadBoy - 𝙸𝚛
spider-mario ah, a sort of brunsli for h.264?
2024-06-07 06:35:51
surprisingly, brunsli gives the smallest output for the unoptimized JPG, and the biggest output for the most optimized JPG (the one done with jpegultrascan)
2024-06-07 06:36:56
and to **de**optimize a JPG, the only command I know is `jpegtran -revert`, idk if there's something else (or something better)
2024-06-07 06:40:41
```bash
$ find *jpg -printf '%s %p\n' | sort -n
365977 1.jpg
366319 2.jpg
376318 3.jpg
391187 4.jpg
$ find *jxl -printf '%s %p\n' | sort -n
314107 3.jxl
314323 2.jxl
314346 1.jxl
314381 4.jxl
$ find *brn -printf '%s %p\n' | sort -n
313474 4.brn
313566 3.brn
313743 2.brn
313759 1.brn
```
they're all the same file (losslessly speaking)
2024-06-07 07:12:40
JXL done with `cjxl -e 10 -d 0 --brotli_effort=11 -I 100 -g 3 -E 11` ofc but idk if that changes anything for lossless_jpeg=1
A homosapien
JendaLinda Here is the file in question if somebody would like to tinker with it.
2024-06-07 07:58:01
What a strange image. I unoptimized it, then tried recompressing it with pngout; I could never even get close to the 328 KB file you sent me. The best result I got with pngout `/f0` was 352 KB. Here is how the other png optimizers did:
ECT `-9`: 329 KB
Pingo `-s4 -l`: 330 KB
Oxipng `-o max`: 330 KB
TheBigBadBoy - 𝙸𝚛
2024-06-07 08:22:05
mmmmh note that running some optimizers after other ones might help compression
2024-06-07 08:22:43
for example try pngOUT on the output of ECT, I wonder if it can shave off some bytes
JendaLinda
A homosapien What a strange image, I unoptimized it then tried recompressing it with pngOUT. I could never even get close to the 328 KB file you sent me. The best result I got with pngOUT `/f0` was 352 KB. Here is how the other png optimizers did: ECT `-9`: 329 KB Pingo `-s4 -l`: 330 KB Oxipng `-o max`:330 KB
2024-06-07 08:50:43
Sorry, I forgot to tell you that pngout is quite quirky and its compression depends on previous compression. So first run pngout without options, and then pngout /f0
A homosapien
2024-06-07 08:56:38
Ok now I got that image down to 328 KB
2024-06-07 08:56:54
But that just raises more questions than answers
2024-06-07 08:57:13
Like why does running the same image twice through the same program make it smaller?
2024-06-07 09:02:48
Does it have something to do with block splitting?
JendaLinda
2024-06-07 09:06:35
This is just a quirk of pngout. The block splitting seems to be the culprit.
A homosapien
2024-06-08 07:01:25
Yup, just got a smaller image tweaking the block splitting setting with pngout
2024-06-08 07:02:20
I managed to save like 100 bytes just by manually trying different block sizes
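The block-splitting effect being probed here can be reproduced directly with zlib: where deflate starts a new block changes the output size, because each block gets its own Huffman trees. A sketch that forces a boundary with Z_FULL_FLUSH (an illustration of the mechanism only, not what pngout actually does internally):

```python
import zlib

def deflate_whole(data: bytes) -> bytes:
    """Compress in one go; zlib picks block boundaries itself."""
    c = zlib.compressobj(9)
    return c.compress(data) + c.flush()

def deflate_with_boundary(data: bytes, cut: int) -> bytes:
    """Force a deflate block boundary at `cut` via Z_FULL_FLUSH,
    so the two halves get independent Huffman trees."""
    c = zlib.compressobj(9)
    out = c.compress(data[:cut]) + c.flush(zlib.Z_FULL_FLUSH)
    return out + c.compress(data[cut:]) + c.flush()

# Two statistically different halves: repetitive text vs byte-value sweep
half1 = b"flat color " * 800
data = half1 + bytes(range(256)) * 40
whole = deflate_whole(data)
split = deflate_with_boundary(data, len(half1))
# Both decode to the same bytes; the sizes generally differ, which is
# why brute-forcing split points can shave off a few more bytes.
```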
JendaLinda
A homosapien I managed to save like 100 bytes just by manually trying different block sizes
2024-06-08 12:02:29
Yes, that's possible. Pngout seems to choose suboptimal block splitting when the filter type is enforced right away. But it's not an issue if the png was optimized by pngout beforehand. Unfortunately pngout is closed source, so who knows what's going on. It could be a bug. I'm trying to move to open-source optimizers.
A homosapien
JendaLinda Yes, that's possible. Pngout seems to choose suboptimal block splitting when the filter type is enforced right away. But it's not an issue if the png was optimized by pngout beforehand. Unfortunately pngout is closed source so who knows what's going on. It could be a bug. I'm trying to move to open source optimizers.
2024-06-08 08:56:47
Yup, I love open source optimizers. I got a bug fixed in oxipng recently.
jjrv
2024-06-10 08:11:01
I guess to compress an animation for web use, AV1 would be a decent codec. But is there a way to get over 8 bits per channel (let's say 12) and still decode it in a browser, reading the 12bit data in JS? If the app has to resort to Wasm, maybe JPEG XL is a competitive option even for HDR video 🤔
Quackdoc
2024-06-10 08:14:51
JXL will never be competitive for lossy video. AV1 supports 10-bit on most profiles, and 12-bit is usually only supported in software; if you need more than 12-bit, then AV1 is not usable. Now, if you are using JXL in a professional editing workflow where you are pretty much just using complete image sequences, then yes, JXL is quite good there
jjrv
2024-06-10 08:19:46
Use case is needing to access the decoded frame in JavaScript. I'm using lossless JXL for the raw data and wondering how to deliver compressed data to the client without losing too much flexibility. I guess one option is to decode AV1 in Wasm and provide a separate desktop app if someone is serious about using the data instead of just testing and poking around.
CrushedAsian255
jjrv Use case is needing to access the decoded frame in JavaScript. I'm using lossless JXL for the raw data and wondering how to deliver compressed data to the client without losing too much flexibility. I guess one option is to decode AV1 in Wasm and provide a separate desktop app if someone is serious about using the data instead of just testing and poking around.
2024-06-10 09:18:22
Could Motion JXL(?) be a decent way to compress my giant ProRes 422HQ archive folder?
2024-06-10 09:18:48
How does the compression ratio compare?
jonnyawsom3
CrushedAsian255 How does the compression ratio compare?
2024-06-10 09:24:10
https://discord.com/channels/794206087879852103/794206170445119489/1116377019928805426
CrushedAsian255
https://discord.com/channels/794206087879852103/794206170445119489/1116377019928805426
2024-06-10 09:29:04
Hmm, I’ll do some testing myself on my own machine but it might work
2024-06-10 09:29:18
How do I stuff JXL images into an MKV though?
2024-06-10 09:38:34
Other issue would be converting from YUV422-bt2020 to whatever JXL uses
_wb_
jjrv I guess to compress an animation for web use, AV1 would be a decent codec. But is there a way to get over 8 bits per channel (let's say 12) and still decode it in a browser, reading the 12bit data in JS? If the app has to resort to Wasm, maybe JPEG XL is a competitive option even for HDR video 🤔
2024-06-10 10:42:08
I guess you could use the HDR canvas instead of the regular image canvas to get access to 16-bit buffers from 12-bit avif / apng / jxl images/animations. But I think only Chrome supports the HDR canvas...
jjrv
2024-06-10 10:43:28
Thanks, will test! I'm fine with some features only on Chrome, it's still better than a custom desktop app.
_wb_
CrushedAsian255 Other issue would be converting from YUV422-bt2020 to whatever JXL uses
2024-06-10 10:49:01
In principle JXL can represent yuv422 losslessly (at any bitdepth, in any underlying rgb colorspace), as long as the yuv matrix is the jpeg one (full range). In practice though, libjxl does not have any API to pass buffers as yuv (or to skip the decoder conversion from yuv to rgb), since the API is based on interleaved RGB(A), so there currently is no way to do that except by hacking libjxl in nontrivial ways.
2024-06-10 10:52:50
Likely trying to do a lossless conversion will not be effective anyway, unless somehow ProRes frames can be mapped to VarDCT frames in an exact way (like we do for JPEG recompression) — I don't know if that's possible though, I don't know ProRes well enough.
2024-06-10 10:53:50
If some additional loss is allowed, I guess you can just convert the yuv422 to RGB and do standard lossy jxl encoding on that.
Quackdoc
CrushedAsian255 How do I stuff JXL images into an MKV though?
2024-06-10 10:54:40
you need a build of ffmpeg using this patch. You can remove the QOI part if you want; change the `+34,8` to `+34,7` if you remove it
```diff
diff --git a/libavformat/riff.c b/libavformat/riff.c
index df7e9df31b..16e37fb557 100644
--- a/libavformat/riff.c
+++ b/libavformat/riff.c
@@ -34,6 +34,8 @@
  * files use it as well.
  */
 const AVCodecTag ff_codec_bmp_tags[] = {
+    { AV_CODEC_ID_JPEGXL, MKTAG('J', 'X', 'L', ' ') },
+    { AV_CODEC_ID_QOI, MKTAG('Q', 'O', 'I', ' ') },
     { AV_CODEC_ID_H264, MKTAG('H', '2', '6', '4') },
     { AV_CODEC_ID_H264, MKTAG('h', '2', '6', '4') },
     { AV_CODEC_ID_H264, MKTAG('X', '2', '6', '4') },
```
JendaLinda
2024-06-10 10:55:54
Some time ago, I discovered a particular jpeg file. https://discord.com/channels/794206087879852103/804324493420920833/1242368843461169252
2024-06-10 10:55:57
I can't figure out what's wrong with the file, I don't see anything suspicious. djpegli can't decode the jpeg file. cjxl can't transcode the jpeg file to jxl losslessly. jpegtran will fix the file so both djpegli and cjxl will be able to process it.
Quackdoc
2024-06-10 10:57:43
the thing to take note of when using JXL to compress a "working file": I **highly** recommend using faster_decoding=3. It does increase the filesize a good chunk, but the video actually remains usable in an NLE or whatever you use
2024-06-10 11:01:41
also it's very important to note that the comparison I made is from the same source, and it's comparing prores HQ from ffmpeg's prores_ks encoder. So it should still save more data, but always keep in mind you will need to do a lossy encode to actually save the data
_wb_
JendaLinda I can't figure out, what's wrong with the file, I don't see anything suspicious. djpegli can't decode the jpeg file. cjxl can't transcode the jpeg file to jxl losslessly. jpehtran will fix the file so both djpegli and cjxl will be able to process it.
2024-06-10 11:08:48
If you want to figure out why things fail in libjxl, it's very useful to build your libjxl with `./ci.sh opt` instead of building the release build where debug output has been stripped. Likely you'll see a more useful error string that way.
JendaLinda
2024-06-10 11:16:51
Well, I guess I will have to figure out how to compile libjxl. I know that lossless transcoding is sometimes picky about jpegs, but jpegli should be able to decode all jpegs.
w
2024-06-10 11:17:20
can it do cmyk
JendaLinda
2024-06-10 11:24:05
I haven't tested CMYK yet. The problematic file is supposedly just an ordinary YCbCr progressive JPEG; jpegtran can convert it losslessly to baseline JPEG.
gbetter
2024-06-10 11:43:40
**JPEG2000 support removed from Safari 18.** As of the beta, apparently, JPEG2000 support is completely removed. I have not tested this yet but as the Webkit team notes: *"WebKit for Safari 18 beta removes support for the JPEG2000 image format. Safari was the only browser to ever provide support. If you’ve been serving JPEG2000 files using best practices, then your site is using the picture element to offer multiple file format options to every browser. Safari 18 beta will simply no longer choose JPEG2000, and instead use a file compressed in JPEG XL, AVIF, WebP, HEIC, JPG/JPEG, PNG, or Gif — choosing the file that’s best for each user. Only one image will be downloaded when you use <picture>, and the browser does all the heavy lifting. We have noticed that some Content Deliver Networks (CDN) use User Agent sniffing to provide one file to each UA, offering only JPEG2000 images to Safari — especially on iPhone and iPad. If you expect this might be happening with your site, we recommend testing in Safari 18 beta on both macOS Sequoia and iOS or iPadOS 18. If you see problems, contact your SaaS provider or change your image delivery settings to ensure your website provides fallback images using industry best practices."*
HCrikki
2024-06-11 01:24:46
there should be no issue with that. i think pdf docs with jp2 images dont rely on safari/webkit to render anyway
2024-06-11 01:30:55
unlike what the myth implies, formats can absolutely be shelved, especially if they're almost exclusively served by CDNs as 'optimized' substitutes for original images that are typically jpg or png (like with webp). sites maintained in the last few years can offset any adverse consequence of their use by simply using any other available format, or the originals, unlike geocities-era abandonware (which doesn't suffer this issue anyway)
VcSaJen
jjrv I guess to compress an animation for web use, AV1 would be a decent codec. But is there a way to get over 8 bits per channel (let's say 12) and still decode it in a browser, reading the 12bit data in JS? If the app has to resort to Wasm, maybe JPEG XL is a competitive option even for HDR video 🤔
2024-06-11 04:27:38
*.swf used to be a good way of publishing animation for web. Infinite resolution and a small size.
a goat
VcSaJen *.swf used to be a good way of publishing animation for web. Infinite resolution and a small size.
2024-06-11 05:36:08
It's incredibly upsetting that nothing replaced Flash for animation. We have GPU accelerated vector libraries now, it's way easier to make a performant animated vector format these days than when Flash started
VcSaJen
2024-06-11 05:51:33
Animated SVG, meanwhile, can bring even an i9 to its knees
Meow
gbetter **JPEG2000 support removed from Safari 18.** As of the beta, apparently, JPEG2000 support is completely removed. […]
2024-06-11 07:20:22
Oh no Nintendo can no longer give me Switch thumbnails in .jp2<:PepeHands:808829977608323112>
_wb_
2024-06-11 07:50:12
Some Cloudinary users were actually still using j2k on Safari/iOS. For now it still seems to work (it still decodes it) but we're trying to track down users who are doing hardcoded `f_jp2` in Cloudinary (instead of using our `f_auto` where we can make sure we don't send a format that will not work), which is a bad idea anyway but it will become even worse when Safari really pulls the plug on it.
2024-06-11 07:50:39
But yes, this is actually nice to see, that a browser can actually deprecate a format it used to support.
2024-06-11 07:55:01
This has been used many times as an argument against adding JXL support: "the bar for new formats is incredibly high because once added they have to be there forever" has basically been the dogma so far, although even before the news of Safari pulling j2k, there were some precedents: Edge dropping JXR (WDP), all browsers dropping XBM.
CrushedAsian255
2024-06-11 07:56:58
at least Safari has JXL and so does my Thorium install
2024-06-11 07:57:35
as i use mac and iphone (although thinking about Android)
Quackdoc you need a build of ffmpeg using this patch; you can remove the qoi part if you want (change the +34,8 to +34,7 if you remove it)
```diff
diff --git a/libavformat/riff.c b/libavformat/riff.c
index df7e9df31b..16e37fb557 100644
--- a/libavformat/riff.c
+++ b/libavformat/riff.c
@@ -34,6 +34,8 @@
  * files use it as well.
  */
 const AVCodecTag ff_codec_bmp_tags[] = {
+    { AV_CODEC_ID_JPEGXL, MKTAG('J', 'X', 'L', ' ') },
+    { AV_CODEC_ID_QOI, MKTAG('Q', 'O', 'I', ' ') },
     { AV_CODEC_ID_H264, MKTAG('H', '2', '6', '4') },
     { AV_CODEC_ID_H264, MKTAG('h', '2', '6', '4') },
     { AV_CODEC_ID_H264, MKTAG('X', '2', '6', '4') },
```
2024-06-11 08:08:00
i installed through Homebrew, how would I go about changing the compilation?
Quackdoc
2024-06-11 08:13:51
not sure, I dont really use homebrew so I couldn't really say
CrushedAsian255
2024-06-11 08:14:54
because i could compile it but then i do not know how to link that into anything else
2024-06-11 08:15:00
like mpv
Quackdoc
2024-06-11 08:16:25
if the application is linked against that ffmpeg it should just work. mpv won't work oob since it uses its own mkv demuxer; using the same patch you can mux into a NUT container and that will work better with mpv
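A sketch of that remux step with the patched ffmpeg (file names are placeholders):

```
# Remux JXL-in-MKV into a NUT container without re-encoding,
# so mpv's internal mkv demuxer is no longer involved
ffmpeg -i input.mkv -c copy output.nut
mpv output.nut
```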
CrushedAsian255
2024-06-11 08:17:24
my mpv build is linking against homebrew's ffmpeg:
/opt/homebrew/opt/ffmpeg/lib/libavcodec.61.dylib
/opt/homebrew/opt/ffmpeg/lib/libavfilter.10.dylib
/opt/homebrew/opt/ffmpeg/lib/libavformat.61.dylib
/opt/homebrew/opt/ffmpeg/lib/libavutil.59.dylib
/opt/homebrew/opt/ffmpeg/lib/libswresample.5.dylib
/opt/homebrew/opt/ffmpeg/lib/libswscale.8.dylib
/opt/homebrew/opt/ffmpeg/lib/libavdevice.61.dylib
2024-06-11 08:17:40
maybe I'll just decode to NUT and then pass it as raw video to other programs through a pipe
TheBigBadBoy - 𝙸𝚛 well MP3Packer all it does is minimizing the frames' padding so it can convert losslessly CBR to VBR (and inversely)
2024-06-11 12:21:34
Are there any tools that do that that are more up to date?
2024-06-11 12:21:44
MP3Packer breaks on joint stereo for me
TheBigBadBoy - 𝙸𝚛
CrushedAsian255 MP3Packer breaks on joint stereo for me
2024-06-11 12:22:43
MP3Packer doesn't break for 100+ files I gave it, could you share your problematic file ?
2024-06-11 12:23:26
and no, there's no other tool than MP3Packer, and not "more up to date" because there's nothing more MP3Packer can do <:KekDog:805390049033191445>
CrushedAsian255
TheBigBadBoy - 𝙸𝚛 MP3Packer doesn't break for 100+ files I gave it, could you share your problematic file ?
2024-06-11 12:24:19
Maybe wine is screwing with something
2024-06-11 12:24:29
Hang on I’m on phone I’ll send u the file in a bit
TheBigBadBoy - 𝙸𝚛
2024-06-11 12:24:50
wait, you're trying it under Linux ?
2024-06-11 12:25:02
then use the Linux executable <:KekDog:805390049033191445>
CrushedAsian255
TheBigBadBoy - 𝙸𝚛 then use the Linux executable <:KekDog:805390049033191445>
2024-06-11 12:25:17
No I’m on MacOS
TheBigBadBoy - 𝙸𝚛
2024-06-11 12:25:28
oh
2024-06-11 12:25:30
right
CrushedAsian255
2024-06-11 12:25:32
I guess I could magic wormhole the files up to my Linux vps
TheBigBadBoy - 𝙸𝚛
2024-06-11 12:25:55
~~you could also compile it yourself~~
2024-06-11 12:26:05
wait, is your MAC arm64 ?
CrushedAsian255
2024-06-11 12:26:05
Screw ocaml
TheBigBadBoy - 𝙸𝚛 wait, is your MAC arm64 ?
2024-06-11 12:26:09
Yes
TheBigBadBoy - 𝙸𝚛
2024-06-11 12:26:29
then I already compiled a static version for it [⠀](https://cdn.discordapp.com/emojis/895863009820414004.webp?size=48&quality=lossless&name=av1_thinkies)
CrushedAsian255
2024-06-11 12:26:46
How?
TheBigBadBoy - 𝙸𝚛
2024-06-11 12:26:46
well for aarch64 (my phone) but it's the same arch
CrushedAsian255
2024-06-11 12:26:52
Can you send makefile?
2024-06-11 12:27:13
I couldn’t get OCaml to compile it as something something immutable strings
TheBigBadBoy - 𝙸𝚛
2024-06-11 12:27:41
I'll send you the exec lmao I compiled it more than a year ago, I don't remember how I did it (and it was a fucking headache bc of OCaml) <:kekw:808717074305122316>
CrushedAsian255 I couldn’t get OCaml to compile it as something something immutable strings
2024-06-11 12:28:32
yeah, so you have to even build OCaml yourself with `--use-unsafe-string` (something like that) during the config <:kekw:808717074305122316>
2024-06-11 12:29:57
<@386612331288723469> MP3Packer, version 2.04, 64-bit, compiled with optimizations (`-O3 -flto`), generic, statically linked, stripped. Compressed using `upx -9 --ultra-brute` 4.0.2 https://cdn.discordapp.com/attachments/1042536514783023124/1080563572960133161/mp3packer_aarch64_clang_static
CrushedAsian255
2024-06-11 12:36:09
Is it ELF or MachO?
TheBigBadBoy - 𝙸𝚛
2024-06-11 12:42:32
I don't understand
2024-06-11 12:43:39
it's the same as if I did `clang -flto -s -static -O3 -mtune=generic -o mp3packer mp3packer.c`
CrushedAsian255 Is it ELF or MachO?
2024-06-11 12:45:01
it's a build from my arm64 (aarch64) phone, but perhaps what you mean is that it needs something special to work on MacOS ?
2024-06-11 12:46:08
I thought it didn't need anything bc MacOS is based on Linux [⠀](https://cdn.discordapp.com/emojis/654081052108652544.webp?size=48&quality=lossless&name=av1_Hmmm)
CrushedAsian255
TheBigBadBoy - 𝙸𝚛 I thought it didn't need anything bc MacOS is based on Linux [⠀](https://cdn.discordapp.com/emojis/654081052108652544.webp?size=48&quality=lossless&name=av1_Hmmm)
2024-06-11 12:54:10
nah, macOS is based on Unix, but not Linux specifically
2024-06-11 12:54:14
so it uses the Mach-O
2024-06-11 12:54:25
wait where do you get a .c file from?
2024-06-11 12:54:40
i could just build the .c file using MacOS clang
TheBigBadBoy - 𝙸𝚛
CrushedAsian255 wait where do you get a .c file from?
2024-06-11 12:58:23
nowhere, I just used this command to show how it was built and with which params, but ofc there's no C file
2024-06-11 12:59:03
could you try the executable I sent?
CrushedAsian255
2024-06-11 01:02:57
it won't work: it's `ELF 64-bit LSB executable, ARM aarch64`, but MacOS uses `Mach-O 64-bit executable arm64`, sorry
2024-06-11 01:03:44
huh it's working now
2024-06-11 01:04:24
odd behaviour
2024-06-11 01:04:27
maybe wine update fixed it
2024-06-11 01:05:28
it was this file btw
2024-06-11 01:07:25
nvm it's broken for this file
2024-06-11 01:07:34
2024-06-11 01:07:55
it gives this
2024-06-11 01:08:12
it's dropping the MS encoded audio packets i think
TheBigBadBoy - 𝙸𝚛
CrushedAsian255 it was this file btw
2024-06-11 01:17:08
optimized using MP3Packer x86-64: `0,a,murmur3=20f7d9636eed5d59a08677f1f4e79584` same hash, no prob
CrushedAsian255
2024-06-11 01:18:07
again, no prob on my end
CrushedAsian255
2024-06-11 09:59:39
probably a bug in the version of mp3packer i'm using
2024-06-11 10:02:38
updating to 2.04 fixed the bug for me
Demiurge
TheBigBadBoy - 𝙸𝚛 I thought it didn't need anything bc MacOS is based on Linux [⠀](https://cdn.discordapp.com/emojis/654081052108652544.webp?size=48&quality=lossless&name=av1_Hmmm)
2024-06-12 09:51:09
https://github.com/apple-oss-distributions/xnu
2024-06-12 09:51:53
It's its own thing that uses a lot of FreeBSD code (for some reason)
2024-06-12 09:53:51
It has no relation to Linux and is very different than Linux. Linux was loosely inspired by Unix which is why Linux loosely/vaguely resembles BSD, if BSD was dropped on its head several times as a baby
TheBigBadBoy - 𝙸𝚛
2024-06-12 09:54:43
I didn't want to say Linux, but rather Unix
CrushedAsian255
2024-06-12 10:49:56
MacOS is POSIX compliant though
TheBigBadBoy - 𝙸𝚛
2024-06-12 11:11:01
It's just that I thought any static prog compiled on Unix could be run on any Unix machine, but apparently and sadly no [⠀](https://cdn.discordapp.com/emojis/654081051768913941.webp?size=48&quality=lossless&name=av1_PepeHands)
lonjil
2024-06-12 11:15:03
very much no
w
2024-06-12 11:15:24
you can take the simplest x86 program and run it on windows and linux
Quackdoc
TheBigBadBoy - 𝙸𝚛 It's just that I thought any static prog compiled on Unix could be run on any Unix machine but pparently and sadly no [⠀](https://cdn.discordapp.com/emojis/654081051768913941.webp?size=48&quality=lossless&name=av1_PepeHands)
2024-06-12 11:30:13
no, it still has to be compiled for that specific environment, which at minimum means the kernel, and oftentimes libc, though static compilation against something like musl will usually link libc into the binary
lonjil
2024-06-12 11:32:41
that latter thing only works on linux
Quackdoc
2024-06-12 11:35:02
I think I'd seen a couple of others where similar things work, but regardless it's still kernel-dependent, and ofc there are any other deps too
lonjil
2024-06-12 11:36:00
linux is the only kernel with a stable kernel interface 😄
2024-06-12 11:37:07
every OS that doesn't use the Linux kernel, has you use some kind of dynamically linked library to access the kernel
2024-06-12 11:37:27
On POSIX OSs, this is libc
afed
2024-06-12 11:58:03
Demiurge
lonjil linux is the only kernel with a stable kernel interface 😄
2024-06-12 01:44:27
Which is a horrible idea, incidentally... the only reason why Linux does this is because they want to avoid making an actual complete OS with a stable high-level interface.
2024-06-12 01:47:29
Having a stable low level kernel interface makes it much harder for Linux to improve since they have to worry about breaking compatibility... if they just developed a full OS then they wouldn't have to worry about changing low level interfaces as long as the high level interfaces are the same. But for some reason the Linux maintainers never wanted to take on the responsibility of adding essential userspace stuff to their source tree
2024-06-12 01:48:11
They would rather let someone else worry about it even if it's a complete holy mess like glibc
2024-06-12 01:57:59
It seems completely arbitrary too, since they act like writing basic userspace tools is some magical line they cannot cross that their mind simply cannot comprehend. It started with Linus talking like that
2024-06-12 02:00:26
"I can write a kernel and (userspace tool) git but I just can't include a c library and basic daemons."
Quackdoc
Demiurge Which is a horrible idea, incidentally... the only reason why Linux does this is because they want to avoid making an actual complete OS with a stable high-level interface.
2024-06-12 02:01:45
can you elaborate on this? because I can't see the path of logic here
Demiurge
2024-06-12 02:03:45
They don't want to provide any high level userspace stuff. They make a ton of arbitrary exceptions to this rule of course because of how insane and impractical it would be to stick to that rule.
Quackdoc can you elaborate on this? because I can't see the path of logic here
2024-06-12 02:05:02
So, they provide a low level kernel interface and try to keep it stable in order to make it easier for some outside group to do the rest of the work for them.
Quackdoc
2024-06-12 02:05:04
I dont think the kernel developers providing high level userspace stuff really makes a lot of sense
lonjil
2024-06-12 02:05:12
They don't have a rule against userspace stuff
2024-06-12 02:05:26
But they're all kernel devs, not OS devs
2024-06-12 02:05:32
Making an OS would be a big shift
Demiurge
Quackdoc I dont think the kernel developers providing high level userspace stuff really makes a lot of sense
2024-06-12 02:05:49
You mean like every other open source OS project and source tree?
Quackdoc
2024-06-12 02:06:14
if you take a look at what happened with windows, it's a way worse situation, they have to actively go out and emulate previous versions of windows to retain backwards compatibility, the maintenance burden is fairly high for that
2024-06-12 02:06:42
osx just doesn't care enough to really be bothered, "if it breaks it breaks" kind of thing
Demiurge
lonjil Making an OS would be a big shift
2024-06-12 02:07:14
Sure but it makes sense to keep things in the same source tree and provide everything under one project umbrella for many different obvious practical reasons. The decision bites Linux in the ass often IMO
Quackdoc
2024-06-12 02:08:21
not really; it provides great flexibility and support. having the linux kernel actually be stable is the reason why linux dominates in pretty much every single market aside from desktop, and the reason it doesn't dominate desktop is because no one cares about it
Demiurge
2024-06-12 02:08:54
A stable kernel api has nothing to do with why people use it
Quackdoc
2024-06-12 02:09:17
it has excellent backwards compatibility, it's really rare that a program actually breaks when a kernel updates
Demiurge
2024-06-12 02:09:36
The only reason they do that is because they don't want to include libc in their source tree for irrational reasons
Quackdoc
2024-06-12 02:09:42
unlike with something like say windows server, every major upgrade you need to update your program or deal with the usually spotty backwards compatibility issues
Demiurge The only reason they do that is because they don't want to include libc in their source tree for irrational reasons
2024-06-12 02:09:54
there are loads of reasons not to do so
2024-06-12 02:10:15
like the fact that actually supporting varying libc implementations is great for security, as well as perf
Demiurge
2024-06-12 02:10:19
You don't have to worry about kernel updates, you have to worry about glibc updates breaking compatibility with everything...
Quackdoc
Demiurge You don't have to worry about kernel updates, you have to worry about glibc updates breaking compatibility with everything...
2024-06-12 02:10:40
that's because glibc is dogshit, just use musl or a different libc
2024-06-12 02:10:55
or dont update libc outside of major security updates
Demiurge
2024-06-12 02:11:04
They trust GNU to do everything for them and then they regret it over and over again, and have even had to fork glibc several times
Quackdoc
2024-06-12 02:11:13
no they dont
2024-06-12 02:11:29
they explicitly don't trust gnu to do everything for them.
Demiurge
2024-06-12 02:12:17
They do though. That is their whole reason why they don't just include musl into the linux source tree
Quackdoc
2024-06-12 02:12:45
no, they dont
2024-06-12 02:12:53
they actively support many libc impls
jonnyawsom3
2024-06-12 02:12:57
<#806898911091753051> <:Hypers:808826266060193874>
Demiurge
2024-06-12 02:13:02
They would rather someone else take responsibility even if it ends up making Linux suck
Quackdoc
2024-06-12 02:13:15
off the top of my head I actively use bionic, musl, and glibc, which are three extremely separate things
<#806898911091753051> <:Hypers:808826266060193874>
2024-06-12 02:13:22
[av1_dogelol](https://cdn.discordapp.com/emojis/867794291652558888.webp?size=48&quality=lossless&name=av1_dogelol)
spider-mario
Quackdoc if you take a look at what happened with windows, it's a way worse situation, they have to actively go out and emulate previous versions of windows to retain backwards compatibility, the maintenance burden is fairly high for that
2024-06-12 02:15:15
to be fair, older versions of Windows were using an entirely different kernel altogether
2024-06-12 02:15:45
so just keeping a kernel’s interface stable wouldn’t have helped
Quackdoc
2024-06-12 02:16:57
indeed, but even say winxp -> windows vista/7 was a massive shift in kernel so a lot of things broke, a lot broke with the 7->10 too, but windows emulation is genuinely quite good so it's not a massive issue, but it is a whole thing they still need to do.
spider-mario
2024-06-12 02:18:26
from what I recall, most userspace breakage from XP to Vista was because of the emphasis on “not running everything as admin by default”
2024-06-12 02:19:50
“what do you mean I can’t just write the user’s preferences to a file at the root of `C:\`?”
Quackdoc
2024-06-12 02:30:24
well there was that too :D but we can't forget that a large amount of windows' emulation is "transparent" to the user, a good example of this is some of the old directx stuff, it still works on windows, but a lot of those old libraries needed to be updated for newer technology stacks. this is partially the reason why windows is so bloated now, they have to ship all these really old libraries to get backwards compat
2024-06-12 02:32:56
and ofc, this sometimes doesn't work at all, and you need to emulate the full environment which is the "compatibility mode" which... usually works
JendaLinda
2024-06-12 02:55:02
Windows still understands near and far pointers.
2024-06-12 03:15:20
Microsoft only deprecated features that were so buggy and exploitable that fixing them would be too much work.
Naksu
CrushedAsian255
2024-06-12 06:04:12
<:YEP:808828808127971399>
190n
Demiurge Which is a horrible idea, incidentally... the only reason why Linux does this is because they want to avoid making an actual complete OS with a stable high-level interface.
2024-06-12 06:46:43
is a stable C library interface really meaningfully higher level than a stable syscall interface
lonjil
2024-06-12 06:47:11
a bit
2024-06-12 06:47:31
a c library interface can do fun userspace side stuff instead of jumping into the kernel
190n
2024-06-12 06:47:41
i'd be more open to linux having a stable C interface instead of a stable syscall interface if the C library in question were easier to compile backwards-compatible binaries for
2024-06-12 06:48:04
like rn to make broadly compatible binaries for linux you need to either jump through hoops to link against a really old glibc, or make a static binary
lonjil a c library interface can do fun userspace side stuff instead of jumping into the kernel
2024-06-12 06:49:21
true
CrushedAsian255
Naksu <:YEP:808828808127971399>
2024-06-12 09:54:06
?
Naksu
2024-06-12 10:16:30
This music makes me feel nostalgic.
Demiurge
Quackdoc indeed, but even say winxp -> windows vista/7 was a massive shift in kernel so a lot of things broke, a lot broke with the 7->10 too, but windows emulation is genuinely quite good so it's not a massive issue, but it is a whole thing they still need to do.
2024-06-13 03:35:42
You're not supposed to interface directly with the kernel. Everything is done through windows.h and the kernel interface is unofficial and behind the scenes and private
2024-06-13 03:36:50
That's how it is on every single other OS except Linux because Linus literally just wanted to avoid the responsibility of providing a slightly more complete and more functional system
2024-06-13 03:37:11
Having bionic/musl/glibc is not a strength of Linux
2024-06-13 03:37:16
It is a drawback
2024-06-13 03:38:24
There is no advantage in not having a single included libc everyone can expect to be present and take for granted without worrying about what version
2024-06-13 03:39:43
And relying on GNU as the de-facto provider for everything in the early days was simply done because the legality of BSD was in question during a lawsuit, and there was no other alternative available at the time. GNU and Linux were just in the right place at the right time, unfortunately
2024-06-13 03:41:10
That's why we ended up with this awkward poorly thought out patchwork OS gaining lots of popularity in a very short time
2024-06-13 03:42:22
And everyone doing their own weird mutually incompatible thing and not caring about everybody else
Quackdoc
Demiurge Having bionic/musl/glibc is not a strength of Linux
2024-06-13 07:09:26
the issue with this sentiment is the idea that any single "libc" can be good; it can't. each distro that ships its own libc, surprise surprise, winds up having issues. instead linux has excellent support for many different libc implementations, allowing you to use whatever libc may be required
Demiurge
2024-06-13 07:20:56
Literally no other OS does this. It's not because Linux somehow knows better than literally every other OS project. It's because Linus literally didn't feel comfortable with the responsibility of providing the bare minimum userspace libraries for his kernel.
Quackdoc
2024-06-13 07:21:58
big disagree, it saves a lot of headaches, if this really was a massive issue, linux wouldn't dominate in nearly every single market segment
Demiurge
2024-06-13 07:23:06
There's no advantage. The only reason Bionic exists is because Google's corporate policy said to avoid anything GPL and there weren't any non-GPL alternatives that were good enough, so they just forked BSD libc.
2024-06-13 07:23:26
musl exists because glibc is a dumpster fire, and I think it was also partially forked from BSD too.
Quackdoc
2024-06-13 07:24:31
all musl, bionic, and glibc have their own respective issues, and the flexibility of swapping between them is really nice
2024-06-13 07:24:44
also simply not using a libc is really nice too
Demiurge
2024-06-13 07:25:50
glibc was the de-facto standard and Linus just relied on it because it was just there at the time and convenient since it avoids work and responsibility
Meow
2024-06-13 07:26:20
https://youtu.be/8czf2lc1nXI
Quackdoc
2024-06-13 07:28:49
maybe that is the case, but it has reaped many benefits
Demiurge
2024-06-13 08:17:09
It literally hasn't reaped anything except misery. Compatibility problems and political problems because the GNU and kernel guys try to fight each other.
lonjil
2024-06-13 08:17:29
wat
Demiurge
2024-06-13 08:18:09
There was a time when libc actually was included with Linux in the past, I think it was called klibc
2024-06-13 08:18:34
But like I said the only reason this isn't done is because Linus really really wants to avoid it for irrational reasons.
2024-06-13 08:19:01
Essentially afraid of responsibility
Quackdoc
2024-06-13 08:19:06
if you say so lmao
Demiurge
2024-06-13 08:20:42
You know, even if libc is included with Linux, it can still be customized during build time like the rest of Linux? And still replaced completely if desired, since it's open source?
2024-06-13 08:21:49
BSD has everything included but nothing stops people from writing or easily installing more sophisticated replacements for the built in components.
2024-06-13 08:22:20
There is literally, only a huge advantage to be had, in providing a basic bare minimum for people to rely on and take for granted
2024-06-13 08:26:37
Nothing forces you to use 100% of the included features either. You can exclude major parts of the Linux kernel when compiling it if you don't need all its features. And if you don't need all of the tools BSD comes with, you can make a stripped down or customized OS image for your hardware and simply not compile or include things that aren't going to be used or needed.
2024-06-13 08:27:40
People have just gotten accustomed to the mediocrity of Linux that people will make excuses to defend it without considering if it's truly an advantage or a liability.
2024-06-13 08:28:18
When comparing it to everything else, it just seems really half assed
2024-06-13 08:29:12
It's shocking that it got that popular but it was at the right place at the right time when people wanted a free system.
2024-06-13 08:30:37
Linus is also afraid to make any fundamental improvements to OS and kernel design or to learn any of the lessons from the book he read about OS design. And the stable low-level kernel interface probably contributes to the stagnation of Linux.
2024-06-13 08:35:35
Linus's professor essentially said he was pretty disappointed in his student.
2024-06-13 08:36:40
At least in how little he seemed to take from his book.
Quackdoc
2024-06-13 08:39:51
and yet, the OS he develops is still the king
lonjil
Demiurge There was a time when libc actually was included with Linux in the past, I think it was called klibc
2024-06-13 08:40:07
what are you even talking about? klibc's last release was last year.
2024-06-13 08:40:59
Its goal was to be a very tiny and incomplete libc to be used during early boot so that userspace could handle some initialization tasks instead of the kernel. It's still used for that.
Demiurge
2024-06-13 08:42:52
I know nowadays there is a kernelspace libc called klibc, but there used to be a userspace libc bundled with Linux at some point too when they thought they needed to move away from their dependency on glibc because of irreconcilable disagreements with the stubborn project management, which mostly consists of Red Hat employees.
lonjil
2024-06-13 08:43:01
wat
2024-06-13 08:43:09
you don't know what you're talking about
2024-06-13 08:43:24
klibc is a regular library like any other libc
Demiurge
2024-06-13 08:44:45
Maybe it was called something else and I'm not remembering the name correctly
lonjil
2024-06-13 08:46:15
> Linus's professor essentially said he was pretty disappointed in his student.
> At least in how little he seemed to take from his book.

Pretty sure Linus wasn't a student of Andrew Tanenbaum.
Quackdoc
2024-06-13 08:47:19
imagine having the balls to publicly state you are disappointed in the lead maintainer/developer of the most successful kernel on the planet
2024-06-13 08:47:28
regardless of who said it
2024-06-13 08:47:48
it feels like children trying to punch adults lmao
Demiurge
2024-06-13 08:52:01
https://man.archlinux.org/man/libc.7.en
2024-06-13 08:52:58
I don't remember if Linus was in a classroom with him or not, but he definitely used his book...
Quackdoc imagine having the balls to publically state you are dissapointed in the lead maintainer/developer of the most successful kernel on the planet
2024-06-13 08:53:55
He wasn't disappointed with his success, just disappointed with how uninspired the design of Linux is
Quackdoc
2024-06-13 08:54:48
"man, he could have done all this funky dumb stuff and made the kernel worse instead of making sensible design decisions that will eventually turn it into a behemoth powerhouse"
Demiurge
2024-06-13 08:55:17
It wasn't sensible though
2024-06-13 08:55:43
It was a monolithic design
2024-06-13 08:56:15
No separation of code or anything
2024-06-13 08:57:15
It's like someone just did the bare minimum without thinking about how it should be designed to be elegant or secure or modular or easy to maintain and expand, or anything.
2024-06-13 08:57:50
It brought absolutely nothing to the table other than barely being able to run
Quackdoc
2024-06-13 08:57:53
sure, that's why people kept adopting it, because it was a terrible design that they couldn't add anything to
Demiurge
2024-06-13 08:58:23
Because there was nothing else at the time lol.
Quackdoc
2024-06-13 08:58:30
and?
Demiurge
2024-06-13 08:59:03
Every other OS has some sort of purpose in the design
Quackdoc
2024-06-13 08:59:18
if linux was really that bad, people would have invested in an alternative, they didn't because linux was not, and is not that bad
Demiurge
2024-06-13 08:59:18
The kernel brings some sort of innovation or unique feature
2024-06-13 08:59:30
Linux had no features at all other than being able to barely boot
2024-06-13 08:59:40
No self healing anything
Quackdoc
2024-06-13 08:59:44
sounds like major copeage to me
Demiurge
2024-06-13 08:59:44
No separation of anything
2024-06-13 08:59:52
No lessons learned from the past at all
2024-06-13 09:00:17
That's why Linux is still dealing with embarrassing fragility problems whenever a driver crashes or the system runs out of RAM
Quackdoc if linux was really that bad, people would have invested in an alternative, they didn't because linux was not, and is not that bad
2024-06-13 09:01:20
Linux is only chosen if compatibility with existing Linux software ecosystem is a factor driving the decision.
Quackdoc
2024-06-13 09:01:34
[av1_kekw](https://cdn.discordapp.com/emojis/758892021191934033.webp?size=48&quality=lossless&name=av1_kekw)
2024-06-13 09:01:40
funny
Demiurge
2024-06-13 09:02:58
BSD was a very popular system at the time and it was a lot more functional and complete than Linux at the time but there was an ongoing lawsuit and people were not sure if it was legal to use BSD.
2024-06-13 09:03:45
Basically the only reason Linux is successful is because of that lawsuit and Linux and GNU both being available at the right time.
2024-06-13 09:05:08
Even today, if you compare a Linux system to a BSD system, you will see how... strangely sloppy and half-assed Linux seems by comparison.
2024-06-13 09:05:29
Then again MacOS also feels kinda half-ass cobbled together like Linux
2024-06-13 09:05:43
Like a deranged cousin of FreeBSD
2024-06-13 09:05:52
And FreeBSD is already pretty deranged
Quackdoc
2024-06-13 09:06:37
BSDs would be a lot more popular if this was actually the case
Demiurge
2024-06-13 09:06:58
Look, I'm an equal opportunity hater. I just try to keep it real, and if something sucks I admit it with no excuses
2024-06-13 09:07:33
I love using Linux but it's sloppy and half-assed and I wish it wasn't
2024-06-13 09:07:48
I would still much rather use it than Windows
2024-06-13 09:08:50
If people stopped making excuses for genuine deficiencies of Linux and stopped saying "yeah but it's soooo successful so it must be good" then maybe they'd actually improve it so it could actually get more successful.
2024-06-13 09:09:29
It looks pretty stagnant to me, considering how many billions of dollars of man hours are being spent on it and how little actually improves or changes in return
DZgas Ж
2024-06-13 09:09:39
<:monkaMega:809252622900789269> long text
Demiurge
2024-06-13 09:10:03
Sorry 😅
Quackdoc
2024-06-13 09:11:03
I highly disagree that a lot of what is brought up is an actual deficiency. it's not like there are none, there are loads; however, I highly disagree that libc is one of them
Demiurge
2024-06-13 09:11:50
Not just libc: its incompleteness, not bundling anything, and relying on other people to take responsibility because you're afraid of providing basic foundational bedrock
2024-06-13 09:12:26
That's a real problem and I think a lot of people are actually in denial about this. There is a good reason why literally no one else does this outside of Linux
DZgas Ж
Meow https://youtu.be/8czf2lc1nXI
2024-06-13 09:13:11
webp good
Quackdoc
Demiurge That's a real problem and I think a lot of people are actually in denial about this. There is a good reason why literally no one else does this outside of Linux
2024-06-13 09:14:35
it's not denial; linux is designed for flexibility. if linux "did everything the way other operating systems do", it would just be another piece of trash
Demiurge
2024-06-13 09:14:45
That's their cope
Quackdoc
2024-06-13 09:14:50
this "foundational bedrock" that's missing allows for "many foundational bedrocks"
Demiurge
2024-06-13 09:15:03
No it doesn't.
Quackdoc
2024-06-13 09:15:05
it's a core part of the reason *why* linux has been so successful
Demiurge
2024-06-13 09:15:32
Providing basic stuff doesn't prevent people from installing a replacement if and when it's needed.
lonjil
2024-06-13 09:15:38
tfw I don't agree with anyone in this conversation
DZgas Ж
2024-06-13 09:15:41
progressive line-by-line display during loading and the decoding speed make webp the only format that is better than jpeg but not more complicated than jpeg. it seems to me that webp is what jpeg should have been from its very beginning, for the Internet
Quackdoc
Demiurge Providing basic stuff doesn't prevent people from installing a replacement if and when it's needed.
2024-06-13 09:17:30
disagree, it adds an inherent bias to development that doesn't exist in the linux kernel
Demiurge
2024-06-13 09:17:38
Yes it does exist
DZgas Ж
DZgas Ж progressive line-by-line display during loading and the decoding speed make webp the only format that is better than jpeg but not more complicated than jpeg. it seems to me that webp is what jpeg should have been from its very beginning, for the Internet
2024-06-13 09:17:44
while I continue to use jpeg to transfer files in archives, for the reason that yuv444 at q98 is better than webp. but png file sizes are too much, too much
Demiurge
2024-06-13 09:17:45
It already is biased toward glibc
Quackdoc
Demiurge Yes it does exist
2024-06-13 09:17:51
[av1_whatisthis](https://cdn.discordapp.com/emojis/939666933119324222.webp?size=48&quality=lossless&name=av1_whatisthis)
Demiurge
2024-06-13 09:18:20
The linux kernel literally includes glibc-specific documentation
2024-06-13 09:18:48
And there's already a huge existing bias that Linux = glibc
2024-06-13 09:18:56
Just ask Lennart
DZgas Ж while I continue to use jpeg to transfer files in archives, for the reason that yuv444 at q98 is better than webp. but png file sizes are too much, too much
2024-06-13 09:19:46
don't forget JPEG has "progressive refinement decoding"
Quackdoc
Demiurge And there's already a huge existing bias that Linux = glibc
2024-06-13 09:20:13
no one actually has this bias. if you are aware of what glibc actually is, you know musl is also a fairly decent-sized player, and you may even have a good idea that bionic is also massive. if you don't know what glibc is, then, well, you don't know what it is
Demiurge
Quackdoc disagree, it adds an inherent bias to development that doesn't exist in the linux kernel
2024-06-13 09:21:36
the inherent bias already exists, and worse, it exists for the most overcomplicated and highest-resource-requirement libc. It would be much better if the bias was for something much more simple and basic and foundational that was included with Linux, rather than the current state of affairs.
DZgas Ж
Demiurge don't forget JPEG has "progressive refinement decoding"
2024-06-13 09:21:52
this is the most useless advantage in 2024
Demiurge
2024-06-13 09:22:17
It's much better than line-by-line decoding
2024-06-13 09:22:42
It's not useless, it makes images load visibly faster by a factor of ten
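(Editor's aside: the baseline-versus-progressive distinction being discussed is visible in the JPEG bitstream itself: a baseline file carries an SOF0 (0xFFC0) start-of-frame marker, while a progressive one carries SOF2 (0xFFC2). A minimal sketch, not from the chat, that tells the two apart by walking marker segments; `is_progressive_jpeg` is a hypothetical helper name, and standalone markers such as restart markers are ignored for brevity.)

```python
def is_progressive_jpeg(data: bytes) -> bool:
    """Return True if a JPEG byte stream uses progressive DCT (SOF2)."""
    if data[:2] != b"\xff\xd8":  # every JPEG starts with the SOI marker
        raise ValueError("not a JPEG stream")
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            raise ValueError("corrupt marker stream")
        marker = data[i + 1]
        if marker == 0xC0:  # SOF0: baseline sequential DCT
            return False
        if marker == 0xC2:  # SOF2: progressive DCT
            return True
        # Other segments carry a big-endian length that counts the two
        # length bytes themselves; skip past the whole segment.
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + seg_len
    raise ValueError("no start-of-frame marker found")
```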
DZgas Ж
Demiurge don't forget JPEG has "progressive refinement decoding"
2024-06-13 09:23:24
I would always use it, but it slows down the decoding speed extremely. I just can't use it on images like 14k x 28k.
lonjil
2024-06-13 09:23:43
let's move the OS talk <#806898911091753051>
Demiurge
Quackdoc no one actually has this bias. if you are aware of what glibc actually is, you know musl is also a fairly decent-sized player, and you may even have a good idea that bionic is also massive. if you don't know what glibc is, then, well, you don't know what it is
2024-06-13 09:24:16
Tell that to Lennart. He says Linux = glibc and if you use musl then you shouldn't expect to be able to run systemd or any software that depends on it.
Quackdoc
2024-06-13 09:24:29
lennart is an idiot
2024-06-13 09:24:35
and systemd is trash, this is nothing ground breaking
DZgas Ж
DZgas Ж I would always use it, but it slows down the decoding speed extremely. I just can't use it on images like 14k x 28k.
2024-06-13 09:25:33
<:JPEG_XL:805860709039865937> jpeg xl just... it just can't be used at all. I haven't been writing on this server because in the last six months I have never encoded jxl; I've used it only for lossless compression in my personal archives
2024-06-13 09:27:53
the big problem is that AI processing with the CUNet and CUGAN neural networks makes an almost perfect image out of JPEG art without problems
spider-mario
Demiurge It was a monolithic design
2024-06-13 10:29:06
I vaguely recall Linus saying something along the lines that the fallacy of microkernels is the belief that complexity lies in individual modules, when in fact it's between them
2024-06-13 10:29:17
iirc, this is how he justified the monolithic design
2024-06-13 10:29:28
that you end up with less total complexity
lonjil
2024-06-13 10:30:08
He'd be right on fully memory safe hardware
2024-06-13 10:30:41
But as it is right now, the kernel can't be protected from faults in individual parts, so a microkernel would be better.
Demiurge
2024-06-13 10:31:08
microkernels aren't a fallacy; it's the standard these days for all modern software projects. And it's just a basic principle of software design to not write a monolith.
lonjil
2024-06-13 10:31:27
In the future with CHERI we can have fully independent modules sharing an address space allowing for all the advantages of monolithic kernels and microkernels to be mixed.
Demiurge microkernels aren't a fallacy; it's the standard these days for all modern software projects. And it's just a basic principle of software design to not write a monolith.
2024-06-13 10:32:05
Yeah, microkernels have been hugely successful everywhere except big computers.
Demiurge
2024-06-13 10:32:46
Code should be resilient and modular and faults in one area should not cause everything else in other areas to fail
lonjil
2024-06-13 10:32:53
MacOS was based on a microkernel (Mach) a long time ago, but they made it more monolithic. But they've spent the last 10 years making it more microkernel-y again.
Demiurge
2024-06-13 10:33:00
that's not even specific to kernels
lonjil
2024-06-13 10:33:10
Even filesystems are in userspace on iPhone now
Demiurge
2024-06-13 10:33:20
there's nothing radical or crazy about applying such obvious consideration to kernel design
2024-06-13 10:33:34
that's why every single kernel written these days is a microkernel afaik
2024-06-13 10:33:48
cuz it's just obvious to do things that way
2024-06-13 10:34:10
only reason linux doesn't is laziness and people making excuses instead of making it better
lonjil
2024-06-13 10:35:52
Re-architecting Linux would cost in the tens of millions of dollars minimum
2024-06-13 10:36:09
So it not being done isn't laziness
Demiurge
2024-06-13 10:36:19
I kinda doubt it.
2024-06-13 10:36:25
Most of linux is just drivers.
2024-06-13 10:36:55
like probably 90%
lonjil
2024-06-13 10:37:05
Fixing up all the drivers whenever someone makes a small change to the internal interfaces is already a lot of work
Demiurge
2024-06-13 10:37:39
Yeah, too bad there isn't a more abstract, higher level interface drivers can use
lonjil
2024-06-13 10:37:55
The fact that Linux has so many drivers is what would make re-architecting so expensive.
2024-06-13 10:38:28
Easier to just make a new microkernel and run Linux in VMs to use as driver servers.
Demiurge
2024-06-13 10:38:30
That's part of the whole flaw with Linux
2024-06-13 10:39:21
Since Linux never bothered actually designing it well, it increases the maintenance burden. If it was a microkernel with slightly more abstract interfaces, it would reduce the breakage and maintenance burden.
2024-06-13 10:40:40
Linux consumes a huge amount of man hours of labor but it results in very little new features and improvements and it still takes years for major issues to get addressed if ever
2024-06-13 10:41:02
It's almost like it's stagnant by design
2024-06-13 10:42:33
Other OS projects innovate and come up with new ideas and build upon successful existing ideas and Linux just stays stagnant
2024-06-13 10:43:50
And occasionally something cool from another OS gets ported over to Linux. Like pledge or sudo.
2024-06-13 10:46:48
But how long did it take to even get something as basic and essential as arc4random/getentropy?
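(Editor's aside: for a sense of how late those primitives arrived: OpenBSD shipped arc4random in the late 1990s, the Linux kernel only gained the getrandom(2) syscall in 3.17 (2014), and glibc only exposed getentropy()/getrandom() wrappers in 2.25 (2017). A minimal sketch, not from the chat, of reaching the same facility from Python, which wraps getrandom(2) as os.getrandom on Linux; `entropy` is a hypothetical helper name.)

```python
import os

def entropy(n: int) -> bytes:
    """Fetch n bytes from the kernel CSPRNG, preferring getrandom(2)."""
    if hasattr(os, "getrandom"):
        # os.getrandom wraps getrandom(2); with default flags it blocks
        # only until the kernel entropy pool is initialized at boot.
        return os.getrandom(n)
    # Portable fallback for platforms without the syscall wrapper.
    return os.urandom(n)
```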
lonjil
Demiurge Since Linux never bothered actually designing it well, it increases the maintenance burden. If it was a microkernel with slightly more abstract interfaces, it would reduce the breakage and maintenance burden.
2024-06-13 10:48:45
not really relevant to my point. Re-architecting would always require large changes to drivers. Like QNX and Mach are *super* different, even though they're both microkernels.
Demiurge And occasionally something cool from another OS gets ported over to Linux. Like pledge or sudo.
2024-06-13 10:50:25
sudo is an independent software project that is unaffiliated with any specific OS. Pledge for Linux is a userspace library that uses Linux's pre-existing SECCOMP system. io_uring was an innovation in Linux that was later copied by Windows :)
a goat
2024-06-13 10:50:37
Wait isn't sudo mostly off kernel? There are distros that don't have any sudo support
lonjil sudo is an independent software project that is unaffiliated with any specific OS. Pledge for Linux is a userspace library that uses Linux's pre-existing SECCOMP system. io_uring was an innovation in Linux that was later copied by Windows :)
2024-06-13 10:50:55
Oh you beat me to it
lonjil
2024-06-13 10:52:40
also I just saw that the pledge port to Linux was done by Justine Tunney, an actual honest to goodness fascist 🤢
Demiurge
2024-06-13 10:54:57
???
2024-06-13 10:55:33
idk who this person is but I don't see any fascist advocacy...
lonjil
2024-06-13 10:57:08
She hasn't done public advocacy for around 10 years, but she also has never apologized to any of the people she hurt, so I don't reckon her views have changed.
Demiurge
2024-06-13 10:58:27
Fascism is just when government and corporations team up to screw everyone over and everyone is hypnotized into worshiping the contrived authority of these arbitrary and self-serving institutions, so in my opinion the majority of the human race seems to support fascism since everyone keeps saying it's a necessary evil or something
2024-06-13 10:59:09
Or that I supposedly signed a social contract before I was born and had the ability to consent to one
2024-06-13 11:00:03
And that if I ever benefit from it in any way then I'm not allowed to be against people committing crimes in my name and cloaking themselves in the supposed consent of the people
2024-06-13 11:00:39
But that's just my hot take.
2024-06-13 11:13:18
Lonnie, apparently I was thinking of "Linux libc" not klibc. It's also known as libc5. It was a fork of glibc that was used for a long time as the standard libc on Linux
2024-06-13 11:13:38
Aside from that I'm also aware of the eglibc fork
2024-06-13 11:14:37
which was a similar situation where relying on the glibc maintainers was inadequate
spider-mario
2024-06-13 11:28:00
found the quote from Torvalds
2024-06-13 11:28:06
> The theory behind the microkernel is that operating systems are complicated. So you try to get some of the complexity out by modularizing it a lot. The tenet of the microkernel approach is that the kernel, which is the core of the core of the core, should do as little as possible. Its main function is to communicate. All the different things that the computer offers are services that are available through the microkernel communications channels. In the microkernel approach, you’re supposed to split up the problem space so much that none of it is complex.
>
> I thought this was stupid. Yes, it makes every single piece simple. But the interactions make it far more complex than it would be if many of the services were included in the kernel itself, as they are in Linux. Think of your brain. Every single piece is simple, but the interactions between the pieces make for a highly complex system. It’s the whole-is-bigger-than-the-parts problem. If you take a problem and split it in half and say that the halves are half as complicated, you’re ignoring the fact that you have to add in the complication of communication between the two halves. The theory behind the microkernel was that you split the kernel into fifty independent parts, and each of the parts is a fiftieth of the complexity. But then everybody ignores the fact that the communication among the parts is actually more complicated than the original system was—never mind the fact that the parts are still not trivial.
>
> That’s the biggest argument against microkernels. The simplicity you try to reach is a false simplicity.
2024-06-13 11:29:38
(2001)
lonjil
2024-06-13 11:30:04
good example of linus being clueless about things outside his own work
2024-06-13 11:31:29
though to be fair, many of the academics working on microkernels in the 90s weren't exactly competent either