JPEG XL

Info

rules 57
github 35276
reddit 647

JPEG XL

tools 4225
website 1655
adoption 20712
image-compression-forum 0

General chat

welcome 3810
introduce-yourself 291
color 1414
photography 3435
other-codecs 23765
on-topic 24923
off-topic 22701

Voice Channels

General 2147

Archived

bot-spam 4380

off-topic

Tirr
2025-07-05 05:49:56
yeah likely
_wb_
2025-07-05 05:53:28
I guess I will just randomly walk around a bit here. I am on this pedestrian overpass thing which is kind of nice, it has some trees to provide some shade
2025-07-05 05:53:28
I did not bring an umbrella
2025-07-05 05:54:53
Nice that there is free wifi in most places; I do get disconnected quite regularly though
Tirr
2025-07-05 05:58:39
those wifis mostly come from some buildings (and some bus stations I think), so of course they are too weak outdoors, and some APs are not very good in network quality
2025-07-05 06:01:02
but yeah they work most of the time when you're in one of those buildings
_wb_
2025-07-05 06:08:08
2025-07-05 06:08:15
Nice flowers
Quackdoc
2025-07-05 09:38:33
any krita whizzes here that can help me figure out why this crap is happening? I can't share the images, but basically I have a layer with an alpha mask, and a layer group above it with inherit alpha, but when I export I get this. the good one is krita with the alpha hidden, the second one is the exported image with alpha disabled https://cdn.discordapp.com/attachments/673202643916816384/1390986669087526972/image.png?ex=686a40c9&is=6868ef49&hm=5bebd94924738b6708dc75c11824772c438f7840fd13f7a05706e0845a941518& https://cdn.discordapp.com/attachments/673202643916816384/1390986669368414308/image.png?ex=686a40c9&is=6868ef49&hm=e1d4eff971089a700a4ad9d3ada08a4665f1aa4079d461242f2478047913318c&
2025-07-05 09:39:27
the issue only occurs when exporting alpha; if I don't export the alpha, the image exports fine. the same issue occurs when I flatten the image
diskorduser
_wb_
2025-07-05 09:57:39
Looks like lycoris
2025-07-05 03:41:01
https://g.ddinstagram.com/reel/DIt33PcRexH/?igsh=MWgxd280N2c2OWdvcw==
spider-mario
2025-07-05 03:51:32
full video: https://youtu.be/e134NoLyTug
𐑛𐑦𐑕𐑣𐑸𐑥𐑩𐑯𐑦 | 最不調和の伝播者 | 異議の元素
2025-07-06 06:12:17
related to MIDI, so definitely offtopic... https://github.com/ltgcgo/octavia/commit/b8e701614b6536813884fd46e73a3d56a600fe9c
spider-mario
2025-07-07 11:36:48
> I’m not a madman who keeps logs of all of my activities all the time on the off chance I might accidentally encounter vulnerabilities in Belgium. #brandnewsentence
2025-07-07 11:36:54
(from https://www.reddit.com/r/programming/comments/1lthqn2/belgium_is_unsafe_for_cvd_coordinated/ )
_wb_
2025-07-08 05:36:15
Ugh, why am I not surprised...
2025-07-08 05:37:49
Then again,
> CCB refused to answer what might happen if I don’t comply
In my experience with Belgian bureaucracy, this very likely means that the answer is "nothing at all".
2025-07-08 06:50:22
Still, very stupid that they make it so scary and bureaucratic to do CVD. I didn't know it was that bad.
spider-mario
2025-07-08 09:57:22
I’ve received my scanner
2025-07-08 10:39:27
(but not yet the colour profiling target)
diskorduser
spider-mario (but not yet the colour profiling target)
2025-07-09 03:09:27
What is that? Something like Xrite color checker?
Meow
2025-07-09 03:24:18
Rear view (SFW)
spider-mario
2025-07-09 07:43:35
yeah, looks like this for example: http://www.targets.coloraid.de/
diskorduser
spider-mario yeah, looks like this for example: http://www.targets.coloraid.de/
2025-07-09 08:14:46
Oh. Is that printed book color accurate?
Meow Rear view (SFW)
2025-07-09 08:15:07
You should make a 3D model out of this.
Meow
2025-07-09 08:15:56
I don't have the ability to make
spider-mario
diskorduser Oh. Is that printed book color accurate?
2025-07-09 08:24:44
ideally, the target should be printed on the same type of paper as what will be scanned – if not, it can result in slight differences: https://comeonux.com/2021-09-29-it8-considered-harmful/
2025-07-09 08:24:59
but even then, maybe it will be at least better than nothing
AccessViolation_
2025-07-09 06:12:49
are there no color calibration targets that are 'absolute', in the sense that they don't care what they're printed on, that would work?
2025-07-09 06:13:35
like those plastic blocks from pantone, but less expensive :)
Fox Wizard
2025-07-10 03:12:32
<@179701849576833024> there's apparently another one here: https://ptb.discord.com/channels/794206087879852103/822120855449894942/1389875831886843964
veluca
2025-07-10 03:13:21
that was hard to guess from the message 😛
Fox Wizard
2025-07-10 03:14:02
Only knew what it was after opening it. Didn't expect to see porn stuff in a YouTube description although that's getting kinda common now <:KekDog:884736660376535040>
diskorduser
2025-07-11 12:38:35
https://g.ddinstagram.com/reel/DJ5W_xEvN3Q/?igsh=aGp0bmNqbjdvY3hr
2025-07-12 11:42:28
https://g.ddinstagram.com/reel/DFCfBh5RtEs/?igsh=aTA5aXo1Y3dreWY1
damian101
2025-07-14 07:00:24
that's not tricking my brain, it's manipulating my light receptors
la .varik. .VALefor.
2025-07-17 12:31:02
Meow
2025-07-17 03:16:21
https://job-boards.greenhouse.io/xai/jobs/4789505007
2025-07-17 03:17:15
Time to write the best waifu to heil all
diskorduser
la .varik. .VALefor.
2025-07-17 11:07:12
Is this lossless jxl
_wb_
2025-07-17 03:53:07
That xAI logo looks like Xl
Meow
2025-07-17 04:21:11
JxAI
diskorduser
2025-07-17 05:30:05
https://www.threads.com/@letsbeeffinchad/post/DMMxPQOhc0x?xmt=AQF0OtACKBs-wyt2GVidASSTM9-Z0IvGcFKqK8jesITDkw
spider-mario
2025-07-17 07:59:22
not sure if the person who commented “took me a moment to understand 😂” made the joke on purpose
2025-07-17 08:31:48
(https://en.wikipedia.org/wiki/Moment_(mathematics)#Mean )
la .varik. .VALefor.
diskorduser Is this lossless jxl
2025-07-17 10:33:27
.i la .varik. cu morji le du'u me'oi .lossless. VARIK remembers that the thing is lossless.
diskorduser
2025-07-18 05:31:35
https://www.youtube.com/watch?v=NDOrpjHkhWM
diskorduser https://www.youtube.com/watch?v=NDOrpjHkhWM
2025-07-18 05:32:25
he made an svg thumbnail renderer because windows' inbuilt decoder is slow or buggy
2025-07-21 10:27:05
https://apps.apple.com/us/app/my-horse-prince/id1173408461
2025-07-21 10:27:16
Funny app screenshot.
necros
2025-07-21 12:45:40
https://youtu.be/b42QYbkU0fQ
Meow
diskorduser https://apps.apple.com/us/app/my-horse-prince/id1173408461
2025-07-21 01:23:57
Quackdoc
2025-07-21 05:12:10
man, I wanted to run more benchmarks on my compaq but I seem to have bricked it, it won't even POST now
spider-mario
2025-07-21 11:38:10
jonnyawsom3
2025-07-21 11:42:51
Clearly Discord have done a lot of bug testing on this legally required feature in the UK... Right?
2025-07-21 11:44:20
(If a channel is marked as NSFW, or an image is detected as NSFW, AI based age verification is required by law in 4 days)
2025-07-21 11:44:42
Thankfully reloading fixed it, but Jesus...
Demiurge
2025-07-22 02:35:49
Easy solution: make it illegal for minors to use the internet outside of school activities...
2025-07-22 02:36:31
Imagine how beautiful the internet would be if all kids were banned :)
NovaZone
Clearly Discord have done a lot of bug testing on this legally required feature in the UK... Right?
2025-07-22 02:57:19
if it becomes problematic they'll just block uk
jonnyawsom3
2025-07-22 02:58:09
Thanks
NovaZone
2025-07-22 02:59:17
i mean it's the gov's fault, i know of at minimum 20 sites that have already just flat out blocked the uk
CrushedAsian255
2025-07-22 03:20:55
what happened to fab?
Quackdoc
spider-mario
2025-07-22 03:33:39
reminds me of this
Demiurge Easy solution: make it illegal for minors to use the internet outside of school activities...
2025-07-22 03:34:50
but then the CoD lobbies would just be a bunch of grown-ass men being downers all the time, and that's no fun
Demiurge
2025-07-22 03:55:25
That was my plan all along
spider-mario
2025-07-22 03:58:08
lmao what even is this thing
2025-07-22 03:58:13
I love sassy AI
Quackdoc
2025-07-22 03:53:16
<@853026420792360980> do you know if we can now make PRs to code.ffmpeg.org yet or do we still have to wait?
Traneptora
Quackdoc <@853026420792360980> do you know if we can now make PRs to code.ffmpeg.org yet or do we still have to wait?
2025-07-22 03:54:16
You should be able to
Quackdoc
2025-07-22 03:54:49
https://tenor.com/view/gif-gif-8593213374087197675
Tirr
2025-07-22 05:50:25
oh we can finally submit a patch to ffmpeg without going through mailing list?
Quackdoc
2025-07-22 07:26:00
yeah, so happy. I don't like mailing lists at all; I haven't actually used them in years
2025-07-23 03:32:52
servo can now "play" piped videos. it's currently broken as it constantly seeks back to itself, but it does "work", so that's neat
A homosapien
2025-07-26 05:36:56
I didn't expect to see <@532010383041363969> on my Instagram feed today being crushed by an anvil. 😂 https://g.ddinstagram.com/reel/DLjdthIxKNA/
_wb_
2025-07-26 05:57:58
So aggressive... And I don't think it's Jyrki who is to blame for the concept of WebP, he just made a great lossless mode for it.
2025-07-26 05:59:40
I think it's kind of the responsibility of the chrome folks who were behind webp that it has such poor interoperability: they always considered it as an image format for the Web and didn't care much about adoption outside browsers.
2025-07-26 06:01:58
(not to mention intentionally limiting it to 8-bit, 4:2:0, 16k — limitations that are/were considered OK for the Web, but not for some other applications)
username
2025-07-27 02:35:02
most of the limitations of WebP are directly inherited from VP8. It's a shame they didn't make a superset of VP8; from what I understand they just used unmodified VP8. I'm assuming the intention was to have interop with both software and hardware implementations of VP8, but afaik that never actually ended up being useful in practice, especially nowadays when VP8 isn't used anymore except for stuff like fallback video encoding
2025-07-27 02:37:24
wow I do not know how to use grammar
Demiurge
2025-07-27 06:10:28
So in other words it's the fault of the tech lead, Jim Bankowski. The same guy who told us jxl has "not enough interest in the wider community."
2025-07-27 06:11:29
Jyrki just contributed a really good lossless codec to the project that was run and managed by Jim
2025-07-27 06:12:28
It's unfair for Jyrki to get the hate instead of the Tech Lead who's actually responsible for it being so terrible.
2025-07-27 06:13:07
And somehow landed in a position where he can force the world to use his terrible codecs instead of better alternatives.
2025-07-27 06:13:39
The whole world. Held back by this 1 man's ego
jonnyawsom3
2025-07-27 06:26:12
I'd assume Jyrki gets the brunt of it because he's the only one publicly talking about it, all the other developers are hidden away on LinkedIn or research papers
Demiurge
2025-07-27 06:48:29
Well he is a brilliant researcher excited about his work, who was not in charge of the project, which was run by the On2 guys led by Jim
2025-07-27 06:55:49
https://en.m.wikipedia.org/wiki/On2_Technologies
lf
2025-07-27 07:10:09
stupid question but are there image and/or video compression algorithms optimized for lossy channels? So with increasing packet drop the image quality drops, but you still get the image. And if so, what are the words I need to search for?
_wb_
2025-07-27 07:45:42
For images, progressive decoding kind of gives you that in a way, though you'd need to combine it with adding more redundancy/error-correction to the initial part of the bitstream
lf
2025-07-27 08:13:00
ya, probably the fastest way to get something. i am on a small microcontroller but have a jpeg encoder, so i was thinking of converting the jpeg to a progressive jpeg and then doing forward error correction with more redundancy for the beginning.
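lf's plan above (a progressive JPEG whose initial bytes get extra forward error correction) can be sketched with a simple repetition code standing in for real FEC; a production system would use something like Reed–Solomon, and `head_len`/`head_copies` are made-up parameters:

```python
# Hypothetical sketch: protect the start of a progressive JPEG more heavily
# than the rest. A naive repetition code stands in for real FEC here;
# `head_len` and `head_copies` are illustrative, not from any real tool.

def encode_with_fec(data: bytes, head_len: int = 1024,
                    head_copies: int = 3) -> bytes:
    """Repeat the first `head_len` bytes `head_copies` times so the
    DC/first-scan part survives more corruption than the refinement scans."""
    head, tail = data[:head_len], data[head_len:]
    return head * head_copies + tail

def decode_with_fec(stream: bytes, head_len: int = 1024,
                    head_copies: int = 3) -> bytes:
    """Majority-vote each byte of the repeated header, then append the tail."""
    copies = [stream[i * head_len:(i + 1) * head_len]
              for i in range(head_copies)]
    head = bytes(max(set(col), key=col.count) for col in zip(*copies))
    return head + stream[head_len * head_copies:]
```

A single corrupted byte in one header copy is then outvoted by the other two copies on decode.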
jonnyawsom3
2025-07-27 08:25:08
There's a pending PR to make JXL more error resilient in the decoder. So as long as you have the first pass, dropped areas would just be lower quality
lf
2025-07-27 08:30:04
mmh interesting
2025-07-27 08:36:40
i think i found it. https://github.com/libjxl/libjxl/pull/4062. but i think i have some ideas i can try
lonjil
2025-07-27 11:27:58
I have ascended
```ansi
% ip -c -br addr show dev eth0
eth0    UP    fdc4:5aab:abba:2:3b38:71ff:2415:7569/64 2001:xxxx:xxxx:xx00:2b7d:140c:bed4:3101/64 fe80::ea52:47a1:5e4b:7f15/64
% ping -c 1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=112 time=4.82 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 4.819/4.819/4.819/0.000 ms
```
veluca
2025-07-28 02:35:38
home:
```
luca@zrh ~ $ ping -c 1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=115 time=1.11 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.110/1.110/1.110/0.000 ms
```
where I am now:
```
luca@xps ~ $ ping -c 1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=111 time=321 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 320.601/320.601/320.601/0.000 ms
```
jonnyawsom3
2025-07-28 02:37:30
7ms for me
diskorduser
lonjil I have ascended ```ansi % ip -c -br addr show dev eth0 eth0 UP fdc4:5aab:abba:2:3b38:71ff:2415:7569/64 2001:xxxx:xxxx:xx00:2b7d:140c:bed4:3101/64 fe80::ea52:47a1:5e4b:7f15/64 % ping -c 1 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_seq=1 ttl=112 time=4.82 ms --- 8.8.8.8 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 4.819/4.819/4.819/0.000 ms ```
2025-07-28 10:49:40
???
2025-07-28 10:49:47
lonjil
diskorduser ???
2025-07-28 10:50:18
pinging an IPv4 host even though I only have IPv6 on my LAN 😄
diskorduser
2025-07-28 10:52:25
hmm. i don't have ipv6 from my isp however my phone network provider provides public ipv6
lonjil
2025-07-28 10:52:54
I only recently got IPv6 when I switched to a new (cheaper) ISP
diskorduser
lonjil I only recently got IPv6 when I switched to a new (cheaper) ISP
2025-07-28 10:54:38
how much do you pay and speed?
lonjil
2025-07-28 10:55:13
$30 for 100/100
diskorduser
2025-07-28 10:55:59
100 Mbps?
2025-07-28 10:58:26
here 9 usd for 100Mbps
jonnyawsom3
2025-07-28 11:07:21
Here it's £34 for 70/20, I'm only getting 50/20 though
Fox Wizard
2025-07-28 11:20:17
Funny how you can basically get synchronous gigabit for that here <:KekDog:884736660376535040>
DZgas Ж
DZgas Ж
2025-07-28 12:26:43
fv2
I’ve significantly enhanced the fv program—for example, so that it can analyze a 2.5 GB file in just a few seconds (requiring 20 GB of RAM) by using four threads simultaneously. I’ve also introduced several new features.
• -p, --pearson – Use Pearson Hash instead of the original hash functions.
• -b, --bits – Choose hash table size in bits (16, 20, 24, 28, or 32; default: 24).
• -s, --skip – For 1/2-byte passes, process every 4th block and multiply strength by 4.
• -ss, --super-skip – For all passes, process every 8th block and multiply strength by 8.
• -F, --full_analys – Enable full windowed analysis (1, 2, 4, 8 bytes). Overrides both --skip and --super-skip.
Additionally, if you press CTRL+C, the program will terminate and save whatever it has processed so far.
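For reference, the `-p, --pearson` option refers to Pearson hashing: an 8-bit hash driven by a 256-entry permutation table. A minimal sketch (the permutation here is an arbitrary fixed shuffle; fv2's actual table is not known to me):

```python
# Pearson hash sketch. The table must be a permutation of 0..255;
# this one is an arbitrary seeded shuffle, not fv2's actual table.
import random

_table = list(range(256))
random.Random(42).shuffle(_table)

def pearson_hash(data: bytes) -> int:
    """Fold each byte into an 8-bit state via table lookups."""
    h = 0
    for b in data:
        h = _table[h ^ b]
    return h
```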
2025-07-28 12:35:47
cudnn_cnn_infer64_8.dll (444.91 MB)
2025-07-28 12:50:38
2025-07-28 01:20:15
diskorduser
Here it's £34 for 70/20, I'm only getting 50/20 though
2025-07-28 03:06:26
Really? Very costly...
2025-07-28 04:53:46
https://www.youtube.com/watch?v=15_-hgsX2V0
veluca
veluca home: ``` luca@zrh ~ $ ping -c 1 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_seq=1 ttl=115 time=1.11 ms --- 8.8.8.8 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 1.110/1.110/1.110/0.000 ms ``` where I am now: ``` luca@xps ~ $ ping -c 1 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_seq=1 ttl=111 time=321 ms --- 8.8.8.8 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 320.601/320.601/320.601/0.000 ms ```
2025-07-29 10:45:09
ah yes
```
64 bytes from 8.8.8.8: icmp_seq=1022 ttl=115 time=38215 ms
```
diskorduser
veluca ah yes ``` 64 bytes from 8.8.8.8: icmp_seq=1022 ttl=115 time=38215 ms ```
2025-07-29 11:41:43
How is this even possible
2025-07-29 11:42:16
Is it like 38 seconds?
veluca
2025-07-29 11:44:57
yup
lonjil
veluca ah yes ``` 64 bytes from 8.8.8.8: icmp_seq=1022 ttl=115 time=38215 ms ```
2025-07-29 08:25:23
Lmao
veluca
2025-07-30 04:57:41
to be fair, I am in Bolivia
Tirr
2025-07-30 05:01:45
what's surprising to me is that the packet isn't lost, it's just taking a very long time
spider-mario
2025-07-30 09:17:12
`icmp_seq=1022` suggests there may have been many lost ones prior
Kupitman
lonjil $30 for 100/100
2025-07-31 03:33:12
Wtf
diskorduser here 9 usd for 100Mbps
2025-07-31 03:33:34
Here 9 usd for 1000 mbps
diskorduser
2025-07-31 03:35:44
My phone plan costs 3usd per month with unlimited 5g data. It gives around 1000-1500mbps speed.
lonjil
Kupitman Here 9 usd for 1000 mbps
2025-07-31 07:58:09
where u at?
Kupitman
lonjil where u at?
2025-07-31 08:54:09
europe
lonjil
2025-07-31 08:54:34
I am also in Europe 😔
veluca
2025-07-31 09:34:45
in Zurich I get 25gbit for 777CHF/year 😄
2025-07-31 09:59:48
and the network is good enough that I have super low latency (~1ms to 8.8.8.8) and in the last 22 hours I averaged to ~240req/s and nobody complained
Kupitman
veluca in Zurich I get 25gbit for 777CHF/year 😄
2025-07-31 10:16:16
Only in Zurich.
lonjil I am also in Europe 😔
2025-07-31 10:17:21
East Europe
lonjil
2025-07-31 10:18:16
Ah, that's the Sweden Difference (tm)
Kupitman
2025-07-31 10:18:37
?
lonjil
2025-07-31 10:18:41
high prices
Kupitman
2025-07-31 10:18:56
🤨
DZgas Ж
2025-08-01 07:48:50
I finally found the best existing neural network for removing jpeg artifacts -- SCUNet. This is especially funny, because for art and anime I have been using CUnet for more than 4 years. But now I finally have a neural network for photos too, and SCUNet works as magically well as CUnet does for paintings
2025-08-01 07:51:10
I used jpeg2png (silky smooth JPEG decoding) when I needed to remove particularly strong jpeg artifacts without damaging the image too much, but now it's going to the trash
2025-08-01 07:58:49
I mean, wow, no over-smoothing, no misfires, no fine detail removal, oh my. bad news: the neural network, although only 17M parameters (68 mb), is still too large to be used when simply viewing any image in realtime (processing takes 5-10 sec). Maybe on a high-end RTX decoding could take less than a tenth of a second, so that it would be possible to view all images in the browser through this neural network. But no one has made such technology. And it is clear that it would be some kind of super-level hack for Jpeg
2025-08-01 07:59:35
but for post processing, or to post a meme in better quality than you found it on the Internet, this is definitely the best that exists
Quackdoc
2025-08-01 09:02:21
what email clients do you fellas use? looking for something that is a bit better then just using gmail directly, hoping to avoid webapps unless they have a dedicated webpage you can go to instead of needing to download a client
DZgas Ж
2025-08-02 05:38:07
I thought email died 20 years ago... I never thought I'd see a question like this ever.
username
2025-08-02 05:57:57
2025-08-02 05:57:58
<@226977230121598977> have you tried this? it solves some of the overblurring/over-smoothing
Meow
2025-08-02 06:08:02
I heard many kids using emails for chatting
Quackdoc
DZgas Ж I thought email died 20 years ago... I never thought I'd see a question like this ever.
2025-08-02 06:08:19
nah, emails are still king
DZgas Ж
username <@226977230121598977> have you tried this? it solves some of the overblurring/over-smoothing
2025-08-02 08:26:50
Well, it looks good; I would even say that if there were no neural network, I would use it. But the neural network is simply better; it is more consistent with the original, without deleting what does not need to be deleted
2025-08-02 08:27:33
well, and since it is a neural network, it just draws in what is missing: lines, noise on clothes
2025-08-02 08:31:14
2025-08-02 08:33:22
as a result, the parts re-generated by the GAN neural network method look more like the original than the finishing touches of the jpeg2png method. In this case the advantage is only in speed, and not even a significant one: jpeg2png is also quite slow, a couple of seconds per picture, while the neural network takes 10 seconds. The difference is not even 100 times
Meow I heard many kids using emails for chatting
2025-08-02 09:02:48
why email, if there is discord, telegram, and dozens of other messaging services? moreover, some people do not even need personal communication; public communication in comments on youtube, reddit, twitter is enough. I have never heard of anyone using email for communication. Why? Only for work, corporate life.
Quackdoc
2025-08-02 09:05:30
it's reliable, and quite frankly using all of these other services is often overkill and pointless. If it's critical, I call; if it's important, I text; if it can be done at any time, I email. Those are the only things I need. Me and most of my friends and my pops still use email as the primary method of communication
2025-08-02 09:06:03
and ofc you have stuff like deltachat which works over email too
DZgas Ж
2025-08-02 09:07:06
Sounds like backward technology
2025-08-02 09:07:56
Like matrix. For no one
Quackdoc
2025-08-02 09:09:09
email has been around for ages, it's just that nothing has replaced it
2025-08-02 09:09:26
for it to be backwards, we would have needed to stop using it
DZgas Ж
Quackdoc email has been around for ages, it's just that nothing has replaced it
2025-08-02 09:10:22
For work for serious people with serious work and a face like stone
Quackdoc
2025-08-02 09:11:25
and for personal communications, real-time messengers didn't take off until well after email was already established
2025-08-02 09:12:40
it also helps that literally no service has withstood the test of time besides email and irc, and irc is dookie
DZgas Ж
Quackdoc and personal communications, real time messengers didn't take off until well after email was already established
2025-08-02 09:13:07
Well I can't understand it because, well, you can just write me a message on discord or telegram. everything is open. but it just so happened that no one needs me because my face is not like stone
2025-08-02 09:13:58
If I had my own big life-long project, my own website, I would probably be able to have email for work.
Quackdoc
2025-08-02 09:14:01
sure, but that would require me to get my friends and family onto discord or telegram or facebook, when they already have email and text. and it would offer literally nothing to them
DZgas Ж
2025-08-02 09:15:15
Well, I just come to my family when I want to see them, and they call when they want to hear from me. Nothing else is needed
Quackdoc
2025-08-02 09:16:44
my sisters are a 14hr trip away, so that's not time- or cost-effective for idle, non-important communication [dogelol](https://cdn.discordapp.com/emojis/867794291652558888.webp?size=48&name=dogelol) calling is nice when it works, otherwise text and email are nice
DZgas Ж
2025-08-02 09:19:15
that's just an excuse for not seeing each other in real life. No, I'm not like that
Meow
DZgas Ж why email if there is discord, telegram, and dozens of other messaging services. moreover, some do not even need personal communication, but public communication in comments, youtube, reddit, twitter is enough. I have never heard of anyone using email for communication. Why? Only for work, corporate life.
2025-08-02 09:25:10
At school probably
DZgas Ж
Meow At school probably
2025-08-02 09:34:58
... we have a separate messenger at school
Meow
2025-08-02 11:04:01
Good for your school
CrushedAsian255
DZgas Ж I finally found the best existing neural network for removing jpeg artifacts -- SCUNet This is especially funny, because for art and anime I have been using CUnet for more than 4 years. But now, finally, I have a neural network for photos, and SCUNet works as magically well as CUnet for paintings
2025-08-02 11:30:06
can you send me the model file?
DZgas Ж
CrushedAsian255 can you send me the model file?
2025-08-02 03:36:08
uuh scunet_color_real_gan.pth
CrushedAsian255 can you send me the model file?
2025-08-02 05:59:47
I agree that there is some problem with launching the neural network, so I wrote the code to use it in the ".py IN OUT" style
2025-08-02 06:00:47
oh well, the original code didn't support unicode. Is this a fork? I don't know, I don't care. Any simple piece of code 50 kilobytes long can be written by a neural network in 5 minutes and it works; I won't upload it to github
Kupitman
DZgas Ж ... we have a separate messenger at school
2025-08-02 11:53:09
Max?😈
DZgas Ж
Kupitman Max?😈
2025-08-03 07:46:50
Not now
Meow
2025-08-04 02:38:57
New partially generated artwork
diskorduser
2025-08-04 04:59:20
What does partially mean? Background image drawn by a human?
Meow
2025-08-04 05:27:21
I fixed and added some details
Oleksii Matiash
2025-08-04 07:22:52
I have some experimental data that, AFAIK, can more or less be expressed as a quadratic equation. Does anybody know if it is possible to "guess" the equation from the values?
_wb_
2025-08-04 07:49:20
generally you'd do curve fitting, minimizing the squared error; iirc there's even some built-in stuff in spreadsheet charts to show such fitted functions
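A minimal sketch of the least-squares fit _wb_ describes, using NumPy's `polyfit`; the data points here are invented for illustration:

```python
# Fit a quadratic y ≈ a*x^2 + b*x + c by least squares.
# The sample data is synthetic (generated from known coefficients),
# so the fit should recover a=2, b=-3, c=1.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x**2 - 3.0 * x + 1.0   # pretend these came from an experiment

a, b, c = np.polyfit(x, y, deg=2)  # coefficients, highest degree first
```

Spreadsheets expose the same thing as a "trendline" on a chart, or via functions like LINEST.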
Oleksii Matiash
_wb_ generally you'd do curve fitting, minimizing the squared error; iirc there's even some built-in stuff in spreadsheet charts to show such fitted functions
2025-08-04 08:07:26
It worked, thank you!
2025-08-04 08:08:15
2025-08-04 08:34:45
But it seems to lie about the equation it used. With such a small coefficient on x^2 it is almost a linear equation, and also the coefficient on x makes the equation "negative"
spider-mario
2025-08-04 09:03:40
it’s not that small in context (multiplying ~2300 squared)
2025-08-04 09:04:04
in terms of orders of magnitude, it’s a ten-thousandth times a million
2025-08-04 09:04:25
so the net contribution is in the hundreds
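spider-mario's order-of-magnitude arithmetic can be checked directly; the numbers are illustrative stand-ins for the actual fit:

```python
# A coefficient that looks tiny still contributes hundreds once x is squared.
a = 1e-4           # "small" x^2 coefficient (a ten-thousandth)
x = 2300.0         # x squared is ~5.3 million
contribution = a * x**2   # 1e-4 * 5.29e6 = 529.0, i.e. hundreds
```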
Oleksii Matiash
2025-08-04 09:10:14
Asking LINEST in Sheets gave completely different coefficients, and they are much closer to reality
2025-08-04 09:11:28
However, it seems that either the experiment was not precise enough, or its results can't be expressed as a quadratic equation. The error is too high
spider-mario
2025-08-04 07:24:41
https://x.com/chipro/status/1952131790061326593
Meow
2025-08-05 01:46:52
emdashGPT
CrushedAsian255
Meow emdashGPT
2025-08-05 01:54:37
Chat—GPT
Meow
2025-08-05 01:55:17
JPEG—XL🤮
Kupitman
Meow JPEG—XL🤮
2025-08-05 07:18:47
Wtf
spider-mario
2025-08-05 10:21:51
that sure took some effort
Quackdoc
2025-08-07 02:14:10
https://gist.github.com/Quackdoc/cb0709589874405a7dd27765813eb3c2 I made a script to record and transcribe with whisper and copy to clipboard and man I needed this [cheems](https://cdn.discordapp.com/emojis/720670067091570719.webp?size=48&name=cheems)
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
2025-08-07 07:04:18
Cacodemon345
2025-08-08 06:37:26
The puns are insane.
spider-mario
2025-08-08 09:15:48
size of Switzerland relative to the Moon
2025-08-08 09:16:15
and here are the US
juliobbv
2025-08-08 10:36:05
how about Texas? I've been told Texas is bigger than Europe, the Moon, the U.S, and even Texas
Meow
spider-mario and here are the US
2025-08-09 10:51:59
Where's Alaska
spider-mario
2025-08-09 10:54:25
missing from Wolfram’s default polygon for the US
2025-08-09 10:55:01
you need to specify “FullPolygon” to have it, but then it fails to put the US on the moon (maybe it overflows)
jonnyawsom3
spider-mario size of Switzerland relative to the Moon
2025-08-09 10:57:24
Reminds me of the hypothetical discussion we had about storing earth in a single JXL file. In the end, we figured out it would require 2 to be in spec, but the moon would be easy
DZgas Ж
2025-08-12 11:33:27
Small, practical experiments for generating a minecraft art palette. Based on iterative blue noise with the butteraugli metric. Probably gives the best possible dithering.
2025-08-13 01:34:54
blue noise can also be generated from the input image, though according to the butteraugli metric it comes out about 5% worse; I will experiment with this some more
2025-08-13 10:48:50
yes, generating different sigma from the image + enumerating the dithering strength according to butteraugli gave the best possible result
2025-08-13 12:12:38
The best possible dithering that can be implemented for the human eye: Blue Noise using the sophisticated Void-and-Cluster generation algorithm (https://blog.demofox.org/2019/06/25/generating-blue-noise-textures-with-void-and-cluster/), which creates a noise texture that is perfect for an image, since the Blue Noise texture generation is based on the input image. Only in 2021 was this algorithm implemented on the GPU (https://tellusim.com/improved-blue-noise/), and it can now be used to dither any image and palette. Usually the texture is generated as-is, just a regular png used for everything, but now it can be generated specifically for the image, making it locally perfect.

My original idea was that 2D Minecraft maps only have 61 possible colors with a standard flat layout, but people choose complex 3D map art with over-blocks because it gives more palette and quality. It annoys me; always looking at these giga-schizo-arts, I couldn't help but think that these guys are doing some kind of bullshit, and that the chill placement of regular 2D art should be better than it is now... Well, I did it. Now there is code that creates art based on the most complex Blue Noise calculated from the input art, selecting the dithering strength based on the psychovisual metric butteraugli (https://github.com/google/butteraugli), directly into a palette of 61 colors that can be uploaded "as is" to the map-art generation site https://rebane2001.com/mapartcraft/

The only difficulty is that the image is created based on the entire palette of blocks; in cases where you need a limited number, for example 10-20 blocks, the palette must be set manually, because simply removing blocks from the finished image would be worse. Everything must be generated from scratch; all the tools for this are made.

Important: Void-and-Cluster is a super-complex algorithm. Noise generation for 128x128 art can take a couple of seconds, for 256x256 minutes, for 1024x1024 hours.
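The *application* step DZgas describes (dither an image against a noise texture with an adjustable strength, then snap to the nearest palette entry) can be sketched as below. Plain white noise stands in for a real void-and-cluster blue-noise texture to keep the example self-contained, and all names and parameters are made up for illustration:

```python
# Noise-texture dithering sketch: perturb each pixel by centered noise,
# then quantize to the nearest palette level.
import numpy as np

def dither_to_palette(img, noise, palette, strength=0.5):
    """img: HxW floats in [0,1]; noise: HxW in [0,1); palette: 1D levels."""
    perturbed = img + strength * (noise - 0.5)      # shift by centered noise
    palette = np.asarray(palette)
    # index of the nearest palette entry per pixel
    idx = np.abs(perturbed[..., None] - palette).argmin(axis=-1)
    return palette[idx]

rng = np.random.default_rng(0)
img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)   # smooth test gradient
noise = rng.random((64, 64))                           # stand-in for blue noise
out = dither_to_palette(img, noise, [0.0, 0.25, 0.5, 0.75, 1.0])
```

With a true blue-noise texture the perturbation has no low-frequency clumps, which is what makes the quantization error visually unobtrusive.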
2025-08-13 12:12:56
— Original art
— 2d art without dithering
— 2d with Floyd–Steinberg
— my blue noise 2d art
— 3d art without dithering
— 3d art with Floyd–Steinberg
2025-08-13 12:15:14
good question: why has no one used blue noise to create gifs? ....If you have an **rtx 5090** you can now create a **perfect gif** with the default 256 color palette without changes
2025-08-13 12:18:07
It's even funny that it takes half an hour
jonnyawsom3
2025-08-13 12:19:17
I tried to find blue noise GIF dithering in the past, but couldn't find anything, not even a paper or mention of it. I did find this though, supposedly higher quality than existing methods and also runs on the GPU https://arxiv.org/abs/2206.07798
DZgas Ж
I tried to find blue noise GIF dithering in the past, but couldn't find anything, not even a paper or mention of it. I did find this though, supposedly higher quality than existing methods and also runs on the GPU https://arxiv.org/abs/2206.07798
2025-08-13 12:21:36
it says it's better than BNOT, but it's not void-and-cluster... <:Thonk:805904896879493180>
2025-08-13 12:25:38
wow, i definitely saw this 2 years ago when i was studying dither superficially, hmm
jonnyawsom3
DZgas Ж it says it's better than BNOT, but it's not void-and-cluster... <:Thonk:805904896879493180>
2025-08-13 12:28:17
> The idea of using a Gaussian kernel for blue noise optimization dates back to Ulichney’s void-and-cluster algorithm [1993], and was re-introduced at least three times thereafter by Hanson [2003; 2005], Öztireli et al. [2010], and Fattal [2011], with a different motivation each time
It seems to be built upon void-and-cluster, and previous attempts to surpass it
2025-08-13 12:30:38
This was greyscale, some kind of blue noise dithering, and the Gaussian one I linked
2025-08-13 12:31:36
I think I needed to use more points, and maybe longer compute time, but my GPU is quite weak so I didn't try higher settings
DZgas Ж
> The idea of using a Gaussian kernel for blue noise optimization dates back to Ulichney’s void-and-cluster algorithm [1993], and was re-introduced at least three times thereafter by Hanson [2003; 2005], Öztireli et al. [2010], and Fattal [2011], with a different motivation each time It seems to be built upon void-and-cluster, and previous attempts to surpass it
2025-08-13 12:32:03
the void-and-cluster problem is obvious, I already wrote about it: at the moment of its release in 1993, and 10 years later, and even in 2013, 20 years after release, it was an extremely expensive function. Textures were created once and used for many years. Probably in 1993 such a texture was made on a supercomputer. In 2025 it is still slow, but acceptable enough that generating one for a 256x256 map-art takes only 1 minute...
2025-08-13 12:34:01
I will note that the dither overlap strength and its brightness components are chosen via the butteraugli metric, using a ternary search
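The strength-selection step can be sketched as a ternary search over a unimodal score. The quadratic stand-in below replaces the real butteraugli distance (which would require encoding the dithered image and calling the metric); the bounds and the minimum location are made up for the demo.

```python
def ternary_search_min(f, lo, hi, iters=100):
    """Minimize a unimodal function f on [lo, hi] by ternary search:
    repeatedly discard the third of the interval that cannot contain
    the minimum."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# Stand-in for "butteraugli distance of dithered image vs original" as a
# function of dither strength; assumed unimodal with a minimum near 0.37.
score = lambda s: (s - 0.37) ** 2 + 1.0
best = ternary_search_min(score, 0.0, 1.0)
print(round(best, 3))  # 0.37
```

Ternary search only needs the perceptual score to be unimodal in the strength parameter, which is why it fits this tuning problem: each probe is one full dither-plus-metric evaluation, and the search keeps those evaluations to a minimum.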
jonnyawsom3
2025-08-13 12:34:07
Here are some binaries Homosapien compiled for me. Includes both GPU and CPU versions, with readme files explaining how to use them
DZgas Ж
Here are some binaries Homosapien compiled for me. Includes both GPU and CPU versions, with readme files explaining how to use them
2025-08-13 12:43:35
Have you figured out how to use this?
2025-08-13 12:44:40
<:Thonk:805904896879493180>
jonnyawsom3
DZgas Ж Have you figured out how to use this?
2025-08-13 12:44:57
It takes ASCII PGM files with the header comment removed as input
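A plain-text (P2) PGM is simple enough to emit by hand; a minimal writer that produces no `#` comment lines (the thing the binaries choke on) might look like this. Decoding the source PNG into a flat grayscale pixel list is left to e.g. Pillow and is not shown; the function name is made up.

```python
def ascii_pgm(w, h, pixels, maxval=255):
    """Serialize grayscale pixels (row-major list of ints) as a plain-text
    P2 PGM. Deliberately emits no '#' comment lines, since some readers
    try to parse comment text as pixel data."""
    rows = [" ".join(str(v) for v in pixels[r * w:(r + 1) * w]) for r in range(h)]
    return "\n".join(["P2", f"{w} {h}", str(maxval)] + rows) + "\n"

print(ascii_pgm(2, 2, [0, 128, 255, 64]))
```

The header is just the magic number `P2`, the dimensions, and the max value, followed by whitespace-separated sample values.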
DZgas Ж
2025-08-13 12:45:18
""ASCII"" PGM ?
2025-08-13 12:45:24
jonnyawsom3
2025-08-13 12:45:52
DZgas Ж
2025-08-13 12:48:22
can you just do this for me? I have no idea where to do it, and why they didn't add PNG support on input when it's available on output
jonnyawsom3
2025-08-13 12:49:30
Try this
DZgas Ж
2025-08-13 12:49:31
ffmpeg, in general, doesn't know what this pgm output should be. Usually for such specific formats I use xnview, but there is no PGM output setting
2025-08-13 12:52:04
2025-08-13 01:11:04
`gbn-cpu-avx2.exe source.pgm 10000 1000 output_points.txt output_image.png` `py round_coords.py`
2025-08-13 01:11:44
<@238552565619359744> given how it works, I think this software nevertheless has a rather different purpose
2025-08-13 01:12:41
although for 1 bit art it will probably be better
DZgas Ж The best possible dithering that can be implemented for the human eye, Blue Noise using the sophisticated Void-and-Cluster generation algorithm…
2025-08-13 01:14:50
than
2025-08-13 01:20:36
yes, as I thought
I tried to find blue noise GIF dithering in the past, but couldn't find anything, not even a paper or mention of it. I did find this though, supposedly higher quality than existing methods and also runs on the GPU https://arxiv.org/abs/2206.07798
2025-08-13 01:21:10
this algorithm does not support palettes, only 1-bit black and white
DZgas Ж ffmpeg in general does not know what pgm out is. Usually for such specific formats I use xnview but there is no output pgm setting
2025-08-13 01:28:24
ok
2025-08-13 01:33:22
the GPU version, by the way, doesn't work, and on the CPU it also takes quite a long time, but for one-off images it's fine
2025-08-13 02:01:40
2025-08-13 03:15:22
some Python software and a short instruction. No ready build, too lazy. It's unlikely anyone will make 1-bit Minecraft art out of Netherrack
2025-08-14 10:07:05
2025-08-14 11:03:54
works, looks like a force-directed graph <:Thonk:805904896879493180>
2025-08-14 03:19:29
<@238552565619359744> pog https://abdallagafar.com/ It turns out they recently released a CPU-only version that could be even faster
2025-08-14 03:20:12
2025-08-14 03:35:33
wow, there are only 3 code files... I think I can compile it myself
jonnyawsom3
DZgas Ж <@238552565619359744> pog https://abdallagafar.com/ It turns out they recently released a CPU-only version that could be even faster
2025-08-14 04:24:33
Yeah, I think I recall seeing it, but there was no code available at the time and I can't compile myself. Glad to see you found it
DZgas Ж
2025-08-14 04:50:27
well, it's almost done; the original code was for Linux, but I got it working
2025-08-14 05:12:11
the program works, I'm just a bit bogged down in the maximum possible compile-time optimization available in 2025, very interesting
2025-08-14 06:21:55
holy
2025-08-14 06:22:47
"-fprofile-generate" optimization works
jonnyawsom3
DZgas Ж holy
2025-08-14 06:24:12
Did they allow anything other than custom PGM files?
DZgas Ж
2025-08-14 06:25:16
no
2025-08-14 06:25:18
hm
2025-08-14 06:25:34
why not add this <:galaxybrain:821831336372338729>
2025-08-14 06:26:02
I've already done some optimizations,
2025-08-14 06:26:43
it's amazing that when working with images like 4000x3000, saving intermediate PNG frames takes 80% of the time
2025-08-14 06:27:13
just lol
2025-08-14 06:48:00
PNG input added
DZgas Ж holy
2025-08-14 06:51:33
ok, the initial optimization frame was slightly (a lot) exaggerated, because the initial points are generated randomly. Probably that's it... I don't know. The real optimization curve looks more realistic, here it is:
2025-08-14 06:52:59
the funnier thing is that by default **Float double** is enabled (lol why) instead of just float
DZgas Ж
2025-08-14 06:53:55
this frame shows the speed of float and Float double in seconds
2025-08-14 06:54:42
oh these scientists
2025-08-14 07:12:28
`stipple.exe in.png 5000 300 output_points.txt output_image.png`
```
SSE2    -march=x86-64          ~2003  11.615000s
SSE3    -march=core2           ~2006  12.151000s
AVX     -march=sandybridge     ~2011  11.296000s
AVX2    -march=haswell         ~2013   8.515000s
AVX512  -march=skylake-avx512   2022   8.769000s
NATIVE  -march=native                  7.529000s
-fprofile-use                          7.335000s
```
2025-08-14 07:18:36
great job <:BlobYay:806132268186861619>
2025-08-14 07:30:11
bruh
2025-08-14 07:30:54
<:SadCheems:890866831047417898> two years ago
2025-08-14 07:33:20
well, in general, the code really did start working unimaginably faster, especially with a large number of points; it only thinks for a long time on the first iterations
Kupitman
2025-08-14 07:35:04
😬
jonnyawsom3
DZgas Ж stipple.exe in.png 5000 300 output_points.txt output_image.png `SSE2 -march=x86-64 ~2003 11.615000s SSE3 -march=core2 ~2006 12.151000s AVX -march=sandybridge ~2011 11.296000s AVX2 -march=haswell ~2013 8.515000s AVX512 -march=skylake-avx512 2022 8.769000s NATIVE -march=native 7.529000s -fprofile-use 7.335000s`
2025-08-14 07:41:42
Could you upload the AVX2 binary for me to try? Also, an annoying issue was the PGM loader wouldn't ignore comments/headers in the file, trying to read `#Made by Irfanview` as pixel data and crashing. Not sure if that's fixed
DZgas Ж
Could you upload the AVX2 binary for me to try? Also, an annoying issue was the PGM loader wouldn't ignore comments/headers in the file, trying to read `#Made by Irfanview` as pixel data and crashing. Not sure if that's fixed
2025-08-14 07:49:40
yes, I literally just built and packaged everything a minute ago, you can try it. I don't know if it works or if I linked the LIB correctly <:FeelsReadingMan:808827102278451241>
Could you upload the AVX2 binary for me to try? Also, an annoying issue was the PGM loader wouldn't ignore comments/headers in the file, trying to read `#Made by Irfanview` as pixel data and crashing. Not sure if that's fixed
2025-08-14 07:52:49
PGM shows fine in xnview. But to convert from PNG to ASCII PGM I wrote my own Python code, it's in there. Now it's no longer needed, though
2025-08-14 07:54:25
just added libpng, vibecoding go brrrr
2025-08-14 08:01:18
<@238552565619359744> Can you tell me if it even started up for you?
jonnyawsom3
DZgas Ж <@238552565619359744>Can you tell me if it even started up for you?
2025-08-14 08:49:29
The output PNG is just black, but it generated the txt file correctly
DZgas Ж
The output PNG is just black, but it generated the txt file correctly
2025-08-14 08:52:33
output has always been like this. It uses the alpha channel to draw, like in the original code; add a white background to the image
jonnyawsom3
2025-08-14 08:52:39
Oh wait, it has a transparent background. It was black on black. Yeah, sorry
DZgas Ж well, in general, the code really did start working unimaginably faster, especially on a large number of points, thinking for a long time only on the first iterations
2025-08-14 09:24:54
Would be nice if it output BW instead of anti-aliased circles. Not really dithering if you're using smooth gradients for it
2025-08-14 09:27:28
> unknown option -- P
> -P parallelMode: defer update of points like in GPU; default is serial: update each point immediately

Thank you, code
DZgas Ж outout has always been like this. it uses alpha channel to draw like in original code add white background to image
2025-08-14 09:28:27
Also using `-o` gives it a background
2025-08-14 09:28:49
> -o Make png plots opaque; default is transparent
DZgas Ж
Would be nice if it output BW instead of anti-aliased circles. Not really dithering if you're using smooth gradients for it
2025-08-14 10:49:34
This is absolutely impossible because of the way the algorithm works. It's also impossible to know how many points would be needed to fill a matrix of numbers (a bitmap). I wrote some Python code that rounds the point coordinates onto pixels of the image; conceptually it makes the result worse
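The coordinate-rounding step mentioned here is conceptually just a snap-to-grid; a sketch (the actual `round_coords.py` is not shown, and the [0,1)² coordinate convention is an assumption):

```python
def points_to_bitmap(points, w, h):
    """Round continuous stipple coordinates (x, y) in [0, 1]^2 onto a
    w x h 1-bit grid. Two points landing in the same cell are merged,
    which is one reason this conversion loses quality."""
    grid = [[0] * w for _ in range(h)]
    for x, y in points:
        px = min(w - 1, max(0, round(x * (w - 1))))
        py = min(h - 1, max(0, round(y * (h - 1))))
        grid[py][px] = 1
    return grid

grid = points_to_bitmap([(0.0, 0.0), (0.99, 0.99), (1.0, 0.5)], 4, 4)
```

The merge-on-collision behavior is also why the point count can't be chosen to exactly fill a target bitmap: after rounding, the number of lit pixels is at most, not exactly, the number of points.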
> unknown option -- P > > -P parallelMode: defer update of points like in GPU; default is serial: update each point immediately Thank you code
2025-08-14 10:49:48
yep https://discord.com/channels/794206087879852103/806898911091753051/1405634631935459522
jonnyawsom3
2025-08-14 10:52:46
Ah, right
DZgas Ж
2025-08-14 10:53:32
☺️
2025-08-14 10:53:57
just blur
A homosapien
DZgas Ж yes, I literally just collected everything and formatted it a minute ago, you can try it, I don’t know if it works and if I collected the LIB correctly <:FeelsReadingMan:808827102278451241>
2025-08-15 07:39:26
Wow, these binaries are quite fast! What compile flags did you use? I'm just using these on msys2 `g++ -O2 -fprofile-use -march=native agbn-serial.cpp -o a_pgo.exe -lcairo`
2025-08-15 07:51:00
Wait a minute, are you using `Ofast`? I'm getting similar speeds with that flag.
DZgas Ж
A homosapien Wow, these binaries are quite fast! What compile flags did you use? I'm just using these on msys2 `g++ -O2 -fprofile-use -march=native agbn-serial.cpp -o a_pgo.exe -lcairo`
2025-08-15 07:52:54
`g++ agbn-serial.cpp -o stipple_instrumented.exe -Ofast -std=c++17 -march=native -fprofile-generate -flto -l cairo -lpng`
`stipple_instrumented.exe in.png 5000 300 output_points.txt output_image.png`
`g++ agbn-serial.cpp -o stipple_instrumented.exe -Ofast -std=c++17 -march=native -fprofile-use -flto -l cairo -lpng`
2025-08-15 07:56:02
if someone cuts out the heavy **cairo** drawing dependency and writes their own final drawing function, they can build a single .exe file; otherwise you need a mountain of libraries just for cairo
2025-08-15 12:26:45
Well, since the dithering didn't work out, I'll post this as well: I collected statistics over 100 thousand iterations. Whether it was worth it, I don't know; it's clear that overall it keeps improving. At the end it was still moving a few circles; that doesn't mean the whole picture is spoiled, this is the sum of all errors, it just didn't have time to move them
jonnyawsom3
DZgas Ж Well, since dithering didn't work out, I'll print this out at the same time, I collected statistics for 100 thousand iterations, whether it was worth it, I don’t know, it’s clear that in general it’s getting better. At the end, he started moving several circles, this doesn’t mean that the whole picture is spoiled, this is the sum of all the errors, he just didn’t have time to move them
2025-08-15 12:30:03
`-A` doubled the compute time, but also gave better results for me in the same number of iterations. Wonder if there's a crossover point between default, `-A` and a setting of `-T` that's optimal
DZgas Ж
`-A` doubled the compute time, but also gave better results for me in the same number of iterations. Wonder if there's a crossover point between default, `-A` and a setting of `-T` that's optimal
2025-08-15 12:32:46
I also don't know how much float64 affects the quality, I just disabled it at the beginning of the source code
jonnyawsom3
2025-08-15 12:51:44
Likely has a larger impact the more steps you use since the distance becomes exponentially smaller
DZgas Ж
2025-08-15 01:01:11
Now I'll try float64 and then we'll see
2025-08-15 01:01:30
-A interesting parameter
2025-08-15 01:02:45
∫exp(-x²)dx <:Thonk:805904896879493180>
2025-08-15 01:22:43
add float64 binary
2025-08-15 01:30:38
2025-08-15 01:32:19
float64 -A works for a very long time now... but I'll wait <:Stonks:806137886726553651>
2025-08-15 03:10:06
well... I did 40 000 iterations of float64 -A (light green), and maybe it all doesn't make sense, because if I do more iterations of float32 without -A (dark green) the result will be better, and it will finish faster <@238552565619359744> for 5000 dots
jonnyawsom3
DZgas Ж well... I did 40 000 iterations of float64 -A (light green) and maybe it all doesn't make sense, because if I do more iterations for float32 whitout -A (dark green) the result will be better, and it will be done faster <@238552565619359744> for 5000 dots
2025-08-15 03:24:03
Downsampling the input and output images by 10x allowed for ssimulacra2 quality comparisons. To me `-A` visually looked sharper and more accurate to the original image, bearing in mind the metrics of the tool are simply how big of a change the last step was, not necessarily the quality compared to the original (Hence the sudden bumps even at very high steps)
DZgas Ж
Downsampling the input and output images by 10x allowed for ssimulacra2 quality comparisons. To me `-A` visually looked sharper and more accurate to the original image, bearing in mind the metrics of the tool are simply how big of a change the last step was, not necessarily the quality compared to the original (Hence the sudden bumps even at very high steps)
2025-08-15 03:26:28
the document states that full convergence occurs at around 1 000 000 iterations. In my tests I noticed that the ideal structure begins to emerge after 10 000. How many iterations did you use?
jonnyawsom3
DZgas Ж the document states that full convergence occurs where there are 1 000 000 iterations. In my tests I noticed that the ideal structure begins to emerge after 10 000. At what number of iterations did you do ?
2025-08-15 03:30:17
I stayed at 1 000 iterations but played with the `-T` threshold, ordering and `-A` to see if there was a better tradeoff. My image was 1080p with 20K points, though I also tried 100K but it was far too slow
DZgas Ж
I stayed at 1 000 iterations but played with the `-T` threshold, ordering and `-A` to see if there was a better tradeoff. My image was 1080p with 20K points, though I also tried 100K but it was far too slow
2025-08-15 03:32:24
1000 is extremely small. I also started with 1000, but it really is small! The local structure does not emerge with that number; -A at 1000 iterations would be better. But you're making a mistake in the tests: you're not weighing the time per iteration against the final result
jonnyawsom3
DZgas Ж well... I did 40 000 iterations of float64 -A (light green) and maybe it all doesn't make sense, because if I do more iterations for float32 whitout -A (dark green) the result will be better, and it will be done faster <@238552565619359744> for 5000 dots
2025-08-15 03:33:10
Could try float64 vs float32 without `-A` for less iterations just to see if they match. May be no point in float64 with us staying around 10K instead of 1M iterations
DZgas Ж 1000 is extremely small. I also started with 1000, but it is really small! the local structure does not emerge with such a number. -A 1000 iterations would be better. But you make a mistake in the tests, you do not compare the amount of time per iteration with the final result
2025-08-15 03:33:45
The CPU time was roughly doubled with `-A`, but it was late so I stopped testing for the night before I could try double the iterations without `-A`
DZgas Ж
2025-08-15 03:35:02
float64 is too slow, 2 times slower for obvious reasons. I also don't see the point in it; I thought it would give a noticeable quality increase over a very large number of iterations, but so far I literally don't see it, the gradient descent has just become smoother
2025-08-15 03:36:15
I could do 2x more iterations in the same amount of time, getting more quality, I don't see any advantages of either -A or float64
The CPU time was roughly doubled with `-A`, but it was late so I stopped testing for the night before I could try double the iterations without `-A`
2025-08-15 03:38:33
500 000 dots 1000 iterations
2025-08-15 03:39:31
unfortunately it's still not enough for a laser printer, so I gave up on it
2025-08-15 03:40:45
Big, large balls look much more impressive on paper, so I stopped at 5000 dots
2025-08-15 04:23:07
```
Itrtn 1499: stepMax = 5.919030e-02, #clamped = 0, gradLenAvg/Max = 1.058788e-02/1.871156e-01
done! Total time = 24.925000s
Itrtn  999: stepMax = 5.926432e-02, #clamped = 0, gradLenAvg/Max = 1.626474e-02/1.798219e-01
done! Total time = 25.306000s
```
I was able to increase the speed by reducing the neighbor capture radius. This caps the achievable quality much earlier, but at short range it gives better quality in the same time
2025-08-15 04:23:23
```c
#define Float float
#define SUPPORT 4
```
2025-08-15 05:08:30
1e-3 at 20k iterations is completely identical to the normal mode. Improvements stay identical up to about 30k iterations, and then that's it; for the ultrafast mode this is the limit, which in general should be enough for 99% of cases. The speed is 2 times higher
2025-08-15 05:14:06
<@238552565619359744> you should try the ultrafast
2025-08-15 05:27:24
at very fast iterations the bottleneck is the terminal output, so I removed printing of values that don't improve the best value, getting a 20-30% speedup at 1000 dots
2025-08-15 05:29:15
I only realized this because the processor is loaded, but the fan does not hum. And now it hums <:Yes:1368822664382119966>
2025-08-15 05:35:53
Ultrafast is not suitable for precise positioning of a very small number of dots (<1000): fewer neighbors in the distribution, worse positioning
2025-08-15 06:05:44
-A should be used with a small number of dots (like 500)
2025-08-15 06:13:04
float64 should be used with a dot count around 100
A homosapien
2025-08-15 09:19:50
Okay so I found an older compile that uses the GPU, it comes with a binary that reconstructs the image from a txt file of points. (4000 points, 4000 iterations) | Original | Points | Manual Gaussian Blur | Reconstruction |
2025-08-15 09:20:38
Keeping the tradition of using unserious images for blue noise dithering
2025-08-15 09:31:44
Also it seems like the GPU version is much faster than the CPU version. Using the image above with 4000 points and 4000 iterations I get these times.
```
AVX2_float64 --- 109.410000s
AVX2 ------------ 22.586000s
AVX2_ultrafast -- 11.323000s
GPU -------------- 6.275000s
```
2025-08-15 09:32:23
Seems like the article's conclusion is somewhat false
jonnyawsom3
2025-08-15 09:41:09
Probably depends on the CPU and the GPU. IIRC they were using a pretty weak GPU
A homosapien
2025-08-15 09:53:14
They were using a MX250 which only has 384 CUDA cores. On par with a GTX 1030 or 650 💀 . Weak is an understatement.
jonnyawsom3
2025-08-15 10:06:15
With my crippled GPU, I'm happy with the new ultrafast build
DZgas Ж
A homosapien Also it seems like the GPU version is much faster than the CPU version. Using the image above with 4000 points and 4000 interations I get these times.``` AVX2_float64 -- 109.410000s AVX2 ----------- 22.586000s AVX2_ultrafast - 11.323000s GPU ------------- 6.275000s ```
2025-08-15 11:20:27
can you tell me where exactly you got the source code and executables for the GPU version?
A homosapien
2025-08-15 11:20:41
Here https://github.com/Atrix256/GaussianBlueNoise
2025-08-15 11:21:06
I'm manually compiling a faster version with fast-math and more optimizations
DZgas Ж
A homosapien Here https://github.com/Atrix256/GaussianBlueNoise
2025-08-15 11:23:24
hmm, yeah, on my AMD RDNA2 it just won't run
2025-08-15 11:24:23
<:Thonk:805904896879493180> In any case, I have an APU so I can get by with just one processor
A homosapien
A homosapien Also it seems like the GPU version is much faster than the CPU version. Using the image above with 4000 points and 4000 interations I get these times.``` AVX2_float64 -- 109.410000s AVX2 ----------- 22.586000s AVX2_ultrafast - 11.323000s GPU ------------- 6.275000s ```
2025-08-15 11:49:33
Okay I got the GPU times down to around 5.75 secs, and 4 secs with fast-math.
DZgas Ж
2025-08-16 11:06:24
|| I remember with the first image neural networks there was a funny spaghetti porn. can do the same with these balls || <:megapog:816773962884972565>
spider-mario
2025-08-16 11:11:44
2025-08-16 02:52:57
https://www.instagram.com/p/DM8ga-6Pn8H/
Quackdoc
spider-mario
2025-08-16 03:44:52
actually a decent idea, I've accidentally triggered vim modes a few times and took a few minutes before I understood what was going on
DZgas Ж
2025-08-16 04:50:53
<:megapog:816773962884972565>
2025-08-16 07:25:35
<@207980494892040194><@238552565619359744>Implemented multithreading
2025-08-16 07:28:28
I spent the day trying to optimize our distance calculation using AVX512, but it was unsuccessful. The main bottleneck is the exp function; its AVX512 implementation is tied to a specific Intel compiler library and proved inefficient. AVX2, on the other hand, has no native exp instruction. A simple vectorization of the components actually made the code 10% slower. So, I switched to multithreading instead. The distance calculations now run in parallel across multiple threads, with a final update step synchronized in a single thread at the end of each iteration. As expected, this approach is much faster.
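The structure described (parallel distance/force passes, one synchronized update per iteration) can be sketched roughly like this. The Gaussian force form and `sigma` are illustrative assumptions, not the actual agbn-serial.cpp code, and Python threads won't actually speed up a pure-Python loop because of the GIL; the chunk-then-join shape is what the sketch shows.

```python
from concurrent.futures import ThreadPoolExecutor
from math import exp

def forces_chunk(points, lo, hi, sigma=0.05):
    """Gaussian repulsion acting on points[lo:hi] from all other points
    (an O(n^2) slice of the full pairwise pass)."""
    out = []
    for i in range(lo, hi):
        xi, yi = points[i]
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(points):
            if j == i:
                continue
            dx, dy = xi - xj, yi - yj
            w = exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
            fx += w * dx
            fy += w * dy
        out.append((fx, fy))
    return out

def forces_parallel(points, workers=4):
    """Compute forces chunk-by-chunk in workers, then join. Positions are
    only updated after all chunks finish, i.e. the "deferred" update that
    the -P flag describes for the GPU path."""
    n = len(points)
    step = (n + workers - 1) // workers
    bounds = [(k, min(k + step, n)) for k in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = list(ex.map(lambda b: forces_chunk(points, *b), bounds))
    return [f for part in parts for f in part]
```

Because every chunk reads the same frozen point list and writes only its own output slice, the parallel result is bit-identical to the serial deferred pass, which is the property that makes the final single-threaded update step safe.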
veluca
2025-08-16 07:30:47
use this? https://github.com/libjxl/libjxl/blob/main/lib/jxl/base/fast_math-inl.h#L72
DZgas Ж
veluca use this? https://github.com/libjxl/libjxl/blob/main/lib/jxl/base/fast_math-inl.h#L72
2025-08-16 07:31:57
hmmm, this looks like what Grok 4 suggested to me. That seemed to have no effect, but I'll try this specific function <:Thonk:805904896879493180>
veluca
2025-08-16 07:32:20
that's `2^x` instead of `exp(x)` but it's easy to adapt
jonnyawsom3
DZgas Ж <@207980494892040194><@238552565619359744>Implemented multithreading
2025-08-16 07:33:10
Oh, you removed `-o`
A homosapien
DZgas Ж <@207980494892040194><@238552565619359744>Implemented multithreading
2025-08-16 07:39:25
Have you considered hosting this code somewhere like GitHub?
2025-08-16 07:40:00
This is a major improvement.
jonnyawsom3
2025-08-16 07:41:33
It does give different results though, tried 5K points multithreaded vs 1K single and they seemed about on-par for me
DZgas Ж
Oh, you removed `-o`
2025-08-16 07:45:24
alas
A homosapien Have you considered hosting this code somewhere like GitHub?
2025-08-16 07:47:53
Well, I don't know, this is vibecoding bullshit. I make it for myself, since programming doesn't come naturally to me. And I also cut out 80% of the functions to shrink the code for debugging, leaving only the algorithm itself
2025-08-16 07:49:26
not serious work. In the original code the author himself wrote that he would implement multithreading, in code that is already 2 years old, so let him do it himself
2025-08-16 07:53:19
I would say that this is completely my fork
A homosapien
2025-08-16 08:05:11
So you can post it on GitHub as a fork
DZgas Ж
Oh, you removed `-o`
2025-08-16 08:07:41
yes I agree that this transparency is not needed at all, I will make it opaque by default, the setting is left in the source code
A homosapien So you can post it on GitHub as a fork
2025-08-16 08:11:29
yeah, I'm not really interested, I don't even have a GitHub account, but you're interested, so you can post it yourself if it seems to you that golden things are disappearing into this chat 🍿 <:SadCheems:890866831047417898> I just make cool things in a couple of hours with a neural network, which anyone can do. I took it up only because the code is simple and small, without big dependencies. I like simple things like that
A homosapien
2025-08-16 08:11:47
Hmm, I wonder if I can make a static build.
DZgas Ж
A homosapien Hmm, I wonder if I can make a static build.
2025-08-16 08:12:48
for some reason this is not possible because of cairo
2025-08-16 08:13:29
the problem that bothers me more is why I can't create one .exe for all architectures in a single file; I must be missing something...
2025-08-16 08:17:19
I really liked the images that were printed on a laser printer at 200-5000 circle dots, this is some kind of new kind of art, it looks fascinating
2025-08-16 08:19:15
by the way -fprofile-generate breaks with -fopenmp so for compilation I use `g++ agbn-serial.cpp -o stipple_optimized.exe -Ofast -flto -std=c++17 -march=native -fopenmp -l cairo -lpng`
A homosapien
DZgas Ж for some reason this is not possible because of cairo
2025-08-16 08:22:49
Msys2 doesn't provide static Cairo libraries. Meaning I have to compile Cairo with static libs, then link those to the program. It should be possible in theory.
DZgas Ж the problem that bothers me more is why I can't create .exe for all architectures in one file, I don't know something...
2025-08-16 08:25:14
Dav1d does something like that. I think it's called runtime feature detection.
DZgas Ж
Oh, you removed `-o`
2025-08-16 08:25:32
now not transparent by default
A homosapien Dav1d does something like that. I think it's called runtime feature detection.
2025-08-16 08:27:32
Well, it's all "you just have to know". Not my way
2025-08-16 08:28:38
nothing can be done with just one flag
A homosapien
2025-08-16 08:29:31
I guess separate binaries are the way forward
DZgas Ж
2025-08-16 08:31:29
Well, the program is x64, so I don't see any reason not to just support plain x64. *Wow, a program built for x64 really does work on any processor that says "x64". The magic of 2025.*
2025-08-16 08:35:16
I find it amusing how, due to purely architectural features of the processor, the sse2 program runs 5% faster than sse3 on zen 4
veluca use this? https://github.com/libjxl/libjxl/blob/main/lib/jxl/base/fast_math-inl.h#L72
2025-08-16 09:22:35
Great! The code is 22% faster; for float32 the precision limit dropped from 5.676798e-04 to only 5.747478e-04. An almost insignificant error.
2025-08-16 09:26:42
for the -A key, the precision dropped from 5.617765e-04 to 5.625675e-04
2025-08-16 09:27:19
overall great, for float32 I will use this option by default, for float64 the original code will remain
veluca
2025-08-16 09:32:43
you can make it precise enough for f64 by using a bigger rational polynomial and fiddling with the bit representation some more, fwiw
DZgas Ж
veluca you can make it precise enough for f64 by using a bigger rational polynomial and fiddling with the bit representation some more, fwiw
2025-08-16 09:38:52
I use the float64 version to calculate a very small number of points very accurately, I'm happy with the extra precision in this case
2025-08-16 09:44:59
new fast math. I also added these compilation flags: `-fno-semantic-interposition -fno-exceptions -fno-rtti`, maybe +0.1% speed
2025-08-16 09:48:13
so far I've done everything I wanted, I have no more ideas
_Broken s̸y̴m̴m̵̿e̴͌͆t̸r̵̉̿y̴͆͠
2025-08-16 10:10:06
I wonder, and I bet it exists, but... lossy text compression...
jonnyawsom3
2025-08-17 01:36:14
Good explanation of floating point errors and how the IEEE standard came to be https://youtu.be/y-NOz94ZEOA
DZgas Ж
2025-08-18 01:36:20
Quackdoc
2025-08-18 05:11:47
https://www.reddit.com/r/RISCV/comments/1msm0mv/comment/n99s4eu/?utm_name=web3xcss interesting to see basically autovectorization in riscv is looking rather promising
DZgas Ж
2025-08-18 07:05:21
I didn't know it was possible to create such a realistic lens blur (DoF) algorithm for a flat image + generated depth map <:Thonk:805904896879493180>
2025-08-18 07:05:51
as if I actually have a real 3D model
spider-mario
2025-08-18 07:50:45
in all honesty, I don’t really find it realistic
DZgas Ж
spider-mario in all honesty, I don’t really find it realistic
2025-08-18 10:12:47
Well, apparently... nothing better has been invented?
2025-08-19 03:19:44
It's funny that any company that created and trained its own LLM without adding support for the gguf (llama.cpp) format is automatically doomed to die in oblivion, because that is the main way to use LLMs at all. A monopoly of universality. <:SadCheems:890866831047417898> The same can be said about any open source without compiled binaries
2025-08-19 09:20:13
https://archive.org/details/gacha-life-archive-by-dzgas does anyone happen to know people who work at archive.org? 2 years after uploading, my great archive still won't open on the site; literally, the site can't open a page with 108 thousand links. I already wrote to their support, and they replied "oh well, upload fewer files". WELL, THANK YOU. There were days when the page did load, a couple of times in a couple of years; I managed to save the page and extract all the links separately, which can be used to embed videos or search by title.
prick
2025-08-19 11:57:57
<@179701849576833024> (Just pinging who probably has admin permissions) Unsolicited friend requests from here
jonnyawsom3
2025-08-20 12:26:44
There is an Admin role, though usually wb sorts it out. Unfortunately it's 1am and most are European, so it'll have to be sorted in the morning
Meow
prick <@179701849576833024> (Just pinging who probably has admin permissions) Unsolicited friend requests from here
2025-08-20 02:44:45
A request from a user with an avatar of cleavage and a seductive bio is suspicious
DZgas Ж
2025-08-23 01:35:42
I had a very interesting desire to explore this: if an image is compressed automatically, for example in Telegram, it comes out at some size N, but if I add noise to the image, it comes out larger than N, since the encoder on the server has to deal with a larger amount of information. This raises a logical question: how can I change the image, by sharpening or adding noise, while changing it as little as possible, so that the server encoder's compression yields the maximum quality <:Thonk:805904896879493180>
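The effect described above (noise raises the compressed size because it adds entropy) can be sketched with a minimal, self-contained experiment. Note this is an illustration, not Telegram's actual pipeline: `zlib` is used as a stand-in for the server's image encoder, and the "image" is just a synthetic 256×256 grayscale gradient.

```python
import random
import zlib

random.seed(0)  # make the noise reproducible

# A smooth "image": a 256x256 horizontal gradient, one byte per pixel.
smooth = bytes(x for _ in range(256) for x in range(256))

# The same image with small random noise (±4) added to each pixel, clamped to [0, 255].
noisy = bytes(min(255, max(0, b + random.randint(-4, 4))) for b in smooth)

# Compress both with the same settings and compare sizes.
smooth_size = len(zlib.compress(smooth, 9))
noisy_size = len(zlib.compress(noisy, 9))

print(f"smooth: {smooth_size} bytes, noisy: {noisy_size} bytes")
assert noisy_size > smooth_size  # noise adds entropy the encoder must spend bits on
```

Even a few levels of per-pixel noise inflates the compressed size by orders of magnitude here, which is the same mechanism that makes a noisy upload come out larger from a lossy server-side encoder.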
Exorcist
DZgas Ж I had a very interesting desire to explore this: if an image is compressed automatically, for example in Telegram, it comes out at some size N, but if I add noise to the image, it comes out larger than N, since the encoder on the server has to deal with a larger amount of information. This raises a logical question: how can I change the image, by sharpening or adding noise, while changing it as little as possible, so that the server encoder's compression yields the maximum quality <:Thonk:805904896879493180>
2025-08-23 01:56:53
upscale
Mr. Spock
2025-08-23 02:08:54
Unfortunately my account was hacked yesterday! I apologize for any harassment.
DZgas Ж
Exorcist upscale
2025-08-23 02:34:40
the image is downscaled on the server to a maximum of 1280 px on the longer side
Kupitman
DZgas Ж the image is downscaled on the server to a maximum of 1280 px on the longer side
2025-08-23 06:33:15
2560x
2025-08-23 06:33:35
"Experimental" settings
jonnyawsom3
2025-08-23 06:37:54
The server still downscales for mobile
Kupitman
2025-08-23 06:50:32
My mistake, it's already 2560 everywhere
2025-08-23 06:50:41
There's an SD/HD option, I forget
jonnyawsom3
2025-08-23 06:51:27
Mobile HD is 2048, so desktop 2560 is downscaled on the server
DZgas Ж
Kupitman 2560x
2025-08-23 07:22:13
not interested.
2025-08-23 07:22:40
I use 1280 as a matter of principle
Kupitman
DZgas Ж I use 1280 as a matter of principle
2025-08-23 07:49:49
tg supports 2560.
2025-08-23 07:50:03
I didn't ask what you're using
jonnyawsom3 Mobile HD is 2048, so desktop 2560 is downscaled on the server
2025-08-23 07:50:58
oh fuckkk
2025-08-23 07:51:39
https://tenor.com/view/durov-pavel-durov-sitting-gif-13185823906289160734
jonnyawsom3
2025-08-23 07:52:48
I don't know why it's different between them... But it is
DZgas Ж
DZgas Ж I had a very interesting desire to explore this: if an image is compressed automatically, for example in Telegram, it comes out at some size N, but if I add noise to the image, it comes out larger than N, since the encoder on the server has to deal with a larger amount of information. This raises a logical question: how can I change the image, by sharpening or adding noise, while changing it as little as possible, so that the server encoder's compression yields the maximum quality <:Thonk:805904896879493180>
2025-08-23 08:10:39
as an interesting solution for art, there are neural networks that produce sharp pixel lines, which gives the encoder less opportunity to compress
Kupitman tg supports 2560.
2025-08-23 08:14:46
I am not interested in this solution
spider-mario
2025-08-23 10:46:57
yikes… different times
2025-08-23 10:47:39
likewise (different story)
jonnyawsom3
2025-08-23 10:48:40
Probably the most realistic humans in a Daffy Duck comic, aside from the mouths
2025-08-23 10:49:03
(I've never seen any)
spider-mario
2025-08-23 10:49:39
I’d say the “blacks” from the first image are good candidates too
2025-08-24 01:22:20
“you can buy me, too” 😬
Quackdoc
2025-08-25 05:42:02
https://developer.android.com/developer-verification
2025-08-25 05:42:32
this is fucked up, even if you sideload you can only sideload apps from "verified developers"
spider-mario
2025-08-25 10:24:06
> For student and hobbyist developers
>
> We're committed to keeping Android an open platform for you to learn, experiment, and build for fun. We recognize that your needs are different from commercial developers, so we're working on a separate type of Android Developer Console account for you.
2025-08-25 10:24:29
let’s wait and see how that looks
2025-08-25 10:25:01
(I know nothing more about this than what I just learned by reading the link)
Quackdoc
2025-08-26 12:50:05
I can only assume this is in response to what happened with the Epic stuff, but even needing it sucks. I get having a warning, I do, but this is just nuts
2025-08-26 12:50:51
even a hobbyist ADC being needed is not great
DZgas Ж
2025-08-26 06:05:34
nah, I'm using Android 8 <:Yes:1368822664382119966>
Quackdoc https://developer.android.com/developer-verification
2025-08-26 06:18:46
finally, there will be a reason to switch from American spyware to Chinese spyware (HarmonyOS), which will allow running programs that are illegal in both countries
Quackdoc
2025-08-26 06:32:30
xD
RaveSteel
2025-08-26 11:06:32
https://www.reddit.com/r/PartneredYoutube/comments/1n07xkv/youtube_confirms_they_have_been_using_ai_to_edit/
spider-mario
2025-08-27 03:14:40
https://www.7joursaclermont.fr/a75-4/
2025-08-27 03:14:45
how not to do text-to-speech
Lumen
2025-08-27 04:10:35
Tell me you're French without saying you're French, be like, ahah
DZgas Ж
2025-08-28 12:18:13
ducky
2025-08-28 12:59:03
quack
Quackdoc
2025-08-31 10:47:41
https://www.reddit.com/r/rustjerk/comments/1n2l5kl/unwrap_or_ai_replace_unwrap_with_ai_guesses_for/
2025-08-31 10:47:48
new feature for jxl-rs?
diskorduser
2025-09-03 01:26:25
https://www.threads.com/@lauriewired/post/DOHJfY6EnPJ
gb82
diskorduser https://www.threads.com/@lauriewired/post/DOHJfY6EnPJ
2025-09-03 05:27:10
i read this today, this is really insane
Meow
2025-09-03 02:18:14
The Matrix.org database exploded https://bsky.app/profile/matrix.org/post/3lxuslbzjuc2t
Quackdoc
Meow The Matrix.org database exploded https://bsky.app/profile/matrix.org/post/3lxuslbzjuc2t
2025-09-03 03:43:31
> while also doing a point-in-time backup restore from last night (which takes >10h).
2025-09-03 03:43:38
https://tenor.com/view/tim-robinson-egg-game-egg-game-what-the-hell-gif-6970366789381324680
spider-mario
2025-09-04 09:17:48
2025-09-04 09:17:59
https://tenor.com/view/confused-nick-young-what-question-marks-idk-gif-14135308
jonnyawsom3
2025-09-04 09:19:25
Huh
mincerafter42
2025-09-09 07:06:23
anyone else getting a friend request and subsequent weird messages from <@1355893251801878538>? as if their account has been compromised and is following a script?
2025-09-09 07:06:37
and they didn't even tell me what their favourite part of JXL was when i asked :p
username
2025-09-09 07:30:00
could be a hijacked account? although it's both new and generic enough that it could have just always been a dormant bot
Crite Spranberry
2025-09-10 03:07:21
I feel like I should've swapped what I put in <#822120855449894942> and <#808692065076379699>
diskorduser
2025-09-10 05:40:31
https://youtu.be/ITdkH7PCu74
spider-mario
2025-09-13 09:25:48
(right to left) when you’re on your last breath, but you have to finish your sentence, otherwise the conjugated verb is missing
Haley from hai!touch Studios
2025-09-14 06:01:38
V2 word order be like
lonjil
2025-09-14 11:28:56
lol
Fahim
mincerafter42 anyone else getting a friend request and subsequent weird messages from <@1355893251801878538>? as if their account has been compromised and is following a script?
2025-09-14 12:08:29
I just got one from them, too
spider-mario
spider-mario (right to left) when you’re on your last breath, but you have to finish your sentence, otherwise the conjugated verb is missing
2025-09-15 08:33:14
[current](https://www.glenat.com/glenat-manga/series-dragon-ball-perfect-edition/) French version for comparison
Demiurge
2025-09-16 01:28:31
Back then it seemed like any time there was an exciting new thing in open-source land, mozilla was funding it.
2025-09-16 01:34:37
Way back in those days, when they seemed to be behind all these different research projects with impressive demos and constant updates all the time
2025-09-16 01:37:28
When I was still little I was really impressed with how many different pies Mozilla seemed to have their finger in.
2025-09-16 01:38:21
I viewed "working at mozilla" to be like how most people think of "working at NASA"
Quackdoc
2025-09-16 03:02:10
<@1028567873007927297> <@167023260574154752>
> Servo was to replace parts of Gecko, piece by piece. They even had an entire project dedicated to replacing C++ Gecko components with Rust Servo components, like WebRender!
> The goal of Servo was to figure out how to make a browser engine in Rust. Fully replacing Gecko was never the intent

Servo was also very much intended to be its own platform. Mozilla partnered with Samsung for a time to develop it as an embedded engine for Android (presumably to power the Samsung browser), and there was also work on making it a VR browser, a browser for Mozilla's FirefoxOS, and a few other things. It wasn't "just an experiment playground". While it wasn't intended to replace Gecko totally, it was intended to replace things like GeckoView.

> It's been a while so maybe I'm remembering wrong, but I thought developer salaries are a small fraction of their expenses, compared to administration salaries and other things that have nothing to do with developing firefox

For the Mozilla foundation, which controls Firefox, yes, Firefox is the biggest budget item. But when you look at the totality of Mozilla, all the foundations plus the corporation, then no, Firefox is not the majority of spending.
Demiurge
2025-09-16 03:06:13
I'm just really disappointed at how Mozilla seems to have abandoned every avenue of research they used to be involved in, and every prospect of improving Firefox or the web as a whole, outside of maintenance life-support mode: just getting as much Google money as they can while it lasts, as their user count plummets to zero.
2025-09-16 03:06:40
It's a corporate corpse now
2025-09-16 03:07:30
I'm just sad and emotional about it because it seems like such a drastic turn from how I used to see them
Quackdoc
2025-09-16 03:08:13
yes. Firefox doesn't even implement stuff like WebUSB; it's completely dysfunctional for webdevs since it lacks basic features like properly rendering a bloody gradient. Only recently did they manage to cut down on some RAM usage under memory-pressure scenarios, still sucks tho
Demiurge
2025-09-16 03:08:50
Back in the day, firefox was something EVERY techie had in their back pocket, loaded on their swiss army USB
2025-09-16 03:09:53
Every techie I know converted all their friends and family to it
Quackdoc
2025-09-16 03:10:08
I don't even care if Mozilla is funding Firefox adequately or not. My view is this. Either A) Mozilla is not funding Firefox nearly enough, and Firefox should die and let something better get that investment, or B) Mozilla is funding Firefox enough and Firefox is a fundamentally flawed browser that should be abandoned ASAP, and Mozilla should work on something better
2025-09-16 03:10:13
either way firefox should die in a fire
Demiurge
2025-09-16 03:13:01
The problem is, no one can just fork it... because web browsers and web standards, by design, are impractical to implement without essentially coding an entire virtual machine and entire operating system to run on it.
Quackdoc
2025-09-16 03:13:47
I'm just glad Servo and Ladybird both have active development, they really do have the potential to unfuck everything
Demiurge
2025-09-16 03:14:42
Since the browsers and web standards are designed by people who stand to gain by making them impossibly complex to implement, and impossible to do so in a way that respects consent and privacy.
Quackdoc
2025-09-16 03:15:29
I don't necessarily agree with that. Personally the actual standards themselves really aren't that bad, the real issue is the sheer scale of everything
Demiurge
Quackdoc im just glad servo and ladybird both have active development, they really do have the potential to unfuck everything
2025-09-16 03:17:02
The people in control of the web standards themselves are the problem. Their incentives are to build a platform that assumes browsing a page to read information is the same as giving consent to automatically run scripts and apps and even cryptominers on your machine. A platform that is designed to fingerprint and profile you and sell your data for profit.
2025-09-16 03:20:16
There is no api for a web page to ask for consent to enable javascript or gather fingerprint information about your device like screen resolution and time zone and installed fonts...
username
2025-09-16 03:20:42
https://drewdevault.com/2020/03/18/Reckless-limitless-scope.html
Demiurge
2025-09-16 03:22:25
The INTENT is to violate your consent and privacy by design
Quackdoc
Demiurge The people in control of the web standards themselves are the problem. Their incentives are to build a platform that assumes browsing a page to read information is the same as giving consent to automatically run scripts and apps and even cryptominers on your machine. A platform that is designed to fingerprint and profile you and sell your data for profit.
2025-09-16 03:22:27
thankfully this isn't that big of an issue on the modern web. Honestly, as long as you avoid social media, you can get pretty far even if you use noscript
2025-09-16 03:23:09
ironically most things that fail for me when using noscript are things like github/gitlab and other "foss friendly services"
Demiurge
2025-09-16 03:23:17
Even just disabling javascript makes you much easier to fingerprint and track since the assumption is that it's always on
2025-09-16 03:23:30
When it's off, that's an exceptional circumstance
2025-09-16 03:23:44
HTTP itself is designed to profile you
Quackdoc
2025-09-16 03:24:23
I do agree with that, but if you want to remain anonymous, you can scrub most things fairly easily to make it generic. ofc I don't know of anyone actually doing this anymore outside of that firefox fork because realistically, 99% of users don't care
2025-09-16 03:24:39
when it comes to implementing a browser, that's pretty much a non-issue
username https://drewdevault.com/2020/03/18/Reckless-limitless-scope.html
2025-09-16 03:25:08
I actually strongly disagree with this
Demiurge
2025-09-16 03:25:16
I have used umatrix for a long time
2025-09-16 03:25:32
It was pretty annoying having to make custom rules for almost every website
Quackdoc
Quackdoc I actually strongly disagree with this
2025-09-16 03:25:40
maybe if he said impossible to do from scratch, but realistically, 99% of the components are already out there
Demiurge
2025-09-16 03:26:42
And it's unacceptable that there aren't sane expectations and defaults when it comes to web standards and privacy and fingerprinting and respect for web users
2025-09-16 03:27:33
In any other context people would say it's insane to just automatically execute scripts from anywhere without asking
Quackdoc
2025-09-16 03:27:37
not really sane when so few people actually care [dogelol](https://cdn.discordapp.com/emojis/867794291652558888.webp?size=48&name=dogelol)
2025-09-16 03:28:12
oh neat, looks like someone is planning on picking up the work of abstracting Servo's JS bindings so that you could swap SpiderMonkey with V8 or maybe even something else
2025-09-16 03:29:04
anyways, I myself don't really care about scripts running, what I do have issues with is what the scripts can access without consent
username
2025-09-16 03:29:26
https://github.com/chakra-core/ChakraCore/
Quackdoc
2025-09-16 03:29:33
thankfully, It seems like both chromium and firefox are starting to take that more seriously
Demiurge
2025-09-16 03:29:56
I don't have an issue with scripts running if they... ASK PERMISSION first! Why is that so much to ask?!
2025-09-16 03:30:03
Everywhere else it's the norm!
2025-09-16 03:30:50
There's no API for a page to obtain consent except for a few very specific things like camera access
2025-09-16 03:31:19
But that's after it already has assumed permission to execute scripts in the first place
Quackdoc
2025-09-16 03:31:49
personally I think asking perms for any script is unnecessary, I would only want a certain subset of scripts to ask for perms. But I would agree that having the ability to make it ask first would be nice
2025-09-16 03:31:58
but that's a browser thing, not a webpage thing IMO
Demiurge
2025-09-16 03:32:21
Pages can redirect to other pages and pretty soon you have a javascript monstrosity hijacking your browser and moving your windows around to prevent you from closing them
2025-09-16 03:33:13
And then when you are in the middle of reading an article, suddenly a video starts playing without your permission and covers half the screen while you were mid sentence
2025-09-16 03:33:37
Web browsers are unusable without huge blocklists to zap that crap
2025-09-16 03:34:02
And even then it's still barely acceptable
Quackdoc
2025-09-16 03:35:29
I believe things like moving windows around should be something the browser itself prevents without permission, but I don't have an issue with what happens within the confines of the window itself, as long as it isn't using up all your CPU time and RAM, though browsers should have protections against that too. These aren't issues with the web standards though, but rather with the browser implementations of said standards
Demiurge
2025-09-16 03:38:03
Well I know it's really popular for scam websites to try and hijack the screen and prevent you from closing them