|
juliobbv
|
|
Yes, it applied to larger versions of the same pattern too, but the 9x9 stopped my eyes from hurting due to moiré. It's an easy way to replicate the desaturation found in natural images too
|
|
2025-08-12 08:40:44
|
BTW, I've been meaning to ask you (and others): from an RGB PNG image, what is the most straightforward way to get an XYB PNG image? ("RGB" w/ XYB ICC profile)
|
|
2025-08-12 08:41:57
|
right now, I'm working around it by encoding the source PNG with -q 99.9 JXL VarDCT, and then decoding it to get the XYB PNG 💀
|
|
|
jonnyawsom3
|
2025-08-12 08:53:52
|
I made an XYB PNG for testing by just attaching the ICC and manually changing a 1-bit palette's color values. I think VarDCT was how I got XYB PNG images, but I'm sure there's an easier way
|
|
|
spider-mario
|
|
juliobbv
right now, I'm working around it by encoding the source PNG with -q 99.9 JXL VarDCT, and then decoding it to get the XYB PNG 💀
|
|
2025-08-12 09:00:05
|
how do you decode it while keeping it XYB? do you have a tweaked djxl where you bypass the conversion back to RGB?
|
|
|
juliobbv
|
|
spider-mario
how do you decode it while keeping it XYB? do you have a tweaked djxl where you bypass the conversion back to RGB?
|
|
2025-08-12 09:04:58
|
it's already supported by djxl: `djxl --color_space XYB_Per image.jxl image.png`
|
|
|
spider-mario
|
|
juliobbv
|
2025-08-12 09:05:09
|
I think I got the command from <@184373105588699137>'s blog
|
|
2025-08-12 09:06:48
|
but it only appears to work for VarDCT, not modular mode
|
|
|
jonnyawsom3
|
|
juliobbv
but it only appears to work for VarDCT, not modular mode
|
|
2025-08-12 09:13:03
|
You could do `cjxl -d 0.01 -m 1` and that will use XYB while being *almost* lossless
|
|
|
juliobbv
|
2025-08-12 09:14:03
|
got you, that makes sense
|
|
2025-08-12 09:15:01
|
I was just wondering if there was a direct way, because it feels weird to treat a color-space conversion as an encoding problem lol
|
|
2025-08-12 09:15:30
|
maybe I'll add XYB support in imagemagick or something
|
|
|
spider-mario
|
2025-08-12 09:24:03
|
maybe tweaking jpegli so that it produces a PNG instead of a JPEG could be one approach
|
|
|
juliobbv
|
2025-08-12 10:35:31
|
yeah, ideally anything that just does the RGB -> XYB conversion would do (without any other extraneous operations)
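For reference, the forward transform itself is tiny. Below is a minimal C++ sketch of the RGB -> XYB opsin transform, using the absorbance matrix and bias constants from libjxl (rounded here); it assumes linear sRGB input in [0, 1], and the function name is ours:

```cpp
#include <array>
#include <cmath>

// Minimal sketch of JPEG XL's forward opsin transform (RGB -> XYB).
// Assumes linear sRGB input in [0, 1]; constants are the opsin absorbance
// matrix and bias from libjxl, rounded for readability.
std::array<float, 3> RgbToXyb(float r, float g, float b) {
  constexpr float kBias = 0.0037930732f;  // opsin absorbance bias
  const float cbrt_bias = std::cbrt(kBias);
  // Mix into LMS-like channels, add bias, apply cube root (gamma).
  const float l = std::cbrt(0.30f * r + 0.622f * g + 0.078f * b + kBias) - cbrt_bias;
  const float m = std::cbrt(0.23f * r + 0.692f * g + 0.078f * b + kBias) - cbrt_bias;
  const float s = std::cbrt(0.2434227f * r + 0.2047674f * g + 0.5518099f * b + kBias) - cbrt_bias;
  // Opponent channels: X = red-green, Y = luma-like, B = blue-ish.
  return {0.5f * (l - m), 0.5f * (l + m), s};
}
```

Run per pixel after undoing the sRGB transfer curve, then attach the XYB ICC profile to the output, as discussed above.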
|
|
|
Quackdoc
|
2025-08-13 12:21:27
|
djxl doesn't use the color management pipeline if the image is an RGB one, which is why it only works for lossy. I don't know if this is a libjxl limitation or a djxl one.
I can't remember if jxl-oxide has the same limitation or not.
|
|
2025-08-13 12:23:44
|
it would be nice if we could do -d0 --xyb or something to force xyb internally but I get that's not really lossless
|
|
|
jonnyawsom3
|
2025-08-13 12:37:08
|
Or just make an RGBtoXYB.exe like the other dev tools that get bundled in
|
|
|
Quackdoc
|
2025-08-13 01:17:17
|
true, but the color management would be nice in general
|
|
|
Yan Minari
|
2025-08-16 12:27:14
|
are animated JPEG XL files able to be progressively decoded like animated FLIF, i.e. "GIF" loops start at low quality and progressively improve, or does it pull an APNG and render frame by frame?
|
|
|
jonnyawsom3
|
2025-08-16 12:29:15
|
Currently it's frame by frame, but it *should* be possible to create proper progressiveness in future... I think...
|
|
|
Yan Minari
|
2025-08-16 12:37:58
|
in a way that current decoders would still make sense of the image?
|
|
2025-08-16 12:45:01
|
I feel like it would be the best way to handle "reaction GIFs"
|
|
|
jonnyawsom3
|
2025-08-16 12:51:14
|
Nevermind https://discord.com/channels/794206087879852103/794206170445119489/844312090789543996
|
|
|
RaveSteel
|
2025-08-22 02:49:24
|
https://ffmpeg.org/index.html#news
|
|
2025-08-22 02:49:36
|
FFmpeg 8.0 just released 🎊
|
|
|
HCrikki
|
2025-08-22 03:07:57
|
so many downstreams will ship with jxl support enabled by default and not mention it in their own changelogs, as usual - even ffmpeg barely did here for 8.0 (animated jxl *en*coding)
|
|
|
username
|
|
HCrikki
so many downstreams will ship with jxl support enabled by default and not mention it in their own changelogs, as usual - even ffmpeg barely did here for 8.0 (animated jxl *en*coding)
|
|
2025-08-22 11:25:27
|
when FFmpeg first got support for JXL (in release 5.1), the changelog didn't even mention JXL but ***did*** mention adding QOI support, which I still find kinda crazy.
(EDIT: it's still like this)
|
|
|
DZgas Ж
|
2025-08-23 07:22:07
|
qoi is some kind of meme
who uses qoi in their projects?
well right now?
|
|
|
Kupitman
|
2025-08-23 07:49:37
|
<:PepeSadCatGun:973222795150508082>
|
|
|
|
ignaloidas
|
|
DZgas Ж
qoi is some kind of meme
who uses qoi in their projects?
well right now?
|
|
2025-08-23 02:57:20
|
qoi makes sense on microcontrollers with like 200kb of RAM, you're not putting JPEG XL there at least for now, but doesn't make sense for anything with more RAM/compute, I don't really understand why it's in ffmpeg
|
|
|
DZgas Ж
|
|
ignaloidas
qoi makes sense on microcontrollers with like 200kb of RAM, you're not putting JPEG XL there at least for now, but doesn't make sense for anything with more RAM/compute, I don't really understand why it's in ffmpeg
|
|
2025-08-23 03:00:07
|
Well, JPEG was invented when all computers were about as powerful as a microcontroller. But if we're talking about replacing PNG, then that's a fair argument: a very specific way of using it.
|
|
|
|
ignaloidas
|
|
DZgas Ж
Well, JPEG was invented when all computers were about as powerful as a microcontroller. But if we're talking about replacing PNG, then that's a fair argument: a very specific way of using it.
|
|
2025-08-23 03:05:52
|
I see it mostly as a code size/compression performance/memory usage tradeoff - qoi gives you a very small code size, with very small memory usage while giving decent speed - you can viably decode it straight into a display without even storing the image on memory with less than a KB of code+RAM, which is hard to beat with PNG or JPEG. But I don't see a need for it anywhere else.
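A rough sketch of that decode loop, following the public QOI spec (`emit` is a placeholder pixel sink), shows why the footprint is so small: the entire decoder state is the previous pixel plus a 64-entry table.

```cpp
#include <cstddef>
#include <cstdint>

struct Px { uint8_t r, g, b, a; };

// Streams n_px pixels out of a QOI chunk stream (header/end marker omitted).
// State is one previous pixel and a 64-entry hash table, so pixels can go
// straight to a display without buffering the whole image.
void qoi_decode_stream(const uint8_t* p, size_t len, size_t n_px, void (*emit)(Px)) {
  Px px{0, 0, 0, 255};  // previous pixel
  Px index[64] = {};    // table of recently seen pixels
  size_t i = 0;
  int run = 0;
  for (size_t out = 0; out < n_px; ++out) {
    if (run > 0) {
      --run;                              // continuing a QOI_OP_RUN
    } else if (i < len) {
      const uint8_t op = p[i++];
      if (op == 0xFE) {                   // QOI_OP_RGB: literal RGB
        px.r = p[i++]; px.g = p[i++]; px.b = p[i++];
      } else if (op == 0xFF) {            // QOI_OP_RGBA: literal RGBA
        px.r = p[i++]; px.g = p[i++]; px.b = p[i++]; px.a = p[i++];
      } else if ((op & 0xC0) == 0x00) {   // QOI_OP_INDEX: recall table entry
        px = index[op];
      } else if ((op & 0xC0) == 0x40) {   // QOI_OP_DIFF: tiny per-channel delta
        px.r += ((op >> 4) & 3) - 2;
        px.g += ((op >> 2) & 3) - 2;
        px.b += (op & 3) - 2;
      } else if ((op & 0xC0) == 0x80) {   // QOI_OP_LUMA: green-relative delta
        const int dg = (op & 0x3F) - 32;
        const uint8_t b2 = p[i++];
        px.g += dg;
        px.r += dg - 8 + ((b2 >> 4) & 0x0F);
        px.b += dg - 8 + (b2 & 0x0F);
      } else {                            // QOI_OP_RUN: repeat previous pixel
        run = op & 0x3F;
      }
      index[(px.r * 3 + px.g * 5 + px.b * 7 + px.a * 11) % 64] = px;
    }
    emit(px);  // stream out; no full-image buffer needed
  }
}
```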
|
|
|
DZgas Ж
|
2025-08-23 03:08:13
|
> But I don't see a need for it anywhere else.
☝️ 👍
|
|
|
Quackdoc
|
2025-08-23 03:38:49
|
we need JPEGXL light, .jll, reduced feature set dedicated for very low end environments
|
|
|
ignaloidas
qoi makes sense on microcontrollers with like 200kb of RAM, you're not putting JPEG XL there at least for now, but doesn't make sense for anything with more RAM/compute, I don't really understand why it's in ffmpeg
|
|
2025-08-23 03:39:21
|
I use qoi for editing lossless video
|
|
2025-08-23 03:39:30
|
shits on PNG and ffv1 for performance
|
|
|
RaveSteel
|
2025-08-23 03:40:42
|
Interesting
|
|
|
DZgas Ж
|
|
Quackdoc
we need JPEGXL light, .jll, reduced feature set dedicated for very low end environments
|
|
2025-08-23 03:41:07
|
> we need
not a single person
|
|
|
Quackdoc
|
2025-08-23 03:41:40
|
yes, lots of devices with very low perf
|
|
|
DZgas Ж
|
2025-08-23 03:42:18
|
<:galaxybrain:821831336372338729> mjpeg
|
|
|
Kupitman
|
|
Quackdoc
we need JPEGXL light, .jll, reduced feature set dedicated for very low end environments
|
|
2025-08-23 06:34:51
|
HAHAHAHHAAHA
|
|
2025-08-23 06:35:22
|
We already have low effort settings, nobody asks you to use only 10-11
|
|
|
Quackdoc
|
2025-08-23 06:36:35
|
the issue is you need to enforce the specific subset, you don't want to be "jxl compatible", you want to be "x subset compatible"
|
|
|
jonnyawsom3
|
2025-08-23 06:36:41
|
Effort 1 still uses 100x more memory than hydrium
|
|
|
Quackdoc
|
2025-08-23 06:36:43
|
like mkv vs matroska
|
|
|
Kupitman
|
|
Quackdoc
like mkv vs matroska
|
|
2025-08-23 06:46:48
|
What
|
|
|
Quackdoc
|
2025-08-23 06:47:12
|
oops, meant webm
|
|
|
Kupitman
|
2025-08-23 06:47:15
|
)
|
|
|
Effort 1 still uses 100x more memory than hydrium
|
|
2025-08-23 06:47:38
|
Why can't we use hydrium?
|
|
|
jonnyawsom3
|
|
Kupitman
Why can't we use hydrium?
|
|
2025-08-23 06:48:18
|
It's lossy only and fixed quality
|
|
|
Kupitman
|
|
It's lossy only and fixed quality
|
|
2025-08-23 06:49:08
|
😭
|
|
2025-08-23 06:49:30
|
Okay, webp?
|
|
|
Quackdoc
|
2025-08-23 08:14:01
|
you also want a decoder able to reject "invalid" bitstreams
|
|
2025-08-28 09:07:16
|
so I wonder if the new fossify gallery is gonna perform like crap with jxl when they update jxl-coder since hwy got nuked lol
|
|
2025-08-28 09:07:30
|
at least it has neon
|
|
|
RaveSteel
|
2025-08-28 09:14:30
|
It can perform badly, as long as it can finally decode all my images
|
|
2025-08-28 09:14:42
|
there seems to be a limit at which it just stops decoding images
|
|
|
A homosapien
|
2025-08-28 09:16:07
|
I wonder how much faster sve is compared to just neon
|
|
|
Quackdoc
|
2025-08-28 10:18:48
|
I use 4.2.0 since it decodes no problem, it just doesn't render them right, but honestly, good enough for me
|
|
|
spider-mario
|
2025-09-05 11:33:29
|
https://github.com/apple/ml-fastvlm?tab=readme-ov-file#highlights
nice
|
|
2025-09-05 11:33:38
|
(demo here: https://huggingface.co/spaces/apple/fastvlm-webgpu)
|
|
|
awxkee
|
|
Quackdoc
so I wonder if the new fossify gallery is gonna perform like crap with jxl when they update jxl-coder since hwy got nuked lol
|
|
2025-09-06 08:04:21
|
SVE exists only on the two latest generations of Google Pixel and the latest high-end MediaTek chips (Dimensity 9300+). It's impractical to double binary size for the most powerful devices with a market share of ~4%. At the same vector length as NEON (those chips limit SVE length to the same as NEON), SVE has nothing to offer except in very specific hot paths where features like masked loads or int16->int64 dot products might help. The other features are more relevant to x86; if performance in an emulator is poor, I don't think it's worth the additional megabytes.
|
|
|
gb82
|
|
awxkee
SVE exists only on the two latest generations of Google Pixel and the latest high-end MediaTek chips (Dimensity 9300+). It's impractical to double binary size for the most powerful devices with a market share of ~4%. At the same vector length as NEON (those chips limit SVE length to the same as NEON), SVE has nothing to offer except in very specific hot paths where features like masked loads or int16->int64 dot products might help. The other features are more relevant to x86; if performance in an emulator is poor, I don't think it's worth the additional megabytes.
|
|
2025-09-06 09:05:38
|
why does the binary size double here? sorry, unfamiliar with highway
|
|
|
|
veluca
|
2025-09-06 09:13:52
|
well not double
|
|
2025-09-06 09:13:57
|
but +80% most likely
|
|
|
awxkee
|
2025-09-06 09:14:08
|
highway runtime dispatch is fairly straightforward. It uses simple macros that duplicate all the code inside them for a specific target. Thus, any code written using highway with runtime dispatch is simply duplicated for each additional target.
|
|
|
|
veluca
|
2025-09-06 09:14:19
|
highway produces different copies of the code for different instruction sets
|
|
2025-09-06 09:14:58
|
aarch64 has neon in its base profile, so aarch64 binaries typically only have one copy
|
|
2025-09-06 09:15:23
|
(unlike x86, which tends to have a ~baseline with sse4, avx, and avx512 -- at least I think in the latest builds)
|
|
2025-09-06 09:15:52
|
so adding an instruction set (SVE) duplicates a lot of the code (not all of it, but most of the performance sensitive code)
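A hedged sketch of the idiom being described, based on Highway's documented foreach_target pattern; the file and function names here are invented for illustration:

```cpp
// Everything between the foreach_target include and HWY_AFTER_NAMESPACE is
// compiled once per enabled target (e.g. NEON and SVE) -- that per-target
// duplication is where the binary-size multiplier comes from.
#include <cstddef>

#undef HWY_TARGET_INCLUDE
#define HWY_TARGET_INCLUDE "scale.cc"  // this file is re-included per target
#include "hwy/foreach_target.h"
#include "hwy/highway.h"

HWY_BEFORE_NAMESPACE();
namespace demo {
namespace HWY_NAMESPACE {  // expands to a distinct name per target
namespace hn = hwy::HWY_NAMESPACE;

// One copy of this function ends up in the binary for every target.
void ScaleImpl(float* HWY_RESTRICT x, size_t n, float k) {
  const hn::ScalableTag<float> d;
  const auto vk = hn::Set(d, k);
  // Assumes n is a multiple of the vector length, for brevity.
  for (size_t i = 0; i < n; i += hn::Lanes(d)) {
    hn::Store(hn::Mul(hn::Load(d, x + i), vk), d, x + i);
  }
}

}  // namespace HWY_NAMESPACE
}  // namespace demo
HWY_AFTER_NAMESPACE();

#if HWY_ONCE  // the dispatch stub is compiled only once
namespace demo {
HWY_EXPORT(ScaleImpl);  // builds the per-target function table
void Scale(float* x, size_t n, float k) {
  HWY_DYNAMIC_DISPATCH(ScaleImpl)(x, n, k);  // picks the best target at runtime
}
}  // namespace demo
#endif
```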
|
|
|
lonjil
|
|
Quackdoc
shits on PNG and ffv1 for performance
|
|
2025-09-06 10:37:19
|
why aren't you using fpnge
|
|
|
|
veluca
|
2025-09-06 10:44:18
|
I mean, there are many reasons, one of them being that it won't work on non-x86
|
|
2025-09-06 10:44:37
|
(fixing that, and maybe a rust port, is one of the many things on my very long todo list)
|
|
|
|
afed
|
2025-09-06 10:48:28
|
and also only png is supported for input images
|
|
|
|
veluca
|
2025-09-06 10:49:05
|
that's only true of the console version
|
|
|
jonnyawsom3
|
2025-09-06 11:45:05
|
Or fast lossless JXL
|
|
|
Quackdoc
|
|
awxkee
SVE exists only on the two latest generations of Google Pixel and the latest high-end MediaTek chips (Dimensity 9300+). It's impractical to double binary size for the most powerful devices with a market share of ~4%. At the same vector length as NEON (those chips limit SVE length to the same as NEON), SVE has nothing to offer except in very specific hot paths where features like masked loads or int16->int64 dot products might help. The other features are more relevant to x86; if performance in an emulator is poor, I don't think it's worth the additional megabytes.
|
|
2025-09-07 03:10:12
|
I found that SVE made a very noticeable difference on my mom's Pixel. In raw benchmarks via a Termux build it was, like, up to 2x faster on some files.
it's understandable, but sad
|
|
|
grey-torch
|
2025-09-10 02:44:14
|
https://gamesbymason.com/blog/2025/stop-shipping-pngs/
|
|
2025-09-10 02:45:10
|
we need more image viewers that support texture formats
|
|
2025-09-10 02:46:25
|
like ktx files
|
|
|
HCrikki
|
2025-09-10 03:00:43
|
Png and jpg as textures is actually rare. What's common is using bloated jpg and png for non-texture images
|
|
2025-09-10 03:02:43
|
Take Darkest Dungeon or Melvor Idle for example. Tons of pngs with alpha. Converting them all to jxl results in a massive total filesize reduction
|
|
2025-09-10 03:04:48
|
Both make excellent reference sources for compression imo
|
|
|
Meow
|
|
grey-torch
https://gamesbymason.com/blog/2025/stop-shipping-pngs/
|
|
2025-09-10 03:05:44
|
so some use QOI
|
|
|
A homosapien
|
2025-09-10 03:10:34
|
QOI still sucks as a texture format
|
|
|
Kupitman
|
2025-09-10 09:10:49
|
We need jxl as a texture format
|
|
|
jonnyawsom3
|
2025-09-10 09:14:38
|
Yes but actually no. Squeeze transform would allow storing mipmaps, but the GPU wouldn't be able to decode it
|
|
|
Kupitman
|
2025-09-10 09:33:51
|
We don't need gpu
|
|
|
jonnyawsom3
|
2025-09-10 09:36:32
|
That's literally the definition of a texture format
|
|
|
Kupitman
|
|
Quackdoc
|
|
That's literally the definition of a texture format
|
|
2025-09-10 10:43:19
|
you have streaming decode, you could force it
|
|
|
grey-torch
we need more image viewers that support texture formats
|
|
2025-09-10 01:38:58
|
oculante does
|
|
|
_wb_
|
2025-09-11 10:18:09
|
I suppose you could do local squeeze per group in 128x128 groups and that could plausibly be implementable on a GPU... I dunno if and what kind of group seam artifacts that would result in though.
|
|
|
jonnyawsom3
|
2025-09-11 12:38:35
|
We did significantly improve lossless squeeze density, there's a chance it could work out smaller than BC formats, etc. Main issue would be having the GPU find pixel positions quickly
|
|
|
|
ignaloidas
|
2025-09-11 03:53:57
|
texture formats are usually fixed size in memory, and only potentially compressed with some variable-size compression on disk, and so are unavoidably lossy; idk if it's really viable to use jxl for this
|
|
|
spider-mario
|
2025-09-20 07:09:12
|
https://en.wikipedia.org/w/index.php?title=WebP&diff=next&oldid=1310256673
what a strange edit
|
|
|
AccessViolation_
|
2025-09-21 02:31:50
|
I saw this comment and the profile picture seemed familiar, I had to do a double take
|
|
2025-09-21 02:33:10
|
I like how there's practically always someone from this server chiming in on anything image related
|
|
|
jonnyawsom3
|
2025-09-21 02:36:44
|
I remember a long time ago I messaged Mario on Telegram to test their JXL support, unfortunately desktop only
|
|
|
AccessViolation_
|
|
Yes but actually no. Squeeze transform would allow storing mipmaps, but the GPU wouldn't be able to decode it
|
|
2025-09-21 02:38:03
|
I feel like a hybrid approach of storing textures as JXL and then decoding them to something that works better on the GPU could work, especially during transmission, like when downloading the game. the question is: is that fast enough to do at game launch (dropping the textures when the game is closed), or should that be done just once after the game is installed, keeping the larger texture files on disk after that?
|
|
|
jonnyawsom3
|
|
AccessViolation_
I feel like a hybrid approach of storing textures as JXL and then decoding them to something that works better on the GPU could work, especially during transmission, like when downloading the game. the question is: is that fast enough to do at game launch (dropping the textures when the game is closed), or should that be done just once after the game is installed, keeping the larger texture files on disk after that?
|
|
2025-09-21 02:45:43
|
Progressive can load *something* with only a few KB, so it's feasible. Main issue is there's no API to get frames at their original size, only upsampled to the image resolution, so it'd be loading the 4x4 mip as a 4K texture which needs to be resized again
|
|
2025-09-21 02:46:24
|
I wanted something for djxl so we can get the 1:8 LF as its own 1:8 scaled image
|
|
|
HCrikki
|
2025-09-21 03:17:43
|
imo games should just limit their jxl adoption to swapping png and jpg imagery for jxl. much simpler, and games could bundle their own decode library that completely bypasses a platform or engine's slow adoption
|
|
2025-09-21 03:21:43
|
tried converting a few games that have tons of png+alpha images, losslessly chopped their total storage size in half. converting to lossy *with lossless alpha* results in even smaller total size without perceptible difference for non-clipart images
|
|
|
jonnyawsom3
|
2025-09-21 03:37:26
|
I know a long time ago Factorio had unoptimized PNGs. They fixed it a while back, but JXL didn't do a whole lot since it can't detect the sprites properly for patches
|
|
|
|
ignaloidas
|
2025-09-21 03:38:50
|
dealing with in-gpu formats is a different beast than dealing with how you're storing the images on disk (not in memory), jxl would make sense in the latter, but not really in former
|
|
|
jonnyawsom3
|
2025-09-21 03:59:31
|
Yeah, the whole point of GPU formats is saving VRAM, which JXL wouldn't do
|
|
|
spider-mario
|
|
AccessViolation_
I saw this comment and the profile picture seemed familiar, I had to do a double take
|
|
2025-09-21 04:39:38
|
fun fact: my profile picture is a screenshot from a random music visualisation thing (à la Windows Media Player, but open-source) I took in either 2008 or 2009
|
|
2025-09-21 04:39:48
|
I wonder what the copyright implications might be (I didn’t write the visualiser)
|
|
2025-09-21 04:40:05
|
(nor the music that was being visualised)
|
|
|
AccessViolation_
|
2025-09-21 04:40:54
|
hmm, it might be non-copyrightable, just like how AI generated imagery cannot be copyrighted since no human made it
|
|
|
spider-mario
|
2025-09-21 04:41:11
|
I don’t remember what visualiser that was, but I’m almost sure the music was https://youtu.be/tMfnmM0ab9o
|
|
|
|
ignaloidas
|
2025-09-21 04:41:34
|
probably milkdrop?
|
|
|
spider-mario
|
2025-09-21 04:41:39
|
could be
|
|
2025-09-21 04:42:00
|
ah, no, that’s windows only
|
|
|
|
ignaloidas
|
2025-09-21 04:42:04
|
milkdrop and its presets were all the rage at the time
|
|
|
spider-mario
|
2025-09-21 04:42:34
|
“mine” was a Linux one, I think
|
|
|
|
ignaloidas
|
2025-09-21 04:43:20
|
there was a bunch of linux re-implementations of milkdrop
|
|
|
spider-mario
|
|
spider-mario
https://en.wikipedia.org/w/index.php?title=WebP&diff=next&oldid=1310256673
what a strange edit
|
|
2025-09-21 08:47:21
|
the saga continues 😁 https://en.wikipedia.org/w/index.php?title=WebP&diff=next&oldid=1312132246
|
|
|
AccessViolation_
|
2025-09-21 09:02:27
|
that one seems like a fair edit to me
|
|
2025-09-21 09:02:41
|
(assuming it's correct)
|
|
|
spider-mario
|
|
Demiurge
|
2025-09-28 04:42:14
|
Soon libjxl will replace libjpeg too
|
|
|
jonnyawsom3
|
2025-09-28 05:32:41
|
Wellll
|
|
|
|
cioute
|
|
Kupitman
|
|
Demiurge
Soon libjxl will replace libjpeg too
|
|
2025-09-28 12:22:13
|
Why doesn't it replace it in Open Camera yet?
|
|
|
Quackdoc
|
|
Kupitman
Why doesn't it replace it in Open Camera yet?
|
|
2025-09-28 01:56:51
|
Open Camera is mostly dead.
|
|
|
jonnyawsom3
|
2025-09-28 02:15:05
|
It also only uses supported camera formats in Android, built-in JPEG or RAW, everything else is a conversion from one of the two
|
|
|
Quackdoc
|
2025-09-28 02:22:49
|
which is fine, especially for jxl since you can losslessly compress the jpeg that comes out of it
|
|
|
Demiurge
|
2025-09-28 03:02:41
|
I was thinking jpegli
|
|
|
Quackdoc
|
2025-09-28 03:03:59
|
The images that camera apps take (unless they come from RAW, and they usually don't) come from the APIs themselves, so jpegli won't make much sense.
|
|
|
Kupitman
|
|
Quackdoc
Open Camera is mostly dead.
|
|
2025-09-28 03:23:20
|
Why
|
|
|
Quackdoc
|
2025-09-28 03:27:49
|
I dunno
|
|
2025-09-28 03:28:26
|
it's mostly just general maintenance now
|
|
2025-09-28 03:29:01
|
some smaller features get added every now and then but nothing major
|
|
|
Kupitman
|
|
Quackdoc
some smaller features get added every now and then but nothing major
|
|
2025-09-28 06:32:57
|
donate to dev
|
|
|
Quackdoc
|
2025-09-28 08:21:01
|
i aint got the cash lel
|
|
|
dapohca
|
2025-10-05 10:13:58
|
is there a way to view animated jpeg xls?
|
|
|
|
cioute
|
2025-10-05 10:58:24
|
jpegview - windows, qview - linux, jxl viewer - android
|
|
|
Meow
|
2025-10-05 11:32:48
|
.xls with animated JPEG?
|
|
|
Mine18
|
|
dapohca
is there a way to view animated jpeg xls?
|
|
2025-10-05 12:45:19
|
nomacs on windows
|
|
|
RaveSteel
|
2025-10-05 12:51:49
|
fossify gallery does not yet support animated JXL, no?
|
|
2025-10-05 12:51:55
|
Or was it recently added?
|
|
|
Mine18
|
2025-10-05 12:56:15
|
doesn't seem like it, i was confused because it improved animated avif support
|
|
|
dapohca
|
|
Mine18
nomacs on windows
|
|
2025-10-05 01:03:07
|
i meant in browser
|
|
|
Mine18
|
|
dapohca
i meant in browser
|
|
2025-10-05 01:03:38
|
you need a supported browser, like waterfox or thorium
|
|
|
dapohca
|
2025-10-05 01:11:05
|
thanks
|
|
|
Quackdoc
|
2025-10-06 10:34:45
|
are there known cases where ssimu2 will completely flounder and give absurdly high scores for no good reason? I want to use ssimu2 as a verification tool to make sure my encodes aren't totally busted on a batch encode of quite a lot of files
|
|
|
Lumen
|
|
Quackdoc
are there known cases where ssimu2 will completely flounder and give absurdly high scores for no good reason? I want to use ssimu2 as a verification tool to make sure my encodes aren't totally busted on a batch encode of quite a lot of files
|
|
2025-10-07 07:15:03
|
I found dark scenes to fit that description
|
|
2025-10-07 07:15:19
|
butteraugli has fewer "false negatives", I'd say
|
|
|
Quackdoc
|
2025-10-07 12:42:31
|
so far my script hasn't ok'd any obviously bad images so I'll trust it for now, using ssimu2rs
|
|
|
Lumen
|
|
Quackdoc
so far my script hasn't ok'd any obviously bad images so I'll trust it for now, using ssimu2rs
|
|
2025-10-07 12:45:34
|
isn't ssimu2rs painfully slow?
|
|
2025-10-07 12:45:47
|
vszip is already way faster I believe (still CPU but requires vapoursynth)
|
|
|
Quackdoc
|
|
Lumen
isn't ssimu2rs painfully slow?
|
|
2025-10-07 01:18:17
|
no, it's just as fast as the C version, the only issue it has is that it does a lot of unnecessary allows on video mode, not an issue for what I need.
|
|
2025-10-07 01:18:28
|
a/allows/allocs
|
|
2025-10-07 01:18:40
|
man I can't type worth shit today
|
|
|
Lumen
|
|
Quackdoc
no, it's just as fast as the C version, the only issue it has is that it does a lot of unnecessary allows on video mode, not an issue for what I need.
|
|
2025-10-07 01:18:48
|
how many fps do you get?
|
|
2025-10-07 01:19:11
|
oh wait, it's images
|
|
2025-10-07 01:19:13
|
not videos
|
|
2025-10-07 01:19:15
|
so you don't really care
|
|
|
Quackdoc
|
2025-10-07 02:05:16
|
I mean, it needs to be relatively fast, but yeah its fast enough for sure
|
|
2025-10-07 02:05:39
|
also one command cross compile is so nice for compiling to android
|
|
2025-10-07 02:25:29
|
the real issue for me is that cjxl keeps hitting oom lol
|
|
|
|
ignaloidas
|
2025-10-07 02:29:46
|
there's also a relatively new ssimu2 implementation in zig https://github.com/gianni-rosato/fssimu2
|
|
|
Quackdoc
|
2025-10-07 02:35:31
|
I just use the rust one since I know it's very accurate and highly reliable
|
|
|
gb82
|
2025-10-09 06:18:13
|
if you'd like to check fssimu2's reliability, you can look at the README
|
|
|
Lumen
|
2025-10-10 01:54:15
|
after fully reencoding my images folder into jxl lossless I got from 5.4GB to 3.5GB
lossy -q 90 is ongoing...
|
|
|
Quackdoc
|
|
Lumen
after fully reencoding my images folder into jxl lossless I got from 5.4GB to 3.5GB
lossy -q 90 is ongoing...
|
|
2025-10-10 03:10:59
|
bah, noob numbers, I compressed 58gb down to 26gb lossless, I'm re-encoding to lossy tho since I'm broke lmao
|
|
|
Lumen
|
|
spider-mario
|
|
jonnyawsom3
|
2025-10-12 09:04:05
|
Mildly confusing
|
|
|
RaveSteel
|
2025-10-12 09:04:33
|
in an ideal world: bottom left
|
|
|
_wb_
|
2025-10-12 09:24:55
|
This is the kind of question that bad school teachers like to ask. There is no such thing as "THE two types of compression". All of these answers can be correct, the only thing that is wrong is the question.
|
|
2025-10-12 09:25:54
|
If you're talking about HTTP content encoding, then "gzip and brotli" is a good answer since those are afaik the two only compression options (there's also deflate but that's basically the same as gzip).
|
|
2025-10-12 09:26:28
|
If you're talking about images, video, audio: "lossy and lossless" would be a good answer, since that is usually the main distinction relevant there.
|
|
2025-10-12 09:27:28
|
If you're talking about general data compression, then "there's only one type of compression: lossless" would be an answer that makes sense.
|
|
2025-10-12 09:31:08
|
And "fast and slow" also is quite a relevant distinction, though it's a bit of a stretch to call this "two types", there is no clear threshold separating "fast" from "slow" and it's obviously a spectrum. (but imo you can kind of say the same about lossy vs lossless: e.g. can you call GIF lossless if it forces you to first quantize colors to a palette?)
|
|
|
Fahim
|
|
_wb_
If you're talking about HTTP content encoding, then "gzip and brotli" is a good answer since those are afaik the two only compression options (there's also deflate but that's basically the same as gzip).
|
|
2025-10-12 10:37:12
|
zstd's been added to the list and enabled by default in all but Safari as of a while ago, IIRC
|
|
|
_wb_
|
2025-10-13 07:00:50
|
ah right, zstd is there too now
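Concretely, the negotiation is a single header pair; a current browser advertises all four codings and the server picks one:

```
GET /page.html HTTP/1.1
Accept-Encoding: gzip, deflate, br, zstd

HTTP/1.1 200 OK
Content-Encoding: zstd
```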
|
|
|
|
cioute
|
2025-10-13 07:56:42
|
i saw a video where jpegxl recompressed the same image a lot of times, and it looks better than jpeg (meaning that even for lossy storage jpegxl is better)
|
|
|
monad
|
2025-10-13 11:15:24
|
don't believe everything you see on the Internet
|
|
|
BlueSwordM
|
|
Main (176 bytes) vs 0.8 (141 bytes) at Distance 1 Effort 7
|
|
2025-10-14 12:00:43
|
Oh my god.
|
|
2025-10-14 12:00:53
|
I'm surprised no one's absurdly pissed about this.
|
|
|
gb82
|
2025-10-14 06:28:19
|
it's ok, psychovisual testing is hard, and sometimes there are edge cases
|
|
|
jonnyawsom3
|
2025-10-14 09:56:50
|
It's not an edge case though, it happens on any image with saturated yellows at any distance above 0.1
|
|
|
gb82
|
2025-10-14 09:13:22
|
ah I stand corrected then
|
|
|
jonnyawsom3
|
2025-10-14 09:30:18
|
Along with https://discord.com/channels/794206087879852103/1278292301038227489/1297641128333410404
|
|
|
spider-mario
|
2025-10-15 09:30:22
|
https://devblogs.microsoft.com/oldnewthing/20251014-00/?p=111681
😂
|
|
|
username
|
|
spider-mario
https://devblogs.microsoft.com/oldnewthing/20251014-00/?p=111681
😂
|
|
2025-10-15 09:45:42
|
> Nowadays, video rendering is no longer done with overlays.
This isn't true. In recent years we have been going *back* to using overlays
|
|
|
Quackdoc
|
|
username
> Nowadays, video rendering is no longer done with overlays.
This isn't true. In recent years we have been going *back* to using overlays
|
|
2025-10-15 10:33:43
|
not for video players, almost all compositors use overlays, but no app is going to directly negotiate for them
|
|
2025-10-15 10:33:46
|
usually
|
|
2025-10-15 10:35:46
|
nowadays we use fbos; those can optionally be attached to an overlay, but don't have to be
|
|
|
Cacodemon345
|
2025-10-16 09:40:43
|
Video players won't use overlays these days.
|
|
2025-10-16 09:41:15
|
It works poorly when combined with 3D compositors.
|
|
|
|
ignaloidas
|
2025-10-16 11:50:44
|
On linux it's going back to using overlays/layers (at least on wayland), the compositor gets to decide how to handle each separate surface from an application, and if one of those surfaces is playing video, that's an easy choice to put on a separate layer in a GPU
|
|
|
lonjil
|
2025-10-16 11:53:19
|
yeah, the player just writes to a surface, and the compositor decides how to use overlays with the surfaces from applications.
|
|
|
Cacodemon345
It works poorly when combined with 3D compositors.
|
|
2025-10-16 11:54:25
|
what's a 3D compositor?
|
|
|
Quackdoc
|
|
ignaloidas
On linux it's going back to using overlays/layers (at least on wayland), the compositor gets to decide how to handle each separate surface from an application, and if one of those surfaces is playing video, that's an easy choice to put on a separate layer in a GPU
|
|
2025-10-16 01:50:55
|
only compositor side, the player will never touch an overlay
|
|
2025-10-16 01:51:24
|
windows is the same way, and ofc android is too, but that's not what the blog author was talking about
|
|
|
|
ignaloidas
|
2025-10-16 01:51:38
|
compositor sure, but the player may strongly recommend the compositor to use the overlay
|
|
|
Quackdoc
|
2025-10-16 01:51:59
|
at most it can request to be "always on top"
|
|
2025-10-16 01:52:21
|
well for players designed for modern compositors
|
|
2025-10-16 01:53:22
|
mainstream* compositors
|
|
|
Cacodemon345
|
|
lonjil
what's a 3D compositor?
|
|
2025-10-16 03:16:30
|
I mean as in compositors using OpenGL/Vulkan to render windows.
|
|
|
lonjil
|
|
Quackdoc
|
|
Cacodemon345
I mean as in compositors using OpenGL/Vulkan to render windows.
|
|
2025-10-16 04:08:58
|
~~and not maze or one of the VR ones? lame~~
|
|
|
jonnyawsom3
|
2025-10-31 07:22:53
|
Huh, only just found out about this https://www.videolan.org/videolan/events/vdd25/index.html
I'm actually going to London tomorrow anyway to see some friends, but looks like registration is closed and a 7am Saturday is a bit early for me haha
|
|
2025-10-31 07:23:40
|
|
|
|
The Thunk
|
2025-11-03 06:37:44
|
<@794205442175402004> in response to https://discord.com/channels/794206087879852103/803645746661425173/1433496087196340375
|
|
|
juliobbv
|
|
Huh, only just found out about this https://www.videolan.org/videolan/events/vdd25/index.html
I'm actually going to London tomorrow anyway to see some friends, but looks like registration is closed and a 7am Saturday is a bit early for me haha
|
|
2025-11-03 07:04:25
|
yeah, <@703028154431832094> and I went to VDD25... it was great!
|
|
2025-11-03 07:04:33
|
we got to present SVT-AV1-PSY
|
|
|
jonnyawsom3
|
|
juliobbv
yeah, <@703028154431832094> and I went to VDD25... it was great!
|
|
2025-11-03 07:20:51
|
Yeah! My friend attended and said hi to Gianni for me https://discord.com/channels/794206087879852103/806898911091753051/1434154028199317618, they loved your talk and said "They changed a lot" xD
Unfortunately I was on the wrong side of London, but they brought me a cone to wear in spirit, which another friend took this wonderful photo of
|
|
2025-11-03 07:20:52
|
https://cdn.discordapp.com/attachments/889987478675656796/1434231906668380200/IMG_8693.jpg?ex=69098e4d&is=69083ccd&hm=aab998de962a49b6fd97acf3366fbd68b0dff49fa9f5ae8030a6a711871cd8e5&
|
|
|
_wb_
|
|
The Thunk
<@794205442175402004> in response to https://discord.com/channels/794206087879852103/803645746661425173/1433496087196340375
|
|
2025-11-03 10:11:32
|
Good luck on finding what you're missing. Something isn't right, you cannot put 1000 pigeons in 590 pigeon holes.
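Spelled out, the counting argument behind that remark: a lossless codec must be invertible, so distinct inputs need distinct outputs, but there are strictly fewer shorter bitstrings than length-$n$ bitstrings:

$$\sum_{k=0}^{n-1} 2^k = 2^n - 1 < 2^n$$

so any encoder that shortens some inputs must lengthen (or collide on) others; one that shrinks *every* input, random data included, cannot exist.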
|
|
|
juliobbv
|
|
Yeah! My friend attended and said hi to Gianni for me https://discord.com/channels/794206087879852103/806898911091753051/1434154028199317618, they loved your talk and said "They changed a lot" xD
Unfortunately I was on the wrong side of London, but they brought me a cone to wear in spirit, which another friend took this wonderful photo of
|
|
2025-11-03 10:41:57
|
nice nice! by "They changed a lot", do they mean the content of our presentation?
|
|
|
A homosapien
|
2025-11-03 10:46:02
|
I think they mean, SVT-AV1 was "changed a lot" for the better
|
|
|
jonnyawsom3
|
|
juliobbv
nice nice! by "They changed a lot", do they mean the content of our presentation?
|
|
2025-11-03 10:52:55
|
As Sapien said, changed a lot in SVT I assume. I just hope we can replicate some of that success with libjxl too, without starting from scratch 😅
|
|
|
juliobbv
|
2025-11-03 11:30:29
|
aah, that makes sense
|
|
2025-11-03 11:30:47
|
yeah, there were a lot of changes once you aggregate them all 😄
|
|
|
AccessViolation_
|
2025-11-04 12:42:23
|
|
|
2025-11-04 12:42:23
|
several times I've had people tell me that patches are a bad idea because it's possible two numbers will look sufficiently similar causing one to be replaced with the other. when they say that I like to say that it's technically possible to create a PNG encoder that replaces every detected instance of the moon with the jerma sus face
|
|
|
jonnyawsom3
|
2025-11-04 12:47:33
|
If they look that similar, either the original numbers were indistinguishable or you've set the quality so low you can't read them anyway :P
|
|
|
AccessViolation_
|
2025-11-04 12:51:48
|
heh yeah
|
|
2025-11-04 12:52:45
|
I'm not 100% on this, but iirc the idea to prevent this kind of incident in JXL was to have an absolute maximum per-pixel error instead of an average per-pixel error
|
|
2025-11-04 12:53:34
|
which would have also prevented the issue with the copiers that did this and messed up documents, resulting in any document touched by those devices not holding up in court <:KekDog:805390049033191445>
|
|
|
Demiurge
|
2025-11-04 06:07:35
|
Yeah, and it gave that compression scheme a bad name even though the problem was in how it was implemented
|
|
|
monad
|
|
The Thunk
<@794205442175402004> in response to https://discord.com/channels/794206087879852103/803645746661425173/1433496087196340375
|
|
2025-11-04 08:37:41
|
I am willing to run your encoder and decoder privately to provide third-party evidence of its results.
|
|
|
Exorcist
|
2025-11-04 09:02:33
|
I have to stop assuming good faith, and assume he faked the log to keep his reputation
|
|
|
Demiurge
|
|
monad
I am willing to run your encoder and decoder privately to provide third-party evidence of its results.
|
|
2025-11-04 09:18:21
|
If that's from /dev/random then something is wrong if that's the compression ratio
|
|
|
Lumen
|
2025-11-04 09:20:59
|
it could very well be an encoder/decoder pair that learns on the input
this is cheating ahah, but it would pass his test
(example: each time the encoder receives a new 512-bit sequence, it associates a corresponding shorter sequence and informs the decoder)
In practice, this just means a heavy encoder/decoder set, and it would stop working when supplied too many sequences, but their tests aren't on enough of them to break such a practice
|
|
|
Exorcist
|
2025-11-04 09:23:32
|
He claims "it can compress random"
Now he's deleted his chat context ☝️
|
|
2025-11-04 09:24:09
|
"compress random" is same impossible as "perpetual motion"
|
|
|
The Thunk
|
|
Lumen
it could very well be an encoder/decoder pair that learns on the input
this is cheating ahah, but it would pass his test
(example: each time the encoder receives a new 512-bit sequence, it associates a corresponding shorter sequence and informs the decoder)
In practice, this just means a heavy encoder/decoder set, and it would stop working when supplied too many sequences, but their tests aren't on enough of them to break such a practice
|
|
2025-11-04 10:00:09
|
I believe you would need to store extra information to do that. The blocks I (my code) create store any information required to recreate them within themselves. So, I account for everything within my storage print outs.
|
|
|
Exorcist
He claims "it can compress random"
Now he's deleted his chat context ☝️
|
|
2025-11-04 10:01:23
|
Yes, because of the feedback received. This is the correct channel to discuss in.
|
|
|
monad
I am willing to run your encoder and decoder privately to provide third-party evidence of its results.
|
|
2025-11-04 10:04:17
|
I am open to the idea. But, I need assurances. How can we meet in the middle on this?
|
|
|
Lumen
|
2025-11-04 10:07:25
|
I love the fact that you really think you're onto something but it is actually fundamentally impossible x) the question is more about finding the flaw in your reasoning
|
|
|
monad
|
|
The Thunk
I am open to the idea. But, I need assurances. How can we meet in the middle on this?
|
|
2025-11-04 10:10:43
|
Let me know what assurances you need.
|
|
|
Demiurge
|
2025-11-04 11:37:15
|
Something could be wrong that he's not aware of. We don't have to assume that he's a crafty devil. He might not have identified the problem yet.
|
|
|
AccessViolation_
|
2025-11-04 12:07:58
|
this is a long shot, but does anyone remember a blog post about video compression that used a top down open macbook as a test image
|
|
2025-11-04 12:08:46
|
in particular I remember it explaining why that image as a JPEG was 1500(?) times larger than it as a frame in a video. or something like that
|
|
2025-11-04 12:08:54
|
I've been trying to find it again
|
|
|
LMP88959
|
|
AccessViolation_
this is a long shot, but does anyone remember a blog post about video compression that used a top down open macbook as a test image
|
|
2025-11-04 12:26:50
|
https://sidbala.com/h-264-is-magic/
|
|
|
AccessViolation_
|
2025-11-04 12:27:57
|
heh no way. thanks!
|
|
|
LMP88959
|
|
jonnyawsom3
|
|
AccessViolation_
this is a long shot, but does anyone remember a blog post about video compression that used a top down open macbook as a test image
|
|
2025-11-04 12:51:26
|
The other day a Mac ad played on youtube and I was staring at the back thinking "That'd make for a great test image"
|
|
|
Exorcist
|
2025-11-04 02:27:07
|
Famous article...
that confuses frame-level DFT and block-level DCT...
and most readers don't notice
|
|
|
AccessViolation_
|
2025-11-04 03:07:00
|
<@794205442175402004> I had a wild idea - presumably cloudinary can know what mobile device is requesting an image based on the user agent string. you could take a client's image, create several jxl variants for different intensity target buckets, and serve the lowest one that matches or exceeds the max brightness of that display. in apps, you could even use a combination of the screen brightness setting + ambient light sensor to get the relative display brightness compared to the environment brightness, and serve proportionally lower quality images if the user has a dim screen in a bright environment for whatever reason
|
|
2025-11-04 03:09:00
|
not ideal for laptops, since those can have external displays attached
|
|
2025-11-04 03:11:08
|
I'll expect a check in the mail soon <:CatBlobPolice:805388337862279198>
|
|
|
jonnyawsom3
|
2025-11-04 03:22:05
|
Downside: You have to encode and serve x more files per image that probably only save a few percent over the highest intensity target
|
|
|
AccessViolation_
|
2025-11-04 03:24:12
|
aren't they already encoding a bunch of image variants for a bunch of different scenarios anyway?
|
|
|
jonnyawsom3
|
2025-11-04 03:28:15
|
Yeah, but for all the compute of making more JXL variants, you could just increase the effort level and give all devices smaller files with the same quality
|
|
|
AccessViolation_
|
2025-11-04 03:28:27
|
I guess the default intensity target for sRGB of 255 (iirc) is already pretty low, and most mobile devices exceed it
|
|
|
Yeah, but for all the compute of making more JXL variants, you could just increase the effort level and give all devices smaller files with the same quality
|
|
2025-11-04 03:32:02
|
hm yeah that's fair
|
|
2025-11-04 03:32:49
|
I think if the default intensity target for sRGB was a lot higher it could have been worth it
|
|
|
_wb_
|
|
AccessViolation_
<@794205442175402004> I had a wild idea - presumably cloudinary can know what mobile device is requesting an image based on the user agent string. you could take a client's image, create several jxl variants for different intensity target buckets, and serve the lowest one that matches or exceeds the max brightness of that display. in apps, you could even use a combination of the screen brightness setting + ambient light sensor to get the relative display brightness compared to the environment brightness, and serve proportionally lower quality images if the user has a dim screen in a bright environment for whatever reason
|
|
2025-11-04 03:44:00
|
User agent string alone is not really enough — phones (and laptops) often have automatically adjusting brightness, so when outdoors at noon the display can be 1000+ nits and when doomscrolling in bed it may be 50 nits. So there's very little you can conclude from knowing the device. What would be useful is a new client hint that gives clues about current screen intensity when making image requests. Also HDR vs SDR capabilities of the current screen would be useful to know. We have been pushing to get such extra client hints on the web platform, but this kind of thing always gets major pushback from security/privacy folks since it increases passive fingerprinting possibilities.
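As a sketch, such a hint could ride the existing Client Hints mechanism — `Accept-CH` is a real header, but the display-luminance hint names below are invented for illustration, since no such hint has shipped:

```
Server:  Accept-CH: Sec-CH-Display-Luminance, Sec-CH-Display-HDR
Client:  Sec-CH-Display-Luminance: 400
         Sec-CH-Display-HDR: 1
```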
|
|
|
AccessViolation_
|
2025-11-04 03:48:43
|
that makes sense! to be clear I was assuming the maximum brightness, so if the display's max brightness is 1000 nits you'd be serving from the 1000 nits bucket even if the display was set to 100 nits. but yeah, better client hints would be ideal for this
|
|
2025-11-04 03:49:59
|
I understand the pushback from a privacy perspective, but I really don't get why you wouldn't accept it and optionally turn it off. it's not like some of the other proposals which clearly have very little purpose outside of user tracking
|
|
|
lonjil
|
2025-11-04 03:50:58
|
even whether the feature is on or off is a tracking target
|
|
|
Exorcist
|
2025-11-04 03:53:08
|
SDR is all you need
https://www.yedlin.net/DebunkingHDR/
|
|
|
AccessViolation_
|
|
lonjil
even whether the feature is on or off is a tracking target
|
|
2025-11-04 03:55:22
|
yes that's sort of true, though also in practice I don't think it matters that much. it's a bit of entropy, but since this feature gives you a lot more entropy to track with when it's on, basically everyone who cares about privacy will have it off, and so it doesn't increase uniqueness of the set of people that have privacy concerns anyway
|
|
|
_wb_
User agent string alone is not really enough — phones (and laptops) often have automatically adjusting brightness, so when outdoors at noon the display can be 1000+ nits and when doomscrolling in bed it may be 50 nits. So there's very little you can conclude from knowing the device. What would be useful is a new client hint that gives clues about current screen intensity when making image requests. Also HDR vs SDR capabilities of the current screen would be useful to know. We have been pushing to get such extra client hints on the web platform, but this kind of thing always gets major pushback from security/privacy folks since it increases passive fingerprinting possibilities.
|
|
2025-11-04 04:00:31
|
by coincidence I sent this in another server right before you wrote that. I guess I know why now
|
|
|
Quackdoc
|
|
Exorcist
SDR is all you need
https://www.yedlin.net/DebunkingHDR/
|
|
2025-11-04 04:38:40
|
true, just use 16 bit
|
|
|
spider-mario
|
|
Exorcist
SDR is all you need
https://www.yedlin.net/DebunkingHDR/
|
|
2025-11-04 04:51:14
|
from what I remember of this talk, wasn’t it along the lines of “if you control the entire display chain down to the surrounding luminance, you can just decide that SDR white is very bright and grade accordingly”?
|
|
2025-11-04 04:52:03
|
if you start from that premise then sure, PQ and HLG don’t solve much more than potential posterisation
|
|
|
Exorcist
|
2025-11-04 05:35:40
|
his idea is:
- SDR is relative
- HDR is absolute
- human eye is relative, so just use SDR
- only use HDR if you can control the entire display chain (studio or cinema)
|
|
|
Cacodemon345
|
2025-11-04 05:36:51
|
SDR will also quickly run into the limits of the 0-255 byte value range.
|
|
|
lonjil
|
2025-11-04 05:37:24
|
Just do 16-bit SDR
|
|
|
Exorcist
|
2025-11-04 05:38:59
|
Key points summary (in Chinese):
https://gist.github.com/LukeNewNew/5c7a9ae6ddbac5527e0efa48b54531fe
It's worth reading; you can use an LLM to translate it
|
|
|
Cacodemon345
|
2025-11-04 05:39:35
|
I'd prefer TL;DRs to LLM'ing this.
|
|
|
Quackdoc
|
|
Exorcist
his idea is:
- SDR is relative
- HDR is absolute
- human eye is relative, so just use SDR
- only use HDR if you can control the entire display chain (studio or cinema)
|
|
2025-11-04 05:40:09
|
I haven't watched it in a long time, but if this is the takeaway then he is completely wrong
|
|
2025-11-04 05:40:16
|
HLG is relative
|
|
2025-11-04 05:40:32
|
PQ is "relative to metadata" so to speak
|
|
|
Cacodemon345
|
|
lonjil
Just do 16-bit SDR
|
|
2025-11-04 05:40:36
|
That will restrict it to 16-bit red-green-blue integer values since floating point stuff assumes per-pixel lighting capability.
|
|
|
Quackdoc
|
|
Quackdoc
PQ is "relative to metadata" so to speak
|
|
2025-11-04 05:40:59
|
Dolby doing some weird shit
|
|
|
lonjil
|
|
Cacodemon345
That will restrict it to 16-bit red-green-blue integer values since floating point stuff assumes per-pixel lighting capability.
|
|
2025-11-04 05:41:06
|
Not sure what you mean
|
|
|
Cacodemon345
|
|
lonjil
Not sure what you mean
|
|
2025-11-04 05:41:35
|
I mean, how do you handle floating-point values beyond 1.0 for 16-bit SDR?
|
|
|
Quackdoc
|
2025-11-04 05:42:06
|
2.0 supremacy
|
|
|
lonjil
|
2025-11-04 05:42:13
|
um. let's all use 16-bit floating point 😄
|
|
|
Cacodemon345
|
|
Cacodemon345
I mean, how do you handle floating-point values beyond 1.0 for 16-bit SDR?
|
|
2025-11-04 05:42:17
|
The only thing I can think of is linearly mapping [0, 1] to black and white point ranges.
|
|
|
lonjil
|
2025-11-04 05:42:18
|
OpenEXR time
|
|
|
Quackdoc
|
2025-11-04 05:42:39
|
I don't think I've seen an issue with 16bit float in person before
|
|
|
Cacodemon345
|
2025-11-04 05:42:54
|
And mapping values beyond 1.0 to rest of HDR range.
|
|
|
Quackdoc
|
|
Cacodemon345
And mapping values beyond 1.0 to rest of HDR range.
|
|
2025-11-04 05:43:05
|
scRGB chad?
|
|
|
lonjil
|
|
Cacodemon345
The only thing I can think of is linearly mapping [0, 1] to black and white point ranges.
|
|
2025-11-04 05:43:30
|
I mean, isn't that how SDR usually works? 0 is black and 1 is white and it's all relative n stuff
|
|
2025-11-04 05:44:07
|
so 2^16-1 will output the maximum brightness for whatever brightness your monitor is set to, and 0 will be as dark as possible
|
|
|
Quackdoc
|
|
lonjil
I mean, isn't that how SDR usually works? 0 is black and 1 is white and it's all relative n stuff
|
|
2025-11-04 05:44:37
|
SDR vs HDR doesn't change that
|
|
|
Exorcist
|
2025-11-04 05:44:42
|
SDR has a simple tone mapping --- gamma 2.2
|
|
|
Quackdoc
|
|
Exorcist
SDR has a simple tone mapping --- gamma 2.2
|
|
2025-11-04 05:45:00
|
2.4 supremacy
|
|
|
Cacodemon345
|
|
lonjil
so 2^16-1 will output the maximum brightness for whatever brightness your monitor is set to, and 0 will be as dark as possible
|
|
2025-11-04 05:45:20
|
But many people won't really desire that range, instead desiring overbrighting shit where pixels can be brighter than the current brightness of monitors/displays.
|
|
|
lonjil
|
|
Quackdoc
SDR vs HDR doesn't change that
|
|
2025-11-04 05:45:31
|
then values above 1.0 don't seem relevant
|
|
|
Cacodemon345
But many people won't really desire that range, instead desiring overbrighting shit where pixels can be brighter than the current brightness of monitors/displays.
|
|
2025-11-04 05:45:38
|
ow my eyes
|
|
|
Cacodemon345
|
2025-11-04 05:45:45
|
For the record my Xiaomi monitor offers no brightness setting with HDR enabled.
|
|
|
Quackdoc
|
|
lonjil
then values above 1.0 don't seem relevant
|
|
2025-11-04 05:46:02
|
they are with scRGB; it's a per-transfer/colorspace thing, not an SDR vs HDR thing really
|
|
2025-11-04 05:46:19
|
SDR vs HDR is a meme with no definition
|
|
|
Cacodemon345
|
|
Cacodemon345
For the record my Xiaomi monitor offers no brightness setting with HDR enabled.
|
|
2025-11-04 05:46:31
|
It's set to full brightness always with pixels always being bright/dark per their color values.
|
|
2025-11-04 05:46:48
|
So it's a legitimate question.
|
|
|
lonjil
|
|
Quackdoc
they are with scRGB; it's a per-transfer/colorspace thing, not an SDR vs HDR thing really
|
|
2025-11-04 05:46:57
|
right for scRGB you'd just use a different mapping. Actually, doesn't the scRGB spec have a mapping that places all the values in 0 to 1?
|
|
|
Quackdoc
|
|
lonjil
right for scRGB you'd just use a different mapping. Actually, doesn't the scRGB spec have a mapping that places all the values in 0 to 1?
|
|
2025-11-04 05:47:16
|
not that I am aware of
|
|
|
Exorcist
|
|
Quackdoc
2.4 supremacy
|
|
2025-11-04 05:47:41
|
https://imgs.xkcd.com/comics/standards_2x.png
|
|
|
Cacodemon345
|
|
lonjil
ow my eyes
|
|
2025-11-04 05:48:58
|
Some gamers might prefer getting flashed lol.
|
|
2025-11-04 05:49:09
|
Ultra-realistic AAA experience.
|
|
|
lonjil
|
2025-11-04 05:49:31
|
realistic flashbang in call of duty 9
|
|
|
Cacodemon345
|
2025-11-04 05:50:37
|
But yeah I do have honest questions about HDR and SDR and the floating point mapping behaviour in current SDR and HDR displays.
|
|
|
lonjil
|
2025-11-04 05:53:45
|
I don't think fp vs int is particularly relevant
|
|
2025-11-04 05:54:31
|
If you set the 0.0 point to 0, and the 1.0 point to 8000, and use a linear mapping, then the 2.0 point would be 16000, but a non-linear mapping could do anything else
|
|
2025-11-04 05:55:00
|
Like "log" spaces popular in videography
|
|
|
Cacodemon345
|
2025-11-04 05:57:32
|
My idea of linear mapping was quite literally [0, 1] for the standard sRGB SDR range and [1, x] (where `x` is the maximum white point for displays allowed by the system and/or hardware) for everything else.
|
|
|
lonjil
|
2025-11-04 05:57:46
|
mm
|
|
2025-11-04 05:58:40
|
isn't this similar to how HLG works? Though it isn't linear.
|
|
|
Quackdoc
not that I am aware of
|
|
2025-11-04 05:59:12
|
seems not. I was probably misremembering something about the signed 16 bit encoding in the standard.
|
|
|
Cacodemon345
|
2025-11-04 05:59:37
|
If Wikipedia is correct with this image it'd sum up what I basically wanted.
|
|
2025-11-04 06:00:01
|
Except HLG quite doesn't fit it.
|
|
2025-11-04 06:01:56
|
It's likely correct for raw display output but floating point values in the software can cross the [0, 1] range which is what I was curious about.
|
|
2025-11-04 06:03:34
|
```
HLG is designed to be better-suited for television broadcasting, where the metadata required for other HDR formats is not backward compatible with non-HDR displays, consumes additional bandwidth, and may also become out-of-sync or damaged in transmission. HLG defines a non-linear optical-electro transfer function, in which the lower half of the signal values use a gamma curve and the upper half of the signal values use a logarithmic curve.[20][21] In practice, the signal is interpreted as normal by standard-dynamic-range displays (albeit capable of displaying more detail in highlights), but HLG-compatible displays can correctly interpret the logarithmic portion of the signal curve to provide a wider dynamic range.[22][23][24] In contrast with the other HDR formats it does not use metadata.[25]
The HLG transfer function is backward compatible with SDR's gamma curve. However, HLG is commonly used with Rec. 2020 color primaries which produce a de-saturated image with visible hue shifts on non-compatible devices.[26] HLG is therefore backward compatible with SDR-UHDTV and will show color distortion on common SDR devices that only support Rec. 709 color primaries.
```
Yeah, fits the bill for existing programs.
|
|
|
lonjil
|
2025-11-04 06:12:27
|
silly proposal: 12-bit is probably enough for SDR, so let's say that 2^12-1 is 1.0 white
Then, for values above that, we double the intensity for each 2^12 added to the value, so e.g. 2^13-1 is 2.0. Equivalently, adding 1 to the integer value multiplies the real brightness value by the 4096th root of 2, which is about 1.000169.
This gives us 12 bits of precision for the SDR range, and 12 bits of precision for each doubling above that. 15 doublings above white fit in 16 bits, i.e. the maximum brightness is ~32k times brighter than the SDR white point.
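One way to write that mapping down, for code values $v$ at or above the white point $v_w = 2^{12}-1 = 4095$:

$$b(v) = 2^{(v - v_w)/2^{12}}, \qquad b(4095) = 1,\quad b(8191) = 2,\quad \frac{b(v+1)}{b(v)} = 2^{1/4096} \approx 1.000169$$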
|
|
|
_wb_
|
2025-11-04 06:13:43
|
isn't that more or less how HLG works?
|
|
|
lonjil
|
2025-11-04 06:14:32
|
basically
|
|
|
_wb_
|
2025-11-04 06:15:14
|
(except with lower precision and fewer doublings)
|
|
2025-11-04 06:16:33
|
10-bit HLG is kind of like 8-bit SDR + 2 bits to cover a range corresponding to 3 doublings and then some
|
|
2025-11-04 06:17:49
|
I like your simpler formula better though, makes it easy to do the math
|
|
|
lonjil
|
2025-11-04 06:17:50
|
yeah I think it's like, 12x reference white?
|
|
2025-11-04 06:18:56
|
Maybe this will be the formula for my upcoming 16-bit FPGA codec 😄
|
|
|
AccessViolation_
|
|
Exorcist
https://imgs.xkcd.com/comics/standards_2x.png
|
|
2025-11-04 06:30:42
|
I like how for two of the three examples they give, we *do* now actually have universal standards that solved the issues with competing standards
|
|
|
spider-mario
|
2025-11-04 06:38:54
|
(USB-C and UTF-8?)
|
|
2025-11-04 07:04:54
|
at the risk of sounding like an ad, I recently travelled with https://b2b.4smarts.eu/Charger-GaN-Flex-Pro-200W-4-USB-C-white and it was perfect
|
|
|
AccessViolation_
|
2025-11-04 07:05:37
|
ooo
|
|
2025-11-04 07:05:42
|
big fan of GaN chargers
|
|
|
spider-mario
|
2025-11-04 07:06:32
|
image 8 is a table of which ports get how much power depending on which ones you use
|
|
|
lonjil
|
2025-11-04 07:07:52
|
dang, the charger I have at my desk is only 100W
|
|
|
AccessViolation_
|
2025-11-04 07:07:58
|
the four USB-C power out ports will be perfect for my laptop with four USB-C power in ports
|
|
|
lonjil
|
2025-11-04 07:08:23
|
A 200W one would be great so I can charge my laptop, my phone, my ipad, and use my soldering iron all at the same time
|
|
2025-11-04 07:09:06
|
although, I think I might buy a 120W usb soldering iron, so the 100W limit per port of that one wouldn't be enough
|
|
|
AccessViolation_
|
2025-11-04 07:09:15
|
I haven't actually tried charging my laptop with more than one cable at a time. I wonder what would happen. I'm assuming it'd just pick one
|
|
|
lonjil
A 200W one would be great so I can charge my laptop, my phone, my ipad, and use my soldering iron all at the same time
|
|
2025-11-04 07:09:38
|
pinecil soldering iron, perchance?
|
|
|
spider-mario
|
|
lonjil
although, I think I might buy a 120W usb soldering iron, so the 100W limit per port of that one wouldn't be enough
|
|
2025-11-04 07:09:42
|
all the more since it drops to 65W if you use all ports (65 + 65 + 35 + 35)
|
|
|
lonjil
|
|
AccessViolation_
pinecil soldering iron, perchance?
|
|
2025-11-04 07:09:51
|
yeah
|
|
|
AccessViolation_
|
2025-11-04 07:09:56
|
nice, I have one too
|
|
|
lonjil
|
|
spider-mario
all the more since it drops to 65W if you use all ports (65 + 65 + 35 + 35)
|
|
2025-11-04 07:10:05
|
😱
|
|
|
spider-mario
|
2025-11-04 07:10:11
|
you have to leave one of the first two free to get 100W from the other
|
|
2025-11-04 07:10:30
|
(100 + 50 + 50 if you free up the first or the second)
|
|
|
_wb_
|
2025-11-04 07:24:43
|
now if only it would come with a universal power plug and not just EU/UK...
|
|
|
AccessViolation_
|
2025-11-04 07:25:15
|
I'm sorry, was I not clear when I said "USB-C all the things" ⚡
|
|
2025-11-04 07:25:28
|
high voltage lines? usb-c
|
|
|
spider-mario
|
|
_wb_
now if only it would come with a universal power plug and not just EU/UK...
|
|
2025-11-04 07:28:00
|
the base plug (without adapter) is type A (US/CA/MX/JP)
|
|
2025-11-04 07:28:32
|
(the trip I alluded to was to Montréal)
|
|
|
_wb_
|
2025-11-04 07:29:44
|
ok, but for Australia that won't work. Also South Africa...
|
|
2025-11-04 07:30:39
|
it's really annoying how many socket types there are in the world
|
|
|
spider-mario
|
2025-11-04 07:30:41
|
South Africa’s outlets are apparently compatible with type C plugs
|
|
2025-11-04 07:31:11
|
ah, wait, South Africa has two
|
|
2025-11-04 07:31:18
|
type M and type N
|
|
2025-11-04 07:31:23
|
only type N is compatible with type C
|
|
|
_wb_
|
2025-11-04 07:31:41
|
type M is the common one
|
|
|
spider-mario
|
2025-11-04 07:31:46
|
> Although type M sockets are (very, very) slowly being phased out, they are still the de facto standard in South Africa. Type C sockets are ubiquitous as well, and both M and C can be found in every home and hotel. Buildings constructed after 2018 also have the new type N sockets, which are fully compatible with the commonly used type C plugs.
|
|
|
_wb_
|
2025-11-04 07:31:48
|
at least as far as I remember
|
|
|
spider-mario
|
2025-11-04 07:32:48
|
(https://www.worldstandards.eu/electricity/plug-voltage-by-country/south-africa/)
|
|
|
_wb_
|
2025-11-04 07:33:53
|
I lived in Cape Town for 7 months just before we moved to our current house in Belgium (it was still occupied and we had already sold our previous house), and I remember the sockets being annoyingly different there from most of the rest of the world
|
|
2025-11-04 07:34:54
|
Same thing with Australia, they're also annoyingly different from anything else. I'm still contemplating if I should go to the upcoming January meeting of JPEG in Sydney. That would be the third time I go to a JPEG meeting in Sydney.
|
|
2025-11-04 07:40:32
|
I wonder what will happen first: converging power socket types, or just every non-fixed appliance (not ovens, dishwashers, etc) starting to use USB-C as a power source and people installing USB power sockets instead of the traditional ones (like many hotels are already doing).
|
|
|
Quackdoc
|
|
lonjil
silly proposal: 12-bit is probably enough for SDR, so let's say that 2^12-1 is 1.0 white
Then, for values above that, we double the intensity for each 2^12 added to the value, so e.g. 2^13-1 is 2.0. Equivalently, adding 1 to the integer value is equivalent to multiplying the real brightness value by the 4096th root of 2, which is 1.000169.
This gives us 12 bits of precision for the SDR range, and 12 bits of precision for each doubling above that. 2^4=16 doublings are possible, i.e. the maximum brightness is 65k times brighter than the SDR white point.
|
|
2025-11-04 08:11:02
|
I would classify that as HDR
|
|
2025-11-04 08:11:14
|
HDR vs SDR has no real definitive boundary
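The quoted scheme is easy to prototype. A minimal sketch, taking the "adding 1 multiplies the brightness by the 4096th root of 2" rule literally for values at and above SDR white (names are illustrative, not from any spec):
```python
import math

SDR_WHITE = 2**12 - 1  # code value 4095 represents 1.0 (SDR white)

def code_to_linear(v: int) -> float:
    # Each 4096 codes above white is one doubling of brightness.
    return 2.0 ** ((v - SDR_WHITE) / 4096)

def linear_to_code(y: float) -> int:
    return round(SDR_WHITE + 4096 * math.log2(y))

print(code_to_linear(2**13 - 1))  # 2.0, matching the quoted example
```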
|
|
|
lonjil
|
|
Quackdoc
|
2025-11-04 08:13:01
|
If I were to give HDR vs SDR a definition based on how it's used in marketing,
I would say that HDR is a color space, or more specifically a transfer function, that can produce perceptually smooth brightness steps at a given bit depth up to at least 1000 nits in a reference environment.
|
|
2025-11-04 08:14:01
|
I feel like this is the only definition so to speak that would adequately cover what people think of HDR and how HDR is used in literature.
|
|
|
spider-mario
|
2025-11-04 08:19:27
|
a strict definition of dynamic range makes no reference to the smoothness of steps
|
|
2025-11-04 08:19:37
|
but then I guess we’re getting into “is a hotdog a sandwich” territory
|
|
2025-11-04 08:19:53
|
with “structural purist”, etc.
|
|
2025-11-04 08:20:41
|
(you’ve probably seen this image before: https://miro.medium.com/v2/1*cSsYSEIQ4OkaYZU3lDcVLQ.jpeg)
|
|
2025-11-04 08:21:34
|
structural purist, ingredient rebel: “0.01 to 1000-nit greyscale is HDR”
|
|
|
spider-mario
(you’ve probably seen this image before: https://miro.medium.com/v2/1*cSsYSEIQ4OkaYZU3lDcVLQ.jpeg)
|
|
2025-11-04 08:23:04
|
for sandwiches, I think I lean towards “structural neutral, ingredient rebel”
|
|
|
AccessViolation_
|
2025-11-04 08:23:34
|
I'm pretty sure the third column is considered a balanced american breakfast
|
|
2025-11-04 08:24:16
|
I hope I'm misremembering but I feel like I've heard people say they eat pop tarts not as a snack but as either breakfast or lunch
|
|
2025-11-04 08:24:45
|
actually, that might have been "hot pockets"
|
|
|
spider-mario
for sandwiches, I think I lean towards “structural neutral, ingredient rebel”
|
|
2025-11-04 08:26:22
|
ice cream??
|
|
|
spider-mario
|
2025-11-04 08:26:49
|
there’s an Android version named “Ice Cream Sandwich”, so why not
|
|
2025-11-04 08:27:16
|
https://en.wikipedia.org/wiki/Android_Ice_Cream_Sandwich
|
|
|
AccessViolation_
|
2025-11-04 08:28:10
|
I could go for a hardline traditionalist sandwich to be honest, that looks delicious. with some replacements for meat I guess
|
|
|
lonjil
|
|
Quackdoc
If I were to give HDR vs SDR a definition based on how it's used in marketing,
I would say that HDR is a color space, or more specifically a transfer function, that can produce perceptually smooth brightness steps at a given bit depth up to at least 1000 nits in a reference environment.
|
|
2025-11-04 08:28:21
|
So 16-bit sRGB (plain ol' sRGB) displayed with the white point at 2000 nits is HDR.
Hooray, we don't need anything new.
|
|
|
spider-mario
|
2025-11-04 08:28:51
|
the bread looks a bit more cooked than I like
|
|
|
Exorcist
|
2025-11-04 08:29:02
|
No, tone mapping is not a simple curve
|
|
2025-11-04 08:29:25
|
You need a local tone mapping algorithm for the human eye
|
|
2025-11-04 08:30:30
|
"local" means compress global contrast and keep local contrast
|
|
2025-11-04 08:31:06
|
and sharpening by convolution is not good enough (it causes ringing artifacts)
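The "compress global contrast, keep local contrast" idea can be sketched in a few lines. This is a simplified Durand-style base/detail split, with a Gaussian blur standing in for the edge-aware (e.g. bilateral) filter that real implementations use precisely to avoid halos and ringing near edges:
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_tonemap(luminance: np.ndarray, compression: float = 0.5) -> np.ndarray:
    """Compress global contrast, keep local contrast (simplified sketch)."""
    log_l = np.log10(np.maximum(luminance, 1e-6))
    base = gaussian_filter(log_l, sigma=16)  # large-scale brightness
    detail = log_l - base                    # local contrast, kept as-is
    return 10.0 ** (compression * base + detail)
```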
|
|
|
_wb_
|
2025-11-04 08:31:12
|
16-bit sRGB at 2000 nits is enough to render an HDR experience, but of course you'd need to make sure to use sensible colors; you likely don't want to render a PDF document with black text on a 2000-nit background
|
|
|
Quackdoc
|
|
lonjil
So 16-bit sRGB (plain ol' sRGB) displayed with the white point at 2000 nits is HDR.
Hooray, we don't need anything new.
|
|
2025-11-04 08:31:52
|
yeah, I don't know what I would consider HDR, but I feel like I haven't heard an adequate explanation that comes closer than that
|
|
2025-11-04 08:32:12
|
I don't like it, but frankly, I don't really want to ponder the topic.
|
|
|
AccessViolation_
|
2025-11-04 08:33:11
|
I was actually ready to enjoy that controversial HDR video until I saw it's two hours
|
|
|
Exorcist
|
|
_wb_
16-bit sRGB at 2000 nits is enough to render an HDR experience, but of course you'd need to make sure to use sensible colors; you likely don't want to render a PDF document with black text on a 2000-nit background
|
|
2025-11-04 08:37:30
|
This is basically how Steve Yedlin calibrates HDR: middle white = paper white
|
|
2025-11-04 08:40:08
|
The only real benefit of HDR is that it can express a flashlight or a spark, but flashlights and sparks are bad for eye health
|
|
|
_wb_
|
2025-11-04 08:43:16
|
HDR in practice to me is just when you have several stops of headroom above the brightness of "a white paper background", while in SDR it's reasonable to do black text on a max-white background. In my head, SDR is the range of non-light-emitting, non-shiny stuff like anything printed on paper and viewed with diffuse light, while HDR allows capturing/reproducing the dynamic range needed for shiny stuff, light-emitting stuff, or to be able to show both dark detail and bright detail at the same time, which is just impossible with the limited range of SDR — e.g. an image in twilight or close to night that contains both the moon (a super bright object) and some dark landscape.
|
|
|
Quackdoc
|
|
Exorcist
The only real benefit of HDR is that it can express a flashlight or a spark, but flashlights and sparks are bad for eye health
|
|
2025-11-04 08:45:24
|
weird take tbh
|
|
|
AccessViolation_
|
2025-11-04 08:49:06
|
the idea that SDR works as HDR if your room and the graphics themselves are dark enough reminds me of some anime I used to watch, that made everything dark 10 seconds before there would be a bright flash to make it stand out. it wasn't a gradual decrease in brightness either, it was pretty bad
|
|
|
Quackdoc
|
|
AccessViolation_
the idea that SDR works as HDR if your room and the graphics themselves are dark enough reminds me of some anime I used to watch, that made everything dark 10 seconds before there would be a bright flash to make it stand out. it wasn't a gradual decrease in brightness either, it was pretty bad
|
|
2025-11-04 08:52:42
|
In the end, that is how color works. It's fairly limiting, but you can get by with some pretty decent effects like that. The issue is you start running into banding.
|
|
2025-11-04 08:52:47
|
But that's where bit depth comes in.
|
|
2025-11-04 08:53:58
|
Almost any transfer can be made sufficiently smooth by just throwing more bits at the problem.
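A rough way to see that with a plain gamma curve: the relative size of adjacent luminance steps around a fixed level (10% of peak here) roughly halves with each extra bit. Illustrative numbers only, not a visibility model:
```python
def relative_step_at(level: float, bits: int, gamma: float = 2.2) -> float:
    """Relative luminance step between the two code values nearest `level`."""
    n = 2**bits - 1
    i = round(level ** (1 / gamma) * n)  # code whose luminance is ~level
    lo, hi = (i / n) ** gamma, ((i + 1) / n) ** gamma
    return hi / lo - 1

for bits in (8, 10, 12):
    print(bits, relative_step_at(0.1, bits))  # each extra bit ~halves the step
```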
|
|
|
AccessViolation_
|
2025-11-04 08:54:11
|
yeah, I agree that this is how it works in principle
|
|
2025-11-04 08:57:32
|
I'm not convinced of the practicality of the "SDR is HDR if you make it" idea, but I haven't watched the presentation so I can't comment
|
|
|
spider-mario
|
2025-11-04 09:17:33
|
I used it to get a feel for HDR before I had a more convenient HDR screen than my TV
|
|
2025-11-04 09:18:28
|
downloaded LG’s “Colors of Journey” demo video and tonemapped it to a profile of my monitor at max brightness using DisplayCAL’s “3DLUT MAKER” + ffmpeg’s “lut3d” filter
|
|
2025-11-04 09:18:46
|
also Samsung’s “Travel with my Pet”
|
|
2025-11-04 09:18:54
|
but I think I prefer Colors of Journey
|
|
2025-11-04 09:19:25
|
https://4kmedia.org/lg-colors-of-journey-demo/
https://4kmedia.org/samsung-travel-with-my-pet-hdr-uhd-4k-demo/
|
|
2025-11-04 09:19:34
|
(you can also find YouTube uploads)
|
|
|
Quackdoc
|
|
AccessViolation_
I'm not convinced of the practicality of the "SDR is HDR if you make it" idea, but I haven't watched the presentation so I can't comment
|
|
2025-11-04 09:20:10
|
I mean, no one really seems to know what HDR vs SDR even is in the first place
|
|
|
spider-mario
|
2025-11-04 09:21:27
|
https://youtu.be/IcAe78IrO-o?t=1m1s urk
|
|
|
AccessViolation_
|
2025-11-04 09:27:06
|
I'm really in no position to talk about this but I think people get lost in meanings or definitions and forget the utility. you can have endless debates about whether C and Rust are high or low level languages but you don't need to know that to appreciate their use cases. and while interesting, the definitions of HDR and SDR don't matter that much, what matters is useful, expected behavior that you can work with when you see the label
|
|
2025-11-04 09:29:22
|
but what do I know, maybe HDR *is* a scam <:KekDog:805390049033191445>
|
|
|
Quackdoc
|
|
AccessViolation_
I'm really in no position to talk about this but I think people get lost in meanings or definitions and forget the utility. you can have endless debates about whether C and Rust are high or low level languages but you don't need to know that to appreciate their use cases. and while interesting, the definitions of HDR and SDR don't matter that much, what matters is useful, expected behavior that you can work with when you see the label
|
|
2025-11-04 09:37:37
|
what is the utility if we don't even know what it is?
|
|
2025-11-04 09:37:51
|
*marketing* is the answer
|
|
|
jonnyawsom3
|
|
spider-mario
downloaded LG’s “Colors of Journey” demo video and tonemapped it to a profile of my monitor at max brightness using DisplayCAL’s “3DLUT MAKER” + ffmpeg’s “lut3d” filter
|
|
2025-11-04 09:42:55
|
Would the `--display_nits` djxl option achieve something similar for images? (If it worked)
|
|
|
AccessViolation_
|
2025-11-04 09:44:29
|
oh that'd actually be really convenient for inspecting HDR images on displays that don't support it 👀
|
|
|
spider-mario
|
|
jonnyawsom3
Would the `--display_nits` djxl option achieve something similar for images? (If it worked)
|
|
2025-11-04 09:45:11
|
yes, that’s pretty much its purpose
|
|
2025-11-04 09:45:18
|
given that it at least used to work, maybe we could bisect
|
|
2025-11-04 09:45:54
|
(ugh, was too quick to press [up, ‘t’, enter] to fix the message so it edited the previous one instead)
|
|
|
The Thunk
|
|
monad
Let me know what assurances you need.
|
|
2025-11-04 11:06:24
|
Do you know how to code in assembly, C, and/or C++?
|
|
|
Quackdoc
|
|
spider-mario
given that it at least used to work, maybe we could bisect
|
|
2025-11-05 01:03:56
|
automated testing time, I think I have a script to automate bisecting somewhere. the decoded image should have a different luminance, right? if so, automating it should be easy
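A sketch of what such a `git bisect run` helper could look like, assuming an already-built djxl and a test image; the binary path, the nits value, and the 0.5 threshold are all assumptions for illustration:
```python
#!/usr/bin/env python3
# Hypothetical `git bisect run` helper: exit 0 ("good") if --display_nits
# visibly changes the decoded output, exit 1 ("bad") if it has no effect.
import subprocess, sys
import numpy as np
from PIL import Image

DJXL = "build/tools/djxl"  # assumed path to the freshly built decoder

subprocess.run([DJXL, "test.jxl", "ref.png"], check=True)
subprocess.run([DJXL, "--display_nits", "100", "test.jxl", "mapped.png"], check=True)

ref = np.asarray(Image.open("ref.png"), dtype=np.float32)
mapped = np.asarray(Image.open("mapped.png"), dtype=np.float32)
sys.exit(0 if np.abs(ref - mapped).mean() > 0.5 else 1)
```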
|
|
|
monad
|
|
The Thunk
Do you know how to code in assembly, C, and/or C++?
|
|
2025-11-05 04:12:30
|
I was a hobbyist in C++. I've mostly forgotten my C.
|
|
|
The Thunk
|
|
monad
I was a hobbyist in C++. I've mostly forgotten my C.
|
|
2025-11-05 02:36:12
|
If I shared anything with you, you would have to be bound to the project (actually contribute to it) and sign/agree to legal documents. Also, provide proof of prior works and expertise. Are you located in the USA? These seem at face value to be overbearing commitments you would have to adhere to. So, I completely understand your unwillingness to do so. Which is why I am currently waiting on peer review from another individual.
|
|
|
Lumen
|
2025-11-05 02:48:44
|
XDDD
|
|
2025-11-05 02:48:51
|
the joke is going far!
|
|
|
monad
|
|
The Thunk
If I shared anything with you, you would have to be bound to the project (actually contribute to it) and sign/agree to legal documents. Also, provide proof of prior works and expertise. Are you located in the USA? These seem at face value to be overbearing commitments you would have to adhere to. So, I completely understand your unwillingness to do so. Which is why I am currently waiting on peer review from another individual.
|
|
2025-11-05 03:13:52
|
A limited license to run the thing would make sense, but I have no personal interest in the project. As it stands, it's just one person's claims which appear to defy physics. People are dismissing it for good reason, I was just offering to provide some clarity at my own cost given your insistence to share despite failure to demonstrate proof. Anyway, good luck, I hope the effort is valuable to you.
|
|
|
Exorcist
|
2025-11-05 03:19:07
|
Submit your secret power here:
https://research.google/careers/
|
|
|
AccessViolation_
|
2025-11-05 03:28:31
|
it's not like google needs you to be an employee to patent your work
|
|
2025-11-05 03:28:42
|
or microsoft for that matter. pouring one out for ANS with adaptive probabilities
|
|
|
Exorcist
|
2025-11-05 03:33:36
|
I want him to improve YouTube quality (if he can)🤣
|
|
|
AccessViolation_
|
|
The Thunk
If I shared anything with you, you would have to be bound to the project (actually contribute to it) and sign/agree to legal documents. Also, provide proof of prior works and expertise. Are you located in the USA? These seem at face value to be overbearing commitments you would have to adhere to. So, I completely understand your unwillingness to do so. Which is why I am currently waiting on peer review from another individual.
|
|
2025-11-05 03:34:56
|
shouldn't you work out where the bug is before you get it peer reviewed
|
|
2025-11-05 03:36:17
|
there seems to be a flaw, because the behavior it appears to be exhibiting is provably impossible
|
|
|
_wb_
|
2025-11-05 03:39:59
|
If you make an outrageous claim (40% compression on random bits), but aren't willing to share any proof besides "believe me, it's true", it's hard to take it seriously. Maybe you don't understand how outrageous the claim actually is, but that doesn't make it any more believable.
|
|
|
AccessViolation_
|
2025-11-05 03:42:31
|
I think Jan Sloot might have been up for helping them with development, I don't know about many other compression specialists :p
|
|
2025-11-05 03:45:41
|
they're being a bit mysterious but this story has some striking similarities to that of the [Sloot Digital Coding System](<https://en.wikipedia.org/wiki/Sloot_Digital_Coding_System>)
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2025-11-05 03:49:43
|
After that discussion, I was wondering (a bit off-topic): is there a way to store the "entropy" of a system?
Since randomly generated bits are not really random: some minimal systems simply take the time/CPU cycle count to create a random number. Therefore, if we store that information, we could recreate a random file with minimal effort (without actually compressing the payload).
So hypothetically, could we do this for e.g. a Linux system? I also wonder how heavy that information ("entropy storage") would be
|
|
|
AccessViolation_
|
|
_wb_
If you make an outrageous claim (40% compression on random bits), but aren't willing to share any proof besides "believe me, it's true", it's hard to take it seriously. Maybe you don't understand how outrageous the claim actually is, but that doesn't make it any more believable.
|
|
2025-11-05 03:51:31
|
I guess it's technically possible, you can send over a copy of the code responsible for random number generation and the data fed into it to reproduce it, but that's deceptive. that's like saying I can compress an entire Minecraft world to minimal data, mainly the procedural generation code and that world's seed
|
|
|
TheBigBadBoy - 𝙸𝚛
After that discussion, I was wondering (a bit off-topic): is there a way to store the "entropy" of a system?
Since randomly generated bits are not really random: some minimal systems simply take the time/CPU cycle count to create a random number. Therefore, if we store that information, we could recreate a random file with minimal effort (without actually compressing the payload).
So hypothetically, could we do this for e.g. a Linux system? I also wonder how heavy that information ("entropy storage") would be
|
|
2025-11-05 03:54:47
|
yeah! some data sources it uses are mouse movements and keyboard presses. input data is hashed and xor-stirred into the entropy pool. but CPUs also have hardware RNGs which Linux might use to feed the entropy pool, I'm not sure. it's also not trivial to read from the entropy pool *without* immediately 'emptying' that section, to avoid a situation where one program can know the random data another program is about to be given, which would be a security issue
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2025-11-05 03:55:57
|
right
|
|
2025-11-05 03:56:23
|
so possible, only for really simple systems/devices <:KekDog:805390049033191445>
|
|
|
AccessViolation_
|
2025-11-05 03:58:04
|
really simple systems/devices actually have problems initializing the entropy pool after a fresh boot. this is a real problem, specifically in server environments where there are few external entropy sources those systems can use
|
|
|
TheBigBadBoy - 𝙸𝚛
so possible, only for really simple systems/devices <:KekDog:805390049033191445>
|
|
2025-11-05 03:59:34
|
definitely possible in theory, though in practice you'd probably have to modify the linux source code to get to that data ^^
|
|
|
Exorcist
|
2025-11-05 04:00:19
|
`RDSEED` is claimed to draw from hardware thermal noise
|
|
|
TheBigBadBoy - 𝙸𝚛
|
2025-11-05 04:00:23
|
nice thanks
|
|
|
AccessViolation_
|
2025-11-05 04:02:10
|
one thing I like is that you can take any data and pipe it into /dev/random, and it'll be xor stirred into the entropy pool
|
|
2025-11-05 04:02:38
|
just be careful not to replace /dev/random by the file like I did by accident <:galaxybrain:821831336372338729>
|
|
|
lonjil
|
2025-11-05 04:07:03
|
these days /dev/random is just a ChaCha20 CSPRNG and there's no pool emptying
|
|
|
AccessViolation_
|
2025-11-05 04:07:59
|
I knew there was no emptying of the pool to completion, since they effectively made /dev/random behave like /dev/urandom, but I thought there was still a pool and that it gets 'more empty' as you generate a lot of random data
|
|
2025-11-05 04:08:54
|
so it does empty the pool, but extends the pool from its own hash when the available entropy starts getting low, or something like that
|
|
2025-11-05 04:10:44
|
also I thought /dev/random used BLAKE2. my knowledge is probably outdated
|
|
|
lonjil
|
|
AccessViolation_
also I thought /dev/random used BLAKE2. my knowledge is probably outdated
|
|
2025-11-05 04:13:37
|
the entropy collector uses BLAKE2 to combine input entropy (from RDSEED and mouse events etc) to produce the seed used by the CSPRNG.
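Conceptually (the real kernel code is considerably more involved), the mixing step looks something like this sketch:
```python
import hashlib, os, time

pool = bytes(32)

def mix(event: bytes) -> None:
    """Fold an entropy event into the pool with BLAKE2 (conceptual sketch)."""
    global pool
    pool = hashlib.blake2b(pool + event, digest_size=32).digest()

mix(os.urandom(8))                                 # stand-in for RDSEED output
mix(time.perf_counter_ns().to_bytes(8, "little"))  # stand-in for event timing
seed = pool  # this is what would (re)seed the ChaCha20 CSPRNG
```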
|
|
|
|
afed
|
2025-11-05 04:14:30
|
https://www.tomshardware.com/pc-components/cpus/amd-confirms-security-vulnerability-on-zen-5-based-cpus-that-generates-potentially-predictable-keys-rdseed-fix-coming-through-an-agesa-firmware-update-for-desktop-chips
|
|
|
Exorcist
|
2025-11-05 04:24:24
|
https://www.reddit.com/r/compression/comments/1lx6ud1/compression_idea_concept/
|
|
|
The Thunk
|
|
AccessViolation_
shouldn't you work out where the bug is before you get it peer reviewed
|
|
2025-11-05 04:25:49
|
It is a risk/reward situation. I shouldn't share my code with people willing to verify it but not work on creating the finished project, or the inverse. The fewer people who know the source code / process, the better. Like I said, I have already consulted someone and they will peer review my project and help with the final application.
|
|
|
monad
A limited license to run the thing would make sense, but I have no personal interest in the project. As it stands, it's just one person's claims which appear to defy physics. People are dismissing it for good reason, I was just offering to provide some clarity at my own cost given your insistence to share despite failure to demonstrate proof. Anyway, good luck, I hope the effort is valuable to you.
|
|
2025-11-05 04:27:00
|
No, worries. I appreciate the offer. And, I understand the terms are not agreeable to you for providing the proof.
|
|
|
AccessViolation_
|
2025-11-05 04:28:06
|
you should try recursively compressing the output of the compressor. compressed data looks like random data. if you do this enough times you should be able to get it down to a single bit <:Stonks:806137886726553651>
|
|
|
The Thunk
|
|
_wb_
If you make an outrageous claim (40% compression on random bits), but aren't willing to share any proof besides "believe me, it's true", it's hard to take it seriously. Maybe you don't understand how outrageous the claim actually is, but that doesn't make it any more believable.
|
|
2025-11-05 04:28:52
|
Oh, I understand the claim is outrageous. That is why I coded the proof of concept to test my algorithm. And those are the lossless statistics for compression. I will know soon if I am wrong, fingers crossed. And I appreciate this welcoming community and the constructive criticism I have received.
|
|
|
AccessViolation_
you should try recursively compressing the output of the compressor. compressed data looks like random data. if you do this enough times you should be able to get it down to a single bit <:Stonks:806137886726553651>
|
|
2025-11-05 04:32:32
|
Why is everyone so fixated on one bit? There is an inherent cost to the logic/numbers/math stored... Eventually the bits required to represent the logic/math/numbers exceed the starting raw data. But being able to recursively compress one or more times meets the criteria and is more reflective of the claim I am making. Not this infinite compression scheme that you run over and over to be pedantic.
|
|
|
AccessViolation_
|
2025-11-05 04:33:17
|
I wasn't aware others had posted something relating to a single bit
|
|
2025-11-05 04:33:55
|
it was just the final result of my facetious suggestion
|
|
|
Exorcist
|
2025-11-05 04:34:29
|
This is reductio ad absurdum
|
|
|
monad
|
2025-11-05 04:38:12
|
The correct test is: compress multiple 4096-bit sequences, concatenate the compressed sequences, then compress 4096 bits from the new sequence
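That test is easy to run with an off-the-shelf compressor standing in for the claimed one (zlib here, purely for illustration):
```python
import os, zlib

def compress(block: bytes) -> bytes:
    return zlib.compress(block, 9)  # stand-in for the claimed compressor

blocks = [os.urandom(512) for _ in range(100)]  # 100 random 4096-bit blocks
stream = b"".join(compress(b) for b in blocks)  # concatenated outputs
print(len(compress(stream[:512])))  # > 512: compressed data doesn't compress
```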
|
|
|
AccessViolation_
|
|
The Thunk
Why is everyone so fixated on one bit? There is an inherent cost to the logic/numbers/math stored... Eventually the bits required to represent the logic/math/numbers exceed the starting raw data. But being able to recursively compress one or more times meets the criteria and is more reflective of the claim I am making. Not this infinite compression scheme that you run over and over to be pedantic.
|
|
2025-11-05 04:38:34
|
if your compressor works, you should not be able to compress your compressor's output basically at all. consider:
1) if your compressor's output can be compressed, then your compressor wasn't very good. it could have compressed the original data further
2) you make a valid point about the inherent cost to logic/numbers/math stored. consider, though, that if your compressor compresses 10 already compressed blocks down to 5 compressed blocks, you can take 10 of those and compress them again to 5 compressed blocks, take 10 of those and compress them again, meaning your algorithm can literally achieve *infinite* compression. in the end, you'll be able to restore an infinite amount of data from a single block.
it must be true that there is some data your algorithm cannot compress; whether that's random data (which should be the case), already compressed data (which should also be the case), or something else, it is a mathematical certainty
|
|
|
|
afed
|
2025-11-05 04:41:19
|
because such claims have been around since the very first computers and the first compression algorithms, and new people make such claims almost every day in various compression communities
and none of them have ever been proved (it's like someone claiming they invented teleportation and are already using it)
and perhaps now, with the rise of AI and vibe coding, there will be even more such claims, because it's very quick to create “new algorithms” when some LLM confirms that they're revolutionary and work correctly
|
|
|
AccessViolation_
|
|
The Thunk
Why is everyone so fixated on one bit? There is an inherent cost to the logic/numbers/math stored... Eventually the bits required to represent the logic/math/numbers exceed the starting raw data. But being able to recursively compress one or more times meets the criteria and is more reflective of the claim I am making. Not this infinite compression scheme that you run over and over to be pedantic.
|
|
2025-11-05 04:41:36
|
or, put another way: if the minimum size of a compressed block that has to contain some math/numbers/logic is 128 bytes, you can take any amount of data, and keep recursing halving it until you can compress any amount of data in the world, to a single compressed block of 128 bytes
|
|
|
Exorcist
|
2025-11-05 04:42:32
|
I have read the story of the Sloot Digital Coding System; the author persuaded Philips 🤣
|
|
|
Exorcist
Submit your secret power here:
https://research.google/careers/
|
|
2025-11-05 04:43:03
|
Let's guess if he can persuade Google
|
|
|
AccessViolation_
|
|
afed
because such claims have been around since the very first computers and the first compression algorithms, and new people make such claims almost every day in various compression communities
and none of them have ever been proved (it's like someone claiming they invented teleportation and are already using it)
and perhaps now, with the rise of AI and vibe coding, there will be even more such claims, because it's very quick to create “new algorithms” when some LLM confirms that they're revolutionary and work correctly
|
|
2025-11-05 04:47:40
|
right! I was actually excited when I saw this because I've seen plenty of lectures about compression saying "if someone tells you they can compress ANY data to some smaller percentage of its original size, they are either wrong or lying" but I wasn't expecting this situation to ever actually occur, and now it has
|
|
|
The Thunk
|
|
AccessViolation_
if your compressor works, you should not be able to compress your compressor's output basically at all. consider:
1) if your compressor's output can be compressed, then your compressor wasn't very good. it could have compressed the original data further
2) you make a valid point about the inherent cost to logic/numbers/math stored. consider, though, that if your compressor compresses 10 already compressed blocks down to 5 compressed blocks, you can take 10 of those and compress them again to 5 compressed blocks, take 10 of those and compress them again, meaning your algorithm can literally achieve *infinite* compression. in the end, you'll be able to restore an infinite amount of data from a single block.
it must be true that there is some data your algorithm cannot compress; whether that's random data (which should be the case), already compressed data (which should also be the case), or something else, it is a mathematical certainty
|
|
2025-11-05 04:54:43
|
This is a pedantic take on the process. You could change the process by adjusting block size or other logic... It is maximally efficient given the input. Any compression of random data by 40%, without some type of repeating value or statistical skew, is a huge accomplishment.
Yes, you are correct about concatenation of blocks... Because each block self-defines and self-describes, you would only need a cycle count to track how many times to parse the compressed file to return the original data. Since the compressed data is binary, you can simply re-compress the binary again. (With the limit that you do not start with a value that is already significantly smaller than the block size.)
I have already tested those types of data.
|
|
|
_wb_
|
|
The Thunk
<@794205442175402004> in response to https://discord.com/channels/794206087879852103/803645746661425173/1433496087196340375
|
|
2025-11-05 04:57:49
|
Just to be clear, you are saying you can compress 512 KiB random bits from /dev/random losslessly to something like 302 KiB? And you're not using some weird hacked OS where /dev/random is producing something very much non-random?
|
|
|
AccessViolation_
|
|
The Thunk
This is a pedantic take on the process. You could change the process by adjusting block size or other logic... It is maximally efficient given the input. Any compression of random data by 40%, without some type of repeating value or statistical skew, is a huge accomplishment.
Yes, you are correct about concatenation of blocks... Because each block self-defines and self-describes, you would only need a cycle count to track how many times to parse the compressed file to return the original data. Since the compressed data is binary, you can simply re-compress the binary again. (With the limit that you do not start with a value that is already significantly smaller than the block size.)
I have already tested those types of data.
|
|
2025-11-05 05:03:20
|
you would not only need a cycle count. you can't take a block, decompress it, and get the two blocks that it represents. then decompress those two, and get four such blocks. then 8. then 16. 32, 64, all the way up. that's what I mean. if your compressed blocks can be compressed further, you're able to practically achieve infinite compression. in the end, you can represent *any* data with just a single block. I might have some specifics wrong about how your thing works, but the general principle of what I said applies
|
|
|
The Thunk
|
|
_wb_
Just to be clear, you are saying you can compress 512 KiB random bits from /dev/random losslessly to something like 302 KiB? And you're not using some weird hacked OS where /dev/random is producing something very much non-random?
|
|
2025-11-05 05:05:15
|
512-byte blocks to ~300 bytes; some values compress more or less than others. I am using a Windows solution, as Linux is not my OS.
|
|
|
AccessViolation_
|
|
The Thunk
512-byte blocks to ~300 bytes; some values compress more or less than others. I am using a Windows solution, as Linux is not my OS.
|
|
2025-11-05 05:07:02
|
aside from the claims you make, can I ask, do you understand the pigeonhole principle? I believe if you did, you would see that there must be an error in your methods somewhere
|
|
|
Exorcist
|
|
AccessViolation_
if your compressor works, you should not be able to compress your compressors output basically at all. consider:
1) if your compressor's output can be compressed, then your compressor wasn't very good. it could have compressed the original data further
2) you make a valid point about the inherent cost to logic/numbers/math stored. consider, though, that if your compressor compresses 10 already compressed blocks down to 5 compressed blocks, you can take 10 of those and compress them again to 5 compressed blocks, take 10 of those and compresses them again, meaning your algorithm can literally achieve *infinite* compression. in the end, you'll be able to restore an infinite amount of data from a single block.
it must be true that there is some data your algorithm cannot compress, whether that's random data (which should be the case) or already compressed data (which should also be the case) or something else, it is a mathematical certainty
|
|
2025-11-05 05:08:55
|
VVC has secondary transform
...and the improvement is trivial
|
|
|
The Thunk
|
|
AccessViolation_
aside from the claims you make, can I ask, do you understand the pigeonhole principle? I believe if you did, you would see that there must be an error in your methods somewhere
|
|
2025-11-05 05:11:54
|
I understand the pigeonhole principle. If you understood the process, you would see how I account for this quandary. The individual I am getting peer review from had the same concerns until I explained the process.
|
|
|
AccessViolation_
|
2025-11-05 05:12:28
|
bigger holes? smaller pigeons?
|
|
2025-11-05 05:24:35
|
so you have found a way to make one compressed blob represent more than one decompressed blob? that's what solving the pigeonhole principle constitutes.
the pigeonhole principle isn't something you can *choose* to not be affected by, or work around by using a different method. it simply must be true that if you're doing 100% lossless compression, then for any data length you could choose, there must be *some* configuration of that data that cannot be compressed to a smaller size than the data itself
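The counting version of that argument fits in a few lines: there are strictly more n-bit messages than there are messages shorter than n bits, so no lossless encoder can shrink them all:
```python
n = 16
inputs = 2**n                          # all n-bit messages
outputs = sum(2**k for k in range(n))  # all messages shorter than n bits
print(inputs, outputs)  # 65536 vs 65535: at least one input can't shrink
```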
|
|
|
monad
|
2025-11-05 05:29:42
|
Jon, the chad legit scientist: Um, actually you can't compress random data. It's mathematically incomputable.
Jyrki, adjusting his wizard cap: We shouldn't be too quick to dismiss it, even if it's batshit crazy, it could lead us to new insights!
|
|
|
spider-mario
|
|
monad
Jon, the chad legit scientist: Um, actually you can't compress random data. It's mathematically incomputable.
Jyrki, adjusting his wizard cap: We shouldn't be too quick to dismiss it, even if it's batshit crazy, it could lead us to new insights!
|
|
2025-11-05 05:34:59
|
slightly reminds me of this bit for some reason: https://youtu.be/6xe4jeGo8SM?t=42m52s
„Computer! Magie der Electronik!” (“Computer! Magic of electronics!”)
|
|
|
_wb_
|
|
AccessViolation_
so you have found a way to make one compressed blob represent more than one decompressed blob? that's what solving the pigeonhole principle constitutes.
the pigeonhole principle isn't something you can *choose* to not be affected by, or work around by using a different method. it simply must be true that if you're doing 100% lossless compression, then for any data length you could choose, there must be *some* configuration of that data that cannot be compressed to a smaller size than the data itself
|
|
2025-11-05 05:47:55
|
In fact not just _some_ data cannot be compressed, the vast majority of all possible data cannot be compressed. Even if your method trims off just one byte, that's something you can only do with at most 1/256th of the possible data, and the remaining 255/256th of the data will have to stay at the same size or get larger. So whatever your compression method, at least 99% of all possible sequences of bytes cannot be compressed with it.
The only reason data compression does actually work in practice is that a lot of data used in practice has low entropy, i.e. lots of structure, and doesn't look at all like the vast majority of possible combinations of bits.
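It's also trivial to check empirically that standard compressors leave random data slightly *larger* (stdlib-only illustration):
```python
import os, zlib, lzma

block = os.urandom(512 * 1024)  # 512 KiB of random bytes
for name, fn in (("zlib", lambda b: zlib.compress(b, 9)),
                 ("lzma", lzma.compress)):
    print(name, len(block), "->", len(fn(block)))
# typical output: both results come out a little larger than the input
```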
|
|
|
juliobbv
|
2025-11-05 05:50:52
|
I like cheating by stashing data within filesystem metadata (name, date, etc)
|
|
|
_wb_
|
2025-11-05 05:54:56
|
Right, even the file size itself is a kind of subtle cheat, saving the overhead of making a syntax self-terminating
|
|
|
AccessViolation_
|
|
_wb_
In fact not just _some_ data cannot be compressed, the vast majority of all possible data cannot be compressed. Even if your method trims off just one byte, that's something you can only do with at most 1/256th of the possible data, and the remaining 255/256th of the data will have to stay at the same size or get larger. So whatever your compression method, at least 99% of all possible sequences of bytes cannot be compressed with it.
The only reason data compression does actually work in practice is that a lot of data used in practice has low entropy, i.e. lots of structure, and doesn't look at all like the vast majority of possible combinations of bits.
|
|
2025-11-05 06:01:30
|
oh I never thought about that, that's pretty eye opening
|
|
2025-11-05 06:03:51
|
I think I've cracked it
|
|
2025-11-05 06:04:40
|
the holes they're using are hermetically sealed and contain a pigeon, a hammer, a bottle of poison, a radioactive isotope and a detector
|
|
2025-11-05 06:06:04
|
quantum pigeonholes coming soon to an entropy coder near you
|
|
|
_wb_
|
2025-11-05 07:07:03
|
Schrödinger's pigeon!
|
|
|
AccessViolation_
|
2025-11-05 07:20:37
|
for something else entirely:
> But this is more a property of an encoder implementation than an intrinsic property of a codec
I've thought about this before and I've basically come to the conclusion that there's no such thing as a format that only supports lossless compression
|
|
2025-11-05 07:22:04
|
even in "lossless only" formats like PNG or QOI, an encoder can do whatever it wants to the data before it's sent to explicitly supported coding tools, to result in better compression at the cost of slightly altering the data
|
|
2025-11-05 07:26:16
|
I came to this conclusion when I realized I could make my QOI encoder not record slightly different pixels in a run of otherwise identical pixels, making the format lossy <:KekDog:805390049033191445>
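A sketch of that kind of lossy preprocessing in front of a lossless run-length coder; the tolerance and pixel layout are made up, it's the idea rather than anyone's actual encoder:
```python
import numpy as np

def lengthen_runs(pixels: np.ndarray, tol: int = 2) -> np.ndarray:
    """Snap pixels within `tol` of the previous pixel to that pixel, so a
    lossless RLE coder (e.g. QOI's run op) sees longer runs."""
    out = pixels.copy()
    for i in range(1, len(out)):
        if np.all(np.abs(out[i].astype(int) - out[i - 1].astype(int)) <= tol):
            out[i] = out[i - 1]
    return out
```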
|
|
2025-11-05 07:30:20
|
I suppose you could argue that the format itself doesn't specify lossy so it's not a lossy format, but I consider lossless more of a capability: this format *supports* losslessly encoding any data you give it
|
|
|
monad
|
2025-11-05 07:32:23
|
the format is lossless if you can get back what you put in. if you can't, it's lossy. history of where that input came from is irrelevant except philosophically
|
|
|
AccessViolation_
|
2025-11-05 07:34:23
|
I agree with that. I'm mostly thinking about semantics. it also doesn't matter at all, but I find it interesting to think about
|
|
|
|
ignaloidas
|
2025-11-05 07:35:55
|
whether something is lossless is a property of the pipeline, not a format, you can always introduce loss somewhere
|
|
2025-11-05 07:36:57
|
though some formats are done in a way where it's hard/impossible to not introduce loss
|
|
|
jonnyawsom3
|
2025-11-05 07:37:02
|
You can use a lossless format for lossy, but you can't use a lossy format for lossless
|
|
|
AccessViolation_
|
2025-11-05 07:37:19
|
my thoughts exactly
|
|
|
|
ignaloidas
|
|
jonnyawsom3
You can use a lossless format for lossy, but you can't use a lossy format for lossless
|
|
2025-11-05 07:37:47
|
eh, VarDCT is kinda inherently lossy, but you can still use it for lossless compression of JPEG files
|
|
|
AccessViolation_
|
2025-11-05 07:38:15
|
but you're not *doing* the vardct then, are you?
|
|
|
jonnyawsom3
|
2025-11-05 07:38:38
|
I think transplanting DCT coefficients falls outside of the usual concept of image encoding
|
|
|
|
ignaloidas
|
2025-11-05 07:39:35
|
can you really tell from looking at the file (ignoring the YCbCr bit) whether it's lossless or not tho?
|
|
|
jonnyawsom3
|
2025-11-05 07:39:43
|
It's also only lossless because the spec essentially bundled a JPEG decoder inside, otherwise subsampling wouldn't be allowed
|
|
|
AccessViolation_
|
2025-11-05 07:39:45
|
iirc VarDCT is a superset of JPEG-1 DCT, and when losslessly recompressing them, you don't alter those already compressed blocks, you just encode them more efficiently
|
|
|
|
ignaloidas
|
2025-11-05 07:40:36
|
if your data comes in as DCT coefficients, VarDCT can be lossless
|
|
2025-11-05 07:41:03
|
ofc it's not usually going to come in as DCT coefficients
|
|