JPEG XL


jxl

Anything JPEG XL related

DZgas Ж
2023-01-31 03:40:22
Traneptora
2023-01-31 05:46:22
does anyone have any samples of PNG with sBIT set?
_wb_
2023-01-31 05:53:09
I was hoping something like `convert input -depth 10 output.png` would create a PNG with sBIT set, but it doesn't seem like imagemagick does that
2023-01-31 05:54:28
so it looks like it's a pretty exotic thing, if even imagemagick doesn't use it
Traneptora
2023-01-31 07:09:29
does djxl write it?
_wb_
2023-01-31 07:12:56
I remember adding sBIT reading and writing at some point, but no idea if it's still there and still works (this was before the change to use the api, which caused lots of things to change)
JendaLinda
2023-01-31 08:19:52
cjxl does read sBIT, but djxl doesn't write it.
2023-01-31 08:28:53
The issue is open.
Traneptora does anyone have any samples of PNG with sBIT set?
2023-01-31 08:32:27
An sBIT chunk can be added to any PNG using the program TweakPNG. In cjxl, sBIT has the same effect as --override_bitdepth
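(For reference: sBIT is an ordinary ancillary PNG chunk, so a few lines of script can inject one without TweakPNG. A minimal sketch, assuming a truecolor RGB PNG; the filenames and the 10-bit value are illustrative, not from the discussion:)

```python
# Inject an sBIT chunk into an existing truecolor PNG (hypothetical helper).
# PNG chunk layout: 4-byte length, 4-byte type, data, 4-byte CRC(type+data).
import struct, zlib

def add_sbit(src, dst, significant_bits=10):
    data = open(src, "rb").read()
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    out, pos, inserted = [data[:8]], 8, False
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        # sBIT must appear after IHDR and before PLTE/IDAT
        if ctype in (b"PLTE", b"IDAT") and not inserted:
            payload = bytes([significant_bits]) * 3  # one byte per RGB channel
            crc = zlib.crc32(b"sBIT" + payload) & 0xFFFFFFFF
            out.append(struct.pack(">I", len(payload)) + b"sBIT" + payload +
                       struct.pack(">I", crc))
            inserted = True
        out.append(data[pos:pos + 12 + length])
        pos += 12 + length
    open(dst, "wb").write(b"".join(out))

add_sbit("input.png", "input_sbit.png")
```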
Traneptora
JendaLinda cjxl does read sBIT, but djxl doesn't write it.
2023-01-31 08:54:10
sounds like a bug
JendaLinda
2023-01-31 08:56:52
It probably is. I've opened the issue already.
Traneptora
JendaLinda It probably is. I've opened the issue already.
2023-01-31 10:04:47
link plz?
JendaLinda
Traneptora link plz?
2023-01-31 10:07:10
https://github.com/libjxl/libjxl/issues/2066
OkyDooky
2023-02-01 12:15:32
https://www.phoronix.com/news/Mozilla-Neutral-On-JPEG-XL
2023-02-01 02:00:56
Ouch. Although, many of the comments in response are encouraging.
2023-02-01 02:01:49
This one makes an interesting point: > If anything, adding JPEG-XL and deprecating JPEG would add less complexity since they'd be adding a format that can handle both the new and old JPEG standards. I mean, the fucking L in JPEG-XL stands for either Long-Term or Legacy because JPEG-XL is intended to be a Long-Term replacement for JPEG and it opens legacy JPEG files.
2023-02-01 02:37:21
Apparently, LibreWolf has enabled JXL support [in a recent merge request](https://gitlab.com/librewolf-community/browser/source/-/merge_requests/47).
Traneptora
JendaLinda An sBIT chunk can be added to any PNG using the program TweakPNG. In cjxl, sBIT has the same effect as --override_bitdepth
2023-02-01 04:45:26
TweakPNG appears to be Windows-only
2023-02-01 05:42:43
anyway, sent a patch to the FFmpeg mailing list to support sBIT
2023-02-01 05:44:21
ideally that makes transcoding to/from jxl/png in FFmpeg work as expected w/r/t bit depth headers
afed
2023-02-01 06:07:08
then I will need to make 10-bit benchmarks. For 8-bit, right now e1 and e4 are the most balanced, at least until the e5+ modes are fixed, though those are quite slow for screenshots. I wonder why e3 is good for photos but isn't good for frames from movies or even live video https://discord.com/channels/794206087879852103/803645746661425173/1069395996473311242
Traneptora
afed then I will need to make 10-bit benchmarks. For 8-bit, right now e1 and e4 are the most balanced, at least until the e5+ modes are fixed, though those are quite slow for screenshots. I wonder why e3 is good for photos but isn't good for frames from movies or even live video https://discord.com/channels/794206087879852103/803645746661425173/1069395996473311242
2023-02-01 08:26:41
do keep in mind that it's just metadata
2023-02-01 08:27:01
so nothing should change w/r/t speed or density
afed
2023-02-01 08:31:30
and the redundant data is also not zeroed or something like that? then that's sad
_wb_
2023-02-01 08:42:52
for lossy it's just metadata
2023-02-01 08:43:32
for lossless the bit depth that is signaled is the actual bit depth of the data
2023-02-01 08:44:20
so for lossless it will certainly make a difference, especially for e1 which doesn't do channel palette
2023-02-01 08:45:35
for lossy it makes no difference whatsoever since everything will be exactly the same except the header field (which is just a suggestion of what bit depth to use to represent the decoded image)
afed
2023-02-01 08:45:36
so lossless jxl will recognize that it's 10-bit and not 16?
_wb_
2023-02-01 08:48:28
not sure if it works atm in cjxl, but giving libjxl a 10-bit image to encode losslessly should result in smaller files than if you give it a 16-bit image to encode losslessly. If the 16-bit image is effectively 10-bit (with padding), then the difference should be small at e2+ (since channel palette will effectively encode the 16-bit image as a 10-bit one then, with just some overhead to encode the channel palettes), but at e1 there should be a big difference in both speed and density between 10-bit and 16-bit.
2023-02-01 08:49:41
(I think currently e1 expects the input to be msb-padded while png will give it lsb-padded, so it might not end up using the fast lossless path atm in the case of 10-bit pngs)
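(To make the padding distinction concrete, here is an illustration of the two conventions only; this is not libjxl code:)

```python
# A 10-bit sample stored in a 16-bit container, two ways.
sample10 = 0x3FF             # maximum 10-bit value

lsb_padded = sample10        # low bits hold the value: 0x03FF (what PNG hands over)
msb_padded = sample10 << 6   # high bits hold the value: 0xFFC0 (what e1 expects)

print(hex(lsb_padded), hex(msb_padded))
```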
afed
Traneptora do keep in mind that it's just metadata
2023-02-01 09:27:21
<https://github.com/mpv-player/mpv/pull/11247> hmm, it seems to be working with `screenshot-sw=yes`, the sizes are much smaller for hdr10, though I haven't compared how it was before
Traneptora
2023-02-01 09:29:44
the library I think expands it range-wise
2023-02-01 09:29:47
so it's not padded
2023-02-01 09:29:55
i.e. white -> white and black -> black
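(In other words, range expansion rescales instead of padding, so the extremes map to the extremes. An illustrative sketch, not the library's actual code:)

```python
# Range expansion from 10-bit to 16-bit: bit replication approximates
# the exact rescale v * 65535 / 1023, and maps extremes to extremes.
def expand_10_to_16(v):
    return (v << 6) | (v >> 4)

assert expand_10_to_16(0) == 0          # black -> black
assert expand_10_to_16(1023) == 65535   # white -> white
```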
JendaLinda
2023-02-01 04:44:34
It seems that when losslessly encoding a 16-bit PNG tagged as 10-bit, the least significant bits are removed as expected.
Traneptora TweakPNG appears to be Windows-only
2023-02-01 04:47:30
That's a shame. This is the most convenient PNG hacking tool I've found so far.
Traneptora
2023-02-01 04:48:46
got it to work with wine
Kampidh
2023-02-03 09:14:26
Took another look at jxl color profiles: https://saklistudio.com/jxlcoloranalysis/
2023-02-03 09:20:12
last pics are only in jxl, while the plots are both in jxl and png
2023-02-03 09:23:28
mmm wait it works fine here
_wb_
2023-02-03 09:23:34
Nice writeup! Yes, when decoding to integer sRGB, the gamut gets clamped because uint cannot go outside the range (not negative and also not above the maximum value). It's basically a bad idea to decode to integers if it's going to be linear sRGB, not just because of the gamut clamping but also if it's 8-bit it will have very bad banding
2023-02-03 09:25:01
(i hope before libjxl 1.0 we can have a default cms in the decoder too, so we can make it 'just work' to return integer buffers in the original color space even if it's not an enum space)
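(A toy illustration of that clamping; the color values are made up and this is not the libjxl API:)

```python
import numpy as np

# A saturated wide-gamut color expressed in linear sRGB can have components
# outside [0, 1]. Float output can carry them; uint output cannot.
linear = np.array([1.12, 0.50, -0.07])

def srgb_transfer(x):
    # standard sRGB transfer function, defined for x in [0, 1]
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

clipped = np.clip(linear, 0.0, 1.0)   # the gamut clamp happens here
u8 = np.round(srgb_transfer(clipped) * 255).astype(np.uint8)
print(u8)  # the negative and >1.0 components are simply gone
```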
Kampidh
_wb_ Nice writeup! Yes, when decoding to integer sRGB, the gamut gets clamped because uint cannot go outside the range (not negative and also not above the maximum value). It's basically a bad idea to decode to integers if it's going to be linear sRGB, not just because of the gamut clamping but also if it's 8-bit it will have very bad banding
2023-02-03 09:30:41
Thanks~! Also, I found a difference from the docs: ```If the original profile is not used, the decoder only outputs the data as sRGB (linear if outputting to floating point, nonlinear with standard sRGB transfer function if outputting to unsigned integers) but will not convert it to the original color profile.``` However, the decoder always outputs linear even when the data is uint 🤔 (maybe that's also why some reported that the outputs were too dark? including me back then :D)
2023-02-03 09:32:58
oops, thanks for pointing that out! it was indeed missing an extension on the srcset there ><
2023-02-03 09:33:51
should be fixed by now~
_wb_
Kampidh Thanks~! Also, I found a difference from the docs: ```If the original profile is not used, the decoder only outputs the data as sRGB (linear if outputting to floating point, nonlinear with standard sRGB transfer function if outputting to unsigned integers) but will not convert it to the original color profile.``` However, the decoder always outputs linear even when the data is uint 🤔 (maybe that's also why some reported that the outputs were too dark? including me back then :D)
2023-02-03 09:35:39
Hm, that sounds like either a bug or a documentation bug then. Could you open an issue for this?
Kampidh
_wb_ Hm, that sounds like either a bug or a documentation bug then. Could you open an issue for this?
2023-02-03 09:36:25
sure!
fab
2023-02-06 03:11:45
Quality seems bad with latest AV2
2023-02-06 03:12:30
Debargha's super-resolution speed-up has caused a regression.
2023-02-06 03:13:49
Is encoding the low frequency component with more precision
2023-02-06 03:16:48
Like deep grey and dark
2023-02-06 03:16:49
2023-02-06 03:16:53
Azzurro
2023-02-06 03:17:29
Though on red like 48,5% of the color it gets
2023-02-06 03:17:45
JPEG XL seems to be about fidelity
2023-02-06 03:23:09
JPEG XL at d 0.891 is so many times better than AV2
2023-02-06 03:23:31
And has potential to do slower encoding
2023-02-06 03:25:29
2023-02-06 03:25:41
JPEG XL is super fast
2023-02-06 03:25:56
But looks like this at bpp 0.412
2023-02-06 03:25:58
D1
2023-02-06 03:26:03
With a selfie
2023-02-06 03:26:16
Well lit ambient
2023-02-06 03:26:59
I hope it will be better in 1.0
2023-02-06 03:30:03
I don't know if a lower qp, like 80,
2023-02-06 03:30:18
will get better in 0.9.0
_wb_
2023-02-06 04:02:49
it is really hard to say anything about fidelity if you only show the compressed image but not the original image
Fox Wizard
2023-02-06 08:02:47
~~Judging by the noise and combination of a not so steady hand and slow shutter speed I don't think that image had any "fidelity" to begin with <:KekDog:805390049033191445>~~
fab
2023-02-06 08:03:00
is the original
fab
2023-02-06 08:20:06
samsung gn1 sensor
daniilmaks
2023-02-07 01:38:54
> But looks like this at bpp 0.412
> is the original
2023-02-07 01:39:03
<:WhatThe:806133036059197491>
Demiurge
fab But looks like this at bpp 0.412
2023-02-07 01:51:19
it looks like what exactly?
Traneptora
2023-02-07 11:04:09
fab face reveal
fab
2023-02-07 11:05:15
HAHAH
2023-02-07 11:05:34
2023-02-07 11:05:37
for %i in (C:\Users\Use\Documents\basa\*.png) do cjxl -d 0.98 -e 8 --photon_noise=ISO200 "%i" "%i.jxl"
2023-02-07 11:06:28
with the latest it loses in VMAF to AV2, but quality is pretty good, at least for the AV2 I have
2023-02-07 11:06:37
I did it on the AV1 discord
diskorduser
fab samsung gn1 sensor
2023-02-07 05:07:11
I bought a phone yesterday with that sensor. It's not so great compared to Sony's.
fab
diskorduser I bought a phone yesterday with that sensor. It's not so great compared to Sony's.
2023-02-07 05:17:07
Pixel?
diskorduser
2023-02-07 05:24:36
Nvm. I got confused gn1 with jn1.
Demiurge
2023-02-07 06:51:37
I actually heard that AV2 will indeed be somehow even slower than av1
2023-02-07 06:52:08
I would be surprised if there really is a market for an even slower codec than av1
2023-02-07 06:54:14
Especially considering h.265 is roughly the same quality as AV1 but like 10 times as fast or something like that
_wb_
2023-02-07 06:59:41
Every newer codec is generally slower than the ones before. H.266 is slower than h.265 is slower than h.264.
Demiurge
2023-02-07 07:00:18
I think we are at the end of the line with av1
2023-02-07 07:01:05
There is no point in making things any slower. AV1 is already practically useless to everyone who isn't a big corporation or doesn't have a hardware encoder.
_wb_
2023-02-07 07:03:04
Sometimes codecs are made for better trade-offs between speed and compression, like JPEG XR (positioned to be faster-but-worse than J2K) or like EVC (positioned to be faster-and-royalty-free-but-worse than HEVC), but generally new codecs anticipate that more compute will be available (or target only hardware implementations and don't care about the speed of software implementations)
Demiurge
2023-02-07 07:04:24
No one is going to want to wait several weeks for an encode job
w
2023-02-07 07:11:39
idk about that... H265 is pretty slow at realistic settings as well
2023-02-07 07:12:02
H264 as well
2023-02-07 07:12:10
can be slow
2023-02-07 07:13:04
but it's write once read many so speed usually doesn't matter
_wb_
2023-02-07 07:13:26
What is increasingly annoying me is that they do comparisons at insanely slow encode settings, to "show the bitstream potential" because "things will be optimized later" and "hardware will make speed not a concern", but then it turns out that software cannot magically be made several orders of magnitude faster without sacrificing compression, and hardware encoders end up making shortcuts that make compression worse than even the 'fast' software encode settings...
w
2023-02-07 07:14:03
I'm fine with using those slowest settings and waiting a week if it means it's better than the other options in the end. So it is valuable to me
_wb_
2023-02-07 07:14:28
For Netflix it's encode once, decode many times. For many other use cases, it's not quite that asymmetric. For some use cases, it's encode once, decode once.
w
2023-02-07 07:14:44
and is also why I use cjxl -e 9 -I 100 -E 3
_wb_
2023-02-07 07:16:24
To be clear: I think it's good to have a wide range of encode speed vs compression trade-offs so you can choose to do what makes sense for a given use case.
2023-02-07 07:18:15
I just think it's not a good idea to compare encoders only at their slowest settings, and especially not to compare quality/compression at slowest speed settings but then compare speed at fastest speed settings...
w
2023-02-07 07:18:57
Don't they normally compare at a given framerate
2023-02-07 07:19:12
like at 2 fps, 10fps, 30fps...
2023-02-07 07:19:48
Oh it might have just been 1 I've seen
2023-02-07 07:20:25
I guess they *should* normally compare in 3d with framerate then
_wb_
2023-02-07 07:21:06
I think generally comparisons of different encoders for the same video codec are done that way
2023-02-07 07:23:00
But comparisons between different codecs are generally done by looking at BD rates, ignoring speed.
2023-02-07 07:28:17
But e.g. here: https://storage.googleapis.com/avif-comparison/index.html the "quality" plot is for slowest avif (s0 single thread) while the "transcoding speed" plot gives the impression that avif encoding is slightly faster than jxl encoding (by looking at s6 and s9 with 8 threads).
Cool Doggo
Demiurge No one is going to want to wait several weeks for an encode job
2023-02-07 07:31:19
good thing you don't have to, av1 is pretty good at high speeds too
fab
2023-02-07 07:31:24
Send me the jpeg xl vs jpeg li eval test
2023-02-07 07:31:33
All the links
2023-02-07 07:31:39
Are in one GitHub
2023-02-07 07:31:50
But I can't seem to find jxl
_wb_
2023-02-07 07:31:56
The thing is that lossy jxl encoding at e6 (one setting faster than default) is basically as good as lossy jxl at the slowest setting. While aom s0 is quite a lot better than aom s6...
fab
2023-02-07 07:32:14
When you could add into
_wb_
2023-02-07 07:50:59
Also in JPEG people often insist on using HM (hevc reference encoder) and VTM (vvc reference encoder) when comparing these codecs to jpeg/j2k/jxl, but that's just silly imo. That's comparing 0.01 Mpx/s encoding to 10-100 Mpx/s encoding.
BlueSwordM
_wb_ What is increasingly annoying me is that they do comparisons at insanely slow encode settings, to "show the bitstream potential" because "things will be optimized later" and "hardware will make speed not a concern", but then it turns out that software cannot magically be made several orders of magnitude faster without sacrificing compression, and hardware encoders end up making shortcuts that make compression worse than even the 'fast' software encode settings...
2023-02-07 08:27:51
I mean, a software implementation can be made several orders of magnitude faster and higher quality... if you start with an unoptimized implementation in the first place, most of the time.
2023-02-07 08:28:13
It's when you get to a pretty well optimized implementation where this gets really tricky.
_wb_
2023-02-07 08:35:49
Yes, if you start with an academic prototype written in matlab or something, obviously you can speed that up by a lot
2023-02-07 08:38:22
The thing is, codecs back in the day had quite limited encoder freedom. Of course you can always do preprocessing trickery or things like deadzone quantization, but broadly speaking, codecs like jpeg, j2k and jxr basically have no or very little encoder freedom, everything is just deterministic, at least in a 'normal' implementation.
2023-02-07 08:39:17
(I don't count mozjpeg, guetzli, jpegli as 'normal' implementations, they squeeze more out of the bitstream than what it was really designed for)
2023-02-07 08:42:20
If encoders basically don't do search and the encoding is basically deterministic (modulo some quantization parameters), then the only difference between implementations is in the constant factors, and then sure you can have a simple naive implementation and later optimize that with simd or hardware.
2023-02-07 08:43:24
But current codecs have very expressive bitstreams, and encoders do search in a search space that explodes combinatorially.
2023-02-07 08:46:37
You can expect "future optimizations" to improve the constant factors by going from naive to clever to autovec simd to hand-written simd to fpga to asic
2023-02-07 08:47:50
But you can't expect them to remove the combinatorial explosion from the equation
2023-02-07 08:54:48
Hardware can perhaps explore many local options (like directional prediction modes or transform types) at the same time and do that better than software, but decisions of a combinatorial nature like block segmentation and the down-the-road effect of local choices (choices in one block can impact blocks below and to the right) are not something that just magically goes away by improving the constant factors
2023-02-07 08:56:56
So basically exhaustive encoders are impossible there, and it all becomes a question of which bitstream allows you to do good encoding with heuristics that are fast enough in practice.
2023-02-07 09:00:20
And the only way to 'prove' that it is possible, is by actually doing it. If you instead can only show what the bitstream can do if you spend 5 hours per image while handwaving about 'future optimizations', I am sceptical.
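(To put a number on the combinatorial part, consider recursive quadtree block splitting alone, ignoring every other decision an encoder makes. A small worked sketch:)

```python
# Count the recursive quad-split partitionings of a square superblock:
# P(s) = 1 (keep whole) + P(s/2)^4 (split into four quadrants).
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(size, min_block=8):
    if size <= min_block:
        return 1
    return 1 + partitions(size // 2, min_block) ** 4

print(partitions(64))    # 83522 ways for a 64x64 block down to 8x8
print(partitions(128))   # ~4.9e19 for 128x128; and that's before prediction
                         # modes, transform types, etc. multiply it further
```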
OkyDooky
2023-02-07 11:04:29
The articles I read showed graphs indicating that AV1 had about 30% average quality improvement over h.265 for the same bitrate. But h.266 looks to have a similar advantage over AV1, so we'll see if AV2 can fix the issues making it difficult to speed up. Actually, all the VPx codecs seem to be many factors slower than typical h.264 encoding, for me. Not sure why. :/ (<@1028567873007927297>)
Demiurge
2023-02-07 11:06:22
That's because x264 is the most optimized lossy video coder ever written, and the libvpx code dump was a complete mess by comparison when On2 dumped it on Google's doorstep.
2023-02-07 11:06:50
it's since then gotten a lot of optimization and improvement but it was in a very poor condition to begin with
2023-02-07 11:07:14
so unless they rewrote most of it from scratch that's why it's slower
OkyDooky
2023-02-07 11:07:16
Also, I did a test with Handbrake to encode the DOOM: Eternal teaser to AV1 in .mkv, after I had converted it to WebM VP9 using Boram, using approximately the same bitrate, and it seemed to encode faster, actually. Quality was markedly improved. Maybe VP9 encoding will be faster in Handbrake, too. But I haven't tested that yet, since it doesn't seem to let me target a filesize limit like Boram does.
Demiurge
2023-02-07 11:07:48
libaom is a fork of libvpx I think.
2023-02-07 11:08:32
svt-av1 is pretty fast...
OkyDooky
2023-02-07 11:10:50
Ah. That makes sense. It just seems that the only reason they'd be having so much trouble for so long is if the base elements were unfriendly to general optimization efforts, or something. (<@1028567873007927297>)
Demiurge
2023-02-07 11:10:54
But it arbitrarily refuses to work if the resolution is not some multiple of Steve Jobs' birthday
OkyDooky
2023-02-07 11:11:19
Lol! Handbrake does? (<@1028567873007927297>)
Demiurge
2023-02-07 11:11:38
No, SVT-AV1
2023-02-07 11:11:55
It's really picky for some reason
OkyDooky
2023-02-07 11:12:28
Interesting. But, isn't dav1d a commonly used encoder? Or is it just one well-recognized for quality?
190n
2023-02-07 11:20:46
dav1d is a *de*coder
Demiurge
2023-02-07 11:23:29
decode only, yep. well recognized for extreme speed.
OkyDooky
2023-02-08 12:33:17
I knew it did decoding, but didn't realize it wasn't also used for encoding. (<@164917458182864897>)
190n
2023-02-08 12:34:01
no, dav1d is not used for encoding
2023-02-08 12:34:44
if you do a transcode where the input is av1, i guess you would use dav1d to decode the av1, but then you would use something else to encode it into whatever target format you want
2023-02-08 12:35:06
av1 video -> dav1d -> pixels -> x264 -> H.264 video
av1 video -> dav1d -> pixels -> aomenc -> av1 video
BlueSwordM
Also, I did a test with Handbrake to encode the DOOM: Eternal teaser to AV1 in .mkv, after I had converted it to WebM VP9 using Boram, using approximately the same bitrate, and it seemed to encode faster, actually. Quality was markedly improved. Maybe VP9 encoding will be faster in Handbrake, too. But I haven't tested that yet, since it doesn't seem to let me target a filesize limit like Boram does.
2023-02-08 02:07:00
You need to look at single threaded performance 🙂
Demiurge
2023-02-08 04:05:20
None of these codecs will ever be as popular as h.264.
2023-02-08 04:06:50
Maybe HEVC might if the patent situation changes or expires.
2023-02-08 04:09:07
But I think it already had its chance and failed very badly
2023-02-08 04:09:40
It was miserably slow for most of its life too but the patent situation also makes people stay the hell away
afed
Demiurge There is no point in making things any slower. AV1 is already practically useless to everyone who isn't a big corporation or doesn't have a hardware encoder.
2023-02-08 07:10:51
for the new formats it also depends on the encoders; some, like svt-av1, can support a wider range of encoding speeds, even real time, and as several comparisons show they are better even at the same speeds as previous formats. but these significant improvements are mostly noticeable on more modern hardware, when using more cores, and for streaming video quality or very high resolutions. for high quality video the difference is not so meaningful; for example I can still get better quality with x265 than with any av1 encoder, even if I disable most of the filtering. x265 has better psy and rd tuning; with enough bitrate this allows for sharp and transparent quality, while av1 still can't preserve high frequencies and overly blurs. x265 can even retain grain quite well. av1 has grain synthesis, but it's not always possible to achieve the same structure and feel as the original, and it's still a fake simulation for true transparent quality. most of the av1 tools and filtering are good for low to medium bitrates; even at very low quality there are almost no artifacts. av1 is arguably a more complex standard than the older ones and allows for better compression even at high fidelity, but current encoders are not very developed in this direction, and even with a perfect encoder the difference will not be as big as for lower qualities https://www.spiedigitallibrary.org/ContentImages/Proceedings/11842/118420T/FigureImages/00395_psisdg11842_118420t_page_8_1.jpg
Demiurge
2023-02-08 07:12:54
For some reason I have a very hard time finding any public comparisons of recent encoders.
2023-02-08 07:15:30
Especially not anything that includes any screenshots for example
2023-02-08 07:16:02
in order to somewhat validate the so-called "objective" measurements
2023-02-08 07:16:10
like psnr
afed
2023-02-08 07:22:34
there are plenty of comparisons from various encoder communities, with about the same results. and yeah, each new generation moves the speed bar: for example, vvc faster/fast has the same speed as veryslow or placebo for x264/x265, but that's because hardware is also getting faster
Demiurge
2023-02-08 07:28:08
Looks like according to this graph libsvtav1 is an extremely good replacement for x265 --slow
2023-02-08 07:28:48
As well as utterly destroying libvpx at every level
2023-02-08 07:35:59
Looks like there are 14 speed presets, from 0 to 13, and they did not test the faster presets in that graph
2023-02-08 07:36:47
where's the graph from and how old is the comparison? encoder tech changes rapidly these days
2023-02-08 07:37:57
Also SVT just flat out refuses to encode inputs that are the "wrong" resolution, making it a bit more obtuse to use than the other av1 encoders...
afed
2023-02-08 07:44:45
speeds faster than 10 for svt-av1 are not very good for real encoding usage; they're mostly for real-time broadcasts like conferences where there's not a lot of movement. and like I said, it's for streaming video quality; most metrics and people prefer smoothing over some artifacts. the graphs are not very new (but I've seen something similar in recent presentations at a facebook video conference) https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11842/118420T/Towards-much-better-SVT-AV1-quality-cycles-tradeoffs-for-VOD/10.1117/12.2595598.full
Demiurge
2023-02-08 07:58:40
I don't know about that, according to the "projected" performance, it would be as good or better than x265/x264
2023-02-08 07:59:27
Is x265 medium preset "not very good for real usage" either?
afed
2023-02-08 08:10:18
yeah, fast presets with low complexity: `The total number of presets in the encoder is increased from seven to nine. The two new presets cover faster speed ranges with preset 8 achieving similar complexity as that of x265 Medium preset (v3.4) with good BD-rate gain in favor of preset 8.` and i still use x265 with some tuned presets for the needed speed; it's a preference between sharpness/details and smoothing
_wb_
2023-02-08 08:30:29
avg(y-psnr,y-ssim,y-vmaf) 😱
2023-02-08 08:30:58
Averaging 3 bad metrics does not result in a good metric
2023-02-08 08:31:12
It results in an even worse metric
Demiurge
2023-02-08 09:11:01
Yeah... hence why it's stupid that no one ever includes any screenshots for some basic sanity-check verification...
afed
2023-02-08 09:17:51
yeah, but vmaf alone has about the same results, and vmaf is still the best widely used metric for video. maybe once ssimulacra2 gets faster and takes motion into account, it might become better known and used. also, though ssimulacra2 is good for still images, so far no one has evaluated how well it correlates with subjective estimates for video
Demiurge
2023-02-08 09:28:04
ssimu2 is not a good predictor or replacement for subjective measurements either.
2023-02-08 09:28:53
Even if it has improvements over others, it still doesn't judge like a human
_wb_
2023-02-08 09:29:07
subjective assessment is always the gold standard
2023-02-08 09:29:14
it's just hard and expensive to do, especially for video
2023-02-08 09:32:02
the main issue with vmaf is that it's not a fidelity metric but an appeal metric (i.e. you can get "better scores than the original"), and it can be cheated very easily — just sharpen and adjust brightness/contrast, then do a crappy encode, and you can still easily get better results than a good encode
2023-02-08 09:32:31
so it's not a good metric to evaluate encoders
afed
2023-02-08 09:32:59
and for example, here's what I mean about x265 sharpness vs av1 smoothing (when the bitrate is not bad, but still not enough for fully transparent quality): a frame from the trailer in prores quality
2023-02-08 09:33:21
x265 has better detail retention and sharpness, but also some broken noise and grain in the background
2023-02-08 09:33:34
av1 (aom), even with cdef disabled and many other options set to improve sharpness, and using a fork with more settings for tuning and better quality than the default version, is still overly smooth
yurume
2023-02-08 09:33:50
.oO( in a dystopian future where a mind upload is feasible, the simulated human brain would be used to do the subjective assessment )
afed
_wb_ so it's not a good metric to evaluate encoders
2023-02-08 09:40:50
yeah, but also, if there's no cheating (which most open-source encoders don't do without special parameters), vmaf is not bad and has something like 0.8-0.9 correlation with subjective results for average bitrates, but if I'm not mistaken, about 0.4 for high fidelity video (which is even worse than simpler metrics, and less than random picking)
2023-02-08 09:42:12
and cheating can also be checked with other additional metrics. `--vmaf-preprocessing` is a cheat parameter for aom that boosts vmaf scores without visual improvements
Demiurge
2023-02-08 09:49:23
You can get better than the original? lol vmaf is trash then if true
afed
2023-02-08 09:50:20
vmaf_neg is an attempt to avoid using enhancements to increase scores, but vmaf_neg can still be cheated by 20-25% as I recall, and it's also worse than normal vmaf in correlation with subjective scores
_wb_
afed yeah, but also, if there's no cheating (which most open-source encoders don't do without special parameters), vmaf is not bad and has something like 0.8-0.9 correlation with subjective results for average bitrates, but if I'm not mistaken, about 0.4 for high fidelity video (which is even worse than simpler metrics, and less than random picking)
2023-02-08 09:51:29
that matches my experiences: vmaf works quite well for typical "web streaming" video quality settings, but at higher fidelity (e.g. the useful qualities for still images) the correlation with subjective scores drops and it starts basically saying random weird stuff
2023-02-08 09:52:00
you can see that behavior quite well on the kadid10k dataset:
2023-02-08 09:52:48
kadid10k covers a big range of qualities, going to extremely low qualities, as you can see in the range of the metric scores
Demiurge
afed av1 (aom), even with cdef disabled and many other options set to improve sharpness, and using a fork with more settings for tuning and better quality than the default version, is still overly smooth
2023-02-08 09:54:15
that looks like trash. Is that the same bitrate as x265?
_wb_
2023-02-08 09:54:36
basically in the range of vmaf 0-90, vmaf does show some correlation with the mos scores: vmaf 0 certainly means it's crap quality and good images will certainly have a vmaf > 60
2023-02-08 09:55:20
but vmaf > 90 can basically still mean just about anything
2023-02-08 09:56:52
while in ssimulacra2 it's more or less the other way around: at the crap qualities (ssimulacra2 < 0), correlation is not so great, but at higher fidelity (ssimulacra2 > 50), correlation is quite good
Demiurge
2023-02-08 09:57:04
I don't understand what the heatmap represents or what all the different polka dot lines mean but they do swerve violently to the left at higher VMAF scores
afed
Demiurge that looks like trash. Is that the same bitrate as x265?
2023-02-08 09:57:13
yeah, same bitrate
Demiurge
2023-02-08 09:58:00
Maybe don't use libaom, it's probably the worst of the encoders compared to svt and rav1e
_wb_
2023-02-08 09:58:19
the heatmap is just a histogram: it represents how many images there are in the kadid10k dataset that fall in that bucket of metric score and subjective score (DMOS)
Demiurge
2023-02-08 09:58:32
That looks terrible. Maybe you need to use libaom cpu0 to make it look decent
_wb_
2023-02-08 09:58:55
legend:
Demiurge
2023-02-08 09:59:22
Oh
2023-02-08 10:00:04
Also you probably should NOT use cpu0 and should probably just use a different encoder instead since that looks a lot worse than x265
2023-02-08 10:03:47
It looks like x265 introduces a lot of noisy distortion to all the low-entropy background that your eyes aren't even looking at, but retains detail and texture very well, whereas av1 just smooths over and deletes everything.
_wb_
2023-02-08 10:04:14
kadid10k really goes into the silly low qualities so most of the data is of the form "on this image, most people prefer the plague over the cholera". E.g. here's an original image and the five jpeg distortion levels they used:
2023-02-08 10:05:28
even the least distorted is already rather poor to my taste, and those last ones are just completely unusable qualities
Demiurge
2023-02-08 10:05:58
The noise that x265 is adding is hard to notice except in areas where your eyes are not even paying attention, and at least it preserves information rather than deleting it all
afed
Demiurge Also you probably should NOT use cpu0 and should probably just use a different encoder instead since that looks a lot worse than x265
2023-02-08 10:07:38
libaom is the highest quality encoder that uses all the available av1 tools (because it's also the reference encoder). rav1e at slow speeds is better for high fidelity, but it's much slower than libaom and worse for medium and low quality. i also use a libaom fork which gives the same quality as rav1e for high fidelity, but at a higher encoding speed. svt-av1 is good as a fast encoder; it has balanced quality for its speed, but a worse maximum available quality than libaom and rav1e
Demiurge
_wb_ even the least distorted is already rather poor to my taste, and those last ones are just completely unusable qualities
2023-02-08 10:08:30
lol, yeah, it looks like babby's first abx test maybe? but maybe good to sanity check computer vision with clear human responses to compare it to
_wb_
2023-02-08 10:08:45
anyway, so vmaf is not bad at correlating with "plague or cholera" opinions, but at qualities useful for still images it's worse than dssim/ssimulacra2/butteraugli 3norm
Demiurge
2023-02-08 10:09:43
In the very first image it makes her hair pixellated
afed libaom is the highest quality encoder that uses all the available av1 tools (because it's also the reference encoder). rav1e at slow speeds is better for high fidelity, but it's much slower than libaom and worse for medium and low quality. i also use a libaom fork which gives the same quality as rav1e for high fidelity, but at a higher encoding speed. svt-av1 is good as a fast encoder; it has balanced quality for its speed, but a worse maximum available quality than libaom and rav1e
2023-02-08 10:11:48
Well maybe it was judged to be "better quality" based on faulty heuristics. Just because it uses all available coding tools doesn't mean it uses them intelligently or well. A different encoder might have a different strategy that is better at retaining detail. I haven't tested SVT that much because it is really anal about resolution
2023-02-08 10:13:05
Really your own subjective experience trumps everything, so test and verify for yourself without taking for granted what someone else says.
_wb_
Demiurge lol, yeah, it looks like babby's first abx test maybe? but maybe good to sanity check computer vision with clear human responses to compare it to
2023-02-08 10:14:00
yeah, it's OK as a sanity check, but it's not very useful data for tuning a metric, since it's quite noisy data. If you ask people to rate images on a scale from 1 to 5, and that's the range of qualities they have to deal with, then the scores you're going to get in the interesting range are basically all 4 or 5 and all the interesting MOS scores are in the 4.3-4.6 range, with error margins on the scores of +-0.5, so quite noisy ground truth to tune metrics with
Demiurge
2023-02-08 10:16:14
All I know is that while there was technically less absolute distortion/difference between the original and the libaom encode compared to the x265 encode, the x265 encode preserved a lot more information and the higher "absolute" distortion did not actually cause any distraction or loss of immersion (like ugly DCT blocks do) nor did it mask any detail or texture.
_wb_ yeah, it's OK as a sanity check, but it's not very useful data for tuning a metric, since it's quite noisy data. If you ask people to rate images on a scale from 1 to 5, and that's the range of qualities they have to deal with, then the scores you're going to get in the interesting range are basically all 4 or 5 and all the interesting MOS scores are in the 4.3-4.6 range, with error margins on the scores of +-0.5, so quite noisy ground truth to tune metrics with
2023-02-08 10:17:30
Yeah. A lot of wasted time and effort on processing completely useless data that nobody cares about on the extreme low end.
_wb_
2023-02-08 10:19:38
imo libaom has improved quite a lot in the past 2 years. But yes, av1 does tend to look too smooth, even at relatively high bitrates. Not sure how much of that is due to inherent limitations of the bitstream and how much is because encoders are optimizing for the wrong thing. Probably a combination of both — in hevc I also see some of that oversmoothing (likely because those encoders are also optimizing for the wrong thing) but it's not as bad as in av1, so I assume there's something about the av1 bitstream that makes oversmoothing more likely.
Demiurge
2023-02-08 10:19:58
I never liked the characteristic block artifacts that DCT-based codecs produce, like JPEG, DivX, even AV1 at low qualities with massive macroblocks; artificial discontinuities and edges are extremely immersion-breaking.
2023-02-08 10:20:49
Even JPEG-XL often has it but at higher qualities it's pretty good at hiding it.
2023-02-08 10:21:22
Particularly when it comes to smooth gradients like the sky.
2023-02-08 10:21:43
JPEG-XL has a smooth sky and AVIF has a blocky sky even at high bitrates
2023-02-08 10:22:28
It's funny, avif is way too smooth, except when it's supposed to be, like the sky
_wb_
2023-02-08 10:22:29
I am very happy with how well jxl is able to avoid banding.
Demiurge
2023-02-08 10:22:47
Or AV1 I mean
_wb_
2023-02-08 10:23:22
Banding is imo the number one worst artifact, that is also going to be the most resilient artifact in the foreseeable future.
afed
Demiurge Well maybe it was judged to be "better quality" based on faulty heuristics. Just because it uses all available coding tools doesn't mean it uses them intelligently or well. A different encoder might have a different strategy that is better at retaining detail. I haven't tested SVT that much because it is really anal about resolution
2023-02-08 10:24:16
no, this has been proven and compared many times. also, any av1 encoder smooths out details; so far no av1 encoder has reached x265 quality for high fidelity, especially for quality sources with some noise and grain. grain synthesis for av1 helps you not notice this smoothing, and sometimes it is very similar to the source, but true transparency is very hard to achieve. also, most people now don't use the original native grain synthesis and use photon noise from libjxl instead (but it's often even more different from typical grain for video)
Demiurge
2023-02-08 10:24:29
Banding is pretty noticeable but I think blockiness is a lot more disruptive.
_wb_
2023-02-08 10:27:00
Ringing and other high frequency local artifacts are getting less problematic as screen density goes up, but banding (usually in the form of visible macroblocking in smooth areas, though it can also be diagonal banding in the case of av1 and other codecs with directional prediction) is something that remains even after downscaling an image
2023-02-08 10:27:46
And HDR is going to make banding more problematic than ever
Demiurge
2023-02-08 10:28:27
The reason why block artifacts are worse than banding is because banding usually occurs in large smooth low-entropy areas where most pixels are close to the same color and probably away from the center of attention but DCT blocks appear everywhere, especially in the places you look the most.
2023-02-08 10:29:36
And seeing weird looking squares overlaid over the image really takes you out of the immersion of what you're watching
2023-02-08 10:29:52
Really driving home the point that it's a fake image on a computer
2023-02-08 10:30:46
Plus it's extremely recognizable because of jpeg, divx, vp7
2023-02-08 10:31:51
humans have probably been trained by now "this is what crappy compressed data looks like"
2023-02-08 10:32:09
even if they don't know what that means exactly
2023-02-08 10:32:37
it's such an ancient and buttfugly artifact
_wb_
2023-02-08 10:39:23
Yeah, bad blocking is the ugliest thing ever, but subtle blocking in contrast masked areas and on a high density display can also be pretty much unnoticeable
afed
_wb_ imo libaom has improved quite a lot in the past 2 years. But yes, av1 does tend to look too smooth, even at relatively high bitrates. Not sure how much of that is due to inherent limitations of the bitstream and how much is because encoders are optimizing for the wrong thing. Probably a combination of both — in hevc I also see some of that oversmoothing (likely because those encoders are also optimizing for the wrong thing) but it's not as bad as in av1, so I assume there's something about the av1 bitstream that makes oversmoothing more likely.
2023-02-08 10:42:08
but hevc over-smoothing is at least controllable in x265 with properly adjusted aq and psy-rd parameters; at too-high values the encoding can even be much sharper than the source, but it also causes some digital noise and artifacts, or mosquito effects on the edges of moving objects
Traneptora
_wb_ anyway, so vmaf is not bad at correlating with "plague or cholera" opinions, but at qualities useful for still images it's worse than dssim/ssimulacra2/butteraugli 3norm
2023-02-08 01:34:55
out of curiosity, butteraugli 3-norm is `cbrt(x^3 + y^3 + b^3)`?
2023-02-08 01:35:01
or are there some scaling factors
2023-02-08 01:35:14
since X contributes much less here than Y and B
_wb_
2023-02-08 02:10:14
the butteraugli heatmap is somehow based on all components, if I understand correctly ( <@532010383041363969> will know better), and the 3-norm is over that, so cbrt(sum(heatmap^3)) — but iirc it's also not only a 3-norm but a mix of 3,6,12-norms or something like that
BlueSwordM
afed av1 (aom), even with cdef disabled and many other options set to improve sharpness, and using a fork with more settings for tuning and better quality than the default version, is still overly smooth
2023-02-08 06:46:00
What settings for aomenc?
_wb_ imo libaom has improved quite a lot in the past 2 years. But yes, av1 does tend to look too smooth, even at relatively high bitrates. Not sure how much of that is due to inherent limitations of the bitstream and how much is because encoders are optimizing for the wrong thing. Probably a combination of both — in hevc I also see some of that oversmoothing (likely because those encoders are also optimizing for the wrong thing) but it's not as bad as in av1, so I assume there's something about the av1 bitstream that makes oversmoothing more likely.
2023-02-08 06:50:14
More specifically, the case is that x265 is basically the only open source HEVC encoder people tend to use with decent tuning and behaviors. I've seen the results of other HEVC encoders, and they tend to look like default aomenc style of distortions 😂
afed
BlueSwordM What settings for aomenc?
2023-02-08 06:57:20
something like this (for this screenshot), but I have experimented with many parameters and enabled/disabled any filtering, also it's yuv422 source aom-av1-lavish: `--cpu-used=3 --end-usage=q --cq-level=x --threads=2 --aq-mode=0 --bit-depth=10 --lag-in-frames=96 --tune-content=psy --tune=ssim --enable-dnl-denoising=0 --disable-kf --enable-qm=1 --quant-b-adapt=1 --enable-fwd-kf=1 --arnr-strength=1 --enable-cdef=0 --enable-restoration=1 --sb-size=64`
OkyDooky
2023-02-08 07:45:18
I kind of figured that after the first reply. Lol (<@164917458182864897>)
2023-02-08 07:45:27
That might be why I was thinking that. (<@164917458182864897>)
2023-02-08 08:00:08
You have a link to a quick guide on how to go about testing like that? (A link, so that I don't necessitate this thread getting filled with hand-holding posts) (<@321486891079696385>)
2023-02-08 10:20:25
The blockiness is most disruptive on anime-style pictures, because even "subtle" ones tend to be noticeable when it's using the usual "smooth" style that's commonly used. Banding is also noticeable, but blockiness (in JPG) always shows up near edges or "details," like a mole on an otherwise smooth face. Photos or illustrations with a very high density of details tend to make the blockiness non-existent, even at lower bitrates. It just makes it look perceptibly "less good." It made me very encouraged that there is special effort being put toward that in the JXL reference encoder (from what I remember seeing on Github). (<@794205442175402004>)
Demiurge
2023-02-09 01:52:37
Is it pronounced "Jerky Ah-la-KOO-je-la?"
2023-02-09 01:53:21
because that's what it sounds like in my head
BlueSwordM More specifically, the case is that x265 is basically the only open source HEVC encoder people tend to use with decent tuning and behaviors. I've seen the results of other HEVC encoders, and they tend to look like default aomenc style of distortions 😂
2023-02-09 01:58:20
It's funny how aomenc quality is literally a punchline
2023-02-09 01:59:06
It's like it was purely tuned for objective metrics like ssim without any thought for how it actually looks
2023-02-09 02:00:21
since it looks like the "absolute" amount of distortion is lower at the expense of washing away all detail and texture
_wb_
2023-02-09 08:02:56
It's not "jerky" but "yirky"
Jyrki Alakuijala
2023-02-09 10:14:28
something like year-key ah-lah-qui-ih-ah-lah, but I don't really care how it is pronounced, I'm happy if someone tries
2023-02-09 10:15:51
my own English pronunciation needs work, too -- think sponsored by the hydraulic press channel https://www.youtube.com/channel/UCcMDMoNu66_1Hwi5-MeiQgw
Traneptora out of curiosity, butteraugli 3-norm is `cbrt(x^3 + y^3 + b^3)`?
2023-02-09 10:17:09
3-norm is not a real 3-norm but an average of 3-norm, 6-norm and 12-norm -- usual metrics use 2-norm to aggregate errors
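(As an aggregate, that looks roughly like this; a sketch of the idea with equal weights assumed, not libjxl's exact code or normalization:)

```python
import numpy as np

def pnorm(x, p):
    # p-norm normalized by pixel count so different p values are comparable
    return np.mean(np.abs(x) ** p) ** (1.0 / p)

def butteraugli_pnorm(heatmap):
    # the "3-norm": an average of the 3-, 6- and 12-norms of the heatmap
    return np.mean([pnorm(heatmap, 3), pnorm(heatmap, 6), pnorm(heatmap, 12)])

heatmap = np.random.rand(256, 256)   # stand-in for a real butteraugli heatmap
print(butteraugli_pnorm(heatmap))
```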
fab
2023-02-09 11:17:12
Jey alaiula
Demiurge
2023-02-09 11:51:54
Yerky AlaKUyelah. That's how my brain is reading it now.
afed no, this has been proven and compared many times. also, any av1 encoder smooths out details; so far no av1 encoder has reached x265 quality for high fidelity, especially for quality sources with some noise and grain. grain synthesis for av1 helps you not notice this smoothing, and sometimes it is very similar to the source, but true transparency is very hard to achieve. also, most people now don't use the original native grain synthesis and use photon noise from libjxl instead (but it's often even more different from typical grain for video)
2023-02-09 11:55:43
I'll only believe my own eyes in a side by side comparison. Also, the way that x265 is using natural-looking noise and distortion in non-distracting ways in order to mask or hide compression artifacts that are potentially more distracting and disruptive, I think is a really clever idea.
2023-02-09 11:56:50
basically x265 has noticeably more distortion, but it's less distracting and preserves texture, and might help hide other, worse compression artifacts. That's awesome. Maybe JXL can do a similar technique?
Jyrki Alakuijala
2023-02-09 11:58:52
would porting jpegli ideas to h.264 be a good thing
Demiurge
2023-02-09 11:59:21
JXL does a lot of smoothing and blurring, and at lower qualities it can sometimes even look like the image was downsampled and then upsampled again. Maybe instead of doing things like that, which always erases high-frequency details and texture, more "noisy"-looking artifacts that have a better chance of preserving texture and other high-frequency information could be preferred
afed
2023-02-09 12:04:11
it's mostly psy rd and some other methods
```
--psy-rd <float> Influence rate distortion optimized mode decision to preserve the energy of the source image in the encoded image at the expense of compression efficiency

--psy-rdoq <float> Influence rate distortion optimized quantization by favoring higher energy in the reconstructed image. This generally improves perceived visual quality at the cost of lower quality metric scores

--cutree, --no-cutree Enable the use of lookahead's lowres motion vector fields to determine the amount of reuse of each block to tune adaptive quantization factors. CU blocks which are heavily reused as motion reference for later frames are given a lower QP (more bits) while CU blocks which are quickly changed and are not referenced are given less bits. This tends to improve detail in the backgrounds of video with less detail in areas of high motion
```
at slightly higher bitrates than in my screenshot it's quite similar to the source, and it's better than removing those details and noise or creating excessively artificial and constant grain like av1
2023-02-09 12:05:07
it also lowers the metric scores, but is better visually
Demiurge
2023-02-09 12:33:26
Yeah, how it actually looks matters way more than metric scores.
2023-02-09 12:34:27
I wonder what the state of RDO is in libjxl
afed
2023-02-09 12:40:42
I think it's harder to implement because libjxl is more heavily tied to metric estimation, but for lower quality I think it's better to use something simpler
Demiurge
2023-02-09 12:49:21
metrics belong in the trash
afed
2023-02-09 12:58:11
not if these metrics are used properly; almost all encoders use some metrics to evaluate certain stages, from mse to ssim and other more complex ones, otherwise the encoding quality would be much lower or more unpredictable
_wb_
2023-02-09 04:59:07
if you do subjective eval, make it blind: don't say which codec is which 🙂
afed
2023-02-09 05:00:03
depends on the quality of the source and I think av1 can be like half the size and most people won't notice the difference either
DZgas Ж
_wb_ if you do subjective eval, make it blind: don't say which codec is which 🙂
2023-02-09 05:00:09
That's what I'm doing.
afed depends on the quality of the source and I think av1 can be like half the size and most people won't notice the difference either
2023-02-09 05:00:46
at the same speed, this is an impossible statement
afed
2023-02-09 05:04:50
encoding speed also depends on the system. I don't know how well av1 encoders are optimized for sse only; when x264 was being developed this was very common, but now it is a very small percentage and I am not sure that such optimizations have any high priority
Demiurge
2023-02-09 06:46:44
If anything, metrics should be developed from codec psychovisual techniques that work, not the other way around
_wb_
2023-02-09 06:57:24
The only way to develop metrics is to collect big datasets of MOS scores and try to get the best possible correlation with it.
2023-02-09 06:59:44
The main thing is to have distortion types, distortion amounts, and variety in image content so everything of interest is captured (or you can hope that things will generalize to everything of interest)
2023-02-09 07:05:21
Existing databases like TID2013 and Kadid10k are rather limited in that respect: they only have (lib)jpeg and j2k compression artifacts which are relevant, and then they have a bunch of not so relevant artifacts like adding lots of white noise. And the distortion amplitudes are all the way to "barely recognizable", which is not a problem by itself but it is if there are only 5 steps of distortion amplitude, since then basically the "resolution" (in JNDs) of the dataset is very coarse.
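(Once such a dataset exists, the tuning target itself is simple to state; a sketch with made-up numbers, not real MOS data:)

```python
# Correlation between a candidate metric and mean opinion scores (made-up data).
from scipy.stats import pearsonr, spearmanr

mos    = [4.5, 4.1, 3.2, 2.8, 1.9, 1.2]       # subjective scores per image
metric = [92.0, 88.0, 71.0, 65.0, 40.0, 22.0]  # candidate metric, same images

print("Pearson: ", pearsonr(metric, mos)[0])    # linear correlation
print("Spearman:", spearmanr(metric, mos)[0])   # rank correlation
```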
OkyDooky
2023-02-10 01:00:59
How would that work? I'd say anything that can improve the current situation (quality, size, speed, etc.) would be a good idea, whether it's a whole new codec or an improved encoder for an existing codec. (<@532010383041363969>)
diskorduser
2023-02-10 05:27:56
Make jxl based video codec.
afed
2023-02-10 06:07:58
it's like av1: libaom has tune=butteraugli and photon noise from libjxl. av1 would also be able to get xyb if the video format had color profiles like ICCv4. so anything not limited by the standard is possible for any other format
Jyrki Alakuijala
How would that work? I'd say anything that can improve the current situation (quality, size, speed, etc.) would be a good idea, whether it's a whole new codec or an improved encoder for an existing codec. (<@532010383041363969>)
2023-02-10 03:33:56
jpegli has a few main tricks: XYB color space, JPEG XL adaptive quantization, quantization bias, more accurate frame buffers and DCTs
2023-02-10 03:34:17
one would just plug them in one at a time into x264 and observe the improvements
2023-02-10 03:34:42
if these approaches worked in JPEG XL and old JPEG, they might work elsewhere, too
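(For reference, the XYB mapping has roughly this shape; the matrix and bias constants below are approximations, and the authoritative values live in libjxl's opsin code, not in this sketch:)

```python
import numpy as np

# Rough shape of the linear-sRGB -> XYB mapping used by jpegli / JPEG XL.
OPSIN = np.array([[0.300, 0.622, 0.078],   # L (long-wavelength cones)
                  [0.230, 0.692, 0.078],   # M (medium)
                  [0.243, 0.205, 0.552]])  # S (short)
BIAS = 0.00379  # approximate opsin bias

def rgb_to_xyb(rgb_linear):
    lms = OPSIN @ rgb_linear + BIAS
    lms = np.cbrt(lms) - np.cbrt(BIAS)     # psychovisual cube-root nonlinearity
    L, M, S = lms
    return np.array([(L - M) / 2,          # X: red-green opponent channel
                     (L + M) / 2,          # Y: luminance-like channel
                     S])                   # B: blue channel

print(rgb_to_xyb(np.array([1.0, 1.0, 1.0])))
```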
_wb_
2023-02-10 04:15:09
Some things will likely not work in video codecs. E.g. more accurate DCT would only help encode-side, since decode is typically fully specified with limited precision, to avoid differences accumulating after many frames
2023-02-10 04:16:01
Also XYB via an icc profile will not work — adding a cicp codepoint for it could work, but it would still be broken in existing deployments
OkyDooky
2023-02-10 07:43:10
Thank you. That makes sense.
BlueSwordM
_wb_ Also XYB via an icc profile will not work — adding a cicp codepoint for it could work, but it would still be broken in existing deployments
2023-02-11 06:05:53
Yes, it would only work via bleeding edge software.
spider-mario
2023-02-11 09:55:26
I think I would bet on mpv being among the earliest implementations
Cool Doggo
2023-02-11 07:13:13
<@&807636211489177661>
_wb_
2023-02-11 08:19:56
Sigh
DeepSouthMan
2023-02-12 12:06:29
Hello, it seems like the jxl specs are locked behind a paywall. I would like to maybe try and create a decoder, how would I get ahold of the specs for free? Is it even possible?
OkyDooky
2023-02-12 12:09:28
did someone do something about those spam messages earlier? still visible here
username
did someone do something about those spam messages earlier? still visible here
2023-02-12 12:13:37
they are not visible on discord's side so matrix or whatever client you are using must have them cached maybe
2023-02-12 12:13:56
or the bridge for matrix still has them cached or something
yoochan
DeepSouthMan Hello, it seems like the jxl specs are locked behind a paywall. I would like to maybe try and create a decoder, how would I get ahold of the specs for free? Is it even possible?
2023-02-12 12:20:03
these are the joys of ISO; you can find drafts in this channel: https://discord.com/channels/794206087879852103/1019989424232222791/threads/1021189485960114198
Demiurge
2023-02-12 12:21:40
The more people who can read it for free, the more independent JXL development (a good thing, and necessary for success) will take place
DeepSouthMan
2023-02-12 12:30:23
Thank you so much :3
yoochan
2023-02-12 12:33:47
If you implement something based on the spec, share your experience here; other projects are in progress, one in Java (jxlatte), one in Rust (jxl-oxide)
2023-02-12 12:34:08
and both authors regularly post here
DeepSouthMan
2023-02-12 12:35:11
Yes, I'll keep that in mind. Right now I'm looking to familiarize myself with the specs.
w
2023-02-12 01:13:53
<:starege:950466238683951225>
Demiurge
2023-02-12 02:33:29
is there a progressive jxl test? There used to be one from google but they removed the link to the saliency progressive test page
username
Demiurge is there a progressive jxl test? There used to be one from google but they removed the link to the saliency progressive test page
2023-02-12 02:35:09
https://google.github.io/attention-center/
Demiurge
2023-02-12 02:40:10
Cool watch
2023-02-12 02:40:15
works perfect in waterfox
yoochan
2023-02-12 02:46:57
it doesn't work with the firefox add-on 😦
username
2023-02-12 02:48:12
the firefox addon/webextension has many limitations
2023-02-12 02:50:38
the webextension does not support color profiles, progressive decoding and animations
Demiurge
2023-02-12 02:52:21
At least it supports transparency. The built in firefox support doesn't even get that right.
2023-02-12 02:52:29
And the patches have been there for years.
yoochan it doesn't work with the firefox add-on 😦
2023-02-12 02:53:01
Just install waterfox if you can. vanilla firefox is just too dumb to exist right now
2023-02-12 02:53:19
mozilla is too dumb to be real
username
2023-02-12 02:54:05
waterfox has some really nice features like private tabs and a better more customizable theme
Demiurge
2023-02-12 02:54:27
Everything about waterfox is an improvement so far. Like being able to close tabs easier. No obnoxious Pocket forced down your throat. menu icons.
2023-02-12 02:54:48
why did firefox get rid of menu icons? they make things easier.
username
2023-02-12 02:55:19
mozilla makes no sense at all I have no clue why they would remove the menu icons
Demiurge
2023-02-12 02:55:38
Like I said they are too dumb to exist
2023-02-12 02:56:06
I don't know how something so dumb can even be real
2023-02-12 02:56:23
mozilla that is
2023-02-12 02:56:56
Everything they do is some kind of parody
w
2023-02-12 02:59:48
what else does waterfox *really* add
2023-02-12 03:00:20
looking at the sync in the docs seems like they took firefox and text replaced fire with water
2023-02-12 03:00:32
but the links point to firefox
2023-02-12 03:02:01
<:starege:950466238683951225>
username
2023-02-12 03:06:04
I do use and like waterfox but I have to admit it's a bit of an unprofessional and messy project
2023-02-12 03:06:44
it always has been
_wb_
2023-02-12 03:06:52
It's annoying that we need actual forks to get things we want, as opposed to the mainline browsers just listening to community feedback
2023-02-12 03:08:06
And not spending more time on finding excuses not to merge patches or to remove code than on actually doing useful stuff...
Demiurge
2023-02-12 03:11:44
Well mozilla has been not giving a shit about what users want for a long time. Look at pocket.
2023-02-12 03:13:17
They also replaced the original, secure Firefox Sync with a far less secure version
2023-02-12 03:13:53
So they don't care about privacy or security either
2023-02-12 03:15:11
my impression of waterfox is that it's not a fork, it's just an alternate build of firefox with alternate compile time settings.
2023-02-12 03:15:33
It IS firefox but with the dumb crap disabled at build time
2023-02-12 03:15:44
and they can't legally call it firefox
2023-02-12 03:15:59
since that's a trademark
diskorduser
2023-02-12 03:16:50
too much bad words
username
2023-02-12 03:17:38
I mean waterfox isn't just compile-time options, it does have extra features strapped on top of it, but it doesn't really change all that much internal code
Demiurge
diskorduser too much bad words
2023-02-12 03:18:55
yeah, it's a bad habit. sorry.
diskorduser
2023-02-12 03:20:33
may be I'm not used to these words. no probs.
_wb_
2023-02-12 03:24:29
Forks can be just to change one small thing, or they can lead to something completely different - e.g. wasn't chrome/blink a webkit fork originally?
Demiurge
2023-02-12 03:25:07
until they realized webkit was way too awful to fork
2023-02-12 03:25:40
Or rather, too awful to maintain patches for, so they had to hard fork
DZgas Ж
DZgas Ж -q 90 -e 7 --epf=0 --gaborish=0
2023-02-12 03:25:49
well, is this problem solved?
2023-02-12 03:28:10
I still have a small list of JPEG XL problems ✍️ 🤓
1. Creating the VarDCT block structure works poorly.
2. ~~Gaborish works poorly.~~
3. Transmission of exact colors works poorly.
4. With e4+, block artifacts go beyond the boundaries of their blocks, sometimes reaching 10 times farther than the blocks themselves; artifacts are also noticeable at block junctions, and they drift toward the lower-right corner.
2023-02-12 03:29:17
in fact, right now I'm most concerned about the length of the artifact
2023-02-12 03:31:02
although, after conducting tests with ordinary images, I found that artifacts overlap other artifacts, and it turns out something like this (Black in the center)
_wb_
DZgas Ж well, is this problem solved?
2023-02-12 03:33:39
I'm quite sure that what you see there is basically a DC artifact that is only going to be visible on very specific artificial images (block-aligned with high DC contrast and otherwise very smooth)
2023-02-12 03:35:17
We could add an option to do DC without the slight diagonal blur, which would avoid artifacts traveling long-distance, but also is somewhat worse for avoiding banding in images with slow gradients.
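A minimal sketch of that trade-off, assuming nothing about libjxl's actual DC upsampling (the kernel and layout below are invented for illustration): smoothing the per-block DC plane before upsampling reduces banding on slow gradients, but lets a high-contrast block's DC leak into its neighbours, while nearest-neighbour upsampling keeps everything inside the block at the cost of blockiness.

```python
# Illustration only: compare nearest-neighbour DC upsampling with a
# slightly smoothed version. This is NOT libjxl's real filter.
import numpy as np

dc = np.zeros((8, 8))                 # one DC value per 8x8 block
dc[3, 3] = 1.0                        # a single high-contrast block

# Nearest-neighbour: "what happens in the block stays in the block".
nearest = np.kron(dc, np.ones((8, 8)))

# Smoothed: blur the DC plane before upsampling; gradients band less,
# but the bright block now bleeds into adjacent blocks.
kernel = np.array([[0.0625, 0.125, 0.0625],
                   [0.125,  0.25,  0.125],
                   [0.0625, 0.125, 0.0625]])
blurred = np.zeros_like(dc)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        blurred += kernel[dy + 1, dx + 1] * np.roll(dc, (dy, dx), (0, 1))
smoothed = np.kron(blurred, np.ones((8, 8)))

# Centre of the block to the left of the bright one: zero with
# nearest-neighbour, non-zero (leaked energy) with smoothing.
print(nearest[28, 20], smoothed[28, 20])   # 0.0 vs 0.125
```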
DZgas Ж
_wb_ I'm quite sure that what you see there is basically a DC artifact that is only going to be visible on very specific artificial images (block-aligned with high DC contrast and otherwise very smooth)
2023-02-12 03:35:49
I can't understand why it's impossible to fix this problem in the same way as with the E3 parameters - where this problem simply doesn't exist
2023-02-12 03:36:17
What does e3 NOT do, such that it has no such artifacts?
_wb_
2023-02-12 03:37:06
It's not impossible - it's just that what is done at e4 is somewhat better on most normal images (but yes, it's also worse on some artificial images)
2023-02-12 03:37:57
But we could do e9 with the simpler DC of e3, these are all just encoder decisions, nothing inherent to the format...
DZgas Ж
2023-02-12 03:38:04
e3 works on a very clear principle: here is a block, here is an artifact inside the block, and the artifact does not go further than this block. but with e4+, the artifacts just travel further. couldn't that be fixed easily, even at the decoder level?
2023-02-12 03:38:47
at the decoder level, you can decode each block, and limit its rendering to its own block size
Demiurge
2023-02-12 03:40:24
the long-distance artifacts are not a real problem unless they're noticeable "in the wild", let's say, in my opinion. if an artifact is visually transparent, it doesn't matter that it's long-distance.
2023-02-12 03:41:22
jxl is supposed to be good at visually transparent lossy, so any images where the artifacts are visible are a real problem.
DZgas Ж
_wb_ But we could do e9 with the simpler DC of e3, these are all just encoder decisions, nothing inherent to the format...
2023-02-12 03:43:04
an image block affects things at a distance more than 10 times its own size, and then you just say "it's probably better not to change anything"; that is really incomprehensible to me
2023-02-12 03:45:44
I also want to say that the artifacts run straight into the nearest blocks and beyond, at levels visible within the image's 8-bit range, and that fact may make it impossible to fix exact color reproduction
Demiurge
2023-02-12 03:46:24
I don't think he's saying that. but if you see anything weird and ugly, post the original image and post what it looks like after JXL conversion, along with the settings/cmdline you used, and before any post-processing is applied.
DZgas Ж
Demiurge I don't think he's saying that. but if you see anything weird and ugly, post the original image and post what it looks like after JXL conversion, along with the settings/cmdline you used, and before any post-processing is applied.
2023-02-12 03:51:19
Don't worry, that's the only thing I'm doing here.
2023-02-12 03:52:51
-q 90 -e 7 --epf=0 --gaborish=0
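For context, those flags correspond to an invocation along the lines of `cjxl input.png output.jxl -q 90 -e 7 --epf=0 --gaborish=0` (file names are placeholders); as far as I understand them, `--epf=0` and `--gaborish=0` turn off the edge-preserving filter and the Gaborish smoothing convolution, so the raw VarDCT artifacts stay visible.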
2023-02-12 03:55:28
The question is simple: what is going on here, and why is it happening?
2023-02-12 03:56:56
right away I want to point out the gradient problems: the whole green area turns into something else, and it's not clear why. But that doesn't matter now; what matters is, why are there yellow lines on the green border when there is red in the image?
2023-02-12 03:58:05
2023-02-12 03:59:54
perhaps if I don't lighten the picture but darken it instead, it will be clearer (this is 8-bit color)
2023-02-12 04:07:03
I made this image specifically to demonstrate this: at e3, the closest red dot falls exactly within its 8x8 block and creates an artifact, producing a yellow tint, but only inside that same block. On e7, because the artifact reaches so far, it is noticeable much further away from the red block itself.
2023-02-12 04:14:41
at -q70, the wavy artifact passing out of the red block is clearly noticeable.. and you can see that it fades toward the top, as the red gets further away
DZgas Ж at -q70, the wavy artifact passing out of the red block is clearly noticeable.. and you can see that it fades toward the top, as the red gets further away
2023-02-12 04:18:15
And another very interesting observation: there is no wave in the center, because there the red color sits inside the block, and that corrects the artifact. But above and below those blocks there are only green and white, so the artifact from the red passes straight through them
2023-02-12 04:20:07
about this block
Demiurge I don't think he's saying that. but if you see anything weird and ugly, post the original image and post what it looks like after JXL conversion, along with the settings/cmdline you used, and before any post-processing is applied.
2023-02-12 04:20:48
done
2023-02-12 04:28:02
waiting for answer 🛌
_wb_
2023-02-12 04:39:43
Not saying we shouldn't take a look at what is happening and if there's a way to improve DC encoding for such artificial images without making things worse for natural images. Just saying this way of doing DC is beneficial to improve precision on smooth gradients and to avoid banding, and doing the simpler e3 thing makes it so "what happens in the block stays in the block" but that also increases blockiness in smooth gradients.
2023-02-12 04:40:06
But maybe there's a way to get the best of both worlds.
Demiurge
2023-02-12 04:42:33
ideally in the future someone will make a special encoder or pre-processor for artificial images. Possibly as part of libjxl, but libjxl is already a monstrous-looking C++/CMake project as it is...
_wb_
2023-02-12 04:43:40
Testing on artificial images and exaggerating the artifacts by zooming a lot or post-processing the colors is not the best way to find the best trade-off between artifacts though. For that, I would not change the colors and zoom not more than 3x. Otherwise you end up optimizing for unrealistic viewing conditions.
Demiurge
2023-02-12 04:44:45
There's no reason why GIF should still reign supreme after this long. What the world really needs is a good image preprocessor and palette optimizer; there is a lot of published research into palette-optimization strategies already, but it has never been applied to PNG encoders, let alone JXL
2023-02-12 04:45:54
Even though they would probably be more effective if used in a newer format.
2023-02-12 04:46:14
Better prediction and entropy coding and all.
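One concrete example of the kind of palette-optimization strategy that literature describes is median-cut. The sketch below is illustrative only, not any particular encoder's implementation; real encoders layer dithering, palette ordering, and entropy-aware decisions on top of something like this.

```python
# Median-cut palette construction: an illustrative sketch.
import numpy as np

def median_cut(pixels: np.ndarray, n_colors: int) -> np.ndarray:
    """pixels: (N, 3) RGB array -> (n_colors, 3) palette."""
    boxes = [pixels]
    while len(boxes) < n_colors:
        # Pick the box with the largest spread in any channel...
        i = max(range(len(boxes)),
                key=lambda k: np.ptp(boxes[k], axis=0).max())
        box = boxes.pop(i)
        # ...and split it at the median of its widest channel.
        ch = np.ptp(box, axis=0).argmax()
        box = box[box[:, ch].argsort()]
        mid = len(box) // 2
        boxes += [box[:mid], box[mid:]]
    return np.array([b.mean(axis=0) for b in boxes])

img = np.random.randint(0, 256, (64, 64, 3))
palette = median_cut(img.reshape(-1, 3).astype(float), 16)
print(palette.shape)   # (16, 3)
```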
DZgas Ж
_wb_ Not saying we shouldn't take a look at what is happening and if there's a way to improve DC encoding for such artificial images without making things worse for natural images. Just saying this way of doing DC is beneficial to improve precision on smooth gradients and to avoid banding, and doing the simpler e3 thing makes it so "what happens in the block stays in the block" but that also increases blockiness in smooth gradients.
2023-02-12 04:54:36
gaborish for what?
_wb_ Not saying we shouldn't take a look at what is happening and if there's a way to improve DC encoding for such artificial images without making things worse for natural images. Just saying this way of doing DC is beneficial to improve precision on smooth gradients and to avoid banding, and doing the simpler e3 thing makes it so "what happens in the block stays in the block" but that also increases blockiness in smooth gradients.
2023-02-12 04:56:31
I'm not saying that we should literally use the functions from e3, I'm saying that we need to figure out what specific thing makes everything work fine in e3, and how to do the same in the other modes without any deterioration. Because otherwise you're turning a Bug into a Feature
Demiurge
_wb_ But maybe there's a way to get the best of both worlds.
2023-02-12 04:57:01
Increasing the precision of the 8x8 DC image, perhaps.
2023-02-12 04:57:23
Depending on how big the image is
DZgas Ж
_wb_ Testing on artificial images and exaggerating the artifacts by zooming a lot or post-processing the colors is not the best way to find the best trade-off between artifacts though. For that, I would not change the colors and zoom not more than 3x. Otherwise you end up optimizing for unrealistic viewing conditions.
2023-02-12 05:01:27
I understand. But it also shows that the format doesn't work perfectly; it tends to break the image where it should have done something much simpler. For example, the image where the colors are just green and white: come on, codec, just reproduce that, but no, the codec starts making a green gradient, creating artifacts where it could have avoided them
2023-02-12 05:03:00
It is quite obvious that for such pictures one should use Modular, but they show very clearly the problems that exist
_wb_
Demiurge Increasing the precision of the 8x8 DC image, perhaps.
2023-02-12 05:03:18
It's already quite precise, but maybe we could add some heuristics to selectively bump up precision where needed.
Demiurge
DZgas Ж I understand. But it also shows that the format doesn't work perfectly; it tends to break the image where it should have done something much simpler. For example, the image where the colors are just green and white: come on, codec, just reproduce that, but no, the codec starts making a green gradient, creating artifacts where it could have avoided them
2023-02-12 05:03:31
DCT does not work perfectly. DCT is intended for natural images with lots of noise like photographs. It's famous for not being a good fit for GIF-type graphics
2023-02-12 05:04:16
It's like trying to use JPEG to compress text and complaining about the noise it introduces.
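A quick worked version of that point in one dimension: take the DCT of a hard step, zero out the high-frequency coefficients (the crudest possible quantization), and the clean edge comes back with over- and undershoot, the ringing that is so visible around text and flat-color graphics.

```python
# 1-D illustration of DCT ringing on a hard edge.
import numpy as np
from scipy.fft import dct, idct

step = np.concatenate([np.zeros(32), np.ones(32)])   # a clean edge
coeffs = dct(step, norm='ortho')
coeffs[8:] = 0                    # keep only the 8 lowest frequencies
recon = idct(coeffs, norm='ortho')

# The reconstruction over- and undershoots [0, 1]: that's the ringing.
print(recon.min(), recon.max())
```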
DZgas Ж
Demiurge It's like trying to use JPEG to compress text and complaining about the noise it introduces.
2023-02-12 05:05:19
Well, JPEG doesn't create the artifacts that JPEG XL creates
2023-02-12 05:05:37
exactly like e3 JPEG XL
Demiurge
2023-02-12 05:06:14
What is really needed for images like that is a different tool. And the format has lots of tools, but the current version of libjxl doesn't have any tricks like that. Someone would have to write a new JXL encoder designed for images like that.
DZgas Ж
DZgas Ж exactly like e3 JPEG XL
2023-02-12 05:07:08
in fact, the amazing thing is that e3 has no such artifacts at all, from everything I have found so far, except for color accuracy (with errors like FFFFFF -> FFFFFE)
Demiurge
2023-02-12 05:07:29
Either that, or just use GIF because apparently GIF is the most advanced state of the art technology in 2023
DZgas Ж
Demiurge What is really needed for images like that is a different tool. And the format has lots of tools, but the current version of libjxl doesn't have any tricks like that. Someone would have to write a new JXL encoder designed for images like that.
2023-02-12 05:08:37
in practice this doesn't make sense, well, I'm not saying these images will be used. I use them to visually show problems, bugs, artifacts
Demiurge
2023-02-12 05:08:48
-e3 tends to look significantly more ugly and blocky like ye olde JPEG does.
DZgas Ж
Demiurge -e3 tends to look significantly more ugly and blocky like ye olde JPEG does.
2023-02-12 05:09:17
no
Demiurge
2023-02-12 05:09:22
Higher effort settings definitely look better to me.
2023-02-12 05:09:28
For the images I have tested.
DZgas Ж
Demiurge Either that, or just use GIF because apparently GIF is the most advanced state of the art technology in 2023
2023-02-12 05:09:46
or maybe it's better to use a different codec, right? what do you want to prove to me?
Demiurge
2023-02-12 05:11:21
There are a lot of clever ways to compress images that have few colors. Unfortunately no one has written a modern encoder to take advantage of these clever tricks.
2023-02-12 05:11:37
The state of the art for images like that is still apparently GIF
2023-02-12 05:12:01
No one has bothered porting those tricks over to PNG or newer
2023-02-12 05:12:12
even though it should be possible to
DZgas Ж
2023-02-12 05:12:29
What are you saying
2023-02-12 05:13:23
use JPEG XL modular; it's a completely separate codec which, as it happens, lives in the same place as VarDCT: in the libjxl library
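For reference, recent cjxl builds let you force that choice explicitly; if I remember the interface correctly it is something like `cjxl input.png output.jxl -q 90 --modular=1` to enforce modular mode (and `--modular=0` for VarDCT), but check `cjxl --help` for the exact spelling in your version.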
Demiurge
2023-02-12 05:13:35
I'm saying no one has written a (non-GIF) encoder that specializes in images with few unique colors.
2023-02-12 05:13:46
It's possible but no one has bothered doing it for some reason
DZgas Ж
2023-02-12 05:13:52
GIF is shit. stop going on about it
Demiurge I'm saying no one has written a (non-GIF) encoder that specializes in images with few unique colors.
2023-02-12 05:14:42
But EVERYONE does it
2023-02-12 05:14:50
PNG WEBP JPEG XL do it
Demiurge
2023-02-12 05:16:49
But GIF encoders often produce output that is smaller than PNG, WebP, or JXL, because the other formats do not specialize in images with a small number of colors.
2023-02-12 05:17:34
and more thought and clever tricks were put into the GIF encoder
2023-02-12 05:18:11
even though the format is shit
DZgas Ж
Demiurge But GIF encoders often produce output that is smaller than PNG, WebP, or JXL, because the other formats do not specialize in images with a small number of colors.
2023-02-12 05:18:15
well, no.
Demiurge
2023-02-12 05:18:38
and those tricks could easily apply to other formats as well
DZgas Ж
2023-02-12 05:18:43
GIF is shit. GIF is stupid.
Demiurge
2023-02-12 05:19:36
I agree, but it's been optimized to the limit.
2023-02-12 05:19:45
And newer formats have not been.
DZgas Ж
Demiurge I agree, but it's been optimized to the limit.
2023-02-12 05:20:15
then why was it necessary to say everything before that...
Demiurge
2023-02-12 05:20:56
Nothing that I say is ever necessary...
DZgas Ж
2023-02-12 05:22:46
Okay, I'm not going to discuss gif.
Demiurge
2023-02-12 05:29:23
I hope that some day libjxl gets palette optimization, and lossless performance that can compete with lossless webp. What would be even more difficult is a near-lossless preprocessor and/or a way to automatically recognize image regions that are non-photographic or contain a lot of high-contrast, smoothly defined shapes that would be better compressed using simpler and more efficient techniques than DCT.
2023-02-12 05:31:47
That way it could truly be a fire-and-forget encoder, and the human operator does not need to know if the image they are compressing is a better candidate for lossless compression or not.
2023-02-12 05:34:45
If the encoder was able to choose the best method to use depending on the image data in each region, and without jarring borders between regions... It's kind of a fantasy but it would be great.
2023-02-12 05:38:14
All of the tools are technically available in the bitstream specification... In fact it can even pull a DjVu and have a separate non-DCT layer for sharp shapes.
2023-02-12 05:41:59
There is a lot of potential for all kinds of very clever encoder tricks if someone is interested enough in writing an encoder.
2023-02-12 05:42:38
I hope people write encoders in languages other than C++ :D
2023-02-12 05:44:16
the c++ libjxl encoder is very naive compared to what the format is capable of. It doesn't attempt any fancy stuff like that. It just does a lossy DCT or a lossless mode depending on what you tell it to do. And it has an experimental lossy mode that is similar to regular lossy mode except that it hasn't been tuned as much and it uses a DWT instead of a DCT
2023-02-12 05:48:53
If someone made a DjVu-like container format for JXL, with pages, chapters, text-layer, etc, that would be pretty wicked cool too.
2023-02-12 05:52:26
The container format could even be generalized to support other formats as well. But, thinking about it, the utility would be pretty limited in the absence of encoders that are optimized for rasterizing text and scanned pages.
2023-02-12 05:53:38
Otherwise DjVu would be more successful.
Peter Samuels
2023-02-12 05:55:05
Shut up
Fraetor
2023-02-12 06:02:43
<@&807636211489177661>
_wb_
Demiurge If the encoder was able to choose the best method to use depending on the image data in each region, and without jarring borders between regions... It's kind of a fantasy but it would be great.
2023-02-12 07:19:38
This is what I want to work on at some point. It's tricky but the bitstream is expressive enough to do much more than what we do now.
yoochan
2023-02-12 07:23:14
A bit late, but I missed the explanation of what jpegli is?! I saw this post today https://mobile.twitter.com/jyzg/status/1622895765403103232 does jpegli produce plain old jpeg? What kind of magic makes it so sharp?
spider-mario
2023-02-12 07:33:17
yes, good old ’90s jpeg
Fraetor
2023-02-12 08:10:08
The magic is in using the XYB colourspace, which can be signalled with an ICCv4 colour profile.
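For the curious, the XYB transform has roughly this shape: an LMS-like linear mix of linear RGB, a biased cube-root nonlinearity, then an opponent-color rotation. The matrix and bias in this sketch are quoted from memory of libjxl's opsin parameters and may be slightly off; treat them as illustrative, not normative.

```python
# Rough sketch of the XYB transform's structure; constants are
# approximate (from memory of libjxl's opsin code), not normative.
import numpy as np

OPSIN = np.array([[0.300,    0.622,    0.078],      # L  <- R, G, B
                  [0.230,    0.692,    0.078],      # M
                  [0.243423, 0.204767, 0.551810]])  # S
BIAS = 0.0037930732552754493

def linear_rgb_to_xyb(rgb: np.ndarray) -> np.ndarray:
    lms = OPSIN @ rgb
    lms = np.cbrt(lms + BIAS) - np.cbrt(BIAS)       # perceptual nonlinearity
    l_, m_, s_ = lms
    return np.array([(l_ - m_) / 2,                 # X: red-green opponent
                     (l_ + m_) / 2,                 # Y: luminance-like
                     s_])                           # B: blue-ish channel

print(linear_rgb_to_xyb(np.array([1.0, 1.0, 1.0])))
```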
yoochan
2023-02-12 08:12:58
Thanks! Amazing idea!
spider-mario
2023-02-12 08:14:09
as far as I know, this is some of the magic, but not all
Fraetor
2023-02-12 08:15:12
Ooh?
spider-mario
2023-02-12 08:15:33
<@532010383041363969> would be able to go into more detail
username
2023-02-12 08:16:54
how complete is jpegli in its current state?
2023-02-12 08:17:39
I've been wanting to make use of it ever since before it was called that and was just a test of xyb in jpegs
w
2023-02-12 10:18:46
how does it perform without xyb/icc?
Demiurge
yoochan A bit late, but I missed the explanation of what jpegli is?! I saw this post today https://mobile.twitter.com/jyzg/status/1622895765403103232 does jpegli produce plain old jpeg? What kind of magic makes it so sharp?
2023-02-13 04:40:07
This looks beautiful. :)
2023-02-13 04:41:57
lol. ancient JPEG will actually be an HDR format...
2023-02-13 04:42:44
look how deep the colors are... And I'm colorblind!
2023-02-13 04:45:16
Greener greens and redder reds... and I have deuteranomaly, those colors are tricky for me
2023-02-13 06:01:38
Is arithmetic coding ever going to catch on with legacy JPEG? libjpeg supports it like 50% of the time and it's free space savings
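For what it's worth, with a libjpeg build that has arithmetic coding enabled the repack is lossless and a single command, e.g. `jpegtran -arithmetic in.jpg > out.jpg`; the catch is that browsers never shipped arithmetic decoders, so the smaller files don't decode everywhere.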
zamfofex
2023-02-13 06:45:36
<@&807636211489177661> 🤔
w
2023-02-13 06:56:46
get rid of matrix
OkyDooky
2023-02-13 07:10:12
In the past few days there have been so many spam messages coming from the Matrix bridge (another one just now) that I'm afraid one day the admins might get rid of the bridge and you'll have to join Discord for chats...
_wb_
2023-02-13 07:18:43
I do want to keep that bridge, but Matrix really needs to do something about their spam problem. It shouldn't be that hard to detect users who join lots of channels and put identical longish messages in all of them...
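The heuristic could be as simple as the sketch below: flag an account once the same longish text shows up in more than a handful of rooms. Names and thresholds are made up for illustration; a real mitigation would need rate limits, fuzzy matching, and an appeals path.

```python
# Toy cross-room duplicate-message detector; thresholds are arbitrary.
from collections import defaultdict

MIN_LEN = 80      # only consider "longish" messages
MAX_ROOMS = 3     # same text in more rooms than this looks like spam

_rooms_by_msg: dict[tuple[str, str], set[str]] = defaultdict(set)

def looks_like_spam(user: str, room: str, text: str) -> bool:
    """True once `user` has posted `text` in more than MAX_ROOMS rooms."""
    if len(text) < MIN_LEN:
        return False
    rooms = _rooms_by_msg[(user, text.strip().lower())]
    rooms.add(room)
    return len(rooms) > MAX_ROOMS
```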
w
2023-02-13 07:23:01
yes they should make it so you have to register with a phone number
2023-02-13 07:24:42
or old school ip ban
2023-02-13 07:24:46
ip range ban
improver
2023-02-13 07:42:29
or just not set the room as public?
_wb_
2023-02-13 08:01:39
But is it still useful then?
Jyrki Alakuijala
Fraetor The magic is in using the XYB colourspace, which can be signalled with an ICCv4 colour profile.
2023-02-13 10:57:43
the examples I posted were without XYB
w how does it perform without xyb/icc?
2023-02-13 10:58:37
very very well, 20-25 % more dense without XYB, 30-35 % more dense with XYB (American-style marketing numbers, can be 5-10 % less in reality)
improver
_wb_ But is it still useful then?
2023-02-13 10:58:58
for people who are already in there yes, and you could link from jpegxl.info
w
2023-02-13 11:07:19
or they can just click on discord link
improver
2023-02-13 11:16:07
discord is a pain in the ass for paranoid & open platform activist types
2023-02-13 11:17:33
it won't even let people in if they connect from tor/vpn and don't want to provide their phone number
2023-02-13 11:19:26
lucky
w
2023-02-13 11:19:38
i think it's worth it if a user is going to participate; otherwise, discord also has guest login. I'd also suggest that, from outside discord, it should be read only
improver
2023-02-13 11:22:02
i'd say the way it's set up right now is a bit unfriendly, sharing only a single channel
w
2023-02-13 11:23:00
with how discussion forums worked 10 years ago, it's not unfriendly
2023-02-13 11:24:42
these are non issues fabricated for no real reason
improver
2023-02-13 11:24:48
i don't really like matrix that much, preferring xmpp, so it's up to matrix users to argue their platform choice
w
2023-02-13 11:24:57
why do I (user on this platform) have to put up with the problems of other platform (matrix)
improver
2023-02-13 11:25:24
tbf spam has happened from discord side too
2023-02-13 11:27:13
i'd say just make it non-public and it should be fine spam-wise
w
2023-02-13 11:27:49
what does non public do in matrix
improver
2023-02-13 11:29:33
doesn't announce in public directory (where bots pick channels to spam from, but also where random people can discover)
OkyDooky
2023-02-13 11:43:59
ok made it non-public
_wb_
2023-02-13 11:44:15
that was me on Matrix
Demiurge
2023-02-13 02:38:24
Why do people think unsolicited advertising works?
yoochan
2023-02-13 03:03:59
do you think it has the same rate of success as d*ck pics ?
_wb_
2023-02-13 03:21:54
spam/phishing do work, you just need 0.01% of people to be stupid enough to not ignore it and then if you can reach millions you will still find hundreds of victims
2023-02-13 03:24:23
d*ck pics on the other hand are I think mostly a form of exhibitionism, i.e. I think "success" is obtained just by the target seeing the picture, which will basically have a probability close to 100%
Fraetor
_wb_ I do want to keep that bridge, but Matrix really needs to do something about their spam problem. It shouldn't be that hard to detect users who join lots of channels and put identical longish messages in all of them...
2023-02-13 06:25:39
I had a chat with some people from matrix (well, element) at a conference last week, and while they didn't really have any specific suggestions around spam reduction, they did mention that matrix can set up a discord account as a puppet user, which gives much better integration and would allow access to all the channels, etc.
gb82
2023-02-13 07:06:02
is there an appimage for waterfox 5.1.2?
DZgas Ж
yoochan A bit late, but I missed the explanation of what jpegli is?! I saw this post today https://mobile.twitter.com/jyzg/status/1622895765403103232 does jpegli produce plain old jpeg? What kind of magic makes it so sharp?
2023-02-13 11:11:07
why is he using png lol!
2023-02-13 11:12:01
You can send an XYB jpeg to Twitter, because browsers understand the colorspace
improver
2023-02-13 11:27:47
but does twitter's thumbnailer?
Peter Samuels
DZgas Ж why is he using png lol!
2023-02-13 11:31:35
probably to be able to zoom in and show the pixels without generation loss
OkyDooky
2023-02-13 11:54:25
This sounds great...
2023-02-13 11:54:26
Screenshot_2023-02-13_15-46-14.png
2023-02-13 11:57:19
...but, wouldn't this kind of make everyone think twice about adopting a next-gen replacement format? I mean, if they can get HDR (future-proofing) and can increase the quality by a lot without increasing the filesize, then they have most of the benefits that the new format offers (that they would care about).
2023-02-13 11:59:30
Because Discord is proactively spyware and can be proven as such. The only way to meaningfully mitigate it is to only use it in a web wrapper or browser and never use the full application on either desktop or mobile. (<@456226577798135808>)
2023-02-14 12:04:52
As far as I know, the only thing XMPP lacks compared to Matrix is a Slack/Discord-like application to take advantage of all its extensions at once (and servers offering those features). Once it has that, I'd drop Matrix in an instant; especially since it can reasonably be considered "Mossad-adjacent" technology: https://hackea.org/notas/matrix.html https://lukesmith.xyz/articles/matrix-vs-xmpp/ (<@260412131034267649>)
2023-02-14 12:07:47
So we can all be friends. Anyways, isn't it fairly trivial to block/ban a user from a room in Matrix? I'm managing a Space for the Aurora OSS apps, since the main room got abandoned by its moderators and is just spam, at this point. I haven't had to crack down on any spam posting, yet. But I thought the options for banning were pretty easy to access. (Here's the link, in case anyone is curious: https://matrix.to/#/#auroraosscommunity:matrix.org) (<@288069412857315328>)
Demiurge
2023-02-14 01:00:50
what's up with all the annoying escape codes.
username
2023-02-14 01:02:43
something something Matrix
gb82
2023-02-14 01:07:22
Jon, are u on Mastodon?
_wb_
2023-02-14 06:23:38
Apparently I accidentally have two mastodon accounts, there is also @wb@mastodon.online
Demiurge
2023-02-14 06:30:33
I also use discord via browser and ubo
username
2023-02-14 06:31:09
does uBO even block any connections when using discord?
OkyDooky
2023-02-14 06:50:41
Can I has screenshot? (<@456226577798135808>)