JPEG XL

Info

rules 58
github 38694
reddit 688

JPEG XL

tools 4270
website 1813
adoption 22069
image-compression-forum 0
glitch-art 1071

General chat

welcome 3957
introduce-yourself 294
color 1687
photography 3532
other-codecs 25116
on-topic 26652
off-topic 23987

Voice Channels

General 2578

Archived

bot-spam 4577

other-codecs

Exorcist
2025-11-07 03:34:07 What is "smartphone photo"? Shot by smartphone camera or display for smartphone screen?
cioute
2025-11-07 04:56:19 thanks, i just tested webp 100% quality in opencamera, somehow it looks less noisy and weighs less, the only problem is no usable zoom for webp/avif/jpegxl in fossify gallery
AccessViolation_
2025-11-07 05:02:10 are you sure 100% quality doesn't turn it into a lossless WebP?
2025-11-07 05:02:41 it may be that if you want the highest lossy quality, you need to select 99%
cioute
2025-11-07 05:07:00 not sure
AccessViolation_
2025-11-07 05:07:46 I tested it in OpenCamera, it seems like a 100% quality WebP is lossless, so it's going to store a lot of information to preserve every pixel value exactly. A 100% quality WebP is 10.5 MB for me, while a 99% quality WebP is only 3.5 MB. And note that even that 100% lossless WebP is not really lossless, since it's derived from a lossy JPEG that the camera outputs
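A quick way to check this on a saved file, as a minimal sketch assuming libwebp's bundled `webpinfo` tool (the filename is a placeholder): a lossy WebP carries a `VP8 ` bitstream chunk, a lossless one a `VP8L` chunk.
```
# List the container chunks; "VP8 " means lossy, "VP8L" means lossless
webpinfo photo.webp | grep -i chunk
```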
jonnyawsom3
2025-11-07 05:07:49 If it was lossless, I highly doubt it would weigh less
2025-11-07 05:08:25 > somehow it looks less noisy and weighs less
hmm
2025-11-08 06:45:03 Has anyone tried using ROI for foveated encoding with AV1? I did some tests with x264 back in the day but was having trouble finding a good balance
DZgas Ж
2025-11-08 08:13:26 <:BlobYay:806132268186861619>
lonjil
2025-11-09 08:05:14 I got completely side tracked on implementing an FPGA DCT encoder. I've spent several days just reading about different sine and cosine approximation techniques.
2025-11-09 08:05:50 Guess I should probably organize some notes and put up somewhere, considering how obscure some of it was.
daniilmaks
2025-11-10 05:02:12 https://cdn.discordapp.com/attachments/673202643916816384/1415149093075812493/webpisshub.webp?ex=6912934b&is=691141cb&hm=b5bf1697de917536d40352218d080c188d7fa4a478e8a5485440bc3f8cf47a59
2025-11-10 05:03:11 https://github.blog/changelog/2025-08-28-added-support-for-webp-images/
2025-11-10 05:04:35 webp is a jpeg sidegrade that was outdated on arrival
Meow
2025-11-10 05:10:35 A format is widely adopted when it's outdated
daniilmaks
2025-11-10 05:11:46 a format is widely adopted when a browser monopoly shoves it into a forced adoption
2025-11-10 05:13:34 webp has no business gaining support in 2025, it should be phased out, not added in.
2025-11-10 05:15:18 webp is the definition of a codec that had no market interest. it sucks for high density, it sucks for high fidelity, it sucks for detail preservation and is prone to colorshift and the worst generational loss ever achieved.
Meow
2025-11-10 05:15:28 https://en.wikipedia.org/wiki/WebP#Support
2025-11-10 05:16:16 The lossless part done by <@532010383041363969> is good
daniilmaks
2025-11-10 05:17:08 lossless webp good. I agree. but it is a travesty that it got bundled with the worst mainstream lossy codec in current age.
2025-11-10 05:18:55 right at the first paragraph:
> Google actively promotes WebP, and Google Chrome and all Chromium-based browsers support the format. The proprietary PageSpeed Insights tool suggests that webmasters switch from JPEG and PNG to WebP in order to improve their website speed score.
the rest of the market was essentially forced into adding support for the format because incompatibilities drive consumers away from a given product, especially if the format is encountered often enough.
2025-11-10 05:19:56 https://en.wikipedia.org/wiki/WebP#Disadvantages_and_criticism
2025-11-10 05:20:50 > In September 2010, Fiona Glaser, a developer of the x264 encoder, wrote a very early critique of WebP. Comparing different encodings (JPEG, x264, and WebP) of a reference image, she stated that the quality of the WebP-encoded result was the worst of the three, mostly because of blurriness on the image. Her main remark was that "libvpx, a much more powerful encoder than ffmpeg's jpeg encoder, loses because it tries too hard to optimize for PSNR" (peak signal-to-noise ratio), arguing instead that "good psycho-visual optimizations are more important than anything else for compression". In October 2013, Josh Aas from Mozilla Research published a comprehensive study of current lossy encoding techniques and was not able to conclude that WebP outperformed JPEG by any significant margin.
Meow
2025-11-10 05:32:26 This may have prompted the creation of MozJPEG
2025-11-10 05:35:59 How to prove that WebP isn't really better than JPEG? Create a new JPEG encoder/decoder
daniilmaks
2025-11-10 06:01:45 mozjpeg and jpegli both obsoleted webp a while ago
Meow
2025-11-10 07:27:11 Jpegli can even make XYB JPEG on par with AVIF sometimes
Exorcist
2025-11-10 07:47:59 > How to prove that WebP isn't really better than JPEG?
- VP8 has only `4*4` DCT blocks, no `8*8`
- VP8 has only YUV420, no YUV444
These limits mean WebP can only fit small, low-quality targets
AccessViolation_
2025-11-10 08:37:08 [TECHNICAL OVERVIEW OF VP8, AN OPEN SOURCE VIDEO CODEC FOR THE WEB (Google Research)](<https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37073.pdf>): > VP8 uses 4x4 block-based discrete cosine transform (DCT) for all luma and chroma residual signal. Depending on the prediction mode, the DC coefficients from a 16x16 macroblock may then undergo a 4x4 Walsh-Hadamard transform. [VP8 vs VP9: 8 Key Differences and How to Choose (Cloudinary)](<https://cloudinary.com/guides/video-formats/vp8-vs-vp9-8-key-differences-and-how-to-choose#2-block-sizes>) > While both VP8 and VP9 use block-based motion compensation and intra-frame prediction, VP9 supports larger block sizes than VP8. VP8 uses a fixed block size of 16×16 pixels, whereas VP9 supports block sizes ranging from 4×4 to 64×64 pixels. This flexibility allows VP9 to adapt better to different types of video content, resulting in more efficient compression and improved visual quality. <:DogWhat:806133035786829875>
cioute
2025-11-10 03:33:46 differences between vp9 and h265? maybe you know something chatgpt doesn't
Exorcist
2025-11-10 03:35:09 https://forum.doom9.org/showthread.php?t=168947
lonjil
2025-11-11 03:34:39 Typst's JPEG 2000 decoder I linked to earlier is now feature complete, though very slow.
Cacodemon345
2025-11-13 01:55:14 We should look to HEIF instead when it gets inevitably adopted in 2035 or something.
AccessViolation_
2025-11-13 09:44:28 woah, pure rust PDF is nice
DZgas Ж
2025-11-16 04:43:11 Hi, I've been studying codecs for practical use for over 5 years, and here's what I'll say: VP9 is more primitive; HEVC is overloaded with technology. VP9 is supported by browsers and has hardware support everywhere. HEVC isn't fully supported by browsers, but it has hardware support everywhere, which is useful, for example, in Telegram. For real-time encoding, due to its simplicity, VP9 will always be better than HEVC; when encoding with longer and more complex presets, HEVC will always be better than VP9. VP9 requires 30 to 50% less computation for decoding than HEVC, depending on the content type, which means that if a device decodes 1080p HEVC poorly, it might play 1080p VP9 perfectly. Content where each offers an exceptional advantage: VP9: Minecraft, very active gameplay; HEVC: anime, animation, static flat lines. Ideologically, the use of VP9 may be motivated by open-source fanaticism, but nothing anywhere restricts individuals from using HEVC (despite the patents). In 90% of cases, there is no point in using HEVC if AV1 is available.
2025-11-16 04:46:15 It's important to note that VP9 encoding isn't implemented on GPUs. If you need fast encoding for work or streaming on a GPU, HEVC is the only option of the two.
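A minimal sketch of the two paths being contrasted here, assuming an ffmpeg build with libvpx and NVENC support; filenames and rate settings are placeholders:
```
# CPU real-time VP9, using the libvpx flags discussed above
ffmpeg -i in.mp4 -c:v libvpx-vp9 -deadline realtime -cpu-used 8 -row-mt 1 -crf 33 -b:v 0 out.webm

# fast HEVC on a GPU, one vendor-specific possibility (NVENC)
ffmpeg -i in.mp4 -c:v hevc_nvenc -preset p4 -cq 28 -c:a copy out.mp4
```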
2025-11-16 04:50:00 WebP is always better than JPEG <:FrogSupport:805394101528035328> in web use
2025-11-16 04:54:17 We didn't have time to implement progressive decoding. The internet has become too fast for that. Now only decoding cost per watt matters. Time is wasted. <:avif:1280859209751334913> <:JXL:805850130203934781> <:pancakexl:1283670260209156128>
lonjil
2025-11-16 05:09:11 the only video codec worth using is 10-bit h264
DZgas Ж
2025-11-16 05:10:56 a video that can't be played anywhere
2025-11-16 05:21:41 To be fair, under very specific and well-defined conditions, you can squeeze decent quality out of AVC. About a year ago, I set a goal to find the best combination of parameters and resolution, specifically for 10-bit, and encoded AV1 for comparison at identical encoding speed on a single core. Overall it's not bad, and without a good eye the difference seems unnoticeable, but the codec is still outdated enough that it's only used when full legacy support is required.
lonjil
2025-11-16 05:36:51 any good video playing software can handle it
2025-11-16 05:37:27 I have an old GPU, VP9 and AV1 are no-go for me.
2025-11-16 05:38:06 (and ofc, I'm a sucker for very high quality)
2025-11-16 05:38:52 (that is to say, I want it to look very good even to a trained eye)
DZgas Ж
2025-11-16 06:58:37 so
2025-11-16 07:00:00 This is precisely one of the reasons why I tried and succeeded in creating my own easyhevc preset, so that I could compress Telegram videos without decoding problems anywhere, both on smartphones and on older hardware.
lonjil
2025-11-16 07:02:12 I have an RX 580 😄
DZgas Ж
2025-11-16 07:02:33 🥵 🔥 👍
Quackdoc
2025-11-16 07:06:14 I got one of those sitting on my desk lmao
2025-11-16 07:06:17 4gb one
lonjil
2025-11-16 07:06:38 im glad i have the 8gb one
Quackdoc
2025-11-16 07:07:29 rn im rocking an a380, wish rx580 had better support for wlroots, no vulkan t.t
lonjil
2025-11-16 07:10:23 what's missing?
Quackdoc
2025-11-16 07:11:03 https://gitlab.freedesktop.org/mesa/mesa/-/issues/5882
2025-11-16 07:11:23 also required for zero copy vaapi + vulkan
lonjil
2025-11-16 08:09:17 rip
daniilmaks
2025-11-17 07:22:07 lol
DZgas Ж
2025-11-17 08:31:13 The reaction of a person who only knows about webp because of negative memes
daniilmaks
2025-11-17 08:34:08 I hated webp for purely technical reasons way before there were memes about it
2025-11-17 08:34:32 webp was trash back when it was created, it is trash today.
lonjil
2025-11-17 08:35:04 At all useful quality levels, jpeg is better
2025-11-17 08:35:22 At least if we're talking lossy
daniilmaks
2025-11-17 08:35:37 the only thing that webp should ever be used for is lossless mode. nothing more.
2025-11-17 08:36:16 and even then it's only for compatibility reasons. sans compatibility you should be on jxl by now.
2025-11-17 08:38:55 also... *I am the one who makes the memes.*
2025-11-17 08:41:23 https://www.youtube.com/watch?v=ujBp5B35el4 webp was always a meme regardless
2025-11-17 08:42:40 https://www.youtube.com/watch?v=_h5gC3EzlJg
DZgas Ж
2025-11-17 08:54:14 Jon's comparisons are truly ridiculous. Quantization losses are completely normal, and the fact that flif, which is positioned as a lossless codec, is being compared here is also ridiculous. What's not funny is your lack of arguments. JPEG XL also has significant loss during recompression, which I especially noticed in my tests when shifting the image by ±3 pixels and when scaling it by ±1%, which is exactly what happens in practice. But what's your argument? WebP gets posted to Reddit, for example, encoded from downloaded originals. And at what point does the problem occur? Yes, this video shows the truth, but for me it's a self-evident fact, like don't re-record a VHS 10 times from a copy of a copy of a copy of a copy
2025-11-17 08:55:35 We're living in a time when JPEG XL hasn't won. It's already seemingly everywhere except on the internet. And on the internet, you have a choice: PNG, JPEG, and WebP, which have 100% support everywhere. And WebP is the best choice
2025-11-17 08:56:48 "useful"
2025-11-17 08:57:40 I don't see any arguments in your words, you're a webp meme man
daniilmaks
2025-11-17 09:04:02 https://discord.com/channels/794206087879852103/805176455658733570/1437311015451754546
2025-11-17 09:04:54 no 444 mode in lossy mode = meme format
2025-11-17 09:06:53 iirc flif was a dual lossy-lossless codec but jon can correct me.
DZgas Ж
2025-11-17 09:07:02 Excellent argument. Could you provide statistics on the use of 444 jpeg on the internet compared to 420 jpeg?
daniilmaks
2025-11-17 09:10:17 that's an ad-populum argument, and while it is not inherently fallacious, it is flawed in this instance because we're studying the quality achieved by each codec, not how most social media platforms take shortcuts to reduce bandwidth usage.
2025-11-17 09:11:45 "almost nobody uses this higher quality mode therefore it's pointless showing how it defeats my argument"
DZgas Ж
2025-11-17 09:13:28 Referring to a 15-year-old quality report. Apparently everyone uses 1.0, which is 15 years old? Even though WebP has been in development for all these years and continues to improve its quality, you should compare the old and current encoders. No one does such tests. It's the same gap as between the original JPEG from 1992 and mozjpeg/jpegli.
2025-11-17 09:14:25
2025-11-17 09:14:58 I just recently vibecoded it using the original documentation
daniilmaks
2025-11-17 09:16:07 any sort of lineart, graphs, screenshots, and downscaled photos are inherently butchered by 420 mode and you already know that. why downscaled and not full res you may ask? about 99% of photos are taken on bayer sensors which have reduced chroma resolution vs luma, so 444 gives only marginal improvements unless the photo is downscaled.
2025-11-17 09:16:40 (just adding that at the end because it's something you might bring up)
2025-11-17 09:18:24 yes, it's a 15-year old report, that's the whole point: I literally said it was a meme back then.
DZgas Ж
2025-11-17 09:18:34 Quality? Ultimate? Why? We're studying reality. The reality is that WebP was created for the internet and nothing else—not for film archiving, not for family photos on disk, but for the internet, and nothing else. 444 isn't an argument on the internet because it's not needed here.
daniilmaks
2025-11-17 09:19:11 as encoders advanced it's not just webp that got better but also the ones that came to compete with it.
2025-11-17 09:19:51 in the end you got what you got: a codec that was not competitive when it released, and it's not competitive today.
DZgas Ж
2025-11-17 09:21:34 words to the wind
2025-11-17 09:22:53 You know, in arguments like these, they usually do something like: you're wrong, here's a picture, and jpeg is 60 times better than webp.
daniilmaks
2025-11-17 09:22:57 webp was created for the internet and fails at that: a good chunk of the images on the internet are gui elements, which get butchered by 420. so we got to pick a poison: jpeg with no transparency, webp with poor chroma resolution, or png with higher bandwidth use.
2025-11-17 09:23:44 or gif, if we're feeling old school.
DZgas Ж
2025-11-17 09:23:47 > gui elements
Images
2025-11-17 09:23:54 Bruh
daniilmaks
2025-11-17 09:24:19 yes I see plenty of those as much as you're surprised
2025-11-17 09:24:25 yes, in 2025
2025-11-17 09:24:41 don't ask me why
DZgas Ж
2025-11-17 09:25:08 Using codecs for other purposes is their problem
daniilmaks
2025-11-17 09:25:53 >format made for the internet
>uses it for the internet
>you weren't supposed to do that
DZgas Ж
2025-11-17 09:26:09 It reminded me of the Samsung news page, with a post about news where the entire page was one giant image.
daniilmaks
2025-11-17 09:26:35 ok that's taking it too far, we both agree here lol
2025-11-17 09:26:41 <:KekDog:805390049033191445>
DZgas Ж
2025-11-17 09:27:40 But this is brilliant, why design something on the page if you can do everything in Photoshop?
2025-11-17 09:28:18 https://www.samsung.com/sec/event/GalaxyBook4Edge/comp/ just great
daniilmaks
2025-11-17 09:29:52 iirc, the reason webp got transparency support was specifically so it could be used for gui decorations. As you can imagine, back then images were extensively used for gui elements.
DZgas Ж
2025-11-17 09:29:58 There is a problem: even at high quality, webp 420 demonstrates better quality than jpeg 444 at the same file size
2025-11-17 09:30:48 Here, low quality
2025-11-17 09:31:45 There is a point, around Q90-95, where JPEG becomes better than WebP. This is true for 444, but again, this is not an internet use case.
daniilmaks
2025-11-17 09:31:56 this needs a few asterisks: it will depend on the target bpp and the type of content, because at low bpp, 444 is expected to lose efficiency for jpeg.
DZgas Ж
2025-11-17 09:33:01 Here. q82 jpeg here and webp 420 looks better than jpeg 444
daniilmaks
2025-11-17 09:33:04 I will do some testing on webp next weekend since you got me in the mood. what is the most up to date build for webp?
2025-11-17 09:34:25 I'm on mobile rn so I can't do honest 1:1 checks, but will check later, sure.
DZgas Ж
2025-11-17 09:34:44 Builds... I know Google posts cwebp builds somewhere, but I forgot where <:FrogSupport:805394101528035328>
A homosapien
2025-11-17 09:35:20 https://developers.google.com/speed/webp/download
daniilmaks
2025-11-17 09:35:31 I think I downloaded cwebp from google a couple months ago
A homosapien
2025-11-17 09:35:31 Here are the latest builds
daniilmaks
2025-11-17 09:36:44 is there a changelog somewhere
DZgas Ж
2025-11-17 09:36:50 I'll be sure to send you the image I'm using for testing. It's a sound wave, a 3D spectrogram from my spectrogram-drawing project, but I call it something simpler: the jpeg xl killer
jonnyawsom3
2025-11-17 09:36:59 In our testing, 4:2:0 was better at Quality 80 and below
daniilmaks
2025-11-17 09:37:54 yeah that's roughly what I would expect
DZgas Ж
2025-11-17 09:39:59 https://discord.com/channels/794206087879852103/805176455658733570/1380112200190005258
daniilmaks
2025-11-17 09:40:23 https://tenor.com/view/skeleton-falling-gif-27355771
DZgas Ж
2025-11-17 10:12:33 https://drive.google.com/file/d/1qdMpZrij0YTD-2zuEbniTmt9cAdETGz3/view?usp=sharing
2025-11-17 10:14:01 cwebp -q 58 -m 6 -af -sharp_yuv (or a different quality, for tests). The main thing here is that JPEG XL completely loses to WebP on this image. All the problems of JPEG XL are clearly visible: fading, excessive smoothing, everything.
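For anyone reproducing the comparison: a hedged sketch of a matching libjxl invocation (filenames are placeholders; the fair way to match is by resulting file size, so the cjxl quality value will need adjusting):
```
# libwebp side, as given above
cwebp -q 58 -m 6 -af -sharp_yuv in.png -o out.webp
# libjxl side; tune -q (or use -d for a distance target) until sizes match
cjxl in.png out.jxl -q 58 -e 9
```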
2025-11-17 10:15:13 JPEG loses at 444 because it doesn't have enough bitrate, and at 420 because it doesn't have enough technology
A homosapien
2025-11-17 10:29:43 I should really get sharpyuv working in jpegli
DZgas Ж
2025-11-17 10:33:41 Are you sure this is possible? I didn't go into much detail about the algorithm, but I have a similar program that independently compresses the color channel in PNG so that the JPEG encoder compresses the colors less. But WebP has a major advantage: decoding is done entirely by libwebp, meaning the color interpolation algorithm is performed by the codec, which outputs the post-processed interpolated color channels. In JPEG, the color channel interpolation is done by the viewing program itself, and can vary.
2025-11-17 10:34:30 In some JPEG programs, colors are interpolated using nearest-neighbor or bilinear interpolation; webp completely solved this problem with its own interpolation at the decoding stage.
A homosapien
2025-11-17 10:34:32 sjpeg already does this, I personally just want it in jpegli
DZgas Ж
2025-11-17 10:34:53 <:PepeOK:805388754545934396>
2025-11-17 10:35:38 I really can't understand how this can be done. Webp has one decoder, but JPEG...
A homosapien
2025-11-17 10:36:45 Almost all programs use libjpeg-turbo, as they should
_wb_
2025-11-17 10:58:52 In those generation loss comparisons, of course FLIF was used in a lossy way, and the bitrates of the first generations were matched to make it a more or less fair comparison. Of course it would be meaningless to put FLIF there in lossless mode, generation loss is not a thing for lossless. FLIF was exceptionally generation loss resistant (as long as you don't start cropping or otherwise moving around pixel positions between generations) because its lossy mode isn't doing any frequency transform, it's just quantizing precision of residuals in a way that is idempotent (no change if you recompress a decoded image at the same encode settings).
daniilmaks
2025-11-17 11:04:13 yeah that's roughly what I thought, I found it weird that dzgas said it was lossless.
DZgas Ж
2025-11-17 11:05:42 I didn't say it was lossless in the test, I'm saying that the codec itself isn't designed to be used lossy; that's an additional feature, not its primary one. FLIF was promoted as a lossless codec.
daniilmaks
2025-11-17 11:06:22 that clears things, thanks.
DZgas Ж
2025-11-17 11:08:29 Same as lossy modular in jpeg xl. It's possible, but why? This isn't something that can be shown as: look, it can... it can. It's bad, but it can. Why...? The main thing is that it exists, it's possible... maybe you want to compress lossless by 10% more so that it's almost as lossless, but losslessn't.
daniilmaks
2025-11-17 11:09:12 vibe-losslessness
2025-11-17 11:10:30 sounds a bit like lossywav, you can recompress it as many times as you want, but the lossy part only happens the first time.
DZgas Ж
2025-11-17 11:11:24 Well, technically, wav has bit quantization, which is also compression, if the original is, for example, 32-bit float and the resulting file is 16 bit
2025-11-17 11:12:08 lossy in essence
daniilmaks
2025-11-17 11:12:41 I get where you're going but in that context lossless is a meaningless word with no real world use.
2025-11-17 11:13:20 in the digital media realm*
2025-11-17 11:15:09 but it's a fair point you're bringing up, people tend to forget lossless only means "lossless after this very particular stage in the processing chain"
DZgas Ж
2025-11-17 11:16:07 <:galaxybrain:821831336372338729><:This:805404376658739230>
daniilmaks
2025-11-17 11:18:17 fun anecdote: I once had to do a whole lesson on compression types for some dude who thought he was increasing the quality of screenshots by naively converting them from jpg to png
DZgas Ж
2025-11-17 11:18:50 The main thing is which data is the original. There are people whose master track is 32-bit float 192kHz just because they're paranoid audiophiles, but who end up with flac 16-bit 44.1, which is "definitely lossless".
daniilmaks
2025-11-17 11:19:14 he was incredibly stubborn at first but I had live evidence to help him change his mind.
DZgas Ж
2025-11-17 11:20:49 Well, it's not that bad; it's much harder to explain to others that they shouldn't compress images into archives, because it doesn't make sense, even though the size becomes a couple hundred kilobytes smaller due to redundant structural metadata inside the files 🥹
daniilmaks
2025-11-17 11:23:33 I only put media onto archives for organization (and rarely so), and even then it's usually either fast compression or no compression. there's also technical reasons: copying large numbers of files between android and windows has been buggy for over a decade, zipping stuff makes some copying operations faster and more reliable.
DZgas Ж
2025-11-17 11:31:20 For some reason, 7zip forks lack a COPY mode; I have no idea why. The original 7zip has a clear and understandable "7z copy" that doesn't perform any compression at all, packing files as is.
2025-11-17 11:45:01 Yes, that's really interesting, although it's not better than 7zip in this case. I'm surprised no one talks about this anywhere at all
https://www.advancemame.it/download
https://github.com/google/zopfli/issues/29#issuecomment-77830614
advzip --recompress -4 --iter 100 my.zip
2025-11-17 11:46:11 although you know it's still very close
TheBigBadBoy - 𝙸𝚛
2025-11-17 12:12:46 best Deflate optimizer is ECT and it's even multithreaded <:YEP:808828808127971399>
Exorcist
2025-11-17 01:57:27 https://github.com/victorvde/jpeg2png <:galaxybrain:821831336372338729>
daniilmaks
2025-11-17 02:03:02 HA, yeah I'm familiar, I wish I knew of it back then.
username
2025-11-17 02:06:07
daniilmaks
2025-11-17 02:06:18 obviously nowadays I'd use this instead https://github.com/ilyakurdyukov/jpeg-quantsmooth
username
2025-11-17 02:09:05 I use both jpeg2png (modified) and quantsmooth since in some cases one does better than the other
daniilmaks
2025-11-17 02:13:37 you're thio?
username
2025-11-17 02:15:07 no, also the sharpness part of that fork I sent isn't in Thio's fork
Exorcist
2025-11-17 02:15:14 it may sharpen the noise dots
daniilmaks
2025-11-17 02:16:15 I'm thinking why not combine the algorithms of each so that you can tune the smoothing methods with a switch instead of a different program.
2025-11-17 02:17:04 also I find it neater having a jpg output since it's more space efficient for the source material.
username
2025-11-17 02:19:08 jpeg2png's output is fine if I plan on putting it into editing software since it will get converted to raw pixels either way
daniilmaks
2025-11-17 02:19:34 interesting, I'll add salt pepper noise to some tests.
2025-11-17 02:20:29 true, it's specially unimportant if it's just an intermediate file that will get deleted.
Exorcist
2025-11-17 02:21:21 jpeg2png is also a high bit-depth decoder when you set iteration = 0
daniilmaks
2025-11-17 02:21:48 what does iteration = 0 mean
2025-11-17 02:22:15 in this context
Exorcist
2025-11-17 02:22:24 only decode, do not smooth by gradient descent
daniilmaks
2025-11-17 02:22:51 <:FeelsReadingMan:808827102278451241> good to know
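So a decode-only run would look something like this sketch, assuming the `-i` iteration flag that also appears later in this discussion (output naming may differ):
```
# 0 iterations = skip the gradient-descent smoothing, just decode at high bit depth
jpeg2png input.jpg -i 0
```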
username
2025-11-17 02:24:34 how does it compare to the JPEG decoding in jpegli/libjxl?
daniilmaks
2025-11-17 02:24:49 ok just so I didn't misread you, jpeg2png always works in high bitdepth, you're just saying it can be useful if I want the high precision decoding without the smoothing. correct?
2025-11-17 02:25:55 nice, I gained 1 singular neuron today.
DZgas Ж
2025-11-17 09:26:54 can it .zip?
TheBigBadBoy - 𝙸𝚛
2025-11-17 09:32:56 yeah
2025-11-17 09:33:11 well
2025-11-17 09:33:21 it can only optimize zip
2025-11-17 09:33:29 it cannot create it from scratch <:KekDog:805390049033191445>
cioute
2025-11-19 02:52:35 very good explanation
2025-11-19 02:57:58 sadly no hw decoder
DZgas Ж
2025-11-21 07:17:21 Well, I use SCUNet-GAN and it's the best option 99% of the time. Only 1% of the time does SCUNet smooth things out more than I'd like, and my jpeg is the original, so only in those cases would jpeg2png actually be better. But over the past year, I've only encountered a couple of situations where scaling isn't an option, and SCUNet performed poorly. It's about ~10 times slower, but it's not 2^N times slower. In 99% of cases, this is an example of when everything is fine: Original jpeg | jpeg2png | SCUNet-GAN
daniilmaks
2025-11-21 07:19:49 nowadays I try to avoid ai-style solutions when it comes to well defined problems like these but I appreciate the existence of those tools nonetheless
2025-11-21 07:20:33 I do like the results there, it's almost oversmooth but still looks fine.
DZgas Ж
2025-11-21 07:22:29 It's true, it's too perfect, as if it was shot on a camera 20 times more expensive than it could have been in this scene. And it's absolutely true that sometimes the result is so good that you have to add noise in post-processing, it's all too good <:galaxybrain:821831336372338729>
2025-11-21 07:23:26 This is not a problem because it will still be compressed 10 times wherever I send it, anywhere, Reddit, Telegram, Instagram, it doesn’t matter
daniilmaks
2025-11-21 07:24:35 https://tenor.com/view/game-day-gif-1421612078979008371
DZgas Ж
2025-11-21 07:24:51 So let the clear lines be compressed with fewer artifacts, rather than creating more blur via jpeg2png or more artifacts by sending the original
2025-11-21 10:48:51 a large number of iterations, which gave better quality with the original code, creates artifacts in this one: -i 1000
gb82
2025-11-22 04:20:03 i think i agree more generally for what is considered "web quality" but at high fidelity jpegli can edge out wins against libwebp in metrics and visually
2025-11-22 04:20:56 i don't think libwebp hill climbed very hard on high fidelity, because it is a genuinely very difficult problem
DZgas Ж
2025-11-22 12:08:23 This is a problem for people like **us**. Regular users don't think that way. There's an SD TV... and an HD one. There are "lossy images" for heavy "compression", and "lossless images" ""without"" compression. There's nothing to think about. Webp implements this simple approach perfectly, creating a clear **gap** between its lossless codec and its lossy codec. It's hard to explain why people need "Very High Quality but it's Not the Source"... like aac 640 kbps stereo
Exorcist
2025-11-23 11:19:27 > Nokia has a legal complaint against HP related to H.264 and H.265 filed October 2023. It even mentions H.265 on consumer devices being an infringement. https://news.ycombinator.com/item?id=46019283 https://www.courtlistener.com/docket/67928650/1/nokia-technologies-oy-v-hp-inc/
2025-11-23 11:21:19 > HEIF itself is a container that may not be subject to additional royalty fees for commercial ISOBMFF licensees. Nokia also grants its patents on a royalty-free basis for non-commercial purposes. https://en.wikipedia.org/wiki/High_Efficiency_Image_File_Format#Patent_licensing https://github.com/nokiatech/heif/blob/master/LICENSE.TXT
2025-11-23 11:22:17 Trust me bro, we won't sue you<:avif:1280859209751334913>
lonjil
2025-11-23 11:27:22 > H.264 > 2023 the fuck?
spider-mario
2025-11-24 08:09:07 > It even mentions H.265 on consumer devices being an infringement.
I sort of doubt this is about paying $0.04 more per laptop - there must be some uncertainty or legal risk, maybe in certain chipsets or something. nothing of the sort according to their link:
> Dozens of companies have taken licenses to Nokia's essential patent claims at rates that are reasonable and non-discriminatory. Yet, despite receiving multiple offers from Nokia, HP has refused to take a license to Nokia's H.264 and H.265 essential decoding patent claims. HP's failure to negotiate in good faith to reach an agreement on terms for a license to Nokia's standard essential patents for the relevant standards (including Nokia's patented H.264 and H.265 technology) has forced Nokia to institute this lawsuit.
Cacodemon345
2025-11-24 08:39:47 Now that explains why Dell and HP disabled HEVC in their later laptops.
lonjil
2025-11-24 09:22:44 how can there still be essential patents on h.264 decoding ??
2025-11-24 09:22:53 shouldn't they have expired already?
Exorcist
2025-11-24 09:36:33 > reasonable and non-discriminatory <:ugly:805106754668068868>
spider-mario
2025-11-24 10:58:03 wikipedia says some of them don’t expire until 2030
lonjil
2025-11-24 10:58:26 How completely ridiculous
2025-11-24 10:59:21 since it's been around since 2003
spider-mario
2025-11-24 10:59:50 maybe not quite as ridiculous as copyright (death + 70 years in most countries), but yeah
lonjil
2025-11-24 11:00:06 yeah, but patents are usually 20 years
spider-mario
2025-11-24 11:00:12 (on that note, thanks to Jim Morrison’s relatively early death, I may live to see The Doors’ discography enter the public domain!)
lonjil
2025-11-24 11:00:49 like, what, did some patents sit in patent office bureaucracy hell for 7 years and then get granted with the 20 year time period? 😆
2025-11-24 11:01:12 dang
spider-mario
2025-11-24 11:01:54 it’s already the case now in countries where it’s death + 50 years, which includes Canada https://en.wikipedia.org/wiki/2022_in_public_domain#Countries_with_life_+_50_years
_wb_
2025-11-24 01:28:54 even so, it can happen that the codec itself is old enough to not be patent-encumbered anymore, but e.g. specific implementation techniques that were patented later are used... Also there could be patents on recent extensions/revisions of h.264
AccessViolation_
2025-11-24 02:54:15 I feel like every application of a patented thing should be exempt if that thing was made before the date of the patent application
2025-11-24 03:03:25 you shouldn't be able to retroactively demand people pay licensing fees for things they were doing before you got the exclusive rights to them. that's like making something illegal and prosecuting people that did it while it wasn't
2025-11-24 03:04:36 hmm no, it's not like that because I think you should also be able to continue making and selling patented things if you were doing so before the patent application
2025-11-24 03:06:46 I like that software patents aren't really a thing in the EU 🙂
_wb_
2025-11-24 03:22:18 in principle if you can demonstrate that your thing predates the patent application, it's prior art and the patent is invalid. But invalidating a patent is very nontrivial and costly
DZgas Ж
2025-11-24 03:27:14 🥹
2025-11-24 03:30:08 AVC is still the fastest codec for encoding and decoding. For 1080p resolution I would certainly recommend libvpx vp9 -deadline realtime -cpu-used 8 -row-mt 1, but avc is still an option -- And of course, AVC is used in a lot of places, and no one will just release the patents, haha; they released the entire EVS codec that no one needs, but they didn't dare release the AVC patents
2025-11-24 03:30:34 <:Stonks:806137886726553651>
lonjil
2025-11-24 03:35:45 Indeed. AVC is the only codec I use.
spider-mario
2025-11-24 07:48:45 I believe “the fuck” was a reaction to the patent status
DZgas Ж
2025-11-24 08:08:55 I realized this when I replied to the next message. I sincerely believe that AVC is outdated in all respects. I created my own EASYHEVC preset that fully solves the decoding issue for clients. DISCORD and Telegram both support HEVC, which is enough to encode all "original quality" memes in hevc, which is what I do. For streaming, for example on my personal website, I use VP9/AV1, which encode fine on any processor purchased these days... hmmm. VP9 also solves the decoding issue, and for AV1, you just need to keep the resolution at 1 megapixel at 30 fps <:AV1:805851461774475316>
Smegas
2025-11-24 09:00:47 All my home movies are in AVC. HEVC and AV1, in order to obtain a better compression ratio, blur small details, which, with stronger compression, results in a blurry image. To achieve the same level of detail in HEVC, I had to produce a file the same size as AVC. AVC is faster and eco-friendly.
spider-mario
2025-11-24 09:20:14 HEVC has broader support for hardware-accelerated 10-bit decoding, though
2025-11-24 09:20:32 which comes in handy for HDR 4K
DZgas Ж
2025-11-24 10:31:25 All problems are solved by fine-tuning presets, but unfortunately, normal people will never do this
Lumen
2025-11-25 10:03:45 that is an extremely uneducated statement
2025-11-25 10:03:58 (in the encoding field)
_wb_
2025-11-25 04:11:20 Will jxl-rs be the first codec implementation in a browser that is written in Rust?
monad
2025-11-25 04:14:46 "The first memory-safe codec."
Cacodemon345
2025-11-25 04:29:28 Find out in the current episode.
2025-11-25 04:29:50 --- Browser News Network
veluca
2025-11-25 05:09:03 no, chrome uses the Rust png crate nowadays
monad
2025-11-25 05:23:52 "The first memory-safe codec since PNG."
lonjil
2025-11-25 05:55:09 I'm playing around with various software verification tools, so maybe if all goes well we can brag about it being the first formally verified codec implementation in a browser 😄
veluca
2025-11-25 06:01:44 you're not the first one to tell me that, funnily enough
2025-11-25 06:02:03 (at least, for formal verification for unsafe)
AccessViolation_
2025-11-25 06:06:11 that would be the final nail in the coffin for the all too common "just because it's written in rust doesn't mean it's secure"
lonjil
2025-11-25 06:07:34 There are so many different verification tools for Rust now, it almost feels like the verification community has bailed on C in favor of Rust
AccessViolation_
2025-11-25 06:10:03 I saw you comment about a JPEG XL decoder in WUFFS on hackernews and that would be *hell* to implement but really cool
2025-11-25 06:13:32 that's good to hear. I'm not surprised, rust is probably already really appealing if you need some level of verifiability
veluca
2025-11-25 06:14:01 yeah no
AccessViolation_
2025-11-25 06:15:49 right I need to remember to not bring up these silly ideas near decoder writers <:KekDog:805390049033191445>
2025-11-25 06:16:41 sorry I scared you with the mere thought of having to write that XD
veluca
2025-11-25 06:23:07 tbh I don't like wuffs much
AccessViolation_
2025-11-25 06:24:10 hmm why is that?
2025-11-25 06:25:12 I like it conceptually, but it's a shame it compiles to C. I guess that's a nice thing for compatibility, but I don't like it in terms of design aesthetics
veluca
2025-11-25 06:25:44 I might like it more once I see a decoder for a "serious" format written in it
2025-11-25 06:27:03 but the general feeling I have is that it's unlikely that you can get a good special-purpose language for this stuff and the effort would be better spent making Rust better at writing decoders 😛
AccessViolation_
2025-11-25 06:31:20 I guess that's a testament to how much of a pain it is to write. we care about safety, but not [glares at wuffs] *that* much
2025-11-25 06:31:49 or there might be other reasons too. I wouldn't know
veluca
2025-11-25 06:32:41 keep in mind that writing a custom language means you need to write a custom compiler/stdlib
2025-11-25 06:32:47 and you need to trust *those* too
AccessViolation_
2025-11-25 06:34:42 I don't know how much this helps decoders specifically but I'm quite excited about ranged integers. creating them requires proving or checking that they're within the range, but then wherever they're used, the compiler knows those integers can only ever be within that range, which in theory allows it to make some novel optimizations that it wouldn't otherwise
lonjil
2025-11-25 06:35:13 brb writing a decoder in a theorem prover language with a verified core
veluca
2025-11-25 06:35:19 https://github.com/rust-lang/rfcs/issues/671 😛
AccessViolation_
2025-11-25 06:40:15 yeah those ^^ I haven't really been paying attention to the development of it, I seem to remember there was syntax to create them on nightly but they didn't do anything
2025-11-25 06:42:25 gosh that reminds me, someone made some sort of proof system that would generate a spec *and* reference implementation from the same source representation. I don't remember what it's called
2025-11-25 06:43:49 one idea was that it would eliminate the problem of inconsistencies between a specification and its reference implementation
2025-11-25 06:48:42 SpecTec! https://webassembly.org/news/2025-03-27-spectec/
_wb_
2025-11-25 07:15:27 I like programming languages with a clear operational semantics, that's the kind of stuff I did in a previous life
Exorcist
2025-11-27 11:47:11
2025-11-27 11:47:26
AccessViolation_
2025-12-01 11:28:09 <https://developers.google.com/speed/webp/docs/compression> > Transparency: 8-bit alpha channel is useful for graphical images. The Alpha channel can be used along with lossy RGB, a feature that's currently not available with any other format. this may need an update by now ^^
2025-12-01 11:32:58 > Yep! WebP is the Swiss Army knife of image formats. okay that's a stretch
2025-12-01 11:39:50 I suppose it might've been a fair statement when this was written
Quackdoc
2025-12-01 01:52:44 to be fair swiss army knives are rarely actually good at anything they do, they just do everything barely acceptably
AccessViolation_
2025-12-01 04:37:00 JXL doesn't have something like this does it?
jonnyawsom3
2025-12-01 04:49:15 Since JXL uses groups, it can just do a local palette instead of constantly updating them
AccessViolation_
2025-12-01 04:52:23 ah, right
jonnyawsom3
2025-12-01 07:43:20 My friend is sending data via video stream and realised single bit B/W is almost non-compressed. Is this because of something like Mozjpeg's trick with clamping values for black and white, or is there some other coding trick they could use to pack more data without loss?
_wb_
2025-12-01 08:06:33 For 1-bit, JBIG2 is still quite good. Probably jxl could beat it if we would spend the effort on making a special-purpose encoder for it, but as things are right now, JBIG2 is pretty good.
jonnyawsom3
2025-12-01 08:26:21 Due to application limitations, they're stuck with an h264 stream. So far I've suggested ROI encoding to make the data region of the video higher quality
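One hedged way to express that suggestion with stock ffmpeg: the `addroi` filter attaches region-of-interest side data that ffmpeg's libx264 wrapper can honor. The region coordinates are placeholders for wherever the data sits in the frame:
```
# Negative qoffset asks the encoder to spend more bits (lower QP) on the region
ffmpeg -i in.mp4 -vf "addroi=x=0:y=0:w=iw/2:h=ih/2:qoffset=-1/2" -c:v libx264 out.mp4
```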
TheBigBadBoy - 𝙸𝚛
2025-12-06 10:26:09 https://blog.mozilla.org/en/mozilla/av1-video-codec-wins-emmy/
Cacodemon345
2025-12-06 10:29:19 The Onion ahh headline.
jonnyawsom3
2025-12-06 10:30:55 Here's hoping JXL does the same when ~~Blender integration~~ VFX pipelines support it
_wb_
2025-12-06 12:28:08 Lots of stuff has won an Emmy now: JPEG, JPEG 2000, JPEG XS, AV1, a lot of MPEG stuff (e.g. ISOBMFF), also metrics like SSIM and VMAF. There are different kinds of Emmy awards too, so it's a bit confusing to me.
dogelition
2025-12-06 12:30:49 BT.2100 also got one
RaveSteel
2025-12-07 10:05:40 https://www.tomshardware.com/pc-components/gpus/dell-and-hp-disable-hardware-h-265-decoding-on-select-pcs-due-to-rising-royalty-costs-companies-could-save-big-on-hevc-royalties-but-at-the-expense-of-users
HCrikki
2025-12-07 10:16:51 save big? it's not even 1 dollar per laptop, and everything sold before today already had theirs paid
Meow
2025-12-08 02:31:06 Wow `oxipng 9.1.5 -> 10.0.0`
2025-12-08 02:31:50 It's already been out two days https://github.com/oxipng/oxipng/releases/tag/v10.0.0
adap
2025-12-09 09:39:32 Anyone have a good method of converting heic images to jxl?
whatsurname
2025-12-09 09:51:27 ImageMagick
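At its simplest, something like this sketch (filenames are placeholders; as the next messages note, HDR metadata may not survive):
```
magick input.heic output.jxl
```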
adap
2025-12-09 10:07:35 It doesn't preserve hdr data, is there some setting im missing?
2025-12-09 10:16:14 I can't even convert a 10 bit jpeg to 10 bit jxl with cjxl so i must be doing something wrong i just can't figure out what lol
A homosapien
2025-12-09 06:22:09 hdr data how? like an icc-profile or gain maps?
adap
2025-12-09 08:21:20 both it seems like
2025-12-09 08:21:44 ffmpeg and magick both show the heic file as 8 bit when it's 10 bit
2025-12-09 08:22:40
2025-12-09 08:23:36 From my understanding jxl doesn't even store gain map data anyway?
jonnyawsom3
2025-12-09 08:54:15 It can, but there's no point if you have native HDR/high bit depth data, since it can do tonemapping to SDR as part of the format
adap
2025-12-10 06:44:17 Is there no way to just get an hdr image out of it? I don't want an SDR image. I'm just trying to convert the heic to an identical-looking jxl that's hopefully smaller.
jonnyawsom3
2025-12-10 06:45:34 Apparently nothing reads HDR HEIC files that I can find, maybe one of the Apple experts here have an idea
Meow
2025-12-10 07:23:39 Quite impressive that here we have such experts as well
Quackdoc
2025-12-10 01:29:24 are you sure FFmpeg can't do it?
2025-12-10 01:30:01 what does ffprobe itself say?
2025-12-10 01:32:00 yes it can
2025-12-10 01:32:06 ``` Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'storage/downloads/IMG_0579.HEIC': Metadata: major_brand : heic minor_version : 0 compatible_brands: mif1MiHBMiHAheixMiHEMiPrheicmiaftmap Duration: N/A, start: 0.000000, bitrate: N/A Stream group #0:0[0x2e]: Tile Grid: hevc (Main Still Picture) (hvc1 / 0x31637668), yuvj420p(pc, smpte170m/smpte432/bt709), 5712x4284 (default) Side data: ICC Profile displaymatrix: rotation of -90.00 degrees Stream group #0:1[0x3e]: Tile Grid: hevc (Rext) (hvc1 / 0x31637668), gray(pc), 2856x2142 Side data: displaymatrix: rotation of -90.00 degrees Stream group #0:2[0x73]: Tile Grid: hevc (Main 10) (hvc1 / 0x31637668), yuv420p10le(pc, smpte170m/smpte432/linear), 4096x3072 Side data: ICC Profile displaymatrix: rotation of -90.00 degrees Stream #0:60[0x3f]: Video: hevc (Rext) (hvc1 / 0x31637668), gray(pc), 2016x1512, 1 fps, 1 tbr, 1 tbn Side data: ICC Profile displaymatrix: rotation of -90.00 degrees Stream #0:61[0x41]: Video: hevc (Main 10) (hvc1 / 0x31637668), yuv420p10le(pc), 1024x768, 1 fps, 1 tbr, 1 tbn Side data: displaymatrix: rotation of -90.00 degrees Stream #0:62[0x42]: Video: hevc (Main Still Picture) (hvc1 / 0x31637668), yuvj420p(pc, smpte170m/smpte432/bt709), 416x312, 1 fps, 1 tbr, 1 tbn Side data: ICC Profile displaymatrix: rotation of -90.00 degrees Stream #0:111[0x74]: Video: hevc (Rext) (hvc1 / 0x31637668), gray(pc), 768x576, 1 fps, 1 tbr, 1 tbn Side data: displaymatrix: rotation of -90.00 degrees ```
RaveSteel
2025-12-10 01:34:21 reading works, but transcoding to a different format does not, at least not by default
2025-12-10 01:34:34 Although I wonder if there is a way
2025-12-10 01:36:14 `heif-dec` from libheif can decode the image with auxiliary data, but only to multiple separate files
Quackdoc
2025-12-10 01:36:57 you would have to manually map and layer the images using a filtergraph I suppose
2025-12-10 01:37:56 I wonder if ffmpeg properly supports gainmaps or if you would need to use a custom shader
adap
2025-12-10 06:50:58 What are you looking at here? I see the display matrix says yuv420p10le which is 10 bit ig? but the "Main Still Picture" portion just says yuvj420
2025-12-10 06:51:59 That sounds like the important part
2025-12-10 06:52:23 <:peeposhrug:872596376238817320>
_wb_
2025-12-10 06:55:02 I assume they use gain maps in HDR heic files from Apple, no?
Quackdoc
2025-12-10 07:07:13 all of it?
dogelition
2025-12-10 07:15:34 this should work for photos but not for screenshots https://github.com/johncf/apple-hdr-heic
2025-12-10 07:15:53 https://github.com/johncf/apple-hdr-heic/issues/12
adap
2025-12-10 07:17:03 Sorry I meant what about it shows that it's an hdr image? ffmpeg works for heic, I just didn't see where it properly displayed the color depth at first. I was using a longer ffprobe command that showed more and i missed the yuv420p10le, so it's nice it shows some 10bit color data at least.
2025-12-10 07:17:36 oh okay ill take a look ty
dogelition
2025-12-10 07:18:40 i guess you'll probably want to save it to png and then use that png as input to `cjxl`, and use the colorspace override option and maybe set the bit depth to 10 or 12
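A hedged sketch of that route; the filenames are placeholders, and whether cjxl's `-x color_space=` hint (here an assumed spelling of Rec.2100 PQ) actually overrides a PNG's embedded color tags is an assumption:
```
# after decoding the HEIC to a 16-bit PNG with some other tool
cjxl hdr.png out.jxl -q 100 -x color_space=RGB_D65_202_Rel_PeQ
```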
2025-12-10 07:53:38 actually that tool doesn't seem to be entirely correct? the apple docs say `Linearize the gain map by inverting the gain map gamma using the Rec.709 transfer function.` but this just uses sRGB
Quackdoc
2025-12-10 08:10:16 ah yeah, you would need to read the ICC for that
jonnyawsom3
2025-12-11 04:38:23 I know you can layer GIF frames to make a full color image (with awful efficiency), and Gifski uses that to get more colors in its video conversions, but it has a hardcoded similarity threshold that limits the layering it will do. I don't suppose there's a tool that allows making a 'full color' GIF out of a normal image? I'd also ask about one using blue noise dithering, but I tried searching for both and found nothing despite the potential
Exorcist
2025-12-11 04:47:26 Vector quantization (k-means++ clustering) + dithering (error diffusion)
jonnyawsom3
2025-12-11 05:01:16 That's neither of what I just said
Exorcist
2025-12-11 05:31:32 Let the GIF spec rest. You can't make a literal "full color" GIF image.
jonnyawsom3
2025-12-11 05:51:04 If I wanted AI I'd have asked it instead of here
2025-12-11 05:52:51 https://cdn.discordapp.com/attachments/794206087879852107/1377161103754399754/24bit.gif?ex=693c54f9&is=693b0379&hm=b161661e61ba6726f6096c4b9c5304a87e97e6c89c5d0e430a8c15072daaf96f&
2025-12-11 05:53:38 There's no reason why not
_wb_
2025-12-11 06:28:11 one reason is that browsers and most other gif viewers will force a minimum duration for gif frames, so you can't use that trick and still have a still image. It will be animated, even if you set the frame durations to zero.
AccessViolation_
2025-12-11 06:30:00 that explains why all the true color GIFs I've seen seem to build up. I thought that was intentional, that they were demonstrating how true color GIFs worked
2025-12-11 06:31:16 specifically this page: https://commons.wikimedia.org/wiki/Category:True_color_GIF_files
jonnyawsom3
2025-12-11 11:15:04 The building up is fine, and kinda cool honestly, but I do know at least one viewer that loops at the end even when it shouldn't
dogelition
2025-12-12 06:58:27 https://googleprojectzero.blogspot.com/2025/12/a-look-at-android-itw-dng-exploit.html
RaveSteel
2025-12-14 01:06:05 FFmpeg now seems to have preliminary(?) support for JPEG XS https://github.com/FFmpeg/FFmpeg/commit/52c097065cb59927c532a09cfbcfbfb7948e3092
mincerafter42
2025-12-16 07:50:26 would this be the right channel to talk about palette generation? i believe i've improved the time complexity of an existing palette generation algorithm but that's not directly JXL nor another image format
AccessViolation_
2025-12-16 07:51:05 probably <#794206087879852106>
RaveSteel
2025-12-18 12:42:25 https://opus-codec.org/demo/opus-1.6/
dogelition
2025-12-18 12:31:01 https://gitlab.com/AOMediaCodec/avm/-/commit/b4713fd1008bebc4fe3f265244037922fbbfd350
2025-12-18 12:31:12 av2 directly supports icc profiles 🤔
2025-12-18 12:31:22 and banding hint metadata, whatever that is supposed to be
_wb_
2025-12-18 12:34:35 Banding hint metadata? Huh, what's that for?
Quackdoc
2025-12-18 12:37:01 I guess avm might have built in debanding?
dogelition
2025-12-18 12:37:50 doesn't look like the decoder does anything with it
2025-12-18 12:38:24 so i would assume it's just there so other applications can do... something with it
Quackdoc
2025-12-18 12:39:39 ~~MPV auto debanding I guess~~
veluca
2025-12-18 01:12:07 sounds great for consistent appearance of images/videos
jonnyawsom3
2025-12-18 01:23:52 ~~HW dithering in 2025?~~
Quackdoc
2025-12-18 01:29:51 https://tenor.com/view/jarvis-jarvis-meme-enhance-gif-10207712088460034302
jonnyawsom3
2025-12-18 02:00:37 There's a paper from Netflix about AVM in-loop debanding
Exorcist
2025-12-18 02:46:28 More filter, more VMAF <:Stonks:806137886726553651>
jonnyawsom3
2025-12-19 12:57:54 <https://aomedia.org/blog%20posts/AV1-Image-File-Format-Specification-Gets-an-Upgrade-with-AVIF/> Great, they saw UltraHDR and decided to make it even worse
2025-12-19 12:58:16 > As an example, encoding the image [1] with avifenc --depth 12,8 --lossless --speed 0 leads to 10% file size savings over the source 16-bit PNG, with absolutely no quality loss.
2025-12-19 12:58:44 <:tfw:843857104439607327>
username
2025-12-19 01:00:11 lossless 12-bit image with a hidden lossy 8-bit image to bring the image up to 16-bit‽
2025-12-19 01:00:25 oh wait I forgot AV1 doesn't support 16-bit natively
2025-12-19 01:00:34 that's probably why they are doing this
2025-12-19 01:00:44 still a bit weird though
_wb_
2025-12-19 01:00:56 what does "absolutely no quality loss" mean? Not actually lossless, but you won't see the difference? But that will already be true if you just do lossless 12-bit alone.
jonnyawsom3
2025-12-19 01:02:26 Yet another format extension, for 10% less than PNG... You might as well just zip a PPM with the support it'll get
username
2025-12-19 01:02:48 I was just about to comment on that yeah. all this effort just for 10% over PNG...
_wb_
2025-12-19 01:03:19 also applying a lossy codec to the noisy 4 lsb of a 16-bit image by scaling it up to 8-bit, that's probably not going to work very well
2025-12-19 01:06:48 Yes, for 16-bit PNGs, saving 10% over PNG is a poor result. See this table I made for the big jxl paper.
2025-12-19 01:08:18 JPEG 2000 can save 10% over PNG and it's 25 years old
2025-12-19 01:10:39 for 16-bit images, JXL saves 20% over PNG easily and 30% if effort is no concern.
dogelition
2025-12-19 02:38:30 ? i don't get it, why would you want to add an optional lossy image to a lossless one
2025-12-19 02:38:51 that's like the opposite of wavpack, where you have a lossy base file with an optional lossless one to restore the original data
2025-12-19 02:45:45 according to the spec the base image has to be lossless, and the optional one can be either lossy or lossless
2025-12-19 02:46:44 > NOTE: The second input image item loses its meaning of the least significant part if any of the most significant bits changes, so the first input image item has to be losslessly encoded. The second input image item supports reasonable loss during encoding.
2025-12-19 02:47:22 okay, that makes sense given the limited bit depth i guess
2025-12-19 02:47:39 extremely weird concept either way though
2025-12-19 02:48:43 i believe the 10% improvement over png uses two lossless images then
2025-12-19 02:50:53 don't think i have any 16 bit pngs that actually have 16 significant bits of data, so idk how to test that claim
2025-12-19 02:53:26 also, given that avif is really bad at compressing rgb data, i would assume the images still get converted to ycbcr? but then does the restored image actually correspond to the original 16 bit png rgb data or 16 bit ycbcr data
Exorcist
2025-12-19 02:59:47 Okay, I understand their logic: if you can tolerate loss in the most significant bits, then you can just delete the least significant bits directly, so merging two images into 16-bit is meaningless
dogelition
2025-12-19 03:01:51 i guess, it is kinda different from wavpack because both images are always stored in a single file
Quackdoc
2025-12-19 03:53:30 In the video world, especially with things like VFX, 16 bit lossy is absolutely better than 12 bit lossless. I'm not really sure where else this would be true, and anybody who's thinking about using AVIF for VFX is utterly insane anyways.
2025-12-19 03:53:52 But I could fathom that there is a niche out there where this makes sense, but not really one where it does and JXL doesn't.
jonnyawsom3
2025-12-22 11:40:59 <@&807636211489177661>
cioute
2025-12-23 09:10:11 iyo, which popular image format is worst?
KonPet
2025-12-23 09:15:30 GIF
2025-12-23 09:15:42 256 colors is ridiculous, and LZW is, too
_wb_
2025-12-23 09:29:43 To be fair, GIF is from a time when monochrome displays were still common, and color graphics were usually limited to either 4 color palettes (CGA) or 16 color palettes (EGA).
jonnyawsom3
2025-12-23 09:30:28 But, it is still one of 'the big 3' even today
2025-12-23 09:31:59 I guess it depends what you call 'worst'. If it's worst currently, then GIF by a mile, if it's worst when it was designed, probably lossy WebP
KonPet
2025-12-23 09:55:50 Yeah, for its time it's great. But its existence today is kind of odd in my eyes
_wb_
2025-12-23 10:03:23 It's not odd, it's the only image format that managed to get actual universal support for animation. With any other image format, even jxl, there are at least _some_ important applications that will treat animations as still images (e.g. treating an APNG as just its first frame).
mincerafter42
2025-12-23 10:18:50 there's still hope for animated PNG or JXL; 30-ish years ago animated GIF support was just as spotty: <https://web.archive.org/web/0/http://www.etsimo.uniovi.es/gifanim/userguid.htm> :p
NovaZone
2025-12-23 10:38:09 Tmw anything is better than gif at this point, jxl, webp, png, avif etc
RaveSteel
2025-12-23 10:40:03 animated webp still cannot be decoded by ffmpeg
NovaZone
2025-12-23 10:40:21 Huh? Srsly?
RaveSteel
2025-12-23 10:40:50 yeah ``` [webp @ 0x5613eb590140] skipping unsupported chunk: ANIM [webp @ 0x5613eb590140] skipping unsupported chunk: ANMF Last message repeated 41 times [webp @ 0x5613eb590140] image data not found ```
NovaZone
2025-12-23 10:41:08 Man that's weird
RaveSteel
2025-12-23 10:41:24 There's an open ticket, but it's many years old
NovaZone
2025-12-23 10:42:36 So ffmpeg can make animated webp's but can't play them back? 😂
RaveSteel
2025-12-23 10:42:45 yep lmao
2025-12-23 10:43:37 and libwebp, the reference library, also does not decode animated webp
NovaZone
2025-12-23 10:44:06 Gotta love the jank sometimes 😆
jonnyawsom3
2025-12-23 10:44:39 Oh you reminded me, I was gonna post something in <#822105409312653333>
RaveSteel
2025-12-23 10:44:54 webp is probably the worst format for animation but at least browsers can play it
NovaZone
2025-12-23 10:45:25 And most img viewers, yea
RaveSteel
2025-12-23 10:46:10 ye
NovaZone
2025-12-23 10:47:06 How? No1 knows kek #itjustworks
2025-12-23 10:47:50 Magic mods surely
KonPet
2025-12-23 10:53:21 wait, wasn't webp based on like vp8?
2025-12-23 10:53:48 what's the point of "animated webp" then?
2025-12-23 10:53:58 why not just use vp8
RaveSteel
2025-12-23 10:56:15 looping for example
jonnyawsom3
2025-12-23 10:56:18 The same reason AVIF exists, because you can't put a video in an image tag
0xC0000054
2025-12-23 11:22:54 The webp reference library has two APIs, a simplified libwebp API for encoding/decoding single frame images without EXIF/XMP metadata, and the low level webpmux API for animated image and advanced metadata support. I agree that is a strange design.
RaveSteel
2025-12-23 11:27:04 huh, interesting https://developers.google.com/speed/webp/docs/webpmux
Meow
2025-12-24 06:31:23 The author of ImageOptim hates image animations and that tool simply destroys APNG
2025-12-24 06:31:52 Honestly I don't know what he likes
Exorcist
2025-12-24 09:45:28 WebP animation is VP8 **all-intra**
Demiurge
2025-12-24 10:17:22 So in other words webp is literally the worst idea the Youtube and Chrome Codec Teams ever forced onto the world
jonnyawsom3
2025-12-24 10:32:16 (except lossless WebP)
Demiurge
2025-12-24 10:43:57 Which would have been amazing if it wasn't attached to the rest of webp holding it back.
2025-12-24 10:44:32 Was Jyrki still with Google Research Zurich back then?
lonjil
2025-12-24 11:10:09 Wdym "still"?
Exorcist
2025-12-25 03:22:45 a funny term I learned from reddit: bitrate snob
2025-12-25 03:24:46 WebP vs packJPG is -12.5% BD-rate, but you lose everything else
AVIF vs JXL is -12.5% BD-rate, but you lose everything else, again
Demiurge
2025-12-25 03:54:54 Well I think he's part of the Zurich team atm right? So I was wondering if that was still the case back then when he contributed to webp.
lonjil
2025-12-25 10:51:09 Oh, the right word is "already", not "still". But yeah, he was, I'm pretty sure.
DZgas Ж
2025-12-25 05:51:06 Why make full VP8 inside WebP if you can use VP8 inside WebM as a video? <:Thonk:805904896879493180>
2025-12-25 05:53:57 Since WebP includes a separate codec, "VP8L", for lossless compression, it was impossible to just stay compatible with legacy VP8. It's a whole new format, a whole new library. They cut and trimmed everything; after all, it was 2010. Modern AVIF, by contrast, is the full, absolute AV1 <:avif:1280859209751334913>
2025-12-25 06:00:12 Ultimately, motion vectors are the most difficult thing to decode in video. WebP's lack of them (all-intra) made it suitable for GIF-style encoding, but by 2010 standards. Besides, no one forces you to use only lossy coding; if you're coding elements like emoji or buttons on websites, lossless doesn't support anything other than all-intra anyway <:JXL:805850130203934781>
lalovoe
2025-12-25 08:25:39 So I'm stuck on an excruciatingly slow internet and I can see the images on a webpage load in real-time. I looked in the elements page, and the extension of the images are .jpg. But I have never seen a jpg load like this. Could it be a different variant that just uses the same container/file suffix? Or is it actually oldschool jpg?
2025-12-25 08:25:55 I wonder if it can be identified from the loading alone
2025-12-25 08:27:42 It looked to me as if they were loading multiple "passes" of quality, from top to bottom. I remember in the past when I saw images load it wouldn't be in distinct passes but just from top to bottom in a single revealing pass.
2025-12-25 08:29:20 It's unfortunate how the red-ish colors look more blocky compared to the more blurry nature of the rest of the lower quality passes
username
2025-12-25 08:30:17 these are actual regular JPEGs, specifically progressive ones
lalovoe
2025-12-25 08:30:22 MediaInfo just reports old-school JPEG as far as I'm aware. Interesting. I really only thought more modern codecs could do this
username
2025-12-25 08:30:59 with old-school JPEG they can either be "baseline" or "progressive"
lalovoe
2025-12-25 08:31:31 Hmm. I didn't see anything about that reported by MediaInfo. But that's interesting.
username
2025-12-25 08:31:46 [ExifTool](https://exiftool.org/) should report it
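or for a quick-and-dirty check, something like this rough Python sketch (it just scans for the SOF marker bytes instead of properly walking the segments, so an embedded thumbnail could fool it; `photo.jpg` is a placeholder):
```
# baseline JPEGs declare their frame with an SOF0 marker (FF C0),
# progressive ones with SOF2 (FF C2)
with open("photo.jpg", "rb") as f:
    data = f.read()

if b"\xff\xc2" in data:
    print("progressive (SOF2)")
elif b"\xff\xc0" in data:
    print("baseline (SOF0)")
else:
    print("no SOF0/SOF2 marker found")
```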
lalovoe
2025-12-25 08:33:10 downloading...........
2025-12-25 08:33:34 idk why my internet is so spotty now. I mean discord seems to work, but that exiftool download is reportedly gonna take an hour even though it's just 8MB
2025-12-25 08:33:52 time to restart the router. this is strange. it almost never happens
KonPet
2025-12-25 09:20:20 Something I've been wondering: I'm currently working in the area of video coding (nothing big, essentially just implementing things for someone doing their PhD), and there we obv also use DCT-ish tools, but mainly just for the residuals after prediction
2025-12-25 09:21:01 now, what if we just didn't do any prediction, and just encoded stuff on DCTs? Would it be possible to perfectly replicate a JPEG file in, say, h.266?
2025-12-25 09:22:17 Like, I'm not talking about recreating the pixels. I'm talking about essentially trying to directly convert DCT values from JPEG to h.266's DCT values
AccessViolation_
2025-12-25 09:44:40 out of curiosity, how do you DCT encode prediction residuals without risking errors in pixel values accumulating over time? since during decoding, the prediction residuals of the current pixel are added to the value from some operation on the already decoded pixels, meaning it would inherit any error in the pixel values that were already decoded, further corrupting the value of the current pixel, and the next even more so, in a long chain?
juliobbv
2025-12-25 09:55:30 4:4:4 should be doable AFAIK
2025-12-25 09:55:39 not sure about other chroma subsampling modes
2025-12-25 09:57:15 but the DCT basis functions are different in VVC, so you can only perform a one-way conversion because you'll need to fiddle with the coeffs a bit to minimize total error
_wb_
2025-12-25 10:00:00 Making these things correct and reversible is basically only possible if the details match, not just the general idea. It needs to be the same DCT, the same YCbCr, the same chroma siting (in case of chroma subsampling), etc. Most video codecs do most of these details differently.
2025-12-25 10:01:56 Things like yuv matrix and tv range vs full range can typically be signaled and in principle can be made to match (though even then there will be implementations that will just assume it is tv range regardless of what is signaled)
2025-12-25 10:03:00 Question is also: how much would you gain if you could do it? Most video codecs don't really have super fancy entropy coding
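To make the "same DCT" point concrete, a tiny numpy sketch, using H.264's well-known 4x4 integer core transform as the video-codec stand-in (HEVC/VVC use bigger integer approximations, but it's the same story):
```
import numpy as np

# lowest AC basis vector of the true 4-point DCT-II...
dct_row = np.cos((2 * np.arange(4) + 1) * np.pi / 8)
dct_row /= np.linalg.norm(dct_row)

# ...vs the corresponding row of H.264's 4x4 integer core transform
int_row = np.array([2.0, 1.0, -1.0, -2.0])
int_row /= np.linalg.norm(int_row)

print(dct_row)  # [ 0.6533  0.2706 -0.2706 -0.6533]
print(int_row)  # [ 0.6325  0.3162 -0.3162 -0.6325]
# different basis vectors, so there is no exact coeff-to-coeff mapping
```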
juliobbv
2025-12-25 10:12:23 one compelling reason would be to have access to the video codecs' deblocking/filtering tool repertoire
2025-12-25 10:13:04 so you could increase the appeal of JPEGs by minimizing coded block boundaries and artifacts
2025-12-25 10:13:54 and you could extend this to video: from motion JPEG to all-intra VVC
2025-12-25 10:18:22 oh, I forgot about QMs
2025-12-25 10:19:37 I'm not sure if VVC's scaling lists are expressive enough to represent JPEG QMs
2025-12-25 10:21:19 <@295894971872182272> if you're open to considering AV2, JPEG recompression is possible with the existing tooling (still one way though -- DC prediction is a bit different in AV2 so coeff fiddling will still be needed)
2025-12-25 10:28:30 you dither your DC coeff values so errors don't spatially "drift" over the image
2025-12-25 10:29:29 JPEG only supports DC pred so that makes the problem space more tractable
2025-12-25 10:40:26 (BTW: there's a JPEG recomp prototype based on an AV2-like code base)
jonnyawsom3
2025-12-25 10:42:51 Ironically, JXL is the *only* modern codec that progressively loads. It's something of a lost art since everything else is video codec based now (technically AVIF can but only a separate lower res frame)
_wb_
2025-12-25 10:43:58 Ah yes if you want blocky jpegs to look better, that might be nice. But then you might be better off doing some fancy jpeg decoding with dequant that is not just a multiply but something that minimizes blockiness while staying within quantization buckets, then postprocessing that with whatever fancy filters you like (or AI) to reduce remaining artifacts, and then encoding the result with the full coding tools available.
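(rough numpy sketch of the "staying within quantization buckets" constraint, assuming plain round-to-nearest quantization like JPEG's:)
```
import numpy as np

def coeff_bounds(qcoeff, qstep):
    # a coefficient quantized to integer k with step q could have been
    # anywhere in [q*(k-1/2), q*(k+1/2)); standard dequant just picks q*k
    return qstep * (qcoeff - 0.5), qstep * (qcoeff + 0.5)

lo, hi = coeff_bounds(np.array([0, 1, -2]), qstep=16.0)
# a "fancy" decoder may pick any value inside [lo, hi) per coefficient,
# e.g. whichever choice minimizes blockiness across block boundaries
print(lo, hi)
```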
juliobbv
2025-12-25 10:44:59 yeah, that's also an option
2025-12-25 11:04:24 that said, I think there's value in coming up with a system of clever filtering/deblocking heuristics that gets you 75% there in artifact prevention/appeal with 100x the speed
KonPet
2025-12-25 11:05:47 let me answer that first: Just because it'd be cool. That's pretty much it
2025-12-25 11:08:20 Really? I had hoped that maybe one might be able to somehow convert between the two using some maths (here be dragons. I don't know enough about how h.266 handles DCTs to know if this would be possible)
2025-12-25 11:09:34 I have never worked with AV2, but I wouldn't rule it out. If you say it's doable then maybe I might look into that at some point
2025-12-25 11:13:10 I'm not sure I understand this question. Do you mean in the context of a real video? Or in my hypothetical transcoding (not sure this is the right word 😅)?
2025-12-25 11:16:05 anyway, I'm not sure I'll have the time to actually look into this in the coming months. I'm currently busy enough with work and uni as it is. This is just a shower thought I had and I wanted to get some input on it. If anyone here for some reason wants to do this, feel free ofc. Though it does admittedly sound like a bit of a waste of time...
AccessViolation_
2025-12-25 11:21:38 never mind it, I made too many assumptions about how it worked, seemingly to the point where what I'm asking doesn't even apply ^^"
Exorcist
2025-12-26 01:08:08 The encoder runs a decoder inside. The prediction of the second block is based on the first encoded-then-decoded (lossy) block, not on the first source block. So the prediction residual has a chance to compensate for the loss in the previous encoded-then-decoded block.
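tiny sketch of that closed loop (1-D prediction with a crude quantizer standing in for DCT+quant, just to show the error doesn't drift):
```
import numpy as np

def quant_roundtrip(x, q=8.0):
    return q * np.round(x / q)  # stand-in for transform + quant + dequant

src = np.array([10.0, 12.0, 15.0, 14.0, 13.0, 11.0, 40.0, 42.0])
recon = np.empty_like(src)
prev = 0.0                      # decoder starts from the same state
for i, s in enumerate(src):
    pred = prev                 # predict from the *reconstructed* value
    recon[i] = pred + quant_roundtrip(s - pred)
    prev = recon[i]

print(np.abs(src - recon).max())  # always <= q/2, it never accumulates
```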
_wb_
2025-12-26 02:26:10 That and everything is done in integer arithmetic in video codecs with a fully defined decoder, so you cannot get cumulative errors from implementation differences, everything is defined exactly. This video codec approach has the big benefit of fully determined results and it is essential if you want to do long runs of inter frame without risking cumulative errors, but it has the big downside that precision is fixed to the lowest common denominator, i.e. it has to be the lowest precision you can get away with since hardware implementers want to minimize circuit complexity / cost. So this compromise precision is what becomes spec, and any implementation that would use more precision is just wrong.
KonPet
2025-12-26 02:29:54 Honestly I was a bit surprised when I looked into the JPEG XL spec and saw those huge tables of floats, or the document describing the acceptable amount of error for a decoder
2025-12-26 02:30:43 I was used to only ever seeing integer arithmetic (in most cases 16-bit)
jonnyawsom3
2025-12-26 02:37:06 Technically JXL can do 24bit int IIRC, but there's nothing stopping it from going up to 64bit float too, only current spec limits
_wb_
2025-12-26 03:02:52 The modular parts are integer only, but everything else is defined using just math on real numbers in the spec, and it's up to implementations to decide how they want to approximate that. This is also how the original JPEG spec did it. There are pros and cons to both approaches, but one main advantage imo is that the spec is not imposing any precision limit: if you want to implement things with double or quadruple precision floats, it will be conforming to all current levels (and we could define a level that requires this kind of precision). That's different to video codecs where whatever approximation was turned into spec, becomes _the_ approximation everyone has to use, even if for example it doesn't have enough precision to avoid banding in all cases.
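(a trivial illustration of "implementations decide how to approximate the real-valued math": the same transform in float32 vs float64, both conforming, just disagreeing at the level of floating-point noise)
```
import numpy as np
from scipy.fft import dct

x = np.linspace(0.0, 1.0, 256)
c64 = dct(x, norm='ortho')                     # float64 throughout
c32 = dct(x.astype(np.float32), norm='ortho')  # float32 throughout
print(np.abs(c64 - c32.astype(np.float64)).max())
```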
AccessViolation_
2025-12-26 10:14:22 how can you do dct on incomplete data?
2025-12-26 10:15:37 doesn't the whole frequency representation of some data change if you add another data point?
2025-12-26 10:16:23 you can't dct encode 10 residuals, then decode, then add the 11th residual. the value of the new residual will change the coefficients
dogelition
2025-12-26 10:22:51 wdym by incomplete data? an entire rectangular block of residual values is transformed with a 2d dct and then quantized etc.
2025-12-26 10:23:24 (before that, the encoder predicts the block via intra prediction the same way the decoder would, and calculates the residual as the difference between the input and the predicted block)
AccessViolation_
2025-12-26 10:24:23 the message I replied to said that before encoding the next residual, you decode the ones you already encoded and base residual on that to avoid drifting
dogelition
2025-12-26 10:29:53 "decode the ones you already encoded" as in doing the same work the decoder does for reconstructing the top/left blocks that the current block uses as prediction source
2025-12-26 10:30:46 just so that it has the same top/left data to work with as the decoder and can encode the next blocks by predicting from those
AccessViolation_
2025-12-26 10:41:59 but this is DCT data. as you add more values, the coefficients will change. you can't take the 5 residuals you already have, DCT encode them lossily, see what they become, and base the 6th prediction on those. because as soon as you add the sixth value to the mix of things that get DCT encoded, the coefficients will likely change, and with them, the final values of the five residuals you already encoded before

to demonstrate this with a thought experiment: if you encode a run of similar values, and then add one that is wildly different, this will cause ringing in the values that were originally smooth/flat. so if those 5 values are more or less the same, and you base the 6th value on those, then as soon as you add the 6th value, those previous five change (they become compromised by ringing, in this specific example), and your compensating for what *those values were* is no longer relevant because they've changed
dogelition
2025-12-26 10:48:26 you don't do that. you predict an entire block, calculate the entire pixel-domain residual for that block, then dct-encode and quantize etc. that entire residual block

to get the exact same lossy data that the decoder will work with for subsequent blocks, you also decode the entire block again (read-only)
2025-12-26 10:48:54 you're not affecting past data because each dct is per block, it's just used as a way of encoding the difference between the target and predicted data of a block in a way that can be compressed better
2025-12-26 10:51:40 don't know about other codecs but at least av1 also has an identity transform, where you're not doing a dct at all and just storing the actual pixel domain difference
AccessViolation_
2025-12-26 11:02:24 ah, for context, I assumed they were doing something akin to losslessly encoding the entire image with predictors and then DCT encoding the residuals
Exorcist
2025-12-27 03:58:59 prediction is block-by-block, not pixel-by-pixel; when the edge pixels of neighboring blocks are extended, it's done in one step
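e.g. a minimal DC-style intra predictor, as a numpy sketch (not any particular codec's exact mode):
```
import numpy as np

def dc_predict(top, left):
    # predict the whole block at once from the reconstructed pixels
    # bordering it above (top row) and to the left (left column)
    dc = (np.sum(top) + np.sum(left)) / (len(top) + len(left))
    return np.full((len(left), len(top)), dc)

print(dc_predict(np.array([100.0, 102.0, 101.0, 99.0]),
                 np.array([98.0, 100.0, 103.0, 101.0])))
```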
cioute
2025-12-28 05:29:20 how do I get yuv420p10le libx265 output in termux (or another android terminal app)? (ffmpeg)
RaveSteel
2025-12-28 05:34:01 use ffmpeg
cioute
2025-12-28 05:34:45 pkg install ffmpeg? already
RaveSteel
2025-12-28 05:35:30 https://benwilfong.com/ffmpeg%20cheat%20sheet/
jonnyawsom3
2025-12-28 05:49:08 https://letmegooglethat.com/?q=ffmpeg+x265+10bit
2025-12-28 05:50:37 The first result https://stackoverflow.com/a/62643531
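which boils down to something like this (untested in termux; assumes the termux ffmpeg build includes libx265):
```
ffmpeg -i input.mp4 -c:v libx265 -pix_fmt yuv420p10le -crf 24 output.mp4
```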
dogelition
2025-12-28 07:22:46 https://www.sisvel.com/insights/av2-is-coming-sisvel-is-prepared/
HCrikki
2025-12-28 07:51:25 the language reeks of peak patent trolling. there's no development or research, just trying to parasite on the spec
2025-12-28 07:52:23 since licensing costs almost nothing, implementers think it's fine to pay off one parasite to make them go away, but this only scares other targets, since the secrecy in these discussions gives the impression they have a legit licensing pool
Exorcist
2025-12-28 07:57:16
Magnap
2025-12-30 09:28:39 Wow, that is one of the most threatening things I've read this year
_wb_
2025-12-30 12:43:02 trolls are gonna troll; when they come after cloudinary I always advise our legal team to not feed the trolls. Most of them go after hardware companies though, since software-only companies like cloudinary can much more easily just roll back deployment if needed.
Cacodemon345
2025-12-30 08:56:41 Meanwhile Disney's content got pulled from Brazil because of the patent cartels.
Meow
2025-12-31 02:22:55 The well-known patent troll is prepared
DZgas Ж
2025-12-31 05:48:16 I'll never understand this fuss with patent trolls. In my country, you just can't patent "ideas" or algorithms. Only real, physical things. That's why it's funny to see things like this... On the other hand, we could never have "h265" because you can't make money from it.
AccessViolation_
2025-12-31 08:40:31 the list of partner companies is sad
2025-12-31 08:41:19 my (dutch) ISP is on there too, which is weird, I thought software patents weren't really a thing in europe