|
jonnyawsom3
|
|
derberg
But funnily GIMP does have issues with this one and only shows the top layer
|
|
2025-03-18 12:03:15
|
libjxl will coalesce by default, since image viewers don't have layer views or blending themselves. Likely something to do with alpha and them not setting it to output layers
|
|
|
Quackdoc
tried gimp with a processing test, no idea if I did something wrong but I had very clear 8-bit processing artifacts when I applied a gaussian blur to a red star on a cyan background
|
|
2025-03-18 12:03:45
|
Tried Krita?
|
|
|
Quackdoc
|
2025-03-18 12:04:13
|
krita is no good, its color management pipeline is practically non-existent
|
|
2025-03-18 12:04:32
|
good enough for painting but for photo editing and stuff like that, no good
|
|
|
derberg
|
|
libjxl will coalesce by default, since image viewers don't have layer views or blending themselves. Likely something to do with alpha and them not setting it to output layers
|
|
2025-03-18 12:04:53
|
So I should just add another layer containing everything?
|
|
|
Quackdoc
|
2025-03-18 12:05:15
|
https://cdn.discordapp.com/attachments/719811866959806514/1216585057112821851/image.png?ex=67d98de0&is=67d83c60&hm=e5f2696ec763c4a817d792e91ba774e28fd6b85ffe73da7f409860aefa690bfb& for example I still can never get this to look right
|
|
2025-03-18 12:05:34
|
always have some kind of non-linear processing
|
|
|
jonnyawsom3
|
|
Quackdoc
https://cdn.discordapp.com/attachments/719811866959806514/1216585057112821851/image.png?ex=67d98de0&is=67d83c60&hm=e5f2696ec763c4a817d792e91ba774e28fd6b85ffe73da7f409860aefa690bfb& for example I still can never get this to look right
|
|
2025-03-18 12:10:14
|
Funny thing is, they use that same example of red and cyan
|
|
2025-03-18 12:10:15
|
https://docs.krita.org/en/general_concepts/colors/linear_and_gamma.html
|
|
|
Quackdoc
|
2025-03-18 12:13:27
|
if they understand the issue, fix it by default
https://cdn.discordapp.com/emojis/939666933119324222?size=64
|
|
2025-03-18 12:13:51
|
>ICCs
|
|
2025-03-18 12:13:56
|
oh so it's a hack
|
|
2025-03-18 12:15:40
|
also would need to support color management for incoming photos, dunno if that works
|
|
|
derberg
|
2025-03-18 04:20:24
|
I hope this is not common with 200+ layers... 🫠
|
|
|
Traneptora
|
2025-03-18 04:38:38
|
my exif overhaul for FFmpeg is coming along nicely
|
|
2025-03-18 04:40:00
|
https://github.com/Traneptora/FFmpeg/commits/exif-overhaul-2/
|
|
|
Quackdoc
|
2025-03-18 04:49:17
|
I didn't even know ffmpeg touches it
|
|
|
Traneptora
|
|
Quackdoc
I didn't even know ffmpeg touches it
|
|
2025-03-18 04:50:21
|
it barely touches it
|
|
2025-03-18 04:50:32
|
for a small number of specific decoders it grabs the orientation data and that's it
|
|
|
Quackdoc
|
|
Traneptora
|
|
Quackdoc
ahhh
|
|
2025-03-18 04:51:31
|
also for those decoders it does parse it and add it to AVFrame->metadata
|
|
2025-03-18 04:51:43
|
only for jpeg and not in a way that's easily modifiable or transferable
|
|
2025-03-18 04:51:59
|
my overhaul adds a separate struct to contain the parsed data which makes adding and modifying it much easier
|
|
2025-03-18 04:52:30
|
also adds public APIs to parse buffers of EXIF data and write the struct into a buffer. useful for encoders too to be able to take the sidedata and write it
|
|
2025-03-18 04:52:49
|
it also includes compat with things like zeroing out the orientation
|
|
|
Quackdoc
|
2025-03-18 04:53:34
|
are the commits in question just the ones with today's commit date or are there more buried in history?
|
|
|
Traneptora
|
2025-03-18 04:53:51
|
it's the ones authored by me at the top of the history
|
|
|
Traneptora
it also includes compat with things like zeroing out the orientation
|
|
2025-03-18 04:53:57
|
for example, if you attach the sidedata as displaymatrix, and then a filter, say, applies the sidedata and removes the displaymatrix, you end up with exif orientation that is incorrect, so if I'm attaching a displaymatrix I always zero out the orientation after attaching it
|
|
2025-03-18 04:54:07
|
and have a corresponding function to re-parse the orientation and add it back
|
|
|
Quackdoc
|
2025-03-18 04:57:03
|
I wish github made it easier to compare commits' code instead of just manually typing it into the url, xD either way nice work, I wonder if this could be used to make dng stuff less painful, iirc it can use exif for a few things
|
|
|
Traneptora
|
2025-03-18 04:57:19
|
DNG is basically just TIFF
|
|
2025-03-18 04:57:25
|
I don't know
|
|
2025-03-18 04:57:31
|
I haven't touched the TIFF code tbh
|
|
|
Quackdoc
|
2025-03-18 04:57:53
|
I dont blame anyone for not wanting to touch tiff code lol
|
|
|
_wb_
|
2025-03-18 06:57:59
|
TIFF itself is not that horrible, just a little annoying that you have to handle both little-endian and big-endian for everything, even header syntax.
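A minimal stdlib sketch of the dual-byte-order handling described above (a hypothetical header parser for illustration, not from any project mentioned here; the byte-order mark picked here has to drive every later read too):

```python
import struct

def read_tiff_header(data: bytes):
    # TIFF begins with "II" (little-endian) or "MM" (big-endian),
    # then the magic number 42, then the offset of the first IFD.
    order = {b"II": "<", b"MM": ">"}.get(data[:2])
    if order is None:
        raise ValueError("not a TIFF")
    magic, ifd_offset = struct.unpack(order + "HI", data[2:8])
    if magic != 42:
        raise ValueError("bad TIFF magic")
    return order, ifd_offset

print(read_tiff_header(b"II*\x00\x08\x00\x00\x00"))  # ('<', 8)
print(read_tiff_header(b"MM\x00*\x00\x00\x00\x08"))  # ('>', 8)
```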
|
|
2025-03-18 06:58:29
|
Exif is also TIFF btw
|
|
2025-03-18 07:01:18
|
Baseline TIFF is not too hard to implement. But a complete TIFF decoder including all extensions, that's almost impossible.
|
|
|
Traneptora
|
2025-03-18 08:50:50
|
Yea, my parser uses some of the Tiff code but not a ton of it
|
|
|
_wb_
|
2025-03-18 03:09:46
|
I wrote a simple tiff decoder a few years ago, it was just 400 lines of code or so. We needed that because libtiff returns full images, which was too memory-heavy.
|
|
|
Quackdoc
|
2025-03-18 07:35:59
|
indeed, but when you write a tiff decoder, who knows if it will work with your image lol
|
|
|
Traneptora
|
2025-03-18 08:36:41
|
the issue with EXIF is the corner cases
|
|
2025-03-18 08:36:54
|
like you can't just take IFD0 out of a tiff file cause it'll be corrupted
|
|
2025-03-18 08:37:00
|
cause the offsets are relative to the start of the file
|
|
2025-03-18 08:37:16
|
likewise the same is true about MakerNote which is by spec a binary blob, but in practice is often an IFD
|
|
|
spider-mario
|
2025-03-18 10:29:40
|
GStreamer 1.26 adds support for H.266 and JPEG XS
|
|
2025-03-18 10:30:39
|
oh, interesting, also HDR support through Blackmagic Design’s “DeckLink” PCIe card
|
|
2025-03-18 10:31:05
|
and “Apple AAC audio encoder and multi-channel support for the Apple audio decoders”
|
|
|
Quackdoc
|
2025-03-18 11:13:17
|
yay it finally has proper decklink? nice
|
|
|
pekaro
|
2025-03-19 07:14:07
|
hi, once again with some general compression questions. Do you know of any reference on context splitting for coding wavelet transform coefficients? The best entropy I'm getting is by using only one context on coefficients that span the whole image. I thought I could do better by dividing into 3 contexts: haar wavelet tree nodes, the second-deepest layer, and the rest of the tree, but got worse results than with 1 ANS context
|
|
2025-03-19 07:15:34
|
I thought splitting would help, because when I plot these separate layer coefficients I get several Laplace-looking distributions in the histograms
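Whether splitting wins can be checked directly from empirical Shannon entropy: splitting only pays off when the per-context distributions genuinely differ, and the cost of signaling extra histograms can eat the gain. A toy sketch with made-up coefficient data (not the actual wavelet coefficients):

```python
import math
from collections import Counter

def entropy_bits(symbols):
    # empirical Shannon entropy of a symbol list, in total bits
    n = len(symbols)
    return -sum(c * math.log2(c / n) for c in Counter(symbols).values())

# two contexts with genuinely different Laplace-ish distributions
deep = [0] * 60 + [1] * 20 + [-1] * 20   # deepest layer: small coefficients
rest = [4] * 50 + [5] * 25 + [3] * 25    # coarser layers: larger ones

merged = entropy_bits(deep + rest)
split = entropy_bits(deep) + entropy_bits(rest)
print(merged, split)  # splitting is cheaper here...
# ...but only before adding the cost of transmitting the extra histograms,
# which is what can make 3 contexts lose to 1 in practice
```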
|
|
|
AccessViolation_
|
2025-03-19 07:57:15
|
<@226977230121598977> is the idea with your audio spectrogram images to turn them back into sound so you can 'hear' the ways different lossy image formats sound with how the music is distorted? or just seeing how well they compress
|
|
2025-03-19 08:00:31
|
I wonder if you could meaningfully distinguish different lossy compression schemes by how they sound, like "ah this sounds like webp vaseline" or "pretty clear jpeg macroblocking and ringing on this one". I do know with certain things, turning visual data into sound data gives us an oddly better ability to detect patterns
|
|
|
username
|
|
AccessViolation_
I wonder if you could meaningfully distinguish different lossy compression schemes by how they sound, like "ah this sounds like webp vaseline" or "pretty clear jpeg macroblocking and ringing on this one". I do know with certain things, turning visual data into sound data gives us an oddly better ability to detect patterns
|
|
2025-03-19 08:01:54
|
https://discord.com/channels/794206087879852103/806898911091753051/1266183715118383136
|
|
|
AccessViolation_
|
2025-03-19 08:03:10
|
oh!
|
|
2025-03-19 08:03:11
|
thanks
|
|
|
DZgas Ж
|
|
AccessViolation_
<@226977230121598977> is the idea with your audio spectrogram images to turn them back into sound so you can 'hear' the ways different lossy image formats sound with how the music is distorted? or just seeing how well they compress
|
|
2025-03-19 08:03:13
|
no, it is not feasible because of wave physics, or rather the discrete Fourier transform (DFT, not DCT) falls under the limitation of the physical uncertainty principle. The images created are only an approximate representation of the fact that somewhere there is some sound, some wave of such and such strength. And it cannot be converted back.
Although I have already done this, I will send the results of the work now
|
|
2025-03-19 08:05:01
|
|
|
|
AccessViolation_
<@226977230121598977> is the idea with your audio spectrogram images to turn them back into sound so you can 'hear' the ways different lossy image formats sound with how the music is distorted? or just seeing how well they compress
|
|
2025-03-19 08:07:48
|
the main problem is that a wave is an extremely complex thing that has a phase, a signal strength, a frequency, and a point in time. And if you just generate a spectrogram, that's only 3 indicators: time, frequency, signal strength. There is no phase. For this track, I chose a random signal phase, so it comes out as the zeroth harmonic, zero.
|
|
|
AccessViolation_
|
2025-03-19 08:08:21
|
interesting
|
|
2025-03-19 08:09:34
|
> or rather the discrete Fourier transform DFT (not DCT) falls under the limitation of the physical law of the uncertainty principle
is fft actually not perfectly reversible? is it only reversible when the wave happens to be a sum of exactly the frequencies you sample for the fft, or not even then?
|
|
2025-03-19 08:09:47
|
maybe i'm misunderstanding
|
|
|
DZgas Ж
|
2025-03-19 08:11:10
|
another problem is low frequencies, they cannot be adequately represented, there is not enough information.
The uncertainty principle works like this: you have 4 wave points. Can you say what power they have? At what frequency? Can you say anything at all?
No. That would violate the physical laws of our world. Spectrum analysis happens over a window, for example a window of 500 points. The point drawn on the spectrogram is the central point of those 500 wave points.
|
|
2025-03-19 08:11:57
|
https://en.wikipedia.org/wiki/Uncertainty_principle
|
|
|
AccessViolation_
> or rather the discrete Fourier transform DFT (not DCT) falls under the limitation of the physical law of the uncertainty principle
is fft actually not perfectly reversible? is it only reversible when the wave happens to be a sum of exactly the frequencies you sample for the fft, or not even then?
|
|
2025-03-19 08:13:44
|
high frequencies may be reversible enough to be audible. But full reversibility is not possible at any frequency
|
|
|
AccessViolation_
interesting
|
|
2025-03-19 08:15:54
|
if you calculate the phase based on the whole wave, you can write a table of the starting phase for each frequency, for example, here I did it, it sounds better
NUMBER / 100 * 2 * 3.141
93 43 49 74 27 98 98 86 82 56 85 21 18 92 16 76 03 05 88 27 95 83 28 91 44 38 12 95 13 85 42 73 12 64 25 15 49 74 41 83 52 76 14 15 48 67 05 47 74 93 13 98 68 15 03 29 90 55 19 53 71 00 75 88 47 55 36 04 76 99 88 49 24 82 33 99 61 19 76 81 64 54 76 81 56 91 23 97 73 73 98 04 58 58 13 86 43 87 71 80 20 17 07 96 13 68 75 28 56 02 50 03 51 80 88 33 16 62 89 52 30 92 69 14 78 17 31 86 04 84 13 94 88 13 70 98 78 92 88 66 27 07 09 33 97 56 77 46 98 59 85 07 90 75 59 75 79 59 31 45 54 54 56 92 75 11 10 15 89 99 91 30 24 11 17 26 43 46 73 54 43 59 76 71 30 80 51 78 97 36 74 56 79 66 02 15 86 88 37 49 66 03 96 64 77 27 69 18 04 90 79 64 77 15 33 33 54 89 64 28 28 36 87 09 07 39 61 89 80 97 34 31 93 68 29 35 92 85 10 55
|
|
2025-03-19 08:16:47
|
but this still won't solve the low frequency problem.
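The phase loss is easy to demonstrate: two waves with identical magnitude spectra can be completely different signals, so a magnitude-only spectrogram cannot distinguish them. A stdlib-only sketch (naive DFT, toy signals):

```python
import cmath, math

def dft_mag(x):
    # magnitude spectrum of a signal (naive O(n^2) DFT)
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * f * t / n)
                    for t in range(n)))
            for f in range(n)]

n = 64
a = [math.cos(2 * math.pi * 5 * t / n) for t in range(n)]  # phase 0
b = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]  # 90 degrees shifted

# identical spectrograms, very different waveforms:
print(all(abs(p - q) < 1e-9 for p, q in zip(dft_mag(a), dft_mag(b))))  # True
print(max(abs(p - q) for p, q in zip(a, b)) > 1)                       # True
```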
|
|
|
AccessViolation_
|
2025-03-19 08:16:48
|
I didn't even consider how the phase might fuck with the fft
|
|
|
DZgas Ж
|
2025-03-19 08:17:17
|
This picture is a lie. Don't believe it.
|
|
2025-03-19 08:17:33
|
<:This:805404376658739230> lie
|
|
|
AccessViolation_
|
2025-03-19 08:17:40
|
I should watch veritasium's video on the fourier transform again
|
|
2025-03-19 08:17:49
|
this is interesting
|
|
|
DZgas Ж
|
|
AccessViolation_
I should watch veritasium's video on the fourier transform again
|
|
2025-03-19 08:18:23
|
I haven't watched it, but knowing this channel, it will most likely be very superficial
|
|
|
AccessViolation_
|
2025-03-19 08:19:53
|
probably, but I know next to nothing about math, am dyslexic, and have attention deficit, so I take what I can get xD
|
|
|
DZgas Ж
|
|
AccessViolation_
I wonder if you could meaningfully distinguish different lossy compression schemes by how they sound, like "ah this sounds like webp vaseline" or "pretty clear jpeg macroblocking and ringing on this one". I do know with certain things, turning visual data into sound data gives us an oddly better ability to detect patterns
|
|
2025-03-19 08:19:55
|
I am disappointed in spectrograms. But nothing prevents you from recording sound using "film" technology
|
|
|
AccessViolation_
|
2025-03-19 08:20:47
|
like you would not believe how long it took me to read a few paragraphs from the jpeg xl paper that I was interested in because they related to something else I was working on
|
|
|
DZgas Ж
|
2025-03-19 08:21:38
|
|
|
2025-03-19 08:21:56
|
|
|
2025-03-19 08:22:07
|
the paper
|
|
2025-03-19 08:22:10
|
my paper
|
|
|
AccessViolation_
I wonder if you could meaningfully distinguish different lossy compression schemes by how they sound, like "ah this sounds like webp vaseline" or "pretty clear jpeg macroblocking and ringing on this one". I do know with certain things, turning visual data into sound data gives us an oddly better ability to detect patterns
|
|
2025-03-19 08:26:42
|
I've done it before: you can do a 2D DCT not on 8x8 blocks but on the whole image, or on any other data like sound, then transform it back, with compression in between.... The result was generally complete crap that's not even worth attention.
|
|
|
AccessViolation_
I wonder if you could meaningfully distinguish different lossy compression schemes by how they sound, like "ah this sounds like webp vaseline" or "pretty clear jpeg macroblocking and ringing on this one". I do know with certain things, turning visual data into sound data gives us an oddly better ability to detect patterns
|
|
2025-03-19 08:29:47
|
It's much easier to do this: take the wav data, convert it to an 8-bit wave, write each sample as a 0-255 brightness pixel in an image file like bmp/png, and compress it. Then you can decompress it back. It will all sound extremely bad, because compression without psychoacoustic principles is blood in the ears.
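That pipeline (samples to grayscale pixels to image codec and back) can be sketched with nothing but the stdlib, laying the 8-bit wave out as a binary PGM that image tools accept (the tone, width, and layout are made up for illustration):

```python
import math

# a made-up 440 Hz tone as unsigned 8-bit samples
rate = 8000
samples = bytes(128 + int(100 * math.sin(2 * math.pi * 440 * t / rate))
                for t in range(rate))

# lay the samples out as a grayscale image: one pixel per sample
width = 100
height = len(samples) // width
pgm = b"P5\n%d %d\n255\n" % (width, height) + samples[: width * height]

# writing `pgm` to a file, running it through a lossy image codec, and
# reading the pixels back as samples reproduces the experiment above
print(len(pgm))
```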
|
|
|
AccessViolation_
<@226977230121598977> is the idea with your audio spectrogram images to turn them back into sound so you can 'hear' the ways different lossy image formats sound with how the music is distorted? or just seeing how well they compress
|
|
2025-03-19 08:31:13
|
I found it, literally that's what I did, wav to jpeg
Music passed through a classic DCT, which is why MDCT is used for music instead
original / compressed
|
|
2025-03-19 08:31:23
|
attention ears
|
|
|
AccessViolation_
|
2025-03-19 08:32:28
|
what's interesting is that a lot of the noise sounds high frequency, even though that's specifically the data JPEG removes
|
|
|
DZgas Ж
|
|
AccessViolation_
what's interesting is that a lot of the noise sounds high frequency, even though that's specifically the data JPEG removes
|
|
2025-03-19 08:33:18
|
That's the whole point: what does it delete, and what is written in place of the deleted data? That's right, quantization noise
|
|
2025-03-19 08:34:26
|
goddamn right
|
|
|
AccessViolation_
|
2025-03-19 08:34:28
|
oh, I guess high-frequency sound doesn't mean it corresponds to high-frequency components in the image, I was thinking about it wrong. in a spectrogram higher frequencies are just higher up, and that information isn't treated differently by jpeg compression of course
|
|
|
DZgas Ж
goddamn right
|
|
2025-03-19 08:36:20
|
yeah I have a visceral reaction to artifacts like these <:KekDog:805390049033191445>
|
|
|
CrushedAsian255
|
|
username
https://discord.com/channels/794206087879852103/806898911091753051/1266183715118383136
|
|
2025-03-20 06:53:48
|
(Btw swallow chewing gum was my alt for a little)
|
|
|
Dejay
|
2025-03-22 06:27:18
|
Is there a "butterohrli distance" for audio? Like a metric that can reliably determine "audibly lossless compression" when blind testing on high end equipment with the typical audiophile?
|
|
2025-03-22 06:28:10
|
I want an opusenc.exe with "d=1" option haha
|
|
2025-03-22 06:28:32
|
I guess I should ask this in the opus discord
|
|
|
HCrikki
|
2025-03-22 07:00:51
|
vbr modes are pretty much that. opus' is much better than mp3's at all bitrates since it doesn't filter out as many inaudible frequencies as mp3
|
|
|
Oleksii Matiash
|
|
Dejay
Is there a "butterohrli distance" for audio? Like a metric that can reliably determine "audibly lossless compression" when blind testing on high end equipment with the typical audiophile?
|
|
2025-03-22 07:31:09
|
IIRC <@532010383041363969> was involved in a project with such a goal. I'm also waiting for the day opus will be able to do real vbr instead of the current abr
|
|
|
A homosapien
|
|
Dejay
Is there a "butterohrli distance" for audio? Like a metric that can reliably determine "audibly lossless compression" when blind testing on high end equipment with the typical audiophile?
|
|
2025-03-22 07:34:06
|
https://github.com/google/zimtohrli
|
|
|
damian101
|
|
Dejay
I guess I should ask this in the opus discord
|
|
2025-03-22 08:28:13
|
there is an opus discord?
|
|
|
Dejay
|
|
there is an opus discord?
|
|
2025-03-22 11:55:52
|
Sorry I just assumed but I didn't find one
|
|
|
A homosapien
https://github.com/google/zimtohrli
|
|
2025-03-22 12:23:21
|
Thanks I guess that's exactly what I was looking for
|
|
|
damian101
|
2025-03-22 01:09:45
|
I've used it a lot, so feel free to ask questions
|
|
|
Quackdoc
|
2025-03-23 06:14:40
|
man we need more jxl apps for android
```
find . -type f -printf "%f %s\n" |
awk '{
PARTSCOUNT=split( $1, FILEPARTS, "." );
EXTENSION=PARTSCOUNT == 1 ? "NULL" : FILEPARTS[PARTSCOUNT];
FILETYPE_MAP[EXTENSION]+=$2
}
END {
for( FILETYPE in FILETYPE_MAP ) {
print FILETYPE_MAP[FILETYPE], FILETYPE;
}
}' | sort -n | numfmt --field=1 --to=iec-i --format "%8f"
0 NULL
0 nomedia
36 database_uuid
1.5Ki lua
31Ki heic
103Ki avif
128Ki webp
2.9Mi 1
18Mi jpeg
40Mi mov
314Mi webm
518Mi jpg
553Mi gif
1.6Gi png
4.1Gi mp4
12Gi jxl
```
|
|
2025-03-23 06:17:25
|
weird formatting issues
|
|
|
A homosapien
|
2025-03-23 06:21:52
|
More jxl than videos <:Stonks:806137886726553651>
|
|
|
jonnyawsom3
|
|
HCrikki
|
2025-03-24 03:56:05
|
cant believe even /r/DataHoarder messes up conversions from JPG. Lossless transcoding needs more awareness among the public too, and utilities are still doing lossy full re-encodes
|
|
|
RaveSteel
|
2025-03-24 05:40:12
|
https://www.reddit.com/r/DataHoarder/comments/1ji1x4w/some_recentish_informal_tests_of_avif_jpegxl_webp/
|
|
|
juliobbv
|
|
RaveSteel
https://www.reddit.com/r/DataHoarder/comments/1ji1x4w/some_recentish_informal_tests_of_avif_jpegxl_webp/
|
|
2025-03-24 07:01:15
|
there's a point where these posts' methodology should stop being called "informal" and start being called "harmful"
|
|
2025-03-24 07:02:02
|
nothing conclusive can be derived from these observations
|
|
2025-03-24 07:03:01
|
other than "higher efforts result in slower encodes"
|
|
|
spider-mario
|
2025-03-24 07:32:45
|
it seems lossless jpeg reencoding is actually included (`JXL-l1-q__-e_`)
|
|
2025-03-24 07:32:57
|
but yeah, not sure what the point of the rest is
|
|
2025-03-24 07:33:15
|
“there existed a quality setting that produced this size”
|
|
2025-03-24 07:34:03
|
> In general JPEG-XL is not that competitive in either speed or size, and the competition is between WepP and AVIF AOM.
yeah, you can’t conclude that
|
|
|
_wb_
|
2025-03-24 08:20:54
|
Benchmarking lossy compression without considering quality at all, that's like judging cars based on how fast their windscreen wipers can move back and forth.
|
|
|
spider-mario
|
2025-03-24 08:46:29
|
to be fair, there is _some_ slight consideration of quality
> Examining fine details of some sample photos at 4x I could not detect significant (or any) quality differences, except that WebP seemed a bit "softer" than the others.
|
|
2025-03-24 08:46:53
|
but they apparently didn’t really try to see whether that was also true of lower quality settings
|
|
|
_wb_
|
2025-03-24 09:15:54
|
I missed that paragraph but it doesn't really mean much at all if you only look at one quality point. WebP at default quality is going to produce smaller files than jxl at default quality because cwebp just has a lower quality default. Cwebp defaults to q75 on a scale where q75 corresponds to a lower quality than e.g. typical jpeg encoders at q75, and much lower quality than cjxl q75. So this is a bit like comparing libjpeg-turbo q60:420 to libjpeg-turbo q55:444 and to libjpeg-turbo q90:444, saying "I could not detect significant quality differences" and concluding that q90 is not that competitive in either speed or size and the competition is between q55:444 and q60:420.
Also why is someone still testing libjxl 0.7? That version is 2.5 years old. If speed is a concern, using a more recent version can make a difference...
|
|
|
juliobbv
|
2025-03-24 11:39:46
|
libaom is also super out of date in that test, and so is likely libsvtav1
|
|
2025-03-24 11:41:22
|
the more time you spend reading the post, the fewer and fewer things make sense
|
|
|
Meow
|
|
RaveSteel
https://www.reddit.com/r/DataHoarder/comments/1ji1x4w/some_recentish_informal_tests_of_avif_jpegxl_webp/
|
|
2025-03-25 01:12:02
|
When someone uses "JPEG-XL" in benchmarks, I can presume they're not serious
|
|
|
Demiurge
|
|
RaveSteel
https://www.reddit.com/r/DataHoarder/comments/1ji1x4w/some_recentish_informal_tests_of_avif_jpegxl_webp/
|
|
2025-03-25 02:41:32
|
He is comparing libaom on its fastest and second fastest setting to libjxl on its fourth fastest setting and saying libjxl is too slow? Whut?
|
|
2025-03-25 02:42:27
|
And he doesn't upload any crops of the resulting images so we can see what they look like after
|
|
|
_wb_
I missed that paragraph but it doesn't really mean much at all if you only look at one quality point. WebP at default quality is going to produce smaller files than jxl at default quality because cwebp just has a lower quality default. Cwebp defaults to q75 on a scale where q75 corresponds to a lower quality than e.g. typical jpeg encoders at q75, and much lower quality than cjxl q75. So this is a bit like comparing libjpeg-turbo q60:420 to libjpeg-turbo q55:444 and to libjpeg-turbo q90:444, saying "I could not detect significant quality differences" and concluding that q90 is not that competitive in either speed or size and the competition is between q55:444 and q60:420.
Also why is someone still testing libjxl 0.7? That version is 2.5 years old. If speed is a concern, using a more recent version can make a difference...
|
|
2025-03-25 02:43:50
|
Why old versions? Ask debian (or ubuntu or whatev)
|
|
2025-03-25 02:44:59
|
He's using ubuntu lts. Don't blame him for using old stuff, blame him for using debian and blame debian for being utterly awful
|
|
|
HCrikki
|
2025-03-25 02:49:23
|
"cant see a difference" should shown the images. the filesize number mean nothing if the output was actually of a visual quality noone wouldve used even with jpg
|
|
2025-03-25 02:52:40
|
not falling for table numbers ever again after a jxl dev showed in the past that some low filesizes were unusable trash no one uses in real scenarios. always focus on compression efficiency, preserving visual quality at all targets, instead of *discarding* even more detail to hit an arbitrary filesize target
|
|
|
Demiurge
|
2025-03-25 02:55:24
|
When uploading them they will get destroyed by reddit probably. He should have shown some cropped PNG of the result so we can see what it actually looks like.
|
|
2025-03-25 02:56:41
|
Sounds like they were really large photographic images. Usually those are very blurry to start with.
|
|
2025-03-25 02:57:41
|
To be fair libjxl is not optimized well for those types of images. Even worse at low effort mode.
|
|
2025-03-25 03:07:49
|
I don't think libjxl is clever enough to look at the spectral energy and recognize when an image is just really blurry and grainy like an oversized blurry photo.
|
|
|
Quackdoc
|
|
Demiurge
He is comparing libaom on its fastest and second fastest setting to libjxl on its fourth fastest setting and saying libjxl is too slow? Whut?
|
|
2025-03-25 03:10:08
|
time for a comment lol
|
|
|
Demiurge
|
2025-03-25 03:10:39
|
If it was smarter it would remove & replace the grain, recognize the photo is blurry as hell and compress at a much lower bitrate compared to a photo that's packed with small details.
|
|
|
Quackdoc
time for a comment lol
|
|
2025-03-25 03:11:19
|
I never made an account on le reddit, the idea is too gross for me
|
|
|
CrushedAsian255
|
|
Demiurge
If it was smarter it would remove & replace the grain, recognize the photo is blurry as hell and compress at a much lower bitrate compared to a photo that's packed with small details.
|
|
2025-03-25 03:11:50
|
Like auto applying 2x resampling?
|
|
|
Demiurge
|
|
CrushedAsian255
Like auto applying 2x resampling?
|
|
2025-03-25 03:12:29
|
Possibly. That seems kinda risky but yeah. Or do something equivalent that saves the same amount of bits.
|
|
|
CrushedAsian255
|
2025-03-25 03:13:22
|
Ig just setting HF coeffs to 0
|
|
|
Demiurge
|
2025-03-25 03:13:29
|
jxl already compresses smooth images pretty well
|
|
2025-03-25 03:14:07
|
But it's not smart enough to remove and replace grain/noise
|
|
2025-03-25 03:17:13
|
Not even enough to measure how much grain was naturally lost as a side effect after encoding, so it can signal to replace it
|
|
|
jonnyawsom3
|
2025-03-25 03:18:05
|
There *is* `--noise` which was meant to learn the noise inside the image and reconstruct it, but it either broke over the years or never worked at all
|
|
|
Quackdoc
|
|
Demiurge
I never made an account on le reddit, the idea is too gross for me
|
|
2025-03-25 03:18:18
|
understandable lol
|
|
|
Demiurge
|
2025-03-25 03:18:53
|
At the bare minimum the encoder should be able to signal how much noise was lost after encoding compared to the original so it can be replaced by the decoder
|
|
2025-03-25 03:20:31
|
Since a lot of different observers have remarked on how libjxl compression noticeably removes grain enough for people to point it out in videos on youtube for example, and it might give people a bad first impression of the format itself
|
|
2025-03-25 03:20:50
|
People don't realize that deficiencies of libjxl are not inherent to the format itself
|
|
2025-03-25 03:24:57
|
That would probably be the simplest and easiest way to implement an "automatic grain synthesis" feature. Simply measure how much grain was lost compared to the original AFTER encoding so there's not even a need to change the encoding process itself at all.
|
|
2025-03-25 03:26:05
|
You could even apply it to existing files
|
|
|
jonnyawsom3
|
|
Demiurge
At the bare minimum the encoder should be able to signal how much noise was lost after encoding compared to the original so it can be replaced by the decoder
|
|
2025-03-25 03:26:47
|
IIRC when using photon noise, it takes it into account to lower quality in covered regions
|
|
|
Demiurge
|
2025-03-25 03:27:37
|
By comparing a lossy file with the original, you can measure how much noise was removed and then just adjust the metadata to add the missing noise to the existing lossy file without reencoding
|
|
2025-03-25 03:28:37
|
It can "fix" existing lossy files by adding back the removed noise
|
|
|
IIRC when using photon noise, it takes it into account to lower quality in covered regions
|
|
2025-03-25 03:30:31
|
If so, that's smart. You can know for sure though if the file size goes down when telling the encoder to increase the photon noise
|
|
|
AccessViolation_
|
2025-03-25 08:54:03
|
I was reading the JXL paper, got to the mention of the Move-To-Front Transform and implemented it myself out of curiosity. I found that it can perform quite a bit better if you create a more optimal state array trained on your type of data to begin with. I tried two variants: one where the bytes are sorted by their frequency in the data, and one where the state is the state after applying the transform on the reversed data. With the first approach, the more common a byte is, the closer its first representation will be to 0. The second approach is nice because the first few bytes will be close to zero (and the first is always zero), which is good because usually the first handful of bytes are the ones that are encoded as themselves since they're only changed to something better *after* they've been seen once.
The downside is of course that if you want to tailor it to the specific data, you need to signal it, and naturally this would mean that instead of `data: [221, 35, 62, 45]` you transmit `state: [221, 35, 62, 45]; data: [0, 1, 2, 3]`, so you don't gain much if anything. but if the encoder and decoder agree on a state beforehand, you don't need to signal anything and you can get values you expect closer to the start of the array so they are encoded as indices closer to 0. But if the data you're encoding starts around 0 and stays close to it, you still don't gain much. This appears to be the case for JXL if this example is representative of the real data:
> For example, consider the context map corresponding to
> the following list of 48 indices mapping 48 pre-clustering
> contexts to 16 post-clustering contexts: 0, 1, 2, 2, 2, 2, 3,
> 4, 5, 6, 7, 7, 7, 7, 8, 9, 10, 11, 12, 13, 14, 15, 15, 15, 15,
> 0, 0, 15, 0, 1, 0, 0, 0, 0, 14, 15, 15, 15, 15, 14, 13, 12,
> 11, 10, 15, 13, 9, 8.
Not some incredible revelation but a fun detour nonetheless
|
|
2025-03-25 09:07:03
|
So this didn't change the algorithm, just the initial state. I wonder if you could improve the algorithm by not just moving every seen value to the front, but moving it to a position depending on its frequency as well. Probably still move it to the front regardless so you can encode repeating values as a string of 0, but instead of shifting back the array's values to account for that, place the old value at index 0 at a position that reflects how likely you think you are to see it again
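The experiment above can be sketched in a few lines (hypothetical helper names; `identity` is the state a plain MTF starts from, `by_freq` is the frequency-sorted variant described above):

```python
def mtf_encode(data, state):
    state = list(state)
    out = []
    for b in data:
        i = state.index(b)
        out.append(i)
        # move the just-seen symbol to the front
        state.pop(i)
        state.insert(0, b)
    return out

def mtf_decode(codes, state):
    state = list(state)
    out = []
    for i in codes:
        b = state.pop(i)
        out.append(b)
        state.insert(0, b)
    return out

data = [2, 2, 2, 3, 3, 2, 0, 1]
identity = list(range(256))
# state sorted by frequency: common symbols start near index 0
by_freq = sorted(set(data), key=lambda b: -data.count(b))
by_freq += [b for b in identity if b not in by_freq]

print(mtf_encode(data, identity))  # [2, 0, 0, 3, 0, 1, 2, 3]
print(mtf_encode(data, by_freq))   # [0, 0, 0, 1, 0, 1, 2, 3]
print(mtf_decode(mtf_encode(data, by_freq), by_freq) == data)  # True
```

The tuned state buys smaller indices on the first occurrences, which is exactly the gain that disappears once signaling the state costs as much as it saves.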
|
|
|
pekaro
|
2025-03-26 10:54:56
|
Hi! Given that JPEG XL saves histograms to the header I'm trying to figure out how the context modeling works. I assume that we have one MANIAC tree for each channel. Then the amount of leaf nodes would be the total amount of different contexts. Now does JPEG XL store a data histogram for each leaf node ctx?
|
|
|
CrushedAsian255
|
2025-03-26 12:41:40
|
From what I understand, each leaf node can define which context it uses, with each context having a histogram and ANS state and so on. This is so multiple leaf nodes can share the same context if they are similar
|
|
|
_wb_
|
2025-03-26 06:21:24
|
You just have one MA tree, it can branch per channel but doesn't have to. Leaf nodes have a predictor choice and a context (histogram); contexts can be shared between leaves, the number of different contexts is actually limited to 255 iirc.
|
|
2025-03-26 06:23:20
|
See also: https://discord.com/channels/794206087879852103/1347775789139230780/1347834222047662184
|
|
|
jonnyawsom3
|
2025-03-27 12:53:17
|
Microsoft are using a GIF for their end of support popup...
|
|
|
username
|
|
Microsoft are using a GIF for their end of support popup...
|
|
2025-03-27 12:55:23
|
remember this is the same company that uses ezgif for shipping products
|
|
|
A homosapien
|
2025-03-27 02:13:18
|
I swear to God we will never let go of GIF...
|
|
|
Demiurge
|
2025-03-27 02:22:56
|
Well, yeah. No one ever tried to make a better GIF
|
|
2025-03-27 02:23:10
|
Aside from Jyrki
|
|
2025-03-27 02:23:27
|
His lossless webp encoder is better.
|
|
|
Meow
|
2025-03-27 02:25:19
|
Any other image format's animation can be better than animated GIF
|
|
2025-03-27 02:26:24
|
and all major browsers have been supporting APNG and animated WebP for years
|
|
|
Demiurge
|
2025-03-27 02:32:06
|
But no one tried porting gifski or whatever to jxl
|
|
2025-03-27 02:32:14
|
Or even apng
|
|
|
Meow
|
2025-03-27 05:28:39
|
The author of Gifski, who's also the creator of ImageOptim, hates APNG as well as WebP
|
|
|
Demiurge
|
2025-03-27 06:29:54
|
Whut? Why hate apng? It's just png.
|
|
2025-03-27 06:30:10
|
Never heard of people "hating" png before
|
|
2025-03-27 06:33:50
|
It's made by the same guy who made pngquant... so how could he like 8 bit png but prefer gif?
|
|
|
_wb_
|
|
Meow
The author of Gifski, who's also the creator of ImageOptim, hates APNG as well as WebP
|
|
2025-03-27 07:17:13
|
We can ask <@826537092669767691> directly too
|
|
|
Meow
|
|
Demiurge
Whut? Why hate apng? It's just png.
|
|
2025-03-27 07:20:17
|
> I don't plan to. I think Animated PNG is obsolete, and an evolutionary dead-end. I don't think anybody should be using it.
https://github.com/ImageOptim/ImageOptim/issues/398
Oh he just said he would revisit the idea of optimising WebP
|
|
|
Demiurge
|
2025-03-27 07:21:20
|
He thinks PNG is a dead end but that doesn't stop him from optimizing GIF...
|
|
2025-03-27 07:21:31
|
That's illogical.
|
|
|
Meow
|
|
Demiurge
He thinks PNG is a dead end but that doesn't stop him from optimizing GIF...
|
|
2025-03-27 07:22:04
|
> In 24-bit mode it has worse compression than GIF. In 8-bit mode it has worse quality than GIF.
|
|
|
Demiurge
|
2025-03-27 07:22:04
|
Why is he optimizing gif then??
|
|
2025-03-27 07:22:30
|
Huh? Worse quality? But they're lossless?
|
|
2025-03-27 07:23:42
|
I don't understand what he means
|
|
|
Meow
|
2025-03-27 07:24:00
|
If you attempt to optimise APNG with ImageOptim, the tool will remove all frames except the first one
|
|
2025-03-27 07:24:56
|
No notice or warning about that
|
|
|
Demiurge
|
2025-03-27 07:24:56
|
How is 8 bit gif better than 8 bit png...
|
|
|
Meow
|
2025-03-27 07:25:51
|
He meant 8-bit animated GIF to 8-bit APNG
|
|
|
Demiurge
|
2025-03-27 07:26:38
|
Why would it have worse quality if they are lossless?
|
|
2025-03-27 07:28:30
|
I mean I find it funny that seemingly everyone who attempted to write a PNG codec has concluded that PNG is utterly and irredeemably awful as a format. But why is GIF better? And can't they be converted to lossless webp or jxl?
|
|
2025-03-27 07:45:39
|
Gif is the dead end
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
Meow
No notice or warning about that
|
|
2025-03-27 08:16:08
|
exactly the same if you use `ect` without `--reuse`
|
|
|
jonnyawsom3
|
2025-03-27 08:26:44
|
ect works fine on APNG for me
|
|
2025-03-27 08:26:59
|
Correction, it works fine on already-optimized files
|
|
2025-03-27 08:33:29
|
Seems like it skips APNG chunks entirely, but removes Alpha when the first frame is solid, causing decode errors on the other frames
|
|
|
TheBigBadBoy - 𝙸𝚛
|
|
ect works fine on APNG for me
|
|
2025-03-27 09:11:20
|
if it does not change the filter yeah
|
|
2025-03-27 09:14:22
|
but the author does not want to add support for APNG
https://github.com/fhanau/Efficient-Compression-Tool/issues/130
|
|
|
monad
|
|
Demiurge
Huh? Worse quality? But they're lossless?
|
|
2025-03-27 11:34:12
|
surely quality for size target
|
|
|
Demiurge
|
2025-03-28 12:18:53
|
Does that mean there's something inherently more efficient with the GIF encoding than what's possible in PNG?
|
|
2025-03-28 12:19:30
|
I thought GIF uses an older and less efficient compression algorithm.
|
|
|
A homosapien
|
2025-03-28 12:34:37
|
GIF is less efficient than PNG, and lossy GIF means more than just a reduction in color: there is lossy LZW compression, and in Gifski's case temporal dithering via per-frame palettes.
|
|
2025-03-28 12:40:28
|
> In 24-bit mode it has worse compression than GIF. In 8-bit mode it has worse quality than GIF.
I think the reason Kornel doesn't like APNG is that it can't do temporal dithering, since it can't have per-frame palettes. That forces APNG to use 24-bit color, making it a tiny bit bigger than GIF.
Even so, APNG doesn't need temporal dithering to fake more colors because it can actually display all of the colors you need. So regardless it's still better than GIF. Not to mention APNG can use the Average filter in a lossy manner, which blows GIF out of the water.
|
|
2025-03-28 12:46:26
|
In the end, I think Kornel just likes the challenge of compressing GIFs as much as possible, despite the fact that it is the most "dead-end" image format to exist in todays internet landscape.
|
|
|
Demiurge
|
2025-03-28 04:51:51
|
No per frame palettes? So gif is actually more flexible here? Well how common is that in gif anyways, pretty uncommon I thought. I heard you can even have full color, still image gif by dividing the image into 16x16 squares?
|
|
|
jonnyawsom3
|
|
A homosapien
> In 24-bit mode it has worse compression than GIF. In 8-bit mode it has worse quality than GIF.
I think the reason Kornel doesn't like APNG is that it can't do temporal dithering, since it can't have per-frame palettes. That forces APNG to use 24-bit color, making it a tiny bit bigger than GIF.
Even so, APNG doesn't need temporal dithering to fake more colors because it can actually display all of the colors you need. So regardless it's still better than GIF. Not to mention APNG can use the Average filter in a lossy manner, which blows GIF out of the water.
|
|
2025-03-28 05:30:31
|
Do you know of an encoder/optimiser that uses the lossy average filter?
|
|
|
A homosapien
|
2025-03-28 05:30:42
|
pingo
|
|
|
jonnyawsom3
|
2025-03-28 05:31:50
|
Ah, didn't know it does APNG
|
|
2025-03-28 05:32:21
|
I just remember telling an artist that it was lossy... And now they use quality 100 JPEG for some reason
|
|
|
CrushedAsian255
|
|
I just remember telling an artist that it was lossy... And now they use quality 100 JPEG for some reason
|
|
2025-03-28 06:15:17
|
tell them to use JXL 😄
|
|
|
jonnyawsom3
|
|
CrushedAsian255
tell them to use JXL 😄
|
|
2025-03-28 06:16:26
|
I got them to use Lossless WebP in the end, since it was the newest format supported by their platforms. For the ones that don't, q100 JPEG
|
|
2025-03-28 06:18:44
|
|
|
|
A homosapien
|
|
Demiurge
No per frame palettes? So gif is actually more flexible here? Well how common is that in gif anyways, pretty uncommon I thought. I heard you can even have full color, still image gif by dividing the image into 16x16 squares?
|
|
2025-03-28 06:24:30
|
GIF is technically more flexible only in the realm of 8-bit color due to per-frame palettes. And you are right, most GIFs in the wild don't use this feature; afaik Gifski is the only encoder that uses it to its fullest potential. Full-color GIFs are possible using this technique, but it's so horribly inefficient you would be better off using literally anything else.
|
|
|
Quackdoc
|
2025-03-29 12:11:37
|
any idea how I should go about hashing an encoded jxl and comparing it to a gif? magick can give wildly different results since it seems to do it per frame, without taking previous frames into account
|
|
2025-03-29 12:13:15
|
doing `ffmpeg -i $FILE -c:v rawvideo -f hash -` gives different results too, but I'm not sure if this is an ffmpeg limitation or if the encode is bad
|
|
2025-03-29 12:14:16
|
needs to be scriptable
https://cdn.discordapp.com/emojis/1113499891314991275?size=64
|
|
|
monad
|
2025-03-29 12:17:58
|
only way I know would be converting to an intermediary format with `-coalesce`, not exactly straightforward
|
|
|
A homosapien
|
|
Quackdoc
doing `ffmpeg -i $FILE -c:v rawvideo -f hash -` gives different results too but im not sure if this is an ffmpeg limitation or if the encode is bad
|
|
2025-03-29 12:19:44
|
try this `ffmpeg -i input.gif -pix_fmt pal8 -c:v rawvideo -f hash -`
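The underlying idea — decode both files to raw frames and hash the frames in order — can be sketched independently of ffmpeg (plain Python with stand-in byte strings instead of real decoded frames):

```python
import hashlib

# Sketch of the comparison being attempted here: compute one digest over
# all coalesced raw frames of a decode, so a gif pipeline and a jxl
# pipeline can be compared end to end. The frames below are stand-in
# byte strings; in practice they'd be raw RGBA planes from a decoder.

def animation_digest(frames: list) -> str:
    """One SHA-256 over the concatenated frames, so order matters."""
    h = hashlib.sha256()
    for frame in frames:
        h.update(frame)
    return h.hexdigest()

frames_a = [b"\x00" * 16, b"\xff" * 16]
frames_b = [b"\xff" * 16, b"\x00" * 16]  # same data, swapped order

assert animation_digest(frames_a) == animation_digest(list(frames_a))
assert animation_digest(frames_a) != animation_digest(frames_b)
```

Note this digests the concatenation, so mismatched frame boundaries would not be caught; hashing each frame separately and comparing the lists is stricter.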
|
|
2025-03-29 12:19:53
|
I got the same hash with gif and jxl
|
|
2025-03-29 12:35:54
|
hmm, doesn't handle gifs with alpha well
|
|
|
Quackdoc
|
2025-03-29 12:46:21
|
yeah I dunno, maybe manually converting it to an rgba8
|
|
2025-03-29 12:46:43
|
its kinda annoying that magick fails tho
|
|
|
RaveSteel
|
2025-03-29 12:48:05
|
ssimulacra2 sadly only does the first frame IIRC
|
|
|
Quackdoc
|
2025-03-29 01:02:02
|
I guess I could try ssimu2bin but I think only in video mode does it do multiple frames
|
|
2025-03-29 01:02:15
|
and that requires VS which I am NOT installing on a phone lol
|
|
|
_wb_
|
2025-03-29 08:53:41
|
Shouldn't be too hard to let ssimulacra2 handle multi-frame somewhat more correctly, feel free to open a GitHub issue about it to remind me
|
|
|
Kornel
|
|
A homosapien
> In 24-bit mode it has worse compression than GIF. In 8-bit mode it has worse quality than GIF.
I think the reason Kornel doesn't like APNG is that it can't do temporal dithering, since it can't have per-frame palettes. That forces APNG to use 24-bit color, making it a tiny bit bigger than GIF.
Even so, APNG doesn't need temporal dithering to fake more colors because it can actually display all of the colors you need. So regardless it's still better than GIF. Not to mention APNG can use the Average filter in a lossy manner, which blows GIF out of the water.
|
|
2025-03-29 11:57:27
|
"Tiny bit bigger" 24-bit PNG is 2-3x larger compared to 8-bit.
Lossy averaging or paeth can help, but then you lose quality too, still end up with file size in the same ballpark as GIF, but don't get the benefit of universal compatibility.
I work on fixing GIF because it's used so widely, and I don't like seeing badly dithered GIFs.
GIF is used so widely not because of its technical merits, but because it works everywhere.
APNG has most of the downsides of GIF, without the only feature that makes it worth tolerating the downsides.
I don't work on APNG, because it's a complete dead-end. AWebP has got the same or better level of support now. It does everything that APNG can, but better. The lobotomized WebM leftover in AWebP is a disappointment from perspective of a codec, but still beats all the lossy averaging tricks by far. You can still mix it with WebPL and any number of colors you want. I haven't created gifski like tool for AWebP mostly because it doesn't need an intervention.
|
|
|
A homosapien
|
|
Kornel
"Tiny bit bigger" 24-bit PNG is 2-3x larger compared to 8-bit.
Lossy averaging or paeth can help, but then you lose quality too, still end up with file size in the same ballpark as GIF, but don't get the benefit of universal compatibility.
I work on fixing GIF because it's used so widely, and I don't like seeing badly dithered GIFs.
GIF is used so widely not because of its technical merits, but because it works everywhere.
APNG has most of the downsides of GIF, without the only feature that makes it worth tolerating the downsides.
I don't work on APNG, because it's a complete dead-end. AWebP has got the same or better level of support now. It does everything that APNG can, but better. The lobotomized WebM leftover in AWebP is a disappointment from perspective of a codec, but still beats all the lossy averaging tricks by far. You can still mix it with WebPL and any number of colors you want. I haven't created gifski like tool for AWebP mostly because it doesn't need an intervention.
|
|
2025-03-30 12:20:18
|
Sorry, I should have been more specific: I was talking about GIFs with per-frame palettes being converted to 24-bit APNGs, which are only like 10-20% bigger.
From a video source, of course, 24-bit is going to be much bigger than 8-bit. But then you lose quality with GIFs as well, so I don't understand that argument. I doubt lossy filters are being used to their fullest potential; nevertheless I agree with it being a dead end since animated WebP/AVIF/JXL exist.
I also agree that everybody should be using animated WebPs. Very few people are aware of its existence. Ever since Discord added animated WebP support, I stopped using Gifski entirely; it's a really cool piece of software though. <:PepeOK:805388754545934396>
|
|
|
jonnyawsom3
|
2025-03-30 12:21:41
|
I'm slowly getting people to use WebP now that Discord supports it. Namely when I notice my client visibly lag, only to open the network inspector and see a 15 MB GIF downloading
|
|
|
A homosapien
|
2025-03-30 12:40:08
|
Still using GIF in 2025 <:PepeSad:815718285877444619>
|
|
2025-03-30 12:40:24
|
We got to educate people that other animated formats exist
|
|
|
VEG
|
|
Kornel
"Tiny bit bigger" 24-bit PNG is 2-3x larger compared to 8-bit.
Lossy averaging or paeth can help, but then you lose quality too, still end up with file size in the same ballpark as GIF, but don't get the benefit of universal compatibility.
I work on fixing GIF because it's used so widely, and I don't like seeing badly dithered GIFs.
GIF is used so widely not because of its technical merits, but because it works everywhere.
APNG has most of the downsides of GIF, without the only feature that makes it worth tolerating the downsides.
I don't work on APNG, because it's a complete dead-end. AWebP has got the same or better level of support now. It does everything that APNG can, but better. The lobotomized WebM leftover in AWebP is a disappointment from perspective of a codec, but still beats all the lossy averaging tricks by far. You can still mix it with WebPL and any number of colors you want. I haven't created gifski like tool for AWebP mostly because it doesn't need an intervention.
|
|
2025-03-30 10:20:36
|
A lot of software also show just the first frame of GIF images, animated GIFs are not that universally supported. For example. neither Windows Photo Viewer nor Google Picasa on my machine support it, I have to use a browser to watch them animated.
|
|
2025-03-30 10:20:38
|
A lot of websites (e.g. old-school forums) where you can upload/embed images still don't accept WebP, they accept the classic trio of JPG, PNG, and GIF. APNG is the best option for animations in this case. In most of cases, it just works (if the website does not try to recompress all the images; but if it's the case, gifs usually also lose animation).
|
|
2025-03-30 10:30:10
|
Some popular websites also use APNG already, since it's supported by all modern browsers and became part of the PNG spec. For example, Steam: https://store.steampowered.com/points/shop/ - all of those animations are APNG and could in theory benefit from improved optimization 🙂
|
|
|
jonnyawsom3
|
2025-03-30 10:46:11
|
Oxipng and ApngOptim already do most of the work though
|
|
|
_wb_
|
2025-03-30 12:18:12
|
Browsers should just support any video format they support also in an img tag, with implicit muting. It is silly to allow images to move but not to allow using video codecs in that case.
|
|
2025-03-30 12:20:32
|
(in Safari you can put MP4 in an img tag and it works, but not in Chrome;
in Chrome you can put an arbitrary AV1 in an AVIF and it works, but not if you put the AV1 in an actual video file format which is the more convenient thing to do in terms of tooling and broader support)
|
|
|
Fab
|
2025-03-30 12:24:07
|
Have you read the Luca argentero article? Is related to appeal vs fidelity and also with mdpi assessment imaging probably even with svt vp9 should make less blur at 480 kbps
|
|
2025-03-30 12:25:04
|
And improve perceived noise aggregated sharpness in JPEG xl making the codec look better for autistic like when it was 0.2.3
|
|
2025-03-30 12:25:33
|
While change lossless like
|
|
2025-03-30 12:26:13
|
I readed 10 times to see if my Finzi someonez also did agree on
|
|
2025-03-30 12:38:46
|
Which things could help
|
|
2025-03-30 12:39:27
|
This is a video i sent many times to their server
|
|
2025-03-30 12:39:54
|
It has unoroved quite well for 5000 views
|
|
|
jonnyawsom3
|
2025-03-30 12:45:35
|
Oh, Fab is back
|
|
|
Fab
|
2025-03-30 01:15:03
|
I'm doing like a bot compression isn't my job
|
|
2025-03-30 01:25:42
|
|
|
2025-03-30 01:25:42
|
Images that got worse
|
|
2025-03-30 01:25:59
|
|
|
2025-03-30 01:26:03
|
Images that got better
|
|
2025-03-30 01:26:37
|
Not sure of what I did i i did a full reset
|
|
|
Oh, Fab is back
|
|
2025-03-30 02:21:39
|
I got suggested after working a bit a developer had to denonstrate that b. Has to stay not moving in his product VV ideogsmes
|
|
2025-03-30 02:22:43
|
I used the cookies of that site probably Jon Sneyers is tired of paying the cdn so i'll stop because i have no innovation but i have time
|
|
2025-03-30 03:07:12
|
I resetted the cookie for three video on youtube and deleted a lot of cookies. Seem all too disappear. Video doesnt look too mellow as it looked one month ago. Quality still has to be measured by experts but hw hackernews rss has contributed to it, and i thabk every comment in reddit, even the critiwue one i got today about AV2 and cnn not rredh and out
|
|
|
_wb_
|
2025-03-30 04:14:07
|
Fab, your prose is incomprehensible to us mere mortals. Your previous account got banned because it all became too spammy. Please make an effort at making sense, or just remain quiet.
|
|
|
CrushedAsian255
|
|
_wb_
Fab, your prose is incomprehensible to us mere mortals. Your previous account got banned because it all became too spammy. Please make an effort at making sense, or just remain quiet.
|
|
2025-03-30 04:25:58
|
DeepSeek seems to be able to understand what they are saying. I think they probably don't have English as their first language and are having trouble typing their thoughts.
Their first block of messages (potentially) mean:
Have you read Luca Argentero's article? It discusses the balance between visual appeal vs. fidelity, MDPI imaging assessments, and how SVT-AV1/VP9 might produce less blur at 480 kbps. This could help improve perceived noise and sharpness in JPEG XL, making the codec perform better for detail-sensitive use cases (like in version 0.2.3). I’ve also been looking into lossless compression tweaks.
I’ve reviewed the material 10 times to see if my findings align with others’ work (maybe Finzi et al.?). What improvements do you think would help?
I’ve shared this video multiple times on the server—it’s gained 5,000 views and seems to demonstrate progress.
|
|
|
Quackdoc
|
2025-03-30 04:58:47
|
we need a deepseek bot with a /translatefab command
|
|
|
monad
|
2025-03-30 05:04:03
|
not very faithful
|
|
|
Fab
|
|
Oxipng and ApngOptim already do most of the work though
|
|
2025-03-30 05:28:36
|
Glad you're working on this. Is a regression that carries since 0.2.3
|
|
2025-03-30 05:29:55
|
|
|
2025-03-30 05:30:23
|
I don't know if is related to display because i'm not rich
|
|
2025-03-30 08:07:32
|
Jon i restored facebook. But for the damage i caused to quality vs appeal page i do not what to do
|
|
|
w
|
2025-03-31 10:23:03
|
the legend
|
|
|
Demiurge
|
2025-04-01 12:41:08
|
|
|
2025-04-01 12:41:09
|
Is the Squeeze/Haar transform technically a form of DWT?
|
|
2025-04-01 12:41:39
|
No one on reddit seems to know that jxl actually does have a wavelet transform
|
|
2025-04-01 12:41:51
|
Idk how similar it is to the one in j2k though
|
|
2025-04-01 12:41:58
|
Probably very similar
|
|
2025-04-01 12:42:46
|
J2k is actually a great format, idk why there's all the hate.
|
|
2025-04-01 12:43:14
|
I think just the entropy coder was slow and bad
|
|
2025-04-01 12:43:37
|
And it has a dumb name. Anything "2000" is terribly named
|
|
2025-04-01 12:44:13
|
And I don’t think there was a good open source implementation
|
|
|
_wb_
|
2025-04-01 02:40:22
|
Haar is technically a wavelet but it's the simplest kind of wavelet and probably the hardcore wavelet folks will not consider it a 'real' wavelet.
Squeeze is a modified Haar transform with the addition of a tendency term that mitigates the blocking you'd get with just Haar, with a nonlinearity that mitigates the ringing you'd otherwise get.
With these modifications it's something that is quite different from the usual DWT.
Though of course the general idea is the same as in all frequency transforms (DCT, DWT, DFT, Walsh-Hadamard, whatever): you get coefficients/residuals corresponding to different frequencies of the signal, so you can quantize the high freq stuff more aggressively than the low freq stuff.
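A minimal reversible Haar-style squeeze step looks like this (Python sketch of the plain Haar core only; the tendency term and nonlinearity that Squeeze adds on top are deliberately omitted):

```python
# One level of a 1D integer Haar-style transform: split pairs into
# floor-averages (low frequency) and differences (residuals). Integer
# arithmetic with an arithmetic shift keeps it exactly reversible.

def squeeze_step(samples: list) -> tuple:
    """Split an even-length list into (averages, residuals)."""
    avgs, residuals = [], []
    for a, b in zip(samples[::2], samples[1::2]):
        diff = a - b
        avg = b + (diff >> 1)      # floor average, losslessly invertible
        avgs.append(avg)
        residuals.append(diff)
    return avgs, residuals

def unsqueeze_step(avgs, residuals):
    """Exact inverse of squeeze_step."""
    out = []
    for avg, diff in zip(avgs, residuals):
        b = avg - (diff >> 1)
        out += [b + diff, b]       # [a, b] reconstructed exactly
    return out

data = [10, 12, 200, 180, 7, 7, 0, 255]
avgs, res = squeeze_step(data)
assert unsqueeze_step(avgs, res) == data   # lossless round trip
```

Recursing on the averages gives the multi-level pyramid; quantizing the residuals is where the lossy part (and the blocking/ringing the tendency term mitigates) would come in.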
|
|
|
Demiurge
|
2025-04-01 05:30:11
|
Yeah that sounds awesome and I wonder why macroblocking transforms are even used at all then when that's available
|
|
2025-04-01 05:49:22
|
Does the same thing but without the weird 8x8 pattern macroblocks? What's the downside exactly?
|
|
|
LMP88959
|
|
Demiurge
Yeah that sounds awesome and I wonder why macroblocking transforms are even used at all then when that's available
|
|
2025-04-01 10:23:41
|
hardware implementations of discrete blocks are cheaper afaik
wavelets and DCT exhibit different visual degradation when bit starved
|
|
2025-04-01 10:24:14
|
Haar is a 2x2 DCT essentially
|
|
2025-04-01 10:55:26
|
<@794205442175402004> is the squeeze transform patented?
|
|
|
_wb_
|
2025-04-01 11:21:09
|
It probably is covered by one of the defensive patents we made with Cloudinary. Royalty-free, with the defensive clause that revokes the license to those who want to patent troll against jxl.
|
|
|
LMP88959
|
2025-04-01 11:24:14
|
ok, thank you
|
|
|
Demiurge
|
|
LMP88959
hardware implementations of discrete blocks are cheaper afaik
wavelets and DCT exhibit different visual degradation when bit starved
|
|
2025-04-02 01:07:46
|
Well, it wouldn't have the trademark DCT block boundary artifacts we all know and love. But it would still have noise and ringing.
|
|
2025-04-02 01:08:07
|
Wavelet based codecs are known to scale much better at lower bitrates.
|
|
2025-04-02 01:08:33
|
They both could have mosquito noise and ringing.
|
|
2025-04-02 01:08:41
|
I just don't see what the downside is.
|
|
2025-04-02 01:09:16
|
If they're so conceptually similar what's the disadvantage exactly...?
|
|
2025-04-02 01:09:56
|
Is DCT just significantly faster? Is that the whole reason?
|
|
|
LMP88959
|
2025-04-02 01:10:59
|
<@1028567873007927297> i put a comment in that reddit thread you linked
|
|
|
Demiurge
|
2025-04-02 01:12:43
|
Nice. I guess Squeeze is not very well known about. I can't find any papers on it, or the patent
|
|
|
LMP88959
|
2025-04-02 01:14:06
|
haar wavelet smoothing has existed since at least the early 90s
|
|
2025-04-02 01:14:17
|
the squeeze transform is a take on the same concept
|
|
|
pekaro
|
2025-04-02 10:44:54
|
regarding Squeeze, when you are trying to get local properties for a wavelet tree of level X with the residuals, do you traverse the tree up and down to locate the adjacent already-coded sample values? From the paper I see that for each coded C component you need to locate 7 values
|
|
2025-04-02 10:51:46
|
+ when encoding the wavelet residuals, do you calculate MA properties based on those residuals, or do you, at each level of residuals, get the current downscaled version and then obtain the properties of the residuals based on their corresponding real image values
|
|
2025-04-02 10:52:00
|
I hope this question makes sense
|
|
|
_wb_
|
2025-04-02 12:33:57
|
the MA properties are based on the residuals, not on the unsqueezed image
|
|
2025-04-02 12:34:41
|
you cannot unsqueeze while decoding anyway, there are data dependencies on not-yet-decoded residuals
|
|
|
pekaro
|
2025-04-02 12:53:27
|
ok got it, so for each residual channel we scan and get the properties of the residuals themselves. The unclear part for me was my mental bias from LOCO-I, where contexts are derived from the actual pixel minus predicted values
|
|
|
Demiurge
|
2025-04-02 11:21:37
|
Is there a list somewhere of some of the known defensive patents on jxl?
|
|
2025-04-02 11:21:50
|
I would like to read more about squeeze...
|
|
|
jonnyawsom3
|
|
Demiurge
I would like to read more about squeeze...
|
|
2025-04-03 12:07:36
|
https://discord.com/channels/794206087879852103/1021189485960114198/1256224842894540831 H.6.2
<https://cloudinary.com/blog/jpeg-xls-modular-mode-explained#squeeze>
<https://drive.google.com/file/d/19pjtxUgChAOmy5mi4sQBite4xgGa4iaA/view> Page 30 - 5.1.3
|
|
|
_wb_
|
2025-04-03 07:09:54
|
This is the main one from Cloudinary, I don't recommend reading it though since it is basically a description I wrote that then went through a patent lawyer to turn it into unreadable legalese: https://patents.justia.com/patent/11064219
|
|
2025-04-03 07:13:47
|
I dunno which ones of Google are applicable. Doesn't really matter that much, it's basically just a deterrent against patent trolls: if patent troll X claims people have to pay them to use jxl and litigates, then Cloudinary and Google can say X can no longer use jxl themselves and litigate back against X. That's a hypothetical scenario that is not supposed to happen, it's just a precaution/deterrent to help maintain the royalty-free status of jxl.
|
|
2025-04-03 07:14:36
|
(here X is a variable, not twitter)
|
|
|
CrushedAsian255
|
|
_wb_
This is the main one from Cloudinary, I don't recommend reading it though since it is basically a description I wrote that then went through a patent lawyer to turn it into unreadable legalese: https://patents.justia.com/patent/11064219
|
|
2025-04-03 09:27:54
|
JPEG XL but L stand for Legal
|
|
|
Meow
|
2025-04-03 12:21:06
|
L stands for 50
|
|
|
spider-mario
|
2025-04-03 12:48:48
|
L stands for Luca
|
|
|
Demiurge
|
2025-04-03 11:53:11
|
X stands for crunchy Xen crystals
|
|
|
juliobbv
|
2025-04-04 12:15:53
|
JPEG claimed L as a numeral so the superbowl had to use 50
|
|
|
Demiurge
|
2025-04-04 03:01:04
|
So Jon was the main brain behind Squeeze then it seems like. But it's just a specific and simple implementation of a Haar wavelet it looks like.
|
|
|
intelfx
|
2025-04-06 03:48:29
|
I think this is the new record ~~for any program that I have ever run~~
|
|
|
Demiurge
|
2025-04-06 09:02:12
|
Perl, gnu parallel, zsh, all calling each other?
|
|
|
spider-mario
|
2025-04-06 10:21:57
|
parallel is written in perl
|
|
2025-04-06 10:22:25
|
and I guess it may spawn shell instances to pass them command lines
|
|
|
DZgas Ж
|
2025-04-06 12:31:26
|
An example of tile compression of an image, based on a full analysis of the entire image for the correspondence and uniqueness of each block.
The method demonstrates not so much the advantages of this approach as something more fundamental: the reason why compression using neural networks can theoretically be better (though at the moment it is still too expensive to implement)
|
|
|
Fab
|
2025-04-06 02:37:36
|
What is different to av2 mscnn
|
|
2025-04-06 02:38:36
|
https://www.mdpi.com/2079-9292/13/5/953
|
|
2025-04-06 02:38:58
|
Also I did ai measurements of yt compression
|
|
|
DZgas Ж
|
2025-04-06 08:50:25
|
technological progress is dead
|
|
2025-04-06 08:51:27
|
<:SadCheems:890866831047417898> no vmaf for vp8
|
|
2025-04-06 08:57:13
|
even though the ENA bbq game used the simplest vp8 codec to encode cutscenes, in 1080p. Even so, there were so many complaints about performance issues that the devs had to offer a downscaled version of the video as a settings option
|
|
2025-04-06 09:00:13
|
how can one think about developing technologies of such a high level as VVC and AV2 when the power limit is a device that sits in your pocket and must run for half a day or more
|
|
2025-04-06 09:01:36
|
H.267 when
|
|
|
A homosapien
|
2025-04-08 07:18:39
|
Wow not even VP9? I've seen reports saying that VP9 can decode faster than h264 in some cases. Is it not using hardware decoding?
|
|
2025-04-08 07:19:27
|
Reminds me of some Ren'Py games that still use VP8.
|
|
|
gb82
|
|
A homosapien
Wow not even VP9? I've seen reports saying that VP9 can decode faster than h264 in some cases. Is it not using hardware decoding?
|
|
2025-04-09 06:16:38
|
vp9 is very fast to decode, that’s correct
|
|
|
Mine18
|
2025-04-09 09:00:06
|
svt av1 with FD2 used can also decode quickly, with preset 9 being faster than vp9
|
|
2025-04-09 09:01:27
|
<https://gitlab.com/AOMediaCodec/SVT-AV1/-/merge_requests/2376>
|
|
|
intelfx
|
2025-04-10 09:19:31
|
what's FD2?
|
|
|
Lumen
|
2025-04-10 09:28:58
|
--fast-decode=2 ?
|
|
|
DZgas Ж
|
2025-04-10 10:10:32
|
Creating an image (on the right) by generating pseudo-random numbers, by exhaustive search. 65535 search gen for 4x4 blocks. Which means that the image size will be 2 bytes per block, total 480 bytes for 64x64.
On the left is an image size 459 bytes
|
|
|
Mine18
|
|
intelfx
what's FD2?
|
|
2025-04-10 10:49:07
|
fast decode 2, new feature to lower decoding complexity without impacting quality too much
|
|
|
intelfx
|
2025-04-10 12:22:39
|
ah, I see
|
|
2025-04-10 12:22:41
|
--fast-encode when
|
|
|
jonnyawsom3
|
2025-04-10 12:24:51
|
Somehow that gave me an idea. Lossless AVIF vs PPM with filesystem compression
|
|
|
Quackdoc
|
2025-04-17 06:00:57
|
https://github.com/qarmin/czkawka/pull/1474/files
|
|
2025-04-17 06:01:14
|
czkawka supports eyra, which uses jxl-oxide, so I guess jxl-oxide is compatible with eyra
|
|
2025-04-17 06:01:52
|
for context, eyra is a replacement for libc kinda
|
|
2025-04-17 06:02:07
|
https://github.com/sunfishcode/eyra
|
|
|
DZgas Ж
|
2025-04-17 09:47:25
|
nuh uh. On such a small amount of information, my previous full-RLE algorithm (https://encode.su/threads/4316-NewEra-RLE-1-bit-image-for-manual-human-decoding) is still more efficient than the "new" block tile compression (an exhaustive search of all possible values to select the top 10), and with the same RLE. Lossless is more efficient than lossy, funny, funny... I still couldn't compress 2 bytes into 1 byte. This is a dead end
|
|
|
jonnyawsom3
|
2025-04-22 09:02:30
|
Curious if people can test this and post their results https://github.com/telegramdesktop/tdesktop/issues/29219
|
|
|
𝕰𝖒𝖗𝖊
|
|
Curious if people can test this and post their results https://github.com/telegramdesktop/tdesktop/issues/29219
|
|
2025-04-22 09:50:51
|
Some Non XYB JPEG colors are also incorrect for me on telegram
|
|
|
Demiurge
|
2025-04-23 03:28:35
|
How much of this bug is because of jpegli foolishly using chroma subsampling by default in an RGB JPEG (and failing to properly signal it's an RGB JPEG in the header)
|
|
2025-04-23 03:33:20
|
Would these bugs go away if jpegli actually did things the right way (no chroma subsampling, correct app14 header present)
|
|
2025-04-23 03:34:17
|
For the record, libjpeg automatically writes an app14 tag when necessary
|
|
2025-04-23 03:38:16
|
>>> By default, the IJG compression library will write a JFIF APP0 marker if the selected JPEG colorspace is grayscale or YCbCr, or an Adobe APP14 marker if the selected colorspace is RGB, CMYK, or YCCK. You can disable this, but we don't recommend it. The decompression library will recognize JFIF and Adobe markers and will set the JPEG colorspace properly when one is found.
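For illustration, detecting that APP14 marker is a simple byte scan (hedged Python sketch over a synthetic stream; a real decoder does considerably more validation):

```python
# Scan a JPEG byte stream for an Adobe APP14 marker (FF EE) and return
# its color-transform byte: 0 = RGB or CMYK, 1 = YCbCr, 2 = YCCK.

def find_adobe_transform(data: bytes):
    i = 2  # skip SOI (FF D8)
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        seglen = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xEE and data[i + 4:i + 9] == b"Adobe":
            # "Adobe"(5) + version(2) + flags0(2) + flags1(2) + transform(1)
            return data[i + 4 + 11]
        if marker == 0xDA:   # start of scan: header segments are over
            break
        i += 2 + seglen
    return None

# Minimal synthetic stream: SOI + one APP14 segment with transform = 0.
app14_body = b"Adobe" + bytes([0, 100, 0, 0, 0, 0, 0])
jpeg = (b"\xff\xd8" + b"\xff\xee"
        + (2 + len(app14_body)).to_bytes(2, "big") + app14_body)

assert find_adobe_transform(jpeg) == 0   # 0 signals untransformed RGB
```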
|
|
|
jonnyawsom3
|
|
Demiurge
How much of this bug is because of jpegli foolishly using chroma subsampling by default in an RGB JPEG (and failing to properly signal it's an RGB JPEG in the header)
|
|
2025-04-23 03:53:25
|
None. I added the APP14 tag and used 4:4:4
|
|
|
Demiurge
|
2025-04-23 03:57:53
|
Ok, good, that's an important detail. It's a telegram bug then...
|
|
2025-04-23 04:01:46
|
But the issue page does not say that there's no chroma subsampling or that the jpeg headers were corrected
|
|
|
jonnyawsom3
|
2025-04-23 07:11:08
|
Because they're irrelevant, they don't do anything
|
|
|
Demiurge
|
2025-04-23 09:00:02
|
No, it's pretty important to know that the bug is present in normal, valid RGB JPEGs with color profiles, not just malformed and weird JPEG
|
|
|
190n
|
2025-04-25 08:22:16
|
hey, you all use <https://github.com/google/highway> -- any idea if there is a #define or something that will turn off all its runtime warnings (`HWY_WARN`)?
|
|
|
Traneptora
|
|
Demiurge
How much of this bug is because of jpegli foolishly using chroma subsampling by default in an RGB JPEG (and failing to properly signal it's an RGB JPEG in the header)
|
|
2025-04-25 08:50:51
|
jpegli doesnt do this anymore fwiw
|
|
2025-04-25 08:51:14
|
It defaults to 444 and if you use --xyb iirc it refuses to use 420
|
|
2025-04-25 08:51:43
|
Definitely doesn't do weird stuff like B-only-subsampling anymore
|
|
|
190n
hey, you all use <https://github.com/google/highway> -- any idea if there is a #define or something that will turn off all its runtime warnings (`HWY_WARN`)?
|
|
2025-04-25 08:52:19
|
theoretically they should only fire on debug builds, not release, right?
|
|
|
190n
|
2025-04-25 08:53:16
|
no they run in release, unless we've misconfigured highway
|
|
2025-04-25 08:54:40
|
but there doesn't seem to be anything checked before defining hwy_warn: <https://github.com/google/highway/blob/400fbf20f2e40b984be129b88f83d4748cfc26a0/hwy/base.h#L335-L336> and hwy::Warn: <https://github.com/google/highway/blob/400fbf20f2e40b984be129b88f83d4748cfc26a0/hwy/base.h#L268-L277>
|
|
2025-04-25 08:55:49
|
so for context we're seeing a warning because someone runs in a VM that doesn't advertise sse4.2 support (as claimed by cpuid) but actually does support the instructions. and we've built highway with a `-march` that has sse4.2, so highway is basically warning that sse4.2 is not available because we didn't tell it to support anything older
|
|
2025-04-25 08:56:01
|
so we may instead be able to avoid the warning by setting a lower `-march`
|
|
2025-04-25 08:56:26
|
although theoretically that would add a bit of code size for extensions we don't actually care about supporting
|
|
|
jonnyawsom3
|
|
Traneptora
jpegli doesnt do this anymore fwiw
|
|
2025-04-25 08:56:33
|
Maybe the library, but cjpegli definitely still does. I forked it to make it 444 by default, and I'll probably make a PR later on with some more tweaks
|
|
2025-04-25 08:57:48
|
Slightly jank, but working on it https://github.com/jonnyawsom3/jpegli/tree/jpegliSubsampling
|
|
|
Traneptora
|
|
190n
no they run in release, unless we've misconfigured highway
|
|
2025-04-25 08:59:40
|
looks like not, best you got is the runtime Hwy::SetWarnFunc
|
|
2025-04-25 08:59:47
|
set it to a function that doesn't do anything
|
|
|
Maybe the library, but cjpegli definitely still does. I forked it to make it 444 by default, and I'll probably make a PR later on with some more tweaks
|
|
2025-04-25 09:00:38
|
I'm using jpegli from the libjxl repo, and it doesn't do weird-b-only-subsampling
|
|
|
190n
|
2025-04-25 09:00:38
|
yeah i'm thinking about whether it's best to do that, or to set -march for highway so that it supports everything
|
|
|
Traneptora
|
2025-04-25 09:00:57
|
the reason it doesn't is that they're not recompressible with jxl
|
|
2025-04-25 09:01:06
|
it used to, but I pointed this out, and they changed it
|
|
2025-04-25 09:01:09
|
for exactly that reason
|
|
|
jonnyawsom3
|
|
Traneptora
I'm using jpegli from the libjxl repo, and it doesn't do weird-b-only-subsampling
|
|
2025-04-25 09:01:54
|
It was originally a fork of the libjxl repo, but we switched to the google repo because it had a few missed commits that hadn't been merged to libjxl
|
|
|
Traneptora
|
2025-04-25 09:02:22
|
lemme pull up the issue tracker
|
|
|
jonnyawsom3
|
2025-04-25 09:03:50
|
Just downloaded from libjxl
|
|
|
Traneptora
|
2025-04-25 09:04:53
|
oh I'm referring to the weird B-only
|
|
|
jonnyawsom3
|
2025-04-25 09:05:16
|
yeah, that is B only, but any subsampling for XYB breaks transcoding too
|
|
2025-04-25 09:06:41
|
Single channel subsampling is so exotic, all tools only report 4:2:0, not that it's a single channel
|
|
2025-04-25 09:07:41
|
Also added the APP14 marker, since it's RGB JPEG internally
|
|
|
Traneptora
|
2025-04-25 09:07:41
|
> We did put limitations on what kinds of JPEGs can be represented in JXL to hit a good trade-off: we did not want to complicate the jxl spec too much, especially the core codestream, while we did want to be able to recompress most JPEGs "found in the wild". For example, 4:2:0 and 4:4:4 jpegs are very common, 4:2:2 also occurs in the wild but more rarely, and any other kind of subsampling is very rare. So we did add chroma subsampling to the jxl codestream, but only for YCbCr, and only for up to 2x, to keep things as simple as possible.
>
> Basically the goal was to be able to recompress 99.9% of the JPEGs you would encounter in practice on the web, and I think we succeeded.
>
> Now if jpegli is going to produce a new kind of exotic JPEG that we cannot recompress in JXL, then I think we are shooting ourselves in the foot.
-Jon
https://github.com/libjxl/libjxl/issues/2284
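Jon's constraints above (subsampling only for YCbCr, only up to 2x per axis) can be sketched as a small predicate. This is an illustrative toy, not the actual spec logic; `jxl_can_recompress` and its argument shapes are made up for the example.

```python
def jxl_can_recompress(colorspace: str, sampling: dict) -> bool:
    """Rough check per the quoted design: the JXL codestream only
    carries chroma subsampling for YCbCr, and only up to 2x per axis.
    `sampling` maps component name -> (h_downsample, v_downsample)
    relative to luma. Illustrative sketch only."""
    if all(f == (1, 1) for f in sampling.values()):
        return True  # 4:4:4 is always representable, any colorspace
    if colorspace != "YCbCr":
        return False  # e.g. subsampled RGB/XYB JPEGs fall outside the spec
    return all(h <= 2 and v <= 2 for h, v in sampling.values())

print(jxl_can_recompress("YCbCr", {"Cb": (2, 2), "Cr": (2, 2)}))  # 4:2:0 -> True
print(jxl_can_recompress("YCbCr", {"Cb": (4, 2), "Cr": (4, 2)}))  # 4:1:0 -> False
print(jxl_can_recompress("RGB",   {"B": (2, 2)}))                 # B-only -> False
```

This also shows why the 4:1:0 experiment mentioned later in the thread would break transcoding: its 4x horizontal factor exceeds the 2x limit.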
|
|
|
jonnyawsom3
|
2025-04-25 09:08:03
|
Yeah, I know the issue, but it either got reverted or was never actually fixed
|
|
|
Quackdoc
|
2025-04-25 09:08:34
|
us yuv422 lovers destroyed T.T
|
|
|
Traneptora
|
2025-04-25 09:08:55
|
yuv422 is transcodable
|
|
|
Kupitman
|
|
username
|
2025-04-25 09:09:59
|
subsampled RGB JPEGs (which XYB JPEGs end up as by default) are not transcodable
|
|
|
Quackdoc
|
2025-04-25 09:10:08
|
i was talking about rare specifically. tho I wish yuv410 was more common, I watched batman as yuv410... it worked out well...
|
|
|
Kupitman
|
2025-04-25 09:10:22
|
why does PackJPG, from the 2000s, compress better than JPEG XL?
|
|
|
Quackdoc
|
2025-04-25 09:12:14
|
> You will get an error message if you try to decompress PJG
> files with a different version than the one used for compression
dunno but it's worthless to me, seems like they are making some very unacceptable compromises if you want an actual replacement to jpeg
|
|
|
jonnyawsom3
|
|
Quackdoc
i was talking about rare specifically. tho I wish yuv410 was more common, I watched batman as yuv410... it worked out well...
|
|
2025-04-25 09:13:28
|
Since subsampling XYB JPEGs breaks transcoding anyway, I was going to try 4:1:0 and see what it's like
|
|
|
monad
|
|
Kupitman
why PackJPG from 200x, compress better than JPEGXL?
|
|
2025-04-25 09:29:15
|
Last I checked (a few years ago), everything could go denser than JXL, but JXL is designed for practical real-time image decode and others may not prioritize that. https://discord.com/channels/794206087879852103/803645746661425173/937627806408523777
|
|
|
A homosapien
|
2025-04-25 09:34:58
|
PackJPG and Dropbox's lepton are archival tools first and foremost, not really meant to be an image format per se. JPEG XL transcoding was meant to be viewed and decoded progressively in a browser or image viewer.
|
|
|
Kupitman
|
|
monad
Last I checked (a few years ago), everything could go denser than JXL, but JXL is designed for practical real-time image decode and others may not prioritize that. https://discord.com/channels/794206087879852103/803645746661425173/937627806408523777
|
|
2025-04-25 09:35:19
|
but packjpg is 2-3x faster...
|
|
2025-04-25 09:36:44
|
faster + 5% lower
|
|
|
monad
|
2025-04-25 09:36:54
|
are they progressive?
|
|
|
Kupitman
|
2025-04-25 09:37:06
|
idk
|
|
2025-04-25 09:37:13
|
|
|
|
A homosapien
|
2025-04-25 09:37:20
|
Lepton infamously has a hard time compressing progressive jpegs
|
|
|
Kupitman
|
2025-04-25 09:37:21
|
|
|
|
Kupitman
|
|
2025-04-25 09:37:33
|
https://github.com/packjpg/packJPG/releases/tag/2.5k
|
|
|
_wb_
|
2025-04-25 09:45:38
|
Lepton predicts DC from AC so in terms of data dependencies it is anti-progressive by design.
|
|
|
Kupitman
|
2025-04-25 09:46:18
|
Lepton?
|
|
|
jonnyawsom3
|
|
Kupitman
|
|
2025-04-25 09:50:34
|
Well the JXL certainly is progressive
|
|
|
Kupitman
|
|
Well the JXL certainly is progressive
|
|
2025-04-25 09:55:15
|
and why is packjpg effective?
|
|
|
jonnyawsom3
|
|
Kupitman
and why packjpg effective?
|
|
2025-04-25 09:58:58
|
It doesn't need to be decoded during download, and can't be viewed while compressed, so it can use methods too slow to decompress or that require the entire file to be loaded first
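The same archiver-vs-format trade-off shows up in general-purpose compressors, which makes it easy to demonstrate without any JPEG tooling. A hedged stdlib illustration (the word list and sizes are arbitrary): a slow whole-buffer compressor buys density that a fast streamable one gives up.

```python
import lzma
import random
import zlib

# A few hundred KB of compressible but non-trivial text.
random.seed(0)
words = ["jpeg", "xl", "packjpg", "lepton", "coefficient", "huffman", "block"]
data = " ".join(random.choice(words) for _ in range(60_000)).encode()

fast = zlib.compress(data, level=1)    # cheap, decodable as a stream
dense = lzma.compress(data, preset=9)  # slow, models the whole buffer

print(len(data), len(fast), len(dense))  # dense < fast < original
```

packJPG and Lepton sit at the `lzma`-like end of this spectrum; a format meant for live viewing has to stay closer to the `zlib` end.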
|
|
|
Kupitman
|
2025-04-25 10:05:47
|
|
|
2025-04-25 10:05:56
|
|
|
2025-04-25 10:07:58
|
old compressor still beat new jpeg-xl
|
|
|
RaveSteel
|
2025-04-25 10:13:21
|
Ok, but can you directly open the compressed JPGs? If not, this point is kinda moot
|
|
|
A homosapien
|
2025-04-25 10:14:03
|
JPEG XL is much faster and more usable than packJPG.
As I said, packJPG is just an archiver, not an image format.```
ENCODE TIME
packJPG - 20.83 sec
JPEG XL - 3.74 sec
DECODE TIME
packJPG - 19.72 sec
JPEG XL - 1.77 sec
```
|
|
|
Kupitman
|
|
A homosapien
JPEG XL is much faster and useable than packJPG.
As I said, packJPG is just an archiver, not an image format.```
ENCODE TIME
packJPG - 20.83 sec
JPEG XL - 3.74 sec
DECODE TIME
packJPG - 19.72 sec
JPEG XL - 1.77 sec
```
|
|
2025-04-25 10:14:24
|
where you get it
|
|
2025-04-25 10:14:32
|
|
|
2025-04-25 10:14:36
|
0.45
|
|
2025-04-25 10:14:45
|
faster than jpeg-xl
|
|
|
A homosapien
|
2025-04-25 10:15:06
|
I'm using a very large image, 11656 x 8742, 100 MP
|
|
2025-04-25 10:15:17
|
Let me test on a smaller image
|
|
2025-04-25 10:15:49
|
And JPEG XL is multithreaded, so it should be much faster
|
|
2025-04-25 10:17:43
|
Okay 1500 x 1137 image, packJPG 0.15 sec. JXL 0.08 sec. Twice as fast
|
|
|
RaveSteel
|
2025-04-25 10:18:22
|
add in the mandatory decompression time
|
|
|
jonnyawsom3
|
2025-04-25 10:19:00
|
And the JXL being of slightly higher quality, with improvements possible in future
|
|
|
Kupitman
|
|
A homosapien
Okay 1500 x 1137 image, packJPG 0.15 sec. JXL 0.08 sec. Twice as fast
|
|
2025-04-25 10:19:35
|
how log jpeg-xl speed
|
|
|
A homosapien
And JPEG XL is multithreaded, so it should be much faster
|
|
2025-04-25 10:24:41
|
for* jpg transcoding don't
|
|
|
jonnyawsom3
|
|
Kupitman
for* jpg transcoding don't
|
|
2025-04-25 10:27:06
|
Don't what, multithread?
|
|
|
A homosapien
|
2025-04-25 11:06:02
|
It seems to multithread for large images
|
|
2025-04-25 11:06:48
|
100 MP image again```
cjxl --num_threads 0 -- 8.35 secs
cjxl --num_threads 12 - 3.13 secs
```
|
|
|
Kupitman
|
|
gb82
|
2025-04-27 06:39:20
|
this is like comparing ZPAQ to ZSTD, and saying "why is ZPAQ so much better (because it is denser)?" when that isn't really the main allure of ZSTD in the first place
|
|
|
spider-mario
|
|
Kupitman
|
|
2025-04-27 07:00:42
|
0.45 seconds × 2844 bytes per second means it compressed 1280 bytes (1.28 kB), is that right? (or is that meant to be 2844 kbytes per second, i.e. 1.28MB?)
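spider-mario's sanity check is just one multiplication; spelled out with the unit assumption made explicit:

```python
seconds = 0.45
rate_kb_per_s = 2844  # assuming the screenshot's column really is KB/s
total_kb = seconds * rate_kb_per_s
print(round(total_kb))  # 1280 -> about 1.28 MB processed
```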
|
|
|
Kupitman
|
|
spider-mario
0.45 seconds × 2844 bytes per second means it compressed 1280 bytes (1.28 kB), is that right? (or is that meant to be 2844 kbytes per second, i.e. 1.28MB?)
|
|
2025-04-27 07:04:44
|
avrg. KBYTE per s
|
|
|
gb82
this is like comparing ZPAQ to ZSTD, and saying "why is ZPAQ so much better (because it is denser)?" when that isn't really the main allure of ZSTD in the first place
|
|
2025-04-27 07:05:04
|
zpaq better
|
|
|
spider-mario
|
|
Kupitman
avrg. KBYTE per s
|
|
2025-04-27 07:06:25
|
right, it says that in the left column, but then bytes in the right column, so it wasn’t really clear which one was authoritative
|
|
|
Kupitman
|
|
spider-mario
right, it says that in the left column, but then bytes in the right column, so it wasn’t really clear which one was authoritative
|
|
2025-04-27 07:08:02
|
kbyte
|
|
2025-04-27 07:08:09
|
|
|
|
gb82
|
|
Kupitman
zpaq better
|
|
2025-04-27 08:00:30
|
this feels like trolling to me
|
|
|
Kupitman
|
|
gb82
this feels like trolling to me
|
|
2025-04-27 08:02:42
|
i use zpaq for most packages
|
|
|
A homosapien
|
2025-04-27 08:29:02
|
Maximal compression is not the ultimate goal for me (and for most people). Usability, versatility, and reliability are important.
|
|
2025-04-27 08:35:03
|
> Compressed PJG files are not compatible between different packJPG versions.
That alone is a good enough reason to not use packJPG for many people. Imagine if every version of 7-zip produced incompatible files?
|
|
2025-04-27 08:37:20
|
Not even Lepton (another jpeg archiver) has this problem. As far as I know, its files are compatible across multiple versions and it typically compresses better than packJPG.
|
|
|
jonnyawsom3
|
|
A homosapien
Maximal compression is not the ultimate goal for me (and for most people). Usability, versatility, and reliability are important.
|
|
2025-04-27 08:43:35
|
Just wait until they learn about our faster decoding work... Sacrificing density for multiple times the speed? Blasphemy
|
|
|
Kupitman
|
|
A homosapien
> Compressed PJG files are not compatible between different packJPG versions.
That alone is a good enough reason to not use packJPG for many people. Imagine if every version of 7-zip produced incompatible files?
|
|
2025-04-28 05:05:39
|
🥺
|
|
2025-04-28 05:06:03
|
Packjpg doesn't update
|
|
|
A homosapien
|
2025-04-28 07:07:46
|
And what if it does? That's a poor argument
|
|
2025-04-28 07:07:58
|
packJPG isn't even good as an archiver, it's more like a toy format, a proof of concept
|
|
2025-04-28 07:08:43
|
It's not even the best at compression, that title belongs to Dropbox's Lepton
|
|
2025-04-28 07:10:10
|
It's still being actively developed, is multithreaded, and it __**doesn't break backwards compatibility**__. It's good at what it does and to rub salt on the wound, it compresses better than packJPG.
|
|
|
Demiurge
|
|
A homosapien
It's not even the best at compression, that title belongs to Dropbox's Lepton
|
|
2025-04-28 07:46:08
|
can this gap be overcome by an improved jxl encoder?
|
|
|
A homosapien
|
2025-04-28 07:50:56
|
Most likely not overcome, but the gap could get closer.
|
|
2025-04-28 07:53:51
|
For example Jon is working on improving JPEG transcoding as we speak
https://github.com/libjxl/libjxl/pull/4202
|
|
|
gb82
|
2025-04-29 03:23:58
|
<@794205442175402004> another Cloudinary question – what metrics do you guys use for images, or what metrics would you say are important to you? I know AIMOS exists, but I'm guessing you also consider SSIMU2 & Butteraugli? Maybe just SSIMU2?
|
|
2025-04-29 03:24:15
|
Or do you average butter & ssimu2 somehow maybe? not sure
|
|
|
_wb_
|
2025-04-29 06:42:59
|
Currently I think CVVDP, IW-SSIM, Butteraugli p-norm and ssimulacra2 are the most reliable metrics. But we just started JPEG AIC-4 with the goal of finding/making better metrics...
|
|
|
Lumen
|
2025-04-29 02:38:55
|
It's hard to read the pytorch implementation
|
|
|
gb82
|
|
_wb_
Currently I think CVVDP, IW-SSIM, Butteraugli p-norm and ssimulacra2 are the most reliable metrics. But we just started JPEG AIC-4 with the goal of finding/making better metrics...
|
|
2025-04-29 05:55:18
|
gotcha – CVVDP in particular is very interesting
|
|
|
CrushedAsian255
|
|
_wb_
Currently I think CVVDP, IW-SSIM, Butteraugli p-norm and ssimulacra2 are the most reliable metrics. But we just started JPEG AIC-4 with the goal of finding/making better metrics...
|
|
2025-04-30 12:12:49
|
Just use MSE lol
|
|
2025-04-30 12:13:00
|
/s
|
|
|
gb82
|
2025-04-30 11:26:48
|
huh, interesting
|
|
|
diskorduser
|
2025-05-01 10:11:09
|
Do we still have to do some system files editing to set jxl files as wallpaper in KDE plasma?
|
|
|
RaveSteel
|
2025-05-01 10:23:57
|
no
|
|
2025-05-01 10:24:03
|
works OOTB
|
|
|
diskorduser
|
|
RaveSteel
works OOTB
|
|
2025-05-01 10:30:23
|
I don't get "set as wallpaper" for jxl files.
|
|
|
RaveSteel
|
2025-05-01 10:31:01
|
Ahh, using the context menu you mean. I have never tried that, I set up a wallpaper folder and just put JXLs in there
|
|
2025-05-01 10:31:05
|
Which works without problems
|
|
|
diskorduser
|
2025-05-01 10:33:01
|
hmm. I think you still need to edit some files to get the wallpaper option in the context menu.
|
|
|
HCrikki
|
2025-05-01 11:49:56
|
opensuse leap 16's beta still carries libjxl 0.8 - concerned this will keep performance degraded for gnome, kde and gimp for years if not updated before july's stable
(many complaints about slow jxl seemed to cover ancient lib versions)
|
|
|
CrushedAsian255
|
|
HCrikki
opensuse leap 16's beta still carries libjxl 0.8 - concerned this will keep performance degraded for gnome, kde and gimp for years if not updated before july's stable
(many complaints about slow jxl seemed to cover ancient lib versions)
|
|
2025-05-01 11:52:19
|
Is the API stable? If so can’t they all just share 1 SO?
|
|
|
jonnyawsom3
|
2025-05-01 05:33:11
|
There was a change in 0.10 that requires some tweaks IIRC
|
|
2025-05-01 05:35:58
|
Huh, maybe not. I can't see anything in the changelogs
|
|
|
Laserhosen
|
2025-05-01 08:03:48
|
0.9.0 definitely made breaking API changes. I haven't noticed any since then.
|
|
|
jonnyawsom3
|
2025-05-01 08:20:56
|
Ah yeah, that'd do it
> encoder and decoder API: all deprecated functions were removed:
|
|
|
JesusGod-Pope666.Info
|
2025-05-02 12:29:56
|
Hello
|
|
2025-05-02 12:30:38
|
anyone know the best way to test out MozJPEG - I don't think it really is..... but I am testing stuff for my webpage.
|
|
2025-05-02 12:30:53
|
Overall problem I have with JXL is the compatibility as it goes for now.
|
|
2025-05-02 12:31:37
|
Although I think AVIF actually looks better at lower file storage or higher encoding. But overall... They seem close together.
|
|
2025-05-02 12:33:20
|
Seems to be somewhat better detail in AVIF while JXL smudges it out at low file sizes - JXL seems to be better with resaving over and over again compared to AVIF.
|
|
2025-05-02 12:33:54
|
Sadly.... Browsers have not added support for it, I know a lot of people would like that and yet.....
|
|
2025-05-02 12:34:27
|
And hard to test JXL on my website when the browser does not support it.
|
|
2025-05-02 12:34:36
|
comparing against the other 2.
|
|
|
username
|
2025-05-02 12:34:47
|
something nice about JXL is you can see the image before it's finished downloading while with AVIF you won't see a single pixel untill 100% of the file is downloaded
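The progressive-vs-sequential difference username describes can be modeled as a toy: a progressive file puts a coarse, self-contained pass first, so a partial download is already renderable, while a sequential file shows nothing until complete. The layout tuples and function are invented for the illustration.

```python
def can_show_preview(layout, received_bytes):
    """Toy model: a pass becomes renderable once it has fully arrived
    and is decodable on its own. `layout` lists
    (pass_name, size_in_bytes, renderable_alone) tuples in file order."""
    offset = 0
    renderable = False
    for name, size, alone in layout:
        offset += size
        if offset <= received_bytes and alone:
            renderable = True
    return renderable

# Progressive: a small coarse pass up front, refinements after.
progressive = [("coarse pass", 10_000, True),
               ("refinement 1", 40_000, True),
               ("refinement 2", 50_000, True)]
# Sequential: nothing is decodable until the whole file is present.
sequential = [("whole image", 100_000, True)]

print(can_show_preview(progressive, 25_000))  # True: coarse pass received
print(can_show_preview(sequential, 25_000))   # False: still waiting
```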
|
|
|
JesusGod-Pope666.Info
|
2025-05-02 12:35:00
|
I have been thinking of MozJPEG but not figured out something to make the files yet.... kinda a lot of stuff to know to do things apparently.
|
|
2025-05-02 12:35:33
|
Yea, but I am not sure that really does anything on my webpage with the Picture app I have as it kinda loads before showing anyway. But would be nice to test things of course.
|
|
2025-05-02 12:35:39
|
I have this auto gallery thing.
|
|
2025-05-02 12:35:48
|
I'll fix a web url brb
|
|
2025-05-02 12:36:02
|
https://jesusgod-pope666.info/images.php
|
|
2025-05-02 12:36:19
|
I have 80,000 files, and around 40 GB.... I am looking to make them smaller and upload them to the website.
|
|
2025-05-02 12:36:26
|
and some very slow computers here working with it.
|
|
2025-05-02 12:36:54
|
I heard that MozJPEG is the best for the old JPEG file format, so I would like to try that as well and compare that.
|
|
2025-05-02 12:37:10
|
By the way, a great image app for webpages.
|
|
2025-05-02 12:37:19
|
the very best that I know, but it does cost money.
|
|
|
A homosapien
|
|
JesusGod-Pope666.Info
I heard that MozJPEG is the best for the old JPEG file format, so I would like to try that as well and compare that.
|
|
2025-05-02 12:37:44
|
Actually jpegli is now the best jpeg encoder on the market right now
|
|
|
jonnyawsom3
|
2025-05-02 12:37:58
|
These may be worth looking into https://github.com/niutech/jxl.js for JXL and https://opensource.googleblog.com/2024/04/introducing-jpegli-new-jpeg-coding-library.html for JPEG
|
|
|
JesusGod-Pope666.Info
|
2025-05-02 12:38:01
|
okay....... And what supports that?
|
|
|
jonnyawsom3
|
2025-05-02 12:38:27
|
Anything that supports normal JPEGs
|
|
2025-05-02 12:38:37
|
It's a mozjpeg replacement, essentially
|
|
|
JesusGod-Pope666.Info
|
2025-05-02 12:38:43
|
Well what app can I use to batch some files with that encoder.
|
|
2025-05-02 12:38:49
|
ahhh okay.
|
|
2025-05-02 12:38:59
|
does Gimp 3 have it?
|
|
2025-05-02 12:39:20
|
I have just installed Gimp 3 on an old slim device running Windows 11, took what seemed half an hour.
|
|
2025-05-02 12:39:33
|
I also installed a batch plugin to it.
|
|
2025-05-02 12:40:30
|
On the mozjpeg replacement, it comes from Google? Do we kinda trust Google....
|
|
|
jonnyawsom3
|
2025-05-02 12:41:00
|
It was made as a side project to JXL, using some of the same ideas
|
|
|
JesusGod-Pope666.Info
|
2025-05-02 12:41:22
|
More dense: Jpegli compresses images more efficiently than traditional JPEG codecs, which can save bandwidth and storage space, and speed up web pages.
How Jpegli works
Jpegli works by using a number of new techniques to reduce noise and improve image quality; mainly adaptive quantization heuristics from the JPEG XL reference implementation, improved quantization matrix selection, calculating intermediate results precisely, and having the possibility to use a more advanced colorspace. All the new methods have been carefully crafted to use the traditional 8-bit JPEG formalism, so newly compressed images are compatible with existing JPEG viewers such as browsers, image processing software, and others.
|
|
2025-05-02 12:41:32
|
It sounds good but.... I kinda have my trust issues with Google.
|
|
|
jonnyawsom3
|
2025-05-02 12:42:05
|
Same company, different department. Most here are from Google Research, working on JXL and jpegli
|
|
|
JesusGod-Pope666.Info
|
2025-05-02 12:43:38
|
Well you guys are the experts, although of course there are always a lot of opinions out there on the internet.
|
|
2025-05-02 12:43:56
|
Aha, I thought JXL was made by someone other than Google.
|
|
2025-05-02 12:44:33
|
Still kinda trying to figure out what to use and such for my webpage.
|
|
2025-05-02 12:44:52
|
Like 40GB is just way too much, I tried some of the other formats and it cuts a lot of it off.
|
|
|
username
|
|
JesusGod-Pope666.Info
Aha, I though JXL was made by someone else then Google.
|
|
2025-05-02 12:45:00
|
it was made by people from Google and also people not from Google
|
|
|
JesusGod-Pope666.Info
|
2025-05-02 12:45:08
|
I think I can get the images down to maybe 4 GB.
|
|
2025-05-02 12:45:30
|
I will have to test the image gallery as it load 200 files at a time.
|
|
2025-05-02 12:45:55
|
I have 600 files being made into AVIF at the moment, will take some 3-5 hours until finished on my slim client.
|
|
2025-05-02 12:45:59
|
talk about slow slow slow.
|
|
2025-05-02 12:46:05
|
but that is what I have to deal with.
|
|