|
damian101
|
2023-12-05 02:09:02
|
under Linux, too
|
|
2023-12-05 02:09:05
|
Windows as well
|
|
|
jonnyawsom3
|
2023-12-05 02:09:19
|
I'm sure I remember seeing jpeg screenshots from my phone, but I think they reverted it in a firmware update
|
|
|
damian101
|
2023-12-05 02:09:24
|
mpv is the only exception
|
|
2023-12-05 02:09:31
|
in my experience
|
|
|
Quackdoc
|
2023-12-05 02:09:57
|
what AOSP does is always capture the image as PNG and then send it to the exporter function, which supports jpeg, png and webp. but whether you end up with PNG, WEBP or JPG is up to the rom itself, since roms will usually implement their own screenshot utility or modify AOSP's
|
|
|
jonnyawsom3
|
2023-12-05 02:10:58
|
Ah, scrollshots were always jpeg at least
|
|
|
Quackdoc
|
|
Ah, scrollshots were always jpeg at least
|
|
2023-12-05 02:11:37
|
yeah that would be using a custom screenshot util probably.
|
|
|
DZgas Ж
|
|
My phone still does PNG, though I noticed when directly sharing an image from telegram to discord via the apps, it seemed to re-compress Telegram's jpeg with nearest neighbour to 720p from 1080p and quality 40 encoding (Based on what Ssimulacra2 said comparing to the original)
|
|
2023-12-05 02:11:39
|
Discord recompress jpeg again. Yep. But not upscaled
|
|
|
damian101
|
|
Ah, scrollshots were always jpeg at least
|
|
2023-12-05 02:11:48
|
Isn't scrollshots an extremely new feature?
|
|
2023-12-05 02:11:59
|
Android 12 or 13
|
|
|
Quackdoc
|
2023-12-05 02:12:04
|
back in, I want to say kitkat, AOSP only supported png
|
|
|
Android 12 or 13
|
|
2023-12-05 02:12:21
|
for aosp iirc it was introduced a11? but roms have had it for a very long time
|
|
|
DZgas Ж
|
2023-12-05 02:12:34
|
🤣👉mpv
|
|
|
Quackdoc
|
2023-12-05 02:12:36
|
my axon 7 had it too
|
|
2023-12-05 02:13:25
|
I think the number of roms that use AOSP's screenshot util is actually in the minority, not the majority
|
|
|
damian101
|
|
Quackdoc
for aosp iirc it was introduced a11? but roms have had it for a very long time
|
|
2023-12-05 02:13:40
|
I think it must be Android 12...
|
|
|
DZgas Ж
|
2023-12-05 02:14:07
|
my friend recommended mpv to me once. then I made this meme
|
|
|
Quackdoc
|
|
I think it must be Android 12...
|
|
2023-12-05 02:15:19
|
I just checked and android 11 did introduce the scroll screenshot
|
|
2023-12-05 02:15:26
|
|
|
|
damian101
|
|
Quackdoc
|
2023-12-05 02:15:47
|
ofc that does not mean all a11 phones have it, it depends on what a11 tag they're based on
|
|
2023-12-05 02:16:21
|
there are quite a few tags an A11 rom can be based on
|
|
|
DZgas Ж
|
2023-12-05 02:17:18
|
I've already forgotten why different versions of android exist. After Android 6, it didn't make any sense.
|
|
2023-12-05 02:17:51
|
the screen is there, the buttons are pressed, the application is working.
|
|
2023-12-05 02:21:01
|
all I see is that applications are now 200+ megabytes each... because Google Play allowed it
|
|
|
damian101
|
|
DZgas Ж
I've already forgotten why different versions of android exist. After Android 6, it didn't make any sense.
|
|
2023-12-05 02:21:13
|
yeah, and except for the UI, changes are rather incremental, it's like the switch from Windows 10 to 11
|
|
|
Quackdoc
|
2023-12-05 02:22:13
|
well there have been some major under the hood changes for things like project mainline.
|
|
|
damian101
|
2023-12-05 02:22:48
|
hmm, yes...
|
|
2023-12-05 02:23:27
|
Android versions are mostly a way to break compatibility
|
|
2023-12-05 02:23:38
|
but that's not necessary between all Android versions
|
|
|
DZgas Ж
|
2023-12-05 02:23:53
|
<@184373105588699137> why does Aosp not support jxl for screenshots? mm?
|
|
|
damian101
|
|
Android versions are mostly a way to break compatibility
|
|
2023-12-05 02:24:06
|
unlike Windows, which cares a lot about backwards compatibility
|
|
|
Quackdoc
|
2023-12-05 02:24:13
|
probably because skia for some reason still uses the gitlab for jxl xD
|
|
|
damian101
|
|
unlike Windows, which cares a lot about backwards compatibility
|
|
2023-12-05 02:24:20
|
which is why it's such a bloated OS
|
|
|
Quackdoc
|
|
but that's not necessary between all Android versions
|
|
2023-12-05 02:25:28
|
android does try to maintain a degree of backwards compatibility, but not as significant as windows for sure. but the major thing for them is migrating as much as they can to mainline stuff so they can get rid of the years of tech debt they incurred. it's hard to keep in mind just how much is put into android to keep it somewhat functioning
|
|
|
damian101
|
|
Quackdoc
android does try to maintain a degree of backwards compatibility, but not as significant as windows for sure. but the major thing for them is migrating as much as they can to mainline stuff so they can get rid of the years of tech debt they incurred. it's hard to keep in mind just how much is put into android to keep it somewhat functioning
|
|
2023-12-05 02:26:08
|
yes, I put it wrong
|
|
|
Quackdoc
|
2023-12-05 02:26:21
|
well, it's a lot better than gnu distros xD
|
|
|
damian101
|
2023-12-05 02:27:52
|
Android is quite backwards compatible, but they allow devs to use new Android system features at the cost of breaking compatibility with older versions
|
|
|
DZgas Ж
|
2023-12-05 02:27:58
|
I remember Google made its own kernel to replace Android... haha.
|
|
|
damian101
|
|
Android is quite backwards compatible, but they allow devs to use new Android system features at the cost of breaking compatibility with older versions
|
|
2023-12-05 02:28:27
|
the version scheme is supposed to keep that ordered
|
|
|
DZgas Ж
|
2023-12-05 02:28:49
|
Google Fuchsia
|
|
|
Quackdoc
|
|
Android is quite backwards compatible, but they allow devs to use new Android system features at the cost of breaking compatibility with older versions
|
|
2023-12-05 02:29:18
|
indeed, they do try to encourage some backwards compatibility by telling devs to use the minimum supported API for the Google Play Store, but at some point they do break compatibility, all because users refuse to stop shooting themselves in the foot. At the very least, it's not like you need to use these APIs if you don't target the Play Store
|
|
|
DZgas Ж
|
|
Android is quite backwards compatible, but they allow devs to use new Android system features at the cost of breaking compatibility with older versions
|
|
2023-12-05 02:32:32
|
this happens in the same way as application compatibility between win xp and win 11. the more self-contained the program was originally, the more likely it is to still work. there are android 2.3 programs that just can't be installed anymore
|
|
2023-12-05 02:33:23
|
there is no problem with this in general. except for the memories. Current programs are supported by developers and just work.
|
|
|
damian101
|
2023-12-05 02:34:37
|
I guess Android apps usually make much more use of system libraries, compared to Windows
|
|
|
DZgas Ж
|
2023-12-05 02:35:05
|
<:BanHammer:805396864639565834> <:Android:806136610642853898>
|
|
|
Quackdoc
|
|
I guess Android apps usually make much more use of system libraries, compared to Windows
|
|
2023-12-05 02:35:35
|
well windows does also just have a killer legacy emulation system
|
|
|
DZgas Ж
|
2023-12-05 02:35:46
|
Bad guys 👉x86 x86_64 armv8
|
|
|
fab
|
2023-12-05 02:37:44
|
https://readabilitylab.xyz/results?type=FAVORITE&idUserExternal=30020&user1=Lexend%20Deca&user2=Helvetica&user3=EB%20Garamond&user4=Noto%20Sans&user5=Avant%20Garde&top1=Noto%20Sans&top2=Times&top3=Avenir%20Next&top4=Helvetica&top5=Calibri&sentence=In%20our%20studies%20users%20read%20in%20their%20preferred%20font%20at%20an%20average%20words%20per%20minute.%20Also,%20they%20do%20not%20read%20faster%20or%20slower,%20on%20average,%20by%20reading%20in%20the%20commonly%20preferred%20fonts%20(Noto%20Sans%20and%20Times).
|
|
2023-12-05 02:38:03
|
|
|
2023-12-05 02:38:48
|
do you agree
|
|
2023-12-05 02:40:52
|
|
|
2023-12-05 02:41:02
|
i fixed the design
|
|
2023-12-05 02:41:05
|
not metrics
|
|
2023-12-05 02:41:23
|
i have the list of the font i did this summer
|
|
2023-12-05 02:42:43
|
i have a font on my xiaomi with 20000 characters
|
|
2023-12-05 02:42:55
|
once is eapiued i tell the name of it
|
|
2023-12-05 02:43:42
|
|
|
2023-12-05 03:51:03
|
watching nsw videos esults it
|
|
2023-12-05 03:51:10
|
https://readabilitylab.xyz/results?type=FAVORITE&idUserExternal=30020&user1=Noto%20Sans&user2=Roboto&user3=Arial&user4=Poynter%20Gothic%20Text&user5=Helvetica&top1=Noto%20Sans&top2=Times&top3=Avenir%20Next&top4=Helvetica&top5=Calibri&sentence=In%20our%20studies%20users%20read%20in%20their%20preferred%20font%20at%20an%20average%20words%20per%20minute.%20Also,%20they%20do%20not%20read%20faster%20or%20slower,%20on%20average,%20by%20reading%20in%20the%20commonly%20preferred%20fonts%20(Noto%20Sans%20and%20Times).
|
|
2023-12-05 03:51:38
|
wuufwebuwfub
|
|
|
Traneptora
|
|
mpv is the only exception
|
|
2023-12-05 05:03:01
|
mpv doesn't use system screenshot feature
|
|
2023-12-05 05:03:09
|
it dumps the video frame buffer
|
|
|
damian101
|
|
Traneptora
mpv doesn't use system screenshot feature
|
|
2023-12-05 05:03:36
|
yes, I know
|
|
|
Traneptora
|
2023-12-05 05:03:44
|
it's not really an exception
|
|
|
damian101
|
2023-12-05 05:03:46
|
but encoding to JPEG is a choice
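For context, mpv's screenshot encoder is user-configurable; a minimal mpv.conf sketch (option names as documented in the mpv manual, values purely illustrative):
```
# mpv.conf — pick the codec used by the screenshot keys (s / S / Ctrl+s)
screenshot-format=png           # jpg (the default), png and webp are supported; newer builds also accept jxl
screenshot-png-compression=7    # zlib level 0-9, only used for png
screenshot-jpeg-quality=90      # only used when screenshot-format=jpg
```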
|
|
|
Traneptora
|
2023-12-05 05:03:52
|
since we're referring to the system screenshot feature
|
|
|
damian101
|
|
Traneptora
since we're referring to the system screenshot feature
|
|
2023-12-05 05:04:17
|
well, mpv can also do actual screenshots of itself with Ctrl + S
|
|
|
Traneptora
|
2023-12-05 05:04:38
|
yes that's a VO_CTRL screenshot
|
|
|
but encoding to JPEG is a choice
|
|
2023-12-05 05:05:02
|
on many android phones the system screenshot encoder is also a choice
|
|
2023-12-05 05:05:10
|
|
|
|
damian101
|
|
Traneptora
|
2023-12-05 05:05:20
|
my phone lets me pick between jpg and png, for example
|
|
|
damian101
|
2023-12-05 05:05:38
|
but I've also only ever seen PNG as default (didn't really use the screenshot feature on my first phones)
|
|
|
Traneptora
|
|
well, mpv can also do actual screenshots of itself with Ctrl + S
|
|
2023-12-05 05:07:21
|
and yes I'm very familiar with how mpv screenshots work, both with regard to hardware and software screenshots
|
|
|
fab
|
2023-12-05 08:11:31
|
use hi ocr screenshot
|
|
|
bonnibel
|
2023-12-11 01:04:44
|
i'd like to have words with whoever decided pfms should be upside down
|
|
|
|
veluca
|
2023-12-11 01:06:00
|
get in line
|
|
|
_wb_
|
2023-12-11 01:18:21
|
bmp does the same thing, right?
|
|
|
bonnibel
|
2023-12-11 01:20:36
|
they can go both ways, depending on the sign of the height
|
|
|
|
veluca
|
2023-12-11 01:21:23
|
well that's even worse
|
|
|
_wb_
|
2023-12-11 02:16:12
|
it's the TIFF philosophy: make it so that an encoder can just write some header and then dump memory. Nice for writing, very annoying for reading.
|
|
|
bonnibel
|
2023-12-11 02:53:37
|
is/was bottom-to-top image data in memory that common?
|
|
|
fab
|
|
fab
use hi ocr screenshot
|
|
2023-12-11 03:47:17
|
Hi ocr screenshots have been slow lately
|
|
|
spider-mario
|
2023-12-11 05:25:42
|
isn’t OpenGL also upside down?
|
|
|
lonjil
|
2023-12-11 05:32:28
|
opengl screen space coordinates are like the upper right quadrant of your typical math xy plane, IIRC. So origin in the lower left and +y going up and +x going right.
|
|
|
spider-mario
|
2023-12-11 09:59:50
|
I thought I remembered the origin being in the top left, but I have very little experience with this so I might very well be wrong
|
|
|
Traneptora
|
|
spider-mario
isn’t OpenGL also upside down?
|
|
2023-12-12 05:18:09
|
It is, opengl is mirrored vertically from convention but origin is in the lower left so the whole image has positive coords
|
|
|
lonjil
|
2023-12-12 09:41:20
|
I always thought that lossy WebP's quality ceiling is entirely due to it always doing chroma subsampling, but in this article (https://giannirosato.com/blog/post/jpegli/) I saw that cjpegli 4:2:0 seems to consistently reach a higher quality level than WebP.
|
|
2023-12-12 09:41:32
|
Does anyone have any insight on this?
|
|
|
username
|
2023-12-12 09:46:53
|
after checking the parameters used, it seems like `-sharp_yuv` was not specified for WebP in those tests, which could have had an effect maybe? https://www.ctrl.blog/entry/webp-sharp-yuv.html
|
|
2023-12-12 09:47:48
|
|
|
2023-12-12 09:48:48
|
oh and also `-af` could have had a positive effect on the results as well
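If someone wants to retest, a cwebp invocation along these lines should exercise both flags (file names are placeholders, flag spellings as in current cwebp):
```
# -sharp_yuv uses a sharper (slower) RGB->YUV conversion, -af auto-adjusts the loop filter strength
cwebp -q 90 -sharp_yuv -af input.png -o output.webp
```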
|
|
|
lonjil
|
2023-12-12 09:49:45
|
I might try to replicate the author's tests using those options tomorrow, then.
|
|
|
username
|
2023-12-12 09:50:50
|
also iirc the author is in this server.
**EDIT**: I meant for giannirosato.com not ctrl.blog (been awake too long and my reading comprehension is a bit low right now, apologies).
|
|
2023-12-12 09:53:03
|
just double checked and yeah, their username is "gb82"
|
|
|
lonjil
|
2023-12-12 09:56:10
|
ah, yeah
|
|
|
Quackdoc
|
2023-12-12 10:02:08
|
just ping him <@703028154431832094>
|
|
|
gb82
|
|
Quackdoc
|
2023-12-12 10:03:03
|
people have questions about webp
|
|
|
gb82
|
2023-12-12 10:03:48
|
what exactly?
|
|
|
Quackdoc
|
2023-12-12 10:04:01
|
> I always thought that lossy WebP's quality ceiling is entirely due to it always doing chroma subsampling, but in this article (https://giannirosato.com/blog/post/jpegli/) I saw that cjpegli 4:2:0 seems to consistently reach a higher quality level than WebP.
> Does anyone have any insight on this?
|
|
|
gb82
|
2023-12-12 10:05:06
|
I'd be willing to believe that lossy WebP does some sort of aggressive filtering to remove artifacts that helps at med/low quality but not at higher fidelity. kind of characteristic of video codecs that are repurposed as image codecs
|
|
|
Quackdoc
|
2023-12-12 10:11:41
|
also, speaking of your site, I tried using jxl-oxide for the corpus page, and it works way better, but much slower. for some reason chrome and firefox won't render the image while another image is decoding, which makes it look like it's taking way longer than it is
|
|
2023-12-12 10:12:16
|
3 images are done decoding in jxl, and sent to the browser for rendering but since browser is stupid it stops rendering them LOL
|
|
2023-12-12 10:13:14
|
this is literally the stupidest thing ever and the only reason why jxl-wasm isn't a good idea
|
|
2023-12-12 10:13:33
|
it's still better than firefox though, which will just give up for some reason; on chrome-based browsers, only 10 fail to load
|
|
2023-12-12 10:14:58
|
also apparently one of the images is kinda busted
|
|
|
gb82
|
|
Quackdoc
3 images are done decoding in jxl, and sent to the browser for rendering but since browser is stupid it stops rendering them LOL
|
|
2023-12-13 03:36:06
|
😭
|
|
|
lonjil
|
2023-12-13 05:06:34
|
<@703028154431832094> could you send me the original images you used in that blog post? I want to replicate your tests.
|
|
|
gb82
|
2023-12-13 06:18:58
|
Oh boy, I'll see if I can find them tonight
|
|
|
fab
|
2023-12-13 06:54:42
|
Jxl sucks
|
|
|
a goat
|
2023-12-13 07:34:23
|
https://www.youtube.com/watch?v=oIQWsQNW20Q
|
|
|
lonjil
|
2023-12-13 10:24:53
|
<@245794734788837387> with the image I tested (cathedral from imagecompression.info), af makes no difference at the highest qualities, but sharp yuv lets it reach a very slightly higher score:
|
|
2023-12-13 10:29:09
|
Next I suppose I'll test jpegli 420 and see how it goes
|
|
2023-12-13 11:45:15
|
Still only testing one image, but here's mozjpeg vs jpegli vs webp
|
|
2023-12-14 12:22:55
|
larger range and logarithmic x axis:
|
|
2023-12-14 12:23:32
|
and does anyone here have any good pointers on the correct way to average the results of comparisons like this across several images?
|
|
|
Miku
|
2023-12-14 07:49:56
|
I have recently been researching several image formats for storing personal photos. Initially, I was familiar with webp, as it has broad support. There is also avif, which is considered better than webp. At first, I thought it was a good choice until I encountered a concern raised in the comments of an issue: https://github.com/AOMediaCodec/av1-avif/issues/111#issuecomment-717710961
> Not only that, but AVIF lossless isn't even true lossless, since it requires the image be converted to yuv4:4:4 or yuv4:2:0, which means if your image is originally RGB, you're incurring a lossy conversion just by changing from RGB to YUV. I've heard of ways in which the image can be losslessly converted between RGB and YUV, but that is unorthodox and I do not recommend it.
Consequently, I turned to JXL. In fact, I was initially aware of the JXL format, but the inclusion of 'JPEG' in its name made me perceive it as somewhat outdated. However, I later discovered that my assumption was incorrect.
|
|
2023-12-14 07:50:45
|
AVIF sucks
|
|
|
Tirr
|
2023-12-14 07:51:40
|
avif isn't designed for lossless (it's a video codec anyway)
|
|
2023-12-14 07:52:53
|
webp lossy is based on a video codec, but webp lossless is a completely different codec designed for lossless
|
|
|
Miku
|
2023-12-14 07:57:51
|
yes, but I don't know why they made this decision...
> it requires the image be converted to yuv4:4:4 or yuv4:2:0, which means if your image is originally RGB, you're incurring a lossy conversion just by changing from RGB to YUV
|
|
|
Tirr
|
2023-12-14 07:59:18
|
I guess it's not a problem if it's lossy video
|
|
|
Quackdoc
|
2023-12-14 08:00:47
|
storing RGB video is kinda not fun unless you have specific needs like a mezzanine. and for images, well, the current av1 encoders really aren't superb at higher fidelities anyway, at least in terms of single images. JXL and avif are quite complementary IMO
|
|
|
|
veluca
|
|
Miku
yes, but I don't know why they made this decision...
> it requires the image be converted to yuv4:4:4 or yuv4:2:0, which means if your image is originally RGB, you're incurring a lossy conversion just by changing from RGB to YUV
|
|
2023-12-14 08:11:21
|
that's actually not true, you can store RGB444 AVIF
|
|
2023-12-14 08:11:31
|
but tbh I'd rather use PNG
|
|
|
Miku
|
2023-12-14 08:14:15
|
it is, AVIF can store RGB444 but it must convert to YUV first
|
|
|
Quackdoc
|
2023-12-14 08:15:08
|
cant aomenc encode RGB directly? if so that sounds like a limitation of whatever tool that is no?
|
|
|
|
veluca
|
2023-12-14 08:15:40
|
No no, you don't have to - of course that makes the already abysmal compression rate even worse, so I can't see a single reason why anybody *would*, but you can
|
|
|
HCrikki
|
2023-12-14 08:16:55
|
cavif-rs can encode to rgb but files will end up larger
|
|
|
Quackdoc
|
2023-12-14 08:17:24
|
isn't cavif just an encoder + muxer for rav1e?
|
|
|
|
veluca
|
2023-12-14 08:17:27
|
So can avifenc, if you pass the correct cicp parameters
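For example (a sketch; I'm assuming current avifenc flag syntax, and file names are placeholders), CICP matrix coefficient 0 means "identity", i.e. the planes stay RGB instead of being converted to YCbCr:
```
# 1 = BT.709 primaries, 13 = sRGB transfer, 0 = identity matrix (no RGB->YUV conversion)
avifenc --cicp 1/13/0 input.png output.avif
```
If I recall the libavif defaults correctly, avifenc -l / --lossless also selects the identity matrix on its own.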
|
|
|
Miku
|
|
veluca
but tbh I'd rather use PNG
|
|
2023-12-14 08:32:06
|
you are right, it's a bug in imagemagick.
|
|
2023-12-14 08:38:27
|
❯ avifenc -l d.png -o output_image.avif
❯ magick compare -metric MAE d.png output_image.avif null:
0 (0)
❯ magick -quality 100 d.png d.avif
❯ magick compare -metric MAE d.png d.avif null:
289.032 (0.00441035)
|
|
2023-12-14 08:51:14
|
I just noticed that it took 10s to encode a 560KB PNG to a lossless avif.
JXL fantastic!
|
|
|
|
veluca
|
2023-12-14 08:53:11
|
lossless avif is a waste of everyone's mental energies to even think about 😛 just use pretty much any other lossless format, it will be better
|
|
|
_wb_
|
|
lonjil
Does anyone have any insight on this?
|
|
2023-12-14 06:37:41
|
One difference between webp and jpeg is that webp uses tv range ycbcr while jpeg uses full range. Also I think the minimum dct coeff dequantization constants in webp are not 1 like in jpeg but 4, not sure if the dct scaling compensates for that or not...
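For the range part, roughly, in the 8-bit case (the usual limited-range mapping; full range just uses all 256 codes):
```
Y'_limited = 16  + (219/255) * Y'_full           # luma:   0..255 -> 16..235
C_limited  = 128 + (224/255) * (C_full - 128)    # chroma: 0..255 -> ~16..240, centred on 128
```
so webp's luma only spans about 219 code values instead of 256 before quantization even starts.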
|
|
2023-12-14 07:26:05
|
Avif can do lossless and progressive just like jxl can do animation: the feature is there, box can be checked (but it wasn't designed for that)
|
|
|
lonjil
|
|
_wb_
One difference between webp and jpeg is that webp uses tv range ycbcr while jpeg uses full range. Also I think the minimum dct coeff dequantization constants in webp are not 1 like in jpeg but 4, not sure if the dct scaling compensates for that or not...
|
|
2023-12-14 07:28:48
|
makes sense, thank you
|
|
|
spider-mario
|
2023-12-14 07:40:42
|
right, the j in ffmpeg's legacy pix_fmts (yuvj444) stands for jpeg iirc
|
|
|
lonjil
|
2023-12-14 08:53:45
|
Now to figure out the correct way to make this easier to look at...
|
|
2023-12-14 09:45:54
|
like, what's the correct notion of average here?
|
|
2023-12-14 09:51:08
|
for each tool and settings combination, I essentially have a tuple of (quality setting, score, bpp).
I think gb82 simply took the arithmetic mean of both the score and the bpp for each quality setting, which seems useful for figuring out which quality option to pick when putting something into production, but I don't think it's the only option.
I was thinking it might be good to go by iso-score instead of quality option, to find the average bpp needed to achieve some particular score, but that has the issue that the images don't have the same ranges of scores, and of course that no two datapoints have exactly the same score.
|
|
|
Traneptora
|
|
lonjil
for each tool and settings combination, I essentially have a tuple of (quality setting, score, bpp).
I think gb82 simply took the arithmetic mean of both the score and the bpp for each quality setting, which seems useful for figuring out which quality option to pick when putting something into production, but I don't think it's the only option.
I was thinking it might be good to go by iso-score instead of quality option, to find the average bpp needed to achieve some particular score, but that has the issue that the images don't have the same ranges of scores, and of course that no two datapoints have exactly the same score.
|
|
2023-12-15 12:33:49
|
if you're plotting on a log scale then would geo mean be better?
|
|
|
DZgas Ж
|
|
Miku
AVIF sucks
|
|
2023-12-15 05:51:21
|
Based
|
|
|
Miku
I have recently been researching several image formats for storing personal photos. Initially, I was familiar with webp, as it has broad support. There is also avif, which is considered better than webp. At first, I thought it was a good choice until I encountered a concern raised in the comments of an issue: https://github.com/AOMediaCodec/av1-avif/issues/111#issuecomment-717710961
> Not only that, but AVIF lossless isn't even true lossless, since it requires the image be converted to yuv4:4:4 or yuv4:2:0, which means if your image is originally RGB, you're incurring a lossy conversion just by changing from RGB to YUV. I've heard of ways in which the image can be losslessly converted between RGB and YUV, but that is unorthodox and I do not recommend it.
Consequently, I turned to JXL. In fact, I was initially aware of the JXL format, but the inclusion of 'JPEG' in its name made me perceive it as somewhat outdated. However, I later discovered that my assumption was incorrect.
|
|
2023-12-15 05:56:09
|
for those who do not know, I repeat:
webp has a separate algorithm for lossless compression.
jxl has a separate algorithm for lossless compression.
avif does not have this. it uses the same algorithms as for lossy compression.
avif lossless is a complete analogue of jpeg lossless
|
|
|
Tirr
webp lossy is based on a video codec, but webp lossless is a completely different codec designed for lossless
|
|
2023-12-15 05:58:57
|
but it's funny that the webp lossless algorithm is called VP8L
|
|
|
yoochan
|
|
lonjil
Now to figure out the correct way to make this easier to look at...
|
|
2023-12-15 08:46:54
|
what if, instead of a score, you plotted the difference in score between the contender and a reference (e.g. the reference would always be jpegli yuv444 or jpegxl -e6)? that way, if you have more than one image to plot, we could hope that the deviation in score relative to the reference would show patterns that are easier to interpret
|
|
|
lonjil
|
2023-12-15 08:55:14
|
such a comparison could make sense, yeah
|
|
2023-12-15 10:02:46
|
Here's what I'm thinking. For every theoretical ssimulacra2 score, I check which test images have at least one score bigger and smaller than that, then I compute a theoretical bpp for the current score I'm plotting, by first linearly interpolating between the real scores for each relevant image, and then taking the arithmetic mean of those values.
|
|
|
Traneptora
if you're plotting on a log scale then would geo mean be better?
|
|
2023-12-15 10:42:02
|
I don't think so? Arithmetic average for bpp means that you could in principle take that value and multiply it by the total number of pixels in all your photos to get a good idea of how much space they will take up, if they are average-ish.
|
|
|
lonjil
Here's what I'm thinking. For every theoretical ssimulacra2 score, I check which test images have at least one score bigger and smaller than that, then I compute a theoretical bpp for the current score I'm plotting, by first linearly interpolating between the real scores for each relevant image, and then taking the arithmetic mean of those values.
|
|
2023-12-15 12:09:12
|
actually, not a good idea because sharp jumps are introduced whenever a new image joins the average
|
|
2023-12-15 12:18:03
|
what I can do is limit it so that it only takes the average over values defined for all images
|
|
2023-12-15 12:18:11
|
that does result in some of them being rather cut short. might be OK at the high end of quality, but I think I need to regenerate my data down to q=0 for all the tools. (I used q=30 as the lowest quality setting when I generated this data)
|
|
2023-12-15 01:31:50
|
worked pretty well, I'd say.
|
|
|
yoochan
|
2023-12-15 02:15:08
|
the curve is nice, the average is computed on what ?
|
|
2023-12-15 02:15:20
|
isn't median more meaningful ?
|
|
|
lonjil
|
|
yoochan
the curve is nice, the average is computed on what ?
|
|
2023-12-15 02:19:32
|
Bits per pixel at the given ssimulacra2 score
|
|
2023-12-15 02:20:48
|
Median would tell you that half the images are bigger and half are smaller, mean tells you how much space each image takes on average. Both are meaningful.
|
|
|
w
|
2023-12-15 02:25:51
|
i think median is more meaningful
|
|
2023-12-15 02:25:58
|
mean can be heavily skewed by outliers
|
|
|
fab
|
2023-12-15 04:20:24
|
JPEGli 420 was with xyb
|
|
2023-12-15 04:20:39
|
How did you get a 40% reduction in the second slide?
|
|
2023-12-15 04:22:26
|
It seems 22% to 4,004 better than mozjpeg
|
|
2023-12-15 04:22:39
|
Like always there's no news in that
|
|
|
lonjil
|
|
fab
JPEGli 420 was with xyb
|
|
2023-12-15 04:38:14
|
I did not use 420 with XYB, because the results were very terrible.
|
|
|
fab
How you got 40% reduction in the second slide
|
|
2023-12-15 04:39:07
|
that was just the result
|
|
2023-12-15 04:40:25
|
I just re-ran the tests this time down to quality setting 1 in all the tools, to get better coverage at lower qualities
|
|
2023-12-15 04:44:47
|
will do median later
|
|
2023-12-15 04:52:44
|
oh, and the combined average of all the images instead of the group split:
|
|
|
fab
|
2023-12-15 05:40:13
|
How is q 66.17 444
|
|
2023-12-15 05:40:44
|
Vs q 84 420
|
|
2023-12-15 05:42:05
|
And 76.92 444
|
|
|
lonjil
|
2023-12-15 05:45:13
|
I don't know. I only tested whole integer quality options
|
|
|
yoochan
|
2023-12-15 08:15:12
|
would you mind sharing the array of your results? like in a google sheet? so we could play with the way to represent them 🙂
|
|
2023-12-15 08:15:41
|
why didn't you include jpegxl also?
|
|
|
lonjil
|
|
yoochan
why didn't you include jpegxl also?
|
|
2023-12-15 08:23:41
|
didn't get around to it yet. I'll run a script to do it and avif overnight.
|
|
|
diskorduser
|
2023-12-16 10:24:39
|
I think it should be something like 'to convert an existing jpeg to jxl losslessly'; for other formats, add -d 0 or -q 100.
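i.e. something along these lines (as far as I know this matches current cjxl behaviour; file names are placeholders):
```
# an existing JPEG is transcoded losslessly by default (the DCT coefficients are reused)
cjxl input.jpg output.jxl
# for other inputs (PNG etc.), request mathematically lossless explicitly
cjxl input.png output.jxl -d 0    # -q 100 is equivalent
```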
|
|
|
sklwmp
|
2023-12-16 10:25:17
|
yea, the Arch wiki page isn't the best... but it's a start
|
|
|
spider-mario
|
2023-12-16 10:38:55
|
I’ve reworded it slightly
|
|
|
yoochan
|
2023-12-16 02:56:28
|
that's better 🙂
|
|
|
diskorduser
|
|
sklwmp
yea, the Arch wiki page isn't the best... but it's a start
|
|
2023-12-17 03:41:59
|
It will improve with the blessings of <@416586441058025472>
|
|
|
Traneptora
|
2023-12-17 04:27:20
|
please no
|
|
|
sklwmp
|
|
diskorduser
It will improve with the blessings of <@416586441058025472>
|
|
2023-12-17 05:27:33
|
optimal settings: -d 0.1458934
|
|
|
spider-mario
|
2023-12-17 12:19:21
|
`-d 3.14159265359`
|
|
|
diskorduser
|
2023-12-17 12:25:31
|
-d 2.7182818284590452353602874713527
is also good.
|
|
|
MSLP
|
2023-12-17 04:26:13
|
what about `-d 1.61803398875` ?
|
|
|
yoochan
|
2023-12-17 04:43:33
|
sqrt(2) for a distance would be very appropriate 😄
|
|
|
lonjil
|
2023-12-17 04:55:24
|
what about negative distance
|
|
2023-12-17 04:55:28
|
i want better than lossless
|
|
|
diskorduser
|
2023-12-17 05:02:48
|
-d √(-1)
|
|
|
yoochan
|
2023-12-17 05:22:20
|
imaginary quality ? would it be enough for you <@167023260574154752> ?
|
|
|
lonjil
|
|
MSLP
|
|
diskorduser
-d √(-1)
|
|
2023-12-17 05:48:33
|
don't do it man, you'll activate the skynet
|
|
|
Traneptora
|
2023-12-17 07:05:52
|
Complex Qualities are incomparably good!
|
|
|
lonjil
|
2023-12-17 07:07:29
|
I'd like to use surreal qualities. -d 1/inf, not lossless, but the differences are smaller than any positive real
|
|
|
|
afed
|
2023-12-18 04:38:47
|
<@853026420792360980> does the libplacebo integration in ffmpeg have access to the full settings?
`extra_opts` has a very unclear description
e.g. how can I set mpv-like settings `--dscale=ewa_robidoux --dscale-param1=0 --dscale-param2=0` for ffmpeg?
`downscaler_param1=0` etc doesn't seem to work
|
|
|
Traneptora
|
|
afed
<@853026420792360980> does the libplacebo integration in ffmpeg have access to the full settings?
`extra_opts` has a very unclear description
e.g. how can I set mpv-like settings `--dscale=ewa_robidoux --dscale-param1=0 --dscale-param2=0` for ffmpeg?
`downscaler_param1=0` etc doesn't seem to work
|
|
2023-12-18 02:37:14
|
I don't know, I'd have to read the manpage
|
|
|
Quackdoc
|
2023-12-18 02:39:23
|
~~just encode using mpv itself~~
|
|
|
Traneptora
|
2023-12-18 02:40:40
|
mpv encoding was a mistake
|
|
|
Quackdoc
|
2023-12-18 02:48:27
|
I don't think very many people will contest that lol
|
|
|
|
afed
|
2023-12-18 05:41:55
|
yeah, but there's only this
and it's not really clear which options actually work
|
|
|
bonnibel
|
2023-12-18 09:09:17
|
you can use anything from libplacebo's options.c in it
|
|
|
DZgas Ж
|
2023-12-24 01:52:56
|
|
|
2023-12-24 01:53:07
|
how to encode this
|
|
2023-12-24 02:12:16
|
I can't open it with anything at all, but I want to note that Windows calmly decoded the png line by line in streaming mode and created a preview (in a minute)
**mozjpeg-v4.1.1 Insufficient memory (case 4)**
**cwebp.exe Decoding failed.Status: 3(BITSTREAM_ERROR)**
|
|
2023-12-24 02:13:08
|
libjxl ate 5 gb of RAM, thought for 20 seconds, and died
|
|
|
diskorduser
|
2023-12-24 02:45:21
|
but but who drew the image that big?
|
|
|
DZgas Ж
|
|
diskorduser
but but who drew the image that big?
|
|
2023-12-24 02:53:49
|
neural network
|
|
|
diskorduser
|
2023-12-24 03:01:45
|
oh ok
|
|
|
DZgas Ж
|
2023-12-24 03:08:15
|
hm.
|
|
2023-12-24 03:18:01
|
I was able to encode jpeg, but not jxl
|
|
2023-12-24 03:24:58
|
vips with mozjpeg gives the same memory error as mozjpeg when using the optimize_coding parameter. So I decided to try to encode the image via mozjpeg directly... And... how do I turn it off?? there is no parameter here
|
|
2023-12-24 03:26:17
|
🙄
|
|
|
MSLP
|
2023-12-24 03:29:03
|
you have to use `-revert`
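e.g. with mozjpeg's cjpeg, something like this (flag names from the cjpeg usage text, file names are placeholders):
```
# -revert drops the mozjpeg-specific defaults (progressive scans, optimized Huffman tables, ...)
# and goes back to plain libjpeg behaviour
cjpeg -revert -quality 90 -outfile out.jpg in.ppm
```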
|
|
|
DZgas Ж
|
|
MSLP
you have to use `-revert`
|
|
2023-12-24 03:47:45
|
That's great! jpeg encoding now takes up 2 megabytes of RAM
|
|
2023-12-24 03:56:45
|
|
|
|
MSLP
|
2023-12-24 04:02:03
|
hmm... it's a curious jpeg file, it's very compressible with a general-purpose compression algorithm
|
|
|
DZgas Ж
|
|
MSLP
hmm... it's a curious jpeg file, it's very compressible with a general-purpose compression algorithm
|
|
2023-12-24 04:08:04
|
because disabling Huffman optimization is what I did. so the Huffman compression in zip does the work instead of the optimized Huffman coding in jpeg
|
|
|
gb82
|
2023-12-26 03:47:08
|
Mac people - can someone open this in Preview & tell me if there are a bunch of green and black lines that show up after waiting a second or zooming in?
|
|
2023-12-26 03:47:27
|
|
|
|
|
gbetter
|
2023-12-26 11:16:33
|
In Preview only, yes, eventually the green lines show up (opens fine at first). Opened in Safari the green lines never show up even after extensive zooming. MacBook Pro, M2 chip, 16GB RAM, Sonoma 14.2.1
|
|
|
gb82
|
2023-12-27 01:52:24
|
I wonder why... So strange
|
|
2023-12-27 01:52:34
|
I'm gonna do a feedback report
|
|
|
|
afed
|
2023-12-27 11:01:12
|
https://youtu.be/DMQ_HcNSOAI
|
|
|
damian101
|
2023-12-29 11:22:38
|
this happens when I try to build libjxl-metrics-git from the AUR...
```
a2x: ERROR: "xmllint" --nonet --noout --valid "/home/damian101/.cache/yay/libjxl-metrics-git/src/build/djxl.xml" returned non-zero exit status 4
a2x: ERROR: "xmllint" --nonet --noout --valid "/home/damian101/.cache/yay/libjxl-metrics-git/src/build/cjxl.xml" returned non-zero exit status 4
make[2]: *** [CMakeFiles/manpages.dir/build.make:78: djxl.1] Error 1
make[2]: *** Waiting for unfinished jobs....
[ 4%] Building CXX object lib/CMakeFiles/jxl_extras_codec-obj.dir/extras/exif.cc.o
make[2]: *** [CMakeFiles/manpages.dir/build.make:74: cjxl.1] Error 1
make[1]: *** [CMakeFiles/Makefile2:393: CMakeFiles/manpages.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
```
|
|
|
a goat
|
2023-12-30 08:42:00
|
Does anyone have any experience with machine learning?
I want to use butteraugli, and perhaps another, broader psychovisual metric, as a way to train a neural network to emulate the jpeg processing my camera does on raw files, so that I can use its look in a lossless format (probably jxl).
I know there's software out there that will do this, but most of it is proprietary, and I want to get into neural network programming and have a command-line solution available.
|
|
2023-12-30 08:43:37
|
I have thousands of raw files and thousands of jpegs to train from, though now that I think about it I might need to preprocess the jpegs so that it doesn't end up just emulating the lossy compression as well
|
|
|
damian101
|
2023-12-30 08:45:21
|
that's a lot of effort just to end up with reduced image quality <:thinkies:895863009820414004>
|
|
2023-12-30 08:46:28
|
also, butteraugli is really slow for such a thing
|
|
2023-12-30 08:47:24
|
better use the original ssimulacra2 implementation if you want something robust and reasonably fast
|
|
|
a goat
|
|
that's a lot of effort just to end up with reduced image quality <:thinkies:895863009820414004>
|
|
2023-12-30 08:51:38
|
You're right. Since I have the original raw files I should start with removing the compression artifacts before I continue. I'm just less clear on how to do that, as the visual difference between raw and in camera processed jpeg can be quite high
|
|
|
damian101
|
2023-12-30 08:52:26
|
oh
|
|
2023-12-30 08:52:36
|
it's not about the JPEG compression artifacts
|
|
2023-12-30 08:52:58
|
but the processing that happens before the JPEG encoding
|
|
|
a goat
|
2023-12-30 08:53:00
|
It's about copying the image processing over
|
|
|
damian101
|
2023-12-30 08:53:02
|
was confused, haha
|
|
|
a goat
|
2023-12-30 08:53:03
|
Yeah
|
|
|
better use the original ssimulacra2 implementation if you want something robust and reasonably fast
|
|
2023-12-30 08:53:34
|
This should be more than adequate for what I want to do
|
|
|
damian101
|
|
a goat
|
2023-12-30 08:55:22
|
It's a bit of a chicken-and-egg situation, though. Do I attempt to copy the look first, and then use that to remove the artifacts so that my model can be retrained to not copy over the artifacts, or do I attempt to remove the artifacts somehow
|
|
2023-12-30 08:56:00
|
And then the look
|
|
2023-12-30 08:57:13
|
I wish cameras weren't such black boxes, but there's no custom firmware solution to figure out what it looks like before being sent to the jpeg
|
|
|
damian101
|
2023-12-30 08:57:52
|
a large part is probably simple per-pixel operations that are the same throughout the image
|
|
|
a goat
|
2023-12-30 08:58:57
|
Yes, it has to be mostly a LUT and then some additional changes that aren't very computationally intensive
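if it really is mostly a 3D LUT, one cheap sanity check (purely illustrative; `camera-look.cube` is a hypothetical LUT file) would be to apply a candidate LUT with ffmpeg's lut3d filter and then score the result against the in-camera JPEG with ssimulacra2:
```
ffmpeg -i raw_developed.png -vf lut3d=camera-look.cube candidate.png
```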
|
|
|
damian101
|
2023-12-30 08:59:16
|
right
|
|
|
a goat
|
2023-12-30 09:02:20
|
I've tried emulating the look, myself, and it's surprisingly difficult. The exact processing steps are definitely something companies invest a lot of money into
|
|
|
Oleksii Matiash
|
|
a goat
You're right. Since I have the original raw files I should start with removing the compression artifacts before I continue. I'm just less clear on how to do that, as the visual difference between raw and in camera processed jpeg can be quite high
|
|
2023-12-30 09:07:04
|
Many camera makers provide their proprietary raw converters that can convert exactly as camera does, and these converters for sure can save tiffs
|
|
|
a goat
|
|
Oleksii Matiash
Many camera makers provide their proprietary raw converters that can convert exactly as camera does, and these converters for sure can save tiffs
|
|
2023-12-30 09:10:02
|
Yes, this is definitely the best option at the moment to get better training data. I'm going to see what's available for Panasonic
|
|
2023-12-30 09:23:02
|
Hmm. I was able to find a proprietary solution that's exactly this, but it's only available for canon cameras. I can at least start there with my old canon raws
|
|
|
Oleksii Matiash
|
|
a goat
Hmm. I was able to find a proprietary solution that's exactly this, but it's only available for canon cameras. I can at least start there with my old canon raws
|
|
2023-12-30 09:51:27
|
For Panasonic the "native" raw converter is SILKYPIX Developer Studio SE, unfortunately 😦
|
|
|
spider-mario
|
2023-12-30 10:37:45
|
is it that terrible?
|
|
|
Oleksii Matiash
|
|
spider-mario
is it that terrible?
|
|
2023-12-30 03:02:59
|
It is just "generic" converter, not native to Panasonic. But yes, in my opinion it is terrible in terms of usability. But idk how it is in terms of mimicking Panasonic in-camera jpeg look
|
|
|
lonjil
|
2023-12-30 03:35:29
|
I wanted to know the correct way to resize sRGB images with ImageMagick, in linear RGB, and this is what I got from the docs...
```
magick portrait.jpg -gamma .45455 -resize 25% -gamma 2.2 \
-quality 92 passport.jpg
```
>.>
|
|
|
|
veluca
|
2023-12-30 03:45:47
|
*triggered*
|
|
|
lonjil
|
2023-12-30 03:58:08
|
ok, it seems what you should do, mentioned in a more obscure part of the docs, is ```
magick foo.png -colorspace RGB -resize 25% \
-colorspace sRGB foo.png
```
unless you're using an older version of IM, in which case the colorspace options should be swapped around, as the sRGB option converted FROM sRGB and the RGB option converted TO sRGB.
|
|
|
Traneptora
|
|
lonjil
I wanted to know the correct way to resize sRGB images with ImageMagick, in linear RGB, and this is what I got from the docs...
```
magick portrait.jpg -gamma .45455 -resize 25% -gamma 2.2 \
-quality 92 passport.jpg
```
>.>
|
|
2023-12-30 08:42:45
|
```
ffmpeg -i input.png -vf libplacebo output.png
```
:D
|
|
2023-12-30 08:43:51
|
scales in linear light by default if the input is tagged properly
|
|
|
lonjil
|
|
Traneptora
```
ffmpeg -i input.png -vf libplacebo output.png
```
:D
|
|
2023-12-30 08:44:59
|
nice. If it isn't tagged at all, does it assume sRGB? And where can I read about how to pass the right options to libplacebo?
|
|
|
Traneptora
|
|
lonjil
nice. If it isn't tagged at all, does it assume sRGB? And where can I read about how to pass the right options to libplacebo?
|
|
2023-12-30 08:45:24
|
if it's not tagged I don't know if it assumes sRGB or if it scales in the previous space
|
|
2023-12-30 08:46:16
|
you can always test it
|
|
|
lonjil
|
2023-12-30 08:47:22
|
aye
|
|
2023-12-30 08:50:39
|
doesn't seem to like my computer, hm ```
[libplacebo @ 0x7fdaad9a8080] Missing vulkan hwdevice for vf_libplacebo.
```
|
|
|
Traneptora
|
|
lonjil
doesn't seem to like my computer, hm ```
[libplacebo @ 0x7fdaad9a8080] Missing vulkan hwdevice for vf_libplacebo.
```
|
|
2023-12-30 08:58:02
|
sounds like you're missing the vulkan headers
|
|
2023-12-30 08:58:06
|
or the vulkan driver
|
|
2023-12-30 08:58:13
|
what GPU? amd? intel? nvidia?
|
|
|
lonjil
|
2023-12-30 08:59:01
|
Intel iGPU from 2015, but I do have Vulkan and a properly functioning driver
|
|
|
Traneptora
|
2023-12-30 08:59:14
|
are you using git master FFmpeg?
|
|
|
lonjil
|
|
Traneptora
|
2023-12-30 08:59:23
|
what version?
|
|
2023-12-30 08:59:52
|
I ask cause vf_libplacebo currently automatically initializes a vulkan hwcontext if one is not provided
|
|
2023-12-30 08:59:58
|
but that wasn't always the case
|
|
|
lonjil
|
2023-12-30 09:00:44
|
I ran `ffmpeg -init_hw_device vulkan ...` and got the message ```
[libplacebo @ 0x7f0e50e79b00] Missing device feature: hostQueryReset```
Versions are
```
ffmpeg version 6.0.1
libplacebo-6.338.1```
|
|
2023-12-30 09:02:25
|
looks newer than the last fixed issue I found when googling that error
|
|
|
Traneptora
|
|
lonjil
I ran `ffmpeg -init_hw_device vulkan ...` and got the message ```
[libplacebo @ 0x7f0e50e79b00] Missing device feature: hostQueryReset```
Versions are
```
ffmpeg version 6.0.1
libplacebo-6.338.1```
|
|
2023-12-30 09:06:37
|
that's a bug in older FFmpeg
|
|
|
lonjil
|
2023-12-30 09:08:05
|
ah, fixed in 6.1?
|
|
|
Traneptora
|
|
lonjil
ah, fixed in 6.1?
|
|
2023-12-30 09:09:38
|
yea, Lynne's fixes were merged in 6.1
|
|
|
lonjil
|
|
Traneptora
|
2023-12-30 09:10:14
|
```
Apr 30 13:00:44 <Traneptora> Lynne: where's your branch so I can test it?
Apr 30 13:02:26 <Lynne> https://github.com/cyanreg/FFmpeg/tree/vulkan
Apr 30 13:03:00 <Traneptora> ty
Apr 30 13:03:46 <Traneptora> shouldn't be this hard for a girl to scale in linear light up in here
Apr 30 13:11:05 <Traneptora> Lynne: [libplacebo @ 0x55ccd473f540] Missing device feature: dynamicRendering, what does this mean?
Apr 30 13:11:39 <Traneptora> I have an nvidia GTX 1070, is that actually missing or is this some driver/bug/etc. issue
Apr 30 13:14:05 <Lynne> it's my fault
Apr 30 13:14:30 <Traneptora> would you like a log?
Apr 30 13:14:33 <Traneptora> or do you know what's up
Apr 30 13:15:29 <Lynne> fixed
Apr 30 13:17:37 <Traneptora> thanks, now it's missing hostQueryReset
Apr 30 13:17:41 <Traneptora> [libplacebo @ 0x5620635d9e00] Missing device feature: hostQueryReset
Apr 30 13:20:03 <Lynne> thanks, fixed
Apr 30 13:21:33 <Traneptora> looks like that did it, thanks
```
|
|
2023-12-30 09:10:16
|
here's an IRC log
|
|
2023-12-30 09:10:17
|
for context
|
|
2023-12-30 09:10:21
|
6.0 was released Feb 28
|
|
|
lonjil
|
|
Traneptora
|
2023-12-30 09:11:21
|
6.1.1 was just released
|
|
2023-12-30 09:11:22
|
try that
|
|
|
lonjil
|
2023-12-30 09:12:34
|
apparently my distro upgraded to 6.1 several weeks ago, so I just need to update my computer
|
|
|
Traneptora
|
2023-12-30 09:12:39
|
noice
|
|
2023-12-30 09:12:49
|
a number of nice libjxl fixes are in 6.1 as well
|
|
2023-12-30 09:13:08
|
notably the jpegxl parser
|
|
|
lonjil
|
2023-12-30 09:13:58
|
nice
|
|
2023-12-30 09:21:57
|
hooray it's working ```
MESA-INTEL: warning: ../src/intel/vulkan/anv_formats.c:794: FINISHME: support more multi-planar formats with DRM modifiers
```
|
|
2023-12-30 09:30:32
|
hooray `ffmpeg -i foo.png -vf "libplacebo=w=iw*0.5:h=-1" bar.png`
|
|
|
Traneptora
|
|
lonjil
hooray it's working ```
MESA-INTEL: warning: ../src/intel/vulkan/anv_formats.c:794: FINISHME: support more multi-planar formats with DRM modifiers
```
|
|
2023-12-30 09:31:58
|
drivers 😔
|
|
2023-12-30 09:32:23
|
glad it works tho
|
|
|
lonjil
|
2023-12-30 09:32:40
|
now to test how it does with the color spaces and stuff
|
|
|
Traneptora
|
2023-12-30 09:32:44
|
wdym?
|
|
|
lonjil
|
|
Traneptora
if it's not tagged I don't know if it assumes sRGB or if it scales in the previous space
|
|
2023-12-30 09:33:06
|
^
|
|
|
Traneptora
|
2023-12-30 09:33:12
|
o that
|
|
2023-12-30 09:33:13
|
yea
|
|
|
lonjil
|
2023-12-30 09:37:25
|
alright, it seems my file is indeed untagged. So I guess I'll tag it and see if there's a difference 🤔
|
|
2023-12-30 09:40:33
|
lol, when you write a file using ImageMagick and specify sRGB, ffprobe finds the following information in it: ```
color_transfer=unknown
color_primaries=bt709
```
color_transfer is what should tell it whether it's linear or sRGB, right?
|
|
|
bonnibel
|
2023-12-30 09:41:18
|
-color_trc before the -i
|
|
|
lonjil
|
2023-12-30 09:41:33
|
danke
|
|
|
bonnibel
|
2023-12-30 09:41:54
|
if libplacebo gets an unknown transfer function itll leave it untouched if possible but otherwise assumes gamma22 afaik
|
|
2023-12-30 09:42:26
|
at least for (de)linearizing and sigmoidization
|
|
|
Quackdoc
|
2023-12-30 09:46:31
|
>gamma 2.2
|
|
2023-12-30 09:46:35
|
https://cdn.discordapp.com/emojis/657641202945884160.webp?size=48&name=SweatingTowelGuy&quality=lossless
|
|
|
lonjil
|
|
bonnibel
-color_trc before the -i
|
|
2023-12-30 09:56:39
|
`-color_trc srgb`, right?
|
|
2023-12-30 09:57:04
|
oh, `iec61966-2-1` seems to work
|
|
|
spider-mario
|
2023-12-30 10:00:21
|
yes, or 13
|
|
2023-12-30 10:01:55
|
for PQ, I generally write “smpte2084”, but for sRGB and HLG, usually just 13 and 18 respectively
|
|
2023-12-30 10:02:02
|
can’t be bothered to look up their ffmpeg names
|
|
|
bonnibel
|
2023-12-30 10:03:32
|
ffmpegs input color options are such a pain to look up (google never finds them somehow) that i just ffinfo or ffmpeg -f null - a file that has the curve / colorspace / matrix that i want and just copy it from there
|
|
|
spider-mario
|
2023-12-30 10:05:17
|
come to think of it, I think part of why I remember that PQ and HLG are 16 and 18 is `avifenc`
|
|
2023-12-30 10:05:35
|
`--cicp 9/16/9` for Rec. 2020 primaries, PQ TRC and Rec. 2020 non-constant luminance matrix
|
|
|
Traneptora
|
|
bonnibel
if libplacebo gets an unknown transfer function itll leave it untouched if possible but otherwise assumes gamma22 afaik
|
|
2023-12-30 10:12:36
|
it does not
|
|
|
lonjil
lol, when you write a file using ImageMagick and specify sRGB, ffprobe finds the following information in it: ```
color_transfer=unknown
color_primaries=bt709
```
color_transfer is what should tell it whether it's linear or sRGB, right?
|
|
2023-12-30 10:12:58
|
check the chunks, I'm guessing it has `cHRM` and `gAMA`
|
|
2023-12-30 10:13:15
|
avcodec compares `cHRM` to the values in H.273 and if it matches a known tag, it uses that (provided `iCCP`, `cICP`, or `sRGB` are absent)
|
|
2023-12-30 10:13:49
|
however, it doesn't attempt to interpret `gAMA`
|
|
2023-12-30 10:14:58
|
I could fix that, tbh
|
|
|
spider-mario
|
2023-12-30 10:15:21
|
it means you’ll have to deal with the broken PNGs from ImageMagick
|
|
2023-12-30 10:15:32
|
those meant to be sRGB but with a gAMA=0.45455 and no sRGB chunk
|
|
|
Traneptora
|
2023-12-30 10:15:50
|
no, I mean, there's an H.273 entry for gamma22
|
|
2023-12-30 10:16:15
|
or rather, gamma22, gamma28, gamma26, and gamma1 have H.273 entries
|
|
|
spider-mario
|
2023-12-30 10:16:36
|
right, but then, PNGs that were intended to be sRGB will be interpreted as gamma 2.2
|
|
2023-12-30 10:16:44
|
(although in fairness, that’s already happening in Chrome)
|
|
|
Traneptora
|
2023-12-30 10:16:47
|
wait, I lied
|
|
2023-12-30 10:17:05
|
```c
/* these chunks override gAMA */
if (s->iccp_data || s->have_srgb || s->have_cicp) {
av_dict_set(&s->frame_metadata, "gamma", NULL, 0);
} else if (s->gamma) {
/*
* These values are 100000/2.2, 100000/2.8, 100000/2.6, and
* 100000/1.0 respectively. 45455, 35714, and 38462, and 100000.
* There's a 0.001 gamma tolerance here in case of floating
* point issues when the PNG was written.
*
* None of the other enums have a pure gamma curve so it makes
* sense to leave those to sRGB and cICP.
*/
if (s->gamma > 45355 && s->gamma < 45555)
avctx->color_trc = frame->color_trc = AVCOL_TRC_GAMMA22;
else if (s->gamma > 35614 && s->gamma < 35814)
avctx->color_trc = frame->color_trc = AVCOL_TRC_GAMMA28;
else if (s->gamma > 38362 && s->gamma < 38562)
avctx->color_trc = frame->color_trc = AVCOL_TRC_SMPTE428;
else if (s->gamma > 99900 && s->gamma < 100100)
avctx->color_trc = frame->color_trc = AVCOL_TRC_LINEAR;
}
```
pngdec.c line 698
|
|
2023-12-30 10:18:10
|
<@167023260574154752> can you produce a quick sample and attach it?
|
|
2023-12-30 10:18:34
|
theoretically if it's tagged as `gAMA=45455` it should report the TRC as GAMMA22, not unknown
|
|
|
spider-mario
|
2023-12-30 10:18:38
|
how disappointing, they haven’t reopened the bug even after you pointed out that it was still present
|
|
2023-12-30 10:18:39
|
https://github.com/ImageMagick/ImageMagick/issues/4375
|
|
|
Traneptora
|
2023-12-30 10:20:31
|
nvm, I figured it out
|
|
2023-12-30 10:20:38
|
it's leaving out `gAMA`
|
|
|
bonnibel
|
|
bonnibel
at least for (de)linearizing and sigmoidization
|
|
2023-12-30 10:21:52
|
|
|
|
Traneptora
|
2023-12-30 10:22:19
|
yea, it leaves it untouched if possible
|
|
2023-12-30 10:22:23
|
iirc it doesn't scale in gamma22 light
|
|
2023-12-30 10:22:26
|
it just leaves it untouched
|
|
|
bonnibel
|
2023-12-30 10:23:04
|
yea thats what i said
does nothing if possible but assumes gamma22 for (de)linearizing if the curve is unknown
|
|
|
Traneptora
|
2023-12-30 10:25:34
|
oic
|
|
2023-12-30 10:25:37
|
I misread
|
|
|
bonnibel
|
2023-12-30 10:25:45
|
by default it downscales sdr in linear and upscales sdr in sigmoid (which passes through linear). so if it cant figure out the curve it undoes a gamma of 2.2 even if thats not what the real input has
|
|
2023-12-30 10:26:31
|
hdr scaling is always just done in whatever curve it has afaik
|
|
2023-12-30 10:27:10
|
sigmoidization doesnt apply since it clips to 0..1 and linear apparently looks bad according to comments in the code. ive thankfully never really had to deal with hdr so i dont know
|
|
|
Traneptora
|
|
spider-mario
https://github.com/ImageMagick/ImageMagick/issues/4375
|
|
2023-12-30 10:31:25
|
I left another comment with more explicit evidence to bump it
|
|
|
lonjil
|
|
Traneptora
check the chunks, I'm guessing it has `cHRM` and `gAMA`
|
|
2023-12-30 10:34:48
|
cHRM, yep
|
|
2023-12-30 10:35:00
|
(sorry I was away preparing a dough for baking)
|
|
|
spider-mario
|
2023-12-30 10:37:53
|
oh, is imagemagick not writing a gAMA chunk anymore?
|
|
2023-12-30 10:37:59
|
or is umbrielpng just not displaying it?
|
|
|
Traneptora
|
2023-12-30 10:38:02
|
looks like it's writing cHRM only
|
|
2023-12-30 10:38:13
|
umbrielpng would display it if it were being written, since it prints all chunks
|
|
2023-12-30 10:39:34
|
here's an example from an older file
|
|
2023-12-30 10:39:36
|
```
leo@gauss ~/Pictures/test_images :) $ umbrielpng granite.png
PNG signature found: granite.png
Chunk: IHDR, Size: 25, Offset: 8, CRC32: 31107cf8
Chunk: gAMA, Size: 16, Offset: 33, CRC32: 0bfc6105
Chunk: cHRM, Size: 44, Offset: 49, CRC32: 9cba513c
Chunk: PLTE, Size: 48, Offset: 93, CRC32: f475ecb5
Chunk: bKGD, Size: 13, Offset: 141, CRC32: 68d0f456
Chunk: tIME, Size: 19, Offset: 154, CRC32: 2b92ca26
Chunk: IDAT, Size: 5959, Offset: 173, CRC32: 5485769b
Chunk: tEXt, Size: 61, Offset: 6132, CRC32: ef51a8ef
Chunk: tEXt, Size: 49, Offset: 6193, CRC32: 7b1043ba
Chunk: tEXt, Size: 49, Offset: 6242, CRC32: 0a4dfb06
Chunk: IEND, Size: 12, Offset: 6291, CRC32: ae426082
Size: 128x128, Color: 4-bit Indexed
cHRM matches sRGB space
```
|
|
2023-12-30 10:39:58
|
the only thing it *doesn't* print is every IDAT chunk, instead it prints a message that more IDATs occur consecutively
|
|
2023-12-30 10:40:22
|
e.g.
|
|
2023-12-30 10:40:25
|
```
$ umbrielpng test.png
PNG signature found: test.png
Chunk: IHDR, Size: 25, Offset: 8, CRC32: fab5570d
Chunk: pHYs, Size: 21, Offset: 33, CRC32: 84791773
Chunk: sRGB, Size: 13, Offset: 54, CRC32: d9c92c7f
Chunk: cHRM, Size: 44, Offset: 67, CRC32: 9cba513c
Chunk: gAMA, Size: 16, Offset: 111, CRC32: 0bfc6105
Chunk: IDAT, Size: 4108, Offset: 127, CRC32: e4c005e2
Chunk: 492 more IDAT chunks
Chunk: IEND, Size: 12, Offset: 2022977, CRC32: ae426082
Size: 1080x720, Color: 8-bit RGB
cHRM matches sRGB space
```
|
|
|
spider-mario
|
2023-12-30 10:40:56
|
right, I just retested with the images I originally attached to the bug report, and it seems imagemagick is dropping the gAMA chunks now, even when originally present
|
|
2023-12-30 10:41:09
|
```shell
$ pngcheck -v output/image-sRGB-cHRM-gAMA.png
File: output/image-sRGB-cHRM-gAMA.png (81293 bytes)
chunk IHDR at offset 0x0000c, length 13
256 x 192 image, 24-bit RGB, non-interlaced
chunk cHRM at offset 0x00025, length 32
White x = 0.3127 y = 0.329, Red x = 0.64 y = 0.33
Green x = 0.3 y = 0.6, Blue x = 0.15 y = 0.06
chunk bKGD at offset 0x00051, length 6
red = 0x00ff, green = 0x00ff, blue = 0x00ff
chunk IDAT at offset 0x00063, length 32768
zlib: deflated, 32K window, maximum compression
chunk IDAT at offset 0x0806f, length 32768
chunk IDAT at offset 0x1007b, length 15464
chunk tEXt at offset 0x13cef, length 37, keyword: date:create
chunk tEXt at offset 0x13d20, length 37, keyword: date:modify
chunk tEXt at offset 0x13d51, length 40, keyword: date:timestamp
chunk IEND at offset 0x13d85, length 0
No errors detected in output/image-sRGB-cHRM-gAMA.png (10 chunks, 44.9% compression).
```
|
|
|
Traneptora
|
2023-12-30 10:41:26
|
oo, what's pngcheck?
|
|
2023-12-30 10:41:45
|
why did I write my own program to do that
|
|
2023-12-30 10:41:54
|
~~the answer is cause --fix is an argument~~
|
|
2023-12-30 10:42:32
|
umbrielpng has a `--fix` argument or `-o` which lets it do things I personally like to do to PNG files
|
|
|
spider-mario
|
2023-12-30 10:42:40
|
the main annoyance with pngcheck is that it errors out on new chunks like cICP / cLLi / etc.
|
|
|
Traneptora
|
2023-12-30 10:43:25
|
Umbrielpng's primary purpose is to normalize PNG files
|
|
|
spider-mario
|
2023-12-30 10:43:26
|
```shell
$ pngcheck -v .../ClassE_507.png
File: .../ClassE_507.png (3235710 bytes)
chunk IHDR at offset 0x0000c, length 13
944 x 1080 image, 48-bit RGB, non-interlaced
chunk iCCP at offset 0x00025, length 2181
profile name = 1, compression method = 0 (deflate)
compressed profile = 2178 bytes
chunk cHRM at offset 0x008b6, length 32
White x = 0.3127 y = 0.329, Red x = 0.708 y = 0.292
Green x = 0.17 y = 0.797, Blue x = 0.131 y = 0.046
chunk cICP at offset 0x008e2, length 4: illegal (unless recently approved) unknown, public chunk
ERRORS DETECTED in .../ClassE_507.png
```
|
|
|
lonjil
|
2023-12-30 10:45:46
|
Now I've actually done the test I talked about earlier.
All of these produce files which ImageMagick says have a PAE of 0 against each other:
```
ffmpeg -i Untagged.png -vf "libplacebo=w=1000:-1" TestUntagged.png
ffmpeg -i TaggedPrimaries.png -vf "libplacebo=w=1000:-1" TestTagged.png
ffmpeg -color_trc iec61966-2-1 -i TaggedPrimaries.png -vf "libplacebo=w=1000:-1" TestTagged+TRC.png
```
|
|
|
Traneptora
|
2023-12-30 10:46:54
|
It does the following changes with `--fix`:
- strips all of the following chunks unconditionally: `bKGD, eXIf, pHYs, sPLT, tIME, tEXt, zTXt, iTXt`
- strips `sBIT` if it matches the depth of the image data
- strips `hIST` if `PLTE` is not present
- strips `tRNS` if the image has no alpha
- normalizes color space chunks\*
- combines all `IDAT`s into one
|
|
|
lonjil
|
2023-12-30 10:47:29
|
off topic but what is up with the capitalization of PNG chunks?
|
|
|
spider-mario
|
|
Traneptora
|
|
lonjil
off topic but what is up with the capitalization of PNG chunks?
|
|
2023-12-30 10:48:40
|
> A sequence of four bytes defining the chunk type. Each byte of a chunk type is restricted to the hexadecimal values 41 to 5A and 61 to 7A. These correspond to the uppercase and lowercase ISO 646 [ISO646] letters (A-Z and a-z) respectively for convenience in description and examination of PNG datastreams. Encoders and decoders shall treat the chunk types as fixed binary values, not character strings. For example, it would not be correct to represent the chunk type IDAT by the equivalents of those letters in the UCS 2 character set. Additional naming conventions for chunk types are discussed in 5.4 Chunk naming conventions.
|
|
2023-12-30 10:49:02
|
here's a direct link to section 5.4
|
|
2023-12-30 10:49:02
|
https://www.w3.org/TR/png-3/#5Chunk-naming-conventions
|
|
|
lonjil
|
2023-12-30 10:49:37
|
ahhh
|
|
2023-12-30 10:50:23
|
the whole thing is a single C file 🤔
|
|
|
Traneptora
|
|
lonjil
ahhh
|
|
2023-12-30 10:50:54
|
basically:
first letter uppercase = Critical
first letter lowercase = Ancillary
second letter uppercase = public i.e. specified
second letter lowercase = private i.e. implementation-dependent
third letter uppercase = required
third letter lowercase = reserved
fourth letter uppercase = unsafe-to-copy
fourth letter lowercase = safe-to-copy
|
|
|
lonjil
the whole thing is a single C file 🤔
|
|
2023-12-30 10:51:03
|
it's like 1k lines, that's perfectly acceptable
|
|
|
lonjil
|
2023-12-30 10:51:18
|
yeah
|
|
2023-12-30 10:54:45
|
oh no it depends on glibc so I can't build it
|
|
|
Traneptora
|
|
Traneptora
It does the following changes with `--fix`:
- strips all of the following chunks unconditionally: `bKGD, eXIf, pHYs, sPLT, tIME, tEXt, zTXt, iTXt`
- strips `sBIT` if it matches the depth of the image data
- strips `hIST` if `PLTE` is not present
- strips `tRNS` if the image has no alpha
- normalizes color space chunks\*
- combines all `IDAT`s into one
|
|
2023-12-30 10:57:36
|
by *normalizes color space chunks*, here's the logic:
- If the input contains cICP, and it's not actually just sRGB, then forward it, and strip any other color chunks.
- Otherwise, if the input contains iCCP (and not cICP), and it's not actually sRGB, then forward it, and strip any other color chunks.
- Otherwise, determine if the input is actually sRGB. This means cICP that just specifies sRGB, an iCCP that's actually sRGB, an sRGB chunk, an entirely untagged file, or a file that's tagged only with cHRM which matches the bt709 primaries. If the input is sRGB, then write an sRGB chunk and strip any other color chunks.
- Otherwise, forward all color chunks as-is.
|
|
2023-12-30 10:58:45
|
```c
/* Mirrors the logic above: the cICP chunk says sRGB, or there is no cICP
 * and the ICC profile is effectively sRGB, or there is no iCCP/sRGB/gAMA
 * chunk at all and any cHRM present matches the sRGB primaries. */
default_srgb = data.cicp_is_srgb || (!data.have_cicp && (data.icc_is_srgb ||
               (!data.have_iccp && !data.have_srgb && !data.have_gama &&
               (!data.have_chrm || data.chrm_is_srgb))));
```
|
|
|
lonjil
oh no it depends on glibc so I can't build it
|
|
2023-12-30 11:02:21
|
it does what?
|
|
2023-12-30 11:02:59
|
pretty sure I follow the C spec
|
|
|
lonjil
|
|
Traneptora
it does what?
|
|
2023-12-30 11:03:00
|
I just went thru it. The only problem is that it includes error.h
|
|
2023-12-30 11:03:05
|
That's a glibc header
|
|
2023-12-30 11:03:15
|
But you don't seem to be using any functions or variables from it
|
|
|
Traneptora
|
2023-12-30 11:03:23
|
perror
|
|
2023-12-30 11:03:44
|
I thought perror was in error.h
|
|
|
lonjil
|
2023-12-30 11:04:01
|
that's in stdio.h
|
|
|
Traneptora
|
2023-12-30 11:04:08
|
oh, then I'll just remove error.h
|
|
|
lonjil
|
2023-12-30 11:04:16
|
👍
|
|
2023-12-30 11:04:44
|
error.h has the functions `error()` and `error_at_line()`
|
|
|
Traneptora
|
2023-12-30 11:05:03
|
is strings.h a problem?
|
|
|
lonjil
|
2023-12-30 11:05:26
|
I don't think so
|
|
|
Traneptora
|
2023-12-30 11:05:32
|
I use strcasecmp
|
|
|
lonjil
|
2023-12-30 11:06:08
|
I think that's POSIX so should be fine. Compiles without error after I remove error.h.
|
|
|
Traneptora
|
2023-12-30 11:06:20
|
need `-lz` but ye
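(A minimal sketch of the build, assuming the source is a single file called `umbrielpng.c` — the actual filename may differ — with zlib as the only external dependency:)
```shell
cc -O2 -o umbrielpng umbrielpng.c -lz
```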
|
|
|
lonjil
|
2023-12-30 11:08:31
|
yep it's working
|
|
2023-12-30 11:08:38
|
though that caused me to notice something annoying
|
|
|
Traneptora
|
|
lonjil
|
|
lonjil
Now I've actually done the test I talked about earlier.
All of these produce files which ImageMagick says have a PAE of 0 against each other:
```
ffmpeg -i Untagged.png -vf "libplacebo=w=1000:-1" TestUntagged.png
ffmpeg -i TaggedPrimaries.png -vf "libplacebo=w=1000:-1" TestTagged.png
ffmpeg -color_trc iec61966-2-1 -i TaggedPrimaries.png -vf "libplacebo=w=1000:-1" TestTagged+TRC.png
```
|
|
2023-12-30 11:08:57
|
I must've done something wrong here because those output files didn't actually get resized
|
|
|
Traneptora
|
2023-12-30 11:09:20
|
did you do `w=1000:h=-1`?
|
|
2023-12-30 11:10:01
|
guessing the `-1` is treated as an arg and not a kwarg and overrides w=1000 which is the first arg
|
|
|
lonjil
|
2023-12-30 11:10:03
|
d'oh I messed it up when editing it
|
|
|
Traneptora
|
2023-12-30 11:10:16
|
so you end up with `-1:-1` which is the input
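(A hedged sketch of the corrected invocation, naming both options so the second value is parsed as `h=` rather than as a positional argument; filenames as in the earlier test:)
```shell
ffmpeg -i TaggedPrimaries.png -vf "libplacebo=w=1000:h=-1" TestTagged.png
```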
|
|
|
lonjil
|
2023-12-30 11:12:10
|
alright, now I did it correctly
|
|
2023-12-30 11:15:49
|
output from untagged input compares as equal to output from primaries tagged input. Tagged input with the `-color_trc` option has a slight difference.
|
|
|
Traneptora
|
2023-12-30 11:19:29
|
this is what umbrielpng does to the default rose
|
|
2023-12-30 11:19:51
|
```
leo@gauss ~ :) $ magick convert rose: -colorspace sRGB rose.png
leo@gauss ~ :) $ umbrielpng --fix rose.png
PNG signature found: rose.png
Chunk: IHDR, Size: 25, Offset: 8, CRC32: 1aebf8e4
Chunk: cHRM, Size: 44, Offset: 33, CRC32: 9cba513c
Chunk: bKGD, Size: 18, Offset: 77, CRC32: a0bda793
Chunk: IDAT, Size: 6754, Offset: 95, CRC32: 113ce85f
Chunk: tEXt, Size: 49, Offset: 6849, CRC32: abf2b86c
Chunk: tEXt, Size: 49, Offset: 6898, CRC32: daaf00d0
Chunk: tEXt, Size: 52, Offset: 6947, CRC32: 8dba210f
Chunk: IEND, Size: 12, Offset: 6999, CRC32: ae426082
Size: 70x46, Color: 8-bit RGB
cHRM matches sRGB space
Writing chunk: IHDR
Inserting default sRGB chunk
Stripping chunk: cHRM
Stripping chunk: bKGD
Writing chunk: IDAT
Stripping chunk: tEXt
Stripping chunk: tEXt
Stripping chunk: tEXt
Writing chunk: IEND
leo@gauss ~ :) $ umbrielpng rose.png
PNG signature found: rose.png
Chunk: IHDR, Size: 25, Offset: 8, CRC32: 1aebf8e4
Chunk: sRGB, Size: 13, Offset: 33, CRC32: d9c92c7f
Chunk: IDAT, Size: 6754, Offset: 46, CRC32: 113ce85f
Chunk: IEND, Size: 12, Offset: 6800, CRC32: ae426082
Size: 70x46, Color: 8-bit RGB
leo@gauss ~ :) $
```
|
|
2023-12-30 11:21:45
|
the primary reason I wrote this honestly was to normalize output from GIMP
|
|
2023-12-30 11:21:58
|
since it writes a lot of crap and I personally cba to uncheck all the boxes all the time
|
|
2023-12-30 11:22:03
|
laziness breeds productivity
|
|
2023-12-30 11:22:43
|
the way it checks if `iCCP` is actually sRGB is that it checks the tags for `rTRC` `gTRC` `bTRC` `wtpt` `rXYZ` etc.
|
|
2023-12-30 11:22:51
|
and they all have to match
|
|
2023-12-30 11:23:22
|
it ignores other parts of the profile like black point specification because those are irrelevant for SDR
|
|
2023-12-30 11:23:36
|
it was the best I had for working without a CMS
|
|
2023-12-30 11:23:56
|
it returns a negative for ICC profiles with a LUT TRC, it only works with `para` tags
|
|
|
lonjil
|
2023-12-30 11:25:52
|
I notice that on the images that got output by ffmpeg, the one from untagged input has untagged output and thus gets an sRGB chunk. For the one whose input had a cHRM only, it sees that it matches sRGB, and writes both an sRGB chunk and the cHRM. And finally the one produced by telling ffmpeg about the transfer function has umbrielpng remove both the cHRM and the gAMA, and writing an sRGB chunk.
|
|
|
Traneptora
|
|
lonjil
I notice that on the images that got output by ffmpeg, the one from untagged input has untagged output and thus gets an sRGB chunk. For the one whose input had a cHRM only, it sees that it matches sRGB, and writes both an sRGB chunk and the cHRM. And finally the one produced by telling ffmpeg about the transfer function has umbrielpng remove both the cHRM and the gAMA, and writing an sRGB chunk.
|
|
2023-12-30 11:26:57
|
umbrielpng should not replace cHRM and gAMA together with sRGB
|
|
2023-12-30 11:27:06
|
since that's incorrect
|
|
2023-12-30 11:27:26
|
it assumes sRGB for anything that isn't tagged, but it doesn't assume sRGB TRC if gAMA is explicitly present
|
|
2023-12-30 11:27:37
|
unless it's overridden by another chunk like the sRGB chunk
|
|
|
lonjil
|
2023-12-30 11:28:09
|
only cHRM:
```
cHRM matches sRGB space
Writing chunk: IHDR
Inserting default sRGB chunk
Stripping chunk: pHYs
Writing chunk: cHRM
Writing chunk: IDAT
Writing chunk: IEND
```
cHRM and gAMA:
```
cHRM matches sRGB space
Writing chunk: IHDR
Stripping chunk: pHYs
Writing chunk: sRGB
Stripping chunk: cHRM
Stripping chunk: gAMA
Writing chunk: IDAT
Writing chunk: IEND
```
|
|
|
Traneptora
|
|
lonjil
only cHRM:
```
cHRM matches sRGB space
Writing chunk: IHDR
Inserting default sRGB chunk
Stripping chunk: pHYs
Writing chunk: cHRM
Writing chunk: IDAT
Writing chunk: IEND
```
cHRM and gAMA:
```
cHRM matches sRGB space
Writing chunk: IHDR
Stripping chunk: pHYs
Writing chunk: sRGB
Stripping chunk: cHRM
Stripping chunk: gAMA
Writing chunk: IDAT
Writing chunk: IEND
```
|
|
2023-12-30 11:28:24
|
first instance is a bug I fixed an hour ago
|
|
|
lonjil
|
|
Traneptora
|
2023-12-30 11:28:43
|
second one is cause the input also had sRGB
|
|
|
lonjil
|
2023-12-30 11:28:46
|
talk about bleeding edge software
|
|
|
Traneptora
|
2023-12-30 11:28:50
|
so the gAMA is ignored
|
|
|
lonjil
|
2023-12-30 11:29:01
|
oh, yeah, didn't notice that
|
|
2023-12-30 11:30:41
|
alright excellent I have the latest and greatest version
|
|
|
Traneptora
|
2023-12-31 01:13:57
|
<@178524721023811584> update, apparently the line you linked in src/shaders/colorspace.c only applies if you use the really low-level APIs
|
|
2023-12-31 01:14:06
|
for higher level api users e.g. vf_libplacebo it defaults untagged to bt.1886
|
|
2023-12-31 01:14:19
|
it linearizes it assuming bt.1886 and then unlinearizes it with the inverse
|
|
2023-12-31 01:14:25
|
roundtripping, and scaling in linear light
|
|
2023-12-31 01:14:41
|
notable lines: https://github.com/haasn/libplacebo/blob/master/src/colorspace.c#L599-L600
|
|
2023-12-31 01:15:15
|
then it falls through here and roundtrips
https://github.com/haasn/libplacebo/blob/master/src/colorspace.c#L635-L640
|
|
|
bonnibel
|
2023-12-31 01:17:16
|
a few months ago there also was a bug where if you upscaled linear input w sigmoidization enabled, the input got interpreted as gamma22 no matter what
|
|
2023-12-31 01:17:22
|
i think thats fixed now though
|
|
2023-12-31 01:22:54
|
(i found that through the shader output, where it printed the line i linked earlier)
|
|
|
DZgas Ж
|
2024-01-05 12:47:50
|
It turned out that when encoding a static image to x264 video, it's better to use ultrafast, because the other presets, which enable various extra algorithms, don't give anything better for a still image but actively clog the stream with overhead information <:H264_AVC:805854162079842314>
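(A rough sketch of that kind of comparison, with hypothetical filenames and an arbitrary duration; only the preset differs between the two encodes:)
```shell
ffmpeg -loop 1 -i still.png -t 10 -c:v libx264 -preset ultrafast still_ultrafast.mp4
ffmpeg -loop 1 -i still.png -t 10 -c:v libx264 -preset veryslow still_veryslow.mp4
```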
|
|
|
diskorduser
|
|
DZgas Ж
It turned out that when encoding a static image to x264 video, it's better to use ultrafast, because the other presets, which enable various extra algorithms, don't give anything better for a still image but actively clog the stream with overhead information <:H264_AVC:805854162079842314>
|
|
2024-01-05 05:39:18
|
For x265?
|
|
|
DZgas Ж
|
|
diskorduser
For x265?
|
|
2024-01-05 05:41:01
|
what's the point if Discord doesn't support <:H265_HEVC:805856045347242016>🧐
|
|
|
diskorduser
|
|
DZgas Ж
|
2024-01-05 05:41:42
|
https://discord.com/channels/794206087879852103/806898911091753051/1192871644851617872
|
|
|
jonnyawsom3
|
2024-01-05 05:44:56
|
What about Opus?
|
|
|
DZgas Ж
|
|
What about Opus?
|
|
2024-01-05 05:48:32
|
AAC-HE is just better than OPUS at bitrates of 8-16 kbps
|
|
|
jonnyawsom3
|
2024-01-05 05:48:56
|
Ah, I thought it was closer to 32 at least
|
|
|
DZgas Ж
|
2024-01-05 05:49:07
|
even VORBIS is better than OPUS at bitrates of 8-16
|
|
|
yoochan
|
2024-01-05 05:52:25
|
what ! you say they lied to me ?!
|
|
|
Oleksii Matiash
|
|
What about Opus?
|
|
2024-01-05 06:58:03
|
I'll never understand the Opus authors' decision to force a 48 kHz sampling rate and the absence of "quality level" encoding. I believe that offering only target-bitrate encoding is absolutely stupid - no handling of different sampling frequencies, no handling of different material complexity, nothing. The same goes for 48 kHz. They claim that Opus is better than other audio codecs, but with this inevitable useless resampling - how can it be better?
|
|
|
yoochan
|
2024-01-05 07:09:12
|
perhaps because you can always convert to 48kHz with a noise floor under the perceptible level
|
|
|
Oleksii Matiash
|
|
yoochan
perhaps because you can always convert to 48kHz with a noise floor under the perceptible level
|
|
2024-01-05 07:25:03
|
But for what reason? Why not just add native 44.1 kHz encoding, if there are SO many recordings with this sampling frequency?
|
|
|
Quackdoc
|
|
Oleksii Matiash
I'll never understand the Opus authors' decision to force a 48 kHz sampling rate and the absence of "quality level" encoding. I believe that offering only target-bitrate encoding is absolutely stupid - no handling of different sampling frequencies, no handling of different material complexity, nothing. The same goes for 48 kHz. They claim that Opus is better than other audio codecs, but with this inevitable useless resampling - how can it be better?
|
|
2024-01-05 07:27:23
|
it simplifies things and resampling is cheap and effective
|
|
2024-01-05 07:28:53
|
I've yet to meet anyone who can tell the difference between 44.1 and 48 when resampled with sox or pipewire
|
|
|
Oleksii Matiash
|
2024-01-05 07:32:32
|
Well, let it be. I'm just avoiding opus because of these two flaws. Yes, probably the result of resampling is imperceptible, but it just annoys me too much 🤷♂️
|
|
|
Quackdoc
|
2024-01-05 07:35:34
|
in the end, Opus is a distribution codec; it's laser-focused on distribution, that's why they made the choices they did, and it works great for it
|
|
|
lonjil
|
2024-01-05 07:39:28
|
Opus can technically speaking handle 44.1 kHz if you *really* want it to, it just won't give as good results.
|
|
2024-01-05 07:40:32
|
> I believe that giving the only possibility to encode to target bitrate is absolutely stupid - not handling different sampling frequencies, not handling different material complexity, nothing.
well, the target bitrate isn't actually a target bitrate. For more complex material, the real bitrate will be higher than the target, and for simpler material, it will be lower.
|
|
|
Quackdoc
in the end, Opus is a distribution codec; it's laser-focused on distribution, that's why they made the choices they did, and it works great for it
|
|
2024-01-05 07:41:56
|
and live transmission. When you're encoding a voice call, the input is probably already 48kHz anyway.
|
|
|
gb82
|
2024-01-05 09:03:32
|
I think I've asked before, but I'll ask again in case someone has an answer now: is there any way to go from gainmap JPEG -> HDR JXL?
|
|
|
spider-mario
|
2024-01-05 09:36:25
|
as far as I can tell, that would involve recompression from pixels to JPEG XL
|
|
|
|
afed
|
2024-01-05 11:21:32
|
something interesting, FUIF-like scalable lossy/lossless compression
the comparison plots only use PSNR, but still, as a concept:
```
SQZ provides very competitive image quality on a very tight byte budget, and even though it eventually gets outclassed, it does so providing best-in-class scalability at much lower complexity and easy integration. The same SQZ image can be decompressed at each of these quality levels by simply choosing how many bytes to use for decompression, there is no need for reencoding.
```
https://github.com/MarcioPais/SQZ
|
|
|
jonnyawsom3
|
2024-01-06 02:44:36
|
I did a little test earlier of making a JPEG, then encoding the difference from the original in a PNG. Couldn't find the right blend mode to get the lossless image back in Krita, but it was fun to try, and vaguely similar to that
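(A rough sketch of that kind of difference image with ImageMagick instead of Krita — filenames are placeholders; note that a plain "difference" composite stores the absolute difference and throws away the sign, which is why a simple blend can't reconstruct the original exactly:)
```shell
magick original.png decoded_jpeg.png -compose difference -composite diff.png
```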
|
|
|
gb82
|
2024-01-06 06:19:46
|
like wavpack but for images
|
|
|
monad
|
2024-01-06 07:19:05
|
Simple, just use the same JPEG decoder.
|
|
|
yoochan
|
|
Oleksii Matiash
Well, let it be. I'm just avoiding opus because of these two flaws. Yes, probably the result of resampling is imperceptible, but it just annoys me too much 🤷♂️
|
|
2024-01-06 07:33:32
|
At first I was like you. Then I learned a bit more about how a resampler works and started to trust the math behind it. And about the bitrate: Opus doesn't operate as CBR but interprets the requested bitrate as an optimal target, while the real size varies depending on the complexity. What I love about Opus is how it performs in blind tests and the fact that it is modern yet well supported in browsers
|
|
2024-01-06 07:34:23
|
I compressed my whole library to 160 kbps and serve it directly in the browser via a custom server
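(One possible way to do that kind of batch conversion, assuming a FLAC source library under `Music/` — the paths and layout are placeholders:)
```shell
find Music -name '*.flac' -exec sh -c 'opusenc --bitrate 160 "$1" "${1%.flac}.opus"' _ {} \;
```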
|
|
|
Quackdoc
|
2024-01-06 07:39:53
|
I don't trust the math, I don't even understand it, what I do trust though is the various blind testing I have personally conducted on myself and others, as well as lots of other testing everyone else has done :D
|
|
|
jonnyawsom3
|
2024-01-06 01:28:06
|
I remember trying to cut down on archival stereo audio recordings. Before, I was using the default AAC at 128 kb/s and managed 96 kb/s before severe degradation in the high frequency range. Switching to Opus I got 64 kb/s (the minimum in the program) and it had no noticeable degradation to my untrained ears
|
|
2024-01-06 01:29:49
|
Yeah, it was purely a fun little test I wanted to do using only Krita to keep it fast, tried every blend mode but none seemed to correct the image properly
|
|
|
damian101
|
2024-01-06 01:29:49
|
Below 96 kbps, Opus often has quite obvious artifacts imo.
|
|
|
spider-mario
|
2024-01-06 03:11:00
|
I remember trying sox’s resampler going from 44.1 kHz to 48 kHz and then back, for maybe 10 roundtrips or so, and then subtracting from the original to see the nature of the difference
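(Roughly the roundtrip part of that test with SoX — filenames are placeholders; repeat the pair of commands for more roundtrips before comparing against the original:)
```shell
sox original_44k.wav -r 48000 tmp_48k.wav
sox tmp_48k.wav -r 44100 roundtrip_44k.wav
```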
|
|
2024-01-06 03:11:06
|
result: pretty much low-level broadband noise
|
|
2024-01-06 03:11:18
|
its results on https://src.infinitewave.ca/ are reassuring as well
|
|
2024-01-06 03:11:46
|
|
|
2024-01-06 03:12:12
|
|
|
2024-01-06 03:13:45
|
I have zero concern about resampling, at least if sox is the one doing it
|
|
|
Traneptora
|
|
Oleksii Matiash
I'll never understand the Opus authors' decision to force a 48 kHz sampling rate and the absence of "quality level" encoding. I believe that offering only target-bitrate encoding is absolutely stupid - no handling of different sampling frequencies, no handling of different material complexity, nothing. The same goes for 48 kHz. They claim that Opus is better than other audio codecs, but with this inevitable useless resampling - how can it be better?
|
|
2024-01-06 09:00:31
|
Both of these statements are false, btw. Opus does support sample rates other than exactly 48 kHz, it just deliberately doesn't support 44.1 kHz
The reason for this decision is that all digital audio hardware in existence converts 44.1 kHz audio to 48 kHz before feeding it to a DAC. The resampling *will* happen, so the Opus devs decided it made more sense for it to happen on encode, rather than decode.
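(A hedged example of doing that resampling explicitly ahead of encoding with SoX — filenames are placeholders; opusenc should also resample 44.1 kHz input internally if you feed it directly:)
```shell
sox input_44k.wav -r 48000 input_48k.wav
opusenc input_48k.wav output.opus
```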
|
|
2024-01-06 09:00:54
|
Also, 44.1 kHz -> 48 kHz is a perfect reconstruction because of the Shannon sampling theorem
|
|
2024-01-06 09:01:14
|
at least within the 0 -> 20 kHz band, and Opus has a hard 20 kHz lowpass because anything higher than 20 kHz is entirely inaudible
|
|
2024-01-06 09:02:13
|
Also, the statement about not having true VBR is also false. The `--bitrate` mode in opusenc is actually VBR, where the bitrate you pass is actually a quality setting which roughly correlates with bitrate for the output file. If you pass it something with extremely low complexity, for example speech or silence, it will wildly undershoot the target, as you'd expect for true VBR.
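(For reference, a minimal opusenc invocation of the mode being described — filenames are placeholders; VBR is the default, so `--bitrate` here acts as the target the encoder varies around:)
```shell
opusenc --bitrate 128 input.flac output.opus
```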
|
|
2024-01-06 09:04:08
|
People who complain about the difference between 44.1 kHz and 48 kHz don't realize that they're functionally identical because of the Sampling theorem
|
|
|
lonjil
|
2024-01-06 09:14:11
|
It totally supports 44.1 kHz
|
|
2024-01-06 09:14:35
|
It just isn't supported by default because the modelling isn't tuned for it.
|
|
|
Oleksii Matiash
|
|
Traneptora
Also, the statement about not having true VBR is also false. The `--bitrate` mode in opusenc is actually VBR, where the bitrate you pass is actually a quality setting which roughly correlates with bitrate for the output file. If you pass it something with extremely low complexity, for example speech or silence, it will wildly undershoot the target, as you'd expect for true VBR.
|
|
2024-01-06 09:15:08
|
I'm not complaining about not having true VBR, I'm talking about only being able to set a VBR bitrate, rather than setting a quality and letting the encoder choose the bitrate. Just did a quick test - encoded a 44.1 kHz file with --bitrate 256 and got a 46 MB file. Downsampled 44.1 -> 11 kHz, encoded with the same --bitrate 256, and got a 47 MB file. Encoding the same files to Vorbis with q 8.0 resulted in 42 and 14 MB files respectively. That is what I'm complaining about.
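(A hedged reconstruction of the test described, with placeholder filenames — `oggenc -q 8` for the Vorbis side, `opusenc --bitrate 256` for the Opus side:)
```shell
opusenc --bitrate 256 input_44k.flac test_opus.opus
oggenc -q 8 input_44k.flac -o test_vorbis.ogg
# repeat with the 11 kHz downsampled copy to reproduce the comparison
```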
|
|
|
Traneptora
|
|
lonjil
It totally supports 44.1 kHz
|
|
2024-01-06 09:16:10
|
originally it did not
|
|
2024-01-06 09:16:16
|
it was a later decision to add it
|
|
|
Oleksii Matiash
|
|
lonjil
It just isn't supported by default because the modelling isn't tuned for it.
|
|
2024-01-06 09:16:18
|
For me it does not mean "supports"
|
|
|
Traneptora
|
|
Oleksii Matiash
I'm not complaining about not having true VBR, I'm talking about only being able to set a VBR bitrate, rather than setting a quality and letting the encoder choose the bitrate. Just did a quick test - encoded a 44.1 kHz file with --bitrate 256 and got a 46 MB file. Downsampled 44.1 -> 11 kHz, encoded with the same --bitrate 256, and got a 47 MB file. Encoding the same files to Vorbis with q 8.0 resulted in 42 and 14 MB files respectively. That is what I'm complaining about.
|
|
2024-01-06 09:16:37
|
that's literally how VBR works though, the quality setting is called "bitrate" but it's a quality setting
|
|
2024-01-06 09:17:15
|
it takes the setting passed to bitrate, divides it by the sample rate, and then uses that as the quality setting of the encoder
|
|
2024-01-06 09:18:17
|
it adjusts the passed argument by the sample rate of the input because the idea is that it's supposed to produce a file with that bitrate, but in a VBR sense
|
|
|
Oleksii Matiash
|
|
Traneptora
it takes the setting passed to bitrate, divides it by the sample rate, and then uses that as the quality setting of the encoder
|
|
2024-01-06 09:18:26
|
Why not give the possibility to specify this quality manually? To me this VBR looks like ABR
|
|
|
Traneptora
|
2024-01-06 09:18:46
|
for most audio files it's pretty close to the provided value because the model is very good
|
|
2024-01-06 09:18:58
|
it's not actually ABR
|
|
|
Oleksii Matiash
|
2024-01-06 09:19:56
|
I don't want to specify a bitrate. I want to specify some level of loss, let's call it that, and let the encoder decide what bitrate it requires for that level
|
|
|
lonjil
|
2024-01-06 09:20:23
|
> for most audio files it's pretty close to the provided value because the model is very good
No. For a typical music library, the average across all files will usually be close to the specified VBR, but different files can have wildly different rates depending on how complex the data is.
|
|
|
Traneptora
|
|
lonjil
> for most audio files it's pretty close to the provided value because the model is very good
No. For a typical music library, the average across all files will usually be close to the specified VBR, but different files can have wildly different rates depending on how complex the data is.
|
|
2024-01-06 09:20:41
|
ye, that's true, but the model is quite good
|
|
|
lonjil
|
2024-01-06 09:20:50
|
what do you mean by that?
|
|
|
Traneptora
|
2024-01-06 09:20:50
|
but I'm aware that it's true VBR
|
|