|
fab
|
2021-05-01 05:46:57
|
to automate a bit
|
|
2021-05-01 05:47:22
|
so it uses a mix with patches
|
|
2021-05-01 05:47:26
|
what it does
|
|
2021-05-01 05:47:42
|
anyway Scope, if you can suggest something
|
|
2021-05-01 05:47:58
|
what quality to set in wp2
|
|
2021-05-01 05:49:07
|
maybe i will try -q 92.39
|
|
2021-05-01 05:49:26
|
but that is not a palette, it doesn't change colours, it reduces image quality
|
|
2021-05-01 05:49:39
|
so what do i need, do i need to specify a -z option in wp2
|
|
|
_wb_
|
2021-05-01 05:49:44
|
I assume -q 96-99 does something with the lossless codec (with preprocessing to make the image closer to the predictions, or something)
|
|
2021-05-01 05:49:47
|
In wp2
|
|
|
fab
|
2021-05-01 05:49:53
|
or is palette not supported at the moment
|
|
2021-05-01 05:50:13
|
anyway i want the least loss possible
|
|
2021-05-01 05:50:32
|
the least distortion imaginable
|
|
2021-05-01 05:50:49
|
the fewest visible artifacts
|
|
2021-05-01 05:51:14
|
Shelwien
> attachment is downloaded as a jpg
It's a screenshot from a file comparison utility. Blue numbers are different.
> is the input 16bit?
Yes, 16-bit grayscale.
> Did you expect a 16bit output?
Yes, FLIF and jpegXL handle it correctly.
> What command did you use to generate the attachment?
convert.exe -verbose -depth 16 -size 4026x4164 gray:1.bin png:1.png
cwp2.exe -mt 8 -progress -q 100 -z 6 -lossless 1.png -o 1.wp2
dwp2.exe -mt 8 -progress -png 1.wp2 -o 1u.png
convert.exe -verbose -depth 16 png:1u.png gray:1u.bin
> (webp2 only handles 10bit samples at max, btw)
That's ok, but cwp2 doesn't say anything about -lossless not being lossless.
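Whether the roundtrip is bit-exact can also be checked from cmd without a comparison GUI; a minimal sketch using the file names from the commands above:

```shell
:: Byte-for-byte comparison of the original and round-tripped raw buffers
:: (1.bin / 1u.bin as produced by the convert.exe commands above)
fc /b 1.bin 1u.bin
```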
|
|
|
Scope
|
2021-05-01 05:54:30
|
There is no universal advice on the choice of quality values; if there were, there would be no choice in encoders. It all depends on the content and image size limits
|
|
|
fab
|
2021-05-01 06:27:57
|
what is libwebp2 max resolution in width and height
|
|
2021-05-01 06:28:04
|
also libwebp
|
|
2021-05-01 06:28:33
|
so i will try with one image
|
|
|
_wb_
|
2021-05-01 06:30:22
|
16k by 16k
|
|
|
fab
|
2021-05-01 06:31:48
|
how do you know
|
|
2021-05-01 06:32:26
|
i see no mention in libwebp2 chromium
|
|
2021-05-01 06:32:31
|
has it changed?
|
|
|
_wb_
|
2021-05-01 06:32:38
|
Because I tried to convince pascal to bump up the maximum to something higher, and I failed
|
|
|
fab
|
2021-05-01 06:33:08
|
and the fork ssignnet made of wp2 + some jxl code, from when it was not standardized
|
|
2021-05-01 06:33:15
|
what resolution does it support
|
|
2021-05-01 06:33:21
|
the test program
|
|
2021-05-01 06:33:23
|
he did
|
|
2021-05-01 06:33:29
|
pingo authors
|
|
2021-05-01 06:33:35
|
png compression cmd
|
|
2021-05-01 06:33:45
|
i know it's on rarewares
|
|
2021-05-01 06:34:03
|
first step works with modular
|
|
2021-05-01 06:34:10
|
now i need to have some scripts
|
|
|
_wb_
|
2021-05-01 06:34:25
|
The width and height are signaled in 14 bits each, so it's a hard bitstream limitation. It's there on purpose, because pascal is convinced that 16k is enough for all web use cases, so the format doesn't need to support anything larger
|
|
|
fab
|
2021-05-01 06:36:06
|
this is like what i did to my image
|
|
2021-05-01 06:36:08
|
filtering more
|
|
2021-05-01 06:36:09
|
-q 94.6 -X 22.4 -Y 38 -I 0.68 -s 3 -m
-q 99.03 --effort 2
-q 78.079 -s 5 --epf=1 --noise=0 --dots=1 --gaborish=0 --patches=0
|
|
2021-05-01 06:36:28
|
introducing like an hdr camera almanence effect
|
|
2021-05-01 06:36:31
|
then filtering
|
|
2021-05-01 06:36:43
|
it could help to compress
|
|
2021-05-01 06:36:46
|
it's bad
|
|
2021-05-01 06:37:11
|
but it's faster and good
|
|
2021-05-01 06:37:28
|
i think estimated quality would be less
|
|
2021-05-01 06:38:09
|
imageranger, when it supports jxl, will detect a lot of low quality images
|
|
2021-05-01 06:54:06
|
wp2
|
|
2021-05-01 06:54:09
|
for %i in (C:\Users\User\Documents\a2\*.png) do cwp2 "%i" "%i.wp2" -o -q 99.03 -effort 2
|
|
2021-05-01 06:55:07
|
|
|
2021-05-01 06:55:19
|
does someone know the correct script?
|
|
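A sketch of a working one-liner, assuming the argument order from Shelwien's example above (input file first, then `-o` output); note the outputs will be named `name.png.wp2`:

```shell
:: Batch-convert every PNG in a folder to WP2 from the cmd prompt
:: (inside a .cmd file, double the percent signs: %%i)
for %i in ("C:\Users\User\Documents\a2\*.png") do cwp2 "%i" -q 99.03 -effort 2 -o "%i.wp2"
```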
|
Scientia
|
2021-05-02 03:09:05
|
All these extra options plus speed 5, where a bunch of optimizations are turned off, probably aren't a big help
|
|
2021-05-02 03:09:42
|
If in doubt I'd use speed 7
|
|
|
fab
|
2021-05-02 06:35:53
|
i want to convert batch images with wp2
|
|
|
lithium
|
2021-05-03 09:34:25
|
https://groups.google.com/a/aomedia.org/g/av1-discuss/c/4o0TRIZXa10
New aom_tune_metric enum value: AOM_TUNE_BUTTERAUGLI. The new aomenc option
--tune=butteraugli was added to optimize the encoder’s perceptual quality by
optimizing the Butteraugli metric. Install libjxl (JPEG XL) and then pass
-DCONFIG_TUNE_BUTTERAUGLI=1 to the cmake command to enable it.
Has someone tried the new av1 --tune=butteraugli?
|
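The quoted note translates roughly into the following build-and-encode steps; a sketch with placeholder paths and file names, assuming libjxl is already installed where cmake can find it:

```shell
# Configure and build aom with Butteraugli tuning enabled
cmake ../aom -DCONFIG_TUNE_BUTTERAUGLI=1
make -j8
# Encode with the new tune mode (input/output names are placeholders)
./aomenc --tune=butteraugli --cpu-used=6 -o out.ivf input.y4m
```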
|
|
fab
|
2021-05-03 10:07:08
|
doesn't work
|
|
2021-05-03 10:07:15
|
with cmd windows 7
|
|
2021-05-03 10:07:18
|
it said invalid
|
|
2021-05-03 10:07:20
|
command
|
|
2021-05-03 10:07:37
|
with -o after, it says you need to specify a folder to do batch
|
|
|
Scope
|
2021-05-03 12:30:39
|
https://tools.ietf.org/html/draft-zern-webp-00
|
|
|
improver
|
2021-05-03 01:13:39
|
they haven't had this for years, why now?
|
|
|
zebefree
|
|
improver
they haven't had this for years, why now?
|
|
2021-05-03 06:05:19
|
Now that WebP is supported by all major modern browsers (with Apple finally adding support) there has been increasing pressure to adopt it to save space over legacy JPEG, but also some pushback due to it not having an official IANA media type. For example WebP is being added to EPUB, but W3C wants the media types in its specification to have an official IANA registration.
|
|
|
improver
|
2021-05-03 06:07:21
|
I see. makes sense.
|
|
|
_wb_
|
2021-05-03 06:10:28
|
I wonder what the media type for WebP2 will be. `image/wp2`?
|
|
|
Crixis
|
2021-05-03 07:18:54
|
`image/vvp2`
|
|
|
zebefree
|
2021-05-03 07:26:30
|
I assume it will be `image/webp2`, but hopefully they will officially register it this time once it is stable, instead of waiting another 10 years.
|
|
|
fab
|
2021-05-04 11:31:49
|
|
|
2021-05-04 11:33:31
|
|
|
2021-05-04 11:39:34
|
01052021 doom9 jamaika
|
|
2021-05-04 11:39:53
|
here are the settings
|
|
2021-05-04 11:39:54
|
https://discord.com/channels/794206087879852103/794206170445119489/839100479828525058
|
|
2021-05-04 11:42:09
|
if you want to reduce a q77 mozjpeg image further, or you want less compression or a lossless transcode, ask someone more expert
|
|
2021-05-04 11:42:17
|
because i don't know
|
|
2021-05-04 11:59:12
|
for %i in (C:\Users\User\Downloads\5\*.png) do echo %i cwp2 "%i" -o "%i.wp2" -q 99.03 -effort 2
|
|
2021-05-04 11:59:14
|
doesn't work
|
|
2021-05-04 12:02:41
|
no one
|
|
2021-05-04 12:02:53
|
but it doesn't display anything
|
|
2021-05-04 12:02:56
|
only lists of files
|
|
2021-05-04 12:03:33
|
i didn't use the command you wrote
|
|
2021-05-04 12:03:45
|
after echo, what should i write
|
|
2021-05-04 12:03:54
|
i want to specify output folder
|
|
2021-05-04 12:03:58
|
or do it in the same folder
|
|
2021-05-04 12:04:11
|
better to do it in the same folder
|
|
2021-05-04 12:04:20
|
i'm still on win7
|
|
2021-05-04 12:04:37
|
no, i want it for the jamaika build
|
|
2021-05-04 12:04:45
|
and i don't have time to ask which commit it is
|
|
2021-05-04 12:04:46
|
to him
|
|
2021-05-04 12:05:30
|
thanks
|
|
2021-05-04 12:13:07
|
it's running
|
|
2021-05-04 12:30:39
|
running first command
|
|
2021-05-04 12:30:40
|
for %i in (C:\Users\User\Downloads\apr5\*.png) do cjxl -q 94.6 -X 22.4 -Y 38 -I 0.68 -s 3 -m %i %i.jxl
|
|
2021-05-04 02:02:54
|
ok 1h 35m for 43 mb of images
|
|
2021-05-04 02:03:03
|
they were all in high resolution
|
|
2021-05-04 02:03:18
|
but my cpu handled it because of the low effort
|
|
2021-05-04 02:03:23
|
someone crashed on wp2
|
|
2021-05-04 02:03:45
|
because of an assertion: it has a problem allocating memory, it says to increase memory to use palette
|
|
2021-05-04 02:03:48
|
.cc
|
|
2021-05-04 02:04:18
|
for %i in (C:\Users\User\Downloads\apr5\*.png) do cjxl -q 94.6 -X 22.4 -Y 38 -I 0.68 -s 3 -m %i %i.jxl
for %i in (C:\Users\User\Downloads\jxl\*.jxl) do djxl %i %i.png
for %i in (C:\Users\User\Downloads\in\*.png) do cwp2 -q 99.03 -effort 2 %i -o %i.wp2
for %i in (C:\Users\User\Downloads\in2\*.wp2) do dwp2 %i -o %i.png
for %i in (C:\Users\User\Downloads\in3\*.png) do cjxl -q 78.079 -s 5 --epf=1 --noise=0 --dots=1 --gaborish=0 --patches=0 %i %i.jxl
|
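If the goal is to avoid the doubled extensions these commands produce (`image.png.jxl` and so on), cmd's path modifiers can strip the original extension; a sketch of the first loop, not what was actually run:

```shell
:: %~dpni expands to drive + path + file name of %i without its extension,
:: so pic.png becomes pic.jxl instead of pic.png.jxl
for %i in (C:\Users\User\Downloads\apr5\*.png) do cjxl -q 94.6 -X 22.4 -Y 38 -I 0.68 -s 3 -m "%i" "%~dpni.jxl"
```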
|
2021-05-04 02:04:28
|
these are all the commands
|
|
2021-05-04 02:04:37
|
but they are written wrong
|
|
2021-05-04 02:04:55
|
the double %i is not needed
|
|
2021-05-04 02:05:00
|
i don't know why orum suggested it
|
|
2021-05-04 02:05:11
|
maybe because i crashed the cmd
|
|
2021-05-04 02:05:20
|
maybe i did something wrong
|
|
2021-05-04 02:05:21
|
ok
|
|
2021-05-04 02:05:30
|
average size saving 50%
|
|
2021-05-04 02:05:37
|
but faces get distorted
|
|
2021-05-04 02:05:46
|
i couldn't recognize britney spears' face
|
|
2021-05-04 02:05:55
|
she still looks 18
|
|
2021-05-04 02:06:06
|
but i thought it was a cat at first glance
|
|
2021-05-04 02:06:46
|
i don't think you can get 50% jpg reduction without using modular and those options
|
|
2021-05-04 02:07:04
|
using stronger modular settings is usually not efficient
|
|
2021-05-04 02:07:09
|
as wb said
|
|
|
Reddit • YAGPDB
|
|
|
Deleted User
|
|
|
|
2021-05-04 08:12:57
|
<@321486891079696385> could you explain those guys, please? I don't have time, goshhhh...
|
|
|
lithium
|
|
<@321486891079696385> could you explain those guys, please? I don't have time, goshhhh...
|
|
2021-05-05 07:41:28
|
best overall settings in aomenc-AV1 and SVT-AV1
rav1e-av1 currently doesn’t have grain synthesis, bad bad,
aomenc-av1 strongest scenario : live-action, weakest scenario : animation,
SVT-AV1 strongest scenario: 2D animation, weakest : generally slightly blurrier and lower quality.
|
|
|
fab
|
2021-05-05 07:56:37
|
thanks
|
|
2021-05-05 07:57:09
|
with my fonts set at 10 pt i read the article in 12 seconds
|
|
2021-05-05 07:57:19
|
why are you saying it's a long article
|
|
2021-05-05 07:57:26
|
i didn't perceive 39000 characters
|
|
2021-05-05 07:57:31
|
are you sure that's right?
|
|
2021-05-05 07:58:26
|
anyway it only gives the options he uses to encode videos in bulk
|
|
2021-05-05 07:58:47
|
but he doesn't specify if they are for ffmpeg, for aomenc or for fastflix
|
|
2021-05-05 07:58:55
|
he only says cpu 6
|
|
2021-05-05 07:59:12
|
and then noise value 12
|
|
2021-05-05 07:59:41
|
i couldn't tell from his cpu 6 commands
|
|
2021-05-05 07:59:48
|
which are the denoiser settings
|
|
2021-05-05 07:59:58
|
and the added grain settings
|
|
2021-05-05 08:00:29
|
it's only settings advice, like all posts on r/av1
|
|
|
|
Deleted User
|
|
lithium
best overall settings in aomenc-AV1 and SVT-AV1
rav1e-av1 currently doesn’t have grain synthesis, bad bad,
aomenc-av1 strongest scenario : live-action, weakest scenario : animation,
SVT-AV1 strongest scenario: 2D animation, weakest : generally slightly blurrier and lower quality.
|
|
2021-05-05 09:38:16
|
Thank you, but by "explain those guys" I meant "rekt the anti grain synthesis guys from comments" 😉
|
|
|
spider-mario
|
2021-05-06 09:56:47
|
gave it an attempt, not sure it will work out
|
|
|
Scope
|
2021-05-06 04:14:33
|
https://twitter.com/PascalMassimino/status/1390268489072627714
|
|
|
|
paperboyo
|
2021-05-06 04:36:22
|
https://twitter.com/PascalMassimino/status/1390281518640152579
I created those GIFs (Kornel’s giflossy magic), so what? You can always concoct an image that will compress best using a specific codec combo. More interesting problem to solve is to have a tool that will work better in most situations and will be easier to operate. Those shouldn‘t even be GIFs in the first place, but I lacked tooling/knowledge and the support wasn’t there for a performant cinemagraph (I tried some crazy PNGs with holes in them though which the videos were showing, but that was so much harder than GIFs). Animation in image formats should die and as often mentioned here, browser should just be able to display any video in the `img` tag…
|
|
|
_wb_
|
2021-05-06 04:38:50
|
https://twitter.com/jonsneyers/status/1390344481917083650?s=19
|
|
|
|
Deleted User
|
2021-05-06 05:03:12
|
Don't feed the troll!
|
|
|
Jim
|
2021-05-06 05:04:26
|
Guessing they mean with default settings?
|
|
|
_wb_
|
2021-05-06 05:05:39
|
That doesn't mean much. I can make a `mycjxl` that has -m -s 9 -g 3 -I 1 as default settings :)
|
|
|
|
Deleted User
|
|
_wb_
https://twitter.com/jonsneyers/status/1390344481917083650?s=19
|
|
2021-05-06 05:53:32
|
https://twitter.com/ZiemekZ/status/1390363487042424838 😜
|
|
|
_wb_
|
2021-05-06 05:57:13
|
What options did you use?
|
|
|
|
Deleted User
|
2021-05-06 06:15:59
|
That's strange... I used the exact same options as yours.
|
|
2021-05-06 06:16:44
|
It's the version just before the latest sync.
|
|
|
Scope
|
2021-05-06 06:21:33
|
26 415 🤔
|
|
|
|
Deleted User
|
|
Master Of Zen
|
2021-05-06 06:30:11
|
|
|
2021-05-06 06:31:37
|
Be good at what you do or have glow, pick one
|
|
2021-05-06 06:36:30
|
|
|
|
Jim
|
2021-05-06 06:43:01
|
Seems to be a lot of hit pieces on JXL recently. I say it's a good thing. Just means it's getting out there and people who support other formats feel threatened by it.
|
|
|
Scope
|
2021-05-06 06:44:02
|
Btw, I still don't understand why my comparison is considered mostly fan-art (it includes such images, but there are many other categories), especially since these are mostly real, popular images that are most common and found as lossless internet content; about all it lacks is sets of screenshots of text from social networks/messengers and various UI
https://twitter.com/PascalMassimino/status/1390372475599560706
|
|
|
190n
|
2021-05-06 06:45:50
|
maybe people only look at the first few pages of the spreadsheet lol
|
|
|
|
veluca
|
|
Scope
Btw, still don't understand why my comparison is considered mostly fan-art (it includes such images, but there are many other categories), especially these are mostly real popular images that are most common and found as lossless internet content, it lacks except sets of screenshots of text from social networks/messengers and various UI
https://twitter.com/PascalMassimino/status/1390372475599560706
|
|
2021-05-06 06:46:22
|
might be a good idea to reply then... 😄
|
|
|
Jim
|
|
190n
maybe people only look at the first few pages of the spreadsheet lol
|
|
2021-05-06 06:51:05
|
Agree, I went through an earlier one and ran my own calculations. Just because the number of art pieces ended up being like 10-12% greater than many other image types, people were saying it wasn't representative. I don't feel that was the case just because some sections had maybe a hundred more images than some other sections.
|
|
|
Scope
|
2021-05-06 06:55:06
|
Maybe, but I've answered a similar question before, and I also don't want to start writing on Twitter, I only read it and on very specific niche topics
But maybe I should make Details as the first Tab
|
|
|
Jim
|
|
Scope
Btw, still don't understand why my comparison is considered mostly fan-art (it includes such images, but there are many other categories), especially these are mostly real popular images that are most common and found as lossless internet content, it lacks except sets of screenshots of text from social networks/messengers and various UI
https://twitter.com/PascalMassimino/status/1390372475599560706
|
|
2021-05-06 06:58:39
|
I also find it funny that most of the AVIF articles don't even mention performance or act like it's a non-issue, but as soon as you talk about other formats... where are the performance numbers? Truthfully, everyone could include them... but AVIF is likely to look horrible with them included. I know WP2 was even worse than AVIF in the early days, not seen recent numbers though.
|
|
|
Scope
Maybe, but I've answered a similar question before, and I also don't want to start writing on Twitter, I only read it and on very specific niche topics
But maybe I should make Details as the first Tab
|
|
2021-05-06 07:01:09
|
I agree with you, and the news cycle will pass in a day and most people will forget about it. I too have not responded to a number of the hit pieces. I see major flaws in methodology and very few actually showing their work. It just ends up not being worth it as they will come up with one excuse after another as to how their method was flawless while everyone else's was not.
|
|
|
_wb_
|
|
Jim
Seems to be a lot of hit pieces on JXL recently. I say it's a good thing. Just means it's getting out there and people who support other formats feel threatened by it.
|
|
2021-05-06 07:16:41
|
It's a good sign indeed. It's a bit annoying when random people on the internet say that "your baby is ugly" but it's a lot more annoying if they do that with valid arguments (because then there might be an actual problem) than if it's just some rather unfounded "you suck" rant...
|
|
|
Scope
|
2021-05-06 07:16:42
|
Yes, I would like to add encoding speed to the comparison as well, but I did not have the opportunity to do so: for accurate results I need a separate, completely unused PC, and encoding must be performed several times with an averaged result. In addition, this test covers actively developing encoders, where the results vary very much with each build or do not make sense; for example, certain JXL lossless builds were several times slower, and lossless WP2 is still very slow for practical use
Also, encoding speed is very different on different configurations and with the use of multithreading and SIMD; and to fully perform all such tests on such a large number of images would need about a month of CPU work at full load (it is possible to use a small set, although I noticed that on some images Webp2 or JXL can be very slow, so results on small sets can differ a lot)
|
|
|
_wb_
|
2021-05-06 07:18:05
|
I think for lossy jxl, encode/decode speed is relatively mature (there is still room for improvement, but not a huge amount imo)
|
|
2021-05-06 07:18:20
|
For lossless jxl, nah
|
|
2021-05-06 07:19:05
|
Problem with lossy is that it's a lot harder to produce uncontroversial benchmark results
|
|
2021-05-06 07:20:49
|
Either you do a real subjective test according to the recommended methodologies, for which you need human test subjects, controlled environments, etc, and then you get some results for a handful of images and codecs...
|
|
2021-05-06 07:21:32
|
Or you use objective metrics, but then you immediately hit the problem that the metrics disagree with one another and it all boils down to which metric you believe.
|
|
|
Pieter
|
2021-05-06 07:26:50
|
If you're not pissing anyone off, you're not doing it right.
|
|
2021-05-06 07:27:15
|
Of course, that doesn't mean that if you're pissing people off, you are doing it right.
|
|
|
Scope
|
2021-05-06 07:31:14
|
Also, when I was testing PNG optimizers, it turned out that even the compilation options sometimes have a noticeable impact on speed, and my test drew complaints that I used them for the open-source optimizers while for closed-source ones that is impossible, making the results unfair
However, when the encoders of the main formats are relatively stable and optimized, I will try to add a comparison of encoding speed, at least an approximate one
|
|
|
_wb_
|
2021-05-06 07:33:06
|
Anything that makes anything not look like the greatest thing in the world will be controversial to at least some people
|
|
|
Scope
|
|
Scope
26 415 🤔
|
|
2021-05-06 08:42:57
|
```poe.bmf 23 036
poe.avif 523 526
poe.paq8px202 14 036```
|
|
|
spider-mario
|
|
spider-mario
gave it an attempt, not sure it will work out
|
|
2021-05-06 10:09:03
|
<@456226577798135808> yeah, indeed, now I got told “it's the noise which is actually fake, even analog one.” https://www.reddit.com/r/AV1/comments/n4si96/encoder_tuning_part_3_av1_grain_synthesis_how_it/gx6b7sf/?context=3
|
|
|
BlueSwordM
|
|
spider-mario
<@456226577798135808> yeah, indeed, now I got told “it's the noise which is actually fake, even analog one.” https://www.reddit.com/r/AV1/comments/n4si96/encoder_tuning_part_3_av1_grain_synthesis_how_it/gx6b7sf/?context=3
|
|
2021-05-06 11:37:19
|
Funny, they don't seem to understand that grain synthesis replicates the original noise <:kekw:808717074305122316>
|
|
|
Scientia
|
2021-05-07 01:34:51
|
Lol, they're talking about replacing noise with detail using neural networks
|
|
2021-05-07 01:35:05
|
I sure wonder where this detail comes from
|
|
2021-05-07 01:35:11
|
<:kekw:808717074305122316>
|
|
2021-05-07 01:38:36
|
"Potential noise should be denoised with neural networks like it is with ray tracing"
|
|
2021-05-07 01:39:04
|
I fail to see the connection with denoising in video game rendering and denoising in film
|
|
|
Scope
|
2021-05-07 06:43:59
|
https://twitter.com/PascalMassimino/status/1390536793855086593
|
|
2021-05-07 06:44:31
|
Hm, sets with game assets would be interesting to add, but in practice it is very rare to use PNG or other web formats there; mostly it will be Pixel Art games, or the images will be the same as usual art or photos (such as loading screens), and the rest will be texture assets, which use GPU formats
Unless there are UI elements, but for that I wanted a separate UI set
|
|
|
monad
|
2021-05-07 06:51:23
|
Hm, I have over 15000 lossless game screenshots. Atypical? Maybe. Relevant? Surely.
|
|
|
190n
|
2021-05-07 06:53:21
|
i think some games store textures on disk in an efficient format and then convert to a gpu format upon loading
|
|
2021-05-07 06:54:04
|
<@!111445179587624960> do you still have the script(s) you used for that comparison? i could try on some game textures maybe
|
|
|
Scope
|
|
monad
Hm, I have over 15000 lossless game screenshots. Atypical? Maybe. Relevant? Surely.
|
|
2021-05-07 06:59:11
|
Yes, I still do not understand why these are irrelevant sets for the lossless test; these are not my images, these are various popular images from the internet that are everywhere. Also, I do not think that the main use of a web lossless format is game assets
|
|
|
190n
<@!111445179587624960> do you still have the script(s) you used for that comparison? i could try on some game textures maybe
|
|
2021-05-07 06:59:21
|
It's not really scripts, maybe some cmd files (but nothing interesting there, just options for encoders)
|
|
|
_wb_
|
2021-05-07 07:01:25
|
for big 3D games that weigh dozens or hundreds of gigabytes, GPU texture formats are typical for most of the game graphics (exception being some UI elements and other 2D stuff)
|
|
2021-05-07 07:02:22
|
for smaller casual games for phones/tablets, where package and install size matters more, png/webp are typical
|
|
|
Scope
|
2021-05-07 07:06:13
|
For big games it's mostly specialized lossy formats for textures, including mobile games, but if it's some kind of 2D games, it mostly looks like Pixel Art or normal images or some kind of UI elements, except that they usually have transparency
|
|
|
_wb_
|
2021-05-07 07:06:16
|
9% is not bad given that the other formats in that list already had browser support (by default, not behind a flag) while jxl didn't even have behind-a-flag support back then. Why would any web dev doing image optimization even consider a codec that doesn't have browser support yet 🙂
|
|
|
Scope
For big games it's mostly specialized lossy formats for textures, including mobile games, but if it's some kind of 2D games, it mostly looks like Pixel Art or normal images or some kind of UI elements, except that they usually have transparency
|
|
2021-05-07 07:07:51
|
yes - might be good to test more stuff with transparency (and also for cjxl to get an option to do invisible pixel thrashing in lossless mode and/or premultiplied alpha, still need to do that)
|
|
|
Scope
|
2021-05-07 07:08:47
|
Yep, I'm still waiting for those options
|
|
2021-05-07 07:17:49
|
https://twitter.com/NetherworldPost/status/1387634338859298817
|
|
2021-05-07 07:17:57
|
<:SadOrange:806131742636507177>
|
|
|
fab
|
2021-05-07 07:24:44
|
I agree with marketing aggressively
|
|
2021-05-07 07:25:26
|
Wb added random aut to the server to get more promotion
|
|
2021-05-07 07:26:07
|
Random Not Most are in av1 server
|
|
2021-05-07 07:26:26
|
But other joined
|
|
2021-05-07 07:27:54
|
But you're right, they are not obliged to try JPEG XL
|
|
2021-05-07 07:30:14
|
Discord is still marketing, remember
|
|
|
monad
|
|
Scope
https://twitter.com/NetherworldPost/status/1387634338859298817
|
|
2021-05-07 07:30:31
|
Unfortunately, I don't think the lossless comparison refutes the Blobfolio article, since the article was about lossy.
|
|
|
fab
|
2021-05-07 07:31:38
|
No, he also insulted the lossless compression
|
|
|
Scientia
|
2021-05-07 08:01:20
|
He fails to provide his cli options, then he says our data is "not representative" when we provide him data spanning thousands of tested images
|
|
2021-05-07 08:01:51
|
I think he has his opinion and it's not going to change
|
|
2021-05-07 08:04:54
|
He uses his subjective encoding app, which targets the lowest size; I think maybe he isn't noticing the artifacts of avif because of the smoothing
|
|
2021-05-07 08:06:10
|
He also fails to address the uses of modular lossless and lossless jpeg recompression
|
|
|
fab
|
2021-05-07 08:06:39
|
Which pascal?
|
|
2021-05-07 08:06:58
|
Pascal knows what he is doing
|
|
2021-05-07 08:07:07
|
He's a dev
|
|
|
Scientia
|
2021-05-07 08:07:38
|
I never mentioned a pascal
|
|
|
_wb_
|
2021-05-07 01:08:49
|
did I mention already how much I hate the PSD format?
|
|
2021-05-07 01:13:50
|
No, if you know anyone who can make one, I would be happy to assist them with the jxl side of it
|
|
2021-05-07 01:15:05
|
We do have half-assed PSD input support in cjxl, mostly as a way to test multi-layer image input, as well as exotic stuff like CMYK and spot colors
|
|
|
Scope
|
|
monad
Unfortunately, I don't think the lossless comparison refutes the Blobfolio article, since the article was about lossy.
|
|
2021-05-07 01:15:55
|
Yeah, I'm more about this article spreading and people jumping to conclusions that the format sucks, without any of their own comparisons, as well as the title of the article itself
|
|
2021-05-07 01:27:07
|
And for lossy and low bpp, as I assumed and asked to pay attention to this also in Jpeg XL back in the beginning of development (even considering that higher quality is much more important), people very often compare formats/encoders on low bpp and with time there will be more and more such articles
|
|
2021-05-07 01:32:52
|
Also I think lossy WebP is just as good and sometimes even better in this article, because it is more tuned for non-photographic images and compared mostly on them and lossy JXL still needs improvement in this direction
|
|
|
fab
|
2021-05-07 01:58:53
|
Webp hits better vmaf with more than 50% jpeg input reduction
|
|
2021-05-07 02:02:13
|
|
|
2021-05-07 02:02:47
|
Like, getting better than this is difficult
|
|
2021-05-07 02:03:13
|
Because jpeg xl can't even recognize a mozjpeg q77 jpeg
|
|
2021-05-07 02:03:42
|
Also with webp you get a bit of Black and White
|
|
2021-05-07 02:04:19
|
People also want small sizes at all cost
|
|
2021-05-07 02:04:47
|
|
|
2021-05-07 02:04:47
|
They aren't willing to do this
|
|
2021-05-07 02:04:57
|
Or to try lossless
|
|
2021-05-07 02:05:45
|
They want 23 kb, 47 kb, not an average of 80 kb with some at 65 kb and 55 kb
|
|
2021-05-07 02:08:19
|
People also want one-setting-fits-all
|
|
2021-05-07 02:09:36
|
They don't want to switch encoders
|
|
2021-05-07 02:11:49
|
They also want a low RAM mode: both for encoding and especially for decoding
|
|
2021-05-07 02:12:33
|
In avif, as long as the speed of the encoder is acceptable, they won't complain
|
|
|
zebefree
|
2021-05-07 02:14:16
|
https://discord.com/channels/794206087879852103/794206170445119489/807906085239128095
|
|
|
Scope
|
2021-05-07 07:49:09
|
<https://encode.su/threads/3002-Is-encode-su-community-interested-in-a-new-free-lossy-image-codec?p=69613&viewfull=1#post69613>
**Raphael Canut **
> Just a quick note, some people told me in the past that I must adapt NHW to any image size else nobody will consider and use it. That's certainly right, but again I am working on NHW in my spare time and this task is too exhausting for me. Also, I feel now that it is useless, as I have read that in the very next months, the great AVIF and JPEG XL will be definitely adopted everywhere on the Internet.
>
> However it would be great to find someone that could take over the NHW Project. Turning it into a video codec for example, because there was interest in NHW as a MotionJPEG replacement: a company was considering MJPEG for low-power camera devices, and MJPEG was too heavyweight for their processor (dual-core 240MHz Xtensa LX6, if I remember correctly). For now, with a rough calculation based on the latest Independent JPEG Group (IJG) C source code, NHW is 1.5x faster to encode than JPEG (and 2x faster to decode), but with heavy processing switched off at very little quality loss, NHW can be 2x faster to encode than JPEG and with notably better quality/compression. This engineer told me that if NHW is 2x faster to encode than MJPEG (and with better quality furthermore) then this could be very interesting for such embedded products.
|
|
2021-05-07 07:57:32
|
It's always sad when any of the new formats stops developing, especially since not many wavelet codecs exist; but it's strange that in 15 years there was no attempt to adapt the encoder for images larger than 512x512 pixels. In my opinion that should be one of the priorities, and then optimize speed, quality and everything else
|
|
|
|
necros
|
2021-05-08 10:43:14
|
Is it possible, maybe in the future, to make jpg file sizes smaller than packjpg does?
|
|
|
_wb_
|
2021-05-08 10:49:09
|
There might be some ways to slightly improve jpeg recompression. Not hugely though.
|
|
2021-05-08 10:51:22
|
We do have other advantages over other jpg recompressors like packjpg or lepton though: the jxl can be decoded progressively and in parallel. This is not the case with e.g. lepton, which uses AC to predict DC, making it inherently non-progressive.
|
|
|
|
necros
|
|
_wb_
|
2021-05-08 11:15:05
|
Yes, it needs to be fully decompressed before it can be viewed.
|
|
|
|
necros
|
2021-05-08 11:15:05
|
<@456226577798135808>it`s not a problem if you need to archive pix, there are plugins for viewers
|
|
|
Scope
|
2021-05-08 11:50:54
|
<https://encode.su/threads/342-paq8px?p=69628&viewfull=1#post69628>
> It's rare that I publish modifications around a specialized model only. There is a reason: with this version paq8px will most probably reclaim spot #1 in Lossless Photo Compression Benchmark.
|
|
2021-05-08 11:51:02
|
🤔
|
|
|
_wb_
|
2021-05-08 12:05:38
|
Interesting
|
|
2021-05-08 12:06:34
|
I wonder if some of the ideas from paq can be 'ported' to jxl encoder improvements...
|
|
|
|
necros
|
2021-05-08 12:09:33
|
<@456226577798135808> too slow for what?
|
|
|
_wb_
|
2021-05-08 12:51:01
|
I assume different trade-offs are possible, but usually I see ultra-slow, very dense compression results with paq
|
|
|
|
veluca
|
2021-05-08 12:53:21
|
yep!
|
|
2021-05-08 12:54:12
|
ignoring jxl-art, there's at least one case where a more-or-less exhaustive search produced a file 2% smaller than -s 9 (in two days, for a 64x64 file, but...)
|
|
2021-05-08 12:56:11
|
I have a couple of ideas for a better tree learning algorithm, but I think they'll be super slow 😄
|
|
|
_wb_
|
2021-05-08 01:09:49
|
Slower is certainly possible 😂
|
|
2021-05-08 01:10:06
|
Denser probably too 😁
|
|
2021-05-08 01:20:22
|
snail has been suggested
|
|
2021-05-08 01:20:39
|
glacier and tectonic_plate have also been suggested (not animals though)
|
|
|
Fox Wizard
|
|
diskorduser
|
2021-05-08 01:25:52
|
Plaster worms move very slow
|
|
|
Pieter
|
2021-05-08 01:53:01
|
Easy to find animals slower than tortoises in absolute speed: just find smaller animals
|
|
|
_wb_
|
2021-05-08 01:57:25
|
sponges are pretty slow too
|
|
|
Pieter
|
2021-05-08 02:01:50
|
What about sea anemones?
|
|
|
monad
|
2021-05-08 05:20:34
|
Looking forward to amoeba speed
|
|
|
_wb_
|
2021-05-08 05:28:03
|
It would be cool if we can somehow get the existing speeds for lossless faster, anything but s3 is quite slow atm
|
|
|
Scope
|
2021-05-08 05:31:19
|
> Total runtime: 326266 sec (90 hours = 3.7 days)
About paq8px speed
Other encoders/formats, even the slowest ones, are much faster; plus PAQ consumes a lot of RAM (although consumption is the same regardless of content and depends only on the compression density options)
|
|
2021-05-08 10:36:30
|
<https://www.reddit.com/r/AV1/comments/n7xrsp/does_gimp_avif_plugin_allow_lossless_compression/>
|
|
2021-05-08 10:36:33
|
|
|
|
diskorduser
|
2021-05-09 07:17:09
|
I remember an image format in which you can encode certain parts of the image at higher quality and other areas at lower quality by specifying a mask image to the encoder. Anyone remember it?
|
|
|
_wb_
|
2021-05-09 07:21:12
|
Maybe I did implement that for lossy flif, not sure
|
|
|
diskorduser
|
2021-05-09 07:21:37
|
Yeah I think it is flif !!!
|
|
|
_wb_
|
2021-05-09 07:21:58
|
It's certainly possible to do that with jxl too, we do need to add an encode option for it because it is quite cool
|
|
2021-05-09 07:25:18
|
Instead of setting a global dist target, have a min-dist and max-dist and a heatmap where 0 means min, 1 means max, in between means in between.
|
|
2021-05-09 07:25:46
|
DC would (probably) still be done at the min dist
|
|
2021-05-09 07:26:13
|
But AC quantweights could be adjusted
|
|
2021-05-09 07:34:45
|
Then connect it to AI saliency heatmap generation, and you get semantic variable quality where you do the bushes at d3 and the faces at d1. Not really a very honest / high-fidelity thing to do, but could be great for some web use cases...
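The min-dist/max-dist heatmap idea above can be sketched in a few lines. This is a hypothetical illustration only: the function name, the 8x8 block granularity, and the numpy representation are all assumptions, not the actual jxl encoder API.

```python
import numpy as np

# Sketch of the idea: instead of one global distance target, interpolate
# per block between min_dist and max_dist using a saliency heatmap
# (0 = most important -> min_dist, 1 = least important -> max_dist).
def per_block_distance(heatmap, min_dist=1.0, max_dist=3.0, block=8):
    h, w = heatmap.shape
    bh, bw = h // block, w // block
    # average saliency per block
    blocks = heatmap[:bh * block, :bw * block].reshape(bh, block, bw, block)
    saliency = blocks.mean(axis=(1, 3))
    # 0 means min, 1 means max, in between means in between
    return min_dist + saliency * (max_dist - min_dist)

heat = np.zeros((64, 64))
heat[:32] = 1.0          # top half unimportant: quantize harder
d = per_block_distance(heat)
print(d[0, 0], d[7, 0])  # 3.0 for the top blocks, 1.0 for the bottom
```

A real encoder would still apply perceptual weighing on top; this only shows how the variable target could be derived.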
|
|
|
|
veluca
|
2021-05-09 09:53:10
|
I mean, adaptive quantization kinda tries to do that, but on perceptual level instead of "semantic" level...
|
|
|
_wb_
|
2021-05-09 10:05:56
|
Yes - and it wouldn't be interesting to just use the semantic heatmap as is as the quantmap, it should still do perceptual weighing. But the perceptual target would be variable instead of global.
|
|
|
raysar
|
2021-05-09 08:39:57
|
Oh i see you speak also about variable quality in picture 😄
|
|
2021-05-09 08:42:29
|
Look at these google maps jpeg artifacts — is it a custom jpeg encoder? (it could be sensor banding, but I'm suspicious)
I prefer this type of artifact over 8*8 blockiness.
|
|
2021-05-09 08:48:55
|
The encoder is so optimized that we don't see horrible 8*8-aligned dct artifacts, but the bpp seems to be very low.
|
|
2021-05-09 09:05:22
|
To answer myself: google maps uses 256px tiles, and the size seems to vary from 0.6 to 1.9 bpp, so it's not as low as I imagined.
It's the heavy noise of the satellite sensor that "upgrades" the quality of the jpeg encoder's artifacts.
But the encoder seems to be very good.
That's a downloaded tile from maps.google
|
|
2021-05-09 09:15:56
|
Artifacts are a problem only in smooth areas, when the bpp is too low for jpeg.
|
|
2021-05-09 09:18:31
|
And the seams between the 256px jpeg tiles are not visible.
|
|
|
BlueSwordM
|
2021-05-12 05:25:25
|
So, is lossy JXL faster in terms of decompression speeds vs PNG?
|
|
|
_wb_
|
2021-05-12 05:33:02
|
I doubt it - PNG decoding is not much more work than unzipping a ppm
|
|
|
|
veluca
|
2021-05-12 07:53:48
|
well, it could be faster for a large enough image and enough threads
|
|
|
fab
|
2021-05-14 04:22:39
|
what you think about basis?
|
|
2021-05-14 04:22:40
|
https://github.com/GoogleChromeLabs/squoosh/pull/1017
|
|
2021-05-14 04:22:46
|
is it better than jpeg xl?
|
|
|
Jim
|
2021-05-14 04:24:47
|
I believe Basis is used for storing graphics for video games in graphics cards memory. Don't think it's meant for the web.
|
|
|
fab
|
2021-05-14 04:25:13
|
lossless doesn't seem lossless to me
|
|
|
_wb_
|
2021-05-14 04:53:55
|
Texture formats aim for something a bit different than regular image codecs: it's to save video memory by letting the gpu use the compressed representation directly.
|
|
2021-05-14 04:54:22
|
They are very fast and hw friendly by design, but not very great in terms of compression
|
|
|
fab
|
2021-05-14 05:07:55
|
why doesn't jpeg xl cover this use case?
|
|
|
Jim
|
2021-05-14 07:48:01
|
Not meant to. It's meant as a web format.
|
|
|
_wb_
|
2021-05-14 08:40:36
|
Well not just a web format. But not a gpu texture format.
|
|
|
|
Deleted User
|
2021-05-14 08:53:32
|
why not more? I want jpeg xl for audio, video and docx
|
|
|
Pieter
|
2021-05-14 09:00:04
|
at least add vector graphics support
|
|
2021-05-14 09:00:06
|
_hides_
|
|
|
_wb_
|
2021-05-14 09:20:42
|
18181-5, Vector Extensions?
|
|
|
Scope
|
2021-05-15 03:01:53
|
https://github.com/facebook/zstd/releases/tag/v1.5.0
|
|
|
BlueSwordM
|
2021-05-15 03:14:05
|
We're getting them with every patch. We just get used to it 😛
|
|
2021-05-15 03:21:12
|
JXL can be built with GCC, but knowing that Clang is now a bit faster for image encoders, and that only modular would really benefit from faster encoding, I don't think using GCC has any benefits.
|
|
|
|
necros
|
2021-05-16 09:43:41
|
Re all, how to losslessly convert animated gif to webp and mng?
|
|
|
_wb_
|
2021-05-16 09:54:16
|
ffmpeg -i foo.gif foo.apng should work
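For the webp part of the question, ffmpeg's libwebp encoder can in principle do it losslessly as well — a hedged sketch (flag names from ffmpeg's libwebp encoder options; availability depends on how ffmpeg was built):

```shell
# GIF -> APNG, lossless
ffmpeg -i foo.gif foo.apng

# GIF -> lossless animated WebP (requires ffmpeg built with libwebp;
# -loop 0 loops forever, matching typical GIF behaviour)
ffmpeg -i foo.gif -c:v libwebp -lossless 1 -loop 0 foo.webp
```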
|
|
|
|
necros
|
2021-05-16 10:02:34
|
<@!794205442175402004>thanx
|
|
|
fab
|
2021-05-16 08:00:49
|
mp4 quality on youtube is not bad
|
|
2021-05-16 08:00:55
|
false
|
|
2021-05-16 08:00:59
|
https://www.youtube.com/watch?v=ivr0qpct4x4
|
|
2021-05-16 08:02:25
|
mp4 quality is heavily limited
|
|
2021-05-16 08:02:58
|
but anyway tv uses mp4
|
|
2021-05-16 08:03:11
|
and i expect famous people get better mp4 quality
|
|
|
improver
|
2021-05-16 08:11:46
|
mp4 is a container, not a codec
|
|
|
190n
|
2021-05-16 08:16:05
|
for that video the mp4 versions are h.264, but youtube also encodes av1 in an mp4 container on some videos
|
|
|
|
Deleted User
|
2021-05-16 09:08:23
|
Does JPEG (1992) support wide gamut? (Probably yes, via ICC, but I just want to be sure).
|
|
|
|
veluca
|
2021-05-16 09:18:13
|
yup, although you only get 8 bits, so...
|
|
|
|
Deleted User
|
2021-05-16 09:27:17
|
Someone removed some bullet points from plwiki article and I'm writing discussion entry on that user's page.
|
|
2021-05-16 09:33:29
|
```- Improved functionality and efficiency compared to traditional image formats (e.g. JPEG, GIF and PNG);
- Support for both photographic and synthetic imagery;
- Graceful quality degradation across a large range of bitrates;
- Support for wide color gamut and HDR;
- Royalty-free format with an open-source reference implementation.```
|
|
|
_wb_
|
2021-05-16 10:18:04
|
8-bit is OKish for P3, but I wouldn't do wider gamut than that with it
|
|
|
Petr
|
|
Someone removed some bullet points from plwiki article and I'm writing discussion entry on that user's page.
|
|
2021-05-17 05:57:02
|
I agree that this info shouldn't be removed from the article. That user should have discussed the removal first.
|
|
|
|
Deleted User
|
2021-05-17 06:57:36
|
<@!792428046497611796> <@456226577798135808> discussion entry is ready. Let's hope that they respond quickly...
https://pl.wikipedia.org/wiki/Dyskusja_wikipedysty:Kubahaha#JPEG_XL
|
|
2021-05-17 07:23:46
|
I tried to keep the tone quite cool & chill 🙂
|
|
|
_wb_
|
2021-05-17 01:07:47
|
I found something funny, <@!799692065771749416>
|
|
2021-05-17 01:10:27
|
https://sneyers.info/jon_old/papers/jpif.pdf
|
|
2021-05-17 01:10:34
|
a never-published draft
|
|
2021-05-17 01:10:51
|
of something that would later become FLIF
|
|
2021-05-17 01:11:49
|
that pdf is from January 2012
|
|
2021-05-17 01:13:32
|
Jon and Pieter's Image Format, haha
|
|
2021-05-17 01:14:11
|
it's a pretty early version of FLIF, it does some things completely differently
|
|
2021-05-17 01:15:29
|
like that recursive splitting thing - that's basically an extreme form of variable block sizes, where things are almost completely arbitrary, a performance nightmare lol
|
|
2021-05-17 01:16:13
|
I don't like the HEIF container
|
|
2021-05-17 01:16:26
|
it's verbose and ugly
|
|
2021-05-17 01:16:41
|
and Nokia claims patents on it
|
|
2021-05-17 01:17:16
|
and it cannot really do anything we cannot already do natively in a jxl codestream
|
|
2021-05-17 01:18:50
|
very good question, I have asked that a few times but never really got an answer other than a general feeling that Nokia's patents are bogus (which is probably true) so you don't have to worry about it (which is probably not true)
|
|
2021-05-17 01:19:45
|
not really, other than that libheif could relatively easily add avif support by just switching out x265 for libaom
|
|
2021-05-17 01:22:13
|
we still need to implement metadata handling properly tbh... we have defined brotli-compressed exif/xmp but we don't have tooling for that yet
|
|
2021-05-17 01:23:00
|
well I guess there are two options: tricky and compressed, or convenient and simple but uncompressed/redundant
|
|
2021-05-17 01:24:25
|
if those photos contain a large amount of exif stuff, probably yes
|
|
2021-05-17 01:25:46
|
slower: probably we should let the brotli encode speed be configurable, fast brotli should be unnoticeably fast, but yes, at effort 11, Brotli encoding of a big exif/xmp blob will take some time...
|
|
|
fab
|
2021-05-17 01:58:23
|
yes a jxl will always be 22% smaller on camera
|
|
|
_wb_
|
2021-05-17 02:01:41
|
depends on how much stuff there is in the metadata, usually metadata should only be a small fraction of the total size
|
|
|
|
Deleted User
|
|
_wb_
https://sneyers.info/jon_old/papers/jpif.pdf
|
|
2021-05-17 03:32:03
|
The http://cdb.paradice-insight.us/ website is down ☹️
|
|
|
lithium
|
2021-05-20 04:09:53
|
If you want an encoder with butteraugli support:
I think only jpeg xl vardct has full support;
libaom avif tune=butteraugli only implements 8-bit ycbcr.
```"Only 8 bit depth images supported in tune=butteraugli mode.");
"Only BT.709 and BT.601 matrix coefficients supported in "
"tune=butteraugli mode. Identity matrix is treated as BT.601.");```
|
|
2021-05-20 04:10:42
|
I think maybe someone will need this information.
|
|
|
fab
|
2021-05-21 01:43:40
|
<@!111445179587624960> do you have recent results of lossless webp2 with the 19052021 build
|
|
2021-05-21 01:44:20
|
even a small benchmark comparison?
|
|
2021-05-21 01:44:26
|
could you link me the discord link?
|
|
|
Scope
|
2021-05-21 01:47:57
|
No, I'm waiting for some major changes, because the encoder is very slow
|
|
|
fab
|
2021-05-21 01:48:22
|
is very fast in lossy
|
|
2021-05-21 01:48:30
|
s9 is max 1 minute and 30
|
|
2021-05-21 01:48:36
|
with hd image
|
|
2021-05-21 01:48:45
|
on i3 330m
|
|
2021-05-21 01:48:54
|
not even a avx2 processor
|
|
2021-05-21 01:49:14
|
lossless is slower?
|
|
2021-05-21 01:49:23
|
even compared to how fast lossy was before?
|
|
|
improver
|
2021-05-21 01:49:26
|
in my experience it's faster than avif but not as fast as jxl
|
|
|
fab
|
2021-05-21 01:49:27
|
like 2 hour a photo
|
|
|
Scope
|
2021-05-21 01:50:40
|
In lossy yes, unless using the slowest speeds, but on lossy there are comparisons from the WebP team
|
|
2021-05-21 01:53:07
|
At maximum compression Lossless WP2 is 5-12 times slower than JXL
|
|
|
fab
|
2021-05-21 02:04:38
|
what are you saying
|
|
2021-05-21 02:04:50
|
that like on i3 330m with a month old wp2 version
|
|
2021-05-21 02:04:58
|
3 hours for a wp2 hd in lossy
|
|
2021-05-21 02:05:05
|
21 hours for a wp2 hd in lossless
|
|
2021-05-21 02:05:12
|
lossless is more complicated
|
|
2021-05-21 02:05:15
|
?
|
|
2021-05-21 02:05:30
|
is it really true? have you done extensive testing with your hardware?
|
|
2021-05-21 02:05:56
|
also now lossy takes 1 minute 30 at max speed
|
|
2021-05-21 02:06:12
|
lossless is still many hour range?
|
|
2021-05-21 02:06:22
|
.....
|
|
2021-05-21 02:06:29
|
but i cared about the compression
|
|
2021-05-21 02:06:30
|
how it is
|
|
2021-05-21 02:06:39
|
can you link a discord link
|
|
2021-05-21 02:07:00
|
of an image that's similar to a screenshot from a phone
|
|
2021-05-21 02:07:10
|
not computer screenshot please
|
|
|
Scope
|
2021-05-21 02:08:00
|
I have done tests for WP2 and they are in my comparison, but it is not the most recent version (I don't want to do new comparisons very often without significant changes because it takes a gigantic amount of time)
<https://docs.google.com/spreadsheets/d/1ju4q1WkaXT7WoxZINmQpf4ElgMD2VMlqeDN2DuZ6yJ8/>
|
|
|
fab
|
2021-05-21 02:08:34
|
no i want a link in benchmark
|
|
2021-05-21 02:08:47
|
and not with pixel art please
|
|
2021-05-21 02:09:54
|
or i will do when my phone is charged
|
|
2021-05-21 02:10:06
|
i will benchmark one screenshot
|
|
2021-05-21 02:10:21
|
what i should set
|
|
2021-05-21 02:10:28
|
-effort 9 -q 100 ?
|
|
2021-05-21 02:10:31
|
or other options?
|
|
|
Scope
|
2021-05-21 02:11:12
|
This is a lossless benchmark for all types of images, for lossy there is a comparison from WebP team and eclipseo (but also not the latest versions)
Yes `-effort 9 -q 100`
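For reference, the full command lines might look like this (assuming a current cwp2/cjxl build; exact flag spellings can differ between snapshots):

```shell
# WebP2 lossless at maximum effort
cwp2 -effort 9 -q 100 screenshot.png -o screenshot.wp2

# JPEG XL counterpart at maximum effort
cjxl screenshot.png screenshot.jxl -q 100 -e 9
```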
|
|
|
fab
|
2021-05-21 02:11:29
|
thanks
|
|
|
lithium
|
2021-05-21 07:32:35
|
webp2 still uses PSNR? I can't find any information about this...
|
|
|
|
Deleted User
|
2021-05-21 10:04:16
|
Just use Butteraugli and/or SSIMULACRA lmao
|
|
|
Scope
|
2021-05-22 12:13:32
|
https://news.ycombinator.com/item?id=27244004
|
|
|
_wb_
|
2021-05-22 12:22:45
|
Originally in FLIF we had the same idea, just not with neural nets but with MA trees. Define deterministically how the tree gets trained during encode/decode, and you don't need to signal the tree itself
|
|
2021-05-22 12:23:35
|
We ditched that idea because it implies slow decode and also no way to later improve the encoder (since the tree learning is part of the bitstream spec, not an encoder choice)
|
|
|
Scope
|
2021-05-22 01:09:19
|
https://news.ycombinator.com/item?id=27230427
|
|
|
|
Deleted User
|
|
Scope
https://news.ycombinator.com/item?id=27230427
|
|
2021-05-22 01:10:33
|
|
|
|
Scope
https://news.ycombinator.com/item?id=27230427
|
|
2021-05-22 03:17:06
|
`hexo 1 day ago`
> Sometimes I wish people programmed like we had 512megs in a luxury machines.
God I wish...
*Twenty One Pilots: Stressed Out starts playing*
|
|
|
Scientia
|
2021-05-23 06:51:21
|
limitations are better
|
|
2021-05-23 06:51:34
|
nowadays we have 400GB AAA video games
|
|
2021-05-23 06:52:08
|
look at the demoscene with all their stuff fitting programs into the tiniest sizes
|
|
2021-05-23 06:52:56
|
ofc that's not realistic, but even at 10-100x less space efficiency than the demoscene stuff, you'd have a program that's already smaller than most things available now
|
|
|
_wb_
|
2021-05-23 07:17:19
|
They spent all the budget on actors and artists, no money left for coders?
|
|
2021-05-23 07:18:29
|
(I think they also secretly think "more gigabytes" means "more game" or "better game" or "worth more")
|
|
|
|
Deleted User
|
|
_wb_
They spent all the budget on actors and artists, no money left for coders?
|
|
2021-05-23 07:19:35
|
CDPR spent all their money on ray tracing and Keanu Reeves and that's why CP2077 is buggy, makes sense
|
|
|
fab
|
2021-05-23 07:20:29
|
|
|
2021-05-23 07:20:32
|
|
|
2021-05-23 07:20:54
|
164 vs 14.1 kb
|
|
2021-05-23 07:32:50
|
|
|
2021-05-23 07:32:59
|
compressed image
|
|
2021-05-23 07:33:07
|
in 720p av1
|
|
2021-05-23 07:46:37
|
|
|
2021-05-23 07:46:37
|
|
|
2021-05-23 07:46:45
|
set q 70.9
|
|
2021-05-23 07:46:53
|
webp still good at compressing
|
|
2021-05-23 07:47:00
|
so that's why people don't use jpeg xl
|
|
|
|
veluca
|
|
Scientia
nowadays we have 400GB AAA video games
|
|
2021-05-23 07:48:48
|
But I think most of those are textures and the like, no?
|
|
|
Scientia
|
2021-05-23 07:49:14
|
it's kind of the same to me
|
|
|
fab
|
2021-05-23 07:49:22
|
|
|
2021-05-23 07:49:23
|
|
|
2021-05-23 07:49:26
|
settings for wp2
|
|
|
Scientia
|
2021-05-23 07:49:35
|
blatant use of raw textures is very similar to not optimizing code at all
|
|
|
fab
|
2021-05-23 07:49:47
|
notice that even at high quality it starts to ruin detail
|
|
|
Scientia
|
2021-05-23 07:49:52
|
you could use zopflinated PNGs for textures
|
|
2021-05-23 07:50:08
|
long compress time but speedy decompress time
|
|
2021-05-23 07:50:15
|
perfect for a video game
|
|
2021-05-23 07:50:32
|
especially when you would have used raw textures
|
|
|
|
veluca
|
2021-05-23 07:51:11
|
well, the problem is that textures typically get decompressed by a GPU
|
|
2021-05-23 07:51:22
|
and they don't like PNGs
|
|
|
Scientia
|
|
|
veluca
|
2021-05-23 07:51:37
|
basically it would increase game load times AFAIU
|
|
|
Scientia
|
2021-05-23 07:51:40
|
there's tons of gpu friendly middleware formats
|
|
2021-05-23 07:51:52
|
i guess that can cost money
|
|
2021-05-23 07:52:10
|
but if i was developing a game i'd do what I needed to avoid raw assets
|
|
|
_wb_
|
2021-05-23 08:26:27
|
https://github.com/BinomialLLC/basis_universal
|
|
|
Scientia
|
2021-05-23 08:29:04
|
Welp that's a free solution
|
|
2021-05-23 08:29:11
|
No reason not to try it
|
|
2021-05-23 08:29:20
|
Uncompressed assets are so lazy IMO
|
|
|
_wb_
|
2021-05-23 08:29:40
|
Maybe games should look more into procedurally generated GPU textures. Something like jxl art but not restricted to an existing bitstream, any code could be used. That would be even better than any compression.
|
|
|
Scientia
|
2021-05-23 08:30:09
|
I think demoscene does this
|
|
|
_wb_
|
|
Scientia
|
2021-05-23 08:30:31
|
In the future when procedural generated worlds are more common
|
|
2021-05-23 08:30:42
|
We could get fully procedurally generated textures etc
|
|
2021-05-23 08:31:52
|
I know games like no mans sky did procedural generation, but even that game had set textures and other things
|
|
|
_wb_
|
2021-05-23 08:32:18
|
It's of course harder for artists... You don't get a paintbrush, you get tunable parameters or just writing code
|
|
2021-05-23 08:32:32
|
Very different paradigm
|
|
|
Scientia
|
2021-05-23 08:32:32
|
It would be a modern achievement to have a fully procedurally generated realtime playable game
|
|
2021-05-23 08:32:57
|
That didn't just look like randomness everywhere
|
|
|
_wb_
|
2021-05-23 08:38:38
|
https://jxl-art.surma.technology/?zcode=XY7BCsIwEETv-xVzL4FtIh4FFbQnFRVyLmZt99BWavD7TaJ48LLMvmGG2WgM8og9aiavoQi7oEa06-NHH2eVMbZRpxE1nbfXdHdzO8hpeoJhHDMR6R03rJJlcJEIY_nH-Mucc4X5xJYpBBisX52vDqkkfdnaNyWA7PkyQgIq-09y_Rs
|
|
2021-05-23 08:38:43
|
That could be skin
|
|
2021-05-23 08:41:22
|
https://jxl-art.surma.technology/?zcode=rZDNCoMwEITveYo5K8EotXgqtIXWk5W2kLPUqAF_igahb9-NVLD35BB2duHbnTlpU6q3aRCGTOrSFiLawc1Lla4bsxDZbdSqN4XRQ4-Q3c9P-i9j0al8mCDAQyEY0xVeONAk8NDo3mCo0A3TBC-wsw_N9kI4uo6AeT6qmaAxLf_xE2d8juNcZ34m4SNmWxWReigbjNhUq_u1584pLFoSmscOw7NAulswa1WCJ7ay_Wu6uFgjOLYtBfwn-aolIgJ8AQ
|
|
2021-05-23 08:41:43
|
That could be used in a tree
|
|
|
diskorduser
|
2021-05-24 10:10:38
|
`stat -c "%s %n" 16*
117391935 16.emma
96327187 16.ppm` <@!111445179587624960> I tried to compress a dng data.
|
|
|
Scope
|
2021-05-24 10:14:46
|
Maybe this EMMA does not have a parser/model for DNG; perhaps it is better to use the older, more complex version (but this is a general compressor for different formats, not an image format): <https://encode.su/threads/2459-EMMA-Context-Mixing-Compressor>
|
|
|
_wb_
|
2021-05-24 10:33:44
|
The ppm contains what? Grayscale bayer data?
|
|
|
diskorduser
|
2021-05-24 11:30:07
|
After using it on bayered data, it compresses better.
|
|
2021-05-24 11:32:57
|
Imagemagick ➡️ dng to ppm; emma ❎
dcraw -D ➡️ pgm to ppm; Emma ✅
|
|
2021-05-24 11:34:17
|
Uncompressed dng - 30.7mb
Emma - 4.8mb
|
|
2021-05-24 11:34:55
|
ppm - 45.9 mb
|
|
2021-05-24 11:37:28
|
Just 4.8 mb raw data 🤯
|
|
2021-05-24 11:44:50
|
Jxl modular s7 - 7 mb
|
|
2021-05-24 11:49:21
|
zstd -19 - 8.6 mb
zpaq -m6 - 5.5mb
|
|
|
|
veluca
|
2021-05-24 11:54:26
|
how about `-s 9 -E 3 -I 1`? (be prepared to wait for a *while*)
|
|
|
_wb_
|
2021-05-24 11:57:25
|
bayered data as grayscale is probably not very good in jxl, better do it as 4 channels. We have an extra channel defined for bayer deltaG, just nothing implemented that uses it
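A sketch of that 4-channel rearrangement (assuming an RGGB layout, which is camera-dependent; this is just the pixel shuffle, not an existing libjxl option):

```python
import numpy as np

# Pack a grayscale Bayer mosaic (as dcraw -D emits it) into 4 planes so
# each plane is spatially smooth and compresses better than the mosaic.
def bayer_to_planes(mosaic):
    return (mosaic[0::2, 0::2],   # R
            mosaic[0::2, 1::2],   # G1
            mosaic[1::2, 0::2],   # G2
            mosaic[1::2, 1::2])   # B

def planes_to_bayer(r, g1, g2, b):
    h, w = r.shape
    mosaic = np.empty((h * 2, w * 2), dtype=r.dtype)
    mosaic[0::2, 0::2] = r
    mosaic[0::2, 1::2] = g1
    mosaic[1::2, 0::2] = g2
    mosaic[1::2, 1::2] = b
    return mosaic

raw = np.arange(16, dtype=np.uint16).reshape(4, 4)
# the rearrangement is lossless: splitting and re-interleaving round-trips
assert np.array_equal(planes_to_bayer(*bayer_to_planes(raw)), raw)
```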
|
|
|
diskorduser
|
2021-05-24 11:58:42
|
4 channels? How to do it?
|
|
|
_wb_
|
2021-05-24 12:04:04
|
I guess we'll need to write a codec_in_out that can take dcraw pgm as input and reconstruct it, for flif I did some stuff that did that
|
|
|
diskorduser
|
|
veluca
how about `-s 9 -E 3 -I 1`? (be prepared to wait for a *while*)
|
|
2021-05-24 12:12:07
|
It's taking more than 10minutes. I stopped it.
|
|
|
|
veluca
|
2021-05-24 12:12:21
|
haha
|
|
2021-05-24 12:16:25
|
we'll do that for lossless too someday 😛 (it's mostly an encoder choice)
|
|
|
fab
|
2021-05-26 07:58:08
|
https://github.com/hohMiyazawa/deebtjeme
|
|
2021-05-26 08:05:34
|
https://github.com/ceynri/FIC
|
|
|
_wb_
|
2021-05-26 08:08:22
|
That's cool but also the most niche image codec I have ever seen
|
|
|
Nova Aurora
|
|
_wb_
That's cool but also the most niche image codec I have ever seen
|
|
2021-05-28 05:25:34
|
Where's my cat compression codec?
|
|
|
_wb_
|
2021-05-28 05:29:30
|
|
|
2021-05-28 05:29:58
|
This is how my cat was sleeping yesterday
|
|
2021-05-28 05:30:23
|
Head down
|
|
2021-05-28 05:30:53
|
|
|
2021-05-28 05:31:53
|
|
|
|
|
jjido
|
2021-05-28 11:04:38
|
https://www.livescience.com/cats-tricked-by-optical-illusion-boxes.html
|
|
|
Scope
|
2021-05-28 11:29:29
|
https://github.com/AOMediaCodec/libavif/commit/2a639cac34ee7881f07ff685b8faa43073799b66
|
|
2021-05-28 11:30:39
|
|
|
|
_wb_
|
2021-05-28 12:17:51
|
I think it's a bit wrong to call this "progressive AVIF"
|
|
2021-05-28 12:19:44
|
I would call it either "AVIF with a preview image" (if the preview image is independent of the main image) or "Hierarchical AVIF" (if they use an AV1 with multi-res 'layers' which do the "upsample previous version and add residuals" thing).
|
|
2021-05-28 12:21:13
|
the relevant analogy is not progressive JPEG, but either a JPEG with an embedded Exif thumbnail, or a hierarchical JPEG (which was a thing in the original JPEG spec but nobody really implemented it and it doesn't really work well enough to be worth implementing)
|
|
2021-05-28 12:29:02
|
These are three different ways of doing 'refinement steps', but they all work in different ways:
- A preview is separately encoded and could be a completely different image than the main image.
- Hierarchical encoding first encodes e.g. a 500x500 image, then a 1000x1000 residual image that gets added to an upsampled version of the 500x500 image. So you do need the first image to decode the second, but you're still encoding extra pixels. In practice in JPEG this approach doesn't work well, because encoding the 1000x1000 residual image is about as costly as encoding the 1000x1000 full image, so you are basically doing an extra encoding of the 500x500 image. The overhead in density is therefore very significant, since you are kind of encoding pixels multiple times (or put another way: for every 4 pixels, you essentially encode 5, and the extra 5th one has more entropy than the other 4).
- Progressive encoding encodes the final image with a bitstream organization that allows extracting lower-resolution or lower-quality previews from prefixes of the bitstream. No pixel is encoded twice.
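To make the hierarchical case concrete, here is a toy numpy sketch of the "upsample previous layer and add a residual" scheme (not any codec's actual layering — nearest-neighbour upsampling and the 2x split are illustrative assumptions):

```python
import numpy as np

# Split an image into a half-res base layer plus a full-res residual
# against the upsampled base. Reconstruction is exact, but every base
# pixel is effectively paid for twice.
def split_layers(img):
    base = img[0::2, 0::2]                      # half-res layer
    up = base.repeat(2, axis=0).repeat(2, axis=1)
    residual = img.astype(np.int32) - up        # full-res residual layer
    return base, residual

def reconstruct(base, residual):
    up = base.repeat(2, axis=0).repeat(2, axis=1)
    return (up + residual).astype(base.dtype)

img = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
base, residual = split_layers(img)
assert np.array_equal(reconstruct(base, residual), img)
```

Progressive encoding avoids this duplication entirely by reorganizing the bitstream of a single encoding, which is why no pixel is encoded twice.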
|
|
2021-05-28 12:30:21
|
As far as I understand, AV1 cannot do progressive encoding. It can do multi-layer encoding though, which is a form of hierarchical encoding. And at the AVIF level, you can have previews.
|
|
2021-05-28 12:31:43
|
JPEG XL can do previews, multi-layer, and progressive encoding. Its lossy mode is _always_ at least somewhat progressive (1:8 goes first), and can be very progressive.
|
|
2021-05-28 12:33:32
|
These subtle differences between preview encoding, hierarchical encoding and progressive encoding get a bit lost if we just start calling everything "progressive", imo.
|
|
|
|
Deleted User
|
2021-05-28 08:42:54
|
<@!321486891079696385>, again quick question: will VVC support grain synth like AV1 or not? I don't want VVC not to have it (because unfortunately I doubt that AV1 will end up in cable TV decoders)...
|
|
|
BlueSwordM
|
|
<@!321486891079696385>, again quick question: will VVC support grain synth like AV1 or not? I don't want VVC not to have it (because unfortunately I doubt that AV1 will end up in cable TV decoders)...
|
|
2021-05-28 08:53:14
|
Technically, there is a Supplemental Enhancement Information (SEI) message for grain synthesis, but the way it's done inside of VVC seems less complex and complete.
But yes, it does support it if someone were to make something around it.
|
|
|
Kenny / Mountain Nomad
|
|
Scientia
We could get fully procedurally generated textures etc
|
|
2021-05-29 11:30:04
|
Procedural textures are already possible - but doing it in game is trickier for performance, you would probably need to do it at launch time.
|
|
|
|
Deleted User
|
2021-05-29 08:45:27
|
<@!321486891079696385> one last (at least for now), very important question:
There's that June 1st Google Photos free tier apocalypse. JPEG "High quality" transcoding in Google Photos can *really* suck sometimes, so I think I'll store my not-yet-converted photos in original quality, taking up space (and convert them inside Google Photos to JPEG XL, when they *finally* ship it).
The main problem are **videos**. The originals are big and I'd go bankrupt the very moment I wanted to store them uncompressed on Google Photos. Unfortunately their video transcoding in free tier *butchers* video quality more than pictures, so I'm thinking about two options:
- upload in free tier and accept mediocre video quality, but at least they'll be stored for free,
- wait for PCs to get more powerful and encoders + stabilizers to mature and *properly* stabilize them with AI that doesn't get jerky and doesn't lose field of view (second one being my biggest complaint about Google Photos for Android built-in non-AI stabilizer) and then encode with a mature AV1 encoder with grain synth and personalized parameter settings (the BIG downside: I'll have to store those videos at full capacity).
I'm really concerned about video quality, but at the same time I don't have stable budget. If I drop out of my uni (something quite likely to happen) bc I'm stupid, I might actually become homeless.
Which option would you choose?
|
|
|
Maximilian
|
2021-05-29 11:46:25
|
<@!111445179587624960> do you know if BMF can do grayscale?
|
|
|
Scope
|
2021-05-29 11:51:56
|
Yes, except that the alpha channel is not supported, otherwise it is a typical image format
|
|
|
BlueSwordM
|
|
<@!321486891079696385> one last (at least for now), very important question:
There's that June 1st Google Photos free tier apocalypse. JPEG "High quality" transcoding in Google Photos can *really* suck sometimes, so I think I'll store my not-yet-converted photos in original quality, taking up space (and convert them inside Google Photos to JPEG XL, when they *finally* ship it).
The main problem are **videos**. The originals are big and I'd go bankrupt the very moment I wanted to store them uncompressed on Google Photos. Unfortunately their video transcoding in free tier *butchers* video quality more than pictures, so I'm thinking about two options:
- upload in free tier and accept mediocre video quality, but at least they'll be stored for free,
- wait for PCs to get more powerful and encoders + stabilizers to mature and *properly* stabilize them with AI that doesn't get jerky and doesn't lose field of view (second one being my biggest complaint about Google Photos for Android built-in non-AI stabilizer) and then encode with a mature AV1 encoder with grain synth and personalized parameter settings (the BIG downside: I'll have to store those videos at full capacity).
I'm really concerned about video quality, but at the same time I don't have stable budget. If I drop out of my uni (something quite likely to happen) bc I'm stupid, I might actually become homeless.
Which option would you choose?
|
|
2021-05-30 01:04:42
|
The 1st one is your best alternative.
|
|
|
|
Deleted User
|
|
BlueSwordM
The 1st one is your best alternative.
|
|
2021-05-30 01:22:30
|
But some videos are *completely* shaky (they were taken either with a smartphone or with Nikon Coolpix P900 at high zoom settings). Are you sure that it's better to just upload them all without any offline backup and processing? I thought that AV1 can compress quite well without that much quality loss...
|
|
|
diskorduser
|
|
|
Deleted User
|
|
diskorduser
x265
|
|
2021-05-30 04:43:55
|
HEVC doesn't have grain synth...
|
|
|
diskorduser
|
2021-05-30 05:21:52
|
Is grain Synth necessary for videos taken with p900 and smartphone camera? I think finer details are already lost with those cameras.
|
|
|
|
Deleted User
|
|
diskorduser
Is grain Synth necessary for videos taken with p900 and smartphone camera? I think finer details are already lost with those cameras.
|
|
2021-05-30 05:36:29
|
Maybe not with P900 or Samsung Galaxy Note 9, but Lumia 950 will *definitely* benefit from grain synth because of less destructive, more "natural" processing.
|
|
|
raysar
|
|
But some videos are *completely* shaky (they were taken either with a smartphone or with Nikon Coolpix P900 at high zoom settings). Are you sure that it's better to just upload them all without any offline backup and processing? I thought that AV1 can compress quite well without that much quality loss...
|
|
2021-05-31 12:20:42
|
A good solution is to upscale your video to 4k (fast) and then upload to youtube/google. The bitrate is acceptable.
|
|
|
Scope
|
2021-05-31 04:43:38
|
Hmm, strange: when converting to PPM with LibVips, not all encoders understand the PPM, and the size of the resulting encoded files is also larger (with IM or ffmpeg this does not happen). The only difference I found in these PPMs:
```#vips2ppm - 2021-05-31T19:31:32.569078+03```
|
|
|
Pieter
|
2021-05-31 04:52:33
|
What is the first line of the PPM file?
|
|
|
_wb_
|
2021-05-31 04:53:29
|
It's a comment in the ppm
|
|
2021-05-31 04:54:02
|
Maybe some encoders keep the comment in metadata
|
|
|
Scope
|
2021-05-31 04:55:17
|
Something like this, but IM and ffmpeg do not have this comment:
```
P6
#vips2ppm - 2021-05-31T19:54:10.760181+03
717 1000
255
```
|
|
|
Pieter
|
2021-05-31 04:56:38
|
strange, decoders should just ignore the comment
|
|
|
_wb_
|
2021-05-31 04:57:51
|
There are quite a few lazy ppm readers that don't properly handle comments
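A comment-tolerant header parser is only a few lines. A minimal sketch (P6 only, maxval ≤ 255; real readers need maxval > 255 and more edge cases):

```python
# Parse a PPM (P6) header, skipping '#' comments anywhere between header
# tokens — exactly what the "lazy readers" above get wrong.
def read_ppm_header(data: bytes):
    pos = 0

    def token():
        nonlocal pos
        while True:
            # skip whitespace between tokens
            while pos < len(data) and data[pos:pos+1].isspace():
                pos += 1
            if data[pos:pos+1] == b'#':          # comment: skip to end of line
                while pos < len(data) and data[pos:pos+1] != b'\n':
                    pos += 1
            else:
                break
        start = pos
        while pos < len(data) and not data[pos:pos+1].isspace():
            pos += 1
        return data[start:pos]

    magic = token()
    width, height, maxval = int(token()), int(token()), int(token())
    pos += 1                   # single whitespace byte before pixel data
    return magic, width, height, maxval, pos

hdr = b"P6\n#vips2ppm - 2021-05-31T19:54:10.760181+03\n717 1000\n255\n"
print(read_ppm_header(hdr)[:4])  # (b'P6', 717, 1000, 255)
```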
|
|
|
Scope
|
2021-05-31 04:58:15
|
Yep, but especially old encoders, or something like EMMA, don't understand these files, or the resulting files will be larger.
So for more accurate results, I am forced not to use LibVips
|
|
|
_wb_
|
2021-05-31 05:00:01
|
Doesn't it have an option to not write that comment?
|
|
2021-05-31 05:00:16
|
<@310374889540550660> ?
|
|
2021-05-31 05:01:29
|
In general I don't really like encoders that leave traces of what encoder was used and when it was done - that can be useful, but it can also be a privacy issue
|
|
|
Kleis Auke
|
2021-05-31 05:01:48
|
Try: ```bash
$ vips copy x.jpg x.ppm[strip]```
|
|
|
Scope
|
2021-05-31 05:04:40
|
No, nothing has changed
|
|
|
Kleis Auke
|
2021-05-31 05:07:08
|
Ah, yes, that's a 8.11 feature (RSN). See <https://github.com/libvips/libvips/issues/2245> and <https://github.com/libvips/libvips/commit/970ba8cfcc84d83c65687acd64192b3f9121b7b2>.
|
|
|
zebefree
|
2021-06-02 10:34:15
|
Vimeo is now using AVIF https://medium.com/vimeo-engineering-blog/upgrading-images-on-vimeo-620f79da8605
|
|
|
Scope
|
2021-06-02 10:36:33
|
PSNR <:SadOrange:806131742636507177> (however, with other metrics the difference could be even bigger)
|
|
2021-06-02 11:14:02
|
Hm, WebP encoding faster than Jpeg?
So for Jpeg something like MozJpeg was used and WebP was not encoded at maximum settings 🤔
|
|