|
_wb_
|
2021-02-10 01:59:49
|
for non-photo, I think the best option is often to just do lossless
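(A minimal sketch of that, assuming the v0.3-era cjxl flags; drawing.png is a placeholder:)
```
# lossless (modular) encode of a non-photographic image; -q 100 and -d 0 both request lossless
cjxl drawing.png drawing.jxl -q 100
```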
|
|
|
lithium
|
2021-02-10 02:00:45
|
Not all non-photographic (synthetic) images have this issue; only images that feature high-contrast sharp edges next to smooth areas do.
|
|
|
_wb_
|
2021-02-10 02:13:43
|
yes, I mean more specifically non-photographic with text or drawing styles with strong edges
|
|
|
diskorduser
|
2021-02-10 02:13:52
|
Pff. I use -d 15 😄
|
|
|
|
Deleted User
|
2021-02-10 02:22:23
|
there’s really little reason to store most non-photographic content in lossy form..
Humorously, in one of my archives of non-photographic content, mid-quality JPEGs have roughly the same file size as PNGs of those compressed JPEGs (even when storing all the artifacts). If only I had the originals…
|
|
|
|
veluca
|
2021-02-10 03:24:32
|
if you can actually check them, I suspect running some restoration software on them and then compressing them to PNG might help 😄
|
|
|
lithium
|
2021-02-10 04:13:16
|
I don't want to praise avif, just an example:
in cavif at Q90 and Q95, images featuring high-contrast sharp edges and smooth areas compress very well, and the smooth areas can stay smooth,
but avif doesn't have a maximum-error metric (butteraugli), so there is probably some error somewhere.
I like jpeg xl because jpeg xl uses butteraugli to keep the compressed image at high fidelity and reduce error.
https://github.com/kornelski/cavif-rs
https://github.com/kornelski/cavif-rs
|
|
|
raysar
|
|
Jyrki Alakuijala
I want JPEG XL to be slightly less embarrassing in such comparisons, too
|
|
2021-02-10 04:29:32
|
For me, even if it is not the job of the image encoder, adding a denoising algorithm as an encoder option in the future would be super useful and efficient, both to please those who want to push into very high compression and to do well in tests against avif.
Maybe it could be better than reducing resolution; some noise-reduction algorithms are amazing 😄 (big artefacts vs. noise reduction: I prefer noise reduction :D)
For me you are creating the best ultimate file format, so you need to visually beat denoising formats like avif and webp2 at low bpp 😄
|
|
|
Crixis
|
|
raysar
For me, even if it is not the job of the image encoder, adding a denoising algorithm as an encoder option in the future would be super useful and efficient, both to please those who want to push into very high compression and to do well in tests against avif.
Maybe it could be better than reducing resolution; some noise-reduction algorithms are amazing 😄 (big artefacts vs. noise reduction: I prefer noise reduction :D)
For me you are creating the best ultimate file format, so you need to visually beat denoising formats like avif and webp2 at low bpp 😄
|
|
2021-02-10 05:02:03
|
this is not so good for generation loss, and IMHO it is the main factor in avif's poor adoption by big companies
|
|
2021-02-10 05:06:36
|
it subtly removes details
|
|
|
raysar
|
2021-02-10 05:11:44
|
Yes, but that's the magic of options: it's optional 😄
|
|
|
_wb_
|
2021-02-10 05:18:44
|
I think denoising is something to do interactively, in photoshop or gimp
|
|
2021-02-10 05:19:19
|
Or maybe with some good AI
|
|
|
raysar
|
2021-02-10 05:20:13
|
avif doesn't agree at all <:HaDog:805390049033191445>
|
|
|
_wb_
|
2021-02-10 05:23:03
|
I haven't found a way yet to algorithmically distinguish unwanted noise from wanted subtle texture.
|
|
2021-02-10 05:24:44
|
And I suspect there is no way. At the pixel level, there is no real difference. It is all semantics.
|
|
2021-02-10 05:32:22
|
What's your favorite cjxl speed? Answer with an image or GIF
|
|
2021-02-10 05:32:34
|
https://c.tenor.com/AIFF7A29t5UAAAAM/vombat-wombats.gif
|
|
|
raysar
|
2021-02-10 05:33:43
|
https://tenor.com/view/chipmunk-nuts-gif-11219351
|
|
|
BlueSwordM
|
2021-02-10 05:36:52
|
https://tenor.com/view/kittens-meowing-kitty-kittie-adorable-gif-3471852
|
|
|
Scope
|
2021-02-10 05:37:01
|
https://tenor.com/view/ninja-turtle-angry-turtle-angry-turtle-michelangelo-gif-15243800
|
|
|
Fox Wizard
|
2021-02-10 05:46:35
|
https://media.discordapp.net/attachments/412215006718066689/788436248506793994/50fps.gif
|
|
|
_wb_
|
2021-02-10 06:08:47
|
https://c.tenor.com/eXMRBgNIvQYAAAAM/falcon-flying.gif
|
|
2021-02-10 06:09:06
|
<:This:805404376658739230> no fans?
|
|
|
BlueSwordM
|
2021-02-10 06:10:44
|
I mean, peregrine falcons are quite fast.
|
|
|
Nova Aurora
|
2021-02-10 06:11:00
|
In dives
|
|
|
Scope
|
2021-02-10 06:13:04
|
So far, the fastest speeds don't offer much benefit over better-supported formats, and JXL hasn't yet been added to applications for saving screenshots and the like, where fast speeds are most necessary
|
|
|
Nova Aurora
|
2021-02-10 06:13:47
|
The raptors like gathering at the highways here and eating any rodents unlucky enough to get stuck/run over
|
|
|
_wb_
|
2021-02-10 06:16:11
|
https://c.tenor.com/9J-PcRNutaUAAAAM/rabbit-animal.gif
|
|
|
aivv73
|
2021-02-10 06:20:30
|
https://media.discordapp.net/attachments/233804746400595970/722898971298627694/urgant.gif
|
|
|
_wb_
|
2021-02-10 06:22:03
|
`--speed human` ?
|
|
|
aivv73
|
2021-02-10 06:22:51
|
he doesn't know
|
|
|
lonjil
|
2021-02-10 06:26:51
|
We need `--effort sisyphus`
|
|
|
Nova Aurora
|
2021-02-10 06:27:50
|
Constantly convert jpg to jxl and back again?
|
|
|
fab
|
2021-02-10 06:35:43
|
I used -s 3 -d 8.08
|
|
2021-02-10 06:35:49
|
maybe for some images with 0.3.1
|
|
2021-02-10 06:35:53
|
or -s 7 -q 15
|
|
2021-02-10 06:36:00
|
I don't remember the settings
|
|
|
lithium
|
2021-02-10 07:23:16
|
So weird: comparing cjxl -d 0.5 -s 7 --epf=3 and -d 0.5 -s 3 --epf=3,
-s 3 still has some error (noise) but is better than -s 7.
Maybe I'm crazy? ...this result is unexpected.
|
|
|
Jyrki Alakuijala
|
|
raysar
For me, even if it is not the job of the image encoder, adding a denoising algorithm as an encoder option in the future would be super useful and efficient, both to please those who want to push into very high compression and to do well in tests against avif.
Maybe it could be better than reducing resolution; some noise-reduction algorithms are amazing 😄 (big artefacts vs. noise reduction: I prefer noise reduction :D)
For me you are creating the best ultimate file format, so you need to visually beat denoising formats like avif and webp2 at low bpp 😄
|
|
2021-02-10 07:47:12
|
I try to make things better on a wide front without too many completely silly compromises. If silly compromises happen, I try to fix them soon.
|
|
2021-02-10 07:48:08
|
soon in jpeg xl (0.3.3 or so): d5 BPP * pnorm improved by 2.6 %, d23 BPP * pnorm improved by 9 %
|
|
|
fab
|
2021-02-10 07:58:15
|
so first 0.3.2
|
|
|
Dr. Taco
|
2021-02-10 08:47:13
|
|
|
2021-02-10 08:47:37
|
--speed 🥔
|
|
|
Jyrki Alakuijala
|
2021-02-11 12:02:17
|
looks much better, but default encoding got 20 % slower (from 10 to 8 MP/s) ... needs some more work...
|
|
2021-02-11 12:03:16
|
also some more ringing
|
|
2021-02-11 12:03:29
|
but for every instance of ringing, 7 other nasty problems were fixed
|
|
2021-02-11 12:03:51
|
of course ringing is just a no-no... difficult decisions 😛
|
|
|
_wb_
for non-photo, I think the best option is often to just do lossless
|
|
2021-02-11 12:34:21
|
I was hoping to change this with VarDCT being nearly OK at graphics. It would be so awesome if people didn't have to think much about what to do -- just compress.
|
|
2021-02-11 12:34:55
|
just always use d1.0, or d0.6 to d1.0, and no thinking
|
|
|
doajc_blogger
|
2021-02-11 12:35:12
|
Any advice for compiling JPEG XL on Windows? I've been trying for hours with VS 2017 and after installing LLVM to get the Clang compiler, I get this error.
|
|
2021-02-11 12:35:20
|
Cygwin doesn't seem to work, either.
|
|
|
Scope
|
2021-02-11 12:36:25
|
<https://github.com/m-ab-s/media-autobuild_suite> (with selected Clang compiler)
or <https://encode.su/threads/3564-JXL-version-0-3-released?p=68594&viewfull=1#post68594>
|
|
|
doajc_blogger
|
2021-02-11 12:38:11
|
Thanks, <@111445179587624960>
|
|
2021-02-11 12:43:57
|
The binary in the second link works perfectly. Thanks again.
|
|
2021-02-11 01:27:12
|
Does anyone else have success with very large images? I'm trying to compress a 20104x15527 PNG and it keeps crashing with the message, "..\lib/jxl/image.cc:102: JXL_CHECK: bytes_.get()".
|
|
|
Orum
|
2021-02-11 02:14:44
|
probably running out of memory? 🤷
|
|
|
Pieter
|
2021-02-11 02:22:59
|
it's only what... 900 MB in RGB data?
|
|
|
Scope
|
2021-02-11 02:25:45
|
https://gitlab.com/wg1/jpeg-xl/-/issues/99
|
|
|
BlueSwordM
|
2021-02-11 02:32:23
|
Also, I've just thought about something else.
Is it normal I'm getting:
```[bluezakm@bluezakm-pc ~]$ cjxl /home/bluezakm/Documents/220324.jpg /home/bluezakm/Documents/220324.jxl -s 7 -d 1.0 -j --num_threads=1
J P E G \/ |
/\ |_ e n c o d e r [v0.3.1 | SIMD supported: AVX2]```
While others are also getting AVX2/SSE4/Scalar for SIMD stuff?
|
|
2021-02-11 02:32:38
|
Is it a good thing or a bad thing?
|
|
2021-02-11 02:32:57
|
Nothing too important. I'm just curious.
|
|
|
Scope
|
2021-02-11 02:37:12
|
This happens when compiling with -march=native on an AVX2 CPU (or with -march=haswell and newer)
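(As a hypothetical illustration, using generic CMake/compiler conventions rather than the project's documented build steps:)
```
# -march=native on an AVX2 machine bakes AVX2-only dispatch into the binary
CFLAGS="-march=native" CXXFLAGS="-march=native" cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build
```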
|
|
|
BlueSwordM
|
2021-02-11 02:37:56
|
Ah ok then, that's why.
|
|
2021-02-11 02:37:59
|
It makes sense.
|
|
|
Scope
https://gitlab.com/wg1/jpeg-xl/-/issues/99
|
|
2021-02-11 02:42:43
|
That's nice. The image takes up 20GB of RAM at peak.
|
|
|
diskorduser
|
2021-02-11 07:29:17
|
https://tenor.com/view/kittens-cute-cat-pet-cheeks-gif-16382546
|
|
|
Fox Wizard
|
2021-02-11 07:41:58
|
<a:catdance:770738257846599710>
|
|
|
Jyrki Alakuijala
|
2021-02-11 07:51:49
|
https://twitter.com/jyzg/status/1359661198350893060
|
|
|
Nova Aurora
|
2021-02-11 07:52:59
|
The one I notice first is usually pixelization
|
|
|
_wb_
|
2021-02-11 07:59:27
|
I picked banding because it can remain an issue even when viewing at large distance or downscaled.
|
|
|
Jyrki Alakuijala
|
2021-02-11 08:30:52
|
in my last quality change I reduced banding and 1x8 pixelisation, and increased ringing and blurring (and 8x8 pixelisation)
|
|
2021-02-11 08:32:13
|
once the next round of butteraugli masking improvements is properly verified with tensorflow synthesis, I can improve the heuristics further and likely get another 5 % improvement in quality (removing the new ringing that the latest changes bring)
|
|
2021-02-11 08:33:45
|
after that, I suspect that further improvements in VarDCT quality will only be possible for very large images or for using the layered approach for very low bpp
|
|
2021-02-11 08:35:25
|
also the last change reduces weird visual activity that comes from the smallest transforms being quantized heavily -- like when you quantize a dct2x2 or 4x4 too much, some pixels start popping out and the area becomes visually more active than the original
|
|
2021-02-11 08:36:16
|
the air force chapel image is a prime example of that
|
|
2021-02-11 08:38:44
|
I'd like to learn what people think about delta palette
|
|
2021-02-11 08:39:32
|
I have many ideas around it (likely 20+ % improvement potential), but I'm keeping them on hold and working on VarDCT instead
|
|
|
|
Deleted User
|
|
_wb_
|
2021-02-11 08:47:23
|
I think delta palette is mostly useful to bridge the gap between highest-quality VarDCT and lossless
|
|
|
OkyDooky
|
2021-02-11 08:47:35
|
I think I know what "pixelization" is supposed to refer to but ... I don't recall ever seeing it as image compression artefacts. 🤔
|
|
|
_wb_
|
2021-02-11 08:49:48
|
I'd call it blockiness, no? though blockiness typically means bigger blocks (8x8 and larger), while pixelization is more like 2x2 or 4x4 or staircasing at diagonals
|
|
2021-02-11 08:50:56
|
I think we need a lexicon of compression artifact terminology, with examples
|
|
2021-02-11 08:54:52
|
especially with all the new ones that modern codecs bring, like smudging and smearing, plastification, chroma spread, over-eager copy, etc
|
|
|
|
Deleted User
|
|
Jyrki Alakuijala
the air force chapel image is a prime example of that
|
|
2021-02-11 08:55:23
|
I remember doing the codec comparison when it was posted on the encode.su forum. What I noticed for the air force chapel picture, comparing avif vs. jxl at lower bpp, was that avif retained more of the detail and lost more of the color, whereas jxl kept more of the color but the image was more pixelated
|
|
|
Jyrki Alakuijala
|
2021-02-11 08:57:51
|
I made an ontology of magnetic resonance imaging artefacts as part of my Ph.D. thesis 😄
|
|
|
I remember doing the codec comparison when it was posted on the encode.su forum. What I noticed for the air force chapel picture, comparing avif vs. jxl at lower bpp, was that avif retained more of the detail and lost more of the color, whereas jxl kept more of the color but the image was more pixelated
|
|
2021-02-11 08:59:29
|
I consider the airforce chapel image embarrassing for me at low bpp, and I'm improving on it furiously. 😄
|
|
|
_wb_
|
2021-02-11 08:59:47
|
the way avif encoders do chroma-from-luma takes chroma trickery to the next level compared to just subsampling it. It allows reducing chroma quality by a huge amount while avoiding the typical ugliness you would get if you just subsampled CbCr 8x, because the chroma signal still 'follows' the luma signal at full resolution; the actual information is gone, so it might be the wrong color, but it will look OK (when looking superficially)
|
|
|
Jyrki Alakuijala
|
2021-02-11 09:01:18
|
in video compression it may be even more important to create comfort in every frame
|
|
2021-02-11 09:01:54
|
the next seven frames will fix the problems in this frame -- if there is bandwidth for it and the scene hasn't completely changed already
|
|
2021-02-11 09:02:48
|
I have observed a video compression artefact where you first get the room and the objects, and the texture on the tapestry is painted only 200 ms later or so
|
|
2021-02-11 09:03:08
|
things like that can happen if prioritization is done only by psnr without consideration of masking
|
|
|
|
Deleted User
|
|
Jyrki Alakuijala
I'd like to learn what people think about delta palette
|
|
2021-02-11 09:04:06
|
what is delta palette?
|
|
|
Jyrki Alakuijala
|
2021-02-11 09:06:43
|
it is a fixed 5x5x5 RGB palette, interleaved with a smaller 4x4x4 palette, and a small number of 'deltas' that modulate a predicted color mixed "on the canvas"
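(So the implicit lattice alone provides 5³ + 4³ = 125 + 64 = 189 colors before any explicit entries.)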
|
|
2021-02-11 09:07:20
|
plus you can add your own colors and deltas in between the fixed palette entries and the fixed deltas
|
|
|
|
Deleted User
|
2021-02-11 09:07:54
|
it's a bit over the top for my understanding
|
|
2021-02-11 09:08:40
|
I'll let you know what my impression was when doing the comparison, as I was sidetracked with other things then and didn't write it up. Maybe it will help as an opinion
|
|
2021-02-11 09:09:57
|
I came to the conclusion that I would prefer the image to contain more fine detail rather than color accuracy. Like, I would rather know that the object in the distance is a car than have the better color of the car
|
|
2021-02-11 09:10:41
|
and in some pictures avif had more detail
|
|
|
Jyrki Alakuijala
|
2021-02-11 09:16:51
|
this balance can change with resolution, i.e., if you are zooming to 300 %, then details are ++++; if you look at it at 8k, then color accuracy can be more important?
|
|
|
|
Deleted User
|
2021-02-11 09:19:10
|
Yeah, you're right.
|
|
2021-02-11 09:20:05
|
for lossy you need to find a balance
|
|
|
OkyDooky
|
2021-02-11 10:27:33
|
OH!? A box-centered cubic lattice! :-) I tested those for turning true color into 256-color palette images :-)
(<@532010383041363969>)
|
|
2021-02-11 10:28:15
|
Oops. I meant to quote "it is a fixed 5x5x5 RGB palette, interleaved with a smaller 4x4x4 palette,"
|
|
2021-02-11 10:29:43
|
It might be worth going for *two* 4x4x4 palettes ... so, 5x5x5 + 4x4x4 + 4x4x4 (each shifted slightly)
|
|
2021-02-11 10:31:00
|
(for lower noise w.r.t. luminosity)
|
|
|
_wb_
|
2021-02-11 10:40:16
|
the fixed palettes are what you get when you use "out of bounds" palette indices (index values larger than the palette size)
|
|
2021-02-11 10:41:34
|
you can add whatever other colors to that, e.g. if some significant region of the image is in a single color, it's probably worthwhile to add that color as an explicit palette color
|
|
2021-02-11 10:42:32
|
the implicit palette could be nice for tiny images, to avoid the cost of encoding the palette itself
|
|
|
Jyrki Alakuijala
|
2021-02-11 11:06:12
|
yes, sometimes the palette is as much as 7 % of the size (from WebP lossless experiences)
|
|
2021-02-11 11:13:08
|
with delta palette you can have a million colors for gradients and it doesn't take any space for the palette
|
|
2021-02-11 11:13:33
|
but still get the benefits of palettization for 'pixel-perfect' no ringing/no blurring areas in the image
|
|
2021-02-11 11:14:15
|
the delta palettization runs at constant quality (of around 2.5 bpp for photographs)
|
|
2021-02-11 11:14:50
|
compared to normal palettization it is a big improvement -- a normal 256-color palette requires about 4 bpp for the same image and looks much worse
|
|
|
_wb_
|
2021-02-11 11:16:23
|
with different encoder choices, we could probably make it more or less lossy, maybe even adaptive
|
|
2021-02-11 11:17:02
|
and those results are just with the default empty palette (using only implicit palette colors and implicit deltas)
|
|
2021-02-11 11:17:31
|
could likely do better with some well-chosen image-specific palette colors and deltas
|
|
|
Jyrki Alakuijala
|
2021-02-11 11:17:34
|
there is a sweet spot
|
|
2021-02-11 11:17:49
|
after a certain amount of loss it is clear that everyone should use vardct
|
|
|
_wb_
|
2021-02-11 11:17:54
|
oh yes
|
|
|
Jyrki Alakuijala
|
2021-02-11 11:17:55
|
or modular
|
|
2021-02-11 11:18:02
|
I think we are close to it
|
|
2021-02-11 11:18:18
|
but the dithering it does is detrimental for large constant colors
|
|
2021-02-11 11:18:31
|
adding large constant-color areas into the palette would help a lot
|
|
2021-02-11 11:18:48
|
also we can do a better job in context modeling the pixels based on the surrounding palette indices
|
|
2021-02-11 11:19:26
|
also we can 'invent' new palette indices that map to a short sequence of pixels through context modeling; repetition is possible there too, not just through lz77
|
|
2021-02-11 11:19:35
|
for short sequences it can be more effective
|
|
2021-02-11 11:19:44
|
2-4 pixels
|
|
2021-02-11 11:19:57
|
because then the context modeling can relate to which sequence to ignite
|
|
2021-02-11 11:20:30
|
for lz77 it would be just the distance and that is just random numbers for context modeling
|
|
|
_wb_
|
2021-02-11 11:20:40
|
could probably also easily make a mode where the encoder guarantees a max per-pixel error (just adding more colors or deltas into the palette as needed to satisfy that constraint)
|
|
|
Jyrki Alakuijala
|
2021-02-11 11:21:09
|
yes, I started with an 8x8x8 palette augmented with a 7x7x7 palette in the gaps
|
|
2021-02-11 11:21:33
|
and went down to 5x5x5 augmented with 4x4x4, because that still worked well for both butteraugli and my own eyes
|
|
|
lithium
|
2021-02-11 11:35:16
|
About high-contrast sharp edges and smooth areas:
in cavif at Q90 and Q95, smooth areas can stay smooth,
but in jpeg xl vardct at -d 1.0 -s 7 and -d 0.5 -s 7, smooth areas have some error.
I researched some avif discussions; in some non-photographic encoding discussions,
disabling av1's CDEF gives some benefit for non-photographic content (anime).
So I tested vardct -d 0.5 -s 7 --epf=3 and vardct -d 0.5 -s 3 --epf=3,
and the result is unexpected: the -s 3 smooth areas still have some tiny error,
but the quality is better than -s 7; -s 7 gets more noise, yet the -s 7 file size is smaller than -s 3.
If I increase the --intensity_target value, the file size increases too much.
I think some encoding method (loop filter?) may have a bad effect on non-photographic images?
|
|
|
Jyrki Alakuijala
|
2021-02-11 11:43:17
|
consider posting examples of what doesn't work
|
|
2021-02-11 11:43:52
|
if you can create relatively small images, like 64x64 to 256x256 pixels, I could just include them in a 'difficult images' optimization set
|
|
2021-02-11 11:44:13
|
please post such crops into a bug on jpeg xl
|
|
2021-02-11 11:44:26
|
feel free to create as many bugs as you like, we can manage them easily
|
|
2021-02-11 11:45:09
|
currently I don't fully understand the problem -- other than superficially -- and I'm not tracing its performance over quality improvements
|
|
2021-02-11 11:45:51
|
to have a better dialog I should be able to understand the problem you have, then make detailed improvements and communicate them to you, and you then verify whether that actually agrees with your experience
|
|
2021-02-11 11:46:06
|
until now I was too busy with low bpp improvements
|
|
2021-02-11 11:46:37
|
I think I'm going to take a short break from that now and look at quality more holistically, and that could incorporate your findings
|
|
2021-02-11 11:47:38
|
currently I'm running a combo of about 50 cropped high quality images in my verification/optimization set
|
|
|
lithium
|
2021-02-11 11:48:01
|
OK, thank you 🙂. Should I upload samples to encode.su or some other site?
|
|
|
Jyrki Alakuijala
|
2021-02-11 11:48:03
|
some of them are collages of difficult images
|
|
2021-02-11 11:48:36
|
but best if each difficult image is separate and as small as possible (say down to 128x128)
|
|
2021-02-11 11:49:02
|
best if you upload them into a bug in jpeg-xl site
|
|
2021-02-11 11:49:46
|
https://gitlab.com/wg1/jpeg-xl/-/issues
|
|
|
lithium
|
2021-02-11 11:50:28
|
actually I'm still not sure whether this is an issue or an encoder limit...
|
|
|
Crixis
|
2021-02-11 12:48:46
|
I see ringing in all semi-constant colors
|
|
|
Jyrki Alakuijala
|
2021-02-11 12:59:58
|
Crixis -- file bugs on gitlab about it
|
|
2021-02-11 01:00:45
|
if we get a huge diversity of opinions on how to balance the different errors (like banding vs. ringing), we could add controls for where people want their errors
|
|
|
lithium
actually I'm still not sure whether this is an issue or an encoder limit...
|
|
2021-02-11 01:01:49
|
no one will know if it is an encoder limit until it is fixed 😄 .... I think we can improve the encoder a lot
|
|
2021-02-11 01:02:14
|
also if it is something that happens rarely, we can just up the adaptive quantization there locally without consuming a lot of bits
|
|
2021-02-11 01:03:33
|
even if we would otherwise fail at producing a decent encoder, we could just teach a neural net where the areas that people don't like are -- and give another go with a neural net trained to give 50 % more precision in those areas
|
|
2021-02-11 01:03:53
|
that approach only requires that the failures are rare (spatially somewhat isolated)
|
|
|
Crixis
|
2021-02-11 01:39:37
|
Is a future auto-selection of lossy modular mode, when it is better, possible?
|
|
|
|
Deleted User
|
|
Scope
<https://github.com/m-ab-s/media-autobuild_suite> (with selected Clang compiler)
or <https://encode.su/threads/3564-JXL-version-0-3-released?p=68594&viewfull=1#post68594>
|
|
2021-02-11 01:52:12
|
Seems like I've found a bug with one of my GIF files
|
|
2021-02-11 01:53:24
|
This is the schema of post-WWII tram network progression in Warsaw, where I live
|
|
2021-02-11 01:54:13
|
Discord fortunately didn't mess with the file, hashes match
|
|
2021-02-11 01:56:49
|
I get the error no matter which command I try. First this one:
```PS C:\Users\zieme\Downloads\jpeg-xl-mingw64-ef3f7a62> .\cjxl -m --palette=1024 -s 9 -g 3 -E 3 -I 1 ..\1435c.gif ..\1435c.jxl
J P E G \/ |
/\ |_ e n c o d e r [v0.3.1 | SIMD supported: SSE4,Scalar]
Read 2600x2711 image, 29.7 MP/s
Encoding [Modular, lossless, tortoise], 2 threads.
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc```
|
|
2021-02-11 01:57:21
|
Or this, the simplest possible one:
```PS C:\Users\zieme\Downloads\jpeg-xl-mingw64-ef3f7a62> .\cjxl ..\1435c.gif ..\1435c.jxl
J P E G \/ |
/\ |_ e n c o d e r [v0.3.1 | SIMD supported: SSE4,Scalar]
Read 2600x2711 image, 29.3 MP/s
Encoding [Modular, lossless, squirrel], 2 threads.
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc```
|
|
2021-02-11 01:57:55
|
My current RAM usage is 6.5/7.9 GB (82%)
|
|
2021-02-11 02:00:46
|
Surprisingly, though, the other GIF from the same site, with the pre-WWII tram network, encodes without any issues...
```PS C:\Users\zieme\Downloads\jpeg-xl-mingw64-ef3f7a62> .\cjxl ..\1525c.gif ..\1525c.jxl
J P E G \/ |
/\ |_ e n c o d e r [v0.3.1 | SIMD supported: SSE4,Scalar]
Read 2600x2684 image, 30.5 MP/s
Encoding [Modular, lossless, squirrel], 2 threads.
Compressed to 325551 bytes (0.373 bpp).
2600 x 2684, 1.12 MP/s [1.12, 1.12], 1 reps, 2 threads.```
|
|
2021-02-11 02:01:30
|
...with both commands.
```PS C:\Users\zieme\Downloads\jpeg-xl-mingw64-ef3f7a62> .\cjxl -m --palette=1024 -s 9 -g 3 -E 3 -I 1 ..\1525c.gif ..\1525c.jxl
J P E G \/ |
/\ |_ e n c o d e r [v0.3.1 | SIMD supported: SSE4,Scalar]
Read 2600x2684 image, 29.3 MP/s
Encoding [Modular, lossless, tortoise], 2 threads.
Compressed to 241801 bytes (0.277 bpp).
2600 x 2684, 0.40 MP/s [0.40, 0.40], 1 reps, 2 threads.```
|
|
2021-02-11 02:02:16
|
|
|
2021-02-11 02:03:03
|
Discord left this GIF intact, too
|
|
|
Crixis
|
2021-02-11 02:24:48
|
I can encode it
|
|
|
|
Deleted User
|
2021-02-11 02:25:14
|
The first one? `1435c.gif`?
|
|
|
Crixis
|
2021-02-11 02:27:21
|
one moment
|
|
2021-02-11 02:29:28
|
yes
|
|
2021-02-11 02:30:46
|
I'm on Linux Mint, JXL 0.3.1, Clang 11
|
|
|
|
Deleted User
|
2021-02-11 02:31:11
|
I'm using Scope's latest Windows build
|
|
2021-02-11 02:31:25
|
https://encode.su/threads/3564-JXL-version-0-3-released?p=68594&viewfull=1#post68594
|
|
|
Jyrki Alakuijala
|
2021-02-11 02:42:11
|
any ideas on what we should do next with libjxl?
|
|
|
Crixis
|
2021-02-11 02:42:30
|
put it in Firefox XD
|
|
|
Jyrki Alakuijala
|
2021-02-11 02:42:48
|
(not that we actually have any free bandwidth from security work, memory reductions, etc.) 😛
|
|
|
|
Deleted User
|
|
Jyrki Alakuijala
any ideas on what we should do next with libjxl?
|
|
2021-02-11 02:43:25
|
Better support for animations and layers ;)
|
|
|
Jyrki Alakuijala
|
2021-02-11 02:43:32
|
😄
|
|
2021-02-11 02:44:04
|
what were the most surprising good features of JPEG XL for you personally?
|
|
|
|
Deleted User
|
2021-02-11 02:44:30
|
noise=1
|
|
|
Jyrki Alakuijala
|
2021-02-11 02:45:28
|
Thank you! It may be difficult to understand how much effort went into such a 'simple' feature as noise=1.
|
|
2021-02-11 02:46:07
|
how to make it look natural while not spoiling image features
|
|
2021-02-11 02:46:34
|
I suspect that about one man-year was spent on that alone, which is quite ridiculous, since a simple noise model can be done in 7 minutes
|
|
|
Crixis
|
|
I'm using Scope's latest Windows build
|
|
2021-02-11 02:47:12
|
Funnily enough, it encodes fine in Wine
|
|
|
Jyrki Alakuijala
|
2021-02-11 02:47:18
|
I haven't looked at how other compression formats do it, but I'd be surprised if they have as much rigor in it as we do
|
|
2021-02-11 02:48:04
|
(rigor leading to image quality)
|
|
|
|
Deleted User
|
2021-02-11 02:48:16
|
<@!532010383041363969> yes, yes, I will include you in my prayers 😄
|
|
|
doajc_blogger
|
|
Jyrki Alakuijala
what were the most surprising good features of JPEG XL for you personally?
|
|
2021-02-11 02:48:33
|
Effective lossless JPEG compression
|
|
2021-02-11 02:48:58
|
Also usually being better than lossless WebP
|
|
|
|
Deleted User
|
|
Jyrki Alakuijala
what were the most surprising good features of JPEG XL for you personally?
|
|
2021-02-11 03:12:28
|
the progressive mode, from the flif era
|
|
|
Jyrki Alakuijala
any ideas on what we should do next with libjxl?
|
|
2021-02-11 03:13:20
|
release a 1.0 version.
|
|
|
Crixis
|
|
Jyrki Alakuijala
what were the most surprising good features of JPEG XL for you personally?
|
|
2021-02-11 03:36:33
|
Patches, splines, points, modular blocks
|
|
2021-02-11 03:36:59
|
a smart encoder will be very surprising
|
|
|
|
Deleted User
|
2021-02-11 03:39:07
|
Patches; they'll actually make lots of PDF-to-JXL converters easier – just pre-render a patch for every character from the font(s) used and then put the patches in the right places.
|
|
|
Crixis
|
2021-02-11 03:40:31
|
Idk, can I do a multi-page jxl?
|
|
|
_wb_
|
2021-02-11 03:40:52
|
can do multi-frame
|
|
|
Crixis
|
2021-02-11 03:41:13
|
so it must be at the application level
|
|
|
_wb_
|
2021-02-11 03:42:07
|
kind of the same as multi-page TIFF, or animation: applications that don't support it will show the first page / frame
|
|
2021-02-11 03:43:42
|
a fully featured viewer could show animations based on animation timing, but also allow manual seeking to the next/previous frame
|
|
2021-02-11 03:43:57
|
then you can do multi-page as just an animation where every frame takes a year
|
|
|
Crixis
|
2021-02-11 03:47:15
|
I think metadata could indicate whether it is an animation or a multi-page jxl, but I don't think this will become a thing unless it's proposed as a "standard tag" by the reference encoder
|
|
|
_wb_
|
2021-02-11 03:48:38
|
we could define something at the file format level for that
|
|
2021-02-11 03:50:53
|
or maybe not encode it as animation in a single codestream, but as a container with multiple codestreams – makes it easier to have e.g. pages with different canvas sizes
|
|
|
Crixis
|
2021-02-11 03:51:44
|
seems better to just include it in a pdf
|
|
|
_wb_
|
2021-02-11 03:54:12
|
yes, that also makes sense
|
|
|
spider-mario
|
|
lithium
About high-contrast sharp edges and smooth areas:
in cavif at Q90 and Q95, smooth areas can stay smooth,
but in jpeg xl vardct at -d 1.0 -s 7 and -d 0.5 -s 7, smooth areas have some error.
I researched some avif discussions; in some non-photographic encoding discussions,
disabling av1's CDEF gives some benefit for non-photographic content (anime).
So I tested vardct -d 0.5 -s 7 --epf=3 and vardct -d 0.5 -s 3 --epf=3,
and the result is unexpected: the -s 3 smooth areas still have some tiny error,
but the quality is better than -s 7; -s 7 gets more noise, yet the -s 7 file size is smaller than -s 3.
If I increase the --intensity_target value, the file size increases too much.
I think some encoding method (loop filter?) may have a bad effect on non-photographic images?
|
|
2021-02-11 04:10:08
|
in general, I would not recommend increasing the intensity target solely as a means of increasing quality; intensity target is meant to indicate the maximum luminance in the image
|
|
2021-02-11 04:10:31
|
so by using a very high intensity target for an SDR image, one would effectively be indicating that the image should be displayed super bright
|
|
2021-02-11 04:10:45
|
which image viewers may actually start to honor once HDR equipment and awareness is more widespread
|
|
|
_wb_
|
2021-02-11 04:21:29
|
Good point. Maybe we should allow lower distances then though
|
|
|
spider-mario
|
2021-02-11 04:35:21
|
in the form of a meme:
|
|
2021-02-11 04:35:36
|
|
|
2021-02-11 04:35:53
|
(original image from: https://mixinglight.com/color-grading-tutorials/10-things-about-hdr-grading/)
|
|
|
lonjil
|
2021-02-11 04:50:14
|
I would appreciate smaller distance targets.
|
|
2021-02-11 04:50:40
|
Is the 0.1 limit in cjxl or in libjxl?
|
|
|
_wb_
|
2021-02-11 04:52:44
|
cjxl
|
|
2021-02-11 04:55:54
|
you can use `benchmark_xl` to set arbitrary distance targets; cjxl clamps the --distance to a "sane" range, but I think it's not really up to us to define what "sane" means
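(For example, something like the following; the --input/--codec spellings are from memory, so check benchmark_xl --help:)
```
# sweep sub-0.1 distances; codec specs are comma-separated, as in the table below
benchmark_xl --input=image.png --codec=jxl:d0.05,jxl:d0.025,jxl:d0.01
```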
|
|
2021-02-11 04:56:57
|
for most use cases, d0.5 should be more than good enough, but if you're planning to zoom in 30x on the pixels, then you will need more
|
|
2021-02-11 04:58:55
|
```
./ClassA_8bit_WOMAN_2048x2560_8b_RGB.ppm.png
Compr Input Compr Compr Compr Decomp Butteraugli
Method Pixels Size BPP # MP/s MP/s Distance Error p norm BPP*pnorm Errors
---------------------------------------------------------------------------------------------------------------------------------
jxl:d0.1 5242880 4004785 6.11081695557 1 1.484 20.809 0.34396189 0.14815344419 0.9053385788035352 0
jxl:d0.075 5242880 4622245 7.05298614502 1 1.481 18.871 0.28397459 0.12216761202 0.8616464749793261 0
jxl:d0.05 5242880 5625598 8.58398132324 1 1.458 17.847 0.26443976 0.09208368485 0.7904446309536913 0
jxl:d0.025 5242880 7586001 11.57531890869 1 1.429 15.487 0.23897156 0.06607332590 0.7648198186797819 0
jxl:d0.01 5242880 10301559 15.71893157959 1 1.156 10.406 0.22908260 0.05861984391 0.9214413156050707 0
jxl:m 5242880 7038526 10.73993835449 1 0.227 2.827 0.00000000 0.00000000000 0.0000000000000000 0
jxl:m:q99 5242880 4526472 6.90684814453 1 0.466 2.606 0.38793665 0.16204786118 1.1192399693316055 0
jxl:m:q99.5 5242880 6260970 9.55348205566 1 0.366 2.473 0.25324365 0.10762652258 1.0282080521629313 0
jxl:m:q99.8 5242880 8479261 12.93832550049 1 0.338 2.218 0.12373304 0.04831840360 0.6251592334949853 0
Aggregate: 5242880 6211483 9.47797133873 --- 0.744 7.097 0.25374938 0.09267222019 0.8647272779811280 0
```
|
|
2021-02-11 04:59:44
|
on this image, -d 0.05 still makes sense
|
|
2021-02-11 05:00:11
|
at some point, lossless is smaller: -d 0.01 probably never really makes sense
|
|
2021-02-11 05:01:42
|
also very high quality lossy modular might make sense in the range between d0.05 and lossless
|
|
2021-02-11 05:02:45
|
(on some images -m -Q 99 is already larger than the lossless image though)
|
|
|
Scope
|
2021-02-11 05:02:57
|
Hmm, it would be handy to have more extra options and higher limits in benchmark_xl, like the ewp2 encoder in WebP2 compared to the regular cwp2
|
|
|
_wb_
|
2021-02-11 05:04:08
|
we need to clean up cjxl and make it use libjxl, benchmark_xl too, and avoid the code duplication for encode options
|
|
|
doajc_blogger
|
2021-02-11 05:10:01
|
I wrote a batch file to convert JPEG images to JPEG XL and with the 60 random files I chose, it was able to reduce the size by 25%.
|
|
2021-02-11 05:12:26
|
I didn't think a lossless JPEG compressor could make them that much smaller, especially since lossless WebP makes JPEGs bigger.
|
|
|
Jyrki Alakuijala
|
|
the progressive mode, from the flif era
|
|
2021-02-11 05:16:32
|
I'm not a great believer in FLIF progression; FUIF was a lot better at that.
|
|
|
lonjil
I would appreciate smaller distance targets.
|
|
2021-02-11 05:18:36
|
What are the small distance targets for? (just curious, not judgemental)
|
|
|
|
Deleted User
|
|
Jyrki Alakuijala
I'm not a great believer in FLIF progression; FUIF was a lot better at that.
|
|
2021-02-11 05:19:52
|
I liked the idea that you could dd a file and reduce its size, or that part of a file is a reduced version of the file
|
|
2021-02-11 05:22:39
|
I've read that in jxl the progression is now based on size steps (1:4, 1:8, 1:16, ...). So if I have somewhere between 1:8 and 1:16 of a file, will it still have additional detail, or will it be the lowest step, 1:8?
|
|
|
Jyrki Alakuijala
|
2021-02-11 05:23:58
|
the difference between FLIF and FUIF is nearest sampling vs. averages
|
|
2021-02-11 05:24:54
|
you cannot show an 8x8 nearest-subsampled image to a user in many cases -- for some photographs it may almost work, but it is pretty far from good quality
|
|
2021-02-11 05:25:08
|
FUIF upgraded it to averages
|
|
2021-02-11 05:25:25
|
initially pik (and brunsli) had 8x8 preview only
|
|
2021-02-11 05:26:27
|
in pik it was expanded to 16x16, 8x8, and then a "scan script" over the dct, i.e., you can get high-frequency information earlier, just with some more ringing (just like in normal progressive JPEGs)
|
|
2021-02-11 05:27:24
|
when lossless and jpeg recompression were added, some of the always-progressive guarantees disappeared
|
|
2021-02-11 05:29:46
|
If we have a normal mobile connection with 17 mbps speed, we can deliver a full 4k image (8 mpix) in 500 ms
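(Sanity check: 17 Mbit/s × 0.5 s ≈ 8.5 Mbit ≈ 1 MB, i.e. roughly 1 bpp for an 8 Mpix image.)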
|
|
2021-02-11 05:30:01
|
how many progression levels do we want in 500 ms
|
|
2021-02-11 05:30:44
|
I like to delay the first update to happen at 50 ms to have 8x8 subsampled image first (instead of showing some other lower quality versions first)
|
|
2021-02-11 05:31:29
|
perhaps show another rendering at 200-250 ms, and then the final at 500 ms
|
|
|
|
Deleted User
|
2021-02-11 05:33:27
|
so now if I have a file encoded with --progressive, and I have transferred, say, 27% of the file, will I see 27% of the quality of the full file, or the 25% (1:4) version?
|
|
|
Jyrki Alakuijala
|
2021-02-11 05:34:09
|
it is really complicated -- many options
|
|
2021-02-11 05:34:36
|
if passes are used, you'd have a pretty good image at 27 %
|
|
2021-02-11 05:35:24
|
(actually, I don't know the --progressive flag, someone who knows it should answer)
|
|
2021-02-11 05:36:04
|
likely you'd roughly have 1:2 resolution at that time (perhaps 70 % of the frame and 30 % of the frame in 1:4)
|
|
2021-02-11 05:36:36
|
in VarDCT you'd have the DC + ~12 % of the 256x256 groups
|
|
2021-02-11 05:37:12
|
in VarDCT with passes you might have a decent full resolution image, but with some blurring/ringing somewhere
|
|
|
|
Deleted User
|
2021-02-11 05:38:18
|
thanks for the explanation. I'll have to do some digging...or ask Jon then 🙂
|
|
|
lonjil
|
|
Jyrki Alakuijala
What are the small distance targets for? (just curious, not judgemental)
|
|
2021-02-11 05:40:36
|
Quality paranoia and zooming a lot. Though of course as noted at some point lossless just makes more sense, and indeed I will probably use lossless pretty often even for photographs.
|
|
|
|
Deleted User
|
2021-02-11 05:41:36
|
https://tenor.com/view/the-mandalorian-this-is-the-way-the-way-mandalorian-star-wars-gif-18999449
|
|
2021-02-11 05:41:48
|
#lossless
|
|
|
Jyrki Alakuijala
|
|
lonjil
Quality paranoia and zooming a lot. Though of course as noted at some point lossless just makes more sense, and indeed I will probably use lossless pretty often even for photographs.
|
|
2021-02-11 05:55:01
|
did you find a case where -d 0.1 was not enough for you? (or even -d 0.5)
|
|
|
_wb_
|
|
thanks for the explanation. I'll have to do some digging...or ask Jon then 🙂
|
|
2021-02-11 05:55:28
|
You can play with dd yourself, with the two djxl options to accept partial files
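(A sketch of that experiment; --allow_partial_files is the djxl flag name as I recall it, so verify against djxl --help; photo.jxl is a placeholder:)
```
# keep the first 128 KiB of a progressive .jxl, then decode whatever arrived
dd if=photo.jxl of=partial.jxl bs=1024 count=128
djxl partial.jxl preview.png --allow_partial_files
```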
|
|
|
Jyrki Alakuijala
|
|
https://tenor.com/view/the-mandalorian-this-is-the-way-the-way-mandalorian-star-wars-gif-18999449
|
|
2021-02-11 05:55:38
|
The image you promote lossless with has noticeable visual artefacts
|
|
|
lonjil
|
2021-02-11 05:58:04
|
I don't have enough data, but since changing it lower than 0.1 is apparently a simple change I don't see why it shouldn't be there.
And note that the word choice there is quite deliberate with *paranoia*. There is always the possibility that I missed some imperfection so I would like to be able to add a bit extra quality just to be sure.
|
|
|
Jyrki Alakuijala
|
2021-02-11 05:58:59
|
when doing butteraugli testing with 2500 images I never saw a difference at or below a score of 0.6
|
|
2021-02-11 06:00:03
|
0.1 is quite a lot of headroom already
|
|
2021-02-11 06:00:17
|
consider it quality 155 in libjpeg terms
|
|
2021-02-11 06:01:22
|
or maybe quality 120, not quite sure how the extrapolation would go there 😛
|
|
|
lonjil
|
2021-02-11 06:02:17
|
haha, ok
|
|
2021-02-11 06:02:29
|
most of my testing so far has been with lossless modular :)
|
|
2021-02-11 06:02:45
|
though for not seeing a difference, did you do a lot of zooming?
|
|
|
_wb_
|
2021-02-11 06:09:39
|
If you start zooming a lot and/or adjusting brightness a lot, then lossless is your best bet
|
|
2021-02-11 06:10:29
|
Lossy is meant for a reasonable range of viewing distances, which does not include microscopes
|
|
|
Scope
|
2021-02-11 06:11:17
|
I noticed some difference at -d 0.6 when zooming in a little or looking closely, but at -d 0.3 this did not happen.
With -d 0.6 it was easier to see the difference in some kinds of artificial images, but sometimes I could see it in a complex, almost photographic image, for example:
|
|
2021-02-11 06:11:29
|
Source
|
|
2021-02-11 06:11:43
|
-d 0.6
|
|
2021-02-11 06:13:10
|
(chainmail and some other areas, but only when zoomed in a bit and flipping back and forth)
|
|
|
doajc_blogger
|
2021-02-11 07:32:25
|
I was using JPEG XL to re-compress some images I saved from a Discord server a while ago and I noticed that sometimes lossless WebP, which I originally used, was smaller so I wrote a batch file that can be used after the conversion to delete whichever image is bigger.
|
|
|
|
veluca
|
2021-02-11 07:37:59
|
secret: there is a bit of loss caused by the gaborish option which does not depend on the distance at all 😄 (our encoder approximates an inverse convolution on the input, but it cannot be done exactly unless quite a lot of computation is done)
|
|
|
_wb_
|
2021-02-11 07:41:38
|
Don't we disable gaborish by default at very low distances?
|
|
2021-02-11 07:42:03
|
(or if we don't, why not? It shouldn't be useful anymore at d0.1, right?)
|
|
|
|
veluca
|
2021-02-11 07:42:35
|
we don't - and I think the main reason is that the quantization matrices are tuned for gaborished input
|
|
|
_wb_
|
2021-02-11 07:56:42
|
I see. Does gaborish have a big impact on how you want to shape your quant tables?
|
|
|
|
veluca
|
2021-02-11 09:07:57
|
I think so yes
|
|
|
Jyrki Alakuijala
|
2021-02-11 09:10:06
|
yes, something like 30 % more quantization on HF
|
|
|
I was using JPEG XL to re-compress some images I saved from a Discord server a while ago and I noticed that sometimes lossless WebP, which I originally used, was smaller so I wrote a batch file that can be used after the conversion to delete whichever image is bigger.
|
|
2021-02-11 09:10:52
|
I'm still happy if WebP lossless is useful :---------------)
|
|
|
doajc_blogger
|
2021-02-11 09:11:37
|
<@532010383041363969>: Do you want a copy of the conversion scripts I wrote?
|
|
|
Jyrki Alakuijala
|
|
Scope
I noticed some difference at -d 0.6 when zooming in a little or looking closely, but at -d 0.3 this did not happen.
With -d 0.6 it was easier to see the difference in some kinds of artificial images, but sometimes I could see it in a complex, almost photographic image, for example:
|
|
2021-02-11 09:12:30
|
-d 0.6 will currently not give (and likely never will) a butteraugli score of 0.6; it is somehow related, but not quite 🙂
|
|
2021-02-11 09:13:09
|
they used to be hard guarantees in early pik, but diverged about three years ago
|
|
|
<@532010383041363969>: Do you want a copy of the conversion scripts I wrote?
|
|
2021-02-11 09:16:15
|
No, I'm just the author of WebP lossless now working on JPEG XL. I'm always happy if two of my (co-)creations compete with each other.
|
|
|
doajc_blogger
|
2021-02-11 09:16:39
|
That's amazing. I didn't know that.
|
|
|
Jyrki Alakuijala
|
2021-02-11 09:17:08
|
I'm quite proud of WebP lossless -- I think it got much less attention than it deserves.
|
|
|
doajc_blogger
|
2021-02-11 09:17:39
|
I've been using it since (I think) 2016 or 2017 since I store so many PNG's.
|
|
|
Jyrki Alakuijala
|
2021-02-11 09:17:48
|
of course now after 10 years we know how to do things better already, but still it is performing pretty well
|
|
2021-02-11 09:19:02
|
I made its design from April 2011 to December 2011; Zoltan Szabadka helped me already at that time by keeping the decoder in sync (as a 20%er on my project).
|
|
|
Scope
|
2021-02-11 09:19:43
|
Yep, lossless WebP is very well tuned for different types of images and still beats JXL on pixelart, tiny images and some grainy photos, as I mentioned in <#803645746661425173>
|
|
|
_wb_
|
2021-02-11 09:20:24
|
It has `-g inf`, that helps
|
|
2021-02-11 09:21:12
|
And it treats pixels as RGB(A), not planar, that helps too
|
|
|
Nova Aurora
|
2021-02-11 09:21:54
|
Lossless webp is what I used before jxl's bitstream freeze
|
|
|
Jyrki Alakuijala
|
2021-02-11 09:22:10
|
In some earlier efforts of JPEG XL I tried to guide them towards photography so that it would be clearly a different strength from WebP lossless
|
|
2021-02-11 09:22:48
|
but we removed those lossless efforts altogether and built new ones based on the FUIF ideas improved further by Luca
|
|
2021-02-11 09:23:23
|
I wasn't excited by FUIF because the initial coding speed was not acceptable to me and I didn't see how it could be sped up
|
|
2021-02-11 09:23:52
|
Luca just dived into the code and figured out the ways
|
|
|
|
veluca
|
2021-02-11 09:24:58
|
"For the Dark Lord to do what we cannot imagine requires only that he has a better imagination."
(cit.)
|
|
|
Jyrki Alakuijala
|
2021-02-11 09:24:59
|
I think we have even more future opportunity there by constraining the context modeling in very special ways that will allow lookups
|
|
|
doajc_blogger
|
2021-02-11 09:25:15
|
WebP is old enough to be supported by Picasa but Google probably won't bring it back so it would be good if someone reverse-engineered it and added JXL support.
|
|
|
Jyrki Alakuijala
|
2021-02-11 09:25:33
|
❤️ Picasa ❤️
|
|
2021-02-11 09:26:34
|
I don't know why I don't like photography when the images are in the web, perhaps just not enough patience to wait for them
|
|
2021-02-11 09:27:29
|
perhaps one day I get used to it, or the internet gets fast enough (or compression improves sufficiently :-D)
|
|
|
Nova Aurora
|
2021-02-11 09:27:33
|
Have you tried digikam?
|
|
|
Jyrki Alakuijala
|
2021-02-11 09:27:50
|
no, I just archived my camera and no longer take photos 😄
|
|
|
Nova Aurora
|
2021-02-11 09:28:12
|
https://tenor.com/view/sad-crying-spiderman-cry-face-ugly-face-gif-5701170
|
|
|
Jyrki Alakuijala
|
2021-02-11 09:28:36
|
except some with the phone, but it is not real photography
|
|
|
Nova Aurora
|
|
Jyrki Alakuijala
except some with the phone, but it is not real photography
|
|
2021-02-11 09:29:27
|
Almost all of my photos are from my phone.
The best camera is the one you have on you.
|
|
|
Scope
|
2021-02-11 10:57:54
|
Also some discussion about Jpeg XL
https://github.com/OSGeo/gdal/pull/3449
|
|
2021-02-11 10:58:40
|
> **boxerab**
> Thanks, I'm aware of jpeg-xl, and the lossless compression performance sounds good. But, XL is designed more for lossy web content, performing well at low bit rates. It lacks many of the killer features of JPEG 2000, such as progression by resolution or quality, and fast random access into very large images via TLM and PLT markers. So, the new standard is complimentary to good old JPEG 2000.
> As far as I can tell, XL tiles are limited to 64x64. Also, tiling introduces artifacts at the tile boundaries - with PLT markers we get fast random access into large single-tiled images, without those artifacts.
|
|
|
|
Deleted User
|
|
Jyrki Alakuijala
I'm quite proud of WebP lossless -- I think it got much less attention than it deserves.
|
|
2021-02-11 11:33:38
|
Well, in my opinion it's the only useful part of WebP. JPEG is way superior to WebP lossy for medium- to high-fidelity content. But WebP lossless and also near-lossless are really nice.
|
|
|
_wb_
|
|
Scope
> **boxerab**
> Thanks, I'm aware of jpeg-xl, and the lossless compression performance sounds good. But, XL is designed more for lossy web content, performing well at low bit rates. It lacks many of the killer features of JPEG 2000, such as progression by resolution or quality, and fast random access into very large images via TLM and PLT markers. So, the new standard is complimentary to good old JPEG 2000.
> As far as I can tell, XL tiles are limited to 64x64. Also, tiling introduces artifacts at the tile boundaries - with PLT markers we get fast random access into large single-tiled images, without those artifacts.
|
|
2021-02-12 06:15:53
|
What nonsense is that? Ugh...
|
|
|
Jyrki Alakuijala
|
|
Well, in my opinion it's the only useful part of WebP. JPEG is way superior to WebP lossy for medium- to high-fidelity content. But WebP lossless and also near-lossless are really nice.
|
|
2021-02-12 07:20:44
|
I added the near-lossless already in my December 2011 version. It wasn't understood and embraced by the WebP eng team, so they just removed it during the productionizing work.
|
|
2021-02-12 07:21:32
|
I was able to convince them later about it and we added it back 2-3 years later
|
|
2021-02-12 07:22:03
|
the initial near-lossless didn't cope with predictions especially well
|
|
2021-02-12 07:22:43
|
Marcin Kowalczyk (riegeli author) made it work with spatial predictions, too
|
|
2021-02-12 07:23:10
|
that increased its application area by about 50 %
|
|
2021-02-12 07:24:06
|
I consider the near lossless the best option in webp, too 😄
|
|
|
_wb_
|
|
Jyrki Alakuijala
In some earlier efforts of JPEG XL I tried to guide them towards photography so that it would be clearly a different strength from WebP lossless
|
|
2021-02-12 10:38:12
|
we still have that btw, or at least the main parts of it. It's what `-q 100 -s 3` does at the moment. Great speed/density trade-off, just doesn't work well on some kinds of non-photographic images.
|
|
|
|
Deleted User
|
2021-02-12 10:50:24
|
re "JXL lossless tiles are 128x128, 256x256, 512x512 or 1024x1024. There is no way to do 64x64 tiles"... How is the actual tile size determined, and would there be a way/use to force a given tile size ?
|
|
|
_wb_
|
2021-02-12 11:46:04
|
`-g 0` sets it to 128, the default is 256 or `-g 1`, `-g 2` sets it to 512, `-g 3` sets it to 1024
|
|
2021-02-12 11:46:41
|
(actually the default can also be 512 now; if the image is e.g. 300x300 it will avoid the silly tiling)
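(So, as a sketch with the flags above and placeholder file names:)
```
cjxl map.png map.jxl -g 0   # 128x128 groups
cjxl map.png map.jxl -g 2   # 512x512 groups: fewer seams, coarser parallelism
```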
|
|
|
|
Deleted User
|
|
_wb_
(actually default can also be 512 now, if the image is e.g. 300x300 it will avoid the silly tiling)
|
|
2021-02-12 12:02:03
|
any way to do this through libjxl ?
|
|
|
_wb_
|
2021-02-12 12:02:32
|
probably not, most encode options still need to get exposed in the api
|
|
|
|
Deleted User
|
2021-02-12 12:03:42
|
in the gdal use case, given we'll always be encoding 512x512 images, would there be an advantage in setting the jxl tile size to 512?
|
|
|
_wb_
|
2021-02-12 12:04:08
|
depends
|
|
2021-02-12 12:04:56
|
for compression, 512 is better because there are fewer poorly predicted pixels (at the tile boundaries, predictors cannot 'look over the seam')
|
|
2021-02-12 12:05:48
|
for multicore speed, 256 is better because you can do some encode steps and most decode steps in 4 threads
|
|
2021-02-12 12:07:13
|
but if you already have parallelism externally, you might be better off with single-tile images
|
|
|
Jyrki Alakuijala
|
2021-02-12 01:02:06
|
if I was building a system on images now, I'd go with 256x256 tiles
|
|
2021-02-12 01:02:35
|
it allows 100+ way parallelism at 4k resolution and 25+ way in FullHD
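(Worked out: 3840×2160 needs 15×9 = 135 groups of 256×256; 1920×1080 needs 8×5 = 40.)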
|
|
2021-02-12 01:03:14
|
my thinking is that the core count will go up pretty soon -- transistors are getting cheaper and denser but not faster and not much more energy efficient
|
|
2021-02-12 01:03:37
|
so it can be tempting to reduce clock rates and pack more cores -- that leads to energy savings
|
|
2021-02-12 01:04:31
|
I'd say having that option for parallelism is likely worth a 3 % cost in density -- of course it depends on many things
|
|
|
Dr. Taco
|
2021-02-12 02:22:53
|
<:24Rip:567491904179142678> RIP Picasa. Anyone remember the instant messenger "Hello" that Google bought and killed? It was designed for photo sharing. You could dump a ton of photos in, and it would create tiny thumbnails and send them to the other person; when the other person clicked on one it would resize it to a small preview version, like 800px in size, and send that, and if they clicked "download" it would download the full-size original. Or you could download all and it would give you a zip of the originals as a direct transfer from PC to PC. It was a great idea at the time, very useful for photographers. With JXL, that could be added to any chat platform now for almost free because of the progressive nature.
|
|
2021-02-12 02:46:13
|
Wow, I can't even find any reference to that app ever existing anywhere online. It would have been some time between 2005 and 2012. Probably around 2007 or 2008ish
|
|
|
|
Deleted User
|
2021-02-12 06:34:43
|
...Welcome to the Twilight zone 👻
|
|
|
doajc_blogger
|
2021-02-12 06:48:37
|
It would be cool if they released a motion JPEG XL VFW/DirectShow codec that could losslessly compress MJPEG videos.
|
|
|
_wb_
|
2021-02-12 06:54:15
|
Shouldn't be too hard, if you only want pixel exact lossless, not bitstream exact
|
|
2021-02-12 06:55:24
|
(bitstream exact is also doable but requires a new bitstream reconstruction box in the jxl file format for mjpeg)
|
|
|
Scope
|
2021-02-12 07:35:47
|
I don't know Polish, but the content is understandable by using Google Translate (it's about Asymmetric numeral systems invented by Jaroslaw (Jarek) Duda and also used in Jpeg XL)
https://debica.naszemiasto.pl/pochodzacy-z-debicy-naukowiec-autorem-jezyka-swiatowej/ar/c1-8133741
|
|
2021-02-12 07:37:47
|
(I don't think we need another channel for news or discussions about JXL, so it will be posted here)
|
|
|
_wb_
|
2021-02-12 07:54:33
|
rANS itself is not that revolutionary, it is basically just the LIFO version of range encoding (which is basically the same thing as arithmetic coding), which makes decode faster at the expense of requiring the encoder to buffer.
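(As a worked sketch with fixed frequencies f_s and cumulative counts c_s summing to M, renormalization omitted: the encoder pushes symbols in reverse order with x' = ⌊x/f_s⌋·M + c_s + (x mod f_s); the decoder computes r = x mod M, finds the s with c_s ≤ r < c_s + f_s, and pops with x' = f_s·⌊x/M⌋ + (r − c_s). Push and pop are exact inverses, hence the LIFO behaviour.)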
|
|
2021-02-12 07:56:43
|
I think <@799692065771749416> at some point independently discovered rANS when we were playing with the predecessor of FLIF. I think back then we just thought 'meh, somewhat faster decode, much more complicated encode, not gonna bother with this'.
|
|
|
Pieter
|
2021-02-12 07:59:12
|
<@794205442175402004> https://github.com/sipa/FLIF/commits/stackcoder 🙂
|
|
|
_wb_
|
2021-02-12 07:59:18
|
It's in any case more the context modeling that matters than the actual entropy coding itself, imo.
|
|
|
Pieter
|
2021-02-12 07:59:51
|
I think I came up with that and experimented with it in 2004 or so, but at least what I tried back then was way too slow to be useful for anything.
|
|
|
_wb_
|
2021-02-12 08:00:05
|
stackcoder, hah I like that name
|
|
2021-02-12 08:03:58
|
Anyway, it's a bit weird to see that Duda guy give interviews where he makes it seem like he is the genius behind all the new compression codecs of the past decade.
|
|
|
Pieter
|
2021-02-12 08:12:29
|
<@794205442175402004> Does JPEG-XL use rANS or tANS?
|
|
|
_wb_
|
rANS
|
Pieter
|
2021-02-12 08:18:01
|
Are the probabilities dynamic? I remember hearing they're fixed per context.
|
|
|
_wb_
|
2021-02-12 08:18:13
|
Yes, fixed probabilities
|
|
2021-02-12 08:18:56
|
Whole tokens can be decoded at a time because of that, not bit by bit
|
|
2021-02-12 08:19:06
|
Up to 8-bit tokens
|
|
|
Pieter
|
2021-02-12 08:19:27
|
Ah and tANS is only bit per bit?
|
|
|
_wb_
|
2021-02-12 08:19:33
|
(there is also a prefix coding option, then the tokens can be up to 12-bit iirc)
|
|
|
Pieter
|
2021-02-12 08:19:56
|
How does one capitalize tANS at the start of a sentence?
|
|
|
_wb_
|
2021-02-12 08:20:36
|
I think you can make tANS also for larger alphabets than single bits, not sure
|
|
|
Pieter
|
2021-02-12 08:21:28
|
I would expect that you can. It could be a lot faster.
|
|
|
_wb_
|
2021-02-12 08:21:32
|
Tabled is just avoiding a multiply, I think
|
|
|
Pieter
|
2021-02-12 08:21:52
|
Isn't there a division too?
|
|
2021-02-12 08:21:57
|
That gets avoided?
|
|
|
_wb_
|
2021-02-12 08:22:31
|
Uh, yes, in the encoder I guess
|
|
2021-02-12 08:22:39
|
I meant in the decoder
|
|
|
Pieter
|
2021-02-12 08:23:29
|
Right. rANS is a division in the encoder, and multiplication in the decoder.
|
|
|
_wb_
|
2021-02-12 08:23:43
|
In software, rANS is fast enough. In hardware I guess tANS has advantages
|
|
|
Pieter
|
2021-02-12 08:24:09
|
At the very least, with fixed probabilities you can convert those divisions into multiply/add/shift with precomputed values.
|
|
2021-02-12 08:24:17
|
That can be an order of magnitude faster.
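(The standard trick: for a fixed divisor d, precompute m_d ≈ 2^k / d once; then ⌊a/d⌋ becomes (a·m_d) >> k, exact when k is chosen large enough for the input range. Whether these codecs use exactly this form is my assumption.)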
|
|
|
_wb_
|
2021-02-12 08:24:44
|
Yes, I think we don't do any actual divisions
|
|
|
Pieter
|
2021-02-12 08:25:18
|
Ah, nice.
|
|
|
_wb_
|
2021-02-12 08:25:19
|
Integer division is something you want to avoid, it's very slow
|
|
|
Pieter
|
2021-02-12 08:25:35
|
Yes.
|
|
2021-02-12 08:25:59
|
iirc up to 90 cycles on modern x86_64 CPUs.
|
|
|
_wb_
|
2021-02-12 08:27:43
|
In the weighted predictor we needed it, with a varying denominator, so it was tricky to avoid; we ensured it is specced in a way that is actually only an approximate division and can be implemented with a small lookup table and a multiply/shift
|
|
2021-02-12 08:28:48
|
The first thing I noticed when starting the collaboration with the google pik team was the very strong focus on decode speed
|
|
2021-02-12 08:32:40
|
We had focused on very different things; in many ways pik and fuif were very much complementary
|
|
|
Nova Aurora
|
2021-02-12 08:33:09
|
like CELT/SILK for opus?
|
|
|
_wb_
|
2021-02-12 08:34:07
|
Fuif focused on a modular approach to transforms which is all thanks to <@799692065771749416> for doing it like that from the start in flif
|
|
2021-02-12 08:34:24
|
Which leads to elegance and universality
|
|
|
Scope
|
2021-02-12 08:34:27
|
Also, have there been any preliminary tests of JXL's adaptive resolution as it may be in browsers and how much better it is than separately encoded images (better for caching, but worse in size, limited in resolutions, etc.)?
|
|
|
_wb_
|
2021-02-12 08:35:46
|
https://youtu.be/uzYHXqjGgL4
|
|
2021-02-12 08:38:38
|
Pik focused on decode speed and perceptual modeling, which I had mostly ignored in fuif
|
|
2021-02-12 08:42:34
|
Threading, perceptual color space and encoder tuning, color management and icc compression and all that were pik things
|
|
2021-02-12 08:43:11
|
Combining the two in a way that kept the best of both worlds was nontrivial
|
|
|
Scope
|
2021-02-12 08:50:19
|
I mean as applied to some set of static resolutions: is there a benefit to using it instead of a separately encoded image when you consider the large scale, such as three static resolutions of the same image viewed millions of times (and most likely not everyone will download the full size)? It will probably be more effective to encode a separate image (because the part of the progressive image at that resolution will be less effective in size)
|
|
|
_wb_
|
2021-02-12 08:55:23
|
Yes, I think probably something like having a 2000 pixel wide jxl and a 3000 pixel wide jxl will be best, which gives you truncated files that do 500, 1000 and 750, 1500 in a reasonably efficient way, together covering the actual needed widths well enough.
|
|
2021-02-12 08:57:53
|
But for that, we need browsers and/or CDNs to be smarter so they can fetch/deliver the right thing, and fetch more at viewport change events etc.
|
|
|
Scope
|
2021-02-12 08:58:19
|
Because this is also one of the very significant features that so far has not been used in other formats (perhaps JPEG 2000 could do it, but it is not supported in all browsers), but how well will it work in practice
|
|
|
_wb_
|
2021-02-12 08:59:35
|
Eric Portis did some tests in that direction with the old jpeg, but that is limited by the zigzag order in the kind of progressions it can do effectively.
|
|
|
Jyrki Alakuijala
|
|
_wb_
Combining the two in a way that kept the best of both worlds was nontrivial
|
|
2021-02-12 10:40:16
|
in the beginning I didn't believe that they could be combined in a useful way -- Luca proved me wrong
|
|
|
Scope
Also, have there been any preliminary tests of JXL's adaptive resolution as it may be in browsers and how much better it is than separately encoded images (better for caching, but worse in size, limited in resolutions, etc.)?
|
|
2021-02-12 10:41:14
|
At lowest bitrates the progressive DC is much smaller and better looking than a traditional DC model
|
|
|
Scope
Also, have there been any preliminary tests of JXL's adaptive resolution as it may be in browsers and how much better it is than separately encoded images (better for caching, but worse in size, limited in resolutions, etc.)?
|
|
2021-02-12 10:42:38
|
I don't think the progressive model we have in VarDCT is well suited for serving the same image for FullHD, 4k and 8k
|
|
2021-02-12 10:43:01
|
with modular we might get closer to that, but I have doubts that this will be a practical thing to do
|
|
2021-02-12 10:43:46
|
modular is perhaps a 60 % hit in density compared to VarDCT, and stopping the progression at a non-optimal 2^n resolution might be too limiting
|
|
2021-02-12 10:44:22
|
it can be awesome for thumbnailing and early previews, but I don't believe that it solves the resolution problem universally and efficiently
|
|
2021-02-12 10:44:32
|
(my personal viewpoint :-D)
|
|
|
_wb_
|
2021-02-12 10:45:04
|
The progression in VarDCT is probably effective for 1:2 and 1:1, which halves the number of variants needed for responsive images
|
|
2021-02-12 10:45:39
|
You could do 1:4 too but then it starts sacrificing density a bit much.
|
|
2021-02-12 10:47:08
|
The trade-offs are complicated with this, sending everyone a tailored 1:1 image is ideal for density, but a nightmare for CDN caches
|
|
|
Jyrki Alakuijala
|
2021-02-12 10:47:28
|
yes, I love the passes-thing in VarDCT even though I wasn't driving it myself -- I just failed to understand that we need it, and I was too focused on the 8x8 + final idea based on my own fixations and beliefs about how the best performance and user experience are formed
|
|
2021-02-12 10:48:25
|
my back of the envelope calculation was that perhaps it takes one second to load a desktop full of images, and the 8x8 preview can be shown at 100-150 ms when done well
|
|
2021-02-12 10:48:40
|
I didn't think we'd need intermediate updates at 500 ms or so
|
|
2021-02-12 10:49:27
|
but now with the LCP discussions where the LCP folks are not willing to consider 8x8 as good enough to trigger LCP, we need the passes approach to propose for an earlier LCP trigger
|
|
|
_wb_
|
2021-02-12 10:53:56
|
For non-photo, squeeze tends to be bad for density compared to non-progressive lossless. But that's with full squeeze. I wonder what happens if we squeeze only to 1:2. It might be a lot better, and could be a good way to do two resolutions with the same file, or get faster LCP for large non-photo images too
|
|
|
BlueSwordM
|
2021-02-13 04:05:20
|
So, with images, would it be possible to do heuristics to detect how many colors there are in an image, and if the image is monochrome, apply --palette=0 and --lossy-palette (if lossless, for B&W images)? I'd assume it'd be expensive to do an exhaustive search looking at each tile's color info pixel by pixel, unless I am missing something.
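A cheap pre-pass could look like this (a hypothetical sketch; the function name and the max_colors threshold are invented, and this is not libjxl's actual heuristic). The early exit means a typical photographic image stops long before every pixel is scanned:
```
#include <cstdint>
#include <unordered_set>
#include <vector>

struct ImageStats {
  bool is_grayscale;   // every pixel has r == g == b
  bool small_palette;  // at most max_colors distinct colors
};

// One pass over packed 8-bit RGBA pixels; bails out once the answer is known.
ImageStats AnalyzePixels(const std::vector<uint32_t>& rgba, size_t max_colors) {
  std::unordered_set<uint32_t> colors;
  bool gray = true;
  for (uint32_t px : rgba) {
    uint32_t r = px & 0xFF, g = (px >> 8) & 0xFF, b = (px >> 16) & 0xFF;
    if (r != g || g != b) gray = false;
    if (colors.size() <= max_colors) colors.insert(px);
    if (!gray && colors.size() > max_colors) break;  // nothing left to learn
  }
  return {gray, colors.size() <= max_colors};
}
```
A wrapper could then pick flags like --palette=0 based on the result.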
|
|
|
chuni
|
2021-02-13 04:25:23
|
There are cases where palette=0 generates larger images for grayscale than the default. But I would also be in support of a better heuristic resulting in more optimal settings for monochrome images 👍
|
|
|
lithium
|
2021-02-13 08:28:26
|
will jpeg xl 0.3.2 merge some encoder improvements, or only bug fixes?
|
|
|
_wb_
|
2021-02-13 08:32:57
|
Just a quick bug fix I think
|
|
|
lithium
|
2021-02-13 08:37:15
|
I understand, Thank you 🙂
|
|
|
Orum
|
2021-02-13 08:42:49
|
bug fixes are encoder improvements... though I think 0.3.2 is just a decode fix?
|
|
|
_wb_
|
2021-02-13 08:59:54
|
There was a typo in icc profile storage that caused both encoder and decoder to do the wrong thing
|
|
2021-02-13 09:00:13
|
Which caused the automatic testing to miss it
|
|
2021-02-13 09:02:07
|
So the 0.3.1 encoder was in some cases producing invalid output that only the 0.3.1 decoder would decode, and the 0.3.1 decoder would not decode some valid inputs
|
|
2021-02-13 09:02:19
|
That's why a quick fix had to be made
|
|
2021-02-13 09:04:41
|
I think it would only happen with non-standard icc profiles that actually require storing a compressed icc profile. In most cases, that is not needed - we have an enum that can represent the most common cases like sRGB, Display P3, Adobe RGB 1998, etc in just a few bits instead of needing to store the exact icc profile.
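The gist of that enum approach, as a loose sketch (not the spec's actual field names or bit layout):
```
#include <cstdint>
#include <vector>

// Common color spaces collapse into a few enum bits; the compressed ICC
// blob is stored only when no enum value matches the profile exactly.
enum class ColorSpaceTag : uint8_t { kSRGB, kDisplayP3, kAdobeRGB1998, kCustomICC };

struct ColorEncoding {
  ColorSpaceTag tag = ColorSpaceTag::kSRGB;
  std::vector<uint8_t> compressed_icc;  // used only when tag == kCustomICC
};
```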
|
|
2021-02-13 09:07:02
|
At least when doing lossy encoding. I think in lossless encoding it happens more often that we store the exact icc profile, because there are several variants and they are not exactly identical, which does not matter much for lossy, but lossless is lossless.
|
|
|
andentri
|
2021-02-13 12:47:01
|
Hi All.
Is there a way with CJXL 0.3.1 to do YUV420 subsampling like AVIF does ?
|
|
|
lonjil
|
2021-02-13 12:49:27
|
I don't think so. What do you need it for?
|
|
|
andentri
|
2021-02-13 12:51:05
|
I tried to test JXL for tiny thumbnail preview encoding (40x40 under 1kB) and AVIF give better color result with YUV420. I wondered how to do the same with CJXL
|
|
|
Orum
|
2021-02-13 12:52:53
|
you should probably update to 0.3.2
|
|
2021-02-13 12:53:22
|
but if you really need subsampling you can probably just use mozjpeg and then losslessly recompress with cjxl, though it's not the best idea...
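E.g. something like this (assuming mozjpeg's cjpeg is on the path; -sample 2x2 selects 4:2:0 chroma subsampling):
```
cjpeg -quality 75 -sample 2x2 thumb.ppm > thumb.jpg
cjxl thumb.jpg thumb.jxl   # recompress the JPEG losslessly into a jxl
```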
|
|
2021-02-13 12:54:14
|
if your source material is RGB, YCC 4:2:0 should *not* look better color-wise
|
|
|
Crixis
|
|
andentri
I tried to test JXL for tiny thumbnail preview encoding (40x40 under 1kB) and AVIF give better color result with YUV420. I wondered how to do the same with CJXL
|
|
2021-02-13 01:32:47
|
This is unexpected
|
|
|
|
veluca
|
2021-02-13 01:32:54
|
a typical 40x40 file should fit in 1kB very very easily even at super high quality
|
|
2021-02-13 01:34:56
|
```
veluca@veluca ~/jpeg-xlm/build master $ ./tools/cjxl /tmp/x.png
J P E G \/ |
/\ |_ e n c o d e r [v0.3.0 | SIMD supported: AVX2,SSE4,Scalar]
Read 40x40 image, 4.2 MP/s
Encoding [VarDCT, d1.000, squirrel], 4 threads.
Compressed to 835 bytes (4.175 bpp).
40 x 40, 0.32 MP/s [0.32, 0.32], 1 reps, 4 threads.
```
|
|
|
Crixis
|
2021-02-13 01:35:25
|
4 bpp lol
|
|
|
|
veluca
|
2021-02-13 01:36:16
|
smaller stuff is harder to compress 🙂
|
|
|
|
Deleted User
|
2021-02-13 01:36:41
|
Less entropy to exploit, I guess
|
|
|
Crixis
|
2021-02-13 01:38:02
|
Yep, but 4 bpp is under 1 KB, and avif has an overhead of 200 B; I expect jxl to be very good
|
|
|
|
veluca
|
2021-02-13 01:38:27
|
that, and constant overhead that doesn't matter for 20 MP images matters for 1KP images 😛
|
|
|
Crixis
|
2021-02-13 01:39:17
|
In tiny images how good is modular?
|
|
|
|
veluca
|
2021-02-13 01:41:28
|
for that image, lossless modular is ~4k
|
|
2021-02-13 01:42:38
|
-Q 90 is 993 bytes, and it should be good quality - but likely not as good as the default option
|
|
|
Crixis
|
2021-02-13 01:42:47
|
Vardct much better
|
|
2021-02-13 01:44:18
|
In my test Q90 is similar to d1 but 15% bigger
|
|
|
Jyrki Alakuijala
|
|
andentri
I tried to test JXL for tiny thumbnail preview encoding (40x40 under 1kB) and AVIF give better color result with YUV420. I wondered how to do the same with CJXL
|
|
2021-02-13 01:46:40
|
40x40 ... likely better to go with 32x32 or 64x64 (or even 48x48)
|
|
2021-02-13 01:47:29
|
40x40 forces the system to use some 8x8 transforms
|
|
2021-02-13 01:48:37
|
if you can afford 1 kB, go with 64*64
|
|
2021-02-13 01:49:36
|
VarDCT should be rather efficient without YUV-trickery, especially so when the input is not YUV420 spoiled.
|
|
2021-02-13 01:50:16
|
if someone else already YUV420 spoiled the input, then you take a ~10 % hit in quality by not repeating the YUV420 mistake again 😛
|
|
2021-02-13 01:50:48
|
the quantization matrices are another more efficient way of doing the same thing
|
|
2021-02-13 01:51:16
|
they just allow more elegant control of the compromises, whereas YUV420 is a big hammer that reduces data without looking at it
|
|
|
_wb_
Anyway, it's a bit weird to see that Duda guy give interviews where he makes it seem like he is the genius behind all the new compression codecs of the past decade.
|
|
2021-02-13 01:54:46
|
Jarek Duda advanced our field and formulated those advances with clarity. Let's be thankful for his contributions.
|
|
|
_wb_
Integer division is something you want to avoid, it's very slow
|
|
2021-02-13 02:02:15
|
for me it was (back in the day) interesting to find that on most cpus the floating point division and sqrt run practically the same algorithm and tend to be equally fast
|
|
2021-02-13 02:02:41
|
I had expected sqrt to be a bit slower
|
|
|
Orum
|
2021-02-13 02:12:25
|
integer division is slow? compared to what, floating point? <:WTF:805391680538148936>
|
|
|
_wb_
|
2021-02-13 02:13:08
|
Compared to any other instructions, like multiply
|
|
2021-02-13 02:15:25
|
Division by a small integer constant usually gets compiled away to a multiply/shift, but division by a variable int is something you want to avoid in an inner loop, e.g. for every pixel.
|
|
|
fab
|
2021-02-13 02:53:06
|
is jpeg xl good for web images?
|
|
2021-02-13 02:53:16
|
like to me it seems only that the colours change
|
|
2021-02-13 02:53:38
|
I don't see an evident change in color space, noise, colour, or image quality
|
|
2021-02-13 02:53:55
|
like it looks like jpeg
|
|
2021-02-13 02:54:14
|
webp was a revolution
|
|
2021-02-13 02:54:30
|
this seems like a work in progress
|
|
2021-02-13 02:55:08
|
like for compressing screenshots, even at q15 it sometimes looks the same as the original but the values will change
|
|
2021-02-13 02:55:27
|
but does it really beat webp2?
|
|
2021-02-13 02:56:50
|
like for mp3 they don't use jpeg xl, nor pdf, nor the web, nor android, nor software icons
|
|
2021-02-13 02:58:30
|
like I see the colors changing in the contours but nothing impressive to me
|
|
|
Crixis
|
|
fab
is jpeg xl good for web images?
|
|
2021-02-13 02:59:55
|
over 1 bpp
|
|
|
fab
like it looks like jpeg
|
|
2021-02-13 03:01:29
|
if you encode a jpg it transcodes losslessly without change
|
|
|
fab
|
2021-02-13 03:02:26
|
also we don't have a chance that the devs will make a new version and contest MPEG patents, not because they work on img compression but because they are part of mpeg
|
|
2021-02-13 03:02:43
|
so they wouldn't contest themselves and risk losing their jobs
|
|
2021-02-13 03:03:02
|
at least they don't hide about open source
|
|
2021-02-13 03:03:17
|
hope there will be new versions of jpeg XL
|
|
|
Crixis
|
|
fab
like for compress screenshot even at q15 sometimes it looks same as original but value will change
|
|
2021-02-13 03:03:34
|
webp and avif are good at low fidelity, but over 1 bpp they can't maintain texture and details
|
|
|
fab
but does really beat webp2?
|
|
2021-02-13 03:04:25
|
on lossless jxl wins over webp and webp2 (avif is bad at lossless)
|
|
|
fab
|
2021-02-13 03:04:49
|
is there a chance webp2 will keep invisible pixels?
|
|
2021-02-13 03:04:54
|
will it work this way?
|
|
2021-02-13 03:05:04
|
or does even webp discard invisible pixels?
|
|
2021-02-13 03:05:10
|
and is it meant for doing that? does the webp algorithm work like that? sorry for the off-topic
|
|
|
Crixis
|
|
fab
also we don't have a change that dev will make new version and will contest MPEG patents not because they work at img compression but because they are part of mpeg
|
|
2021-02-13 03:05:58
|
jxl is open source
|
|
|
fab
or even webp discards invisible pixels?
|
|
2021-02-13 03:07:20
|
webp is only 4:2:0, it discards a lot
|
|
|
fab
|
2021-02-13 03:07:58
|
do you have image compression experience?
|
|
|
Crixis
|
2021-02-13 03:08:25
|
amateur
|
|
2021-02-13 03:10:44
|
you can compare on medium and high
|
|
2021-02-13 03:10:46
|
https://eclipseo.github.io/image-comparison-web/#abandonned-factory&AOM=s&AOM=s&subset1
|
|
2021-02-13 03:11:03
|
on texture details and dark zones
|
|
2021-02-13 03:11:19
|
and sky texture
|
|
2021-02-13 03:12:13
|
on some images the terrain textures are better on jxl low than on avif or webp2 high
|
|
|
fab
|
2021-02-13 03:16:38
|
is it possible with 0.3.0 or 0.3.1 to use jxl as input
|
|
2021-02-13 03:16:44
|
and jxl as output
|
|
|
Crixis
|
2021-02-13 03:17:23
|
webp, webp2 and avif are very good at making the img look "good", but with low fidelity to the original
|
|
2021-02-13 03:17:37
|
webp2 is better than avif
|
|
|
fab
|
2021-02-13 03:18:09
|
yes 416b489 is great
|
|
2021-02-13 03:18:17
|
but low fidelity obviously
|
|
|
Crixis
|
|
fab
is it possible with 0.3.0 or 0.3.1 to use jxl as input
|
|
2021-02-13 03:18:27
|
no, you must use djxl to remake the transcoded jpg or a pixel png
|
|
|
fab
|
2021-02-13 03:19:11
|
for all images, modular, vardct, jpg to jxl (-j), jxl lossless recompressor (jpg)
|
|
|
Crixis
|
2021-02-13 03:19:25
|
<@!532010383041363969> is working to make it better at very low bpp
|
|
|
fab
|
2021-02-13 03:20:35
|
how many colours does -s 4 -q 99.2 vardct discard on 0.3.0?
|
|
2021-02-13 03:20:42
|
how to know?
|
|
|
Crixis
|
2021-02-13 03:21:22
|
it's not fixed, I take the distance from the butteraugli metric
|
|
2021-02-13 03:21:40
|
it's better to use the d flag than the q flag
|
|
2021-02-13 03:21:57
|
d is the butteraugli distance, q is remapped onto d
|
|
2021-02-13 03:22:14
|
d 1 should be similar to q 95
|
|
2021-02-13 03:22:43
|
d 0.6 is similar to lossless at default zoom
|
|
|
fab
|
2021-02-13 03:22:50
|
with 0.3.1 is it worth -d 1 -s 7, -s 8 -q 68.8, or -s 8 -q 96?
|
|
2021-02-13 03:22:54
|
d 1 is actually q 90
|
|
|
Crixis
|
2021-02-13 03:23:01
|
d 6 is similar to q 40-50
|
|
|
fab
d 1 is actually q 90
|
|
2021-02-13 03:23:24
|
it changes for some imgs
|
|
|
fab
|
|
Crixis
|
2021-02-13 03:24:35
|
i use -s 8 or -s 8 -d 0.6
|
|