|
raysar
|
|
veluca
ugh, I can still see the banding on it...
|
|
2021-07-07 09:30:10
|
on discord preview? it's only a basic 8bits, smooth with my 6bit+frc ips
|
|
|
|
veluca
|
2021-07-07 09:30:36
|
no no, on the original
|
|
2021-07-07 09:30:46
|
I spent too much time doing image compression
|
|
2021-07-07 09:30:55
|
now I see artefacts everywhere
|
|
|
raysar
|
2021-07-07 09:34:08
|
Ok, it's not perfect, I see banding on my 8-bit IPS smartphone. ^^ Photoshop is a bad gradient maker
|
|
2021-07-07 09:44:20
|
It's good here.
|
|
|
|
veluca
|
2021-07-07 10:09:30
|
nah, still has banding
|
|
|
spider-mario
|
2021-07-07 10:11:22
|
I see aliasing more than anything
|
|
2021-07-07 10:18:52
|
|
|
|
raysar
|
|
spider-mario
|
|
2021-07-07 11:01:39
|
It's a good intensity according to the luminance.
|
|
|
spider-mario
|
2021-07-07 11:02:10
|
great to hear, thanks
|
|
|
raysar
|
2021-07-07 11:03:43
|
Is the size of the noise fixed? If I have a high-resolution image, does the noise become "smaller" than in the same picture after resizing?
|
|
|
spider-mario
|
2021-07-07 11:04:41
|
yes, it's finer, however the calculation I am using for the intensity vs. brightness relationship should scale the intensity accordingly
|
|
2021-07-07 11:05:51
|
so if displayed at the same physical size, the resampled image should look like it has the same amount of noise if everything goes correctly
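A minimal sketch of that scaling argument, assuming pure photon (Poisson) shot noise on a fixed-size sensor; the exposure constant is made up and this is not the actual libjxl calculation:

```python
import math

def relative_photon_noise(num_pixels, sensor_area_mm2=36.0 * 24.0, photons_per_mm2=1e6):
    # photons_per_mm2 is a hypothetical stand-in for the exposure level;
    # shot noise is Poisson, so sigma/mean = 1/sqrt(photons per pixel).
    photons_per_pixel = photons_per_mm2 * sensor_area_mm2 / num_pixels
    return 1.0 / math.sqrt(photons_per_pixel)

# 24 MP capture vs. the same scene downsampled to 6 MP: per-pixel noise halves,
# but each "grain" covers 4x the area, so at equal display size it looks similar.
print(relative_photon_noise(24e6) / relative_photon_noise(6e6))  # -> 2.0
```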
|
|
|
raysar
|
|
spider-mario
so if displayed at the same physical size, the resampled image should look like it has the same amount of noise if everything goes correctly
|
|
2021-07-07 11:06:56
|
Same amount but "bigger" grain size?
|
|
|
spider-mario
|
2021-07-07 11:07:17
|
yep, for the image with fewer pixels
|
|
|
raysar
|
2021-07-07 11:09:53
|
Ok, and you have several noise levels, or only one?
|
|
|
spider-mario
|
2021-07-07 11:11:38
|
the interface I'm thinking of is something like `--photon_noise=ISO51200`
|
|
2021-07-07 11:11:48
|
assuming a 36mm×24mm sensor
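A toy model of why an ISO value plus the 36mm×24mm assumption is enough to pick a noise strength (the photon count at ISO 100 is a made-up constant, not the formula libjxl uses):

```python
import math

PHOTONS_AT_ISO100 = 10_000  # hypothetical mid-gray photon count for a 24 MP full-frame pixel

def midgray_snr(iso, megapixels=24.0):
    # Photons collected scale inversely with ISO (and with pixel count for a fixed sensor);
    # shot noise is Poisson, so SNR = mean / sqrt(mean) = sqrt(mean).
    photons = PHOTONS_AT_ISO100 * (100.0 / iso) * (24.0 / megapixels)
    return math.sqrt(photons)

for iso in (100, 1600, 51200):
    print(iso, round(midgray_snr(iso), 1))
# Higher ISO -> fewer photons per pixel -> lower SNR -> stronger synthesized grain.
```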
|
|
|
raysar
|
2021-07-07 11:13:54
|
Ok cool, so we can choose it with an encoder option?
|
|
|
spider-mario
|
2021-07-07 11:14:46
|
I think that would be the plan
|
|
|
raysar
|
2021-07-07 11:19:58
|
And try to detect the picture's amount of noise?
picture_noise = 1600
if d <= 1
--photon_noise = 0
else if d <= 3
--photon_noise = picture_noise / 2
else
--photon_noise = picture_noise
An algorithm like that, to balance against the denoising level of the encoder?
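A runnable version of that heuristic (the thresholds are raysar's sketch, nothing implemented in cjxl; estimating `picture_noise_iso` automatically is the hard part):

```python
def pick_photon_noise_iso(picture_noise_iso, distance):
    # picture_noise_iso: estimated noise level of the source, as an equivalent ISO.
    if distance <= 1:
        return 0                        # near-lossless: the original noise survives
    elif distance <= 3:
        return picture_noise_iso / 2    # moderate denoising: put back half of it
    else:
        return picture_noise_iso        # heavy denoising: resynthesize all of it

print(pick_photon_noise_iso(1600, 2.5))  # -> 800.0
```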
|
|
|
spider-mario
|
2021-07-07 11:30:22
|
that sounds sensible
|
|
2021-07-07 11:30:42
|
detecting the amount in the image may be tricky but we'll see
|
|
2021-07-07 11:30:58
|
let's maybe start with manual settings and go from there
|
|
|
raysar
|
|
spider-mario
detecting the amount in the image may be tricky but we'll see
|
|
2021-07-07 11:33:06
|
You could measure how much of the picture's area has very high-frequency content; more of that area seems to mean more noise. An idea like that? (I don't know if it's easy.)
|
|
2021-07-07 11:39:21
|
Or search the AV1 codec to find out how they do it.
|
|
2021-07-07 11:56:25
|
`The proposed film grain modeling and synthesis framework is shown in Fig. 1. The film
grain is removed from the video by a denoising process, and the grain parameters are
estimated from the flat regions of the difference between the noisy and de-noised
versions of the video sequence; these parameters are sent along with the compressed
video bitstream. `
`The details of the algorithm are as follows. When the film grain/noise parameters are
estimated, it is important to make sure that only smooth regions of the picture are used in
estimation, since edges and textures can affect estimation of the film grain strength and
pattern. To determine smooth areas of the picture, the Canny edge detector [7] is applied
to the denoised image at different scales, followed by the dilation operation. The
thresholds in Canny detector are proportional to the local intensity of the signal and are
therefore adapted to the luminance. Figure 2 shows an example of the image that is
produced by applying Canny edge detection at three scales to the denoised video
followed by dilation. The low luminance areas are also excluded from film grain pattern
estimation`
https://norkin.org/pdf/DCC_2018_AV1_film_grain.pdf
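A rough sketch of the flat-region selection step described in that excerpt, using OpenCV; the blur scales, Canny thresholds and kernel size are placeholders (the paper adapts the thresholds to local luminance):

```python
import cv2
import numpy as np

def flat_region_mask(denoised_gray, low_luma_threshold=32):
    """denoised_gray: 8-bit grayscale denoised frame. Returns a boolean mask of
    smooth, not-too-dark pixels usable for grain-strength estimation."""
    edges = np.zeros_like(denoised_gray)
    for sigma in (1, 2, 4):                    # edge detection at three scales
        blurred = cv2.GaussianBlur(denoised_gray, (0, 0), sigma)
        edges |= cv2.Canny(blurred, 50, 150)   # placeholder thresholds
    edges = cv2.dilate(edges, np.ones((5, 5), np.uint8), iterations=1)
    return (edges == 0) & (denoised_gray > low_luma_threshold)
```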
|
|
2021-07-07 11:58:24
|
It's complicated because they need to analyse the texture of grain to simulate the exact same noise.
|
|
|
Jyrki Alakuijala
|
|
spider-mario
|
|
2021-07-08 01:33:03
|
I believe we might need some more noise in the black area
|
|
2021-07-08 01:41:26
|
does anyone have experience with black and white photography?
|
|
2021-07-08 01:41:42
|
are the darkest areas there still noisy?
|
|
2021-07-08 01:42:21
|
I like the overall look of Sami's new LUT and think it can be merged as is; it is likely a big improvement
|
|
|
_wb_
|
2021-07-08 02:50:43
|
what about X and B?
|
|
|
Jyrki Alakuijala
|
2021-07-09 08:50:25
|
We don't add B noise
|
|
2021-07-09 08:50:55
|
we had a version where we added X noise, I don't think we add X noise today -- not completely sure
|
|
2021-07-09 08:51:32
|
I saw that quite a few photography guides show how to remove the chromaticity noise from images but leave the intensity noise
|
|
2021-07-09 08:52:06
|
I suspect that it is artistically and aesthetically a good solution to have the noise added to Y only
|
|
2021-07-09 08:52:45
|
In Renata's original work we added in both X and Y, but I believe we moved to Y only
|
|
2021-07-09 08:52:50
|
Sami knows better
|
|
|
|
Deleted User
|
|
Jyrki Alakuijala
we had a version where we added X noise, I don't think we add X noise today -- not completely sure
|
|
2021-07-09 09:06:45
|
> we had a version where we added X noise
Was it before the format freeze?
|
|
|
Jyrki Alakuijala
|
2021-07-09 09:07:31
|
we had that 2-3 years ago
|
|
2021-07-09 09:07:49
|
I don't know if the format supports X noise
|
|
|
|
Deleted User
|
2021-07-09 09:08:08
|
Wow, that's quite ancient
|
|
|
Jyrki Alakuijala
|
2021-07-09 09:08:25
|
had there been a question about it, I would have advised to drop it (and probably did, but don't remember)
|
|
2021-07-09 09:08:55
|
B noise we never built, I didn't see the need for it
|
|
|
|
Deleted User
|
2021-07-09 09:09:23
|
What did the X noise look like? Just curious. (And yes, I know myself that chroma noise is awful.)
|
|
|
Jyrki Alakuijala
|
2021-07-09 09:09:31
|
it looks a bit like debayering gone wrong
|
|
2021-07-09 09:09:53
|
or raw images that try to be too raw
|
|
2021-07-09 09:10:13
|
typically it is not a pleasant experience
|
|
|
|
Deleted User
|
2021-07-09 09:13:47
|
Did any example images of that survive?
|
|
2021-07-09 09:17:42
|
I guess he wouldn't be allowed to show them even if they did still exist.
|
|
2021-07-09 09:20:15
|
Somehow Jon *could* show his glitch art from FUIF era...
|
|
|
Jyrki Alakuijala
|
|
Did any example images of that survive?
|
|
2021-07-09 09:23:01
|
no, but perhaps pik codebase would still be able to generate some
|
|
2021-07-09 09:23:10
|
it is not worth looking into
|
|
|
raysar
|
2021-07-09 11:52:04
|
<@!532010383041363969> <@!604964375924834314> I'm sharing a raw PNG from a full-frame sensor at ISO 3200, 24 MP (this noise is nearly equal to the low-ISO noise of a smartphone) so you can see what it looks like.
I also did a chroma-only noise reduction, a real NR, and a black-and-white version.
https://1drv.ms/u/s!Aui4LBt66-MmlF2ATk1sffHvORGM?e=DPHnNf
|
|
|
|
Deleted User
|
2021-07-09 03:48:11
|
<@!532010383041363969> <@!604964375924834314> sorry to say that, but I think there's excess noise in highlights. Here's the original image, a result of HDR merging of 3 photos. Noise ends where channels start to clip (at least before HDR tone mapping). When there's clipping, there should be no noise, right?
|
|
2021-07-09 03:49:02
|
But here's that photo encoded with `--noise=1`. Here highlights *have* noise despite almost nobody expecting them to be there
|
|
2021-07-09 03:50:45
|
Actually I've checked it now with Paint's probe and there *is* clipping in the original. There should be no noise at all in the JXL version, but there is.
|
|
|
spider-mario
|
2021-07-09 03:50:48
|
yeah, `--noise=1` is the "old" method, which indeed has that defect
|
|
2021-07-09 03:50:55
|
does it also occur with the new experimental photon noise?
|
|
2021-07-09 03:51:07
|
`--photon_noise=ISO6400` for example
|
|
2021-07-09 03:51:20
|
(not sure what amount would be appropriate here and it needs to be specified manually for now)
|
|
2021-07-09 03:51:52
|
it was merged recently so may need a git pull
|
|
|
|
Deleted User
|
2021-07-09 03:56:32
|
Wait a sec, I have to recompile
|
|
2021-07-09 03:57:58
|
By the way since compiling is quite RAM-intensive, is it possible to do a "differential" compilation (recompile only the files that changed and reuse the rest)?
|
|
|
spider-mario
|
2021-07-09 04:02:48
|
it's what should happen if you go to the build directory and run `ninja`
|
|
2021-07-09 04:02:59
|
do you use `ci.sh` every time?
|
|
|
|
Deleted User
|
2021-07-09 04:03:52
|
No, I'm using original GitHub instructions:
```bash
cmake -DCMAKE_BUILD_TYPE=Release -DJPEGXL_ENABLE_DEVTOOLS=ON -DBUILD_TESTING=OFF ..
cmake --build . -- -j$(nproc)
```
|
|
|
spider-mario
|
2021-07-09 04:04:41
|
ah, I think this should be incremental too, however it's not so rare that a header included by many files gets changed and so they need to be rebuilt anyway
|
|
|
|
Deleted User
|
2021-07-09 04:06:33
|
Can I run just the `cmake --build . -- -j$(nproc)` line or do I have to run the first one, too?
|
|
|
spider-mario
|
2021-07-09 04:06:50
|
I think the `cmake --build` should be enough
|
|
2021-07-09 04:07:06
|
or you can probably just invoke `ninja` directly instead
|
|
|
|
Deleted User
|
2021-07-09 04:07:31
|
When using `./ci.sh` I was running out of RAM quite quickly
|
|
|
spider-mario
|
2021-07-09 04:07:40
|
ah, but if the first CMake command is the full one, then it may have generated a Makefile, not ninja files
|
|
2021-07-09 04:08:39
|
you can maybe balance ram usage vs. parallelism by adjusting the `-j` parameter to something a bit less than your number of cores
|
|
2021-07-09 04:09:39
|
if you know how much RAM one process tends to use, you could divide your amount of free RAM by that and pass that to `-j` instead of `$(nproc)` for example
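For example, a hypothetical helper along those lines (1.5 GiB per compiler process is just a guess; C++ translation units vary a lot):

```python
import os

def parallel_jobs(free_ram_gib, ram_per_job_gib=1.5):
    # Don't exceed the core count, and keep at least one job.
    return max(1, min(os.cpu_count() or 1, int(free_ram_gib // ram_per_job_gib)))

print(parallel_jobs(free_ram_gib=8))  # e.g. pass this value to ninja/make as -jN
```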
|
|
|
|
Deleted User
|
2021-07-09 04:25:12
|
OK, it worked. I don't know why it didn't work when I tried it a few months ago... ¯\_(ツ)_/¯
|
|
2021-07-09 04:34:51
|
Nope. Still too much grain in the highlights...
|
|
2021-07-09 04:35:08
|
If there's visible clipping, there should be no grain at all.
|
|
2021-07-09 04:50:38
|
But that idea with "ISO"-dependent noise is a great starting point. That's gonna be interesting...
|
|
|
spider-mario
|
2021-07-09 05:00:57
|
oh, true, at the moment it doesn't try to detect clipping
|
|
|
|
Deleted User
|
2021-07-09 05:02:50
|
It's not just clipping, more like bright areas in general
|
|
|
spider-mario
|
2021-07-09 05:14:02
|
it's normal for even bright areas to have at least a little bit of noise due to photo response non-uniformity: https://www.photonstophotos.net/Emil%20Martinec/noise.html#prnu
|
|
2021-07-09 05:14:34
|
(but also, at high ISO settings, even areas that are "bright" in the processed output may not have been made from much light)
|
|
|
raysar
|
2021-07-09 06:25:11
|
I did some tests with the noise synthesis, and it's very good! The amount of noise is good even in the highlights.
https://1drv.ms/u/s!Aui4LBt66-MmlF2ATk1sffHvORGM?e=DPHnNf
For this ISO 3200 picture at full resolution, at d3 the right noise setting is ISO400.
For the picture resized to full HD, at d3 the right noise setting is ISO3200 and it's AMAZING. (A good choice of noise level at high compression ratios is game-changing for noisy pictures.)
Look here: https://slow.pics/c/8bPWqGsq
|
|
|
fab
|
2021-07-09 06:33:21
|
see the red, it looks noisier
|
|
2021-07-09 06:37:13
|
did you really process a raw with jpeg xl?
|
|
|
raysar
|
2021-07-09 06:40:45
|
Ah yes, ISO1600 is the best choice; you are right <@456226577798135808>, the noise in the midtones is too strong compared to the black areas.
|
|
|
fab
|
|
raysar
Ah yes, ISO1600 is the best choice; you are right <@456226577798135808>, the noise in the midtones is too strong compared to the black areas.
|
|
2021-07-09 06:42:20
|
please explain why you need noise
|
|
2021-07-09 06:42:34
|
isn't a vardct improvement better?
|
|
2021-07-09 06:43:01
|
do you have a build
|
|
2021-07-09 06:43:15
|
ah seen
|
|
2021-07-09 06:43:18
|
what is the command
|
|
2021-07-09 06:43:21
|
commands
|
|
|
raysar
|
2021-07-09 06:47:15
|
The same comparison with ISO1600
https://slow.pics/c/NcAu8EHC
|
|
|
fab
|
2021-07-09 06:51:13
|
basically it does the same things as webp2 but faster
|
|
2021-07-09 06:51:18
|
no super research
|
|
2021-07-09 06:58:39
|
|
|
2021-07-09 06:58:55
|
here's a sample where you can see what quality it is
|
|
2021-07-09 06:58:56
|
for %i in (D:\Documents\jULY\doc\a10\*.jpg) do cjxl -j -d 8.687 -s 7 --photon_noise=ISO6400 %i %i.jxl
|
|
2021-07-09 06:59:06
|
this is raysar build download
|
|
2021-07-09 06:59:07
|
https://onedrive.live.com/?authkey=%21AI1UqLiYRHX4%5Fqg&id=26E3EB7A1B2CB8E8%212660&cid=26E3EB7A1B2CB8E8
|
|
2021-07-09 06:59:12
|
which i used
|
|
2021-07-09 07:33:47
|
....
|
|
2021-07-09 07:36:28
|
i'll stop with this girl or someone will be tired
|
|
2021-07-09 07:40:59
|
i'd advise using max ISO 506
|
|
2021-07-09 07:41:24
|
or the encoding will be perceived as worse than the original
|
|
2021-07-09 07:41:51
|
use iso 506 exactly
|
|
2021-07-09 07:51:53
|
on modular it doesn't work
|
|
2021-07-09 07:52:06
|
so for %i in (C:\Users\User\Documents\modul2\*.png) do cjxl -I 0.439 -s 4 -q 87.922 --photon_noise=ISO506 -m %i %i.jxl
|
|
2021-07-09 07:52:08
|
isn't valid
|
|
2021-07-09 07:52:16
|
but -I 0.439 fortunately is valid
|
|
2021-07-09 07:53:01
|
basically what this command does is redistribute grain in a more pleasant way, but the encoding quality is the same
|
|
2021-07-09 07:53:10
|
obviously you need a raw png
|
|
2021-07-09 07:53:17
|
high-quality shots with many colours
|
|
2021-07-09 07:53:24
|
also you need to encode with speed 9
|
|
2021-07-09 07:53:51
|
so outside of encoding test i don't think squoosh.app will implement this
|
|
2021-07-09 07:53:58
|
people will get confused
|
|
|
_wb_
|
2021-07-09 08:06:50
|
https://twitter.com/GeraldRTamayo1/status/1411986767268311040?s=19
|
|
2021-07-09 08:07:35
|
Who is that person? He claims to be the genius behind just about everything
|
|
|
|
veluca
|
|
raysar
I did some tests with the noise synthesis, and it's very good! The amount of noise is good even in the highlights.
https://1drv.ms/u/s!Aui4LBt66-MmlF2ATk1sffHvORGM?e=DPHnNf
For this ISO 3200 picture at full resolution, at d3 the right noise setting is ISO400.
For the picture resized to full HD, at d3 the right noise setting is ISO3200 and it's AMAZING. (A good choice of noise level at high compression ratios is game-changing for noisy pictures.)
Look here: https://slow.pics/c/8bPWqGsq
|
|
2021-07-09 08:08:29
|
next thing to do is speed up the decoder when noise is on, and maybe have proper detection/denoising in the encoder...
|
|
|
Scope
|
2021-07-09 08:09:56
|
|
|
2021-07-09 08:12:48
|
Looks like a troll or a joke account
|
|
|
Jyrki Alakuijala
|
2021-07-09 09:09:21
|
that person has been communicating on encode.su, too
|
|
2021-07-09 09:13:03
|
after some posts that I observed, I couldn't make any conclusions about their intentions
|
|
|
raysar
<@!532010383041363969> <@!604964375924834314> I'm sharing a raw PNG from a full-frame sensor at ISO 3200, 24 MP (this noise is nearly equal to the low-ISO noise of a smartphone) so you can see what it looks like.
I also did a chroma-only noise reduction, a real NR, and a black-and-white version.
https://1drv.ms/u/s!Aui4LBt66-MmlF2ATk1sffHvORGM?e=DPHnNf
|
|
2021-07-09 09:14:28
|
This is the kind of noise that we need!
|
|
|
raysar
I did some tests with the noise synthesis, and it's very good! The amount of noise is good even in the highlights.
https://1drv.ms/u/s!Aui4LBt66-MmlF2ATk1sffHvORGM?e=DPHnNf
For this ISO 3200 picture at full resolution, at d3 the right noise setting is ISO400.
For the picture resized to full HD, at d3 the right noise setting is ISO3200 and it's AMAZING. (A good choice of noise level at high compression ratios is game-changing for noisy pictures.)
Look here: https://slow.pics/c/8bPWqGsq
|
|
2021-07-09 09:29:14
|
Yeah! Now it is working as intended. Thank you Spider-Mario and Raysar!
|
|
2021-07-09 09:31:04
|
getting the noise level correctly modulated with the intensity level is a huge thing for me aesthetically
|
|
2021-07-09 09:31:46
|
it creates a feeling of realism/fragility/smoke/noise/dust etc. instead of bits and artefacts
|
|
2021-07-09 09:32:08
|
I suspect we still want to modulate the noise by compression distance, too
|
|
2021-07-09 09:32:29
|
when we compress at d0.5, the noise is already included -- adding more isn't the greatest idea
|
|
2021-07-09 09:33:20
|
in every intensity level the noise should be less than the quantization from the dct
|
|
2021-07-09 09:34:03
|
also, I can take it into account in psychovisual modeling -- the noise is a masking element that changes the ideal distribution of bits to be slightly more uniform than without noise
|
|
2021-07-09 09:35:32
|
I'd like to get feedback from actual photographers on the ISO1600 etc. based definition of noise -- whatever creates the most comfortable control parametrization
|
|
2021-07-09 09:36:15
|
the noise is also related to the shutter speed and aperture
|
|
2021-07-09 09:36:36
|
and to the f-number of the lens
|
|
|
fab
basically it does the same things as webp2 but faster
|
|
2021-07-09 09:57:53
|
could you show us a comparison of jxl vs. webp2 here?
|
|
|
raysar
|
|
Jyrki Alakuijala
I suspect we still want to modulate the noise by compression distance, too
|
|
2021-07-09 10:14:27
|
That's the main aim: choose the best function to link distance and noise intensity. The max would be the measured noise of the picture.
|
|
|
Jyrki Alakuijala
|
2021-07-09 10:15:04
|
Raysar, I saw that you posted on <#803950138795622455>
|
|
2021-07-09 10:15:33
|
are you a hobbyist or a (semi-)professional photographer?
|
|
2021-07-09 10:16:22
|
(btw, I really appreciate the feedback and guidance you are giving on this)
|
|
|
raysar
|
2021-07-09 10:16:29
|
Hobbyist, but I'm technically very good in photography and technology, not artistically :p
|
|
|
Jyrki Alakuijala
|
2021-07-09 10:17:12
|
the reference material (like the color calibration sheet) you posted was very useful for me
|
|
2021-07-09 10:17:33
|
I think we have some excessive noise in middle grays in jxl
|
|
2021-07-09 10:17:51
|
but also our look-up-table only has eight values, it may be that we are not able to fix that
|
|
2021-07-09 10:17:57
|
it is a very very subtle thing
|
|
|
BlueSwordM
|
|
_wb_
Who is that person? He claims to be the genius behind just about everything
|
|
2021-07-09 10:20:54
|
Uh, I don't remember h.264's design being started in the 1980s <:kekw:808717074305122316>
|
|
|
raysar
|
|
Jyrki Alakuijala
the reference material (like the color calibration sheet) you posted was very useful for me
|
|
2021-07-09 10:21:09
|
cool, I'm looking for the silver halide (film) pictures that I took when I was a kid, and I see the noise distribution is more visible in the whites than in the blacks <:YEP:808828808127971399>
|
|
|
BlueSwordM
|
2021-07-09 10:21:24
|
Ooooh.
|
|
2021-07-09 10:21:30
|
It looks like an AI actually on Twitter.
|
|
|
Jyrki Alakuijala
|
|
raysar
cool, I'm looking for the silver halide (film) pictures that I took when I was a kid, and I see the noise distribution is more visible in the whites than in the blacks <:YEP:808828808127971399>
|
|
2021-07-09 10:21:48
|
could you send a copy?
|
|
2021-07-09 10:24:01
|
supposedly some pixels can get hotter than other pixels and provide consistently more thermal noise
|
|
|
spider-mario
|
|
Jyrki Alakuijala
the noise is also related to the shutter speed and aperture
|
|
2021-07-09 10:24:58
|
it does in the sense that the combination of scene luminance + shutter speed + f-number determines the focal plane exposure, and an ISO value is more or less another way to convey that information (but without having to know the actual scene luminance)
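For reference, the standard reflected-light exposure equation ties those quantities together; a small sketch (K ≈ 12.5 cd/m² is the usual meter calibration constant, and this is only an illustration of why ISO alone is a reasonable proxy):

```python
def scene_luminance(f_number, shutter_s, iso, K=12.5):
    # Exposure equation: N^2 / t = L * S / K  =>  L = K * N^2 / (t * S).
    # A correctly exposed high-ISO shot therefore implies a dim scene,
    # i.e. few photons collected and strong shot noise.
    return K * f_number**2 / (shutter_s * iso)

print(scene_luminance(f_number=2.8, shutter_s=1/60, iso=1600))  # ~3.7 cd/m^2, a dim scene
```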
|
|
2021-07-09 10:25:24
|
I briefly considered basing the interface on exposure values but eventually decided against it
|
|
2021-07-09 10:25:42
|
I mean like "EV -4" and so on
|
|
|
Jyrki Alakuijala
|
|
spider-mario
I briefly considered basing the interface on exposure values but eventually decided against it
|
|
2021-07-09 10:26:47
|
from my point of view we finally have a solution that works in the main use case (for d2 to d8 or so)
|
|
2021-07-09 10:27:02
|
we probably need some more adjustments to make it work in every case ideally
|
|
|
spider-mario
|
2021-07-09 10:27:18
|
thermal noise is sort of negligible at the shutter speeds that are typically hand-held but can be more of a problem at longer exposure times
|
|
2021-07-09 10:27:33
|
roughly those where one would start considering the use of a tripod
|
|
|
Jyrki Alakuijala
|
2021-07-09 10:28:15
|
makes me wonder if the iso value should remain a pseudo iso value for noise level control -- or if we should try to extract the imaging parameters from the original and take the timing and aperture from there
|
|
2021-07-09 10:28:57
|
my intuition is the former -- just fix the noise in usual conditions (fixed aperture, f-value and shutter time), modulate noise with iso value
|
|
2021-07-09 10:29:42
|
perhaps photography has existing culture (stronger than just defining 'iso' of the film) that we could build on
|
|
2021-07-09 10:34:37
|
like expressing intent "if I use iso1600 value I attempt to capture images with this much film grain"
|
|
2021-07-09 10:35:22
|
thermal noise may be a boost on the noise at the very lowest levels -- at least on xyb I consider that the bias values are due to thermal noise in the eye
|
|
|
raysar
|
2021-07-09 10:38:43
|
<@!532010383041363969> look at picture 23; I'm not sure about the quality of the film scanner, but I don't think there is sensor noise.
You need to download the files.
https://1drv.ms/u/s!Aui4LBt66-MmlF2ATk1sffHvORGM?e=p6QH1H
|
|
2021-07-09 11:28:21
|
I also added 4 smartphone raw pictures (a very recent sensor) from darktable, with only debayering and white balance, at ISO 100-400-1600-4000.
There are also JPG versions (low-quality JPG setting).
Huge contrast is applied to the JPEGs, the noise reduction on smartphones is horrible, and there is ALWAYS a sharpening filter!
|
|
2021-07-09 11:30:41
|
But in reality there is no high-frequency noise in recent JPGs, smartphone or DSLR, only a medium-frequency, strange-looking noise.
|
|
|
spider-mario
|
2021-07-09 11:30:57
|
yes, I have come to associate strong sharpening with "the smartphone look"
|
|
2021-07-09 11:31:15
|
I barely sharpen my own images if at all
|
|
2021-07-09 11:31:29
|
it quickly becomes too much, too "flat"
|
|
2021-07-09 11:32:27
|
smartphones often obliterate everything with noise reduction, reducing any texture to mush
|
|
2021-07-09 11:32:34
|
and then bring back the edges with sharpening
|
|
|
raysar
|
2021-07-09 11:32:51
|
Sharpening will become forbidden EVERYWHERE; it always decreases visual quality and PSNR quality.
|
|
2021-07-09 11:35:41
|
If you have a rooted smartphone there are ways to tune the picture and remove the sharpening and excessive denoising before the JPG. Apple is less stupid and has better tuning for their JPGs.
|
|
|
BlueSwordM
|
2021-07-10 02:29:08
|
I do not think he is a human.
|
|
2021-07-10 02:29:19
|
He is clearly a GPT based AI.
|
|
2021-07-10 02:29:40
|
It has sentence level understanding, but nothing about context.
|
|
|
Fox Wizard
|
2021-07-10 02:31:15
|
You telling me half the people I met are AI? <a:ThinkingSphere:821038590091329557>
|
|
|
fab
|
|
Jyrki Alakuijala
could you show us a comparison of jxl vs. webp2 here?
|
|
2021-07-10 06:40:41
|
I don't think it's the same; but -s 8 -d 0.291 --photon_noise=ISO731 looks good, I don't know the difference in file size
|
|
2021-07-10 06:41:48
|
probably I didn't understand what the purpose of this is
|
|
|
Jyrki Alakuijala
|
|
raysar
<@!532010383041363969> look at picture 23; I'm not sure about the quality of the film scanner, but I don't think there is sensor noise.
You need to download the files.
https://1drv.ms/u/s!Aui4LBt66-MmlF2ATk1sffHvORGM?e=p6QH1H
|
|
2021-07-10 07:44:30
|
This is very interesting. What is the actual detailed process used in the 'Silver' category?
|
|
|
fab
I don't think it's the same; but -s 8 -d 0.291 --photon_noise=ISO731 looks good, I don't know the difference in file size
|
|
2021-07-10 07:54:35
|
If you use -d 0.291 you don't need to add noise, -d 0.291 can store the original noise. JPEG XL's noise synthesis is not an image processing tool, there are better image processing tools around. Just post the great looking webp2 image and the problematic looking jpeg xl from the same original so that I can learn about the problem.
|
|
|
raysar
|
|
Jyrki Alakuijala
This is very interesting. What is the actual detailed process used in the 'Silver' category?
|
|
2021-07-10 07:56:11
|
It's the default scan from a Nikon photographic film scanner. (I don't know how to talk about analog ("silver") photography in English.)
|
|
|
Jyrki Alakuijala
|
2021-07-10 08:09:24
|
the silver grain has indeed very different characteristics from shot noise
|
|
2021-07-10 08:10:20
|
I'm somehow more used to the shot noise aesthetic and it is more pleasing to me than the silver grain noise, but we could consider developing other noise parameters to create silver grain noise, too
|
|
|
fab
|
2021-07-10 08:59:14
|
what ISO values can I use?
|
|
2021-07-10 08:59:15
|
for %i in (C:\Users\User\Documents\8\*.png) do cjxl -s 6 -p -q 85 --photon_noise=ISO636 %i %i.jxl
|
|
2021-07-10 08:59:22
|
with iso 638 is the same
|
|
2021-07-10 09:00:12
|
ah with progressive it doesn't work
|
|
|
spider-mario
|
2021-07-10 09:00:52
|
that, and also, there are only 0.20 stops between ISO 636 and ISO 731
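(The stop difference is just the base-2 log of the ISO ratio:)

```python
import math
print(math.log2(731 / 636))  # ~0.20 stops, too small a change to be visible
```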
|
|
2021-07-10 09:01:49
|
you can try higher values if you want to check whether it's applied in a given mode
|
|
|
fab
|
2021-07-10 09:02:59
|
it still doesn't work
|
|
2021-07-10 09:03:22
|
for %i in (C:\Users\User\Documents\8\*.png) do cjxl -s 7 -q 85 --photon_noise=ISO1600 %i %i.jxl
|
|
2021-07-10 09:03:37
|
the quality/file size is always the same
|
|
2021-07-10 09:03:42
|
maybe i should use distance
|
|
2021-07-10 09:04:29
|
no it doesn't change anything
|
|
2021-07-10 09:06:20
|
ah it works only on speed 8
|
|
2021-07-10 09:06:28
|
it doesn't at speed 7
|
|
|
|
Deleted User
|
2021-07-10 12:13:26
|
<@!416586441058025472> I'm not sure if you already know, but you can also use decimals, like: --photon_noise=ISO731.936284001
|
|
|
diskorduser
|
2021-07-10 04:33:44
|
Image viewers like gwenview crash when opening images encoded with --photon_noise=. Is that normal? Does the qt plugin need an update to fix that?
|
|
|
fab
|
2021-07-10 04:38:22
|
imageglass works just fine, and so does firefox nightly
|
|
|
raysar
|
|
diskorduser
Image viewers like gwenview crash when opening images encoded with --photon_noise=. Is that normal? Does the qt plugin need an update to fix that?
|
|
2021-07-10 04:48:29
|
I think so; it's the same with xnviewmp, it only supports basic jxl.
|
|
|
|
Deleted User
|
2021-07-10 04:50:20
|
I can view photon_noise with XnViewMP. <:WTF:805391680538148936>
|
|
|
fab
|
|
raysar
I think so; it's the same with xnviewmp, it only supports basic jxl.
|
|
2021-07-10 04:52:33
|
Xnview doesn't even support basic jpeg xl.
|
|
2021-07-10 04:52:42
|
no support at all
|
|
2021-07-10 04:53:02
|
what chrome can open xnview cannot.
|
|
2021-07-10 04:53:36
|
including modular, animation, lossless transcode e 1 e 2
|
|
|
I can view photon_noise with XnViewMP. <:WTF:805391680538148936>
|
|
2021-07-10 04:54:34
|
i have not tried
|
|
|
raysar
|
|
I can view photon_noise with XnViewMP. <:WTF:805391680538148936>
|
|
2021-07-10 06:15:20
|
Yes, low resolution with noise works, but it crashes with high-resolution pictures
|
|
|
spider-mario
|
|
diskorduser
Image viewers like gwenview crash when opening images encoded with --photon_noise=. Is that normal? Does the qt plugin need an update to fix that?
|
|
2021-07-10 06:26:52
|
it shouldn't, it's the same format as before
|
|
2021-07-10 06:26:56
|
just with a specially-constructed LUT
|
|
|
_wb_
|
2021-07-10 06:27:55
|
They maybe also crash on the old --noise=1
|
|
2021-07-10 06:28:37
|
IIRC an old version of libjxl has a bug in noise reconstruction
|
|
2021-07-10 06:29:28
|
Which reminds me that we should really do a new release and encourage applications to update to avoid stuff like that
|
|
2021-07-10 08:09:48
|
https://twitter.com/jonsneyers/status/1413789539823411202?s=19
|
|
|
BlueSwordM
|
|
diskorduser
Image viewers like gwenview crash when opening images encoded with --photon_noise=. Is that normal? Does the qt plugin need an update to fix that?
|
|
2021-07-10 09:58:26
|
Uh, it shouldn't crash...
|
|
|
Jyrki Alakuijala
|
2021-07-10 10:29:14
|
https://spie.org/Publications/Proceedings/Paper/10.1117/12.2529237?SSO=1 The 50 % claim was done with extreme care; I stand behind that claim
|
|
2021-07-10 10:29:40
|
(in psychovisually lossless quality)
|
|
|
improver
|
2021-07-11 01:24:46
|
https://zpl.fi/exif-orientation-in-different-formats/ this is pretty cool
|
|
|
_wb_
|
2021-07-11 01:40:45
|
interesting
|
|
2021-07-11 01:43:21
|
in jxl we made the clear design decision that anything that has render impact goes into the codestream so the decoder can do the right thing. Metadata is only **meta**-data, info about the image but not needed to show the image itself. So orientation, color profile, etc are not metadata in jxl, but part of the image data.
|
|
2021-07-11 01:44:56
|
with the old formats like JPEG that wasn't possible because color profiles and orientation were things that weren't invented yet when the JPEG codestream was invented, so there was no other option than to do a "glue and duct tape" thing
|
|
2021-07-11 01:47:03
|
in video world they are used to putting render critical stuff as 'metadata' in the container and using the codestream only to represent pixel values but not how to interpret the values
|
|
2021-07-11 01:48:31
|
so WebP, HEIF, AVIF are also "glue and duct tape" approaches when it comes to render impacting 'metadata' like color profile or orientation
|
|
|
improver
|
2021-07-11 01:50:17
|
AVIF has designated headers (irot & imir) and they do work but it seems like EXIF metadata doesn't affect it in chromium
|
|
2021-07-11 01:50:40
|
probably for the better tbh
|
|
|
_wb_
|
|
improver
|
2021-07-11 01:51:45
|
huh wow
|
|
|
_wb_
|
2021-07-11 01:51:55
|
in canary it is fine though:
|
|
|
improver
|
2021-07-11 01:52:08
|
yes i tested canary
|
|
|
_wb_
|
2021-07-11 01:52:33
|
in current stable chrome it's not yet ok
|
|
|
improver
|
2021-07-11 01:53:27
|
tbh they pushed it in too soon
|
|
2021-07-11 01:53:42
|
on firefox, avif is, I'd say quite reasonably, not yet supported
|
|
|
_wb_
|
2021-07-11 01:53:51
|
the thing with these headers is that they leave it up to the application to look for them and apply the orientation, which makes it likely that some applications do stuff wrong
|
|
2021-07-11 01:54:19
|
that's true for all of those fancy HEIF container features
|
|
2021-07-11 01:55:04
|
like cropping, grids, overlays, etc
|
|
2021-07-11 01:55:53
|
all nice on paper, but it's only as usable as the lowest common denominator of what is correctly supported in all applications, and that is likely going to remain very little
|
|
|
improver
|
2021-07-11 01:57:07
|
by not defining hell knows how many container features
|
|
|
_wb_
|
2021-07-11 01:57:15
|
we see it as a decoder responsibility, not an application responsibility
|
|
2021-07-11 01:57:40
|
we also have cropping, overlays, etc, but at the codestream level, not the container level
|
|
2021-07-11 01:57:50
|
so it only needs to be implemented in libjxl
|
|
2021-07-11 01:57:53
|
not in all applications
|
|
|
improver
|
2021-07-11 01:59:39
|
so overall jxl has about the same fancy format feature set? I thought it was kinda more coherent and more well defined
|
|
2021-07-11 02:00:13
|
maybe it's kinda from the impression that HEIF is a big kitchen-sink kind of format
|
|
|
_wb_
|
2021-07-11 02:00:48
|
I think functionally jxl can do the same as heif, but in a more elegant and coherent way
|
|
2021-07-11 02:03:13
|
In jxl there are just frames that can have a width,height,x0,y0, duration and blend mode, no separate notions of grid, overlay, cropping
|
|
|
improver
|
2021-07-11 02:03:32
|
oh so that's kinda more integrated
|
|
2021-07-11 02:05:08
|
yeah, i kinda feel the difference better now
|
|
2021-07-11 02:06:03
|
compared to HEIF, the format using AV1 keyframes and making up AVIF, it's kinda less split up
|
|
|
_wb_
|
2021-07-11 02:07:54
|
the video-derived codecs need a container anyway because video codecs are typically hardcoded for 3 channels, so if you want to have alpha transparency (or a depth map), you need multiple codestreams and link them together at the container level
|
|
2021-07-11 02:08:54
|
also video codecs don't support icc profiles, video uses enums
|
|
2021-07-11 02:09:59
|
basically containers are necessary hackery needed to make video codecs work as image formats
|
|
2021-07-11 02:10:25
|
but if the codestream is designed from the start as an image codec, the container becomes optional
|
|
|
raysar
|
2021-07-11 09:38:19
|
<@!532010383041363969> <@!604964375924834314>
The noise synthesis is not grayscale (here -d 0.1 --photon_noise ISO128000). Is that a choice? Is the aim luma-only noise?
Another question: noise synthesis is not possible in modular right now; can you enable it?
|
|
|
spider-mario
|
2021-07-11 09:46:49
|
what do you mean by not grayscale?
|
|
2021-07-11 09:47:21
|
re: modular, I think it would still only work in XYB
|
|
2021-07-11 09:47:42
|
at least work as expected, let's say
|
|
|
spider-mario
what do you mean by not grayscale?
|
|
2021-07-11 09:48:11
|
ah, never mind
|
|
2021-07-11 09:49:07
|
from what I understand, the aim was to properly balance the color of the noise as a function of the underlying color
|
|
|
raysar
|
|
spider-mario
re: modular, I think it would still only work in XYB
|
|
2021-07-11 11:18:32
|
using --colortransform=0 does not enable noise synthesis
|
|
|
spider-mario
from what I understand, the aim was to properly balance the color of the noise as a function of the underlying color
|
|
2021-07-11 11:19:41
|
Ok, so a monochrome image with noise loses its monochrome property?
|
|
|
spider-mario
|
|
raysar
using --colortransform=0 does not enable noise synthesis
|
|
2021-07-11 11:19:49
|
right, it currently doesn't, but I mean that if we enabled it, it's likely only with XYB that it would make sense
|
|
|
raysar
Ok, so a monochrome image with noise loses its monochrome property?
|
|
2021-07-11 11:23:13
|
apparently a little bit, I don't know if it was an explicit design goal to avoid it completely
|
|
2021-07-11 11:23:45
|
(empirically, I would guess it was not)
|
|
|
raysar
|
2021-07-11 11:47:42
|
At ISO 128000 in sepia style, blue and red pixels are visible. If I have the choice I prefer monochromatic noise.
|
|
2021-07-11 11:51:51
|
I'm disappointed, the max ISO seems to be ISO543316
|
|
|
fab
|
2021-07-12 06:54:11
|
too bright
|
|
|
Jyrki Alakuijala
|
|
raysar
<@!532010383041363969> <@!604964375924834314>
The noise synthesis is not grayscale (here -d 0.1 --photon_noise ISO128000). Is that a choice? Is the aim luma-only noise?
Another question: noise synthesis is not possible in modular right now; can you enable it?
|
|
2021-07-12 11:03:25
|
Thanks for testing it. In Renata's noise synthesis we chose to put ~10 % of the noise energy in Y to the Y channel -- looks like that is still reflected in the noise generation. I didn't know we still did that.
|
|
2021-07-12 11:03:49
|
I don't know if Y only is better than having both X and Y
|
|
2021-07-12 11:04:22
|
probably safer by default to have Y only
|
|
2021-07-12 11:04:55
|
having a very slight amount of X noise does make the photographs look better, but that needs to be very, very moderate
|
|
|
improver
compared to HEIF, the format using AV1 keyframes and making up AVIF, it's kinda less split up
|
|
2021-07-12 11:09:06
|
Aren't all kinds of AV1 frames supported in AVIF -- not just keyframes?
|
|
|
improver
|
2021-07-12 11:10:57
|
yes, for animated stuff I think it's all frame types, but I haven't looked at how different it is from still avif
|
|
|
Jyrki Alakuijala
|
|
improver
yes, for animated stuff I think it's all frame types, but I haven't looked at how different it is from still avif
|
|
2021-07-12 11:12:04
|
I heard that any four AV1 frames can create a single AVIF frame -- didn't read the spec myself
|
|
|
_wb_
|
2021-07-12 12:43:46
|
there is no distinction afaik between still and animated AVIF -- it used to be called AVIFS for animated but they dropped that distinction
|
|
2021-07-12 12:44:42
|
which means that if you want to support `image/avif`, you need to support all of AV1, not only the intra frame stuff
|
|
|
raysar
|
|
Jyrki Alakuijala
Thanks for testing it. In Renata's noise synthesis we chose to put ~10 % of the noise energy in Y to the Y channel -- looks like that is still reflected in the noise generation. I didn't know we still did that.
|
|
2021-07-12 01:17:59
|
It's not clear to me <:Thonk:805904896879493180> https://en.wikipedia.org/wiki/LMS_color_space
There is no luma axis in XYB, so how do you change only the luma of one color? <:FeelsReadingMan:808827102278451241>
Is there a picture or video of a cube for the XYB color space? Like this one: https://www.youtube.com/watch?v=3dET-EoIMM8
I'd prefer a full luma noise with --photon_noise and a --photon_noise_chroma=1 to enable color noise
I'm fighting all the time with the color noise of camera sensors. <:ReeCat:806087208678588437>
|
|
|
|
veluca
|
2021-07-12 01:20:38
|
(0, 1, 1) is just luma in XYB
|
|
2021-07-12 01:20:57
|
IIRC the format will always add chroma noise
|
|
|
Jyrki Alakuijala
|
|
veluca
IIRC the format will always add chroma noise
|
|
2021-07-12 01:32:19
|
Did we define what happens in monochrome images or monochrome layers?
|
|
|
|
veluca
|
2021-07-12 01:33:25
|
I think if the image says it's grayscale (although I don't think you can say that for an XYB image) it will in the end just read one channel
|
|
2021-07-12 01:33:28
|
but otherwise no
|
|
|
raysar
|
|
veluca
IIRC the format will always add chroma noise
|
|
2021-07-12 01:39:21
|
Ok, it's a bitstream limitation?
|
|
|
|
veluca
|
2021-07-12 01:40:02
|
yeah, I think so - we add 1/128 chroma noise and 127/128 luma noise
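An illustration of what such a split could mean for a single noise sample; this is not the libjxl code, and whether 1/128 refers to energy or amplitude is an assumption here:

```python
import math
import random

def split_noise_sample(luma_energy_fraction=127 / 128):
    # Draw one noise sample and distribute its energy between X (chroma) and Y (luma).
    n = random.gauss(0.0, 1.0)
    return (n * math.sqrt(1 - luma_energy_fraction),   # X component, sqrt(1/128) ~ 9% in amplitude
            n * math.sqrt(luma_energy_fraction))       # Y component

x_noise, y_noise = split_noise_sample()
```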
|
|
|
Jyrki Alakuijala
|
2021-07-12 01:44:19
|
it would be interesting to learn more if this is an acceptable aesthetic solution
|
|
2021-07-12 01:45:02
|
I acknowledge that most photography guides focus on removing the chromaticity noise altogether
|
|
2021-07-12 01:46:12
|
If it is just wrong we might have some ways to fix it, file an erratum, etc. Jon and Luca know how and if this can be done.
|
|
|
raysar
|
2021-07-12 02:01:57
|
In av1 they are estimating and generating color grain.
`There are other characteristics of the film grain that need to be modeled, such as correlation of grain between color channels. In film, grain is often said to be independent between R,G, and B channels and proportional to the linear light intensity. After a YCbCr color transformation, grain between the channels is typically correlated. Application of the film grain in AV1 is made in the coding (e.g. YCbCr) domain to make processing feasible in the decoders. Correlation of grain between color components has to be captured during the film grain estimation.`
https://norkin.org/research/film_grain/index.html
|
|
|
|
veluca
|
2021-07-12 02:07:44
|
well, not sure how we'd modify that without either an extension or just entirely getting rid of chroma noise
|
|
|
raysar
|
2021-07-12 02:12:01
|
Ah yes, I see it in the jxl specifications:
|
|
|
improver
|
|
_wb_
which means that if you want to support `image/avif`, you need to support all of AV1, not only the intra frame stuff
|
|
2021-07-12 02:19:20
|
imagemagick won't do avifs at all but works with avif just fine
|
|
2021-07-12 02:19:43
|
iirc it's related to libheif dep not handling that
|
|
2021-07-12 02:24:08
|
<https://github.com/strukturag/libheif/issues/377> and devs don't seem to be very enthusiastic about it
|
|
2021-07-12 02:26:34
|
(which is one of reasons why I'm pushing for jxl animation to be taken seriously, half-baked support is very annoying)
|
|
|
BlueSwordM
|
|
Jyrki Alakuijala
Aren't all kinds of AV1 frames supported in AVIF -- not just keyframes?
|
|
2021-07-12 02:41:31
|
Yes.
|
|
|
diskorduser
|
2021-07-12 04:06:03
|
I think it would be easy to add jxl support to Pix - qt based gallery app.
|
|
|
_wb_
|
2021-07-13 09:15:28
|
Request for image compression consulting or referrals for hire
Inbox
jg@projectn.co
Jul 12, 2021, 9:49 PM (13 hours ago)
to me
Hi Dr. Sneyers,
I work at Project N, a fast growing stealth startup where we develop large scale compression algorithms for all kinds of data for large customers like Atlassian and Tesla. We are impressed by your work on FLIF and JPEG XL so I wanted to reach out for you.
I know your work on JPEG XL is crucial but are you available for a quick chat for us to pick your brain? Also, are you interested in consulting or could you refer us to someone who is available for hire to work on compression algorithms?
Here is some background on us for you or any good people you know that could be interested:
--------
We are based in Mountain View, California.
Compression is not new. What is new is doing it at this scale. We use machine learning to continually improve as more files are added until the model reaches its capacity which is informally the maximum amount of information our model can learn. Since we will run on petabytes of data, we need a model that is very fast while having extraordinarily large capacity.
We have plenty of hard problems and plenty of funding so hiring great people is a top priority for us. We still have less than 30 people at the company so this is an amazing opportunity to join early and help lay the foundations.
--
Joseph Gardi
ML lead, Project N
|
|
2021-07-13 09:15:58
|
https://tenor.com/view/robert-downey-jr-face-palm-iron-man-rdj-why-gif-10994785
|
|
|
Crixis
|
|
_wb_
Request for image compression consulting or referrals for hire
Inbox
jg@projectn.co
Jul 12, 2021, 9:49 PM (13 hours ago)
to me
Hi Dr. Sneyers,
I work at Project N, a fast growing stealth startup where we develop large scale compression algorithms for all kinds of data for large customers like Atlassian and Tesla. We are impressed by your work on FLIF and JPEG XL so I wanted to reach out for you.
I know your work on JPEG XL is crucial but are you available for a quick chat for us to pick your brain? Also, are you interested in consulting or could you refer us to someone who is available for hire to work on compression algorithms?
Here is some background on us for you or any good people you know that could be interested:
--------
We are based in Mountain View, California.
Compression is not new. What is new is doing it at this scale. We use machine learning to continually improve as more files are added until the model reaches its capacity which is informally the maximum amount of information our model can learn. Since we will run on petabytes of data, we need a model that is very fast while having extraordinarily large capacity.
We have plenty of hard problems and plenty of funding so hiring great people is a top priority for us. We still have less than 30 people at the company so this is an amazing opportunity to join early and help lay the foundations.
--
Joseph Gardi
ML lead, Project N
|
|
2021-07-13 09:28:19
|
So infinite data for your research
|
|
|
_wb_
|
2021-07-13 09:29:36
|
I sometimes feel like I live in Silicon Valley, the TV series
|
|
2021-07-13 09:29:49
|
that series is way too accurate
|
|
|
|
veluca
|
2021-07-13 10:10:46
|
I got one too
|
|
|
spider-mario
|
2021-07-13 10:13:01
|
same
|
|
2021-07-13 10:13:47
|
they must have run `git clone .../libjxl.git; cd libjxl; git shortlog -se`
|
|
|
_wb_
|
2021-07-13 10:16:51
|
Sounds way too much like a buzzword-based VC money churning operation.
|
|
|
|
veluca
|
|
spider-mario
they must have run `git clone .../libjxl.git; cd libjxl; git shortlog -se`
|
|
2021-07-13 10:19:44
|
probably not, they didn't email me on my personal account
|
|
2021-07-13 10:19:57
|
maybe some paper?
|
|
|
Jyrki Alakuijala
|
2021-07-13 10:55:28
|
I have seen some invitations to a compression startup, but possibly not this one (correction, they sent that to me, too)
|
|
2021-07-13 10:55:46
|
much more common are invitations to high-frequency trading companies
|
|
2021-07-13 11:19:42
|
There were claims on encode.su that someone managed to get a lot of money by claiming to be able to compress everything to 4 kB, including movies etc.
|
|
2021-07-13 11:20:11
|
compression and hf trading used to be the same -- fast algorithms that process through a lot of data while applying primitive statistics
|
|
2021-07-13 11:20:55
|
since neural nets became an easy to deploy thing they stopped asking me, I guess statistics were replaced by neural nets
|
|
|
spider-mario
|
|
veluca
maybe some paper?
|
|
2021-07-13 11:23:59
|
I think my corp e-mail is only public through git repositories (mainly the libjxl one)
|
|
|
improver
|
2021-07-13 11:28:05
|
I guess not for general use, but if you have certain specific kinds of moving content, pretty sure you could
|
|
|
Jyrki Alakuijala
|
2021-07-13 11:33:37
|
perhaps if you could run a full-blown simulation of the universe and just record the time and position when the data existed
|
|
2021-07-13 11:34:09
|
that would make the U in URL more of a U
|
|
|
_wb_
|
2021-07-13 11:34:34
|
there are always the hoaxes about breakthrough recursive compression schemes that can compress anything down to nothing -- of course they're all fake, there's no way to avoid the pigeonhole principle.
But occasionally people manage to convince venture capitalists that they really have something great now. The right combination of buzzwords can unlock big money if you're lucky and in a position to pitch your thing to the right person at the right time.
|
|
|
Jyrki Alakuijala
|
2021-07-13 11:34:38
|
with modern technology decompression would be somewhat slow and expensive
|
|
2021-07-13 11:35:19
|
it seems likely that such a decompressor doesn't fit in our universe
|
|
|
_wb_
|
|
Jyrki Alakuijala
with modern technology decompression would be somewhat slow and expensive
|
|
2021-07-13 11:36:41
|
surely just a temporary practical problem, you shouldn't dismiss promising new technology based on that, hardware becomes better exponentially so soon enough it will be just fine
|
|
|
Jyrki Alakuijala
|
2021-07-13 11:37:09
|
people would be willing to try to solve it for 20 million
|
|
2021-07-13 11:38:36
|
if you cool that system down with liquid helium, you can overclock it and see tomorrow's stock market prices -- it would quickly pay for itself
|
|
|
_wb_
|
2021-07-13 11:38:37
|
we need to think outside the box, we're not going to limit our thinking by focusing on minor details like the size of the universe, are we?
|
|
|
Jyrki Alakuijala
|
2021-07-13 11:38:59
|
oops, I'm on on-topic, no matter
|
|
|
improver
|
2021-07-13 11:39:29
|
on-topic is compression-related off-topic so i think it's fitting
|
|
2021-07-13 11:39:58
|
for prediction stuff I'd say one needs more inputs, not smarter algorithms, at this point
|
|
2021-07-13 11:40:48
|
big data hoarding and filtering, to see through the social cues which lead to moves
|
|
|
BlueSwordM
|
2021-07-13 01:56:29
|
Yes actually.
|
|
2021-07-13 01:57:00
|
The problem is that it will look like the biggest piece of garbage in existence.
|
|
2021-07-13 02:04:45
|
But yes, it was a complete scam.
|
|
|
lonjil
|
2021-07-13 09:23:24
|
lol I upgraded from a may build of cjxl and now the file I was playing around with encodes bigger with the same effort and distance settings
|
|
2021-07-13 09:23:25
|
F
|
|
|
_wb_
|
2021-07-13 09:26:47
|
It might also be slightly better
|
|
|
lonjil
|
2021-07-13 09:28:03
|
indeed
|
|
2021-07-13 09:28:42
|
I wasn't expecting such a jump in size, so I didn't bother to keep the old file for comparison. I could check out and build the older commit, but whatever.
|
|
2021-07-13 09:33:43
|
png: 29M
3-norm 1.21 jpeg (mozjpg q=90): 3.7M
3-norm 1.08 jxl (cjxl d=1.0): 1.7M
Man, this shit feels good.
|
|
|
raysar
|
|
_wb_
It might also be slightly better
|
|
2021-07-13 09:58:44
|
The distance estimation has changed?
|
|
|
BlueSwordM
|
|
raysar
The distance estimation has changed?
|
|
2021-07-13 10:13:28
|
It is likely the rate control has gotten more accurate and better.
|
|
|
improver
|
2021-07-14 01:49:17
|
Can someone explain to me what HDR is? I've been trying to read about it but I don't think I'm getting it yet. So far it seems to be about the use of 10-12 bit colors with DCI-P3 color profiles, but then there's also stuff about transfer functions, additional metadata and signaling of it, which I don't exactly get yet.
|
|
|
w
|
2021-07-14 01:52:21
|
from my understanding, it is mainly a combination of light level + wide gamut. And higher bit depth is only required for a wider gamut because of banding.
|
|
2021-07-14 01:53:23
|
in current year / generation of displays, it is dci-p3 contained in the rec 2020
|
|
2021-07-14 01:53:34
|
and is still a meme because no display can do 100% coverage of dci p3 (not volume. bigger volume = bad)
|
|
2021-07-14 01:53:41
|
so all it does is burn your eyes
|
|
|
_wb_
|
2021-07-14 05:50:57
|
HDR is about dynamic range: how bright you go, basically. In SDR, the brightest white is 200 nits or so, while in HDR it goes to 1000+ nits.
|
|
2021-07-14 05:52:44
|
You need more bit depth for HDR because otherwise you would get extreme banding, since the steps between subsequent shades would get much bigger.
|
|
2021-07-14 06:03:21
|
Wide gamut is an orthogonal thing: it is about where the RGB primaries are positioned, which determines the shape/volume of colors that can be reproduced (more saturated colors if the primaries are more extreme). This also requires more bit depth.
|
|
2021-07-14 06:04:03
|
You can have HDR with just the regular sRGB gamut, and you can have SDR with a very wide gamut like ProPhoto.
|
|
2021-07-14 06:04:54
|
But typically current HDR would be Rec.2100, which is both higher dynamic range and wider gamut.
|
|
2021-07-14 06:08:25
|
Yes. Also to do it right, you need good contrast. Better shades of dark is also considered a form of HDR
|
|
|
|
veluca
|
2021-07-14 12:24:11
|
hence why "hdr true black" is a thing
|
|
2021-07-14 12:24:34
|
(basically for OLED displays only, although I hear mini-LED is getting close)
|
|
2021-07-14 12:26:00
|
"traditional" LED does HDR by reaching $high nits, and also potentially using $a_lot of energy ๐
|
|
|
|
Deleted User
|
2021-07-14 12:31:48
|
https://i.ytimg.com/vi/1WOqTReFDdI/maxresdefault.jpg
left: HDR on | right: HDR off
|
|
|
fab
|
|
https://i.ytimg.com/vi/1WOqTReFDdI/maxresdefault.jpg
left: HDR on | right: HDR off
|
|
2021-07-14 12:36:16
|
youtube?
|
|
2021-07-14 12:36:36
|
or just a microsoft windows 11 demonstration
|
|
2021-07-14 12:36:51
|
is the video in av1
|
|
2021-07-14 12:37:03
|
could you send a link to the video
|
|
|
_wb_
|
|
https://i.ytimg.com/vi/1WOqTReFDdI/maxresdefault.jpg
left: HDR on | right: HDR off
|
|
2021-07-14 12:40:58
|
JPEG is not suitable for HDR, since it is limited to 8-bit. What you are showing is likely tone-mapped to SDR, not actual HDR.
|
|
|
spider-mario
|
2021-07-14 01:36:33
|
if it's DisplayHDR certified, you can get an idea of how bright it gets from the number: https://displayhdr.org/#:~:text=DisplayHDR%20Performance%20Levels
|
|
|
|
Deleted User
|
|
_wb_
JPEG is not suitable for HDR, since it is limited to 8-bit. What you are showing is likely tone-mapped to SDR, not actual HDR.
|
|
2021-07-14 03:24:35
|
Well, but Oblivion calls it HDR despite 8 bpc. ^^
|
|
|
_wb_
|
2021-07-14 04:48:42
|
8 bit = 256 shades. The dynamic range depends on the transfer curve, not on the bit depth, so theoretically you could do HDR in 8-bit, but even SDR can already have some banding in 8-bit if you don't dither, so if you significantly stretch the dynamic range while still only having 256 shades, banding will become a problem even with dithering.
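A toy model of that trade-off, assuming the 256 code values are spread logarithmically between a fixed black level and the peak (not any real transfer curve):

```python
import math

def per_step_ratio(peak_nits, black_nits=0.1, codes=256):
    # Constant luminance ratio between two adjacent code values.
    return math.exp(math.log(peak_nits / black_nits) / (codes - 1)) - 1

print(f"8-bit, 200 nits peak:  {per_step_ratio(200):.1%} per step")   # ~3.0%
print(f"8-bit, 1000 nits peak: {per_step_ratio(1000):.1%} per step")  # ~3.7%
# Stretching the range over the same 256 codes makes each step bigger,
# which is why HDR delivery formats move to 10+ bits (1024+ codes).
```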
|
|
|
190n
|
|
Well, but Oblivion calls it HDR despite 8 bpc. ^^
|
|
2021-07-14 06:18:21
|
Oblivion might be rendering with higher precision internally and then tonemapping. Games have been doing that for a while (long before HDR monitors were a thing).
|
|
|
Jyrki Alakuijala
|
|
_wb_
JPEG is not suitable for HDR, since it is limited to 8-bit. What you are showing is likely tone-mapped to SDR, not actual HDR.
|
|
2021-07-14 08:15:01
|
I consider it a misunderstanding that JPEG is limited to 8 bits. I believe I can get 10.5 bits out of it easily.
|
|
|
_wb_
8 bit = 256 shades. The dynamic range depends on the transfer curve, not on the bit depth, so theoretically you could do HDR in 8-bit, but even SDR can already have some banding in 8-bit if you don't dither, so if you significantly stretch the dynamic range while still only having 256 shades, banding will become a problem even with dithering.
|
|
2021-07-14 08:15:53
|
the stretching needed for equal perception is something like 30-60 % ... it is log(target_intensity)/log(source_intensity)
|
|
2021-07-14 08:16:19
|
log(600)/log(80) = ~1.45
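(Checking that ratio:)

```python
import math
print(math.log(600) / math.log(80))  # ~1.46, i.e. roughly 45-46 % larger perceptual steps
```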
|
|
2021-07-14 08:16:56
|
if rec 2100 is used (which has asymptotically logarithmic gamma)
|
|
2021-07-14 08:17:47
|
situation is a bit worse with cubic root, but can be partially compensated for in adaptive quantization
|
|
|
_wb_
|
2021-07-14 08:18:48
|
Sure, JPEG DCT coefficients are basically 12-bit so there is more effective bit depth than 8 if you dequant with higher precision. But in practice all JPEG decoders will decode to 8-bit YCbCr, at least atm.
|
|
2021-07-14 08:19:28
|
We could do high-precision JPEG with libjxl though (either directly or via jxl recompression, or both)
|
|
|
Jyrki Alakuijala
|
2021-07-14 08:19:31
|
I consider 8 bit HDR okeyish, if done very carefully ... 600 nits is only 15 % larger psychovisual steps than 250 nits (and many monitors go to 250 when brightness is set to full)
|
|
|
_wb_
We could do high-precision JPEG with libjxl though (either directly or via jxl recompression, or both)
|
|
2021-07-14 08:19:59
|
yes, please
|
|
2021-07-14 09:43:52
|
Before
|
|
2021-07-14 09:44:09
|
After
|
|
2021-07-14 09:44:29
|
d16
|
|
2021-07-14 09:45:03
|
this is an irritating change because it is quite a bit slower
|
|
2021-07-14 09:53:57
|
probably only to be activated for turtle
|
|
2021-07-14 09:54:17
|
possibly second slowest, too
|
|
|
improver
|
2021-07-14 10:00:18
|
it makes things look better though :<
|
|
|
Jyrki Alakuijala
|
2021-07-14 10:04:44
|
yes, I need to find something similar that is faster
|
|
|
diskorduser
|
2021-07-15 11:27:44
|
I have a jxl file. I made it with gimp. It doesn't open properly on gwenview, djxl and on squoosh.app but it works fine on gimp
|
|
2021-07-15 11:30:58
|
|
|
|
_wb_
|
2021-07-15 11:42:55
|
|
|
2021-07-15 11:42:56
|
weird
|
|
2021-07-15 11:43:22
|
looks like the pixel data is there, but the alpha is all zeroes
|
|
|
diskorduser
|
2021-07-15 11:43:58
|
I'm using git builds
|
|
|
_wb_
|
2021-07-15 11:44:22
|
perhaps the gimp plugin just didn't implement alpha properly, and ignores it when loading images and forgets to set it when saving them?
|
|
2021-07-15 11:45:07
|
these plugins and integrations really need some more love, it's kind of embarrassing if they mess up like that
|
|
|
Scope
|
2021-07-15 09:53:55
|
<https://forum.doom9.org/showpost.php?p=1947660&postcount=53>
> **benwaggoner**
> Any codec, video, audio, or still, needs to gracefully degrade when out of its sweet spot. The sheer variety of content out there is mind-boggling, and all the common image test libraries lack sufficient representation of important content categories.
>
> The original sin of JPEG is that it was only intended for continuous tone images like photographs, and a pure DCT implementation runs into severe issues with discreet tone images like text, line art, traditional GIF animations, and anything else where sharp edges are the most psychovisually important element.
>
> A great thing about HEIC as an image format is that a HEVC IDR can incorporate the best of JPEG and PNG style compression, using iDCT-like transforms for natural image areas, but switch to transform skip or even lossless CUs for discreet tone areas. That makes for much more graceful degradation for weird kinds of content, and finally made it possible to compress mixed continuous/discreet tone content efficiently and in high quality.
>
> The quality of a codec is reflected in both in where it operates best and where it operates worse.
|
|
|
_wb_
|
2021-07-15 10:02:53
|
HEVC and AV1 do have some PNG-like options but not quite as good as PNG, imo. JXL has stronger options.
|
|
|
190n
|
2021-07-15 10:03:56
|
also doesn't jpeg xl handily beat png in lossless mode?
|
|
|
improver
|
2021-07-15 10:04:35
|
jxl destroyes everything in lossless mode
|
|
|
BlueSwordM
|
|
190n
also doesn't jpeg xl handily beat png in lossless mode?
|
|
2021-07-15 10:11:54
|
Even worse. Here is how lossless stacks up for most encoders we use:
CJXL > CWebP >= aomenc > x265 > HM
|
|
2021-07-15 10:13:23
|
Decoding speed in lossless <:kekw:808717074305122316>
JXL > WebP >> AVIF >> HEVC
|
|
|
Pieter
|
|
improver
jxl destroyes everything in lossless mode
|
|
2021-07-15 10:16:39
|
is this typo intentional?
|
|
|
improver
|
|
Pieter
|
2021-07-15 10:18:50
|
is it a reference to a paper by Schnorr?
|
|
|
improver
|
2021-07-15 10:19:59
|
indeed. pretty pointless reference, but I like doing these
|
|
|
|
veluca
|
|
BlueSwordM
Decoding speed in lossless <:kekw:808717074305122316>
JXL > WebP >> AVIF >> HEVC
|
|
2021-07-16 12:57:00
|
JXL lossless decoding is not that fast (yet?)
|
|
|
|
paperboyo
|
|
Scope
<https://forum.doom9.org/showpost.php?p=1947660&postcount=53>
> **benwaggoner**
> Any codec, video, audio, or still, needs to gracefully degrade when out of its sweet spot. The sheer variety of content out there is mind-boggling, and all the common image test libraries lack sufficient representation of important content categories.
>
> The original sin of JPEG is that it was only intended for continuous tone images like photographs, and a pure DCT implementation runs into severe issues with discrete tone images like text, line art, traditional GIF animations, and anything else where sharp edges are the most psychovisually important element.
>
> A great thing about HEIC as an image format is that a HEVC IDR can incorporate the best of JPEG and PNG style compression, using iDCT-like transforms for natural image areas, but switch to transform skip or even lossless CUs for discrete tone areas. That makes for much more graceful degradation for weird kinds of content, and finally made it possible to compress mixed continuous/discrete tone content efficiently and in high quality.
>
> The quality of a codec is reflected both in where it operates best and in where it operates worst.
|
|
2021-07-16 08:58:03
|
Yeah, I think I sympathise and agree with the gist of it. This is also why I take a slight issue with the _tractors versus race cars_ analogy. These modern vehicles are advertised as being able to run on any terrain (quality 0–100, or whatever scale they use). Nobody, rightly, is giving any warnings that you should choose one modern vehicle over the other for this or that terrain (like we had to in the JPEG vs. PNG days). Myself, I don't have the luxury to get where I wanna go by driving exclusively over German autobahns (actually, two summers ago, I found Serbian ones are much more modern) or muddy tracks. I have to drive (and I enjoy driving) wherever I want or need to. So I'm utterly uninterested in a race car. Heck, I'm from Poland and good luck driving one even in the city. I much prefer a vehicle that will give me freedom and will perform well in any landscape. Tractors and Formula cars both look very cool, but they are tools of a narrow trade, and not mine. Mine is having to deal with what I've been dealt: any terrain. Any image.
|
|
|
_wb_
|
2021-07-16 09:07:32
|
Yes, I agree that a codec should aim at universal applicability and being pareto-optimal across the quality spectrum
|
|
2021-07-16 09:08:22
|
I don't think measuring performance in the low quality range of the spectrum is a good proxy for measuring performance in the entire spectrum though.
|
|
|
|
paperboyo
|
2021-07-16 09:08:31
|
It would be utterly sad if I had to use JXL for hero images and AVIF for tiny ones, because AVIF takes ages to show anything and JXL looks worse than WebP on small thumbnails.
|
|
|
_wb_
I don't think measuring performance in the low quality range of the spectrum is a good proxy for measuring performance in the entire spectrum though.
|
|
2021-07-16 09:10:56
|
> measuring performance in the low quality range of the spectrum is [not] a good proxy for measuring performance in the entire spectrum though
Yep, it isn't, agreed. My cheeky, naive, and slightly disrespectful (apologies, only to make a point!) view on that is: the codec's fault. It should adjust.
|
|
|
_wb_
|
2021-07-16 09:12:16
|
I think jxl has a bigger gain over previous codecs like jpeg/webp/avif in the medium-high end, but it is not worse than them in the low end, except if you go into the ridiculously low end, where lowering the resolution would probably be a significantly better approach than lowering the quality.
|
|
2021-07-16 09:14:06
|
And yes, encoders should "make the best of it" at that point; cjxl has been improving on it but there are still things to be done. For example, it should probably by default use --resampling=2 below a certain quality.
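Until something like that is a default, the heuristic can be approximated with a small wrapper. This is only a sketch: the distance threshold below is an arbitrary assumption, not anything cjxl actually uses.
```python
# Sketch of the "downsample at very low quality" heuristic described above.
import subprocess

def encode(src, dst, distance):
    args = ["cjxl", src, dst, "-d", str(distance)]
    if distance > 8.0:                    # assumed threshold for "ridiculously low" quality
        args.append("--resampling=2")     # encode at half resolution, upsample on decode
    subprocess.run(args, check=True)

encode("photo.png", "photo.jxl", distance=12)
```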
|
|
|
yurume
|
2021-07-17 11:04:19
|
thinking about this, a modification to preflate plus lossless jxl might be a good way to recompress PNGs
|
|
|
|
veluca
|
2021-07-17 01:16:48
|
it's technically not hard to do on top of jxl lossless, you can use stuff like https://github.com/google/grittibanzli for the lz77ing and a bit of extra metadata
|
|
2021-07-17 01:21:53
|
yeah, but before JXL was a thing I believe
|
|
|
yurume
|
2021-07-17 01:31:59
|
yes, reflate, preflate, precomp (which AFAIK uses preflate) and grittibanzli are all aimed at exact deflate stream reconstruction
|
|
2021-07-17 01:33:13
|
the thing is that normally such tools give you uncompressed data (say, raw RGB) plus reconstruction data, but the former can be replaced with lossless jxl for maximal usability
|
|
2021-07-17 01:33:28
|
since as long as you can get the raw RGB back, the exact format doesn't matter
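A toy illustration of why the reconstruction data is needed at all (this uses Python's zlib, not preflate itself): re-deflating the same raw bytes only reproduces the original stream if the exact compressor settings, and in general the exact implementation, are known, which is what preflate-style reconstruction data records.
```python
# Re-compressing identical raw data with different settings yields different
# deflate streams, so bit-exact PNG reconstruction has to record how the
# original stream was produced (or enough hints to re-derive it).
import zlib

raw = bytes(range(256)) * 1000
original = zlib.compress(raw, 6)

for level in (1, 6, 9):
    candidate = zlib.compress(raw, level)
    print(f"level {level}: {len(candidate)} bytes, identical: {candidate == original}")
```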
|
|
2021-07-17 01:34:04
|
I guess it's a good experiment to try tonight
|
|
|
|
veluca
|
2021-07-17 01:55:27
|
if you can fit all the non-RGB data into a box and send a PR to the JXL repo it might be interesting...
|
|
2021-07-17 01:56:36
|
(can't promise we'll integrate it immediately, or that it will go in the standard, but it could make a good starting point)
|
|
|
yurume
|
2021-07-17 02:22:24
|
oh, I realized that the JPEG1 recons mode of JXL is a whole separate project (Brunsli) merged into the spec with a completely different signature and internal format
|
|
|
Ringo
|
2021-07-17 02:24:02
|
it was
|
|
|
yurume
|
2021-07-17 02:24:07
|
I wondered why there is no arbitrary data section in the main spec while the recons mode is seemingly able to reconstruct JPEG comments
|
|
|
Ringo
|
2021-07-17 02:24:08
|
I don't think it applies anymore
|
|
|
yurume
|
2021-07-17 02:24:15
|
ah
|
|
2021-07-17 02:24:57
|
was there a big change since the committee draft? since there seems to be no other up-to-date spec available...
|
|
|
Ringo
|
2021-07-17 02:25:04
|
iirc the way JPEG1 reconstruction works is that it uses VarDCT mode, but only with 8x8 DCT blocks, and it puts a special "JPEG bitstream reconstruction data" box in the container, along with the JXL codestream
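For files that use the JXL container, the presence of that reconstruction box can be checked by walking the ISOBMFF-style box headers. A minimal sketch (it ignores 64-bit extended box sizes and assumes a well-formed file):
```python
# List the top-level boxes of a .jxl container file, e.g. to see whether a
# "jbrd" (JPEG bitstream reconstruction data) box is present.
import struct
import sys

def list_boxes(path):
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            print(box_type.decode("ascii", "replace"), size)
            if size == 0:          # box runs to the end of the file
                break
            f.seek(size - 8, 1)    # skip payload (64-bit sizes not handled here)

list_boxes(sys.argv[1])
```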
|
|
|
yurume
|
2021-07-17 02:25:36
|
that sounds sensible
|
|
|
Ringo
|
|
yurume
was there a big change since the committee draft? since there seems to be no other up-to-date spec available...
|
|
2021-07-17 02:26:11
|
yeah... that sucks
|
|
2021-07-17 02:26:38
|
from what I've heard, there are significant changes between the committee draft and the FDIS
|
|
|
_wb_
|
2021-07-17 04:18:51
|
Very significant changes, yes
|
|
2021-07-17 04:19:13
|
https://twitter.com/jonsneyers/status/1415696597585993732?s=19
|
|
|
|
veluca
|
|
Ringo
iirc the way JPEG1 reconstruction works is that it uses VarDCT mode, but only with 8x8 DCT blocks, and it puts a special "JPEG bitstream reconstruction data" box in the container, along with the JXL codestream
|
|
2021-07-17 04:19:22
|
yup, correct
|
|
|
yurume
|
2021-07-17 10:33:14
|
it took all night but my crappy Python script does seem to work:
```
$ time pypy3 ~/git/jxl-preflate/jxl-preflate.py sample-iccp.png
JPEG XL encoder v0.3.7 [AVX2,SSE4,Scalar]
Read 1761x2500 image, 23.0 MP/s
Encoding [Modular, lossless, squirrel], 1 threads.
Compressed to 2011602 bytes (3.655 bpp).
1761 x 2500, 0.43 MP/s [0.43, 0.43], 1 reps, 1 threads.
preflate v0.3.4
splitting /tmp/tmp89sblnr5 successful (3269337 -> 13210000 + 139892)
All ok
original data: size 13210000, hash a67185809d99d8a297ad3b6ba03fde3d4580647110729ebe7bad9f452c236ff9
original 3272114 -> jxl 2011602 + recons 142885 = compressed 2154487 (34.2% smaller)
real 0m30.425s
user 0m28.438s
sys 0m1.375s
$ time pypy3 ~/git/jxl-preflate/jxl-preflate.py sample-iccp.png.{jxl,recons}
JPEG XL decoder v0.3.7 [AVX2,SSE4,Scalar]
Read 2011602 compressed bytes.
Decoded to pixels.
1761 x 2500, 3.39 MP/s [3.39, 3.39], 1 reps, 1 threads.
Allocations: 606 (max bytes in use: 1.687282E+08)
reconstructed original data: size 13210000, hash a67185809d99d8a297ad3b6ba03fde3d4580647110729ebe7bad9f452c236ff9
preflate v0.3.4
recombining /tmp/tmpfpcvm1kw.x successful (13210000 + 139892 -> 3269337)
All ok
real 0m23.886s
user 0m22.766s
sys 0m0.703s
$ sha1sum sample-iccp.png{,.jxl.png}
a9a3eeb42d8836ceca810d9f660da53eef0506d3 sample-iccp.png
a9a3eeb42d8836ceca810d9f660da53eef0506d3 sample-iccp.png.jxl.png
```
|
|
2021-07-17 11:53:24
|
I've just finished adding Adam7 reconstruction (the only missing bit is probably indexed color) and am running a test
|
|
2021-07-17 11:54:06
|
I have a large image to test (30MB, 42.3MP) and jxl is slow as heck
|
|
2021-07-17 11:57:46
|
that's definitely a thing to tweak, but for now I want to check the compression ratio
|
|
2021-07-17 11:58:27
|
for some files, jxl + recons was 60% smaller than png, for example
|
|
2021-07-18 12:00:20
|
okay the large-scale test is working, `original 31279328 -> jxl 17504059 + recons 1490500 = compressed 18994559 (39.3% smaller)` (and took 6 mins lol)
|
|
2021-07-18 12:01:31
|
is `-s 4` significantly slower than `-s 3` and lower levels, btw? (for the lossless mode)
|
|
2021-07-18 12:03:26
|
my current test set is definitely biased
|
|
|
BlueSwordM
|
2021-07-18 12:38:16
|
s3 should be absurdly fast though, even lossless.
|
|
|
yurume
I have a large image to test (30MB, 42.3MP) and jxl is slow as heck
|
|
2021-07-18 12:39:36
|
1. Can you give the image to test?
2. What processor are you using for that test?
|
|
|
yurume
|
2021-07-18 12:51:43
|
https://gist.github.com/lifthrasiir/5c24058f21ce6fba231cf1bfba45bf28
|
|
2021-07-18 12:52:25
|
as you can see, no optimization at all (it does all PNG parsing in Python, PyPy strongly recommended), so consider it to be a proof of concept
|
|
2021-07-18 12:53:22
|
there are tons to do with non-preflate reconstruction data (e.g. iCCP can probably be entirely reconstructed from JXL?) but that's enough for me right now XD
|
|
|
BlueSwordM
1. Can you give the image to test?
2. What processor are you using for that test?
|
|
2021-07-18 01:04:17
|
I don't know what you mean by 1 (is _my own_ test set important here?), but for 2: preflate.
|
|
2021-07-18 01:30:00
|
for 1, sure (I had a weak reason for not wanting this, but yeah), but it seems that Discord only supports attachments up to 8 MB...
|
|
2021-07-18 01:31:12
|
the 30MB file in question was on pixiv, https://i.pximg.net/img-original/img/2018/10/22/20/20/34/71302422_p0.png (requires pixiv.net referer so I won't censor this here)
|
|
2021-07-18 01:31:48
|
the reason I didn't want to post it is that, while the image in question is okay, the artist's profile has nsfw images
|
|
2021-07-18 01:32:26
|
for 2: i7-7700 @ 3.60 GHz
|
|
2021-07-18 01:34:12
|
I actually think that the main culprit is preflate and not jxl; preflate doesn't seem configurable to me
|
|
2021-07-18 01:58:39
|
aaaand it turns out that I've mistakenly built preflate in debug mode. ;_;
|
|
2021-07-18 01:59:08
|
(preflate does not build on Linux out of the box, so I had tweaked some header files and forgot to switch them back...)
|
|
|
improver
|
|
yurume
the reason I didn't want to post it is that, while the image in question is okay, the artist's profile has nsfw images
|
|
2021-07-18 01:59:16
|
i have em followed already lol
|
|
|
yurume
|
2021-07-18 02:00:15
|
for that 30MB file compression now takes 74 secs and reconstruction takes 120 secs
|
|
2021-07-18 02:00:45
|
Python overhead exists but is effectively negligible, since preflate takes >60s
|
|
2021-07-18 02:01:21
|
time to feed a 102MP image...
|
|
2021-07-18 02:01:54
|
(for reference, this is https://commons.wikimedia.org/wiki/File:Home_in_Relation_to_Everything.png)
|
|
|
improver
|
2021-07-18 02:17:05
|
kinda unreal how one can just do full recompression with this
|
|
2021-07-18 02:17:31
|
now if we'd put it in a metadata header...
|
|
2021-07-18 02:17:56
|
or rather, an optional block
|
|
|
yurume
|
2021-07-18 02:18:13
|
if you don't model every bit of the headers it didn't seem too difficult, so I gave it a try
|
|
|
BlueSwordM
|
2021-07-18 02:23:56
|
Is PNG recompression even useful if you keep the metadata?
|
|
|
yurume
|
|
BlueSwordM
Is PNG recompression even useful if you keep the metadata?
|
|
2021-07-18 02:30:18
|
normally no (not even JPEG), but you might want that for archival purposes
|
|
2021-07-18 02:31:13
|
btw my 102MP test (-s 4) resulted in:
```
original 79390558 -> jxl 46407388 + recons 4491215 = compressed 50898603 (35.9% smaller)
real 6m43.237s
[snip for decoding]
(jxl decoder finished in 89.5 secs.)
(preflate reconstruction finished in 240.3 secs.)
real 7m22.926s
```
|
|
2021-07-18 02:31:59
|
the problem is, apparently preflate should run 8 worker tasks on my computer but it doesn't seem to
|
|
2021-07-18 02:40:33
|
https://encode.su/threads/3223-precomp-further-compress-already-compressed-files?p=70019&viewfull=1#post70019 seems that it's already on the way
|
|
2021-07-18 02:40:51
|
Ubuntu on WSL1 :p
|
|
2021-07-18 03:58:34
|
it does try to use `std::future`, so it might be a C++ library issue
|
|
2021-07-18 04:09:15
|
(why have I written `std::worker`, I should sleep)
|
|
|
_wb_
|
2021-07-18 05:41:28
|
Cool stuff, promising PoC. If it can be refined a bit to do something specific for the chunks that are already described in other ways (icc, xmp, exif), then we basically have a `pbrd` (png bitstream reconstruction data) box that could be added to the part 2 spec in a future edition.
|
|
2021-07-18 05:44:57
|
Why would it use all threads? Software is single-threaded by default unless you figure out ways to parallelize it.
|
|
|
BlueSwordM
|
2021-07-18 05:53:51
|
So, about compression: since Brotli was added as another optional dependency, does it currently do anything in cjxl?
|
|
|
_wb_
|
2021-07-18 05:58:02
|
Brotli is only used for jbrd
|
|
2021-07-18 05:59:18
|
We should also implement it for exif/xmp compression (`brob` versions of those boxes in the file format, as specified in part 2)
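A sketch of what that could look like, assuming (as I read part 2) that a `brob` box stores the wrapped box's type followed by the Brotli-compressed contents of that box; the exact layout should be verified against the spec before relying on it.
```python
# Sketch: wrap an Exif payload in a Brotli-compressed "brob" box.
# Assumed layout: [size][b"brob"][original box type][brotli-compressed payload].
import struct
import brotli  # pip install Brotli

def brob_box(inner_type: bytes, inner_payload: bytes) -> bytes:
    body = inner_type + brotli.compress(inner_payload)
    return struct.pack(">I", 8 + len(body)) + b"brob" + body

exif_payload = b"\x00" * 64   # placeholder bytes, not a real Exif payload
box = brob_box(b"Exif", exif_payload)
print(len(box), box[:12])
```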
|
|
|
BlueSwordM
|
2021-07-18 05:59:29
|
What is jrbd?
|
|
|
_wb_
|
2021-07-18 06:00:53
|
JPEG bitstream reconstruction data
|
|
|
BlueSwordM
|
2021-07-18 06:01:48
|
Oh yeah
|
|
2021-07-18 06:01:57
|
I should use that acronym more often.
|
|
|
_wb_
|
2021-07-18 06:09:27
|
It's the fourcc code of the ISOBMFF box that stores it
|
|
|
Pieter
|
2021-07-18 06:12:34
|
create a jxl with both jbrd and pbrd records, confuse everything?
|
|
|
_wb_
|
2021-07-18 06:29:20
|
If you have a jpeg and converted it to png, in principle it could make sense to have both jbrd and pbrd
|
|
|
yurume
|
|
_wb_
We should also implement it for exif/xmp compression (`brob` versions of those boxes in the file format, as specified in part 2)
|
|
2021-07-18 08:24:38
|
yes, that was why I chose brotli (it can technically be replaced, but its 2nd-order model is apt for reconstruction data)
|
|
2021-07-18 08:25:24
|
unfortunately the brotli preset dictionary doesn't contain IDAT or any other PNG chunk names, though
|
|
|
_wb_
|
2021-07-18 08:36:48
|
That we can fix by defining common chunk names and crc computation in the pbrd spec
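The CRC part at least is straightforward: PNG chunk CRCs are CRC-32 over the chunk type plus data, which `zlib.crc32` already computes, so a pbrd spec can recompute them instead of storing them. A small sketch:
```python
# Rebuild a PNG chunk (length + type + data + CRC) from its type and data.
import struct
import zlib

def png_chunk(chunk_type: bytes, data: bytes) -> bytes:
    crc = zlib.crc32(chunk_type + data) & 0xFFFFFFFF
    return struct.pack(">I", len(data)) + chunk_type + data + struct.pack(">I", crc)

# The canonical IEND chunk: 00000000 49454e44 ae426082
print(png_chunk(b"IEND", b"").hex())
```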
|
|
|
yurume
|
2021-07-18 09:06:34
|
agreed
|
|
2021-07-18 09:07:13
|
for standardization, the hardest bit will be specifying preflate
|
|
|
fab
|
2021-07-18 09:25:50
|
will it be slower than s 9 q 100 e 3?
|
|
2021-07-18 09:25:59
|
is jxl to png
|
|
2021-07-18 09:26:02
|
or png to jxl
|
|
2021-07-18 09:26:05
|
what's the purpose
|
|
|
yurume
|
2021-07-18 09:26:40
|
the ability to reconstruct a bit-perfect JPEG/PNG is useful from time to time
|
|
|
fab
|
2021-07-18 09:27:17
|
ah it doesn't improve lossless compression
|
|
2021-07-18 09:27:22
|
like png to jxl
|
|
|
yurume
|
2021-07-18 09:28:22
|
as precomp demonstrates, this ability has some merit, but the resulting "compressed" (low-entropy) data is not readily usable as an image format
|
|
2021-07-18 09:29:23
|
so it could only function as a preprocessor for compression formats and not by itself
|
|
2021-07-18 09:30:11
|
the JXL + reconstruction data approach allows both uses (you can always go back to the original data, while the recompressed file remains usable as an image as is)
|
|
|
fab
|
2021-07-18 09:30:45
|
ah, so it inflates the jxl lossless modular file size
|
|
2021-07-18 09:30:51
|
for no reason
|
|
2021-07-18 09:31:04
|
it's useful only for developers to test compression
|
|
2021-07-18 09:32:37
|
how much?
|
|
2021-07-18 09:32:45
|
a few kB, or even 1 MB?
|
|
|
yurume
|
2021-07-18 09:33:52
|
Fabian probably meant that:
i) since JPEG1 normally suffers generation loss but PNG doesn't, this has less merit (I agree)
ii) the addition of PNG reconstruction data does increase the file size of JXL files converted from PNG
|
|
|
fab
|
2021-07-18 09:33:59
|
and how much does the jxl weigh?
|
|