|
raysar
|
|
lithium
So if the CPU supports AVX2, the JPEG XL encoder v0.3.7 [AVX2] build is great,
and if the CPU doesn't support AVX2, will it need the SSE4 and Scalar optimizations?
|
|
2021-06-04 05:18:56
|
Yes, but the generic build also supports AVX2; we are talking here about the extra AVX2-specific optimisation.
|
|
|
Pieter
|
|
_wb_
That is, avx2 is a superset of sse4
|
|
2021-06-04 05:20:43
|
I'm pretty sure there are no CPUs that have AVX2 but no SSE4, but I'm also not sure that compilers assume that AVX2 implies SSE4.
|
|
|
lithium
|
2021-06-04 05:23:12
|
OK, I understand, thanks everyone.
I still feel a bit confused about the build...
|
|
|
_wb_
|
2021-06-04 05:26:34
|
What the version string shows are the runtime-dispatchable code paths it contains.
|
|
2021-06-04 05:27:06
|
If it says Scalar, then basically you can run it on any ancient cpu that doesn't have any simd at all
|
|
2021-06-04 05:27:57
|
If it says e.g. sse4 and scalar, then it can run on an ancient cpu, but when running on a more modern cpu, it can also use sse4
|
|
2021-06-04 05:28:51
|
If it says only avx2, then you cannot run it on old cpus, because only the avx2 code has been included and can be dispatched
|
|
2021-06-04 05:30:16
|
If you're compiling just for your own machine, then it's fine if it says just one thing (as long as that thing is the best your cpu can do)
|
|
2021-06-04 05:31:56
|
If you're compiling for a binary distribution, you should make sure to have all SIMD versions, so everyone will get the fastest codepath that their cpu can do
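To make that concrete, runtime dispatch boils down to something like this conceptual C++ sketch (an illustration only, not libjxl's actual mechanism, which uses the Highway library):
```cpp
#include <cstdio>

// One implementation per instruction set; in a real build these would be
// compiled with different target flags and contain actual SIMD code.
static void EncodeScalar() { std::puts("scalar path"); }
static void EncodeSSE4()   { std::puts("sse4 path"); }
static void EncodeAVX2()   { std::puts("avx2 path"); }

using EncodeFn = void (*)();

// Query the CPU once at runtime (GCC/Clang builtins) and return the fastest
// code path that was compiled in.
static EncodeFn ChooseEncode() {
  __builtin_cpu_init();
  if (__builtin_cpu_supports("avx2")) return EncodeAVX2;
  if (__builtin_cpu_supports("sse4.2")) return EncodeSSE4;
  return EncodeScalar;
}

int main() {
  ChooseEncode()();  // dispatch to the best path for this machine
}
```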
|
|
|
lithium
|
2021-06-04 05:33:59
|
Maybe we'll have an OpenCL version of libjxl in the future.
|
|
2021-06-04 05:36:29
|
Thank you for teaching me about SIMD.
|
|
|
raysar
|
2021-06-04 07:14:00
|
Yes, it half works.
It creates an unusable build <:kekw:808717074305122316>
```
git clone https://github.com/libjxl/libjxl.git --recursive
cd libjxl
mkdir build && cd build
export BUILD_TARGET=x86_64-w64-mingw32 CC=clang-7 CXX=clang++-7
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-O3 -march=haswell" -DCMAKE_C_FLAGS="-O3 -march=haswell" -DJPEGXL_ENABLE_SJPEG=ON -DJPEGXL_STATIC=1 -DBUILD_TESTING=OFF -DJPEGXL_WARNINGS_AS_ERRORS=OFF -DJPEGXL_ENABLE_OPENEXR=OFF ..
cmake --build . -- -j$(nproc)
```
|
|
|
fab
|
|
raysar
Yes, it half works.
It creates an unusable build <:kekw:808717074305122316>
```
git clone https://github.com/libjxl/libjxl.git --recursive
cd libjxl
mkdir build && cd build
export BUILD_TARGET=x86_64-w64-mingw32 CC=clang-7 CXX=clang++-7
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-O3 -march=haswell" -DCMAKE_C_FLAGS="-O3 -march=haswell" -DJPEGXL_ENABLE_SJPEG=ON -DJPEGXL_STATIC=1 -DBUILD_TESTING=OFF -DJPEGXL_WARNINGS_AS_ERRORS=OFF -DJPEGXL_ENABLE_OPENEXR=OFF ..
cmake --build . -- -j$(nproc)
```
|
|
2021-06-04 07:36:46
|
It's not an exe.
|
|
|
Nova Aurora
|
|
fab
It's not an exe.
|
|
2021-06-04 07:38:23
|
I think it's an ELF (linux) binary.
|
|
|
raysar
|
2021-06-04 11:43:47
|
So, I made static **Windows** binaries from the **main**, **0.4** and **0.3.7** branches using Docker on Windows 10:
I will also do it for all the versions <:Hypers:808826266060193874>
https://1drv.ms/u/s!Aui4LBt66-MmixqNVKi4mER1-P6o?e=e1agA1
|
|
2021-06-05 12:28:07
|
I have one test failure when I compile 0.3.5, 0.3.4 and 0.3.3:
```
The following tests FAILED:
1102 - PatchDictionaryTest.GrayscaleVarDCT (Failed)
```
|
|
|
|
veluca
|
2021-06-05 07:18:32
|
probably yes ๐
|
|
|
raysar
|
2021-06-05 01:20:32
|
I always see the 0.3.7 text when I execute the main branch and 0.4.x builds; I think we should update that. Can I push a commit to change it? XD
`J P E G \/ |
/\ |_ e n c o d e r [v0.3.7 | SIMD supported: SSE4,Scalar]`
|
|
|
|
veluca
|
2021-06-05 01:22:29
|
we're working on getting that sorted out ๐ we've been a bit busy, but we want to get fuzzing of 0.4.x properly done before we tag it... anyway, if you want to make a PR, assign it to deymo and he'll figure it out
|
|
|
raysar
|
2021-06-05 01:24:54
|
You can write RC for release candidate :p
`J P E G \/ |
/\ |_ e n c o d e r [v0.4.0 RC | SIMD supported: SSE4,Scalar]`
|
|
|
_wb_
|
2021-06-05 01:24:57
|
might be useful to put the commit hash in the version string, at least if it's not a release
|
|
2021-06-05 01:25:08
|
no idea how to do that though
|
|
2021-06-05 01:25:22
|
my git-fu is too weak for that
|
|
|
|
veluca
|
2021-06-05 01:25:41
|
it's a bit of a nightmare IIRC, it needs some cooperation from cmake
|
|
|
raysar
|
2021-06-05 01:27:49
|
Isn't this text manually configured in the C code?
|
|
|
_wb_
|
2021-06-05 01:28:41
|
the version number itself is manually bumped up
|
|
2021-06-05 01:31:36
|
I suppose the cmake script could set some compile flag -DCOMMITHASH="blabla" which is then used in the code that produces the version string
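Roughly like this, as a sketch (the macro name and the CMake wiring here are hypothetical):
```cpp
#include <cstdio>

// Hypothetical wiring: the CMake script could run `git describe` (or
// `git rev-parse --short HEAD`) with execute_process() and pass the result
// in as a compile definition, e.g. -DJPEGXL_COMMIT_SUFFIX="\"-125-g9a3fbdd\"",
// only for non-release builds.
#ifndef JPEGXL_COMMIT_SUFFIX
#define JPEGXL_COMMIT_SUFFIX ""  // releases: nothing appended
#endif

int main() {
  std::printf("[v0.3.7%s | SIMD supported: AVX2]\n", JPEGXL_COMMIT_SUFFIX);
}
```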
|
|
|
fab
|
2021-06-06 04:14:22
|
when newer version of jpeg xl
|
|
|
Scientia
|
|
fab
when newer version of jpeg xl
|
|
2021-06-06 05:22:31
|
Soon™
|
|
|
fab
|
2021-06-06 05:26:34
|
now jpeg xl at d 1 s9 is perfect since 0.3.7-67 or 0.3.7-79
|
|
2021-06-06 05:26:49
|
but still i need more compression
|
|
2021-06-06 05:27:02
|
for screenshot i think is already good
|
|
|
Jyrki Alakuijala
|
|
_wb_
Both are based on LMS and aim to model the human visual system.
|
|
2021-06-07 08:06:04
|
I suspect that some of this depends on how the LMS parameters of colorspace homogeneity were obtained. The usual recipe is CIE 1931 style, where homogeneity is obtained by measuring the just-noticeable difference of 2 degree (originally 10 degree) slabs of color. I used smaller 'slabs', i.e., pixels. I kept the size at around 0.02 degrees, leading to different results than 2 degree slabs. This is because vision integrates some S signals into luma at lower resolutions, but not at the highest. At the highest resolution, luma is just L+M.
|
|
2021-06-07 08:07:09
|
That hypothesis (which was a starting point of butteraugli, guetzli and pik) was strongly supported by the anatomy of the eye (and the respective Nobel Prize in 1967), where no S receptors are observed in the fovea.
|
|
|
lithium
Like comparing PSNR to butteraugli?
Say I have a white wall with a black point on it.
With PSNR and a low p-norm, if the black point is gone,
they don't mind this issue much; the white wall staying white is fine.
But max butteraugli and a high p-norm will mind this issue a lot.
|
|
2021-06-07 06:03:45
|
PSNR is the 2nd norm -- it could also be made better by using higher norms for higher qualities
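As a sketch of what the norm choice does (a hypothetical helper, not an actual metric implementation):
```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// L^p norm of the per-pixel error. p = 2 behaves like RMSE/PSNR; a higher p
// is dominated by the worst pixels (the lost black dot on the white wall),
// and p -> infinity approaches the maximum error.
double ErrorPNorm(const std::vector<double>& orig,
                  const std::vector<double>& dist, double p) {
  double sum = 0.0;
  for (std::size_t i = 0; i < orig.size(); ++i) {
    sum += std::pow(std::abs(orig[i] - dist[i]), p);
  }
  return std::pow(sum / orig.size(), 1.0 / p);
}
```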
|
|
|
BlueSwordM
|
2021-06-08 02:55:18
|
So, are there any large commits in the pipeline for CJXL in terms of encoding performance/speed/bug fixing?
I'm doing a big image comparison for an article, and I would like to focus on the latest and best builds to make it fair for the encoders being tested.
|
|
2021-06-08 02:56:35
|
These are already in, so that's not an issue.
|
|
2021-06-08 02:57:26
|
Besides, in most scenarios below S5-S6, CJXL absolutely obliterates every AV1 encoder in terms of CPU time, so S1/S2 aren't really needed.
|
|
|
_wb_
|
2021-06-08 02:59:59
|
I wouldn't wait for a specific improvement - you can keep waiting, there are enough ideas but not enough time ๐
|
|
2021-06-08 03:00:56
|
Fairest thing to do is just take a snapshot of the latest encoder on a specific date and compare with that, knowing that there can always be improvements later but you can't keep waiting for the ultimate encoders
|
|
|
BlueSwordM
|
|
_wb_
Fairest thing to do is just take a snapshot of the latest encoder on a specific date and compare with that, knowing that there can always be improvements later but you can't keep waiting for the ultimate encoders
|
|
2021-06-08 03:06:18
|
Will an improvement to noise synthesis be introduced in cjxl soon, like you said in #image on the AV1 server?
`Yes, there are changes planned for it. Apparently they got the luma-correlation not quite correct in the current encoder.`
|
|
|
_wb_
|
2021-06-08 03:10:25
|
No idea what the timeline is on that, it's a part of the code I'm not very familiar with, and it requires manual viewing because noise generation is not something you can test/validate with metrics...
|
|
|
BlueSwordM
|
2021-06-08 03:12:50
|
I see. Thanks for the detailed answers ๐
|
|
|
fab
|
2021-06-08 03:16:04
|
when 0.3.8 tag on github
|
|
2021-06-08 03:16:52
|
if it doesn't perform well with screenshots as a basis, maybe people won't like it
|
|
2021-06-08 05:05:52
|
when 0.3.8
|
|
|
raysar
|
|
fab
when 0.3.8
|
|
2021-06-08 05:19:16
|
You don't need version numbers.
Do you know what the "question mark" is?
|
|
|
|
Deleted User
|
2021-06-08 10:47:34
|
Are JXL animations in any way more powerful than animated PNG/WebP? Or why is it so hard for tools to support that if they can already do APNG and static JXL?
|
|
|
Scientia
|
2021-06-08 10:49:22
|
I think they're better than apng, mjpeg, gif
|
|
2021-06-08 10:49:32
|
But I don't know how webp compares
|
|
2021-06-08 10:49:42
|
Honestly you'd be better off using a video
|
|
|
Jim
|
2021-06-09 01:41:27
|
<:This:805404376658739230>
|
|
2021-06-09 01:42:19
|
Even the AVIF devs have said animations are likely on the future chopping block. That's really what a video is for.
|
|
|
diskorduser
|
2021-06-09 02:14:28
|
What about transparent animations like stickers in messaging apps / websites? Videos don't support alpha.
|
|
|
BlueSwordM
|
|
diskorduser
What about transparent animations like stickers in messaging apps / websites? Videos don't support alpha.
|
|
2021-06-09 02:17:36
|
Most modern video codecs do support alpha.
|
|
2021-06-09 02:17:49
|
You just need the right container support.
|
|
|
Pieter
|
2021-06-09 10:24:45
|
Question from a friend (who has worked on Opus with some of the currently AV1 people):
> Do you know what DCT transforms JPEG-XL is using? -- back before AV1 was standardized, Tim and Nathan made two really good
> families of integer DCT transforms. The first was built out of lifting steps (effectively a Feistel structure) that made them
> unconditionally perfectly invertible. But the Google hardware people didn't like that the serial dependencies meant they couldn't do the 4x4
> in one clock cycle. So they made another non-lifting family that lost perfect recovery but had extremely short critical paths. Both were much better
> approximations of the DCT than the vp9 transforms (and most other integer transforms) and were faster in software too -- unfortunately
> they didn't get used in av1 because of hardware schedules, since the vp9-based transforms were already done. I was lamenting to Tim
> that it's unfortunate that other things aren't using them, and I wondered what jpeg-xl was using.
|
|
|
|
veluca
|
2021-06-09 10:31:44
|
just float DCTs
|
|
2021-06-09 10:31:45
|
๐
|
|
2021-06-09 10:32:28
|
(bet you the hardware guys would *love* that...)
|
|
|
Crixis
|
2021-06-09 10:33:41
|
2089 --> first good jxl hardware implementation from an MIT student
|
|
|
|
veluca
|
2021-06-09 10:36:26
|
I suspect a 16-bit fixed-point DCT implementation would be a good enough approximation
|
|
2021-06-09 10:36:31
|
but I don't *know* that
|
|
2021-06-09 10:36:43
|
maybe 20 bit if you go the hw way
|
|
|
_wb_
|
|
Pieter
Question from a friend (who has worked on Opus with some of the currently AV1 people):
> Do you know what DCT transforms JPEG-XL is using? -- back before AV1 was standardized, Tim and Nathan made two really good
> families of integer DCT transforms. The first was built out of lifting steps (effectively a Feistel structure) that made them
> unconditionally perfectly invertible. But the Google hardware people didn't like that the serial dependencies meant they couldn't do the 4x4
> in one clock cycle. So they made another non-lifting family that lost perfect recovery but had extremely short critical paths. Both were much better
> approximations of the DCT than the vp9 transforms (and most other integer transforms) and were faster in software too -- unfortunately
> they didn't get used in av1 because of hardware schedules, since the vp9-based transforms were already done. I was lamenting to Tim
> that it's unfortunate that other things aren't using them, and I wondered what jpeg-xl was using.
|
|
2021-06-10 05:15:03
|
VarDCT is just the normal DCT (that's why it can also do jpeg recompression: the "Var" can also be "Const" dct8x8).
In Modular mode there is the Squeeze transform, which is an integer, perfectly reversible transform, but it's not really DCT-like; it's a modified Haar transform with a nonlinearity that makes it do something different in locally monotonic regions than in locally non-monotonic regions.
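Roughly the flavour of the lifting step, as a sketch (this is only the plain reversible average/difference pair; the actual Squeeze transform additionally predicts the residual from neighbours with that nonlinearity):
```cpp
#include <utility>

// Forward step: (a, b) -> (average, residual). Integer, lossless.
std::pair<int, int> SqueezeStep(int a, int b) {
  int avg = (a + b) >> 1;  // floor of the mean
  int residual = a - b;    // carries the "HF" information
  return {avg, residual};
}

// Inverse step: reconstructs (a, b) exactly (assumes arithmetic right shift).
std::pair<int, int> UnsqueezeStep(int avg, int residual) {
  int a = avg + ((residual + 1) >> 1);  // avg + ceil(residual / 2)
  int b = a - residual;
  return {a, b};
}
```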
|
|
|
OkyDooky
|
2021-06-11 06:48:29
|
I like to think of it as Haar-transform with nonlinear prediction of the "HF coefficients" to deal with gradients.
|
|
|
190n
|
2021-06-11 06:51:11
|
is this a webhook?
|
|
2021-06-11 07:10:51
|
ah interesting
|
|
|
_wb_
|
2021-06-11 08:20:19
|
yes, this channel is also available on Matrix
|
|
|
I like to think of it as Haar-transform with nonlinear prediction of the "HF coefficients" to deal with gradients.
|
|
2021-06-11 08:22:44
|
Yes, that's a good way to think of it - the prediction happens at the level of the transform itself though, because it cannot really be expressed at the level of the residual plane (the "HF coefficients")
|
|
2021-06-11 08:23:45
|
but the residuals are indeed 'already predicted' which is why the Zero predictor works best on them
|
|
|
fab
|
2021-06-12 11:12:34
|
jxl is compressing much even at -d 0.382 -s 4 --use_new_heuristics
|
|
2021-06-12 11:12:44
|
with v0.3.7-125-g9a3fbdd
|
|
2021-06-12 12:31:09
|
I'm doing `for %i in (D:\Images\fotocamera\*.JPG) do cjxl -j -d 0.123 --faster_decoding=5 -s 7 --intensity_target=259 %i %i.jxl`
|
|
2021-06-12 12:31:16
|
let's see how it goes
|
|
2021-06-12 12:31:35
|
to see if it's any better than lossless jpg recompression
|
|
2021-06-12 12:31:52
|
https://github.com/saschanaz/jxl-winthumb
|
|
2021-06-12 12:32:04
|
https://twitter.com/jonsneyers/status/1403679160502034435
|
|
2021-06-12 12:32:15
|
https://eddiezato.github.io/bin/
|
|
2021-06-12 12:32:20
|
https://github.com/libjxl/libjxl/commits/main
|
|
2021-06-12 12:47:30
|
it doesn't compress my jpegs
|
|
2021-06-12 12:55:45
|
and at -d 0.339 it can still improve
|
|
|
|
Deleted User
|
|
fab
I'm doing `for %i in (D:\Images\fotocamera\*.JPG) do cjxl -j -d 0.123 --faster_decoding=5 -s 7 --intensity_target=259 %i %i.jxl`
|
|
2021-06-12 12:58:49
|
You forgot to add --epf=2 -p --gaborish=1 -X 99 --patches=0 -P 6 --dots=1 -Y 0 --use_new_heuristics --abracadabra
|
|
|
fab
|
2021-06-12 01:00:18
|
At the moment I don't see how JPEG XL is a production encoder.
|
|
2021-06-12 01:03:14
|
with this build only
|
|
2021-06-12 01:03:15
|
-d 1.64 -s 5 --epf=1
|
|
2021-06-12 01:03:18
|
should be decent
|
|
2021-06-12 01:03:37
|
but still not ready for use
|
|
2021-06-12 01:04:05
|
but I shouldn't say that, because reviewing encoders is not my job
|
|
2021-06-12 01:04:12
|
there are devs paid for that
|
|
|
_wb_
|
2021-06-12 03:57:09
|
Decoding jpeg to re-encode them at -d 0.123 is likely a bad idea, unless those JPEGs are quality 99 or 100
|
|
|
fab
|
2021-06-12 04:12:20
|
I don't know, the encoder seems pretty good; even AVIF seems pretty natural now
|
|
2021-06-12 04:12:27
|
in the 32nd commit
|
|
2021-06-12 04:12:37
|
but probably there is a need to evolve
|
|
2021-06-12 04:13:03
|
I didn't convert the images
|
|
2021-06-12 04:13:30
|
it depends also on the camera
|
|
2021-06-13 04:43:03
|
Aren't you working on JPEG XL today?
|
|
|
_wb_
|
2021-06-13 05:18:04
|
Most/all of us don't (or are not supposed to) work during the weekend.
|
|
|
Jyrki Alakuijala
|
|
Pieter
Question from a friend (who has worked on Opus with some of the currently AV1 people):
> Do you know what DCT transforms JPEG-XL is using? -- back before AV1 was standardized, Tim and Nathan made two really good
> families of integer DCT transforms. The first was built out of lifting steps (effectively a Feistel structure) that made them
> unconditionally perfectly invertible. But the Google hardware people didn't like that the serial dependencies meant they couldn't do the 4x4
> in one clock cycle. So they made another non-lifting family that lost perfect recovery but had extremely short critical paths. Both were much better
> approximations of the DCT than the vp9 transforms (and most other integer transforms) and were faster in software too -- unfortunately
> they didn't get used in av1 because of hardware schedules, since the vp9-based transforms were already done. I was lamenting to Tim
> that it's unfortunate that other things aren't using them, and I wondered what jpeg-xl was using.
|
|
2021-06-13 06:06:31
|
Not exactly an answer to the question, but I tried experimentally optimizing the coefficients in a butterfly implementation of a DCT to something other than what they are. That produced worse results, as expected. The DCT in jpeg xl is DCT-II, aka 'the DCT', the most basic solution there is.
|
|
2021-06-13 06:07:17
|
We experimented with other integral transforms, rotated dcts, log gabor, etc. etc.
|
|
2021-06-13 06:08:44
|
There is only one small reflection left of those efforts, and it is the AFV transform. In AFV, we code some 3 corner pixels of the 8x8 block separately from the rest. There are four variations, 0, 1, 2 and 3 -- one for each corner of the 8x8 square.
|
|
2021-06-13 06:09:26
|
it is not exactly the most beautiful mathematical creation ever, but it reduces artefacts and gives about 0.5-1.5 % more density
|
|
2021-06-13 06:13:06
|
It was based on the observation that oblique occlusion boundaries reasonably often cause only 1-3 pixels to touch another block. When that happens, every dct coefficient needs to participate in building that corner -> a lot of entropy for not much result. Reasonable heuristics would then zero out that block, and such borders would look jagged.
|
|
2021-06-14 12:57:04
|
... I'm finally giving the reversal of the decision tree for ac strategy a go
|
|
2021-06-14 12:57:32
|
I expect 3 % quality improvements when it is ready, but currently I'm at -10 % ๐
|
|
2021-06-14 12:59:20
|
I hope to mitigate/solve the long-standing problem that <@!461421345302118401> has reported (as well as others after him)
|
|
|
|
vigilant_seal
|
2021-06-14 01:01:52
|
Hello everyone, I've encountered an issue when converting one of my jpgs to jxl using libjxl. It seems to be related to the color space of the original jpg. When I first convert the image to rgb, I can successfully convert to jxl. Is this a known issue / something not yet implemented? I could not find any references on github.
```
cjxl test.jpg test.jxl
J P E G \/ |
/\ |_ e n c o d e r [v0.3.7 | SIMD supported: AVX2]
SRC_PATH/src/libjxl/src/libjxl/lib/jxl/jpeg/enc_jpeg_data.cc:311: JXL_FAILURE: Cannot recompress JPEGs with neither 1 nor 3 channels
SRC_PATH/src/libjxl/src/libjxl/lib/extras/codec_exr.cc:155: JXL_FAILURE: OpenEXR failed to parse input
SRC_PATH/src/libjxl/src/libjxl/lib/extras/codec.cc:137: JXL_FAILURE: Codecs failed to decode
SRC_PATH/src/libjxl/src/libjxl/lib/extras/codec.cc:150: JXL_RETURN_IF_ERROR code=1: SetFromBytes(Span<const uint8_t>(encoded), io, pool, orig_codec)
Failed to read image test.jpg.
SRC_PATH/src/libjxl/src/libjxl/tools/cjxl_main.cc:67: JXL_RETURN_IF_ERROR code=1: LoadAll(args, &pool, &io, &decode_mps)
```
```
mediainfo test.jpg | grep Color\ space
Color space : YUVK
```
|
|
|
_wb_
|
2021-06-14 01:02:47
|
That's a CMYK jpeg. JXL does not support lossless recompression for CMYK jpegs.
|
|
2021-06-14 01:04:11
|
(it's not just "not implemented". It's a bitstream limitation: VarDCT is designed for 3 channels, so the bitstream cannot represent 4 channel JPEGs)
|
|
2021-06-14 01:05:34
|
we could recompress 3 channels (e.g. CMY) with VarDCT and decode the 4th channel to pixels and re-encode those losslessly, but we cannot do something that can give you back the original JPEG file.
|
|
2021-06-14 01:08:53
|
JPEG always uses DCT, so yes.
|
|
2021-06-14 01:09:12
|
In JPEG, channels are completely independent from one another.
|
|
|
|
vigilant_seal
|
2021-06-14 01:09:12
|
I see. Because when I first used ImageMagick to convert the file (`convert image.jpg image.jxl`), it did not give an error, but it resulted in a 'negative' image.
|
|
|
_wb_
|
2021-06-14 01:11:10
|
In JXL's VarDCT, the channels are linked in a way: they share the block type selection, there's chroma-from-luma, etc. Designing it specifically for 3 channels was a choice to allow better compression, but it's also a limit. Luckily you can add arbitrarily many "extra channels", just not with VarDCT.
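As a hypothetical illustration of the chroma-from-luma part (the factor and its signalling are made up here):
```cpp
// Instead of coding a chroma value directly, code its residual after
// subtracting a scaled copy of the co-located luma value; the residual is
// small wherever chroma tracks luma, which is what saves bits.
float CflResidual(float chroma, float luma, float cfl_factor) {
  return chroma - cfl_factor * luma;
}

float CflReconstruct(float residual, float luma, float cfl_factor) {
  return residual + cfl_factor * luma;  // exact inverse of the prediction
}
```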
|
|
|
|
vigilant_seal
|
2021-06-14 01:11:25
|
Thanks for the warning, but I'm not looking for lossless recompression for my usecase, so it's fine.
|
|
|
_wb_
|
2021-06-14 01:12:00
|
ImageMagick is probably encoding the CMY only, dropping the K (we don't have an encode/decode api for the black channel yet)
|
|
|
|
vigilant_seal
|
2021-06-14 01:12:26
|
Yes I know, that was just a test case to find the source of the issue.
|
|
2021-06-14 01:13:04
|
So, should I add '-q x'?
|
|
|
_wb_
|
2021-06-14 01:13:15
|
and likely there is no color management happening in the CMYK case, so you basically see CMY interpreted as RGB, which basically inverts the image (since Cyan is "lack of R", Magenta is "lack of G" and Yellow is "lack of blue")
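i.e. something like this naive sketch, with no profiles involved at all:
```cpp
// Naive CMY -> RGB: each CMY value is the "lack of" the corresponding RGB
// primary, so showing raw CMY bytes as RGB displays the inverse image.
struct RGB8 { unsigned char r, g, b; };

RGB8 NaiveCmyToRgb(unsigned char c, unsigned char m, unsigned char y) {
  return RGB8{static_cast<unsigned char>(255 - c),
              static_cast<unsigned char>(255 - m),
              static_cast<unsigned char>(255 - y)};
}
```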
|
|
2021-06-14 01:14:16
|
currently best thing to do is to convert the CMYK first to wide-gamut 16-bit RGB (e.g. ProPhoto), then take that as the input image for cjxl
|
|
|
|
vigilant_seal
|
2021-06-14 01:14:29
|
```
cjxl -j test.jpg test.jxl
J P E G \/ |
/\ |_ e n c o d e r [v0.3.7 | SIMD supported: AVX2]
SRC_PATH/src/libjxl/src/libjxl/lib/extras/codec_jpg.cc:319: JXL_FAILURE: unsupported number of components (0) in JPEG
SRC_PATH/src/libjxl/src/libjxl/lib/extras/codec_exr.cc:155: JXL_FAILURE: OpenEXR failed to parse input
SRC_PATH/src/libjxl/src/libjxl/lib/extras/codec.cc:137: JXL_FAILURE: Codecs failed to decode
SRC_PATH/src/libjxl/src/libjxl/lib/extras/codec.cc:150: JXL_RETURN_IF_ERROR code=1: SetFromBytes(Span<const uint8_t>(encoded), io, pool, orig_codec)
Failed to read image test.jpg.
SRC_PATH/src/libjxl/src/libjxl/tools/cjxl_main.cc:67: JXL_RETURN_IF_ERROR code=1: LoadAll(args, &pool, &io, &decode_mps)
```
|
|
|
_wb_
|
2021-06-14 01:15:06
|
yeah the jpeg reader in cjxl cannot handle CMYK either
|
|
|
Jyrki Alakuijala
|
2021-06-14 01:17:16
|
Everything is just code, we can change it.
|
|
2021-06-14 01:18:19
|
For a new decoder part we are not limited by having to comply with JPEG XL standard, so we can reorganize the building blocks there.
|
|
|
_wb_
|
2021-06-14 01:20:17
|
CMYK jpegs currently work poorly in all browsers and in many viewers, so just refusing to decode them is imo actually better.
|
|
2021-06-14 01:21:07
|
CMYK icc profiles are typically > 500 KB, so that alone makes them a bad idea for the web.
|
|
2021-06-14 01:22:26
|
Also the concept of caring about black separation but not caring about DCT artifacts on it is imo fundamentally wrong.
|
|
2021-06-14 01:24:25
|
They can be viewed, but K gets ignored and/or no color management at all is done.
|
|
|
|
vigilant_seal
|
2021-06-14 01:51:21
|
So should I create an issue for imagemagick?
|
|
|
Jyrki Alakuijala
|
2021-06-14 03:29:51
|
it can be confusing when cmyk is not supported -- someone could have 7000 jpegs, one of them a weird cmyk, and then they get a spurious error message and need to start understanding colorspaces
|
|
2021-06-14 03:30:07
|
even a silly weird inefficient support would help
|
|
2021-06-14 03:30:23
|
like defining it through spot color layers
|
|
2021-06-14 03:31:01
|
even if cmyk becomes 20 % bigger in compression, that would make it better than "it doesn't work"
|
|
2021-06-14 03:33:50
|
I'm thinking that some cmyk jpegs can originate from people not understanding what they are doing in their respective image processing software
|
|
2021-06-14 03:35:29
|
thinking something like "print quality is better than web quality, so let's find the print quality setting", or "color separation must result in more vivid look than no color separation", or "graphics professionals use cmyk", or "professional ICC profile" etc.
|
|
2021-06-14 03:36:16
|
of course it doesn't make sense, but if the option exists someone will (ab)use it even when there is no reason
|
|
|
_wb_
|
2021-06-14 04:10:25
|
Yes, we should get CMYK to behave nicely also for the (imo more useful) lossless CMYK case
|
|
2021-06-14 04:10:41
|
We do have a kBlack extra channel type exactly for that
|
|
2021-06-14 04:11:45
|
just djxl doesn't know what it means, nor how to apply a cmyk icc profile to the color channels + the kBlack channel (and it would need to do that to produce correct RGB output)
|
|
2021-06-14 04:13:43
|
so that's the first priority imo: make the decoder know what kBlack means; in case it doesn't have full icc profile support it can just do the simplistic thing (multiply everything by kBlack, basically), in case it does have actual color management (lcms2/skcms) it should do the CMYK -> RGB conversion
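the simplistic thing being roughly this sketch (no colour management at all; the sign convention for the kBlack channel is an assumption here):
```cpp
// Simplistic CMYK -> RGB composite: invert the colour channels and darken
// by the black channel. Inputs are in [0, 1], where 0 means no ink.
struct RGBf { float r, g, b; };

RGBf SimplisticCmykToRgb(float c, float m, float y, float k) {
  return RGBf{(1.0f - c) * (1.0f - k),
              (1.0f - m) * (1.0f - k),
              (1.0f - y) * (1.0f - k)};
}
```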
|
|
2021-06-14 04:16:11
|
second thing is to do something useful with CMYK jpeg input: it could be to treat it like a CMY jpeg (recompressing those coefficients losslessly) plus a kBlack channel consisting of the decoded K pixels. Not reconstructible to jpeg, but should at least render correctly (provided the above is done)
|
|
2021-06-14 04:17:20
|
another option is to just convert CMYK to (sufficiently wide-gamut) RGB in the encoder and consider it as pixels, losing the black separation but making more sense for lossy compression
|
|
|
Jyrki Alakuijala
|
2021-06-14 04:31:27
|
yay, let's get that on the roadmap of what should actually work in practice ๐
|
|
2021-06-14 05:38:48
|
AVIF blog post: https://web.dev/compress-images-avif/
|
|
|
_wb_
|
2021-06-14 05:49:30
|
https://storage.googleapis.com/web-dev-uploads/image/foR0vJZKULb5AGJExlazy1xYDgI2/kVqh1xli2O6mqKF3fQNx.avif
|
|
2021-06-14 05:49:39
|
Who does that to their images?
|
|
2021-06-14 05:49:56
|
Looks like a Dali painting
|
|
2021-06-14 05:50:15
|
Poor dog
|
|
|
|
Deleted User
|
2021-06-14 05:56:17
|
> The human eye doesn't just want the image to look similar to the original, it wants the image to have similar complexity. Therefore, we would rather see a somewhat distorted but still detailed block than a non-distorted but completely blurred block. The result is a bias towards a detailed and/or grainy output image, a bit like xvid except that its actual detail rather than ugly blocking.
|
|
2021-06-14 05:56:29
|
Dark Shikari, where are you? ;_;
|
|
2021-06-14 05:56:41
|
GIVE ME PSY-RDO IN AV1
|
|
|
BlueSwordM
|
|
GIVE ME PSY-RDO IN AV1
|
|
2021-06-14 05:57:11
|
You know, there is something you can tweak that is just badly labeled:
`--sharpness=XXX`
|
|
2021-06-14 05:57:29
|
It doesn't actually do sharpening lmao
It affects rate distortion decisions: do you want to blur more, or want to keep more detail?
```av1_encoder : sharpness affect
This patch changes the effect of the sharpness parameter to the encoder.
It no longer affects the thresholds in the loop filter. Now it acts this
way:
sharpness
0 => acts as before.
1 => stops performing eob and skip block optimization
1-7 => changes rdmult in trellis to favor lower distortion
The effect is images / video include more detail at the same bitrate
but sometimes also more artifacts.
+ when using variance based aq : lower segment ids
also favor lower distortion```
|
|
|
_wb_
|
2021-06-14 05:58:21
|
> AVIF supports two types of progressive decoding. Spatial scalability can be used to offer a lower resolution image for network constrained users and 'progressively' provide a higher resolution image by sending just the additional data required to fill in the high frequency details. Quality scalability offers a similar progression by steadily improving visual quality with each render.
Not saying this is bullshit, but there might be a reason why no avif encoder exists that actually does that.
|
|
|
BlueSwordM
|
|
_wb_
> AVIF supports two types of progressive decoding. Spatial scalability can be used to offer a lower resolution image for network constrained users and 'progressively' provide a higher resolution image by sending just the additional data required to fill in the high frequency details. Quality scalability offers a similar progression by steadily improving visual quality with each render.
Not saying this is bullshit, but there might be a reason why no avif encoder exists that actually does that.
|
|
2021-06-14 05:58:47
|
Currently, the compute cost to doing this is non-negligible on the encoding side, unlike normal progressivity in JPEG/JPEG XL.
|
|
|
_wb_
|
2021-06-14 05:59:13
|
I am not even talking about encode/decode cost
|
|
2021-06-14 05:59:32
|
I am talking about density penalty
|
|
|
BlueSwordM
|
|
_wb_
I am talking about density penalty
|
|
2021-06-14 06:00:21
|
Yes. As said in the last daala watercooler(June 11th) meeting by unlord:
"Unlike with progressive in JPEG/JPEG-XL, AVIF progressive never improves compression efficiency."
|
|
|
|
Deleted User
|
|
BlueSwordM
It doesn't actually do sharpening lmao
It affects rate distortion decisions: do you want to blur more, or want to keep more detail?
```av1_encoder : sharpness affect
This patch changes the effect of the sharpness parameter to the encoder.
It no longer affects the thresholds in the loop filter. Now it acts this
way:
sharpness
0 => acts as before.
1 => stops performing eob and skip block optimization
1-7 => changes rdmult in trellis to favor lower distortion
The effect is images / video include more detail at the same bitrate
but sometimes also more artifacts.
+ when using variance based aq : lower segment ids
also favor lower distortion```
|
|
2021-06-14 06:02:10
|
I don't know if this patch is in Squoosh, because the current version doesn't really help that much...
|
|
|
BlueSwordM
|
|
I don't know if this patch is in Squoosh, because the current version doesn't really help that much...
|
|
2021-06-14 06:02:38
|
That's because everything in Squoosh is simplified, to be fair, but it's not a patch, it's more like an advanced setting.
|
|
|
> The human eye doesn't just want the image to look similar to the original, it wants the image to have similar complexity. Therefore, we would rather see a somewhat distorted but still detailed block than a non-distorted but completely blurred block. The result is a bias towards a detailed and/or grainy output image, a bit like xvid except that its actual detail rather than ugly blocking.
|
|
2021-06-14 06:04:47
|
That's the thing I'm hoping to do with my current efforts and the article I'll be releasing soon.
Thank you for the quote BTW. It will be rather helpful with my changes document ๐
|
|
2021-06-14 06:05:15
|
The aom devs do care about psycho-visual optimizations. They just haven't been given much of a push in that regard
|
|
|
|
paperboyo
|
|
_wb_
Yes, we should get CMYK to behave nicely also for the (imo more useful) lossless CMYK case
|
|
2021-06-14 06:05:42
|
I'm with a newspaper. FYI, almost nobody is using lossless. Mostly at the intermediary, creative stage. The huge majority of source files and output files are lossy (perceptually lossless, of course).
|
|
|
|
Deleted User
|
|
BlueSwordM
That's because everything in Squoosh is simplified, to be fair, but it's not a patch, it's more like an advanced setting.
|
|
2021-06-14 06:06:53
|
Squoosh has the "Sharpness" setting exposed. I simply don't know how recent the patch you've just mentioned is.
|
|
|
Jyrki Alakuijala
|
|
BlueSwordM
Yes. As said in the last daala watercooler(June 11th) meeting by unlord:
"Unlike with progressive in JPEG/JPEG-XL, AVIF progressive never improves compression efficiency."
|
|
2021-06-14 06:07:13
|
AVIF has the possibility of using frames of variable resolution for progression. Variable resolution is unlikely to work for normal high-to-middle-quality images (based on personal experience).
|
|
|
BlueSwordM
The aom devs do care about psycho-visual optimizations. They just haven't been given much of a push in that regard
|
|
2021-06-14 06:08:09
|
I'm waiting to see if they will endorse --tune=butteraugli, given that butteraugli is not one of the official AOM metrics (and psnr and ssim are)
|
|
2021-06-14 06:08:51
|
I'm secretly hoping that they would endorse it and adopt it for future work (and also other more powerful quality metrics from the clic conference)
|
|
|
_wb_
|
|
paperboyo
I'm with a newspaper. FYI, almost nobody is using lossless. Mostly at the intermediary, creative stage. The huge majority of source files and output files are lossy (perceptually lossless, of course).
|
|
2021-06-14 06:09:34
|
My opinion is that if lossy is OK, then (wide gamut) RGB is also OK. Lossy CMYK sounds not so useful to me. CMYK raster images in general sound not so useful to me (vector for the black part and RGB for the rest sounds more logical to me)
|
|
|
|
Deleted User
|
|
BlueSwordM
That's the thing I'm hoping to do with my current efforts and the article I'll be releasing soon.
Thank you for the quote BTW. It will be rather helpful with my changes document ๐
|
|
2021-06-14 06:10:18
|
> That's the thing I'm hoping to do with my current efforts and the article I'll be releasing soon.
ANOTHER ARTICLE? <:Hypers:808826266060193874>
> Thank you for the quote BTW.
No problem bro, it's the classic from encode.su ๐
> It will be rather helpful with my changes document ๐
Link it when you're done pls ๐ฅบ
|
|
|
BlueSwordM
|
|
> That's the thing I'm hoping to do with my current efforts and the article I'll be releasing soon.
ANOTHER ARTICLE? <:Hypers:808826266060193874>
> Thank you for the quote BTW.
No problem bro, it's the classic from encode.su ๐
> It will be rather helpful with my changes document ๐
Link it when you're done pls ๐ฅบ
|
|
2021-06-14 06:10:53
|
1. Yes. The article is currently on the final stages of editing and encoding, so it will be coming out this week.
2. Danke ๐
3. No problem. ๐
|
|
|
|
Deleted User
|
|
BlueSwordM
The aom devs do care about psycho-visual optimizations. They just haven't been given much of a push in that regard
|
|
2021-06-14 06:10:59
|
Push them, it's definitely worth it. And it's nice that they do care about that ๐
|
|
|
Scope
|
|
Jyrki Alakuijala
AVIF blog post: https://web.dev/compress-images-avif/
|
|
2021-06-14 06:11:34
|
About this article: this is exactly the direction I said I would not want the Web to go in, as well as the promotion and comparison based only on the maximum possible compression.
|
|
|
|
paperboyo
|
|
_wb_
My opinion is that if lossy is OK, then (wide gamut) RGB is also OK. Lossy CMYK sounds not so useful to me. CMYK raster images in general sound not so useful to me (vector for the black part and RGB for the rest sounds more logical to me)
|
|
2021-06-14 06:12:52
|
Just saying how it is here, and probably in many other places.
> if lossy is OK, then (wide gamut) RGB is also OK
Unrelated. Mindset (and, less often, real requirements) still demands control over black separation. CMYK JPEGs are [enter-high-nineties-estimate]% of output images here (as JPEGs are of source imagery, but those are RGB).
> CMYK raster images in general sound not so useful to me
Again, just saying how it is. For years. Pretty useful for producing multiple newsprint and more glossy products.
|
|
|
BlueSwordM
|
2021-06-14 06:14:25
|
Anyway, if someone can find the image of the dog, please do ๐
|
|
2021-06-14 06:14:38
|
I want to save the dog from over compression.
|
|
|
Scope
|
2021-06-14 06:16:16
|
https://codelabs.developers.google.com/codelabs/avif/images/happy_dog.jpg
|
|
|
_wb_
|
|
paperboyo
Just saying how it is here, and probably in many other places.
> if lossy is OK, then (wide gamut) RGB is also OK
Unrelated. Mindset (and, less often, real requirements) still demands control over black separation. CMYK JPEGs are [enter-high-nineties-estimate]% of output images here (as JPEGs are of source imagery, but those are RGB).
> CMYK raster images in general sound not so useful to me
Again, just saying how it is. For years. Pretty useful for producing multiple newsprint and more glossy products.
|
|
2021-06-14 06:17:32
|
I get that control over black separation can be useful, and that you don't always have a workflow where vector stuff can remain vector. I just wonder why anyone would send jpeg artifacts to a printer and not, say, a cmyk tiff. Is transfer size that much of an issue?
|
|
2021-06-14 06:17:59
|
Because if it is, then maybe lossless cmyk jxl could be really useful
|
|
2021-06-14 06:18:19
|
(or lossy cmyk jxl, maybe with lossy cmy and lossless K)
|
|
2021-06-14 06:20:42
|
Btw it's really interesting to have someone like you here, <@810102077895344159> . We can use some input on what the use cases and requirements are for printing use cases, we have been focusing a lot on screen display so far...
|
|
|
|
paperboyo
|
|
_wb_
I get that control over black separation can be useful, and that you don't always have a workflow where vector stuff can remain vector. I just wonder why anyone would send jpeg artifacts to a printer and not, say, a cmyk tiff. Is transfer size that much of an issue?
|
|
2021-06-14 06:22:41
|
Lossless is used only for images with transparency (and only because no lossy format supports transparency and/or layers and, maybe, a clipping path in InDesign). JPEG XL in InDesign (and all of Adobe) would be useful from printers' POV, but only for convenience (one format to have transparency, layers and filesizes like JPEG). But there is no thirst for it in print like there is on the web.
> send jpeg artifacts to a printer
Nobody will ever notice the artifacts of a **lightly** lossy pic in print.
|
|
2021-06-14 06:24:46
|
I think the focus on web makes all the sense in the world, because the need is ever-real. [EDIT: and it's also where the format will gain most of its overall exposure]. Secondly, the creative stage: camera output, intermediary work files. Print comes last. But I agree with Jyrki that completely barfing at CMYK would be sad (many users do not care who sends them what: they need it to open and display correct colour).
|
|
|
_wb_
Btw it's really interesting to have someone like you here, <@810102077895344159> . We can use some input on what the use cases and requirements are for printing use cases, we have been focusing a lot on screen display so far...
|
|
2021-06-14 06:26:12
|
The pleasure is all mine and I'm honoured and excited to be here. My main reason: to lobby for the lower end of the quality spectrum for web delivery :P. [EDIT: but yeah: the dog is too much even for me]
|
|
|
Jyrki Alakuijala
|
2021-06-14 06:28:19
|
a near-perfect-quality 2 megapixel image takes something like 118 ms on mobile to transfer, and then 13 seconds of JavaScript is run -- this is a great reason to compress the image down to 10 ms and make it look pretty bad
|
|
2021-06-14 06:30:16
|
(average mobile speed is 17 Mbps, and a 2 megapixel image at 1 bpp takes about 2 megabits -- 2 megabits / 17 megabits/s is 118 ms)
|
|
2021-06-14 06:31:59
|
Even though some mobile devices have more than 2 million device pixels, some major websites limit the size for practical reasons to full HD resolution (2 million pixels)
|
|
2021-06-14 06:32:46
|
the viewing distance in device pixels is so big (5000-7000 pixels) for mobile phones, that individual pixels don't bring a lot of value unless it is small text
|
|
|
|
paperboyo
|
|
BlueSwordM
Anyway, if someone can find the image of the dog, please do ๐
|
|
2021-06-14 06:38:55
|
Found this: https://codelabs.developers.google.com/codelabs/avif/images/happy_dog.jpg
|
|
|
Jyrki Alakuijala
|
2021-06-14 06:40:21
|
who else is curious how bad it is with jpeg xl at these bit rates ๐
|
|
|
_wb_
|
2021-06-14 06:55:07
|
|
|
2021-06-14 06:55:27
|
|
|
|
Crixis
|
|
Jyrki Alakuijala
|
2021-06-14 06:56:49
|
dog feels better, hat looks worse
|
|
2021-06-14 06:58:49
|
the hat has more contrast
|
|
2021-06-14 06:59:44
|
if one takes a "logarithmic" approach to texture preservation and also allocates some bits to low intensity textures, then less bits will be available to high contrast things
|
|
|
Crixis
|
2021-06-14 06:59:48
|
|
|
2021-06-14 07:00:31
|
texture is good but lost a LOT on lines
|
|
2021-06-14 07:01:29
|
jxl uses a lot of bits on the grass
|
|
2021-06-14 07:01:37
|
the grass is cool
|
|
2021-06-14 07:02:45
|
the hat is awful
|
|
|
_wb_
|
2021-06-14 07:07:54
|
|
|
2021-06-14 07:08:00
|
this one is with resampling 2
|
|
2021-06-14 07:08:08
|
|
|
|
Crixis
|
2021-06-14 07:11:56
|
Also, in this one the main problem to me is that all the lines/edges are jagged; is that called pixellation?
|
|
|
_wb_
|
2021-06-14 07:13:11
|
ugh all these 18 KB dogs are just horrible
|
|
2021-06-14 07:13:32
|
the avif one looks "uncanny valley" horrible to me though
|
|
|
|
paperboyo
|
2021-06-14 07:14:29
|
> the grass is cool
> the hat is awful
sky is cool
manga is awful
The battle is where they meet?
|
|
|
_wb_
|
2021-06-14 07:14:31
|
in the jxl you can see it's poor quality (the first one is `cjxl --distance 15 -s kitten`)
|
|
2021-06-14 07:16:05
|
but that avif dog, that hat looks quite good and some of that wet hair is not bad at all, so at a very superficial first glance it looks like a nice and sharp image
|
|
2021-06-14 07:17:09
|
until you look closer and see the smearing and blur and the Dali-esque melting they did to the poor doggie
|
|
|
|
paperboyo
|
|
_wb_
ugh all these 18 KB dogs are just horrible
|
|
2021-06-14 07:18:33
|
> ugh all these 18 KB dogs are just horrible
Agreed, too low a target. But just for the sake of argument, this dog with all of Squoosh's AVIF settings that artificially hide the plastic could pass as a dpr=2 image (displayed at half the size: at an effective 420 wide):
|
|
2021-06-14 07:19:59
|
`npx @squoosh/cli --resize '{"enabled":true,"width":840,"height":1120,"method":"lanczos3","fitMethod":"stretch","premultiply":true,"linearRGB":true}' --avif '{"cqLevel":49,"cqAlphaLevel":-1,"subsample":3,"tileColsLog2":6,"tileRowsLog2":6,"speed":0,"chromaDeltaQ":false,"sharpness":7,"denoiseLevel":50,"tune":2}'`
|
|
|
_wb_
|
2021-06-14 07:26:32
|
Yes, I guess on a sufficiently small screen that 20 KB avif dog could be reasonable-ish
|
|
2021-06-14 07:29:06
|
I would rather play it safe and have a 71 KB jxl
|
|
2021-06-14 07:29:13
|
|
|
2021-06-14 07:29:23
|
|
|
2021-06-14 07:29:48
|
that's distance 4, default speed
|
|
|
Scope
|
2021-06-14 07:30:54
|
Roughly double the size:
AVIF (45207)
|
|
2021-06-14 07:31:06
|
JXL (45220)
|
|
2021-06-14 07:31:21
|
Source
|
|
|
_wb_
|
2021-06-14 07:34:47
|
When manually compressing for a specific target audience / image purpose, you can go lower
|
|
2021-06-14 07:35:15
|
But when automatically compressing, I wouldn't feel comfortable setting it worse than distance 4 or so
|
|
|
|
paperboyo
|
2021-06-14 07:35:30
|
I feel like I have to explain myself a tiny bit: I'm not for butchery! Really, I am not. But with the current state of affairs, when one is doing mass delivery at different formats, resolutions, densities etc., one needs to set a static set of values. One being me. Nobody has time to tune every image for perfect presentation. And nobody will do it for me. So I need to set a compromise, and it will always be a compromise: most images will look good, but there will always be some that look a bit shit. This shit will look different depending on the format and will show up in different places in different outlier images: posterisation, blockiness, plastic. I personally may even prefer the first two to the last one, but it's not a common choice, I think. I cannot set it so that the outliers will look good, because I would be wasting earth's resources and people's moneyz for no gain 95% of the time.
The thing is to minimise the likelihood of that happening, and, for when it (inevitably?) happens, for the phones not to ring.
|
|
|
_wb_
|
2021-06-14 07:36:58
|
I like what we do by default at Cloudinary: if the end-user has `save-data` set (client hint indicating they pay for bandwidth or have a slow network), we deliver a lower quality image than by default.
|
|
|
BlueSwordM
|
|
_wb_
I like what we do by default at Cloudinary: if the end-user has `save-data` set (client hint indicating they pay for bandwidth or have a slow network), we deliver a lower quality image than by default.
|
|
2021-06-14 07:37:42
|
Really? It'd be nice to have this option available for every CDN.
|
|
|
_wb_
|
2021-06-14 07:38:46
|
Default `q_auto` will give you (for jxl) basically a q80 jxl by default and a q70 one if you have save-data on
|
|
2021-06-14 07:41:29
|
I think it would be good for the web if the save-data client hint would be more detailed, e.g. multiple levels
|
|
|
Jyrki Alakuijala
|
|
Scope
Roughly double the size:
AVIF (45207)
|
|
2021-06-14 07:42:19
|
hat is still better with AVIF, the stones and the fence in the background are better with JPEG XL
|
|
|
|
paperboyo
|
|
_wb_
I think it would be good for the web if the save-data client hint would be more detailed, e.g. multiple levels
|
|
2021-06-14 07:42:36
|
Yep, but it couldn't be user-controlled then most of the time. Better network-condition sniffing would be ideal. Is Chrome for Android `save-data` by default?
|
|
|
Jyrki Alakuijala
|
2021-06-14 07:43:21
|
I hope humanity could afford to have d2.5 as the worst case -- and normally ship d0.8 to d1.5 ๐
|
|
|
Crixis
|
2021-06-14 07:44:59
|
Have you considered adding aliases for the quality? e.g. -lowquality (same as -d2.5), -highquality (same as -d0.5), etc.?
|
|
|
Scope
|
2021-06-14 07:51:49
|
I do not think it is convenient; the default setting of `-d 1` is already enough, and extra options would create unnecessary complexity and limit choice (especially since `-lowquality` can mean very different things, for example for very high resolutions `-d 2.5` is not really `-lowquality`)
|
|
2021-06-14 08:00:21
|
Unless adding a description when encoding (to be more informative about the quality values), for example in Avifenc:
> Encoding with AV1 codec 'aom' speed [1], color QP [20 (**Medium**) <-> 60 (**Low**)]
|
|
2021-06-14 08:01:55
|
> Encoding with AV1 codec 'aom' speed [1], color QP [10 (**High**) <-> 20 (**Medium**)]
|
|
|
lithium
|
2021-06-14 08:05:01
|
According to pik and guetzli butteraugli, distances larger than d 1.6 enter the risk zone.
|
|
|
Jyrki Alakuijala
|
2021-06-14 08:21:39
|
when I calibrated butteraugli, I didn't even bother to add evidence worse than the evidence defining the 1.25 boundary
|
|
2021-06-14 08:21:50
|
I considered it too bad for practical use
|
|
2021-06-14 08:22:04
|
I hope one day the world will agree with me and we will have crisp images
|
|
2021-06-14 08:25:05
|
I'm making some progress on the fixes for <@!461421345302118401>
|
|
2021-06-14 08:25:46
|
I already had a version with 10 % better max butteraugli... looks like reversing the decision tree has quite some promise for finding higher quality solutions in difficult situations
|
|
2021-06-14 08:25:53
|
but I'm not quite there yet
|
|
2021-06-14 08:26:53
|
it is great fun to be coding again after a break of three months or so
|
|
2021-06-14 08:27:47
|
((some team member is going to have an extra annoying job to review my code, because I'm rusty))
|
|
|
|
Deleted User
|
|
Jyrki Alakuijala
((some team member is going to have an extra annoying job to review my code, because I'm rusty))
|
|
2021-06-14 08:35:01
|
*Rusty*, you say...?
|
|
2021-06-14 08:35:24
|
(yes, I've just added a Rust emoji <:kekw:808717074305122316>)
|
|
|
Jyrki Alakuijala
|
2021-06-14 08:35:26
|
perhaps it was subconscious
|
|
2021-06-14 08:35:36
|
didn't want to imply anything
|
|
2021-06-14 08:36:09
|
I never wrote a line of rust in my life, but people smarter than me like that language
|
|
|
|
Deleted User
|
2021-06-14 08:49:19
|
Does it support higher-order functions and lambda expressions?
|
|
|
Jyrki Alakuijala
|
2021-06-14 08:52:07
|
I am just googling, don't really know https://doc.rust-lang.org/rust-by-example/fn/hof.html
|
|
2021-06-14 08:53:57
|
now I have a 1.5 % (d1) to 3.5 % (d3 - d8) improvement for doing more disciplined integral transform search ๐
|
|
2021-06-14 08:54:12
|
I was hoping for 5 % improvement...
|
|
2021-06-14 08:55:02
|
I didn't re-optimize any heuristics, so there are still chances
|
|
2021-06-14 08:57:32
|
default compression is 14 % slower, though, but I have ideas on how to restore the speed
|
|
|
|
Deleted User
|
2021-06-14 09:07:42
|
Coming back to the dog image:
https://discord.com/channels/794206087879852103/794206170445119489/854061696197066783
|
|
2021-06-14 09:09:13
|
Those guys made a JPEG for comparison with AVIF. I've Knusperlified it and... it doesn't look that awful IMHO...
|
|
|
_wb_
|
|
paperboyo
Yep, but it couldn't be user-controlled then most of the time. Better network-condition sniffing would be ideal. Is Chrome for Android `save-data` by default?
|
|
2021-06-14 09:09:44
|
It's called "lite mode" in Android Chrome, off by default I think, except maybe in some phones/regions where they might enable it by default.
|
|
|
|
Deleted User
|
2021-06-14 09:10:00
|
Here's the source JPEG before Knusperlifying it:
|
|
2021-06-14 09:10:04
|
https://storage.googleapis.com/web-dev-uploads/image/foR0vJZKULb5AGJExlazy1xYDgI2/Jy0O0q0mLXl668HAo43n.jpeg
|
|
|
_wb_
|
2021-06-14 09:11:58
|
Yes, Knusperli and related approaches can help quite a bit to make the worst jpeg artifacts better.
|
|
|
Jyrki Alakuijala
|
2021-06-14 09:13:43
|
before reversal of decision tree
|
|
|
_wb_
|
|
paperboyo
Yep, but it couldn't be user-controlled then most of the time. Better network-condition sniffing would be ideal. Is Chrome for Android `save-data` by default?
|
|
2021-06-14 09:14:06
|
I agree that user-controlled save-data is not ideal. Maybe browsers should set it automatically based on current observed network conditions. Also CDN timing info could help servers to make decisions (probably works best when there are multiple resources, but that's usually the case)
|
|
|
Jyrki Alakuijala
|
2021-06-14 09:14:11
|
|
|
2021-06-14 09:14:16
|
after reversal
|
|
2021-06-14 09:14:38
|
every image will have a lot less ringing ๐
|
|
|
_wb_
|
2021-06-14 09:14:47
|
That's a nice ringing reduction!
|
|
|
|
Deleted User
|
|
_wb_
Yes, Knusperli and related approaches can help quite a bit to make the worst jpeg artifacts better.
|
|
2021-06-14 09:14:54
|
How can we test if Knusperli is a conformant decoder?
https://encode.su/threads/2922-Knusperli-%E2%80%94-a-better-JPEG-decoder#post56184
|
|
2021-06-14 09:15:16
|
Because if it is... KNUSPERLI IN CHROME! ๐
|
|
|
Jyrki Alakuijala
|
2021-06-14 09:15:19
|
it is literally with every image and consistent, and it is these disgusting ringing artefacts
|
|
2021-06-14 09:15:37
|
also, the new one is less bytes
|
|
2021-06-14 09:15:42
|
(not much less)
|
|
|
_wb_
|
|
How can we test if Knusperli is a conformant decoder?
https://encode.su/threads/2922-Knusperli-%E2%80%94-a-better-JPEG-decoder#post56184
|
|
2021-06-14 09:16:10
|
Depends which conformance you are aiming at: original jpeg conformance, or the one defined in jpeg XT. Main issue is probably not conformance but doing it in a way that doesn't hurt decode speed too much...
|
|
|
Jyrki Alakuijala
|
2021-06-14 09:18:11
|
I wouldn't care about conformance for jpegs at quality 50
|
|
2021-06-14 09:18:27
|
whatever can rescue or mitigate them is ok
|
|
2021-06-14 09:18:29
|
๐
|
|
2021-06-14 09:19:35
|
before reversal
|
|
2021-06-14 09:19:58
|
after reversal
|
|
2021-06-14 09:20:58
|
I'm going to go and find the most expensive bottle of wine I have in the house (out of selection of two), and open it to celebrate ๐
|
|
2021-06-14 09:21:12
|
the above is d3 with epf0
|
|
2021-06-14 09:24:41
|
before reversal
|
|
2021-06-14 09:25:00
|
after reversal
|
|
2021-06-14 09:25:11
|
d8, epf1
|
|
|
Crixis
|
2021-06-14 09:25:59
|
Yeah, so rusty ๐คฃ
|
|
2021-06-14 09:26:46
|
Now the dog photo <:Thonk:805904896879493180>
|
|
|
Jyrki Alakuijala
|
2021-06-14 09:26:53
|
I can be creative with approaches, but my code looks terrible ๐
|
|
2021-06-14 09:27:16
|
ok, I will try to download the dog
|
|
|
_wb_
|
2021-06-14 09:33:18
|
Need to downscale it if you want to do a meaningful comparison
|
|
2021-06-14 09:34:49
|
Try that chapel image with all the oblique lines maybe? That's the one that has the worst ringing in the webp2 comparisons
|
|
|
Jyrki Alakuijala
|
2021-06-14 09:37:17
|
dog with 18700 bytes at d18
|
|
2021-06-14 09:37:43
|
the chapel is better, too, but marginally -- likely not enough to win against avif and webp2
|
|
2021-06-14 09:38:14
|
I think the hat is a bit better
|
|
2021-06-14 09:39:27
|
not quite avif level, but better than it was
|
|
2021-06-14 09:39:42
|
I need to repeat this exercise another seven times ๐
|
|
|
_wb_
|
2021-06-14 09:40:05
|
Imo getting d1 right is important for archival use cases, and getting d2-d4 right is important for the web. Doing something nice at d5-d8 is maybe good for poorly connected parts of the developing world, and beyond that it's basically only good for PR when people are comparing overcompressed images thinking that can be extrapolated to behavior at useful qualities.
|
|
|
Jyrki Alakuijala
|
2021-06-14 09:40:29
|
yes, this is d18 -- I certainly wish that no one will ever use it
|
|
2021-06-14 09:40:49
|
0.159 bpp
|
|
2021-06-14 09:41:22
|
a 2 MP image is 40 kB with this density
|
|
2021-06-14 09:41:54
|
and takes an average of 18 ms to download with average mobile bandwidth
|
|
2021-06-14 09:42:28
|
let the people wait for another 50 ms for a good image
|
|
|
_wb_
|
2021-06-14 09:43:07
|
Or give them a nice progressive preview in 10 ms and a good image 50 ms later...
|
|
|
Jyrki Alakuijala
|
2021-06-14 09:44:30
|
I think everyone will be happy if things happen in 100 ms ๐
|
|
2021-06-14 09:44:45
|
next up is to work on the 17 second javascript processing time
|
|
2021-06-14 09:46:43
|
Jon, what quality do you want to see the chapel before/after?
|
|
2021-06-14 09:47:05
|
(and epf)
|
|
2021-06-14 09:48:28
|
before reversal
|
|
2021-06-14 09:48:42
|
|
|
2021-06-14 09:49:02
|
d8 epf1 synthetic image
|
|
2021-06-14 09:49:18
|
this image has been usually nasty for integral transform selection
|
|
|
_wb_
|
2021-06-14 09:53:13
|
That is quite a nice improvement! (even better would be a heuristic that does such parts with delta palette or something else non-DCT, but that's not going to happen anytime soon I guess)
|
|
|
Jyrki Alakuijala
Jon, what quality do you want to see the chapel before/after?
|
|
2021-06-14 09:54:07
|
Something like d8, whatever default epf is at that distance? (max I guess)
|
|
|
Jyrki Alakuijala
|
2021-06-14 09:54:27
|
I have d8 at epf1
|
|
2021-06-14 09:56:18
|
before reversal of the decision tree, 0.243 bpp
|
|
2021-06-14 09:57:04
|
after reversal of the decision tree, 0.234 bpp
|
|
2021-06-14 09:58:05
|
it is almost 4 % less data, one needs to consider it, too
|
|
2021-06-14 09:58:18
|
usually I can tell apart a 0.5 % reduction
|
|
|
_wb_
|
2021-06-14 10:06:09
|
The new one does look a bit better even though it's smaller. Of course d8 is too lossy in any case
|
|
|
Jyrki Alakuijala
|
2021-06-14 10:06:30
|
what do I do from my git to get the commit visible to you and others
|
|
|
_wb_
|
2021-06-14 10:07:14
|
What happens at d3? Can you show a zoomed in crop of the top triangles at d3, before/after?
|
|
|
Jyrki Alakuijala
|
2021-06-14 10:07:15
|
yes, it is not a day-and-night kind of thing, but it is one of the more substantial improvements there has been
|
|
2021-06-14 10:07:29
|
I have d3 at epf:0
|
|
|
_wb_
|
2021-06-14 10:07:38
|
Just open a pull request?
|
|
2021-06-14 10:08:16
|
git checkout -b dctselect-improvements; git push and then do what it tells you :)
|
|
|
Jyrki Alakuijala
|
2021-06-14 10:08:19
|
before reversal
|
|
2021-06-14 10:08:44
|
after reversal
|
|
2021-06-14 10:16:26
|
1.2 % reduction in size, too: 0.513 bpp and 0.506 bpp
|
|
2021-06-14 10:20:20
|
https://github.com/libjxl/libjxl/pull/172
|
|
2021-06-14 10:20:28
|
I didn't check if it passes tests
|
|
2021-06-14 10:20:48
|
but it worked for my 40 image test set at 10 different settings
|
|
2021-06-14 10:20:56
|
also it worked on the avif dog ๐
|
|
2021-06-14 10:21:11
|
feel free to try if you are eager
|
|
2021-06-14 10:23:40
|
I suspect that after this I can make another similar gain by more aggressive deployment of larger transforms (which will be chosen more carefully due to the reversal)
|
|
|
raysar
|
|
Jyrki Alakuijala
after reversal
|
|
2021-06-15 12:06:06
|
That's great, it's visually better ๐
|
|
|
BlueSwordM
|
2021-06-15 02:11:10
|
Reminds me of bottom-up partition selection in rav1e, actually.
|
|
2021-06-15 02:11:33
|
Instead of going top-down in partition size (blocks and transforms), you go bottom-up, so you get close to perfect partition selection
|
|
|
Jyrki Alakuijala
|
2021-06-15 08:42:37
|
yes, particularly important in JPEG XL where we have 10 different 8x8 transforms -- we first choose the best of those and then start thinking if some bigger transforms are contributing more to the quality
|
|
2021-06-15 08:43:17
|
if we try the biggest first, then we should compare all combinations of other transforms against it -- that will be a combinatorial explosion
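As a sketch of the bottom-up idea (the cost functions here are hypothetical stand-ins for a rate-distortion estimate, not the actual libjxl heuristic):
```cpp
#include <array>
#include <functional>
#include <limits>

struct Choice { int transform; double cost; };

// Pick the cheapest 8x8 transform for one block.
Choice Best8x8(int block, int num_transforms,
               const std::function<double(int, int)>& cost8x8) {
  Choice best{0, std::numeric_limits<double>::infinity()};
  for (int t = 0; t < num_transforms; ++t) {
    double c = cost8x8(block, t);
    if (c < best.cost) best = {t, c};
  }
  return best;
}

// Only accept a 16x16 transform over a 2x2 group of blocks if it beats the
// sum of the already-chosen per-block costs; this avoids comparing the big
// transform against every combination of smaller ones.
bool PreferLargeTransform(const std::array<int, 4>& group_blocks, int group,
                          int num_transforms,
                          const std::function<double(int, int)>& cost8x8,
                          const std::function<double(int)>& cost16x16) {
  double small_total = 0.0;
  for (int b : group_blocks) {
    small_total += Best8x8(b, num_transforms, cost8x8).cost;
  }
  return cost16x16(group) < small_total;
}
```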
|
|
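To illustrate the bottom-up idea described above, here is a minimal sketch in Python. It is not libjxl's actual heuristic; the transform names, the cost function, and the candidate list are placeholder assumptions.

```python
# Minimal sketch of bottom-up transform selection (illustrative only,
# not libjxl's actual heuristic). Step 1 picks the best 8x8 variant for
# every block; step 2 accepts a larger transform only if it beats the
# sum of the already-chosen small transforms it would replace.
from typing import Callable, Dict, List, Sequence, Tuple

Block = Tuple[int, int]                          # (block_y, block_x) in 8x8 units
Cost = Callable[[str, Sequence[Block]], float]   # hypothetical rate+distortion cost

def select_transforms(blocks: List[Block],
                      small_variants: List[str],
                      large_candidates: List[Tuple[str, List[Block]]],
                      cost: Cost) -> Dict[Block, str]:
    # Step 1 (bottom-up): best 8x8 transform per block.
    choice = {b: min(small_variants, key=lambda t: cost(t, [b])) for b in blocks}

    # Step 2: a larger transform is kept only when it is cheaper than the
    # small transforms it would replace.
    for transform, covered in large_candidates:
        baseline = sum(cost(choice[b], [b]) for b in covered)
        if cost(transform, covered) < baseline:
            for b in covered:
                choice[b] = transform
    return choice

# Toy usage with a made-up cost function.
if __name__ == "__main__":
    dummy_cost: Cost = lambda t, region: len(t) + 10.0 / len(region)
    blocks = [(0, 0), (0, 1), (1, 0), (1, 1)]
    print(select_transforms(blocks,
                            ["DCT8", "IDENTITY", "AFV"],
                            [("DCT16X16", blocks)],
                            dummy_cost))
```

Trying the largest transform first would instead require comparing it against every combination of smaller transforms beneath it, which is the combinatorial explosion mentioned above.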
2021-06-18 11:05:49
|
not much slower -- likely neither slower nor faster in wombat, and elsewhere slower, perhaps a 5-10 % difference
|
|
|
fab
|
2021-06-18 04:31:36
|
the bad thing about jxl is that it tends to sharpen the output if the original is a jpg
|
|
2021-06-18 04:31:45
|
even with the best build i have
|
|
2021-06-18 04:31:51
|
v0.3.7-152-g58e6b3f win_x64 sse4 2021.06.18
|
|
2021-06-18 04:32:00
|
after -q 85.3815
|
|
2021-06-18 04:32:05
|
-e 7 or -e 9
|
|
2021-06-18 04:32:41
|
it introduces some sharpening of the edges to preserve jpeg xl detail quality
|
|
2021-06-18 04:34:01
|
this also slows down the decoder
|
|
2021-06-18 04:35:37
|
but I'm not sure anyone cares
|
|
2021-06-18 04:35:47
|
people will start overcompressing with new codecs
|
|
2021-06-18 04:36:01
|
and the AVIF, WP2, and JXL encoders
|
|
2021-06-18 04:36:04
|
will change every day
|
|
|
_wb_
|
2021-06-18 04:36:13
|
You mean in lossless jpeg recompression or when encoding ex-jpeg pixels?
|
|
|
fab
|
2021-06-18 04:36:19
|
-j
|
|
2021-06-18 04:36:32
|
-d 1 -s 7
|
|
2021-06-18 04:36:48
|
maybe -s 9 can hide it better
|
|
|
_wb_
|
2021-06-18 04:36:50
|
What quality are the jpegs?
|
|
|
fab
|
2021-06-18 04:39:14
|
I don't know, the camera is a Sony WX60
|
|
|
BlueSwordM
|
2021-06-18 04:39:47
|
You have a nice camera Fabian, certainly not bad at all.
|
|
|
fab
|
2021-06-18 04:44:52
|
loss of appeal also
|
|
2021-06-18 04:45:11
|
for jpg i will go to -q 85.3815
|
|
2021-06-18 04:45:15
|
and this build
|
|
2021-06-18 04:45:30
|
at least it hides the sharpening
|
|
2021-06-18 04:45:35
|
the gaborish filter
|
|
|
_wb_
|
2021-06-18 04:45:45
|
Can you give an example of sharpening issues?
|
|
2021-06-18 04:46:11
|
(like a zoom in on original vs jxl)
|
|
|
fab
|
2021-06-18 04:46:42
|
Discord Clyde says it's explicit
|
|
2021-06-18 04:46:44
|
it is only an eye
|
|
2021-06-18 04:46:49
|
anyway
|
|
2021-06-18 04:47:51
|
this is -d 1 -s 7
|
|
2021-06-18 04:48:22
|
for jpg you just have to decide if you want more detail or -q 85.3815
|
|
2021-06-18 04:48:30
|
or lossless recompression
|
|
|
_wb_
|
2021-06-18 04:49:07
|
what is the original?
|
|
|
fab
|
2021-06-18 04:49:12
|
first
|
|
2021-06-18 04:49:18
|
no, they are different images
|
|
2021-06-18 04:49:29
|
the second is one that I re-encoded
|
|
|
_wb_
|
2021-06-18 04:49:33
|
I see only one image
|
|
|
fab
|
2021-06-18 04:49:47
|
yes, but of the thumbnails the second is re-encoded
|
|
2021-06-18 04:49:56
|
and they are 2 thumbnails from 2 different sources
|
|
2021-06-18 04:50:33
|
it introduces some sharpening of the edges to preserve jpeg xl detail quality
this also slows down the decoder
|
|
2021-06-18 04:50:51
|
i think it is the gaborish filter
|
|
|
_wb_
|
2021-06-18 04:50:59
|
can you show me one image before compression and then the same image after compression
|
|
|
fab
|
2021-06-18 04:51:03
|
but I think you can't see it from this
|
|
2021-06-18 04:51:08
|
no, it is private
|
|
2021-06-18 04:51:31
|
I think for most jpgs, if the jxl quality is too high, it does that
|
|
|
_wb_
|
2021-06-18 04:51:45
|
can you try to reproduce the issue on a non-private image? some image from wikipedia or whatever
|
|
|
fab
|
2021-06-18 04:52:06
|
like crixis said sometimes
|
|
2021-06-18 04:52:25
|
better -q 85.3815 -s 9 than -s 7 -q 90 chosen randomly
|
|
2021-06-18 04:52:34
|
but i think people will opt for compression
|
|
2021-06-18 04:52:46
|
lossless transcode -- so far, are people interested?
|
|
2021-06-18 04:52:50
|
or companies?
|
|
2021-06-18 04:52:54
|
<#803574970180829194>
|
|
2021-06-18 04:53:29
|
the image size goes from 4.1 MB to 1.8 MB
|
|
2021-06-18 04:53:41
|
d 1 s 7
|
|
|
_wb_
|
2021-06-18 04:53:44
|
it's nearly impossible to get an idea of what you mean, let alone attempt to improve the encoder, without having access to an example that demonstrates the issue
|
|
|
fab
|
2021-06-18 04:53:59
|
-s 9 changes block orders to hide it better
|
|
2021-06-18 04:54:11
|
but I don't think so, it only has more detail
|
|
2021-06-18 04:56:18
|
so <@!424295816929345538> you were right
|
|
|
BlueSwordM
|
|
fab
s 9 change block orders to hide better
|
|
2021-06-18 05:04:03
|
Fabian, please upload an example image.
|
|
|
fab
|
2021-06-18 05:04:17
|
no
|
|
2021-06-18 05:06:02
|
i'm trying -d 0.7618 -s 8
|
|
2021-06-18 05:06:09
|
with inverse options
|
|
2021-06-18 05:06:17
|
higher quality re-encoding
|
|
2021-06-18 05:06:58
|
wish there was an auto quality option
|
|
2021-06-18 05:07:20
|
but jxl developers think ringing is a priority
|
|
2021-06-18 05:27:04
|
using max compression of jxl
|
|
2021-06-18 05:27:34
|
i would be happy if all images in the world were ruined in this way, and you would be too
|
|
2021-06-18 05:52:10
|
`for %i in (D:\Images\fotocamera\*.JPG) do cjxl -j -d 1.31622 -s 8 --faster_decoding=6 --epf=3 --use_new_heuristics %i %i.jxl`
|
|
2021-06-18 05:52:16
|
i've found a compromise
|
|
2021-06-18 05:53:06
|
v0.3.7-152-g58e6b3f win_x64 sse4 2021.06.18
|
|
|
Jyrki Alakuijala
|
2021-06-18 05:55:03
|
Lee and Fabian, you may want to try after https://github.com/libjxl/libjxl/pull/172
|
|
2021-06-18 05:55:39
|
I consider it a significant improvement to image quality, especially related to ringing, but also improves contrast and vividness of colors
|
|
2021-06-18 05:56:16
|
in my testing it made the same setting slightly smaller (like 1 %), so you may need to be aware of that when comparing (although even at 1 % smaller the results looked much better to me)
|
|
2021-06-18 05:56:29
|
particularly the worst case behaviour has improved a *lot*
|
|
2021-06-18 05:57:00
|
I will likely be able to create additional improvements due to the more solid approach here, by making more aggressive decisions elsewhere now
|
|
2021-06-18 05:57:29
|
like favoring larger transforms slightly more, or being steeper in adaptive quantization and masking decisions
|
|
|
fab
|
2021-06-18 06:03:28
|
253 MB (266,164,442 bytes), 69 files
49.9 MB (52,343,685 bytes), 68 files
|
|
2021-06-18 06:04:27
|
for Sony WX photos like these you don't need better than what I did
|
|
2021-06-18 06:04:39
|
the difference in colors would be minimal
|
|
2021-06-18 06:04:58
|
for smartphone photo with new sony imx sensor
|
|
2021-06-18 06:05:11
|
not sure if with raw files there is some difference
|
|
2021-06-18 06:05:27
|
i do not have a camera or smartphone now
|
|
|
|
Deleted User
|
2021-06-18 06:12:09
|
Quick question: will there be any new commits today? I don't know whether to compile now or wait for next commits (if there will be any).
|
|
2021-06-18 06:13:48
|
(I'm on a commit `b058f24`, by the way.)
|
|
|
diskorduser
|
|
fab
not sure if with raw files there is some difference
|
|
2021-06-18 06:15:05
|
Always do testing with raw developed images - png / tiff or with downscaled jpegs to remove possible artifacts from the source.
|
|
|
Jyrki Alakuijala
|
2021-06-18 06:16:08
|
I don't know of significant things in flight right now
|
|
2021-06-18 06:16:52
|
I plan to start with next quality improvements now, and expect them to land in the next four weeks
|
|
|
|
Deleted User
|
2021-06-18 06:21:24
|
Ok, I'll compile now then.
|
|
2021-06-18 06:21:36
|
And thanks for your work!
|
|
|
Jyrki Alakuijala
|
2021-06-18 06:34:41
|
No, thank you all for testing and bringing weaknesses to our attention! ๐
|
|
2021-06-18 06:34:55
|
I'm particularly curious what the main weaknesses are now after this change
|
|
|
fab
|
2021-06-18 06:42:10
|
why are there only two commits from you
|
|
2021-06-18 06:42:29
|
if the libjxl github repo has been public for 2 months
|
|
2021-06-18 06:42:54
|
did you make some collaborative commits or development
|
|
|
Jyrki Alakuijala
|
2021-06-18 06:46:48
|
I'm a silly manager
|
|
2021-06-18 06:46:55
|
I did collaborative work, too
|
|
2021-06-18 06:47:16
|
delta palettization changes are for example my design
|
|
2021-06-18 06:47:35
|
the last three months have been to make things work with Chrome
|
|
2021-06-18 06:47:39
|
security fixes
|
|
2021-06-18 06:47:44
|
adding missing features
|
|
2021-06-18 06:48:09
|
(HDR, progression, ...)
|
|
2021-06-18 06:48:18
|
reducing memory use
|
|
2021-06-18 06:48:27
|
reducing binary footprint
|
|
2021-06-18 06:48:35
|
simplifying APIs
|
|
2021-06-18 06:48:55
|
my contributions in image quality would have likely brought more chaos than helped at that time ๐
|
|
|
fab
|
2021-06-18 06:50:30
|
delta palettization is from 2 years ago
|
|
2021-06-18 06:50:37
|
i'm talking about this month
|
|
2021-06-18 06:50:57
|
like, are the commits of Jon and Eustas made only by them
|
|
2021-06-18 06:51:11
|
or are there other people who do collaborative work and help them
|
|
|
Jyrki Alakuijala
|
2021-06-18 06:52:35
|
Iulia submitted a small improvement to delta palettization a week or two ago
|
|
2021-06-18 06:52:57
|
but I try to guide the whole project, that's my job
|
|
2021-06-18 06:53:52
|
my colleagues (including myself) are critical thinkers (and busy, there is a lot of work), so they don't always listen, which means that I sometimes need to roll up the sleeves and code myself ๐
|
|
2021-06-18 06:54:14
|
also, I have failed to delegate the image quality work efficiently so that is still on my plate
|
|
2021-06-18 06:55:06
|
the delta palettization as it is implemented now was designed likely around October 2020 -- we had a version of it earlier, but that was stripped away
|
|
2021-06-18 06:55:25
|
the previous version had missing details that made it not efficient
|
|
|
fab
|
2021-06-18 06:55:41
|
should i use delta
|
|
2021-06-18 06:56:37
|
palette is for icons?
|
|
2021-06-18 06:56:42
|
for screenshots?
|
|
|
Jyrki Alakuijala
|
2021-06-18 07:25:50
|
no one knows, it is still WIP
|
|
2021-06-18 07:26:02
|
occasionally it is better than respective quality jxl:d1 or so
|
|
2021-06-18 07:26:12
|
as measured by bpp * pnorm
|
|
2021-06-18 07:26:19
|
(like 20 % better)
|
|
2021-06-18 07:26:31
|
and of course the artefacts are very different
|
|
2021-06-18 07:26:43
|
likely best to ignore it for now
|
|
2021-06-18 07:35:22
|
if you try it for icons and it works much better -- people would be delighted to hear about it
|
|
2021-06-18 07:35:39
|
also, if you cannot find any use for it -- that would be good knowledge for us
|
|
|
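For reference, here is a minimal sketch of the bpp * pnorm comparison mentioned above. The file sizes and pnorm values are made-up placeholders, chosen only to illustrate a roughly 20 % gap like the one quoted; in practice bpp comes from the compressed size and the image dimensions, and pnorm from a butteraugli-style distortion measurement.

```python
# Sketch of comparing two encodes by bpp * pnorm (lower is better).
# All numbers below are hypothetical placeholders.

def bpp(file_size_bytes: int, width: int, height: int) -> float:
    return file_size_bytes * 8 / (width * height)

def score(file_size_bytes: int, width: int, height: int, pnorm: float) -> float:
    # Smaller file and less perceptual distortion both lower the score.
    return bpp(file_size_bytes, width, height) * pnorm

w, h = 1920, 1080
vardct_d1 = score(260_000, w, h, pnorm=1.10)   # hypothetical VarDCT d1 encode
delta_pal = score(240_000, w, h, pnorm=0.95)   # hypothetical delta-palette encode
print(f"VarDCT d1     : {vardct_d1:.3f}")
print(f"delta palette : {delta_pal:.3f}")
print(f"delta palette better by {100 * (1 - delta_pal / vardct_d1):.0f} %")
```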
lithium
|
|
Jyrki Alakuijala
Lee and Fabian, you may want to try after https://github.com/libjxl/libjxl/pull/172
|
|
2021-06-18 07:44:52
|
Hello <@!532010383041363969>, thank you for your work ๐
I did some quick tests on my issue image and another one,
> https://discord.com/channels/794206087879852103/803645746661425173/852311635984777298
It looks like on the current master branch -d 0.5 -s 8 is safer for those cases, while -d 1.0 is still in the risk zone.
In my tests, anime content with dark colors (red, blue, purple) and smooth, specific gradient areas shows noticeable errors in VarDCT too early (at d1.0).
(On older jxl versions, -d 0.5 would already show noticeable errors.)
|
|
|
fab
|
2021-06-18 07:45:50
|
jyrki is a professional, he has several degrees
|
|
2021-06-18 07:46:06
|
he can see more colors than what the Sony WX60 can produce
|
|
2021-06-18 07:46:13
|
surely
|
|
2021-06-18 07:46:42
|
even without hdr we can have good colors
|
|
2021-06-18 07:48:41
|
do you work at ISO?
|
|
2021-06-18 07:49:04
|
only on linkedin is there information about the developer
|
|
2021-06-18 07:49:20
|
other than manager of JPEG XL
|
|
|
_wb_
|
2021-06-18 10:16:19
|
Jyrki leads the Google compression team that works on jxl, basically besides me all jxl devs have jyrki as their boss, so to speak :)
|
|
|
raysar
|
2021-06-19 12:55:28
|
I compiled the latest main build for Windows, if you want to test the amazing update ๐ Thank you Jyrki.
Commit: f6d3a89f
https://1drv.ms/u/s!Aui4LBt66-MmixqNVKi4mER1-P6o?e=THZs75
|
|
|
Jyrki Alakuijala
|
|
fab
do you work at ISO?
|
|
2021-06-19 07:57:20
|
I have looked at images for butteraugli, guetzli and JPEG XL enough that my eyesight got another -1 in dioptre. As one example, the calibration corpus for butteraugli has 2500 image pairs that I rated carefully. Often for quality changes I review 120-160 images (40 images at 3-4 different qualities). There were many quality changes during JPEG XL and as a consequence I have learned to look at images. Even with this experience I'm far from perfect and I still learn more when other people give me evidence on what is important for them in image quality.
|
|
2021-06-19 08:01:05
|
I have decided that as a manager I will try to let my team members to represent us externally -- this will help our team to have less information bottlenecks and will induce more growth in our team. I have visited about 8 of the ISO meetings for JPEG XL, i.e., not all of them, and my participation in ISO was lighter than Jan's, Jon's, Lode's or Luca's. I'm still rather proud of two achievements there: convincing the ISO about the lossless jpeg recompression as a new requirement for JPEG XL and for getting Jon and us ("eternal competitors from FLIF/WebP lossless times") to work together.
|
|
2021-06-19 08:11:04
|
No, I didn't use that or any other such program
|
|
2021-06-19 08:15:52
|
Earlier, when I was coding more furiously, I used a program that forced me to take a 3 min break every hour.
|
|
2021-06-19 08:28:11
|
That sounds interesting -- it is the first time I hear that someone does that
|
|
2021-06-19 08:28:26
|
how do you avoid breaking the flow with that
|
|
2021-06-19 08:28:45
|
perhaps the break is short enough that it doesn't stop the 'flow'
|
|
2021-06-19 08:28:55
|
what do you do during the 20 seconds?
|
|
2021-06-19 08:30:42
|
do you use a timer or is it spontaneous?
|
|
2021-06-19 08:31:39
|
I'm optimizing the heuristics after the decision tree reversal (best 8x8 first) -- currently the metrics suggest 2-3 % further improvement, and the optimization (tools/optimizer/simplex_fork.py) has completely eliminated the dct8 ๐
|
|
2021-06-19 08:32:59
|
it is mostly replaced by dct 4x8 and 8x16s right now
|
|
2021-06-19 08:33:14
|
it probably won't look great -- but butteraugli and the optimization like this approach now
|
|
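As an aside, tools/optimizer/simplex_fork.py is the optimizer referenced above. The snippet below is not that script, just a toy illustration of simplex-style parameter tuning (here via scipy's Nelder-Mead), with a stub objective standing in for "encode the test corpus and measure bpp * pnorm".

```python
# Toy illustration of simplex-style tuning of encoder heuristic
# parameters (NOT tools/optimizer/simplex_fork.py).
import numpy as np
from scipy.optimize import minimize

def objective(params: np.ndarray) -> float:
    # Stub: a real objective would re-encode a test corpus with the
    # candidate parameters and return an aggregate bpp * pnorm score.
    # This smooth toy function has its minimum at (0.3, 1.5).
    x, y = params
    return (x - 0.3) ** 2 + 0.5 * (y - 1.5) ** 2 + 1.0

start = np.array([1.0, 1.0])   # initial guess for two heuristic knobs
result = minimize(objective, start, method="Nelder-Mead")
print("best parameters:", result.x)
print("best score:", result.fun)
```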
2021-06-19 08:34:58
|
nowadays about half of my work is meetings; there I can be less focused (with vision and hands) than with coding/image quality work
|
|
2021-06-19 08:35:10
|
as a consequence I don't need to worry much about breaks
|
|
2021-06-19 08:35:32
|
meetings are more compatible with the basic physiology of people than coding and image quality ๐
|
|
|
lithium
|
2021-06-19 08:53:36
|
If you have to sleep or have other things you need to do, how do you preserve your coding flow?
|
|
|
Jyrki Alakuijala
|
2021-06-19 08:55:25
|
sleep is more important ๐ you just do nothing that might disturb sleep
|
|
|
lithium
|
2021-06-19 08:57:51
|
Sometimes I can't sleep very well when my code has some unexpected behavior. ๐ข
|
|
|
Jyrki Alakuijala
|
2021-06-19 08:59:53
|
Lee, that is exactly why we don't enable exceptions in C++
|
|
2021-06-19 08:59:58
|
(sorry for a bad joke)
|
|
|
lithium
|
2021-06-19 09:05:23
|
Which programming paradigm does the jxl dev team like?
Object-oriented programming or functional programming?
|
|
|
Jyrki Alakuijala
|
2021-06-19 09:06:35
|
we don't talk about it much
|
|
2021-06-19 09:06:53
|
it is more like there is this problem, let's solve it
|
|
2021-06-19 09:07:58
|
from what I observe there is a lot of willingness to explore
|
|
2021-06-19 09:08:45
|
the tooling is possibly best for C++ right now for this kind of problem, but the language is rather complicated and somewhat dangerous
|
|
2021-06-19 09:10:42
|
some SW developers are extremely passionate about languages -- I don't observe much of this kind of extreme passion/language advocacy in our team; people are pragmatic
|
|
2021-06-19 09:11:15
|
early in my career I observed a team writing a new version of a complex product in C++ when the old version was in Fortran
|
|
2021-06-19 09:11:49
|
the product level productivity difference was much less than I ever expected from my passionate youth ๐
|
|