JPEG XL

jxl

Anything JPEG XL related

fab
2021-02-01 07:13:54
what i lost?
2021-02-01 07:14:05
is the percentage high?
2021-02-01 07:14:29
veluca said wait for 0.3.1
lonjil
_wb_ this is default cjxl, with only the DC and patches decoded (what you get with `djxl -s 8`)
2021-02-01 07:19:18
how do you choose specifically what things you want to decode? do you have some sort of tool for that or did you just make something bespoke with libjxl for that example?
_wb_
2021-02-01 07:21:18
Well `-s 8` stops when the AC starts, so it's nice to see the fancy-upscaled DC plus anything else that is signalled at the DC group level, i.e. splines and patches
lonjil
2021-02-01 07:27:18
ahhh
2021-02-01 07:27:20
I see
BlueSwordM
2021-02-01 08:28:19
So, in regards to `resampling=X`, why does it seem to hurt decoding speed?
2021-02-01 08:28:36
Does it have to decode the image twice or do some demanding operation related to resampling?
veluca
2021-02-01 08:41:53
because upsampling is single-threaded and not simdified at the moment
2021-02-01 08:42:18
it's not an inherent limitation or anything like that, we just didn't get to it yet
BlueSwordM
2021-02-01 08:43:39
Oh ok.
veluca
2021-02-01 08:44:30
it should probably be about 10x faster than how it is now
BlueSwordM
2021-02-01 08:45:06
That's why using -march at compile time results in a non-negligible speed increase for decoding and encoding with resampling. 🤔 I didn't actually catch it at first.
veluca
2021-02-01 08:45:11
8x from SIMD, the rest from actually optimizing the code a bit
_wb_
2021-02-01 09:11:12
it's single-threaded?
2021-02-01 09:11:25
should be quite straightforward to multi-thread it, no?
veluca
2021-02-01 09:11:37
yup, working on it 😛
lonjil
2021-02-01 10:44:53
I tried the same thing as Jon and hell it's pretty cool
Nova Aurora
2021-02-02 01:06:50
simply trying it doesn't seem to do so
yllan
2021-02-02 01:09:22
1) The key cap height is higher than usual. 2) The Bluetooth connection is not stable; constant disconnections or repeated keys are reported.
Nova Aurora
2021-02-02 01:09:46
losslessly encoding the jpeg > jxl > back to jpeg does produce images with all of the exif information
2021-02-02 01:10:03
in fact the files have the same checksum
2021-02-02 01:10:31
but telling the JXL to lossy encode won't
2021-02-02 01:11:52
I don't know, could simply be that they haven't implemented it yet
2021-02-02 01:14:34
There probably isn't a function for that, since it won't do it for any image format. The lossless compression involves the encoder finding and compressing all the metadata to avoid losing any of it for later, but there's probably not a function in the transcoder yet
raysar
2021-02-02 05:57:14
My personal and family picture folder is 1.28 TB of raw and jpeg 😄 I'm waiting for jxl windows thumbnails and raw software compatibility to do some conversions 😄
_wb_
2021-02-02 06:03:32
Yes, also horizontal/vertical flip, 180 degrees, and transpose.
2021-02-02 06:04:13
Exif/XMP handling is something we should add
2021-02-02 06:04:52
It's just a boring and trivial thing to do, technically. Just blobs.
2021-02-02 06:08:01
We do have compressed or uncompressed Exif/XMP as a feature in the file format, just cjxl is currently stripping that stuff and djxl doesn't bother with it either
2021-02-02 06:08:49
We should change that, some people do care about their metadata
2021-02-02 06:13:55
Well, currently when doing jpg to jxl, it does preserve the exif, but it treats it like any other non-image-data that can be in the bitstream
2021-02-02 06:14:16
So you do get it back when decoding back to jpg
2021-02-02 06:14:47
But only then
Diamondragon
2021-02-02 07:43:37
Wow, that image is one tenth the size as a jxl! squoosh doesn't even open it.
2021-02-02 07:46:35
Really? I guess I shouldn't have made djxl switch it to png.
_wb_
2021-02-02 09:57:30
please give this issue a 👍 : https://github.com/Fyrd/caniuse/issues/5041
lithium
2021-02-02 10:24:17
Hello, I read some posts saying that DWT can compress non-photographic images well. What's the difference between DWT and reversible Haar-like squeezing as compression methods? And why can't DCT compress some non-photographic (synthetic) content well? Could you teach me about this?
_wb_
2021-02-02 10:30:41
not sure if DWT like in JPEG 2000 is any better than DCT for non-photographic, tbh
2021-02-02 10:37:32
The Haar 'wavelet' is the simplest DWT: basically in one step it just takes two neighboring pixels `A` and `B` and replaces them with one pixel that is `(A+B)/2` (creating a 50% downscaled in one dimension image) and one residual pixel that is `A-B`. If you do it like that, it is reversible in integer arithmetic.
2021-02-02 10:39:26
Squeeze does that, but instead of storing `A-B` as the residual, it stores `A-B-t` where `t` is a nonlinear prediction that is basically linear interpolation in locally monotone areas (like gradients) and 0 in locally non-monotone areas (like edges).
2021-02-02 10:40:36
If you quantize the residuals to zero, you basically get something that is smooth in smooth areas and blocky in non-smooth areas.
2021-02-02 10:41:24
The nice thing about the nonlinear prediction term `t` is that it avoids the usual problem of ringing around edges.
2021-02-02 10:43:46
DCT and DWT both suffer from ringing because when high-frequency coefficients get dropped, the resulting reconstruction will have overshoot/halos caused by the low-frequency signal not being corrected anymore by the high-frequency
2021-02-02 10:46:46
Squeeze avoids ringing while also avoiding the complete blockiness you'd get if you would just do quantized Haar without the `t` term
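A minimal sketch of one horizontal Squeeze step as described above, assuming integer pixels and a simplified stand-in for the tendency predictor `t` (the real libjxl formula is more elaborate, but like this one it only uses the averages, so a decoder can recompute it):
```
def squeeze_step(row):
    # Pair up neighbours (A, B): store their integer average (the 50%
    # downscaled image) and the residual A - B - t.  The tendency t is
    # computed only from the averages, so a decoder can redo the same
    # prediction and invert the step exactly in integer arithmetic.
    avg = [(row[i] + row[i + 1]) >> 1 for i in range(0, len(row) - 1, 2)]
    res = []
    for j in range(len(avg)):
        a, b = row[2 * j], row[2 * j + 1]
        prev = avg[j - 1] if j > 0 else avg[j]
        nxt = avg[j + 1] if j + 1 < len(avg) else avg[j]
        # locally monotone (gradient-like): predict A - B by interpolating
        # the neighbouring averages; otherwise (edge-like) predict 0
        t = (prev - nxt) >> 1 if (prev - avg[j]) * (avg[j] - nxt) > 0 else 0
        res.append(a - b - t)
    return avg, res
```
Quantising `res` toward zero then reconstructs each pair close to its average plus the tendency, which is the smooth-in-smooth-areas, blocky-but-not-ringing-at-edges behaviour described above.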
OkyDooky
2021-02-02 11:06:44
I would claim the 5/3 biorthogonal DWT is "almost identical" (in effect) to Squeeze + "nonlinear tendency prediction" (except that Squeeze probably gives you slightly more vanishing "residuals" around edges) (<@794205442175402004>)
2021-02-02 11:10:06
oh, i thought we were talking about lossless ... yeah, if quantization enters the picture, the 5-tap HF reconstruction filter *would* be responsible for more ringing around edges compared to the Squeeze
Jyrki Alakuijala
Is there some parameter or assumption about the size of a pixel as projected onto the retina (w.r.t. the quantization matrices)?
2021-02-02 11:18:21
The assumption is that the viewing distance is 900 pixels.
[Edit](https://discord.com/channels/794206087879852103/794206170445119489/805541910642163713): (I guess, to some extent this can be controlled with the quality slider?)
2021-02-02 11:20:46
Quality has similarities with viewing distance control, but real viewing distance control would be more effective. Today, butteraugli is calibrated at a viewing distance of 900 pixels and we don't have a scheduled version of freely defined viewing distance for butteraugli. This has led to a situation where the whole of JPEG XL works best at this distance.
lithium
_wb_ Squeeze avoids ringing while also avoiding the complete blockiness you'd get if you would just do quantized Haar without the `t` term
2021-02-02 11:27:08
thank you very much 🙂
2021-02-02 11:38:03
in Butteraugli the viewing distance is 1000, in Butteraugli_jxl the viewing distance is 900; source: "Butteraugli has been tuned for a viewing distance of 1000 pixels. Often the practical viewing distance is 2000 pixels."
fab
2021-02-02 11:43:42
gaborish off is destructive like the edit i did to opus 1.3.1-64 (opusrug)
2021-02-02 11:44:01
if i remember i did -q 64.4, you can search
lithium
2021-02-02 11:50:51
If you turn off gaborish, you will get less high frequencies than normally. Things get blurry or more quantized as the stock quantization matrices are optimized with gaborish on. Turning off gaborish increases block artefacts. Turning off gaborish reduces efficiency of further filtering. source: Jyrki Alakuijala post
_wb_
2021-02-02 11:53:49
Turning gaborish off is probably not something you want to do at lower qualities, it mostly makes sense to turn it off at qualities that are high enough to avoid block artifacts anyway
fab
2021-02-02 11:54:45
thanks lee
2021-02-02 11:55:25
the opus encode i did deleted some Portuguese consonants, like the one in shtiva
2021-02-02 11:56:21
but i think with gaborish off the file size is larger (with most of the images)?
2021-02-02 11:56:26
so this is a difference.
2021-02-02 11:57:43
don't do
2021-02-02 11:57:48
whatsapp images aren't supported
2021-02-02 11:58:00
you get an "image failed" error
2021-02-02 11:58:22
at least check one by one
2021-02-02 11:58:32
small folders
veluca
2021-02-02 12:00:09
do tell us if some images fail - we tried on a bunch of internet images, but some things might have slipped through... (like the weird empty dht markers in whatsapp's JPEGs)
fab
2021-02-02 12:00:40
when there is a fix in next version tell us
2021-02-02 12:00:44
if there is
2021-02-02 12:00:51
for me it isn't a problem, it's only 4 images
2021-02-02 12:01:02
i can convert them to png
veluca
2021-02-02 12:01:14
jpeg1 is a bit of a mess, you can do a lot of weird stuff like having more quantization tables than channels (??)
fab
2021-02-02 12:01:18
ah no, the jpg lossless recompressor doesn't work on these
2021-02-02 12:01:54
you can do only lossy and modular
2021-02-02 12:01:58
if you convert to png
2021-02-02 12:02:04
and less efficiency
veluca
2021-02-02 12:02:14
you could run it through jpegoptim, I think it should get rid of the weird stuff
fab
2021-02-02 12:02:43
ok if someone knows how to do
2021-02-02 12:02:51
i don't have a mac
veluca
2021-02-02 12:04:06
I don't know on windows, on linux it ought to be enough to do something like `jpegtran -optimize -outfile [output] [input]`
fab
2021-02-02 12:04:14
ok
2021-02-02 12:04:18
thank all
2021-02-02 12:05:30
so no fix?
2021-02-02 12:05:47
whatsapp images will remain jpg?
2021-02-02 12:05:58
how will cloudinary act if an image fails?
veluca
2021-02-02 12:05:59
working on it
fab
2021-02-02 12:06:07
you should think about
veluca
2021-02-02 12:08:24
the obvious thing to do is to keep it as jpeg
lithium
2021-02-02 12:11:35
mozjpeg jpegtran.exe
2021-02-02 12:11:37
```
@echo off
cd /d %1
for /r %%A in (*.jpg) do (
  echo %%A
  "%~dp0jpegtran.exe" -optimize -progressive -copy none -outfile "%%A" "%%A"
)
pause
```
veluca
2021-02-02 12:12:16
I can confirm that (at least on one image) jpegtran + lossless jpeg recompression works even on WA jpegs
fab
2021-02-02 12:21:52
https://gitlab.com/wg1/jpeg-xl/-/issues/122
Fox Wizard
2021-02-02 12:43:00
That jpg is a big boi
2021-02-02 12:43:18
I like how lossless optimization reduced it by about 15% though
2021-02-02 12:44:25
Didn't mean jxl :p
2021-02-02 12:44:41
But jxl gives much better results than optimization anyways
2021-02-02 12:44:55
Optimized will be 27.3 MB
2021-02-02 12:46:14
Nice
2021-02-02 12:46:49
Wish more applications would support arithmetic encoding <:reecat:786377447381532682>
2021-02-02 12:47:18
With that and optimizations the original jpg would be 25.1 MB
fab
2021-02-02 12:49:05
i use -s 4 -q 99.2
_wb_
2021-02-02 12:52:59
we should make a strip encoder at some point, encoding 256 rows at a time instead of doing everything with full buffers
2021-02-02 12:53:26
just annoying to do, code-wise
2021-02-02 12:53:49
especially if you also want to do the png / ppm reading in strips
2021-02-02 12:55:38
it's conceptually very simple, just code-wise very annoying because the stuff that gets passed are full images in the current code, so would need to update lots of code
Fox Wizard
2021-02-02 01:03:13
Gotta love the ``Compressed`` option
_wb_
2021-02-02 01:04:49
wow that AI upscaler is doing a nice job
2021-02-02 01:04:53
which one is that?
Fox Wizard
2021-02-02 01:05:25
I like how it's usually either a hit or miss
2021-02-02 01:05:46
Sometimes images look <:PepeOK:805388754545934396> and sometimes it does some weird stuff or no noticeable difference
2021-02-02 01:05:50
Rip
_wb_
2021-02-02 01:06:03
I'll have to ask our AI guys about it
Fox Wizard
2021-02-02 01:06:17
Worked well for a Skyrim screenshot, but it has more predictable patterns
2021-02-02 01:06:38
https://cdn.discordapp.com/attachments/786806687285248001/797011581598171176/ScreenShot6.png
2021-02-02 01:07:32
Used an older version though, so might give different results with newer versions (that screenshot is from around mid 2020)
2021-02-02 01:08:17
But it made people believe I play the game... with 100+ mods at 5k <a:dogeevil:749457940041302078>
2021-02-02 01:08:36
Even dual 3090 couldn't handle that... if it were to work perfectly
2021-02-02 01:09:08
That one looks rip <:SadCat:805389277247701002>
2021-02-02 01:09:53
Another fun thing, they also have a good video enhancer. Gave me very good results with several videos, but also some bad results (you need a decent quality input for it to give the best results)
_wb_
2021-02-02 01:12:55
it decodes fine for me
2021-02-02 01:13:08
what thumbnailer is that?
2021-02-02 01:14:15
I guess so
spider-mario
_wb_ it's conceptually very simple, just code-wise very annoying because the stuff that gets passed are full images in the current code, so would need to update lots of code
2021-02-02 01:16:16
maybe we should have written the encoder in Haskell to get automatic streaming behavior everywhere 😉 😉
_wb_
2021-02-02 01:17:02
At some point I want to make a list of software that added jxl support - a long list with everything, and a short list with 'recommended' stuff that works well and is maintained well
2021-02-02 01:17:24
Haskell haha yes
spider-mario
2021-02-02 01:17:52
that would definitely solve our memory consumption issues
fab
2021-02-02 01:46:05
what version of topaz gigapixel you have?
2021-02-02 01:46:13
is the one with new AI?
2021-02-02 01:49:22
2021-02-02 01:49:28
how to do it doesn't move
2021-02-02 01:50:36
ah i need to use C
2021-02-02 01:52:22
now is working but is bad
2021-02-02 01:53:04
ah now is doing progressive
2021-02-02 01:53:18
at the moment only color pixel and full color
2021-02-02 01:53:25
not black and white to color etc.
2021-02-02 01:53:57
photos look a bit cartoony, i guess it's the decoder enhancement
Jyrki Alakuijala
lonjil I've seen a few mentions of the lumen target affecting butteraugli targetting. How does it work, specifically?
2021-02-02 01:55:11
In 0.2 we assume a default of 255 nits; in 0.3 I changed it to 80 nits, which gives better overall performance, particularly so at low bpp
fab
2021-02-02 01:55:53
windows 7 viewer photo don't open it
2021-02-02 01:55:59
i still need nomacs
Jyrki Alakuijala
lonjil I've seen a few mentions of the lumen target affecting butteraugli targetting. How does it work, specifically?
2021-02-02 01:57:11
physical light is multiplied with that before it goes into the L,M,S spectra collection and XYB separation -- very high intensities reduce the ability to see color due to logarithmic compression in the receptors, very low intensities have another mechanism for reducing color (due to shot noise filtering in the eye)
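A rough sketch of that scaling step, assuming relative linear-light RGB in [0, 1]; the LMS mixing and the XYB nonlinearity that follow in libjxl are not reproduced here:
```
def to_absolute_light(linear_rgb, intensity_target_nits):
    # Relative linear light is multiplied by the intensity target
    # (e.g. 255 or 80 nits) before the L,M,S collection and XYB
    # separation, so the perceptual model sees absolute luminance
    # levels rather than display-relative ones.
    return [c * intensity_target_nits for c in linear_rgb]
```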
fab
2021-02-02 01:58:24
modular is doing a lot of progressive for just 3 - 4 images
lonjil
Jyrki Alakuijala physical light is multiplied with that before it goes into the L,M,S spectra collection and XYB separation -- very high intensities reduce the ability to see color due to logarithmic compression in the receptors, very low intensities have another mechanism for reducing color (due to shot noise filtering in the eye)
2021-02-02 01:58:44
Oh, that's interesting!
_wb_
2021-02-02 02:07:14
<@!532010383041363969> are you Jyrki?
2021-02-02 02:07:49
the change to 80 nits is for 0.3.1, right? the 0.3 that is on the public gitlab is still using the 255 nits default I think
2021-02-02 02:52:55
Interesting, could be a grayscale bug
Fox Wizard
2021-02-02 02:52:58
AI enhanced image <a:thinkShape:676826305089634304>
_wb_
2021-02-02 02:53:38
Grayscale is an annoying source of edge case bugs
Fox Wizard
2021-02-02 02:54:16
Gotta love grayscale. A lot of things make it look kinda oof
2021-02-02 02:54:39
Wonder why though
_wb_
2021-02-02 02:58:39
Then it's probably something else
2021-02-02 02:59:06
Cannot investigate atm, upgrading my laptop to Ben Hur
2021-02-02 02:59:16
I mean Big Sur
Nova Aurora
2021-02-02 03:00:50
You're upgrading to IOS?
2021-02-02 03:02:32
The changes they made to big sur's UI just seem to make it worse for desktop and laptop use
_wb_
2021-02-02 03:04:50
I need to upgrade to Big Sur to investigate how Apple messed up WebP
2021-02-02 03:05:52
Hm
2021-02-02 03:07:22
There seems to be something buggy in the buffer passing with alpha. Is the PNG an 8-bit one or a 16-bit one?
2021-02-02 03:08:23
Do you have a djxl compiled with debug stuff?
2021-02-02 03:08:41
Or only the release build?
2021-02-02 03:09:34
Ok np i will take a look soon when this upgrade process is done
2021-02-02 03:11:33
It's easy enough if you have a linux environment
2021-02-02 03:11:59
In other platforms things are less convenient imo
Nova Aurora
2021-02-02 03:13:16
yeah I looked at building on windows and wondered why it's more complicated than sudo pacman -S base-devel then following the cmake instructions
2021-02-02 03:14:25
> djxl china-joe-biden_GIGA_JPEGXL_d0_s7_LOSSLESS.jxl c.jpg
> Notice: Decoding to pixels and re-encoding to JPEG file. To decode a losslessly recompressed JPEG back to JPEG pass --jpeg to djxl.
> Read 4204652 compressed bytes [v0.3.0 | SIMD supported: AVX2]
> Failed to write decoded image.
2021-02-02 03:14:53
I got that attempting to decode it using my build
2021-02-02 03:15:04
which doesn't have debug enabled
_wb_
2021-02-02 03:15:33
Oh you are decoding to jpg?
2021-02-02 03:16:27
Ok
Nova Aurora
2021-02-02 03:16:33
> djxl --jpeg china-joe-biden_GIGA_JPEGXL_d0_s7_LOSSLESS.jxl c.jpg
> Read 4204652 compressed bytes [v0.3.0 | SIMD supported: AVX2]
_wb_
2021-02-02 03:16:37
C'mon Ben Hur
Nova Aurora
2021-02-02 03:16:53
It gives that then exits without writing the file
2021-02-02 03:21:07
Interestingly enough, qt jpeg xl can view it
Crixis
2021-02-02 03:21:44
regression bug?
Nova Aurora
2021-02-02 03:30:14
can I compile jxl faster than it takes jon to get his computer back?
Crixis
Nova Aurora can I compile jxl faster than it takes jon to get his computer back?
2021-02-02 03:30:46
he seems online from desktop
2021-02-02 03:30:50
so yes
Nova Aurora
2021-02-02 03:52:54
I could decode to png, just not jpg or losslessly to jpeg
_wb_
2021-02-02 03:54:56
cannot reproduce, my djxl decodes it fine
2021-02-02 03:55:48
decoding to jpeg does not work, but that's expected
2021-02-02 03:55:52
```
$ ../tools/djxl china-joe-biden_GIGA_JPEGXL_d3_s7.jxl china-joe-biden_GIGA_JPEGXL_d3_s7.jxl.jpg
Notice: Decoding to pixels and re-encoding to JPEG file. To decode a losslessly recompressed JPEG back to JPEG pass --jpeg to djxl.
Read 468886 compressed bytes [v0.3.0 | SIMD supported: SSE4,Scalar]
../lib/extras/codec_jpg.cc:615: JXL_FAILURE: alpha is not supported
Failed to write decoded image.
```
2021-02-02 03:56:15
cjxl should be smarter to drop trivial all-opaque alpha though
2021-02-02 03:56:53
and djxl should give a nicer error, you don't see the debug message in release builds...
Nova Aurora
2021-02-02 04:29:05
I can decode it to a png
2021-02-02 04:29:38
I was trying to decode to jpg earlier which is apparently not supported
spider-mario
Nova Aurora yeah I looked at building on windows and wondered why it's more complicated than sudo pacman -S base-devel then following the cmake instructions
2021-02-02 04:29:41
actually, this kind of works if you are using msys2
Nova Aurora
2021-02-02 04:30:12
manjaro, but I have a windows machine
spider-mario
2021-02-02 04:30:13
msys2 means that a lot of stuff is much less painful to build on windows than it used to be
2021-02-02 04:33:04
I don’t use nomacs but I do use windows and have a working djxl there
2021-02-02 04:33:28
let me try
2021-02-02 04:33:58
seems to work
2021-02-02 04:34:13
```
Read 468886 compressed bytes [v0.3.0 | SIMD supported: AVX2,SSE4,Scalar]
Done.
3600 x 2772, 24.58 MP/s [24.58, 24.58], 1 reps, 4 threads.
Allocations: 1393 (max bytes in use: 5.291246E+08)
```
2021-02-02 04:34:38
yes
Nova Aurora
2021-02-02 04:39:31
Yeah I was going to ask how old your djxl is
2021-02-02 04:43:23
hopefully
BlueSwordM
2021-02-02 05:03:52
The image is still problematic on my end with the latest JXL build installed, and the latest QT JPEG-XL build.
2021-02-02 05:03:59
I wonder why. 🤔
fab
2021-02-02 06:50:19
how to open more images in one nomacs window?
2021-02-02 06:59:46
also
2021-02-02 06:59:59
do you know near lossless improvements in jpeg xl?
2021-02-02 07:00:12
like i used -s 4 -q 99.2
2021-02-02 07:00:29
how to have less file size at same perceived quality?
2021-02-02 07:01:06
last i tried near lossless had grooves
_wb_
2021-02-02 07:07:12
Grooves?
2021-02-02 07:07:26
https://c.tenor.com/sTFh2vAQCP0AAAAM/groovy-austin-powers.gif
Nova Aurora
2021-02-02 07:07:48
I LOOOOOOOOOVE GOOOOOOLD!
Fox Wizard
2021-02-02 08:43:55
<a:gold:775147514084851752>
Jyrki Alakuijala
_wb_ <@!532010383041363969> are you Jyrki?
2021-02-02 09:13:06
haha, ryomivasalama2 is my previous lichess account, don't know how it got propagated here 😛, yes I'm Jyrki
Nova Aurora
2021-02-02 09:14:28
I remember trying to play online chess
2021-02-02 09:14:34
I thought I was so good
2021-02-02 09:14:44
then I lost in like 8 turns
2021-02-02 09:15:10
and had people running circles around me
raysar
2021-02-03 01:05:55
<@!111689857503375360> why are you using -s 7? and not -s 8? (ok s9 is too slow)
Nova Aurora
2021-02-03 04:03:43
Is the draft standard available for free anywhere or is it only available by paying ISO?
raysar
Nova Aurora Is the draft standard available for free anywhere or is it only available by paying ISO?
2021-02-03 06:21:16
2019-08-05 is the draft i see; maybe there is a more recent one.
_wb_
2021-02-03 06:22:40
The draft is alas quite obsolete
2021-02-03 06:23:12
And ISO doesn't allow us to distribute anything more recent, sadly
2021-02-03 06:23:40
They want to sell the thing for shiny swiss francs
BlueSwordM
2021-02-03 06:23:43
Is this the updated one in question? https://www.iso.org/standard/77977.html If so, it's not very expensive. I'd be happy to buy it for myself.
_wb_
2021-02-03 06:24:04
That's still the DIS
2021-02-03 06:24:10
Also obsolete
2021-02-03 06:25:14
The FDIS is what you need, but the national bodies still need to approve it before ISO will put it behind its paywall
BlueSwordM
2021-02-03 06:25:51
Oh right, it's still in the inquiry phase. I did not notice that on the page itself.
_wb_
2021-02-03 06:30:00
Theoretically the national bodies could still shoot it down
2021-02-03 06:30:43
But that's unlikely
Nova Aurora
2021-02-03 06:32:08
170 CAD oof
_wb_
2021-02-03 06:32:12
ISO's non-open standards policy is annoying, I hope we can somehow get them to make it an open standard
2021-02-03 06:33:27
We spent a lot of time writing that spec, just to have ISO put it behind a paywall, it's kind of frustrating
raysar
2021-02-03 06:49:53
Talking about money, who is paying all the devs for jxl and related projects? Companies don't pay for nothing (for creating an open standard to save bandwidth and storage?). Are there Google employees? And who else? Did you work for Cloudinary when you worked on flif?
_wb_
2021-02-03 06:53:01
I made flif just before I started working for Cloudinary
2021-02-03 06:53:18
In fact flif was probably what got me hired
2021-02-03 06:55:12
Being able to continue work on flif (besides doing Cloudinary-specific stuff) was what they offered me
2021-02-03 06:55:32
So flif became fuif and then got absorbed into jxl
2021-02-03 06:56:45
For Cloudinary, jxl is mostly just a marketing thing: it allows them to show their (prospective) customers that they do have experts in house
2021-02-03 06:57:35
The benefit in bandwidth is mostly just a benefit for Cloudinary's customers, not for Cloudinary
2021-02-03 06:59:28
For Google, I think they just benefit from a better web since Google is large enough to _be_ a significant part of the web
Nova Aurora
2021-02-03 07:00:17
To the point that they make more products like chrome just to keep people on the web
2021-02-03 07:00:47
More people on the web = more eyeballs looking at our ads
raysar
2021-02-03 07:06:03
Who were you working for when you were on the flif?
2021-02-03 07:08:10
<@795684063032901642> <@179701849576833024> <@799692065771749416> <@579137895110279168> ivandeve, you are all working for google and cloudinary?
2021-02-03 07:10:52
<@794205442175402004> is cloudinary working on an FPGA IP for encoding/decoding images? or is your code so powerful :D that cpu is more interesting for mass compression?
Pieter
2021-02-03 07:12:45
<@!231086792315633664> I was a colleague of <@!794205442175402004> when we were both at university still, and we worked on what later became FLIF as a side project. I haven't contributed to FLIF/FUIF/JPEG-XL afterwards.
_wb_
raysar Who were you working for when you were on the flif?
2021-02-03 07:13:32
I was doing a postdoc at KU Leuven on a completely different topic. FLIF was a side project I did (with <@799692065771749416>) while I was supposed to do other stuff. I was a temporary professor at U Antwerp when FLIF was released, so I was looking for a new job for afterwards and found one at Cloudinary
2021-02-03 07:14:12
The others are all at Google Research in Zurich
Pieter
2021-02-03 07:14:43
Ironically, in 2012-2014 I worked at Google in Zurich, but not on anything even remotely related to image compression.
_wb_
2021-02-03 09:37:04
<@799692065771749416> , I don't remember if I ever asked this, but does "sipa" mean anything?
OkyDooky
2021-02-03 09:41:00
I need to write three sentences about subjective image quality of JPEG XL to my manager's manager's manager. What should I write?
_wb_
2021-02-03 09:48:26
At high fidelity operating points, most relevant to still images, JPEG XL achieves better compression density than other codecs like JPEG, JPEG 2000, WebP, HEIC and AVIF. At low to medium fidelity, it is better than JPEG, JPEG 2000 and WebP, and similar to HEIC and AVIF. For HDR image content, it outperforms any other codec.
OkyDooky
2021-02-03 09:49:41
Jon, I know what you'd write -- I'd like to learn about the sentiment in this forum \:-D
_wb_
2021-02-03 09:51:34
ah sorry, yes I'd also like to know what others think
Crixis
2021-02-03 10:07:50
Jpegxl is the only codec to use a new-generation colorspace to better simulate human perception, mixed compression tools on mixed-content images, and the ability to recompress old jpg without generation loss
lithium
2021-02-03 10:08:02
1. For photographic images, jpeg xl's VarDCT mode can keep more detail (natural content) and compress very well. 2. For non-photographic images, jpeg xl has both VarDCT mode and lossy modular mode, so it can handle different image types and preserve synthetic content fidelity. 3. jpeg xl has different modes (VarDCT, jpeg lossless, lossy modular, lossless modular) to handle different image quality requirements.
raysar
I need to write three sentences about subjective image quality of JPEG XL to my manager's manager's manager. What should I write?
2021-02-03 10:24:59
jxl rules them all! 😎
OkyDooky
2021-02-03 10:25:15
\:-D
_wb_
2021-02-03 10:25:31
https://tenor.com/view/lot-r-lordofthe-rings-gollum-ring-onering-gif-4724569
fab
2021-02-03 10:29:06
how can i get -s 4 -q 99.2 quality on phone screenshots with less space?
2021-02-03 10:29:28
i think that's not the purpose of jpeg xl
raysar
2021-02-03 10:32:11
i have an genius idea of tshirt (or meme) :D i need to works on photoshop ^^
_wb_
2021-02-03 10:34:36
for screenshots you probably want to have `--patches=1` even if that makes it slower
2021-02-03 10:35:05
at least if it's a screenshot with text on it
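A hedged example of what that could look like, reusing the settings fab mentioned above (flag spellings as they appear in this chat; they may differ between cjxl versions):
```
cjxl -q 99.2 -s 4 --patches=1 screenshot.png screenshot.jxl
```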
Jake Archibald
2021-02-03 10:52:31
_waves_
_wb_
2021-02-03 10:56:51
Hi Jake!
2021-02-03 12:34:51
could probably get it smaller as a lossless jxl
Master Of Zen
_wb_ not afaik, but maybe we could make a library version of the metrics (also ssimulacra, and could add more), and then make some python/rust/whatever bindings
2021-02-03 12:56:03
This will be big in a lot of ways. 1. There is a great application for image quality metrics in tuning video codecs; for example the AV1 reference encoder (aomenc) has an option to include VMAF and use it on 64x64/32x32 blocks to pick better quantization values, which (imo) is superior to AQ modes. 2. Just the ability to benchmark video codecs with butteraugli/ssimulacra would be great and way better than SSIM/PSNR. 3. Using butteraugli (or parts of its model) for psycho-visual optimizations in video encoders. 4. New quality rate control modes akin to `-d N` for video encoding would become possible.
spider-mario
2021-02-03 12:58:52
ideally, for a video metric, we would take motion into account, which butteraugli does not presently do
Master Of Zen
spider-mario ideally, for a video metric, we would take motion into account, which butteraugli does not presently do
2021-02-03 01:09:50
That's a hard one. VMAF for example: its temporal part is super simple, it's just the absolute difference in pixel values between the current and previous frame, and it's bad, as it gives huge false positives on frames that have noise. It's not a big issue since the temporal part is weighted low in VMAF.
2021-02-03 01:10:38
+ it doesn't take FPS into account, it's just frame difference
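For illustration, the temporal feature being described boils down to something like the sketch below (a plain mean absolute difference between consecutive frames; this is not VMAF's actual motion-feature implementation):
```
import numpy as np

def mean_abs_frame_diff(prev_frame, cur_frame):
    # Per-pixel absolute difference between consecutive frames, averaged.
    # Frame noise inflates this value, which is the false-positive
    # problem mentioned above.
    diff = cur_frame.astype(np.int32) - prev_frame.astype(np.int32)
    return float(np.mean(np.abs(diff)))
```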
Crixis
Jon, I know what you'd write -- I'd like to learn about the sentiment in this forum \:-D
2021-02-03 01:10:47
What did you write?
blabaal
2021-02-03 01:11:37
Are there any papers on butteraugli? I’d love to use it to benchmark visual loss when compressing medical images but it seems like a black box at the moment.
Master Of Zen
blabaal Are there any papers on butteraugli? I’d love to use it to benchmark visual loss when compressing medical images but it seems like a black box at the moment.
2021-02-03 01:13:31
It has a CLI for image comparison, `exe source_image distorted_image`, that should be fine for your case
blabaal
Master Of Zen It have cli for image comparison, `exe source_image distorted_image`, should be fine for your case
2021-02-03 01:15:10
Thanks! I was thinking more in terms of explaining the internals w/ benchmarks, in order to justify using it over another metric.
veluca
blabaal Thanks! I was thinking more in terms of explaining the internals w/ benchmarks, in order to justify using it over another metric.
2021-02-03 01:15:33
working on it 🙂
fab
2021-02-03 02:56:34
not sure if what i did with other codecs is of interest to you
2021-02-03 03:18:16
cjxl -q 33.3 -s 3 --palette=6 C:\Users\User\Documents\740756671.png C:\Users\User\Documents\screenshot3.jxl
2021-02-03 03:18:21
is this a good command?
_wb_
2021-02-03 03:18:58
--palette=6 will not do anything in that command
fab
2021-02-03 03:19:05
the cjxl says i would have only 6 colors
_wb_
2021-02-03 03:19:09
no
fab
2021-02-03 03:19:10
so i need to add -m
_wb_
2021-02-03 03:19:29
it means IF the image has 6 or fewer colors, it will use palette to encode it
2021-02-03 03:20:01
it's not going to quantize the colors for you
2021-02-03 03:20:10
you can use `pngquant` for that, if you want that
fab
2021-02-03 03:20:32
i want -q 33.3 -s 3 -lossy palette 6
2021-02-03 03:20:35
how to do
_wb_
2021-02-03 03:21:36
you can do `pngquant 6` to reduce to 6 colors, and then do `cjxl -q 100 -m` to losslessly encode the result
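A hedged sketch of that two-step pipeline (pngquant's exact flags may vary between versions):
```
pngquant 6 --output reduced.png input.png
cjxl -q 100 -m reduced.png output.jxl
```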
fab
2021-02-03 03:22:06
is lossy palette=6 possible?
2021-02-03 03:22:09
what is the limit
2021-02-03 03:22:10
only 0
2021-02-03 03:22:17
or there are other options
_wb_
2021-02-03 03:22:18
or you can do `cjxl -m --lossy-palette --palette=0` and it will do a lossy delta palette encoding
fab
2021-02-03 03:22:24
in 0.3.0
_wb_
2021-02-03 03:23:36
cjxl does not currently have a way to do lossy non-delta palette
fab
2021-02-03 03:24:10
so only 0 modular
2021-02-03 03:24:16
not less than 0
2021-02-03 03:24:24
or more than 0
2021-02-03 03:24:32
only lossy palette=0
2021-02-03 03:24:37
0 only option
2021-02-03 03:24:40
ok
_wb_
2021-02-03 03:27:28
official jpeg xl page was updated a bit: https://jpeg.org/jpegxl/index.html
2021-02-03 03:27:46
yes, currently that's the only thing that makes sense
2021-02-03 03:28:13
lossy palette is still quite experimental
fab
2021-02-03 03:28:38
what does -d 0 -s 7 mean
2021-02-03 03:28:49
is it the same as -s 7 -q 100 -m?
_wb_
2021-02-03 03:29:03
try `-m -q 100 -s 9 -E 3 -I 1`
fab
2021-02-03 03:29:21
?
Fox Wizard
2021-02-03 03:32:16
Nah, gotta suffer
_wb_
2021-02-03 03:34:20
now we just need to make it less slow 🙂
Crixis
2021-02-03 03:35:04
what is it the bottleneck now?
Fox Wizard
2021-02-03 03:35:42
<a:godspeed:719679385153699851>
_wb_
2021-02-03 03:36:41
we just need better heuristics to avoid brute-force search
Crixis
2021-02-03 03:37:17
nah, brute-force is the best, ask bitcoin
_wb_
2021-02-03 03:37:38
I think `-E 3` probably makes a big difference, and iirc we only use those extra properties at speed 9
Crixis
2021-02-03 03:37:52
so is -s 12
_wb_
2021-02-03 03:58:17
quite spectacular pixel invention there
2021-02-03 04:00:00
the monkey's eyes are basically invented by the AI here
veluca
_wb_ I think `-E 3` probably makes a big difference, and iirc we only use those extra properties at speed 9
2021-02-03 04:10:31
that's one thing, and also `-s 9` enables predictor mixing too...
_wb_
2021-02-03 04:12:07
for lossless we should probably rename the current `-s 9` to `-s 11` and add some intermediate steps 🙂
BlueSwordM
2021-02-03 04:21:29
No.
Scope
2021-02-03 04:21:31
It's fast compared to AV1 (AVIF) encoders, but not so fast compared to other image formats and encoders (so the default speed is ok, maybe lossless is slow for now, but considering that dense lossless compression is always quite costly and further optimization is also possible)
2021-02-03 04:33:53
Btw, the maximum lossless compression with the current unoptimized version of WebP v2 in single-threaded mode on my PC is about ~8 times slower than Jpeg XL (-s 9 -E 3)
_wb_
2021-02-03 04:46:59
We may at some point want to do faster speed than falcon, e.g. for lossless as basically just a semi-uncompressed format that still has some mild compression like RLE, but also has a color profile, alpha support etc. Basically to replace ppm, tiff, bmp and all that
2021-02-03 04:51:07
for lossy I don't know if we can go much faster than falcon but maybe we could come up with something
2021-02-03 04:51:28
in case you're wondering why the fastest speed is `-s 3`
2021-02-03 04:52:47
for lossless jxl `-s 3` is often still beating optimized PNG
Orum
2021-02-03 04:53:06
I really don't think that's a fair comparison though...
Scope
2021-02-03 04:54:06
For now, Jpeg XL at the fastest speeds has trouble compressing Pixel Art content effectively
_wb_
2021-02-03 04:54:14
for things like local temporary storage while editing an image, you just need fast encode, compression matters little
2021-02-03 04:54:58
currently `-s 3` only tries a predictor that we know works well for photo but not so well for non-photo
2021-02-03 04:56:18
pixel art is kind of the opposite of photographic material 🙂
BlueSwordM
2021-02-03 05:01:32
Yeah, some stuff that -s 8 uses isn't very well threaded vs -s 7.
2021-02-03 05:01:41
You're better off using file multi-threading for that stuff.
Orum
2021-02-03 05:01:43
7 is much faster (takes ~39% of the time that 8 does)
2021-02-03 05:01:57
the only one that's well threaded right now is -s 3
2021-02-03 05:45:48
78.6 KB <:WTF:805391680538148936>
2021-02-03 05:47:15
I guess that sort of size decrease is interesting if you're running a web server/CDN, but for me I enjoy the savings on much larger images
Pieter
_wb_ <@799692065771749416> , I don't remember if I ever asked this, but does "sipa" mean anything?
2021-02-03 06:37:19
It does, but it is not important. I've used that name since I was 10 or so. ;)
_wb_
2021-02-03 06:42:41
My first online nickname was very silly
2021-02-03 06:42:48
It was RAMbo
2021-02-03 06:43:46
https://c.tenor.com/MlOfvc9Ym_0AAAAM/saturday-weekend.gif
2021-02-03 06:44:58
<:This:805404376658739230> but with a Random Access Memory
2021-02-03 06:53:19
But that was mostly in the days before internet, on dial-in BBSs
Jyrki Alakuijala
_wb_ the change to 80 nits is for 0.3.1, right? the 0.3 that is on the public gitlab is still using the 255 nits default I think
2021-02-03 09:30:58
not sure when it is released -- (the 80 nits thing is a hack, i.e., computing to 80 nits with butteraugli even when writing 255 nits into the file)
blabaal Are there any papers on butteraugli? I’d love to use it to benchmark visual loss when compressing medical images but it seems like a black box at the moment.
2021-02-03 09:33:12
I haven't written a description about it. Also, we have published at least four different 'butterauglis', one in github.com/google/butteraugli, one in guetzli, one in jpeg xl, one in pik -- in different versions each has changed slightly (hopefully improved)
2021-02-03 09:35:33
while I believe that butteraugli performs pretty well in our use case of detecting barely noticeable compression artifacts, there is really not that much independent research on it
2021-02-03 09:35:59
also, half the research I saw says it is 'ok' (every research paper usually finds that their own metric still performs much better)
2021-02-03 09:36:21
the other half of the research papers finds that butteraugli is the worst metric on the planet
2021-02-03 09:37:02
when we try to verify butteraugli by using TID2013 and friends, we get mediocre scores for it
2021-02-03 09:37:27
when we use our own internal corpus of just-noticeable errors, then butteraugli shines
2021-02-03 09:38:34
we are planning to make a write-up of butteraugli during this spring
lonjil
2021-02-03 10:36:38
Does the current encoder not support creating an animated jxl from a gif? It says "failed to read image".
Jyrki Alakuijala
2021-02-03 10:46:42
good news: I made some progress with d1 ringing reduction by being more careful in the integral transform selection, max butteraugli score went down 4 % with bpp * pnorm down 0.1 %
2021-02-03 10:47:37
will require some time before it surfaces in the public repo, but that will likely fix or improve on the ringing reported on anime cartoons at around d1
_wb_
2021-02-03 10:57:38
<@167023260574154752> it has worked at some point, but tbh animation hasn't been a big focus
lonjil
2021-02-03 11:01:52
ah
Jyrki Alakuijala
2021-02-03 11:09:29
WebP folks published a comparison that seems to include libjxl 0.3: https://storage.googleapis.com/demos.webmproject.org/webp/cmp/2021_02_02/index.html#08-2011-panthera-tigris-tigris-texas-park-lanzarote-tp04*3:1&Original=m&JXL=m&subset1
_wb_
2021-02-03 11:14:34
Interesting, will have a look tomorrow
2021-02-03 11:15:50
At some point we'll have to compare things at equal encode time budget too though.
2021-02-03 11:16:16
Can give webp2 some benefit of the doubt in that respect since it's experimental
2021-02-03 11:17:06
But avif has a frozen bitstream for a while now
2021-02-03 11:18:00
Cannot keep comparing 30 second encodes to 0.5 second encodes and pretend that's a fair comparison
Jim
2021-02-03 11:43:49
Agree, I think they need to start adding a "time to encode" under the size.
2021-02-03 11:47:08
Though personally I say JXL won that. It has some trouble with some images where there was really fine detail at medium (I don't even look lower than that, making the images all pixelated/smeared just seems pointless), but JXL typically had the mid to smallest file size for many and had the nicest presentation so far (and I know better is on the way). AVIF doesn't seem much different than the last one I saw. WebP2 seems to have improved some and even seems to do slightly better than AVIF in a few cases.
2021-02-03 11:47:37
Then look at lossless. It's not even a comparison. JXL often gets 10x smaller file size than AVIF or WebP2.
2021-02-03 11:49:56
In the few cases where JXL doesn't do that great at large or big (though the BPP seem rather low, probably going by what they feel is best for WebP2), I would just go with the JXL lossless (or maybe slightly lossy) which in most cases is only 2x or less the file size of AVIF/WebP2 big size and gets a near perfect representation whereas the AVIF and WebP2 have clear smoothing/smudging/blurring happening in some areas.
Jyrki Alakuijala
2021-02-03 11:52:04
quick look: Air Force Academy is a huge loss for JPEG XL at low bpp, Clovisfest, too -- everything else from the first 15 or so images are wins for JPEG XL
2021-02-03 11:53:37
also, I checked my latest and greatest improvements in quality for Air Force Academy photo -- they don't really do that much at low bpp
2021-02-03 11:53:55
I have a few more ideas, will try something else tomorrow
Jim
Jyrki Alakuijala quick look: Air Force Academy is a huge loss for JPEG XL at low bpp, Clovisfest, too -- everything else from the first 15 or so images are wins for JPEG XL
2021-02-03 11:56:26
Agree, but it did much better than the early version. I remember the early one being a completely artifacted mess. There are not nearly as many artifacts, and I know they are working to make it look even better while keeping as much detail as possible.
Deleted User
_wb_ <@167023260574154752> it has worked at some point, but tbh animation hasn't been a big focus
2021-02-03 11:58:24
Should be though! High quality animations aren't possible with video codecs unless you go lossless.
Jim
2021-02-04 12:03:17
Personally, I wouldn't mourn losing animations entirely. I know the AVIF creator has said that is one of the features on the chopping block for AVIF. I really don't see the purpose of using GIF anymore. You could do the same with h.264, VP9, or AV1 and get much better quality at a smaller file size. If you just want to do memes all you have to do is transcode or restrict resolution to something like 740x740 and no more than 15-20 seconds.
Deleted User
2021-02-04 12:05:32
I didn't mean memes.
lonjil
2021-02-04 12:09:56
animation support is good because it means all legacy image formats can be losslessly transferred to JXL, including gif and apng.
2021-02-04 12:10:39
and yeah for video-like stuff you want a real video format
2021-02-04 12:10:58
which is usually what people end up using these days, often without knowing it.
2021-02-04 12:11:24
but for say simple lossless pixel art animations, an image format is probably just fine
Deleted User
2021-02-04 12:13:23
and more often than not even better than video codecs
2021-02-04 12:14:32
since you don't actually need lossless if -d 1 looks almost identical and superior to lossy videos.
Ringo
2021-02-04 01:44:11
hey
2021-02-04 01:44:19
I just thought about it now
2021-02-04 01:44:25
what does `-E` do?
_wb_
Ringo what does `-E` do?
2021-02-04 06:14:54
It extends the context model of Modular by adding previous-channel information. This can help quite a bit, at the cost of some slowdown (also at decode time).
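For reference, the kind of invocation where it shows up, echoing the dense-lossless flags suggested earlier in this channel:
```
cjxl -m -q 100 -s 9 -E 3 -I 1 input.png output.jxl
```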
Ringo
2021-02-04 06:15:15
ah
2021-02-04 06:15:31
can't seem to find it in the help text
2021-02-04 06:15:44
JPEG XL is neat
_wb_
2021-02-04 06:17:25
Try `cjxl -v -v -v -h`
Ringo
2021-02-04 06:17:58
ah there, I saw it now 😅
2021-02-04 06:18:02
thank you
_wb_
2021-02-04 06:27:13
Someone asks me where to find windows binaries. Does anyone have a good windows build?
Nova Aurora
2021-02-04 06:32:13
Is there a way to cross compile to windows from linux?
lonjil
2021-02-04 06:45:38
Sounds silly but seriously consider trying Zig for that xD
Pieter
2021-02-04 06:48:56
mingw-w64 works fine for most C/C++ code that compiles on Linux
Nova Aurora
2021-02-04 06:52:43
Is that compiling windows programs on linux on windows, linux programs on windows, or windows programs on linux?
_wb_
2021-02-04 06:58:20
There are ways to cross-compile from linux to windows. Mingw is to compile for windows in windows.
lonjil
2021-02-04 07:05:11
Mingw is for compiling for Windows without visual studio. From any platform.
2021-02-04 07:07:13
Importantly, Mingw provides libc, so any compiler which understands win calling conventions and binary formats can use Mingw's headers to cross compile to windows.
Pieter
2021-02-04 07:14:54
<@794205442175402004> `apt install mingw-w64` 😉
2021-02-04 07:16:15
All Bitcoin Core binaries are compiled from a deterministic Linux VM. The Windows ones are built using MinGW-W64; the macOS ones are built using an Apple-patched clang.
2021-02-04 07:16:44
So it's definitely possible to do all of that from Linux.
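A minimal cross-compile sketch under those assumptions (generic, not libjxl-specific; mingw-w64 and the x86_64-w64-mingw32 triplet are the standard Debian/Ubuntu names, and the ci.sh attempt further down shows libjxl itself needs more than this):
```
sudo apt install mingw-w64
CC=x86_64-w64-mingw32-gcc CXX=x86_64-w64-mingw32-g++ cmake -B build-win64 -S .
cmake --build build-win64
```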
Diamondragon
_wb_ Someone asks me where to find windows binaries. Does anyone have a good windows build?
2021-02-04 07:38:39
Scope built this and posted it in the av1 image thread: https://discord.com/channels/587033243702788123/673202643916816384/791761915839512606
_wb_
2021-02-04 07:39:46
Not sure if the simd runtime detection stuff will properly work with a mingw build
Pieter
2021-02-04 07:42:12
It may need a few #define's, but in general, cpuid-like functions work fine.
_wb_
2021-02-04 07:48:09
<@!111445179587624960> do you have a more recent build somewhere? preferably with avx2 detection working?
2021-02-04 08:46:06
`CC=x86_64-w64-mingw32-gcc-posix CXX=x86_64-w64-mingw32-g++ ./ci.sh opt` fails rather quickly
2021-02-04 08:46:25
``` CMake Error at third_party/brotli/CMakeLists.txt:114 (message): log2() not found ```
Diamondragon
2021-02-04 09:00:10
He also built 0.3 I guess, and attached it to a post here: https://encode.su/threads/3544-JXL-reference-software-version-0-2-released?p=67989&viewfull=1#post67989
spider-mario
2021-02-04 09:09:04
I build very often on Windows, msys2 works well
2021-02-04 09:09:21
`pacman -S mingw-w64-x86_64-{clang,cmake,ninja}` and possibly a few others and you’re good to go
_wb_
2021-02-04 09:28:33
that's what I get when I run that command
Scope
2021-02-04 11:48:18
Yep, this is the 0.3 build with Clang <https://encode.su/threads/3544-JXL-reference-software-version-0-2-released?p=67989&viewfull=1#post67989>
lithium
2021-02-04 02:59:14
How to use chroma subsampling(420) in cjxl?(--resampling=1|2|4|8)
_wb_
2021-02-04 03:05:41
it's not implemented in cjxl
2021-02-04 03:05:51
only everything-subsampling
2021-02-04 03:06:11
(or recompression of 420 JPEGs, that works too)
lithium
2021-02-04 03:07:01
ok, thank you 🙂
OkyDooky
2021-02-04 03:36:33
\:-D I'm finding some new possibilities today in tuning the initial quantization further. Looks like these areas that have both a huge range (black-white) and a visually peaceful flat area in them are confusing to the current initial quantization and significant improvement is possible. Probably not up to the level of AVIF at 0.15 bpp, but better than what we have now. (<@172952901075992586>)
lithium
2021-02-04 06:41:44
only -s 9 can affect cjxl jpeg lossless file size?
Scope
2021-02-04 06:49:59
-s 3, -s 4-5, -s 6-7, -s 8, -s 9 (when I tested on large files and -s 9 is insanely slow compared to others)
raysar
2021-02-04 08:36:11
under 0.01 mp/s for me <:ugly:805106754668068868>
_wb_
2021-02-04 08:48:49
Wot?
Scope
2021-02-04 09:04:33
```
Read 5529x3686 image, 85.6 MP/s
Encoding [JPEG, lossless transcode, kitten], 8 threads.
Compressed to 4041028 bytes (1.586 bpp).
5529 x 3686, 15.02 MP/s [15.02, 15.02], 1 reps, 8 threads.
```
```
Read 5529x3686 image, 77.1 MP/s
Encoding [JPEG, lossless transcode, tortoise], 8 threads.
Compressed to 4002214 bytes (1.571 bpp).
5529 x 3686, 0.04 MP/s [0.04, 0.04], 1 reps, 8 threads.
```
_wb_
2021-02-04 09:08:36
Well if you want that extra 1%, you have to work hard for it, haha
2021-02-04 09:09:10
That is some serious speed gap though, lol
Scope
2021-02-04 09:15:19
cbrunsli `5529 x 3686, geomean: 27.28 MP/s [26.62, 28.05], 5 reps, 1 threads.` (3886176 bytes)
raysar
2021-02-04 09:18:03
look at my extra horrible performances 😄 https://gitlab.com/wg1/jpeg-xl/-/issues/140 Who are in windows here? Tell me if i'm the only one without multithreading.
_wb_
2021-02-04 09:18:45
Is it a 420 jpeg?
raysar
2021-02-04 09:19:33
yes, it's share with screenshot.
_wb_
2021-02-04 09:20:23
I meant scope's jpeg
2021-02-04 09:22:05
Windows multithreading not working, I hope that can be fixed, I don't have any Windows for testing
Fox Wizard
2021-02-04 09:25:39
🪟
Scope
2021-02-04 09:30:31
Yep, 420 jpeg
veluca
2021-02-04 09:35:28
420 jpegs likely compress a bit worse with jxl than with brunsli
2021-02-04 09:35:57
-s 9 is rarely worth it for jpeg recompression in my opinion
improver
2021-02-04 10:42:29
btw, what will happen to brunsli now as it's no longer part of jxl standard?
Scope
2021-02-04 10:48:54
https://twitter.com/jyzg/status/1353334229736845312
Nova Aurora
2021-02-04 10:50:27
That's up to the google gods
BlueSwordM
2021-02-04 11:02:04
*Up to the core devs
Nova Aurora
2021-02-04 11:02:56
*of brunsli
BlueSwordM
2021-02-04 11:05:10
*my axe. But yeah, Brunsli compatibility would be nice, but it's nothing too important.
lonjil
2021-02-04 11:38:55
well, jxl does everything brunsli does now anyway, right? and probably have somewhat more efficient entropy coding too.
veluca
2021-02-04 11:49:20
the answer to that question is complex - it has a different context model, which is typically better for 444 and worse for 420 JPEGs. It's significantly faster to decode though, and a good part of the code is reused from brunsli 🙂 (another advantage of jxl is that it allows cropped decoding and better progression, which brunsli doesn't - and jxl has more room for entropy encoding improvements)
2021-02-04 11:49:43
also, there's significant overlap between jxl and brunsli core devs 😛
lonjil
2021-02-05 12:03:42
Indeed! I meant more in terms of why brunsli compat probably wouldn't add much.
2021-02-05 01:25:53
How well would JXL handle a `256*256` frame gif where every frame has a time of zero, each frame has a unique palette, and each frame has pixels that are not replaced by any later frame?
Orum
2021-02-05 05:02:56
wasn't brunsli designed more around text compression anyway?
2021-02-05 05:06:25
never mind, I keep confusing brunsli and brotli
Pieter
2021-02-05 05:11:57
all those little swiss german words are hard
Orum
2021-02-05 05:33:51
hopefully we can stop naming things as br___li
2021-02-05 05:36:52
just like too many general purpose compressors use 'l' followed by 'z' at some point: lizard, lz4, lrz, lzip, lzop
BlueSwordM
2021-02-05 05:39:58
brotli, brunsil, brazil, brostd, etc.
Nova Aurora
Orum just like too many many general purpose compressors use 'l' followed by 'z' at some point: lizard, lz4, lrz, lzip, lzop
2021-02-05 05:54:56
Isn't that because they're all based on LZ77?
Orum
2021-02-05 05:55:50
or LZMA; it's all an acronym for the original developers, but it makes it very hard to remember what is what
2021-02-05 05:56:57
I mean, there's no reason you have to put Lempel–Ziv–Markov in that order 🤷
_wb_
lonjil How well would JXL handle a `256*256` frame gif where every frame has a time of zero, each frame has a unique palette, and each frame has pixels that are not replaced by any later frame?
2021-02-05 06:28:14
Probably better to flatten such an image to a single frame before encoding and consider it as a png.
lonjil
2021-02-05 06:28:27
Yeah.
2021-02-05 06:28:37
I'm just curious.
2021-02-05 06:28:55
Some gif decoders struggle
_wb_
2021-02-05 06:30:10
It shouldn't be a problem for jxl, but of course palette restrictions would not be a reason to do zero-delay frames
lonjil
2021-02-05 06:30:18
Yeah
2021-02-05 06:31:12
Really contrived scenario but imagine using jxl as a reversible lossless encoding of a gif.
2021-02-05 06:31:40
7z achieves a 100 to 1 ratio on such a gif. Sorta curious how jxl would fare
_wb_
2021-02-05 06:31:46
In browsers and some viewers, a minimum frame delay is imposed to avoid animations getting too high fps
2021-02-05 06:32:04
Oh
2021-02-05 06:32:40
Reversible gif encoding would require some more stuff than just the image data
lonjil
2021-02-05 06:33:05
right
_wb_
2021-02-05 06:33:08
But not much probably, lzw is deterministic
2021-02-05 06:33:39
But we currently convert palette input to full color before feeding it to the encoder
lonjil
2021-02-05 06:33:49
makes sense
_wb_
2021-02-05 06:34:01
Encoder then probably palettizes again in lossless mode
2021-02-05 06:34:54
Would need to keep palette and its order to make it reversible, can be done, just requires some code plumbing
lonjil
2021-02-05 06:35:07
mm
_wb_
2021-02-05 06:35:45
Not sure if it's worth the effort. May be, if the input had a well-optimized palette order
2021-02-05 06:36:12
(same for png8 input)
lonjil
2021-02-05 06:36:58
if I did the naive thing and encoded a jxl with 256*256 palette frames with 256 colors each, how space efficient do you reckon it would be compared to gif? I find inane corner cases interesting xD
Nova Aurora
Orum I mean, there's no reason you have to put Lempel–Ziv–Markov in that order 🤷
2021-02-05 06:38:23
I say we name all software by the last name of everyone who worked on it in order chronologically
2021-02-05 06:39:38
Linux becomes thousands of characters long
_wb_
2021-02-05 06:40:14
Not sure what current cjxl would do with it, but in principle jxl should be able to beat gif for such images
lonjil
2021-02-05 06:40:50
maybe I'll poke something together with libjxl later, have some fun
lithium
Scope -s 3, -s 4-5, -s 6-7, -s 8, -s 9 (when I tested on large files and -s 9 is insanely slow compared to others)
2021-02-05 08:36:27
thank you 🙂
_wb_
2021-02-05 01:09:40
improving the debug image visualization a bit for the VarDCT blocktype selection
2021-02-05 01:10:07
for this image
2021-02-05 01:10:58
this is the block selection from high quality (d1) to very low quality (d18)
2021-02-05 01:12:58
you like this visualization style, <@!179701849576833024> ?
2021-02-05 01:13:46
I think it's a lot clearer than the current visualization (which was broken btw because the colors were 255 times too bright 🙂 )
2021-02-05 01:18:29
other example: take this bridge image
2021-02-05 01:19:03
this is our block selection from d1 to d18
veluca
2021-02-05 01:26:19
pretty nice 😄
Crixis
2021-02-05 01:27:18
Niceeee
_wb_
2021-02-05 01:29:34
slightly tweaked the pixel art in the smaller blocks
Fox Wizard
2021-02-05 01:31:06
<:Foxy22:734765939681132667>
_wb_
2021-02-05 01:31:28
the 2x2 is now a dense little grid, also made the AFV corners clearer
2021-02-05 01:31:41
AFV is used quite a bit here
2021-02-05 01:31:55
we don't use Hornuss at all it seems?
2021-02-05 01:31:59
at least on this image
2021-02-05 01:32:21
also is there an easy way to make it use bigger than 64x64 blocks?
VEG
_wb_ slightly tweaked the pixel art in the smaller blocks
2021-02-05 02:00:20
Looks great
2021-02-05 02:00:41
Eager to test JXL when Chromium and Firefox support it 🙂
Master Of Zen
_wb_ slightly tweaked the pixel art in the smaller blocks
2021-02-05 02:02:21
https://tenor.com/view/matric-matrix-i-dont-see-the-code-gif-8390287
Crixis
2021-02-05 02:04:27
Just make a new browser
_wb_
2021-02-05 02:09:41
made a few more tweaks to the debug image
2021-02-05 02:10:07
this is from d0.3 to d30
Master Of Zen
2021-02-05 02:10:29
Having d overlay would be amazing + image what it gives
2021-02-05 02:11:00
I'm used to d20 dice <:cmon:798146744936300556>