|
|
veluca
|
2021-08-30 11:45:50
|
ah then yes
|
|
2021-08-30 11:45:58
|
at some point we did per-group tree learning
|
|
2021-08-30 11:46:00
|
now it's global
|
|
2021-08-30 11:46:03
|
and single-threaded
|
|
|
fab
|
2021-08-31 07:49:51
|
no mozjpeg no gimp plugin
|
|
2021-08-31 07:50:07
|
unless it's written by veluca
|
|
2021-08-31 07:50:15
|
and compiled by him
|
|
|
Orum
|
2021-08-31 11:13:15
|
cjxl seems somewhat inconsistent with where it prints output
|
|
2021-08-31 11:16:01
|
messages about distance (e.g. `Butteraugli distance 0.940 yields 5766905 bytes, 3.784 bpp.`) go to stdout, but everything else seems to go to stderr
|
|
2021-08-31 11:30:08
|
I suppose it's not that critical to 'fix' as I don't think there's any way to get it to output image data to stdout
|
|
|
Traneptora
|
|
fab
no mozjpeg no gimp plugin
|
|
2021-08-31 12:07:39
|
gimp uses mozjpeg to export jpeg
|
|
|
|
veluca
|
2021-08-31 01:02:04
|
I think it uses whatever happens to be your system libjpeg, no?
|
|
|
Jyrki Alakuijala
|
|
Traneptora
also what does `--gaborish` do?
|
|
2021-08-31 04:00:38
|
Gaborish is an idea that DCT is not the true integral transform that we use -- but it is a DCT convolved with a small Gaussian. Enough to create some overlap (to get lapped transforms just without the overcompleteness and loss of decoding speed)
|
|
|
Traneptora
|
|
veluca
I think it uses whatever happens to be your system libjpeg, no?
|
|
2021-08-31 04:02:03
|
I found it used mozjpeg even with turbojpeg as system
|
|
|
Jyrki Alakuijala
|
2021-08-31 04:02:04
|
Gaborish differs from traditional 'lets smooth a little so that the user doesn't see the blocks' filtering approach by being reversible with very high accuracy, faster to compute and working also in the highest quality end
Many 'loop filters' become destructive in the highest quality, Gaborish scales very well there.
|
|
|
|
veluca
|
|
Traneptora
I found it used mozjpeg even with turbojpeg as system
|
|
2021-08-31 04:03:52
|
That's interesting...
|
|
|
Traneptora
|
2021-08-31 04:04:13
|
In this case how would you convolve a DCT with a gaussian? I was under the impression that convolution was a signal thing. like convolve two signals
|
|
|
veluca
That's interesting...
|
|
2021-08-31 04:04:26
|
I had mozjpeg in /opt, maybe it tried dlopen
|
|
|
|
veluca
|
|
Traneptora
In this case how would you convolve a DCT with a gaussian? I was under the impression that convolution was a signal thing. like convolve two signals
|
|
2021-08-31 04:05:34
|
smoothing (i.e. convolving with a gaussian) is applied after doing an IDCT on the coefficients
|
|
|
Jyrki Alakuijala
|
|
Traneptora
with testing, I've found that `cjxl -d 1` is much better than `cjpeg -q 90` for synthetic imagery
|
|
2021-08-31 04:06:25
|
During our guetzli work in 2017 we considered quality 94 as distance 1.0
|
|
2021-08-31 04:06:35
|
we have slipped a little ever since
|
|
|
|
veluca
|
2021-08-31 04:06:59
|
a convolution conceptually is nothing more than `out[x][y] = sum_(i=-k...k, j=-k...k) kernel[i][j] * in[x+i][y+j]`
|
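veluca's formula above can be turned into a direct (if slow) sketch. The clamped border handling and the 3x3 weights here are illustrative assumptions, not libjxl's actual Gaborish kernel:

```python
# Naive 2D convolution following out[x][y] = sum kernel[i][j] * in[x+i][y+j].
# Border handling here simply clamps coordinates (libjxl handles edges
# differently); the kernel below is an illustrative Gaussian-like 3x3.
def convolve2d(img, kernel):
    h, w = len(img), len(img[0])
    k = len(kernel) // 2  # kernel is (2k+1) x (2k+1)
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for i in range(-k, k + 1):
                for j in range(-k, k + 1):
                    yy = min(max(y + i, 0), h - 1)  # clamp at borders
                    xx = min(max(x + j, 0), w - 1)
                    acc += kernel[i + k][j + k] * img[yy][xx]
            out[y][x] = acc
    return out

# Weights sum to 1, so flat regions are preserved exactly.
gauss3 = [[0.0625, 0.125, 0.0625],
          [0.125,  0.25,  0.125],
          [0.0625, 0.125, 0.0625]]
```

Because the smoothing crosses block boundaries, applying it after the IDCT is what softens the 8x8 block edges.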
|
|
Traneptora
|
2021-08-31 04:07:08
|
So it means the inverse transform is really an IDCT and then convolution with a gaussian
|
|
|
Jyrki Alakuijala
|
2021-08-31 04:07:14
|
At one stage I made an accidental 5 % quality level slip in butteraugli (by forgetting to do a manual normalization step to the fuzzy classification used there)
|
|
2021-08-31 04:07:34
|
and ever since you actually need to use about 0.95 to get the previous 1.0
|
|
2021-08-31 04:07:59
|
but it is likely something like libjpeg quality 92 or 93, not 90
|
|
|
|
veluca
|
|
Traneptora
So it means the inverse transform is really an IDCT and then convolution with a gaussian
|
|
2021-08-31 04:08:27
|
well, slightly more complex than that, as the convolution crosses block boundaries (that's kinda the whole point :P) but yes
|
|
|
Jyrki Alakuijala
|
2021-08-31 04:09:22
|
also during jpeg xl development we have paid ever-decreasing attention to max butteraugli, and more to a p-norm * bpp metric
|
|
2021-08-31 04:09:35
|
this is a shift towards what works in medium quality vs. high quality
|
|
2021-08-31 04:09:55
|
if we put more emphasis on max butteraugli, we'd get a significant quality boost in the d1 qualities
|
|
|
Traneptora
|
2021-08-31 04:10:04
|
I'm more familiar with the Fourier Transform than I am with the DCT, I will admit. But convolution is multiplication in Fourier space, so it should be reversible by dividing in Fourier space
|
|
|
Jyrki Alakuijala
|
2021-08-31 04:10:22
|
it is just that in that area we have a healthy lead (20+ %) to the next compression system, so we just try to build a more balanced system now
|
|
|
Traneptora
I'm more familiar with the Fourier Transform than I am with the DCT, I will admit. But convolution is multiplication in Fourier space, so it should be reversible by dividing in Fourier space
|
|
2021-08-31 04:11:10
|
I'm the inventor of the Gaborish approach and I considered DCT+Gaussian a new integral transform
|
|
|
Traneptora
|
2021-08-31 04:11:53
|
So you're basically saying that gaborish being on, postprocesses the output with an invertible gaussian convolution
|
|
2021-08-31 04:12:05
|
or almost invertible
|
|
|
Jyrki Alakuijala
|
2021-08-31 04:12:08
|
it is implemented by first doing the dct, then filtering, but I consider that a computational decomposition similar to the butterfly decomposition in dct
|
|
|
Traneptora
|
2021-08-31 04:12:49
|
And so to encode you use IG > DCT and then decode is IDCT > G?
|
|
|
|
veluca
|
|
Traneptora
I'm more familiar with the Fourier Transform than I am with the DCT, I will admit. But convolution is multiplication in Fourier space, so it should be reversible by dividing in Fourier space
|
|
2021-08-31 04:12:55
|
yes, it is reversible by dividing in Fourier space - but that's slow, so the encoder inverts it with a 5x5 convolution that approximates the inverse (the transform at decode time is a 3x3 convolution) - this is not *quite* perfect
|
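A minimal 1-D sketch of that idea, with made-up kernels rather than libjxl's actual Gaborish weights: a 3-tap smoothing filter whose approximate inverse is a 5-tap filter obtained from a truncated series expansion. The round trip is close but not exact, matching what veluca describes:

```python
# Sketch: approximating the inverse of a small smoothing convolution with a
# slightly larger one. These 1-D weights are illustrative, derived from a
# truncated Neumann series for (0.8*(delta + m))^-1 with m = 0.125 at
# offsets +-1; libjxl's real kernels are 2-D and different.
def conv1d(signal, kernel):
    k = len(kernel) // 2
    n = len(signal)
    out = []
    for x in range(n):
        acc = 0.0
        for i in range(-k, k + 1):
            xx = min(max(x + i, 0), n - 1)  # clamp at borders
            acc += kernel[i + k] * signal[xx]
        out.append(acc)
    return out

blur3 = [0.1, 0.8, 0.1]                    # "decoder-side" smoothing
inv5 = [0.01953125, -0.15625, 1.2890625,   # "encoder-side" approximate
        -0.15625, 0.01953125]              # inverse (5 taps vs 3)

signal = [0.0, 0.0, 1.0, 0.0, 0.0, 0.5, 0.25, 0.0]
roundtrip = conv1d(conv1d(signal, inv5), blur3)
max_err = max(abs(a - b) for a, b in zip(signal, roundtrip))
# max_err is small but nonzero: the 5-tap inverse is only approximate
```

The residual error comes from the truncated higher-order terms, which is why at very low distances the approximation error can dominate the compression error.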
|
|
Traneptora
|
2021-08-31 04:13:13
|
Approximation yea
|
|
|
Jyrki Alakuijala
|
2021-08-31 04:13:33
|
5x5 makes in the order of ~0.1 % errors (or less)
|
|
|
Traneptora
|
2021-08-31 04:13:48
|
But is that the general idea?
|
|
|
Jyrki Alakuijala
|
2021-08-31 04:13:58
|
FFT convolutions are awesome in textbooks, but are often problematic in practice
|
|
2021-08-31 04:14:16
|
yes, exactly like you wrote Sedna
|
|
|
|
veluca
|
2021-08-31 04:14:23
|
and at very low `d` parameters (<0.1 or so) you get more error from that approximation than from actual compression (still very very hard to notice even with significant zooming, but...)
|
|
|
Traneptora
|
2021-08-31 04:14:30
|
huh
|
|
2021-08-31 04:14:42
|
I read something similar in daala
|
|
2021-08-31 04:14:48
|
before it became av1
|
|
|
Jyrki Alakuijala
|
2021-08-31 04:14:57
|
we consider that too
|
|
2021-08-31 04:15:05
|
but lapped transforms reduce coding speed
|
|
|
|
veluca
|
2021-08-31 04:15:12
|
FFT convolutions are useful for very large kernels - for smaller ones it's a lot more effort to do the FFT
|
|
|
Jyrki Alakuijala
|
2021-08-31 04:15:13
|
7x7 == 49, 8x8 == 64
|
|
2021-08-31 04:15:48
|
one loses about 25 % of coding speed by lapping 8x8 dcts by one pixel
|
|
2021-08-31 04:16:24
|
we get roughly similar lapping for nearly free with Gaborish
|
|
2021-08-31 04:16:44
|
overcompleteness is cool, but it gets slow easily
|
|
|
Traneptora
|
2021-08-31 04:17:25
|
So I mean, I remember that a similar idea existed for an invertible loop deblock filter
|
|
|
Jyrki Alakuijala
|
2021-08-31 04:17:33
|
many people don't realize that FFT has a circularity assumption
|
|
|
Traneptora
|
2021-08-31 04:17:39
|
I read an article about it a few years ago
|
|
|
Jyrki Alakuijala
|
2021-08-31 04:17:50
|
whatever you have in the window is going to be assumed to have a looping nature
|
|
|
Traneptora
|
2021-08-31 04:18:00
|
Yea, for 2D it is toroidal
|
|
|
_wb_
|
|
veluca
and at very low `d` parameters (<0.1 or so) you get more error from that approximation than from actual compression (still very very hard to notice even with significant zooming, but...)
|
|
2021-08-31 04:18:02
|
Do we default to gaborish=0 when d<0.something?
|
|
|
Jyrki Alakuijala
|
2021-08-31 04:18:08
|
that is often mitigated by windowing
|
|
2021-08-31 04:18:16
|
but then you end up computing even more
|
|
2021-08-31 04:18:24
|
we tried that too during pik's development
|
|
2021-08-31 04:18:32
|
Gaussian windowed FFTs
|
|
2021-08-31 04:18:43
|
'log-Gabor inspired'
|
|
2021-08-31 04:19:01
|
it looked nice and organic at lower qualities
|
|
2021-08-31 04:19:16
|
but was too slow for our goals
|
|
|
|
veluca
|
|
_wb_
Do we default to gaborish=0 when d<0.something?
|
|
2021-08-31 04:19:42
|
not as far as I know - but I think we should rather spend a bit of effort on reducing the error of inverse gaborish at lower d - perhaps with some optimization
|
|
|
Jyrki Alakuijala
|
2021-08-31 04:21:00
|
I optimized the current kernels to a relatively small quality range
|
|
2021-08-31 04:21:14
|
0.5 to 4 probably
|
|
2021-08-31 04:22:06
|
the gaborish amount is also connected with the quantization matrix contrast sensitivity curve modeling
|
|
2021-08-31 04:22:31
|
if we do gaborish we can/need to quantize more in the high frequency
|
|
2021-08-31 04:23:09
|
we can do Gaborish with a normal jpeg, too
|
|
2021-08-31 04:23:23
|
we just need a decoder that does the additional smoothing
|
|
2021-08-31 04:23:35
|
or css style might be able to do a gentle smoothing
|
|
|
|
veluca
|
2021-08-31 04:39:10
|
you just cannot invert a 3x3 convolution with a 5x5 one - if you are going to compress at very high quality, you likely need to add on top of the 5x5 convolution on the encoder side, or the error will be noticeable (there are some situations where I have noticed pixels being off at relatively high zoom levels with gaborish)
|
|
|
_wb_
|
2021-08-31 04:42:43
|
Could a 7x7 convolution be done to more accurately invert the 3x3 one?
|
|
2021-08-31 04:43:07
|
Also, shouldn't we use more aggressive custom gaborish weights at high distance?
|
|
|
|
veluca
|
|
_wb_
Could a 7x7 convolution be done to more accurately invert the 3x3 one?
|
|
2021-08-31 05:13:06
|
that likely helps, yes
|
|
|
_wb_
Also, shouldn't we use more aggressive custom gaborish weights at high distance?
|
|
2021-08-31 05:13:30
|
no clue about the psychovisual aspect here, it might help but IDK
|
|
|
_wb_
|
2021-08-31 05:14:35
|
Quite a few of our bigger customers like to do sharpening (unsharp mask)
|
|
2021-08-31 05:14:51
|
Sharpening makes jpegs compress worse
|
|
2021-08-31 05:15:14
|
Gaborish could be abused into doing unsharp mask
|
|
2021-08-31 05:21:56
|
Basically you could recompress jpegs reversibly and make them look more blurry or more sharp by signaling some gaborish kernel
|
|
|
OkyDooky
|
2021-08-31 05:30:50
|
In theory, quantization should be adapted to "compensate" for the "Gaborish" pre/post filtering. Is this currently done?
And yeah, a 7x7 kernel would be able to represent an inverse more accurately.
|
|
|
lithium
|
|
Jyrki Alakuijala
if we put more emphasis on max butteraugli, we'd get a significant quality boost in the d1 qualities
|
|
2021-08-31 05:49:35
|
I understand, thank you very much.
For now I have only tested d0.5 and d1.0,
and I think jxl works very well for high-quality photographic content,
but for high-quality non-photographic content I think jxl can do better,
so I look forward to libjxl pr430 and the next quality improvement.
|
|
|
_wb_
|
|
OkyDooky
In theory, quantization should be adapted to "compensate" for the "Gaborish" pre/post filtering. Is this currently done?
And yeah, a 7x7 kernel would be able to represent an inverse more accurately.
|
|
2021-08-31 06:49:06
|
Compensation is done by doing sharpening pre encode. You can also 'sharpen' by exaggerating AC coefficient amplitudes a bit, but that doesn't cross block boundaries which is exactly the point of gaborish
|
|
|
fab
|
|
_wb_
Compensation is done by doing sharpening pre encode. You can also 'sharpen' by exaggerating AC coefficient amplitudes a bit, but that doesn't cross block boundaries which is exactly the point of gaborish
|
|
2021-08-31 06:52:21
|
https://github.com/libjxl/libjxl/issues/279
|
|
2021-08-31 06:52:25
|
is this similar to this
|
|
2021-08-31 06:52:37
|
like it has a similar effect
|
|
2021-08-31 06:52:46
|
or it looks visually different
|
|
|
Traneptora
|
2021-08-31 07:31:39
|
Where does the name Gaborish come from?
|
|
|
_wb_
|
2021-08-31 07:36:39
|
https://en.wikipedia.org/wiki/Gabor_transform
|
|
|
fab
|
2021-09-01 03:26:24
|
yes i'd say lossless should be quality 100
|
|
2021-09-01 03:26:42
|
without even -m
|
|
|
diskorduser
|
2021-09-01 03:49:39
|
-quality 100 is lossless
|
|
|
Jyrki Alakuijala
|
|
Traneptora
Where does the name Gaborish come from?
|
|
2021-09-01 07:21:36
|
I invented the name -- in a real Gabor transform you have a Gaussian windowing a Fourier transform
|
|
2021-09-01 07:22:23
|
here we approach similar ideas, but don't have windowing but convolution, don't have Fourier but DCT
|
|
2021-09-01 07:23:25
|
(I like to have exotic names for exotic things)
|
|
|
Traneptora
|
2021-09-01 09:23:39
|
Ah, the name comes from Gabor
|
|
2021-09-01 09:23:42
|
I see
|
|
2021-09-01 09:24:22
|
Do you have a paper on it?
|
|
|
Jyrki Alakuijala
|
2021-09-02 06:17:08
|
no paper
|
|
2021-09-02 06:18:17
|
We could have papers on XYB, cubic root+bias gamma, butteraugli, gaborish, guetzli, brunsli, etc. or have jpeg xl
|
|
2021-09-02 06:18:44
|
I decided very consciously that I'm not going to slow down for papers until jpeg xl is ready
|
|
2021-09-02 06:19:28
|
as an example for Brotli we wrote the first paper in 2019 when the format was practically finished in 2013 and frozen in 2014
|
|
2021-09-02 06:27:54
|
for 2014-2018 when most of those techniques were done we were 2-3 people on all of pik/guetzli/butteraugli/brunsli/entropy clustering/...
|
|
2021-09-02 06:29:14
|
in my past experience every paper takes about three months of calendar time for a project, and creates substantial stress related to its deadlines
|
|
2021-09-02 06:30:13
|
it can be faster when it is a superficial systems paper like we have two on JPEG XL
|
|
|
_wb_
Could a 7x7 convolution be done to more accurately invert the 3x3 one?
|
|
2021-09-02 07:02:06
|
Yes, and similar for larger matrices. Having some regularization may become important with larger sizes.
|
|
|
OkyDooky
In theory, quantization should be adapted to "compensate" for the "Gaborish" pre/post filtering. Is this currently done?
And yeah, a 7x7 kernel would be able to represent an inverse more accurately.
|
|
2021-09-02 07:04:12
|
I have tuned the quantization matrices to the current default level of gaborish -- the process was more of a craftsman process than scientific, think of a technician gently tapping the linear accelerator with a wooden mallet to improve the resonance frequency
|
|
2021-09-02 07:05:29
|
I usually tried to reach roughly similar visual quality (as observed by my eyes, not just blind butteraugli optimization) between the different transforms and as a consequence the data rates are different between the transforms
|
|
|
_wb_
Also, shouldn't we use more aggressive custom gaborish weights at high distance?
|
|
2021-09-02 07:08:42
|
Gaborish can only fix about half a pixel of blocking artefacts. If you try to blur more than 3x3, the inverse problem becomes rather unstable and complicated and the resulting image will have a lot of entropy. If you try to blur more within the 3x3, some dct coefficients will flip sign (or some will become zeros), and weird things will start to happen in general.
|
|
2021-09-02 07:09:40
|
Earlier in pik we had more tools for lifting dc differences into ac -- those would have been great for better low bpp visual aesthetics
|
|
2021-09-02 07:11:06
|
at some stage (when we added larger transforms IIRC) they become too complicated and we removed them
|
|
2021-09-02 07:11:38
|
currently we have some 8x8 constant color block artefacts because of that
|
|
2021-09-02 07:12:11
|
they are relatively rare and I know how to fix them, but haven't got up to it yet -- still more focused on reducing ringing/worst case behaviour with graphics
|
|
2021-09-02 07:13:15
|
... also they are rare because I'm heavily penalising 8x8 dcts currently because of the 8x8 constant color blocking artefacts it brings to gradients (and artificially favoring 8x16 and 16x16)
|
|
2021-09-02 07:13:53
|
once I'll have better degradation for 8x8 dcts, I can have a better balance of integral transforms, too
|
|
|
Traneptora
|
2021-09-02 04:34:58
|
JXL uses asymmetric numeral systems for entropy, right?
|
|
2021-09-02 04:36:35
|
I remember there was some weird patent scandal with google and that
|
|
2021-09-02 04:36:46
|
like google tried to patent it, even though they did not invent it
|
|
|
_wb_
|
2021-09-02 04:43:37
|
Yeah, and Microsoft too
|
|
|
Traneptora
|
2021-09-02 04:56:50
|
speaking of patents, are there any patented technologies in JXL for things you invented?
|
|
2021-09-02 04:57:02
|
you as in you guys, plural
|
|
2021-09-02 04:57:49
|
I remember that apple has refused to support some free codecs like Vorbis and Ogg for years because they said they were worried that Xiph had a patent and was gonna use it as a landmine
|
|
2021-09-02 04:57:55
|
(which to me is a clear excuse)
|
|
|
_wb_
|
2021-09-02 05:04:04
|
There are (defensive) patents owned by both Google and Cloudinary, yes, and they are irrevocably licensed royalty-free to anyone not litigating claiming they own non-royalty-free patents that are needed to use libjxl
|
|
2021-09-02 05:04:28
|
(similar to how aom members have defensive patents on stuff that is in av1)
|
|
|
|
Diamondragon
|
2021-09-02 11:25:05
|
Was there anything you had wanted to use in jxl, but couldn't owing to patent considerations?
|
|
|
Traneptora
|
2021-09-03 01:27:24
|
irrevocably licensed royalty-free meaning anyone can use it for any production purpose?
|
|
2021-09-03 01:28:39
|
so basically they only exist as legal verification that you did indeed invent the technologies used, in case someone else wants to claim otherwise?
|
|
2021-09-03 01:29:11
|
also how does one irrevocably license something royalty free in an enforceable way? like I see no reason for you to change your mind but how do you make that a legal thing where you actually cannot?
|
|
|
_wb_
|
2021-09-03 05:22:45
|
https://github.com/libjxl/libjxl/blob/main/PATENTS
|
|
2021-09-03 05:23:31
|
That's the formulation the lawyers came up with. It's similar to what the Apache license does, or what AOM does.
|
|
2021-09-03 05:26:01
|
If we change our minds now, you can still take a libjxl from last week and use it to get the patents licensed.
|
|
|
Traneptora
so basically they only exist as legal verification that you did indeed invent the technologies used, in case someone else wants to claim otherwise?
|
|
2021-09-03 05:28:34
|
Yes, and also to discourage patent trolls from making bogus claims which could prevent others from using jxl - if they do that, they are themselves not allowed to use jxl anymore because that's the single condition to the patent license we give them.
|
|
|
Traneptora
|
2021-09-03 05:29:13
|
I see so it's not that you can't change your mind but if you change your mind it only applies to anything new you've created
|
|
2021-09-03 05:29:27
|
and retroactively won't work as a landmine
|
|
2021-09-03 05:29:44
|
not that you have any intention of doing so, but there's concern among big corporations if you even have that option
|
|
|
_wb_
|
2021-09-03 05:29:53
|
Yes, since the bitstream is frozen so anything new can only be encoder improvements or stuff like that
|
|
|
Traneptora
|
2021-09-03 05:31:00
|
and there's only one restriction on the patent, which is that you cannot use the patented technologies if and only if you claim to own them (and you don't)
|
|
|
_wb_
|
2021-09-03 05:31:33
|
It's exactly the same situation as with av1. It's royalty-free by design and we do everything we can to make it as likely as possible that it is royalty-free in practice
|
|
2021-09-03 05:31:55
|
Patent trolls are kind of hard to fight though
|
|
|
Traneptora
|
2021-09-03 05:32:06
|
luckily if you have google behind you it makes it a bit easier
|
|
2021-09-03 05:32:35
|
if the goal is adoption then just giving it all away for free makes sense to me
|
|
2021-09-03 05:32:59
|
speaking of which, does cloudinary have funding to pay for the time you spent developing jxl
|
|
2021-09-03 05:33:08
|
or is this done pro-bono?
|
|
|
_wb_
|
2021-09-03 05:33:35
|
I do work on jxl on company time, they pay me for it
|
|
2021-09-03 05:34:09
|
In fact it has been a large part of my job for the last 3.5 years
|
|
|
Traneptora
|
2021-09-03 05:34:33
|
I see, they just straight up pay you to do exactly this
|
|
2021-09-03 05:34:36
|
which makes sense
|
|
2021-09-03 05:34:49
|
companies like cloudinary and google serve to benefit a lot financially from jxl adoption
|
|
|
_wb_
|
2021-09-03 05:35:05
|
It's Cloudinary's core business to be good at image compression
|
|
|
Traneptora
|
2021-09-03 05:35:53
|
well there you go :)
|
|
|
_wb_
|
2021-09-03 05:36:07
|
Cloudinary does not benefit directly financially from better compression, in fact the opposite is true: we charge our customers for bandwidth and storage, and with better compression we can charge less
|
|
2021-09-03 05:36:21
|
... but we also get more customers
|
|
|
Traneptora
|
2021-09-03 05:36:32
|
well I mean more like
|
|
2021-09-03 05:36:41
|
better technology existing out there directly serves to help the bottom line
|
|
2021-09-03 05:36:58
|
since your whole point is leveraging that
|
|
2021-09-03 05:38:37
|
like how google is pushing aom hard, in part because having full adoption makes youtube delivery better
it will make everyone else's delivery better too, and arguably will narrow the gap
but it's still worth it to push the business model as a whole
|
|
|
_wb_
|
2021-09-03 05:39:06
|
In the case of Cloudinary, it's also a matter of PR/marketing: if we are actively improving the state of the art, then it's easier for our sales people to sell the product with the argument that we know what we're doing
|
|
|
Traneptora
|
2021-09-03 05:39:29
|
ah yea true, if you literally invented the best there is
|
|
2021-09-03 05:39:59
|
something I'm curious about technologically is how you made the lossless mode faster than FLIF
|
|
2021-09-03 05:40:06
|
FLIF had good ratios but it was annoyingly slow
|
|
|
_wb_
|
2021-09-03 05:41:05
|
In the case of Google: anything that makes the web platform better, is good for them because more use of the web means more use of Google stuff. They're so big that they're everywhere, and optimizing the world has become easier than optimizing only itself.
|
|
|
w
|
2021-09-03 05:41:27
|
optimizing the world
|
|
|
_wb_
|
2021-09-03 05:43:31
|
FLIF uses binary arithmetic coding, reading symbols one bit at a time using a zero - sign - unary exponent - binary mantissa representation, where bits are skipped that are redundant because one of their values leads to out-of-range results.
|
|
2021-09-03 05:43:57
|
And after every bit read, chances get updated.
|
|
2021-09-03 05:44:22
|
That's a good entropy coder backend, but it's inherently slow and branchy
|
|
2021-09-03 05:45:25
|
In jxl, we at some point had 5 or 6 different entropy coders, including 3 different ones for modular mode (the flif one, brotli, and the one that later evolved to what jxl does)
|
|
2021-09-03 05:45:54
|
We unified that to a single entropy coder, which is very powerful but also fast to decode
|
|
|
Traneptora
|
2021-09-03 05:46:10
|
iirc flif was ffv1-inspired, wasn't it?
|
|
|
_wb_
|
2021-09-03 05:48:14
|
Jxl uses a hybriduint symbol coding (new in jxl: uses a configurable combination of single-symbol tokens and a variable amount of raw bits), with an lz77 option, and with either ANS or prefix coding to write the tokens.
|
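A simplified sketch of the hybrid-uint idea: small values become a single token, larger values become a token that records the exponent plus one explicit MSB, with the remaining low bits sent as raw bits. The split point and the one-MSB-in-token choice below are illustrative assumptions; libjxl signals its own configuration per stream:

```python
# Hybrid-uint tokenization sketch. Values below SPLIT are their own token;
# larger values are split into (token, raw-bit count, raw bits), where the
# token encodes the exponent and one bit below the implicit leading 1.
SPLIT = 16  # illustrative split threshold (2^4)

def tokenize(v):
    if v < SPLIT:
        return v, 0, 0                   # token, nbits, raw bits
    n = v.bit_length() - 1               # exponent: v in [2^n, 2^(n+1))
    msb = (v >> (n - 1)) & 1             # one bit below the implicit top bit
    nbits = n - 1                        # remaining low bits go raw
    token = SPLIT + 2 * (n - 4) + msb    # 4 = log2(SPLIT)
    return token, nbits, v & ((1 << nbits) - 1)

def detokenize(token, nbits, raw):
    if token < SPLIT:
        return token
    t = token - SPLIT
    msb = t % 2
    return ((2 | msb) << nbits) | raw    # reattach implicit 1 and MSB
```

Keeping most of the magnitude in raw bits keeps the token alphabet small, so the ANS or prefix tables stay cheap while still coding large values.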
|
|
Traneptora
iirc flif was ffv1-inspired, wasn't it?
|
|
2021-09-03 05:50:15
|
Yes, its entropy coding evolved from ffv1's (was made more complicated), and its context model (MA trees) was new to flif but internally did use 'properties' (the things you put in decision nodes of an MA tree) borrowed from ffv1's context model.
|
|
|
Traneptora
|
2021-09-03 05:51:10
|
I wonder how lossless jxl performs compared to ffv1
|
|
2021-09-03 05:51:18
|
in ratio and speed
|
|
2021-09-03 05:51:40
|
a year ago when I tried jxl 0.3, it was pretty slow to do lossless at effort=9 (tho it was called speed=9 at the time)
|
|
|
_wb_
|
2021-09-03 05:52:08
|
Ffv1 doesn't have MA trees, it uses a fixed context model. Jxl's effort <= 3 also uses a fixed context model (it uses a fixed tree instead of doing tree learning)
|
|
2021-09-03 05:53:31
|
Encode-side, ANS is a bit more annoying than AC because it is LIFO, not FIFO (so you need to buffer tokens and write them in reverse order)
|
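The LIFO property can be seen in a toy static rANS coder (modeled on the public-domain ryg_rans byte-wise variant; the 12-bit probability scale and the frequency table in the test are illustrative): the encoder walks the symbols in reverse and finally reverses the byte stream, so the decoder can read forward:

```python
PROB_BITS = 12
PROB_SCALE = 1 << PROB_BITS        # symbol frequencies must sum to this
RANS_L = 1 << 23                   # lower bound of the normalized state

def rans_encode(symbols, freqs, cums):
    state = RANS_L
    out = []
    for s in reversed(symbols):              # LIFO: encode in reverse...
        f = freqs[s]
        x_max = ((RANS_L >> PROB_BITS) << 8) * f
        while state >= x_max:                # renormalize: emit low bytes
            out.append(state & 0xFF)
            state >>= 8
        state = ((state // f) << PROB_BITS) + (state % f) + cums[s]
    for _ in range(4):                       # flush the final 32-bit state
        out.append(state & 0xFF)
        state >>= 8
    out.reverse()                            # ...so decoding reads forward
    return bytes(out)

def rans_decode(data, freqs, cums, n):
    state, pos = 0, 0
    for _ in range(4):                       # rebuild the flushed state
        state = (state << 8) | data[pos]
        pos += 1
    symbols = []
    for _ in range(n):
        slot = state & (PROB_SCALE - 1)
        s = next(i for i, c in enumerate(cums)   # linear symbol search
                 if c <= slot < c + freqs[i])
        symbols.append(s)
        state = freqs[s] * (state >> PROB_BITS) + slot - cums[s]
        while state < RANS_L:                # renormalize: pull in bytes
            state = (state << 8) | data[pos]
            pos += 1
    return symbols
```

The buffering of all tokens before the reversed write is exactly the encode-side cost _wb_ mentions; the decoder is a straight forward pass.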
|
|
Traneptora
|
|
_wb_
|
2021-09-03 05:53:59
|
I suspect ffv1 has more symmetric encode/decode
|
|
|
Traneptora
|
2021-09-03 05:54:20
|
sounds about right, although in many cases with ffv1 I've found harddrive speed to be a limitation
|
|
2021-09-03 05:54:39
|
since lossless video is still very very big
|
|
2021-09-03 05:55:05
|
though, most of what I've seen about jxl has been to the lossy end, but I've seen very little about its lossless mode, in terms of benchmarking
|
|
2021-09-03 05:55:08
|
speed/ratio
|
|
|
_wb_
|
2021-09-03 06:00:42
|
The encode side of it is not very optimized yet, except at effort 1,2,3
|
|
2021-09-03 06:01:10
|
Even the decode side still has some room for optimization
|
|
|
Petr
|
|
Traneptora
since lossless video is still very very big
|
|
2021-09-03 06:23:15
|
Have you tried AV1 lossless? I'm quite happy with it (except for https://bugs.chromium.org/p/aomedia/issues/detail?id=3109).
|
|
|
190n
|
2021-09-03 06:26:57
|
i thought lossless h.264 (encoded with x264) was better?
|
|
|
Petr
|
2021-09-03 06:30:53
|
AV1 lossless has much better compression ratios with my stuff (screen recordings of a video game). But it could be due to a very high default keyframe interval, I need to check.
|
|
|
190n
|
2021-09-03 06:42:27
|
how much better?
|
|
2021-09-03 06:42:38
|
x264 has gotta be faster at least, but that tradeoff doesn't always make sense
|
|
|
_wb_
|
2021-09-03 06:46:24
|
lossless performance is extremely dependent on the kind of content
|
|
|
Scope
|
2021-09-03 07:22:49
|
Also when encoding with ffmpeg it is important to make sure that it is true lossless, especially with various color conversions. At the time when I was making comparisons, AV1 made almost no sense to use: it was very slow and not as efficient in lossless, and something like x264, FFV1 or hardware HEVC (on modern video cards) was much faster with very good compression
|
|
|
Petr
|
|
190n
how much better?
|
|
2021-09-03 07:40:33
|
2066 frames, just 1 keyframe, max. 4 colors in every frame (yes, a very old video game):
`libaom-av1 lossless yuv444p: 305 kB`
`x264 lossless yuv444p: 1445 kB`
(I know there was a conversion from RGB at the beginning. But if I want to convert it back to exact original RGB colors, I use lutrgb in FFmpeg.)
|
|
|
_wb_
|
2021-09-03 08:03:51
|
only 4 colors per frame is a very particular thing: av1 can use palette blocks then, which x264 doesn't have
|
|
2021-09-03 08:05:07
|
likely lossless jxl could compress that kind of content even better than av1, **if** we would make a decent animation encoder that looks for patches to be reused from previous frames etc
|
|
2021-09-03 08:07:09
|
(animation encoding is probably one of our lowest priorities though, I don't think jxl can compete with video codecs in that space except maybe for some niche use cases like CGA game footage)
|
|
|
fab
|
2021-09-03 08:57:21
|
priority is photon noise
|
|
2021-09-03 08:57:30
|
until 17 september at least
|
|
2021-09-03 08:58:01
|
making a release that doesn't utilize photon noise well will confuse users
|
|
|
Cool Doggo
|
2021-09-03 02:53:49
|
What does photon noise do?
|
|
|
|
Deleted User
|
2021-09-03 02:59:30
|
It noises photons.
|
|
|
fab
|
2021-09-03 03:50:03
|
answer right
|
|
2021-09-03 03:50:34
|
it distributes noise in a way that makes the image compress better or look sharper etc
|
|
2021-09-03 03:50:42
|
sharper with custom settings
|
|
2021-09-03 03:50:53
|
is one thing more
|
|
2021-09-03 03:51:17
|
the noise will not look sharper or more evident than in the original image
|
|
2021-09-03 03:51:26
|
sometimes it can be the same
|
|
2021-09-03 03:51:30
|
but never visually lossless
|
|
2021-09-03 03:51:51
|
but i guess dev can make visually lossless image with photon noise
|
|
2021-09-03 03:52:06
|
anyway normal people shouldn't use it because it can do damage to your photos
|
|
2021-09-03 03:52:29
|
ziemek is interested and also made a patch about squoosh
|
|
2021-09-03 03:52:42
|
squoosh.app wasm converter (jpeg xl beta)
|
|
|
spider-mario
|
|
Cool Doggo
What does photon noise do?
|
|
2021-09-03 03:52:45
|
it does ~this: https://photonstophotos.net/Charts/PTC.htm#Samsung%20Galaxy%20S6(IMX240)
|
|
2021-09-03 03:52:53
|
(the choice of camera is just an example)
|
|
2021-09-03 03:53:24
|
it makes the noise vary as a function of light intensity with a curve that has that general shape with three 'regimes'
|
|
|
fab
|
2021-09-03 03:54:07
|
thanks
|
|
|
Cool Doggo
|
|
fab
it distributes noises that makes image compress better the noise or look sharper etc
|
|
2021-09-03 06:52:08
|
does it make the file size larger?
|
|
2021-09-03 06:52:14
|
or is it similar
|
|
|
fab
|
2021-09-03 06:54:21
|
very similar
|
|
2021-09-03 06:54:27
|
like 200 bytes more
|
|
2021-09-03 06:54:41
|
but to get the same visual quality it always weighs more
|
|
2021-09-03 06:54:48
|
it reduces efficiency
|
|
|
Cool Doggo
|
2021-09-03 06:55:47
|
oh thats cool
|
|
|
fab
|
2021-09-03 06:56:14
|
for devs yes
|
|
2021-09-03 06:56:42
|
because like they can
|
|
2021-09-03 06:56:44
|
just wait for the 16/09/2021 build of jpeg xl then
-d 0.934 -s 7 --epf=1 --noise=1 --faster_decoding=4 --photonnoise=13241 -I 0.994 --patches=0
it will look a bit smoother
|
|
2021-09-03 06:57:28
|
so i try to predict the future: photon noise will be performant even in complex conditions
|
|
2021-09-03 06:57:38
|
even with the worst settings
|
|
2021-09-03 06:58:00
|
i don't think dev make miracles
|
|
2021-09-03 06:58:16
|
but surely they won't release 0.6.0 with just some bug fixes and no improvements
|
|
2021-09-03 06:58:26
|
i think some improvements on new features they will make
|
|
2021-09-03 06:58:50
|
i don't think i have understood how to apply photon noise
|
|
|
_wb_
|
2021-09-03 06:58:55
|
I think the next thing that should be tried, is estimating the noise of an input image, to set that param automatically (or have an option to let the encoder choose it)
|
|
2021-09-03 07:00:10
|
Also if it can be denoised before encode and noise applied again, that would be probably better for compression (though riskier for fidelity)
|
|
|
spider-mario
|
2021-09-03 08:18:20
|
I have written a toy tool that can take an image and its denoised version and uses the difference to estimate a parameter to pass to `--photon_noise`
|
|
2021-09-03 08:18:31
|
but it's more of a quick attempt and it shouldn't be used blindly
|
|
2021-09-03 08:19:13
|
(also, I think the intensity of the noise generated by `cjxl` is stronger than intended, so the result of my toy tool currently needs to be multiplied)
|
|
2021-09-03 08:20:28
|
as it turns out, the answer to this 'TODO: verify more thoroughly whether the denominator is correct' https://github.com/libjxl/libjxl/blob/63a481adeb6cba94c726110a5643e9d06729e8e6/lib/jxl/enc_photon_noise.cc#L75-L76 is 'it probably isn't'
|
|
2021-09-03 08:23:08
|
in any case, the automatic detection of noise is a tricky subject with a lot of possible pitfalls
|
|
|
_wb_
|
2021-09-03 08:25:40
|
yes, and it's a bit at the edge of the scope of what a codec should be doing imo
|
|
|
nathanielcwm
|
2021-09-04 05:45:03
|
just curious <@!794205442175402004> approximately how fast can a jxl image be encoded with cjxl on a quad core A5 cpu with defaults?
|
|
2021-09-04 05:45:22
|
or is this too vague....
|
|
|
diskorduser
|
2021-09-04 05:46:45
|
It can encode 1x1 image pretty fast.
|
|
|
nathanielcwm
|
2021-09-04 05:47:06
|
haha
|
|
2021-09-04 05:47:43
|
so too vague huh
|
|
2021-09-04 05:48:51
|
say around 6000x4000
|
|
2021-09-04 05:50:58
|
i doubt anyone has A5 chips on hand tho
|
|
|
diskorduser
|
2021-09-04 05:51:30
|
Idk. Lol.
|
|
|
nathanielcwm
|
2021-09-04 05:53:01
|
has anyone even tried compiling cjxl for armv7? <:kekw:808717074305122316>
|
|
|
diskorduser
|
2021-09-04 05:53:38
|
It would work. Why not
|
|
|
nathanielcwm
|
2021-09-04 05:55:13
|
i think the closest thing i've got is a device with A9 cores?
|
|
2021-09-04 05:55:43
|
i don't think the device they are in still works tho
|
|
2021-09-04 05:56:36
|
<:kekw:808717074305122316> <@!263309374775230465> apparently A5 supports NEON
|
|
2021-09-04 05:56:42
|
well has the ability to support it
|
|
2021-09-04 05:56:46
|
but it's an optional feature
|
|
2021-09-04 05:57:27
|
looks like A5 was a successor to A9 & A11?
idk arm chips very well
|
|
2021-09-04 05:58:03
|
there's not many socs that use A5 from the looks of it
|
|
2021-09-04 05:58:36
|
snapdragon s4 play are prolly the most common?
|
|
|
diskorduser
|
2021-09-04 05:59:18
|
You can compile cjxl without neon if the processor doesn't support it.
|
|
|
nathanielcwm
|
2021-09-04 05:59:46
|
how much perf loss is there tho?
|
|
|
diskorduser
|
2021-09-04 06:00:03
|
Idk. I always compiled with neon
|
|
|
nathanielcwm
|
2021-09-04 06:00:08
|
lol
|
|
2021-09-04 06:00:20
|
i mean if you're using a modern arm device it should 100% have neon no?
|
|
|
diskorduser
|
2021-09-04 06:00:39
|
Yeah. Mostly all arm nowadays has neon
|
|
2021-09-04 06:00:52
|
Snapdragons , Exynos and mediateks
|
|
|
nathanielcwm
|
2021-09-04 06:02:06
|
i wonder what soc the new bionz chips in the a7s3 use
|
|
2021-09-04 06:05:55
|
<:kekw:808717074305122316> <@!263309374775230465> well i can't find any a7s3 teardowns for some reason
|
|
2021-09-04 06:06:06
|
wonder why <:kekw:808717074305122316>
|
|
2021-09-04 06:13:54
|
<:monkaMega:809252622900789269>
|
|
|
_wb_
|
2021-09-04 06:14:06
|
It would be interesting to do some speed benchmarking of cjxl on different cpu types.
|
|
|
nathanielcwm
|
2021-09-04 06:14:32
|
well pi zeros are tiny so i don't think it's as big of a problem on there
|
|
|
_wb_
|
2021-09-04 06:14:41
|
We did test arm64 quite a bit because it is important for phones
|
|
2021-09-04 06:14:58
|
Older arm, not so much
|
|
|
nathanielcwm
|
2021-09-04 06:15:49
|
well tbh what spawned this line of thought is that apparently most of the sony alpha cams except for the a1 and a7s3 still use A5 socs?
|
|
|
w
|
2021-09-04 06:15:52
|
is there a standard suite of tests to benchmark?
|
|
|
nathanielcwm
|
2021-09-04 06:16:28
|
is anyone still using a g2 or g3 tho?
|
|
2021-09-04 06:18:41
|
was that a choice of lg or does the snapdragon chip in there not support v8?
|
|
2021-09-04 06:19:20
|
cuz some newer phones ship with 32 bit versions of android to get around google minimum specs afaik
despite the soc supporting v8
|
|
2021-09-04 06:20:07
|
unfortunately it appears to be a hardware limitation
|
|
2021-09-04 06:20:55
|
so any device using a sd801 doesn't have v8
|
|
2021-09-04 06:21:49
|
first sd chips with v8 are the 808 and 810
but those use standard arm cores and not qualcomm cores
|
|
|
_wb_
|
2021-09-04 06:22:30
|
libjxl should work on 32-bit arm, I dunno about speed though
|
|
|
nathanielcwm
|
2021-09-04 06:23:04
|
maybe try installing lineage on the g2?
|
|
2021-09-04 06:23:09
|
it has official builds
|
|
2021-09-04 06:24:01
|
https://download.lineageos.org/d802
|
|
|
_wb_
|
2021-09-04 06:24:11
|
I still have an OpenPandora lying around somewhere, dunno if I could get it compiled on that
|
|
|
nathanielcwm
|
2021-09-04 06:24:52
|
oh....
i thought u would use the g3 for that sort of thing
|
|
2021-09-04 06:26:31
|
:monkas
|
|
|
diskorduser
|
2021-09-04 06:44:24
|
Why? Don't you use magisk?
|
|
2021-09-04 06:46:59
|
You can use mi remote app?
|
|
|
nathanielcwm
|
2021-09-04 06:47:50
|
https://forum.xda-developers.com/t/port-lg-quickremote-for-aosp-mm.3301931/
|
|
2021-09-04 06:47:55
|
this is really old tho <:kekw:808717074305122316>
|
|
|
diskorduser
You can use mi remote app?
|
|
2021-09-04 06:48:46
|
mi remote lacks some devices tho afaik
|
|
2021-09-04 06:48:57
|
not sure about the quality of the lg remote
|
|
2021-09-04 06:49:20
|
i remember the remote that came bundled with samsung phones had much better compatibility than the xiaomi app tho
|
|
2021-09-04 06:52:13
|
the xda page for the g3 appears dead <:kekw:808717074305122316>
|
|
2021-09-04 06:52:34
|
looks like the g2 has better custom rom support than the g3 <:kekw:808717074305122316>
|
|
2021-09-04 06:54:07
|
:monkas
|
|
2021-09-04 06:54:44
|
i feel like both phones probably need some parts replaced
|
|
2021-09-04 06:54:57
|
but for a device this old it's prolly not worth the effort
|
|
2021-09-04 06:55:05
|
it's a phone not a pc
|
|
2021-09-04 06:57:17
|
u can get something much newer within 50 usd afaik?
|
|
2021-09-04 06:57:21
|
at least before covid
|
|
2021-09-04 06:58:48
|
is there a memes channel here btw?
|
|
2021-09-04 06:58:53
|
or do they just go in offtopic
|
|
|
_wb_
|
2021-09-04 07:00:33
|
Depends, if they're on topic they can go in <#794206087879852106> 🙂
|
|
|
nathanielcwm
|
2021-09-04 07:59:10
|
🤔
|
|
2021-09-04 07:59:20
|
probably the pentium???????????
|
|
2021-09-04 08:01:21
|
ah ok
|
|
2021-09-04 08:01:31
|
wait is this with all simd disabled?
|
|
2021-09-04 08:01:42
|
or are those the numbers with ssse3 and sse4 enabled
|
|
2021-09-04 08:02:03
|
:monkas
|
|
2021-09-04 08:04:32
|
so how fast is the pentium without simd?
|
|
|
fab
|
2021-09-04 08:05:23
|
galaxy s4 processors
|
|
2021-09-04 08:05:25
|
my old phone
|
|
2021-09-04 08:05:36
|
now i have xiaomi mi 11 lite 5g
|
|
2021-09-04 08:05:39
|
i betrayed samsung
|
|
2021-09-04 08:05:48
|
and i didn't wait for the a52s 5g
|
|
|
nathanielcwm
|
|
fab
galaxy s4 processors
|
|
2021-09-04 08:05:50
|
is urs exynos or qualcomm?
|
|
|
fab
|
|
nathanielcwm
is urs exynos or qualcomm?
|
|
2021-09-04 08:06:03
|
no is without SIM card
|
|
2021-09-04 08:06:09
|
i think it is the same processor
|
|
|
nathanielcwm
|
|
fab
|
2021-09-04 08:06:13
|
no exynos
|
|
|
nathanielcwm
|
2021-09-04 08:06:23
|
help im confused
|
|
|
fab
|
2021-09-04 08:06:23
|
qualcomm
|
|
|
nathanielcwm
|
2021-09-04 08:06:24
|
<:kekw:808717074305122316>
|
|
2021-09-04 08:06:28
|
ah ok
|
|
2021-09-04 08:06:48
|
that should be a sd 800 then i think?
|
|
|
fab
|
2021-09-04 08:06:57
|
the xiaomi has snapdragon 780
|
|
2021-09-04 08:07:14
|
is about the same as 778g
|
|
2021-09-04 08:07:17
|
so 40% faster
|
|
2021-09-04 08:07:23
|
as many italian reviews say
|
|
2021-09-04 08:07:33
|
the a52 4g makes only 1532 in benchmarks
|
|
2021-09-04 08:07:58
|
and on a phone it matters because 60 hz is not enough for a phone if you need custom applications and you plan to install a lot of folders
|
|
2021-09-04 08:08:10
|
90 hz is a bit fast for me
|
|
2021-09-04 08:08:24
|
my eyes hurt with my font suduwe at 90 hz
|
|
2021-09-04 08:08:38
|
usually i use roboto and suduwe on phone
|
|
2021-09-04 08:08:49
|
on computer i also have neth sans for interface of explorer
|
|
2021-09-04 08:08:57
|
<#806898911091753051>
|
|
|
nathanielcwm
|
|
fab
and on a phone it matters because 60 hz is not enough for a phone if you need custom applications and you plan to install a lot of folders
|
|
2021-09-04 08:09:02
|
the refreshrate of the screen shouldn't affect benchmark numbers
|
|
2021-09-04 08:09:24
|
oh ye i think i saw someone talking about not being able to disable highway
|
|
|
fab
|
|
nathanielcwm
the refreshrate of the screen shouldn't affect benchmark numbers
|
|
2021-09-04 08:09:35
|
i have bluecord and i uninstalled discord because of not getting two different notifications
|
|
2021-09-04 08:09:58
|
also kiwi browser without any adblock because there aren't mobile extensions in it
|
|
2021-09-04 08:10:07
|
firefox is pushing allowing blogs reviews to extension
|
|
2021-09-04 08:10:12
|
to reviews useless adblock
|
|
|
nathanielcwm
|
|
fab
my eyes hurt with my font suduwe at 90 hz
|
|
2021-09-04 08:10:17
|
ur font makes my eyes hurt from the screenshots u send <:kekw:808717074305122316>
|
|
|
fab
|
2021-09-04 08:10:24
|
to people they are useful, especially the ones who watch video
|
|
2021-09-04 08:10:37
|
but to me the benefits are more
|
|
2021-09-04 08:11:01
|
i can't absolutely read 56 px long messages with whitney
|
|
2021-09-04 08:11:09
|
on phone resolution is harder
|
|
2021-09-04 08:11:21
|
suduwe is super readable on bluecord
|
|
2021-09-04 08:11:35
|
roboto is legible in google news
|
|
2021-09-04 08:11:38
|
i wouldn't change that
|
|
2021-09-04 08:12:28
|
|
|
2021-09-04 08:12:31
|
|
|
2021-09-04 08:12:36
|
and kiwi browser
|
|
2021-09-04 08:12:56
|
i also have on a usb drive one that is less updated
|
|
2021-09-04 08:13:03
|
this i downloaded on an indian website
|
|
2021-09-04 08:13:35
|
screenshots with suduwe weigh a lot, even double
|
|
2021-09-04 08:15:19
|
in the already-created dcim/Bluecord/fonts, copy the otf and it will work even with ttf
|
|
2021-09-04 08:15:23
|
but otf is better
|
|
2021-09-04 08:31:03
|
i use neth sans for explorer, lucida console for notepad, suduwe for web fonts both in kiwi browser and firefox PC, I have also tanha font enabled. lucida grande and azeret mono should be enough but anyway i prefer other fonts.
|
|
2021-09-04 08:31:49
|
sorry for talking about refresh rate of screens, i should have used off topic earlier
|
|
2021-09-04 08:32:09
|
this server is losing purpose
|
|
|
w
|
2021-09-04 08:32:26
|
<:thinkraging:427900399027093506>
|
|
|
diskorduser
|
2021-09-04 08:32:51
|
Compile cjxl on your phone
|
|
|
w
|
2021-09-04 08:33:29
|
2 hours in, 40% done compiling on a pi zero
|
|
|
diskorduser
|
2021-09-04 08:33:53
|
Compiling what? Libjxl?
|
|
|
w
|
2021-09-04 08:33:56
|
yeah
|
|
2021-09-04 08:34:09
|
bruh
|
|
2021-09-04 08:35:01
|
if it's not static by default it's not building static
|
|
|
fab
|
2021-09-04 08:35:45
|
in my BT earphones the 27 watt charger doesn't fit
|
|
2021-09-04 08:35:55
|
i use the 5 watt charger
|
|
2021-09-04 08:38:46
|
|
|
2021-09-04 08:38:53
|
That's the plugin for wordpress
|
|
2021-09-04 08:39:54
|
old 2019 one
|
|
|
diskorduser
|
|
fab
in my BT earphones the 27 watt charger doesn't fit
|
|
2021-09-04 08:40:10
|
Who asked about it?
|
|
|
fab
|
2021-09-04 08:40:10
|
it can be downloaded new i think without updating
|
|
|
diskorduser
Who asked about it?
|
|
2021-09-04 08:40:20
|
nobody
|
|
|
diskorduser
|
2021-09-04 08:40:45
|
Then why <:WhatThe:806133036059197491>
|
|
2021-09-04 08:42:26
|
Nobody asked, and it's also not related to jxl. Not related to images. You're becoming a spammer, fabian.
|
|
2021-09-04 08:42:47
|
At least post them in offtopic
|
|
|
fab
|
|
w
|
2021-09-04 08:44:57
|
how
|
|
2021-09-04 08:45:19
|
```-- Performing Test SJPEG_HAVE_FLAG___ARM_NEON__
-- Performing Test SJPEG_HAVE_FLAG___ARM_NEON__ - Failed
-- Performing Test SJPEG_HAVE_FLAG___ARM_NEON__
-- Performing Test SJPEG_HAVE_FLAG___ARM_NEON__ - Failed```
|
|
2021-09-04 08:46:42
|
oh, neon not supported on zero
|
|
|
diskorduser
|
2021-09-04 08:47:55
|
cjxl neon from termux.
|
|
|
|
veluca
|
|
diskorduser
Compile cjxl on your phone
|
|
2021-09-04 08:48:41
|
been there, done that
|
|
|
diskorduser
|
2021-09-04 08:48:51
|
I know
|
|
|
|
veluca
|
|
w
how
|
|
2021-09-04 08:49:27
|
nice to know! Not super surprised either
|
|
2021-09-04 08:49:34
|
nope
|
|
2021-09-04 08:49:54
|
I could make one
|
|
2021-09-04 08:50:31
|
let's see if I can still get into my pi
|
|
2021-09-04 10:05:05
|
huh
|
|
2021-09-04 10:05:12
|
G2 being your phone, right?
|
|
|
fab
|
2021-09-04 10:12:54
|
Can Xiaomi mi 11 lite 5g run termux without installing or rooting anything?
|
|
2021-09-04 10:13:11
|
https://www.camerafv5.com/devices/manufacturers/xiaomi/m2101k9g_renoir_1/
|
|
2021-09-04 10:13:48
|
Here you can see if your phone supports raw camera2 lenses or max analog sensitivity
|
|
|
|
veluca
|
2021-09-04 11:18:59
|
try these out...
|
|
2021-09-04 11:19:12
|
uh
|
|
2021-09-04 11:19:13
|
too big
|
|
2021-09-04 11:19:15
|
gimme a sec
|
|
2021-09-04 11:20:50
|
https://old.lucaversari.it/armv7-63a481ad/cjxl
|
|
2021-09-04 11:20:53
|
https://old.lucaversari.it/armv7-63a481ad/djxl
|
|
2021-09-04 11:22:06
|
also, I *think* I know what the issue is, and you can likely at least in part fix it with...
```
diff --git a/hwy/targets.h b/hwy/targets.h
index 7e34189..338d42b 100644
--- a/hwy/targets.h
+++ b/hwy/targets.h
@@ -129,7 +129,7 @@
#define HWY_BROKEN_TARGETS (HWY_AVX3)
// armv7be has not been tested and is not yet supported.
-#elif HWY_ARCH_ARM_V7 && (defined(__ARM_BIG_ENDIAN) || defined(__BIG_ENDIAN))
+#elif HWY_ARCH_ARM_V7 && defined(__ARM_BIG_ENDIAN)
#define HWY_BROKEN_TARGETS (HWY_NEON)
#else
```
|
|
|
_wb_
|
2021-09-04 11:55:57
|
Kernel too old? Didn't know that was a thing. Are we using syscalls that didn't exist yet in older kernels or something?
|
|
|
diskorduser
|
|
fab
Can Xiaomi mi 11 lite 5g run termux without installing or rooting anything?
|
|
2021-09-04 12:01:46
|
I have already told you that termux doesn't need root to compile libjxl
|
|
|
|
veluca
|
2021-09-04 01:37:30
|
ah, I'll make a toolchain for that kernel then
|
|
|
_wb_
Kernel too old? Didn't know that was a thing. Are we using syscalls that didn't exist yet in older kernels or something?
|
|
2021-09-04 01:39:18
|
glibc is compiled for a specific/range of kernel versions - and in a static binary, you bundle it
|
|
2021-09-04 01:39:26
|
crosstool-ng
|
|
|
_wb_
|
2021-09-04 01:40:40
|
Oh it's so static that even libc gets included? That explains things
|
|
|
|
veluca
|
2021-09-04 01:41:32
|
yup
|
|
2021-09-04 01:41:38
|
wouldn't work on Android otherwise
|
|
2021-09-04 01:46:12
|
they fixed the build system already
|
|
2021-09-04 01:46:18
|
I have to do it by hand 🙂
|
|
|
Scope
|
2021-09-04 01:51:42
|
<https://github.com/libjxl/libjxl/pull/534>
Also, if `resampling=2` is enabled by default, these bugs become even more critical:
<https://github.com/libjxl/libjxl/issues/459>
<https://github.com/libjxl/libjxl/issues/330>
|
|
2021-09-04 01:59:42
|
And maybe can be turned on even at a lower distance, I think it would be nice to compare metrics on a large set of different images at what distance `resampling=2` starts to be preferentially better for quality
|
|
|
|
veluca
|
2021-09-04 02:12:49
|
I updated the build, try again?
|
|
2021-09-04 02:23:25
|
odd... with vardct, right?
|
|
2021-09-04 02:23:55
|
think you can get a profile out of it?
|
|
2021-09-04 02:24:06
|
ah then makes sense
|
|
2021-09-04 02:24:18
|
modular is not super SIMD-fied
|
|
2021-09-04 02:24:39
|
nah
|
|
2021-09-04 02:24:47
|
not right now, at least
|
|
2021-09-04 02:27:23
|
tbf, modular mode is also basically single-threaded
|
|
2021-09-04 02:28:08
|
and there is *quite* some difference in the environment, too
|
|
2021-09-04 02:28:24
|
(different allocator, different thermals)
|
|
2021-09-04 02:29:00
|
anyway, more SIMD and more multithreading should absolutely be possible, just didn't get there yet π
|
|
2021-09-04 02:30:28
|
the real question is what it is you want to measure
|
|
2021-09-04 02:32:35
|
if you want to measure a mix of memory bandwidth and floating point muscle, then vardct might be good for that
|
|
2021-09-04 02:32:48
|
but CPU performance is very multidimensional
|
|
2021-09-04 02:35:29
|
I think your typical x86 cpu is better (wrt arm) in terms of memory bandwidth, cache, FP computations, branch predictor, superscalarity (maybe, maybe it's the same...)
|
|
2021-09-04 02:35:41
|
there's many things that go into CPU performance 🙂
|
|
|
diskorduser
|
2021-09-04 02:36:37
|
For real-time 16mp jxl for a camera app, I think only -s 3 is usable now.
|
|
|
|
veluca
|
2021-09-04 02:36:42
|
the instruction sets are reasonably comparable I think, except ARM has no equivalent of AVX or AVX512 (it has SVE, but that's different)
|
|
2021-09-04 02:37:20
|
likely if you want to encode jxl on a phone quickly you need to do it on the GPU
|
|
|
diskorduser
|
2021-09-04 02:38:59
|
So veluca will make jxl vulkan encoder :p
|
|
|
|
veluca
|
2021-09-04 02:41:44
|
depends on the program 🙂 if you're bandwidth limited, and access a large amount of memory, cache doesn't matter
|
|
2021-09-04 02:42:24
|
for cache, the most important question is "will your working set fit in the cache?"
|
|
2021-09-04 02:43:07
|
for zstd compression, likely the answer is no
|
|
2021-09-04 02:43:46
|
(also, lossless compression of text is traditionally relatively hard to parallelize...)
|
|
|
_wb_
|
2021-09-04 02:46:03
|
Photo is not video, I dunno if very fast encoding is actually _that_ necessary on devices that have gigabytes of ram. Could just keep photos uncompressed in memory and encode in the background...
|
|
2021-09-04 02:47:06
|
For video that doesn't work because you run out of memory quickly, but for photo... How many photos do you take per minute?
|
|
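_wb_'s idea above — buffer the burst in RAM and let encoding lag behind capture — is a classic producer/consumer setup. A minimal sketch (not how any camera app actually does it), with `encode()` as a stand-in for a real, slow encoder call; here it just reverses the bytes so the example is self-contained:

```python
import queue
import threading

def encode(frame: bytes) -> bytes:
    # Placeholder for the actual (slow) encoder invocation.
    return frame[::-1]

def encode_worker(jobs: queue.Queue, results: list) -> None:
    """Drain the job queue in the background, encoding each frame."""
    while True:
        frame = jobs.get()
        if frame is None:  # sentinel: capture is done
            break
        results.append(encode(frame))
        jobs.task_done()

jobs: queue.Queue = queue.Queue()
encoded: list = []
worker = threading.Thread(target=encode_worker, args=(jobs, encoded))
worker.start()

# The capture path just enqueues raw buffers and returns immediately,
# so the shutter is never blocked on the encoder.
for shot in [b"frame-1", b"frame-2", b"frame-3"]:
    jobs.put(shot)

jobs.put(None)
worker.join()
```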
|
BlueSwordM
|
|
_wb_
For video that doesn't work because you run out of memory quickly, but for photo... How many photos do you take per minute?
|
|
2021-09-04 02:58:03
|
20 photos per second on stuff like professional cameras, but they have dedicated hardware so that's not the same 🙂
|
|
2021-09-04 02:58:13
|
It's the CPU difference.
|
|
2021-09-04 02:59:52
|
The G4560 is a 2C/4T Skylake chip at 3.5GHz base clock, while the N3150 is a 1.6-2.0GHz Braswell chip...
|
|
2021-09-04 02:59:58
|
It's not even fair comparing them at all.
|
|
|
_wb_
|
|
BlueSwordM
20 photos per second on stuff like professional cameras, but they have dedicated hardware so that's not the same 🙂
|
|
2021-09-04 03:08:07
|
You can fit 30 uncompressed 8-bit 12 megapixel images in 1 GB. So on a phone, you could just keep a photo burst in memory and take your time to encode it, if you don't need to do many bursts in a short period of time.
|
|
|
BlueSwordM
|
|
_wb_
You can fit 30 uncompressed 8-bit 12 megapixel images in 1 GB. So on a phone, you could just keep a photo burst in memory and take your time to encode it, if you don't need to do many bursts in a short period of time.
|
|
2021-09-04 03:27:46
|
Indeed, but the problem is that there are now cameras outputting real 45MP 16-bit raw images apparently <:monkaMega:809252622900789269>
|
|
|
spider-mario
|
2021-09-04 03:33:44
|
nah, 16-bit raw are essentially pointless
|
|
|
BlueSwordM
|
|
spider-mario
nah, 16-bit raw are essentially pointless
|
|
2021-09-04 03:34:04
|
Yeah, but camera manufacturers are apparently doing that kind of stupid stuff.
|
|
2021-09-04 03:35:05
|
IIRC, that's what the new Canon stuff does:
16-bit raw at 20p/s > HEIC 10-bit > HEIC picture
|
|
|
diskorduser
|
|
spider-mario
nah, 16-bit raw are essentially pointless
|
|
2021-09-04 03:45:36
|
Why? I can recover details on 16bit raw.
|
|
2021-09-04 03:46:04
|
I mean shadows
|
|
2021-09-04 03:46:23
|
It's not possible on jpg.
|
|
|
spider-mario
|
2021-09-04 04:06:04
|
I mean compared to 14-bit raw
|
|
2021-09-04 04:06:18
|
the two additional bits bring approximately nothing in current cameras
|
|
|
diskorduser
|
2021-09-04 04:09:01
|
When sensors output 16bit raw, isn't the point of raw/dng to store it losslessly?
|
|
|
_wb_
|
2021-09-04 04:10:17
|
When I looked at raws in the past, the effective range of G was larger than B and R on the raws I saw
|
|
|
spider-mario
|
2021-09-04 04:10:38
|
sensors output analog voltage, which you then digitize, and as long as your chosen quantization step is fine enough to encode the noise floor, additional bits will just be more noise
|
|
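spider-mario's point about the quantization step and the noise floor can be checked numerically: if the analog noise is already several 14-bit LSBs wide, quantizing at 16 bits instead of 14 barely changes the total error. A toy simulation (the 4-LSB noise floor is an arbitrary illustrative choice, not a measured sensor figure):

```python
import random

def quantize(x: float, bits: int) -> float:
    """Uniform quantization of x in [0, 1] to the given bit depth."""
    levels = (1 << bits) - 1
    return round(x * levels) / levels

random.seed(1)
step14 = 1.0 / ((1 << 14) - 1)
sigma = 4 * step14              # noise floor ~4 LSBs wide at 14 bits
clean = 0.5                     # the "true" analog value
noisy = [min(max(clean + random.gauss(0.0, sigma), 0.0), 1.0)
         for _ in range(50_000)]

def rms_error(bits: int) -> float:
    """RMS error vs the clean value after quantizing the noisy samples."""
    errs = [(quantize(s, bits) - clean) ** 2 for s in noisy]
    return (sum(errs) / len(errs)) ** 0.5

rms14 = rms_error(14)
rms16 = rms_error(16)
# rms14 and rms16 are both dominated by the noise itself: the two
# extra bits change the total error by a fraction of a percent.
```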
|
_wb_
|
2021-09-04 04:10:42
|
Like 14-bit G, 12-bit R, 11-bit B or something
|
|
|
spider-mario
|
2021-09-04 04:11:10
|
https://www.aes.org/e-lib/browse.cfm?elib=4523
|
|
2021-09-04 04:11:28
|
https://www.strollswithmydog.com/sub-bit-signal/
|
|
|
_wb_
|
2021-09-04 04:12:57
|
Anyway, 45 MP in 16-bit RGGB is 360 MB for the uncompressed raw
|
|
|
spider-mario
|
2021-09-04 04:13:30
|
that's demosaicized, isn't it?
|
|
|
_wb_
|
2021-09-04 04:13:58
|
Does get a bit much to buffer in case of bursts, but for single photos that's still ok, no?
|
|
2021-09-04 04:14:34
|
Ah yes before demosaicing it would be /4 the pixels
|
|
2021-09-04 04:14:47
|
So 90 MB
|
|
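The buffer-size arithmetic in this exchange checks out; a quick sanity check of the figures (decimal megabytes for the raw sizes, a binary GiB for the "fit in 1 GB" count, which lands at ~29, i.e. the quoted "30"):

```python
# 12 megapixels, 8-bit RGB: 3 bytes per pixel.
photo_8bit = 12_000_000 * 3          # 36 MB per uncompressed photo
fit_in_gib = (1024**3) // photo_8bit  # how many fit in 1 GiB of RAM

# 45 MP sensor, 16 bits per sample:
demosaiced = 45_000_000 * 4 * 2      # counting RGGB as 4 samples/pixel: 360 MB
bayer_raw = 45_000_000 * 2           # one sample per photosite: 90 MB
```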
|
diskorduser
|
2021-09-04 04:15:15
|
Why not just store them to storage at -s 3 and recompress them losslessly at -s 7 or -s 9?
|
|
|
_wb_
|
2021-09-04 04:15:17
|
I always count one RGGB pixel as one macropixel but it's actually four pixels
|
|
|
diskorduser
Why not just store them to storage at -s 3 and recompress them losslessly at -s 7 or -s 9?
|
|
2021-09-04 04:15:49
|
Yes, for lossless raws that would work
|
|
2021-09-04 04:17:16
|
If you want to do lossy, you could also do like -d 0.3 -e 3 initially, and then later transcode to -d 1 -e 7 or whatever
|
|
2021-09-04 04:23:21
|
My point is: for video, you basically need realtime encoding (unless you can afford huge fast enough memory, like what professional slowmo cameras do), but for still photos, there might be more options than fast hw encode.
|
|
|
diskorduser
|
2021-09-04 04:24:16
|
If the user wants to just take a photo and upload it (Facebook or twitter or Instagram), it needs real-time encoding
|
|
2021-09-04 04:24:27
|
But they are lower resolution anyway
|
|
2021-09-04 04:24:34
|
So that's not a problem
|
|
|
Cool Doggo
|
2021-09-04 04:24:54
|
twitter video quality
|
|
2021-09-04 04:24:57
|
<a:PepeLaugh:752777118412308500>
|
|
|
diskorduser
|
|
spider-mario
sensors output analog voltage, which you then digitize, and as long as your chosen quantization step is fine enough to encode the noise floor, additional bits will just be more noise
|
|
2021-09-04 06:58:32
|
From Wikipedia, `Exmor is the name of a technology Sony implemented on some of their CMOS image sensors. It performs on-chip analog/digital signal conversion and two-step noise reduction in parallel on each column of the CMOS sensor` if A/D happens inside the chip, what can manufacturers do?
|
|
2021-09-04 07:00:24
|
No way to get 14bit raw on latest Sony smartphone/exmor sensors? IDK
|
|
|
OkyDooky
|
2021-09-04 08:34:54
|
Do you happen to know off the top of your head what default parameters these are? I'm curious.
(<@532010383041363969>)
|
|
|
|
veluca
|
2021-09-04 08:49:47
|
what do you mean exactly?
|
|
2021-09-04 08:50:20
|
if you mean "the default level of smoothing that Gaborish produces"... I could find out the convolution kernel, if that's useful
|
|
|
Traneptora
|
2021-09-04 10:42:13
|
How would I implement JXL for discord?
|
|
2021-09-04 10:42:53
|
I currently have a webserver which receives image submissions for a particular problem, scans the images, and if they pass the test, it uploads them to a discord channel, and then deletes them locally
|
|
2021-09-04 10:43:50
|
I'm thinking of converting them to JXL first to make life a bit easier
|
|
2021-09-04 10:44:04
|
saving upload bandwidth on my webserver
|
|
2021-09-04 10:45:30
|
chrome contains support for JXL with a flag, but I don't know how to do that
|
|
2021-09-04 11:08:03
|
also, firefox is reporting that the image cannot be displayed because it contains errors, for a JXL image
|
|
2021-09-04 11:08:19
|
try opening this file in firefox
|
|
2021-09-04 11:08:28
|
|
|
2021-09-04 11:08:59
|
(attempting to view it from the CLI using `eom 'https://cdn.discordapp.com/attachments/794206170445119489/883850999758991400/lenna.jxl'` works)
|
|
|
Fraetor
|
2021-09-04 11:09:28
|
Are you using firefox nightly?
|
|
|
Traneptora
|
2021-09-04 11:09:49
|
I'm not, but I did enable jxl via about:config
|
|
|
Fraetor
|
2021-09-04 11:10:14
|
The jxl decoder is not (currently) included in the non nightly build of Firefox.
|
|
2021-09-04 11:10:23
|
So the option does nothing.
|
|
|
Traneptora
|
2021-09-04 11:10:29
|
ah, you mean even with `image.jxl.enabled` it won't work if it's not nightly
|
|