|
Koromaru Korüko
|
2022-01-30 02:06:12
|
i mean, i can write up a quick macro that defines everything,
and only provides the getters and setters when building for wasm.
|
|
2022-01-30 02:06:30
|
it would still mean that all the structs need to be changed to use said macro
|
|
|
|
veluca
|
2022-01-30 02:08:41
|
so if I understand correctly the problem is that the C API is, well, C
|
|
|
Koromaru Korüko
|
2022-01-30 02:08:48
|
e.g.:
```
#include <cstdint>

#ifdef __EMSCRIPTEN__
#define DefineMember(valuetype, name) \
  valuetype name; \
  valuetype Get_##name(); \
  void Set_##name(valuetype value);
#define DefineMemberCode(type, name) \
  decltype(type::name) type::Get_##name() { return this->name; } \
  void type::Set_##name(decltype(type::name) value) { this->name = value; }
#else
#define DefineMember(valuetype, name) valuetype name;
#define DefineMemberCode(type, name)
#endif

struct MyClass {
  DefineMember(uint32_t, MyMember)
};

// ... later in some .cc file
DefineMemberCode(MyClass, MyMember)
```
|
|
|
|
veluca
|
2022-01-30 02:08:49
|
and you can't really have methods
|
|
|
Koromaru Korüko
|
2022-01-30 02:09:04
|
not exactly, the issue is embind itself
|
|
|
|
veluca
|
2022-01-30 02:09:26
|
ah sure, then do whatever xD
|
|
2022-01-30 02:09:54
|
I'd even be fine with a script that generates those bindings -- if it's portable
|
|
|
Koromaru Korüko
|
2022-01-30 02:09:57
|
it has a constraint that only allows `decltype(Type::MemberType) Type::Function()` for the getter, and vice versa for the setter
|
|
2022-01-30 02:10:30
|
if i try to declare a similar function signature outside the type, the constraint throws.
|
|
|
|
veluca
|
2022-01-30 02:10:46
|
it doesn't allow returning (const) references?
|
|
|
Koromaru Korüko
|
2022-01-30 02:11:32
|
nope, it validates the signature, I've even tried abstracting the call behind a naked pointer
|
|
2022-01-30 02:11:38
|
it still threw.
|
|
2022-01-30 03:05:19
|
so I'm thinking about whether or not to change the code defining a lot of the structs to this.
```c++
#ifndef OVERRIDE_TYPE_BUILDER
#define Struct(type, code) \
struct type \
{ \
code \
};
#define Member(valuetype, member) \
valuetype member; \
valuetype Get_##member() {return this->member;} \
void Set_##member(valuetype value) {this->member=value;}
#endif
Struct(JxlPreviewHeader,
Member(uint32_t, xsize)
Member(uint32_t, ysize)
)
```
because using this I can generate all the required bindings with just a few preprocessor macros.
|
|
2022-01-30 03:20:39
|
so if I were building for wasm I could swap the macros out for something like this:
```c++
#define OVERRIDE_TYPE_BUILDER 1
#define Struct(type, code) \
{ \
using def_type = type; \
auto def = emscripten::class_<type>(#type); \
code \
}
#define __Member2(member, getter, setter) def.property(#member, &getter, &setter);
#define Member(valuetype, member) __Member2(member, def_type::Get_##member, def_type::Set_##member)
#include "path_to_file.h"
#undef OVERRIDE_TYPE_BUILDER
```
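for reference, this is roughly what those redefined macros expand to for the JxlPreviewHeader example above, wrapped in embind's registration macro (EMSCRIPTEN_BINDINGS, class_ and .property are standard embind; the Get_/Set_ members are the ones the native-side Member() macro would generate, so this is a hand-expanded sketch, not tested code):
```c++
#include <emscripten/bind.h>

EMSCRIPTEN_BINDINGS(jxl_structs) {
  {
    // mirrors the Struct()/Member() expansion above
    using def_type = JxlPreviewHeader;
    auto def = emscripten::class_<JxlPreviewHeader>("JxlPreviewHeader");
    def.property("xsize", &def_type::Get_xsize, &def_type::Set_xsize);
    def.property("ysize", &def_type::Get_ysize, &def_type::Set_ysize);
  }
}
```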
|
|
2022-01-30 03:21:29
|
and then, using this inside a function, I can just generate the bindings,
making it so there is no need to edit any additional files after editing the struct definition
|
|
2022-01-30 03:22:06
|
simply put, it will just work, and all edits will be propagated to all necessary components.
|
|
|
_wb_
|
2022-01-30 03:28:55
|
Maybe do one struct like that as an example and make a PR for it so it can be discussed with other jxl devs?
|
|
2022-01-30 03:29:31
|
I don't have a strong opinion on this
|
|
|
Koromaru Korüko
|
2022-01-30 03:29:40
|
ok
|
|
2022-01-30 03:30:14
|
I'm still battling with embind, to see the exact layout necessary.
|
|
2022-01-30 06:09:10
|
there, I have made an example implementation, as well as a usage example.
https://github.com/libjxl/libjxl/pull/1136
|
|
|
|
Hello71
|
2022-01-30 09:33:08
|
i'm not that familiar with wasm "ecosystem", but i thought the point of emscripten was to use unmodified C and C++ programs. why are such "invasive" changes needed?
|
|
|
Koromaru Korüko
|
2022-01-30 10:15:12
|
um no, not really
|
|
2022-01-30 10:15:34
|
the point of wasm is to be able to run native-assembly-like code in a sandboxed environment
|
|
2022-01-30 10:16:00
|
you can by all means import most C/C++ programs and directly call the main function, with varying degrees of success
|
|
2022-01-30 10:16:40
|
but to actually use the library you need to provide bindings manually, otherwise the JS won't know how to utilize the C/C++ or wasm asm
|
|
2022-01-30 10:17:23
|
in short,
wasm exports different functions, that's about it.
and what is available to you depends on how you compiled your wasm.
|
|
2022-01-30 10:18:37
|
for emscripten, you need to export all the types used, for the binder to create proper JS endpoints for all types and functions.
|
|
2022-01-30 10:19:27
|
so the invasive change is to enable future compatibility.
so that changing the type will immediately be reflected in the JS bindings.
|
|
2022-01-30 10:21:05
|
it also allows us to quickly create generators for other languages, such as C# and Java,
by simply creating an executable that, once run, writes out all the structure data and the imports in their native language,
allowing us to both easily maintain multiple wrappers and easily add additional ones.
|
|
|
Scope
|
2022-01-31 10:31:04
|
Also, is it a finished PR and just need a review from someone else to merge it?
https://github.com/libjxl/libjxl/pull/1046
|
|
|
_wb_
|
2022-01-31 11:28:26
|
yes, I think so
|
|
|
Scope
|
2022-01-31 02:16:48
|
> Add a fast-XYB-to-sRGB8 stage
<https://github.com/libjxl/libjxl/pull/1140>
<:Poggers:805392625934663710> It will speed up encoding/decoding at the expense of less precision?
|
|
|
|
veluca
|
2022-01-31 02:19:17
|
nah
|
|
2022-01-31 02:19:23
|
it's just a refactor
|
|
|
Scope
|
2022-01-31 03:11:33
|
And what is the new render-pipeline, is that what will be in d/cjxl_ng?
|
|
|
|
veluca
|
2022-01-31 03:12:57
|
it's unrelated
|
|
2022-01-31 03:13:35
|
mostly a refactor of dec_reconstruct.cc to make our life easier wrt fuzzbugs, implementation errors, and implementing more memory reductions / fast paths
|
|
|
Scope
|
2022-02-02 07:59:07
|
<:Hypers:808826266060193874>
https://github.com/libjxl/libjxl/pull/1148
|
|
2022-02-02 08:00:04
|
Any benchmarks or it's not that noticeable?
OK, here they are: <https://github.com/libjxl/libjxl/pull/1149> <:Stonks:806137886726553651>
|
|
|
_wb_
|
2022-02-02 10:57:47
|
There's still unnecessary memcpy going on, tricky to get rid of though
|
|
2022-02-02 12:06:14
|
they are members but visibility is set to private for some reason
|
|
2022-02-02 12:06:19
|
changed the visibility to public
|
|
|
Alyx
|
2022-02-06 05:07:54
|
With progressive decoding is it possible to decode specific regions of an image? `JxlDecoderSetImageOutCallback` seems the closest thing to it
|
|
|
_wb_
|
2022-02-06 05:43:06
|
The bitstream supports decoding regions of interest, but we haven't implemented that yet in libjxl, let alone have an api for it
|
|
|
BlueSwordM
|
|
_wb_
The bitstream supports decoding regions of interest, but we haven't implemented that yet in libjxl, let alone have an api for it
|
|
2022-02-06 05:47:15
|
Interesting idea regarding that: is it possible to stop progressive decode at some point?
|
|
|
Alyx
|
2022-02-06 05:47:36
|
Ah okay good to know, was going round and round in circles thinking I missed something obvious.
|
|
|
_wb_
|
|
BlueSwordM
Interesting idea regarding that: is it possible to stop progressive decode at some point?
|
|
2022-02-06 05:47:51
|
How do you mean?
|
|
|
BlueSwordM
|
|
_wb_
How do you mean?
|
|
2022-02-06 05:48:56
|
Basically, let's just say you have 4 progressive loads on a website before you show the full image.
To limit data usage on mobile for example, would it be possible to only load the full progressive image if you stop scrolling?
|
|
2022-02-06 05:49:27
|
IE, until X factor is met, is it possible to make the libjxl API not decode the image fully?
|
|
|
_wb_
|
2022-02-06 05:50:04
|
Well libjxl will just decode however much of the bitstream you give it
|
|
2022-02-06 05:50:53
|
To avoid using data, the browser would need to use http range requests
|
|
2022-02-06 05:51:50
|
Because if it requests the whole file, it cannot really abort the download, except by killing the whole https connection, but that's rather drastic
|
|
|
BlueSwordM
|
2022-02-06 05:52:56
|
True, true. I don't even know why I asked the question that way 😛
|
|
|
_wb_
|
2022-02-06 05:55:01
|
One thing we should add to the api is a function to get the offsets of specific progressive passes, so browsers know what range request to make after the initial one
|
|
|
VEG
|
|
_wb_
Because if it requests the whole file, it cannot really abort the download, except by killing the whole https connection, but that's rather drastic
|
|
2022-02-06 07:13:18
|
HTTP/2 and HTTP/3 should not require closing the connection, I guess
|
|
2022-02-06 07:13:53
|
Or they don't have any means to abort specific streams?
|
|
|
_wb_
|
2022-02-06 07:32:00
|
I don't think there is anything in http to abort
|
|
2022-02-06 07:32:21
|
Everything is based on request/response
|
|
|
Koromaru Korüko
|
2022-02-06 07:39:06
|
there was never anything to abort a request
|
|
2022-02-06 07:39:20
|
you just closed the connection.
and it was handled as a timeout
|
|
|
_wb_
|
2022-02-06 07:57:02
|
So that's not a good way to do progressive-save-data, you may want to close one image stream but not the whole connection. Range requests work though - but there the caveat is not losing speed by adding an extra roundtrip
|
|
2022-02-06 07:59:43
|
Requesting the first 1kb or whatever of every img src is a good strategy anyway, if only to avoid the layout shifts that can happen when trying to compute boxes while image dimensions (or at least their aspect ratios) aren't known yet.
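For illustration, that kind of partial fetch is just an ordinary Range request (URL and sizes made up here), with a follow-up `Range: bytes=1024-` request once the browser decides it wants the rest:
```
GET /img/hero.jxl HTTP/1.1
Host: example.com
Range: bytes=0-1023

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-1023/345678
Content-Length: 1024
```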
|
|
|
Koromaru Korüko
|
2022-02-06 07:59:49
|
there is no way to "close one image stream"
you have to close the underlying tcp connection
|
|
2022-02-06 08:00:04
|
given most browsers spawn multiple connections anyways.
|
|
|
_wb_
Requesting the first 1kb or whatever of every img src is a good strategy anyway, if only to avoid layout shifts that can happen when trying to compute boxes when image dimensions (or at least their aspect ratios) aren't known yet.
|
|
2022-02-06 08:01:36
|
it would be better to parse some partial data and send the wanted size to the client via xhr.
then the client can request that amount
|
|
|
_wb_
|
2022-02-06 08:04:05
|
With collaborating servers, nice things can be done
|
|
2022-02-06 08:04:54
|
I was just trying to picture something that would work also with dumb servers like what you would get on some shared hosting or something
|
|
2022-02-06 08:07:31
|
With collaborating servers, I think ideally the client just sends some kind of Client Hint with the img request that says "my viewport width is X, my data cost model is Y", and then the server responds with something suitable
|
|
|
Koromaru Korüko
|
2022-02-06 08:28:02
|
I mean, for sharding what you can do is precompile a list of values and put that into a json file next to the image file
|
|
2022-02-06 08:28:25
|
then the client can just request the json file with its precompiled values, and choose the most suitable one.
|
|
2022-02-06 08:29:14
|
which would still be significantly better than doing live conversions server-side, or storing multiple file versions, which is currently the practice for most CDNs Xd
|
|
|
The_Decryptor
|
2022-02-07 12:14:36
|
HTTP 2/3 has the notion of "I don't need this stream anymore, close it", browsers can use that to abort loading more data than needed
|
|
|
|
Deleted User
|
2022-02-07 01:20:32
|
Small off-topic: how did FLIF know when to truncate to get the exact downscaled preview?
https://github.com/FLIF-hub/FLIF/blob/1946adc6e3eb96090ae35daee663a7a2b0e63326/src/flif.cpp#L120
|
|
|
_wb_
|
2022-02-07 06:13:54
|
FLIF didn't really know, iirc. It just decoded to find the offsets.
|
|
2022-02-07 06:14:14
|
FUIF had offsets in the image header
|
|
2022-02-07 06:14:28
|
JXL has offsets in the frame header
|
|
|
Koromaru Korüko
|
|
The_Decryptor
HTTP 2/3 has the notion of "I don't need this stream anymore, close it", browsers can use that to abort loading more data than needed
|
|
2022-02-07 06:28:04
|
Yes, it's common practice to just close the underlying connection when you don't need any more data, and for that to then be handled as a timeout server-side. But this is still a bad practice, especially for http/3 where you can have multiplexing issues.
|
|
|
The_Decryptor
|
2022-02-07 06:33:01
|
Sure, but I wouldn't expect clients to just drop the connection if they're done with it, there's clean ways to signal that
|
|
|
_wb_
|
2022-02-07 06:33:53
|
What is the clean way to do it?
|
|
|
The_Decryptor
|
2022-02-07 06:35:25
|
In HTTP/2 you send a "RST_STREAM" frame with a "cancel" error code, which means you don't need the stream anymore
|
|
2022-02-07 06:35:50
|
They renamed it in HTTP/3 to "H3_REQUEST_CANCELLED", which means the same thing
|
|
2022-02-07 06:37:41
|
In both cases the connection stays alive, and other streams continue to make progress
|
|
|
VEG
|
|
_wb_
I don't think there is anything in http to abort
|
|
2022-02-07 07:15:31
|
But it is probably available at the lower level. HTTP/3 works through QUIC. I believe that QUIC should have some means to abort streams inside of a QUIC connection.
|
|
2022-02-07 07:17:22
|
Oh, it is already mentioned.
|
|
|
The_Decryptor
They renamed it in HTTP/3 to "H3_REQUEST_CANCELLED", which means the same thing
|
|
2022-02-07 07:20:55
|
Weird that it is a part of the HTTP/3, I would expect that it should be done at lower level, in the QUIC
|
|
2022-02-07 07:22:24
|
Oh, seems like QUIC still has this special frame type, and H3_REQUEST_CANCELLED is just an error code for this frame
|
|
|
The_Decryptor
|
2022-02-07 07:28:10
|
Yeah, they split the errors out into 2 categories, QUIC specific errors (Things like a malformed header, creating an invalid stream, etc.) that break the QUIC connection, and application errors that the QUIC transport is oblivious to (In this case QUIC just needs to know the stream is closed, not why it's being closed)
|
|
2022-02-07 07:29:33
|
Like there's a specific error code for "This content is only available over HTTP 1.1", that's not something a QUIC stack could ever hope to handle
|
|
|
_wb_
|
2022-02-07 07:34:31
|
Still, range requests is more robust than trying to cancel a request at the right time. Packets might still be on their way...
|
|
|
The_Decryptor
|
2022-02-07 07:37:35
|
Range requests feel more robust (Browsers already rely on them for video), and with a good CDN it wouldn't incur much of an RTT penalty
|
|
2022-02-07 07:38:09
|
e.g. Cloudflare has a <10ms RTT for me, I wouldn't notice in the slightest if it had to issue multiple requests for a cached image
|
|
|
Koromaru Korüko
|
2022-02-07 07:47:07
|
so per the rfc, an http/2 RST_STREAM requires the client to further process incoming packets for that stream.
meaning, depending on the browser, it will either discard that data or include it, which may cause issues depending on the implementation.
as per the rfc, an http/3 H3_REQUEST_CANCELLED should only be thrown by partial processing, wherein data from the request/response body was sent to some higher layer, i.e. the application layer. However it specifically states that if a full request/response is received the data may be sent to some higher layer, whereas if only a partial set of data is received, its contents shall be discarded.
|
|
2022-02-07 07:48:15
|
also if the content body was not sent to a higher layer, then
"Implementations SHOULD cancel requests by abruptly terminating any
directions of a stream that are still open. This means resetting the
sending parts of streams and aborting"
|
|
2022-02-07 07:48:50
|
idk if they mean to send proper error codes, or to just close the stream...
|
|
2022-02-07 07:49:49
|
but for http/3 there doesn't seem to be an acceptable way to exit a stream early without violating the rfc.
|
|
|
The_Decryptor
|
2022-02-07 08:21:59
|
I feel like this is going off topic, but I think that's for cases when the server cancels the request, the client should discard the response and retry
|
|
2022-02-07 08:22:43
|
If the client is the one that cancels the request it can do whatever it wants with the partial response, it's just a blob of data in RAM then, the HTTP spec has no say over it
|
|
|
Koromaru Korüko
|
2022-02-07 08:25:21
|
yes realistically it can, but as per the rfc it's not allowed to
|
|
2022-02-07 08:25:48
|
and while it wouldn't be the first time, I doubt browsers will implement much outside the rfc.
|
|
|
Alyx
|
2022-02-07 11:01:48
|
Any way to currently decode into a resolution other than the original? Ideally want to output multiple images with the same bitstream making use of `JxlDecoderRewind`
|
|
|
w
|
2022-02-07 11:06:03
|
isn't it better to decode to 1 image and scale the output image
|
|
|
|
Hello71
|
2022-02-09 12:22:13
|
well the decoder has knowledge of the dct block sizes and whatnot, it could upscale better with that information.
|
|
|
The_Decryptor
|
2022-02-09 12:24:21
|
Isn't that what the `-s` option for djxl does? It's a full size image but with only the lower level of detail
|
|
2022-02-09 12:24:28
|
Then you scale that down to the desired size
|
|
|
|
Hello71
|
2022-02-09 12:24:56
|
the same way that jpeg2png/jpeg-quantsmooth/etc works, if you know the dct coefficients you can use an alternative idct
|
|
2022-02-09 12:25:14
|
ah, i assumed they wanted to scale *up*. scaling down is (arguably) easier
|
|
|
Alyx
|
2022-02-09 11:07:13
|
Nah, it was for scaling down, but I just found it easier to scale after the decode. I'm scaling down in powers of 2, so it's not exactly an expensive task.
|
|
|
Koromaru Korüko
|
2022-02-12 01:30:43
|
I am breaking all the c/c++ convention rules:
|
|
2022-02-12 01:31:44
|
this somehow compiles... I hope it actually works
|
|
|
|
veluca
|
2022-02-12 02:02:29
|
both of those are pretty sketchy tbh 😄
|
|
|
|
Hello71
|
2022-02-12 02:07:39
|
gcc and llvm often use this kind of pattern
|
|
2022-02-12 02:09:34
|
`#define DATA F(a, b, c); F(d, e, f)` ... `#define F(x,y,z) f(x, y)`, `DATA`, `#undef F` ... `#define F(x,y,z) g(x, z)`, `DATA`, `#undef F`
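spelled out, that pattern looks something like this (all names here are placeholders, not gcc/llvm code): one list, expanded twice with different definitions of F
```c++
#include <cstdint>
#include <cstdio>

#define DATA F(width, uint32_t, "w") F(height, uint32_t, "h")

// first expansion: declare the struct members
struct Size {
#define F(name, type, label) type name;
  DATA
#undef F
};

// second expansion: print them, reusing the same list
inline void PrintSize(const Size& s) {
#define F(name, type, label) std::printf("%s=%u\n", label, unsigned(s.name));
  DATA
#undef F
}
```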
|
|
|
Koromaru Korüko
|
2022-02-12 02:36:57
|
I was more referring to implementing a custom namespace wrapper,
and including headers within functions.
those are generally no-gos.
|
|
|
|
Vincent Torri
|
2022-02-14 08:26:24
|
hello, is there a date for a 1.0 release ?
|
|
|
DuxVitae
|
2022-02-20 07:56:43
|
Hello, I appreciate all the work you did for JXL and I'm really looking forward to it becoming a new standard.
So I did a lot of testing with cjxl 0.6.1. I wanted to see whether one can efficiently store HDR panoramas as 32-bit jxl. (see https://polyhaven.com/hdris for examples - though I'm not involved there).
However, my findings are rather disappointing:
* **Lossless** compression:
  * When using effort <= 7, files are twice as big as compressed (lossless) .hdr or .exr files
  * Effort 9 takes several hours for a 16k x 8k image and produces a file still 30% larger than .exr
* When using **lossy** compression (d=0.1, intensity=65504):
  * Values above 8 are clipped (bug or intended?)
  * Severe compression artefacts for values below 2^-18, i.e. they become visible when increasing exposure beyond +13 EV and displaying the image as standard 8-bit sRGB.
Are there any measures planned / implemented to address these issues?
|
|
|
_wb_
|
2022-02-20 08:39:39
|
Can you share an example image?
|
|
2022-02-20 08:39:55
|
Oh you did
|
|
2022-02-20 08:41:33
|
<@604964375924834314> could you take a look at this?
|
|
2022-02-20 08:42:21
|
For lossless, I bet something is going wrong in how the input is reaching the encoder
|
|
|
spider-mario
|
2022-02-20 08:42:35
|
I am a bit confused by the intensity=65504, is that really intended?
|
|
|
_wb_
|
2022-02-20 08:42:46
|
For lossy, I dunno what the intended range is for HDR
|
|
2022-02-20 08:42:54
|
65k nits seems like a lot
|
|
2022-02-20 08:43:12
|
At least such a display does not exist
|
|
2022-02-20 08:43:34
|
The world itself can easily produce that brightness though
|
|
2022-02-20 08:43:55
|
Maybe there's a confusion between display intensity and image intensity?
|
|
2022-02-20 08:46:31
|
What is the kind of very high dynamic range that can still be expected to work well with lossy?
|
|
|
spider-mario
|
2022-02-20 08:51:08
|
not sure
|
|
2022-02-20 08:51:23
|
it might be that we need to make d<0.1 work somehow
|
|
2022-02-20 08:51:45
|
or maybe modular with some near-lossless transform
|
|
|
DuxVitae
|
|
spider-mario
I am a bit confused by the intensity=65504, is that really intended?
|
|
2022-02-21 08:57:46
|
The HDRs on polyhaven have a dynamic range between 20 and 27 EV, so I played around with the encoder settings to preserve as much detail as possible in lossy encoding.
|
|
|
_wb_
|
|
DuxVitae
Hello, I appreciate all the work you did for JXL and I'm really looking forward that it will become a new standard.
So I did a lot of testing with cjxl 0.6.1. I wanted to see whether one can efficiently store HDR panoramas as 32-bit jxl. (see https://polyhaven.com/hdris for examples - though I'm not involved there).
However, findings are rather disappointing:
* **Lossless** compression:
** When using effort <= 7 files are twice as big as compressed (lossless) .hdr or .exr files
** Effort 9 takes several hours for a 16k x 8k image and produces a file still 30% larger than .exr
* When using **lossy** compression (d=0.1, intensity=65504):
** Values above 8 are clipped (bug or intended?)
** Severe compression artefacts for values below 2^-18, i.e. they become visible when increasing exposure beyond +13 EV and displaying the image as a standard sRGB 8-Bit.
Are there any measures planned / implemented to address these issues?
|
|
2022-02-22 10:34:08
|
I'm trying to reproduce the problem, could you point me to a problematic image?
|
|
2022-02-22 10:35:58
|
I tried this:
```
$ cjxl the_lost_city_4k.exr the_lost_city_4k.exr.jxl -m
JPEG XL encoder v0.7.0 a636cfe3 [AVX2,SSE4,SSSE3,Scalar]
Read 4096x2048 image, 5.0 MP/s
Encoding [Modular, lossless, squirrel], 4 threads.
Compressed to 11509560 bytes (10.976 bpp).
4096 x 2048, 0.40 MP/s [0.40, 0.40], 1 reps, 4 threads.
```
which seems OK, original EXR was 20 MB, lossless compression gives 11.5 MB
|
|
|
DuxVitae
|
|
_wb_
I tried this:
```
$ cjxl the_lost_city_4k.exr the_lost_city_4k.exr.jxl -m
JPEG XL encoder v0.7.0 a636cfe3 [AVX2,SSE4,SSSE3,Scalar]
Read 4096x2048 image, 5.0 MP/s
Encoding [Modular, lossless, squirrel], 4 threads.
Compressed to 11509560 bytes (10.976 bpp).
4096 x 2048, 0.40 MP/s [0.40, 0.40], 1 reps, 4 threads.
```
which seems OK, original EXR was 20 MB, lossless compression gives 11.5 MB
|
|
2022-02-22 12:15:48
|
Did you check that the bright spots are not clipped?
"C:\Program Files\libjxl\cjxl.exe" the_lost_city_4k.pfm the_lost_city_4k.jxl -d 0.0 -e 7
JPEG XL encoder v0.6.1 a205468 [AVX2,SSE4,Scalar]
Read 4096x2048 image, 92.3 MP/s
Encoding [Modular, lossless, squirrel], 6 threads.
Compressed to 16478366 bytes (15.715 bpp).
4096 x 2048, 0.83 MP/s [0.83, 0.83], 1 reps, 6 threads.
I had to use pfm since exr is not supported by the prebuilt binaries - I'm sorry but I haven't managed to compile the library under Windows so far.
And yes, the size was reduced to 16 MB. However, inspecting the output image in gimp 2.99.8 (dev version) and lowering exposure reveals that bright spots are clipped
|
|
|
_wb_
|
2022-02-22 01:18:48
|
how did you produce the pfm? the exr is a 16-bit one (half-float), how did you convert it to 32-bit float?
|
|
2022-02-22 01:21:45
|
lossless compression will suffer if you encode images losslessly as 32-bit float
|
|
|
DuxVitae
|
2022-02-22 01:22:23
|
using image magick
|
|
|
_wb_
|
2022-02-22 01:26:24
|
are you sure you have an imagemagick with float support?
|
|
2022-02-22 01:26:50
|
what does `convert --version` say?
|
|
2022-02-22 01:27:22
|
because default builds of IM are typically using uint16 internally, e.g.
`Version: ImageMagick 6.9.11-60 Q16 x86_64 2021-01-25 https://imagemagick.org`
|
|
2022-02-22 01:28:02
|
the Q16 there means it is using uint16 as the internal representation for image buffers
|
|
2022-02-22 01:28:52
|
also converting to pfm means you're removing all colorspace info, since pfm/ppm cannot contain an icc profile
|
|
2022-02-22 01:30:49
|
we don't have an option for cjxl at the moment to take pfm input and treat it as half-float — there's `--override_bitdepth` but that will set it to an integer bitdepth, we should probably add something to allow you to give it a pfm while saying "treat it as 16-bit binary16 half-float"
|
|
|
DuxVitae
|
2022-02-22 01:32:02
|
sorry, I'm so stupid. Since I'm on another computer right now, I used the 8-bit version of magick instead of the 16-bit HDRI one.
|
|
2022-02-22 01:32:41
|
then it seems you implemented many improvements since the 0.6.1 version
|
|
2022-02-22 01:33:03
|
Hopefully, I'll get it to compile so that I can rerun my tests.
|
|
|
_wb_
|
2022-02-22 01:33:58
|
in any case, it's probably best to somehow make exr input/output work in your cjxl/djxl, maybe try to find someone who can build a cjxl/djxl with openexr support for you
|
|
|
DuxVitae
|
2022-02-22 01:35:25
|
it's clang (freshly installed) in visual studio which doesn't find the standard library files to link against.
|
|
|
_wb_
|
2022-02-22 01:35:53
|
I can't help you myself, I haven't used windows since 1998
|
|
|
DuxVitae
|
2022-02-22 01:36:17
|
I'll probably try the docker variant.
|
|
|
_wb_
|
2022-02-22 01:39:55
|
for lossless, it's important to encode what is 16-bit half-float as 16-bit half-float and not as 32-bit float, otherwise it will not be effective at all
|
|
2022-02-22 01:40:20
|
for lossy, it's important to pass correct colorspace info to the encoder, which gets stripped when you convert to pfm
|
|
|
DuxVitae
|
|
_wb_
for lossy, it's important to pass correct colorspace info to the encoder, which gets stripped when you convert to pfm
|
|
2022-02-22 01:45:26
|
So the fact that bright values were clipped was probably because the encoder assumed the wrong color space for pfm?
|
|
2022-02-22 01:51:42
|
Can cjxl load 32-bit exr images right now (magick can't, blender can - which I use to create my own images): https://github.com/libjxl/libjxl/issues/465 ?
|
|
|
_wb_
how did you produce the pfm? the exr is a 16-bit one (half-float), how did you convert it to 32-bit float?
|
|
2022-02-22 02:35:34
|
the image is stored in 32-bit floating point and there are pixels that exceed the half float range
|
|
|
_wb_
|
2022-02-22 02:37:34
|
do they? largest half float value is 65504 (which would be 6550400%)
|
|
|
DuxVitae
|
2022-02-22 03:32:29
|
I only looked at the left side where the absolute values are, no idea what Gimp calculates on the right.
|
|
|
_wb_
|
2022-02-22 03:50:06
|
ah, yes if those values on the left are the absolute values, then that's strange
|
|
|
|
Deleted User
|
2022-02-23 08:11:00
|
I'm trying to fix https://github.com/libjxl/libjxl/issues/1177, but I can't figure out how to detect in the decoder if the picture used lossy palette. Could anyone help me?
|
|
|
_wb_
|
2022-02-23 09:10:46
|
If the Palette transform is used with a predictor other than the zero predictor, it's a delta palette, which is currently only used for lossy palette
|
|
|
|
Deleted User
|
2022-02-23 09:47:48
|
But I don't know how to detect it, I'm not *that* familiar with the codebase... Visual Studio Code was helpful, but not enough 🙁
|
|
|
novomesk
|
2022-02-23 11:46:24
|
I observed that libjxl 0.6.1 cannot be built with the latest highway as a system dependency.
It worked with highway 0.15 but not with the 0.16 version.
|
|
|
improver
|
2022-02-24 04:21:03
|
i have iirc. it got unflagged?
|
|
|
Kleis Auke
|
|
novomesk
I observed that libjxl 0.6.1 cannot be built with latest highway as system dependency.
It worked with highway 0.15 but not with 0.16 version.
|
|
2022-02-24 08:49:29
|
FWIW, applying commit <https://github.com/libjxl/libjxl/commit/e0107a6ca637eb3997131e966e19fcd6001b37ad>, <https://github.com/libjxl/libjxl/commit/0902f85ca6e9e305338baf7974192acab8c53ac5> and <https://github.com/libjxl/libjxl/commit/600b591538827247f1b2ad6ee6ae627a2173b7ec> should fix that. highway removed those deprecated overloads in 0.16.0.
|
|
|
|
Deleted User
|
2022-02-24 10:14:09
|
Well, libjxl 1.0 should be released by Monday if this deadline still holds.
|
|
|
_wb_
|
2022-02-24 11:56:24
|
i don't think there's a real deadline, just an estimation
|
|
2022-02-24 11:56:49
|
it will be ready when it's ready, and I don't think that will be the case on Monday
|
|
|
novomesk
|
2022-02-24 12:49:45
|
Is there a chance to have libjxl 0.6.2 meanwhile? Just with those 3 commits to enable building with system-wide highway 0.16?
|
|
|
w
|
2022-02-24 12:56:31
|
do you mean 0.7.1
|
|
|
_wb_
|
2022-02-24 01:38:19
|
I think they want to avoid doing releases now that will need to be maintained (security fixes etc)
|
|
|
Petr
|
2022-02-25 02:27:34
|
What are cjxl_ng and djxl_ng BTW? I couldn't find any documentation about them. Nor meaningful mentions here on Discord. I can't even run them under Windows – they complain they can't find brotlienc.dll and brotlidec.dll.
|
|
|
_wb_
|
2022-02-25 03:49:41
|
They're work in progress, they will eventually replace cjxl and djxl
|
|
2022-02-25 03:49:59
|
The difference is that they use the libjxl api
|
|
2022-02-25 03:50:33
|
While the old cjxl/djxl call stuff directly, not through the api, so they basically contain copies of libjxl
|
|
|
|
Deleted User
|
2022-02-25 03:56:14
|
Will they miss any features when they replace the original? Or rather, does the API contain everything supported in cjxl/djxl?
|
|
|
_wb_
|
2022-02-25 05:41:38
|
Things like --use_new_heuristics will not be in the api, but most options should remain
|
|
|
|
Deleted User
|
2022-02-26 10:45:39
|
No new heuristics? Poor Fabian <:kekw:808717074305122316>
|
|
|
BlueSwordM
|
2022-02-26 04:11:33
|
BTW, is it possible for libjxl to work within a sigmoidal light space?
|
|
2022-02-26 04:11:51
|
It would give equal importance to low-luma and high-luma quantizers.
|
|
|
_wb_
|
2022-02-26 05:03:42
|
Sigmoidal transfer curve, is that a thing? How does that make sense perceptually? Isn't that just too much precision for the brights?
|
|
|
BlueSwordM
|
|
_wb_
Sigmoidal transfer curve, is that a thing? How does that make sense perceptually? Isn't that just too much precision for the brights?
|
|
2022-02-26 05:07:52
|
Yeah, it's a thing.
And actually, it gives equal importance to dark areas and light areas.
It actually does the opposite of what you said: it gives less precision to bright areas, and more precision to dark areas, to balance out importance.
|
|
2022-02-26 05:10:37
|
https://artoriuz.github.io/blog/images/mpv_upscaling/light.png
|
|
2022-02-26 05:11:41
|
Basically, you would do sRGB > linear > sigmoidal > linear > sRGB for encoding and then displaying the image.
|
|
|
_wb_
|
2022-02-26 05:25:33
|
Giving less precision to brights and more to darks is what the usual transfer curves all do. E.g. XYB uses something cubic, i.e. to convert XYB to linear light you do some multiply-adds and then cube the number, which means you get much more precision in the darks than in the brights (more than with traditional transfer curves like sRGB that use a lower exponent like 2.4)
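To make that concrete, here is a toy version of the argument with a plain cube (the 0.1 steps and the bare cube are made-up placeholders, not the real XYB constants): one encoded step covers a far smaller linear-light range in the darks than in the brights.
```c++
#include <cmath>
#include <cstdio>

int main() {
  const float encoded[] = {0.1f, 0.2f, 0.8f, 0.9f};
  for (float v : encoded) {
    // linear-light difference covered by one encoded step of 0.1
    float step = std::pow(v + 0.1f, 3.0f) - std::pow(v, 3.0f);
    std::printf("encoded %.1f -> %.1f: linear step %.3f\n", v, v + 0.1f, step);
  }
  return 0;
}
```
(prints roughly 0.007 and 0.019 for the dark steps versus 0.217 and 0.271 for the bright ones, i.e. the darks get much finer granularity)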
|
|
|
|
Vincent Torri
|
2022-03-06 11:31:01
|
hello
|
|
2022-03-06 11:31:07
|
any news about https://github.com/libjxl/libjxl/issues/993 ?
|
|
2022-03-06 02:58:48
|
that would really be helpful, to avoid color conversion
|
|
|
|
chafey
|
2022-03-07 05:40:12
|
Are there any instructions for building libjxl on Mac OS X yet? I am able to build it using CMake with a few tweaks, but it seems much slower than I would expect so am wondering if a key dependency is missing (like highway)
|
|
|
|
Vincent Torri
|
2022-03-07 06:25:17
|
<@!597478497367359500> look at the cmake output to see if highway is detected or downloaded
|
|
2022-03-07 06:25:33
|
also 'make VERBOSE=1' to have more verbosity when building
|
|
|
|
chafey
|
2022-03-07 06:26:53
|
"-- Performing Test HWY_EMSCRIPTEN
-- Performing Test HWY_EMSCRIPTEN - Failed
CMake Deprecation Warning at extern/libjxl/third_party/brotli/CMakeLists.txt:5 (cmake_minimum_required):
Compatibility with CMake < 2.8.12 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions."
|
|
2022-03-07 06:27:22
|
I will try to downlevel my CMakeFile version
|
|
2022-03-07 06:29:16
|
Actually this failure may not be due to the CMake version, I see that libjxl CMakeLists.txt requires 3.10 of CMake
|
|
2022-03-07 06:31:23
|
oh wait, that is HWY_EMSCRIPTEN
|
|
2022-03-07 06:35:37
|
Just noticed README.OSX.md and see this line: "
OSX builds have "best effort" support, i.e. build might not work at all, some
tests may fail and some sub-projects are excluded from build.
"
|
|
2022-03-07 06:35:56
|
i suppose sub-projects being highway..?
|
|
2022-03-07 06:37:31
|
bad time for my linux workstation to be in the shop <sigh>
|
|
|
|
Vincent Torri
|
2022-03-07 06:45:49
|
you can also install highway yourself
|
|
2022-03-07 06:46:32
|
then pass to cmake
|
|
2022-03-07 06:46:38
|
-DJPEGXL_FORCE_SYSTEM_HWY:BOOL=TRUE
|
|
2022-03-07 06:47:22
|
for me, a reference implementation should not use so many dependencies and highway
|
|
2022-03-07 06:48:58
|
<@!597478497367359500> try to install all the dependencies you want instead of using submodules
|
|
|
|
chafey
|
2022-03-07 06:50:20
|
think I'll just wait for my linux box, I really don't like polluting my OS with various dependencies like this
|
|
|
|
Vincent Torri
|
2022-03-07 06:50:48
|
🙂
|
|
2022-03-07 06:56:53
|
hmm, libjxl does not compile anymore on Windows...
|
|
2022-03-07 06:56:58
|
at least here
|
|
2022-03-07 06:58:32
|
hmm, there must be something wrong in my fork
|
|
|
BlueSwordM
|
2022-03-07 10:01:35
|
So, where can I find the code that approximates butteraugli RD at faster speeds(e7 and below) in libjxl?
|
|
2022-03-08 08:55:39
|
<@!794205442175402004> Any clues on where that is in the codebase? I've seen it mentioned a few times.
|
|
|
_wb_
|
2022-03-08 08:56:48
|
https://github.com/libjxl/libjxl/blob/main/lib/jxl/enc_heuristics.cc has most of the vardct encode decisions
|
|
2022-03-08 08:58:50
|
I'm not the vardct expert here, but as far as I understand, at e8/e9 it does butteraugli iterations to tune the adaptive quant weights, while at e7 and below it uses simpler heuristics that were basically made by doing corpus-level benchmark_xl bpp*BApnorm optimization but that don't really involve butteraugli in the heuristics themselves
|
|
2022-03-08 08:59:26
|
but <@!532010383041363969> or <@!179701849576833024> know better what's actually going on there
|
|
|
BlueSwordM
|
2022-03-08 09:01:02
|
I see. Thank you Jon 🙂
|
|
|
Jyrki Alakuijala
|
|
_wb_
I'm not the vardct expert here, but as far as I understand, at e8/e9 it does butteraugli iterations to tune the adaptive quant weights, while at e7 and below it uses simpler heuristics that were basically made by doing corpus-level benchmark_xl bpp*BApnorm optimization but that don't really involve butteraugli in the heuristics themselves
|
|
2022-03-08 11:06:21
|
Correct!
|
|
2022-03-08 11:07:48
|
I have recently done most of the optimization at e7 (default), trying to find heuristics that are great for butteraugli -- however, many heuristics that are good for butteraugli didn't make it during the last 1.5 years since they looked worse, and some made it in even though they give worse butteraugli scores
|
|
|
BlueSwordM
|
|
Jyrki Alakuijala
I have done recently most of the optimization at e7 (default), and try to find heuristics that are great for butteraugli -- however, many heuristics that are good for butteraugli didn't make it during the last 1.5 years since they looked worse, and some made it even when they give worse butteraugli scores
|
|
2022-03-08 11:25:05
|
I see.
The reason I am asking this is that I find the way you made the heuristics and RDO+distortion decisions to be great, and I've been thinking of porting and modifying quite a bit of that work to put it into the rav1e encoder in the future, to replace the current heuristics pipeline found in it, which is based on SSIM boost+psy.
|
|
|
Jyrki Alakuijala
|
2022-03-08 11:25:46
|
I'm happy to discuss the philosophy of it
|
|
2022-03-08 11:26:23
|
the original philosophy for the quality selection comes from guetzli and pik
|
|
2022-03-08 11:26:54
|
the idea there was that users should never see artefacts, and bitrate is adapted to that
|
|
2022-03-08 11:27:49
|
in that sense the main philosophy was not RDO, but it is just a constrained optimization on achieving a defined quality everywhere
|
|
2022-03-08 11:28:53
|
even today, JPEG XL doesn't do proper compromises on the ideal RD-curve, but has reflections of just trying to achieve a specific quality
|
|
2022-03-08 11:29:13
|
that can be very silly when it is extremely cheap to achieve a better quality
|
|
2022-03-08 11:29:34
|
so there are a lot of exceptions to make things work in a more generic case
|
|
2022-03-08 11:30:25
|
the most simplistic 'achieve this quality' approach works very well at quality 93-95
|
|
2022-03-08 11:30:39
|
but creates unnecessary artefacts for quality 75
|
|
2022-03-08 11:31:50
|
because of that there are exceptions, like the butteraugli optimization loop is never going to make a block more than 40 % worse
|
|
2022-03-08 11:32:12
|
but these are not disciplined approaches, more just kludges that hold it together
|
|
|
BlueSwordM
|
2022-03-08 11:32:43
|
Ah, so you trade more general optimizations for taking care of edge cases more effectively at certain defined qualities. Is that correct?
|
|
|
Jyrki Alakuijala
|
2022-03-08 11:33:08
|
let's have a VC about the approach
|
|
2022-03-08 11:33:24
|
but not today 🙂
|
|
|
BlueSwordM
|
2022-03-08 11:33:36
|
Not today sadly. I would love to do that later in the week.
|
|
2022-03-08 11:33:40
|
Oh, lol.
|
|
|
Jyrki Alakuijala
|
2022-03-08 11:34:07
|
Friday would be great, my email is jyrki at google
|
|
|
_wb_
|
2022-03-11 02:31:52
|
https://github.com/libjxl/libjxl/issues/1159 and others want to have more tagged versions, which I think is a reasonable ask
|
|
2022-03-11 02:32:59
|
afaiu, we want to avoid doing many releases because of the maintenance burden: if new security issues like fuzzerbugs get found, they need to be backported, etc.
|
|
2022-03-11 02:34:28
|
but maybe we need to introduce a concept of "official/stable releases" (which get maintained) and also have "unofficial/unstable releases" (which don't get maintained and have less guarantees of actually working)
|
|
2022-03-11 02:41:59
|
say x.y.z with an even y is stable and gets maintained, and we only have one or two such releases per year, while with an odd y there might be many releases but they are unstable / not maintained, e.g. we could now be at 0.7.23 if we had bumped up the minor version at every noteworthy change
|
|
|
|
Hello71
|
2022-03-11 04:10:45
|
what does "don't get maintained" mean? it should be easier to add security fixes to "unstable branch", not harder. normally, the tradeoff is that with "unstable branch", you may get unpredictable quality or stability changes bundled in with the security fixes. for libjxl, i think as long as stability breakage can be kept to a minimum, some quality changes can be accepted for Linux distros. if you need a specific quality level and are doing manual benchmarking to verify the quality, then you should be managing the libjxl version yourself anyways.
|
|
|
_wb_
|
2022-03-11 04:48:48
|
Adding security fixes to the most recent unstable branch is easy, backporting them to all previously released versions that are affected by the security bug is what is annoying, and that's the main reason afaiu why we didn't make intermediate releases yet
|
|
|
Troc
|
2022-03-11 06:54:31
|
<@794205442175402004> One of my friends was curious about JXL.
|
|
2022-03-11 06:55:15
|
I figured I'd ask here because I couldn't answer.
|
|
|
_wb_
|
2022-03-11 06:55:57
|
cjxl is multi-threaded, but for default-speed lossless it's still doing a lot of stuff sequentially
|
|
2022-03-11 06:56:49
|
cjxl -e 3 (or lower effort) and fjxl are reasonably well parallelized
|
|
|
Troc
|
2022-03-11 06:57:38
|
He's making a program for compressing AI datasets to take up less space.
|
|
|
_wb_
|
2022-03-11 06:58:07
|
though if your task is to compress lots of images, you're probably better off doing images in parallel, each image on one core
|
|
|
Troc
|
2022-03-11 06:59:36
|
And a C++ API? Possible? Already existing?
|
|
|
_wb_
|
2022-03-11 07:24:02
|
The api is C, so also C++
|
|
|
Troc
|
2022-03-11 07:45:24
|
Excellent! Thanks for the answers. 🙂
|
|
|
|
Hello71
|
|
_wb_
Adding security fixes to the most recent unstable branch is easy, backporting them to all previously released versions that are affected by the security bug is what is annoying, and that's the main reason afaiu why we didn't make intermediate releases yet
|
|
2022-03-11 08:19:22
|
right, my point is why does it need to be "all previously released versions"? i think a common model is to have a "stable branch" aka "lts branch" which gets only security updates, and a "unstable branch" which gets all the updates, including the security ones but also possibly some bad ones. since you already have the first part, it shouldn't be much work to add the second part
|
|
2022-03-11 08:33:17
|
at least that's the theory
|
|
|
_wb_
|
2022-03-11 08:38:58
|
Yes, that's the model I was just proposing. I think atm we only have lts releases...
|
|
|
|
lukaso
|
2022-03-13 11:52:36
|
Hi, not sure this is the right area, but...I'm trying to compile libjxl using `jhbuild`. `jhbuild` does not support `git clone --recursive` and also doesn't allow arbitrary commands to be run, so I can't run the `deps.sh` script. Git doesn't support making the `--recursive` flag the default for `clone`. So I'm stuck. Ideally there'd be a tarball that's maintained with dependencies, but that doesn't seem to be floating around either.
|
|
2022-03-13 11:53:15
|
Does anyone have any ideas on how to get around these limitations? Thanks so much!
|
|
|
|
Hello71
|
2022-03-14 02:45:14
|
i assume you're supposed to manually list the modules
|
|
2022-03-14 02:47:16
|
libjxl only has one or two dependencies not typically packaged anyways
|
|
|
novomesk
|
|
lukaso
Hi, not sure this is the right area, but...I'm trying to compile libjxl using `jhbuild`. `jhbuild` does not support `git clone --recursive` and also doesn't allow arbitrary commands to be run, so I can't run the `deps.sh` script. Git doesn't support making the `--recursive` flag the default for `clone`. So I'm stuck. Ideally there'd be a tarball that's maintained with dependencies, but that doesn't seem to be floating around either.
|
|
2022-03-14 09:46:23
|
Recent libjxl from git can be built without the bundled dependencies (no need to use deps.sh or a --recursive git clone) when you disable skcms and have brotli, highway, and lcms installed in the system. If you build without the command line tools (just the library), it seems that gflags is not necessary. Also disable the other things you don't need, so it doesn't go looking for bundled dependencies (disable tests, benchmarks, fuzzers, coverage, tcmalloc, sjpeg, plugins).
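For example, something along these lines (the SKCMS/HWY/TOOLS/BENCHMARK/SJPEG option names also appear elsewhere in this channel; the remaining names and the corresponding FORCE_SYSTEM switches for brotli and lcms are from memory, so double-check them against the top-level CMakeLists.txt):
```
cmake -B build -S . -DCMAKE_BUILD_TYPE=Release \
  -DJPEGXL_ENABLE_SKCMS=OFF -DJPEGXL_FORCE_SYSTEM_HWY=ON \
  -DBUILD_TESTING=OFF -DJPEGXL_ENABLE_BENCHMARK=OFF \
  -DJPEGXL_ENABLE_FUZZERS=OFF -DJPEGXL_ENABLE_COVERAGE=OFF \
  -DJPEGXL_ENABLE_TCMALLOC=OFF -DJPEGXL_ENABLE_SJPEG=OFF \
  -DJPEGXL_ENABLE_PLUGINS=OFF -DJPEGXL_ENABLE_TOOLS=OFF
```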
|
|
|
|
lukaso
|
|
novomesk
Recent libjxl from git can be built without the bundled dependencies (no need to use deps.sh or --recursive git) when you disable skcms and have installed in system: brotli, highway, lcms. If you build without command line tools (just the library), it seems that gflags is not necessary. Disable also other things you don't need to avoid looking for bundled dependencies (disable tests, benchmarks, fuzzers, coverage, tcmalloc, sjpeg, plugins).
|
|
2022-03-14 10:05:41
|
Thanks. That's really helpful! I will try as soon as I have a moment.
|
|
|
eddie.zato
|
2022-03-15 06:19:20
|
What do I lose if I disable skcms and sjpeg?
|
|
|
w
|
2022-03-15 06:26:01
|
disable skcms to use lcms2
|
|
|
eddie.zato
|
2022-03-15 01:35:32
|
Any advantages to use skcms over lcms2?
|
|
|
|
Hello71
|
2022-03-15 01:36:13
|
somewhat faster, smaller, more secure
|
|
|
|
lukaso
|
2022-03-17 11:24:42
|
I'm seeing a fatal error because lodepng is not found, and it doesn't look like it can be disabled or gotten from somewhere else. Any ideas here?
|
|
2022-03-17 11:32:26
|
And now that I've disabled skcms it complains about not finding lcms2 and needing to run submodule update. It looks like this dependency cannot be obtained from outside.
|
|
|
_wb_
|
2022-03-18 06:04:02
|
Lodepng? We don't use that anymore
|
|
2022-03-18 06:05:38
|
Maybe try compiling from scratch, you may have old stuff remaining in your build dir
|
|
2022-03-18 06:10:04
|
|
|
2022-03-18 06:10:19
|
That's a big one
|
|
|
Fraetor
|
2022-03-18 09:39:29
|
What's the change?
|
|
|
_wb_
|
2022-03-19 08:01:51
|
Shouldn't be something that noticeably changes, but it's a major improvement in the software architecture of the decoder, which should help to avoid bugs, improve speed and memory, etc.
|
|
|
|
lukaso
|
|
_wb_
|
|
2022-03-19 11:39:52
|
<@!794205442175402004> thanks for the answer. To let you know how I got here: I've pulled the 0.6.1 source bundle, which is the last released version I can see at https://github.com/libjxl/libjxl/releases and here is the relevant place where lodepng is wanted: https://github.com/libjxl/libjxl/blob/a205468bc5d3a353fb15dae2398a101dff52f2d3/third_party/CMakeLists.txt#L116-L123 (browsing from that release).
|
|
2022-03-19 11:45:12
|
Once I remove lodepng from there, I find I can't compile using my external lcms2 because of this: https://github.com/libjxl/libjxl/blob/a205468bc5d3a353fb15dae2398a101dff52f2d3/third_party/CMakeLists.txt#L204-L211. When I remove that, I then am stuck with
|
|
2022-03-19 11:45:47
|
CMake Error at lib/jxl.cmake:396 (target_include_directories):
Error evaluating generator expression:
$<TARGET_PROPERTY:lcms2,INCLUDE_DIRECTORIES>
Target "lcms2" not found.
Call Stack (most recent call first):
lib/CMakeLists.txt:132 (include)
CMake Error at lib/jxl.cmake:396 (target_include_directories):
Error evaluating generator expression:
$<TARGET_PROPERTY:lcms2,INCLUDE_DIRECTORIES>
Target "lcms2" not found.
Call Stack (most recent call first):
lib/CMakeLists.txt:132 (include)
|
|
2022-03-19 11:51:23
|
I'm building on MacOS and here are my cmake args: "-DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=OFF -DJPEGXL_ENABLE_TOOLS=false -DJPEGXL_ENABLE_MANPAGES=false -DJPEGXL_ENABLE_BENCHMARK=false -DJPEGXL_ENABLE_EXAMPLES=false -DJPEGXL_ENABLE_JNI=false -DJPEGXL_ENABLE_SJPEG=false -DJPEGXL_ENABLE_SKCMS=false"
|
|
|
|
vtorri
|
2022-03-20 02:21:25
|
hello
|
|
2022-03-20 02:21:37
|
still no news about issue #993 ?
|
|
2022-03-20 02:21:52
|
BGRA output
|
|
|
Traneptora
|
2022-03-20 02:55:39
|
<@!918053334554918933> why would you need this, out of curiosity?
|
|
2022-03-20 02:56:29
|
and have you considered using libswscale?
|
|
2022-03-20 02:56:32
|
or similar
|
|
2022-03-20 03:12:52
|
in either case, the colorspace conversion would likely have to happen
|
|
2022-03-20 03:12:59
|
just you're asking for libjxl to do it rather than something else
|
|
|
_wb_
|
2022-03-20 03:21:44
|
It's easy enough to do the permute in libjxl while converting from planar to interleaved, I think we just need to find a nice way to add it to the api
|
|
|
Traneptora
|
|
|
vtorri
|
2022-03-20 04:41:29
|
<@!853026420792360980> because a lot of platforms are using BGRA and not RGBA
|
|
2022-03-20 04:42:09
|
so why use lcms2 or something when it's just a swap of the B and R channels?
|
|
|
Traneptora
|
2022-03-20 04:48:30
|
I suppose
|
|
2022-03-20 04:48:56
|
it would also be convenient if the API lets you request planar, fwiw
|
|
|
novomesk
|
2022-03-21 03:50:27
|
Our plugin from kimageformats is fuzzed using oss-fuzz. libjxl 0.6.1 is used there now. From time to time, something gets found there. How do you want to be informed about those issues?
|
|
|
|
xq
|
2022-03-21 04:52:50
|
hi, i was wondering if somebody could point me to where in the codebase i should look if i want to get access to all the raw data in a jxl image, prior to the stage when it gets entropy coded?
|
|
2022-03-21 04:53:23
|
id like to analyze the values
|
|
2022-03-21 04:54:08
|
also im not sure if this is the case but is it the same 'place' in the code for both the lossy and lossless modes?
|
|
|
_wb_
|
|
novomesk
Our plugin from kimageformats is fuzzed using oss-fuzz. libjxl 0.6.1 is used there now. Time to time, something gets found there. How do you want to be informed about those issues?
|
|
2022-03-21 06:29:25
|
You can open an issue about it.
|
|
|
xq
hi, i was wondering if somebody could point me to where in the codebase i should look if i want to get access to all the raw data in a jxl image, prior to the stage when it gets entropy coded?
|
|
2022-03-21 06:42:35
|
I suppose most stuff that isn't headers passes through here when encoding: https://github.com/libjxl/libjxl/blob/main/lib/jxl/enc_ans.cc#L1580, so you could dump it there...
|
|
|
|
xq
|
2022-03-21 07:04:19
|
great, thank you
|
|
|
novomesk
|
2022-03-26 06:35:24
|
dec_group_border.cc:(.text+0x16a): undefined reference to `__atomic_fetch_and_1'
Gentoo developers encountered link problem on RISC-V architecture.
https://bugs.gentoo.org/836125
|
|
|
|
veluca
|
2022-03-26 10:58:57
|
mhhh think you could link into that issue on github?
|
|
2022-03-26 10:59:11
|
sounds like a simple build system fix but what do I know
|
|
|
novomesk
|
2022-03-29 09:07:36
|
I am also having problem with MSVC Win64 build:
libjxl\lib/jxl/modular/transform/squeeze.cc(190): error C2121: '#': invalid character: possibly the result of a macro expansion
Full log: https://ci.appveyor.com/project/novomesk/qt-jpegxl-image-plugin/builds/43053566
|
|
|
_wb_
|
2022-03-29 09:57:11
|
Yes, I saw that but I have no clue how to solve it
|
|
2022-03-29 10:41:30
|
<@!811568887577444363> found it, apparently msvc doesn't like #ifdef inside a macro argument, while clang and gcc handle that just fine
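for anyone curious, an illustrative reconstruction of the kind of construct that trips it up (SUM3 and EXTRA_TERM are made up, not the actual squeeze.cc code): a preprocessor directive inside a macro invocation's argument list, which gcc/clang accept but MSVC rejects with C2121
```c++
#define SUM3(a, b, c) ((a) + (b) + (c))

int f() {
  return SUM3(1,
#ifdef EXTRA_TERM
              10,
#else
              20,
#endif
              3);
}
```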
|
|
|
novomesk
|
2022-03-29 11:08:48
|
squeeze.cc(148): error C2131: expression did not evaluate to a constant
note: failure was caused by a read of a variable outside its lifetime
https://ci.appveyor.com/project/novomesk/qt-jpegxl-image-plugin/builds/43055199#L566
|
|
|
eustas
|
2022-03-29 11:21:58
|
constexpr is a funny beast. will try to pull it inside lambda.
|
|
|
novomesk
|
2022-03-29 02:57:58
|
The MSVC build is solved, thanks! Now I am curious about https://github.com/libjxl/libjxl/pull/1166 which is important for RISC-V platform with Gentoo Linux.
|
|
|
_wb_
|
2022-03-31 12:18:08
|
guess what I'm trying to do... it will take time to get these heuristics right and do it in an effective and safe way
|
|
|
improver
|
2022-03-31 12:32:18
|
looks like so
|
|
|
_wb_
|
2022-03-31 12:45:54
|
yes, using modular patches for regions where it might be more effective than DCT. But it's tricky to come up with good heuristics for that.
|
|
2022-03-31 12:58:32
|
no, that's exactly what I'm now trying to figure out. If it uses too many patches, compression will suffer a lot. In the above image (which is a djxl -s 8 decode so you see the patches in full detail but the varDCT part is only upsampled from 1:8), you can see that it's not quite using enough patches to cover the blue signs on the right and the cables in the top-left (which are things that might make sense to do without DCT), but too many on the building (which does have some hard-edge stuff but probably DCT is still better there than modular).
|
|
|
Fraetor
|
2022-03-31 03:39:08
|
What kind of thing is the heuristic currently looking for?
|
|
|
_wb_
|
2022-03-31 03:53:29
|
places where most pixels are either very smooth (as in, gradient predictor is close to perfect) or are a hard edge (as in, gradient predictor has a large residual). The "middle category" (no hard edge but also not quite smooth) is typical for photographic stuff and DCT is good for that. But DCT is not so good for hard edges, since you need all the high freq coefficients to be not too much quantized to do such edges without ringing.
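to make that concrete, a toy sketch of that kind of per-pixel classification (the thresholds are made-up placeholders, not the libjxl heuristic itself):
```c++
#include <algorithm>
#include <cstdlib>

// the clamped-gradient (MED-style) predictor from the N, W and NW neighbours
static int Gradient(int n, int w, int nw) {
  return std::clamp(n + w - nw, std::min(n, w), std::max(n, w));
}

enum class Category { kSmooth, kMiddle, kHardEdge };

static Category Classify(int pixel, int n, int w, int nw) {
  int residual = std::abs(pixel - Gradient(n, w, nw));
  if (residual <= 1) return Category::kSmooth;     // predictor nearly perfect
  if (residual >= 32) return Category::kHardEdge;  // likely a sharp edge
  return Category::kMiddle;                        // photographic texture: DCT territory
}
```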
|
|
2022-03-31 03:54:48
|
so the goal here is to both improve ringing and improve compression
|
|
2022-03-31 03:55:30
|
(so far, I'm only getting better quality but not yet better density, so there is some work left to be done)
|
|
|
Traneptora
|
2022-03-31 06:21:08
|
isn't better quality and also better density the same thing
|
|
2022-03-31 06:21:12
|
if you target butteraugli distance
|
|
2022-03-31 06:21:30
|
since if you get better quality at the same density, then presumably you can use a lower density to target the same quality
|
|
2022-03-31 06:22:43
|
also, in the event where the gradient predictor is nearly perfect, wouldn't VarDCT work very well for those parts as well?
|
|
|
_wb_
|
2022-03-31 08:58:51
|
I currently get better quality at worse density, so not very great yet
|
|
2022-03-31 08:59:45
|
Yes, if things are smooth, anything works well. So I look for areas that have some strong edges and otherwise well predicted, not much in the middle
|
|
|
Petr
|
2022-04-01 11:47:24
|
Where was the photo taken BTW?
|
|
|
|
Deleted User
|
|
Petr
Where was the photo taken BTW?
|
|
2022-04-03 06:53:23
|
At a train station 😉
|
|
|
Petr
|
|
At a train station 😉
|
|
2022-04-04 07:02:55
|
A drunkard jumps into a taxi and shouts: "Home!" The driver says: "Can you be more specific, sir?" "To the bedroom!" 🙂
|
|
|
|
veluca
|
2022-04-04 09:23:28
|
https://en.wikipedia.org/wiki/Richterswil 😉
|
|
|
lithium
|
2022-04-08 02:33:03
|
libjxl 1.0 status.<:BlobYay:806132268186861619>
|
|
|
|
Deleted User
|
2022-04-08 03:28:40
|
7%? ... 👴🧓
|
|
|
Fraetor
|
|
7%? ... 👴🧓
|
|
2022-04-08 03:49:07
|
There's a nice list of what still needs to be done here:
https://github.com/libjxl/libjxl/milestone/1
|
|
|
|
veluca
|
|
7%? ... 👴🧓
|
|
2022-04-08 03:52:13
|
it's a *very* arbitrary percentage xD
|
|
|
Fraetor
|
2022-04-08 03:57:06
|
Is the API based c/djxl going to be a thing for 1.0? They seem like a very good test case for the API.
|
|
|
|
veluca
|
2022-04-08 05:45:42
|
yep
|
|
2022-04-08 05:45:54
|
it's the main remaining thing, other than a bunch of bugfixes
|
|
|
|
Deleted User
|
2022-04-09 11:37:37
|
Idk if anyone has noticed that yet, but the almighty libjxl repo is past 1000 commits!
https://github.com/libjxl/libjxl/commit/6a54b85#commitcomment-70873308
|
|
|
Ufnv
|
|
novomesk
The MSVC build is solved, thanks! Now I am curious about https://github.com/libjxl/libjxl/pull/1166 which is important for RISC-V platform with Gentoo Linux.
|
|
2022-04-10 07:04:28
|
Hi, is there a definite guide on how to build under MSVC? Just the lib.
I try to follow the guide from here:
https://github.com/libjxl/libjxl/blob/main/doc/developing_in_windows_vcpkg.md
but no luck 😦
First, aligned_allocator_test.exe fails; if I exclude it, the build fails at compiling libjxl\tools\cjxl_ng_main.cc - it cannot find "gflags/gflags.h" even though it is installed using vcpkg. What am I doing wrong?
|
|
2022-04-10 07:05:05
|
maybe there is finally a possibility to build using MSVC, not clang?
|
|
2022-04-10 07:10:50
|
If I provide it with gflags, the linker still fails with this:
Error undefined symbol: void __cdecl testing::internal::PrintBytesInObjectTo(unsigned char const *, unsigned __int64, class std::basic_ostream<char, struct std::char_traits<char>> *) \libjxl\out\build\x64-Clang-Release\libjxl
|
|
|
novomesk
|
2022-04-10 07:35:16
|
I am building static libjxl using MSVC on appveyor: https://github.com/novomesk/qt-jpegxl-image-plugin/blob/main/appveyor.yml
The build log: https://ci.appveyor.com/project/novomesk/qt-jpegxl-image-plugin/builds/43139377
|
|
|
Ufnv
|
2022-04-10 08:19:17
|
thanks, will try!
|
|
|
DuxVitae
|
|
Ufnv
Hi, is there a definite guide on how to build under MSVC? Just the lib.
I try to follow the guide from here:
https://github.com/libjxl/libjxl/blob/main/doc/developing_in_windows_vcpkg.md
but no luck 😦
First, aligned_allocator_test.exe fails, if to exclude it, fails at compiling libjxl\tools\cjxl_ng_main.cc - cannot find "gflags/gflags.h" even if it is installed using vcpkg. What I am doing wrong?
|
|
2022-04-10 10:02:03
|
I'd also be interested in reliably building libjxl under MSVC. So far I managed to build cjxl with some hacks, but didn't succeed in building all the executables.
|
|
|
Ufnv
|
2022-04-11 07:01:08
|
Do you already have some project file you could share?
|
|
|
DuxVitae
|
2022-04-11 09:12:36
|
It won't create a solution file, only the CMakeCache. So there is not much I could share since the CMakeCache contains a lot of absolute paths.
What I did - apart from the instructions - was install brotli, gflags, giflib, gtest, highway, lcms, libjpeg-turbo, libpng, openexr, sjpeg, and zlib via vcpkg, and delete the submodule folders third_party/gflags and third_party/googletest.
|
|
2022-04-11 09:14:48
|
However, it still uses the clang compiler.
|
|
|
Ufnv
|
2022-04-11 12:23:04
|
Ok, thanks!
Btw, does anyone know what the problem is with using the MSVC compiler?
|
|
|
_wb_
|
2022-04-16 06:35:05
|
https://github.com/libjxl/libjxl/actions/workflows/build_test.yml/badge.svg
|
|
2022-04-16 06:35:31
|
Why does it still show "failing" even though all workflows are succeeding?
|
|
|
|
veluca
|
2022-04-16 09:55:24
|
maybe it's only updated every so often?
|
|
|
_wb_
|
2022-04-16 10:06:15
|
It's more than 24h now though
|
|
|
w
|
2022-04-17 10:08:18
|
is there an option to losslessly re-compress a jxl
|
|
|
Yari-nyan
|
2022-04-17 10:52:21
|
re-compress from which format?
|
|
|
w
|
|
Fraetor
|
|
w
jxl
|
|
2022-04-17 11:30:35
|
Not currently a feature, though of course if it was a lossless encode you can always decode to pixels and reencode.
|
|
|
_wb_
|
2022-04-18 06:59:47
|
There's not that much that you can do to improve vardct entropy coding without changing the block selection and coefficients
|
|
2022-04-18 07:00:47
|
There is some room, but not sure if it's worth it
|
|
|
w
|
2022-04-18 07:12:42
|
i was thinking speed 3 -> speed 9 plus other settings, old version -> new version, and same for modular
|
|
|
_wb_
|
2022-04-18 07:16:10
|
For lossless it makes a lot of sense, because pixels are pixels
|
|
|
The_Decryptor
|
2022-04-18 07:16:22
|
One issue with the lossless JXL > PNG > lossy JXL path is that if it uses noise it'll "burn" that into the image
|
|
|
_wb_
|
2022-04-18 07:16:41
|
For lossy, you have to take into account that speed 3 basically does 8x8 only, and lossless re-encoding cannot change that
|
|
2022-04-18 07:17:41
|
Lossless jxl shouldn't use noise, at least if it does, it isn't really lossless
|
|
|
saucecontrol
|
2022-04-19 02:14:44
|
howdy! I wanted to ask about the deprecation of `JxlDecoderSetDCOutBuffer`
this is a useful feature for image resizing operations where the destination size is significantly smaller than full-res
the replacement progressive decode functionality doesn't really serve the same need as it requires allocating enough memory for the full-res image
i see the libvips maintainer has asked about that functionality here <https://gitlab.com/wg1/jpeg-xl/-/issues/208#note_550809598>, and it's something i would use in my own imaging pipeline as well
would the team consider bringing that back?
|
|
|
_wb_
|
2022-04-19 07:02:56
|
One major caveat with that function is that the DC image is not always available: it exists only in VarDCT mode (though could theoretically also be produced in progressive modular mode, but that's not implemented atm)
|
|
2022-04-19 07:03:59
|
Another thing is that it's not very flexible: it's specifically for the 1:8 image, while with progressive DC we could in principle also do 1:16, 1:32 etc, and with progressive AC it might make sense to do 1:4 and 1:2.
|
|
2022-04-19 07:05:13
|
Also related is https://github.com/libjxl/libjxl/issues/1353
|
|
2022-04-19 07:07:04
|
I agree that the use case of decoding a downscaled version of the image without using full-sized buffers is useful, but I'm wondering what would be the best API to do that, and how to implement it without causing major headaches.
|
|
2022-04-19 07:13:52
|
There's also the thing that extra channels (and chroma in case of YCbCr) can be subsampled and currently we only have an api for getting them upsampled to full res. It might be nice to have a decode option to get them in subsampled resolution instead.
|
|
|
|
veluca
|
2022-04-19 07:18:11
|
also if your main concern is memory for the buffer, you can use the pixel callback and downsample on the fly to whatever resolution is required - no big buffers needed 🙂
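something roughly like this (a sketch, assuming the `JxlDecoderSetImageOutCallback` API; `DownsampleState`/`OnPixels` and the plain box filter are made up for illustration):
```c++
// Rough sketch: accumulate decoded rows into a small buffer on the fly,
// so no full-resolution buffer is ever allocated. DownsampleState, OnPixels
// and the box filter are illustrative, not libjxl code.
#include <jxl/decode.h>

#include <cstdint>
#include <vector>

struct DownsampleState {
  size_t factor;                 // e.g. 8 for a roughly 1:8 preview
  size_t small_w, small_h;       // ceil(xsize/factor), ceil(ysize/factor)
  std::vector<float> sums;       // small_w * small_h * 4 accumulators
  std::vector<uint32_t> counts;  // samples accumulated per output pixel
};

// Image-out callback: the decoder hands us `num_pixels` interleaved float
// RGBA samples starting at (x, y); we just add them into the accumulators.
static void OnPixels(void* opaque, size_t x, size_t y, size_t num_pixels,
                     const void* pixels) {
  auto* st = static_cast<DownsampleState*>(opaque);
  const float* px = static_cast<const float*>(pixels);
  const size_t oy = y / st->factor;
  for (size_t i = 0; i < num_pixels; ++i) {
    const size_t ox = (x + i) / st->factor;
    const size_t o = oy * st->small_w + ox;
    for (size_t c = 0; c < 4; ++c) st->sums[o * 4 + c] += px[i * 4 + c];
    st->counts[o]++;
  }
}

// Registration (after the basic info is known and `state` is sized):
//   JxlPixelFormat fmt = {4, JXL_TYPE_FLOAT, JXL_NATIVE_ENDIAN, 0};
//   JxlDecoderSetImageOutCallback(dec, &fmt, OnPixels, &state);
// Afterwards, divide sums by counts to get the downscaled image.
```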
|
|
2022-04-19 07:18:40
|
(granted, this may be a little tricky to do properly)
|
|
|
_wb_
|
2022-04-19 07:19:28
|
We could add an output buffer version of it that does exactly that if needed, or skips upsampling if possible
|
|
|
|
veluca
|
2022-04-19 07:20:39
|
I'd suggest implementing it on top of the existing API as a "simple decoder" feature
|
|
|
_wb_
|
2022-04-19 07:21:35
|
the existing api doesn't have a way to skip upsampling though
|
|
2022-04-19 07:22:26
|
doing 8x DC upsampling to then downsample it 1:8 again is not very efficient
|
|
2022-04-19 07:22:57
|
same with yuv420 output
|
|
2022-04-19 07:23:48
|
(where also the color conversion should be skipped)
|
|
2022-04-19 07:24:12
|
I don't think we should do these things for 1.0, to be clear
|
|
|
|
veluca
|
2022-04-19 07:24:15
|
right, although you can't in general do that
|
|
|
_wb_
|
2022-04-19 07:25:07
|
but we could try to anticipate that we might want to do these things in the future
|
|
2022-04-19 07:29:21
|
e.g. in how we define pixelformat and buffer sizes, we might want to avoid hardcoded/hidden assumptions that it will always be 4:4:4 RGB full-res.
|
|
2022-04-19 07:31:02
|
I'm starting to think that RGB / RGBA output should have been done as a convenience thing on top of a more generic api that only gives you single planes at a time
|
|
2022-04-19 07:32:40
|
(which would make it easier to e.g. return YCbCr + depth without upsampling, as four separate channels that are e.g. 1:1, 1:2, 1:2 and 1:8)
|
|
2022-04-19 07:35:00
|
It would also make BGR output easier, or planar output which is what you want in some applications and currently you're forced to let us do the channel interleaving just to de-interleave it again
|
|
2022-04-19 07:37:28
|
Maybe we can just add a function like the one for getting extra channels, except indices 0,1,2 are R,G,B and the extra channels start at index 3? And then the existing extrachan functions are just wrappers that do a +3 for you
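Purely as a sketch of the idea (none of these generalized names exist in libjxl today; they're modeled on the existing extra-channel functions):
```c++
// Hypothetical API sketch only -- not real libjxl functions. Index 0, 1, 2
// would mean R, G, B; extra channels would start at index 3.
JxlDecoderStatus JxlDecoderChannelBufferSize(const JxlDecoder* dec,
                                             const JxlPixelFormat* format,
                                             size_t* size, uint32_t index);
JxlDecoderStatus JxlDecoderSetChannelBuffer(JxlDecoder* dec,
                                            const JxlPixelFormat* format,
                                            void* buffer, size_t size,
                                            uint32_t index);

// The existing extra-channel call would then just be a wrapper doing the +3:
// JxlDecoderSetExtraChannelBuffer(dec, format, buffer, size, ec_index)
//   -> JxlDecoderSetChannelBuffer(dec, format, buffer, size, ec_index + 3)
```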
|
|
2022-04-19 07:39:36
|
And then we can later add an option to get a channel in 1:2^n resolution, which we can implement generically by doing downsampling but specialize by skipping upsampling or reducing the upsampling factor.
|
|
2022-04-19 07:45:13
|
It would also be nice if the api colorspace isn't hardcoded to RGB, but you can e.g. decode to XYB planes and encode XYB directly, i.e. skipping the inverse and fwd XYB transforms when doing transcoding.
|
|
|
improver
|
2022-04-19 09:14:25
|
can't you just make up a new flag "plz giv non-upsampled progressive images" and a new decoder event type for these
|
|
|
|
veluca
|
2022-04-19 09:15:26
|
very long story short: not really
|
|
|
improver
|
2022-04-19 09:15:49
|
or if that's too much inefficiency at decoder side to do for every step, i guess one could specify what exact resolution would be acceptable
|
|
|
|
veluca
|
2022-04-19 09:16:38
|
what you *can* do is add a setting for "by the way, I just want the output to be 2x / 4x / 8x smaller", and optimize that (sooner or later)
|
|
2022-04-19 09:16:54
|
it won't be optimized in *all* cases
|
|
2022-04-19 09:17:00
|
but it could be
|
|
2022-04-19 09:17:17
|
now, *should* this be done? I don't know
|
|
|
improver
|
2022-04-19 09:19:51
|
hmm. I'm considering the efficient thumbnailing use case; in general, what's wanted is an image bigger than a certain size (iirc 2x of what the intended thumbnail should be, which later gets downscaled with a better-looking algorithm) but as small as possible otherwise
|
|
2022-04-19 09:21:02
|
"2x / 4x / 8x smaller" would need to be specified AFTER getting info on actual resolution of image first, eg after image info event but before actual decoding happens
|
|
2022-04-19 09:22:39
|
because otherwise you may get output that is too small, and the difference at 8x is quite a lot
|
|
2022-04-19 09:24:24
|
thinking about how to do non-progressive modular with efficient thumbnailing is hard tho, yeah
|
|
2022-04-19 09:25:21
|
but then I'm not sure what else can do that in that way, so maybe it's okay to not solve this one
|
|
2022-04-19 09:28:48
|
doing things block by block, applying whatever effects are there & then downsampling is possible, but the decoding path would be so distinct & complicated that I'm unsure of its viability
|
|
|
_wb_
|
2022-04-19 10:15:27
|
we already have an api for decoding with pixel callback, which means you get the pixels as they are decoded and you can then do whatever you want with them, including downscaling before you write them to your buffer
|
|
2022-04-19 10:15:58
|
at least horizontal downscaling can easily be done that way, vertical is a bit trickier
|
|
|
saucecontrol
|
|
_wb_
I'm starting to think that RGB / RGBA output should have been done as a convenience thing on top of a more generic api that only gives you single planes at a time
|
|
2022-04-19 03:46:51
|
this is something else i'd make use of. from my perspective, the API kind of works 'backwards'. that is, it's more concerned with "i have this much data, give me as many pixels as you can",
while i'm more accustomed to working in the opposite direction -- "i need these pixels, so tell me what data you need"
so if i were writing a web browser and wanted to splat as much of the image on the screen as i could given the amount of image data downloaded, the current model would be perfect
but i'm writing an image processing pipeline, where i may want to decode only a region of the image or decode at a smaller size or get access to the encoded data with as little processing as possible because i know what i want to do with that data
|
|
2022-04-19 03:54:54
|
for reference, my codec interaction started with the windows imaging component, which works the way i described. you have pixel sources, and you can ask that pixel source for any region of pixels you want
the codec is backed by a stream that provides the data, so the decoder would seek and read from that stream to fill the pixel request
and you have transforms that can give access to the pixels in different forms
for reference, base pixel source interface <https://docs.microsoft.com/en-us/windows/win32/api/wincodec/nn-wincodec-iwicbitmapsource>
a planar transform that allows in-decoder scaling and access to individual planes <https://docs.microsoft.com/en-us/windows/win32/api/wincodec/nf-wincodec-iwicplanarbitmapsourcetransform-copypixels>
and stream <https://docs.microsoft.com/en-us/windows/win32/api/objidl/nn-objidl-istream>
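to give a flavor of that pull model, a minimal sketch (WIC, not libjxl) -- the caller asks the source for exactly the rows it wants and the decoder reads/seeks its backing stream to satisfy that:
```c++
#include <wincodec.h>

// Request only the top `rows` rows of the image from a WIC pixel source.
HRESULT CopyTopStrip(IWICBitmapSource* source, UINT width, UINT rows,
                     UINT stride, BYTE* buffer, UINT buffer_size) {
  WICRect rect = {0, 0, static_cast<INT>(width), static_cast<INT>(rows)};
  return source->CopyPixels(&rect, stride, buffer_size, buffer);
}
```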
|
|
2022-04-19 03:58:48
|
the real power with that model is that if you want to process an image one row or one MCU row at a time, you can do that incredibly efficiently
i recently integrated libheif, which works similarly with their `heif_reader`. only exception there is that since libheif wraps video codecs, you can only get the fully-decoded (but still in planar format) image
but libjxl was more of a challenge because, as i said, it's kind of 'backwards' from the way i want to operate
|
|
|
_wb_
|
2022-04-19 04:04:55
|
internally we already have a decode render pipeline that is region-based, so it shouldn't be _too_ hard to do the plumbing to make an api for cropped decoding
|
|
2022-04-19 04:05:48
|
but this type of thing is quite a bit of bug-prone work, so it will be post-1.0
|
|
|
saucecontrol
|
2022-04-19 04:07:09
|
i totally understand. i'm looking forward to integrating with 1.0 no matter the shape. but good to know you're open to alternate ways of approaching the API surface
|
|
|
_wb_
|
2022-04-19 04:19:02
|
Cropped decoding was a design requirement when we designed the bitstream, so it only makes sense that we'll eventually add an api for it. But for now, the browser use case was a bigger priority.
|
|
|
saucecontrol
|
2022-04-19 04:30:20
|
gotcha. as i understand, jxl also supports encoding out of order, which makes it different to decode. the pixels ready callback might return pixels from anywhere in the image to start, right?
this is a problem when transcoding, because the target of your processing pipeline might be something that needs the pixels in order
even the video-based formats with heavy intra prediction only evaluate up and left, so you're always working your way in the same direction
|
|
|
_wb_
|
2022-04-19 04:39:07
|
Groups can be reordered, yes, but they're also independent so they can be decoded in parallel and not all of them need to be decoded if you only need a crop
|
|
2022-04-19 04:40:17
|
And images encoded in a random order can be decoded in another order if you want, there is a table of contents and nothing says the decoder needs to do things in bitstream order
|
|
|
saucecontrol
|
2022-04-19 04:50:23
|
awesome, so long as we could control decode order, that would be great
|
|
2022-04-19 05:43:32
|
i suppose in general having both pull and push models for the decoder API would be useful
push is great for the web browser use case, but pull offers a lot more opportunity for control over the decode process
|
|
|
_wb_
|
2022-04-19 06:48:05
|
Pull is good for local files, push is good when transfer/seeking takes significant time.
|
|
|
saucecontrol
|
2022-04-19 08:41:58
|
yeah, exactly 🙂
|
|
|
|
veluca
|
2022-04-20 01:50:50
|
<@718245888069468230> <@886264098298413078> <@853026420792360980> and whoever else actually uses libjxl and has an opinion... (sorry if I forgot someone!) how much of a pain would it be for you if libjxl stopped automatically undoing EXIF orientation?
|
|
|
Traneptora
|
|
veluca
<@718245888069468230> <@886264098298413078> <@853026420792360980> and whoever else actually uses libjxl and has an opinion... (sorry if I forgot someone!) how much of a pain would it be for you if libjxl stopped automatically undoing EXIF orientation?
|
|
2022-04-20 01:56:42
|
I don't use libjxl's jpeg-to-jxl functionality so I have no opinion on this from a pragmatic standpoint
|
|
2022-04-20 01:57:34
|
but since the JXL spec says that the codestream orientation takes precedence over EXIF, if present, it makes sense to me that when encoding from JPEG to JXL we should strip that info from the Exif and put it in the codestream
|
|
2022-04-20 01:57:44
|
if we care about perfect reconstruction we could also store it in jbrd as well
|
|
|
|
veluca
|
2022-04-20 02:04:39
|
that doesn't necessarily have to do with recompressed JPEG
|
|
2022-04-20 02:05:13
|
the JXL basic info contains an "orientation" value, which specifies the orientation of the pixels you get
|
|
2022-04-20 02:06:18
|
specifically about this: unless the viewer handles orientation incorrectly in general, it shouldn't change anything.
|
|
|
Yari-nyan
|
|
veluca
<@718245888069468230> <@886264098298413078> <@853026420792360980> and whoever else actually uses libjxl and has an opinion... (sorry if I forgot someone!) how much of a pain would it be for you if libjxl stopped automatically undoing EXIF orientation?
|
|
2022-04-20 02:10:06
|
why not have it as an option?
|
|
|
|
veluca
|
2022-04-20 02:11:10
|
I'm asking specifically because I assume image viewers already have a way to deal with Exif orientation (because of JPEG), and because for various reasons undoing Exif orientation in libjxl interferes both with efficiency and with allowing dithering to be done in the lib (among other things)
|
|
2022-04-20 02:13:43
|
probably, but I think i.e. <@886264098298413078> likely has a better idea than I do about this, hence why I was asking 😄
|
|
|
_wb_
|
2022-04-20 02:26:21
|
so one worry I have is that some applications/viewers treat pixel decoding as one thing and 'metadata' (including exif orientation) as another thing. In the case of jxl, that could mean that they miss the jxl orientation in case of images without exif
|
|
|
Yari-nyan
why not have it as an option?
|
|
2022-04-20 02:28:58
|
it currently is an option, and by default it is done (so applications that don't do anything non-default will get the correctly oriented image). But orientation in libjxl is the only thing left that doesn't have a memory-efficient decode path yet — basically it will just decode to a full buffer, orient the buffer, and then return that.
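For reference, opting out and handling it yourself looks roughly like this (a sketch):
```c++
// Ask libjxl not to apply the codestream orientation (set this early, before
// decoding starts), then read the orientation from the basic info so the
// application can apply it itself, e.g. after downscaling.
JxlDecoderSetKeepOrientation(dec, JXL_TRUE);
// ... later, once JXL_DEC_BASIC_INFO has been signalled:
JxlBasicInfo info;
if (JxlDecoderGetBasicInfo(dec, &info) == JXL_DEC_SUCCESS) {
  // info.orientation is a JxlOrientation (identity / rotations / flips),
  // following the usual Exif orientation semantics.
}
```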
|
|
2022-04-20 02:30:42
|
so we're pondering whether we should spend the effort of making a memory-efficient orientation step in the decoder (which is doable but a bit of a headache and work), or if we should remove the option and always return non-oriented data instead, which would simplify the libjxl implementation but also means applications have to do it themselves
|
|
|
Yari-nyan
|
2022-04-20 02:31:56
|
i see
|
|
|
novomesk
|
2022-04-20 03:04:17
|
I like the fact that the pixels are already auto-rotated by default. I don't have a problem rotating the image by myself. It's a fact that someone has to rotate the image, and the user doesn't care who performs the rotation. I think it is better to have rotation code once in libjxl, rather than many times in each application using libjxl.
|
|
|
|
veluca
|
2022-04-20 03:19:49
|
don't most (all?) applications have to have rotation anyway, because of other formats?
|
|
2022-04-20 03:21:00
|
i.e. Chrome has an "orientation" attribute to its pixel buffers
|
|
|
_wb_
|
2022-04-20 03:31:13
|
if "other formats" is only jpeg, then in the long run if we want to at some point have libjxl also decode jpeg input and get rid of libjpeg dependencies in applications, then applications could forget about orientation again
|
|
2022-04-20 03:32:25
|
maybe Chrome has that attribute anyway because CSS also has properties to orient images, and it has to implement the whole 2D and 3D transform api anyway.
|
|
2022-04-20 03:33:47
|
I suspect that 'simpler' applications than web browsers may not always already have easy ways to get orientation done by something else
|
|
|
Fraetor
|
2022-04-20 03:46:51
|
Too many small applications don't respect orientation, so I think it would be preferable for the orientation to be decoded by libjxl.
|
|
|
saucecontrol
|
|
veluca
<@718245888069468230> <@886264098298413078> <@853026420792360980> and whoever else actually uses libjxl and has an opinion... (sorry if I forgot someone!) how much of a pain would it be for you if libjxl stopped automatically undoing EXIF orientation?
|
|
2022-04-20 04:41:15
|
i would actually prefer this, as it allows the orientation correction to be delayed. thinking about your typical thumbnailing (or resize to web-friendly dimensions) case, you can delay reorientation until after the downscale so that even if it requires materializing the entire image, at least it's smaller
|
|
2022-04-20 04:43:52
|
and yeah, most software is already handling this for jpeg. at the moment, it's not likely software will be dealing with jxl only and have the option to ignore this. if there's a future in which jxl has taken over the world and software being ignorant of orientation is an option, it can be an option in the decoder
|
|
|
|
veluca
|
2022-04-20 04:44:52
|
tbc even today you *can* request the lib not to correct orientation
|
|
|
saucecontrol
|
2022-04-20 04:47:29
|
oh yes, i was actually mixing up libjxl with libheif because i implemented them at the same time. i am skipping orientation for jxl
libheif allows you to skip it, but then doesn't give you a way to read the orientation stored in the bitstream :/
|
|
|
|
veluca
|
2022-04-20 04:47:58
|
that said, since a lot of our processing is line-based, and you *may* request line-based output, not having to deal with transposes would make life easier...
|
|
2022-04-20 04:48:02
|
ah, they do that?
|
|
2022-04-20 04:48:32
|
well, if newer formats are doing that too then I guess that's reason to support it...
|
|
2022-04-20 04:48:48
|
I'll try to figure out the best way to do so
|
|
2022-04-20 04:49:43
|
I guess the good news is that whatever changes are needed, are similar to the changes needed to get downsampled output
|
|
|
|
Hello71
|
|
_wb_
There's not that much that you can do to improve vardct entropy coding without changing the block selection and coefficients
|
|
2022-04-20 09:24:33
|
theoretically is it possible that newer libjxl versions could do bit-exact jpeg recompression more efficiently? or is the size basically fixed by the original dct or whatever? i don't really understand how jpeg xl can code jpeg more efficiently. is it just better huffman/arithmetic coding?
|
|
2022-04-20 09:25:32
|
_googles "brunsli algorithm"_
|
|
2022-04-20 09:29:59
|
ok, i read https://encode.su/threads/2706-The-future-of-lossless-data-compression/page2. is it vaguely accurate to say that brunsli basically runs the moral equivalent of gunzip | zstd on jpeg?
|
|
|
_wb_
|
2022-04-20 09:43:13
|
JXL recompression of JPEG is basically encoding the quantized DC using lossless modular and the AC using VarDCT's context modeling and ANS entropy coding.
|
|
|
improver
|
2022-04-20 09:44:23
|
plus compressed reconstruction data
|
|
|
_wb_
|
2022-04-20 09:44:39
|
For both, there's some bitstream expressivity so yes, it could still improve a bit in the future.
|
|
|
improver
plus compressed reconstruction data
|
|
2022-04-20 09:45:39
|
Yes, though reconstruction data is usually quite small, unless there are big arbitrary app markers or tail data or something.
|
|
2022-04-22 12:26:13
|
|
|
2022-04-22 12:26:22
|
finally all green again
|
|
|
Jim
|
2022-04-22 12:51:50
|
3 greens and an orange, close enough <:ChizuWink:857418269647700003>
|
|
|
_wb_
|
2022-04-22 01:01:46
|
well getting code coverage green might not be really useful
|
|
|
improver
|
2022-04-22 01:02:23
|
careful saying that out loud, some people can kill for that
|
|
|
_wb_
|
2022-04-22 01:03:06
|
it can be done, but it requires basically writing unit test cases for all the possible ways in which bitstreams can be corrupt, to make sure all the exotic "return error" cases are actually reached
|
|
2022-04-22 01:04:06
|
we should make it as green as possible, but to get it to 100% is not really useful imo
|
|
|
improver
|
2022-04-22 01:04:12
|
and even that won't ensure full coverage without using provers, as touching all the code doesn't mean it's free from side effects when it comes to unusual values
|
|
|
_wb_
|
2022-04-22 01:04:20
|
yep
|
|
2022-04-22 01:04:45
|
for testing robustness against corrupt/malicious inputs, fuzzing is more effective than code coverage
|
|
|
spider-mario
|
|
improver
and even that won't ensure full coverage without using provers, as touching all the code doesn't mean it's free from side effects when it comes to unusual values
|
|
2022-04-22 02:23:21
|
notably, it doesn’t mean that all relevant combinations of paths are covered, e.g.
```js
function foo(a, b) {
var n = f();
if (a || b) {
n += g(); // very large, possibly inf
}
if (!a || b) {
n -= h(); // also very large
}
return n; // nan?
}
test(foo(true, false));
test(foo(false, false));
```
|
|
2022-04-22 02:23:59
|
all lines covered, since each test tests a branch (`a = true`, `a = false`), but no test tests running both branches within one call (`b = true`)
|
|
|
_wb_
|
2022-04-22 04:00:18
|
one thing the coverage analysis is useful for is to find candidates of obsolete code
|
|
2022-04-22 04:00:41
|
I did some cleaning up 🙂
|
|
2022-04-22 04:01:32
|
https://tenor.com/view/chores-cleaning-housework-tom-and-jerry-housewife-gif-20706096
|
|
|
Pashi
|
2022-04-22 05:25:25
|
Why is "docs" a different shade of green
|
|
|
Fraetor
|
2022-04-22 09:47:13
|
Different badge generator?
|
|
|
Ufnv
|
2022-04-23 11:07:23
|
Is there any doc on how to use the pre-built dll?
|
|
|
_wb_
|
2022-04-23 11:47:01
|
API documentation is here: https://libjxl.readthedocs.io/en/latest/?badge=latest
|
|
2022-04-23 11:47:24
|
It could use some overview/tutorial though
|
|
2022-04-23 11:48:12
|
It's probably easier to start from an example like https://github.com/libjxl/libjxl/blob/main/examples/decode_oneshot.cc
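The core of that example boils down to roughly this (a trimmed sketch; color management and most error checks omitted):
```c++
#include <jxl/decode.h>

#include <cstdint>
#include <vector>

// Trimmed single-shot decode skeleton, roughly following decode_oneshot.cc:
// subscribe to events, feed all input at once, loop on ProcessInput.
std::vector<float> DecodeToRGBA(const uint8_t* data, size_t size,
                                JxlBasicInfo* info) {
  JxlDecoder* dec = JxlDecoderCreate(nullptr);
  JxlDecoderSubscribeEvents(dec, JXL_DEC_BASIC_INFO | JXL_DEC_FULL_IMAGE);
  JxlDecoderSetInput(dec, data, size);

  JxlPixelFormat format = {4, JXL_TYPE_FLOAT, JXL_NATIVE_ENDIAN, 0};
  std::vector<float> pixels;
  for (;;) {
    JxlDecoderStatus status = JxlDecoderProcessInput(dec);
    if (status == JXL_DEC_BASIC_INFO) {
      JxlDecoderGetBasicInfo(dec, info);
    } else if (status == JXL_DEC_NEED_IMAGE_OUT_BUFFER) {
      size_t buffer_size;
      JxlDecoderImageOutBufferSize(dec, &format, &buffer_size);
      pixels.resize(buffer_size / sizeof(float));
      JxlDecoderSetImageOutBuffer(dec, &format, pixels.data(), buffer_size);
    } else if (status == JXL_DEC_FULL_IMAGE || status == JXL_DEC_SUCCESS) {
      if (status == JXL_DEC_SUCCESS) break;  // all requested events delivered
    } else {
      break;  // JXL_DEC_ERROR, JXL_DEC_NEED_MORE_INPUT, ...
    }
  }
  JxlDecoderDestroy(dec);
  return pixels;
}
```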
|
|
|
Ufnv
|
2022-04-23 01:13:27
|
Thanks!
|
|
2022-04-23 01:23:59
|
I am probably doing something wrong, but I cannot find jxl/jxl_export.h
|
|
|
_wb_
|
2022-04-23 02:15:21
|
It's produced by the install script
|
|
|
|
Vincent Torri
|
2022-04-23 02:32:18
|
is libjxl 1.0 planned soon?
|
|
|
|
Deleted User
|
2022-04-23 03:18:00
|
Jon promised to release it next week. 😁
|
|
|
_wb_
|
2022-04-23 03:23:30
|
"ready when it's ready"
|
|
2022-04-23 03:24:09
|
Finishing up cjxl_ng and djxl_ng still needs to be done, and I want to have color management in the decode api too so we can do proper dithering
|
|
2022-04-23 03:24:32
|
So getting closer but will still take some time
|
|
2022-04-23 03:24:55
|
Tomorrow the 95th JPEG meeting starts so I will not have much time for coding then
|
|
|
|
Vincent Torri
|
2022-04-23 04:08:41
|
too bad the additional colorspace is for release > 1.0
|
|
|
|
Deleted User
|
|
_wb_
"ready when it's ready"
|
|
2022-04-23 04:32:26
|
30% <:PepeOK:805388754545934396>
|
|
|
_wb_
|
2022-04-23 04:58:07
|
Heh that progress bar is not very accurate, it is just for issues and pull requests marked with the 1.0 tag
|
|
2022-04-23 04:58:27
|
The bulk of the work was of course already done before that tag existed
|
|
|
Vincent Torri
too bad the additional colorspace is for release > 1.0
|
|
2022-04-23 04:59:01
|
What additional colorspace?
|
|
2022-04-23 05:01:53
|
If you mean BGR or something, if there's sufficient need for that, I think we can still squeeze it into 1.0, maybe. It's just a bit annoying that we may have to do interleaving to RGB for color management and then permute to BGR, but I guess that can be done as part of the conversion to uint
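In the meantime the permutation is cheap to do on the caller side, e.g. (a trivial sketch):
```c++
#include <cstddef>
#include <cstdint>
#include <utility>

// Caller-side workaround until/unless BGR output exists: swap R and B in
// place in an interleaved 8-bit RGB(A) buffer returned by the decoder.
void SwapRedBlue(uint8_t* buf, size_t num_pixels, size_t channels /* 3 or 4 */) {
  for (size_t i = 0; i < num_pixels; ++i) {
    std::swap(buf[i * channels + 0], buf[i * channels + 2]);
  }
}
```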
|
|
|
|
Vincent Torri
|
2022-04-23 05:03:29
|
<@794205442175402004> that can also be handled with an environment variable
|
|
2022-04-23 05:03:59
|
+ getenv
|
|
|
Traneptora
|
2022-04-24 01:00:54
|
does libjxl support other pixel formats at the moment?
|
|
2022-04-24 01:01:00
|
other than 8-bit and 16-bit
|
|
|
_wb_
|
2022-04-24 01:02:57
|
You mean as a pixelformat for passing buffers in the api?
|
|
2022-04-24 01:03:30
|
It also supports float32 and float16
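i.e. in the JxlPixelFormat you pass when requesting output buffers, e.g.:
```c++
JxlPixelFormat f32 = {4, JXL_TYPE_FLOAT, JXL_NATIVE_ENDIAN, 0};    // RGBA, 32-bit float
JxlPixelFormat f16 = {4, JXL_TYPE_FLOAT16, JXL_NATIVE_ENDIAN, 0};  // RGBA, 16-bit half float
JxlPixelFormat u16 = {4, JXL_TYPE_UINT16, JXL_NATIVE_ENDIAN, 0};   // RGBA, 16-bit unsigned int
```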
|
|
2022-04-24 01:05:33
|
But the api pixelformat is one thing, the bitstream data is another. Bitstream data can be non-multiple-of-8 bitdepth, and exotic float types like fp24 and bfloat
|
|
2022-04-24 01:07:00
|
(for lossless that is. For lossy everything is always internally float32, and the nominal bitdepth of the header is just a suggestion of what to convert it to)
|
|
|
|
dds
|
2022-04-24 01:09:35
|
Hi, I have a question about the purpose of the offset / multiplier fields in modular predictors.
|
|
2022-04-24 01:09:42
|
I assume the point is to move the leaf distributions as close to zero as possible so they cluster well for ANS - is this right? However I can't see where they're used - I've looked at both libjxl's code and generated trees and AFAICT offset is always 0, multiplier is always 1.
|
|
2022-04-24 01:09:50
|
So the feature appears unused by the libjxl encoder... is this because ANS does a good enough job by itself?
|
|
|
_wb_
|
2022-04-24 01:11:00
|
Yes
|
|
2022-04-24 01:11:20
|
We do use multiplier, but just as a way to quantize squeeze residuals cheaply
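Schematically (as I understand the decode side; simplified, not libjxl's actual code):
```c++
#include <cstdint>

// Where the leaf's offset and multiplier enter the reconstruction of one
// modular sample (schematic). A multiplier > 1 means the encoder quantized
// the residual by that factor (as with lossy squeeze residuals), so decoding
// stays cheap.
int64_t ReconstructSample(int64_t prediction,   // from the leaf's predictor
                          int64_t residual,     // decoded signed token value
                          int64_t multiplier,   // leaf multiplier, usually 1
                          int64_t offset) {     // leaf offset, usually 0
  return prediction + offset + multiplier * residual;
}
```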
|
|
2022-04-24 01:11:33
|
Offset we don't use atm
|
|
2022-04-24 01:12:51
|
Theoretically compression could improve a bit if we would use it
|
|
2022-04-24 01:14:45
|
Typically the distributions are already concentrated around zero because that's of course the goal of a predictor in the first place
|
|
|
|
dds
|
2022-04-24 01:15:01
|
thanks! I guess offsets would make finding optimal tree splits harder
|
|
|
_wb_
|
2022-04-24 01:15:47
|
The search space is already big enough as it is, and the tree learning is already slow and memory intensive enough
|
|
2022-04-24 01:17:17
|
We're only scratching the surface of what the bitstream can express, we might still be quite far from an optimal tree (even ignoring offset/multiplier)
|
|