|
_wb_
|
2021-10-15 05:03:32
|
Might be better to detect this case in the encoder itself, not in the input loader. PNG input will eventually not be the main entry point to encoding anymore.
|
|
2021-10-15 05:05:09
|
Quickly counting colors up to a small N of colors could be done on the input pixels - you only scan the whole image if it actually is a palette image
|
|
|
monad
|
2021-10-15 05:14:09
|
> nano
You really can't run a graphical text editor? gedit uses about 10x as much RAM as nano on my system, but that's not much.
|
|
|
diskorduser
|
2021-10-15 12:19:46
|
https://cudatext.github.io/
|
|
|
improver
|
2021-10-15 01:39:03
|
i used to use kate for all my coding back when my computer was weak
|
|
2021-10-15 01:39:24
|
its pretty good
|
|
|
|
veluca
|
2021-10-15 02:33:55
|
I use vim all the time and it's the best editor ever. Change my mind 🤣
|
|
|
improver
|
2021-10-15 02:52:47
|
vim is pretty great too
|
|
2021-10-15 02:54:06
|
i just find gui stuff more fitting for project/multifile editing, because proper file lists, tabs, etc
|
|
|
spider-mario
|
2021-10-15 03:05:59
|
I've used kate too
|
|
2021-10-15 03:06:03
|
I used it with its vi mode
|
|
|
|
veluca
|
2021-10-15 03:43:16
|
vim has tabs
|
|
|
190n
|
2021-10-15 06:46:01
|
raspberry pi 4 is out of stock everywhere
|
|
|
improver
|
|
nathanielcwm
|
|
veluca
I use vim all the time and it's the best editor ever. Change my mind 🤣
|
|
2021-10-16 04:56:56
|
https://twitter.com/chigbarg/status/1448817993329250334
|
|
|
yurume
|
2021-10-16 05:48:20
|
does it accept *all* possible answers?
|
|
2021-10-16 05:51:42
|
I've consulted the Vim manual and the relevant regexp would be: ||`/^:(c?q(u(it?)?)?|q(u(it?)?)?!|wq!?|x(it?)?!?|exit?!?|(q(uit)?|wq|x)a(ll?)?!?)|Z[ZQ]/`||
|
|
|
190n
|
2021-10-16 05:52:13
|
lmfao
|
|
2021-10-16 05:52:41
|
wait that accepts `:wq`
|
|
2021-10-16 05:52:44
|
it says quit without saving
|
|
|
yurume
|
2021-10-16 05:53:04
|
oh, it does say "without saving"
|
|
2021-10-16 05:54:20
|
that would simplify it into ||`/^:c?q(u(it?)?)?!|ZQ/`||
|
|
|
190n
|
2021-10-16 05:55:19
|
that simplified it a lot lmao
|
|
2021-10-16 05:55:28
|
uh $?
|
|
|
yurume
|
2021-10-16 05:55:44
|
ah yeah, that would be needed
|
|
2021-10-16 05:55:45
|
anyway
|
|
2021-10-16 05:56:46
|
another possibility is of course `:exe "!kill -9 " . getpid()`
|
|
2021-10-16 05:57:42
|
https://github.com/hakluke/how-to-exit-vim ...and I'm waay late about that joke
|
|
|
190n
|
|
yurume
ah yeah, that would be needed
|
|
2021-10-16 06:02:27
|
ooh apparently | has lower precedence than ^ or $ (the anchors only bind to the nearest branch) so you need some more parens
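A minimal sketch of the anchored version (wrapping the whole alternation in parentheses and adding `$`), tested here with `grep -E` rather than Vim itself; the test strings are just illustrative:
```bash
# check which inputs the anchored "quit without saving" pattern accepts
pattern='^(:c?q(u(it?)?)?!|ZQ)$'
for cmd in ':q!' ':quit!' 'ZQ' ':wq'; do
  echo "$cmd" | grep -Eq "$pattern" && echo "$cmd matches" || echo "$cmd does not match"
done
```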
|
|
|
|
veluca
|
|
nathanielcwm
https://twitter.com/chigbarg/status/1448817993329250334
|
|
2021-10-16 07:46:23
|
I haven't found a vim mode that is not buggy or incomplete in some fundamental way yet :P
|
|
|
spider-mario
|
2021-10-16 07:50:30
|
from what I recall, IdeaVim uses an actual vim behind the scenes
|
|
2021-10-16 07:50:36
|
so at least it comes close
|
|
2021-10-16 07:50:42
|
even supports macros
|
|
|
diskorduser
|
2021-10-16 06:34:26
|
All my ProPhoto ICC JXL thumbnails look dark and weird on Windows 10.
|
|
|
spider-mario
|
2021-10-16 06:59:42
|
what generates those thumbnails?
|
|
|
diskorduser
|
|
spider-mario
what generates those thumbnails?
|
|
2021-10-17 02:34:37
|
jxl_winthumb.dll. I got it from someone on jxl sub reddit.
|
|
|
The_Decryptor
|
2021-10-17 02:42:21
|
Yeah I've noticed some issues with that as well, Windows is pretty spotty with colour management
|
|
|
w
|
2021-10-17 02:42:34
|
in windows 11 colour management is completely broken
|
|
2021-10-17 02:43:01
|
for the thumbnails it's not windows, it would be <https://github.com/saschanaz/jxl-winthumb> not applying the icc profile
|
|
|
|
Deleted User
|
|
monad
> nano
You really can't run a graphical text editor? gedit uses about 10x as much RAM as nano on my system, but that's not much.
|
|
2021-10-17 11:00:18
|
I can't, I'm on WSL 2 in Win10. I need to edit those files inside Linux. Graphical text editor wouldn't help me either, I need code hints which only IDEs have.
|
|
|
eddie.zato
|
2021-10-17 11:03:37
|
You can try micro, at least it has syntax highlighting
https://github.com/zyedidia/micro
|
|
|
|
Deleted User
|
2021-10-17 11:04:47
|
> - If you have to use Windows, then consider adding more RAM
Zero budget. I already have 8 GB, that should be enough! It's pathological that it isn't! I can compile x264 from source and it's no big deal, while compiling libjxl is a big event that takes so much RAM I sometimes have to close some apps so WSL doesn't get killed! Maybe it's time someone rewrites libjxl in Rust + assembly, like rav1e?
> - Or just dual boot to Linux, preferably using a lightweight distro and desktop environment if you have little RAM
I already have WSL 2.
> - Or get a SBC like Raspberry Pi 4 with 8GB RAM for coding. On the plus side, you can also do ARM development on Pi
Like above, zero budget. If I buy one, I'll use it for seeding torrents.
|
|
|
eddie.zato
You can try micro, at least it has syntax highlighting
https://github.com/zyedidia/micro
|
|
2021-10-17 11:05:39
|
I can't use anything that uses GUIs, I'm limited to CLI because I'm on WSL 2.
|
|
|
eddie.zato
|
2021-10-17 11:06:01
|
`micro` is a terminal-based editor
|
|
|
|
Deleted User
|
2021-10-17 11:06:19
|
nano has syntax highlighting, too, but that's not enough. I need code hints in a CLI.
|
|
2021-10-17 11:06:31
|
(If it's possible to achieve that at all...)
|
|
2021-10-17 11:07:35
|
For example: if I see a function call, I could Ctrl+click it and Dev-C++ would "teleport" me to its definition. Same with Ctrl+clicking header files.
|
|
|
_wb_
|
2021-10-17 11:14:11
|
If memory is an issue, try compiling with fewer threads
|
|
|
eddie.zato
|
2021-10-17 11:15:03
|
There is always `neovim`, but it requires the conversion to vim <:CatSmile:805382488293244929>
|
|
|
|
Deleted User
|
|
_wb_
If memory is an issue, try compiling with fewer threads
|
|
2021-10-17 11:17:47
|
I use this command:
```bash
cd libjxl
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=OFF ..
cmake --build . -- -j$(nproc)
```
|
|
|
eddie.zato
There is always `neovim`, but it requires the conversion to vim <:CatSmile:805382488293244929>
|
|
2021-10-17 11:18:09
|
NO VIM
|
|
|
_wb_
|
2021-10-17 11:18:21
|
Try `-j 2` or so there
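For instance, the last line of the build commands above would become (same build directory, just a smaller job count to cap peak RAM):
```bash
# limit the build to two parallel compile jobs
cmake --build . -- -j 2
```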
|
|
|
|
Deleted User
|
|
eddie.zato
There is always `neovim`, but it requires the conversion to vim <:CatSmile:805382488293244929>
|
|
2021-10-17 11:20:21
|
I've got some suspicions how it got its name...
|
|
|
_wb_
Try `-j 2` or so there
|
|
2021-10-17 11:21:26
|
As if compilation wasn't already kinda slow...
|
|
|
_wb_
|
2021-10-17 11:21:52
|
Well you kind of only need to do full compile once
|
|
|
|
Deleted User
|
2021-10-17 11:22:59
|
Even partial compile takes hundreds of MB of RAM
|
|
|
_wb_
|
2021-10-17 11:23:47
|
I usually build with `./ci.sh opt` the first time, then do `ninja cjxl djxl benchmark_xl` in the build dir while I'm making changes, then at the end do another `./ci.sh opt` just to check that the tests pass
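Roughly, that workflow looks like this (a sketch assuming a libjxl checkout; `./ci.sh opt` and the ninja targets are exactly the ones named above, and the build directory name is an assumption):
```bash
# first full optimized build + tests
./ci.sh opt

# while iterating, rebuild only the tools you need
cd build
ninja cjxl djxl benchmark_xl

# before finishing, run another full build to check the tests still pass
cd .. && ./ci.sh opt
```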
|
|
|
eddie.zato
|
2021-10-17 11:29:27
|
I think `ninja` works slightly better.
```
cmake -G Ninja ..
ninja
```
|
|
|
|
Deleted User
|
|
_wb_
I usually build with `./ci.sh opt` the first time, then do `ninja cjxl djxl benchmark_xl` in the build dir while I'm making changes, then at the end do another `./ci.sh opt` just to check that the tests pass
|
|
2021-10-17 11:29:46
|
The first (and only) time I used `./ci.sh` my RAM went out of the window, making `cmake` the only available option on my machine.
|
|
2021-10-17 11:36:59
|
To prove my point I've just compiled x264. From start (`git clone`) to finish the whole thing took 2 minutes, 400 MB RAM and 50% CPU. libjxl would take over 5 minutes, over 1 GB RAM and 80% CPU.
|
|
|
_wb_
|
2021-10-17 11:59:12
|
Is that clang or gcc?
|
|
|
Fraetor
|
2021-10-17 12:28:12
|
Testing myself I topped out at 1.3 GB of RAM used in compiling. I'm on native debian (x86_64) compiling with gcc with `-j 4`.
|
|
|
spider-mario
|
|
I can't use anything that uses GUIs, I'm limited to CLI because I'm on WSL 2.
|
|
2021-10-17 12:38:28
|
you can use graphical applications with WSL; however, you need to install an X11 server separately, such as https://sourceforge.net/projects/vcxsrv/
|
|
2021-10-17 12:38:51
|
once installed and running, you can run graphical WSL apps by giving them the appropriate `DISPLAY` environment variable
|
|
2021-10-17 12:41:16
|
(the tray icon will tell you what the value for DISPLAY should be)
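A sketch of the WSL 2 side of that (the nameserver trick for finding the Windows host IP and the `:0` display number are assumptions about a typical VcXsrv setup, not something confirmed in this chat):
```bash
# point WSL 2 apps at the X11 server running on the Windows host
export DISPLAY=$(grep nameserver /etc/resolv.conf | awk '{print $2}'):0

# then graphical apps launched from the WSL shell show up on Windows
gedit &
```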
|
|
|
|
Deleted User
|
|
_wb_
Is that clang or gcc?
|
|
2021-10-17 02:45:10
|
I don't remember exactly what libjxl used, but x264 used gcc.
|
|
|
Fraetor
Testing myself I topped out at 1.3 GB of RAM used in compiling. I'm on native debian (x86_64) compiling with gcc with `-j 4`.
|
|
2021-10-17 02:47:16
|
I wouldn't be surprised about 1.3 GB RAM, unfortunately it's an achievable number.
|
|
|
diskorduser
|
2021-10-17 02:49:29
|
I compile libjxl on 4gb ram. Why can't you?
|
|
|
_wb_
|
2021-10-17 02:52:10
|
I compile libjxl on my phone
|
|
2021-10-17 02:52:59
|
Phone does have 6 GB ram though
|
|
2021-10-17 03:22:08
|
https://twitter.com/mags_mclaugh/status/1449502561548087301?t=v5_XjU_FGRZ8D1VmSZPJOQ&s=19
|
|
2021-10-17 03:22:23
|
The generation loss on that
|
|
|
Fraetor
|
|
Fraetor
Testing myself I topped out at 1.3 GB of RAM used in compiling. I'm on native debian (x86_64) compiling with gcc with `-j 4`.
|
|
2021-10-17 03:28:25
|
Retested with `-j 1` and sampling memory usage every 0.1 seconds and the peak usage I got was 405 MB. So not too bad.
|
|
|
diskorduser
|
2021-10-17 03:34:17
|
Clang or gcc?
|
|
|
BlueSwordM
|
|
> - If you have to use Windows, then consider adding more RAM
Zero budget. I already have 8 GB, that should be enough! It's pathological that it isn't! I can compile x264 from source and it's no big deal, while compiling libjxl is a big event that takes so much RAM I sometimes have to close some apps so WSL doesn't get killed! Maybe it's time someone rewrites libjxl in Rust + assembly, like rav1e?
> - Or just dual boot to Linux, preferably using a lightweight distro and desktop environment if you have little RAM
I already have WSL 2.
> - Or get a SBC like Raspberry Pi 4 with 8GB RAM for coding. On the plus side, you can also do ARM development on Pi
Like above, zero budget. If I buy one, I'll use it for seeding torrents.
|
|
2021-10-17 03:43:22
|
How do you get so much RAM usage from building libjxl?
|
|
|
Fraetor
|
|
diskorduser
Clang or gcc?
|
|
2021-10-17 04:27:10
|
Native debian unstable (x86_64) compiling with gcc 11.2
|
|
|
|
Deleted User
|
2021-10-18 11:17:40
|
Hi,
I am writing to enquire about the size of a compressed image.
Can we define a final size for a JPEG XL file?
For instance, we have an image of size A and we want to achieve a file of size B.
Of course B < A.
Thank you in advance!
|
|
|
_wb_
|
2021-10-18 11:21:33
|
`cjxl --target_size=B` does exactly that
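For example (a sketch; the byte budget and filenames are placeholders):
```bash
# search for encoder settings that produce a file of roughly 100000 bytes
cjxl --target_size=100000 input.png output.jxl
```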
|
|
2021-10-18 11:22:03
|
this is not a recommended way to compress images for anything other than testing purposes, imo
|
|
|
|
Deleted User
|
|
_wb_
`cjxl --target_size=B` does exactly that
|
|
2021-10-18 11:23:51
|
Thank you a lot for the answer!
|
|
2021-10-18 05:13:53
|
<@!604964375924834314> any chance of doing something with https://github.com/libjxl/libjxl/issues/665 ?
|
|
|
spider-mario
|
2021-10-18 05:15:30
|
oh, right, I suspect that this might only really be feasible when using XYB
|
|
2021-10-18 05:15:38
|
as the noise addition assumes this
|
|
|
|
Deleted User
|
2021-10-18 05:17:57
|
It's for Squoosh, IIRC it has no reason not to use XYB by default with lossy Modular.
|
|
|
spider-mario
|
2021-10-18 05:18:47
|
oh π
|
|
|
_wb_
|
2021-10-18 05:24:35
|
Cjxl does xyb by default for lossy modular
|
|
2021-10-18 05:25:09
|
And for lossless modular noise probably doesn't make much sense, except maybe for <#824000991891554375> purposes
|
|
|
Fraetor
|
2021-10-18 11:49:14
|
Anyone know of a way of viewing jxl on android?
|
|
|
190n
|
2021-10-18 11:50:06
|
djxl in termux <:kekw:808717074305122316>
|
|
|
Eugene Vert
|
|
Fraetor
Anyone know of a way of viewing jxl on android?
|
|
2021-10-19 01:31:30
|
Tachiyomi (manga reader), Firefox (Nightly?) or Chrome
No image galleries supporting jxl yet afaik
|
|
|
|
Deleted User
|
2021-10-19 04:45:32
|
Chrome should be the easiest one since <:JXL:805850130203934781> works in a "usual" stock version: just go to `chrome://flags#enable-jxl`, enable the flag and restart the browser.
Firefox will be trickier, you'll have to download a special Nightly version for testing. Then search for `jxl` in `about:config`, switch `image.jxl.enabled` to `true` and restart the browser.
|
|
|
Justin
|
2021-10-19 08:55:48
|
https://jpegxl.io/
Heyo, finally just released a beta. Peace!
|
|
|
Fraetor
|
|
Eugene Vert
Tachiyomi (manga reader), Firefox (Nightly?) or Chrome
No image galleries supporting jxl yet afaik
|
|
2021-10-19 09:20:32
|
Thanks!
|
|
|
diskorduser
|
|
Justin
https://jpegxl.io/
Heyo, finally just released a beta. Peace!
|
|
2021-10-19 09:34:28
|
No option for lossless encoding?
|
|
|
Fox Wizard
|
2021-10-19 09:38:23
|
q100 should be lossless
|
|
|
Justin
|
|
diskorduser
No option for lossless encoding?
|
|
2021-10-19 09:39:22
|
Currently it's q100, but I could include a checkbox to tick if that's more convenient? Or just call it "lossless" instead of "100%".
|
|
|
diskorduser
|
2021-10-19 09:39:51
|
So no vardct encoding?
|
|
2021-10-19 09:40:11
|
No jpeg recompression mode?
|
|
|
Justin
|
2021-10-19 09:40:40
|
Not yet! It wasn't bug-free. The converter crashed occasionally. I'm expecting this to be solved in the next few weeks.
|
|
|
diskorduser
No jpeg recompression mode?
|
|
2021-10-19 10:18:51
|
Eh.. fuck it. It's a beta for a reason. It's implemented now: Modular Mode + JPEG Recompression. You may have to Ctrl+F5 to clear the cache. Keep in mind it may crash, and if it does, let me know!
|
|
|
diskorduser
|
2021-10-19 10:19:23
|
Ha ha . I was just asking. Not going to use it btw.
|
|
|
Justin
|
|
diskorduser
Ha ha . I was just asking. Not going to use it btw.
|
|
2021-10-19 10:19:49
|
All cool, I guess it might be helpful for debugging.
|
|
|
improver
|
2021-10-19 03:59:35
|
wild style. looks really good
|
|
|
diskorduser
|
2021-10-19 04:10:16
|
Does jpeg suffer generation loss even if it's always saved at quality 100?
|
|
|
_wb_
|
2021-10-19 04:12:37
|
That will converge to a fixed point quickly, I think
|
|
2021-10-19 04:13:34
|
In jpeg, most generation loss comes from cbcr up/downsampling and from dct coeff requantization
|
|
2021-10-19 04:14:55
|
At q100, assuming 444, the dct quant table is all 1s so the only loss is from the color transform and dct itself, but that should be minor and not accumulate easily
|
|
2021-10-19 04:17:16
|
(dct coeffs are 12-bit in jpeg, which is not enough to be reversible, but it's quite accurate; rgb to ycbcr is lossy but I think ycbcr to rgb to ycbcr is lossless or close enough)
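For reference, the colour transform being discussed is the standard JFIF RGB-to-YCbCr matrix (quoted here from the JFIF spec, not from this chat):
```latex
\begin{aligned}
Y   &= 0.299\,R + 0.587\,G + 0.114\,B \\
C_b &= 128 - 0.168736\,R - 0.331264\,G + 0.5\,B \\
C_r &= 128 + 0.5\,R - 0.418688\,G - 0.081312\,B
\end{aligned}
```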
|
|
|
|
Deleted User
|
2021-10-19 05:13:44
|
If someone picked *exactly* the same lossy encoding settings (incl. quant tables), would generation loss be as small as at q100?
|
|
|
_wb_
|
2021-10-19 07:51:48
|
In 444 JPEG, maybe
|
|
2021-10-19 07:52:05
|
In codecs that do more fancy stuff, no
|
|
|
|
Deleted User
|
2021-10-20 09:27:52
|
Hi Guys. What is the status of jxl implementation in the browsers? Any good news or will politics win?
|
|
|
improver
|
2021-10-20 09:35:28
|
I expect something to maybe start seriously happening in 2022 once the IS gets published; non-browser software support will be improved by then, and libjxl will be more bug-free as well
|
|
2021-10-20 09:36:06
|
perhaps IS publication will be the trigger to re-evaluate the "ecosystem acceptance" situation of the format
|
|
2021-10-20 09:38:12
|
just proceed as if it'll be fine, don't let bitter thoughts get to you
|
|
|
|
Deleted User
|
2021-10-20 09:38:35
|
That is good advice
|
|
|
w
|
2021-10-20 09:39:22
|
just start using it everywhere and make more people complain that their browsers dont support it
|
|
2021-10-20 09:40:40
|
now all the people around me have it enabled in their browsers
|
|
|
improver
|
2021-10-20 09:41:49
|
AVIF support is kinda a good thing but honestly it's way too problematic in the ways it works to be a serious contender
|
|
2021-10-20 09:46:30
|
though.. honestly I still don't really think that it's even a good thing, it's got pretty bad decoding speed and for encoding it's even worse
|
|
2021-10-20 09:47:19
|
maybe for browser vendors it kinda made sense because the codebase increase wasn't big, but idk
|
|
|
nathanielcwm
|
|
w
just start using it everywhere and make more people complain that their browsers dont support it
|
|
2021-10-20 11:06:41
|
please update to a supported browser <:kekw:808717074305122316>
|
|
|
improver
|
2021-10-20 11:17:48
|
should have a banner which uses the picture element to either show "compatible with jxl" or "please update to a supported browser"
|
|
2021-10-20 11:18:08
|
would b sorta funny
|
|
2021-10-20 11:18:38
|
kinda "best viewed in IE9" style
|
|
|
Jyrki Alakuijala
|
|
Hi Guys. What is the status of jxl implementation in the browsers? Any good news or will politics win?
|
|
2021-10-20 03:01:16
|
Pressing the thumbs up on mozilla's request for standards position and starring the chrome bug are the ways to show interest right now
|
|
2021-10-20 03:15:39
|
here, support the request with a thumbs up: https://github.com/mozilla/standards-positions/issues/522
|
|
2021-10-20 03:16:38
|
here by starring: https://bugs.chromium.org/p/chromium/issues/detail?id=1178058
|
|
|
Fox Wizard
|
2021-10-21 12:40:59
|
That sad moment when for some reason using photon noise increases the file size from 285KB (no photon noise) to 474KB (with photon noise) <:Cheems:884736660707901470>
|
|
2021-10-21 12:41:23
|
Wonder if that's a bug, because it happens with e9, but not e7 (e7 both were 299KB)
|
|
|
_wb_
|
2021-10-21 12:49:08
|
Hm, I hope e9 is not trying to counteract the noise or something like that
|
|
|
Fox Wizard
|
2021-10-21 12:50:04
|
``cjxl -e 9 -q 90 test.png "e9 NoNoise.jxl"``
|
|
2021-10-21 12:50:17
|
``cjxl -e 9 -q 90 --photon_noise=ISO1500 test.png "e9 Noise.jxl"``
|
|
2021-10-21 12:51:32
|
Not really sure what happened, but it looks noticeably different from e7 with noise
|
|
2021-10-21 12:51:47
|
``cjxl -e 7 -q 90 test.png "e7 NoNoise.jxl"``
|
|
2021-10-21 12:52:05
|
``cjxl -e 7 -q 90 --photon_noise=ISO1500 test.png "e7 Noise.jxl"``
|
|
2021-10-21 12:52:17
|
At least e7 works in this case
|
|
|
Jyrki Alakuijala
|
2021-10-21 01:09:24
|
looks like -e9 will be trying to ramp up adaptive quantization to reduce the effects of the noise
|
|
2021-10-21 01:09:38
|
this is not an effective strategy in -e9
|
|
|
diskorduser
|
2021-10-21 01:36:24
|
Does photon noise look good if I print photos? I'm going to print them @ 600dpi.
|
|
|
|
veluca
|
2021-10-21 01:45:14
|
noise + `-e 8` or `-e 9` is generally not a good idea
|
|
|
_wb_
|
|
diskorduser
Does photon noise look good if I print photos? I'm going to print them @ 600dpi.
|
|
2021-10-21 02:18:35
|
That's an artistic question and a matter of personal taste, I'd say
|
|
|
fab
|
|
Fox Wizard
``cjxl -e 9 -q 90 test.png "e9 NoNoise.jxl"``
|
|
2021-10-21 02:31:37
|
TRY https://artifacts.lucaversari.it/libjxl/libjxl/2021-10-20T15%3A49%3A34Z_f4eb5cb216e9005729a71748b94a3f5a8f3f5f1e/
|
|
2021-10-21 02:31:45
|
with `for %i in (C:\Users\Utente\Pictures\o\*.png) do cjxloctober12 -d 0.597 -s 9 --use_new_heuristics --faster_decoding=6 -I 0.59 %i %i.jxl`
|
|
2021-10-21 02:31:54
|
it will compress very well
|
|
2021-10-21 02:32:19
|
where there is black and pink it will retain all the noise
|
|
|
Fox Wizard
|
2021-10-21 02:34:50
|
Doesn't faster decoding make the quality worse though?
|
|
|
fab
|
2021-10-21 02:34:53
|
yes
|
|
2021-10-21 02:35:17
|
and also if you change the i to another value you'll get banding
|
|
2021-10-21 02:35:28
|
also new heuristics has a problem with banding
|
|
2021-10-21 02:35:36
|
for example where there are phones in an image
|
|
2021-10-21 02:35:44
|
where there are small objects
|
|
2021-10-21 02:35:52
|
so it's unusable for a person like lee
|
|
2021-10-21 02:36:01
|
who cares only about that
|
|
|
Fox Wizard
|
2021-10-21 02:36:39
|
Wonder if there's a parameter list somewhere with an explanation of all cjxl parameters. It should make things easier for people who haven't compared many images with different parameters
|
|
|
fab
|
2021-10-21 02:36:43
|
though it compresses the same or better
|
|
2021-10-21 02:37:04
|
that new heuristics with d1 s 7
|
|
2021-10-21 02:37:17
|
i got 498 kb png to 39 kb
|
|
2021-10-21 02:37:51
|
is still experimental but for the speed you can't do better
|
|
2021-10-21 02:38:02
|
i don't know if it can be used for animation
|
|
2021-10-21 02:38:15
|
but jyrki said there is much to improve for animation
|
|
2021-10-21 02:38:37
|
patches 0 don't work
|
|
|
Fox Wizard
|
2021-10-21 02:38:41
|
To be honest, speed doesn't really matter for me. Only care about static images and max. efficiency which is why I usually use -e 9 (usually lossless)
|
|
|
fab
|
2021-10-21 02:39:17
|
sign that maybe new heuristics may be disabling patches generation
|
|
2021-10-21 02:39:33
|
or just you can use only patches 1
|
|
2021-10-21 02:39:57
|
don't know how it works
|
|
|
Fox Wizard
|
2021-10-21 02:40:02
|
No idea what all those things do tbh
|
|
2021-10-21 02:40:16
|
Wonder if someday we'll get something like this for jxl: https://www.reddit.com/r/AV1/comments/lfheh9/encoder_tuning_part_2_making_aomencav1libaomav1/
|
|
|
fab
|
2021-10-21 02:40:42
|
if you have for example a raw from Photon camera github and three exposure
|
|
2021-10-21 02:41:06
|
and you want to share a near lossless file that has same artifacts.
|
|
2021-10-21 02:41:32
|
like today digital broadcast changed in italy on some channels
|
|
2021-10-21 02:42:18
|
a user on discord/av1 tried a 2 kb placeholder comparison
|
|
2021-10-21 02:46:14
|
the old parameter i think that was MATHEMATICALLY efficient was
|
|
2021-10-21 02:46:15
|
-q 47.74 -s 7 -I 0.6 --use_new_heuristics --patches=0
https://artifacts.lucaversari.it/libjxl/libjxl/2021-10-18T21%3A45%3A05Z_a2cba9fe1828bfa3782df090ec8407c9a96aea88/
|
|
2021-10-21 02:46:38
|
the build is different; it doesn't have the new ac deadzone quantization commit
|
|
2021-10-21 02:47:50
|
av1 is becoming better like if you try tomorrow it will be 16% more efficient than h264
|
|
2021-10-21 02:49:18
|
like an av1 8 bit cpu 6 is 16% more efficient of h264 if it was in 10 bit
|
|
2021-10-21 02:49:25
|
22102021 version
|
|
|
Fox Wizard
|
2021-10-21 02:51:04
|
Shouldn't it be like... way more efficient than just 16% <:KekDog:884736660376535040>
|
|
|
fab
|
2021-10-21 02:51:27
|
that's because h264 isn't 10 bit
|
|
2021-10-21 02:51:33
|
h264 doesn't have 10 bit
|
|
2021-10-21 02:51:52
|
av1 in 8 bit i think it should look similar
|
|
2021-10-21 02:52:08
|
don't know there aren't honest comparisons
|
|
2021-10-21 02:52:28
|
they just say vvc is better
|
|
|
|
Deleted User
|
|
Fox Wizard
Wonder if someday we'll get something like this for jxl: https://www.reddit.com/r/AV1/comments/lfheh9/encoder_tuning_part_2_making_aomencav1libaomav1/
|
|
2021-10-21 04:03:17
|
I hope we won't have to do that (because a smart encoder will hopefully do it on its own).
|
|
|
BlueSwordM
|
|
Fox Wizard
Wonder if someday we'll get something like this for jxl: https://www.reddit.com/r/AV1/comments/lfheh9/encoder_tuning_part_2_making_aomencav1libaomav1/
|
|
2021-10-21 04:11:13
|
There is no need really.
|
|
2021-10-21 04:12:22
|
For photographic images, cjxl is already very strong and fast, and is tuned quite well.
aomenc with AVIF by default currently doesn't have that, and I really think they should change that.
|
|
|
Fox Wizard
|
2021-10-21 04:26:03
|
Would still be kinda nice to at least know what certain parameters do, to be honest
|
|
2021-10-21 04:26:28
|
Like, in the past I saw discussions whether to use or not use new heuristics
|
|
|
diskorduser
|
2021-10-21 04:26:42
|
Fabian will tune the jxl encoder. Don't worry
|
|
|
Fox Wizard
|
2021-10-21 04:26:51
|
<a:DogeScared:821038589201612820>
|
|
|
_wb_
|
2021-10-21 04:34:53
|
The philosophy in libjxl is that the default settings are the best (general-purpose) settings, and the options hidden behind `cjxl -v (-v) -h` are intended for experimentation. For a specific image, of course specific custom settings might always work better, but the defaults should be the most sensible thing to do in general.
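For completeness, listing those experimental options is just the verbose help described above:
```bash
# extra -v flags reveal more of the expert/experimental options in the help text
cjxl -v -v -h
```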
|
|
|
BlueSwordM
|
|
Fox Wizard
Like, in the past I saw discussions whether to use or not use new heuristics
|
|
2021-10-21 04:39:34
|
The only thing I'd personally modify would be:
`--intensity_target=XXX` depending on your display's brightness.
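A hedged example of what that might look like (the 250 nits value is only an illustration for a fairly bright SDR display, not a recommendation from this chat; filenames are placeholders):
```bash
# encode with the intended peak display brightness, in nits
cjxl --intensity_target=250 -d 1 input.png output.jxl
```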
|
|
|
Fox Wizard
|
2021-10-21 04:40:19
|
Didn't even know that's a thing
|
|
|
Jyrki Alakuijala
|
2021-10-21 05:01:03
|
--intensity_target=XXX should ideally depend on the viewer's display, but the parameter is needed at compression time
|
|
2021-10-21 05:01:19
|
sometimes the encoding happens on a displayless server
|
|
2021-10-21 05:02:12
|
if one creates a new gimp painting, it might be a good idea to set the intensity target automatically to the display's current max intensity
|
|
|
_wb_
|
2021-10-21 05:34:40
|
I doubt there's a way to know what that is
|
|
2021-10-21 05:34:52
|
Also there might be multiple displays
|
|
|
Jyrki Alakuijala
|
2021-10-22 06:19:19
|
yes, the only feasible approach is to have the intensity target be related to the intended psychovisual response
|
|
2021-10-22 06:19:58
|
and leave it up to the receiver to get as close as possible to that psychovisual response with whatever device is attempting to show the image
|
|
|
Justin
|
|
Fox Wizard
Wonder if there's a parameter list somewhere with an explanation of all cjxl parameters. It should make things easier for people who haven't compared many images with different parameters
|
|
2021-10-22 01:30:31
|
https://gitlab.com/wg1/jpeg-xl/-/blob/main/tools/cjxl.cc + CTRL +F = "AddOption"
|
|
2021-10-22 01:31:00
|
Often, there are 2 lines of description for each parameter/option. But yes, many of them lack a good description tbh. At least for people not that deep into the topic
|
|
|
Fox Wizard
|
2021-10-22 02:02:42
|
Hm, at least it helps a little, thanks
|
|
|
dkam
|
2021-10-25 02:01:31
|
Hi there - I'm planning to create jpeg-xl versions of images on my website. I just want to confirm that, as JPEG-XL can be progressively downloaded, I only need to create a single version of the image at the maximum resolution I'll need. Browsers *should* be able to download just enough of the image to render at any target size. So - if I display the full image, the full Jpeg-xl will be downloaded - but for thumbnails, the browser should just download some of the image, or use the cached version. Is that correct?
|
|
|
w
|
2021-10-25 02:12:04
|
I don't think there's any or will be any implementation which actively gets *enough* data for a specific resolution
|
|
2021-10-25 02:12:17
|
and as for the actual resolutions, there's a fixed number of progressive steps
|
|
2021-10-25 02:19:32
|
i think thumbnails in browsers would have to be something in the fundamentals of web standards
|
|
2021-10-25 02:20:32
|
something like designating an img tag as a thumbnail
|
|
2021-10-25 02:20:37
|
interesting idea though
|
|
2021-10-25 02:20:50
|
maybe that can be done by baking in the decoder into your website
|
|
|
dkam
|
2021-10-25 02:24:13
|
It doesn't bother me if the downloaded resolution isn't an exact match for the exact page resolution. But I was hoping that an img with size 100x100 could just download up to the closest progressive step to render completely (including the x1.5, x2, x3 versions). I wanted to avoid storing and serving multiple resolutions of the same file.
|
|
|
monad
|
2021-10-25 02:25:45
|
That's intended utility designed into the codec, but AFAIK there's no standard mechanism for that behavior in web browsers.
|
|
|
w
|
2021-10-25 02:26:29
|
hmm unless you find out the minimum data to get for example the 64x64 dc and only send that much from the server
|
|
|
dkam
|
2021-10-25 02:29:47
|
I thought it'd be implemented with the browser requesting byte ranges - but maybe the round-trip time slows it down too much to bother.
|
|
|
w
|
2021-10-25 02:30:34
|
it definitely wouldn't be something that browsers would implement
|
|
|
dkam
|
2021-10-25 02:31:16
|
Bummer. So I still need to generate multiple resolutions. Thanks for the help!
|
|
|
w
|
2021-10-25 02:34:04
|
I was thinking maybe the server can have a parameter for the range/size and then only deliver that much
|
|
2021-10-25 02:34:08
|
i will experiment with this
|
|
2021-10-25 02:48:32
|
something like this https://test.grass.moe/jxl/https://raw.githubusercontent.com/libjxl/conformance/master/testcases/bike/input.jxl?only=104167
|
|
|
_wb_
|
2021-10-25 05:12:30
|
The jxl bitstream does have features for this, but it would require significant browser changes and maybe some changes in the html and/or http specs to actually make it work. It's a pretty hard thing to do.
|
|
2021-10-25 05:13:53
|
Range requests are already something that exists in http, a browser can ask for bytes X to X+N and servers and CDNs should already understand that.
|
|
2021-10-25 05:16:32
|
Iirc, chrome uses that for images marked with `lazyload`: it fetches only the first 2kb for such images, which should be enough to get the image header to know the dimensions for layout. It fetches the rest only when you scroll close enough to the image.
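A quick sketch of what such a request looks like at the HTTP level, using curl (the URL is a placeholder):
```bash
# fetch only the first 2 kB of an image, like a lazy-loading browser would
curl -r 0-2047 -o header_only.jxl https://example.com/image.jxl
```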
|
|
|
Jyrki Alakuijala
|
2021-10-25 11:13:17
|
It is an unknown if the single-file approach is going to be effective for visual quality -- my hunch is that VarDCT designed for appropriate quality is going to be more effective in the end
|
|
2021-10-25 11:14:19
|
It is an unknown if many levels of progression visualized is a good idea in general, or to rather limit the progression to an early (10-15 %) version and then the perceived final version (~50-60 %), or something else
|
|
|
improver
|
|
Jyrki Alakuijala
It is an unknown if the single-file approach is going to be effective for visual quality -- my hunch is that VarDCT designed for appropriate quality is going to be more effective in the end
|
|
2021-10-25 12:10:20
|
same. question is whether the storage cost savings will offset the cost of a bit lower visual quality. one could also use one progressive step up but that'll shift the cost to bandwidth
|
|
2021-10-25 12:17:02
|
i believe at first one should experiment with something like an ?only= param, and later that could easily be shifted to CDNs (eg you request ?only= when querying the CDN, and the CDN translates that to a byte range request)
|
|
2021-10-25 12:19:30
|
this would increase caching efficiency for CDNs. for browsers, that could come later, if efficiency gains are proved to be worth it
|
|
2021-10-25 02:55:26
|
btw, how does jxl's jpg mode mix with progression?
|
|
2021-10-25 02:56:33
|
as in, does original jpg need to be progressive to allow for jxl progression?
|
|
|
_wb_
|
2021-10-25 03:08:48
|
No
|
|
2021-10-25 03:12:39
|
The jxl can use any progression, and the original jpeg can use any progression - the coefficients are what matter for the reconstruction, and the `jbrd` will have the original jpeg progression order
|
|
|
improver
|
2021-10-25 03:14:11
|
handy
|
|
2021-10-25 03:20:48
|
oh right jpegtran can do that for jpegs too
|
|
2021-10-25 03:21:05
|
shouldve expected it to b supported i guess
|
|
2021-10-26 07:49:35
|
is there any tool to print current available progression steps of .jxl?
|
|
2021-10-26 07:49:49
|
in bytes
|
|
2021-10-26 07:50:11
|
and resolutions
|
|
2021-10-26 07:50:48
|
or fractions
|
|
2021-10-26 07:51:28
|
something one could use to prepare relevant srcset
|
|
|
nathanielcwm
|
2021-10-27 08:34:26
|
https://cdn.discordapp.com/attachments/869655213315878983/902665260656394300/done.jxl <@!321486891079696385> i found some pretty good jxl test material
|
|
2021-10-27 08:40:28
|
<@!416586441058025472> what u think?
|
|
|
Fox Wizard
|
2021-10-27 09:03:20
|
<@!794205442175402004> what exactly does ``-I`` do? Tried it on a few images and it made lossless slightly smaller (-I 5 had the biggest impact on the few test images)
|
|
2021-10-27 09:06:36
|
Oh lol, now I have an image where ``1`` was smaller than ``5`` <:KekDog:884736660376535040>
|
|
|
fab
|
|
nathanielcwm
https://cdn.discordapp.com/attachments/869655213315878983/902665260656394300/done.jxl <@!321486891079696385> i found some pretty good jxl test material
|
|
2021-10-27 09:10:00
|
for this you need high effort; use q 80.87 -i 0.89 -s 9 --gaborish 0 dots 0 and a newer build than the one i tried yesterday
|
|
2021-10-27 09:10:25
|
you can experiment with gaborish 1 though i would use gaborish 0
|
|
2021-10-27 09:11:15
|
it depends how the build look
|
|
2021-10-27 09:11:59
|
also don't forget to test for screenshots
|
|
|
_wb_
|
|
Fox Wizard
Oh lol, now I have an image where ``1`` was smaller than ``5`` <:KekDog:884736660376535040>
|
|
2021-10-27 09:58:31
|
Strange... -I should be clamped to 1
|
|
2021-10-27 09:58:52
|
It's the fraction of samples it looks at for doing MA tree training
|
|
2021-10-27 09:58:59
|
Default is 0.5
|
|
2021-10-27 09:59:14
|
You can't look at more than all the samples
|
|
|
Fox Wizard
|
2021-10-27 10:06:00
|
Hm, I went from 0 to 9 and the sizes varied
|
|
2021-10-27 10:06:25
|
0 was noticeably bigger and 1 - 9 were similar with 5 the smallest in most cases
|
|
2021-10-27 10:15:46
|
|
|
2021-10-27 10:16:54
|
``cjxl -e 9 -q 100 -I <X> -E 3 -g 3``
|
|
2021-10-27 10:17:37
|
And nice, about 2% smaller than ``-e 9 -q 100`` <:Poggers:805392625934663710>
|
|
2021-10-27 10:24:03
|
|
|
|
_wb_
|
2021-10-27 10:32:06
|
<@179701849576833024> any idea what causes that? I assumed I > 1 would just do exactly the same as I == 1
|
|
|
Fox Wizard
|
2021-10-27 10:32:07
|
Oh wait, I misread some stuff
|
|
2021-10-27 10:32:33
|
Still different results, but -I 1 is the smallest
|
|
2021-10-27 10:32:41
|
~~I should stop reading stuff when I'm barely awake~~
|
|
|
|
veluca
|
2021-10-27 10:33:14
|
ahhh possibly because 0.1*I is used for property quantization
|
|
2021-10-27 10:33:38
|
(as in, 0.1*I fraction of the samples)
|
|
|
Fox Wizard
|
2021-10-27 10:37:53
|
Hm, 2.700 bytes
|
|
2021-10-27 10:38:17
|
2.703 bytes
|
|
2021-10-27 10:38:34
|
Guess this time -I 5 is actually smaller :p
|
|
2021-10-27 10:43:50
|
Hm, ``I 10``+ gives the same results, so guess that can be the case
|
|
|
Cool Doggo
|
2021-10-27 10:52:43
|
I didn't even know it let you do above 1
|
|
|
Fox Wizard
|
2021-10-27 11:24:29
|
Same, but got bored and decided to use random values
|
|
|
novomesk
|
2021-10-29 09:30:25
|
Is there a way to ask the skcms developers to make a release or tag? Many Linux distros don't like bundled dependencies and they want skcms as an individual package/system dependency. It would also help libjxl get into distros more easily. Many maintainers refuse to run deps.sh; they want things to be done in harmony with their distro policy standards.
|
|
|
spider-mario
|
2021-10-29 10:00:08
|
for distros, lcms2 is an alternative
|
|
|
Kleis Auke
|
2021-10-29 12:34:47
|
I think sjpeg and LodePNG have a similar problem. sjpeg can be disabled with `-DJPEGXL_ENABLE_SJPEG=0` but for LodePNG there is no way to disable that or to use an alternative dependency.
|
|
2021-10-29 12:35:06
|
But perhaps that is not a problem since LodePNG is a header-only library (IIRC, Fedora does not have this release/tag requirement for header-only libs).
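A sketch of how a distro-style build might combine the flags mentioned in this thread (only `-DJPEGXL_ENABLE_SJPEG=0` and `-DBUILD_TESTING=OFF` are confirmed above; whether the remaining bundled dependencies can be swapped for system packages depends on the build options libjxl exposes):
```bash
cd libjxl && mkdir -p build && cd build
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=OFF -DJPEGXL_ENABLE_SJPEG=0 ..
ninja
```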
|
|
|
|
veluca
|
2021-10-29 12:59:18
|
for that matter skcms is pretty close to lodepng in terms of structure (basically 1 file)
|
|
|
improver
|
2021-10-29 01:37:18
|
maybe just bake tarballs with submodules pre-downloaded on releases
|
|
|
_wb_
|
2021-10-29 02:08:03
|
Might make sense to do that
|
|
|
monad
|
2021-11-01 04:01:20
|
```dssim data/\[r16\]raptor_v2_5_clean\|0\|.png 79.919984.png
-79.919984 79.919984.png```
Is that good or bad?
|
|
|
_wb_
|
2021-11-01 09:50:26
|
Uh, negative dssim? Looks like a bug :)
|
|
|
monad
|
2021-11-01 10:57:59
|
Looks like unintended usage to me. A succinct "What's wrong with you?" should suffice for output in such cases.
|
|
|
Fraetor
|
2021-11-01 04:27:36
|
Finally managed to get hold of the standards through my University library.
|
|
|
|
embed
|
|
Fraetor
|
2021-11-01 04:40:43
|
Fine, I'll do it myself.
|
|
|
|
veluca
|
2021-11-01 04:56:52
|
huh
|
|
2021-11-01 04:56:55
|
you can do that?
|
|
2021-11-01 04:57:19
|
can't you get the FDIS of part 1 and the IS of part 2?
|
|
|
Fraetor
|
|
veluca
can't you get the FDIS of part 1 and the IS of part 2?
|
|
2021-11-01 05:04:33
|
Not as far as I could tell. My Uni doesn't subscribe to the ISO, but it does to the British Standards Institute, which has this copy.
|
|
|
|
veluca
|
2021-11-01 05:05:32
|
ah so there might be some lag
|
|
2021-11-01 05:05:34
|
makes sense
|
|
|
_wb_
|
2021-11-01 06:21:34
|
FDIS of part 1 isn't approved yet, so they will not sell it yet
|
|
2021-11-01 06:24:50
|
IS of part 2 was published two weeks ago or so, so obviously it will take time until the ISO certified carrier pigeons have transmitted its contents using their special Morse code wing flapping technique.
|
|
|
fab
|
2021-11-01 07:14:34
|
the ww bot doesn't work anymore
|
|
2021-11-01 07:14:47
|
guess he disabled the server
|
|
2021-11-01 07:14:58
|
it was only a test
|
|
2021-11-01 07:15:11
|
don't know if he will readd final version
|
|
2021-11-01 07:15:19
|
and how much storage he has, he didn't go into detail
|
|
|
w
|
2021-11-01 08:29:05
|
the bot turns on and off on its own based on the activity
|
|
2021-11-01 08:29:14
|
just run the command again
|
|
2021-11-01 08:33:25
|
and it doesn't use any storage
|
|
2021-11-01 08:34:03
|
I did go into detail on how it is hosted
|
|
2021-11-01 08:40:05
|
actually that's wrong, it's just that image
|
|
|
Fraetor
|
|
w
actually that's wrong, it's just that image
|
|
2021-11-01 10:59:46
|
Was the image becoming larger than 8 MB, and thus not able to be uploaded? I just encoded it with `cjxl -d 2 in.jpg out.jxl`
|
|
|
w
|
2021-11-01 11:00:00
|
it's not uploading that is the issue
|
|
|
nathanielcwm
|
|
_wb_
IS of part 2 was published two weeks ago or so, so obviously it will take time until the ISO certified carrier pigeons have transmitted its contents using their special Morse code wing flapping technique.
|
|
2021-11-01 11:54:10
|
transmitted over IPoAC?
|
|
|
|
embed
|
2021-11-02 12:11:36
|
https://embed.moe/https://cdn.discordapp.com/attachments/794206087879852106/904768652149157948/jxl_standards.jxl
|
|
|
w
|
2021-11-02 12:11:41
|
problem was lodepng
|
|
|
yurume
|
2021-11-02 10:55:48
|
https://pagepipe.com/dont-use-webp-image-format/ spotted this from HN. uh-oh.
|
|
|
Scope
|
2021-11-02 11:06:57
|
While reading it, I thought it was a very old article, but...
> November 2021
|
|
|
novomesk
|
2021-11-02 11:12:47
|
The article is not telling the truth. For example: "If you upload a webP file to Facebook you'll get an error message." A few hours ago one of my family members uploaded a webp image to Facebook and it worked. And it also worked several months before.
|
|
|
nathanielcwm
|
|
Scope
While reading it, I thought it was a very old article, but...
> November 2021
|
|
2021-11-02 11:14:14
|
i think the date is fake
|
|
2021-11-02 11:14:53
|
like for instance it says that safari completely lacks webp support
|
|
2021-11-02 11:17:06
|
my screenshot
|
|
|
spider-mario
|
2021-11-02 11:17:14
|
they're also anti-https https://pagepipe.com/httpsssl-and-its-negative-impact-on-mobile-speed/
|
|
2021-11-02 11:17:20
|
probably not a site worth reading
|
|
|
nathanielcwm
|
2021-11-02 11:17:29
|
their screenshot
|
|
2021-11-02 11:17:50
|
ah the date on the screenshot says June 2020
|
|
2021-11-02 11:17:52
|
so <:monkaMega:809252622900789269>
|
|
2021-11-02 11:22:13
|
they recommend using mozjpeg over webp which isn't really terrible advice compared to recommending another jpeg encoder
|
|
2021-11-02 11:22:14
|
¯\_(ツ)_/¯
|
|
|
Scope
|
2021-11-02 11:28:34
|
Yep, some thoughts are not so wrong, but many other reasons not to use WebP are very strange, like
> Photoshop doesn't natively support WebP, but you can add a plugin to add WebP support. That's a clue. Adobe thinks it's lame
or
> That saves 3.1k. In big letters, they tell us that is a 32-percent savings.
> Is that significant? No.
> Is it worth bothering with? No.
> Why?
> Images load in parallel. All at the same time. Boom!
like that 32% saving is nothing because images are loaded in parallel, assuming that only formats other than WebP can be loaded in parallel
|
|
|
spider-mario
|
|
nathanielcwm
they recommend using mozjpeg over webp which isn't really terrible advice compared to recommending another jpeg encoder
|
|
2021-11-02 11:28:49
|
their arguments are terrible, though, so to me it doesn't matter much whether they accidentally happen to be right
|
|
2021-11-02 11:28:55
|
they would be right for the wrong reasons
|
|
|
nathanielcwm
|
|
Scope
Yep, some thoughts are not so wrong, but many other reasons not to use WebP are very strange, like
> Photoshop doesn't natively support WebP, but you can add a plugin to add WebP support. That's a clue. Adobe thinks it's lame
or
> That saves 3.1k. In big letters, they tell us that is a 32-percent savings.
> Is that significant? No.
> Is it worth bothering with? No.
> Why?
> Images load in parallel. All at the same time. Boom!
like that 32% saving is nothing because images are loaded in parallel, assuming that only formats other than WebP can be loaded in parallel
|
|
2021-11-02 11:32:44
|
well there are official adobe instructions that specifically tell you how to install the webp plugin
which counteracts that point
|
|
2021-11-02 11:35:49
|
and afaik lossless webp isn't related to vp8 at all?
|
|
2021-11-02 11:38:37
|
their anti-https sentiment seems to be more of an anti Let's Encrypt sentiment at points
which is not a terribly unpopular opinion afaik
|
|
|
spider-mario
|
|
nathanielcwm
and afaik lossless webp isn't related to vp8 at all?
|
|
2021-11-02 11:41:46
|
correct, it's a separate codec
|
|
|
_wb_
|
2021-11-02 12:32:22
|
Lossless webp is almost a perfect improvement over png
|
|
2021-11-02 12:33:16
|
(if not for the limitation to 8-bit and image dimensions that it inherits from the lossy webp design decisions)
|
|
2021-11-02 12:36:02
|
Lossy webp was a quick hack to make an image format with low browser binary size overhead (if it has a vp8 decoder already) and with better-than-no deblocking so it can be somewhat better than jpeg at the low fidelity operating points that are loved by devops people who want to save on bandwidth and improve web perf without caring much about image quality.
|
|
|
190n
|
|
spider-mario
they're also anti-https https://pagepipe.com/httpsssl-and-its-negative-impact-on-mobile-speed/
|
|
2021-11-02 04:15:55
|
lol wtf, their latency complaint is valid but that's what HTTP/3 is trying to solve and arguably such a penalty is worth it for security
|
|
2021-11-02 04:16:03
|
> HTTPS can't be cached.
[citation needed]
|
|
2021-11-02 04:16:44
|
lol even _that page_ is served over cached https
|
|
2021-11-02 04:18:59
|
> You can get an SSL certificate for free. Blog posts debate the value of a free SSL Certificate. But, the costs can shoot up to $1,499 per year if you opt for an SSL certificate from a provider like Symantec. You don't have to provide corporate documentation to get SSL Certification. The authorization may be a simple email. Confirm the email inquiry, and you're accepted as the authorized domain holder. Can free TLS certificates provided by Let's Encrypt still be hacked? Absolutely. Anyone can get an SSL certificate - including hackers. They can set up a site to harvest information.
good technology is available to bad actors, the horror <:NotLikeThis:805132742819053610>
|
|
2021-11-02 04:19:32
|
> Labeling a site as *secure* because it has SSL is wrong. In error, users think they're using a secure site when in reality it's not better than before.
this is a good point
|
|
|
|
Deleted User
|
|
190n
> Labeling a site as *secure* because it has SSL is wrong. In error, users think they're using a secure site when in reality it's not better than before.
this is a good point
|
|
2021-11-04 01:20:32
|
Chrome already tells you instead that the *connection* is secure, not the page itself.
|
|
|
improver
|
2021-11-04 01:56:01
|
validating site security is impossible anyway, if you have high enough standards for what validation means
|
|
|
novomesk
|
2021-11-04 08:16:54
|
I informed colleagues at my company about JPEG XL's existence. I showed them the 32-byte JXL art images + info on how they can be viewed in Google Chrome.
A few of the colleagues replied that they were impressed, and one wrote me that he did his thesis on fractal compression of images.
|
|
|
_wb_
|
2021-11-04 08:43:45
|
Fractal compression, I remember that
|
|
|
Scope
|
2021-11-04 01:32:09
|
> With JPEG XL image encoding performance using the de facto libjxl library, here was an example of Alder Lake performing very poorly and stemming from finding itself running on the E cores rather than performance P cores. With limited deviation between runs, this behavior was reliably happening.
https://www.phoronix.com/scan.php?page=article&item=intel-12600k-12900k&num=4
|
|
|
190n
|
2021-11-04 06:05:56
|
yeah it says that the linux scheduler needs some work
|
|
|
|
veluca
|
|
Scope
> With JPEG XL image encoding performance using the de facto libjxl library, here was an example of Alder Lake performing very poorly and stemming from finding itself running on the E cores rather than performance P cores. With limited deviation between runs, this behavior was reliably happening.
https://www.phoronix.com/scan.php?page=article&item=intel-12600k-12900k&num=4
|
|
2021-11-06 02:35:18
|
sounds like we should turn off topology detection
|
|
|
Scope
|
2021-11-06 03:06:35
|
JXL, even without PDF features, can also be relatively useful in this use case
https://news.ycombinator.com/item?id=29124994
|
|
|
|
Deleted User
|
2021-11-06 04:59:13
|
Any changes since that time?
|
|
|
k2arim99
|
2021-11-08 12:12:05
|
is it a known issue that cjxl crashes with some pictures?
|
|
2021-11-08 12:14:14
|
mine choked and died trying to encode manga
|
|
|
Cool Doggo
|
2021-11-08 12:19:51
|
was it a jpeg?
|
|
|
k2arim99
|
2021-11-08 12:21:49
|
yes
|
|
2021-11-08 12:22:02
|
i think i found the reason tho
|
|
2021-11-08 12:22:09
|
something with the metadata
|
|
2021-11-08 12:22:15
|
which i stripped
|
|
2021-11-08 12:22:39
|
and its working as far as im testing
|
|
2021-11-08 12:22:43
|
but its weird
|
|
2021-11-08 12:22:52
|
because some pages work and some dont
|
|
2021-11-08 12:23:29
|
and i guess any original jpg encode was in a batch with similar settings and subsequent metadata
|
|
|
Cool Doggo
|
2021-11-08 12:23:56
|
there have been a couple issues on github about some grayscale images failing too
<https://github.com/libjxl/libjxl/issues/764>
<https://github.com/libjxl/libjxl/issues/720>
|
|
2021-11-08 12:27:23
|
the metadata thing was also mentioned in an issue before
|
|
|
k2arim99
|
2021-11-08 12:29:07
|
i know nothing of the technicals of codecs but it's like there's some sort of tag ppl put on greyscale pics that trips up cjxl
|
|
2021-11-08 12:29:35
|
but not all my greyscale pictures tripped it up tho
|
|
2021-11-08 12:29:57
|
maybe those didn't have it for some reason
|
|
|
BlueSwordM
|
2021-11-08 12:30:38
|
Metadata is the biggest problem with transcoding JPEGs.
|
|
|
k2arim99
|
2021-11-08 12:30:52
|
lol really?
|
|
|
BlueSwordM
|
|
k2arim99
lol really?
|
|
2021-11-08 12:31:13
|
Yeah, it's been a small thorn for months.
|
|
2021-11-08 12:31:27
|
"We" have fixed most of the issues, but some are still present.
|
|
|
k2arim99
|
2021-11-08 12:31:57
|
thats funny, i would have expected that was the littlest of concerns
|
|
|
w
|
2021-11-08 12:33:42
|
after 27000 pages i only ran into 1 bad jpg which was because it was cmyk
|
|
|
eddie.zato
|
2021-11-08 03:52:29
|
I losslessly transcoded about 150k jpg files and had no problems, but I cleared the metadata with `jpegtran` before that.
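Something along those lines, as a sketch (filenames are placeholders; `jpegtran -copy none` drops the metadata while leaving the JPEG coefficients untouched):
```bash
# strip metadata losslessly, then transcode the clean JPEG to JXL
jpegtran -copy none -outfile clean.jpg original.jpg
cjxl clean.jpg out.jxl
```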
|
|
2021-11-08 03:58:53
|
One of the latest issues on metadata
https://github.com/libjxl/libjxl/issues/803
|
|
|
_wb_
|
2021-11-08 06:12:41
|
Try building cjxl with `./ci.sh opt`
|
|
2021-11-08 06:12:57
|
That way you get more informative error messages
|
|
|
Scope
|
2021-11-14 12:03:58
|
https://security.googleblog.com/2021/11/clusterfuzzlite-continuous-fuzzing-for.html
|
|
|
|
Vincent Torri
|
2021-11-16 03:51:46
|
hello
|
|
2021-11-16 03:52:05
|
i would like to add libjxl to our toolkit
|
|
2021-11-16 03:52:15
|
so i'm playing with the API
|
|
2021-11-16 03:52:46
|
i've looked at the jxlinfo.c example and i have a question about the infinite loop
|
|
2021-11-16 03:53:30
|
afaiu, the infinite loop is to load more image data to get basic info (because fread() is used)
|
|
2021-11-16 03:54:12
|
but if i have the whole file already mmap'ed in memory, there is no infinite loop needed, right?
|
|
2021-11-16 03:56:22
|
i think that making a few API calls in succession is enough
|
|
2021-11-16 03:57:05
|
hmm, maybe not the correct chan
|
|
|
Jyrki Alakuijala
|
2021-11-17 01:18:00
|
I believe you are correct
|
|
2021-11-17 01:18:17
|
if it is unclear from the API docs, consider filing an issue against it
|
|
|
Scope
|
2021-11-18 11:51:23
|
https://arxiv.org/abs/2111.09219
|
|
2021-11-18 11:51:30
|
> The JPEG compression format has been the standard for lossy image compression for over multiple decades, offering high compression rates at minor perceptual loss in image quality. For GPU-accelerated computer vision and deep learning tasks, such as the training of image classification models, efficient JPEG decoding is essential due to limitations in memory bandwidth. As many decoder implementations are CPU-based, decoded image data has to be transferred to accelerators like GPUs via interconnects such as PCI-E, implying decreased throughput rates. JPEG decoding therefore represents a considerable bottleneck in these pipelines. In contrast, efficiency could be vastly increased by utilizing a GPU-accelerated decoder. In this case, only compressed data needs to be transferred, as decoding will be handled by the accelerators. In order to design such a GPU-based decoder, the respective algorithms must be parallelized on a fine-grained level. However, parallel decoding of individual JPEG files represents a complex task. In this paper, we present an efficient method for JPEG image decompression on GPUs, which implements an important subset of the JPEG standard. The proposed algorithm evaluates codeword locations at arbitrary positions in the bitstream, thereby enabling parallel decompression of independent chunks. Our performance evaluation shows that on an A100 (V100) GPU our implementation can outperform the state-of-the-art implementations libjpeg-turbo (CPU) and nvJPEG (GPU) by a factor of up to 51 (34) and 8.0 (5.7). Furthermore, it achieves a speedup of up to 3.4 over nvJPEG accelerated with the dedicated hardware JPEG decoder on an A100.
|
|
2021-11-18 11:58:49
|
https://github.com/weissenberger/multians
|
|
|
Jyrki Alakuijala
|
2021-11-18 03:44:47
|
good times for compression!
|
|
|
|
JonDoe
|
2021-11-20 09:33:25
|
brunsli still better
|
|
|
_wb_
|
2021-11-20 09:35:53
|
Define 'better'
|
|
2021-11-20 09:40:04
|
Brunsli is sometimes slightly denser, in particular for 420 jpegs, but it has some disadvantages:
- no parallel decode
- no progressive decode
- not always denser
- always have to convert back to jpeg to see the image (unlikely that anything will natively display brunsli)
- cannot do things like adding an alpha channel or an overlay to a jpeg, which is in principle possible with jxl (though no tool implements it atm)
|
|
2021-11-20 09:42:22
|
Also jxl should decode slightly faster (even in the single-core case), and the jxl jpeg recompression still has some (small) margin for encoder improvements, while in brunsli I don't think there are many degrees of freedom left to optimize in the encoder
|
|
2021-11-20 09:43:39
|
Also jxl has the option to apply gaborish/epf/noise to improve the appearance of a recompressed jpeg
|
|
2021-11-20 09:44:10
|
So in my opinion, TL;DR: no.
|
|
|
lithium
|
2021-11-20 02:58:41
|
looks like some non-photo websites and CDNs have started using lossy webp...
It's really sad, even medium quality 444 lossy jpeg is still better than webp 420,
using 420 on still images is always a bad idea...
For now avif lossy is strong at medium quality,
but I think we still need a high quality lossy, high fidelity still image encoder for non-photo content,
I understand lossless is better for non-photo content,
but I guess for web use, high quality lossy is also an important choice.
|
|
|
|
Deleted User
|
2021-11-21 02:15:57
|
Anyone with an AUR account? I can't flag FLIF package as out-of-date...
https://aur.archlinux.org/packages/flif
|
|
|
diskorduser
|
2021-11-21 05:57:01
|
Is flagging out of date really important?
|
|
2021-11-21 05:57:23
|
For that package?
|
|
2021-11-21 06:04:23
|
Ok. I flagged it.
|
|
|
|
Deleted User
|
2021-11-21 07:09:52
|
Thanks!
|
|
|
Fraetor
|
2021-11-22 02:32:44
|
It looks like it is the same issue.
|
|
|
thedonthe
|
2021-11-22 05:09:10
|
anything I can do to move along https://github.com/libjxl/libjxl/pull/864 and https://github.com/libjxl/libjxl/pull/855 ?
|
|
2021-11-22 05:10:22
|
been working on https://github.com/WebKitForWindows/WebKitRequirements/pull/70 which would add libjxl for the Windows port of WebKit
|
|
2021-11-22 09:29:51
|
<@!794205442175402004> anything i can do to get those landed?
|
|
|
_wb_
|
|
thedonthe
|
2021-11-22 09:31:03
|
have a few other patches around building `libjxl` using `vcpkg` which is how we build out our requirements for WebKit on Windows
|
|
|
_wb_
|
2021-11-22 09:31:26
|
i'm not very good at building/portability stuff so I'm a bit hesitant to just go ahead and merge them myself
|
|
|
thedonthe
|
2021-11-22 09:31:28
|
we recently landed support of JPEG XL for non Apple ports in WebKit
|
|
2021-11-22 09:31:41
|
is there anyone in particular whod be good to involve?
|
|
|
_wb_
|
2021-11-22 09:31:54
|
<@!604964375924834314> maybe?
|
|
|
spider-mario
|
2021-11-22 09:33:05
|
maybe not right now but I would be happy to have a look tomorrow
|
|
|
thedonthe
|
2021-11-22 09:34:43
|
thatd be great i have a few more patches around `find_package` which would do more than just a pkgconfig search for a library since that's typically not around on Windows
|
|
|
lithium
|
2021-11-23 07:46:41
|
Could someone teach me how to create a debug (half-decoded) image for a jxl separate-option image file?
|
|
2021-11-23 07:52:49
|
ok, I found that.
> https://github.com/libjxl/libjxl/pull/466
|
|
2021-11-23 07:54:33
|
It's very interesting to see those decoded images.
|
|
2021-11-23 08:00:05
|
I guess separate option not create patch for some manga case,
but still better than pure DCT.
|
|
2021-11-24 04:37:36
|
I think for some manga cases, the jxl separate option can avoid the DCT worst case,
but for now, the av1 palette prediction (lossy) compression rate is still better,
does the jxl separate option also use something like a palette method for non-DCT areas?
|
|
|
_wb_
|
2021-11-24 07:16:55
|
yes, it can use any modular transform, including palette
|
|
2021-11-24 07:20:06
|
there's room for improvement not just for manga but in general when there are hard edges with more or less smooth surfaces on both sides. Selectively using modular in those regions and vardct in the other regions could help a lot; I'll probably try to investigate how to make a good detection for that somewhere in the first half of 2022...
|
|
|
diskorduser
|
2021-11-25 04:33:31
|
wdyt about adding this server there?
https://github.com/discord/discord-open-source
|
|
2021-11-25 04:37:41
|
Huh. It needs 1000 members :(
|
|
|
|
Deleted User
|
2021-11-25 04:29:40
|
Or 1000 GitHub stars.
|
|
|
diskorduser
|
2021-11-25 06:37:27
|
Oh. I forgot to star libjxl 🤣
|
|
2021-11-26 06:38:19
|
https://github.com/NVIDIAGameWorks/NVIDIAImageScaling/releases/tag/v1.0.0
|
|
|
|
Deleted User
|
2021-11-26 06:58:04
|
Isn't this just a green ripoff of FFX CAS & FSR?
|
|
|
w
|
2021-11-26 10:11:33
|
you cant just say ripoff
|
|
|
lithium
|
2021-12-03 05:10:37
|
Interesting: pixel art png and the jxl separate option.
The debug decode images give me a transparent png.
> cjxl -d 0.5 -e 8 --separate=1 --epf=3 --num_threads=12
> djxl --bits_per_sample=16 --num_threads=12 -s 8
|
|
2021-12-03 05:12:53
|
sample
|
|
2021-12-03 05:48:30
|
I expected the whole image to use modular encoding,
but it looks like in this case some parts use vardct.
|
|
2021-12-03 05:49:12
|
original
|
|
|
diskorduser
|
2021-12-07 07:38:21
|
Build with -s?
|
|
|
lithium
|
|
diskorduser
Build with -s? π€π€π€
|
|
2021-12-08 07:58:04
|
I usually use djxl; here is some information about djxl -s from Jon's PR.
> https://github.com/libjxl/libjxl/pull/466
> Here is an example of an image decoded with djxl -s 8 so you can
> see the nonphoto parts sharply (since they are encoded first, as a patch)
> and the photo parts blurred (since you see the upsampled DC):
>
> -s 1,2,4,8,16, --downsampling=1,2,4,8,16
> maximum permissible downsampling factor (values greater than 16 will return the LQIP if available)
|
|
|
diskorduser
|
2021-12-09 07:56:35
|
Could someone add kimageformats to jpegxl.info?
|
|
2021-12-09 07:57:56
|
|
|
2021-12-10 04:19:42
|
It's going to support jxl now πππ
|
|
|
eddie.zato
|
|
diskorduser
Could someone add kimageformats to jpegxl.info?
|
|
2021-12-10 07:00:58
|
Done
https://github.com/jxl-community/jxl-community.github.io/pull/2
|
|
2021-12-10 10:53:36
|
This could be a list of some major apps such as mainstream browsers, Qt, and WIC, with a details disclosure element for the others.
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/details
|
|
|
BlueSwordM
|
|
diskorduser
It's going to support jxl now πππ
|
|
2021-12-10 04:58:49
|
What is this?
|
|
|
diskorduser
|
2021-12-10 04:59:21
|
Maui gallery app. Works on Android too
|
|
|
BlueSwordM
|
|
diskorduser
Maui gallery app. Works on Android too
|
|
2021-12-10 05:00:25
|
Ping me when it gets released.
|
|
2021-12-10 05:00:32
|
I'll do a post on r/JPEG-XL, and r/android.
|
|
|
lithium
|
2021-12-12 04:36:00
|
Uploading an interesting manga sample, using VarDCT and the JXL separate option.
> cjxl -d 1.0 -e 8 --separate=1 --epf=3 --num_threads=12
> djxl --num_threads=12 -s 8
|
|
|
diskorduser
|
2021-12-16 03:26:45
|
Can jxl thumbnail plugins in linux and windows decode large images >20 mp quickly in low resolution for thumbnails?
|
|
|
_wb_
|
2021-12-16 03:39:06
|
theoretically, yes, at least for VarDCT and lossy/progressive Modular
|
|
2021-12-16 03:40:30
|
but libjxl doesn't really have a good mechanism for it yet - you could stop decoding at the first progression event, and take the image you get from that, but that'll unnecessarily upsample the 1:8 image to 1:1 only for the thumbnailer to downsample it again
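A rough sketch of what that looks like with the current CLI (file names are placeholders): the `-s` flag quoted earlier from the PR lets djxl stop at an early progressive step, which is the cheap part of what a thumbnailer wants, even though the output still comes back upsampled to full size as described above.
> djxl -s 8 input.jxl thumbnail.png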
|
|
|
lithium
|
2021-12-16 05:46:23
|
I have some questions about the JXL separate option.
Theoretically, if we have a strong heuristic to separate photo and non-photo blocks,
can this process reduce sharp edge and line errors (tiny artifacts)
and avoid the DCT worst case for pure non-photo content?
(unlike AV1, which uses heavy filtering in the quantization process)
I read some discussion of AV1 encoder filters on the AV1 server;
I think for still images, using less filtering is probably a better choice in the quantization process?
|
|
|
BlueSwordM
|
|
diskorduser
Can jxl thumbnail plugins in linux and windows decode large images >20 mp quickly in low resolution for thumbnails?
|
|
2021-12-16 06:29:19
|
Yes, no problem in fact π
|
|
|
lithium
|
|
lithium
I have some questions about the JXL separate option.
Theoretically, if we have a strong heuristic to separate photo and non-photo blocks,
can this process reduce sharp edge and line errors (tiny artifacts)
and avoid the DCT worst case for pure non-photo content?
(unlike AV1, which uses heavy filtering in the quantization process)
I read some discussion of AV1 encoder filters on the AV1 server;
I think for still images, using less filtering is probably a better choice in the quantization process?
|
|
2021-12-17 12:09:26
|
I plan to write up some simple technical details (in Chinese) introducing the advantages of JXL for high-quality lossy encoding.
Can I describe the JXL separate option in my article?
> example:
> For high-quality lossy non-photo content, if you use AVIF,
> your image will probably lose some subtle grain and detail, and AVIF uses the YCbCr colorspace,
> which can introduce some color error in your image.
>
> Just wait a few months and you will get better lossy quality for non-photo content in the JXL format:
> JXL has a more advanced feature (separate) that can separate photo and non-photo blocks,
> reducing quantization error and better preserving grain and detail,
> and JXL has a new colorspace, XYB, which preserves colors better than YCbCr.
|
|
|
_wb_
|
2021-12-17 02:42:08
|
it will probably be a somewhat different approach - not really separating the image into two parts, but generalizing patches and dot detection into something that doesn't just try to detect text (repeated letters) and small ellipses (dot detection), but more generally image features that are better not DCT-encoded.
|
|
|
lithium
|
2021-12-17 03:29:57
|
I understand, I will rewrite that part.
If people ask me about the implementation status of the JXL separate option,
I guess I can reply like this?
> This feature will probably be implemented some time after February 2022.
|
|
|
_wb_
|
2021-12-17 04:20:43
|
yes, maybe i'll start working on it in January/February, I dunno how long it will take
|
|
|
diskorduser
|
2021-12-18 11:35:36
|
https://developer.nvidia.com/blog/rendering-in-real-time-with-spatiotemporal-blue-noise-textures-part-1/
|
|
|
nathanielcwm
|
|
diskorduser
https://developer.nvidia.com/blog/rendering-in-real-time-with-spatiotemporal-blue-noise-textures-part-1/
|
|
2021-12-18 02:15:20
|
how is nvidias version better π€
|
|
2021-12-18 02:17:03
|
https://github.com/NVIDIAGameWorks/SpatiotemporalBlueNoiseSDK/blob/main/LICENSE.docx
|
|
2021-12-18 02:17:07
|
<:kekw:808717074305122316>
|
|
2021-12-18 02:17:17
|
LICENSE.docx
|
|
2021-12-18 02:18:40
|
https://developer-blogs.nvidia.com/wp-content/uploads/2021/12/Stochastic-transparency.png
|
|
2021-12-18 02:19:19
|
with this test image the nvidia version does seem more pleasing though
|
|
|
lithium
|
2021-12-18 05:21:23
|
An efficient lossy cartoon image compression method π€
> https://link.springer.com/article/10.1007/s11042-019-08126-7
|
|
|
diskorduser
|
|
lithium
An efficient lossy cartoon image compression method π€
> https://link.springer.com/article/10.1007/s11042-019-08126-7
|
|
2021-12-19 04:05:45
|
Is there any encoder for this?
|
|
|
lithium
|
2021-12-19 08:07:46
|
I'm not sure;
I guess they don't have an encoder implementation for this compression method?
|
|
2021-12-20 09:06:16
|
Hello, I have some questions about bit depth and transform precision.
Could somebody teach me about this?
For lossy DCT content (WebP, JPEG),
do I need to output a 16-bit PNG to preserve transform precision?
For BMP (24/32-bit) and lossless WebP (8-bit), which are lossless and have no DCT,
do I also need to output a 16-bit PNG?
If I want to use a denoise filter on an 8-bit PNG, do I also need to output a 16-bit PNG?
(working with ffmpeg and ImageMagick)
I also noticed that ImageMagick has a Q16 (16 bits-per-pixel-component) precision version;
when I use an ffmpeg denoise filter, do I need to specify 16-bit in ffmpeg?
> https://stackoverflow.com/questions/48937738/imagemagick-what-does-q8-vs-q16-actually-mean
|
|
|
|
Deleted User
|
|
lithium
Hello, I have some questions about bit depth and transform precision.
Could somebody teach me about this?
For lossy DCT content (WebP, JPEG),
do I need to output a 16-bit PNG to preserve transform precision?
For BMP (24/32-bit) and lossless WebP (8-bit), which are lossless and have no DCT,
do I also need to output a 16-bit PNG?
If I want to use a denoise filter on an 8-bit PNG, do I also need to output a 16-bit PNG?
(working with ffmpeg and ImageMagick)
I also noticed that ImageMagick has a Q16 (16 bits-per-pixel-component) precision version;
when I use an ffmpeg denoise filter, do I need to specify 16-bit in ffmpeg?
> https://stackoverflow.com/questions/48937738/imagemagick-what-does-q8-vs-q16-actually-mean
|
|
2021-12-21 10:43:01
|
I tried my best to answer.
> For lossy DCT content (WebP, JPEG),
> do I need to output a 16-bit PNG to preserve transform precision?
Yes, it's recommended to avoid additional banding π« <:banding:804346788982030337>, both for input and output.
> For BMP (24/32-bit) and lossless WebP (8-bit), which are lossless and have no DCT,
> do I also need to output a 16-bit PNG?
24 (RGB) and 32 (RGBA) bits in BMP are for **all** channels, so it's 8 bits per channel. If you're decoding from BMP, lossless WebP or lossless <:JXL:805850130203934781> and you know there are only 8 bits, you don't need to force decoding to 16 bits. But for encoding? It depends. If you're generating a PNG file yourself (for example from a graphics program), you can export to 16 bits for better quality if you will next encode to lossless <:JXL:805850130203934781> (BMP and WebP don't support 16 bits per channel). But if you found a PNG file on the internet and it doesn't have more than 8 bits, then you can't do anything about that; encode it without changes.
> If I want to use a denoise filter on an 8-bit PNG, do I also need to output a 16-bit PNG?
> (working with ffmpeg and ImageMagick)
Probably yes, to avoid banding artifacts in your denoised PNG file, assuming you then want to encode to a better image format (like <:JXL:805850130203934781> π). But that's unnecessary if you want to share them on the internet in original quality, because 16-bit PNGs are *huge* and might cause problems.
> I also noticed that ImageMagick has a Q16 (16 bits-per-pixel-component) precision version;
> when I use an ffmpeg denoise filter, do I need to specify 16-bit in ffmpeg?
I'm not using IM or ffmpeg for 16-bit PNGs, but you probably can specify 16-bit output, to be sure. But *please* check them afterwards with TweakPNG, if you're on Windows!
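As an illustrative ImageMagick 7 invocation for the first point (just a sketch with placeholder file names; `-depth 16` asks for 16 bits per channel in the output PNG):
> magick input.jpg -depth 16 output.png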
|
|
|
lithium
|
|
I tried my best to answer.
> For lossy DCT content (WebP, JPEG),
> do I need to output a 16-bit PNG to preserve transform precision?
Yes, it's recommended to avoid additional banding π« <:banding:804346788982030337>, both for input and output.
> For BMP (24/32-bit) and lossless WebP (8-bit), which are lossless and have no DCT,
> do I also need to output a 16-bit PNG?
24 (RGB) and 32 (RGBA) bits in BMP are for **all** channels, so it's 8 bits per channel. If you're decoding from BMP, lossless WebP or lossless <:JXL:805850130203934781> and you know there are only 8 bits, you don't need to force decoding to 16 bits. But for encoding? It depends. If you're generating a PNG file yourself (for example from a graphics program), you can export to 16 bits for better quality if you will next encode to lossless <:JXL:805850130203934781> (BMP and WebP don't support 16 bits per channel). But if you found a PNG file on the internet and it doesn't have more than 8 bits, then you can't do anything about that; encode it without changes.
> If I want to use a denoise filter on an 8-bit PNG, do I also need to output a 16-bit PNG?
> (working with ffmpeg and ImageMagick)
Probably yes, to avoid banding artifacts in your denoised PNG file, assuming you then want to encode to a better image format (like <:JXL:805850130203934781> π). But that's unnecessary if you want to share them on the internet in original quality, because 16-bit PNGs are *huge* and might cause problems.
> I also noticed that ImageMagick has a Q16 (16 bits-per-pixel-component) precision version;
> when I use an ffmpeg denoise filter, do I need to specify 16-bit in ffmpeg?
I'm not using IM or ffmpeg for 16-bit PNGs, but you probably can specify 16-bit output, to be sure. But *please* check them afterwards with TweakPNG, if you're on Windows!
|
|
2021-12-21 10:48:25
|
I understand, thank you very much for your help π
I'm trying to convert all my images (BMP, PNG, JPG, WebP) to JXL,
so I'm trying to figure this part out.
|
|
|
|
Deleted User
|
2021-12-21 02:34:34
|
<@!794205442175402004> is my message above correct or did I make a mistake?
|
|
|
_wb_
|
2021-12-21 02:35:39
|
looks ok
|
|
|
|
Deleted User
|
2021-12-21 02:36:27
|
Thanks! π
|
|
|
lithium
|
2021-12-21 04:18:27
|
<@!794205442175402004> sorry for the ping, I wrote a simple guideline.
Could you check whether the guideline is correct and I didn't make any mistakes?
To avoid banding and artifacts and to preserve transform precision,
some transforms need a 16-bit PNG output to reduce precision loss.
## 16-bit PNG output necessary
* Color space transforms (YUV to RGB, i.e. JPG/lossy WebP decoding).
* Inverse discrete cosine transform (IDCT to pixels, i.e. JPG/lossy WebP decoding).
* Pixel edits (internally 16-bit: denoise, crop, resize).
## 16-bit PNG output unnecessary
* Decoding an 8-bit lossless image format to pixels (RGB to RGB),
including PNG (8-bit), lossless WebP (8-bit), and lossless BMP (8-bit).
## Reducing bit depth
If the bit depth needs to be reduced (16 to 8), a 4x4 ordered dither must be used.
|
|
|
_wb_
|
2021-12-21 04:20:15
|
sounds ok, of course there's always a trade-off between speed/memory and precision
|
|
|
lithium
|
2021-12-21 04:20:49
|
I understand, Thank you very much π
|
|
|
|
Deleted User
|
2021-12-21 04:59:00
|
Cropping doesn't "edit pixels" so you can keep the original bit depth. Also, does WebP really use DCTs?
|
|
|
_wb_
|
2021-12-21 05:13:40
|
lossy webp does, lossless doesn't
|
|
|
lithium
|
|
Cropping doesn't "edit pixels" so you can keep the original bit depth. Also, does WebP really use DCTs?
|
|
2021-12-22 08:24:11
|
Hello Matt Gore,
Could you teach me which conversion methods need a 16-bit PNG output in ImageMagick or ffmpeg,
and which conversion methods can keep the original bit depth?
I read your comment about dithering methods on the JXL issue,
about ImageMagick's ordered-dither o4x4,<value>;
could you teach me which value I should set to get a high-quality dither?
> magick convert -depth 8 -ordered-dither o4x4,256 input.png output.png
> https://gitlab.com/wg1/jpeg-xl/-/issues/136
> https://github.com/libjxl/libjxl/issues/541
|
|
|
The_Decryptor
|
2021-12-22 09:33:16
|
I think IM matches the output depth to the input depth, even if internally it operates at a higher bit depth
|
|
2021-12-22 09:34:10
|
But you can override it with `+depth` which resets it to the IM internal value, or using one of the PNG format modifiers, e.g. `PNG48:output.png` will output a 16bit RGB PNG
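For example (illustrative only, with placeholder file names), assuming a Q16 build of IM, either of these should then produce a 16-bit-per-channel PNG from an 8-bit input:
> magick input.png +depth output.png
> magick input.png PNG48:output.png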
|
|
|
|
Deleted User
|
|
lithium
Hello Matt Gore,
Could you teach me which conversion methods need a 16-bit PNG output in ImageMagick or ffmpeg,
and which conversion methods can keep the original bit depth?
I read your comment about dithering methods on the JXL issue,
about ImageMagick's ordered-dither o4x4,<value>;
could you teach me which value I should set to get a high-quality dither?
> magick convert -depth 8 -ordered-dither o4x4,256 input.png output.png
> https://gitlab.com/wg1/jpeg-xl/-/issues/136
> https://github.com/libjxl/libjxl/issues/541
|
|
2021-12-22 10:54:14
|
If in doubt, just dither. If you crop at 16 Bit and apply dither, then it should just round back to the initial values across areas of solid color. So no harm done by always dithering.
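(Concretely, assuming the usual 8-to-16-bit scaling: an 8-bit value v maps to 257·v in 16 bits, and 257·v / 65535 · 255 = v exactly, so quantizing back to 8 bits is error-free for untouched pixels, with or without dither.)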
|
|
|
spider-mario
|
2021-12-22 02:25:08
|
you mean by dithering to 16 bits?
|
|
|
|
Deleted User
|
2021-12-22 02:29:03
|
I mean dithering at 16 Bits.
|
|
2021-12-22 02:29:39
|
or what do you mean by dithering *to* something?
|
|
|
spider-mario
|
2021-12-22 02:52:29
|
the same thing as you do, IΒ think
|
|
2021-12-22 02:52:53
|
dithering with an amount of noise that is appropriate for quantization to 16 bits
|
|
|
|
Deleted User
|
2021-12-22 02:54:22
|
ok, I meant doing `-ordered-dither o4x4,256` at 16 Bits.
|
|
|
spider-mario
|
2021-12-22 03:00:16
|
ah, I am not very familiar with `-ordered-dither` but it looks potentially like more dither noise than that
|
|
2021-12-22 03:03:15
|
apparently not
|
|
|
lithium
|
2021-12-22 05:48:32
|
Sorry, I still have some questions about bit depth and dithering.
Could somebody teach me?
1. Why does the ImageMagick command ```-ordered-dither o4x4,256```
use level 256? What does this level value mean?
> -ordered-dither threshold_map{,level...}
> Dither the image using a pre-defined ordered dither threshold map specified,
> and a uniform color map with the given number of levels per color channel.
2. Is applying dither always a lossless process?
That is, if I apply dither twice, do I not need to worry about any loss?
I also saw Matt Gore mention error diffusion;
if I care about quality, is error diffusion better than ordered dither?
3. ImageMagick has a (16-bit depth) Q16 version,
but I can't find much detail about bit depth for ffmpeg,
only some comments saying ffmpeg will use the input bit depth.
Should I worry about ffmpeg's transform precision for still images?
4. When an input image (originally 8-bit) goes through ImageMagick convert (internally 16-bit),
should I use -ordered-dither o4x4,256 and go back to 8-bit for the output image,
or should I output a 16-bit PNG? What are the best practices?
(I understand that for an originally 16-bit image, 16-bit always needs to be used.)
|
|
|
|
Deleted User
|
2021-12-22 06:09:26
|
1. A level of 256 means 256 values per channel, which is exactly how many there are in 8-bit images.
2. I guess you should reach a fixpoint with ordered dithering.
Error diffusion is probably overkill for 8 Bit SDR but it might be easy to use with ffmpeg.
3. Just try it with a smooth gradient and see what happens.
4. I would store whatever is smaller (and that depends on the image content).
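(Following the documentation quoted above, the level is just the number of output values per channel, so presumably a target depth of b bits per channel corresponds to 2^b levels: o4x4,256 for 8-bit output, o4x4,16 for 4-bit, and so on.)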
|
|
|
lithium
|
2021-12-22 06:33:28
|
I understand. Matt Gore, The_Decryptor, thank you very much π
I will test ffmpeg later.
|
|
|
diskorduser
|
2021-12-22 06:47:38
|
Is fast JXL fast enough to do screen casting? Lossless 1080p, 8-bit @ 60 fps with minimal latency
|
|
|
|
veluca
|
2021-12-22 06:52:25
|
that's about 120MP/s so yes
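(For context: 1920 × 1080 ≈ 2.07 MP per frame, and 2.07 MP × 60 fps ≈ 124 MP/s.)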
|
|
2021-12-22 06:52:48
|
... on one core
|
|
2021-12-22 06:52:56
|
but the decoder is not fast enough xD
|
|
|
_wb_
|
2021-12-22 06:53:30
|
It's going to be a heavy stream though, and decode will atm require lots of cores :)
|
|
2021-12-22 06:54:52
|
Doing cropped frames should make both encode and decode cheaper though
|
|
2021-12-22 06:56:03
|
Assuming your screen is not something like a 3d game where the whole screen changes (but then you should really use an actual video codec with motion compensation)
|
|
|
diskorduser
|
2021-12-22 07:12:08
|
Ah. I was thinking about streaming / screen casting 3d games from handheld console like steam deck to TV :(
|
|
|
|
Deleted User
|
2021-12-22 07:20:01
|
Then just use a video codec. Lossless <:JXL:805850130203934781> would be helpful only with windows that can be effectively coded as patches and then applied to every frame at different positions to recreate their original movement across the screen.
|
|
|
Scope
|
2021-12-22 07:23:24
|
Yes, this would be a very bad use case. It would be easier to stream in any common lossy video format, especially those that support hardware encoding/decoding; with a high enough bitrate, artifacts and losses will be unnoticeable to the eye and the bitrate will still be noticeably lower than lossless encoding
|
|
|
spider-mario
|
2021-12-22 07:26:35
|
fjxl might be a good output codec for dosbox video recording
|
|
2021-12-22 07:29:14
|
currently, they use: https://wiki.multimedia.cx/index.php/DosBox_Capture_Codec
|
|
|
_wb_
|
2021-12-22 07:45:31
|
Could be a fun project to convert that to jxl
|
|
2021-12-22 07:46:19
|
I suppose we can use kAdd instead of xor, not quite the same thing but probably close enough
|
|
|
|
Deleted User
|
2021-12-23 10:41:36
|
https://discord.com/channels/794206087879852103/803645746661425173/923688644307460117
<@!794205442175402004> nice to see some live brainstorming :D
|
|
|
diskorduser
|
2021-12-24 01:20:00
|
Is it possible to make JXL VarDCT or lossy modular faster for screen casting, with lower latency than video formats?
|
|
|
_wb_
|
2021-12-24 06:08:45
|
That's not what it is designed for
|
|