|
Fox Wizard
|
2021-02-14 08:20:40
|
<a:kms:717156594596446288>
|
|
|
_wb_
|
2021-02-15 07:04:28
|
Does your eye color have an influence on your perception of color?
I would assume not, but then again, why not?
Is there research on this?
|
|
|
190n
|
2021-02-15 07:06:55
|
Idk if there's research, but even independently of eye color I kinda doubt that everyone's perception of certain colors is the same
|
|
2021-02-15 07:07:45
|
Bc the only way to describe colors is relative, right?
|
|
2021-02-15 07:08:35
|
If you ask me what red looks like, I can tell you that blood is red and fire trucks are red. Even if you see the color I see as, say, yellow when you look at those things, you'd still see them as being the same color
|
|
|
lithium
|
2021-02-15 09:20:30
|
https://discord.com/channels/794206087879852103/794206087879852106/810163961378635797
Continuing about SuperREP's LZ77, some flag information:
LZ77 algorithm
The -m1..-m3 compression modes implement the LZ77 compression algorithm.
-m1: the input file is split into chunks of L bytes (specified by the -c option, 512 bytes by default). For every chunk,
the program stores an SHA-1 hash. When it later encounters an L-byte chunk with the same SHA-1 value,
it replaces the new chunk with a reference to the previous one, assuming that the chunks are equal.
-m2: same as -m1, but only a weak hash is stored for every chunk. When the program encounters a chunk with the same hash value,
it rereads the old chunk from the input file in order to compare the data.
-m3: same as -m2, but the program also compares the bytes before and after equal chunks in order to extend the match as much as
possible. The -l option may be used to specify the minimum match length.
On decompression, every squeezed chunk is restored by reading the contents of the previous equal chunk from the output file.
The compression algorithm in -m2/-m3 modes rereads the same chunks in order to compare them with the current data.
This puts a heavy load on the OS I/O system and disk cache.
The algorithm requires that the input file when compressing (except in -m1 mode) and the output file when decompressing be
seekable. If that's not the case, you may use the -temp option to tell the program to create a temporary file that stores a copy
of all uncompressed data.
The algorithm also needs to know the size of the input file in advance when compressing. If the program cannot determine the file
size (e.g. when compressing from stdin), the file size should be supplied via the -s option. Values larger than the actual file size
will work as well; by default 25 GB is assumed.
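(A minimal sketch of the -m1 idea as described above; illustrative TypeScript, not SuperREP's actual code. The Token type and function name are made up:)
```ts
// Sketch of -m1: hash every L-byte chunk and replace repeats with a reference to the first occurrence.
import { createHash } from "node:crypto";

type Token =
  | { kind: "literal"; data: Buffer }        // chunk seen for the first time
  | { kind: "match"; chunkIndex: number };   // same SHA-1 as an earlier chunk

function dedupChunks(input: Buffer, chunkSize = 512): Token[] {
  const seen = new Map<string, number>();    // SHA-1 hex -> index of first chunk with that hash
  const tokens: Token[] = [];
  for (let off = 0, i = 0; off < input.length; off += chunkSize, i++) {
    const chunk = input.subarray(off, Math.min(off + chunkSize, input.length));
    const digest = createHash("sha1").update(chunk).digest("hex");
    const prev = seen.get(digest);
    if (prev !== undefined && chunk.length === chunkSize) {
      // -m1 trusts the hash; -m2/-m3 would reread and byte-compare the earlier chunk here.
      tokens.push({ kind: "match", chunkIndex: prev });
    } else {
      seen.set(digest, i);
      tokens.push({ kind: "literal", data: chunk });
    }
  }
  return tokens;
}
```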
|
|
|
_wb_
|
2021-02-15 10:06:17
|
`-m1` sounds very risky. Hash collisions can happen. SHA-1 is 20 bytes, so when using it on 512 byte chunks, the math is not that hard: there are 256^512 possible 512-byte chunks, there are 256^20 hashes, so on average, there are (256^512) / (256^20) possible chunks per hash, i.e. every hash has (on average) 256^492 chunks that will collide into that hash.
|
|
2021-02-15 10:07:23
|
You can just as well replace every 512-byte chunk with its 20-byte SHA-1 hash 🙂
|
|
|
lonjil
|
2021-02-15 10:12:03
|
It's not really risky since the probability of a collision is extremely small.
|
|
|
_wb_
|
|
190n
If you ask me what red looks like, I can tell you that blood is red and fire trucks are red. Even if you see the color I see as, say, yellow when you look at those things, you'd still see them as being the same color
|
|
2021-02-15 10:12:49
|
Yes, philosophically we will never know for any human sense if different people really experience it in the same way or not.
But I mean more pragmatically: you can test whether someone can see a certain contrast between two colors or not, and things like that. We know e.g. that some people have mutations in the genes that code some of the cone cell types (usually L or M), causing them to malfunction, have a somewhat different response curve, or not work at all, causing different variants of color blindness. This can be tested and is relatively well-understood.
|
|
|
lonjil
It's not really risky since the probability of a collision is extremely small.
|
|
2021-02-15 10:18:16
|
Is it? The probability of two random chunks colliding into the same hash is very small (I suppose it should be roughly 256^492 / 256^512). If you have a 1 GB file though, what's the probability that there are two colliding chunks amongst those 2 million chunks?
|
|
|
lonjil
|
2021-02-15 10:18:42
|
Low!
|
|
2021-02-15 10:19:08
|
SHA-1 collisions only happen with targeted attacks against known weaknesses in the algorithm.
|
|
2021-02-15 10:19:34
|
Getting a collision via brute force / bad luck over lots of data is close to impossible.
|
|
|
_wb_
|
2021-02-15 10:19:40
|
it's not just 2 million times the chance of a single collision though
|
|
2021-02-15 10:21:03
|
there are 1 trillion pairs of chunks that could collide in a 1 GB file
|
|
|
lonjil
|
2021-02-15 10:21:30
|
yeah. Still pretty small 😄
|
|
|
_wb_
|
2021-02-15 10:21:38
|
I guess
|
|
2021-02-15 10:22:03
|
1 / 256^20 is a small number
|
|
|
lonjil
|
2021-02-15 10:22:58
|
There are 2^160 possible hashes. Under the birthday problem, i.e. when all possible pairs are considered, you effectively halve the exponent (take the square root). So 2^80 is the order of magnitude of hashes you need to approach for collisions to become likely.
|
|
|
_wb_
|
2021-02-15 10:25:43
|
If billions of people used it to compress terabytes worth of data, it could happen
|
|
|
lonjil
|
2021-02-15 10:41:36
|
hm, 10 billion people times 10 terabytes of data, with 512-byte blocks, is 2^67.4 hashes. With 40-byte blocks it's 2^71 hashes. That's getting close. Still, a "starting to be possible" chance of a single collision in 100 zettabytes of data seems good enough for most uses. ZFS apparently uses 256-bit hashes, which blows that out of the water. Guess we could store the collective knowledge of humanity in a single ZFS system and not have to worry 🙂
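(A rough check of those numbers with the usual birthday-bound approximation p ≈ n²/(2·2^160); the helper name is illustrative only:)
```ts
// Birthday-bound collision probability for SHA-1 chunk dedup (valid while the result is << 1).
const HASH_SPACE = 2 ** 160;                       // number of distinct SHA-1 values

function collisionProbability(totalBytes: number, chunkSize: number): number {
  const n = totalBytes / chunkSize;                // number of hashed chunks
  return (n * n) / (2 * HASH_SPACE);               // p ≈ n^2 / (2 * 2^160)
}

// ~10 billion people * 10 TB each = 1e23 bytes (the "100 zettabytes" above):
console.log(collisionProbability(1e23, 512));      // ≈ 1.3e-8
console.log(collisionProbability(1e23, 40));       // ≈ 2.2e-6
```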
|
|
|
_wb_
|
2021-02-15 10:47:44
|
it's just that, philosophically, I don't like the idea of a compression algorithm that relies on luck for correctness – even if the chances look safe enough
|
|
|
lithium
|
2021-02-15 11:58:38
|
Just curious: are there other compressors similar to SuperREP
(i.e. using a similar compression algorithm), or do newer compressors have a better implementation?
I plan to use SuperREP like this: precomp + SuperREP + LZMA (or LZMA2)
https://github.com/schnaader/precomp-cpp
|
|
|
diskorduser
|
2021-02-15 12:53:35
|
Does it compress better than zpaq?
|
|
|
Dr. Taco
|
|
_wb_
Does your eye color have an influence on your perception of color?
I would assume not, but then again, why not?
Is there research on this?
|
|
2021-02-15 01:16:23
|
From my understanding, everyone has the same capability to perceive colors the same way, unless they have a cone/rod deficiency, which would result in a type of color blindness. The main difference between color perception and identification is psychological. It is based on how you mentally catalog colors: in what ways you group them, and how much you have studied color to learn the more obscure names. Can you identify maroon vs burgundy, chartreuse vs lime, mother of pearl vs eggshell, and dark orange vs brown? Does your culture and language even have words for these subtle differences? The linguistics of color are pretty interesting.
|
|
2021-02-15 01:26:51
|
here's a very short introductory video. It's from vox, so take it with a grain of salt, and if interested, look into the sources. https://www.youtube.com/watch?v=gMqZR3pqMjg
|
|
2021-02-15 01:32:53
|
A lot of basic misunderstanding has stemmed from these cultural differences and labeling. People wrongly assume that because a group of people did not label something as blue, or could not differentiate purple from green, they must suffer from a color blindness or see colors differently, when in fact they see the same stuff; they just don't catalog it the same way mentally. They could be trained to catalog using a different system and learn the distinctions between subtle differences that matter more to other cultures. They don't go through a radical transformation where they suddenly see all new colors. What they see is the same, they just know more words to better describe or differently group the color spectrum.
|
|
2021-02-15 01:34:30
|
So if you see any pop-psychology claiming that one race, or group of people in history couldn't see "blue", or were physically different, they are repeating old bad assumptions that were disproved
|
|
2021-02-15 01:44:59
|
If you really want to dive into the world of "color", Ray Maxwell had a podcast for a few years that mostly focused on color. He's spent decades studying and learning about color deeply.
**Episodes list:** https://twit.tv/shows/maxwells-house
https://www.youtube.com/watch?v=tKDSd3c75b0&t=14m30s
|
|
|
_wb_
|
2021-02-15 01:58:04
|
Thanks for the pointers, it's interesting stuff indeed.
|
|
|
|
paperboyo
|
2021-02-15 02:15:14
|
Yep, cool video, thanks! Luckily, we can make sure that humans at least are **looking** at the same stuff. Or, mostly, same. No media queries for how exhausted you are or of what age or what’s the colour of your wall… Yet 😉 .
This also cool: https://youtu.be/32LGlKwACfI
|
|
|
spider-mario
|
|
_wb_
Yes, philosophically we will never know for any human sense if different people really experience it in the same way or not.
But I mean more pragmatically: you can test whether someone can see a certain contrast between two colors or not, and things like that. We know e.g. that some people have mutations in the genes that code some of the cone cell types (usually L or M), causing them to malfunction, have a somewhat different response curve, or not work at all, causing different variants of color blindness. This can be tested and is relatively well-understood.
|
|
2021-02-15 04:34:36
|
there is variation in the matching functions
|
|
2021-02-15 04:34:52
|
how much is contributed by eye color, I don’t know
|
|
2021-02-15 04:39:59
|
(the reference I actually had in mind plotted the XYZ matching functions instead of LMS but I can’t seem to find it again)
|
|
|
Scope
|
2021-02-15 07:23:23
|
https://twitter.com/brucel/status/1361378464335597570
|
|
2021-02-15 07:24:15
|
https://github.com/w3c/csswg-drafts/issues/5889
|
|
|
_wb_
|
2021-02-18 11:25:26
|
Fun fact: the original name I used when making FUIF, was FLOF
|
|
|
Crixis
|
2021-02-18 11:30:38
|
FLOP is funnier
|
|
|
_wb_
|
2021-02-18 11:33:11
|
Free LOssy image Format was my reasoning, but "lossy" and "lossless" both start with "lo" and it could still do lossless, so FUIF made more sense
|
|
2021-02-18 11:33:30
|
https://tenor.com/view/party-minions-dancing-party-time-happy-gif-15045334
|
|
2021-02-18 11:33:57
|
especially since it means "party" in my native language, which is kind of fun
|
|
|
|
Deleted User
|
2021-02-18 11:34:13
|
And then came Pik...
|
|
|
_wb_
|
2021-02-18 11:34:34
|
and it has good ambiguous pronunciation properties, which is important in image codec naming conventions
|
|
2021-02-18 11:35:13
|
yes, pik is not a very good name in my native language
|
|
|
lonjil
|
2021-02-18 12:13:45
|
spoiler: ||it means dick||
|
|
|
Crixis
|
2021-02-18 12:18:15
|
Pik image
|
|
|
Dr. Taco
|
2021-02-18 05:51:48
|
Free LOssy Photo/Picture/Image Digital Imaging Kit
|
|
|
190n
|
2021-02-18 06:31:54
|
android 12 is adding avif https://www.androidpolice.com/2021/02/18/the-first-android-12-preview-lands-today-with-more-changes-than-we-expected/
|
|
|
_wb_
|
2021-02-18 06:39:56
|
Great, then in about 5 years, android apps that want to use avif will no longer need to package their own avif decoder
|
|
|
190n
|
2021-02-18 06:44:38
|
more than 5 years probably
|
|
|
|
Deleted User
|
2021-02-18 06:45:04
|
Heck, even my not-so-old Samsung Galaxy Note 9 won't decode them by default!
|
|
2021-02-18 06:45:40
|
Smartphone adoption is also necessary for a codec to be successful
|
|
2021-02-18 06:47:11
|
I think I'll get JPEG XL and AVIF decoders included in Samsung Gallery (or maybe even sooner in Google Photos) before they land in One UI 2.5, which is only getting security patches now
|
|
2021-02-18 06:48:54
|
I think that on Android a better solution would be including the formats in the apps themselves, because we **__still__** can't rely on those goddamn Android OS updates!
|
|
2021-02-18 06:51:46
|
First Google Photos, it's OEM-independent, is really popular, is integrated with one of the biggest to-be use cases – Google Photos online service, and can even act on its own as a standalone photo gallery app without logging in.
|
|
|
_wb_
|
2021-02-18 06:52:04
|
Getting in the apps themselves is the only way to make rapid progress
|
|
2021-02-18 06:53:25
|
Photos and Camera apps could use jxl if only to save internal storage on the phone while still using old jpegs
|
|
2021-02-18 06:53:42
|
Chrome can get jxl support
|
|
2021-02-18 06:54:00
|
Apps for facebook, twitter, etc can do the same
|
|
|
|
Deleted User
|
|
First Google Photos, it's OEM-independent, is really popular, is integrated with one of the biggest to-be use cases – Google Photos online service, and can even act on its own as a standalone photo gallery app without logging in.
|
|
2021-02-18 06:54:30
|
Then OEM photo galleries, Samsung is the biggest (the most users will get it) and probably wants to be associated with novelties.
|
|
2021-02-18 06:55:01
|
And then camera apps, in order to create the JXL files. Again, Samsung should be the first. Don't worry about Pixels, they always get the latest Android, so they'll get their JXL decoders and encoders with the OS itself.
|
|
|
_wb_
|
2021-02-18 06:56:07
|
Getting the decoder binary size down on ARM to make it less of a burden to include a jxl decoder in apks has been something we have looked at again, and probably will keep looking at
|
|
|
|
Deleted User
|
2021-02-18 06:57:29
|
And if Google decides to include JXL encoder inside GCam, then lots of tech-savvy ppl will want to try it out
|
|
2021-02-18 06:58:29
|
Especially those who install it unofficially (modded GCam APKs)
|
|
2021-02-18 06:59:16
|
If they notice that it provides better compression, quality AND their apps can view it, they'll switch to it
|
|
|
_wb_
|
2021-02-18 07:07:09
|
Full adoption will be a long battle, but I hope we can get a momentum going where software devs feel pushed to jump on the bandwagon and add jxl support
|
|
2021-02-19 05:32:19
|
I should make an update of this old blogpost: https://cloudinary.com/blog/one_pixel_is_worth_three_thousand_words
|
|
2021-02-19 05:33:19
|
see how big 1-pixel AVIF, HEIC, WebP2, and JXL is
|
|
2021-02-19 05:33:57
|
and also how compressed size grows for a N x N solid color image
|
|
|
Scope
|
2021-02-19 06:36:28
|
https://mangadex.org/thread/425240/1/
|
|
2021-02-19 06:36:33
|
|
|
2021-02-19 06:37:00
|
`uncompressed` (lossless?) <:Thonk:805904896879493180>
|
|
2021-02-19 06:37:28
|
<:PepeHands:808829977608323112>
|
|
|
_wb_
|
2021-02-19 06:44:03
|
We spent years designing this uncompressed image format
|
|
2021-02-19 06:45:36
|
Long debates on big-endian vs little-endian
|
|
2021-02-19 06:45:45
|
Interleaved or planar
|
|
2021-02-19 06:46:06
|
Whether to do bit packing or not
|
|
2021-02-19 06:46:11
|
It was rough
|
|
2021-02-19 06:46:19
|
Bitter discussions
|
|
2021-02-19 06:46:38
|
And then of course the header syntax
|
|
|
Nova Aurora
|
2021-02-19 06:46:50
|
isn't that what BMP is for?
|
|
|
_wb_
|
2021-02-19 06:46:54
|
Human readable like ppm or not
|
|
2021-02-19 06:47:17
|
Endless committee meetings about it
|
|
2021-02-19 06:47:35
|
I am kidding, of course
|
|
2021-02-19 06:48:35
|
A nice uncompressed format would be nice actually. PFM does not do alpha, PAM does not do float...
|
|
2021-02-19 06:49:44
|
BMP is a surprisingly complicated and ugly format
|
|
2021-02-19 06:50:13
|
It is basically a memory dump of the internal struct Microsoft used for storing images
|
|
2021-02-19 06:50:32
|
Which evolved over the years
|
|
2021-02-19 06:50:59
|
It can do some 'compressed' stuff like RLE and palette representation
|
|
2021-02-19 06:51:24
|
It is mostly a mess to make a complete BMP reader
|
|
2021-02-19 06:51:38
|
Not as bad as TIFF though
|
|
2021-02-19 06:52:15
|
Anything Adobe touches becomes extremely hairy over time
|
|
2021-02-19 06:52:40
|
https://c.tenor.com/f1-BGCzrFp0AAAAM/star-wars-smiling.gif
|
|
|
Nova Aurora
|
2021-02-19 06:57:35
|
TIFF: for when you want png, jpeg, uncompressed images, and every color space ever invented, in the same file!
|
|
|
_wb_
|
2021-02-19 06:59:42
|
It also has the Adobe mentality of doing everything in every way possible instead of just choosing a convention and sticking to it
|
|
2021-02-19 07:00:17
|
Which means a decoder needs to implement all possible ways of doing it
|
|
2021-02-19 07:00:44
|
It's encoder convenience over decoder simplicity
|
|
2021-02-19 07:01:03
|
You can do big-endian or little-endian
|
|
2021-02-19 07:01:49
|
Not just for the data itself, for all the header stuff too, including the magic file signature
|
|
2021-02-19 07:02:08
|
You can have 0=black, maxval=white
|
|
2021-02-19 07:02:26
|
Or you can do the opposite, 0=white, maxval=black
|
|
2021-02-19 07:02:39
|
You can do planar or interleaved
|
|
2021-02-19 07:03:02
|
You can do tiles, stripes, or just everything at once
|
|
|
Nova Aurora
|
|
_wb_
https://c.tenor.com/f1-BGCzrFp0AAAAM/star-wars-smiling.gif
|
|
2021-02-19 07:04:05
|
Just let the wookie win, for god's sake!
|
|
|
_wb_
|
2021-02-19 07:04:36
|
And then every now and then Adobe surprises you with yet another thing you can do in tiff
|
|
|
Nova Aurora
|
2021-02-19 07:05:00
|
Is it turing complete yet?
|
|
2021-02-19 07:05:15
|
we need svg levels of feature creep
|
|
|
_wb_
|
2021-02-19 07:05:38
|
Not afaik
|
|
2021-02-19 07:06:08
|
The 24-bit floating point representation in tiff surprised me a while ago
|
|
2021-02-19 07:06:15
|
It's not in the spec
|
|
2021-02-19 07:06:26
|
But photoshop decided to start doing it
|
|
2021-02-19 07:08:31
|
You can have entire PSD files as 'metadata' in TIFF
|
|
|
Nova Aurora
|
2021-02-19 07:09:06
|
Can I include a TIFF in a PSD to create an infinite recursion loop?
|
|
|
spider-mario
|
2021-02-19 07:11:48
|
TIFF forms the basis for DNG, doesn’t it?
|
|
|
Nova Aurora
|
|
_wb_
It is basically a memory dump of the internal struct Microsoft used for storing images
|
|
2021-02-19 07:13:03
|
It's Microsoft in the 80s, why am I not surprised that they created a format which is as difficult to use as possible for anyone but them?
|
|
|
spider-mario
|
2021-02-19 07:13:09
|
> DNG is based on the TIFF/EP standard format
yep.
|
|
|
Nova Aurora
|
2021-02-19 07:13:34
|
> DNG is based on the TIFF/EP standard format
|
|
2021-02-19 07:13:56
|
It seems to me like it's more agree to disagree in standard ways
|
|
|
spider-mario
|
|
_wb_
It is basically a memory dump of the internal struct Microsoft used for storing images
|
|
2021-02-19 07:14:21
|
ah, so like “Office Open XML”? :p
|
|
2021-02-19 07:14:36
|
(.docx, .xlsx, etc.)
|
|
|
Nova Aurora
|
2021-02-19 07:15:41
|
How that got into ISO when we already had odt and the other OASIS formats still confuses me
|
|
|
Nova Aurora
How that got into ISO when we already had odt and the other OASIS formats still confuses me
|
|
2021-02-19 07:16:20
|
Best guess:
https://tenor.com/view/money-mr-krabs-gif-18326632
|
|
|
spider-mario
|
2021-02-19 07:18:36
|
ah, I think this is the article I had seen on the subject: http://www.robweir.com/blog/2006/10/leap-back.html
|
|
|
spider-mario
ah, I think this is the article I had seen on the subject: http://www.robweir.com/blog/2006/10/leap-back.html
|
|
2021-02-19 07:19:46
|
actually, I’m not so sure, but still an amusing read
|
|
|
Nova Aurora
|
2021-02-19 07:21:25
|
I think that's just an unfortunate consequence of Microsoft's obsessive backwards compatibility
|
|
|
_wb_
|
2021-02-19 07:25:37
|
Exif is also a collection of TIFF tags, iirc
|
|
|
Crixis
|
2021-02-19 09:05:11
|
News on chrome?
|
|
|
|
Deleted User
|
|
Crixis
News on chrome?
|
|
2021-02-19 09:06:28
|
IMHO it's more of a topic for <#803574970180829194> than this channel, but OK
|
|
2021-02-19 09:06:48
|
Gerrit still reports `Merge Conflict`
|
|
|
Fox Wizard
|
2021-02-19 09:07:43
|
<:Google:806629068803932281>
|
|
|
|
Deleted User
|
2021-02-19 09:08:50
|
Code has been reviewed, and recently the commit's author (Moritz Firsching) made some fixes based on that review.
|
|
|
spider-mario
|
2021-02-19 09:12:11
|
Moritz is here fyi, in case someone hadn’t seen
|
|
|
Scope
|
2021-02-20 07:16:26
|
https://www.phoronix.com/scan.php?page=article&item=clang-12-5950x&num=2
|
|
2021-02-20 07:16:40
|
|
|
2021-02-20 07:18:43
|
Jpeg-recompression is used as the main test <:PepeHands:808829977608323112>
|
|
|
_wb_
|
2021-02-20 07:21:49
|
Ugh, can we reorder things so that doesn't happen?
|
|
2021-02-20 07:22:31
|
At least jpeg recompression does use the modular and vardct code paths, so it's not completely irrelevant
|
|
2021-02-20 07:25:51
|
But only DCT8x8, trivial adaptive quant (all constant), no patches/dots, no epf, etc
|
|
2021-02-21 06:40:24
|
I just almost mistyped bitstream as "butstream" and that doesn't sound very good
|
|
2021-02-21 06:44:53
|
https://c.tenor.com/2y0fwDpQ6sIAAAAM/poop-shit.gif
|
|
|
Fox Wizard
|
2021-02-21 07:52:14
|
``poop-shit.gif`` I hate it already
|
|
|
_wb_
|
2021-02-21 07:57:06
|
why do some gifs get shown in discord and others not?
|
|
2021-02-21 07:57:23
|
https://tenor.com/view/hippopotamus-pooping-tail-wagging-excited-monday-gif-10133844
|
|
|
Fox Wizard
|
2021-02-21 08:07:51
|
Well, guess they made it so that certain links get recognized
|
|
2021-02-21 08:07:53
|
Like Tenor
|
|
|
Master Of Zen
|
|
_wb_
https://tenor.com/view/hippopotamus-pooping-tail-wagging-excited-monday-gif-10133844
|
|
2021-02-21 10:59:06
|
that's disgusting
|
|
|
Fox Wizard
|
2021-02-21 11:12:56
|
No shit
|
|
|
Nova Aurora
|
2021-02-22 12:58:57
|
Yes shit, that's literally what it is...
|
|
|
_wb_
|
2021-02-22 06:15:53
|
https://www.technology.org/2018/12/05/why-hippos-are-spraying-their-dung-like-agricultural-machines/
|
|
2021-02-22 06:17:28
|
Anyway, **bit**stream it is
|
|
|
Crixis
|
2021-02-22 07:00:37
|
How is this on-topic
|
|
|
|
Deleted User
|
2021-02-22 07:04:40
|
No one cares
|
|
|
_wb_
|
2021-02-22 07:07:16
|
Bitstreams are on topic
|
|
2021-02-22 07:07:27
|
Buttstreams not so much, I have to admit
|
|
|
|
Deleted User
|
2021-02-22 07:08:09
|
Ok, so let's turn it into on-topic:
Is JPEG XL butt-ready?
|
|
|
_wb_
|
2021-02-22 07:11:06
|
https://c.tenor.com/nEuA4w73KOoAAAAM/austin-powers-mike-myers.gif
|
|
2021-02-22 07:11:30
|
The bitstream has been frozen for a while now
|
|
2021-02-22 07:11:57
|
A month, officially. Almost 3 months in practice.
|
|
|
|
Deleted User
|
2021-02-22 07:13:28
|
Come on, I've got a flamethrower
|
|
2021-02-22 07:13:46
|
It'll unfreeze everything in a matter of seconds
|
|
2021-02-22 07:13:59
|
https://tenor.com/view/kill-it-with-fire-fire-gif-13411041
|
|
|
_wb_
|
2021-02-22 07:23:14
|
Frozen is good though
|
|
|
|
Deleted User
|
2021-02-22 07:24:13
|
Yep, it's kinda childish, but still a good movie
|
|
2021-02-22 07:24:26
|
https://tenor.com/view/frozen-let-it-go-disney-disney-princess-elsa-gif-3597313
|
|
|
Pieter
|
2021-02-22 07:30:48
|
frozen buttstream
|
|
|
Crixis
|
2021-02-22 07:51:04
|
ok, now make jxl 2
|
|
2021-02-22 07:51:32
|
sorry j><l 2
|
|
|
_wb_
|
2021-02-22 07:53:05
|
I meant frozen is a good state for a bitstream to be in
|
|
2021-02-22 07:54:40
|
jxl 2 maybe in 2048
|
|
|
bonnibel
|
2021-02-22 09:55:55
|
its actually pronounced "j cross l"
|
|
2021-02-22 09:56:37
|
or "j chi l", if you must
|
|
|
_wb_
|
2021-02-22 11:29:21
|
Jay Chill
|
|
|
bonnibel
|
2021-02-22 11:36:35
|
it's actually french, _j'xl_
|
|
|
_wb_
|
2021-02-22 11:44:48
|
j'excelle, tu excelles, il excelle, nous excellons, vous excellez, ils excellent
|
|
|
Fox Wizard
|
2021-02-22 12:43:44
|
https://www.megekko.nl/product/4278/279206/Socket-AM4-Processoren/AMD-Ryzen-5-1600-processor
|
|
2021-02-22 12:43:53
|
|
|
2021-02-22 12:43:53
|
|
|
|
_wb_
|
2021-02-22 12:48:36
|
Beautiful
|
|
|
Fox Wizard
|
2021-02-22 12:56:12
|
It's true art <:PepeOK:805388754545934396>
|
|
|
fab
|
2021-02-22 01:13:34
|
firefox pdf?
|
|
2021-02-22 01:13:57
|
firefox is not good for reading PDFs
|
|
2021-02-22 01:14:10
|
chrome as a PDF viewer won't spy on you
|
|
|
Nova Aurora
|
2021-02-23 12:38:05
|
Wtf are you saying
|
|
|
Fox Wizard
|
2021-02-23 06:37:22
|
Nobody knows :p
|
|
|
_wb_
|
2021-02-24 05:11:04
|
in my inbox:
"Hi Jon, I hope this message finds you well. I saw your image compression videos on YouTube and wanted to try those experiments myself. Do you mind pointing me in the right direction? I saw the code you used on reddit but I know little in programming so it wasn't helpful to me although I'm willing to learn. Is there a GUI tool to achieve generation loss? Any help would be appreciated. Thanks for your time."
|
|
2021-02-24 05:11:41
|
you cannot believe how many people want to "achieve" generation loss as some sort of artistic effect
|
|
|
Orum
|
2021-02-24 05:16:08
|
I deliberately reencode videos multiple times (even using different codecs) for R&D purposes, but for artistic effect? <:Thonk:805904896879493180>
|
|
2021-02-24 05:17:35
|
The generation losses from WebP in your blog post were rather interesting though
|
|
|
|
Deleted User
|
2021-02-24 05:19:15
|
You better get to know about deep fried memes
|
|
2021-02-24 05:20:57
|
And <@!794205442175402004>, check your Facebook notifications 😉
|
|
|
_wb_
|
2021-02-24 05:23:43
|
facebook, ugh
|
|
2021-02-24 05:24:20
|
nice page though 🙂
|
|
|
Fox Wizard
|
2021-02-24 05:27:15
|
FB kinda <:kekw:758892021191934033>
|
|
|
Scope
|
2021-02-24 05:29:00
|
https://youtu.be/JR4KHfqw-oE
|
|
|
|
Deleted User
|
|
_wb_
nice page though 🙂
|
|
2021-02-24 05:30:15
|
Thanks! I'm trying to keep it as professional as it can be 😃
|
|
2021-02-24 05:32:28
|
It's not going to be "boring" professional (as e.g. JPEG XL's official site), but "cool" professional (as big companies' Facebook fanpages).
|
|
2021-02-24 05:39:05
|
<@!794205442175402004> you can check draft posts (it's not finished yet, so please don't post anything yet), but you can see an idea for my first post. And if you try editing that draft, you'll see... multilingual post! (You'll be able to get to know my native language and see it in action.) When I let you know that the post is finished, please make a Dutch version of it.
|
|
|
Dr. Taco
|
2021-02-24 05:48:47
|
genuinely surprised he actually uploaded it 1000 times and didn't fake it by just running it through a compressor locally for most of it. That must have taken forever, even if automated
|
|
|
fab
|
2021-02-24 06:16:33
|
honestly, if nobody makes a GUI for JPEG XL that works on Windows 7, nobody will use it
|
|
2021-02-24 06:16:43
|
normal people don't know about XnView and Squoosh
|
|
|
Dr. Taco
|
2021-02-24 06:55:45
|
define "GUI", because XNView is a pretty polished GUI, it supports everything, and is available on Win 7. I can create for you a GUI to encode/decode/view JXL files (I did it for FLIF), but no one is going to go out of their way to download a program that *only* handles JXL. When they could instead use something already established that does that and more
|
|
|
Nova Aurora
|
2021-02-24 06:57:07
|
Plus, writing a plugin for an existing program is probably easier
|
|
|
Dr. Taco
|
2021-02-24 06:58:16
|
I think a more valuable use of time for adoption would be to create a Node.js wrapper around the JXL CLI. I did this for FLIF a few years ago: https://github.com/FLIF-hub/node-flif
|
|
2021-02-24 06:58:36
|
it takes a LONG time to create that though, I don't have the time to do it (again)
|
|
|
|
Deleted User
|
|
Nova Aurora
Plus, writing a plugin for an existing program is probably easier
|
|
2021-02-24 06:59:50
|
Plugin should be better in yet another way – if it's a plugin for a popular program, you're going to reach *masses*.
|
|
|
Dr. Taco
|
2021-02-24 07:00:16
|
A Node.js wrapper around a native CLI executable would mean JXL conversion could be very fast, and the API very easy to interact with by JS devs (most popular language, especially for web where adoption is most key)
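(A minimal sketch of what such a wrapper could look like, assuming a `cjxl` binary on the PATH; the function name and the bounds check are made up for illustration, only the `-d` distance flag comes from this chat:)
```ts
// Sketch of a Node wrapper around the cjxl CLI (not the real node-flif/node-jxl API).
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

export async function encodeJxl(
  inputPath: string,
  outputPath: string,
  distance = 1.0,          // Butteraugli distance; 0 would mean lossless
): Promise<void> {
  // Validate bounds before handing values to the CLI (the exact limits here are an assumption).
  if (distance < 0 || distance > 25) {
    throw new RangeError(`distance out of range: ${distance}`);
  }
  await run("cjxl", [inputPath, outputPath, "-d", String(distance)]);
}

// Usage: await encodeJxl("photo.png", "photo.jxl", 1.0);
```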
|
|
|
fab
|
2021-02-24 07:01:59
|
yes, but normal people don't know XnView or Squoosh
|
|
2021-02-24 07:02:17
|
or Paint.NET or GIMP or Krita or Megapixel (these require Python 3.9.1 and Windows 10 to work properly); GIMP can also work on Windows 7
|
|
|
Dr. Taco
|
2021-02-24 07:02:25
|
The slowest part of building Node-FLIF was:
1. Getting actual native binaries (this should 100% be its own versioned node module, see `7zip-bin`)
2. Finding out the limits of every single option in the CLI (what are the upper and lower bounds allowed for validation)
3. Writing very very thorough unit tests
|
|
|
Nova Aurora
|
2021-02-24 07:02:50
|
WIC works for win7?
|
|
|
fab
|
2021-02-24 07:02:54
|
yes
|
|
2021-02-24 07:03:02
|
but I'm using Sasha's plugin
|
|
2021-02-24 07:03:18
|
it's from a Firefox dev, and with the latest version the dllhost is slow
|
|
2021-02-24 07:03:33
|
and obviously the windows photo viewer can't view them
|
|
2021-02-24 07:03:40
|
and i can't set them as wallpaper
|
|
|
Dr. Taco
|
2021-02-24 07:03:48
|
If you can create a Node-JXL library, I can get you a GUI on XP+, OSX 10.6+ and Ubuntu 12+ (same as I did for UGUI: FLIF)
|
|
|
fab
|
2021-02-24 07:04:55
|
the problem is how users will create those files
|
|
|
Dr. Taco
|
2021-02-24 07:06:25
|
presumably if they are on those OS's they are using a browser that will support JXL and they will be saving the images from the browser. At that point, they can just associate JXL files with the browser, so double-clicking them just opens them in a new tab
|
|
|
fab
|
2021-02-24 07:07:05
|
ok
|
|
2021-02-24 07:07:48
|
and on android
|
|
2021-02-24 07:08:21
|
on android it will be worse as they will need to compress
|
|
2021-02-24 07:08:33
|
use values less than s7 q 78.1 or s9 q 90
|
|
2021-02-24 07:08:54
|
because the speed is 8 Mpx/second max
|
|
2021-02-24 07:09:08
|
what will Google use?
|
|
2021-02-24 07:09:13
|
google images
|
|
2021-02-24 07:14:27
|
also do we need compression?
|
|
2021-02-24 07:14:52
|
isn't 0.3.0 -s 4 -q 99.2 enough even if it has some discolouration?
|
|
|
Nova Aurora
|
2021-02-24 07:15:09
|
You mean lossless?
|
|
|
fab
|
2021-02-24 07:15:17
|
why is Google trying to optimize butteraugli?
|
|
2021-02-24 07:16:15
|
in the next 1000 years, even if a PDF in JPEG XL goes from 13 MB down to 6 MB
|
|
2021-02-24 07:16:19
|
or a bit more
|
|
2021-02-24 07:16:26
|
what is the gain in that?
|
|
|
Nova Aurora
|
2021-02-24 07:17:02
|
I'm sorry, but I have no idea what you're asking there
|
|
|
fab
|
2021-02-24 07:17:14
|
we can't upload more to the internet
|
|
2021-02-24 07:17:42
|
90% of the internet is video
|
|
2021-02-24 07:17:48
|
when the other 10% is full
|
|
2021-02-24 07:17:54
|
what will happen?
|
|
2021-02-24 07:18:15
|
will they start deleting torrents etc.?
|
|
|
Nova Aurora
|
2021-02-24 07:19:05
|
The internet is someone else's computer, but there's no inherent limit on how much storage is on the internet
|
|
|
fab
|
|
Nova Aurora
|
2021-02-24 07:19:51
|
there is a limit on the number of addresses
|
|
2021-02-24 07:19:52
|
https://en.wikipedia.org/wiki/IPv4_address_exhaustion
|
|
|
fab
|
2021-02-24 07:20:12
|
interesting
|
|
|
Nova Aurora
|
2021-02-24 07:20:28
|
but each of those servers could have an infinite amount of data if they could store it
|
|
|
fab
|
2021-02-24 07:20:55
|
ah
|
|
2021-02-24 07:21:05
|
how can I learn more about this?
|
|
2021-02-24 07:21:18
|
what should I search for on Google?
|
|
|
Crixis
|
2021-02-24 07:23:46
|
If I have time I will do a GUI for the encoder in Qt
|
|
|
Nova Aurora
|
|
Nova Aurora
https://en.wikipedia.org/wiki/IPv4_address_exhaustion
|
|
2021-02-24 07:25:14
|
This has a decent solution in the form of ipv6, with 2^128 possible addresses before we run into the same issues as with ipv4
https://en.wikipedia.org/wiki/IPv6
|
|
|
fab
what to search on google ?
|
|
2021-02-24 07:25:49
|
What do you want to know?
|
|
|
fab
|
2021-02-24 07:34:48
|
ok i understood
|
|
2021-02-24 07:35:58
|
Alex's GUI called Megapixel doesn't work on Windows 7; Ultra 7z JPG feels like a rip of Moises Cardona's program, with show-command-line options etc., but you can't set custom settings like very specific qualities for exhale, e.g. q 78.1 or q 78.81
|
|
2021-02-24 07:36:30
|
Megapixel is good, you can add presets etc.
|
|
|
Dr. Taco
|
2021-02-24 08:08:18
|
I had planned a more robust FLIF desktop app but by the time node-flif was done FUIF was being talked about and then JXL.
https://camo.githubusercontent.com/a01164e5e8c9e1928dede8e5680abe469a72f2d1917c7b97fe8e6f94530fff70/68747470733a2f2f692e696d6775722e636f6d2f787350485753432e706e67
|
|
|
Scope
|
2021-02-25 02:33:24
|
<https://www.reddit.com/r/discordapp/comments/lrenv7/they_finally_added_image_compression/>
https://i.redd.it/a7fyfrbevfj61.png
|
|
2021-02-25 02:33:38
|
PNG -> Jpeg? <:Thonk:805904896879493180>
|
|
2021-03-01 02:41:24
|
🇳🇱 https://youtu.be/VtZ9_UmCd5M
|
|
|
Pieter
|
2021-03-01 02:46:15
|
<@111445179587624960> Did you attend the talk?
|
|
|
Scope
|
2021-03-01 02:51:32
|
No, and I wouldn't understand anything without translation
|
|
|
Pieter
|
2021-03-01 02:52:48
|
Yeah, I assume that's why <@794205442175402004> didn't post it himself here.
|
|
|
Scope
|
2021-03-01 02:54:43
|
But Youtube has at least some automatic translation and also slides in English
|
|
2021-03-01 02:57:13
|
<:Thonk:805904896879493180>
|
|
2021-03-01 03:02:50
|
|
|
|
lonjil
|
2021-03-01 03:08:29
|
<@!111445179587624960> that's just these slides: https://docs.google.com/presentation/d/1LlmUR0Uoh4dgT3DjanLjhlXrk_5W2nJBDqDAMbhe8v8/edit#slide=id.gae1d3c10a0_0_17
|
|
|
Pieter
|
2021-03-01 03:08:59
|
<@111445179587624960> If you give me the timestamp I'll translate what he's actually saying 🙂
|
|
|
Scope
|
|
lonjil
<@!111445179587624960> that's just these slides: https://docs.google.com/presentation/d/1LlmUR0Uoh4dgT3DjanLjhlXrk_5W2nJBDqDAMbhe8v8/edit#slide=id.gae1d3c10a0_0_17
|
|
2021-03-01 03:11:51
|
Yes, I know, but maybe there is some additional information in the voice presentation
|
|
|
Pieter
<@111445179587624960> If you give me the timestamp I'll translate what he's actually saying 🙂
|
|
2021-03-01 03:11:56
|
Well, in general, I understand
|
|
2021-03-01 05:09:21
|
|
|
|
_wb_
|
2021-03-01 06:24:15
|
Automatic translation, lol. I did not talk about "muslims compression" or "five clowns", to be clear 😅
|
|
|
|
Deleted User
|
2021-03-01 06:28:35
|
It's probably failed speech recognition and then translation done on that botched text
|
|
|
Scope
|
2021-03-01 06:36:53
|
Yep, mostly when there is a mix of different languages
|
|
|
_wb_
|
2021-03-01 06:40:23
|
The granny components? Lol!
|
|
|
Fox Wizard
|
2021-03-01 08:27:44
|
Pik fuif <a:partykitty:681240034895986696>
|
|
|
_wb_
|
|
Jelmen
|
2021-03-01 09:49:06
|
hey all, just dropped in here from the other discord server where <@!794205442175402004> gave a very nice presentation yesterday 🙂
|
|
|
_wb_
|
2021-03-01 10:31:45
|
Yes, it was about Muslims compression and five clowns who change the color of the granny components
|
|
|
|
paperboyo
|
|
_wb_
Yes, it was about Muslims compression and five clowns who change the color of the granny components
|
|
2021-03-01 10:43:23
|
Now I regret I missed it!
|
|
|
Jelmen
|
2021-03-01 10:43:37
|
that's how I understood it !
|
|
|
bonnibel
|
2021-03-01 08:18:35
|
https://twitter.com/JoanieLemercier/status/1363085114176122880
|
|
2021-03-01 08:18:50
|
😬
|
|
|
Dr. Taco
|
2021-03-01 09:30:28
|
meh, I use twice that with just my microwave everyday. #RealAmerican <:trumpsteaks:443185840949297153>
|
|
|
Scope
|
2021-03-01 11:36:50
|
https://www.slashgear.com/google-photos-high-quality-might-not-be-near-identical-to-original-28661674/
|
|
|
|
Deleted User
|
2021-03-02 12:20:51
|
When Chrome adds JXL support, I'll definitely open a support ticket for adding JXL support in Google Photos.
|
|
|
lonjil
|
2021-03-02 01:48:25
|
google photos team: "d6 seems good, right?"
|
|
|
BlueSwordM
|
2021-03-02 02:01:49
|
*With 1/2 resolution downsampling
|
|
|
Pieter
|
2021-03-02 02:21:17
|
640k pixels ought to be enough for everyone
|
|
|
_wb_
|
2021-03-02 06:40:49
|
"We'll add them back later with AI"
|
|
2021-03-02 06:41:16
|
(the missing pixels)
|
|
|
|
Deleted User
|
2021-03-02 06:41:49
|
Haha pirated Topaz Gigapixel AI goes brrrr
|
|
|
Pieter
|
2021-03-02 06:45:53
|
What's D6, though?
|
|
|
|
Deleted User
|
2021-03-02 06:47:40
|
`-d 6`, or `--distance=6.0`
|
|
2021-03-02 06:48:21
|
Just run `cjxl -h -v -v -v` and you'll get all the switches available out there with a short description.
|
|
|
zebefree
|
2021-03-02 06:54:01
|
Is the JPEG XL spec freely available?
|
|
|
Nova Aurora
|
2021-03-02 06:55:38
|
no sadly
|
|
|
|
Deleted User
|
2021-03-02 06:55:42
|
It's behind ISO's paywall
|
|
|
Nova Aurora
|
2021-03-02 06:55:50
|
the committee draft is available
|
|
2021-03-02 06:56:02
|
but it's quite outdated
|
|
|
zebefree
|
2021-03-02 06:56:19
|
I found a 2019 draft, but it doesn't seem to have anything about the container format; is that documented somewhere?
|
|
|
Nova Aurora
|
2021-03-02 06:56:34
|
behind ISO's paywall
|
|
2021-03-02 06:57:05
|
ISO doesn't seem to want the spec to be free, even if it's free to use
|
|
|
zebefree
|
|
Nova Aurora
|
2021-03-02 06:58:31
|
If it makes you feel better the core developers of the format are pretty miffed too and are trying to find a way to make it freely available.
|
|
|
zebefree
|
2021-03-02 06:59:51
|
Is the container format based on something else or is it something novel for JPEG XL?
|
|
|
_wb_
|
2021-03-02 07:00:02
|
Container is ISOBMFF
|
|
2021-03-02 07:00:15
|
And optional
|
|
|
zebefree
|
2021-03-02 07:01:02
|
Oh ok, is it HEIF?
|
|
|
Nova Aurora
|
2021-03-02 07:01:09
|
no
|
|
2021-03-02 07:01:31
|
HEIF is only for video formats being used as image formats
|
|
2021-03-02 07:02:01
|
JPEG XL is a (mostly) completely new image format
|
|
|
zebefree
|
2021-03-02 07:03:20
|
Ok I guess I will just reverse engineer libjxl
|
|
|
Nova Aurora
|
|
_wb_
Container is ISOBMFF
|
|
2021-03-02 07:05:55
|
I thought BMFF was for video/audio only?
|
|
|
_wb_
|
2021-03-02 07:07:39
|
It's also used in J2K, iirc
|
|
2021-03-02 07:09:12
|
It's just a simple box/chunk format, not that different from PNG: a 4-byte length and a 4-byte identifier (usually human readable)
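(A minimal sketch of walking those boxes, assuming the usual ISOBMFF rules that a size of 1 means a 64-bit size follows and a size of 0 means the box runs to the end of the file; illustrative only:)
```ts
// Walk top-level ISOBMFF boxes: a 4-byte big-endian size, then a 4-byte type.
function* boxes(buf: Buffer): Generator<{ type: string; payloadStart: number; payloadEnd: number }> {
  let off = 0;
  while (off + 8 <= buf.length) {
    let size = buf.readUInt32BE(off);
    const type = buf.toString("latin1", off + 4, off + 8);
    let headerLen = 8;
    if (size === 1) {                         // 64-bit "largesize" follows the type
      size = Number(buf.readBigUInt64BE(off + 8));
      headerLen = 16;
    } else if (size === 0) {                  // box extends to the end of the file
      size = buf.length - off;
    }
    if (size < headerLen) break;              // corrupt box: stop rather than loop forever
    yield { type, payloadStart: off + headerLen, payloadEnd: off + size };
    off += size;
  }
}

// e.g. for (const b of boxes(file)) console.log(b.type);  // "JXL ", "ftyp", "jbrd", ...
```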
|
|
2021-03-02 07:10:21
|
We don't require the container format around the actual codestream, it is optional
|
|
2021-03-02 07:10:53
|
All the render-relevant data is in the codestream (including things like orientation and color profile)
|
|
|
Nova Aurora
|
2021-03-02 07:11:28
|
So BMFF is just a metadata wrapper for you?
|
|
|
_wb_
|
2021-03-02 07:11:50
|
The container is only needed if you want to attach non-rendering metadata, including JPEG bitstream reconstruction data
|
|
2021-03-02 07:12:01
|
Yes, we only use it like that
|
|
|
Nova Aurora
|
2021-03-02 07:12:48
|
So only part 1 of the standard would be needed in theory to write your own encoder?
|
|
|
_wb_
|
2021-03-02 07:12:58
|
We don't need something like HEIF that also has ways to specify aux images for alpha, overlays, tiling, etc. Those are all handled in the jxl codestream itself
|
|
2021-03-02 07:13:34
|
Yes. Part 1 is enough for e.g. images on the web where metadata is usually stripped anyway
|
|
|
zebefree
|
2021-03-02 07:17:17
|
The draft says "if the codestream starts with bytes { 0x0A , 0x04 , 0x42 , 0xD2 , 0xD5 , 0x4E }, the decoder shall reconstruct the original JPEG1 codestream as specified in Annex M", but libjxl doesn't look for that; was that replaced by the ISOBMFF container?
|
|
|
Nova Aurora
|
2021-03-02 07:18:40
|
That is only for when there is an original JPEG encoded within the JXL?
|
|
|
zebefree
|
2021-03-02 07:18:56
|
Right
|
|
|
Nova Aurora
|
2021-03-02 07:19:55
|
sorry if my answers are confusing, I'm a noob at this whose only virtue is being glued to a computer
|
|
|
|
Deleted User
|
2021-03-02 07:23:56
|
Just like me bro
|
|
|
Nova Aurora
|
2021-03-02 07:27:46
|
If you want answers from someone who probably knows what they are talking about, look for yellow names
|
|
|
zebefree
|
2021-03-02 07:36:26
|
Ok thanks, looking at the libjxl code it looks like that is done using the container format now, so probably that signature is obsolete.
|
|
|
_wb_
|
|
zebefree
The draft says "if the codestream starts with bytes { 0x0A , 0x04 , 0x42 , 0xD2 , 0xD5 , 0x4E }, the decoder shall reconstruct the original JPEG1 codestream as specified in Annex M", but libjxl doesn't look for that; was that replaced by the ISOBMFF container?
|
|
2021-03-02 08:24:44
|
Yes, that's the old Brunsli which got removed from the spec. Now we have a `jbrd` box in isobmff for the jpeg bitstream reconstruction data, and the image data itself is encoded as varDCT that happens to be not very variable DCT 🙂
|
|
2021-03-02 08:25:32
|
the jxl codestream signature is 0xFF0A
|
|
2021-03-02 08:25:56
|
you can have a standalone codestream and that will decode
|
|
|
zebefree
|
2021-03-02 08:26:00
|
Ah great.
|
|
2021-03-02 08:26:51
|
But if it's JPEG-1 in JPEG XL it will always be in the container, right? With signature 00 00 00 0c J X L 20 0d 0a 87 0a
|
|
|
_wb_
|
2021-03-02 08:27:13
|
yes, well at least if you want to preserve the bitstream reconstruction data
|
|
2021-03-02 08:27:56
|
you _could_ just take the codestream and strip the rest, and it will still be the same image, but you cannot restore the exact same original JPEG file from it
|
|
|
zebefree
|
2021-03-02 08:28:29
|
Ok, sounds good. Thanks!
|
|
|
_wb_
|
2021-03-02 08:29:41
|
not sure if the `JXL` magic is there to stay
|
|
2021-03-02 08:30:26
|
the `ftyp` "`jxl `" (with a space) is certainly there to stay though
|
|
2021-03-02 08:31:40
|
because that one is registered with MP4RA
|
|
2021-03-02 08:31:49
|
see https://mp4ra.org/#/atoms
|
|
|
zebefree
|
2021-03-02 08:33:07
|
ReadSignature() looks for capital "JXL ". It looks like that is what is on mp4ra also.
|
|
|
_wb_
|
2021-03-02 08:33:36
|
ah really
|
|
2021-03-02 08:33:52
|
looks like you're right
|
|
|
zebefree
|
2021-03-02 08:34:16
|
I guess the lowercase ones are atoms
|
|
|
_wb_
|
2021-03-02 08:35:57
|
|
|
2021-03-02 08:36:59
|
that's what the current draft spec of the container says, so I guess it's upper case JXL followed by ftyp lowercase jxl
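(A small sketch that tells the two signatures apart, using only the bytes quoted above; the function name is made up:)
```ts
// Distinguish a bare JXL codestream (starts with 0xFF 0x0A) from a container file
// (starts with the 12-byte signature box: size 12, type "JXL ", then 0D 0A 87 0A).
const CONTAINER_SIG = Buffer.from([
  0x00, 0x00, 0x00, 0x0c, 0x4a, 0x58, 0x4c, 0x20, 0x0d, 0x0a, 0x87, 0x0a,
]);

function sniffJxl(head: Buffer): "codestream" | "container" | "unknown" {
  if (head.length >= 2 && head[0] === 0xff && head[1] === 0x0a) return "codestream";
  if (head.length >= 12 && head.subarray(0, 12).equals(CONTAINER_SIG)) return "container";
  return "unknown";
}
```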
|
|
2021-03-02 08:37:38
|
these ISOBMFF containers are a bit verbose to my taste, it's nice that we managed to convince the rest of JPEG to make the container optional 🙂
|
|
|
zebefree
|
2021-03-02 08:37:54
|
Oh I see, so it has both. Thanks!
|
|
|
_wb_
|
2021-03-02 08:39:18
|
in the space ISOBMF needs to say "hi, this is a jxl" we already have a full image header which gives you the dimensions, color space, frame encode mode etc etc
|
|
2021-03-02 08:39:56
|
HEIF is even worse, it adds extra verbosity on top of ISOBMFF
|
|
2021-03-02 08:40:45
|
tbh I don't really understand why AVIF was based on HEIF. It is a monstrosity of a container format.
|
|
|
zebefree
|
2021-03-02 08:41:07
|
Yeah seems overkill for just a single image.
|
|
|
_wb_
|
2021-03-02 08:41:13
|
just to say "hi, this is an avif with alpha" takes about 400 bytes
|
|
2021-03-02 08:41:41
|
plus: Nokia does claim patents on HEIF
|
|
2021-03-02 08:43:19
|
everybody is like "that is nonsense, you cannot patent a container" but the reality is that they did, the patents are granted, and I know for sure that there are companies paying Nokia to use the heif container "just to be on the safe side"
|
|
2021-03-02 08:44:24
|
so what's the point of making a nice royalty-free codec if you're then going to require people to wrap it in a patent-encumbered container... I don't really get it
|
|
2021-03-02 08:45:57
|
Of course if it does at some point come to AOM vs Nokia with the result that Nokia's silly patents get invalidated, that would be nice.
|
|
|
zebefree
|
2021-03-02 08:46:54
|
It's too bad that ISO needed to charge for the spec. Always seems to be someone wanting to get a cut.
|
|
|
_wb_
|
2021-03-02 08:52:15
|
That's the general ISO policy and it's their business model, basically. Which might make sense for non-royalty-free standards that are only ever implemented by large industry players where $200 for a spec is negligible compared to the millions they're paying in royalties and the billions they're making in profits. But for a royalty-free standard where perhaps hobbyists also want to take a look at it and maybe even make their own FOSS implementation, that paywall is a real obstacle, and most likely the number of spec purchases they're getting from it is very small anyway — there's an open source executable spec available for free, after all.
|
|
|
zebefree
|
2021-03-02 08:55:35
|
Right, it seems that for Internet-related things, the IETF is becoming the preferred standardization body. Either that or just create your own foundation and publish the spec on your own web site.
|
|
2021-03-02 09:07:50
|
And for web specs, W3C and WHATWG have always made their specs freely available as well.
|
|
|
Jyrki Alakuijala
|
|
_wb_
Yes, that's the old Brunsli which got removed from the spec. Now we have a `jbrd` box in isobmff for the jpeg bitstream reconstruction data, and the image data itself is encoded as varDCT that happens to be not very variable DCT 🙂
|
|
2021-03-02 09:12:49
|
We developed brunsli in 2014, and I thought we'd save some development effort by plugging it in as is. Luca and others on our team rebelled and wanted to make it right. I thought we didn't have time for it. Luca proved me wrong and made it right -- i.e., the brunsli-like functionality is now based on the core tech of JPEG XL instead of being an ugly glued-in hack. This was not purely for aesthetics, since it also removed the brotli dependency from the core code stream and thus made it possible to have smaller decoders. Also, that effort led to having better LZ77 features -- more similar to those of WebP lossless, with 2d distances and pixel (not byte) based indexing and lengths
|
|
2021-03-02 09:13:21
|
also, the context modeling is now more compatible with the large palette mode in the LZ77 use
|
|
2021-03-02 09:14:04
|
so, the brunsli removal led to the brotli removal, which led to new options in pixel graphics (we haven't yet explored them in the encoder), and to a 123456 bytes smaller decoder
|
|
|
_wb_
|
2021-03-02 09:15:02
|
and to more bitstream expressivity: adding alpha to an existing jpeg, or doing saliency progression with it, are things that are now possible
|
|
|
Jyrki Alakuijala
|
2021-03-02 09:15:52
|
by nature I'm pretty stubborn and don't allow this kind of thing, but I'm learning to be more flexible -- and I'm certainly happy now that I didn't resist this change enough to stop it
|
|
|
_wb_
plus: Nokia does claim patents on HEIF
|
|
2021-03-02 09:18:20
|
there are two kinds of engineers -- those who build stuff, and those who wrap already functioning stuff in containers and new layers of APIs -- the latter kind can have a weird aesthetic where doing something is 'dirty' and needs to be isolated from other code by lots of stuff that does nothing
|
|
2021-03-02 09:19:23
|
in one effort I participated in 20 years ago, there was an engineer who would write a function that would just call another function -- to have a wrapper layer; no functionality was added or removed
|
|
2021-03-02 09:19:39
|
at that time I was mystified by it
|
|
2021-03-02 09:20:11
|
these are the same people who get really happy about mostly useless wrappers and containers
|
|
2021-03-02 09:21:11
|
I bet when they get home they put two table cloths on their dining table to avoid spilling on the other
|
|
2021-03-02 09:22:14
|
also I observe that the people who create the bloaty wrapper can be as proud of building the wrapper as if they had created the image formats in it -- and other people can participate in this fantasy
|
|
|
_wb_
|
2021-03-02 09:28:14
|
AVIF/HEIF images with alpha are required to have this string, in ASCII, somewhere: `urn:mpeg:mpegB:cicp:systems:auxiliary:alpha`
|
|
|
Master Of Zen
|
2021-03-02 09:28:18
|
<:PepeHands:654081051768913941>
I developed a framework that just wraps around encoders)
|
|
|
_wb_
|
2021-03-02 09:30:15
|
in some cases, wrappers do have utility. DICOM is an example, or those things they have for geography
|
|
2021-03-02 09:30:55
|
the image is important, but the "metadata" is also critical
|
|
2021-03-02 09:31:46
|
then it makes sense to define a domain-specific container that is mostly specifying how to do the metadata
|
|
2021-03-02 09:33:24
|
|
|
2021-03-02 09:33:45
|
That is a jxl in an isobmf with jpeg bitstream reconstruction data
|
|
2021-03-02 09:34:00
|
|
|
2021-03-02 09:34:09
|
That is an avif
|
|
|
Jyrki Alakuijala
|
|
_wb_
in some cases, wrappers do have utility. DICOM is an example, or those things they have for geography
|
|
2021-03-02 09:42:35
|
DICOM is a complex dictionary for key/values for structured data, I wouldn't consider it just a wrapper
|
|
2021-03-02 09:43:21
|
DICOM is based on very very silly engineers making the first version
|
|
2021-03-02 09:43:32
|
ACR-NEMA 1.0 is the grandfather of it
|
|
2021-03-02 09:44:00
|
they defined the cable to be used for data connection to have 16 data wires
|
|
2021-03-02 09:44:11
|
and they brought that hardware detail all the way to the format
|
|
2021-03-02 09:44:47
|
that is why there were a lot of 16-bit alignment rules in it, like strings needing to have even sizes
|
|
2021-03-02 09:44:58
|
not sure if that is still the case, haven't followed the field for the last 15 years
|
|
2021-03-02 09:45:50
|
this was at a time when it was already clear that Ethernet and friends were going to replace dedicated communication hardware, and that networking was stronger than point-to-point connections
|
|
|
_wb_
|
2021-03-02 09:48:37
|
16-bit alignment is also a big thing in Photoshop's PSD format
|
|
|
spider-mario
|
2021-03-02 11:52:31
|
seems tangentially related to progressive rendering (though I doubt that in most use cases, it will be long enough to have such an adverse effect)
|
|
|
_wb_
|
2021-03-02 12:14:36
|
I think a lot depends on how gradual the deblurring is. A 15-passes progression is different from a 2-passes progression in that respect.
|
|
|
Jyrki Alakuijala
|
2021-03-02 02:14:27
|
I'm thinking of 4k and 8k photography
|
|
2021-03-02 02:14:52
|
if we have 8k, that is the worst case, and it means 33M pixels
|
|
2021-03-02 02:16:00
|
if we do that with 0.2 bpp, 33e6*0.2/8 one photo will be less than a megabyte, 825 kB
|
|
2021-03-02 02:16:25
|
the first progressive version of it will be about 160 kB
|
|
2021-03-02 02:18:03
|
let's take a slow internet connection of 20 mbps, 2.5 MB/s
|
|
2021-03-02 02:18:28
|
160 kB will take 64 ms to transmit
|
|
2021-03-02 02:18:55
|
the rest we can send in 270 ms
|
|
2021-03-02 02:19:21
|
the time between the final and initial goes into what I consider a 'preattentive stage'
|
|
2021-03-02 02:19:42
|
to let human vision to orient where to look at rather than already look there
|
|
|
_wb_
|
2021-03-02 02:22:32
|
What about just full HD (2M pixels) but at 1bpp and on a 3G connection? E.g. viewing a full screen photo on a phone in rural areas.
|
|
|
Jyrki Alakuijala
|
2021-03-02 02:22:53
|
the average mobile connection today is 17 mbps
|
|
|
_wb_
|
2021-03-02 02:23:33
|
Averages can be quite different from medians in these things
|
|
|
Jyrki Alakuijala
|
2021-03-02 02:23:50
|
2e6/8 = 250 kB, i.e., perhaps 25 kB for the preview
|
|
2021-03-02 02:24:19
|
average 3g speed in India is about 2 mbps
|
|
2021-03-02 02:24:34
|
so, 100 ms for the preview and 900 ms for the full image
|
|
2021-03-02 02:25:04
|
if 10 % data for the preview
|
|
2021-03-02 02:25:16
|
(it can be 15 % at 1 bpp)
|
|
2021-03-02 02:26:38
|
In Germany 3G is 5x faster at ~10 mbps
|
|
2021-03-02 02:26:58
|
speeds are getting faster quickly
|
|
2021-03-02 02:50:01
|
Germany: 20 ms for preview and 180 ms for the full image
|
|
2021-03-02 02:50:38
|
we often have images at 2.5 and more bpp today
|
|
2021-03-02 02:51:48
|
Argentina is somewhere in the middle with 3G speed: 60 ms for preview and 540 ms for the full image
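(The same back-of-the-envelope calculation, generalized; the helper is illustrative and just reproduces the numbers above, e.g. the India full-HD case:)
```ts
// Back-of-the-envelope progressive loading times, generalized from the numbers above.
function progressiveTimes(
  megapixels: number,      // e.g. 33 for 8K, 2 for full HD
  bitsPerPixel: number,    // e.g. 0.2 or 1.0
  linkMbps: number,        // e.g. 2 for Indian 3G, ~10 for German 3G, 20 for slow broadband
  previewFraction = 0.1,   // share of the bytes needed for the first progressive pass (~10-15 %)
) {
  const totalKB = (megapixels * 1e6 * bitsPerPixel) / 8 / 1000;
  const kbPerSec = (linkMbps * 1e6) / 8 / 1000;
  return {
    totalKB,
    previewMs: ((totalKB * previewFraction) / kbPerSec) * 1000,
    totalMs: (totalKB / kbPerSec) * 1000,
  };
}

// Full HD at 1 bpp over 2 Mbps 3G: ~250 kB total, ~100 ms for the preview, ~1 s for the whole image.
console.log(progressiveTimes(2, 1.0, 2));
```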
|
|
|
Scope
|
2021-03-04 09:54:57
|
https://blog.chromium.org/2021/03/speeding-up-release-cycle.html
|
|
2021-03-04 09:55:32
|
> we are excited to announce that Chrome is planning to move to releasing a new milestone every 4 weeks, starting with Chrome 94 in Q3 of 2021.
> Additionally, we will add a new Extended Stable option, with milestone updates every 8 weeks.
<:Thonk:805904896879493180>
|
|
|
Dr. Taco
|
2021-03-04 09:57:39
|
I hope it helps, their quality really went down like 2 years ago. Finally had to switch to Firefox
|
|
2021-03-04 09:58:21
|
At least now when JXL is adopted it will come out a little faster
|
|
|
Scope
|
2021-03-04 09:59:32
|
And faster decoder updates
|
|
|
|
Deleted User
|
|
Dr. Taco
At least now when JXL is adopted it will come out a little faster
|
|
2021-03-04 10:06:39
|
While we're talking about JXL adoption in Chrome...
https://chromium-review.googlesource.com/c/chromium/src/+/2693607/
It stalled after <@456226577798135808>'s latest patches, we haven't heard from Leon Scroggins (the reviewer) yet...
|
|
|
Jyrki Alakuijala
|
2021-03-05 01:46:26
|
we are moving the semicolons and curly braces around in the C++ code
|
|
2021-03-05 01:48:17
|
more seriously, there are production warnings that we need to resolve -- they are not technically challenging as far as I know, just a bit tedious
|
|
|
BlueSwordM
|
2021-03-05 06:49:23
|
Anyway, I did something rather special.
|
|
2021-03-05 06:49:33
|
I use pooled butteraugli to make a butteraugli style video. <:monkaMega:809252622900789269>
|
|
2021-03-05 06:51:53
|
https://drive.google.com/file/d/1NW3AUgBC1CcTJODUK_DscjNaD_-xhyen/view?usp=sharing
|
|
2021-03-05 06:52:00
|
https://drive.google.com/file/d/155Cbh-AG9T931Hs9P_TTmfY3H_S3KHK4/view?usp=sharing
|
|
2021-03-05 06:52:11
|
Flickering and seizure incoming...
|
|
|
|
veluca
|
2021-03-05 06:56:25
|
ouch, my eyes xD
|
|
|
BlueSwordM
|
2021-03-05 06:57:52
|
But yeah, it's very interesting seeing how rav1e and aom encode differently.
|
|
|
spider-mario
|
2021-03-05 07:52:17
|
this map of compression artifacts has a lot of compression artifacts
|
|
2021-03-05 07:52:28
|
we should run butteraugli on that too
|
|
|
|
Deleted User
|
|
spider-mario
we should run butteraugli on that too
|
|
2021-03-05 07:53:03
|
Butteraugliception
|
|
|
Crixis
|
2021-03-05 08:11:27
|
Rav1e is a lot more flashy
|
|
2021-03-05 08:12:25
|
More frame changes?
|
|
|
BlueSwordM
|
|
Crixis
More frames changes?
|
|
2021-03-05 08:13:15
|
Essentially, rav1e seems to be smarter about where to put the distortions vs aom.
|
|
2021-03-05 08:13:35
|
This allows it to spend more bits in places that matter, but that does create a bit more artifacts.
|
|
2021-03-05 08:13:48
|
However, in this kind of video, that is actually a big advantage.
|
|
|
|
bo
|
2021-03-06 10:04:08
|
Is it possible to replicate the gif format - animated series of images with a color table?
|
|
|
_wb_
|
2021-03-06 10:08:35
|
Yes, except jxl does not have a palette that can be shared between frames, every frame needs to have its own palette
|
|
2021-03-06 10:09:59
|
But we do have arbitrary palettes: any number of colors, any number of channels, there can even be "delta" colors that are not absolute colors but deltas w.r.t. a predictor
|
|
2021-03-06 10:10:43
|
And the palette itself gets compressed as if it was a nb_colors x nb_channels image
|
|
|
Jyrki Alakuijala
|
|
BlueSwordM
I use pooled butteraugli to make a butteraugli style video. <:monkaMega:809252622900789269>
|
|
2021-03-06 10:13:16
|
That is a world's first. Achievement unlocked.
|
|
|
Crixis
Rav1e is a lot more flashy
|
|
2021-03-06 10:15:23
|
butteraugli gets seriously upset about small inaccuracies in movement
|
|
2021-03-06 10:16:09
|
I have been planning (with spider-mario) a version of butteraugli for video, but haven't been able to staff it yet
|
|
2021-03-06 10:17:06
|
I wouldn't make big conclusions about butteraugli images for video -- but perhaps it is ok to use it to see where to look at when evaluating video quality manually
|
|
|
_wb_
|
2021-03-06 10:22:07
|
I would be interested in exploring a variant of butteraugli and/or ssimulacra that gets weighted according to how important the fidelity is in different regions. For still images, it could be saliency-weighted. For video, it could be weighted based on motion.
|
|
2021-03-06 10:23:05
|
Maybe use a different norm depending on the weight, or something like that
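(Just to illustrate the weighting idea, not any existing butteraugli/ssimulacra API: pooling a per-pixel distortion map with a per-pixel importance weight using a p-norm:)
```ts
// Weighted p-norm pooling of a distortion map; the weights could come from
// saliency (still images) or motion (video), as suggested above.
function weightedPNorm(distortion: Float32Array, weight: Float32Array, p = 3): number {
  let num = 0;
  let den = 0;
  for (let i = 0; i < distortion.length; i++) {
    num += weight[i] * Math.pow(distortion[i], p);
    den += weight[i];
  }
  return Math.pow(num / den, 1 / p);
}
```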
|
|
|
Jyrki Alakuijala
|
2021-03-06 10:39:37
|
so many options
|
|
|
lithium
|
2021-03-06 12:56:55
|
<@!258670228819410944>
Reply => a short name for butteraugli:
https://discord.com/channels/794206087879852103/804008033595162635/817730448577789983
Could we call it "butter eye"?
I wanted more vowels than there are in PSNRHVS-M and MS-SSIM-YUV, an association with the human eye...
and with a small bread (gipfeli, zopfli, brotli). I deliberately chose an overly complex term to avoid creating
homonym noise for something as specific as this.
Voisilmäpulla, Finnish butter eye buns, translated to German is something like Butteraugebrötchen, and I took the liberty
to invent a new pseudo-Swiss-German word from it, and butteraugli was born.
http://www.food.com/recipe/finnish-butter-eye-buns-voisilm-pulla-326192 -- Tasty with filter coffee.
https://encode.su/threads/2395-SSIM-MSSIM-vs-MSE?p=47367&viewfull=1#post47367
|
|
|
|
bo
|
|
BlueSwordM
Anyway, I did something rather special.
|
|
2021-03-06 01:42:32
|
Does this have 5 or 6 colors? Why are the differences so jerky?
|
|
|
Scope
|
2021-03-06 09:18:47
|
🤔
https://www.phoronix.com/scan.php?page=news_item&px=Exiv2-Looks-To-KDE
> Exiv2 Looks To Team Up With The KDE Project
> Exiv2, the widely-used C++ metadata library / tools for dealing with image metadata via EXIF / IPTC / XMP standards and ICC profiles is looking to join the KDE project.
|
|
|
_wb_
|
2021-03-07 07:22:50
|
https://www.reddit.com/r/programming/comments/lzi2vt/after_being_defended_from_google_now_microsoft/
|
|
2021-03-07 07:26:45
|
What nonsense is this
|
|
2021-03-07 07:27:43
|
Claims:
1. A computer system comprising: an encoded data buffer configured to store encoded data for at least part of a bitstream; and
a range asymmetric number system (“RANS”) decoder configured to perform operations using a two-phase structure, the operations comprising: as part of a first phase of the two-phase structure, selectively updating state of the RANS decoder using probability information for an output symbol from a previous iteration, the state of the RANS decoder being tracked using a value;
as part of a second phase of the two-phase structure, selectively merging a portion of the encoded data from an input buffer into the state of the RANS decoder; and
as part of the second phase of the two-phase structure, selectively generating an output symbol for a current iteration using the state of the RANS decoder.
|
|
2021-03-07 07:31:02
|
What is this two-phase thing? I don't understand this lawyerspeak; it is always so vague and generic
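For reference, a textbook rANS decoder really does alternate between two kinds of work per symbol: extracting a symbol and updating the state, then merging more coded bytes back into the state (renormalization). The sketch below is generic, publicly known rANS in Python, not the patented design; the constants and helper names are only for illustration:
```python
# Generic byte-wise rANS codec (textbook construction, nothing patent-specific).
PROB_BITS = 12
M = 1 << PROB_BITS        # frequencies must sum to M
RANS_L = 1 << 23          # lower bound of the normalized state interval

def build_tables(freqs):
    cum = [0]
    for f in freqs:
        cum.append(cum[-1] + f)
    assert cum[-1] == M, "frequencies must sum to M"
    slot_to_sym = [0] * M                      # slot -> symbol lookup for decoding
    for s in range(len(freqs)):
        for slot in range(cum[s], cum[s + 1]):
            slot_to_sym[slot] = s
    return cum, slot_to_sym

def rans_encode(symbols, freqs):
    cum, _ = build_tables(freqs)
    x = RANS_L
    out = bytearray()
    for s in reversed(symbols):                # rANS encodes in reverse order
        f, c = freqs[s], cum[s]
        x_max = ((RANS_L >> PROB_BITS) << 8) * f
        while x >= x_max:                      # renormalize: push low bytes out
            out.append(x & 0xFF)
            x >>= 8
        x = (x // f) * M + c + (x % f)         # coding step
    for _ in range(4):                         # flush the final 32-bit state
        out.append(x & 0xFF)
        x >>= 8
    out.reverse()                              # decoder reads the stream forwards
    return bytes(out)

def rans_decode(data, freqs, n):
    cum, slot_to_sym = build_tables(freqs)
    it = iter(data)
    x = 0
    for _ in range(4):                         # rebuild the flushed state
        x = (x << 8) | next(it)
    out = []
    for _ in range(n):
        slot = x & (M - 1)                     # phase 1: pick the symbol, update the state
        s = slot_to_sym[slot]
        out.append(s)
        x = freqs[s] * (x >> PROB_BITS) + slot - cum[s]
        while x < RANS_L:                      # phase 2: merge more coded bytes into the state
            x = (x << 8) | next(it)
    return out

freqs = [2048, 1024, 512, 512]                 # symbol frequencies summing to 4096
msg = [0, 1, 0, 2, 3, 0, 1, 1, 0, 2]
assert rans_decode(rans_encode(msg, freqs), freqs, len(msg)) == msg
```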
|
|
|
fab
|
2021-03-07 07:39:44
|
<@!111445179587624960> what do you think about the article? Be honest: is it informative, and are there any grammar errors?
|
|
2021-03-07 07:39:51
|
Also, Scope, are you English?
|
|
|
Scope
|
2021-03-07 07:44:54
|
No, as I said before, I hardly know how to write English correctly; I never learned English deliberately (I picked it up from games, movies and reading the Internet)
|
|
|
fab
|
2021-03-07 07:49:31
|
Did you read the article I wrote?
|
|
2021-03-07 07:49:47
|
BTW, about the one from BlueSword: maybe that site is more popular
|
|
2021-03-07 07:50:08
|
How popular is chipandcheese.com?
|
|
2021-03-07 08:00:40
|
I changed the site theme
|
|
|
lonjil
|
2021-03-08 04:10:28
|
Some people on that Reddit thread seem to think JPEG XL is a Microsoft project 🤔
|
|
|
Nova Aurora
|
2021-03-08 04:12:49
|
Wrong megacorp
|
|
|
username
|
2021-03-08 04:28:26
|
they are probably thinking of JPEG XR
|
|
|
Nova Aurora
|
2021-03-08 04:32:32
|
JPEG's naming scheme is pretty confusing
|
|
2021-03-08 04:33:14
|
JPEG XS, XR, XL, XT
|
|
2021-03-08 04:33:23
|
What are they, iPhones?
|
|
|
_wb_
|
2021-03-08 05:49:18
|
I don't like it either, it's confusing
|
|
2021-03-08 05:49:48
|
I generally try to avoid those names
|
|
|
Nova Aurora
|
2021-03-08 05:49:50
|
XS sounds like excess JPEG
|
|
|
_wb_
|
2021-03-08 05:50:42
|
There's jpeg, j2k, wdp and jxl
|
|
2021-03-08 05:53:19
|
XS is something most end-users don't need to know about, since it's aimed more at the professional video production market or stuff like that (it's not really a format meant to store files in, but more of a cable thing)
|
|
2021-03-08 05:54:58
|
Then there is jpeg XT, which is literally a jpeg plus the XT extensions, so there it makes sense to keep the "jpeg" in the name
|
|
|
_wb_
Claims:
1. A computer system comprising: an encoded data buffer configured to store encoded data for at least part of a bitstream; and
a range asymmetric number system (“RANS”) decoder configured to perform operations using a two-phase structure, the operations comprising: as part of a first phase of the two-phase structure, selectively updating state of the RANS decoder using probability information for an output symbol from a previous iteration, the state of the RANS decoder being tracked using a value;
as part of a second phase of the two-phase structure, selectively merging a portion of the encoded data from an input buffer into the state of the RANS decoder; and
as part of the second phase of the two-phase structure, selectively generating an output symbol for a current iteration using the state of the RANS decoder.
|
|
2021-03-08 06:36:57
|
Isn't that first claim just a vague description of _any_ entropy decoder that does context modeling and/or chance adaptation?
|
|
|
Scope
|
2021-03-08 06:38:41
|
<https://encode.su/threads/2648-Published-rANS-patent-by-Storeleap?p=68945&viewfull=1#post68945>
|
|
|
|
Deleted User
|
|
_wb_
Then there is jpeg XT, which is literally a jpeg plus the XT extensions, so there it makes sense to keep the "jpeg" in the name
|
|
2021-03-08 08:41:30
|
Maybe `djxl` could decode to JPEG XT if plain JPEG isn't enough for some images? <:Thonk:805904896879493180>
|
|
2021-03-08 08:42:04
|
I've been thinking about 'smuggling' JPEG XT support together with JPEG XL
|
|
|
_wb_
|
2021-03-08 09:13:23
|
Not sure if that would help much. The only reason to decode to JPEG is to deal with legacy clients, and they will typically not support JPEG XT. Upgrading them to support JPEG XT is as much work as upgrading them to support JPEG XL 🙂
|
|
|
Scope
|
2021-03-08 11:01:01
|
<@!416586441058025472> I can't say anything about the article, because for me the universal advice on how to encode images in JXL is quite simple (`cjxl image image.jxl`) and that's all, unless someone additionally needs to change the speed (`-s`) or quality (`-q`)
|
|
|
fab
|
2021-03-08 11:01:23
|
I could have added that
|
|
2021-03-08 11:03:42
|
Also, the article is written in beginner English on purpose
|
|
2021-03-08 11:03:51
|
I didn't include words like "just"
|
|
|
Scope
|
2021-03-08 11:04:48
|
The rest of the fine-tuning options are likely to be useless without a full understanding of how they work; besides, the encoder is constantly improving, so advice on some of the options may quickly become outdated
|
|
|
fab
|
2021-03-08 11:06:27
|
yes, an image comparison with different types of pictures would be better
|
|
2021-03-08 11:06:45
|
and more interesting
|
|
2021-03-08 11:07:16
|
Also, the r/av1 post is
|
|
2021-03-08 11:07:17
|
what_is_av1_codec_in_a_10_second_video_save_and/
|
|
2021-03-08 11:08:32
|
But I still don't understand why Google doesn't index it
|
|
|
improver
|
2021-03-08 10:51:09
|
https://news.ycombinator.com/item?id=26390040
|
|
|
_wb_
|
2021-03-09 06:28:09
|
https://twitter.com/jonsneyers/status/1369170699793952769?s=19
|
|
|
lonjil
|
2021-03-09 06:49:50
|
A member of an ISO committee on HN had this to say:
> Most (if not all) people on the committees don't like the paywall. The accountants have to explain every couple years why its necessary. Then people seem to forget and we have to do it all over again. It's cyclical.
|
|
|
_wb_
|
2021-03-09 08:18:05
|
It is only necessary because ISO wants to be a private company that makes money by selling specs (and charging membership fees).
|
|
2021-03-09 08:19:04
|
In my opinion they should be a division of the UN or something and just get some subsidies to pay their staff.
|
|
2021-03-09 08:19:45
|
Countries can tax company profits to pay for those subsidies.
|
|
2021-03-09 08:20:08
|
It's peanut money anyway in the bigger picture
|
|
|
|
Deleted User
|
2021-03-09 08:52:45
|
Paywalling standards is crappy, but it should be legal. We shouldn't prevent them from shooting themselves in the foot. However, the moment some lawmakers decide to include an ISO spec in their laws, they should somehow make that spec free, or refrain from using it in the law (e.g. by using open standards or creating an in-house, freely available alternative). By simple logic you shouldn't be prosecuted for not following a law you can't have access to; that's different from having access to a law but not reading it (*ignorantia iuris nocet*).
|
|
2021-03-09 08:57:21
|
Another thing is intellectual "property". Property laws were made to solve the problem of physical items being scarce. Fortunately, information can be copied perfectly and indefinitely; you're only limited by the medium (so you can only own the information medium, not the information itself).
|
|
|
_wb_
|
2021-03-09 08:59:40
|
It's not just about including ISO specs in laws themselves. What if my government uses JPEG XL images on its website? What if they publish laws online using formats or protocols standardized by ISO? I think also in those cases, those standards should be accessible for free.
|
|