|
spider-mario
|
2022-11-26 10:33:39
|
too wide for hardware decoding through nvdec
|
|
|
Jean
|
2022-11-27 01:35:53
|
Hi! What encoder parameters would people recommend for scans of books? The source images are 300dpi PNGs coming from a scanner. Black text on white mostly, with the occasional image. I'm not interested in preserving things like the texture of the paper. So far I've been running `cjxl -d 1` which seems to work well and vastly outperforms PNG. Anything else I should try?
|
|
|
w
|
2022-11-27 02:15:18
|
try modular mode
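for example, something like this (a sketch: assumes cjxl is on PATH; -d and -m are the flags mentioned in this thread, -d 0 being lossless):
```python
# Toy comparison for a scanned page (hypothetical filenames).
# -d 1       : lossy VarDCT at distance 1 (what Jean already uses)
# -d 1 -m 1  : lossy modular mode
# -d 0       : mathematically lossless
import subprocess

for args, out in [
    (["-d", "1"], "page_vardct.jxl"),
    (["-d", "1", "-m", "1"], "page_modular.jxl"),
    (["-d", "0"], "page_lossless.jxl"),
]:
    subprocess.run(["cjxl", "page.png", out, *args], check=True)
```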
|
|
|
190n
|
2022-11-27 02:54:27
|
i would compare -d1 to other lossy formats, not png
|
|
|
HIMECHAN
|
2022-11-27 09:11:37
|
hi to everyone, should i use modular for art? (digital art) or should i use vardct?
|
|
|
yurume
|
2022-11-27 09:18:53
|
depends on which digital art. if you make pixel art, modular is most beneficial. otherwise vardct with a near-lossless setting will be enough for general consumption. in any case you would want a lossless setting when you archive your creations, just like you keep PSD/Krita/whatever source formats.
|
|
|
HIMECHAN
|
2022-11-27 09:33:38
|
like for this i should use vardct?
|
|
|
yurume
|
2022-11-27 09:51:52
|
for general consumption and not for archiving? technically it would be a mix of modular and vardct (which is possible by first encoding vardct and having a zero-duration modular frame overlaying some portions), don't know if libjxl figures this out on its own.
|
|
2022-11-27 09:52:57
|
but as I've said before, vardct with a low enough distance (1.0 is pretty much acceptable) would work well in most cases in terms of compression
|
|
|
HIMECHAN
|
|
yurume
for general consumption and not for archiving? technically it would be a mix of modular and vardct (which is possible by first encoding vardct and having a zero-duration modular frame overlaying some portions), don't know if libjxl figures this out on its own.
|
|
2022-11-27 12:02:05
|
how can i do an overlay?
|
|
|
_wb_
|
2022-11-27 12:11:04
|
Can the gimp plugin save/load multi-layer images? If not, then we should probably make that work...
|
|
|
yurume
|
2022-11-27 12:45:32
|
AFAIK there is no end-user interface for doing that, yeah
|
|
2022-11-27 12:46:13
|
when patch is enabled (which is a way to reuse an already decoded image fragment elsewhere), libjxl does that automatically, but that only happens when there are repeating fragments
|
|
|
daniilmaks
|
|
Wolfbeast
Sorry but low bpp really isn't all that interesting of a metric. If you want real low bpp then you're usually also dealing with small sizes (widgets, thumbnails, etc.) which aren't going to strain bandwidth anyway. The savings on an already small file aren't impactful.
|
|
2022-11-27 04:47:59
|
low bpp and low resolution are two completely separate issues
|
|
2022-11-27 04:49:40
|
in fact, low bpp is more useful the bigger the image is, not smaller.
low bpp has nothing that makes it "inherently" about small images.
|
|
|
spider-mario
an α7R IV produces 9504×6336 (61 MP) images
|
|
2022-11-27 04:52:07
|
that's already handled by tiling.
|
|
2022-11-27 04:52:54
|
I'll try to stop here since this might be off topic.
|
|
|
_wb_
|
|
that's already handled by tiling.
|
|
2022-11-27 05:16:06
|
File format tiling (as opposed to codestream tiling) has the issue that it tends to create seams at the tile boundaries.
|
|
2022-11-27 05:16:40
|
HEIC sometimes has visible seams at the 512x512 tile boundaries Apple is using
|
|
2022-11-27 05:19:07
|
https://cloudinary.com/blog/time_for_next_gen_codecs_to_dethrone_jpeg#limitations here is an example of such seams
|
|
|
daniilmaks
|
|
_wb_
https://cloudinary.com/blog/time_for_next_gen_codecs_to_dethrone_jpeg#limitations here is an example of such seams
|
|
2022-11-27 05:20:02
|
none of the images load on my end. hang on
|
|
|
_wb_
|
2022-11-27 05:20:03
|
That's a HEIC file I produced using the heic encoder in Apple Preview, which uses these 512x512 tiles, presumably to leverage hw encode/decode
|
|
2022-11-27 05:21:04
|
https://res.cloudinary.com/cloudinary-marketing/image/upload/Web_Assets/blog/013b.png
|
|
2022-11-27 05:21:42
|
https://res.cloudinary.com/cloudinary-marketing/image/upload/Web_Assets/blog/013heic.png
|
|
|
daniilmaks
|
2022-11-27 05:23:19
|
looks like a worst case scenario
|
|
2022-11-27 05:23:58
|
vertical detail on a horizontal seam
|
|
2022-11-27 05:24:52
|
I want to see the seams on the foliage and the mountain too
|
|
|
_wb_
|
2022-11-27 05:25:39
|
Yeah, how bad the seams are will depend on the content, of course
|
|
2022-11-27 05:26:38
|
Point is, it is something to watch out for when combining file format tiling with lossy compression
|
|
|
Demez
|
2022-11-27 05:28:06
|
i wonder, does Samsung's android heic implementation do it any better? mainly cause I use that on my phone
|
|
|
daniilmaks
|
|
_wb_
Point is, it is something to watch out for when combining file format tiling with lossy compression
|
|
2022-11-27 05:33:09
|
I agree.
|
|
2022-11-27 05:36:13
|
I originally meant that a seam every 8192px in "not-as-starved-bpp" images is better than no image.
|
|
|
|
veluca
|
2022-11-27 05:48:50
|
presumably if one is encoding a 50MP+ image they're encoding it at high enough quality that tiling ought not be a problem...
|
|
|
_wb_
|
2022-11-27 06:08:52
|
Not sure, my crappy phone can make 48 MP photos but the pixels are so crappy and signal processed that you might as well use a low quality setting
|
|
|
|
veluca
|
2022-11-27 06:31:49
|
they don't *actually* save it as 48MP, do they?
|
|
2022-11-27 06:32:05
|
most phones usually don't AFAIU
|
|
|
_wb_
|
2022-11-27 07:00:32
|
Mine does, though by default it does 12 MP (bucketing 2x2 sensor pixels into one pixel), which makes more sense
|
|
|
Demiurge
|
2022-11-27 08:16:28
|
lossy modular is not much better for non-photographic than vardct is, right?
|
|
|
_wb_
|
2022-11-27 09:12:01
|
For lossy non-photo we have plenty of mostly unexplored coding tools like splines and delta palette, but I think realistically those will only get explored if the format gets enough traction to make it worthwhile to invest time in encoder improvements for specific relatively uncommon image content types (most images where raster formats are useful in the first place are photographic).
|
|
|
Demiurge
|
2022-11-27 09:26:54
|
So for non-photographic people should probably stick to lossless and not use lossy modular I think, right?
|
|
|
spider-mario
|
2022-11-27 09:27:40
|
or even stick to the original vector graphics if applicable, I think that was _wb_'s point
|
|
|
Demiurge
|
2022-11-27 09:27:53
|
Because the present state of lossy modular seems about as efficient as vardct is for non-photo. Which is to say, not very.
|
|
2022-11-27 09:29:05
|
So probably a bad idea to recommend people use -m 1 for non-photo at this time
|
|
2022-11-27 09:32:52
|
unless it's lossless
|
|
|
Traneptora
|
2022-11-28 02:42:38
|
<@226977230121598977> I know this was a feature you were interested in
|
|
2022-11-28 02:42:54
|
|
|
|
_wb_
|
2022-11-28 02:51:21
|
I like that visualization style
|
|
|
Traneptora
|
2022-11-28 02:52:17
|
thanks :)
|
|
|
joppuyo
|
2022-11-28 03:05:23
|
I think AV1 even supports triangular macroblocks
|
|
2022-11-28 03:11:56
|
It's something called "wedge prediction" https://miro.medium.com/max/1400/0*snA479ZWMUVWogA2
|
|
|
_wb_
|
2022-11-28 03:23:30
|
I assume that's a prediction mode, not a transform type, right?
|
|
|
joppuyo
|
2022-11-28 03:40:41
|
To be fair, I don't know the technical details too well, but I read that xiph.org/daala was experimenting with something similar https://jmvalin.ca/daala/paint_demo/
|
|
|
DZgas ะ
|
|
Traneptora
|
|
2022-11-28 04:14:32
|
Oh Yea
|
|
|
|
veluca
|
2022-11-28 06:08:52
|
AV1 doesn't do non-rectangular transforms
|
|
2022-11-28 06:09:08
|
AV2 might I guess
|
|
|
Demiurge
|
2022-11-29 05:48:29
|
If AV1 has better compression but is impossibly slow, then what exactly was gained? Can it get better compression at the same speed, or better speed at the same compression ratio?
|
|
2022-11-29 05:48:46
|
Hmm I guess that's an off topic question better suited for <#805176455658733570>
|
|
2022-11-29 05:49:27
|
But I have always wondered this ever since AV1 was first announced.
|
|
2022-11-29 05:50:55
|
After all, if you want to have better compression in exchange for impossibly slow encoding, other encoders already offer such a thing, like "placebo" mode for instance
|
|
2022-11-29 05:51:25
|
But I never did a comparison.
|
|
|
BlueSwordM
|
|
Demiurge
If AV1 has better compression but is impossibly slow, then what exactly was gained? Can it get better compression at the same speed, or better speed at the same compression ratio?
|
|
2022-11-29 05:57:40
|
Well, what was gained was better compression.
There's always a point at which an older standard becomes limiting, even if you throw all the compute in the world.
|
|
|
Demiurge
|
2022-11-29 07:31:09
|
Yeah, I just wonder what the extent of that is. I never did a comparison, but I would expect that a new format with new ideas and less limits would at least be able to match an older format with more limits.
|
|
2022-11-29 07:33:27
|
And by match, I mean the same or better ratio if both encoders are running at the same speed
|
|
|
_wb_
|
2022-11-29 07:33:27
|
I wonder to what extent av1 is a superset of h264 and you could basically take x264 and make it produce av1 syntax instead
|
|
|
Demiurge
|
2022-11-29 07:34:36
|
I heard the x264 codebase is very well organized and well suited for such modifications, but I did not take a look at the source code myself.
|
|
2022-11-29 07:34:57
|
One of the x264 devs mentioned possibly modifying it to output theora
|
|
2022-11-29 07:35:07
|
And/or VP8
|
|
2022-11-29 07:36:03
|
since when Google first released VP8 it was kind of a joke
|
|
2022-11-29 07:36:26
|
And the encoder was extremely bad
|
|
2022-11-29 07:37:18
|
And the "format spec" was essentially just the source code.
|
|
2022-11-29 07:38:55
|
But the main problem is that the encoder at the time was almost unusably bad
|
|
2022-11-29 07:40:05
|
So it would have been nice if x264 could have been adapted since it already was well optimized for psychovisual performance as well as computational performance.
|
|
2022-11-29 07:40:41
|
I wonder what happened to Dark Shikari and his blog
|
|
|
DZgas ะ
|
|
veluca
AV2 might I guess
|
|
2022-11-29 09:27:29
|
Stop talking about AV2, it's a big mind scam
|
|
|
Demiurge
If AV1 has better compression but is impossibly slow, then what exactly was gained? Can it get better compression at the same speed, or better speed at the same compression ratio?
|
|
2022-11-29 09:31:52
|
This is absolutely impossible. AV1 uses so many algorithms that it is simply impossible to use it normally in 2022, just as it was at the time of release in 2018. It would be fine if computing progress were as fast as it was from 2000 to 2010; then everything would be fine. But progress is slower and slower, and it's already so noticeable that one could say AV1 is already old, even though you still haven't had the opportunity to use its real power at speed settings like 1
|
|
2022-11-29 09:34:13
|
The problem is that this codec is from corporations and for corporations. It has such a big gap in compression speed that I have no idea how I can use it. It's literally like "the strongest power in the universe is free here and now, but you're too weak to use it"
|
|
2022-11-29 09:35:43
|
Nowadays, we have just come to the moment when we can use hevc for encoding, and we can decode it almost everywhere, and this is already 10 years after its release.
|
|
2022-11-29 09:35:52
|
well <#794206170445119489>
|
|
|
fab
|
2022-11-29 10:08:36
|
what do you think about vtm?
|
|
|
pshufb
|
|
DZgas ะ
The problem is that this codec is from corporations and for corporations. It has such a big gap in compression speed that I have no idea how I can use it. It's literally like "the strongest power in the universe is free here and now, but you're too weak to use it"
|
|
2022-11-29 01:44:49
|
reasonably fast av1 sw encoders exist and hardware encoders are already getting quite good
|
|
|
diskorduser
|
|
fab
what do you think about vtm?
|
|
2022-11-29 02:00:09
|
What's vtm?
|
|
|
fab
|
|
Traneptora
|
|
Demiurge
Yeah, I just wonder what the extent of that is. I never did a comparison, but I would expect that a new format with new ideas and less limits would at least be able to match an older format with more limits.
|
|
2022-11-29 03:22:27
|
do keep in mind that you need an encoder to actually take advantage of the new bitstream or nothing interesting happens
|
|
2022-11-29 03:22:48
|
for example, atm splines are not used by any encoder
|
|
2022-11-29 03:22:52
|
but that will change
|
|
|
Demiurge
|
2022-11-29 03:47:36
|
I didn't know AV1 had splines...
|
|
2022-11-29 03:49:25
|
Seems like a strange feature for a video codec
|
|
|
DZgas ะ
|
|
fab
Vvc
|
|
2022-11-29 04:03:58
|
vvc is bruh
|
|
|
Traneptora
|
|
Demiurge
I didn't know AV1 had splines...
|
|
2022-11-29 04:15:26
|
I'm referring to JXL
|
|
|
WoofinaS
|
|
DZgas ะ
The problem is that this codec is from corporations and for corporations. It has such a big gap in compression speed that I have no idea how I can use it. It's literally like "the strongest power in the universe is free here and now, but you're too weak to use it"
|
|
2022-11-29 04:42:08
|
People said the same things about x26*. It just seems like this time around they don't actually care about optimizing, except rav1e, which itself has a long way to go with its few developers
|
|
|
lithium
|
|
_wb_
For lossy non-photo we have plenty of mostly unexplored coding tools like splines and delta palette, but I think realistically those will only get explored if the format gets enough traction to make it worthwhile to invest time in encoder improvements for specific relatively uncommon image content types (most images where raster formats are useful in the first place are photographic).
|
|
2022-11-29 05:26:19
|
In my opinion, non-photo isn't an uncommon image content type;
I already see a lot of non-photo image content on the internet,
like 2D mobile game apps (lossy JPEG + lossy palette PNG),
2D art, manga, and comics in online galleries, e-book shops, and forums (lossy JPEG q80/q90/q95/q99, lossy WebP),
and a small number of manga sites are starting to use AVIF.
In my tests, the AV1 codec's CDEF and local palette features are good for non-photo content:
CDEF can reduce artifacts in high-contrast edge/line areas,
and local palette can handle some images that don't suit the DCT,
but AV1 can't preserve some grain and noise.
I understand mathematically lossless is best for non-photo content,
but we still don't have a great encoder for lossy non-photo content.
I think that if libjxl could cover those lossy non-photo use cases,
it could probably attract more non-photo content providers and users to libjxl.
|
|
|
_wb_
|
|
lithium
In my opinion, non-photo isn't an uncommon image content type;
I already see a lot of non-photo image content on the internet,
like 2D mobile game apps (lossy JPEG + lossy palette PNG),
2D art, manga, and comics in online galleries, e-book shops, and forums (lossy JPEG q80/q90/q95/q99, lossy WebP),
and a small number of manga sites are starting to use AVIF.
In my tests, the AV1 codec's CDEF and local palette features are good for non-photo content:
CDEF can reduce artifacts in high-contrast edge/line areas,
and local palette can handle some images that don't suit the DCT,
but AV1 can't preserve some grain and noise.
I understand mathematically lossless is best for non-photo content,
but we still don't have a great encoder for lossy non-photo content.
I think that if libjxl could cover those lossy non-photo use cases,
it could probably attract more non-photo content providers and users to libjxl.
|
|
2022-11-29 05:30:34
|
Yeah I agree it is not uncommon and it is worthwhile to make encoder improvements in this area, but I would roughly estimate 80% of internet images are photographic, and in the remaining 20% probably at least half would be best done as SVG instead of as a raster image. Still, it's not unimportant, but the first priority is to get jxl supported by browsers, because outside the web, the use cases for lossy non-photo are quite limited in my opinion.
|
|
|
Demiurge
|
2022-11-29 05:53:25
|
Looks like the pngquant developers are here on this server.
|
|
2022-11-29 05:53:42
|
pngquant is great for lossy compression of non photo content.
|
|
2022-11-29 05:54:14
|
Maybe some tricks and ideas along those lines could be incorporated into the libjxl encoder
|
|
2022-11-29 05:54:54
|
what's CDEF?
|
|
|
WoofinaS
|
2022-11-29 06:46:45
|
Isn't lossy modular pretty much pngquant++?
|
|
|
_wb_
|
2022-11-29 06:50:25
|
There are various ways to do lossy modular
|
|
2022-11-29 06:51:00
|
The current default way is to do Squeeze and quantize the residuals
|
|
|
Traneptora
|
2022-11-29 06:55:05
|
lossy modular basically uses the haar wavelet transform, mathematically speaking
|
|
2022-11-29 06:55:08
|
although it's a modified version
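for intuition, a minimal 1-D sketch of that average/residual split (simplified: the real Squeeze adds a prediction/"tendency" step, and lossy modular then quantizes the residuals):
```python
# Haar-style squeeze: half-resolution averages plus residuals.
def squeeze_1d(samples):
    pairs = list(zip(samples[::2], samples[1::2]))
    avgs = [(a + b) // 2 for a, b in pairs]   # low-frequency half
    residuals = [a - b for a, b in pairs]     # high-frequency half
    return avgs, residuals

print(squeeze_1d([10, 12, 80, 82, 81, 79, 10, 11]))
# smooth regions give near-zero residuals, which quantize/compress well
```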
|
|
|
_wb_
Yeah I agree it is not uncommon and it is worthwhile to make encoder improvements in this area, but I would roughly estimate 80% of internet images are photographic, and in the remaining 20% probably at least half would be best done as SVG instead of as a raster image. Still, it's not unimportant, but the first priority is to get jxl supported by browsers, because outside the web, the use cases for lossy non-photo are quite limited in my opinion.
|
|
2022-11-29 06:55:37
|
I think it's not quite fair to say that there's only vector content and photographic content with no in between
|
|
2022-11-29 06:55:44
|
a good example of an in-between is digital art
|
|
2022-11-29 06:55:48
|
or, say, game assets
|
|
|
joppuyo
|
2022-11-29 06:55:55
|
I'm not a fan of PNGQuant due to the dithering. It reminds me of GIF compression. It's a poor way to treat your high quality images. But sometimes it works well for certain images like logos etc.
|
|
|
Traneptora
|
2022-11-29 06:56:33
|
or is digital art considered close enough to photographic that it counts?
|
|
|
_wb_
|
|
Traneptora
or is digital art considered close enough to photographic that it counts?
|
|
2022-11-29 07:02:42
|
Some of it is, some of it isn't. I would say 70% of images are real photos, 10% are photo-like art/renders/etc, 10% are non-photo where raster still makes sense, 10% are non-photo where vector makes more sense
|
|
2022-11-29 07:03:08
|
(very rough ballparky estimates, I have no real data on this)
|
|
2022-11-29 07:04:04
|
There are also quite some mixed-content images though, like product images with a clean synthetic background and text overlays
|
|
|
Traneptora
|
2022-11-29 07:06:19
|
one of the issues with just saying "use SVG" is that a lot of authoring workflows involve starting with vector, rasterizing, and then working with the raster results
|
|
2022-11-29 07:06:34
|
or, something that could theoretically be done in vector is often easier for digital artists with tablets to do in raster
|
|
2022-11-29 07:10:47
|
I'm thinking, say, something like this
|
|
2022-11-29 07:11:03
|
this is synthetic
|
|
2022-11-29 07:11:08
|
but it's not something that's easy to do with SVG
|
|
2022-11-29 07:11:16
|
there's some rendering passes with glow for example
|
|
|
pshufb
|
2022-11-29 07:15:11
|
at least in my experience, non photographic content that isn't best as svg is already compressed far better with jxl than with anything else, whether you're using lossy compression or lossless
|
|
|
uis
|
|
Demiurge
what's CDEF?
|
|
2022-11-29 07:29:31
|
Constrained Directional Enhancement Filter?
|
|
|
_wb_
|
|
Traneptora
one of the issues with just saying "use SVG" is that a lot of authoring workflows involve starting with vector, rasterizing, and then working with the raster results
|
|
2022-11-29 07:59:09
|
Yeah I agree, it's certainly not always an option or at least not a convenient one
|
|
|
Demiurge
|
|
WoofinaS
Isn't lossy modular pretty much pngquant++?
|
|
2022-11-29 10:05:43
|
no, not even close. Lossy modular should never be used in its current condition.
|
|
2022-11-29 10:36:24
|
It's no better at non-photographic content than DCT is, as far as I can tell.
|
|
|
Traneptora
|
|
Demiurge
no, not even close. Lossy modular should never be used in its current condition.
|
|
2022-11-29 10:43:21
|
lossy modular is used in modular-encoded LF frames that store the LF coefficients for VarDCT
|
|
2022-11-29 10:43:35
|
it's also used for the alpha channel
|
|
2022-11-29 10:43:55
|
there's minimal reason to *force* it to be used *instead of* VarDCT
|
|
2022-11-29 10:44:15
|
but it's not like it's a useless thing
|
|
2022-11-29 10:44:30
|
also lossy modular is better at *very very* high distances
|
|
2022-11-29 10:44:32
|
like -d10 and higher
|
|
|
|
veluca
|
2022-11-29 10:45:40
|
actually it's also better at very large scales, i.e. you *could* do LF coefficients with DCT but it works way worse than near-lossless modular
|
|
|
Traneptora
|
2022-11-29 10:46:59
|
higher distance settings tend to use larger varblocks for details, right?
|
|
|
|
veluca
|
|
Traneptora
|
2022-11-29 10:47:26
|
I wonder if there was a way to detect smooth detail vs sharp edges
|
|
2022-11-29 10:47:38
|
and use larger varblocks for rough surfaces like grass and smaller varblocks for eyes and lines
|
|
2022-11-29 10:48:06
|
i.e. if you're already doing saliency detection for ordering the Groups, I wonder if you could do something similar for local adaptive quant
|
|
|
|
veluca
|
2022-11-29 10:48:36
|
one of the problems is that all the ways I thought of to do this kind of thing kinda fail to tell the difference between "field of grass" (large blocks = good) and "tree branches on a flat bg" (large blocks = bad)
|
|
|
Traneptora
|
2022-11-29 10:49:20
|
I see, so the heuristics aren't good
|
|
2022-11-29 10:49:31
|
or rather, you'd need good heuristics
|
|
2022-11-29 10:49:54
|
I wonder if you could run a quick object identification for *people* and then weight details more highly where people are present in the photo
|
|
2022-11-29 10:50:12
|
since high-quality eyes is higher value than high-quality grass behind the person
|
|
2022-11-29 10:50:43
|
or is that kind of encoder improvement considered lower priority than fixing all the 1.0 issues
|
|
|
Demiurge
|
|
veluca
actually it's also better at very large scales, i.e. you *could* do LF coefficients with DCT but it works way worse than near-lossless modular
|
|
2022-11-29 10:52:22
|
That's a good way to explain it. It's better at very large, low-freq stuff.
|
|
|
Traneptora
|
2022-11-29 10:53:07
|
I think it's more that at very zoomed-out scales, the pixels tend to have very little correlation to nearby pixels
|
|
2022-11-29 10:53:14
|
and modular mode excels more in that scenario than vardct
|
|
|
|
veluca
|
|
Traneptora
I wonder if you could run a quick object identification for *people* and then weight details more highly where people are present in the photo
|
|
2022-11-29 10:53:36
|
You can (could?) do that actually with an external saliency finder of some sort
|
|
|
Demiurge
|
2022-11-29 10:53:47
|
But -m 1 lossy is still nothing at all comparable to pngquant and is usually less efficient than lossless for line art
|
|
|
Traneptora
|
2022-11-29 10:54:11
|
-d 1 --modular is not less efficient than lossless
|
|
2022-11-29 10:54:15
|
since it has all the same tools available
|
|
|
Demiurge
|
2022-11-29 10:54:42
|
Like wb explained earlier, the Squeeze transform tends to turn low-entropy information into high-entropy information for lots of non-photographic images
|
|
|
Traneptora
|
2022-11-29 10:55:07
|
it depends on what counts as "photographic"
|
|
|
Demiurge
|
2022-11-29 10:55:45
|
-d 1 --modular in its current default settings and implementation often produces larger filesizes than lossless modular
|
|
2022-11-29 10:56:19
|
And lossless modular uses some tools that are not used by lossy modular
|
|
2022-11-29 10:56:48
|
at least in the current implementation of the encoder
|
|
2022-11-29 11:05:47
|
I think wb said in the past that in lossless mode you can take advantage of tools and assumptions that are too risky to make in lossy mode
|
|
|
Traneptora
|
2022-11-29 11:31:47
|
yea that is true
|
|
|
DZgas ะ
|
2022-11-30 12:04:13
|
<:SadCheems:890866831047417898>
|
|
|
_wb_
|
2022-11-30 06:15:49
|
Good question what would be best for that image
|
|
2022-11-30 06:17:09
|
I didn't make it myself, I made a spreadsheet table and a graphics designer turned it into that image
|
|
2022-11-30 06:17:44
|
They did it in Photoshop, I have a PSD version of it somewhere
|
|
2022-11-30 06:20:48
|
That PSD has the text as vector, including I think the content of the table
|
|
2022-11-30 06:21:49
|
The rest is raster graphics but could actually have been vector too
|
|
2022-11-30 06:23:56
|
I think this would have been better as an svg, but I assume the creator for some reason preferred working in Photoshop rather than Illustrator or Indesign or whatever
|
|
|
OkyDooky
|
2022-11-30 10:35:12
|
Heyo!
Is the JPEG-XL file format and codestream described anywhere other than the ISO docs?
I'm trying to write a program that splits .jxls into individual DCT planes, but I couldn't find a publicly available document anywhere. (And I can't afford the ISO docs)
|
|
|
Traneptora
|
|
Heyo!
Is the JPEG-XL file format and codestream described anywhere other than the ISO docs?
I'm trying to write a program that splits .jxls into individual DCT planes, but I couldn't find a publicly available document anywhere. (And I can't afford the ISO docs)
|
|
2022-11-30 12:16:16
|
that's almost as difficult as just writing a decoder btw, just so you know
|
|
2022-11-30 12:16:49
|
since the DCT blocks are encoded in a modular stream, which itself is more than 50% of the code of a decoder
|
|
|
_wb_
|
2022-11-30 12:25:46
|
Yes, DC and LF coefficients are modular-encoded, and HF AC is encoded per 256x256 group with the 3 channels interleaved iirc, so not trivial at all to split
|
|
|
OkyDooky
|
2022-11-30 12:56:32
|
I see. I'll find another solution for the moment then. But are there any docs out there? I'll try to poke at it in the long term
|
|
|
|
veluca
|
|
Traneptora
that's almost as difficult as just writing a decoder btw, just so you know
|
|
2022-11-30 12:59:34
|
I wouldn't necessarily say that (at least, not if you compare against writing a reasonably efficient decoder), but it is at the very least not a quick project
|
|
|
OkyDooky
|
2022-11-30 12:59:43
|
Also I'm wondering if I should report this as an issue: when decoding a truncated progressive jxl, I see `Input file is truncated and allow_partial_input was disabled.` despite `--allow_partial_files`, and `--allow_partial_input` is not a valid cli option.
|
|
|
|
veluca
|
2022-11-30 01:01:22
|
you probably should
|
|
|
Moritz Firsching
|
2022-11-30 01:03:11
|
I thought we had fixed that?!
|
|
|
_wb_
|
|
I see. I'll find another solution for the moment then. But are there any docs out there? I'll try to poke at it in the long term
|
|
2022-11-30 01:03:57
|
This is the most recent draft I have for the 2nd edition of 18181-1.
|
|
|
OkyDooky
|
2022-11-30 01:23:31
|
I'm on 0.7, it's probably fixed in git maybe?
(<@795684063032901642>)
|
|
|
Moritz Firsching
|
2022-11-30 01:30:53
|
Looks like this is still there, please open an issue.
|
|
|
OkyDooky
|
2022-11-30 01:53:30
|
Yess, I have no expectations of writing a fast or complete decoder; libjxl will be more suitable for that. I'll try to cater it to education and quick hacks.
(<@179701849576833024>)
|
|
2022-11-30 01:53:52
|
Thank you so much!!
(<@794205442175402004>)
|
|
2022-11-30 01:54:48
|
Done :)) https://github.com/libjxl/libjxl/issues/1935
(<@795684063032901642>)
|
|
|
daniilmaks
|
2022-11-30 01:56:40
|
is there such a thing as an *XYB cube* in the sense that there are RGB cube representations?
I mean as an end-user visualisation/demo/briefing, in which each axis gets assigned X, Y and B respectively, instead of RGB, however with the real colors they would map to in an RGB output.
|
|
|
Traneptora
|
2022-11-30 01:58:40
|
@Aravind#0000 do note that jxlatte can visualize varblocks
|
|
2022-11-30 02:04:37
|
|
|
|
_wb_
|
|
is there such a thing as an *XYB cube* in the sense that there are RGB cube representations?
I mean as an end-user visualisation/demo/briefing, in which each axis gets assigned X, Y and B respectively, instead of RGB, however with the real colors they would map to in an RGB output.
|
|
2022-11-30 02:06:02
|
would be interesting to make such a visualization
|
|
|
Traneptora
|
2022-11-30 02:06:31
|
what would the range be? -1/32 to 1/32, -1 to 1, -1 to 1?
|
|
2022-11-30 02:06:40
|
for X, Y, B respectively?
|
|
|
_wb_
|
2022-11-30 02:07:58
|
X is something like that, Y and B are more like 0 to 1
|
|
2022-11-30 02:08:49
|
this was my attempt at visualizing things
|
|
2022-11-30 02:09:19
|
that's XYB on the left, YCbCr in the middle, YCoCg on the right
|
|
2022-11-30 02:09:54
|
in slices of constant Y and the two chroma components on the two axes
|
|
|
OkyDooky
|
2022-11-30 02:11:18
|
Beautiful! It'll be so helpful in writing the decoder! And also that's a really pretty visualisation!
(<@853026420792360980>)
|
|
|
Traneptora
|
2022-11-30 02:11:26
|
thanks :)
|
|
|
daniilmaks
|
2022-11-30 02:11:49
|
the sRGB space only takes up a portion of the XYB cube, doesn't it? Or am I thinking about it wrong?
|
|
|
sklwmp
|
2022-11-30 02:11:49
|
i do have to ask, why is it a bot that's talking?
|
|
|
Traneptora
|
2022-11-30 02:11:56
|
that's matrix integration
|
|
2022-11-30 02:12:04
|
matrix is a discord competitor
|
|
|
sklwmp
|
|
Traneptora
|
2022-11-30 02:12:24
|
and we have a cross-link thing going. the way it projects messages from there to here is by using webhooks, which get the bot tag
|
|
|
sklwmp
|
2022-11-30 02:12:49
|
okay, thanks, it was just a bit confusing since clicking the bot profile doesn't really give any indication as to what it does
|
|
2022-11-30 02:12:59
|
i didn't know we had a matrix bridge going on here, neat
|
|
2022-11-30 02:13:12
|
anyway back on topic, sorry for the side notes
|
|
|
Traneptora
|
|
Beautiful! It'll be so helpful in writing the decoder! And also that's a really pretty visualisation!
(<@853026420792360980>)
|
|
2022-11-30 02:13:31
|
essentially the way it generates colors is it takes the `(order type times phi) (mod 1)` to get a "hue" value (not really hue though)
|
|
2022-11-30 02:14:01
|
scales it so it's between `0` and `2pi`
|
|
|
OkyDooky
|
2022-11-30 02:14:34
|
Innit!? The #daala folks on IRC are integrating with matrix too. It's so nifty! But please let me know if using matrix's features creates noise in the discord channel. Apparently it does for IRC.
(<@557099078337560596>)
|
|
|
Traneptora
|
2022-11-30 02:15:42
|
whatever that angle is, it divides the circle into 3 portions. essentially `th`, `th + 2pi/3` and `th + 4pi/3`, and lets Red, Green, and Blue factors be the cosines of these respective angles
|
|
2022-11-30 02:16:18
|
this is basically just a way to generate nice fairly evenly spaced colors from a seed value
|
|
2022-11-30 02:16:25
|
the seed being the block type, in this case
|
|
2022-11-30 02:17:09
|
another way I've seen it done is to take a hue, and fix a saturation and a lightness, generate hue procedurally, and then convert it to RGB
|
|
2022-11-30 02:17:26
|
but this method doesn't require me to go from HSV to RGB
|
|
2022-11-30 02:17:48
|
the adding phibar and then modding 1 trick is something I saw in a blog post probably a decade ago and it works surprisingly well
|
|
2022-11-30 02:18:12
|
idk why, but taking phibar, 2phibar, 3phibar, etc. (mod 1) creates nicely evenly spaced colors
|
|
2022-11-30 02:18:33
|
phi and phibar here are identical since they're off by 1 from each other, so mod 1 erases that difference
|
|
|
OkyDooky
|
2022-11-30 02:25:20
|
You should *definitely* write that on Discussions on https://github.com/thebombzen/jxlatte! It's beautiful and the colour generation is very useful in other visualisations too!
|
|
|
daniilmaks
|
|
_wb_
this was my attempt at visualizing things
|
|
2022-11-30 02:41:38
|
any clues to what caused those artifacts?
|
|
|
|
veluca
|
|
_wb_
this was my attempt at visualizing things
|
|
2022-11-30 02:55:26
|
feels about right
|
|
|
4ravind
|
|
Traneptora
whatever that angle is, it divides the circle into 3 portions. essentially `th`, `th + 2pi/3` and `th + 4pi/3`, and lets Red, Green, and Blue factors be the cosines of these respective angles
|
|
2022-11-30 03:20:50
|
I tried to write it to understand it. Is this about right?
```python
th = ((order * math.pi) % 1 ) # is between 0 and 1
th = th * 2 * math.pi # scale it to 0 and 2 pi
r = math.cos(th)
g = math.cos(th + math.pi * 2/3)
b = math.cos(th + math.pi * 4/3)
# r, g, b are between -1 and 1, being outputs of cos
# make rgb be between 0 and 255
r = int((r + 1) * 255/2)
g = int((g + 1) * 255/2)
b = int((b + 1) * 255/2)
# Print as html/css hexcodes
print(f'#{r:02x}{g:02x}{b:02x}')
```
For every kind of dct, the `order` is their index in this enum: https://github.com/thebombzen/jxlatte/blob/c7f50913880dabe9d1a7001ede6261b1851945f8/java/com/thebombzen/jxlatte/frame/vardct/TransformType.java#L6
|
|
|
Traneptora
|
|
I tried to write it to understand it. Is this about right?
```python
th = ((order * math.pi) % 1 ) # is between 0 and 1
th = th * 2 * math.pi # scale it to 0 and 2 pi
r = math.cos(th)
g = math.cos(th + math.pi * 2/3)
b = math.cos(th + math.pi * 4/3)
# r, g, b are between -1 and 1, being outputs of cos
# make rgb be between 0 and 255
r = int((r + 1) * 255/2)
g = int((g + 1) * 255/2)
b = int((b + 1) * 255/2)
# Print as html/css hexcodes
print(f'#{r:02x}{g:02x}{b:02x}')
```
For every kind of dct, the `order` is their index in this enum: https://github.com/thebombzen/jxlatte/blob/c7f50913880dabe9d1a7001ede6261b1851945f8/java/com/thebombzen/jxlatte/frame/vardct/TransformType.java#L6
|
|
2022-11-30 03:21:14
|
times phi, not pi
|
|
2022-11-30 03:22:12
|
```python
th = (order * phibar) % 1
th = th * 2 * math.pi
r = math.cos(th)
g = math.cos(th + math.pi * 2/3)
b = math.cos(th + math.pi * 4/3)
```
|
|
2022-11-30 03:22:50
|
do note that isn't how you convert from float to 0-255
|
|
|
I tried to write it to understand it. Is this about right?
```python
th = ((order * math.pi) % 1 ) # is between 0 and 1
th = th * 2 * math.pi # scale it to 0 and 2 pi
r = math.cos(th)
g = math.cos(th + math.pi * 2/3)
b = math.cos(th + math.pi * 4/3)
# r, g, b are between -1 and 1, being outputs of cos
# make rgb be between 0 and 255
r = int((r + 1) * 255/2)
g = int((g + 1) * 255/2)
b = int((b + 1) * 255/2)
# Print as html/css hexcodes
print(f'#{r:02x}{g:02x}{b:02x}')
```
For every kind of dct, the `order` is their index in this enum: https://github.com/thebombzen/jxlatte/blob/c7f50913880dabe9d1a7001ede6261b1851945f8/java/com/thebombzen/jxlatte/frame/vardct/TransformType.java#L6
|
|
2022-11-30 03:23:20
|
yea, it's also just the enum.type
|
|
2022-11-30 03:23:39
|
in python, it'd be something like
```python
rscaled = int(r * 255 + 0.5)
```
|
|
2022-11-30 03:23:56
|
basically look at what 0 maps to, with your method
|
|
2022-11-30 03:24:03
|
it maps to 127
|
|
2022-11-30 03:24:13
|
which is probably not what you wanted it to map to
|
|
2022-11-30 03:24:56
|
so you scale it and then round
|
|
2022-11-30 03:25:42
|
in this case `phibar = 0.5 * (math.sqrt(5) - 1)`
|
|
|
OkyDooky
|
2022-11-30 03:27:26
|
but then some of the values would be negative? Since cos is -1 to 1 in 0 to 2pi
(<@853026420792360980>)
|
|
|
Traneptora
|
2022-11-30 03:27:39
|
oh yea, you add 1 and then divide by 2
|
|
2022-11-30 03:27:43
|
missed that part
|
|
2022-11-30 03:28:07
|
just map `[-1, 1]` to `[0, 1]` linearly
|
|
|
OkyDooky
|
2022-11-30 03:30:25
|
Hey, that's what `((r + 1) * 255/2)` does; it's the same as `((r + 1)/2 * 255)`, isn't it?
|
|
|
Traneptora
|
2022-11-30 03:30:39
|
I made an edit, I don't know if that went over to matrix
|
|
2022-11-30 03:30:54
|
`int(r * 255 + 0.5)`
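putting the corrections together, roughly (a sketch; `order_to_hex` is just a made-up name for illustration):
```python
import math

phibar = 0.5 * (math.sqrt(5) - 1)

def order_to_hex(order):
    th = (order * phibar) % 1           # evenly spaced seed in [0, 1)
    th *= 2 * math.pi                   # scale to [0, 2*pi)
    rgb = (math.cos(th),
           math.cos(th + math.pi * 2 / 3),
           math.cos(th + math.pi * 4 / 3))
    # map [-1, 1] to [0, 1] linearly, then scale to 0-255 and round
    r, g, b = (int((c + 1) / 2 * 255 + 0.5) for c in rgb)
    return f'#{r:02x}{g:02x}{b:02x}'

print(order_to_hex(3))
```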
|
|
|
OkyDooky
|
2022-11-30 03:31:10
|
Ah, i missed your edit about the miss
|
|
|
4ravind
|
2022-11-30 03:32:05
|
|
|
2022-11-30 03:34:12
|
That's a nice palette! I'll use it when making graphs
|
|
|
diskorduser
|
|
Traneptora
I made an edit, I don't know if that went over to matrix
|
|
2022-11-30 03:41:57
|
Matrix doesn't show discord edits
|
|
|
Traneptora
|
2022-11-30 03:57:49
|
apparently not <:kek:857018203640561677>
|
|
|
OkyDooky
|
2022-11-30 03:58:55
|
image.png
|
|
|
Traneptora
|
2022-11-30 03:59:22
|
oh, it does, huh.
|
|
|
_wb_
|
|
any clues to what caused those artifacts?
|
|
2022-11-30 03:59:30
|
IIRC I made those by taking all 2^24 8-bit sRGB colors, converting them to XYB/YCbCr/YCoCg, and painting a pixel in the corresponding coordinates
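something like this kind of loop (a sketch; `rgb_to_xyb` and `put_pixel` are hypothetical stand-ins for the real conversion and plotting code):
```python
# Paint every 8-bit sRGB color at its converted coordinates; sampling
# this sparsely in a big cube is what leaves stray unpainted pixels.
from itertools import product

def paint_slices(rgb_to_xyb, put_pixel):
    for r, g, b in product(range(256), repeat=3):   # all 2**24 sRGB colors
        x, y, bb = rgb_to_xyb(r, g, b)
        put_pixel(luma_slice=y, u=x, v=bb)          # chroma on the two axes
```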
|
|
|
OkyDooky
|
2022-11-30 03:59:51
|
Yeah i just have bad eyes
|
|
|
daniilmaks
|
|
_wb_
IIRC I made those by taking all 2^24 8-bit sRGB colors, converting them to XYB/YCbCr/YCoCg, and painting a pixel in the corresponding coordinates
|
|
2022-11-30 04:02:17
|
I'm talking about these black pixels
|
|
2022-11-30 04:02:57
|
they only appear in xyb, which is why I asked
|
|
2022-11-30 04:18:31
|
<@794205442175402004> I've noticed what could be a noticeable flaw in xyb space as a perceptual definition, if it's as shown in the slices. Most of the chromatic resolution is highly focused on blue and the areas surrounding blue (bottom left edge), giving very little space to the great range of colors between red and green (top right edge). *This is exactly what I had noticed in my tests.* There's not enough separation between green and red.
See image. Tons of color variance concentrated in the top edge, with very sparse variation going south. This is evident mostly from darks to mid-high tones.
|
|
|
_wb_
|
|
I'm talking about these black pixels
|
|
2022-11-30 04:20:32
|
yeah, that's just a consequence of how I drew it, which is not sampling densely enough to fill all slices completely
|
|
|
<@794205442175402004> I've noticed what could be a noticeable flaw in xyb space as a perceptual definition, if it's as shown in the slices. Most of the chromatic resolution is highly focused on blue and the areas surrounding blue (bottom left edge), giving very little space to the great range of colors between red and green (top right edge). *This is exactly what I had noticed in my tests.* There's not enough separation between green and red.
See image. Tons of color variance concentrated in the top edge, with very sparse variation going south. This is evident mostly from darks to mid-high tones.
|
|
2022-11-30 04:21:45
|
well this is mostly a consequence of how I scaled X versus B to fit the little squares. How they are actually scaled in JXL will be different
|
|
|
daniilmaks
|
2022-11-30 04:22:40
|
ok that would make more sense. however it does match my observation in tests.
|
|
2022-11-30 04:23:10
|
...that there's too much blue bias
|
|
|
_wb_
|
2022-11-30 04:23:57
|
how do you test this?
|
|
|
daniilmaks
|
2022-11-30 04:24:47
|
visually inspecting the images?
|
|
2022-11-30 04:25:29
|
I dunno how to put it into words
|
|
|
Traneptora
|
2022-11-30 04:26:34
|
I wonder how difficult it would be to write an intelligent ANS encoder
|
|
|
_wb_
|
2022-11-30 04:31:52
|
intelligent in what sense?
|
|
|
Traneptora
|
2022-11-30 04:32:00
|
makes good decisions
|
|
2022-11-30 04:32:19
|
or is ANS encoding deterministic?
|
|
|
|
veluca
|
2022-11-30 04:32:50
|
ANS encoding *per se* doesn't give you that many choices, no
|
|
|
_wb_
|
2022-11-30 04:33:31
|
the ANS encoding itself is deterministic, but the context clustering / histograms can be chosen
|
|
|
Traneptora
|
2022-11-30 04:33:38
|
I was trying to figure out how to write an ANS encoder by reading Annex O
|
|
2022-11-30 04:33:44
|
but there's really not much there in the way of guidance
|
|
|
|
veluca
|
2022-11-30 04:33:52
|
heh, yeah, that's not entirely trivial
|
|
|
Traneptora
|
2022-11-30 04:34:17
|
I figure it's approximately as hard as writing the decoder from scratch
|
|
2022-11-30 04:34:23
|
although I'm guessing there's some overlap
|
|
2022-11-30 04:34:52
|
so if I already have a working decoder, idk how hard it would be to write an encoder
|
|
|
_wb_
|
2022-11-30 04:41:49
|
Assuming the ctx model/clustering is fixed, it's basically just counting tokens per ctx and using that as the histogram, and then the only annoying thing is that you need to write the stream in reverse order
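in toy form (not libjxl's actual API, just the counting idea):
```python
from collections import Counter

def build_histograms(tokens):
    # tokens: iterable of (context_id, symbol) pairs from a fixed ctx model
    hists = {}
    for ctx, sym in tokens:
        hists.setdefault(ctx, Counter())[sym] += 1
    return hists   # per-context counts become the ANS distributions

print(build_histograms([(0, 3), (0, 3), (0, 7), (1, 0)]))
```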
|
|
|
|
veluca
|
2022-11-30 04:41:51
|
*any* encoder, probably not very
|
|
2022-11-30 04:42:06
|
a *good* encoder, eh
|
|
|
_wb_
|
2022-11-30 04:42:17
|
Doing ctx modeling/clustering well is trickier though
|
|
|
Traneptora
|
2022-11-30 04:43:46
|
in theory you could just have one cluster
|
|
2022-11-30 04:43:50
|
but that sounds kind of inefficient
|
|
|
|
veluca
|
2022-11-30 04:44:03
|
it is
|
|
|
Traneptora
|
2022-11-30 04:44:25
|
you could also do no clustering, i.e. one distribution per context
|
|
2022-11-30 04:44:27
|
but that sounds horrible
|
|
|
|
veluca
|
2022-11-30 04:45:59
|
it is also impossible
|
|
2022-11-30 04:46:23
|
there is a limit of 256 distributions, and way more than 256 potential distributions in vardct
|
|
|
Traneptora
|
2022-11-30 04:53:35
|
right, good point
|
|
|
_wb_
|
2022-11-30 05:23:06
|
You can do some simple fixed clustering though
|
|
|
uis
|
|
Traneptora
I was trying to figure out how to write an ANS encoder by reading Annex O
|
|
2022-11-30 06:06:10
|
Search for ANS at encode.su. Maybe Jarek wrote a guide about it. Or look into his papers.
|
|
|
Jarek
|
2022-11-30 06:43:18
|
lots of explanations at the top of https://encode.su/threads/2078-List-of-Asymmetric-Numeral-Systems-implementations
|
|
|
_wb_
|
|
Jarek
lots of explanations at the top of https://encode.su/threads/2078-List-of-Asymmetric-Numeral-Systems-implementations
|
|
2022-11-30 06:56:31
|
Are you the actual Jarek Duda? We should give you some honorary discord role then
|
|
|
Jarek
|
2022-11-30 06:58:00
|
thanks, I'm just watching from time to time
|
|
|
uis
|
|
pandakekok9
Hello everyone, I'm one of the devs contributing to Pale Moon's platform, and am the one who did the initial work of adding JPEG-XL support. :)
|
|
2022-11-30 08:05:25
|
This is the reason why I added the Pale Moon overlay to my gentoo
|
|
|
Jarek
lots of explanations at the top of https://encode.su/threads/2078-List-of-Asymmetric-Numeral-Systems-implementations
|
|
2022-11-30 08:12:18
|
Wow! Didn't expect to see you here. You've done impressive work. Excellent work.
|
|
|
_wb_
Are you the actual Jarek Duda? We should give you some honorary discord role then
|
|
2022-11-30 08:13:01
|
"The Jarek" role
|
|
|
yurume
|
2022-11-30 08:14:28
|
oh holy cow
|
|
2022-11-30 08:15:32
|
(some nit: discord per-user colors are based on the "highest" role, but the "shoulder of giants" role seems not to be that high)
|
|
|
_wb_
|
2022-11-30 08:17:13
|
I like how this discord can bring together so many great people.
|
|
|
Jarek
|
2022-11-30 08:17:31
|
thanks, this development gives some satisfaction ... wonder if Huffman watched the development of jpeg
|
|
|
uis
|
2022-11-30 08:17:49
|
JPEG now means Jarek Picture Encoding Group
|
|
|
_wb_
|
2022-11-30 08:20:35
|
The J in JXL certainly has many meanings
|
|
|
Demiurge
|
|
<@794205442175402004> I've noticed what could be a noticeable flaw in xyb space as a perceptual definition, if it's as shown in the slices. Most of the chromatic resolution is highly focused on blue and the areas surrounding blue (bottom left edge), giving very little space to the great range of colors between red and green (top right edge). *This is exactly what I had noticed in my tests.* There's not enough separation between green and red.
See image. Tons of color variance concentrated in the top edge, with very sparse variation going south. This is evident mostly from darks to mid-high tones.
|
|
2022-11-30 08:27:14
|
In XYB, Blue is quantized separately from Red and Green.
|
|
2022-11-30 08:27:47
|
Since we have fewer S cones
|
|
2022-11-30 08:27:49
|
in our eyes
|
|
2022-11-30 08:28:27
|
humans are less sensitive to blue, so it makes sense to quantize blue with fewer bits than red and green
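illustratively (toy step sizes, not libjxl's actual quant tables):
```python
# A coarser quantization step means fewer distinct values, hence fewer bits.
def quantize(coeff, step):
    return round(coeff / step) * step

print(quantize(10.3, 1.0))   # 10.0 -- fine step, e.g. for red/green
print(quantize(10.3, 4.0))   # 12.0 -- coarse step, e.g. for blue
```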
|
|
2022-11-30 08:28:49
|
That's how XYB works. It's smart.
|
|
2022-11-30 08:29:11
|
Pretty sure the B stands for Blue
|
|
|
_wb_
|
2022-11-30 08:37:27
|
yes, it does
|
|
|
Demiurge
|
2022-11-30 08:37:28
|
https://jpegxl.io/articles/faq/#whatisthexybcolorspace?
|
|
|
_wb_
|
2022-11-30 10:13:33
|
about 150 more members and we can apply for "discovery"
|
|
2022-11-30 10:13:52
|
i.e. get the discord listed somewhere, I guess
|
|
2022-11-30 10:14:16
|
no idea if that would be a good thing or just bring spammers and trolls
|
|
|
Jim
|
|
daniilmaks
|
|
Demiurge
That's how XYB works. It's smart.
|
|
2022-11-30 10:45:21
|
this is exactly backwards from what I pointed out, tho
|
|
2022-11-30 10:50:29
|
although in any case I can't really see how the colors are really mapped without a better visualisation, since as _wb_ said, that one was rather crude.
|
|
|
Demiurge
|
2022-11-30 10:51:47
|
By "singling out" blue, it can be quantized separately and it can use less bits than red/green
|
|
2022-11-30 10:52:12
|
You can adjust the precision of Blue separately
|
|
2022-11-30 10:52:32
|
So you can make Blue less precise and carry fewer bits of information than the other colors
|
|
|
daniilmaks
|
2022-11-30 10:52:32
|
is it possible to selectively increase red-green?
|
|
|
Demiurge
|
2022-11-30 10:53:03
|
Yes, that's called quantization and I believe that's the point of XYB having blue separate from the other colors
|
|
2022-11-30 10:53:20
|
So that way the precision of blue can be adjusted separately from the other colors
|
|
|
daniilmaks
|
|
Demiurge
So you can make Blue less precise and carry fewer bits of information than the other colors
|
|
2022-11-30 10:53:25
|
less precise as in depth or as in more quantized?
|
|
|
Demiurge
|
2022-11-30 10:53:29
|
And given less precision
|
|
|
daniilmaks
|
|
Demiurge
Yes, that's called quantization and I believe that's the point of XYB having blue separate from the other colors
|
|
2022-11-30 10:53:50
|
ok
|
|
|
Demiurge
|
2022-11-30 10:53:52
|
Less precise as in fewer bits
|
|
|
daniilmaks
|
|
Demiurge
Less precise as in fewer bits
|
|
2022-11-30 10:54:04
|
*bits can mean two things*
|
|
|
Demiurge
|
2022-11-30 10:54:16
|
Less information
|
|
|
daniilmaks
|
2022-11-30 10:54:24
|
it can mean bpp or depth
|
|
2022-11-30 10:54:36
|
two different concepts
|
|
|
Demiurge
|
2022-11-30 10:55:12
|
A bit is a measure of information
|
|
|
daniilmaks
|
2022-11-30 10:55:28
|
yes I understand that part
|
|
2022-11-30 10:58:51
|
I'm sorry if I'm making your heads spin with this stuff. It's just that I'm entirely unable to visualize the XYB distribution of colors
|
|
2022-11-30 10:59:47
|
and it's worse when it comes to management of wide gamut
|
|
|
Demiurge
|
2022-11-30 11:00:07
|
One is the sum of red+green and one is the difference or distance between red and green
|
|
2022-11-30 11:00:11
|
And the other is blue.
|
|
2022-11-30 11:00:45
|
And you can choose how many bits of precision you give each one
|
|
2022-11-30 11:01:12
|
Blue, being the least perceptually sensitive one, can be given the least precision
|
|
2022-11-30 11:02:11
|
You can think of one channel being Luminosity, one channel being Red-Green color distance, and one channel being Blue.
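as a toy formula (simplified; real XYB works on nonlinear cone responses, this only shows the opponent-channel shape):
```python
def to_opponent(r, g, b):
    y = (r + g) / 2   # luminosity-ish: sum of red and green
    x = (r - g) / 2   # red-green difference
    return x, y, b    # blue kept as its own channel

print(to_opponent(0.8, 0.2, 0.1))   # (0.3, 0.5, 0.1)
```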
|
|
|
daniilmaks
|
2022-11-30 11:02:22
|
I'm familiar with cone's spectral response stuff
|
|
|
Demiurge
|
2022-11-30 11:03:47
|
I'm relatively new to these ideas.
|
|
|
daniilmaks
|
|
Demiurge
You can think of one channel being Luminosity, one channel being Red-Green color distance, and one channel being Blue.
|
|
2022-11-30 11:03:56
|
that's another thing that confuses me. theoretically you can have a blue gradient that doesn't "affect luminosity" when it clearly does
|
|
|
Demiurge
|
2022-11-30 11:04:09
|
But it's pretty smart
|
|
|
that's another thing that confuses me. theoretically you can have a blue gradient that doesn't "affect luminosity" when it clearly does
|
|
2022-11-30 11:04:54
|
And yes, that's true, but you shouldn't overthink it. That's just how it's defined.
|
|
2022-11-30 11:05:16
|
That's just a consequence of Blue being given its own separate dimension.
|
|
2022-11-30 11:05:50
|
Of course we know that Blue light affects luminosity, not just red and green.
|
|
|
daniilmaks
|
2022-11-30 11:06:01
|
in an almost grayscale picture, doesn't that cause quantization issues, since the axis of colors in the picture has to narrowly travel two of the 3 axes?
|
|
2022-11-30 11:06:18
|
instead of just changing in a single axis?
|
|
2022-11-30 11:07:48
|
instead of having X and B truly zeroed they have to drift slightly causing a loss of efficiency in near-grayscale content
|
|
2022-11-30 11:10:38
|
(at least for the model in my mind)
|
|
|
Demiurge
|
2022-11-30 11:13:12
|
But in this color space Blue is separate because it's convenient for our purposes to single it out. Our purposes being to efficiently separate and quantize the information in an image based on how perceptually relevant the information is. And to answer your question, in an "almost grayscale" image, the color channels would have very little variation regardless and would be very compressible if the first digits always start with "000..."
|
|
2022-11-30 11:16:52
|
If there's very little variation and the numbers always start (or end) with the same digits then the information is very easy to compress or even truncate or quantize.
|
|
2022-11-30 11:18:01
|
And since our eyes are less sensitive to blue and perceive blue as being darker or less luminous compared to other hues it makes sense to be more aggressive when quantizing blue
|
|
|
daniilmaks
|
|
Demiurge
But in this color space Blue is separate because it's convenient for our purposes to single it out. Our purposes being to efficiently separate and quantize the information in an image based on how perceptually relevant the information is. And to answer your question, in an "almost grayscale" image, the color channels would have very little variation regardless and would be very compressible if the first digits always start with "000..."
|
|
2022-11-30 11:18:15
|
I'll report back once I've made more tests on my own.
|
|
|
Demiurge
And since our eyes are less sensitive to blue and perceive blue as being darker or less luminous compared to other hues it makes sense to be more aggressive when quantizing blue
|
|
2022-11-30 11:18:31
|
I've always agreed on this since before we started the conversation.
|
|
|
Demiurge
|
|
I've always agreed on this since before we started the conversation.
|
|
2022-11-30 11:19:13
|
Yes, I understand
|
|
|
daniilmaks
|
|
Demiurge
If there's very little variation and the numbers always start (or end) with the same digits then the information is very easy to compress or even truncate or quantize.
|
|
2022-11-30 11:19:45
|
it does sound like a tested tradeoff so I kinda get it
|
|
|
Demiurge
|
2022-11-30 11:22:32
|
Yeah, I think it was only designed for the purpose of separating Blue and to be computationally efficient.
|
|
2022-11-30 11:24:10
|
And it seems like a very expressive and flexible color space to me, but I am not an expert.
|
|
|
daniilmaks
|
2022-11-30 11:33:01
|
I'm looking forward to xyb tuning options, if they are ever coming
|
|
|
Demiurge
|
2022-12-01 12:10:41
|
I think it's very cheeky and cute (in a good way) that the fjxl encoder utterly destroys QOI.
|
|
2022-12-01 12:11:33
|
I know JXL is not as simple and you can't describe the file format on a single piece of paper like you can with QOI, which is a big part of the appeal of QOI...
|
|
2022-12-01 12:12:06
|
But it's still awesome how there's a blazing fast lossless JXL encoder like that.
|
|
2022-12-01 12:12:30
|
that essentially outperforms everything else
|
|
|
|
afed
|
2022-12-01 12:26:07
|
yeah, but also, depending on the compiler and flags, performance can be very different
https://discord.com/channels/794206087879852103/803645746661425173/1044957884292792391
|
|
|
Demiurge
|
2022-12-01 02:47:37
|
has fjxl been beaten?
|
|
|
_wb_
|
|
in an almost grayscale picture, doesn't that cause quantization issues, since the axis of colors in the picture has to narrowly travel two of the 3 axes?
|
|
2022-12-01 06:04:29
|
In practice, we use Chroma from Luma to encode B as (B-Y), and things are scaled so that grayscale images have both chroma components exactly equal to zero
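schematically (toy numbers, not the real transform):
```python
# With chroma-from-luma storing B as (B - Y), a gray pixel has
# X == 0 and B - Y == 0: both chroma channels are exactly zero.
Y, X, B = 0.5, 0.0, 0.5    # a mid-gray in this toy scaling
print((X, B - Y))          # (0.0, 0.0)
```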
|
|
|
diskorduser
|
2022-12-01 08:14:57
|
Is there any way to compress even further by forcing 4:2:0 on jxl?
|
|
|
_wb_
|
2022-12-01 08:55:47
|
420 is only defined for ycbcr in jxl, not for xyb
|
|
2022-12-01 08:56:40
|
There is no real difference in compression between doing 420 and just zeroing the corresponding dct coeffs in jxl
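conceptually something like this (toy illustration; keeping only the low-frequency quarter of a chroma block roughly corresponds to halved chroma resolution):
```python
# Zero the high-frequency chroma DCT coefficients of an 8x8 block,
# keeping only the 4x4 low-frequency corner ("420-like").
def zero_hf(block):
    return [[c if (y < 4 and x < 4) else 0 for x, c in enumerate(row)]
            for y, row in enumerate(block)]
```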
|
|
|
Jyrki Alakuijala
|
|
_wb_
There is no real difference in compression between doing 420 and just zeroing the corresponding dct coeffs in jxl
|
|
2022-12-01 10:56:00
|
and if you have a few very high dct values, zeroing them is going to create disproportionate error for the local savings
|
|
2022-12-01 10:56:40
|
if your input source is yuv420 then it has some consistent weird image properties that give other yuv420 codecs about a 10 % advantage over JPEG XL's always yuv444 coding
|
|
2022-12-01 10:57:23
|
but JPEG XL has about 25 % advantage over that coding otherwise, so you are still left with ~15 % advantage
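(back-of-envelope, treating those as multiplicative: 1.25 / 1.10 ≈ 1.14, i.e. roughly a 15 % net advantage)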
|
|
2022-12-01 10:58:10
|
(or if it is a primitive coding like JPEG1, then 51 % and 40 % or so)
|
|
|
daniilmaks
|
|
_wb_
In practice, we use Chroma from Luma to encode B as (B-Y), and things are scaled so that grayscale images have both chroma components exactly equal to zero
|
|
2022-12-01 12:59:10
|
now I'm even more confused
|
|
2022-12-01 12:59:44
|
does that mean that almost grayscale images are encoded very differently from perfect grayscale?
|
|
|
_wb_
|
2022-12-01 02:00:17
|
no
|
|
2022-12-01 02:03:24
|
it just means both X and B are in practice (i.e. taking into account the default CfL) encoded so that if both are 0, it's gray.
|
|
2022-12-01 03:07:48
|
is the color one actually grayscale or is it just close to grayscale? if there's subtle chroma, it's not quite the same image...
|
|
|
Traneptora
|
2022-12-01 03:24:39
|
Almost grayscale images end up with very tiny X and B components which compress very well
|
|
2022-12-01 03:25:28
|
so it ends up being a result of the nature of the image
|
|
|
Jyrki Alakuijala
|
2022-12-01 03:52:50
|
during the format development I observed that not that much entropy is in the chromaticity channels, thus we didn't have enormous focus on reducing it
|
|
|
fab
|
2022-12-01 06:18:43
|
https://css-ig.net/wavif
|
|
|
Jyrki Alakuijala
|
2022-12-01 06:32:56
|
their mystical '3-norm: 0.592297' etc. help me have confidence in what they are doing
|
|
|
daniilmaks
|
|
fab
https://css-ig.net/wavif
|
|
2022-12-01 08:04:34
|
offtopic
|
|
|
Jyrki Alakuijala
|
2022-12-02 12:30:37
|
https://google.github.io/attention-center/
|
|
2022-12-02 12:31:08
|
https://opensource.googleblog.com/2022/12/open-sourcing-attention-center-model.html
|
|
2022-12-02 12:31:27
|
'nature level' research deployed for JPEG XL progression guidance
|
|
|
pandakekok9
|
|
Demiurge
|
2022-12-02 03:18:50
|
lossless or lossy JXL?
|
|
2022-12-02 03:19:57
|
lossless JXL is very efficient and very good. lossy JXL has a lot of room for improvement but it already does better than most (maybe all?) other lossy image codecs
|
|
2022-12-02 03:22:31
|
Lossy codecs have a lot of freedom to decide what information to preserve and what information to transform into something more compressible
|
|
|
pandakekok9
|
2022-12-02 05:27:58
|
|
|
2022-12-02 05:28:09
|
got progressive decoding working in Pale Moon as well
|
|
2022-12-02 05:36:31
|
https://repo.palemoon.org/MoonchildProductions/UXP/pulls/2047
|
|
2022-12-02 05:36:36
|
https://repo.palemoon.org/MoonchildProductions/UXP/pulls/2049
|
|
|
BlueSwordM
|
|
pandakekok9
|
|
2022-12-02 05:36:39
|
Very nice
|
|
2022-12-02 05:36:42
|
<:FeelsAmazingMan:808826295768449054>
|
|
|
pandakekok9
|
2022-12-02 05:38:58
|
My backporting looks ugly IMO (it's mostly copy-paste and some code removals here and there due to not having QCMS yet), but it works ¯\_(ツ)_/¯
|
|
|
username
|
2022-12-02 05:40:45
|
is alpha channel support fixed with the animation support? because the original patch for firefox that adds animation support also fixes/adds proper alpha support
|
|
2022-12-02 05:41:43
|
oh I read the pull request a bit more and it looks like it does!
|
|
|
pandakekok9
|
2022-12-02 05:42:12
|
It supposedly does, yes, but I haven't tried removing our own premultiplication fix yet (which also fixes alpha support)
|
|
2022-12-02 05:42:26
|
https://repo.palemoon.org/MoonchildProductions/UXP/commit/2e888727e0e08c2f34544da3e410bc03e58fb312
|
|
|
username
|
2022-12-02 05:43:38
|
hmm well how does the dice image look here? https://jpegxl.info/test-page/
|
|
|
pandakekok9
|
2022-12-02 05:45:47
|
|
|
2022-12-02 05:45:58
|
uh-oh, looks like my progressive decoding PR broke alpha
|
|
2022-12-02 05:46:59
|
But the fix itself should be fine, here's an earlier build with the premultiplication fix
|
|
2022-12-02 05:47:04
|
|
|
2022-12-02 05:59:42
|
nope, looks like it was my animation PR that broke it, that's strange...
|
|
2022-12-02 06:03:00
|
hmm maybe the premultiplication fix is conflicting with the alpha support provided by the original patch, lemme see if removing our own premultiply fix will do it...
|
|
2022-12-02 06:06:03
|
Ok it made it worse, I will try ripping out the opacity portion of the patch..
|
|
2022-12-02 06:12:14
|
Ok, it was indeed the opacity check... I wonder why the original author didn't separate out the alpha fix
|
|
2022-12-02 06:26:31
|
https://repo.palemoon.org/MoonchildProductions/UXP/pulls/2050
|
|
|
_wb_
|
2022-12-02 06:47:42
|
<:BlobYay:806132268186861619>
|
|
2022-12-02 06:48:39
|
So then we will have the first browser with full jxl support by default, nice!
|
|
2022-12-02 06:49:27
|
(well except HDR, I guess. But adding that to pale moon will be tricky, not even mainline firefox has any HDR support atm)
|
|
|
w
|
2022-12-02 06:55:21
|
firefox's current approach to hdr and color management (for windows) is to eventually use windows advanced color & auto color management
|
|
2022-12-02 06:55:37
|
windows 11 22H2 beat wayland to global color management
|
|
|
pandakekok9
|
|
_wb_
(well except HDR, I guess. But adding that to pale moon will be tricky, not even mainline firefox has any HDR support atm)
|
|
2022-12-02 06:57:24
|
Yeah we haven't even backported bug 1493898 yet https://bugzilla.mozilla.org/show_bug.cgi?id=1493898
|
|
2022-12-02 06:57:47
|
And I don't think any of us in Pale Moon has an HDR monitor to test with
|
|
|
Moritz Firsching
|
2022-12-02 06:58:46
|
Nice job improving Pale Moon, @Job!
|
|
|
pandakekok9
|
2022-12-02 06:58:58
|
I was going to work on adding HDR to the platform, but there's just so much refactoring done by Mozilla I don't even know where to start...
|
|
|
_wb_
|
2022-12-02 02:37:02
|
https://xilinx.github.io/Vitis_Libraries/codec/2022.1/index.html
|
|
|
yurume
|
2022-12-02 02:45:22
|
* JxlEnc_ans_clusterHistogram
* JxlEnc_ans_initHistogram
* JxlEnc_lossy_enc_compute
|
|
2022-12-02 02:46:16
|
so that's two parts: ANS histogram encoding and seemingly the entire VarDCT pipeline (limited to DCT8x8, DCT16x16 and DCT32x32 though)
|
|
|
Traneptora
|
2022-12-02 05:19:17
|
isn't this <#806898911091753051> or <#805176455658733570>
|
|
|
fab
|
2022-12-02 05:23:32
|
Yes other codecs
|
|
2022-12-02 05:23:42
|
I'll move
|
|
|
pandakekok9
|
2022-12-03 12:58:13
|
I wonder if it's possible to control which parts of the image get the high-res blocks first in progressive mode. I just noticed that progressive in JPEG-XL works differently from old JPEG
|
|
2022-12-03 01:00:05
|
A possible use-case could be making sure some text in an image is legible as early as possible, and the other details are not very important to the user to see right away
|
|
2022-12-03 01:01:36
|
Right now it kinda looks random which parts of the image get high-res data first, as noted by <@379365257157804044> at UXP#2048: https://repo.palemoon.org/MoonchildProductions/UXP/issues/2048#issuecomment-33073
|
|
2022-12-03 01:06:18
|
Huh, looks like it does seem to know to prioritize text first, as shown by the textbox here:
|
|
|
Jim
|
|
pandakekok9
I wonder if it's possible to control which parts of the image get the high-res blocks first in progressive mode. I just noticed that progressive in JPEG-XL works differently from old JPEG
|
|
2022-12-03 01:08:18
|
Yes, you can. I don't know specifically how to do it, but that is the point of the research released today: https://opensource.googleblog.com/2022/12/open-sourcing-attention-center-model.html
It uses a machine-learning model to determine the order in which the blocks are stored within the binary data, so that the most important parts are loaded first.
|
|
|
pandakekok9
|
2022-12-03 01:11:03
|
Oooh, that's very nice! That's definitely an improvement over JPEG's progressive mode. Though if I understood correctly, the ordering that determines which blocks get high-res data first is not stored in the image itself, but decided by the server serving the images?
|
|
|
Wolfbeast
|
2022-12-03 01:13:14
|
I have to say I'm just not a fan of large blocks of high res data being used instead of actually being progressive...
|
|
|
Jim
|
2022-12-03 01:14:00
|
The devs can go into more detail but I don't think that's the case. The server would need a lot more code to figure out the ordering, re-arrange the data, and re-encode the image, wouldn't it?
From what I understand the data is stored in blocks in the image. It's still downloaded top-down, but the first block in the stream could correspond to the middle of the picture or all the way at the end of it. By default I think the blocks are ordered left-to-right, top-to-bottom, but maybe someone can correct that?
|
|
|
|
afed
|
2022-12-03 01:14:55
|
cjxl:
```
--group_order=0|1
    Order in which 256x256 groups are stored in the codestream for progressive rendering. Value not provided means 'encoder default', 0 means 'scanline order', 1 means 'center-first order'.
--center_x=0..XSIZE
    Determines the horizontal position of center for the center-first group order. The value -1 means 'use the middle of the image', other values [0..xsize) set this to a particular coordinate.
--center_y=0..YSIZE
    Determines the vertical position of center for the center-first group order. The value -1 means 'use the middle of the image', other values [0..ysize) set this to a particular coordinate.
```
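For example, a hypothetical invocation of those flags to make the first-loaded groups sit around a chosen focal point (paths and coordinates are made up):
```python
import subprocess

subprocess.run(
    ["cjxl", "photo.png", "photo.jxl",
     "--group_order=1",   # 1 = center-first order
     "--center_x=1200",   # focal point, x in pixels (made-up value)
     "--center_y=800"],   # focal point, y in pixels (made-up value)
    check=True,
)
```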
|
|
|
Wolfbeast
|
2022-12-03 01:16:45
|
It's an interesting idea and all to try and think in an encoder what would be the "focal point" of an image and start from there, but it does feel like a lot of unnecessary complexity for something people likely aren't even going to notice ;P.
|
|
|
Jim
|
2022-12-03 01:17:32
|
For small images or a fast connection, probably not. But a large image downloading on a slow connection, it will be very noticeable.
|
|
|
pandakekok9
|
2022-12-03 01:18:44
|
I do think that the results of the progressive decoding can sometimes be distracting though, so I understand why Moonchild is kinda not a fan of it. It's still possible to bring back the progressive mode from JPEG to JPEG-XL right (i.e. the one where all parts are equally blurry)?
|
|
|
Wolfbeast
|
2022-12-03 01:20:16
|
Hey, no qualms about progressive loading as a concept, I just think the blocks are too large to begin with (256x256 are pretty hunky chunks...) and find the focal-point-to-edge order really distracting and not really helping to make the image clear before it's fully loaded. But maybe that's just me.
|
|
2022-12-03 01:20:59
|
sometimes the bottom is done first, sometimes the top... etc.
|
|
|
Jim
|
2022-12-03 01:21:32
|
We are likely looking at it through the lens of fast connections. I've had slow connections on my phone many times and seeing an image loading in is, imo, much better than an empty space until the image finally "pops" in. Also, in developing nations with poorer infrastructure, it will happen all the time.
|
|
|
Wolfbeast
|
2022-12-03 01:22:22
|
No, I'm very well aware of how slow connections are.
|
|
2022-12-03 01:22:33
|
as said no qualms about progressive loading, at all.
|
|
|
Jim
|
2022-12-03 01:23:17
|
I believe the block size is controllable too.
|
|
|
Wolfbeast
|
2022-12-03 01:23:38
|
I mean... how is this helpful?
|
|
|
Jim
|
2022-12-03 01:25:35
|
I guess it determined the dial at the bottom was more important? Like all algorithms... they aren't perfect.
|
|
|
Wolfbeast
|
2022-12-03 01:26:03
|
But the bottom right is still blurry too
|
|
2022-12-03 01:26:14
|
I probably just don't get it.
|
|
|
Jim
|
2022-12-03 01:27:09
|
That likely has more to do with the connection, it is likely that portion is next to load.
|
|
2022-12-03 01:28:30
|
Oh, I see, it's going clockwise around that point.
|
|
|
Wolfbeast
|
2022-12-03 01:28:32
|
No I think the next block is the top-left
|
|
2022-12-03 01:30:25
|
Anyway.... progressive will be in the next dev version of Pale Moon. As well as fixed transparency (I slotted in alpha premultiply for our use which was the issue) and animation. All thanks to <@707907827712131102> doing some really good work today/yesterday :)
|
|
|
Jim
|
2022-12-03 01:32:22
|
Still better than Chrome. It doesn't load the DC at all. I think it's better to see a low-res portion and have it load in after that than just empty blocks.
|
|
|
Wolfbeast
|
2022-12-03 01:33:33
|
Yup, agreed
|
|
|
Jim
|
2022-12-03 01:34:04
|
I am working on a wasm polyfill and my progressive loading looks like Pale Moon's. Working on getting animation working now.
|
|
|
|
afed
|
2022-12-03 01:34:39
|
this is only the starting point, then, as far as I understand, the blocks are loaded in a spiral or something like that
and in reality there will probably only be one or two steps to the full image, not a block-by-block rendering
|
|
|
Wolfbeast
|
2022-12-03 01:34:56
|
With Chrome trying to drop it, I just hope that sites won't just kneejerk the wasm in for everyone and ignore native support.
|
|
|
Cool Doggo
|
|
Jim
Still better than Chrome. It doesn't load the DC at all. I think it's better to see a low-res portion and have it load in after that than just empty blocks.
|
|
2022-12-03 01:35:19
|
works fine for me
|
|
|
Jim
|
2022-12-03 01:35:58
|
Are you using a dev or nightly build?
|
|
|
Cool Doggo
|
|
Jim
|
2022-12-03 01:36:20
|
Hm, strange.
|
|
|
Wolfbeast
|
2022-12-03 01:40:31
|
Oh, as an aside, I have had some close looks comparing various image formats (on that site with the sliding divider comparison thing, I forget where it is exactly) and I have to say that JPEG-XL looks objectively better than WebP or AVIF even at pretty low bpp. Most notably on images like the space shuttle photo, it picks up fine details and stays closer to proper color. So JXL definitely has a lot going for it.
|
|
|
|
veluca
|
|
pandakekok9
I do think that the results of the progressive decoding can sometimes be distracting though, so I understand why Moonchild is kinda not a fan of it. It's still possible to bring back the progressive mode from JPEG to JPEG-XL right (i.e. the one where all parts are equally blurry)?
|
|
2022-12-03 01:41:27
|
progressive mode in JPEG does not make all parts of the image equally blurry though
|
|
2022-12-03 01:42:24
|
that said, you can make JXL have *more* progressive stops and it looks closer to being equally blurry everywhere (which is what JPEG does)
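A sketch of the encoder side, using flag names from cjxl's help output of the time (they may differ between versions; file names are placeholders):
```python
import subprocess

subprocess.run(
    ["cjxl", "photo.png", "photo_progressive.jxl",
     "--progressive",         # enable progressive/responsive encoding
     "--progressive_dc=1"],   # add a low-resolution DC stop first
    check=True,
)
```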
|
|
|
improver
|
2022-12-03 01:42:24
|
AVIF's strength is its smoothing filter
|
|
|
Wolfbeast
|
2022-12-03 01:42:44
|
It seems that only at the totally crappy bpp levels that are barely acceptable for casual display is AVIF about on par, and even there it blurs way too much.
|
|
|
pandakekok9
|
|
veluca
progressive mode in JPEG does not make all parts of the image equally blurry though
|
|
2022-12-03 01:43:15
|
Oh, I might be misremembering then, it's just that I don't see lots of progressive JPEGs on the web these days (I remember reading somewhere that you shouldn't do progressive and should just stick with baseline, or is it actually people saying you shouldn't do Adam7 on PNG..?)
|
|
|
Wolfbeast
|
2022-12-03 01:43:46
|
Yeah don't do adam7 on PNG.
|
|
|
Jim
|
|
improver
AVIF's strength is its smoothing filter
|
|
2022-12-03 01:43:55
|
I think that's one of its weaknesses. It "brushes" out detail. On some images it doesn't matter as much, but it destroys more finely-detailed images. If it's just a thumbnail it probably doesn't matter much.
|
|
|
|
veluca
|
2022-12-03 01:44:10
|
progressive on old-school JPEG even helps compression, which is somewhat surprising
|
|
|
improver
|
2022-12-03 01:44:36
|
it's how it does things. for some things it's a strength, for others a weakness
|
|
|
Jim
|
|
Wolfbeast
Oh, as an aside, I have had some close looks comparing various image formats (on that site with the sliding divider comparison thing, I forget where it is exactly) and I have to say that JPEG-XL looks objectively better than WebP or AVIF even at pretty low bpp. Most notably on images like the space shuttle photo, it picks up fine details and stays closer to proper color. So JXL definitely has a lot going for it.
|
|
2022-12-03 01:44:52
|
JXL has gotten much better over time as well. Compare now with a year ago. It's night and day.
https://storage.googleapis.com/demos.webmproject.org/webp/cmp/index.html
|
|
|
|
veluca
|
2022-12-03 01:44:55
|
there's no real reason why it *should* be so, except that JPEG's entropy coding is rather primitive
|
|
|
Jim
|
|
veluca
progressive on old-school JPEG even helps compression, which is somewhat surprising
|
|
2022-12-03 01:46:16
|
I remember back in the 90s and 00s it would almost always be a larger file and "progressive" was a luxury. At some point it flipped and now gets a lower file size most of the time.
|
|
|
pandakekok9
|
|
Jim
I remember back in the 90s and 00s it would almost always be a larger file and "progressive" was a luxury. At some point it flipped and now gets a lower file size most of the time.
|
|
2022-12-03 01:53:14
|
Ah, I think that was indeed the reason I heard about the argument against progressive JPEG back in the day
|
|
|
Jim
|
2022-12-03 01:55:35
|
Indeed, it's also why there are so many old jpgs that load top-down. Because when we pulled images into Photoshop back then and hit "progressive" the file size would often balloon. Even though people preferred the progressive loading over top-down, hardly anyone used it since it was all dial-up internet back then and you tried to save all the bytes you could.
|
|
|
Wolfbeast
|
|
Jim
Indeed, it's also why there are so many old jpgs that load top-down. Because when we pulled images into Photoshop back then and hit "progressive" the file size would often balloon. Even though people preferred the progressive loading over top-down, hardly anyone used it since it was all dial-up internet back then and you tried to save all the bytes you could.
|
|
2022-12-03 01:57:14
|
Primarily because at the start people were on slow modems on dialup that was charged per minute. I was on that for quite a while myself
|
|
|
Jim
|
2022-12-03 01:58:39
|
Even then, I preferred having at least top-down, line-by-line loading over showing nothing until the entire image loaded.
|
|
|
Wolfbeast
|
2022-12-03 01:58:54
|
Absolutely.
|
|
2022-12-03 01:59:11
|
Like I said, I have nothing against progressive loading and decoding :)
|
|
|
Jim
|
|
Wolfbeast
Primarily because at the start people were on slow modems on dialup that was charged per minute. I was on that for quite a while myself
|
|
2022-12-03 01:59:37
|
I know some did that, but I only remember dialing into BBSes which you only had to pay extra for if you dialed long distance. In about the mid 90s unlimited internet became a thing and I never had pay-per-minute.
|
|
|
Wolfbeast
|
2022-12-03 02:00:32
|
Well, in Europe flat-rate dialing wasn't a thing until the era of first ISDN and then ADSL
|
|
2022-12-03 02:00:55
|
Over copper through the voice channel? you bet you'd be paying per minute.
|
|
|
Jim
|
2022-12-03 02:02:29
|
Ah, that sucks. I remember getting DSL around the mid-2000s. Though DSL is over the same copper as voice lines
|
|
|
Wolfbeast
|
2022-12-03 02:03:04
|
Yeah it is over the same copper, just not using the voice band :)
|
|
|
pandakekok9
|
2022-12-03 02:03:40
|
I don't know if I should be jealous for not experiencing dial-up or be glad I didn't experience it
|
|
|
Wolfbeast
|
2022-12-03 02:04:04
|
It was a different way of dealing with the internet.
|
|
|
pandakekok9
|
2022-12-03 02:04:25
|
But I did have a home internet connection that tops at 100 KB/s for like 5 years, and it's a 3G connection I believe
|
|
|
Wolfbeast
|
2022-12-03 02:04:46
|
e.g. what I would do is do a LOT via e-mail. read and write everything off-line, then dial in and spend a few minutes exchanging all mail, then disconnect. that was very affordable
|
|
|
Jim
|
|
pandakekok9
I don't know if I should be jealous for not experiencing dial-up or be glad I didn't experience it
|
|
2022-12-03 02:05:20
|
Be glad you didn't have to listen to this every day.
https://www.youtube.com/watch?v=gsNaR6FRuO0
|
|
|
Wolfbeast
|
2022-12-03 02:06:16
|
that's a 56k bis, if I remember correctly, including data training
|
|
2022-12-03 02:07:07
|
I started on 9600 baud ;)
|
|
2022-12-03 02:08:07
|
And actually, most modems would not have the speaker on for the training (or even the dialing) bit, unless you wanted it to
|
|
|
Jim
|
2022-12-03 02:11:48
|
I think you mean 2400 baud (9600 bits per second). That is what I started on too. Then went to 56k after they became affordable.
|
|
|
Wolfbeast
|
2022-12-03 02:14:15
|
<@172952901075992586> baud = bps. And no I thankfully never had to deal with 2400 (which was I think the lowest at the time). I got a pretty decent Tornado brand 9600 baud metal box as my first modem. Of course it would sometimes still train to 2400 or 1200 if the line was particularly bad....
|
|
|
Jim
|
2022-12-03 02:17:02
|
A baud is one symbol per second, and a symbol can carry more than one bit.
There was no 9600 baud; that would be faster than a 56k modem.
https://en.wikipedia.org/wiki/Modem#Evolution_of_dial-up_speeds
|
|
|
Wolfbeast
|
2022-12-03 02:19:57
|
Wikipedia is wrong.
|
|
|
Jim
|
2022-12-03 02:20:08
|
Though mine was 14.4
|
|
|
Wolfbeast
|
2022-12-03 02:20:58
|
just look at the table
|
|
2022-12-03 02:21:10
|
300 baud = 0.3 kbit/s
|
|
2022-12-03 02:21:53
|
then further down the table it's suddenly a different ratio
|
|
2022-12-03 02:22:09
|
1,200 bit/s (1200 baud) (Bell 202) FSK 1.2 1976
|
|
2022-12-03 02:22:23
|
2,400 bit/s (1200 baud) (V.26bis) PSK 2.4
|
|
2022-12-03 02:23:09
|
it's a mess and wrong... baud = bit/s over the data line. That's always how we used it.
|
|
2022-12-03 02:23:27
|
14.4 = 14.4kbit/s = 14400 baud
|
|
2022-12-03 02:23:34
|
(not counting compression)
|
|
|
Jim
|
2022-12-03 02:23:55
|
Yes, because the newer standards were able to fit more than one bit within each transmitted symbol.
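To put numbers on that: bit rate = symbol rate (baud) × bits per symbol. A quick sketch with the usual textbook examples (figures quoted from memory):
```python
# bit rate = baud (symbols/s) x bits carried per symbol
examples = [
    ("Bell 103", 300,  1),   # FSK: 1 bit/symbol  ->   300 bit/s
    ("V.32",     2400, 4),   # QAM: 4 bits/symbol ->  9600 bit/s
    ("V.32bis",  2400, 6),   # 6 bits/symbol      -> 14400 bit/s
]
for name, baud, bits in examples:
    print(f"{name}: {baud} baud x {bits} bit/symbol = {baud * bits} bit/s")
```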
|
|
2022-12-03 02:24:02
|
https://www.physics.udel.edu/~watson/scen103/projects/96s/thosguys/baud.html
|
|
2022-12-03 02:25:18
|
When you talk about baud over a serial port, yes, it is 1 baud = 1 bit.
|
|
|
Wolfbeast
|
2022-12-03 02:28:20
|
OK well maybe I always misunderstood then.
|
|
2022-12-03 02:29:00
|
COM port speeds are what introduced me to baud, and my modem WAS called a 9600 baud modem. So maybe some bad marketing too, then.
|
|
2022-12-03 02:29:40
|
Surprising they managed to squeeze that amount of data into a much slower carrier wave
|
|
|
Jim
|
2022-12-03 02:29:45
|
Probably. I remember back then everything was 2400 baud or less until 56k modems came out.
|
|
2022-12-03 02:30:40
|
For a short time there was 3200 but I don't know anyone that had that. Most just jumped to 56k. After that nothing was described in baud anymore.
|
|
|
Wolfbeast
|
2022-12-03 02:32:44
|
I had the 9600, then a 33.6, then a 56k V.42bis, and after that I jumped on ADSL. ISDN was mostly for businesses only, as it required special line hardware
|
|
2022-12-03 02:33:40
|
Currently on symmetrical 100Mbit hardline (ethernet port in my wall, going to fiber somewhere)
|
|