Introducing Easy Bookworm (version 0.1)


dimkr
Posts: 2423
Joined: Wed Dec 30, 2020 6:14 pm
Has thanked: 53 times
Been thanked: 1202 times

Re: Introducing Easy Bookworm (version 0.1)

Post by dimkr »

@BarryK The choice of lz4 is surprising.

On computers with slow hard drives, it is sometimes much faster to read a smaller, xz-compressed SFS at the cost of slower decompression (because even the slow decompression is still faster than the disk I/O). However, xz can be very slow to decompress, and applications can freeze if you have a single- or dual-core CPU.

IMHO the sweet spot is zstd (I use -comp zstd -Xcompression-level 19 -b 256K -no-exports -no-xattrs): the resulting SFS is 8-15% bigger than with xz, but decompression is very fast (faster than gzip!). The result is almost as small as an xz-compressed SFS, but because decompression is so much faster, you get a nice responsiveness boost on computers with slow drives. On computers with fast drives the SFS size doesn't matter as much, but they benefit from the faster decompression too, especially if they have a slow CPU with few cores.

IMHO, zstd is almost always a winner in the SFS size vs. decompression speed trade-off, because it's both small and fast to decompress.
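
For reference, a complete mksquashfs invocation with those options might look something like this (a minimal sketch; the source directory and output filename here are placeholders, not the actual build paths):

    # Build an SFS with zstd at level 19 and 256K blocks, omitting the
    # NFS-export and xattr tables; paths are placeholders.
    mksquashfs /path/to/rootfs easy.sfs \
        -comp zstd -Xcompression-level 19 \
        -b 256K -no-exports -no-xattrs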

user1111

Re: Introducing Easy Bookworm (version 0.1)

Post by user1111 »

dimkr wrote: Thu May 19, 2022 12:15 pm

@BarryK The choice of lz4 is surprising.

On computers with slow hard drives, it is sometimes much faster to read a smaller, xz-compressed SFS at the cost of slower decompression (because even the slow decompression is still faster than the disk I/O). However, xz can be very slow to decompress, and applications can freeze if you have a single- or dual-core CPU.

IMHO the sweet spot is zstd (I use -comp zstd -Xcompression-level 19 -b 256K -no-exports -no-xattrs): the resulting SFS is 8-15% bigger than with xz, but decompression is very fast (faster than gzip!). The result is almost as small as an xz-compressed SFS, but because decompression is so much faster, you get a nice responsiveness boost on computers with slow drives. On computers with fast drives the SFS size doesn't matter as much, but they benefit from the faster decompression too, especially if they have a slow CPU with few cores.

IMHO, zstd is almost always a winner in the SFS size vs. decompression speed trade-off, because it's both small and fast to decompress.

Interesting. Thanks.

https://indico.fnal.gov/event/16264/con ... d__LZ4.pdf

● At the current compression ratios, reading with decompression for LZ4 and ZSTD is actually faster than reading uncompressed: significantly less data is coming from the IO subsystem.
● We know LZ4 is significantly faster than ZSTD on standalone benchmarks: likely bottleneck is ROOT IO API

The chart in that paper indicates relatively similar decompression speeds, so if zstd compresses more tightly, that would put it ahead of lz4. However, if the benchmark was distorted by a bottleneck as suggested, and lz4 actually decompresses quicker, then the larger amount of data read combined with quicker decompression might narrow or reverse the situation.

Bit of a flip of a coin perhaps?

One nice factor with lz4 is that it compresses really quickly at the default setting (no -Xhc high-compression switch). The compression is only modest, larger than most other choices, but typically still close to half the uncompressed size. I tend to use that for backups where I mksquashfs an entire partition, rather than using tar or the like.
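
A partition backup along those lines might look something like this (a rough sketch; the device, mount point and output name are hypothetical):

    # Mount the partition read-only, squash the whole tree with the
    # fast default lz4 setting (no -Xhc), then unmount.
    mount -o ro /dev/sda2 /mnt/part
    mksquashfs /mnt/part backup-sda2.sfs -comp lz4
    umount /mnt/part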

dimkr
Posts: 2423
Joined: Wed Dec 30, 2020 6:14 pm
Has thanked: 53 times
Been thanked: 1202 times

Re: Introducing Easy Bookworm (version 0.1)

Post by dimkr »

As far as I know, decompression with lz4 should be faster than with zstd. But zstd is still very fast to decompress (faster than xz and faster than gzip), while the compression ratio is very good (much better than gzip, close to xz). With lz4 the tradeoff is different: super-fast decompression, but a much bigger size.

BarryK
Posts: 2692
Joined: Tue Dec 24, 2019 1:04 pm
Has thanked: 132 times
Been thanked: 738 times

Re: Introducing Easy Bookworm (version 0.1)

Post by BarryK »

dimkr wrote: Thu May 19, 2022 12:15 pm

@BarryK The choice of lz4 is surprising.

On computers with slow hard drives, it is sometimes much faster to read a smaller, xz-compressed SFS at the cost of slower decompression (because even the slow decompression is still faster than the disk I/O). However, xz can be very slow to decompress, and applications can freeze if you have a single- or dual-core CPU.

IMHO the sweet spot is zstd (I use -comp zstd -Xcompression-level 19 -b 256K -no-exports -no-xattrs): the resulting SFS is 8-15% bigger than with xz, but decompression is very fast (faster than gzip!). The result is almost as small as an xz-compressed SFS, but because decompression is so much faster, you get a nice responsiveness boost on computers with slow drives. On computers with fast drives the SFS size doesn't matter as much, but they benefit from the faster decompression too, especially if they have a slow CPU with few cores.

IMHO, zstd is almost always a winner in the SFS size vs. decompression speed trade-off, because it's both small and fast to decompress.

Yes, interesting, I will think some more about it.

Easy Bookworm 0.3 has just been uploaded, and I have gone for lz4-hc when re-compressing easy.sfs.

This compression method comparison is a good read:

https://www.privex.io/articles/which-co ... ithm-tool/

...it was that article that convinced me to go for lz4-hc (lz4 -9).

Note, though, that mksquashfs's "-comp lz4 -Xhc" does not let you specify the lz4 compression level.
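
To illustrate the difference (a sketch only; file names are placeholders): the standalone lz4 tool accepts a numeric level, whereas mksquashfs only has the on/off -Xhc switch:

    # Standalone lz4: -9 selects the high-compression (lz4-hc) level.
    lz4 -9 somefile somefile.lz4

    # mksquashfs: -Xhc enables lz4-hc, with no level to choose.
    mksquashfs /path/to/rootfs easy.sfs -comp lz4 -Xhc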

user1111

Re: Introducing Easy Bookworm (version 0.1)

Post by user1111 »

As it is, lz4 is faster at decompressing. It also supports multi-core, so in some cases it can reach RAM speeds https://www.carta.tech/man-pages/man1/lz4.1.html ...

lz4 is also scalable with multi-core CPUs. It features an extremely fast decoder, with speed in multiple GB/s per core, typically reaching the RAM speed limits on multi-core systems.

AFAIK a higher compression-level setting doesn't actually change the decompression speed; it just takes longer to compress. (That said, I guess a smaller/tighter compressed file will be faster to decompress, because less I/O is involved.)
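
Easy enough to test on your own images if anyone is curious (a sketch; the file names are hypothetical):

    # Time a full unsquash of lz4 vs lz4-hc images of the same tree.
    time unsquashfs -d /tmp/out-lz4 easy-lz4.sfs
    time unsquashfs -d /tmp/out-lz4hc easy-lz4hc.sfs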

BarryK
Posts: 2692
Joined: Tue Dec 24, 2019 1:04 pm
Has thanked: 132 times
Been thanked: 738 times

Re: Introducing Easy Bookworm (version 0.1)

Post by BarryK »

rufwoof wrote: Mon Jun 06, 2022 6:38 pm

AFAIK a higher compression-level setting doesn't actually change the decompression speed; it just takes longer to compress. (That said, I guess a smaller/tighter compressed file will be faster to decompress, because less I/O is involved.)

Yes, that is a very interesting point. I did read that somewhere, probably in one of the links in this thread: someone tested the decompression speed of lz4 and lz4-hc and found the latter to be faster.

So that is what I have gone for. Now, all "mksquashfs" operations have the "-comp lz4 -Xhc" parameters.

Actually, I have to use lz4-hc, as easy.sfs would be too big with plain lz4. Easy is now deployed as an uncompressed 'easy-<version>-<arch>.img' file, 773MiB, with a 767MiB vfat first partition -- which has 766MiB of space. Other files also have to reside in there, such as vmlinuz and initrd.
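
A quick sanity check of the fit might look like this (a sketch; the mount point is hypothetical):

    # Sizes of the files that must fit in the ~766MiB vfat partition,
    # plus the free space remaining on it.
    ls -l --block-size=M easy.sfs vmlinuz initrd
    df -BM /mnt/vfat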

But, what a bonus: a smaller file and it even decompresses faster!
