Re: Suggestion: switch to zstd -19 for compressing packages over xz

On Sat, Mar 16, 2019 at 5:01 AM Dan Sommers
<2QdxY4RzWzUUiLuE@xxxxxxxxxxxxxxxxx> wrote:
> My situation is similar to Darren's:  My primary connection
> to the internet is through my cell phone carrier and a
> mobile WiFi hot spot.  In urban areas, I can get as much as
> 50 megabits per second, but presently, due to my remote
> location, it's around 5 or 6.  I also have a monthly data
> cap, which I share with my wife, and only WiFi (i.e., no
> wires; that nice 300 megabits from hot spot to device is
> shared by all devices, and there's a per device limit,
> too).  FWIW, I have an i7-7700HQ CPU.
>
> In the old days (when large files were a megabyte or two
> and network bandwidth was measured in kilobits per second),
> we assumed that the network was the bottleneck.  I think
> what Adam is proposing is that things are different now, and
> that the CPU is the bottleneck.  As always, it depends.  :-)
>
> My vote, whether it has any weight or not, is for higher
> compression ratios at the expense of CPU cycles when
> decompressing; i.e., xz rather than zstd.  Also, consider
> that the 10% increase in archive size is suffered repeatedly
> as servers store and propagate new releases, but that the
> increase in decompression time is only suffered by the
> end user once, likely during a manual update operation or an
> automated background process, where it doesn't matter much.
>
> I used to have this argument with coworkers over build times
> and wake-from-sleep times.  Is the extra time to decompress
> archives really killing anyone's productivity?  Are users
> choosing OS distros based on how long it takes to install
> Open Office?  Are Darren and I dinosaurs, doomed to live in
> a world where everyone else has a multi-gigabit per second
> internet connection and a cell phone class CPU?
>
> Jokingly, but not as much as you think,
> Dan

I think you're overstating your case a little bit. In the United
States, nothing less than 25 Mbps can legally be called broadband, and
the average download speed is approaching 100 Mbps (90% of us have
access to 25 Mbps or better internet). Zstd -19 is faster overall than
xz -6 starting at around 20 Mbps, so it's a better choice even on some
sub-broadband connections. Your CPU's PassMark score is only about
50% higher than that of the machine used in the Squash compression
benchmark, so I don't know that CPU speed is a significant factor.
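
To put rough numbers on that crossover, here's a quick
back-of-the-envelope sketch; the package size, ratios, and
decompression speeds below are made-up illustrative figures, not
benchmark results:

    # Rough model: total upgrade time = download time + decompression time.
    # All figures here are illustrative assumptions, not measurements.

    PACKAGE_MB = 100                   # hypothetical uncompressed package size (MB)
    XZ_RATIO, ZSTD_RATIO = 0.25, 0.28  # assumed compressed fraction (zstd ~10% larger)
    XZ_MBPS, ZSTD_MBPS = 80, 800       # assumed decompression speed, MB of output/s

    def total_time(bandwidth_mbit, ratio, decomp_mbps):
        download = (PACKAGE_MB * ratio * 8) / bandwidth_mbit  # seconds on the wire
        decompress = PACKAGE_MB / decomp_mbps                 # seconds to unpack
        return download + decompress

    for mbit in (5, 20, 50, 100):
        xz = total_time(mbit, XZ_RATIO, XZ_MBPS)
        zstd = total_time(mbit, ZSTD_RATIO, ZSTD_MBPS)
        print(f"{mbit:>3} Mbit/s: xz -6 ~{xz:.1f}s, zstd -19 ~{zstd:.1f}s")

With those (assumed) numbers the break-even lands right around
20 Mbit/s; feel free to plug in your own figures.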

Furthermore, if space saving is the primary concern, why are we using
the default xz -6 option, rather than something stronger like -9? I
support using zstd because even in the absolute worst case (instant
decompression), you're looking at less than a 10% increase in upgrade
time, while for most users, a reduction of 50% would not be atypical
(lzma is slow!). I'm not suggesting throwing out all concerns about
disk space and transfer time; I'm just suggesting that times have
changed *somewhat*, and that for most users zstd may provide a
better trade-off. In my case (100 Mbit connection), which is close to
the US average, downloading and decompressing the latest Firefox
package would take less than 1/3 the time it currently takes if we
switched to zstd.
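
If anyone wants to check the ratio and speed difference on a real
payload themselves, here's a rough sketch using Python's stdlib lzma
and the third-party zstandard module (the input path is just a
placeholder, not any particular package):

    import lzma
    import time
    import zstandard  # third-party: pip install zstandard

    # Placeholder path to an uncompressed payload, e.g. a package tarball.
    data = open("package.tar", "rb").read()

    def measure(name, compress, decompress):
        t0 = time.perf_counter()
        blob = compress(data)
        t1 = time.perf_counter()
        decompress(blob)
        t2 = time.perf_counter()
        print(f"{name}: {len(blob) / len(data):.1%} of original, "
              f"compress {t1 - t0:.1f}s, decompress {t2 - t1:.2f}s")

    measure("xz -6", lambda d: lzma.compress(d, preset=6), lzma.decompress)
    measure("xz -9", lambda d: lzma.compress(d, preset=9), lzma.decompress)
    measure("zstd -19",
            zstandard.ZstdCompressor(level=19).compress,
            zstandard.ZstdDecompressor().decompress)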

Adam


