Re: pack operation is thrashing my server

On Sat, 6 Sep 2008, Jon Smirl wrote:
> 
> When I was playing with those giant Mozilla packs, the speed of zlib
> wasn't a big problem. The number one problem was the repack process
> exceeding 3GB, which forced me to get 64-bit hardware and 8GB of
> memory. If a repack starts swapping, kill it; otherwise it will
> probably take a month to finish.

.. and you'd make things much much WORSE?

> Size and speed are not unrelated.

Jon, go away.

Go and _look_ at those damn numbers you tried to point me to.

Those "better" compression models you pointed at are not only hundreds of 
times slower than zlib, they also take hundreds of times more memory!

Yes, size and speed are definitely not unrelated. And in this situation, 
when it comes to compression algorithms, the relationship is _very_ clear:

 - better compression takes more memory and is slower
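
Here is a minimal sketch of that tradeoff, using Python's zlib and lzma
modules on a synthetic buffer; the test data, the bench() helper and
whatever numbers it prints are purely illustrative, not measurements
from real pack files:

    import time
    import zlib
    import lzma

    # Roughly 15 MB of varied but compressible test data (illustrative only).
    data = b"".join(b"blob %8d some moderately compressible payload\n" % i
                    for i in range(300000))

    def bench(name, compress):
        t0 = time.perf_counter()
        out = compress(data)
        dt = time.perf_counter() - t0
        print(f"{name:>8}: {len(data)} -> {len(out)} bytes in {dt:.2f}s")

    # zlib/deflate: a fixed 32 KB window and a modest amount of encoder state.
    bench("zlib -6", lambda d: zlib.compress(d, 6))
    bench("zlib -9", lambda d: zlib.compress(d, 9))

    # LZMA compresses better, but it is much slower and its encoder keeps a
    # far larger dictionary (64 MB at preset 9), i.e. orders of magnitude
    # more memory than deflate.
    bench("lzma -6", lambda d: lzma.compress(d, preset=6))
    bench("lzma -9", lambda d: lzma.compress(d, preset=9))

The LZMA runs typically come out smaller but take noticeably longer and
need far more encoder memory; the exact ratios depend on the input, but
the direction of the tradeoff is the point.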

Really. You're trying to argue for something, but you don't seem to 
realize that you're arguing _against_ the very thing you think you are 
arguing for.

		Linus
