Re: pack operation is thrashing my server

On 9/6/08, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
>
>  On Sat, 6 Sep 2008, Jon Smirl wrote:
>  >
>  > Some alternative algorithms are here...
>  > http://cs.fit.edu/~mmahoney/compression
>  > It is possible to beat zlib by 2x at the cost of CPU time and memory.
>
>
> Jon, you're missing the point.
>
>  The problem with zlib isn't that it doesn't compress well. It's that it's
>  too _SLOW_.

When I was playing with those giant Mozilla packs, the speed of zlib
wasn't a big problem. The number one problem was the repack process
exceeding 3GB, which forced me to get 64-bit hardware and 8GB of
memory. If you start swapping in a repack, kill it; it will probably
take a month to finish.

I'm forgetting the exact numbers now, but on a quad-core machine (with
git changes to use all cores) and 8GB of memory, I believe I was able
to repack the Mozilla repo in under an hour. At that point I believe I
was limited by disk IO.

Size and speed are not unrelated. By cutting the pack size in half you
reduce the IO and memory demands (cache misses) a lot. For example, if
we went to no compression we'd be killed by memory and IO consumption.
It's not obvious to me what the best trade-off for git is without
trying several compression algorithms and comparing them. They were
feeding 100MB files into PAQ on that site; I don't know what PAQ would
do with a bunch of 2K objects.
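
To make the trade-off concrete, here is a rough harness (my own toy,
not git code; the input file name is just a placeholder) that runs one
buffer through zlib at a few levels and prints the resulting size and
CPU time. Level 0 stands in for the no-compression case.

/*
 * Toy sketch: feed one buffer through zlib at several compression
 * levels and print size and CPU time, to see where the size/speed
 * curve flattens out.  Not git code.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zlib.h>

int main(int argc, char **argv)
{
	FILE *f = fopen(argc > 1 ? argv[1] : "testdata", "rb");
	if (!f) { perror("fopen"); return 1; }

	/* slurp the whole input file into memory */
	fseek(f, 0, SEEK_END);
	long n = ftell(f);
	fseek(f, 0, SEEK_SET);
	unsigned char *src = malloc(n);
	if (!src || fread(src, 1, n, f) != (size_t)n) {
		perror("read");
		return 1;
	}
	fclose(f);

	for (int level = 0; level <= 9; level += 3) {
		uLongf dlen = compressBound(n);
		unsigned char *dst = malloc(dlen);
		clock_t t0 = clock();

		if (compress2(dst, &dlen, src, n, level) != Z_OK) {
			fprintf(stderr, "compress2 failed at level %d\n", level);
			return 1;
		}
		printf("level %d: %ld -> %lu bytes, %.3fs\n",
		       level, n, (unsigned long)dlen,
		       (double)(clock() - t0) / CLOCKS_PER_SEC);
		free(dst);
	}
	free(src);
	return 0;
}

Compile with -lz and point it at something big to get a feel for how
much extra CPU each step in compression ratio costs.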

Most delta chains in the Mozilla data were easy to process. There was
a single chain of 2000 deltas that consumed 15% of the total CPU time.
Something causes performance to fall apart on really long chains.
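
Here's a toy illustration of the shape of that cost, assuming no
caching of intermediate bases (git's real delta code and its caches
are more involved than this): reconstructing the object at depth N
means applying N deltas first, and redoing that for every object in a
2000-deep chain is quadratic.

/*
 * Toy sketch of why long delta chains hurt.  The structures and
 * apply_delta() are made up for illustration; this is not how git
 * stores or applies deltas.
 */
#include <stdio.h>

struct obj {
	const struct obj *base;	/* NULL for a full (non-delta) object */
};

static long work;

static void apply_delta(const struct obj *o)
{
	(void)o;
	work++;			/* count one delta application */
}

/* Reconstructing an object means reconstructing its whole base chain first. */
static void reconstruct(const struct obj *o)
{
	if (o->base) {
		reconstruct(o->base);
		apply_delta(o);
	}
}

int main(void)
{
	enum { DEPTH = 2000 };
	static struct obj chain[DEPTH];	/* chain[0] is the full base object */
	long total = 0;

	for (int i = 1; i < DEPTH; i++)
		chain[i].base = &chain[i - 1];

	/* Reconstruct every object independently, as a packer with no
	 * delta base cache would have to. */
	for (int i = 0; i < DEPTH; i++) {
		work = 0;
		reconstruct(&chain[i]);
		total += work;
	}
	printf("deepest object: %ld delta applications, whole chain: %ld\n",
	       work, total);
	return 0;
}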

>  > Turning a 500MB packfile into a 250MB has lots of advantages in IO
>  > reduction so it is worth some CPU/memory to create it.
>
>
> ..and secondly, there's no way you'll find a compressor that comes even
>  close to being twice as good. 10% better yes - but then generally much
>  MUCH slower.
>
>  Take a look at that web page you quote, and then sort things by
>  decompression speed. THAT is the issue.
>
>  And no, LZO isn't even on that list. I haven't tested it, but looking at
>  the code, I do think LZO can be fast exactly because it seems to be
>  byte-based rather than bit-based, so I'd not be surprised if the claims
>  for its uncompression speed are true.
>
>  The constant bit-shifting/masking/extraction kills zlib performance (and
>  please realize that zlib is at the TOP of the list when looking at the
>  thing you pointed to - that silly site seems to not care about compressor
>  speed at all, _only_ about size). So "kills" is a relative measure, but
>  really - we're looking for _faster_ algorithms, not slower ones!
>
>
>                         Linus
>
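
For what it's worth, here is a tiny sketch of the byte-based versus
bit-based point; it is not taken from zlib or LZO, and the 5-bit codes
are made up. The interesting part is the inner loop: the bit reader
has to shift and mask for every bit of every symbol, while the byte
reader just loads the next byte.

/* Toy comparison of bit-based vs byte-based symbol extraction. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bit-based: pull an nbits-wide code out of a packed bitstream,
 * one shift-and-mask per bit. */
static unsigned get_bits(const uint8_t *buf, size_t *bitpos, unsigned nbits)
{
	unsigned v = 0;

	for (unsigned i = 0; i < nbits; i++) {
		size_t p = *bitpos + i;
		v |= ((buf[p >> 3] >> (p & 7)) & 1u) << i;
	}
	*bitpos += nbits;
	return v;
}

/* Byte-based: each code is simply the next byte. */
static unsigned get_byte(const uint8_t *buf, size_t *pos)
{
	return buf[(*pos)++];
}

int main(void)
{
	uint8_t stream[16] = { 0xA5, 0x5A, 0xFF, 0x00, 0x12, 0x34 };
	size_t bitpos = 0, bytepos = 0;

	printf("5-bit code: %u, byte code: %u\n",
	       get_bits(stream, &bitpos, 5),
	       get_byte(stream, &bytepos));
	return 0;
}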


-- 
Jon Smirl
jonsmirl@xxxxxxxxx
