On 9/7/08, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Sat, 6 Sep 2008, Jon Smirl wrote:
> >
> > When I was playing with those giant Mozilla packs speed of zlib wasn't
> > a big problem. Number one problem was the repack process exceeding 3GB
> > which forced me to get 64b hardware and 8GB of memory. If you start
> > swapping in a repack, kill it, it will probably take a month to
> > finish.
>
> .. and you'd make things much much WORSE?

My observations on the Mozilla packs indicated that the problems were
elsewhere in git, not in the decompression algorithms. Why does a single
2000-entry delta chain take 15% of the entire pack time? Something isn't
right in the way long chains are processed; it triggers far more
decompressions than should be needed (see the sketch appended below).

> > Size and speed are not unrelated.
>
> Jon, go away.
>
> Go and _look_ at those damn numbers you tried to point me to.
>
> Those "better" compression models you pointed at are not only hundreds of
> times slower than zlib, they take hundreds of times more memory too!
>
> Yes, size and speed are definitely not unrelated. And in this situation,
> when it comes to compression algorithms, the relationship is _very_ clear:
>
>  - better compression takes more memory and is slower
>
> Really. You're trying to argue for something, but you don't seem to
> realize that you argue _against_ the thing you think you are arguing for.
>
> 			Linus

--
Jon Smirl
jonsmirl@xxxxxxxxx
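
For reference, here is a rough, hypothetical sketch in plain C of the effect
described above. It is not git's pack code; struct entry, CHAIN_LEN, and the
resolve_naive/resolve_cached helpers are made-up names for illustration only.
The point is just that reconstructing every object of an N-long delta chain
from scratch costs on the order of N^2/2 decompressions, while keeping the
already-reconstructed bases around keeps it near N.

/*
 * Hypothetical illustration only -- NOT git's pack code.
 * Each entry is either the base blob or a delta against the previous
 * entry, forming one long chain.  "Decompressing" an entry is simulated
 * by bumping a counter.
 */
#include <stdio.h>
#include <stdlib.h>

#define CHAIN_LEN 2000

struct entry {
	struct entry *base;	/* NULL for the non-delta base object */
	int resolved;		/* 1 once a cached copy of the result exists */
};

static long decompressions;

/* Naive resolution: walk all the way back to the base every time. */
static void resolve_naive(struct entry *e)
{
	if (e->base)
		resolve_naive(e->base);
	decompressions++;	/* inflate this entry and apply its delta */
}

/* Cached resolution: stop as soon as an already-resolved base is hit. */
static void resolve_cached(struct entry *e)
{
	if (e->resolved)
		return;
	if (e->base)
		resolve_cached(e->base);
	decompressions++;
	e->resolved = 1;	/* keep the reconstructed result around */
}

int main(void)
{
	struct entry *chain = calloc(CHAIN_LEN, sizeof(*chain));
	long i;

	for (i = 1; i < CHAIN_LEN; i++)
		chain[i].base = &chain[i - 1];

	decompressions = 0;
	for (i = 0; i < CHAIN_LEN; i++)
		resolve_naive(&chain[i]);
	printf("naive:  %ld decompressions for %d objects\n",
	       decompressions, CHAIN_LEN);

	decompressions = 0;
	for (i = 0; i < CHAIN_LEN; i++)
		resolve_cached(&chain[i]);
	printf("cached: %ld decompressions for %d objects\n",
	       decompressions, CHAIN_LEN);

	free(chain);
	return 0;
}

Built with any C compiler, this prints roughly 2,001,000 simulated
decompressions for the naive walk of a 2000-entry chain versus 2,000 when
intermediate results are cached -- the kind of blow-up that would let a
single long chain dominate the pack time.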