Re: Compression speed for large files

Alex Riesen wrote:
> On 7/3/06, Joachim B Haga <cjhaga@xxxxxxxxxx> wrote:
> > So: is it a good idea to change to faster compression, at least for
> > larger files? From my (limited) testing I would suggest using
> > Z_BEST_COMPRESSION only for small files (perhaps <1MB?) and
> > Z_DEFAULT_COMPRESSION/Z_BEST_SPEED for larger ones.
>
> Probably yes, as a per-repo config option.

I can send a patch later. If it's to be a per-repo option, several separate settings are probably too confusing. Is it OK with

core.compression = [-1..9]

where the numbers are the zlib/gzip constants,
  -1 = zlib default (currently 6)
   0 = no compression
1..9 = various speed/size tradeoffs (9 is the current git default)
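
For illustration (nothing of this exists yet; the key name is just the
proposal above, and I'm assuming it gets wired into the normal config
machinery), setting it would look like

  git repo-config core.compression 1

or directly in .git/config:

  [core]
          compression = 1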

Btw, I just tested the kernel sources, with gzip only but files compressed individually:
  time find . -type f | xargs gzip -9 -c | wc -c

I found the space saving from -6 to -9 to be under 0.6%, at double the CPU time. So perhaps Z_DEFAULT_COMPRESSION would be a good default.
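
For anyone who wants to reproduce this over more levels, here is a rough
bash sketch of the same per-file test in a loop (the exact numbers will
of course vary with the tree and machine):

  # compare output size (and wall-clock time) at a few levels;
  # each file is gzipped individually, as in the test above
  for level in 1 6 9; do
      echo "== gzip -$level =="
      time find . -type f | xargs gzip -$level -c | wc -c
  done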

-j