Re: More precise tag following

On Sat, Jan 27, 2007 at 12:41:54AM -0800, Junio C Hamano wrote:

> > Based on some (limited) profiling with Shark it seems we spend about
> > 50% of our CPU time doing zlib decompression of objects and almost
> > another 14% parsing the tree objects to apply the path limiter.
> 
> I once tried to use zlib compression level 0 for tree objects
> and did not see much difference -- maybe I should dig it up and
> find out why.

I don't know exactly what Shawn meant, but a considerable amount of time
in a blame is spent decompressing the blobs. Just for fun, some numbers:

Fully packed, warm cache, core.compression = -1:
$ time git blame Makefile >/dev/null
real    0m5.537s
user    0m5.500s
sys     0m0.032s

Fully packed, warm cache, core.compression = 0:
$ time git blame Makefile >/dev/null
real    0m3.001s
user    0m2.984s
sys     0m0.012s

That's about a 45% savings in time. The resulting pack sizes are 11932K
compressed and 22308K uncompressed.
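The tradeoff above comes straight from zlib's compression levels: level 0
stores the data essentially uncompressed, so inflating it is nearly a
memcpy, while the default level (-1, which maps to level 6) trades CPU for
size. A small stdlib sketch, not git itself, that illustrates the same
effect on a blob-like payload (the payload and iteration counts are made up
for illustration):

```python
import time
import zlib

# A repetitive, source-file-like payload; real blobs compress similarly well.
data = b"int main(void) { return 0; }\n" * 2000

for level in (-1, 0):  # -1 = zlib default (level 6); 0 = stored, no compression
    packed = zlib.compress(data, level)
    start = time.perf_counter()
    for _ in range(200):  # decompression dominates a blame, so loop it
        zlib.decompress(packed)
    elapsed = time.perf_counter() - start
    print(f"level {level:2d}: {len(packed):7d} bytes packed, "
          f"200 decompressions in {elapsed:.3f}s")
```

Level 0 output is larger (roughly the input size plus stored-block
overhead) but decompresses much faster, which is exactly the compressed
11932K vs. uncompressed 22308K and 5.5s vs. 3.0s tradeoff measured above.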

-Peff
