Re: pack operation is thrashing my server

On Thu, 14 Aug 2008, Linus Torvalds wrote:
> 
> Doing a rev-list of all objects is a fairly rare operation, but even if 
> you want to clone/repack all of your archives the whole time, please 
> realize that listing objects is _not_ a simple operation. It opens up and 
> parses every single tree in the whole history. That's a _lot_ of data to 
> unpack.

Btw, it's not that hard to run oprofile (link git statically to get better 
numbers). For me, the answer to what is going on for a kernel rev-list is 
pretty straightforward:

	samples   %        symbol name
	263742   26.6009  lookup_object
	135945   13.7113  inflate
	110525   11.1475  inflate_fast
	75124     7.5770  inflate_table
	64676     6.5232  strlen
	48635     4.9053  memcpy
	47744     4.8154  find_pack_entry_one
	35265     3.5568  _int_malloc
	31579     3.1850  decode_tree_entry
	28388     2.8632  adler32
	19441     1.9608  process_tree
	10398     1.0487  patch_delta
	8925      0.9002  _int_free
	..

so most of it is in inflate, but I suspect the cost of "lookup_object()" 
is so high because when we parse the trees we also have to look up every 
blob - even if it didn't change - just to see whether we already saw it 
or not.

For me, an instruction-level profile of lookup_object() shows that the 
cost is all in the hashcmp (53% of the profile is on that "repz cmpsb") 
and in the loading of the object pointer (26% of the profile is on the 
test instruction after the "obj_hash[i]" load). I don't think we can 
really improve that code much - the hash table is very efficient, and the 
cost is just in the fact that we have a lot of memory accesses.
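
For reference, here's a minimal sketch of that kind of open-addressing 
lookup. The names loosely follow git's object.c, but take it as an 
illustration of the memory-access pattern, not the actual git source:

	#include <string.h>
	#include <stddef.h>

	struct object {
		unsigned char sha1[20];
		/* flags, type, parsed payload, ... */
	};

	static struct object **obj_hash;	/* power-of-two sized table */
	static unsigned int obj_hash_size;

	/* SHA-1 is already well distributed, so raw bytes make a fine bucket. */
	static unsigned int hash_obj(const unsigned char *sha1, unsigned int n)
	{
		unsigned int hash;
		memcpy(&hash, sha1, sizeof(hash));
		return hash & (n - 1);
	}

	struct object *lookup_object(const unsigned char *sha1)
	{
		unsigned int i;
		struct object *obj;

		if (!obj_hash_size)
			return NULL;

		/*
		 * Linear probing: the hot spots in the profile are the
		 * obj_hash[i] load (a likely cache miss) and the 20-byte
		 * compare (the "repz cmpsb" above).
		 */
		i = hash_obj(sha1, obj_hash_size);
		while ((obj = obj_hash[i]) != NULL) {
			if (!memcmp(sha1, obj->sha1, 20))
				return obj;
			if (++i == obj_hash_size)
				i = 0;
		}
		return NULL;
	}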

We could try to use the (more memory-hungry) "hash.c" implementation for 
object hashing, which actually includes a 32-bit key inside the hash 
table, but while that will avoid the cost of fetching the object pointer 
for the cases where we have collisions, most of the time the cost is not 
in the collision, but in the fact that we _hit_.
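
Roughly, a slot in such a table would look like this (illustrative 
only, not hash.c verbatim):

	struct hash_table_entry {
		unsigned int hash;	/* 32-bit key, checked first */
		void *ptr;		/* only dereferenced when the key matches */
	};

so a probing miss can be rejected from the table entry alone, without 
touching the object itself.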

I bet the hit percentage is 90+%, and the cost really is just that we 
encounter the same object hundreds or thousands of times.

Please realize that even though there may be "only" a million objects in the 
kernel, there are *MANY* more ways to _reach_ those objects, and that is 
what git-rev-list --objects does! It's not O(number-of-objects), it's 
O(number-of-object-linkages).

For my current kernel archive, for example, the number of objects is 
roughly 900k. However, think about how many times we'll actually reach a 
blob: that's roughly (blobs per commit)*(number of commits), which can be 
approximated with

	echo $(( $(git ls-files | wc -l) * $(git rev-list --all | wc -l) ))

which is 24324*108518 = 2639591832, i.e. about 2.6 _billion_ times.

Now, we don't actually do anything close to that many lookups, because 
when a subdirectory doesn't change at all, we'll skip the whole tree after 
having seen it just once, so that will cut down on the number of objects 
we have to look up by probably a couple of orders of magnitude.
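
Here's a sketch of that short-circuit, loosely modeled on rev-list's 
process_tree(); the data structures and helpers are made up for the 
illustration, but the early return on an already-seen tree is the point:

	#define SEEN 1u

	struct tree_entry {
		unsigned char sha1[20];
		int is_tree;		/* subdirectory (tree) vs. file (blob) */
	};

	struct tree {
		unsigned int flags;
		int nr_entries;
		struct tree_entry *entries;
	};

	/* Hypothetical helpers for this sketch. */
	struct tree *lookup_tree(const unsigned char *sha1);
	void mark_blob_seen(const unsigned char *sha1);	/* a lookup_object() hit */

	static void process_tree(struct tree *tree)
	{
		int i;

		if (tree->flags & SEEN)
			return;		/* unchanged subtree: seen once, skipped after that */
		tree->flags |= SEEN;

		for (i = 0; i < tree->nr_entries; i++) {
			struct tree_entry *e = &tree->entries[i];

			if (e->is_tree)
				process_tree(lookup_tree(e->sha1));
			else
				mark_blob_seen(e->sha1);	/* one lookup per blob entry */
		}
	}

A deep hierarchy means most of a commit's subtrees are shared with its 
parent, so that early return fires almost immediately.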

But this is why the "one large directory" load performs worse: in the 
worst case, if you really have a totally flat directory tree, you'd 
literally see that 2.6 billion object lookup case.

So it's not that git scales badly. It's that "git rev-list --objects" is 
really a very expensive operation, and while good practices (deep 
directory structures) let it optimize away a lot of that load, it's 
still potentially very tough.

			Linus
