On Sun, 19 Nov 2006, Marco Costalba wrote:
>
> Sure. File ran against git tree attached.

Ok. Nothing really strange stands out - it's a nice trace with just over
400 system calls. I'd expect it to finish in no time at all (tracing will
add some overhead, since you context switch back and forth between the
tracer and the tracee, but it's really not doing a lot, so even with
tracing it should execute almost instantaneously).

So it all looks _almost_ fine.. Except for this one:

	10:19:04.449236 stat64(".git/objects/3a/41a48d139d1425c1d27e3fbe4f511fb7e09e94", {st_mode=S_IFREG|0444, st_size=278, ...}) = 0 <0.817989>

That's a _single_ "stat64()" system call that takes almost a second to
execute. All the rest are in the millisecond range, and sometimes a
hundredth of a second or two. Ie, doing

	grep -v ' <0.0[012]' tracefile_git_tree.txt

on your tracefile, there really aren't a lot of system calls that take a
long time, and that one stat _really_ stands out (the others are three or
four hundredths of a second, and then suddenly you have one that is 20
times longer than even the slowest other ones).

Basically, you seem to have a _single_ object access that takes up half
the time of the whole program. It's the object for 'refs/tags/v1.4.4-rc1'
in case you care, btw.

> If you want I can repack and prune, but for now I just wait, to avoid
> corrupting this test case.

What you could try to do is re-run it a few times (cold-cache) and see
if those numbers really are stable, and if it's always the same object
that takes that long.

In fact, you could even do a simple

	time ls -l .git/objects/3a/41a48d139d1425c1d27e3fbe4f511fb7e09e94

for the cold-cache case, and see if even just _that_ takes almost a
second.

If it _is_ stable, there are two possibilities:

 - you have a large and slow disk, and that one object really is way out
   there on the other side of the disk, and seeking really takes almost a
   second.
   Quite frankly, I expected that the time when a single stat() took
   almost a second was a decade or more in the past, back in the days of
   floppy disks. But what do I know?

 - your disk is failing, and ends up doing error recovery etc. Maybe it's
   worth running "smartctl -a /dev/hda" or whatever, to see if the disk
   already knows it is having problems.

Anyway, repacking will fix this, but quite frankly, I'd be a bit nervous
about that disk if I were you.

(NOTE NOTE NOTE! There could be other reasons for that one-second delay.
If the machine was under heavy load, or was running low on memory, maybe
the long delay was just due to having to swap things out or run other
things instead. That's why it might be interesting to see if the number
is "stable" in that it's always that same object..)

		Linus
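The grep filter from the mail can be exercised end-to-end on a toy trace file. A minimal sketch, assuming the trace is in "strace -T" format (elapsed time of each call in angle brackets at the end of the line); the first two sample lines here are invented for illustration, and only the stat64 line and the filter pattern come from the mail itself:

```shell
# Fabricate a small trace in "strace -T" format; the <...> field at the
# end of each line is the elapsed time of that system call in seconds.
# The first two lines are made-up examples of "fast" calls.
cat > tracefile_git_tree.txt <<'EOF'
10:19:04.100000 open(".git/HEAD", O_RDONLY) = 3 <0.000021>
10:19:04.120000 read(3, "ref: refs/heads/master\n"..., 256) = 23 <0.012345>
10:19:04.449236 stat64(".git/objects/3a/41a48d139d1425c1d27e3fbe4f511fb7e09e94", {st_mode=S_IFREG|0444, st_size=278, ...}) = 0 <0.817989>
EOF

# Drop every call whose duration field starts with <0.00, <0.01 or <0.02,
# i.e. keep only the calls that took roughly 0.03s or longer.
grep -v ' <0.0[012]' tracefile_git_tree.txt
# -> only the 0.817989s stat64() line survives
```

With only one slow call the plain filter is enough; on a bigger trace, sorting the surviving lines by their duration field makes the outlier stand out even more.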