Joshua Redstone <joshua.redstone@xxxxxx> writes:

> Greg, 'git commit' does some stat'ing of every file, even with all those
> flags - for example, I think one instance it does it is, just in case any
> pre-commit hooks touched any files, it re-stats everything.

That seems ripe for skipping.  If I understand correctly, what gets
committed is the index, not the working-directory contents, so it would
follow that a pre-commit hook changing a file is a bug.

> Regarding the perf numbers, I ran it on a beefy linux box.  Have you
> tried doing your measurements with the drop_caches trick to make sure
> the file cache is totally cold?

On NetBSD, there should be a clear-cache command for exactly this
reason, but I don't believe there is one.  So I did:

  sysctl -w kern.maxvnodes=1000    # seemed to take a while
  ls -lR                           # wait for those to be faulted in
  sysctl -w kern.maxvnodes=500000

Then 'git status' on my repo churned the disk for a long time:

  real    2m7.121s
  user    0m3.086s
  sys     0m7.577s

and then again right away:

  real    0m6.497s
  user    0m2.533s
  sys     0m3.010s

That repo has 217852 files (a real source tree with a few binaries, not
synthetic).

> Sorry for the dumb question, but how do I check the vnode cache size?

On BSD, 'sysctl kern.maxvnodes'.  I would assume that on Linux there is
some maximum size for the vnode cache, and that a stat of a file in that
cache is faster than going to the filesystem (even when reading from
cached disk blocks).  But I really don't know how that works on Linux.

I was going to say that if your vnode cache isn't big enough, the hot
run won't be so much faster than the cold run, but that's not quite
true: the filesystem blocks will be in the block cache, and that will
still help.
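To make the "what gets committed is the index" point concrete, here is a
minimal sketch you can run in a throwaway repo (the file name, commit
message, and identity settings are all made up for the demo): a file
modified after 'git add' is committed in its staged form, which is why a
pre-commit hook that touches files would be producing changes the commit
never records.

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q .
git config user.email demo@example.com
git config user.name demo

printf 'staged\n' > file.txt
git add file.txt

# Simulate a pre-commit hook touching the file after it was staged:
printf 'modified-after-add\n' > file.txt

git commit -q -m 'commit records the index'

# The commit contains the staged content, not the working-tree content:
git show HEAD:file.txt     # prints "staged"
cat file.txt               # prints "modified-after-add"
```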
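For the Linux side of the quoted drop_caches question, the usual
cold-cache recipe is the following sysctl fragment (requires root; this
is a sketch of the standard /proc interface, not something specific to
the measurements above):

```shell
# Flush dirty pages first so dropping caches is safe and complete.
sync
# 1 = pagecache, 2 = dentries and inodes, 3 = both.
echo 3 > /proc/sys/vm/drop_caches
# Now time the cold run:
time git status
```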