On Sat, 1 Dec 2007, Joachim B Haga wrote:
> 
> Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> writes:
> > The pack-files (both index and data) are accessed somewhat randomly, but
> > there is still enough locality that doing read-ahead and clustering really
> > does help.
> 
> They are dense enough that slurping them in whole is 20% faster, at
> least here. And much less noisy! These are both cache-cold tests.

With BK, I used to have a "readahead" script to do something close to 
this.

The problem with that approach is that it works wonderfully well for 
people who (a) have tons of memory and (b) really only care about the 
source tree and almost nothing else, but it doesn't work that well at all 
for others.

So yes, for me, forcing a page-in of all the data is actually worth it. I 
commonly do something like

	git grep quieuiueriueirue &

on my main machine when I reboot it for testing - just to bring the 
working tree into cache, so that subsequent "git diff" and "git grep" 
operations will be faster.

> $ time git read-tree -m -u HEAD HEAD
> 
> real    0m9.255s
> user    0m0.832s
> sys     0m0.196s
> 
> $ time (cat .git/objects/pack/* .git/index >/dev/null; git read-tree -m -u HEAD HEAD)
> 
> real    0m7.141s
> user    0m0.936s
> sys     0m1.912s
> 
> Now, I don't know how useful this is since git doesn't know if the
> data are cached. Is it perhaps possible to give a hint to the
> readahead logic that it should try to read as far as possible?

You have a much faster disk drive than I do on that slow laptop that I 
wanted to optimize for. I get

	[torvalds@hp linux]$ time git read-tree -m -u HEAD HEAD

	real    0m12.849s
	user    0m0.232s
	sys     0m0.124s

for the cold-cache case, but then for populating the whole thing:

	time cat .git/objects/pack/* .git/index >/dev/null

	real    0m31.350s
	user    0m0.040s
	sys     0m0.468s

whoops. Can you say "pitiful"?

(In contrast, my desktop does the same thing in seven seconds - laptop 
disks really are *much* slower than a reasonable desktop one).
		Linus