Neal Kreitzinger <nkreitzinger@xxxxxxxxx> writes:

> I want to benchmark how long it takes commands like git-gc, git-fsck,
> etc. to run against our canonical repo.  What is the correct way to do
> this?  I am being asked how much time such commands would add to
> automated on-demand push scripts.

Umm, what's wrong with

  $ time git fsck

The bigger question is: do you want to measure hot or cold performance?
For most operations it is more useful to measure hot performance, as the
repo will be hot anyway.  But in the case of fsck I wouldn't be so sure;
it is entirely possible that it "usually" faults in a bunch of loose
objects that are otherwise unused, taking some extra time.  So there may
be some value in first running (as root)

  $ echo 3 >/proc/sys/vm/drop_caches

to get cold-cache measurements.

Besides, if you feel like properly evaluating performance in your
repository, you can look in t/perf/README.  Then point GIT_PERF_REPO at
your repo of choice, and write tests as needed (for example, there is
currently no perf test for fsck).

That said, both gc and fsck are so slow on even medium-sized
repositories (like git.git) that you should probably put them in a
nightly cronjob instead.

-- 
Thomas Rast
trast@{inf,student}.ethz.ch
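The t/perf route could look roughly like this.  The file name and test
title are assumptions (as noted, no fsck perf test exists yet); the
helpers — perf-lib.sh, test_perf_default_repo, test_perf, test_done —
are the ones the existing scripts in t/perf use, and the fragment only
runs from inside git's own source tree:

```shell
#!/bin/sh

test_description='fsck performance'

. ./perf-lib.sh

test_perf_default_repo

test_perf 'fsck' '
	git fsck
'

test_done
```

Then run it from t/perf, pointing GIT_PERF_REPO at the repository you
want to measure (path is a placeholder):

  $ cd t/perf
  $ GIT_PERF_REPO=/path/to/canonical/repo ./run p1450-fsck.sh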
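The nightly-cronjob suggestion is a one-liner in crontab.  The path,
schedule, and log file here are assumptions — adjust to taste:

```
# min hour dom mon dow  command
0 3 * * * (cd /srv/git/canonical.git && git gc --quiet && git fsck) >>/var/log/git-maint.log 2>&1
```

With maintenance moved out of the push path, the on-demand push scripts
never pay for gc or fsck at all.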
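The `time` and drop_caches suggestions above can be combined into one
script.  This is only a sketch: the throwaway repository it creates is an
assumption so the script is self-contained — point REPO at your canonical
repository instead — and the cache drop only happens when run as root
(otherwise you get hot-cache timings).

```shell
#!/bin/bash
set -e

# REPO defaults to a throwaway scratch repo (assumption, just so this
# sketch runs anywhere); override it with your canonical repository:
#   REPO=/srv/git/canonical.git ./time-git-maint.sh
REPO=${REPO:-$(mktemp -d)}
if [ ! -d "$REPO/.git" ]; then
    git init -q "$REPO"
    ( cd "$REPO" &&
      echo hello >file &&
      git add file &&
      git -c user.name=tester -c user.email=tester@example.com \
          commit -qm 'initial commit' )
fi

if [ "$(id -u)" -eq 0 ]; then
    sync                               # flush dirty pages first
    echo 3 >/proc/sys/vm/drop_caches   # 3 = pagecache + dentries + inodes
    echo 'page cache dropped; timings below are cold-cache'
else
    echo 'not root; skipping drop_caches, timings below are hot-cache'
fi

# Time the maintenance commands with the shell's `time` keyword;
# the timing output goes to stderr.
for cmd in gc fsck; do
    echo "=== git $cmd ==="
    ( cd "$REPO" && time git "$cmd" )
done
```

Run it a few times and compare: only the first cold run after a cache
drop is representative of the worst case your push scripts would see.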