On Fri, Sep 12, 2008 at 06:43:48PM +0200, Stephen R. van den Berg wrote:
> >> True. But repopulating this cache after cloning means that you have to
> >> calculate the patch-id of *every* commit in the repository. It sounds
> >> like something to avoid, but maybe I'm overly concerned, I have only a
> >> vague idea of how computationally intensive this is.
>
> > For a rough estimate, try:
> >
> >     time git log -p | git patch-id >/dev/null
>
> On my system that results in 2ms per commit on average. Not huge, but
> not small either, I guess. Running it results in real waiting time; it
> all depends on how patient the user is.

For a local clone, git could be taught to copy the cache file.  For a
network-based clone, the time needed to download is roughly 2-3 times
that (although that will obviously depend on your network connectivity).
Building this cache can be done in the background, though, or delayed
until the first time the cache is needed.

						- Ted
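
A minimal sketch of what such a background cache build might look like,
using the same pipeline quoted above; the cache file name under the git
directory is a hypothetical choice for illustration (git keeps no such
file), as is the idea of kicking it off right after clone:

    #!/bin/sh
    # Sketch: precompute "<patch-id> <commit-id>" pairs for every commit
    # in the background so the first user of the cache doesn't have to wait.
    # "git patch-id" reads "git log -p" output and prints one pair per commit.
    # The cache path below is an assumption, not an existing git file.
    cache="$(git rev-parse --git-dir)/patch-id-cache"
    git log --all -p | git patch-id >"$cache" &

Dropping the trailing "&" and running the same pipeline the first time a
lookup finds the cache file missing would be the lazy variant mentioned
above.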