On Wed, Jan 05, 2011 at 02:33:36PM -0600, Neal Kreitzinger wrote:

> If two or more different users perform a git-fetch on the same mirror
> (--mirror) repo concurrently, could that cause corruption? I tried a
> manual test using the git protocol over separate machines and they both
> thought they needed to do the full updates and they both appeared to
> work. I'm not sure if git is serializing this, or if it is possible for
> concurrent fetches to step on each other.

No, it shouldn't cause corruption, but it will cause wasted effort, and it
may cause one fetch to report failure.

The fetch process gets all of the objects first, and then updates the refs
(so we never have refs that point to objects we didn't get yet). So both
of the concurrent fetches will see that they have a big set of objects to
get and will work on getting them at the same time, after which they will
update the refs appropriately (presumably to the same thing).

I haven't looked specifically at how fetch does locking, but usually the
procedure is: lock the ref, read the old value, unlock it, do some
long-running task (like fetching objects), then lock the ref again, check
that the old value didn't change out from under us, update it, and unlock.
In that case, one of the fetches might see "oops, somebody updated while
we were fetching" and complain. However, in the default configuration a
mirror fetches using a "+" refspec, which forces the ref update even in
the case of a non-fast-forward. I don't know whether that force would also
override the lock-checking.

-Peff
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
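[Editor's note: the optimistic "lock, read, unlock, do the long-running
fetch, re-lock, re-check, update" procedure described in the reply can be
sketched as a compare-and-swap. This is an illustrative Python sketch
under that assumption, not git's actual ref-locking code; the names `Ref`,
`compare_and_update`, and the `force` flag (standing in for the "+"
refspec behavior) are hypothetical.]

```python
# Hypothetical sketch of the optimistic locking procedure described above.
# Not git's real implementation; Ref and its methods are illustrative.
import threading


class Ref:
    """A ref protected by a lock and updated via compare-and-swap."""

    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def read(self):
        # Lock the ref, read the old value, unlock.
        with self._lock:
            return self._value

    def compare_and_update(self, old, new, force=False):
        # Lock again after the long-running work; only update if the
        # value is still what we read earlier -- unless forced, which
        # models the "+" refspec overriding the staleness check.
        with self._lock:
            if not force and self._value != old:
                return False  # somebody updated while we were fetching
            self._value = new
            return True


def fetch(ref, remote_value, force=False):
    old = ref.read()  # 1. note the old value
    # 2. long-running task: fetching the objects would happen here
    # 3. re-check and update; fails if the ref moved in the meantime
    if not ref.compare_and_update(old, remote_value, force=force):
        print("error: ref changed while we were fetching")
```

Under this model, two concurrent fetches that both succeed in getting the
objects race on step 3: the second one finds the ref already moved and,
without `force`, reports the "somebody updated" failure the reply
anticipates.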