On Mon, Oct 15, 2012 at 10:29:08AM -0400, Marc Branchaud wrote:

> Here's a previous discussion of a race in concurrent updates to the
> same ref, even when the updates are all identical:
>
>   http://news.gmane.org/find-root.php?group=gmane.comp.version-control.git&article=164636
>
> In that thread, Peff outlines the lock procedure for refs:
>
>   1. get the lock
>   2. check and remember the sha1
>   3. release the lock
>   4. do some long-running work (like the actual push)
>   5. get the lock
>   6. check that the sha1 is the same as the remembered one
>   7. update the sha1
>   8. release the lock

A minor nit, but I was wrong on steps 1-3. We don't have to take a lock
on reading, because our write mechanism uses atomic replacement. So it
is really:

  1. read and remember the original sha1
  2. do some long-running work (like the actual push)
  3. get the write lock
  4. read the sha1 and check that it's the same as our original
  5. write the new sha1 to the lockfile
  6. simultaneously release the lock and update the ref by atomically
     renaming the lockfile to the actual ref

Any simultaneous push may see the "old" sha1 before step 6, and when it
gets to its own step 4, it will fail (and two processes cannot be in
steps 3-6 simultaneously). There's a rough sketch of this in code at the
end of this message.

> Angelo, in your case I think one of your concurrent updates would fail
> in step 6. As you say, this is after the changes have been uploaded.
> However, there's none of the file-overwriting that you fear, because
> the changes are stored in git's object database under their SHA
> hashes. So there'll only be an object-level collision if two parties
> upload the exact same object, in which case it doesn't matter.

Right. The only thing that needs locking is the refs, because the
object database is add-only for normal operations, and by definition
collisions mean you have the same content (or are astronomically
unlucky, but your consolation prize is that you can write a paper on
how you found a sha1 collision).
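To make steps 3-6 concrete, here is a minimal sketch of that
compare-and-swap using plain POSIX calls. It is illustrative, not git's
actual code (the real thing lives in refs.c and lockfile.c); the helper
names read_ref() and update_ref() are made up, and error handling is
pared down:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Read the 40-hex sha1 from a loose ref file into buf (41 bytes). */
    static int read_ref(const char *path, char *buf)
    {
            FILE *f = fopen(path, "r");
            if (!f)
                    return -1;
            if (!fgets(buf, 41, f)) {
                    fclose(f);
                    return -1;
            }
            fclose(f);
            return 0;
    }

    /* Returns 0 on success, -1 if the lock is taken or the ref moved. */
    static int update_ref(const char *ref, const char *old_sha1,
                          const char *new_sha1)
    {
            char lock[4096], cur[41];
            int fd;

            snprintf(lock, sizeof(lock), "%s.lock", ref);

            /* step 3: O_CREAT|O_EXCL is the lock; only one process wins */
            fd = open(lock, O_WRONLY | O_CREAT | O_EXCL, 0666);
            if (fd < 0)
                    return -1;

            /* step 4: re-read under the lock, compare to what we remembered */
            if (read_ref(ref, cur) < 0 || strcmp(cur, old_sha1)) {
                    close(fd);
                    unlink(lock);
                    return -1;      /* ref moved; caller must fail or retry */
            }

            /* step 5: the new value goes into the lockfile itself */
            if (write(fd, new_sha1, 40) != 40 || write(fd, "\n", 1) != 1) {
                    close(fd);
                    unlink(lock);
                    return -1;
            }
            close(fd);

            /* step 6: rename() is atomic; commit and unlock in one step */
            return rename(lock, ref);
    }

The caller performs steps 1-2: it reads the ref once before the
long-running work and passes that value in as old_sha1. Any other
updater that renamed its own lockfile into place in the meantime makes
the strcmp() fail, so nobody ever clobbers anybody else's update.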
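And to illustrate why the add-only object database needs no lock at
all, here is a sketch of content-addressed storage. Again, this is not
git's implementation (git compresses objects and fans them out into
objects/xx/ subdirectories); it assumes OpenSSL's SHA1() (link with
-lcrypto), and store_object() is a made-up name:

    #include <openssl/sha.h>
    #include <stdio.h>
    #include <unistd.h>

    static void store_object(const char *dir, const unsigned char *buf,
                             size_t len)
    {
            unsigned char md[SHA_DIGEST_LENGTH];
            char hex[41], tmp[4096], path[4096];
            int i;
            FILE *f;

            /* the name of an object is the hash of its content */
            SHA1(buf, len, md);
            for (i = 0; i < SHA_DIGEST_LENGTH; i++)
                    sprintf(hex + 2 * i, "%02x", md[i]);
            snprintf(path, sizeof(path), "%s/%s", dir, hex);

            /*
             * "add-only": if the file already exists, its content is by
             * definition the same as ours, so there is nothing to do.
             */
            if (access(path, F_OK) == 0)
                    return;

            /* write a temp file, then rename, so nobody sees a partial object */
            snprintf(tmp, sizeof(tmp), "%s/tmp_obj_%ld", dir, (long)getpid());
            f = fopen(tmp, "wb");
            if (!f)
                    return;
            fwrite(buf, 1, len, f);
            fclose(f);
            rename(tmp, path);
    }

Two writers racing on the same new object either skip it (one sees the
other's file) or rename identical bytes into place; either way the
result is the same file, with no lock taken.

-Peff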