On Wed, Dec 03, 2014 at 01:21:03PM -0800, Brodie Rao wrote:

> > I think it is also not sufficient. This patch seems to cover only
> > objects. But we assume that we can atomically rename() new versions of
> > files into place whenever we like without disrupting existing readers.
> > This is the case for ref updates (and packed-refs), as well as the index
> > file. The destination end of the rename is an unlink() in disguise, and
> > would be susceptible to the same problems.
>
> I'm not aware of renaming over files happening anywhere in gc-related
> code. Do you think that's something that would need to be addressed in
> the rest of the code base before going forward with this garbage
> directory approach? If so, do you have any suggestions on how to
> tackle that problem?

As an example, if you run "git pack-refs --all --prune" (which is run by
"git gc"), it will create a new packed-refs file and rename it into
place. Another git program (say, "git for-each-ref") might be reading
the file at the same time.

If you run pack-refs and for-each-ref simultaneously in tight loops on
your problematic NFS setup, what happens? I have no idea if it breaks or
not. I don't have such a misbehaving system, and I don't know how
rename() is implemented on it.

But if it _is_ a problem of the same variety, then I don't see much
point in making an invasive fix to address half of the problem areas but
not the other half (i.e., if we are still left with a broken git in this
setup, was the invasive fix worth the cost?).

If it is a problem (and again, I am just guessing), I'd imagine you
would need a similar setup to what you proposed for unlink(): before
renaming "packed-refs.lock" into "packed-refs", hard-link the existing
"packed-refs" into your "trash" area so its old inode stays alive for
any concurrent readers. And you'd probably want to intercept rename()
there, to catch all places where we use this technique.
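To make that last suggestion concrete, something along these lines is
what I have in mind. This is just a sketch, not actual git code; the
function name, the trash_dir parameter, and the pid-based naming scheme
are all made up for illustration:

```c
/*
 * Hypothetical sketch (not git code): a rename() wrapper that parks
 * the destination's old inode in a "trash" directory before replacing
 * it, mirroring the link-before-unlink idea proposed for loose objects.
 * The trash_dir layout and naming are assumptions, not a real design.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int rename_keep_old(const char *src, const char *dst,
			   const char *trash_dir)
{
	char trash_path[4096];
	const char *base = strrchr(dst, '/');

	base = base ? base + 1 : dst;

	/* Park the soon-to-be-overwritten inode under a unique name. */
	snprintf(trash_path, sizeof(trash_path), "%s/%s.%d",
		 trash_dir, base, (int)getpid());

	/*
	 * ENOENT just means dst does not exist yet, which is fine;
	 * any other link() failure is reported to the caller.
	 */
	if (link(dst, trash_path) && errno != ENOENT)
		return -1;

	/* Atomically put the new file in place, as git does today. */
	return rename(src, dst);
}
```

A reader that already has the old packed-refs open would then keep a
valid inode to read from, and the trash directory could be swept later,
the same way the garbage-directory patch proposes for objects.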
-Peff