Jeff King <peff@xxxxxxxx> writes:

> On Thu, Jul 27, 2017 at 10:19:48AM -0700, Junio C Hamano wrote:
>
>> Makes sense.  Makes me wonder why we need a separate .new file
>> (instead of writing into the .lock instead), but that is a different
>> issue.
>
> It comes from 42dfa7ece (commit_packed_refs(): use a staging file
> separate from the lockfile, 2017-06-23). That commit explains that we
> want to be able to put the new contents into service before we release
> the lock. But it doesn't say why that's useful.

By being able to hold the lock on packed-refs longer, I guess
something like this becomes possible:

 * hold the lock on packed-refs

 * hold the lock on loose refs A, B, C, ...

 * update packed-refs to include the freshest values of these refs

 * start serving packed-refs without releasing the lock

 * for X in A, B, C, ...: delete the loose ref X and unlock X

 * unlock the packed-refs

Other people racing with this sequence to recreate a loose ref that is
even fresher than the resulting packed-refs file, while we still hold
the lock on packed-refs, is perfectly OK. But we must make sure our
packed-refs is visible to others before starting to delete and unlock
the loose refs.

Hmph, but that is not a sufficient explanation. I am not seeing
anything bad happening if we unlock the packed-refs before deleting
the loose refs that we have locks on, so there must be something else
that needs "new packed-refs is made visible way before we unlock it".

> I recall from past discussions that this will help close some races,
> and e5cc7d7d2 (repack_without_refs(): don't lock or unlock the packed
> refs, 2017-07-01) alludes to this. I think the races in question have to
> do with holding the packed-refs lock while pruning the just-packed
> files, but I'm having trouble digging up specifics in the archive.
>
> -Peff
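
The six-step sequence sketched in the message could be modeled roughly
like this. This is a toy Python simulation, not git's actual code: the
pack_refs function, the flat ref file names, and the use of
O_CREAT|O_EXCL files in place of git's real lockfile API are all my
assumptions for illustration.

```python
import os

def pack_refs(repo, refs):
    """Toy model of the packed-refs update sequence discussed above.

    NOTE: a simplified sketch, not git's implementation.  Locks are
    modeled as exclusively-created *.lock files, and refs are plain
    files directly under `repo` (no refs/heads/ hierarchy).
    """
    packed = os.path.join(repo, "packed-refs")
    lock = packed + ".lock"
    staging = packed + ".new"

    # 1. hold the lock on packed-refs (O_EXCL fails if already locked)
    lock_fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)

    # 2. hold the lock on each loose ref A, B, C, ...
    loose_locks = []
    for name in refs:
        fd = os.open(os.path.join(repo, name) + ".lock",
                     os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        loose_locks.append(fd)

    # 3. write the freshest values of these refs into a staging file
    with open(staging, "w") as out:
        for name in refs:
            with open(os.path.join(repo, name)) as loose:
                out.write("%s %s\n" % (loose.read().strip(), name))

    # 4. start serving the new packed-refs WITHOUT releasing the lock:
    #    the atomic rename makes it visible to readers right here
    os.rename(staging, packed)

    # 5. only now delete each loose ref and drop its lock
    for name, fd in zip(refs, loose_locks):
        os.unlink(os.path.join(repo, name))
        os.close(fd)
        os.unlink(os.path.join(repo, name) + ".lock")

    # 6. finally release the packed-refs lock
    os.close(lock_fd)
    os.unlink(lock)
```

The point the message makes lives between steps 4 and 5: because the
rename publishes the new packed-refs before any loose ref is removed,
a reader never sees a window in which a ref exists in neither file.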