On Wed, Jun 19, 2019 at 07:51:00PM +0700, Duy Nguyen wrote:

> > Whereas mine fixes e.g. the same issue for:
> >
> >     parallel 'git fetch {}' ::: $(git remote)
> >
> > Ditto for you running a "git" command and your editor running a
> > "fetch" at the same time.
>
> You could sort of avoid the problem here too with
>
>     parallel 'git fetch --no-auto-gc {}' ::: $(git remote)
>     git gc --auto
>
> It's definitely simpler, but of course we have to manually add
> --no-auto-gc everywhere we need it, so it is not quite as elegant.

This has the added advantage that the gc is deterministically run only
once, after all of the fetches have finished. Whereas any locking scheme
is going to run it for at least _one_ of the fetches, chosen more or
less at random, and there may be other fetches that complete afterwards.

In a sense it might not matter, because any fetches which complete after
the auto-gc finishes would either trigger a new auto-gc or not. And if
not, then one could argue that it wasn't necessary. But as a general
rule, the cost of gc scales with the size of the repository, not with
the number of unpacked objects. So it's more efficient to stick as many
updates as you can into a single gc; the cost is running a gc at all,
not the incremental cost of including the new fetches. Or, put another
way, by leaving some fetches from this round of commands out of the gc,
we will require another expensive gc sooner.

> Actually you could already do that with 'git -c gc.auto=false fetch', I
> guess.

Yeah. I wrote my other response before reading this part of the thread,
but IMHO Ævar's example argues even more for "git --no-auto-gc".

-Peff
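
[Editor's note: a minimal runnable sketch of the "suppress auto-gc per
fetch, then gc once at the end" pattern discussed above. Since the
--no-auto-gc option is only being proposed in this thread, the sketch
uses `git -c gc.auto=0`, which disables auto-gc for a single command
today. The repository paths and remote names are made up for
illustration.]

```shell
#!/bin/sh
set -e

# Throwaway sandbox so the sketch is self-contained.
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT

# A small "upstream" repository to fetch from.
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m init

# A local repository with two remotes, standing in for the many
# remotes iterated over in the 'parallel' example above.
git init -q "$tmp/local"
git -C "$tmp/local" remote add origin "$tmp/upstream"
git -C "$tmp/local" remote add mirror "$tmp/upstream"

# Fetch each remote with auto-gc suppressed (gc.auto=0 disables it for
# this one invocation). GNU parallel would run these concurrently; they
# run sequentially here to keep the sketch deterministic.
for r in $(git -C "$tmp/local" remote); do
    git -C "$tmp/local" -c gc.auto=0 fetch -q "$r"
done

# One deterministic gc after all of the fetches have completed.
git -C "$tmp/local" gc --auto --quiet
```

This captures the efficiency argument above: all of the newly fetched
objects are considered by a single gc, instead of one fetch triggering
gc while later fetches add objects that must wait for the next one.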