Hi Adina,

On Fri, Jan 08, 2021 at 05:39:12PM +0100, Adina Wagner wrote:
> Hi,
>
> colleagues encouraged me to report a "personal" bug I've stumbled
> across. Its "personal", because I wasn't able to create a minimal
> reproducer, or even reproduce it with the same script on other
> infrastructure. We're suspecting a race between packing and fetch. The
> script I am using is at the bottom of the email.

Indeed, similar races between fetching and repacking are known. For
example, this discussion:

  https://lore.kernel.org/git/20200316082348.GA26581@xxxxxxxxxxxxxx/

is about the .idx file going away during a fetch. A similar thing is
happening here, but instead of the .idx file going away, your source
repository is repacking (and thus getting rid of loose object files).

Here, I think the issue is less complicated. Since you're cloning from a
local repository, the 'git clone' command calls 'clone_local()', which
in turn calls 'copy_or_link_directory()'. If the directory being copied
changes while it is being iterated over, the receiving end isn't
guaranteed to pick up the changes. Worse, if the source _removes_ a file
that hasn't yet been copied over, the copy will fail, which is what
you're seeing here.

One workaround would be to clone your repositories locally with
'--shared', which won't copy any objects from the source repository, but
will instead mark the source's object store as an alternate of the newly
created one.

> I wonder if there is a way that Git could guard cases where background
> gc processes may still be running?

Perhaps Git could take some sort of lock when writing to the object
store, but an exclusive flock wouldn't work, since we'd want to allow
multiple readers to acquire the lock simultaneously so long as there is
no writer.

Thanks,
Taylor
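P.S. A minimal sketch of the '--shared' workaround, in case it helps
(all repository names and paths below are made up for illustration):

```shell
# Work in a throwaway directory; 'src' and 'dst' are hypothetical names.
cd "$(mktemp -d)"

# A toy "source" repository with a single (empty) commit.
git init -q src
git -C src -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial'

# Clone with --shared: no objects are copied into the new clone.
# Instead, dst's objects/info/alternates file points at src's object
# store, so there is no per-file copy step for a concurrent repack of
# src to race against.
git clone -q --shared src dst

# Shows the absolute path to src's object directory.
cat dst/.git/objects/info/alternates
```

One caveat (also noted in the git-clone documentation): a '--shared'
clone borrows objects from the source, so if the source later prunes or
gc's away objects the clone still needs, the clone can become corrupt.
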