RE: [PATCH] repack: respect gc.pid lock

> -----Original Message-----
> From: Jeff King [mailto:peff@xxxxxxxx]
> Sent: Friday, April 14, 2017 3:34 PM
> To: David Turner <David.Turner@xxxxxxxxxxxx>
> Cc: git@xxxxxxxxxxxxxxx; christian.couder@xxxxxxxxx; mfick@xxxxxxxxxxxxxx;
> jacob.keller@xxxxxxxxx
> Subject: Re: [PATCH] repack: respect gc.pid lock
> 
> On Thu, Apr 13, 2017 at 04:27:12PM -0400, David Turner wrote:
> 
> > Git gc locks the repository (using a gc.pid file) so that other gcs
> > don't run concurrently. Make git repack respect this lock.
> >
> > Now repack, by default, will refuse to run at the same time as a gc.
> > This fixes a concurrency issue: a repack which deleted packs would
> > make a concurrent gc sad when its packs were deleted out from under
> > it.  The gc would fail with: "fatal: ./objects/pack/pack-$sha.pack
> > cannot be accessed".  Then it would die, probably leaving a large temp
> > pack hanging around.
> >
> > Git repack learns --no-lock, so that when run under git gc, it doesn't
> > attempt to manage the lock itself.
> 
> This also means that two repack invocations cannot run simultaneously, because
> they want to take the same lock.  But depending on the options, the two don't
> necessarily conflict. For example, two simultaneous incremental "git repack -d"
> invocations should be able to complete.
> 
> Do we know where the error message is coming from? I couldn't find the error
> message you've given above; grepping for "cannot be accessed"
> shows only error messages that would have "packfile" after the "fatal:".
> Is it a copy-paste error?

Yes, it is.  Sorry.

We saw this failure in the logs multiple times, with three different
shas, while a gc was running:
April 12, 2017 06:45 -> ERROR -> 'git -c repack.writeBitmaps=true repack -A -d --pack-kept-objects' in [repo] failed:
fatal: packfile ./objects/pack/pack-[sha].pack cannot be accessed
Possibly another repack was also running at the same time.

My colleague also saw it while running gc manually (again, while
repacks were likely to be running):
$ git gc --aggressive
Counting objects: 13800073, done.
Delta compression using up to 8 threads.
Compressing objects:  99% (11465846/11465971)   
Compressing objects: 100% (11465971/11465971), done.
fatal: packfile [repo]/objects/pack/pack-[sha].pack cannot be accessed

(yes, I know that --aggressive is usually not desirable)
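
For reference, the kind of race I've been trying to provoke on my laptop
looks roughly like this (just a sketch -- the repo path is made up, and
this is not the actual server job setup):

$ cd /path/to/test-repo        # hypothetical local test repo
$ git gc --aggressive &        # long-running gc in the background
$ git -c repack.writeBitmaps=true repack -A -d --pack-kept-objects
# The repack deletes/renames packs while the gc is still running, which is
# where I would expect the "cannot be accessed" die() to fire -- but so
# far it never does for me.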

> If that's the case, then it's the one in use_pack(). Do we know what
> program/operation is causing the error? Having a simultaneous gc delete a
> packfile is _supposed_ to work, through a combination of:
> 
>   1. Most sha1-access operations can re-scan the pack directory if they
>      find the packfile went away.
> 
>   2. The pack-objects run by a simultaneous repack is somewhat special
>      in that once it finds and commits to a copy of an object in a pack,
>      we need to use exactly that pack, because we record its offset,
>      delta representation, etc. Usually this works because we open and
>      mmap the packfile before making that commitment, and open packfiles
>      are only closed if you run out of file descriptors (which should
>      only happen when you have a huge number of packs).

We have a reasonable rlimit (64k soft limit), so that failure mode is pretty
unlikely.  I think we had 20 or so packs at the time -- not tens of thousands.
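
(For completeness, this is roughly how I checked; the numbers below are
illustrative rather than the exact server output:)

$ ulimit -n                          # soft limit on open file descriptors
65536
$ ls objects/pack/*.pack | wc -l     # pack count (bare repo; .git/objects/pack otherwise)
21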

> So I'm worried that this repack lock is going to regress some other cases that run
> fine together. But I'm also worried that it's a band-aid over a more subtle
> problem. If pack-objects is not able to run alongside a gc, then you're also going
> to have problems serving fetches, and obviously you wouldn't want to take a
> lock there.

I see your point.  I don't know if it's pack-objects that's seeing this, although maybe 
that's the only reasonable codepath.

I did some tracing through the code, and couldn't figure out how to trigger that
error message.  It appears in two places in the code, but only when the pack is not
initialized, and the packs always seem to be set up by that point in my test runs.
It's worth noting that I'm not testing on the GitLab server; I'm testing on my laptop
with a completely different repo.  I've tried various ways to repro this -- or even to
get to a point where those errors would have been thrown given a missing pack --
and I have not been able to.
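
(One of the attempts, very roughly -- a sketch with placeholder names,
trying to hit the missing-pack case directly:)

$ git repack -a &                    # something that will read the packs
$ rm objects/pack/pack-[sha].pack    # yank a pack away, keeping its .idx
# In my runs the reader either re-scans the pack directory or already has
# the pack open/mmap'd, so it finishes without ever reaching the
# "cannot be accessed" error.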

Do you have any idea why this would be happening other than the rlimit thing?




