Re: [PATCH] refs: sync loose refs to disk before committing them

On Fri, Nov 05, 2021 at 05:34:44AM -0400, Jeff King wrote:
> On Fri, Nov 05, 2021 at 10:12:25AM +0100, Johannes Schindelin wrote:
> 
> > > So this will definitely hurt in edge / pathological cases.
> > 
> > Ouch.
> > 
> > I wonder whether this could be handled similarly to the
> > `core.fsyncObjectFiles=batch` mode that has been proposed in
> > https://lore.kernel.org/git/pull.1076.v8.git.git.1633366667.gitgitgadget@xxxxxxxxx/
> 
> Yeah, that was along the lines I was thinking.
> 
> I hadn't really looked at the details of the batch-fsync there. The big
> trick seems to be doing a pagecache writeback for each file, and then
> stimulating an actual disk write (by fsyncing a tempfile) after the
> batch is done.
> 
> That would be pretty easy to do for the refs (calling git_fsync() while
> closing each file, at the spot where Patrick is currently calling
> fsync(), followed by a single do_batch_fsync() after everything is
> closed but before we rename).
> 
> > Essentially, we would have to find a better layer to do this, where we
> > can synchronize after a potentially quite large number of ref updates have
> > happened. That would definitely be a different layer than the file-based
> > refs backend, of course, and would probably apply in a different way to
> > other refs backends.
> 
> We do have the concept of a ref_transaction, so that would be the
> natural place for it. Not every caller uses it, though, because it
> implies atomicity of the transaction (so some may do a sequence of N
> independent transactions, because they don't want failure of one to
> impact others). I think that could be changed, if the ref_transaction
> learned about non-atomicity, but it may take some surgery.
> 
> I expect that reftables would similarly benefit; it is probably much
> more efficient to write a table slice with N entries than it is to write
> N slices, even before accounting for fsync(). And once we do that, the
> fsync() part becomes trivial.
> 
> -Peff
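
(For reference, my understanding of the trick Peff describes is roughly
the following. This is an untested sketch: it assumes Linux's
sync_file_range(2), the batch series wraps the platform details in
git_fsync(), and both helper names below are made up.)

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /*
     * Per file: kick off pagecache writeback, but do not wait for the
     * data to reach the disk.
     */
    static void writeout_only(int fd)
    {
        sync_file_range(fd, 0, 0, SYNC_FILE_RANGE_WRITE);
    }

    /*
     * Once per batch: fsync a throwaway file in the same filesystem.
     * On many setups this forces a full disk-cache flush, which also
     * persists the writebacks kicked off above.
     */
    static void batch_fsync(const char *git_dir)
    {
        char path[PATH_MAX];
        int fd;

        snprintf(path, sizeof(path), "%s/sync-XXXXXX", git_dir);
        fd = mkstemp(path);
        if (fd < 0)
            return;
        fsync(fd);
        close(fd);
        unlink(path);
    }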

So I've finally found the time to have another look at massaging this
into the ref_transaction mechanism. If we do want to batch the fsync(3P)
calls, then we essentially have two alternatives:

    1. We first lock all loose refs by creating the respective lock
       files and writing the updated ref into each of them. We keep
       the file descriptors open so that we can then flush them all
       in one go.

    2. Same as before, we lock all refs and write the updated pointers
       into the lockfiles, but this time we close each lockfile after
       having written to it. Later, we reopen them all to fsync(3P) them
       to disk.
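
In code, the first alternative would look roughly like this (untested
sketch with made-up names; the real implementation would of course go
through our lockfile API):

    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    struct queued_ref {
        const char *lock_path; /* e.g. "refs/heads/main.lock" */
        const char *value;     /* object ID plus trailing newline */
    };

    /*
     * Write every lockfile but keep its fd open so that the flushes
     * can be batched at the end. Note the one open fd per queued ref.
     */
    static int commit_refs(struct queued_ref *refs, size_t nr)
    {
        int *fds = malloc(nr * sizeof(*fds));
        size_t i;

        if (!fds)
            return -1;
        for (i = 0; i < nr; i++) {
            fds[i] = open(refs[i].lock_path,
                          O_WRONLY | O_CREAT | O_EXCL, 0666);
            if (fds[i] < 0)
                goto fail; /* EMFILE once we run out of fds */
            if (write(fds[i], refs[i].value,
                      strlen(refs[i].value)) < 0)
                goto fail;
        }
        for (i = 0; i < nr; i++) {
            fsync(fds[i]); /* flush them all in one go */
            close(fds[i]);
        }
        /* ... rename each *.lock file into place ... */
        free(fds);
        return 0;
    fail:
        /* ... close open fds and delete the lockfiles ... */
        free(fds);
        return -1;
    }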

I'm afraid neither alternative is any good: the first risks running out
of file descriptors if lots of refs are queued up. And the second is
going to be slow, especially so on Windows, where, if I'm not mistaken,
opening files is comparatively expensive.

With neither alternative being feasible, we'll likely have to come up
with a more complex scheme if we want to batch-sync files. One idea
would be to fsync(3P) lockfiles in windows of $n refs, but that adds
complexity in a place where I'd really like to keep things as simple as
possible. It also raises the question of what $n would have to be.
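
A sketch of that windowed scheme, reusing the queued_ref struct from
above (SYNC_EVERY_N stands in for whatever $n would be):

    #define SYNC_EVERY_N 128 /* the $n in question; best value unclear */

    static int commit_refs_windowed(struct queued_ref *refs, size_t nr)
    {
        int fds[SYNC_EVERY_N];
        size_t i, j, in_window = 0;

        for (i = 0; i < nr; i++) {
            fds[in_window] = open(refs[i].lock_path,
                                  O_WRONLY | O_CREAT | O_EXCL, 0666);
            if (fds[in_window] < 0)
                return -1; /* cleanup elided */
            if (write(fds[in_window], refs[i].value,
                      strlen(refs[i].value)) < 0)
                return -1; /* cleanup elided */
            in_window++;
            /*
             * Flush and close the current window so that at most
             * SYNC_EVERY_N fds are ever open at once.
             */
            if (in_window == SYNC_EVERY_N || i + 1 == nr) {
                for (j = 0; j < in_window; j++) {
                    fsync(fds[j]);
                    close(fds[j]);
                }
                in_window = 0;
            }
        }
        /* ... rename each *.lock file into place ... */
        return 0;
    }

That caps the number of open file descriptors, but every flush point
still pays the full fsync() latency, so it only amortizes the cost
rather than eliminating it.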

Does anybody else have better ideas than I do?

Patrick
