Re: [PATCH] refs: sync loose refs to disk before committing them

Hi Peff & Patrick,

On Fri, 5 Nov 2021, Jeff King wrote:

> On Fri, Nov 05, 2021 at 03:07:18AM -0400, Jeff King wrote:
>
> >   2. It's not clear what the performance implications will be,
> >      especially on a busy server doing a lot of ref updates, or on a
> >      filesystem where fsync() ends up syncing everything, not just the
> >      one file (my impression is ext3 is such a system, but not ext4).
> >      An alternative would be to journal data and metadata writes in
> >      order, without worrying about making each write durable on disk.
> >
> >      I suspect for small updates (say, a push of one or two refs), this
> >      will have little impact. We'd generally fsync the incoming packfile
> >      and its idx anyway, so we're adding maybe one or two fsyncs on top of
> >      that. But if you're pushing 100 refs, that will be 100 sequential
> >      fsyncs, which may add up to quite a bit of latency. It would be
> >      nice if we could batch these somehow (e.g., by opening up all of
> >      the lockfiles, writing and fsyncing them, and then renaming one by
> >      one).
>
> So here's a quick experiment that shows a worst case: a small push that
> updates a bunch of refs. After building Git with and without your patch,
> I set up a small repo like:
>
>   git init
>   git commit --allow-empty -m foo
>   for i in $(seq 100); do
>     git update-ref refs/heads/$i HEAD
>   done
>
> To give a clean slate between runs, I stuck this in a script called
> "setup":
>
>   #!/bin/sh
>   rm -rf dst.git
>   git init --bare dst.git
>   sync
>
> And then ran:
>
>   $ hyperfine -L v orig,fsync -p ./setup '/tmp/{v}/bin/git push dst.git refs/heads/*'
>   Benchmark 1: /tmp/orig/bin/git push dst.git refs/heads/*
>     Time (mean ± σ):       9.9 ms ±   0.2 ms    [User: 6.3 ms, System: 4.7 ms]
>     Range (min … max):     9.5 ms …  10.5 ms    111 runs
>
>   Benchmark 2: /tmp/fsync/bin/git push dst.git refs/heads/*
>     Time (mean ± σ):     401.0 ms ±   7.7 ms    [User: 9.4 ms, System: 15.2 ms]
>     Range (min … max):   389.4 ms … 412.4 ms    10 runs
>
>   Summary
>     '/tmp/orig/bin/git push dst.git refs/heads/*' ran
>      40.68 ± 1.16 times faster than '/tmp/fsync/bin/git push dst.git refs/heads/*'
>
> So it really does produce a noticeable impact (this is on a system with
> a decent SSD and no other disk load, so I'd expect it to be about
> average for modern hardware).
>
> Now this test isn't entirely fair. 100 refs is a larger-than-average
> number to be pushing, and the effect is outsized because there's
> virtually no time spent dealing with the objects themselves, nor is
> there any network latency. But 400ms feels like a non-trivial amount of
> time just in absolute numbers.
>
> The numbers scale pretty linearly, as you'd expect. Pushing 10 refs
> takes ~40ms, 100 takes ~400ms, and 1000 takes ~4s. The non-fsyncing
> version gets slower, too (there's more work to do), but much more slowly
> (6ms, 10ms, and 50ms respectively).
>
> So this will definitely hurt at edge / pathological cases.

Ouch.

I wonder whether this could be handled similarly to the
`core.fsyncObjectFiles=batch` mode that has been proposed in
https://lore.kernel.org/git/pull.1076.v8.git.git.1633366667.gitgitgadget@xxxxxxxxx/

Essentially, we would have to find a better layer at which to do this, so
that we can synchronize once after a potentially large number of ref
updates. That would definitely be a different layer than the file-based
refs backend, of course, and it would probably apply differently to other
refs backends.
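The batching Peff describes above can be sketched in plain shell as a toy
model (this is not Git's actual C code, just an illustration of "sync
each file as you go" versus "write everything, then sync once"; it
assumes coreutils sync(1), which accepts a file argument since 8.24):

  #!/bin/sh
  # Toy model: N small files, synced one at a time vs. batched.
  dir=$(mktemp -d)
  n=100

  # Unbatched: one sync per file, like fsyncing every ref lockfile
  # individually -- this is the N-sequential-fsyncs worst case.
  for i in $(seq "$n"); do
      printf '%s\n' "$i" > "$dir/one-$i"
      sync "$dir/one-$i"
  done

  # Batched: write all files first, then a single sync for the lot,
  # analogous to writing and fsyncing all lockfiles before renaming
  # them into place one by one.
  for i in $(seq "$n"); do
      printf '%s\n' "$i" > "$dir/batch-$i"
  done
  sync

  count=$(ls "$dir" | wc -l | tr -d ' ')
  echo "$count files written"
  rm -rf "$dir"

Both variants end up with the same files on disk; the difference is only
how many round trips to stable storage the kernel is asked for, which is
exactly where the 40x slowdown in the benchmark above comes from.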

Ciao,
Dscho
