Re: [PATCH] midx: use buffered I/O to talk to pack-objects

On 03.08.20 14:39, Derrick Stolee wrote:
> On 8/2/2020 10:38 AM, René Scharfe wrote:
>> Like f0bca72dc77 (send-pack: use buffered I/O to talk to pack-objects,
>> 2016-06-08), significantly reduce the number of system calls and
>> simplify the code for sending object IDs to pack-objects by using
>> stdio's buffering and handling errors after the loop.
>
> Good find. Thanks for doing this important cleanup.
>
> Outside of Chris's other feedback, this looks like an obviously
> correct transformation.

I spent a surprising amount of time trying to find a solution that is
easy to use and still allows precise error handling.  But now I'm
having second thoughts.  The main selling point of buffering is better
performance, achieved by reducing the number of system calls.  How
much better, actually?
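
For reference, here is a minimal sketch of the pattern under
discussion; send_oids() and the plain string array are invented for
illustration, this is not the actual midx.c change.  The idea, as in
f0bca72dc77: wrap the pipe's write end in a stdio stream, let
fprintf() batch the output, and check for errors once after the loop:

  /*
   * Minimal sketch, not the actual midx.c code: send_oids() and the
   * string array are made up.  fprintf() batches the output in
   * userspace; errors are checked once after the loop.
   */
  #include <stdio.h>
  #include <stdlib.h>

  static int send_oids(int fd, const char **hex, size_t nr)
  {
          FILE *out = fdopen(fd, "w");
          size_t i;

          if (!out)
                  return -1;
          for (i = 0; i < nr; i++)
                  fprintf(out, "%s\n", hex[i]); /* buffered, no write(2) per line */
          /*
           * A single check after the loop: ferror() reports any
           * earlier write error, fflush() pushes out whatever is
           * still buffered.
           */
          if (ferror(out) || fflush(out)) {
                  fclose(out);
                  return -1;
          }
          return fclose(out) ? -1 : 0;
  }

  int main(void)
  {
          const char *hex[] = {
                  "0123456789abcdef0123456789abcdef01234567",
                  "fedcba9876543210fedcba9876543210fedcba98",
          };

          /* For the demo, write to stdout (fd 1) instead of a pipe. */
          return send_oids(1, hex, 2) ? EXIT_FAILURE : EXIT_SUCCESS;
  }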

So I get this in my Git repo clone without this patch:

  $ strace --summary-only --trace=write git multi-pack-index repack --no-progress
  % time     seconds  usecs/call     calls    errors syscall
  ------ ----------- ----------- --------- --------- ----------------
  100.00    2.237478           2    921650           write
  ------ ----------- ----------- --------- --------- ----------------
  100.00    2.237478                921650           total

And here's the same with the patch:

  % time     seconds  usecs/call     calls    errors syscall
  ------ ----------- ----------- --------- --------- ----------------
  100.00    0.013293           2      4613           write
  ------ ----------- ----------- --------- --------- ----------------
  100.00    0.013293                  4613           total

Awesome, right?  write(2) calls are down by a factor of almost 200 and
the time spent on them is reduced significantly, as advertised.  Let's
ask hyperfine for a second opinion though.  Without this patch:

  Benchmark #1: git multi-pack-index repack --no-progress
    Time (mean ± σ):      1.652 s ±  0.206 s    [User: 1.383 s, System: 0.317 s]
    Range (min … max):    1.426 s …  1.890 s    10 runs

And the same with this patch applied:

    Time (mean ± σ):      1.635 s ±  0.199 s    [User: 1.363 s, System: 0.204 s]
    Range (min … max):    1.430 s …  1.871 s    10 runs

OK, so system time is down by roughly a third, but the total duration
is basically unchanged.  It seems strace added quite a bit of overhead
to our measurement above.
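
In case anyone wants to reproduce this: a plain invocation along the
lines of

  $ hyperfine 'git multi-pack-index repack --no-progress'

(hyperfine runs each command at least ten times by default) produces
output in the format shown above.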

Anyway, now I wonder if adding our own buffer on top of the
OS-internal pipe buffer is actually worth it.  The numbers above are
from Debian testing, by the way.  Perhaps buffering still pays off on
operating systems with slower pipes, though.
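
If anyone wants to see what the kernel-side buffer already provides,
Linux at least lets you query a pipe's capacity.  A throwaway sketch,
assuming a Linux system (F_GETPIPE_SZ is Linux-specific; 64 KiB is the
usual default):

  /* Print the kernel's pipe buffer capacity (Linux only). */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          int fds[2];

          if (pipe(fds))
                  return 1;
          printf("pipe buffer: %d bytes\n", fcntl(fds[1], F_GETPIPE_SZ));
          close(fds[0]);
          close(fds[1]);
          return 0;
  }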

René



