Re: [PATCH] read-cache: make the index write buffer size 128K

On Sun, Feb 21, 2021 at 4:51 AM Junio C Hamano <gitster@xxxxxxxxx> wrote:
>
> Neeraj Singh <nksingh85@xxxxxxxxx> writes:
>
> >> >>   -#define WRITE_BUFFER_SIZE 8192
> >> >> +#define WRITE_BUFFER_SIZE (128 * 1024)
> >> >>   static unsigned char write_buffer[WRITE_BUFFER_SIZE];
> >> >>   static unsigned long write_buffer_len;
> >> >
> >> > [...]
> >> >
> >> > Very nice.
> >>
> >> I wonder if we gain more by going say 4M buffer size or even larger?
> >>
> >> Is this something we can make the system auto-tune itself?  This is
> >> not about reading but writing, so we already have enough information
> >> to estimate how much we would need to write out.
> >>
> >> Thanks.
> >>
> >
> > Hi Junio,
> > At some point the cost of the memcpy into the filesystem cache begins to
> > dominate the cost of the system call, so increasing the buffer size
> > has diminishing returns.
>
> Yes, I know that kind of "general principle".
>
> If I recall correctly, we used to pass too large a buffer to a
> single write(2) system call (I do not know if it was for the
> index---I suspect it was for some other data), and found out that it
> made response to ^C take too long, and tuned the buffer size down.
>
> I was asking where the sweet spot for this codepath would be, and if
> we can take a measurement to make a better decision than "8k feels
> too small and 128k turns out to be better than 8k".  It does not
> tell us if 128k would always do better than 64k or 256k, for
> example.
>
> I suspect that the sweet spot would be dependent on many parameters
> (not just the operating system, but also relative speed among
> memory, "disk", and cpu, and also the size of the index) and if we
> can devise a way to auto-tune it so that we do not have to worry
> about it.
>
> Thanks.

I think the main concern on a reasonably-configured machine is the speed of
memcpy and the cost of the code to get to that memcpy (syscall, file system
free space allocator, page allocator, mapping from file offset to cache page).
Disk shouldn't matter, since we write the file with OS buffering, and buffer
flushing will happen asynchronously some time after the git command completes.
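
For concreteness, here's a minimal sketch of the buffered-write pattern I'm
describing (illustrative only, not the actual read-cache.c code; the helper
names are made up): data is memcpy'd into a fixed buffer, and write(2) is
only issued once the buffer fills, so the syscall overhead is amortized over
many entries.

	#include <string.h>
	#include <unistd.h>

	#define WRITE_BUFFER_SIZE (128 * 1024)
	static unsigned char write_buffer[WRITE_BUFFER_SIZE];
	static unsigned long write_buffer_len;

	/* Issue one write(2) for whatever has accumulated so far. */
	static int flush_buffer(int fd)
	{
		ssize_t n = write(fd, write_buffer, write_buffer_len);
		if (n < 0 || (unsigned long)n != write_buffer_len)
			return -1;
		write_buffer_len = 0;
		return 0;
	}

	/* Copy into the buffer; only call write(2) when it is full. */
	static int buffered_write(int fd, const void *data, size_t len)
	{
		const unsigned char *p = data;
		while (len) {
			size_t room = WRITE_BUFFER_SIZE - write_buffer_len;
			size_t chunk = len < room ? len : room;
			memcpy(write_buffer + write_buffer_len, p, chunk);
			write_buffer_len += chunk;
			p += chunk;
			len -= chunk;
			if (write_buffer_len == WRITE_BUFFER_SIZE &&
			    flush_buffer(fd) < 0)
				return -1;
		}
		return 0;
	}

The larger the buffer, the fewer trips we take through that flush path, which
is where the savings come from.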

If we think about doing the fastest possible memcpy, I think we want to aim
for maximizing use of the CPU cache.  A write buffer that's too big would
result in most of the data being flushed to DRAM between when git writes it
and the OS reads it.  L1 caches are typically ~32K and L2 caches are on the
order of 256K.  We probably don't want to exceed the size of the L2 cache,
and we should actually leave some room for OS code and data, so 128K is a
good number from that perspective.
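
If we ever wanted to auto-tune along the lines you suggested, one option
(Linux/glibc-specific and purely a sketch, not something I'm proposing for
this patch) would be to size the buffer off the L2 cache at runtime:

	#include <unistd.h>

	static size_t pick_write_buffer_size(void)
	{
		long l2 = -1;
	#ifdef _SC_LEVEL2_CACHE_SIZE
		l2 = sysconf(_SC_LEVEL2_CACHE_SIZE);	/* glibc extension */
	#endif
		if (l2 <= 0)
			return 128 * 1024;	/* fallback when the size is unknown */
		return (size_t)l2 / 2;		/* leave half the L2 for OS code/data */
	}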

I collected data from an experiment with different buffer sizes on Windows
on my 3.6 GHz Xeon W-2133 machine:
https://docs.google.com/spreadsheets/d/1Bu6pjp53NPDK6AKQI_cry-hgxEqlicv27dptoXZYnwc/edit?usp=sharing

The timing is pretty much in the noise after we pass 32K.  So I think 8K is
too small, but given the flatness of the curve we can feel good about any
value above 32K from a performance perspective.  I still think 128K is a
decent number that likely won't need to be changed for some time.
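
For anyone who wants to reproduce the shape of that curve, here's a rough
POSIX sketch of the kind of micro-benchmark involved (not the harness I
actually used on Windows, which needs a different timer; error checking
omitted for brevity):

	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <time.h>
	#include <unistd.h>

	/* Write `total` bytes through a buffer of `bufsize` and time it. */
	static double time_writes(const char *path, size_t bufsize, size_t total)
	{
		static char payload[4096];
		char *buf = malloc(bufsize);
		size_t buffered = 0, written = 0;
		int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
		struct timespec t0, t1;

		memset(payload, 'x', sizeof(payload));
		clock_gettime(CLOCK_MONOTONIC, &t0);
		while (written < total) {
			if (buffered + sizeof(payload) > bufsize) {
				write(fd, buf, buffered);
				buffered = 0;
			}
			memcpy(buf + buffered, payload, sizeof(payload));
			buffered += sizeof(payload);
			written += sizeof(payload);
		}
		if (buffered)
			write(fd, buf, buffered);
		close(fd);
		clock_gettime(CLOCK_MONOTONIC, &t1);
		free(buf);
		return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	}

	int main(void)
	{
		static const size_t sizes[] = { 8 << 10, 32 << 10, 128 << 10, 1 << 20 };

		for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
			printf("%zuK: %.4fs\n", sizes[i] >> 10,
			       time_writes("bench.tmp", sizes[i], 10u << 20));
		return 0;
	}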

Thanks,
-Neeraj


