Re: [PATCH] read-cache: make the index write buffer size 128K

Neeraj Singh <nksingh85@xxxxxxxxx> writes:

> If we think about doing the fastest possible memcpy, I think we want to
> aim for maximizing the use of the CPU cache.  A write buffer that's too
> big would result in most of the data being flushed to DRAM between when
> git writes it and the OS reads it.  L1 caches are typically ~32K and L2
> caches are on the order of 256K.  We probably don't want to exceed the
> size of the L2 cache, and we should actually leave some room for OS
> code and data, so 128K is a good number from that perspective.
>
> I collected data from an experiment with different buffer sizes on Windows on my
> 3.6Ghz Xeon W-2133 machine:
> https://docs.google.com/spreadsheets/d/1Bu6pjp53NPDK6AKQI_cry-hgxEqlicv27dptoXZYnwc/edit?usp=sharing
>
> The timing is pretty much in the noise after we pass 32K.  So I think
> 8K is too small, but given the flatness of the curve we can feel good
> about any value above 32K from a performance perspective.  I still
> think 128K is a decent number that won't likely need to be changed
> for some time.

Thanks for the supporting graph.

I can very well imagine that it would have been tempting to instead
say "after we pass 128k" while explaining exactly the same graph,
and doing so would have given a more coherent argument to support
the choice of 128k that the patch made.  You knew that "then perhaps
we can reclaim 96k by sizing the buffer down a bit?" would become a
reasonable response, but you still chose to be honest, which I kinda
like ;-)





