Re: [PATCH] Teach "git add" and friends to be paranoid

On Mon, 22 Feb 2010, Zygo Blaxell wrote:

> On Mon, Feb 22, 2010 at 10:40:59AM -0500, Nicolas Pitre wrote:
> > On Sun, 21 Feb 2010, Junio C Hamano wrote:
> > > Dmitry Potapov <dpotapov@xxxxxxxxx> writes:
> > > > But overall the outcome is clear -- read() is always a winner.
> > > 
> > > "... a winner, below 128kB; above that the difference is within noise and
> > > measurement error"?
> > 
> > read() is not always a winner.  A read() call will always have the data 
> > duplicated in memory.  Especially with large files, it is more efficient 
> > on the system as a whole to mmap() a 50 MB file rather than allocating 
> > an extra 50 MB of anonymous memory that cannot be paged out (except to 
> > the swap file, which would be yet another data duplication).  With 
> > mmap(), when there is memory pressure, the read-only mapped memory is 
> > simply dropped with no extra IO.
> 
> That holds if you're comparing read() and mmap() of the entire file as a
> single chunk, instead of in fixed-size chunks at the sweet spot between
> syscall overhead and CPU cache size.

Obviously.  But we currently don't have the infrastructure to do chunked 
reads of the input data.  I think we should do that eventually, by 
applying the pack windowing code to input files as well.  That would 
make memory usage constant even for huge files, but it is much more 
complicated to support, especially for data fed through stdin.
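
To make this concrete, here is a minimal sketch of such a chunked read 
loop.  It uses OpenSSL's SHA1_* API as a stand-in for git's internal 
hashing, and the 8 KiB chunk size is only illustrative, not a measured 
sweet spot:

#include <openssl/sha.h>
#include <unistd.h>

#define CHUNK_SIZE 8192

/* Hash a file descriptor in fixed-size chunks; works for stdin too. */
int hash_fd_chunked(int fd, unsigned char sha1[SHA_DIGEST_LENGTH])
{
	SHA_CTX ctx;
	unsigned char buf[CHUNK_SIZE];
	ssize_t n;

	SHA1_Init(&ctx);
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		SHA1_Update(&ctx, buf, n);	/* one cache-warm buffer, reused */
	if (n < 0)
		return -1;	/* read error */
	SHA1_Final(sha1, &ctx);
	return 0;
}

Because it never maps or allocates the whole input, this shape also 
works for data arriving through a pipe.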

> If you're read()ing a chunk at a time into a fixed size buffer, and
> doing sha1 and deflate in chunks, the data should be copied once into CPU
> cache, processed with both algorithms, and replaced with new data from
> the next chunk.  The data will be copied from the page cache instead
> of directly mapped, which is a small overhead, but setting up the page
> map in mmap() is also a small overhead, so you have to use benchmarks to
> know which of the overheads is smaller.  It might be that there's no
> one answer that applies to all CPU configurations.

Normally mmap() has more overhead than read().  However, mmap() has 
much nicer properties: it simplifies the code a lot, and it lets the OS 
manage memory pressure much more gracefully.
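
For comparison, a hedged sketch of the mmap() side (error handling 
trimmed, OpenSSL's one-shot SHA1() used for illustration; the caller is 
assumed to have obtained the file size from fstat()):

#include <openssl/sha.h>
#include <sys/mman.h>

int hash_fd_mmap(int fd, size_t size, unsigned char sha1[SHA_DIGEST_LENGTH])
{
	void *map = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (map == MAP_FAILED)
		return -1;
	SHA1(map, size, sha1);	/* one flat buffer, no read loop */
	munmap(map, size);
	return 0;
}

The pages backing the mapping stay clean, so the kernel can drop them 
under memory pressure without any writeback.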

> If you're doing mmap() and sha1 and deflate of a 50MB file in two
> separate passes that are the same size as the file, you load 50MB of
> data into CPU cache at least twice, you get two sets of associated
> things like TLB misses, and if the file is very large, you page it from
> disk twice.  So it might make sense to process in chunks regardless
> of read() vs mmap() fetching the data.

We do have to make two separate passes anyway.  The first pass hashes 
the data only; if that hash already exists in the object store, we call 
it done and skip the deflate step, which is still the dominant cost.  
And that happens quite often.
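
In rough pseudo-C the flow looks like this, where hash_object(), 
has_object() and write_deflated_object() are hypothetical stand-ins 
rather than the actual sha1_file.c interfaces:

int add_blob(const void *buf, size_t len)
{
	unsigned char sha1[20];

	hash_object(buf, len, "blob", sha1);	/* pass 1: hash only */
	if (has_object(sha1))
		return 0;	/* already stored: skip the deflate pass */
	return write_deflated_object(buf, len, sha1);	/* pass 2 */
}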

However, with a really large file it becomes advantageous to simply do 
the hash and deflate in parallel, one chunk at a time, and discard the 
newly created object if it turns out to already exist.  That's the 
whole idea behind the newly introduced core.bigFileThreshold config 
variable (but the code to honor it in sha1_file.c doesn't exist yet).
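
A sketch of that single-pass scheme, using OpenSSL and zlib for 
illustration (git actually hashes a "blob <len>\0" header before the 
data, and the temp-file/object-store plumbing is omitted here):

#include <openssl/sha.h>
#include <zlib.h>
#include <unistd.h>

/* Hash and deflate fd one chunk at a time; write() errors elided. */
int hash_and_deflate_fd(int fd, int out_fd,
			unsigned char sha1[SHA_DIGEST_LENGTH])
{
	SHA_CTX ctx;
	z_stream zs = { 0 };	/* zalloc/zfree/opaque default to Z_NULL */
	unsigned char in[8192], out[8192];
	ssize_t n;

	SHA1_Init(&ctx);
	deflateInit(&zs, Z_DEFAULT_COMPRESSION);

	while ((n = read(fd, in, sizeof(in))) > 0) {
		SHA1_Update(&ctx, in, n);	/* hash the chunk... */
		zs.next_in = in;
		zs.avail_in = n;
		do {	/* ...and deflate the same chunk while it is hot */
			zs.next_out = out;
			zs.avail_out = sizeof(out);
			deflate(&zs, Z_NO_FLUSH);
			write(out_fd, out, sizeof(out) - zs.avail_out);
		} while (zs.avail_in > 0);
	}

	do {	/* flush whatever deflate still buffers */
		zs.next_out = out;
		zs.avail_out = sizeof(out);
		deflate(&zs, Z_FINISH);
		write(out_fd, out, sizeof(out) - zs.avail_out);
	} while (zs.avail_out == 0);

	deflateEnd(&zs);
	SHA1_Final(sha1, &ctx);
	return n < 0 ? -1 : 0;
}

If the resulting hash turns out to name an object we already have, the 
freshly written file is simply discarded.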

> If you're malloc()ing 50MB, you're wasting memory and CPU bandwidth
> making up pages full of zeros before you've even processed the first byte.
> I don't see how that could ever be faster for large file cases.

It can't.  This is why read() is not much better than mmap() in those 
cases.


Nicolas
