Re: large files and low memory

On Mon, Oct 4, 2010 at 2:20 AM, Enrico Weigelt <weigelt@xxxxxxxx> wrote:
>
> When adding files that are larger than available physical memory,
> git performs very slowly. Perhaps it has to do with git mmap()ing
> the whole file. Is there any way to do it without mmap (hoping that
> might perform a bit better)?

The mmap() isn't the problem.  It's the allocation of a buffer that is
larger than the file, needed to hold the result of deflating the whole
file before it gets written to disk.  When the file is bigger than
physical memory, the kernel has to page in parts of the file while also
swapping parts of that deflate buffer in and out.
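To illustrate, here is a minimal sketch (calling zlib directly; this is
not Git's actual code, and deflate_whole_file is a hypothetical helper)
of the pattern described above: the whole file is mmap()ed, and a
destination buffer sized for the worst-case deflate output is allocated
up front, so the peak working set is roughly twice the file size.

#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <zlib.h>

/* Illustrative only: compress a whole file in one shot. */
static int deflate_whole_file(const char *path)
{
	struct stat st;
	int fd = open(path, O_RDONLY);

	if (fd < 0 || fstat(fd, &st) < 0)
		return -1;

	/* Map the entire file; pages are faulted in on demand. */
	void *src = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (src == MAP_FAILED) {
		close(fd);
		return -1;
	}

	/*
	 * The worst-case compressed size is slightly larger than the
	 * input, so this allocation alone exceeds the file size.
	 * Together with the mapping, that is ~2x the file in memory.
	 */
	uLongf dst_len = compressBound(st.st_size);
	unsigned char *dst = malloc(dst_len);
	if (!dst) {
		munmap(src, st.st_size);
		close(fd);
		return -1;
	}

	int ret = compress(dst, &dst_len, src, st.st_size);
	/* ... write dst[0..dst_len) to the object store ... */

	free(dst);
	munmap(src, st.st_size);
	close(fd);
	return ret == Z_OK ? 0 : -1;
}

Streaming the compression in fixed-size chunks with zlib's deflate()
interface would keep the working set bounded regardless of file size,
which is essentially what fixing this would take.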

This is a known area in Git where big files aren't handled well.

-- 
Shawn.

