Re: large files and low memory

Shawn Pearce wrote:

> The mmap() isn't the problem.  It's the allocation of a buffer that is
> larger than the file in order to hold the result of deflating the file
> before it gets written to disk.

Wasn't this already fixed, at least in some cases?

commit 9892bebafe0865d8f4f3f18d60a1cfa2d1447cd7 (tags/v1.7.0.2~11^2~1)
Author: Nicolas Pitre <nico@xxxxxxxxxxx>
Date:   Sat Feb 20 23:27:31 2010 -0500

    sha1_file: don't malloc the whole compressed result when writing out objects

    There is no real advantage to mallocing the whole output buffer and
    deflating the data in a single pass when writing loose objects. Doing
    so is only about 1% faster while using more memory, especially with
    large files, where the memory cost is far higher. It is better to
    deflate and write the data out in small chunks, reusing the same
    memory instead.

    For example, using 'git add' on a few large files averaging 40 MB ...

    Before:
    21.45user 1.10system 0:22.57elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+828040outputs (0major+142640minor)pagefaults 0swaps

    After:
    21.50user 1.25system 0:22.76elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+828040outputs (0major+104408minor)pagefaults 0swaps

    While the runtime stayed roughly the same, the number of minor page
    faults went down significantly.

    Signed-off-by: Nicolas Pitre <nico@xxxxxxxxxxx>
    Signed-off-by: Junio C Hamano <gitster@xxxxxxxxx>
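
For reference, the chunked approach boils down to zlib's streaming
API: call deflate() in a loop with a fixed-size output buffer and
write out each piece as it fills, so memory use stays proportional to
the buffer rather than the file. Below is a minimal sketch modeled on
zlib's zpipe.c example (not git's actual loose-object writer; the
CHUNK size and the deflate_in_chunks() name are illustrative):

#include <stdio.h>
#include <zlib.h>

#define CHUNK 16384  /* fixed buffer size: memory use is O(CHUNK), not O(file) */

/* Compress everything from 'in' to 'out', reusing two small buffers
 * instead of mallocing one buffer large enough for the whole result. */
static int deflate_in_chunks(FILE *in, FILE *out)
{
    unsigned char inbuf[CHUNK], outbuf[CHUNK];
    z_stream strm = {0};   /* zalloc/zfree NULL: use zlib's defaults */
    int flush, ret;

    if (deflateInit(&strm, Z_DEFAULT_COMPRESSION) != Z_OK)
        return -1;
    do {
        strm.avail_in = fread(inbuf, 1, CHUNK, in);
        if (ferror(in)) {
            deflateEnd(&strm);
            return -1;
        }
        flush = feof(in) ? Z_FINISH : Z_NO_FLUSH;
        strm.next_in = inbuf;
        do {
            /* the same outbuf is reused on every iteration */
            strm.avail_out = CHUNK;
            strm.next_out = outbuf;
            ret = deflate(&strm, flush);
            fwrite(outbuf, 1, CHUNK - strm.avail_out, out);
        } while (strm.avail_out == 0);
    } while (flush != Z_FINISH);
    deflateEnd(&strm);
    return ret == Z_STREAM_END ? 0 : -1;
}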