Re: large files and low memory

On Mon, Oct 4, 2010 at 11:58 AM, Jonathan Nieder <jrnieder@xxxxxxxxx> wrote:
> Shawn Pearce wrote:
>
>> The mmap() isn't the problem.  It's the allocation of a buffer that is
>> larger than the file in order to hold the result of deflating the file
>> before it gets written to disk.
>
> Wasn't this already fixed, at least in some cases?
>
> commit 9892bebafe0865d8f4f3f18d60a1cfa2d1447cd7 (tags/v1.7.0.2~11^2~1)
> Author: Nicolas Pitre <nico@xxxxxxxxxxx>
> Date:   Sat Feb 20 23:27:31 2010 -0500
>
>    sha1_file: don't malloc the whole compressed result when writing out objects

This change only removes the deflate copy.  But due to the SHA-1
consistency issue I alluded to earlier, I think we're still making a
full copy of the file in memory before we SHA-1 it or deflate it.  So
Nico's change halved the memory usage to ~1x the size of the file
rather than ~2x, but that remaining whole-file copy is still there.
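
To make the two copies concrete, here is a minimal sketch (not git's
actual write_loose_object(); compute_sha1() is a stand-in for git's
own git_SHA1_Init/Update/Final wrappers) of the pattern we're talking
about: the whole file sits in one buffer so the SHA-1 and the deflated
output are guaranteed to cover the same bytes, while the compressed
result is streamed through a small fixed buffer instead of a malloc
sized for the entire deflated file:

	#include <stdio.h>
	#include <string.h>
	#include <zlib.h>

	/* Placeholder for git's SHA-1 wrappers (git_SHA1_Init etc.). */
	void compute_sha1(const unsigned char *buf, size_t len,
			  unsigned char out[20]);

	int write_object(const unsigned char *buf, size_t len, FILE *out)
	{
		unsigned char sha1[20];
		unsigned char outbuf[4096];   /* fixed-size deflate output */
		z_stream s;
		int ret;

		/* Hash the full in-memory copy: this is the ~1x that remains. */
		compute_sha1(buf, len, sha1);

		memset(&s, 0, sizeof(s));
		if (deflateInit(&s, Z_DEFAULT_COMPRESSION) != Z_OK)
			return -1;
		s.next_in = (unsigned char *)buf;
		s.avail_in = len;

		/* Stream the compressed result out in 4 KB chunks instead of
		 * allocating deflateBound(len) bytes up front. */
		do {
			s.next_out = outbuf;
			s.avail_out = sizeof(outbuf);
			ret = deflate(&s, Z_FINISH);
			if (fwrite(outbuf, 1, sizeof(outbuf) - s.avail_out, out)
			    != sizeof(outbuf) - s.avail_out) {
				deflateEnd(&s);
				return -1;
			}
		} while (ret == Z_OK);

		deflateEnd(&s);
		return ret == Z_STREAM_END ? 0 : -1;
	}

The fixed outbuf is what Nico's commit bought us; the buf argument
holding the entire file is the ~1x that's left.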


-- 
Shawn.

