Re: large files and low memory

On Tue, 5 Oct 2010, Jonathan Nieder wrote:

> Nicolas Pitre wrote:
> 
> > You can't do a one-pass calculation.  The first one is required to 
> > compute the SHA1 of the file being added, and if that corresponds to an 
> > object that we already have then the operation stops right there as 
> > there is actually nothing to do.
> 
> Ah.  Thanks for a reminder.
> 
> > In the case of big files, what we need to do is to stream the file data 
> > in, compute the SHA1 and deflate it, in order to stream it out into a 
> > temporary file, then rename it according to the final SHA1.  This would 
> > allow Git to work with big files, but of course it won't be possible to 
> > know if the object corresponding to the file is already known until all 
> > the work has been done, possibly just to throw it away.
> 
> To make sure I understand correctly: are you suggesting that for big
> files we should skip the first pass?

For big files we need a totally separate code path that processes the 
file data in small chunks at 'git add' time, using a loop of 
read()+SHA1sum()+deflate()+write() into a temporary output file.  Then, 
if the SHA1 matches an existing object, we delete the temporary file; 
otherwise we rename it into place as a valid object.  No CRLF 
conversion, no smudge filters, no diff, no deltas: just plain storage 
of huge objects, based on the value of the core.bigFileThreshold 
config option.
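
For illustration only, here is roughly what that loop could look like.  
This is not Git's code: OpenSSL's SHA1 and zlib stand in for Git's 
internal helpers, the fan-out directory under .git/objects/ is assumed 
to exist already, and error handling and permissions are simplified.

/*
 * Rough sketch only -- not Git's code.  OpenSSL SHA1 + zlib stand in
 * for Git's internal helpers.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/stat.h>
#include <openssl/sha.h>
#include <zlib.h>

#define CHUNK 65536

/* Feed one buffer through deflate(), writing the compressed data to 'out'. */
static int deflate_chunk(z_stream *z, FILE *out, const void *buf,
			 size_t len, int flush)
{
	unsigned char zbuf[CHUNK];
	size_t have;
	int ret;

	z->next_in = (unsigned char *)buf;
	z->avail_in = len;
	do {
		z->next_out = zbuf;
		z->avail_out = sizeof(zbuf);
		ret = deflate(z, flush);
		if (ret == Z_STREAM_ERROR)
			return -1;
		have = sizeof(zbuf) - z->avail_out;
		if (fwrite(zbuf, 1, have, out) != have)
			return -1;
	} while (z->avail_out == 0 || (flush == Z_FINISH && ret != Z_STREAM_END));
	return 0;
}

/* read()+SHA1sum()+deflate()+write() in small chunks: hash and deflate
   'in' (of known size) into 'tmp', leaving the object SHA1 in sha1[]. */
static int stream_blob(FILE *in, off_t size, FILE *tmp, unsigned char sha1[20])
{
	unsigned char buf[CHUNK];
	char hdr[32];
	SHA_CTX ctx;
	z_stream z;
	size_t n;
	int hdrlen;

	/* a loose object starts with a "blob <size>\0" header */
	hdrlen = snprintf(hdr, sizeof(hdr), "blob %llu",
			  (unsigned long long)size) + 1;
	SHA1_Init(&ctx);
	memset(&z, 0, sizeof(z));
	if (deflateInit(&z, Z_DEFAULT_COMPRESSION) != Z_OK)
		return -1;
	SHA1_Update(&ctx, hdr, hdrlen);
	if (deflate_chunk(&z, tmp, hdr, hdrlen, Z_NO_FLUSH))
		return -1;
	while ((n = fread(buf, 1, sizeof(buf), in)) > 0) {
		SHA1_Update(&ctx, buf, n);
		if (deflate_chunk(&z, tmp, buf, n, Z_NO_FLUSH))
			return -1;
	}
	if (deflate_chunk(&z, tmp, NULL, 0, Z_FINISH))	/* flush the stream */
		return -1;
	deflateEnd(&z);
	SHA1_Final(sha1, &ctx);
	return ferror(in) ? -1 : 0;
}

int main(int argc, char **argv)
{
	char tmpname[] = ".git/objects/big_blob_XXXXXX", hex[41], path[64];
	unsigned char sha1[20];
	struct stat st;
	FILE *in, *tmp;
	int i, fd;

	if (argc != 2 || stat(argv[1], &st) || !(in = fopen(argv[1], "rb")))
		return 1;
	if ((fd = mkstemp(tmpname)) < 0 || !(tmp = fdopen(fd, "wb")))
		return 1;
	if (stream_blob(in, st.st_size, tmp, sha1)) {
		unlink(tmpname);
		return 1;
	}
	fclose(in);
	fclose(tmp);
	for (i = 0; i < 20; i++)
		sprintf(hex + 2 * i, "%02x", sha1[i]);
	snprintf(path, sizeof(path), ".git/objects/%.2s/%s", hex, hex + 2);
	if (access(path, F_OK) == 0)
		unlink(tmpname);	/* object already known: throw the work away */
	else
		rename(tmpname, path);	/* real code would create the fan-out dir first */
	puts(hex);
	return 0;
}

Note that the SHA1 covers the "blob <size>\0" header plus the file 
contents, which is why the size has to be known up front -- trivial 
for 'git add' of a regular file via stat().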

Same thing on the checkout path: a simple loop to 
read()+inflate()+write() in small chunks.
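
Again for illustration only (not Git's code), a sketch of that inflate 
loop, including skipping the "blob <size>\0" header of a loose object 
before writing the contents out:

/* Rough sketch only -- stream a deflated loose object back out in
   small chunks, never holding the whole file in memory. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define CHUNK 65536

static int stream_checkout(FILE *in, FILE *out)
{
	unsigned char inbuf[CHUNK], outbuf[CHUNK], *p;
	z_stream z;
	size_t n, have;
	int ret = Z_OK, seen_nul = 0;

	memset(&z, 0, sizeof(z));
	if (inflateInit(&z) != Z_OK)
		return -1;
	while (ret != Z_STREAM_END && (n = fread(inbuf, 1, sizeof(inbuf), in)) > 0) {
		z.next_in = inbuf;
		z.avail_in = n;
		do {
			z.next_out = outbuf;
			z.avail_out = sizeof(outbuf);
			ret = inflate(&z, Z_NO_FLUSH);
			if (ret != Z_OK && ret != Z_STREAM_END) {
				inflateEnd(&z);
				return -1;
			}
			have = sizeof(outbuf) - z.avail_out;
			p = outbuf;
			/* skip the "blob <size>\0" header before the first write */
			while (!seen_nul && have) {
				seen_nul = (*p == '\0');
				p++;
				have--;
			}
			if (fwrite(p, 1, have, out) != have) {
				inflateEnd(&z);
				return -1;
			}
		} while (z.avail_out == 0 && ret != Z_STREAM_END);
	}
	inflateEnd(&z);
	return ret == Z_STREAM_END ? 0 : -1;
}

int main(int argc, char **argv)
{
	FILE *in = argc == 2 ? fopen(argv[1], "rb") : NULL;

	if (!in)
		return 1;
	return stream_checkout(in, stdout) ? 1 : 0;
}

The important property in both directions is that memory use stays 
bounded by the chunk size, no matter how big the file is.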

That's the only sane way to kinda support big files with Git.

> I suppose that makes sense: for small files, using a patch application
> tool to reach a postimage that matches an existing object is something
> git historically needed to expect, but for typical big files:
> 
>  - once you've computed the SHA1, you've already invested a noticeable
>    amount of time.
>  - emailing patches around is difficult, making "git am" etc less important
>  - hopefully git or zlib can notice when files are uncompressible,
>    making the deflate not cost so much in that case.

Emailing is out of the question.  We're talking file sizes in the 
hundreds of megabytes and above here.  So yes, simply computing the 
SHA1 is a significant cost; but since you are going to trash your page 
cache in the process anyway, you might as well pay the price of 
deflating the data at the same time, even if it turns out to be 
unnecessary.


Nicolas

