Re: [PATCH] fast-import: Stream very large blobs directly to pack

Nicolas Pitre <nico@xxxxxxxxxxx> wrote:
> On Thu, 28 Jan 2010, Shawn O. Pearce wrote:
> 
> > If a blob is larger than the configured big-file-threshold, instead
> > of reading it into a single buffer obtained from malloc, stream it
> > onto the end of the current pack file.  Streaming the larger objects
> > into the pack avoids the 4+ GiB memory footprint that occurs when
> > fast-import is processing 2+ GiB blobs.
> 
> Yeah.  I've had that item on my todo list for ages now.  This 
> big-file-threshold principle has to be applied to 'git add' too so a big 
> blob is stored in pack file form right away, and used to bypass delta 
> searching in 'git pack-objects', used to skip the diff machinery, and so 
> on.

Yeah, there are a lot of places we should improve for bigger files.
gfi (git fast-import) just happened to be the first one I got a bug
report about from a user...
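
For anyone following along, the shape of the streaming path is roughly
this.  It is only a simplified sketch, not the code from the patch:
stream_blob_to_pack(), CHUNK, and the plain stdio/zlib calls are made
up for illustration, and the real version also writes the pack object
header and runs SHA-1 over the bytes as they stream by.

/*
 * Sketch only: push a large blob into an already-open pack file in
 * fixed-size chunks instead of malloc()ing the whole thing.  The
 * in-core window stays CHUNK bytes no matter how big the blob is.
 */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define CHUNK (64 * 1024)	/* fixed in-core window */

static int stream_blob_to_pack(FILE *in, FILE *pack, size_t len)
{
	unsigned char ibuf[CHUNK], obuf[CHUNK];
	z_stream zs;
	int flush, ret;

	memset(&zs, 0, sizeof(zs));
	if (deflateInit(&zs, Z_DEFAULT_COMPRESSION) != Z_OK)
		return -1;

	do {
		/* read at most one window of the blob */
		size_t want = len < CHUNK ? len : CHUNK;
		size_t got = fread(ibuf, 1, want, in);
		if (got != want) {
			deflateEnd(&zs);
			return -1;	/* short read */
		}
		len -= got;
		flush = len ? Z_NO_FLUSH : Z_FINISH;

		zs.next_in = ibuf;
		zs.avail_in = got;
		do {
			/* deflate the window and append it to the pack */
			zs.next_out = obuf;
			zs.avail_out = CHUNK;
			ret = deflate(&zs, flush);
			if (ret == Z_STREAM_ERROR) {
				deflateEnd(&zs);
				return -1;
			}
			size_t out = CHUNK - zs.avail_out;
			if (fwrite(obuf, 1, out, pack) != out) {
				deflateEnd(&zs);
				return -1;
			}
		} while (zs.avail_out == 0);
	} while (flush != Z_FINISH);

	deflateEnd(&zs);
	return 0;
}

The threshold itself is just a size cutoff: anything over it takes a
path like the above, anything under it keeps the existing in-core
path.  The point is that the memory footprint is bounded by the chunk
size rather than the blob size, so a 2+ GiB blob no longer costs
4+ GiB of address space.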

-- 
Shawn.
