Re: Why Git is so fast (was: Re: Eric Sink's blog - notes on git, dscms and a "whole product" approach)

On Fri, 1 May 2009, david@xxxxxxx wrote:

> The key thing for his problem is support for large binary objects. There
> was discussion here a few weeks ago about ways to handle such things
> without trying to pull them into packs. I suspect that solving those
> sorts of issues would go a long way towards closing the gap on this
> workload.
> 
> There may be issues in doing a clone for repositories that large; I don't
> remember exactly what happens when you have something larger than 4G to
> send in a clone.

If you have files larger than 4G then you definitely need a 64-bit 
machine with plenty of RAM for git to be able to cope at all at the 
moment, since git currently wants to hold whole objects in memory while 
processing them.

It should be easy to add a config option to determine how big a "big" 
file is, and to store those big files directly in a pack of their own 
instead of as loose objects (for easy pack reuse during a further 
repack), never attempting to deltify them, etc. etc.  At that point git 
will handle big files just fine even on a 32-bit machine, although it 
won't do more than copy them in and out, possibly deflating/inflating 
them while at it, but nothing fancier.


Nicolas
