Re: [PATCH WIP 0/4] Special code path for large blobs


On Thu, 28 May 2009, Nguyễn Thái Ngọc Duy wrote:

> The thread "Problem with large files on different OSes" reminded me of
> this. This series has been in my repository for quite some time. It
> addresses adding/checking out large blobs, as long as:
> 
>  - no conversion will be done
>  - blobs are loose (in checkout case)
> 
> Together with a patch that prevents large blobs from being packed
> (something like what Dana How sent long ago), and a modification of the
> "lazy clone/remote alternatives" patch to avoid repacking large blobs
> for sending over the network, I think it should make git usable with
> large files.
> 
> Just something to play.

I think this is a good start.

However, like I said previously, I'd encapsulate large blobs in a pack 
right away instead of storing them as loose objects.  The reason is that 
you can then effortlessly repack/fetch/push them afterwards by simply 
triggering the pack data reuse code path for them.  Extracting a large, 
undeltified blob from a pack is just as easy as extracting it from a 
loose object.

To accomplish that, you only need to copy write_pack_file() from 
builtin-pack-objects.c and strip it to the bone with only one object to 
write.
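For illustration, here is a rough sketch of the pack v2 on-disk layout such a stripped-down, single-object writer would have to produce. This is written in Python rather than C for brevity, and the function name is hypothetical; git's real implementation lives in write_pack_file() and friends in builtin-pack-objects.c:

```python
import hashlib
import struct
import zlib

OBJ_BLOB = 3  # object type code for a blob in the pack format

def write_single_blob_pack(data: bytes) -> bytes:
    """Sketch of a pack containing exactly one undeltified blob."""
    # Pack header: magic "PACK", version 2, object count 1.
    out = b"PACK" + struct.pack(">II", 2, 1)

    # Per-object header: type and uncompressed size, in git's
    # variable-length encoding (4 size bits in the first byte,
    # 7 bits per continuation byte, MSB = "more bytes follow").
    size = len(data)
    byte = (OBJ_BLOB << 4) | (size & 0x0F)
    size >>= 4
    hdr = bytearray()
    while size:
        hdr.append(byte | 0x80)
        byte = size & 0x7F
        size >>= 7
    hdr.append(byte)

    # Object payload is simply zlib-deflated (no delta).
    out += bytes(hdr) + zlib.compress(data)

    # Trailer: SHA-1 checksum over everything written so far.
    return out + hashlib.sha1(out).digest()

pack = write_single_blob_pack(b"hello, large blob\n")
```

Since such a pack contains a single whole (non-delta) object, the existing pack data reuse path can later copy its compressed bytes verbatim into a bigger pack or a network stream without inflating them.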


Nicolas
