Re: Fwd: Git and Large Binaries: A Proposed Solution

Makes a lot of sense --

As you said, the "sparse clone" idea and this one (not downloading
all binaries) probably have a similar or related solution. In fact,
I'd imagine that most of the reasons for wanting a sparse clone come
down to large binaries, since text files compress so nicely in git.

And actually, if the sparse-clone idea is limited to only binaries
being "sparse" (i.e. not copied), that probably simplifies the
sparse-clone logic quite a bit, since you don't need to split up the
bits of patches to generate the resulting changesets you need? (This
is based on my very loose understanding of how git works.)
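
To make the "binaries only" idea concrete, here's a rough sketch of
the per-blob decision I'm picturing -- the helper names and the size
cutoff are invented, the point is just that the choice stays per-blob
and never has to touch deltas or patch-splitting:

    # illustrative sketch only -- threshold and helpers are made up
    LARGE_BINARY_CUTOFF = 10 * 1024 * 1024  # 10 MB, arbitrary

    def looks_binary(data):
        # roughly git's own heuristic: a NUL byte in the first 8000 bytes
        return b"\0" in data[:8000]

    def should_skip_blob(data):
        # skip only large binary blobs; trees, commits and text blobs
        # get cloned as usual, so history stays complete
        return len(data) > LARGE_BINARY_CUTOFF and looks_binary(data)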


So, if we simplify our requirements a bit (at least as a first cut),
perhaps we're down to these tasks (similar to before, but modified):

1. Clean up git's handling of binaries to improve efficiency. In
doing so, see if it makes sense to somewhat separate the way binaries
are stored (particularly because this would help with (2)).

2.  Allow full clones to be "sparsely cloned" (that is, cloned with
the exception of the large binary files).
   2.1 As a corollary, no clones of any kind can be made from a sparse
clone (sparse clones are "leaf" nodes in the tree of descendants) --
that cuts the complexity quite a bit, since the "remote" you cloned
from will always have the files if you need 'em.
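
Just to pin down what "leaf" means in 2.1, the rule on the serving
side would be something like this (purely a sketch with invented
names, not a real git interface):

    def serve_clone(source_repo):
        # a sparse clone refuses to act as a source for further clones,
        # so any blob it is missing can always be fetched from the
        # complete repository it was originally cloned from
        if source_repo.is_sparse:
            raise RuntimeError("refusing to clone from a sparse clone; "
                               "clone from its upstream instead")
        # ... a normal clone would proceed from here ...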


Doing so limits some of the possible applications of "binary sparse"
clones, but might yield a cleaner final solution -- thoughts?


Eric




On Mon, Mar 14, 2011 at 3:32 PM, Jeff King <peff@xxxxxxxx> wrote:
> On Sun, Mar 13, 2011 at 08:33:18PM +0100, Alexander Miseler wrote:
>
>> We want to store them as flat as possible. Ideally if we have a temp
>> file with the content (e.g. the output of some filter) it should be
>> possible to store it by simply doing a move/rename and updating some
>> meta data external to the actual file.
>
> Yeah, that would be a nice optimization.  But I'd rather do the easy
> stuff first and see if more advanced stuff is still worth doing.
>
> For example, I spent some time a while back designing a faster textconv
> interface (the current interface spools the blob to a tempfile, whereas
> in some cases a filter needs to only access the first couple kilobytes
> of the file to get metadata). But what I found was that an even better
> scheme was to cache textconv output in git-notes. Then it speeds up the
> slow case _and_ the already-fast case.
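>
> To make that concrete, the cached setup is just an ordinary textconv
> driver plus one extra config switch ("jpg" is an arbitrary driver
> name here):
>
>   # .gitattributes
>   *.jpg diff=jpg
>
>   # use exiftool for the text form, and cache its output in a notes
>   # ref so it only has to be generated once per blob
>   git config diff.jpg.textconv exiftool
>   git config diff.jpg.cachetextconv true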
>
> Now after this, would my new textconv interface still speed up the
> initial non-cached textconv? Absolutely. But I didn't really care
> anymore, because the small speed up on the first run was not worth the
> trouble of maintaining two interfaces (at least for my datasets).
>
> And this may fall into the same category. Accessing big blobs is
> expensive. One solution is to make it a bit faster. Another solution is
> to just do it less. So we may find that once we are doing it less, it is
> not worth the complexity to make it faster.
>
> And note that I am not saying "it definitely won't be worth it"; only
> that it is worth making the easy, big optimizations first and then
> seeing what's left to do.
>
>> 1.) The loose file format is inherently unsuited for this. It has a
>> header before the actual content and the whole file (header + content)
>> is always compressed. Even if one changed this to
>> compressing/decompressing header and content independently, it would
>> still be unsuited because a) the header sits within the same file
>> and b) the header has no flags or other means to indicate different
>> behavior (e.g. no compression) for the content. We could extend the
>> header format or introduce a new object type (e.g. flatblob) but both
>> would probably cause more trouble than other solutions. Another idea
>> would be to keep the metadata in an external file (e.g. 84d7.header
>> for the object 84d7). This would probably perform badly, though,
>> since every object lookup would first need to check for the
>> existence of a header file. A smarter variant would be to optionally
>> keep the metadata directly in the filename (e.g. saving the object
>> as 84d7.object_type.size.flag instead of just 84d7). This would only
>> require special handling for cases where the normal lookup for 84d7
>> fails.
>
> A new object type is definitely a bad idea. It changes the sha1 of the
> resulting object, which means that two otherwise-identical trees which
> differ only in the use of "flatblob" versus a regular blob will have
> different sha1s.
>
> So I think the right place to insert this would be at the object db
> layer. The header just has the type and size. But I don't think anybody
> is having a problem with large objects that are _not_ blobs. So the
> simplest implementation would be a special blob-only object db
> containing pristine files. We implicitly know that objects in this db
> are blobs, and we can get the size from the filesystem via stat().
> Checking their sha1 would involve prepending "blob <size>\0" to the file
> data. It does introduce an extra stat() into object lookup, so probably
> we would have the lookup order of pack, regular loose object, flat blob
> object. Then you pay the extra stat() only in the less-common case of
> accessing either a large blob or a non-existent object.
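>
> To spell out the verification step: a flat blob's sha1 is just the
> usual blob hash taken over the raw file contents. A rough sketch in
> Python (the flat-blob storage area itself is hypothetical):
>
>   import hashlib
>   import os
>
>   def flat_blob_sha1(path):
>       # git hashes a blob as "blob <size>\0" followed by the content
>       size = os.path.getsize(path)
>       h = hashlib.sha1()
>       h.update(("blob %d\0" % size).encode())
>       with open(path, "rb") as f:
>           for chunk in iter(lambda: f.read(8192), b""):
>               h.update(chunk)
>       return h.hexdigest()
>
>   # lookup order: pack, then normal loose object, then flat blob, so
>   # the extra stat() is paid only for large blobs and misses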
>
> That being said, I'm not sure how much this optimization will buy us.
> There are times when being able to mmap() the file directly, or point an
> external program directly at the original blob will be helpful. But we
> will still have to copy, for example on checkout. It would be nice if
> there was a way to make a copy-on-write link from the working tree to
> the original file. But I don't think there is a portable way to do so,
> and we can't allow the user to accidentally munge the contents of the
> object db, which are supposed to be immutable.
>
> -Peff
>

