Re: Git Large Object Support Proposal

Scott Chacon <schacon@xxxxxxxxx> writes:

> But where Git instead stores a stub object and the large binary object
> is pulled in via a separate mechanism. I was thinking that the client
> could set a max file size and when a binary object larger than that is
> staged, Git instead writes a stub blob like:
>
> ==
> blob [size]\0
> [sha of large blob]
> ==
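
For concreteness, here is a minimal sketch of what a reader of such
stubs might look like, assuming the stub's content is exactly the
40-character hex SHA-1 of the real blob; the helper name and layout
are my assumption, not something the proposal specifies:

==
#include <ctype.h>

/*
 * Hypothetical check: does this blob's content look like a stub,
 * i.e. exactly the 40-hex SHA-1 naming the real large blob kept
 * on the media server?  Note that a genuine 40-byte blob of hex
 * digits would be indistinguishable from a stub under this layout.
 */
static int looks_like_stub(const unsigned char *buf, unsigned long size)
{
	unsigned long i;

	if (size != 40)
		return 0;
	for (i = 0; i < 40; i++)
		if (!isxdigit(buf[i]))
			return 0;
	return 1;
}
==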

An immediate pair of questions comes to mind: if you can solve the
issue by delegating large media to somebody else (i.e. a "media
server"), and that somebody else can solve the issues you are having,
(1) what happens if you lower that "large" threshold to 0 bytes?  Does
that somebody else still work fine, and does the git that uses the
indirection also still work fine?  If so, why are you using git instead
of that somebody else altogether?  And (2) what prevents us from
stealing the trick that somebody else uses, so that git itself can
natively handle large blobs without indirection?

Without having thought the ramifications through myself, this sounds
pretty much like a band-aid: once you follow the indirection you will
eventually end up hitting the same "blob is larger than we can handle"
issue, but that is just my gut feeling.

This is an off-topic "by the way", but has the other topic addressed to
you regarding git-scm.com/about been resolved in any way yet?
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
