On 7/11/2017 3:48 PM, Jonathan Tan wrote:
> Currently, Git does not support repos with very large numbers of blobs or repos that wish to minimize manipulation of certain blobs (for example, because they are very large) very well, even if the user operates mostly on part of the repo, because Git is designed on the assumption that every blob referenced by a tree object is available somewhere in the repo storage.
>
> As a first step to reducing this problem, introduce the concept of promised blobs. Each Git repo can contain a list of promised blobs and their sizes at $GIT_DIR/objects/promisedblob. This patch contains functions to query them; functions for creating and modifying that file will be introduced in later patches.
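If I'm reading the patch correctly, the promisedblob file is a sorted array of fixed-width <sha1, size> entries that gets looked up by binary search, i.e. something like the sketch below. (This is my assumption for the size math later in this mail, not necessarily the actual layout or code in the patch.)

    #include <stddef.h>
    #include <string.h>

    #define PB_ENTRY_SZ 28	/* assumed: 20-byte SHA-1 + 8-byte size, no file header */

    /*
     * Binary search an mmap'ed promised-blob file (nr entries, sorted
     * by SHA-1) for one object name; returns the entry or NULL.
     */
    static const unsigned char *find_promised(const unsigned char *map,
    					      size_t nr,
    					      const unsigned char *sha1)
    {
    	size_t lo = 0, hi = nr;

    	while (lo < hi) {
    		size_t mid = lo + (hi - lo) / 2;
    		const unsigned char *entry = map + mid * PB_ENTRY_SZ;
    		int cmp = memcmp(sha1, entry, 20);

    		if (!cmp)
    			return entry;
    		if (cmp < 0)
    			hi = mid;
    		else
    			lo = mid + 1;
    	}
    	return NULL;
    }

With that as context: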
As part of my ongoing effort on partial/narrow clone/fetch, I've also looked at how to represent the set of omitted objects and whether or not we should even try. My primary concern is scale and managing the list of objects over time.

My fear is that this list will be quite large. If we only want to omit the very large blobs, then maybe not. But if we want to expand that scope to also omit other objects (such as a clone synchronized with a sparse checkout), then that list will get large on large repos. For example, on the Windows repo we have (conservatively) 100M+ blobs (and growing). Assuming 28 bytes per entry, that gives a 2.8GB list to be manipulated.

If I understand your proposal, newly-omitted blobs would need to be merged into the promised-blob list after each fetch. The fetch itself may not have that many new entries, but inserting them into the existing list will be slow. Also, mmap'ing and bsearch'ing a file that large will likely have issues. And there's likely to be a very expensive step to remove entries from the list as new blobs are received (or locally created).

In such a "sparse clone", it would be nice to omit unneeded tree objects in addition to just blobs. I say that because we are finding with GVFS on the Windows repo that, even with commits-and-trees-only filtering, the number of tree objects is overwhelming. So I'm also concerned about limiting the list to just blobs. If we need to have this list, it should be able to contain any object. (I'm suggesting having an object type in each entry; see the sketch in the P.S. below.)

I assume that we'll also need a promised-blob.lock file to control access during list manipulation. This is already a sore spot with the index; I'd hate to create another one.

I also have to wonder about the need to have a complete list of omitted blobs up front. It may be better to just relax the consistency checks and assume a missing blob is "intentionally missing" rather than indicating a corruption somewhere. And then let the client do a later round-trip to either demand-load the object -or- demand-load the existence/size info if/when it really matters. Maybe we should add a verb to your new fetch-blob endpoint to just get the size of one or more objects to help with this.

Thanks,
Jeff
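P.S. To make the "any object type" idea concrete, here is roughly the entry I have in mind. This is only a sketch; the struct name and field widths are mine, not anything in the patch:

    /*
     * Hypothetical generalization of a promised-blob entry to a
     * promised-object entry, so omitted trees (and commits) can be
     * represented alongside blobs.
     */
    struct promised_object_entry {
    	unsigned char sha1[20];	/* object name */
    	unsigned char type;	/* OBJ_COMMIT, OBJ_TREE, OBJ_BLOB, OBJ_TAG */
    	unsigned char size[8];	/* object size, 64-bit network order */
    };				/* 29 bytes per entry on disk */

Keeping the record fixed-width would preserve a simple mmap-and-bsearch lookup, at the cost of one more byte per entry.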