Re: [RFC PATCH 1/3] promised-blob, fsck: introduce promised blobs

Hi,

Jeff Hostetler wrote:

> My primary concern is scale and managing the list of objects over time.
[...]
>                                                                  For
> example, on the Windows repo we have (conservatively) 100M+ blobs (and
> growing).  Assuming 28 bytes per, gives a 2.8GB list to be manipulated.
>
> If I understand your proposal, newly-omitted blobs would need to be
> merged into the promised-blob list after each fetch.  The fetch itself
> may not have that many new entries, but inserting them into the existing
> list will be slow.

This is a good point.  An alternative would be to avoid storing the
list and instead use a repository extension that treats all missing
blobs in the repository similarly to promised blobs (with a weaker
promise, where the server is allowed to return 404).  The downsides:

- blob sizes are not available without an additional request, e.g. for
  directory listings

- the semantics of has_object_file become more complicated.  If it
  is allowed to return false for omitted blobs, then callers have to
  be audited to tolerate that and try to look up the blob anyway (see
  the sketch after this list).  If it has to contact the server to
  find out whether an omitted blob is available, then callers have to
  be audited to skip this expensive operation when possible.

- similarly, the semantics of sha1_object_info{,_extended} become more
  complicated.  If they are allowed to return -1 for omitted blobs,
  then callers have to be audited to handle that. If they have to
  contact the server to find the object type and size, it becomes
  expensive in a way that affects callers.

- it causes futile repeated requests to the server for objects that don't
  exist.  Caching negative lookups is fussy because a later push could
  cause those objects to exist --- though it should be possible for
  fetches to invalidate entries in such a cache using the list of
  promised blobs sent by the server.
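
To make the second point concrete, here is roughly the kind of audit
every has_object_file caller would need with such an extension.  This
is only a sketch: repo_promises_objects() and fetch_omitted_object()
are made-up names standing in for whatever helpers would actually
exist.

        if (!has_object_file(&oid)) {
                /*
                 * With the extension enabled, a locally missing
                 * object no longer implies corruption; we may still
                 * be able to get it from the promising remote.
                 */
                if (!repo_promises_objects() ||
                    fetch_omitted_object(&oid) < 0)
                        die("unable to read %s", oid_to_hex(&oid));
        }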

[...]
> In such a "sparse clone", it would be nice to omit unneeded tree objects
> in addition to just blobs.   I say that because we are finding with GVFS
> on the Windows repo, that even with commits-and-trees-only filtering,
> the number of tree objects is overwhelming.  So I'm also concerned about
> limiting the list to just blobs.  If we need to have this list, it
> should be able to contain any object.  (Suggesting having an object type
> in the entry.)

Would it work to have separate lists of blobs and trees (either in
separate files or in the same file)?

One option would be to add a version number / magic string to the start
of the file.  That would allow making format changes later without a
proliferation of distinct repository extensions.
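
To illustrate, one possible layout would be a small fixed-size header
followed by fixed-size entries; the struct names, magic bytes, and
field widths below are placeholders, not a concrete proposal:

        #include <stdint.h>

        struct promised_object_header {
                char magic[4];        /* e.g. "PROM" */
                uint32_t version;     /* bumped on incompatible changes */
                uint32_t nr_entries;
        };

        struct promised_object_entry {
                unsigned char oid[20];  /* object name */
                unsigned char type;     /* OBJ_BLOB, OBJ_TREE, ... */
                unsigned char pad[3];
                uint64_t size;          /* size hint, if the server sent one */
        };

That keeps each entry fixed-size (32 bytes here) while making room
for trees and any other object type, per your suggestion.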

[...]
> I assume that we'll also need a promised-blob.lock file to control
> access during list manipulation.  This is already a sore spot with the
> index; I'd hate to create another one.

Can you say more about this concern?  My experience with concurrent
fetches has not been great to begin with (since one fetch process is
not aware of what the other has fetched) --- is your concern that the
promised-blob facility would some day affect pushes as well?

> I also have to wonder about the need to have a complete list of omitted
> blobs up front.  It may be better to just relax the consistency checks
> and assume a missing blob is "intentionally missing" rather than
> indicating a corruption somewhere.

We've discussed this previously on list and gone back and forth. :)

>                                     And then let the client do a later
> round-trip to either demand-load the object -or- demand-load the
> existence/size info if/when it really matters.

The cost of demand-loading this existence/size information is what
ultimately convinced me of this approach.

But I can see how the tradeoffs differ between the omit-large-blobs
case and the omit-all-blobs case.  We might end up having to support
both modes. :(

> Maybe we should add a verb to your new fetch-blob endpoint to just get
> the size of one or more objects to help with this.

No objections from me, though we don't need it yet.

Thanks,
Jonathan


