Re: [PATCH v2 0/4] Introduce a "promisor-remote" capability

Christian Couder <christian.couder@xxxxxxxxx> writes:

>> But there are still a couple of pieces missing in the bigger puzzle:
>>
>>   - How would a client know to omit certain objects? Right now it only
>>     knows that there are promisor remotes, but it doesn't know that it
>>     e.g. should omit every blob larger than X megabytes. The answer
>>     could of course be that the client should just know to do a partial
>>     clone by themselves.
>
> If we add a "filter" field to the "promisor-remote" capability in a
> future patch series, then the server could pass information like a
> filter-spec that the client could use to omit some large blobs.

Yes, but at that point, doesn't the current scheme become
insufficient?  We mark a promisor pack with a single bit, recording
only the fact that the pack came from a promisor remote (but which
one, and with what filter settings was the remote used?).  Chipping
away one by one is fine, but we'd at least need to be aware that
this is one of the things we need to upgrade in the scope of the
bigger picture.

It may even be OK to upgrade the on-the-wire protocol side before
the code on both ends learns to take advantage of the feature
(e.g., to add the "promisor-remote" capability itself, or to add a
capability that can also convey the filter specification associated
with that remote), but without even the design (let alone the
implementation) of what runs on both ends of the connection to make
use of what is communicated via the capability, it is rather hard
to get the details of the protocol design right.
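(Purely as a strawman to show the flavor of information involved,
not a concrete syntax proposal, such a capability might one day
advertise something like:

```
promisor-remote=name=big-blobs,url=https://example.com/big.git,filter=blob:limit=10m
```

and it is exactly fields like these that we cannot pin down until
the two ends know what they would do with them.)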

As the on-the-wire protocol is harder to upgrade due to
compatibility constraints, if we were to chip away one-by-one, it
smells like a better ordering to leave it as the _last_ piece to be
designed and implemented.  That may, for example, go like this:

 (0) We want to ensure that projects can specify what kind of
     objects are to be offloaded to other transports.

 (1) We design the client end first.  We may want to be able to
     choose what remote to run a lazy fetch against, based on a
     filter spec, for example.  We realize and make a mental note
     that our new "capability" needs to tell the client enough
     information to make such a decision.

 (2) We design the server end to supply the above pieces of
     information to the client end.  During this process, we may
     realize that some pieces of information cannot be prepared on
     the server end and (1) may need to get adjusted.

 (3) There may be tons of other things that need to be designed and
     implemented before we know what pieces of information our new
     "capability" needs to convey, and what those pieces of
     information mean; we learn them by iterating (1) and (2).

 (4) Once we nail (3) down, we can add a new protocol capability,
     knowing how it should work, and knowing that the client and the
     server ends would work well once it is implemented.
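To make step (1) slightly more concrete, here is a toy sketch (the
data shapes, field names, and selection policy are entirely made up
for illustration, not Git's actual code) of the kind of decision
the client end would need the capability to enable:

```python
# Hypothetical sketch of step (1): pick which advertised promisor
# remote a lazy fetch should go to, based on the filter spec each
# remote advertises.  Everything here is illustrative.

def parse_blob_limit(filter_spec):
    """Return the byte limit of a 'blob:limit=<n>' filter, or None."""
    prefix = "blob:limit="
    if not filter_spec.startswith(prefix):
        return None
    value = filter_spec[len(prefix):]
    units = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    if value and value[-1].lower() in units:
        return int(value[:-1]) * units[value[-1].lower()]
    return int(value)

def choose_promisor_remote(advertised, missing_blob_size):
    """Pick the first remote whose filter would have omitted a blob
    of this size -- i.e. the remote that should still have it."""
    for remote in advertised:
        limit = parse_blob_limit(remote.get("filter", ""))
        if limit is not None and missing_blob_size >= limit:
            return remote["name"]
    return None

advertised = [
    {"name": "lfs-like", "filter": "blob:limit=10m"},
    {"name": "archive", "filter": "blob:limit=1g"},
]
print(choose_promisor_remote(advertised, 50 * 1024 ** 2))  # -> lfs-like
```

The point is only that the client cannot make even this trivial
choice unless the capability conveys something like the "filter"
field, which is the mental note (1) asks us to take.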

>> At GitLab, we're thinking
>>     about the ability to use rolling hash functions to chunk such big
>>     objects into smaller parts to also allow for somewhat efficient
>>     deduplication. We're also thinking about how to make the overall ODB
>>     pluggable such that we can eventually make it more scalable in this
>>     context. But that's of course thinking into the future quite a bit.

Reminds me of rsync and bup ;-).
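The rolling-hash chunking quoted above is indeed the same family of
trick those tools use.  As a toy illustration (the window size,
mixing constant, and boundary mask are all made up; no real tool
uses exactly these):

```python
# Toy content-defined chunking in the rsync/bup spirit: a rolling
# hash over a sliding window picks chunk boundaries from the
# *content*, so an edit near the front of a blob tends to leave the
# later chunks -- and hence their deduplicable hashes -- unchanged.

def chunk(data, window=48, mask=(1 << 11) - 1):
    """Split `data` (bytes) into content-defined chunks."""
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h += byte                      # byte enters the sliding window
        if i >= window:
            h -= data[i - window]      # oldest byte leaves the window
        mixed = (h * 2654435761) & 0xFFFFFFFF   # cheap multiplicative mix
        if i - start + 1 >= window and (mixed & mask) == 0:
            chunks.append(data[start:i + 1])    # boundary found here
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])    # final partial chunk
    return chunks

# Prepending bytes shifts every offset, yet chunks can still match,
# because boundaries depend on local content, not absolute position.
base = bytes((i * 31 + 7) % 251 for i in range(20000))
a_chunks = chunk(base)
b_chunks = chunk(b"PATCH" + base)
print(len(set(a_chunks) & set(b_chunks)), "shared chunks")
```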

Thanks.



