> > +Protocol > > +-------- > > + > > +The server advertises `packfile-uris`. > > Is this a new "protocol capability"? There are multiple things that > are "advertised" over the wire (there is "reference advertisement") > and readers would want to know if this is yet another kind of > advertisement or a new variety of the "capability". Yes, it's a new protocol capability. I'll update the text. > > +If the client then communicates which protocols (HTTPS, etc.) it supports with > > +a `packfile-uris` argument, the server MAY send a `packfile-uris` section > > +directly before the `packfile` section (right after `wanted-refs` if it is > > +sent) containing URIs of any of the given protocols. The URIs point to > > +packfiles that use only features that the client has declared that it supports > > +(e.g. ofs-delta and thin-pack). See protocol-v2.txt for the documentation of > > +this section. > > + > > +Clients then should understand that the returned packfile could be incomplete, > > I am guessing that this merely mean that the result of downloading > and installing the packfile does not necessarily make the resulting > repository up-to-date with respect to the "want" the "fetch" command > requested. But the above can easily be misread that the returned > packfile is somewhat broken, corrupt or missing objects that it > ought to have (e.g. a deltified object lacks its base object in the > file). [#1] Most of this is resolved below, but I'll try to write upfront what's going on so it will be clear from the beginning (and not just at the end). But you bring up a good point here - can one of the linked-by-URI packs have a delta against the inline packfile, or vice versa? I'll take a look and clarify that. > > +and that it needs to download all the given URIs before the fetch or clone is > > +complete. 
> > So if I say "I want history leading to commit X", and choose to use > the packfile-uris that told me to fetch two packfiles P and Q, does > it mean that once I fetch P and Q, index-pack them, and store > the resulting two packfiles and their idx files in my repository, > I have the history leading to commit X? > > I would have guessed that the resulting repository after fetching > these URIs can still be incomplete and the "packfile" section of the > response needs to be processed before the fetch or clone is complete, > but the above does not say so. I'll clarify the statement. > > +Server design > > +------------- > > + > > +The server can be trivially made compatible with the proposed protocol by > > +having it advertise `packfile-uris`, tolerating the client sending > > +`packfile-uris`, and never sending any `packfile-uris` section. But we should > > +include some sort of non-trivial implementation in the Minimum Viable Product, > > +at least so that we can test the client. > > + > > +This is the implementation: a feature, marked experimental, that allows the > > +server to be configured by one or more `uploadpack.blobPackfileUri=<sha1> > > +<uri>` entries. Whenever the list of objects to be sent is assembled, a blob > > +with the given sha1 can be replaced by the given URI. This allows, for example, > > +servers to delegate serving of large blobs to CDNs. > > Let me see if the above is understandable. > > Does "git cat-file blob <sha1>" come back when we try to "wget/curl" > the <uri>? No - fetching the <uri> returns a packfile containing that single object, not the uncompressed and headerless object itself. I'll update the text to clarify that. > > +The division of work (initial fetch + additional URIs) introduces convenient > > +points for resumption of an interrupted clone - such resumption can be done > > +after the Minimum Viable Product (see "Future work"). > > + > > +The client can inhibit this feature (i.e. 
refrain from sending the > > +`packfile-uris` parameter) by passing --no-packfile-uris to `git fetch`. > > By default, as long as the other side advertises packfile-uris, the > client automatically attempts to use it and users need to explicitly > disable it? > > It's quite different from the way we introduce new features and I am > wondering if it is a good approach. The client has to opt in first with the fetch.uriprotocols config (which says which protocols it is willing to use), but that isn't written in this spec yet. I'll add it. > > + * On the server, a long-running process that takes in entire requests and > > + outputs a list of URIs and the corresponding inclusion and exclusion sets of > > + objects. This allows, e.g., signed URIs to be used and packfiles for common > > + requests to be cached. > > Did we discuss "inclusion and exclusion sets" whatever they are? No - this is future/speculative work, so I didn't want to spend too much time explaining it. I was thinking of saying that a URI's packfile includes all objects reachable from commit A (the inclusion set) but none reachable from commits B and C (the exclusion set). Maybe I should just leave it out. > > diff --git a/Documentation/technical/protocol-v2.txt b/Documentation/technical/protocol-v2.txt > > index ef7514a3ee..7e1b3a0bfe 100644 > > --- a/Documentation/technical/protocol-v2.txt > > +++ b/Documentation/technical/protocol-v2.txt > > @@ -325,13 +325,26 @@ included in the client's request: > > indicating its sideband (1, 2, or 3), and the server may send "0005\2" > > (a PKT-LINE of sideband 2 with no payload) as a keepalive packet. > > > > +If the 'packfile-uris' feature is advertised, the following argument > > +can be included in the client's request as well as the potential > > +addition of the 'packfile-uris' section in the server's response as > > +explained below. 
> > + > > + packfile-uris <comma-separated list of protocols> > > + Indicates to the server that the client is willing to receive > > + URIs of any of the given protocols in place of objects in the > > + sent packfile. Before performing the connectivity check, the > > + client should download from all given URIs. Currently, the > > + protocols supported are "http" and "https". > > Ah, I think the puzzlement I repeatedly expressed above is starting > to dissolve. You are assuming that the receiving end would remember > the URIs but the in-protocol packfile data at the end is handled > first, and then before the final connectivity check is done, > packfiles are downloaded from the URIs. If you spelled out that > assumption early in the document, then I wouldn't have had to ask > all those questions. I was assuming a different order (i.e. CDN > packfiles first to establish the baseline, and then in-protocol > packfile to fill the "latest bits", the last mile to reach the tips > of requested refs). > > In practice, I suspect that these fetches would go in parallel with > the processing of the in-protocol packfile, but spelling it out as > if these are done sequentially would help establish the right > mental model. OK. > "(1) Process in-protocol packfiles first, and then (2) fetch CDN > URIs, and after all is done, (3) update the tips of refs" would > serve as a base to establish a good mental model. It would be even > better to throw in another step before all that: (0) record the > wanted-refs and CDN URIs to a safe place. If you get disconnected > before finishing (1), you have to redo from scratch, but once > you finished (0) and (1), then (2) and (3) can be done at your > leisure using the information you saved in step (0), and (1) can be > retried if your connection is lousy. OK. 
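To make sure we have the same mental model, here is a rough sketch of that (0)-(3) sequence in Python-flavored pseudocode. Every name here is invented purely for illustration - none of this mirrors how fetch-pack is actually structured:

```python
# Hypothetical sketch of the client-side ordering discussed above.
# index_pack and http_get stand in for "git index-pack" and an HTTP
# download; they are parameters so the sketch stays self-contained.

def fetch_with_packfile_uris(response, index_pack, http_get):
    """Return a log of the steps, in the order they must happen."""
    log = []

    # (0) Record the wanted-refs and packfile URIs up front, so that a
    #     future resumable implementation could restart from this state.
    state = {"wanted_refs": list(response["wanted_refs"]),
             "uris": list(response["packfile_uris"])}
    log.append(("saved-state", tuple(state["uris"])))

    # (1) Process the in-protocol packfile first.
    index_pack(response["packfile"])
    log.append(("indexed", "<in-protocol>"))

    # (2) Then download and index each URI-advertised packfile
    #     (sequential here; a real client could parallelize).
    for uri in state["uris"]:
        index_pack(http_get(uri))
        log.append(("indexed", uri))

    # (3) Only after all packfiles are in place does the client run the
    #     connectivity check and update the ref tips.
    log.append(("refs-updated", tuple(state["wanted_refs"])))
    return log
```

I'll spell out something like this ordering near the top of the document.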
This set of patches does not do (0) yet, and I think doing so would require more design - in particular, if we have fetch resumption, we would need to somehow track that none of the previously fetched objects have been deleted. Thanks for all your comments.
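P.S. For anyone wanting to try the MVP feature described above, the configuration would look roughly like this. The <sha1> and the URI are placeholders, and fetch.uriprotocols is the client-side opt-in I mentioned earlier:

```shell
# Server side: whenever this blob would be sent, point the client at
# the URI instead (placeholder values, not a real object or CDN).
git config --add uploadpack.blobPackfileUri \
    "<sha1> https://cdn.example.com/big-blob.pack"

# Client side: opt in by listing the protocols you are willing to use.
git config fetch.uriprotocols https
```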