(Oops, hit send too early by mistake, so some of my thoughts were incomplete.)

On Mon, Sep 1, 2008 at 9:05 AM, Tarmigan <tarmigan+git@xxxxxxxxx> wrote:
> On Fri, Aug 29, 2008 at 10:39 AM, Shawn O. Pearce <spearce@xxxxxxxxxxx> wrote:
>> Yet another draft follows.  I believe that I have covered all
>> comments with this draft.  But I welcome any additional ones,
>> as thus far it has been a very constructive process.
>
> Sorry I'm jumping into this a bit late, but something just occurred to me.
>
>> The updated protocol looks more like the current native protocol
>> does.  This should make it easier to reuse code between the two
>> protocol implementations.
>>
>> --8<--
>> Smart HTTP transfer protocols
>
> [...]
>
>> HTTP Redirects
>> --------------
>>
>> If a POST request results in an HTTP 302 or 303 redirect response,
>> clients should retry the request by updating the URL and POSTing
>> the same request to the new location.  Subsequent requests should
>> still be sent to the original URL.
>>
>> This redirect behavior is unrelated to the in-payload redirect
>> that is described below in "Service show-ref".
>
> I just want to see whether smart http could support a new feature (please
> yell if git:// already supports this and I am not aware of it).  The idea
> is from http://lkml.org/lkml/2008/8/21/347, the relevant portion
> being:
>
> Greg KH wrote:
>> David Vrabel wrote:
>>> Or you can pull the changes from the uwb branch of
>>>
>>> git://pear.davidvrabel.org.uk/git/uwb.git
>>>
>>> (Please don't clone the entire tree from here as I have very limited
>>> bandwidth.)
>>
>> If this is an issue, I think you can use the --reference option to
>> git-clone when creating the tree to reference an external tree (like
>> Linus's).  That way you don't have the whole tree on your server for
>> stuff like this.
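For what it's worth, the --reference workflow Greg describes can be sketched with throwaway local repositories; all paths below are made up, standing in for Linus's tree and the uwb tree:

```shell
# Illustrative sketch of "git clone --reference", using throwaway local
# repos in place of Linus's tree and David's uwb tree (paths are made up).
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare upstream.git
git clone -q upstream.git seed
git -C seed -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m base
git -C seed push -q origin HEAD:master
# Cloning with --reference borrows objects from the local upstream clone
# instead of copying them over the wire:
git clone -q --reference upstream.git upstream.git fork
# The borrowed object store is recorded here rather than duplicated:
cat fork/.git/objects/info/alternates
```

The key point for the bandwidth question is that the fork's object directory stays nearly empty; objects already present in the referenced repository are never transferred again.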
>
> I do not believe that the server (either git:// or http://) can
> currently be set up with --reference to redirect to another server for
> certain refs, but perhaps with smart http and the POST 302/303
> redirect responses, this would now be possible as a way to reduce
> bandwidth for people's home servers?  I have also seen similar
> requests before ("don't pull the whole kernel from me, just add my
> repo as a remote after you've cloned linus-2.6"), so for larger
> projects, it might be a nice feature.  Would that be something
> desirable to support?
>
> Would the current proposal be able to support this kind of partial
> redirect?  I don't quite see how it would, but it seems very close.
> Perhaps the show-ref redirect could appear partway through the
> show-ref response, and then the client could go off, fetch some
> refs from that server, and then return to the original server for the
> remainder?  Or maybe in the upload-pack negotiations, there could be a
> special redirect command as part of the "status continue" response
> that told the client to run off and look for a specific sha at another
> url?  Something like
>
> status continue
>
> S: 0014status continue
> S: 0034common <S_COMMON #1>...........................
> S: 0034common <S_COMMON #2>...........................
> ...

I meant to write:

S: 0014status continue
S: 0034common <S_COMMON #1>...........................
S: 0034common <S_COMMON #2>...........................
S: 00xxredirect <WILL_BE_COMMON> <REMOTE_URL>

and then the client could go try the remote url, fetch that SHA and its
ancestors, and then resume the upload-pack negotiations with
<WILL_BE_COMMON> among the <COMMON> commits.  Obviously it's still
somewhat of a half-baked idea, and it would probably need some kind of
fallback, but does that seem like a reasonable thing to do and a
reasonable way to do it?

>
> Otherwise, it looks very cool, but I have a few more minor questions
> to help my general understanding...
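(For anyone checking my framing above: the 4-hex-digit prefixes are just pkt-line lengths, counting the four digits themselves plus the payload and its trailing newline. A tiny sketch, with `pkt_line` being my own hypothetical helper rather than git code; the `00xx` on the redirect line would likewise just be the computed length of the concrete redirect payload.)

```python
# Sketch of the pkt-line framing used in the draft's examples: the 4-hex-digit
# prefix counts itself (4 bytes) plus the payload and a trailing newline.
# pkt_line() is a hypothetical helper for illustration, not part of git.
def pkt_line(payload: str) -> str:
    data = payload + "\n"
    return f"{len(data) + 4:04x}{data}"

print(pkt_line("status continue"), end="")  # 0014status continue
print(pkt_line("give-up"), end="")          # 000cgive-up
```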
>
>> If the client has sent 256 HAVE commits and has not yet
>> received one of those back from S_COMMON, or the client
>> has emptied C_PENDING, it should include a "give-up"
>> command to let the server know it won't proceed:
>>
>> C: 000cgive-up
>
> What does the server do after a 000cgive-up?  Does the server send
> back a complete pack (like a new clone), and if not, how does clone work
> over smart http?  Does that mean that if I fall more than 256 commits
> behind, I have to redownload the whole repo?  Or am I missing
> something about the C_PENDING commits being sparse and doing some
> kind of smart back-off (I'm not at all familiar with the existing
> receive-pack/upload-pack)?
>
>> (s) Parse the upload-pack request:
>>
>> Verify all objects in WANT are reachable from refs.  As
>> this may require walking backwards through history to
>> the very beginning on invalid requests, the server may
>> use a reasonable limit of commits (e.g. 1000) walked
>> beyond any ref tip before giving up.
>>
>> If no WANT objects are received, send an error:
>>
>> S: 0019status error no want
>>
>> If any WANT object is not reachable, send an error:
>>
>> S: 001estatus error invalid want
>
> So again, if the client falls more than 1000 commits behind (not hard
> to do, for example, during the linux merge window), and then the client
> WANTs HEAD^1001, what happens?  Does the client get nothing from the
> server, or does the client essentially reclone, or am I missing something?
>
>> (s) Send the upload-pack response:
>>
>> If the server has found a closed set of objects to pack or the
>> request contains "give-up", it replies with the pack and the
>> enabled capabilities.  The set of enabled capabilities is limited
>> to the intersection of what the client requested and what the
>> server supports.
>>
>> S: 0010status pack
>> C: 001bcapability include-tag
>> C: 0019capability thin-pack
>> S: 000c.PACK...
>
> Should these all be S: ...?
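(To frame my give-up question concretely, here is how I currently read the client side of the draft's negotiation. This is a toy sketch with a made-up `negotiate` function; in the real protocol the HAVE lines and the common acks are interleaved rather than batched like this.)

```python
# Toy sketch of the client-side negotiation loop described in the draft:
# send HAVE lines from C_PENDING, watch for one to come back in S_COMMON,
# and emit "give-up" after 256 unacked HAVEs or when C_PENDING runs dry.
# negotiate() is illustrative only; the real protocol interleaves these steps.
def negotiate(c_pending, s_common, limit=256):
    sent = 0
    for have in c_pending:
        if have in s_common:
            return ("common", have)   # server acked a shared commit
        sent += 1
        if sent >= limit:
            return ("give-up", None)  # C: 000cgive-up
    return ("give-up", None)          # C_PENDING emptied without a match
```

My question above is what the server does once this loop ends in the "give-up" branch.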
Thanks,
Tarmigan
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html