"H. Peter Anvin" <hpa@xxxxxxxxx> wrote:
> Shawn O. Pearce wrote:
>>
>> Currently git-http-backend requests no caching for info/refs [...]
>
> Let's put it this way: we're not seeing a huge amount of load from git
> protocol requests, and I'm going to assume "git+http" protocol to be
> used only by sites behind braindamaged firewalls (everyone else would
> use git protocol), so I'm not really all that worried about it.

Agreed.

There's another application I want git+http for, but that may never
materialize.  Or maybe it will someday.  I just have to adopt a
wait-and-see approach there.

> I'm not sure if "emulating a dumb server" is desirable at all; it seems
> like it would at least in part defeat the purpose of minimizing the
> transaction count and otherwise be as much of a "smart" server as the
> medium permits.

I think it is a really good idea.  Then clients don't have to worry
about which HTTP URL is the "correct" one for them to be using.

End users will just magically get the smart git+http variant if both
sides support it and they need to use HTTP due to firewalls.  Clients
will fall back on the dumb protocol if the server doesn't support
smart clones.  Older clients (pre git+http) will still be able to
talk to a smart server, just more slowly.

This is nice for the end user.  No thinking is required.  Never ask a
human to do what a machine can do in less time.

I think it's just one extra HTTP hit per fetch/push done against a
dumb server.  On a smart server that first hit will also give us what
we need to begin the conversation (the info/refs data).  On a dumb
server it's a wasted hit, but a dumb server is already going to suck.
One extra HTTP request against a dumb server is a drop in the bucket.
It's also a pretty small request (an empty POST).

-- 
Shawn.
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
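[Editor's note: the fallback described above is roughly how smart HTTP later shipped — the client's first info/refs request doubles as a probe, and the server's reply tells it which protocol to speak. A smart server answers the service-discovery request with a distinctive Content-Type; a dumb server just serves the static info/refs file. A minimal sketch of that client-side decision; the function name and structure are illustrative, not Git's actual code:]

```python
# Illustrative sketch (not Git's code) of the smart/dumb fallback:
# the client requests $URL/info/refs?service=git-upload-pack and
# inspects the Content-Type of the reply to pick a protocol.

SMART_TYPE = "application/x-git-upload-pack-advertisement"

def choose_protocol(content_type: str) -> str:
    """Pick a protocol based on the server's reply to the probe."""
    if content_type == SMART_TYPE:
        return "smart"   # server understands git+http; start the conversation
    return "dumb"        # static file server; fall back to the dumb walker
```

Older clients that never send the service parameter simply receive the static info/refs file from either kind of server, which is why a smart server can still serve them, just more slowly.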