On Mon, 25 Aug 2008, Shawn O. Pearce wrote:

> "H. Peter Anvin" <hpa@xxxxxxxxx> wrote:
> > So don't implement things as GET requests unless you genuinely can deal
> > with the request being cached. Using POST requests throughout seems
> > like a safer bet to me; on the other hand, since the only use of GET is
> > obtaining a list of refs, the worst thing that can happen, I presume, is
> > additional latency for the user behind the proxy.
>
> This is a good point. There is probably no reason to cache the
> refs content if we don't also support caching the pack files. So in
> this latest draft I have moved the ref listing to also be a POST.
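The cacheability concern above can be illustrated with a toy model (nothing here is real proxy or git code, just a sketch of the behavior being discussed): a caching proxy that stores GET responses by URL will happily serve a stale ref listing, while a POST always reaches the origin server.

```python
# Toy illustration of why the HTTP method matters: caching proxies may
# store GET responses, but are not supposed to cache POST responses.
class ToyProxyCache:
    def __init__(self, origin):
        self.origin = origin   # callable: (method, url) -> response body
        self.store = {}        # url -> cached body (GET responses only)

    def request(self, method, url):
        if method == "GET":
            if url not in self.store:
                self.store[url] = self.origin(method, url)
            return self.store[url]       # possibly stale!
        return self.origin(method, url)  # POST always hits the origin

calls = []
def origin(method, url):
    # Fake origin server whose ref listing changes on every request.
    calls.append((method, url))
    return "refs-snapshot-%d" % len(calls)

proxy = ToyProxyCache(origin)
proxy.request("GET", "/info/refs")
stale = proxy.request("GET", "/info/refs")   # served from cache: stale refs
fresh = proxy.request("POST", "/info/refs")  # always reaches the origin
```

Here the second GET never reaches the origin, which is exactly the failure mode hpa warns about; moving the ref listing to POST trades that risk for the extra origin traffic mentioned above.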
On the other hand, it would be a good thing if pack files could be cached.
In a peer-to-peer git environment the cache would not be used very much, but
when you have a large number of people tracking a central repository (or
even a pseudo-central one like the kernel) you have a lot of people
upgrading from one point to the next.

And for cloning (and especially things like linux-next, where you
essentially re-clone daily) letting the pack get cached is probably a very
good thing.
I know it would be another round trip, but how painful would it be to
compute what the contents of a pack would be (which objects would be in it,
without calculating the deltas necessary for a full pack file) and return
that to the client, so that the client could do a GET for the pack itself?
If that exact pack happens to be in the cache, great; if not, the server
takes the data from the client and creates a pack file with those objects
in it.
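One way the two-round-trip idea could work is to derive the pack's URL from the object list itself, so that two clients asking for the same objects end up fetching the same proxy-cacheable URL. A minimal sketch, with hypothetical names (pack_name_for, the /packs/ path) that are not part of any real git protocol:

```python
# Sketch of the two-round-trip scheme: round trip 1 (POST) returns the
# object list; round trip 2 (GET) fetches a pack whose name is derived
# from that list. Identical object sets yield identical, cacheable URLs.
import hashlib

def pack_name_for(object_ids):
    """Derive a stable pack identifier from the objects it would contain.

    Sorting first makes the name independent of enumeration order, so
    any client requesting the same set of objects hits the same cache
    entry on an intermediate proxy.
    """
    h = hashlib.sha1()
    for oid in sorted(object_ids):
        h.update(oid.encode("ascii"))
    return "pack-%s.pack" % h.hexdigest()

# Round trip 1 (POST): the server walks the commit graph and answers
# with the object list; faked here with made-up abbreviated ids.
objects = ["0f1e2d", "3c4b5a", "6789ab"]

# Round trip 2 (GET): the client fetches a URL derived from that list,
# which a proxy can cache like any other static file.
url = "/packs/" + pack_name_for(objects)
```

If the server misses, it can generate the pack on demand and let the proxy cache it on the way out, which matches the fallback described above.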
David Lang
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html